Repository: liuff19/LangScene-X
Branch: main
Commit: f91eaedf97d8
Files: 1984
Total size: 20.0 MB

Directory structure:
gitextract_u3gvlex3/
├── .gitignore
├── LICENSE
├── README.md
├── auto-seg/
│   ├── auto-mask-align.py
│   ├── sam2/
│   │   ├── __init__.py
│   │   ├── automatic_mask_generator.py
│   │   ├── build_sam.py
│   │   ├── csrc/
│   │   │   └── connected_components.cu
│   │   ├── modeling/
│   │   │   ├── __init__.py
│   │   │   ├── backbones/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── hieradet.py
│   │   │   │   ├── image_encoder.py
│   │   │   │   └── utils.py
│   │   │   ├── memory_attention.py
│   │   │   ├── memory_encoder.py
│   │   │   ├── position_encoding.py
│   │   │   ├── sam/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── mask_decoder.py
│   │   │   │   ├── prompt_encoder.py
│   │   │   │   └── transformer.py
│   │   │   ├── sam2_base.py
│   │   │   └── sam2_utils.py
│   │   ├── sam2_image_predictor.py
│   │   ├── sam2_video_predictor.py
│   │   └── utils/
│   │       ├── __init__.py
│   │       ├── amg.py
│   │       ├── misc.py
│   │       └── transforms.py
│   ├── sam2_configs/
│   │   ├── __init__.py
│   │   ├── sam2_hiera_b+.yaml
│   │   ├── sam2_hiera_l.yaml
│   │   ├── sam2_hiera_s.yaml
│   │   └── sam2_hiera_t.yaml
│   ├── sam2_hiera_l.yaml
│   └── submodules/
│       ├── segment-anything-1/
│       │   ├── .gitignore
│       │   ├── README.md
│       │   ├── scripts/
│       │   │   ├── amg.py
│       │   │   └── export_onnx_model.py
│       │   ├── segment_anything/
│       │   │   ├── __init__.py
│       │   │   ├── automatic_mask_generator.py
│       │   │   ├── build_sam.py
│       │   │   ├── modeling/
│       │   │   │   ├── __init__.py
│       │   │   │   ├── common.py
│       │   │   │   ├── image_encoder.py
│       │   │   │   ├── mask_decoder.py
│       │   │   │   ├── prompt_encoder.py
│       │   │   │   ├── sam.py
│       │   │   │   └── transformer.py
│       │   │   ├── predictor.py
│       │   │   └── utils/
│       │   │       ├── __init__.py
│       │   │       ├── amg.py
│       │   │       ├── onnx.py
│       │   │       └── transforms.py
│       │   ├── setup.cfg
│       │   └── setup.py
│       └── segment-anything-2/
│           ├── .clang-format
│           ├── .gitignore
│           ├── .watchmanconfig
│           ├── CODE_OF_CONDUCT.md
│           ├── CONTRIBUTING.md
│           ├── INSTALL.md
│           ├── LICENSE
│           ├── LICENSE_cctorch
│           ├── MANIFEST.in
│           ├── README.md
│           ├── backend.Dockerfile
│           ├── checkpoints/
│           │   └── download_ckpts.sh
│           ├── demo/
│           │   ├── .gitignore
│           │   ├── README.md
│           │   ├── backend/
│           │   │   └── server/
│           │   │       ├── app.py
│           │   │       ├── app_conf.py
│           │   │       └── inference/
│           │   │           ├── data_types.py
│           │   │           ├── multipart.py
│           │   │           └── predictor.py
│           │   └── frontend/
│           │       ├── .babelrc
│           │       ├── .dockerignore
│           │       ├── .eslintignore
│           │       ├── .eslintrc.cjs
│           │       ├── .gitignore
│           │       ├── .prettierignore
│           │       ├── .prettierrc.json
│           │       ├── .watchmanconfig
│           │       ├── frontend.Dockerfile
│           │       ├── index.html
│           │       ├── package.json
│           │       ├── postcss.config.js
│           │       ├── schema.graphql
│           │       ├── schemas/
│           │       │   ├── inference-api-schema.graphql
│           │       │   ├── merge-schemas.ts
│           │       │   └── video-api-schema.graphql
│           │       ├── src/
│           │       │   ├── App.tsx
│           │       │   ├── assets/
│           │       │   │   └── scss/
│           │       │   │       └── App.scss
│           │       │   ├── common/
│           │       │   │   ├── codecs/
│           │       │   │   │   ├── VideoDecoder.ts
│           │       │   │   │   ├── VideoEncoder.ts
│           │       │   │   │   └── WebCodecUtils.ts
│           │       │   │   ├── components/
│           │       │   │   │   ├── MobileFirstClickBanner.tsx
│           │       │   │   │   ├── Tooltip.tsx
│           │       │   │   │   ├── annotations/
│           │       │   │   │   │   ├── AddObjectButton.tsx
│           │       │   │   │   │   ├── ClearAllPointsInVideoButton.tsx
│           │       │   │   │   │   ├── CloseSessionButton.tsx
│           │       │   │   │   │   ├── FirstClickView.tsx
│           │       │   │   │   │   ├── LimitNotice.tsx
│           │       │   │   │   │   ├── MobileObjectsList.tsx
│           │       │   │   │   │   ├── MobileObjectsToolbar.tsx
│           │       │   │   │   │   ├── MobileObjectsToolbarHeader.tsx
│           │       │   │   │   │   ├── ObjectActions.tsx
│           │       │   │   │   │   ├── ObjectPlaceholder.tsx
│           │       │   │   │   │   ├── ObjectThumbnail.tsx
│           │       │   │   │   │   ├── ObjectUtils.ts
│           │       │   │   │   │   ├── ObjectsToolbar.tsx
│           │       │   │   │   │   ├── ObjectsToolbarBottomActions.tsx
│           │       │   │   │   │   ├── ObjectsToolbarHeader.tsx
│           │       │   │   │   │   ├── PointsToggle.tsx
│           │       │   │   │   │   ├── PrimaryCTAButton.tsx
│           │       │   │   │   │   ├── ToolbarObject.tsx
│           │       │   │   │   │   ├── ToolbarObjectContainer.tsx
│           │       │   │   │   │   ├── TrackletSwimlane.tsx
│           │       │   │   │   │   ├── TrackletsAnnotation.tsx
│           │       │   │   │   │   └── useTracklets.ts
│           │       │   │   │   ├── button/
│           │       │   │   │   │   ├── GradientBorder.tsx
│           │       │   │   │   │   ├── PlaybackButton.tsx
│           │       │   │   │   │   ├── PrimaryCTAButton.tsx
│           │       │   │   │   │   ├── ResponsiveButton.tsx
│           │       │   │   │   │   └── TrackAndPlayButton.tsx
│           │       │   │   │   ├── code/
│           │       │   │   │   │   └── InitializeLocalMonaco.ts
│           │       │   │   │   ├── effects/
│           │       │   │   │   │   ├── BackgroundEffects.tsx
│           │       │   │   │   │   ├── EffectVariantBadge.tsx
│           │       │   │   │   │   ├── EffectsCarousel.tsx
│           │       │   │   │   │   ├── EffectsCarouselShadow.tsx
│           │       │   │   │   │   ├── EffectsToolbar.tsx
│           │       │   │   │   │   ├── EffectsToolbarBottomActions.tsx
│           │       │   │   │   │   ├── EffectsToolbarHeader.tsx
│           │       │   │   │   │   ├── EffectsUtils.ts
│           │       │   │   │   │   ├── HighlightEffects.tsx
│           │       │   │   │   │   ├── MobileEffectsToolbar.tsx
│           │       │   │   │   │   └── MoreFunEffects.tsx
│           │       │   │   │   ├── gallery/
│           │       │   │   │   │   ├── ChangeVideoModal.tsx
│           │       │   │   │   │   ├── DefaultVideoGalleryModalTrigger.tsx
│           │       │   │   │   │   ├── DemoVideoGallery.tsx
│           │       │   │   │   │   ├── DemoVideoGalleryModal.tsx
│           │       │   │   │   │   ├── VideoGalleryUploadPhoto.tsx
│           │       │   │   │   │   ├── VideoPhoto.tsx
│           │       │   │   │   │   ├── __generated__/
│           │       │   │   │   │   │   ├── DemoVideoGalleryModalQuery.graphql.ts
│           │       │   │   │   │   │   ├── DemoVideoGalleryQuery.graphql.ts
│           │       │   │   │   │   │   └── useUploadVideoMutation.graphql.ts
│           │       │   │   │   │   └── useUploadVideo.ts
│           │       │   │   │   ├── icons/
│           │       │   │   │   │   └── GitHubIcon.tsx
│           │       │   │   │   ├── options/
│           │       │   │   │   │   ├── DownloadOption.tsx
│           │       │   │   │   │   ├── GalleryOption.tsx
│           │       │   │   │   │   ├── MoreOptionsToolbar.tsx
│           │       │   │   │   │   ├── MoreOptionsToolbarBottomActions.tsx
│           │       │   │   │   │   ├── OptionButton.tsx
│           │       │   │   │   │   ├── ShareSection.tsx
│           │       │   │   │   │   ├── ShareUtils.ts
│           │       │   │   │   │   ├── TryAnotherVideoSection.tsx
│           │       │   │   │   │   ├── UploadOption.tsx
│           │       │   │   │   │   ├── __generated__/
│           │       │   │   │   │   │   └── GetLinkOptionShareVideoMutation.graphql.ts
│           │       │   │   │   │   └── useDownloadVideo.ts
│           │       │   │   │   ├── session/
│           │       │   │   │   │   ├── RestartSessionButton.tsx
│           │       │   │   │   │   ├── __generated__/
│           │       │   │   │   │   │   └── useCloseSessionBeforeUnloadMutation.graphql.ts
│           │       │   │   │   │   ├── useCloseSessionBeforeUnload.ts
│           │       │   │   │   │   └── useRestartSession.ts
│           │       │   │   │   ├── snackbar/
│           │       │   │   │   │   ├── DemoMessagesSnackbarUtils.ts
│           │       │   │   │   │   ├── MessagesSnackbar.tsx
│           │       │   │   │   │   ├── snackbarAtoms.ts
│           │       │   │   │   │   ├── useDemoMessagesSnackbar.ts
│           │       │   │   │   │   ├── useExpireMessage.ts
│           │       │   │   │   │   └── useMessagesSnackbar.ts
│           │       │   │   │   ├── toolbar/
│           │       │   │   │   │   ├── DesktopToolbar.tsx
│           │       │   │   │   │   ├── MobileToolbar.tsx
│           │       │   │   │   │   ├── Toolbar.tsx
│           │       │   │   │   │   ├── ToolbarActionIcon.tsx
│           │       │   │   │   │   ├── ToolbarBottomActionsWrapper.tsx
│           │       │   │   │   │   ├── ToolbarConfig.tsx
│           │       │   │   │   │   ├── ToolbarHeaderWrapper.tsx
│           │       │   │   │   │   ├── ToolbarProgressChip.tsx
│           │       │   │   │   │   ├── ToolbarSection.tsx
│           │       │   │   │   │   ├── useListenToStreamingState.ts
│           │       │   │   │   │   └── useToolbarTabs.ts
│           │       │   │   │   ├── useFunctionThrottle.tsx
│           │       │   │   │   └── video/
│           │       │   │   │       ├── ChangeVideoModal.tsx
│           │       │   │   │       ├── EventEmitter.ts
│           │       │   │   │       ├── Video.tsx
│           │       │   │   │       ├── VideoFilmstripWithPlayback.tsx
│           │       │   │   │       ├── VideoLoadingOverlay.tsx
│           │       │   │   │       ├── VideoWorker.ts
│           │       │   │   │       ├── VideoWorkerBridge.ts
│           │       │   │   │       ├── VideoWorkerContext.ts
│           │       │   │   │       ├── VideoWorkerTypes.ts
│           │       │   │   │       ├── editor/
│           │       │   │   │       │   ├── DemoVideoEditor.tsx
│           │       │   │   │       │   ├── ImageUtils.ts
│           │       │   │   │       │   ├── VideoEditor.tsx
│           │       │   │   │       │   ├── VideoEditorUtils.ts
│           │       │   │   │       │   ├── atoms.ts
│           │       │   │   │       │   ├── useResetEditor.ts
│           │       │   │   │       │   ├── useVideo.ts
│           │       │   │   │       │   └── useVideoEffect.ts
│           │       │   │   │       ├── effects/
│           │       │   │   │       │   ├── ArrowGLEffect.ts
│           │       │   │   │       │   ├── BackgroundBlurEffect.ts
│           │       │   │   │       │   ├── BackgroundTextEffect.ts
│           │       │   │   │       │   ├── BaseGLEffect.ts
│           │       │   │   │       │   ├── BurstGLEffect.ts
│           │       │   │   │       │   ├── CutoutGLEffect.ts
│           │       │   │   │       │   ├── DesaturateEffect.ts
│           │       │   │   │       │   ├── Effect.ts
│           │       │   │   │       │   ├── EffectUtils.ts
│           │       │   │   │       │   ├── Effects.ts
│           │       │   │   │       │   ├── EraseBackgroundEffect.ts
│           │       │   │   │       │   ├── EraseForegroundEffect.ts
│           │       │   │   │       │   ├── EraseForegroundGLEffect.ts
│           │       │   │   │       │   ├── GradientEffect.ts
│           │       │   │   │       │   ├── NoisyMaskEffect.ts
│           │       │   │   │       │   ├── OriginalEffect.ts
│           │       │   │   │       │   ├── OverlayEffect.ts
│           │       │   │   │       │   ├── PixelateEffect.ts
│           │       │   │   │       │   ├── PixelateMaskGLEffect.ts
│           │       │   │   │       │   ├── ReplaceGLEffect.ts
│           │       │   │   │       │   ├── ScopeGLEffect.ts
│           │       │   │   │       │   ├── SobelEffect.ts
│           │       │   │   │       │   ├── VibrantMaskEffect.ts
│           │       │   │   │       │   └── shaders/
│           │       │   │   │       │       ├── Arrow.frag
│           │       │   │   │       │       ├── BackgroundBlur.frag
│           │       │   │   │       │       ├── Burst.frag
│           │       │   │   │       │       ├── Cutout.frag
│           │       │   │   │       │       ├── DefaultVert.vert
│           │       │   │   │       │       ├── EraseForeground.frag
│           │       │   │   │       │       ├── Gradient.frag
│           │       │   │   │       │       ├── NoisyMask.frag
│           │       │   │   │       │       ├── Overlay.frag
│           │       │   │   │       │       ├── Overlay.vert
│           │       │   │   │       │       ├── Pixelate.frag
│           │       │   │   │       │       ├── PixelateMask.frag
│           │       │   │   │       │       ├── Replace.frag
│           │       │   │   │       │       ├── Scope.frag
│           │       │   │   │       │       ├── Sobel.frag
│           │       │   │   │       │       └── VibrantMask.frag
│           │       │   │   │       ├── filmstrip/
│           │       │   │   │       │   ├── FilmstripUtil.tsx
│           │       │   │   │       │   ├── SelectedFrameHelper.ts
│           │       │   │   │       │   ├── VideoFilmstrip.tsx
│           │       │   │   │       │   ├── atoms.ts
│           │       │   │   │       │   ├── useDisableScrolling.ts
│           │       │   │   │       │   └── useSelectedFrameHelper.ts
│           │       │   │   │       ├── layers/
│           │       │   │   │       │   ├── InteractionLayer.tsx
│           │       │   │   │       │   └── PointsLayer.tsx
│           │       │   │   │       ├── useInputVideo.ts
│           │       │   │   │       └── useVideoWorker.ts
│           │       │   │   ├── error/
│           │       │   │   │   ├── ErrorFallback.tsx
│           │       │   │   │   ├── ErrorReport.tsx
│           │       │   │   │   ├── ErrorSerializationUtils.ts
│           │       │   │   │   ├── ErrorUtils.ts
│           │       │   │   │   ├── errorReportAtom.ts
│           │       │   │   │   └── useReportError.tsx
│           │       │   │   ├── loading/
│           │       │   │   │   ├── LoadingMessage.tsx
│           │       │   │   │   ├── LoadingStateScreen.tsx
│           │       │   │   │   ├── StaticVideoPlayer.tsx
│           │       │   │   │   └── UploadLoadingScreen.tsx
│           │       │   │   ├── logger/
│           │       │   │   │   ├── DemoLogger.ts
│           │       │   │   │   ├── LogEnvironment.ts
│           │       │   │   │   └── Logger.ts
│           │       │   │   ├── screen/
│           │       │   │   │   └── useScreenSize.tsx
│           │       │   │   ├── tracker/
│           │       │   │   │   ├── SAM2Model.ts
│           │       │   │   │   ├── Tracker.ts
│           │       │   │   │   ├── TrackerTypes.ts
│           │       │   │   │   ├── Trackers.ts
│           │       │   │   │   └── __generated__/
│           │       │   │   │       ├── SAM2ModelAddNewPointsMutation.graphql.ts
│           │       │   │   │       ├── SAM2ModelCancelPropagateInVideoMutation.graphql.ts
│           │       │   │   │       ├── SAM2ModelClearPointsInFrameMutation.graphql.ts
│           │       │   │   │       ├── SAM2ModelClearPointsInVideoMutation.graphql.ts
│           │       │   │   │       ├── SAM2ModelCloseSessionMutation.graphql.ts
│           │       │   │   │       ├── SAM2ModelRemoveObjectMutation.graphql.ts
│           │       │   │   │       └── SAM2ModelStartSessionMutation.graphql.ts
│           │       │   │   └── utils/
│           │       │   │       ├── FileUtils.ts
│           │       │   │       ├── ImageUtils.ts
│           │       │   │       ├── MaskUtils.ts
│           │       │   │       ├── MultipartStream.ts
│           │       │   │       ├── ShaderUtils.ts
│           │       │   │       ├── emptyFunction.ts
│           │       │   │       └── uuid.ts
│           │       │   ├── debug/
│           │       │   │   └── stats/
│           │       │   │       ├── Stats.ts
│           │       │   │       └── StatsView.tsx
│           │       │   ├── demo/
│           │       │   │   ├── DemoConfig.tsx
│           │       │   │   ├── DemoErrorFallback.tsx
│           │       │   │   ├── DemoSuspenseFallback.tsx
│           │       │   │   ├── SAM2DemoApp.tsx
│           │       │   │   └── atoms.ts
│           │       │   ├── graphql/
│           │       │   │   ├── RelayEnvironment.ts
│           │       │   │   ├── RelayEnvironmentProvider.tsx
│           │       │   │   ├── errors/
│           │       │   │   │   ├── CreateFilmstripError.ts
│           │       │   │   │   ├── DrawFrameError.ts
│           │       │   │   │   └── WebGLContextError.ts
│           │       │   │   └── fetchGraphQL.ts
│           │       │   ├── jscocotools/
│           │       │   │   └── mask.ts
│           │       │   ├── layouts/
│           │       │   │   ├── DemoPageLayout.tsx
│           │       │   │   └── RootLayout.tsx
│           │       │   ├── routes/
│           │       │   │   ├── DemoPage.tsx
│           │       │   │   ├── DemoPageWrapper.tsx
│           │       │   │   ├── PageNotFoundPage.tsx
│           │       │   │   └── __generated__/
│           │       │   │       └── DemoPageQuery.graphql.ts
│           │       │   ├── settings/
│           │       │   │   ├── ApprovableInput.tsx
│           │       │   │   ├── SAM2Settings.tsx
│           │       │   │   ├── SettingsContextProvider.tsx
│           │       │   │   ├── SettingsModal.tsx
│           │       │   │   ├── SettingsReducer.ts
│           │       │   │   └── useSettingsContext.tsx
│           │       │   ├── theme/
│           │       │   │   ├── colors.ts
│           │       │   │   ├── gradientStyle.ts
│           │       │   │   └── tokens.stylex.ts
│           │       │   ├── types/
│           │       │   │   └── mp4box/
│           │       │   │       └── index.d.ts
│           │       │   └── vite-env.d.ts
│           │       ├── tailwind.config.js
│           │       ├── tsconfig.json
│           │       ├── tsconfig.node.json
│           │       └── vite.config.ts
│           ├── docker-compose.yaml
│           ├── pyproject.toml
│           ├── sam2/
│           │   ├── __init__.py
│           │   ├── automatic_mask_generator.py
│           │   ├── build_sam.py
│           │   ├── configs/
│           │   │   ├── sam2/
│           │   │   │   ├── sam2_hiera_b+.yaml
│           │   │   │   ├── sam2_hiera_l.yaml
│           │   │   │   ├── sam2_hiera_s.yaml
│           │   │   │   └── sam2_hiera_t.yaml
│           │   │   ├── sam2.1/
│           │   │   │   ├── sam2.1_hiera_b+.yaml
│           │   │   │   ├── sam2.1_hiera_l.yaml
│           │   │   │   ├── sam2.1_hiera_s.yaml
│           │   │   │   └── sam2.1_hiera_t.yaml
│           │   │   └── sam2.1_training/
│           │   │       └── sam2.1_hiera_b+_MOSE_finetune.yaml
│           │   ├── csrc/
│           │   │   └── connected_components.cu
│           │   ├── modeling/
│           │   │   ├── __init__.py
│           │   │   ├── backbones/
│           │   │   │   ├── __init__.py
│           │   │   │   ├── hieradet.py
│           │   │   │   ├── image_encoder.py
│           │   │   │   └── utils.py
│           │   │   ├── memory_attention.py
│           │   │   ├── memory_encoder.py
│           │   │   ├── position_encoding.py
│           │   │   ├── sam/
│           │   │   │   ├── __init__.py
│           │   │   │   ├── mask_decoder.py
│           │   │   │   ├── prompt_encoder.py
│           │   │   │   └── transformer.py
│           │   │   ├── sam2_base.py
│           │   │   └── sam2_utils.py
│           │   ├── sam2_image_predictor.py
│           │   ├── sam2_video_predictor.py
│           │   ├── sam2_video_predictor_legacy.py
│           │   └── utils/
│           │       ├── __init__.py
│           │       ├── amg.py
│           │       ├── misc.py
│           │       └── transforms.py
│           ├── sav_dataset/
│           │   ├── LICENSE
│           │   ├── LICENSE_DAVIS
│           │   ├── LICENSE_VOS_BENCHMARK
│           │   ├── README.md
│           │   ├── example/
│           │   │   ├── sav_000001_auto.json
│           │   │   └── sav_000001_manual.json
│           │   ├── requirements.txt
│           │   ├── sav_evaluator.py
│           │   ├── sav_visualization_example.ipynb
│           │   └── utils/
│           │       └── sav_utils.py
│           ├── setup.py
│           ├── tools/
│           │   ├── README.md
│           │   └── vos_inference.py
│           └── training/
│               ├── README.md
│               ├── __init__.py
│               ├── assets/
│               │   ├── MOSE_sample_train_list.txt
│               │   └── MOSE_sample_val_list.txt
│               ├── dataset/
│               │   ├── __init__.py
│               │   ├── sam2_datasets.py
│               │   ├── transforms.py
│               │   ├── utils.py
│               │   ├── vos_dataset.py
│               │   ├── vos_raw_dataset.py
│               │   ├── vos_sampler.py
│               │   └── vos_segment_loader.py
│               ├── loss_fns.py
│               ├── model/
│               │   └── __init__.py
│               ├── optimizer.py
│               ├── scripts/
│               │   └── sav_frame_extraction_submitit.py
│               ├── train.py
│               ├── trainer.py
│               └── utils/
│                   ├── __init__.py
│                   ├── checkpoint_utils.py
│                   ├── data_utils.py
│                   ├── distributed.py
│                   ├── logger.py
│                   └── train_utils.py
├── cogvideox_interpolation/
│   ├── datasets.py
│   ├── losses.py
│   ├── lpips.py
│   ├── pipeline.py
│   └── utils/
│       ├── colormaps.py
│       ├── colors.py
│       ├── config_utils.py
│       └── misc.py
├── configs/
│   ├── field_construction.yaml
│   ├── test_config.py
│   ├── unet_config_c16.py
│   └── unet_config_c32.py
├── entry_point.py
├── field_construction/
│   ├── auto_encoder.py
│   ├── extract_with_openseg.py
│   ├── gaussian_field.py
│   ├── gaussian_renderer/
│   │   ├── __init__.py
│   │   └── network_gui.py
│   ├── lpipsPyTorch/
│   │   ├── __init__.py
│   │   └── modules/
│   │       ├── lpips.py
│   │       ├── networks.py
│   │       └── utils.py
│   ├── pipeline.py
│   ├── pose_estimator/
│   │   ├── __init__.py
│   │   └── utils.py
│   ├── preprocessor.py
│   ├── scene/
│   │   ├── __init__.py
│   │   ├── app_model.py
│   │   ├── cameras.py
│   │   ├── colmap_loader.py
│   │   ├── dataset_readers.py
│   │   ├── gaussian_model.py
│   │   └── per_point_adam.py
│   ├── submodules/
│   │   ├── diff-langsurf-rasterizer/
│   │   │   ├── CMakeLists.txt
│   │   │   ├── LICENSE.md
│   │   │   ├── README.md
│   │   │   ├── build/
│   │   │   │   └── temp.linux-x86_64-cpython-310/
│   │   │   │       ├── .ninja_deps
│   │   │   │       ├── .ninja_log
│   │   │   │       ├── build.ninja
│   │   │   │       ├── cuda_rasterizer/
│   │   │   │       │   ├── backward.o
│   │   │   │       │   ├── forward.o
│   │   │   │       │   └── rasterizer_impl.o
│   │   │   │       ├── ext.o
│   │   │   │       └── rasterize_points.o
│   │   │   ├── cuda_rasterizer/
│   │   │   │   ├── auxiliary.h
│   │   │   │   ├── backward.cu
│   │   │   │   ├── backward.h
│   │   │   │   ├── config.h
│   │   │   │   ├── forward.cu
│   │   │   │   ├── forward.h
│   │   │   │   ├── rasterizer.h
│   │   │   │   ├── rasterizer_impl.cu
│   │   │   │   └── rasterizer_impl.h
│   │   │   ├── diff_LangSurf_rasterization/
│   │   │   │   └── __init__.py
│   │   │   ├── diff_LangSurf_rasterization.egg-info/
│   │   │   │   ├── PKG-INFO
│   │   │   │   ├── SOURCES.txt
│   │   │   │   ├── dependency_links.txt
│   │   │   │   └── top_level.txt
│   │   │   ├── ext.cpp
│   │   │   ├── rasterize_points.cu
│   │   │   ├── rasterize_points.h
│   │   │   ├── setup.py
│   │   │   └── third_party/
│   │   │       ├── glm/
│   │   │       │   ├── .appveyor.yml
│   │   │       │   ├── .gitignore
│   │   │       │   ├── .travis.yml
│   │   │       │   ├── CMakeLists.txt
│   │   │       │   ├── cmake/
│   │   │       │   │   └── cmake_uninstall.cmake.in
│   │   │       │   ├── copying.txt
│   │   │       │   ├── doc/
│   │   │       │   │   ├── api/
│   │   │       │   │   │   └── (generated Doxygen HTML reference: pages a00001–a00374 as *.html and *_source.html, dir_*.html indexes, doxygen.css, dynsections.js, files.html, index.html, jquery.js, modules.html, and a search/ directory of all_*, files_*, functions_* index pages; listing truncated here in the extraction)
│ │ │ │ │ ├── functions_2.js │ │ │ │ │ │ │ ├── functions_3.html │ │ │ │ │ │ │ ├── functions_3.js │ │ │ │ │ │ │ ├── functions_4.html │ │ │ │ │ │ │ ├── functions_4.js │ │ │ │ │ │ │ ├── functions_5.html │ │ │ │ │ │ │ ├── functions_5.js │ │ │ │ │ │ │ ├── functions_6.html │ │ │ │ │ │ │ ├── functions_6.js │ │ │ │ │ │ │ ├── functions_7.html │ │ │ │ │ │ │ ├── functions_7.js │ │ │ │ │ │ │ ├── functions_8.html │ │ │ │ │ │ │ ├── functions_8.js │ │ │ │ │ │ │ ├── functions_9.html │ │ │ │ │ │ │ ├── functions_9.js │ │ │ │ │ │ │ ├── functions_a.html │ │ │ │ │ │ │ ├── functions_a.js │ │ │ │ │ │ │ ├── functions_b.html │ │ │ │ │ │ │ ├── functions_b.js │ │ │ │ │ │ │ ├── functions_c.html │ │ │ │ │ │ │ ├── functions_c.js │ │ │ │ │ │ │ ├── functions_d.html │ │ │ │ │ │ │ ├── functions_d.js │ │ │ │ │ │ │ ├── functions_e.html │ │ │ │ │ │ │ ├── functions_e.js │ │ │ │ │ │ │ ├── functions_f.html │ │ │ │ │ │ │ ├── functions_f.js │ │ │ │ │ │ │ ├── groups_0.html │ │ │ │ │ │ │ ├── groups_0.js │ │ │ │ │ │ │ ├── groups_1.html │ │ │ │ │ │ │ ├── groups_1.js │ │ │ │ │ │ │ ├── groups_2.html │ │ │ │ │ │ │ ├── groups_2.js │ │ │ │ │ │ │ ├── groups_3.html │ │ │ │ │ │ │ ├── groups_3.js │ │ │ │ │ │ │ ├── groups_4.html │ │ │ │ │ │ │ ├── groups_4.js │ │ │ │ │ │ │ ├── groups_5.html │ │ │ │ │ │ │ ├── groups_5.js │ │ │ │ │ │ │ ├── groups_6.html │ │ │ │ │ │ │ ├── groups_6.js │ │ │ │ │ │ │ ├── groups_7.html │ │ │ │ │ │ │ ├── groups_7.js │ │ │ │ │ │ │ ├── groups_8.html │ │ │ │ │ │ │ ├── groups_8.js │ │ │ │ │ │ │ ├── groups_9.html │ │ │ │ │ │ │ ├── groups_9.js │ │ │ │ │ │ │ ├── nomatches.html │ │ │ │ │ │ │ ├── pages_0.html │ │ │ │ │ │ │ ├── pages_0.js │ │ │ │ │ │ │ ├── search.css │ │ │ │ │ │ │ ├── search.js │ │ │ │ │ │ │ ├── searchdata.js │ │ │ │ │ │ │ ├── typedefs_0.html │ │ │ │ │ │ │ ├── typedefs_0.js │ │ │ │ │ │ │ ├── typedefs_1.html │ │ │ │ │ │ │ ├── typedefs_1.js │ │ │ │ │ │ │ ├── typedefs_2.html │ │ │ │ │ │ │ ├── typedefs_2.js │ │ │ │ │ │ │ ├── typedefs_3.html │ │ │ │ │ │ │ ├── typedefs_3.js │ │ │ │ │ │ │ ├── 
typedefs_4.html │ │ │ │ │ │ │ ├── typedefs_4.js │ │ │ │ │ │ │ ├── typedefs_5.html │ │ │ │ │ │ │ ├── typedefs_5.js │ │ │ │ │ │ │ ├── typedefs_6.html │ │ │ │ │ │ │ ├── typedefs_6.js │ │ │ │ │ │ │ ├── typedefs_7.html │ │ │ │ │ │ │ ├── typedefs_7.js │ │ │ │ │ │ │ ├── typedefs_8.html │ │ │ │ │ │ │ ├── typedefs_8.js │ │ │ │ │ │ │ ├── typedefs_9.html │ │ │ │ │ │ │ ├── typedefs_9.js │ │ │ │ │ │ │ ├── typedefs_a.html │ │ │ │ │ │ │ ├── typedefs_a.js │ │ │ │ │ │ │ ├── typedefs_b.html │ │ │ │ │ │ │ ├── typedefs_b.js │ │ │ │ │ │ │ ├── typedefs_c.html │ │ │ │ │ │ │ ├── typedefs_c.js │ │ │ │ │ │ │ ├── typedefs_d.html │ │ │ │ │ │ │ └── typedefs_d.js │ │ │ │ │ │ └── tabs.css │ │ │ │ │ ├── man.doxy │ │ │ │ │ └── theme/ │ │ │ │ │ └── doxygen.css │ │ │ │ ├── glm/ │ │ │ │ │ ├── CMakeLists.txt │ │ │ │ │ ├── common.hpp │ │ │ │ │ ├── detail/ │ │ │ │ │ │ ├── _features.hpp │ │ │ │ │ │ ├── _fixes.hpp │ │ │ │ │ │ ├── _noise.hpp │ │ │ │ │ │ ├── _swizzle.hpp │ │ │ │ │ │ ├── _swizzle_func.hpp │ │ │ │ │ │ ├── _vectorize.hpp │ │ │ │ │ │ ├── compute_common.hpp │ │ │ │ │ │ ├── compute_vector_relational.hpp │ │ │ │ │ │ ├── func_common.inl │ │ │ │ │ │ ├── func_common_simd.inl │ │ │ │ │ │ ├── func_exponential.inl │ │ │ │ │ │ ├── func_exponential_simd.inl │ │ │ │ │ │ ├── func_geometric.inl │ │ │ │ │ │ ├── func_geometric_simd.inl │ │ │ │ │ │ ├── func_integer.inl │ │ │ │ │ │ ├── func_integer_simd.inl │ │ │ │ │ │ ├── func_matrix.inl │ │ │ │ │ │ ├── func_matrix_simd.inl │ │ │ │ │ │ ├── func_packing.inl │ │ │ │ │ │ ├── func_packing_simd.inl │ │ │ │ │ │ ├── func_trigonometric.inl │ │ │ │ │ │ ├── func_trigonometric_simd.inl │ │ │ │ │ │ ├── func_vector_relational.inl │ │ │ │ │ │ ├── func_vector_relational_simd.inl │ │ │ │ │ │ ├── glm.cpp │ │ │ │ │ │ ├── qualifier.hpp │ │ │ │ │ │ ├── setup.hpp │ │ │ │ │ │ ├── type_float.hpp │ │ │ │ │ │ ├── type_half.hpp │ │ │ │ │ │ ├── type_half.inl │ │ │ │ │ │ ├── type_mat2x2.hpp │ │ │ │ │ │ ├── type_mat2x2.inl │ │ │ │ │ │ ├── type_mat2x3.hpp │ │ │ │ │ │ ├── type_mat2x3.inl │ │ 
│ │ │ │ ├── type_mat2x4.hpp │ │ │ │ │ │ ├── type_mat2x4.inl │ │ │ │ │ │ ├── type_mat3x2.hpp │ │ │ │ │ │ ├── type_mat3x2.inl │ │ │ │ │ │ ├── type_mat3x3.hpp │ │ │ │ │ │ ├── type_mat3x3.inl │ │ │ │ │ │ ├── type_mat3x4.hpp │ │ │ │ │ │ ├── type_mat3x4.inl │ │ │ │ │ │ ├── type_mat4x2.hpp │ │ │ │ │ │ ├── type_mat4x2.inl │ │ │ │ │ │ ├── type_mat4x3.hpp │ │ │ │ │ │ ├── type_mat4x3.inl │ │ │ │ │ │ ├── type_mat4x4.hpp │ │ │ │ │ │ ├── type_mat4x4.inl │ │ │ │ │ │ ├── type_mat4x4_simd.inl │ │ │ │ │ │ ├── type_quat.hpp │ │ │ │ │ │ ├── type_quat.inl │ │ │ │ │ │ ├── type_quat_simd.inl │ │ │ │ │ │ ├── type_vec1.hpp │ │ │ │ │ │ ├── type_vec1.inl │ │ │ │ │ │ ├── type_vec2.hpp │ │ │ │ │ │ ├── type_vec2.inl │ │ │ │ │ │ ├── type_vec3.hpp │ │ │ │ │ │ ├── type_vec3.inl │ │ │ │ │ │ ├── type_vec4.hpp │ │ │ │ │ │ ├── type_vec4.inl │ │ │ │ │ │ └── type_vec4_simd.inl │ │ │ │ │ ├── exponential.hpp │ │ │ │ │ ├── ext/ │ │ │ │ │ │ ├── _matrix_vectorize.hpp │ │ │ │ │ │ ├── matrix_clip_space.hpp │ │ │ │ │ │ ├── matrix_clip_space.inl │ │ │ │ │ │ ├── matrix_common.hpp │ │ │ │ │ │ ├── matrix_common.inl │ │ │ │ │ │ ├── matrix_double2x2.hpp │ │ │ │ │ │ ├── matrix_double2x2_precision.hpp │ │ │ │ │ │ ├── matrix_double2x3.hpp │ │ │ │ │ │ ├── matrix_double2x3_precision.hpp │ │ │ │ │ │ ├── matrix_double2x4.hpp │ │ │ │ │ │ ├── matrix_double2x4_precision.hpp │ │ │ │ │ │ ├── matrix_double3x2.hpp │ │ │ │ │ │ ├── matrix_double3x2_precision.hpp │ │ │ │ │ │ ├── matrix_double3x3.hpp │ │ │ │ │ │ ├── matrix_double3x3_precision.hpp │ │ │ │ │ │ ├── matrix_double3x4.hpp │ │ │ │ │ │ ├── matrix_double3x4_precision.hpp │ │ │ │ │ │ ├── matrix_double4x2.hpp │ │ │ │ │ │ ├── matrix_double4x2_precision.hpp │ │ │ │ │ │ ├── matrix_double4x3.hpp │ │ │ │ │ │ ├── matrix_double4x3_precision.hpp │ │ │ │ │ │ ├── matrix_double4x4.hpp │ │ │ │ │ │ ├── matrix_double4x4_precision.hpp │ │ │ │ │ │ ├── matrix_float2x2.hpp │ │ │ │ │ │ ├── matrix_float2x2_precision.hpp │ │ │ │ │ │ ├── matrix_float2x3.hpp │ │ │ │ │ │ ├── 
matrix_float2x3_precision.hpp │ │ │ │ │ │ ├── matrix_float2x4.hpp │ │ │ │ │ │ ├── matrix_float2x4_precision.hpp │ │ │ │ │ │ ├── matrix_float3x2.hpp │ │ │ │ │ │ ├── matrix_float3x2_precision.hpp │ │ │ │ │ │ ├── matrix_float3x3.hpp │ │ │ │ │ │ ├── matrix_float3x3_precision.hpp │ │ │ │ │ │ ├── matrix_float3x4.hpp │ │ │ │ │ │ ├── matrix_float3x4_precision.hpp │ │ │ │ │ │ ├── matrix_float4x2.hpp │ │ │ │ │ │ ├── matrix_float4x2_precision.hpp │ │ │ │ │ │ ├── matrix_float4x3.hpp │ │ │ │ │ │ ├── matrix_float4x3_precision.hpp │ │ │ │ │ │ ├── matrix_float4x4.hpp │ │ │ │ │ │ ├── matrix_float4x4_precision.hpp │ │ │ │ │ │ ├── matrix_int2x2.hpp │ │ │ │ │ │ ├── matrix_int2x2_sized.hpp │ │ │ │ │ │ ├── matrix_int2x3.hpp │ │ │ │ │ │ ├── matrix_int2x3_sized.hpp │ │ │ │ │ │ ├── matrix_int2x4.hpp │ │ │ │ │ │ ├── matrix_int2x4_sized.hpp │ │ │ │ │ │ ├── matrix_int3x2.hpp │ │ │ │ │ │ ├── matrix_int3x2_sized.hpp │ │ │ │ │ │ ├── matrix_int3x3.hpp │ │ │ │ │ │ ├── matrix_int3x3_sized.hpp │ │ │ │ │ │ ├── matrix_int3x4.hpp │ │ │ │ │ │ ├── matrix_int3x4_sized.hpp │ │ │ │ │ │ ├── matrix_int4x2.hpp │ │ │ │ │ │ ├── matrix_int4x2_sized.hpp │ │ │ │ │ │ ├── matrix_int4x3.hpp │ │ │ │ │ │ ├── matrix_int4x3_sized.hpp │ │ │ │ │ │ ├── matrix_int4x4.hpp │ │ │ │ │ │ ├── matrix_int4x4_sized.hpp │ │ │ │ │ │ ├── matrix_integer.hpp │ │ │ │ │ │ ├── matrix_integer.inl │ │ │ │ │ │ ├── matrix_projection.hpp │ │ │ │ │ │ ├── matrix_projection.inl │ │ │ │ │ │ ├── matrix_relational.hpp │ │ │ │ │ │ ├── matrix_relational.inl │ │ │ │ │ │ ├── matrix_transform.hpp │ │ │ │ │ │ ├── matrix_transform.inl │ │ │ │ │ │ ├── matrix_uint2x2.hpp │ │ │ │ │ │ ├── matrix_uint2x2_sized.hpp │ │ │ │ │ │ ├── matrix_uint2x3.hpp │ │ │ │ │ │ ├── matrix_uint2x3_sized.hpp │ │ │ │ │ │ ├── matrix_uint2x4.hpp │ │ │ │ │ │ ├── matrix_uint2x4_sized.hpp │ │ │ │ │ │ ├── matrix_uint3x2.hpp │ │ │ │ │ │ ├── matrix_uint3x2_sized.hpp │ │ │ │ │ │ ├── matrix_uint3x3.hpp │ │ │ │ │ │ ├── matrix_uint3x3_sized.hpp │ │ │ │ │ │ ├── matrix_uint3x4.hpp │ │ │ │ │ │ ├── 
matrix_uint3x4_sized.hpp │ │ │ │ │ │ ├── matrix_uint4x2.hpp │ │ │ │ │ │ ├── matrix_uint4x2_sized.hpp │ │ │ │ │ │ ├── matrix_uint4x3.hpp │ │ │ │ │ │ ├── matrix_uint4x3_sized.hpp │ │ │ │ │ │ ├── matrix_uint4x4.hpp │ │ │ │ │ │ ├── matrix_uint4x4_sized.hpp │ │ │ │ │ │ ├── quaternion_common.hpp │ │ │ │ │ │ ├── quaternion_common.inl │ │ │ │ │ │ ├── quaternion_common_simd.inl │ │ │ │ │ │ ├── quaternion_double.hpp │ │ │ │ │ │ ├── quaternion_double_precision.hpp │ │ │ │ │ │ ├── quaternion_exponential.hpp │ │ │ │ │ │ ├── quaternion_exponential.inl │ │ │ │ │ │ ├── quaternion_float.hpp │ │ │ │ │ │ ├── quaternion_float_precision.hpp │ │ │ │ │ │ ├── quaternion_geometric.hpp │ │ │ │ │ │ ├── quaternion_geometric.inl │ │ │ │ │ │ ├── quaternion_relational.hpp │ │ │ │ │ │ ├── quaternion_relational.inl │ │ │ │ │ │ ├── quaternion_transform.hpp │ │ │ │ │ │ ├── quaternion_transform.inl │ │ │ │ │ │ ├── quaternion_trigonometric.hpp │ │ │ │ │ │ ├── quaternion_trigonometric.inl │ │ │ │ │ │ ├── scalar_common.hpp │ │ │ │ │ │ ├── scalar_common.inl │ │ │ │ │ │ ├── scalar_constants.hpp │ │ │ │ │ │ ├── scalar_constants.inl │ │ │ │ │ │ ├── scalar_int_sized.hpp │ │ │ │ │ │ ├── scalar_integer.hpp │ │ │ │ │ │ ├── scalar_integer.inl │ │ │ │ │ │ ├── scalar_packing.hpp │ │ │ │ │ │ ├── scalar_packing.inl │ │ │ │ │ │ ├── scalar_reciprocal.hpp │ │ │ │ │ │ ├── scalar_reciprocal.inl │ │ │ │ │ │ ├── scalar_relational.hpp │ │ │ │ │ │ ├── scalar_relational.inl │ │ │ │ │ │ ├── scalar_uint_sized.hpp │ │ │ │ │ │ ├── scalar_ulp.hpp │ │ │ │ │ │ ├── scalar_ulp.inl │ │ │ │ │ │ ├── vector_bool1.hpp │ │ │ │ │ │ ├── vector_bool1_precision.hpp │ │ │ │ │ │ ├── vector_bool2.hpp │ │ │ │ │ │ ├── vector_bool2_precision.hpp │ │ │ │ │ │ ├── vector_bool3.hpp │ │ │ │ │ │ ├── vector_bool3_precision.hpp │ │ │ │ │ │ ├── vector_bool4.hpp │ │ │ │ │ │ ├── vector_bool4_precision.hpp │ │ │ │ │ │ ├── vector_common.hpp │ │ │ │ │ │ ├── vector_common.inl │ │ │ │ │ │ ├── vector_double1.hpp │ │ │ │ │ │ ├── vector_double1_precision.hpp │ │ │ │ │ 
│ ├── vector_double2.hpp │ │ │ │ │ │ ├── vector_double2_precision.hpp │ │ │ │ │ │ ├── vector_double3.hpp │ │ │ │ │ │ ├── vector_double3_precision.hpp │ │ │ │ │ │ ├── vector_double4.hpp │ │ │ │ │ │ ├── vector_double4_precision.hpp │ │ │ │ │ │ ├── vector_float1.hpp │ │ │ │ │ │ ├── vector_float1_precision.hpp │ │ │ │ │ │ ├── vector_float2.hpp │ │ │ │ │ │ ├── vector_float2_precision.hpp │ │ │ │ │ │ ├── vector_float3.hpp │ │ │ │ │ │ ├── vector_float3_precision.hpp │ │ │ │ │ │ ├── vector_float4.hpp │ │ │ │ │ │ ├── vector_float4_precision.hpp │ │ │ │ │ │ ├── vector_int1.hpp │ │ │ │ │ │ ├── vector_int1_sized.hpp │ │ │ │ │ │ ├── vector_int2.hpp │ │ │ │ │ │ ├── vector_int2_sized.hpp │ │ │ │ │ │ ├── vector_int3.hpp │ │ │ │ │ │ ├── vector_int3_sized.hpp │ │ │ │ │ │ ├── vector_int4.hpp │ │ │ │ │ │ ├── vector_int4_sized.hpp │ │ │ │ │ │ ├── vector_integer.hpp │ │ │ │ │ │ ├── vector_integer.inl │ │ │ │ │ │ ├── vector_packing.hpp │ │ │ │ │ │ ├── vector_packing.inl │ │ │ │ │ │ ├── vector_reciprocal.hpp │ │ │ │ │ │ ├── vector_reciprocal.inl │ │ │ │ │ │ ├── vector_relational.hpp │ │ │ │ │ │ ├── vector_relational.inl │ │ │ │ │ │ ├── vector_uint1.hpp │ │ │ │ │ │ ├── vector_uint1_sized.hpp │ │ │ │ │ │ ├── vector_uint2.hpp │ │ │ │ │ │ ├── vector_uint2_sized.hpp │ │ │ │ │ │ ├── vector_uint3.hpp │ │ │ │ │ │ ├── vector_uint3_sized.hpp │ │ │ │ │ │ ├── vector_uint4.hpp │ │ │ │ │ │ ├── vector_uint4_sized.hpp │ │ │ │ │ │ ├── vector_ulp.hpp │ │ │ │ │ │ └── vector_ulp.inl │ │ │ │ │ ├── ext.hpp │ │ │ │ │ ├── fwd.hpp │ │ │ │ │ ├── geometric.hpp │ │ │ │ │ ├── glm.hpp │ │ │ │ │ ├── gtc/ │ │ │ │ │ │ ├── bitfield.hpp │ │ │ │ │ │ ├── bitfield.inl │ │ │ │ │ │ ├── color_space.hpp │ │ │ │ │ │ ├── color_space.inl │ │ │ │ │ │ ├── constants.hpp │ │ │ │ │ │ ├── constants.inl │ │ │ │ │ │ ├── epsilon.hpp │ │ │ │ │ │ ├── epsilon.inl │ │ │ │ │ │ ├── integer.hpp │ │ │ │ │ │ ├── integer.inl │ │ │ │ │ │ ├── matrix_access.hpp │ │ │ │ │ │ ├── matrix_access.inl │ │ │ │ │ │ ├── matrix_integer.hpp │ │ │ │ │ │ ├── 
matrix_inverse.hpp │ │ │ │ │ │ ├── matrix_inverse.inl │ │ │ │ │ │ ├── matrix_transform.hpp │ │ │ │ │ │ ├── matrix_transform.inl │ │ │ │ │ │ ├── noise.hpp │ │ │ │ │ │ ├── noise.inl │ │ │ │ │ │ ├── packing.hpp │ │ │ │ │ │ ├── packing.inl │ │ │ │ │ │ ├── quaternion.hpp │ │ │ │ │ │ ├── quaternion.inl │ │ │ │ │ │ ├── quaternion_simd.inl │ │ │ │ │ │ ├── random.hpp │ │ │ │ │ │ ├── random.inl │ │ │ │ │ │ ├── reciprocal.hpp │ │ │ │ │ │ ├── round.hpp │ │ │ │ │ │ ├── round.inl │ │ │ │ │ │ ├── type_aligned.hpp │ │ │ │ │ │ ├── type_precision.hpp │ │ │ │ │ │ ├── type_precision.inl │ │ │ │ │ │ ├── type_ptr.hpp │ │ │ │ │ │ ├── type_ptr.inl │ │ │ │ │ │ ├── ulp.hpp │ │ │ │ │ │ ├── ulp.inl │ │ │ │ │ │ └── vec1.hpp │ │ │ │ │ ├── gtx/ │ │ │ │ │ │ ├── associated_min_max.hpp │ │ │ │ │ │ ├── associated_min_max.inl │ │ │ │ │ │ ├── bit.hpp │ │ │ │ │ │ ├── bit.inl │ │ │ │ │ │ ├── closest_point.hpp │ │ │ │ │ │ ├── closest_point.inl │ │ │ │ │ │ ├── color_encoding.hpp │ │ │ │ │ │ ├── color_encoding.inl │ │ │ │ │ │ ├── color_space.hpp │ │ │ │ │ │ ├── color_space.inl │ │ │ │ │ │ ├── color_space_YCoCg.hpp │ │ │ │ │ │ ├── color_space_YCoCg.inl │ │ │ │ │ │ ├── common.hpp │ │ │ │ │ │ ├── common.inl │ │ │ │ │ │ ├── compatibility.hpp │ │ │ │ │ │ ├── compatibility.inl │ │ │ │ │ │ ├── component_wise.hpp │ │ │ │ │ │ ├── component_wise.inl │ │ │ │ │ │ ├── dual_quaternion.hpp │ │ │ │ │ │ ├── dual_quaternion.inl │ │ │ │ │ │ ├── easing.hpp │ │ │ │ │ │ ├── easing.inl │ │ │ │ │ │ ├── euler_angles.hpp │ │ │ │ │ │ ├── euler_angles.inl │ │ │ │ │ │ ├── extend.hpp │ │ │ │ │ │ ├── extend.inl │ │ │ │ │ │ ├── extended_min_max.hpp │ │ │ │ │ │ ├── extended_min_max.inl │ │ │ │ │ │ ├── exterior_product.hpp │ │ │ │ │ │ ├── exterior_product.inl │ │ │ │ │ │ ├── fast_exponential.hpp │ │ │ │ │ │ ├── fast_exponential.inl │ │ │ │ │ │ ├── fast_square_root.hpp │ │ │ │ │ │ ├── fast_square_root.inl │ │ │ │ │ │ ├── fast_trigonometry.hpp │ │ │ │ │ │ ├── fast_trigonometry.inl │ │ │ │ │ │ ├── float_notmalize.inl │ │ │ │ │ │ ├── 
functions.hpp │ │ │ │ │ │ ├── functions.inl │ │ │ │ │ │ ├── gradient_paint.hpp │ │ │ │ │ │ ├── gradient_paint.inl │ │ │ │ │ │ ├── handed_coordinate_space.hpp │ │ │ │ │ │ ├── handed_coordinate_space.inl │ │ │ │ │ │ ├── hash.hpp │ │ │ │ │ │ ├── hash.inl │ │ │ │ │ │ ├── integer.hpp │ │ │ │ │ │ ├── integer.inl │ │ │ │ │ │ ├── intersect.hpp │ │ │ │ │ │ ├── intersect.inl │ │ │ │ │ │ ├── io.hpp │ │ │ │ │ │ ├── io.inl │ │ │ │ │ │ ├── log_base.hpp │ │ │ │ │ │ ├── log_base.inl │ │ │ │ │ │ ├── matrix_cross_product.hpp │ │ │ │ │ │ ├── matrix_cross_product.inl │ │ │ │ │ │ ├── matrix_decompose.hpp │ │ │ │ │ │ ├── matrix_decompose.inl │ │ │ │ │ │ ├── matrix_factorisation.hpp │ │ │ │ │ │ ├── matrix_factorisation.inl │ │ │ │ │ │ ├── matrix_interpolation.hpp │ │ │ │ │ │ ├── matrix_interpolation.inl │ │ │ │ │ │ ├── matrix_major_storage.hpp │ │ │ │ │ │ ├── matrix_major_storage.inl │ │ │ │ │ │ ├── matrix_operation.hpp │ │ │ │ │ │ ├── matrix_operation.inl │ │ │ │ │ │ ├── matrix_query.hpp │ │ │ │ │ │ ├── matrix_query.inl │ │ │ │ │ │ ├── matrix_transform_2d.hpp │ │ │ │ │ │ ├── matrix_transform_2d.inl │ │ │ │ │ │ ├── mixed_product.hpp │ │ │ │ │ │ ├── mixed_product.inl │ │ │ │ │ │ ├── norm.hpp │ │ │ │ │ │ ├── norm.inl │ │ │ │ │ │ ├── normal.hpp │ │ │ │ │ │ ├── normal.inl │ │ │ │ │ │ ├── normalize_dot.hpp │ │ │ │ │ │ ├── normalize_dot.inl │ │ │ │ │ │ ├── number_precision.hpp │ │ │ │ │ │ ├── number_precision.inl │ │ │ │ │ │ ├── optimum_pow.hpp │ │ │ │ │ │ ├── optimum_pow.inl │ │ │ │ │ │ ├── orthonormalize.hpp │ │ │ │ │ │ ├── orthonormalize.inl │ │ │ │ │ │ ├── pca.hpp │ │ │ │ │ │ ├── pca.inl │ │ │ │ │ │ ├── perpendicular.hpp │ │ │ │ │ │ ├── perpendicular.inl │ │ │ │ │ │ ├── polar_coordinates.hpp │ │ │ │ │ │ ├── polar_coordinates.inl │ │ │ │ │ │ ├── projection.hpp │ │ │ │ │ │ ├── projection.inl │ │ │ │ │ │ ├── quaternion.hpp │ │ │ │ │ │ ├── quaternion.inl │ │ │ │ │ │ ├── range.hpp │ │ │ │ │ │ ├── raw_data.hpp │ │ │ │ │ │ ├── raw_data.inl │ │ │ │ │ │ ├── rotate_normalized_axis.hpp │ │ │ │ │ │ 
├── rotate_normalized_axis.inl │ │ │ │ │ │ ├── rotate_vector.hpp │ │ │ │ │ │ ├── rotate_vector.inl │ │ │ │ │ │ ├── scalar_multiplication.hpp │ │ │ │ │ │ ├── scalar_relational.hpp │ │ │ │ │ │ ├── scalar_relational.inl │ │ │ │ │ │ ├── spline.hpp │ │ │ │ │ │ ├── spline.inl │ │ │ │ │ │ ├── std_based_type.hpp │ │ │ │ │ │ ├── std_based_type.inl │ │ │ │ │ │ ├── string_cast.hpp │ │ │ │ │ │ ├── string_cast.inl │ │ │ │ │ │ ├── texture.hpp │ │ │ │ │ │ ├── texture.inl │ │ │ │ │ │ ├── transform.hpp │ │ │ │ │ │ ├── transform.inl │ │ │ │ │ │ ├── transform2.hpp │ │ │ │ │ │ ├── transform2.inl │ │ │ │ │ │ ├── type_aligned.hpp │ │ │ │ │ │ ├── type_aligned.inl │ │ │ │ │ │ ├── type_trait.hpp │ │ │ │ │ │ ├── type_trait.inl │ │ │ │ │ │ ├── vec_swizzle.hpp │ │ │ │ │ │ ├── vector_angle.hpp │ │ │ │ │ │ ├── vector_angle.inl │ │ │ │ │ │ ├── vector_query.hpp │ │ │ │ │ │ ├── vector_query.inl │ │ │ │ │ │ ├── wrap.hpp │ │ │ │ │ │ └── wrap.inl │ │ │ │ │ ├── integer.hpp │ │ │ │ │ ├── mat2x2.hpp │ │ │ │ │ ├── mat2x3.hpp │ │ │ │ │ ├── mat2x4.hpp │ │ │ │ │ ├── mat3x2.hpp │ │ │ │ │ ├── mat3x3.hpp │ │ │ │ │ ├── mat3x4.hpp │ │ │ │ │ ├── mat4x2.hpp │ │ │ │ │ ├── mat4x3.hpp │ │ │ │ │ ├── mat4x4.hpp │ │ │ │ │ ├── matrix.hpp │ │ │ │ │ ├── packing.hpp │ │ │ │ │ ├── simd/ │ │ │ │ │ │ ├── common.h │ │ │ │ │ │ ├── exponential.h │ │ │ │ │ │ ├── geometric.h │ │ │ │ │ │ ├── integer.h │ │ │ │ │ │ ├── matrix.h │ │ │ │ │ │ ├── neon.h │ │ │ │ │ │ ├── packing.h │ │ │ │ │ │ ├── platform.h │ │ │ │ │ │ ├── trigonometric.h │ │ │ │ │ │ └── vector_relational.h │ │ │ │ │ ├── trigonometric.hpp │ │ │ │ │ ├── vec2.hpp │ │ │ │ │ ├── vec3.hpp │ │ │ │ │ ├── vec4.hpp │ │ │ │ │ └── vector_relational.hpp │ │ │ │ ├── manual.md │ │ │ │ ├── readme.md │ │ │ │ ├── test/ │ │ │ │ │ ├── bug/ │ │ │ │ │ │ └── bug_ms_vec_static.cpp │ │ │ │ │ ├── cmake/ │ │ │ │ │ │ ├── CMakeLists.txt │ │ │ │ │ │ └── test_find_glm.cpp │ │ │ │ │ ├── core/ │ │ │ │ │ │ ├── CMakeLists.txt │ │ │ │ │ │ ├── core_cpp_constexpr.cpp │ │ │ │ │ │ ├── 
core_cpp_defaulted_ctor.cpp │ │ │ │ │ │ ├── core_force_aligned_gentypes.cpp │ │ │ │ │ │ ├── core_force_arch_unknown.cpp │ │ │ │ │ │ ├── core_force_compiler_unknown.cpp │ │ │ │ │ │ ├── core_force_ctor_init.cpp │ │ │ │ │ │ ├── core_force_cxx03.cpp │ │ │ │ │ │ ├── core_force_cxx98.cpp │ │ │ │ │ │ ├── core_force_cxx_unknown.cpp │ │ │ │ │ │ ├── core_force_depth_zero_to_one.cpp │ │ │ │ │ │ ├── core_force_explicit_ctor.cpp │ │ │ │ │ │ ├── core_force_inline.cpp │ │ │ │ │ │ ├── core_force_left_handed.cpp │ │ │ │ │ │ ├── core_force_platform_unknown.cpp │ │ │ │ │ │ ├── core_force_pure.cpp │ │ │ │ │ │ ├── core_force_quat_xyzw.cpp │ │ │ │ │ │ ├── core_force_size_t_length.cpp │ │ │ │ │ │ ├── core_force_unrestricted_gentype.cpp │ │ │ │ │ │ ├── core_force_xyzw_only.cpp │ │ │ │ │ │ ├── core_func_common.cpp │ │ │ │ │ │ ├── core_func_exponential.cpp │ │ │ │ │ │ ├── core_func_geometric.cpp │ │ │ │ │ │ ├── core_func_integer.cpp │ │ │ │ │ │ ├── core_func_integer_bit_count.cpp │ │ │ │ │ │ ├── core_func_integer_find_lsb.cpp │ │ │ │ │ │ ├── core_func_integer_find_msb.cpp │ │ │ │ │ │ ├── core_func_matrix.cpp │ │ │ │ │ │ ├── core_func_noise.cpp │ │ │ │ │ │ ├── core_func_packing.cpp │ │ │ │ │ │ ├── core_func_swizzle.cpp │ │ │ │ │ │ ├── core_func_trigonometric.cpp │ │ │ │ │ │ ├── core_func_vector_relational.cpp │ │ │ │ │ │ ├── core_setup_force_cxx98.cpp │ │ │ │ │ │ ├── core_setup_force_size_t_length.cpp │ │ │ │ │ │ ├── core_setup_message.cpp │ │ │ │ │ │ ├── core_setup_platform_unknown.cpp │ │ │ │ │ │ ├── core_setup_precision.cpp │ │ │ │ │ │ ├── core_type_aligned.cpp │ │ │ │ │ │ ├── core_type_cast.cpp │ │ │ │ │ │ ├── core_type_ctor.cpp │ │ │ │ │ │ ├── core_type_int.cpp │ │ │ │ │ │ ├── core_type_length.cpp │ │ │ │ │ │ ├── core_type_mat2x2.cpp │ │ │ │ │ │ ├── core_type_mat2x3.cpp │ │ │ │ │ │ ├── core_type_mat2x4.cpp │ │ │ │ │ │ ├── core_type_mat3x2.cpp │ │ │ │ │ │ ├── core_type_mat3x3.cpp │ │ │ │ │ │ ├── core_type_mat3x4.cpp │ │ │ │ │ │ ├── core_type_mat4x2.cpp │ │ │ │ │ │ ├── 
core_type_mat4x3.cpp │ │ │ │ │ │ ├── core_type_mat4x4.cpp │ │ │ │ │ │ ├── core_type_vec1.cpp │ │ │ │ │ │ ├── core_type_vec2.cpp │ │ │ │ │ │ ├── core_type_vec3.cpp │ │ │ │ │ │ └── core_type_vec4.cpp │ │ │ │ │ ├── ext/ │ │ │ │ │ │ ├── CMakeLists.txt │ │ │ │ │ │ ├── ext_matrix_clip_space.cpp │ │ │ │ │ │ ├── ext_matrix_common.cpp │ │ │ │ │ │ ├── ext_matrix_int2x2_sized.cpp │ │ │ │ │ │ ├── ext_matrix_int2x3_sized.cpp │ │ │ │ │ │ ├── ext_matrix_int2x4_sized.cpp │ │ │ │ │ │ ├── ext_matrix_int3x2_sized.cpp │ │ │ │ │ │ ├── ext_matrix_int3x3_sized.cpp │ │ │ │ │ │ ├── ext_matrix_int3x4_sized.cpp │ │ │ │ │ │ ├── ext_matrix_int4x2_sized.cpp │ │ │ │ │ │ ├── ext_matrix_int4x3_sized.cpp │ │ │ │ │ │ ├── ext_matrix_int4x4_sized.cpp │ │ │ │ │ │ ├── ext_matrix_integer.cpp │ │ │ │ │ │ ├── ext_matrix_projection.cpp │ │ │ │ │ │ ├── ext_matrix_relational.cpp │ │ │ │ │ │ ├── ext_matrix_transform.cpp │ │ │ │ │ │ ├── ext_matrix_uint2x2_sized.cpp │ │ │ │ │ │ ├── ext_matrix_uint2x3_sized.cpp │ │ │ │ │ │ ├── ext_matrix_uint2x4_sized.cpp │ │ │ │ │ │ ├── ext_matrix_uint3x2_sized.cpp │ │ │ │ │ │ ├── ext_matrix_uint3x3_sized.cpp │ │ │ │ │ │ ├── ext_matrix_uint3x4_sized.cpp │ │ │ │ │ │ ├── ext_matrix_uint4x2_sized.cpp │ │ │ │ │ │ ├── ext_matrix_uint4x3_sized.cpp │ │ │ │ │ │ ├── ext_matrix_uint4x4_sized.cpp │ │ │ │ │ │ ├── ext_quaternion_common.cpp │ │ │ │ │ │ ├── ext_quaternion_exponential.cpp │ │ │ │ │ │ ├── ext_quaternion_geometric.cpp │ │ │ │ │ │ ├── ext_quaternion_relational.cpp │ │ │ │ │ │ ├── ext_quaternion_transform.cpp │ │ │ │ │ │ ├── ext_quaternion_trigonometric.cpp │ │ │ │ │ │ ├── ext_quaternion_type.cpp │ │ │ │ │ │ ├── ext_scalar_common.cpp │ │ │ │ │ │ ├── ext_scalar_constants.cpp │ │ │ │ │ │ ├── ext_scalar_int_sized.cpp │ │ │ │ │ │ ├── ext_scalar_integer.cpp │ │ │ │ │ │ ├── ext_scalar_packing.cpp │ │ │ │ │ │ ├── ext_scalar_reciprocal.cpp │ │ │ │ │ │ ├── ext_scalar_relational.cpp │ │ │ │ │ │ ├── ext_scalar_uint_sized.cpp │ │ │ │ │ │ ├── ext_scalar_ulp.cpp │ │ │ │ │ │ ├── ext_vec1.cpp │ │ 
│ │ │ │ ├── ext_vector_bool1.cpp │ │ │ │ │ │ ├── ext_vector_common.cpp │ │ │ │ │ │ ├── ext_vector_iec559.cpp │ │ │ │ │ │ ├── ext_vector_int1_sized.cpp │ │ │ │ │ │ ├── ext_vector_int2_sized.cpp │ │ │ │ │ │ ├── ext_vector_int3_sized.cpp │ │ │ │ │ │ ├── ext_vector_int4_sized.cpp │ │ │ │ │ │ ├── ext_vector_integer.cpp │ │ │ │ │ │ ├── ext_vector_integer_sized.cpp │ │ │ │ │ │ ├── ext_vector_packing.cpp │ │ │ │ │ │ ├── ext_vector_reciprocal.cpp │ │ │ │ │ │ ├── ext_vector_relational.cpp │ │ │ │ │ │ ├── ext_vector_uint1_sized.cpp │ │ │ │ │ │ ├── ext_vector_uint2_sized.cpp │ │ │ │ │ │ ├── ext_vector_uint3_sized.cpp │ │ │ │ │ │ ├── ext_vector_uint4_sized.cpp │ │ │ │ │ │ └── ext_vector_ulp.cpp │ │ │ │ │ ├── gtc/ │ │ │ │ │ │ ├── CMakeLists.txt │ │ │ │ │ │ ├── gtc_bitfield.cpp │ │ │ │ │ │ ├── gtc_color_space.cpp │ │ │ │ │ │ ├── gtc_constants.cpp │ │ │ │ │ │ ├── gtc_epsilon.cpp │ │ │ │ │ │ ├── gtc_integer.cpp │ │ │ │ │ │ ├── gtc_matrix_access.cpp │ │ │ │ │ │ ├── gtc_matrix_inverse.cpp │ │ │ │ │ │ ├── gtc_matrix_transform.cpp │ │ │ │ │ │ ├── gtc_noise.cpp │ │ │ │ │ │ ├── gtc_packing.cpp │ │ │ │ │ │ ├── gtc_quaternion.cpp │ │ │ │ │ │ ├── gtc_random.cpp │ │ │ │ │ │ ├── gtc_reciprocal.cpp │ │ │ │ │ │ ├── gtc_round.cpp │ │ │ │ │ │ ├── gtc_type_aligned.cpp │ │ │ │ │ │ ├── gtc_type_precision.cpp │ │ │ │ │ │ ├── gtc_type_ptr.cpp │ │ │ │ │ │ ├── gtc_ulp.cpp │ │ │ │ │ │ └── gtc_user_defined_types.cpp │ │ │ │ │ ├── gtx/ │ │ │ │ │ │ ├── CMakeLists.txt │ │ │ │ │ │ ├── gtx.cpp │ │ │ │ │ │ ├── gtx_associated_min_max.cpp │ │ │ │ │ │ ├── gtx_closest_point.cpp │ │ │ │ │ │ ├── gtx_color_encoding.cpp │ │ │ │ │ │ ├── gtx_color_space.cpp │ │ │ │ │ │ ├── gtx_color_space_YCoCg.cpp │ │ │ │ │ │ ├── gtx_common.cpp │ │ │ │ │ │ ├── gtx_compatibility.cpp │ │ │ │ │ │ ├── gtx_component_wise.cpp │ │ │ │ │ │ ├── gtx_dual_quaternion.cpp │ │ │ │ │ │ ├── gtx_easing.cpp │ │ │ │ │ │ ├── gtx_euler_angle.cpp │ │ │ │ │ │ ├── gtx_extend.cpp │ │ │ │ │ │ ├── gtx_extended_min_max.cpp │ │ │ │ │ │ ├── gtx_extented_min_max.cpp 
│ │ │ │ │ │ ├── gtx_exterior_product.cpp │ │ │ │ │ │ ├── gtx_fast_exponential.cpp │ │ │ │ │ │ ├── gtx_fast_square_root.cpp │ │ │ │ │ │ ├── gtx_fast_trigonometry.cpp │ │ │ │ │ │ ├── gtx_functions.cpp │ │ │ │ │ │ ├── gtx_gradient_paint.cpp │ │ │ │ │ │ ├── gtx_handed_coordinate_space.cpp │ │ │ │ │ │ ├── gtx_hash.cpp │ │ │ │ │ │ ├── gtx_int_10_10_10_2.cpp │ │ │ │ │ │ ├── gtx_integer.cpp │ │ │ │ │ │ ├── gtx_intersect.cpp │ │ │ │ │ │ ├── gtx_io.cpp │ │ │ │ │ │ ├── gtx_load.cpp │ │ │ │ │ │ ├── gtx_log_base.cpp │ │ │ │ │ │ ├── gtx_matrix_cross_product.cpp │ │ │ │ │ │ ├── gtx_matrix_decompose.cpp │ │ │ │ │ │ ├── gtx_matrix_factorisation.cpp │ │ │ │ │ │ ├── gtx_matrix_interpolation.cpp │ │ │ │ │ │ ├── gtx_matrix_major_storage.cpp │ │ │ │ │ │ ├── gtx_matrix_operation.cpp │ │ │ │ │ │ ├── gtx_matrix_query.cpp │ │ │ │ │ │ ├── gtx_matrix_transform_2d.cpp │ │ │ │ │ │ ├── gtx_mixed_product.cpp │ │ │ │ │ │ ├── gtx_norm.cpp │ │ │ │ │ │ ├── gtx_normal.cpp │ │ │ │ │ │ ├── gtx_normalize_dot.cpp │ │ │ │ │ │ ├── gtx_number_precision.cpp │ │ │ │ │ │ ├── gtx_optimum_pow.cpp │ │ │ │ │ │ ├── gtx_orthonormalize.cpp │ │ │ │ │ │ ├── gtx_pca.cpp │ │ │ │ │ │ ├── gtx_perpendicular.cpp │ │ │ │ │ │ ├── gtx_polar_coordinates.cpp │ │ │ │ │ │ ├── gtx_projection.cpp │ │ │ │ │ │ ├── gtx_quaternion.cpp │ │ │ │ │ │ ├── gtx_random.cpp │ │ │ │ │ │ ├── gtx_range.cpp │ │ │ │ │ │ ├── gtx_rotate_normalized_axis.cpp │ │ │ │ │ │ ├── gtx_rotate_vector.cpp │ │ │ │ │ │ ├── gtx_scalar_multiplication.cpp │ │ │ │ │ │ ├── gtx_scalar_relational.cpp │ │ │ │ │ │ ├── gtx_simd_mat4.cpp │ │ │ │ │ │ ├── gtx_simd_vec4.cpp │ │ │ │ │ │ ├── gtx_spline.cpp │ │ │ │ │ │ ├── gtx_string_cast.cpp │ │ │ │ │ │ ├── gtx_texture.cpp │ │ │ │ │ │ ├── gtx_type_aligned.cpp │ │ │ │ │ │ ├── gtx_type_trait.cpp │ │ │ │ │ │ ├── gtx_vec_swizzle.cpp │ │ │ │ │ │ ├── gtx_vector_angle.cpp │ │ │ │ │ │ ├── gtx_vector_query.cpp │ │ │ │ │ │ └── gtx_wrap.cpp │ │ │ │ │ └── perf/ │ │ │ │ │ ├── CMakeLists.txt │ │ │ │ │ ├── perf_matrix_div.cpp │ │ │ │ │ ├── 
perf_matrix_inverse.cpp │ │ │ │ │ ├── perf_matrix_mul.cpp │ │ │ │ │ ├── perf_matrix_mul_vector.cpp │ │ │ │ │ ├── perf_matrix_transpose.cpp │ │ │ │ │ └── perf_vector_mul_matrix.cpp │ │ │ │ └── util/ │ │ │ │ ├── autoexp.txt │ │ │ │ └── glm.natvis │ │ │ └── stbi_image_write.h │ │ └── simple-knn/ │ │ ├── build/ │ │ │ ├── temp.linux-x86_64-cpython-310/ │ │ │ │ ├── .ninja_deps │ │ │ │ ├── .ninja_log │ │ │ │ ├── build.ninja │ │ │ │ ├── ext.o │ │ │ │ ├── simple_knn.o │ │ │ │ └── spatial.o │ │ │ └── temp.linux-x86_64-cpython-38/ │ │ │ ├── ext.o │ │ │ ├── simple_knn.o │ │ │ └── spatial.o │ │ ├── ext.cpp │ │ ├── setup.py │ │ ├── simple_knn/ │ │ │ └── .gitkeep │ │ ├── simple_knn.cu │ │ ├── simple_knn.egg-info/ │ │ │ ├── PKG-INFO │ │ │ ├── SOURCES.txt │ │ │ ├── dependency_links.txt │ │ │ └── top_level.txt │ │ ├── simple_knn.h │ │ ├── spatial.cu │ │ └── spatial.h │ ├── utils/ │ │ ├── camera_utils.py │ │ ├── edit_utils.py │ │ ├── fusion_util.py │ │ ├── general_utils.py │ │ ├── graphics_utils.py │ │ ├── image_utils.py │ │ ├── loss_utils.py │ │ ├── pose_utils.py │ │ ├── sh_utils.py │ │ ├── step_track.py │ │ └── system_utils.py │ └── video_preprocessor/ │ └── __init__.py ├── get_normal.py ├── quick_start.sh ├── requirements.txt ├── train_all.sh ├── utils/ │ ├── align_traj.py │ ├── camera_utils.py │ ├── general_utils.py │ ├── graphics_utils.py │ ├── image_utils.py │ ├── interp_utils.py │ ├── loss_utils.py │ ├── pose_utils.py │ ├── sfm_utils.py │ ├── sh_utils.py │ ├── stepfun.py │ ├── system_utils.py │ └── utils_poses/ │ ├── ATE/ │ │ ├── align_trajectory.py │ │ ├── align_utils.py │ │ ├── compute_trajectory_errors.py │ │ ├── results_writer.py │ │ ├── trajectory_utils.py │ │ └── transformations.py │ ├── align_traj.py │ ├── comp_ate.py │ ├── lie_group_helper.py │ ├── relative_pose.py │ ├── vis_cam_traj.py │ └── vis_pose_utils.py ├── vggt/ │ ├── heads/ │ │ ├── camera_head.py │ │ ├── dpt_head.py │ │ ├── head_act.py │ │ ├── track_head.py │ │ ├── track_modules/ │ │ │ ├── __init__.py │ │ │ 
├── base_track_predictor.py │ │ │ ├── blocks.py │ │ │ ├── modules.py │ │ │ └── utils.py │ │ └── utils.py │ ├── layers/ │ │ ├── __init__.py │ │ ├── attention.py │ │ ├── block.py │ │ ├── drop_path.py │ │ ├── layer_scale.py │ │ ├── mlp.py │ │ ├── patch_embed.py │ │ ├── rope.py │ │ ├── swiglu_ffn.py │ │ └── vision_transformer.py │ ├── models/ │ │ ├── aggregator.py │ │ └── vggt.py │ └── utils/ │ ├── geometry.py │ ├── load_fn.py │ ├── pose_enc.py │ ├── rotation.py │ └── visual_track.py └── video_inference.py ================================================ FILE CONTENTS ================================================ ================================================ FILE: .gitignore ================================================ .vscode *.log **/__pycache__/ **/data/ **/output/ ================================================ FILE: LICENSE ================================================ MIT License Copyright (c) 2025 Fangfu-0830 Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
================================================ FILE: README.md ================================================
# ✨LangScene-X: Reconstruct Generalizable 3D Language-Embedded Scenes with TriMap Video Diffusion✨

Fangfu Liu<sup>1</sup>, Hao Li<sup>2</sup>, Jiawei Chi<sup>1</sup>, Hanyang Wang<sup>1,3</sup>, Minghui Yang<sup>3</sup>, Fudong Wang<sup>3</sup>, Yueqi Duan<sup>1</sup>
<sup>1</sup>Tsinghua University, <sup>2</sup>NTU, <sup>3</sup>Ant Group

ICCV 2025 🔥

![Teaser Visualization](assets/teaser.png)
**LangScene-X:** We propose LangScene-X, a unified model that generates RGB, segmentation-map, and normal-map videos, enabling reconstruction of a 3D language-embedded field from sparse input views.

## 📢 News

- 🔥 [04/07/2025] We release "LangScene-X: Reconstruct Generalizable 3D Language-Embedded Scenes with TriMap Video Diffusion". Check our [project page](https://liuff19.github.io/LangScene-X) and [arXiv paper](https://arxiv.org/abs/2507.02813).

## 🌟 Pipeline

![Pipeline Visualization](assets/pipeline.png)

Pipeline of LangScene-X. Our model is composed of a TriMap Video Diffusion model that generates RGB, segmentation-map, and normal-map videos; an Auto Encoder that compresses the language features; and a field constructor that reconstructs a 3DGS field from the generated videos.

## 🎨 Video Demos from TriMap Video Diffusion

https://github.com/user-attachments/assets/55346d53-eb04-490e-bb70-64555e97e040

https://github.com/user-attachments/assets/d6eb28b9-2af8-49a7-bb8b-0d4cba7843a5

https://github.com/user-attachments/assets/396f11ef-85dc-41de-882e-e249c25b9961

## ⚙️ Setup

### 1. Clone Repository

```bash
git clone https://github.com/liuff19/LangScene-X.git
cd LangScene-X
```

### 2. Environment Setup

1. **Create conda environment**

   ```bash
   conda create -n langscenex python=3.10 -y
   conda activate langscenex
   ```

2. **Install dependencies**

   ```bash
   conda install pytorch torchvision -c pytorch -y
   pip install -e field_construction/submodules/simple-knn
   pip install -e field_construction/submodules/diff-langsurf-rasterizer
   pip install -e auto-seg/submodules/segment-anything-1
   pip install -e auto-seg/submodules/segment-anything-2
   pip install -r requirements.txt
   ```

### 3. Model Checkpoints

The checkpoints of SAM, SAM2, and the fine-tuned CogVideoX can be downloaded from our [huggingface repository](https://huggingface.co/chijw/LangScene-X).
## 💻 Running

### Quick Start

You can get started quickly by running the following script:

```bash
chmod +x quick_start.sh
./quick_start.sh
```

### Render

Run the following command to render from the reconstructed 3DGS field:

```bash
python entry_point.py \
    pipeline.rgb_video_path="does/not/matter" \
    pipeline.normal_video_path="does/not/matter" \
    pipeline.seg_video_path="does/not/matter" \
    pipeline.data_path="does/not/matter" \
    gaussian.dataset.source_path="does/not/matter" \
    gaussian.dataset.model_path="output/path" \
    pipeline.selection=False \
    gaussian.opt.max_geo_iter=1500 \
    gaussian.opt.normal_optim=True \
    gaussian.opt.optim_pose=True \
    pipeline.skip_video_process=True \
    pipeline.skip_lang_feature_extraction=True \
    pipeline.mode="render"
```

You can also configure the pipeline by editing `configs/field_construction.yaml`.

## ✒️ TODO List

- [x] Per-scene Auto Encoder released
- [x] Fine-tuned CogVideoX checkpoints released
- [ ] Generalizable Auto Encoder (LQC)
- [ ] Improved TriMap Video Diffusion model

## 🔗 Acknowledgement

We are thankful for the following great works when implementing LangScene-X:

- [CogVideoX](https://github.com/THUDM/CogVideo), [CogvideX-Interpolation](https://github.com/feizc/CogvideX-Interpolation), [LangSplat](https://github.com/minghanqin/LangSplat), [LangSurf](https://github.com/lifuguan/LangSurf), [VGGT](https://github.com/facebookresearch/vggt), [3DGS](https://github.com/graphdeco-inria/gaussian-splatting), [SAM2](https://github.com/facebookresearch/sam2)

## 📚 Citation

```bibtex
@misc{liu2025langscenexreconstructgeneralizable3d,
      title={LangScene-X: Reconstruct Generalizable 3D Language-Embedded Scenes with TriMap Video Diffusion},
      author={Fangfu Liu and Hao Li and Jiawei Chi and Hanyang Wang and Minghui Yang and Fudong Wang and Yueqi Duan},
      year={2025},
      eprint={2507.02813},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2507.02813},
}
```

================================================
FILE: auto-seg/auto-mask-align.py
================================================ import argparse import os import random import cv2 import imageio import matplotlib.pyplot as plt import numpy as np import torch from loguru import logger from PIL import Image from segment_anything import SamAutomaticMaskGenerator, sam_model_registry from tqdm import tqdm # use bfloat16 for the entire notebook torch.autocast(device_type="cuda", dtype=torch.bfloat16).__enter__() if torch.cuda.get_device_properties(0).major >= 8: # turn on tfloat32 for Ampere GPUs (https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices) torch.backends.cuda.matmul.allow_tf32 = True torch.backends.cudnn.allow_tf32 = True from sam2.automatic_mask_generator import SAM2AutomaticMaskGenerator from sam2.build_sam import build_sam2, build_sam2_video_predictor def show_anns(anns, borders=True): if len(anns) == 0: return sorted_anns = sorted(anns, key=(lambda x: x['area']), reverse=True) ax = plt.gca() ax.set_autoscale_on(False) img = np.ones((sorted_anns[0]['segmentation'].shape[0], sorted_anns[0]['segmentation'].shape[1], 4)) img[:,:,3] = 0 for ann in sorted_anns: m = ann['segmentation'] color_mask = np.concatenate([np.random.random(3), [0.5]]) img[m] = color_mask if borders: import cv2 contours, _ = cv2.findContours(m.astype(np.uint8),cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) # Try to smooth contours contours = [cv2.approxPolyDP(contour, epsilon=0.01, closed=True) for contour in contours] cv2.drawContours(img, contours, -1, (0,0,1,0.4), thickness=1) ax.imshow(img) def mask_nms(masks, scores, iou_thr=0.7, score_thr=0.1, inner_thr=0.2, **kwargs): """ Perform mask non-maximum suppression (NMS) on a set of masks based on their scores. Args: masks (torch.Tensor): has shape (num_masks, H, W) scores (torch.Tensor): The scores of the masks, has shape (num_masks,) iou_thr (float, optional): The threshold for IoU. score_thr (float, optional): The threshold for the mask scores. 
        inner_thr (float, optional): The threshold for the overlap rate.
        **kwargs: Additional keyword arguments.
    Returns:
        selected_idx (torch.Tensor): A tensor representing the selected indices of the masks after NMS.
    """
    scores, idx = scores.sort(0, descending=True)
    num_masks = idx.shape[0]
    masks_ord = masks[idx.view(-1), :]
    masks_area = torch.sum(masks_ord, dim=(1, 2), dtype=torch.float)

    # Compute pairwise IoU (and an "inner" overlap score) in chunks to bound memory.
    mask_chunk_size = 20
    mask_chunks = masks_ord.split(mask_chunk_size, dim=0)
    area_chunks = masks_area.split(mask_chunk_size, dim=0)
    iou_matrix = []
    inner_iou_matrix = []
    for i_areas, i_chunk in zip(area_chunks, mask_chunks):
        row_iou_matrix = []
        row_inner_iou_matrix = []
        for j_areas, j_chunk in zip(area_chunks, mask_chunks):
            intersection = torch.logical_and(i_chunk.unsqueeze(1), j_chunk.unsqueeze(0)).sum(dim=(-1, -2))
            union = torch.logical_or(i_chunk.unsqueeze(1), j_chunk.unsqueeze(0)).sum(dim=(-1, -2))
            local_iou_mat = intersection / union
            row_iou_matrix.append(local_iou_mat)
            row_inter_mat = intersection / i_areas[:, None]
            col_inter_mat = intersection / j_areas[None, :]
            inter = torch.logical_and(row_inter_mat < 0.5, col_inter_mat >= 0.85)
            local_inner_iou_mat = torch.zeros((len(i_areas), len(j_areas)))
            local_inner_iou_mat[inter] = 1 - row_inter_mat[inter] * col_inter_mat[inter]
            row_inner_iou_matrix.append(local_inner_iou_mat)
        row_iou_matrix = torch.cat(row_iou_matrix, dim=1)
        row_inner_iou_matrix = torch.cat(row_inner_iou_matrix, dim=1)
        iou_matrix.append(row_iou_matrix)
        inner_iou_matrix.append(row_inner_iou_matrix)
    iou_matrix = torch.cat(iou_matrix, dim=0)
    inner_iou_matrix = torch.cat(inner_iou_matrix, dim=0)
    iou_matrix.triu_(diagonal=1)
    iou_max, _ = iou_matrix.max(dim=0)
    inner_iou_matrix_u = torch.triu(inner_iou_matrix, diagonal=1)
    inner_iou_max_u, _ = inner_iou_matrix_u.max(dim=0)
    inner_iou_matrix_l = torch.tril(inner_iou_matrix, diagonal=1)
    inner_iou_max_l, _ = inner_iou_matrix_l.max(dim=0)

    keep = iou_max <= iou_thr
    keep_conf = scores > score_thr
    keep_inner_u = inner_iou_max_u <= 1 - inner_thr
    keep_inner_l = inner_iou_max_l <= 1 - inner_thr
    # If a filter would drop everything, fall back to keeping the top-3 scoring masks.
    # (Fixed: these are 1-D boolean tensors, so they must be indexed with a single index,
    # not `[index, 0]`, which raises an IndexError.)
    if keep_conf.sum() == 0:
        index = scores.topk(3).indices
        keep_conf[index] = True
    if keep_inner_u.sum() == 0:
        index = scores.topk(3).indices
        keep_inner_u[index] = True
    if keep_inner_l.sum() == 0:
        index = scores.topk(3).indices
        keep_inner_l[index] = True
    keep *= keep_conf
    keep *= keep_inner_u
    keep *= keep_inner_l

    selected_idx = idx[keep]
    return selected_idx

def filter(keep: torch.Tensor, masks_result) -> list:
    # Fixed return annotation: this helper returns the kept masks, not None.
    keep = keep.int().cpu().numpy()
    result_keep = []
    for i, m in enumerate(masks_result):
        if i in keep:
            result_keep.append(m)
    return result_keep

def masks_update(*args, **kwargs):
    # Remove redundant masks based on the scores and overlap rate between masks.
    masks_new = ()
    for masks_lvl in args:
        if isinstance(masks_lvl, tuple):
            masks_lvl = masks_lvl[0]  # If it's a tuple, take the first element
        if len(masks_lvl) == 0:
            masks_new += (masks_lvl,)
            continue
        # Check if masks_lvl is a list of dictionaries
        if isinstance(masks_lvl[0], dict):
            seg_pred = torch.from_numpy(np.stack([m['segmentation'] for m in masks_lvl], axis=0))
            iou_pred = torch.from_numpy(np.stack([m['predicted_iou'] for m in masks_lvl], axis=0))
            stability = torch.from_numpy(np.stack([m['stability_score'] for m in masks_lvl], axis=0))
        else:
            # If it's a direct list of masks, use them directly
            seg_pred = torch.from_numpy(np.stack(masks_lvl, axis=0))
            # Create default values for cases without iou and stability
            iou_pred = torch.ones(len(masks_lvl))
            stability = torch.ones(len(masks_lvl))
        scores = stability * iou_pred
        keep_mask_nms = mask_nms(seg_pred, scores, **kwargs)
        masks_lvl = filter(keep_mask_nms, masks_lvl)
        masks_new += (masks_lvl,)
    return masks_new

def show_mask(mask, ax, obj_id=None, random_color=False):
    if random_color:
        color = np.concatenate([np.random.random(3), np.array([0.6])], axis=0)
    else:
        cmap = plt.get_cmap("tab20")
        cmap_idx = 0 if obj_id is None else obj_id
        color = np.array([*cmap(cmap_idx)[:3], 0.6])
    h, w = mask.shape[-2:]
    mask_image
= mask.reshape(h, w, 1) * color.reshape(1, 1, -1) ax.imshow(mask_image) def save_mask(mask,frame_idx,save_dir): image_array = (mask * 255).astype(np.uint8) # Create image object image = Image.fromarray(image_array[0]) # Save image image.save(os.path.join(save_dir,f'{frame_idx:03}.png')) def save_masks(mask_list,frame_idx,save_dir): os.makedirs(save_dir,exist_ok=True) if len(mask_list[0].shape) == 3: # Calculate dimensions for concatenated image total_width = mask_list[0].shape[2] * len(mask_list) max_height = mask_list[0].shape[1] # Create large image final_image = Image.new('RGB', (total_width, max_height)) for i, img in enumerate(mask_list): img = Image.fromarray((img[0] * 255).astype(np.uint8)).convert("RGB") final_image.paste(img, (i * img.width, 0)) final_image.save(os.path.join(save_dir,f"mask_{frame_idx:03}.png")) else: # Calculate dimensions for concatenated image total_width = mask_list[0].shape[1] * len(mask_list) max_height = mask_list[0].shape[0] # Create large image final_image = Image.new('RGB', (total_width, max_height)) for i, img in enumerate(mask_list): img = Image.fromarray((img * 255).astype(np.uint8)).convert("RGB") final_image.paste(img, (i * img.width, 0)) final_image.save(os.path.join(save_dir,f"mask_{frame_idx:03}.png")) def save_masks_npy(mask_list,frame_idx,save_dir): np.save(os.path.join(save_dir,f"mask_{frame_idx:03}.npy"),np.array(mask_list)) def show_points(coords, labels, ax, marker_size=200): pos_points = coords[labels==1] neg_points = coords[labels==0] ax.scatter(pos_points[:, 0], pos_points[:, 1], color='green', marker='*', s=marker_size, edgecolor='white', linewidth=1.25) ax.scatter(neg_points[:, 0], neg_points[:, 1], color='red', marker='*', s=marker_size, edgecolor='white', linewidth=1.25) def make_enlarge_bbox(origin_bbox, max_width,max_height,ratio): width = origin_bbox[2] height = origin_bbox[3] new_box = [max(origin_bbox[0]-width*(ratio-1)/2,0),max(origin_bbox[1]-height*(ratio-1)/2,0)] 
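The `mask_nms` routine above keeps high-scoring masks and suppresses lower-scoring ones whose IoU with an already-kept mask exceeds `iou_thr`. A minimal sketch of that core greedy idea on toy boolean masks (simplified: no chunking and no inner-overlap test; `simple_mask_nms` is an illustrative name, not the repository's API):

```python
import numpy as np

def simple_mask_nms(masks, scores, iou_thr=0.7):
    """Greedy NMS over boolean masks: visit masks in descending score
    order, dropping any mask whose IoU with a kept mask exceeds iou_thr."""
    order = np.argsort(scores)[::-1]
    keep = []
    for i in order:
        ok = True
        for j in keep:
            inter = np.logical_and(masks[i], masks[j]).sum()
            union = np.logical_or(masks[i], masks[j]).sum()
            if union > 0 and inter / union > iou_thr:
                ok = False
                break
        if ok:
            keep.append(int(i))
    return keep

# Two near-duplicate masks and one disjoint mask.
a = np.zeros((4, 4), dtype=bool); a[:2, :2] = True
b = np.zeros((4, 4), dtype=bool); b[:2, :2] = True; b[0, 2] = True
c = np.zeros((4, 4), dtype=bool); c[2:, 2:] = True
kept = simple_mask_nms([a, b, c], np.array([0.9, 0.8, 0.7]))
# a and b overlap heavily (IoU = 4/5 = 0.8 > 0.7), so b is suppressed.
```

The real implementation additionally filters by score and by an "inner" overlap criterion that catches masks mostly contained inside larger ones.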
new_box.append(min(width*ratio,max_width-new_box[0])) new_box.append(min(height*ratio,max_height-new_box[1])) return new_box def sample_points(masks, enlarge_bbox,positive_num=1,negtive_num=40): ex, ey, ewidth, eheight = enlarge_bbox positive_count = positive_num negtive_count = negtive_num output_points = [] while True: x = int(np.random.uniform(ex, ex + ewidth)) y = int(np.random.uniform(ey, ey + eheight)) if masks[y][x]==True and positive_count>0: output_points.append((x,y,1)) positive_count-=1 elif masks[y][x]==False and negtive_count>0: output_points.append((x,y,0)) negtive_count-=1 if positive_count == 0 and negtive_count == 0: break return output_points def sample_points_from_mask(mask): # Get indices of all True values true_indices = np.argwhere(mask) # Check if there are any True values if true_indices.size == 0: raise ValueError("The mask does not contain any True values.") # Randomly select a point from True value indices random_index = np.random.choice(len(true_indices)) sample_point = true_indices[random_index] return tuple(sample_point) def search_new_obj(masks_from_prev, mask_list,other_masks_list=None,mask_ratio_thresh=0,ratio=0.5, area_threash = 5000): new_mask_list = [] # Calculate mask_none, representing areas not included in any previous masks mask_none = ~masks_from_prev[0].copy()[0] for prev_mask in masks_from_prev[1:]: mask_none &= ~prev_mask[0] for mask in mask_list: seg = mask['segmentation'] if (mask_none & seg).sum()/seg.sum() > ratio and seg.sum() > area_threash: new_mask_list.append(mask) for mask in new_mask_list: mask_none &= ~mask['segmentation'] logger.info(len(new_mask_list)) logger.info("now ratio:",mask_none.sum() / (mask_none.shape[0] * mask_none.shape[1]) ) logger.info("expected ratios:",mask_ratio_thresh) if other_masks_list is not None: for mask in other_masks_list: if mask_none.sum() / (mask_none.shape[0] * mask_none.shape[1]) > mask_ratio_thresh: # Still a lot of gaps, greater than current thresh seg = mask['segmentation'] 
                if (mask_none & seg).sum() / seg.sum() > ratio and seg.sum() > area_threash:
                    new_mask_list.append(mask)
                    mask_none &= ~seg
            else:
                break
    logger.info(len(new_mask_list))
    return new_mask_list

def get_bbox_from_mask(mask):
    # Get row and column indices of non-zero elements
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    # Find min and max indices of non-zero rows and columns
    ymin, ymax = np.where(rows)[0][[0, -1]]
    xmin, xmax = np.where(cols)[0][[0, -1]]
    # Calculate width and height
    width = xmax - xmin + 1
    height = ymax - ymin + 1
    return xmin, ymin, width, height

def cal_no_mask_area_ratio(out_mask_list):
    h = out_mask_list[0].shape[1]
    w = out_mask_list[0].shape[2]
    mask_none = ~out_mask_list[0].copy()
    for prev_mask in out_mask_list[1:]:
        mask_none &= ~prev_mask
    return (mask_none.sum() / (h * w))

class Prompts:
    def __init__(self, bs: int):
        self.batch_size = bs
        self.prompts = {}
        self.obj_list = []
        self.key_frame_list = []
        self.key_frame_obj_begin_list = []

    def add(self, obj_id, frame_id, mask):
        if obj_id not in self.obj_list:
            new_obj = True
            self.prompts[obj_id] = []
            self.obj_list.append(obj_id)
        else:
            new_obj = False
        self.prompts[obj_id].append((frame_id, mask))
        if frame_id not in self.key_frame_list and new_obj:
            self.key_frame_list.append(frame_id)
            self.key_frame_obj_begin_list.append(obj_id)
            logger.info(f"key_frame_obj_begin_list: {self.key_frame_obj_begin_list}")

    def get_obj_num(self):
        return len(self.obj_list)

    def __len__(self):
        # Fixed: compare the number of objects (not the list object itself) with the batch size.
        if len(self.obj_list) % self.batch_size == 0:
            return len(self.obj_list) // self.batch_size
        else:
            return len(self.obj_list) // self.batch_size + 1

    def __iter__(self):
        self.start_idx = 0
        self.iter_frameindex = 0
        return self

    def __next__(self):
        if self.start_idx < len(self.obj_list):
            if self.iter_frameindex == len(self.key_frame_list) - 1:
                end_idx = min(self.start_idx + self.batch_size, len(self.obj_list))
            else:
                if self.start_idx + self.batch_size < self.key_frame_obj_begin_list[self.iter_frameindex + 1]:
                    end_idx = self.start_idx + self.batch_size
                else:
                    end_idx = self.key_frame_obj_begin_list[self.iter_frameindex + 1]
                    self.iter_frameindex += 1
            batch_keys = self.obj_list[self.start_idx:end_idx]
            batch_prompts = {key: self.prompts[key] for key in batch_keys}
            self.start_idx = end_idx
            return batch_prompts
        else:
            raise StopIteration

def get_video_segments(prompts_loader, predictor, inference_state, final_output=False):
    video_segments = {}
    for batch_prompts in tqdm(prompts_loader, desc="processing prompts\n"):
        predictor.reset_state(inference_state)
        for id, prompt_list in batch_prompts.items():
            for prompt in prompt_list:
                _, out_obj_ids, out_mask_logits = predictor.add_new_mask(
                    inference_state=inference_state,
                    frame_idx=prompt[0],
                    obj_id=id,
                    mask=prompt[1],
                )
        for out_frame_idx, out_obj_ids, out_mask_logits in predictor.propagate_in_video(inference_state):
            if out_frame_idx not in video_segments:
                video_segments[out_frame_idx] = {}
            for i, out_obj_id in enumerate(out_obj_ids):
                video_segments[out_frame_idx][out_obj_id] = (out_mask_logits[i] > 0.0).cpu().numpy()
        if final_output:
            for out_frame_idx, out_obj_ids, out_mask_logits in predictor.propagate_in_video(inference_state, reverse=True):
                for i, out_obj_id in enumerate(out_obj_ids):
                    video_segments[out_frame_idx][out_obj_id] = (out_mask_logits[i] > 0.0).cpu().numpy()
    return video_segments

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("--video_path", type=str, required=True)
parser.add_argument("--output_dir",type=str,required=True) parser.add_argument("--level",choices=['default','small','middle','large']) parser.add_argument("--batch_size",type=int,default=20) parser.add_argument("--sam1_checkpoint",type=str,default="/home/lff/bigdata1/cjw/checkpoints/sam/sam_vit_h_4b8939.pth") parser.add_argument("--sam2_checkpoint",type=str,default="/home/lff/bigdata1/cjw/checkpoints/sam2/sam2_hiera_large.pt") parser.add_argument("--detect_stride",type=int,default=10) parser.add_argument("--use_other_level",type=int,default=1) parser.add_argument("--postnms",type=int,default=1) parser.add_argument("--pred_iou_thresh",type=float,default=0.7) parser.add_argument("--box_nms_thresh",type=float,default=0.7) parser.add_argument("--stability_score_thresh",type=float,default=0.85) parser.add_argument("--reverse", action="store_true") level_dict = { "default": 0, "small": 1, "middle": 2, "large": 3 } args = parser.parse_args() logger.add(os.path.join(args.output_dir,f'{args.level}.log'), rotation="500 MB") logger.info(args) video_dir = args.video_path level = args.level base_dir = args.output_dir ##### load Sam2 and Sam1 Model ##### sam2_checkpoint = args.sam2_checkpoint model_cfg = "sam2_hiera_l.yaml" predictor = build_sam2_video_predictor(model_cfg, sam2_checkpoint) sam2 = build_sam2(model_cfg, sam2_checkpoint, device='cuda', apply_postprocessing=False) sam_ckpt_path = args.sam1_checkpoint sam = sam_model_registry["vit_h"](checkpoint=sam_ckpt_path).to('cuda') mask_generator = SamAutomaticMaskGenerator( model=sam, points_per_side=32, pred_iou_thresh=args.pred_iou_thresh, box_nms_thresh=args.box_nms_thresh, stability_score_thresh=args.stability_score_thresh, crop_n_layers=1, crop_n_points_downscale_factor=1, min_mask_region_area=100, ) # scan all the JPEG frame names in this directory frame_names = [ p for p in os.listdir(video_dir) if os.path.splitext(p)[-1] in [".jpg", ".jpeg", ".JPG", ".JPEG", ".png"] ] try: frame_names.sort(key=lambda p: 
int(os.path.splitext(p)[0]), reverse=args.reverse) except: frame_names.sort(key=lambda p: os.path.splitext(p)[0], reverse=args.reverse) now_frame = 0 inference_state = predictor.init_state(video_path=video_dir) masks_from_prev = [] sum_id = 0 # Record total number of objects prompts_loader = Prompts(bs=args.batch_size) # hold all the clicks we add for visualization while True: logger.info(f"frame: {now_frame}") sum_id = prompts_loader.get_obj_num() image_path = os.path.join(video_dir,frame_names[now_frame]) image = cv2.imread(image_path) image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) # resize if the input is too large: orig_h, orig_w = image.shape[:2] if orig_h > 1080: logger.info("Resizing original image to 1080P...") scale = 1080 / orig_h h = int(orig_h * scale) w = int(orig_w * scale) image = cv2.resize(image, (w, h)) # Generate only large masks # masks_l = mask_generator.generate_l(image) all_masks = mask_generator.generate(image) masks = all_masks[level_dict[args.level]] # masks_l = mask_generator.generate(image) if args.postnms: # # Pass masks_l directly, no need to wrap in tuple # # masks_l = masks_update(masks_l, iou_thr=0.8, score_thr=0.7, inner_thr=0.5)[0] masks = masks_update(masks, iou_thr=0.8, score_thr=0.7, inner_thr=0.5)[0] # Use large level masks # masks = masks_l other_masks = None if not args.use_other_level: other_masks = None if now_frame == 0: # first frame ann_obj_id_list = range(len(masks)) for ann_obj_id in tqdm(ann_obj_id_list): seg = masks[ann_obj_id]['segmentation'] prompts_loader.add(ann_obj_id,0,seg) else: new_mask_list = search_new_obj(masks_from_prev, masks, other_masks,mask_ratio_thresh) logger.info(f"number of new obj: {len(new_mask_list)}") for id,mask in enumerate(masks_from_prev): if mask.sum() == 0: continue prompts_loader.add(id,now_frame,mask[0]) for i in range(len(new_mask_list)): new_mask = new_mask_list[i]['segmentation'] prompts_loader.add(sum_id+i,now_frame,new_mask) logger.info(f"obj num: {prompts_loader.get_obj_num()}") 
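The new-object search driving this loop (`search_new_obj`) reduces to a coverage-ratio test: a candidate mask counts as "new" when most of its area lies in the region not covered by any tracked mask, and once accepted it claims that area. A toy sketch of the idea with made-up masks (`find_new_masks` is an illustrative name, not the repository's API):

```python
import numpy as np

def find_new_masks(tracked, candidates, ratio=0.5, min_area=2):
    """Return candidates that mostly cover area no tracked mask covers."""
    h, w = tracked[0].shape if tracked else candidates[0].shape
    uncovered = np.ones((h, w), dtype=bool)
    for m in tracked:
        uncovered &= ~m
    new = []
    for cand in candidates:
        area = cand.sum()
        if area >= min_area and (uncovered & cand).sum() / area > ratio:
            new.append(cand)
            uncovered &= ~cand  # claim the area so later candidates can't reuse it
    return new

tracked = [np.zeros((4, 4), dtype=bool)]
tracked[0][:2, :] = True                                             # top half tracked
cand_dup = np.zeros((4, 4), dtype=bool); cand_dup[:2, :2] = True     # inside tracked area
cand_new = np.zeros((4, 4), dtype=bool); cand_new[2:, :] = True      # uncovered bottom half
new = find_new_masks(tracked, [cand_dup, cand_new])
# Only the bottom-half candidate survives the coverage-ratio test.
```

The real implementation layers a second pass over other-level masks gated by the no-mask-area threshold, but the acceptance criterion is the same ratio test.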
if now_frame==0 or len(new_mask_list)!=0: video_segments = get_video_segments(prompts_loader,predictor,inference_state) vis_frame_stride = args.detect_stride max_area_no_mask = (0,-1) for out_frame_idx in tqdm(range(0, len(frame_names), vis_frame_stride)): if out_frame_idx < now_frame: continue out_mask_list = [] for out_obj_id, out_mask in video_segments[out_frame_idx].items(): out_mask_list.append(out_mask) no_mask_ratio = cal_no_mask_area_ratio(out_mask_list) if now_frame == out_frame_idx: mask_ratio_thresh = no_mask_ratio logger.info(f"mask_ratio_thresh: {mask_ratio_thresh}") if no_mask_ratio > mask_ratio_thresh + 0.01 and out_frame_idx > now_frame: masks_from_prev = out_mask_list max_area_no_mask = (no_mask_ratio, out_frame_idx) logger.info(max_area_no_mask) break if max_area_no_mask[1] == -1: break logger.info("max_area_no_mask:", max_area_no_mask) now_frame = max_area_no_mask[1] ###### Final output ###### save_dir = os.path.join(base_dir,level,"final-output") os.makedirs(save_dir, exist_ok=True) video_segments = get_video_segments(prompts_loader,predictor,inference_state,final_output=True) for out_frame_idx in tqdm(range(0, len(frame_names), 1)): out_mask_list = [] for out_obj_id, out_mask in video_segments[out_frame_idx].items(): out_mask_list.append(out_mask) no_mask_ratio = cal_no_mask_area_ratio(out_mask_list) logger.info(no_mask_ratio) save_masks(out_mask_list, out_frame_idx,save_dir) save_masks_npy(out_mask_list, out_frame_idx,save_dir) ###### Generate Visualization Frames ###### logger.info("Start generating visualization frames...") vis_save_dir = os.path.join(base_dir,level,'visualization','full-mask-npy') os.makedirs(vis_save_dir,exist_ok=True) frame_save_dir = os.path.join(base_dir,level,'visualization','frames') os.makedirs(frame_save_dir, exist_ok=True) # Read all npy files npy_name_list = [] for name in os.listdir(save_dir): if 'npy' in name: npy_name_list.append(name) npy_name_list.sort() logger.info(f"Found {len(npy_name_list)} npy files") 
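The visualization stage that follows assigns each object ID a random color kept at a minimum Euclidean distance from all previously chosen colors. A simplified sketch of that rejection-sampling idea (`distinct_colors` and its parameters are illustrative names, not the repository's API):

```python
import random
import numpy as np

def distinct_colors(n, min_dist=70.0, max_tries=100, seed=0):
    """Rejection-sample n RGB colors so every pair is at least min_dist
    apart in RGB space; after max_tries failed draws, accept the last draw."""
    rng = random.Random(seed)
    colors = []
    for _ in range(n):
        for _ in range(max_tries):
            c = tuple(rng.randint(1, 255) for _ in range(3))
            if all(np.linalg.norm(np.subtract(c, p)) >= min_dist for p in colors):
                break
        colors.append(c)  # falls back to the last candidate if none qualified
    return colors

cols = distinct_colors(8)
```

Seeding makes the palette reproducible across runs, which matters here because the same colors must be reused for every frame of the same scene.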
npy_list = [np.load(os.path.join(save_dir,name)) for name in npy_name_list] image_list = [Image.open(os.path.join(video_dir,name)) for name in frame_names] assert len(npy_list) == len(image_list), "Number of npy files does not match number of images" logger.info(f"Processing {len(npy_list)} frames in total") # Generate random colors def generate_random_colors(num_colors): colors = [] for _ in range(num_colors): reroll = True iter_cnt = 0 while reroll and iter_cnt < 100: iter_cnt += 1 reroll = False color = tuple(random.randint(1, 255) for _ in range(3)) for selected_color in colors: if np.linalg.norm(np.array(color) - np.array(selected_color)) < 70: reroll = True break colors.append(color) return colors num_masks = max(len(masks) for masks in npy_list) colors = generate_random_colors(num_masks) post_colors = [(0, 0, 0)] + colors post_colors = np.array(post_colors) # [num_masks, 3] np.save(os.path.join(base_dir, "colors.npy"), post_colors) # Only process first and last frames # frames_to_process = [0, -1] # Indices for first and last frames for frame_idx in range(len(frame_names)): # for frame_idx in frames_to_process: masks = npy_list[frame_idx] image = image_list[frame_idx] image_np = np.array(image) mask_combined = np.zeros_like(image_np, dtype=np.uint8) for mask_id, mask in enumerate(masks): mask = mask.squeeze(0) mask_area = mask > 0 mask_combined[mask_area, :] = colors[mask_id] # Blend original image with colored mask mask_combined = np.clip(mask_combined, 0, 255) # blended_image = cv2.addWeighted(image_np, 0.7, mask_combined, 0.3, 0) blended_image = mask_combined # change the save path frame_name = frame_names[frame_idx] frame_save_dir = base_dir output_path = os.path.join(frame_save_dir, frame_name) Image.fromarray(blended_image).save(output_path) logger.info(f"Frame saved to: {output_path}") ================================================ FILE: auto-seg/sam2/__init__.py ================================================ # Copyright (c) Meta Platforms, Inc. 
and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. from hydra import initialize_config_module initialize_config_module("sam2_configs", version_base="1.2") ================================================ FILE: auto-seg/sam2/automatic_mask_generator.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. # Adapted from https://github.com/facebookresearch/segment-anything/blob/main/segment_anything/automatic_mask_generator.py from typing import Any, Dict, List, Optional, Tuple import numpy as np import torch from torchvision.ops.boxes import batched_nms, box_area # type: ignore from sam2.modeling.sam2_base import SAM2Base from sam2.sam2_image_predictor import SAM2ImagePredictor from sam2.utils.amg import ( area_from_rle, batch_iterator, batched_mask_to_box, box_xyxy_to_xywh, build_all_layer_point_grids, calculate_stability_score, coco_encode_rle, generate_crop_boxes, is_box_near_crop_edge, mask_to_rle_pytorch, MaskData, remove_small_regions, rle_to_mask, uncrop_boxes_xyxy, uncrop_masks, uncrop_points, ) class SAM2AutomaticMaskGenerator: def __init__( self, model: SAM2Base, points_per_side: Optional[int] = 32, points_per_batch: int = 64, pred_iou_thresh: float = 0.8, stability_score_thresh: float = 0.95, stability_score_offset: float = 1.0, mask_threshold: float = 0.0, box_nms_thresh: float = 0.7, crop_n_layers: int = 0, crop_nms_thresh: float = 0.7, crop_overlap_ratio: float = 512 / 1500, crop_n_points_downscale_factor: int = 1, point_grids: Optional[List[np.ndarray]] = None, min_mask_region_area: int = 0, output_mode: str = "binary_mask", use_m2m: bool = False, multimask_output: bool = True, **kwargs, ) -> None: """ Using a SAM 2 model, generates masks for the entire 
image. Generates a grid of point prompts over the image, then filters low quality and duplicate masks. The default settings are chosen for SAM 2 with a HieraL backbone. Arguments: model (Sam): The SAM 2 model to use for mask prediction. points_per_side (int or None): The number of points to be sampled along one side of the image. The total number of points is points_per_side**2. If None, 'point_grids' must provide explicit point sampling. points_per_batch (int): Sets the number of points run simultaneously by the model. Higher numbers may be faster but use more GPU memory. pred_iou_thresh (float): A filtering threshold in [0,1], using the model's predicted mask quality. stability_score_thresh (float): A filtering threshold in [0,1], using the stability of the mask under changes to the cutoff used to binarize the model's mask predictions. stability_score_offset (float): The amount to shift the cutoff when calculated the stability score. mask_threshold (float): Threshold for binarizing the mask logits box_nms_thresh (float): The box IoU cutoff used by non-maximal suppression to filter duplicate masks. crop_n_layers (int): If >0, mask prediction will be run again on crops of the image. Sets the number of layers to run, where each layer has 2**i_layer number of image crops. crop_nms_thresh (float): The box IoU cutoff used by non-maximal suppression to filter duplicate masks between different crops. crop_overlap_ratio (float): Sets the degree to which crops overlap. In the first crop layer, crops will overlap by this fraction of the image length. Later layers with more crops scale down this overlap. crop_n_points_downscale_factor (int): The number of points-per-side sampled in layer n is scaled down by crop_n_points_downscale_factor**n. point_grids (list(np.ndarray) or None): A list over explicit grids of points used for sampling, normalized to [0,1]. The nth grid in the list is used in the nth crop layer. Exclusive with points_per_side. 
min_mask_region_area (int): If >0, postprocessing will be applied to remove disconnected regions and holes in masks with area smaller than min_mask_region_area. Requires opencv. output_mode (str): The form masks are returned in. Can be 'binary_mask', 'uncompressed_rle', or 'coco_rle'. 'coco_rle' requires pycocotools. For large resolutions, 'binary_mask' may consume large amounts of memory. use_m2m (bool): Whether to add a one step refinement using previous mask predictions. multimask_output (bool): Whether to output multimask at each point of the grid. """ assert (points_per_side is None) != ( point_grids is None ), "Exactly one of points_per_side or point_grid must be provided." if points_per_side is not None: self.point_grids = build_all_layer_point_grids( points_per_side, crop_n_layers, crop_n_points_downscale_factor, ) elif point_grids is not None: self.point_grids = point_grids else: raise ValueError("Can't have both points_per_side and point_grid be None.") assert output_mode in [ "binary_mask", "uncompressed_rle", "coco_rle", ], f"Unknown output_mode {output_mode}." 
if output_mode == "coco_rle": try: from pycocotools import mask as mask_utils # type: ignore # noqa: F401 except ImportError as e: print("Please install pycocotools") raise e self.predictor = SAM2ImagePredictor( model, max_hole_area=min_mask_region_area, max_sprinkle_area=min_mask_region_area, ) self.points_per_batch = points_per_batch self.pred_iou_thresh = pred_iou_thresh self.stability_score_thresh = stability_score_thresh self.stability_score_offset = stability_score_offset self.mask_threshold = mask_threshold self.box_nms_thresh = box_nms_thresh self.crop_n_layers = crop_n_layers self.crop_nms_thresh = crop_nms_thresh self.crop_overlap_ratio = crop_overlap_ratio self.crop_n_points_downscale_factor = crop_n_points_downscale_factor self.min_mask_region_area = min_mask_region_area self.output_mode = output_mode self.use_m2m = use_m2m self.multimask_output = multimask_output @classmethod def from_pretrained(cls, model_id: str, **kwargs) -> "SAM2AutomaticMaskGenerator": """ Load a pretrained model from the Hugging Face hub. Arguments: model_id (str): The Hugging Face repository ID. **kwargs: Additional arguments to pass to the model constructor. Returns: (SAM2AutomaticMaskGenerator): The loaded model. """ from sam2.build_sam import build_sam2_hf sam_model = build_sam2_hf(model_id, **kwargs) return cls(sam_model, **kwargs) @torch.no_grad() def generate(self, image: np.ndarray) -> List[Dict[str, Any]]: """ Generates masks for the given image. Arguments: image (np.ndarray): The image to generate masks for, in HWC uint8 format. Returns: list(dict(str, any)): A list over records for masks. Each record is a dict containing the following keys: segmentation (dict(str, any) or np.ndarray): The mask. If output_mode='binary_mask', is an array of shape HW. Otherwise, is a dictionary containing the RLE. bbox (list(float)): The box around the mask, in XYWH format. area (int): The area in pixels of the mask. predicted_iou (float): The model's own prediction of the mask's quality. 
This is filtered by the pred_iou_thresh parameter. point_coords (list(list(float))): The point coordinates input to the model to generate this mask. stability_score (float): A measure of the mask's quality. This is filtered on using the stability_score_thresh parameter. crop_box (list(float)): The crop of the image used to generate the mask, given in XYWH format. """ # Generate masks mask_data = self._generate_masks(image) # Encode masks if self.output_mode == "coco_rle": mask_data["segmentations"] = [ coco_encode_rle(rle) for rle in mask_data["rles"] ] elif self.output_mode == "binary_mask": mask_data["segmentations"] = [rle_to_mask(rle) for rle in mask_data["rles"]] else: mask_data["segmentations"] = mask_data["rles"] # Write mask records curr_anns = [] for idx in range(len(mask_data["segmentations"])): ann = { "segmentation": mask_data["segmentations"][idx], "area": area_from_rle(mask_data["rles"][idx]), "bbox": box_xyxy_to_xywh(mask_data["boxes"][idx]).tolist(), "predicted_iou": mask_data["iou_preds"][idx].item(), "point_coords": [mask_data["points"][idx].tolist()], "stability_score": mask_data["stability_score"][idx].item(), "crop_box": box_xyxy_to_xywh(mask_data["crop_boxes"][idx]).tolist(), } curr_anns.append(ann) return curr_anns def _generate_masks(self, image: np.ndarray) -> MaskData: orig_size = image.shape[:2] crop_boxes, layer_idxs = generate_crop_boxes( orig_size, self.crop_n_layers, self.crop_overlap_ratio ) # Iterate over image crops data = MaskData() for crop_box, layer_idx in zip(crop_boxes, layer_idxs): crop_data = self._process_crop(image, crop_box, layer_idx, orig_size) data.cat(crop_data) # Remove duplicate masks between crops if len(crop_boxes) > 1: # Prefer masks from smaller crops scores = 1 / box_area(data["crop_boxes"]) scores = scores.to(data["boxes"].device) keep_by_nms = batched_nms( data["boxes"].float(), scores, torch.zeros_like(data["boxes"][:, 0]), # categories iou_threshold=self.crop_nms_thresh, ) data.filter(keep_by_nms) 
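Duplicate masks across overlapping crops are removed just above by NMS with `scores = 1 / box_area(crop_box)`, so when two crops produce the same mask, the copy from the smaller (finer-resolution) crop survives. A minimal greedy-NMS sketch of that preference (illustration only; the code here uses `torchvision.ops.batched_nms`, and `iou_xyxy`/`greedy_nms` below are hypothetical helpers):

```python
import numpy as np

def iou_xyxy(a, b):
    # Intersection-over-union of two boxes in XYXY format.
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x1 - x0) * max(0.0, y1 - y0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def greedy_nms(boxes, scores, iou_thresh):
    # Keep boxes in descending score order, dropping any box that
    # overlaps an already-kept box by more than iou_thresh.
    order = np.argsort(-np.asarray(scores))
    keep = []
    for i in order:
        if all(iou_xyxy(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(int(i))
    return keep

# Two near-duplicate detections; crop_boxes record which crop each came from.
boxes = [[10, 10, 50, 50], [11, 11, 51, 51]]
crop_boxes = [[0, 0, 200, 200], [0, 0, 100, 100]]  # second crop is smaller
# score = 1 / crop area, so the duplicate from the smaller crop ranks higher
scores = [1.0 / ((c[2] - c[0]) * (c[3] - c[1])) for c in crop_boxes]
kept = greedy_nms(boxes, scores, iou_thresh=0.7)
```

With these toy boxes the two detections overlap heavily, so only the one from the smaller crop is kept.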
data.to_numpy() return data def _process_crop( self, image: np.ndarray, crop_box: List[int], crop_layer_idx: int, orig_size: Tuple[int, ...], ) -> MaskData: # Crop the image and calculate embeddings x0, y0, x1, y1 = crop_box cropped_im = image[y0:y1, x0:x1, :] cropped_im_size = cropped_im.shape[:2] self.predictor.set_image(cropped_im) # Get points for this crop points_scale = np.array(cropped_im_size)[None, ::-1] points_for_image = self.point_grids[crop_layer_idx] * points_scale # Generate masks for this crop in batches data = MaskData() for (points,) in batch_iterator(self.points_per_batch, points_for_image): batch_data = self._process_batch( points, cropped_im_size, crop_box, orig_size, normalize=True ) data.cat(batch_data) del batch_data self.predictor.reset_predictor() # Remove duplicates within this crop. keep_by_nms = batched_nms( data["boxes"].float(), data["iou_preds"], torch.zeros_like(data["boxes"][:, 0]), # categories iou_threshold=self.box_nms_thresh, ) data.filter(keep_by_nms) # Return to the original image frame data["boxes"] = uncrop_boxes_xyxy(data["boxes"], crop_box) data["points"] = uncrop_points(data["points"], crop_box) data["crop_boxes"] = torch.tensor([crop_box for _ in range(len(data["rles"]))]) return data def _process_batch( self, points: np.ndarray, im_size: Tuple[int, ...], crop_box: List[int], orig_size: Tuple[int, ...], normalize=False, ) -> MaskData: orig_h, orig_w = orig_size # Run model on this batch points = torch.as_tensor( points, dtype=torch.float32, device=self.predictor.device ) in_points = self.predictor._transforms.transform_coords( points, normalize=normalize, orig_hw=im_size ) in_labels = torch.ones( in_points.shape[0], dtype=torch.int, device=in_points.device ) masks, iou_preds, low_res_masks = self.predictor._predict( in_points[:, None, :], in_labels[:, None], multimask_output=self.multimask_output, return_logits=True, ) # Serialize predictions and store in MaskData data = MaskData( masks=masks.flatten(0, 1), 
iou_preds=iou_preds.flatten(0, 1), points=points.repeat_interleave(masks.shape[1], dim=0), low_res_masks=low_res_masks.flatten(0, 1), ) del masks if not self.use_m2m: # Filter by predicted IoU if self.pred_iou_thresh > 0.0: keep_mask = data["iou_preds"] > self.pred_iou_thresh data.filter(keep_mask) # Calculate and filter by stability score data["stability_score"] = calculate_stability_score( data["masks"], self.mask_threshold, self.stability_score_offset ) if self.stability_score_thresh > 0.0: keep_mask = data["stability_score"] >= self.stability_score_thresh data.filter(keep_mask) else: # One step refinement using previous mask predictions in_points = self.predictor._transforms.transform_coords( data["points"], normalize=normalize, orig_hw=im_size ) labels = torch.ones( in_points.shape[0], dtype=torch.int, device=in_points.device ) masks, ious = self.refine_with_m2m( in_points, labels, data["low_res_masks"], self.points_per_batch ) data["masks"] = masks.squeeze(1) data["iou_preds"] = ious.squeeze(1) if self.pred_iou_thresh > 0.0: keep_mask = data["iou_preds"] > self.pred_iou_thresh data.filter(keep_mask) data["stability_score"] = calculate_stability_score( data["masks"], self.mask_threshold, self.stability_score_offset ) if self.stability_score_thresh > 0.0: keep_mask = data["stability_score"] >= self.stability_score_thresh data.filter(keep_mask) # Threshold masks and calculate boxes data["masks"] = data["masks"] > self.mask_threshold data["boxes"] = batched_mask_to_box(data["masks"]) # Filter boxes that touch crop boundaries keep_mask = ~is_box_near_crop_edge( data["boxes"], crop_box, [0, 0, orig_w, orig_h] ) if not torch.all(keep_mask): data.filter(keep_mask) # Compress to RLE data["masks"] = uncrop_masks(data["masks"], crop_box, orig_h, orig_w) data["rles"] = mask_to_rle_pytorch(data["masks"]) del data["masks"] return data @staticmethod def postprocess_small_regions( mask_data: MaskData, min_area: int, nms_thresh: float ) -> MaskData: """ Removes small 
disconnected regions and holes in masks, then reruns box NMS to remove any new duplicates. Edits mask_data in place. Requires open-cv as a dependency. """ if len(mask_data["rles"]) == 0: return mask_data # Filter small disconnected regions and holes new_masks = [] scores = [] for rle in mask_data["rles"]: mask = rle_to_mask(rle) mask, changed = remove_small_regions(mask, min_area, mode="holes") unchanged = not changed mask, changed = remove_small_regions(mask, min_area, mode="islands") unchanged = unchanged and not changed new_masks.append(torch.as_tensor(mask).unsqueeze(0)) # Give score=0 to changed masks and score=1 to unchanged masks # so NMS will prefer ones that didn't need postprocessing scores.append(float(unchanged)) # Recalculate boxes and remove any new duplicates masks = torch.cat(new_masks, dim=0) boxes = batched_mask_to_box(masks) keep_by_nms = batched_nms( boxes.float(), torch.as_tensor(scores), torch.zeros_like(boxes[:, 0]), # categories iou_threshold=nms_thresh, ) # Only recalculate RLEs for masks that have changed for i_mask in keep_by_nms: if scores[i_mask] == 0.0: mask_torch = masks[i_mask].unsqueeze(0) mask_data["rles"][i_mask] = mask_to_rle_pytorch(mask_torch)[0] mask_data["boxes"][i_mask] = boxes[i_mask] # update res directly mask_data.filter(keep_by_nms) return mask_data def refine_with_m2m(self, points, point_labels, low_res_masks, points_per_batch): new_masks = [] new_iou_preds = [] for cur_points, cur_point_labels, low_res_mask in batch_iterator( points_per_batch, points, point_labels, low_res_masks ): best_masks, best_iou_preds, _ = self.predictor._predict( cur_points[:, None, :], cur_point_labels[:, None], mask_input=low_res_mask[:, None, :], multimask_output=False, return_logits=True, ) new_masks.append(best_masks) new_iou_preds.append(best_iou_preds) masks = torch.cat(new_masks, dim=0) return masks, torch.cat(new_iou_preds, dim=0) ================================================ FILE: auto-seg/sam2/build_sam.py 
================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. import logging import torch from hydra import compose from hydra.utils import instantiate from omegaconf import OmegaConf def build_sam2( config_file, ckpt_path=None, device="cuda", mode="eval", hydra_overrides_extra=[], apply_postprocessing=True, **kwargs, ): if apply_postprocessing: hydra_overrides_extra = hydra_overrides_extra.copy() hydra_overrides_extra += [ # dynamically fall back to multi-mask if the single mask is not stable "++model.sam_mask_decoder_extra_args.dynamic_multimask_via_stability=true", "++model.sam_mask_decoder_extra_args.dynamic_multimask_stability_delta=0.05", "++model.sam_mask_decoder_extra_args.dynamic_multimask_stability_thresh=0.98", ] # Read config and init model cfg = compose(config_name=config_file, overrides=hydra_overrides_extra) OmegaConf.resolve(cfg) model = instantiate(cfg.model, _recursive_=True) _load_checkpoint(model, ckpt_path) model = model.to(device) if mode == "eval": model.eval() return model def build_sam2_video_predictor( config_file, ckpt_path=None, device="cuda", mode="eval", hydra_overrides_extra=[], apply_postprocessing=True, **kwargs, ): hydra_overrides = [ "++model._target_=sam2.sam2_video_predictor.SAM2VideoPredictor", ] if apply_postprocessing: hydra_overrides_extra = hydra_overrides_extra.copy() hydra_overrides_extra += [ # dynamically fall back to multi-mask if the single mask is not stable "++model.sam_mask_decoder_extra_args.dynamic_multimask_via_stability=true", "++model.sam_mask_decoder_extra_args.dynamic_multimask_stability_delta=0.05", "++model.sam_mask_decoder_extra_args.dynamic_multimask_stability_thresh=0.98", # binarize the sigmoid mask logits on interacted frames with clicks in the memory encoder so that the encoded masks are exactly as what users see from clicking
"++model.binarize_mask_from_pts_for_mem_enc=true", # fill small holes in the low-res masks up to `fill_hole_area` (before resizing them to the original video resolution) "++model.fill_hole_area=8", ] hydra_overrides.extend(hydra_overrides_extra) # Read config and init model cfg = compose(config_name=config_file, overrides=hydra_overrides) OmegaConf.resolve(cfg) model = instantiate(cfg.model, _recursive_=True) _load_checkpoint(model, ckpt_path) model = model.to(device) if mode == "eval": model.eval() return model def build_sam2_hf(model_id, **kwargs): from huggingface_hub import hf_hub_download model_id_to_filenames = { "facebook/sam2-hiera-tiny": ("sam2_hiera_t.yaml", "sam2_hiera_tiny.pt"), "facebook/sam2-hiera-small": ("sam2_hiera_s.yaml", "sam2_hiera_small.pt"), "facebook/sam2-hiera-base-plus": ( "sam2_hiera_b+.yaml", "sam2_hiera_base_plus.pt", ), "facebook/sam2-hiera-large": ("sam2_hiera_l.yaml", "sam2_hiera_large.pt"), } config_name, checkpoint_name = model_id_to_filenames[model_id] ckpt_path = hf_hub_download(repo_id=model_id, filename=checkpoint_name) return build_sam2(config_file=config_name, ckpt_path=ckpt_path, **kwargs) def build_sam2_video_predictor_hf(model_id, **kwargs): from huggingface_hub import hf_hub_download model_id_to_filenames = { "facebook/sam2-hiera-tiny": ("sam2_hiera_t.yaml", "sam2_hiera_tiny.pt"), "facebook/sam2-hiera-small": ("sam2_hiera_s.yaml", "sam2_hiera_small.pt"), "facebook/sam2-hiera-base-plus": ( "sam2_hiera_b+.yaml", "sam2_hiera_base_plus.pt", ), "facebook/sam2-hiera-large": ("sam2_hiera_l.yaml", "sam2_hiera_large.pt"), } config_name, checkpoint_name = model_id_to_filenames[model_id] ckpt_path = hf_hub_download(repo_id=model_id, filename=checkpoint_name) return build_sam2_video_predictor( config_file=config_name, ckpt_path=ckpt_path, **kwargs ) def _load_checkpoint(model, ckpt_path): if ckpt_path is not None: sd = torch.load(ckpt_path, map_location="cpu")["model"] missing_keys, unexpected_keys = model.load_state_dict(sd) if 
missing_keys: logging.error(missing_keys) raise RuntimeError("missing keys in checkpoint state_dict") if unexpected_keys: logging.error(unexpected_keys) raise RuntimeError("unexpected keys in checkpoint state_dict") logging.info("Loaded checkpoint successfully") ================================================ FILE: auto-seg/sam2/csrc/connected_components.cu ================================================ // Copyright (c) Meta Platforms, Inc. and affiliates. // All rights reserved. // This source code is licensed under the license found in the // LICENSE file in the root directory of this source tree. // adapted from https://github.com/zsef123/Connected_components_PyTorch // with license found in the LICENSE_cctorch file in the root directory. #include <ATen/cuda/CUDAContext.h> #include <cuda.h> #include <cuda_runtime.h> #include <torch/extension.h> #include <torch/script.h> #include <vector> // 2d #define BLOCK_ROWS 16 #define BLOCK_COLS 16 namespace cc2d { template <typename T> __device__ __forceinline__ unsigned char hasBit(T bitmap, unsigned char pos) { return (bitmap >> pos) & 1; } __device__ int32_t find(const int32_t* s_buf, int32_t n) { while (s_buf[n] != n) n = s_buf[n]; return n; } __device__ int32_t find_n_compress(int32_t* s_buf, int32_t n) { const int32_t id = n; while (s_buf[n] != n) { n = s_buf[n]; s_buf[id] = n; } return n; } __device__ void union_(int32_t* s_buf, int32_t a, int32_t b) { bool done; do { a = find(s_buf, a); b = find(s_buf, b); if (a < b) { int32_t old = atomicMin(s_buf + b, a); done = (old == b); b = old; } else if (b < a) { int32_t old = atomicMin(s_buf + a, b); done = (old == a); a = old; } else done = true; } while (!done); } __global__ void init_labeling(int32_t* label, const uint32_t W, const uint32_t H) { const uint32_t row = (blockIdx.y * blockDim.y + threadIdx.y) * 2; const uint32_t col = (blockIdx.x * blockDim.x + threadIdx.x) * 2; const uint32_t idx = row * W + col; if (row < H && col < W) label[idx] = idx; } __global__ void merge(uint8_t* img, int32_t* label, const uint32_t W, const uint32_t H) { const uint32_t row = (blockIdx.y * blockDim.y + threadIdx.y) * 2; const uint32_t col = (blockIdx.x * blockDim.x +
threadIdx.x) * 2; const uint32_t idx = row * W + col; if (row >= H || col >= W) return; uint32_t P = 0; if (img[idx]) P |= 0x777; if (row + 1 < H && img[idx + W]) P |= 0x777 << 4; if (col + 1 < W && img[idx + 1]) P |= 0x777 << 1; if (col == 0) P &= 0xEEEE; if (col + 1 >= W) P &= 0x3333; else if (col + 2 >= W) P &= 0x7777; if (row == 0) P &= 0xFFF0; if (row + 1 >= H) P &= 0xFF; if (P > 0) { // if bit 0 is set, check the top-left pixel and merge with the top-left block if (hasBit(P, 0) && img[idx - W - 1]) { union_(label, idx, idx - 2 * W - 2); // top left block } if ((hasBit(P, 1) && img[idx - W]) || (hasBit(P, 2) && img[idx - W + 1])) union_(label, idx, idx - 2 * W); // top block if (hasBit(P, 3) && img[idx + 2 - W]) union_(label, idx, idx - 2 * W + 2); // top right block if ((hasBit(P, 4) && img[idx - 1]) || (hasBit(P, 8) && img[idx + W - 1])) union_(label, idx, idx - 2); // left block } } __global__ void compression(int32_t* label, const int32_t W, const int32_t H) { const uint32_t row = (blockIdx.y * blockDim.y + threadIdx.y) * 2; const uint32_t col = (blockIdx.x * blockDim.x + threadIdx.x) * 2; const uint32_t idx = row * W + col; if (row < H && col < W) find_n_compress(label, idx); } __global__ void final_labeling( const uint8_t* img, int32_t* label, const int32_t W, const int32_t H) { const uint32_t row = (blockIdx.y * blockDim.y + threadIdx.y) * 2; const uint32_t col = (blockIdx.x * blockDim.x + threadIdx.x) * 2; const uint32_t idx = row * W + col; if (row >= H || col >= W) return; int32_t y = label[idx] + 1; if (img[idx]) label[idx] = y; else label[idx] = 0; if (col + 1 < W) { if (img[idx + 1]) label[idx + 1] = y; else label[idx + 1] = 0; if (row + 1 < H) { if (img[idx + W + 1]) label[idx + W + 1] = y; else label[idx + W + 1] = 0; } } if (row + 1 < H) { if (img[idx + W]) label[idx + W] = y; else label[idx + W] = 0; } } __global__ void init_counting( const int32_t* label, int32_t* count_init, const int32_t W, const int32_t H) { const
uint32_t row = (blockIdx.y * blockDim.y + threadIdx.y); const uint32_t col = (blockIdx.x * blockDim.x + threadIdx.x); const uint32_t idx = row * W + col; if (row >= H || col >= W) return; int32_t y = label[idx]; if (y > 0) { int32_t count_idx = y - 1; atomicAdd(count_init + count_idx, 1); } } __global__ void final_counting( const int32_t* label, const int32_t* count_init, int32_t* count_final, const int32_t W, const int32_t H) { const uint32_t row = (blockIdx.y * blockDim.y + threadIdx.y); const uint32_t col = (blockIdx.x * blockDim.x + threadIdx.x); const uint32_t idx = row * W + col; if (row >= H || col >= W) return; int32_t y = label[idx]; if (y > 0) { int32_t count_idx = y - 1; count_final[idx] = count_init[count_idx]; } else { count_final[idx] = 0; } } } // namespace cc2d std::vector<torch::Tensor> get_connected_componnets( const torch::Tensor& inputs) { AT_ASSERTM(inputs.is_cuda(), "inputs must be a CUDA tensor"); AT_ASSERTM(inputs.ndimension() == 4, "inputs must be [N, 1, H, W] shape"); AT_ASSERTM( inputs.scalar_type() == torch::kUInt8, "inputs must be a uint8 type"); const uint32_t N = inputs.size(0); const uint32_t C = inputs.size(1); const uint32_t H = inputs.size(2); const uint32_t W = inputs.size(3); AT_ASSERTM(C == 1, "inputs must be [N, 1, H, W] shape"); AT_ASSERTM((H % 2) == 0, "height must be an even number"); AT_ASSERTM((W % 2) == 0, "width must be an even number"); // labels are stored as int32 auto label_options = torch::TensorOptions().dtype(torch::kInt32).device(inputs.device()); torch::Tensor labels = torch::zeros({N, C, H, W}, label_options); torch::Tensor counts_init = torch::zeros({N, C, H, W}, label_options); torch::Tensor counts_final = torch::zeros({N, C, H, W}, label_options); dim3 grid = dim3( ((W + 1) / 2 + BLOCK_COLS - 1) / BLOCK_COLS, ((H + 1) / 2 + BLOCK_ROWS - 1) / BLOCK_ROWS); dim3 block = dim3(BLOCK_COLS, BLOCK_ROWS); dim3 grid_count = dim3((W + BLOCK_COLS) / BLOCK_COLS, (H + BLOCK_ROWS) / BLOCK_ROWS); dim3 block_count = dim3(BLOCK_COLS,
BLOCK_ROWS); cudaStream_t stream = at::cuda::getCurrentCUDAStream(); for (int n = 0; n < N; n++) { uint32_t offset = n * H * W; cc2d::init_labeling<<<grid, block, 0, stream>>>( labels.data_ptr<int32_t>() + offset, W, H); cc2d::merge<<<grid, block, 0, stream>>>( inputs.data_ptr<uint8_t>() + offset, labels.data_ptr<int32_t>() + offset, W, H); cc2d::compression<<<grid, block, 0, stream>>>( labels.data_ptr<int32_t>() + offset, W, H); cc2d::final_labeling<<<grid, block, 0, stream>>>( inputs.data_ptr<uint8_t>() + offset, labels.data_ptr<int32_t>() + offset, W, H); // get the counting of each pixel cc2d::init_counting<<<grid_count, block_count, 0, stream>>>( labels.data_ptr<int32_t>() + offset, counts_init.data_ptr<int32_t>() + offset, W, H); cc2d::final_counting<<<grid_count, block_count, 0, stream>>>( labels.data_ptr<int32_t>() + offset, counts_init.data_ptr<int32_t>() + offset, counts_final.data_ptr<int32_t>() + offset, W, H); } // returned values are [labels, counts] std::vector<torch::Tensor> outputs; outputs.push_back(labels); outputs.push_back(counts_final); return outputs; } PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { m.def( "get_connected_componnets", &get_connected_componnets, "get_connected_componnets"); } ================================================ FILE: auto-seg/sam2/modeling/__init__.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. ================================================ FILE: auto-seg/sam2/modeling/backbones/__init__.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. ================================================ FILE: auto-seg/sam2/modeling/backbones/hieradet.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree.
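hieradet.py below runs attention inside fixed-size windows, using `window_partition` / `window_unpartition` from `backbones/utils.py` (defined later in this file set). A NumPy sketch of that reshape round-trip, assuming H and W are already multiples of the window size (the real helpers pad first; these toy functions are illustrations, not the library API):

```python
import numpy as np

def window_partition(x, ws):
    # (B, H, W, C) -> (B * num_windows, ws, ws, C); assumes H and W
    # are multiples of ws (the real code pads before partitioning).
    B, H, W, C = x.shape
    x = x.reshape(B, H // ws, ws, W // ws, ws, C)
    return x.transpose(0, 1, 3, 2, 4, 5).reshape(-1, ws, ws, C)

def window_unpartition(windows, ws, H, W):
    # Inverse of window_partition: reassemble windows into (B, H, W, C).
    B = windows.shape[0] // ((H // ws) * (W // ws))
    x = windows.reshape(B, H // ws, W // ws, ws, ws, -1)
    return x.transpose(0, 1, 3, 2, 4, 5).reshape(B, H, W, -1)

x = np.arange(2 * 8 * 8 * 3).reshape(2, 8, 8, 3).astype(np.float32)
w = window_partition(x, 4)          # 2 images x 4 windows each
y = window_unpartition(w, 4, 8, 8)  # round-trips to the original
```

Since both functions are pure reshapes and transposes, the round trip is exact, which is why the real code can window and un-window activations around each attention block without loss.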
from functools import partial from typing import List, Tuple, Union import torch import torch.nn as nn import torch.nn.functional as F from sam2.modeling.backbones.utils import ( PatchEmbed, window_partition, window_unpartition, ) from sam2.modeling.sam2_utils import DropPath, MLP def do_pool(x: torch.Tensor, pool: nn.Module, norm: nn.Module = None) -> torch.Tensor: if pool is None: return x # (B, H, W, C) -> (B, C, H, W) x = x.permute(0, 3, 1, 2) x = pool(x) # (B, C, H', W') -> (B, H', W', C) x = x.permute(0, 2, 3, 1) if norm: x = norm(x) return x class MultiScaleAttention(nn.Module): def __init__( self, dim: int, dim_out: int, num_heads: int, q_pool: nn.Module = None, ): super().__init__() self.dim = dim self.dim_out = dim_out self.num_heads = num_heads self.q_pool = q_pool self.qkv = nn.Linear(dim, dim_out * 3) self.proj = nn.Linear(dim_out, dim_out) def forward(self, x: torch.Tensor) -> torch.Tensor: B, H, W, _ = x.shape # qkv with shape (B, H * W, 3, nHead, C) qkv = self.qkv(x).reshape(B, H * W, 3, self.num_heads, -1) # q, k, v with shape (B, H * W, nheads, C) q, k, v = torch.unbind(qkv, 2) # Q pooling (for downsample at stage changes) if self.q_pool: q = do_pool(q.reshape(B, H, W, -1), self.q_pool) H, W = q.shape[1:3] # downsampled shape q = q.reshape(B, H * W, self.num_heads, -1) # Torch's SDPA expects [B, nheads, H*W, C] so we transpose x = F.scaled_dot_product_attention( q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2), ) # Transpose back x = x.transpose(1, 2) x = x.reshape(B, H, W, -1) x = self.proj(x) return x class MultiScaleBlock(nn.Module): def __init__( self, dim: int, dim_out: int, num_heads: int, mlp_ratio: float = 4.0, drop_path: float = 0.0, norm_layer: Union[nn.Module, str] = "LayerNorm", q_stride: Tuple[int, int] = None, act_layer: nn.Module = nn.GELU, window_size: int = 0, ): super().__init__() if isinstance(norm_layer, str): norm_layer = partial(getattr(nn, norm_layer), eps=1e-6) self.dim = dim self.dim_out = dim_out self.norm1 = 
norm_layer(dim) self.window_size = window_size self.pool, self.q_stride = None, q_stride if self.q_stride: self.pool = nn.MaxPool2d( kernel_size=q_stride, stride=q_stride, ceil_mode=False ) self.attn = MultiScaleAttention( dim, dim_out, num_heads=num_heads, q_pool=self.pool, ) self.drop_path = DropPath(drop_path) if drop_path > 0.0 else nn.Identity() self.norm2 = norm_layer(dim_out) self.mlp = MLP( dim_out, int(dim_out * mlp_ratio), dim_out, num_layers=2, activation=act_layer, ) if dim != dim_out: self.proj = nn.Linear(dim, dim_out) def forward(self, x: torch.Tensor) -> torch.Tensor: shortcut = x # B, H, W, C x = self.norm1(x) # Skip connection if self.dim != self.dim_out: shortcut = do_pool(self.proj(x), self.pool) # Window partition window_size = self.window_size if window_size > 0: H, W = x.shape[1], x.shape[2] x, pad_hw = window_partition(x, window_size) # Window Attention + Q Pooling (if stage change) x = self.attn(x) if self.q_stride: # Shapes have changed due to Q pooling window_size = self.window_size // self.q_stride[0] H, W = shortcut.shape[1:3] pad_h = (window_size - H % window_size) % window_size pad_w = (window_size - W % window_size) % window_size pad_hw = (H + pad_h, W + pad_w) # Reverse window partition if self.window_size > 0: x = window_unpartition(x, window_size, pad_hw, (H, W)) x = shortcut + self.drop_path(x) # MLP x = x + self.drop_path(self.mlp(self.norm2(x))) return x class Hiera(nn.Module): """ Reference: https://arxiv.org/abs/2306.00989 """ def __init__( self, embed_dim: int = 96, # initial embed dim num_heads: int = 1, # initial number of heads drop_path_rate: float = 0.0, # stochastic depth q_pool: int = 3, # number of q_pool stages q_stride: Tuple[int, int] = (2, 2), # downsample stride bet. stages stages: Tuple[int, ...] 
= (2, 3, 16, 3), # blocks per stage dim_mul: float = 2.0, # dim_mul factor at stage shift head_mul: float = 2.0, # head_mul factor at stage shift window_pos_embed_bkg_spatial_size: Tuple[int, int] = (14, 14), # window size per stage, when not using global att. window_spec: Tuple[int, ...] = ( 8, 4, 14, 7, ), # global attn in these blocks global_att_blocks: Tuple[int, ...] = ( 12, 16, 20, ), return_interm_layers=True, # return feats from every stage ): super().__init__() assert len(stages) == len(window_spec) self.window_spec = window_spec depth = sum(stages) self.q_stride = q_stride self.stage_ends = [sum(stages[:i]) - 1 for i in range(1, len(stages) + 1)] assert 0 <= q_pool <= len(self.stage_ends[:-1]) self.q_pool_blocks = [x + 1 for x in self.stage_ends[:-1]][:q_pool] self.return_interm_layers = return_interm_layers self.patch_embed = PatchEmbed( embed_dim=embed_dim, ) # Which blocks have global att? self.global_att_blocks = global_att_blocks # Windowed positional embedding (https://arxiv.org/abs/2311.05613) self.window_pos_embed_bkg_spatial_size = window_pos_embed_bkg_spatial_size self.pos_embed = nn.Parameter( torch.zeros(1, embed_dim, *self.window_pos_embed_bkg_spatial_size) ) self.pos_embed_window = nn.Parameter( torch.zeros(1, embed_dim, self.window_spec[0], self.window_spec[0]) ) dpr = [ x.item() for x in torch.linspace(0, drop_path_rate, depth) ] # stochastic depth decay rule cur_stage = 1 self.blocks = nn.ModuleList() for i in range(depth): dim_out = embed_dim # lags by a block, so first block of # next stage uses an initial window size # of previous stage and final window size of current stage window_size = self.window_spec[cur_stage - 1] if self.global_att_blocks is not None: window_size = 0 if i in self.global_att_blocks else window_size if i - 1 in self.stage_ends: dim_out = int(embed_dim * dim_mul) num_heads = int(num_heads * head_mul) cur_stage += 1 block = MultiScaleBlock( dim=embed_dim, dim_out=dim_out, num_heads=num_heads, drop_path=dpr[i], 
q_stride=self.q_stride if i in self.q_pool_blocks else None, window_size=window_size, ) embed_dim = dim_out self.blocks.append(block) self.channel_list = ( [self.blocks[i].dim_out for i in self.stage_ends[::-1]] if return_interm_layers else [self.blocks[-1].dim_out] ) def _get_pos_embed(self, hw: Tuple[int, int]) -> torch.Tensor: h, w = hw window_embed = self.pos_embed_window pos_embed = F.interpolate(self.pos_embed, size=(h, w), mode="bicubic") pos_embed = pos_embed + window_embed.tile( [x // y for x, y in zip(pos_embed.shape, window_embed.shape)] ) pos_embed = pos_embed.permute(0, 2, 3, 1) return pos_embed def forward(self, x: torch.Tensor) -> List[torch.Tensor]: x = self.patch_embed(x) # x: (B, H, W, C) # Add pos embed x = x + self._get_pos_embed(x.shape[1:3]) outputs = [] for i, blk in enumerate(self.blocks): x = blk(x) if (i == self.stage_ends[-1]) or ( i in self.stage_ends and self.return_interm_layers ): feats = x.permute(0, 3, 1, 2) outputs.append(feats) return outputs ================================================ FILE: auto-seg/sam2/modeling/backbones/image_encoder.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. from typing import List, Optional import torch import torch.nn as nn import torch.nn.functional as F class ImageEncoder(nn.Module): def __init__( self, trunk: nn.Module, neck: nn.Module, scalp: int = 0, ): super().__init__() self.trunk = trunk self.neck = neck self.scalp = scalp assert ( self.trunk.channel_list == self.neck.backbone_channel_list ), f"Channel dims of trunk and neck do not match. 
Trunk: {self.trunk.channel_list}, neck: {self.neck.backbone_channel_list}" def forward(self, sample: torch.Tensor): # Forward through backbone features, pos = self.neck(self.trunk(sample)) if self.scalp > 0: # Discard the lowest resolution features features, pos = features[: -self.scalp], pos[: -self.scalp] src = features[-1] output = { "vision_features": src, "vision_pos_enc": pos, "backbone_fpn": features, } return output class FpnNeck(nn.Module): """ A modified variant of Feature Pyramid Network (FPN) neck (we remove output conv and also do bicubic interpolation similar to ViT pos embed interpolation) """ def __init__( self, position_encoding: nn.Module, d_model: int, backbone_channel_list: List[int], kernel_size: int = 1, stride: int = 1, padding: int = 0, fpn_interp_model: str = "bilinear", fuse_type: str = "sum", fpn_top_down_levels: Optional[List[int]] = None, ): """Initialize the neck :param position_encoding: the positional encoding to use :param d_model: the dimension of the model (output channels of each lateral conv) :param backbone_channel_list: channel dimensions of the backbone feature maps """ super().__init__() self.position_encoding = position_encoding self.convs = nn.ModuleList() self.backbone_channel_list = backbone_channel_list for dim in backbone_channel_list: current = nn.Sequential() current.add_module( "conv", nn.Conv2d( in_channels=dim, out_channels=d_model, kernel_size=kernel_size, stride=stride, padding=padding, ), ) self.convs.append(current) self.fpn_interp_model = fpn_interp_model assert fuse_type in ["sum", "avg"] self.fuse_type = fuse_type # levels to have top-down features in its outputs # e.g. if fpn_top_down_levels is [2, 3], then only outputs of level 2 and 3 # have top-down propagation, while outputs of level 0 and level 1 have only # lateral features from the same backbone level.
if fpn_top_down_levels is None: # default is to have top-down features on all levels fpn_top_down_levels = range(len(self.convs)) self.fpn_top_down_levels = list(fpn_top_down_levels) def forward(self, xs: List[torch.Tensor]): out = [None] * len(self.convs) pos = [None] * len(self.convs) assert len(xs) == len(self.convs) # fpn forward pass # see https://github.com/facebookresearch/detectron2/blob/main/detectron2/modeling/backbone/fpn.py prev_features = None # forward in top-down order (from low to high resolution) n = len(self.convs) - 1 for i in range(n, -1, -1): x = xs[i] lateral_features = self.convs[n - i](x) if i in self.fpn_top_down_levels and prev_features is not None: top_down_features = F.interpolate( prev_features.to(dtype=torch.float32), scale_factor=2.0, mode=self.fpn_interp_model, align_corners=( None if self.fpn_interp_model == "nearest" else False ), antialias=False, ) prev_features = lateral_features + top_down_features if self.fuse_type == "avg": prev_features /= 2 else: prev_features = lateral_features x_out = prev_features out[i] = x_out pos[i] = self.position_encoding(x_out).to(x_out.dtype) return out, pos ================================================ FILE: auto-seg/sam2/modeling/backbones/utils.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. """Some utilities for backbones, in particular for windowing""" from typing import Tuple import torch import torch.nn as nn import torch.nn.functional as F def window_partition(x, window_size): """ Partition into non-overlapping windows with padding if needed. Args: x (tensor): input tokens with [B, H, W, C]. window_size (int): window size. Returns: windows: windows after partition with [B * num_windows, window_size, window_size, C]. 
(Hp, Wp): padded height and width before partition """ B, H, W, C = x.shape pad_h = (window_size - H % window_size) % window_size pad_w = (window_size - W % window_size) % window_size if pad_h > 0 or pad_w > 0: x = F.pad(x, (0, 0, 0, pad_w, 0, pad_h)) Hp, Wp = H + pad_h, W + pad_w x = x.view(B, Hp // window_size, window_size, Wp // window_size, window_size, C) windows = ( x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) ) return windows, (Hp, Wp) def window_unpartition(windows, window_size, pad_hw, hw): """ Reverses window partition back into the original sequences, removing padding. Args: windows (tensor): input windows with [B * num_windows, window_size, window_size, C]. window_size (int): window size. pad_hw (Tuple): padded height and width (Hp, Wp). hw (Tuple): original height and width (H, W) before padding. Returns: x: unpartitioned sequences with [B, H, W, C]. """ Hp, Wp = pad_hw H, W = hw B = windows.shape[0] // (Hp * Wp // window_size // window_size) x = windows.view( B, Hp // window_size, Wp // window_size, window_size, window_size, -1 ) x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, Hp, Wp, -1) if Hp > H or Wp > W: x = x[:, :H, :W, :].contiguous() return x class PatchEmbed(nn.Module): """ Image to Patch Embedding. """ def __init__( self, kernel_size: Tuple[int, ...] = (7, 7), stride: Tuple[int, ...] = (4, 4), padding: Tuple[int, ...] = (3, 3), in_chans: int = 3, embed_dim: int = 768, ): """ Args: kernel_size (Tuple): kernel size of the projection layer. stride (Tuple): stride of the projection layer. padding (Tuple): padding size of the projection layer. in_chans (int): Number of input image channels. embed_dim (int): Patch embedding dimension.
""" super().__init__() self.proj = nn.Conv2d( in_chans, embed_dim, kernel_size=kernel_size, stride=stride, padding=padding ) def forward(self, x: torch.Tensor) -> torch.Tensor: x = self.proj(x) # B C H W -> B H W C x = x.permute(0, 2, 3, 1) return x ================================================ FILE: auto-seg/sam2/modeling/memory_attention.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. from typing import Optional import torch from torch import nn, Tensor from sam2.modeling.sam.transformer import RoPEAttention from sam2.modeling.sam2_utils import get_activation_fn, get_clones class MemoryAttentionLayer(nn.Module): def __init__( self, activation: str, cross_attention: nn.Module, d_model: int, dim_feedforward: int, dropout: float, pos_enc_at_attn: bool, pos_enc_at_cross_attn_keys: bool, pos_enc_at_cross_attn_queries: bool, self_attention: nn.Module, ): super().__init__() self.d_model = d_model self.dim_feedforward = dim_feedforward self.dropout_value = dropout self.self_attn = self_attention self.cross_attn_image = cross_attention # Implementation of Feedforward model self.linear1 = nn.Linear(d_model, dim_feedforward) self.dropout = nn.Dropout(dropout) self.linear2 = nn.Linear(dim_feedforward, d_model) self.norm1 = nn.LayerNorm(d_model) self.norm2 = nn.LayerNorm(d_model) self.norm3 = nn.LayerNorm(d_model) self.dropout1 = nn.Dropout(dropout) self.dropout2 = nn.Dropout(dropout) self.dropout3 = nn.Dropout(dropout) self.activation_str = activation self.activation = get_activation_fn(activation) # Where to add pos enc self.pos_enc_at_attn = pos_enc_at_attn self.pos_enc_at_cross_attn_queries = pos_enc_at_cross_attn_queries self.pos_enc_at_cross_attn_keys = pos_enc_at_cross_attn_keys def _forward_sa(self, tgt, query_pos): # Self-Attention tgt2 = self.norm1(tgt) q = k = tgt2 
+ query_pos if self.pos_enc_at_attn else tgt2 tgt2 = self.self_attn(q, k, v=tgt2) tgt = tgt + self.dropout1(tgt2) return tgt def _forward_ca(self, tgt, memory, query_pos, pos, num_k_exclude_rope=0): kwds = {} if num_k_exclude_rope > 0: assert isinstance(self.cross_attn_image, RoPEAttention) kwds = {"num_k_exclude_rope": num_k_exclude_rope} # Cross-Attention tgt2 = self.norm2(tgt) tgt2 = self.cross_attn_image( q=tgt2 + query_pos if self.pos_enc_at_cross_attn_queries else tgt2, k=memory + pos if self.pos_enc_at_cross_attn_keys else memory, v=memory, **kwds, ) tgt = tgt + self.dropout2(tgt2) return tgt def forward( self, tgt, memory, pos: Optional[Tensor] = None, query_pos: Optional[Tensor] = None, num_k_exclude_rope: int = 0, ) -> torch.Tensor: # Self-Attn, Cross-Attn tgt = self._forward_sa(tgt, query_pos) tgt = self._forward_ca(tgt, memory, query_pos, pos, num_k_exclude_rope) # MLP tgt2 = self.norm3(tgt) tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2)))) tgt = tgt + self.dropout3(tgt2) return tgt class MemoryAttention(nn.Module): def __init__( self, d_model: int, pos_enc_at_input: bool, layer: nn.Module, num_layers: int, batch_first: bool = True, # Do layers expect batch first input? 
): super().__init__() self.d_model = d_model self.layers = get_clones(layer, num_layers) self.num_layers = num_layers self.norm = nn.LayerNorm(d_model) self.pos_enc_at_input = pos_enc_at_input self.batch_first = batch_first def forward( self, curr: torch.Tensor, # self-attention inputs memory: torch.Tensor, # cross-attention inputs curr_pos: Optional[Tensor] = None, # pos_enc for self-attention inputs memory_pos: Optional[Tensor] = None, # pos_enc for cross-attention inputs num_obj_ptr_tokens: int = 0, # number of object pointer *tokens* ): if isinstance(curr, list): assert isinstance(curr_pos, list) assert len(curr) == len(curr_pos) == 1 curr, curr_pos = ( curr[0], curr_pos[0], ) assert ( curr.shape[1] == memory.shape[1] ), "Batch size must be the same for curr and memory" output = curr if self.pos_enc_at_input and curr_pos is not None: output = output + 0.1 * curr_pos if self.batch_first: # Convert to batch first output = output.transpose(0, 1) curr_pos = curr_pos.transpose(0, 1) memory = memory.transpose(0, 1) memory_pos = memory_pos.transpose(0, 1) for layer in self.layers: kwds = {} if isinstance(layer.cross_attn_image, RoPEAttention): kwds = {"num_k_exclude_rope": num_obj_ptr_tokens} output = layer( tgt=output, memory=memory, pos=memory_pos, query_pos=curr_pos, **kwds, ) normed_output = self.norm(output) if self.batch_first: # Convert back to seq first normed_output = normed_output.transpose(0, 1) curr_pos = curr_pos.transpose(0, 1) return normed_output ================================================ FILE: auto-seg/sam2/modeling/memory_encoder.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. 
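# NOTE (editor): the `MemoryAttentionLayer` above applies a pre-norm residual
# self-attention block followed by a pre-norm residual cross-attention block into
# the memory tokens. The NumPy sketch below illustrates only that control flow;
# it is single-head, with no dropout, RoPE, or MLP block, and every name in it is
# illustrative rather than part of this repository's API.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # single-head scaled dot-product attention
    scale = 1.0 / np.sqrt(q.shape[-1])
    return softmax(q @ k.T * scale) @ v

def layer_norm(x, eps=1e-6):
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def memory_layer(tgt, memory, query_pos, mem_pos):
    # pre-norm self-attention over the current-frame tokens
    # (positions added to q and k only, as when pos_enc_at_attn is set)
    t = layer_norm(tgt)
    tgt = tgt + attention(t + query_pos, t + query_pos, t)
    # pre-norm cross-attention from current tokens into the memory bank
    t = layer_norm(tgt)
    tgt = tgt + attention(t + query_pos, memory + mem_pos, memory)
    # the real layer ends with a residual MLP block, omitted here
    return tgt

rng = np.random.default_rng(0)
d = 8
cur = rng.standard_normal((4, d))   # 4 current-frame tokens
mem = rng.standard_normal((6, d))   # 6 memory tokens
out = memory_layer(cur, mem, rng.standard_normal((4, d)), rng.standard_normal((6, d)))
assert out.shape == (4, d)
```

# The output keeps the query sequence length (4 tokens); only the cross-attention
# reads from the 6 memory tokens, mirroring how `MemoryAttention` lets the current
# frame attend into an arbitrarily sized memory.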
import math from typing import Tuple import torch import torch.nn as nn import torch.nn.functional as F from sam2.modeling.sam2_utils import DropPath, get_clones, LayerNorm2d class MaskDownSampler(nn.Module): """ Progressively downsample a mask by total_stride, each time by stride. Note that LayerNorm is applied per *token*, like in ViT. With each downsample (by a factor stride**2), channel capacity increases by the same factor. In the end, we linearly project to embed_dim channels. """ def __init__( self, embed_dim=256, kernel_size=4, stride=4, padding=0, total_stride=16, activation=nn.GELU, ): super().__init__() num_layers = int(math.log2(total_stride) // math.log2(stride)) assert stride**num_layers == total_stride self.encoder = nn.Sequential() mask_in_chans, mask_out_chans = 1, 1 for _ in range(num_layers): mask_out_chans = mask_in_chans * (stride**2) self.encoder.append( nn.Conv2d( mask_in_chans, mask_out_chans, kernel_size=kernel_size, stride=stride, padding=padding, ) ) self.encoder.append(LayerNorm2d(mask_out_chans)) self.encoder.append(activation()) mask_in_chans = mask_out_chans self.encoder.append(nn.Conv2d(mask_out_chans, embed_dim, kernel_size=1)) def forward(self, x): return self.encoder(x) # Lightly adapted from ConvNext (https://github.com/facebookresearch/ConvNeXt) class CXBlock(nn.Module): r"""ConvNeXt Block. There are two equivalent implementations: (1) DwConv -> LayerNorm (channels_first) -> 1x1 Conv -> GELU -> 1x1 Conv; all in (N, C, H, W) (2) DwConv -> Permute to (N, H, W, C); LayerNorm (channels_last) -> Linear -> GELU -> Linear; Permute back We use (2) as we find it slightly faster in PyTorch Args: dim (int): Number of input channels. drop_path (float): Stochastic depth rate. Default: 0.0 layer_scale_init_value (float): Init value for Layer Scale. Default: 1e-6. 
""" def __init__( self, dim, kernel_size=7, padding=3, drop_path=0.0, layer_scale_init_value=1e-6, use_dwconv=True, ): super().__init__() self.dwconv = nn.Conv2d( dim, dim, kernel_size=kernel_size, padding=padding, groups=dim if use_dwconv else 1, ) # depthwise conv self.norm = LayerNorm2d(dim, eps=1e-6) self.pwconv1 = nn.Linear( dim, 4 * dim ) # pointwise/1x1 convs, implemented with linear layers self.act = nn.GELU() self.pwconv2 = nn.Linear(4 * dim, dim) self.gamma = ( nn.Parameter(layer_scale_init_value * torch.ones((dim)), requires_grad=True) if layer_scale_init_value > 0 else None ) self.drop_path = DropPath(drop_path) if drop_path > 0.0 else nn.Identity() def forward(self, x): input = x x = self.dwconv(x) x = self.norm(x) x = x.permute(0, 2, 3, 1) # (N, C, H, W) -> (N, H, W, C) x = self.pwconv1(x) x = self.act(x) x = self.pwconv2(x) if self.gamma is not None: x = self.gamma * x x = x.permute(0, 3, 1, 2) # (N, H, W, C) -> (N, C, H, W) x = input + self.drop_path(x) return x class Fuser(nn.Module): def __init__(self, layer, num_layers, dim=None, input_projection=False): super().__init__() self.proj = nn.Identity() self.layers = get_clones(layer, num_layers) if input_projection: assert dim is not None self.proj = nn.Conv2d(dim, dim, kernel_size=1) def forward(self, x): # normally x: (N, C, H, W) x = self.proj(x) for layer in self.layers: x = layer(x) return x class MemoryEncoder(nn.Module): def __init__( self, out_dim, mask_downsampler, fuser, position_encoding, in_dim=256, # in_dim of pix_feats ): super().__init__() self.mask_downsampler = mask_downsampler self.pix_feat_proj = nn.Conv2d(in_dim, in_dim, kernel_size=1) self.fuser = fuser self.position_encoding = position_encoding self.out_proj = nn.Identity() if out_dim != in_dim: self.out_proj = nn.Conv2d(in_dim, out_dim, kernel_size=1) def forward( self, pix_feat: torch.Tensor, masks: torch.Tensor, skip_mask_sigmoid: bool = False, ) -> Tuple[torch.Tensor, torch.Tensor]: ## Process masks # sigmoid, so that less 
domain shift from gt masks which are bool if not skip_mask_sigmoid: masks = F.sigmoid(masks) masks = self.mask_downsampler(masks) ## Fuse pix_feats and downsampled masks # in case the visual features are on CPU, cast them to CUDA pix_feat = pix_feat.to(masks.device) x = self.pix_feat_proj(pix_feat) x = x + masks x = self.fuser(x) x = self.out_proj(x) pos = self.position_encoding(x).to(x.dtype) return {"vision_features": x, "vision_pos_enc": [pos]} ================================================ FILE: auto-seg/sam2/modeling/position_encoding.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. import math from typing import Any, Optional, Tuple import numpy as np import torch from torch import nn class PositionEmbeddingSine(nn.Module): """ This is a more standard version of the position embedding, very similar to the one used by the Attention Is All You Need paper, generalized to work on images. 
""" def __init__( self, num_pos_feats, temperature: int = 10000, normalize: bool = True, scale: Optional[float] = None, ): super().__init__() assert num_pos_feats % 2 == 0, "Expecting even model width" self.num_pos_feats = num_pos_feats // 2 self.temperature = temperature self.normalize = normalize if scale is not None and normalize is False: raise ValueError("normalize should be True if scale is passed") if scale is None: scale = 2 * math.pi self.scale = scale self.cache = {} def _encode_xy(self, x, y): # The positions are expected to be normalized assert len(x) == len(y) and x.ndim == y.ndim == 1 x_embed = x * self.scale y_embed = y * self.scale dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device) dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats) pos_x = x_embed[:, None] / dim_t pos_y = y_embed[:, None] / dim_t pos_x = torch.stack( (pos_x[:, 0::2].sin(), pos_x[:, 1::2].cos()), dim=2 ).flatten(1) pos_y = torch.stack( (pos_y[:, 0::2].sin(), pos_y[:, 1::2].cos()), dim=2 ).flatten(1) return pos_x, pos_y @torch.no_grad() def encode_boxes(self, x, y, w, h): pos_x, pos_y = self._encode_xy(x, y) pos = torch.cat((pos_y, pos_x, h[:, None], w[:, None]), dim=1) return pos encode = encode_boxes # Backwards compatibility @torch.no_grad() def encode_points(self, x, y, labels): (bx, nx), (by, ny), (bl, nl) = x.shape, y.shape, labels.shape assert bx == by and nx == ny and bx == bl and nx == nl pos_x, pos_y = self._encode_xy(x.flatten(), y.flatten()) pos_x, pos_y = pos_x.reshape(bx, nx, -1), pos_y.reshape(by, ny, -1) pos = torch.cat((pos_y, pos_x, labels[:, :, None]), dim=2) return pos @torch.no_grad() def forward(self, x: torch.Tensor): cache_key = (x.shape[-2], x.shape[-1]) if cache_key in self.cache: return self.cache[cache_key][None].repeat(x.shape[0], 1, 1, 1) y_embed = ( torch.arange(1, x.shape[-2] + 1, dtype=torch.float32, device=x.device) .view(1, -1, 1) .repeat(x.shape[0], 1, x.shape[-1]) ) x_embed = ( torch.arange(1, 
x.shape[-1] + 1, dtype=torch.float32, device=x.device) .view(1, 1, -1) .repeat(x.shape[0], x.shape[-2], 1) ) if self.normalize: eps = 1e-6 y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device) dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats) pos_x = x_embed[:, :, :, None] / dim_t pos_y = y_embed[:, :, :, None] / dim_t pos_x = torch.stack( (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4 ).flatten(3) pos_y = torch.stack( (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4 ).flatten(3) pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2) self.cache[cache_key] = pos[0] return pos class PositionEmbeddingRandom(nn.Module): """ Positional encoding using random spatial frequencies. """ def __init__(self, num_pos_feats: int = 64, scale: Optional[float] = None) -> None: super().__init__() if scale is None or scale <= 0.0: scale = 1.0 self.register_buffer( "positional_encoding_gaussian_matrix", scale * torch.randn((2, num_pos_feats)), ) def _pe_encoding(self, coords: torch.Tensor) -> torch.Tensor: """Positionally encode points that are normalized to [0,1].""" # assuming coords are in [0, 1]^2 square and have d_1 x ... x d_n x 2 shape coords = 2 * coords - 1 coords = coords @ self.positional_encoding_gaussian_matrix coords = 2 * np.pi * coords # outputs d_1 x ... 
x d_n x C shape return torch.cat([torch.sin(coords), torch.cos(coords)], dim=-1) def forward(self, size: Tuple[int, int]) -> torch.Tensor: """Generate positional encoding for a grid of the specified size.""" h, w = size device: Any = self.positional_encoding_gaussian_matrix.device grid = torch.ones((h, w), device=device, dtype=torch.float32) y_embed = grid.cumsum(dim=0) - 0.5 x_embed = grid.cumsum(dim=1) - 0.5 y_embed = y_embed / h x_embed = x_embed / w pe = self._pe_encoding(torch.stack([x_embed, y_embed], dim=-1)) return pe.permute(2, 0, 1) # C x H x W def forward_with_coords( self, coords_input: torch.Tensor, image_size: Tuple[int, int] ) -> torch.Tensor: """Positionally encode points that are not normalized to [0,1].""" coords = coords_input.clone() coords[:, :, 0] = coords[:, :, 0] / image_size[1] coords[:, :, 1] = coords[:, :, 1] / image_size[0] return self._pe_encoding(coords.to(torch.float)) # B x N x C # Rotary Positional Encoding, adapted from: # 1. https://github.com/meta-llama/codellama/blob/main/llama/model.py # 2. https://github.com/naver-ai/rope-vit # 3. 
https://github.com/lucidrains/rotary-embedding-torch def init_t_xy(end_x: int, end_y: int): t = torch.arange(end_x * end_y, dtype=torch.float32) t_x = (t % end_x).float() t_y = torch.div(t, end_x, rounding_mode="floor").float() return t_x, t_y def compute_axial_cis(dim: int, end_x: int, end_y: int, theta: float = 10000.0): freqs_x = 1.0 / (theta ** (torch.arange(0, dim, 4)[: (dim // 4)].float() / dim)) freqs_y = 1.0 / (theta ** (torch.arange(0, dim, 4)[: (dim // 4)].float() / dim)) t_x, t_y = init_t_xy(end_x, end_y) freqs_x = torch.outer(t_x, freqs_x) freqs_y = torch.outer(t_y, freqs_y) freqs_cis_x = torch.polar(torch.ones_like(freqs_x), freqs_x) freqs_cis_y = torch.polar(torch.ones_like(freqs_y), freqs_y) return torch.cat([freqs_cis_x, freqs_cis_y], dim=-1) def reshape_for_broadcast(freqs_cis: torch.Tensor, x: torch.Tensor): ndim = x.ndim assert 0 <= 1 < ndim assert freqs_cis.shape == (x.shape[-2], x.shape[-1]) shape = [d if i >= ndim - 2 else 1 for i, d in enumerate(x.shape)] return freqs_cis.view(*shape) def apply_rotary_enc( xq: torch.Tensor, xk: torch.Tensor, freqs_cis: torch.Tensor, repeat_freqs_k: bool = False, ): xq_ = torch.view_as_complex(xq.float().reshape(*xq.shape[:-1], -1, 2)) xk_ = ( torch.view_as_complex(xk.float().reshape(*xk.shape[:-1], -1, 2)) if xk.shape[-2] != 0 else None ) freqs_cis = reshape_for_broadcast(freqs_cis, xq_) xq_out = torch.view_as_real(xq_ * freqs_cis).flatten(3) if xk_ is None: # no keys to rotate, due to dropout return xq_out.type_as(xq).to(xq.device), xk # repeat freqs along seq_len dim to match k seq_len if repeat_freqs_k: r = xk_.shape[-2] // xq_.shape[-2] if freqs_cis.is_cuda: freqs_cis = freqs_cis.repeat(*([1] * (freqs_cis.ndim - 2)), r, 1) else: # torch.repeat on complex numbers may not be supported on non-CUDA devices # (freqs_cis has 4 dims and we repeat on dim 2) so we use expand + flatten freqs_cis = freqs_cis.unsqueeze(2).expand(-1, -1, r, -1, -1).flatten(2, 3) xk_out = torch.view_as_real(xk_ * freqs_cis).flatten(3) 
return xq_out.type_as(xq).to(xq.device), xk_out.type_as(xk).to(xk.device) ================================================ FILE: auto-seg/sam2/modeling/sam/__init__.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. ================================================ FILE: auto-seg/sam2/modeling/sam/mask_decoder.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. from typing import List, Optional, Tuple, Type import torch from torch import nn from sam2.modeling.sam2_utils import LayerNorm2d, MLP class MaskDecoder(nn.Module): def __init__( self, *, transformer_dim: int, transformer: nn.Module, num_multimask_outputs: int = 3, activation: Type[nn.Module] = nn.GELU, iou_head_depth: int = 3, iou_head_hidden_dim: int = 256, use_high_res_features: bool = False, iou_prediction_use_sigmoid=False, dynamic_multimask_via_stability=False, dynamic_multimask_stability_delta=0.05, dynamic_multimask_stability_thresh=0.98, pred_obj_scores: bool = False, pred_obj_scores_mlp: bool = False, use_multimask_token_for_obj_ptr: bool = False, ) -> None: """ Predicts masks given an image and prompt embeddings, using a transformer architecture. 
Arguments: transformer_dim (int): the channel dimension of the transformer transformer (nn.Module): the transformer used to predict masks num_multimask_outputs (int): the number of masks to predict when disambiguating masks activation (nn.Module): the type of activation to use when upscaling masks iou_head_depth (int): the depth of the MLP used to predict mask quality iou_head_hidden_dim (int): the hidden dimension of the MLP used to predict mask quality """ super().__init__() self.transformer_dim = transformer_dim self.transformer = transformer self.num_multimask_outputs = num_multimask_outputs self.iou_token = nn.Embedding(1, transformer_dim) self.num_mask_tokens = num_multimask_outputs + 1 self.mask_tokens = nn.Embedding(self.num_mask_tokens, transformer_dim) self.pred_obj_scores = pred_obj_scores if self.pred_obj_scores: self.obj_score_token = nn.Embedding(1, transformer_dim) self.use_multimask_token_for_obj_ptr = use_multimask_token_for_obj_ptr self.output_upscaling = nn.Sequential( nn.ConvTranspose2d( transformer_dim, transformer_dim // 4, kernel_size=2, stride=2 ), LayerNorm2d(transformer_dim // 4), activation(), nn.ConvTranspose2d( transformer_dim // 4, transformer_dim // 8, kernel_size=2, stride=2 ), activation(), ) self.use_high_res_features = use_high_res_features if use_high_res_features: self.conv_s0 = nn.Conv2d( transformer_dim, transformer_dim // 8, kernel_size=1, stride=1 ) self.conv_s1 = nn.Conv2d( transformer_dim, transformer_dim // 4, kernel_size=1, stride=1 ) self.output_hypernetworks_mlps = nn.ModuleList( [ MLP(transformer_dim, transformer_dim, transformer_dim // 8, 3) for i in range(self.num_mask_tokens) ] ) self.iou_prediction_head = MLP( transformer_dim, iou_head_hidden_dim, self.num_mask_tokens, iou_head_depth, sigmoid_output=iou_prediction_use_sigmoid, ) if self.pred_obj_scores: self.pred_obj_score_head = nn.Linear(transformer_dim, 1) if pred_obj_scores_mlp: self.pred_obj_score_head = MLP(transformer_dim, transformer_dim, 1, 3) # When 
outputting a single mask, optionally we can dynamically fall back to the best # multimask output token if the single mask output token gives low stability scores. self.dynamic_multimask_via_stability = dynamic_multimask_via_stability self.dynamic_multimask_stability_delta = dynamic_multimask_stability_delta self.dynamic_multimask_stability_thresh = dynamic_multimask_stability_thresh def forward( self, image_embeddings: torch.Tensor, image_pe: torch.Tensor, sparse_prompt_embeddings: torch.Tensor, dense_prompt_embeddings: torch.Tensor, multimask_output: bool, repeat_image: bool, high_res_features: Optional[List[torch.Tensor]] = None, ) -> Tuple[torch.Tensor, torch.Tensor]: """ Predict masks given image and prompt embeddings. Arguments: image_embeddings (torch.Tensor): the embeddings from the image encoder image_pe (torch.Tensor): positional encoding with the shape of image_embeddings sparse_prompt_embeddings (torch.Tensor): the embeddings of the points and boxes dense_prompt_embeddings (torch.Tensor): the embeddings of the mask inputs multimask_output (bool): Whether to return multiple masks or a single mask. 
Returns: torch.Tensor: batched predicted masks torch.Tensor: batched predictions of mask quality torch.Tensor: batched SAM token for mask output """ masks, iou_pred, mask_tokens_out, object_score_logits = self.predict_masks( image_embeddings=image_embeddings, image_pe=image_pe, sparse_prompt_embeddings=sparse_prompt_embeddings, dense_prompt_embeddings=dense_prompt_embeddings, repeat_image=repeat_image, high_res_features=high_res_features, ) # Select the correct mask or masks for output if multimask_output: masks = masks[:, 1:, :, :] iou_pred = iou_pred[:, 1:] elif self.dynamic_multimask_via_stability and not self.training: masks, iou_pred = self._dynamic_multimask_via_stability(masks, iou_pred) else: masks = masks[:, 0:1, :, :] iou_pred = iou_pred[:, 0:1] if multimask_output and self.use_multimask_token_for_obj_ptr: sam_tokens_out = mask_tokens_out[:, 1:] # [b, 3, c] shape else: # Take the mask output token. Here we *always* use the token for single mask output. # At test time, even if we track after 1-click (and using multimask_output=True), # we still take the single mask token here. The rationale is that we always track # after multiple clicks during training, so the past tokens seen during training # are always the single mask token (and we'll let it be the object-memory token). sam_tokens_out = mask_tokens_out[:, 0:1] # [b, 1, c] shape # Prepare output return masks, iou_pred, sam_tokens_out, object_score_logits def predict_masks( self, image_embeddings: torch.Tensor, image_pe: torch.Tensor, sparse_prompt_embeddings: torch.Tensor, dense_prompt_embeddings: torch.Tensor, repeat_image: bool, high_res_features: Optional[List[torch.Tensor]] = None, ) -> Tuple[torch.Tensor, torch.Tensor]: """Predicts masks. 
See 'forward' for more details.""" # Concatenate output tokens s = 0 if self.pred_obj_scores: output_tokens = torch.cat( [ self.obj_score_token.weight, self.iou_token.weight, self.mask_tokens.weight, ], dim=0, ) s = 1 else: output_tokens = torch.cat( [self.iou_token.weight, self.mask_tokens.weight], dim=0 ) output_tokens = output_tokens.unsqueeze(0).expand( sparse_prompt_embeddings.size(0), -1, -1 ) tokens = torch.cat((output_tokens, sparse_prompt_embeddings), dim=1) # Expand per-image data in batch direction to be per-mask if repeat_image: src = torch.repeat_interleave(image_embeddings, tokens.shape[0], dim=0) else: assert image_embeddings.shape[0] == tokens.shape[0] src = image_embeddings src = src + dense_prompt_embeddings assert ( image_pe.size(0) == 1 ), "image_pe should have size 1 in batch dim (from `get_dense_pe()`)" pos_src = torch.repeat_interleave(image_pe, tokens.shape[0], dim=0) b, c, h, w = src.shape # Run the transformer hs, src = self.transformer(src, pos_src, tokens) iou_token_out = hs[:, s, :] mask_tokens_out = hs[:, s + 1 : (s + 1 + self.num_mask_tokens), :] # Upscale mask embeddings and predict masks using the mask tokens src = src.transpose(1, 2).view(b, c, h, w) if not self.use_high_res_features: upscaled_embedding = self.output_upscaling(src) else: dc1, ln1, act1, dc2, act2 = self.output_upscaling feat_s0, feat_s1 = high_res_features upscaled_embedding = act1(ln1(dc1(src) + feat_s1)) upscaled_embedding = act2(dc2(upscaled_embedding) + feat_s0) hyper_in_list: List[torch.Tensor] = [] for i in range(self.num_mask_tokens): hyper_in_list.append( self.output_hypernetworks_mlps[i](mask_tokens_out[:, i, :]) ) hyper_in = torch.stack(hyper_in_list, dim=1) b, c, h, w = upscaled_embedding.shape masks = (hyper_in @ upscaled_embedding.view(b, c, h * w)).view(b, -1, h, w) # Generate mask quality predictions iou_pred = self.iou_prediction_head(iou_token_out) if self.pred_obj_scores: assert s == 1 object_score_logits = self.pred_obj_score_head(hs[:, 0, :]) 
else: # Obj scores logits - default to 10.0, i.e. assuming the object is present, sigmoid(10)=1 object_score_logits = 10.0 * iou_pred.new_ones(iou_pred.shape[0], 1) return masks, iou_pred, mask_tokens_out, object_score_logits def _get_stability_scores(self, mask_logits): """ Compute stability scores of the mask logits based on the IoU between upper and lower thresholds, similar to https://github.com/fairinternal/onevision/pull/568. """ mask_logits = mask_logits.flatten(-2) stability_delta = self.dynamic_multimask_stability_delta area_i = torch.sum(mask_logits > stability_delta, dim=-1).float() area_u = torch.sum(mask_logits > -stability_delta, dim=-1).float() stability_scores = torch.where(area_u > 0, area_i / area_u, 1.0) return stability_scores def _dynamic_multimask_via_stability(self, all_mask_logits, all_iou_scores): """ When outputting a single mask, if the stability score from the current single-mask output (based on output token 0) falls below a threshold, we instead select from multi-mask outputs (based on output token 1~3) the mask with the highest predicted IoU score. This is intended to ensure a valid mask for both clicking and tracking. 
""" # The best mask from multimask output tokens (1~3) multimask_logits = all_mask_logits[:, 1:, :, :] multimask_iou_scores = all_iou_scores[:, 1:] best_scores_inds = torch.argmax(multimask_iou_scores, dim=-1) batch_inds = torch.arange( multimask_iou_scores.size(0), device=all_iou_scores.device ) best_multimask_logits = multimask_logits[batch_inds, best_scores_inds] best_multimask_logits = best_multimask_logits.unsqueeze(1) best_multimask_iou_scores = multimask_iou_scores[batch_inds, best_scores_inds] best_multimask_iou_scores = best_multimask_iou_scores.unsqueeze(1) # The mask from singlemask output token 0 and its stability score singlemask_logits = all_mask_logits[:, 0:1, :, :] singlemask_iou_scores = all_iou_scores[:, 0:1] stability_scores = self._get_stability_scores(singlemask_logits) is_stable = stability_scores >= self.dynamic_multimask_stability_thresh # Dynamically fall back to best multimask output upon low stability scores. mask_logits_out = torch.where( is_stable[..., None, None].expand_as(singlemask_logits), singlemask_logits, best_multimask_logits, ) iou_scores_out = torch.where( is_stable.expand_as(singlemask_iou_scores), singlemask_iou_scores, best_multimask_iou_scores, ) return mask_logits_out, iou_scores_out ================================================ FILE: auto-seg/sam2/modeling/sam/prompt_encoder.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. 
from typing import Optional, Tuple, Type import torch from torch import nn from sam2.modeling.position_encoding import PositionEmbeddingRandom from sam2.modeling.sam2_utils import LayerNorm2d class PromptEncoder(nn.Module): def __init__( self, embed_dim: int, image_embedding_size: Tuple[int, int], input_image_size: Tuple[int, int], mask_in_chans: int, activation: Type[nn.Module] = nn.GELU, ) -> None: """ Encodes prompts for input to SAM's mask decoder. Arguments: embed_dim (int): The prompts' embedding dimension image_embedding_size (tuple(int, int)): The spatial size of the image embedding, as (H, W). input_image_size (tuple(int, int)): The padded size of the image as input to the image encoder, as (H, W). mask_in_chans (int): The number of hidden channels used for encoding input masks. activation (nn.Module): The activation to use when encoding input masks. """ super().__init__() self.embed_dim = embed_dim self.input_image_size = input_image_size self.image_embedding_size = image_embedding_size self.pe_layer = PositionEmbeddingRandom(embed_dim // 2) self.num_point_embeddings: int = 4 # pos/neg point + 2 box corners point_embeddings = [ nn.Embedding(1, embed_dim) for i in range(self.num_point_embeddings) ] self.point_embeddings = nn.ModuleList(point_embeddings) self.not_a_point_embed = nn.Embedding(1, embed_dim) self.mask_input_size = ( 4 * image_embedding_size[0], 4 * image_embedding_size[1], ) self.mask_downscaling = nn.Sequential( nn.Conv2d(1, mask_in_chans // 4, kernel_size=2, stride=2), LayerNorm2d(mask_in_chans // 4), activation(), nn.Conv2d(mask_in_chans // 4, mask_in_chans, kernel_size=2, stride=2), LayerNorm2d(mask_in_chans), activation(), nn.Conv2d(mask_in_chans, embed_dim, kernel_size=1), ) self.no_mask_embed = nn.Embedding(1, embed_dim) def get_dense_pe(self) -> torch.Tensor: """ Returns the positional encoding used to encode point prompts, applied to a dense set of points the shape of the image encoding. 
Returns: torch.Tensor: Positional encoding with shape 1x(embed_dim)x(embedding_h)x(embedding_w) """ return self.pe_layer(self.image_embedding_size).unsqueeze(0) def _embed_points( self, points: torch.Tensor, labels: torch.Tensor, pad: bool, ) -> torch.Tensor: """Embeds point prompts.""" points = points + 0.5 # Shift to center of pixel if pad: padding_point = torch.zeros((points.shape[0], 1, 2), device=points.device) padding_label = -torch.ones((labels.shape[0], 1), device=labels.device) points = torch.cat([points, padding_point], dim=1) labels = torch.cat([labels, padding_label], dim=1) point_embedding = self.pe_layer.forward_with_coords( points, self.input_image_size ) point_embedding[labels == -1] = 0.0 point_embedding[labels == -1] += self.not_a_point_embed.weight point_embedding[labels == 0] += self.point_embeddings[0].weight point_embedding[labels == 1] += self.point_embeddings[1].weight point_embedding[labels == 2] += self.point_embeddings[2].weight point_embedding[labels == 3] += self.point_embeddings[3].weight return point_embedding def _embed_boxes(self, boxes: torch.Tensor) -> torch.Tensor: """Embeds box prompts.""" boxes = boxes + 0.5 # Shift to center of pixel coords = boxes.reshape(-1, 2, 2) corner_embedding = self.pe_layer.forward_with_coords( coords, self.input_image_size ) corner_embedding[:, 0, :] += self.point_embeddings[2].weight corner_embedding[:, 1, :] += self.point_embeddings[3].weight return corner_embedding def _embed_masks(self, masks: torch.Tensor) -> torch.Tensor: """Embeds mask inputs.""" mask_embedding = self.mask_downscaling(masks) return mask_embedding def _get_batch_size( self, points: Optional[Tuple[torch.Tensor, torch.Tensor]], boxes: Optional[torch.Tensor], masks: Optional[torch.Tensor], ) -> int: """ Gets the batch size of the output given the batch size of the input prompts. 
""" if points is not None: return points[0].shape[0] elif boxes is not None: return boxes.shape[0] elif masks is not None: return masks.shape[0] else: return 1 def _get_device(self) -> torch.device: return self.point_embeddings[0].weight.device def forward( self, points: Optional[Tuple[torch.Tensor, torch.Tensor]], boxes: Optional[torch.Tensor], masks: Optional[torch.Tensor], ) -> Tuple[torch.Tensor, torch.Tensor]: """ Embeds different types of prompts, returning both sparse and dense embeddings. Arguments: points (tuple(torch.Tensor, torch.Tensor) or none): point coordinates and labels to embed. boxes (torch.Tensor or none): boxes to embed masks (torch.Tensor or none): masks to embed Returns: torch.Tensor: sparse embeddings for the points and boxes, with shape BxNx(embed_dim), where N is determined by the number of input points and boxes. torch.Tensor: dense embeddings for the masks, in the shape Bx(embed_dim)x(embed_H)x(embed_W) """ bs = self._get_batch_size(points, boxes, masks) sparse_embeddings = torch.empty( (bs, 0, self.embed_dim), device=self._get_device() ) if points is not None: coords, labels = points point_embeddings = self._embed_points(coords, labels, pad=(boxes is None)) sparse_embeddings = torch.cat([sparse_embeddings, point_embeddings], dim=1) if boxes is not None: box_embeddings = self._embed_boxes(boxes) sparse_embeddings = torch.cat([sparse_embeddings, box_embeddings], dim=1) if masks is not None: dense_embeddings = self._embed_masks(masks) else: dense_embeddings = self.no_mask_embed.weight.reshape(1, -1, 1, 1).expand( bs, -1, self.image_embedding_size[0], self.image_embedding_size[1] ) return sparse_embeddings, dense_embeddings ================================================ FILE: auto-seg/sam2/modeling/sam/transformer.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. 
# This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. import contextlib import math import warnings from functools import partial from typing import Tuple, Type import torch import torch.nn.functional as F from torch import nn, Tensor from sam2.modeling.position_encoding import apply_rotary_enc, compute_axial_cis from sam2.modeling.sam2_utils import MLP from sam2.utils.misc import get_sdpa_settings warnings.simplefilter(action="ignore", category=FutureWarning) # Check whether Flash Attention is available (and use it by default) OLD_GPU, USE_FLASH_ATTN, MATH_KERNEL_ON = get_sdpa_settings() # A fallback setting to allow all available kernels if Flash Attention fails ALLOW_ALL_KERNELS = False def sdp_kernel_context(dropout_p): """ Get the context for the attention scaled dot-product kernel. We use Flash Attention by default, but fall back to all available kernels if Flash Attention fails. """ if ALLOW_ALL_KERNELS: return contextlib.nullcontext() return torch.backends.cuda.sdp_kernel( enable_flash=USE_FLASH_ATTN, # if Flash attention kernel is off, then math kernel needs to be enabled enable_math=(OLD_GPU and dropout_p > 0.0) or MATH_KERNEL_ON, enable_mem_efficient=OLD_GPU, ) class TwoWayTransformer(nn.Module): def __init__( self, depth: int, embedding_dim: int, num_heads: int, mlp_dim: int, activation: Type[nn.Module] = nn.ReLU, attention_downsample_rate: int = 2, ) -> None: """ A transformer decoder that attends to an input image using queries whose positional embedding is supplied. Args: depth (int): number of layers in the transformer embedding_dim (int): the channel dimension for the input embeddings num_heads (int): the number of heads for multihead attention. 
Must divide embedding_dim mlp_dim (int): the channel dimension internal to the MLP block activation (nn.Module): the activation to use in the MLP block """ super().__init__() self.depth = depth self.embedding_dim = embedding_dim self.num_heads = num_heads self.mlp_dim = mlp_dim self.layers = nn.ModuleList() for i in range(depth): self.layers.append( TwoWayAttentionBlock( embedding_dim=embedding_dim, num_heads=num_heads, mlp_dim=mlp_dim, activation=activation, attention_downsample_rate=attention_downsample_rate, skip_first_layer_pe=(i == 0), ) ) self.final_attn_token_to_image = Attention( embedding_dim, num_heads, downsample_rate=attention_downsample_rate ) self.norm_final_attn = nn.LayerNorm(embedding_dim) def forward( self, image_embedding: Tensor, image_pe: Tensor, point_embedding: Tensor, ) -> Tuple[Tensor, Tensor]: """ Args: image_embedding (torch.Tensor): image to attend to. Should be shape B x embedding_dim x h x w for any h and w. image_pe (torch.Tensor): the positional encoding to add to the image. Must have the same shape as image_embedding. point_embedding (torch.Tensor): the embedding to add to the query points. Must have shape B x N_points x embedding_dim for any N_points. 
Returns: torch.Tensor: the processed point_embedding torch.Tensor: the processed image_embedding """ # BxCxHxW -> BxHWxC == B x N_image_tokens x C bs, c, h, w = image_embedding.shape image_embedding = image_embedding.flatten(2).permute(0, 2, 1) image_pe = image_pe.flatten(2).permute(0, 2, 1) # Prepare queries queries = point_embedding keys = image_embedding # Apply transformer blocks and final layernorm for layer in self.layers: queries, keys = layer( queries=queries, keys=keys, query_pe=point_embedding, key_pe=image_pe, ) # Apply the final attention layer from the points to the image q = queries + point_embedding k = keys + image_pe attn_out = self.final_attn_token_to_image(q=q, k=k, v=keys) queries = queries + attn_out queries = self.norm_final_attn(queries) return queries, keys class TwoWayAttentionBlock(nn.Module): def __init__( self, embedding_dim: int, num_heads: int, mlp_dim: int = 2048, activation: Type[nn.Module] = nn.ReLU, attention_downsample_rate: int = 2, skip_first_layer_pe: bool = False, ) -> None: """ A transformer block with four layers: (1) self-attention of sparse inputs, (2) cross attention of sparse inputs to dense inputs, (3) mlp block on sparse inputs, and (4) cross attention of dense inputs to sparse inputs. 
Arguments: embedding_dim (int): the channel dimension of the embeddings num_heads (int): the number of heads in the attention layers mlp_dim (int): the hidden dimension of the mlp block activation (nn.Module): the activation of the mlp block skip_first_layer_pe (bool): skip the PE on the first layer """ super().__init__() self.self_attn = Attention(embedding_dim, num_heads) self.norm1 = nn.LayerNorm(embedding_dim) self.cross_attn_token_to_image = Attention( embedding_dim, num_heads, downsample_rate=attention_downsample_rate ) self.norm2 = nn.LayerNorm(embedding_dim) self.mlp = MLP( embedding_dim, mlp_dim, embedding_dim, num_layers=2, activation=activation ) self.norm3 = nn.LayerNorm(embedding_dim) self.norm4 = nn.LayerNorm(embedding_dim) self.cross_attn_image_to_token = Attention( embedding_dim, num_heads, downsample_rate=attention_downsample_rate ) self.skip_first_layer_pe = skip_first_layer_pe def forward( self, queries: Tensor, keys: Tensor, query_pe: Tensor, key_pe: Tensor ) -> Tuple[Tensor, Tensor]: # Self attention block if self.skip_first_layer_pe: queries = self.self_attn(q=queries, k=queries, v=queries) else: q = queries + query_pe attn_out = self.self_attn(q=q, k=q, v=queries) queries = queries + attn_out queries = self.norm1(queries) # Cross attention block, tokens attending to image embedding q = queries + query_pe k = keys + key_pe attn_out = self.cross_attn_token_to_image(q=q, k=k, v=keys) queries = queries + attn_out queries = self.norm2(queries) # MLP block mlp_out = self.mlp(queries) queries = queries + mlp_out queries = self.norm3(queries) # Cross attention block, image embedding attending to tokens q = queries + query_pe k = keys + key_pe attn_out = self.cross_attn_image_to_token(q=k, k=q, v=queries) keys = keys + attn_out keys = self.norm4(keys) return queries, keys class Attention(nn.Module): """ An attention layer that allows for downscaling the size of the embedding after projection to queries, keys, and values. 
""" def __init__( self, embedding_dim: int, num_heads: int, downsample_rate: int = 1, dropout: float = 0.0, kv_in_dim: int = None, ) -> None: super().__init__() self.embedding_dim = embedding_dim self.kv_in_dim = kv_in_dim if kv_in_dim is not None else embedding_dim self.internal_dim = embedding_dim // downsample_rate self.num_heads = num_heads assert ( self.internal_dim % num_heads == 0 ), "num_heads must divide embedding_dim." self.q_proj = nn.Linear(embedding_dim, self.internal_dim) self.k_proj = nn.Linear(self.kv_in_dim, self.internal_dim) self.v_proj = nn.Linear(self.kv_in_dim, self.internal_dim) self.out_proj = nn.Linear(self.internal_dim, embedding_dim) self.dropout_p = dropout def _separate_heads(self, x: Tensor, num_heads: int) -> Tensor: b, n, c = x.shape x = x.reshape(b, n, num_heads, c // num_heads) return x.transpose(1, 2) # B x N_heads x N_tokens x C_per_head def _recombine_heads(self, x: Tensor) -> Tensor: b, n_heads, n_tokens, c_per_head = x.shape x = x.transpose(1, 2) return x.reshape(b, n_tokens, n_heads * c_per_head) # B x N_tokens x C def forward(self, q: Tensor, k: Tensor, v: Tensor) -> Tensor: # Input projections q = self.q_proj(q) k = self.k_proj(k) v = self.v_proj(v) # Separate into heads q = self._separate_heads(q, self.num_heads) k = self._separate_heads(k, self.num_heads) v = self._separate_heads(v, self.num_heads) dropout_p = self.dropout_p if self.training else 0.0 # Attention try: with sdp_kernel_context(dropout_p): out = F.scaled_dot_product_attention(q, k, v, dropout_p=dropout_p) except Exception as e: # Fall back to all kernels if the Flash attention kernel fails warnings.warn( f"Flash Attention kernel failed due to: {e}\nFalling back to all available " f"kernels for scaled_dot_product_attention (which may have a slower speed).", category=UserWarning, stacklevel=2, ) global ALLOW_ALL_KERNELS ALLOW_ALL_KERNELS = True out = F.scaled_dot_product_attention(q, k, v, dropout_p=dropout_p) out = self._recombine_heads(out) out = 
self.out_proj(out) return out class RoPEAttention(Attention): """Attention with rotary position encoding.""" def __init__( self, *args, rope_theta=10000.0, # whether to repeat q rope to match k length # this is needed for cross-attention to memories rope_k_repeat=False, feat_sizes=(32, 32), # [w, h] for stride 16 feats at 512 resolution **kwargs, ): super().__init__(*args, **kwargs) self.compute_cis = partial( compute_axial_cis, dim=self.internal_dim // self.num_heads, theta=rope_theta ) freqs_cis = self.compute_cis(end_x=feat_sizes[0], end_y=feat_sizes[1]) self.freqs_cis = freqs_cis self.rope_k_repeat = rope_k_repeat def forward( self, q: Tensor, k: Tensor, v: Tensor, num_k_exclude_rope: int = 0 ) -> Tensor: # Input projections q = self.q_proj(q) k = self.k_proj(k) v = self.v_proj(v) # Separate into heads q = self._separate_heads(q, self.num_heads) k = self._separate_heads(k, self.num_heads) v = self._separate_heads(v, self.num_heads) # Apply rotary position encoding w = h = math.sqrt(q.shape[-2]) self.freqs_cis = self.freqs_cis.to(q.device) if self.freqs_cis.shape[0] != q.shape[-2]: self.freqs_cis = self.compute_cis(end_x=w, end_y=h).to(q.device) if q.shape[-2] != k.shape[-2]: assert self.rope_k_repeat num_k_rope = k.size(-2) - num_k_exclude_rope q, k[:, :, :num_k_rope] = apply_rotary_enc( q, k[:, :, :num_k_rope], freqs_cis=self.freqs_cis, repeat_freqs_k=self.rope_k_repeat, ) dropout_p = self.dropout_p if self.training else 0.0 # Attention try: with sdp_kernel_context(dropout_p): out = F.scaled_dot_product_attention(q, k, v, dropout_p=dropout_p) except Exception as e: # Fall back to all kernels if the Flash attention kernel fails warnings.warn( f"Flash Attention kernel failed due to: {e}\nFalling back to all available " f"kernels for scaled_dot_product_attention (which may have a slower speed).", category=UserWarning, stacklevel=2, ) global ALLOW_ALL_KERNELS ALLOW_ALL_KERNELS = True out = F.scaled_dot_product_attention(q, k, v, dropout_p=dropout_p) out = 
self._recombine_heads(out) out = self.out_proj(out) return out ================================================ FILE: auto-seg/sam2/modeling/sam2_base.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. import torch import torch.distributed import torch.nn.functional as F from torch.nn.init import trunc_normal_ from sam2.modeling.sam.mask_decoder import MaskDecoder from sam2.modeling.sam.prompt_encoder import PromptEncoder from sam2.modeling.sam.transformer import TwoWayTransformer from sam2.modeling.sam2_utils import get_1d_sine_pe, MLP, select_closest_cond_frames # a large negative value as a placeholder score for missing objects NO_OBJ_SCORE = -1024.0 class SAM2Base(torch.nn.Module): def __init__( self, image_encoder, memory_attention, memory_encoder, num_maskmem=7, # default 1 input frame + 6 previous frames image_size=512, backbone_stride=16, # stride of the image backbone output sigmoid_scale_for_mem_enc=1.0, # scale factor for mask sigmoid prob sigmoid_bias_for_mem_enc=0.0, # bias factor for mask sigmoid prob # During evaluation, whether to binarize the sigmoid mask logits on interacted frames with clicks binarize_mask_from_pts_for_mem_enc=False, use_mask_input_as_output_without_sam=False, # on frames with mask input, whether to directly output the input mask without using a SAM prompt encoder + mask decoder # The maximum number of conditioning frames to participate in the memory attention (-1 means no limit; if there are more conditioning frames than this limit, # we only cross-attend to the temporally closest `max_cond_frames_in_attn` conditioning frames in the encoder when tracking each frame). This gives the model # a temporal locality when handling a large number of annotated frames (since closer frames should be more important) and also avoids GPU OOM. 
max_cond_frames_in_attn=-1, # on the first frame, whether to directly add the no-memory embedding to the image feature # (instead of using the transformer encoder) directly_add_no_mem_embed=False, # whether to use high-resolution feature maps in the SAM mask decoder use_high_res_features_in_sam=False, # whether to output multiple (3) masks for the first click on initial conditioning frames multimask_output_in_sam=False, # the minimum and maximum number of clicks to use multimask_output_in_sam (only relevant when `multimask_output_in_sam=True`; # default is 1 for both, meaning that only the first click gives multimask output; also note that a box counts as two points) multimask_min_pt_num=1, multimask_max_pt_num=1, # whether to also use multimask output for tracking (not just for the first click on initial conditioning frames; only relevant when `multimask_output_in_sam=True`) multimask_output_for_tracking=False, # Whether to use multimask tokens for obj ptr; Only relevant when both # use_obj_ptrs_in_encoder=True and multimask_output_for_tracking=True use_multimask_token_for_obj_ptr: bool = False, # whether to use sigmoid to restrict ious prediction to [0-1] iou_prediction_use_sigmoid=False, # The memory bank's temporal stride during evaluation (i.e. the `r` parameter in XMem and Cutie; XMem and Cutie use r=5). # For r>1, the (self.num_maskmem - 1) non-conditioning memory frames consist of # (self.num_maskmem - 2) nearest frames from every r-th frames, plus the last frame. 
memory_temporal_stride_for_eval=1, # if `add_all_frames_to_correct_as_cond` is True, we also append to the conditioning frame list any frame that receives a later correction click # if `add_all_frames_to_correct_as_cond` is False, we restrict the conditioning frame list to only the initial conditioning frames add_all_frames_to_correct_as_cond=False, # whether to apply non-overlapping constraints on the object masks in the memory encoder during evaluation (to avoid/alleviate superposing masks) non_overlap_masks_for_mem_enc=False, # whether to cross-attend to object pointers from other frames (based on SAM output tokens) in the encoder use_obj_ptrs_in_encoder=False, # the maximum number of object pointers from other frames in encoder cross attention (only relevant when `use_obj_ptrs_in_encoder=True`) max_obj_ptrs_in_encoder=16, # whether to add temporal positional encoding to the object pointers in the encoder (only relevant when `use_obj_ptrs_in_encoder=True`) add_tpos_enc_to_obj_ptrs=True, # whether to add an extra linear projection layer for the temporal positional encoding in the object pointers to avoid potential interference # with spatial positional encoding (only relevant when both `use_obj_ptrs_in_encoder=True` and `add_tpos_enc_to_obj_ptrs=True`) proj_tpos_enc_in_obj_ptrs=False, # whether to only attend to object pointers in the past (before the current frame) in the encoder during evaluation # (only relevant when `use_obj_ptrs_in_encoder=True`; this might avoid pointer information too far in the future from distracting the initial tracking) only_obj_ptrs_in_the_past_for_eval=False, # Whether to predict if there is an object in the frame pred_obj_scores: bool = False, # Whether to use an MLP to predict object scores pred_obj_scores_mlp: bool = False, # Only relevant if pred_obj_scores=True and use_obj_ptrs_in_encoder=True; # Whether to have a fixed no obj pointer when there is no object present # or to use it as an additive embedding with obj_ptr produced by decoder 
fixed_no_obj_ptr: bool = False, # Soft no object, i.e. mix in no_obj_ptr softly, # hope to make recovery easier if there is a mistake and mitigate accumulation of errors soft_no_obj_ptr: bool = False, use_mlp_for_obj_ptr_proj: bool = False, # extra arguments used to construct the SAM mask decoder; if not None, it should be a dict of kwargs to be passed into `MaskDecoder` class. sam_mask_decoder_extra_args=None, compile_image_encoder: bool = False, ): super().__init__() # Part 1: the image backbone self.image_encoder = image_encoder # Use level 0, 1, 2 for high-res setting, or just level 2 for the default setting self.use_high_res_features_in_sam = use_high_res_features_in_sam self.num_feature_levels = 3 if use_high_res_features_in_sam else 1 self.use_obj_ptrs_in_encoder = use_obj_ptrs_in_encoder self.max_obj_ptrs_in_encoder = max_obj_ptrs_in_encoder if use_obj_ptrs_in_encoder: # A conv layer to downsample the mask prompt to stride 4 (the same stride as # low-res SAM mask logits) and to change its scales from 0~1 to SAM logit scale, # so that it can be fed into the SAM mask decoder to generate a pointer. 
self.mask_downsample = torch.nn.Conv2d(1, 1, kernel_size=4, stride=4) self.add_tpos_enc_to_obj_ptrs = add_tpos_enc_to_obj_ptrs if proj_tpos_enc_in_obj_ptrs: assert add_tpos_enc_to_obj_ptrs # these options need to be used together self.proj_tpos_enc_in_obj_ptrs = proj_tpos_enc_in_obj_ptrs self.only_obj_ptrs_in_the_past_for_eval = only_obj_ptrs_in_the_past_for_eval # Part 2: memory attention to condition current frame's visual features # with memories (and obj ptrs) from past frames self.memory_attention = memory_attention self.hidden_dim = memory_attention.d_model # Part 3: memory encoder for the previous frame's outputs self.memory_encoder = memory_encoder self.mem_dim = self.hidden_dim if hasattr(self.memory_encoder, "out_proj") and hasattr( self.memory_encoder.out_proj, "weight" ): # if there is compression of memories along channel dim self.mem_dim = self.memory_encoder.out_proj.weight.shape[0] self.num_maskmem = num_maskmem # Number of memories accessible # Temporal encoding of the memories self.maskmem_tpos_enc = torch.nn.Parameter( torch.zeros(num_maskmem, 1, 1, self.mem_dim) ) trunc_normal_(self.maskmem_tpos_enc, std=0.02) # a single token to indicate no memory embedding from previous frames self.no_mem_embed = torch.nn.Parameter(torch.zeros(1, 1, self.hidden_dim)) self.no_mem_pos_enc = torch.nn.Parameter(torch.zeros(1, 1, self.hidden_dim)) trunc_normal_(self.no_mem_embed, std=0.02) trunc_normal_(self.no_mem_pos_enc, std=0.02) self.directly_add_no_mem_embed = directly_add_no_mem_embed # Apply sigmoid to the output raw mask logits (to turn them from # range (-inf, +inf) to range (0, 1)) before feeding them into the memory encoder self.sigmoid_scale_for_mem_enc = sigmoid_scale_for_mem_enc self.sigmoid_bias_for_mem_enc = sigmoid_bias_for_mem_enc self.binarize_mask_from_pts_for_mem_enc = binarize_mask_from_pts_for_mem_enc self.non_overlap_masks_for_mem_enc = non_overlap_masks_for_mem_enc self.memory_temporal_stride_for_eval = memory_temporal_stride_for_eval # On 
frames with mask input, whether to directly output the input mask without # using a SAM prompt encoder + mask decoder self.use_mask_input_as_output_without_sam = use_mask_input_as_output_without_sam self.multimask_output_in_sam = multimask_output_in_sam self.multimask_min_pt_num = multimask_min_pt_num self.multimask_max_pt_num = multimask_max_pt_num self.multimask_output_for_tracking = multimask_output_for_tracking self.use_multimask_token_for_obj_ptr = use_multimask_token_for_obj_ptr self.iou_prediction_use_sigmoid = iou_prediction_use_sigmoid # Part 4: SAM-style prompt encoder (for both mask and point inputs) # and SAM-style mask decoder for the final mask output self.image_size = image_size self.backbone_stride = backbone_stride self.sam_mask_decoder_extra_args = sam_mask_decoder_extra_args self.pred_obj_scores = pred_obj_scores self.pred_obj_scores_mlp = pred_obj_scores_mlp self.fixed_no_obj_ptr = fixed_no_obj_ptr self.soft_no_obj_ptr = soft_no_obj_ptr if self.fixed_no_obj_ptr: assert self.pred_obj_scores assert self.use_obj_ptrs_in_encoder if self.pred_obj_scores and self.use_obj_ptrs_in_encoder: self.no_obj_ptr = torch.nn.Parameter(torch.zeros(1, self.hidden_dim)) trunc_normal_(self.no_obj_ptr, std=0.02) self.use_mlp_for_obj_ptr_proj = use_mlp_for_obj_ptr_proj self._build_sam_heads() self.add_all_frames_to_correct_as_cond = add_all_frames_to_correct_as_cond self.max_cond_frames_in_attn = max_cond_frames_in_attn # Model compilation if compile_image_encoder: # Compile the forward function (not the full module) to allow loading checkpoints. print( "Image encoder compilation is enabled. First forward pass will be slow." ) self.image_encoder.forward = torch.compile( self.image_encoder.forward, mode="max-autotune", fullgraph=True, dynamic=False, ) @property def device(self): return next(self.parameters()).device def forward(self, *args, **kwargs): raise NotImplementedError( "Please use the corresponding methods in SAM2VideoPredictor for inference." 
"See notebooks/video_predictor_example.ipynb for an example." ) def _build_sam_heads(self): """Build SAM-style prompt encoder and mask decoder.""" self.sam_prompt_embed_dim = self.hidden_dim self.sam_image_embedding_size = self.image_size // self.backbone_stride # build PromptEncoder and MaskDecoder from SAM # (their hyperparameters like `mask_in_chans=16` are from SAM code) self.sam_prompt_encoder = PromptEncoder( embed_dim=self.sam_prompt_embed_dim, image_embedding_size=( self.sam_image_embedding_size, self.sam_image_embedding_size, ), input_image_size=(self.image_size, self.image_size), mask_in_chans=16, ) self.sam_mask_decoder = MaskDecoder( num_multimask_outputs=3, transformer=TwoWayTransformer( depth=2, embedding_dim=self.sam_prompt_embed_dim, mlp_dim=2048, num_heads=8, ), transformer_dim=self.sam_prompt_embed_dim, iou_head_depth=3, iou_head_hidden_dim=256, use_high_res_features=self.use_high_res_features_in_sam, iou_prediction_use_sigmoid=self.iou_prediction_use_sigmoid, pred_obj_scores=self.pred_obj_scores, pred_obj_scores_mlp=self.pred_obj_scores_mlp, use_multimask_token_for_obj_ptr=self.use_multimask_token_for_obj_ptr, **(self.sam_mask_decoder_extra_args or {}), ) if self.use_obj_ptrs_in_encoder: # a linear projection on SAM output tokens to turn them into object pointers self.obj_ptr_proj = torch.nn.Linear(self.hidden_dim, self.hidden_dim) if self.use_mlp_for_obj_ptr_proj: self.obj_ptr_proj = MLP( self.hidden_dim, self.hidden_dim, self.hidden_dim, 3 ) else: self.obj_ptr_proj = torch.nn.Identity() if self.proj_tpos_enc_in_obj_ptrs: # a linear projection on temporal positional encoding in object pointers to # avoid potential interference with spatial positional encoding self.obj_ptr_tpos_proj = torch.nn.Linear(self.hidden_dim, self.mem_dim) else: self.obj_ptr_tpos_proj = torch.nn.Identity() def _forward_sam_heads( self, backbone_features, point_inputs=None, mask_inputs=None, high_res_features=None, multimask_output=False, ): """ Forward SAM prompt encoders 
and mask heads. Inputs: - backbone_features: image features of [B, C, H, W] shape - point_inputs: a dictionary with "point_coords" and "point_labels", where 1) "point_coords" has [B, P, 2] shape and float32 dtype and contains the absolute pixel-unit coordinate in (x, y) format of the P input points 2) "point_labels" has shape [B, P] and int32 dtype, where 1 means positive clicks, 0 means negative clicks, and -1 means padding - mask_inputs: a mask of [B, 1, H*16, W*16] shape, float or bool, with the same spatial size as the image. - high_res_features: either 1) None or 2) a list of length 2 containing two feature maps of [B, C, 4*H, 4*W] and [B, C, 2*H, 2*W] shapes respectively, which will be used as high-resolution feature maps for the SAM decoder. - multimask_output: if it's True, we output 3 candidate masks and their 3 corresponding IoU estimates, and if it's False, we output only 1 mask and its corresponding IoU estimate. Outputs: - low_res_multimasks: [B, M, H*4, W*4] shape (where M = 3 if `multimask_output=True` and M = 1 if `multimask_output=False`), the SAM output mask logits (before sigmoid) for the low-resolution masks, with 4x the resolution (1/4 stride) of the input backbone_features. - high_res_multimasks: [B, M, H*16, W*16] shape (where M = 3 if `multimask_output=True` and M = 1 if `multimask_output=False`), upsampled from the low-resolution masks, with the same size as the image (stride is 1 pixel). - ious: [B, M] shape (where M = 3 if `multimask_output=True` and M = 1 if `multimask_output=False`), the estimated IoU of each output mask. - low_res_masks: [B, 1, H*4, W*4] shape, the best mask in `low_res_multimasks`. If `multimask_output=True`, it's the mask with the highest IoU estimate. If `multimask_output=False`, it's the same as `low_res_multimasks`. - high_res_masks: [B, 1, H*16, W*16] shape, the best mask in `high_res_multimasks`. If `multimask_output=True`, it's the mask with the highest IoU estimate. 
If `multimask_output=False`, it's the same as `high_res_multimasks`. - obj_ptr: [B, C] shape, the object pointer vector for the output mask, extracted based on the output token from the SAM mask decoder. """ B = backbone_features.size(0) device = backbone_features.device assert backbone_features.size(1) == self.sam_prompt_embed_dim assert backbone_features.size(2) == self.sam_image_embedding_size assert backbone_features.size(3) == self.sam_image_embedding_size # a) Handle point prompts if point_inputs is not None: sam_point_coords = point_inputs["point_coords"] sam_point_labels = point_inputs["point_labels"] assert sam_point_coords.size(0) == B and sam_point_labels.size(0) == B else: # If no points are provided, pad with an empty point (with label -1) sam_point_coords = torch.zeros(B, 1, 2, device=device) sam_point_labels = -torch.ones(B, 1, dtype=torch.int32, device=device) # b) Handle mask prompts if mask_inputs is not None: # If mask_inputs is provided, downsize it into low-res mask input if needed # and feed it as a dense mask prompt into the SAM mask encoder assert len(mask_inputs.shape) == 4 and mask_inputs.shape[:2] == (B, 1) if mask_inputs.shape[-2:] != self.sam_prompt_encoder.mask_input_size: sam_mask_prompt = F.interpolate( mask_inputs.float(), size=self.sam_prompt_encoder.mask_input_size, align_corners=False, mode="bilinear", antialias=True, # use antialias for downsampling ) else: sam_mask_prompt = mask_inputs else: # Otherwise, simply feed None (and SAM's prompt encoder will add # a learned `no_mask_embed` to indicate no mask input in this case). 
sam_mask_prompt = None sparse_embeddings, dense_embeddings = self.sam_prompt_encoder( points=(sam_point_coords, sam_point_labels), boxes=None, masks=sam_mask_prompt, ) ( low_res_multimasks, ious, sam_output_tokens, object_score_logits, ) = self.sam_mask_decoder( image_embeddings=backbone_features, image_pe=self.sam_prompt_encoder.get_dense_pe(), sparse_prompt_embeddings=sparse_embeddings, dense_prompt_embeddings=dense_embeddings, multimask_output=multimask_output, repeat_image=False, # the image is already batched high_res_features=high_res_features, ) if self.pred_obj_scores: is_obj_appearing = object_score_logits > 0 # Mask used for spatial memories is always a *hard* choice between obj and no obj, # consistent with the actual mask prediction low_res_multimasks = torch.where( is_obj_appearing[:, None, None], low_res_multimasks, NO_OBJ_SCORE, ) # convert masks from possibly bfloat16 (or float16) to float32 # (older PyTorch versions before 2.1 don't support `interpolate` on bf16) low_res_multimasks = low_res_multimasks.float() high_res_multimasks = F.interpolate( low_res_multimasks, size=(self.image_size, self.image_size), mode="bilinear", align_corners=False, ) sam_output_token = sam_output_tokens[:, 0] if multimask_output: # take the best mask prediction (with the highest IoU estimation) best_iou_inds = torch.argmax(ious, dim=-1) batch_inds = torch.arange(B, device=device) low_res_masks = low_res_multimasks[batch_inds, best_iou_inds].unsqueeze(1) high_res_masks = high_res_multimasks[batch_inds, best_iou_inds].unsqueeze(1) if sam_output_tokens.size(1) > 1: sam_output_token = sam_output_tokens[batch_inds, best_iou_inds] else: low_res_masks, high_res_masks = low_res_multimasks, high_res_multimasks # Extract object pointer from the SAM output token (with occlusion handling) obj_ptr = self.obj_ptr_proj(sam_output_token) if self.pred_obj_scores: # Allow *soft* no obj ptr, unlike for masks if self.soft_no_obj_ptr: # Only hard possible with gt assert not 
self.teacher_force_obj_scores_for_mem lambda_is_obj_appearing = object_score_logits.sigmoid() else: lambda_is_obj_appearing = is_obj_appearing.float() if self.fixed_no_obj_ptr: obj_ptr = lambda_is_obj_appearing * obj_ptr obj_ptr = obj_ptr + (1 - lambda_is_obj_appearing) * self.no_obj_ptr return ( low_res_multimasks, high_res_multimasks, ious, low_res_masks, high_res_masks, obj_ptr, object_score_logits, ) def _use_mask_as_output(self, backbone_features, high_res_features, mask_inputs): """ Directly turn binary `mask_inputs` into output mask logits without using SAM (same input and output shapes as in _forward_sam_heads above). """ # Use -10/+10 as logits for neg/pos pixels (very close to 0/1 in prob after sigmoid). out_scale, out_bias = 20.0, -10.0 # sigmoid(-10.0)=4.5398e-05 mask_inputs_float = mask_inputs.float() high_res_masks = mask_inputs_float * out_scale + out_bias low_res_masks = F.interpolate( high_res_masks, size=(high_res_masks.size(-2) // 4, high_res_masks.size(-1) // 4), align_corners=False, mode="bilinear", antialias=True, # use antialias for downsampling ) # a dummy IoU prediction of all 1's under mask input ious = mask_inputs.new_ones(mask_inputs.size(0), 1).float() if not self.use_obj_ptrs_in_encoder: # all zeros as a dummy object pointer (of shape [B, C]) obj_ptr = torch.zeros( mask_inputs.size(0), self.hidden_dim, device=mask_inputs.device ) else: # produce an object pointer using the SAM decoder from the mask input _, _, _, _, _, obj_ptr, _ = self._forward_sam_heads( backbone_features=backbone_features, mask_inputs=self.mask_downsample(mask_inputs_float), high_res_features=high_res_features, ) # In this method, we are treating mask_input as output, e.g. using it directly to create spatial mem; # Below, we follow the same design axiom to use mask_input to decide if obj appears or not instead of relying # on the object_scores from the SAM decoder. 
is_obj_appearing = torch.any(mask_inputs.flatten(1).float() > 0.0, dim=1) is_obj_appearing = is_obj_appearing[..., None] lambda_is_obj_appearing = is_obj_appearing.float() object_score_logits = out_scale * lambda_is_obj_appearing + out_bias if self.pred_obj_scores: if self.fixed_no_obj_ptr: obj_ptr = lambda_is_obj_appearing * obj_ptr obj_ptr = obj_ptr + (1 - lambda_is_obj_appearing) * self.no_obj_ptr return ( low_res_masks, high_res_masks, ious, low_res_masks, high_res_masks, obj_ptr, object_score_logits, ) def forward_image(self, img_batch: torch.Tensor): """Get the image feature on the input batch.""" backbone_out = self.image_encoder(img_batch) if self.use_high_res_features_in_sam: # precompute projected level 0 and level 1 features in SAM decoder # to avoid running it again on every SAM click backbone_out["backbone_fpn"][0] = self.sam_mask_decoder.conv_s0( backbone_out["backbone_fpn"][0] ) backbone_out["backbone_fpn"][1] = self.sam_mask_decoder.conv_s1( backbone_out["backbone_fpn"][1] ) return backbone_out def _prepare_backbone_features(self, backbone_out): """Prepare and flatten visual features.""" backbone_out = backbone_out.copy() assert len(backbone_out["backbone_fpn"]) == len(backbone_out["vision_pos_enc"]) assert len(backbone_out["backbone_fpn"]) >= self.num_feature_levels feature_maps = backbone_out["backbone_fpn"][-self.num_feature_levels :] vision_pos_embeds = backbone_out["vision_pos_enc"][-self.num_feature_levels :] feat_sizes = [(x.shape[-2], x.shape[-1]) for x in vision_pos_embeds] # flatten NxCxHxW to HWxNxC vision_feats = [x.flatten(2).permute(2, 0, 1) for x in feature_maps] vision_pos_embeds = [x.flatten(2).permute(2, 0, 1) for x in vision_pos_embeds] return backbone_out, vision_feats, vision_pos_embeds, feat_sizes def _prepare_memory_conditioned_features( self, frame_idx, is_init_cond_frame, current_vision_feats, current_vision_pos_embeds, feat_sizes, output_dict, num_frames, track_in_reverse=False, # tracking in reverse time order (for demo 
usage) ): """Fuse the current frame's visual feature map with previous memory.""" B = current_vision_feats[-1].size(1) # batch size on this frame C = self.hidden_dim H, W = feat_sizes[-1] # top-level (lowest-resolution) feature size device = current_vision_feats[-1].device # The case of `self.num_maskmem == 0` below is primarily used for reproducing SAM on images. # In this case, we skip the fusion with any memory. if self.num_maskmem == 0: # Disable memory and skip fusion pix_feat = current_vision_feats[-1].permute(1, 2, 0).view(B, C, H, W) return pix_feat num_obj_ptr_tokens = 0 # Step 1: condition the visual features of the current frame on previous memories if not is_init_cond_frame: # Retrieve the memories encoded with the maskmem backbone to_cat_memory, to_cat_memory_pos_embed = [], [] # Add conditioning frames's output first (all cond frames have t_pos=0 for # when getting temporal positional embedding below) assert len(output_dict["cond_frame_outputs"]) > 0 # Select a maximum number of temporally closest cond frames for cross attention cond_outputs = output_dict["cond_frame_outputs"] selected_cond_outputs, unselected_cond_outputs = select_closest_cond_frames( frame_idx, cond_outputs, self.max_cond_frames_in_attn ) t_pos_and_prevs = [(0, out) for out in selected_cond_outputs.values()] # Add last (self.num_maskmem - 1) frames before current frame for non-conditioning memory # the earliest one has t_pos=1 and the latest one has t_pos=self.num_maskmem-1 # We also allow taking the memory frame non-consecutively (with r>1), in which case # we take (self.num_maskmem - 2) frames among every r-th frames plus the last frame. r = self.memory_temporal_stride_for_eval for t_pos in range(1, self.num_maskmem): t_rel = self.num_maskmem - t_pos # how many frames before current frame if t_rel == 1: # for t_rel == 1, we take the last frame (regardless of r) if not track_in_reverse: # the frame immediately before this frame (i.e. 
frame_idx - 1) prev_frame_idx = frame_idx - t_rel else: # the frame immediately after this frame (i.e. frame_idx + 1) prev_frame_idx = frame_idx + t_rel else: # for t_rel >= 2, we take the memory frame from every r-th frames if not track_in_reverse: # first find the nearest frame among every r-th frames before this frame # for r=1, this would be (frame_idx - 2) prev_frame_idx = ((frame_idx - 2) // r) * r # then seek further among every r-th frames prev_frame_idx = prev_frame_idx - (t_rel - 2) * r else: # first find the nearest frame among every r-th frames after this frame # for r=1, this would be (frame_idx + 2) prev_frame_idx = -(-(frame_idx + 2) // r) * r # then seek further among every r-th frames prev_frame_idx = prev_frame_idx + (t_rel - 2) * r out = output_dict["non_cond_frame_outputs"].get(prev_frame_idx, None) if out is None: # If an unselected conditioning frame is among the last (self.num_maskmem - 1) # frames, we still attend to it as if it's a non-conditioning frame. out = unselected_cond_outputs.get(prev_frame_idx, None) t_pos_and_prevs.append((t_pos, out)) for t_pos, prev in t_pos_and_prevs: if prev is None: continue # skip padding frames # "maskmem_features" might have been offloaded to CPU in demo use cases, # so we load it back to GPU (it's a no-op if it's already on GPU). 
feats = prev["maskmem_features"].to(device, non_blocking=True) to_cat_memory.append(feats.flatten(2).permute(2, 0, 1)) # Spatial positional encoding (it might have been offloaded to CPU in eval) maskmem_enc = prev["maskmem_pos_enc"][-1].to(device) maskmem_enc = maskmem_enc.flatten(2).permute(2, 0, 1) # Temporal positional encoding maskmem_enc = ( maskmem_enc + self.maskmem_tpos_enc[self.num_maskmem - t_pos - 1] ) to_cat_memory_pos_embed.append(maskmem_enc) # Construct the list of past object pointers if self.use_obj_ptrs_in_encoder: max_obj_ptrs_in_encoder = min(num_frames, self.max_obj_ptrs_in_encoder) # First add those object pointers from selected conditioning frames # (optionally, only include object pointers in the past during evaluation) if not self.training and self.only_obj_ptrs_in_the_past_for_eval: ptr_cond_outputs = { t: out for t, out in selected_cond_outputs.items() if (t >= frame_idx if track_in_reverse else t <= frame_idx) } else: ptr_cond_outputs = selected_cond_outputs pos_and_ptrs = [ # Temporal pos encoding contains how far away each pointer is from current frame (abs(frame_idx - t), out["obj_ptr"]) for t, out in ptr_cond_outputs.items() ] # Add up to (max_obj_ptrs_in_encoder - 1) non-conditioning frames before current frame for t_diff in range(1, max_obj_ptrs_in_encoder): t = frame_idx + t_diff if track_in_reverse else frame_idx - t_diff if t < 0 or (num_frames is not None and t >= num_frames): break out = output_dict["non_cond_frame_outputs"].get( t, unselected_cond_outputs.get(t, None) ) if out is not None: pos_and_ptrs.append((t_diff, out["obj_ptr"])) # If we have at least one object pointer, add them to the across attention if len(pos_and_ptrs) > 0: pos_list, ptrs_list = zip(*pos_and_ptrs) # stack object pointers along dim=0 into [ptr_seq_len, B, C] shape obj_ptrs = torch.stack(ptrs_list, dim=0) # a temporal positional embedding based on how far each object pointer is from # the current frame (sine embedding normalized by the max pointer 
num). if self.add_tpos_enc_to_obj_ptrs: t_diff_max = max_obj_ptrs_in_encoder - 1 tpos_dim = C if self.proj_tpos_enc_in_obj_ptrs else self.mem_dim obj_pos = torch.tensor(pos_list, device=device) obj_pos = get_1d_sine_pe(obj_pos / t_diff_max, dim=tpos_dim) obj_pos = self.obj_ptr_tpos_proj(obj_pos) obj_pos = obj_pos.unsqueeze(1).expand(-1, B, self.mem_dim) else: obj_pos = obj_ptrs.new_zeros(len(pos_list), B, self.mem_dim) if self.mem_dim < C: # split a pointer into (C // self.mem_dim) tokens for self.mem_dim < C obj_ptrs = obj_ptrs.reshape( -1, B, C // self.mem_dim, self.mem_dim ) obj_ptrs = obj_ptrs.permute(0, 2, 1, 3).flatten(0, 1) obj_pos = obj_pos.repeat_interleave(C // self.mem_dim, dim=0) to_cat_memory.append(obj_ptrs) to_cat_memory_pos_embed.append(obj_pos) num_obj_ptr_tokens = obj_ptrs.shape[0] else: num_obj_ptr_tokens = 0 else: # for initial conditioning frames, encode them without using any previous memory if self.directly_add_no_mem_embed: # directly add no-mem embedding (instead of using the transformer encoder) pix_feat_with_mem = current_vision_feats[-1] + self.no_mem_embed pix_feat_with_mem = pix_feat_with_mem.permute(1, 2, 0).view(B, C, H, W) return pix_feat_with_mem # Use a dummy token on the first frame (to avoid empty memory input to tranformer encoder) to_cat_memory = [self.no_mem_embed.expand(1, B, self.mem_dim)] to_cat_memory_pos_embed = [self.no_mem_pos_enc.expand(1, B, self.mem_dim)] # Step 2: Concatenate the memories and forward through the transformer encoder memory = torch.cat(to_cat_memory, dim=0) memory_pos_embed = torch.cat(to_cat_memory_pos_embed, dim=0) pix_feat_with_mem = self.memory_attention( curr=current_vision_feats, curr_pos=current_vision_pos_embeds, memory=memory, memory_pos=memory_pos_embed, num_obj_ptr_tokens=num_obj_ptr_tokens, ) # reshape the output (HW)BC => BCHW pix_feat_with_mem = pix_feat_with_mem.permute(1, 2, 0).view(B, C, H, W) return pix_feat_with_mem def _encode_new_memory( self, current_vision_feats, feat_sizes, 
pred_masks_high_res, is_mask_from_pts, ): """Encode the current image and its prediction into a memory feature.""" B = current_vision_feats[-1].size(1) # batch size on this frame C = self.hidden_dim H, W = feat_sizes[-1] # top-level (lowest-resolution) feature size # top-level feature, (HW)BC => BCHW pix_feat = current_vision_feats[-1].permute(1, 2, 0).view(B, C, H, W) if self.non_overlap_masks_for_mem_enc and not self.training: # optionally, apply non-overlapping constraints to the masks (it's applied # in the batch dimension and should only be used during eval, where all # the objects come from the same video under batch size 1). pred_masks_high_res = self._apply_non_overlapping_constraints( pred_masks_high_res ) # scale the raw mask logits with a temperature before applying sigmoid binarize = self.binarize_mask_from_pts_for_mem_enc and is_mask_from_pts if binarize and not self.training: mask_for_mem = (pred_masks_high_res > 0).float() else: # apply sigmoid on the raw mask logits to turn them into range (0, 1) mask_for_mem = torch.sigmoid(pred_masks_high_res) # apply scale and bias terms to the sigmoid probabilities if self.sigmoid_scale_for_mem_enc != 1.0: mask_for_mem = mask_for_mem * self.sigmoid_scale_for_mem_enc if self.sigmoid_bias_for_mem_enc != 0.0: mask_for_mem = mask_for_mem + self.sigmoid_bias_for_mem_enc maskmem_out = self.memory_encoder( pix_feat, mask_for_mem, skip_mask_sigmoid=True # sigmoid already applied ) maskmem_features = maskmem_out["vision_features"] maskmem_pos_enc = maskmem_out["vision_pos_enc"] return maskmem_features, maskmem_pos_enc def track_step( self, frame_idx, is_init_cond_frame, current_vision_feats, current_vision_pos_embeds, feat_sizes, point_inputs, mask_inputs, output_dict, num_frames, track_in_reverse=False, # tracking in reverse time order (for demo usage) # Whether to run the memory encoder on the predicted masks. Sometimes we might want # to skip the memory encoder with `run_mem_encoder=False`. 
For example, # in demo we might call `track_step` multiple times for each user click, # and only encode the memory when the user finalizes their clicks. And in ablation # settings like SAM training on static images, we don't need the memory encoder. run_mem_encoder=True, # The previously predicted SAM mask logits (which can be fed together with new clicks in demo). prev_sam_mask_logits=None, ): current_out = {"point_inputs": point_inputs, "mask_inputs": mask_inputs} # High-resolution feature maps for the SAM head, reshape (HW)BC => BCHW if len(current_vision_feats) > 1: high_res_features = [ x.permute(1, 2, 0).view(x.size(1), x.size(2), *s) for x, s in zip(current_vision_feats[:-1], feat_sizes[:-1]) ] else: high_res_features = None if mask_inputs is not None and self.use_mask_input_as_output_without_sam: # When use_mask_input_as_output_without_sam=True, we directly output the mask input # (see it as a GT mask) without using a SAM prompt encoder + mask decoder. pix_feat = current_vision_feats[-1].permute(1, 2, 0) pix_feat = pix_feat.view(-1, self.hidden_dim, *feat_sizes[-1]) sam_outputs = self._use_mask_as_output( pix_feat, high_res_features, mask_inputs ) else: # fused the visual feature with previous memory features in the memory bank pix_feat_with_mem = self._prepare_memory_conditioned_features( frame_idx=frame_idx, is_init_cond_frame=is_init_cond_frame, current_vision_feats=current_vision_feats[-1:], current_vision_pos_embeds=current_vision_pos_embeds[-1:], feat_sizes=feat_sizes[-1:], output_dict=output_dict, num_frames=num_frames, track_in_reverse=track_in_reverse, ) # apply SAM-style segmentation head # here we might feed previously predicted low-res SAM mask logits into the SAM mask decoder, # e.g. 
in demo where such logits come from earlier interaction instead of correction sampling # (in this case, any `mask_inputs` shouldn't reach here as they are sent to _use_mask_as_output instead) if prev_sam_mask_logits is not None: assert point_inputs is not None and mask_inputs is None mask_inputs = prev_sam_mask_logits multimask_output = self._use_multimask(is_init_cond_frame, point_inputs) sam_outputs = self._forward_sam_heads( backbone_features=pix_feat_with_mem, point_inputs=point_inputs, mask_inputs=mask_inputs, high_res_features=high_res_features, multimask_output=multimask_output, ) ( _, _, _, low_res_masks, high_res_masks, obj_ptr, _, ) = sam_outputs current_out["pred_masks"] = low_res_masks current_out["pred_masks_high_res"] = high_res_masks current_out["obj_ptr"] = obj_ptr # Finally run the memory encoder on the predicted mask to encode # it into a new memory feature (that can be used in future frames) if run_mem_encoder and self.num_maskmem > 0: high_res_masks_for_mem_enc = high_res_masks maskmem_features, maskmem_pos_enc = self._encode_new_memory( current_vision_feats=current_vision_feats, feat_sizes=feat_sizes, pred_masks_high_res=high_res_masks_for_mem_enc, is_mask_from_pts=(point_inputs is not None), ) current_out["maskmem_features"] = maskmem_features current_out["maskmem_pos_enc"] = maskmem_pos_enc else: current_out["maskmem_features"] = None current_out["maskmem_pos_enc"] = None return current_out def _use_multimask(self, is_init_cond_frame, point_inputs): """Whether to use multimask output in the SAM head.""" num_pts = 0 if point_inputs is None else point_inputs["point_labels"].size(1) multimask_output = ( self.multimask_output_in_sam and (is_init_cond_frame or self.multimask_output_for_tracking) and (self.multimask_min_pt_num <= num_pts <= self.multimask_max_pt_num) ) return multimask_output def _apply_non_overlapping_constraints(self, pred_masks): """ Apply non-overlapping constraints to the object scores in pred_masks. 
        Here we keep only the highest scoring object at each spatial location in pred_masks.
        """
        batch_size = pred_masks.size(0)
        if batch_size == 1:
            return pred_masks

        device = pred_masks.device
        # "max_obj_inds": object index of the object with the highest score at each location
        max_obj_inds = torch.argmax(pred_masks, dim=0, keepdim=True)
        # "batch_obj_inds": object index of each object slice (along dim 0) in `pred_masks`
        batch_obj_inds = torch.arange(batch_size, device=device)[:, None, None, None]
        keep = max_obj_inds == batch_obj_inds
        # suppress overlapping regions' scores below -10.0 so that the foreground regions
        # don't overlap (here sigmoid(-10.0)=4.5398e-05)
        pred_masks = torch.where(keep, pred_masks, torch.clamp(pred_masks, max=-10.0))
        return pred_masks


================================================
FILE: auto-seg/sam2/modeling/sam2_utils.py
================================================
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

import copy

import torch
import torch.nn as nn
import torch.nn.functional as F


def select_closest_cond_frames(frame_idx, cond_frame_outputs, max_cond_frame_num):
    """
    Select up to `max_cond_frame_num` conditioning frames from `cond_frame_outputs`
    that are temporally closest to the current frame at `frame_idx`. Here, we take
    - a) the closest conditioning frame before `frame_idx` (if any);
    - b) the closest conditioning frame after `frame_idx` (if any);
    - c) any other temporally closest conditioning frames until reaching a total
         of `max_cond_frame_num` conditioning frames.

    Outputs:
    - selected_outputs: selected items (keys & values) from `cond_frame_outputs`.
    - unselected_outputs: items (keys & values) not selected in `cond_frame_outputs`.
    """
    if max_cond_frame_num == -1 or len(cond_frame_outputs) <= max_cond_frame_num:
        selected_outputs = cond_frame_outputs
        unselected_outputs = {}
    else:
        assert max_cond_frame_num >= 2, "we should allow using 2+ conditioning frames"
        selected_outputs = {}

        # the closest conditioning frame before `frame_idx` (if any)
        idx_before = max((t for t in cond_frame_outputs if t < frame_idx), default=None)
        if idx_before is not None:
            selected_outputs[idx_before] = cond_frame_outputs[idx_before]

        # the closest conditioning frame after `frame_idx` (if any)
        idx_after = min((t for t in cond_frame_outputs if t >= frame_idx), default=None)
        if idx_after is not None:
            selected_outputs[idx_after] = cond_frame_outputs[idx_after]

        # add other temporally closest conditioning frames until reaching a total
        # of `max_cond_frame_num` conditioning frames.
        num_remain = max_cond_frame_num - len(selected_outputs)
        inds_remain = sorted(
            (t for t in cond_frame_outputs if t not in selected_outputs),
            key=lambda x: abs(x - frame_idx),
        )[:num_remain]
        selected_outputs.update((t, cond_frame_outputs[t]) for t in inds_remain)
        unselected_outputs = {
            t: v for t, v in cond_frame_outputs.items() if t not in selected_outputs
        }

    return selected_outputs, unselected_outputs


def get_1d_sine_pe(pos_inds, dim, temperature=10000):
    """
    Get 1D sine positional embedding as in the original Transformer paper.
    """
    pe_dim = dim // 2
    dim_t = torch.arange(pe_dim, dtype=torch.float32, device=pos_inds.device)
    dim_t = temperature ** (2 * (dim_t // 2) / pe_dim)

    pos_embed = pos_inds.unsqueeze(-1) / dim_t
    pos_embed = torch.cat([pos_embed.sin(), pos_embed.cos()], dim=-1)
    return pos_embed


def get_activation_fn(activation):
    """Return an activation function given a string"""
    if activation == "relu":
        return F.relu
    if activation == "gelu":
        return F.gelu
    if activation == "glu":
        return F.glu
    raise RuntimeError(f"activation should be relu/gelu/glu, not {activation}.")


def get_clones(module, N):
    return nn.ModuleList([copy.deepcopy(module) for i in range(N)])


class DropPath(nn.Module):
    # adapted from https://github.com/huggingface/pytorch-image-models/blob/main/timm/layers/drop.py
    def __init__(self, drop_prob=0.0, scale_by_keep=True):
        super(DropPath, self).__init__()
        self.drop_prob = drop_prob
        self.scale_by_keep = scale_by_keep

    def forward(self, x):
        if self.drop_prob == 0.0 or not self.training:
            return x
        keep_prob = 1 - self.drop_prob
        shape = (x.shape[0],) + (1,) * (x.ndim - 1)
        random_tensor = x.new_empty(shape).bernoulli_(keep_prob)
        if keep_prob > 0.0 and self.scale_by_keep:
            random_tensor.div_(keep_prob)
        return x * random_tensor


# Lightly adapted from
# https://github.com/facebookresearch/MaskFormer/blob/main/mask_former/modeling/transformer/transformer_predictor.py # noqa
class MLP(nn.Module):
    def __init__(
        self,
        input_dim: int,
        hidden_dim: int,
        output_dim: int,
        num_layers: int,
        activation: nn.Module = nn.ReLU,
        sigmoid_output: bool = False,
    ) -> None:
        super().__init__()
        self.num_layers = num_layers
        h = [hidden_dim] * (num_layers - 1)
        self.layers = nn.ModuleList(
            nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim])
        )
        self.sigmoid_output = sigmoid_output
        self.act = activation()

    def forward(self, x):
        for i, layer in enumerate(self.layers):
            x = self.act(layer(x)) if i < self.num_layers - 1 else layer(x)
        if self.sigmoid_output:
            x = F.sigmoid(x)
        return x


# From https://github.com/facebookresearch/detectron2/blob/main/detectron2/layers/batch_norm.py # noqa
# Itself from https://github.com/facebookresearch/ConvNeXt/blob/d1fa8f6fef0a165b27399986cc2bdacc92777e40/models/convnext.py#L119 # noqa
class LayerNorm2d(nn.Module):
    def __init__(self, num_channels: int, eps: float = 1e-6) -> None:
        super().__init__()
        self.weight = nn.Parameter(torch.ones(num_channels))
        self.bias = nn.Parameter(torch.zeros(num_channels))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        u = x.mean(1, keepdim=True)
        s = (x - u).pow(2).mean(1, keepdim=True)
        x = (x - u) / torch.sqrt(s + self.eps)
        x = self.weight[:, None, None] * x + self.bias[:, None, None]
        return x


================================================
FILE: auto-seg/sam2/sam2_image_predictor.py
================================================
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

import logging
from typing import List, Optional, Tuple, Union

import numpy as np
import torch
from PIL.Image import Image

from sam2.modeling.sam2_base import SAM2Base
from sam2.utils.transforms import SAM2Transforms


class SAM2ImagePredictor:
    def __init__(
        self,
        sam_model: SAM2Base,
        mask_threshold=0.0,
        max_hole_area=0.0,
        max_sprinkle_area=0.0,
        **kwargs,
    ) -> None:
        """
        Uses SAM-2 to calculate the image embedding for an image, and then
        allows repeated, efficient mask prediction given prompts.

        Arguments:
          sam_model (Sam-2): The model to use for mask prediction.
          mask_threshold (float): The threshold to use when converting mask logits
            to binary masks. Masks are thresholded at 0 by default.
          max_hole_area (int): If max_hole_area > 0, we fill small holes in up to
            the maximum area of max_hole_area in low_res_masks.
          max_sprinkle_area (int): If max_sprinkle_area > 0, we remove small sprinkles up to
            the maximum area of max_sprinkle_area in low_res_masks.
""" super().__init__() self.model = sam_model self._transforms = SAM2Transforms( resolution=self.model.image_size, mask_threshold=mask_threshold, max_hole_area=max_hole_area, max_sprinkle_area=max_sprinkle_area, ) # Predictor state self._is_image_set = False self._features = None self._orig_hw = None # Whether the predictor is set for single image or a batch of images self._is_batch = False # Predictor config self.mask_threshold = mask_threshold # Spatial dim for backbone feature maps self._bb_feat_sizes = [ (256, 256), (128, 128), (64, 64), ] @classmethod def from_pretrained(cls, model_id: str, **kwargs) -> "SAM2ImagePredictor": """ Load a pretrained model from the Hugging Face hub. Arguments: model_id (str): The Hugging Face repository ID. **kwargs: Additional arguments to pass to the model constructor. Returns: (SAM2ImagePredictor): The loaded model. """ from sam2.build_sam import build_sam2_hf sam_model = build_sam2_hf(model_id, **kwargs) return cls(sam_model, **kwargs) @torch.no_grad() def set_image( self, image: Union[np.ndarray, Image], ) -> None: """ Calculates the image embeddings for the provided image, allowing masks to be predicted with the 'predict' method. Arguments: image (np.ndarray or PIL Image): The input image to embed in RGB format. The image should be in HWC format if np.ndarray, or WHC format if PIL Image with pixel values in [0, 255]. image_format (str): The color format of the image, in ['RGB', 'BGR']. 
""" self.reset_predictor() # Transform the image to the form expected by the model if isinstance(image, np.ndarray): logging.info("For numpy array image, we assume (HxWxC) format") self._orig_hw = [image.shape[:2]] elif isinstance(image, Image): w, h = image.size self._orig_hw = [(h, w)] else: raise NotImplementedError("Image format not supported") input_image = self._transforms(image) input_image = input_image[None, ...].to(self.device) assert ( len(input_image.shape) == 4 and input_image.shape[1] == 3 ), f"input_image must be of size 1x3xHxW, got {input_image.shape}" logging.info("Computing image embeddings for the provided image...") backbone_out = self.model.forward_image(input_image) _, vision_feats, _, _ = self.model._prepare_backbone_features(backbone_out) # Add no_mem_embed, which is added to the lowest rest feat. map during training on videos if self.model.directly_add_no_mem_embed: vision_feats[-1] = vision_feats[-1] + self.model.no_mem_embed feats = [ feat.permute(1, 2, 0).view(1, -1, *feat_size) for feat, feat_size in zip(vision_feats[::-1], self._bb_feat_sizes[::-1]) ][::-1] self._features = {"image_embed": feats[-1], "high_res_feats": feats[:-1]} self._is_image_set = True logging.info("Image embeddings computed.") @torch.no_grad() def set_image_batch( self, image_list: List[Union[np.ndarray]], ) -> None: """ Calculates the image embeddings for the provided image batch, allowing masks to be predicted with the 'predict_batch' method. Arguments: image_list (List[np.ndarray]): The input images to embed in RGB format. The image should be in HWC format if np.ndarray with pixel values in [0, 255]. 
""" self.reset_predictor() assert isinstance(image_list, list) self._orig_hw = [] for image in image_list: assert isinstance( image, np.ndarray ), "Images are expected to be an np.ndarray in RGB format, and of shape HWC" self._orig_hw.append(image.shape[:2]) # Transform the image to the form expected by the model img_batch = self._transforms.forward_batch(image_list) img_batch = img_batch.to(self.device) batch_size = img_batch.shape[0] assert ( len(img_batch.shape) == 4 and img_batch.shape[1] == 3 ), f"img_batch must be of size Bx3xHxW, got {img_batch.shape}" logging.info("Computing image embeddings for the provided images...") backbone_out = self.model.forward_image(img_batch) _, vision_feats, _, _ = self.model._prepare_backbone_features(backbone_out) # Add no_mem_embed, which is added to the lowest rest feat. map during training on videos if self.model.directly_add_no_mem_embed: vision_feats[-1] = vision_feats[-1] + self.model.no_mem_embed feats = [ feat.permute(1, 2, 0).view(batch_size, -1, *feat_size) for feat, feat_size in zip(vision_feats[::-1], self._bb_feat_sizes[::-1]) ][::-1] self._features = {"image_embed": feats[-1], "high_res_feats": feats[:-1]} self._is_image_set = True self._is_batch = True logging.info("Image embeddings computed.") def predict_batch( self, point_coords_batch: List[np.ndarray] = None, point_labels_batch: List[np.ndarray] = None, box_batch: List[np.ndarray] = None, mask_input_batch: List[np.ndarray] = None, multimask_output: bool = True, return_logits: bool = False, normalize_coords=True, ) -> Tuple[List[np.ndarray], List[np.ndarray], List[np.ndarray]]: """This function is very similar to predict(...), however it is used for batched mode, when the model is expected to generate predictions on multiple images. It returns a tuple of lists of masks, ious, and low_res_masks_logits. 
""" assert self._is_batch, "This function should only be used when in batched mode" if not self._is_image_set: raise RuntimeError( "An image must be set with .set_image_batch(...) before mask prediction." ) num_images = len(self._features["image_embed"]) all_masks = [] all_ious = [] all_low_res_masks = [] for img_idx in range(num_images): # Transform input prompts point_coords = ( point_coords_batch[img_idx] if point_coords_batch is not None else None ) point_labels = ( point_labels_batch[img_idx] if point_labels_batch is not None else None ) box = box_batch[img_idx] if box_batch is not None else None mask_input = ( mask_input_batch[img_idx] if mask_input_batch is not None else None ) mask_input, unnorm_coords, labels, unnorm_box = self._prep_prompts( point_coords, point_labels, box, mask_input, normalize_coords, img_idx=img_idx, ) masks, iou_predictions, low_res_masks = self._predict( unnorm_coords, labels, unnorm_box, mask_input, multimask_output, return_logits=return_logits, img_idx=img_idx, ) masks_np = masks.squeeze(0).float().detach().cpu().numpy() iou_predictions_np = ( iou_predictions.squeeze(0).float().detach().cpu().numpy() ) low_res_masks_np = low_res_masks.squeeze(0).float().detach().cpu().numpy() all_masks.append(masks_np) all_ious.append(iou_predictions_np) all_low_res_masks.append(low_res_masks_np) return all_masks, all_ious, all_low_res_masks def predict( self, point_coords: Optional[np.ndarray] = None, point_labels: Optional[np.ndarray] = None, box: Optional[np.ndarray] = None, mask_input: Optional[np.ndarray] = None, multimask_output: bool = True, return_logits: bool = False, normalize_coords=True, ) -> Tuple[np.ndarray, np.ndarray, np.ndarray]: """ Predict masks for the given input prompts, using the currently set image. Arguments: point_coords (np.ndarray or None): A Nx2 array of point prompts to the model. Each point is in (X,Y) in pixels. point_labels (np.ndarray or None): A length N array of labels for the point prompts. 
1 indicates a foreground point and 0 indicates a background point. box (np.ndarray or None): A length 4 array given a box prompt to the model, in XYXY format. mask_input (np.ndarray): A low resolution mask input to the model, typically coming from a previous prediction iteration. Has form 1xHxW, where for SAM, H=W=256. multimask_output (bool): If true, the model will return three masks. For ambiguous input prompts (such as a single click), this will often produce better masks than a single prediction. If only a single mask is needed, the model's predicted quality score can be used to select the best mask. For non-ambiguous prompts, such as multiple input prompts, multimask_output=False can give better results. return_logits (bool): If true, returns un-thresholded masks logits instead of a binary mask. normalize_coords (bool): If true, the point coordinates will be normalized to the range [0,1] and point_coords is expected to be wrt. image dimensions. Returns: (np.ndarray): The output masks in CxHxW format, where C is the number of masks, and (H, W) is the original image size. (np.ndarray): An array of length C containing the model's predictions for the quality of each mask. (np.ndarray): An array of shape CxHxW, where C is the number of masks and H=W=256. These low resolution logits can be passed to a subsequent iteration as mask input. """ if not self._is_image_set: raise RuntimeError( "An image must be set with .set_image(...) before mask prediction." 
) # Transform input prompts mask_input, unnorm_coords, labels, unnorm_box = self._prep_prompts( point_coords, point_labels, box, mask_input, normalize_coords ) masks, iou_predictions, low_res_masks = self._predict( unnorm_coords, labels, unnorm_box, mask_input, multimask_output, return_logits=return_logits, ) masks_np = masks.squeeze(0).float().detach().cpu().numpy() iou_predictions_np = iou_predictions.squeeze(0).float().detach().cpu().numpy() low_res_masks_np = low_res_masks.squeeze(0).float().detach().cpu().numpy() return masks_np, iou_predictions_np, low_res_masks_np def _prep_prompts( self, point_coords, point_labels, box, mask_logits, normalize_coords, img_idx=-1 ): unnorm_coords, labels, unnorm_box, mask_input = None, None, None, None if point_coords is not None: assert ( point_labels is not None ), "point_labels must be supplied if point_coords is supplied." point_coords = torch.as_tensor( point_coords, dtype=torch.float, device=self.device ) unnorm_coords = self._transforms.transform_coords( point_coords, normalize=normalize_coords, orig_hw=self._orig_hw[img_idx] ) labels = torch.as_tensor(point_labels, dtype=torch.int, device=self.device) if len(unnorm_coords.shape) == 2: unnorm_coords, labels = unnorm_coords[None, ...], labels[None, ...] 
if box is not None: box = torch.as_tensor(box, dtype=torch.float, device=self.device) unnorm_box = self._transforms.transform_boxes( box, normalize=normalize_coords, orig_hw=self._orig_hw[img_idx] ) # Bx2x2 if mask_logits is not None: mask_input = torch.as_tensor( mask_logits, dtype=torch.float, device=self.device ) if len(mask_input.shape) == 3: mask_input = mask_input[None, :, :, :] return mask_input, unnorm_coords, labels, unnorm_box @torch.no_grad() def _predict( self, point_coords: Optional[torch.Tensor], point_labels: Optional[torch.Tensor], boxes: Optional[torch.Tensor] = None, mask_input: Optional[torch.Tensor] = None, multimask_output: bool = True, return_logits: bool = False, img_idx: int = -1, ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: """ Predict masks for the given input prompts, using the currently set image. Input prompts are batched torch tensors and are expected to already be transformed to the input frame using SAM2Transforms. Arguments: point_coords (torch.Tensor or None): A BxNx2 array of point prompts to the model. Each point is in (X,Y) in pixels. point_labels (torch.Tensor or None): A BxN array of labels for the point prompts. 1 indicates a foreground point and 0 indicates a background point. boxes (np.ndarray or None): A Bx4 array given a box prompt to the model, in XYXY format. mask_input (np.ndarray): A low resolution mask input to the model, typically coming from a previous prediction iteration. Has form Bx1xHxW, where for SAM, H=W=256. Masks returned by a previous iteration of the predict method do not need further transformation. multimask_output (bool): If true, the model will return three masks. For ambiguous input prompts (such as a single click), this will often produce better masks than a single prediction. If only a single mask is needed, the model's predicted quality score can be used to select the best mask. For non-ambiguous prompts, such as multiple input prompts, multimask_output=False can give better results. 
return_logits (bool): If true, returns un-thresholded masks logits instead of a binary mask. Returns: (torch.Tensor): The output masks in BxCxHxW format, where C is the number of masks, and (H, W) is the original image size. (torch.Tensor): An array of shape BxC containing the model's predictions for the quality of each mask. (torch.Tensor): An array of shape BxCxHxW, where C is the number of masks and H=W=256. These low res logits can be passed to a subsequent iteration as mask input. """ if not self._is_image_set: raise RuntimeError( "An image must be set with .set_image(...) before mask prediction." ) if point_coords is not None: concat_points = (point_coords, point_labels) else: concat_points = None # Embed prompts if boxes is not None: box_coords = boxes.reshape(-1, 2, 2) box_labels = torch.tensor([[2, 3]], dtype=torch.int, device=boxes.device) box_labels = box_labels.repeat(boxes.size(0), 1) # we merge "boxes" and "points" into a single "concat_points" input (where # boxes are added at the beginning) to sam_prompt_encoder if concat_points is not None: concat_coords = torch.cat([box_coords, concat_points[0]], dim=1) concat_labels = torch.cat([box_labels, concat_points[1]], dim=1) concat_points = (concat_coords, concat_labels) else: concat_points = (box_coords, box_labels) sparse_embeddings, dense_embeddings = self.model.sam_prompt_encoder( points=concat_points, boxes=None, masks=mask_input, ) # Predict masks batched_mode = ( concat_points is not None and concat_points[0].shape[0] > 1 ) # multi object prediction high_res_features = [ feat_level[img_idx].unsqueeze(0) for feat_level in self._features["high_res_feats"] ] low_res_masks, iou_predictions, _, _ = self.model.sam_mask_decoder( image_embeddings=self._features["image_embed"][img_idx].unsqueeze(0), image_pe=self.model.sam_prompt_encoder.get_dense_pe(), sparse_prompt_embeddings=sparse_embeddings, dense_prompt_embeddings=dense_embeddings, multimask_output=multimask_output, repeat_image=batched_mode, 
high_res_features=high_res_features, ) # Upscale the masks to the original image resolution masks = self._transforms.postprocess_masks( low_res_masks, self._orig_hw[img_idx] ) low_res_masks = torch.clamp(low_res_masks, -32.0, 32.0) if not return_logits: masks = masks > self.mask_threshold return masks, iou_predictions, low_res_masks def get_image_embedding(self) -> torch.Tensor: """ Returns the image embeddings for the currently set image, with shape 1xCxHxW, where C is the embedding dimension and (H,W) are the embedding spatial dimension of SAM (typically C=256, H=W=64). """ if not self._is_image_set: raise RuntimeError( "An image must be set with .set_image(...) to generate an embedding." ) assert ( self._features is not None ), "Features must exist if an image has been set." return self._features["image_embed"] @property def device(self) -> torch.device: return self.model.device def reset_predictor(self) -> None: """ Resets the image embeddings and other state variables. """ self._is_image_set = False self._features = None self._orig_hw = None self._is_batch = False ================================================ FILE: auto-seg/sam2/sam2_video_predictor.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. 
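Before the implementation below, a standalone sketch of the box-prompt convention that both predictors in this repository rely on: a box is folded into the point-prompt stream as two corner points carrying the reserved labels 2 (top-left) and 3 (bottom-right), prepended before any click points. This is an illustration in plain Python, not code from this repository; `box_to_point_prompts` is a hypothetical helper.

```python
# Illustration only (hypothetical helper, not part of this module):
# SAM 2 encodes a box prompt as two corner points with reserved labels
# 2 (top-left) and 3 (bottom-right), prepended before any click points,
# mirroring the box handling in `add_new_points_or_box` below.
def box_to_point_prompts(box, points=None, labels=None):
    """box is [x0, y0, x1, y1]; points are [x, y] clicks with 1/0 labels."""
    points = list(points or [])
    labels = list(labels or [])
    # top-left and bottom-right corners of the box
    box_points = [[box[0], box[1]], [box[2], box[3]]]
    return box_points + points, [2, 3] + labels
```

For example, a box (10, 20)-(110, 220) plus one foreground click at (50, 60) yields points [[10, 20], [110, 220], [50, 60]] with labels [2, 3, 1].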
import warnings from collections import OrderedDict import torch from tqdm import tqdm from sam2.modeling.sam2_base import NO_OBJ_SCORE, SAM2Base from sam2.utils.misc import concat_points, fill_holes_in_mask_scores, load_video_frames class SAM2VideoPredictor(SAM2Base): """The predictor class to handle user interactions and manage inference states.""" def __init__( self, fill_hole_area=0, # whether to apply non-overlapping constraints on the output object masks non_overlap_masks=False, # whether to clear non-conditioning memory of the surrounding frames (which may contain outdated information) after adding correction clicks; # note that this would only apply to *single-object tracking* unless `clear_non_cond_mem_for_multi_obj` is also set to True) clear_non_cond_mem_around_input=False, # whether to also clear non-conditioning memory of the surrounding frames (only effective when `clear_non_cond_mem_around_input` is True). clear_non_cond_mem_for_multi_obj=False, **kwargs, ): super().__init__(**kwargs) self.fill_hole_area = fill_hole_area self.non_overlap_masks = non_overlap_masks self.clear_non_cond_mem_around_input = clear_non_cond_mem_around_input self.clear_non_cond_mem_for_multi_obj = clear_non_cond_mem_for_multi_obj @torch.inference_mode() def init_state( self, video_path, offload_video_to_cpu=False, offload_state_to_cpu=False, async_loading_frames=False, ): """Initialize an inference state.""" compute_device = self.device # device of the model images, video_height, video_width = load_video_frames( video_path=video_path, image_size=self.image_size, offload_video_to_cpu=offload_video_to_cpu, async_loading_frames=async_loading_frames, compute_device=compute_device, ) inference_state = {} inference_state["images"] = images inference_state["num_frames"] = len(images) # whether to offload the video frames to CPU memory # turning on this option saves the GPU memory with only a very small overhead inference_state["offload_video_to_cpu"] = offload_video_to_cpu # whether 
to offload the inference state to CPU memory # turning on this option saves the GPU memory at the cost of a lower tracking fps # (e.g. in a test case of 768x768 model, fps dropped from 27 to 24 when tracking one object # and from 24 to 21 when tracking two objects) inference_state["offload_state_to_cpu"] = offload_state_to_cpu # the original video height and width, used for resizing final output scores inference_state["video_height"] = video_height inference_state["video_width"] = video_width inference_state["device"] = compute_device if offload_state_to_cpu: inference_state["storage_device"] = torch.device("cpu") else: inference_state["storage_device"] = compute_device # inputs on each frame inference_state["point_inputs_per_obj"] = {} inference_state["mask_inputs_per_obj"] = {} # visual features on a small number of recently visited frames for quick interactions inference_state["cached_features"] = {} # values that don't change across frames (so we only need to hold one copy of them) inference_state["constants"] = {} # mapping between client-side object id and model-side object index inference_state["obj_id_to_idx"] = OrderedDict() inference_state["obj_idx_to_id"] = OrderedDict() inference_state["obj_ids"] = [] # A storage to hold the model's tracking results and states on each frame inference_state["output_dict"] = { "cond_frame_outputs": {}, # dict containing {frame_idx: <out>} "non_cond_frame_outputs": {}, # dict containing {frame_idx: <out>} } # Slice (view) of each object's tracking results, sharing the same memory with "output_dict" inference_state["output_dict_per_obj"] = {} # A temporary storage to hold new outputs when the user interacts with a frame # to add clicks or mask (it's merged into "output_dict" before propagation starts) inference_state["temp_output_dict_per_obj"] = {} # Frames that already hold consolidated outputs from click or mask inputs # (we directly use their consolidated outputs during tracking) inference_state["consolidated_frame_inds"] = {
"cond_frame_outputs": set(), # set containing frame indices "non_cond_frame_outputs": set(), # set containing frame indices } # metadata for each tracking frame (e.g. which direction it's tracked) inference_state["tracking_has_started"] = False inference_state["frames_already_tracked"] = {} # Warm up the visual backbone and cache the image feature on frame 0 self._get_image_feature(inference_state, frame_idx=0, batch_size=1) return inference_state @classmethod def from_pretrained(cls, model_id: str, **kwargs) -> "SAM2VideoPredictor": """ Load a pretrained model from the Hugging Face hub. Arguments: model_id (str): The Hugging Face repository ID. **kwargs: Additional arguments to pass to the model constructor. Returns: (SAM2VideoPredictor): The loaded model. """ from sam2.build_sam import build_sam2_video_predictor_hf sam_model = build_sam2_video_predictor_hf(model_id, **kwargs) return sam_model def _obj_id_to_idx(self, inference_state, obj_id): """Map client-side object id to model-side object index.""" obj_idx = inference_state["obj_id_to_idx"].get(obj_id, None) if obj_idx is not None: return obj_idx # This is a new object id not sent to the server before. We only allow adding # new objects *before* the tracking starts. 
allow_new_object = not inference_state["tracking_has_started"] if allow_new_object: # get the next object slot obj_idx = len(inference_state["obj_id_to_idx"]) inference_state["obj_id_to_idx"][obj_id] = obj_idx inference_state["obj_idx_to_id"][obj_idx] = obj_id inference_state["obj_ids"] = list(inference_state["obj_id_to_idx"]) # set up input and output structures for this object inference_state["point_inputs_per_obj"][obj_idx] = {} inference_state["mask_inputs_per_obj"][obj_idx] = {} inference_state["output_dict_per_obj"][obj_idx] = { "cond_frame_outputs": {}, # dict containing {frame_idx: <out>} "non_cond_frame_outputs": {}, # dict containing {frame_idx: <out>} } inference_state["temp_output_dict_per_obj"][obj_idx] = { "cond_frame_outputs": {}, # dict containing {frame_idx: <out>} "non_cond_frame_outputs": {}, # dict containing {frame_idx: <out>} } return obj_idx else: raise RuntimeError( f"Cannot add new object id {obj_id} after tracking starts. " f"All existing object ids: {inference_state['obj_ids']}. " f"Please call 'reset_state' to restart from scratch."
) def _obj_idx_to_id(self, inference_state, obj_idx): """Map model-side object index to client-side object id.""" return inference_state["obj_idx_to_id"][obj_idx] def _get_obj_num(self, inference_state): """Get the total number of unique object ids received so far in this session.""" return len(inference_state["obj_idx_to_id"]) @torch.inference_mode() def add_new_points_or_box( self, inference_state, frame_idx, obj_id, points=None, labels=None, clear_old_points=True, normalize_coords=True, box=None, ): """Add new points to a frame.""" obj_idx = self._obj_id_to_idx(inference_state, obj_id) point_inputs_per_frame = inference_state["point_inputs_per_obj"][obj_idx] mask_inputs_per_frame = inference_state["mask_inputs_per_obj"][obj_idx] if (points is not None) != (labels is not None): raise ValueError("points and labels must be provided together") if points is None and box is None: raise ValueError("at least one of points or box must be provided as input") if points is None: points = torch.zeros(0, 2, dtype=torch.float32) elif not isinstance(points, torch.Tensor): points = torch.tensor(points, dtype=torch.float32) if labels is None: labels = torch.zeros(0, dtype=torch.int32) elif not isinstance(labels, torch.Tensor): labels = torch.tensor(labels, dtype=torch.int32) if points.dim() == 2: points = points.unsqueeze(0) # add batch dimension if labels.dim() == 1: labels = labels.unsqueeze(0) # add batch dimension # If `box` is provided, we add it as the first two points with labels 2 and 3 # along with the user-provided points (consistent with how SAM 2 is trained). if box is not None: if not clear_old_points: raise ValueError( "cannot add box without clearing old points, since " "box prompt must be provided before any point prompt " "(please use clear_old_points=True instead)" ) if inference_state["tracking_has_started"]: warnings.warn( "You are adding a box after tracking starts. SAM 2 may not always be " "able to incorporate a box prompt for *refinement*. 
If you intend to " "use box prompt as an *initial* input before tracking, please call " "'reset_state' on the inference state to restart from scratch.", category=UserWarning, stacklevel=2, ) if not isinstance(box, torch.Tensor): box = torch.tensor(box, dtype=torch.float32, device=points.device) box_coords = box.reshape(1, 2, 2) box_labels = torch.tensor([2, 3], dtype=torch.int32, device=labels.device) box_labels = box_labels.reshape(1, 2) points = torch.cat([box_coords, points], dim=1) labels = torch.cat([box_labels, labels], dim=1) if normalize_coords: video_H = inference_state["video_height"] video_W = inference_state["video_width"] points = points / torch.tensor([video_W, video_H]).to(points.device) # scale the (normalized) coordinates by the model's internal image size points = points * self.image_size points = points.to(inference_state["device"]) labels = labels.to(inference_state["device"]) if not clear_old_points: point_inputs = point_inputs_per_frame.get(frame_idx, None) else: point_inputs = None point_inputs = concat_points(point_inputs, points, labels) point_inputs_per_frame[frame_idx] = point_inputs mask_inputs_per_frame.pop(frame_idx, None) # If this frame hasn't been tracked before, we treat it as an initial conditioning # frame, meaning that the input points are to generate segments on this frame without # using any memory from other frames, like in SAM. Otherwise (if it has been tracked), # the input points will be used to correct the already tracked masks.
is_init_cond_frame = frame_idx not in inference_state["frames_already_tracked"] # whether to track in reverse time order if is_init_cond_frame: reverse = False else: reverse = inference_state["frames_already_tracked"][frame_idx]["reverse"] obj_output_dict = inference_state["output_dict_per_obj"][obj_idx] obj_temp_output_dict = inference_state["temp_output_dict_per_obj"][obj_idx] # Add a frame to conditioning output if it's an initial conditioning frame or # if the model sees all frames receiving clicks/mask as conditioning frames. is_cond = is_init_cond_frame or self.add_all_frames_to_correct_as_cond storage_key = "cond_frame_outputs" if is_cond else "non_cond_frame_outputs" # Get any previously predicted mask logits on this object and feed it along with # the new clicks into the SAM mask decoder. prev_sam_mask_logits = None # lookup temporary output dict first, which contains the most recent output # (if not found, then lookup conditioning and non-conditioning frame output) prev_out = obj_temp_output_dict[storage_key].get(frame_idx) if prev_out is None: prev_out = obj_output_dict["cond_frame_outputs"].get(frame_idx) if prev_out is None: prev_out = obj_output_dict["non_cond_frame_outputs"].get(frame_idx) if prev_out is not None and prev_out["pred_masks"] is not None: device = inference_state["device"] prev_sam_mask_logits = prev_out["pred_masks"].to(device, non_blocking=True) # Clamp the scale of prev_sam_mask_logits to avoid rare numerical issues. prev_sam_mask_logits = torch.clamp(prev_sam_mask_logits, -32.0, 32.0) current_out, _ = self._run_single_frame_inference( inference_state=inference_state, output_dict=obj_output_dict, # run on the slice of a single object frame_idx=frame_idx, batch_size=1, # run on the slice of a single object is_init_cond_frame=is_init_cond_frame, point_inputs=point_inputs, mask_inputs=None, reverse=reverse, # Skip the memory encoder when adding clicks or mask. 
We execute the memory encoder # at the beginning of `propagate_in_video` (after the user finalizes their clicks). This # allows us to enforce non-overlapping constraints on all objects before encoding # them into memory. run_mem_encoder=False, prev_sam_mask_logits=prev_sam_mask_logits, ) # Add the output to the output dict (to be used as future memory) obj_temp_output_dict[storage_key][frame_idx] = current_out # Resize the output mask to the original video resolution obj_ids = inference_state["obj_ids"] consolidated_out = self._consolidate_temp_output_across_obj( inference_state, frame_idx, is_cond=is_cond, run_mem_encoder=False, consolidate_at_video_res=True, ) _, video_res_masks = self._get_orig_video_res_output( inference_state, consolidated_out["pred_masks_video_res"] ) return frame_idx, obj_ids, video_res_masks def add_new_points(self, *args, **kwargs): """Deprecated method. Please use `add_new_points_or_box` instead.""" return self.add_new_points_or_box(*args, **kwargs) @torch.inference_mode() def add_new_mask( self, inference_state, frame_idx, obj_id, mask, ): """Add new mask to a frame.""" obj_idx = self._obj_id_to_idx(inference_state, obj_id) point_inputs_per_frame = inference_state["point_inputs_per_obj"][obj_idx] mask_inputs_per_frame = inference_state["mask_inputs_per_obj"][obj_idx] if not isinstance(mask, torch.Tensor): mask = torch.tensor(mask, dtype=torch.bool) assert mask.dim() == 2 mask_H, mask_W = mask.shape mask_inputs_orig = mask[None, None] # add batch and channel dimension mask_inputs_orig = mask_inputs_orig.float().to(inference_state["device"]) # resize the mask if it doesn't match the model's image size if mask_H != self.image_size or mask_W != self.image_size: mask_inputs = torch.nn.functional.interpolate( mask_inputs_orig, size=(self.image_size, self.image_size), align_corners=False, mode="bilinear", antialias=True, # use antialias for downsampling ) mask_inputs = (mask_inputs >= 0.5).float() else: mask_inputs = mask_inputs_orig
mask_inputs_per_frame[frame_idx] = mask_inputs point_inputs_per_frame.pop(frame_idx, None) # If this frame hasn't been tracked before, we treat it as an initial conditioning # frame, meaning that the input points are to generate segments on this frame without # using any memory from other frames, like in SAM. Otherwise (if it has been tracked), # the input points will be used to correct the already tracked masks. is_init_cond_frame = frame_idx not in inference_state["frames_already_tracked"] # whether to track in reverse time order if is_init_cond_frame: reverse = False else: reverse = inference_state["frames_already_tracked"][frame_idx]["reverse"] obj_output_dict = inference_state["output_dict_per_obj"][obj_idx] obj_temp_output_dict = inference_state["temp_output_dict_per_obj"][obj_idx] # Add a frame to conditioning output if it's an initial conditioning frame or # if the model sees all frames receiving clicks/mask as conditioning frames. is_cond = is_init_cond_frame or self.add_all_frames_to_correct_as_cond storage_key = "cond_frame_outputs" if is_cond else "non_cond_frame_outputs" current_out, _ = self._run_single_frame_inference( inference_state=inference_state, output_dict=obj_output_dict, # run on the slice of a single object frame_idx=frame_idx, batch_size=1, # run on the slice of a single object is_init_cond_frame=is_init_cond_frame, point_inputs=None, mask_inputs=mask_inputs, reverse=reverse, # Skip the memory encoder when adding clicks or mask. We execute the memory encoder # at the beginning of `propagate_in_video` (after the user finalizes their clicks). This # allows us to enforce non-overlapping constraints on all objects before encoding # them into memory.
run_mem_encoder=False, ) # Add the output to the output dict (to be used as future memory) obj_temp_output_dict[storage_key][frame_idx] = current_out # Resize the output mask to the original video resolution obj_ids = inference_state["obj_ids"] consolidated_out = self._consolidate_temp_output_across_obj( inference_state, frame_idx, is_cond=is_cond, run_mem_encoder=False, consolidate_at_video_res=True, ) _, video_res_masks = self._get_orig_video_res_output( inference_state, consolidated_out["pred_masks_video_res"] ) return frame_idx, obj_ids, video_res_masks def _get_orig_video_res_output(self, inference_state, any_res_masks): """ Resize the object scores to the original video resolution (video_res_masks) and apply non-overlapping constraints for final output. """ device = inference_state["device"] video_H = inference_state["video_height"] video_W = inference_state["video_width"] any_res_masks = any_res_masks.to(device, non_blocking=True) if any_res_masks.shape[-2:] == (video_H, video_W): video_res_masks = any_res_masks else: video_res_masks = torch.nn.functional.interpolate( any_res_masks, size=(video_H, video_W), mode="bilinear", align_corners=False, ) if self.non_overlap_masks: video_res_masks = self._apply_non_overlapping_constraints(video_res_masks) return any_res_masks, video_res_masks def _consolidate_temp_output_across_obj( self, inference_state, frame_idx, is_cond, run_mem_encoder, consolidate_at_video_res=False, ): """ Consolidate the per-object temporary outputs in `temp_output_dict_per_obj` on a frame into a single output for all objects, including 1) fill any missing objects either from `output_dict_per_obj` (if they exist in `output_dict_per_obj` for this frame) or leave them as placeholder values (if they don't exist in `output_dict_per_obj` for this frame); 2) if specified, rerun memory encoder after apply non-overlapping constraints on the object scores. 
""" batch_size = self._get_obj_num(inference_state) storage_key = "cond_frame_outputs" if is_cond else "non_cond_frame_outputs" # Optionally, we allow consolidating the temporary outputs at the original # video resolution (to provide a better editing experience for mask prompts). if consolidate_at_video_res: assert not run_mem_encoder, "memory encoder cannot run at video resolution" consolidated_H = inference_state["video_height"] consolidated_W = inference_state["video_width"] consolidated_mask_key = "pred_masks_video_res" else: consolidated_H = consolidated_W = self.image_size // 4 consolidated_mask_key = "pred_masks" # Initialize `consolidated_out`. Its "maskmem_features" and "maskmem_pos_enc" # will be added when rerunning the memory encoder after applying non-overlapping # constraints to object scores. Its "pred_masks" are prefilled with a large # negative value (NO_OBJ_SCORE) to represent missing objects. consolidated_out = { "maskmem_features": None, "maskmem_pos_enc": None, consolidated_mask_key: torch.full( size=(batch_size, 1, consolidated_H, consolidated_W), fill_value=NO_OBJ_SCORE, dtype=torch.float32, device=inference_state["storage_device"], ), "obj_ptr": torch.full( size=(batch_size, self.hidden_dim), fill_value=NO_OBJ_SCORE, dtype=torch.float32, device=inference_state["device"], ), } empty_mask_ptr = None for obj_idx in range(batch_size): obj_temp_output_dict = inference_state["temp_output_dict_per_obj"][obj_idx] obj_output_dict = inference_state["output_dict_per_obj"][obj_idx] out = obj_temp_output_dict[storage_key].get(frame_idx, None) # If the object doesn't appear in "temp_output_dict_per_obj" on this frame, # we fall back and look up its previous output in "output_dict_per_obj". # We look up both "cond_frame_outputs" and "non_cond_frame_outputs" in # "output_dict_per_obj" to find a previous output for this object. 
if out is None: out = obj_output_dict["cond_frame_outputs"].get(frame_idx, None) if out is None: out = obj_output_dict["non_cond_frame_outputs"].get(frame_idx, None) # If the object doesn't appear in "output_dict_per_obj" either, we skip it # and leave its mask scores to the default scores (i.e. the NO_OBJ_SCORE # placeholder above) and set its object pointer to be a dummy pointer. if out is None: # Fill in dummy object pointers for those objects without any inputs or # tracking outcomes on this frame (only do it under `run_mem_encoder=True`, # i.e. when we need to build the memory for tracking). if run_mem_encoder: if empty_mask_ptr is None: empty_mask_ptr = self._get_empty_mask_ptr( inference_state, frame_idx ) # fill object pointer with a dummy pointer (based on an empty mask) consolidated_out["obj_ptr"][obj_idx : obj_idx + 1] = empty_mask_ptr continue # Add the temporary object output mask to consolidated output mask obj_mask = out["pred_masks"] consolidated_pred_masks = consolidated_out[consolidated_mask_key] if obj_mask.shape[-2:] == consolidated_pred_masks.shape[-2:]: consolidated_pred_masks[obj_idx : obj_idx + 1] = obj_mask else: # Resize first if temporary object mask has a different resolution resized_obj_mask = torch.nn.functional.interpolate( obj_mask, size=consolidated_pred_masks.shape[-2:], mode="bilinear", align_corners=False, ) consolidated_pred_masks[obj_idx : obj_idx + 1] = resized_obj_mask consolidated_out["obj_ptr"][obj_idx : obj_idx + 1] = out["obj_ptr"] # Optionally, apply non-overlapping constraints on the consolidated scores # and rerun the memory encoder if run_mem_encoder: device = inference_state["device"] high_res_masks = torch.nn.functional.interpolate( consolidated_out["pred_masks"].to(device, non_blocking=True), size=(self.image_size, self.image_size), mode="bilinear", align_corners=False, ) if self.non_overlap_masks_for_mem_enc: high_res_masks = self._apply_non_overlapping_constraints(high_res_masks) maskmem_features, maskmem_pos_enc 
= self._run_memory_encoder( inference_state=inference_state, frame_idx=frame_idx, batch_size=batch_size, high_res_masks=high_res_masks, is_mask_from_pts=True, # these frames are what the user interacted with ) consolidated_out["maskmem_features"] = maskmem_features consolidated_out["maskmem_pos_enc"] = maskmem_pos_enc return consolidated_out def _get_empty_mask_ptr(self, inference_state, frame_idx): """Get a dummy object pointer based on an empty mask on the current frame.""" # A dummy (empty) mask with a single object batch_size = 1 mask_inputs = torch.zeros( (batch_size, 1, self.image_size, self.image_size), dtype=torch.float32, device=inference_state["device"], ) # Retrieve correct image features ( _, _, current_vision_feats, current_vision_pos_embeds, feat_sizes, ) = self._get_image_feature(inference_state, frame_idx, batch_size) # Feed the empty mask and image feature above to get a dummy object pointer current_out = self.track_step( frame_idx=frame_idx, is_init_cond_frame=True, current_vision_feats=current_vision_feats, current_vision_pos_embeds=current_vision_pos_embeds, feat_sizes=feat_sizes, point_inputs=None, mask_inputs=mask_inputs, output_dict={}, num_frames=inference_state["num_frames"], track_in_reverse=False, run_mem_encoder=False, prev_sam_mask_logits=None, ) return current_out["obj_ptr"] @torch.inference_mode() def propagate_in_video_preflight(self, inference_state): """Prepare inference_state and consolidate temporary outputs before tracking.""" # Tracking has started and we don't allow adding new objects until session is reset. inference_state["tracking_has_started"] = True batch_size = self._get_obj_num(inference_state) # Consolidate per-object temporary outputs in "temp_output_dict_per_obj" and # add them into "output_dict". 
temp_output_dict_per_obj = inference_state["temp_output_dict_per_obj"] output_dict = inference_state["output_dict"] # "consolidated_frame_inds" contains indices of those frames where consolidated # temporary outputs have been added (either in this call or any previous calls # to `propagate_in_video_preflight`). consolidated_frame_inds = inference_state["consolidated_frame_inds"] for is_cond in [False, True]: # Separately consolidate conditioning and non-conditioning temp outputs storage_key = "cond_frame_outputs" if is_cond else "non_cond_frame_outputs" # Find all the frames that contain temporary outputs for any objects # (these should be the frames that have just received clicks for mask inputs # via `add_new_points_or_box` or `add_new_mask`) temp_frame_inds = set() for obj_temp_output_dict in temp_output_dict_per_obj.values(): temp_frame_inds.update(obj_temp_output_dict[storage_key].keys()) consolidated_frame_inds[storage_key].update(temp_frame_inds) # consolidate the temporary output across all objects on this frame for frame_idx in temp_frame_inds: consolidated_out = self._consolidate_temp_output_across_obj( inference_state, frame_idx, is_cond=is_cond, run_mem_encoder=True ) # merge them into "output_dict" and also create per-object slices output_dict[storage_key][frame_idx] = consolidated_out self._add_output_per_object( inference_state, frame_idx, consolidated_out, storage_key ) clear_non_cond_mem = self.clear_non_cond_mem_around_input and ( self.clear_non_cond_mem_for_multi_obj or batch_size <= 1 ) if clear_non_cond_mem: # clear non-conditioning memory of the surrounding frames self._clear_non_cond_mem_around_input(inference_state, frame_idx) # clear temporary outputs in `temp_output_dict_per_obj` for obj_temp_output_dict in temp_output_dict_per_obj.values(): obj_temp_output_dict[storage_key].clear() # edge case: if an output is added to "cond_frame_outputs", we remove any prior # output on the same frame in "non_cond_frame_outputs" for frame_idx in 
output_dict["cond_frame_outputs"]: output_dict["non_cond_frame_outputs"].pop(frame_idx, None) for obj_output_dict in inference_state["output_dict_per_obj"].values(): for frame_idx in obj_output_dict["cond_frame_outputs"]: obj_output_dict["non_cond_frame_outputs"].pop(frame_idx, None) for frame_idx in consolidated_frame_inds["cond_frame_outputs"]: assert frame_idx in output_dict["cond_frame_outputs"] consolidated_frame_inds["non_cond_frame_outputs"].discard(frame_idx) # Make sure that the frame indices in "consolidated_frame_inds" are exactly those frames # with either points or mask inputs (which should be true under a correct workflow). all_consolidated_frame_inds = ( consolidated_frame_inds["cond_frame_outputs"] | consolidated_frame_inds["non_cond_frame_outputs"] ) input_frames_inds = set() for point_inputs_per_frame in inference_state["point_inputs_per_obj"].values(): input_frames_inds.update(point_inputs_per_frame.keys()) for mask_inputs_per_frame in inference_state["mask_inputs_per_obj"].values(): input_frames_inds.update(mask_inputs_per_frame.keys()) assert all_consolidated_frame_inds == input_frames_inds @torch.inference_mode() def propagate_in_video( self, inference_state, start_frame_idx=None, max_frame_num_to_track=None, reverse=False, ): """Propagate the input points across frames to track in the entire video.""" self.propagate_in_video_preflight(inference_state) output_dict = inference_state["output_dict"] consolidated_frame_inds = inference_state["consolidated_frame_inds"] obj_ids = inference_state["obj_ids"] num_frames = inference_state["num_frames"] batch_size = self._get_obj_num(inference_state) if len(output_dict["cond_frame_outputs"]) == 0: raise RuntimeError("No points are provided; please add points first") clear_non_cond_mem = self.clear_non_cond_mem_around_input and ( self.clear_non_cond_mem_for_multi_obj or batch_size <= 1 ) # set start index, end index, and processing order if start_frame_idx is None: # default: start from the earliest frame 
with input points start_frame_idx = min(output_dict["cond_frame_outputs"]) if max_frame_num_to_track is None: # default: track all the frames in the video max_frame_num_to_track = num_frames if reverse: end_frame_idx = max(start_frame_idx - max_frame_num_to_track, 0) if start_frame_idx > 0: processing_order = range(start_frame_idx, end_frame_idx - 1, -1) else: processing_order = [] # skip reverse tracking if starting from frame 0 else: end_frame_idx = min( start_frame_idx + max_frame_num_to_track, num_frames - 1 ) processing_order = range(start_frame_idx, end_frame_idx + 1) for frame_idx in tqdm(processing_order, desc="propagate in video"): # We skip those frames already in consolidated outputs (these are frames # that received input clicks or mask). Note that we cannot directly run # batched forward on them via `_run_single_frame_inference` because the # number of clicks on each object might be different. if frame_idx in consolidated_frame_inds["cond_frame_outputs"]: storage_key = "cond_frame_outputs" current_out = output_dict[storage_key][frame_idx] pred_masks = current_out["pred_masks"] if clear_non_cond_mem: # clear non-conditioning memory of the surrounding frames self._clear_non_cond_mem_around_input(inference_state, frame_idx) elif frame_idx in consolidated_frame_inds["non_cond_frame_outputs"]: storage_key = "non_cond_frame_outputs" current_out = output_dict[storage_key][frame_idx] pred_masks = current_out["pred_masks"] else: storage_key = "non_cond_frame_outputs" current_out, pred_masks = self._run_single_frame_inference( inference_state=inference_state, output_dict=output_dict, frame_idx=frame_idx, batch_size=batch_size, is_init_cond_frame=False, point_inputs=None, mask_inputs=None, reverse=reverse, run_mem_encoder=True, ) output_dict[storage_key][frame_idx] = current_out # Create slices of per-object outputs for subsequent interaction with each # individual object after tracking. 
self._add_output_per_object( inference_state, frame_idx, current_out, storage_key ) inference_state["frames_already_tracked"][frame_idx] = {"reverse": reverse} # Resize the output mask to the original video resolution (we directly use # the mask scores on GPU for output to avoid any CPU conversion in between) _, video_res_masks = self._get_orig_video_res_output( inference_state, pred_masks ) yield frame_idx, obj_ids, video_res_masks def _add_output_per_object( self, inference_state, frame_idx, current_out, storage_key ): """ Split a multi-object output into per-object output slices and add them into `output_dict_per_obj`. The resulting slices share the same tensor storage. """ maskmem_features = current_out["maskmem_features"] assert maskmem_features is None or isinstance(maskmem_features, torch.Tensor) maskmem_pos_enc = current_out["maskmem_pos_enc"] assert maskmem_pos_enc is None or isinstance(maskmem_pos_enc, list) output_dict_per_obj = inference_state["output_dict_per_obj"] for obj_idx, obj_output_dict in output_dict_per_obj.items(): obj_slice = slice(obj_idx, obj_idx + 1) obj_out = { "maskmem_features": None, "maskmem_pos_enc": None, "pred_masks": current_out["pred_masks"][obj_slice], "obj_ptr": current_out["obj_ptr"][obj_slice], } if maskmem_features is not None: obj_out["maskmem_features"] = maskmem_features[obj_slice] if maskmem_pos_enc is not None: obj_out["maskmem_pos_enc"] = [x[obj_slice] for x in maskmem_pos_enc] obj_output_dict[storage_key][frame_idx] = obj_out @torch.inference_mode() def reset_state(self, inference_state): """Remove all input points or mask in all frames throughout the video.""" self._reset_tracking_results(inference_state) # Remove all object ids inference_state["obj_id_to_idx"].clear() inference_state["obj_idx_to_id"].clear() inference_state["obj_ids"].clear() inference_state["point_inputs_per_obj"].clear() inference_state["mask_inputs_per_obj"].clear() inference_state["output_dict_per_obj"].clear() 
inference_state["temp_output_dict_per_obj"].clear() def _reset_tracking_results(self, inference_state): """Reset all tracking inputs and results across the videos.""" for v in inference_state["point_inputs_per_obj"].values(): v.clear() for v in inference_state["mask_inputs_per_obj"].values(): v.clear() for v in inference_state["output_dict_per_obj"].values(): v["cond_frame_outputs"].clear() v["non_cond_frame_outputs"].clear() for v in inference_state["temp_output_dict_per_obj"].values(): v["cond_frame_outputs"].clear() v["non_cond_frame_outputs"].clear() inference_state["output_dict"]["cond_frame_outputs"].clear() inference_state["output_dict"]["non_cond_frame_outputs"].clear() inference_state["consolidated_frame_inds"]["cond_frame_outputs"].clear() inference_state["consolidated_frame_inds"]["non_cond_frame_outputs"].clear() inference_state["tracking_has_started"] = False inference_state["frames_already_tracked"].clear() def _get_image_feature(self, inference_state, frame_idx, batch_size): """Compute the image features on a given frame.""" # Look up in the cache first image, backbone_out = inference_state["cached_features"].get( frame_idx, (None, None) ) if backbone_out is None: # Cache miss -- we will run inference on a single image device = inference_state["device"] image = inference_state["images"][frame_idx].to(device).float().unsqueeze(0) backbone_out = self.forward_image(image) # Cache the most recent frame's feature (for repeated interactions with # a frame; we can use an LRU cache for more frames in the future). 
inference_state["cached_features"] = {frame_idx: (image, backbone_out)} # expand the features to have the same dimension as the number of objects expanded_image = image.expand(batch_size, -1, -1, -1) expanded_backbone_out = { "backbone_fpn": backbone_out["backbone_fpn"].copy(), "vision_pos_enc": backbone_out["vision_pos_enc"].copy(), } for i, feat in enumerate(expanded_backbone_out["backbone_fpn"]): expanded_backbone_out["backbone_fpn"][i] = feat.expand( batch_size, -1, -1, -1 ) for i, pos in enumerate(expanded_backbone_out["vision_pos_enc"]): pos = pos.expand(batch_size, -1, -1, -1) expanded_backbone_out["vision_pos_enc"][i] = pos features = self._prepare_backbone_features(expanded_backbone_out) features = (expanded_image,) + features return features def _run_single_frame_inference( self, inference_state, output_dict, frame_idx, batch_size, is_init_cond_frame, point_inputs, mask_inputs, reverse, run_mem_encoder, prev_sam_mask_logits=None, ): """Run tracking on a single frame based on current inputs and previous memory.""" # Retrieve correct image features ( _, _, current_vision_feats, current_vision_pos_embeds, feat_sizes, ) = self._get_image_feature(inference_state, frame_idx, batch_size) # point and mask should not appear as input simultaneously on the same frame assert point_inputs is None or mask_inputs is None current_out = self.track_step( frame_idx=frame_idx, is_init_cond_frame=is_init_cond_frame, current_vision_feats=current_vision_feats, current_vision_pos_embeds=current_vision_pos_embeds, feat_sizes=feat_sizes, point_inputs=point_inputs, mask_inputs=mask_inputs, output_dict=output_dict, num_frames=inference_state["num_frames"], track_in_reverse=reverse, run_mem_encoder=run_mem_encoder, prev_sam_mask_logits=prev_sam_mask_logits, ) # optionally offload the output to CPU memory to save GPU space storage_device = inference_state["storage_device"] maskmem_features = current_out["maskmem_features"] if maskmem_features is not None: maskmem_features = 
maskmem_features.to(torch.bfloat16)
            maskmem_features = maskmem_features.to(storage_device, non_blocking=True)
        pred_masks_gpu = current_out["pred_masks"]
        # potentially fill holes in the predicted masks
        if self.fill_hole_area > 0:
            pred_masks_gpu = fill_holes_in_mask_scores(
                pred_masks_gpu, self.fill_hole_area
            )
        pred_masks = pred_masks_gpu.to(storage_device, non_blocking=True)
        # "maskmem_pos_enc" is the same across frames, so we only need to store one copy of it
        maskmem_pos_enc = self._get_maskmem_pos_enc(inference_state, current_out)
        # object pointer is a small tensor, so we always keep it on GPU memory for fast access
        obj_ptr = current_out["obj_ptr"]
        # make a compact version of this frame's output to reduce the state size
        compact_current_out = {
            "maskmem_features": maskmem_features,
            "maskmem_pos_enc": maskmem_pos_enc,
            "pred_masks": pred_masks,
            "obj_ptr": obj_ptr,
        }
        return compact_current_out, pred_masks_gpu

    def _run_memory_encoder(
        self, inference_state, frame_idx, batch_size, high_res_masks, is_mask_from_pts
    ):
        """
        Run the memory encoder on `high_res_masks`. This is usually after applying
        non-overlapping constraints to object scores. Since their scores changed, their
        memory also needs to be computed again with the memory encoder.
""" # Retrieve correct image features _, _, current_vision_feats, _, feat_sizes = self._get_image_feature( inference_state, frame_idx, batch_size ) maskmem_features, maskmem_pos_enc = self._encode_new_memory( current_vision_feats=current_vision_feats, feat_sizes=feat_sizes, pred_masks_high_res=high_res_masks, is_mask_from_pts=is_mask_from_pts, ) # optionally offload the output to CPU memory to save GPU space storage_device = inference_state["storage_device"] maskmem_features = maskmem_features.to(torch.bfloat16) maskmem_features = maskmem_features.to(storage_device, non_blocking=True) # "maskmem_pos_enc" is the same across frames, so we only need to store one copy of it maskmem_pos_enc = self._get_maskmem_pos_enc( inference_state, {"maskmem_pos_enc": maskmem_pos_enc} ) return maskmem_features, maskmem_pos_enc def _get_maskmem_pos_enc(self, inference_state, current_out): """ `maskmem_pos_enc` is the same across frames and objects, so we cache it as a constant in the inference session to reduce session storage size. """ model_constants = inference_state["constants"] # "out_maskmem_pos_enc" should be either a list of tensors or None out_maskmem_pos_enc = current_out["maskmem_pos_enc"] if out_maskmem_pos_enc is not None: if "maskmem_pos_enc" not in model_constants: assert isinstance(out_maskmem_pos_enc, list) # only take the slice for one object, since it's same across objects maskmem_pos_enc = [x[0:1].clone() for x in out_maskmem_pos_enc] model_constants["maskmem_pos_enc"] = maskmem_pos_enc else: maskmem_pos_enc = model_constants["maskmem_pos_enc"] # expand the cached maskmem_pos_enc to the actual batch size batch_size = out_maskmem_pos_enc[0].size(0) expanded_maskmem_pos_enc = [ x.expand(batch_size, -1, -1, -1) for x in maskmem_pos_enc ] else: expanded_maskmem_pos_enc = None return expanded_maskmem_pos_enc def _clear_non_cond_mem_around_input(self, inference_state, frame_idx): """ Remove the non-conditioning memory around the input frame. 
When users provide correction clicks, the surrounding frames' non-conditioning memories can still contain outdated object appearance information and could confuse the model. This method clears those non-conditioning memories surrounding the interacted frame to avoid giving the model both old and new information about the object. """ r = self.memory_temporal_stride_for_eval frame_idx_begin = frame_idx - r * self.num_maskmem frame_idx_end = frame_idx + r * self.num_maskmem output_dict = inference_state["output_dict"] non_cond_frame_outputs = output_dict["non_cond_frame_outputs"] for t in range(frame_idx_begin, frame_idx_end + 1): non_cond_frame_outputs.pop(t, None) for obj_output_dict in inference_state["output_dict_per_obj"].values(): obj_output_dict["non_cond_frame_outputs"].pop(t, None) ================================================ FILE: auto-seg/sam2/utils/__init__.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. ================================================ FILE: auto-seg/sam2/utils/amg.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. import math from copy import deepcopy from itertools import product from typing import Any, Dict, Generator, ItemsView, List, Tuple import numpy as np import torch # Very lightly adapted from https://github.com/facebookresearch/segment-anything/blob/main/segment_anything/utils/amg.py class MaskData: """ A structure for storing masks and their related data in batched format. Implements basic filtering and concatenation. 
""" def __init__(self, **kwargs) -> None: for v in kwargs.values(): assert isinstance( v, (list, np.ndarray, torch.Tensor) ), "MaskData only supports list, numpy arrays, and torch tensors." self._stats = dict(**kwargs) def __setitem__(self, key: str, item: Any) -> None: assert isinstance( item, (list, np.ndarray, torch.Tensor) ), "MaskData only supports list, numpy arrays, and torch tensors." self._stats[key] = item def __delitem__(self, key: str) -> None: del self._stats[key] def __getitem__(self, key: str) -> Any: return self._stats[key] def items(self) -> ItemsView[str, Any]: return self._stats.items() def filter(self, keep: torch.Tensor) -> None: for k, v in self._stats.items(): if v is None: self._stats[k] = None elif isinstance(v, torch.Tensor): self._stats[k] = v[torch.as_tensor(keep, device=v.device)] elif isinstance(v, np.ndarray): self._stats[k] = v[keep.detach().cpu().numpy()] elif isinstance(v, list) and keep.dtype == torch.bool: self._stats[k] = [a for i, a in enumerate(v) if keep[i]] elif isinstance(v, list): self._stats[k] = [v[i] for i in keep] else: raise TypeError(f"MaskData key {k} has an unsupported type {type(v)}.") def cat(self, new_stats: "MaskData") -> None: for k, v in new_stats.items(): if k not in self._stats or self._stats[k] is None: self._stats[k] = deepcopy(v) elif isinstance(v, torch.Tensor): self._stats[k] = torch.cat([self._stats[k], v], dim=0) elif isinstance(v, np.ndarray): self._stats[k] = np.concatenate([self._stats[k], v], axis=0) elif isinstance(v, list): self._stats[k] = self._stats[k] + deepcopy(v) else: raise TypeError(f"MaskData key {k} has an unsupported type {type(v)}.") def to_numpy(self) -> None: for k, v in self._stats.items(): if isinstance(v, torch.Tensor): self._stats[k] = v.float().detach().cpu().numpy() def is_box_near_crop_edge( boxes: torch.Tensor, crop_box: List[int], orig_box: List[int], atol: float = 20.0 ) -> torch.Tensor: """Filter masks at the edge of a crop, but not at the edge of the original image.""" 
crop_box_torch = torch.as_tensor(crop_box, dtype=torch.float, device=boxes.device) orig_box_torch = torch.as_tensor(orig_box, dtype=torch.float, device=boxes.device) boxes = uncrop_boxes_xyxy(boxes, crop_box).float() near_crop_edge = torch.isclose(boxes, crop_box_torch[None, :], atol=atol, rtol=0) near_image_edge = torch.isclose(boxes, orig_box_torch[None, :], atol=atol, rtol=0) near_crop_edge = torch.logical_and(near_crop_edge, ~near_image_edge) return torch.any(near_crop_edge, dim=1) def box_xyxy_to_xywh(box_xyxy: torch.Tensor) -> torch.Tensor: box_xywh = deepcopy(box_xyxy) box_xywh[2] = box_xywh[2] - box_xywh[0] box_xywh[3] = box_xywh[3] - box_xywh[1] return box_xywh def batch_iterator(batch_size: int, *args) -> Generator[List[Any], None, None]: assert len(args) > 0 and all( len(a) == len(args[0]) for a in args ), "Batched iteration must have inputs of all the same size." n_batches = len(args[0]) // batch_size + int(len(args[0]) % batch_size != 0) for b in range(n_batches): yield [arg[b * batch_size : (b + 1) * batch_size] for arg in args] def mask_to_rle_pytorch(tensor: torch.Tensor) -> List[Dict[str, Any]]: """ Encodes masks to an uncompressed RLE, in the format expected by pycoco tools. 
""" # Put in fortran order and flatten h,w b, h, w = tensor.shape tensor = tensor.permute(0, 2, 1).flatten(1) # Compute change indices diff = tensor[:, 1:] ^ tensor[:, :-1] change_indices = diff.nonzero() # Encode run length out = [] for i in range(b): cur_idxs = change_indices[change_indices[:, 0] == i, 1] cur_idxs = torch.cat( [ torch.tensor([0], dtype=cur_idxs.dtype, device=cur_idxs.device), cur_idxs + 1, torch.tensor([h * w], dtype=cur_idxs.dtype, device=cur_idxs.device), ] ) btw_idxs = cur_idxs[1:] - cur_idxs[:-1] counts = [] if tensor[i, 0] == 0 else [0] counts.extend(btw_idxs.detach().cpu().tolist()) out.append({"size": [h, w], "counts": counts}) return out def rle_to_mask(rle: Dict[str, Any]) -> np.ndarray: """Compute a binary mask from an uncompressed RLE.""" h, w = rle["size"] mask = np.empty(h * w, dtype=bool) idx = 0 parity = False for count in rle["counts"]: mask[idx : idx + count] = parity idx += count parity ^= True mask = mask.reshape(w, h) return mask.transpose() # Put in C order def area_from_rle(rle: Dict[str, Any]) -> int: return sum(rle["counts"][1::2]) def calculate_stability_score( masks: torch.Tensor, mask_threshold: float, threshold_offset: float ) -> torch.Tensor: """ Computes the stability score for a batch of masks. The stability score is the IoU between the binary masks obtained by thresholding the predicted mask logits at high and low values. """ # One mask is always contained inside the other. 
# Save memory by preventing unnecessary cast to torch.int64 intersections = ( (masks > (mask_threshold + threshold_offset)) .sum(-1, dtype=torch.int16) .sum(-1, dtype=torch.int32) ) unions = ( (masks > (mask_threshold - threshold_offset)) .sum(-1, dtype=torch.int16) .sum(-1, dtype=torch.int32) ) return intersections / unions def build_point_grid(n_per_side: int) -> np.ndarray: """Generates a 2D grid of points evenly spaced in [0,1]x[0,1].""" offset = 1 / (2 * n_per_side) points_one_side = np.linspace(offset, 1 - offset, n_per_side) points_x = np.tile(points_one_side[None, :], (n_per_side, 1)) points_y = np.tile(points_one_side[:, None], (1, n_per_side)) points = np.stack([points_x, points_y], axis=-1).reshape(-1, 2) return points def build_all_layer_point_grids( n_per_side: int, n_layers: int, scale_per_layer: int ) -> List[np.ndarray]: """Generates point grids for all crop layers.""" points_by_layer = [] for i in range(n_layers + 1): n_points = int(n_per_side / (scale_per_layer**i)) points_by_layer.append(build_point_grid(n_points)) return points_by_layer def generate_crop_boxes( im_size: Tuple[int, ...], n_layers: int, overlap_ratio: float ) -> Tuple[List[List[int]], List[int]]: """ Generates a list of crop boxes of different sizes. Each layer has (2**i)**2 boxes for the ith layer. 
""" crop_boxes, layer_idxs = [], [] im_h, im_w = im_size short_side = min(im_h, im_w) # Original image crop_boxes.append([0, 0, im_w, im_h]) layer_idxs.append(0) def crop_len(orig_len, n_crops, overlap): return int(math.ceil((overlap * (n_crops - 1) + orig_len) / n_crops)) for i_layer in range(n_layers): n_crops_per_side = 2 ** (i_layer + 1) overlap = int(overlap_ratio * short_side * (2 / n_crops_per_side)) crop_w = crop_len(im_w, n_crops_per_side, overlap) crop_h = crop_len(im_h, n_crops_per_side, overlap) crop_box_x0 = [int((crop_w - overlap) * i) for i in range(n_crops_per_side)] crop_box_y0 = [int((crop_h - overlap) * i) for i in range(n_crops_per_side)] # Crops in XYWH format for x0, y0 in product(crop_box_x0, crop_box_y0): box = [x0, y0, min(x0 + crop_w, im_w), min(y0 + crop_h, im_h)] crop_boxes.append(box) layer_idxs.append(i_layer + 1) return crop_boxes, layer_idxs def uncrop_boxes_xyxy(boxes: torch.Tensor, crop_box: List[int]) -> torch.Tensor: x0, y0, _, _ = crop_box offset = torch.tensor([[x0, y0, x0, y0]], device=boxes.device) # Check if boxes has a channel dimension if len(boxes.shape) == 3: offset = offset.unsqueeze(1) return boxes + offset def uncrop_points(points: torch.Tensor, crop_box: List[int]) -> torch.Tensor: x0, y0, _, _ = crop_box offset = torch.tensor([[x0, y0]], device=points.device) # Check if points has a channel dimension if len(points.shape) == 3: offset = offset.unsqueeze(1) return points + offset def uncrop_masks( masks: torch.Tensor, crop_box: List[int], orig_h: int, orig_w: int ) -> torch.Tensor: x0, y0, x1, y1 = crop_box if x0 == 0 and y0 == 0 and x1 == orig_w and y1 == orig_h: return masks # Coordinate transform masks pad_x, pad_y = orig_w - (x1 - x0), orig_h - (y1 - y0) pad = (x0, pad_x - x0, y0, pad_y - y0) return torch.nn.functional.pad(masks, pad, value=0) def remove_small_regions( mask: np.ndarray, area_thresh: float, mode: str ) -> Tuple[np.ndarray, bool]: """ Removes small disconnected regions and holes in a mask. 
Returns the mask and an indicator of if the mask has been modified. """ import cv2 # type: ignore assert mode in ["holes", "islands"] correct_holes = mode == "holes" working_mask = (correct_holes ^ mask).astype(np.uint8) n_labels, regions, stats, _ = cv2.connectedComponentsWithStats(working_mask, 8) sizes = stats[:, -1][1:] # Row 0 is background label small_regions = [i + 1 for i, s in enumerate(sizes) if s < area_thresh] if len(small_regions) == 0: return mask, False fill_labels = [0] + small_regions if not correct_holes: fill_labels = [i for i in range(n_labels) if i not in fill_labels] # If every region is below threshold, keep largest if len(fill_labels) == 0: fill_labels = [int(np.argmax(sizes)) + 1] mask = np.isin(regions, fill_labels) return mask, True def coco_encode_rle(uncompressed_rle: Dict[str, Any]) -> Dict[str, Any]: from pycocotools import mask as mask_utils # type: ignore h, w = uncompressed_rle["size"] rle = mask_utils.frPyObjects(uncompressed_rle, h, w) rle["counts"] = rle["counts"].decode("utf-8") # Necessary to serialize with json return rle def batched_mask_to_box(masks: torch.Tensor) -> torch.Tensor: """ Calculates boxes in XYXY format around masks. Return [0,0,0,0] for an empty mask. For input shape C1xC2x...xHxW, the output shape is C1xC2x...x4. 
""" # torch.max below raises an error on empty inputs, just skip in this case if torch.numel(masks) == 0: return torch.zeros(*masks.shape[:-2], 4, device=masks.device) # Normalize shape to CxHxW shape = masks.shape h, w = shape[-2:] if len(shape) > 2: masks = masks.flatten(0, -3) else: masks = masks.unsqueeze(0) # Get top and bottom edges in_height, _ = torch.max(masks, dim=-1) in_height_coords = in_height * torch.arange(h, device=in_height.device)[None, :] bottom_edges, _ = torch.max(in_height_coords, dim=-1) in_height_coords = in_height_coords + h * (~in_height) top_edges, _ = torch.min(in_height_coords, dim=-1) # Get left and right edges in_width, _ = torch.max(masks, dim=-2) in_width_coords = in_width * torch.arange(w, device=in_width.device)[None, :] right_edges, _ = torch.max(in_width_coords, dim=-1) in_width_coords = in_width_coords + w * (~in_width) left_edges, _ = torch.min(in_width_coords, dim=-1) # If the mask is empty the right edge will be to the left of the left edge. # Replace these boxes with [0, 0, 0, 0] empty_filter = (right_edges < left_edges) | (bottom_edges < top_edges) out = torch.stack([left_edges, top_edges, right_edges, bottom_edges], dim=-1) out = out * (~empty_filter).unsqueeze(-1) # Return to original shape if len(shape) > 2: out = out.reshape(*shape[:-2], 4) else: out = out[0] return out ================================================ FILE: auto-seg/sam2/utils/misc.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. 
import os import warnings from threading import Thread import numpy as np import torch from PIL import Image from tqdm import tqdm def get_sdpa_settings(): if torch.cuda.is_available(): old_gpu = torch.cuda.get_device_properties(0).major < 7 # only use Flash Attention on Ampere (8.0) or newer GPUs use_flash_attn = torch.cuda.get_device_properties(0).major >= 8 if not use_flash_attn: warnings.warn( "Flash Attention is disabled as it requires a GPU with Ampere (8.0) CUDA capability.", category=UserWarning, stacklevel=2, ) # keep math kernel for PyTorch versions before 2.2 (Flash Attention v2 is only # available on PyTorch 2.2+, while Flash Attention v1 cannot handle all cases) pytorch_version = tuple(int(v) for v in torch.__version__.split(".")[:2]) if pytorch_version < (2, 2): warnings.warn( f"You are using PyTorch {torch.__version__} without Flash Attention v2 support. " "Consider upgrading to PyTorch 2.2+ for Flash Attention v2 (which could be faster).", category=UserWarning, stacklevel=2, ) math_kernel_on = pytorch_version < (2, 2) or not use_flash_attn else: old_gpu = True use_flash_attn = False math_kernel_on = True return old_gpu, use_flash_attn, math_kernel_on def get_connected_components(mask): """ Get the connected components (8-connectivity) of binary masks of shape (N, 1, H, W). Inputs: - mask: A binary mask tensor of shape (N, 1, H, W), where 1 is foreground and 0 is background. Outputs: - labels: A tensor of shape (N, 1, H, W) containing the connected component labels for foreground pixels and 0 for background pixels. - counts: A tensor of shape (N, 1, H, W) containing the area of the connected components for foreground pixels and 0 for background pixels. 
""" from sam2 import _C return _C.get_connected_componnets(mask.to(torch.uint8).contiguous()) def mask_to_box(masks: torch.Tensor): """ compute bounding box given an input mask Inputs: - masks: [B, 1, H, W] masks, dtype=torch.Tensor Returns: - box_coords: [B, 1, 4], contains (x, y) coordinates of top left and bottom right box corners, dtype=torch.Tensor """ B, _, h, w = masks.shape device = masks.device xs = torch.arange(w, device=device, dtype=torch.int32) ys = torch.arange(h, device=device, dtype=torch.int32) grid_xs, grid_ys = torch.meshgrid(xs, ys, indexing="xy") grid_xs = grid_xs[None, None, ...].expand(B, 1, h, w) grid_ys = grid_ys[None, None, ...].expand(B, 1, h, w) min_xs, _ = torch.min(torch.where(masks, grid_xs, w).flatten(-2), dim=-1) max_xs, _ = torch.max(torch.where(masks, grid_xs, -1).flatten(-2), dim=-1) min_ys, _ = torch.min(torch.where(masks, grid_ys, h).flatten(-2), dim=-1) max_ys, _ = torch.max(torch.where(masks, grid_ys, -1).flatten(-2), dim=-1) bbox_coords = torch.stack((min_xs, min_ys, max_xs, max_ys), dim=-1) return bbox_coords def _load_img_as_tensor(img_path, image_size): img_pil = Image.open(img_path) img_np = np.array(img_pil.convert("RGB").resize((image_size, image_size))) if img_np.dtype == np.uint8: # np.uint8 is expected for JPEG images img_np = img_np / 255.0 else: raise RuntimeError(f"Unknown image dtype: {img_np.dtype} on {img_path}") img = torch.from_numpy(img_np).permute(2, 0, 1) video_width, video_height = img_pil.size # the original video size return img, video_height, video_width class AsyncVideoFrameLoader: """ A list of video frames to be load asynchronously without blocking session start. 
""" def __init__( self, img_paths, image_size, offload_video_to_cpu, img_mean, img_std, compute_device, ): self.img_paths = img_paths self.image_size = image_size self.offload_video_to_cpu = offload_video_to_cpu self.img_mean = img_mean self.img_std = img_std # items in `self.images` will be loaded asynchronously self.images = [None] * len(img_paths) # catch and raise any exceptions in the async loading thread self.exception = None # video_height and video_width be filled when loading the first image self.video_height = None self.video_width = None self.compute_device = compute_device # load the first frame to fill video_height and video_width and also # to cache it (since it's most likely where the user will click) self.__getitem__(0) # load the rest of frames asynchronously without blocking the session start def _load_frames(): try: for n in tqdm(range(len(self.images)), desc="frame loading (JPEG)"): self.__getitem__(n) except Exception as e: self.exception = e self.thread = Thread(target=_load_frames, daemon=True) self.thread.start() def __getitem__(self, index): if self.exception is not None: raise RuntimeError("Failure in frame loading thread") from self.exception img = self.images[index] if img is not None: return img img, video_height, video_width = _load_img_as_tensor( self.img_paths[index], self.image_size ) self.video_height = video_height self.video_width = video_width # normalize by mean and std img -= self.img_mean img /= self.img_std if not self.offload_video_to_cpu: img = img.to(self.compute_device, non_blocking=True) self.images[index] = img return img def __len__(self): return len(self.images) def load_video_frames( video_path, image_size, offload_video_to_cpu, img_mean=(0.485, 0.456, 0.406), img_std=(0.229, 0.224, 0.225), async_loading_frames=False, compute_device=torch.device("cuda"), ): """ Load the video frames from a directory of JPEG files (".jpg" format). 
    The frames are resized to image_size x image_size and are loaded to GPU if
    `offload_video_to_cpu` is `False` and to CPU if `offload_video_to_cpu` is `True`.

    Frames can be loaded asynchronously by setting `async_loading_frames` to `True`.
    """
    if isinstance(video_path, str) and os.path.isdir(video_path):
        jpg_folder = video_path
    else:
        raise NotImplementedError(
            "Only JPEG frames are supported at this moment. For video files, you may use "
            "ffmpeg (https://ffmpeg.org/) to extract frames into a folder of JPEG files, such as \n"
            "```\n"
            "ffmpeg -i <your_video>.mp4 -q:v 2 -start_number 0 <output_dir>/'%05d.jpg'\n"
            "```\n"
            "where `-q:v` generates high-quality JPEG frames and `-start_number 0` asks "
            "ffmpeg to start the JPEG file from 00000.jpg."
        )

    frame_names = [
        p
        for p in os.listdir(jpg_folder)
        if os.path.splitext(p)[-1] in [".jpg", ".jpeg", ".JPG", ".JPEG", ".png"]
    ]
    try:
        frame_names.sort(key=lambda p: int(os.path.splitext(p)[0]))
    except ValueError:
        # fall back to lexicographic order for non-numeric frame names
        frame_names.sort(key=lambda p: os.path.splitext(p)[0])
    num_frames = len(frame_names)
    if num_frames == 0:
        raise RuntimeError(f"no images found in {jpg_folder}")
    img_paths = [os.path.join(jpg_folder, frame_name) for frame_name in frame_names]
    img_mean = torch.tensor(img_mean, dtype=torch.float32)[:, None, None]
    img_std = torch.tensor(img_std, dtype=torch.float32)[:, None, None]

    if async_loading_frames:
        lazy_images = AsyncVideoFrameLoader(
            img_paths,
            image_size,
            offload_video_to_cpu,
            img_mean,
            img_std,
            compute_device,
        )
        return lazy_images, lazy_images.video_height, lazy_images.video_width

    images = torch.zeros(num_frames, 3, image_size, image_size, dtype=torch.float32)
    for n, img_path in enumerate(tqdm(img_paths, desc="frame loading (JPEG)")):
        images[n], video_height, video_width = _load_img_as_tensor(img_path, image_size)
    if not offload_video_to_cpu:
        images = images.to(compute_device)
        img_mean = img_mean.to(compute_device)
        img_std = img_std.to(compute_device)
    # normalize by mean and std
    images -= img_mean
    images /= img_std
    return images, video_height,
video_width


def fill_holes_in_mask_scores(mask, max_area):
    """
    A post-processor to fill small holes in mask scores with area under `max_area`.
    """
    # Holes are those connected components in background with area <= max_area
    # (background regions are those with mask scores <= 0)
    assert max_area > 0, "max_area must be positive"
    input_mask = mask
    try:
        labels, areas = get_connected_components(mask <= 0)
        is_hole = (labels > 0) & (areas <= max_area)
        # We fill holes with a small positive mask score (0.1) to change them to foreground.
        mask = torch.where(is_hole, 0.1, mask)
    except Exception as e:
        # Skip the post-processing step on removing small holes if the CUDA kernel fails
        warnings.warn(
            f"{e}\n\nSkipping the post-processing step due to the error above. You can "
            "still use SAM 2 and it's OK to ignore the error above, although some post-processing "
            "functionality may be limited (which doesn't affect the results in most cases; see "
            "https://github.com/facebookresearch/segment-anything-2/blob/main/INSTALL.md).",
            category=UserWarning,
            stacklevel=2,
        )
        mask = input_mask
    return mask


def concat_points(old_point_inputs, new_points, new_labels):
    """Add new points and labels to previous point inputs (add at the end)."""
    if old_point_inputs is None:
        points, labels = new_points, new_labels
    else:
        points = torch.cat([old_point_inputs["point_coords"], new_points], dim=1)
        labels = torch.cat([old_point_inputs["point_labels"], new_labels], dim=1)
    return {"point_coords": points, "point_labels": labels}


================================================
FILE: auto-seg/sam2/utils/transforms.py
================================================
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
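`concat_points` above is how repeated user clicks accumulate into a single prompt: coordinates and labels are concatenated along the points dimension (dim=1), so an object's prompt grows from (B, 1, 2) to (B, N, 2) as clicks arrive. A quick sketch of that accumulation, with the function duplicated verbatim so the snippet is self-contained (the click values are made up for illustration):

```python
import torch

def concat_points(old_point_inputs, new_points, new_labels):
    """Add new points and labels to previous point inputs (add at the end)."""
    if old_point_inputs is None:
        points, labels = new_points, new_labels
    else:
        points = torch.cat([old_point_inputs["point_coords"], new_points], dim=1)
        labels = torch.cat([old_point_inputs["point_labels"], new_labels], dim=1)
    return {"point_coords": points, "point_labels": labels}

# first positive click (label 1), then a negative correction click (label 0)
first = concat_points(None, torch.tensor([[[10.0, 20.0]]]), torch.tensor([[1]]))
both = concat_points(first, torch.tensor([[[30.0, 40.0]]]), torch.tensor([[0]]))
assert both["point_coords"].shape == (1, 2, 2)
assert both["point_labels"].tolist() == [[1, 0]]
```

Because the per-object click counts differ, frames with clicks cannot be batched through `_run_single_frame_inference`, which is exactly why the propagation loop earlier skips frames already in the consolidated outputs.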
import warnings

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.transforms import Normalize, Resize, ToTensor


class SAM2Transforms(nn.Module):
    def __init__(
        self, resolution, mask_threshold, max_hole_area=0.0, max_sprinkle_area=0.0
    ):
        """
        Transforms for SAM2.
        """
        super().__init__()
        self.resolution = resolution
        self.mask_threshold = mask_threshold
        self.max_hole_area = max_hole_area
        self.max_sprinkle_area = max_sprinkle_area
        self.mean = [0.485, 0.456, 0.406]
        self.std = [0.229, 0.224, 0.225]
        self.to_tensor = ToTensor()
        self.transforms = torch.jit.script(
            nn.Sequential(
                Resize((self.resolution, self.resolution)),
                Normalize(self.mean, self.std),
            )
        )

    def __call__(self, x):
        x = self.to_tensor(x)
        return self.transforms(x)

    def forward_batch(self, img_list):
        img_batch = [self.transforms(self.to_tensor(img)) for img in img_list]
        img_batch = torch.stack(img_batch, dim=0)
        return img_batch

    def transform_coords(
        self, coords: torch.Tensor, normalize=False, orig_hw=None
    ) -> torch.Tensor:
        """
        Expects a torch tensor with length 2 in the last dimension. The coordinates
        can be in absolute image coordinates or already normalized to [0, 1]. If the
        coords are in absolute image coordinates, `normalize` should be set to True
        and the original image size is required.

        Returns coordinates scaled to the model's input resolution (i.e., in
        [0, self.resolution]), which is what the SAM2 model expects.
        """
        if normalize:
            assert orig_hw is not None
            h, w = orig_hw
            coords = coords.clone()
            coords[..., 0] = coords[..., 0] / w
            coords[..., 1] = coords[..., 1] / h

        coords = coords * self.resolution  # unnormalize coords
        return coords

    def transform_boxes(
        self, boxes: torch.Tensor, normalize=False, orig_hw=None
    ) -> torch.Tensor:
        """
        Expects a tensor of shape Bx4. The coordinates can be in absolute image
        coordinates or already normalized to [0, 1]; if the coords are in absolute
        image coordinates, `normalize` should be set to True and the original image
        size is required.
""" boxes = self.transform_coords(boxes.reshape(-1, 2, 2), normalize, orig_hw) return boxes def postprocess_masks(self, masks: torch.Tensor, orig_hw) -> torch.Tensor: """ Perform PostProcessing on output masks. """ from sam2.utils.misc import get_connected_components masks = masks.float() input_masks = masks mask_flat = masks.flatten(0, 1).unsqueeze(1) # flatten as 1-channel image try: if self.max_hole_area > 0: # Holes are those connected components in background with area <= self.fill_hole_area # (background regions are those with mask scores <= self.mask_threshold) labels, areas = get_connected_components( mask_flat <= self.mask_threshold ) is_hole = (labels > 0) & (areas <= self.max_hole_area) is_hole = is_hole.reshape_as(masks) # We fill holes with a small positive mask score (10.0) to change them to foreground. masks = torch.where(is_hole, self.mask_threshold + 10.0, masks) if self.max_sprinkle_area > 0: labels, areas = get_connected_components( mask_flat > self.mask_threshold ) is_hole = (labels > 0) & (areas <= self.max_sprinkle_area) is_hole = is_hole.reshape_as(masks) # We fill holes with negative mask score (-10.0) to change them to background. masks = torch.where(is_hole, self.mask_threshold - 10.0, masks) except Exception as e: # Skip the post-processing step if the CUDA kernel fails warnings.warn( f"{e}\n\nSkipping the post-processing step due to the error above. You can " "still use SAM 2 and it's OK to ignore the error above, although some post-processing " "functionality may be limited (which doesn't affect the results in most cases; see " "https://github.com/facebookresearch/segment-anything-2/blob/main/INSTALL.md).", category=UserWarning, stacklevel=2, ) masks = input_masks masks = F.interpolate(masks, orig_hw, mode="bilinear", align_corners=False) return masks ================================================ FILE: auto-seg/sam2_configs/__init__.py ================================================ # Copyright (c) Meta Platforms, Inc. 
and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. ================================================ FILE: auto-seg/sam2_configs/sam2_hiera_b+.yaml ================================================ # @package _global_ # Model model: _target_: sam2.modeling.sam2_base.SAM2Base image_encoder: _target_: sam2.modeling.backbones.image_encoder.ImageEncoder scalp: 1 trunk: _target_: sam2.modeling.backbones.hieradet.Hiera embed_dim: 112 num_heads: 2 neck: _target_: sam2.modeling.backbones.image_encoder.FpnNeck position_encoding: _target_: sam2.modeling.position_encoding.PositionEmbeddingSine num_pos_feats: 256 normalize: true scale: null temperature: 10000 d_model: 256 backbone_channel_list: [896, 448, 224, 112] fpn_top_down_levels: [2, 3] # output level 0 and 1 directly use the backbone features fpn_interp_model: nearest memory_attention: _target_: sam2.modeling.memory_attention.MemoryAttention d_model: 256 pos_enc_at_input: true layer: _target_: sam2.modeling.memory_attention.MemoryAttentionLayer activation: relu dim_feedforward: 2048 dropout: 0.1 pos_enc_at_attn: false self_attention: _target_: sam2.modeling.sam.transformer.RoPEAttention rope_theta: 10000.0 feat_sizes: [32, 32] embedding_dim: 256 num_heads: 1 downsample_rate: 1 dropout: 0.1 d_model: 256 pos_enc_at_cross_attn_keys: true pos_enc_at_cross_attn_queries: false cross_attention: _target_: sam2.modeling.sam.transformer.RoPEAttention rope_theta: 10000.0 feat_sizes: [32, 32] rope_k_repeat: True embedding_dim: 256 num_heads: 1 downsample_rate: 1 dropout: 0.1 kv_in_dim: 64 num_layers: 4 memory_encoder: _target_: sam2.modeling.memory_encoder.MemoryEncoder out_dim: 64 position_encoding: _target_: sam2.modeling.position_encoding.PositionEmbeddingSine num_pos_feats: 64 normalize: true scale: null temperature: 10000 mask_downsampler: _target_: sam2.modeling.memory_encoder.MaskDownSampler kernel_size: 3 stride: 2 
padding: 1 fuser: _target_: sam2.modeling.memory_encoder.Fuser layer: _target_: sam2.modeling.memory_encoder.CXBlock dim: 256 kernel_size: 7 padding: 3 layer_scale_init_value: 1e-6 use_dwconv: True # depth-wise convs num_layers: 2 num_maskmem: 7 image_size: 1024 # apply scaled sigmoid on mask logits for memory encoder, and directly feed input mask as output mask sigmoid_scale_for_mem_enc: 20.0 sigmoid_bias_for_mem_enc: -10.0 use_mask_input_as_output_without_sam: true # Memory directly_add_no_mem_embed: true # use high-resolution feature map in the SAM mask decoder use_high_res_features_in_sam: true # output 3 masks on the first click on initial conditioning frames multimask_output_in_sam: true # SAM heads iou_prediction_use_sigmoid: True # cross-attend to object pointers from other frames (based on SAM output tokens) in the encoder use_obj_ptrs_in_encoder: true add_tpos_enc_to_obj_ptrs: false only_obj_ptrs_in_the_past_for_eval: true # object occlusion prediction pred_obj_scores: true pred_obj_scores_mlp: true fixed_no_obj_ptr: true # multimask tracking settings multimask_output_for_tracking: true use_multimask_token_for_obj_ptr: true multimask_min_pt_num: 0 multimask_max_pt_num: 1 use_mlp_for_obj_ptr_proj: true # Compilation flag compile_image_encoder: False ================================================ FILE: auto-seg/sam2_configs/sam2_hiera_l.yaml ================================================ # @package _global_ # Model model: _target_: sam2.modeling.sam2_base.SAM2Base image_encoder: _target_: sam2.modeling.backbones.image_encoder.ImageEncoder scalp: 1 trunk: _target_: sam2.modeling.backbones.hieradet.Hiera embed_dim: 144 num_heads: 2 stages: [2, 6, 36, 4] global_att_blocks: [23, 33, 43] window_pos_embed_bkg_spatial_size: [7, 7] window_spec: [8, 4, 16, 8] neck: _target_: sam2.modeling.backbones.image_encoder.FpnNeck position_encoding: _target_: sam2.modeling.position_encoding.PositionEmbeddingSine num_pos_feats: 256 normalize: true scale: null temperature: 
10000 d_model: 256 backbone_channel_list: [1152, 576, 288, 144] fpn_top_down_levels: [2, 3] # output level 0 and 1 directly use the backbone features fpn_interp_model: nearest memory_attention: _target_: sam2.modeling.memory_attention.MemoryAttention d_model: 256 pos_enc_at_input: true layer: _target_: sam2.modeling.memory_attention.MemoryAttentionLayer activation: relu dim_feedforward: 2048 dropout: 0.1 pos_enc_at_attn: false self_attention: _target_: sam2.modeling.sam.transformer.RoPEAttention rope_theta: 10000.0 feat_sizes: [32, 32] embedding_dim: 256 num_heads: 1 downsample_rate: 1 dropout: 0.1 d_model: 256 pos_enc_at_cross_attn_keys: true pos_enc_at_cross_attn_queries: false cross_attention: _target_: sam2.modeling.sam.transformer.RoPEAttention rope_theta: 10000.0 feat_sizes: [32, 32] rope_k_repeat: True embedding_dim: 256 num_heads: 1 downsample_rate: 1 dropout: 0.1 kv_in_dim: 64 num_layers: 4 memory_encoder: _target_: sam2.modeling.memory_encoder.MemoryEncoder out_dim: 64 position_encoding: _target_: sam2.modeling.position_encoding.PositionEmbeddingSine num_pos_feats: 64 normalize: true scale: null temperature: 10000 mask_downsampler: _target_: sam2.modeling.memory_encoder.MaskDownSampler kernel_size: 3 stride: 2 padding: 1 fuser: _target_: sam2.modeling.memory_encoder.Fuser layer: _target_: sam2.modeling.memory_encoder.CXBlock dim: 256 kernel_size: 7 padding: 3 layer_scale_init_value: 1e-6 use_dwconv: True # depth-wise convs num_layers: 2 num_maskmem: 7 image_size: 1024 # apply scaled sigmoid on mask logits for memory encoder, and directly feed input mask as output mask sigmoid_scale_for_mem_enc: 20.0 sigmoid_bias_for_mem_enc: -10.0 use_mask_input_as_output_without_sam: true # Memory directly_add_no_mem_embed: true # use high-resolution feature map in the SAM mask decoder use_high_res_features_in_sam: true # output 3 masks on the first click on initial conditioning frames multimask_output_in_sam: true # SAM heads iou_prediction_use_sigmoid: True # 
cross-attend to object pointers from other frames (based on SAM output tokens) in the encoder use_obj_ptrs_in_encoder: true add_tpos_enc_to_obj_ptrs: false only_obj_ptrs_in_the_past_for_eval: true # object occlusion prediction pred_obj_scores: true pred_obj_scores_mlp: true fixed_no_obj_ptr: true # multimask tracking settings multimask_output_for_tracking: true use_multimask_token_for_obj_ptr: true multimask_min_pt_num: 0 multimask_max_pt_num: 1 use_mlp_for_obj_ptr_proj: true # Compilation flag compile_image_encoder: False ================================================ FILE: auto-seg/sam2_configs/sam2_hiera_s.yaml ================================================ # @package _global_ # Model model: _target_: sam2.modeling.sam2_base.SAM2Base image_encoder: _target_: sam2.modeling.backbones.image_encoder.ImageEncoder scalp: 1 trunk: _target_: sam2.modeling.backbones.hieradet.Hiera embed_dim: 96 num_heads: 1 stages: [1, 2, 11, 2] global_att_blocks: [7, 10, 13] window_pos_embed_bkg_spatial_size: [7, 7] neck: _target_: sam2.modeling.backbones.image_encoder.FpnNeck position_encoding: _target_: sam2.modeling.position_encoding.PositionEmbeddingSine num_pos_feats: 256 normalize: true scale: null temperature: 10000 d_model: 256 backbone_channel_list: [768, 384, 192, 96] fpn_top_down_levels: [2, 3] # output level 0 and 1 directly use the backbone features fpn_interp_model: nearest memory_attention: _target_: sam2.modeling.memory_attention.MemoryAttention d_model: 256 pos_enc_at_input: true layer: _target_: sam2.modeling.memory_attention.MemoryAttentionLayer activation: relu dim_feedforward: 2048 dropout: 0.1 pos_enc_at_attn: false self_attention: _target_: sam2.modeling.sam.transformer.RoPEAttention rope_theta: 10000.0 feat_sizes: [32, 32] embedding_dim: 256 num_heads: 1 downsample_rate: 1 dropout: 0.1 d_model: 256 pos_enc_at_cross_attn_keys: true pos_enc_at_cross_attn_queries: false cross_attention: _target_: sam2.modeling.sam.transformer.RoPEAttention rope_theta: 10000.0 
feat_sizes: [32, 32] rope_k_repeat: True embedding_dim: 256 num_heads: 1 downsample_rate: 1 dropout: 0.1 kv_in_dim: 64 num_layers: 4 memory_encoder: _target_: sam2.modeling.memory_encoder.MemoryEncoder out_dim: 64 position_encoding: _target_: sam2.modeling.position_encoding.PositionEmbeddingSine num_pos_feats: 64 normalize: true scale: null temperature: 10000 mask_downsampler: _target_: sam2.modeling.memory_encoder.MaskDownSampler kernel_size: 3 stride: 2 padding: 1 fuser: _target_: sam2.modeling.memory_encoder.Fuser layer: _target_: sam2.modeling.memory_encoder.CXBlock dim: 256 kernel_size: 7 padding: 3 layer_scale_init_value: 1e-6 use_dwconv: True # depth-wise convs num_layers: 2 num_maskmem: 7 image_size: 1024 # apply scaled sigmoid on mask logits for memory encoder, and directly feed input mask as output mask sigmoid_scale_for_mem_enc: 20.0 sigmoid_bias_for_mem_enc: -10.0 use_mask_input_as_output_without_sam: true # Memory directly_add_no_mem_embed: true # use high-resolution feature map in the SAM mask decoder use_high_res_features_in_sam: true # output 3 masks on the first click on initial conditioning frames multimask_output_in_sam: true # SAM heads iou_prediction_use_sigmoid: True # cross-attend to object pointers from other frames (based on SAM output tokens) in the encoder use_obj_ptrs_in_encoder: true add_tpos_enc_to_obj_ptrs: false only_obj_ptrs_in_the_past_for_eval: true # object occlusion prediction pred_obj_scores: true pred_obj_scores_mlp: true fixed_no_obj_ptr: true # multimask tracking settings multimask_output_for_tracking: true use_multimask_token_for_obj_ptr: true multimask_min_pt_num: 0 multimask_max_pt_num: 1 use_mlp_for_obj_ptr_proj: true # Compilation flag compile_image_encoder: False ================================================ FILE: auto-seg/sam2_configs/sam2_hiera_t.yaml ================================================ # @package _global_ # Model model: _target_: sam2.modeling.sam2_base.SAM2Base image_encoder: _target_: 
sam2.modeling.backbones.image_encoder.ImageEncoder scalp: 1 trunk: _target_: sam2.modeling.backbones.hieradet.Hiera embed_dim: 96 num_heads: 1 stages: [1, 2, 7, 2] global_att_blocks: [5, 7, 9] window_pos_embed_bkg_spatial_size: [7, 7] neck: _target_: sam2.modeling.backbones.image_encoder.FpnNeck position_encoding: _target_: sam2.modeling.position_encoding.PositionEmbeddingSine num_pos_feats: 256 normalize: true scale: null temperature: 10000 d_model: 256 backbone_channel_list: [768, 384, 192, 96] fpn_top_down_levels: [2, 3] # output level 0 and 1 directly use the backbone features fpn_interp_model: nearest memory_attention: _target_: sam2.modeling.memory_attention.MemoryAttention d_model: 256 pos_enc_at_input: true layer: _target_: sam2.modeling.memory_attention.MemoryAttentionLayer activation: relu dim_feedforward: 2048 dropout: 0.1 pos_enc_at_attn: false self_attention: _target_: sam2.modeling.sam.transformer.RoPEAttention rope_theta: 10000.0 feat_sizes: [32, 32] embedding_dim: 256 num_heads: 1 downsample_rate: 1 dropout: 0.1 d_model: 256 pos_enc_at_cross_attn_keys: true pos_enc_at_cross_attn_queries: false cross_attention: _target_: sam2.modeling.sam.transformer.RoPEAttention rope_theta: 10000.0 feat_sizes: [32, 32] rope_k_repeat: True embedding_dim: 256 num_heads: 1 downsample_rate: 1 dropout: 0.1 kv_in_dim: 64 num_layers: 4 memory_encoder: _target_: sam2.modeling.memory_encoder.MemoryEncoder out_dim: 64 position_encoding: _target_: sam2.modeling.position_encoding.PositionEmbeddingSine num_pos_feats: 64 normalize: true scale: null temperature: 10000 mask_downsampler: _target_: sam2.modeling.memory_encoder.MaskDownSampler kernel_size: 3 stride: 2 padding: 1 fuser: _target_: sam2.modeling.memory_encoder.Fuser layer: _target_: sam2.modeling.memory_encoder.CXBlock dim: 256 kernel_size: 7 padding: 3 layer_scale_init_value: 1e-6 use_dwconv: True # depth-wise convs num_layers: 2 num_maskmem: 7 image_size: 1024 # apply scaled sigmoid on mask logits for memory encoder, 
and directly feed input mask as output mask # SAM decoder sigmoid_scale_for_mem_enc: 20.0 sigmoid_bias_for_mem_enc: -10.0 use_mask_input_as_output_without_sam: true # Memory directly_add_no_mem_embed: true # use high-resolution feature map in the SAM mask decoder use_high_res_features_in_sam: true # output 3 masks on the first click on initial conditioning frames multimask_output_in_sam: true # SAM heads iou_prediction_use_sigmoid: True # cross-attend to object pointers from other frames (based on SAM output tokens) in the encoder use_obj_ptrs_in_encoder: true add_tpos_enc_to_obj_ptrs: false only_obj_ptrs_in_the_past_for_eval: true # object occlusion prediction pred_obj_scores: true pred_obj_scores_mlp: true fixed_no_obj_ptr: true # multimask tracking settings multimask_output_for_tracking: true use_multimask_token_for_obj_ptr: true multimask_min_pt_num: 0 multimask_max_pt_num: 1 use_mlp_for_obj_ptr_proj: true # Compilation flag # HieraT does not currently support compilation, should always be set to False compile_image_encoder: False ================================================ FILE: auto-seg/sam2_hiera_l.yaml ================================================ # @package _global_ # model model: _target_: sam2.modeling.sam2_base.SAM2Base image_encoder: _target_: sam2.modeling.backbones.image_encoder.ImageEncoder scalp: 1 trunk: _target_: sam2.modeling.backbones.hieradet.Hiera embed_dim: 144 num_heads: 2 stages: [2, 6, 36, 4] global_att_blocks: [23, 33, 43] window_pos_embed_bkg_spatial_size: [7, 7] window_spec: [8, 4, 16, 8] neck: _target_: sam2.modeling.backbones.image_encoder.FpnNeck position_encoding: _target_: sam2.modeling.position_encoding.PositionEmbeddingSine num_pos_feats: 256 normalize: true scale: null temperature: 10000 d_model: 256 backbone_channel_list: [1152, 576, 288, 144] fpn_top_down_levels: [2, 3] # output level 0 and 1 directly use the backbone features fpn_interp_model: nearest memory_attention: _target_: 
sam2.modeling.memory_attention.MemoryAttention d_model: 256 pos_enc_at_input: true layer: _target_: sam2.modeling.memory_attention.MemoryAttentionLayer activation: relu dim_feedforward: 2048 dropout: 0.1 pos_enc_at_attn: false self_attention: _target_: sam2.modeling.sam.transformer.RoPEAttention rope_theta: 10000.0 feat_sizes: [32, 32] embedding_dim: 256 num_heads: 1 downsample_rate: 1 dropout: 0.1 d_model: 256 pos_enc_at_cross_attn_keys: true pos_enc_at_cross_attn_queries: false cross_attention: _target_: sam2.modeling.sam.transformer.RoPEAttention rope_theta: 10000.0 feat_sizes: [32, 32] rope_k_repeat: True embedding_dim: 256 num_heads: 1 downsample_rate: 1 dropout: 0.1 kv_in_dim: 64 num_layers: 4 memory_encoder: _target_: sam2.modeling.memory_encoder.MemoryEncoder out_dim: 64 position_encoding: _target_: sam2.modeling.position_encoding.PositionEmbeddingSine num_pos_feats: 64 normalize: true scale: null temperature: 10000 mask_downsampler: _target_: sam2.modeling.memory_encoder.MaskDownSampler kernel_size: 3 stride: 2 padding: 1 fuser: _target_: sam2.modeling.memory_encoder.Fuser layer: _target_: sam2.modeling.memory_encoder.CXBlock dim: 256 kernel_size: 7 padding: 3 layer_scale_init_value: 1e-6 use_dwconv: True # depth-wise convs num_layers: 2 num_maskmem: 7 image_size: 1024 # apply scaled sigmoid on mask logits for memory encoder, and directly feed input mask as output mask sigmoid_scale_for_mem_enc: 20.0 sigmoid_bias_for_mem_enc: -10.0 use_mask_input_as_output_without_sam: true # Memory directly_add_no_mem_embed: true # use high-resolution feature map in the SAM mask decoder use_high_res_features_in_sam: true # output 3 masks on the first click on initial conditioning frames multimask_output_in_sam: true # SAM heads iou_prediction_use_sigmoid: True # cross-attend to object pointers from other frames (based on SAM output tokens) in the encoder use_obj_ptrs_in_encoder: true add_tpos_enc_to_obj_ptrs: false only_obj_ptrs_in_the_past_for_eval: true # object 
occlusion prediction pred_obj_scores: true pred_obj_scores_mlp: true fixed_no_obj_ptr: true # multimask tracking settings multimask_output_for_tracking: true use_multimask_token_for_obj_ptr: true multimask_min_pt_num: 0 multimask_max_pt_num: 1 use_mlp_for_obj_ptr_proj: true # Compilation flag compile_image_encoder: False
================================================
FILE: auto-seg/submodules/segment-anything-1/.gitignore
================================================
.nfs*

# compilation and distribution
__pycache__
_ext
*.pyc
*.pyd
*.so
*.dll
*.egg-info/
build/
dist/
wheels/

# pytorch/python/numpy formats
*.pth
*.pkl
*.npy
*.ts
model_ts*.txt

# onnx models
*.onnx

# ipython/jupyter notebooks
**/.ipynb_checkpoints/

# Editor temporaries
*.swn
*.swo
*.swp
*~

# editor settings
.idea
.vscode
_darcs

# demo
**/node_modules
yarn.lock
package-lock.json

================================================
FILE: auto-seg/submodules/segment-anything-1/README.md
================================================
# Segment-Anything-LangSplat

This is a modified version of [SAM](https://github.com/facebookresearch/segment-anything) for [LangSplat](https://github.com/minghanqin/LangSplat). Please follow the installation instructions in the original [SAM](https://github.com/facebookresearch/segment-anything) repository.

================================================
FILE: auto-seg/submodules/segment-anything-1/scripts/amg.py
================================================
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
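The amg.py script that follows emits one record per mask, including an `area` and an XYWH `bbox` field (documented in the `generate()` docstring further below). As a minimal plain-Python illustration of how those two fields relate to a binary mask (toy data only; the real code derives area from RLEs and the box via `box_xyxy_to_xywh`):

```python
def mask_area_and_bbox(mask):
    """Compute the 'area' and XYWH 'bbox' fields of a SAM-style mask
    record from a binary mask given as nested lists (a toy stand-in
    for the HxW numpy array the real scripts use)."""
    area = 0
    xs, ys = [], []
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                area += 1
                xs.append(x)
                ys.append(y)
    if not xs:  # empty mask: zero area, degenerate box
        return 0, [0, 0, 0, 0]
    x0, y0 = min(xs), min(ys)
    w = max(xs) - x0 + 1
    h = max(ys) - y0 + 1
    return area, [x0, y0, w, h]


mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(mask_area_and_bbox(mask))  # (4, [1, 1, 2, 2])
```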
import cv2 # type: ignore from segment_anything import SamAutomaticMaskGenerator, sam_model_registry import argparse import json import os from typing import Any, Dict, List parser = argparse.ArgumentParser( description=( "Runs automatic mask generation on an input image or directory of images, " "and outputs masks as either PNGs or COCO-style RLEs. Requires open-cv, " "as well as pycocotools if saving in RLE format." ) ) parser.add_argument( "--input", type=str, required=True, help="Path to either a single input image or folder of images.", ) parser.add_argument( "--output", type=str, required=True, help=( "Path to the directory where masks will be output. Output will be either a folder " "of PNGs per image or a single json with COCO-style masks." ), ) parser.add_argument( "--model-type", type=str, required=True, help="The type of model to load, in ['default', 'vit_h', 'vit_l', 'vit_b']", ) parser.add_argument( "--checkpoint", type=str, required=True, help="The path to the SAM checkpoint to use for mask generation.", ) parser.add_argument("--device", type=str, default="cuda", help="The device to run generation on.") parser.add_argument( "--convert-to-rle", action="store_true", help=( "Save masks as COCO RLEs in a single json instead of as a folder of PNGs. " "Requires pycocotools." 
), ) amg_settings = parser.add_argument_group("AMG Settings") amg_settings.add_argument( "--points-per-side", type=int, default=None, help="Generate masks by sampling a grid over the image with this many points to a side.", ) amg_settings.add_argument( "--points-per-batch", type=int, default=None, help="How many input points to process simultaneously in one batch.", ) amg_settings.add_argument( "--pred-iou-thresh", type=float, default=None, help="Exclude masks with a predicted score from the model that is lower than this threshold.", ) amg_settings.add_argument( "--stability-score-thresh", type=float, default=None, help="Exclude masks with a stability score lower than this threshold.", ) amg_settings.add_argument( "--stability-score-offset", type=float, default=None, help="Larger values perturb the mask more when measuring stability score.", ) amg_settings.add_argument( "--box-nms-thresh", type=float, default=None, help="The overlap threshold for excluding a duplicate mask.", ) amg_settings.add_argument( "--crop-n-layers", type=int, default=None, help=( "If >0, mask generation is run on smaller crops of the image to generate more masks. " "The value sets how many different scales to crop at." ), ) amg_settings.add_argument( "--crop-nms-thresh", type=float, default=None, help="The overlap threshold for excluding duplicate masks across different crops.", ) amg_settings.add_argument( "--crop-overlap-ratio", type=float, default=None, help="Larger numbers mean image crops will overlap more.", ) amg_settings.add_argument( "--crop-n-points-downscale-factor", type=int, default=None, help="The number of points-per-side in each layer of crop is reduced by this factor.", ) amg_settings.add_argument( "--min-mask-region-area", type=int, default=None, help=( "Disconnected mask regions or holes with area smaller than this value " "in pixels are removed by postprocessing."
), ) def write_masks_to_folder(masks: List[Dict[str, Any]], path: str) -> None: header = "id,area,bbox_x0,bbox_y0,bbox_w,bbox_h,point_input_x,point_input_y,predicted_iou,stability_score,crop_box_x0,crop_box_y0,crop_box_w,crop_box_h" # noqa metadata = [header] for i, mask_data in enumerate(masks): mask = mask_data["segmentation"] filename = f"{i}.png" cv2.imwrite(os.path.join(path, filename), mask * 255) mask_metadata = [ str(i), str(mask_data["area"]), *[str(x) for x in mask_data["bbox"]], *[str(x) for x in mask_data["point_coords"][0]], str(mask_data["predicted_iou"]), str(mask_data["stability_score"]), *[str(x) for x in mask_data["crop_box"]], ] row = ",".join(mask_metadata) metadata.append(row) metadata_path = os.path.join(path, "metadata.csv") with open(metadata_path, "w") as f: f.write("\n".join(metadata)) return def get_amg_kwargs(args): amg_kwargs = { "points_per_side": args.points_per_side, "points_per_batch": args.points_per_batch, "pred_iou_thresh": args.pred_iou_thresh, "stability_score_thresh": args.stability_score_thresh, "stability_score_offset": args.stability_score_offset, "box_nms_thresh": args.box_nms_thresh, "crop_n_layers": args.crop_n_layers, "crop_nms_thresh": args.crop_nms_thresh, "crop_overlap_ratio": args.crop_overlap_ratio, "crop_n_points_downscale_factor": args.crop_n_points_downscale_factor, "min_mask_region_area": args.min_mask_region_area, } amg_kwargs = {k: v for k, v in amg_kwargs.items() if v is not None} return amg_kwargs def main(args: argparse.Namespace) -> None: print("Loading model...") sam = sam_model_registry[args.model_type](checkpoint=args.checkpoint) _ = sam.to(device=args.device) output_mode = "coco_rle" if args.convert_to_rle else "binary_mask" amg_kwargs = get_amg_kwargs(args) generator = SamAutomaticMaskGenerator(sam, output_mode=output_mode, **amg_kwargs) if not os.path.isdir(args.input): targets = [args.input] else: targets = [ f for f in os.listdir(args.input) if not os.path.isdir(os.path.join(args.input, f)) ] 
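`get_amg_kwargs` above forwards only the AMG flags the user actually set, so `SamAutomaticMaskGenerator`'s own defaults apply to everything left unset. The None-filtering step in isolation (hypothetical flag values, for illustration only):

```python
# Stand-in for the parsed argparse namespace: unset flags default to None.
raw_kwargs = {
    "points_per_side": 32,    # set by the user
    "pred_iou_thresh": None,  # left at the CLI default
    "box_nms_thresh": 0.7,    # set by the user
    "crop_n_layers": None,    # left at the CLI default
}

# Keep only the arguments that were explicitly provided, so the
# generator's own defaults take effect for the rest.
amg_kwargs = {k: v for k, v in raw_kwargs.items() if v is not None}
print(amg_kwargs)  # {'points_per_side': 32, 'box_nms_thresh': 0.7}
```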
targets = [os.path.join(args.input, f) for f in targets] os.makedirs(args.output, exist_ok=True) for t in targets: print(f"Processing '{t}'...") image = cv2.imread(t) if image is None: print(f"Could not load '{t}' as an image, skipping...") continue image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) masks = generator.generate(image) base = os.path.basename(t) base = os.path.splitext(base)[0] save_base = os.path.join(args.output, base) if output_mode == "binary_mask": os.makedirs(save_base, exist_ok=False) write_masks_to_folder(masks, save_base) else: save_file = save_base + ".json" with open(save_file, "w") as f: json.dump(masks, f) print("Done!") if __name__ == "__main__": args = parser.parse_args() main(args) ================================================ FILE: auto-seg/submodules/segment-anything-1/scripts/export_onnx_model.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. import torch from segment_anything import sam_model_registry from segment_anything.utils.onnx import SamOnnxModel import argparse import warnings try: import onnxruntime # type: ignore onnxruntime_exists = True except ImportError: onnxruntime_exists = False parser = argparse.ArgumentParser( description="Export the SAM prompt encoder and mask decoder to an ONNX model." ) parser.add_argument( "--checkpoint", type=str, required=True, help="The path to the SAM model checkpoint." ) parser.add_argument( "--output", type=str, required=True, help="The filename to save the ONNX model to." ) parser.add_argument( "--model-type", type=str, required=True, help="In ['default', 'vit_h', 'vit_l', 'vit_b']. 
Which type of SAM model to export.", ) parser.add_argument( "--return-single-mask", action="store_true", help=( "If true, the exported ONNX model will only return the best mask, " "instead of returning multiple masks. For high resolution images " "this can improve runtime when upscaling masks is expensive." ), ) parser.add_argument( "--opset", type=int, default=17, help="The ONNX opset version to use. Must be >=11", ) parser.add_argument( "--quantize-out", type=str, default=None, help=( "If set, will quantize the model and save it with this name. " "Quantization is performed with quantize_dynamic from onnxruntime.quantization.quantize." ), ) parser.add_argument( "--gelu-approximate", action="store_true", help=( "Replace GELU operations with approximations using tanh. Useful " "for some runtimes that have slow or unimplemented erf ops, used in GELU." ), ) parser.add_argument( "--use-stability-score", action="store_true", help=( "Replaces the model's predicted mask quality score with the stability " "score calculated on the low resolution masks using an offset of 1.0. " ), ) parser.add_argument( "--return-extra-metrics", action="store_true", help=( "The model will return five results: (masks, scores, stability_scores, " "areas, low_res_logits) instead of the usual three. This can be " "significantly slower for high resolution outputs." 
), ) def run_export( model_type: str, checkpoint: str, output: str, opset: int, return_single_mask: bool, gelu_approximate: bool = False, use_stability_score: bool = False, return_extra_metrics=False, ): print("Loading model...") sam = sam_model_registry[model_type](checkpoint=checkpoint) onnx_model = SamOnnxModel( model=sam, return_single_mask=return_single_mask, use_stability_score=use_stability_score, return_extra_metrics=return_extra_metrics, ) if gelu_approximate: for n, m in onnx_model.named_modules(): if isinstance(m, torch.nn.GELU): m.approximate = "tanh" dynamic_axes = { "point_coords": {1: "num_points"}, "point_labels": {1: "num_points"}, } embed_dim = sam.prompt_encoder.embed_dim embed_size = sam.prompt_encoder.image_embedding_size mask_input_size = [4 * x for x in embed_size] dummy_inputs = { "image_embeddings": torch.randn(1, embed_dim, *embed_size, dtype=torch.float), "point_coords": torch.randint(low=0, high=1024, size=(1, 5, 2), dtype=torch.float), "point_labels": torch.randint(low=0, high=4, size=(1, 5), dtype=torch.float), "mask_input": torch.randn(1, 1, *mask_input_size, dtype=torch.float), "has_mask_input": torch.tensor([1], dtype=torch.float), "orig_im_size": torch.tensor([1500, 2250], dtype=torch.float), } _ = onnx_model(**dummy_inputs) output_names = ["masks", "iou_predictions", "low_res_masks"] with warnings.catch_warnings(): warnings.filterwarnings("ignore", category=torch.jit.TracerWarning) warnings.filterwarnings("ignore", category=UserWarning) with open(output, "wb") as f: print(f"Exporting onnx model to {output}...") torch.onnx.export( onnx_model, tuple(dummy_inputs.values()), f, export_params=True, verbose=False, opset_version=opset, do_constant_folding=True, input_names=list(dummy_inputs.keys()), output_names=output_names, dynamic_axes=dynamic_axes, ) if onnxruntime_exists: ort_inputs = {k: to_numpy(v) for k, v in dummy_inputs.items()} # set cpu provider default providers = ["CPUExecutionProvider"] ort_session = 
onnxruntime.InferenceSession(output, providers=providers) _ = ort_session.run(None, ort_inputs) print("Model has successfully been run with ONNXRuntime.") def to_numpy(tensor): return tensor.cpu().numpy() if __name__ == "__main__": args = parser.parse_args() run_export( model_type=args.model_type, checkpoint=args.checkpoint, output=args.output, opset=args.opset, return_single_mask=args.return_single_mask, gelu_approximate=args.gelu_approximate, use_stability_score=args.use_stability_score, return_extra_metrics=args.return_extra_metrics, ) if args.quantize_out is not None: assert onnxruntime_exists, "onnxruntime is required to quantize the model." from onnxruntime.quantization import QuantType # type: ignore from onnxruntime.quantization.quantize import quantize_dynamic # type: ignore print(f"Quantizing model and writing to {args.quantize_out}...") quantize_dynamic( model_input=args.output, model_output=args.quantize_out, optimize_model=True, per_channel=False, reduce_range=False, weight_type=QuantType.QUInt8, ) print("Done!") ================================================ FILE: auto-seg/submodules/segment-anything-1/segment_anything/__init__.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. from .build_sam import ( build_sam, build_sam_vit_h, build_sam_vit_l, build_sam_vit_b, sam_model_registry, ) from .predictor import SamPredictor from .automatic_mask_generator import SamAutomaticMaskGenerator ================================================ FILE: auto-seg/submodules/segment-anything-1/segment_anything/automatic_mask_generator.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. 
# This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. import numpy as np import torch from torchvision.ops.boxes import batched_nms, box_area # type: ignore from typing import Any, Dict, List, Optional, Tuple from .modeling import Sam from .predictor import SamPredictor from .utils.amg import ( MaskData, area_from_rle, batch_iterator, batched_mask_to_box, box_xyxy_to_xywh, build_all_layer_point_grids, calculate_stability_score, coco_encode_rle, generate_crop_boxes, is_box_near_crop_edge, mask_to_rle_pytorch, remove_small_regions, rle_to_mask, uncrop_boxes_xyxy, uncrop_masks, uncrop_points, ) class SamAutomaticMaskGenerator: def __init__( self, model: Sam, points_per_side: Optional[int] = 32, points_per_batch: int = 64, pred_iou_thresh: float = 0.88, stability_score_thresh: float = 0.95, stability_score_offset: float = 1.0, box_nms_thresh: float = 0.7, crop_n_layers: int = 0, crop_nms_thresh: float = 0.7, crop_overlap_ratio: float = 512 / 1500, crop_n_points_downscale_factor: int = 1, point_grids: Optional[List[np.ndarray]] = None, min_mask_region_area: int = 0, output_mode: str = "binary_mask", ) -> None: """ Using a SAM model, generates masks for the entire image. Generates a grid of point prompts over the image, then filters low quality and duplicate masks. The default settings are chosen for SAM with a ViT-H backbone. Arguments: model (Sam): The SAM model to use for mask prediction. points_per_side (int or None): The number of points to be sampled along one side of the image. The total number of points is points_per_side**2. If None, 'point_grids' must provide explicit point sampling. points_per_batch (int): Sets the number of points run simultaneously by the model. Higher numbers may be faster but use more GPU memory. pred_iou_thresh (float): A filtering threshold in [0,1], using the model's predicted mask quality. 
stability_score_thresh (float): A filtering threshold in [0,1], using the stability of the mask under changes to the cutoff used to binarize the model's mask predictions. stability_score_offset (float): The amount to shift the cutoff when calculating the stability score. box_nms_thresh (float): The box IoU cutoff used by non-maximal suppression to filter duplicate masks. crop_n_layers (int): If >0, mask prediction will be run again on crops of the image. Sets the number of layers to run, where each layer has 2**i_layer number of image crops. crop_nms_thresh (float): The box IoU cutoff used by non-maximal suppression to filter duplicate masks between different crops. crop_overlap_ratio (float): Sets the degree to which crops overlap. In the first crop layer, crops will overlap by this fraction of the image length. Later layers with more crops scale down this overlap. crop_n_points_downscale_factor (int): The number of points-per-side sampled in layer n is scaled down by crop_n_points_downscale_factor**n. point_grids (list(np.ndarray) or None): A list over explicit grids of points used for sampling, normalized to [0,1]. The nth grid in the list is used in the nth crop layer. Exclusive with points_per_side. min_mask_region_area (int): If >0, postprocessing will be applied to remove disconnected regions and holes in masks with area smaller than min_mask_region_area. Requires opencv. output_mode (str): The form masks are returned in. Can be 'binary_mask', 'uncompressed_rle', or 'coco_rle'. 'coco_rle' requires pycocotools. For large resolutions, 'binary_mask' may consume large amounts of memory. """ assert (points_per_side is None) != ( point_grids is None ), "Exactly one of points_per_side or point_grids must be provided."
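`build_all_layer_point_grids`, called just below, lays out `points_per_side**2` prompt points per layer on a uniform grid normalized to [0, 1], with points at cell centers. A plain-Python sketch of one such grid (the real helper uses numpy and additionally scales the point count down per crop layer):

```python
def build_point_grid(n_per_side):
    """Build a uniform grid of n_per_side**2 points normalized to [0, 1]
    (a plain-Python sketch of the numpy-based SAM helper)."""
    offset = 1 / (2 * n_per_side)  # half a cell, so points sit at cell centers
    step = 1 / n_per_side
    return [
        [offset + x * step, offset + y * step]
        for y in range(n_per_side)
        for x in range(n_per_side)
    ]


grid = build_point_grid(4)
print(len(grid))  # 16
print(grid[0])    # [0.125, 0.125]
```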
if points_per_side is not None: self.point_grids = build_all_layer_point_grids( points_per_side, crop_n_layers, crop_n_points_downscale_factor, ) elif point_grids is not None: self.point_grids = point_grids else: raise ValueError("Can't have both points_per_side and point_grids be None.") assert output_mode in [ "binary_mask", "uncompressed_rle", "coco_rle", ], f"Unknown output_mode {output_mode}." if output_mode == "coco_rle": from pycocotools import mask as mask_utils # type: ignore # noqa: F401 # if min_mask_region_area > 0: # import cv2 # type: ignore # noqa: F401 self.predictor = SamPredictor(model) self.points_per_batch = points_per_batch self.pred_iou_thresh = pred_iou_thresh self.stability_score_thresh = stability_score_thresh self.stability_score_offset = stability_score_offset self.box_nms_thresh = box_nms_thresh self.crop_n_layers = crop_n_layers self.crop_nms_thresh = crop_nms_thresh self.crop_overlap_ratio = crop_overlap_ratio self.crop_n_points_downscale_factor = crop_n_points_downscale_factor self.min_mask_region_area = min_mask_region_area self.output_mode = output_mode @torch.no_grad() def generate(self, image: np.ndarray) -> Tuple[List[Dict[str, Any]], List[Dict[str, Any]], List[Dict[str, Any]], List[Dict[str, Any]]]: """ Generates masks for the given image. Arguments: image (np.ndarray): The image to generate masks for, in HWC uint8 format. Returns: Four lists of mask records: all masks combined, then the small, medium, and large multimask outputs separately. Each record is a dict containing the following keys: segmentation (dict(str, any) or np.ndarray): The mask. If output_mode='binary_mask', is an array of shape HW. Otherwise, is a dictionary containing the RLE. bbox (list(float)): The box around the mask, in XYWH format. area (int): The area in pixels of the mask. predicted_iou (float): The model's own prediction of the mask's quality. This is filtered by the pred_iou_thresh parameter. point_coords (list(list(float))): The point coordinates input to the model to generate this mask. stability_score (float): A measure of the mask's quality.
This is filtered using the stability_score_thresh parameter. crop_box (list(float)): The crop of the image used to generate the mask, given in XYWH format. """ # Generate masks mask_data, mask_data_s, mask_data_m, mask_data_l = self._generate_masks(image) curr_anns = self.generate_curr_anns(mask_data) curr_anns_s = self.generate_curr_anns(mask_data_s) curr_anns_m = self.generate_curr_anns(mask_data_m) curr_anns_l = self.generate_curr_anns(mask_data_l) return curr_anns, curr_anns_s, curr_anns_m, curr_anns_l def generate_curr_anns( self, mask_data, ) -> List[Dict[str, Any]]: # Filter small disconnected regions and holes in masks if self.min_mask_region_area > 0: mask_data = self.postprocess_small_regions( mask_data, self.min_mask_region_area, max(self.box_nms_thresh, self.crop_nms_thresh), ) # Encode masks if self.output_mode == "coco_rle": mask_data["segmentations"] = [coco_encode_rle(rle) for rle in mask_data["rles"]] elif self.output_mode == "binary_mask": mask_data["segmentations"] = [rle_to_mask(rle) for rle in mask_data["rles"]] else: mask_data["segmentations"] = mask_data["rles"] # Write mask records curr_anns = [] for idx in range(len(mask_data["segmentations"])): ann = { "segmentation": mask_data["segmentations"][idx], "area": area_from_rle(mask_data["rles"][idx]), "bbox": box_xyxy_to_xywh(mask_data["boxes"][idx]).tolist(), "predicted_iou": mask_data["iou_preds"][idx].item(), "point_coords": [mask_data["points"][idx].tolist()], "stability_score": mask_data["stability_score"][idx].item(), "crop_box": box_xyxy_to_xywh(mask_data["crop_boxes"][idx]).tolist(), } curr_anns.append(ann) return curr_anns def _generate_masks(self, image: np.ndarray) -> Tuple[MaskData, MaskData, MaskData, MaskData]: orig_size = image.shape[:2] crop_boxes, layer_idxs = generate_crop_boxes( orig_size, self.crop_n_layers, self.crop_overlap_ratio ) # Iterate over image crops data = MaskData() data_s, data_m, data_l = MaskData(), MaskData(), MaskData() for crop_box, layer_idx in zip(crop_boxes, layer_idxs): crop_data,
crop_data_s, crop_data_m, crop_data_l = self._process_crop(image, crop_box, layer_idx, orig_size) data.cat(crop_data) data_s.cat(crop_data_s) data_m.cat(crop_data_m) data_l.cat(crop_data_l) data = self._generate_masks_data(data, crop_boxes) data_s = self._generate_masks_data(data_s, crop_boxes) data_m = self._generate_masks_data(data_m, crop_boxes) data_l = self._generate_masks_data(data_l, crop_boxes) return data, data_s, data_m, data_l def _generate_masks_data( self, data, crop_boxes, ) -> MaskData: # Remove duplicate masks between crops if len(crop_boxes) > 1: # Prefer masks from smaller crops scores = 1 / box_area(data["crop_boxes"]) scores = scores.to(data["boxes"].device) keep_by_nms = batched_nms( data["boxes"].float(), scores, torch.zeros_like(data["boxes"][:, 0]), # categories iou_threshold=self.crop_nms_thresh, ) data.filter(keep_by_nms) data.to_numpy() return data def _process_crop( self, image: np.ndarray, crop_box: List[int], crop_layer_idx: int, orig_size: Tuple[int, ...], ) -> Tuple[MaskData, MaskData, MaskData, MaskData]: # Crop the image and calculate embeddings x0, y0, x1, y1 = crop_box cropped_im = image[y0:y1, x0:x1, :] cropped_im_size = cropped_im.shape[:2] self.predictor.set_image(cropped_im) # Get points for this crop points_scale = np.array(cropped_im_size)[None, ::-1] points_for_image = self.point_grids[crop_layer_idx] * points_scale # Generate masks for this crop in batches data = MaskData() data_s, data_m, data_l = MaskData(), MaskData(), MaskData() for (points,) in batch_iterator(self.points_per_batch, points_for_image): batch_data, batch_data_s, batch_data_m, batch_data_l = self._process_batch(points, cropped_im_size, crop_box, orig_size) data.cat(batch_data) data_s.cat(batch_data_s) data_m.cat(batch_data_m) data_l.cat(batch_data_l) del batch_data, batch_data_s, batch_data_m, batch_data_l self.predictor.reset_image() data = self._process_crop_data(data, crop_box) data_s = self._process_crop_data(data_s, crop_box) data_m = self._process_crop_data(data_m, crop_box) data_l =
self._process_crop_data(data_l, crop_box) return data, data_s, data_m, data_l def _process_crop_data( self, data, crop_box: List[int], ) -> MaskData: # Remove duplicates within this crop. keep_by_nms = batched_nms( data["boxes"].float(), data["iou_preds"], torch.zeros_like(data["boxes"][:, 0]), # categories iou_threshold=self.box_nms_thresh, ) data.filter(keep_by_nms) # Return to the original image frame data["boxes"] = uncrop_boxes_xyxy(data["boxes"], crop_box) data["points"] = uncrop_points(data["points"], crop_box) data["crop_boxes"] = torch.tensor([crop_box for _ in range(len(data["rles"]))]) return data def _process_batch( self, points: np.ndarray, im_size: Tuple[int, ...], crop_box: List[int], orig_size: Tuple[int, ...], ) -> Tuple[MaskData, MaskData, MaskData, MaskData]: orig_h, orig_w = orig_size # Run model on this batch transformed_points = self.predictor.transform.apply_coords(points, im_size) in_points = torch.as_tensor(transformed_points, device=self.predictor.device) in_labels = torch.ones(in_points.shape[0], dtype=torch.int, device=in_points.device) masks, iou_preds, _ = self.predictor.predict_torch( in_points[:, None, :], in_labels[:, None], multimask_output=True, return_logits=True, ) # Shapes for reference (e.g. 64 points, 3 mask levels): masks (64, 3, H, W), iou_preds (64, 3), points (64, 2); masks.flatten(0, 1) gives (192, H, W). # Serialize predictions and store in MaskData # Mask scales: small (index 0), medium (index 1), large (index 2) data_s = MaskData( masks=masks[:,0,:,:], iou_preds=iou_preds[:,0], points=torch.as_tensor(points), ) data_m = MaskData( masks=masks[:,1,:,:], iou_preds=iou_preds[:,1], points=torch.as_tensor(points), ) data_l = MaskData( masks=masks[:,2,:,:], iou_preds=iou_preds[:,2], points=torch.as_tensor(points), ) data = MaskData( masks=masks.flatten(0, 1),
iou_preds=iou_preds.flatten(0, 1), points=torch.as_tensor(points.repeat(masks.shape[1], axis=0)), ) del masks data = self._process_batch_data(data, crop_box, orig_w, orig_h) data_s = self._process_batch_data(data_s, crop_box, orig_w, orig_h) data_m = self._process_batch_data(data_m, crop_box, orig_w, orig_h) data_l = self._process_batch_data(data_l, crop_box, orig_w, orig_h) return data, data_s, data_m, data_l def _process_batch_data( self, data, crop_box, orig_w, orig_h, ): # Filter by predicted IoU if self.pred_iou_thresh > 0.0: keep_mask = data["iou_preds"] > self.pred_iou_thresh data.filter(keep_mask) # Calculate stability score data["stability_score"] = calculate_stability_score( data["masks"], self.predictor.model.mask_threshold, self.stability_score_offset ) if self.stability_score_thresh > 0.0: keep_mask = data["stability_score"] >= self.stability_score_thresh data.filter(keep_mask) # Threshold masks and calculate boxes data["masks"] = data["masks"] > self.predictor.model.mask_threshold data["boxes"] = batched_mask_to_box(data["masks"]) # Filter boxes that touch crop boundaries keep_mask = ~is_box_near_crop_edge(data["boxes"], crop_box, [0, 0, orig_w, orig_h]) if not torch.all(keep_mask): data.filter(keep_mask) # Compress to RLE data["masks"] = uncrop_masks(data["masks"], crop_box, orig_h, orig_w) data["rles"] = mask_to_rle_pytorch(data["masks"]) del data["masks"] return data @staticmethod def postprocess_small_regions( mask_data: MaskData, min_area: int, nms_thresh: float ) -> MaskData: """ Removes small disconnected regions and holes in masks, then reruns box NMS to remove any new duplicates. Edits mask_data in place. Requires open-cv as a dependency. 
""" if len(mask_data["rles"]) == 0: return mask_data # Filter small disconnected regions and holes new_masks = [] scores = [] for rle in mask_data["rles"]: mask = rle_to_mask(rle) mask, changed = remove_small_regions(mask, min_area, mode="holes") unchanged = not changed mask, changed = remove_small_regions(mask, min_area, mode="islands") unchanged = unchanged and not changed new_masks.append(torch.as_tensor(mask).unsqueeze(0)) # Give score=0 to changed masks and score=1 to unchanged masks # so NMS will prefer ones that didn't need postprocessing scores.append(float(unchanged)) # Recalculate boxes and remove any new duplicates masks = torch.cat(new_masks, dim=0) boxes = batched_mask_to_box(masks) keep_by_nms = batched_nms( boxes.float(), torch.as_tensor(scores), torch.zeros_like(boxes[:, 0]), # categories iou_threshold=nms_thresh, ) # Only recalculate RLEs for masks that have changed for i_mask in keep_by_nms: if scores[i_mask] == 0.0: mask_torch = masks[i_mask].unsqueeze(0) mask_data["rles"][i_mask] = mask_to_rle_pytorch(mask_torch)[0] mask_data["boxes"][i_mask] = boxes[i_mask] # update res directly mask_data.filter(keep_by_nms) return mask_data @torch.no_grad() def generate_l(self, image: np.ndarray) -> List[Dict[str, Any]]: """ Generates only large masks for the given image, skipping the generation of small and medium masks. Arguments: image (np.ndarray): The image to generate masks for, in HWC uint8 format. Returns: list(dict(str, any)): A list of records for large masks only. """ # Generate only large masks mask_data_l = self._generate_masks_l(image) curr_anns_l = self.generate_curr_anns(mask_data_l) return curr_anns_l def _generate_masks_l(self, image: np.ndarray) -> MaskData: """ Internal method that generates only large masks for the given image. Skips the generation of small and medium masks for better performance. Arguments: image (np.ndarray): The image to generate masks for. Returns: MaskData: The mask data for large masks only. 
""" orig_size = image.shape[:2] crop_boxes, layer_idxs = generate_crop_boxes( orig_size, self.crop_n_layers, self.crop_overlap_ratio ) # Iterate over image crops data_l = MaskData() for crop_box, layer_idx in zip(crop_boxes, layer_idxs): crop_data_l = self._process_crop_l(image, crop_box, layer_idx, orig_size) data_l.cat(crop_data_l) data_l = self._generate_masks_data(data_l, crop_boxes) return data_l def _process_crop_l( self, image: np.ndarray, crop_box: List[int], crop_layer_idx: int, orig_size: Tuple[int, ...], ) -> MaskData: # Crop the image and calculate embeddings x0, y0, x1, y1 = crop_box cropped_im = image[y0:y1, x0:x1, :] cropped_im_size = cropped_im.shape[:2] self.predictor.set_image(cropped_im) # Get points for this crop points_scale = np.array(cropped_im_size)[None, ::-1] points_for_image = self.point_grids[crop_layer_idx] * points_scale # Generate only large masks for this crop in batches data_l = MaskData() for (points,) in batch_iterator(self.points_per_batch, points_for_image): batch_data_l = self._process_batch_l(points, cropped_im_size, crop_box, orig_size) data_l.cat(batch_data_l) del batch_data_l self.predictor.reset_image() data_l = self._process_crop_data(data_l, crop_box) return data_l def _process_batch_l( self, points: np.ndarray, im_size: Tuple[int, ...], crop_box: List[int], orig_size: Tuple[int, ...], ) -> MaskData: orig_h, orig_w = orig_size # Run model on this batch transformed_points = self.predictor.transform.apply_coords(points, im_size) in_points = torch.as_tensor(transformed_points, device=self.predictor.device) in_labels = torch.ones(in_points.shape[0], dtype=torch.int, device=in_points.device) masks, iou_preds, _ = self.predictor.predict_torch( in_points[:, None, :], in_labels[:, None], multimask_output=True, return_logits=True, ) # Only process the large masks (index 2) data_l = MaskData( masks=masks[:,2,:,:], # Only take the large mask iou_preds=iou_preds[:,2], # Only take the large mask IoU points=torch.as_tensor(points), ) 
del masks # Filter by predicted IoU and stability score if self.pred_iou_thresh > 0.0: keep_mask = data_l["iou_preds"] > self.pred_iou_thresh data_l.filter(keep_mask) # Calculate stability score data_l["stability_score"] = calculate_stability_score( data_l["masks"], self.predictor.model.mask_threshold, self.stability_score_offset ) if self.stability_score_thresh > 0.0: keep_mask = data_l["stability_score"] >= self.stability_score_thresh data_l.filter(keep_mask) # Threshold masks and calculate boxes data_l["masks"] = data_l["masks"] > self.predictor.model.mask_threshold data_l["boxes"] = batched_mask_to_box(data_l["masks"]) # Filter boxes that touch crop boundaries keep_mask = ~is_box_near_crop_edge(data_l["boxes"], crop_box, [0, 0, orig_w, orig_h]) if not torch.all(keep_mask): data_l.filter(keep_mask) # Compress to RLE data_l["masks"] = uncrop_masks(data_l["masks"], crop_box, orig_h, orig_w) data_l["rles"] = mask_to_rle_pytorch(data_l["masks"]) del data_l["masks"] return data_l ================================================ FILE: auto-seg/submodules/segment-anything-1/segment_anything/build_sam.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. 
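The stability-score filter used throughout the generator above compares a mask's area at a stricter binarization cutoff against its area at a looser one. A minimal numpy sketch of that idea (illustrative only; the repo's `calculate_stability_score` operates on torch tensors):

```python
import numpy as np

def stability_score(mask_logits: np.ndarray, mask_threshold: float, offset: float) -> np.ndarray:
    # Area surviving a stricter cutoff, divided by area surviving a looser cutoff.
    intersections = (mask_logits > (mask_threshold + offset)).sum(axis=(-1, -2)).astype(np.float64)
    unions = (mask_logits > (mask_threshold - offset)).sum(axis=(-1, -2)).astype(np.float64)
    return intersections / unions

# One 2x2 "mask" of logits: one pixel survives the strict cutoff (1.0),
# three survive the loose cutoff (-1.0), so the score is 1/3.
logits = np.array([[[2.0, 0.5], [-0.5, -2.0]]])
score = stability_score(logits, mask_threshold=0.0, offset=1.0)
```

A score near 1.0 means the mask barely changes as the cutoff shifts, which is why `stability_score_thresh` defaults close to 1 (0.95).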
import torch from functools import partial from .modeling import ImageEncoderViT, MaskDecoder, PromptEncoder, Sam, TwoWayTransformer def build_sam_vit_h(checkpoint=None): return _build_sam( encoder_embed_dim=1280, encoder_depth=32, encoder_num_heads=16, encoder_global_attn_indexes=[7, 15, 23, 31], checkpoint=checkpoint, ) build_sam = build_sam_vit_h def build_sam_vit_l(checkpoint=None): return _build_sam( encoder_embed_dim=1024, encoder_depth=24, encoder_num_heads=16, encoder_global_attn_indexes=[5, 11, 17, 23], checkpoint=checkpoint, ) def build_sam_vit_b(checkpoint=None): return _build_sam( encoder_embed_dim=768, encoder_depth=12, encoder_num_heads=12, encoder_global_attn_indexes=[2, 5, 8, 11], checkpoint=checkpoint, ) sam_model_registry = { "default": build_sam_vit_h, "vit_h": build_sam_vit_h, "vit_l": build_sam_vit_l, "vit_b": build_sam_vit_b, } def _build_sam( encoder_embed_dim, encoder_depth, encoder_num_heads, encoder_global_attn_indexes, checkpoint=None, ): prompt_embed_dim = 256 image_size = 1024 vit_patch_size = 16 image_embedding_size = image_size // vit_patch_size sam = Sam( image_encoder=ImageEncoderViT( depth=encoder_depth, embed_dim=encoder_embed_dim, img_size=image_size, mlp_ratio=4, norm_layer=partial(torch.nn.LayerNorm, eps=1e-6), num_heads=encoder_num_heads, patch_size=vit_patch_size, qkv_bias=True, use_rel_pos=True, global_attn_indexes=encoder_global_attn_indexes, window_size=14, out_chans=prompt_embed_dim, ), prompt_encoder=PromptEncoder( embed_dim=prompt_embed_dim, image_embedding_size=(image_embedding_size, image_embedding_size), input_image_size=(image_size, image_size), mask_in_chans=16, ), mask_decoder=MaskDecoder( num_multimask_outputs=3, transformer=TwoWayTransformer( depth=2, embedding_dim=prompt_embed_dim, mlp_dim=2048, num_heads=8, ), transformer_dim=prompt_embed_dim, iou_head_depth=3, iou_head_hidden_dim=256, ), pixel_mean=[123.675, 116.28, 103.53], pixel_std=[58.395, 57.12, 57.375], ) sam.eval() if checkpoint is not None: with 
open(checkpoint, "rb") as f: state_dict = torch.load(f) sam.load_state_dict(state_dict) return sam ================================================ FILE: auto-seg/submodules/segment-anything-1/segment_anything/modeling/__init__.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. from .sam import Sam from .image_encoder import ImageEncoderViT from .mask_decoder import MaskDecoder from .prompt_encoder import PromptEncoder from .transformer import TwoWayTransformer ================================================ FILE: auto-seg/submodules/segment-anything-1/segment_anything/modeling/common.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. 
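`LayerNorm2d` in common.py below normalizes each spatial location's channel vector, in contrast to `nn.LayerNorm` over the last dimension. A numpy sketch of the same computation, for intuition (function name and shapes are illustrative):

```python
import numpy as np

def layer_norm_2d(x: np.ndarray, weight: np.ndarray, bias: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    # x: (B, C, H, W); normalize over the channel axis at each pixel, then scale/shift.
    u = x.mean(axis=1, keepdims=True)
    s = ((x - u) ** 2).mean(axis=1, keepdims=True)
    x = (x - u) / np.sqrt(s + eps)
    return weight[None, :, None, None] * x + bias[None, :, None, None]

x = np.random.default_rng(0).normal(size=(1, 4, 2, 2))
y = layer_norm_2d(x, np.ones(4), np.zeros(4))
# Each pixel's channel vector now has mean ~0 and variance ~1.
```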
import torch import torch.nn as nn from typing import Type class MLPBlock(nn.Module): def __init__( self, embedding_dim: int, mlp_dim: int, act: Type[nn.Module] = nn.GELU, ) -> None: super().__init__() self.lin1 = nn.Linear(embedding_dim, mlp_dim) self.lin2 = nn.Linear(mlp_dim, embedding_dim) self.act = act() def forward(self, x: torch.Tensor) -> torch.Tensor: return self.lin2(self.act(self.lin1(x))) # From https://github.com/facebookresearch/detectron2/blob/main/detectron2/layers/batch_norm.py # noqa # Itself from https://github.com/facebookresearch/ConvNeXt/blob/d1fa8f6fef0a165b27399986cc2bdacc92777e40/models/convnext.py#L119 # noqa class LayerNorm2d(nn.Module): def __init__(self, num_channels: int, eps: float = 1e-6) -> None: super().__init__() self.weight = nn.Parameter(torch.ones(num_channels)) self.bias = nn.Parameter(torch.zeros(num_channels)) self.eps = eps def forward(self, x: torch.Tensor) -> torch.Tensor: u = x.mean(1, keepdim=True) s = (x - u).pow(2).mean(1, keepdim=True) x = (x - u) / torch.sqrt(s + self.eps) x = self.weight[:, None, None] * x + self.bias[:, None, None] return x ================================================ FILE: auto-seg/submodules/segment-anything-1/segment_anything/modeling/image_encoder.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. 
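The `window_partition`/`window_unpartition` pair defined later in image_encoder.py pads H and W up to multiples of the window size, tiles the feature map into windows, and then undoes both steps. A numpy sketch of the roundtrip under those assumptions (torch-free, for illustration):

```python
import numpy as np

def window_partition(x: np.ndarray, window_size: int):
    # x: (B, H, W, C); pad H and W to multiples of window_size, then tile.
    B, H, W, C = x.shape
    pad_h = (window_size - H % window_size) % window_size
    pad_w = (window_size - W % window_size) % window_size
    x = np.pad(x, ((0, 0), (0, pad_h), (0, pad_w), (0, 0)))
    Hp, Wp = H + pad_h, W + pad_w
    x = x.reshape(B, Hp // window_size, window_size, Wp // window_size, window_size, C)
    windows = x.transpose(0, 1, 3, 2, 4, 5).reshape(-1, window_size, window_size, C)
    return windows, (Hp, Wp)

def window_unpartition(windows: np.ndarray, window_size: int, pad_hw, hw) -> np.ndarray:
    # Inverse: regroup windows into (B, Hp, Wp, C), then crop the padding away.
    Hp, Wp = pad_hw
    H, W = hw
    B = windows.shape[0] // (Hp * Wp // window_size // window_size)
    x = windows.reshape(B, Hp // window_size, Wp // window_size, window_size, window_size, -1)
    x = x.transpose(0, 1, 3, 2, 4, 5).reshape(B, Hp, Wp, -1)
    return x[:, :H, :W, :]

# 5x7 map with window size 4 pads to 8x8 and splits into 2x2 = 4 windows per image.
x = np.arange(2 * 5 * 7 * 3, dtype=np.float32).reshape(2, 5, 7, 3)
w, pad_hw = window_partition(x, 4)
roundtrip = window_unpartition(w, 4, pad_hw, (5, 7))
```

The roundtrip recovers the input exactly because the pad is appended on the bottom/right and cropped off again.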
import torch import torch.nn as nn import torch.nn.functional as F from typing import Optional, Tuple, Type from .common import LayerNorm2d, MLPBlock # This class and its supporting functions below lightly adapted from the ViTDet backbone available at: https://github.com/facebookresearch/detectron2/blob/main/detectron2/modeling/backbone/vit.py # noqa class ImageEncoderViT(nn.Module): def __init__( self, img_size: int = 1024, patch_size: int = 16, in_chans: int = 3, embed_dim: int = 768, depth: int = 12, num_heads: int = 12, mlp_ratio: float = 4.0, out_chans: int = 256, qkv_bias: bool = True, norm_layer: Type[nn.Module] = nn.LayerNorm, act_layer: Type[nn.Module] = nn.GELU, use_abs_pos: bool = True, use_rel_pos: bool = False, rel_pos_zero_init: bool = True, window_size: int = 0, global_attn_indexes: Tuple[int, ...] = (), ) -> None: """ Args: img_size (int): Input image size. patch_size (int): Patch size. in_chans (int): Number of input image channels. embed_dim (int): Patch embedding dimension. depth (int): Depth of ViT. num_heads (int): Number of attention heads in each ViT block. mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. qkv_bias (bool): If True, add a learnable bias to query, key, value. norm_layer (nn.Module): Normalization layer. act_layer (nn.Module): Activation layer. use_abs_pos (bool): If True, use absolute positional embeddings. use_rel_pos (bool): If True, add relative positional embeddings to the attention map. rel_pos_zero_init (bool): If True, zero initialize relative positional parameters. window_size (int): Window size for window attention blocks. global_attn_indexes (list): Indexes for blocks using global attention. 
""" super().__init__() self.img_size = img_size self.patch_embed = PatchEmbed( kernel_size=(patch_size, patch_size), stride=(patch_size, patch_size), in_chans=in_chans, embed_dim=embed_dim, ) self.pos_embed: Optional[nn.Parameter] = None if use_abs_pos: # Initialize absolute positional embedding with pretrain image size. self.pos_embed = nn.Parameter( torch.zeros(1, img_size // patch_size, img_size // patch_size, embed_dim) ) self.blocks = nn.ModuleList() for i in range(depth): block = Block( dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, norm_layer=norm_layer, act_layer=act_layer, use_rel_pos=use_rel_pos, rel_pos_zero_init=rel_pos_zero_init, window_size=window_size if i not in global_attn_indexes else 0, input_size=(img_size // patch_size, img_size // patch_size), ) self.blocks.append(block) self.neck = nn.Sequential( nn.Conv2d( embed_dim, out_chans, kernel_size=1, bias=False, ), LayerNorm2d(out_chans), nn.Conv2d( out_chans, out_chans, kernel_size=3, padding=1, bias=False, ), LayerNorm2d(out_chans), ) def forward(self, x: torch.Tensor) -> torch.Tensor: x = self.patch_embed(x) if self.pos_embed is not None: x = x + self.pos_embed for blk in self.blocks: x = blk(x) x = self.neck(x.permute(0, 3, 1, 2)) return x class Block(nn.Module): """Transformer blocks with support of window attention and residual propagation blocks""" def __init__( self, dim: int, num_heads: int, mlp_ratio: float = 4.0, qkv_bias: bool = True, norm_layer: Type[nn.Module] = nn.LayerNorm, act_layer: Type[nn.Module] = nn.GELU, use_rel_pos: bool = False, rel_pos_zero_init: bool = True, window_size: int = 0, input_size: Optional[Tuple[int, int]] = None, ) -> None: """ Args: dim (int): Number of input channels. num_heads (int): Number of attention heads in each ViT block. mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. qkv_bias (bool): If True, add a learnable bias to query, key, value. norm_layer (nn.Module): Normalization layer. 
act_layer (nn.Module): Activation layer. use_rel_pos (bool): If True, add relative positional embeddings to the attention map. rel_pos_zero_init (bool): If True, zero initialize relative positional parameters. window_size (int): Window size for window attention blocks. If it equals 0, then use global attention. input_size (tuple(int, int) or None): Input resolution for calculating the relative positional parameter size. """ super().__init__() self.norm1 = norm_layer(dim) self.attn = Attention( dim, num_heads=num_heads, qkv_bias=qkv_bias, use_rel_pos=use_rel_pos, rel_pos_zero_init=rel_pos_zero_init, input_size=input_size if window_size == 0 else (window_size, window_size), ) self.norm2 = norm_layer(dim) self.mlp = MLPBlock(embedding_dim=dim, mlp_dim=int(dim * mlp_ratio), act=act_layer) self.window_size = window_size def forward(self, x: torch.Tensor) -> torch.Tensor: shortcut = x x = self.norm1(x) # Window partition if self.window_size > 0: H, W = x.shape[1], x.shape[2] x, pad_hw = window_partition(x, self.window_size) x = self.attn(x) # Reverse window partition if self.window_size > 0: x = window_unpartition(x, self.window_size, pad_hw, (H, W)) x = shortcut + x x = x + self.mlp(self.norm2(x)) return x class Attention(nn.Module): """Multi-head Attention block with relative position embeddings.""" def __init__( self, dim: int, num_heads: int = 8, qkv_bias: bool = True, use_rel_pos: bool = False, rel_pos_zero_init: bool = True, input_size: Optional[Tuple[int, int]] = None, ) -> None: """ Args: dim (int): Number of input channels. num_heads (int): Number of attention heads. qkv_bias (bool): If True, add a learnable bias to query, key, value. use_rel_pos (bool): If True, add relative positional embeddings to the attention map. rel_pos_zero_init (bool): If True, zero initialize relative positional parameters. input_size (tuple(int, int) or None): Input resolution for calculating the relative positional parameter size.
""" super().__init__() self.num_heads = num_heads head_dim = dim // num_heads self.scale = head_dim**-0.5 self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) self.proj = nn.Linear(dim, dim) self.use_rel_pos = use_rel_pos if self.use_rel_pos: assert ( input_size is not None ), "Input size must be provided if using relative positional encoding." # initialize relative positional embeddings self.rel_pos_h = nn.Parameter(torch.zeros(2 * input_size[0] - 1, head_dim)) self.rel_pos_w = nn.Parameter(torch.zeros(2 * input_size[1] - 1, head_dim)) def forward(self, x: torch.Tensor) -> torch.Tensor: B, H, W, _ = x.shape # qkv with shape (3, B, nHead, H * W, C) qkv = self.qkv(x).reshape(B, H * W, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4) # q, k, v with shape (B * nHead, H * W, C) q, k, v = qkv.reshape(3, B * self.num_heads, H * W, -1).unbind(0) attn = (q * self.scale) @ k.transpose(-2, -1) if self.use_rel_pos: attn = add_decomposed_rel_pos(attn, q, self.rel_pos_h, self.rel_pos_w, (H, W), (H, W)) attn = attn.softmax(dim=-1) x = (attn @ v).view(B, self.num_heads, H, W, -1).permute(0, 2, 3, 1, 4).reshape(B, H, W, -1) x = self.proj(x) return x def window_partition(x: torch.Tensor, window_size: int) -> Tuple[torch.Tensor, Tuple[int, int]]: """ Partition into non-overlapping windows with padding if needed. Args: x (tensor): input tokens with [B, H, W, C]. window_size (int): window size. Returns: windows: windows after partition with [B * num_windows, window_size, window_size, C]. 
(Hp, Wp): padded height and width before partition """ B, H, W, C = x.shape pad_h = (window_size - H % window_size) % window_size pad_w = (window_size - W % window_size) % window_size if pad_h > 0 or pad_w > 0: x = F.pad(x, (0, 0, 0, pad_w, 0, pad_h)) Hp, Wp = H + pad_h, W + pad_w x = x.view(B, Hp // window_size, window_size, Wp // window_size, window_size, C) windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) return windows, (Hp, Wp) def window_unpartition( windows: torch.Tensor, window_size: int, pad_hw: Tuple[int, int], hw: Tuple[int, int] ) -> torch.Tensor: """ Window unpartition into original sequences and removing padding. Args: windows (tensor): input tokens with [B * num_windows, window_size, window_size, C]. window_size (int): window size. pad_hw (Tuple): padded height and width (Hp, Wp). hw (Tuple): original height and width (H, W) before padding. Returns: x: unpartitioned sequences with [B, H, W, C]. """ Hp, Wp = pad_hw H, W = hw B = windows.shape[0] // (Hp * Wp // window_size // window_size) x = windows.view(B, Hp // window_size, Wp // window_size, window_size, window_size, -1) x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, Hp, Wp, -1) if Hp > H or Wp > W: x = x[:, :H, :W, :].contiguous() return x def get_rel_pos(q_size: int, k_size: int, rel_pos: torch.Tensor) -> torch.Tensor: """ Get relative positional embeddings according to the relative positions of query and key sizes. Args: q_size (int): size of query q. k_size (int): size of key k. rel_pos (Tensor): relative position embeddings (L, C). Returns: Extracted positional embeddings according to relative positions. """ max_rel_dist = int(2 * max(q_size, k_size) - 1) # Interpolate rel pos if needed. if rel_pos.shape[0] != max_rel_dist: # Interpolate rel pos. 
rel_pos_resized = F.interpolate( rel_pos.reshape(1, rel_pos.shape[0], -1).permute(0, 2, 1), size=max_rel_dist, mode="linear", ) rel_pos_resized = rel_pos_resized.reshape(-1, max_rel_dist).permute(1, 0) else: rel_pos_resized = rel_pos # Scale the coords with short length if shapes for q and k are different. q_coords = torch.arange(q_size)[:, None] * max(k_size / q_size, 1.0) k_coords = torch.arange(k_size)[None, :] * max(q_size / k_size, 1.0) relative_coords = (q_coords - k_coords) + (k_size - 1) * max(q_size / k_size, 1.0) return rel_pos_resized[relative_coords.long()] def add_decomposed_rel_pos( attn: torch.Tensor, q: torch.Tensor, rel_pos_h: torch.Tensor, rel_pos_w: torch.Tensor, q_size: Tuple[int, int], k_size: Tuple[int, int], ) -> torch.Tensor: """ Calculate decomposed Relative Positional Embeddings from :paper:`mvitv2`. https://github.com/facebookresearch/mvit/blob/19786631e330df9f3622e5402b4a419a263a2c80/mvit/models/attention.py # noqa B950 Args: attn (Tensor): attention map. q (Tensor): query q in the attention layer with shape (B, q_h * q_w, C). rel_pos_h (Tensor): relative position embeddings (Lh, C) for height axis. rel_pos_w (Tensor): relative position embeddings (Lw, C) for width axis. q_size (Tuple): spatial sequence size of query q with (q_h, q_w). k_size (Tuple): spatial sequence size of key k with (k_h, k_w). Returns: attn (Tensor): attention map with added relative positional embeddings. """ q_h, q_w = q_size k_h, k_w = k_size Rh = get_rel_pos(q_h, k_h, rel_pos_h) Rw = get_rel_pos(q_w, k_w, rel_pos_w) B, _, dim = q.shape r_q = q.reshape(B, q_h, q_w, dim) rel_h = torch.einsum("bhwc,hkc->bhwk", r_q, Rh) rel_w = torch.einsum("bhwc,wkc->bhwk", r_q, Rw) attn = ( attn.view(B, q_h, q_w, k_h, k_w) + rel_h[:, :, :, :, None] + rel_w[:, :, :, None, :] ).view(B, q_h * q_w, k_h * k_w) return attn class PatchEmbed(nn.Module): """ Image to Patch Embedding. 
""" def __init__( self, kernel_size: Tuple[int, int] = (16, 16), stride: Tuple[int, int] = (16, 16), padding: Tuple[int, int] = (0, 0), in_chans: int = 3, embed_dim: int = 768, ) -> None: """ Args: kernel_size (Tuple): kernel size of the projection layer. stride (Tuple): stride of the projection layer. padding (Tuple): padding size of the projection layer. in_chans (int): Number of input image channels. embed_dim (int): Patch embedding dimension. """ super().__init__() self.proj = nn.Conv2d( in_chans, embed_dim, kernel_size=kernel_size, stride=stride, padding=padding ) def forward(self, x: torch.Tensor) -> torch.Tensor: x = self.proj(x) # B C H W -> B H W C x = x.permute(0, 2, 3, 1) return x ================================================ FILE: auto-seg/submodules/segment-anything-1/segment_anything/modeling/mask_decoder.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. import torch from torch import nn from torch.nn import functional as F from typing import List, Tuple, Type from .common import LayerNorm2d class MaskDecoder(nn.Module): def __init__( self, *, transformer_dim: int, transformer: nn.Module, num_multimask_outputs: int = 3, activation: Type[nn.Module] = nn.GELU, iou_head_depth: int = 3, iou_head_hidden_dim: int = 256, ) -> None: """ Predicts masks given an image and prompt embeddings, using a transformer architecture. 
Arguments: transformer_dim (int): the channel dimension of the transformer transformer (nn.Module): the transformer used to predict masks num_multimask_outputs (int): the number of masks to predict when disambiguating masks activation (nn.Module): the type of activation to use when upscaling masks iou_head_depth (int): the depth of the MLP used to predict mask quality iou_head_hidden_dim (int): the hidden dimension of the MLP used to predict mask quality """ super().__init__() self.transformer_dim = transformer_dim self.transformer = transformer self.num_multimask_outputs = num_multimask_outputs self.iou_token = nn.Embedding(1, transformer_dim) self.num_mask_tokens = num_multimask_outputs + 1 self.mask_tokens = nn.Embedding(self.num_mask_tokens, transformer_dim) self.output_upscaling = nn.Sequential( nn.ConvTranspose2d(transformer_dim, transformer_dim // 4, kernel_size=2, stride=2), LayerNorm2d(transformer_dim // 4), activation(), nn.ConvTranspose2d(transformer_dim // 4, transformer_dim // 8, kernel_size=2, stride=2), activation(), ) self.output_hypernetworks_mlps = nn.ModuleList( [ MLP(transformer_dim, transformer_dim, transformer_dim // 8, 3) for i in range(self.num_mask_tokens) ] ) self.iou_prediction_head = MLP( transformer_dim, iou_head_hidden_dim, self.num_mask_tokens, iou_head_depth ) def forward( self, image_embeddings: torch.Tensor, image_pe: torch.Tensor, sparse_prompt_embeddings: torch.Tensor, dense_prompt_embeddings: torch.Tensor, multimask_output: bool, ) -> Tuple[torch.Tensor, torch.Tensor]: """ Predict masks given image and prompt embeddings. Arguments: image_embeddings (torch.Tensor): the embeddings from the image encoder image_pe (torch.Tensor): positional encoding with the shape of image_embeddings sparse_prompt_embeddings (torch.Tensor): the embeddings of the points and boxes dense_prompt_embeddings (torch.Tensor): the embeddings of the mask inputs multimask_output (bool): Whether to return multiple masks or a single mask. 
        Returns:
          torch.Tensor: batched predicted masks
          torch.Tensor: batched predictions of mask quality
        """
        masks, iou_pred = self.predict_masks(
            image_embeddings=image_embeddings,
            image_pe=image_pe,
            sparse_prompt_embeddings=sparse_prompt_embeddings,
            dense_prompt_embeddings=dense_prompt_embeddings,
        )

        # Select the correct mask or masks for output
        if multimask_output:
            mask_slice = slice(1, None)
        else:
            mask_slice = slice(0, 1)
        masks = masks[:, mask_slice, :, :]
        iou_pred = iou_pred[:, mask_slice]

        # Prepare output
        return masks, iou_pred

    def predict_masks(
        self,
        image_embeddings: torch.Tensor,
        image_pe: torch.Tensor,
        sparse_prompt_embeddings: torch.Tensor,
        dense_prompt_embeddings: torch.Tensor,
    ) -> Tuple[torch.Tensor, torch.Tensor]:
        """Predicts masks. See 'forward' for more details."""
        # Concatenate output tokens
        output_tokens = torch.cat([self.iou_token.weight, self.mask_tokens.weight], dim=0)
        output_tokens = output_tokens.unsqueeze(0).expand(sparse_prompt_embeddings.size(0), -1, -1)
        tokens = torch.cat((output_tokens, sparse_prompt_embeddings), dim=1)

        # Expand per-image data in batch direction to be per-mask
        src = torch.repeat_interleave(image_embeddings, tokens.shape[0], dim=0)
        src = src + dense_prompt_embeddings
        pos_src = torch.repeat_interleave(image_pe, tokens.shape[0], dim=0)
        b, c, h, w = src.shape

        # Run the transformer
        hs, src = self.transformer(src, pos_src, tokens)
        iou_token_out = hs[:, 0, :]
        mask_tokens_out = hs[:, 1 : (1 + self.num_mask_tokens), :]

        # Upscale mask embeddings and predict masks using the mask tokens
        src = src.transpose(1, 2).view(b, c, h, w)
        upscaled_embedding = self.output_upscaling(src)
        hyper_in_list: List[torch.Tensor] = []
        for i in range(self.num_mask_tokens):
            hyper_in_list.append(self.output_hypernetworks_mlps[i](mask_tokens_out[:, i, :]))
        hyper_in = torch.stack(hyper_in_list, dim=1)
        b, c, h, w = upscaled_embedding.shape
        masks = (hyper_in @ upscaled_embedding.view(b, c, h * w)).view(b, -1, h, w)

        # Generate mask quality predictions
        iou_pred = self.iou_prediction_head(iou_token_out)

        return masks, iou_pred


# Lightly adapted from
# https://github.com/facebookresearch/MaskFormer/blob/main/mask_former/modeling/transformer/transformer_predictor.py # noqa
class MLP(nn.Module):
    def __init__(
        self,
        input_dim: int,
        hidden_dim: int,
        output_dim: int,
        num_layers: int,
        sigmoid_output: bool = False,
    ) -> None:
        super().__init__()
        self.num_layers = num_layers
        h = [hidden_dim] * (num_layers - 1)
        self.layers = nn.ModuleList(
            nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim])
        )
        self.sigmoid_output = sigmoid_output

    def forward(self, x):
        for i, layer in enumerate(self.layers):
            x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x)
        if self.sigmoid_output:
            x = torch.sigmoid(x)  # F.sigmoid is deprecated; torch.sigmoid is equivalent
        return x


================================================
FILE: auto-seg/submodules/segment-anything-1/segment_anything/modeling/prompt_encoder.py
================================================
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

import numpy as np
import torch
from torch import nn

from typing import Any, Optional, Tuple, Type

from .common import LayerNorm2d


class PromptEncoder(nn.Module):
    def __init__(
        self,
        embed_dim: int,
        image_embedding_size: Tuple[int, int],
        input_image_size: Tuple[int, int],
        mask_in_chans: int,
        activation: Type[nn.Module] = nn.GELU,
    ) -> None:
        """
        Encodes prompts for input to SAM's mask decoder.

        Arguments:
          embed_dim (int): The prompts' embedding dimension
          image_embedding_size (tuple(int, int)): The spatial size of the
            image embedding, as (H, W).
          input_image_size (tuple(int, int)): The padded size of the image as input
            to the image encoder, as (H, W).
          mask_in_chans (int): The number of hidden channels used for
            encoding input masks.
          activation (nn.Module): The activation to use when encoding
            input masks.
""" super().__init__() self.embed_dim = embed_dim self.input_image_size = input_image_size self.image_embedding_size = image_embedding_size self.pe_layer = PositionEmbeddingRandom(embed_dim // 2) self.num_point_embeddings: int = 4 # pos/neg point + 2 box corners point_embeddings = [nn.Embedding(1, embed_dim) for i in range(self.num_point_embeddings)] self.point_embeddings = nn.ModuleList(point_embeddings) self.not_a_point_embed = nn.Embedding(1, embed_dim) self.mask_input_size = (4 * image_embedding_size[0], 4 * image_embedding_size[1]) self.mask_downscaling = nn.Sequential( nn.Conv2d(1, mask_in_chans // 4, kernel_size=2, stride=2), LayerNorm2d(mask_in_chans // 4), activation(), nn.Conv2d(mask_in_chans // 4, mask_in_chans, kernel_size=2, stride=2), LayerNorm2d(mask_in_chans), activation(), nn.Conv2d(mask_in_chans, embed_dim, kernel_size=1), ) self.no_mask_embed = nn.Embedding(1, embed_dim) def get_dense_pe(self) -> torch.Tensor: """ Returns the positional encoding used to encode point prompts, applied to a dense set of points the shape of the image encoding. 
Returns: torch.Tensor: Positional encoding with shape 1x(embed_dim)x(embedding_h)x(embedding_w) """ return self.pe_layer(self.image_embedding_size).unsqueeze(0) def _embed_points( self, points: torch.Tensor, labels: torch.Tensor, pad: bool, ) -> torch.Tensor: """Embeds point prompts.""" points = points + 0.5 # Shift to center of pixel if pad: padding_point = torch.zeros((points.shape[0], 1, 2), device=points.device) padding_label = -torch.ones((labels.shape[0], 1), device=labels.device) points = torch.cat([points, padding_point], dim=1) labels = torch.cat([labels, padding_label], dim=1) point_embedding = self.pe_layer.forward_with_coords(points, self.input_image_size) point_embedding[labels == -1] = 0.0 point_embedding[labels == -1] += self.not_a_point_embed.weight point_embedding[labels == 0] += self.point_embeddings[0].weight point_embedding[labels == 1] += self.point_embeddings[1].weight return point_embedding def _embed_boxes(self, boxes: torch.Tensor) -> torch.Tensor: """Embeds box prompts.""" boxes = boxes + 0.5 # Shift to center of pixel coords = boxes.reshape(-1, 2, 2) corner_embedding = self.pe_layer.forward_with_coords(coords, self.input_image_size) corner_embedding[:, 0, :] += self.point_embeddings[2].weight corner_embedding[:, 1, :] += self.point_embeddings[3].weight return corner_embedding def _embed_masks(self, masks: torch.Tensor) -> torch.Tensor: """Embeds mask inputs.""" mask_embedding = self.mask_downscaling(masks) return mask_embedding def _get_batch_size( self, points: Optional[Tuple[torch.Tensor, torch.Tensor]], boxes: Optional[torch.Tensor], masks: Optional[torch.Tensor], ) -> int: """ Gets the batch size of the output given the batch size of the input prompts. 
""" if points is not None: return points[0].shape[0] elif boxes is not None: return boxes.shape[0] elif masks is not None: return masks.shape[0] else: return 1 def _get_device(self) -> torch.device: return self.point_embeddings[0].weight.device def forward( self, points: Optional[Tuple[torch.Tensor, torch.Tensor]], boxes: Optional[torch.Tensor], masks: Optional[torch.Tensor], ) -> Tuple[torch.Tensor, torch.Tensor]: """ Embeds different types of prompts, returning both sparse and dense embeddings. Arguments: points (tuple(torch.Tensor, torch.Tensor) or none): point coordinates and labels to embed. boxes (torch.Tensor or none): boxes to embed masks (torch.Tensor or none): masks to embed Returns: torch.Tensor: sparse embeddings for the points and boxes, with shape BxNx(embed_dim), where N is determined by the number of input points and boxes. torch.Tensor: dense embeddings for the masks, in the shape Bx(embed_dim)x(embed_H)x(embed_W) """ bs = self._get_batch_size(points, boxes, masks) sparse_embeddings = torch.empty((bs, 0, self.embed_dim), device=self._get_device()) if points is not None: coords, labels = points point_embeddings = self._embed_points(coords, labels, pad=(boxes is None)) sparse_embeddings = torch.cat([sparse_embeddings, point_embeddings], dim=1) if boxes is not None: box_embeddings = self._embed_boxes(boxes) sparse_embeddings = torch.cat([sparse_embeddings, box_embeddings], dim=1) if masks is not None: dense_embeddings = self._embed_masks(masks) else: dense_embeddings = self.no_mask_embed.weight.reshape(1, -1, 1, 1).expand( bs, -1, self.image_embedding_size[0], self.image_embedding_size[1] ) return sparse_embeddings, dense_embeddings class PositionEmbeddingRandom(nn.Module): """ Positional encoding using random spatial frequencies. 
""" def __init__(self, num_pos_feats: int = 64, scale: Optional[float] = None) -> None: super().__init__() if scale is None or scale <= 0.0: scale = 1.0 self.register_buffer( "positional_encoding_gaussian_matrix", scale * torch.randn((2, num_pos_feats)), ) def _pe_encoding(self, coords: torch.Tensor) -> torch.Tensor: """Positionally encode points that are normalized to [0,1].""" # assuming coords are in [0, 1]^2 square and have d_1 x ... x d_n x 2 shape coords = 2 * coords - 1 coords = coords @ self.positional_encoding_gaussian_matrix coords = 2 * np.pi * coords # outputs d_1 x ... x d_n x C shape return torch.cat([torch.sin(coords), torch.cos(coords)], dim=-1) def forward(self, size: Tuple[int, int]) -> torch.Tensor: """Generate positional encoding for a grid of the specified size.""" h, w = size device: Any = self.positional_encoding_gaussian_matrix.device grid = torch.ones((h, w), device=device, dtype=torch.float32) y_embed = grid.cumsum(dim=0) - 0.5 x_embed = grid.cumsum(dim=1) - 0.5 y_embed = y_embed / h x_embed = x_embed / w pe = self._pe_encoding(torch.stack([x_embed, y_embed], dim=-1)) return pe.permute(2, 0, 1) # C x H x W def forward_with_coords( self, coords_input: torch.Tensor, image_size: Tuple[int, int] ) -> torch.Tensor: """Positionally encode points that are not normalized to [0,1].""" coords = coords_input.clone() coords[:, :, 0] = coords[:, :, 0] / image_size[1] coords[:, :, 1] = coords[:, :, 1] / image_size[0] return self._pe_encoding(coords.to(torch.float)) # B x N x C ================================================ FILE: auto-seg/submodules/segment-anything-1/segment_anything/modeling/sam.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. 
import torch from torch import nn from torch.nn import functional as F from typing import Any, Dict, List, Tuple from .image_encoder import ImageEncoderViT from .mask_decoder import MaskDecoder from .prompt_encoder import PromptEncoder class Sam(nn.Module): mask_threshold: float = 0.0 image_format: str = "RGB" def __init__( self, image_encoder: ImageEncoderViT, prompt_encoder: PromptEncoder, mask_decoder: MaskDecoder, pixel_mean: List[float] = [123.675, 116.28, 103.53], pixel_std: List[float] = [58.395, 57.12, 57.375], ) -> None: """ SAM predicts object masks from an image and input prompts. Arguments: image_encoder (ImageEncoderViT): The backbone used to encode the image into image embeddings that allow for efficient mask prediction. prompt_encoder (PromptEncoder): Encodes various types of input prompts. mask_decoder (MaskDecoder): Predicts masks from the image embeddings and encoded prompts. pixel_mean (list(float)): Mean values for normalizing pixels in the input image. pixel_std (list(float)): Std values for normalizing pixels in the input image. """ super().__init__() self.image_encoder = image_encoder self.prompt_encoder = prompt_encoder self.mask_decoder = mask_decoder self.register_buffer("pixel_mean", torch.Tensor(pixel_mean).view(-1, 1, 1), False) self.register_buffer("pixel_std", torch.Tensor(pixel_std).view(-1, 1, 1), False) @property def device(self) -> Any: return self.pixel_mean.device @torch.no_grad() def forward( self, batched_input: List[Dict[str, Any]], multimask_output: bool, ) -> List[Dict[str, torch.Tensor]]: """ Predicts masks end-to-end from provided images and prompts. If prompts are not known in advance, using SamPredictor is recommended over calling the model directly. Arguments: batched_input (list(dict)): A list over input images, each a dictionary with the following keys. A prompt key can be excluded if it is not present. 'image': The image as a torch tensor in 3xHxW format, already transformed for input to the model. 
'original_size': (tuple(int, int)) The original size of the image before transformation, as (H, W). 'point_coords': (torch.Tensor) Batched point prompts for this image, with shape BxNx2. Already transformed to the input frame of the model. 'point_labels': (torch.Tensor) Batched labels for point prompts, with shape BxN. 'boxes': (torch.Tensor) Batched box inputs, with shape Bx4. Already transformed to the input frame of the model. 'mask_inputs': (torch.Tensor) Batched mask inputs to the model, in the form Bx1xHxW. multimask_output (bool): Whether the model should predict multiple disambiguating masks, or return a single mask. Returns: (list(dict)): A list over input images, where each element is as dictionary with the following keys. 'masks': (torch.Tensor) Batched binary mask predictions, with shape BxCxHxW, where B is the number of input prompts, C is determined by multimask_output, and (H, W) is the original size of the image. 'iou_predictions': (torch.Tensor) The model's predictions of mask quality, in shape BxC. 'low_res_logits': (torch.Tensor) Low resolution logits with shape BxCxHxW, where H=W=256. Can be passed as mask input to subsequent iterations of prediction. 
""" input_images = torch.stack([self.preprocess(x["image"]) for x in batched_input], dim=0) image_embeddings = self.image_encoder(input_images) outputs = [] for image_record, curr_embedding in zip(batched_input, image_embeddings): if "point_coords" in image_record: points = (image_record["point_coords"], image_record["point_labels"]) else: points = None sparse_embeddings, dense_embeddings = self.prompt_encoder( points=points, boxes=image_record.get("boxes", None), masks=image_record.get("mask_inputs", None), ) low_res_masks, iou_predictions = self.mask_decoder( image_embeddings=curr_embedding.unsqueeze(0), image_pe=self.prompt_encoder.get_dense_pe(), sparse_prompt_embeddings=sparse_embeddings, dense_prompt_embeddings=dense_embeddings, multimask_output=multimask_output, ) masks = self.postprocess_masks( low_res_masks, input_size=image_record["image"].shape[-2:], original_size=image_record["original_size"], ) masks = masks > self.mask_threshold outputs.append( { "masks": masks, "iou_predictions": iou_predictions, "low_res_logits": low_res_masks, } ) return outputs def postprocess_masks( self, masks: torch.Tensor, input_size: Tuple[int, ...], original_size: Tuple[int, ...], ) -> torch.Tensor: """ Remove padding and upscale masks to the original image size. Arguments: masks (torch.Tensor): Batched masks from the mask_decoder, in BxCxHxW format. input_size (tuple(int, int)): The size of the image input to the model, in (H, W) format. Used to remove padding. original_size (tuple(int, int)): The original size of the image before resizing for input to the model, in (H, W) format. Returns: (torch.Tensor): Batched masks in BxCxHxW format, where (H, W) is given by original_size. 
""" masks = F.interpolate( masks, (self.image_encoder.img_size, self.image_encoder.img_size), mode="bilinear", align_corners=False, ) masks = masks[..., : input_size[0], : input_size[1]] masks = F.interpolate(masks, original_size, mode="bilinear", align_corners=False) return masks def preprocess(self, x: torch.Tensor) -> torch.Tensor: """Normalize pixel values and pad to a square input.""" # Normalize colors x = (x - self.pixel_mean) / self.pixel_std # Pad h, w = x.shape[-2:] padh = self.image_encoder.img_size - h padw = self.image_encoder.img_size - w x = F.pad(x, (0, padw, 0, padh)) return x ================================================ FILE: auto-seg/submodules/segment-anything-1/segment_anything/modeling/transformer.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. import torch from torch import Tensor, nn import math from typing import Tuple, Type from .common import MLPBlock class TwoWayTransformer(nn.Module): def __init__( self, depth: int, embedding_dim: int, num_heads: int, mlp_dim: int, activation: Type[nn.Module] = nn.ReLU, attention_downsample_rate: int = 2, ) -> None: """ A transformer decoder that attends to an input image using queries whose positional embedding is supplied. Args: depth (int): number of layers in the transformer embedding_dim (int): the channel dimension for the input embeddings num_heads (int): the number of heads for multihead attention. 
Must divide embedding_dim mlp_dim (int): the channel dimension internal to the MLP block activation (nn.Module): the activation to use in the MLP block """ super().__init__() self.depth = depth self.embedding_dim = embedding_dim self.num_heads = num_heads self.mlp_dim = mlp_dim self.layers = nn.ModuleList() for i in range(depth): self.layers.append( TwoWayAttentionBlock( embedding_dim=embedding_dim, num_heads=num_heads, mlp_dim=mlp_dim, activation=activation, attention_downsample_rate=attention_downsample_rate, skip_first_layer_pe=(i == 0), ) ) self.final_attn_token_to_image = Attention( embedding_dim, num_heads, downsample_rate=attention_downsample_rate ) self.norm_final_attn = nn.LayerNorm(embedding_dim) def forward( self, image_embedding: Tensor, image_pe: Tensor, point_embedding: Tensor, ) -> Tuple[Tensor, Tensor]: """ Args: image_embedding (torch.Tensor): image to attend to. Should be shape B x embedding_dim x h x w for any h and w. image_pe (torch.Tensor): the positional encoding to add to the image. Must have the same shape as image_embedding. point_embedding (torch.Tensor): the embedding to add to the query points. Must have shape B x N_points x embedding_dim for any N_points. 
Returns: torch.Tensor: the processed point_embedding torch.Tensor: the processed image_embedding """ # BxCxHxW -> BxHWxC == B x N_image_tokens x C bs, c, h, w = image_embedding.shape image_embedding = image_embedding.flatten(2).permute(0, 2, 1) image_pe = image_pe.flatten(2).permute(0, 2, 1) # Prepare queries queries = point_embedding keys = image_embedding # Apply transformer blocks and final layernorm for layer in self.layers: queries, keys = layer( queries=queries, keys=keys, query_pe=point_embedding, key_pe=image_pe, ) # Apply the final attention layer from the points to the image q = queries + point_embedding k = keys + image_pe attn_out = self.final_attn_token_to_image(q=q, k=k, v=keys) queries = queries + attn_out queries = self.norm_final_attn(queries) return queries, keys class TwoWayAttentionBlock(nn.Module): def __init__( self, embedding_dim: int, num_heads: int, mlp_dim: int = 2048, activation: Type[nn.Module] = nn.ReLU, attention_downsample_rate: int = 2, skip_first_layer_pe: bool = False, ) -> None: """ A transformer block with four layers: (1) self-attention of sparse inputs, (2) cross attention of sparse inputs to dense inputs, (3) mlp block on sparse inputs, and (4) cross attention of dense inputs to sparse inputs. 
Arguments: embedding_dim (int): the channel dimension of the embeddings num_heads (int): the number of heads in the attention layers mlp_dim (int): the hidden dimension of the mlp block activation (nn.Module): the activation of the mlp block skip_first_layer_pe (bool): skip the PE on the first layer """ super().__init__() self.self_attn = Attention(embedding_dim, num_heads) self.norm1 = nn.LayerNorm(embedding_dim) self.cross_attn_token_to_image = Attention( embedding_dim, num_heads, downsample_rate=attention_downsample_rate ) self.norm2 = nn.LayerNorm(embedding_dim) self.mlp = MLPBlock(embedding_dim, mlp_dim, activation) self.norm3 = nn.LayerNorm(embedding_dim) self.norm4 = nn.LayerNorm(embedding_dim) self.cross_attn_image_to_token = Attention( embedding_dim, num_heads, downsample_rate=attention_downsample_rate ) self.skip_first_layer_pe = skip_first_layer_pe def forward( self, queries: Tensor, keys: Tensor, query_pe: Tensor, key_pe: Tensor ) -> Tuple[Tensor, Tensor]: # Self attention block if self.skip_first_layer_pe: queries = self.self_attn(q=queries, k=queries, v=queries) else: q = queries + query_pe attn_out = self.self_attn(q=q, k=q, v=queries) queries = queries + attn_out queries = self.norm1(queries) # Cross attention block, tokens attending to image embedding q = queries + query_pe k = keys + key_pe attn_out = self.cross_attn_token_to_image(q=q, k=k, v=keys) queries = queries + attn_out queries = self.norm2(queries) # MLP block mlp_out = self.mlp(queries) queries = queries + mlp_out queries = self.norm3(queries) # Cross attention block, image embedding attending to tokens q = queries + query_pe k = keys + key_pe attn_out = self.cross_attn_image_to_token(q=k, k=q, v=queries) keys = keys + attn_out keys = self.norm4(keys) return queries, keys class Attention(nn.Module): """ An attention layer that allows for downscaling the size of the embedding after projection to queries, keys, and values. 
""" def __init__( self, embedding_dim: int, num_heads: int, downsample_rate: int = 1, ) -> None: super().__init__() self.embedding_dim = embedding_dim self.internal_dim = embedding_dim // downsample_rate self.num_heads = num_heads assert self.internal_dim % num_heads == 0, "num_heads must divide embedding_dim." self.q_proj = nn.Linear(embedding_dim, self.internal_dim) self.k_proj = nn.Linear(embedding_dim, self.internal_dim) self.v_proj = nn.Linear(embedding_dim, self.internal_dim) self.out_proj = nn.Linear(self.internal_dim, embedding_dim) def _separate_heads(self, x: Tensor, num_heads: int) -> Tensor: b, n, c = x.shape x = x.reshape(b, n, num_heads, c // num_heads) return x.transpose(1, 2) # B x N_heads x N_tokens x C_per_head def _recombine_heads(self, x: Tensor) -> Tensor: b, n_heads, n_tokens, c_per_head = x.shape x = x.transpose(1, 2) return x.reshape(b, n_tokens, n_heads * c_per_head) # B x N_tokens x C def forward(self, q: Tensor, k: Tensor, v: Tensor) -> Tensor: # Input projections q = self.q_proj(q) k = self.k_proj(k) v = self.v_proj(v) # Separate into heads q = self._separate_heads(q, self.num_heads) k = self._separate_heads(k, self.num_heads) v = self._separate_heads(v, self.num_heads) # Attention _, _, _, c_per_head = q.shape attn = q @ k.permute(0, 1, 3, 2) # B x N_heads x N_tokens x N_tokens attn = attn / math.sqrt(c_per_head) attn = torch.softmax(attn, dim=-1) # Get output out = attn @ v out = self._recombine_heads(out) out = self.out_proj(out) return out ================================================ FILE: auto-seg/submodules/segment-anything-1/segment_anything/predictor.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. 
import numpy as np import torch from segment_anything.modeling import Sam from typing import Optional, Tuple from .utils.transforms import ResizeLongestSide class SamPredictor: def __init__( self, sam_model: Sam, ) -> None: """ Uses SAM to calculate the image embedding for an image, and then allow repeated, efficient mask prediction given prompts. Arguments: sam_model (Sam): The model to use for mask prediction. """ super().__init__() self.model = sam_model self.transform = ResizeLongestSide(sam_model.image_encoder.img_size) self.reset_image() def set_image( self, image: np.ndarray, image_format: str = "RGB", ) -> None: """ Calculates the image embeddings for the provided image, allowing masks to be predicted with the 'predict' method. Arguments: image (np.ndarray): The image for calculating masks. Expects an image in HWC uint8 format, with pixel values in [0, 255]. image_format (str): The color format of the image, in ['RGB', 'BGR']. """ assert image_format in [ "RGB", "BGR", ], f"image_format must be in ['RGB', 'BGR'], is {image_format}." if image_format != self.model.image_format: image = image[..., ::-1] # Transform the image to the form expected by the model input_image = self.transform.apply_image(image) input_image_torch = torch.as_tensor(input_image, device=self.device) input_image_torch = input_image_torch.permute(2, 0, 1).contiguous()[None, :, :, :] self.set_torch_image(input_image_torch, image.shape[:2]) @torch.no_grad() def set_torch_image( self, transformed_image: torch.Tensor, original_image_size: Tuple[int, ...], ) -> None: """ Calculates the image embeddings for the provided image, allowing masks to be predicted with the 'predict' method. Expects the input image to be already transformed to the format expected by the model. Arguments: transformed_image (torch.Tensor): The input image, with shape 1x3xHxW, which has been transformed with ResizeLongestSide. 
original_image_size (tuple(int, int)): The size of the image before transformation, in (H, W) format. """ assert ( len(transformed_image.shape) == 4 and transformed_image.shape[1] == 3 and max(*transformed_image.shape[2:]) == self.model.image_encoder.img_size ), f"set_torch_image input must be BCHW with long side {self.model.image_encoder.img_size}." self.reset_image() self.original_size = original_image_size self.input_size = tuple(transformed_image.shape[-2:]) input_image = self.model.preprocess(transformed_image) self.features = self.model.image_encoder(input_image) self.is_image_set = True def predict( self, point_coords: Optional[np.ndarray] = None, point_labels: Optional[np.ndarray] = None, box: Optional[np.ndarray] = None, mask_input: Optional[np.ndarray] = None, multimask_output: bool = True, return_logits: bool = False, ) -> Tuple[np.ndarray, np.ndarray, np.ndarray]: """ Predict masks for the given input prompts, using the currently set image. Arguments: point_coords (np.ndarray or None): A Nx2 array of point prompts to the model. Each point is in (X,Y) in pixels. point_labels (np.ndarray or None): A length N array of labels for the point prompts. 1 indicates a foreground point and 0 indicates a background point. box (np.ndarray or None): A length 4 array given a box prompt to the model, in XYXY format. mask_input (np.ndarray): A low resolution mask input to the model, typically coming from a previous prediction iteration. Has form 1xHxW, where for SAM, H=W=256. multimask_output (bool): If true, the model will return three masks. For ambiguous input prompts (such as a single click), this will often produce better masks than a single prediction. If only a single mask is needed, the model's predicted quality score can be used to select the best mask. For non-ambiguous prompts, such as multiple input prompts, multimask_output=False can give better results. return_logits (bool): If true, returns un-thresholded masks logits instead of a binary mask. 
Returns: (np.ndarray): The output masks in CxHxW format, where C is the number of masks, and (H, W) is the original image size. (np.ndarray): An array of length C containing the model's predictions for the quality of each mask. (np.ndarray): An array of shape CxHxW, where C is the number of masks and H=W=256. These low resolution logits can be passed to a subsequent iteration as mask input. """ if not self.is_image_set: raise RuntimeError("An image must be set with .set_image(...) before mask prediction.") # Transform input prompts coords_torch, labels_torch, box_torch, mask_input_torch = None, None, None, None if point_coords is not None: assert ( point_labels is not None ), "point_labels must be supplied if point_coords is supplied." point_coords = self.transform.apply_coords(point_coords, self.original_size) coords_torch = torch.as_tensor(point_coords, dtype=torch.float, device=self.device) labels_torch = torch.as_tensor(point_labels, dtype=torch.int, device=self.device) coords_torch, labels_torch = coords_torch[None, :, :], labels_torch[None, :] if box is not None: box = self.transform.apply_boxes(box, self.original_size) box_torch = torch.as_tensor(box, dtype=torch.float, device=self.device) box_torch = box_torch[None, :] if mask_input is not None: mask_input_torch = torch.as_tensor(mask_input, dtype=torch.float, device=self.device) mask_input_torch = mask_input_torch[None, :, :, :] masks, iou_predictions, low_res_masks = self.predict_torch( coords_torch, labels_torch, box_torch, mask_input_torch, multimask_output, return_logits=return_logits, ) masks_np = masks[0].detach().cpu().numpy() iou_predictions_np = iou_predictions[0].detach().cpu().numpy() low_res_masks_np = low_res_masks[0].detach().cpu().numpy() return masks_np, iou_predictions_np, low_res_masks_np @torch.no_grad() def predict_torch( self, point_coords: Optional[torch.Tensor], point_labels: Optional[torch.Tensor], boxes: Optional[torch.Tensor] = None, mask_input: Optional[torch.Tensor] = None, 
        multimask_output: bool = True,
        return_logits: bool = False,
    ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """
        Predict masks for the given input prompts, using the currently set image.
        Input prompts are batched torch tensors and are expected to already be
        transformed to the input frame using ResizeLongestSide.

        Arguments:
          point_coords (torch.Tensor or None): A BxNx2 array of point prompts to
            the model. Each point is in (X,Y) in pixels.
          point_labels (torch.Tensor or None): A BxN array of labels for the
            point prompts. 1 indicates a foreground point and 0 indicates a
            background point.
          boxes (torch.Tensor or None): A Bx4 array given a box prompt to the
            model, in XYXY format.
          mask_input (torch.Tensor or None): A low resolution mask input to the
            model, typically coming from a previous prediction iteration. Has
            form Bx1xHxW, where for SAM, H=W=256. Masks returned by a previous
            iteration of the predict method do not need further transformation.
          multimask_output (bool): If true, the model will return three masks.
            For ambiguous input prompts (such as a single click), this will often
            produce better masks than a single prediction. If only a single
            mask is needed, the model's predicted quality score can be used
            to select the best mask. For non-ambiguous prompts, such as multiple
            input prompts, multimask_output=False can give better results.
          return_logits (bool): If true, returns un-thresholded mask logits
            instead of a binary mask.

        Returns:
          (torch.Tensor): The output masks in BxCxHxW format, where C is the
            number of masks, and (H, W) is the original image size.
          (torch.Tensor): An array of shape BxC containing the model's
            predictions for the quality of each mask.
          (torch.Tensor): An array of shape BxCxHxW, where C is the number
            of masks and H=W=256. These low res logits can be passed to
            a subsequent iteration as mask input.
        """
        if not self.is_image_set:
            raise RuntimeError("An image must be set with .set_image(...) before mask prediction.")

        if point_coords is not None:
            points = (point_coords, point_labels)
        else:
            points = None

        # Embed prompts
        sparse_embeddings, dense_embeddings = self.model.prompt_encoder(
            points=points,
            boxes=boxes,
            masks=mask_input,
        )

        # Predict masks
        low_res_masks, iou_predictions = self.model.mask_decoder(
            image_embeddings=self.features,
            image_pe=self.model.prompt_encoder.get_dense_pe(),
            sparse_prompt_embeddings=sparse_embeddings,
            dense_prompt_embeddings=dense_embeddings,
            multimask_output=multimask_output,
        )

        # Upscale the masks to the original image resolution
        masks = self.model.postprocess_masks(low_res_masks, self.input_size, self.original_size)

        if not return_logits:
            masks = masks > self.model.mask_threshold

        return masks, iou_predictions, low_res_masks

    def get_image_embedding(self) -> torch.Tensor:
        """
        Returns the image embeddings for the currently set image, with
        shape 1xCxHxW, where C is the embedding dimension and (H,W) are
        the embedding spatial dimension of SAM (typically C=256, H=W=64).
        """
        if not self.is_image_set:
            raise RuntimeError(
                "An image must be set with .set_image(...) to generate an embedding."
            )
        assert self.features is not None, "Features must exist if an image has been set."
        return self.features

    @property
    def device(self) -> torch.device:
        return self.model.device

    def reset_image(self) -> None:
        """Resets the currently set image."""
        self.is_image_set = False
        self.features = None
        self.orig_h = None
        self.orig_w = None
        self.input_h = None
        self.input_w = None


================================================
FILE: auto-seg/submodules/segment-anything-1/segment_anything/utils/__init__.py
================================================
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
================================================ FILE: auto-seg/submodules/segment-anything-1/segment_anything/utils/amg.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. import numpy as np import torch import math from copy import deepcopy from itertools import product from typing import Any, Dict, Generator, ItemsView, List, Tuple class MaskData: """ A structure for storing masks and their related data in batched format. Implements basic filtering and concatenation. """ def __init__(self, **kwargs) -> None: for v in kwargs.values(): assert isinstance( v, (list, np.ndarray, torch.Tensor) ), "MaskData only supports list, numpy arrays, and torch tensors." self._stats = dict(**kwargs) def __setitem__(self, key: str, item: Any) -> None: assert isinstance( item, (list, np.ndarray, torch.Tensor) ), "MaskData only supports list, numpy arrays, and torch tensors." 
self._stats[key] = item def __delitem__(self, key: str) -> None: del self._stats[key] def __getitem__(self, key: str) -> Any: return self._stats[key] def items(self) -> ItemsView[str, Any]: return self._stats.items() def filter(self, keep: torch.Tensor) -> None: for k, v in self._stats.items(): if v is None: self._stats[k] = None elif isinstance(v, torch.Tensor): self._stats[k] = v[torch.as_tensor(keep, device=v.device)] elif isinstance(v, np.ndarray): self._stats[k] = v[keep.detach().cpu().numpy()] elif isinstance(v, list) and keep.dtype == torch.bool: self._stats[k] = [a for i, a in enumerate(v) if keep[i]] elif isinstance(v, list): self._stats[k] = [v[i] for i in keep] else: raise TypeError(f"MaskData key {k} has an unsupported type {type(v)}.") def cat(self, new_stats: "MaskData") -> None: for k, v in new_stats.items(): if k not in self._stats or self._stats[k] is None: self._stats[k] = deepcopy(v) elif isinstance(v, torch.Tensor): self._stats[k] = torch.cat([self._stats[k], v], dim=0) elif isinstance(v, np.ndarray): self._stats[k] = np.concatenate([self._stats[k], v], axis=0) elif isinstance(v, list): self._stats[k] = self._stats[k] + deepcopy(v) else: raise TypeError(f"MaskData key {k} has an unsupported type {type(v)}.") def to_numpy(self) -> None: for k, v in self._stats.items(): if isinstance(v, torch.Tensor): if v.dtype == torch.bfloat16: v = v.to(torch.float32) self._stats[k] = v.detach().cpu().numpy() def is_box_near_crop_edge( boxes: torch.Tensor, crop_box: List[int], orig_box: List[int], atol: float = 20.0 ) -> torch.Tensor: """Filter masks at the edge of a crop, but not at the edge of the original image.""" crop_box_torch = torch.as_tensor(crop_box, dtype=torch.float, device=boxes.device) orig_box_torch = torch.as_tensor(orig_box, dtype=torch.float, device=boxes.device) boxes = uncrop_boxes_xyxy(boxes, crop_box).float() near_crop_edge = torch.isclose(boxes, crop_box_torch[None, :], atol=atol, rtol=0) near_image_edge = torch.isclose(boxes, 
orig_box_torch[None, :], atol=atol, rtol=0) near_crop_edge = torch.logical_and(near_crop_edge, ~near_image_edge) return torch.any(near_crop_edge, dim=1) def box_xyxy_to_xywh(box_xyxy: torch.Tensor) -> torch.Tensor: box_xywh = deepcopy(box_xyxy) box_xywh[2] = box_xywh[2] - box_xywh[0] box_xywh[3] = box_xywh[3] - box_xywh[1] return box_xywh def batch_iterator(batch_size: int, *args) -> Generator[List[Any], None, None]: assert len(args) > 0 and all( len(a) == len(args[0]) for a in args ), "Batched iteration must have inputs of all the same size." n_batches = len(args[0]) // batch_size + int(len(args[0]) % batch_size != 0) for b in range(n_batches): yield [arg[b * batch_size : (b + 1) * batch_size] for arg in args] def mask_to_rle_pytorch(tensor: torch.Tensor) -> List[Dict[str, Any]]: """ Encodes masks to an uncompressed RLE, in the format expected by pycoco tools. """ # Put in fortran order and flatten h,w b, h, w = tensor.shape tensor = tensor.permute(0, 2, 1).flatten(1) # Compute change indices diff = tensor[:, 1:] ^ tensor[:, :-1] change_indices = diff.nonzero() # Encode run length out = [] for i in range(b): cur_idxs = change_indices[change_indices[:, 0] == i, 1] cur_idxs = torch.cat( [ torch.tensor([0], dtype=cur_idxs.dtype, device=cur_idxs.device), cur_idxs + 1, torch.tensor([h * w], dtype=cur_idxs.dtype, device=cur_idxs.device), ] ) btw_idxs = cur_idxs[1:] - cur_idxs[:-1] counts = [] if tensor[i, 0] == 0 else [0] counts.extend(btw_idxs.detach().cpu().tolist()) out.append({"size": [h, w], "counts": counts}) return out def rle_to_mask(rle: Dict[str, Any]) -> np.ndarray: """Compute a binary mask from an uncompressed RLE.""" h, w = rle["size"] mask = np.empty(h * w, dtype=bool) idx = 0 parity = False for count in rle["counts"]: mask[idx : idx + count] = parity idx += count parity ^= True mask = mask.reshape(w, h) return mask.transpose() # Put in C order def area_from_rle(rle: Dict[str, Any]) -> int: return sum(rle["counts"][1::2]) def calculate_stability_score( 
masks: torch.Tensor, mask_threshold: float, threshold_offset: float ) -> torch.Tensor: """ Computes the stability score for a batch of masks. The stability score is the IoU between the binary masks obtained by thresholding the predicted mask logits at high and low values. """ # One mask is always contained inside the other. # Save memory by preventing unnecessary cast to torch.int64 intersections = ( (masks > (mask_threshold + threshold_offset)) .sum(-1, dtype=torch.int16) .sum(-1, dtype=torch.int32) ) unions = ( (masks > (mask_threshold - threshold_offset)) .sum(-1, dtype=torch.int16) .sum(-1, dtype=torch.int32) ) return intersections / unions def build_point_grid(n_per_side: int) -> np.ndarray: """Generates a 2D grid of points evenly spaced in [0,1]x[0,1].""" offset = 1 / (2 * n_per_side) points_one_side = np.linspace(offset, 1 - offset, n_per_side) points_x = np.tile(points_one_side[None, :], (n_per_side, 1)) points_y = np.tile(points_one_side[:, None], (1, n_per_side)) points = np.stack([points_x, points_y], axis=-1).reshape(-1, 2) return points def build_all_layer_point_grids( n_per_side: int, n_layers: int, scale_per_layer: int ) -> List[np.ndarray]: """Generates point grids for all crop layers.""" points_by_layer = [] for i in range(n_layers + 1): n_points = int(n_per_side / (scale_per_layer**i)) points_by_layer.append(build_point_grid(n_points)) return points_by_layer def generate_crop_boxes( im_size: Tuple[int, ...], n_layers: int, overlap_ratio: float ) -> Tuple[List[List[int]], List[int]]: """ Generates a list of crop boxes of different sizes. Each layer has (2**i)**2 boxes for the ith layer. 
""" crop_boxes, layer_idxs = [], [] im_h, im_w = im_size short_side = min(im_h, im_w) # Original image crop_boxes.append([0, 0, im_w, im_h]) layer_idxs.append(0) def crop_len(orig_len, n_crops, overlap): return int(math.ceil((overlap * (n_crops - 1) + orig_len) / n_crops)) for i_layer in range(n_layers): n_crops_per_side = 2 ** (i_layer + 1) overlap = int(overlap_ratio * short_side * (2 / n_crops_per_side)) crop_w = crop_len(im_w, n_crops_per_side, overlap) crop_h = crop_len(im_h, n_crops_per_side, overlap) crop_box_x0 = [int((crop_w - overlap) * i) for i in range(n_crops_per_side)] crop_box_y0 = [int((crop_h - overlap) * i) for i in range(n_crops_per_side)] # Crops in XYWH format for x0, y0 in product(crop_box_x0, crop_box_y0): box = [x0, y0, min(x0 + crop_w, im_w), min(y0 + crop_h, im_h)] crop_boxes.append(box) layer_idxs.append(i_layer + 1) return crop_boxes, layer_idxs def uncrop_boxes_xyxy(boxes: torch.Tensor, crop_box: List[int]) -> torch.Tensor: x0, y0, _, _ = crop_box offset = torch.tensor([[x0, y0, x0, y0]], device=boxes.device) # Check if boxes has a channel dimension if len(boxes.shape) == 3: offset = offset.unsqueeze(1) return boxes + offset def uncrop_points(points: torch.Tensor, crop_box: List[int]) -> torch.Tensor: x0, y0, _, _ = crop_box offset = torch.tensor([[x0, y0]], device=points.device) # Check if points has a channel dimension if len(points.shape) == 3: offset = offset.unsqueeze(1) return points + offset def uncrop_masks( masks: torch.Tensor, crop_box: List[int], orig_h: int, orig_w: int ) -> torch.Tensor: x0, y0, x1, y1 = crop_box if x0 == 0 and y0 == 0 and x1 == orig_w and y1 == orig_h: return masks # Coordinate transform masks pad_x, pad_y = orig_w - (x1 - x0), orig_h - (y1 - y0) pad = (x0, pad_x - x0, y0, pad_y - y0) return torch.nn.functional.pad(masks, pad, value=0) def remove_small_regions( mask: np.ndarray, area_thresh: float, mode: str ) -> Tuple[np.ndarray, bool]: """ Removes small disconnected regions and holes in a mask. 
Returns the mask and an indicator of whether the mask has been modified. """ import cv2 # type: ignore assert mode in ["holes", "islands"] correct_holes = mode == "holes" working_mask = (correct_holes ^ mask).astype(np.uint8) n_labels, regions, stats, _ = cv2.connectedComponentsWithStats(working_mask, 8) sizes = stats[:, -1][1:] # Row 0 is background label small_regions = [i + 1 for i, s in enumerate(sizes) if s < area_thresh] if len(small_regions) == 0: return mask, False fill_labels = [0] + small_regions if not correct_holes: fill_labels = [i for i in range(n_labels) if i not in fill_labels] # If every region is below threshold, keep largest if len(fill_labels) == 0: fill_labels = [int(np.argmax(sizes)) + 1] mask = np.isin(regions, fill_labels) return mask, True def coco_encode_rle(uncompressed_rle: Dict[str, Any]) -> Dict[str, Any]: from pycocotools import mask as mask_utils # type: ignore h, w = uncompressed_rle["size"] rle = mask_utils.frPyObjects(uncompressed_rle, h, w) rle["counts"] = rle["counts"].decode("utf-8") # Necessary to serialize with json return rle def batched_mask_to_box(masks: torch.Tensor) -> torch.Tensor: """ Calculates boxes in XYXY format around masks. Returns [0,0,0,0] for an empty mask. For input shape C1xC2x...xHxW, the output shape is C1xC2x...x4.
""" # torch.max below raises an error on empty inputs, just skip in this case if torch.numel(masks) == 0: return torch.zeros(*masks.shape[:-2], 4, device=masks.device) # Normalize shape to CxHxW shape = masks.shape h, w = shape[-2:] if len(shape) > 2: masks = masks.flatten(0, -3) else: masks = masks.unsqueeze(0) # Get top and bottom edges in_height, _ = torch.max(masks, dim=-1) in_height_coords = in_height * torch.arange(h, device=in_height.device)[None, :] bottom_edges, _ = torch.max(in_height_coords, dim=-1) in_height_coords = in_height_coords + h * (~in_height) top_edges, _ = torch.min(in_height_coords, dim=-1) # Get left and right edges in_width, _ = torch.max(masks, dim=-2) in_width_coords = in_width * torch.arange(w, device=in_width.device)[None, :] right_edges, _ = torch.max(in_width_coords, dim=-1) in_width_coords = in_width_coords + w * (~in_width) left_edges, _ = torch.min(in_width_coords, dim=-1) # If the mask is empty the right edge will be to the left of the left edge. # Replace these boxes with [0, 0, 0, 0] empty_filter = (right_edges < left_edges) | (bottom_edges < top_edges) out = torch.stack([left_edges, top_edges, right_edges, bottom_edges], dim=-1) out = out * (~empty_filter).unsqueeze(-1) # Return to original shape if len(shape) > 2: out = out.reshape(*shape[:-2], 4) else: out = out[0] return out ================================================ FILE: auto-seg/submodules/segment-anything-1/segment_anything/utils/onnx.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. import torch import torch.nn as nn from torch.nn import functional as F from typing import Tuple from ..modeling import Sam from .amg import calculate_stability_score class SamOnnxModel(nn.Module): """ This model should not be called directly, but is used in ONNX export. 
It combines the prompt encoder, mask decoder, and mask postprocessing of Sam, with some functions modified to enable model tracing. Also supports extra options controlling what information is returned. See the ONNX export script for details. """ def __init__( self, model: Sam, return_single_mask: bool, use_stability_score: bool = False, return_extra_metrics: bool = False, ) -> None: super().__init__() self.mask_decoder = model.mask_decoder self.model = model self.img_size = model.image_encoder.img_size self.return_single_mask = return_single_mask self.use_stability_score = use_stability_score self.stability_score_offset = 1.0 self.return_extra_metrics = return_extra_metrics @staticmethod def resize_longest_image_size( input_image_size: torch.Tensor, longest_side: int ) -> torch.Tensor: input_image_size = input_image_size.to(torch.float32) scale = longest_side / torch.max(input_image_size) transformed_size = scale * input_image_size transformed_size = torch.floor(transformed_size + 0.5).to(torch.int64) return transformed_size def _embed_points(self, point_coords: torch.Tensor, point_labels: torch.Tensor) -> torch.Tensor: point_coords = point_coords + 0.5 point_coords = point_coords / self.img_size point_embedding = self.model.prompt_encoder.pe_layer._pe_encoding(point_coords) point_labels = point_labels.unsqueeze(-1).expand_as(point_embedding) point_embedding = point_embedding * (point_labels != -1) point_embedding = point_embedding + self.model.prompt_encoder.not_a_point_embed.weight * ( point_labels == -1 ) for i in range(self.model.prompt_encoder.num_point_embeddings): point_embedding = point_embedding + self.model.prompt_encoder.point_embeddings[ i ].weight * (point_labels == i) return point_embedding def _embed_masks(self, input_mask: torch.Tensor, has_mask_input: torch.Tensor) -> torch.Tensor: mask_embedding = has_mask_input * self.model.prompt_encoder.mask_downscaling(input_mask) mask_embedding = mask_embedding + ( 1 - has_mask_input ) *
self.model.prompt_encoder.no_mask_embed.weight.reshape(1, -1, 1, 1) return mask_embedding def mask_postprocessing(self, masks: torch.Tensor, orig_im_size: torch.Tensor) -> torch.Tensor: masks = F.interpolate( masks, size=(self.img_size, self.img_size), mode="bilinear", align_corners=False, ) prepadded_size = self.resize_longest_image_size(orig_im_size, self.img_size).to(torch.int64) masks = masks[..., : prepadded_size[0], : prepadded_size[1]] # type: ignore orig_im_size = orig_im_size.to(torch.int64) h, w = orig_im_size[0], orig_im_size[1] masks = F.interpolate(masks, size=(h, w), mode="bilinear", align_corners=False) return masks def select_masks( self, masks: torch.Tensor, iou_preds: torch.Tensor, num_points: int ) -> Tuple[torch.Tensor, torch.Tensor]: # Determine if we should return the multiclick mask or not from the number of points. # The reweighting is used to avoid control flow. score_reweight = torch.tensor( [[1000] + [0] * (self.model.mask_decoder.num_mask_tokens - 1)] ).to(iou_preds.device) score = iou_preds + (num_points - 2.5) * score_reweight best_idx = torch.argmax(score, dim=1) masks = masks[torch.arange(masks.shape[0]), best_idx, :, :].unsqueeze(1) iou_preds = iou_preds[torch.arange(masks.shape[0]), best_idx].unsqueeze(1) return masks, iou_preds @torch.no_grad() def forward( self, image_embeddings: torch.Tensor, point_coords: torch.Tensor, point_labels: torch.Tensor, mask_input: torch.Tensor, has_mask_input: torch.Tensor, orig_im_size: torch.Tensor, ): sparse_embedding = self._embed_points(point_coords, point_labels) dense_embedding = self._embed_masks(mask_input, has_mask_input) masks, scores = self.model.mask_decoder.predict_masks( image_embeddings=image_embeddings, image_pe=self.model.prompt_encoder.get_dense_pe(), sparse_prompt_embeddings=sparse_embedding, dense_prompt_embeddings=dense_embedding, ) if self.use_stability_score: scores = calculate_stability_score( masks, self.model.mask_threshold, self.stability_score_offset ) if 
self.return_single_mask: masks, scores = self.select_masks(masks, scores, point_coords.shape[1]) upscaled_masks = self.mask_postprocessing(masks, orig_im_size) if self.return_extra_metrics: stability_scores = calculate_stability_score( upscaled_masks, self.model.mask_threshold, self.stability_score_offset ) areas = (upscaled_masks > self.model.mask_threshold).sum(-1).sum(-1) return upscaled_masks, scores, stability_scores, areas, masks return upscaled_masks, scores, masks ================================================ FILE: auto-seg/submodules/segment-anything-1/segment_anything/utils/transforms.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. import numpy as np import torch from torch.nn import functional as F from torchvision.transforms.functional import resize, to_pil_image # type: ignore from copy import deepcopy from typing import Tuple class ResizeLongestSide: """ Resizes images to the longest side 'target_length', as well as provides methods for resizing coordinates and boxes. Provides methods for transforming both numpy array and batched torch tensors. """ def __init__(self, target_length: int) -> None: self.target_length = target_length def apply_image(self, image: np.ndarray) -> np.ndarray: """ Expects a numpy array with shape HxWxC in uint8 format. """ target_size = self.get_preprocess_shape(image.shape[0], image.shape[1], self.target_length) return np.array(resize(to_pil_image(image), target_size)) def apply_coords(self, coords: np.ndarray, original_size: Tuple[int, ...]) -> np.ndarray: """ Expects a numpy array of length 2 in the final dimension. Requires the original image size in (H, W) format. 
""" old_h, old_w = original_size new_h, new_w = self.get_preprocess_shape( original_size[0], original_size[1], self.target_length ) coords = deepcopy(coords).astype(float) coords[..., 0] = coords[..., 0] * (new_w / old_w) coords[..., 1] = coords[..., 1] * (new_h / old_h) return coords def apply_boxes(self, boxes: np.ndarray, original_size: Tuple[int, ...]) -> np.ndarray: """ Expects a numpy array shape Bx4. Requires the original image size in (H, W) format. """ boxes = self.apply_coords(boxes.reshape(-1, 2, 2), original_size) return boxes.reshape(-1, 4) def apply_image_torch(self, image: torch.Tensor) -> torch.Tensor: """ Expects batched images with shape BxCxHxW and float format. This transformation may not exactly match apply_image. apply_image is the transformation expected by the model. """ # Expects an image in BCHW format. May not exactly match apply_image. target_size = self.get_preprocess_shape(image.shape[2], image.shape[3], self.target_length) return F.interpolate( image, target_size, mode="bilinear", align_corners=False, antialias=True ) def apply_coords_torch( self, coords: torch.Tensor, original_size: Tuple[int, ...] ) -> torch.Tensor: """ Expects a torch tensor with length 2 in the last dimension. Requires the original image size in (H, W) format. """ old_h, old_w = original_size new_h, new_w = self.get_preprocess_shape( original_size[0], original_size[1], self.target_length ) coords = deepcopy(coords).to(torch.float) coords[..., 0] = coords[..., 0] * (new_w / old_w) coords[..., 1] = coords[..., 1] * (new_h / old_h) return coords def apply_boxes_torch( self, boxes: torch.Tensor, original_size: Tuple[int, ...] ) -> torch.Tensor: """ Expects a torch tensor with shape Bx4. Requires the original image size in (H, W) format. 
""" boxes = self.apply_coords_torch(boxes.reshape(-1, 2, 2), original_size) return boxes.reshape(-1, 4) @staticmethod def get_preprocess_shape(oldh: int, oldw: int, long_side_length: int) -> Tuple[int, int]: """ Compute the output size given input size and target long side length. """ scale = long_side_length * 1.0 / max(oldh, oldw) newh, neww = oldh * scale, oldw * scale neww = int(neww + 0.5) newh = int(newh + 0.5) return (newh, neww) ================================================ FILE: auto-seg/submodules/segment-anything-1/setup.cfg ================================================ [isort] line_length=100 multi_line_output=3 include_trailing_comma=True known_standard_library=numpy,setuptools skip_glob=*/__init__.py known_myself=segment_anything known_third_party=matplotlib,cv2,torch,torchvision,pycocotools,onnx,black,isort no_lines_before=STDLIB,THIRDPARTY sections=FUTURE,STDLIB,THIRDPARTY,MYSELF,FIRSTPARTY,LOCALFOLDER default_section=FIRSTPARTY ================================================ FILE: auto-seg/submodules/segment-anything-1/setup.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. 
from setuptools import find_packages, setup setup( name="segment_anything", version="1.0", install_requires=[], packages=find_packages(exclude="notebooks"), extras_require={ "all": ["matplotlib", "pycocotools", "opencv-python", "onnx", "onnxruntime"], "dev": ["flake8", "isort", "black", "mypy"], }, ) ================================================ FILE: auto-seg/submodules/segment-anything-2/.clang-format ================================================ AccessModifierOffset: -1 AlignAfterOpenBracket: AlwaysBreak AlignConsecutiveAssignments: false AlignConsecutiveDeclarations: false AlignEscapedNewlinesLeft: true AlignOperands: false AlignTrailingComments: false AllowAllParametersOfDeclarationOnNextLine: false AllowShortBlocksOnASingleLine: false AllowShortCaseLabelsOnASingleLine: false AllowShortFunctionsOnASingleLine: Empty AllowShortIfStatementsOnASingleLine: false AllowShortLoopsOnASingleLine: false AlwaysBreakAfterReturnType: None AlwaysBreakBeforeMultilineStrings: true AlwaysBreakTemplateDeclarations: true BinPackArguments: false BinPackParameters: false BraceWrapping: AfterClass: false AfterControlStatement: false AfterEnum: false AfterFunction: false AfterNamespace: false AfterObjCDeclaration: false AfterStruct: false AfterUnion: false BeforeCatch: false BeforeElse: false IndentBraces: false BreakBeforeBinaryOperators: None BreakBeforeBraces: Attach BreakBeforeTernaryOperators: true BreakConstructorInitializersBeforeComma: false BreakAfterJavaFieldAnnotations: false BreakStringLiterals: false ColumnLimit: 80 CommentPragmas: '^ IWYU pragma:' ConstructorInitializerAllOnOneLineOrOnePerLine: true ConstructorInitializerIndentWidth: 4 ContinuationIndentWidth: 4 Cpp11BracedListStyle: true DerivePointerAlignment: false DisableFormat: false ForEachMacros: [ FOR_EACH, FOR_EACH_R, FOR_EACH_RANGE, ] IncludeCategories: - Regex: '^<.*\.h(pp)?>' Priority: 1 - Regex: '^<.*' Priority: 2 - Regex: '.*' Priority: 3 IndentCaseLabels: true IndentWidth: 2 
IndentWrappedFunctionNames: false KeepEmptyLinesAtTheStartOfBlocks: false MacroBlockBegin: '' MacroBlockEnd: '' MaxEmptyLinesToKeep: 1 NamespaceIndentation: None ObjCBlockIndentWidth: 2 ObjCSpaceAfterProperty: false ObjCSpaceBeforeProtocolList: false PenaltyBreakBeforeFirstCallParameter: 1 PenaltyBreakComment: 300 PenaltyBreakFirstLessLess: 120 PenaltyBreakString: 1000 PenaltyExcessCharacter: 1000000 PenaltyReturnTypeOnItsOwnLine: 200 PointerAlignment: Left ReflowComments: true SortIncludes: true SpaceAfterCStyleCast: false SpaceBeforeAssignmentOperators: true SpaceBeforeParens: ControlStatements SpaceInEmptyParentheses: false SpacesBeforeTrailingComments: 1 SpacesInAngles: false SpacesInContainerLiterals: true SpacesInCStyleCastParentheses: false SpacesInParentheses: false SpacesInSquareBrackets: false Standard: Cpp11 TabWidth: 8 UseTab: Never ================================================ FILE: auto-seg/submodules/segment-anything-2/.gitignore ================================================ .vscode/ .DS_Store __pycache__/ *-checkpoint.ipynb .venv *.egg* build/* _C.* outputs/* checkpoints/*.pt ================================================ FILE: auto-seg/submodules/segment-anything-2/.watchmanconfig ================================================ {} ================================================ FILE: auto-seg/submodules/segment-anything-2/CODE_OF_CONDUCT.md ================================================ # Code of Conduct ## Our Pledge In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to make participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. 
## Our Standards Examples of behavior that contributes to creating a positive environment include: * Using welcoming and inclusive language * Being respectful of differing viewpoints and experiences * Gracefully accepting constructive criticism * Focusing on what is best for the community * Showing empathy towards other community members Examples of unacceptable behavior by participants include: * The use of sexualized language or imagery and unwelcome sexual attention or advances * Trolling, insulting/derogatory comments, and personal or political attacks * Public or private harassment * Publishing others' private information, such as a physical or electronic address, without explicit permission * Other conduct which could reasonably be considered inappropriate in a professional setting ## Our Responsibilities Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. ## Scope This Code of Conduct applies within all project spaces, and it also applies when an individual is representing the project or its community in public spaces. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. 
This Code of Conduct also applies outside the project spaces when there is a reasonable belief that an individual's behavior may have a negative impact on the project or its community. ## Enforcement Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at . All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. ## Attribution This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html [homepage]: https://www.contributor-covenant.org For answers to common questions about this code of conduct, see https://www.contributor-covenant.org/faq ================================================ FILE: auto-seg/submodules/segment-anything-2/CONTRIBUTING.md ================================================ # Contributing to segment-anything We want to make contributing to this project as easy and transparent as possible. ## Pull Requests We actively welcome your pull requests. 1. Fork the repo and create your branch from `main`. 2. If you've added code that should be tested, add tests. 3. If you've changed APIs, update the documentation. 4. Ensure the test suite passes. 5. Make sure your code lints, using the `ufmt format` command. Linting requires `black==24.2.0`, `usort==1.0.2`, and `ufmt==2.0.0b2`, which can be installed via `pip install -e ".[dev]"`. 6. If you haven't already, complete the Contributor License Agreement ("CLA"). 
## Contributor License Agreement ("CLA") In order to accept your pull request, we need you to submit a CLA. You only need to do this once to work on any of Facebook's open source projects. Complete your CLA here: ## Issues We use GitHub issues to track public bugs. Please ensure your description is clear and has sufficient instructions to be able to reproduce the issue. Facebook has a [bounty program](https://www.facebook.com/whitehat/) for the safe disclosure of security bugs. In those cases, please go through the process outlined on that page and do not file a public issue. ## License By contributing to segment-anything, you agree that your contributions will be licensed under the LICENSE file in the root directory of this source tree. ================================================ FILE: auto-seg/submodules/segment-anything-2/INSTALL.md ================================================ ## Installation ### Requirements - Linux with Python ≥ 3.10, PyTorch ≥ 2.3.1 and [torchvision](https://github.com/pytorch/vision/) that matches the PyTorch installation. Install them together at https://pytorch.org to ensure this. * Note older versions of Python or PyTorch may also work. However, the versions above are strongly recommended to provide all features such as `torch.compile`. - [CUDA toolkits](https://developer.nvidia.com/cuda-toolkit-archive) that match the CUDA version for your PyTorch installation. This should typically be CUDA 12.1 if you follow the default installation command. - If you are installing on Windows, it's strongly recommended to use [Windows Subsystem for Linux (WSL)](https://learn.microsoft.com/en-us/windows/wsl/install) with Ubuntu. 
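The version requirements above can be sanity-checked with a small preflight script before installing. This is a hedged sketch: the `meets_min` helper is our own, and the `torch` import is guarded since PyTorch may not be installed yet (`torch.__version__` and `torch.version.cuda` are the standard attributes for the installed PyTorch and its CUDA runtime version):

```python
import sys

def meets_min(version: str, minimum: str) -> bool:
    """Numerically compare dotted version strings, e.g. '2.3.1' >= '2.3'."""
    parse = lambda v: tuple(int(p) for p in v.split("."))
    return parse(version) >= parse(minimum)

print("Python >= 3.10:", sys.version_info >= (3, 10))

try:
    import torch
    # Strip any local build suffix like '+cu121' before comparing.
    print("PyTorch >= 2.3.1:", meets_min(torch.__version__.split("+")[0], "2.3.1"))
    print("CUDA version seen by PyTorch:", torch.version.cuda)
except ImportError:
    print("PyTorch is not installed yet")
```

If the CUDA version reported by PyTorch differs from your installed CUDA toolkit, the CUDA extension build described below may fail.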
Then, install SAM 2 from the root of this repository via ```bash pip install -e ".[notebooks]" ``` Note that you may skip building the SAM 2 CUDA extension during installation via environment variable `SAM2_BUILD_CUDA=0`, as follows: ```bash # skip the SAM 2 CUDA extension SAM2_BUILD_CUDA=0 pip install -e ".[notebooks]" ``` This would also skip the post-processing step at runtime (removing small holes and sprinkles in the output masks, which requires the CUDA extension), but shouldn't affect the results in most cases. ### Building the SAM 2 CUDA extension By default, we allow the installation to proceed even if the SAM 2 CUDA extension fails to build. (In this case, the build errors are hidden unless using `-v` for verbose output in `pip install`.) If you see a message like `Skipping the post-processing step due to the error above` at runtime or `Failed to build the SAM 2 CUDA extension due to the error above` during installation, it indicates that the SAM 2 CUDA extension failed to build in your environment. In this case, **you can still use SAM 2 for both image and video applications**. The post-processing step (removing small holes and sprinkles in the output masks) will be skipped, but this shouldn't affect the results in most cases. If you would like to enable this post-processing step, you can reinstall SAM 2 on a GPU machine with environment variable `SAM2_BUILD_ALLOW_ERRORS=0` to force building the CUDA extension (and raise errors if it fails to build), as follows ```bash pip uninstall -y SAM-2 && \ rm -f ./sam2/*.so && \ SAM2_BUILD_ALLOW_ERRORS=0 pip install -v -e ".[notebooks]" ``` Note that PyTorch needs to be installed first before building the SAM 2 CUDA extension. It's also necessary to install [CUDA toolkits](https://developer.nvidia.com/cuda-toolkit-archive) that match the CUDA version for your PyTorch installation. (This should typically be CUDA 12.1 if you follow the default installation command.) 
After installing the CUDA toolkits, you can check its version via `nvcc --version`. Please check the section below on common installation issues if the CUDA extension fails to build during installation or load at runtime. ### Common Installation Issues Click each issue for its solutions:
I got `ImportError: cannot import name '_C' from 'sam2'`
This is usually because you haven't run the `pip install -e ".[notebooks]"` step above or the installation failed. Please install SAM 2 first, and see the other issues if your installation fails. In some systems, you may need to run `python setup.py build_ext --inplace` in the SAM 2 repo root as suggested in https://github.com/facebookresearch/sam2/issues/77.
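As a quick diagnostic, you can check whether `sam2` itself and its compiled `_C` extension are importable without triggering the full import. This sketch uses only the standard library; the helper name `check_sam2_install` is illustrative:

```python
import importlib.util

def check_sam2_install():
    """Report whether `sam2` and its compiled `_C` extension are importable."""
    if importlib.util.find_spec("sam2") is None:
        return "sam2 is not installed -- run the pip install step first"
    try:
        ext_spec = importlib.util.find_spec("sam2._C")
    except (ImportError, ModuleNotFoundError):
        ext_spec = None
    if ext_spec is None:
        return "sam2 is installed, but the _C CUDA extension was not built"
    return "sam2 and its _C extension are both available"

print(check_sam2_install())
```

If this reports that only the `_C` extension is missing, the post-processing step will be skipped but SAM 2 remains usable, as described above.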
I got `MissingConfigException: Cannot find primary config 'configs/sam2.1/sam2.1_hiera_l.yaml'`
This is usually because you haven't run the `pip install -e .` step above, so `sam2` isn't in your Python's `sys.path`. Please run this installation step. In case it still fails after the installation step, you may try manually adding the root of this repo to `PYTHONPATH` via

```bash
export SAM2_REPO_ROOT=/path/to/sam2  # path to this repo
export PYTHONPATH="${SAM2_REPO_ROOT}:${PYTHONPATH}"
```

to manually add `sam2_configs` into your Python's `sys.path`.
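To see why prepending the repo root to `PYTHONPATH` works, recall that Python resolves imports by scanning `sys.path` in order. A self-contained illustration with a throwaway package (standing in for `sam2_configs`; all names here are illustrative):

```python
import os
import sys
import tempfile

# Create a throwaway "repo root" containing a package directory.
repo_root = tempfile.mkdtemp()
pkg_dir = os.path.join(repo_root, "demo_configs")
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("NAME = 'demo'\n")

# Before the root is on sys.path, the import fails...
try:
    import demo_configs  # noqa: F401
    found_before = True
except ModuleNotFoundError:
    found_before = False

# ...and after prepending it (what `export PYTHONPATH=...` does), it succeeds.
sys.path.insert(0, repo_root)
import demo_configs

print(found_before, demo_configs.NAME)  # False demo
```

Exporting `PYTHONPATH` before launching Python has the same effect as the `sys.path.insert` above.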
I got `RuntimeError: Error(s) in loading state_dict for SAM2Base` when loading the new SAM 2.1 checkpoints
This is likely because you have installed a previous version of this repo, which doesn't have the new modules to support the SAM 2.1 checkpoints yet. Please try the following steps:

1. pull the latest code from the `main` branch of this repo
2. run `pip uninstall -y SAM-2` to uninstall any previous installations
3. then install the latest repo again using `pip install -e ".[notebooks]"`

In case the steps above still don't resolve the error, please try running the following in your Python environment

```python
from sam2.modeling import sam2_base

print(sam2_base.__file__)
```

and check whether the content in the printed local path of `sam2/modeling/sam2_base.py` matches the latest one in https://github.com/facebookresearch/sam2/blob/main/sam2/modeling/sam2_base.py (e.g. whether your local file has `no_obj_embed_spatial`) to identify if you're still using a previous installation.
My installation failed with `CUDA_HOME environment variable is not set`
This usually happens because the installation step cannot find the CUDA toolkits (that contain the NVCC compiler) to build a custom CUDA kernel in SAM 2. Please install the [CUDA toolkits](https://developer.nvidia.com/cuda-toolkit-archive), choosing the version that matches the CUDA version for your PyTorch installation. If the error persists after installing CUDA toolkits, you may explicitly specify `CUDA_HOME` via

```
export CUDA_HOME=/usr/local/cuda  # change to your CUDA toolkit path
```

and rerun the installation. Also, you should make sure

```
python -c 'import torch; from torch.utils.cpp_extension import CUDA_HOME; print(torch.cuda.is_available(), CUDA_HOME)'
```

prints `(True, a directory with cuda)` to verify that the CUDA toolkits are correctly set up.

If you are still having problems after verifying that the CUDA toolkit is installed and the `CUDA_HOME` environment variable is set properly, you may have to add the `--no-build-isolation` flag to the pip command:

```
pip install --no-build-isolation -e .
```
I got `undefined symbol: _ZN3c1015SmallVectorBaseIjE8grow_podEPKvmm` (or similar errors)
This usually happens because you have multiple versions of dependencies (PyTorch or CUDA) in your environment. During installation, the SAM 2 library is compiled against one version, while at runtime it links against another. This might be because you have different versions of PyTorch or CUDA installed separately via `pip` or `conda`. You may delete one of the duplicates to keep only a single PyTorch and CUDA version.

In particular, if you have a PyTorch version lower than 2.3.1, it's recommended to upgrade to PyTorch 2.3.1 or higher first. Otherwise, the installation script will try to upgrade to the latest PyTorch using `pip`, which could sometimes lead to a duplicated PyTorch installation if you have previously installed another PyTorch version using `conda`.

We have been building SAM 2 against PyTorch 2.3.1 internally. However, a few user comments (e.g. https://github.com/facebookresearch/sam2/issues/22, https://github.com/facebookresearch/sam2/issues/14) suggested that downgrading to PyTorch 2.1.0 might resolve this problem. In case the error persists, you may try changing the restriction from `torch>=2.3.1` to `torch>=2.1.0` in both [`pyproject.toml`](pyproject.toml) and [`setup.py`](setup.py) to allow PyTorch 2.1.0.
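One way to spot duplicate installations, using only the standard library, is to list every installed distribution with a given name; seeing `torch` listed more than once (e.g. one copy registered by `pip` and a stale one left by `conda`) is a red flag. The helper name below is illustrative:

```python
from importlib.metadata import distributions

def find_distributions(name):
    """Return (name, version) for every installed distribution called `name`."""
    return [
        (dist.metadata["Name"], dist.version)
        for dist in distributions()
        if (dist.metadata["Name"] or "").lower() == name.lower()
    ]

# A healthy environment has exactly one entry here.
print(find_distributions("torch"))
```

If more than one entry appears, uninstall until a single PyTorch remains before rebuilding the extension.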
I got `CUDA error: no kernel image is available for execution on the device`
A possible cause could be that the CUDA kernel is somehow not compiled for your GPU's CUDA [capability](https://developer.nvidia.com/cuda-gpus). This could happen if the installation is done in an environment different from the runtime (e.g. in a slurm system). You can try pulling the latest code from the SAM 2 repo and running the following before rerunning the installation

```
export TORCH_CUDA_ARCH_LIST="9.0 8.0 8.6 8.9 7.0 7.2 7.5 6.0"
```

to manually specify the CUDA capability in the compilation target that matches your GPU.
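For intuition, each entry in `TORCH_CUDA_ARCH_LIST` ends up as an `nvcc` `-gencode` flag targeting that compute capability. A rough, standard-library sketch of the translation (the real logic in `torch.utils.cpp_extension` handles more cases, such as `+PTX` suffixes):

```python
def arch_list_to_gencode_flags(arch_list):
    """Convert e.g. '8.0 8.6' into simplified nvcc -gencode flags."""
    flags = []
    for arch in arch_list.split():
        num = arch.replace(".", "")  # '8.6' -> '86'
        flags.append(f"-gencode=arch=compute_{num},code=sm_{num}")
    return flags

print(arch_list_to_gencode_flags("8.0 8.6"))
# ['-gencode=arch=compute_80,code=sm_80', '-gencode=arch=compute_86,code=sm_86']
```

Listing your GPU's capability explicitly ensures the compiled kernel can actually run on the machine where inference happens.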
I got `RuntimeError: No available kernel. Aborting execution.` (or similar errors)
This is probably because your machine doesn't have a GPU or a compatible PyTorch version for Flash Attention (see also https://discuss.pytorch.org/t/using-f-scaled-dot-product-attention-gives-the-error-runtimeerror-no-available-kernel-aborting-execution/180900 for a discussion in PyTorch forum). You may be able to resolve this error by replacing the line ```python OLD_GPU, USE_FLASH_ATTN, MATH_KERNEL_ON = get_sdpa_settings() ``` in [`sam2/modeling/sam/transformer.py`](sam2/modeling/sam/transformer.py) with ```python OLD_GPU, USE_FLASH_ATTN, MATH_KERNEL_ON = True, True, True ``` to relax the attention kernel setting and use other kernels than Flash Attention.
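Regardless of which kernel is selected, the computation is ordinary scaled dot-product attention, softmax(QKᵀ/√d)V; the "math kernel" fallback simply computes it without fused GPU code. A dependency-free sketch of that math for a single head (lists of row vectors, no batching; purely illustrative):

```python
import math

def naive_scaled_dot_product_attention(q, k, v):
    """Compute softmax(q @ k.T / sqrt(d)) @ v over lists of row vectors."""
    d = len(q[0])
    out = []
    for qi in q:
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in k]
        m = max(scores)  # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        out.append([sum(w * vj[c] for w, vj in zip(weights, v))
                    for c in range(len(v[0]))])
    return out

q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[1.0], [0.0]]
print(naive_scaled_dot_product_attention(q, k, v))
```

This is only to show what the fallback computes; the fused kernels produce the same result much faster.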
I got `Error compiling objects for extension`
You may see an error log like:

> unsupported Microsoft Visual Studio version! Only the versions between 2017 and 2022 (inclusive) are supported! The nvcc flag '-allow-unsupported-compiler' can be used to override this version check; however, using an unsupported host compiler may cause compilation failure or incorrect run time execution. Use at your own risk.

This is probably because your versions of CUDA and Visual Studio are incompatible (see also https://stackoverflow.com/questions/78515942/cuda-compatibility-with-visual-studio-2022-version-17-10 for a discussion on Stack Overflow).
You may be able to fix this by adding the `-allow-unsupported-compiler` argument to `nvcc` after L48 in the [setup.py](https://github.com/facebookresearch/sam2/blob/main/setup.py).
After adding the argument, `get_extensions()` will look like this:

```python
def get_extensions():
    srcs = ["sam2/csrc/connected_components.cu"]
    compile_args = {
        "cxx": [],
        "nvcc": [
            "-DCUDA_HAS_FP16=1",
            "-D__CUDA_NO_HALF_OPERATORS__",
            "-D__CUDA_NO_HALF_CONVERSIONS__",
            "-D__CUDA_NO_HALF2_OPERATORS__",
            "-allow-unsupported-compiler",  # Add this argument
        ],
    }
    ext_modules = [CUDAExtension("sam2._C", srcs, extra_compile_args=compile_args)]
    return ext_modules
```
================================================ FILE: auto-seg/submodules/segment-anything-2/LICENSE ================================================ Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). 
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ================================================ FILE: auto-seg/submodules/segment-anything-2/LICENSE_cctorch ================================================ BSD 3-Clause License Copyright (c) 2020, the respective contributors, as shown by the AUTHORS file. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. 
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ================================================ FILE: auto-seg/submodules/segment-anything-2/MANIFEST.in ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. 
recursive-include sam2 *.yaml #include all config files ================================================ FILE: auto-seg/submodules/segment-anything-2/README.md ================================================ # SAM 2: Segment Anything in Images and Videos **[AI at Meta, FAIR](https://ai.meta.com/research/)** [Nikhila Ravi](https://nikhilaravi.com/), [Valentin Gabeur](https://gabeur.github.io/), [Yuan-Ting Hu](https://scholar.google.com/citations?user=E8DVVYQAAAAJ&hl=en), [Ronghang Hu](https://ronghanghu.com/), [Chaitanya Ryali](https://scholar.google.com/citations?user=4LWx24UAAAAJ&hl=en), [Tengyu Ma](https://scholar.google.com/citations?user=VeTSl0wAAAAJ&hl=en), [Haitham Khedr](https://hkhedr.com/), [Roman Rädle](https://scholar.google.de/citations?user=Tpt57v0AAAAJ&hl=en), [Chloe Rolland](https://scholar.google.com/citations?hl=fr&user=n-SnMhoAAAAJ), [Laura Gustafson](https://scholar.google.com/citations?user=c8IpF9gAAAAJ&hl=en), [Eric Mintun](https://ericmintun.github.io/), [Junting Pan](https://junting.github.io/), [Kalyan Vasudev Alwala](https://scholar.google.co.in/citations?user=m34oaWEAAAAJ&hl=en), [Nicolas Carion](https://www.nicolascarion.com/), [Chao-Yuan Wu](https://chaoyuan.org/), [Ross Girshick](https://www.rossgirshick.info/), [Piotr Dollár](https://pdollar.github.io/), [Christoph Feichtenhofer](https://feichtenhofer.github.io/) [[`Paper`](https://ai.meta.com/research/publications/sam-2-segment-anything-in-images-and-videos/)] [[`Project`](https://ai.meta.com/sam2)] [[`Demo`](https://sam2.metademolab.com/)] [[`Dataset`](https://ai.meta.com/datasets/segment-anything-video)] [[`Blog`](https://ai.meta.com/blog/segment-anything-2)] [[`BibTeX`](#citing-sam-2)] ![SAM 2 architecture](assets/model_diagram.png?raw=true) **Segment Anything Model 2 (SAM 2)** is a foundation model towards solving promptable visual segmentation in images and videos. We extend SAM to video by considering images as a video with a single frame. 
The model design is a simple transformer architecture with streaming memory for real-time video processing. We build a model-in-the-loop data engine, which improves model and data via user interaction, to collect [**our SA-V dataset**](https://ai.meta.com/datasets/segment-anything-video), the largest video segmentation dataset to date. SAM 2 trained on our data provides strong performance across a wide range of tasks and visual domains.

![SA-V dataset](assets/sa_v_dataset.jpg?raw=true)

## Latest updates

**09/30/2024 -- SAM 2.1 Developer Suite (new checkpoints, training code, web demo) is released**

- A new suite of improved model checkpoints (denoted as **SAM 2.1**) are released. See [Model Description](#model-description) for details.
  * To use the new SAM 2.1 checkpoints, you need the latest model code from this repo. If you have installed an earlier version of this repo, please first uninstall the previous version via `pip uninstall SAM-2`, pull the latest code from this repo (with `git pull`), and then reinstall the repo following [Installation](#installation) below.
- The training (and fine-tuning) code has been released. See [`training/README.md`](training/README.md) on how to get started.
- The frontend + backend code for the SAM 2 web demo has been released. See [`demo/README.md`](demo/README.md) for details.

## Installation

SAM 2 needs to be installed first before use. The code requires `python>=3.10`, as well as `torch>=2.3.1` and `torchvision>=0.18.1`. Please follow the instructions [here](https://pytorch.org/get-started/locally/) to install both PyTorch and TorchVision dependencies. You can install SAM 2 on a GPU machine using:

```bash
git clone https://github.com/facebookresearch/sam2.git && cd sam2
pip install -e .
```

If you are installing on Windows, it's strongly recommended to use [Windows Subsystem for Linux (WSL)](https://learn.microsoft.com/en-us/windows/wsl/install) with Ubuntu.

To use the SAM 2 predictor and run the example notebooks, `jupyter` and `matplotlib` are required and can be installed by:

```bash
pip install -e ".[notebooks]"
```

Note:
1. It's recommended to create a new Python environment via [Anaconda](https://www.anaconda.com/) for this installation and install PyTorch 2.3.1 (or higher) via `pip` following https://pytorch.org/. If you have a PyTorch version lower than 2.3.1 in your current environment, the installation command above will try to upgrade it to the latest PyTorch version using `pip`.
2. The step above requires compiling a custom CUDA kernel with the `nvcc` compiler. If it isn't already available on your machine, please install the [CUDA toolkits](https://developer.nvidia.com/cuda-toolkit-archive) with a version that matches your PyTorch CUDA version.
3. If you see a message like `Failed to build the SAM 2 CUDA extension` during installation, you can ignore it and still use SAM 2 (some post-processing functionality may be limited, but it doesn't affect the results in most cases).

Please see [`INSTALL.md`](./INSTALL.md) for FAQs on potential issues and solutions.

## Getting Started

### Download Checkpoints

First, we need to download a model checkpoint. All the model checkpoints can be downloaded by running:

```bash
cd checkpoints && \
./download_ckpts.sh && \
cd ..
```

or individually from:

- [sam2.1_hiera_tiny.pt](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_tiny.pt)
- [sam2.1_hiera_small.pt](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_small.pt)
- [sam2.1_hiera_base_plus.pt](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_base_plus.pt)
- [sam2.1_hiera_large.pt](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_large.pt)

(note that these are the improved checkpoints denoted as SAM 2.1; see [Model Description](#model-description) for details.)

Then SAM 2 can be used in a few lines as follows for image and video prediction.
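After running the download script, you can sanity-check that all four checkpoint files landed where the examples expect them. A standard-library sketch (directory and filenames follow the commands above; the helper name is illustrative):

```python
from pathlib import Path

EXPECTED_CKPTS = [
    "sam2.1_hiera_tiny.pt",
    "sam2.1_hiera_small.pt",
    "sam2.1_hiera_base_plus.pt",
    "sam2.1_hiera_large.pt",
]

def missing_checkpoints(ckpt_dir="./checkpoints"):
    """Return the expected checkpoint filenames not present in `ckpt_dir`."""
    root = Path(ckpt_dir)
    return [name for name in EXPECTED_CKPTS if not (root / name).exists()]

missing = missing_checkpoints()
if missing:
    print("Missing checkpoints:", ", ".join(missing))
else:
    print("All SAM 2.1 checkpoints are in place.")
```

If you only need one model size, it's fine for the others to be reported missing.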
### Image prediction

SAM 2 has all the capabilities of [SAM](https://github.com/facebookresearch/segment-anything) on static images, and we provide image prediction APIs that closely resemble SAM for image use cases. The `SAM2ImagePredictor` class has an easy interface for image prompting.

```python
import torch
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

checkpoint = "./checkpoints/sam2.1_hiera_large.pt"
model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml"
predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    predictor.set_image(<your_image>)
    masks, _, _ = predictor.predict(<input_prompts>)
```

Please refer to the examples in [image_predictor_example.ipynb](./notebooks/image_predictor_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/image_predictor_example.ipynb)) for static image use cases.

SAM 2 also supports automatic mask generation on images just like SAM. Please see [automatic_mask_generator_example.ipynb](./notebooks/automatic_mask_generator_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/automatic_mask_generator_example.ipynb)) for automatic mask generation in images.

### Video prediction

For promptable segmentation and tracking in videos, we provide a video predictor with APIs for example to add prompts and propagate masklets throughout a video. SAM 2 supports video inference on multiple objects and uses an inference state to keep track of the interactions in each video.
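The video predictor is consumed as a generator that yields one `(frame_idx, object_ids, masks)` tuple per frame. One convenient way to consume that stream is to collect results into a per-frame dictionary; a minimal sketch with a stub generator standing in for the predictor's propagation method (illustrative names only, no model or GPU required):

```python
def fake_propagate_in_video(state):
    """Stub standing in for the predictor: yields per-frame results."""
    for frame_idx in range(3):
        object_ids = [1, 2]
        masks = ["mask-for-obj-1", "mask-for-obj-2"]  # real masks are tensors
        yield frame_idx, object_ids, masks

# Collect {frame_idx: {obj_id: mask}} across the whole video.
video_segments = {}
for frame_idx, object_ids, masks in fake_propagate_in_video(state=None):
    video_segments[frame_idx] = dict(zip(object_ids, masks))

print(sorted(video_segments))  # [0, 1, 2]
```

With the real predictor, the same loop body works unchanged; only the generator differs.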
```python
import torch
from sam2.build_sam import build_sam2_video_predictor

checkpoint = "./checkpoints/sam2.1_hiera_large.pt"
model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml"
predictor = build_sam2_video_predictor(model_cfg, checkpoint)

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    state = predictor.init_state(<your_video>)

    # add new prompts and instantly get the output on the same frame
    frame_idx, object_ids, masks = predictor.add_new_points_or_box(state, <your_prompts>)

    # propagate the prompts to get masklets throughout the video
    for frame_idx, object_ids, masks in predictor.propagate_in_video(state):
        ...
```

Please refer to the examples in [video_predictor_example.ipynb](./notebooks/video_predictor_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/video_predictor_example.ipynb)) for details on how to add click or box prompts, make refinements, and track multiple objects in videos.

## Load from 🤗 Hugging Face

Alternatively, models can also be loaded from [Hugging Face](https://huggingface.co/models?search=facebook/sam2) (requires `pip install huggingface_hub`).
For image prediction:

```python
import torch
from sam2.sam2_image_predictor import SAM2ImagePredictor

predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    predictor.set_image(<your_image>)
    masks, _, _ = predictor.predict(<input_prompts>)
```

For video prediction:

```python
import torch
from sam2.sam2_video_predictor import SAM2VideoPredictor

predictor = SAM2VideoPredictor.from_pretrained("facebook/sam2-hiera-large")

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    state = predictor.init_state(<your_video>)

    # add new prompts and instantly get the output on the same frame
    frame_idx, object_ids, masks = predictor.add_new_points_or_box(state, <your_prompts>)

    # propagate the prompts to get masklets throughout the video
    for frame_idx, object_ids, masks in predictor.propagate_in_video(state):
        ...
```

## Model Description

### SAM 2.1 checkpoints

The table below shows the improved SAM 2.1 checkpoints released on September 29, 2024.

| **Model** | **Size (M)** | **Speed (FPS)** | **SA-V test (J&F)** | **MOSE val (J&F)** | **LVOS v2 (J&F)** |
| :------------------: | :----------: | :--------------------: | :-----------------: | :----------------: | :---------------: |
| sam2.1_hiera_tiny <br />
([config](sam2/configs/sam2.1/sam2.1_hiera_t.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_tiny.pt)) | 38.9 | 47.2 | 76.5 | 71.8 | 77.3 |
| sam2.1_hiera_small <br /> ([config](sam2/configs/sam2.1/sam2.1_hiera_s.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_small.pt)) | 46 | 43.3 (53.0 compiled\*) | 76.6 | 73.5 | 78.3 |
| sam2.1_hiera_base_plus <br /> ([config](sam2/configs/sam2.1/sam2.1_hiera_b+.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_base_plus.pt)) | 80.8 | 34.8 (43.8 compiled\*) | 78.2 | 73.7 | 78.2 |
| sam2.1_hiera_large <br /> ([config](sam2/configs/sam2.1/sam2.1_hiera_l.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_large.pt)) | 224.4 | 24.2 (30.2 compiled\*) | 79.5 | 74.6 | 80.6 |

### SAM 2 checkpoints

The previous SAM 2 checkpoints released on July 29, 2024 can be found as follows:

| **Model** | **Size (M)** | **Speed (FPS)** | **SA-V test (J&F)** | **MOSE val (J&F)** | **LVOS v2 (J&F)** |
| :------------------: | :----------: | :--------------------: | :-----------------: | :----------------: | :---------------: |
| sam2_hiera_tiny <br /> ([config](sam2/configs/sam2/sam2_hiera_t.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_tiny.pt)) | 38.9 | 47.2 | 75.0 | 70.9 | 75.3 |
| sam2_hiera_small <br /> ([config](sam2/configs/sam2/sam2_hiera_s.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_small.pt)) | 46 | 43.3 (53.0 compiled\*) | 74.9 | 71.5 | 76.4 |
| sam2_hiera_base_plus <br /> ([config](sam2/configs/sam2/sam2_hiera_b+.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_base_plus.pt)) | 80.8 | 34.8 (43.8 compiled\*) | 74.7 | 72.8 | 75.8 |
| sam2_hiera_large <br /> ([config](sam2/configs/sam2/sam2_hiera_l.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_large.pt)) | 224.4 | 24.2 (30.2 compiled\*) | 76.0 | 74.6 | 79.8 |

\* Compile the model by setting `compile_image_encoder: True` in the config.

## Segment Anything Video Dataset

See [sav_dataset/README.md](sav_dataset/README.md) for details.

## Training SAM 2

You can train or fine-tune SAM 2 on custom datasets of images, videos, or both. Please check the training [README](training/README.md) on how to get started.

## Web demo for SAM 2

We have released the frontend + backend code for the SAM 2 web demo (a locally deployable version similar to https://sam2.metademolab.com/demo). Please see the web demo [README](demo/README.md) for details.

## License

The SAM 2 model checkpoints, SAM 2 demo code (front-end and back-end), and SAM 2 training code are licensed under [Apache 2.0](./LICENSE), however the [Inter Font](https://github.com/rsms/inter?tab=OFL-1.1-1-ov-file) and [Noto Color Emoji](https://github.com/googlefonts/noto-emoji) used in the SAM 2 demo code are made available under the [SIL Open Font License, version 1.1](https://openfontlicense.org/open-font-license-official-text/).

## Contributing

See [contributing](CONTRIBUTING.md) and the [code of conduct](CODE_OF_CONDUCT.md).
## Contributors

The SAM 2 project was made possible with the help of many contributors (alphabetical):

Karen Bergan, Daniel Bolya, Alex Bosenberg, Kai Brown, Vispi Cassod, Christopher Chedeau, Ida Cheng, Luc Dahlin, Shoubhik Debnath, Rene Martinez Doehner, Grant Gardner, Sahir Gomez, Rishi Godugu, Baishan Guo, Caleb Ho, Andrew Huang, Somya Jain, Bob Kamma, Amanda Kallet, Jake Kinney, Alexander Kirillov, Shiva Koduvayur, Devansh Kukreja, Robert Kuo, Aohan Lin, Parth Malani, Jitendra Malik, Mallika Malhotra, Miguel Martin, Alexander Miller, Sasha Mitts, William Ngan, George Orlin, Joelle Pineau, Kate Saenko, Rodrick Shepard, Azita Shokrpour, David Soofian, Jonathan Torres, Jenny Truong, Sagar Vaze, Meng Wang, Claudette Ward, Pengchuan Zhang.

Third-party code: we use a GPU-based connected component algorithm adapted from [`cc_torch`](https://github.com/zsef123/Connected_components_PyTorch) (with its license in [`LICENSE_cctorch`](./LICENSE_cctorch)) as an optional post-processing step for the mask predictions.

## Citing SAM 2

If you use SAM 2 or the SA-V dataset in your research, please use the following BibTeX entry.
```bibtex
@article{ravi2024sam2,
  title={SAM 2: Segment Anything in Images and Videos},
  author={Ravi, Nikhila and Gabeur, Valentin and Hu, Yuan-Ting and Hu, Ronghang and Ryali, Chaitanya and Ma, Tengyu and Khedr, Haitham and R{\"a}dle, Roman and Rolland, Chloe and Gustafson, Laura and Mintun, Eric and Pan, Junting and Alwala, Kalyan Vasudev and Carion, Nicolas and Wu, Chao-Yuan and Girshick, Ross and Doll{\'a}r, Piotr and Feichtenhofer, Christoph},
  journal={arXiv preprint arXiv:2408.00714},
  url={https://arxiv.org/abs/2408.00714},
  year={2024}
}
```

================================================
FILE: auto-seg/submodules/segment-anything-2/backend.Dockerfile
================================================
ARG BASE_IMAGE=pytorch/pytorch:2.3.1-cuda12.1-cudnn8-runtime
ARG MODEL_SIZE=base_plus

FROM ${BASE_IMAGE}

# Gunicorn environment variables
ENV GUNICORN_WORKERS=1
ENV GUNICORN_THREADS=2
ENV GUNICORN_PORT=5000

# SAM 2 environment variables
ENV APP_ROOT=/opt/sam2
ENV PYTHONUNBUFFERED=1
ENV SAM2_BUILD_CUDA=0
ENV MODEL_SIZE=${MODEL_SIZE}

# Install system requirements
RUN apt-get update && apt-get install -y --no-install-recommends \
    ffmpeg \
    libavutil-dev \
    libavcodec-dev \
    libavformat-dev \
    libswscale-dev \
    pkg-config \
    build-essential \
    libffi-dev

COPY setup.py .
COPY README.md .

RUN pip install --upgrade pip setuptools
RUN pip install -e ".[interactive-demo]"

# https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite/issues/69#issuecomment-1826764707
RUN rm /opt/conda/bin/ffmpeg && ln -s /bin/ffmpeg /opt/conda/bin/ffmpeg

# Make app directory. This directory will host all files required for the
# backend and SAM 2 inference files.
RUN mkdir ${APP_ROOT}

# Copy backend server files
COPY demo/backend/server ${APP_ROOT}/server

# Copy SAM 2 inference files
COPY sam2 ${APP_ROOT}/server/sam2

# Download SAM 2.1 checkpoints
ADD https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_tiny.pt ${APP_ROOT}/checkpoints/sam2.1_hiera_tiny.pt
ADD https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_small.pt ${APP_ROOT}/checkpoints/sam2.1_hiera_small.pt
ADD https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_base_plus.pt ${APP_ROOT}/checkpoints/sam2.1_hiera_base_plus.pt
ADD https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_large.pt ${APP_ROOT}/checkpoints/sam2.1_hiera_large.pt

WORKDIR ${APP_ROOT}/server

# https://pythonspeed.com/articles/gunicorn-in-docker/
CMD gunicorn --worker-tmp-dir /dev/shm \
    --worker-class gthread app:app \
    --log-level info \
    --access-logfile /dev/stdout \
    --log-file /dev/stderr \
    --workers ${GUNICORN_WORKERS} \
    --threads ${GUNICORN_THREADS} \
    --bind 0.0.0.0:${GUNICORN_PORT} \
    --timeout 60

================================================
FILE: auto-seg/submodules/segment-anything-2/checkpoints/download_ckpts.sh
================================================
#!/bin/bash

# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

# Use either wget or curl to download the checkpoints
if command -v wget &> /dev/null; then
    CMD="wget"
elif command -v curl &> /dev/null; then
    CMD="curl -L -O"
else
    echo "Please install wget or curl to download the checkpoints."
    exit 1
fi

# Define the URLs for SAM 2 checkpoints
# SAM2_BASE_URL="https://dl.fbaipublicfiles.com/segment_anything_2/072824"
# sam2_hiera_t_url="${SAM2_BASE_URL}/sam2_hiera_tiny.pt"
# sam2_hiera_s_url="${SAM2_BASE_URL}/sam2_hiera_small.pt"
# sam2_hiera_b_plus_url="${SAM2_BASE_URL}/sam2_hiera_base_plus.pt"
# sam2_hiera_l_url="${SAM2_BASE_URL}/sam2_hiera_large.pt"

# Download each of the four checkpoints using wget
# echo "Downloading sam2_hiera_tiny.pt checkpoint..."
# $CMD $sam2_hiera_t_url || { echo "Failed to download checkpoint from $sam2_hiera_t_url"; exit 1; }
# echo "Downloading sam2_hiera_small.pt checkpoint..."
# $CMD $sam2_hiera_s_url || { echo "Failed to download checkpoint from $sam2_hiera_s_url"; exit 1; }
# echo "Downloading sam2_hiera_base_plus.pt checkpoint..."
# $CMD $sam2_hiera_b_plus_url || { echo "Failed to download checkpoint from $sam2_hiera_b_plus_url"; exit 1; }
# echo "Downloading sam2_hiera_large.pt checkpoint..."
# $CMD $sam2_hiera_l_url || { echo "Failed to download checkpoint from $sam2_hiera_l_url"; exit 1; }

# Define the URLs for SAM 2.1 checkpoints
SAM2p1_BASE_URL="https://dl.fbaipublicfiles.com/segment_anything_2/092824"
sam2p1_hiera_t_url="${SAM2p1_BASE_URL}/sam2.1_hiera_tiny.pt"
sam2p1_hiera_s_url="${SAM2p1_BASE_URL}/sam2.1_hiera_small.pt"
sam2p1_hiera_b_plus_url="${SAM2p1_BASE_URL}/sam2.1_hiera_base_plus.pt"
sam2p1_hiera_l_url="${SAM2p1_BASE_URL}/sam2.1_hiera_large.pt"

# SAM 2.1 checkpoints
echo "Downloading sam2.1_hiera_tiny.pt checkpoint..."
$CMD $sam2p1_hiera_t_url || { echo "Failed to download checkpoint from $sam2p1_hiera_t_url"; exit 1; }
echo "Downloading sam2.1_hiera_small.pt checkpoint..."
$CMD $sam2p1_hiera_s_url || { echo "Failed to download checkpoint from $sam2p1_hiera_s_url"; exit 1; }
echo "Downloading sam2.1_hiera_base_plus.pt checkpoint..."
$CMD $sam2p1_hiera_b_plus_url || { echo "Failed to download checkpoint from $sam2p1_hiera_b_plus_url"; exit 1; }
echo "Downloading sam2.1_hiera_large.pt checkpoint..."
$CMD $sam2p1_hiera_l_url || { echo "Failed to download checkpoint from $sam2p1_hiera_l_url"; exit 1; }

echo "All checkpoints are downloaded successfully."

================================================
FILE: auto-seg/submodules/segment-anything-2/demo/.gitignore
================================================
data/uploads
data/posters

================================================
FILE: auto-seg/submodules/segment-anything-2/demo/README.md
================================================
# SAM 2 Demo

Welcome to the SAM 2 Demo! This project consists of a frontend built with React TypeScript and Vite and a backend service using Python Flask and Strawberry GraphQL. Both components can be run in Docker containers or locally on MPS (Metal Performance Shaders) or CPU. However, running the backend service on MPS or CPU devices may result in significantly slower performance (FPS).

## Prerequisites

Before you begin, ensure you have the following installed on your system:

- Docker and Docker Compose
- [OPTIONAL] Node.js and Yarn for running frontend locally
- [OPTIONAL] Anaconda for running backend locally

### Installing Docker

To install Docker, follow these steps:

1. Go to the [Docker website](https://www.docker.com/get-started)
2. Follow the installation instructions for your operating system.

### [OPTIONAL] Installing Node.js and Yarn

To install Node.js and Yarn, follow these steps:

1. Go to the [Node.js website](https://nodejs.org/en/download/).
2. Follow the installation instructions for your operating system.
3. Once Node.js is installed, open a terminal or command prompt and run the following command to install Yarn:

```
npm install -g yarn
```

### [OPTIONAL] Installing Anaconda

To install Anaconda, follow these steps:

1. Go to the [Anaconda website](https://www.anaconda.com/products/distribution).
2. Follow the installation instructions for your operating system.
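Before moving on, it can help to confirm which of the prerequisites above are actually on your `PATH`. A small optional check (not part of the project's scripts; `node`, `yarn`, and `conda` are only needed for the local, non-Docker workflows):

```shell
# Report which demo prerequisites are visible on PATH; always exits 0.
for tool in docker node yarn conda; do
  if command -v "$tool" > /dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: not found"
  fi
done
```

Each tool that reports `not found` is only a problem if you intend to use the workflow that needs it.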
## Quick Start

To get both the frontend and backend running quickly using Docker, you can use the following command:

```bash
docker compose up --build
```

> [!WARNING]
> On macOS, Docker containers only support running on CPU. MPS is not supported through Docker. If you want to run the demo backend service on MPS, you will need to run it locally (see "Running the Backend Locally" below).

This will build and start both services. You can access them at:

- **Frontend:** [http://localhost:7262](http://localhost:7262)
- **Backend:** [http://localhost:7263/graphql](http://localhost:7263/graphql)

## Running Backend with MPS Support

MPS (Metal Performance Shaders) is not supported with Docker. To use MPS, you need to run the backend on your local machine.

### Setting Up Your Environment

1. **Create Conda environment**

   Create a new Conda environment for this project by running the following command or use your existing conda environment for SAM 2:

   ```
   conda create --name sam2-demo python=3.10 --yes
   ```

   This will create a new environment named `sam2-demo` with Python 3.10 as the interpreter.

2. **Activate the Conda environment:**

   ```bash
   conda activate sam2-demo
   ```

3. **Install ffmpeg**

   ```bash
   conda install -c conda-forge ffmpeg
   ```

4. **Install SAM 2 demo dependencies:**

   Install project dependencies by running the following command in the SAM 2 checkout root directory:

   ```bash
   pip install -e '.[interactive-demo]'
   ```

### Running the Backend Locally

Download the SAM 2 checkpoints:

```bash
(cd ./checkpoints && ./download_ckpts.sh)
```

Use the following command to start the backend with MPS support:

```bash
cd demo/backend/server/
```

```bash
PYTORCH_ENABLE_MPS_FALLBACK=1 \
APP_ROOT="$(pwd)/../../../" \
APP_URL=http://localhost:7263 \
MODEL_SIZE=base_plus \
DATA_PATH="$(pwd)/../../data" \
DEFAULT_VIDEO_PATH=gallery/05_default_juggle.mp4 \
gunicorn \
    --worker-class gthread app:app \
    --workers 1 \
    --threads 2 \
    --bind 0.0.0.0:7263 \
    --timeout 60
```

Options for the `MODEL_SIZE` argument are "tiny", "small", "base_plus" (default), and "large".

> [!WARNING]
> Running the backend service on MPS devices can cause fatal crashes with the Gunicorn worker due to insufficient MPS memory. Try switching to CPU devices by setting the `SAM2_DEMO_FORCE_CPU_DEVICE=1` environment variable.

### Starting the Frontend

If you wish to run the frontend separately (useful for development), follow these steps:

1. **Navigate to demo frontend directory:**

   ```bash
   cd demo/frontend
   ```

2. **Install dependencies:**

   ```bash
   yarn install
   ```

3. **Start the development server:**

   ```bash
   yarn dev --port 7262
   ```

This will start the frontend development server on [http://localhost:7262](http://localhost:7262).

## Docker Tips

- To rebuild the Docker containers (useful if you've made changes to the Dockerfile or dependencies):

  ```bash
  docker compose up --build
  ```

- To stop the Docker containers:

  ```bash
  docker compose down
  ```

## Contributing

Contributions are welcome! Please read our contributing guidelines to get started.

## License

See the LICENSE file for details.

---

By following these instructions, you should have a fully functional development environment for both the frontend and backend of the SAM 2 Demo. Happy coding!
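The backend's `/propagate_in_video` route streams mask updates as a custom `multipart/x-savi-stream` response, built part by part by `MultipartResponseBuilder` in the server code. A minimal sketch of what a client has to parse, assuming the `frame` boundary used by the server and a trimmed set of headers (the real stream also carries `Frame-Current`, `Frame-Total`, and `Mask-Type` headers, and the JSON payload below is a stand-in, not real model output):

```python
import json


def parse_part(message: bytes, boundary: str = "frame"):
    """Split one multipart part (as built by the backend) into (headers, body)."""
    prefix = b"--" + boundary.encode("utf-8") + b"\r\n"
    assert message.startswith(prefix), "part must start with the boundary marker"
    # Headers and body are separated by a blank line (\r\n\r\n).
    head, _, body = message[len(prefix):].partition(b"\r\n\r\n")
    headers = {}
    for line in head.split(b"\r\n"):
        key, _, value = line.partition(b": ")
        headers[key.decode("utf-8")] = value.decode("utf-8")
    return headers, body


# Example part, laid out the same way the server's builder emits it:
# "--frame\r\n" + header lines + "\r\n" + JSON body.
payload = json.dumps({"frame_index": 0, "results": []}).encode("utf-8")
part = (
    b"--frame\r\n"
    b"Content-Type: application/json; charset=utf-8\r\n"
    b"Content-Length: " + str(len(payload)).encode("utf-8") + b"\r\n"
    b"\r\n" + payload
)
headers, body = parse_part(part)
```

In the real stream, `body` would decode to a `PropagateDataResponse`-shaped JSON object whose `results` list holds RLE-encoded masks per object.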
================================================
FILE: auto-seg/submodules/segment-anything-2/demo/backend/server/app.py
================================================
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

import logging
from typing import Any, Generator

from app_conf import (
    GALLERY_PATH,
    GALLERY_PREFIX,
    POSTERS_PATH,
    POSTERS_PREFIX,
    UPLOADS_PATH,
    UPLOADS_PREFIX,
)
from data.loader import preload_data
from data.schema import schema
from data.store import set_videos
from flask import Flask, make_response, Request, request, Response, send_from_directory
from flask_cors import CORS
from inference.data_types import PropagateDataResponse, PropagateInVideoRequest
from inference.multipart import MultipartResponseBuilder
from inference.predictor import InferenceAPI
from strawberry.flask.views import GraphQLView

logger = logging.getLogger(__name__)

app = Flask(__name__)
cors = CORS(app, supports_credentials=True)

videos = preload_data()
set_videos(videos)

inference_api = InferenceAPI()


@app.route("/healthy")
def healthy() -> Response:
    return make_response("OK", 200)


@app.route(f"/{GALLERY_PREFIX}/<path:path>", methods=["GET"])
def send_gallery_video(path: str) -> Response:
    try:
        return send_from_directory(
            GALLERY_PATH,
            path,
        )
    except:
        raise ValueError("resource not found")


@app.route(f"/{POSTERS_PREFIX}/<path:path>", methods=["GET"])
def send_poster_image(path: str) -> Response:
    try:
        return send_from_directory(
            POSTERS_PATH,
            path,
        )
    except:
        raise ValueError("resource not found")


@app.route(f"/{UPLOADS_PREFIX}/<path:path>", methods=["GET"])
def send_uploaded_video(path: str):
    try:
        return send_from_directory(
            UPLOADS_PATH,
            path,
        )
    except:
        raise ValueError("resource not found")


# TODO: Protect route with ToS permission check
@app.route("/propagate_in_video", methods=["POST"])
def propagate_in_video() -> Response:
    data = request.json
    args = {
"session_id": data["session_id"], "start_frame_index": data.get("start_frame_index", 0), } boundary = "frame" frame = gen_track_with_mask_stream(boundary, **args) return Response(frame, mimetype="multipart/x-savi-stream; boundary=" + boundary) def gen_track_with_mask_stream( boundary: str, session_id: str, start_frame_index: int, ) -> Generator[bytes, None, None]: with inference_api.autocast_context(): request = PropagateInVideoRequest( type="propagate_in_video", session_id=session_id, start_frame_index=start_frame_index, ) for chunk in inference_api.propagate_in_video(request=request): yield MultipartResponseBuilder.build( boundary=boundary, headers={ "Content-Type": "application/json; charset=utf-8", "Frame-Current": "-1", # Total frames minus the reference frame "Frame-Total": "-1", "Mask-Type": "RLE[]", }, body=chunk.to_json().encode("UTF-8"), ).get_message() class MyGraphQLView(GraphQLView): def get_context(self, request: Request, response: Response) -> Any: return {"inference_api": inference_api} # Add GraphQL route to Flask app. app.add_url_rule( "/graphql", view_func=MyGraphQLView.as_view( "graphql_view", schema=schema, # Disable GET queries # https://strawberry.rocks/docs/operations/deployment # https://strawberry.rocks/docs/integrations/flask allow_queries_via_get=False, # Strawberry recently changed multipart request handling, which now # requires enabling support explicitly for views. # https://github.com/strawberry-graphql/strawberry/issues/3655 multipart_uploads_enabled=True, ), ) if __name__ == "__main__": app.run(host="0.0.0.0", port=5000) ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/backend/server/app_conf.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. 
import logging import os from pathlib import Path logger = logging.getLogger(__name__) APP_ROOT = os.getenv("APP_ROOT", "/opt/sam2") API_URL = os.getenv("API_URL", "http://localhost:7263") MODEL_SIZE = os.getenv("MODEL_SIZE", "base_plus") logger.info(f"using model size {MODEL_SIZE}") FFMPEG_NUM_THREADS = int(os.getenv("FFMPEG_NUM_THREADS", "1")) # Path for all data used in API DATA_PATH = Path(os.getenv("DATA_PATH", "/data")) # Max duration an uploaded video can have in seconds. The default is 10 # seconds. MAX_UPLOAD_VIDEO_DURATION = float(os.environ.get("MAX_UPLOAD_VIDEO_DURATION", "10")) # If set, it will define which video is returned by the default video query for # desktop DEFAULT_VIDEO_PATH = os.getenv("DEFAULT_VIDEO_PATH") # Prefix for gallery videos GALLERY_PREFIX = "gallery" # Path where all gallery videos are stored GALLERY_PATH = DATA_PATH / GALLERY_PREFIX # Prefix for uploaded videos UPLOADS_PREFIX = "uploads" # Path where all uploaded videos are stored UPLOADS_PATH = DATA_PATH / UPLOADS_PREFIX # Prefix for video posters (1st frame of video) POSTERS_PREFIX = "posters" # Path where all posters are stored POSTERS_PATH = DATA_PATH / POSTERS_PREFIX # Make sure any of those paths exist os.makedirs(DATA_PATH, exist_ok=True) os.makedirs(GALLERY_PATH, exist_ok=True) os.makedirs(UPLOADS_PATH, exist_ok=True) os.makedirs(POSTERS_PATH, exist_ok=True) ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/backend/server/inference/data_types.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. 
from dataclasses import dataclass from typing import Dict, List, Optional, Union from dataclasses_json import dataclass_json from torch import Tensor @dataclass_json @dataclass class Mask: size: List[int] counts: str @dataclass_json @dataclass class BaseRequest: type: str @dataclass_json @dataclass class StartSessionRequest(BaseRequest): type: str path: str session_id: Optional[str] = None @dataclass_json @dataclass class SaveSessionRequest(BaseRequest): type: str session_id: str @dataclass_json @dataclass class LoadSessionRequest(BaseRequest): type: str session_id: str @dataclass_json @dataclass class RenewSessionRequest(BaseRequest): type: str session_id: str @dataclass_json @dataclass class CloseSessionRequest(BaseRequest): type: str session_id: str @dataclass_json @dataclass class AddPointsRequest(BaseRequest): type: str session_id: str frame_index: int clear_old_points: bool object_id: int labels: List[int] points: List[List[float]] @dataclass_json @dataclass class AddMaskRequest(BaseRequest): type: str session_id: str frame_index: int object_id: int mask: Mask @dataclass_json @dataclass class ClearPointsInFrameRequest(BaseRequest): type: str session_id: str frame_index: int object_id: int @dataclass_json @dataclass class ClearPointsInVideoRequest(BaseRequest): type: str session_id: str @dataclass_json @dataclass class RemoveObjectRequest(BaseRequest): type: str session_id: str object_id: int @dataclass_json @dataclass class PropagateInVideoRequest(BaseRequest): type: str session_id: str start_frame_index: int @dataclass_json @dataclass class CancelPropagateInVideoRequest(BaseRequest): type: str session_id: str @dataclass_json @dataclass class StartSessionResponse: session_id: str @dataclass_json @dataclass class SaveSessionResponse: session_id: str @dataclass_json @dataclass class LoadSessionResponse: session_id: str @dataclass_json @dataclass class RenewSessionResponse: session_id: str @dataclass_json @dataclass class CloseSessionResponse: success: bool 
@dataclass_json @dataclass class ClearPointsInVideoResponse: success: bool @dataclass_json @dataclass class PropagateDataValue: object_id: int mask: Mask @dataclass_json @dataclass class PropagateDataResponse: frame_index: int results: List[PropagateDataValue] @dataclass_json @dataclass class RemoveObjectResponse: results: List[PropagateDataResponse] @dataclass_json @dataclass class CancelPorpagateResponse: success: bool @dataclass_json @dataclass class InferenceSession: start_time: float last_use_time: float session_id: str state: Dict[str, Dict[str, Union[Tensor, Dict[int, Tensor]]]] ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/backend/server/inference/multipart.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. from typing import Dict, Union class MultipartResponseBuilder: message: bytes def __init__(self, boundary: str) -> None: self.message = b"--" + boundary.encode("utf-8") + b"\r\n" @classmethod def build( cls, boundary: str, headers: Dict[str, str], body: Union[str, bytes] ) -> "MultipartResponseBuilder": builder = cls(boundary=boundary) for k, v in headers.items(): builder.__append_header(key=k, value=v) if isinstance(body, bytes): builder.__append_body(body) elif isinstance(body, str): builder.__append_body(body.encode("utf-8")) else: raise ValueError( f"body needs to be of type bytes or str but got {type(body)}" ) return builder def get_message(self) -> bytes: return self.message def __append_header(self, key: str, value: str) -> "MultipartResponseBuilder": self.message += key.encode("utf-8") + b": " + value.encode("utf-8") + b"\r\n" return self def __close_header(self) -> "MultipartResponseBuilder": self.message += b"\r\n" return self def __append_body(self, body: bytes) -> "MultipartResponseBuilder": 
self.__append_header(key="Content-Length", value=str(len(body))) self.__close_header() self.message += body return self ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/backend/server/inference/predictor.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. import contextlib import logging import os import uuid from pathlib import Path from threading import Lock from typing import Any, Dict, Generator, List import numpy as np import torch from app_conf import APP_ROOT, MODEL_SIZE from inference.data_types import ( AddMaskRequest, AddPointsRequest, CancelPorpagateResponse, CancelPropagateInVideoRequest, ClearPointsInFrameRequest, ClearPointsInVideoRequest, ClearPointsInVideoResponse, CloseSessionRequest, CloseSessionResponse, Mask, PropagateDataResponse, PropagateDataValue, PropagateInVideoRequest, RemoveObjectRequest, RemoveObjectResponse, StartSessionRequest, StartSessionResponse, ) from pycocotools.mask import decode as decode_masks, encode as encode_masks from sam2.build_sam import build_sam2_video_predictor logger = logging.getLogger(__name__) class InferenceAPI: def __init__(self) -> None: super(InferenceAPI, self).__init__() self.session_states: Dict[str, Any] = {} self.score_thresh = 0 if MODEL_SIZE == "tiny": checkpoint = Path(APP_ROOT) / "checkpoints/sam2.1_hiera_tiny.pt" model_cfg = "configs/sam2.1/sam2.1_hiera_t.yaml" elif MODEL_SIZE == "small": checkpoint = Path(APP_ROOT) / "checkpoints/sam2.1_hiera_small.pt" model_cfg = "configs/sam2.1/sam2.1_hiera_s.yaml" elif MODEL_SIZE == "large": checkpoint = Path(APP_ROOT) / "checkpoints/sam2.1_hiera_large.pt" model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml" else: # base_plus (default) checkpoint = Path(APP_ROOT) / "checkpoints/sam2.1_hiera_base_plus.pt" model_cfg = 
"configs/sam2.1/sam2.1_hiera_b+.yaml" # select the device for computation force_cpu_device = os.environ.get("SAM2_DEMO_FORCE_CPU_DEVICE", "0") == "1" if force_cpu_device: logger.info("forcing CPU device for SAM 2 demo") if torch.cuda.is_available() and not force_cpu_device: device = torch.device("cuda") elif torch.backends.mps.is_available() and not force_cpu_device: device = torch.device("mps") else: device = torch.device("cpu") logger.info(f"using device: {device}") if device.type == "cuda": # turn on tfloat32 for Ampere GPUs (https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices) if torch.cuda.get_device_properties(0).major >= 8: torch.backends.cuda.matmul.allow_tf32 = True torch.backends.cudnn.allow_tf32 = True elif device.type == "mps": logging.warning( "\nSupport for MPS devices is preliminary. SAM 2 is trained with CUDA and might " "give numerically different outputs and sometimes degraded performance on MPS. " "See e.g. https://github.com/pytorch/pytorch/issues/84936 for a discussion." 
) self.device = device self.predictor = build_sam2_video_predictor( model_cfg, checkpoint, device=device ) self.inference_lock = Lock() def autocast_context(self): if self.device.type == "cuda": return torch.autocast("cuda", dtype=torch.bfloat16) else: return contextlib.nullcontext() def start_session(self, request: StartSessionRequest) -> StartSessionResponse: with self.autocast_context(), self.inference_lock: session_id = str(uuid.uuid4()) # for MPS devices, we offload the video frames to CPU by default to avoid # memory fragmentation in MPS (which sometimes crashes the entire process) offload_video_to_cpu = self.device.type == "mps" inference_state = self.predictor.init_state( request.path, offload_video_to_cpu=offload_video_to_cpu, ) self.session_states[session_id] = { "canceled": False, "state": inference_state, } return StartSessionResponse(session_id=session_id) def close_session(self, request: CloseSessionRequest) -> CloseSessionResponse: is_successful = self.__clear_session_state(request.session_id) return CloseSessionResponse(success=is_successful) def add_points( self, request: AddPointsRequest, test: str = "" ) -> PropagateDataResponse: with self.autocast_context(), self.inference_lock: session = self.__get_session(request.session_id) inference_state = session["state"] frame_idx = request.frame_index obj_id = request.object_id points = request.points labels = request.labels clear_old_points = request.clear_old_points # add new prompts and instantly get the output on the same frame frame_idx, object_ids, masks = self.predictor.add_new_points_or_box( inference_state=inference_state, frame_idx=frame_idx, obj_id=obj_id, points=points, labels=labels, clear_old_points=clear_old_points, normalize_coords=False, ) masks_binary = (masks > self.score_thresh)[:, 0].cpu().numpy() rle_mask_list = self.__get_rle_mask_list( object_ids=object_ids, masks=masks_binary ) return PropagateDataResponse( frame_index=frame_idx, results=rle_mask_list, ) def add_mask(self, 
        request: AddMaskRequest) -> PropagateDataResponse:
        """
        Add a new mask on a specific video frame.
        - mask is a numpy array of shape [H_im, W_im] (containing 1 for foreground
          and 0 for background).
        Note: providing an input mask would overwrite any previous input points on this frame.
        """
        with self.autocast_context(), self.inference_lock:
            session_id = request.session_id
            frame_idx = request.frame_index
            obj_id = request.object_id
            rle_mask = {
                "counts": request.mask.counts,
                "size": request.mask.size,
            }
            mask = decode_masks(rle_mask)
            logger.info(
                f"add mask on frame {frame_idx} in session {session_id}: {obj_id=}, {mask.shape=}"
            )
            session = self.__get_session(session_id)
            inference_state = session["state"]
            frame_idx, obj_ids, video_res_masks = self.predictor.add_new_mask(
                inference_state=inference_state,
                frame_idx=frame_idx,
                obj_id=obj_id,
                mask=torch.tensor(mask > 0),
            )
            masks_binary = (video_res_masks > self.score_thresh)[:, 0].cpu().numpy()
            rle_mask_list = self.__get_rle_mask_list(
                object_ids=obj_ids, masks=masks_binary
            )
            return PropagateDataResponse(
                frame_index=frame_idx,
                results=rle_mask_list,
            )

    def clear_points_in_frame(
        self, request: ClearPointsInFrameRequest
    ) -> PropagateDataResponse:
        """
        Remove all input points in a specific frame.
""" with self.autocast_context(), self.inference_lock: session_id = request.session_id frame_idx = request.frame_index obj_id = request.object_id logger.info( f"clear inputs on frame {frame_idx} in session {session_id}: {obj_id=}" ) session = self.__get_session(session_id) inference_state = session["state"] frame_idx, obj_ids, video_res_masks = ( self.predictor.clear_all_prompts_in_frame( inference_state, frame_idx, obj_id ) ) masks_binary = (video_res_masks > self.score_thresh)[:, 0].cpu().numpy() rle_mask_list = self.__get_rle_mask_list( object_ids=obj_ids, masks=masks_binary ) return PropagateDataResponse( frame_index=frame_idx, results=rle_mask_list, ) def clear_points_in_video( self, request: ClearPointsInVideoRequest ) -> ClearPointsInVideoResponse: """ Remove all input points in all frames throughout the video. """ with self.autocast_context(), self.inference_lock: session_id = request.session_id logger.info(f"clear all inputs across the video in session {session_id}") session = self.__get_session(session_id) inference_state = session["state"] self.predictor.reset_state(inference_state) return ClearPointsInVideoResponse(success=True) def remove_object(self, request: RemoveObjectRequest) -> RemoveObjectResponse: """ Remove an object id from the tracking state. 
""" with self.autocast_context(), self.inference_lock: session_id = request.session_id obj_id = request.object_id logger.info(f"remove object in session {session_id}: {obj_id=}") session = self.__get_session(session_id) inference_state = session["state"] new_obj_ids, updated_frames = self.predictor.remove_object( inference_state, obj_id ) results = [] for frame_index, video_res_masks in updated_frames: masks = (video_res_masks > self.score_thresh)[:, 0].cpu().numpy() rle_mask_list = self.__get_rle_mask_list( object_ids=new_obj_ids, masks=masks ) results.append( PropagateDataResponse( frame_index=frame_index, results=rle_mask_list, ) ) return RemoveObjectResponse(results=results) def propagate_in_video( self, request: PropagateInVideoRequest ) -> Generator[PropagateDataResponse, None, None]: session_id = request.session_id start_frame_idx = request.start_frame_index propagation_direction = "both" max_frame_num_to_track = None """ Propagate existing input points in all frames to track the object across video. """ # Note that as this method is a generator, we also need to use autocast_context # in caller to this method to ensure that it's called under the correct context # (we've added `autocast_context` to `gen_track_with_mask_stream` in app.py). 
with self.autocast_context(), self.inference_lock: logger.info( f"propagate in video in session {session_id}: " f"{propagation_direction=}, {start_frame_idx=}, {max_frame_num_to_track=}" ) try: session = self.__get_session(session_id) session["canceled"] = False inference_state = session["state"] if propagation_direction not in ["both", "forward", "backward"]: raise ValueError( f"invalid propagation direction: {propagation_direction}" ) # First doing the forward propagation if propagation_direction in ["both", "forward"]: for outputs in self.predictor.propagate_in_video( inference_state=inference_state, start_frame_idx=start_frame_idx, max_frame_num_to_track=max_frame_num_to_track, reverse=False, ): if session["canceled"]: return None frame_idx, obj_ids, video_res_masks = outputs masks_binary = ( (video_res_masks > self.score_thresh)[:, 0].cpu().numpy() ) rle_mask_list = self.__get_rle_mask_list( object_ids=obj_ids, masks=masks_binary ) yield PropagateDataResponse( frame_index=frame_idx, results=rle_mask_list, ) # Then doing the backward propagation (reverse in time) if propagation_direction in ["both", "backward"]: for outputs in self.predictor.propagate_in_video( inference_state=inference_state, start_frame_idx=start_frame_idx, max_frame_num_to_track=max_frame_num_to_track, reverse=True, ): if session["canceled"]: return None frame_idx, obj_ids, video_res_masks = outputs masks_binary = ( (video_res_masks > self.score_thresh)[:, 0].cpu().numpy() ) rle_mask_list = self.__get_rle_mask_list( object_ids=obj_ids, masks=masks_binary ) yield PropagateDataResponse( frame_index=frame_idx, results=rle_mask_list, ) finally: # Log upon completion (so that e.g. we can see if two propagations happen in parallel). # Using `finally` here to log even when the tracking is aborted with GeneratorExit. 
            logger.info(
                f"propagation ended in session {session_id}; {self.__get_session_stats()}"
            )

    def cancel_propagate_in_video(
        self, request: CancelPropagateInVideoRequest
    ) -> CancelPorpagateResponse:
        session = self.__get_session(request.session_id)
        session["canceled"] = True
        return CancelPorpagateResponse(success=True)

    def __get_rle_mask_list(
        self, object_ids: List[int], masks: np.ndarray
    ) -> List[PropagateDataValue]:
        """
        Return a list of data values, i.e. list of object/mask combos.
        """
        return [
            self.__get_mask_for_object(object_id=object_id, mask=mask)
            for object_id, mask in zip(object_ids, masks)
        ]

    def __get_mask_for_object(
        self, object_id: int, mask: np.ndarray
    ) -> PropagateDataValue:
        """
        Create a data value for an object/mask combo.
        """
        mask_rle = encode_masks(np.array(mask, dtype=np.uint8, order="F"))
        mask_rle["counts"] = mask_rle["counts"].decode()
        return PropagateDataValue(
            object_id=object_id,
            mask=Mask(
                size=mask_rle["size"],
                counts=mask_rle["counts"],
            ),
        )

    def __get_session(self, session_id: str):
        session = self.session_states.get(session_id, None)
        if session is None:
            raise RuntimeError(
                f"Cannot find session {session_id}; it might have expired"
            )
        return session

    def __get_session_stats(self):
        """Get a statistics string for live sessions and their GPU usage."""
        # print both the session ids and their video frame numbers
        live_session_strs = [
            f"'{session_id}' ({session['state']['num_frames']} frames, "
            f"{len(session['state']['obj_ids'])} objects)"
            for session_id, session in self.session_states.items()
        ]
        session_stats_str = (
            f"live sessions: [{', '.join(live_session_strs)}], GPU memory: "
            f"{torch.cuda.memory_allocated() // 1024**2} MiB used and "
            f"{torch.cuda.memory_reserved() // 1024**2} MiB reserved"
            f" (max over time: {torch.cuda.max_memory_allocated() // 1024**2} MiB used "
            f"and {torch.cuda.max_memory_reserved() // 1024**2} MiB reserved)"
        )
        return session_stats_str

    def __clear_session_state(self, session_id: str) -> bool:
session = self.session_states.pop(session_id, None) if session is None: logger.warning( f"cannot close session {session_id} as it does not exist (it might have expired); " f"{self.__get_session_stats()}" ) return False else: logger.info(f"removed session {session_id}; {self.__get_session_stats()}") return True ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/.babelrc ================================================ { "env": { "production": { "plugins": ["babel-plugin-strip-invariant"] } } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/.dockerignore ================================================ # Logs logs *.log npm-debug.log* yarn-debug.log* yarn-error.log* pnpm-debug.log* lerna-debug.log* node_modules dist dist-ssr *.local storybook-static .env # Editor directories and files .vscode/* !.vscode/extensions.json .idea .DS_Store *.suo *.ntvs* *.njsproj *.sln *.sw? # Test results /coverage/ /test-results/ /playwright-report/ /playwright/.cache/ ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/.eslintignore ================================================ node_modules/ dist/ env.d.ts ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/.eslintrc.cjs ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. */ module.exports = { root: true, env: {browser: true, es2020: true}, extends: [ 'eslint:recommended', 'plugin:@typescript-eslint/recommended', 'plugin:react/recommended', 'plugin:react-hooks/recommended', 'plugin:import/recommended', 'plugin:prettier/recommended', ], ignorePatterns: ['dist', '.eslintrc.cjs'], parser: '@typescript-eslint/parser', parserOptions: { ecmaVersion: 'latest', sourceType: 'module', project: ['./tsconfig.json', './tsconfig.node.json'], tsconfigRootDir: __dirname, }, plugins: ['react-refresh'], settings: { react: { version: 'detect', }, 'import/resolver': { typescript: {}, node: {}, }, }, rules: { 'no-console': 'warn', curly: 'warn', 'react/jsx-no-useless-fragment': 'warn', '@typescript-eslint/no-unused-vars': [ 'warn', { argsIgnorePattern: '^_', }, ], 'react-refresh/only-export-components': [ 'warn', { allowConstantExport: true, }, ], 'react/react-in-jsx-scope': 'off', }, }; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/.gitignore ================================================ # Logs logs *.log npm-debug.log* yarn-debug.log* yarn-error.log* pnpm-debug.log* lerna-debug.log* node_modules dist dist-ssr *.local storybook-static .env # Editor directories and files .vscode/* !.vscode/extensions.json .idea .DS_Store *.suo *.ntvs* *.njsproj *.sln *.sw? 
# Test results /coverage/ /test-results/ /playwright-report/ /playwright/.cache/ ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/.prettierignore ================================================ node_modules/ dist/ ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/.prettierrc.json ================================================ { "arrowParens": "avoid", "bracketSameLine": true, "bracketSpacing": false, "singleQuote": true, "tabWidth": 2, "trailingComma": "all", "endOfLine": "auto" } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/.watchmanconfig ================================================ {} ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/frontend.Dockerfile ================================================ # Stage 1: Build Stage FROM node:22.9.0 AS build WORKDIR /app # Copy package.json and yarn.lock COPY package.json ./ COPY yarn.lock ./ # Install dependencies RUN yarn install --frozen-lockfile # Copy source code COPY . . # Build the application RUN yarn build # Stage 2: Production Stage FROM nginx:latest # Copy built files from the build stage to the production image COPY --from=build /app/dist /usr/share/nginx/html # Container startup command for the web server (nginx in this case) CMD ["nginx", "-g", "daemon off;"] ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/index.html ================================================ SAM 2 Demo | By Meta FAIR
================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/package.json ================================================ { "name": "frontend-vite", "private": true, "version": "0.0.0", "type": "module", "scripts": { "dev": "vite", "merge-schemas": "tsx schemas/merge-schemas", "relay": "yarn merge-schemas && relay-compiler", "build": "tsc && vite build", "lint": "eslint . --ext ts,tsx --report-unused-disable-directives --max-warnings 0", "preview": "vite preview --open" }, "dependencies": { "@carbon/icons-react": "^11.34.1", "@heroicons/react": "^2.0.18", "@monaco-editor/react": "^4.6.0", "@stylexjs/stylex": "^0.6.1", "graphql": "^16.8.1", "immer": "^10.0.3", "immutability-helper": "^3.1.1", "jotai": "^2.6.1", "jotai-immer": "^0.3.0", "localforage": "^1.10.0", "monaco-editor": "^0.48.0", "mp4box": "^0.5.2", "pts": "^0.12.8", "react": "^18.2.0", "react-daisyui": "^4.1.0", "react-device-detect": "^2.2.3", "react-dom": "^18.2.0", "react-dropzone": "^14.2.3", "react-error-boundary": "^4.0.11", "react-photo-album": "^2.3.0", "react-pts-canvas": "^0.5.2", "react-relay": "^16.2.0", "react-router-dom": "^6.15.0", "relay-runtime": "^16.2.0", "serialize-error": "^11.0.3", "use-immer": "^0.9.0", "use-resize-observer": "^9.1.0" }, "devDependencies": { "@graphql-tools/load-files": "^7.0.0", "@graphql-tools/merge": "^9.0.4", "@tailwindcss/typography": "^0.5.9", "@types/dom-webcodecs": "^0.1.11", "@types/invariant": "^2.2.37", "@types/node": "^20.14.10", "@types/react": "^18.2.47", "@types/react-dom": "^18.2.7", "@types/react-relay": "^16.0.6", "@types/relay-runtime": "^14.1.13", "@typescript-eslint/eslint-plugin": "^6.18.1", "@typescript-eslint/parser": "^6.18.1", "@vitejs/plugin-react": "^4.2.1", "autoprefixer": "^10.4.15", "babel-plugin-relay": "^16.2.0", "babel-plugin-strip-invariant": "^1.0.0", "daisyui": "^3.6.3", "eslint": "^8.48.0", "eslint-config-prettier": "^9.0.0", "eslint-import-resolver-alias": "^1.1.2", 
"eslint-import-resolver-typescript": "^3.6.3", "eslint-plugin-import": "^2.28.1", "eslint-plugin-prettier": "^5.1.3", "eslint-plugin-react": "^7.33.2", "eslint-plugin-react-hooks": "^4.6.0", "eslint-plugin-react-refresh": "^0.4.3", "invariant": "^2.2.4", "postcss": "^8.4.28", "postinstall-postinstall": "^2.1.0", "prettier": "^3.0.3", "relay-compiler": "^16.2.0", "sass": "^1.66.1", "strip-ansi": "^7.1.0", "tailwindcss": "^3.3.3", "tsx": "^4.16.2", "typescript": ">=4.3.5 <5.4.0", "vite": "^5.0.11", "vite-plugin-babel": "^1.2.0", "vite-plugin-relay": "^2.0.0", "vite-plugin-stylex-dev": "^0.5.2" }, "resolutions": { "wrap-ansi": "7.0.0" }, "relay": { "src": "./src/", "schema": "./schema.graphql", "language": "typescript", "eagerEsModules": true, "exclude": [ "**/node_modules/**", "**/__mocks__/**", "**/__generated__/**" ] } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/postcss.config.js ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ export default { plugins: { 'postcss-import': {}, 'tailwindcss/nesting': {}, tailwindcss: {}, autoprefixer: {}, }, }; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/schema.graphql ================================================ input AddPointsInput { sessionId: String! frameIndex: Int! clearOldPoints: Boolean! objectId: Int! 
labels: [Int!]! points: [[Float!]!]! } type CancelPropagateInVideo { success: Boolean! } input CancelPropagateInVideoInput { sessionId: String! } input ClearPointsInFrameInput { sessionId: String! frameIndex: Int! objectId: Int! } type ClearPointsInVideo { success: Boolean! } input ClearPointsInVideoInput { sessionId: String! } type CloseSession { success: Boolean! } input CloseSessionInput { sessionId: String! } type Mutation { startSession(input: StartSessionInput!): StartSession! closeSession(input: CloseSessionInput!): CloseSession! addPoints(input: AddPointsInput!): RLEMaskListOnFrame! clearPointsInFrame(input: ClearPointsInFrameInput!): RLEMaskListOnFrame! clearPointsInVideo(input: ClearPointsInVideoInput!): ClearPointsInVideo! removeObject(input: RemoveObjectInput!): [RLEMaskListOnFrame!]! cancelPropagateInVideo( input: CancelPropagateInVideoInput! ): CancelPropagateInVideo! createDeletionId: String! acceptTos: Boolean! acceptTermsOfService: String! uploadVideo( file: Upload! startTimeSec: Float = null durationTimeSec: Float = null ): Video! uploadSharedVideo(file: Upload!): SharedVideo! uploadAnnotations(file: Upload!): Boolean! } input PingInput { sessionId: String! } type Pong { success: Boolean! } type Query { ping(input: PingInput!): Pong! defaultVideo: Video! videos( """ Returns the items in the list that come before the specified cursor. """ before: String = null """ Returns the items in the list that come after the specified cursor. """ after: String = null """ Returns the first n items from the list. """ first: Int = null """ Returns the items in the list that come after the specified cursor. """ last: Int = null ): VideoConnection! sharedVideo(path: String!): SharedVideo! } type RLEMask { size: [Int!]! counts: String! order: String! } type RLEMaskForObject { objectId: Int! rleMask: RLEMask! } type RLEMaskListOnFrame { frameIndex: Int! rleMaskList: [RLEMaskForObject!]! } input RemoveObjectInput { sessionId: String! objectId: Int! 
} type StartSession { sessionId: String! } input StartSessionInput { path: String! } """ The `ID` scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as `"4"`) or integer (such as `4`) input value will be accepted as an ID. """ scalar GlobalID @specifiedBy(url: "https://relay.dev/graphql/objectidentification.htm") """ An object with a Globally Unique ID """ interface Node { """ The Globally Unique ID of this object """ id: GlobalID! } """ Information to aid in pagination. """ type PageInfo { """ When paginating forwards, are there more items? """ hasNextPage: Boolean! """ When paginating backwards, are there more items? """ hasPreviousPage: Boolean! """ When paginating backwards, the cursor to continue. """ startCursor: String """ When paginating forwards, the cursor to continue. """ endCursor: String } type SharedVideo { path: String! url: String! } scalar Upload type Video implements Node { """ The Globally Unique ID of this object """ id: GlobalID! path: String! posterPath: String width: Int! height: Int! url: String! posterUrl: String! } """ A connection to a list of items. """ type VideoConnection { """ Pagination data for this connection """ pageInfo: PageInfo! """ Contains the nodes in this connection """ edges: [VideoEdge!]! } """ An edge in a connection. """ type VideoEdge { """ A cursor for use in pagination """ cursor: String! """ The item at the end of the edge """ node: Video! } schema { query: Query mutation: Mutation } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/schemas/inference-api-schema.graphql ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. input AddPointsInput { sessionId: String! frameIndex: Int! clearOldPoints: Boolean! objectId: Int! labels: [Int!]! points: [[Float!]!]! } type CancelPropagateInVideo { success: Boolean! } input CancelPropagateInVideoInput { sessionId: String! } input ClearPointsInFrameInput { sessionId: String! frameIndex: Int! objectId: Int! } type ClearPointsInVideo { success: Boolean! } input ClearPointsInVideoInput { sessionId: String! } type CloseSession { success: Boolean! } input CloseSessionInput { sessionId: String! } type Mutation { startSession(input: StartSessionInput!): StartSession! closeSession(input: CloseSessionInput!): CloseSession! addPoints(input: AddPointsInput!): RLEMaskListOnFrame! clearPointsInFrame(input: ClearPointsInFrameInput!): RLEMaskListOnFrame! clearPointsInVideo(input: ClearPointsInVideoInput!): ClearPointsInVideo! removeObject(input: RemoveObjectInput!): [RLEMaskListOnFrame!]! cancelPropagateInVideo( input: CancelPropagateInVideoInput! ): CancelPropagateInVideo! } input PingInput { sessionId: String! } type Pong { success: Boolean! } type Query { ping(input: PingInput!): Pong! } type RLEMask { size: [Int!]! counts: String! order: String! } type RLEMaskForObject { objectId: Int! rleMask: RLEMask! } type RLEMaskListOnFrame { frameIndex: Int! rleMaskList: [RLEMaskForObject!]! } input RemoveObjectInput { sessionId: String! objectId: Int! } type StartSession { sessionId: String! } input StartSessionInput { path: String! 
} ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/schemas/merge-schemas.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {loadFilesSync} from '@graphql-tools/load-files'; import {mergeTypeDefs} from '@graphql-tools/merge'; import fs from 'fs'; import {print} from 'graphql'; import path from 'path'; import * as prettier from 'prettier'; import {fileURLToPath} from 'url'; const __filename = fileURLToPath(import.meta.url); const __dirname = path.dirname(__filename); const loadedFiles = loadFilesSync(`${__dirname}/*.graphql`); const typeDefs = mergeTypeDefs(loadedFiles); const printedTypeDefs = print(typeDefs); const prettyTypeDefs = await prettier.format(printedTypeDefs, { parser: 'graphql', }); fs.writeFileSync('schema.graphql', prettyTypeDefs); ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/schemas/video-api-schema.graphql ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ The `ID` scalar type represents a unique identifier, often used to refetch an object or as key for a cache. The ID type appears in a JSON response as a String; however, it is not intended to be human-readable. When expected as an input type, any string (such as `"4"`) or integer (such as `4`) input value will be accepted as an ID. """ scalar GlobalID @specifiedBy(url: "https://relay.dev/graphql/objectidentification.htm") type Mutation { createDeletionId: String! acceptTos: Boolean! acceptTermsOfService: String! uploadVideo( file: Upload! startTimeSec: Float = null durationTimeSec: Float = null ): Video! uploadSharedVideo(file: Upload!): SharedVideo! uploadAnnotations(file: Upload!): Boolean! } """ An object with a Globally Unique ID """ interface Node { """ The Globally Unique ID of this object """ id: GlobalID! } """ Information to aid in pagination. """ type PageInfo { """ When paginating forwards, are there more items? """ hasNextPage: Boolean! """ When paginating backwards, are there more items? """ hasPreviousPage: Boolean! """ When paginating backwards, the cursor to continue. """ startCursor: String """ When paginating forwards, the cursor to continue. """ endCursor: String } type Query { defaultVideo: Video! videos( """ Returns the items in the list that come before the specified cursor. """ before: String = null """ Returns the items in the list that come after the specified cursor. """ after: String = null """ Returns the first n items from the list. """ first: Int = null """ Returns the items in the list that come after the specified cursor. 
""" last: Int = null ): VideoConnection! sharedVideo(path: String!): SharedVideo! } type SharedVideo { path: String! url: String! } scalar Upload type Video implements Node { """ The Globally Unique ID of this object """ id: GlobalID! path: String! posterPath: String width: Int! height: Int! url: String! posterUrl: String! } """ A connection to a list of items. """ type VideoConnection { """ Pagination data for this connection """ pageInfo: PageInfo! """ Contains the nodes in this connection """ edges: [VideoEdge!]! } """ An edge in a connection. """ type VideoEdge { """ A cursor for use in pagination """ cursor: String! """ The item at the end of the edge """ node: Video! } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/App.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import SAM2DemoApp from '@/demo/SAM2DemoApp'; import SettingsContextProvider from '@/settings/SettingsContextProvider'; import {RouterProvider, createBrowserRouter} from 'react-router-dom'; export default function App() { const router = createBrowserRouter([ { path: '*', element: ( ), }, ]); return ; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/assets/scss/App.scss ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ @tailwind base; @tailwind components; @tailwind utilities; .tab { display: flex; padding: 0px 0px; margin-right: 6px; align-items: center; height: 100%; } @layer base { @font-face { font-family: 'Inter'; src: url(/fonts/Inter-VariableFont.ttf) format('truetype-variations'); } } body { font-family: 'Inter', sans-serif; -webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale; } body, html, #root { height: 100%; @media screen and (max-width: '768px') { overflow: hidden; } } :root { --segEv-font: 'Inter', system-ui, -apple-system, BlinkMacSystemFont, 'Segoe UI', Oxygen, Ubuntu, Cantarell, 'Open Sans', 'Helvetica Neue', sans-serif; --perspective: 4000px; color-scheme: dark; } h1, h2, h3, h4, h5, h6 { font-family: 'Inter', sans-serif; } .prose .display h1 { @apply text-4xl text-gray-800 font-medium leading-tight; } .prose .display h2 { @apply text-gray-800 font-medium leading-tight; font-size: 2.5rem; } .prose h1 { @apply text-3xl text-gray-800 font-medium leading-tight mt-2 mb-4; letter-spacing: 0.016rem; } .prose h2 { @apply text-2xl text-gray-800 font-medium leading-tight my-2; letter-spacing: 0.01rem; } .prose h3 { @apply text-xl text-gray-800 font-medium leading-tight my-2; letter-spacing: 0.005rem; } .prose h4 { @apply text-lg text-gray-800 font-medium leading-tight my-2; } .prose h5 { @apply text-xl text-gray-700 font-normal leading-normal my-2; letter-spacing: 0.005rem; } .prose h6 { @apply text-base 
text-gray-700 font-normal leading-normal my-2; } .prose p { @apply text-sm text-gray-700 font-normal leading-normal; @apply leading-snug; } .prose ol, .prose ul { @apply text-sm text-gray-700 font-normal leading-normal; padding-right: 2rem; } .dark-mode h1, .dark-mode h2, .dark-mode h3, .dark-mode h4, .dark-mode h5, .dark-mode p, .dark-mode ol, .dark-mode ul, .dark-mode p *, .dark-mode ol *, .dark-mode ul * { @apply text-white; } .dark-mode h4, .dark-mode h6, .dark-mode li::marker, .dark-mode a { @apply text-gray-200; } .flex-grow-2 { flex-grow: 2; } .flex-grow-3 { flex-grow: 3; } .flex-grow-4 { flex-grow: 4; } .flex-grow-5 { flex-grow: 5; } .nav-title { font-family: var(--segEv-font); } .segment-active { animation: segment-highlight 2s linear infinite; stroke-dasharray: 5, 10; stroke-width: 4px; } @keyframes segment-highlight { to { stroke-dashoffset: 60; } } .segment-select { animation: segment-dotted 2s linear infinite; stroke-dasharray: 3, 5; stroke-width: 3px; } @keyframes segment-dotted { to { stroke-dashoffset: 24; } } /** * Daisy UI customizations */ .btn { @apply normal-case rounded-md; } .comp_summary h1, .comp_summary h2, .comp_summary h3 { @apply mb-4; } .disabled { opacity: 0.4; pointer-events: none; } .absolute-center { top: 50%; left: 50%; transform: translate(-50%, -50%); } @screen lg { .drawer .grid { grid-template-columns: max-content 1fr; } } .fade-in { transition: opacity 0.5s; opacity: 1 !important; } .react-photo-gallery--gallery > div { gap: 0.25rem; } .sticker { filter: drop-shadow(0.25rem 0.25rem 5px #fff) drop-shadow(-0.25rem 0.25rem 5px #fff) drop-shadow(0.25rem -0.25rem 5px #fff) drop-shadow(-0.25rem -0.25rem 5px #fff); transition: filter 0.3s ease-out; } .sticker:hover, .sticker-select { filter: drop-shadow(0.25rem 0.25rem 1px #2962d9) drop-shadow(-0.25rem 0.25rem 1px #2962d9) drop-shadow(0.25rem -0.25rem 1px #2962d9) drop-shadow(-0.25rem -0.25rem 1px #2962d9); } /* keyframe animations */ .mask-path, .reveal { opacity: 0; animation:
reveal 0.4s ease-in forwards; } .slow-reveal { animation: reveal 1s ease-in; } .reveal-then-conceal { opacity: 0; animation: reveal-then-conceal 1.5s ease-in-out forwards; } @keyframes reveal { from { opacity: 0; } to { opacity: 1; } } @keyframes reveal-then-conceal { from { opacity: 0; } 50% { opacity: 1; } to { opacity: 0; } } .background-animate { background-size: 400%; animation: pulse 3s ease infinite; } @keyframes pulse { 0%, 100% { background-position: 0% 50%; } 50% { background-position: 100% 50%; } } /* Fix for Safari and Mobile Safari: Extracted Tailwind progress-bar styles and applied them to a
<div> instead of a <progress> element */ .loading-bar { position: relative; width: 100%; -webkit-appearance: none; -moz-appearance: none; appearance: none; overflow: hidden; height: 0.5rem; border-radius: 1rem; border-radius: var(--rounded-box, 1rem); vertical-align: baseline; background-color: hsl(var(--n) / var(--tw-bg-opacity)); --tw-bg-opacity: 0.2; &::after { --tw-bg-opacity: 1; background-color: hsl(var(--n) / var(--tw-bg-opacity)); content: ''; position: absolute; top: 0px; bottom: 0px; left: -40%; width: 33.333333%; border-radius: 1rem; border-radius: var(--rounded-box, 1rem); animation: loading 5s infinite ease-in-out; } } @keyframes loading { 50% { left: 107%; } } @keyframes inAnimation { 0% { opacity: 0; max-height: 0px; } 50% { opacity: 1; } 100% { opacity: 1; max-height: 600px; } } @keyframes outAnimation { 0% { opacity: 1; max-height: 600px; } 50% { opacity: 0; } 100% { opacity: 0; max-height: 0px; } } @keyframes ellipsisAnimation { 0% { content: ''; } 25% { content: '.'; } 50% { content: '..'; } 75% { content: '...'; } } .ellipsis::after { content: ''; animation: ellipsisAnimation 1.5s infinite; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/codecs/VideoDecoder.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ import {cloneFrame} from '@/common/codecs/WebCodecUtils'; import {FileStream} from '@/common/utils/FileUtils'; import { createFile, DataStream, MP4ArrayBuffer, MP4File, MP4Sample, MP4VideoTrack, } from 'mp4box'; import {isAndroid, isChrome, isEdge, isWindows} from 'react-device-detect'; export type ImageFrame = { bitmap: VideoFrame; timestamp: number; duration: number; }; export type DecodedVideo = { width: number; height: number; frames: ImageFrame[]; numFrames: number; fps: number; }; function decodeInternal( identifier: string, onReady: (mp4File: MP4File) => Promise<void>, onProgress: (decodedVideo: DecodedVideo) => void, ): Promise<DecodedVideo> { return new Promise<DecodedVideo>((resolve, reject) => { const imageFrames: ImageFrame[] = []; const globalSamples: MP4Sample[] = []; let decoder: VideoDecoder; let track: MP4VideoTrack | null = null; const mp4File = createFile(); mp4File.onError = reject; mp4File.onReady = async info => { if (info.videoTracks.length > 0) { track = info.videoTracks[0]; } else { // The video does not have a video track, so looking if there is an // "otherTracks" available. Note, I couldn't find any documentation // about "otherTracks" in WebCodecs [1], but it was available in the // info for MP4V-ES, which isn't supported by Chrome [2]. // However, we'll still try to get the track and then throw an error // further down in the VideoDecoder.isConfigSupported if the codec is // not supported by the browser. // // [1] https://www.w3.org/TR/webcodecs/ // [2] https://developer.mozilla.org/en-US/docs/Web/Media/Formats/Video_codecs#mp4v-es track = info.otherTracks[0]; } if (track == null) { reject(new Error(`${identifier} does not contain a video track`)); return; } const timescale = track.timescale; const edits = track.edits; let frame_n = 0; decoder = new VideoDecoder({ // Be careful with any await in this function. The VideoDecoder will // not await output and continue calling it with decoded frames.
async output(inputFrame) { if (track == null) { reject(new Error(`${identifier} does not contain a video track`)); return; } const saveTrack = track; // If the track has edits, we'll need to check that only frames are // returned that are within the edit list. This can happen for // trimmed videos that have not been transcoded and therefore the // video track contains more frames than those visually rendered when // playing back the video. if (edits != null && edits.length > 0) { const cts = Math.round( (inputFrame.timestamp * timescale) / 1_000_000, ); if (cts < edits[0].media_time) { inputFrame.close(); return; } } // Workaround for Chrome where the decoding stops at ~17 frames unless // the VideoFrame is closed. So, the workaround here is to create a // new VideoFrame and close the decoded VideoFrame. // The frame has to be cloned, or otherwise some frames at the end of the // video will be black. Note, the default VideoFrame.clone doesn't work // and it is using a frame cloning found here: // https://webcodecs-blogpost-demo.glitch.me/ if ( (isAndroid && isChrome) || (isWindows && isChrome) || (isWindows && isEdge) ) { const clonedFrame = await cloneFrame(inputFrame); inputFrame.close(); inputFrame = clonedFrame; } const sample = globalSamples[frame_n]; if (sample != null) { const duration = (sample.duration * 1_000_000) / sample.timescale; imageFrames.push({ bitmap: inputFrame, timestamp: inputFrame.timestamp, duration, }); // Sort frames in order of timestamp. This is needed because Safari // can return decoded frames out of order. imageFrames.sort((a, b) => (a.timestamp > b.timestamp ? 
1 : -1)); // Update progress on first frame and then every 100th frame if (onProgress != null && frame_n % 100 === 0) { onProgress({ width: saveTrack.track_width, height: saveTrack.track_height, frames: imageFrames, numFrames: saveTrack.nb_samples, fps: (saveTrack.nb_samples / saveTrack.duration) * saveTrack.timescale, }); } } frame_n++; if (saveTrack.nb_samples === frame_n) { // Sort frames in order of timestamp. This is needed because Safari // can return decoded frames out of order. imageFrames.sort((a, b) => (a.timestamp > b.timestamp ? 1 : -1)); resolve({ width: saveTrack.track_width, height: saveTrack.track_height, frames: imageFrames, numFrames: saveTrack.nb_samples, fps: (saveTrack.nb_samples / saveTrack.duration) * saveTrack.timescale, }); } }, error(error) { reject(error); }, }); let description; const trak = mp4File.getTrackById(track.id); const entries = trak?.mdia?.minf?.stbl?.stsd?.entries; if (entries == null) { return; } for (const entry of entries) { if (entry.avcC || entry.hvcC) { const stream = new DataStream(undefined, 0, DataStream.BIG_ENDIAN); if (entry.avcC) { entry.avcC.write(stream); } else if (entry.hvcC) { entry.hvcC.write(stream); } description = new Uint8Array(stream.buffer, 8); // Remove the box header. break; } } const configuration: VideoDecoderConfig = { codec: track.codec, codedWidth: track.track_width, codedHeight: track.track_height, description, }; const supportedConfig = await VideoDecoder.isConfigSupported(configuration); if (supportedConfig.supported == true) { decoder.configure(configuration); mp4File.setExtractionOptions(track.id, null, { nbSamples: Infinity, }); mp4File.start(); } else { reject( new Error( `Decoder config failed: config ${JSON.stringify( supportedConfig.config, )} is not supported`, ), ); return; } }; mp4File.onSamples = async ( _id: number, _user: unknown, samples: MP4Sample[], ) => { for (const sample of samples) { globalSamples.push(sample); decoder.decode( new EncodedVideoChunk({ type: sample.is_sync ? 
'key' : 'delta', timestamp: (sample.cts * 1_000_000) / sample.timescale, duration: (sample.duration * 1_000_000) / sample.timescale, data: sample.data, }), ); } await decoder.flush(); decoder.close(); }; onReady(mp4File); }); } export function decode( file: File, onProgress: (decodedVideo: DecodedVideo) => void, ): Promise<DecodedVideo> { return decodeInternal( file.name, async (mp4File: MP4File) => { const reader = new FileReader(); reader.onload = function () { const result = this.result as MP4ArrayBuffer; if (result != null) { result.fileStart = 0; mp4File.appendBuffer(result); } mp4File.flush(); }; reader.readAsArrayBuffer(file); }, onProgress, ); } export function decodeStream( fileStream: FileStream, onProgress: (decodedVideo: DecodedVideo) => void, ): Promise<DecodedVideo> { return decodeInternal( 'stream', async (mp4File: MP4File) => { let part = await fileStream.next(); while (part.done === false) { const result = part.value.data.buffer as MP4ArrayBuffer; if (result != null) { result.fileStart = part.value.range.start; mp4File.appendBuffer(result); } mp4File.flush(); part = await fileStream.next(); } }, onProgress, ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/codecs/VideoEncoder.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import {ImageFrame} from '@/common/codecs/VideoDecoder'; import {MP4ArrayBuffer, createFile} from 'mp4box'; // The selection of the timescale and seconds/key-frame values are // explained in the following docs: https://github.com/vjeux/mp4-h264-re-encode const TIMESCALE = 90000; const SECONDS_PER_KEY_FRAME = 2; export function encode( width: number, height: number, numFrames: number, framesGenerator: AsyncGenerator<ImageFrame>, progressCallback?: (progress: number) => void, ): Promise<MP4ArrayBuffer> { return new Promise((resolve, reject) => { let encodedFrameIndex = 0; let nextKeyFrameTimestamp = 0; let trackID: number | null = null; const durations: number[] = []; const outputFile = createFile(); const encoder = new VideoEncoder({ output(chunk, metaData) { const uint8 = new Uint8Array(chunk.byteLength); chunk.copyTo(uint8); const description = metaData?.decoderConfig?.description; if (trackID === null) { trackID = outputFile.addTrack({ width: width, height: height, timescale: TIMESCALE, avcDecoderConfigRecord: description, }); } const shiftedDuration = durations.shift(); if (shiftedDuration != null) { outputFile.addSample(trackID, uint8, { duration: getScaledDuration(shiftedDuration), is_sync: chunk.type === 'key', }); encodedFrameIndex++; progressCallback?.(encodedFrameIndex / numFrames); } if (encodedFrameIndex === numFrames) { resolve(outputFile.getBuffer()); } }, error(error) { reject(error); return; }, }); const setConfigurationAndEncodeFrames = async () => { // The codec value was taken from the following implementation and seems // reasonable for our use case for now: // https://github.com/vjeux/mp4-h264-re-encode/blob/main/mp4box.html#L103 // Additional details about codecs can be found here: // - https://developer.mozilla.org/en-US/docs/Web/Media/Formats/codecs_parameter // - https://www.w3.org/TR/webcodecs-codec-registry/#video-codec-registry // // The following setting is a good compromise between output video file // size and quality. 
The latencyMode "realtime" is needed for Safari, // which otherwise will produce 20x larger files when in quality // latencyMode. Chrome does a really good job with file size even when // latencyMode is set to quality. const configuration: VideoEncoderConfig = { codec: 'avc1.4d0034', width: roundToNearestEven(width), height: roundToNearestEven(height), bitrate: 14_000_000, alpha: 'discard', bitrateMode: 'variable', latencyMode: 'realtime', }; const supportedConfig = await VideoEncoder.isConfigSupported(configuration); if (supportedConfig.supported === true) { encoder.configure(configuration); } else { throw new Error( `Unsupported video encoder config ${JSON.stringify(supportedConfig)}`, ); } for await (const frame of framesGenerator) { const {bitmap, duration, timestamp} = frame; durations.push(duration); let keyFrame = false; if (timestamp >= nextKeyFrameTimestamp) { await encoder.flush(); keyFrame = true; nextKeyFrameTimestamp = timestamp + SECONDS_PER_KEY_FRAME * 1e6; } encoder.encode(bitmap, {keyFrame}); bitmap.close(); } await encoder.flush(); encoder.close(); }; // Propagate async failures (e.g., an unsupported encoder config) to the // promise instead of leaving an unhandled rejection. setConfigurationAndEncodeFrames().catch(reject); }); } function getScaledDuration(rawDuration: number) { return rawDuration / (1_000_000 / TIMESCALE); } function roundToNearestEven(dim: number) { const rounded = Math.round(dim); if (rounded % 2 === 0) { return rounded; } else { return rounded + (rounded > dim ? -1 : 1); } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/codecs/WebCodecUtils.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ // https://github.com/w3c/webcodecs/issues/88 // https://issues.chromium.org/issues/40725065 // https://webcodecs-blogpost-demo.glitch.me/ export async function cloneFrame(frame: VideoFrame): Promise<VideoFrame> { const { codedHeight, codedWidth, colorSpace, displayHeight, displayWidth, format, timestamp, } = frame; const rect = {x: 0, y: 0, width: codedWidth, height: codedHeight}; const data = new ArrayBuffer(frame.allocationSize({rect})); try { await frame.copyTo(data, {rect}); } catch (error) { // VideoFrame#copyTo fails on x64 builds on macOS. The workaround here // is to clone the frame. // https://stackoverflow.com/questions/77898766/inconsistent-behavior-of-webcodecs-copyto-method-across-different-browsers-an return frame.clone(); } return new VideoFrame(data, { codedHeight, codedWidth, colorSpace, displayHeight, displayWidth, duration: frame.duration ?? undefined, format: format!, timestamp, visibleRect: frame.visibleRect ?? undefined, }); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/MobileFirstClickBanner.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import ChangeVideoModal from '@/common/components/gallery/ChangeVideoModal'; import {DEMO_SHORT_NAME} from '@/demo/DemoConfig'; import {spacing} from '@/theme/tokens.stylex'; import {ImageCopy} from '@carbon/icons-react'; import stylex from '@stylexjs/stylex'; import {Button} from 'react-daisyui'; const styles = stylex.create({ container: { position: 'relative', backgroundColor: '#000', padding: spacing[5], paddingVertical: spacing[6], display: 'flex', flexDirection: 'column', gap: spacing[4], }, }); export default function MobileFirstClickBanner() { return (
Click an object in the video to start

You'll be able to use {DEMO_SHORT_NAME} to make fun edits to any video by tracking objects and applying visual effects. To start, click any object in the video.

); } type MobileVideoGalleryModalTriggerProps = { onClick: () => void; }; function MobileVideoGalleryModalTrigger({ onClick, }: MobileVideoGalleryModalTriggerProps) { return ( ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/Tooltip.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {PropsWithChildren} from 'react'; type Props = PropsWithChildren<{ className?: string; message: string; position?: 'left' | 'top' | 'right' | 'bottom'; }>; /** * This is a custom Tooltip component because React Daisy UI does not have an * option to *only* show tooltip on large devices. */ export default function Tooltip({ children, className = '', message, position = 'top', }: Props) { return (
{children}
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/annotations/AddObjectButton.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import useMessagesSnackbar from '@/common/components/snackbar/useDemoMessagesSnackbar'; import useVideo from '@/common/components/video/editor/useVideo'; import {activeTrackletObjectIdAtom, labelTypeAtom} from '@/demo/atoms'; import {Add} from '@carbon/icons-react'; import {useSetAtom} from 'jotai'; export default function AddObjectButton() { const video = useVideo(); const setActiveTrackletId = useSetAtom(activeTrackletObjectIdAtom); const setLabelType = useSetAtom(labelTypeAtom); const {enqueueMessage} = useMessagesSnackbar(); async function addObject() { enqueueMessage('addObjectClick'); const tracklet = await video?.createTracklet(); if (tracklet != null) { setActiveTrackletId(tracklet.id); setLabelType('positive'); } } return (
Add another object
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/annotations/ClearAllPointsInVideoButton.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import useRestartSession from '@/common/components/session/useRestartSession'; import useMessagesSnackbar from '@/common/components/snackbar/useDemoMessagesSnackbar'; import useVideo from '@/common/components/video/editor/useVideo'; import {isPlayingAtom, isStreamingAtom, labelTypeAtom} from '@/demo/atoms'; import {Reset} from '@carbon/icons-react'; import stylex from '@stylexjs/stylex'; import {useAtomValue, useSetAtom} from 'jotai'; import {useState} from 'react'; import {Button, Loading} from 'react-daisyui'; const styles = stylex.create({ container: { display: 'flex', alignItems: 'center', }, }); type Props = { onRestart: () => void; }; export default function ClearAllPointsInVideoButton({onRestart}: Props) { const [isLoading, setIsLoading] = useState(false); const isPlaying = useAtomValue(isPlayingAtom); const isStreaming = useAtomValue(isStreamingAtom); const setLabelType = useSetAtom(labelTypeAtom); const {clearMessage} = useMessagesSnackbar(); const {restartSession} = useRestartSession(); const video = useVideo(); async function handleRestart() { if (video === null) { return; } setIsLoading(true); if (isPlaying) { video.pause(); } if 
(isStreaming) { await video.abortStreamMasks(); } const isSuccessful = await video.clearPointsInVideo(); if (!isSuccessful) { await restartSession(); } video.frame = 0; setLabelType('positive'); onRestart(); clearMessage(); setIsLoading(false); } return (
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/annotations/CloseSessionButton.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import PrimaryCTAButton from '@/common/components/button/PrimaryCTAButton'; import useVideo from '@/common/components/video/editor/useVideo'; import {ChevronRight} from '@carbon/icons-react'; type Props = { onSessionClose: () => void; }; export default function CloseSessionButton({onSessionClose}: Props) { const video = useVideo(); function handleCloseSession() { video?.closeSession(); video?.logAnnotations(); onSessionClose(); } return ( }> Good to go ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/annotations/FirstClickView.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. */ import ChangeVideo from '@/common/components/gallery/ChangeVideoModal'; import useMessagesSnackbar from '@/common/components/snackbar/useDemoMessagesSnackbar'; import {DEMO_SHORT_NAME} from '@/demo/DemoConfig'; import {useEffect, useRef} from 'react'; export default function FirstClickView() { const isFirstClickMessageShown = useRef(false); const {enqueueMessage} = useMessagesSnackbar(); useEffect(() => { if (!isFirstClickMessageShown.current) { isFirstClickMessageShown.current = true; enqueueMessage('firstClick'); } }, [enqueueMessage]); return (

Click an object in the video to start

You'll be able to use {DEMO_SHORT_NAME} to make fun edits to any video by tracking objects and applying visual effects.

To start, click any object in the video.

); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/annotations/LimitNotice.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {InformationFilled} from '@carbon/icons-react'; export default function LimitNotice() { return (
In this demo, you can track up to 3 objects, even though the SAM 2 model does not have a limit.
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/annotations/MobileObjectsList.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import ClearAllPointsInVideoButton from '@/common/components/annotations/ClearAllPointsInVideoButton'; import ObjectThumbnail from '@/common/components/annotations/ObjectThumbnail'; import {OBJECT_TOOLBAR_INDEX} from '@/common/components/toolbar/ToolbarConfig'; import {BaseTracklet} from '@/common/tracker/Tracker'; import {activeTrackletObjectIdAtom, trackletObjectsAtom} from '@/demo/atoms'; import {spacing} from '@/theme/tokens.stylex'; import stylex from '@stylexjs/stylex'; import {useAtomValue, useSetAtom} from 'jotai'; const styles = stylex.create({ container: { display: 'flex', padding: spacing[5], borderTop: '1px solid #DEE3E9', }, trackletsContainer: { flexGrow: 1, display: 'flex', alignItems: 'center', gap: spacing[5], }, }); type Props = { showActiveObject: () => void; onTabChange: (newIndex: number) => void; }; export default function MobileObjectsList({ showActiveObject, onTabChange, }: Props) { const tracklets = useAtomValue(trackletObjectsAtom); const setActiveTrackletId = useSetAtom(activeTrackletObjectIdAtom); function handleSelectObject(tracklet: BaseTracklet) { setActiveTrackletId(tracklet.id); showActiveObject(); } return (
{tracklets.map(tracklet => { const {id, color, thumbnail} = tracklet; return ( { handleSelectObject(tracklet); }} /> ); })}
onTabChange(OBJECT_TOOLBAR_INDEX)} />
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/annotations/MobileObjectsToolbar.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import MobileObjectsToolbarHeader from '@/common/components/annotations/MobileObjectsToolbarHeader'; import ObjectsToolbarBottomActions from '@/common/components/annotations/ObjectsToolbarBottomActions'; import {getObjectLabel} from '@/common/components/annotations/ObjectUtils'; import ToolbarObject from '@/common/components/annotations/ToolbarObject'; import MobileFirstClickBanner from '@/common/components/MobileFirstClickBanner'; import {activeTrackletObjectAtom, isFirstClickMadeAtom} from '@/demo/atoms'; import {useAtomValue} from 'jotai'; type Props = { onTabChange: (newIndex: number) => void; }; export default function MobileObjectsToolbar({onTabChange}: Props) { const activeTracklet = useAtomValue(activeTrackletObjectAtom); const isFirstClickMade = useAtomValue(isFirstClickMadeAtom); if (!isFirstClickMade) { return ; } return (
{activeTracklet != null && ( )}
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/annotations/MobileObjectsToolbarHeader.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import ToolbarProgressChip from '@/common/components/toolbar/ToolbarProgressChip'; import {isStreamingAtom, streamingStateAtom} from '@/demo/atoms'; import {useAtomValue} from 'jotai'; export default function MobileObjectsToolbarHeader() { const isStreaming = useAtomValue(isStreamingAtom); const streamingState = useAtomValue(streamingStateAtom); return (
{streamingState === 'full' ? 'Review your selected objects across the video, and continue to edit if needed. Once everything looks good, press “Next” to continue.' : isStreaming ? 'Watch the video closely for any places where your objects aren’t tracked correctly. You can also stop tracking to make additional edits.' : 'Edit your object selection with a few more clicks if needed. Press “Track objects” to track your objects throughout the video.'}
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/annotations/ObjectActions.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import PointsToggle from '@/common/components/annotations/PointsToggle'; import useVideo from '@/common/components/video/editor/useVideo'; import useReportError from '@/common/error/useReportError'; import { activeTrackletObjectIdAtom, isPlayingAtom, isStreamingAtom, } from '@/demo/atoms'; import { AddFilled, Select_02, SubtractFilled, TrashCan, } from '@carbon/icons-react'; import {useAtom, useAtomValue} from 'jotai'; import {useState} from 'react'; import type {ButtonProps} from 'react-daisyui'; import {Button} from 'react-daisyui'; type Props = { objectId: number; active: boolean; }; function CustomButton({className, ...props}: ButtonProps) { return ( ); } export default function ObjectActions({objectId, active}: Props) { const [isRemovingObject, setIsRemovingObject] = useState(false); const [activeTrackId, setActiveTrackletId] = useAtom( activeTrackletObjectIdAtom, ); const isStreaming = useAtomValue(isStreamingAtom); const isPlaying = useAtomValue(isPlayingAtom); const video = useVideo(); const reportError = useReportError(); async function handleRemoveObject( event: React.MouseEvent, ) { try { event.stopPropagation(); setIsRemovingObject(true); if (isStreaming) { 
await video?.abortStreamMasks(); } if (isPlaying) { video?.pause(); } await video?.deleteTracklet(objectId); } catch (error) { reportError(error); } finally { setIsRemovingObject(false); if (activeTrackId === objectId) { setActiveTrackletId(null); } } } return (
{active && (
Select to add areas to the object and to remove areas from the object in the video. Click on an existing point to delete it.
)}
{active ? ( ) : ( <> }> Edit selection }> Clear )}
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/annotations/ObjectPlaceholder.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {BLUE_PINK_FILL_BR} from '@/theme/gradientStyle'; type Props = { showPlus?: boolean; onClick?: () => void; }; export default function ObjectPlaceholder({showPlus = true, onClick}: Props) { return (
{showPlus && (
)}
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/annotations/ObjectThumbnail.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ type Props = { thumbnail: string | null; color: string; onClick?: () => void; }; export default function ObjectThumbnail({thumbnail, color, onClick}: Props) { return (
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/annotations/ObjectUtils.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {BaseTracklet} from '@/common/tracker/Tracker'; export function getObjectLabel(tracklet: BaseTracklet) { return `Object ${tracklet.id + 1}`; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/annotations/ObjectsToolbar.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import AddObjectButton from '@/common/components/annotations/AddObjectButton'; import FirstClickView from '@/common/components/annotations/FirstClickView'; import LimitNotice from '@/common/components/annotations/LimitNotice'; import ObjectsToolbarBottomActions from '@/common/components/annotations/ObjectsToolbarBottomActions'; import ObjectsToolbarHeader from '@/common/components/annotations/ObjectsToolbarHeader'; import {getObjectLabel} from '@/common/components/annotations/ObjectUtils'; import ToolbarObject from '@/common/components/annotations/ToolbarObject'; import { activeTrackletObjectAtom, activeTrackletObjectIdAtom, isAddObjectEnabledAtom, isFirstClickMadeAtom, isTrackletObjectLimitReachedAtom, trackletObjectsAtom, } from '@/demo/atoms'; import {useAtomValue, useSetAtom} from 'jotai'; type Props = { onTabChange: (newIndex: number) => void; }; export default function ObjectsToolbar({onTabChange}: Props) { const tracklets = useAtomValue(trackletObjectsAtom); const activeTracklet = useAtomValue(activeTrackletObjectAtom); const setActiveTrackletId = useSetAtom(activeTrackletObjectIdAtom); const isFirstClickMade = useAtomValue(isFirstClickMadeAtom); const isObjectLimitReached = useAtomValue(isTrackletObjectLimitReachedAtom); const isAddObjectEnabled = useAtomValue(isAddObjectEnabledAtom); if (!isFirstClickMade) { return ; } return (
{tracklets.map(tracklet => { return ( { setActiveTrackletId(tracklet.id); }} /> ); })} {isAddObjectEnabled && } {isObjectLimitReached && }
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/annotations/ObjectsToolbarBottomActions.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import ClearAllPointsInVideoButton from '@/common/components/annotations/ClearAllPointsInVideoButton'; import CloseSessionButton from '@/common/components/annotations/CloseSessionButton'; import TrackAndPlayButton from '@/common/components/button/TrackAndPlayButton'; import ToolbarBottomActionsWrapper from '@/common/components/toolbar/ToolbarBottomActionsWrapper'; import { EFFECT_TOOLBAR_INDEX, OBJECT_TOOLBAR_INDEX, } from '@/common/components/toolbar/ToolbarConfig'; import {streamingStateAtom} from '@/demo/atoms'; import {useAtomValue} from 'jotai'; type Props = { onTabChange: (newIndex: number) => void; }; export default function ObjectsToolbarBottomActions({onTabChange}: Props) { const streamingState = useAtomValue(streamingStateAtom); const isTrackingEnabled = streamingState !== 'none' && streamingState !== 'full'; function handleSwitchToEffectsTab() { onTabChange(EFFECT_TOOLBAR_INDEX); } return ( onTabChange(OBJECT_TOOLBAR_INDEX)} /> {isTrackingEnabled && } {streamingState === 'full' && ( )} ); } ================================================ FILE: 
auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/annotations/ObjectsToolbarHeader.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import ToolbarHeaderWrapper from '@/common/components/toolbar/ToolbarHeaderWrapper'; import {isStreamingAtom, streamingStateAtom} from '@/demo/atoms'; import {useAtomValue} from 'jotai'; export default function ObjectsToolbarHeader() { const isStreaming = useAtomValue(isStreamingAtom); const streamingState = useAtomValue(streamingStateAtom); return ( ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/annotations/PointsToggle.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import {labelTypeAtom} from '@/demo/atoms'; import {AddFilled, SubtractFilled} from '@carbon/icons-react'; import {useAtom} from 'jotai'; export default function PointsToggle() { const [labelType, setLabelType] = useAtom(labelTypeAtom); const isPositive = labelType === 'positive'; const buttonStyle = (selected: boolean) => `btn-md bg-graydark-800 !text-white md:px-2 lg:px-4 py-0.5 ${selected ? `border border-white hover:bg-graydark-800` : `border-graydark-700 hover:bg-graydark-700`}`; return (
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/annotations/PrimaryCTAButton.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import GradientBorder from '@/common/components/button/GradientBorder'; import type {ReactNode} from 'react'; type Props = { disabled?: boolean; endIcon?: ReactNode; } & React.DOMAttributes; export default function PrimaryCTAButton({ children, disabled, endIcon, ...props }: Props) { return ( ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/annotations/ToolbarObject.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import ObjectActions from '@/common/components/annotations/ObjectActions'; import ObjectPlaceholder from '@/common/components/annotations/ObjectPlaceholder'; import ObjectThumbnail from '@/common/components/annotations/ObjectThumbnail'; import ToolbarObjectContainer from '@/common/components/annotations/ToolbarObjectContainer'; import useVideo from '@/common/components/video/editor/useVideo'; import {BaseTracklet} from '@/common/tracker/Tracker'; import emptyFunction from '@/common/utils/emptyFunction'; import {activeTrackletObjectIdAtom} from '@/demo/atoms'; import {useSetAtom} from 'jotai'; type Props = { label: string; tracklet: BaseTracklet; isActive: boolean; isMobile?: boolean; onClick?: () => void; onThumbnailClick?: () => void; }; export default function ToolbarObject({ label, tracklet, isActive, isMobile = false, onClick, onThumbnailClick = emptyFunction, }: Props) { const video = useVideo(); const setActiveTrackletId = useSetAtom(activeTrackletObjectIdAtom); async function handleCancelNewObject() { try { await video?.deleteTracklet(tracklet.id); } catch (error) { reportError(error); } finally { setActiveTrackletId(null); } } if (!tracklet.isInitialized) { return ( } isMobile={isMobile} onClick={onClick} onCancel={handleCancelNewObject} /> ); } return ( } isMobile={isMobile}> ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/annotations/ToolbarObjectContainer.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {spacing} from '@/theme/tokens.stylex'; import {Close} from '@carbon/icons-react'; import stylex from '@stylexjs/stylex'; import {PropsWithChildren, ReactNode} from 'react'; const sharedStyles = stylex.create({ container: { display: 'flex', overflow: 'hidden', cursor: 'pointer', flexShrink: 0, borderTop: 'none', backgroundColor: { '@media screen and (max-width: 768px)': '#000', }, paddingHorizontal: { default: spacing[8], '@media screen and (max-width: 768px)': spacing[5], }, paddingBottom: { default: spacing[8], '@media screen and (max-width: 768px)': 10, }, }, activeContainer: { background: '#000', borderRadius: 16, marginHorizontal: 16, padding: { default: spacing[4], '@media screen and (max-width: 768px)': spacing[5], }, marginBottom: { default: spacing[8], '@media screen and (max-width: 768px)': 0, }, }, itemsCenter: { alignItems: 'center', }, rightColumn: { marginStart: { default: spacing[4], '@media screen and (max-width: 768px)': 0, }, flexGrow: 1, alignItems: 'center', }, }); type ToolbarObjectContainerProps = PropsWithChildren<{ alignItems?: 'top' | 'center'; isActive: boolean; title: string; subtitle: string; thumbnail: ReactNode; isMobile: boolean; onCancel?: () => void; onClick?: () => void; }>; export default function ToolbarObjectContainer({ alignItems = 'top', children, isActive, title, subtitle, thumbnail, isMobile, onClick, onCancel, }: ToolbarObjectContainerProps) { if (isMobile) { return (
{children}
); } return (
{thumbnail}
{title}
{subtitle.length > 0 && (
{subtitle}
)} {children}
{onCancel != null && (
)}
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/annotations/TrackletSwimlane.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import useSelectedFrameHelper from '@/common/components/video/filmstrip/useSelectedFrameHelper'; import {BaseTracklet, DatalessMask} from '@/common/tracker/Tracker'; import {spacing, w} from '@/theme/tokens.stylex'; import stylex from '@stylexjs/stylex'; import {useMemo} from 'react'; const styles = stylex.create({ container: { display: 'flex', alignItems: 'center', gap: spacing[4], width: '100%', }, trackletNameContainer: { width: w[12], textAlign: 'center', fontSize: '10px', color: 'white', }, swimlaneContainer: { flexGrow: 1, position: 'relative', display: 'flex', height: 12, marginVertical: '0.25rem' /* 4px */, '@media screen and (max-width: 768px)': { marginVertical: 0, }, }, swimlane: { position: 'absolute', left: 0, top: '50%', width: '100%', height: 1, transform: 'translate3d(0, -50%, 0)', opacity: 0.4, }, segment: { position: 'absolute', top: '50%', height: 1, transform: 'translate3d(0, -50%, 0)', }, segmentationPoint: { position: 'absolute', top: '50%', transform: 'translate3d(0, -50%, 0)', borderRadius: '50%', cursor: 'pointer', width: 12, height: 12, '@media screen and (max-width: 768px)': { width: 8, height: 8, }, }, }); type SwimlineSegment = { 
  left: number;
  width: number;
};

type Props = {
  tracklet: BaseTracklet;
  onSelectFrame: (tracklet: BaseTracklet, index: number) => void;
};

function getSwimlaneSegments(masks: DatalessMask[]): SwimlineSegment[] {
  if (masks.length === 0) {
    return [];
  }
  const swimlineSegments: SwimlineSegment[] = [];
  let left = -1;
  for (let frameIndex = 0; frameIndex < masks.length; ++frameIndex) {
    const isEmpty = masks?.[frameIndex]?.isEmpty ?? true;
    if (left === -1 && !isEmpty) {
      left = frameIndex;
    } else if (left !== -1 && (isEmpty || frameIndex == masks.length - 1)) {
      swimlineSegments.push({
        left,
        width: frameIndex - left + 1,
      });
      left = -1;
    }
  }
  return swimlineSegments;
}

export default function TrackletSwimlane({tracklet, onSelectFrame}: Props) {
  const selection = useSelectedFrameHelper();
  const segments = useMemo(() => {
    return getSwimlaneSegments(tracklet.masks);
  }, [tracklet.masks]);
  const framesWithPoints = tracklet.points.reduce(
    (frames, pts, frameIndex) => {
      if (pts != null && pts.length > 0) {
        frames.push(frameIndex);
      }
      return frames;
    },
    [],
  );
  if (selection === null) {
    return;
  }
  return (
Object {tracklet.id + 1}
{segments.map(segment => { return (
); })} {framesWithPoints.map(index => { return (
{ onSelectFrame?.(tracklet, index); }} {...stylex.props(styles.segmentationPoint)} style={{ left: Math.floor(selection.toPosition(index) - 4), backgroundColor: tracklet.color, }} /> ); })}
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/annotations/TrackletsAnnotation.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import TrackletSwimlane from '@/common/components/annotations/TrackletSwimlane'; import useTracklets from '@/common/components/annotations/useTracklets'; import useVideo from '@/common/components/video/editor/useVideo'; import {BaseTracklet} from '@/common/tracker/Tracker'; import {m, spacing} from '@/theme/tokens.stylex'; import stylex from '@stylexjs/stylex'; const styles = stylex.create({ container: { marginTop: m[3], height: 75, paddingHorizontal: spacing[4], '@media screen and (max-width: 768px)': { height: 25, }, }, }); export default function TrackletsAnnotation() { const video = useVideo(); const tracklets = useTracklets(); function handleSelectFrame(_tracklet: BaseTracklet, index: number) { if (video !== null) { video.frame = index; } } return (
{tracklets.map(tracklet => ( ))}
  );
}

================================================
FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/annotations/useTracklets.ts
================================================
/**
 * Copyright (c) Meta Platforms, Inc. and affiliates.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
import {trackletObjectsAtom} from '@/demo/atoms';
import {useAtomValue} from 'jotai';

export default function useTracklets() {
  return useAtomValue(trackletObjectsAtom);
}

================================================
FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/button/GradientBorder.tsx
================================================
/**
 * Copyright (c) Meta Platforms, Inc. and affiliates.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
*/ import stylex from '@stylexjs/stylex'; import {gradients} from '@/theme/tokens.stylex'; enum GradientTypes { fullGradient = 'fullGradient', bluePinkGradient = 'bluePinkGradient', } type Props = { gradientType?: GradientTypes; disabled?: boolean; rounded?: boolean; className?: string; } & React.DOMAttributes; const styles = stylex.create({ animationHover: { ':hover': { backgroundPosition: '300% 100%', }, }, fullGradient: { border: '2px solid transparent', background: gradients['rainbow'], backgroundSize: '100% 400%', transition: 'background 0.35s ease-in-out', }, bluePinkGradient: { border: '2px solid transparent', background: gradients['rainbow'], }, }); export default function GradientBorder({ gradientType = GradientTypes.fullGradient, disabled, rounded = true, className = '', children, }: Props) { const gradient = (name: GradientTypes) => { if (name === GradientTypes.fullGradient) { return styles.fullGradient; } else if (name === GradientTypes.bluePinkGradient) { return styles.bluePinkGradient; } }; return (
{children}
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/button/PlaybackButton.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {OBJECT_TOOLBAR_INDEX} from '@/common/components/toolbar/ToolbarConfig'; import Tooltip from '@/common/components/Tooltip'; import useVideo from '@/common/components/video/editor/useVideo'; import {isPlayingAtom, streamingStateAtom, toolbarTabIndex} from '@/demo/atoms'; import {PauseFilled, PlayFilledAlt} from '@carbon/icons-react'; import {useAtomValue} from 'jotai'; import {useCallback, useEffect} from 'react'; export default function PlaybackButton() { const tabIndex = useAtomValue(toolbarTabIndex); const streamingState = useAtomValue(streamingStateAtom); const isPlaying = useAtomValue(isPlayingAtom); const video = useVideo(); const isDisabled = tabIndex === OBJECT_TOOLBAR_INDEX && streamingState !== 'none' && streamingState !== 'full'; const handlePlay = useCallback(() => { video?.play(); }, [video]); const handlePause = useCallback(() => { video?.pause(); }, [video]); const handleClick = useCallback(() => { if (isDisabled) { return; } if (isPlaying) { handlePause(); } else { handlePlay(); } }, [isDisabled, isPlaying, handlePlay, handlePause]); useEffect(() => { const handleKey = (event: KeyboardEvent) => { const callback = { KeyK: handleClick, 
      }[event.code];
      if (callback != null) {
        event.preventDefault();
        callback();
      }
    };
    document.addEventListener('keydown', handleKey);
    return () => {
      document.removeEventListener('keydown', handleKey);
    };
  }, [handleClick]);

  return (
  );
}

function getButtonStyles(isDisabled: boolean): string {
  if (isDisabled) {
    return '!bg-gray-600 !text-graydark-700';
  }
  return `!text-black bg-white`;
}

================================================
FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/button/PrimaryCTAButton.tsx
================================================
/**
 * Copyright (c) Meta Platforms, Inc. and affiliates.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
import GradientBorder from '@/common/components/button/GradientBorder';
import type {ReactNode} from 'react';

type Props = {
  disabled?: boolean;
  endIcon?: ReactNode;
} & React.DOMAttributes;

export default function PrimaryCTAButton({
  children,
  disabled,
  endIcon,
  ...props
}: Props) {
  return (
  );
}

================================================
FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/button/ResponsiveButton.tsx
================================================
/**
 * Copyright (c) Meta Platforms, Inc. and affiliates.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import useScreenSize from '@/common/screen/useScreenSize'; import type {ReactNode} from 'react'; import type {ButtonProps} from 'react-daisyui'; import {Button} from 'react-daisyui'; type Props = ButtonProps & {startIcon: ReactNode}; export default function ResponsiveButton(props: Props) { const {isMobile} = useScreenSize(); return ; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/button/TrackAndPlayButton.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import PrimaryCTAButton from '@/common/components/button/PrimaryCTAButton'; import useMessagesSnackbar from '@/common/components/snackbar/useDemoMessagesSnackbar'; import useFunctionThrottle from '@/common/components/useFunctionThrottle'; import useVideo from '@/common/components/video/editor/useVideo'; import { areTrackletObjectsInitializedAtom, isStreamingAtom, sessionAtom, streamingStateAtom, } from '@/demo/atoms'; import {ChevronRight} from '@carbon/icons-react'; import {useAtom, useAtomValue, useSetAtom} from 'jotai'; import {useCallback, useEffect} from 'react'; export default function TrackAndPlayButton() { const video = useVideo(); const [isStreaming, setIsStreaming] = useAtom(isStreamingAtom); const streamingState = useAtomValue(streamingStateAtom); const areObjectsInitialized = useAtomValue(areTrackletObjectsInitializedAtom); const setSession = useSetAtom(sessionAtom); const {enqueueMessage} = useMessagesSnackbar(); const {isThrottled, maxThrottles, throttle} = useFunctionThrottle(250, 4); const isTrackAndPlayDisabled = streamingState === 'aborting' || streamingState === 'requesting'; useEffect(() => { function onStreamingStarted() { setIsStreaming(true); } video?.addEventListener('streamingStarted', onStreamingStarted); function onStreamingCompleted() { enqueueMessage('trackAndPlayComplete'); setIsStreaming(false); } video?.addEventListener('streamingCompleted', onStreamingCompleted); return () => { video?.removeEventListener('streamingStarted', onStreamingStarted); video?.removeEventListener('streamingCompleted', onStreamingCompleted); }; }, [video, setIsStreaming, enqueueMessage]); const handleTrackAndPlay = useCallback(() => { if (isTrackAndPlayDisabled) { return; } if (maxThrottles && isThrottled) { enqueueMessage('trackAndPlayThrottlingWarning'); } // Throttling is only applied while streaming because we should // only throttle after a user has aborted inference. 
    // This way, a user can still quickly abort a stream if they notice the
    // inferred mask is misaligned.
    throttle(
      () => {
        if (!isStreaming) {
          enqueueMessage('trackAndPlayClick');
          video?.streamMasks();
          setSession(previousSession =>
            previousSession == null
              ? previousSession
              : {...previousSession, ranPropagation: true},
          );
        } else {
          video?.abortStreamMasks();
        }
      },
      {enableThrottling: isStreaming},
    );
  }, [
    isTrackAndPlayDisabled,
    isThrottled,
    isStreaming,
    maxThrottles,
    video,
    setSession,
    enqueueMessage,
    throttle,
  ]);

  useEffect(() => {
    const handleKey = (event: KeyboardEvent) => {
      const callback = {
        KeyK: handleTrackAndPlay,
      }[event.code];
      if (callback != null) {
        event.preventDefault();
        callback();
      }
    };
    document.addEventListener('keydown', handleKey);
    return () => {
      document.removeEventListener('keydown', handleKey);
    };
  }, [handleTrackAndPlay]);

  return (
    }>
      {isStreaming ? 'Cancel Tracking' : 'Track objects'}
  );
}

================================================
FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/code/InitializeLocalMonaco.ts
================================================
/**
 * Copyright (c) Meta Platforms, Inc. and affiliates.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
import {loader} from '@monaco-editor/react';
import Logger from '@/common/logger/Logger';
import * as monaco from 'monaco-editor';
import editorWorker from 'monaco-editor/esm/vs/editor/editor.worker?worker';
import tsWorker from 'monaco-editor/esm/vs/language/typescript/ts.worker?worker';

self.MonacoEnvironment = {
  getWorker(_, label) {
    if (label === 'typescript' || label === 'javascript') {
      return new tsWorker();
    }
    return new editorWorker();
  },
};

loader.config({monaco});

loader.init().then(monaco => {
  Logger.debug('initialized monaco', monaco);
});

================================================
FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/effects/BackgroundEffects.tsx
================================================
/**
 * Copyright (c) Meta Platforms, Inc. and affiliates.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
*/ import {backgroundEffects} from '@/common/components/effects/EffectsUtils'; import EffectVariantBadge from '@/common/components/effects/EffectVariantBadge'; import ToolbarActionIcon from '@/common/components/toolbar/ToolbarActionIcon'; import ToolbarSection from '@/common/components/toolbar/ToolbarSection'; import useVideoEffect from '@/common/components/video/editor/useVideoEffect'; import {EffectIndex} from '@/common/components/video/effects/Effects'; import {activeBackgroundEffectAtom} from '@/demo/atoms'; import {useAtomValue} from 'jotai'; export default function BackgroundEffects() { const setEffect = useVideoEffect(); const activeEffect = useAtomValue(activeBackgroundEffectAtom); return ( {backgroundEffects.map(backgroundEffect => { return ( ) } onClick={() => { if (activeEffect.name === backgroundEffect.effectName) { setEffect(backgroundEffect.effectName, EffectIndex.BACKGROUND, { variant: (activeEffect.variant + 1) % activeEffect.numVariants, }); } else { setEffect(backgroundEffect.effectName, EffectIndex.BACKGROUND); } }} /> ); })} ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/effects/EffectVariantBadge.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import {right, top} from '@/theme/tokens.stylex'; import stylex from '@stylexjs/stylex'; const styles = stylex.create({ variantBadge: { position: 'absolute', top: top[1], right: right[1], backgroundColor: '#280578', color: '#D2D2FF', fontVariantNumeric: 'tabular-nums', paddingHorizontal: 4, paddingVertical: 1, fontSize: 9, borderRadius: 6, fontWeight: 'bold', }, }); type Props = { label: string; }; export default function VariantBadge({label}: Props) { return
{label}
; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/effects/EffectsCarousel.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {CarouselContainerShadow} from '@/common/components/effects/EffectsCarouselShadow'; import {DemoEffect} from '@/common/components/effects/EffectsUtils'; import useVideoEffect from '@/common/components/video/editor/useVideoEffect'; import type {EffectIndex} from '@/common/components/video/effects/Effects'; import {Effects} from '@/common/components/video/effects/Effects'; import {color, fontSize, spacing} from '@/theme/tokens.stylex'; import stylex from '@stylexjs/stylex'; type Props = { label: string; effects: DemoEffect[]; activeEffect: keyof Effects; index: EffectIndex; }; const styles = stylex.create({ container: { display: 'flex', flexDirection: 'column', gap: spacing[2], width: '100%', }, label: { fontSize: fontSize['xs'], color: '#A6ACB2', textAlign: 'center', }, carouselContainer: { position: 'relative', borderRadius: '8px', overflow: 'hidden', width: '100%', height: '120px', backgroundColor: color['gray-700'], }, }); export default function EffectsCarousel({ label, effects, activeEffect, index: effectIndex, }: Props) { const setEffect = useVideoEffect(); return (
{label}
{effects.map(({effectName, Icon, title}, index) => { const isActive = activeEffect === effectName; return (
setEffect(effectName, effectIndex)}>
{title}
); })}
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/effects/EffectsCarouselShadow.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {spacing} from '@/theme/tokens.stylex'; import stylex from '@stylexjs/stylex'; const styles = stylex.create({ container: { position: 'absolute', width: '100%', height: spacing[8], pointerEvents: 'none', }, }); type CarouselContainerShadowProps = { isTop: boolean; }; const edgeColor = 'rgba(55, 62, 65, 1)'; const transitionColor = 'rgba(55, 62, 65, 0.2)'; export function CarouselContainerShadow({isTop}: CarouselContainerShadowProps) { return (
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/effects/EffectsToolbar.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import BackgroundEffects from '@/common/components/effects/BackgroundEffects'; import EffectsToolbarBottomActions from '@/common/components/effects/EffectsToolbarBottomActions'; import EffectsToolbarHeader from '@/common/components/effects/EffectsToolbarHeader'; import HighlightEffects from '@/common/components/effects/HighlightEffects'; import useMessagesSnackbar from '@/common/components/snackbar/useDemoMessagesSnackbar'; import {useEffect, useRef} from 'react'; type Props = { onTabChange: (newIndex: number) => void; }; export default function EffectsToolbar({onTabChange}: Props) { const isEffectsMessageShown = useRef(false); const {enqueueMessage} = useMessagesSnackbar(); useEffect(() => { if (!isEffectsMessageShown.current) { isEffectsMessageShown.current = true; enqueueMessage('effectsMessage'); } }, [enqueueMessage]); return (
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/effects/EffectsToolbarBottomActions.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import PrimaryCTAButton from '@/common/components/button/PrimaryCTAButton'; import RestartSessionButton from '@/common/components/session/RestartSessionButton'; import ToolbarBottomActionsWrapper from '@/common/components/toolbar/ToolbarBottomActionsWrapper'; import { MORE_OPTIONS_TOOLBAR_INDEX, OBJECT_TOOLBAR_INDEX, } from '@/common/components/toolbar/ToolbarConfig'; import {ChevronRight} from '@carbon/icons-react'; type Props = { onTabChange: (newIndex: number) => void; }; export default function EffectsToolbarBottomActions({onTabChange}: Props) { function handleSwitchToMoreOptionsTab() { onTabChange(MORE_OPTIONS_TOOLBAR_INDEX); } return ( onTabChange(OBJECT_TOOLBAR_INDEX)} /> }> Next ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/effects/EffectsToolbarHeader.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import ToolbarHeaderWrapper from '@/common/components/toolbar/ToolbarHeaderWrapper'; import useVideoEffect from '@/common/components/video/editor/useVideoEffect'; import { EffectIndex, effectPresets, } from '@/common/components/video/effects/Effects'; import {BLUE_PINK_FILL} from '@/theme/gradientStyle'; import {MagicWandFilled} from '@carbon/icons-react'; import {useCallback, useRef} from 'react'; import {Button} from 'react-daisyui'; export default function EffectsToolbarHeader() { const preset = useRef(0); const setEffect = useVideoEffect(); const handleTogglePreset = useCallback(() => { preset.current++; const [background, highlight] = effectPresets[preset.current % effectPresets.length]; setEffect(background.name, EffectIndex.BACKGROUND, { variant: background.variant, }); setEffect(highlight.name, EffectIndex.HIGHLIGHT, { variant: highlight.variant, }); }, [setEffect]); return (
} className="pb-4" /> ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/effects/EffectsUtils.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {Effects} from '@/common/components/video/effects/Effects'; import type {CarbonIconType} from '@carbon/icons-react'; import { AppleDash, Asterisk, Barcode, CenterCircle, ColorPalette, ColorSwitch, Development, Erase, FaceWink, Humidity, Image, Overlay, TextFont, } from '@carbon/icons-react'; export type DemoEffect = { title: string; Icon: CarbonIconType; effectName: keyof Effects; }; export const backgroundEffects: DemoEffect[] = [ {title: 'Original', Icon: Image, effectName: 'Original'}, {title: 'Erase', Icon: Erase, effectName: 'EraseBackground'}, { title: 'Gradient', Icon: ColorPalette, effectName: 'Gradient', }, { title: 'Pixelate', Icon: Development, effectName: 'Pixelate', }, {title: 'Desaturate', Icon: ColorSwitch, effectName: 'Desaturate'}, {title: 'Text', Icon: TextFont, effectName: 'BackgroundText'}, {title: 'Blur', Icon: Humidity, effectName: 'BackgroundBlur'}, {title: 'Outline', Icon: AppleDash, effectName: 'Sobel'}, ]; export const highlightEffects: DemoEffect[] = [ {title: 'Original', Icon: Image, effectName: 'Cutout'}, {title: 'Erase', Icon: Erase, effectName: 'EraseForeground'}, {title: 'Gradient', Icon: ColorPalette, 
effectName: 'VibrantMask'}, {title: 'Pixelate', Icon: Development, effectName: 'PixelateMask'}, { title: 'Overlay', Icon: Overlay, effectName: 'Overlay', }, {title: 'Emoji', Icon: FaceWink, effectName: 'Replace'}, {title: 'Burst', Icon: Asterisk, effectName: 'Burst'}, {title: 'Spotlight', Icon: CenterCircle, effectName: 'Scope'}, ]; export const moreEffects: DemoEffect[] = [ {title: 'Noisy', Icon: Barcode, effectName: 'NoisyMask'}, ]; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/effects/HighlightEffects.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import EffectVariantBadge from '@/common/components/effects/EffectVariantBadge'; import ToolbarActionIcon from '@/common/components/toolbar/ToolbarActionIcon'; import ToolbarSection from '@/common/components/toolbar/ToolbarSection'; import useVideoEffect from '@/common/components/video/editor/useVideoEffect'; import {EffectIndex} from '@/common/components/video/effects/Effects'; import { activeHighlightEffectAtom, activeHighlightEffectGroupAtom, } from '@/demo/atoms'; import {useAtomValue} from 'jotai'; export default function HighlightEffects() { const setEffect = useVideoEffect(); const activeEffect = useAtomValue(activeHighlightEffectAtom); const activeEffectsGroup = useAtomValue(activeHighlightEffectGroupAtom); return ( {activeEffectsGroup.map(highlightEffect => { return ( ) } onClick={() => { if (activeEffect.name === highlightEffect.effectName) { setEffect(highlightEffect.effectName, EffectIndex.HIGHLIGHT, { variant: (activeEffect.variant + 1) % activeEffect.numVariants, }); } else { setEffect(highlightEffect.effectName, EffectIndex.HIGHLIGHT); } }} /> ); })} ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/effects/MobileEffectsToolbar.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import EffectsCarousel from '@/common/components/effects/EffectsCarousel'; import {backgroundEffects} from '@/common/components/effects/EffectsUtils'; import useVideoEffect from '@/common/components/video/editor/useVideoEffect'; import { EffectIndex, effectPresets, } from '@/common/components/video/effects/Effects'; import {ListBoxes, MagicWand, MagicWandFilled} from '@carbon/icons-react'; import {useCallback, useRef, useState} from 'react'; import {Button} from 'react-daisyui'; import EffectsToolbarBottomActions from '@/common/components/effects/EffectsToolbarBottomActions'; import ToolbarProgressChip from '@/common/components/toolbar/ToolbarProgressChip'; import { activeBackgroundEffectAtom, activeHighlightEffectAtom, activeHighlightEffectGroupAtom, } from '@/demo/atoms'; import {BLUE_PINK_FILL} from '@/theme/gradientStyle'; import {useAtomValue} from 'jotai'; type Props = { onTabChange: (newIndex: number) => void; }; export default function MobileEffectsToolbar({onTabChange}: Props) { const preset = useRef(0); const setEffect = useVideoEffect(); const [showEffectsCarousels, setShowEffectsCarousels] = useState<boolean>(); const activeBackground = useAtomValue(activeBackgroundEffectAtom); const activeHighlight = useAtomValue(activeHighlightEffectAtom); const activeHighlightEffectsGroup = useAtomValue( activeHighlightEffectGroupAtom, ); const handleTogglePreset = useCallback(() => { preset.current++; const [background, highlight] = effectPresets[preset.current % effectPresets.length]; setEffect(background.name, EffectIndex.BACKGROUND, { variant: background.variant, }); setEffect(highlight.name, EffectIndex.HIGHLIGHT, { variant: highlight.variant, }); }, [setEffect]); return (
{showEffectsCarousels ? (
) : (
Apply visual effects to your selected objects and the background.
)}
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/effects/MoreFunEffects.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {moreEffects} from '@/common/components/effects/EffectsUtils'; import EffectVariantBadge from '@/common/components/effects/EffectVariantBadge'; import ToolbarActionIcon from '@/common/components/toolbar/ToolbarActionIcon'; import ToolbarSection from '@/common/components/toolbar/ToolbarSection'; import useVideoEffect from '@/common/components/video/editor/useVideoEffect'; import {EffectIndex} from '@/common/components/video/effects/Effects'; import {activeHighlightEffectAtom} from '@/demo/atoms'; import {useAtomValue} from 'jotai'; export default function MoreFunEffects() { const setEffect = useVideoEffect(); const activeEffect = useAtomValue(activeHighlightEffectAtom); return ( {moreEffects.map(effect => { return ( ) } onClick={() => { setEffect(effect.effectName, EffectIndex.HIGHLIGHT); }} /> ); })} ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/gallery/ChangeVideoModal.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import type {VideoGalleryTriggerProps} from '@/common/components/gallery/DemoVideoGalleryModal'; import DemoVideoGalleryModal from '@/common/components/gallery/DemoVideoGalleryModal'; import useVideo from '@/common/components/video/editor/useVideo'; import Logger from '@/common/logger/Logger'; import {isStreamingAtom, uploadingStateAtom, VideoData} from '@/demo/atoms'; import {useAtomValue, useSetAtom} from 'jotai'; import {ComponentType, useCallback} from 'react'; import {useNavigate} from 'react-router-dom'; type Props = { videoGalleryModalTrigger?: ComponentType; showUploadInGallery?: boolean; onChangeVideo?: () => void; }; export default function ChangeVideoModal({ videoGalleryModalTrigger: VideoGalleryModalTriggerComponent, showUploadInGallery = true, onChangeVideo, }: Props) { const isStreaming = useAtomValue(isStreamingAtom); const setUploadingState = useSetAtom(uploadingStateAtom); const video = useVideo(); const navigate = useNavigate(); const handlePause = useCallback(() => { video?.pause(); }, [video]); function handlePauseOrAbortVideo() { if (isStreaming) { video?.abortStreamMasks(); } else { handlePause(); } } function handleSwitchVideos(video: VideoData) { // Retain any search parameter navigate( { pathname: location.pathname, search: location.search, }, { state: { video, }, }, ); onChangeVideo?.(); } function handleUploadVideoError(error: Error) { setUploadingState('error'); Logger.error(error); } return ( ); } 
================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/gallery/DefaultVideoGalleryModalTrigger.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import ResponsiveButton from '@/common/components/button/ResponsiveButton'; import type {VideoGalleryTriggerProps} from '@/common/components/gallery/DemoVideoGalleryModal'; import {ImageCopy} from '@carbon/icons-react'; export default function DefaultVideoGalleryModalTrigger({ onClick, }: VideoGalleryTriggerProps) { return ( } onClick={onClick}> Change video ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/gallery/DemoVideoGallery.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import {DemoVideoGalleryQuery} from '@/common/components/gallery/__generated__/DemoVideoGalleryQuery.graphql'; import VideoGalleryUploadVideo from '@/common/components/gallery/VideoGalleryUploadPhoto'; import VideoPhoto from '@/common/components/gallery/VideoPhoto'; import useScreenSize from '@/common/screen/useScreenSize'; import {VideoData} from '@/demo/atoms'; import {DEMO_SHORT_NAME} from '@/demo/DemoConfig'; import {fontSize, fontWeight, spacing} from '@/theme/tokens.stylex'; import stylex from '@stylexjs/stylex'; import {useMemo} from 'react'; import PhotoAlbum, {Photo, RenderPhotoProps} from 'react-photo-album'; import {graphql, useLazyLoadQuery} from 'react-relay'; import {useLocation, useNavigate} from 'react-router-dom'; const styles = stylex.create({ container: { display: 'flex', flexDirection: 'column', marginHorizontal: spacing[1], height: '100%', lineHeight: 1.2, paddingTop: spacing[8], }, headerContainer: { marginBottom: spacing[8], fontWeight: fontWeight['medium'], fontSize: fontSize['2xl'], '@media screen and (max-width: 768px)': { marginTop: spacing[0], marginBottom: spacing[8], marginHorizontal: spacing[4], fontSize: fontSize['xl'], }, }, albumContainer: { flex: '1 1 0%', width: '100%', overflowY: 'auto', }, }); type Props = { showUploadInGallery?: boolean; onSelect?: (video: VideoPhotoData) => void; onUpload: (video: VideoData) => void; onUploadStart?: () => void; onUploadError?: (error: Error) => void; }; type VideoPhotoData = Photo & VideoData & { poster: string; isUploadOption: boolean; }; export default function DemoVideoGallery({ showUploadInGallery = false, onSelect, onUpload, onUploadStart, onUploadError, }: Props) { const navigate = useNavigate(); const location = useLocation(); const {isMobile: isMobileScreenSize} = useScreenSize(); const data = useLazyLoadQuery<DemoVideoGalleryQuery>( graphql` query DemoVideoGalleryQuery { videos { edges { node { id path posterPath url posterUrl height width } } } } `, {}, ); const allVideos: VideoPhotoData[] =
useMemo(() => { return data.videos.edges.map(video => { return { src: video.node.url, path: video.node.path, poster: video.node.posterPath, posterPath: video.node.posterPath, url: video.node.url, posterUrl: video.node.posterUrl, width: video.node.width, height: video.node.height, isUploadOption: false, } as VideoPhotoData; }); }, [data.videos.edges]); const shareableVideos: VideoPhotoData[] = useMemo(() => { const filteredVideos = [...allVideos]; if (showUploadInGallery) { const uploadOption = { src: '', width: 1280, height: 720, poster: '', isUploadOption: true, } as VideoPhotoData; filteredVideos.unshift(uploadOption); } return filteredVideos; }, [allVideos, showUploadInGallery]); const renderPhoto = ({ photo: video, imageProps, }: RenderPhotoProps) => { const {style} = imageProps; const {url, posterUrl} = video; return video.isUploadOption ? ( ) : ( { navigate(location.pathname, { state: { video, }, }); onSelect?.(video); }} /> ); }; function handleUploadVideo(video: VideoData) { navigate(location.pathname, { state: { video, }, }); onUpload?.(video); } const descriptionStyle = 'text-sm md:text-base text-gray-400 leading-snug'; return (

Select a video to try{' '} with the {DEMO_SHORT_NAME}

You’ll be able to download what you make.

layout="rows" photos={shareableVideos} targetRowHeight={isMobileScreenSize ? 120 : 200} rowConstraints={{ singleRowMaxHeight: isMobileScreenSize ? 120 : 240, maxPhotos: 3, }} renderPhoto={renderPhoto} spacing={4} />
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/gallery/DemoVideoGalleryModal.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import DefaultVideoGalleryModalTrigger from '@/common/components/gallery/DefaultVideoGalleryModalTrigger'; import { frameIndexAtom, sessionAtom, uploadingStateAtom, VideoData, } from '@/demo/atoms'; import {spacing} from '@/theme/tokens.stylex'; import {Close} from '@carbon/icons-react'; import stylex from '@stylexjs/stylex'; import {useSetAtom} from 'jotai'; import {ComponentType, useCallback, useRef} from 'react'; import {Modal} from 'react-daisyui'; import DemoVideoGallery from './DemoVideoGallery'; const styles = stylex.create({ container: { position: 'relative', minWidth: '85vw', minHeight: '85vh', overflow: 'hidden', color: '#fff', boxShadow: '0 0 100px 50px #000', borderRadius: 16, border: '2px solid transparent', background: 'linear-gradient(#1A1C1F, #1A1C1F) padding-box, linear-gradient(to right bottom, #FB73A5,#595FEF,#94EAE2,#FCCB6B) border-box', }, closeButton: { position: 'absolute', top: 0, right: 0, padding: spacing[3], zIndex: 10, cursor: 'pointer', ':hover': { opacity: 0.7, }, }, galleryContainer: { position: 'absolute', top: spacing[4], left: 0, right: 0, bottom: 0, overflowY: 'auto', }, }); export type VideoGalleryTriggerProps = { onClick: () 
=> void; }; type Props = { trigger?: ComponentType<VideoGalleryTriggerProps>; showUploadInGallery?: boolean; onOpen?: () => void; onSelect?: (video: VideoData, isUpload?: boolean) => void; onUploadVideoError?: (error: Error) => void; }; export default function DemoVideoGalleryModal({ trigger: VideoGalleryModalTrigger = DefaultVideoGalleryModalTrigger, showUploadInGallery = false, onOpen, onSelect, onUploadVideoError, }: Props) { const modalRef = useRef<HTMLDialogElement>(null); const setFrameIndex = useSetAtom(frameIndexAtom); const setUploadingState = useSetAtom(uploadingStateAtom); const setSession = useSetAtom(sessionAtom); function openModal() { const modal = modalRef.current; if (modal != null) { modal.style.display = 'grid'; modal.showModal(); } } function closeModal() { const modal = modalRef.current; if (modal != null) { modal.close(); modal.style.display = 'none'; } } const handleSelect = useCallback( async (video: VideoData, isUpload?: boolean) => { closeModal(); setFrameIndex(0); onSelect?.(video, isUpload); setUploadingState('default'); setSession(null); }, [setFrameIndex, onSelect, setUploadingState, setSession], ); function handleUploadVideoStart() { setUploadingState('uploading'); closeModal(); } function handleOpenVideoGalleryModal() { onOpen?.(); openModal(); } return ( <>
handleSelect(video)} onUpload={video => handleSelect(video, true)} onUploadStart={handleUploadVideoStart} onUploadError={onUploadVideoError} />
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/gallery/VideoGalleryUploadPhoto.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import useUploadVideo from '@/common/components/gallery/useUploadVideo'; import useScreenSize from '@/common/screen/useScreenSize'; import {VideoData} from '@/demo/atoms'; import {MAX_UPLOAD_FILE_SIZE} from '@/demo/DemoConfig'; import {BLUE_PINK_FILL_BR} from '@/theme/gradientStyle'; import {RetryFailed, Upload} from '@carbon/icons-react'; import {CSSProperties, ReactNode} from 'react'; import {Loading} from 'react-daisyui'; type Props = { style: CSSProperties; onUpload: (video: VideoData) => void; onUploadStart?: () => void; onUploadError?: (error: Error) => void; }; export default function VideoGalleryUploadVideo({ style, onUpload, onUploadStart, onUploadError, }: Props) { const {getRootProps, getInputProps, isUploading, error} = useUploadVideo({ onUpload, onUploadStart, onUploadError, }); const {isMobile} = useScreenSize(); return (
{isUploading && ( } title="Uploading ..." /> )} {error !== null && ( } title={error} /> )} {!isUploading && error === null && ( } title={ <> Upload{' '}
Max {MAX_UPLOAD_FILE_SIZE}
} /> )}
); } type IconWrapperProps = { icon: ReactNode; title: ReactNode | string; }; function IconWrapper({icon, title}: IconWrapperProps) { return ( <>
{icon}
{title}
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/gallery/VideoPhoto.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import Logger from '@/common/logger/Logger'; import stylex from '@stylexjs/stylex'; import { CSSProperties, MouseEventHandler, useCallback, useEffect, useRef, } from 'react'; const styles = stylex.create({ background: { backgroundRepeat: 'no-repeat', backgroundSize: 'cover', backgroundPosition: 'center', cursor: 'pointer', }, video: { width: '100%', height: '100%', }, }); type Props = { onClick: MouseEventHandler | undefined; src: string; poster: string; style: CSSProperties; }; export default function VideoPhoto({src, poster, style, onClick}: Props) { const videoRef = useRef<HTMLVideoElement>(null); const playPromiseRef = useRef<Promise<void> | null>(null); const play = useCallback(() => { const video = videoRef.current; // Only play video if it is not already playing if (video != null && video.paused) { // This quirky way of handling video play/pause in the browser is needed // due to the async nature of the video play API: // https://developer.chrome.com/blog/play-request-was-interrupted/ const playPromise = video.play(); playPromise.catch(error => { Logger.error('Failed to play video', error); }); playPromiseRef.current = playPromise; } }, []); const pause = useCallback(() => { // Only pause video
if it is playing const playPromise = playPromiseRef.current; if (playPromise != null) { playPromise .then(() => { videoRef.current?.pause(); }) .catch(error => { Logger.error('Failed to pause video', error); }) .finally(() => { playPromiseRef.current = null; }); } }, []); useEffect(() => { return () => { pause(); }; }, [pause]); return (
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/gallery/__generated__/DemoVideoGalleryModalQuery.graphql.ts ================================================ /** * @generated SignedSource<> * @lightSyntaxTransform * @nogrep */ /* tslint:disable */ /* eslint-disable */ // @ts-nocheck import { ConcreteRequest, Query } from 'relay-runtime'; import { FragmentRefs } from "relay-runtime"; export type DemoVideoGalleryModalQuery$variables = Record; export type DemoVideoGalleryModalQuery$data = { readonly " $fragmentSpreads": FragmentRefs<"DatasetsDropdown_datasets" | "VideoGallery_videos">; }; export type DemoVideoGalleryModalQuery = { response: DemoVideoGalleryModalQuery$data; variables: DemoVideoGalleryModalQuery$variables; }; const node: ConcreteRequest = (function(){ var v0 = [ { "alias": null, "args": null, "kind": "ScalarField", "name": "name", "storageKey": null } ], v1 = [ { "kind": "Literal", "name": "after", "value": "" }, { "kind": "Literal", "name": "first", "value": 20 } ], v2 = { "alias": null, "args": null, "kind": "ScalarField", "name": "__typename", "storageKey": null }; return { "fragment": { "argumentDefinitions": [], "kind": "Fragment", "metadata": null, "name": "DemoVideoGalleryModalQuery", "selections": [ { "args": null, "kind": "FragmentSpread", "name": "DatasetsDropdown_datasets" }, { "args": null, "kind": "FragmentSpread", "name": "VideoGallery_videos" } ], "type": "Query", "abstractKey": null }, "kind": "Request", "operation": { "argumentDefinitions": [], "kind": "Operation", "name": "DemoVideoGalleryModalQuery", "selections": [ { "alias": null, "args": null, "concreteType": "DatasetConnection", "kind": "LinkedField", "name": "datasets", "plural": false, "selections": [ { "alias": null, "args": null, "concreteType": "DatasetEdge", "kind": "LinkedField", "name": "edges", "plural": true, "selections": [ { "alias": null, "args": null, "concreteType": "Dataset", 
"kind": "LinkedField", "name": "node", "plural": false, "selections": (v0/*: any*/), "storageKey": null } ], "storageKey": null } ], "storageKey": null }, { "alias": null, "args": (v1/*: any*/), "concreteType": "VideoConnection", "kind": "LinkedField", "name": "videos", "plural": false, "selections": [ (v2/*: any*/), { "alias": null, "args": null, "concreteType": "PageInfo", "kind": "LinkedField", "name": "pageInfo", "plural": false, "selections": [ (v2/*: any*/), { "alias": null, "args": null, "kind": "ScalarField", "name": "hasPreviousPage", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "hasNextPage", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "startCursor", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "endCursor", "storageKey": null } ], "storageKey": null }, { "alias": null, "args": null, "concreteType": "VideoEdge", "kind": "LinkedField", "name": "edges", "plural": true, "selections": [ (v2/*: any*/), { "alias": null, "args": null, "concreteType": "Video", "kind": "LinkedField", "name": "node", "plural": false, "selections": [ (v2/*: any*/), { "alias": null, "args": null, "kind": "ScalarField", "name": "id", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "path", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "posterPath", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "url", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "posterUrl", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "width", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "height", "storageKey": null }, { "alias": null, "args": null, "concreteType": "Dataset", "kind": "LinkedField", "name": "dataset", "plural": false, "selections": (v0/*: any*/), "storageKey": 
null }, { "alias": null, "args": null, "concreteType": "VideoPermissions", "kind": "LinkedField", "name": "permissions", "plural": false, "selections": [ { "alias": null, "args": null, "kind": "ScalarField", "name": "canShare", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "canDownload", "storageKey": null } ], "storageKey": null } ], "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "cursor", "storageKey": null } ], "storageKey": null } ], "storageKey": "videos(after:\"\",first:20)" }, { "alias": null, "args": (v1/*: any*/), "filters": [ "datasetName" ], "handle": "connection", "key": "VideoGallery_videos", "kind": "LinkedHandle", "name": "videos" } ] }, "params": { "cacheID": "e0bccf553377682e6bc283c2ce53bee5", "id": null, "metadata": {}, "name": "DemoVideoGalleryModalQuery", "operationKind": "query", "text": "query DemoVideoGalleryModalQuery {\n ...DatasetsDropdown_datasets\n ...VideoGallery_videos\n}\n\nfragment DatasetsDropdown_datasets on Query {\n datasets {\n edges {\n node {\n name\n }\n }\n }\n}\n\nfragment VideoGallery_videos on Query {\n videos(first: 20, after: \"\") {\n __typename\n pageInfo {\n __typename\n hasPreviousPage\n hasNextPage\n startCursor\n endCursor\n }\n edges {\n __typename\n node {\n __typename\n id\n path\n posterPath\n url\n posterUrl\n width\n height\n dataset {\n name\n }\n permissions {\n canShare\n canDownload\n }\n }\n cursor\n }\n }\n}\n" } }; })(); (node as any).hash = "d09e34e2b9f2e25c2d564106de5f9c89"; export default node; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/gallery/__generated__/DemoVideoGalleryQuery.graphql.ts ================================================ /** * @generated SignedSource<<20d31a82b5f3b251b0e42b4f0e3522b8>> * @lightSyntaxTransform * @nogrep */ /* tslint:disable */ /* eslint-disable */ // @ts-nocheck import { ConcreteRequest, Query } from 
'relay-runtime'; export type DemoVideoGalleryQuery$variables = Record<PropertyKey, never>; export type DemoVideoGalleryQuery$data = { readonly videos: { readonly edges: ReadonlyArray<{ readonly node: { readonly height: number; readonly id: any; readonly path: string; readonly posterPath: string | null | undefined; readonly posterUrl: string; readonly url: string; readonly width: number; }; }>; }; }; export type DemoVideoGalleryQuery = { response: DemoVideoGalleryQuery$data; variables: DemoVideoGalleryQuery$variables; }; const node: ConcreteRequest = (function(){ var v0 = [ { "alias": null, "args": null, "concreteType": "VideoConnection", "kind": "LinkedField", "name": "videos", "plural": false, "selections": [ { "alias": null, "args": null, "concreteType": "VideoEdge", "kind": "LinkedField", "name": "edges", "plural": true, "selections": [ { "alias": null, "args": null, "concreteType": "Video", "kind": "LinkedField", "name": "node", "plural": false, "selections": [ { "alias": null, "args": null, "kind": "ScalarField", "name": "id", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "path", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "posterPath", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "url", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "posterUrl", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "height", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "width", "storageKey": null } ], "storageKey": null } ], "storageKey": null } ], "storageKey": null } ]; return { "fragment": { "argumentDefinitions": [], "kind": "Fragment", "metadata": null, "name": "DemoVideoGalleryQuery", "selections": (v0/*: any*/), "type": "Query", "abstractKey": null }, "kind": "Request", "operation": { "argumentDefinitions": [], "kind": "Operation", "name": "DemoVideoGalleryQuery",
"selections": (v0/*: any*/) }, "params": { "cacheID": "4dae74153a5528f2631b59dfb0adb021", "id": null, "metadata": {}, "name": "DemoVideoGalleryQuery", "operationKind": "query", "text": "query DemoVideoGalleryQuery {\n videos {\n edges {\n node {\n id\n path\n posterPath\n url\n posterUrl\n height\n width\n }\n }\n }\n}\n" } }; })(); (node as any).hash = "d22ac5e58f6e4eb696651be49b410e4e"; export default node; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/gallery/__generated__/useUploadVideoMutation.graphql.ts ================================================ /** * @generated SignedSource<<76014dced98d6c8989e7322712e38963>> * @lightSyntaxTransform * @nogrep */ /* tslint:disable */ /* eslint-disable */ // @ts-nocheck import { ConcreteRequest, Mutation } from 'relay-runtime'; export type useUploadVideoMutation$variables = { file: any; }; export type useUploadVideoMutation$data = { readonly uploadVideo: { readonly height: number; readonly id: any; readonly path: string; readonly posterPath: string | null | undefined; readonly posterUrl: string; readonly url: string; readonly width: number; }; }; export type useUploadVideoMutation = { response: useUploadVideoMutation$data; variables: useUploadVideoMutation$variables; }; const node: ConcreteRequest = (function(){ var v0 = [ { "defaultValue": null, "kind": "LocalArgument", "name": "file" } ], v1 = [ { "alias": null, "args": [ { "kind": "Variable", "name": "file", "variableName": "file" } ], "concreteType": "Video", "kind": "LinkedField", "name": "uploadVideo", "plural": false, "selections": [ { "alias": null, "args": null, "kind": "ScalarField", "name": "id", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "height", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "width", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "url", 
"storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "path", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "posterPath", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "posterUrl", "storageKey": null } ], "storageKey": null } ]; return { "fragment": { "argumentDefinitions": (v0/*: any*/), "kind": "Fragment", "metadata": null, "name": "useUploadVideoMutation", "selections": (v1/*: any*/), "type": "Mutation", "abstractKey": null }, "kind": "Request", "operation": { "argumentDefinitions": (v0/*: any*/), "kind": "Operation", "name": "useUploadVideoMutation", "selections": (v1/*: any*/) }, "params": { "cacheID": "dcbaf1bf411627fdb9dfbb827592cfc0", "id": null, "metadata": {}, "name": "useUploadVideoMutation", "operationKind": "mutation", "text": "mutation useUploadVideoMutation(\n $file: Upload!\n) {\n uploadVideo(file: $file) {\n id\n height\n width\n url\n path\n posterPath\n posterUrl\n }\n}\n" } }; })(); (node as any).hash = "710e462504d76597af8695b7fc70b4cf"; export default node; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/gallery/useUploadVideo.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import {useUploadVideoMutation} from '@/common/components/gallery/__generated__/useUploadVideoMutation.graphql'; import Logger from '@/common/logger/Logger'; import {VideoData} from '@/demo/atoms'; import {useState} from 'react'; import {FileRejection, FileWithPath, useDropzone} from 'react-dropzone'; import {graphql, useMutation} from 'react-relay'; const ACCEPT_VIDEOS = { 'video/mp4': ['.mp4'], 'video/quicktime': ['.mov'], }; // 70 MB default max video upload size const MAX_FILE_SIZE_IN_MB = 70; const MAX_VIDEO_UPLOAD_SIZE = MAX_FILE_SIZE_IN_MB * 1024 ** 2; type Props = { onUpload: (video: VideoData) => void; onUploadStart?: () => void; onUploadError?: (error: Error) => void; }; export default function useUploadVideo({ onUpload, onUploadStart, onUploadError, }: Props) { const [error, setError] = useState<string | null>(null); const [commit, isMutationInFlight] = useMutation<useUploadVideoMutation>( graphql` mutation useUploadVideoMutation($file: Upload!) { uploadVideo(file: $file) { id height width url path posterPath posterUrl } } `, ); const {getRootProps, getInputProps} = useDropzone({ accept: ACCEPT_VIDEOS, multiple: false, maxFiles: 1, onDrop: ( acceptedFiles: FileWithPath[], fileRejections: FileRejection[], ) => { setError(null); // Check if any of the files (only 1 file allowed) is rejected. The // rejected file has an error (e.g., 'file-too-large'). Render an // appropriate message. if (fileRejections.length > 0 && fileRejections[0].errors.length > 0) { const code = fileRejections[0].errors[0].code; if (code === 'file-too-large') { setError( `File too large. Try a video under ${MAX_FILE_SIZE_IN_MB} MB`, ); return; } } if (acceptedFiles.length === 0) { setError('File not accepted. Please try again.'); return; } if (acceptedFiles.length > 1) { setError('Too many files.
Please try again with 1 file.'); return; } onUploadStart?.(); const file = acceptedFiles[0]; commit({ variables: { file, }, uploadables: { file, }, onCompleted: response => onUpload(response.uploadVideo), onError: error => { Logger.error(error); onUploadError?.(error); setError('Upload failed.'); }, }); }, onError: error => { Logger.error(error); setError('File not supported.'); }, maxSize: MAX_VIDEO_UPLOAD_SIZE, }); return { getRootProps, getInputProps, isUploading: isMutationInFlight, error, setError, }; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/icons/GitHubIcon.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ type Props = { className?: string; }; export function GitHubIcon({className}: Props) { return ( ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/options/DownloadOption.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {Package} from '@carbon/icons-react'; import OptionButton from './OptionButton'; import useDownloadVideo from './useDownloadVideo'; export default function DownloadOption() { const {download, state} = useDownloadVideo(); return ( ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/options/GalleryOption.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import ChangeVideoModal from '@/common/components/gallery/ChangeVideoModal'; import type {VideoGalleryTriggerProps} from '@/common/components/gallery/DemoVideoGalleryModal'; import useScreenSize from '@/common/screen/useScreenSize'; import {ImageCopy} from '@carbon/icons-react'; import OptionButton from './OptionButton'; type Props = { onChangeVideo: () => void; }; export default function GalleryOption({onChangeVideo}: Props) { return ( ); } function GalleryTrigger({onClick}: VideoGalleryTriggerProps) { const {isMobile} = useScreenSize(); return ( ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/options/MoreOptionsToolbar.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import MoreOptionsToolbarBottomActions from '@/common/components/options/MoreOptionsToolbarBottomActions'; import ShareSection from '@/common/components/options/ShareSection'; import TryAnotherVideoSection from '@/common/components/options/TryAnotherVideoSection'; import useMessagesSnackbar from '@/common/components/snackbar/useDemoMessagesSnackbar'; import ToolbarHeaderWrapper from '@/common/components/toolbar/ToolbarHeaderWrapper'; import useScreenSize from '@/common/screen/useScreenSize'; import {useEffect, useRef} from 'react'; type Props = { onTabChange: (newIndex: number) => void; }; export default function MoreOptionsToolbar({onTabChange}: Props) { const {isMobile} = useScreenSize(); const {clearMessage} = useMessagesSnackbar(); const didClearMessageSnackbar = useRef(false); useEffect(() => { if (!didClearMessageSnackbar.current) { didClearMessageSnackbar.current = true; clearMessage(); } }, [clearMessage]); return (
{!isMobile &&
}
{!isMobile && ( )}
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/options/MoreOptionsToolbarBottomActions.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import RestartSessionButton from '@/common/components/session/RestartSessionButton'; import { EFFECT_TOOLBAR_INDEX, OBJECT_TOOLBAR_INDEX, } from '@/common/components/toolbar/ToolbarConfig'; import {ChevronLeft} from '@carbon/icons-react'; import {Button} from 'react-daisyui'; import ToolbarBottomActionsWrapper from '../toolbar/ToolbarBottomActionsWrapper'; type Props = { onTabChange: (newIndex: number) => void; }; export default function MoreOptionsToolbarBottomActions({onTabChange}: Props) { function handleReturnToEffectsTab() { onTabChange(EFFECT_TOOLBAR_INDEX); } return ( onTabChange(OBJECT_TOOLBAR_INDEX)} /> ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/options/OptionButton.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import GradientBorder from '@/common/components/button/GradientBorder'; import useScreenSize from '@/common/screen/useScreenSize'; import {BLUE_PINK_FILL_BR} from '@/theme/gradientStyle'; import type {CarbonIconType} from '@carbon/icons-react'; import {Loading} from 'react-daisyui'; type Props = { variant?: 'default' | 'flat' | 'gradient'; title: string | React.ReactNode; Icon: CarbonIconType; isActive?: boolean; isDisabled?: boolean; loadingProps?: { loading: boolean; label?: string; }; onClick: () => void; }; export default function OptionButton({ variant = 'default', title, Icon, isActive = false, isDisabled = false, loadingProps, onClick, }: Props) { const {isMobile} = useScreenSize(); const isLoading = loadingProps?.loading === true; function handleClick() { if (isDisabled) { return; } onClick(); } const ButtonBase = (
{isLoading ? ( ) : ( )}
{isLoading && loadingProps?.label != null ? loadingProps.label : title}
); return variant === 'gradient' ? ( {ButtonBase} ) : ( ButtonBase ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/options/ShareSection.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import DownloadOption from './DownloadOption'; export default function ShareSection() { return (
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/options/ShareUtils.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ export async function handleSaveVideo( videoPath: string, fileName?: string, ): Promise<void> { const blob = await fetch(videoPath).then(res => res.blob()); return new Promise(resolve => { const reader = new FileReader(); reader.readAsDataURL(blob); reader.addEventListener('load', () => { const elem = document.createElement('a'); elem.download = fileName ?? getFileName(); if (typeof reader.result === 'string') { elem.href = reader.result; } elem.click(); resolve(); }); }); } export function getFileName() { const date = new Date(); const timestamp = date.getTime(); return `sam2_masked_video_${timestamp}.mp4`; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/options/TryAnotherVideoSection.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License.
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import GalleryOption from '@/common/components/options/GalleryOption'; import UploadOption from '@/common/components/options/UploadOption'; import {OBJECT_TOOLBAR_INDEX} from '@/common/components/toolbar/ToolbarConfig'; import useVideo from '@/common/components/video/editor/useVideo'; import useScreenSize from '@/common/screen/useScreenSize'; type Props = { onTabChange: (tabIndex: number) => void; }; export default function TryAnotherVideoSection({onTabChange}: Props) { const {isMobile} = useScreenSize(); const video = useVideo(); function handleVideoChange() { if (video != null) { video.pause(); video.frame = 0; } onTabChange(OBJECT_TOOLBAR_INDEX); } if (isMobile) { return (
Or, try another video
); } return (
Try another video
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/options/UploadOption.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import useUploadVideo from '@/common/components/gallery/useUploadVideo'; import OptionButton from '@/common/components/options/OptionButton'; import Logger from '@/common/logger/Logger'; import useScreenSize from '@/common/screen/useScreenSize'; import {sessionAtom, uploadingStateAtom} from '@/demo/atoms'; import {MAX_UPLOAD_FILE_SIZE} from '@/demo/DemoConfig'; import {Close, CloudUpload} from '@carbon/icons-react'; import {useSetAtom} from 'jotai'; import {useNavigate} from 'react-router-dom'; type Props = { onUpload: () => void; }; export default function UploadOption({onUpload}: Props) { const navigate = useNavigate(); const {isMobile} = useScreenSize(); const setUploadingState = useSetAtom(uploadingStateAtom); const setSession = useSetAtom(sessionAtom); const {getRootProps, getInputProps, isUploading, error} = useUploadVideo({ onUpload: videoData => { navigate( {pathname: location.pathname, search: location.search}, {state: {video: videoData}}, ); onUpload(); setUploadingState('default'); setSession(null); }, onUploadError: (error: Error) => { setUploadingState('error'); Logger.error(error); }, onUploadStart: () => { setUploadingState('uploading'); }, }); return (
Upload{' '}
Max {MAX_UPLOAD_FILE_SIZE}
) : ( <> Upload your own{' '}
Max {MAX_UPLOAD_FILE_SIZE}
) } Icon={error !== null ? Close : CloudUpload} loadingProps={{loading: isUploading, label: 'Uploading...'}} onClick={() => {}} />
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/options/__generated__/GetLinkOptionShareVideoMutation.graphql.ts ================================================ /** * @generated SignedSource<<39d7e92a6c15de1583c90ae21a7825e5>> * @lightSyntaxTransform * @nogrep */ /* tslint:disable */ /* eslint-disable */ // @ts-nocheck import { ConcreteRequest, Mutation } from 'relay-runtime'; export type GetLinkOptionShareVideoMutation$variables = { file: any; }; export type GetLinkOptionShareVideoMutation$data = { readonly uploadSharedVideo: { readonly path: string; }; }; export type GetLinkOptionShareVideoMutation = { response: GetLinkOptionShareVideoMutation$data; variables: GetLinkOptionShareVideoMutation$variables; }; const node: ConcreteRequest = (function(){ var v0 = [ { "defaultValue": null, "kind": "LocalArgument", "name": "file" } ], v1 = [ { "alias": null, "args": [ { "kind": "Variable", "name": "file", "variableName": "file" } ], "concreteType": "SharedVideo", "kind": "LinkedField", "name": "uploadSharedVideo", "plural": false, "selections": [ { "alias": null, "args": null, "kind": "ScalarField", "name": "path", "storageKey": null } ], "storageKey": null } ]; return { "fragment": { "argumentDefinitions": (v0/*: any*/), "kind": "Fragment", "metadata": null, "name": "GetLinkOptionShareVideoMutation", "selections": (v1/*: any*/), "type": "Mutation", "abstractKey": null }, "kind": "Request", "operation": { "argumentDefinitions": (v0/*: any*/), "kind": "Operation", "name": "GetLinkOptionShareVideoMutation", "selections": (v1/*: any*/) }, "params": { "cacheID": "f02ec81a41c8d75c3733853e1fb04f58", "id": null, "metadata": {}, "name": "GetLinkOptionShareVideoMutation", "operationKind": "mutation", "text": "mutation GetLinkOptionShareVideoMutation(\n $file: Upload!\n) {\n uploadSharedVideo(file: $file) {\n path\n }\n}\n" } }; })(); (node as any).hash = "c1b085da9afaac5f19eeb99ff561ed55"; 
export default node; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/options/useDownloadVideo.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {getFileName} from '@/common/components/options/ShareUtils'; import { EncodingCompletedEvent, EncodingStateUpdateEvent, } from '@/common/components/video/VideoWorkerBridge'; import useVideo from '@/common/components/video/editor/useVideo'; import {MP4ArrayBuffer} from 'mp4box'; import {useState} from 'react'; type DownloadingState = 'default' | 'started' | 'encoding' | 'completed'; type State = { state: DownloadingState; progress: number; download: (shouldSave?: boolean) => Promise<MP4ArrayBuffer>; }; export default function useDownloadVideo(): State { const [downloadingState, setDownloadingState] = useState<DownloadingState>('default'); const [progress, setProgress] = useState(0); const video = useVideo(); async function download(shouldSave = true): Promise<MP4ArrayBuffer> { return new Promise(resolve => { function onEncodingStateUpdate(event: EncodingStateUpdateEvent) { setDownloadingState('encoding'); setProgress(event.progress); } function onEncodingComplete(event: EncodingCompletedEvent) { const file = event.file; if (shouldSave) { saveVideo(file, getFileName()); } video?.removeEventListener('encodingCompleted', onEncodingComplete); video?.removeEventListener( 'encodingStateUpdate',
onEncodingStateUpdate, ); setDownloadingState('completed'); resolve(file); } video?.addEventListener('encodingStateUpdate', onEncodingStateUpdate); video?.addEventListener('encodingCompleted', onEncodingComplete); if (downloadingState === 'default' || downloadingState === 'completed') { setDownloadingState('started'); video?.pause(); video?.encode(); } }); } function saveVideo(file: MP4ArrayBuffer, fileName: string) { const blob = new Blob([file]); const url = window.URL.createObjectURL(blob); const a = document.createElement('a'); document.body.appendChild(a); a.setAttribute('href', url); a.setAttribute('download', fileName); a.setAttribute('target', '_self'); a.click(); window.URL.revokeObjectURL(url); } return {download, progress, state: downloadingState}; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/session/RestartSessionButton.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import useRestartSession from '@/common/components/session/useRestartSession'; import {Reset} from '@carbon/icons-react'; import {Button, Loading} from 'react-daisyui'; type Props = { onRestartSession: () => void; }; export default function RestartSessionButton({onRestartSession}: Props) { const {restartSession, isLoading} = useRestartSession(); function handleRestartSession() { restartSession(onRestartSession); } return ( ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/session/__generated__/useCloseSessionBeforeUnloadMutation.graphql.ts ================================================ /** * @generated SignedSource<> * @lightSyntaxTransform * @nogrep */ /* tslint:disable */ /* eslint-disable */ // @ts-nocheck import { ConcreteRequest, Mutation } from 'relay-runtime'; export type CloseSessionInput = { sessionId: string; }; export type useCloseSessionBeforeUnloadMutation$variables = { input: CloseSessionInput; }; export type useCloseSessionBeforeUnloadMutation$data = { readonly closeSession: { readonly success: boolean; }; }; export type useCloseSessionBeforeUnloadMutation = { response: useCloseSessionBeforeUnloadMutation$data; variables: useCloseSessionBeforeUnloadMutation$variables; }; const node: ConcreteRequest = (function(){ var v0 = [ { "defaultValue": null, "kind": "LocalArgument", "name": "input" } ], v1 = [ { "alias": null, "args": [ { "kind": "Variable", "name": "input", "variableName": "input" } ], "concreteType": "CloseSession", "kind": "LinkedField", "name": "closeSession", "plural": false, "selections": [ { "alias": null, "args": null, "kind": "ScalarField", "name": "success", "storageKey": null } ], "storageKey": null } ]; return { "fragment": { "argumentDefinitions": (v0/*: any*/), "kind": "Fragment", "metadata": null, "name": "useCloseSessionBeforeUnloadMutation", "selections": (v1/*: any*/), "type": "Mutation", "abstractKey": null }, "kind": "Request", 
"operation": { "argumentDefinitions": (v0/*: any*/), "kind": "Operation", "name": "useCloseSessionBeforeUnloadMutation", "selections": (v1/*: any*/) }, "params": { "cacheID": "99b73bd43a9f74104d545778cebbd15c", "id": null, "metadata": {}, "name": "useCloseSessionBeforeUnloadMutation", "operationKind": "mutation", "text": "mutation useCloseSessionBeforeUnloadMutation(\n $input: CloseSessionInput!\n) {\n closeSession(input: $input) {\n success\n }\n}\n" } }; })(); (node as any).hash = "55dd870645c9736b797b90819ddb1b92"; export default node; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/session/useCloseSessionBeforeUnload.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {useCloseSessionBeforeUnloadMutation$variables} from '@/common/components/session/__generated__/useCloseSessionBeforeUnloadMutation.graphql'; import {sessionAtom} from '@/demo/atoms'; import useSettingsContext from '@/settings/useSettingsContext'; import {useAtomValue} from 'jotai'; import {useEffect, useMemo} from 'react'; import {ConcreteRequest, graphql} from 'relay-runtime'; /** * The useCloseSessionBeforeUnload is a dirty workaround to send close session * requests on window/tab close. Going through Relay does not send the request * even if the `keepalive` flag is set for the request. 
It does work when the * fetch is called directly with the close session mutation. * * Caveat: there is static typing, but there might be other caveats around this * quirky hack. */ export default function useCloseSessionBeforeUnload() { const session = useAtomValue(sessionAtom); const {settings} = useSettingsContext(); const data = useMemo(() => { if (session == null) { return null; } const graphQLTaggedNode = graphql` mutation useCloseSessionBeforeUnloadMutation($input: CloseSessionInput!) { closeSession(input: $input) { success } } ` as ConcreteRequest; const variables: useCloseSessionBeforeUnloadMutation$variables = { input: { sessionId: session.id, }, }; const query = graphQLTaggedNode.params.text; if (query === null) { return null; } return { query, variables, }; }, [session]); useEffect(() => { function onBeforeUnload() { if (data == null) { return; } fetch(`${settings.inferenceAPIEndpoint}/graphql`, { method: 'POST', credentials: 'include', headers: { 'Content-Type': 'application/json', }, keepalive: true, body: JSON.stringify(data), }); } window.addEventListener('beforeunload', onBeforeUnload); return () => { window.removeEventListener('beforeunload', onBeforeUnload); }; }, [data, session, settings.inferenceAPIEndpoint]); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/session/useRestartSession.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
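The workaround above boils down to serializing the close-session mutation by hand and POSTing it with `keepalive: true`, so the browser is allowed to finish the request after the tab unloads. A framework-free sketch of that pattern, assuming the same endpoint shape as the hook; the helper names `buildClosePayload` and `sendCloseSession` (and the injected `fetchImpl` parameter) are illustrative, not part of the demo code:

```typescript
// Sketch of the beforeunload close-session workaround: build the GraphQL
// payload by hand and send it with `keepalive` so the request can outlive
// the page. Helper names here are illustrative, not demo API.
const CLOSE_SESSION_MUTATION = `
mutation useCloseSessionBeforeUnloadMutation($input: CloseSessionInput!) {
  closeSession(input: $input) {
    success
  }
}`;

type ClosePayload = {
  query: string;
  variables: {input: {sessionId: string}};
};

function buildClosePayload(sessionId: string): ClosePayload {
  return {
    query: CLOSE_SESSION_MUTATION,
    variables: {input: {sessionId}},
  };
}

function sendCloseSession(
  endpoint: string,
  sessionId: string,
  // Injected so the sketch is testable; pass the global fetch in a browser.
  fetchImpl: (url: string, init?: object) => Promise<unknown>,
): Promise<unknown> {
  return fetchImpl(`${endpoint}/graphql`, {
    method: 'POST',
    credentials: 'include',
    headers: {'Content-Type': 'application/json'},
    keepalive: true, // the crucial part: survives window/tab close
    body: JSON.stringify(buildClosePayload(sessionId)),
  });
}
```

The `keepalive` flag is what a plain Relay commit does not get you here, per the comment above.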
* See the License for the specific language governing permissions and * limitations under the License. */ import useMessagesSnackbar from '@/common/components/snackbar/useDemoMessagesSnackbar'; import useVideo from '@/common/components/video/editor/useVideo'; import useInputVideo from '@/common/components/video/useInputVideo'; import { activeTrackletObjectIdAtom, isPlayingAtom, isStreamingAtom, labelTypeAtom, trackletObjectsAtom, } from '@/demo/atoms'; import {useAtomValue, useSetAtom} from 'jotai'; import {useState} from 'react'; export default function useRestartSession() { const [isLoading, setIsLoading] = useState<boolean>(); const isPlaying = useAtomValue(isPlayingAtom); const isStreaming = useAtomValue(isStreamingAtom); const setActiveTrackletObjectId = useSetAtom(activeTrackletObjectIdAtom); const setTracklets = useSetAtom(trackletObjectsAtom); const setLabelType = useSetAtom(labelTypeAtom); const {clearMessage} = useMessagesSnackbar(); const {inputVideo} = useInputVideo(); const video = useVideo(); async function restartSession(onRestart?: () => void) { if (video === null || inputVideo === null) { return; } setIsLoading(true); if (isPlaying) { video.pause(); } if (isStreaming) { await video.abortStreamMasks(); } await video.startSession(inputVideo.path); video.frame = 0; setActiveTrackletObjectId(0); setTracklets([]); setLabelType('positive'); onRestart?.(); clearMessage(); setIsLoading(false); } return {isLoading, restartSession}; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/snackbar/DemoMessagesSnackbarUtils.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License.
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {EnqueueOption} from '@/common/components/snackbar/useMessagesSnackbar'; export type MessageOptions = EnqueueOption & { repeat?: boolean; }; type MessageEvent = { text: string; shown: boolean; action?: Element; options?: MessageOptions; }; export interface MessagesEventMap { startSession: MessageEvent; firstClick: MessageEvent; pointClick: MessageEvent; addObjectClick: MessageEvent; trackAndPlayClick: MessageEvent; trackAndPlayComplete: MessageEvent; trackAndPlayThrottlingWarning: MessageEvent; effectsMessage: MessageEvent; } export const defaultMessageMap: MessagesEventMap = { startSession: { text: 'Starting session', shown: false, options: {type: 'loading', showClose: false, repeat: true, duration: 2000}, }, firstClick: { text: 'Tip: Click on any object in the video to get started.', shown: false, options: {expire: false, repeat: false}, }, pointClick: { text: 'Tip: Not what you expected? Add a few more clicks until the full object you want is selected.', shown: false, options: {expire: false, repeat: false}, }, addObjectClick: { text: 'Tip: Add a new object by clicking on it in the video.', shown: false, options: {expire: false, repeat: false}, }, trackAndPlayClick: { text: 'Hang tight while your objects are tracked! You’ll be able to apply visual effects in the next step. 
Stop tracking at any point to adjust your selections if the tracking doesn’t look right.', shown: false, options: {expire: false, repeat: false}, }, trackAndPlayComplete: { text: 'Tip: You can fix tracking issues by going back to the frames where tracking is not quite right and adding or removing clicks.', shown: false, options: {expire: false, repeat: false}, }, trackAndPlayThrottlingWarning: { text: 'Looks like you have clicked the tracking button a bit too often! To keep things running smoothly, we have temporarily disabled the button.', shown: false, options: {repeat: true}, }, effectsMessage: { text: 'Tip: If you aren’t sure where to get started, click “Surprise Me” to apply a surprise effect to your video.', shown: false, options: {expire: false, repeat: false}, }, }; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/snackbar/MessagesSnackbar.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import useScreenSize from '@/common/screen/useScreenSize'; import {color, gradients} from '@/theme/tokens.stylex'; import {Close} from '@carbon/icons-react'; import stylex from '@stylexjs/stylex'; import {useAtomValue} from 'jotai'; import {Loading, RadialProgress} from 'react-daisyui'; import {messageAtom} from './snackbarAtoms'; import useExpireMessage from './useExpireMessage'; import useMessagesSnackbar from './useMessagesSnackbar'; const styles = stylex.create({ container: { position: 'absolute', top: '8px', right: '8px', }, mobileContainer: { position: 'absolute', bottom: '8px', left: '8px', right: '8px', }, messageContainer: { padding: '20px 20px', color: '#FFF', borderRadius: '8px', fontSize: '0.9rem', maxWidth: 400, border: '2px solid transparent', background: gradients['yellowTeal'], }, messageWarningContainer: { background: '#FFDC32', color: color['gray-900'], }, messageContent: { display: 'flex', alignItems: 'center', gap: '8px', }, progress: { flexShrink: 0, color: 'rgba(255, 255, 255, 0.1)', }, closeColumn: { display: 'flex', alignSelf: 'stretch', alignItems: 'start', }, }); export default function MessagesSnackbar() { const message = useAtomValue(messageAtom); const {clearMessage} = useMessagesSnackbar(); const {isMobile} = useScreenSize(); useExpireMessage(); if (message == null) { return null; } const closeIcon = ( ); return (
{message.text}
{message.type === 'loading' && } {message.showClose && (
{message.expire ? ( {closeIcon} ) : ( closeIcon )}
)}
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/snackbar/snackbarAtoms.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {atom} from 'jotai'; export type MessageType = 'info' | 'loading' | 'warning'; export type Message = { type: MessageType; text: string; duration: number; progress: number; startTime: number; expire: boolean; showClose: boolean; showReset: boolean; }; export const messageAtom = atom<Message | null>(null); ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/snackbar/useDemoMessagesSnackbar.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ import {MessagesEventMap} from '@/common/components/snackbar/DemoMessagesSnackbarUtils'; import useMessagesSnackbar from '@/common/components/snackbar/useMessagesSnackbar'; import {messageMapAtom} from '@/demo/atoms'; import {useAtom} from 'jotai'; import {useCallback} from 'react'; type State = { enqueueMessage: (messageType: keyof MessagesEventMap) => void; clearMessage: () => void; }; export default function useDemoMessagesSnackbar(): State { const [messageMap, setMessageMap] = useAtom(messageMapAtom); const {enqueueMessage: enqueue, clearMessage} = useMessagesSnackbar(); const enqueueMessage = useCallback( (messageType: keyof MessagesEventMap) => { const {text, shown, options} = messageMap[messageType]; if (!options?.repeat && shown === true) { return; } enqueue(text, options); // Copy the nested entry as well, so the previous atom value is not mutated. setMessageMap({ ...messageMap, [messageType]: {...messageMap[messageType], shown: true}, }); }, [enqueue, messageMap, setMessageMap], ); return {enqueueMessage, clearMessage}; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/snackbar/useExpireMessage.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
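The gating in `useDemoMessagesSnackbar` above (skip a message that has already been shown unless its options opt into `repeat`) reduces to a pure predicate. A minimal sketch; `shouldEnqueue` is an illustrative name, not demo API:

```typescript
// Pure sketch of the repeat gate in useDemoMessagesSnackbar: a message is
// enqueued when it has not been shown yet, or when its options set repeat.
type GateOptions = {repeat?: boolean};

function shouldEnqueue(alreadyShown: boolean, options?: GateOptions): boolean {
  return options?.repeat === true || !alreadyShown;
}
```

This mirrors the early-return condition `!options?.repeat && shown === true` in the hook.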
*/ import {useAtom} from 'jotai'; import {useEffect, useRef} from 'react'; import {Message, messageAtom} from '@/common/components/snackbar/snackbarAtoms'; export default function useExpireMessage() { const [message, setMessage] = useAtom(messageAtom); const messageRef = useRef<Message | null>(null); const intervalRef = useRef<ReturnType<typeof setInterval> | null>(null); useEffect(() => { messageRef.current = message; }, [message]); useEffect(() => { function resetInterval() { if (intervalRef.current != null) { clearInterval(intervalRef.current); intervalRef.current = null; } } if (intervalRef.current == null && message != null && message.expire) { intervalRef.current = setInterval(() => { const prevMessage = messageRef.current; if (prevMessage == null) { setMessage(null); resetInterval(); return; } const messageDuration = Date.now() - prevMessage.startTime; if (messageDuration > prevMessage.duration) { setMessage(null); resetInterval(); return; } setMessage({ ...prevMessage, progress: messageDuration / prevMessage.duration, }); }, 20); } }, [message, setMessage]); useEffect(() => { return () => { if (intervalRef.current != null) { clearInterval(intervalRef.current); } }; }, []); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/snackbar/useMessagesSnackbar.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ import {useSetAtom} from 'jotai'; import {useCallback} from 'react'; import { MessageType, messageAtom, } from '@/common/components/snackbar/snackbarAtoms'; export type EnqueueOption = { duration?: number; type?: MessageType; expire?: boolean; showClose?: boolean; showReset?: boolean; }; type State = { clearMessage: () => void; enqueueMessage: (message: string, options?: EnqueueOption) => void; }; export default function useMessagesSnackbar(): State { const setMessage = useSetAtom(messageAtom); const enqueueMessage = useCallback( (message: string, options?: EnqueueOption) => { setMessage({ text: message, type: options?.type ?? 'info', duration: options?.duration ?? 5000, progress: 0, startTime: Date.now(), expire: options?.expire ?? true, showClose: options?.showClose ?? true, showReset: options?.showReset ?? false, }); }, [setMessage], ); function clearMessage() { setMessage(null); } return {enqueueMessage, clearMessage}; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/toolbar/DesktopToolbar.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
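The `??` fallbacks in `enqueueMessage` above amount to a small message factory. A self-contained sketch with the `Message` and `EnqueueOption` shapes restated locally from `snackbarAtoms.ts` and this hook; the `makeMessage` helper and injectable `now` timestamp are illustrative, not demo API:

```typescript
// Sketch of the defaulting performed by enqueueMessage. Types are restated
// here so the snippet stands alone; makeMessage is an illustrative helper.
type MessageType = 'info' | 'loading' | 'warning';

type Message = {
  type: MessageType;
  text: string;
  duration: number;
  progress: number;
  startTime: number;
  expire: boolean;
  showClose: boolean;
  showReset: boolean;
};

type EnqueueOption = {
  duration?: number;
  type?: MessageType;
  expire?: boolean;
  showClose?: boolean;
  showReset?: boolean;
};

function makeMessage(
  text: string,
  options: EnqueueOption = {},
  now: number = Date.now(), // injectable for testing
): Message {
  return {
    text,
    type: options.type ?? 'info',
    duration: options.duration ?? 5000,
    progress: 0,
    startTime: now,
    expire: options.expire ?? true,
    showClose: options.showClose ?? true,
    showReset: options.showReset ?? false,
  };
}
```

Defaults match the hook: an `info` message that expires after 5 seconds with a close button.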
*/ import ObjectsToolbar from '@/common/components/annotations/ObjectsToolbar'; import EffectsToolbar from '@/common/components/effects/EffectsToolbar'; import MoreOptionsToolbar from '@/common/components/options/MoreOptionsToolbar'; import type {CSSProperties} from 'react'; type Props = { tabIndex: number; onTabChange: (newIndex: number) => void; }; export default function DesktopToolbar({tabIndex, onTabChange}: Props) { const toolbarShadow: CSSProperties = { boxShadow: '0px 1px 3px 1px rgba(0,0,0,.25)', transition: 'box-shadow 0.8s ease-out', }; const tabs = [ , , , ]; return (
{tabs[tabIndex]}
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/toolbar/MobileToolbar.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import MobileObjectsToolbar from '@/common/components/annotations/MobileObjectsToolbar'; import MobileEffectsToolbar from '@/common/components/effects/MobileEffectsToolbar'; import MoreOptionsToolbar from '@/common/components/options/MoreOptionsToolbar'; type Props = { tabIndex: number; onTabChange: (newIndex: number) => void; }; export default function MobileToolbar({tabIndex, onTabChange}: Props) { const tabs = [ , , , ]; return (
{tabs[tabIndex]}
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/toolbar/Toolbar.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import useListenToStreamingState from '@/common/components/toolbar/useListenToStreamingState'; import useToolbarTabs from '@/common/components/toolbar/useToolbarTabs'; import useVideo from '@/common/components/video/editor/useVideo'; import useVideoEffect from '@/common/components/video/editor/useVideoEffect'; import {EffectIndex} from '@/common/components/video/effects/Effects'; import useScreenSize from '@/common/screen/useScreenSize'; import { codeEditorOpenedAtom, isPlayingAtom, isStreamingAtom, } from '@/demo/atoms'; import {useAtom, useAtomValue, useSetAtom} from 'jotai'; import {useCallback, useEffect} from 'react'; import DesktopToolbar from './DesktopToolbar'; import MobileToolbar from './MobileToolbar'; import {OBJECT_TOOLBAR_INDEX} from './ToolbarConfig'; export default function Toolbar() { const [tabIndex, setTabIndex] = useToolbarTabs(); const video = useVideo(); const setIsPlaying = useSetAtom(isPlayingAtom); const [isStreaming, setIsStreaming] = useAtom(isStreamingAtom); const codeEditorOpened = useAtomValue(codeEditorOpenedAtom); const {isMobile} = useScreenSize(); const setEffect = useVideoEffect(); const resetEffects = useCallback(() => { 
setEffect('Original', EffectIndex.BACKGROUND, {variant: 0}); setEffect('Overlay', EffectIndex.HIGHLIGHT, {variant: 0}); }, [setEffect]); const handleStopVideo = useCallback(() => { if (isStreaming) { video?.abortStreamMasks(); } else { video?.pause(); } }, [video, isStreaming]); const handleTabChange = useCallback( (newIndex: number) => { if (newIndex === OBJECT_TOOLBAR_INDEX) { handleStopVideo(); resetEffects(); } setTabIndex(newIndex); }, [handleStopVideo, resetEffects, setTabIndex], ); useListenToStreamingState(); useEffect(() => { function onPlay() { setIsPlaying(true); } function onPause() { setIsPlaying(false); } video?.addEventListener('play', onPlay); video?.addEventListener('pause', onPause); return () => { video?.removeEventListener('play', onPlay); video?.removeEventListener('pause', onPause); }; }, [video, resetEffects, setIsStreaming, setIsPlaying]); if (codeEditorOpened) { return null; } return isMobile ? ( ) : ( ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/toolbar/ToolbarActionIcon.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import GradientBorder from '@/common/components/button/GradientBorder'; import useScreenSize from '@/common/screen/useScreenSize'; import {BLUE_PINK_FILL_BR} from '@/theme/gradientStyle'; import type {CarbonIconType} from '@carbon/icons-react'; import {Loading} from 'react-daisyui'; type Props = { isDisabled?: boolean; isActive?: boolean; icon: CarbonIconType; title: string; badge?: React.ReactNode; variant: 'toggle' | 'button' | 'gradient' | 'flat'; span?: 1 | 2; loadingProps?: { loading: boolean; label?: string; }; onClick: () => void; }; export default function ToolbarActionIcon({ variant, isDisabled = false, isActive = false, title, badge, loadingProps, icon: Icon, span = 1, onClick, }: Props) { const {isMobile} = useScreenSize(); const isLoading = loadingProps?.loading === true; function handleClick() { if (isDisabled) { return; } onClick(); } const ButtonBase = (
{isLoading ? ( ) : ( )}
{isLoading && loadingProps?.label != null ? loadingProps.label : title}
{isActive && badge}
); return variant == 'gradient' ? ( {ButtonBase} ) : ( ButtonBase ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/toolbar/ToolbarBottomActionsWrapper.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {spacing} from '@/theme/tokens.stylex'; import stylex from '@stylexjs/stylex'; import {PropsWithChildren} from 'react'; const styles = stylex.create({ container: { display: 'flex', alignItems: 'center', justifyContent: 'space-between', paddingTop: { default: spacing[2], '@media screen and (max-width: 768px)': spacing[4], }, paddingBottom: spacing[6], paddingHorizontal: spacing[6], }, }); export default function ToolbarBottomActionsWrapper({ children, }: PropsWithChildren) { return
{children}
; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/toolbar/ToolbarConfig.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ export const OBJECT_TOOLBAR_INDEX = 0; export const EFFECT_TOOLBAR_INDEX = 1; export const MORE_OPTIONS_TOOLBAR_INDEX = 2; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/toolbar/ToolbarHeaderWrapper.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import {ReactNode} from 'react'; import ToolbarProgressChip from './ToolbarProgressChip'; type Props = { title: string; description?: string; bottomSection?: ReactNode; showProgressChip?: boolean; className?: string; }; export default function ToolbarHeaderWrapper({ title, description, bottomSection, showProgressChip = true, className, }: Props) { return (
{showProgressChip && }

{title}

{description != null && (
{description}
)} {bottomSection != null && bottomSection}
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/toolbar/ToolbarProgressChip.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {OBJECT_TOOLBAR_INDEX} from '@/common/components/toolbar/ToolbarConfig'; import useToolbarTabs from '@/common/components/toolbar/useToolbarTabs'; import {streamingStateAtom} from '@/demo/atoms'; import {useAtomValue} from 'jotai'; import {useMemo} from 'react'; import {Loading} from 'react-daisyui'; const TOTAL_DEMO_STEPS = 3; export default function ToolbarProgressChip() { const [toolbarIndex] = useToolbarTabs(); const streamingState = useAtomValue(streamingStateAtom); const showLoader = useMemo(() => { return streamingState === 'partial' || streamingState === 'requesting'; }, [streamingState]); function getStepValue() { if (toolbarIndex === OBJECT_TOOLBAR_INDEX) { return streamingState !== 'full' ? 1 : 2; } return 3; } return ( {showLoader ? ( ) : ( `${getStepValue()}/${TOTAL_DEMO_STEPS}` )} ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/toolbar/ToolbarSection.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {PropsWithChildren} from 'react'; type Props = PropsWithChildren<{ title: string; borderBottom?: boolean; }>; export default function ToolbarSection({ children, title, borderBottom = false, }: Props) { return (
{title}
{children}
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/toolbar/useListenToStreamingState.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {StreamingStateUpdateEvent} from '@/common/components/video/VideoWorkerBridge'; import useVideo from '@/common/components/video/editor/useVideo'; import {StreamingState} from '@/common/tracker/Tracker'; import {isStreamingAtom, streamingStateAtom} from '@/demo/atoms'; import {useAtom} from 'jotai'; import {useEffect} from 'react'; export default function useListenToStreamingState(): { isStreaming: boolean; streamingState: StreamingState; } { const [streamingState, setStreamingState] = useAtom(streamingStateAtom); const [isStreaming, setIsStreaming] = useAtom(isStreamingAtom); const video = useVideo(); useEffect(() => { function onStreamingStateUpdate(event: StreamingStateUpdateEvent) { setStreamingState(event.state); } function onStreamingStarted() { setIsStreaming(true); } function onStreamingCompleted() { setIsStreaming(false); } video?.addEventListener('streamingStateUpdate', onStreamingStateUpdate); video?.addEventListener('streamingStarted', onStreamingStarted); video?.addEventListener('streamingCompleted', onStreamingCompleted); return () => { video?.removeEventListener( 'streamingStateUpdate', onStreamingStateUpdate, ); 
video?.removeEventListener('streamingStarted', onStreamingStarted); video?.removeEventListener('streamingCompleted', onStreamingCompleted); }; }, [video, setStreamingState, setIsStreaming]); return {isStreaming, streamingState}; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/toolbar/useToolbarTabs.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {toolbarTabIndex} from '@/demo/atoms'; import {useAtom} from 'jotai'; type State = [tabIndex: number, setTabIndex: (tabIndex: number) => void]; export default function useToolbarTabs(): State { return useAtom(toolbarTabIndex); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/useFunctionThrottle.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import {useCallback, useState} from 'react';

type ThrottleOptions = {
  enableThrottling?: boolean;
};

type State = {
  isThrottled: boolean;
  maxThrottles: boolean;
  throttle: (callback: () => void, options?: ThrottleOptions) => void;
};

export default function useFunctionThrottle(
  initialDelay: number,
  numThrottles: number,
): State {
  const [isThrottled, setIsThrottled] = useState(false);
  const [lastClickTime, setLastClickTime] = useState<number | null>(null);
  const [numTimesThrottled, setNumTimesThrottled] = useState(1);

  /**
   * The following function's callback gets throttled when the time between two
   * executions is less than a threshold.
   *
   * The threshold is calculated linearly by multiplying the initial delay
   * and the number of times the button has been throttled. The button can be
   * throttled up to numThrottles times.
   *
   * The function has an optional flag - enableThrottling - which allows a
   * callsite to optionally disable throttling. This is useful in cases where
   * throttling may not be necessary. (e.g. for the Track & Play button, we
   * would only like to throttle after a stream is aborted.)
   */
  const throttle = useCallback(
    (
      callback: () => void,
      options: ThrottleOptions = {
        enableThrottling: true,
      },
    ) => {
      if (isThrottled) {
        return;
      }
      const currentTime = Date.now();
      if (lastClickTime == null) {
        callback();
        setLastClickTime(currentTime);
        return;
      }
      const timeBetweenClicks = currentTime - lastClickTime;
      const delay = initialDelay * numTimesThrottled;
      const shouldThrottle =
        options.enableThrottling && delay > timeBetweenClicks;
      if (shouldThrottle) {
        setIsThrottled(true);
        setTimeout(() => {
          setIsThrottled(false);
        }, delay);
        setNumTimesThrottled(prev => {
          return prev === numThrottles ?
numThrottles : prev + 1; }); } callback(); setLastClickTime(currentTime); }, [initialDelay, numThrottles, isThrottled, lastClickTime, numTimesThrottled], ); return { isThrottled, maxThrottles: numTimesThrottled === numThrottles, throttle, }; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/ChangeVideoModal.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
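The linear-backoff rule inside useFunctionThrottle can be exercised outside React. Below is a minimal sketch under stated assumptions: `createLinearThrottle` and the injectable `now` clock are hypothetical names, and unlike the hook this version has no `isThrottled` lockout and does not re-enable itself via `setTimeout`; it only reports whether a call crossed the growing threshold.

```typescript
// Hypothetical, framework-free sketch of the hook's backoff rule.
type Clock = () => number;

function createLinearThrottle(
  initialDelay: number,
  numThrottles: number,
  now: Clock = Date.now,
) {
  let lastCallTime: number | null = null;
  let numTimesThrottled = 1;

  // Returns true when the call was flagged as throttled.
  return function throttle(callback: () => void): boolean {
    const currentTime = now();
    if (lastCallTime == null) {
      // First call is never throttled.
      lastCallTime = currentTime;
      callback();
      return false;
    }
    // Threshold grows linearly with the number of times already throttled,
    // capped at numThrottles (mirrors the setNumTimesThrottled updater).
    const delay = initialDelay * numTimesThrottled;
    const throttled = delay > currentTime - lastCallTime;
    if (throttled) {
      numTimesThrottled =
        numTimesThrottled === numThrottles
          ? numThrottles
          : numTimesThrottled + 1;
    }
    callback();
    lastCallTime = currentTime;
    return throttled;
  };
}
```

With `initialDelay = 100` and calls at t = 0, 50, 300, the second call is flagged (gap 50 < 100) and the third is not (gap 250 ≥ doubled threshold 200).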
*/ import type {VideoGalleryTriggerProps} from '@/common/components/gallery/DemoVideoGalleryModal'; import DemoVideoGalleryModal from '@/common/components/gallery/DemoVideoGalleryModal'; import useVideo from '@/common/components/video/editor/useVideo'; import Logger from '@/common/logger/Logger'; import {isStreamingAtom, uploadingStateAtom, VideoData} from '@/demo/atoms'; import {useAtomValue, useSetAtom} from 'jotai'; import {ComponentType, useCallback} from 'react'; import {useNavigate} from 'react-router-dom'; type Props = { videoGalleryModalTrigger?: ComponentType; showUploadInGallery?: boolean; onChangeVideo?: () => void; }; export default function ChangeVideoModal({ videoGalleryModalTrigger: VideoGalleryModalTriggerComponent, showUploadInGallery = true, onChangeVideo, }: Props) { const isStreaming = useAtomValue(isStreamingAtom); const setUploadingState = useSetAtom(uploadingStateAtom); const video = useVideo(); const navigate = useNavigate(); const handlePause = useCallback(() => { video?.pause(); }, [video]); function handlePauseOrAbortVideo() { if (isStreaming) { video?.abortStreamMasks(); } else { handlePause(); } } function handleSwitchVideos(video: VideoData) { // Retain any search parameter navigate( { pathname: location.pathname, search: location.search, }, { state: { video, }, }, ); onChangeVideo?.(); } function handleUploadVideoError(error: Error) { setUploadingState('error'); Logger.error(error); } return ( ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/EventEmitter.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

type EventMap = {
  type: keyof WorkerEventMap;
  listener: (ev: WorkerEventMap[keyof WorkerEventMap]) => unknown;
};

export class EventEmitter {
  listeners: EventMap[] = [];

  trigger<K extends keyof WorkerEventMap>(type: K, ev: WorkerEventMap[K]) {
    this.listeners
      .filter(listener => type === listener.type)
      .forEach(({listener}) => {
        setTimeout(() => listener(ev), 0);
      });
  }

  addEventListener<K extends keyof WorkerEventMap>(
    type: K,
    listener: (ev: WorkerEventMap[K]) => unknown,
  ): void {
    // @ts-expect-error Incorrect typing. Not sure how to correctly type it
    this.listeners.push({type, listener});
  }

  removeEventListener<K extends keyof WorkerEventMap>(
    type: K,
    listener: (ev: WorkerEventMap[K]) => unknown,
  ): void {
    this.listeners = this.listeners.filter(
      existingListener =>
        !(
          existingListener.type === type &&
          existingListener.listener === listener
        ),
    );
  }

  destroy() {
    this.listeners.length = 0;
  }
}

================================================
FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/Video.tsx
================================================
/**
 * Copyright (c) Meta Platforms, Inc. and affiliates.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
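The listener-list pattern in EventEmitter.ts above can be illustrated with a self-contained sketch. `DemoEmitter` and `DemoEventMap` are hypothetical names introduced here; unlike the class above, which defers each listener with `setTimeout(..., 0)`, this sketch dispatches synchronously so the bookkeeping is easy to observe.

```typescript
// Hypothetical usage sketch of the typed listener-list pattern, with a
// custom event map instead of the DOM WorkerEventMap.
type DemoEventMap = {
  progress: {percent: number};
  done: {ok: boolean};
};

type DemoListener = (ev: DemoEventMap[keyof DemoEventMap]) => unknown;

class DemoEmitter {
  private listeners: {type: keyof DemoEventMap; listener: DemoListener}[] = [];

  trigger<K extends keyof DemoEventMap>(type: K, ev: DemoEventMap[K]) {
    // Synchronous dispatch; the original defers via setTimeout(..., 0).
    this.listeners
      .filter(l => l.type === type)
      .forEach(({listener}) => listener(ev));
  }

  addEventListener<K extends keyof DemoEventMap>(
    type: K,
    listener: (ev: DemoEventMap[K]) => unknown,
  ) {
    this.listeners.push({type, listener: listener as DemoListener});
  }

  removeEventListener<K extends keyof DemoEventMap>(
    type: K,
    listener: (ev: DemoEventMap[K]) => unknown,
  ) {
    // Remove by identity of both the event type and the listener function.
    this.listeners = this.listeners.filter(
      l => !(l.type === type && l.listener === listener),
    );
  }
}
```

Because removal compares listener identity, callers must pass the same function reference to `removeEventListener` that they registered, which is why the React effects in this codebase name their handlers instead of using inline arrows.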
* See the License for the specific language governing permissions and * limitations under the License. */ import {BaseTracklet, SegmentationPoint} from '@/common/tracker/Tracker'; import {TrackerOptions, Trackers} from '@/common/tracker/Trackers'; import {PauseFilled, PlayFilledAlt} from '@carbon/icons-react'; import stylex, {StyleXStyles} from '@stylexjs/stylex'; import { CSSProperties, forwardRef, useEffect, useImperativeHandle, useMemo, useRef, } from 'react'; import {Button} from 'react-daisyui'; import {EffectIndex, Effects} from '@/common/components/video/effects/Effects'; import useReportError from '@/common/error/useReportError'; import Logger from '@/common/logger/Logger'; import {isPlayingAtom, isVideoLoadingAtom} from '@/demo/atoms'; import {color} from '@/theme/tokens.stylex'; import {useAtom} from 'jotai'; import useResizeObserver from 'use-resize-observer'; import VideoLoadingOverlay from './VideoLoadingOverlay'; import { StreamingStateUpdateEvent, VideoWorkerEventMap, } from './VideoWorkerBridge'; import {EffectOptions} from './effects/Effect'; import useVideoWorker from './useVideoWorker'; const styles = stylex.create({ container: { position: 'relative', width: '100%', height: '100%', }, canvasContainer: { display: 'flex', justifyContent: 'center', alignItems: 'center', backgroundColor: color['gray-800'], width: '100%', height: '100%', }, controls: { position: 'absolute', bottom: 0, left: 0, width: '100%', padding: 8, background: 'linear-gradient(#00000000, #000000ff)', }, controlButton: { color: 'white', }, }); type Props = { src: string; width: number; height: number; loading?: boolean; containerStyle?: StyleXStyles<{ position: CSSProperties['position']; }>; canvasStyle?: StyleXStyles<{ width: CSSProperties['width']; }>; controls?: boolean; createVideoWorker?: () => Worker; }; export type VideoRef = { getCanvas(): HTMLCanvasElement | null; get width(): number; get height(): number; get frame(): number; set frame(index: number); get 
numberOfFrames(): number;
  play(): void;
  pause(): void;
  stop(): void;
  previousFrame(): void;
  nextFrame(): void;
  setEffect(
    name: keyof Effects,
    index: EffectIndex,
    options?: EffectOptions,
  ): void;
  encode(): void;
  streamMasks(): void;
  abortStreamMasks(): Promise<void>;
  addEventListener<K extends keyof VideoWorkerEventMap>(
    type: K,
    listener: (ev: VideoWorkerEventMap[K]) => unknown,
  ): void;
  removeEventListener<K extends keyof VideoWorkerEventMap>(
    type: K,
    listener: (ev: VideoWorkerEventMap[K]) => unknown,
  ): void;
  createFilmstrip(width: number, height: number): Promise<ImageBitmap>;
  // Tracker
  initializeTracker(name: keyof Trackers, options?: TrackerOptions): void;
  startSession(videoUrl: string): Promise<string | null>;
  closeSession(): void;
  logAnnotations(): void;
  createTracklet(): Promise<BaseTracklet>;
  deleteTracklet(trackletId: number): Promise<void>;
  updatePoints(trackletId: number, points: SegmentationPoint[]): void;
  clearPointsInVideo(): Promise<boolean>;
  getWorker_ONLY_USE_WITH_CAUTION(): Worker;
};

export default forwardRef<VideoRef, Props>(function Video(
  {
    src,
    width,
    height,
    containerStyle,
    canvasStyle,
    createVideoWorker,
    controls = false,
    loading = false,
  },
  ref,
) {
  const reportError = useReportError();
  const canvasRef = useRef<HTMLCanvasElement | null>(null);
  const [isPlaying, setIsPlaying] = useAtom(isPlayingAtom);
  const [isVideoLoading, setIsVideoLoading] = useAtom(isVideoLoadingAtom);
  const bridge = useVideoWorker(src, canvasRef, {
    createVideoWorker,
  });

  const {
    ref: resizeObserverRef,
    width: resizeWidth = 1,
    height: resizeHeight = 1,
  } = useResizeObserver();

  const canvasHeight = useMemo(() => {
    const resizeRatio = resizeWidth / width;
    return Math.min(height * resizeRatio, resizeHeight);
  }, [resizeWidth, height, width, resizeHeight]);

  useImperativeHandle(
    ref,
    () => ({
      getCanvas() {
        return canvasRef.current;
      },
      get width() {
        return bridge.width;
      },
      get height() {
        return bridge.height;
      },
      get frame() {
        return bridge.frame;
      },
      set frame(index: number) {
        bridge.frame = index;
      },
      get numberOfFrames() {
        return bridge.numberOfFrames;
      },
      play(): void {
        bridge.play();
      },
      pause(): void {
        bridge.pause();
      },
      stop(): void {
bridge.stop(); }, previousFrame(): void { bridge.previousFrame(); }, nextFrame(): void { bridge.nextFrame(); }, setEffect( name: keyof Effects, index: number, options?: EffectOptions, ): void { bridge.setEffect(name, index, options); }, encode(): void { bridge.encode(); }, streamMasks(): void { bridge.streamMasks(); }, abortStreamMasks(): Promise { return bridge.abortStreamMasks(); }, addEventListener( type: K, listener: (ev: VideoWorkerEventMap[K]) => unknown, ): void { bridge.addEventListener(type, listener); }, removeEventListener( type: K, listener: (ev: VideoWorkerEventMap[K]) => unknown, ): void { bridge.removeEventListener(type, listener); }, createFilmstrip(width: number, height: number): Promise { return bridge.createFilmstrip(width, height); }, // Tracker initializeTracker(name: keyof Trackers, options: TrackerOptions): void { bridge.initializeTracker(name, options); }, startSession(videoUrl: string): Promise { return bridge.startSession(videoUrl); }, closeSession(): void { bridge.closeSession(); }, logAnnotations(): void { bridge.logAnnotations(); }, createTracklet(): Promise { return bridge.createTracklet(); }, deleteTracklet(trackletId: number): Promise { return bridge.deleteTracklet(trackletId); }, updatePoints(trackletId: number, points: SegmentationPoint[]): void { bridge.updatePoints(trackletId, points); }, clearPointsInVideo(): Promise { return bridge.clearPointsInVideo(); }, getWorker_ONLY_USE_WITH_CAUTION() { return bridge.getWorker_ONLY_USE_WITH_CAUTION(); }, }), [bridge], ); // Handle video playback events (get playback state to main thread) useEffect(() => { let isPlaying = false; function onFocus() { // Workaround for Safari where the video frame renders black on // unknown events. Trigger re-render frame on focus. if (!isPlaying) { bridge.goToFrame(bridge.frame); } } function onVisibilityChange() { // Workaround for Safari where the video frame renders black on // visibility change hidden. 
Returning to visible shows a black // frame instead of rendering the current frame. if (document.visibilityState === 'visible' && !isPlaying) { bridge.goToFrame(bridge.frame); } } function onError(event: ErrorEvent) { const error = event.error; Logger.error(error); reportError(error); } function onPlay() { isPlaying = true; setIsPlaying(true); } function onPause() { isPlaying = false; setIsPlaying(false); } function onStreamingDone(event: StreamingStateUpdateEvent) { // continue to play after streaming is done (state is "full") if (event.state === 'full') { bridge.play(); } } function onLoadStart() { setIsVideoLoading(true); } function onDecodeStart() { setIsVideoLoading(false); } window.addEventListener('focus', onFocus); window.addEventListener('visibilitychange', onVisibilityChange); bridge.addEventListener('error', onError); bridge.addEventListener('play', onPlay); bridge.addEventListener('pause', onPause); bridge.addEventListener('streamingStateUpdate', onStreamingDone); bridge.addEventListener('loadstart', onLoadStart); bridge.addEventListener('decode', onDecodeStart); return () => { window.removeEventListener('focus', onFocus); window.removeEventListener('visibilitychange', onVisibilityChange); bridge.removeEventListener('error', onError); bridge.removeEventListener('play', onPlay); bridge.removeEventListener('pause', onPause); bridge.removeEventListener('streamingStateUpdate', onStreamingDone); bridge.removeEventListener('loadstart', onLoadStart); bridge.removeEventListener('decode', onDecodeStart); }; }, [bridge, reportError, setIsPlaying, setIsVideoLoading]); return (
{(isVideoLoading || loading) && }
{controls && (
)}
); }); ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/VideoFilmstripWithPlayback.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import PlaybackButton from '@/common/components/button/PlaybackButton'; import VideoFilmstrip from '@/common/components/video/filmstrip/VideoFilmstrip'; import {spacing, w} from '@/theme/tokens.stylex'; import stylex from '@stylexjs/stylex'; const styles = stylex.create({ container: { display: 'flex', alignItems: 'end', gap: spacing[4], paddingHorizontal: spacing[4], width: '100%', }, playbackButtonContainer: { display: 'flex', justifyContent: 'center', alignItems: 'center', width: w[12], height: w[12], }, filmstripContainer: { flexGrow: 1, }, }); export default function VideoFilmstripWithPlayback() { return (
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/VideoLoadingOverlay.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {fontSize, fontWeight, spacing} from '@/theme/tokens.stylex'; import stylex from '@stylexjs/stylex'; import {Loading} from 'react-daisyui'; const styles = stylex.create({ overlay: { position: 'absolute', width: '100%', height: '100%', background: 'rgba(0,0,0,0.5)', }, indicatorContainer: { position: 'absolute', top: '50%', left: '50%', transform: 'translate(-50%, -50%)', display: 'flex', alignItems: 'center', gap: spacing[4], color: 'white', }, indicatorText: { color: 'white', fontSize: fontSize['sm'], fontWeight: fontWeight['medium'], }, }); type Props = { label?: string; }; export default function VideoLoadingOverlay({label}: Props) { return (
{label ?? 'Loading video...'}
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/VideoWorker.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {registerSerializableConstructors} from '@/common/error/ErrorSerializationUtils'; import {Tracker} from '@/common/tracker/Tracker'; import {TrackerRequestMessageEvent} from '@/common/tracker/TrackerTypes'; import {TRACKER_MAPPING} from '@/common/tracker/Trackers'; import {serializeError} from 'serialize-error'; import VideoWorkerContext from './VideoWorkerContext'; import { ErrorResponse, VideoWorkerRequestMessageEvent, } from './VideoWorkerTypes'; registerSerializableConstructors(); const context = new VideoWorkerContext(); let tracker: Tracker | null = null; let statsEnabled = false; self.addEventListener( 'message', async ( event: VideoWorkerRequestMessageEvent | TrackerRequestMessageEvent, ) => { try { switch (event.data.action) { // Initialize context case 'setCanvas': context.setCanvas(event.data.canvas); break; case 'setSource': context.setSource(event.data.source); break; // Playback case 'play': context.play(); break; case 'pause': context.pause(); break; case 'stop': context.stop(); break; case 'frameUpdate': context.goToFrame(event.data.index); break; // Filmstrip case 'filmstrip': { const {width, height} = event.data; await context.createFilmstrip(width, 
height); break; } // Effects case 'setEffect': { const {name, index, options} = event.data; await context.setEffect(name, index, options); break; } // Encode case 'encode': { await context.encode(); break; } case 'enableStats': { statsEnabled = true; context.enableStats(); tracker?.enableStats(); break; } // Tracker case 'initializeTracker': { const {name, options} = event.data; const Tracker = TRACKER_MAPPING[name]; // Update the endpoint for the streaming API tracker = new Tracker(context, options); if (statsEnabled) { tracker.enableStats(); } break; } case 'startSession': { const {videoUrl} = event.data; await tracker?.startSession(videoUrl); break; } case 'createTracklet': tracker?.createTracklet(); break; case 'deleteTracklet': await tracker?.deleteTracklet(event.data.trackletId); break; case 'closeSession': tracker?.closeSession(); break; case 'updatePoints': { const {frameIndex, objectId, points} = event.data; context.allowEffectAnimation(true, objectId, points); await tracker?.updatePoints(frameIndex, objectId, points); break; } case 'clearPointsInFrame': { const {frameIndex, objectId} = event.data; await tracker?.clearPointsInFrame(frameIndex, objectId); break; } case 'clearPointsInVideo': await tracker?.clearPointsInVideo(); break; case 'streamMasks': { const {frameIndex} = event.data; context.allowEffectAnimation(false); await tracker?.streamMasks(frameIndex); break; } case 'abortStreamMasks': tracker?.abortStreamMasks(); break; } } catch (error) { const serializedError = serializeError(error); const errorResponse: ErrorResponse = { action: 'error', error: serializedError, }; self.postMessage(errorResponse); } }, ); ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/VideoWorkerBridge.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {EffectIndex, Effects} from '@/common/components/video/effects/Effects'; import {registerSerializableConstructors} from '@/common/error/ErrorSerializationUtils'; import { BaseTracklet, SegmentationPoint, StreamingState, } from '@/common/tracker/Tracker'; import { AbortStreamMasksRequest, AddPointsResponse, ClearPointsInFrameRequest, ClearPointsInVideoRequest, ClearPointsInVideoResponse, CloseSessionRequest, CreateTrackletRequest, DeleteTrackletRequest, InitializeTrackerRequest, LogAnnotationsRequest, SessionStartFailedResponse, SessionStartedResponse, StartSessionRequest, StreamMasksRequest, StreamingStateUpdateResponse, TrackerRequest, TrackerResponseMessageEvent, TrackletCreatedResponse, TrackletDeletedResponse, UpdatePointsRequest, } from '@/common/tracker/TrackerTypes'; import {TrackerOptions, Trackers} from '@/common/tracker/Trackers'; import {MP4ArrayBuffer} from 'mp4box'; import {deserializeError, type ErrorObject} from 'serialize-error'; import {EventEmitter} from './EventEmitter'; import { EncodeVideoRequest, FilmstripRequest, FilmstripResponse, FrameUpdateRequest, PauseRequest, PlayRequest, SetCanvasRequest, SetEffectRequest, SetSourceRequest, StopRequest, VideoWorkerRequest, VideoWorkerResponseMessageEvent, } from './VideoWorkerTypes'; import {EffectOptions} from './effects/Effect'; registerSerializableConstructors(); export type DecodeEvent = { totalFrames: number; numFrames: number; fps: number; width: number; 
height: number; done: boolean; }; export type LoadStartEvent = unknown; export type EffectUpdateEvent = { name: keyof Effects; index: EffectIndex; variant: number; numVariants: number; }; export type EncodingStateUpdateEvent = { progress: number; }; export type EncodingCompletedEvent = { file: MP4ArrayBuffer; }; export interface PlayEvent {} export interface PauseEvent {} export interface FilmstripEvent { filmstrip: ImageBitmap; } export interface FrameUpdateEvent { index: number; } export interface SessionStartedEvent { sessionId: string; } export interface SessionStartFailedEvent {} export interface TrackletCreatedEvent { // Do not send masks between workers and main thread because they are huge, // and sending them would eventually slow down the main thread. tracklet: BaseTracklet; } export interface TrackletsEvent { // Do not send masks between workers and main thread because they are huge, // and sending them would eventually slow down the main thread. tracklets: BaseTracklet[]; } export interface TrackletDeletedEvent { isSuccessful: boolean; } export interface AddPointsEvent { isSuccessful: boolean; } export interface ClearPointsInVideoEvent { isSuccessful: boolean; } export interface StreamingStartedEvent {} export interface StreamingCompletedEvent {} export interface StreamingStateUpdateEvent { state: StreamingState; } export interface RenderingErrorEvent { error: ErrorObject; } export interface VideoWorkerEventMap { error: ErrorEvent; decode: DecodeEvent; encodingStateUpdate: EncodingStateUpdateEvent; encodingCompleted: EncodingCompletedEvent; play: PlayEvent; pause: PauseEvent; filmstrip: FilmstripEvent; frameUpdate: FrameUpdateEvent; sessionStarted: SessionStartedEvent; sessionStartFailed: SessionStartFailedEvent; trackletCreated: TrackletCreatedEvent; trackletsUpdated: TrackletsEvent; trackletDeleted: TrackletDeletedEvent; addPoints: AddPointsEvent; clearPointsInVideo: ClearPointsInVideoEvent; streamingStarted: StreamingStartedEvent; streamingCompleted: 
StreamingCompletedEvent; streamingStateUpdate: StreamingStateUpdateEvent; // HTMLVideoElement events https://developer.mozilla.org/en-US/docs/Web/HTML/Element/video#events loadstart: LoadStartEvent; effectUpdate: EffectUpdateEvent; renderingError: RenderingErrorEvent; } type Metadata = { totalFrames: number; fps: number; width: number; height: number; }; export default class VideoWorkerBridge extends EventEmitter { static create(workerFactory: () => Worker) { const worker = workerFactory(); return new VideoWorkerBridge(worker); } protected worker: Worker; private metadata: Metadata | null = null; private frameIndex: number = 0; private _sessionId: string | null = null; public get sessionId() { return this._sessionId; } public get width() { return this.metadata?.width ?? 0; } public get height() { return this.metadata?.height ?? 0; } public get numberOfFrames() { return this.metadata?.totalFrames ?? 0; } public get fps() { return this.metadata?.fps ?? 0; } public get frame() { return this.frameIndex; } constructor(worker: Worker) { super(); this.worker = worker; worker.addEventListener( 'message', ( event: VideoWorkerResponseMessageEvent | TrackerResponseMessageEvent, ) => { switch (event.data.action) { case 'error': // Deserialize error before triggering the event event.data.error = deserializeError(event.data.error); break; case 'decode': this.metadata = event.data; break; case 'frameUpdate': this.frameIndex = event.data.index; break; case 'sessionStarted': this._sessionId = event.data.sessionId; break; } this.trigger(event.data.action, event.data); }, ); } public setCanvas(canvas: HTMLCanvasElement): void { const offscreenCanvas = canvas.transferControlToOffscreen(); this.sendRequest( 'setCanvas', { canvas: offscreenCanvas, }, [offscreenCanvas], ); } public setSource(source: string): void { this.sendRequest('setSource', { source, }); } public terminate(): void { super.destroy(); this.worker.terminate(); } public play(): void { this.sendRequest('play'); } public 
pause(): void {
    this.sendRequest('pause');
  }

  public stop(): void {
    this.sendRequest('stop');
  }

  public goToFrame(index: number): void {
    this.sendRequest('frameUpdate', {
      index,
    });
  }

  public previousFrame(): void {
    const index = Math.max(0, this.frameIndex - 1);
    this.goToFrame(index);
  }

  public nextFrame(): void {
    const index = Math.min(this.frameIndex + 1, this.numberOfFrames - 1);
    this.goToFrame(index);
  }

  public set frame(index: number) {
    this.sendRequest('frameUpdate', {index});
  }

  createFilmstrip(width: number, height: number): Promise<ImageBitmap> {
    return new Promise((resolve, _reject) => {
      const handleFilmstripResponse = (
        event: MessageEvent<FilmstripResponse>,
      ) => {
        if (event.data.action === 'filmstrip') {
          this.worker.removeEventListener('message', handleFilmstripResponse);
          resolve(event.data.filmstrip);
        }
      };
      this.worker.addEventListener('message', handleFilmstripResponse);
      this.sendRequest('filmstrip', {
        width,
        height,
      });
    });
  }

  setEffect(name: keyof Effects, index: EffectIndex, options?: EffectOptions) {
    this.sendRequest('setEffect', {
      name,
      index,
      options,
    });
  }

  encode(): void {
    this.sendRequest('encode');
  }

  initializeTracker(name: keyof Trackers, options: TrackerOptions): void {
    this.sendRequest('initializeTracker', {
      name,
      options,
    });
  }

  startSession(videoUrl: string): Promise<string | null> {
    return new Promise(resolve => {
      const handleResponse = (
        event: MessageEvent<
          SessionStartedResponse | SessionStartFailedResponse
        >,
      ) => {
        if (event.data.action === 'sessionStarted') {
          this.worker.removeEventListener('message', handleResponse);
          resolve(event.data.sessionId);
        }
        if (event.data.action === 'sessionStartFailed') {
          this.worker.removeEventListener('message', handleResponse);
          resolve(null);
        }
      };
      this.worker.addEventListener('message', handleResponse);
      this.sendRequest('startSession', {
        videoUrl,
      });
    });
  }

  closeSession(): void {
    this.sendRequest('closeSession');
  }

  logAnnotations(): void {
    this.sendRequest('logAnnotations');
  }

  createTracklet(): Promise<BaseTracklet> {
    return new Promise(resolve => {
      const handleResponse = (
        event: MessageEvent<TrackletCreatedResponse>,
      ) => {
        if (event.data.action === 'trackletCreated') {
          this.worker.removeEventListener('message', handleResponse);
          resolve(event.data.tracklet);
        }
      };
      this.worker.addEventListener('message', handleResponse);
      this.sendRequest('createTracklet');
    });
  }

  deleteTracklet(trackletId: number): Promise<void> {
    return new Promise((resolve, reject) => {
      const handleResponse = (
        event: MessageEvent<TrackletDeletedResponse>,
      ) => {
        if (event.data.action === 'trackletDeleted') {
          this.worker.removeEventListener('message', handleResponse);
          if (event.data.isSuccessful) {
            resolve();
          } else {
            reject(`could not delete tracklet ${trackletId}`);
          }
        }
      };
      this.worker.addEventListener('message', handleResponse);
      this.sendRequest('deleteTracklet', {trackletId});
    });
  }

  updatePoints(
    objectId: number,
    points: SegmentationPoint[],
  ): Promise<boolean> {
    return new Promise(resolve => {
      const handleResponse = (event: MessageEvent<AddPointsResponse>) => {
        if (event.data.action === 'addPoints') {
          this.worker.removeEventListener('message', handleResponse);
          resolve(event.data.isSuccessful);
        }
      };
      this.worker.addEventListener('message', handleResponse);
      this.sendRequest('updatePoints', {
        frameIndex: this.frame,
        objectId,
        points,
      });
    });
  }

  clearPointsInFrame(objectId: number) {
    this.sendRequest('clearPointsInFrame', {
      frameIndex: this.frame,
      objectId,
    });
  }

  clearPointsInVideo(): Promise<boolean> {
    return new Promise(resolve => {
      const handleResponse = (
        event: MessageEvent<ClearPointsInVideoResponse>,
      ) => {
        if (event.data.action === 'clearPointsInVideo') {
          this.worker.removeEventListener('message', handleResponse);
          resolve(event.data.isSuccessful);
        }
      };
      this.worker.addEventListener('message', handleResponse);
      this.sendRequest('clearPointsInVideo');
    });
  }

  streamMasks(): void {
    this.sendRequest('streamMasks', {
      frameIndex: this.frame,
    });
  }

  abortStreamMasks(): Promise<void> {
    return new Promise(resolve => {
      const handleAbortResponse = (
        event: MessageEvent<StreamingStateUpdateResponse>,
      ) => {
        if (
          event.data.action === 'streamingStateUpdate' &&
          event.data.state === 'aborted'
        ) {
this.worker.removeEventListener('message', handleAbortResponse); resolve(); } }; this.worker.addEventListener('message', handleAbortResponse); this.sendRequest('abortStreamMasks'); }); } getWorker_ONLY_USE_WITH_CAUTION(): Worker { return this.worker; } /** * Convenience function for a typed postMessage. * * @param action Video worker action * @param message Actual payload * @param transfer Any object that should be transferred instead of cloned */ protected sendRequest<T extends VideoWorkerRequest>( action: T['action'], payload?: Omit<T, 'action'>, transfer?: Transferable[], ) { this.worker.postMessage( { action, ...payload, }, { transfer, }, ); } // // Override EventEmitter // addEventListener<K extends keyof WorkerEventMap>( // type: K, // listener: (ev: WorkerEventMap[K]) => unknown, // ): void { // switch (type) { // case 'frameUpdate': // { // const event: FrameUpdateEvent = { // index: this.frameIndex, // }; // // @ts-expect-error Incorrect typing. Not sure how to correctly type it // listener(event); // } // break; // case 'sessionStarted': { // if (this.sessionId !== null) { // const event: SessionStartedEvent = { // sessionId: this.sessionId, // }; // // @ts-expect-error Incorrect typing. Not sure how to correctly type it // listener(event); // } // } // } // super.addEventListener(type, listener); // } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/VideoWorkerContext.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. */ import { DecodedVideo, ImageFrame, decodeStream, } from '@/common/codecs/VideoDecoder'; import {encode as encodeVideo} from '@/common/codecs/VideoEncoder'; import { Effect, EffectActionPoint, EffectFrameContext, EffectOptions, } from '@/common/components/video/effects/Effect'; import AllEffects, { EffectIndex, Effects, } from '@/common/components/video/effects/Effects'; import Logger from '@/common/logger/Logger'; import {Mask, SegmentationPoint, Tracklet} from '@/common/tracker/Tracker'; import {streamFile} from '@/common/utils/FileUtils'; import {Stats} from '@/debug/stats/Stats'; import {VIDEO_WATERMARK_TEXT} from '@/demo/DemoConfig'; import CreateFilmstripError from '@/graphql/errors/CreateFilmstripError'; import DrawFrameError from '@/graphql/errors/DrawFrameError'; import WebGLContextError from '@/graphql/errors/WebGLContextError'; import {RLEObject} from '@/jscocotools/mask'; import invariant from 'invariant'; import {CanvasForm} from 'pts'; import {serializeError} from 'serialize-error'; import { DecodeResponse, EffectUpdateResponse, EncodingCompletedResponse, EncodingStateUpdateResponse, FilmstripResponse, FrameUpdateResponse, PauseRequest, PlayRequest, RenderingErrorResponse, VideoWorkerResponse, } from './VideoWorkerTypes'; function getEvenlySpacedItems(decodedVideo: DecodedVideo, x: number) { const p = Math.floor(decodedVideo.numFrames / Math.max(1, x - 1)); const middleFrames = decodedVideo.frames .slice(p, p * x) .filter(function (_, i) { return 0 == i % p; }); return [ decodedVideo.frames[0], ...middleFrames, decodedVideo.frames[decodedVideo.numFrames - 1], ]; } export type FrameInfo = { tracklet: Tracklet; mask: Mask; }; const WATERMARK_BOX_HORIZONTAL_PADDING = 10; const WATERMARK_BOX_VERTICAL_PADDING = 10; export type VideoStats = { fps?: Stats; videoFps?: Stats; total?: Stats; effect0?: Stats; effect1?: Stats; frameBmp?: Stats; maskBmp?: 
Stats; memory?: Stats; }; export default class VideoWorkerContext { private _canvas: OffscreenCanvas | null = null; private _stats: VideoStats = {}; private _ctx: OffscreenCanvasRenderingContext2D | null = null; private _form: CanvasForm | null = null; private _decodedVideo: DecodedVideo | null = null; private _frameIndex: number = 0; private _isPlaying: boolean = false; private _playbackRAFHandle: number | null = null; private _playbackTimeoutHandle: NodeJS.Timeout | null = null; private _isDrawing: boolean = false; private _glObjects: WebGL2RenderingContext | null = null; private _glBackground: WebGL2RenderingContext | null = null; private _canvasHighlights: OffscreenCanvas | null = null; private _canvasBackground: OffscreenCanvas | null = null; private _allowAnimation: boolean = false; private _currentSegmetationPoint: EffectActionPoint | null = null; private _effects: Effect[]; private _tracklets: Tracklet[] = []; public get width(): number { return this._decodedVideo?.width ?? 0; } public get height(): number { return this._decodedVideo?.height ?? 0; } public get frameIndex(): number { return this._frameIndex; } public get currentFrame(): VideoFrame | null { return this._decodedVideo?.frames[this._frameIndex].bitmap ?? null; } constructor() { this._effects = [ AllEffects.Original, // Image as background AllEffects.Overlay, // Masks on top ]; // Loading watermark fonts. This is going to be async, but by the time of // video encoding, the fonts should be available. this._loadWatermarkFonts(); } private initializeWebGLContext(width: number, height: number): void { // Given that we use highlight and background effects as layers, // we need to create two WebGL contexts, one for each set. // To avoid memory leaks and too many active contexts, // these contexts must be re-used over the lifecycle of the session. 
if (this._canvasHighlights == null && this._glObjects == null) { this._canvasHighlights = new OffscreenCanvas(width, height); this._glObjects = this._canvasHighlights.getContext('webgl2'); this._canvasHighlights.addEventListener( 'webglcontextlost', event => { event.preventDefault(); this._sendRenderingError( new WebGLContextError('WebGL context lost.'), ); }, false, ); } else if ( this._canvasHighlights != null && (this._canvasHighlights.width !== width || this._canvasHighlights.height !== height) ) { // Resize canvas and webgl viewport this._canvasHighlights.width = width; this._canvasHighlights.height = height; if (this._glObjects != null) { this._glObjects.viewport(0, 0, width, height); } } if (this._canvasBackground == null && this._glBackground == null) { this._canvasBackground = new OffscreenCanvas(width, height); this._glBackground = this._canvasBackground.getContext('webgl2'); this._canvasBackground.addEventListener( 'webglcontextlost', event => { event.preventDefault(); this._sendRenderingError( new WebGLContextError('WebGL context lost.'), ); }, false, ); } else if ( this._canvasBackground != null && (this._canvasBackground.width != width || this._canvasBackground.height != height) ) { // Resize canvas and webgl viewport this._canvasBackground.width = width; this._canvasBackground.height = height; if (this._glBackground != null) { this._glBackground.viewport(0, 0, width, height); } } } public setCanvas(canvas: OffscreenCanvas) { this._canvas = canvas; this._ctx = canvas.getContext('2d'); if (this._ctx == null) { throw new Error('could not initialize drawing context'); } this._form = new CanvasForm(this._ctx); } public setSource(src: string) { this.close(); // Clear state of previous source. 
this.updateFrameIndex(0); this._tracklets = []; this._decodeVideo(src); } public goToFrame(index: number): void { // Cancel any ongoing render this._cancelRender(); this.updateFrameIndex(index); this._playbackRAFHandle = requestAnimationFrame(this._drawFrame.bind(this)); } public play(): void { // Video already playing if (this._isPlaying) { return; } // Cannot playback without frames if (this._decodedVideo === null) { throw new Error('no decoded video'); } const {numFrames, fps} = this._decodedVideo; const timePerFrame = 1000 / (fps ?? 30); let startTime: number | null = null; // The offset frame index compensates for cases where the video playback // does not start at frame index 0. const offsetFrameIndex = this._frameIndex; const updateFrame = (time: number) => { if (startTime === null) { startTime = time; } this._stats.fps?.begin(); const diff = time - startTime; const expectedFrame = (Math.floor(diff / timePerFrame) + offsetFrameIndex) % numFrames; if (this._frameIndex !== expectedFrame && !this._isDrawing) { // Update to the next expected frame this.updateFrameIndex(expectedFrame); this._drawFrame(); } this._playbackRAFHandle = requestAnimationFrame(updateFrame); this._stats.fps?.end(); }; this.updatePlayback(true); this._playbackRAFHandle = requestAnimationFrame(updateFrame); } public pause(): void { this.updatePlayback(false); this._cancelRender(); } public stop(): void { this.pause(); this.updateFrameIndex(0); } public async createFilmstrip(width: number, height: number): Promise<void> { if (width < 1 || height < 1) { Logger.warn( `Cannot create filmstrip because width ${width} or height ${height} is too small.`, ); return; } try { const canvas = new OffscreenCanvas(width, height); const ctx = canvas.getContext('2d'); if (this._decodedVideo !== null) { const scale = canvas.height / this._decodedVideo.height; const resizeWidth = this._decodedVideo.width * scale; const spacedFrames = getEvenlySpacedItems( this._decodedVideo, Math.ceil(canvas.width / resizeWidth), 
); spacedFrames.forEach((frame, idx) => { if (frame != null) { ctx?.drawImage( frame.bitmap, resizeWidth * idx, 0, resizeWidth, canvas.height, ); } }); } const filmstrip = await createImageBitmap(canvas); this.sendResponse( 'filmstrip', { filmstrip, }, [filmstrip], ); } catch { this._sendRenderingError( new CreateFilmstripError('Failed to create filmstrip'), ); } } public async setEffect( name: keyof Effects, index: EffectIndex, options?: EffectOptions, ): Promise<void> { const effect: Effect = AllEffects[name]; // The effect has changed. if (this._effects[index] !== effect) { // Effect changed. Cleanup old effect first. Effects are responsible for // cleaning up their memory. await this._effects[index].cleanup(); const offCanvas = index === EffectIndex.BACKGROUND ? this._canvasBackground : this._canvasHighlights; invariant(offCanvas != null, 'need OffscreenCanvas to render effects'); const webglContext = index === EffectIndex.BACKGROUND ? this._glBackground : this._glObjects; invariant(webglContext != null, 'need WebGL context to render effects'); // Initialize the effect. This can be used by effects to prepare // resources needed for rendering. If the video wasn't decoded yet, the // effect setup will happen in the _decodeVideo function. if (this._decodedVideo != null) { await effect.setup({ width: this._decodedVideo.width, height: this._decodedVideo.height, canvas: offCanvas, gl: webglContext, }); } } // Update effect if an already set effect was clicked again. This can happen // when there is a new variant of the effect. if (options != null) { await effect.update(options); } // Notify the frontend about the effect state including its variant. 
this.sendResponse('effectUpdate', { name, index, variant: effect.variant, numVariants: effect.numVariants, }); this._effects[index] = effect; this._playbackRAFHandle = requestAnimationFrame(this._drawFrame.bind(this)); } async encode() { const decodedVideo = this._decodedVideo; invariant( decodedVideo !== null, 'cannot encode video because there is no decoded video available', ); const canvas = new OffscreenCanvas(this.width, this.height); const ctx = canvas.getContext('2d', {willReadFrequently: true}); invariant( ctx !== null, 'cannot encode video because failed to construct offscreen canvas context', ); const form = new CanvasForm(ctx); const file = await encodeVideo( this.width, this.height, decodedVideo.frames.length, this._framesGenerator(decodedVideo, canvas, form), progress => { this.sendResponse('encodingStateUpdate', { progress, }); }, ); this.sendResponse( 'encodingCompleted', { file, }, [file], ); } private async *_framesGenerator( decodedVideo: DecodedVideo, canvas: OffscreenCanvas, form: CanvasForm, ): AsyncGenerator<ImageFrame> { const frames = decodedVideo.frames; for (let frameIndex = 0; frameIndex < frames.length; ++frameIndex) { await this._drawFrameImpl(form, frameIndex, true); const frame = frames[frameIndex]; const videoFrame = new VideoFrame(canvas, { timestamp: frame.bitmap.timestamp, }); yield { bitmap: videoFrame, timestamp: frame.timestamp, duration: frame.duration, }; videoFrame.close(); } } public enableStats() { this._stats.fps = new Stats('fps'); this._stats.videoFps = new Stats('fps', 'V'); this._stats.total = new Stats('ms', 'T'); this._stats.effect0 = new Stats('ms', 'B'); this._stats.effect1 = new Stats('ms', 'H'); this._stats.frameBmp = new Stats('ms', 'F'); this._stats.maskBmp = new Stats('ms', 'M'); this._stats.memory = new Stats('memory'); } public allowEffectAnimation( allow: boolean = true, objectId?: number, points?: SegmentationPoint[], ) { if (objectId != null && points != null && points.length) { const last_point_position = 
points[points.length - 1]; this._currentSegmetationPoint = { objectId, position: [last_point_position[0], last_point_position[1]], }; } if (!allow) { this._currentSegmetationPoint = null; } this._allowAnimation = allow; } public close(): void { // Clear any frame content this._ctx?.reset(); // Close frames of previously decoded video. this._decodedVideo?.frames.forEach(f => f.bitmap.close()); this._decodedVideo = null; } // TRACKER public updateTracklets( frameIndex: number, tracklets: Tracklet[], shouldGoToFrame: boolean = true, ): void { this._tracklets = tracklets; if (shouldGoToFrame) { this.goToFrame(frameIndex); } } public clearTrackletMasks(tracklet: Tracklet): void { this._tracklets = this._tracklets.filter(t => t.id != tracklet.id); } public clearMasks(): void { this._tracklets = []; } // PRIVATE FUNCTIONS private sendResponse<T extends VideoWorkerResponse>( action: T['action'], message?: Omit<T, 'action'>, transfer?: Transferable[], ): void { self.postMessage( { action, ...message, }, { transfer, }, ); } private async _decodeVideo(src: string): Promise<void> { const canvas = this._canvas; invariant(canvas != null, 'need canvas to render decoded video'); this.sendResponse('loadstart'); const fileStream = streamFile(src, { credentials: 'same-origin', cache: 'no-store', }); let renderedFirstFrame = false; this._decodedVideo = await decodeStream(fileStream, async progress => { const {fps, height, width, numFrames, frames} = progress; this._decodedVideo = progress; if (!renderedFirstFrame) { renderedFirstFrame = true; canvas.width = width; canvas.height = height; // Set WebGL contexts right after the first frame decoded this.initializeWebGLContext(width, height); // Initialize effect once first frame was decoded. for (const [i, effect] of this._effects.entries()) { const offCanvas = i === EffectIndex.BACKGROUND ? this._canvasBackground : this._canvasHighlights; invariant(offCanvas != null, 'need canvas to render effects'); const webglContext = i === EffectIndex.BACKGROUND ? 
this._glBackground : this._glObjects; invariant( webglContext != null, 'need WebGL context to render effects', ); await effect.setup({ width, height, canvas: offCanvas, gl: webglContext, }); } // Need to render frame immediately. Cannot go through // requestAnimationFrame because then rendering this frame would be // delayed until the full video has finished decoding. this._drawFrame(); this._stats.videoFps?.updateMaxValue(fps); this._stats.total?.updateMaxValue(1000 / fps); this._stats.effect0?.updateMaxValue(1000 / fps); this._stats.effect1?.updateMaxValue(1000 / fps); this._stats.frameBmp?.updateMaxValue(1000 / fps); this._stats.maskBmp?.updateMaxValue(1000 / fps); } this.sendResponse('decode', { totalFrames: numFrames, numFrames: frames.length, fps: fps, width: width, height: height, done: false, }); }); if (!renderedFirstFrame) { canvas.width = this._decodedVideo.width; canvas.height = this._decodedVideo.height; this._drawFrame(); } this.sendResponse('decode', { totalFrames: this._decodedVideo.numFrames, numFrames: this._decodedVideo.frames.length, fps: this._decodedVideo.fps, width: this._decodedVideo.width, height: this._decodedVideo.height, done: true, }); } private _drawFrame(): void { if (this._canvas !== null && this._form !== null) { this._drawFrameImpl(this._form, this._frameIndex); } } private async _drawFrameImpl( form: CanvasForm, frameIndex: number, enableWatermark: boolean = false, step: number = 0, maxSteps: number = 40, ): Promise<void> { if (this._decodedVideo === null) { return; } { this._stats.videoFps?.begin(); this._stats.total?.begin(); this._stats.memory?.begin(); } try { const frame = this._decodedVideo.frames[frameIndex]; const {bitmap} = frame; this._stats.frameBmp?.begin(); // Need to convert VideoFrame to ImageBitmap because Safari can only apply // globalCompositeOperation on ImageBitmap and fails on VideoFrame. FWIW, // Chrome treats VideoFrame similarly to ImageBitmap. 
const frameBitmap = await createImageBitmap(bitmap); this._stats.frameBmp?.end(); const masks: Mask[] = []; const colors: string[] = []; const tracklets: Tracklet[] = []; this._tracklets.forEach(tracklet => { const mask = tracklet.masks[frameIndex]; if (mask != null) { masks.push(mask); tracklets.push(tracklet); colors.push(tracklet.color); } }); const effectActionPoint = this._currentSegmetationPoint; this._stats.maskBmp?.begin(); const effectMaskPromises = masks.map(async ({data, bounds}) => { return { bounds, bitmap: data as RLEObject, }; }); const effectMasks = await Promise.all(effectMaskPromises); this._stats.maskBmp?.end(); form.ctx.fillStyle = 'rgba(0, 0, 0, 0)'; form.ctx.fillRect(0, 0, this.width, this.height); const effectParams: EffectFrameContext = { frame: frameBitmap, masks: effectMasks, maskColors: colors, frameIndex: frameIndex, totalFrames: this._decodedVideo.frames.length, fps: this._decodedVideo.fps, width: frameBitmap.width, height: frameBitmap.height, actionPoint: null, }; // Allows animation within a single frame. if (this._allowAnimation && step < maxSteps) { const animationDuration = 2; // Total duration of the animation in seconds const progress = step / maxSteps; const timeParameter = progress * animationDuration; // Pass dynamic effect params effectParams.timeParameter = timeParameter; effectParams.actionPoint = effectActionPoint; this._processEffects(form, effectParams, tracklets); // Use RAF to draw the frame and update the display; // this avoids waiting until the JavaScript call stack is cleared. requestAnimationFrame(() => this._drawFrameImpl(form, frameIndex, false, step + 1, maxSteps), ); } else { this._processEffects(form, effectParams, tracklets); } if (enableWatermark) { this._drawWatermark(form, frameBitmap); } // Do not simply drop the JavaScript reference to the ImageBitmap; doing so // will keep its graphics resource alive until the next time the garbage // collector runs. 
frameBitmap.close(); { this._stats.videoFps?.end(); this._stats.total?.end(); this._stats.memory?.end(); } this._isDrawing = false; } catch { this._sendRenderingError(new DrawFrameError('Failed to draw frame')); } } private _drawWatermark(form: CanvasForm, frameBitmap: ImageBitmap): void { const frameWidth = this._canvas?.width || frameBitmap.width; const frameHeight = this._canvas?.height || frameBitmap.height; // Font size is either 12 or smaller based on available width // since the font is not monospaced, we approximate it'll fit 1.5 more characters than monospaced const approximateFontSize = Math.min( Math.floor(frameWidth / (VIDEO_WATERMARK_TEXT.length / 1.5)), 12, ); form.ctx.font = `${approximateFontSize}px "Inter", sans-serif`; const measureGeneratedBy = form.ctx.measureText(VIDEO_WATERMARK_TEXT); const textBoxWidth = measureGeneratedBy.width + 2 * WATERMARK_BOX_HORIZONTAL_PADDING; const textBoxHeight = measureGeneratedBy.actualBoundingBoxAscent + 2 * WATERMARK_BOX_VERTICAL_PADDING; const textBoxX = frameWidth - textBoxWidth; const textBoxY = frameHeight - textBoxHeight; form.ctx.fillStyle = 'rgba(0, 0, 0, 0.4)'; form.ctx.beginPath(); form.ctx.roundRect( Math.round(textBoxX), Math.round(textBoxY), Math.round(textBoxWidth), Math.round(textBoxHeight), [WATERMARK_BOX_HORIZONTAL_PADDING, 0, 0, 0], ); form.ctx.fill(); // Always reset the text style because some effects may change text styling in the same ctx form.ctx.fillStyle = 'rgba(255, 255, 255, 0.8)'; form.ctx.textAlign = 'left'; form.ctx.fillText( VIDEO_WATERMARK_TEXT, Math.round(textBoxX + WATERMARK_BOX_HORIZONTAL_PADDING), Math.round( textBoxY + WATERMARK_BOX_VERTICAL_PADDING + measureGeneratedBy.actualBoundingBoxAscent, ), ); } private updateFrameIndex(index: number): void { this._frameIndex = index; this.sendResponse('frameUpdate', { index, }); } private _loadWatermarkFonts() { const requiredFonts = [ { url: '/fonts/Inter-VariableFont.ttf', format: 'truetype-variations', }, ]; 
requiredFonts.forEach(requiredFont => { const fontFace = new FontFace( 'Inter', `url(${requiredFont.url}) format('${requiredFont.format}')`, ); fontFace.load().then(font => { self.fonts.add(font); }); }); } private updatePlayback(playing: boolean): void { if (playing) { this.sendResponse('play'); } else { this.sendResponse('pause'); } this._isPlaying = playing; } private _cancelRender(): void { if (this._playbackTimeoutHandle !== null) { clearTimeout(this._playbackTimeoutHandle); this._playbackTimeoutHandle = null; } if (this._playbackRAFHandle !== null) { cancelAnimationFrame(this._playbackRAFHandle); this._playbackRAFHandle = null; } } private _sendRenderingError(error: Error): void { this.sendResponse('renderingError', { error: serializeError(error), }); } private _processEffects( form: CanvasForm, effectParams: EffectFrameContext, tracklets: Tracklet[], ) { for (let i = 0; i < this._effects.length; i++) { const effect = this._effects[i]; if (i === 0) { this._stats.effect0?.begin(); } else if (i === 1) { this._stats.effect1?.begin(); } effect.apply(form, effectParams, tracklets); if (i === 0) { this._stats.effect0?.end(); } else if (i === 1) { this._stats.effect1?.end(); } } } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/VideoWorkerTypes.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. */ import { DecodeEvent, EffectUpdateEvent, EncodingCompletedEvent, EncodingStateUpdateEvent, FilmstripEvent, FrameUpdateEvent, LoadStartEvent, RenderingErrorEvent, } from './VideoWorkerBridge'; import {EffectOptions} from './effects/Effect'; import type {Effects} from './effects/Effects'; export type Request<A extends string, P> = { action: A; } & P; // REQUESTS export type SetCanvasRequest = Request< 'setCanvas', { canvas: OffscreenCanvas; } >; export type SetSourceRequest = Request< 'setSource', { source: string; } >; export type PlayRequest = Request<'play', unknown>; export type PauseRequest = Request<'pause', unknown>; export type StopRequest = Request<'stop', unknown>; export type FrameUpdateRequest = Request< 'frameUpdate', { index: number; } >; export type FilmstripRequest = Request< 'filmstrip', { width: number; height: number; } >; export type SetEffectRequest = Request< 'setEffect', { name: keyof Effects; index: number; options?: EffectOptions; } >; export type EncodeVideoRequest = Request<'encode', unknown>; export type EnableStatsRequest = Request<'enableStats', unknown>; export type VideoWorkerRequest = | SetCanvasRequest | SetSourceRequest | PlayRequest | PauseRequest | StopRequest | FrameUpdateRequest | FilmstripRequest | SetEffectRequest | EncodeVideoRequest | EnableStatsRequest; export type VideoWorkerRequestMessageEvent = MessageEvent<VideoWorkerRequest>; // RESPONSES export type ErrorResponse = Request< 'error', { error: unknown; } >; export type DecodeResponse = Request<'decode', DecodeEvent>; export type EncodingStateUpdateResponse = Request< 'encodingStateUpdate', EncodingStateUpdateEvent >; export type EncodingCompletedResponse = Request< 'encodingCompleted', EncodingCompletedEvent >; export type FilmstripResponse = Request<'filmstrip', FilmstripEvent>; export type PlayResponse = Request<'play', unknown>; export type PauseResponse = Request<'pause', unknown>; export type 
FrameUpdateResponse = Request<'frameUpdate', FrameUpdateEvent>; export type RenderingErrorResponse = Request< 'renderingError', RenderingErrorEvent >; // HTMLVideoElement events https://developer.mozilla.org/en-US/docs/Web/HTML/Element/video#events export type LoadStartResponse = Request<'loadstart', LoadStartEvent>; export type EffectUpdateResponse = Request<'effectUpdate', EffectUpdateEvent>; export type VideoWorkerResponse = | ErrorResponse | FilmstripResponse | DecodeResponse | EncodingStateUpdateResponse | EncodingCompletedResponse | PlayResponse | PauseResponse | FrameUpdateResponse | LoadStartResponse | RenderingErrorResponse | EffectUpdateResponse; export type VideoWorkerResponseMessageEvent = MessageEvent<VideoWorkerResponse>; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/editor/DemoVideoEditor.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import TrackletsAnnotation from '@/common/components/annotations/TrackletsAnnotation'; import useCloseSessionBeforeUnload from '@/common/components/session/useCloseSessionBeforeUnload'; import MessagesSnackbar from '@/common/components/snackbar/MessagesSnackbar'; import useMessagesSnackbar from '@/common/components/snackbar/useDemoMessagesSnackbar'; import {OBJECT_TOOLBAR_INDEX} from '@/common/components/toolbar/ToolbarConfig'; import useToolbarTabs from '@/common/components/toolbar/useToolbarTabs'; import VideoFilmstripWithPlayback from '@/common/components/video/VideoFilmstripWithPlayback'; import { FrameUpdateEvent, RenderingErrorEvent, SessionStartedEvent, TrackletsEvent, } from '@/common/components/video/VideoWorkerBridge'; import VideoEditor from '@/common/components/video/editor/VideoEditor'; import useResetDemoEditor from '@/common/components/video/editor/useResetEditor'; import useVideo from '@/common/components/video/editor/useVideo'; import InteractionLayer from '@/common/components/video/layers/InteractionLayer'; import {PointsLayer} from '@/common/components/video/layers/PointsLayer'; import LoadingStateScreen from '@/common/loading/LoadingStateScreen'; import UploadLoadingScreen from '@/common/loading/UploadLoadingScreen'; import useScreenSize from '@/common/screen/useScreenSize'; import {SegmentationPoint} from '@/common/tracker/Tracker'; import { activeTrackletObjectIdAtom, frameIndexAtom, isAddObjectEnabledAtom, isPlayingAtom, isVideoLoadingAtom, pointsAtom, sessionAtom, streamingStateAtom, trackletObjectsAtom, uploadingStateAtom, VideoData, } from '@/demo/atoms'; import useSettingsContext from '@/settings/useSettingsContext'; import {color, spacing} from '@/theme/tokens.stylex'; import stylex from '@stylexjs/stylex'; import {useAtom, useAtomValue, useSetAtom} from 'jotai'; import {useEffect, useState} from 'react'; import type {ErrorObject} from 'serialize-error'; const styles = stylex.create({ container: { display: 'flex', flexDirection: 
'column', overflow: 'auto', width: '100%', borderColor: color['gray-800'], backgroundColor: color['gray-800'], borderWidth: 8, borderRadius: 12, '@media screen and (max-width: 768px)': { // on mobile, we want to grow the editor container so that the editor // fills the remaining vertical space between the navbar and bottom // of the page flexGrow: 1, borderWidth: 0, borderRadius: 0, paddingBottom: spacing[4], }, }, loadingScreenWrapper: { position: 'absolute', top: 0, left: 0, width: '100%', height: '100%', background: 'white', overflow: 'hidden', overflowY: 'auto', zIndex: 999, }, }); type Props = { video: VideoData; }; export default function DemoVideoEditor({video: inputVideo}: Props) { const {settings} = useSettingsContext(); const video = useVideo(); const [isSessionStartFailed, setIsSessionStartFailed] = useState(false); const [session, setSession] = useAtom(sessionAtom); const [activeTrackletId, setActiveTrackletObjectId] = useAtom( activeTrackletObjectIdAtom, ); const setTrackletObjects = useSetAtom(trackletObjectsAtom); const setFrameIndex = useSetAtom(frameIndexAtom); const points = useAtomValue(pointsAtom); const isAddObjectEnabled = useAtomValue(isAddObjectEnabledAtom); const streamingState = useAtomValue(streamingStateAtom); const isPlaying = useAtomValue(isPlayingAtom); const isVideoLoading = useAtomValue(isVideoLoadingAtom); const uploadingState = useAtomValue(uploadingStateAtom); const [renderingError, setRenderingError] = useState<ErrorObject | null>( null, ); const {isMobile} = useScreenSize(); const [tabIndex] = useToolbarTabs(); const {enqueueMessage} = useMessagesSnackbar(); useCloseSessionBeforeUnload(); const {resetEditor, resetSession} = useResetDemoEditor(); useEffect(() => { resetEditor(); }, [inputVideo, resetEditor]); useEffect(() => { function onFrameUpdate(event: FrameUpdateEvent) { setFrameIndex(event.index); } // Listen to frame updates to fetch the frame index in the main thread, // which is then used downstream to render points per frame. 
video?.addEventListener('frameUpdate', onFrameUpdate); function onSessionStarted(event: SessionStartedEvent) { setSession({id: event.sessionId, ranPropagation: false}); } video?.addEventListener('sessionStarted', onSessionStarted); function onSessionStartFailed() { setIsSessionStartFailed(true); } video?.addEventListener('sessionStartFailed', onSessionStartFailed); function onTrackletsUpdated(event: TrackletsEvent) { const tracklets = event.tracklets; if (tracklets.length === 0) { resetSession(); } setTrackletObjects(tracklets); } video?.addEventListener('trackletsUpdated', onTrackletsUpdated); function onRenderingError(event: RenderingErrorEvent) { setRenderingError(event.error); } video?.addEventListener('renderingError', onRenderingError); video?.initializeTracker('SAM 2', { inferenceEndpoint: settings.inferenceAPIEndpoint, }); video?.startSession(inputVideo.path); return () => { video?.closeSession(); video?.removeEventListener('frameUpdate', onFrameUpdate); video?.removeEventListener('sessionStarted', onSessionStarted); video?.removeEventListener('sessionStartFailed', onSessionStartFailed); video?.removeEventListener('trackletsUpdated', onTrackletsUpdated); video?.removeEventListener('renderingError', onRenderingError); }; }, [ setFrameIndex, setSession, setTrackletObjects, resetSession, inputVideo, video, settings.inferenceAPIEndpoint, settings.videoAPIEndpoint, ]); async function handleOptimisticPointUpdate(newPoints: SegmentationPoint[]) { if (session == null) { return; } async function createActiveTracklet() { if (!isAddObjectEnabled || newPoints.length === 0) { return; } const tracklet = await video?.createTracklet(); if (tracklet != null && newPoints.length > 0) { setActiveTrackletObjectId(tracklet.id); video?.updatePoints(tracklet.id, [newPoints[newPoints.length - 1]]); } } if (activeTrackletId != null) { video?.updatePoints(activeTrackletId, newPoints); } else { await createActiveTracklet(); } enqueueMessage('pointClick'); } async function 
handleAddPoint(point: SegmentationPoint) { if (streamingState === 'partial' || streamingState === 'requesting') { return; } if (isPlaying) { return video?.pause(); } handleOptimisticPointUpdate([...points, point]); } function handleRemovePoint(point: SegmentationPoint) { if ( isPlaying || streamingState === 'partial' || streamingState === 'requesting' ) { return; } handleOptimisticPointUpdate(points.filter(p => p !== point)); } // The interaction layer handles clicks onto the video canvas. It is used // to get absolute point clicks within the video's coordinate system. // The PointsLayer handles rendering of input points and allows removing // individual points by clicking on them. const layers = ( <> {tabIndex === OBJECT_TOOLBAR_INDEX && ( <> handleAddPoint(point)} /> )} {!isMobile && } ); return ( <> {(isVideoLoading || session === null) && !isSessionStartFailed && (
)} {isSessionStartFailed && (
Uh oh, it looks like there was an issue starting a session. } linkProps={{to: '..', label: 'Back to homepage'}} />
)} {isMobile && renderingError != null && (
)} {uploadingState !== 'default' && (
)}
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/editor/ImageUtils.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ export function convertVideoFrameToImageData( videoFrame: VideoFrame, ): ImageData | undefined { const canvas = new OffscreenCanvas( videoFrame.displayWidth, videoFrame.displayHeight, ); const ctx = canvas.getContext('2d'); ctx?.drawImage(videoFrame, 0, 0); return ctx?.getImageData(0, 0, canvas.width, canvas.height); } /** * This utility provides three functions: * `process`: to accumulate the bounding box of non-empty pixels while looping through all pixels of an ImageData * `crop`: to cut out the subsection found by `process` * `getBox`: to return the accumulated bounding box corners */ export function findBoundingBox() { let xMin = Number.MAX_VALUE; let yMin = Number.MAX_VALUE; let xMax = Number.MIN_VALUE; let yMax = Number.MIN_VALUE; return { process: function (x: number, y: number, hasData: boolean) { if (hasData) { xMin = Math.min(x, xMin); xMax = Math.max(x, xMax); yMin = Math.min(y, yMin); yMax = Math.max(y, yMax); } return [xMin, xMax, yMin, yMax]; }, crop(imageData: ImageData): ImageData | null { const canvas = new OffscreenCanvas(imageData.width, imageData.height); const ctx = canvas.getContext('2d'); const boundingBoxWidth = xMax - xMin; const boundingBoxHeight = yMax - yMin; if (ctx && boundingBoxWidth > 
0 && boundingBoxHeight > 0) { ctx.clearRect(0, 0, canvas.width, canvas.height); ctx.putImageData(imageData, 0, 0); return ctx.getImageData( xMin, yMin, boundingBoxWidth, boundingBoxHeight, ); } else { return null; } }, getBox(): [[number, number], [number, number]] { return [ [xMin, yMin], [xMax, yMax], ]; }, }; } export function magnifyImageRegion( canvas: HTMLCanvasElement | null, x: number, y: number, radius: number = 25, scale: number = 2, ): string { if (canvas == null) { return ''; } const ctx = canvas.getContext('2d'); if (ctx) { const minX = x - radius < 0 ? radius - x : 0; const minY = y - radius < 0 ? radius - y : 0; const region = ctx.getImageData( Math.max(x - radius, 0), Math.max(y - radius, 0), radius * 2, radius * 2, ); // ImageData doesn't scale-transform correctly on canvas // So we first draw the original size on an offscreen canvas, and then scale it const regionCanvas = new OffscreenCanvas(region.width, region.height); const regionCtx = regionCanvas.getContext('2d'); regionCtx?.putImageData(region, minX > 0 ? minX : 0, minY > 0 ? minY : 0); const scaleCanvas = document.createElement('canvas'); scaleCanvas.width = Math.round(region.width * scale); scaleCanvas.height = Math.round(region.height * scale); const scaleCtx = scaleCanvas.getContext('2d'); scaleCtx?.scale(scale, scale); scaleCtx?.drawImage(regionCanvas, 0, 0); return scaleCanvas.toDataURL(); } return ''; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/editor/VideoEditor.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {VideoData} from '@/demo/atoms'; import stylex, {StyleXStyles} from '@stylexjs/stylex'; import {useSetAtom} from 'jotai'; import {PropsWithChildren, RefObject, useEffect, useRef} from 'react'; import Video, {VideoRef} from '../Video'; import {videoAtom} from './atoms'; const MAX_VIDEO_WIDTH = 1280; const styles = stylex.create({ editorContainer: { position: 'relative', display: 'flex', flexDirection: 'column', alignItems: 'center', width: '100%', height: '100%', borderRadius: '0.375rem', overflow: { default: 'clip', '@media screen and (max-width: 768px)': 'visible', }, }, videoContainer: { position: 'relative', flexGrow: 1, overflow: 'hidden', width: '100%', maxWidth: MAX_VIDEO_WIDTH, }, layers: { position: 'absolute', left: 0, top: 0, bottom: 0, right: 0, }, loadingMessage: { position: 'absolute', top: '8px', right: '8px', padding: '6px 10px', backgroundColor: '#6441D2CC', color: '#FFF', display: 'flex', alignItems: 'center', gap: '8px', borderRadius: '8px', fontSize: '0.8rem', }, }); export type InteractionLayerProps = { style: StyleXStyles; videoRef: RefObject; }; export type ControlsProps = { isPlaying: boolean; onPlay: () => void; onPause: () => void; onPreviousFrame?: () => void; onNextFrame?: () => void; }; type Props = PropsWithChildren<{ video: VideoData; layers?: React.ReactNode; loading?: boolean; }>; export default function VideoEditor({ video: inputVideo, layers, loading, children, }: Props) { const videoRef = useRef(null); const setVideo = useSetAtom(videoAtom); // Initialize video atom useEffect(() => { setVideo(videoRef.current); return () 
=> { setVideo(null); }; }, [setVideo]); return (
{children}
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/editor/VideoEditorUtils.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {Mask, Tracklet} from '@/common/tracker/Tracker'; import { convertVideoFrameToImageData, findBoundingBox, } from '@/common/utils/ImageUtils'; import {DataArray} from '@/jscocotools/mask'; import invariant from 'invariant'; function getCanvas( width: number, height: number, isOffscreen: boolean = false, ): HTMLCanvasElement | OffscreenCanvas { if (isOffscreen || typeof document === 'undefined') { return new OffscreenCanvas(width, height); } const canvas = document.createElement('canvas'); canvas.width = width; canvas.height = height; return canvas; } export function drawFrame( ctx: CanvasRenderingContext2D | OffscreenCanvasRenderingContext2D, frame: VideoFrame | HTMLImageElement, width: number, height: number, ) { ctx?.drawImage(frame, 0, 0, width, height); } /** * Given a mask and the image frame, get the masked image cropped to its bounding box. 
*/ export function getThumbnailImageDataOld( mask: DataArray, videoFrame: VideoFrame, ): ImageData | null { const data = mask.data; if (!ArrayBuffer.isView(data) || !(data instanceof Uint8Array)) { // Note: `new ImageData(0, 0)` would throw an IndexSizeError; return null instead. return null; } const frame = convertVideoFrameToImageData(videoFrame); if (!frame) { return null; } const frameData = frame.data; const scaleX = frame.width / mask.shape[1]; const scaleY = frame.height / mask.shape[0]; const boundingBox = findBoundingBox(); const transformedData = new Uint8ClampedArray(data.length * 4); for (let i = 0; i < data.length; i++) { // Since the mask is rotated, the new width is the mask's height = mask.shape[1] // Transform matrix: doing a rotate 90deg and then flip horizontal is the same as flipping x and y // [ 0 1 ] [ -1 0 ] = [ 0 1 ] // [-1 0 ] x [ 0 1 ] = [ 1 0 ] // So, we can find the new index as: newY * newWidth + newX const newX = Math.floor(i / mask.shape[0]); // i.e., new x is the current y const newY = i % mask.shape[0]; const transformedIndex = (newY * mask.shape[1] + newX) * 4; const frameDataIndex = (newY * mask.shape[1] * scaleY + newX * scaleX) * 4; transformedData[transformedIndex] = frameData[frameDataIndex]; transformedData[transformedIndex + 1] = frameData[frameDataIndex + 1]; transformedData[transformedIndex + 2] = frameData[frameDataIndex + 2]; transformedData[transformedIndex + 3] = (data[i] && 255) || 0; // alpha value boundingBox.process(newX, newY, data[i] > 0); } const rotatedData = new ImageData( transformedData, mask.shape[1], mask.shape[0], ); // flip w and h of the mask return boundingBox.crop(rotatedData); } /** * Given a mask, the mask rendering context, and the video frame, get the * masked image cropped to its bounding box. 
*/ function getThumbnailImageData( mask: Mask, maskCtx: OffscreenCanvasRenderingContext2D, frameBitmap: ImageBitmap, ): ImageData | null { const x = mask.bounds[0][0]; const y = mask.bounds[0][1]; const w = mask.bounds[1][0] - mask.bounds[0][0]; const h = mask.bounds[1][1] - mask.bounds[0][1]; if (w <= 0 || h <= 0) { return null; } const thumbnailMaskData = maskCtx.getImageData(x, y, w, h); const canvas = new OffscreenCanvas(w, h); const ctx = canvas.getContext('2d'); invariant(ctx !== null, '2d context cannot be null'); ctx.putImageData(thumbnailMaskData, 0, 0); ctx.globalCompositeOperation = 'source-in'; ctx.drawImage(frameBitmap, x, y, w, h, 0, 0, w, h); return ctx.getImageData(0, 0, w, h); } export async function generateThumbnail( track: Tracklet, frameIndex: number, mask: Mask, frame: VideoFrame, ctx: OffscreenCanvasRenderingContext2D, ): Promise<void> { // If a frame doesn't have points, the points will be undefined. const hasPoints = (track.points[frameIndex]?.length ?? 0) > 0; if (!hasPoints) { return; } invariant(frame !== null, 'frame must be ready'); const bitmap = await createImageBitmap(frame); const thumbnailImageData = getThumbnailImageData(mask, ctx, bitmap); bitmap.close(); if (thumbnailImageData != null) { const thumbnailDataURL = await getDataURLFromImageData(thumbnailImageData); track.thumbnail = thumbnailDataURL; } } export async function getDataURLFromImageData( imageData: ImageData | null, ): Promise<string> { if (!imageData) { return ''; } const canvas = getCanvas(imageData.width, imageData.height); const ctx = canvas.getContext('2d'); if (ctx === null) { return ''; } ctx.putImageData(imageData, 0, 0); if (canvas instanceof OffscreenCanvas) { const blob = await canvas.convertToBlob(); return new Promise<string>(resolve => { const reader = new FileReader(); reader.addEventListener( 'load', () => { const result = reader.result; if (typeof result === 'string') { resolve(result); } else { resolve(''); } }, false, ); 
reader.readAsDataURL(blob); }); } return canvas.toDataURL(); } export function hexToRgb(hex: string): { r: number; g: number; b: number; a: number; } { const result = /^#?([a-f\d]{2})([a-f\d]{2})([a-f\d]{2})([a-f\d]{2})?$/i.exec( hex, ); return result ? { r: parseInt(result[1], 16), g: parseInt(result[2], 16), b: parseInt(result[3], 16), a: result[4] != null ? parseInt(result[4], 16) : 128, } : {r: 255, g: 0, b: 0, a: 128}; } export function getPointInImage( event: React.MouseEvent, canvas: HTMLCanvasElement, normalized: boolean = false, ): [x: number, y: number] { const rect = canvas.getBoundingClientRect(); const matrix = new DOMMatrix(); // First, center the image const elementCenter = new DOMPoint( canvas.clientWidth / 2, canvas.clientHeight / 2, ); const imageCenter = new DOMPoint(canvas.width / 2, canvas.height / 2); matrix.translateSelf( elementCenter.x - imageCenter.x, elementCenter.y - imageCenter.y, ); // To contain the image within the element, use the minimal scale const scale = Math.min( canvas.clientWidth / canvas.width, canvas.clientHeight / canvas.height, ); matrix.scaleSelf(scale, scale, 1, imageCenter.x, imageCenter.y); const point = new DOMPoint( event.clientX - rect.left, event.clientY - rect.top, ); const imagePoint = matrix.inverse().transformPoint(point); const x = Math.max(Math.min(imagePoint.x, canvas.width), 0); const y = Math.max(Math.min(imagePoint.y, canvas.height), 0); if (normalized) { return [x / canvas.width, y / canvas.height]; } return [x, y]; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/editor/atoms.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {atom} from 'jotai'; import {VideoRef} from '../Video'; export const videoAtom = atom<VideoRef | null>(null); ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/editor/useResetEditor.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import {OBJECT_TOOLBAR_INDEX} from '@/common/components/toolbar/ToolbarConfig'; import useToolbarTabs from '@/common/components/toolbar/useToolbarTabs'; import useVideo from '@/common/components/video/editor/useVideo'; import { activeTrackletObjectIdAtom, frameIndexAtom, isPlayingAtom, isStreamingAtom, sessionAtom, streamingStateAtom, trackletObjectsAtom, } from '@/demo/atoms'; import {DEFAULT_EFFECT_LAYERS} from '@/demo/DemoConfig'; import {useSetAtom} from 'jotai'; import {useCallback} from 'react'; type State = { resetEditor: () => void; resetEffects: () => void; resetSession: () => void; }; export default function useResetEditor(): State { const video = useVideo(); const setSession = useSetAtom(sessionAtom); const setActiveTrackletObjectId = useSetAtom(activeTrackletObjectIdAtom); const setTrackletObjects = useSetAtom(trackletObjectsAtom); const setFrameIndex = useSetAtom(frameIndexAtom); const setStreamingState = useSetAtom(streamingStateAtom); const setIsPlaying = useSetAtom(isPlayingAtom); const setIsStreaming = useSetAtom(isStreamingAtom); const [, setDemoTabIndex] = useToolbarTabs(); const resetEffects = useCallback(() => { video?.setEffect(DEFAULT_EFFECT_LAYERS.background, 0, {variant: 0}); video?.setEffect(DEFAULT_EFFECT_LAYERS.highlight, 1, {variant: 0}); }, [video]); const resetEditor = useCallback(() => { setFrameIndex(0); setSession(null); setActiveTrackletObjectId(0); setTrackletObjects([]); setStreamingState('none'); setIsPlaying(false); setIsStreaming(false); resetEffects(); setDemoTabIndex(OBJECT_TOOLBAR_INDEX); }, [ setFrameIndex, setSession, setActiveTrackletObjectId, setTrackletObjects, setStreamingState, setIsPlaying, setIsStreaming, resetEffects, setDemoTabIndex, ]); const resetSession = useCallback(() => { setSession(prev => { if (prev === null) { return prev; } return {...prev, ranPropagation: false}; }); setActiveTrackletObjectId(null); resetEffects(); }, [setSession, setActiveTrackletObjectId, resetEffects]); return {resetEditor, 
resetEffects, resetSession}; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/editor/useVideo.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {useAtomValue} from 'jotai'; import {videoAtom} from './atoms'; export default function useVideo() { return useAtomValue(videoAtom); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/editor/useVideoEffect.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import useVideo from '@/common/components/video/editor/useVideo'; import { activeBackgroundEffectAtom, activeHighlightEffectAtom, } from '@/demo/atoms'; import {useSetAtom} from 'jotai'; import {useCallback, useEffect} from 'react'; import {EffectUpdateEvent} from '../VideoWorkerBridge'; import {EffectOptions} from '../effects/Effect'; import Effects, {EffectIndex, Effects as EffectsType} from '../effects/Effects'; export default function useVideoEffect() { const video = useVideo(); const setBackgroundEffect = useSetAtom(activeBackgroundEffectAtom); const setHighlightEffect = useSetAtom(activeHighlightEffectAtom); // The useEffect will listen to any effect updates from the worker. The // worker is the source of truth, which effect and effect variant is // currently applied. The main thread will be notified whenever an effect // or effect variant changes. useEffect(() => { function onEffectUpdate(event: EffectUpdateEvent) { if (event.index === EffectIndex.BACKGROUND) { setBackgroundEffect(event); } else { setHighlightEffect(event); } } video?.addEventListener('effectUpdate', onEffectUpdate); return () => { video?.removeEventListener('effectUpdate', onEffectUpdate); }; }, [video, setBackgroundEffect, setHighlightEffect]); return useCallback( (name: keyof EffectsType, index: EffectIndex, options?: EffectOptions) => { video?.setEffect(name, index, options); const effect = Effects[name]; const effectVariant = options?.variant ?? 0; if (index === EffectIndex.BACKGROUND) { setBackgroundEffect({ name, variant: effectVariant, numVariants: effect.numVariants, }); } else { setHighlightEffect({ name, variant: options?.variant ?? 0, numVariants: effect.numVariants, }); } }, [video, setBackgroundEffect, setHighlightEffect], ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/ArrowGLEffect.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. 
and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import BaseGLEffect from '@/common/components/video/effects/BaseGLEffect'; import { EffectFrameContext, EffectInit, } from '@/common/components/video/effects/Effect'; import fragmentShaderSource from '@/common/components/video/effects/shaders/Arrow.frag?raw'; import vertexShaderSource from '@/common/components/video/effects/shaders/DefaultVert.vert?raw'; import {Tracklet} from '@/common/tracker/Tracker'; import {normalizeBounds} from '@/common/utils/ShaderUtils'; import {RLEObject, decode} from '@/jscocotools/mask'; import invariant from 'invariant'; import {CanvasForm} from 'pts'; export default class ArrowGLEffect extends BaseGLEffect { private _numMasks: number = 0; private _numMasksUniformLocation: WebGLUniformLocation | null = null; // Mask texture units must start at 1; texture unit 0 is taken by the main frame texture. 
private _masksTextureUnitStart: number = 1; constructor() { super(4); this.vertexShaderSource = vertexShaderSource; this.fragmentShaderSource = fragmentShaderSource; } protected setupUniforms( gl: WebGL2RenderingContext, program: WebGLProgram, init: EffectInit, ): void { super.setupUniforms(gl, program, init); this._numMasksUniformLocation = gl.getUniformLocation(program, 'uNumMasks'); gl.uniform1i(this._numMasksUniformLocation, this._numMasks); } apply(form: CanvasForm, context: EffectFrameContext, _tracklets: Tracklet[]) { const gl = this._gl; const program = this._program; if (!program) { return; } invariant(gl !== null, 'WebGL2 context is required'); gl.clearColor(0.0, 0.0, 0.0, 1.0); gl.clear(gl.COLOR_BUFFER_BIT); // dynamic uniforms per frame const styleIndex = Math.floor(this.variant / 2) % 2; gl.uniform1i(this._numMasksUniformLocation, context.masks.length); gl.uniform1f( gl.getUniformLocation(program, 'uCurrentFrame'), context.frameIndex, ); gl.uniform1i( gl.getUniformLocation(program, 'uLineColor'), this.variant % 2 === 0 ? 0 : 1, ); gl.uniform1i( gl.getUniformLocation(program, 'uArrow'), styleIndex === 0 ? 
1 : 0, ); gl.activeTexture(gl.TEXTURE0); gl.bindTexture(gl.TEXTURE_2D, this._frameTexture); gl.texImage2D( gl.TEXTURE_2D, 0, gl.RGBA, context.width, context.height, 0, gl.RGBA, gl.UNSIGNED_BYTE, context.frame, ); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST); // Create and bind 2D textures for each mask context.masks.forEach((mask, index) => { const maskTexture = gl.createTexture(); const decodedMask = decode([mask.bitmap as RLEObject]); const maskData = decodedMask.data as Uint8Array; gl.activeTexture(gl.TEXTURE0 + index + this._masksTextureUnitStart); gl.bindTexture(gl.TEXTURE_2D, maskTexture); const boundaries = normalizeBounds( mask.bounds[0], mask.bounds[1], context.width, context.height, ); gl.uniform4fv(gl.getUniformLocation(program, `bbox${index}`), boundaries); // dynamic uniforms per mask gl.uniform1i( gl.getUniformLocation(program, `uMaskTexture${index}`), this._masksTextureUnitStart + index, ); gl.pixelStorei(gl.UNPACK_ALIGNMENT, 1); // The decoded RLE mask is stored in column-major (Fortran) order, so width and height are swapped here. gl.texImage2D( gl.TEXTURE_2D, 0, gl.LUMINANCE, context.height, context.width, 0, gl.LUMINANCE, gl.UNSIGNED_BYTE, maskData, ); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE); gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4); }); const ctx = form.ctx; invariant(this._canvas !== null, 'canvas is required'); ctx.drawImage(this._canvas, 0, 0); } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/BackgroundBlurEffect.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import BaseGLEffect from '@/common/components/video/effects/BaseGLEffect'; import { EffectFrameContext, EffectInit, } from '@/common/components/video/effects/Effect'; import fragmentShaderSource from '@/common/components/video/effects/shaders/BackgroundBlur.frag?raw'; import vertexShaderSource from '@/common/components/video/effects/shaders/DefaultVert.vert?raw'; import {Tracklet} from '@/common/tracker/Tracker'; import invariant from 'invariant'; import {CanvasForm} from 'pts'; export default class BackgroundBlurEffect extends BaseGLEffect { private _blurRadius: number = 3; constructor() { super(3); this.vertexShaderSource = vertexShaderSource; this.fragmentShaderSource = fragmentShaderSource; } protected setupUniforms( gl: WebGL2RenderingContext, program: WebGLProgram, init: EffectInit, ): void { super.setupUniforms(gl, program, init); gl.uniform1i( gl.getUniformLocation(program, 'uBlurRadius'), this._blurRadius, ); } apply(form: CanvasForm, context: EffectFrameContext, _tracklets: Tracklet[]) { const gl = this._gl; const program = this._program; if (!program) { return; } invariant(gl !== null, 'WebGL2 context is required'); gl.clearColor(0.0, 0.0, 0.0, 1.0); gl.clear(gl.COLOR_BUFFER_BIT); const blurRadius = [3, 6, 12][this.variant % 3]; gl.uniform1i(gl.getUniformLocation(program, 'uBlurRadius'), blurRadius); gl.activeTexture(gl.TEXTURE0); gl.bindTexture(gl.TEXTURE_2D, this._frameTexture); gl.texImage2D( gl.TEXTURE_2D, 0, 
gl.RGBA, context.width, context.height, 0, gl.RGBA, gl.UNSIGNED_BYTE, context.frame, ); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST); gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4); const ctx = form.ctx; invariant(this._canvas !== null, 'canvas is required'); ctx.drawImage(this._canvas, 0, 0); } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/BackgroundTextEffect.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import {Tracklet} from '@/common/tracker/Tracker'; import {DEMO_SHORT_NAME} from '@/demo/DemoConfig'; import {Bound, CanvasForm, Num, Pt, Shaping} from 'pts'; import {AbstractEffect, EffectFrameContext} from './Effect'; export default class BackgroundTextEffect extends AbstractEffect { constructor() { super(2); } apply( form: CanvasForm, context: EffectFrameContext, _tracklets: Tracklet[], ): void { form.image([0, 0], context.frame); const words = ['SEGMENT', 'ANYTHING', 'WOW']; const paragraph = `${DEMO_SHORT_NAME} is designed for efficient video processing with streaming inference to enable real-time, interactive applications.`; const progress = context.frameIndex / context.totalFrames; // Zooming heading if (this.variant % 2 === 0) { const step = context.totalFrames / words.length; const wordIndex = Math.floor(progress * words.length); const fontSize = context.width / Math.max(4, words[wordIndex].length - 1); const sizeMax = fontSize * 1.2; const t = Shaping.quadraticInOut( Num.cycle((context.frameIndex - wordIndex * step) / step), ); const currentSize = fontSize + Shaping.sineInOut(t, sizeMax - fontSize); form.fillOnly('#fff').font(currentSize, 'bold'); const area = new Pt( context.width, context.height - (context.height / 4) * (1 - t), ) .toBound() .scale(1.5, [context.width / 2, 0]); form .alignText('center', 'middle') .textBox(area, words[wordIndex], 'middle'); // Scrolling paragraph } else { const t = Shaping.quadraticInOut(Num.cycle(progress)); const offset = t * context.height; const area = Bound.fromArray([ [0, -context.height + offset], [context.width, context.height], ]); form.fillOnly('#00000066').rect(area); form.fillOnly('#fff').font(context.width / 8, 'bold'); form .fillOnly('#fff') .alignText('start') .paragraphBox(area, paragraph, 0.8, 'top', false); } } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/BaseGLEffect.ts 
================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import Logger from '@/common/logger/Logger'; import {Tracklet} from '@/common/tracker/Tracker'; import invariant from 'invariant'; import {CanvasForm} from 'pts'; import {AbstractEffect, EffectFrameContext, EffectInit} from './Effect'; export default abstract class BaseGLEffect extends AbstractEffect { protected _canvas: OffscreenCanvas | null = null; protected _gl: WebGL2RenderingContext | null = null; protected _program: WebGLProgram | null = null; protected _frameTextureUnit: number = 0; protected _frameTexture: WebGLTexture | null = null; protected vertexShaderSource: string = ''; protected fragmentShaderSource: string = ''; protected _vertexShader: WebGLShader | null = null; protected _fragmentShader: WebGLShader | null = null; async setup(init: EffectInit): Promise<void> { const {canvas, gl} = init; if (canvas != null && gl != null) { this._canvas = canvas; this._gl = gl; } invariant(this._gl !== null, 'WebGL2 context is required'); const program = this._gl.createProgram(); this._program = program; { const vertexShader = this._gl.createShader(this._gl.VERTEX_SHADER); this._vertexShader = vertexShader; invariant(vertexShader !== null, 'vertexShader required'); this._gl.shaderSource(vertexShader, this.vertexShaderSource); this._gl.compileShader(vertexShader); invariant(program !== null, 'program required'); 
this._gl.attachShader(program, vertexShader); const fragmentShader = this._gl.createShader(this._gl.FRAGMENT_SHADER); this._fragmentShader = fragmentShader; invariant(fragmentShader !== null, 'fragmentShader required'); this._gl.shaderSource(fragmentShader, this.fragmentShaderSource); this._gl.compileShader(fragmentShader); this._gl.attachShader(program, fragmentShader); this._gl.linkProgram(program); if (!this._gl.getProgramParameter(program, this._gl.LINK_STATUS)) { Logger.error(this._gl.getShaderInfoLog(vertexShader)); Logger.error(this._gl.getShaderInfoLog(fragmentShader)); } } this._gl.useProgram(program); this.setupBuffers(this._gl); this.setupUniforms(this._gl, program, init); } apply(form: CanvasForm, context: EffectFrameContext, _tracklets: Tracklet[]) { const gl = this._gl; invariant(gl !== null, 'WebGL2 context is required'); gl.clearColor(0.0, 0.0, 0.0, 1.0); gl.clear(gl.COLOR_BUFFER_BIT); gl.activeTexture(gl.TEXTURE0 + this._frameTextureUnit); gl.bindTexture(gl.TEXTURE_2D, this._frameTexture); gl.texImage2D( gl.TEXTURE_2D, 0, gl.RGBA, context.frame.width, context.frame.height, 0, gl.RGBA, gl.UNSIGNED_BYTE, context.frame, ); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST); gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true); // Apply shader gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4); const ctx = form.ctx; invariant(this._canvas !== null, 'canvas is required'); ctx.drawImage(this._canvas, 0, 0); } async cleanup(): Promise { if (this._gl != null) { // Dispose of WebGL resources, e.g., textures, buffers, etc. 
if (this._frameTexture != null) { this._gl.deleteTexture(this._frameTexture); this._frameTexture = null; } if ( this._program != null && this._vertexShader != null && this._fragmentShader != null ) { this._gl.detachShader(this._program, this._vertexShader); this._gl.deleteShader(this._vertexShader); this._gl.detachShader(this._program, this._fragmentShader); this._gl.deleteShader(this._fragmentShader); } } } protected setupBuffers(gl: WebGL2RenderingContext) { const vertexBufferData = new Float32Array([ 1.0, 1.0, -1.0, 1.0, 1.0, -1.0, -1.0, -1.0, ]); const texCoordBufferData = new Float32Array([ 1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, ]); const vertexBuffer = gl.createBuffer(); gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer); gl.bufferData(gl.ARRAY_BUFFER, vertexBufferData, gl.STATIC_DRAW); gl.vertexAttribPointer(0, 2, gl.FLOAT, false, 0, 0); gl.enableVertexAttribArray(0); const texCoordBuffer = gl.createBuffer(); gl.bindBuffer(gl.ARRAY_BUFFER, texCoordBuffer); gl.bufferData(gl.ARRAY_BUFFER, texCoordBufferData, gl.STATIC_DRAW); gl.vertexAttribPointer(1, 2, gl.FLOAT, false, 0, 0); gl.enableVertexAttribArray(1); } protected setupUniforms( gl: WebGL2RenderingContext, program: WebGLProgram, init: EffectInit, ) { this._frameTexture = gl.createTexture(); gl.uniform1i( gl.getUniformLocation(program, 'uSampler'), this._frameTextureUnit, ); gl.uniform2f( gl.getUniformLocation(program, 'uSize'), init.width, init.height, ); } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/BurstGLEffect.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {hexToRgb} from '@/common/components/video/editor/VideoEditorUtils'; import BaseGLEffect from '@/common/components/video/effects/BaseGLEffect'; import { EffectFrameContext, EffectInit, } from '@/common/components/video/effects/Effect'; import fragmentShaderSource from '@/common/components/video/effects/shaders/Burst.frag?raw'; import vertexShaderSource from '@/common/components/video/effects/shaders/DefaultVert.vert?raw'; import {Tracklet} from '@/common/tracker/Tracker'; import {normalizeBounds, preAllocateTextures} from '@/common/utils/ShaderUtils'; import {RLEObject, decode} from '@/jscocotools/mask'; import invariant from 'invariant'; import {CanvasForm} from 'pts'; export default class BurstGLEffect extends BaseGLEffect { private _numMasks: number = 0; private _numMasksUniformLocation: WebGLUniformLocation | null = null; // Must from start 1, main texture takes. private _masksTextureUnitStart: number = 1; private _maskTextures: WebGLTexture[] = []; constructor() { super(4); this.vertexShaderSource = vertexShaderSource; this.fragmentShaderSource = fragmentShaderSource; } protected setupUniforms( gl: WebGL2RenderingContext, program: WebGLProgram, init: EffectInit, ): void { super.setupUniforms(gl, program, init); this._numMasksUniformLocation = gl.getUniformLocation(program, 'uNumMasks'); gl.uniform1i(this._numMasksUniformLocation, this._numMasks); // We know the max number of textures, pre-allocate 3. 
this._maskTextures = preAllocateTextures(gl, 3); } apply(form: CanvasForm, context: EffectFrameContext, _tracklets: Tracklet[]) { const gl = this._gl; const program = this._program; if (!program) { return; } invariant(gl !== null, 'WebGL2 context is required'); gl.clearColor(0.0, 0.0, 0.0, 1.0); gl.clear(gl.COLOR_BUFFER_BIT); const styleIndex = Math.floor(this.variant / 2) % 2; // dynamic uniforms per frame gl.uniform1i(this._numMasksUniformLocation, context.masks.length); gl.uniform1i( gl.getUniformLocation(program, 'uLineColor'), this.variant % 2 === 0 ? 1 : 0, ); gl.uniform1i( gl.getUniformLocation(program, 'uInterleave'), styleIndex === 0 ? 0 : 1, ); gl.activeTexture(gl.TEXTURE0); gl.bindTexture(gl.TEXTURE_2D, this._frameTexture); gl.texImage2D( gl.TEXTURE_2D, 0, gl.RGBA, context.width, context.height, 0, gl.RGBA, gl.UNSIGNED_BYTE, context.frame, ); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST); // Create and bind 2D textures for each mask context.masks.forEach((mask, index) => { const decodedMask = decode([mask.bitmap as RLEObject]); const maskData = decodedMask.data as Uint8Array; gl.activeTexture(gl.TEXTURE0 + index + this._masksTextureUnitStart); gl.bindTexture(gl.TEXTURE_2D, this._maskTextures[index]); const boundaries = normalizeBounds( mask.bounds[0], mask.bounds[1], context.width, context.height, ); // dynamic uniforms per mask gl.uniform1i( gl.getUniformLocation(program, `uMaskTexture${index}`), this._masksTextureUnitStart + index, ); const color = hexToRgb(context.maskColors[index]); gl.uniform4f( gl.getUniformLocation(program, `uMaskColor${index}`), color.r, color.g, color.b, color.a, ); gl.uniform4fv(gl.getUniformLocation(program, `bbox${index}`), boundaries); gl.pixelStorei(gl.UNPACK_ALIGNMENT, 1); gl.texImage2D( gl.TEXTURE_2D, 0, gl.LUMINANCE, context.height, context.width, 0, gl.LUMINANCE, gl.UNSIGNED_BYTE, maskData, ); }); gl.drawArrays(gl.TRIANGLE_STRIP, 0, 
4);

    // Unbind textures
    gl.bindTexture(gl.TEXTURE_2D, null);
    context.masks.forEach((_, index) => {
      gl.activeTexture(gl.TEXTURE0 + index + this._masksTextureUnitStart);
      gl.bindTexture(gl.TEXTURE_2D, null);
    });

    const ctx = form.ctx;
    invariant(this._canvas !== null, 'canvas is required');
    ctx.drawImage(this._canvas, 0, 0);
  }

  async cleanup(): Promise<void> {
    super.cleanup();
    if (this._gl != null) {
      // Delete mask textures to prevent memory leaks
      this._maskTextures.forEach(texture => {
        if (texture != null && this._gl != null) {
          this._gl.deleteTexture(texture);
        }
      });
      this._maskTextures = [];
    }
  }
}

================================================
FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/CutoutGLEffect.ts
================================================
/**
 * Copyright (c) Meta Platforms, Inc. and affiliates.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
*/ import BaseGLEffect from '@/common/components/video/effects/BaseGLEffect'; import { EffectFrameContext, EffectInit, } from '@/common/components/video/effects/Effect'; import fragmentShaderSource from '@/common/components/video/effects/shaders/Cutout.frag?raw'; import vertexShaderSource from '@/common/components/video/effects/shaders/DefaultVert.vert?raw'; import {Tracklet} from '@/common/tracker/Tracker'; import {preAllocateTextures} from '@/common/utils/ShaderUtils'; import {RLEObject, decode} from '@/jscocotools/mask'; import invariant from 'invariant'; import {CanvasForm} from 'pts'; export default class CutoutGLEffect extends BaseGLEffect { private _numMasks: number = 0; private _numMasksUniformLocation: WebGLUniformLocation | null = null; // Must from start 1, main texture takes. private _masksTextureUnitStart: number = 1; private _maskTextures: WebGLTexture[] = []; constructor() { super(4); this.vertexShaderSource = vertexShaderSource; this.fragmentShaderSource = fragmentShaderSource; } protected setupUniforms( gl: WebGL2RenderingContext, program: WebGLProgram, init: EffectInit, ): void { super.setupUniforms(gl, program, init); this._numMasksUniformLocation = gl.getUniformLocation(program, 'uNumMasks'); gl.uniform1i(this._numMasksUniformLocation, this._numMasks); // We know the max number of textures, pre-allocate 3. 
this._maskTextures = preAllocateTextures(gl, 3);
  }

  apply(form: CanvasForm, context: EffectFrameContext, _tracklets: Tracklet[]) {
    const gl = this._gl;
    const program = this._program;
    if (!program) {
      return;
    }
    invariant(gl !== null, 'WebGL2 context is required');

    gl.clearColor(0.0, 0.0, 0.0, 1.0);
    gl.clear(gl.COLOR_BUFFER_BIT);

    // dynamic uniforms per frame
    const contrastValue = [1.0, 1.6, 0.75, 0.0][this.variant % 4];
    gl.uniform1f(gl.getUniformLocation(program, 'uContrast'), contrastValue);
    gl.uniform1i(this._numMasksUniformLocation, context.masks.length);

    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D, this._frameTexture);
    gl.texImage2D(
      gl.TEXTURE_2D,
      0,
      gl.RGBA,
      context.width,
      context.height,
      0,
      gl.RGBA,
      gl.UNSIGNED_BYTE,
      context.frame,
    );
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

    // Create and bind 2D textures for each mask
    context.masks.forEach((mask, index) => {
      const decodedMask = decode([mask.bitmap as RLEObject]);
      const maskData = decodedMask.data as Uint8Array;
      gl.activeTexture(gl.TEXTURE0 + index + this._masksTextureUnitStart);
      gl.bindTexture(gl.TEXTURE_2D, this._maskTextures[index]);

      // dynamic uniforms per mask
      gl.uniform1i(
        gl.getUniformLocation(program, `uMaskTexture${index}`),
        this._masksTextureUnitStart + index,
      );
      gl.pixelStorei(gl.UNPACK_ALIGNMENT, 1);
      gl.texImage2D(
        gl.TEXTURE_2D,
        0,
        gl.LUMINANCE,
        context.height,
        context.width,
        0,
        gl.LUMINANCE,
        gl.UNSIGNED_BYTE,
        maskData,
      );
    });

    gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);

    // Unbind textures
    gl.bindTexture(gl.TEXTURE_2D, null);
    context.masks.forEach((_, index) => {
      gl.activeTexture(gl.TEXTURE0 + index + this._masksTextureUnitStart);
      gl.bindTexture(gl.TEXTURE_2D, null);
    });

    const ctx = form.ctx;
    invariant(this._canvas !== null, 'canvas is required');
    ctx.drawImage(this._canvas, 0, 0);
  }

  async cleanup(): Promise<void> {
    super.cleanup();
    if (this._gl != null) {
      // Delete mask textures to prevent memory leaks
      this._maskTextures.forEach(texture => {
        if (texture != null && this._gl != null) {
          this._gl.deleteTexture(texture);
        }
      });
      this._maskTextures = [];
    }
  }
}

================================================
FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/DesaturateEffect.ts
================================================
/**
 * Copyright (c) Meta Platforms, Inc. and affiliates.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
import {Tracklet} from '@/common/tracker/Tracker';
import {CanvasForm} from 'pts';
import {AbstractEffect, EffectFrameContext} from './Effect';

export default class DesaturateEffect extends AbstractEffect {
  constructor() {
    super(3);
  }

  apply(form: CanvasForm, context: EffectFrameContext, _tracklets: Tracklet[]) {
    form.ctx.save();
    form.ctx.filter = ['contrast(100%)', 'contrast(150%)', 'contrast(50%)'][
      this.variant % 3
    ];
    form.image([0, 0], context.frame);
    form.ctx.globalCompositeOperation = 'hue';
    form.fillOnly('#fff').rect([
      [0, 0],
      [context.width, context.height],
    ]);
    form.ctx.restore();
  }
}

================================================
FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/Effect.ts
================================================
/**
 * Copyright (c) Meta Platforms, Inc. and affiliates.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
import {Effects} from '@/common/components/video/effects/Effects';
import {Tracklet} from '@/common/tracker/Tracker';
import {RLEObject} from '@/jscocotools/mask';
import {CanvasForm} from 'pts';

export type EffectLayers = {
  background: keyof Effects;
  highlight: keyof Effects;
};

export type EffectOptions = {
  variant: number;
};

export type EffectInit = {
  width: number;
  height: number;
  gl?: WebGL2RenderingContext;
  canvas?: OffscreenCanvas;
};

export type EffectMask = {
  bitmap: ImageBitmap | RLEObject;
  bounds: [[number, number], [number, number]];
};

export type EffectActionPoint = {
  objectId: number;
  position: [number, number];
};

export type EffectFrameContext = {
  frameIndex: number;
  totalFrames: number;
  fps: number;
  width: number;
  height: number;
  masks: EffectMask[];
  maskColors: string[];
  frame: ImageBitmap;
  timeParameter?: number;
  actionPoint: EffectActionPoint | null;
};

export interface Effect {
  variant: number;
  numVariants: number;
  nextVariant(): void;
  setup(init: EffectInit): Promise<void>;
  update(options: EffectOptions): Promise<void>;
  cleanup(): Promise<void>;
  apply(
    form: CanvasForm,
    context: EffectFrameContext,
    tracklets: Tracklet[],
  ): void;
}

export abstract class AbstractEffect implements Effect {
  public numVariants: number;
  public variant: number;

  constructor(numVariants: number) {
    this.numVariants = numVariants;
    this.variant = 0;
  }

  nextVariant() {
    // Cycle through variants
    this.variant = (this.variant + 1) % this.numVariants;
  }

  async setup(_init: EffectInit): Promise<void> {
    // noop
  }

  async update(options: EffectOptions): Promise<void> {
    this.variant = options.variant;
  }

  async cleanup(): Promise<void> {
    // noop
  }

  abstract apply(
    form: CanvasForm,
    context: EffectFrameContext,
    tracklets: Tracklet[],
  ): void;
}

================================================
FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/EffectUtils.ts
================================================
/**
 * Copyright (c) Meta Platforms, Inc. and affiliates.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
import invariant from 'invariant';
import {Group} from 'pts';
import {EffectFrameContext} from './Effect';

export type MaskCanvas = {
  maskCanvas: OffscreenCanvas;
  bounds: number[][];
  scaleX: number;
  scaleY: number;
};

import {Effects} from '@/common/components/video/effects/Effects';
import type {CarbonIconType} from '@carbon/icons-react';
import {
  AppleDash,
  Asterisk,
  Barcode,
  CenterCircle,
  ColorPalette,
  ColorSwitch,
  Development,
  Erase,
  FaceWink,
  Humidity,
  Image,
  Overlay,
  TextFont,
} from '@carbon/icons-react';

export type DemoEffect = {
  title: string;
  Icon: CarbonIconType;
  effectName: keyof Effects;
};

export const backgroundEffects: DemoEffect[] = [
  {title: 'Original', Icon: Image, effectName: 'Original'},
  {title: 'Erase', Icon: Erase, effectName: 'EraseBackground'},
  {title: 'Gradient', Icon: ColorPalette, effectName: 'Gradient'},
  {title: 'Pixelate', Icon: Development, effectName: 'Pixelate'},
  {title: 'Desaturate', Icon: ColorSwitch, effectName: 'Desaturate'},
  {title: 'Text', Icon: TextFont, effectName: 'BackgroundText'},
  {title:
'Blur', Icon: Humidity, effectName: 'BackgroundBlur'}, {title: 'Outline', Icon: AppleDash, effectName: 'Sobel'}, ]; export const highlightEffects: DemoEffect[] = [ {title: 'Original', Icon: Image, effectName: 'Cutout'}, {title: 'Erase', Icon: Erase, effectName: 'EraseForeground'}, {title: 'Gradient', Icon: ColorPalette, effectName: 'VibrantMask'}, {title: 'Pixelate', Icon: Development, effectName: 'PixelateMask'}, { title: 'Overlay', Icon: Overlay, effectName: 'Overlay', }, {title: 'Emoji', Icon: FaceWink, effectName: 'Replace'}, {title: 'Burst', Icon: Asterisk, effectName: 'Burst'}, {title: 'Spotlight', Icon: CenterCircle, effectName: 'Scope'}, ]; export const moreEffects: DemoEffect[] = [ {title: 'Noisy', Icon: Barcode, effectName: 'NoisyMask'}, ]; // Store existing content in a temporary canvas // This can be used in HighlightEffect composite blending, so that the existing background effect can be put back via "destination-over" export function copyCanvasContent( ctx: CanvasRenderingContext2D, effectContext: EffectFrameContext, ): OffscreenCanvas { const {width, height} = effectContext; const previousContent = ctx.getImageData(0, 0, width, height); const tempCanvas = new OffscreenCanvas(width, height); const tempCtx = tempCanvas.getContext('2d'); tempCtx?.putImageData(previousContent, 0, 0); return tempCanvas; } export function isInvalidMask(bound: number[][] | Group) { return ( bound[0].length < 2 || bound[1].length < 2 || bound[1][0] - bound[0][0] < 1 || bound[1][1] - bound[0][1] < 1 ); } export type MaskRenderingData = { canvas: OffscreenCanvas; scale: number[]; bounds: number[][]; }; export class EffectLayer { canvas: OffscreenCanvas; ctx: OffscreenCanvasRenderingContext2D; width: number; height: number; constructor(context: EffectFrameContext) { this.canvas = new OffscreenCanvas(context.width, context.height); const ctx = this.canvas.getContext('2d'); invariant(ctx !== null, 'context cannot be null'); this.ctx = ctx; this.width = context.width; this.height 
= context.height; } image(source: CanvasImageSourceWebCodecs) { this.ctx.drawImage(source, 0, 0); } filter(filterString: string) { this.ctx.filter = filterString; } composite(blend: GlobalCompositeOperation) { this.ctx.globalCompositeOperation = blend; } fill(color: string) { this.ctx.fillStyle = color; this.ctx.fillRect(0, 0, this.width, this.height); } clear() { this.ctx.clearRect(0, 0, this.width, this.height); } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/Effects.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import BackgroundTextEffect from './BackgroundTextEffect'; import DesaturateEffect from './DesaturateEffect'; import {Effect} from './Effect'; import EraseBackgroundEffect from './EraseBackgroundEffect'; import OriginalEffect from './OriginalEffect'; import OverlayEffect from './OverlayEffect'; import ArrowGLEffect from './ArrowGLEffect'; import BackgroundBlurEffect from './BackgroundBlurEffect'; import BurstGLEffect from './BurstGLEffect'; import CutoutGLEffect from './CutoutGLEffect'; import EraseForegroundGLEffect from './EraseForegroundGLEffect'; import GradientEffect from './GradientEffect'; import NoisyMaskEffect from './NoisyMaskEffect'; import PixelateEffect from './PixelateEffect'; import PixelateMaskGLEffect from './PixelateMaskGLEffect'; import ReplaceGLEffect from './ReplaceGLEffect'; import ScopeGLEffect from './ScopeGLEffect'; import SobelEffect from './SobelEffect'; import VibrantMaskEffect from './VibrantMaskEffect'; export type Effects = { /* Backgrounds */ Original: Effect; EraseBackground: Effect; Desaturate: Effect; Pixelate: Effect; Sobel: Effect; BackgroundText: Effect; BackgroundBlur: Effect; Gradient: Effect; /* Highlights */ Overlay: Effect; EraseForeground: Effect; Cutout: Effect; Scope: Effect; VibrantMask: Effect; Replace: Effect; Burst: Effect; PixelateMask: Effect; Arrow: Effect; /* More Effects */ NoisyMask: Effect; }; export default { /* Backgrounds */ Original: new OriginalEffect(), EraseBackground: new EraseBackgroundEffect(), Desaturate: new DesaturateEffect(), Pixelate: new PixelateEffect(), Sobel: new SobelEffect(), BackgroundText: new BackgroundTextEffect(), BackgroundBlur: new BackgroundBlurEffect(), Gradient: new GradientEffect(), /* Highlights */ Overlay: new OverlayEffect(), EraseForeground: new EraseForegroundGLEffect(), Cutout: new CutoutGLEffect(), Scope: new ScopeGLEffect(), VibrantMask: new VibrantMaskEffect(), Replace: new ReplaceGLEffect(), Burst: new BurstGLEffect(), PixelateMask: new PixelateMaskGLEffect(), 
Arrow: new ArrowGLEffect(), /* More Effects */ NoisyMask: new NoisyMaskEffect(), } as Effects; export enum EffectIndex { BACKGROUND = 0, HIGHLIGHT = 1, } type EffectComboItem = {name: keyof Effects; variant: number}; export type EffectsCombo = [EffectComboItem, EffectComboItem]; export const effectPresets: EffectsCombo[] = [ [ {name: 'Original', variant: 0}, {name: 'Overlay', variant: 0}, ], [ {name: 'Desaturate', variant: 0}, {name: 'Burst', variant: 2}, ], [ {name: 'Desaturate', variant: 1}, {name: 'VibrantMask', variant: 0}, ], [ {name: 'BackgroundText', variant: 1}, {name: 'Cutout', variant: 0}, ], [ {name: 'Original', variant: 0}, {name: 'PixelateMask', variant: 1}, ], [ {name: 'Desaturate', variant: 2}, {name: 'Cutout', variant: 0}, ], [ {name: 'Sobel', variant: 3}, {name: 'Cutout', variant: 1}, ], [ {name: 'Sobel', variant: 2}, {name: 'EraseForeground', variant: 2}, ], [ {name: 'EraseBackground', variant: 0}, {name: 'EraseForeground', variant: 0}, ], ]; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/EraseBackgroundEffect.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import {Tracklet} from '@/common/tracker/Tracker'; import {CanvasForm} from 'pts'; import {AbstractEffect, EffectFrameContext} from './Effect'; export default class EraseBackgroundEffect extends AbstractEffect { constructor() { super(3); } apply( form: CanvasForm, context: EffectFrameContext, _tracklets: Tracklet[], ): void { const fillColor = ['#000', '#fff', '#0f0'][this.variant % 3]; form.fillOnly(fillColor).rect([ [0, 0], [context.width, context.height], ]); } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/EraseForegroundEffect.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import {Tracklet} from '@/common/tracker/Tracker'; import {CanvasForm} from 'pts'; import {AbstractEffect, EffectFrameContext} from './Effect'; import {EffectLayer} from './EffectUtils'; export default class EraseForegroundEffect extends AbstractEffect { constructor() { super(3); } apply( form: CanvasForm, context: EffectFrameContext, _tracklets: Tracklet[], ): void { const effect = new EffectLayer(context); const fillColor = ['#fff', '#000', '#0f0'][this.variant % 3]; for (const mask of context.masks) { effect.image(mask.bitmap as ImageBitmap); effect.composite('source-in'); effect.fill(fillColor); } form.image([0, 0], effect.canvas); } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/EraseForegroundGLEffect.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import BaseGLEffect from '@/common/components/video/effects/BaseGLEffect'; import { EffectFrameContext, EffectInit, } from '@/common/components/video/effects/Effect'; import vertexShaderSource from '@/common/components/video/effects/shaders/DefaultVert.vert?raw'; import fragmentShaderSource from '@/common/components/video/effects/shaders/EraseForeground.frag?raw'; import {Tracklet} from '@/common/tracker/Tracker'; import {preAllocateTextures} from '@/common/utils/ShaderUtils'; import {RLEObject, decode} from '@/jscocotools/mask'; import invariant from 'invariant'; import {CanvasForm} from 'pts'; export default class EraseForegroundGLEffect extends BaseGLEffect { private _numMasks: number = 0; private _numMasksUniformLocation: WebGLUniformLocation | null = null; private _maskTextures: WebGLTexture[] = []; constructor() { super(3); this.vertexShaderSource = vertexShaderSource; this.fragmentShaderSource = fragmentShaderSource; } protected setupUniforms( gl: WebGL2RenderingContext, program: WebGLProgram, init: EffectInit, ): void { super.setupUniforms(gl, program, init); this._numMasksUniformLocation = gl.getUniformLocation(program, 'uNumMasks'); gl.uniform1i(this._numMasksUniformLocation, this._numMasks); // We know the max number of textures, pre-allocate 3. 
this._maskTextures = preAllocateTextures(gl, 3);
  }

  apply(form: CanvasForm, context: EffectFrameContext, _tracklets: Tracklet[]) {
    const gl = this._gl;
    const program = this._program;
    invariant(gl !== null, 'WebGL2 context is required');
    invariant(program !== null, 'No WebGL program found');

    const fillColor = [
      [1, 1, 1],
      [0, 0, 0],
      [0, 1, 0],
    ][this.variant % 3];

    gl.clearColor(0.0, 0.0, 0.0, 1.0);
    gl.clear(gl.COLOR_BUFFER_BIT);

    gl.uniform1i(this._numMasksUniformLocation, context.masks.length);
    gl.uniform3fv(gl.getUniformLocation(program, 'uBgColor'), fillColor);

    context.masks.forEach((mask, index) => {
      const decodedMask = decode([mask.bitmap as RLEObject]);
      const maskData = decodedMask.data as Uint8Array;
      gl.activeTexture(gl.TEXTURE0 + index);
      gl.bindTexture(gl.TEXTURE_2D, this._maskTextures[index]);
      gl.uniform1i(
        gl.getUniformLocation(program, `uMaskTexture${index}`),
        index,
      );
      gl.pixelStorei(gl.UNPACK_ALIGNMENT, 1);
      gl.texImage2D(
        gl.TEXTURE_2D,
        0,
        gl.LUMINANCE,
        context.height,
        context.width,
        0,
        gl.LUMINANCE,
        gl.UNSIGNED_BYTE,
        maskData,
      );
    });

    gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);

    // Unbind textures
    gl.bindTexture(gl.TEXTURE_2D, null);
    context.masks.forEach((_, index) => {
      gl.activeTexture(gl.TEXTURE0 + index);
      gl.bindTexture(gl.TEXTURE_2D, null);
    });

    const ctx = form.ctx;
    invariant(this._canvas !== null, 'canvas is required');
    if (context.masks.length) {
      ctx.drawImage(this._canvas, 0, 0);
    }
  }

  async cleanup(): Promise<void> {
    super.cleanup();
    if (this._gl != null) {
      // Delete mask textures to prevent memory leaks
      this._maskTextures.forEach(texture => {
        if (texture != null && this._gl != null) {
          this._gl.deleteTexture(texture);
        }
      });
      this._maskTextures = [];
    }
  }
}

================================================
FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/GradientEffect.ts
================================================
/**
 * Copyright (c) Meta Platforms, Inc. and affiliates.
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import BaseGLEffect from '@/common/components/video/effects/BaseGLEffect'; import { EffectFrameContext, EffectInit, } from '@/common/components/video/effects/Effect'; import vertexShaderSource from '@/common/components/video/effects/shaders/DefaultVert.vert?raw'; import fragmentShaderSource from '@/common/components/video/effects/shaders/Gradient.frag?raw'; import {Tracklet} from '@/common/tracker/Tracker'; import {generateLUTDATA, load3DLUT} from '@/common/utils/ShaderUtils'; import invariant from 'invariant'; import {CanvasForm} from 'pts'; export default class GradientEffect extends BaseGLEffect { private lutSize: number = 2; private _lutTextures: WebGLTexture[] = []; // Must be 1, main background texture takes 0. 
private _extraTextureUnit: number = 1; constructor() { super(3); this.vertexShaderSource = vertexShaderSource; this.fragmentShaderSource = fragmentShaderSource; } protected setupUniforms( gl: WebGL2RenderingContext, program: WebGLProgram, init: EffectInit, ): void { super.setupUniforms(gl, program, init); gl.uniform1i( gl.getUniformLocation(program, 'uColorGradeLUT'), this._extraTextureUnit, ); this._lutTextures = []; // clear any previous pool of textures for (let i = 0; i < this.numVariants; i++) { const _lutData = generateLUTDATA(this.lutSize); const _extraTexture = load3DLUT(gl, this.lutSize, _lutData); this._lutTextures.push(_extraTexture as WebGLTexture); } } apply(form: CanvasForm, context: EffectFrameContext, _tracklets: Tracklet[]) { const gl = this._gl; const program = this._program; if (!program) { return; } invariant(gl !== null, 'WebGL2 context is required'); gl.clearColor(0.0, 0.0, 0.0, 1.0); gl.clear(gl.COLOR_BUFFER_BIT); // Bind the LUT texture to texture unit 1 const lutTexture = this._lutTextures[this.variant]; gl.activeTexture(gl.TEXTURE1); gl.bindTexture(gl.TEXTURE_3D, lutTexture); gl.activeTexture(gl.TEXTURE0); gl.bindTexture(gl.TEXTURE_2D, this._frameTexture); gl.texImage2D( gl.TEXTURE_2D, 0, gl.RGBA, context.width, context.height, 0, gl.RGBA, gl.UNSIGNED_BYTE, context.frame, ); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST); gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4); const ctx = form.ctx; invariant(this._canvas !== null, 'canvas is required'); ctx.drawImage(this._canvas, 0, 0); } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/NoisyMaskEffect.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import BaseGLEffect from '@/common/components/video/effects/BaseGLEffect'; import { EffectFrameContext, EffectInit, } from '@/common/components/video/effects/Effect'; import vertexShaderSource from '@/common/components/video/effects/shaders/DefaultVert.vert?raw'; import fragmentShaderSource from '@/common/components/video/effects/shaders/NoisyMask.frag?raw'; import {Tracklet} from '@/common/tracker/Tracker'; import {RLEObject, decode} from '@/jscocotools/mask'; import invariant from 'invariant'; import {CanvasForm} from 'pts'; export default class NoisyMaskEffect extends BaseGLEffect { private _numMasks: number = 0; private _numMasksUniformLocation: WebGLUniformLocation | null = null; private _currentFrameLocation: WebGLUniformLocation | null = null; constructor() { super(1); this.vertexShaderSource = vertexShaderSource; this.fragmentShaderSource = fragmentShaderSource; } protected setupUniforms( gl: WebGL2RenderingContext, program: WebGLProgram, init: EffectInit, ): void { super.setupUniforms(gl, program, init); this._numMasksUniformLocation = gl.getUniformLocation(program, 'uNumMasks'); gl.uniform1i(this._numMasksUniformLocation, this._numMasks); this._currentFrameLocation = gl.getUniformLocation( program, 'uCurrentFrame', ); gl.uniform1f(this._currentFrameLocation, 0); } apply(form: CanvasForm, context: EffectFrameContext, _tracklets: Tracklet[]) { const gl = this._gl; const program = this._program; if (!program) { return; } 
invariant(gl !== null, 'WebGL2 context is required'); gl.clearColor(0.0, 0.0, 0.0, 1.0); gl.clear(gl.COLOR_BUFFER_BIT); // dynamic uniforms per frame gl.uniform1f(this._currentFrameLocation, context.frameIndex); gl.uniform1i(this._numMasksUniformLocation, context.masks.length); // Create and bind 2D textures for each mask context.masks.forEach((mask, index) => { const maskTexture = gl.createTexture(); const decodedMask = decode([mask.bitmap as RLEObject]); const maskData = decodedMask.data as Uint8Array; gl.activeTexture(gl.TEXTURE0 + index); gl.bindTexture(gl.TEXTURE_2D, maskTexture); // dynamic uniforms per mask gl.uniform1i( gl.getUniformLocation(program, `uMaskTexture${index}`), index, ); gl.pixelStorei(gl.UNPACK_ALIGNMENT, 1); gl.texImage2D( gl.TEXTURE_2D, 0, gl.LUMINANCE, context.height, context.width, 0, gl.LUMINANCE, gl.UNSIGNED_BYTE, maskData, ); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE); gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4); }); const ctx = form.ctx; invariant(this._canvas !== null, 'canvas is required'); ctx.drawImage(this._canvas, 0, 0); } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/OriginalEffect.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {Tracklet} from '@/common/tracker/Tracker'; import {CanvasForm} from 'pts'; import {AbstractEffect, EffectFrameContext} from './Effect'; export default class OriginalEffect extends AbstractEffect { constructor() { super(3); } apply( form: CanvasForm, context: EffectFrameContext, _tracklets: Tracklet[], ): void { form.ctx.save(); if (this.variant % 3 === 1) { form.ctx.filter = 'saturate(120%) contrast(120%)'; } else if (this.variant % 3 === 2) { form.ctx.filter = 'brightness(70%) contrast(115%)'; } form.image([0, 0], context.frame); form.ctx.restore(); if (this.variant % 3 === 2) { form.fillOnly('#00000066').rect([ [0, 0], [context.width, context.height], ]); } } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/OverlayEffect.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import {hexToRgb} from '@/common/components/video/editor/VideoEditorUtils'; import BaseGLEffect from '@/common/components/video/effects/BaseGLEffect'; import { EffectFrameContext, EffectInit, } from '@/common/components/video/effects/Effect'; import vertexShaderSource from '@/common/components/video/effects/shaders/DefaultVert.vert?raw'; import fragmentShaderSource from '@/common/components/video/effects/shaders/Overlay.frag?raw'; import {Tracklet} from '@/common/tracker/Tracker'; import { findIndexByTrackletId, preAllocateTextures, } from '@/common/utils/ShaderUtils'; import {RLEObject, decode} from '@/jscocotools/mask'; import invariant from 'invariant'; import {CanvasForm} from 'pts'; export default class OverlayEffect extends BaseGLEffect { private _numMasks: number = 0; private _numMasksUniformLocation: WebGLUniformLocation | null = null; // Must start from 1, main texture takes 0. private _masksTextureUnitStart: number = 1; private _maskTextures: WebGLTexture[] = []; private _clickPosition: number[] | null = null; private _activeMask: number = 0; constructor() { super(8); this.vertexShaderSource = vertexShaderSource; this.fragmentShaderSource = fragmentShaderSource; } protected setupUniforms( gl: WebGL2RenderingContext, program: WebGLProgram, init: EffectInit, ): void { super.setupUniforms(gl, program, init); this._numMasksUniformLocation = gl.getUniformLocation(program, 'uNumMasks'); gl.uniform1i(this._numMasksUniformLocation, this._numMasks); // We know the max number of textures, pre-allocate 3. 
this._maskTextures = preAllocateTextures(gl, 3); } apply(form: CanvasForm, context: EffectFrameContext, _tracklets: Tracklet[]) { const gl = this._gl; const program = this._program; invariant(gl !== null, 'WebGL2 context is required'); invariant(program !== null, 'No WebGL program found'); gl.clearColor(0.0, 0.0, 0.0, 1.0); gl.clear(gl.COLOR_BUFFER_BIT); const opacity = [0.5, 0.75, 0.35, 0.95][this.variant % 4]; gl.uniform1f( gl.getUniformLocation(program, 'uTime'), context.timeParameter ?? 1.5, // Pass a constant value when no time parameter ); gl.uniform1f(gl.getUniformLocation(program, 'uOpacity'), opacity); gl.uniform1i(this._numMasksUniformLocation, context.masks.length); gl.uniform1i( gl.getUniformLocation(program, 'uBorder'), this.variant % this.numVariants < 4 ? 1 : 0, ); if (context.actionPoint) { const clickPos = [ context.actionPoint.position[0] / context.width, context.actionPoint.position[1] / context.height, ]; this._clickPosition = clickPos; this._activeMask = findIndexByTrackletId( context.actionPoint.objectId, _tracklets, ); } gl.uniform2fv( gl.getUniformLocation(program, 'uClickPos'), this._clickPosition ??
[0, 0], ); gl.uniform1i( gl.getUniformLocation(program, 'uActiveMask'), this._activeMask, ); // Activate original frame texture gl.activeTexture(gl.TEXTURE0); gl.bindTexture(gl.TEXTURE_2D, this._frameTexture); gl.texImage2D( gl.TEXTURE_2D, 0, gl.RGBA, context.width, context.height, 0, gl.RGBA, gl.UNSIGNED_BYTE, context.frame, ); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST); context.masks.forEach((mask, index) => { const decodedMask = decode([mask.bitmap as RLEObject]); const maskData = decodedMask.data as Uint8Array; gl.activeTexture(gl.TEXTURE0 + index + this._masksTextureUnitStart); gl.bindTexture(gl.TEXTURE_2D, this._maskTextures[index]); gl.uniform1i( gl.getUniformLocation(program, `uMaskTexture${index}`), this._masksTextureUnitStart + index, ); const color = hexToRgb(context.maskColors[index]); gl.uniform4f( gl.getUniformLocation(program, `uMaskColor${index}`), color.r, color.g, color.b, color.a, ); // 1 byte alignment gl.pixelStorei(gl.UNPACK_ALIGNMENT, 1); gl.texImage2D( gl.TEXTURE_2D, 0, gl.LUMINANCE, context.height, context.width, 0, gl.LUMINANCE, gl.UNSIGNED_BYTE, maskData, ); }); gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4); // Unbind textures gl.bindTexture(gl.TEXTURE_2D, null); context.masks.forEach((_, index) => { gl.activeTexture(gl.TEXTURE0 + index + this._masksTextureUnitStart); gl.bindTexture(gl.TEXTURE_2D, null); }); const ctx = form.ctx; invariant(this._canvas !== null, 'canvas is required'); ctx.drawImage(this._canvas, 0, 0); this._clickPosition = null; } async cleanup(): Promise<void> { super.cleanup(); if (this._gl != null) { // Delete mask textures to prevent memory leaks this._maskTextures.forEach(texture => { if (texture != null && this._gl != null) { this._gl.deleteTexture(texture); } }); this._maskTextures = []; } } } ================================================ FILE:
auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/PixelateEffect.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import BaseGLEffect from '@/common/components/video/effects/BaseGLEffect'; import { EffectFrameContext, EffectInit, } from '@/common/components/video/effects/Effect'; import vertexShaderSource from '@/common/components/video/effects/shaders/DefaultVert.vert?raw'; import fragmentShaderSource from '@/common/components/video/effects/shaders/Pixelate.frag?raw'; import {Tracklet} from '@/common/tracker/Tracker'; import invariant from 'invariant'; import {CanvasForm} from 'pts'; export default class PixelateEffect extends BaseGLEffect { private _blockSize: number = 10.0; constructor() { super(3); this.vertexShaderSource = vertexShaderSource; this.fragmentShaderSource = fragmentShaderSource; } protected setupUniforms( gl: WebGL2RenderingContext, program: WebGLProgram, init: EffectInit, ): void { super.setupUniforms(gl, program, init); gl.uniform1f(gl.getUniformLocation(program, 'uBlockSize'), this._blockSize); } apply(form: CanvasForm, context: EffectFrameContext, _tracklets: Tracklet[]) { const gl = this._gl; const program = this._program; if (!program) { return; } invariant(gl !== null, 'WebGL2 context is required'); gl.clearColor(0.0, 0.0, 0.0, 1.0); gl.clear(gl.COLOR_BUFFER_BIT); const blockSize = [10, 20, 30][this.variant]; 
// dynamic uniforms per frame gl.uniform1f(gl.getUniformLocation(program, 'uBlockSize'), blockSize); gl.activeTexture(gl.TEXTURE0); gl.bindTexture(gl.TEXTURE_2D, this._frameTexture); gl.texImage2D( gl.TEXTURE_2D, 0, gl.RGBA, context.width, context.height, 0, gl.RGBA, gl.UNSIGNED_BYTE, context.frame, ); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST); gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true); // Apply shader gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4); const ctx = form.ctx; invariant(this._canvas !== null, 'canvas is required'); ctx.drawImage(this._canvas, 0, 0); } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/PixelateMaskGLEffect.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import BaseGLEffect from '@/common/components/video/effects/BaseGLEffect'; import { EffectFrameContext, EffectInit, } from '@/common/components/video/effects/Effect'; import vertexShaderSource from '@/common/components/video/effects/shaders/DefaultVert.vert?raw'; import fragmentShaderSource from '@/common/components/video/effects/shaders/PixelateMask.frag?raw'; import {Tracklet} from '@/common/tracker/Tracker'; import {preAllocateTextures} from '@/common/utils/ShaderUtils'; import {RLEObject, decode} from '@/jscocotools/mask'; import invariant from 'invariant'; import {CanvasForm} from 'pts'; export default class PixelateMaskGLEffect extends BaseGLEffect { private _numMasks: number = 0; private _numMasksUniformLocation: WebGLUniformLocation | null = null; // Must start from 1, main texture takes 0. private _masksTextureUnitStart: number = 1; private _maskTextures: WebGLTexture[] = []; constructor() { super(3); this.vertexShaderSource = vertexShaderSource; this.fragmentShaderSource = fragmentShaderSource; } protected setupUniforms( gl: WebGL2RenderingContext, program: WebGLProgram, init: EffectInit, ): void { super.setupUniforms(gl, program, init); this._numMasksUniformLocation = gl.getUniformLocation(program, 'uNumMasks'); gl.uniform1i(this._numMasksUniformLocation, this._numMasks); // We know the max number of textures, pre-allocate 3.
this._maskTextures = preAllocateTextures(gl, 3); } apply(form: CanvasForm, context: EffectFrameContext, _tracklets: Tracklet[]) { const gl = this._gl; const program = this._program; if (!program) { return; } invariant(gl !== null, 'WebGL2 context is required'); gl.clearColor(0.0, 0.0, 0.0, 1.0); gl.clear(gl.COLOR_BUFFER_BIT); const blockSize = [10, 20, 30][this.variant]; // dynamic uniforms per frame gl.uniform1i(this._numMasksUniformLocation, context.masks.length); gl.uniform1f(gl.getUniformLocation(program, 'uBlockSize'), blockSize); gl.activeTexture(gl.TEXTURE0); gl.bindTexture(gl.TEXTURE_2D, this._frameTexture); gl.texImage2D( gl.TEXTURE_2D, 0, gl.RGBA, context.width, context.height, 0, gl.RGBA, gl.UNSIGNED_BYTE, context.frame, ); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST); // Create and bind 2D textures for each mask context.masks.forEach((mask, index) => { const decodedMask = decode([mask.bitmap as RLEObject]); const maskData = decodedMask.data as Uint8Array; gl.activeTexture(gl.TEXTURE0 + index + this._masksTextureUnitStart); gl.bindTexture(gl.TEXTURE_2D, this._maskTextures[index]); gl.pixelStorei(gl.UNPACK_ALIGNMENT, 1); gl.texImage2D( gl.TEXTURE_2D, 0, gl.LUMINANCE, context.height, context.width, 0, gl.LUMINANCE, gl.UNSIGNED_BYTE, maskData, ); // dynamic uniforms per mask gl.uniform1i( gl.getUniformLocation(program, `uMaskTexture${index}`), this._masksTextureUnitStart + index, ); }); gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4); // Unbind textures gl.bindTexture(gl.TEXTURE_2D, null); context.masks.forEach((_, index) => { gl.activeTexture(gl.TEXTURE0 + index + this._masksTextureUnitStart); gl.bindTexture(gl.TEXTURE_2D, null); }); const ctx = form.ctx; invariant(this._canvas !== null, 'canvas is required'); ctx.drawImage(this._canvas, 0, 0); } async cleanup(): Promise<void> { super.cleanup(); if (this._gl != null) { // Delete mask textures to prevent memory leaks
this._maskTextures.forEach(texture => { if (texture != null && this._gl != null) { this._gl.deleteTexture(texture); } }); this._maskTextures = []; } } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/ReplaceGLEffect.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import angeryIcon from '@/assets/icons/angery.png'; import heartIcon from '@/assets/icons/heart.png'; import whistleIcon from '@/assets/icons/whistle.png'; import BaseGLEffect from '@/common/components/video/effects/BaseGLEffect'; import { EffectFrameContext, EffectInit, } from '@/common/components/video/effects/Effect'; import vertexShaderSource from '@/common/components/video/effects/shaders/DefaultVert.vert?raw'; import fragmentShaderSource from '@/common/components/video/effects/shaders/Replace.frag?raw'; import {Tracklet} from '@/common/tracker/Tracker'; import {normalizeBounds, preAllocateTextures} from '@/common/utils/ShaderUtils'; import {RLEObject, decode} from '@/jscocotools/mask'; import invariant from 'invariant'; import {CanvasForm} from 'pts'; export default class ReplaceGLEffect extends BaseGLEffect { private _numMasks: number = 0; private _numMasksUniformLocation: WebGLUniformLocation | null = null; private _bitmap: ImageBitmap[] = []; private _extraTextureUnit: number = 1; private _extraTexture: 
WebGLTexture | null = null; private _fillBg: number = 0; private _fillBgLocation: WebGLUniformLocation | null = null; private _masksTextureUnitStart: number = 2; private _maskTextures: WebGLTexture[] = []; constructor() { super(6); this.vertexShaderSource = vertexShaderSource; this.fragmentShaderSource = fragmentShaderSource; } protected async setupUniforms( gl: WebGL2RenderingContext, program: WebGLProgram, init: EffectInit, ) { super.setupUniforms(gl, program, init); this._extraTexture = gl.createTexture(); this._numMasksUniformLocation = gl.getUniformLocation(program, 'uNumMasks'); gl.uniform1i(this._numMasksUniformLocation, this._numMasks); this._fillBgLocation = gl.getUniformLocation(program, 'uFill'); gl.uniform1i(this._fillBgLocation, this._fillBg); gl.uniform1i( gl.getUniformLocation(program, 'uEmojiTexture'), this._extraTextureUnit, ); // We know the max number of textures, pre-allocate 3. this._maskTextures = preAllocateTextures(gl, 3); this._bitmap = []; // clear any previous pool of textures let response = await fetch(angeryIcon); let blob = await response.blob(); const angery = await createImageBitmap(blob); response = await fetch(heartIcon); blob = await response.blob(); const heart = await createImageBitmap(blob); response = await fetch(whistleIcon); blob = await response.blob(); const whistle = await createImageBitmap(blob); this._bitmap = [angery, heart, whistle]; } apply(form: CanvasForm, context: EffectFrameContext, _tracklets: Tracklet[]) { const gl = this._gl; const program = this._program; invariant(gl !== null, 'WebGL2 context is required'); invariant(program !== null, 'No WebGL program found'); const iconIndex = Math.floor(this.variant / 2) % this._bitmap.length; if (this._bitmap === null) { return; } gl.clearColor(0.0, 0.0, 0.0, 1.0); gl.clear(gl.COLOR_BUFFER_BIT); // dynamic uniforms per frame gl.uniform1i(this._numMasksUniformLocation, context.masks.length); gl.uniform1i(this._fillBgLocation, this.variant % 2 === 0 ?
0 : 1); // Bind the extra texture/emoji to texture unit 1 if (this._bitmap.length) { gl.activeTexture(gl.TEXTURE0 + this._extraTextureUnit); gl.bindTexture(gl.TEXTURE_2D, this._extraTexture); gl.texImage2D( gl.TEXTURE_2D, 0, gl.RGBA, this._bitmap[iconIndex].width, this._bitmap[iconIndex].height, 0, gl.RGBA, gl.UNSIGNED_BYTE, this._bitmap[iconIndex], ); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE); } context.masks.forEach((mask, index) => { const decodedMask = decode([mask.bitmap as RLEObject]); const maskData = decodedMask.data as Uint8Array; gl.activeTexture(gl.TEXTURE0 + index + this._masksTextureUnitStart); gl.bindTexture(gl.TEXTURE_2D, this._maskTextures[index]); const boundaries = normalizeBounds( mask.bounds[0], mask.bounds[1], context.width, context.height, ); gl.uniform1i( gl.getUniformLocation(program, `uMaskTexture${index}`), index + this._masksTextureUnitStart, ); gl.uniform4fv(gl.getUniformLocation(program, `bbox${index}`), boundaries); gl.pixelStorei(gl.UNPACK_ALIGNMENT, 1); gl.texImage2D( gl.TEXTURE_2D, 0, gl.LUMINANCE, context.height, context.width, 0, gl.LUMINANCE, gl.UNSIGNED_BYTE, maskData, ); }); gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4); // Unbind textures gl.bindTexture(gl.TEXTURE_2D, null); context.masks.forEach((_, index) => { gl.activeTexture(gl.TEXTURE0 + index + this._masksTextureUnitStart); gl.bindTexture(gl.TEXTURE_2D, null); }); const ctx = form.ctx; invariant(this._canvas !== null, 'canvas is required'); ctx.drawImage(this._canvas, 0, 0); } async cleanup(): Promise<void> { super.cleanup(); if (this._gl != null) { // Delete mask textures to prevent memory leaks this._maskTextures.forEach(texture => { if (texture != null && this._gl != null) { this._gl.deleteTexture(texture); } }); this._maskTextures = []; } } }
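A note on the mask uploads in the effects above: every `gl.texImage2D` call for a mask passes `context.height` as the width argument and `context.width` as the height argument. That is deliberate, because COCO-style RLE masks (as returned by `decode` from `@/jscocotools/mask`) store pixels in column-major order, so the decoded buffer is effectively transposed relative to the frame. A minimal sketch of an uncompressed COCO-style RLE decoder illustrates the layout; `decodeRLE` and its `RLEObject` shape here are illustrative assumptions, not the repo's actual `@/jscocotools/mask` implementation:

```typescript
// Illustrative shape for an uncompressed RLE mask: [height, width] plus run counts.
type RLEObject = {size: [number, number]; counts: number[]};

// Minimal sketch of COCO-style uncompressed RLE decoding. Runs alternate
// starting with the background value 0, and the output is column-major
// (Fortran order): pixel (x, y) lives at out[x * h + y], which is why the
// effects upload mask textures with height and width transposed.
function decodeRLE(rle: RLEObject): Uint8Array {
  const [h, w] = rle.size;
  const out = new Uint8Array(h * w);
  let pos = 0;
  let value = 0; // first run is background
  for (const count of rle.counts) {
    out.fill(value, pos, pos + count);
    pos += count;
    value = 1 - value; // toggle between background (0) and mask (1)
  }
  return out;
}
```

For example, `decodeRLE({size: [2, 2], counts: [1, 2, 1]})` yields `[0, 1, 1, 0]`: one background pixel, a run of two mask pixels, then one background pixel, laid out column by column.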
================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/ScopeGLEffect.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {hexToRgb} from '@/common/components/video/editor/VideoEditorUtils'; import BaseGLEffect from '@/common/components/video/effects/BaseGLEffect'; import { EffectFrameContext, EffectInit, } from '@/common/components/video/effects/Effect'; import vertexShaderSource from '@/common/components/video/effects/shaders/DefaultVert.vert?raw'; import fragmentShaderSource from '@/common/components/video/effects/shaders/Scope.frag?raw'; import {Tracklet} from '@/common/tracker/Tracker'; import {normalizeBounds, preAllocateTextures} from '@/common/utils/ShaderUtils'; import {RLEObject, decode} from '@/jscocotools/mask'; import invariant from 'invariant'; import {CanvasForm} from 'pts'; export default class ScopeGLEffect extends BaseGLEffect { private _numMasks: number = 0; private _numMasksUniformLocation: WebGLUniformLocation | null = null; // Must start from 2, main texture takes 0 and 1.
private _masksTextureUnitStart: number = 2; private _maskTextures: WebGLTexture[] = []; constructor() { super(6); this.vertexShaderSource = vertexShaderSource; this.fragmentShaderSource = fragmentShaderSource; } protected setupUniforms( gl: WebGL2RenderingContext, program: WebGLProgram, init: EffectInit, ): void { super.setupUniforms(gl, program, init); this._numMasksUniformLocation = gl.getUniformLocation(program, 'uNumMasks'); gl.uniform1i(this._numMasksUniformLocation, this._numMasks); // We know the max number of textures, pre-allocate 3. this._maskTextures = preAllocateTextures(gl, 3); } apply(form: CanvasForm, context: EffectFrameContext, _tracklets: Tracklet[]) { const gl = this._gl; const program = this._program; if (!program) { return; } invariant(gl !== null, 'WebGL2 context is required'); gl.clearColor(0.0, 0.0, 0.0, 1.0); gl.clear(gl.COLOR_BUFFER_BIT); // dynamic uniforms per frame gl.uniform1i(this._numMasksUniformLocation, context.masks.length); gl.activeTexture(gl.TEXTURE0); gl.bindTexture(gl.TEXTURE_2D, this._frameTexture); gl.texImage2D( gl.TEXTURE_2D, 0, gl.RGBA, context.width, context.height, 0, gl.RGBA, gl.UNSIGNED_BYTE, context.frame, ); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST); // Create and bind 2D textures for each mask context.masks.forEach((mask, index) => { const decodedMask = decode([mask.bitmap as RLEObject]); const maskData = decodedMask.data as Uint8Array; gl.activeTexture(gl.TEXTURE0 + index + this._masksTextureUnitStart); gl.bindTexture(gl.TEXTURE_2D, this._maskTextures[index]); const boundaries = normalizeBounds( mask.bounds[0], mask.bounds[1], context.width, context.height, ); const styleIndex = Math.floor(this.variant / 2) % 2; // dynamic uniforms per mask gl.uniform1i( gl.getUniformLocation(program, `uMaskTexture${index}`), this._masksTextureUnitStart + index, ); const color = hexToRgb(context.maskColors[index]); gl.uniform4f( 
gl.getUniformLocation(program, `uMaskColor${index}`), color.r, color.g, color.b, color.a, ); gl.uniform4fv(gl.getUniformLocation(program, `bbox${index}`), boundaries); gl.uniform1i( gl.getUniformLocation(program, 'uFillColor'), this.variant % 2 === 0 ? 0 : 1, ); gl.uniform1i( gl.getUniformLocation(program, 'uLight'), styleIndex === 0 ? 0 : 1, ); gl.uniform1i( gl.getUniformLocation(program, 'uTransparency'), Math.floor(this.variant / 2) % 3 === 2 ? 1 : 0, ); gl.pixelStorei(gl.UNPACK_ALIGNMENT, 1); gl.texImage2D( gl.TEXTURE_2D, 0, gl.LUMINANCE, context.height, context.width, 0, gl.LUMINANCE, gl.UNSIGNED_BYTE, maskData, ); }); gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4); // Unbind textures gl.bindTexture(gl.TEXTURE_2D, null); context.masks.forEach((_, index) => { gl.activeTexture(gl.TEXTURE0 + index + this._masksTextureUnitStart); gl.bindTexture(gl.TEXTURE_2D, null); }); const ctx = form.ctx; invariant(this._canvas !== null, 'canvas is required'); ctx.drawImage(this._canvas, 0, 0); } async cleanup(): Promise<void> { super.cleanup(); if (this._gl != null) { // Delete mask textures to prevent memory leaks this._maskTextures.forEach(texture => { if (texture != null && this._gl != null) { this._gl.deleteTexture(texture); } }); this._maskTextures = []; } } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/SobelEffect.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and * limitations under the License. */ import BaseGLEffect from '@/common/components/video/effects/BaseGLEffect'; import { EffectFrameContext, EffectInit, } from '@/common/components/video/effects/Effect'; import vertexShaderSource from '@/common/components/video/effects/shaders/DefaultVert.vert?raw'; import fragmentShaderSource from '@/common/components/video/effects/shaders/Sobel.frag?raw'; import {Tracklet} from '@/common/tracker/Tracker'; import invariant from 'invariant'; import {CanvasForm} from 'pts'; export default class SobelEffect extends BaseGLEffect { constructor() { super(4); this.vertexShaderSource = vertexShaderSource; this.fragmentShaderSource = fragmentShaderSource; } protected setupUniforms( gl: WebGL2RenderingContext, program: WebGLProgram, init: EffectInit, ): void { super.setupUniforms(gl, program, init); } apply(form: CanvasForm, context: EffectFrameContext, _tracklets: Tracklet[]) { const gl = this._gl; const program = this._program; if (!program) { return; } invariant(gl !== null, 'WebGL2 context is required'); gl.clearColor(0.0, 0.0, 0.0, 1.0); gl.clear(gl.COLOR_BUFFER_BIT); const pairIndex = Math.floor(this.variant / 2) % 2; gl.uniform1i( gl.getUniformLocation(program, 'uSwapColor'), this.variant % 2 === 0 ? 1 : 0, ); gl.uniform1i( gl.getUniformLocation(program, 'uMonocolor'), pairIndex === 0 ? 
0 : 1, ); gl.activeTexture(gl.TEXTURE0); gl.bindTexture(gl.TEXTURE_2D, this._frameTexture); gl.texImage2D( gl.TEXTURE_2D, 0, gl.RGBA, context.width, context.height, 0, gl.RGBA, gl.UNSIGNED_BYTE, context.frame, ); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST); gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4); const ctx = form.ctx; invariant(this._canvas !== null, 'canvas is required'); ctx.drawImage(this._canvas, 0, 0); } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/VibrantMaskEffect.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import BaseGLEffect from '@/common/components/video/effects/BaseGLEffect'; import { EffectFrameContext, EffectInit, } from '@/common/components/video/effects/Effect'; import vertexShaderSource from '@/common/components/video/effects/shaders/DefaultVert.vert?raw'; import fragmentShaderSource from '@/common/components/video/effects/shaders/VibrantMask.frag?raw'; import {Tracklet} from '@/common/tracker/Tracker'; import { generateLUTDATA, load3DLUT, preAllocateTextures, } from '@/common/utils/ShaderUtils'; import {RLEObject, decode} from '@/jscocotools/mask'; import invariant from 'invariant'; import {CanvasForm} from 'pts'; export default class VibrantMaskEffect extends BaseGLEffect { private lutSize: number = 4; private _numMasks: number = 0; private _numMasksUniformLocation: WebGLUniformLocation | null = null; private _currentFrameLocation: WebGLUniformLocation | null = null; private _lutTextures: WebGLTexture[] = []; private _maskTextures: WebGLTexture[] = []; // Must be 1; the main background texture takes unit 0. private _extraTextureUnit: number = 1; // Must start from 2; units 0 and 1 are taken by the main texture and the LUT. private _masksTextureUnitStart: number = 2; constructor() { super(3); this.vertexShaderSource = vertexShaderSource; this.fragmentShaderSource = fragmentShaderSource; } protected setupUniforms( gl: WebGL2RenderingContext, program: WebGLProgram, init: EffectInit, ): void { super.setupUniforms(gl, program, init); gl.uniform1i( gl.getUniformLocation(program, 'uColorGradeLUT'), this._extraTextureUnit, ); this._numMasksUniformLocation = gl.getUniformLocation(program, 'uNumMasks'); gl.uniform1i(this._numMasksUniformLocation, this._numMasks); this._currentFrameLocation = gl.getUniformLocation( program, 'uCurrentFrame', ); gl.uniform1f(this._currentFrameLocation, 0); // We know the max number of masks, so pre-allocate 3 textures.
this._maskTextures = preAllocateTextures(gl, 3); this._lutTextures = []; // clear any previous pool of textures for (let i = 0; i < this.numVariants; i++) { const _lutData = generateLUTDATA(this.lutSize); const _extraTexture = load3DLUT(gl, this.lutSize, _lutData); this._lutTextures.push(_extraTexture as WebGLTexture); } } apply(form: CanvasForm, context: EffectFrameContext, _tracklets: Tracklet[]) { const gl = this._gl; const program = this._program; if (!program) { return; } invariant(gl !== null, 'WebGL2 context is required'); gl.clearColor(0.0, 0.0, 0.0, 1.0); gl.clear(gl.COLOR_BUFFER_BIT); // dynamic uniforms per frame gl.uniform1f(this._currentFrameLocation, context.frameIndex); gl.uniform1i(this._numMasksUniformLocation, context.masks.length); // Bind the LUT texture to texture unit 1 const lutTexture = this._lutTextures[this.variant]; gl.activeTexture(gl.TEXTURE1); gl.bindTexture(gl.TEXTURE_3D, lutTexture); gl.activeTexture(gl.TEXTURE0); gl.bindTexture(gl.TEXTURE_2D, this._frameTexture); gl.texImage2D( gl.TEXTURE_2D, 0, gl.RGBA, context.width, context.height, 0, gl.RGBA, gl.UNSIGNED_BYTE, context.frame, ); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST); // Create and bind 2D textures for each mask context.masks.forEach((mask, index) => { const decodedMask = decode([mask.bitmap as RLEObject]); const maskData = decodedMask.data as Uint8Array; gl.activeTexture(gl.TEXTURE0 + index + this._masksTextureUnitStart); gl.bindTexture(gl.TEXTURE_2D, this._maskTextures[index]); // dynamic uniforms per mask gl.uniform1i( gl.getUniformLocation(program, `uMaskTexture${index}`), this._masksTextureUnitStart + index, ); gl.pixelStorei(gl.UNPACK_ALIGNMENT, 1); gl.texImage2D( gl.TEXTURE_2D, 0, gl.LUMINANCE, context.height, context.width, 0, gl.LUMINANCE, gl.UNSIGNED_BYTE, maskData, ); }); gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4); // Unbind textures gl.bindTexture(gl.TEXTURE_2D, null); 
context.masks.forEach((_, index) => { gl.activeTexture(gl.TEXTURE0 + index + this._masksTextureUnitStart); gl.bindTexture(gl.TEXTURE_2D, null); }); const ctx = form.ctx; invariant(this._canvas !== null, 'canvas is required'); ctx.drawImage(this._canvas, 0, 0); } async cleanup(): Promise<void> { super.cleanup(); if (this._gl != null) { // Delete mask textures to prevent memory leaks this._maskTextures.forEach(texture => { if (texture != null && this._gl != null) { this._gl.deleteTexture(texture); } }); this._maskTextures = []; } } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/shaders/Arrow.frag ================================================ #version 300 es // Copyright (c) Meta Platforms, Inc. and affiliates. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License.
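Aside: the Arrow.frag shader below decides whether a fragment lies inside the arrow-head triangle by comparing the signs of three 2D cross products. A standalone TypeScript transcription of that test, useful for checking values on the CPU (the `Vec2` alias and helper names are ours, not part of the demo code):

```typescript
type Vec2 = [number, number];

// 2D cross product (the z-component of the corresponding 3D cross product).
const cross = (a: Vec2, b: Vec2): number => a[0] * b[1] - a[1] * b[0];

const sub = (a: Vec2, b: Vec2): Vec2 => [a[0] - b[0], a[1] - b[1]];

// Mirrors pointInTriangle() in Arrow.frag: the point is inside (or on an
// edge) iff the three edge cross products do not have mixed signs.
function pointInTriangle(pt: Vec2, v0: Vec2, v1: Vec2, v2: Vec2): boolean {
  const d0 = Math.sign(cross(sub(v1, v0), sub(pt, v0)));
  const d1 = Math.sign(cross(sub(v2, v1), sub(pt, v1)));
  const d2 = Math.sign(cross(sub(v0, v2), sub(pt, v2)));
  const hasNeg = d0 < 0 || d1 < 0 || d2 < 0;
  const hasPos = d0 > 0 || d1 > 0 || d2 > 0;
  return !(hasNeg && hasPos);
}
```

Because only the signs are compared, the test works for both clockwise and counter-clockwise vertex orderings, which is why the shader does not need to care about winding.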
precision mediump float; in vec2 vTexCoord; uniform sampler2D uSampler; uniform vec2 uSize; uniform int uNumMasks; uniform float uCurrentFrame; uniform bool uLineColor; uniform bool uArrow; uniform sampler2D uMaskTexture0; uniform sampler2D uMaskTexture1; uniform sampler2D uMaskTexture2; uniform vec4 bbox0; uniform vec4 bbox1; uniform vec4 bbox2; out vec4 fragColor; float addv(vec2 a) { return a.x + a.y; } #define dd(a) dot(a,a) vec2 solveCubic2(vec3 a) { float p = a.y - a.x * a.x / 3.0f; float p3 = p * p * p; float q = a.x * (2.0f * a.x * a.x - 9.0f * a.y) / 27.0f + a.z; float d = q * q + 4.0f * p3 / 27.0f; if(d > 0.0f) { vec2 x = (vec2(1.0f, -1.0f) * sqrt(d) - q) * 0.5f; return vec2(addv(sign(x) * pow(abs(x), vec2(1.0f / 3.0f))) - a.x / 3.0f); } float v = acos(-sqrt(-27.0f / p3) * q * 0.5f) / 3.0f; float m = cos(v); float n = sin(v) * 1.732050808f; return vec2(m + m, -n - m) * sqrt(-p / 3.0f) - a.x / 3.0f; } float calculateDistanceToQuadraticBezier(vec2 p, vec2 a, vec2 b, vec2 c) { b += mix(vec2(1e-4f), vec2(0.0f), abs(sign(b * 2.0f - a - c))); vec2 A = b - a; vec2 B = c - b - A; vec2 C = p - a; vec2 D = A * 2.0f; vec2 T = clamp((solveCubic2(vec3(-3.0f * dot(A, B), dot(C, B) - 2.0f * dd(A), dot(C, A)) / -dd(B))), 0.0f, 1.0f); return sqrt(min(dd(C - (D + B * T.x) * T.x), dd(C - (D + B * T.y) * T.y))); } float crossProduct(vec2 a, vec2 b) { return a.x * b.y - a.y * b.x; } bool pointInTriangle(vec2 pt, vec2 v0, vec2 v1, vec2 v2) { vec2 v0v1 = v1 - v0; vec2 v1v2 = v2 - v1; vec2 v2v0 = v0 - v2; float d0 = sign(crossProduct(v0v1, pt - v0)); float d1 = sign(crossProduct(v1v2, pt - v1)); float d2 = sign(crossProduct(v2v0, pt - v2)); bool has_neg = (d0 < 0.0f) || (d1 < 0.0f) || (d2 < 0.0f); bool has_pos = (d0 > 0.0f) || (d1 > 0.0f) || (d2 > 0.0f); return !(has_neg && has_pos); } void main() { vec4 color = texture(uSampler, vTexCoord); vec2 fragCoord = vTexCoord * uSize; float aspectRatio = uSize.y / uSize.x; float time = uCurrentFrame * 0.05f; vec3 multicolor = vec3(0.5f 
+ 0.5f * sin(time), 0.5f + 0.5f * cos(time), 0.5f - 0.5f * sin(time)); vec4 mask1 = vec4(0.0f); vec4 mask2 = vec4(0.0f); vec4 mask3 = vec4(0.0f); bool scoped = false; bool intersected = false; float threshold = 0.75f; float circleRadius = 0.015f; if(uNumMasks > 0) { mask1 = texture(uMaskTexture0, vec2(vTexCoord.y, vTexCoord.x)); bool visible = bbox0 != vec4(0.0f); vec2 p0 = vec2((bbox0.x + bbox0.z) * 0.5f, bbox0.y); // Top center vec2 p1 = vec2(bbox0.x + 0.5f * (bbox0.z - bbox0.x) * (0.5f + 0.5f * sin(time)), bbox0.y - 0.25f); //vec2 p1 = vec2(0.5f, 0.5f); vec2 p2 = vec2(bbox0.x + 0.5f * (bbox0.z - bbox0.x) * (0.5f + 0.5f * cos(time)), (bbox0.w + bbox0.y) * 0.5f); float d = calculateDistanceToQuadraticBezier(vTexCoord, p0, p1, p2); d *= length(uSize.xy) * 0.25f; vec2 v0 = p0 + vec2(-0.020f, -0.020f); // Left vertex vec2 v1 = p0 + vec2(0.020f, -0.020f); // Right vertex vec2 v2 = p0 + vec2(0.0f, 0.020f); // Bottom vertex // Check if the point is inside the triangle bool inside = pointInTriangle(vTexCoord, v0, v1, v2); // Circle drawing vec2 adjustedCoord = vTexCoord - p0; adjustedCoord.x /= aspectRatio; float circleDistance = length(adjustedCoord); if(d < threshold && visible) { scoped = true; } if(uArrow && inside && visible) { intersected = true; } else if(!uArrow && circleDistance < circleRadius && visible) { intersected = true; } } if(uNumMasks > 1) { mask2 = texture(uMaskTexture1, vec2(vTexCoord.y, vTexCoord.x)); bool visible = bbox1 != vec4(0.0f); vec2 p0 = vec2((bbox1.x + bbox1.z) * 0.5f, bbox1.y); vec2 p1 = vec2(bbox1.x + 0.5f * (bbox1.z - bbox1.x) * (0.5f + 0.5f * sin(time)), bbox1.y - 0.25f); vec2 p2 = vec2(bbox1.x + 0.5f * (bbox1.z - bbox1.x) * (0.5f + 0.5f * cos(time)), (bbox1.w + bbox1.y) * 0.5f); float d = calculateDistanceToQuadraticBezier(vTexCoord, p0, p1, p2); d *= length(uSize.xy) * 0.25f; vec2 v0 = p0 + vec2(-0.020f, -0.020f); vec2 v1 = p0 + vec2(0.020f, -0.020f); vec2 v2 = p0 + vec2(0.0f, 0.020f); bool inside = pointInTriangle(vTexCoord, v0, v1, 
v2); // Circle drawing vec2 adjustedCoord = vTexCoord - p0; adjustedCoord.x /= aspectRatio; float circleDistance = length(adjustedCoord); if(d < threshold && visible) { scoped = true; } if(uArrow && inside && visible) { intersected = true; } else if(!uArrow && circleDistance < circleRadius && visible) { intersected = true; } } if(uNumMasks > 2) { mask3 = texture(uMaskTexture2, vec2(vTexCoord.y, vTexCoord.x)); bool visible = bbox2 != vec4(0.0f); vec2 p0 = vec2((bbox2.x + bbox2.z) * 0.5f, bbox2.y); vec2 p1 = vec2(bbox2.x + 0.5f * (bbox2.z - bbox2.x) * (0.5f + 0.5f * sin(time)), bbox2.y - 0.25f); vec2 p2 = vec2(bbox2.x + 0.5f * (bbox2.z - bbox2.x) * (0.5f + 0.5f * cos(time)), (bbox2.w + bbox2.y) * 0.5f); float d = calculateDistanceToQuadraticBezier(vTexCoord, p0, p1, p2); d *= length(uSize.xy) * 0.25f; vec2 v0 = p0 + vec2(-0.020f, -0.020f); vec2 v1 = p0 + vec2(0.020f, -0.020f); vec2 v2 = p0 + vec2(0.0f, 0.020f); bool inside = pointInTriangle(vTexCoord, v0, v1, v2); vec2 adjustedCoord = vTexCoord - p0; adjustedCoord.x /= aspectRatio; float circleDistance = length(adjustedCoord); if(d < threshold && visible) { scoped = true; } if(uArrow && inside && visible) { intersected = true; } else if(!uArrow && circleDistance < circleRadius && visible) { intersected = true; } } bool overlap = (mask1.r > 0.0f || mask2.r > 0.0f || mask3.r > 0.0f); if(overlap) { fragColor = color; } if(scoped || intersected) { fragColor = uLineColor ? vec4(multicolor, 1.0f) : vec4(1.0f); if(intersected) { fragColor = vec4(multicolor, 1.0f); } } else { fragColor = overlap ? color : vec4(0.0f, 0.0f, 0.0f, 0.0f); } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/shaders/BackgroundBlur.frag ================================================ #version 300 es // Copyright (c) Meta Platforms, Inc. and affiliates. 
// // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. precision mediump float; in vec2 vTexCoord; uniform sampler2D uSampler; uniform vec2 uSize; uniform int uBlurRadius; out vec4 fragColor; void main() { vec2 texOffset = 1.0f / uSize; // texel color vec3 color = texture(uSampler, vTexCoord).rgb; float sampleCount = 0.0f; // sample the surrounding pixels based on the blur radius for(int x = -uBlurRadius; x <= uBlurRadius; x++) { for(int y = -uBlurRadius; y <= uBlurRadius; y++) { vec2 offset = vec2(float(x), float(y)) * texOffset; color += texture(uSampler, vTexCoord + offset).rgb; sampleCount += 1.0f; } } // average the colors of the sampled pixels color /= sampleCount; fragColor = vec4(color, 1.0f); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/shaders/Burst.frag ================================================ #version 300 es // Copyright (c) Meta Platforms, Inc. and affiliates. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
// See the License for the specific language governing permissions and // limitations under the License. precision highp float; in vec2 vTexCoord; uniform sampler2D uSampler; uniform vec2 uSize; // resolution uniform int uNumMasks; uniform bool uLineColor; uniform bool uInterleave; uniform sampler2D uMaskTexture0; uniform sampler2D uMaskTexture1; uniform sampler2D uMaskTexture2; uniform vec4 uMaskColor0; uniform vec4 uMaskColor1; uniform vec4 uMaskColor2; uniform vec4 bbox0; uniform vec4 bbox1; uniform vec4 bbox2; out vec4 fragColor; void main() { float PI = radians(180.0f); float lines = uInterleave ? 12.0f : 80.0f; vec4 color = texture(uSampler, vTexCoord); vec4 color1 = uMaskColor0 / 255.0; vec4 color2 = uMaskColor1 / 255.0; vec4 color3 = uMaskColor2 / 255.0; vec4 mask1 = vec4(0.0f); vec4 mask2 = vec4(0.0f); vec4 mask3 = vec4(0.0f); vec4 scopedColor = vec4(0.0f); vec2 fragCoord = vTexCoord * uSize; // transform to pixel space bool scoped = false; vec4 transparent = vec4(0.0); float p = PI / lines; if(uNumMasks > 0) { mask1 = texture(uMaskTexture0, vec2(vTexCoord.y, vTexCoord.x)); vec2 center1 = (bbox0.xy + bbox0.zw) * 0.5f * uSize; vec2 fragCoordT = (fragCoord - center1) / uSize.y; float a = mod(atan(fragCoordT.y, fragCoordT.x) + p, p + p) - p; // angle of fragment float pattern = sin(a * lines); // smoothstep for antialiasing float line = smoothstep(2.8 / uSize.y, 0.0, length(fragCoordT) * abs(sin(a))); vec4 colorToBlend = uLineColor ? vec4(color1.rgb, 0.80f) : vec4(1.0f); bool visible = bbox0 != vec4(0.0f); if (uInterleave && visible) { vec4 tempColor = mix(transparent, colorToBlend, step(0.0, pattern)); scopedColor += tempColor; scoped = true; } else if (!uInterleave && visible) { vec4 tempColor = uLineColor ? 
vec4(color1.rgb * line, line) : vec4(line); scopedColor += tempColor; scoped = true; } } if(uNumMasks > 1) { mask2 = texture(uMaskTexture1, vec2(vTexCoord.y, vTexCoord.x)); vec2 center2 = (bbox1.xy + bbox1.zw) * 0.5f * uSize; vec2 fragCoordT = (fragCoord - center2) / uSize.y; float a = mod(atan(fragCoordT.y, fragCoordT.x) + p, p + p) - p; // angle of fragment float pattern = sin(a * lines); float line = smoothstep(2.8 / uSize.y, 0.0, length(fragCoordT) * abs(sin(a))); vec4 colorToBlend = uLineColor ? vec4(color2.rgb, 0.8f) : vec4(1.0f); bool visible = bbox1 != vec4(0.0f); if (uInterleave && visible) { vec4 tempColor = mix(transparent, colorToBlend, step(0.0, pattern)); if (scopedColor == vec4(0.0)) { scopedColor += tempColor; } scoped = true; } else if (!uInterleave && visible) { vec4 tempColor = uLineColor ? vec4(color2.rgb * line, line) : vec4(line); scopedColor += tempColor; scoped = true; } } if (uNumMasks > 2) { mask3 = texture(uMaskTexture2, vec2(vTexCoord.y, vTexCoord.x)); vec2 center3 = (bbox2.xy + bbox2.zw) * 0.5f * uSize; vec2 fragCoordT = (fragCoord - center3) / uSize.y; float a = mod(atan(fragCoordT.y, fragCoordT.x) + p, p + p) - p; // angle of fragment float pattern = sin(a * lines); float line = smoothstep(2.8 / uSize.y, 0.0, length(fragCoordT) * abs(sin(a))); vec4 colorToBlend = uLineColor ? vec4(color3.rgb, 0.8f) : vec4(1.0f); bool visible = bbox2 != vec4(0.0f); if (uInterleave && visible) { vec4 tempColor = mix(transparent, colorToBlend, step(0.0, pattern)); if (scopedColor == vec4(0.0)) { scopedColor += tempColor; } scoped = true; } else if (!uInterleave && visible) { vec4 tempColor = uLineColor ? vec4(color3.rgb * line, line) : vec4(line); scopedColor += tempColor; scoped = true; } } bool overlap = (mask1.r > 0.0f || mask2.r > 0.0f || mask3.r > 0.0f); if(scoped) { fragColor = overlap ? color : scopedColor; } else { fragColor = overlap ? 
color : vec4(0.0f, 0.0f, 0.0f, 0.0f); } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/shaders/Cutout.frag ================================================ #version 300 es // Copyright (c) Meta Platforms, Inc. and affiliates. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. precision mediump float; in vec2 vTexCoord; uniform sampler2D uSampler; uniform float uContrast; uniform int uNumMasks; uniform sampler2D uMaskTexture0; uniform sampler2D uMaskTexture1; uniform sampler2D uMaskTexture2; out vec4 fragColor; vec3 applySepia(vec4 color) { float gray = dot(color.rgb, vec3(0.3, 0.59, 0.11)); vec3 sepia = vec3(gray) * vec3(1.2, 1.0, 0.8); sepia.r = min(sepia.r, 1.0); sepia.g = min(sepia.g, 1.0); sepia.b = min(sepia.b, 1.0); return sepia; } void main() { vec4 color = texture(uSampler, vTexCoord); vec4 color1 = vec4(0.0f); vec4 color2 = vec4(0.0f); vec4 color3 = vec4(0.0f); if(uNumMasks > 0) { color1 = texture(uMaskTexture0, vec2(vTexCoord.y, vTexCoord.x)); } if(uNumMasks > 1) { color2 = texture(uMaskTexture1, vec2(vTexCoord.y, vTexCoord.x)); } if(uNumMasks > 2) { color3 = texture(uMaskTexture2, vec2(vTexCoord.y, vTexCoord.x)); } bool overlap = (color1.r > 0.0f || color2.r > 0.0f || color3.r > 0.0f); if(overlap) { if (uContrast == 0.0) { color = vec4(applySepia(color), color.a); } else { color.rgb = ((color.rgb - 0.5) * max(uContrast, 0.0)) + 0.5; } fragColor = color; } else { 
fragColor = vec4(0.0f); } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/shaders/DefaultVert.vert ================================================ #version 300 es // Copyright (c) Meta Platforms, Inc. and affiliates. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. layout(location = 0) in vec4 aPosition; layout(location = 1) in vec2 aTexCoord; out vec2 vTexCoord; void main() { vTexCoord = vec2(aTexCoord.s, 1.0f - aTexCoord.t); gl_Position = aPosition; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/shaders/EraseForeground.frag ================================================ #version 300 es // Copyright (c) Meta Platforms, Inc. and affiliates. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
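EraseForeground.frag below paints every texel covered by any of the (up to three) masks with a flat background color and makes everything else transparent. A minimal CPU sketch of that per-texel decision, assuming mask values are the red-channel samples in [0, 1] (function and parameter names are ours):

```typescript
// CPU model of EraseForeground.frag's core rule: output the background
// color with full alpha iff the summed mask coverage is positive,
// otherwise a fully transparent pixel.
function eraseForegroundPixel(
  maskValues: number[],
  bgColor: [number, number, number],
): [number, number, number, number] {
  const total = maskValues.reduce((sum, v) => sum + v, 0);
  return total > 0 ? [bgColor[0], bgColor[1], bgColor[2], 1] : [0, 0, 0, 0];
}
```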
precision lowp float; in vec2 vTexCoord; uniform int uNumMasks; uniform vec3 uBgColor; uniform sampler2D uMaskTexture0; uniform sampler2D uMaskTexture1; uniform sampler2D uMaskTexture2; out vec4 fragColor; void main() { vec4 finalColor = vec4(0.0f, 0.0f, 0.0f, 0.0f); float totalMaskValue = 0.0f; if(uNumMasks > 0) { float maskValue0 = texture(uMaskTexture0, vec2(vTexCoord.y, vTexCoord.x)).r; totalMaskValue += maskValue0; } if(uNumMasks > 1) { float maskValue1 = texture(uMaskTexture1, vec2(vTexCoord.y, vTexCoord.x)).r; totalMaskValue += maskValue1; } if(uNumMasks > 2) { float maskValue2 = texture(uMaskTexture2, vec2(vTexCoord.y, vTexCoord.x)).r; totalMaskValue += maskValue2; } if(totalMaskValue > 0.0f) { finalColor = vec4(uBgColor, 1.0f); } else { finalColor.a = 0.0f; } fragColor = finalColor; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/shaders/Gradient.frag ================================================ #version 300 es // Copyright (c) Meta Platforms, Inc. and affiliates. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
precision mediump float; precision mediump sampler3D; in vec2 vTexCoord; uniform sampler2D uSampler; uniform sampler3D uColorGradeLUT; uniform mediump vec2 uSize; out vec4 fragColor; void main() { // texel color vec3 color = texture(uSampler, vTexCoord).rgb; vec3 gradedColor = texture(uColorGradeLUT, color).rgb; fragColor = vec4(gradedColor, 1); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/shaders/NoisyMask.frag ================================================ #version 300 es // Copyright (c) Meta Platforms, Inc. and affiliates. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
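NoisyMask.frag below seeds its per-texel noise with the classic sin-dot hash, `fract(sin(dot(st, vec2(12.9898, 78.233))) * 43758.5453123)`. A direct TypeScript port (exact bit-level agreement with the GPU is not guaranteed, since GLSL `sin` precision varies by driver, but the structure is identical):

```typescript
// Fractional part, matching GLSL's fract(): x - floor(x), always in [0, 1).
const fract = (x: number): number => x - Math.floor(x);

// TypeScript version of random() from NoisyMask.frag: a deterministic
// pseudo-random value in [0, 1) derived from a 2D coordinate.
function hashRandom(x: number, y: number): number {
  return fract(Math.sin(x * 12.9898 + y * 78.233) * 43758.5453123);
}
```

The large multiplier pushes neighboring coordinates far apart on the sine curve, so the fractional part decorrelates quickly; that is the whole trick.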
precision mediump float; in vec2 vTexCoord; uniform float uCurrentFrame; uniform int uNumMasks; uniform sampler2D uMaskTexture0; uniform sampler2D uMaskTexture1; uniform sampler2D uMaskTexture2; out vec4 fragColor; vec3 startColor = vec3(0.0f, 0.67f, 1.0f); vec3 endColor = vec3(0.05f, 0.06f, 0.05f); float random(vec2 st) { return fract(sin(dot(st.xy, vec2(12.9898f, 78.233f))) * 43758.5453123f); } void main() { vec4 finalColor = vec4(0.0f, 0.0f, 0.0f, 0.0f); float totalMaskValue = 0.0f; if(uNumMasks > 0) { float maskValue0 = texture(uMaskTexture0, vec2(vTexCoord.y, vTexCoord.x)).r; totalMaskValue += maskValue0; } if(uNumMasks > 1) { float maskValue1 = texture(uMaskTexture1, vec2(vTexCoord.y, vTexCoord.x)).r; totalMaskValue += maskValue1; } if(uNumMasks > 2) { float maskValue2 = texture(uMaskTexture2, vec2(vTexCoord.y, vTexCoord.x)).r; totalMaskValue += maskValue2; } // Dynamic color alteration using sin(time) float time = uCurrentFrame * 0.1f; vec3 dynamicColor = mix(startColor, endColor, sin(time)); vec3 colorVariation = mix(vec3(0.0f, 0.0f, 0.0f), vec3(1.0f, 1.0f, 1.0f), vTexCoord.y); // apply randomness to the final color float rnd = random(vTexCoord.xy); if(totalMaskValue > 0.0f) { finalColor = vec4(mix(dynamicColor, colorVariation, rnd), 1.0f); } else { finalColor.a = 0.0f; } fragColor = finalColor; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/shaders/Overlay.frag ================================================ #version 300 es // Copyright (c) Meta Platforms, Inc. and affiliates. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. 
// You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. precision highp float; in vec2 vTexCoord; uniform sampler2D uSampler; uniform vec2 uSize; uniform int uNumMasks; uniform float uOpacity; uniform bool uBorder; uniform sampler2D uMaskTexture0; uniform sampler2D uMaskTexture1; uniform sampler2D uMaskTexture2; uniform vec4 uMaskColor0; uniform vec4 uMaskColor1; uniform vec4 uMaskColor2; uniform float uTime; uniform vec2 uClickPos; uniform int uActiveMask; out vec4 fragColor; vec4 lowerSaturation(vec4 color, float saturationFactor) { float luminance = 0.299f * color.r + 0.587f * color.g + 0.114f * color.b; // Calculate luminance vec3 gray = vec3(luminance); vec3 saturated = mix(gray, color.rgb, saturationFactor); // Mix gray with original color based on saturation factor return vec4(saturated, color.a); } vec4 detectEdges(sampler2D textureSampler, float coverage, vec4 edgeColor) { vec2 tvTexCoord = vec2(vTexCoord.y, vTexCoord.x); vec2 texOffset = 1.0f / uSize; vec3 result = vec3(0.0f); // neighboring pixels vec3 tLeft = texture(textureSampler, tvTexCoord + texOffset * vec2(-coverage, coverage)).rgb; vec3 tRight = texture(textureSampler, tvTexCoord + texOffset * vec2(coverage, -coverage)).rgb; vec3 bLeft = texture(textureSampler, tvTexCoord + texOffset * vec2(-coverage, -coverage)).rgb; vec3 bRight = texture(textureSampler, tvTexCoord + texOffset * vec2(coverage, coverage)).rgb; // calculate the gradient edge of the current pixel using [3x3] sobel operator. 
vec3 xEdge = tLeft + 2.0f * texture(textureSampler, tvTexCoord + texOffset * vec2(-coverage, 0)).rgb + bLeft - tRight - 2.0f * texture(textureSampler, tvTexCoord + texOffset * vec2(coverage, 0)).rgb - bRight; vec3 yEdge = tLeft + 2.0f * texture(textureSampler, tvTexCoord + texOffset * vec2(0, coverage)).rgb + tRight - bLeft - 2.0f * texture(textureSampler, tvTexCoord + texOffset * vec2(0, -coverage)).rgb - bRight; // magnitude of the gradient at the current pixel. result = sqrt(xEdge * xEdge + yEdge * yEdge); return result.r > 1e-6f ? edgeColor : vec4(0.0f, 0.0f, 0.0f, 0.0f); } vec2 calculateAdjustedTexCoord(vec2 vTexCoord, vec4 bbox, float aspectRatio) { vec2 center = vec2((bbox.x + bbox.z) * 0.5f, bbox.w); float radiusX = abs(bbox.z - bbox.x); float radiusY = radiusX / aspectRatio; float scale = 1.0f; radiusX *= scale; radiusY *= scale; vec2 adjustedTexCoord = (vTexCoord - center) / vec2(radiusX, radiusY) + vec2(0.5f); return adjustedTexCoord; } void main() { vec4 color = texture(uSampler, vTexCoord); vec4 color1 = uMaskColor0 / 255.0; vec4 color2 = uMaskColor1 / 255.0; vec4 color3 = uMaskColor2 / 255.0; float saturationFactor = 0.7; float aspectRatio = uSize.y / uSize.x; vec2 tvTexCoord = vec2(vTexCoord.y, vTexCoord.x); vec4 finalColor = vec4(0.0f, 0.0f, 0.0f, 0.0f); float totalMaskValue = 0.0f; vec4 edgeColor = vec4(0.0f, 0.0f, 0.0f, 0.0f); float numRipples = 1.75; float timeThreshold = 1.1; // can take any value from [0.0, 1.5] vec2 adjustedClickCoord = calculateAdjustedTexCoord(vTexCoord, vec4(uClickPos, uClickPos + 0.1), aspectRatio); if(uNumMasks > 0) { float maskValue0 = texture(uMaskTexture0, tvTexCoord).r; vec4 saturatedColor = lowerSaturation(color1, saturationFactor); vec4 plainColor= vec4(vec3(saturatedColor).rgb, 1.0); vec4 rippleColor = vec4(color1.rgb, 0.2); if (uActiveMask == 0 && uTime < timeThreshold) { float dist = length(adjustedClickCoord); float colorFactor = abs(sin((dist - uTime) * numRipples)); plainColor = vec4(mix(rippleColor, 
plainColor, colorFactor)); }; if (uTime >= timeThreshold) { plainColor= vec4(vec3(saturatedColor).rgb, 1.0); } finalColor += maskValue0 * plainColor; totalMaskValue += maskValue0; edgeColor = detectEdges(uMaskTexture0, 1.25, color1); } if(uNumMasks > 1) { float maskValue1 = texture(uMaskTexture1, tvTexCoord).r; vec4 saturatedColor = lowerSaturation(color2, saturationFactor); vec4 plainColor= vec4(vec3(saturatedColor).rgb, 1.0); vec4 rippleColor = vec4(color2.rgb, 0.2); if (uActiveMask == 1 && uTime < timeThreshold) { float dist = length(adjustedClickCoord); float colorFactor = abs(sin((dist - uTime) * numRipples)); plainColor = vec4(mix(rippleColor, plainColor, colorFactor)); } if (uTime >= timeThreshold) { plainColor= vec4(vec3(saturatedColor).rgb, 1.0); } finalColor += maskValue1 * plainColor; totalMaskValue += maskValue1; if(edgeColor.a <= 0.0f) { edgeColor = detectEdges(uMaskTexture1, 1.25, color2); } } if(uNumMasks > 2) { float maskValue2 = texture(uMaskTexture2, tvTexCoord).r; vec4 saturatedColor = lowerSaturation(color3, saturationFactor); vec4 plainColor= vec4(vec3(saturatedColor).rgb, 1.0); vec4 rippleColor = vec4(color3.rgb, 0.2); if (uActiveMask == 2 && uTime < timeThreshold) { float dist = length(adjustedClickCoord); float colorFactor = abs(sin((dist - uTime) * numRipples)); plainColor = vec4(mix(rippleColor, plainColor, colorFactor)); } if (uTime >= timeThreshold) { plainColor= vec4(vec3(saturatedColor).rgb, 1.0); } finalColor += maskValue2 * plainColor; totalMaskValue += maskValue2; if(edgeColor.a <= 0.0f) { edgeColor = detectEdges(uMaskTexture2, 1.25, color3); } } if(totalMaskValue > 0.0f) { finalColor /= totalMaskValue; finalColor = mix(color, finalColor, uOpacity); } else { finalColor.a = 0.0f; } if(edgeColor.a > 0.0f && uBorder) { finalColor = vec4(vec3(edgeColor), 1.0f); } fragColor = finalColor; } ================================================ FILE: 
auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/shaders/Overlay.vert ================================================ #version 300 es // Copyright (c) Meta Platforms, Inc. and affiliates. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. layout(location = 0) in vec4 aPosition; layout(location = 1) in vec2 aTexCoord; out vec2 vTexCoord; void main() { // Rotate texture 90 degrees clockwise vTexCoord = vec2(1.0f - aTexCoord.t, aTexCoord.s); gl_Position = aPosition; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/shaders/Pixelate.frag ================================================ #version 300 es // Copyright (c) Meta Platforms, Inc. and affiliates. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
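Note the contrast between the two vertex shaders: DefaultVert.vert only flips vertically, `(s, t) -> (s, 1 - t)`, while Overlay.vert above rotates the texture 90 degrees clockwise with `(s, t) -> (1 - t, s)`. The rotation is a pure function on coordinates, easy to check in TypeScript:

```typescript
// The 90-degree clockwise rotation from Overlay.vert, as a pure function
// on normalized texture coordinates: (s, t) -> (1 - t, s).
const rotateTexCoord = ([s, t]: [number, number]): [number, number] => [1 - t, s];
```

As a sanity check, applying the rotation four times returns any coordinate to itself (up to floating-point rounding), which confirms it really is a quarter turn.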
precision mediump float; in vec2 vTexCoord; uniform sampler2D uSampler; uniform mediump vec2 uSize; uniform lowp float uBlockSize; out vec4 fragColor; void main() { vec2 uv = vTexCoord.xy; float dx = uBlockSize / uSize.x; float dy = uBlockSize / uSize.y; // Sample from 2 places to get a better average texel color vec2 sampleCoord = (vec2(dx * floor((uv.x / dx)), dy * floor((uv.y / dy))) + vec2(dx * ceil((uv.x / dx)), dy * ceil((uv.y / dy)))) / 2.0f; vec4 frameColor = texture(uSampler, sampleCoord); fragColor = frameColor; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/shaders/PixelateMask.frag ================================================ #version 300 es // Copyright (c) Meta Platforms, Inc. and affiliates. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
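// Note on coordinates (a reading of the code, not an authoritative claim):
// the mask textures below are sampled with swapped axes,
//   texture(uMaskTextureN, vec2(vTexCoord.y, vTexCoord.x))
// apparently to match the 90-degree rotation applied to the frame texture in
// Overlay.vert. The shader then keeps the pixelated frame color only where
// at least one mask is non-zero, roughly:
//   overlap   = mask0.r > 0.0 || mask1.r > 0.0 || mask2.r > 0.0
//   fragColor = overlap ? pixelatedColor : vec4(0.0)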
precision mediump float; in vec2 vTexCoord; uniform sampler2D uSampler; uniform mediump vec2 uSize; uniform lowp float uBlockSize; uniform int uNumMasks; uniform sampler2D uMaskTexture0; uniform sampler2D uMaskTexture1; uniform sampler2D uMaskTexture2; out vec4 fragColor; void main() { vec4 color = texture(uSampler, vTexCoord); vec2 uv = vTexCoord.xy; float dx = uBlockSize / uSize.x; float dy = uBlockSize / uSize.y; vec4 color1 = vec4(0.0f); vec4 color2 = vec4(0.0f); vec4 color3 = vec4(0.0f); vec2 sampleCoord = (vec2(dx * floor((uv.x / dx)), dy * floor((uv.y / dy))) + vec2(dx * ceil((uv.x / dx)), dy * ceil((uv.y / dy)))) / 2.0f; vec4 frameColor = texture(uSampler, sampleCoord); color = frameColor; if(uNumMasks > 0) { color1 = texture(uMaskTexture0, vec2(vTexCoord.y, vTexCoord.x)); } if(uNumMasks > 1) { color2 = texture(uMaskTexture1, vec2(vTexCoord.y, vTexCoord.x)); } if(uNumMasks > 2) { color3 = texture(uMaskTexture2, vec2(vTexCoord.y, vTexCoord.x)); } bool overlap = (color1.r > 0.0f || color2.r > 0.0f || color3.r > 0.0f); if(overlap) { fragColor = color; } else { fragColor = vec4(0.0f); } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/shaders/Replace.frag ================================================ #version 300 es // Copyright (c) Meta Platforms, Inc. and affiliates. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
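// Sketch of the bbox-to-emoji mapping used below (illustrative numbers, not
// from the code): calculateAdjustedTexCoord centers the emoji texture on the
// mask's bounding box. For bbox = (0.2, 0.2, 0.6, 0.6) and aspectRatio = 1.0:
//   center  = ((0.2, 0.2) + (0.6, 0.6)) * 0.5 = (0.4, 0.4)
//   radiusX = |0.6 - 0.2| = 0.4, then scaled by 1.25 to 0.5 (slight enlarge)
// A fragment exactly at the center maps to emoji coordinate (0.5, 0.5).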
precision lowp float; in vec2 vTexCoord; uniform vec2 uSize; uniform int uNumMasks; uniform sampler2D uEmojiTexture; uniform bool uFill; // use all emoji texture uniform sampler2D uMaskTexture0; uniform sampler2D uMaskTexture1; uniform sampler2D uMaskTexture2; uniform vec4 bbox0; uniform vec4 bbox1; uniform vec4 bbox2; out vec4 fragColor; vec2 calculateAdjustedTexCoord(vec2 vTexCoord, vec4 bbox, float aspectRatio, out float distanceFromCenter) { vec2 center = (bbox.xy + bbox.zw) * 0.5f; float radiusX = abs(bbox.z - bbox.x); float radiusY = radiusX / aspectRatio; float scale = 1.25f; radiusX *= scale; radiusY *= scale; vec2 adjustedTexCoord = (vTexCoord - center) / vec2(radiusX, radiusY) + vec2(0.5f); distanceFromCenter = length((vTexCoord - center) / vec2(radiusX * 0.5f, radiusY * 0.5f)); return adjustedTexCoord; } void main() { vec4 finalColor = vec4(0.0f); float aspectRatio = uSize.y / uSize.x; float totalMaskValue = 0.0f; vec4 bgFill = vec4(1.0f, 0.0f, 0.0f, 1.0f); vec4 emojiColor; if(uNumMasks > 0) { float maskValue0 = texture(uMaskTexture0, vec2(vTexCoord.y, vTexCoord.x)).r; float distanceFromCenter; vec2 adjustedTexCoord = calculateAdjustedTexCoord(vTexCoord, bbox0, aspectRatio, distanceFromCenter); if(maskValue0 > 0.0f) { emojiColor = texture(uEmojiTexture, adjustedTexCoord); if(distanceFromCenter > 0.85f && !uFill) { emojiColor = bgFill; } } if(uFill) { emojiColor = texture(uEmojiTexture, adjustedTexCoord); } totalMaskValue += maskValue0; } if(uNumMasks > 1) { float maskValue1 = texture(uMaskTexture1, vec2(vTexCoord.y, vTexCoord.x)).r; float distanceFromCenter; vec2 adjustedTexCoord = calculateAdjustedTexCoord(vTexCoord, bbox1, aspectRatio, distanceFromCenter); if(maskValue1 > 0.0f) { emojiColor = texture(uEmojiTexture, adjustedTexCoord); if(distanceFromCenter > 0.85f && !uFill) { emojiColor = bgFill; } } if(uFill && emojiColor.a == 0.0f) { emojiColor = texture(uEmojiTexture, adjustedTexCoord); } totalMaskValue += maskValue1; } if(uNumMasks > 2) { float 
maskValue2 = texture(uMaskTexture2, vec2(vTexCoord.y, vTexCoord.x)).r; float distanceFromCenter; vec2 adjustedTexCoord = calculateAdjustedTexCoord(vTexCoord, bbox2, aspectRatio, distanceFromCenter); if(maskValue2 > 0.0f) { emojiColor = texture(uEmojiTexture, adjustedTexCoord); if(distanceFromCenter > 0.85f && !uFill) { emojiColor = bgFill; } } if(uFill && emojiColor.a == 0.0f) { emojiColor = texture(uEmojiTexture, adjustedTexCoord); } totalMaskValue += maskValue2; } if(totalMaskValue > 0.0f) { finalColor = emojiColor; } else { finalColor = uFill ? emojiColor : vec4(0.0f); } fragColor = finalColor; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/shaders/Scope.frag ================================================ #version 300 es // Copyright (c) Meta Platforms, Inc. and affiliates. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. 
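// The scope effect below tests each fragment against an ellipse fitted to the
// mask's bounding box. In normalized form (a sketch of the math used below):
//   dist = sqrt(((x - cx) / rx)^2 + ((y - cy) / ry)^2)
//   dist <= 0.8                  -> fragment is inside the scope circle
//   0.8 - 0.085 <= dist <= 0.8   -> fragment is on the ring outline
//                                   (the uFillColor branch)
// where 0.8 is radiusThreshold and 0.085 the ring thickness. Scoped
// fragments are tinted with the mask color (or white/black when uLight is
// set) unless the mask itself covers them, in which case the original frame
// color is kept.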
precision mediump float; in vec2 vTexCoord; uniform sampler2D uSampler; uniform vec2 uSize; uniform int uNumMasks; uniform bool uFillColor; uniform bool uLight; uniform bool uTransparency; uniform sampler2D uMaskTexture0; uniform sampler2D uMaskTexture1; uniform sampler2D uMaskTexture2; uniform vec4 uMaskColor0; uniform vec4 uMaskColor1; uniform vec4 uMaskColor2; uniform vec4 bbox0; uniform vec4 bbox1; uniform vec4 bbox2; out vec4 fragColor; void main() { vec4 color = texture(uSampler, vTexCoord); float aspectRatio = uSize.y / uSize.x; float radiusThreshold = 0.8f; float thickness = 0.085f; vec4 mask1 = vec4(0.0f); vec4 mask2 = vec4(0.0f); vec4 mask3 = vec4(0.0f); vec4 color1 = uMaskColor0 / 255.0; vec4 color2 = uMaskColor1 / 255.0; vec4 color3 = uMaskColor2 / 255.0; vec4 scopedColor = vec4(0.0f); bool scoped = false; vec4 whiteVariation = uTransparency ? vec4(0.0,0.0,0.0,1.0) : vec4(1.0); if(uNumMasks > 0) { mask1 = texture(uMaskTexture0, vec2(vTexCoord.y, vTexCoord.x)); vec2 center1 = (bbox0.xy + bbox0.zw) * 0.5f; float radiusX1 = abs(bbox0.y - bbox0.w) * 0.5f; float radiusY1 = radiusX1 / aspectRatio; float distX1 = (vTexCoord.x - center1.x) / radiusX1; float distY1 = (vTexCoord.y - center1.y) / radiusY1; float dist1 = sqrt(pow(distX1, 2.0f) + pow(distY1, 2.0f)); if(uFillColor) { if(dist1 >= radiusThreshold - thickness && dist1 <= radiusThreshold) { scoped = true; scopedColor = uLight ? whiteVariation : color1; } } else if(dist1 <= radiusThreshold) { scoped = true; scopedColor = uLight ? 
whiteVariation : color1; } } if(uNumMasks > 1) { mask2 = texture(uMaskTexture1, vec2(vTexCoord.y, vTexCoord.x)); vec2 center2 = (bbox1.xy + bbox1.zw) * 0.5f; float radiusX2 = abs(bbox1.y - bbox1.w) * 0.5f; float radiusY2 = radiusX2 / aspectRatio; float distX2 = (vTexCoord.x - center2.x) / radiusX2; float distY2 = (vTexCoord.y - center2.y) / radiusY2; float dist2 = sqrt(pow(distX2, 2.0f) + pow(distY2, 2.0f)); if(uFillColor) { if(dist2 >= radiusThreshold - thickness && dist2 <= radiusThreshold) { scoped = true; scopedColor = uLight ? whiteVariation : color2; } } else if(dist2 <= radiusThreshold) { scoped = true; scopedColor = uLight ? whiteVariation : color2; } } if(uNumMasks > 2) { mask3 = texture(uMaskTexture2, vec2(vTexCoord.y, vTexCoord.x)); vec2 center3 = (bbox2.xy + bbox2.zw) * 0.5f; float radiusX3 = abs(bbox2.y - bbox2.w) * 0.5f; float radiusY3 = radiusX3 / aspectRatio; float distX3 = (vTexCoord.x - center3.x) / radiusX3; float distY3 = (vTexCoord.y - center3.y) / radiusY3; float dist3 = sqrt(pow(distX3, 2.0f) + pow(distY3, 2.0f)); if(uFillColor) { if(dist3 >= radiusThreshold - thickness && dist3 <= radiusThreshold) { scoped = true; scopedColor = uLight ? whiteVariation : color3; } } else if(dist3 <= radiusThreshold) { scoped = true; scopedColor = uLight ? whiteVariation : color3; } } bool overlap = (mask1.r > 0.0f || mask2.r > 0.0f || mask3.r > 0.0f); if(scoped) { fragColor = overlap ? color : scopedColor; fragColor.a = uTransparency ? fragColor.a : 1.0; } else { fragColor = overlap ? color : vec4(0.0f, 0.0f, 0.0f, 0.0f); } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/shaders/Sobel.frag ================================================ #version 300 es // Copyright (c) Meta Platforms, Inc. and affiliates. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. 
// You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. precision mediump float; in vec2 vTexCoord; uniform sampler2D uSampler; uniform vec2 uSize; uniform bool uSwapColor; uniform bool uMonocolor; out vec4 fragColor; void main() { // calculate the offset for one pixel in texture coordinates vec2 texOffset = 1.0f / uSize; vec3 result = vec3(0.0f); // neighboring pixels vec3 tLeft = texture(uSampler, vTexCoord + texOffset * vec2(-1, 1)).rgb; vec3 tRight = texture(uSampler, vTexCoord + texOffset * vec2(1, -1)).rgb; vec3 bLeft = texture(uSampler, vTexCoord + texOffset * vec2(-1, -1)).rgb; vec3 bRight = texture(uSampler, vTexCoord + texOffset * vec2(1, 1)).rgb; // calculate the gradient edge of the current pixel using [3x3] sobel operator. vec3 xEdge = tLeft + 2.0f * texture(uSampler, vTexCoord + texOffset * vec2(-1, 0)).rgb + bLeft - tRight - 2.0f * texture(uSampler, vTexCoord + texOffset * vec2(1, 0)).rgb - bRight; vec3 yEdge = tLeft + 2.0f * texture(uSampler, vTexCoord + texOffset * vec2(0, 1)).rgb + tRight - bLeft - 2.0f * texture(uSampler, vTexCoord + texOffset * vec2(0, -1)).rgb - bRight; // magnitude of the gradient at the current pixel. result = sqrt(xEdge * xEdge + yEdge * yEdge); if (uMonocolor) { // Convert result to a grayscale intensity float intensity = length(result) / sqrt(3.0); // Threshold to determine if the color should be white or black float threshold = 0.2; if (intensity > threshold) { fragColor = uSwapColor ? vec4(1.0) : vec4(0.0, 0.0, 0.0, 1.0); } else { fragColor = uSwapColor ? vec4(0.0, 0.0, 0.0, 1.0) : vec4(1.0); } } else { result = uSwapColor ? 
result : vec3(0.0, 1.0, 0.0) * result; vec4 finalColor = vec4(result, 1.0f); fragColor = finalColor; } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/effects/shaders/VibrantMask.frag ================================================ #version 300 es // Copyright (c) Meta Platforms, Inc. and affiliates. // // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an "AS IS" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. precision mediump float; precision mediump sampler3D; in vec2 vTexCoord; uniform sampler2D uSampler; uniform float uCurrentFrame; uniform sampler3D uColorGradeLUT; uniform int uNumMasks; uniform sampler2D uMaskTexture0; uniform sampler2D uMaskTexture1; uniform sampler2D uMaskTexture2; out vec4 fragColor; void main() { vec4 color = texture(uSampler, vTexCoord); vec3 gradedColor = texture(uColorGradeLUT, color.rgb).rgb; vec4 color1 = vec4(0.0f); vec4 color2 = vec4(0.0f); vec4 color3 = vec4(0.0f); // Apply edge detection for each mask // We can't use dynamic indexing with samplers in GLSL ES 3.0. 
// https://registry.khronos.org/OpenGL/specs/es/3.0/GLSL_ES_Specification_3.00.pdf Ch 12.30 if(uNumMasks > 0) { color1 = texture(uMaskTexture0, vec2(vTexCoord.y, vTexCoord.x)); } if(uNumMasks > 1) { color2 = texture(uMaskTexture1, vec2(vTexCoord.y, vTexCoord.x)); } if(uNumMasks > 2) { color3 = texture(uMaskTexture2, vec2(vTexCoord.y, vTexCoord.x)); } bool overlap = (color1.r > 0.0f || color2.r > 0.0f || color3.r > 0.0f); if(overlap) { fragColor = vec4(gradedColor, 1); } else { fragColor = vec4(0.0f); } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/filmstrip/FilmstripUtil.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import {CanvasForm, CanvasSpace, Font, Group, Pt, Triangle} from 'pts'; import SelectedFrameHelper from './SelectedFrameHelper'; import {PADDING_BOTTOM, PADDING_TOP} from './VideoFilmstrip'; export function getPointerPosition( event: React.PointerEvent, ) { const rect = event.currentTarget.getBoundingClientRect(); return new Pt(event.clientX - rect.left, event.clientY - rect.top); } export function drawFilmstrip( filmstrip: ImageBitmap | null, space: CanvasSpace | undefined, form: CanvasForm | undefined, ) { if (filmstrip == null || space == undefined || form?.ctx == undefined) { return; } const ratio = filmstrip.width / (filmstrip.height + PADDING_TOP + PADDING_BOTTOM); form.image( [ [0, PADDING_TOP], [space.size.x, space.size.x / ratio], ], filmstrip, ); } export function getTimeFromFrame(frame: number, fps: number): string { const seconds = Math.floor(frame / fps); const frameRemaining = frame - fps * seconds; return `${seconds}:${frameRemaining.toFixed().toString().padStart(2, '0')}`; } export function drawMarker( space: CanvasSpace | undefined, form: CanvasForm | undefined, selectedFrameHelper: SelectedFrameHelper, pointerPosition: Pt | null, scanLabel: string | false, fps: number, ) { if (space == undefined || form?.ctx == undefined) { return; } const marker = Group.fromArray([ [0, PADDING_TOP], [0, space.height - PADDING_BOTTOM], ]); const currentMarker = marker .clone() .add(Math.max(5, selectedFrameHelper.position), 0); const getTextPosition = (label: string, marker: Group) => { const textWidth = form.ctx.measureText(label).width; return marker[0] .$subtract(textWidth / 2, 0) .$min(space.width - textWidth, PADDING_TOP - 10) .$max(textWidth / 2 - 2, 0); }; // draw current marker form .strokeOnly('#00000066', 5) .line(currentMarker) .strokeOnly('#fff', 1) .line(currentMarker) .fill('#000') .polygon( Triangle.fromCenter(currentMarker[0].$add(0, 10), 5).rotate2D(Math.PI), ); // draw text const frameLabel = getTimeFromFrame(selectedFrameHelper.index, fps); 
form .font(new Font(12, 'monospace')) .fillOnly('#fff') .text(getTextPosition(frameLabel, currentMarker), frameLabel); // draw scanning ghost marker if ( selectedFrameHelper.isScanning && pointerPosition != null && scanLabel != false ) { const scanMarker = marker.clone().add(pointerPosition.x, 0); form.strokeOnly('#ffffff66', 5).line(scanMarker); form .font(new Font(12, 'monospace')) .fillOnly('#8595A4') .text(getTextPosition(scanLabel, scanMarker), scanLabel); } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/filmstrip/SelectedFrameHelper.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ export default class SelectedFrameHelper { private frames = 0; private frameToWidthRatio = 1; private selectedIndex = 0; private scanning = false; constructor(totalFrames: number, totalWidth: number, index?: number) { this.reset(totalFrames, totalWidth, index); } reset(totalFrames: number, totalWidth: number, index?: number) { this.frames = totalFrames; this.frameToWidthRatio = totalWidth / this.frames; if (index != null) { this.select(index); } } select(index: number) { this.selectedIndex = index >= this.frames ? 
this.frames - 1 : index; } toPosition(index: number) { return index * this.frameToWidthRatio; } toIndex(position: number) { return Math.floor(position / this.frameToWidthRatio); } get index(): number { return this.selectedIndex; } get position(): number { return this.selectedIndex * this.frameToWidthRatio; } scan(state: boolean) { this.scanning = state; } get isScanning(): boolean { return this.scanning; } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/filmstrip/VideoFilmstrip.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import SelectedFrameHelper from '@/common/components/video/filmstrip/SelectedFrameHelper'; import {isPlayingAtom} from '@/demo/atoms'; import stylex from '@stylexjs/stylex'; import {useAtomValue, useSetAtom} from 'jotai'; import {CanvasSpace, Pt} from 'pts'; import {useCallback, useEffect, useMemo, useRef} from 'react'; import {PtsCanvas, PtsCanvasImperative} from 'react-pts-canvas'; import {VideoRef} from '../Video'; import {DecodeEvent, FrameUpdateEvent} from '../VideoWorkerBridge'; import useVideo from '../editor/useVideo'; import { drawFilmstrip, drawMarker, getPointerPosition, getTimeFromFrame, } from './FilmstripUtil'; import {selectedFrameHelperAtom} from './atoms'; import useDisableScrolling from './useDisableScrolling'; const styles = stylex.create({ container: { display: 'flex', flexDirection: 'column', }, filmstripWrapper: { position: 'relative', width: '100%', height: '5rem' /* 80px */, }, filmstrip: { position: 'absolute', top: 0, left: 0, bottom: 0, right: 0, cursor: 'col-resize', overflow: 'hidden', }, canvas: { width: '100%', height: '100%', }, }); export const PADDING_TOP = 30; export const PADDING_BOTTOM = 0; export default function VideoFilmstrip() { const video = useVideo(); const ptsCanvasRef = useRef(null); const filmstripRef = useRef(null); const isPlayingOnPointerDownRef = useRef(false); const isPlaying = useAtomValue(isPlayingAtom); const {enable: enableScrolling, disable: disableScrolling} = useDisableScrolling(); const pointerPositionRef = useRef(null); const animateRAFHandle = useRef(null); const selectedFrameHelper = useMemo(() => new SelectedFrameHelper(1, 1), []); const setSelectedFrameHelper = useSetAtom(selectedFrameHelperAtom); const fpsRef = useRef(30); useEffect(() => { function onDecode(event: DecodeEvent) { video?.removeEventListener('decode', onDecode); fpsRef.current = event.fps; } video?.addEventListener('decode', onDecode); return () => { video?.removeEventListener('decode', onDecode); }; }, [video]); useEffect(() => { 
setSelectedFrameHelper(selectedFrameHelper); }, [setSelectedFrameHelper, selectedFrameHelper]); const computeFrame = useCallback( (normalizedPosition: number): {index: number} | null => { if (video == null) { return null; } const numFrames = video.numberOfFrames; const index = Math.min( Math.max(0, Math.floor(normalizedPosition * numFrames)), numFrames - 1, ); // The frame is needed for the CAE model. Do we still want to support it? // return {image: decodedVideo.frames[index], index: index}; return {index}; }, [video], ); const createFilmstrip = useCallback( async ( video: VideoRef | null, space: CanvasSpace | undefined, frameIndex?: number, ) => { if (video === null || space == undefined) { return; } const bitmap: ImageBitmap = await video?.createFilmstrip( space.width, space.height - (PADDING_TOP - PADDING_BOTTOM), ); filmstripRef.current = bitmap; selectedFrameHelper.reset(video.numberOfFrames, space.width, frameIndex); // also reset index to first frame return bitmap; }, [selectedFrameHelper], ); // Custom animation handler const handleRAF = useCallback(() => { animateRAFHandle.current = null; const space = ptsCanvasRef.current?.getSpace(); const form = ptsCanvasRef.current?.getForm(); if (space == undefined || form == undefined) { return; } // Clear space, in particular clearing the frame index number of // previous renders. space.clear(); drawFilmstrip(filmstripRef.current, space, form); const scanLabel = selectedFrameHelper.isScanning && pointerPositionRef.current !== null && fpsRef.current !== null && getTimeFromFrame( computeFrame(pointerPositionRef.current.x / space.width)?.index ?? 
0, fpsRef.current, ); drawMarker( space, form, selectedFrameHelper, pointerPositionRef.current, scanLabel, fpsRef.current, ); }, [computeFrame, selectedFrameHelper]); const handleAnimate = useCallback(() => { if (animateRAFHandle.current === null) { animateRAFHandle.current = requestAnimationFrame(handleRAF); } }, [handleRAF]); const handleFrameUpdate = useCallback( (event: FrameUpdateEvent) => { if (!selectedFrameHelper.isScanning) { selectedFrameHelper.select(event.index); } handleAnimate(); }, [handleAnimate, selectedFrameHelper], ); // Register a frame update listener on the video to update the filmstrip // indicator when the video changes frames. useEffect(() => { video?.addEventListener('frameUpdate', handleFrameUpdate); return () => { video?.removeEventListener('frameUpdate', handleFrameUpdate); }; }, [video, handleFrameUpdate]); // Initiate filmstrip image useEffect(() => { const space = ptsCanvasRef.current?.getSpace(); async function onLoadStart() { await createFilmstrip(video, space, 0); handleAnimate(); } async function progress() { await createFilmstrip(video, space, 0); handleAnimate(); } void progress(); video?.addEventListener('loadstart', onLoadStart); video?.addEventListener('decode', progress); return () => { video?.removeEventListener('loadstart', onLoadStart); video?.removeEventListener('decode', progress); }; }, [createFilmstrip, selectedFrameHelper, handleAnimate, video]); return (
{ if (video != null && space != undefined) { selectedFrameHelper.reset(video.numberOfFrames, space.width); } if (video !== null) { await createFilmstrip(video, space); } handleAnimate(); }} onPointerDown={event => { const canvas = ptsCanvasRef.current?.getCanvas(); canvas?.setPointerCapture(event.pointerId); // Disable page scrolling while interacting with the filmstrip disableScrolling(); pointerPositionRef.current = getPointerPosition(event); selectedFrameHelper.scan(true); // Pause the video when a user initially has their pointer down. // Playback will resume once the onPointerUp event is triggered. isPlayingOnPointerDownRef.current = isPlaying; if (isPlaying) { video?.pause(); } }} onPointerUp={event => { // Enable page scrolling after interaction with filmstrip is done enableScrolling(); const space = ptsCanvasRef.current?.getSpace(); if (space != undefined) { pointerPositionRef.current = getPointerPosition(event); selectedFrameHelper.scan(false); const frame = computeFrame( pointerPositionRef.current.x / space.size.x, ); if ( frame != null && selectedFrameHelper.index !== frame.index ) { selectedFrameHelper.select(frame.index); if (video !== null) { video.frame = frame.index; if (isPlayingOnPointerDownRef.current) { video.play(); } } } handleAnimate(); } pointerPositionRef.current = null; }} onPointerMove={event => { if ( !selectedFrameHelper.isScanning || pointerPositionRef.current === null ) { return; } const space = ptsCanvasRef.current?.getSpace(); const form = ptsCanvasRef.current?.getForm(); if ( selectedFrameHelper.isScanning && space != null && form != null ) { pointerPositionRef.current = getPointerPosition(event); const frame = computeFrame( pointerPositionRef.current.x / space.size.x, ); if (frame != null) { handleAnimate(); if (video !== null) { video.frame = frame.index; } } } }} />
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/filmstrip/atoms.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {atom} from 'jotai'; import SelectedFrameHelper from './SelectedFrameHelper'; export const selectedFrameHelperAtom = atom(null); ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/filmstrip/useDisableScrolling.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import {useCallback, useEffect, useRef} from 'react'; function preventDefault(event: TouchEvent) { event.preventDefault(); } export default function useDisableScrolling() { const isDisabledRef = useRef(false); const disable = useCallback(() => { // Scrolling is already disabled if (isDisabledRef.current) { return; } isDisabledRef.current = true; document.body.addEventListener('touchmove', preventDefault, { passive: false, }); }, []); const enable = useCallback(() => { // Scrolling is not disabled if (!isDisabledRef.current) { return; } isDisabledRef.current = false; document.body.removeEventListener('touchmove', preventDefault); }, []); useEffect(() => { // Enable scrolling again on unmount return () => { enable(); }; }, [enable]); return { disable, enable, }; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/filmstrip/useSelectedFrameHelper.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import {useAtomValue} from 'jotai'; import {selectedFrameHelperAtom} from './atoms'; export default function useSelectedFrameHelper() { return useAtomValue(selectedFrameHelperAtom); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/layers/InteractionLayer.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import useVideo from '@/common/components/video/editor/useVideo'; import {getPointInImage} from '@/common/components/video/editor/VideoEditorUtils'; import {SegmentationPoint} from '@/common/tracker/Tracker'; import {labelTypeAtom} from '@/demo/atoms'; import stylex from '@stylexjs/stylex'; import {useAtomValue} from 'jotai'; import {MouseEvent} from 'react'; const styles = stylex.create({ container: { position: 'absolute', left: 0, top: 0, right: 0, bottom: 0, }, }); type Props = { onPoint: (point: SegmentationPoint) => void; }; export default function InteractionLayer({onPoint}: Props) { const video = useVideo(); // Use labelType to swap positive and negative points. The most important use // case is the switch between positive and negative label for left mouse // clicks. const labelType = useAtomValue(labelTypeAtom); return (
) => { const canvas = video?.getCanvas(); if (canvas != null) { const point = getPointInImage(event, canvas); onPoint([...point, labelType === 'positive' ? 1 : 0]); } }} onContextMenu={event => { event.preventDefault(); const canvas = video?.getCanvas(); if (canvas != null) { const point = getPointInImage(event, canvas); onPoint([...point, labelType === 'positive' ? 0 : 1]); } }} /> ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/layers/PointsLayer.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {SegmentationPoint} from '@/common/tracker/Tracker'; import stylex from '@stylexjs/stylex'; import {useMemo} from 'react'; import useResizeObserver from 'use-resize-observer'; import useVideo from '../editor/useVideo'; const styles = stylex.create({ container: { position: 'absolute', width: '100%', height: '100%', pointerEvents: 'none', }, }); type Props = { points: SegmentationPoint[]; onRemovePoint: (point: SegmentationPoint) => void; }; export function PointsLayer({points, onRemovePoint}: Props) { const video = useVideo(); const videoCanvas = useMemo(() => video?.getCanvas(), [video]); const { ref, width: containerWidth = 1, height: containerHeight = 1, } = useResizeObserver(); const canvasWidth = videoCanvas?.width ?? 1; const canvasHeight = videoCanvas?.height ?? 
1; const sizeMultiplier = useMemo(() => { const widthMultiplier = canvasWidth / containerWidth; const heightMultiplier = canvasHeight / containerHeight; return Math.max(widthMultiplier, heightMultiplier); }, [canvasWidth, canvasHeight, containerWidth, containerHeight]); const pointRadius = useMemo(() => 8 * sizeMultiplier, [sizeMultiplier]); const pointStroke = useMemo(() => 2 * sizeMultiplier, [sizeMultiplier]); return ( {/* * This is a debug element to verify the SVG element overlays * perfectly with the canvas element. */} {/* */} {/* Render points */} {points.map((point, idx) => { const isAdd = point[2] === 1; return ( { event.stopPropagation(); onRemovePoint(point); }} /> {isAdd && ( )} ); })} ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/useInputVideo.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {inputVideoAtom} from '@/demo/atoms'; import {useAtom} from 'jotai'; export default function useInputVideo() { const [inputVideo, setInputVideo] = useAtom(inputVideoAtom); return {inputVideo, setInputVideo}; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/components/video/useVideoWorker.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {RefObject, useEffect, useMemo, useRef} from 'react'; import VideoWorkerBridge from './VideoWorkerBridge'; type Options = { createVideoWorker?: () => Worker; createWorkerBridge?: CreateWorkerBridgeFunction; }; const DEFAULT_OPTIONS: Options = { createVideoWorker: () => new Worker(new URL('./VideoWorker', import.meta.url), { type: 'module', }), }; type WorkerFactory = () => Worker; type CreateWorkerBridgeFunction = ( workerFactory: WorkerFactory, ) => VideoWorkerBridge; export default function useVideoWorker( src: string, canvasRef: RefObject<HTMLCanvasElement>, options: Options = {}, ) { const isControlTransferredToOffscreenRef = useRef(false); const mergedOptions = useMemo(() => { const definedProps = (o: Options) => Object.fromEntries( Object.entries(o).filter(([_k, v]) => v !== undefined), ); return Object.assign( DEFAULT_OPTIONS, definedProps(options), ) as Required<Options>; }, [options]); const worker = useMemo(() => { if (mergedOptions.createWorkerBridge) { return mergedOptions.createWorkerBridge(mergedOptions.createVideoWorker); } return VideoWorkerBridge.create(mergedOptions.createVideoWorker); }, [mergedOptions]); useEffect(() => { const canvas = canvasRef.current; if (canvas == null) { return; } if (isControlTransferredToOffscreenRef.current) { return; } isControlTransferredToOffscreenRef.current = true; worker.setCanvas(canvas); return () => { // Cannot terminate worker in DEV mode // workerRef.current?.terminate(); }; }, [canvasRef,
mergedOptions, worker]); useEffect(() => { worker.setSource(src); }, [src, worker]); return worker; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/error/ErrorFallback.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import useReportError from '@/common/error/useReportError'; import {Button} from 'react-daisyui'; import {FallbackProps} from 'react-error-boundary'; export default function ErrorFallback({ error, resetErrorBoundary, }: FallbackProps) { const reportError = useReportError(); function handleReportError() { reportError(error); } return (

Please check your connection and retry, or report the error.

); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/error/ErrorReport.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {getErrorTitle} from '@/common/error/ErrorUtils'; import errorReportAtom from '@/common/error/errorReportAtom'; import emptyFunction from '@/common/utils/emptyFunction'; import {BugAntIcon} from '@heroicons/react/24/outline'; import {Editor} from '@monaco-editor/react'; import {useAtom} from 'jotai'; import {useEffect, useRef} from 'react'; import {Button, Modal} from 'react-daisyui'; type Props = { onReport?: (error: Error) => void; }; export default function ErrorReport({onReport = emptyFunction}: Props) { const [error, setError] = useAtom(errorReportAtom); const errorModalRef = useRef<HTMLDialogElement>(null); // Clear the error state when the dialog closes (e.g. on ESC) useEffect(() => { function onCloseDialog() { setError(null); } const errorModal = errorModalRef.current; errorModal?.addEventListener('close', onCloseDialog); return () => { errorModal?.removeEventListener('close', onCloseDialog); }; }, [setError]); useEffect(() => { if (error != null) { errorModalRef.current?.showModal(); } else { errorModalRef.current?.close(); } }, [error, setError]); function handleCloseModal() { errorModalRef.current?.close(); } function handleReport() { if (error != null) { onReport(error); } } return ( {error != null ?
getErrorTitle(error) : 'Unknown error'} ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/error/ErrorSerializationUtils.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import CreateFilmstripError from '@/graphql/errors/CreateFilmstripError'; import DrawFrameError from '@/graphql/errors/DrawFrameError'; import WebGLContextError from '@/graphql/errors/WebGLContextError'; import {errorConstructors} from 'serialize-error'; export function registerSerializableConstructors() { // @ts-expect-error Wrong `errorConstructors` types errorConstructors.set('DrawFrameError', DrawFrameError); // @ts-expect-error Wrong `errorConstructors` types errorConstructors.set('CreateFilmstripError', CreateFilmstripError); // @ts-expect-error Wrong `errorConstructors` types errorConstructors.set('WebGLContextError', WebGLContextError); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/error/ErrorUtils.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import CreateFilmstripError from '@/graphql/errors/CreateFilmstripError'; import DrawFrameError from '@/graphql/errors/DrawFrameError'; import WebGLContextError from '@/graphql/errors/WebGLContextError'; import {deserializeError, type ErrorObject} from 'serialize-error'; export type RenderingErrorType = | 'webgl_context' | 'draw_frame' | 'create_filmstrip' | 'error'; export function getRenderErrorType(error?: ErrorObject): RenderingErrorType { const deserializedError = deserializeError(error); if (deserializedError instanceof WebGLContextError) { return 'webgl_context'; } if (deserializedError instanceof DrawFrameError) { return 'draw_frame'; } if (deserializedError instanceof CreateFilmstripError) { return 'create_filmstrip'; } return 'error'; } /** * This function extracts the title from an error message. * The title is defined as the text before the first newline character. * * @param error The error object from which the title is to be extracted. * @returns The title of the error message. * @example * ```ts * const error = new Error('This is the title\nThis is the body'); * const title = getErrorTitle(error); * console.log(title); // 'This is the title' * ``` */ export function getErrorTitle({message}: Error): string { const idx = message.indexOf('\n'); return idx < 0 ? message : message.substring(0, idx); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/error/errorReportAtom.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {atom} from 'jotai'; export default atom<Error | null>(null); ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/error/useReportError.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ import errorReportAtom from '@/common/error/errorReportAtom'; import {useSetAtom} from 'jotai'; import {useCallback} from 'react'; export default function useReportError() { const setError = useSetAtom(errorReportAtom); return useCallback( (error: unknown) => { if (typeof error === 'string') { setError(new Error(error)); } else if (error instanceof Error) { setError(error); } else { setError(new Error('unknown error occurred')); } }, [setError], ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/loading/LoadingMessage.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {Loading} from 'react-daisyui'; export default function LoadingMessage() { return (
Fetching data
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/loading/LoadingStateScreen.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import introVideo from '@/assets/videos/sam2_720px_dark.mp4'; import introVideoPoster from '@/assets/videos/sam2_video_poster.png'; import StaticVideoPlayer from '@/common/loading/StaticVideoPlayer'; import {borderRadius, fontSize, spacing} from '@/theme/tokens.stylex'; import stylex from '@stylexjs/stylex'; import {PropsWithChildren, ReactNode} from 'react'; import {Link} from 'react-router-dom'; const styles = stylex.create({ container: { backgroundColor: '#000', minHeight: '100%', }, content: { display: 'flex', flexDirection: 'column', gap: spacing[8], maxWidth: '36rem', //* 576px */ marginHorizontal: 'auto', paddingVertical: { default: '6rem', '@media screen and (max-width: 768px)': '3rem', }, paddingHorizontal: spacing[8], color: '#fff', }, animationContainer: { display: 'flex', justifyContent: 'center', }, animation: { border: '2px solid white', borderRadius: borderRadius['xl'], maxWidth: 450, maxHeight: 450, height: '100%', overflow: 'hidden', '@media screen and (max-width: 768px)': { height: 300, width: 300, }, }, title: { textAlign: 'center', lineHeight: '2rem', fontSize: fontSize['2xl'], fontWeight: 400, }, description: { textAlign: 'center', color: '#A7B3BF', }, 
link: { textAlign: 'center', textDecorationLine: 'underline', color: '#A7B3BF', }, }); type Props = PropsWithChildren<{ title: string; description?: string | ReactNode; linkProps?: { to: string; label: string; }; }>; export default function LoadingStateScreen({ title, description, children, linkProps, }: Props) { return (

{title}

{description != null && (
{description}
)} {children} {linkProps != null && ( {linkProps.label} )}
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/loading/StaticVideoPlayer.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import React from 'react'; export type VideoAspectRatio = 'wide' | 'square' | 'normal' | 'fill'; export type VideoProps = { src: string; aspectRatio?: VideoAspectRatio; className?: string; containerClassName?: string; } & React.VideoHTMLAttributes<HTMLVideoElement>; export default function StaticVideoPlayer({ src, aspectRatio, className = '', containerClassName = '', ...props }: VideoProps) { let aspect = aspectRatio === 'wide' ? `aspect-video` : aspectRatio === 'square' ? 'aspect-square' : 'aspect-auto'; let videoSize = ''; if (aspectRatio === 'fill') { aspect = 'absolute object-cover right-0 bottom-0 min-w-full min-h-full h-full'; videoSize = 'w-full h-full object-cover object-center'; } return (
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/loading/UploadLoadingScreen.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import ChangeVideoModal from '@/common/components/gallery/ChangeVideoModal'; import type {VideoGalleryTriggerProps} from '@/common/components/gallery/DemoVideoGalleryModal'; import LoadingStateScreen from '@/common/loading/LoadingStateScreen'; import {uploadingStateAtom} from '@/demo/atoms'; import {ImageCopy} from '@carbon/icons-react'; import {useAtomValue} from 'jotai'; import OptionButton from '../components/options/OptionButton'; export default function UploadLoadingScreen() { const uploadingState = useAtomValue(uploadingStateAtom); if (uploadingState === 'error') { return (
); } return ( ); } function UploadLoadingScreenChangeVideoTrigger({ onClick, }: VideoGalleryTriggerProps) { return ( ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/logger/DemoLogger.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {RenderingErrorType} from '@/common/error/ErrorUtils'; import Logger from './Logger'; type UploadSourceType = 'gallery' | 'option'; // Maps event names to an optional payload for each event type DemoEventMap = { // User events user_click_canvas: { click_type: 'add_point' | 'remove_point'; click_action: 'add_object' | 'refine_object'; click_variant?: 'positive' | 'negative'; }; user_click_object: { tracklet_id: number; }; user_click_track_and_play: { track_and_play_click_type: 'stream' | 'abort'; }; user_click_apply_effect: { effect_type: 'background' | 'object'; effect_name: string; effect_variant: number; }; user_change_video: { gallery_video_url: string; }; user_upload_video: { upload_source: UploadSourceType; }; user_click_share: { gallery_video_url: string; }; user_click_download: { gallery_video_url: string; }; user_click_web_share: undefined; // Error events client_error_rendering: { rendering_error_type: RenderingErrorType; }; client_error_start_session: undefined; client_error_upload_video: { upload_source: UploadSourceType; upload_error_message: string; 
}; client_error_unsupported_browser: undefined; client_error_page_not_found: { path: string; }; client_error_general: { message: string; }; client_error_fallback: { fallback_error_message: string; }; // Dataset events client_error_fallback_dataset: { dataset_fallback_error_message: string; }; dataset_client_impression_event: { impression_type: 'grid_view' | 'detailed_view'; video_id?: string; }; dataset_client_click_events: { click_type: 'search' | 'next_page' | 'prev_page'; video_id?: string; }; }; export interface LoggerInterface<TEventMap> { event: <K extends keyof TEventMap>( eventName: K, options?: TEventMap[K], ) => void; } export function initialize(): void { // noop } export class DemoLogger implements LoggerInterface<DemoEventMap> { event<K extends keyof DemoEventMap>(eventName: K, options?: DemoEventMap[K]) { Logger.info(eventName, options ?? {}); } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/logger/LogEnvironment.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {LogLevel} from '@/common/logger/Logger'; // Only enable debug logging outside of production builds. The // default in production is error only. export const LOG_LEVEL: LogLevel = import.meta.env.MODE !== 'production' ?
'debug' : 'error'; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/logger/Logger.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {LOG_LEVEL} from './LogEnvironment'; /** Signature of a logging function */ export type LogFn = { (message?: unknown, ...optionalParams: unknown[]): void; }; /** Basic logger interface */ export interface Logger { info: LogFn; warn: LogFn; error: LogFn; debug: LogFn; } /** Log levels */ export type LogLevel = 'info' | 'warn' | 'error' | 'debug'; const NO_OP: LogFn = (_message?: unknown, ..._optionalParams: unknown[]) => {}; /** Logger which outputs to the browser console */ export class ConsoleLogger implements Logger { readonly info: LogFn; readonly warn: LogFn; readonly error: LogFn; readonly debug: LogFn; constructor(options?: {level?: LogLevel}) { const {level} = options || {}; // eslint-disable-next-line no-console this.error = console.error.bind(console); if (level === 'error') { this.debug = NO_OP; this.warn = NO_OP; this.info = NO_OP; return; } // eslint-disable-next-line no-console this.warn = console.warn.bind(console); if (level === 'warn') { this.debug = NO_OP; this.info = NO_OP; return; } // eslint-disable-next-line no-console this.info = console.log.bind(console); if (level === 'info') { this.debug = NO_OP; return; } // eslint-disable-next-line 
no-console this.debug = console.debug.bind(console); } } export default new ConsoleLogger({level: LOG_LEVEL}); ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/screen/useScreenSize.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {screenSizes} from '@/theme/tokens.stylex'; import {useLayoutEffect, useState} from 'react'; export default function useScreenSize(): { screenSize: number; isMobile: boolean; } { const [screenSize, setScreenSize] = useState(0); useLayoutEffect(() => { const updateSize = (): void => { setScreenSize(window.innerWidth); }; window.addEventListener('resize', updateSize); updateSize(); return (): void => window.removeEventListener('resize', updateSize); }, []); return {isMobile: screenSize < screenSizes['md'], screenSize}; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/tracker/SAM2Model.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {generateThumbnail} from '@/common/components/video/editor/VideoEditorUtils'; import VideoWorkerContext from '@/common/components/video/VideoWorkerContext'; import Logger from '@/common/logger/Logger'; import { SAM2ModelAddNewPointsMutation, SAM2ModelAddNewPointsMutation$data, } from '@/common/tracker/__generated__/SAM2ModelAddNewPointsMutation.graphql'; import {SAM2ModelCancelPropagateInVideoMutation} from '@/common/tracker/__generated__/SAM2ModelCancelPropagateInVideoMutation.graphql'; import {SAM2ModelClearPointsInFrameMutation} from '@/common/tracker/__generated__/SAM2ModelClearPointsInFrameMutation.graphql'; import {SAM2ModelClearPointsInVideoMutation} from '@/common/tracker/__generated__/SAM2ModelClearPointsInVideoMutation.graphql'; import {SAM2ModelCloseSessionMutation} from '@/common/tracker/__generated__/SAM2ModelCloseSessionMutation.graphql'; import {SAM2ModelRemoveObjectMutation} from '@/common/tracker/__generated__/SAM2ModelRemoveObjectMutation.graphql'; import {SAM2ModelStartSessionMutation} from '@/common/tracker/__generated__/SAM2ModelStartSessionMutation.graphql'; import { BaseTracklet, Mask, SegmentationPoint, StreamingState, Tracker, Tracklet, } from '@/common/tracker/Tracker'; import {TrackerOptions} from '@/common/tracker/Trackers'; import { ClearPointsInVideoResponse, SessionStartFailedResponse, SessionStartedResponse, StreamingCompletedResponse, StreamingStartedResponse, StreamingStateUpdateResponse, TrackletCreatedResponse, TrackletDeletedResponse, TrackletsUpdatedResponse, } from '@/common/tracker/TrackerTypes'; import 
{convertMaskToRGBA} from '@/common/utils/MaskUtils'; import multipartStream from '@/common/utils/MultipartStream'; import {Stats} from '@/debug/stats/Stats'; import {INFERENCE_API_ENDPOINT} from '@/demo/DemoConfig'; import {createEnvironment} from '@/graphql/RelayEnvironment'; import { DataArray, Masks, RLEObject, decode, encode, toBbox, } from '@/jscocotools/mask'; import {THEME_COLORS} from '@/theme/colors'; import invariant from 'invariant'; import {IEnvironment, commitMutation, graphql} from 'relay-runtime'; type Options = Pick<TrackerOptions, 'inferenceEndpoint'>; type Session = { id: string | null; tracklets: {[id: number]: Tracklet}; }; type StreamMasksResult = { frameIndex: number; rleMaskList: Array<{ objectId: number; rleMask: RLEObject; }>; }; type StreamMasksAbortResult = { aborted: boolean; }; export class SAM2Model extends Tracker { private _endpoint: string; private _environment: IEnvironment; private abortController: AbortController | null = null; private _session: Session = { id: null, tracklets: {}, }; private _streamingState: StreamingState = 'none'; private _emptyMask: RLEObject | null = null; private _maskCanvas: OffscreenCanvas; private _maskCtx: OffscreenCanvasRenderingContext2D; private _stats?: Stats; constructor( context: VideoWorkerContext, options: Options = { inferenceEndpoint: INFERENCE_API_ENDPOINT, }, ) { super(context); this._endpoint = options.inferenceEndpoint; this._environment = createEnvironment(options.inferenceEndpoint); this._maskCanvas = new OffscreenCanvas(0, 0); const maskCtx = this._maskCanvas.getContext('2d'); invariant(maskCtx != null, 'context cannot be null'); this._maskCtx = maskCtx; } public startSession(videoPath: string): Promise<void> { // Reset streaming state. Force update with the true flag to make sure the // UI updates its state. this._updateStreamingState('none', true); return new Promise<void>(resolve => { try { commitMutation<SAM2ModelStartSessionMutation>(this._environment, { mutation: graphql` mutation SAM2ModelStartSessionMutation($input: StartSessionInput!)
{ startSession(input: $input) { sessionId } } `, variables: { input: { path: videoPath, }, }, onCompleted: response => { const {sessionId} = response.startSession; this._session.id = sessionId; this._sendResponse<SessionStartedResponse>('sessionStarted', { sessionId, }); // Clear any tracklets from the previous session when // a new session is started this._clearTracklets(); // Make an empty tracklet this.createTracklet(); resolve(); }, onError: error => { Logger.error(error); this._sendResponse<SessionStartFailedResponse>('sessionStartFailed'); resolve(); }, }); } catch (error) { Logger.error(error); this._sendResponse<SessionStartFailedResponse>('sessionStartFailed'); resolve(); } }); } public closeSession(): Promise<void> { const sessionId = this._session.id; // Do not call cleanup before retrieving the session id because cleanup // will reset the session id. If the order were reversed, the closeSession // mutation would never execute. this._cleanup(); if (sessionId === null) { return Promise.resolve(); } return new Promise<void>((resolve, reject) => { commitMutation<SAM2ModelCloseSessionMutation>(this._environment, { mutation: graphql` mutation SAM2ModelCloseSessionMutation($input: CloseSessionInput!) { closeSession(input: $input) { success } } `, variables: { input: { sessionId, }, }, onCompleted: response => { const {success} = response.closeSession; if (success === false) { reject(new Error('Failed to close session')); return; } resolve(); }, onError: error => { Logger.error(error); reject(error); }, }); }); } public createTracklet(): void { // This will return 0 for empty tracklets and otherwise the next // largest number. 
const nextId = Object.values(this._session.tracklets).reduce( (prev, curr) => Math.max(prev, curr.id), -1, ) + 1; const newTracklet = { id: nextId, color: THEME_COLORS[nextId % THEME_COLORS.length], thumbnail: null, points: [], masks: [], isInitialized: false, }; this._session.tracklets[nextId] = newTracklet; // Notify the main thread this._updateTracklets(); this._sendResponse<TrackletCreatedResponse>('trackletCreated', { tracklet: newTracklet, }); } public deleteTracklet(trackletId: number): Promise<void> { const sessionId = this._session.id; if (sessionId === null) { return Promise.reject('No active session'); } const tracklet = this._session.tracklets[trackletId]; invariant( tracklet != null, 'tracklet for tracklet id %s not initialized', trackletId, ); return new Promise<void>((resolve, reject) => { commitMutation<SAM2ModelRemoveObjectMutation>(this._environment, { mutation: graphql` mutation SAM2ModelRemoveObjectMutation($input: RemoveObjectInput!) { removeObject(input: $input) { frameIndex rleMaskList { objectId rleMask { counts size } } } } `, variables: { input: {objectId: trackletId, sessionId}, }, onCompleted: response => { const trackletUpdates = response.removeObject; this._sendResponse<TrackletDeletedResponse>('trackletDeleted', { isSuccessful: true, }); for (const trackletUpdate of trackletUpdates) { this._updateTrackletMasks( trackletUpdate, trackletUpdate.frameIndex === this._context.frameIndex, false, // shouldGoToFrame ); } this._removeTrackletMasks(tracklet); resolve(); }, onError: error => { this._sendResponse<TrackletDeletedResponse>('trackletDeleted', { isSuccessful: false, }); Logger.error(error); reject(error); }, }); }); } public updatePoints( frameIndex: number, objectId: number, points: SegmentationPoint[], ): Promise<void> { const sessionId = this._session.id; if (sessionId === null) { return Promise.reject('No active session'); } // TODO: This is not the right place to initialize the empty mask. // Move this into the constructor and listen to events on the context. 
// Note, the initial context.width and context.height are 0, so it needs // to happen based on an event, so when the video is initialized, it needs // to notify the tracker to update the empty mask. if (this._emptyMask === null) { // We need to round the height/width to the nearest integer since // Masks.toTensor() expects an integer value for the height/width. const tensor = new Masks( Math.trunc(this._context.height), Math.trunc(this._context.width), 1, ).toDataArray(); this._emptyMask = encode(tensor)[0]; } const tracklet = this._session.tracklets[objectId]; invariant( tracklet != null, 'tracklet for object id %s not initialized', objectId, ); // Mark session needing propagation when point is set this._updateStreamingState('required'); // Clear all points in frame if no points are provided. if (points.length === 0) { return this.clearPointsInFrame(frameIndex, objectId); } return new Promise<void>((resolve, reject) => { const normalizedPoints = points.map(p => [ p[0] / this._context.width, p[1] / this._context.height, ]); const labels = points.map(p => p[2]); commitMutation<SAM2ModelAddNewPointsMutation>(this._environment, { mutation: graphql` mutation SAM2ModelAddNewPointsMutation($input: AddPointsInput!) 
{ addPoints(input: $input) { frameIndex rleMaskList { objectId rleMask { counts size } } } } `, variables: { input: { sessionId, frameIndex, objectId, labels: labels, points: normalizedPoints, clearOldPoints: true, }, }, onCompleted: response => { tracklet.points[frameIndex] = points; tracklet.isInitialized = true; this._updateTrackletMasks(response.addPoints, true); resolve(); }, onError: error => { Logger.error(error); reject(error); }, }); }); } public clearPointsInFrame( frameIndex: number, objectId: number, ): Promise<void> { const sessionId = this._session.id; if (sessionId === null) { return Promise.reject('No active session'); } const tracklet = this._session.tracklets[objectId]; invariant( tracklet != null, 'tracklet for object id %s not initialized', objectId, ); // Mark session needing propagation when point is set this._updateStreamingState('required'); return new Promise<void>((resolve, reject) => { commitMutation<SAM2ModelClearPointsInFrameMutation>(this._environment, { mutation: graphql` mutation SAM2ModelClearPointsInFrameMutation( $input: ClearPointsInFrameInput! ) { clearPointsInFrame(input: $input) { frameIndex rleMaskList { objectId rleMask { counts size } } } } `, variables: { input: { sessionId, frameIndex, objectId, }, }, onCompleted: response => { tracklet.points[frameIndex] = []; tracklet.isInitialized = true; this._updateTrackletMasks(response.clearPointsInFrame, true); resolve(); }, onError: error => { Logger.error(error); reject(error); }, }); }); } public clearPointsInVideo(): Promise<void> { const sessionId = this._session.id; if (sessionId === null) { return Promise.reject('No active session'); } // Reset the streaming state since all points are cleared this._updateStreamingState('none'); return new Promise<void>(resolve => { commitMutation<SAM2ModelClearPointsInVideoMutation>(this._environment, { mutation: graphql` mutation SAM2ModelClearPointsInVideoMutation( $input: ClearPointsInVideoInput! 
) { clearPointsInVideo(input: $input) { success } } `, variables: { input: { sessionId, }, }, onCompleted: response => { const {success} = response.clearPointsInVideo; if (!success) { this._sendResponse<ClearPointsInVideoResponse>('clearPointsInVideo', { isSuccessful: false, }); return; } // Reset points and masks for each tracklet this._clearTracklets(); // Notify the main thread this._context.goToFrame(this._context.frameIndex); this._updateTracklets(); this._sendResponse<ClearPointsInVideoResponse>('clearPointsInVideo', { isSuccessful: true, }); resolve(); }, onError: error => { this._sendResponse<ClearPointsInVideoResponse>('clearPointsInVideo', { isSuccessful: false, }); Logger.error(error); }, }); }); } public async streamMasks(frameIndex: number): Promise<void> { const sessionId = this._session.id; if (sessionId === null) { return Promise.reject('No active session'); } try { this._sendResponse<StreamingStartedResponse>('streamingStarted'); // 1. Clear previous masks this._context.clearMasks(); this._clearTrackletMasks(); // 2. Create abort controller and async generator const controller = new AbortController(); this.abortController = controller; this._updateStreamingState('requesting'); const generator = this._streamMasksForSession( controller, sessionId, frameIndex, ); // 3. 
Parse stream response and update masks in session objects let isAborted = false; for await (const result of generator) { if ('aborted' in result) { this._updateStreamingState('aborting'); await this._abortRequest(); this._updateStreamingState('aborted'); isAborted = true; } else { await this._updateTrackletMasks(result, false); this._updateStreamingState('partial'); } } if (!isAborted) { // The stream completed for the full video this._updateStreamingState('full'); } } catch (error) { Logger.error(error); throw error; } this._sendResponse<StreamingCompletedResponse>('streamingCompleted'); } public abortStreamMasks() { this.abortController?.abort(); this._sendResponse<StreamingCompletedResponse>('streamingCompleted'); } public enableStats(): void { this._stats = new Stats('ms', 'D', 1000 / 25); } // PRIVATE private _cleanup() { this._session.id = null; // Clear existing tracklets this._session.tracklets = []; } private _clearTracklets() { this._session.tracklets = []; this._context.clearMasks(); } private _updateStreamingState( state: StreamingState, forceUpdate: boolean = false, ) { if (!forceUpdate && this._streamingState === state) { return; } this._streamingState = state; this._sendResponse<StreamingStateUpdateResponse>('streamingStateUpdate', { state, }); } private async _removeTrackletMasks(tracklet: Tracklet) { this._context.clearTrackletMasks(tracklet); delete this._session.tracklets[tracklet.id]; // Notify the main thread this._context.goToFrame(this._context.frameIndex); this._updateTracklets(); } private async _updateTrackletMasks( data: SAM2ModelAddNewPointsMutation$data['addPoints'], updateThumbnails: boolean, shouldGoToFrame: boolean = true, ) { const {frameIndex, rleMaskList} = data; // 1. 
Parse and decode masks for all objects for (const {objectId, rleMask} of rleMaskList) { const track = this._session.tracklets[objectId]; const {size, counts} = rleMask; const rleObject: RLEObject = { size: [size[0], size[1]], counts: counts, }; const isEmpty = counts === this._emptyMask?.counts; this._stats?.begin(); const decodedMask = decode([rleObject]); const bbox = toBbox([rleObject]); const mask: Mask = { data: rleObject as RLEObject, shape: [...decodedMask.shape], bounds: [ [bbox[0], bbox[1]], [bbox[0] + bbox[2], bbox[1] + bbox[3]], ], isEmpty, } as const; track.masks[frameIndex] = mask; if (updateThumbnails && !isEmpty) { const {ctx} = await this._compressMaskForCanvas(decodedMask); const frame = this._context.currentFrame as VideoFrame; await generateThumbnail(track, frameIndex, mask, frame, ctx); } } this._context.updateTracklets( frameIndex, Object.values(this._session.tracklets), shouldGoToFrame, ); // Notify the main thread this._updateTracklets(); } private _updateTracklets() { const tracklets: BaseTracklet[] = Object.values( this._session.tracklets, ).map(tracklet => { const { id, color, isInitialized, points: trackletPoints, thumbnail, masks, } = tracklet; return { id, color, isInitialized, points: trackletPoints, thumbnail, masks: masks.map(mask => ({ shape: mask.shape, bounds: mask.bounds, isEmpty: mask.isEmpty, })), }; }); this._sendResponse<TrackletsUpdatedResponse>('trackletsUpdated', { tracklets, }); } private _clearTrackletMasks() { const keys = Object.keys(this._session.tracklets); for (const key of keys) { const trackletId = Number(key); const tracklet = {...this._session.tracklets[trackletId], masks: []}; this._session.tracklets[trackletId] = tracklet; } this._updateTracklets(); } private async _compressMaskForCanvas( decodedMask: DataArray, ): Promise<{compressedData: Blob; ctx: OffscreenCanvasRenderingContext2D}> { const data = convertMaskToRGBA(decodedMask.data as Uint8Array); this._maskCanvas.width = decodedMask.shape[0]; 
this._maskCanvas.height = decodedMask.shape[1]; const imageData = new ImageData( data, decodedMask.shape[0], decodedMask.shape[1], ); this._maskCtx.putImageData(imageData, 0, 0); const canvas = new OffscreenCanvas( decodedMask.shape[1], decodedMask.shape[0], ); const ctx = canvas.getContext('2d'); invariant(ctx != null, 'context cannot be null'); ctx.save(); ctx.rotate(Math.PI / 2); // Since the image was previously rotated 90° clockwise, after the image is rotated, // we scale the canvas's width using scaleY and height using scaleX. ctx.scale(1, -1); ctx.drawImage(this._maskCanvas, 0, 0); ctx.restore(); const compressedData = await canvas.convertToBlob({type: 'image/png'}); return {compressedData, ctx}; } private async *_streamMasksForSession( abortController: AbortController, sessionId: string, startFrameIndex: undefined | number = 0, ): AsyncGenerator<StreamMasksResult | StreamMasksAbortResult> { const url = `${this._endpoint}/propagate_in_video`; const requestBody = { session_id: sessionId, start_frame_index: startFrameIndex, }; const headers: {[name: string]: string} = Object.assign({ 'Content-Type': 'application/json', }); const response = await fetch(url, { method: 'POST', body: JSON.stringify(requestBody), headers, }); const contentType = response.headers.get('Content-Type'); if ( contentType == null || !contentType.startsWith('multipart/x-savi-stream;') ) { throw new Error( 'endpoint needs to support Content-Type "multipart/x-savi-stream"', ); } const responseBody = response.body; if (responseBody == null) { throw new Error('response body is null'); } const reader = multipartStream(contentType, responseBody).getReader(); const textDecoder = new TextDecoder(); while (true) { if (abortController.signal.aborted) { reader.releaseLock(); yield {aborted: true}; return; } const {done, value} = await reader.read(); if (done) { return; } const {headers, body} = value; const contentType = headers.get('Content-Type') as string; if (contentType.startsWith('application/json')) { const jsonResponse = 
JSON.parse(textDecoder.decode(body)); const maskResults = jsonResponse.results; const rleMaskList = maskResults.map( (mask: {object_id: number; mask: RLEObject}) => { return { objectId: mask.object_id, rleMask: mask.mask, }; }, ); yield { frameIndex: jsonResponse.frame_index, rleMaskList, }; } } } private async _abortRequest(): Promise<void> { const sessionId = this._session.id; invariant(sessionId != null, 'session id cannot be empty'); return new Promise<void>((resolve, reject) => { try { commitMutation<SAM2ModelCancelPropagateInVideoMutation>( this._environment, { mutation: graphql` mutation SAM2ModelCancelPropagateInVideoMutation( $input: CancelPropagateInVideoInput! ) { cancelPropagateInVideo(input: $input) { success } } `, variables: { input: { sessionId, }, }, onCompleted: response => { const {success} = response.cancelPropagateInVideo; if (!success) { reject(`could not abort session ${sessionId}`); return; } resolve(); }, onError: error => { Logger.error(error); reject(error); }, }, ); } catch (error) { Logger.error(error); reject(error); } }); } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/tracker/Tracker.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import VideoWorkerContext from '@/common/components/video/VideoWorkerContext'; import {TrackerOptions} from '@/common/tracker/Trackers'; import {TrackerResponse} from '@/common/tracker/TrackerTypes'; import {RLEObject} from '@/jscocotools/mask'; export type Point = [x: number, y: number]; export type SegmentationPoint = [...point: Point, label: 0 | 1]; export type FramePoints = Array<SegmentationPoint> | undefined; export type Mask = DatalessMask & { data: Blob | RLEObject; }; export type DatalessMask = { shape: number[]; bounds: [[number, number], [number, number]]; isEmpty: boolean; }; export type Tracklet = { id: number; color: string; thumbnail: string | null; points: FramePoints[]; masks: Mask[]; isInitialized: boolean; }; export type BaseTracklet = Omit<Tracklet, 'masks'> & { masks: DatalessMask[]; }; export type StreamingState = | 'none' | 'required' | 'requesting' | 'aborting' | 'aborted' | 'partial' | 'full'; export interface ITracker { startSession(videoUrl: string): Promise<void>; closeSession(): Promise<void>; createTracklet(): void; deleteTracklet(trackletId: number): Promise<void>; updatePoints( frameIndex: number, objectId: number, points: SegmentationPoint[], ): Promise<void>; clearPointsInFrame(frameIndex: number, objectId: number): Promise<void>; clearPointsInVideo(): Promise<void>; streamMasks(frameIndex: number): Promise<void>; abortStreamMasks(): void; enableStats(): void; } export abstract class Tracker implements ITracker { protected _context: VideoWorkerContext; constructor(context: VideoWorkerContext, _options?: TrackerOptions) { this._context = context; } abstract startSession(videoUrl: string): Promise<void>; abstract closeSession(): Promise<void>; abstract createTracklet(): void; abstract deleteTracklet(trackletId: number): Promise<void>; abstract updatePoints( frameIndex: number, objectId: number, points: SegmentationPoint[], ): Promise<void>; abstract clearPointsInFrame( frameIndex: number, objectId: number, ): Promise<void>; abstract clearPointsInVideo(): Promise<void>; abstract streamMasks(frameIndex: number): Promise<void>; abstract 
abortStreamMasks(): void; abstract enableStats(): void; // PRIVATE FUNCTIONS protected _sendResponse<T extends TrackerResponse>( action: T['action'], message?: Omit<T, 'action'>, transfer?: Transferable[], ): void { self.postMessage( { action, ...message, }, { transfer, }, ); } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/tracker/TrackerTypes.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import {SegmentationPoint} from '@/common/tracker/Tracker'; import {TrackerOptions, Trackers} from '@/common/tracker/Trackers'; import { AddPointsEvent, ClearPointsInVideoEvent, SessionStartFailedEvent, SessionStartedEvent, StreamingCompletedEvent, StreamingStartedEvent, StreamingStateUpdateEvent, TrackletCreatedEvent, TrackletDeletedEvent, TrackletsEvent, } from '../components/video/VideoWorkerBridge'; export type Flags = { masks: boolean; effect: boolean; }; export type Request<A extends string, P> = { action: A; } & P; // REQUESTS export type InitializeTrackerRequest = Request< 'initializeTracker', { name: keyof Trackers; options: TrackerOptions; } >; export type StartSessionRequest = Request< 'startSession', { videoUrl: string; } >; export type CloseSessionRequest = Request<'closeSession', unknown>; export type CreateTrackletRequest = Request<'createTracklet', unknown>; export type DeleteTrackletRequest = Request< 'deleteTracklet', { trackletId: number; } >; export type UpdatePointsRequest = Request< 'updatePoints', { frameIndex: number; objectId: number; points: SegmentationPoint[]; } >; export type ClearPointsInFrameRequest = Request< 'clearPointsInFrame', { frameIndex: number; objectId: number; } >; export type ClearPointsInVideoRequest = Request<'clearPointsInVideo', unknown>; export type StreamMasksRequest = Request< 'streamMasks', { frameIndex: number; } >; export type AbortStreamMasksRequest = Request<'abortStreamMasks', unknown>; export type LogAnnotationsRequest = Request<'logAnnotations', unknown>; export type TrackerRequest = | InitializeTrackerRequest | StartSessionRequest | CloseSessionRequest | CreateTrackletRequest | DeleteTrackletRequest | UpdatePointsRequest | ClearPointsInFrameRequest | ClearPointsInVideoRequest | StreamMasksRequest | AbortStreamMasksRequest | LogAnnotationsRequest; export type TrackerRequestMessageEvent = MessageEvent<TrackerRequest>; // RESPONSES export type SessionStartedResponse = Request< 'sessionStarted', SessionStartedEvent >; export type 
SessionStartFailedResponse = Request< 'sessionStartFailed', SessionStartFailedEvent >; export type TrackletCreatedResponse = Request< 'trackletCreated', TrackletCreatedEvent >; export type TrackletsUpdatedResponse = Request< 'trackletsUpdated', TrackletsEvent >; export type TrackletDeletedResponse = Request< 'trackletDeleted', TrackletDeletedEvent >; export type AddPointsResponse = Request<'addPoints', AddPointsEvent>; export type ClearPointsInVideoResponse = Request< 'clearPointsInVideo', ClearPointsInVideoEvent >; export type StreamingStartedResponse = Request< 'streamingStarted', StreamingStartedEvent >; export type StreamingCompletedResponse = Request< 'streamingCompleted', StreamingCompletedEvent >; export type StreamingStateUpdateResponse = Request< 'streamingStateUpdate', StreamingStateUpdateEvent >; export type TrackerResponse = | SessionStartedResponse | SessionStartFailedResponse | TrackletCreatedResponse | TrackletsUpdatedResponse | TrackletDeletedResponse | AddPointsResponse | ClearPointsInVideoResponse | StreamingStartedResponse | StreamingCompletedResponse | StreamingStateUpdateResponse; export type TrackerResponseMessageEvent = MessageEvent<TrackerResponse>; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/tracker/Trackers.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import {SAM2Model} from './SAM2Model'; export type Headers = {[name: string]: string}; export type TrackerOptions = { inferenceEndpoint: string; }; export type Trackers = { 'SAM 2': typeof SAM2Model; }; export const TRACKER_MAPPING: Trackers = { 'SAM 2': SAM2Model, }; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/tracker/__generated__/SAM2ModelAddNewPointsMutation.graphql.ts ================================================ /** * @generated SignedSource<> * @lightSyntaxTransform * @nogrep */ /* tslint:disable */ /* eslint-disable */ // @ts-nocheck import { ConcreteRequest, Mutation } from 'relay-runtime'; export type AddPointsInput = { clearOldPoints: boolean; frameIndex: number; labels: ReadonlyArray<number>; objectId: number; points: ReadonlyArray<ReadonlyArray<number>>; sessionId: string; }; export type SAM2ModelAddNewPointsMutation$variables = { input: AddPointsInput; }; export type SAM2ModelAddNewPointsMutation$data = { readonly addPoints: { readonly frameIndex: number; readonly rleMaskList: ReadonlyArray<{ readonly objectId: number; readonly rleMask: { readonly counts: string; readonly size: ReadonlyArray<number>; }; }>; }; }; export type SAM2ModelAddNewPointsMutation = { response: SAM2ModelAddNewPointsMutation$data; variables: SAM2ModelAddNewPointsMutation$variables; }; const node: ConcreteRequest = (function(){ var v0 = [ { "defaultValue": null, "kind": "LocalArgument", "name": "input" } ], v1 = [ { "alias": null, "args": [ { "kind": "Variable", "name": "input", "variableName": "input" } ], "concreteType": "RLEMaskListOnFrame", "kind": "LinkedField", "name": "addPoints", "plural": false, "selections": [ { "alias": null, "args": null, "kind": "ScalarField", "name": "frameIndex", "storageKey": null }, { "alias": null, "args": null, "concreteType": "RLEMaskForObject", "kind": "LinkedField", "name": "rleMaskList", "plural": true, "selections": [ { "alias": null, "args": null, "kind": "ScalarField", "name": "objectId", 
"storageKey": null }, { "alias": null, "args": null, "concreteType": "RLEMask", "kind": "LinkedField", "name": "rleMask", "plural": false, "selections": [ { "alias": null, "args": null, "kind": "ScalarField", "name": "counts", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "size", "storageKey": null } ], "storageKey": null } ], "storageKey": null } ], "storageKey": null } ]; return { "fragment": { "argumentDefinitions": (v0/*: any*/), "kind": "Fragment", "metadata": null, "name": "SAM2ModelAddNewPointsMutation", "selections": (v1/*: any*/), "type": "Mutation", "abstractKey": null }, "kind": "Request", "operation": { "argumentDefinitions": (v0/*: any*/), "kind": "Operation", "name": "SAM2ModelAddNewPointsMutation", "selections": (v1/*: any*/) }, "params": { "cacheID": "dc86527e91907e696683458ed0943d2f", "id": null, "metadata": {}, "name": "SAM2ModelAddNewPointsMutation", "operationKind": "mutation", "text": "mutation SAM2ModelAddNewPointsMutation(\n $input: AddPointsInput!\n) {\n addPoints(input: $input) {\n frameIndex\n rleMaskList {\n objectId\n rleMask {\n counts\n size\n }\n }\n }\n}\n" } }; })(); (node as any).hash = "3c96f05877dd91668c1f9e8a3f1203a5"; export default node; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/tracker/__generated__/SAM2ModelCancelPropagateInVideoMutation.graphql.ts ================================================ /** * @generated SignedSource<<87827cb79ef9276cd5a66026151e937c>> * @lightSyntaxTransform * @nogrep */ /* tslint:disable */ /* eslint-disable */ // @ts-nocheck import { ConcreteRequest, Mutation } from 'relay-runtime'; export type CancelPropagateInVideoInput = { sessionId: string; }; export type SAM2ModelCancelPropagateInVideoMutation$variables = { input: CancelPropagateInVideoInput; }; export type SAM2ModelCancelPropagateInVideoMutation$data = { readonly cancelPropagateInVideo: { readonly success: boolean; }; }; export 
type SAM2ModelCancelPropagateInVideoMutation = { response: SAM2ModelCancelPropagateInVideoMutation$data; variables: SAM2ModelCancelPropagateInVideoMutation$variables; }; const node: ConcreteRequest = (function(){ var v0 = [ { "defaultValue": null, "kind": "LocalArgument", "name": "input" } ], v1 = [ { "alias": null, "args": [ { "kind": "Variable", "name": "input", "variableName": "input" } ], "concreteType": "CancelPropagateInVideo", "kind": "LinkedField", "name": "cancelPropagateInVideo", "plural": false, "selections": [ { "alias": null, "args": null, "kind": "ScalarField", "name": "success", "storageKey": null } ], "storageKey": null } ]; return { "fragment": { "argumentDefinitions": (v0/*: any*/), "kind": "Fragment", "metadata": null, "name": "SAM2ModelCancelPropagateInVideoMutation", "selections": (v1/*: any*/), "type": "Mutation", "abstractKey": null }, "kind": "Request", "operation": { "argumentDefinitions": (v0/*: any*/), "kind": "Operation", "name": "SAM2ModelCancelPropagateInVideoMutation", "selections": (v1/*: any*/) }, "params": { "cacheID": "f00f78f24741d27828f0bd95b0f373c2", "id": null, "metadata": {}, "name": "SAM2ModelCancelPropagateInVideoMutation", "operationKind": "mutation", "text": "mutation SAM2ModelCancelPropagateInVideoMutation(\n $input: CancelPropagateInVideoInput!\n) {\n cancelPropagateInVideo(input: $input) {\n success\n }\n}\n" } }; })(); (node as any).hash = "1abafecade479ab3c45f9cecf0360285"; export default node; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/tracker/__generated__/SAM2ModelClearPointsInFrameMutation.graphql.ts ================================================ /** * @generated SignedSource<<7330d05db0fe66bbd89190cc665dd8d9>> * @lightSyntaxTransform * @nogrep */ /* tslint:disable */ /* eslint-disable */ // @ts-nocheck import { ConcreteRequest, Mutation } from 'relay-runtime'; export type ClearPointsInFrameInput = { frameIndex: number; objectId: 
number; sessionId: string; }; export type SAM2ModelClearPointsInFrameMutation$variables = { input: ClearPointsInFrameInput; }; export type SAM2ModelClearPointsInFrameMutation$data = { readonly clearPointsInFrame: { readonly frameIndex: number; readonly rleMaskList: ReadonlyArray<{ readonly objectId: number; readonly rleMask: { readonly counts: string; readonly size: ReadonlyArray<number>; }; }>; }; }; export type SAM2ModelClearPointsInFrameMutation = { response: SAM2ModelClearPointsInFrameMutation$data; variables: SAM2ModelClearPointsInFrameMutation$variables; }; const node: ConcreteRequest = (function(){ var v0 = [ { "defaultValue": null, "kind": "LocalArgument", "name": "input" } ], v1 = [ { "alias": null, "args": [ { "kind": "Variable", "name": "input", "variableName": "input" } ], "concreteType": "RLEMaskListOnFrame", "kind": "LinkedField", "name": "clearPointsInFrame", "plural": false, "selections": [ { "alias": null, "args": null, "kind": "ScalarField", "name": "frameIndex", "storageKey": null }, { "alias": null, "args": null, "concreteType": "RLEMaskForObject", "kind": "LinkedField", "name": "rleMaskList", "plural": true, "selections": [ { "alias": null, "args": null, "kind": "ScalarField", "name": "objectId", "storageKey": null }, { "alias": null, "args": null, "concreteType": "RLEMask", "kind": "LinkedField", "name": "rleMask", "plural": false, "selections": [ { "alias": null, "args": null, "kind": "ScalarField", "name": "counts", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "size", "storageKey": null } ], "storageKey": null } ], "storageKey": null } ], "storageKey": null } ]; return { "fragment": { "argumentDefinitions": (v0/*: any*/), "kind": "Fragment", "metadata": null, "name": "SAM2ModelClearPointsInFrameMutation", "selections": (v1/*: any*/), "type": "Mutation", "abstractKey": null }, "kind": "Request", "operation": { "argumentDefinitions": (v0/*: any*/), "kind": "Operation", "name": 
"SAM2ModelClearPointsInFrameMutation", "selections": (v1/*: any*/) }, "params": { "cacheID": "b4f20e0205c26d5dc3614935ac73fa3f", "id": null, "metadata": {}, "name": "SAM2ModelClearPointsInFrameMutation", "operationKind": "mutation", "text": "mutation SAM2ModelClearPointsInFrameMutation(\n $input: ClearPointsInFrameInput!\n) {\n clearPointsInFrame(input: $input) {\n frameIndex\n rleMaskList {\n objectId\n rleMask {\n counts\n size\n }\n }\n }\n}\n" } }; })(); (node as any).hash = "880295870f14839040acf8f191fa1409"; export default node; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/tracker/__generated__/SAM2ModelClearPointsInVideoMutation.graphql.ts ================================================ /** * @generated SignedSource<<092c43655450b8af706e546837e0a01c>> * @lightSyntaxTransform * @nogrep */ /* tslint:disable */ /* eslint-disable */ // @ts-nocheck import { ConcreteRequest, Mutation } from 'relay-runtime'; export type ClearPointsInVideoInput = { sessionId: string; }; export type SAM2ModelClearPointsInVideoMutation$variables = { input: ClearPointsInVideoInput; }; export type SAM2ModelClearPointsInVideoMutation$data = { readonly clearPointsInVideo: { readonly success: boolean; }; }; export type SAM2ModelClearPointsInVideoMutation = { response: SAM2ModelClearPointsInVideoMutation$data; variables: SAM2ModelClearPointsInVideoMutation$variables; }; const node: ConcreteRequest = (function(){ var v0 = [ { "defaultValue": null, "kind": "LocalArgument", "name": "input" } ], v1 = [ { "alias": null, "args": [ { "kind": "Variable", "name": "input", "variableName": "input" } ], "concreteType": "ClearPointsInVideo", "kind": "LinkedField", "name": "clearPointsInVideo", "plural": false, "selections": [ { "alias": null, "args": null, "kind": "ScalarField", "name": "success", "storageKey": null } ], "storageKey": null } ]; return { "fragment": { "argumentDefinitions": (v0/*: any*/), "kind": "Fragment", 
"metadata": null, "name": "SAM2ModelClearPointsInVideoMutation", "selections": (v1/*: any*/), "type": "Mutation", "abstractKey": null }, "kind": "Request", "operation": { "argumentDefinitions": (v0/*: any*/), "kind": "Operation", "name": "SAM2ModelClearPointsInVideoMutation", "selections": (v1/*: any*/) }, "params": { "cacheID": "c23b3d5afca5b235328a562369056527", "id": null, "metadata": {}, "name": "SAM2ModelClearPointsInVideoMutation", "operationKind": "mutation", "text": "mutation SAM2ModelClearPointsInVideoMutation(\n $input: ClearPointsInVideoInput!\n) {\n clearPointsInVideo(input: $input) {\n success\n }\n}\n" } }; })(); (node as any).hash = "020267989385cb8b8f0e5cdde784d17e"; export default node; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/tracker/__generated__/SAM2ModelCloseSessionMutation.graphql.ts ================================================ /** * @generated SignedSource<<48ee5db240b8093e9e53bf0329c8bab7>> * @lightSyntaxTransform * @nogrep */ /* tslint:disable */ /* eslint-disable */ // @ts-nocheck import { ConcreteRequest, Mutation } from 'relay-runtime'; export type CloseSessionInput = { sessionId: string; }; export type SAM2ModelCloseSessionMutation$variables = { input: CloseSessionInput; }; export type SAM2ModelCloseSessionMutation$data = { readonly closeSession: { readonly success: boolean; }; }; export type SAM2ModelCloseSessionMutation = { response: SAM2ModelCloseSessionMutation$data; variables: SAM2ModelCloseSessionMutation$variables; }; const node: ConcreteRequest = (function(){ var v0 = [ { "defaultValue": null, "kind": "LocalArgument", "name": "input" } ], v1 = [ { "alias": null, "args": [ { "kind": "Variable", "name": "input", "variableName": "input" } ], "concreteType": "CloseSession", "kind": "LinkedField", "name": "closeSession", "plural": false, "selections": [ { "alias": null, "args": null, "kind": "ScalarField", "name": "success", "storageKey": null } ], 
"storageKey": null } ]; return { "fragment": { "argumentDefinitions": (v0/*: any*/), "kind": "Fragment", "metadata": null, "name": "SAM2ModelCloseSessionMutation", "selections": (v1/*: any*/), "type": "Mutation", "abstractKey": null }, "kind": "Request", "operation": { "argumentDefinitions": (v0/*: any*/), "kind": "Operation", "name": "SAM2ModelCloseSessionMutation", "selections": (v1/*: any*/) }, "params": { "cacheID": "aa7177838c16536b397bfee2d15a94ee", "id": null, "metadata": {}, "name": "SAM2ModelCloseSessionMutation", "operationKind": "mutation", "text": "mutation SAM2ModelCloseSessionMutation(\n $input: CloseSessionInput!\n) {\n closeSession(input: $input) {\n success\n }\n}\n" } }; })(); (node as any).hash = "6e1008de944562dc1922cd3f9cc40f10"; export default node; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/tracker/__generated__/SAM2ModelRemoveObjectMutation.graphql.ts ================================================ /** * @generated SignedSource<<3d0d7bdc0d4304f08ea91b7df9efeb1f>> * @lightSyntaxTransform * @nogrep */ /* tslint:disable */ /* eslint-disable */ // @ts-nocheck import { ConcreteRequest, Mutation } from 'relay-runtime'; export type RemoveObjectInput = { objectId: number; sessionId: string; }; export type SAM2ModelRemoveObjectMutation$variables = { input: RemoveObjectInput; }; export type SAM2ModelRemoveObjectMutation$data = { readonly removeObject: ReadonlyArray<{ readonly frameIndex: number; readonly rleMaskList: ReadonlyArray<{ readonly objectId: number; readonly rleMask: { readonly counts: string; readonly size: ReadonlyArray<number>; }; }>; }>; }; export type SAM2ModelRemoveObjectMutation = { response: SAM2ModelRemoveObjectMutation$data; variables: SAM2ModelRemoveObjectMutation$variables; }; const node: ConcreteRequest = (function(){ var v0 = [ { "defaultValue": null, "kind": "LocalArgument", "name": "input" } ], v1 = [ { "alias": null, "args": [ { "kind": "Variable", "name":
"input", "variableName": "input" } ], "concreteType": "RLEMaskListOnFrame", "kind": "LinkedField", "name": "removeObject", "plural": true, "selections": [ { "alias": null, "args": null, "kind": "ScalarField", "name": "frameIndex", "storageKey": null }, { "alias": null, "args": null, "concreteType": "RLEMaskForObject", "kind": "LinkedField", "name": "rleMaskList", "plural": true, "selections": [ { "alias": null, "args": null, "kind": "ScalarField", "name": "objectId", "storageKey": null }, { "alias": null, "args": null, "concreteType": "RLEMask", "kind": "LinkedField", "name": "rleMask", "plural": false, "selections": [ { "alias": null, "args": null, "kind": "ScalarField", "name": "counts", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "size", "storageKey": null } ], "storageKey": null } ], "storageKey": null } ], "storageKey": null } ]; return { "fragment": { "argumentDefinitions": (v0/*: any*/), "kind": "Fragment", "metadata": null, "name": "SAM2ModelRemoveObjectMutation", "selections": (v1/*: any*/), "type": "Mutation", "abstractKey": null }, "kind": "Request", "operation": { "argumentDefinitions": (v0/*: any*/), "kind": "Operation", "name": "SAM2ModelRemoveObjectMutation", "selections": (v1/*: any*/) }, "params": { "cacheID": "0accbe68b8deea021539365678e58172", "id": null, "metadata": {}, "name": "SAM2ModelRemoveObjectMutation", "operationKind": "mutation", "text": "mutation SAM2ModelRemoveObjectMutation(\n $input: RemoveObjectInput!\n) {\n removeObject(input: $input) {\n frameIndex\n rleMaskList {\n objectId\n rleMask {\n counts\n size\n }\n }\n }\n}\n" } }; })(); (node as any).hash = "2dddf010d202332e6e012443cc1d8e55"; export default node; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/tracker/__generated__/SAM2ModelStartSessionMutation.graphql.ts ================================================ /** * @generated 
SignedSource<<90910bae5bb646118174e736434aac56>> * @lightSyntaxTransform * @nogrep */ /* tslint:disable */ /* eslint-disable */ // @ts-nocheck import { ConcreteRequest, Mutation } from 'relay-runtime'; export type StartSessionInput = { path: string; }; export type SAM2ModelStartSessionMutation$variables = { input: StartSessionInput; }; export type SAM2ModelStartSessionMutation$data = { readonly startSession: { readonly sessionId: string; }; }; export type SAM2ModelStartSessionMutation = { response: SAM2ModelStartSessionMutation$data; variables: SAM2ModelStartSessionMutation$variables; }; const node: ConcreteRequest = (function(){ var v0 = [ { "defaultValue": null, "kind": "LocalArgument", "name": "input" } ], v1 = [ { "alias": null, "args": [ { "kind": "Variable", "name": "input", "variableName": "input" } ], "concreteType": "StartSession", "kind": "LinkedField", "name": "startSession", "plural": false, "selections": [ { "alias": null, "args": null, "kind": "ScalarField", "name": "sessionId", "storageKey": null } ], "storageKey": null } ]; return { "fragment": { "argumentDefinitions": (v0/*: any*/), "kind": "Fragment", "metadata": null, "name": "SAM2ModelStartSessionMutation", "selections": (v1/*: any*/), "type": "Mutation", "abstractKey": null }, "kind": "Request", "operation": { "argumentDefinitions": (v0/*: any*/), "kind": "Operation", "name": "SAM2ModelStartSessionMutation", "selections": (v1/*: any*/) }, "params": { "cacheID": "2403f5005f5bb3805109874569f2050e", "id": null, "metadata": {}, "name": "SAM2ModelStartSessionMutation", "operationKind": "mutation", "text": "mutation SAM2ModelStartSessionMutation(\n $input: StartSessionInput!\n) {\n startSession(input: $input) {\n sessionId\n }\n}\n" } }; })(); (node as any).hash = "5cf0005c7a54fc87c539dd4cbd5fef5d"; export default node; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/utils/FileUtils.ts 
================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import Logger from '@/common/logger/Logger'; type Range = { start: number; end: number; }; type FileStreamPart = { data: Uint8Array; range: Range; contentLength: number; }; export type FileStream = AsyncGenerator<FileStreamPart>; /** * Asynchronously generates a SHA-256 hash for a Blob object. * * DO NOT USE this function casually. Computing the SHA-256 is expensive and can * take several hundred milliseconds to complete. * * @param blob - The Blob object to be hashed. * @returns A Promise that resolves to a string representing the SHA-256 hash of * the Blob. */ export async function hashBlob(blob: Blob): Promise<string> { const buffer = await blob.arrayBuffer(); // Crypto subtle is only available in secure contexts. For example, this will // be the case when running the project locally with http protocol.
// https://developer.mozilla.org/en-US/docs/Web/API/Crypto/subtle if (crypto.subtle != null) { const hashBuffer = await crypto.subtle.digest('SHA-256', buffer); const hashArray = Array.from(new Uint8Array(hashBuffer)); return hashArray.map(b => b.toString(16).padStart(2, '0')).join(''); } // If not secure context, return random string return (Math.random() + 1).toString(36).substring(7); } export async function* streamFile(url: string, init?: RequestInit): FileStream { try { const response = await fetch(url, init); let blob: Blob; // Try to download the file with a stream reader. This has the benefit // of providing progress during the download. It requires the body and // Content-Length. As a fallback, it uses the blob function on the // response object. const contentLength = response.headers.get('Content-Length'); if (response.body != null && contentLength != null) { const totalLength = parseInt(contentLength); const chunks: Uint8Array[] = []; let start = 0; let end = 0; const reader = response.body.getReader(); try { while (true) { const {done, value} = await reader.read(); if (done) { break; } start = end; end += value.length; chunks.push(value); yield { data: value, range: {start, end}, contentLength: totalLength, }; } } finally { reader.releaseLock(); } blob = new Blob(chunks); } else { blob = await response.blob(); } const filename = await hashBlob(blob); return new File([blob], `${filename}.mp4`); } catch (error) { Logger.error('aborting download due to component unmount', error); } return null; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/utils/ImageUtils.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License.
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ export function convertVideoFrameToImageData( videoFrame: VideoFrame, ): ImageData | undefined { const canvas = new OffscreenCanvas( videoFrame.displayWidth, videoFrame.displayHeight, ); const ctx = canvas.getContext('2d'); ctx?.drawImage(videoFrame, 0, 0); return ctx?.getImageData(0, 0, canvas.width, canvas.height); } /** * This utility provides two functions: * `process`: to find the bounding box of non-empty pixels from an ImageData, when looping through all its pixels * `crop` to cut out the subsection found in `process` * @returns */ export function findBoundingBox() { let xMin = Number.MAX_VALUE; let yMin = Number.MAX_VALUE; let xMax = Number.MIN_VALUE; let yMax = Number.MIN_VALUE; return { process: function (x: number, y: number, hasData: boolean) { if (hasData) { xMin = Math.min(x, xMin); xMax = Math.max(x, xMax); yMin = Math.min(y, yMin); yMax = Math.max(y, yMax); } return [xMin, xMax, yMin, yMax]; }, crop(imageData: ImageData): ImageData | null { const canvas = new OffscreenCanvas(imageData.width, imageData.height); const ctx = canvas.getContext('2d'); const boundingBoxWidth = xMax - xMin; const boundingBoxHeight = yMax - yMin; if (ctx && boundingBoxWidth > 0 && boundingBoxHeight > 0) { ctx.clearRect(0, 0, canvas.width, canvas.height); ctx.putImageData(imageData, 0, 0); return ctx.getImageData( xMin, yMin, boundingBoxWidth, boundingBoxHeight, ); } else { return null; } }, getBox(): [[number, number], [number, number]] { return [ [xMin, yMin], [xMax, yMax], ]; }, }; } export function magnifyImageRegion( canvas: HTMLCanvasElement | null, x: number, y: 
number, radius: number = 25, scale: number = 2, ): string { if (canvas == null) { return ''; } const ctx = canvas.getContext('2d'); if (ctx) { const minX = x - radius < 0 ? radius - x : 0; const minY = y - radius < 0 ? radius - y : 0; const region = ctx.getImageData( Math.max(x - radius, 0), Math.max(y - radius, 0), radius * 2, radius * 2, ); // ImageData doesn't scale-transform correctly on canvas // So we first draw the original size on an offscreen canvas, and then scale it const regionCanvas = new OffscreenCanvas(region.width, region.height); const regionCtx = regionCanvas.getContext('2d'); regionCtx?.putImageData(region, minX > 0 ? minX : 0, minY > 0 ? minY : 0); const scaleCanvas = document.createElement('canvas'); scaleCanvas.width = Math.round(region.width * scale); scaleCanvas.height = Math.round(region.height * scale); const scaleCtx = scaleCanvas.getContext('2d'); scaleCtx?.scale(scale, scale); scaleCtx?.drawImage(regionCanvas, 0, 0); return scaleCanvas.toDataURL(); } return ''; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/utils/MaskUtils.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ /** * Converts an image mask represented as a binary image (foreground pixels are * `> 0` and background pixels are `0`) stored in a Uint8Array to an RGBA * representation where background pixels have an alpha value of 0 and * foreground pixels have an alpha value of 255. This is useful for compositing * the mask onto another image. * * ```typescript * const rgba = convertMaskToRGBA(mask.data); * ``` * * @param data - The image mask represented as a Uint8Array * @returns A new Uint8ClampedArray representing the mask in RGBA format */ export function convertMaskToRGBA(data: Uint8Array): Uint8ClampedArray { // Shifting pixels instead of assigning them individually per pixel is // much faster. See JSPerf benchmark: https://jsperf.app/morifo const len = data.length; const tempData = new Uint32Array(len); const RGA = 0x00ffffff; const FOREGROUND = 0xff000000; const BACKGROUND = 0x00000000; for (let i = 0; i < len; i++) { const alpha = data[i] > 0 ? FOREGROUND : BACKGROUND; // alpha is the high byte. Bits 24-31 tempData[i] = alpha + RGA; } return new Uint8ClampedArray(tempData.buffer); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/utils/MultipartStream.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License.
*/ const decoder = new TextDecoder(); const encoder = new TextEncoder(); const blankLine = encoder.encode('\r\n'); const STATE_BOUNDARY = 0; const STATE_HEADERS = 1; const STATE_BODY = 2; /** * Compares two Uint8Array objects for equality. * @param {Uint8Array} a * @param {Uint8Array} b * @return {boolean} */ function compareArrays(a: Uint8Array, b: Uint8Array): boolean { if (a.length != b.length) { return false; } for (let i = 0; i < a.length; i++) { if (a[i] != b[i]) { return false; } } return true; } /** * Parses a Content-Type into a multipart boundary. * @param {string} contentType * @return {Uint8Array} boundary line, including preceding -- and trailing \r\n */ function getBoundary(contentType: string): Uint8Array | null { // Expects the form "multipart/...; boundary=...". // This is not a full MIME media type parser but should be good enough. const MULTIPART_TYPE = 'multipart/'; const BOUNDARY_PARAM = '; boundary='; if (!contentType.startsWith(MULTIPART_TYPE)) { return null; } const i = contentType.indexOf(BOUNDARY_PARAM, MULTIPART_TYPE.length); if (i == -1) { return null; } const suffix = contentType.substring(i + BOUNDARY_PARAM.length); return encoder.encode('--' + suffix + '\r\n'); } /** * Creates a multipart stream. * @param {string} contentType A Content-Type header. * @param {ReadableStream} body The body of an HTTP response. * @return {ReadableStream} a stream of {headers: Headers, body: Uint8Array} * objects. */ export default function multipartStream( contentType: string, body: ReadableStream<Uint8Array>, ): ReadableStream { const reader = body.getReader(); return new ReadableStream({ async start(controller) { // Define the boundary. const boundary = getBoundary(contentType); if (boundary === null) { controller.error( new Error( 'Invalid content type for multipart stream: ' + contentType, ), ); return; } let pos = 0; let buf = new Uint8Array(); // buf.slice(pos) has unprocessed data.
let state = STATE_BOUNDARY; let headers: Headers | null = null; // non-null in STATE_HEADERS and STATE_BODY. let contentLength: number | null = null; // non-null in STATE_BODY. /** * Consumes all complete data in buf or raises an Error. * May leave incomplete data at buf.slice(pos). */ function processBuf() { // The while(true) condition is required // eslint-disable-next-line no-constant-condition while (true) { if (boundary === null) { controller.error( new Error( 'Invalid content type for multipart stream: ' + contentType, ), ); return; } switch (state) { case STATE_BOUNDARY: // Read blank lines (if any) then boundary. while ( buf.length >= pos + blankLine.length && compareArrays(buf.slice(pos, pos + blankLine.length), blankLine) ) { pos += blankLine.length; } // Check that it starts with a boundary. if (buf.length < pos + boundary.length) { return; } if ( !compareArrays(buf.slice(pos, pos + boundary.length), boundary) ) { throw new Error('bad part boundary'); } pos += boundary.length; state = STATE_HEADERS; headers = new Headers(); break; case STATE_HEADERS: { const cr = buf.indexOf('\r'.charCodeAt(0), pos); if (cr == -1 || buf.length == cr + 1) { return; } if (buf[cr + 1] != '\n'.charCodeAt(0)) { throw new Error('bad part header line (CR without NL)'); } const line = decoder.decode(buf.slice(pos, cr)); pos = cr + 2; if (line == '') { const rawContentLength = headers?.get('Content-Length'); if (rawContentLength == null) { throw new Error('missing/invalid part Content-Length'); } contentLength = parseInt(rawContentLength, 10); if (isNaN(contentLength)) { throw new Error('missing/invalid part Content-Length'); } state = STATE_BODY; break; } const colon = line.indexOf(':'); const name = line.substring(0, colon); if (colon == line.length || line[colon + 1] != ' ') { throw new Error('bad part header line (no ": ")'); } const value = line.substring(colon + 2); headers?.append(name, value); break; } case STATE_BODY: { if (contentLength === null) { throw new
Error('content length not set'); } if (buf.length < pos + contentLength) { return; } const body = buf.slice(pos, pos + contentLength); pos += contentLength; controller.enqueue({ headers: headers, body: body, }); headers = null; contentLength = null; state = STATE_BOUNDARY; break; } } } } // The while(true) condition is required // eslint-disable-next-line no-constant-condition while (true) { const {done, value} = await reader.read(); const buffered = buf.length - pos; if (done) { if (state != STATE_BOUNDARY || buffered > 0) { throw Error('multipart stream ended mid-part'); } controller.close(); return; } // Update buf.slice(pos) to include the new data from value. if (buffered == 0) { buf = value; } else { const newLen = buffered + value.length; const newBuf = new Uint8Array(newLen); newBuf.set(buf.slice(pos), 0); newBuf.set(value, buffered); buf = newBuf; } pos = 0; processBuf(); } }, cancel(reason) { return body.cancel(reason); }, }); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/utils/ShaderUtils.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {Tracklet} from '@/common/tracker/Tracker'; /** * util function to generate a WebGL texture using a look up table. * @param {WebGL2RenderingContext} gl - The WebGL2 rendering context.
* @param {number} lutSize - The size of the LUT in each dimension. * @param {Uint8Array} lutData - The LUT data as an array of unsigned 8-bit integers. * @returns {WebGLTexture} - The WebGL texture object representing the loaded LUT. */ export function load3DLUT( gl: WebGL2RenderingContext, lutSize: number, lutData: Uint8Array, ) { const texture = gl.createTexture(); gl.bindTexture(gl.TEXTURE_3D, texture); gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_MIN_FILTER, gl.LINEAR); gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_MAG_FILTER, gl.LINEAR); gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE); gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE); gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_WRAP_R, gl.CLAMP_TO_EDGE); // Pixel storage modes must be set to default for 3D textures gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, false); gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, false); gl.texImage3D( gl.TEXTURE_3D, 0, gl.RGBA, lutSize, lutSize, lutSize, 0, gl.RGBA, gl.UNSIGNED_BYTE, lutData, ); gl.bindTexture(gl.TEXTURE_3D, null); return texture; } /** * Generates a 3D lookup table (LUT) data with random RGBA values. * @param {number} lutSize - The size of the LUT in each dimension. * @returns {Uint8Array} - The LUT data as an array of unsigned 8-bit integers. */ export function generateLUTDATA(lutSize: number) { const totalEntries = lutSize * lutSize * lutSize; // 3D LUT nodes const lutData = new Uint8Array(totalEntries * 4); // Each entry has an RGBA value for (let i = 0; i < totalEntries; i++) { lutData[i * 4 + 0] = Math.floor(Math.random() * 256); // Random red value lutData[i * 4 + 1] = Math.floor(Math.random() * 256); // Random green value lutData[i * 4 + 2] = Math.floor(Math.random() * 256); // Random blue value lutData[i * 4 + 3] = 1; // alpha value } return lutData; } /** * Normalizes the bounds of a rectangle defined by two points (A and B) within a given width and height. 
* @param {number[]} pointA - The coordinates of the first point defining the rectangle. * @param {number[]} pointB - The coordinates of the second point defining the rectangle. * @param {number} width - The width of the canvas or container where the rectangle is drawn. * @param {number} height - The height of the canvas or container where the rectangle is drawn. * @returns {number[]} - An array containing the normalized x and y coordinates of the rectangle's corners. */ export function normalizeBounds( pointA: number[], pointB: number[], width: number, height: number, ) { return [ pointA[0] / width, pointA[1] / height, pointB[0] / width, pointB[1] / height, ]; } /** * Pre-allocates a specified number of 2D textures for use in WebGL2 rendering. * @param {WebGL2RenderingContext} gl - The WebGL2 rendering context. * @param {number} numTextures - The number of textures to be pre-allocated. * @returns {WebGLTexture[]} - An array of WebGL textures, each pre-allocated and ready for use. */ export function preAllocateTextures( gl: WebGL2RenderingContext, numTextures: number, ) { const maskTextures = []; for (let i = 0; i < numTextures; i++) { const maskTexture = gl.createTexture(); gl.bindTexture(gl.TEXTURE_2D, maskTexture); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE); gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE); maskTextures.push(maskTexture); } return maskTextures as WebGLTexture[]; } /** * Finds the index of a Tracklet object within an array based on its unique identifier. * @param objects - The array of Tracklet objects to search within. * @param id - The unique identifier of the Tracklet object to find. * @returns The index of the `Tracklet` object with the specified `id` in the `objects` array. 
*/ export function findIndexByTrackletId(id: number, objects: Tracklet[]): number { return objects.findIndex(obj => obj.id === id); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/utils/emptyFunction.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ /** * This function accepts and discards inputs; it has no side effects. This is * primarily useful idiomatically for overridable function endpoints which * always need to be callable, since JS lacks a null-call idiom ala Cocoa. */ export default function () {} ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/common/utils/uuid.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ /** * Generates a random UUID (Universally Unique Identifier) following the version * 4 standard. * * The function replaces each 'x' and 'y' in the template * 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx' with random hexadecimal digits. For * 'y', the function ensures the first hexadecimal digit is '8', '9', 'A', or * 'B' as per the UUID v4 standard. * * @returns A string representing a version 4 UUID. * * @example * * const id = uuidv4(); * console.log(id); // Outputs: '3f0d2c77-4f69-4c1e-8a6e-35e866e8a5d1' */ export function uuidv4() { return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function (c) { const r = (Math.random() * 16) | 0, v = c == 'x' ? r : (r & 0x3) | 0x8; return v.toString(16); }); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/debug/stats/Stats.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ /** * Derived from mrdoob / http://mrdoob.com/ */ import Logger from '@/common/logger/Logger'; import {uuidv4} from '@/common/utils/uuid'; import invariant from 'invariant'; export type Request<A extends string, P> = { action: A; } & P; export type Response<A extends string, P> = Request<A, P>; export type GetStatsCanvasRequest = Request< 'getStatsCanvas', { id: string; width: number; height: number; } >; export type GetMemoryStatsRequest = Request< 'getMemoryStats', { id: string; jsHeapSizeLimit: number; totalJSHeapSize: number; usedJSHeapSize: number; } >; export type SetStatsCanvasResponse = Response< 'setStatsCanvas', { id: string; canvas: OffscreenCanvas; devicePixelRatio: number; } >; export type MemoryStatsResponse = Response< 'memoryStats', { id: string; jsHeapSizeLimit: number; totalJSHeapSize: number; usedJSHeapSize: number; } >; export type StatsType = 'fps' | 'ms' | 'memory'; export class Stats { private maxValue: number; private beginTime: number; private prevTime: number; private frames: number; private fpsPanel: Panel | null = null; private msPanel: Panel | null = null; private memPanel: Panel | null = null; constructor(type: StatsType, label: string = '', maxValue: number = 100) { const id = uuidv4(); this.maxValue = maxValue; this.beginTime = (performance || Date).now(); this.prevTime = this.beginTime; this.frames = 0; const onMessage = (event: MessageEvent) => { if (event.data.action === 'setStatsCanvas' && event.data.id === id) { const {canvas, devicePixelRatio} = event.data; if (type === 'fps') { this.fpsPanel = new Panel( canvas, devicePixelRatio, `FPS ${label}`.trim(), '#0ff', '#002', ); } else if (type === 'ms') { this.msPanel = new Panel( canvas, devicePixelRatio, `MS ${label}`.trim(), '#0f0', '#020', ); } else if (type === 'memory') { this.memPanel = new Panel( canvas, devicePixelRatio, `MB ${label}`.trim(), '#f08', '#201', ); } self.removeEventListener('message', onMessage); } }; self.addEventListener('message', onMessage); self.postMessage({ action: 'getStatsCanvas', id, width: 80,
height: 48, } as GetStatsCanvasRequest); } updateMaxValue(maxValue: number) { this.maxValue = maxValue; } begin() { this.beginTime = (performance || Date).now(); } end() { this.frames++; const time = (performance || Date).now(); this.msPanel?.update(time - this.beginTime, this.maxValue); if (time >= this.prevTime + 1000) { this.fpsPanel?.update( (this.frames * 1000) / (time - this.prevTime), this.maxValue, ); this.prevTime = time; this.frames = 0; const id = uuidv4(); const onMessage = (event: MessageEvent) => { if (event.data.action === 'memoryStats' && event.data.id === id) { const {usedJSHeapSize, jsHeapSizeLimit} = event.data; this.memPanel?.update( usedJSHeapSize / 1048576, jsHeapSizeLimit / 1048576, ); } }; self.addEventListener('message', onMessage); self.postMessage({ action: 'getMemoryStats', id, } as GetMemoryStatsRequest); } return time; } update() { this.beginTime = this.end(); } } export class Panel { private min = Infinity; private max = 0; private round = Math.round; private PR: number; private WIDTH: number; private HEIGHT: number; private TEXT_X: number; private TEXT_Y: number; private GRAPH_X: number; private GRAPH_Y: number; private GRAPH_WIDTH: number; private GRAPH_HEIGHT: number; public canvas: HTMLCanvasElement | OffscreenCanvas; private context: | CanvasRenderingContext2D | OffscreenCanvasRenderingContext2D | null = null; private name: string; private fg: string; private bg: string; constructor( canvas: HTMLCanvasElement | OffscreenCanvas, devicePixelRatio: number, name: string, fg: string, bg: string, ) { this.canvas = canvas; this.name = name; this.fg = fg; this.bg = bg; this.PR = this.round(devicePixelRatio || 1); this.WIDTH = 80 * this.PR; this.HEIGHT = 48 * this.PR; this.TEXT_X = 3 * this.PR; this.TEXT_Y = 2 * this.PR; this.GRAPH_X = 3 * this.PR; this.GRAPH_Y = 15 * this.PR; this.GRAPH_WIDTH = 74 * this.PR; this.GRAPH_HEIGHT = 30 * this.PR; const context: OffscreenCanvasRenderingContext2D | RenderingContext | null = 
canvas.getContext('2d'); invariant(context !== null, 'context 2d is required'); if ( !(context instanceof CanvasRenderingContext2D) && !(context instanceof OffscreenCanvasRenderingContext2D) ) { Logger.warn( 'rendering stats requires CanvasRenderingContext2D or OffscreenCanvasRenderingContext2D', ); return; } context.font = 'bold ' + 9 * this.PR + 'px Helvetica,Arial,sans-serif'; context.textBaseline = 'top'; context.fillStyle = bg; context.fillRect(0, 0, this.WIDTH, this.HEIGHT); context.fillStyle = fg; context.fillText(name, this.TEXT_X, this.TEXT_Y); context.fillRect( this.GRAPH_X, this.GRAPH_Y, this.GRAPH_WIDTH, this.GRAPH_HEIGHT, ); context.fillStyle = bg; context.globalAlpha = 0.9; context.fillRect( this.GRAPH_X, this.GRAPH_Y, this.GRAPH_WIDTH, this.GRAPH_HEIGHT, ); this.context = context; } update(value: number, maxValue: number) { invariant(this.context !== null, 'context 2d is required'); this.min = Math.min(this.min, value); this.max = Math.max(this.max, value); this.context.fillStyle = this.bg; this.context.globalAlpha = 1; this.context.fillRect(0, 0, this.WIDTH, this.GRAPH_Y); this.context.fillStyle = this.fg; this.context.fillText( this.round(value) + ' ' + this.name + ' (' + this.round(this.min) + '-' + this.round(this.max) + ')', this.TEXT_X, this.TEXT_Y, ); this.context.drawImage( this.canvas, this.GRAPH_X + this.PR, this.GRAPH_Y, this.GRAPH_WIDTH - this.PR, this.GRAPH_HEIGHT, this.GRAPH_X, this.GRAPH_Y, this.GRAPH_WIDTH - this.PR, this.GRAPH_HEIGHT, ); this.context.fillRect( this.GRAPH_X + this.GRAPH_WIDTH - this.PR, this.GRAPH_Y, this.PR, this.GRAPH_HEIGHT, ); this.context.fillStyle = this.bg; this.context.globalAlpha = 0.9; this.context.fillRect( this.GRAPH_X + this.GRAPH_WIDTH - this.PR, this.GRAPH_Y, this.PR, this.round((1 - value / maxValue) * this.GRAPH_HEIGHT), ); } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/debug/stats/StatsView.tsx 
================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */
import {EnableStatsRequest} from '@/common/components/video/VideoWorkerTypes';
import stylex from '@stylexjs/stylex';
import {useEffect, useMemo, useRef, useState} from 'react';
import {useLocation} from 'react-router-dom';
import useVideo from '../../common/components/video/editor/useVideo';
import { GetMemoryStatsRequest, GetStatsCanvasRequest, MemoryStatsResponse, SetStatsCanvasResponse, } from './Stats';
const styles = stylex.create({ container: { position: 'fixed', top: 0, left: 0, width: '100%', overflowX: 'auto', display: 'flex', flexDirection: 'row', cursor: 'pointer', opacity: 0.9, zIndex: 10000, }, });
const URL_PARAM = 'monitors';
export default function StatsView() { const {search} = useLocation(); const video = useVideo(); const containerRef = useRef<HTMLDivElement | null>(null); const [isWrapped, setIsWrapped] = useState(false);
const isEnabled = useMemo(() => { const urlSearchParams = new URLSearchParams(search); return ( urlSearchParams.has(URL_PARAM) && ['true', ''].includes(urlSearchParams.get(URL_PARAM) ??
'') ); }, [search]); useEffect(() => { if (!isEnabled) { return; } const worker = video?.getWorker_ONLY_USE_WITH_CAUTION(); // Enable stats for video worker worker?.postMessage({ action: 'enableStats', } as EnableStatsRequest); function onMessage( event: MessageEvent, ) { if (event.data.action === 'getStatsCanvas') { // Add stats canvas and hand control over to worker const canvas = document.createElement('canvas'); canvas.width = event.data.width * window.devicePixelRatio; canvas.height = event.data.height * window.devicePixelRatio; canvas.style.width = `${event.data.width}px`; canvas.style.height = `${event.data.height}px`; containerRef.current?.appendChild(canvas); const offscreenCanvas = canvas.transferControlToOffscreen(); worker?.postMessage( { action: 'setStatsCanvas', id: event.data.id, canvas: offscreenCanvas, devicePixelRatio: window.devicePixelRatio, } as SetStatsCanvasResponse, { transfer: [offscreenCanvas], }, ); } else if (event.data.action === 'getMemoryStats') { // @ts-expect-error performance.memory might not exist const memory = performance.memory ?? { jsHeapSizeLimit: 0, totalJSHeapSize: 0, usedJSHeapSize: 0, }; worker?.postMessage({ action: 'memoryStats', id: event.data.id, jsHeapSizeLimit: memory.jsHeapSizeLimit, totalJSHeapSize: memory.totalJSHeapSize, usedJSHeapSize: memory.usedJSHeapSize, } as MemoryStatsResponse); } } worker?.addEventListener('message', onMessage); return () => { worker?.removeEventListener('message', onMessage); }; }, [video, isEnabled]); function handleClick() { setIsWrapped(w => !w); } if (!isEnabled) { return null; } return (
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/demo/DemoConfig.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {Effects} from '@/common/components/video/effects/Effects'; type EffectLayers = { background: keyof Effects; highlight: keyof Effects; }; export const DEMO_SHORT_NAME = 'SAM 2 Demo'; export const RESEARCH_BY_META_AI = 'By Meta FAIR'; export const DEMO_FRIENDLY_NAME = 'Segment Anything 2 Demo'; export const VIDEO_WATERMARK_TEXT = `Modified with ${DEMO_FRIENDLY_NAME}`; export const PROJECT_GITHUB_URL = 'https://github.com/facebookresearch/sam2'; export const AIDEMOS_URL = 'https://aidemos.meta.com'; export const ABOUT_URL = 'https://ai.meta.com/sam2'; export const EMAIL_ADDRESS = 'segment-anything@meta.com'; export const BLOG_URL = 'http://ai.meta.com/blog/sam2'; export const VIDEO_API_ENDPOINT = 'http://localhost:7263'; export const INFERENCE_API_ENDPOINT = 'http://localhost:7263'; export const demoObjectLimit = 3; export const DEFAULT_EFFECT_LAYERS: EffectLayers = { background: 'Original', highlight: 'Overlay', }; export const MAX_UPLOAD_FILE_SIZE = '70MB'; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/demo/DemoErrorFallback.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. 
and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import LoadingStateScreen from '@/common/loading/LoadingStateScreen'; import {FallbackProps} from 'react-error-boundary'; export default function DemoErrorFallback(_props: FallbackProps) { return ( ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/demo/DemoSuspenseFallback.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import LoadingStateScreen from '@/common/loading/LoadingStateScreen'; export default function DemoSuspenseFallback() { return ; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/demo/SAM2DemoApp.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import '@/assets/scss/App.scss'; import ErrorReport from '@/common/error/ErrorReport'; import DemoErrorFallback from '@/demo/DemoErrorFallback'; import DemoSuspenseFallback from '@/demo/DemoSuspenseFallback'; import RelayEnvironmentProvider from '@/graphql/RelayEnvironmentProvider'; import RootLayout from '@/layouts/RootLayout'; import SAM2DemoPage from '@/routes/DemoPageWrapper'; import PageNotFoundPage from '@/routes/PageNotFoundPage'; import useSettingsContext from '@/settings/useSettingsContext'; import {Route, Routes} from 'react-router-dom'; export default function DemoAppWrapper() { const {settings} = useSettingsContext(); return ( } errorFallback={DemoErrorFallback}> ); } function DemoApp() { return ( <> }> } /> } /> ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/demo/atoms.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. */
import { defaultMessageMap, MessagesEventMap, } from '@/common/components/snackbar/DemoMessagesSnackbarUtils';
import {Effects} from '@/common/components/video/effects/Effects';
import { DemoEffect, highlightEffects, } from '@/common/components/video/effects/EffectUtils';
import { BaseTracklet, SegmentationPoint, StreamingState, } from '@/common/tracker/Tracker';
import type {DataArray} from '@/jscocotools/mask';
import {atom} from 'jotai';
export type VideoData = { path: string; posterPath: string | null | undefined; url: string; posterUrl: string; width: number; height: number; };
export const frameIndexAtom = atom(0);
export const inputVideoAtom = atom<VideoData | null>(null);
// #####################
// SESSION
// #####################
export type Session = { id: string; ranPropagation: boolean; };
export const sessionAtom = atom<Session | null>(null);
// #####################
// STREAMING/PLAYBACK
// #####################
export const isVideoLoadingAtom = atom(false);
export const streamingStateAtom = atom<StreamingState>('none');
export const isPlayingAtom = atom(false);
export const isStreamingAtom = atom(false);
// #####################
// OBJECTS
// #####################
export type TrackletMask = { mask: DataArray; isEmpty: boolean; };
export type TrackletObject = { id: number; color: string; thumbnail: string | null; points: SegmentationPoint[][]; masks: TrackletMask[]; isInitialized: boolean; };
const MAX_NUMBER_TRACKLET_OBJECTS = 3;
export const activeTrackletObjectIdAtom = atom(0);
export const activeTrackletObjectAtom = atom(get => { const objectId = get(activeTrackletObjectIdAtom); const tracklets = get(trackletObjectsAtom); return tracklets.find(obj => obj.id === objectId) ??
null; });
export const trackletObjectsAtom = atom<TrackletObject[]>([]);
export const maxTrackletObjectIdAtom = atom(get => { const tracklets = get(trackletObjectsAtom); return tracklets.reduce((prev, curr) => Math.max(prev, curr.id), 0); });
export const isTrackletObjectLimitReachedAtom = atom( get => get(trackletObjectsAtom).length >= MAX_NUMBER_TRACKLET_OBJECTS, );
export const areTrackletObjectsInitializedAtom = atom(get => get(trackletObjectsAtom).every(obj => obj.isInitialized), );
export const isFirstClickMadeAtom = atom(get => { const tracklets = get(trackletObjectsAtom); return tracklets.some(tracklet => tracklet.points.length > 0); });
export const pointsAtom = atom(get => { const frameIndex = get(frameIndexAtom); const activeTracklet = get(activeTrackletObjectAtom); return activeTracklet?.points[frameIndex] ?? []; });
export const labelTypeAtom = atom<'positive' | 'negative'>('positive');
export const isAddObjectEnabledAtom = atom(get => { const session = get(sessionAtom); const trackletsInitialized = get(areTrackletObjectsInitializedAtom); const isObjectLimitReached = get(isTrackletObjectLimitReachedAtom); return ( session?.ranPropagation === false && trackletsInitialized && !isObjectLimitReached ); });
export const codeEditorOpenedAtom = atom(false);
export const tutorialVideoEnabledAtom = atom(true);
// #####################
// Effects
// #####################
type EffectConfig = { name: keyof Effects; variant: number; numVariants: number; };
export const activeBackgroundEffectAtom = atom<EffectConfig>({ name: 'Original', variant: 0, numVariants: 0, });
export const activeHighlightEffectAtom = atom<EffectConfig>({ name: 'Overlay', variant: 0, numVariants: 0, });
export const activeHighlightEffectGroupAtom = atom<DemoEffect[]>(highlightEffects);
// #####################
// Toolbar
// #####################
export const toolbarTabIndex = atom(0);
// #####################
// Messages snackbar
// #####################
export const messageMapAtom = atom<MessagesEventMap>(defaultMessageMap);
// #####################
// Upload state
//
#####################
export const uploadingStateAtom = atom<'default' | 'uploading' | 'error'>( 'default', );
================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/graphql/RelayEnvironment.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */
import Logger from '@/common/logger/Logger';
import { CacheConfig, Environment, FetchFunction, GraphQLResponse, LogEvent, Network, ObservableFromValue, RecordSource, RequestParameters, Store, UploadableMap, Variables, } from 'relay-runtime';
import fetchGraphQL from './fetchGraphQL';
function createFetchRelay(endpoint: string): FetchFunction { return ( request: RequestParameters, variables: Variables, cacheConfig: CacheConfig, uploadables?: UploadableMap | null, ): ObservableFromValue<GraphQLResponse> => { Logger.debug( `fetching query ${request.name} with ${JSON.stringify(variables)}`, ); return fetchGraphQL(endpoint, request, variables, cacheConfig, uploadables); }; }
export function createEnvironment(endpoint: string): Environment { return new Environment({ log: (logEvent: LogEvent) => Logger.debug(logEvent.name, logEvent), network: Network.create(createFetchRelay(endpoint)), store: new Store(new RecordSource()), }); }
================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/graphql/RelayEnvironmentProvider.tsx
================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */
import ErrorFallback from '@/common/error/ErrorFallback';
import LoadingMessage from '@/common/loading/LoadingMessage';
import {createEnvironment} from '@/graphql/RelayEnvironment';
import { ComponentType, PropsWithChildren, ReactNode, Suspense, useMemo, useState, } from 'react';
import {ErrorBoundary, FallbackProps} from 'react-error-boundary';
import {RelayEnvironmentProvider} from 'react-relay';
type Props = PropsWithChildren<{ suspenseFallback?: ReactNode; errorFallback?: ComponentType<FallbackProps>; endpoint: string; }>;
export default function OnevisionRelayEnvironmentProvider({ suspenseFallback, errorFallback = ErrorFallback, endpoint, children, }: Props) { const [retryKey, setRetryKey] = useState(0);
const environment = useMemo(() => { return createEnvironment(endpoint);
// The retryKey is needed to force a new Relay Environment
// instance when the user retries after an error occurred.
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [endpoint, retryKey]);
// Force re-creating Relay Environment
function handleReset() { setRetryKey(k => k + 1); }
return ( }> {children} ); }
================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/graphql/errors/CreateFilmstripError.ts ================================================ /** * Copyright (c) Meta Platforms, Inc.
and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ export default class CreateFilmstripError extends Error { override name = 'CreateFilmstripError'; constructor(message?: string) { super(message); } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/graphql/errors/DrawFrameError.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ export default class DrawFrameError extends Error { override name = 'DrawFrameError'; constructor(message?: string) { super(message); } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/graphql/errors/WebGLContextError.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ export default class WebGLContextError extends Error { override name = 'WebGLContextError'; constructor(message?: string) { super(message); } } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/graphql/fetchGraphQL.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/
import Logger from '@/common/logger/Logger';
import { CacheConfig, GraphQLResponse, RequestParameters, UploadableMap, Variables, } from 'relay-runtime';
/** * Inspired by https://github.com/facebook/relay/issues/1844 */
export default async function fetchGraphQL( endpoint: string, request: RequestParameters, variables: Variables, cacheConfig: CacheConfig, uploadables?: UploadableMap | null, ): Promise<GraphQLResponse> { const url = `${endpoint}/graphql`;
const headers: {[name: string]: string} = {};
const requestInit: RequestInit = { method: 'POST', headers, credentials: 'include', };
const customHeaders = (cacheConfig?.metadata?.headers ?? {}) as { [key: string]: string; };
requestInit.headers = Object.assign(customHeaders, requestInit.headers);
if (uploadables != null) { const formData = new FormData();
formData.append( 'operations', JSON.stringify({ query: request.text, variables, }), );
const uploadableMap: { [key: string]: string[]; } = {};
Object.keys(uploadables).forEach(key => { uploadableMap[key] = [`variables.${key}`]; });
formData.append('map', JSON.stringify(uploadableMap));
Object.keys(uploadables).forEach(key => { formData.append(key, uploadables[key]); });
requestInit.body = formData;
} else { requestInit.headers = Object.assign( {'Content-Type': 'application/json'}, requestInit.headers, );
requestInit.body = JSON.stringify({ query: request.text, variables, }); }
try { const response = await fetch(url, requestInit); const result = await response.json();
// Handle any intentional GraphQL errors, which are passed through the
// errors property in the JSON payload.
if ('errors' in result) { for (const error of result.errors) { Logger.error(error); } }
return result;
} catch (error) { Logger.error(`Could not connect to GraphQL endpoint ${url}`, error); throw error; } }
================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/jscocotools/mask.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */
export class DataArray { data: Uint8Array; readonly shape: number[];
constructor(data: Uint8Array, shape: Array<number>) { this.data = data; this.shape = shape; } }
export type RLEObject = { size: [h: number, w: number]; counts: string; };
type RLE = { h: number; w: number; m: number; cnts: number[]; };
type BB = number[];
function rleInit(R: RLE, h: number, w: number, m: number, cnts: number[]) { R.h = h; R.w = w; R.m = m; R.cnts = m === 0 ?
[0] : cnts; } function rlesInit(R: RLE[], n: number) { let i; for (i = 0; i < n; i++) { R[i] = {h: 0, w: 0, m: 0, cnts: [0]}; rleInit(R[i], 0, 0, 0, [0]); } } class RLEs { _R: RLE[]; _n: number; constructor(n: number) { this._R = []; rlesInit(this._R, n); this._n = n; } } export class Masks { _mask: Uint8Array; _h: number; _w: number; _n: number; constructor(h: number, w: number, n: number) { this._mask = new Uint8Array(h * w * n); this._h = h; this._w = w; this._n = n; } toDataArray(): DataArray { return new DataArray(this._mask, [this._h, this._w, this._n]); } } // encode mask to RLEs objects // list of RLE string can be generated by RLEs member function export function encode(mask: DataArray): RLEObject[] { const h = mask.shape[0]; const w = mask.shape[1]; const n = mask.shape[2]; const Rs = new RLEs(n); rleEncode(Rs._R, mask.data, h, w, n); const objs = _toString(Rs); return objs; } // decode mask from compressed list of RLE string or RLEs object export function decode(rleObjs: RLEObject[]): DataArray { const Rs = _frString(rleObjs); const h = Rs._R[0].h; const w = Rs._R[0].w; const n = Rs._n; const masks = new Masks(h, w, n); rleDecode(Rs._R, masks._mask, n); return masks.toDataArray(); } export function toBbox(rleObjs: RLEObject[]): BB { const Rs = _frString(rleObjs); const n = Rs._n; const bb: BB = []; rleToBbox(Rs._R, bb, n); return bb; } function rleEncode(R: RLE[], M: Uint8Array, h: number, w: number, n: number) { let i; let j; let k; const a = w * h; let c; const cnts: number[] = []; let p; for (i = 0; i < n; i++) { const from = a * i; const to = a * (i + 1); // Slice data for current RLE object const T = M.slice(from, to); k = 0; p = 0; c = 0; for (j = 0; j < a; j++) { if (T[j] !== p) { cnts[k++] = c; c = 0; p = T[j]; } c++; } cnts[k++] = c; rleInit(R[i], h, w, k, [...cnts]); } } function rleDecode(R: RLE[], M: Uint8Array, n: number): void { let i; let j; let k; let p = 0; for (i = 0; i < n; i++) { let v = false; for (j = 0; j < R[i].m; j++) { for (k = 
0; k < R[i].cnts[j]; k++) { M[p++] = v === false ? 0 : 1; } v = !v; } } } function rleToString(R: RLE): string { /* Similar to LEB128 but using 6 bits/char and ascii chars 48-111. */ let i; const m = R.m; let p = 0; let x: number; let more; const s: string[] = []; for (i = 0; i < m; i++) { x = R.cnts[i]; if (i > 2) { x -= R.cnts[i - 2]; } more = true; // 1; while (more) { let c = x & 0x1f; x >>= 5; more = c & 0x10 ? x != -1 : x != 0; if (more) { c |= 0x20; } c += 48; s[p++] = String.fromCharCode(c); } } return s.join(''); } // internal conversion from Python RLEs object to compressed RLE format function _toString(Rs: RLEs): RLEObject[] { const n = Rs._n; let py_string; let c_string; const objs: RLEObject[] = []; for (let i = 0; i < n; i++) { c_string = rleToString(Rs._R[i]); py_string = c_string; objs.push({ size: [Rs._R[i].h, Rs._R[i].w], counts: py_string, }); } return objs; } // internal conversion from compressed RLE format to Python RLEs object function _frString(rleObjs: RLEObject[]): RLEs { const n = rleObjs.length; const Rs = new RLEs(n); let py_string; let c_string; for (let i = 0; i < rleObjs.length; i++) { const obj = rleObjs[i]; py_string = obj.counts; c_string = py_string; rleFrString(Rs._R[i], c_string, obj.size[0], obj.size[1]); } return Rs; } function rleToBbox(R: RLE[], bb: BB, n: number) { for (let i = 0; i < n; i++) { const h = R[i].h; const w = R[i].w; let m = R[i].m; // The RLE structure likely contains run-length encoded data where each // element represents a count of consecutive pixels with the same value in // a binary image (black or white). Since the counts represent both black // and white pixels, this operation ((siz)(m/2)) * 2 is used to ensure that // m is always an even number. By doing so, the code can later check // whether the current pixel is black or white based on whether the index j // is even or odd. 
m = Math.floor(m / 2) * 2; let xs = w; let ys = h; let xe = 0; let ye = 0; let cc = 0; let t; let y; let x; let xp = 0; if (m === 0) { bb[4 * i] = bb[4 * i + 1] = bb[4 * i + 2] = bb[4 * i + 3] = 0; continue; } for (let j = 0; j < m; j++) { cc += R[i].cnts[j]; t = cc - (j % 2); y = t % h; x = Math.floor((t - y) / h); if (j % 2 === 0) { xp = x; } else if (xp < x) { ys = 0; ye = h - 1; } xs = Math.min(xs, x); xe = Math.max(xe, x); ys = Math.min(ys, y); ye = Math.max(ye, y); } bb[4 * i] = xs; bb[4 * i + 2] = xe - xs + 1; bb[4 * i + 1] = ys; bb[4 * i + 3] = ye - ys + 1; } } function rleFrString(R: RLE, s: string, h: number, w: number): void { let m = 0; let p = 0; let k; let x; let more; let cnts = []; while (s[m]) { m++; } cnts = []; m = 0; while (s[p]) { x = 0; k = 0; more = 1; while (more) { const c = s.charCodeAt(p) - 48; x |= (c & 0x1f) << (5 * k); more = c & 0x20; p++; k++; if (!more && c & 0x10) { x |= -1 << (5 * k); } } if (m > 2) { x += cnts[m - 2]; } cnts[m++] = x; } rleInit(R, h, w, m, cnts); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/layouts/DemoPageLayout.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import {spacing} from '@/theme/tokens.stylex'; import stylex from '@stylexjs/stylex'; import {PropsWithChildren} from 'react'; type Props = PropsWithChildren; const styles = stylex.create({ container: { width: '100%', height: '100%', display: 'flex', justifyContent: 'stretch', alignItems: 'stretch', gap: spacing[12], paddingHorizontal: spacing[12], paddingVertical: spacing[4], '@media screen and (max-width: 768px)': { display: 'flex', flexDirection: 'column-reverse', gap: 0, marginTop: spacing[0], marginBottom: spacing[0], paddingHorizontal: spacing[0], paddingBottom: spacing[0], }, }, }); export default function DemoPageLayout({children}: Props) { return
<div {...stylex.props(styles.container)}>{children}</div>
; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/layouts/RootLayout.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import LoadingStateScreen from '@/common/loading/LoadingStateScreen'; import useSettingsContext from '@/settings/useSettingsContext'; import {Cog6ToothIcon} from '@heroicons/react/24/outline'; import stylex from '@stylexjs/stylex'; import {Suspense} from 'react'; import {Button, Indicator} from 'react-daisyui'; import {Outlet} from 'react-router-dom'; const styles = stylex.create({ container: { display: 'flex', flexDirection: 'column', height: '100%', maxHeight: '100vh', backgroundColor: '#000', }, content: { position: 'relative', flex: '1 1 0%', display: 'flex', flexDirection: 'column', overflowX: 'auto', overflowY: { default: 'auto', '@media screen and (max-width: 768px)': 'auto', }, }, debugActions: { display: 'flex', flexDirection: 'column', position: 'fixed', top: 100, right: 0, backgroundColor: 'white', borderRadius: 3, }, }); export default function RootLayout() { const {openModal, hasChanged} = useSettingsContext(); return (
}>
{hasChanged && ( )}
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/routes/DemoPage.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import Toolbar from '@/common/components/toolbar/Toolbar'; import DemoVideoEditor from '@/common/components/video/editor/DemoVideoEditor'; import useInputVideo from '@/common/components/video/useInputVideo'; import StatsView from '@/debug/stats/StatsView'; import {VideoData} from '@/demo/atoms'; import DemoPageLayout from '@/layouts/DemoPageLayout'; import {DemoPageQuery} from '@/routes/__generated__/DemoPageQuery.graphql'; import {useEffect, useMemo} from 'react'; import {graphql, useLazyLoadQuery} from 'react-relay'; import {Location, useLocation} from 'react-router-dom'; type LocationState = { video?: VideoData; }; export default function DemoPage() { const {state} = useLocation() as Location; const data = useLazyLoadQuery( graphql` query DemoPageQuery { defaultVideo { path posterPath url posterUrl height width } } `, {}, ); const {setInputVideo} = useInputVideo(); const video = useMemo(() => { return state?.video ?? 
data.defaultVideo; }, [state, data]); useEffect(() => { setInputVideo(video); }, [video, setInputVideo]); return ( ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/routes/DemoPageWrapper.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import LoadingStateScreen from '@/common/loading/LoadingStateScreen'; import DemoPage from '@/routes/DemoPage'; import stylex from '@stylexjs/stylex'; import {isFirefox} from 'react-device-detect'; const styles = stylex.create({ link: { textDecorationLine: 'underline', color: '#A7B3BF', }, }); const REQUIRED_WINDOW_APIS = ['VideoEncoder', 'VideoDecoder', 'VideoFrame']; function isBrowserSupported() { for (const api of REQUIRED_WINDOW_APIS) { if (!(api in window)) { return false; } } // Test if transferControlToOffscreen is supported. 
For example, this will // fail on iOS version < 16.4 // https://developer.mozilla.org/en-US/docs/Web/API/HTMLCanvasElement/transferControlToOffscreen const canvas = document.createElement('canvas'); if (typeof canvas.transferControlToOffscreen !== 'function') { return false; } return true; } export default function DemoPageWrapper() { const isBrowserUnsupported = !isBrowserSupported(); if (isBrowserUnsupported && isFirefox) { const nightlyUrl = 'https://wiki.mozilla.org/Nightly'; return ( This version of Firefox doesn’t support the video features we’ll need to run this demo. You can either update Firefox to the latest nightly build{' '} here , or try again using Chrome or Safari.
} linkProps={{to: '..', label: 'Back to homepage'}} /> ); } if (isBrowserUnsupported) { return ( ); } return ; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/routes/PageNotFoundPage.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import LoadingStateScreen from '@/common/loading/LoadingStateScreen'; export default function PageNotFoundPage() { return ( ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/routes/__generated__/DemoPageQuery.graphql.ts ================================================ /** * @generated SignedSource<> * @lightSyntaxTransform * @nogrep */ /* tslint:disable */ /* eslint-disable */ // @ts-nocheck import { ConcreteRequest, Query } from 'relay-runtime'; export type DemoPageQuery$variables = Record; export type DemoPageQuery$data = { readonly defaultVideo: { readonly height: number; readonly path: string; readonly posterPath: string | null | undefined; readonly posterUrl: string; readonly url: string; readonly width: number; }; }; export type DemoPageQuery = { response: DemoPageQuery$data; variables: DemoPageQuery$variables; }; const node: ConcreteRequest = (function(){ var v0 = [ { "alias": null, "args": null, "concreteType": "Video", "kind": "LinkedField", "name": "defaultVideo", "plural": false, "selections": [ { 
"alias": null, "args": null, "kind": "ScalarField", "name": "path", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "posterPath", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "url", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "posterUrl", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "height", "storageKey": null }, { "alias": null, "args": null, "kind": "ScalarField", "name": "width", "storageKey": null } ], "storageKey": null } ]; return { "fragment": { "argumentDefinitions": [], "kind": "Fragment", "metadata": null, "name": "DemoPageQuery", "selections": (v0/*: any*/), "type": "Query", "abstractKey": null }, "kind": "Request", "operation": { "argumentDefinitions": [], "kind": "Operation", "name": "DemoPageQuery", "selections": (v0/*: any*/) }, "params": { "cacheID": "71cbafce4d2d047acdc54d86504f2d2e", "id": null, "metadata": {}, "name": "DemoPageQuery", "operationKind": "query", "text": "query DemoPageQuery {\n defaultVideo {\n path\n posterPath\n url\n posterUrl\n height\n width\n }\n}\n" } }; })(); (node as any).hash = "63c9465d78b30d42d6fc11e50a9af142"; export default node; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/settings/ApprovableInput.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. */ import Tooltip from '@/common/components/Tooltip'; import {ArrowPathIcon, CheckIcon, XMarkIcon} from '@heroicons/react/24/solid'; import {ChangeEvent, KeyboardEvent, useEffect, useMemo, useState} from 'react'; import {Button, Form, Input, Join} from 'react-daisyui'; type Props = Omit< React.InputHTMLAttributes, 'size' | 'color' | 'onChange' > & { label: string; defaultValue: T; initialValue: T; onChange: (value: string) => void; }; function getStep(value: number) { const stringValue = String(value); const decimals = stringValue.split('.')[1]; if (decimals != null) { // Not using 0.1 ** decimals.length because this will result in rounding // errors, e.g., 0.1 ** 2 => 0.010000000000000002. return 1 / 10 ** decimals.length; } return 1; } export default function ApprovableInput({ label, defaultValue, initialValue, onChange, ...otherProps }: Props) { const [value, setValue] = useState(`${initialValue}`); useEffect(() => { setValue(`${initialValue}`); }, [initialValue]); const step = useMemo(() => { return typeof defaultValue === 'number' && isFinite(defaultValue) ? getStep(defaultValue) : undefined; }, [defaultValue]); return (
) => { setValue(event.target.value); }} onKeyDown={(event: KeyboardEvent) => { if (event.key === 'Enter') { event.preventDefault(); onChange(value); } }} />
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/settings/SAM2Settings.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {INFERENCE_API_ENDPOINT, VIDEO_API_ENDPOINT} from '@/demo/DemoConfig'; import ApprovableInput from '@/settings/ApprovableInput'; import useSettingsContext from '@/settings/useSettingsContext'; export default function SAMVSettings() { const {settings, dispatch} = useSettingsContext(); return (
dispatch({type: 'change-video-api-endpoint', url})} /> dispatch({type: 'change-inference-api-endpoint', url})} />
); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/settings/SettingsContextProvider.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import emptyFunction from '@/common/utils/emptyFunction'; import {INFERENCE_API_ENDPOINT, VIDEO_API_ENDPOINT} from '@/demo/DemoConfig'; import SettingsModal from '@/settings/SettingsModal'; import { Action, DEFAULT_SETTINGS, Settings, settingsReducer, } from '@/settings/SettingsReducer'; import { PropsWithChildren, createContext, useCallback, useMemo, useRef, } from 'react'; import {useImmerReducer} from 'use-immer'; type ContextProps = { settings: Settings; dispatch: React.Dispatch; openModal: () => void; closeModal: () => void; hasChanged: boolean; }; export const SettingsContext = createContext({ settings: DEFAULT_SETTINGS, dispatch: emptyFunction, openModal: emptyFunction, closeModal: emptyFunction, hasChanged: false, }); type Props = PropsWithChildren; export default function SettingsContextProvider({children}: Props) { const [state, dispatch] = useImmerReducer( settingsReducer, DEFAULT_SETTINGS, settings => { // Load the settings from local storage. Eventually use the reducer init // to handle initial loading. 
return settingsReducer(settings, {type: 'load-state'}); }, ); const modalRef = useRef(null); const openModal = useCallback(() => { modalRef.current?.showModal(); }, [modalRef]); const handleCloseModal = useCallback(() => { modalRef.current?.close(); }, [modalRef]); const hasChanged = useMemo(() => { return ( VIDEO_API_ENDPOINT !== state.videoAPIEndpoint || INFERENCE_API_ENDPOINT !== state.inferenceAPIEndpoint ); }, [state.videoAPIEndpoint, state.inferenceAPIEndpoint]); const value = useMemo( () => ({ settings: state, dispatch, openModal, closeModal: handleCloseModal, hasChanged, }), [state, dispatch, openModal, handleCloseModal, hasChanged], ); return ( {children} ); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/settings/SettingsModal.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import {DEMO_FRIENDLY_NAME} from '@/demo/DemoConfig'; import SAM2Settings from '@/settings/SAM2Settings'; import {XMarkIcon} from '@heroicons/react/24/solid'; import {forwardRef, useState} from 'react'; import {Button, Modal} from 'react-daisyui'; import useSettingsContext from './useSettingsContext'; type Props = unknown; type Config = { key: 'sam2'; title: string; component: React.ElementType; }; const SettingsConfig: Config[] = [ { key: 'sam2', title: DEMO_FRIENDLY_NAME, component: SAM2Settings, }, ]; export default forwardRef( function SettingsModal(_props, ref) { const {closeModal} = useSettingsContext(); const [activeConfig, setActiveConfig] = useState(SettingsConfig[0]); const SettingsComponent = activeConfig.component; return ( ); }, ); ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/settings/SettingsReducer.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import {INFERENCE_API_ENDPOINT, VIDEO_API_ENDPOINT} from '@/demo/DemoConfig'; export type Settings = { videoAPIEndpoint: string; inferenceAPIEndpoint: string; }; // Key used to store the settings in the browser's local storage. 
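The persistence flow implemented by the reducer below (seed localStorage with defaults on first load, write back on every change) can be exercised headlessly. This is an illustrative sketch, not the module's code: it swaps `window.localStorage` for an in-memory `Map` so it runs under Node, and the endpoint values are placeholders.

```typescript
type Settings = {videoAPIEndpoint: string; inferenceAPIEndpoint: string};
type Action =
  | {type: 'load-state'}
  | {type: 'change-video-api-endpoint'; url: string}
  | {type: 'change-inference-api-endpoint'; url: string};

// In-memory stand-in for window.localStorage.
const storage = new Map<string, string>();
const SETTINGS_KEY = 'SAM2_SETTINGS_KEY';
const DEFAULTS: Settings = {
  videoAPIEndpoint: 'https://video.example',
  inferenceAPIEndpoint: 'https://inference.example',
};

function settingsReducerSketch(state: Settings, action: Action): Settings {
  const persist = (s: Settings) => storage.set(SETTINGS_KEY, JSON.stringify(s));
  switch (action.type) {
    case 'load-state': {
      const raw = storage.get(SETTINGS_KEY);
      if (raw != null) {
        return JSON.parse(raw) as Settings; // restore persisted settings
      }
      persist(state); // first load: seed storage with the defaults
      return state;
    }
    case 'change-video-api-endpoint':
      state = {...state, videoAPIEndpoint: action.url};
      break;
    case 'change-inference-api-endpoint':
      state = {...state, inferenceAPIEndpoint: action.url};
      break;
  }
  persist(state); // write back on every change
  return state;
}
```

The real reducer mutates an immer draft in place (it runs under `useImmerReducer`); the sketch uses plain immutable updates to the same effect.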
export const SAM2_SETTINGS_KEY = 'SAM2_SETTINGS_KEY'; export type Action = | {type: 'load-state'} | {type: 'change-video-api-endpoint'; url: string} | {type: 'change-inference-api-endpoint'; url: string}; export const DEFAULT_SETTINGS: Settings = { videoAPIEndpoint: VIDEO_API_ENDPOINT, inferenceAPIEndpoint: INFERENCE_API_ENDPOINT, }; export function settingsReducer(state: Settings, action: Action): Settings { function storeSettings(newState: Settings): void { localStorage.setItem(SAM2_SETTINGS_KEY, JSON.stringify(newState)); } switch (action.type) { case 'load-state': { try { const serializedSettings = localStorage.getItem(SAM2_SETTINGS_KEY); if (serializedSettings != null) { return JSON.parse(serializedSettings) as Settings; } else { // Store default settings in local storage. This will populate the // settings in the local storage on first app load or when user // cleared the browser cache. storeSettings(state); } } catch { // Could not parse settings. Using default settings instead. } return state; } case 'change-video-api-endpoint': state.videoAPIEndpoint = action.url; break; case 'change-inference-api-endpoint': state.inferenceAPIEndpoint = action.url; break; } // Store the settings state on every change storeSettings(state); return state; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/settings/useSettingsContext.tsx ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
* See the License for the specific language governing permissions and * limitations under the License. */ import {useContext} from 'react'; import {SettingsContext} from '@/settings/SettingsContextProvider'; export default function useSettingsContext() { return useContext(SettingsContext); } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/theme/colors.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ export const THEME_COLORS = [ '#3880F3', '#F0AA19', '#00D2BE', '#28D232', '#8773FF', '#00C8F0', '#FA8719', '#E6193B', '#FA7DC8', ]; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/theme/gradientStyle.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ export const BLUE_PINK_FILL = 'from-[#595FEF] from-40% to-[#FB73A5]'; export const BLUE_PINK_FILL_BR = 'bg-gradient-to-br from-[#595FEF] from-30% to-[#FB73A5]'; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/theme/tokens.stylex.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import * as stylex from '@stylexjs/stylex'; export const spacing = stylex.defineVars({ '0': '0rem', '0.5': '0.125rem', '1': '0.25rem', '1.5': '0.375rem', '2': '0.5rem', '2.5': '0.625rem', '3': '0.75rem', '3.5': '0.875rem', '4': '1rem', '5': '1.25rem', '6': '1.5rem', '7': '1.75rem', '8': '2rem', '9': '2.25rem', '10': '2.5rem', '11': '2.75rem', '12': '3rem', }); export const gap = stylex.defineVars({ 4: '1rem' /* 16px */, }); export const w = stylex.defineVars({ full: '100%', 12: '3rem' /* 48px */, 96: '24rem' /* 384px */, }); export const m = stylex.defineVars({ 3: '0.75rem' /* 12px */, }); export const fontSize = stylex.defineVars({ xs: '0.75rem', sm: '0.875rem', base: '1rem', lg: '1.125rem', xl: '1.25rem', '2xl': '1.5rem', }); export const fontWeight = stylex.defineVars({ thin: 100, extralight: 200, light: 300, normal: 400, medium: 500, semibold: 600, bold: 700, extrabold: 800, }); export const color = stylex.defineVars({ subtitle: 'rgb(107 114 128)', 'gray-900': 'rgb(17 24 39)', 'gray-800': 'rgb(26 28 31)', 'gray-700': 'rgb(55 62 
65)', 'blue-600': 'rgb(37 99 235)', }); export const screenSizes = { sm: 640, md: 768, lg: 1024, xl: 1280, '2xl': 1536, }; export const borderRadius = stylex.defineVars({ sm: '0.125rem', md: '0.375rem', lg: '0.5rem', xl: '0.75rem', }); export const top = stylex.defineVars({ 0: 0, 1: '0.25rem' /* 4px */, 2: '0.5rem' /* 8px */, }); export const right = stylex.defineVars({ 0: 0, 1: '0.25rem' /* 4px */, 2: '0.5rem' /* 8px */, }); export const gradients = stylex.defineVars({ rainbow: 'linear-gradient(#000, #000) padding-box, linear-gradient(to right bottom, #FB73A5,#595FEF,#94EAE2,#FCCB6B) border-box', rainbowReverse: 'linear-gradient(#000, #000) padding-box, linear-gradient(to left top, #FB73A5,#595FEF,#94EAE2,#FCCB6B) border-box', yellowTeal: 'linear-gradient(#000, #000) padding-box, linear-gradient(to right bottom, #94EAE2,#FCCB6B) border-box', }); ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/types/mp4box/index.d.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ declare module 'mp4box' { export interface MP4MediaTrackEdit { media_rate_fraction: number; media_rate_integer: number; media_time: number; segment_duration: number; } export interface MP4MediaTrack { id: number; created: Date; modified: Date; movie_duration: number; movie_timescale: number; layer: number; alternate_group: number; volume: number; track_width: number; track_height: number; timescale: number; duration: number; bitrate: number; codec: string; language: string; nb_samples: number; samples_duration: number; edits: MP4MediaTrackEdit[]; } export interface MP4VideoData { width: number; height: number; } export interface MP4VideoTrack extends MP4MediaTrack { video: MP4VideoData; } export interface MP4AudioData { sample_rate: number; channel_count: number; sample_size: number; } export interface MP4AudioTrack extends MP4MediaTrack { audio: MP4AudioData; } export type MP4Track = MP4VideoTrack | MP4AudioTrack; export interface MP4Info { duration: number; timescale: number; fragment_duration: number; isFragmented: boolean; isProgressive: boolean; hasIOD: boolean; brands: string[]; created: Date; modified: Date; tracks: MP4Track[]; audioTracks: MP4AudioTrack[]; videoTracks: MP4VideoTrack[]; otherTracks: MP4VideoTrack[]; } export interface MP4Sample { alreadyRead: number; chunk_index: number; chunk_run_index: number; cts: number; data: Uint8Array; degradation_priority: number; depends_on: number; description: unknown; description_index: number; dts: number; duration: number; has_redundancy: number; is_depended_on: number; is_leading: number; is_sync: boolean; number: number; offset: number; size: number; timescale: number; track_id: number; } export type MP4ArrayBuffer = ArrayBuffer & {fileStart: number}; export class DataStream { static BIG_ENDIAN: boolean; static LITTLE_ENDIAN: boolean; buffer: ArrayBuffer; constructor( arrayBuffer?: ArrayBuffer, byteOffset?: number, endianness?: boolean, ); } export interface Trak { mdia?: { minf?: { stbl?: { stsd?: {
entries: { avcC?: { write: (stream: DataStream) => void; }; hvcC?: { write: (stream: DataStream) => void; }; }[]; }; }; }; }; } export namespace BoxParser { export class Box { size?: number; data?: Uint8Array; constructor(type?: string, size?: number); add(name: string): Box; addBox(box: Box): Box; addEntry(value: string, prop?: string): void; write(stream: DataStream): void; writeHeader(stream: DataStream, msg?: string): void; computeSize(): void; } export class ContainerBox extends Box {} export class avcCBox extends ContainerBox {} export class hvcCBox extends ContainerBox {} export class vpcCBox extends ContainerBox {} export class av1CBox extends ContainerBox {} } export interface TrackOptions { id?: number; type?: string; width?: number; height?: number; duration?: number; layer?: number; timescale?: number; media_duration?: number; language?: string; hdlr?: string; // video avcDecoderConfigRecord?: BufferSource; // audio balance?: number; channel_count?: number; samplesize?: number; samplerate?: number; //captions namespace?: string; schema_location?: string; auxiliary_mime_types?: string; description?: BoxParser.Box; description_boxes?: BoxParser.Box[]; default_sample_description_index_id?: number; default_sample_duration?: number; default_sample_size?: number; default_sample_flags?: number; } export interface SampleOptions { sample_description_index?: number; duration?: number; cts?: number; dts?: number; is_sync?: boolean; is_leading?: number; depends_on?: number; is_depended_on?: number; has_redundancy?: number; degradation_priority?: number; } export interface Sample { number: number; track_id: number; timescale: number; description_index: number; description: { avcC?: BoxParser.avcCBox; // h.264 hvcC?: BoxParser.hvcCBox; // hevc vpcC?: BoxParser.vpcCBox; // vp9 av1C?: BoxParser.av1CBox; // av1 }; data: ArrayBuffer; size: number; alreadyRead?: number; duration: number; cts: number; dts: number; is_sync: boolean; is_leading?: number; depends_on?: number; 
is_depended_on?: number; has_redundancy?: number; degradation_priority?: number; offset?: number; } export interface MP4File { getBuffer(): MP4ArrayBuffer; addTrack(options?: TrackOptions): number; addSample( track: number, data: ArrayBuffer, options?: SampleOptions, ): Sample; addSample( trackID: number, uint8: Uint8Array, arg2: {duration: number; is_sync: boolean}, ): void; onMoovStart?: () => void; onReady?: (info: MP4Info) => void; onError?: (e: string) => void; onSamples?: (id: number, user: unknown, samples: MP4Sample[]) => unknown; appendBuffer(data: MP4ArrayBuffer): number; save(fileName: string): void; start(): void; stop(): void; /** * Indicates that the next samples to process (for extraction or * segmentation) start at the given time (Number, in seconds) or at the * time of the previous Random Access Point (if useRap is true, default * is false). Returns the offset in the file of the next bytes to be * provided via appendBuffer. * * @param time - Start at the given time (Number, in seconds) * @param useRap - Random Access Point (if useRap is true, default is false) * @returns Returns the offset in the file of the next bytes to be provided via appendBuffer. */ seek: (time: number, useRap: boolean) => number; flush(): void; releaseUsedSamples(trackId: number, sampleNumber: number): void; setExtractionOptions( trackId: number, user?: unknown, options?: {nbSamples?: number; rapAlignment?: number}, ): void; getTrackById(trackId: number): Trak; } export function createFile(): MP4File; export {}; } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/src/vite-env.d.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ /// ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/tailwind.config.js ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ import tailwindCSSTypography from '@tailwindcss/typography'; import daisyui from 'daisyui'; import * as daisyColorThemes from 'daisyui/src/theming/themes'; /** @type {import('tailwindcss').Config} */ export default { content: [ './index.html', './src/**/*.{js,ts,jsx,tsx}', 'node_modules/daisyui/dist/**/*.js', 'node_modules/react-daisyui/dist/**/*.js', ], daisyui: { styled: true, themes: [ { light: { ...daisyColorThemes['[data-theme=light]'], 'base-100': '#FFFFFF', 'base-200': '#F1F4F7', 'base-300': '#DEE3E9', primary: '#0064E0', 'primary-content': '#FFFFFF', secondary: '#0F191E', 'secondary-content': '#FFFFFF', accent: '#6441D2', 'accent-content': '#FFFFFF', info: '#009B9B', 'info-content': '#FFFFFF', success: '#0F9B14', 'success-content': '#FFFFFF', warning: '#FA8719', 'warning-content': '#FFFFFF', error: '#C80A28', 'error-content': '#FFFFFF', '--rounded-box': '0.35rem', // border radius rounded-box utility class, used in card and other large boxes '--rounded-btn': '0.35rem', // border radius rounded-btn utility class, used in buttons and similar element '--rounded-badge': '1rem', // border radius rounded-badge utility class, used in badges and similar }, }, 'dark', ], }, theme: { fontSize: { xs: ['0.75rem', {lineHeight: '1.5'}], sm: ['0.875rem', {lineHeight: '1.5'}], base: ['1rem', {lineHeight: '1.5'}], lg: ['1.125rem', {lineHeight: '1.2', fontWeight: 500}], xl: ['1.25rem', {lineHeight: '1.2', fontWeight: 500}], '2xl': [ '1.5rem', {lineHeight: '1.2', fontWeight: 500, letterSpacing: '0.005rem'}, ], '3xl': [ '2.25rem', {lineHeight: '1.2', fontWeight: 500, letterSpacing: '0.01rem'}, ], '4xl': [ '3rem', {lineHeight: '1.2', fontWeight: 500, letterSpacing: '0.016rem'}, ], '5xl': [ '4rem', {lineHeight: '1.2', fontWeight: 400, letterSpacing: '0.016rem'}, ], '6xl': [ '5rem', {lineHeight: '1.2', fontWeight: 400, letterSpacing: '0.016rem'}, ], }, extend: { colors: { graydark: { 50: '#f1f4f7', 100: '#DEE3E9', 200: '#CBD2D9', 300: '#A7B3BF', 400: '#8595A4', 500: '#667788', 
600: '#465A69', 700: '#343845', 800: '#1A1C1F', 900: '#0F191E', }, }, lineHeight: { tight: 1.2, }, backgroundImage: { dot: 'url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAgAAAAICAYAAADED76LAAAAAXNSR0IArs4c6QAAABdJREFUGBljYGBg+A/FQAoTMGEKDUcRAATwAgFGIXEOAAAAAElFTkSuQmCC)', }, keyframes: { wiggle: { '0%, 100%': {transform: 'rotate(-3deg)'}, '50%': {transform: 'rotate(3deg)'}, }, }, animation: { wiggle: 'wiggle .25s ease-in-out', }, typography: { DEFAULT: { css: { maxWidth: '100%', // add required value here a: { textDecoration: 'none', }, }, }, }, }, }, plugins: [tailwindCSSTypography, daisyui], }; ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/tsconfig.json ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. 
*/ { "compilerOptions": { "target": "ES2020", "useDefineForClassFields": true, "lib": ["ES2020", "DOM", "DOM.Iterable", "webworker"], "module": "ESNext", "skipLibCheck": true, /* Bundler mode */ "moduleResolution": "bundler", "allowImportingTsExtensions": true, "resolveJsonModule": true, "isolatedModules": true, "esModuleInterop": true, // esModuleInterop true required for Jest "noEmit": true, "jsx": "react-jsx", /* Linting */ "strict": true, "noUnusedLocals": true, "noUnusedParameters": true, "noFallthroughCasesInSwitch": true, "baseUrl": "./src", "paths": { "mp4box": ["types/mp4box"], "@/*": ["*"] } }, "include": ["src"], "references": [{"path": "./tsconfig.node.json"}] } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/tsconfig.node.json ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ { "compilerOptions": { "composite": true, "skipLibCheck": true, "module": "ESNext", "target": "ES2017", "moduleResolution": "bundler", "allowSyntheticDefaultImports": true, "strictNullChecks": true }, "include": ["vite.config.ts", "schemas"] } ================================================ FILE: auto-seg/submodules/segment-anything-2/demo/frontend/vite.config.ts ================================================ /** * Copyright (c) Meta Platforms, Inc. and affiliates. 
* * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import react from '@vitejs/plugin-react'; import jotaiDebugLabel from 'jotai/babel/plugin-debug-label'; import jotaiReactRefresh from 'jotai/babel/plugin-react-refresh'; import path from 'path'; import {defineConfig} from 'vite'; import babel from 'vite-plugin-babel'; import relay from 'vite-plugin-relay'; import {stylexPlugin} from 'vite-plugin-stylex-dev'; export default defineConfig({ resolve: { alias: { '@': path.resolve(__dirname, './src'), }, }, plugins: [ react({ babel: { plugins: [jotaiDebugLabel, jotaiReactRefresh], }, }), stylexPlugin(), relay, babel(), ], worker: { plugins: () => [relay], }, }); ================================================ FILE: auto-seg/submodules/segment-anything-2/docker-compose.yaml ================================================ services: frontend: image: sam2/frontend build: context: ./demo/frontend dockerfile: frontend.Dockerfile ports: - 7262:80 backend: image: sam2/backend build: context: . 
dockerfile: backend.Dockerfile ports: - 7263:5000 volumes: - ./demo/data/:/data/:rw environment: - SERVER_ENVIRONMENT=DEV - GUNICORN_WORKERS=1 # Inference API needs to have at least 2 threads to handle an incoming # parallel cancel propagation request - GUNICORN_THREADS=2 - GUNICORN_PORT=5000 - API_URL=http://localhost:7263 - DEFAULT_VIDEO_PATH=gallery/05_default_juggle.mp4 # # ffmpeg/video encode settings - FFMPEG_NUM_THREADS=1 - VIDEO_ENCODE_CODEC=libx264 - VIDEO_ENCODE_CRF=23 - VIDEO_ENCODE_FPS=24 - VIDEO_ENCODE_MAX_WIDTH=1280 - VIDEO_ENCODE_MAX_HEIGHT=720 - VIDEO_ENCODE_VERBOSE=False deploy: resources: reservations: devices: - driver: nvidia count: 1 capabilities: [gpu] ================================================ FILE: auto-seg/submodules/segment-anything-2/pyproject.toml ================================================ [build-system] requires = [ "setuptools>=61.0", "torch>=2.3.1", ] build-backend = "setuptools.build_meta" ================================================ FILE: auto-seg/submodules/segment-anything-2/sam2/__init__.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. from hydra import initialize_config_module from hydra.core.global_hydra import GlobalHydra if not GlobalHydra.instance().is_initialized(): initialize_config_module("sam2", version_base="1.2") ================================================ FILE: auto-seg/submodules/segment-anything-2/sam2/automatic_mask_generator.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. 
# Adapted from https://github.com/facebookresearch/segment-anything/blob/main/segment_anything/automatic_mask_generator.py from typing import Any, Dict, List, Optional, Tuple import numpy as np import torch from torchvision.ops.boxes import batched_nms, box_area # type: ignore from sam2.modeling.sam2_base import SAM2Base from sam2.sam2_image_predictor import SAM2ImagePredictor from sam2.utils.amg import ( area_from_rle, batch_iterator, batched_mask_to_box, box_xyxy_to_xywh, build_all_layer_point_grids, calculate_stability_score, coco_encode_rle, generate_crop_boxes, is_box_near_crop_edge, mask_to_rle_pytorch, MaskData, remove_small_regions, rle_to_mask, uncrop_boxes_xyxy, uncrop_masks, uncrop_points, ) class SAM2AutomaticMaskGenerator: def __init__( self, model: SAM2Base, points_per_side: Optional[int] = 32, points_per_batch: int = 64, pred_iou_thresh: float = 0.8, stability_score_thresh: float = 0.95, stability_score_offset: float = 1.0, mask_threshold: float = 0.0, box_nms_thresh: float = 0.7, crop_n_layers: int = 0, crop_nms_thresh: float = 0.7, crop_overlap_ratio: float = 512 / 1500, crop_n_points_downscale_factor: int = 1, point_grids: Optional[List[np.ndarray]] = None, min_mask_region_area: int = 0, output_mode: str = "binary_mask", use_m2m: bool = False, multimask_output: bool = True, **kwargs, ) -> None: """ Using a SAM 2 model, generates masks for the entire image. Generates a grid of point prompts over the image, then filters low quality and duplicate masks. The default settings are chosen for SAM 2 with a HieraL backbone. Arguments: model (Sam): The SAM 2 model to use for mask prediction. points_per_side (int or None): The number of points to be sampled along one side of the image. The total number of points is points_per_side**2. If None, 'point_grids' must provide explicit point sampling. points_per_batch (int): Sets the number of points run simultaneously by the model. Higher numbers may be faster but use more GPU memory. 
pred_iou_thresh (float): A filtering threshold in [0,1], using the model's predicted mask quality. stability_score_thresh (float): A filtering threshold in [0,1], using the stability of the mask under changes to the cutoff used to binarize the model's mask predictions. stability_score_offset (float): The amount to shift the cutoff when calculating the stability score. mask_threshold (float): Threshold for binarizing the mask logits. box_nms_thresh (float): The box IoU cutoff used by non-maximal suppression to filter duplicate masks. crop_n_layers (int): If >0, mask prediction will be run again on crops of the image. Sets the number of layers to run, where each layer has 2**i_layer number of image crops. crop_nms_thresh (float): The box IoU cutoff used by non-maximal suppression to filter duplicate masks between different crops. crop_overlap_ratio (float): Sets the degree to which crops overlap. In the first crop layer, crops will overlap by this fraction of the image length. Later layers with more crops scale down this overlap. crop_n_points_downscale_factor (int): The number of points-per-side sampled in layer n is scaled down by crop_n_points_downscale_factor**n. point_grids (list(np.ndarray) or None): A list over explicit grids of points used for sampling, normalized to [0,1]. The nth grid in the list is used in the nth crop layer. Exclusive with points_per_side. min_mask_region_area (int): If >0, postprocessing will be applied to remove disconnected regions and holes in masks with area smaller than min_mask_region_area. Requires opencv. output_mode (str): The form masks are returned in. Can be 'binary_mask', 'uncompressed_rle', or 'coco_rle'. 'coco_rle' requires pycocotools. For large resolutions, 'binary_mask' may consume large amounts of memory. use_m2m (bool): Whether to add a one-step refinement using previous mask predictions. multimask_output (bool): Whether to output multimask at each point of the grid. 
""" assert (points_per_side is None) != ( point_grids is None ), "Exactly one of points_per_side or point_grid must be provided." if points_per_side is not None: self.point_grids = build_all_layer_point_grids( points_per_side, crop_n_layers, crop_n_points_downscale_factor, ) elif point_grids is not None: self.point_grids = point_grids else: raise ValueError("Can't have both points_per_side and point_grid be None.") assert output_mode in [ "binary_mask", "uncompressed_rle", "coco_rle", ], f"Unknown output_mode {output_mode}." if output_mode == "coco_rle": try: from pycocotools import mask as mask_utils # type: ignore # noqa: F401 except ImportError as e: print("Please install pycocotools") raise e self.predictor = SAM2ImagePredictor( model, max_hole_area=min_mask_region_area, max_sprinkle_area=min_mask_region_area, ) self.points_per_batch = points_per_batch self.pred_iou_thresh = pred_iou_thresh self.stability_score_thresh = stability_score_thresh self.stability_score_offset = stability_score_offset self.mask_threshold = mask_threshold self.box_nms_thresh = box_nms_thresh self.crop_n_layers = crop_n_layers self.crop_nms_thresh = crop_nms_thresh self.crop_overlap_ratio = crop_overlap_ratio self.crop_n_points_downscale_factor = crop_n_points_downscale_factor self.min_mask_region_area = min_mask_region_area self.output_mode = output_mode self.use_m2m = use_m2m self.multimask_output = multimask_output @classmethod def from_pretrained(cls, model_id: str, **kwargs) -> "SAM2AutomaticMaskGenerator": """ Load a pretrained model from the Hugging Face hub. Arguments: model_id (str): The Hugging Face repository ID. **kwargs: Additional arguments to pass to the model constructor. Returns: (SAM2AutomaticMaskGenerator): The loaded model. """ from sam2.build_sam import build_sam2_hf sam_model = build_sam2_hf(model_id, **kwargs) return cls(sam_model, **kwargs) @torch.no_grad() def generate(self, image: np.ndarray) -> List[Dict[str, Any]]: """ Generates masks for the given image. 
Arguments: image (np.ndarray): The image to generate masks for, in HWC uint8 format. Returns: list(dict(str, any)): A list over records for masks. Each record is a dict containing the following keys: segmentation (dict(str, any) or np.ndarray): The mask. If output_mode='binary_mask', is an array of shape HW. Otherwise, is a dictionary containing the RLE. bbox (list(float)): The box around the mask, in XYWH format. area (int): The area in pixels of the mask. predicted_iou (float): The model's own prediction of the mask's quality. This is filtered by the pred_iou_thresh parameter. point_coords (list(list(float))): The point coordinates input to the model to generate this mask. stability_score (float): A measure of the mask's quality. This is filtered on using the stability_score_thresh parameter. crop_box (list(float)): The crop of the image used to generate the mask, given in XYWH format. """ # Generate masks mask_data = self._generate_masks(image) # Encode masks if self.output_mode == "coco_rle": mask_data["segmentations"] = [ coco_encode_rle(rle) for rle in mask_data["rles"] ] elif self.output_mode == "binary_mask": mask_data["segmentations"] = [rle_to_mask(rle) for rle in mask_data["rles"]] else: mask_data["segmentations"] = mask_data["rles"] # Write mask records curr_anns = [] for idx in range(len(mask_data["segmentations"])): ann = { "segmentation": mask_data["segmentations"][idx], "area": area_from_rle(mask_data["rles"][idx]), "bbox": box_xyxy_to_xywh(mask_data["boxes"][idx]).tolist(), "predicted_iou": mask_data["iou_preds"][idx].item(), "point_coords": [mask_data["points"][idx].tolist()], "stability_score": mask_data["stability_score"][idx].item(), "crop_box": box_xyxy_to_xywh(mask_data["crop_boxes"][idx]).tolist(), } curr_anns.append(ann) return curr_anns def _generate_masks(self, image: np.ndarray) -> MaskData: orig_size = image.shape[:2] crop_boxes, layer_idxs = generate_crop_boxes( orig_size, self.crop_n_layers, self.crop_overlap_ratio ) # Iterate over 
image crops data = MaskData() for crop_box, layer_idx in zip(crop_boxes, layer_idxs): crop_data = self._process_crop(image, crop_box, layer_idx, orig_size) data.cat(crop_data) # Remove duplicate masks between crops if len(crop_boxes) > 1: # Prefer masks from smaller crops scores = 1 / box_area(data["crop_boxes"]) scores = scores.to(data["boxes"].device) keep_by_nms = batched_nms( data["boxes"].float(), scores, torch.zeros_like(data["boxes"][:, 0]), # categories iou_threshold=self.crop_nms_thresh, ) data.filter(keep_by_nms) data.to_numpy() return data def _process_crop( self, image: np.ndarray, crop_box: List[int], crop_layer_idx: int, orig_size: Tuple[int, ...], ) -> MaskData: # Crop the image and calculate embeddings x0, y0, x1, y1 = crop_box cropped_im = image[y0:y1, x0:x1, :] cropped_im_size = cropped_im.shape[:2] self.predictor.set_image(cropped_im) # Get points for this crop points_scale = np.array(cropped_im_size)[None, ::-1] points_for_image = self.point_grids[crop_layer_idx] * points_scale # Generate masks for this crop in batches data = MaskData() for (points,) in batch_iterator(self.points_per_batch, points_for_image): batch_data = self._process_batch( points, cropped_im_size, crop_box, orig_size, normalize=True ) data.cat(batch_data) del batch_data self.predictor.reset_predictor() # Remove duplicates within this crop. 
keep_by_nms = batched_nms( data["boxes"].float(), data["iou_preds"], torch.zeros_like(data["boxes"][:, 0]), # categories iou_threshold=self.box_nms_thresh, ) data.filter(keep_by_nms) # Return to the original image frame data["boxes"] = uncrop_boxes_xyxy(data["boxes"], crop_box) data["points"] = uncrop_points(data["points"], crop_box) data["crop_boxes"] = torch.tensor([crop_box for _ in range(len(data["rles"]))]) return data def _process_batch( self, points: np.ndarray, im_size: Tuple[int, ...], crop_box: List[int], orig_size: Tuple[int, ...], normalize=False, ) -> MaskData: orig_h, orig_w = orig_size # Run model on this batch points = torch.as_tensor( points, dtype=torch.float32, device=self.predictor.device ) in_points = self.predictor._transforms.transform_coords( points, normalize=normalize, orig_hw=im_size ) in_labels = torch.ones( in_points.shape[0], dtype=torch.int, device=in_points.device ) masks, iou_preds, low_res_masks = self.predictor._predict( in_points[:, None, :], in_labels[:, None], multimask_output=self.multimask_output, return_logits=True, ) # Serialize predictions and store in MaskData data = MaskData( masks=masks.flatten(0, 1), iou_preds=iou_preds.flatten(0, 1), points=points.repeat_interleave(masks.shape[1], dim=0), low_res_masks=low_res_masks.flatten(0, 1), ) del masks if not self.use_m2m: # Filter by predicted IoU if self.pred_iou_thresh > 0.0: keep_mask = data["iou_preds"] > self.pred_iou_thresh data.filter(keep_mask) # Calculate and filter by stability score data["stability_score"] = calculate_stability_score( data["masks"], self.mask_threshold, self.stability_score_offset ) if self.stability_score_thresh > 0.0: keep_mask = data["stability_score"] >= self.stability_score_thresh data.filter(keep_mask) else: # One step refinement using previous mask predictions in_points = self.predictor._transforms.transform_coords( data["points"], normalize=normalize, orig_hw=im_size ) labels = torch.ones( in_points.shape[0], dtype=torch.int, 
device=in_points.device ) masks, ious = self.refine_with_m2m( in_points, labels, data["low_res_masks"], self.points_per_batch ) data["masks"] = masks.squeeze(1) data["iou_preds"] = ious.squeeze(1) if self.pred_iou_thresh > 0.0: keep_mask = data["iou_preds"] > self.pred_iou_thresh data.filter(keep_mask) data["stability_score"] = calculate_stability_score( data["masks"], self.mask_threshold, self.stability_score_offset ) if self.stability_score_thresh > 0.0: keep_mask = data["stability_score"] >= self.stability_score_thresh data.filter(keep_mask) # Threshold masks and calculate boxes data["masks"] = data["masks"] > self.mask_threshold data["boxes"] = batched_mask_to_box(data["masks"]) # Filter boxes that touch crop boundaries keep_mask = ~is_box_near_crop_edge( data["boxes"], crop_box, [0, 0, orig_w, orig_h] ) if not torch.all(keep_mask): data.filter(keep_mask) # Compress to RLE data["masks"] = uncrop_masks(data["masks"], crop_box, orig_h, orig_w) data["rles"] = mask_to_rle_pytorch(data["masks"]) del data["masks"] return data @staticmethod def postprocess_small_regions( mask_data: MaskData, min_area: int, nms_thresh: float ) -> MaskData: """ Removes small disconnected regions and holes in masks, then reruns box NMS to remove any new duplicates. Edits mask_data in place. Requires open-cv as a dependency. 
""" if len(mask_data["rles"]) == 0: return mask_data # Filter small disconnected regions and holes new_masks = [] scores = [] for rle in mask_data["rles"]: mask = rle_to_mask(rle) mask, changed = remove_small_regions(mask, min_area, mode="holes") unchanged = not changed mask, changed = remove_small_regions(mask, min_area, mode="islands") unchanged = unchanged and not changed new_masks.append(torch.as_tensor(mask).unsqueeze(0)) # Give score=0 to changed masks and score=1 to unchanged masks # so NMS will prefer ones that didn't need postprocessing scores.append(float(unchanged)) # Recalculate boxes and remove any new duplicates masks = torch.cat(new_masks, dim=0) boxes = batched_mask_to_box(masks) keep_by_nms = batched_nms( boxes.float(), torch.as_tensor(scores), torch.zeros_like(boxes[:, 0]), # categories iou_threshold=nms_thresh, ) # Only recalculate RLEs for masks that have changed for i_mask in keep_by_nms: if scores[i_mask] == 0.0: mask_torch = masks[i_mask].unsqueeze(0) mask_data["rles"][i_mask] = mask_to_rle_pytorch(mask_torch)[0] mask_data["boxes"][i_mask] = boxes[i_mask] # update res directly mask_data.filter(keep_by_nms) return mask_data def refine_with_m2m(self, points, point_labels, low_res_masks, points_per_batch): new_masks = [] new_iou_preds = [] for cur_points, cur_point_labels, low_res_mask in batch_iterator( points_per_batch, points, point_labels, low_res_masks ): best_masks, best_iou_preds, _ = self.predictor._predict( cur_points[:, None, :], cur_point_labels[:, None], mask_input=low_res_mask[:, None, :], multimask_output=False, return_logits=True, ) new_masks.append(best_masks) new_iou_preds.append(best_iou_preds) masks = torch.cat(new_masks, dim=0) return masks, torch.cat(new_iou_preds, dim=0) ================================================ FILE: auto-seg/submodules/segment-anything-2/sam2/build_sam.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. 
# This source code is licensed under the # LICENSE file in the root directory of this source tree. import logging import os import torch from hydra import compose from hydra.utils import instantiate from omegaconf import OmegaConf import sam2 # Check if the user is running Python from the parent directory of the sam2 repo # (i.e. the directory where this repo is cloned into) -- this is not supported since # it could shadow the sam2 package and cause issues. if os.path.isdir(os.path.join(sam2.__path__[0], "sam2")): # If the user has "sam2/sam2" in their path, they are likely importing the repo itself # as "sam2" rather than importing the "sam2" python package (i.e. "sam2/sam2" directory). # This typically happens because the user is running Python from the parent directory # that contains the sam2 repo they cloned. raise RuntimeError( "You're likely running Python from the parent directory of the sam2 repository " "(i.e. the directory where https://github.com/facebookresearch/sam2 is cloned into). " "This is not supported since the `sam2` Python package could be shadowed by the " "repository name (the repository is also named `sam2` and contains the Python package " "in `sam2/sam2`). Please run Python from another directory (e.g. from the repo dir " "rather than its parent dir, or from your home directory) after installing SAM 2." 
) HF_MODEL_ID_TO_FILENAMES = { "facebook/sam2-hiera-tiny": ( "configs/sam2/sam2_hiera_t.yaml", "sam2_hiera_tiny.pt", ), "facebook/sam2-hiera-small": ( "configs/sam2/sam2_hiera_s.yaml", "sam2_hiera_small.pt", ), "facebook/sam2-hiera-base-plus": ( "configs/sam2/sam2_hiera_b+.yaml", "sam2_hiera_base_plus.pt", ), "facebook/sam2-hiera-large": ( "configs/sam2/sam2_hiera_l.yaml", "sam2_hiera_large.pt", ), "facebook/sam2.1-hiera-tiny": ( "configs/sam2.1/sam2.1_hiera_t.yaml", "sam2.1_hiera_tiny.pt", ), "facebook/sam2.1-hiera-small": ( "configs/sam2.1/sam2.1_hiera_s.yaml", "sam2.1_hiera_small.pt", ), "facebook/sam2.1-hiera-base-plus": ( "configs/sam2.1/sam2.1_hiera_b+.yaml", "sam2.1_hiera_base_plus.pt", ), "facebook/sam2.1-hiera-large": ( "configs/sam2.1/sam2.1_hiera_l.yaml", "sam2.1_hiera_large.pt", ), } def build_sam2( config_file, ckpt_path=None, device="cuda", mode="eval", hydra_overrides_extra=[], apply_postprocessing=True, **kwargs, ): if apply_postprocessing: hydra_overrides_extra = hydra_overrides_extra.copy() hydra_overrides_extra += [ # dynamically fall back to multi-mask if the single mask is not stable "++model.sam_mask_decoder_extra_args.dynamic_multimask_via_stability=true", "++model.sam_mask_decoder_extra_args.dynamic_multimask_stability_delta=0.05", "++model.sam_mask_decoder_extra_args.dynamic_multimask_stability_thresh=0.98", ] # Read config and init model cfg = compose(config_name=config_file, overrides=hydra_overrides_extra) OmegaConf.resolve(cfg) model = instantiate(cfg.model, _recursive_=True) _load_checkpoint(model, ckpt_path) model = model.to(device) if mode == "eval": model.eval() return model def build_sam2_video_predictor( config_file, ckpt_path=None, device="cuda", mode="eval", hydra_overrides_extra=[], apply_postprocessing=True, **kwargs, ): hydra_overrides = [ "++model._target_=sam2.sam2_video_predictor.SAM2VideoPredictor", ] if apply_postprocessing: hydra_overrides_extra = hydra_overrides_extra.copy() hydra_overrides_extra += [ # dynamically 
fall back to multi-mask if the single mask is not stable "++model.sam_mask_decoder_extra_args.dynamic_multimask_via_stability=true", "++model.sam_mask_decoder_extra_args.dynamic_multimask_stability_delta=0.05", "++model.sam_mask_decoder_extra_args.dynamic_multimask_stability_thresh=0.98", # binarize the sigmoid mask logits on interacted frames with clicks in the memory encoder so that the encoded masks are exactly as what users see from clicking "++model.binarize_mask_from_pts_for_mem_enc=true", # fill small holes in the low-res masks up to `fill_hole_area` (before resizing them to the original video resolution) "++model.fill_hole_area=8", ] hydra_overrides.extend(hydra_overrides_extra) # Read config and init model cfg = compose(config_name=config_file, overrides=hydra_overrides) OmegaConf.resolve(cfg) model = instantiate(cfg.model, _recursive_=True) _load_checkpoint(model, ckpt_path) model = model.to(device) if mode == "eval": model.eval() return model def _hf_download(model_id): from huggingface_hub import hf_hub_download config_name, checkpoint_name = HF_MODEL_ID_TO_FILENAMES[model_id] ckpt_path = hf_hub_download(repo_id=model_id, filename=checkpoint_name) return config_name, ckpt_path def build_sam2_hf(model_id, **kwargs): config_name, ckpt_path = _hf_download(model_id) return build_sam2(config_file=config_name, ckpt_path=ckpt_path, **kwargs) def build_sam2_video_predictor_hf(model_id, **kwargs): config_name, ckpt_path = _hf_download(model_id) return build_sam2_video_predictor( config_file=config_name, ckpt_path=ckpt_path, **kwargs ) def _load_checkpoint(model, ckpt_path): if ckpt_path is not None: sd = torch.load(ckpt_path, map_location="cpu", weights_only=True)["model"] missing_keys, unexpected_keys = model.load_state_dict(sd) if missing_keys: logging.error(missing_keys) raise RuntimeError() if unexpected_keys: logging.error(unexpected_keys) raise RuntimeError() logging.info("Loaded checkpoint successfully") ================================================ FILE: 
auto-seg/submodules/segment-anything-2/sam2/configs/sam2/sam2_hiera_b+.yaml ================================================ # @package _global_ # Model model: _target_: sam2.modeling.sam2_base.SAM2Base image_encoder: _target_: sam2.modeling.backbones.image_encoder.ImageEncoder scalp: 1 trunk: _target_: sam2.modeling.backbones.hieradet.Hiera embed_dim: 112 num_heads: 2 neck: _target_: sam2.modeling.backbones.image_encoder.FpnNeck position_encoding: _target_: sam2.modeling.position_encoding.PositionEmbeddingSine num_pos_feats: 256 normalize: true scale: null temperature: 10000 d_model: 256 backbone_channel_list: [896, 448, 224, 112] fpn_top_down_levels: [2, 3] # output level 0 and 1 directly use the backbone features fpn_interp_model: nearest memory_attention: _target_: sam2.modeling.memory_attention.MemoryAttention d_model: 256 pos_enc_at_input: true layer: _target_: sam2.modeling.memory_attention.MemoryAttentionLayer activation: relu dim_feedforward: 2048 dropout: 0.1 pos_enc_at_attn: false self_attention: _target_: sam2.modeling.sam.transformer.RoPEAttention rope_theta: 10000.0 feat_sizes: [32, 32] embedding_dim: 256 num_heads: 1 downsample_rate: 1 dropout: 0.1 d_model: 256 pos_enc_at_cross_attn_keys: true pos_enc_at_cross_attn_queries: false cross_attention: _target_: sam2.modeling.sam.transformer.RoPEAttention rope_theta: 10000.0 feat_sizes: [32, 32] rope_k_repeat: True embedding_dim: 256 num_heads: 1 downsample_rate: 1 dropout: 0.1 kv_in_dim: 64 num_layers: 4 memory_encoder: _target_: sam2.modeling.memory_encoder.MemoryEncoder out_dim: 64 position_encoding: _target_: sam2.modeling.position_encoding.PositionEmbeddingSine num_pos_feats: 64 normalize: true scale: null temperature: 10000 mask_downsampler: _target_: sam2.modeling.memory_encoder.MaskDownSampler kernel_size: 3 stride: 2 padding: 1 fuser: _target_: sam2.modeling.memory_encoder.Fuser layer: _target_: sam2.modeling.memory_encoder.CXBlock dim: 256 kernel_size: 7 padding: 3 layer_scale_init_value: 1e-6 
        use_dwconv: True # depth-wise convs
      num_layers: 2

  num_maskmem: 7
  image_size: 1024
  # apply scaled sigmoid on mask logits for memory encoder, and directly feed input mask as output mask
  sigmoid_scale_for_mem_enc: 20.0
  sigmoid_bias_for_mem_enc: -10.0
  use_mask_input_as_output_without_sam: true
  # Memory
  directly_add_no_mem_embed: true
  # use high-resolution feature map in the SAM mask decoder
  use_high_res_features_in_sam: true
  # output 3 masks on the first click on initial conditioning frames
  multimask_output_in_sam: true
  # SAM heads
  iou_prediction_use_sigmoid: True
  # cross-attend to object pointers from other frames (based on SAM output tokens) in the encoder
  use_obj_ptrs_in_encoder: true
  add_tpos_enc_to_obj_ptrs: false
  only_obj_ptrs_in_the_past_for_eval: true
  # object occlusion prediction
  pred_obj_scores: true
  pred_obj_scores_mlp: true
  fixed_no_obj_ptr: true
  # multimask tracking settings
  multimask_output_for_tracking: true
  use_multimask_token_for_obj_ptr: true
  multimask_min_pt_num: 0
  multimask_max_pt_num: 1
  use_mlp_for_obj_ptr_proj: true
  # Compilation flag
  compile_image_encoder: False

================================================
FILE: auto-seg/submodules/segment-anything-2/sam2/configs/sam2/sam2_hiera_l.yaml
================================================
# @package _global_

# Model
model:
  _target_: sam2.modeling.sam2_base.SAM2Base
  image_encoder:
    _target_: sam2.modeling.backbones.image_encoder.ImageEncoder
    scalp: 1
    trunk:
      _target_: sam2.modeling.backbones.hieradet.Hiera
      embed_dim: 144
      num_heads: 2
      stages: [2, 6, 36, 4]
      global_att_blocks: [23, 33, 43]
      window_pos_embed_bkg_spatial_size: [7, 7]
      window_spec: [8, 4, 16, 8]
    neck:
      _target_: sam2.modeling.backbones.image_encoder.FpnNeck
      position_encoding:
        _target_: sam2.modeling.position_encoding.PositionEmbeddingSine
        num_pos_feats: 256
        normalize: true
        scale: null
        temperature: 10000
      d_model: 256
      backbone_channel_list: [1152, 576, 288, 144]
      fpn_top_down_levels: [2, 3] # output level 0 and 1 directly use the backbone features
      fpn_interp_model: nearest

  memory_attention:
    _target_: sam2.modeling.memory_attention.MemoryAttention
    d_model: 256
    pos_enc_at_input: true
    layer:
      _target_: sam2.modeling.memory_attention.MemoryAttentionLayer
      activation: relu
      dim_feedforward: 2048
      dropout: 0.1
      pos_enc_at_attn: false
      self_attention:
        _target_: sam2.modeling.sam.transformer.RoPEAttention
        rope_theta: 10000.0
        feat_sizes: [32, 32]
        embedding_dim: 256
        num_heads: 1
        downsample_rate: 1
        dropout: 0.1
      d_model: 256
      pos_enc_at_cross_attn_keys: true
      pos_enc_at_cross_attn_queries: false
      cross_attention:
        _target_: sam2.modeling.sam.transformer.RoPEAttention
        rope_theta: 10000.0
        feat_sizes: [32, 32]
        rope_k_repeat: True
        embedding_dim: 256
        num_heads: 1
        downsample_rate: 1
        dropout: 0.1
        kv_in_dim: 64
    num_layers: 4

  memory_encoder:
    _target_: sam2.modeling.memory_encoder.MemoryEncoder
    out_dim: 64
    position_encoding:
      _target_: sam2.modeling.position_encoding.PositionEmbeddingSine
      num_pos_feats: 64
      normalize: true
      scale: null
      temperature: 10000
    mask_downsampler:
      _target_: sam2.modeling.memory_encoder.MaskDownSampler
      kernel_size: 3
      stride: 2
      padding: 1
    fuser:
      _target_: sam2.modeling.memory_encoder.Fuser
      layer:
        _target_: sam2.modeling.memory_encoder.CXBlock
        dim: 256
        kernel_size: 7
        padding: 3
        layer_scale_init_value: 1e-6
        use_dwconv: True # depth-wise convs
      num_layers: 2

  num_maskmem: 7
  image_size: 1024
  # apply scaled sigmoid on mask logits for memory encoder, and directly feed input mask as output mask
  sigmoid_scale_for_mem_enc: 20.0
  sigmoid_bias_for_mem_enc: -10.0
  use_mask_input_as_output_without_sam: true
  # Memory
  directly_add_no_mem_embed: true
  # use high-resolution feature map in the SAM mask decoder
  use_high_res_features_in_sam: true
  # output 3 masks on the first click on initial conditioning frames
  multimask_output_in_sam: true
  # SAM heads
  iou_prediction_use_sigmoid: True
  # cross-attend to object pointers from other frames (based on SAM output tokens) in the encoder
  use_obj_ptrs_in_encoder: true
  add_tpos_enc_to_obj_ptrs: false
  only_obj_ptrs_in_the_past_for_eval: true
  # object occlusion prediction
  pred_obj_scores: true
  pred_obj_scores_mlp: true
  fixed_no_obj_ptr: true
  # multimask tracking settings
  multimask_output_for_tracking: true
  use_multimask_token_for_obj_ptr: true
  multimask_min_pt_num: 0
  multimask_max_pt_num: 1
  use_mlp_for_obj_ptr_proj: true
  # Compilation flag
  compile_image_encoder: False

================================================
FILE: auto-seg/submodules/segment-anything-2/sam2/configs/sam2/sam2_hiera_s.yaml
================================================
# @package _global_

# Model
model:
  _target_: sam2.modeling.sam2_base.SAM2Base
  image_encoder:
    _target_: sam2.modeling.backbones.image_encoder.ImageEncoder
    scalp: 1
    trunk:
      _target_: sam2.modeling.backbones.hieradet.Hiera
      embed_dim: 96
      num_heads: 1
      stages: [1, 2, 11, 2]
      global_att_blocks: [7, 10, 13]
      window_pos_embed_bkg_spatial_size: [7, 7]
    neck:
      _target_: sam2.modeling.backbones.image_encoder.FpnNeck
      position_encoding:
        _target_: sam2.modeling.position_encoding.PositionEmbeddingSine
        num_pos_feats: 256
        normalize: true
        scale: null
        temperature: 10000
      d_model: 256
      backbone_channel_list: [768, 384, 192, 96]
      fpn_top_down_levels: [2, 3] # output level 0 and 1 directly use the backbone features
      fpn_interp_model: nearest

  memory_attention:
    _target_: sam2.modeling.memory_attention.MemoryAttention
    d_model: 256
    pos_enc_at_input: true
    layer:
      _target_: sam2.modeling.memory_attention.MemoryAttentionLayer
      activation: relu
      dim_feedforward: 2048
      dropout: 0.1
      pos_enc_at_attn: false
      self_attention:
        _target_: sam2.modeling.sam.transformer.RoPEAttention
        rope_theta: 10000.0
        feat_sizes: [32, 32]
        embedding_dim: 256
        num_heads: 1
        downsample_rate: 1
        dropout: 0.1
      d_model: 256
      pos_enc_at_cross_attn_keys: true
      pos_enc_at_cross_attn_queries: false
      cross_attention:
        _target_: sam2.modeling.sam.transformer.RoPEAttention
        rope_theta: 10000.0
        feat_sizes: [32, 32]
        rope_k_repeat: True
        embedding_dim: 256
        num_heads: 1
        downsample_rate: 1
        dropout: 0.1
        kv_in_dim: 64
    num_layers: 4

  memory_encoder:
    _target_: sam2.modeling.memory_encoder.MemoryEncoder
    out_dim: 64
    position_encoding:
      _target_: sam2.modeling.position_encoding.PositionEmbeddingSine
      num_pos_feats: 64
      normalize: true
      scale: null
      temperature: 10000
    mask_downsampler:
      _target_: sam2.modeling.memory_encoder.MaskDownSampler
      kernel_size: 3
      stride: 2
      padding: 1
    fuser:
      _target_: sam2.modeling.memory_encoder.Fuser
      layer:
        _target_: sam2.modeling.memory_encoder.CXBlock
        dim: 256
        kernel_size: 7
        padding: 3
        layer_scale_init_value: 1e-6
        use_dwconv: True # depth-wise convs
      num_layers: 2

  num_maskmem: 7
  image_size: 1024
  # apply scaled sigmoid on mask logits for memory encoder, and directly feed input mask as output mask
  sigmoid_scale_for_mem_enc: 20.0
  sigmoid_bias_for_mem_enc: -10.0
  use_mask_input_as_output_without_sam: true
  # Memory
  directly_add_no_mem_embed: true
  # use high-resolution feature map in the SAM mask decoder
  use_high_res_features_in_sam: true
  # output 3 masks on the first click on initial conditioning frames
  multimask_output_in_sam: true
  # SAM heads
  iou_prediction_use_sigmoid: True
  # cross-attend to object pointers from other frames (based on SAM output tokens) in the encoder
  use_obj_ptrs_in_encoder: true
  add_tpos_enc_to_obj_ptrs: false
  only_obj_ptrs_in_the_past_for_eval: true
  # object occlusion prediction
  pred_obj_scores: true
  pred_obj_scores_mlp: true
  fixed_no_obj_ptr: true
  # multimask tracking settings
  multimask_output_for_tracking: true
  use_multimask_token_for_obj_ptr: true
  multimask_min_pt_num: 0
  multimask_max_pt_num: 1
  use_mlp_for_obj_ptr_proj: true
  # Compilation flag
  compile_image_encoder: False

================================================
FILE: auto-seg/submodules/segment-anything-2/sam2/configs/sam2/sam2_hiera_t.yaml
================================================
# @package _global_

# Model
model:
  _target_: sam2.modeling.sam2_base.SAM2Base
  image_encoder:
    _target_: sam2.modeling.backbones.image_encoder.ImageEncoder
    scalp: 1
    trunk:
      _target_: sam2.modeling.backbones.hieradet.Hiera
      embed_dim: 96
      num_heads: 1
      stages: [1, 2, 7, 2]
      global_att_blocks: [5, 7, 9]
      window_pos_embed_bkg_spatial_size: [7, 7]
    neck:
      _target_: sam2.modeling.backbones.image_encoder.FpnNeck
      position_encoding:
        _target_: sam2.modeling.position_encoding.PositionEmbeddingSine
        num_pos_feats: 256
        normalize: true
        scale: null
        temperature: 10000
      d_model: 256
      backbone_channel_list: [768, 384, 192, 96]
      fpn_top_down_levels: [2, 3] # output level 0 and 1 directly use the backbone features
      fpn_interp_model: nearest

  memory_attention:
    _target_: sam2.modeling.memory_attention.MemoryAttention
    d_model: 256
    pos_enc_at_input: true
    layer:
      _target_: sam2.modeling.memory_attention.MemoryAttentionLayer
      activation: relu
      dim_feedforward: 2048
      dropout: 0.1
      pos_enc_at_attn: false
      self_attention:
        _target_: sam2.modeling.sam.transformer.RoPEAttention
        rope_theta: 10000.0
        feat_sizes: [32, 32]
        embedding_dim: 256
        num_heads: 1
        downsample_rate: 1
        dropout: 0.1
      d_model: 256
      pos_enc_at_cross_attn_keys: true
      pos_enc_at_cross_attn_queries: false
      cross_attention:
        _target_: sam2.modeling.sam.transformer.RoPEAttention
        rope_theta: 10000.0
        feat_sizes: [32, 32]
        rope_k_repeat: True
        embedding_dim: 256
        num_heads: 1
        downsample_rate: 1
        dropout: 0.1
        kv_in_dim: 64
    num_layers: 4

  memory_encoder:
    _target_: sam2.modeling.memory_encoder.MemoryEncoder
    out_dim: 64
    position_encoding:
      _target_: sam2.modeling.position_encoding.PositionEmbeddingSine
      num_pos_feats: 64
      normalize: true
      scale: null
      temperature: 10000
    mask_downsampler:
      _target_: sam2.modeling.memory_encoder.MaskDownSampler
      kernel_size: 3
      stride: 2
      padding: 1
    fuser:
      _target_: sam2.modeling.memory_encoder.Fuser
      layer:
        _target_: sam2.modeling.memory_encoder.CXBlock
        dim: 256
        kernel_size: 7
        padding: 3
        layer_scale_init_value: 1e-6
        use_dwconv: True # depth-wise convs
      num_layers: 2

  num_maskmem: 7
  image_size: 1024
  # apply scaled sigmoid on mask logits for memory encoder, and directly feed input mask as output mask
  # SAM decoder
  sigmoid_scale_for_mem_enc: 20.0
  sigmoid_bias_for_mem_enc: -10.0
  use_mask_input_as_output_without_sam: true
  # Memory
  directly_add_no_mem_embed: true
  # use high-resolution feature map in the SAM mask decoder
  use_high_res_features_in_sam: true
  # output 3 masks on the first click on initial conditioning frames
  multimask_output_in_sam: true
  # SAM heads
  iou_prediction_use_sigmoid: True
  # cross-attend to object pointers from other frames (based on SAM output tokens) in the encoder
  use_obj_ptrs_in_encoder: true
  add_tpos_enc_to_obj_ptrs: false
  only_obj_ptrs_in_the_past_for_eval: true
  # object occlusion prediction
  pred_obj_scores: true
  pred_obj_scores_mlp: true
  fixed_no_obj_ptr: true
  # multimask tracking settings
  multimask_output_for_tracking: true
  use_multimask_token_for_obj_ptr: true
  multimask_min_pt_num: 0
  multimask_max_pt_num: 1
  use_mlp_for_obj_ptr_proj: true
  # Compilation flag
  # HieraT does not currently support compilation, should always be set to False
  compile_image_encoder: False

================================================
FILE: auto-seg/submodules/segment-anything-2/sam2/configs/sam2.1/sam2.1_hiera_b+.yaml
================================================
# @package _global_

# Model
model:
  _target_: sam2.modeling.sam2_base.SAM2Base
  image_encoder:
    _target_: sam2.modeling.backbones.image_encoder.ImageEncoder
    scalp: 1
    trunk:
      _target_: sam2.modeling.backbones.hieradet.Hiera
      embed_dim: 112
      num_heads: 2
    neck:
      _target_: sam2.modeling.backbones.image_encoder.FpnNeck
      position_encoding:
        _target_: sam2.modeling.position_encoding.PositionEmbeddingSine
        num_pos_feats: 256
        normalize: true
        scale: null
        temperature: 10000
      d_model: 256
      backbone_channel_list: [896, 448, 224, 112]
      fpn_top_down_levels: [2, 3] # output level 0 and 1 directly use the backbone features
      fpn_interp_model: nearest

  memory_attention:
    _target_: sam2.modeling.memory_attention.MemoryAttention
    d_model: 256
    pos_enc_at_input: true
    layer:
      _target_: sam2.modeling.memory_attention.MemoryAttentionLayer
      activation: relu
      dim_feedforward: 2048
      dropout: 0.1
      pos_enc_at_attn: false
      self_attention:
        _target_: sam2.modeling.sam.transformer.RoPEAttention
        rope_theta: 10000.0
        feat_sizes: [32, 32]
        embedding_dim: 256
        num_heads: 1
        downsample_rate: 1
        dropout: 0.1
      d_model: 256
      pos_enc_at_cross_attn_keys: true
      pos_enc_at_cross_attn_queries: false
      cross_attention:
        _target_: sam2.modeling.sam.transformer.RoPEAttention
        rope_theta: 10000.0
        feat_sizes: [32, 32]
        rope_k_repeat: True
        embedding_dim: 256
        num_heads: 1
        downsample_rate: 1
        dropout: 0.1
        kv_in_dim: 64
    num_layers: 4

  memory_encoder:
    _target_: sam2.modeling.memory_encoder.MemoryEncoder
    out_dim: 64
    position_encoding:
      _target_: sam2.modeling.position_encoding.PositionEmbeddingSine
      num_pos_feats: 64
      normalize: true
      scale: null
      temperature: 10000
    mask_downsampler:
      _target_: sam2.modeling.memory_encoder.MaskDownSampler
      kernel_size: 3
      stride: 2
      padding: 1
    fuser:
      _target_: sam2.modeling.memory_encoder.Fuser
      layer:
        _target_: sam2.modeling.memory_encoder.CXBlock
        dim: 256
        kernel_size: 7
        padding: 3
        layer_scale_init_value: 1e-6
        use_dwconv: True # depth-wise convs
      num_layers: 2

  num_maskmem: 7
  image_size: 1024
  # apply scaled sigmoid on mask logits for memory encoder, and directly feed input mask as output mask
  sigmoid_scale_for_mem_enc: 20.0
  sigmoid_bias_for_mem_enc: -10.0
  use_mask_input_as_output_without_sam: true
  # Memory
  directly_add_no_mem_embed: true
  no_obj_embed_spatial: true
  # use high-resolution feature map in the SAM mask decoder
  use_high_res_features_in_sam: true
  # output 3 masks on the first click on initial conditioning frames
  multimask_output_in_sam: true
  # SAM heads
  iou_prediction_use_sigmoid: True
  # cross-attend to object pointers from other frames (based on SAM output tokens) in the encoder
  use_obj_ptrs_in_encoder: true
  add_tpos_enc_to_obj_ptrs: true
  proj_tpos_enc_in_obj_ptrs: true
  use_signed_tpos_enc_to_obj_ptrs: true
  only_obj_ptrs_in_the_past_for_eval: true
  # object occlusion prediction
  pred_obj_scores: true
  pred_obj_scores_mlp: true
  fixed_no_obj_ptr: true
  # multimask tracking settings
  multimask_output_for_tracking: true
  use_multimask_token_for_obj_ptr: true
  multimask_min_pt_num: 0
  multimask_max_pt_num: 1
  use_mlp_for_obj_ptr_proj: true
  # Compilation flag
  compile_image_encoder: False

================================================
FILE: auto-seg/submodules/segment-anything-2/sam2/configs/sam2.1/sam2.1_hiera_l.yaml
================================================
# @package _global_

# Model
model:
  _target_: sam2.modeling.sam2_base.SAM2Base
  image_encoder:
    _target_: sam2.modeling.backbones.image_encoder.ImageEncoder
    scalp: 1
    trunk:
      _target_: sam2.modeling.backbones.hieradet.Hiera
      embed_dim: 144
      num_heads: 2
      stages: [2, 6, 36, 4]
      global_att_blocks: [23, 33, 43]
      window_pos_embed_bkg_spatial_size: [7, 7]
      window_spec: [8, 4, 16, 8]
    neck:
      _target_: sam2.modeling.backbones.image_encoder.FpnNeck
      position_encoding:
        _target_: sam2.modeling.position_encoding.PositionEmbeddingSine
        num_pos_feats: 256
        normalize: true
        scale: null
        temperature: 10000
      d_model: 256
      backbone_channel_list: [1152, 576, 288, 144]
      fpn_top_down_levels: [2, 3] # output level 0 and 1 directly use the backbone features
      fpn_interp_model: nearest

  memory_attention:
    _target_: sam2.modeling.memory_attention.MemoryAttention
    d_model: 256
    pos_enc_at_input: true
    layer:
      _target_: sam2.modeling.memory_attention.MemoryAttentionLayer
      activation: relu
      dim_feedforward: 2048
      dropout: 0.1
      pos_enc_at_attn: false
      self_attention:
        _target_: sam2.modeling.sam.transformer.RoPEAttention
        rope_theta: 10000.0
        feat_sizes: [32, 32]
        embedding_dim: 256
        num_heads: 1
        downsample_rate: 1
        dropout: 0.1
      d_model: 256
      pos_enc_at_cross_attn_keys: true
      pos_enc_at_cross_attn_queries: false
      cross_attention:
        _target_: sam2.modeling.sam.transformer.RoPEAttention
        rope_theta: 10000.0
        feat_sizes: [32, 32]
        rope_k_repeat: True
        embedding_dim: 256
        num_heads: 1
        downsample_rate: 1
        dropout: 0.1
        kv_in_dim: 64
    num_layers: 4

  memory_encoder:
    _target_: sam2.modeling.memory_encoder.MemoryEncoder
    out_dim: 64
    position_encoding:
      _target_: sam2.modeling.position_encoding.PositionEmbeddingSine
      num_pos_feats: 64
      normalize: true
      scale: null
      temperature: 10000
    mask_downsampler:
      _target_: sam2.modeling.memory_encoder.MaskDownSampler
      kernel_size: 3
      stride: 2
      padding: 1
    fuser:
      _target_: sam2.modeling.memory_encoder.Fuser
      layer:
        _target_: sam2.modeling.memory_encoder.CXBlock
        dim: 256
        kernel_size: 7
        padding: 3
        layer_scale_init_value: 1e-6
        use_dwconv: True # depth-wise convs
      num_layers: 2

  num_maskmem: 7
  image_size: 1024
  # apply scaled sigmoid on mask logits for memory encoder, and directly feed input mask as output mask
  sigmoid_scale_for_mem_enc: 20.0
  sigmoid_bias_for_mem_enc: -10.0
  use_mask_input_as_output_without_sam: true
  # Memory
  directly_add_no_mem_embed: true
  no_obj_embed_spatial: true
  # use high-resolution feature map in the SAM mask decoder
  use_high_res_features_in_sam: true
  # output 3 masks on the first click on initial conditioning frames
  multimask_output_in_sam: true
  # SAM heads
  iou_prediction_use_sigmoid: True
  # cross-attend to object pointers from other frames (based on SAM output tokens) in the encoder
  use_obj_ptrs_in_encoder: true
  add_tpos_enc_to_obj_ptrs: true
  proj_tpos_enc_in_obj_ptrs: true
  use_signed_tpos_enc_to_obj_ptrs: true
  only_obj_ptrs_in_the_past_for_eval: true
  # object occlusion prediction
  pred_obj_scores: true
  pred_obj_scores_mlp: true
  fixed_no_obj_ptr: true
  # multimask tracking settings
  multimask_output_for_tracking: true
  use_multimask_token_for_obj_ptr: true
  multimask_min_pt_num: 0
  multimask_max_pt_num: 1
  use_mlp_for_obj_ptr_proj: true
  # Compilation flag
  compile_image_encoder: False

================================================
FILE: auto-seg/submodules/segment-anything-2/sam2/configs/sam2.1/sam2.1_hiera_s.yaml
================================================
# @package _global_

# Model
model:
  _target_: sam2.modeling.sam2_base.SAM2Base
  image_encoder:
    _target_: sam2.modeling.backbones.image_encoder.ImageEncoder
    scalp: 1
    trunk:
      _target_: sam2.modeling.backbones.hieradet.Hiera
      embed_dim: 96
      num_heads: 1
      stages: [1, 2, 11, 2]
      global_att_blocks: [7, 10, 13]
      window_pos_embed_bkg_spatial_size: [7, 7]
    neck:
      _target_: sam2.modeling.backbones.image_encoder.FpnNeck
      position_encoding:
        _target_: sam2.modeling.position_encoding.PositionEmbeddingSine
        num_pos_feats: 256
        normalize: true
        scale: null
        temperature: 10000
      d_model: 256
      backbone_channel_list: [768, 384, 192, 96]
      fpn_top_down_levels: [2, 3] # output level 0 and 1 directly use the backbone features
      fpn_interp_model: nearest

  memory_attention:
    _target_: sam2.modeling.memory_attention.MemoryAttention
    d_model: 256
    pos_enc_at_input: true
    layer:
      _target_: sam2.modeling.memory_attention.MemoryAttentionLayer
      activation: relu
      dim_feedforward: 2048
      dropout: 0.1
      pos_enc_at_attn: false
      self_attention:
        _target_: sam2.modeling.sam.transformer.RoPEAttention
        rope_theta: 10000.0
        feat_sizes: [32, 32]
        embedding_dim: 256
        num_heads: 1
        downsample_rate: 1
        dropout: 0.1
      d_model: 256
      pos_enc_at_cross_attn_keys: true
      pos_enc_at_cross_attn_queries: false
      cross_attention:
        _target_: sam2.modeling.sam.transformer.RoPEAttention
        rope_theta: 10000.0
        feat_sizes: [32, 32]
        rope_k_repeat: True
        embedding_dim: 256
        num_heads: 1
        downsample_rate: 1
        dropout: 0.1
        kv_in_dim: 64
    num_layers: 4

  memory_encoder:
    _target_: sam2.modeling.memory_encoder.MemoryEncoder
    out_dim: 64
    position_encoding:
      _target_: sam2.modeling.position_encoding.PositionEmbeddingSine
      num_pos_feats: 64
      normalize: true
      scale: null
      temperature: 10000
    mask_downsampler:
      _target_: sam2.modeling.memory_encoder.MaskDownSampler
      kernel_size: 3
      stride: 2
      padding: 1
    fuser:
      _target_: sam2.modeling.memory_encoder.Fuser
      layer:
        _target_: sam2.modeling.memory_encoder.CXBlock
        dim: 256
        kernel_size: 7
        padding: 3
        layer_scale_init_value: 1e-6
        use_dwconv: True # depth-wise convs
      num_layers: 2

  num_maskmem: 7
  image_size: 1024
  # apply scaled sigmoid on mask logits for memory encoder, and directly feed input mask as output mask
  sigmoid_scale_for_mem_enc: 20.0
  sigmoid_bias_for_mem_enc: -10.0
  use_mask_input_as_output_without_sam: true
  # Memory
  directly_add_no_mem_embed: true
  no_obj_embed_spatial: true
  # use high-resolution feature map in the SAM mask decoder
  use_high_res_features_in_sam: true
  # output 3 masks on the first click on initial conditioning frames
  multimask_output_in_sam: true
  # SAM heads
  iou_prediction_use_sigmoid: True
  # cross-attend to object pointers from other frames (based on SAM output tokens) in the encoder
  use_obj_ptrs_in_encoder: true
  add_tpos_enc_to_obj_ptrs: true
  proj_tpos_enc_in_obj_ptrs: true
  use_signed_tpos_enc_to_obj_ptrs: true
  only_obj_ptrs_in_the_past_for_eval: true
  # object occlusion prediction
  pred_obj_scores: true
  pred_obj_scores_mlp: true
  fixed_no_obj_ptr: true
  # multimask tracking settings
  multimask_output_for_tracking: true
  use_multimask_token_for_obj_ptr: true
  multimask_min_pt_num: 0
  multimask_max_pt_num: 1
  use_mlp_for_obj_ptr_proj: true
  # Compilation flag
  compile_image_encoder: False

================================================
FILE: auto-seg/submodules/segment-anything-2/sam2/configs/sam2.1/sam2.1_hiera_t.yaml
================================================
# @package _global_

# Model
model:
  _target_: sam2.modeling.sam2_base.SAM2Base
  image_encoder:
    _target_: sam2.modeling.backbones.image_encoder.ImageEncoder
    scalp: 1
    trunk:
      _target_: sam2.modeling.backbones.hieradet.Hiera
      embed_dim: 96
      num_heads: 1
      stages: [1, 2, 7, 2]
      global_att_blocks: [5, 7, 9]
      window_pos_embed_bkg_spatial_size: [7, 7]
    neck:
      _target_: sam2.modeling.backbones.image_encoder.FpnNeck
      position_encoding:
        _target_: sam2.modeling.position_encoding.PositionEmbeddingSine
        num_pos_feats: 256
        normalize: true
        scale: null
        temperature: 10000
      d_model: 256
      backbone_channel_list: [768, 384, 192, 96]
      fpn_top_down_levels: [2, 3] # output level 0 and 1 directly use the backbone features
      fpn_interp_model: nearest

  memory_attention:
    _target_: sam2.modeling.memory_attention.MemoryAttention
    d_model: 256
    pos_enc_at_input: true
    layer:
      _target_: sam2.modeling.memory_attention.MemoryAttentionLayer
      activation: relu
      dim_feedforward: 2048
      dropout: 0.1
      pos_enc_at_attn: false
      self_attention:
        _target_: sam2.modeling.sam.transformer.RoPEAttention
        rope_theta: 10000.0
        feat_sizes: [32, 32]
        embedding_dim: 256
        num_heads: 1
        downsample_rate: 1
        dropout: 0.1
      d_model: 256
      pos_enc_at_cross_attn_keys: true
      pos_enc_at_cross_attn_queries: false
      cross_attention:
        _target_: sam2.modeling.sam.transformer.RoPEAttention
        rope_theta: 10000.0
        feat_sizes: [32, 32]
        rope_k_repeat: True
        embedding_dim: 256
        num_heads: 1
        downsample_rate: 1
        dropout: 0.1
        kv_in_dim: 64
    num_layers: 4

  memory_encoder:
    _target_: sam2.modeling.memory_encoder.MemoryEncoder
    out_dim: 64
    position_encoding:
      _target_: sam2.modeling.position_encoding.PositionEmbeddingSine
      num_pos_feats: 64
      normalize: true
      scale: null
      temperature: 10000
    mask_downsampler:
      _target_: sam2.modeling.memory_encoder.MaskDownSampler
      kernel_size: 3
      stride: 2
      padding: 1
    fuser:
      _target_: sam2.modeling.memory_encoder.Fuser
      layer:
        _target_: sam2.modeling.memory_encoder.CXBlock
        dim: 256
        kernel_size: 7
        padding: 3
        layer_scale_init_value: 1e-6
        use_dwconv: True # depth-wise convs
      num_layers: 2

  num_maskmem: 7
  image_size: 1024
  # apply scaled sigmoid on mask logits for memory encoder, and directly feed input mask as output mask
  # SAM decoder
  sigmoid_scale_for_mem_enc: 20.0
  sigmoid_bias_for_mem_enc: -10.0
  use_mask_input_as_output_without_sam: true
  # Memory
  directly_add_no_mem_embed: true
  no_obj_embed_spatial: true
  # use high-resolution feature map in the SAM mask decoder
  use_high_res_features_in_sam: true
  # output 3 masks on the first click on initial conditioning frames
  multimask_output_in_sam: true
  # SAM heads
  iou_prediction_use_sigmoid: True
  # cross-attend to object pointers from other frames (based on SAM output tokens) in the encoder
  use_obj_ptrs_in_encoder: true
  add_tpos_enc_to_obj_ptrs: true
  proj_tpos_enc_in_obj_ptrs: true
  use_signed_tpos_enc_to_obj_ptrs: true
  only_obj_ptrs_in_the_past_for_eval: true
  # object occlusion prediction
  pred_obj_scores: true
  pred_obj_scores_mlp: true
  fixed_no_obj_ptr: true
  # multimask tracking settings
  multimask_output_for_tracking: true
  use_multimask_token_for_obj_ptr: true
  multimask_min_pt_num: 0
  multimask_max_pt_num: 1
  use_mlp_for_obj_ptr_proj: true
  # Compilation flag
  # HieraT does not currently support compilation, should always be set to False
  compile_image_encoder: False

================================================
FILE: auto-seg/submodules/segment-anything-2/sam2/configs/sam2.1_training/sam2.1_hiera_b+_MOSE_finetune.yaml
================================================
# @package _global_

scratch:
  resolution: 1024
  train_batch_size: 1
  num_train_workers: 10
  num_frames: 8
  max_num_objects: 3
  base_lr: 5.0e-6
  vision_lr: 3.0e-06
  phases_per_epoch: 1
  num_epochs: 40

dataset:
  # PATHS to Dataset
  img_folder: /fsx-onevision/shared/data/academic_vos_data/MOSE/train/JPEGImages # PATH to MOSE JPEGImages folder
  gt_folder: /fsx-onevision/shared/data/academic_vos_data/MOSE/train/Annotations/ # PATH to MOSE Annotations folder
  file_list_txt: training/assets/MOSE_sample_train_list.txt # Optional PATH to filelist containing a subset of videos to be used for training
  multiplier: 2

# Video transforms
vos:
  train_transforms:
    - _target_: training.dataset.transforms.ComposeAPI
      transforms:
        - _target_: training.dataset.transforms.RandomHorizontalFlip
          consistent_transform: True
        - _target_: training.dataset.transforms.RandomAffine
          degrees: 25
          shear: 20
          image_interpolation: bilinear
          consistent_transform: True
        - _target_: training.dataset.transforms.RandomResizeAPI
          sizes: ${scratch.resolution}
          square: true
          consistent_transform: True
        - _target_: training.dataset.transforms.ColorJitter
          consistent_transform: True
          brightness: 0.1
          contrast: 0.03
          saturation: 0.03
          hue: null
        - _target_: training.dataset.transforms.RandomGrayscale
          p: 0.05
          consistent_transform: True
        - _target_: training.dataset.transforms.ColorJitter
          consistent_transform: False
          brightness: 0.1
          contrast: 0.05
          saturation: 0.05
          hue: null
        - _target_: training.dataset.transforms.ToTensorAPI
        - _target_: training.dataset.transforms.NormalizeAPI
          mean: [0.485, 0.456, 0.406]
          std: [0.229, 0.224, 0.225]

trainer:
  _target_: training.trainer.Trainer
  mode: train_only
  max_epochs: ${times:${scratch.num_epochs},${scratch.phases_per_epoch}}
  accelerator: cuda
  seed_value: 123

  model:
    _target_: training.model.sam2.SAM2Train
    image_encoder:
      _target_: sam2.modeling.backbones.image_encoder.ImageEncoder
      scalp: 1
      trunk:
        _target_: sam2.modeling.backbones.hieradet.Hiera
        embed_dim: 112
        num_heads: 2
        drop_path_rate: 0.1
      neck:
        _target_: sam2.modeling.backbones.image_encoder.FpnNeck
        position_encoding:
          _target_: sam2.modeling.position_encoding.PositionEmbeddingSine
          num_pos_feats: 256
          normalize: true
          scale: null
          temperature: 10000
        d_model: 256
        backbone_channel_list: [896, 448, 224, 112]
        fpn_top_down_levels: [2, 3] # output level 0 and 1 directly use the backbone features
        fpn_interp_model: nearest

    memory_attention:
      _target_: sam2.modeling.memory_attention.MemoryAttention
      d_model: 256
      pos_enc_at_input: true
      layer:
        _target_: sam2.modeling.memory_attention.MemoryAttentionLayer
        activation: relu
        dim_feedforward: 2048
        dropout: 0.1
        pos_enc_at_attn: false
        self_attention:
          _target_: sam2.modeling.sam.transformer.RoPEAttention
          rope_theta: 10000.0
          feat_sizes: [32, 32]
          embedding_dim: 256
          num_heads: 1
          downsample_rate: 1
          dropout: 0.1
        d_model: 256
        pos_enc_at_cross_attn_keys: true
        pos_enc_at_cross_attn_queries: false
        cross_attention:
          _target_: sam2.modeling.sam.transformer.RoPEAttention
          rope_theta: 10000.0
          feat_sizes: [32, 32]
          rope_k_repeat: True
          embedding_dim: 256
          num_heads: 1
          downsample_rate: 1
          dropout: 0.1
          kv_in_dim: 64
      num_layers: 4

    memory_encoder:
      _target_: sam2.modeling.memory_encoder.MemoryEncoder
      out_dim: 64
      position_encoding:
        _target_: sam2.modeling.position_encoding.PositionEmbeddingSine
        num_pos_feats: 64
        normalize: true
        scale: null
        temperature: 10000
      mask_downsampler:
        _target_: sam2.modeling.memory_encoder.MaskDownSampler
        kernel_size: 3
        stride: 2
        padding: 1
      fuser:
        _target_: sam2.modeling.memory_encoder.Fuser
        layer:
          _target_: sam2.modeling.memory_encoder.CXBlock
          dim: 256
          kernel_size: 7
          padding: 3
          layer_scale_init_value: 1e-6
          use_dwconv: True # depth-wise convs
        num_layers: 2

    num_maskmem: 7
    image_size: ${scratch.resolution}
    # apply scaled sigmoid on mask logits for memory encoder, and directly feed input mask as output mask
    sigmoid_scale_for_mem_enc: 20.0
    sigmoid_bias_for_mem_enc: -10.0
    use_mask_input_as_output_without_sam: true
    # Memory
    directly_add_no_mem_embed: true
    no_obj_embed_spatial: true
    # use high-resolution feature map in the SAM mask decoder
    use_high_res_features_in_sam: true
    # output 3 masks on the first click on initial conditioning frames
    multimask_output_in_sam: true
    # SAM heads
    iou_prediction_use_sigmoid: True
    # cross-attend to object pointers from other frames (based on SAM output tokens) in the encoder
    use_obj_ptrs_in_encoder: true
    add_tpos_enc_to_obj_ptrs: true
    proj_tpos_enc_in_obj_ptrs: true
    use_signed_tpos_enc_to_obj_ptrs: true
    only_obj_ptrs_in_the_past_for_eval: true
    # object occlusion prediction
    pred_obj_scores: true
    pred_obj_scores_mlp: true
    fixed_no_obj_ptr: true
    # multimask tracking settings
    multimask_output_for_tracking: true
    use_multimask_token_for_obj_ptr: true
    multimask_min_pt_num: 0
    multimask_max_pt_num: 1
    use_mlp_for_obj_ptr_proj: true
    # Compilation flag
    # compile_image_encoder: False

    ####### Training specific params #######
    # box/point input and corrections
    prob_to_use_pt_input_for_train: 0.5
    prob_to_use_pt_input_for_eval: 0.0
    prob_to_use_box_input_for_train: 0.5 # 0.5*0.5 = 0.25 prob to use box instead of points
    prob_to_use_box_input_for_eval: 0.0
    prob_to_sample_from_gt_for_train: 0.1 # with a small prob, sampling correction points from GT mask instead of prediction errors
    num_frames_to_correct_for_train: 2 # iteratively sample on random 1~2 frames (always include the first frame)
    num_frames_to_correct_for_eval: 1 # only iteratively sample on first frame
    rand_frames_to_correct_for_train: True # random #init-cond-frame ~ 2
    add_all_frames_to_correct_as_cond: True # when a frame receives a correction click, it becomes a conditioning frame (even if it's not initially a conditioning frame)
    # maximum 2 initial conditioning frames
    num_init_cond_frames_for_train: 2
    rand_init_cond_frames_for_train: True # random 1~2
    num_correction_pt_per_frame: 7
    use_act_ckpt_iterative_pt_sampling: false

    num_init_cond_frames_for_eval: 1 # only mask on the first frame
    forward_backbone_per_frame_for_eval: True

  data:
    train:
      _target_: training.dataset.sam2_datasets.TorchTrainMixedDataset
      phases_per_epoch: ${scratch.phases_per_epoch}
      batch_sizes:
        - ${scratch.train_batch_size}
      datasets:
        - _target_: training.dataset.utils.RepeatFactorWrapper
          dataset:
            _target_: training.dataset.utils.ConcatDataset
            datasets:
              - _target_: training.dataset.vos_dataset.VOSDataset
                transforms: ${vos.train_transforms}
                training: true
                video_dataset:
                  _target_: training.dataset.vos_raw_dataset.PNGRawDataset
                  img_folder: ${dataset.img_folder}
                  gt_folder: ${dataset.gt_folder}
                  file_list_txt: ${dataset.file_list_txt}
                sampler:
                  _target_: training.dataset.vos_sampler.RandomUniformSampler
                  num_frames: ${scratch.num_frames}
                  max_num_objects: ${scratch.max_num_objects}
                multiplier: ${dataset.multiplier}
      shuffle: True
      num_workers: ${scratch.num_train_workers}
      pin_memory: True
      drop_last: True
      collate_fn:
        _target_: training.utils.data_utils.collate_fn
        _partial_: true
        dict_key: all

  optim:
    amp:
      enabled: True
      amp_dtype: bfloat16

    optimizer:
      _target_: torch.optim.AdamW

    gradient_clip:
      _target_: training.optimizer.GradientClipper
      max_norm: 0.1
      norm_type: 2

    param_group_modifiers:
      - _target_: training.optimizer.layer_decay_param_modifier
        _partial_: True
        layer_decay_value: 0.9
        apply_to: 'image_encoder.trunk'
        overrides:
          - pattern: '*pos_embed*'
            value: 1.0

    options:
      lr:
        - scheduler:
            _target_: fvcore.common.param_scheduler.CosineParamScheduler
            start_value: ${scratch.base_lr}
            end_value: ${divide:${scratch.base_lr},10}
        - scheduler:
            _target_: fvcore.common.param_scheduler.CosineParamScheduler
            start_value: ${scratch.vision_lr}
            end_value: ${divide:${scratch.vision_lr},10}
          param_names:
            - 'image_encoder.*'
      weight_decay:
        - scheduler:
            _target_: fvcore.common.param_scheduler.ConstantParamScheduler
            value: 0.1
        - scheduler:
            _target_: fvcore.common.param_scheduler.ConstantParamScheduler
            value: 0.0
          param_names:
            - '*bias*'
          module_cls_names: ['torch.nn.LayerNorm']

  loss:
    all:
      _target_: training.loss_fns.MultiStepMultiMasksAndIous
      weight_dict:
        loss_mask: 20
        loss_dice: 1
        loss_iou: 1
        loss_class: 1
      supervise_all_iou: true
      iou_use_l1_loss: true
      pred_obj_scores: true
      focal_gamma_obj_score: 0.0
      focal_alpha_obj_score: -1.0

  distributed:
    backend: nccl
    find_unused_parameters: True

  logging:
    tensorboard_writer:
      _target_: training.utils.logger.make_tensorboard_logger
      log_dir: ${launcher.experiment_log_dir}/tensorboard
      flush_secs: 120
      should_log: True
    log_dir: ${launcher.experiment_log_dir}/logs
    log_freq: 10

  # initialize from a SAM 2 checkpoint
  checkpoint:
    save_dir: ${launcher.experiment_log_dir}/checkpoints
    save_freq: 0 # 0 only last checkpoint is saved.
    model_weight_initializer:
      _partial_: True
      _target_: training.utils.checkpoint_utils.load_state_dict_into_model
      strict: True
      ignore_unexpected_keys: null
      ignore_missing_keys: null

      state_dict:
        _target_: training.utils.checkpoint_utils.load_checkpoint_and_apply_kernels
        checkpoint_path: ./checkpoints/sam2.1_hiera_base_plus.pt # PATH to SAM 2.1 checkpoint
        ckpt_state_dict_keys: ['model']

launcher:
  num_nodes: 1
  gpus_per_node: 8
  experiment_log_dir: null # Path to log directory, defaults to ./sam2_logs/${config_name}

# SLURM args if running on a cluster
submitit:
  partition: null
  account: null
  qos: null
  cpus_per_task: 10
  use_cluster: false
  timeout_hour: 24
  name: null
  port_range: [10000, 65000]

================================================
FILE: auto-seg/submodules/segment-anything-2/sam2/csrc/connected_components.cu
================================================
// Copyright (c) Meta Platforms, Inc. and affiliates.
// All rights reserved.

// This source code is licensed under the license found in the
// LICENSE file in the root directory of this source tree.

// adapted from https://github.com/zsef123/Connected_components_PyTorch
// with license found in the LICENSE_cctorch file in the root directory.
#include <ATen/cuda/CUDAContext.h>
#include <cuda.h>
#include <cuda_runtime.h>
#include <torch/extension.h>
#include <torch/script.h>
#include <vector>

// 2d
#define BLOCK_ROWS 16
#define BLOCK_COLS 16

namespace cc2d {

template <typename T>
__device__ __forceinline__ unsigned char hasBit(T bitmap, unsigned char pos) {
  return (bitmap >> pos) & 1;
}

__device__ int32_t find(const int32_t* s_buf, int32_t n) {
  while (s_buf[n] != n)
    n = s_buf[n];
  return n;
}

__device__ int32_t find_n_compress(int32_t* s_buf, int32_t n) {
  const int32_t id = n;
  while (s_buf[n] != n) {
    n = s_buf[n];
    s_buf[id] = n;
  }
  return n;
}

__device__ void union_(int32_t* s_buf, int32_t a, int32_t b) {
  bool done;
  do {
    a = find(s_buf, a);
    b = find(s_buf, b);

    if (a < b) {
      int32_t old = atomicMin(s_buf + b, a);
      done = (old == b);
      b = old;
    } else if (b < a) {
      int32_t old = atomicMin(s_buf + a, b);
      done = (old == a);
      a = old;
    } else
      done = true;

  } while (!done);
}

__global__ void
init_labeling(int32_t* label, const uint32_t W, const uint32_t H) {
  const uint32_t row = (blockIdx.y * blockDim.y + threadIdx.y) * 2;
  const uint32_t col = (blockIdx.x * blockDim.x + threadIdx.x) * 2;
  const uint32_t idx = row * W + col;

  if (row < H && col < W)
    label[idx] = idx;
}

__global__ void
merge(uint8_t* img, int32_t* label, const uint32_t W, const uint32_t H) {
  const uint32_t row = (blockIdx.y * blockDim.y + threadIdx.y) * 2;
  const uint32_t col = (blockIdx.x * blockDim.x + threadIdx.x) * 2;
  const uint32_t idx = row * W + col;

  if (row >= H || col >= W)
    return;

  uint32_t P = 0;

  if (img[idx])
    P |= 0x777;
  if (row + 1 < H && img[idx + W])
    P |= 0x777 << 4;
  if (col + 1 < W && img[idx + 1])
    P |= 0x777 << 1;

  if (col == 0)
    P &= 0xEEEE;
  if (col + 1 >= W)
    P &= 0x3333;
  else if (col + 2 >= W)
    P &= 0x7777;

  if (row == 0)
    P &= 0xFFF0;
  if (row + 1 >= H)
    P &= 0xFF;

  if (P > 0) {
    // If need check about top-left pixel(if flag the first bit) and hit the
    // top-left pixel
    if (hasBit(P, 0) && img[idx - W - 1]) {
      union_(label, idx, idx - 2 * W - 2); // top left block
    }

    if ((hasBit(P, 1) && img[idx - W]) || (hasBit(P, 2) && img[idx - W + 1]))
      union_(label, idx, idx - 2 * W); // top bottom block

    if (hasBit(P, 3) && img[idx + 2 - W])
      union_(label, idx, idx - 2 * W + 2); // top right block

    if ((hasBit(P, 4) && img[idx - 1]) || (hasBit(P, 8) && img[idx + W - 1]))
      union_(label, idx, idx - 2); // just left block
  }
}

__global__ void compression(int32_t* label, const int32_t W, const int32_t H) {
  const uint32_t row = (blockIdx.y * blockDim.y + threadIdx.y) * 2;
  const uint32_t col = (blockIdx.x * blockDim.x + threadIdx.x) * 2;
  const uint32_t idx = row * W + col;

  if (row < H && col < W)
    find_n_compress(label, idx);
}

__global__ void final_labeling(
    const uint8_t* img,
    int32_t* label,
    const int32_t W,
    const int32_t H) {
  const uint32_t row = (blockIdx.y * blockDim.y + threadIdx.y) * 2;
  const uint32_t col = (blockIdx.x * blockDim.x + threadIdx.x) * 2;
  const uint32_t idx = row * W + col;

  if (row >= H || col >= W)
    return;

  int32_t y = label[idx] + 1;

  if (img[idx])
    label[idx] = y;
  else
    label[idx] = 0;

  if (col + 1 < W) {
    if (img[idx + 1])
      label[idx + 1] = y;
    else
      label[idx + 1] = 0;

    if (row + 1 < H) {
      if (img[idx + W + 1])
        label[idx + W + 1] = y;
      else
        label[idx + W + 1] = 0;
    }
  }

  if (row + 1 < H) {
    if (img[idx + W])
      label[idx + W] = y;
    else
      label[idx + W] = 0;
  }
}

__global__ void init_counting(
    const int32_t* label,
    int32_t* count_init,
    const int32_t W,
    const int32_t H) {
  const uint32_t row = (blockIdx.y * blockDim.y + threadIdx.y);
  const uint32_t col = (blockIdx.x * blockDim.x + threadIdx.x);
  const uint32_t idx = row * W + col;

  if (row >= H || col >= W)
    return;

  int32_t y = label[idx];
  if (y > 0) {
    int32_t count_idx = y - 1;
    atomicAdd(count_init + count_idx, 1);
  }
}

__global__ void final_counting(
    const int32_t* label,
    const int32_t* count_init,
    int32_t* count_final,
    const int32_t W,
    const int32_t H) {
  const uint32_t row = (blockIdx.y * blockDim.y + threadIdx.y);
  const uint32_t col = (blockIdx.x * blockDim.x + threadIdx.x);
  const uint32_t idx = row * W + col;

  if (row >= H || col >= W)
    return;

  int32_t y = label[idx];
  if (y > 0) {
    int32_t count_idx = y - 1;
    count_final[idx] = count_init[count_idx];
  } else {
    count_final[idx] = 0;
  }
}

} // namespace cc2d

std::vector<torch::Tensor> get_connected_componnets(
    const torch::Tensor& inputs) {
  AT_ASSERTM(inputs.is_cuda(), "inputs must be a CUDA tensor");
  AT_ASSERTM(inputs.ndimension() == 4, "inputs must be [N, 1, H, W] shape");
  AT_ASSERTM(
      inputs.scalar_type() == torch::kUInt8, "inputs must be a uint8 type");

  const uint32_t N = inputs.size(0);
  const uint32_t C = inputs.size(1);
  const uint32_t H = inputs.size(2);
  const uint32_t W = inputs.size(3);

  AT_ASSERTM(C == 1, "inputs must be [N, 1, H, W] shape");
  AT_ASSERTM((H % 2) == 0, "height must be an even number");
  AT_ASSERTM((W % 2) == 0, "width must be an even number");

  // label must be uint32_t
  auto label_options =
      torch::TensorOptions().dtype(torch::kInt32).device(inputs.device());
  torch::Tensor labels = torch::zeros({N, C, H, W}, label_options);
  torch::Tensor counts_init = torch::zeros({N, C, H, W}, label_options);
  torch::Tensor counts_final = torch::zeros({N, C, H, W}, label_options);

  dim3 grid = dim3(
      ((W + 1) / 2 + BLOCK_COLS - 1) / BLOCK_COLS,
      ((H + 1) / 2 + BLOCK_ROWS - 1) / BLOCK_ROWS);
  dim3 block = dim3(BLOCK_COLS, BLOCK_ROWS);
  dim3 grid_count =
      dim3((W + BLOCK_COLS) / BLOCK_COLS, (H + BLOCK_ROWS) / BLOCK_ROWS);
  dim3 block_count = dim3(BLOCK_COLS, BLOCK_ROWS);
  cudaStream_t stream = at::cuda::getCurrentCUDAStream();

  for (int n = 0; n < N; n++) {
    uint32_t offset = n * H * W;

    cc2d::init_labeling<<<grid, block, 0, stream>>>(
        labels.data_ptr<int32_t>() + offset, W, H);
    cc2d::merge<<<grid, block, 0, stream>>>(
        inputs.data_ptr<uint8_t>() + offset,
        labels.data_ptr<int32_t>() + offset,
        W,
        H);
    cc2d::compression<<<grid, block, 0, stream>>>(
        labels.data_ptr<int32_t>() + offset, W, H);
    cc2d::final_labeling<<<grid, block, 0, stream>>>(
        inputs.data_ptr<uint8_t>() + offset,
        labels.data_ptr<int32_t>() + offset,
        W,
        H);

    // get the counting of each pixel
    cc2d::init_counting<<<grid_count, block_count, 0, stream>>>(
        labels.data_ptr<int32_t>() + offset,
        counts_init.data_ptr<int32_t>() + offset,
        W,
        H);
    cc2d::final_counting<<<grid_count, block_count, 0, stream>>>(
        labels.data_ptr<int32_t>() + offset,
        counts_init.data_ptr<int32_t>() + offset,
        counts_final.data_ptr<int32_t>() + offset,
        W,
        H);
  }

  // returned values are [labels, counts]
  std::vector<torch::Tensor> outputs;
  outputs.push_back(labels);
  outputs.push_back(counts_final);
  return outputs;
}

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def(
      "get_connected_componnets",
      &get_connected_componnets,
      "get_connected_componnets");
}


================================================
FILE: auto-seg/submodules/segment-anything-2/sam2/modeling/__init__.py
================================================
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.


================================================
FILE: auto-seg/submodules/segment-anything-2/sam2/modeling/backbones/__init__.py
================================================
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.


================================================
FILE: auto-seg/submodules/segment-anything-2/sam2/modeling/backbones/hieradet.py
================================================
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
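Editor's note: the Hiera trunk defined in this file splits `depth = sum(stages)` transformer blocks into stages and derives `stage_ends` / `q_pool_blocks` from `stages` with simple prefix sums. A small sketch of that arithmetic in plain Python, using the default `stages=(2, 3, 16, 3)` and `q_pool=3` from the constructor:

```python
stages = (2, 3, 16, 3)   # blocks per stage (default in Hiera.__init__)
q_pool = 3               # number of q-pooling stage transitions

depth = sum(stages)
# Index of the last block in each stage (prefix sums minus one).
stage_ends = [sum(stages[:i]) - 1 for i in range(1, len(stages) + 1)]
# Q-pooling happens at the first block of each new stage.
q_pool_blocks = [x + 1 for x in stage_ends[:-1]][:q_pool]

print(depth)          # 24
print(stage_ends)     # [1, 4, 20, 23]
print(q_pool_blocks)  # [2, 5, 21]
```

These are exactly the indices at which `MultiScaleBlock` is built with a non-`None` `q_stride` below, so the spatial resolution halves at each stage boundary.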
import logging
from functools import partial
from typing import List, Tuple, Union

import torch
import torch.nn as nn
import torch.nn.functional as F
from iopath.common.file_io import g_pathmgr

from sam2.modeling.backbones.utils import (
    PatchEmbed,
    window_partition,
    window_unpartition,
)
from sam2.modeling.sam2_utils import DropPath, MLP


def do_pool(x: torch.Tensor, pool: nn.Module, norm: nn.Module = None) -> torch.Tensor:
    if pool is None:
        return x
    # (B, H, W, C) -> (B, C, H, W)
    x = x.permute(0, 3, 1, 2)
    x = pool(x)
    # (B, C, H', W') -> (B, H', W', C)
    x = x.permute(0, 2, 3, 1)
    if norm:
        x = norm(x)

    return x


class MultiScaleAttention(nn.Module):
    def __init__(
        self,
        dim: int,
        dim_out: int,
        num_heads: int,
        q_pool: nn.Module = None,
    ):
        super().__init__()

        self.dim = dim
        self.dim_out = dim_out
        self.num_heads = num_heads
        self.q_pool = q_pool
        self.qkv = nn.Linear(dim, dim_out * 3)
        self.proj = nn.Linear(dim_out, dim_out)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, H, W, _ = x.shape
        # qkv with shape (B, H * W, 3, nHead, C)
        qkv = self.qkv(x).reshape(B, H * W, 3, self.num_heads, -1)
        # q, k, v with shape (B, H * W, nheads, C)
        q, k, v = torch.unbind(qkv, 2)

        # Q pooling (for downsample at stage changes)
        if self.q_pool:
            q = do_pool(q.reshape(B, H, W, -1), self.q_pool)
            H, W = q.shape[1:3]  # downsampled shape
            q = q.reshape(B, H * W, self.num_heads, -1)

        # Torch's SDPA expects [B, nheads, H*W, C] so we transpose
        x = F.scaled_dot_product_attention(
            q.transpose(1, 2),
            k.transpose(1, 2),
            v.transpose(1, 2),
        )
        # Transpose back
        x = x.transpose(1, 2)
        x = x.reshape(B, H, W, -1)

        x = self.proj(x)

        return x


class MultiScaleBlock(nn.Module):
    def __init__(
        self,
        dim: int,
        dim_out: int,
        num_heads: int,
        mlp_ratio: float = 4.0,
        drop_path: float = 0.0,
        norm_layer: Union[nn.Module, str] = "LayerNorm",
        q_stride: Tuple[int, int] = None,
        act_layer: nn.Module = nn.GELU,
        window_size: int = 0,
    ):
        super().__init__()

        if isinstance(norm_layer, str):
            norm_layer = partial(getattr(nn, norm_layer), eps=1e-6)

        self.dim = dim
        self.dim_out = dim_out
        self.norm1 = norm_layer(dim)

        self.window_size = window_size

        self.pool, self.q_stride = None, q_stride
        if self.q_stride:
            self.pool = nn.MaxPool2d(
                kernel_size=q_stride, stride=q_stride, ceil_mode=False
            )

        self.attn = MultiScaleAttention(
            dim,
            dim_out,
            num_heads=num_heads,
            q_pool=self.pool,
        )

        self.drop_path = DropPath(drop_path) if drop_path > 0.0 else nn.Identity()

        self.norm2 = norm_layer(dim_out)
        self.mlp = MLP(
            dim_out,
            int(dim_out * mlp_ratio),
            dim_out,
            num_layers=2,
            activation=act_layer,
        )

        if dim != dim_out:
            self.proj = nn.Linear(dim, dim_out)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shortcut = x  # B, H, W, C
        x = self.norm1(x)

        # Skip connection
        if self.dim != self.dim_out:
            shortcut = do_pool(self.proj(x), self.pool)

        # Window partition
        window_size = self.window_size
        if window_size > 0:
            H, W = x.shape[1], x.shape[2]
            x, pad_hw = window_partition(x, window_size)

        # Window Attention + Q Pooling (if stage change)
        x = self.attn(x)
        if self.q_stride:
            # Shapes have changed due to Q pooling
            window_size = self.window_size // self.q_stride[0]
            H, W = shortcut.shape[1:3]

            pad_h = (window_size - H % window_size) % window_size
            pad_w = (window_size - W % window_size) % window_size
            pad_hw = (H + pad_h, W + pad_w)

        # Reverse window partition
        if self.window_size > 0:
            x = window_unpartition(x, window_size, pad_hw, (H, W))

        x = shortcut + self.drop_path(x)
        # MLP
        x = x + self.drop_path(self.mlp(self.norm2(x)))
        return x


class Hiera(nn.Module):
    """
    Reference: https://arxiv.org/abs/2306.00989
    """

    def __init__(
        self,
        embed_dim: int = 96,  # initial embed dim
        num_heads: int = 1,  # initial number of heads
        drop_path_rate: float = 0.0,  # stochastic depth
        q_pool: int = 3,  # number of q_pool stages
        q_stride: Tuple[int, int] = (2, 2),  # downsample stride bet. stages
        stages: Tuple[int, ...] = (2, 3, 16, 3),  # blocks per stage
        dim_mul: float = 2.0,  # dim_mul factor at stage shift
        head_mul: float = 2.0,  # head_mul factor at stage shift
        window_pos_embed_bkg_spatial_size: Tuple[int, int] = (14, 14),
        # window size per stage, when not using global att.
        window_spec: Tuple[int, ...] = (
            8,
            4,
            14,
            7,
        ),
        # global attn in these blocks
        global_att_blocks: Tuple[int, ...] = (
            12,
            16,
            20,
        ),
        weights_path=None,
        return_interm_layers=True,  # return feats from every stage
    ):
        super().__init__()

        assert len(stages) == len(window_spec)
        self.window_spec = window_spec

        depth = sum(stages)
        self.q_stride = q_stride
        self.stage_ends = [sum(stages[:i]) - 1 for i in range(1, len(stages) + 1)]
        assert 0 <= q_pool <= len(self.stage_ends[:-1])
        self.q_pool_blocks = [x + 1 for x in self.stage_ends[:-1]][:q_pool]
        self.return_interm_layers = return_interm_layers

        self.patch_embed = PatchEmbed(
            embed_dim=embed_dim,
        )
        # Which blocks have global att?
        self.global_att_blocks = global_att_blocks

        # Windowed positional embedding (https://arxiv.org/abs/2311.05613)
        self.window_pos_embed_bkg_spatial_size = window_pos_embed_bkg_spatial_size
        self.pos_embed = nn.Parameter(
            torch.zeros(1, embed_dim, *self.window_pos_embed_bkg_spatial_size)
        )
        self.pos_embed_window = nn.Parameter(
            torch.zeros(1, embed_dim, self.window_spec[0], self.window_spec[0])
        )

        dpr = [
            x.item() for x in torch.linspace(0, drop_path_rate, depth)
        ]  # stochastic depth decay rule

        cur_stage = 1
        self.blocks = nn.ModuleList()

        for i in range(depth):
            dim_out = embed_dim
            # lags by a block, so first block of
            # next stage uses an initial window size
            # of previous stage and final window size of current stage
            window_size = self.window_spec[cur_stage - 1]

            if self.global_att_blocks is not None:
                window_size = 0 if i in self.global_att_blocks else window_size

            if i - 1 in self.stage_ends:
                dim_out = int(embed_dim * dim_mul)
                num_heads = int(num_heads * head_mul)
                cur_stage += 1

            block = MultiScaleBlock(
                dim=embed_dim,
                dim_out=dim_out,
                num_heads=num_heads,
                drop_path=dpr[i],
                q_stride=self.q_stride if i in self.q_pool_blocks else None,
                window_size=window_size,
            )

            embed_dim = dim_out
            self.blocks.append(block)

        self.channel_list = (
            [self.blocks[i].dim_out for i in self.stage_ends[::-1]]
            if return_interm_layers
            else [self.blocks[-1].dim_out]
        )

        if weights_path is not None:
            with g_pathmgr.open(weights_path, "rb") as f:
                chkpt = torch.load(f, map_location="cpu")
            # use lazy %-formatting so the load_state_dict result is logged correctly
            logging.info("loading Hiera: %s", self.load_state_dict(chkpt, strict=False))

    def _get_pos_embed(self, hw: Tuple[int, int]) -> torch.Tensor:
        h, w = hw
        window_embed = self.pos_embed_window
        pos_embed = F.interpolate(self.pos_embed, size=(h, w), mode="bicubic")
        pos_embed = pos_embed + window_embed.tile(
            [x // y for x, y in zip(pos_embed.shape, window_embed.shape)]
        )
        pos_embed = pos_embed.permute(0, 2, 3, 1)
        return pos_embed

    def forward(self, x: torch.Tensor) -> List[torch.Tensor]:
        x = self.patch_embed(x)
        # x: (B, H, W, C)

        # Add pos embed
        x = x + self._get_pos_embed(x.shape[1:3])

        outputs = []
        for i, blk in enumerate(self.blocks):
            x = blk(x)
            if (i == self.stage_ends[-1]) or (
                i in self.stage_ends and self.return_interm_layers
            ):
                feats = x.permute(0, 3, 1, 2)
                outputs.append(feats)

        return outputs

    def get_layer_id(self, layer_name):
        # https://github.com/microsoft/unilm/blob/master/beit/optim_factory.py#L33
        num_layers = self.get_num_layers()

        if layer_name.find("rel_pos") != -1:
            return num_layers + 1
        elif layer_name.find("pos_embed") != -1:
            return 0
        elif layer_name.find("patch_embed") != -1:
            return 0
        elif layer_name.find("blocks") != -1:
            return int(layer_name.split("blocks")[1].split(".")[1]) + 1
        else:
            return num_layers + 1

    def get_num_layers(self) -> int:
        return len(self.blocks)


================================================
FILE: auto-seg/submodules/segment-anything-2/sam2/modeling/backbones/image_encoder.py
================================================
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

from typing import List, Optional

import torch
import torch.nn as nn
import torch.nn.functional as F


class ImageEncoder(nn.Module):
    def __init__(
        self,
        trunk: nn.Module,
        neck: nn.Module,
        scalp: int = 0,
    ):
        super().__init__()
        self.trunk = trunk
        self.neck = neck
        self.scalp = scalp
        assert (
            self.trunk.channel_list == self.neck.backbone_channel_list
        ), f"Channel dims of trunk and neck do not match. Trunk: {self.trunk.channel_list}, neck: {self.neck.backbone_channel_list}"

    def forward(self, sample: torch.Tensor):
        # Forward through backbone
        features, pos = self.neck(self.trunk(sample))
        if self.scalp > 0:
            # Discard the lowest resolution features
            features, pos = features[: -self.scalp], pos[: -self.scalp]

        src = features[-1]
        output = {
            "vision_features": src,
            "vision_pos_enc": pos,
            "backbone_fpn": features,
        }
        return output


class FpnNeck(nn.Module):
    """
    A modified variant of Feature Pyramid Network (FPN) neck
    (we remove output conv and also do bicubic interpolation similar to ViT
    pos embed interpolation)
    """

    def __init__(
        self,
        position_encoding: nn.Module,
        d_model: int,
        backbone_channel_list: List[int],
        kernel_size: int = 1,
        stride: int = 1,
        padding: int = 0,
        fpn_interp_model: str = "bilinear",
        fuse_type: str = "sum",
        fpn_top_down_levels: Optional[List[int]] = None,
    ):
        """Initialize the neck
        :param trunk: the backbone
        :param position_encoding: the positional encoding to use
        :param d_model: the dimension of the model
        :param neck_norm: the normalization to use
        """
        super().__init__()
        self.position_encoding = position_encoding
        self.convs = nn.ModuleList()
        self.backbone_channel_list = backbone_channel_list
        self.d_model = d_model
        for dim in backbone_channel_list:
            current = nn.Sequential()
            current.add_module(
                "conv",
                nn.Conv2d(
                    in_channels=dim,
                    out_channels=d_model,
                    kernel_size=kernel_size,
                    stride=stride,
                    padding=padding,
                ),
            )

            self.convs.append(current)
        self.fpn_interp_model = fpn_interp_model
        assert fuse_type in ["sum", "avg"]
        self.fuse_type = fuse_type

        # levels to have top-down features in its outputs
        # e.g. if fpn_top_down_levels is [2, 3], then only outputs of level 2 and 3
        # have top-down propagation, while outputs of level 0 and level 1 have only
        # lateral features from the same backbone level.
        if fpn_top_down_levels is None:
            # default is to have top-down features on all levels
            fpn_top_down_levels = range(len(self.convs))
        self.fpn_top_down_levels = list(fpn_top_down_levels)

    def forward(self, xs: List[torch.Tensor]):
        out = [None] * len(self.convs)
        pos = [None] * len(self.convs)
        assert len(xs) == len(self.convs)
        # fpn forward pass
        # see https://github.com/facebookresearch/detectron2/blob/main/detectron2/modeling/backbone/fpn.py
        prev_features = None
        # forward in top-down order (from low to high resolution)
        n = len(self.convs) - 1
        for i in range(n, -1, -1):
            x = xs[i]
            lateral_features = self.convs[n - i](x)
            if i in self.fpn_top_down_levels and prev_features is not None:
                top_down_features = F.interpolate(
                    prev_features.to(dtype=torch.float32),
                    scale_factor=2.0,
                    mode=self.fpn_interp_model,
                    align_corners=(
                        None if self.fpn_interp_model == "nearest" else False
                    ),
                    antialias=False,
                )
                prev_features = lateral_features + top_down_features
                if self.fuse_type == "avg":
                    prev_features /= 2
            else:
                prev_features = lateral_features
            x_out = prev_features
            out[i] = x_out
            pos[i] = self.position_encoding(x_out).to(x_out.dtype)

        return out, pos


================================================
FILE: auto-seg/submodules/segment-anything-2/sam2/modeling/backbones/utils.py
================================================
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
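Editor's note: `window_partition` in this file pads `H` and `W` up to multiples of `window_size` using the expression `(window_size - d % window_size) % window_size`; the outer modulo makes the padding zero when `d` is already a multiple. A quick plain-Python check of that formula:

```python
def pad_to_multiple(d, window_size):
    # Padding needed so d becomes a multiple of window_size;
    # zero when d is already a multiple (hence the outer modulo).
    return (window_size - d % window_size) % window_size

print(pad_to_multiple(14, 8))  # 2 -> 14 is padded to 16
print(pad_to_multiple(16, 8))  # 0 -> already a multiple
```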
"""Some utilities for backbones, in particular for windowing"""

from typing import Tuple

import torch
import torch.nn as nn
import torch.nn.functional as F


def window_partition(x, window_size):
    """
    Partition into non-overlapping windows with padding if needed.
    Args:
        x (tensor): input tokens with [B, H, W, C].
        window_size (int): window size.
    Returns:
        windows: windows after partition with [B * num_windows, window_size, window_size, C].
        (Hp, Wp): padded height and width before partition
    """
    B, H, W, C = x.shape

    pad_h = (window_size - H % window_size) % window_size
    pad_w = (window_size - W % window_size) % window_size
    if pad_h > 0 or pad_w > 0:
        x = F.pad(x, (0, 0, 0, pad_w, 0, pad_h))
    Hp, Wp = H + pad_h, W + pad_w

    x = x.view(B, Hp // window_size, window_size, Wp // window_size, window_size, C)
    windows = (
        x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
    )
    return windows, (Hp, Wp)


def window_unpartition(windows, window_size, pad_hw, hw):
    """
    Window unpartition into original sequences and removing padding.
    Args:
        windows (tensor): input tokens with [B * num_windows, window_size, window_size, C].
        window_size (int): window size.
        pad_hw (Tuple): padded height and width (Hp, Wp).
        hw (Tuple): original height and width (H, W) before padding.
    Returns:
        x: unpartitioned sequences with [B, H, W, C].
    """
    Hp, Wp = pad_hw
    H, W = hw
    B = windows.shape[0] // (Hp * Wp // window_size // window_size)
    x = windows.view(
        B, Hp // window_size, Wp // window_size, window_size, window_size, -1
    )
    x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, Hp, Wp, -1)

    if Hp > H or Wp > W:
        x = x[:, :H, :W, :].contiguous()
    return x


class PatchEmbed(nn.Module):
    """
    Image to Patch Embedding.
    """

    def __init__(
        self,
        kernel_size: Tuple[int, ...] = (7, 7),
        stride: Tuple[int, ...] = (4, 4),
        padding: Tuple[int, ...] = (3, 3),
        in_chans: int = 3,
        embed_dim: int = 768,
    ):
        """
        Args:
            kernel_size (Tuple): kernel size of the projection layer.
            stride (Tuple): stride of the projection layer.
            padding (Tuple): padding size of the projection layer.
            in_chans (int): Number of input image channels.
            embed_dim (int): Patch embedding dimension.
        """
        super().__init__()
        self.proj = nn.Conv2d(
            in_chans, embed_dim, kernel_size=kernel_size, stride=stride, padding=padding
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.proj(x)
        # B C H W -> B H W C
        x = x.permute(0, 2, 3, 1)
        return x


================================================
FILE: auto-seg/submodules/segment-anything-2/sam2/modeling/memory_attention.py
================================================
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

from typing import Optional

import torch
from torch import nn, Tensor

from sam2.modeling.sam.transformer import RoPEAttention
from sam2.modeling.sam2_utils import get_activation_fn, get_clones


class MemoryAttentionLayer(nn.Module):
    def __init__(
        self,
        activation: str,
        cross_attention: nn.Module,
        d_model: int,
        dim_feedforward: int,
        dropout: float,
        pos_enc_at_attn: bool,
        pos_enc_at_cross_attn_keys: bool,
        pos_enc_at_cross_attn_queries: bool,
        self_attention: nn.Module,
    ):
        super().__init__()
        self.d_model = d_model
        self.dim_feedforward = dim_feedforward
        self.dropout_value = dropout
        self.self_attn = self_attention
        self.cross_attn_image = cross_attention

        # Implementation of Feedforward model
        self.linear1 = nn.Linear(d_model, dim_feedforward)
        self.dropout = nn.Dropout(dropout)
        self.linear2 = nn.Linear(dim_feedforward, d_model)

        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)
        self.dropout1 = nn.Dropout(dropout)
        self.dropout2 = nn.Dropout(dropout)
        self.dropout3 = nn.Dropout(dropout)

        self.activation_str = activation
        self.activation = get_activation_fn(activation)

        # Where to add pos enc
        self.pos_enc_at_attn = pos_enc_at_attn
        self.pos_enc_at_cross_attn_queries = pos_enc_at_cross_attn_queries
        self.pos_enc_at_cross_attn_keys = pos_enc_at_cross_attn_keys

    def _forward_sa(self, tgt, query_pos):
        # Self-Attention
        tgt2 = self.norm1(tgt)
        q = k = tgt2 + query_pos if self.pos_enc_at_attn else tgt2
        tgt2 = self.self_attn(q, k, v=tgt2)
        tgt = tgt + self.dropout1(tgt2)
        return tgt

    def _forward_ca(self, tgt, memory, query_pos, pos, num_k_exclude_rope=0):
        kwds = {}
        if num_k_exclude_rope > 0:
            assert isinstance(self.cross_attn_image, RoPEAttention)
            kwds = {"num_k_exclude_rope": num_k_exclude_rope}

        # Cross-Attention
        tgt2 = self.norm2(tgt)
        tgt2 = self.cross_attn_image(
            q=tgt2 + query_pos if self.pos_enc_at_cross_attn_queries else tgt2,
            k=memory + pos if self.pos_enc_at_cross_attn_keys else memory,
            v=memory,
            **kwds,
        )
        tgt = tgt + self.dropout2(tgt2)
        return tgt

    def forward(
        self,
        tgt,
        memory,
        pos: Optional[Tensor] = None,
        query_pos: Optional[Tensor] = None,
        num_k_exclude_rope: int = 0,
    ) -> torch.Tensor:
        # Self-Attn, Cross-Attn
        tgt = self._forward_sa(tgt, query_pos)
        tgt = self._forward_ca(tgt, memory, query_pos, pos, num_k_exclude_rope)
        # MLP
        tgt2 = self.norm3(tgt)
        tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2))))
        tgt = tgt + self.dropout3(tgt2)
        return tgt


class MemoryAttention(nn.Module):
    def __init__(
        self,
        d_model: int,
        pos_enc_at_input: bool,
        layer: nn.Module,
        num_layers: int,
        batch_first: bool = True,  # Do layers expect batch first input?
    ):
        super().__init__()
        self.d_model = d_model
        self.layers = get_clones(layer, num_layers)
        self.num_layers = num_layers
        self.norm = nn.LayerNorm(d_model)
        self.pos_enc_at_input = pos_enc_at_input
        self.batch_first = batch_first

    def forward(
        self,
        curr: torch.Tensor,  # self-attention inputs
        memory: torch.Tensor,  # cross-attention inputs
        curr_pos: Optional[Tensor] = None,  # pos_enc for self-attention inputs
        memory_pos: Optional[Tensor] = None,  # pos_enc for cross-attention inputs
        num_obj_ptr_tokens: int = 0,  # number of object pointer *tokens*
    ):
        if isinstance(curr, list):
            assert isinstance(curr_pos, list)
            assert len(curr) == len(curr_pos) == 1
            curr, curr_pos = (
                curr[0],
                curr_pos[0],
            )

        assert (
            curr.shape[1] == memory.shape[1]
        ), "Batch size must be the same for curr and memory"

        output = curr
        if self.pos_enc_at_input and curr_pos is not None:
            output = output + 0.1 * curr_pos

        if self.batch_first:
            # Convert to batch first
            output = output.transpose(0, 1)
            curr_pos = curr_pos.transpose(0, 1)
            memory = memory.transpose(0, 1)
            memory_pos = memory_pos.transpose(0, 1)

        for layer in self.layers:
            kwds = {}
            if isinstance(layer.cross_attn_image, RoPEAttention):
                kwds = {"num_k_exclude_rope": num_obj_ptr_tokens}

            output = layer(
                tgt=output,
                memory=memory,
                pos=memory_pos,
                query_pos=curr_pos,
                **kwds,
            )
        normed_output = self.norm(output)

        if self.batch_first:
            # Convert back to seq first
            normed_output = normed_output.transpose(0, 1)
            curr_pos = curr_pos.transpose(0, 1)

        return normed_output


================================================
FILE: auto-seg/submodules/segment-anything-2/sam2/modeling/memory_encoder.py
================================================
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
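Editor's note: the `MaskDownSampler` defined in this file derives its layer count from `total_stride` and `stride` (`num_layers = log2(total_stride) / log2(stride)`), and each downsampling conv multiplies the channel count by `stride**2` before a final 1x1 projection to `embed_dim`. That arithmetic with the module's defaults, in plain Python:

```python
import math

stride, total_stride = 4, 16  # defaults in MaskDownSampler.__init__
num_layers = int(math.log2(total_stride) // math.log2(stride))
assert stride**num_layers == total_stride  # same sanity check as the module

# Channel growth: each downsample multiplies channels by stride**2.
chans = [1]
for _ in range(num_layers):
    chans.append(chans[-1] * stride**2)

print(num_layers)  # 2
print(chans)       # [1, 16, 256]
```

So with the defaults, a 1-channel mask passes through two stride-4 convs (1 -> 16 -> 256 channels) before the 1x1 conv to `embed_dim`.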
import math
from typing import Tuple

import torch
import torch.nn as nn
import torch.nn.functional as F

from sam2.modeling.sam2_utils import DropPath, get_clones, LayerNorm2d


class MaskDownSampler(nn.Module):
    """
    Progressively downsample a mask by total_stride, each time by stride.
    Note that LayerNorm is applied per *token*, like in ViT.

    With each downsample (by a factor stride**2), channel capacity increases
    by the same factor. In the end, we linearly project to embed_dim channels.
    """

    def __init__(
        self,
        embed_dim=256,
        kernel_size=4,
        stride=4,
        padding=0,
        total_stride=16,
        activation=nn.GELU,
    ):
        super().__init__()
        num_layers = int(math.log2(total_stride) // math.log2(stride))
        assert stride**num_layers == total_stride
        self.encoder = nn.Sequential()
        mask_in_chans, mask_out_chans = 1, 1
        for _ in range(num_layers):
            mask_out_chans = mask_in_chans * (stride**2)
            self.encoder.append(
                nn.Conv2d(
                    mask_in_chans,
                    mask_out_chans,
                    kernel_size=kernel_size,
                    stride=stride,
                    padding=padding,
                )
            )
            self.encoder.append(LayerNorm2d(mask_out_chans))
            self.encoder.append(activation())
            mask_in_chans = mask_out_chans

        self.encoder.append(nn.Conv2d(mask_out_chans, embed_dim, kernel_size=1))

    def forward(self, x):
        return self.encoder(x)


# Lightly adapted from ConvNext (https://github.com/facebookresearch/ConvNeXt)
class CXBlock(nn.Module):
    r"""ConvNeXt Block. There are two equivalent implementations:
    (1) DwConv -> LayerNorm (channels_first) -> 1x1 Conv -> GELU -> 1x1 Conv;
    all in (N, C, H, W)
    (2) DwConv -> Permute to (N, H, W, C); LayerNorm (channels_last) -> Linear ->
    GELU -> Linear; Permute back
    We use (2) as we find it slightly faster in PyTorch

    Args:
        dim (int): Number of input channels.
        drop_path (float): Stochastic depth rate. Default: 0.0
        layer_scale_init_value (float): Init value for Layer Scale. Default: 1e-6.
    """

    def __init__(
        self,
        dim,
        kernel_size=7,
        padding=3,
        drop_path=0.0,
        layer_scale_init_value=1e-6,
        use_dwconv=True,
    ):
        super().__init__()
        self.dwconv = nn.Conv2d(
            dim,
            dim,
            kernel_size=kernel_size,
            padding=padding,
            groups=dim if use_dwconv else 1,
        )  # depthwise conv
        self.norm = LayerNorm2d(dim, eps=1e-6)
        self.pwconv1 = nn.Linear(
            dim, 4 * dim
        )  # pointwise/1x1 convs, implemented with linear layers
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(4 * dim, dim)
        self.gamma = (
            nn.Parameter(layer_scale_init_value * torch.ones((dim)), requires_grad=True)
            if layer_scale_init_value > 0
            else None
        )
        self.drop_path = DropPath(drop_path) if drop_path > 0.0 else nn.Identity()

    def forward(self, x):
        input = x
        x = self.dwconv(x)
        x = self.norm(x)
        x = x.permute(0, 2, 3, 1)  # (N, C, H, W) -> (N, H, W, C)
        x = self.pwconv1(x)
        x = self.act(x)
        x = self.pwconv2(x)
        if self.gamma is not None:
            x = self.gamma * x
        x = x.permute(0, 3, 1, 2)  # (N, H, W, C) -> (N, C, H, W)

        x = input + self.drop_path(x)
        return x


class Fuser(nn.Module):
    def __init__(self, layer, num_layers, dim=None, input_projection=False):
        super().__init__()
        self.proj = nn.Identity()
        self.layers = get_clones(layer, num_layers)

        if input_projection:
            assert dim is not None
            self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x):
        # normally x: (N, C, H, W)
        x = self.proj(x)
        for layer in self.layers:
            x = layer(x)
        return x


class MemoryEncoder(nn.Module):
    def __init__(
        self,
        out_dim,
        mask_downsampler,
        fuser,
        position_encoding,
        in_dim=256,  # in_dim of pix_feats
    ):
        super().__init__()

        self.mask_downsampler = mask_downsampler

        self.pix_feat_proj = nn.Conv2d(in_dim, in_dim, kernel_size=1)
        self.fuser = fuser
        self.position_encoding = position_encoding
        self.out_proj = nn.Identity()
        if out_dim != in_dim:
            self.out_proj = nn.Conv2d(in_dim, out_dim, kernel_size=1)

    def forward(
        self,
        pix_feat: torch.Tensor,
        masks: torch.Tensor,
        skip_mask_sigmoid: bool = False,
    ) -> Tuple[torch.Tensor, torch.Tensor]:
        ## Process masks
        # sigmoid, so that less domain shift from gt masks which are bool
        if not skip_mask_sigmoid:
            masks = F.sigmoid(masks)
        masks = self.mask_downsampler(masks)

        ## Fuse pix_feats and downsampled masks
        # in case the visual features are on CPU, cast them to CUDA
        pix_feat = pix_feat.to(masks.device)

        x = self.pix_feat_proj(pix_feat)
        x = x + masks
        x = self.fuser(x)
        x = self.out_proj(x)

        pos = self.position_encoding(x).to(x.dtype)

        return {"vision_features": x, "vision_pos_enc": [pos]}


================================================
FILE: auto-seg/submodules/segment-anything-2/sam2/modeling/position_encoding.py
================================================
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

import math
from typing import Any, Optional, Tuple

import numpy as np

import torch
from torch import nn


class PositionEmbeddingSine(nn.Module):
    """
    This is a more standard version of the position embedding, very similar to the
    one used by the Attention Is All You Need paper, generalized to work on images.
    """

    def __init__(
        self,
        num_pos_feats,
        temperature: int = 10000,
        normalize: bool = True,
        scale: Optional[float] = None,
    ):
        super().__init__()
        assert num_pos_feats % 2 == 0, "Expecting even model width"
        self.num_pos_feats = num_pos_feats // 2
        self.temperature = temperature
        self.normalize = normalize
        if scale is not None and normalize is False:
            raise ValueError("normalize should be True if scale is passed")
        if scale is None:
            scale = 2 * math.pi
        self.scale = scale

        self.cache = {}

    def _encode_xy(self, x, y):
        # The positions are expected to be normalized
        assert len(x) == len(y) and x.ndim == y.ndim == 1
        x_embed = x * self.scale
        y_embed = y * self.scale

        dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device)
        dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats)

        pos_x = x_embed[:, None] / dim_t
        pos_y = y_embed[:, None] / dim_t
        pos_x = torch.stack(
            (pos_x[:, 0::2].sin(), pos_x[:, 1::2].cos()), dim=2
        ).flatten(1)
        pos_y = torch.stack(
            (pos_y[:, 0::2].sin(), pos_y[:, 1::2].cos()), dim=2
        ).flatten(1)
        return pos_x, pos_y

    @torch.no_grad()
    def encode_boxes(self, x, y, w, h):
        pos_x, pos_y = self._encode_xy(x, y)
        pos = torch.cat((pos_y, pos_x, h[:, None], w[:, None]), dim=1)
        return pos

    encode = encode_boxes  # Backwards compatibility

    @torch.no_grad()
    def encode_points(self, x, y, labels):
        (bx, nx), (by, ny), (bl, nl) = x.shape, y.shape, labels.shape
        assert bx == by and nx == ny and bx == bl and nx == nl
        pos_x, pos_y = self._encode_xy(x.flatten(), y.flatten())
        pos_x, pos_y = pos_x.reshape(bx, nx, -1), pos_y.reshape(by, ny, -1)
        pos = torch.cat((pos_y, pos_x, labels[:, :, None]), dim=2)
        return pos

    @torch.no_grad()
    def forward(self, x: torch.Tensor):
        cache_key = (x.shape[-2], x.shape[-1])
        if cache_key in self.cache:
            return self.cache[cache_key][None].repeat(x.shape[0], 1, 1, 1)
        y_embed = (
            torch.arange(1, x.shape[-2] + 1, dtype=torch.float32, device=x.device)
            .view(1, -1, 1)
            .repeat(x.shape[0], 1, x.shape[-1])
        )
        x_embed = (
            torch.arange(1, x.shape[-1] + 1, dtype=torch.float32, device=x.device)
            .view(1, 1, -1)
            .repeat(x.shape[0], x.shape[-2], 1)
        )

        if self.normalize:
            eps = 1e-6
            y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale
            x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale

        dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device)
        dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats)

        pos_x = x_embed[:, :, :, None] / dim_t
        pos_y = y_embed[:, :, :, None] / dim_t
        pos_x = torch.stack(
            (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4
        ).flatten(3)
        pos_y = torch.stack(
            (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4
        ).flatten(3)
        pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)
        self.cache[cache_key] = pos[0]
        return pos


class PositionEmbeddingRandom(nn.Module):
    """
    Positional encoding using random spatial frequencies.
    """

    def __init__(self, num_pos_feats: int = 64, scale: Optional[float] = None) -> None:
        super().__init__()
        if scale is None or scale <= 0.0:
            scale = 1.0
        self.register_buffer(
            "positional_encoding_gaussian_matrix",
            scale * torch.randn((2, num_pos_feats)),
        )

    def _pe_encoding(self, coords: torch.Tensor) -> torch.Tensor:
        """Positionally encode points that are normalized to [0,1]."""
        # assuming coords are in [0, 1]^2 square and have d_1 x ... x d_n x 2 shape
        coords = 2 * coords - 1
        coords = coords @ self.positional_encoding_gaussian_matrix
        coords = 2 * np.pi * coords
        # outputs d_1 x ... x d_n x C shape
        return torch.cat([torch.sin(coords), torch.cos(coords)], dim=-1)

    def forward(self, size: Tuple[int, int]) -> torch.Tensor:
        """Generate positional encoding for a grid of the specified size."""
        h, w = size
        device: Any = self.positional_encoding_gaussian_matrix.device
        grid = torch.ones((h, w), device=device, dtype=torch.float32)
        y_embed = grid.cumsum(dim=0) - 0.5
        x_embed = grid.cumsum(dim=1) - 0.5
        y_embed = y_embed / h
        x_embed = x_embed / w

        pe = self._pe_encoding(torch.stack([x_embed, y_embed], dim=-1))
        return pe.permute(2, 0, 1)  # C x H x W

    def forward_with_coords(
        self, coords_input: torch.Tensor, image_size: Tuple[int, int]
    ) -> torch.Tensor:
        """Positionally encode points that are not normalized to [0,1]."""
        coords = coords_input.clone()
        coords[:, :, 0] = coords[:, :, 0] / image_size[1]
        coords[:, :, 1] = coords[:, :, 1] / image_size[0]
        return self._pe_encoding(coords.to(torch.float))  # B x N x C


# Rotary Positional Encoding, adapted from:
# 1. https://github.com/meta-llama/codellama/blob/main/llama/model.py
# 2. https://github.com/naver-ai/rope-vit
# 3.
https://github.com/lucidrains/rotary-embedding-torch def init_t_xy(end_x: int, end_y: int): t = torch.arange(end_x * end_y, dtype=torch.float32) t_x = (t % end_x).float() t_y = torch.div(t, end_x, rounding_mode="floor").float() return t_x, t_y def compute_axial_cis(dim: int, end_x: int, end_y: int, theta: float = 10000.0): freqs_x = 1.0 / (theta ** (torch.arange(0, dim, 4)[: (dim // 4)].float() / dim)) freqs_y = 1.0 / (theta ** (torch.arange(0, dim, 4)[: (dim // 4)].float() / dim)) t_x, t_y = init_t_xy(end_x, end_y) freqs_x = torch.outer(t_x, freqs_x) freqs_y = torch.outer(t_y, freqs_y) freqs_cis_x = torch.polar(torch.ones_like(freqs_x), freqs_x) freqs_cis_y = torch.polar(torch.ones_like(freqs_y), freqs_y) return torch.cat([freqs_cis_x, freqs_cis_y], dim=-1) def reshape_for_broadcast(freqs_cis: torch.Tensor, x: torch.Tensor): ndim = x.ndim assert 0 <= 1 < ndim assert freqs_cis.shape == (x.shape[-2], x.shape[-1]) shape = [d if i >= ndim - 2 else 1 for i, d in enumerate(x.shape)] return freqs_cis.view(*shape) def apply_rotary_enc( xq: torch.Tensor, xk: torch.Tensor, freqs_cis: torch.Tensor, repeat_freqs_k: bool = False, ): xq_ = torch.view_as_complex(xq.float().reshape(*xq.shape[:-1], -1, 2)) xk_ = ( torch.view_as_complex(xk.float().reshape(*xk.shape[:-1], -1, 2)) if xk.shape[-2] != 0 else None ) freqs_cis = reshape_for_broadcast(freqs_cis, xq_) xq_out = torch.view_as_real(xq_ * freqs_cis).flatten(3) if xk_ is None: # no keys to rotate, due to dropout return xq_out.type_as(xq).to(xq.device), xk # repeat freqs along seq_len dim to match k seq_len if repeat_freqs_k: r = xk_.shape[-2] // xq_.shape[-2] if freqs_cis.is_cuda: freqs_cis = freqs_cis.repeat(*([1] * (freqs_cis.ndim - 2)), r, 1) else: # torch.repeat on complex numbers may not be supported on non-CUDA devices # (freqs_cis has 4 dims and we repeat on dim 2) so we use expand + flatten freqs_cis = freqs_cis.unsqueeze(2).expand(-1, -1, r, -1, -1).flatten(2, 3) xk_out = torch.view_as_real(xk_ * freqs_cis).flatten(3) 
return xq_out.type_as(xq).to(xq.device), xk_out.type_as(xk).to(xk.device) ================================================ FILE: auto-seg/submodules/segment-anything-2/sam2/modeling/sam/__init__.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. ================================================ FILE: auto-seg/submodules/segment-anything-2/sam2/modeling/sam/mask_decoder.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. from typing import List, Optional, Tuple, Type import torch from torch import nn from sam2.modeling.sam2_utils import LayerNorm2d, MLP class MaskDecoder(nn.Module): def __init__( self, *, transformer_dim: int, transformer: nn.Module, num_multimask_outputs: int = 3, activation: Type[nn.Module] = nn.GELU, iou_head_depth: int = 3, iou_head_hidden_dim: int = 256, use_high_res_features: bool = False, iou_prediction_use_sigmoid=False, dynamic_multimask_via_stability=False, dynamic_multimask_stability_delta=0.05, dynamic_multimask_stability_thresh=0.98, pred_obj_scores: bool = False, pred_obj_scores_mlp: bool = False, use_multimask_token_for_obj_ptr: bool = False, ) -> None: """ Predicts masks given an image and prompt embeddings, using a transformer architecture. 
        Arguments:
          transformer_dim (int): the channel dimension of the transformer
          transformer (nn.Module): the transformer used to predict masks
          num_multimask_outputs (int): the number of masks to predict
            when disambiguating masks
          activation (nn.Module): the type of activation to use when
            upscaling masks
          iou_head_depth (int): the depth of the MLP used to predict
            mask quality
          iou_head_hidden_dim (int): the hidden dimension of the MLP
            used to predict mask quality
        """
        super().__init__()
        self.transformer_dim = transformer_dim
        self.transformer = transformer

        self.num_multimask_outputs = num_multimask_outputs

        self.iou_token = nn.Embedding(1, transformer_dim)
        self.num_mask_tokens = num_multimask_outputs + 1
        self.mask_tokens = nn.Embedding(self.num_mask_tokens, transformer_dim)

        self.pred_obj_scores = pred_obj_scores
        if self.pred_obj_scores:
            self.obj_score_token = nn.Embedding(1, transformer_dim)
        self.use_multimask_token_for_obj_ptr = use_multimask_token_for_obj_ptr

        self.output_upscaling = nn.Sequential(
            nn.ConvTranspose2d(
                transformer_dim, transformer_dim // 4, kernel_size=2, stride=2
            ),
            LayerNorm2d(transformer_dim // 4),
            activation(),
            nn.ConvTranspose2d(
                transformer_dim // 4, transformer_dim // 8, kernel_size=2, stride=2
            ),
            activation(),
        )
        self.use_high_res_features = use_high_res_features
        if use_high_res_features:
            self.conv_s0 = nn.Conv2d(
                transformer_dim, transformer_dim // 8, kernel_size=1, stride=1
            )
            self.conv_s1 = nn.Conv2d(
                transformer_dim, transformer_dim // 4, kernel_size=1, stride=1
            )

        self.output_hypernetworks_mlps = nn.ModuleList(
            [
                MLP(transformer_dim, transformer_dim, transformer_dim // 8, 3)
                for i in range(self.num_mask_tokens)
            ]
        )

        self.iou_prediction_head = MLP(
            transformer_dim,
            iou_head_hidden_dim,
            self.num_mask_tokens,
            iou_head_depth,
            sigmoid_output=iou_prediction_use_sigmoid,
        )
        if self.pred_obj_scores:
            self.pred_obj_score_head = nn.Linear(transformer_dim, 1)
            if pred_obj_scores_mlp:
                self.pred_obj_score_head = MLP(transformer_dim, transformer_dim, 1, 3)

        # When outputting a single mask, optionally we can dynamically fall back to the best
        # multimask output token if the single mask output token gives low stability scores.
        self.dynamic_multimask_via_stability = dynamic_multimask_via_stability
        self.dynamic_multimask_stability_delta = dynamic_multimask_stability_delta
        self.dynamic_multimask_stability_thresh = dynamic_multimask_stability_thresh

    def forward(
        self,
        image_embeddings: torch.Tensor,
        image_pe: torch.Tensor,
        sparse_prompt_embeddings: torch.Tensor,
        dense_prompt_embeddings: torch.Tensor,
        multimask_output: bool,
        repeat_image: bool,
        high_res_features: Optional[List[torch.Tensor]] = None,
    ) -> Tuple[torch.Tensor, torch.Tensor]:
        """
        Predict masks given image and prompt embeddings.

        Arguments:
          image_embeddings (torch.Tensor): the embeddings from the image encoder
          image_pe (torch.Tensor): positional encoding with the shape of image_embeddings
          sparse_prompt_embeddings (torch.Tensor): the embeddings of the points and boxes
          dense_prompt_embeddings (torch.Tensor): the embeddings of the mask inputs
          multimask_output (bool): Whether to return multiple masks or a single
            mask.

        Returns:
          torch.Tensor: batched predicted masks
          torch.Tensor: batched predictions of mask quality
          torch.Tensor: batched SAM token for mask output
        """
        masks, iou_pred, mask_tokens_out, object_score_logits = self.predict_masks(
            image_embeddings=image_embeddings,
            image_pe=image_pe,
            sparse_prompt_embeddings=sparse_prompt_embeddings,
            dense_prompt_embeddings=dense_prompt_embeddings,
            repeat_image=repeat_image,
            high_res_features=high_res_features,
        )

        # Select the correct mask or masks for output
        if multimask_output:
            masks = masks[:, 1:, :, :]
            iou_pred = iou_pred[:, 1:]
        elif self.dynamic_multimask_via_stability and not self.training:
            masks, iou_pred = self._dynamic_multimask_via_stability(masks, iou_pred)
        else:
            masks = masks[:, 0:1, :, :]
            iou_pred = iou_pred[:, 0:1]

        if multimask_output and self.use_multimask_token_for_obj_ptr:
            sam_tokens_out = mask_tokens_out[:, 1:]  # [b, 3, c] shape
        else:
            # Take the mask output token. Here we *always* use the token for single mask output.
            # At test time, even if we track after 1-click (and using multimask_output=True),
            # we still take the single mask token here. The rationale is that we always track
            # after multiple clicks during training, so the past tokens seen during training
            # are always the single mask token (and we'll let it be the object-memory token).
            sam_tokens_out = mask_tokens_out[:, 0:1]  # [b, 1, c] shape

        # Prepare output
        return masks, iou_pred, sam_tokens_out, object_score_logits

    def predict_masks(
        self,
        image_embeddings: torch.Tensor,
        image_pe: torch.Tensor,
        sparse_prompt_embeddings: torch.Tensor,
        dense_prompt_embeddings: torch.Tensor,
        repeat_image: bool,
        high_res_features: Optional[List[torch.Tensor]] = None,
    ) -> Tuple[torch.Tensor, torch.Tensor]:
        """Predicts masks. See 'forward' for more details."""
        # Concatenate output tokens
        s = 0
        if self.pred_obj_scores:
            output_tokens = torch.cat(
                [
                    self.obj_score_token.weight,
                    self.iou_token.weight,
                    self.mask_tokens.weight,
                ],
                dim=0,
            )
            s = 1
        else:
            output_tokens = torch.cat(
                [self.iou_token.weight, self.mask_tokens.weight], dim=0
            )
        output_tokens = output_tokens.unsqueeze(0).expand(
            sparse_prompt_embeddings.size(0), -1, -1
        )
        tokens = torch.cat((output_tokens, sparse_prompt_embeddings), dim=1)

        # Expand per-image data in batch direction to be per-mask
        if repeat_image:
            src = torch.repeat_interleave(image_embeddings, tokens.shape[0], dim=0)
        else:
            assert image_embeddings.shape[0] == tokens.shape[0]
            src = image_embeddings
        src = src + dense_prompt_embeddings
        assert (
            image_pe.size(0) == 1
        ), "image_pe should have size 1 in batch dim (from `get_dense_pe()`)"
        pos_src = torch.repeat_interleave(image_pe, tokens.shape[0], dim=0)
        b, c, h, w = src.shape

        # Run the transformer
        hs, src = self.transformer(src, pos_src, tokens)
        iou_token_out = hs[:, s, :]
        mask_tokens_out = hs[:, s + 1 : (s + 1 + self.num_mask_tokens), :]

        # Upscale mask embeddings and predict masks using the mask tokens
        src = src.transpose(1, 2).view(b, c, h, w)
        if not self.use_high_res_features:
            upscaled_embedding = self.output_upscaling(src)
        else:
            dc1, ln1, act1, dc2, act2 = self.output_upscaling
            feat_s0, feat_s1 = high_res_features
            upscaled_embedding = act1(ln1(dc1(src) + feat_s1))
            upscaled_embedding = act2(dc2(upscaled_embedding) + feat_s0)

        hyper_in_list: List[torch.Tensor] = []
        for i in range(self.num_mask_tokens):
            hyper_in_list.append(
                self.output_hypernetworks_mlps[i](mask_tokens_out[:, i, :])
            )
        hyper_in = torch.stack(hyper_in_list, dim=1)
        b, c, h, w = upscaled_embedding.shape
        masks = (hyper_in @ upscaled_embedding.view(b, c, h * w)).view(b, -1, h, w)

        # Generate mask quality predictions
        iou_pred = self.iou_prediction_head(iou_token_out)
        if self.pred_obj_scores:
            assert s == 1
            object_score_logits = self.pred_obj_score_head(hs[:, 0, :])
        else:
            # Obj scores logits - default to 10.0, i.e. assuming the object is present, sigmoid(10)=1
            object_score_logits = 10.0 * iou_pred.new_ones(iou_pred.shape[0], 1)

        return masks, iou_pred, mask_tokens_out, object_score_logits

    def _get_stability_scores(self, mask_logits):
        """
        Compute stability scores of the mask logits based on the IoU between upper and
        lower thresholds.
        """
        mask_logits = mask_logits.flatten(-2)
        stability_delta = self.dynamic_multimask_stability_delta
        area_i = torch.sum(mask_logits > stability_delta, dim=-1).float()
        area_u = torch.sum(mask_logits > -stability_delta, dim=-1).float()
        stability_scores = torch.where(area_u > 0, area_i / area_u, 1.0)
        return stability_scores

    def _dynamic_multimask_via_stability(self, all_mask_logits, all_iou_scores):
        """
        When outputting a single mask, if the stability score from the current single-mask
        output (based on output token 0) falls below a threshold, we instead select from
        multi-mask outputs (based on output token 1~3) the mask with the highest predicted
        IoU score. This is intended to ensure a valid mask for both clicking and tracking.
        """
        # The best mask from multimask output tokens (1~3)
        multimask_logits = all_mask_logits[:, 1:, :, :]
        multimask_iou_scores = all_iou_scores[:, 1:]
        best_scores_inds = torch.argmax(multimask_iou_scores, dim=-1)
        batch_inds = torch.arange(
            multimask_iou_scores.size(0), device=all_iou_scores.device
        )
        best_multimask_logits = multimask_logits[batch_inds, best_scores_inds]
        best_multimask_logits = best_multimask_logits.unsqueeze(1)
        best_multimask_iou_scores = multimask_iou_scores[batch_inds, best_scores_inds]
        best_multimask_iou_scores = best_multimask_iou_scores.unsqueeze(1)

        # The mask from singlemask output token 0 and its stability score
        singlemask_logits = all_mask_logits[:, 0:1, :, :]
        singlemask_iou_scores = all_iou_scores[:, 0:1]
        stability_scores = self._get_stability_scores(singlemask_logits)
        is_stable = stability_scores >= self.dynamic_multimask_stability_thresh

        # Dynamically fall back to best multimask output upon low stability scores.
        mask_logits_out = torch.where(
            is_stable[..., None, None].expand_as(singlemask_logits),
            singlemask_logits,
            best_multimask_logits,
        )
        iou_scores_out = torch.where(
            is_stable.expand_as(singlemask_iou_scores),
            singlemask_iou_scores,
            best_multimask_iou_scores,
        )
        return mask_logits_out, iou_scores_out


================================================
FILE: auto-seg/submodules/segment-anything-2/sam2/modeling/sam/prompt_encoder.py
================================================
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
from typing import Optional, Tuple, Type

import torch
from torch import nn

from sam2.modeling.position_encoding import PositionEmbeddingRandom
from sam2.modeling.sam2_utils import LayerNorm2d


class PromptEncoder(nn.Module):
    def __init__(
        self,
        embed_dim: int,
        image_embedding_size: Tuple[int, int],
        input_image_size: Tuple[int, int],
        mask_in_chans: int,
        activation: Type[nn.Module] = nn.GELU,
    ) -> None:
        """
        Encodes prompts for input to SAM's mask decoder.

        Arguments:
          embed_dim (int): The prompts' embedding dimension
          image_embedding_size (tuple(int, int)): The spatial size of the
            image embedding, as (H, W).
          input_image_size (int): The padded size of the image as input
            to the image encoder, as (H, W).
          mask_in_chans (int): The number of hidden channels used for
            encoding input masks.
          activation (nn.Module): The activation to use when encoding
            input masks.
        """
        super().__init__()
        self.embed_dim = embed_dim
        self.input_image_size = input_image_size
        self.image_embedding_size = image_embedding_size
        self.pe_layer = PositionEmbeddingRandom(embed_dim // 2)

        self.num_point_embeddings: int = 4  # pos/neg point + 2 box corners
        point_embeddings = [
            nn.Embedding(1, embed_dim) for i in range(self.num_point_embeddings)
        ]
        self.point_embeddings = nn.ModuleList(point_embeddings)
        self.not_a_point_embed = nn.Embedding(1, embed_dim)

        self.mask_input_size = (
            4 * image_embedding_size[0],
            4 * image_embedding_size[1],
        )
        self.mask_downscaling = nn.Sequential(
            nn.Conv2d(1, mask_in_chans // 4, kernel_size=2, stride=2),
            LayerNorm2d(mask_in_chans // 4),
            activation(),
            nn.Conv2d(mask_in_chans // 4, mask_in_chans, kernel_size=2, stride=2),
            LayerNorm2d(mask_in_chans),
            activation(),
            nn.Conv2d(mask_in_chans, embed_dim, kernel_size=1),
        )
        self.no_mask_embed = nn.Embedding(1, embed_dim)

    def get_dense_pe(self) -> torch.Tensor:
        """
        Returns the positional encoding used to encode point prompts,
        applied to a dense set of points the shape of the image encoding.

        Returns:
          torch.Tensor: Positional encoding with shape
            1x(embed_dim)x(embedding_h)x(embedding_w)
        """
        return self.pe_layer(self.image_embedding_size).unsqueeze(0)

    def _embed_points(
        self,
        points: torch.Tensor,
        labels: torch.Tensor,
        pad: bool,
    ) -> torch.Tensor:
        """Embeds point prompts."""
        points = points + 0.5  # Shift to center of pixel
        if pad:
            padding_point = torch.zeros((points.shape[0], 1, 2), device=points.device)
            padding_label = -torch.ones((labels.shape[0], 1), device=labels.device)
            points = torch.cat([points, padding_point], dim=1)
            labels = torch.cat([labels, padding_label], dim=1)
        point_embedding = self.pe_layer.forward_with_coords(
            points, self.input_image_size
        )
        point_embedding[labels == -1] = 0.0
        point_embedding[labels == -1] += self.not_a_point_embed.weight
        point_embedding[labels == 0] += self.point_embeddings[0].weight
        point_embedding[labels == 1] += self.point_embeddings[1].weight
        point_embedding[labels == 2] += self.point_embeddings[2].weight
        point_embedding[labels == 3] += self.point_embeddings[3].weight
        return point_embedding

    def _embed_boxes(self, boxes: torch.Tensor) -> torch.Tensor:
        """Embeds box prompts."""
        boxes = boxes + 0.5  # Shift to center of pixel
        coords = boxes.reshape(-1, 2, 2)
        corner_embedding = self.pe_layer.forward_with_coords(
            coords, self.input_image_size
        )
        corner_embedding[:, 0, :] += self.point_embeddings[2].weight
        corner_embedding[:, 1, :] += self.point_embeddings[3].weight
        return corner_embedding

    def _embed_masks(self, masks: torch.Tensor) -> torch.Tensor:
        """Embeds mask inputs."""
        mask_embedding = self.mask_downscaling(masks)
        return mask_embedding

    def _get_batch_size(
        self,
        points: Optional[Tuple[torch.Tensor, torch.Tensor]],
        boxes: Optional[torch.Tensor],
        masks: Optional[torch.Tensor],
    ) -> int:
        """
        Gets the batch size of the output given the batch size of the input prompts.
        """
        if points is not None:
            return points[0].shape[0]
        elif boxes is not None:
            return boxes.shape[0]
        elif masks is not None:
            return masks.shape[0]
        else:
            return 1

    def _get_device(self) -> torch.device:
        return self.point_embeddings[0].weight.device

    def forward(
        self,
        points: Optional[Tuple[torch.Tensor, torch.Tensor]],
        boxes: Optional[torch.Tensor],
        masks: Optional[torch.Tensor],
    ) -> Tuple[torch.Tensor, torch.Tensor]:
        """
        Embeds different types of prompts, returning both sparse and dense
        embeddings.

        Arguments:
          points (tuple(torch.Tensor, torch.Tensor) or none): point coordinates
            and labels to embed.
          boxes (torch.Tensor or none): boxes to embed
          masks (torch.Tensor or none): masks to embed

        Returns:
          torch.Tensor: sparse embeddings for the points and boxes, with shape
            BxNx(embed_dim), where N is determined by the number of input points
            and boxes.
          torch.Tensor: dense embeddings for the masks, in the shape
            Bx(embed_dim)x(embed_H)x(embed_W)
        """
        bs = self._get_batch_size(points, boxes, masks)
        sparse_embeddings = torch.empty(
            (bs, 0, self.embed_dim), device=self._get_device()
        )
        if points is not None:
            coords, labels = points
            point_embeddings = self._embed_points(coords, labels, pad=(boxes is None))
            sparse_embeddings = torch.cat([sparse_embeddings, point_embeddings], dim=1)
        if boxes is not None:
            box_embeddings = self._embed_boxes(boxes)
            sparse_embeddings = torch.cat([sparse_embeddings, box_embeddings], dim=1)

        if masks is not None:
            dense_embeddings = self._embed_masks(masks)
        else:
            dense_embeddings = self.no_mask_embed.weight.reshape(1, -1, 1, 1).expand(
                bs, -1, self.image_embedding_size[0], self.image_embedding_size[1]
            )

        return sparse_embeddings, dense_embeddings


================================================
FILE: auto-seg/submodules/segment-anything-2/sam2/modeling/sam/transformer.py
================================================
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

import contextlib
import math
import warnings
from functools import partial
from typing import Tuple, Type

import torch
import torch.nn.functional as F
from torch import nn, Tensor

from sam2.modeling.position_encoding import apply_rotary_enc, compute_axial_cis
from sam2.modeling.sam2_utils import MLP
from sam2.utils.misc import get_sdpa_settings

warnings.simplefilter(action="ignore", category=FutureWarning)
# Check whether Flash Attention is available (and use it by default)
OLD_GPU, USE_FLASH_ATTN, MATH_KERNEL_ON = get_sdpa_settings()
# A fallback setting to allow all available kernels if Flash Attention fails
ALLOW_ALL_KERNELS = False


def sdp_kernel_context(dropout_p):
    """
    Get the context for the attention scaled dot-product kernel. We use Flash Attention
    by default, but fall back to all available kernels if Flash Attention fails.
    """
    if ALLOW_ALL_KERNELS:
        return contextlib.nullcontext()

    return torch.backends.cuda.sdp_kernel(
        enable_flash=USE_FLASH_ATTN,
        # if Flash attention kernel is off, then math kernel needs to be enabled
        enable_math=(OLD_GPU and dropout_p > 0.0) or MATH_KERNEL_ON,
        enable_mem_efficient=OLD_GPU,
    )


class TwoWayTransformer(nn.Module):
    def __init__(
        self,
        depth: int,
        embedding_dim: int,
        num_heads: int,
        mlp_dim: int,
        activation: Type[nn.Module] = nn.ReLU,
        attention_downsample_rate: int = 2,
    ) -> None:
        """
        A transformer decoder that attends to an input image using
        queries whose positional embedding is supplied.

        Args:
          depth (int): number of layers in the transformer
          embedding_dim (int): the channel dimension for the input embeddings
          num_heads (int): the number of heads for multihead attention. Must
            divide embedding_dim
          mlp_dim (int): the channel dimension internal to the MLP block
          activation (nn.Module): the activation to use in the MLP block
        """
        super().__init__()
        self.depth = depth
        self.embedding_dim = embedding_dim
        self.num_heads = num_heads
        self.mlp_dim = mlp_dim
        self.layers = nn.ModuleList()

        for i in range(depth):
            self.layers.append(
                TwoWayAttentionBlock(
                    embedding_dim=embedding_dim,
                    num_heads=num_heads,
                    mlp_dim=mlp_dim,
                    activation=activation,
                    attention_downsample_rate=attention_downsample_rate,
                    skip_first_layer_pe=(i == 0),
                )
            )

        self.final_attn_token_to_image = Attention(
            embedding_dim, num_heads, downsample_rate=attention_downsample_rate
        )
        self.norm_final_attn = nn.LayerNorm(embedding_dim)

    def forward(
        self,
        image_embedding: Tensor,
        image_pe: Tensor,
        point_embedding: Tensor,
    ) -> Tuple[Tensor, Tensor]:
        """
        Args:
          image_embedding (torch.Tensor): image to attend to. Should be shape
            B x embedding_dim x h x w for any h and w.
          image_pe (torch.Tensor): the positional encoding to add to the image. Must
            have the same shape as image_embedding.
          point_embedding (torch.Tensor): the embedding to add to the query points.
            Must have shape B x N_points x embedding_dim for any N_points.

        Returns:
          torch.Tensor: the processed point_embedding
          torch.Tensor: the processed image_embedding
        """
        # BxCxHxW -> BxHWxC == B x N_image_tokens x C
        bs, c, h, w = image_embedding.shape
        image_embedding = image_embedding.flatten(2).permute(0, 2, 1)
        image_pe = image_pe.flatten(2).permute(0, 2, 1)

        # Prepare queries
        queries = point_embedding
        keys = image_embedding

        # Apply transformer blocks and final layernorm
        for layer in self.layers:
            queries, keys = layer(
                queries=queries,
                keys=keys,
                query_pe=point_embedding,
                key_pe=image_pe,
            )

        # Apply the final attention layer from the points to the image
        q = queries + point_embedding
        k = keys + image_pe
        attn_out = self.final_attn_token_to_image(q=q, k=k, v=keys)
        queries = queries + attn_out
        queries = self.norm_final_attn(queries)

        return queries, keys


class TwoWayAttentionBlock(nn.Module):
    def __init__(
        self,
        embedding_dim: int,
        num_heads: int,
        mlp_dim: int = 2048,
        activation: Type[nn.Module] = nn.ReLU,
        attention_downsample_rate: int = 2,
        skip_first_layer_pe: bool = False,
    ) -> None:
        """
        A transformer block with four layers: (1) self-attention of sparse
        inputs, (2) cross attention of sparse inputs to dense inputs, (3) mlp
        block on sparse inputs, and (4) cross attention of dense inputs to sparse
        inputs.

        Arguments:
          embedding_dim (int): the channel dimension of the embeddings
          num_heads (int): the number of heads in the attention layers
          mlp_dim (int): the hidden dimension of the mlp block
          activation (nn.Module): the activation of the mlp block
          skip_first_layer_pe (bool): skip the PE on the first layer
        """
        super().__init__()
        self.self_attn = Attention(embedding_dim, num_heads)
        self.norm1 = nn.LayerNorm(embedding_dim)

        self.cross_attn_token_to_image = Attention(
            embedding_dim, num_heads, downsample_rate=attention_downsample_rate
        )
        self.norm2 = nn.LayerNorm(embedding_dim)

        self.mlp = MLP(
            embedding_dim, mlp_dim, embedding_dim, num_layers=2, activation=activation
        )
        self.norm3 = nn.LayerNorm(embedding_dim)

        self.norm4 = nn.LayerNorm(embedding_dim)
        self.cross_attn_image_to_token = Attention(
            embedding_dim, num_heads, downsample_rate=attention_downsample_rate
        )

        self.skip_first_layer_pe = skip_first_layer_pe

    def forward(
        self, queries: Tensor, keys: Tensor, query_pe: Tensor, key_pe: Tensor
    ) -> Tuple[Tensor, Tensor]:
        # Self attention block
        if self.skip_first_layer_pe:
            queries = self.self_attn(q=queries, k=queries, v=queries)
        else:
            q = queries + query_pe
            attn_out = self.self_attn(q=q, k=q, v=queries)
            queries = queries + attn_out
        queries = self.norm1(queries)

        # Cross attention block, tokens attending to image embedding
        q = queries + query_pe
        k = keys + key_pe
        attn_out = self.cross_attn_token_to_image(q=q, k=k, v=keys)
        queries = queries + attn_out
        queries = self.norm2(queries)

        # MLP block
        mlp_out = self.mlp(queries)
        queries = queries + mlp_out
        queries = self.norm3(queries)

        # Cross attention block, image embedding attending to tokens
        q = queries + query_pe
        k = keys + key_pe
        attn_out = self.cross_attn_image_to_token(q=k, k=q, v=queries)
        keys = keys + attn_out
        keys = self.norm4(keys)

        return queries, keys


class Attention(nn.Module):
    """
    An attention layer that allows for downscaling the size of the embedding
    after projection to queries, keys, and values.
    """

    def __init__(
        self,
        embedding_dim: int,
        num_heads: int,
        downsample_rate: int = 1,
        dropout: float = 0.0,
        kv_in_dim: int = None,
    ) -> None:
        super().__init__()
        self.embedding_dim = embedding_dim
        self.kv_in_dim = kv_in_dim if kv_in_dim is not None else embedding_dim
        self.internal_dim = embedding_dim // downsample_rate
        self.num_heads = num_heads
        assert (
            self.internal_dim % num_heads == 0
        ), "num_heads must divide embedding_dim."

        self.q_proj = nn.Linear(embedding_dim, self.internal_dim)
        self.k_proj = nn.Linear(self.kv_in_dim, self.internal_dim)
        self.v_proj = nn.Linear(self.kv_in_dim, self.internal_dim)
        self.out_proj = nn.Linear(self.internal_dim, embedding_dim)

        self.dropout_p = dropout

    def _separate_heads(self, x: Tensor, num_heads: int) -> Tensor:
        b, n, c = x.shape
        x = x.reshape(b, n, num_heads, c // num_heads)
        return x.transpose(1, 2)  # B x N_heads x N_tokens x C_per_head

    def _recombine_heads(self, x: Tensor) -> Tensor:
        b, n_heads, n_tokens, c_per_head = x.shape
        x = x.transpose(1, 2)
        return x.reshape(b, n_tokens, n_heads * c_per_head)  # B x N_tokens x C

    def forward(self, q: Tensor, k: Tensor, v: Tensor) -> Tensor:
        # Input projections
        q = self.q_proj(q)
        k = self.k_proj(k)
        v = self.v_proj(v)

        # Separate into heads
        q = self._separate_heads(q, self.num_heads)
        k = self._separate_heads(k, self.num_heads)
        v = self._separate_heads(v, self.num_heads)

        dropout_p = self.dropout_p if self.training else 0.0
        # Attention
        try:
            with sdp_kernel_context(dropout_p):
                out = F.scaled_dot_product_attention(q, k, v, dropout_p=dropout_p)
        except Exception as e:
            # Fall back to all kernels if the Flash attention kernel fails
            warnings.warn(
                f"Flash Attention kernel failed due to: {e}\nFalling back to all available "
                f"kernels for scaled_dot_product_attention (which may have a slower speed).",
                category=UserWarning,
                stacklevel=2,
            )
            global ALLOW_ALL_KERNELS
            ALLOW_ALL_KERNELS = True
            out = F.scaled_dot_product_attention(q, k, v, dropout_p=dropout_p)

        out = self._recombine_heads(out)
        out = self.out_proj(out)

        return out


class RoPEAttention(Attention):
    """Attention with rotary position encoding."""

    def __init__(
        self,
        *args,
        rope_theta=10000.0,
        # whether to repeat q rope to match k length
        # this is needed for cross-attention to memories
        rope_k_repeat=False,
        feat_sizes=(32, 32),  # [w, h] for stride 16 feats at 512 resolution
        **kwargs,
    ):
        super().__init__(*args, **kwargs)

        self.compute_cis = partial(
            compute_axial_cis, dim=self.internal_dim // self.num_heads, theta=rope_theta
        )
        freqs_cis = self.compute_cis(end_x=feat_sizes[0], end_y=feat_sizes[1])
        self.freqs_cis = freqs_cis
        self.rope_k_repeat = rope_k_repeat

    def forward(
        self, q: Tensor, k: Tensor, v: Tensor, num_k_exclude_rope: int = 0
    ) -> Tensor:
        # Input projections
        q = self.q_proj(q)
        k = self.k_proj(k)
        v = self.v_proj(v)

        # Separate into heads
        q = self._separate_heads(q, self.num_heads)
        k = self._separate_heads(k, self.num_heads)
        v = self._separate_heads(v, self.num_heads)

        # Apply rotary position encoding
        w = h = math.sqrt(q.shape[-2])
        self.freqs_cis = self.freqs_cis.to(q.device)
        if self.freqs_cis.shape[0] != q.shape[-2]:
            self.freqs_cis = self.compute_cis(end_x=w, end_y=h).to(q.device)
        if q.shape[-2] != k.shape[-2]:
            assert self.rope_k_repeat

        num_k_rope = k.size(-2) - num_k_exclude_rope
        q, k[:, :, :num_k_rope] = apply_rotary_enc(
            q,
            k[:, :, :num_k_rope],
            freqs_cis=self.freqs_cis,
            repeat_freqs_k=self.rope_k_repeat,
        )

        dropout_p = self.dropout_p if self.training else 0.0
        # Attention
        try:
            with sdp_kernel_context(dropout_p):
                out = F.scaled_dot_product_attention(q, k, v, dropout_p=dropout_p)
        except Exception as e:
            # Fall back to all kernels if the Flash attention kernel fails
            warnings.warn(
                f"Flash Attention kernel failed due to: {e}\nFalling back to all available "
                f"kernels for scaled_dot_product_attention (which may have a slower speed).",
                category=UserWarning,
                stacklevel=2,
            )
            global ALLOW_ALL_KERNELS
            ALLOW_ALL_KERNELS = True
            out = F.scaled_dot_product_attention(q, k, v, dropout_p=dropout_p)

        out = self._recombine_heads(out)
        out = self.out_proj(out)

        return out


================================================
FILE: auto-seg/submodules/segment-anything-2/sam2/modeling/sam2_base.py
================================================
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

import torch
import torch.distributed
import torch.nn.functional as F
from torch.nn.init import trunc_normal_

from sam2.modeling.sam.mask_decoder import MaskDecoder
from sam2.modeling.sam.prompt_encoder import PromptEncoder
from sam2.modeling.sam.transformer import TwoWayTransformer
from sam2.modeling.sam2_utils import get_1d_sine_pe, MLP, select_closest_cond_frames

# a large negative value as a placeholder score for missing objects
NO_OBJ_SCORE = -1024.0


class SAM2Base(torch.nn.Module):
    def __init__(
        self,
        image_encoder,
        memory_attention,
        memory_encoder,
        num_maskmem=7,  # default 1 input frame + 6 previous frames
        image_size=512,
        backbone_stride=16,  # stride of the image backbone output
        sigmoid_scale_for_mem_enc=1.0,  # scale factor for mask sigmoid prob
        sigmoid_bias_for_mem_enc=0.0,  # bias factor for mask sigmoid prob
        # During evaluation, whether to binarize the sigmoid mask logits on interacted frames with clicks
        binarize_mask_from_pts_for_mem_enc=False,
        use_mask_input_as_output_without_sam=False,  # on frames with mask input, whether to directly output the input mask without using a SAM prompt encoder + mask decoder
        # The maximum number of conditioning frames to participate in the memory attention (-1 means no limit; if there are more conditioning frames than this limit,
        # we only cross-attend to the temporally closest `max_cond_frames_in_attn` conditioning frames in the encoder when tracking each frame). This gives the model
        # a temporal locality when handling a large number of annotated frames (since closer frames should be more important) and also avoids GPU OOM.
        max_cond_frames_in_attn=-1,
        # on the first frame, whether to directly add the no-memory embedding to the image feature
        # (instead of using the transformer encoder)
        directly_add_no_mem_embed=False,
        # whether to use high-resolution feature maps in the SAM mask decoder
        use_high_res_features_in_sam=False,
        # whether to output multiple (3) masks for the first click on initial conditioning frames
        multimask_output_in_sam=False,
        # the minimum and maximum number of clicks to use multimask_output_in_sam (only relevant when `multimask_output_in_sam=True`;
        # default is 1 for both, meaning that only the first click gives multimask output; also note that a box counts as two points)
        multimask_min_pt_num=1,
        multimask_max_pt_num=1,
        # whether to also use multimask output for tracking (not just for the first click on initial conditioning frames; only relevant when `multimask_output_in_sam=True`)
        multimask_output_for_tracking=False,
        # Whether to use multimask tokens for obj ptr; Only relevant when both
        # use_obj_ptrs_in_encoder=True and multimask_output_for_tracking=True
        use_multimask_token_for_obj_ptr: bool = False,
        # whether to use sigmoid to restrict ious prediction to [0-1]
        iou_prediction_use_sigmoid=False,
        # The memory bank's temporal stride during evaluation (i.e. the `r` parameter in XMem and Cutie; XMem and Cutie use r=5).
        # For r>1, the (self.num_maskmem - 1) non-conditioning memory frames consist of
        # (self.num_maskmem - 2) nearest frames from every r-th frames, plus the last frame.
        memory_temporal_stride_for_eval=1,
        # whether to apply non-overlapping constraints on the object masks in the memory encoder during evaluation (to avoid/alleviate superposing masks)
        non_overlap_masks_for_mem_enc=False,
        # whether to cross-attend to object pointers from other frames (based on SAM output tokens) in the encoder
        use_obj_ptrs_in_encoder=False,
        # the maximum number of object pointers from other frames in encoder cross attention (only relevant when `use_obj_ptrs_in_encoder=True`)
        max_obj_ptrs_in_encoder=16,
        # whether to add temporal positional encoding to the object pointers in the encoder (only relevant when `use_obj_ptrs_in_encoder=True`)
        add_tpos_enc_to_obj_ptrs=True,
        # whether to add an extra linear projection layer for the temporal positional encoding in the object pointers to avoid potential interference
        # with spatial positional encoding (only relevant when both `use_obj_ptrs_in_encoder=True` and `add_tpos_enc_to_obj_ptrs=True`)
        proj_tpos_enc_in_obj_ptrs=False,
        # whether to use signed distance (instead of unsigned absolute distance) in the temporal positional encoding in the object pointers
        # (only relevant when both `use_obj_ptrs_in_encoder=True` and `add_tpos_enc_to_obj_ptrs=True`)
        use_signed_tpos_enc_to_obj_ptrs=False,
        # whether to only attend to object pointers in the past (before the current frame) in the encoder during evaluation
        # (only relevant when `use_obj_ptrs_in_encoder=True`; this might avoid pointer information too far in the future to distract the initial tracking)
        only_obj_ptrs_in_the_past_for_eval=False,
        # Whether to predict if there is an object in the frame
        pred_obj_scores: bool = False,
        # Whether to use an MLP to predict object scores
        pred_obj_scores_mlp: bool = False,
        # Only relevant if pred_obj_scores=True and use_obj_ptrs_in_encoder=True;
        # Whether to have a fixed no obj pointer when there is no object present
        # or to use it as an additive embedding with obj_ptr produced by decoder
        fixed_no_obj_ptr: bool = False,
        # Soft no
object, i.e. mix in no_obj_ptr softly, # hope to make recovery easier if there is a mistake and mitigate accumulation of errors soft_no_obj_ptr: bool = False, use_mlp_for_obj_ptr_proj: bool = False, # add no obj embedding to spatial frames no_obj_embed_spatial: bool = False, # extra arguments used to construct the SAM mask decoder; if not None, it should be a dict of kwargs to be passed into `MaskDecoder` class. sam_mask_decoder_extra_args=None, compile_image_encoder: bool = False, ): super().__init__() # Part 1: the image backbone self.image_encoder = image_encoder # Use level 0, 1, 2 for high-res setting, or just level 2 for the default setting self.use_high_res_features_in_sam = use_high_res_features_in_sam self.num_feature_levels = 3 if use_high_res_features_in_sam else 1 self.use_obj_ptrs_in_encoder = use_obj_ptrs_in_encoder self.max_obj_ptrs_in_encoder = max_obj_ptrs_in_encoder if use_obj_ptrs_in_encoder: # A conv layer to downsample the mask prompt to stride 4 (the same stride as # low-res SAM mask logits) and to change its scales from 0~1 to SAM logit scale, # so that it can be fed into the SAM mask decoder to generate a pointer. 
self.mask_downsample = torch.nn.Conv2d(1, 1, kernel_size=4, stride=4) self.add_tpos_enc_to_obj_ptrs = add_tpos_enc_to_obj_ptrs if proj_tpos_enc_in_obj_ptrs: assert add_tpos_enc_to_obj_ptrs # these options need to be used together self.proj_tpos_enc_in_obj_ptrs = proj_tpos_enc_in_obj_ptrs self.use_signed_tpos_enc_to_obj_ptrs = use_signed_tpos_enc_to_obj_ptrs self.only_obj_ptrs_in_the_past_for_eval = only_obj_ptrs_in_the_past_for_eval # Part 2: memory attention to condition current frame's visual features # with memories (and obj ptrs) from past frames self.memory_attention = memory_attention self.hidden_dim = image_encoder.neck.d_model # Part 3: memory encoder for the previous frame's outputs self.memory_encoder = memory_encoder self.mem_dim = self.hidden_dim if hasattr(self.memory_encoder, "out_proj") and hasattr( self.memory_encoder.out_proj, "weight" ): # if there is compression of memories along channel dim self.mem_dim = self.memory_encoder.out_proj.weight.shape[0] self.num_maskmem = num_maskmem # Number of memories accessible # Temporal encoding of the memories self.maskmem_tpos_enc = torch.nn.Parameter( torch.zeros(num_maskmem, 1, 1, self.mem_dim) ) trunc_normal_(self.maskmem_tpos_enc, std=0.02) # a single token to indicate no memory embedding from previous frames self.no_mem_embed = torch.nn.Parameter(torch.zeros(1, 1, self.hidden_dim)) self.no_mem_pos_enc = torch.nn.Parameter(torch.zeros(1, 1, self.hidden_dim)) trunc_normal_(self.no_mem_embed, std=0.02) trunc_normal_(self.no_mem_pos_enc, std=0.02) self.directly_add_no_mem_embed = directly_add_no_mem_embed # Apply sigmoid to the output raw mask logits (to turn them from # range (-inf, +inf) to range (0, 1)) before feeding them into the memory encoder self.sigmoid_scale_for_mem_enc = sigmoid_scale_for_mem_enc self.sigmoid_bias_for_mem_enc = sigmoid_bias_for_mem_enc self.binarize_mask_from_pts_for_mem_enc = binarize_mask_from_pts_for_mem_enc self.non_overlap_masks_for_mem_enc = non_overlap_masks_for_mem_enc 
self.memory_temporal_stride_for_eval = memory_temporal_stride_for_eval # On frames with mask input, whether to directly output the input mask without # using a SAM prompt encoder + mask decoder self.use_mask_input_as_output_without_sam = use_mask_input_as_output_without_sam self.multimask_output_in_sam = multimask_output_in_sam self.multimask_min_pt_num = multimask_min_pt_num self.multimask_max_pt_num = multimask_max_pt_num self.multimask_output_for_tracking = multimask_output_for_tracking self.use_multimask_token_for_obj_ptr = use_multimask_token_for_obj_ptr self.iou_prediction_use_sigmoid = iou_prediction_use_sigmoid # Part 4: SAM-style prompt encoder (for both mask and point inputs) # and SAM-style mask decoder for the final mask output self.image_size = image_size self.backbone_stride = backbone_stride self.sam_mask_decoder_extra_args = sam_mask_decoder_extra_args self.pred_obj_scores = pred_obj_scores self.pred_obj_scores_mlp = pred_obj_scores_mlp self.fixed_no_obj_ptr = fixed_no_obj_ptr self.soft_no_obj_ptr = soft_no_obj_ptr if self.fixed_no_obj_ptr: assert self.pred_obj_scores assert self.use_obj_ptrs_in_encoder if self.pred_obj_scores and self.use_obj_ptrs_in_encoder: self.no_obj_ptr = torch.nn.Parameter(torch.zeros(1, self.hidden_dim)) trunc_normal_(self.no_obj_ptr, std=0.02) self.use_mlp_for_obj_ptr_proj = use_mlp_for_obj_ptr_proj self.no_obj_embed_spatial = None if no_obj_embed_spatial: self.no_obj_embed_spatial = torch.nn.Parameter(torch.zeros(1, self.mem_dim)) trunc_normal_(self.no_obj_embed_spatial, std=0.02) self._build_sam_heads() self.max_cond_frames_in_attn = max_cond_frames_in_attn # Model compilation if compile_image_encoder: # Compile the forward function (not the full module) to allow loading checkpoints. print( "Image encoder compilation is enabled. First forward pass will be slow." 
) self.image_encoder.forward = torch.compile( self.image_encoder.forward, mode="max-autotune", fullgraph=True, dynamic=False, ) @property def device(self): return next(self.parameters()).device def forward(self, *args, **kwargs): raise NotImplementedError( "Please use the corresponding methods in SAM2VideoPredictor for inference or SAM2Train for training/fine-tuning. " "See notebooks/video_predictor_example.ipynb for an inference example." ) def _build_sam_heads(self): """Build SAM-style prompt encoder and mask decoder.""" self.sam_prompt_embed_dim = self.hidden_dim self.sam_image_embedding_size = self.image_size // self.backbone_stride # build PromptEncoder and MaskDecoder from SAM # (their hyperparameters like `mask_in_chans=16` are from SAM code) self.sam_prompt_encoder = PromptEncoder( embed_dim=self.sam_prompt_embed_dim, image_embedding_size=( self.sam_image_embedding_size, self.sam_image_embedding_size, ), input_image_size=(self.image_size, self.image_size), mask_in_chans=16, ) self.sam_mask_decoder = MaskDecoder( num_multimask_outputs=3, transformer=TwoWayTransformer( depth=2, embedding_dim=self.sam_prompt_embed_dim, mlp_dim=2048, num_heads=8, ), transformer_dim=self.sam_prompt_embed_dim, iou_head_depth=3, iou_head_hidden_dim=256, use_high_res_features=self.use_high_res_features_in_sam, iou_prediction_use_sigmoid=self.iou_prediction_use_sigmoid, pred_obj_scores=self.pred_obj_scores, pred_obj_scores_mlp=self.pred_obj_scores_mlp, use_multimask_token_for_obj_ptr=self.use_multimask_token_for_obj_ptr, **(self.sam_mask_decoder_extra_args or {}), ) if self.use_obj_ptrs_in_encoder: # a linear projection on SAM output tokens to turn them into object pointers self.obj_ptr_proj = torch.nn.Linear(self.hidden_dim, self.hidden_dim) if self.use_mlp_for_obj_ptr_proj: self.obj_ptr_proj = MLP( self.hidden_dim, self.hidden_dim, self.hidden_dim, 3 ) else: self.obj_ptr_proj = torch.nn.Identity() if self.proj_tpos_enc_in_obj_ptrs: # a linear projection on temporal positional
encoding in object pointers to # avoid potential interference with spatial positional encoding self.obj_ptr_tpos_proj = torch.nn.Linear(self.hidden_dim, self.mem_dim) else: self.obj_ptr_tpos_proj = torch.nn.Identity() def _forward_sam_heads( self, backbone_features, point_inputs=None, mask_inputs=None, high_res_features=None, multimask_output=False, ): """ Forward SAM prompt encoders and mask heads. Inputs: - backbone_features: image features of [B, C, H, W] shape - point_inputs: a dictionary with "point_coords" and "point_labels", where 1) "point_coords" has [B, P, 2] shape and float32 dtype and contains the absolute pixel-unit coordinate in (x, y) format of the P input points 2) "point_labels" has shape [B, P] and int32 dtype, where 1 means positive clicks, 0 means negative clicks, and -1 means padding - mask_inputs: a mask of [B, 1, H*16, W*16] shape, float or bool, with the same spatial size as the image. - high_res_features: either 1) None or 2) a list of length 2 containing two feature maps of [B, C, 4*H, 4*W] and [B, C, 2*H, 2*W] shapes respectively, which will be used as high-resolution feature maps for the SAM decoder. - multimask_output: if it's True, we output 3 candidate masks and their 3 corresponding IoU estimates, and if it's False, we output only 1 mask and its corresponding IoU estimate. Outputs: - low_res_multimasks: [B, M, H*4, W*4] shape (where M = 3 if `multimask_output=True` and M = 1 if `multimask_output=False`), the SAM output mask logits (before sigmoid) for the low-resolution masks, with 4x the resolution (1/4 stride) of the input backbone_features. - high_res_multimasks: [B, M, H*16, W*16] shape (where M = 3 if `multimask_output=True` and M = 1 if `multimask_output=False`), upsampled from the low-resolution masks, with the same size as the image (stride is 1 pixel). - ious: [B, M] shape (where M = 3 if `multimask_output=True` and M = 1 if `multimask_output=False`), the estimated IoU of each output mask.
- low_res_masks: [B, 1, H*4, W*4] shape, the best mask in `low_res_multimasks`. If `multimask_output=True`, it's the mask with the highest IoU estimate. If `multimask_output=False`, it's the same as `low_res_multimasks`. - high_res_masks: [B, 1, H*16, W*16] shape, the best mask in `high_res_multimasks`. If `multimask_output=True`, it's the mask with the highest IoU estimate. If `multimask_output=False`, it's the same as `high_res_multimasks`. - obj_ptr: [B, C] shape, the object pointer vector for the output mask, extracted based on the output token from the SAM mask decoder. """ B = backbone_features.size(0) device = backbone_features.device assert backbone_features.size(1) == self.sam_prompt_embed_dim assert backbone_features.size(2) == self.sam_image_embedding_size assert backbone_features.size(3) == self.sam_image_embedding_size # a) Handle point prompts if point_inputs is not None: sam_point_coords = point_inputs["point_coords"] sam_point_labels = point_inputs["point_labels"] assert sam_point_coords.size(0) == B and sam_point_labels.size(0) == B else: # If no points are provided, pad with an empty point (with label -1) sam_point_coords = torch.zeros(B, 1, 2, device=device) sam_point_labels = -torch.ones(B, 1, dtype=torch.int32, device=device) # b) Handle mask prompts if mask_inputs is not None: # If mask_inputs is provided, downsize it into low-res mask input if needed # and feed it as a dense mask prompt into the SAM mask encoder assert len(mask_inputs.shape) == 4 and mask_inputs.shape[:2] == (B, 1) if mask_inputs.shape[-2:] != self.sam_prompt_encoder.mask_input_size: sam_mask_prompt = F.interpolate( mask_inputs.float(), size=self.sam_prompt_encoder.mask_input_size, align_corners=False, mode="bilinear", antialias=True, # use antialias for downsampling ) else: sam_mask_prompt = mask_inputs else: # Otherwise, simply feed None (and SAM's prompt encoder will add # a learned `no_mask_embed` to indicate no mask input in this case).
sam_mask_prompt = None sparse_embeddings, dense_embeddings = self.sam_prompt_encoder( points=(sam_point_coords, sam_point_labels), boxes=None, masks=sam_mask_prompt, ) ( low_res_multimasks, ious, sam_output_tokens, object_score_logits, ) = self.sam_mask_decoder( image_embeddings=backbone_features, image_pe=self.sam_prompt_encoder.get_dense_pe(), sparse_prompt_embeddings=sparse_embeddings, dense_prompt_embeddings=dense_embeddings, multimask_output=multimask_output, repeat_image=False, # the image is already batched high_res_features=high_res_features, ) if self.pred_obj_scores: is_obj_appearing = object_score_logits > 0 # Mask used for spatial memories is always a *hard* choice between obj and no obj, # consistent with the actual mask prediction low_res_multimasks = torch.where( is_obj_appearing[:, None, None], low_res_multimasks, NO_OBJ_SCORE, ) # convert masks from possibly bfloat16 (or float16) to float32 # (older PyTorch versions before 2.1 don't support `interpolate` on bf16) low_res_multimasks = low_res_multimasks.float() high_res_multimasks = F.interpolate( low_res_multimasks, size=(self.image_size, self.image_size), mode="bilinear", align_corners=False, ) sam_output_token = sam_output_tokens[:, 0] if multimask_output: # take the best mask prediction (with the highest IoU estimation) best_iou_inds = torch.argmax(ious, dim=-1) batch_inds = torch.arange(B, device=device) low_res_masks = low_res_multimasks[batch_inds, best_iou_inds].unsqueeze(1) high_res_masks = high_res_multimasks[batch_inds, best_iou_inds].unsqueeze(1) if sam_output_tokens.size(1) > 1: sam_output_token = sam_output_tokens[batch_inds, best_iou_inds] else: low_res_masks, high_res_masks = low_res_multimasks, high_res_multimasks # Extract object pointer from the SAM output token (with occlusion handling) obj_ptr = self.obj_ptr_proj(sam_output_token) if self.pred_obj_scores: # Allow *soft* no obj ptr, unlike for masks if self.soft_no_obj_ptr: lambda_is_obj_appearing = object_score_logits.sigmoid() 
else: lambda_is_obj_appearing = is_obj_appearing.float() if self.fixed_no_obj_ptr: obj_ptr = lambda_is_obj_appearing * obj_ptr obj_ptr = obj_ptr + (1 - lambda_is_obj_appearing) * self.no_obj_ptr return ( low_res_multimasks, high_res_multimasks, ious, low_res_masks, high_res_masks, obj_ptr, object_score_logits, ) def _use_mask_as_output(self, backbone_features, high_res_features, mask_inputs): """ Directly turn binary `mask_inputs` into output mask logits without using SAM. (same input and output shapes as in _forward_sam_heads above). """ # Use -10/+10 as logits for neg/pos pixels (very close to 0/1 in prob after sigmoid). out_scale, out_bias = 20.0, -10.0 # sigmoid(-10.0)=4.5398e-05 mask_inputs_float = mask_inputs.float() high_res_masks = mask_inputs_float * out_scale + out_bias low_res_masks = F.interpolate( high_res_masks, size=(high_res_masks.size(-2) // 4, high_res_masks.size(-1) // 4), align_corners=False, mode="bilinear", antialias=True, # use antialias for downsampling ) # a dummy IoU prediction of all 1's under mask input ious = mask_inputs.new_ones(mask_inputs.size(0), 1).float() if not self.use_obj_ptrs_in_encoder: # all zeros as a dummy object pointer (of shape [B, C]) obj_ptr = torch.zeros( mask_inputs.size(0), self.hidden_dim, device=mask_inputs.device ) else: # produce an object pointer using the SAM decoder from the mask input _, _, _, _, _, obj_ptr, _ = self._forward_sam_heads( backbone_features=backbone_features, mask_inputs=self.mask_downsample(mask_inputs_float), high_res_features=high_res_features, ) # In this method, we are treating mask_input as output, e.g. using it directly to create spatial mem; # Below, we follow the same design axiom to use mask_input to decide if obj appears or not instead of relying # on the object_scores from the SAM decoder.
is_obj_appearing = torch.any(mask_inputs.flatten(1).float() > 0.0, dim=1) is_obj_appearing = is_obj_appearing[..., None] lambda_is_obj_appearing = is_obj_appearing.float() object_score_logits = out_scale * lambda_is_obj_appearing + out_bias if self.pred_obj_scores: if self.fixed_no_obj_ptr: obj_ptr = lambda_is_obj_appearing * obj_ptr obj_ptr = obj_ptr + (1 - lambda_is_obj_appearing) * self.no_obj_ptr return ( low_res_masks, high_res_masks, ious, low_res_masks, high_res_masks, obj_ptr, object_score_logits, ) def forward_image(self, img_batch: torch.Tensor): """Get the image feature on the input batch.""" backbone_out = self.image_encoder(img_batch) if self.use_high_res_features_in_sam: # precompute projected level 0 and level 1 features in SAM decoder # to avoid running it again on every SAM click backbone_out["backbone_fpn"][0] = self.sam_mask_decoder.conv_s0( backbone_out["backbone_fpn"][0] ) backbone_out["backbone_fpn"][1] = self.sam_mask_decoder.conv_s1( backbone_out["backbone_fpn"][1] ) return backbone_out def _prepare_backbone_features(self, backbone_out): """Prepare and flatten visual features.""" backbone_out = backbone_out.copy() assert len(backbone_out["backbone_fpn"]) == len(backbone_out["vision_pos_enc"]) assert len(backbone_out["backbone_fpn"]) >= self.num_feature_levels feature_maps = backbone_out["backbone_fpn"][-self.num_feature_levels :] vision_pos_embeds = backbone_out["vision_pos_enc"][-self.num_feature_levels :] feat_sizes = [(x.shape[-2], x.shape[-1]) for x in vision_pos_embeds] # flatten NxCxHxW to HWxNxC vision_feats = [x.flatten(2).permute(2, 0, 1) for x in feature_maps] vision_pos_embeds = [x.flatten(2).permute(2, 0, 1) for x in vision_pos_embeds] return backbone_out, vision_feats, vision_pos_embeds, feat_sizes def _prepare_memory_conditioned_features( self, frame_idx, is_init_cond_frame, current_vision_feats, current_vision_pos_embeds, feat_sizes, output_dict, num_frames, track_in_reverse=False, # tracking in reverse time order (for demo 
usage) ): """Fuse the current frame's visual feature map with previous memory.""" B = current_vision_feats[-1].size(1) # batch size on this frame C = self.hidden_dim H, W = feat_sizes[-1] # top-level (lowest-resolution) feature size device = current_vision_feats[-1].device # The case of `self.num_maskmem == 0` below is primarily used for reproducing SAM on images. # In this case, we skip the fusion with any memory. if self.num_maskmem == 0: # Disable memory and skip fusion pix_feat = current_vision_feats[-1].permute(1, 2, 0).view(B, C, H, W) return pix_feat num_obj_ptr_tokens = 0 tpos_sign_mul = -1 if track_in_reverse else 1 # Step 1: condition the visual features of the current frame on previous memories if not is_init_cond_frame: # Retrieve the memories encoded with the maskmem backbone to_cat_memory, to_cat_memory_pos_embed = [], [] # Add conditioning frames's output first (all cond frames have t_pos=0 for # when getting temporal positional embedding below) assert len(output_dict["cond_frame_outputs"]) > 0 # Select a maximum number of temporally closest cond frames for cross attention cond_outputs = output_dict["cond_frame_outputs"] selected_cond_outputs, unselected_cond_outputs = select_closest_cond_frames( frame_idx, cond_outputs, self.max_cond_frames_in_attn ) t_pos_and_prevs = [(0, out) for out in selected_cond_outputs.values()] # Add last (self.num_maskmem - 1) frames before current frame for non-conditioning memory # the earliest one has t_pos=1 and the latest one has t_pos=self.num_maskmem-1 # We also allow taking the memory frame non-consecutively (with stride>1), in which case # we take (self.num_maskmem - 2) frames among every stride-th frames plus the last frame. 
stride = 1 if self.training else self.memory_temporal_stride_for_eval for t_pos in range(1, self.num_maskmem): t_rel = self.num_maskmem - t_pos # how many frames before current frame if t_rel == 1: # for t_rel == 1, we take the last frame (regardless of r) if not track_in_reverse: # the frame immediately before this frame (i.e. frame_idx - 1) prev_frame_idx = frame_idx - t_rel else: # the frame immediately after this frame (i.e. frame_idx + 1) prev_frame_idx = frame_idx + t_rel else: # for t_rel >= 2, we take the memory frame from every r-th frames if not track_in_reverse: # first find the nearest frame among every r-th frames before this frame # for r=1, this would be (frame_idx - 2) prev_frame_idx = ((frame_idx - 2) // stride) * stride # then seek further among every r-th frames prev_frame_idx = prev_frame_idx - (t_rel - 2) * stride else: # first find the nearest frame among every r-th frames after this frame # for r=1, this would be (frame_idx + 2) prev_frame_idx = -(-(frame_idx + 2) // stride) * stride # then seek further among every r-th frames prev_frame_idx = prev_frame_idx + (t_rel - 2) * stride out = output_dict["non_cond_frame_outputs"].get(prev_frame_idx, None) if out is None: # If an unselected conditioning frame is among the last (self.num_maskmem - 1) # frames, we still attend to it as if it's a non-conditioning frame. out = unselected_cond_outputs.get(prev_frame_idx, None) t_pos_and_prevs.append((t_pos, out)) for t_pos, prev in t_pos_and_prevs: if prev is None: continue # skip padding frames # "maskmem_features" might have been offloaded to CPU in demo use cases, # so we load it back to GPU (it's a no-op if it's already on GPU). 
feats = prev["maskmem_features"].to(device, non_blocking=True) to_cat_memory.append(feats.flatten(2).permute(2, 0, 1)) # Spatial positional encoding (it might have been offloaded to CPU in eval) maskmem_enc = prev["maskmem_pos_enc"][-1].to(device) maskmem_enc = maskmem_enc.flatten(2).permute(2, 0, 1) # Temporal positional encoding maskmem_enc = ( maskmem_enc + self.maskmem_tpos_enc[self.num_maskmem - t_pos - 1] ) to_cat_memory_pos_embed.append(maskmem_enc) # Construct the list of past object pointers if self.use_obj_ptrs_in_encoder: max_obj_ptrs_in_encoder = min(num_frames, self.max_obj_ptrs_in_encoder) # First add those object pointers from selected conditioning frames # (optionally, only include object pointers in the past during evaluation) if not self.training and self.only_obj_ptrs_in_the_past_for_eval: ptr_cond_outputs = { t: out for t, out in selected_cond_outputs.items() if (t >= frame_idx if track_in_reverse else t <= frame_idx) } else: ptr_cond_outputs = selected_cond_outputs pos_and_ptrs = [ # Temporal pos encoding contains how far away each pointer is from current frame ( ( (frame_idx - t) * tpos_sign_mul if self.use_signed_tpos_enc_to_obj_ptrs else abs(frame_idx - t) ), out["obj_ptr"], ) for t, out in ptr_cond_outputs.items() ] # Add up to (max_obj_ptrs_in_encoder - 1) non-conditioning frames before current frame for t_diff in range(1, max_obj_ptrs_in_encoder): t = frame_idx + t_diff if track_in_reverse else frame_idx - t_diff if t < 0 or (num_frames is not None and t >= num_frames): break out = output_dict["non_cond_frame_outputs"].get( t, unselected_cond_outputs.get(t, None) ) if out is not None: pos_and_ptrs.append((t_diff, out["obj_ptr"])) # If we have at least one object pointer, add them to the cross attention if len(pos_and_ptrs) > 0: pos_list, ptrs_list = zip(*pos_and_ptrs) # stack object pointers along dim=0 into [ptr_seq_len, B, C] shape obj_ptrs = torch.stack(ptrs_list, dim=0) # a temporal positional embedding based on how far each object
pointer is from # the current frame (sine embedding normalized by the max pointer num). if self.add_tpos_enc_to_obj_ptrs: t_diff_max = max_obj_ptrs_in_encoder - 1 tpos_dim = C if self.proj_tpos_enc_in_obj_ptrs else self.mem_dim obj_pos = torch.tensor(pos_list, device=device) obj_pos = get_1d_sine_pe(obj_pos / t_diff_max, dim=tpos_dim) obj_pos = self.obj_ptr_tpos_proj(obj_pos) obj_pos = obj_pos.unsqueeze(1).expand(-1, B, self.mem_dim) else: obj_pos = obj_ptrs.new_zeros(len(pos_list), B, self.mem_dim) if self.mem_dim < C: # split a pointer into (C // self.mem_dim) tokens for self.mem_dim < C obj_ptrs = obj_ptrs.reshape( -1, B, C // self.mem_dim, self.mem_dim ) obj_ptrs = obj_ptrs.permute(0, 2, 1, 3).flatten(0, 1) obj_pos = obj_pos.repeat_interleave(C // self.mem_dim, dim=0) to_cat_memory.append(obj_ptrs) to_cat_memory_pos_embed.append(obj_pos) num_obj_ptr_tokens = obj_ptrs.shape[0] else: num_obj_ptr_tokens = 0 else: # for initial conditioning frames, encode them without using any previous memory if self.directly_add_no_mem_embed: # directly add no-mem embedding (instead of using the transformer encoder) pix_feat_with_mem = current_vision_feats[-1] + self.no_mem_embed pix_feat_with_mem = pix_feat_with_mem.permute(1, 2, 0).view(B, C, H, W) return pix_feat_with_mem # Use a dummy token on the first frame (to avoid empty memory input to transformer encoder) to_cat_memory = [self.no_mem_embed.expand(1, B, self.mem_dim)] to_cat_memory_pos_embed = [self.no_mem_pos_enc.expand(1, B, self.mem_dim)] # Step 2: Concatenate the memories and forward through the transformer encoder memory = torch.cat(to_cat_memory, dim=0) memory_pos_embed = torch.cat(to_cat_memory_pos_embed, dim=0) pix_feat_with_mem = self.memory_attention( curr=current_vision_feats, curr_pos=current_vision_pos_embeds, memory=memory, memory_pos=memory_pos_embed, num_obj_ptr_tokens=num_obj_ptr_tokens, ) # reshape the output (HW)BC => BCHW pix_feat_with_mem = pix_feat_with_mem.permute(1, 2, 0).view(B, C, H, W) return
pix_feat_with_mem def _encode_new_memory( self, current_vision_feats, feat_sizes, pred_masks_high_res, object_score_logits, is_mask_from_pts, ): """Encode the current image and its prediction into a memory feature.""" B = current_vision_feats[-1].size(1) # batch size on this frame C = self.hidden_dim H, W = feat_sizes[-1] # top-level (lowest-resolution) feature size # top-level feature, (HW)BC => BCHW pix_feat = current_vision_feats[-1].permute(1, 2, 0).view(B, C, H, W) if self.non_overlap_masks_for_mem_enc and not self.training: # optionally, apply non-overlapping constraints to the masks (it's applied # in the batch dimension and should only be used during eval, where all # the objects come from the same video under batch size 1). pred_masks_high_res = self._apply_non_overlapping_constraints( pred_masks_high_res ) # scale the raw mask logits with a temperature before applying sigmoid binarize = self.binarize_mask_from_pts_for_mem_enc and is_mask_from_pts if binarize and not self.training: mask_for_mem = (pred_masks_high_res > 0).float() else: # apply sigmoid on the raw mask logits to turn them into range (0, 1) mask_for_mem = torch.sigmoid(pred_masks_high_res) # apply scale and bias terms to the sigmoid probabilities if self.sigmoid_scale_for_mem_enc != 1.0: mask_for_mem = mask_for_mem * self.sigmoid_scale_for_mem_enc if self.sigmoid_bias_for_mem_enc != 0.0: mask_for_mem = mask_for_mem + self.sigmoid_bias_for_mem_enc maskmem_out = self.memory_encoder( pix_feat, mask_for_mem, skip_mask_sigmoid=True # sigmoid already applied ) maskmem_features = maskmem_out["vision_features"] maskmem_pos_enc = maskmem_out["vision_pos_enc"] # add a no-object embedding to the spatial memory to indicate that the frame # is predicted to be occluded (i.e. 
no object is appearing in the frame) if self.no_obj_embed_spatial is not None: is_obj_appearing = (object_score_logits > 0).float() maskmem_features += ( 1 - is_obj_appearing[..., None, None] ) * self.no_obj_embed_spatial[..., None, None].expand( *maskmem_features.shape ) return maskmem_features, maskmem_pos_enc def _track_step( self, frame_idx, is_init_cond_frame, current_vision_feats, current_vision_pos_embeds, feat_sizes, point_inputs, mask_inputs, output_dict, num_frames, track_in_reverse, prev_sam_mask_logits, ): current_out = {"point_inputs": point_inputs, "mask_inputs": mask_inputs} # High-resolution feature maps for the SAM head, reshape (HW)BC => BCHW if len(current_vision_feats) > 1: high_res_features = [ x.permute(1, 2, 0).view(x.size(1), x.size(2), *s) for x, s in zip(current_vision_feats[:-1], feat_sizes[:-1]) ] else: high_res_features = None if mask_inputs is not None and self.use_mask_input_as_output_without_sam: # When use_mask_input_as_output_without_sam=True, we directly output the mask input # (see it as a GT mask) without using a SAM prompt encoder + mask decoder. pix_feat = current_vision_feats[-1].permute(1, 2, 0) pix_feat = pix_feat.view(-1, self.hidden_dim, *feat_sizes[-1]) sam_outputs = self._use_mask_as_output( pix_feat, high_res_features, mask_inputs ) else: # fused the visual feature with previous memory features in the memory bank pix_feat = self._prepare_memory_conditioned_features( frame_idx=frame_idx, is_init_cond_frame=is_init_cond_frame, current_vision_feats=current_vision_feats[-1:], current_vision_pos_embeds=current_vision_pos_embeds[-1:], feat_sizes=feat_sizes[-1:], output_dict=output_dict, num_frames=num_frames, track_in_reverse=track_in_reverse, ) # apply SAM-style segmentation head # here we might feed previously predicted low-res SAM mask logits into the SAM mask decoder, # e.g. 
in demo where such logits come from earlier interaction instead of correction sampling # (in this case, any `mask_inputs` shouldn't reach here as they are sent to _use_mask_as_output instead) if prev_sam_mask_logits is not None: assert point_inputs is not None and mask_inputs is None mask_inputs = prev_sam_mask_logits multimask_output = self._use_multimask(is_init_cond_frame, point_inputs) sam_outputs = self._forward_sam_heads( backbone_features=pix_feat, point_inputs=point_inputs, mask_inputs=mask_inputs, high_res_features=high_res_features, multimask_output=multimask_output, ) return current_out, sam_outputs, high_res_features, pix_feat def _encode_memory_in_output( self, current_vision_feats, feat_sizes, point_inputs, run_mem_encoder, high_res_masks, object_score_logits, current_out, ): if run_mem_encoder and self.num_maskmem > 0: high_res_masks_for_mem_enc = high_res_masks maskmem_features, maskmem_pos_enc = self._encode_new_memory( current_vision_feats=current_vision_feats, feat_sizes=feat_sizes, pred_masks_high_res=high_res_masks_for_mem_enc, object_score_logits=object_score_logits, is_mask_from_pts=(point_inputs is not None), ) current_out["maskmem_features"] = maskmem_features current_out["maskmem_pos_enc"] = maskmem_pos_enc else: current_out["maskmem_features"] = None current_out["maskmem_pos_enc"] = None def track_step( self, frame_idx, is_init_cond_frame, current_vision_feats, current_vision_pos_embeds, feat_sizes, point_inputs, mask_inputs, output_dict, num_frames, track_in_reverse=False, # tracking in reverse time order (for demo usage) # Whether to run the memory encoder on the predicted masks. Sometimes we might want # to skip the memory encoder with `run_mem_encoder=False`. For example, # in demo we might call `track_step` multiple times for each user click, # and only encode the memory when the user finalizes their clicks. And in ablation # settings like SAM training on static images, we don't need the memory encoder. 
run_mem_encoder=True, # The previously predicted SAM mask logits (which can be fed together with new clicks in demo). prev_sam_mask_logits=None, ): current_out, sam_outputs, _, _ = self._track_step( frame_idx, is_init_cond_frame, current_vision_feats, current_vision_pos_embeds, feat_sizes, point_inputs, mask_inputs, output_dict, num_frames, track_in_reverse, prev_sam_mask_logits, ) ( _, _, _, low_res_masks, high_res_masks, obj_ptr, object_score_logits, ) = sam_outputs current_out["pred_masks"] = low_res_masks current_out["pred_masks_high_res"] = high_res_masks current_out["obj_ptr"] = obj_ptr if not self.training: # Only add this in inference (to avoid unused param in activation checkpointing; # it's mainly used in the demo to encode spatial memories w/ consolidated masks) current_out["object_score_logits"] = object_score_logits # Finally run the memory encoder on the predicted mask to encode # it into a new memory feature (that can be used in future frames) self._encode_memory_in_output( current_vision_feats, feat_sizes, point_inputs, run_mem_encoder, high_res_masks, object_score_logits, current_out, ) return current_out def _use_multimask(self, is_init_cond_frame, point_inputs): """Whether to use multimask output in the SAM head.""" num_pts = 0 if point_inputs is None else point_inputs["point_labels"].size(1) multimask_output = ( self.multimask_output_in_sam and (is_init_cond_frame or self.multimask_output_for_tracking) and (self.multimask_min_pt_num <= num_pts <= self.multimask_max_pt_num) ) return multimask_output def _apply_non_overlapping_constraints(self, pred_masks): """ Apply non-overlapping constraints to the object scores in pred_masks. Here we keep only the highest scoring object at each spatial location in pred_masks. 
""" batch_size = pred_masks.size(0) if batch_size == 1: return pred_masks device = pred_masks.device # "max_obj_inds": object index of the object with the highest score at each location max_obj_inds = torch.argmax(pred_masks, dim=0, keepdim=True) # "batch_obj_inds": object index of each object slice (along dim 0) in `pred_masks` batch_obj_inds = torch.arange(batch_size, device=device)[:, None, None, None] keep = max_obj_inds == batch_obj_inds # suppress overlapping regions' scores below -10.0 so that the foreground regions # don't overlap (here sigmoid(-10.0)=4.5398e-05) pred_masks = torch.where(keep, pred_masks, torch.clamp(pred_masks, max=-10.0)) return pred_masks ================================================ FILE: auto-seg/submodules/segment-anything-2/sam2/modeling/sam2_utils.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. import copy from typing import Tuple import numpy as np import torch import torch.nn as nn import torch.nn.functional as F from sam2.utils.misc import mask_to_box def select_closest_cond_frames(frame_idx, cond_frame_outputs, max_cond_frame_num): """ Select up to `max_cond_frame_num` conditioning frames from `cond_frame_outputs` that are temporally closest to the current frame at `frame_idx`. Here, we take - a) the closest conditioning frame before `frame_idx` (if any); - b) the closest conditioning frame after `frame_idx` (if any); - c) any other temporally closest conditioning frames until reaching a total of `max_cond_frame_num` conditioning frames. Outputs: - selected_outputs: selected items (keys & values) from `cond_frame_outputs`. - unselected_outputs: items (keys & values) not selected in `cond_frame_outputs`. 
""" if max_cond_frame_num == -1 or len(cond_frame_outputs) <= max_cond_frame_num: selected_outputs = cond_frame_outputs unselected_outputs = {} else: assert max_cond_frame_num >= 2, "we should allow using 2+ conditioning frames" selected_outputs = {} # the closest conditioning frame before `frame_idx` (if any) idx_before = max((t for t in cond_frame_outputs if t < frame_idx), default=None) if idx_before is not None: selected_outputs[idx_before] = cond_frame_outputs[idx_before] # the closest conditioning frame after `frame_idx` (if any) idx_after = min((t for t in cond_frame_outputs if t >= frame_idx), default=None) if idx_after is not None: selected_outputs[idx_after] = cond_frame_outputs[idx_after] # add other temporally closest conditioning frames until reaching a total # of `max_cond_frame_num` conditioning frames. num_remain = max_cond_frame_num - len(selected_outputs) inds_remain = sorted( (t for t in cond_frame_outputs if t not in selected_outputs), key=lambda x: abs(x - frame_idx), )[:num_remain] selected_outputs.update((t, cond_frame_outputs[t]) for t in inds_remain) unselected_outputs = { t: v for t, v in cond_frame_outputs.items() if t not in selected_outputs } return selected_outputs, unselected_outputs def get_1d_sine_pe(pos_inds, dim, temperature=10000): """ Get 1D sine positional embedding as in the original Transformer paper. 
""" pe_dim = dim // 2 dim_t = torch.arange(pe_dim, dtype=torch.float32, device=pos_inds.device) dim_t = temperature ** (2 * (dim_t // 2) / pe_dim) pos_embed = pos_inds.unsqueeze(-1) / dim_t pos_embed = torch.cat([pos_embed.sin(), pos_embed.cos()], dim=-1) return pos_embed def get_activation_fn(activation): """Return an activation function given a string""" if activation == "relu": return F.relu if activation == "gelu": return F.gelu if activation == "glu": return F.glu raise RuntimeError(f"activation should be relu/gelu, not {activation}.") def get_clones(module, N): return nn.ModuleList([copy.deepcopy(module) for i in range(N)]) class DropPath(nn.Module): # adapted from https://github.com/huggingface/pytorch-image-models/blob/main/timm/layers/drop.py def __init__(self, drop_prob=0.0, scale_by_keep=True): super(DropPath, self).__init__() self.drop_prob = drop_prob self.scale_by_keep = scale_by_keep def forward(self, x): if self.drop_prob == 0.0 or not self.training: return x keep_prob = 1 - self.drop_prob shape = (x.shape[0],) + (1,) * (x.ndim - 1) random_tensor = x.new_empty(shape).bernoulli_(keep_prob) if keep_prob > 0.0 and self.scale_by_keep: random_tensor.div_(keep_prob) return x * random_tensor # Lightly adapted from # https://github.com/facebookresearch/MaskFormer/blob/main/mask_former/modeling/transformer/transformer_predictor.py # noqa class MLP(nn.Module): def __init__( self, input_dim: int, hidden_dim: int, output_dim: int, num_layers: int, activation: nn.Module = nn.ReLU, sigmoid_output: bool = False, ) -> None: super().__init__() self.num_layers = num_layers h = [hidden_dim] * (num_layers - 1) self.layers = nn.ModuleList( nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim]) ) self.sigmoid_output = sigmoid_output self.act = activation() def forward(self, x): for i, layer in enumerate(self.layers): x = self.act(layer(x)) if i < self.num_layers - 1 else layer(x) if self.sigmoid_output: x = F.sigmoid(x) return x # From 
https://github.com/facebookresearch/detectron2/blob/main/detectron2/layers/batch_norm.py # noqa
# Itself from https://github.com/facebookresearch/ConvNeXt/blob/d1fa8f6fef0a165b27399986cc2bdacc92777e40/models/convnext.py#L119 # noqa
class LayerNorm2d(nn.Module):
    def __init__(self, num_channels: int, eps: float = 1e-6) -> None:
        super().__init__()
        self.weight = nn.Parameter(torch.ones(num_channels))
        self.bias = nn.Parameter(torch.zeros(num_channels))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        u = x.mean(1, keepdim=True)
        s = (x - u).pow(2).mean(1, keepdim=True)
        x = (x - u) / torch.sqrt(s + self.eps)
        x = self.weight[:, None, None] * x + self.bias[:, None, None]
        return x


def sample_box_points(
    masks: torch.Tensor,
    noise: float = 0.1,  # SAM default
    noise_bound: int = 20,  # SAM default
    top_left_label: int = 2,
    bottom_right_label: int = 3,
) -> Tuple[torch.Tensor, torch.Tensor]:
    """
    Sample a noised version of the top left and bottom right corners of a given `bbox`

    Inputs:
    - masks: [B, 1, H, W] masks, dtype=torch.Tensor
    - noise: noise as a fraction of box width and height, dtype=float
    - noise_bound: maximum amount of noise (in pure pixels), dtype=int

    Returns:
    - box_coords: [B, num_pt, 2], contains (x, y) coordinates of top left and bottom right box corners, dtype=torch.float
    - box_labels: [B, num_pt], label 2 is reserved for top left and 3 for bottom right corners, dtype=torch.int32
    """
    device = masks.device
    box_coords = mask_to_box(masks)
    B, _, H, W = masks.shape
    box_labels = torch.tensor(
        [top_left_label, bottom_right_label], dtype=torch.int, device=device
    ).repeat(B)
    if noise > 0.0:
        if not isinstance(noise_bound, torch.Tensor):
            noise_bound = torch.tensor(noise_bound, device=device)
        bbox_w = box_coords[..., 2] - box_coords[..., 0]
        bbox_h = box_coords[..., 3] - box_coords[..., 1]
        max_dx = torch.min(bbox_w * noise, noise_bound)
        max_dy = torch.min(bbox_h * noise, noise_bound)
        box_noise = 2 * torch.rand(B, 1, 4, device=device) - 1
        box_noise = box_noise *
torch.stack((max_dx, max_dy, max_dx, max_dy), dim=-1) box_coords = box_coords + box_noise img_bounds = ( torch.tensor([W, H, W, H], device=device) - 1 ) # uncentered pixel coords box_coords.clamp_(torch.zeros_like(img_bounds), img_bounds) # In place clamping box_coords = box_coords.reshape(-1, 2, 2) # always 2 points box_labels = box_labels.reshape(-1, 2) return box_coords, box_labels def sample_random_points_from_errors(gt_masks, pred_masks, num_pt=1): """ Sample `num_pt` random points (along with their labels) independently from the error regions. Inputs: - gt_masks: [B, 1, H_im, W_im] masks, dtype=torch.bool - pred_masks: [B, 1, H_im, W_im] masks, dtype=torch.bool or None - num_pt: int, number of points to sample independently for each of the B error maps Outputs: - points: [B, num_pt, 2], dtype=torch.float, contains (x, y) coordinates of each sampled point - labels: [B, num_pt], dtype=torch.int32, where 1 means positive clicks and 0 means negative clicks """ if pred_masks is None: # if pred_masks is not provided, treat it as empty pred_masks = torch.zeros_like(gt_masks) assert gt_masks.dtype == torch.bool and gt_masks.size(1) == 1 assert pred_masks.dtype == torch.bool and pred_masks.shape == gt_masks.shape assert num_pt >= 0 B, _, H_im, W_im = gt_masks.shape device = gt_masks.device # false positive region, a new point sampled in this region should have # negative label to correct the FP error fp_masks = ~gt_masks & pred_masks # false negative region, a new point sampled in this region should have # positive label to correct the FN error fn_masks = gt_masks & ~pred_masks # whether the prediction completely match the ground-truth on each mask all_correct = torch.all((gt_masks == pred_masks).flatten(2), dim=2) all_correct = all_correct[..., None, None] # channel 0 is FP map, while channel 1 is FN map pts_noise = torch.rand(B, num_pt, H_im, W_im, 2, device=device) # sample a negative new click from FP region or a positive new click # from FN region, depend on 
where the maximum falls, # and in case the predictions are all correct (no FP or FN), we just # sample a negative click from the background region pts_noise[..., 0] *= fp_masks | (all_correct & ~gt_masks) pts_noise[..., 1] *= fn_masks pts_idx = pts_noise.flatten(2).argmax(dim=2) labels = (pts_idx % 2).to(torch.int32) pts_idx = pts_idx // 2 pts_x = pts_idx % W_im pts_y = pts_idx // W_im points = torch.stack([pts_x, pts_y], dim=2).to(torch.float) return points, labels def sample_one_point_from_error_center(gt_masks, pred_masks, padding=True): """ Sample 1 random point (along with its label) from the center of each error region, that is, the point with the largest distance to the boundary of each error region. This is the RITM sampling method from https://github.com/saic-vul/ritm_interactive_segmentation/blob/master/isegm/inference/clicker.py Inputs: - gt_masks: [B, 1, H_im, W_im] masks, dtype=torch.bool - pred_masks: [B, 1, H_im, W_im] masks, dtype=torch.bool or None - padding: if True, pad with boundary of 1 px for distance transform Outputs: - points: [B, 1, 2], dtype=torch.float, contains (x, y) coordinates of each sampled point - labels: [B, 1], dtype=torch.int32, where 1 means positive clicks and 0 means negative clicks """ import cv2 if pred_masks is None: pred_masks = torch.zeros_like(gt_masks) assert gt_masks.dtype == torch.bool and gt_masks.size(1) == 1 assert pred_masks.dtype == torch.bool and pred_masks.shape == gt_masks.shape B, _, _, W_im = gt_masks.shape device = gt_masks.device # false positive region, a new point sampled in this region should have # negative label to correct the FP error fp_masks = ~gt_masks & pred_masks # false negative region, a new point sampled in this region should have # positive label to correct the FN error fn_masks = gt_masks & ~pred_masks fp_masks = fp_masks.cpu().numpy() fn_masks = fn_masks.cpu().numpy() points = torch.zeros(B, 1, 2, dtype=torch.float) labels = torch.ones(B, 1, dtype=torch.int32) for b in range(B): fn_mask 
= fn_masks[b, 0] fp_mask = fp_masks[b, 0] if padding: fn_mask = np.pad(fn_mask, ((1, 1), (1, 1)), "constant") fp_mask = np.pad(fp_mask, ((1, 1), (1, 1)), "constant") # compute the distance of each point in FN/FP region to its boundary fn_mask_dt = cv2.distanceTransform(fn_mask.astype(np.uint8), cv2.DIST_L2, 0) fp_mask_dt = cv2.distanceTransform(fp_mask.astype(np.uint8), cv2.DIST_L2, 0) if padding: fn_mask_dt = fn_mask_dt[1:-1, 1:-1] fp_mask_dt = fp_mask_dt[1:-1, 1:-1] # take the point in FN/FP region with the largest distance to its boundary fn_mask_dt_flat = fn_mask_dt.reshape(-1) fp_mask_dt_flat = fp_mask_dt.reshape(-1) fn_argmax = np.argmax(fn_mask_dt_flat) fp_argmax = np.argmax(fp_mask_dt_flat) is_positive = fn_mask_dt_flat[fn_argmax] > fp_mask_dt_flat[fp_argmax] pt_idx = fn_argmax if is_positive else fp_argmax points[b, 0, 0] = pt_idx % W_im # x points[b, 0, 1] = pt_idx // W_im # y labels[b, 0] = int(is_positive) points = points.to(device) labels = labels.to(device) return points, labels def get_next_point(gt_masks, pred_masks, method): if method == "uniform": return sample_random_points_from_errors(gt_masks, pred_masks) elif method == "center": return sample_one_point_from_error_center(gt_masks, pred_masks) else: raise ValueError(f"unknown sampling method {method}") ================================================ FILE: auto-seg/submodules/segment-anything-2/sam2/sam2_image_predictor.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. 
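The correction-click samplers above (`sample_random_points_from_errors`, `sample_one_point_from_error_center`, dispatched by `get_next_point`) all follow the same recipe: diff the predicted mask against ground truth, then place a positive click in a false-negative region or a negative click in a false-positive region. The sketch below is a deliberately simplified, library-free toy of that idea (a from-scratch function, not the repo's torch/cv2 code): it picks the first error pixel deterministically, whereas the real samplers choose randomly or by distance transform.

```python
# Toy sketch (assumption: simplified re-implementation, not the repo's code) of the
# correction-click idea behind get_next_point: compare a predicted binary mask with
# ground truth and click inside an error region -- a positive click (label 1) on a
# false-negative pixel, or a negative click (label 0) on a false-positive pixel.

def next_correction_click(gt, pred):
    """gt, pred: 2D lists of 0/1. Returns ((x, y), label), or None if masks match."""
    fn = []  # false negatives: gt says object, prediction missed it
    fp = []  # false positives: prediction says object, gt disagrees
    for y, (gt_row, pred_row) in enumerate(zip(gt, pred)):
        for x, (g, p) in enumerate(zip(gt_row, pred_row)):
            if g and not p:
                fn.append((x, y))
            elif p and not g:
                fp.append((x, y))
    if fn:  # prefer recovering missed object area (arbitrary tie-break in this sketch)
        return fn[0], 1
    if fp:
        return fp[0], 0
    return None  # prediction already matches ground truth


gt = [[0, 1, 1],
      [0, 1, 1],
      [0, 0, 0]]
pred = [[0, 1, 0],
        [0, 1, 0],
        [1, 0, 0]]
print(next_correction_click(gt, pred))  # -> ((2, 0), 1): positive click on a missed pixel
```

The real samplers add the details this sketch omits: random sampling within the error maps (`uniform`) or the RITM-style distance-transform center (`center`), batching, and a fallback negative background click when prediction and ground truth already agree.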
import logging from typing import List, Optional, Tuple, Union import numpy as np import torch from PIL.Image import Image from sam2.modeling.sam2_base import SAM2Base from sam2.utils.transforms import SAM2Transforms class SAM2ImagePredictor: def __init__( self, sam_model: SAM2Base, mask_threshold=0.0, max_hole_area=0.0, max_sprinkle_area=0.0, **kwargs, ) -> None: """ Uses SAM-2 to calculate the image embedding for an image, and then allow repeated, efficient mask prediction given prompts. Arguments: sam_model (Sam-2): The model to use for mask prediction. mask_threshold (float): The threshold to use when converting mask logits to binary masks. Masks are thresholded at 0 by default. max_hole_area (int): If max_hole_area > 0, we fill small holes in up to the maximum area of max_hole_area in low_res_masks. max_sprinkle_area (int): If max_sprinkle_area > 0, we remove small sprinkles up to the maximum area of max_sprinkle_area in low_res_masks. """ super().__init__() self.model = sam_model self._transforms = SAM2Transforms( resolution=self.model.image_size, mask_threshold=mask_threshold, max_hole_area=max_hole_area, max_sprinkle_area=max_sprinkle_area, ) # Predictor state self._is_image_set = False self._features = None self._orig_hw = None # Whether the predictor is set for single image or a batch of images self._is_batch = False # Predictor config self.mask_threshold = mask_threshold # Spatial dim for backbone feature maps self._bb_feat_sizes = [ (256, 256), (128, 128), (64, 64), ] @classmethod def from_pretrained(cls, model_id: str, **kwargs) -> "SAM2ImagePredictor": """ Load a pretrained model from the Hugging Face hub. Arguments: model_id (str): The Hugging Face repository ID. **kwargs: Additional arguments to pass to the model constructor. Returns: (SAM2ImagePredictor): The loaded model. 
""" from sam2.build_sam import build_sam2_hf sam_model = build_sam2_hf(model_id, **kwargs) return cls(sam_model, **kwargs) @torch.no_grad() def set_image( self, image: Union[np.ndarray, Image], ) -> None: """ Calculates the image embeddings for the provided image, allowing masks to be predicted with the 'predict' method. Arguments: image (np.ndarray or PIL Image): The input image to embed in RGB format. The image should be in HWC format if np.ndarray, or WHC format if PIL Image with pixel values in [0, 255]. image_format (str): The color format of the image, in ['RGB', 'BGR']. """ self.reset_predictor() # Transform the image to the form expected by the model if isinstance(image, np.ndarray): logging.info("For numpy array image, we assume (HxWxC) format") self._orig_hw = [image.shape[:2]] elif isinstance(image, Image): w, h = image.size self._orig_hw = [(h, w)] else: raise NotImplementedError("Image format not supported") input_image = self._transforms(image) input_image = input_image[None, ...].to(self.device) assert ( len(input_image.shape) == 4 and input_image.shape[1] == 3 ), f"input_image must be of size 1x3xHxW, got {input_image.shape}" logging.info("Computing image embeddings for the provided image...") backbone_out = self.model.forward_image(input_image) _, vision_feats, _, _ = self.model._prepare_backbone_features(backbone_out) # Add no_mem_embed, which is added to the lowest rest feat. 
map during training on videos if self.model.directly_add_no_mem_embed: vision_feats[-1] = vision_feats[-1] + self.model.no_mem_embed feats = [ feat.permute(1, 2, 0).view(1, -1, *feat_size) for feat, feat_size in zip(vision_feats[::-1], self._bb_feat_sizes[::-1]) ][::-1] self._features = {"image_embed": feats[-1], "high_res_feats": feats[:-1]} self._is_image_set = True logging.info("Image embeddings computed.") @torch.no_grad() def set_image_batch( self, image_list: List[Union[np.ndarray]], ) -> None: """ Calculates the image embeddings for the provided image batch, allowing masks to be predicted with the 'predict_batch' method. Arguments: image_list (List[np.ndarray]): The input images to embed in RGB format. The image should be in HWC format if np.ndarray with pixel values in [0, 255]. """ self.reset_predictor() assert isinstance(image_list, list) self._orig_hw = [] for image in image_list: assert isinstance( image, np.ndarray ), "Images are expected to be an np.ndarray in RGB format, and of shape HWC" self._orig_hw.append(image.shape[:2]) # Transform the image to the form expected by the model img_batch = self._transforms.forward_batch(image_list) img_batch = img_batch.to(self.device) batch_size = img_batch.shape[0] assert ( len(img_batch.shape) == 4 and img_batch.shape[1] == 3 ), f"img_batch must be of size Bx3xHxW, got {img_batch.shape}" logging.info("Computing image embeddings for the provided images...") backbone_out = self.model.forward_image(img_batch) _, vision_feats, _, _ = self.model._prepare_backbone_features(backbone_out) # Add no_mem_embed, which is added to the lowest rest feat. 
map during training on videos if self.model.directly_add_no_mem_embed: vision_feats[-1] = vision_feats[-1] + self.model.no_mem_embed feats = [ feat.permute(1, 2, 0).view(batch_size, -1, *feat_size) for feat, feat_size in zip(vision_feats[::-1], self._bb_feat_sizes[::-1]) ][::-1] self._features = {"image_embed": feats[-1], "high_res_feats": feats[:-1]} self._is_image_set = True self._is_batch = True logging.info("Image embeddings computed.") def predict_batch( self, point_coords_batch: List[np.ndarray] = None, point_labels_batch: List[np.ndarray] = None, box_batch: List[np.ndarray] = None, mask_input_batch: List[np.ndarray] = None, multimask_output: bool = True, return_logits: bool = False, normalize_coords=True, ) -> Tuple[List[np.ndarray], List[np.ndarray], List[np.ndarray]]: """This function is very similar to predict(...), however it is used for batched mode, when the model is expected to generate predictions on multiple images. It returns a tuple of lists of masks, ious, and low_res_masks_logits. """ assert self._is_batch, "This function should only be used when in batched mode" if not self._is_image_set: raise RuntimeError( "An image must be set with .set_image_batch(...) before mask prediction." 
) num_images = len(self._features["image_embed"]) all_masks = [] all_ious = [] all_low_res_masks = [] for img_idx in range(num_images): # Transform input prompts point_coords = ( point_coords_batch[img_idx] if point_coords_batch is not None else None ) point_labels = ( point_labels_batch[img_idx] if point_labels_batch is not None else None ) box = box_batch[img_idx] if box_batch is not None else None mask_input = ( mask_input_batch[img_idx] if mask_input_batch is not None else None ) mask_input, unnorm_coords, labels, unnorm_box = self._prep_prompts( point_coords, point_labels, box, mask_input, normalize_coords, img_idx=img_idx, ) masks, iou_predictions, low_res_masks = self._predict( unnorm_coords, labels, unnorm_box, mask_input, multimask_output, return_logits=return_logits, img_idx=img_idx, ) masks_np = masks.squeeze(0).float().detach().cpu().numpy() iou_predictions_np = ( iou_predictions.squeeze(0).float().detach().cpu().numpy() ) low_res_masks_np = low_res_masks.squeeze(0).float().detach().cpu().numpy() all_masks.append(masks_np) all_ious.append(iou_predictions_np) all_low_res_masks.append(low_res_masks_np) return all_masks, all_ious, all_low_res_masks def predict( self, point_coords: Optional[np.ndarray] = None, point_labels: Optional[np.ndarray] = None, box: Optional[np.ndarray] = None, mask_input: Optional[np.ndarray] = None, multimask_output: bool = True, return_logits: bool = False, normalize_coords=True, ) -> Tuple[np.ndarray, np.ndarray, np.ndarray]: """ Predict masks for the given input prompts, using the currently set image. Arguments: point_coords (np.ndarray or None): A Nx2 array of point prompts to the model. Each point is in (X,Y) in pixels. point_labels (np.ndarray or None): A length N array of labels for the point prompts. 1 indicates a foreground point and 0 indicates a background point. box (np.ndarray or None): A length 4 array given a box prompt to the model, in XYXY format. 
mask_input (np.ndarray): A low resolution mask input to the model, typically coming from a previous prediction iteration. Has form 1xHxW, where for SAM, H=W=256. multimask_output (bool): If true, the model will return three masks. For ambiguous input prompts (such as a single click), this will often produce better masks than a single prediction. If only a single mask is needed, the model's predicted quality score can be used to select the best mask. For non-ambiguous prompts, such as multiple input prompts, multimask_output=False can give better results. return_logits (bool): If true, returns un-thresholded masks logits instead of a binary mask. normalize_coords (bool): If true, the point coordinates will be normalized to the range [0,1] and point_coords is expected to be wrt. image dimensions. Returns: (np.ndarray): The output masks in CxHxW format, where C is the number of masks, and (H, W) is the original image size. (np.ndarray): An array of length C containing the model's predictions for the quality of each mask. (np.ndarray): An array of shape CxHxW, where C is the number of masks and H=W=256. These low resolution logits can be passed to a subsequent iteration as mask input. """ if not self._is_image_set: raise RuntimeError( "An image must be set with .set_image(...) before mask prediction." 
) # Transform input prompts mask_input, unnorm_coords, labels, unnorm_box = self._prep_prompts( point_coords, point_labels, box, mask_input, normalize_coords ) masks, iou_predictions, low_res_masks = self._predict( unnorm_coords, labels, unnorm_box, mask_input, multimask_output, return_logits=return_logits, ) masks_np = masks.squeeze(0).float().detach().cpu().numpy() iou_predictions_np = iou_predictions.squeeze(0).float().detach().cpu().numpy() low_res_masks_np = low_res_masks.squeeze(0).float().detach().cpu().numpy() return masks_np, iou_predictions_np, low_res_masks_np def _prep_prompts( self, point_coords, point_labels, box, mask_logits, normalize_coords, img_idx=-1 ): unnorm_coords, labels, unnorm_box, mask_input = None, None, None, None if point_coords is not None: assert ( point_labels is not None ), "point_labels must be supplied if point_coords is supplied." point_coords = torch.as_tensor( point_coords, dtype=torch.float, device=self.device ) unnorm_coords = self._transforms.transform_coords( point_coords, normalize=normalize_coords, orig_hw=self._orig_hw[img_idx] ) labels = torch.as_tensor(point_labels, dtype=torch.int, device=self.device) if len(unnorm_coords.shape) == 2: unnorm_coords, labels = unnorm_coords[None, ...], labels[None, ...] 
if box is not None: box = torch.as_tensor(box, dtype=torch.float, device=self.device) unnorm_box = self._transforms.transform_boxes( box, normalize=normalize_coords, orig_hw=self._orig_hw[img_idx] ) # Bx2x2 if mask_logits is not None: mask_input = torch.as_tensor( mask_logits, dtype=torch.float, device=self.device ) if len(mask_input.shape) == 3: mask_input = mask_input[None, :, :, :] return mask_input, unnorm_coords, labels, unnorm_box @torch.no_grad() def _predict( self, point_coords: Optional[torch.Tensor], point_labels: Optional[torch.Tensor], boxes: Optional[torch.Tensor] = None, mask_input: Optional[torch.Tensor] = None, multimask_output: bool = True, return_logits: bool = False, img_idx: int = -1, ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: """ Predict masks for the given input prompts, using the currently set image. Input prompts are batched torch tensors and are expected to already be transformed to the input frame using SAM2Transforms. Arguments: point_coords (torch.Tensor or None): A BxNx2 array of point prompts to the model. Each point is in (X,Y) in pixels. point_labels (torch.Tensor or None): A BxN array of labels for the point prompts. 1 indicates a foreground point and 0 indicates a background point. boxes (np.ndarray or None): A Bx4 array given a box prompt to the model, in XYXY format. mask_input (np.ndarray): A low resolution mask input to the model, typically coming from a previous prediction iteration. Has form Bx1xHxW, where for SAM, H=W=256. Masks returned by a previous iteration of the predict method do not need further transformation. multimask_output (bool): If true, the model will return three masks. For ambiguous input prompts (such as a single click), this will often produce better masks than a single prediction. If only a single mask is needed, the model's predicted quality score can be used to select the best mask. For non-ambiguous prompts, such as multiple input prompts, multimask_output=False can give better results. 
return_logits (bool): If true, returns un-thresholded masks logits instead of a binary mask. Returns: (torch.Tensor): The output masks in BxCxHxW format, where C is the number of masks, and (H, W) is the original image size. (torch.Tensor): An array of shape BxC containing the model's predictions for the quality of each mask. (torch.Tensor): An array of shape BxCxHxW, where C is the number of masks and H=W=256. These low res logits can be passed to a subsequent iteration as mask input. """ if not self._is_image_set: raise RuntimeError( "An image must be set with .set_image(...) before mask prediction." ) if point_coords is not None: concat_points = (point_coords, point_labels) else: concat_points = None # Embed prompts if boxes is not None: box_coords = boxes.reshape(-1, 2, 2) box_labels = torch.tensor([[2, 3]], dtype=torch.int, device=boxes.device) box_labels = box_labels.repeat(boxes.size(0), 1) # we merge "boxes" and "points" into a single "concat_points" input (where # boxes are added at the beginning) to sam_prompt_encoder if concat_points is not None: concat_coords = torch.cat([box_coords, concat_points[0]], dim=1) concat_labels = torch.cat([box_labels, concat_points[1]], dim=1) concat_points = (concat_coords, concat_labels) else: concat_points = (box_coords, box_labels) sparse_embeddings, dense_embeddings = self.model.sam_prompt_encoder( points=concat_points, boxes=None, masks=mask_input, ) # Predict masks batched_mode = ( concat_points is not None and concat_points[0].shape[0] > 1 ) # multi object prediction high_res_features = [ feat_level[img_idx].unsqueeze(0) for feat_level in self._features["high_res_feats"] ] low_res_masks, iou_predictions, _, _ = self.model.sam_mask_decoder( image_embeddings=self._features["image_embed"][img_idx].unsqueeze(0), image_pe=self.model.sam_prompt_encoder.get_dense_pe(), sparse_prompt_embeddings=sparse_embeddings, dense_prompt_embeddings=dense_embeddings, multimask_output=multimask_output, repeat_image=batched_mode, 
high_res_features=high_res_features, ) # Upscale the masks to the original image resolution masks = self._transforms.postprocess_masks( low_res_masks, self._orig_hw[img_idx] ) low_res_masks = torch.clamp(low_res_masks, -32.0, 32.0) if not return_logits: masks = masks > self.mask_threshold return masks, iou_predictions, low_res_masks def get_image_embedding(self) -> torch.Tensor: """ Returns the image embeddings for the currently set image, with shape 1xCxHxW, where C is the embedding dimension and (H,W) are the embedding spatial dimension of SAM (typically C=256, H=W=64). """ if not self._is_image_set: raise RuntimeError( "An image must be set with .set_image(...) to generate an embedding." ) assert ( self._features is not None ), "Features must exist if an image has been set." return self._features["image_embed"] @property def device(self) -> torch.device: return self.model.device def reset_predictor(self) -> None: """ Resets the image embeddings and other state variables. """ self._is_image_set = False self._features = None self._orig_hw = None self._is_batch = False ================================================ FILE: auto-seg/submodules/segment-anything-2/sam2/sam2_video_predictor.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. 
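`SAM2ImagePredictor` above gates every prediction on `_is_image_set`: `predict` raises a `RuntimeError` unless `set_image(...)` has run, and `reset_predictor` clears the cached features. The toy class below (a hypothetical stand-in, not the real predictor or its backbone) sketches just that set-image / predict / reset state machine.

```python
# Toy sketch (hypothetical class, NOT the real SAM2ImagePredictor) of the
# state gating used above: prediction is only legal between set_image()
# and reset_predictor(); setting a new image invalidates the old state.

class ToyPredictor:
    def __init__(self):
        self._is_image_set = False
        self._features = None

    def set_image(self, image):
        self.reset_predictor()  # a new image invalidates any previous state
        # stand-in for the backbone forward pass that caches image features
        self._features = {"embed": sum(image)}
        self._is_image_set = True

    def predict(self):
        if not self._is_image_set:
            raise RuntimeError(
                "An image must be set with .set_image(...) before mask prediction."
            )
        return self._features["embed"]  # stand-in for the mask decoder output

    def reset_predictor(self):
        self._is_image_set = False
        self._features = None


p = ToyPredictor()
p.set_image([1, 2, 3])
print(p.predict())  # -> 6
p.reset_predictor()
# calling p.predict() here would raise RuntimeError again
```

The real class layers batching (`set_image_batch` / `predict_batch`, guarded by `_is_batch`) and prompt transforms on top of this same pattern.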
import warnings
from collections import OrderedDict

import torch
import torch.nn.functional as F
from sam2.modeling.sam2_base import NO_OBJ_SCORE, SAM2Base
from sam2.utils.misc import (concat_points, fill_holes_in_mask_scores,
                             load_video_frames)
from tqdm import tqdm


class SAM2VideoPredictor(SAM2Base):
    """The predictor class to handle user interactions and manage inference states."""

    def __init__(
        self,
        fill_hole_area=0,
        # whether to apply non-overlapping constraints on the output object masks
        non_overlap_masks=False,
        # whether to clear non-conditioning memory of the surrounding frames (which may contain outdated information) after adding correction clicks;
        # note that this only applies to *single-object tracking* unless `clear_non_cond_mem_for_multi_obj` is also set to True
        clear_non_cond_mem_around_input=False,
        # if `add_all_frames_to_correct_as_cond` is True, we also append to the conditioning frame list any frame that receives a later correction click
        # if `add_all_frames_to_correct_as_cond` is False, we restrict the conditioning frame list to only the initial conditioning frames
        add_all_frames_to_correct_as_cond=False,
        **kwargs,
    ):
        super().__init__(**kwargs)
        self.fill_hole_area = fill_hole_area
        self.non_overlap_masks = non_overlap_masks
        self.clear_non_cond_mem_around_input = clear_non_cond_mem_around_input
        self.add_all_frames_to_correct_as_cond = add_all_frames_to_correct_as_cond

    @torch.inference_mode()
    def init_state(
        self,
        video_path,
        offload_video_to_cpu=False,
        offload_state_to_cpu=False,
        async_loading_frames=False,
    ):
        """Initialize an inference state."""
        compute_device = self.device  # device of the model
        images, video_height, video_width = load_video_frames(
            video_path=video_path,
            image_size=self.image_size,
            offload_video_to_cpu=offload_video_to_cpu,
            async_loading_frames=async_loading_frames,
            compute_device=compute_device,
        )
        inference_state = {}
        inference_state["images"] = images
        inference_state["num_frames"] = len(images)
        # whether to offload the video
frames to CPU memory # turning on this option saves the GPU memory with only a very small overhead inference_state["offload_video_to_cpu"] = offload_video_to_cpu # whether to offload the inference state to CPU memory # turning on this option saves the GPU memory at the cost of a lower tracking fps # (e.g. in a test case of 768x768 model, fps dropped from 27 to 24 when tracking one object # and from 24 to 21 when tracking two objects) inference_state["offload_state_to_cpu"] = offload_state_to_cpu # the original video height and width, used for resizing final output scores inference_state["video_height"] = video_height inference_state["video_width"] = video_width inference_state["device"] = compute_device if offload_state_to_cpu: inference_state["storage_device"] = torch.device("cpu") else: inference_state["storage_device"] = compute_device # inputs on each frame inference_state["point_inputs_per_obj"] = {} inference_state["mask_inputs_per_obj"] = {} # visual features on a small number of recently visited frames for quick interactions inference_state["cached_features"] = {} # values that don't change across frames (so we only need to hold one copy of them) inference_state["constants"] = {} # mapping between client-side object id and model-side object index inference_state["obj_id_to_idx"] = OrderedDict() inference_state["obj_idx_to_id"] = OrderedDict() inference_state["obj_ids"] = [] # Slice (view) of each object tracking results, sharing the same memory with "output_dict" inference_state["output_dict_per_obj"] = {} # A temporary storage to hold new outputs when user interact with a frame # to add clicks or mask (it's merged into "output_dict" before propagation starts) inference_state["temp_output_dict_per_obj"] = {} # Frames that already holds consolidated outputs from click or mask inputs # (we directly use their consolidated outputs during tracking) # metadata for each tracking frame (e.g. 
which direction it's tracked) inference_state["frames_tracked_per_obj"] = {} # Warm up the visual backbone and cache the image feature on frame 0 self._get_image_feature(inference_state, frame_idx=0, batch_size=1) return inference_state @classmethod def from_pretrained(cls, model_id: str, **kwargs) -> "SAM2VideoPredictor": """ Load a pretrained model from the Hugging Face hub. Arguments: model_id (str): The Hugging Face repository ID. **kwargs: Additional arguments to pass to the model constructor. Returns: (SAM2VideoPredictor): The loaded model. """ from sam2.build_sam import build_sam2_video_predictor_hf sam_model = build_sam2_video_predictor_hf(model_id, **kwargs) return sam_model def _obj_id_to_idx(self, inference_state, obj_id): """Map client-side object id to model-side object index.""" obj_idx = inference_state["obj_id_to_idx"].get(obj_id, None) if obj_idx is not None: return obj_idx # We always allow adding new objects (including after tracking starts). allow_new_object = True if allow_new_object: # get the next object slot obj_idx = len(inference_state["obj_id_to_idx"]) inference_state["obj_id_to_idx"][obj_id] = obj_idx inference_state["obj_idx_to_id"][obj_idx] = obj_id inference_state["obj_ids"] = list(inference_state["obj_id_to_idx"]) # set up input and output structures for this object inference_state["point_inputs_per_obj"][obj_idx] = {} inference_state["mask_inputs_per_obj"][obj_idx] = {} inference_state["output_dict_per_obj"][obj_idx] = { "cond_frame_outputs": {}, # dict containing {frame_idx: <out>} "non_cond_frame_outputs": {}, # dict containing {frame_idx: <out>} } inference_state["temp_output_dict_per_obj"][obj_idx] = { "cond_frame_outputs": {}, # dict containing {frame_idx: <out>} "non_cond_frame_outputs": {}, # dict containing {frame_idx: <out>} } inference_state["frames_tracked_per_obj"][obj_idx] = {} return obj_idx else: raise RuntimeError( f"Cannot add new object id {obj_id} after tracking starts. " f"All existing object ids: {inference_state['obj_ids']}. 
" f"Please call 'reset_state' to restart from scratch." ) def _obj_idx_to_id(self, inference_state, obj_idx): """Map model-side object index to client-side object id.""" return inference_state["obj_idx_to_id"][obj_idx] def _get_obj_num(self, inference_state): """Get the total number of unique object ids received so far in this session.""" return len(inference_state["obj_idx_to_id"]) @torch.inference_mode() def add_new_points_or_box( self, inference_state, frame_idx, obj_id, points=None, labels=None, clear_old_points=True, normalize_coords=True, box=None, ): """Add new points to a frame.""" obj_idx = self._obj_id_to_idx(inference_state, obj_id) point_inputs_per_frame = inference_state["point_inputs_per_obj"][obj_idx] mask_inputs_per_frame = inference_state["mask_inputs_per_obj"][obj_idx] if (points is not None) != (labels is not None): raise ValueError("points and labels must be provided together") if points is None and box is None: raise ValueError("at least one of points or box must be provided as input") if points is None: points = torch.zeros(0, 2, dtype=torch.float32) elif not isinstance(points, torch.Tensor): points = torch.tensor(points, dtype=torch.float32) if labels is None: labels = torch.zeros(0, dtype=torch.int32) elif not isinstance(labels, torch.Tensor): labels = torch.tensor(labels, dtype=torch.int32) if points.dim() == 2: points = points.unsqueeze(0) # add batch dimension if labels.dim() == 1: labels = labels.unsqueeze(0) # add batch dimension # If `box` is provided, we add it as the first two points with labels 2 and 3 # along with the user-provided points (consistent with how SAM 2 is trained). 
if box is not None: if not clear_old_points: raise ValueError( "cannot add box without clearing old points, since " "box prompt must be provided before any point prompt " "(please use clear_old_points=True instead)" ) if not isinstance(box, torch.Tensor): box = torch.tensor(box, dtype=torch.float32, device=points.device) box_coords = box.reshape(1, 2, 2) box_labels = torch.tensor([2, 3], dtype=torch.int32, device=labels.device) box_labels = box_labels.reshape(1, 2) points = torch.cat([box_coords, points], dim=1) labels = torch.cat([box_labels, labels], dim=1) if normalize_coords: video_H = inference_state["video_height"] video_W = inference_state["video_width"] points = points / torch.tensor([video_W, video_H]).to(points.device) # scale the (normalized) coordinates by the model's internal image size points = points * self.image_size points = points.to(inference_state["device"]) labels = labels.to(inference_state["device"]) if not clear_old_points: point_inputs = point_inputs_per_frame.get(frame_idx, None) else: point_inputs = None point_inputs = concat_points(point_inputs, points, labels) point_inputs_per_frame[frame_idx] = point_inputs mask_inputs_per_frame.pop(frame_idx, None) # If this frame hasn't been tracked before, we treat it as an initial conditioning # frame, meaning that the input points are used to generate segments on this frame without # using any memory from other frames, like in SAM. Otherwise (if it has been tracked), # the input points will be used to correct the already tracked masks. 
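The preprocessing above folds a box prompt into the point interface (the two box corners become extra points with the reserved labels 2 and 3, prepended to any user clicks), then normalizes coordinates by the video's (W, H) and rescales them to the model's internal square resolution. A standalone sketch of both transforms in plain torch; the `image_size` of 1024 is an assumption matching a common SAM 2 default, and `preprocess_prompts` is a hypothetical helper, not part of this file:

```python
import torch

def preprocess_prompts(points, labels, box=None,
                       video_w=1920, video_h=1080, image_size=1024):
    """Fold an optional box prompt into the point list (labels 2/3 for the
    top-left and bottom-right corners), then map pixel coordinates from the
    original video frame to the model's image_size x image_size space."""
    points = torch.as_tensor(points, dtype=torch.float32).reshape(1, -1, 2)
    labels = torch.as_tensor(labels, dtype=torch.int32).reshape(1, -1)
    if box is not None:
        box_coords = torch.as_tensor(box, dtype=torch.float32).reshape(1, 2, 2)
        box_labels = torch.tensor([[2, 3]], dtype=torch.int32)
        points = torch.cat([box_coords, points], dim=1)
        labels = torch.cat([box_labels, labels], dim=1)
    # normalize by the original video size, then scale to model space
    points = points / torch.tensor([video_w, video_h], dtype=torch.float32)
    return points * image_size, labels

pts, lbls = preprocess_prompts(
    [[960.0, 540.0]], [1], box=[0.0, 0.0, 1920.0, 1080.0]
)
# lbls is [[2, 3, 1]]; the frame center maps to (512, 512) in model space
```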
obj_frames_tracked = inference_state["frames_tracked_per_obj"][obj_idx] is_init_cond_frame = frame_idx not in obj_frames_tracked # whether to track in reverse time order if is_init_cond_frame: reverse = False else: reverse = obj_frames_tracked[frame_idx]["reverse"] obj_output_dict = inference_state["output_dict_per_obj"][obj_idx] obj_temp_output_dict = inference_state["temp_output_dict_per_obj"][obj_idx] # Add a frame to conditioning output if it's an initial conditioning frame or # if the model sees all frames receiving clicks/mask as conditioning frames. is_cond = is_init_cond_frame or self.add_all_frames_to_correct_as_cond storage_key = "cond_frame_outputs" if is_cond else "non_cond_frame_outputs" # Get any previously predicted mask logits on this object and feed it along with # the new clicks into the SAM mask decoder. prev_sam_mask_logits = None # lookup temporary output dict first, which contains the most recent output # (if not found, then lookup conditioning and non-conditioning frame output) prev_out = obj_temp_output_dict[storage_key].get(frame_idx) if prev_out is None: prev_out = obj_output_dict["cond_frame_outputs"].get(frame_idx) if prev_out is None: prev_out = obj_output_dict["non_cond_frame_outputs"].get(frame_idx) if prev_out is not None and prev_out["pred_masks"] is not None: device = inference_state["device"] prev_sam_mask_logits = prev_out["pred_masks"].to(device, non_blocking=True) # Clamp the scale of prev_sam_mask_logits to avoid rare numerical issues. prev_sam_mask_logits = torch.clamp(prev_sam_mask_logits, -32.0, 32.0) current_out, _ = self._run_single_frame_inference( inference_state=inference_state, output_dict=obj_output_dict, # run on the slice of a single object frame_idx=frame_idx, batch_size=1, # run on the slice of a single object is_init_cond_frame=is_init_cond_frame, point_inputs=point_inputs, mask_inputs=None, reverse=reverse, # Skip the memory encoder when adding clicks or mask. 
We execute the memory encoder # at the beginning of `propagate_in_video` (after the user finalizes their clicks). This # allows us to enforce non-overlapping constraints on all objects before encoding # them into memory. run_mem_encoder=False, prev_sam_mask_logits=prev_sam_mask_logits, ) # Add the output to the output dict (to be used as future memory) obj_temp_output_dict[storage_key][frame_idx] = current_out # Resize the output mask to the original video resolution obj_ids = inference_state["obj_ids"] consolidated_out = self._consolidate_temp_output_across_obj( inference_state, frame_idx, is_cond=is_cond, consolidate_at_video_res=True, ) _, video_res_masks = self._get_orig_video_res_output( inference_state, consolidated_out["pred_masks_video_res"] ) return frame_idx, obj_ids, video_res_masks def add_new_points(self, *args, **kwargs): """Deprecated method. Please use `add_new_points_or_box` instead.""" return self.add_new_points_or_box(*args, **kwargs) @torch.inference_mode() def add_new_mask( self, inference_state, frame_idx, obj_id, mask, ): """Add new mask to a frame.""" obj_idx = self._obj_id_to_idx(inference_state, obj_id) point_inputs_per_frame = inference_state["point_inputs_per_obj"][obj_idx] mask_inputs_per_frame = inference_state["mask_inputs_per_obj"][obj_idx] if not isinstance(mask, torch.Tensor): mask = torch.tensor(mask, dtype=torch.bool) assert mask.dim() == 2 mask_H, mask_W = mask.shape mask_inputs_orig = mask[None, None] # add batch and channel dimension mask_inputs_orig = mask_inputs_orig.float().to(inference_state["device"]) # resize the mask if it doesn't match the model's image size if mask_H != self.image_size or mask_W != self.image_size: mask_inputs = torch.nn.functional.interpolate( mask_inputs_orig, size=(self.image_size, self.image_size), align_corners=False, mode="bilinear", antialias=True, # use antialias for downsampling ) mask_inputs = (mask_inputs >= 0.5).float() else: mask_inputs = mask_inputs_orig mask_inputs_per_frame[frame_idx] = 
mask_inputs point_inputs_per_frame.pop(frame_idx, None) # If this frame hasn't been tracked before, we treat it as an initial conditioning # frame, meaning that the input points are used to generate segments on this frame without # using any memory from other frames, like in SAM. Otherwise (if it has been tracked), # the input points will be used to correct the already tracked masks. obj_frames_tracked = inference_state["frames_tracked_per_obj"][obj_idx] is_init_cond_frame = frame_idx not in obj_frames_tracked # whether to track in reverse time order if is_init_cond_frame: reverse = False else: reverse = obj_frames_tracked[frame_idx]["reverse"] obj_output_dict = inference_state["output_dict_per_obj"][obj_idx] obj_temp_output_dict = inference_state["temp_output_dict_per_obj"][obj_idx] # Add a frame to conditioning output if it's an initial conditioning frame or # if the model sees all frames receiving clicks/mask as conditioning frames. is_cond = is_init_cond_frame or self.add_all_frames_to_correct_as_cond storage_key = "cond_frame_outputs" if is_cond else "non_cond_frame_outputs" current_out, _ = self._run_single_frame_inference( inference_state=inference_state, output_dict=obj_output_dict, # run on the slice of a single object frame_idx=frame_idx, batch_size=1, # run on the slice of a single object is_init_cond_frame=is_init_cond_frame, point_inputs=None, mask_inputs=mask_inputs, reverse=reverse, # Skip the memory encoder when adding clicks or mask. We execute the memory encoder # at the beginning of `propagate_in_video` (after the user finalizes their clicks). This # allows us to enforce non-overlapping constraints on all objects before encoding # them into memory. 
run_mem_encoder=False, ) # Add the output to the output dict (to be used as future memory) obj_temp_output_dict[storage_key][frame_idx] = current_out # Resize the output mask to the original video resolution obj_ids = inference_state["obj_ids"] consolidated_out = self._consolidate_temp_output_across_obj( inference_state, frame_idx, is_cond=is_cond, consolidate_at_video_res=True, ) _, video_res_masks = self._get_orig_video_res_output( inference_state, consolidated_out["pred_masks_video_res"] ) return frame_idx, obj_ids, video_res_masks def _get_orig_video_res_output(self, inference_state, any_res_masks): """ Resize the object scores to the original video resolution (video_res_masks) and apply non-overlapping constraints for final output. """ device = inference_state["device"] video_H = inference_state["video_height"] video_W = inference_state["video_width"] any_res_masks = any_res_masks.to(device, non_blocking=True) if any_res_masks.shape[-2:] == (video_H, video_W): video_res_masks = any_res_masks else: video_res_masks = torch.nn.functional.interpolate( any_res_masks, size=(video_H, video_W), mode="bilinear", align_corners=False, ) if self.non_overlap_masks: video_res_masks = self._apply_non_overlapping_constraints(video_res_masks) return any_res_masks, video_res_masks def _consolidate_temp_output_across_obj( self, inference_state, frame_idx, is_cond, consolidate_at_video_res=False, ): """ Consolidate the per-object temporary outputs in `temp_output_dict_per_obj` on a frame into a single output for all objects, including 1) fill any missing objects either from `output_dict_per_obj` (if they exist in `output_dict_per_obj` for this frame) or leave them as placeholder values (if they don't exist in `output_dict_per_obj` for this frame); 2) if specified, rerun the memory encoder after applying non-overlapping constraints on the object scores. 
""" batch_size = self._get_obj_num(inference_state) storage_key = "cond_frame_outputs" if is_cond else "non_cond_frame_outputs" # Optionally, we allow consolidating the temporary outputs at the original # video resolution (to provide a better editing experience for mask prompts). if consolidate_at_video_res: consolidated_H = inference_state["video_height"] consolidated_W = inference_state["video_width"] consolidated_mask_key = "pred_masks_video_res" else: consolidated_H = consolidated_W = self.image_size // 4 consolidated_mask_key = "pred_masks" # Initialize `consolidated_out`. Its "maskmem_features" and "maskmem_pos_enc" # will be added when rerunning the memory encoder after applying non-overlapping # constraints to object scores. Its "pred_masks" are prefilled with a large # negative value (NO_OBJ_SCORE) to represent missing objects. consolidated_out = { consolidated_mask_key: torch.full( size=(batch_size, 1, consolidated_H, consolidated_W), fill_value=NO_OBJ_SCORE, dtype=torch.float32, device=inference_state["storage_device"], ), } for obj_idx in range(batch_size): obj_temp_output_dict = inference_state["temp_output_dict_per_obj"][obj_idx] obj_output_dict = inference_state["output_dict_per_obj"][obj_idx] out = obj_temp_output_dict[storage_key].get(frame_idx, None) # If the object doesn't appear in "temp_output_dict_per_obj" on this frame, # we fall back and look up its previous output in "output_dict_per_obj". # We look up both "cond_frame_outputs" and "non_cond_frame_outputs" in # "output_dict_per_obj" to find a previous output for this object. if out is None: out = obj_output_dict["cond_frame_outputs"].get(frame_idx, None) if out is None: out = obj_output_dict["non_cond_frame_outputs"].get(frame_idx, None) # If the object doesn't appear in "output_dict_per_obj" either, we skip it # and leave its mask scores to the default scores (i.e. the NO_OBJ_SCORE # placeholder above) and set its object pointer to be a dummy pointer. 
if out is None: continue # Add the temporary object output mask to consolidated output mask obj_mask = out["pred_masks"] consolidated_pred_masks = consolidated_out[consolidated_mask_key] if obj_mask.shape[-2:] == consolidated_pred_masks.shape[-2:]: consolidated_pred_masks[obj_idx : obj_idx + 1] = obj_mask else: # Resize first if temporary object mask has a different resolution resized_obj_mask = torch.nn.functional.interpolate( obj_mask, size=consolidated_pred_masks.shape[-2:], mode="bilinear", align_corners=False, ) consolidated_pred_masks[obj_idx : obj_idx + 1] = resized_obj_mask return consolidated_out @torch.inference_mode() def propagate_in_video_preflight(self, inference_state): """Prepare inference_state and consolidate temporary outputs before tracking.""" # Check and make sure that every object has received input points or masks. batch_size = self._get_obj_num(inference_state) if batch_size == 0: raise RuntimeError( "No input points or masks are provided for any object; please add inputs first." ) # Consolidate per-object temporary outputs in "temp_output_dict_per_obj" and # add them into "output_dict". 
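The consolidation in `_consolidate_temp_output_across_obj` boils down to pre-filling a per-object batch of mask scores with a large negative sentinel and overwriting only the slots of objects that actually have an output; missing objects keep the sentinel, which later decodes to "no object". A toy sketch in plain torch (the `-1024.0` sentinel and the `consolidate` helper are assumptions for illustration, standing in for `NO_OBJ_SCORE` from `sam2.modeling.sam2_base`):

```python
import torch

NO_OBJ_SCORE = -1024.0  # assumed sentinel value for illustration

def consolidate(per_obj_masks, batch_size, h, w):
    """Fill a (batch, 1, h, w) score tensor with NO_OBJ_SCORE, then copy in
    the masks of objects that have an output, resizing when needed."""
    out = torch.full((batch_size, 1, h, w), NO_OBJ_SCORE)
    for obj_idx, mask in per_obj_masks.items():
        if mask.shape[-2:] != (h, w):
            # resize first if this object's mask has a different resolution
            mask = torch.nn.functional.interpolate(
                mask, size=(h, w), mode="bilinear", align_corners=False
            )
        out[obj_idx : obj_idx + 1] = mask
    return out

masks = consolidate({0: torch.ones(1, 1, 4, 4)}, batch_size=2, h=4, w=4)
# slot 0 holds object 0's scores; slot 1 stays at the NO_OBJ_SCORE placeholder
```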
for obj_idx in range(batch_size): obj_output_dict = inference_state["output_dict_per_obj"][obj_idx] obj_temp_output_dict = inference_state["temp_output_dict_per_obj"][obj_idx] for is_cond in [False, True]: # Separately consolidate conditioning and non-conditioning temp outputs storage_key = ( "cond_frame_outputs" if is_cond else "non_cond_frame_outputs" ) # Find all the frames that contain temporary outputs for any objects # (these should be the frames that have just received clicks or mask inputs # via `add_new_points_or_box` or `add_new_mask`) for frame_idx, out in obj_temp_output_dict[storage_key].items(): # Run memory encoder on the temporary outputs (if the memory feature is missing) if out["maskmem_features"] is None: high_res_masks = torch.nn.functional.interpolate( out["pred_masks"].to(inference_state["device"]), size=(self.image_size, self.image_size), mode="bilinear", align_corners=False, ) maskmem_features, maskmem_pos_enc = self._run_memory_encoder( inference_state=inference_state, frame_idx=frame_idx, batch_size=1, # run on the slice of a single object high_res_masks=high_res_masks, object_score_logits=out["object_score_logits"], # these frames are what the user interacted with is_mask_from_pts=True, ) out["maskmem_features"] = maskmem_features out["maskmem_pos_enc"] = maskmem_pos_enc obj_output_dict[storage_key][frame_idx] = out if self.clear_non_cond_mem_around_input: # clear non-conditioning memory of the surrounding frames self._clear_obj_non_cond_mem_around_input( inference_state, frame_idx, obj_idx ) # clear temporary outputs in `temp_output_dict_per_obj` obj_temp_output_dict[storage_key].clear() # check and make sure that every object has received input points or masks obj_output_dict = inference_state["output_dict_per_obj"][obj_idx] if len(obj_output_dict["cond_frame_outputs"]) == 0: obj_id = self._obj_idx_to_id(inference_state, obj_idx) raise RuntimeError( f"No input points or masks are provided for object id {obj_id}; please add inputs 
first." ) # edge case: if an output is added to "cond_frame_outputs", we remove any prior # output on the same frame in "non_cond_frame_outputs" for frame_idx in obj_output_dict["cond_frame_outputs"]: obj_output_dict["non_cond_frame_outputs"].pop(frame_idx, None) @torch.inference_mode() def propagate_in_video( self, inference_state, start_frame_idx=None, max_frame_num_to_track=None, reverse=False, ): """Propagate the input points across frames to track in the entire video.""" self.propagate_in_video_preflight(inference_state) obj_ids = inference_state["obj_ids"] num_frames = inference_state["num_frames"] batch_size = self._get_obj_num(inference_state) # set start index, end index, and processing order if start_frame_idx is None: # default: start from the earliest frame with input points start_frame_idx = min( t for obj_output_dict in inference_state["output_dict_per_obj"].values() for t in obj_output_dict["cond_frame_outputs"] ) if max_frame_num_to_track is None: # default: track all the frames in the video max_frame_num_to_track = num_frames if reverse: end_frame_idx = max(start_frame_idx - max_frame_num_to_track, 0) if start_frame_idx > 0: processing_order = range(start_frame_idx, end_frame_idx - 1, -1) else: processing_order = [] # skip reverse tracking if starting from frame 0 else: end_frame_idx = min( start_frame_idx + max_frame_num_to_track, num_frames - 1 ) processing_order = range(start_frame_idx, end_frame_idx + 1) for frame_idx in tqdm(processing_order, desc="propagate in video"): pred_masks_per_obj = [None] * batch_size for obj_idx in range(batch_size): obj_output_dict = inference_state["output_dict_per_obj"][obj_idx] # We skip those frames already in consolidated outputs (these are frames # that received input clicks or mask). Note that we cannot directly run # batched forward on them via `_run_single_frame_inference` because the # number of clicks on each object might be different. 
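The frame-ordering logic at the top of `propagate_in_video` (start index, end index, forward vs. reverse) reduces to simple index arithmetic. A standalone sketch in pure Python, no model state needed; `plan_processing_order` is a hypothetical helper for illustration:

```python
def plan_processing_order(start_frame_idx, num_frames,
                          max_track=None, reverse=False):
    """Reproduce the propagation order: forward from the start frame toward
    the end of the video, or backward toward frame 0, capped by max_track."""
    if max_track is None:
        max_track = num_frames  # default: track all frames
    if reverse:
        if start_frame_idx == 0:
            return []  # nothing earlier to track
        end = max(start_frame_idx - max_track, 0)
        return list(range(start_frame_idx, end - 1, -1))
    end = min(start_frame_idx + max_track, num_frames - 1)
    return list(range(start_frame_idx, end + 1))

plan_processing_order(2, 5)               # forward: [2, 3, 4]
plan_processing_order(2, 5, reverse=True) # backward: [2, 1, 0]
```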
if frame_idx in obj_output_dict["cond_frame_outputs"]: storage_key = "cond_frame_outputs" current_out = obj_output_dict[storage_key][frame_idx] device = inference_state["device"] pred_masks = current_out["pred_masks"].to(device, non_blocking=True) if self.clear_non_cond_mem_around_input: # clear non-conditioning memory of the surrounding frames self._clear_obj_non_cond_mem_around_input( inference_state, frame_idx, obj_idx ) else: storage_key = "non_cond_frame_outputs" current_out, pred_masks = self._run_single_frame_inference( inference_state=inference_state, output_dict=obj_output_dict, frame_idx=frame_idx, batch_size=1, # run on the slice of a single object is_init_cond_frame=False, point_inputs=None, mask_inputs=None, reverse=reverse, run_mem_encoder=True, ) obj_output_dict[storage_key][frame_idx] = current_out inference_state["frames_tracked_per_obj"][obj_idx][frame_idx] = { "reverse": reverse } pred_masks_per_obj[obj_idx] = pred_masks # Resize the output mask to the original video resolution (we directly use # the mask scores on GPU for output to avoid any CPU conversion in between) if len(pred_masks_per_obj) > 1: all_pred_masks = torch.cat(pred_masks_per_obj, dim=0) else: all_pred_masks = pred_masks_per_obj[0] _, video_res_masks = self._get_orig_video_res_output( inference_state, all_pred_masks ) yield frame_idx, obj_ids, video_res_masks @torch.inference_mode() def clear_all_prompts_in_frame( self, inference_state, frame_idx, obj_id, need_output=True ): """Remove all input points or mask in a specific frame for a given object.""" obj_idx = self._obj_id_to_idx(inference_state, obj_id) # Clear the conditioning information on the given frame inference_state["point_inputs_per_obj"][obj_idx].pop(frame_idx, None) inference_state["mask_inputs_per_obj"][obj_idx].pop(frame_idx, None) temp_output_dict_per_obj = inference_state["temp_output_dict_per_obj"] temp_output_dict_per_obj[obj_idx]["cond_frame_outputs"].pop(frame_idx, None) 
temp_output_dict_per_obj[obj_idx]["non_cond_frame_outputs"].pop(frame_idx, None) # Remove the frame's conditioning output (possibly downgrading it to non-conditioning) obj_output_dict = inference_state["output_dict_per_obj"][obj_idx] out = obj_output_dict["cond_frame_outputs"].pop(frame_idx, None) if out is not None: # The frame is not a conditioning frame anymore since it's not receiving inputs, # so we "downgrade" its output (if exists) to a non-conditioning frame output. obj_output_dict["non_cond_frame_outputs"][frame_idx] = out inference_state["frames_tracked_per_obj"][obj_idx].pop(frame_idx, None) if not need_output: return # Finally, output updated masks per object (after removing the inputs above) obj_ids = inference_state["obj_ids"] is_cond = any( frame_idx in obj_temp_output_dict["cond_frame_outputs"] for obj_temp_output_dict in temp_output_dict_per_obj.values() ) consolidated_out = self._consolidate_temp_output_across_obj( inference_state, frame_idx, is_cond=is_cond, consolidate_at_video_res=True, ) _, video_res_masks = self._get_orig_video_res_output( inference_state, consolidated_out["pred_masks_video_res"] ) return frame_idx, obj_ids, video_res_masks @torch.inference_mode() def reset_state(self, inference_state): """Remove all input points or mask in all frames throughout the video.""" self._reset_tracking_results(inference_state) # Remove all object ids inference_state["obj_id_to_idx"].clear() inference_state["obj_idx_to_id"].clear() inference_state["obj_ids"].clear() inference_state["point_inputs_per_obj"].clear() inference_state["mask_inputs_per_obj"].clear() inference_state["output_dict_per_obj"].clear() inference_state["temp_output_dict_per_obj"].clear() inference_state["frames_tracked_per_obj"].clear() def _reset_tracking_results(self, inference_state): """Reset all tracking inputs and results across the videos.""" for v in inference_state["point_inputs_per_obj"].values(): v.clear() for v in inference_state["mask_inputs_per_obj"].values(): 
v.clear() for v in inference_state["output_dict_per_obj"].values(): v["cond_frame_outputs"].clear() v["non_cond_frame_outputs"].clear() for v in inference_state["temp_output_dict_per_obj"].values(): v["cond_frame_outputs"].clear() v["non_cond_frame_outputs"].clear() for v in inference_state["frames_tracked_per_obj"].values(): v.clear() def _get_image_feature(self, inference_state, frame_idx, batch_size): """Compute the image features on a given frame.""" # Look up in the cache first image, backbone_out = inference_state["cached_features"].get( frame_idx, (None, None) ) if backbone_out is None: # Cache miss -- we will run inference on a single image device = inference_state["device"] image = inference_state["images"][frame_idx].to(device).float().unsqueeze(0) backbone_out = self.forward_image(image) # Cache the most recent frame's feature (for repeated interactions with # a frame; we can use an LRU cache for more frames in the future). inference_state["cached_features"] = {frame_idx: (image, backbone_out)} # expand the features to have the same dimension as the number of objects expanded_image = image.expand(batch_size, -1, -1, -1) expanded_backbone_out = { "backbone_fpn": backbone_out["backbone_fpn"].copy(), "vision_pos_enc": backbone_out["vision_pos_enc"].copy(), } for i, feat in enumerate(expanded_backbone_out["backbone_fpn"]): expanded_backbone_out["backbone_fpn"][i] = feat.expand( batch_size, -1, -1, -1 ) for i, pos in enumerate(expanded_backbone_out["vision_pos_enc"]): pos = pos.expand(batch_size, -1, -1, -1) expanded_backbone_out["vision_pos_enc"][i] = pos features = self._prepare_backbone_features(expanded_backbone_out) features = (expanded_image,) + features return features def _run_single_frame_inference( self, inference_state, output_dict, frame_idx, batch_size, is_init_cond_frame, point_inputs, mask_inputs, reverse, run_mem_encoder, prev_sam_mask_logits=None, ): """Run tracking on a single frame based on current inputs and previous memory.""" # Retrieve 
correct image features ( _, _, current_vision_feats, current_vision_pos_embeds, feat_sizes, ) = self._get_image_feature(inference_state, frame_idx, batch_size) # point and mask should not appear as input simultaneously on the same frame assert point_inputs is None or mask_inputs is None current_out = self.track_step( frame_idx=frame_idx, is_init_cond_frame=is_init_cond_frame, current_vision_feats=current_vision_feats, current_vision_pos_embeds=current_vision_pos_embeds, feat_sizes=feat_sizes, point_inputs=point_inputs, mask_inputs=mask_inputs, output_dict=output_dict, num_frames=inference_state["num_frames"], track_in_reverse=reverse, run_mem_encoder=run_mem_encoder, prev_sam_mask_logits=prev_sam_mask_logits, ) # optionally offload the output to CPU memory to save GPU space storage_device = inference_state["storage_device"] maskmem_features = current_out["maskmem_features"] if maskmem_features is not None: maskmem_features = maskmem_features.to(torch.bfloat16) maskmem_features = maskmem_features.to(storage_device, non_blocking=True) pred_masks_gpu = current_out["pred_masks"] # potentially fill holes in the predicted masks if self.fill_hole_area > 0: pred_masks_gpu = fill_holes_in_mask_scores( pred_masks_gpu, self.fill_hole_area ) pred_masks = pred_masks_gpu.to(storage_device, non_blocking=True) # "maskmem_pos_enc" is the same across frames, so we only need to store one copy of it maskmem_pos_enc = self._get_maskmem_pos_enc(inference_state, current_out) # object pointer is a small tensor, so we always keep it on GPU memory for fast access obj_ptr = current_out["obj_ptr"] object_score_logits = current_out["object_score_logits"] # make a compact version of this frame's output to reduce the state size compact_current_out = { "maskmem_features": maskmem_features, "maskmem_pos_enc": maskmem_pos_enc, "pred_masks": pred_masks, "obj_ptr": obj_ptr, "object_score_logits": object_score_logits, } return compact_current_out, pred_masks_gpu def _run_memory_encoder( self, 
inference_state, frame_idx, batch_size, high_res_masks, object_score_logits, is_mask_from_pts, ): """ Run the memory encoder on `high_res_masks`. This is usually after applying non-overlapping constraints to object scores. Since their scores changed, their memory also needs to be computed again with the memory encoder. """ # Retrieve correct image features _, _, current_vision_feats, _, feat_sizes = self._get_image_feature( inference_state, frame_idx, batch_size ) maskmem_features, maskmem_pos_enc = self._encode_new_memory( current_vision_feats=current_vision_feats, feat_sizes=feat_sizes, pred_masks_high_res=high_res_masks, object_score_logits=object_score_logits, is_mask_from_pts=is_mask_from_pts, ) # optionally offload the output to CPU memory to save GPU space storage_device = inference_state["storage_device"] maskmem_features = maskmem_features.to(torch.bfloat16) maskmem_features = maskmem_features.to(storage_device, non_blocking=True) # "maskmem_pos_enc" is the same across frames, so we only need to store one copy of it maskmem_pos_enc = self._get_maskmem_pos_enc( inference_state, {"maskmem_pos_enc": maskmem_pos_enc} ) return maskmem_features, maskmem_pos_enc def _get_maskmem_pos_enc(self, inference_state, current_out): """ `maskmem_pos_enc` is the same across frames and objects, so we cache it as a constant in the inference session to reduce session storage size. 
""" model_constants = inference_state["constants"] # "out_maskmem_pos_enc" should be either a list of tensors or None out_maskmem_pos_enc = current_out["maskmem_pos_enc"] if out_maskmem_pos_enc is not None: if "maskmem_pos_enc" not in model_constants: assert isinstance(out_maskmem_pos_enc, list) # only take the slice for one object, since it's same across objects maskmem_pos_enc = [x[0:1].clone() for x in out_maskmem_pos_enc] model_constants["maskmem_pos_enc"] = maskmem_pos_enc else: maskmem_pos_enc = model_constants["maskmem_pos_enc"] # expand the cached maskmem_pos_enc to the actual batch size batch_size = out_maskmem_pos_enc[0].size(0) expanded_maskmem_pos_enc = [ x.expand(batch_size, -1, -1, -1) for x in maskmem_pos_enc ] else: expanded_maskmem_pos_enc = None return expanded_maskmem_pos_enc @torch.inference_mode() def remove_object(self, inference_state, obj_id, strict=False, need_output=True): """ Remove an object id from the tracking state. If strict is True, we check whether the object id actually exists and raise an error if it doesn't exist. """ old_obj_idx_to_rm = inference_state["obj_id_to_idx"].get(obj_id, None) updated_frames = [] # Check whether this object_id to remove actually exists and possibly raise an error. if old_obj_idx_to_rm is None: if not strict: return inference_state["obj_ids"], updated_frames raise RuntimeError( f"Cannot remove object id {obj_id} as it doesn't exist. " f"All existing object ids: {inference_state['obj_ids']}." ) # If this is the only remaining object id, we simply reset the state. if len(inference_state["obj_id_to_idx"]) == 1: self.reset_state(inference_state) return inference_state["obj_ids"], updated_frames # There are still remaining objects after removing this object id. In this case, # we need to delete the object storage from inference state tensors. 
# Step 0: clear the input on those frames where this object id has point or mask input # (note that this step is required as it might downgrade conditioning frames to # non-conditioning ones) obj_input_frames_inds = set() obj_input_frames_inds.update( inference_state["point_inputs_per_obj"][old_obj_idx_to_rm] ) obj_input_frames_inds.update( inference_state["mask_inputs_per_obj"][old_obj_idx_to_rm] ) for frame_idx in obj_input_frames_inds: self.clear_all_prompts_in_frame( inference_state, frame_idx, obj_id, need_output=False ) # Step 1: Update the object id mapping (note that it must be done after Step 0, # since Step 0 still requires the old object id mappings in inference_state) old_obj_ids = inference_state["obj_ids"] old_obj_inds = list(range(len(old_obj_ids))) remain_old_obj_inds = old_obj_inds.copy() remain_old_obj_inds.remove(old_obj_idx_to_rm) new_obj_ids = [old_obj_ids[old_idx] for old_idx in remain_old_obj_inds] new_obj_inds = list(range(len(new_obj_ids))) # build new mappings old_idx_to_new_idx = dict(zip(remain_old_obj_inds, new_obj_inds)) inference_state["obj_id_to_idx"] = dict(zip(new_obj_ids, new_obj_inds)) inference_state["obj_idx_to_id"] = dict(zip(new_obj_inds, new_obj_ids)) inference_state["obj_ids"] = new_obj_ids # Step 2: For per-object tensor storage, we shift their obj_idx in the dict keys. 
def _map_keys(container): new_kvs = [] for k in old_obj_inds: v = container.pop(k) if k in old_idx_to_new_idx: new_kvs.append((old_idx_to_new_idx[k], v)) container.update(new_kvs) _map_keys(inference_state["point_inputs_per_obj"]) _map_keys(inference_state["mask_inputs_per_obj"]) _map_keys(inference_state["output_dict_per_obj"]) _map_keys(inference_state["temp_output_dict_per_obj"]) _map_keys(inference_state["frames_tracked_per_obj"]) # Step 3: Further collect the outputs on those frames in `obj_input_frames_inds`, which # could show an updated mask for objects previously occluded by the object being removed if need_output: temp_output_dict_per_obj = inference_state["temp_output_dict_per_obj"] for frame_idx in obj_input_frames_inds: is_cond = any( frame_idx in obj_temp_output_dict["cond_frame_outputs"] for obj_temp_output_dict in temp_output_dict_per_obj.values() ) consolidated_out = self._consolidate_temp_output_across_obj( inference_state, frame_idx, is_cond=is_cond, consolidate_at_video_res=True, ) _, video_res_masks = self._get_orig_video_res_output( inference_state, consolidated_out["pred_masks_video_res"] ) updated_frames.append((frame_idx, video_res_masks)) return inference_state["obj_ids"], updated_frames def _clear_non_cond_mem_around_input(self, inference_state, frame_idx): """ Remove the non-conditioning memory around the input frame. When users provide correction clicks, the surrounding frames' non-conditioning memories can still contain outdated object appearance information and could confuse the model. This method clears those non-conditioning memories surrounding the interacted frame to avoid giving the model both old and new information about the object. 
""" r = self.memory_temporal_stride_for_eval frame_idx_begin = frame_idx - r * self.num_maskmem frame_idx_end = frame_idx + r * self.num_maskmem batch_size = self._get_obj_num(inference_state) for obj_idx in range(batch_size): obj_output_dict = inference_state["output_dict_per_obj"][obj_idx] non_cond_frame_outputs = obj_output_dict["non_cond_frame_outputs"] for t in range(frame_idx_begin, frame_idx_end + 1): non_cond_frame_outputs.pop(t, None) class SAM2VideoPredictorVOS(SAM2VideoPredictor): """Optimized for the VOS setting""" def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self._compile_all_components() def _compile_all_components(self): print("Compiling all components for VOS setting. First time may be very slow.") self.memory_encoder.forward = torch.compile( self.memory_encoder.forward, mode="max-autotune", fullgraph=True, dynamic=False, ) self.memory_attention.forward = torch.compile( self.memory_attention.forward, mode="max-autotune", fullgraph=True, dynamic=True, # Num. of memories varies ) self.sam_prompt_encoder.forward = torch.compile( self.sam_prompt_encoder.forward, mode="max-autotune", fullgraph=True, dynamic=False, # Accuracy regression on True ) self.sam_mask_decoder.forward = torch.compile( self.sam_mask_decoder.forward, mode="max-autotune", fullgraph=True, dynamic=False, # Accuracy regression on True ) def forward_image(self, img_batch: torch.Tensor): """ Identical to the corresponding method in the parent (SAM2VideoPredictor), but cloning the backbone features and pos encoding to enable compilation. 
""" backbone_out = self.image_encoder(img_batch) if self.use_high_res_features_in_sam: # precompute projected level 0 and level 1 features in SAM decoder # to avoid running it again on every SAM click backbone_out["backbone_fpn"][0] = self.sam_mask_decoder.conv_s0( backbone_out["backbone_fpn"][0] ) backbone_out["backbone_fpn"][1] = self.sam_mask_decoder.conv_s1( backbone_out["backbone_fpn"][1] ) # Clone to help torch.compile for i in range(len(backbone_out["backbone_fpn"])): backbone_out["backbone_fpn"][i] = backbone_out["backbone_fpn"][i].clone() backbone_out["vision_pos_enc"][i] = backbone_out["vision_pos_enc"][ i ].clone() return backbone_out def _forward_sam_heads( self, backbone_features, point_inputs=None, mask_inputs=None, high_res_features=None, multimask_output=False, ): """ Identical to the corresponding method in the parent (SAM2VideoPredictor), but cloning the outputs of prompt_encoder and mask_decoder to enable compilation. """ B = backbone_features.size(0) device = backbone_features.device assert backbone_features.size(1) == self.sam_prompt_embed_dim assert backbone_features.size(2) == self.sam_image_embedding_size assert backbone_features.size(3) == self.sam_image_embedding_size # a) Handle point prompts if point_inputs is not None: sam_point_coords = point_inputs["point_coords"] sam_point_labels = point_inputs["point_labels"] assert sam_point_coords.size(0) == B and sam_point_labels.size(0) == B else: # If no points are provide, pad with an empty point (with label -1) sam_point_coords = torch.zeros(B, 1, 2, device=device) sam_point_labels = -torch.ones(B, 1, dtype=torch.int32, device=device) # b) Handle mask prompts if mask_inputs is not None: # If mask_inputs is provided, downsize it into low-res mask input if needed # and feed it as a dense mask prompt into the SAM mask encoder assert len(mask_inputs.shape) == 4 and mask_inputs.shape[:2] == (B, 1) if mask_inputs.shape[-2:] != self.sam_prompt_encoder.mask_input_size: sam_mask_prompt = 
F.interpolate( mask_inputs.float(), size=self.sam_prompt_encoder.mask_input_size, align_corners=False, mode="bilinear", antialias=True, # use antialias for downsampling ) else: sam_mask_prompt = mask_inputs else: # Otherwise, simply feed None (and SAM's prompt encoder will add # a learned `no_mask_embed` to indicate no mask input in this case). sam_mask_prompt = None sparse_embeddings, dense_embeddings = self.sam_prompt_encoder( points=(sam_point_coords, sam_point_labels), boxes=None, masks=sam_mask_prompt, ) # Clone image_pe and the outputs of sam_prompt_encoder # to enable compilation sparse_embeddings = sparse_embeddings.clone() dense_embeddings = dense_embeddings.clone() image_pe = self.sam_prompt_encoder.get_dense_pe().clone() ( low_res_multimasks, ious, sam_output_tokens, object_score_logits, ) = self.sam_mask_decoder( image_embeddings=backbone_features, image_pe=image_pe, sparse_prompt_embeddings=sparse_embeddings, dense_prompt_embeddings=dense_embeddings, multimask_output=multimask_output, repeat_image=False, # the image is already batched high_res_features=high_res_features, ) # Clone the output of sam_mask_decoder # to enable compilation low_res_multimasks = low_res_multimasks.clone() ious = ious.clone() sam_output_tokens = sam_output_tokens.clone() object_score_logits = object_score_logits.clone() if self.pred_obj_scores: is_obj_appearing = object_score_logits > 0 # Mask used for spatial memories is always a *hard* choice between obj and no obj, # consistent with the actual mask prediction low_res_multimasks = torch.where( is_obj_appearing[:, None, None], low_res_multimasks, NO_OBJ_SCORE, ) # convert masks from possibly bfloat16 (or float16) to float32 # (older PyTorch versions before 2.1 don't support `interpolate` on bf16) low_res_multimasks = low_res_multimasks.float() high_res_multimasks = F.interpolate( low_res_multimasks, size=(self.image_size, self.image_size), mode="bilinear", align_corners=False, ) sam_output_token = sam_output_tokens[:, 0] if 
multimask_output: # take the best mask prediction (with the highest IoU estimation) best_iou_inds = torch.argmax(ious, dim=-1) batch_inds = torch.arange(B, device=device) low_res_masks = low_res_multimasks[batch_inds, best_iou_inds].unsqueeze(1) high_res_masks = high_res_multimasks[batch_inds, best_iou_inds].unsqueeze(1) if sam_output_tokens.size(1) > 1: sam_output_token = sam_output_tokens[batch_inds, best_iou_inds] else: low_res_masks, high_res_masks = low_res_multimasks, high_res_multimasks # Extract object pointer from the SAM output token (with occlusion handling) obj_ptr = self.obj_ptr_proj(sam_output_token) if self.pred_obj_scores: # Allow *soft* no obj ptr, unlike for masks if self.soft_no_obj_ptr: lambda_is_obj_appearing = object_score_logits.sigmoid() else: lambda_is_obj_appearing = is_obj_appearing.float() if self.fixed_no_obj_ptr: obj_ptr = lambda_is_obj_appearing * obj_ptr obj_ptr = obj_ptr + (1 - lambda_is_obj_appearing) * self.no_obj_ptr return ( low_res_multimasks, high_res_multimasks, ious, low_res_masks, high_res_masks, obj_ptr, object_score_logits, ) def _encode_new_memory( self, current_vision_feats, feat_sizes, pred_masks_high_res, object_score_logits, is_mask_from_pts, ): """ Identical to the corresponding method in the parent (SAM2VideoPredictor), but cloning the memories and their pos enc to enable compilation. """ B = current_vision_feats[-1].size(1) # batch size on this frame C = self.hidden_dim H, W = feat_sizes[-1] # top-level (lowest-resolution) feature size # top-level feature, (HW)BC => BCHW pix_feat = current_vision_feats[-1].permute(1, 2, 0).view(B, C, H, W) if self.non_overlap_masks_for_mem_enc and not self.training: # optionally, apply non-overlapping constraints to the masks (it's applied # in the batch dimension and should only be used during eval, where all # the objects come from the same video under batch size 1). 
pred_masks_high_res = self._apply_non_overlapping_constraints( pred_masks_high_res ) # scale the raw mask logits with a temperature before applying sigmoid binarize = self.binarize_mask_from_pts_for_mem_enc and is_mask_from_pts if binarize and not self.training: mask_for_mem = (pred_masks_high_res > 0).float() else: # apply sigmoid on the raw mask logits to turn them into range (0, 1) mask_for_mem = torch.sigmoid(pred_masks_high_res) # apply scale and bias terms to the sigmoid probabilities if self.sigmoid_scale_for_mem_enc != 1.0: mask_for_mem = mask_for_mem * self.sigmoid_scale_for_mem_enc if self.sigmoid_bias_for_mem_enc != 0.0: mask_for_mem = mask_for_mem + self.sigmoid_bias_for_mem_enc maskmem_out = self.memory_encoder( pix_feat, mask_for_mem, skip_mask_sigmoid=True # sigmoid already applied ) # Clone the feats and pos_enc to enable compilation maskmem_features = maskmem_out["vision_features"].clone() maskmem_pos_enc = [m.clone() for m in maskmem_out["vision_pos_enc"]] # add a no-object embedding to the spatial memory to indicate that the frame # is predicted to be occluded (i.e. no object is appearing in the frame) if self.no_obj_embed_spatial is not None: is_obj_appearing = (object_score_logits > 0).float() maskmem_features += ( 1 - is_obj_appearing[..., None, None] ) * self.no_obj_embed_spatial[..., None, None].expand( *maskmem_features.shape ) return maskmem_features, maskmem_pos_enc ================================================ FILE: auto-seg/submodules/segment-anything-2/sam2/sam2_video_predictor_legacy.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. 
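Before the legacy predictor below: the mask-to-memory preprocessing in `_encode_new_memory` above either hard-binarizes the raw mask logits or squashes them with a sigmoid and then applies optional scale and bias terms. A minimal scalar sketch of that per-element transform, assuming a hypothetical helper name (the real code operates on tensors, and the scale/bias values come from the model config):

```python
import math


def mask_logit_to_memory_prob(logit, binarize=False, scale=1.0, bias=0.0):
    """Scalar sketch of how a raw mask logit is prepared for the memory
    encoder: either a hard 0/1 choice, or sigmoid followed by scale/bias."""
    if binarize:
        # hard binarization (used for point-prompted masks in eval)
        return 1.0 if logit > 0 else 0.0
    prob = 1.0 / (1.0 + math.exp(-logit))  # sigmoid into (0, 1)
    return prob * scale + bias
```

With `scale=20.0, bias=-10.0` (one plausible config choice), a logit of 0 maps back to 0, stretching the sigmoid output into a roughly symmetric range around zero.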
import warnings
from collections import OrderedDict

import torch
from sam2.modeling.sam2_base import NO_OBJ_SCORE, SAM2Base
from sam2.utils.misc import (concat_points, fill_holes_in_mask_scores,
                             load_video_frames)
from tqdm import tqdm


class SAM2VideoPredictor(SAM2Base):
    """The predictor class to handle user interactions and manage inference states."""

    def __init__(
        self,
        fill_hole_area=0,
        # whether to apply non-overlapping constraints on the output object masks
        non_overlap_masks=False,
        # whether to clear non-conditioning memory of the surrounding frames (which may
        # contain outdated information) after adding correction clicks;
        # note that this would only apply to *single-object tracking* unless
        # `clear_non_cond_mem_for_multi_obj` is also set to True
        clear_non_cond_mem_around_input=False,
        # whether to also clear non-conditioning memory of the surrounding frames in
        # multi-object tracking (only effective when `clear_non_cond_mem_around_input` is True)
        clear_non_cond_mem_for_multi_obj=False,
        # if `add_all_frames_to_correct_as_cond` is True, we also append to the conditioning
        # frame list any frame that receives a later correction click;
        # if `add_all_frames_to_correct_as_cond` is False, we keep the conditioning frame
        # list restricted to the initial conditioning frames
        add_all_frames_to_correct_as_cond=False,
        **kwargs,
    ):
        super().__init__(**kwargs)
        self.fill_hole_area = fill_hole_area
        self.non_overlap_masks = non_overlap_masks
        self.clear_non_cond_mem_around_input = clear_non_cond_mem_around_input
        self.clear_non_cond_mem_for_multi_obj = clear_non_cond_mem_for_multi_obj
        self.add_all_frames_to_correct_as_cond = add_all_frames_to_correct_as_cond

    @torch.inference_mode()
    def init_state(
        self,
        video_path,
        offload_video_to_cpu=False,
        offload_state_to_cpu=False,
        async_loading_frames=False,
    ):
        """Initialize an inference state."""
        compute_device = self.device  # device of the model
        images, video_height, video_width = load_video_frames(
            video_path=video_path,
            image_size=self.image_size,
offload_video_to_cpu=offload_video_to_cpu, async_loading_frames=async_loading_frames, compute_device=compute_device, ) inference_state = {} inference_state["images"] = images inference_state["num_frames"] = len(images) # whether to offload the video frames to CPU memory # turning on this option saves the GPU memory with only a very small overhead inference_state["offload_video_to_cpu"] = offload_video_to_cpu # whether to offload the inference state to CPU memory # turning on this option saves the GPU memory at the cost of a lower tracking fps # (e.g. in a test case of 768x768 model, fps dropped from 27 to 24 when tracking one object # and from 24 to 21 when tracking two objects) inference_state["offload_state_to_cpu"] = offload_state_to_cpu # the original video height and width, used for resizing final output scores inference_state["video_height"] = video_height inference_state["video_width"] = video_width inference_state["device"] = compute_device if offload_state_to_cpu: inference_state["storage_device"] = torch.device("cpu") else: inference_state["storage_device"] = compute_device # inputs on each frame inference_state["point_inputs_per_obj"] = {} inference_state["mask_inputs_per_obj"] = {} # visual features on a small number of recently visited frames for quick interactions inference_state["cached_features"] = {} # values that don't change across frames (so we only need to hold one copy of them) inference_state["constants"] = {} # mapping between client-side object id and model-side object index inference_state["obj_id_to_idx"] = OrderedDict() inference_state["obj_idx_to_id"] = OrderedDict() inference_state["obj_ids"] = [] # A storage to hold the model's tracking results and states on each frame inference_state["output_dict"] = { "cond_frame_outputs": {}, # dict containing {frame_idx: } "non_cond_frame_outputs": {}, # dict containing {frame_idx: } } # Slice (view) of each object tracking results, sharing the same memory with "output_dict" 
inference_state["output_dict_per_obj"] = {} # A temporary storage to hold new outputs when user interact with a frame # to add clicks or mask (it's merged into "output_dict" before propagation starts) inference_state["temp_output_dict_per_obj"] = {} # Frames that already holds consolidated outputs from click or mask inputs # (we directly use their consolidated outputs during tracking) inference_state["consolidated_frame_inds"] = { "cond_frame_outputs": set(), # set containing frame indices "non_cond_frame_outputs": set(), # set containing frame indices } # metadata for each tracking frame (e.g. which direction it's tracked) inference_state["tracking_has_started"] = False inference_state["frames_already_tracked"] = {} # Warm up the visual backbone and cache the image feature on frame 0 self._get_image_feature(inference_state, frame_idx=0, batch_size=1) return inference_state @classmethod def from_pretrained(cls, model_id: str, **kwargs) -> "SAM2VideoPredictor": """ Load a pretrained model from the Hugging Face hub. Arguments: model_id (str): The Hugging Face repository ID. **kwargs: Additional arguments to pass to the model constructor. Returns: (SAM2VideoPredictor): The loaded model. """ from sam2.build_sam import build_sam2_video_predictor_hf sam_model = build_sam2_video_predictor_hf(model_id, **kwargs) return sam_model def _obj_id_to_idx(self, inference_state, obj_id): """Map client-side object id to model-side object index.""" obj_idx = inference_state["obj_id_to_idx"].get(obj_id, None) if obj_idx is not None: return obj_idx # This is a new object id not sent to the server before. We only allow adding # new objects *before* the tracking starts. 
allow_new_object = not inference_state["tracking_has_started"] if allow_new_object: # get the next object slot obj_idx = len(inference_state["obj_id_to_idx"]) inference_state["obj_id_to_idx"][obj_id] = obj_idx inference_state["obj_idx_to_id"][obj_idx] = obj_id inference_state["obj_ids"] = list(inference_state["obj_id_to_idx"]) # set up input and output structures for this object inference_state["point_inputs_per_obj"][obj_idx] = {} inference_state["mask_inputs_per_obj"][obj_idx] = {} inference_state["output_dict_per_obj"][obj_idx] = { "cond_frame_outputs": {}, # dict containing {frame_idx: } "non_cond_frame_outputs": {}, # dict containing {frame_idx: } } inference_state["temp_output_dict_per_obj"][obj_idx] = { "cond_frame_outputs": {}, # dict containing {frame_idx: } "non_cond_frame_outputs": {}, # dict containing {frame_idx: } } return obj_idx else: raise RuntimeError( f"Cannot add new object id {obj_id} after tracking starts. " f"All existing object ids: {inference_state['obj_ids']}. " f"Please call 'reset_state' to restart from scratch." 
) def _obj_idx_to_id(self, inference_state, obj_idx): """Map model-side object index to client-side object id.""" return inference_state["obj_idx_to_id"][obj_idx] def _get_obj_num(self, inference_state): """Get the total number of unique object ids received so far in this session.""" return len(inference_state["obj_idx_to_id"]) @torch.inference_mode() def add_new_points_or_box( self, inference_state, frame_idx, obj_id, points=None, labels=None, clear_old_points=True, normalize_coords=True, box=None, ): """Add new points to a frame.""" obj_idx = self._obj_id_to_idx(inference_state, obj_id) point_inputs_per_frame = inference_state["point_inputs_per_obj"][obj_idx] mask_inputs_per_frame = inference_state["mask_inputs_per_obj"][obj_idx] if (points is not None) != (labels is not None): raise ValueError("points and labels must be provided together") if points is None and box is None: raise ValueError("at least one of points or box must be provided as input") if points is None: points = torch.zeros(0, 2, dtype=torch.float32) elif not isinstance(points, torch.Tensor): points = torch.tensor(points, dtype=torch.float32) if labels is None: labels = torch.zeros(0, dtype=torch.int32) elif not isinstance(labels, torch.Tensor): labels = torch.tensor(labels, dtype=torch.int32) if points.dim() == 2: points = points.unsqueeze(0) # add batch dimension if labels.dim() == 1: labels = labels.unsqueeze(0) # add batch dimension # If `box` is provided, we add it as the first two points with labels 2 and 3 # along with the user-provided points (consistent with how SAM 2 is trained). if box is not None: if not clear_old_points: raise ValueError( "cannot add box without clearing old points, since " "box prompt must be provided before any point prompt " "(please use clear_old_points=True instead)" ) if inference_state["tracking_has_started"]: warnings.warn( "You are adding a box after tracking starts. SAM 2 may not always be " "able to incorporate a box prompt for *refinement*. 
If you intend to " "use box prompt as an *initial* input before tracking, please call " "'reset_state' on the inference state to restart from scratch.", category=UserWarning, stacklevel=2, ) if not isinstance(box, torch.Tensor): box = torch.tensor(box, dtype=torch.float32, device=points.device) box_coords = box.reshape(1, 2, 2) box_labels = torch.tensor([2, 3], dtype=torch.int32, device=labels.device) box_labels = box_labels.reshape(1, 2) points = torch.cat([box_coords, points], dim=1) labels = torch.cat([box_labels, labels], dim=1) if normalize_coords: video_H = inference_state["video_height"] video_W = inference_state["video_width"] points = points / torch.tensor([video_W, video_H]).to(points.device) # scale the (normalized) coordinates by the model's internal image size points = points * self.image_size points = points.to(inference_state["device"]) labels = labels.to(inference_state["device"]) if not clear_old_points: point_inputs = point_inputs_per_frame.get(frame_idx, None) else: point_inputs = None point_inputs = concat_points(point_inputs, points, labels) point_inputs_per_frame[frame_idx] = point_inputs mask_inputs_per_frame.pop(frame_idx, None) # If this frame hasn't been tracked before, we treat it as an initial conditioning # frame, meaning that the inputs points are to generate segments on this frame without # using any memory from other frames, like in SAM. Otherwise (if it has been tracked), # the input points will be used to correct the already tracked masks. 
is_init_cond_frame = frame_idx not in inference_state["frames_already_tracked"] # whether to track in reverse time order if is_init_cond_frame: reverse = False else: reverse = inference_state["frames_already_tracked"][frame_idx]["reverse"] obj_output_dict = inference_state["output_dict_per_obj"][obj_idx] obj_temp_output_dict = inference_state["temp_output_dict_per_obj"][obj_idx] # Add a frame to conditioning output if it's an initial conditioning frame or # if the model sees all frames receiving clicks/mask as conditioning frames. is_cond = is_init_cond_frame or self.add_all_frames_to_correct_as_cond storage_key = "cond_frame_outputs" if is_cond else "non_cond_frame_outputs" # Get any previously predicted mask logits on this object and feed it along with # the new clicks into the SAM mask decoder. prev_sam_mask_logits = None # lookup temporary output dict first, which contains the most recent output # (if not found, then lookup conditioning and non-conditioning frame output) prev_out = obj_temp_output_dict[storage_key].get(frame_idx) if prev_out is None: prev_out = obj_output_dict["cond_frame_outputs"].get(frame_idx) if prev_out is None: prev_out = obj_output_dict["non_cond_frame_outputs"].get(frame_idx) if prev_out is not None and prev_out["pred_masks"] is not None: device = inference_state["device"] prev_sam_mask_logits = prev_out["pred_masks"].to(device, non_blocking=True) # Clamp the scale of prev_sam_mask_logits to avoid rare numerical issues. prev_sam_mask_logits = torch.clamp(prev_sam_mask_logits, -32.0, 32.0) current_out, _ = self._run_single_frame_inference( inference_state=inference_state, output_dict=obj_output_dict, # run on the slice of a single object frame_idx=frame_idx, batch_size=1, # run on the slice of a single object is_init_cond_frame=is_init_cond_frame, point_inputs=point_inputs, mask_inputs=None, reverse=reverse, # Skip the memory encoder when adding clicks or mask. 
We execute the memory encoder # at the beginning of `propagate_in_video` (after user finalize their clicks). This # allows us to enforce non-overlapping constraints on all objects before encoding # them into memory. run_mem_encoder=False, prev_sam_mask_logits=prev_sam_mask_logits, ) # Add the output to the output dict (to be used as future memory) obj_temp_output_dict[storage_key][frame_idx] = current_out # Resize the output mask to the original video resolution obj_ids = inference_state["obj_ids"] consolidated_out = self._consolidate_temp_output_across_obj( inference_state, frame_idx, is_cond=is_cond, run_mem_encoder=False, consolidate_at_video_res=True, ) _, video_res_masks = self._get_orig_video_res_output( inference_state, consolidated_out["pred_masks_video_res"] ) return frame_idx, obj_ids, video_res_masks def add_new_points(self, *args, **kwargs): """Deprecated method. Please use `add_new_points_or_box` instead.""" return self.add_new_points_or_box(*args, **kwargs) @torch.inference_mode() def add_new_mask( self, inference_state, frame_idx, obj_id, mask, ): """Add new mask to a frame.""" obj_idx = self._obj_id_to_idx(inference_state, obj_id) point_inputs_per_frame = inference_state["point_inputs_per_obj"][obj_idx] mask_inputs_per_frame = inference_state["mask_inputs_per_obj"][obj_idx] if not isinstance(mask, torch.Tensor): mask = torch.tensor(mask, dtype=torch.bool) assert mask.dim() == 2 mask_H, mask_W = mask.shape mask_inputs_orig = mask[None, None] # add batch and channel dimension mask_inputs_orig = mask_inputs_orig.float().to(inference_state["device"]) # resize the mask if it doesn't match the model's image size if mask_H != self.image_size or mask_W != self.image_size: mask_inputs = torch.nn.functional.interpolate( mask_inputs_orig, size=(self.image_size, self.image_size), align_corners=False, mode="bilinear", antialias=True, # use antialias for downsampling ) mask_inputs = (mask_inputs >= 0.5).float() else: mask_inputs = mask_inputs_orig 
mask_inputs_per_frame[frame_idx] = mask_inputs point_inputs_per_frame.pop(frame_idx, None) # If this frame hasn't been tracked before, we treat it as an initial conditioning # frame, meaning that the inputs points are to generate segments on this frame without # using any memory from other frames, like in SAM. Otherwise (if it has been tracked), # the input points will be used to correct the already tracked masks. is_init_cond_frame = frame_idx not in inference_state["frames_already_tracked"] # whether to track in reverse time order if is_init_cond_frame: reverse = False else: reverse = inference_state["frames_already_tracked"][frame_idx]["reverse"] obj_output_dict = inference_state["output_dict_per_obj"][obj_idx] obj_temp_output_dict = inference_state["temp_output_dict_per_obj"][obj_idx] # Add a frame to conditioning output if it's an initial conditioning frame or # if the model sees all frames receiving clicks/mask as conditioning frames. is_cond = is_init_cond_frame or self.add_all_frames_to_correct_as_cond storage_key = "cond_frame_outputs" if is_cond else "non_cond_frame_outputs" current_out, _ = self._run_single_frame_inference( inference_state=inference_state, output_dict=obj_output_dict, # run on the slice of a single object frame_idx=frame_idx, batch_size=1, # run on the slice of a single object is_init_cond_frame=is_init_cond_frame, point_inputs=None, mask_inputs=mask_inputs, reverse=reverse, # Skip the memory encoder when adding clicks or mask. We execute the memory encoder # at the beginning of `propagate_in_video` (after user finalize their clicks). This # allows us to enforce non-overlapping constraints on all objects before encoding # them into memory. 
            run_mem_encoder=False,
        )
        # Add the output to the output dict (to be used as future memory)
        obj_temp_output_dict[storage_key][frame_idx] = current_out

        # Resize the output mask to the original video resolution
        obj_ids = inference_state["obj_ids"]
        consolidated_out = self._consolidate_temp_output_across_obj(
            inference_state,
            frame_idx,
            is_cond=is_cond,
            run_mem_encoder=False,
            consolidate_at_video_res=True,
        )
        _, video_res_masks = self._get_orig_video_res_output(
            inference_state, consolidated_out["pred_masks_video_res"]
        )
        return frame_idx, obj_ids, video_res_masks

    def _get_orig_video_res_output(self, inference_state, any_res_masks):
        """
        Resize the object scores to the original video resolution (video_res_masks)
        and apply non-overlapping constraints for final output.
        """
        device = inference_state["device"]
        video_H = inference_state["video_height"]
        video_W = inference_state["video_width"]
        any_res_masks = any_res_masks.to(device, non_blocking=True)
        if any_res_masks.shape[-2:] == (video_H, video_W):
            video_res_masks = any_res_masks
        else:
            video_res_masks = torch.nn.functional.interpolate(
                any_res_masks,
                size=(video_H, video_W),
                mode="bilinear",
                align_corners=False,
            )
        if self.non_overlap_masks:
            video_res_masks = self._apply_non_overlapping_constraints(video_res_masks)
        return any_res_masks, video_res_masks

    def _consolidate_temp_output_across_obj(
        self,
        inference_state,
        frame_idx,
        is_cond,
        run_mem_encoder,
        consolidate_at_video_res=False,
    ):
        """
        Consolidate the per-object temporary outputs in `temp_output_dict_per_obj` on
        a frame into a single output for all objects, including:
        1) filling any missing objects either from `output_dict_per_obj` (if they exist
           in `output_dict_per_obj` for this frame) or leaving them as placeholder values
           (if they don't exist in `output_dict_per_obj` for this frame);
        2) if specified, rerunning the memory encoder after applying non-overlapping
           constraints on the object scores.
""" batch_size = self._get_obj_num(inference_state) storage_key = "cond_frame_outputs" if is_cond else "non_cond_frame_outputs" # Optionally, we allow consolidating the temporary outputs at the original # video resolution (to provide a better editing experience for mask prompts). if consolidate_at_video_res: assert not run_mem_encoder, "memory encoder cannot run at video resolution" consolidated_H = inference_state["video_height"] consolidated_W = inference_state["video_width"] consolidated_mask_key = "pred_masks_video_res" else: consolidated_H = consolidated_W = self.image_size // 4 consolidated_mask_key = "pred_masks" # Initialize `consolidated_out`. Its "maskmem_features" and "maskmem_pos_enc" # will be added when rerunning the memory encoder after applying non-overlapping # constraints to object scores. Its "pred_masks" are prefilled with a large # negative value (NO_OBJ_SCORE) to represent missing objects. consolidated_out = { "maskmem_features": None, "maskmem_pos_enc": None, consolidated_mask_key: torch.full( size=(batch_size, 1, consolidated_H, consolidated_W), fill_value=NO_OBJ_SCORE, dtype=torch.float32, device=inference_state["storage_device"], ), "obj_ptr": torch.full( size=(batch_size, self.hidden_dim), fill_value=NO_OBJ_SCORE, dtype=torch.float32, device=inference_state["device"], ), "object_score_logits": torch.full( size=(batch_size, 1), # default to 10.0 for object_score_logits, i.e. 
assuming the object is # present as sigmoid(10)=1, same as in `predict_masks` of `MaskDecoder` fill_value=10.0, dtype=torch.float32, device=inference_state["device"], ), } empty_mask_ptr = None for obj_idx in range(batch_size): obj_temp_output_dict = inference_state["temp_output_dict_per_obj"][obj_idx] obj_output_dict = inference_state["output_dict_per_obj"][obj_idx] out = obj_temp_output_dict[storage_key].get(frame_idx, None) # If the object doesn't appear in "temp_output_dict_per_obj" on this frame, # we fall back and look up its previous output in "output_dict_per_obj". # We look up both "cond_frame_outputs" and "non_cond_frame_outputs" in # "output_dict_per_obj" to find a previous output for this object. if out is None: out = obj_output_dict["cond_frame_outputs"].get(frame_idx, None) if out is None: out = obj_output_dict["non_cond_frame_outputs"].get(frame_idx, None) # If the object doesn't appear in "output_dict_per_obj" either, we skip it # and leave its mask scores to the default scores (i.e. the NO_OBJ_SCORE # placeholder above) and set its object pointer to be a dummy pointer. if out is None: # Fill in dummy object pointers for those objects without any inputs or # tracking outcomes on this frame (only do it under `run_mem_encoder=True`, # i.e. when we need to build the memory for tracking). 
                if run_mem_encoder:
                    if empty_mask_ptr is None:
                        empty_mask_ptr = self._get_empty_mask_ptr(
                            inference_state, frame_idx
                        )
                    # fill object pointer with a dummy pointer (based on an empty mask)
                    consolidated_out["obj_ptr"][obj_idx : obj_idx + 1] = empty_mask_ptr
                continue
            # Add the temporary object output mask to consolidated output mask
            obj_mask = out["pred_masks"]
            consolidated_pred_masks = consolidated_out[consolidated_mask_key]
            if obj_mask.shape[-2:] == consolidated_pred_masks.shape[-2:]:
                consolidated_pred_masks[obj_idx : obj_idx + 1] = obj_mask
            else:
                # Resize first if temporary object mask has a different resolution
                resized_obj_mask = torch.nn.functional.interpolate(
                    obj_mask,
                    size=consolidated_pred_masks.shape[-2:],
                    mode="bilinear",
                    align_corners=False,
                )
                consolidated_pred_masks[obj_idx : obj_idx + 1] = resized_obj_mask
            consolidated_out["obj_ptr"][obj_idx : obj_idx + 1] = out["obj_ptr"]
            consolidated_out["object_score_logits"][obj_idx : obj_idx + 1] = out[
                "object_score_logits"
            ]
        # Optionally, apply non-overlapping constraints on the consolidated scores
        # and rerun the memory encoder
        if run_mem_encoder:
            device = inference_state["device"]
            high_res_masks = torch.nn.functional.interpolate(
                consolidated_out["pred_masks"].to(device, non_blocking=True),
                size=(self.image_size, self.image_size),
                mode="bilinear",
                align_corners=False,
            )
            if self.non_overlap_masks_for_mem_enc:
                high_res_masks = self._apply_non_overlapping_constraints(high_res_masks)
            maskmem_features, maskmem_pos_enc = self._run_memory_encoder(
                inference_state=inference_state,
                frame_idx=frame_idx,
                batch_size=batch_size,
                high_res_masks=high_res_masks,
                object_score_logits=consolidated_out["object_score_logits"],
                is_mask_from_pts=True,  # these frames are what the user interacted with
            )
            consolidated_out["maskmem_features"] = maskmem_features
            consolidated_out["maskmem_pos_enc"] = maskmem_pos_enc
        return consolidated_out

    def _get_empty_mask_ptr(self, inference_state, frame_idx):
        """Get a dummy object pointer based on an empty mask on the current frame."""
        # A dummy (empty) mask with a single object
        batch_size = 1
        mask_inputs = torch.zeros(
            (batch_size, 1, self.image_size, self.image_size),
            dtype=torch.float32,
            device=inference_state["device"],
        )
        # Retrieve correct image features
        (
            _,
            _,
            current_vision_feats,
            current_vision_pos_embeds,
            feat_sizes,
        ) = self._get_image_feature(inference_state, frame_idx, batch_size)
        # Feed the empty mask and image feature above to get a dummy object pointer
        current_out = self.track_step(
            frame_idx=frame_idx,
            is_init_cond_frame=True,
            current_vision_feats=current_vision_feats,
            current_vision_pos_embeds=current_vision_pos_embeds,
            feat_sizes=feat_sizes,
            point_inputs=None,
            mask_inputs=mask_inputs,
            output_dict={},
            num_frames=inference_state["num_frames"],
            track_in_reverse=False,
            run_mem_encoder=False,
            prev_sam_mask_logits=None,
        )
        return current_out["obj_ptr"]

    @torch.inference_mode()
    def propagate_in_video_preflight(self, inference_state):
        """Prepare inference_state and consolidate temporary outputs before tracking."""
        # Tracking has started and we don't allow adding new objects until session is reset.
        inference_state["tracking_has_started"] = True
        batch_size = self._get_obj_num(inference_state)
        # Consolidate per-object temporary outputs in "temp_output_dict_per_obj" and
        # add them into "output_dict".
        temp_output_dict_per_obj = inference_state["temp_output_dict_per_obj"]
        output_dict = inference_state["output_dict"]
        # "consolidated_frame_inds" contains indices of those frames where consolidated
        # temporary outputs have been added (either in this call or any previous calls
        # to `propagate_in_video_preflight`).
        consolidated_frame_inds = inference_state["consolidated_frame_inds"]
        for is_cond in [False, True]:
            # Separately consolidate conditioning and non-conditioning temp outputs
            storage_key = "cond_frame_outputs" if is_cond else "non_cond_frame_outputs"
            # Find all the frames that contain temporary outputs for any objects
            # (these should be the frames that have just received clicks for mask inputs
            # via `add_new_points_or_box` or `add_new_mask`)
            temp_frame_inds = set()
            for obj_temp_output_dict in temp_output_dict_per_obj.values():
                temp_frame_inds.update(obj_temp_output_dict[storage_key].keys())
            consolidated_frame_inds[storage_key].update(temp_frame_inds)
            # consolidate the temporary output across all objects on this frame
            for frame_idx in temp_frame_inds:
                consolidated_out = self._consolidate_temp_output_across_obj(
                    inference_state, frame_idx, is_cond=is_cond, run_mem_encoder=True
                )
                # merge them into "output_dict" and also create per-object slices
                output_dict[storage_key][frame_idx] = consolidated_out
                self._add_output_per_object(
                    inference_state, frame_idx, consolidated_out, storage_key
                )
                clear_non_cond_mem = self.clear_non_cond_mem_around_input and (
                    self.clear_non_cond_mem_for_multi_obj or batch_size <= 1
                )
                if clear_non_cond_mem:
                    # clear non-conditioning memory of the surrounding frames
                    self._clear_non_cond_mem_around_input(inference_state, frame_idx)
            # clear temporary outputs in `temp_output_dict_per_obj`
            for obj_temp_output_dict in temp_output_dict_per_obj.values():
                obj_temp_output_dict[storage_key].clear()
        # edge case: if an output is added to "cond_frame_outputs", we remove any prior
        # output on the same frame in "non_cond_frame_outputs"
        for frame_idx in output_dict["cond_frame_outputs"]:
            output_dict["non_cond_frame_outputs"].pop(frame_idx, None)
        for obj_output_dict in inference_state["output_dict_per_obj"].values():
            for frame_idx in obj_output_dict["cond_frame_outputs"]:
                obj_output_dict["non_cond_frame_outputs"].pop(frame_idx, None)
        for frame_idx in consolidated_frame_inds["cond_frame_outputs"]:
            assert frame_idx in output_dict["cond_frame_outputs"]
            consolidated_frame_inds["non_cond_frame_outputs"].discard(frame_idx)
        # Make sure that the frame indices in "consolidated_frame_inds" are exactly those frames
        # with either points or mask inputs (which should be true under a correct workflow).
        all_consolidated_frame_inds = (
            consolidated_frame_inds["cond_frame_outputs"]
            | consolidated_frame_inds["non_cond_frame_outputs"]
        )
        input_frames_inds = set()
        for point_inputs_per_frame in inference_state["point_inputs_per_obj"].values():
            input_frames_inds.update(point_inputs_per_frame.keys())
        for mask_inputs_per_frame in inference_state["mask_inputs_per_obj"].values():
            input_frames_inds.update(mask_inputs_per_frame.keys())
        assert all_consolidated_frame_inds == input_frames_inds

    @torch.inference_mode()
    def propagate_in_video(
        self,
        inference_state,
        start_frame_idx=None,
        max_frame_num_to_track=None,
        reverse=False,
    ):
        """Propagate the input points across frames to track in the entire video."""
        self.propagate_in_video_preflight(inference_state)

        output_dict = inference_state["output_dict"]
        consolidated_frame_inds = inference_state["consolidated_frame_inds"]
        obj_ids = inference_state["obj_ids"]
        num_frames = inference_state["num_frames"]
        batch_size = self._get_obj_num(inference_state)
        if len(output_dict["cond_frame_outputs"]) == 0:
            raise RuntimeError("No points are provided; please add points first")
        clear_non_cond_mem = self.clear_non_cond_mem_around_input and (
            self.clear_non_cond_mem_for_multi_obj or batch_size <= 1
        )

        # set start index, end index, and processing order
        if start_frame_idx is None:
            # default: start from the earliest frame with input points
            start_frame_idx = min(output_dict["cond_frame_outputs"])
        if max_frame_num_to_track is None:
            # default: track all the frames in the video
            max_frame_num_to_track = num_frames
        if reverse:
            end_frame_idx = max(start_frame_idx - max_frame_num_to_track, 0)
            if start_frame_idx > 0:
                processing_order = range(start_frame_idx, end_frame_idx - 1, -1)
            else:
                processing_order = []  # skip reverse tracking if starting from frame 0
        else:
            end_frame_idx = min(
                start_frame_idx + max_frame_num_to_track, num_frames - 1
            )
            processing_order = range(start_frame_idx, end_frame_idx + 1)

        for frame_idx in tqdm(processing_order, desc="propagate in video"):
            # We skip those frames already in consolidated outputs (these are frames
            # that received input clicks or mask). Note that we cannot directly run
            # batched forward on them via `_run_single_frame_inference` because the
            # number of clicks on each object might be different.
            if frame_idx in consolidated_frame_inds["cond_frame_outputs"]:
                storage_key = "cond_frame_outputs"
                current_out = output_dict[storage_key][frame_idx]
                pred_masks = current_out["pred_masks"]
                if clear_non_cond_mem:
                    # clear non-conditioning memory of the surrounding frames
                    self._clear_non_cond_mem_around_input(inference_state, frame_idx)
            elif frame_idx in consolidated_frame_inds["non_cond_frame_outputs"]:
                storage_key = "non_cond_frame_outputs"
                current_out = output_dict[storage_key][frame_idx]
                pred_masks = current_out["pred_masks"]
            else:
                storage_key = "non_cond_frame_outputs"
                current_out, pred_masks = self._run_single_frame_inference(
                    inference_state=inference_state,
                    output_dict=output_dict,
                    frame_idx=frame_idx,
                    batch_size=batch_size,
                    is_init_cond_frame=False,
                    point_inputs=None,
                    mask_inputs=None,
                    reverse=reverse,
                    run_mem_encoder=True,
                )
                output_dict[storage_key][frame_idx] = current_out
            # Create slices of per-object outputs for subsequent interaction with each
            # individual object after tracking.
            self._add_output_per_object(
                inference_state, frame_idx, current_out, storage_key
            )
            inference_state["frames_already_tracked"][frame_idx] = {"reverse": reverse}

            # Resize the output mask to the original video resolution (we directly use
            # the mask scores on GPU for output to avoid any CPU conversion in between)
            _, video_res_masks = self._get_orig_video_res_output(
                inference_state, pred_masks
            )
            yield frame_idx, obj_ids, video_res_masks

    def _add_output_per_object(
        self, inference_state, frame_idx, current_out, storage_key
    ):
        """
        Split a multi-object output into per-object output slices and add them into
        `output_dict_per_obj`. The resulting slices share the same tensor storage.
        """
        maskmem_features = current_out["maskmem_features"]
        assert maskmem_features is None or isinstance(maskmem_features, torch.Tensor)

        maskmem_pos_enc = current_out["maskmem_pos_enc"]
        assert maskmem_pos_enc is None or isinstance(maskmem_pos_enc, list)

        output_dict_per_obj = inference_state["output_dict_per_obj"]
        for obj_idx, obj_output_dict in output_dict_per_obj.items():
            obj_slice = slice(obj_idx, obj_idx + 1)
            obj_out = {
                "maskmem_features": None,
                "maskmem_pos_enc": None,
                "pred_masks": current_out["pred_masks"][obj_slice],
                "obj_ptr": current_out["obj_ptr"][obj_slice],
                "object_score_logits": current_out["object_score_logits"][obj_slice],
            }
            if maskmem_features is not None:
                obj_out["maskmem_features"] = maskmem_features[obj_slice]
            if maskmem_pos_enc is not None:
                obj_out["maskmem_pos_enc"] = [x[obj_slice] for x in maskmem_pos_enc]
            obj_output_dict[storage_key][frame_idx] = obj_out

    @torch.inference_mode()
    def clear_all_prompts_in_frame(
        self, inference_state, frame_idx, obj_id, need_output=True
    ):
        """Remove all input points or mask in a specific frame for a given object."""
        obj_idx = self._obj_id_to_idx(inference_state, obj_id)

        # Clear the conditioning information on the given frame
        inference_state["point_inputs_per_obj"][obj_idx].pop(frame_idx, None)
inference_state["mask_inputs_per_obj"][obj_idx].pop(frame_idx, None) temp_output_dict_per_obj = inference_state["temp_output_dict_per_obj"] temp_output_dict_per_obj[obj_idx]["cond_frame_outputs"].pop(frame_idx, None) temp_output_dict_per_obj[obj_idx]["non_cond_frame_outputs"].pop(frame_idx, None) # Check and see if there are still any inputs left on this frame batch_size = self._get_obj_num(inference_state) frame_has_input = False for obj_idx2 in range(batch_size): if frame_idx in inference_state["point_inputs_per_obj"][obj_idx2]: frame_has_input = True break if frame_idx in inference_state["mask_inputs_per_obj"][obj_idx2]: frame_has_input = True break # If this frame has no remaining inputs for any objects, we further clear its # conditioning frame status if not frame_has_input: output_dict = inference_state["output_dict"] consolidated_frame_inds = inference_state["consolidated_frame_inds"] consolidated_frame_inds["cond_frame_outputs"].discard(frame_idx) consolidated_frame_inds["non_cond_frame_outputs"].discard(frame_idx) # Remove the frame's conditioning output (possibly downgrading it to non-conditioning) out = output_dict["cond_frame_outputs"].pop(frame_idx, None) if out is not None: # The frame is not a conditioning frame anymore since it's not receiving inputs, # so we "downgrade" its output (if exists) to a non-conditioning frame output. output_dict["non_cond_frame_outputs"][frame_idx] = out inference_state["frames_already_tracked"].pop(frame_idx, None) # Similarly, do it for the sliced output on each object. 
for obj_idx2 in range(batch_size): obj_output_dict = inference_state["output_dict_per_obj"][obj_idx2] obj_out = obj_output_dict["cond_frame_outputs"].pop(frame_idx, None) if obj_out is not None: obj_output_dict["non_cond_frame_outputs"][frame_idx] = obj_out # If all the conditioning frames have been removed, we also clear the tracking outputs if len(output_dict["cond_frame_outputs"]) == 0: self._reset_tracking_results(inference_state) if not need_output: return # Finally, output updated masks per object (after removing the inputs above) obj_ids = inference_state["obj_ids"] is_cond = any( frame_idx in obj_temp_output_dict["cond_frame_outputs"] for obj_temp_output_dict in temp_output_dict_per_obj.values() ) consolidated_out = self._consolidate_temp_output_across_obj( inference_state, frame_idx, is_cond=is_cond, run_mem_encoder=False, consolidate_at_video_res=True, ) _, video_res_masks = self._get_orig_video_res_output( inference_state, consolidated_out["pred_masks_video_res"] ) return frame_idx, obj_ids, video_res_masks @torch.inference_mode() def reset_state(self, inference_state): """Remove all input points or mask in all frames throughout the video.""" self._reset_tracking_results(inference_state) # Remove all object ids inference_state["obj_id_to_idx"].clear() inference_state["obj_idx_to_id"].clear() inference_state["obj_ids"].clear() inference_state["point_inputs_per_obj"].clear() inference_state["mask_inputs_per_obj"].clear() inference_state["output_dict_per_obj"].clear() inference_state["temp_output_dict_per_obj"].clear() def _reset_tracking_results(self, inference_state): """Reset all tracking inputs and results across the videos.""" for v in inference_state["point_inputs_per_obj"].values(): v.clear() for v in inference_state["mask_inputs_per_obj"].values(): v.clear() for v in inference_state["output_dict_per_obj"].values(): v["cond_frame_outputs"].clear() v["non_cond_frame_outputs"].clear() for v in inference_state["temp_output_dict_per_obj"].values(): 
v["cond_frame_outputs"].clear() v["non_cond_frame_outputs"].clear() inference_state["output_dict"]["cond_frame_outputs"].clear() inference_state["output_dict"]["non_cond_frame_outputs"].clear() inference_state["consolidated_frame_inds"]["cond_frame_outputs"].clear() inference_state["consolidated_frame_inds"]["non_cond_frame_outputs"].clear() inference_state["tracking_has_started"] = False inference_state["frames_already_tracked"].clear() def _get_image_feature(self, inference_state, frame_idx, batch_size): """Compute the image features on a given frame.""" # Look up in the cache first image, backbone_out = inference_state["cached_features"].get( frame_idx, (None, None) ) if backbone_out is None: # Cache miss -- we will run inference on a single image device = inference_state["device"] image = inference_state["images"][frame_idx].to(device).float().unsqueeze(0) backbone_out = self.forward_image(image) # Cache the most recent frame's feature (for repeated interactions with # a frame; we can use an LRU cache for more frames in the future). 
inference_state["cached_features"] = {frame_idx: (image, backbone_out)} # expand the features to have the same dimension as the number of objects expanded_image = image.expand(batch_size, -1, -1, -1) expanded_backbone_out = { "backbone_fpn": backbone_out["backbone_fpn"].copy(), "vision_pos_enc": backbone_out["vision_pos_enc"].copy(), } for i, feat in enumerate(expanded_backbone_out["backbone_fpn"]): expanded_backbone_out["backbone_fpn"][i] = feat.expand( batch_size, -1, -1, -1 ) for i, pos in enumerate(expanded_backbone_out["vision_pos_enc"]): pos = pos.expand(batch_size, -1, -1, -1) expanded_backbone_out["vision_pos_enc"][i] = pos features = self._prepare_backbone_features(expanded_backbone_out) features = (expanded_image,) + features return features def _run_single_frame_inference( self, inference_state, output_dict, frame_idx, batch_size, is_init_cond_frame, point_inputs, mask_inputs, reverse, run_mem_encoder, prev_sam_mask_logits=None, ): """Run tracking on a single frame based on current inputs and previous memory.""" # Retrieve correct image features ( _, _, current_vision_feats, current_vision_pos_embeds, feat_sizes, ) = self._get_image_feature(inference_state, frame_idx, batch_size) # point and mask should not appear as input simultaneously on the same frame assert point_inputs is None or mask_inputs is None current_out = self.track_step( frame_idx=frame_idx, is_init_cond_frame=is_init_cond_frame, current_vision_feats=current_vision_feats, current_vision_pos_embeds=current_vision_pos_embeds, feat_sizes=feat_sizes, point_inputs=point_inputs, mask_inputs=mask_inputs, output_dict=output_dict, num_frames=inference_state["num_frames"], track_in_reverse=reverse, run_mem_encoder=run_mem_encoder, prev_sam_mask_logits=prev_sam_mask_logits, ) # optionally offload the output to CPU memory to save GPU space storage_device = inference_state["storage_device"] maskmem_features = current_out["maskmem_features"] if maskmem_features is not None: maskmem_features = 
maskmem_features.to(torch.bfloat16) maskmem_features = maskmem_features.to(storage_device, non_blocking=True) pred_masks_gpu = current_out["pred_masks"] # potentially fill holes in the predicted masks if self.fill_hole_area > 0: pred_masks_gpu = fill_holes_in_mask_scores( pred_masks_gpu, self.fill_hole_area ) pred_masks = pred_masks_gpu.to(storage_device, non_blocking=True) # "maskmem_pos_enc" is the same across frames, so we only need to store one copy of it maskmem_pos_enc = self._get_maskmem_pos_enc(inference_state, current_out) # object pointer is a small tensor, so we always keep it on GPU memory for fast access obj_ptr = current_out["obj_ptr"] object_score_logits = current_out["object_score_logits"] # make a compact version of this frame's output to reduce the state size compact_current_out = { "maskmem_features": maskmem_features, "maskmem_pos_enc": maskmem_pos_enc, "pred_masks": pred_masks, "obj_ptr": obj_ptr, "object_score_logits": object_score_logits, } return compact_current_out, pred_masks_gpu def _run_memory_encoder( self, inference_state, frame_idx, batch_size, high_res_masks, object_score_logits, is_mask_from_pts, ): """ Run the memory encoder on `high_res_masks`. This is usually after applying non-overlapping constraints to object scores. Since their scores changed, their memory also need to be computed again with the memory encoder. 
""" # Retrieve correct image features _, _, current_vision_feats, _, feat_sizes = self._get_image_feature( inference_state, frame_idx, batch_size ) maskmem_features, maskmem_pos_enc = self._encode_new_memory( current_vision_feats=current_vision_feats, feat_sizes=feat_sizes, pred_masks_high_res=high_res_masks, object_score_logits=object_score_logits, is_mask_from_pts=is_mask_from_pts, ) # optionally offload the output to CPU memory to save GPU space storage_device = inference_state["storage_device"] maskmem_features = maskmem_features.to(torch.bfloat16) maskmem_features = maskmem_features.to(storage_device, non_blocking=True) # "maskmem_pos_enc" is the same across frames, so we only need to store one copy of it maskmem_pos_enc = self._get_maskmem_pos_enc( inference_state, {"maskmem_pos_enc": maskmem_pos_enc} ) return maskmem_features, maskmem_pos_enc def _get_maskmem_pos_enc(self, inference_state, current_out): """ `maskmem_pos_enc` is the same across frames and objects, so we cache it as a constant in the inference session to reduce session storage size. 
""" model_constants = inference_state["constants"] # "out_maskmem_pos_enc" should be either a list of tensors or None out_maskmem_pos_enc = current_out["maskmem_pos_enc"] if out_maskmem_pos_enc is not None: if "maskmem_pos_enc" not in model_constants: assert isinstance(out_maskmem_pos_enc, list) # only take the slice for one object, since it's same across objects maskmem_pos_enc = [x[0:1].clone() for x in out_maskmem_pos_enc] model_constants["maskmem_pos_enc"] = maskmem_pos_enc else: maskmem_pos_enc = model_constants["maskmem_pos_enc"] # expand the cached maskmem_pos_enc to the actual batch size batch_size = out_maskmem_pos_enc[0].size(0) expanded_maskmem_pos_enc = [ x.expand(batch_size, -1, -1, -1) for x in maskmem_pos_enc ] else: expanded_maskmem_pos_enc = None return expanded_maskmem_pos_enc @torch.inference_mode() def remove_object(self, inference_state, obj_id, strict=False, need_output=True): """ Remove an object id from the tracking state. If strict is True, we check whether the object id actually exists and raise an error if it doesn't exist. """ old_obj_idx_to_rm = inference_state["obj_id_to_idx"].get(obj_id, None) updated_frames = [] # Check whether this object_id to remove actually exists and possibly raise an error. if old_obj_idx_to_rm is None: if not strict: return inference_state["obj_ids"], updated_frames raise RuntimeError( f"Cannot remove object id {obj_id} as it doesn't exist. " f"All existing object ids: {inference_state['obj_ids']}." ) # If this is the only remaining object id, we simply reset the state. if len(inference_state["obj_id_to_idx"]) == 1: self.reset_state(inference_state) return inference_state["obj_ids"], updated_frames # There are still remaining objects after removing this object id. In this case, # we need to delete the object storage from inference state tensors. 
        # Step 0: clear the input on those frames where this object id has point or mask input
        # (note that this step is required as it might downgrade conditioning frames to
        # non-conditioning ones)
        obj_input_frames_inds = set()
        obj_input_frames_inds.update(
            inference_state["point_inputs_per_obj"][old_obj_idx_to_rm]
        )
        obj_input_frames_inds.update(
            inference_state["mask_inputs_per_obj"][old_obj_idx_to_rm]
        )
        for frame_idx in obj_input_frames_inds:
            self.clear_all_prompts_in_frame(
                inference_state, frame_idx, obj_id, need_output=False
            )

        # Step 1: Update the object id mapping (note that it must be done after Step 0,
        # since Step 0 still requires the old object id mappings in inference_state)
        old_obj_ids = inference_state["obj_ids"]
        old_obj_inds = list(range(len(old_obj_ids)))
        remain_old_obj_inds = old_obj_inds.copy()
        remain_old_obj_inds.remove(old_obj_idx_to_rm)
        new_obj_ids = [old_obj_ids[old_idx] for old_idx in remain_old_obj_inds]
        new_obj_inds = list(range(len(new_obj_ids)))
        # build new mappings
        old_idx_to_new_idx = dict(zip(remain_old_obj_inds, new_obj_inds))
        inference_state["obj_id_to_idx"] = dict(zip(new_obj_ids, new_obj_inds))
        inference_state["obj_idx_to_id"] = dict(zip(new_obj_inds, new_obj_ids))
        inference_state["obj_ids"] = new_obj_ids

        # Step 2: For per-object tensor storage, we shift their obj_idx in the dict keys.
        # (note that "consolidated_frame_inds" doesn't need to be updated in this step as
        # it's already handled in Step 0)
        def _map_keys(container):
            new_kvs = []
            for k in old_obj_inds:
                v = container.pop(k)
                if k in old_idx_to_new_idx:
                    new_kvs.append((old_idx_to_new_idx[k], v))
            container.update(new_kvs)

        _map_keys(inference_state["point_inputs_per_obj"])
        _map_keys(inference_state["mask_inputs_per_obj"])
        _map_keys(inference_state["output_dict_per_obj"])
        _map_keys(inference_state["temp_output_dict_per_obj"])

        # Step 3: For packed tensor storage, we index the remaining ids and rebuild the per-object slices.
        def _slice_state(output_dict, storage_key):
            for frame_idx, out in output_dict[storage_key].items():
                out["maskmem_features"] = out["maskmem_features"][remain_old_obj_inds]
                out["maskmem_pos_enc"] = [
                    x[remain_old_obj_inds] for x in out["maskmem_pos_enc"]
                ]
                # "maskmem_pos_enc" is the same across frames, so we only need to store one copy of it
                out["maskmem_pos_enc"] = self._get_maskmem_pos_enc(inference_state, out)
                out["pred_masks"] = out["pred_masks"][remain_old_obj_inds]
                out["obj_ptr"] = out["obj_ptr"][remain_old_obj_inds]
                out["object_score_logits"] = out["object_score_logits"][
                    remain_old_obj_inds
                ]
                # also update the per-object slices
                self._add_output_per_object(
                    inference_state, frame_idx, out, storage_key
                )

        _slice_state(inference_state["output_dict"], "cond_frame_outputs")
        _slice_state(inference_state["output_dict"], "non_cond_frame_outputs")

        # Step 4: Further collect the outputs on those frames in `obj_input_frames_inds`, which
        # could show an updated mask for objects previously occluded by the object being removed
        if need_output:
            temp_output_dict_per_obj = inference_state["temp_output_dict_per_obj"]
            for frame_idx in obj_input_frames_inds:
                is_cond = any(
                    frame_idx in obj_temp_output_dict["cond_frame_outputs"]
                    for obj_temp_output_dict in temp_output_dict_per_obj.values()
                )
                consolidated_out = self._consolidate_temp_output_across_obj(
                    inference_state,
                    frame_idx,
                    is_cond=is_cond,
                    run_mem_encoder=False,
                    consolidate_at_video_res=True,
                )
                _, video_res_masks = self._get_orig_video_res_output(
                    inference_state, consolidated_out["pred_masks_video_res"]
                )
                updated_frames.append((frame_idx, video_res_masks))

        return inference_state["obj_ids"], updated_frames

    def _clear_non_cond_mem_around_input(self, inference_state, frame_idx):
        """
        Remove the non-conditioning memory around the input frame. When users provide
        correction clicks, the surrounding frames' non-conditioning memories can still
        contain outdated object appearance information and could confuse the model.
        This method clears those non-conditioning memories surrounding the interacted
        frame to avoid giving the model both old and new information about the object.
        """
        r = self.memory_temporal_stride_for_eval
        frame_idx_begin = frame_idx - r * self.num_maskmem
        frame_idx_end = frame_idx + r * self.num_maskmem
        output_dict = inference_state["output_dict"]
        non_cond_frame_outputs = output_dict["non_cond_frame_outputs"]
        for t in range(frame_idx_begin, frame_idx_end + 1):
            non_cond_frame_outputs.pop(t, None)
            for obj_output_dict in inference_state["output_dict_per_obj"].values():
                obj_output_dict["non_cond_frame_outputs"].pop(t, None)


================================================
FILE: auto-seg/submodules/segment-anything-2/sam2/utils/__init__.py
================================================
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.


================================================
FILE: auto-seg/submodules/segment-anything-2/sam2/utils/amg.py
================================================
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

import math
from copy import deepcopy
from itertools import product
from typing import Any, Dict, Generator, ItemsView, List, Tuple

import numpy as np
import torch

# Very lightly adapted from https://github.com/facebookresearch/segment-anything/blob/main/segment_anything/utils/amg.py


class MaskData:
    """
    A structure for storing masks and their related data in batched format.
    Implements basic filtering and concatenation.
    """

    def __init__(self, **kwargs) -> None:
        for v in kwargs.values():
            assert isinstance(
                v, (list, np.ndarray, torch.Tensor)
            ), "MaskData only supports list, numpy arrays, and torch tensors."
        self._stats = dict(**kwargs)

    def __setitem__(self, key: str, item: Any) -> None:
        assert isinstance(
            item, (list, np.ndarray, torch.Tensor)
        ), "MaskData only supports list, numpy arrays, and torch tensors."
        self._stats[key] = item

    def __delitem__(self, key: str) -> None:
        del self._stats[key]

    def __getitem__(self, key: str) -> Any:
        return self._stats[key]

    def items(self) -> ItemsView[str, Any]:
        return self._stats.items()

    def filter(self, keep: torch.Tensor) -> None:
        for k, v in self._stats.items():
            if v is None:
                self._stats[k] = None
            elif isinstance(v, torch.Tensor):
                self._stats[k] = v[torch.as_tensor(keep, device=v.device)]
            elif isinstance(v, np.ndarray):
                self._stats[k] = v[keep.detach().cpu().numpy()]
            elif isinstance(v, list) and keep.dtype == torch.bool:
                self._stats[k] = [a for i, a in enumerate(v) if keep[i]]
            elif isinstance(v, list):
                self._stats[k] = [v[i] for i in keep]
            else:
                raise TypeError(f"MaskData key {k} has an unsupported type {type(v)}.")

    def cat(self, new_stats: "MaskData") -> None:
        for k, v in new_stats.items():
            if k not in self._stats or self._stats[k] is None:
                self._stats[k] = deepcopy(v)
            elif isinstance(v, torch.Tensor):
                self._stats[k] = torch.cat([self._stats[k], v], dim=0)
            elif isinstance(v, np.ndarray):
                self._stats[k] = np.concatenate([self._stats[k], v], axis=0)
            elif isinstance(v, list):
                self._stats[k] = self._stats[k] + deepcopy(v)
            else:
                raise TypeError(f"MaskData key {k} has an unsupported type {type(v)}.")

    def to_numpy(self) -> None:
        for k, v in self._stats.items():
            if isinstance(v, torch.Tensor):
                self._stats[k] = v.float().detach().cpu().numpy()


def is_box_near_crop_edge(
    boxes: torch.Tensor, crop_box: List[int], orig_box: List[int], atol: float = 20.0
) -> torch.Tensor:
    """Filter masks at the edge of a crop, but not at the edge of the original image."""
    crop_box_torch = torch.as_tensor(crop_box, dtype=torch.float, device=boxes.device)
    orig_box_torch = torch.as_tensor(orig_box, dtype=torch.float, device=boxes.device)
    boxes = uncrop_boxes_xyxy(boxes, crop_box).float()
    near_crop_edge = torch.isclose(boxes, crop_box_torch[None, :], atol=atol, rtol=0)
    near_image_edge = torch.isclose(boxes, orig_box_torch[None, :], atol=atol, rtol=0)
    near_crop_edge = torch.logical_and(near_crop_edge, ~near_image_edge)
    return torch.any(near_crop_edge, dim=1)


def box_xyxy_to_xywh(box_xyxy: torch.Tensor) -> torch.Tensor:
    box_xywh = deepcopy(box_xyxy)
    box_xywh[2] = box_xywh[2] - box_xywh[0]
    box_xywh[3] = box_xywh[3] - box_xywh[1]
    return box_xywh


def batch_iterator(batch_size: int, *args) -> Generator[List[Any], None, None]:
    assert len(args) > 0 and all(
        len(a) == len(args[0]) for a in args
    ), "Batched iteration must have inputs of all the same size."
    n_batches = len(args[0]) // batch_size + int(len(args[0]) % batch_size != 0)
    for b in range(n_batches):
        yield [arg[b * batch_size : (b + 1) * batch_size] for arg in args]


def mask_to_rle_pytorch(tensor: torch.Tensor) -> List[Dict[str, Any]]:
    """
    Encodes masks to an uncompressed RLE, in the format expected by pycoco tools.
""" # Put in fortran order and flatten h,w b, h, w = tensor.shape tensor = tensor.permute(0, 2, 1).flatten(1) # Compute change indices diff = tensor[:, 1:] ^ tensor[:, :-1] change_indices = diff.nonzero() # Encode run length out = [] for i in range(b): cur_idxs = change_indices[change_indices[:, 0] == i, 1] cur_idxs = torch.cat( [ torch.tensor([0], dtype=cur_idxs.dtype, device=cur_idxs.device), cur_idxs + 1, torch.tensor([h * w], dtype=cur_idxs.dtype, device=cur_idxs.device), ] ) btw_idxs = cur_idxs[1:] - cur_idxs[:-1] counts = [] if tensor[i, 0] == 0 else [0] counts.extend(btw_idxs.detach().cpu().tolist()) out.append({"size": [h, w], "counts": counts}) return out def rle_to_mask(rle: Dict[str, Any]) -> np.ndarray: """Compute a binary mask from an uncompressed RLE.""" h, w = rle["size"] mask = np.empty(h * w, dtype=bool) idx = 0 parity = False for count in rle["counts"]: mask[idx : idx + count] = parity idx += count parity ^= True mask = mask.reshape(w, h) return mask.transpose() # Put in C order def area_from_rle(rle: Dict[str, Any]) -> int: return sum(rle["counts"][1::2]) def calculate_stability_score( masks: torch.Tensor, mask_threshold: float, threshold_offset: float ) -> torch.Tensor: """ Computes the stability score for a batch of masks. The stability score is the IoU between the binary masks obtained by thresholding the predicted mask logits at high and low values. """ # One mask is always contained inside the other. 
# Save memory by preventing unnecessary cast to torch.int64 intersections = ( (masks > (mask_threshold + threshold_offset)) .sum(-1, dtype=torch.int16) .sum(-1, dtype=torch.int32) ) unions = ( (masks > (mask_threshold - threshold_offset)) .sum(-1, dtype=torch.int16) .sum(-1, dtype=torch.int32) ) return intersections / unions def build_point_grid(n_per_side: int) -> np.ndarray: """Generates a 2D grid of points evenly spaced in [0,1]x[0,1].""" offset = 1 / (2 * n_per_side) points_one_side = np.linspace(offset, 1 - offset, n_per_side) points_x = np.tile(points_one_side[None, :], (n_per_side, 1)) points_y = np.tile(points_one_side[:, None], (1, n_per_side)) points = np.stack([points_x, points_y], axis=-1).reshape(-1, 2) return points def build_all_layer_point_grids( n_per_side: int, n_layers: int, scale_per_layer: int ) -> List[np.ndarray]: """Generates point grids for all crop layers.""" points_by_layer = [] for i in range(n_layers + 1): n_points = int(n_per_side / (scale_per_layer**i)) points_by_layer.append(build_point_grid(n_points)) return points_by_layer def generate_crop_boxes( im_size: Tuple[int, ...], n_layers: int, overlap_ratio: float ) -> Tuple[List[List[int]], List[int]]: """ Generates a list of crop boxes of different sizes. Each layer has (2**i)**2 boxes for the ith layer. 
""" crop_boxes, layer_idxs = [], [] im_h, im_w = im_size short_side = min(im_h, im_w) # Original image crop_boxes.append([0, 0, im_w, im_h]) layer_idxs.append(0) def crop_len(orig_len, n_crops, overlap): return int(math.ceil((overlap * (n_crops - 1) + orig_len) / n_crops)) for i_layer in range(n_layers): n_crops_per_side = 2 ** (i_layer + 1) overlap = int(overlap_ratio * short_side * (2 / n_crops_per_side)) crop_w = crop_len(im_w, n_crops_per_side, overlap) crop_h = crop_len(im_h, n_crops_per_side, overlap) crop_box_x0 = [int((crop_w - overlap) * i) for i in range(n_crops_per_side)] crop_box_y0 = [int((crop_h - overlap) * i) for i in range(n_crops_per_side)] # Crops in XYWH format for x0, y0 in product(crop_box_x0, crop_box_y0): box = [x0, y0, min(x0 + crop_w, im_w), min(y0 + crop_h, im_h)] crop_boxes.append(box) layer_idxs.append(i_layer + 1) return crop_boxes, layer_idxs def uncrop_boxes_xyxy(boxes: torch.Tensor, crop_box: List[int]) -> torch.Tensor: x0, y0, _, _ = crop_box offset = torch.tensor([[x0, y0, x0, y0]], device=boxes.device) # Check if boxes has a channel dimension if len(boxes.shape) == 3: offset = offset.unsqueeze(1) return boxes + offset def uncrop_points(points: torch.Tensor, crop_box: List[int]) -> torch.Tensor: x0, y0, _, _ = crop_box offset = torch.tensor([[x0, y0]], device=points.device) # Check if points has a channel dimension if len(points.shape) == 3: offset = offset.unsqueeze(1) return points + offset def uncrop_masks( masks: torch.Tensor, crop_box: List[int], orig_h: int, orig_w: int ) -> torch.Tensor: x0, y0, x1, y1 = crop_box if x0 == 0 and y0 == 0 and x1 == orig_w and y1 == orig_h: return masks # Coordinate transform masks pad_x, pad_y = orig_w - (x1 - x0), orig_h - (y1 - y0) pad = (x0, pad_x - x0, y0, pad_y - y0) return torch.nn.functional.pad(masks, pad, value=0) def remove_small_regions( mask: np.ndarray, area_thresh: float, mode: str ) -> Tuple[np.ndarray, bool]: """ Removes small disconnected regions and holes in a mask. 
Returns the mask and an indicator of if the mask has been modified. """ import cv2 # type: ignore assert mode in ["holes", "islands"] correct_holes = mode == "holes" working_mask = (correct_holes ^ mask).astype(np.uint8) n_labels, regions, stats, _ = cv2.connectedComponentsWithStats(working_mask, 8) sizes = stats[:, -1][1:] # Row 0 is background label small_regions = [i + 1 for i, s in enumerate(sizes) if s < area_thresh] if len(small_regions) == 0: return mask, False fill_labels = [0] + small_regions if not correct_holes: fill_labels = [i for i in range(n_labels) if i not in fill_labels] # If every region is below threshold, keep largest if len(fill_labels) == 0: fill_labels = [int(np.argmax(sizes)) + 1] mask = np.isin(regions, fill_labels) return mask, True def coco_encode_rle(uncompressed_rle: Dict[str, Any]) -> Dict[str, Any]: from pycocotools import mask as mask_utils # type: ignore h, w = uncompressed_rle["size"] rle = mask_utils.frPyObjects(uncompressed_rle, h, w) rle["counts"] = rle["counts"].decode("utf-8") # Necessary to serialize with json return rle def batched_mask_to_box(masks: torch.Tensor) -> torch.Tensor: """ Calculates boxes in XYXY format around masks. Return [0,0,0,0] for an empty mask. For input shape C1xC2x...xHxW, the output shape is C1xC2x...x4. 
""" # torch.max below raises an error on empty inputs, just skip in this case if torch.numel(masks) == 0: return torch.zeros(*masks.shape[:-2], 4, device=masks.device) # Normalize shape to CxHxW shape = masks.shape h, w = shape[-2:] if len(shape) > 2: masks = masks.flatten(0, -3) else: masks = masks.unsqueeze(0) # Get top and bottom edges in_height, _ = torch.max(masks, dim=-1) in_height_coords = in_height * torch.arange(h, device=in_height.device)[None, :] bottom_edges, _ = torch.max(in_height_coords, dim=-1) in_height_coords = in_height_coords + h * (~in_height) top_edges, _ = torch.min(in_height_coords, dim=-1) # Get left and right edges in_width, _ = torch.max(masks, dim=-2) in_width_coords = in_width * torch.arange(w, device=in_width.device)[None, :] right_edges, _ = torch.max(in_width_coords, dim=-1) in_width_coords = in_width_coords + w * (~in_width) left_edges, _ = torch.min(in_width_coords, dim=-1) # If the mask is empty the right edge will be to the left of the left edge. # Replace these boxes with [0, 0, 0, 0] empty_filter = (right_edges < left_edges) | (bottom_edges < top_edges) out = torch.stack([left_edges, top_edges, right_edges, bottom_edges], dim=-1) out = out * (~empty_filter).unsqueeze(-1) # Return to original shape if len(shape) > 2: out = out.reshape(*shape[:-2], 4) else: out = out[0] return out ================================================ FILE: auto-seg/submodules/segment-anything-2/sam2/utils/misc.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. 
import os import warnings from threading import Thread import numpy as np import torch from PIL import Image from tqdm import tqdm def get_sdpa_settings(): if torch.cuda.is_available(): old_gpu = torch.cuda.get_device_properties(0).major < 7 # only use Flash Attention on Ampere (8.0) or newer GPUs use_flash_attn = torch.cuda.get_device_properties(0).major >= 8 if not use_flash_attn: warnings.warn( "Flash Attention is disabled as it requires a GPU with Ampere (8.0) CUDA capability.", category=UserWarning, stacklevel=2, ) # keep math kernel for PyTorch versions before 2.2 (Flash Attention v2 is only # available on PyTorch 2.2+, while Flash Attention v1 cannot handle all cases) pytorch_version = tuple(int(v) for v in torch.__version__.split(".")[:2]) if pytorch_version < (2, 2): warnings.warn( f"You are using PyTorch {torch.__version__} without Flash Attention v2 support. " "Consider upgrading to PyTorch 2.2+ for Flash Attention v2 (which could be faster).", category=UserWarning, stacklevel=2, ) math_kernel_on = pytorch_version < (2, 2) or not use_flash_attn else: old_gpu = True use_flash_attn = False math_kernel_on = True return old_gpu, use_flash_attn, math_kernel_on def get_connected_components(mask): """ Get the connected components (8-connectivity) of binary masks of shape (N, 1, H, W). Inputs: - mask: A binary mask tensor of shape (N, 1, H, W), where 1 is foreground and 0 is background. Outputs: - labels: A tensor of shape (N, 1, H, W) containing the connected component labels for foreground pixels and 0 for background pixels. - counts: A tensor of shape (N, 1, H, W) containing the area of the connected components for foreground pixels and 0 for background pixels. 
""" from sam2 import _C return _C.get_connected_componnets(mask.to(torch.uint8).contiguous()) def mask_to_box(masks: torch.Tensor): """ compute bounding box given an input mask Inputs: - masks: [B, 1, H, W] masks, dtype=torch.Tensor Returns: - box_coords: [B, 1, 4], contains (x, y) coordinates of top left and bottom right box corners, dtype=torch.Tensor """ B, _, h, w = masks.shape device = masks.device xs = torch.arange(w, device=device, dtype=torch.int32) ys = torch.arange(h, device=device, dtype=torch.int32) grid_xs, grid_ys = torch.meshgrid(xs, ys, indexing="xy") grid_xs = grid_xs[None, None, ...].expand(B, 1, h, w) grid_ys = grid_ys[None, None, ...].expand(B, 1, h, w) min_xs, _ = torch.min(torch.where(masks, grid_xs, w).flatten(-2), dim=-1) max_xs, _ = torch.max(torch.where(masks, grid_xs, -1).flatten(-2), dim=-1) min_ys, _ = torch.min(torch.where(masks, grid_ys, h).flatten(-2), dim=-1) max_ys, _ = torch.max(torch.where(masks, grid_ys, -1).flatten(-2), dim=-1) bbox_coords = torch.stack((min_xs, min_ys, max_xs, max_ys), dim=-1) return bbox_coords def _load_img_as_tensor(img_path, image_size): img_pil = Image.open(img_path) img_np = np.array(img_pil.convert("RGB").resize((image_size, image_size))) if img_np.dtype == np.uint8: # np.uint8 is expected for JPEG images img_np = img_np / 255.0 else: raise RuntimeError(f"Unknown image dtype: {img_np.dtype} on {img_path}") img = torch.from_numpy(img_np).permute(2, 0, 1) video_width, video_height = img_pil.size # the original video size return img, video_height, video_width class AsyncVideoFrameLoader: """ A list of video frames to be load asynchronously without blocking session start. 
""" def __init__( self, img_paths, image_size, offload_video_to_cpu, img_mean, img_std, compute_device, ): self.img_paths = img_paths self.image_size = image_size self.offload_video_to_cpu = offload_video_to_cpu self.img_mean = img_mean self.img_std = img_std # items in `self.images` will be loaded asynchronously self.images = [None] * len(img_paths) # catch and raise any exceptions in the async loading thread self.exception = None # video_height and video_width be filled when loading the first image self.video_height = None self.video_width = None self.compute_device = compute_device # load the first frame to fill video_height and video_width and also # to cache it (since it's most likely where the user will click) self.__getitem__(0) # load the rest of frames asynchronously without blocking the session start def _load_frames(): try: for n in tqdm(range(len(self.images)), desc="frame loading (JPEG)"): self.__getitem__(n) except Exception as e: self.exception = e self.thread = Thread(target=_load_frames, daemon=True) self.thread.start() def __getitem__(self, index): if self.exception is not None: raise RuntimeError("Failure in frame loading thread") from self.exception img = self.images[index] if img is not None: return img img, video_height, video_width = _load_img_as_tensor( self.img_paths[index], self.image_size ) self.video_height = video_height self.video_width = video_width # normalize by mean and std img -= self.img_mean img /= self.img_std if not self.offload_video_to_cpu: img = img.to(self.compute_device, non_blocking=True) self.images[index] = img return img def __len__(self): return len(self.images) def load_video_frames( video_path, image_size, offload_video_to_cpu, img_mean=(0.485, 0.456, 0.406), img_std=(0.229, 0.224, 0.225), async_loading_frames=False, compute_device=torch.device("cuda"), ): """ Load the video frames from video_path. The frames are resized to image_size as in the model and are loaded to GPU if offload_video_to_cpu=False. 
This is used by the demo. """ is_bytes = isinstance(video_path, bytes) is_str = isinstance(video_path, str) is_mp4_path = is_str and os.path.splitext(video_path)[-1] in [".mp4", ".MP4"] if is_bytes or is_mp4_path: return load_video_frames_from_video_file( video_path=video_path, image_size=image_size, offload_video_to_cpu=offload_video_to_cpu, img_mean=img_mean, img_std=img_std, compute_device=compute_device, ) elif is_str and os.path.isdir(video_path): return load_video_frames_from_jpg_images( video_path=video_path, image_size=image_size, offload_video_to_cpu=offload_video_to_cpu, img_mean=img_mean, img_std=img_std, async_loading_frames=async_loading_frames, compute_device=compute_device, ) else: raise NotImplementedError( "Only MP4 video and JPEG folder are supported at this moment" ) def load_video_frames_from_jpg_images( video_path, image_size, offload_video_to_cpu, img_mean=(0.485, 0.456, 0.406), img_std=(0.229, 0.224, 0.225), async_loading_frames=False, compute_device=torch.device("cuda"), ): """ Load the video frames from a directory of JPEG files (".jpg" format). The frames are resized to image_size x image_size and are loaded to GPU if `offload_video_to_cpu` is `False` and to CPU if `offload_video_to_cpu` is `True`. You can load a frame asynchronously by setting `async_loading_frames` to `True`. """ if isinstance(video_path, str) and os.path.isdir(video_path): jpg_folder = video_path else: raise NotImplementedError( "Only JPEG frames are supported at this moment. For video files, you may use " "ffmpeg (https://ffmpeg.org/) to extract frames into a folder of JPEG files, such as \n" "```\n" "ffmpeg -i <your_video>.mp4 -q:v 2 -start_number 0 <output_dir>/'%05d.jpg'\n" "```\n" "where `-q:v` generates high-quality JPEG frames and `-start_number 0` asks " "ffmpeg to start the JPEG file from 00000.jpg."
) frame_names = [ p for p in os.listdir(jpg_folder) if os.path.splitext(p)[-1] in [".jpg", ".jpeg", ".JPG", ".JPEG"] ] frame_names.sort(key=lambda p: int(os.path.splitext(p)[0])) num_frames = len(frame_names) if num_frames == 0: raise RuntimeError(f"no images found in {jpg_folder}") img_paths = [os.path.join(jpg_folder, frame_name) for frame_name in frame_names] img_mean = torch.tensor(img_mean, dtype=torch.float32)[:, None, None] img_std = torch.tensor(img_std, dtype=torch.float32)[:, None, None] if async_loading_frames: lazy_images = AsyncVideoFrameLoader( img_paths, image_size, offload_video_to_cpu, img_mean, img_std, compute_device, ) return lazy_images, lazy_images.video_height, lazy_images.video_width images = torch.zeros(num_frames, 3, image_size, image_size, dtype=torch.float32) for n, img_path in enumerate(tqdm(img_paths, desc="frame loading (JPEG)")): images[n], video_height, video_width = _load_img_as_tensor(img_path, image_size) if not offload_video_to_cpu: images = images.to(compute_device) img_mean = img_mean.to(compute_device) img_std = img_std.to(compute_device) # normalize by mean and std images -= img_mean images /= img_std return images, video_height, video_width def load_video_frames_from_video_file( video_path, image_size, offload_video_to_cpu, img_mean=(0.485, 0.456, 0.406), img_std=(0.229, 0.224, 0.225), compute_device=torch.device("cuda"), ): """Load the video frames from a video file.""" import decord img_mean = torch.tensor(img_mean, dtype=torch.float32)[:, None, None] img_std = torch.tensor(img_std, dtype=torch.float32)[:, None, None] # Get the original video height and width decord.bridge.set_bridge("torch") video_height, video_width, _ = decord.VideoReader(video_path).next().shape # Iterate over all frames in the video images = [] for frame in decord.VideoReader(video_path, width=image_size, height=image_size): images.append(frame.permute(2, 0, 1)) images = torch.stack(images, dim=0).float() / 255.0 if not offload_video_to_cpu: images = 
images.to(compute_device) img_mean = img_mean.to(compute_device) img_std = img_std.to(compute_device) # normalize by mean and std images -= img_mean images /= img_std return images, video_height, video_width def fill_holes_in_mask_scores(mask, max_area): """ A post processor to fill small holes in mask scores with area under `max_area`. """ # Holes are those connected components in background with area <= self.max_area # (background regions are those with mask scores <= 0) assert max_area > 0, "max_area must be positive" input_mask = mask try: labels, areas = get_connected_components(mask <= 0) is_hole = (labels > 0) & (areas <= max_area) # We fill holes with a small positive mask score (0.1) to change them to foreground. mask = torch.where(is_hole, 0.1, mask) except Exception as e: # Skip the post-processing step on removing small holes if the CUDA kernel fails warnings.warn( f"{e}\n\nSkipping the post-processing step due to the error above. You can " "still use SAM 2 and it's OK to ignore the error above, although some post-processing " "functionality may be limited (which doesn't affect the results in most cases; see " "https://github.com/facebookresearch/sam2/blob/main/INSTALL.md).", category=UserWarning, stacklevel=2, ) mask = input_mask return mask def concat_points(old_point_inputs, new_points, new_labels): """Add new points and labels to previous point inputs (add at the end).""" if old_point_inputs is None: points, labels = new_points, new_labels else: points = torch.cat([old_point_inputs["point_coords"], new_points], dim=1) labels = torch.cat([old_point_inputs["point_labels"], new_labels], dim=1) return {"point_coords": points, "point_labels": labels} ================================================ FILE: auto-seg/submodules/segment-anything-2/sam2/utils/transforms.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. 
# This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. import warnings import torch import torch.nn as nn import torch.nn.functional as F from torchvision.transforms import Normalize, Resize, ToTensor class SAM2Transforms(nn.Module): def __init__( self, resolution, mask_threshold, max_hole_area=0.0, max_sprinkle_area=0.0 ): """ Transforms for SAM2. """ super().__init__() self.resolution = resolution self.mask_threshold = mask_threshold self.max_hole_area = max_hole_area self.max_sprinkle_area = max_sprinkle_area self.mean = [0.485, 0.456, 0.406] self.std = [0.229, 0.224, 0.225] self.to_tensor = ToTensor() self.transforms = torch.jit.script( nn.Sequential( Resize((self.resolution, self.resolution)), Normalize(self.mean, self.std), ) ) def __call__(self, x): x = self.to_tensor(x) return self.transforms(x) def forward_batch(self, img_list): img_batch = [self.transforms(self.to_tensor(img)) for img in img_list] img_batch = torch.stack(img_batch, dim=0) return img_batch def transform_coords( self, coords: torch.Tensor, normalize=False, orig_hw=None ) -> torch.Tensor: """ Expects a torch tensor with length 2 in the last dimension. The coordinates can be in absolute image or normalized coordinates, If the coords are in absolute image coordinates, normalize should be set to True and original image size is required. Returns Un-normalized coordinates in the range of [0, 1] which is expected by the SAM2 model. """ if normalize: assert orig_hw is not None h, w = orig_hw coords = coords.clone() coords[..., 0] = coords[..., 0] / w coords[..., 1] = coords[..., 1] / h coords = coords * self.resolution # unnormalize coords return coords def transform_boxes( self, boxes: torch.Tensor, normalize=False, orig_hw=None ) -> torch.Tensor: """ Expects a tensor of shape Bx4. 
The coordinates can be in absolute image or normalized coordinates, if the coords are in absolute image coordinates, normalize should be set to True and original image size is required. """ boxes = self.transform_coords(boxes.reshape(-1, 2, 2), normalize, orig_hw) return boxes def postprocess_masks(self, masks: torch.Tensor, orig_hw) -> torch.Tensor: """ Perform PostProcessing on output masks. """ from sam2.utils.misc import get_connected_components masks = masks.float() input_masks = masks mask_flat = masks.flatten(0, 1).unsqueeze(1) # flatten as 1-channel image try: if self.max_hole_area > 0: # Holes are those connected components in background with area <= self.fill_hole_area # (background regions are those with mask scores <= self.mask_threshold) labels, areas = get_connected_components( mask_flat <= self.mask_threshold ) is_hole = (labels > 0) & (areas <= self.max_hole_area) is_hole = is_hole.reshape_as(masks) # We fill holes with a small positive mask score (10.0) to change them to foreground. masks = torch.where(is_hole, self.mask_threshold + 10.0, masks) if self.max_sprinkle_area > 0: labels, areas = get_connected_components( mask_flat > self.mask_threshold ) is_hole = (labels > 0) & (areas <= self.max_sprinkle_area) is_hole = is_hole.reshape_as(masks) # We fill holes with negative mask score (-10.0) to change them to background. masks = torch.where(is_hole, self.mask_threshold - 10.0, masks) except Exception as e: # Skip the post-processing step if the CUDA kernel fails warnings.warn( f"{e}\n\nSkipping the post-processing step due to the error above. 
You can " "still use SAM 2 and it's OK to ignore the error above, although some post-processing " "functionality may be limited (which doesn't affect the results in most cases; see " "https://github.com/facebookresearch/sam2/blob/main/INSTALL.md).", category=UserWarning, stacklevel=2, ) masks = input_masks masks = F.interpolate(masks, orig_hw, mode="bilinear", align_corners=False) return masks ================================================ FILE: auto-seg/submodules/segment-anything-2/sav_dataset/LICENSE ================================================ BSD License For SAM 2 Eval software Copyright (c) Meta Platforms, Inc. and affiliates. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name Meta nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ================================================ FILE: auto-seg/submodules/segment-anything-2/sav_dataset/LICENSE_DAVIS ================================================ BSD 3-Clause License Copyright (c) 2020, DAVIS: Densely Annotated VIdeo Segmentation All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ================================================ FILE: auto-seg/submodules/segment-anything-2/sav_dataset/LICENSE_VOS_BENCHMARK ================================================ Copyright 2023 Rex Cheng Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
================================================ FILE: auto-seg/submodules/segment-anything-2/sav_dataset/README.md ================================================ # Segment Anything Video (SA-V) Dataset ## Overview [Segment Anything Video (SA-V)](https://ai.meta.com/datasets/segment-anything-video/) consists of 51K diverse videos and 643K high-quality spatio-temporal segmentation masks (i.e., masklets). The dataset is released under the CC BY 4.0 license. Browse the dataset [here](https://sam2.metademolab.com/dataset). ![SA-V dataset](../assets/sa_v_dataset.jpg?raw=true) ## Getting Started ### Download the dataset Visit [here](https://ai.meta.com/datasets/segment-anything-video-downloads/) to download SA-V, including the training, val and test sets. ### Dataset Stats | | Num Videos | Num Masklets | | ---------- | ---------- | ----------------------------------------- | | SA-V train | 50,583 | 642,036 (auto 451,720 and manual 190,316) | | SA-V val | 155 | 293 | | SA-V test | 150 | 278 | ### Notebooks To load and visualize the SA-V training set annotations, refer to the example [sav_visualization_example.ipynb](./sav_visualization_example.ipynb) notebook. ### SA-V train For the SA-V training set we release the mp4 videos and store the masklet annotations per video as JSON files. Automatic masklets and manual masklets are stored separately as two JSON files: `{video_id}_auto.json` and `{video_id}_manual.json`. They can be loaded as dictionaries in Python in the format below. ``` { "video_id" : str; video id "video_duration" : float64; the duration in seconds of this video "video_frame_count" : float64; the number of frames in the video "video_height" : float64; the height of the video "video_width" : float64; the width of the video "video_resolution" : float64; video_height $\times$ video_width "video_environment" : List[str]; "Indoor" or "Outdoor" "video_split" : str; "train" for training set "masklet" : List[List[Dict]]; masklet annotations in list of list of RLEs.
The outer list is over frames in the video and the inner list is over objects in the video. "masklet_id" : List[int]; the masklet ids "masklet_size_rel" : List[float]; the average mask area normalized by resolution across all the frames where the object is visible "masklet_size_abs" : List[float]; the average mask area (in pixels) across all the frames where the object is visible "masklet_size_bucket" : List[str]; "small": $1$ <= masklet_size_abs < $32^2$, "medium": $32^2$ <= masklet_size_abs < $96^2$, and "large": masklet_size_abs > $96^2$ "masklet_visibility_changes" : List[int]; the number of times the visibility changes after the first appearance (e.g., invisible -> visible or visible -> invisible) "masklet_first_appeared_frame" : List[int]; the index of the frame where the object appears for the first time in the video. Always 0 for auto masklets. "masklet_frame_count" : List[int]; the number of frames being annotated. Note that videos are annotated at 6 fps (annotated every 4 frames) while the videos are at 24 fps. "masklet_edited_frame_count" : List[int]; the number of frames being edited by human annotators. Always 0 for auto masklets. "masklet_type" : List[str]; "auto" or "manual" "masklet_stability_score" : Optional[List[List[float]]]; per-mask stability scores. Auto annotation only. "masklet_num" : int; the number of manual/auto masklets in the video } ``` Note that in SA-V train, there are 50,583 videos in total, all of which have manual annotations. Among the 50,583 videos, 48,436 also have automatic annotations.
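The nesting of the `masklet` field is easy to get backwards. As a minimal sketch (toy values only; the `"counts"` strings are elided placeholders, not real RLEs), indexing goes frame first, then object:

```python
import json

# Toy stand-in for a {video_id}_manual.json payload (values are made up, not
# real SA-V data). It only mirrors the nesting described above: the outer
# `masklet` list runs over annotated frames, the inner list over objects.
toy_json = json.dumps({
    "video_id": "sav_000001",
    "masklet": [
        [{"size": [848, 480], "counts": "..."}],  # frame 0, object 0 (RLE)
        [{"size": [848, 480], "counts": "..."}],  # frame 1, object 0 (RLE)
    ],
    "masklet_id": [0],
    "masklet_num": 1,
})

ann = json.loads(toy_json)
num_annotated_frames = len(ann["masklet"])          # outer list: frames
num_objects_in_frame0 = len(ann["masklet"][0])      # inner list: objects
print(num_annotated_frames, num_objects_in_frame0)  # 2 1
```

In the released files, `counts` holds compressed RLE strings (see the bundled `sav_000001_auto.json` example); the visualization notebook above covers decoding them.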
### SA-V val and test For SA-V val and test sets, we release the extracted frames as jpeg files, and the masks as png files with the following directory structure: ``` sav_val(sav_test) ├── sav_val.txt (sav_test.txt): a list of video ids in the split ├── JPEGImages_24fps # videos are extracted at 24 fps │ ├── {video_id} │ │ ├── 00000.jpg # video frame │ │ ├── 00001.jpg # video frame │ │ ├── 00002.jpg # video frame │ │ ├── 00003.jpg # video frame │ │ └── ... │ ├── {video_id} │ ├── {video_id} │ └── ... └── Annotations_6fps # videos are annotated at 6 fps ├── {video_id} │ ├── 000 # obj 000 │ │ ├── 00000.png # mask for object 000 in 00000.jpg │ │ ├── 00004.png # mask for object 000 in 00004.jpg │ │ ├── 00008.png # mask for object 000 in 00008.jpg │ │ ├── 00012.png # mask for object 000 in 00012.jpg │ │ └── ... │ ├── 001 # obj 001 │ ├── 002 # obj 002 │ └── ... ├── {video_id} ├── {video_id} └── ... ``` All masklets in val and test sets are manually annotated in every frame by annotators. For each annotated object in a video, we store the annotated masks in a single png. This is because the annotated objects may overlap, e.g., it is possible in our SA-V dataset for there to be a mask for the whole person as well as a separate mask for their hands. ## SA-V Val and Test Evaluation We provide an evaluator to compute the common J and F metrics on SA-V val and test sets. To run the evaluation, we need to first install a few dependencies as follows: ``` pip install -r requirements.txt ``` Then we can evaluate the predictions as follows: ``` python sav_evaluator.py --gt_root {GT_ROOT} --pred_root {PRED_ROOT} ``` or run ``` python sav_evaluator.py --help ``` to print a complete help message. The evaluator expects the `GT_ROOT` to be one of the following folder structures, and `GT_ROOT` and `PRED_ROOT` to have the same structure. 
- Same as SA-V val and test directory structure ``` {GT_ROOT} # gt root folder ├── {video_id} │ ├── 000 # all masks associated with obj 000 │ │ ├── 00000.png # mask for object 000 in frame 00000 (binary mask) │ │ └── ... │ ├── 001 # all masks associated with obj 001 │ ├── 002 # all masks associated with obj 002 │ └── ... ├── {video_id} ├── {video_id} └── ... ``` In the paper for the experiments on SA-V val and test, we run inference on the 24 fps videos, and evaluate on the subset of frames where we have ground truth annotations (first and last annotated frames dropped). The evaluator will ignore the masks in frames where we don't have ground truth annotations. - Same as [DAVIS](https://github.com/davisvideochallenge/davis2017-evaluation) directory structure ``` {GT_ROOT} # gt root folder ├── {video_id} │ ├── 00000.png # annotations in frame 00000 (may contain multiple objects) │ └── ... ├── {video_id} ├── {video_id} └── ... ``` ## License The evaluation code is licensed under the [BSD 3 license](./LICENSE). Please refer to the paper for more details on the models. The videos and annotations in SA-V Dataset are released under CC BY 4.0. Third-party code: the evaluation software is heavily adapted from [`VOS-Benchmark`](https://github.com/hkchengrex/vos-benchmark) and [`DAVIS`](https://github.com/davisvideochallenge/davis2017-evaluation) (with their licenses in [`LICENSE_DAVIS`](./LICENSE_DAVIS) and [`LICENSE_VOS_BENCHMARK`](./LICENSE_VOS_BENCHMARK)). 
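The two accepted `GT_ROOT` layouts above can be sanity-checked before running the evaluator with a small standard-library walker. A sketch of the first (SA-V-style) layout, where every directory, object id, and frame name below is a toy value:

```python
import os
import tempfile

# Build a miniature tree in the SA-V val/test layout described above
# ({video_id}/{object_id}/{frame}.png). All names are invented for illustration.
root = tempfile.mkdtemp()
for obj_id in ["000", "001"]:
    obj_dir = os.path.join(root, "sav_toy_video", obj_id)
    os.makedirs(obj_dir)
    for frame in ["00000.png", "00004.png"]:  # annotated every 4th frame (6 fps)
        open(os.path.join(obj_dir, frame), "wb").close()

# Enumerate it the way an evaluator-style consumer would: video -> object -> frames.
layout = {}
for video_id in sorted(os.listdir(root)):
    for obj_id in sorted(os.listdir(os.path.join(root, video_id))):
        frames = sorted(os.listdir(os.path.join(root, video_id, obj_id)))
        layout[(video_id, obj_id)] = frames
print(layout)
```

Since `GT_ROOT` and `PRED_ROOT` must share the same structure, the same walk over both trees makes missing videos, objects, or frames easy to spot before evaluation.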
================================================ FILE: auto-seg/submodules/segment-anything-2/sav_dataset/example/sav_000001_auto.json ================================================ {"video_id": "sav_000001", "video_duration": 20.125, "video_frame_count": 483.0, "video_height": 848.0, "video_width": 480.0, "video_resolution": 407040.0, "video_environment": "Indoor", "video_split": "train", "masklet": [[{"size": [848, 480], "counts": "ka0e8ka001O1O001O1O2N2N2N2N2N1O2N1O2N1O2N2N1O2N2N2N2N1O2N1O2N2N2N1O2N2N2N1O2N1O2N1O2N2N2N1O2N2N2N1O2N1O2N2N1O2N1O2N2N2N2N2N1O2N1O2N2N1O2N2N2N1O2N2N1O2N1O2N1O2N2N1O2N2N2N2N1O2N2N1O2N2N1O2N1O2N2N2N1O2N2N1O1O2N1O2N2N2N2N2N2N2N2N2N1O2N1O2N2N1O2N2N1O2N2N2N1O2N2N1O2N1O2N2N1O2N2N1O2N1O2N2N2N1O2N2N1O2N2N1O2N1O2N2N2N2N1O2N2N1O2N2N2N1O1O2N1O2N3M2NR\\R8"}, {"size": [848, 480], "counts": "[3j0fi0000001O1O002N3N3L3L5L3L^l>3^SA7K4L3M2N2M2O1O00001O00000000O2N1N3J5L4M3O1O001000000001O1N2O1O2N2N2N2N1O1O1O001O001O001O01N100O1O1O1N3N1O2O000O2O0010O01O0001O001O00010O1O2N1O3M3M4L2O0O001O001O1O00001O1O1O001O01O01O0010O01O1O1O1O1O2N1O1O1O1O000001O0000001N4Kh]^1jN^caN5L2O0000O10O1N2M2mNT1M20100001O01O001O00000000001O00O101O000000001O0000000000000001O01O2Oh0XO7I4L1O00O001N2O0N2M3L4L5D;H8N2001O4]NYWOU1Wi0J4L2N3M1O1O1O000O01O1N1H9^Ob0N1001M3L4J6K5J6L4K5M4K5K\\V40fiK3L4N1N2O1001O2N2M5IoTg0f0YjXO>fVOQO[h0f1M3N2N1O100000000010O0001O00001O01O0001O0000001O00010O00010O0001O01O00001O00001O000000001O0001O000000001O01O00001O000001O01O0000001O01O01O0000000001O000000000000000000001O00010O0000000001O01O0000000000001O00000O10001O0O10001O0O1N2H8H8J6F:FS@An?>R@Bo?=R@Bn?=S@Bo?=Q@CP`0F:000001O00000000000000000000O10000O10000O100O1O1O1N2O1L4E;H8J6E;J6YOg0G9G9_Oa0IgbX1"}, {"size": [848, 480], "counts": "WQe3=oi06J6K4M4L4L4M4L3L4M3M3M4M2M4L4L2O1N2O0O100O100O01000O01O0O2N2N2N2N2N3M2O1O1O01000O0100000000000000010O0000000000001N10000O101N2O0O2N2N2N1O2N1O2M2O2M2O1N2N3N2M3O1N2N2N2O0O3N1N2N2N3MdS]6"}, {"size": [848, 480], "counts": 
"d0\\1Ti00O1O0010O1N1N3N2O1O10O101O2N2N2M3N1O001O001O0O100000000001O002N1O2N2N>B3M2M3L5C^c60i\\I9H7J5L6J8H9G2N1O1O0001O0O10O1000O100000000000000000001O000000001O003M3M7I7H8HYhS:"}, {"size": [848, 480], "counts": "ZbV85Zj02N2gYO0hLM[d03f^OU1Ya0lNX^Oc1ga0]NW^Of1ga0\\NV^Of1ja0[NS^Of1na0\\Nm]Og1Sb0\\NV]OX2jb0\\N`\\Of1`c0]N\\\\Od1ec0]NX\\Od1hc0_NT\\Oa1mc0bNo[O_1Qd0gNh[OX1[d0lN_[OT1dd0mNW[OPO5?^e04UZO\\Oe02ce0\\1l1F9J7L3M3M4M2N3L5K5K6I9FX\\\\3"}], [{"size": [848, 480], "counts": "`b0P8ab0O1O1O2N1O2N1O2N2N2N1O2N1O2N1O3M2N1O2N1O2N2N2N1O2N2N2N1O2N1O2N2N2N2N1O1O2N2N2N2N2N1O2N1O2N1O3M2N1O2N2N1O2N2N2N2N2N2N1O2N2N1O2N2N2N1O1O2N2N2N1O2N2N1O2N1O2N1O2N2N2N1O2N2N1O2N2N2N1O2N2N2N2N2N2N2N2N1O2N2N2N2N2N2N1O2N2N1O2N2N1O2N2N1O2N1O2N2N1O2N2N2N2N1O2N2N1O2N2N1O2N2N2N2N1O1O2N2N2N1O2N2N1O3M2N1O2N2NSYa8"}, {"size": [848, 480], "counts": "onh06Yj07J2M4M2N1O2N1O1O0O100000N201O0O2N1O1O10000O100001O00001N2O1O1O2N1O1O001O0000001O000000O100O1O1O2N1O1O1N3M200O101N10001O1O01O1O010O001O1O1O100O1O2N2M3N2O1N1O1O001O00001O1O1O1O001O001O10O01O001O1O1O1O1O1O2O0O1O10O000000001O3Lc0fN^VOa0TPe1_ORj[N5mNT1H7N2O1O1O1O1O0000O11O00000001OO2OO2O0O100000000000001O000001O000001O10C=B2N11N1O0O2O0O2J5DF:M0012M3M3J6J6L4K4M5M4IR_T12f`kNa0Ec0]O`0B4O000O101O00010O000000001O0010O000010O00001O0000010O000000001O01O000001O00000000010O00001O00010O0000010O00000000001O000010O000001O00001O0000000001O0000001O00000000000000001O00000001O000001O000000000O101O00000000001O0O1O1O1O1M4J5G9H9G9BgU95VjF;D=Cji0J6A?@?L4H8L4L4M2N3N1N2N3M2K6K5M3N1N3O0O101O1O0102M5K4M1N01O010O01O01O00ZKd]OX1\\b0cNl]O[1Sb0bNS^O[1ma0cNZ^OX1ga0dN^^O[1aa0cNf^OX1Za0dNm^OY1Sa0eNQ_OY1o`0fNT_OX1l`0gNV_OX1j`0fNZ_OX1g`0dN^_OZ1b`0dNa_O[1_`0cNd_O\\1\\`0cNg_O[1Y`0cNi_O]1V`0cNk_O\\1V`0bNm_O]1S`0aNo_O_1Q`0`NQ@_1P`0^NS@a1m?^NT@b1l?]NV@b1j?]NX@b1h?]NZ@a1h?]NY@c1g?\\N[@c1e?]N\\@b1e?\\N]@c1c?]N]@c1d?[N^@c1b?]N`@b1a?\\N`@d1`?\\Na@c1`?\\N`@c1a?]N_@c1b?[N`@c1b?\\N_@c1a?]N_@b1c?]N^@b1c?]N]@c1d?\\N\\@c1g?ZNZ@f1g?YNY@f1j?WNW@i1l?TNT@k1Yd0O1O2O0O011N101N3M101O0O2NTiP7"}, {"size": [848, 
480], "counts": "T:P1`i00O1000000000000O10O1000O1000O10O100000O0100000O100000O10O100000O10O1000O100000O10O1000O100000O010000000O10O100000O010000000O0100000000O10O100000O10000O10O100000O1000O1000O1000O100O0100000000O10O1000000O0100000000O010000000O10000O1N2M2N3L5L3N2O1N2N3M3MSYm10ofRN5J101O0O00101M10001O1O1O1O1N110L5N010O1O1N2O11O0000O1000O0010000O1O010O10O01000000O010000000000O010000000O100O10000000000O100000000O10000O100000000O1000000O010000O010O100O10O01000000O1000000O0100O010O1000O100000O10O1000000O0100O0100O10000O10O1000000000O10O10000000O10O10000000O10O10O1000000O0100000O10O10000000O010000000O1000O10O1000O10O1000O10O1000O1000O10O1000O10O100000O1000O100000O01000000O10O1000000O01000O10000O1000000O10000O10001O0000000O10000O0100000O10000000O1000O1000000O10O10000O001001O001O001O1N100O1O1O2Nj_?"}, {"size": [848, 480], "counts": "nSn9T1gh0f0^Oa0^Oc0XOg0ZOf0WOi00000000001O000000000000O100O100O1000000O100O1O1O1O1M3K5L4C=H8H8E;E;E;A?@Pm[1"}, {"size": [848, 480], "counts": "\\\\Y3;li0f0@5J6K5L3N3M2N3M2N3M3M2O2N2M2O2M3N2N2M3N1O1O0000000O10O1000000O100O010O1N3K4H8N1N3O1N2O1O1O2O000O100000001O000O101O000O2O0O100O1O100O2O0O101N100O3M101O0O2N1O2N2N1O2N2N2M3L5M4K8Hlbi6"}, {"size": [848, 480], "counts": "Q1d1lh0O100O1O1K5O001O100000O110O101N2N1O1O1N101O00000001O0000001O001N2O1O2N4LAegS:"}, {"size": [848, 480], "counts": 
"\\hU87Xj02O2eYOF[b0@\\Zf0b0ReYO5Ih0WO:I201N11O0001O0000001O01O000001O000010O0001O001O01O0001O0001O01O0000000000001O00000000001O001O01O001O01O01O000001O000010O0001O000001O01O00000000000000000000010O000000000000000001O01OO11O01O0000000000001O0O100000000O101N1O1O1M3I7G:DA>D]Oc0ZOe0B?L3H9K4M4L3M4M2N2M4L4M2N3N1O2N1O101N1O1O100O101O0O1O10001N1001OO2N10SJZ^OP3fa0kLg^On2Xa0nLQ_Om2o`0nL]_Ok2c`0nLi_Om2W`0PMQ@k2o?RMX@k2g?RM_@k2a?QMe@m2[?RMj@j2V?SMPAk2o>SMVAj2j>UMYAi2g>RM_An2`>QMbAn2^>PMgAn2Y>PMjAn2V>PMnAn2R>QMPBo2o=PMTBn2l=PMWBo2i=PMXBQ3g=nL\\BP3e=mL^BR3b=mL`BR3`=mLbBS3]=mLcBS3^=kLdBT3\\=kLfBT3Z=lLfBS3\\=lLeBS3\\=kLeBU3\\=iLfBV3[=hLfBX3\\=eLeBZ3Zb0O000O10000000000000O100O100O1O1O1O1O10O01O1O001N1O2N2M2O20TL]ZOj2Yg0\\Oe0ZO=C9H6J6JRj03M3M4L2N3M2N2N2N1O1O2N3M2N3M2N3M3M1O001O0000000000000000000000O100O100O1N2N2O1O1M3InVOnNTi0o08L4M3O1O1O1O1N2O1000000001O000000001O0000000000000000000000O10000O11O0000O100O100O1000000O1O1O1O1K5M3M3OQf`6"}, {"size": [848, 480], "counts": "n1Y1Wi0000000001O001O1O1N200O1O1O1O0O101O10O01N21L3N1000001O00O10000001O001O0O3O0O5eNjVOR1ei0D5Kgh5@iWJ3N1M5L3N4K5L4L8G8I5J3M10000O1000000000000O100000000000000O11O0O2O1O001O2N5J:G;B\\lR:"}, {"size": [848, 480], "counts": "ZTT84Zj05L2O2WYO6ZMCbd09P]OIWNc1hd0eNo\\OJUNd1ld0cNl\\OOQNa1Se0aNj\\OU2Uc0RNb\\Oo1_c0SN^\\Om1cc0SN\\\\On1dc0QN\\\\OP2ec0oMZ\\OQ2gc0nMY\\OR2ic0mMV\\OS2oc0gMQ\\O8oNd0Tg0\\OWYO7mf0H[YOJkf05j1E;M2M4O[Xf3"}], [{"size": [848, 480], "counts": "cd0m5ed0N2N2N1O2N2N1O2N2N1O2N2N2N1O2N3M1O2N2N2N2N2N2N2N1O2N2N2N2N2N2N2N2N2N1O2N2N1O2N2N2N2N2N2N2N2N2N2N2N2N2N1O2N2N2N2N2N2N2N1O2N2N2N2N2N1O2N2N2N2N2N2N2N1O2N2N2N2N2N2N2N2N2N2N2N2N1O2N2N2N2N2N2N2N1O2N2N2N3Mcki9"}, {"size": [848, 480], "counts": "kg`05Zj05K3N2N1O2N1O2N1N2O1O2N001O0O1000000000000O100000010O100O1O1O01O0O11O00001N01001O00001N10000000001O00000010O0O100O1O1O1N2N2M3O100O2O0000001O001O01O00010O1O00100O1O1O001O10N1M3K7FQVOKU]n2:g\\RMe0A>B5K3N0O1OO1O1N201N101O1O2Md0[Omm9<`QFA?B>D;M4M2O1O101O000000001O1O00000000000fH"}, {"size": [848, 480], "counts": 
"k9m1ch00000000000O100000O1000O1000000000000O10O1000000000O0100000O1000O1000O10000O01000000000O10O10O10000000000O0100000000000O10O10O10000000O01000000000O10O100000000O10O10000000O10O100000O10O10000000O10O1000O100000O010000000O10O01K54L1N7J4Lc0]O;D7J2M4KQ\\i2:fcVM5L3M4L5K4K4M3M6J3M2N00000O100000O010000000O010XOUWOKkh04WWOKih05WWOJjh05WWOKhh05ZWOJfh06ZWOJgh04[WOJeh06\\WOJdh05]WOKch03_WOMah02`WON`h00bWO0^h0NdWO1]h0JhWO6Yh0EkWO;Ui00O001O1O1M3M3M2O2N2N2O0O2N2M3M3N2O0O2O100000O10001O0\\OPWOHPi03VWOKkh0M^WO2dh0I`WO6_i0N1O0O10000000O010O1N20O010000O010O1000000O1O010O1N2O100H_O_VOb0^i0:N2O0100000001N10000000001O0O101O0O101O001O00001N10001O000O2O000000001O0O10000000001N10001O0O101O0O101O0O1000000O101O0O2O001O1N1OfmV2"}, {"size": [848, 480], "counts": "m[f2`0ji07F:G8I8B=G:lNT1\\Oc0I7H9E:B>ZOg0H7J6J6M3M3N2M4L3N2N2N3M2N2O1N2N2N2N3L3M3M3N2N2N2O2L3M3N200O1O2N100O100O1O101O0O10gIh^OcMBc5ea0eLb_Ol2^`0kLR@n2n?oLZ@m2c?QMe@k2[?RMl@j2T?SMRAj2n>TMWAi2i>UM\\Ah2d>VM`Ai2_>UMfAh2Z>VMkAg2U>VMPBh2P>UMTBj2l=UMWBj2h=UMYBk2g=SM]Bk2c=TM`Bj2`=TMcBl2\\=SMfBl2Z=oLmBo2S=QMmBo2S=PMoBP3P=oLQCQ3oQOk0mNT1YOe0\\Oc0O3O0O100O100O1000000O1000000O0100O010O100000O01O1O1N2N3K4J6C=H8H9A>_Ob0_Oe0lN`da1"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "R3V1Zi0001O001O1N2O001O1O1O001N110OO100001O000000000000000000000001O0000001O002N1O1O001O3M;D6IYc6DT]I3M4L5L4L5K8H8H3M2M101O0000O100000000000000000O100000O2OO2O01N101O1O1O8H=C>@nUQ:"}, {"size": [848, 480], "counts": "\\UT893I[43o`0`MSA_2n>_MTA`2k>`MVA`2j>`MVA`2j>_MWAa2j>^MVAb2j>]MXAb2h>]MYAc2g>\\MZAd2f>[M[Ae2e>ZM\\Af2d>ZM\\Af2d>YM]Ag2c>XM^Ah2c>VM^Aj2b>VM]Ak2d>SM]Am2c>RM^An2b>RM\\AP3d>PM\\AP3e>nL\\AR3d>mL]AS3c>lL]AU3e>iL[AW3f>gL[AY3g>dLZA\\3i>`LXA`3j>]LXAb3kb000O10O1000O1000000000O11VNPZOQOQf0i0_ZOmNae0Q1eZOjN\\e0T1mZObNWe0c0j2@[VX6"}, {"size": [848, 480], "counts": "e_O`0A?K5M3N100000010O0000001O001O000001O01O000000001O1O000001O01O0000000001O000000RG"}, {"size": [848, 480], "counts": 
"U;k1eh00O1000O1000000000O010000000000O010000000O1000O10O100000000O10O10000000O0100000O100000O1000O10000O10O100000000000O10O100000O10000000O1000O1000000O01000000000O10O1000000O1000O100000000O10O100000O10O1000O100000O10O10000000000O10O100000O10000000O0100000000000O0100000O100000O10000O10O1000O1M2O2M3L4M3N1N3M4K9H;CbRP3LcmoL6J4L5N0101N3N1O1N101O1O0O100000000000O1N2N2O10O10O100001O00O01000O1O1O1N2N1O2M3N2N2O1N2000O10001O000000000O2O0000001O0O2O00000O2O000O1000001O0O1000001O0O1000001O000O2O00001N10001N10001O0O101O0000001N101O1OO11Ognd2"}, {"size": [848, 480], "counts": "Th]3:oi09J6H6L5J7J5K5L4K5K6H8G8J6K4M4K4M4L4L3L5M2M4L3M4M2N3L3K5L5L3L4K5N3L3M3N2M4M2N2N2N2O2M2M3N2N3N1N2O2N1O1N2O101N1N2O1O100OaKo\\Ob1ob0VN^]Of1bb0WNc]Og1\\b0WNh]Oh1Wb0WNm]Og1Rb0WNS^Og1la0WNZ^Of1ea0YN_^Of1_a0XNf^Of1Za0XNj^Of1Ua0ZNm^Oe1Sa0YNo^Og1Qa0WNS_Og1l`0YNV_Og1i`0WN[_Og1e`0XN]_Og1c`0XN__Og1``0YNa_Og1_`0XNb_Oh1]`0XNe_Og1[`0XNg_Og1Y`0XNi_Og1W`0XNk_Og1V`0WNk_Oi1U`0VNm_Oi1S`0WNn_Oh1R`0XNo_Og1Q`0XNQ@f1Q`0XNS@e1m?[NV@b1j?^NZ@^1g?`N_A:b>FdA4\\>LgA1Z>MiA1W>OjA0V>0kANW>1iAOW>1jANV>2kAMV>2jANV>2iANY>1hANX>3hALY>3hAKZ>4fALZ>4gAJ[>5fAI\\>7jABV>>o5O10000O10001O001N1O10000O11O1O0O11O01O00010O2N2O2N2N2M1100N9BZS[5"}, {"size": [848, 480], "counts": "P=o0ai0000O1000O100000O100000000O0100O10O10000000O10O1000O1000O10O1000O1000O10000000O010000000O010O100000O1000000O0100000O1000O1000O1000O10O100000O10000000O100000O01000000O10O100000000O0100000000O01000000O01000000O10O1000000000O0100000O10000000O01000000O10O10000000000000O1001OO1O1N2N2K4O3IgeV30YZiL8M2L4N1O1000O0100O10000O0100000O10O1000000O10O11O000O01000000000O010000000O10O100000O10000O10O1000O100000000O010000000O01000000O100000O1000O100000O1000O01000O1000O10000000O01000000O0100O1000O1000O10O2OO10000000O1000O10000000O100O1000000O1000000000O10O2O00O1000O1000000O10001O00000O1000000O1000000O10O11O00010O0O2O1O2M10O04L2M_WX1"}, {"size": [848, 480], "counts": "baY96di0i0POn0XOh0SOl0TOm0F9N2O1O1000000O1000000000000000O10000000000O100O2N1N3M3J6J6EA`0QOZ1lN^nR2"}, {"size": 
[848, 480], "counts": "kZX54Wj0>C7I6K5K6J6K4M2N2O001N1N3N1O2O0O2N1O10000O2O1O0O1O10001N0011N1O1O1O1N2M3M3N2N2O1O1O1N200O1O2O0O101O0001O01O000000001O01OO2O00001O00000O101N2O2M102N1N3M2N3M2N3M2N3N2N3L4K]nm4"}, {"size": [848, 480], "counts": "]4b1nh00O0110O00O13M1O001O00000000001O000001O000000000001O0000000O5L>^Oem9AmRF5K7J8G>C3M1N1000000000001OO10000000O1000O11O0000O11O00O2O001O2N5K5JRS[:"}, {"size": [848, 480], "counts": "PZe76Xj03WZOKUa08]^O4ba03\\\\OBeN]1nd0VOT\\Oh1lc0\\No[Oe1Qd0_Nj[Oc1Ud0`Ng[O`1Zd0bNc[O_1^d0bN^[O`1bd0bN[[O^1gd0gNR[OX1Qe0kNiZOV1Ze0kN_ZOYO:?ce0Q1cZOgNie0m0bZO`Nle0X1k1H8J8I6K4M3J8ASYT4"}], [{"size": [848, 480], "counts": "Vi0Z1Wi0O2N2N2N2N2N1O1O3M2N1O2N2N3M2N2N2N2N2N2N2Nb^j;"}, {"size": [848, 480], "counts": "T7l0di00000001O0000001O000O101O0000O100001O01O000000010OO01000001O00O1000000000001N101O001O0000O1O2O00000O1O1O1N3O0O1O1000001O00001O00010O010O01O100O1O1O1O2N3N0O001O001O1O00001O01O0000000000000001O10O01O1O1O1O1O101ON2N8Ae]^1_OScaN3M2N2O000O1L5WOh0E;0O100O1O1O100O10000O1001O00000010O4L3N2N3M4L2M3M4L2M6EoW=6ogB7E9@>L4M3N20O6J1OO11O2NO1N21N10O10000O10000000O1O0O2M4H7L4I7J8K6GUk51nTJ4K3K5O1M22O2M2N``e0J__ZO>G8Gd0^O9H3M10O2O0001O00000000010O00001O0000000010O0001O00001O01O01O00000001O01O0000000000010O001O00001OO110O000001O01O0001O0000001O0001O0001O000001O000001O000000000001O0000000000000010O000000O10000000001O000O2O00000O100N2K5G:H8AkU9B`jFe0@<]Od0J5L5M2O101O000000001O0000010O001O000000010O000000001O001O000000010O0000001O000000010O0000001O0001O01O0000000000JSXOhMng0W2XG"}, {"size": [848, 480], "counts": "o:i1fh0010000000000O01000O10000000O10O1000000O10O1000000000O10O10000000O1000O1000000000O100000O010000000O1000O1000000000O010000000000O100000O1000O100000000O10O1000O1000000000O10O1000000000000O1000O100000O0100000O100000O10000000O10O10000000O1000O1000O1000000O0100000000000O2O1O4L8G9oN^VOd0[_e0@e`ZO2ki0e0D`?Ba@`?B`@=b?B_@=a?C_@l?_OU@a0md0O2O000O2O0O2N101N1O3Mcg0KW^Na0ci0>I77B?ZOoUONZid5"}, {"size": [848, 480], "counts": 
"hWe0W1R[O^N]e0R1[2D8J7J7J3J6Ib[b4"}], [{"size": [848, 480], "counts": "ii0g0ji0O1O2N3M1O2N2N1O2N2N2N2NRmQ<"}, {"size": [848, 480], "counts": "m6l0di00000001O0000O1000O10000000O0100O100000000000001O1O0001O0O100O1O2N1O1O1O1O2O000000001O010O1O01O001O1O100O1O2N1O1O1O101N1O1O001O000010O00000000000001O1O001O1O010O1OO2O5J^]^1UO\\caN5J6L3N2N000O1SOLTWO?hh0f0O2N101O2N1O10O0N2O2O100O1L5N3O5E;E>D:FeSf29okYMC5K4L2N1N10000000O010000001O0O2O4L7H5L4L3M2N3M1O0O2O1O1O1O2N2Mina2JUQ^Md0C4L3O0O1O1O2OO2O00000O2O0000001O0O101O0O100O101O00001N10000O2O000O2O00000O1000001N10000000000O101O00000O10001O001N100O10001O0O3N0000001O0000gam3"}, {"size": [848, 480], "counts": "afa3`0ii09H7I7K5J5G9E;A?M3M3L4H8I7F:E;H8F:J6K5M3M3L5I6M3M3N2M3N2O1N3M2N2O1N2N2O1O1O2M101N3M2M3N2L4N2N3N1N2N3N1O101O00001N100O2O00001O00001O001O0O101O0000XJc]O^3\\b0^LU^OU3ka0hLb^On2^a0oLl^Oj2Ta0QMW_Oi2i`0TM^_Oh2b`0WMd_Od2\\`0ZMi_Oc2W`0[Mn_Oc2Q`0\\MS@a2m?]MX@`2h?^M\\@`2d?^M`@`2`?^Md@`2]?^Mf@`2Z?^Mi@a2X?]Mj@b2V?\\Mm@c2T?YMPAf2P?XMSAg2n>WMTAi2k>UMWAk2k>RMWAl2m>oLUAQ3k>nLVAR3Tc00000000001O00000001O0000O1O1N2L4K5M3G9G9F:F;@`0I8ETPf5"}, {"size": [848, 480], "counts": "SB6J2O1N4K2O1O_b6Fj]I5L5K2N2N2O0O2N2N2N3M1O1O101N3M1O2N2N2N1O7lVObNih0j1H1O00000001N1O1000001L5PO[WO\\Og]POUAo0k>POWAo0j>POXAn0h>UOWAh0j>YOXAc0j>]OXAEWAKV?5c50O2O0O11O001O001O1N2N120O1N1N3N1O2N2OW[V5"}, {"size": [848, 480], "counts": "kB8I4J_hc6"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": 
"XWo1e0hi09H3M100M4M3M3N1O200000000000001O0010O01O00000000001O00000001O0000000000000100O0000100O1O1O010O0001O001O001O00001O0000001O00O100001O000O1O1M3K5J6L4J6K9GUYU1<_fjN:Fh0[O5M000000001O00001O01O000001O0001O0001O0001O01O01O000001O000010O0001O00001O01O000001O00001O01O0010O0000001O010O001O0000000010O0000000000001O00001O000001OO2O0001O00000000O10001O0000000O2N10000O1K6G8D=Aco9EnVEa0nh0k0@?M3N100000000O2O00010O001O0000000010O000001O000010O000001O000000010O000000010O0000001O0001O000000000L5J5I7J6O1O10000O2O0000000000001N11O0000O100000001O00000O10001O0000000O1000001O00000O1000001N10000000001O0O10001O00000O1000001OO10O2O001O001O0O2O000000000O1000000000001N10001O0O100000000O2O0000O1000O1000000000000O1000000000001N10cE"}, {"size": [848, 480], "counts": "U;f1jh00O1000O100000000O1000000000000O0100000O10O1000O10000O011O2N2M7J9Dh`e0]Ok_ZO:H`0@4M2M2O000O2N010O100O100O100O100O10O101O0000000000000O10O1000O10000000O1000O10000000000000000O10000000000O10000O01000000O0100ZOQWOKoh04TWOJlh05VWOJjh06WWOIhh08XWOHhh08XWOGih08XWOHhh08WWOIih06YWOIgh06ZWOJeh06[WOJfh04\\WOLdh03]WOMch02^WONbh00_WO1bh0KaWO4`h0IdWO6Zi01O1N2O1O1O1N2N2N2L4O1N2N1N3M3N2O1O1000O0100001O000]OmVOITi04oVOKQi0LYWO3hh0DaWO:]i0N00O10O01O10000O1000O02O0000O2OO100000000O10O01O1N2M3M3M3N1O2000000001N1000001N1000001N100000001O000O101O0000001O0O2O000O110O00001N1000001N100O10001O1O000O101N100O2O00Roi5"}, {"size": [848, 480], "counts": "egQ4F3L4N1N2N2O1O1O2N1O1O1O100O1O1O10O01000000O01O100O100O100O10000O10001O002O0O0010O01O100O001O002N100O1N3M3M3M3L7FYUg4"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "WSf43\\j03M4M2QYO0Zc02X\\OT1nb0SOg\\OR1Xc0SOa\\OP1^c0UOo[OZ1Pd0hNh[O^1Xd0dNe[O]1[d0eNb[O\\1_d0eN][O]1dd0dNY[O\\1id0gNR[OY1Pe0lNhZOU1Ze0nNaZOR1ae0UOSZOQO`0`0ae0U1jZO]N_e0]1mZOlMhe0j1i1C8J4N5J5L6I4L5I7HQ[o6"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": 
"aVW17Qj0c0A8I6H;G3N3N3N2NO1K`WOZN`h0g14O1M300O101N101OO10010O00001O01O01O00O1001O00O20O0000O10001O01O001O1O00010O1O1O1O010O000001O1O00001O0000001O001O000000000000000N2L4N2H8L4J7J5Jge6JbZI4M3L4M2M4O01O2M8@PVOOVog0h0YjXOj0VO6K200000001O001O000000010O001O000001O0001O01O01O0001O01O00010O000001O0001O000000010O001O00000000010O001O01O01O001O001O000001O000001O0001O000000001O000000001O000000001O000O11O0000O2O00001N100O2L3C=@g0ZO_U96RjFm0[Oc0D9N3O000O101O000010O0001O0000000010O0001O000010O00000001O0001O01O00001O0001O0001O000001O01O0001O0O1J6I7J7L3O1000000O101O00000000001O0000000O101O00000000000O2O00000000000O10001O0000000O2O0000001O0O1000000O10001O000000000O1000000000001N2O001O001O00000O101O00001O000O10001O0O10001O00000O10000000000O1000O1000000000O2O0O10000000001N10000000001O0O10000O2O0000000O1000000000000O1000dE"}, {"size": [848, 480], "counts": "k:d1lh00001O0O2O1N;Ee0VO_`e0MeeYOKgi0l0F:G3L100000000O1000000O10O1000N2O100O1000O100000000O10000000O100000O1000000000000000000O011O0000O10000000000000O1000O1000O1000000ZOoVOMQi02RWOLnh04SWOJnh05UWOIkh06WWOIhh08YWOGgh09YWOGgh09YWOGgh08ZWOHfh07[WOIeh07[WOHfh06\\WOJdh05]WOLbh02_WONbh01_WOOah0OaWO1_h0LdWO3]i00O1O1N1M4N2O1O0O2N2N3M1N3N2N2M2O200O1O01000000O20O@kVOEUi09nVOFRi08QWOGPi03WWOJlh00YWOOdi00001N10000O10O1O1O1O1O001O100000001O00000000O01O1N3M2M3M2M400O11O01OO1000000O2O00001O0000001O0O1O10000O2O001O000O101O0000001O0000000O2O0000001O0O101O000O10001O0O2O0O100O1O2O0OjfY6"}, {"size": [848, 480], "counts": "`cl37Wj02O2N1O1O1O1O2N100O10000O10000000001O00001N101O001O00001O00001O1O001O1O1O1O1O3M4L4L6J4]XO[NSf0j1eYOYN[f0l1`YOUN_f0o1VYOmMA6Yg0l21O0000001O000O101O001O001O0O101O000O1000000O2O001N100000001N101O0O10000O10001N100O2O00001N100O2O0dNXXO0hg0LaXON`g00fXOL[g02iXOJXg05jXOJWg03mXOJTg06mXOITg05nXOJRg05PYOHSg07c1O101O0O10001O00001O001O1O0010O01O1O1O003N]fW5"}, {"size": [848, 480], "counts": "`Pj06SVOD[i0m0K:G2O0O100O1001O001N2O1O001O1O001O0O101N101O1O1O1O1N2O001O1O001O1O001O1O1N2O1O1O1O1O1N101O1N2O1O001O1O1N3N001O10O01O0O2O[`a:"}, 
{"size": [848, 480], "counts": "k^`45Xj06J9H5M3L3M4L5L3M3L5L3L4M2N1O001N101O000O2N100O2O000O2O1N10001O0O01000O10000O100000000O100000000O2O000O101N101N101M2O1O1N2O2N1O1O1bNQMZ[OP3ed0SMX[Oo2gd0SMV[On2jd0TMS[Om2md0VMnZOg2TOQMne0;jZO\\2CXMbe0?gZOU2L\\M]e0c0bZOo1Vf0VNbYOi1af0P13M3L4M4K4L5L4J5N3M2N3O1M3O1J6J6oNgWOYOWfa5"}, {"size": [848, 480], "counts": "Uk72\\j01O2COZVO1gi0;01O100O1O010N2000O100000000O0100O10000000O1000O10000O100000O1000O1000O0100000O2OO100O1000O1000000000O10O2O000000000O2O1O001O001O000O2O001O1O000O2O1N101NPcR:"}, {"size": [848, 480], "counts": "fjZ5f0Qi0l0\\Nc1WOh0Dii0=G7L3M3N1M3O2L3M3L3M4K5XOh0H8M3O1001O100O001O0010O10O010O010O01O010O100O100O1O010O100O2N1O2O0O010O0001O000O2N1N2N3M2N2N3N1O1N3M2O1N3M2N200N2N3N1N2N2N2O1O1O1O1O1O1O1O001O100001O1N2O1L4M2N001N20N2O1M2N4N2M3K5M3N2M4L3N2M3J6M4I8I\\cP6"}, {"size": [848, 480], "counts": "e;f0ji0O1000O100000O0100000000O10O1000O10O100000000O1000000000O1000O1000000000O01000O1000O1000000000000O10O1000000O1000O1000O1000O01000000000O10O100000000000O010000000000000000000O10000000000O2O0000O10000O1000O1000000O1000000O10O100000O1000O10000O010000000O01000000000000000O0100000000000O000100O2LRlX2E]TgM0I6O1O2O000001O2N2N1N2MRio5"}, {"size": [848, 480], "counts": "e^\\57Wj06J3N5J;E?A?A4L5L3L2O1O1N01O2H7G\\OQ[a6"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "[gR31^j05L2M3N2N001O1O7fYO_O\\LNee0f0P\\O^OdN`1Ye0lNY\\OITNa1ce0cNZ\\Ol1fc0SNZ\\On1fc0QNZ\\OP2gc0nMX\\OT2ic0jM[\\OS2Vf0K3L6J3M6J8G:Fgcg8"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": 
"n7h0\\i0=^Oa0O2N101O01O00000001O01O00000O1000000O2O0O100000001N11O00O11O000010O0000000010O00000000001O01O00000001O00010O001O1O0010O00001O100O1O001O00001O1O0000000000O1000O110OON3L5L3I7H8H8LcZ8JdeG4N2M3K6M1001O5@QVO5]og04]jXO>Eb0_O9I2O1O1O00001O000000001O0010O0001O01O01O00000010O00010O00000010O000010O01O00001O0010O0001O00O10001O0O100O1O1N2N3L3M3N2O1N3M2O1N2N2O1N3M2O1O2N1N2O1O1O100O2O0O100O2O0000000000001O00001O1N2O0O7Ic]<3XbC7J3O1N101O0000000010O01O001O001O00100O1O1O00100O1O1O1O2N1O10O01O2N100O1O2N1O1O2N101N1O1O2N1O1O010O00O1O100000001O0000001O00001O00000O100000001O0000000O2O000000000O100000001N10000000001O0O1000001O0O1000001O0O100000001O0O100000000O10001O001O0O101O000O101O00000000000O101O00001O000O1000000O1000000000000O10000000000O2O00000000001N10000000001O0O10001N1000000000000O2O0000000O101O1O0O2Nf`m0"}, {"size": [848, 480], "counts": "c9d1lh000000000O10O1000O10000000000O1000000O1001N100000O100000000000000O100000O1000O10000O1000O100000O10000O100000000O10O10O1O1O100N2_Oa0O10000O100000O10O100O2N1O1N2M4I]j52bUJ8J5L4K5K3L4O1M4N1N200O100001O0000001O001O6hNTWO?di0K1O00000000O1O1O1N0110O01001O1O001O0O010000O100N2O1N2O1N2M3L4O1O11O0000000O10001N1000001N1010O0001N1000001O00001O000O2O0000001N2O001O00000O101O00001N101M6H^e`7"}, {"size": [848, 480], "counts": "kgR22[j06L2M3O1N1O1O1mMBSZO`0je0CUZO=ke0CVZOie0DmYOe0Sf0\\OlYOd0Sf0_OjYOb0Vf0@fYOc0Yf0_OcYOc0\\f0A`YO`0`f0CWYOc0hf0IgXO=Xg0_1O0O101NROSYO[Nlf0d1XYOZNhf0d1[YO[Ndf0e1_YOYN`f0g1bYOXN^f0f1fYOXNZf0g1gYOYNXf0g1iYOYNWf0f1kYOYNUf0f1mYOYNSf0g1mYOYNSf0f1oYOYNPf0g1RZOXNne0h1RZOXNme0h1UZOWNke0i1UZOWNje0i1XZOVNhe0j1XZOVNge0j1ZZOVNfe0j1_110O0100O001O001O1O00100O1O001O1O1O1O1O011N1O100O100O10001O0O2O000010O10O002O0000O010O1O1000O1000O100O100O1O1O1O1O1O1N2O2N1O1O2M2O1N2O1N2O1O1O1N3N1N1M5L3L5G8I8G:CSPP7"}, {"size": [848, 480], "counts": 
"W;e0ki0O1000O1O100O10O01000000000O1000O100000O10000O10000000000O0100000000000O01000000000O01000O01000000O10O100000000O100000O010001O2M3N002N2O0O1O2N3L2O`l>4\\SA2N001O0N5M1O0O10000M30O1000000000O010000000O11O000O0100O1000000000O100000O10O100000000O1000000000000O0100000O1000O10000O1000O10001OO1000000000O1000000O1000O1000000000O10000O10O10O1N1O3M2NTZb7"}, {"size": [848, 480], "counts": "hfl4d1\\h0b0UNi1YOg0O1O10001N1000000000000000000000O1001O00O2O0O1O2N1N2N3M2L6F;\\On0cNVmc6"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "ZlQ31^j05K5L3M4L2N3oWO@jNLZf0h0gZO@cNY1_e0YOi[Oa1Sd0`Nk[Ob1Td0`Ni[Oa1Xd0_Nf[Ob1[d0_Nb[Ob1`d0^N\\[Od1ed0^NW[Oc1ld0^NoZOb1Ue0`NeZOa1^e0cNZZO]1ke0bNoYO_1Vf0[12QNWZOROme0h0`ZOkNfe0n0R2J6L4M5L4L3L4M3L3M4K4M3Lnb^8"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "[7=Sj0000O100N3L3ROo0L4O1O1O00001O01N100000000O2O000O100O2O000O1000000000001O0001O001O01O000000000000010O00000001O0000010O001O001O1O00100O1O00010O01O1O001O001O00001O00000000000000000O1N3L3M3J6H8K6HohW1KTWhNb0C]Od^m6"}, {"size": [848, 480], "counts": "^:e0ki0O1000O10000O010O100000000O0100000O1000O1000O100000000000O0100000000000O10O10000000O10000O010000000O1000O10000O0100000O01000000O1000O1000O10000000000O01000000001N01000000000000000O1000000O1000O100O1000O100000O10O10000000O0100000O10O100000O10000O01000001O00000O1000O10000O01000000O100000000O100000000O010000000000000O100000O1000O10O100000O1000000O010000000O10000000O1000O1000O10000O01000O1O10000O0100000O1000O1000000O01000O10000O01000000000O1000000000O10000000O100001O00O0100000000000000001O1O0O2O0O3N1N5Knek5"}, {"size": [848, 480], "counts": "l^R5R2Sf0]2F9M201N10000000000O1000001O000001O00O11N10001O0O2N2N2M3K7J6Cb0[Oh0POan^6"}, {"size": [848, 480], "counts": "PkQ2>oi05L4K4M4M2M3N2M3N2N3M1O2O1N1O2O0O1O100O1O100O100O10O10O10O2O0O1000000O2\\NXWO^1ih0`NXWO_1nh0NLcNWWO]1ih0dNVWO[1kh042O1N2O0hNVWOi0kh0TOXWOl0hh0SOYWOl0hh0TO`WOd0gkS9"}, {"size": [848, 480], 
"counts": "P`]<"}, {"size": [848, 480], "counts": "YjV35[j02N2N2M4M4L5L1QXOAeNZ1Qe0XOU\\Od1jc0_NQ\\Od1nc0`Nn[O`1Sd0aNk[O`1Ud0aNh[O`1Yd0bNb[O_1_d0dN][O]1dd0fNW[O[1kd0hNoZOX1Se0lNfZOU1^e0l17I3mMaZOnNee0i0fZOkNbe0m0V2J6L5L4J6K4M4K4L3M4M2K]_Z8"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "lZ45Xj07K4M1O000O2N2M2POQ1N3N1N10010O0000001O01O0000000O1O1O2O0O1O10000O2O00000000001O01O00001O0001O01O00001O00000001O0O11O000000010O1O1O0010O0001O01O00100O001O001O00000000001N1000000000000O1O1N2J6I7L4J6I8GhZ80]eG4L5J4O1O2N14L6]OQVO8Xog02ejXOC3L:G4K4N3L3L4M3M3M3N2M3N1O1N3N1O2N1O2O0O1O1O10O100O100O1O001N1M4K4N3K4K5N3M2J6N2N2N3L3M3M3L4M3N2M3M4M2N2N2O1O1O1N2NSXOBme0oi05L5K3M4M2N3M1O2O2M101N3M2O1N3N00001N101O0O2O0000000O0100O1O101N100MWWO_Nih0a13O1N2N3N1M301N1O2O1K5O1O1N3K4L9H^^R7"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "gb\\37Xj03N2M2O2N3L6K4M2mXOIgc09][OYObNb1Ve0WOT\\O^1jc0dNR\\O_1mc0bNQ\\O_1Pd0cNk[O_1Vd0bNg[O_1Zd0cNa[O_1`d0dN[[O]1gd0fNS[OZ1Re0fNhZO[1ae0e14K5kM[ZOWOie0a0cZOVObe0d0gZORO`e0c0]2K4M5L9C>_OnPX8"}], [{"size": [848, 480], "counts": "Pj0`0Pj00001O2N1O1O2N1O2N1O2N1ORmQ<"}, {"size": [848, 480], "counts": 
"oW>;Tj02O1N2O1N1O2M3VOZOUWO06n0]h0i0O1O2O100O001O0001O000001O00O100O101N10000O101N10000000001O000001O1O1O0001O01O00O100000001O0000000010O001O00001O1O10O01O01O001O1O001O010O0000001N11O0000O100000000O100O1O1L3H8M5G9G:JdZ8KbeG5J5L5L201OO3N5LYUg0LdjXO=E9Hb0_O9I1O1N20N2O00010O000001O00000010O0000010O000001O0010O0000010O000001O0000001O01O01O00000001O01O001O000010O000001O01O01O000000000000010O001O000001O0000000000001O001O0000000O10001O0000000O1000001N1O2J6Bf0ROlU9GcjFe0[O`0YOg0I7M2O2O00001O0000001O001O0001O0001O0000001O01OO1010O000001O01O0001O00010O000000001O01O00000000001O0O1M3G9J6L5N1O100O2O0000000000001O0O100000001O000000000O2O0000000000001O0O100000001N100000001O0O100000001N10000000001N100000000O10001O000000O11O0O2O000000001N1000000000000O2O001O0000001O1N101O0O100000000000000O100000000O01001O00000O10001O000O10001N1000001O000O100O2O000000001N1000000O2O00000O10_I"}, {"size": [848, 480], "counts": "bca07Wj0?Bb0_O3M7I1O1O00001O0000O010000000O10000000000000000O1000O100000000000O1000O1000O1000000O10000000O100000O100000000000O10O10000000O01000000XOSWOMlh03VWOLjh03WWOMih02XWONhh02XWOMih02YWOMgh02[WOMeh03ZWONeh02\\WOMeh02\\WONdh01]WOOch00^WO0bh0N`WO2`h0KcWO5]h0HfWO7Yh0IjWO6Vh0DoWO=Qh0@RXO`0nh01O1O0O2N2M3N2N1O2M3O1N2L4N1O2N2N10100001O0000001N10000[OSWOGnh04XWOJhi0N2O00000O1000O1O1O1N20O01000000000O10000000O010O1O1O1O1L4L4M3N2O1000000001N10000O10001O00001N1000001O000000001O0O101N10001O0000001O000O2O00000O2O000O101O00001N11O01O0O10001N100O2N101O1N10Qo]6"}, {"size": [848, 480], "counts": "dhe22Uj0f0^O8K3N2O101N1O2N1O1O100106I1O1OO2O001O1O1N1O2N2N1O001O1O1N101N101O0O2O001N101N101N2O001N2O0O2O0O2O1O0O2O00cNaXOJ]g05iXOGWg05RYOGmf07XYOFhf08\\YOFdf09^YOGaf07dYOF[f09jYODVf0;P2O1O01O0O2O1N2O11O00001O010O01O0100O0100O001O101O0O2\\VOBSi0T1I5L3M>B00O2N1O1O2N1O2M3N2N2N2N2M3N1O1N2O1N2O2M3M2N4M3L5K7I4KPZd6"}, {"size": [848, 480], "counts": 
"XZc0`0ni02O1O010O10000000000O10O100000O10000000O1000O01000000O10O10O1000000O10000000O10O100000O100000O100000000O1000O1000O1000O1000000O1000O10000000O10O10000000O1000O1000000O010000O1000O10000000O10O1000O10O1000O100000O100000O010000000000O1000O1000000O10O1000000000000O0100000O10O100000O10000000000O1000O1000000O10O1000000O0100000000O10O1000000000O10O10O10O1000000O100000O010000O0101N0100000000O100000O0100000O1000O10O10O10000000000O1000O1000O10O10O100000000O100000000O010001OO1000000000000000000000000O2O2N1N102M\\fX5"}, {"size": [848, 480], "counts": "Vjc5[1gf0_2nN\\L^ZO]4Ze0a0O000000001O000000000000000000000000O100000000O100O1O1N2L7F:ROS1QO]1fNPok5"}, {"size": [848, 480], "counts": "U]b2=oi09I4L4M2M4M2O1N2N2N100O2O0O1O2O1N2O2M3N5J4M1N1000000O101O1O00O100O100O100N2O1L4O1O1O1010ON2L5IVWOeNlh0X18O0011O0000N2M4N1O2N2N2L5Kb0]OPmV8"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "k_f33[j05L3N2M3N3M2N5TYO\\OlM6_d0b0Y]ON`MMTe09T]Ok0kb0VOo\\Oo0Qc0SO]\\O]1dc0dNV\\O`1jc0bNR\\O`1oc0aNn[O`1Sd0cNg[O_1Zd0cNa[O^1bd0eNX[O\\1od0bNkZO^1]e0aN[ZO`1he0b13jMcZOSObe0e0hZOQO_e0g0lZOoN[e0i0\\2J4M4M5J8F:C:Kc^l7"}], [{"size": [848, 480], "counts": "dh0l1eh0O2N2N3M2N2N1O1O1O2N1O2N2N2N1O1O1O2N1O1O2N2N1O2N1O2N1O1O1O2N2N1O2N1O2N1O1O2NR\\\\;"}, {"size": [848, 480], "counts": 
"S[g08Uj07L2N0O1UVO_Ogi0e0O001O2oNQ1M3N010010O00000001O00010O00O1O1O1O2N1000000O100000001O0001O01O0001O001O0000O2O01O00000001O0O11O0000010O00001O100O001O1O01O10O0001O001O001O00001O0000000O10000000000O1N2M3H8K5J6F;HY`7Lo_H6K4I6N1O1N3N12N8Geog09moWO:Ie0]O7J1O0000000010O000001O01O01O000010O0001O010O000000010O00001O0001O000001O00000010O000001O00001O01O01O00010O00000010O0001O000000010O00001O0000000001O000000001O00000O2O000000001O000O1000001N101O000N2M4]Ob0A]P:@goEP1^O>C=K4O2N1O101O00000010O000001O000010O00000001O0001O01O0000001O01O01O0000000010O00000000000010O0000O101L3G9L4L4M3O2N10000000001O0O100000001O00000000001N10000000001O000O10000000001N10000000000O2O00001O000O100000001O0O1000001O0O101O0000000O10001O000000001N10001O00000O2O00O11O0000001N1000001O00000O2O001O000O01000000000O10O10000000O2O0000000O10001O0O2O0000000O10001O0O101O000OoI"}, {"size": [848, 480], "counts": "k6b1nh00000O101O0O:G:D8I5DQVOLT[a0OP__Oi0^O;F4L6J2N2M101O000000O1001O00O0100000O100000000O10O10000000000000O100000O100000O100000000O1000O100000000000O10O1000000000O1000O10O10000WOUWOMkh03UWOMkh02WWOMih02YWOMfh04[WOKeh05\\WOJdh05]WOKch05]WOKch04^WOKbh05_WOKah04`WOL`h02cWOM]h01dWOO\\h00fWO0Zh0NhWO2Xh0GoWO8Ui000000O1O1M3M3O1N1N3N2M3M201M3N1N3M20100O1001O00000O11O00[ORWOHoh05UWOImh0M\\WO2bi0O1N2O000O010O01O001N3N1O100001O0O10001OO01000O00100N2O2L3M3M2O2O1O100000000001N100O1010OO10001O000O101O00001O000O101O0O100010O000001O000O101O0O10001N101O00001O00001O000O1O2N1O2N101N10^[W6"}, {"size": [848, 480], "counts": "jVk1?mi09H5L4L4L3M4L3M3O1N2N2O1O100O1OfN^WOg0ah0UOeWOk0Zh0POkWOP1kh000000O1O10O0000O1000000000000O10001O001N101O0O2N2O1N10001N1O2N2O0O101O0OXOoWOTOog0j0VXOSOjg0l0n0M2N2O1O2M2O2O1O0O1001O000O2000O101O0O100010O01O00001O0010O010O01O01O01000O101N3N3L;F1N4L5L2M4M5K;E00O001O2N1O2M3N1N3N2N2M3N2M3N2M2O1O2M2O2M3N3L5L3L8G5KdXS7"}, {"size": [848, 480], "counts": 
"^8e0ki000O0100000O1000000O1000001N6K5J3N2N1OaX86VgG;G1N20O01000000O010000O100000O100000O1000O10O100000000O01000O100000O010000000000O0100000O1000000000O02O00O10000000O100000O100000O1000O100000O1000000000O10O10O100000O010000O1000O10000000O1000O10O100000000O10O010000000O1000000O10000000O0100000O010O100001N10000000O10000000O01000000000O1000000O10O10000000O100000O1000O010000000O10000000O10O1000O100O1000O1000000O1000000O1000O10000O01000000O100000O10000O010O10000O10O100000000O0100000O100000O10O10000O10000O1000O100000000000000000000000N200O10001N2O2N2M3N1OTmR5"}, {"size": [848, 480], "counts": "]Sl5a0Tg0Q3WOd0@`0O1O0000O10000001O0000000000000000O10000000000O100N2O1O1L4K5I7XOh0UO[1fNZlb5"}, {"size": [848, 480], "counts": "eok1=oi07K6J4L3N3M2N2N2N2N2N2O0O2N100O2O0O2O1N101O1N1O10001O0O1000O100003M2N0O110OO1O1O1000000O1N2O1O2N1O1N3K5M2N3N2O1N2N2N2L4M3M4M3K6K4M5HPQj8"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "lbo34Zj04M2O2M4M3M6J5QYOKfMBjd0g0`\\OUOfNe1gd0XO^\\OZOcNb1od0VO[\\O`1dc0aNZ\\O`1gc0aNV\\O`1jc0bNR\\O`1oc0bNm[O_1Td0dNf[O^1\\d0eN^[O[1ed0gNV[OZ1Pe0fNhZO[1be0c14hMfZOTO]e0g0lZOQOZe0f0Q[OnNYe0g0^2K5M3K7J9C^ke7"}], [{"size": [848, 480], "counts": "Rh0^2Sh0O2N1O1O2N2N001O2N1O2N2N2N1O1O1O2N1O2N1O2N1O1O2N1O2N1O2N2N1O1O2N1O1O1O2N2N1O2N1O2N1O1O2N2N1O2N2N1O1O2N1O1Obno:"}, {"size": [848, 480], "counts": 
"lco08Wj05K3N000O101N3L3BYOkVOm0ah0l0N1O2O0001O00000001OO2O0O100O100O101O000O10001O00000001O0001O0001O0001O00000000000001O0001O0000000010O001O1O1O001O10O010O001O0010O000001O00001O1N1000000000O01010NO2N2K5I7J6J6I9H[`7GP`H8H6L3K4O2O0010O:EWUg00cjXO=G:Fg0ZO4N001O001O00001O01O0001O0001O01O000010O0000000010O00001O0010O000001O0001O01O00001O000001O01O00001O00010O001O01O01O000001O01O0000001O000001O000000001O000000001O0000001O00000O101O000O1000001O0O101K4H9F9Aa0CbU94PjFf0C=B>J6M20000O2O00001O00010O00001O00000010O000001O0000010O0000001O0001O01O00001O0001O00010O000000001N100H8J6L4M3M4O0O1000001O000O1001O0001N10000000001O0O1000001O0000000O10001O0000000O101O000O1000001O00000O101O000000000O101O000000000O101O0000000O10001O1O0O101O001O0O10001O0000001N10001O00000O2O00000O100000000O100000000000000O02OO10001O000O101O000O1000001OTJ"}, {"size": [848, 480], "counts": "`6j1fh0O100000000O2O0000001O0O2O2M:F6K;D7H;FbUb0?oi]O`0A;E3M2M101O00000000000O10000000O10O1000O100000000O11O000O1000O1000O1000000000O100000O01000000000000O100000000000O1000O1000000O10O10VOUWONlh01VWONjh01XWONhh02XWOOgh01[WOMeh03[WOMeh02]WOMch02^WONah02`WOMah03_WOMah02`WONah0OaWO1_h0NbWO1_h0OaWO1_h0KeWO5[h0HhWO8Xh0EkWO;Uh0AoWO>Qi0100O1N2O1N2L3N3N2O1N2N1N3M3N2N2N101O02O00001O0000001O0ZOQWOJPi0OXWO0jh0H^WO6_i0O1O000O0100O1O001O1O100O100001O000O10000000O010O1N2O1N2N1N3L5M2000O1000001O0O10001N100001O01O00001N1000001O00001N1O101O00001O0000000O101O00001O0O101N11O000001O0O101N2O00001O0O2N101O000O2O00000ORnj5"}, {"size": [848, 480], "counts": "Y]e1`0oi04K5K5K4M3M3M3L4M3M3M3M2O1N3N1N3M2O2N1N3N1N2O2M2O1O2N101N101NcM_XOl1bg0PNaXOQ2og000O1001N100O1O2N1O001O1O1O1O1O001O1O00RO_XOnN`g0n0fXOPOZg0l0lXOSOSg0k0oXOUOQg0h0RYOYOmf0f0UYOYOkf0e0XYO[Ogf0c0\\YO]Ocf0`0aYO_O_f0?dYOA[f0>fYOBYf0=jYOCUf0mZOASe0?mZOASe0?nZO@Te0>lZOAVe0>jZOBVe0?iZOAXe0>hZOAZe0?eZOA\\e0>eZO@\\e0a0Z2O0010010O010O10O2O0O3N3L4\\VOTO\\i0P101O2OO11O01O2N3M1N3M1N2N3M2M5K6IVjc7"}, {"size": [848, 480], "counts": 
"Z8c0li01000000O100000O1000O10000000O100000O010002N3L5L2N4L1O1OPh:OoWE;dUOGni0>O2OO010O100000O100000O10O1000000000O1000O1000O100000000O1000O1000000000O10O100O10O10O10000000O1000O100000O1000000000O10O1000O1000000000O100000000O01000000000O0100000O100000O100000O100000O10O1000O1O10000000O100000000O1000O1000O100000O10000O0100000000O10O10000000O01000000O10O100000000000000O100000O10O10O100000O100000000O10000000O010O1000O10O100001O0000O01000000000O10O1000O10O100000000000O01000O10O10O1000O100000000O01000000O1000O100000000O10O1000000000000O10O10000O0100000O100000O1000O2O1O001O0O3N4I^oh4"}, {"size": [848, 480], "counts": "i[T6S1^h0R1iNU1lNT1L301O0000001O0000000000000000O1000000O100000000O1O1N2N2L4L4DXOn0YOU1_NchY5"}, {"size": [848, 480], "counts": "W]X2?mi08J4L4M3L3N2N2N2N2N2N2O2M3N1N2O0O2O2N1N100O10001N101O0000000000000O10O1O100O2N10O11O0O1K`WOYNah0d17N2N3N100O2O0O2O0O3N1N2M3M3M3M3M4L4O1IUVOEZR`8"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "SfX47Xj04M2M4M5K4VYOEWc0?_\\Oi0_b0ZOY]Ol0fb0WOU]Ol0jb0VOi\\OV1Vc0lNa\\O[1`c0fNY\\O_1gc0cNU\\O_1lc0bNQ\\O^1Qd0eNj[O\\1Wd0iNb[OW1ad0kNX[OX1kd0iNoZOX1Ze0L`ZOlNhe0f0iZOoN\\e0j0nZOkNYe0o0R[OWNce0_1n1H8K4L6J5J9I5G]m[7"}], [{"size": [848, 480], "counts": "Th0\\2Uh0O1O1O2N2N1O1O1O2N1O1O2N1O2N1O2N1O2N1O1O2N1O1O2N2N1O1O2N1O1O2N1O2N1O2N1O2N1O1O2N2N2N1O2N2N1O1O2N2N2N1ObcQ;"}, {"size": [848, 480], "counts": 
"\\RW18Wj06J3N1N2O1N4L001VOXOVWO38m0Zh0j0O100000000001O00000O1O100O100O2O000O10001O0000000010O0000001O000001O00000000001O01O0000000001N1001O00100O1O001O1O1O0100O0010O0001O0010O0001N2O000000O100001O0O1O1N2M3K5I7J6I7J8Ihe6M]ZI6I6L2L5N1O11O007I:DQUg0=^jXO?Dg0YO4M201O1O00001O00010O00001O01O0001O01O0001O0010O0001O00010O00001O0001O01O0000010O0000001O00010O00001O00001O01O01O0000010O0000001O01O00000001O0001O000O2O01OO10001O000000001O00000O10001O0000000O2N1N3H7F:F;BiU9JYjFd0B;B?G8K5O2O0O2O0000001O00010O00001O0000001O0001O01O0000010O00001O0001O01O0000010O0000000010O000000000N2J7I6L4L4N2O1O2O000000001O00000000001N10001O000000000O2O00000000001O0O10000000000O10001O0000000O101O000O10001O00000O2O0000001O000O1000000000000O101O001O001O0O101O1O0000000O2O001O000O2O0000000O2O0000000000000O10000000000O0100000O1000000OUJ"}, {"size": [848, 480], "counts": "`6j1fh00000O100000000O100000000O101O0000000O2O1O2M7I7J=A`0_Oejc0d0iT\\O?B;E1O1O1O0O10000000000O1000000000O100000O0100000000000000000O1000O10000000000O10000O1000O100000000000O1000O1000000000O1000O010000VOTWO0lh0OVWO0jh0OXWO0hh00YWOOfh02[WOMeh02\\WONdh01]WOOch01^WONbh01_WOOah00`WOOah00`WO0`h0ObWO0^h00bWO0^h0OcWO1]h0MeWO3[h0JhWO5Yh0GkWO9Vi0000O1O1O1O0O2K5M3N2N2N1O2N2M3M3N101O0011O000000001O0O1\\OoVOIQi04SWOKnh0OYWOOih0J^WO3ai0O00000000O010O1O002N1O10000000000000O100000O010O1N2O1N2M3M3K5O100001N1000001O0O10001O0000001O00001N10001O000O2O000O2O0001O01O000O101O0000001N100O101O00001O00001N101N101O000O2N100O2O000NSdg5"}, {"size": [848, 480], "counts": 
"aTl1\\1eh0d0D:J5L4L3M4M2M3N2N2N2N2N1O2N101N2N101N2N100O2N101O1N101N101O0O2O001O1O010O01O1O3N1N5K5K2N2N000gNQYOQOQg0j0SYOUOof0f0TYOZOmf0c0VYO\\Okf0b0VYO^Ojf0a0WYO_Ojf0>XYOBhf0=YYOCgf0Qj02N2O1O0O2O0N5K2YOg0H8O10001O0000000001O00000O1O1O101O0O1000001N1001O00000010O01O0000001OO101O0001O0001O0O1000001O0001O001O10O01O1O00100O0001O01O01O001O00010O00001O1N10000000000O1O1O1L4J5L6I6H8I8Kge6L]ZI9I3L4M2O1N2002N8Gjog0OSPXOC:Eced02[Z[O6[VOTO[i0X1I6J2M2O1O00O10000000000000000O10O1000O1000000000000000000O01000000000000000O100000O0100000000000000000000O01000000000O10O10O100WOUWOMkh02VWONih02YWOMgh03YWOMgh03ZWOLfh03[WOMdh04]WOKch05]WOKch04^WOKch04^WOLbh03_WOMah01aWOO_h0OcWO1]h0NeWO1[h0MgWO3Yh0HkWO8Vh0DnWOaf0A_YO>cf0@]YO?hf0]OXYOb0]h0L3N3M3K]ai7"}, {"size": [848, 480], "counts": "\\8e0ki000O100000O100O1000O2O0000000000000000000O010000O10O10000000001N1000001O7Hga`0L\\^_OH8ZOf0_Oa0^ORal4"}, {"size": [848, 480], "counts": "nPT46Vj0:G6J4M3M3N2M4N2M3L3N2O0O2N6J2O1N10001N101N2O1O0000001OO100001O00EfWO`NYh0`1iWO^NXh0b1E\\SA8J7I2M110000000O10O100000000000O10O10O10000000000O0100000O0100000000000O10O100000000O10O1000O10000000000000O10O100000O100000O10O100001OO01000000000O10O1000000000O010000000000O010000000000000O100000O100O11O000O100000000000O10O10000000O010000000000O10O100000O10O1000000O01000O10000000O10000000000000000O0100O1000O10000O10000000000O10O10000O1000O1000000000000O01000000O100000O010000000O100000O1000O10O1000O100000000O01000000000O1000O1000O1000000000000O10000000O1000000000000O10O100000000O101O1O1O2N1N3MmP_4"}, {"size": [848, 480], "counts": "ko_6a0Wi0k0VOh0[Oe0[Od0ZOf0XOh0J7O0000000000001O000000O1000000O1000000O1O1O1O1N2M3K5J6A?DBn`71h^He0A?E:G:G8M3O2O0O101O00001O00001O00010O00000010O0001O0000010O00001O000010O00000001O01O000010O0000000000N3H7K5L4L4N3N1O101O00000O1010O000000O101O00000000001O0O1000001O0000000O10000000001O000O101O000000001N100000001O0O10001O0000000O10001O0000000O10001O001O0O2O001O0000000O2O0000001O0O101O0000000O2O00000000000O1000000000000O10000000000O101O00\\I"}, 
{"size": [848, 480], "counts": "R7g1ih00000O10001OO10000000000000000O1000O1000001N1O4M3M;EJ6L4M4N1O1N2O1O2N1O1N4M2O6I2N2N3M101N2O1N2O1N2O1N101N101O0O2O1O1N2O1O0O3N1O1O2NoVOEng0:oWOKPh04kWO1Wh0MhWO4Yh0LdWO5_h0IWWO`0lh0_ORWOa0Pi0_OnVOa0Ti0<3L5K5K9^Omaf5"}, {"size": [848, 480], "counts": "i8f0ji000O0100000000000O10000000000000000000000O010000000000001N2O1O1N6K3L3NQR>=amA1O1O10O010000O10000000000O010000000000O010000000O1000000O1000O1000O10000000000O1000O1000000000O100000000000000000O01000O100000O1000000000000O1000O100O100000000O0100000O10O1000000000O1000O100000O1000O1000000000000O1000O100000O100000O0100000000O1000O10000000O100000O10000O1000O10001O0000O1000O10000000O1000O100000000000000O100000O1000O1000O10000O10O100000000O10000000O1000O10O1000000O10O10000000O1000O0100O0100O10000O1001O00O10O1000000000O1000000000000000O010000000000000O10O10000000001O0O2O1O001N3N3JW`a4"}, {"size": [848, 480], "counts": "W`]6n0Qi0c0ROm0ZOe0YOg0B?]Ob0J6N2O1O1000001O00000O100000O100O10000O100O1O1N3M2L5I6G9H9@`0A`0@a0Ab0WN^WOe0cho4"}, {"size": [848, 480], "counts": "Zca54Wj09I7J4K5M3L5K5L3M2N1O2O0O2O0O2N10001O0O2O00O100O1O001O1O1O1O1O100O1O2N100O100O2O00000O2O1O0O2O0O2N2O1O0O2O1N2O1N3L5J7IoVZ5"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "f^d45Si0]1ZLfNg]O`1Vb0dNf]O_1Xb0dNc]O`1\\b0dNn\\OGoMi1Re0bNl\\On1Tc0UNg\\Om1Yc0ZN_\\Og1ac0]NY\\Oe1gc0_NT\\Oa1mc0cNn[O^1Rd0fNh[O[1Yd0lN^[OV1cd0nNW[OQ1kd0TOnZOnN4c0Re0_1[[OPNod0k1a[OYMPe0b2b1I8E:Ma0^O=B9Dh^T7"}], [{"size": [848, 480], "counts": "ci0m0di0O2N2N1O2N1O2N1O1O3M2N1O2N1O1O2N2Nbhm;"}, {"size": [848, 480], "counts": "gmW1:Uj02N2O1O1O1N2M3M2]Oc0ECQ[86mdG9_O`0C=E:K6N2N1O10001O00000010O0001O0000010O0000001O010O00001O01O0001O00000010O0001O0001O0000010O000O1O1K5J6L5K4M3O1O1000001O00000000001O0000001N10000000001O00000O1000001O0000000O2O000000000O2O00000000001N1000001O00000O101O00000O2O000000000O2O000000001O1N2O001O0000000O101O00000O10001O000O2O0000001O000O100000000000000000O10O100000001O0O100000001O00RI"}, {"size": [848, 480], 
"counts": "Z7f1jh0000000000000000000O100000000000001N101O0O5Kf0XOmed0_O_Z[Og0@=D9H2L100000000000000O10000000O1000000000000000O01001O00O10000000000O10000000O1000O1000000000O100000O100000O1000O100000000O10000XORWONnh01TWONlh01VWONjh01XWONgh03YWOMgh03ZWOLfh04[WOKeh04\\WOLdh03]WOMch02^WONbh02^WONbh01_WONbh00aWOO_h0OcWO1]h0NdWO2]h0JfWO6Zh0GiWO8Yh0CkWO=Si0100O1O0O2N2N2M3N2M3N2N2M3N1O2N2N101O1000000000000000[OSWOGnh05VWOJkh00[WONei0O1O001OO01000O1O1O1O1O1O100001O000000000O010O1000O1N2O1M3N2M3N2N2O100000O2O0000001N1000001O000000001O000O2O00000O2O0000001O0000001O0000001N101O0O2O000O2O00001O001O001N100O101N100O2O1O1OcRj5"}, {"size": [848, 480], "counts": "lXe38Vj04J6K4M4L3M4M2M3M4M2M3M3M3N2N2M2N3M3L4M3M3M3N1N3M2M3N3L4M2N2O1N3M2O2L3M4M2M3M5L3N1M3N3N1N2O1O1N2O1O1O1O1O1O100O010000O0100000O10000O100N3J5G9ZNnYOgNZf0T1VZO[Noe0c1a1N2M3N2N2M2O2N2M3M3N2N101N2N1O2N2O0O2N2O1N2N2O0O2N2O1O1N2N3N001N3N3L[kn5"}, {"size": [848, 480], "counts": "o8g0ii000O10000000000000O10000000O100000000000001N10001O00001O0O8Gkl>KZSA;F3L10100O1000O1000O100000000000O01000000000O1000O10000000O100000000O010000O100000O010000000O1000000000O1000O1000O100000000000000O010000000000000000O1000O1000O100000O10O1000000O010000000000000O10O1000000000O100000000000O1000O10000000O10O100000001OO10O1000000000O10O10O10000000O1000000000O10O10000000O1000O1000000000O1000O100000000O0100O10000000O10001O0O10O1000000O1000O010000000O1000O1000000O1000000O10O10000O010000000O1000000000000O100000000O1000O1000O10000000000000000000000000000O2O001O1O001N4M2Llid4"}, {"size": [848, 480], "counts": "cVZ6l0Pi0e0WOi0ZOe0TOl0DG9N2O1O101O000000000O100000O10000000000O1O1N3N1N2K6K4E_Oc0\\Oe0XO[TS5"}, {"size": [848, 480], "counts": "c_X5>ni06K6L2M3M4M2N2N2O1N2N3M2O1N2O1O0O2O1O0O10001O0000O100O010N2O1N2N2O1O1O2N100O100O2N101N1O101N1N2O2N2N2N2N3M3M2M4M3Mcod5"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": 
"oTa43Zi0`0lVO0gh0Q1aL`Nh]Oe1Ub0_NR]OOQNe1jd0cNP]OKPNe1Pe0fNiZO^Ol1_2Zc0\\N`\\Of1`c0`NX\\Ob1hc0`NU\\Oa1kc0aNR\\O`1nc0cNm[O_1Sd0eNh[OZ1Zd0lN_[OT1bd0ROW[Om0kd0YOmZOf0We0;V[OQNPe0i1^[O`MPe0[2f1DhVOjN_h0i1K2M2O2O01O00010O0000010O0001O0001O01O0000010O000010O00010O00010O000001O0000010O0000001O010O00001O00010O000010O000001O0001O01O000001O01O0001O00000000000000001O001O0000001O000O10001O0000001N1N2K6G8D=EgU9CcjF`0]O`0B?G8L5M3O0O101O00001O000010O000001O01O0001O0001O0001O001O01O0001O0000010O00001O0001O0001O0000O2O0M3J6K5K6L3N200O101O000000001O0000000000001N1000001O00000000001O000000000O2O000000001N100000001O0O100000001O000O101O0O10001O00000000000O101O0000001N101O1O0000001O0O1000001O0000001N10000000001O0O2O00000000000O10000000O100000O101O00000O10001O00000000000O1000gH"}, {"size": [848, 480], "counts": "e7f1jh00000000000O10O100000O0100002N2N0O2O1N2cNXWO3Jc0Rj0Ahjc0>iT\\Ob0_O8J0N2O001N100000000000O01000000000O1000000000O01000000000000000000000O1000O100000000O1000O100000000000000000O100000O1000O1000000WOVWOLjh03WWOMih03XWOLhh04XWOLhh03YWOMgh03YWOMgh02ZWOMgh03ZWOLfh03[WOMeh02\\WONdh02\\WONdh01]WOOch0O_WO1ah0MaWO2`h0LcWO3^h0EiWO;Vi0O010000O100O1L4N2M3N2N1O2M3N2M3N2N2O0O2N200100O00001N1[OQWOIoh04UWOKkh01ZWONjh0F_WO9]i0O1O00000O100O10N2O1O1O1O1001O00000O1000O100O010O001O1O2L3K6M2N1100000000O10001N10001O01O01O000O101O00001O0O101O0O1O20O00000001O000O2O00001O0O101N11O01O000O2O001O0O101O001N100O2O0O101O0Ojal5"}, {"size": [848, 480], "counts": "l_U38Sj07L4L3L4L4H9L3L4L4L4L3M4J6J6L4L3M4M2M4K5L3M3M4K4N2M4M2L5L3L5L3N3L3N3N1N2O1N3N1O1O1O1O100O100O011O00O100000000000001O101N1\\MaZO5`e0GgZO5Ye0HlZO6Ue0HoZO5Qe0JR[O4od0KR[O4nd0JU[O4md0JV[O4kd0IY[O4hd0J[[O4gd0J[[O3gd0K\\[O3ed0L][O2ed0L][O2ed0L^[O0dd0O^[ONed0O^[ONdd01^[OMdd01_[OKdd03][OJfd06[[OGhd07[[OAld0=e2N1O2N2N110O0O2O0O2O1N2O1N3M3N3LlXe6"}, {"size": [848, 480], "counts": 
"[9d0ki0100000000000000O10000O0101O1O000000O010001O1N10002N2M102NX]<6`bC2000000O01000O10000000O01O1O1000000O0100000000O100000O10O100000000000O0100000000000000O10O1000000000O100000O100000O01000000O010000O1000000000O100001O00O0100000000000000000O10O101O00000000O0100000O1000O1000001O00O1000O100000O1000O100000O1O2O00000000O011O00000O010O1000O10O10000O10000000000O10000000O100000O011O000001O000000O010000O100000O010000000O10000000O020N010000000O0100000000O10O10000000O100000O10O0100000O10O1000000000000O010001O00O100000O1000000000O100000O10000000000000001O000O2O1O1N2O1O2L3NaSh4"}, {"size": [848, 480], "counts": "olV6j0Ui0b0]Oc0WOi0UOj0A`0F9_Oa0M3O1O1000000O101O00O100000O10000000O1O1O1N2O2M2L5H8DQWO^OTi0`0mVO]OXi0`0>N2N11OO2N1O4L4K\\YP7"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "o^U1:Uj03N1N2O0O2O1M3K5XOh0J5N2000001O000001O0O10000O2N1O100000000O2O000001O01O01O1O0010O00000000000000001O00000000010O0001O00100O1O001O1O0010O001O01O01O1O0000001O0O2N1K6K4K5M3N2N2N2N2O1O1O002M2O1M4KehW1O[WhN9J4M3M3O001N1O2N101N1O000010N1N21OOZWOROlg0l0SXOXOlg0e0UXO]Olg0a0UXO@jg0?VXODig09XXOIgg06YXOLfg03YXOOgg00ZXO0gg0NYXO3gg0LYXO6fg0IZXO8fg0G[XO:eg0D[XO=eg0B\\XO?cg0@]XOa0cg0^O]XOb0dg0^OZXOe0eg0YOZXOj0fg0TO[XOn0dg0ROZXOP1fg0oNZXOR1fg0nNYXOT1fg0jN[XOX1^h00O1O2M2N2M4M2O1O2N1O1M4N101N1O110O00001O00000001O01O000000000001O00000000001O000000000O2O00000000001O0O101N1K5I8G8Ab0CT[8OldG=@?C=CY2N2O1N2N2M5J^Pd7"}, {"size": [848, 480], "counts": "g9e0ki00000000O1001O0000O100000O101O000000O0101O001N101O1O1N2O2N7Hcb;=o\\D3OO010000O10O01000000O100O0100000O1000O10O1000O10000000000O1000O100000O10000000O100000O10O1000O1000000000000000O100O0010O001M2N3K6K^]Q2MibnM1O1N2N2O2N001O1O1O1O1O0O2O1O00000O10N20O10O1000O11O000O10O100000O1000O1000O1000000000000000O10O10000000O10O10O1000O10O1000000000000O0100000000O1000000000O100000000000O100000000000O10O10010O0O2O001N2O0O1O2O4KUSh4"}, {"size": [848, 480], "counts": 
"RmV6U1gh0e0YOg0ZOf0[Od0B>A`0K4N200O1000000000000O10O10000000000N200O1O1N3M2L4L5B=E[WOAgh0>[WOAeh0;aWOC_h08jWOCWh09PXODPh09V1L3N2N2O2N2M3NmmS2e0ZQlM>eVOnNbh0f1I4O1N20O01O0000001O01O0001O01O01O000001O01O001O01O01O00000010O0000001O01O00001O01O0001O001O00010O0000010O0000010O00100O000000010O0000001O0001O00000000001O000O10001O000000001O0000001O000O1O1M4G8E=_ObU9@ZkF5ZOe0F9F:L5K401O1O00000001O01O1O2O3L5K1O1O00100O0000001O010O00000010O000001O00010O000000010O0000000000N3H7K5L4L4N3N100000001O000000001O000O100000001O000000001O0000001N1000000000001N100000000O2O0000000O2O000000001O0O10000O2O00000000001N10000000002N002M100000001O00000O10001O00000O101O0000000O10001O0000000000000O1000O10000O0101O000O10001O000000nG"}, {"size": [848, 480], "counts": "`8i1gh000O100000000O10000000000001O000000O1000000000001O1O1O8H`0@VPR4ZO_PnK2N2N100O1N2N3L3M3N2O10000001O00000000001O00000010OO101O000000010O001O00001N1001O0001O000O10001O0O100O2O1N100000001N2O0000001O0O100O2N3MPWi5"}, {"size": [848, 480], "counts": "Wn;4Zj04M2M2O2M2N2N3M3N2N2M2O2N1O2M2O2M3M3N1O2N1N3N2N2N1O2M201M2N3N2N1O2N1N3N1O2M3N2N1N3N2M2O2N2N2M2O2N1O2M3N2M2O2N1O2N1O2M2O2N2N1O2N2N1N3N1N2O2M2O1O1N2N2O1O1O2N1O1O1O100O1O1O10O0100O10O010O1O1N2N2O2M3N2G8L5M2YNnYOoNSf0i0YZOQOie0l0\\ZOQOfe0k0]ZOQOfe0n0\\ZOoNge0o0[ZOPOge0m0\\ZOPOfe0n0\\ZOQOee0m0]ZOROee0l0\\ZOROfe0l0]ZOROde0m0]ZOPOfe0o0[ZOlNje0S1WZOjNle0U1UZOjNme0T1TZOkNme0T1TZOjNne0T1TZOkNme0T1i1N1O1O100O2N1O100O1N3O0O1O1O1O2N100O2N1O1O101N101N1O2O0O2O0O[cT8"}, {"size": [848, 480], "counts": "U:h0hi000000000O10000000O10000000001O1O0O01O1N2O1O4L1O1O1O1N2O2N2Mb^Q5KdanJ101N011O002N1O0O11O0000O2O1O1N1000000O10000000000O011O0O10000000000001O0000000000000000000000000000O1000O100O100O10000O100O1O1O100O2O0O2N2O\\Xg4"}, {"size": [848, 480], "counts": "[g\\625>Si0l0YOd0XOg0ZOg0_O`0B>D^Ok]T7"}], [{"size": [848, 480], "counts": "Pj0`0Qj0O001O2N2N1O2N2N3Mb\\T<"}, {"size": [848, 480], "counts": 
"g5m0bi010000000O2O0N200O100000001O0O2O00000000000O3LRke1NnTZN4O10001O0001O0O1000010O0000000000010O0000010O000001O0010O01O001O001O1O10O01O2N1O1O1O100O1O2N2N4M5J4L6J3MO2N2O0010O000001O1O000000001O00000000000O100N2N3J5K5L4J6J7J7Gge61[ZI5K5L4M2N30O2NbUg0H_jXO?Dj0VO5N2N2O00010O00000010O000001O01O01O000010O0000010O0001O010O00000001O01O0000010O001O00000010OO20O000010O000010O0001O01O0001O000001O00000000000001O01O000000001N1000000000001O000000001O0O100N3J5H8H9DA?D;L4L5L31O01O001O00001O01O01O00001O01O0001O0000010O0000001O00010O00000010O000001O000001O0000000O2K4J6K5K5N2O1O101O00000000001N1000010N1000000000001O0000000O2O0000001O0O1000000000000O10001O0O1000001O0O10001O00000O2O000000000O10001O000000001N10001O00001N10000000001N1000001O000O2O00001N1000001O00cG"}, {"size": [848, 480], "counts": "P9d1lh00000O1000000O1O1O1O1O1O1O1O1N2N102M2O1O001N2O1O1O2O0O011N1N2O1N2001OO1O1O2L300O10000O2EQVO3Wj0Nm\\V26kbiM4N2M3J6H7N2I8N1O0O2O1O10O1000O]OoVOIRi05oVOKQi04QWOKnh05SWOJnh05SWOKmh06RWOJoh05QWOKoh05RWOJnh05SWOKmh03UWOMkh02VWONjh00XWO0ih0J\\WO6dh0G_WO9\\i0000O1O1N2N2N2N2N2N2N2N1O2N2N2L4N101O1001O001O000O10001XOTWOJPi0DjVO1?:bi000N1000O11N01O1O100O100OO11000O1000O100O2O0N101O1N2N2M3M3O1O11O0O1000000O2O0000001N1000001O001O0O10001O00001O0O101O000000001O000O10001O001O000O2O00001O00001N101O00000O2O1N2O00boV5"}, {"size": [848, 480], "counts": 
"W=d0ji02M3M4K5L3L5L2O2M3N2N2L4N3M3L3M3N1O0O1M300O010O1O100O1N2O0O2O1OO20O1010OO10001O0kYOaM[d0`2d[ObM[d0]2e[OeMYd0[2f[OiMXd0W2g[OkMWd0U2i[OnMUd0R2k[OoMTd0Q2j[ORNTd0n1k[OUNTd0j1l[OYNRd0h1l[OZNSd0f1l[O\\NSd0c1m[O_NRd0`1n[OcNPd0]1o[OeNPd0[1n[OhNRd0W1m[OkNRd0T1n[OnNRd0Q1n[OPOQd0P1o[OROPd0m0P\\OTOPd0k0P\\OVOPd0i0P\\OXOoc0h0P\\OZOPd0d0P\\O_Ooc0`0Q\\OAoc0=Q\\OEnc0;R\\OGmc07U\\OIkc06U\\OKkc02W\\OOic0NY\\O3gc0KZ\\O6fc0H[\\O9dc0F^\\O;ac0C_\\O?ac0@_\\Oa0ac0^O_\\Oc0ac0[O_\\Og0ac0XO_\\Oi0ac0VO`\\Oj0`c0TOb\\Ol0^c0TOb\\Ol0^c0SOc\\On0]c0POe\\Oo0\\c0oNe\\OP1\\c0POe\\Oo0\\c0nNf\\OR1[c0kNh\\OT1Xc0lNi\\OS1Wc0lNj\\OS1Wc0lNj\\OT1Wc0kNj\\OT1Vc0kNk\\OU1Uc0jNm\\OT1Uc0kNk\\OU1Vc0iNl\\OU1Uc0kNk\\OU1Vc0jNk\\OU1Uc0kNk\\OU1Vc0iNk\\OV1Vc0iNk\\OW1Vc0hNj\\OX1Wc0fNk\\OX1Vc0gNk\\OY1Vc0fNi\\OZ1Xc0eNi\\OZ1Yc0eNg\\O[1Yc0eNg\\OZ1[c0eNe\\OY1^c0fNb\\OY1_c0fNb\\Og0Qd0YOo[O`0Yd0_Og[O?[d0Ad[O=`d0B`[O;dd0D\\[O:gd0EY[O7ld0HU[O3Pe0Kl2O2N1N2Ocf18RYNOmNW10N2O1N3O1N2M3O1N2N2N1O1N3N4K6J^me8"}, {"size": [848, 480], "counts": "fha32^j02N1N2O0O4L4L3N3M1O1N010000O10000O10O100000O10000000001N01000000000000O10O1000O1000O100000000O100000000000O100000O010000O1000O10000000O1000O10000O100000O100000O1000000O01000000O10O10000O010000000000O10O10000000000O100000O10000000O1000O10000000O100000O0100000000000O0100000O10O10O100000O100000000000000O100000000000000O10O10000000000000O1000O1000000000O10010O1O0O101O0O1O2N101N_Wo3"}, {"size": [848, 480], "counts": "aTn6`0Xi0j0\\Oc0[Oe0\\Oc0ZOf0^Ob0F;M2O1O100000000O1000000000O10000000000N2O1O1N3M2L5I7C=B=@b0^Oe0ZOS1_NYV_4"}, {"size": [848, 480], "counts": "Sgg1?mi0:G6K4L4L5L5K6J4K3O0O2N10000O100O10000O100000O1O100N2O1O1N2N2M3N2O1O1N200N2O100O2O0O101O001O0000001O001N20OO2O1N101N2O1O1N2N2N3L3N3L5KP_m8"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "PXX5:Z31[MHSe0=V]On0eb0WOS]On0kb0VO[\\O^O^Nb1We0UOT\\O^1lc0cNQ\\O_1nc0dNo[O]1Qd0fNk[O[1Ud0gNh[OX1Zd0kNa[OW1_d0lN\\[OU1fd0nNT[OQ1od0TOkZOj0Xe0[OaZOd0ce0=hZORN\\e0j1oZOeM\\e0T2d1F;J5M5J8I8Fb0\\OW[a6"}], [{"size": [848, 
480], "counts": "mg0c2ng0O2N1O2N1O2N1O2N2N2N2N1O2N2N2N2N1O2N1O2N2N1O2N2N1O2N1O2N2N2N1O2N2N2N1O2N2N1O2N3M2N1O2N2N2N1O3MbmT;"}, {"size": [848, 480], "counts": "jT51_j03cUO0Sj0;L7I3M2N00001O10O000000000O110O01O010O2N1O2OO01O2N10O01O1OO1O11O10O00000M3O2M2O1N2O100O1000001O0000000O101O0O2O0BTVO01OR]^15R]bN2N2O1O1N2N1I8VOj0L3O10001N1000O2N1O100O1O10001N1001O000000001O010O001O0000010O00000001O0O100000001O01O001O001O100O001O000010O00001O1O00001O01OO2O0000001O0000000O1O1M3O1J6K5L4J6J8Ig^T1=n`kN<^VOnNPi0f1A5N2O001N11O00001O01O00000010O0000010O00001O01O000010O000001O00010O0000010O0000001O00000010O0001O000001O01O001O01O01O001O0000000001O01O0000000001O000000001O000O11O0000000001N10001N1000000O2M2G9I7I8C>DQ[85bdGc0E;B>G8N2O1O1O2O00001O00001O00010O00001O0000010O00001O01O000001O00000010O000001O01O00000001O0001O00000O1M3J6K6K4M3N2O101O000000000O101O0000000000001O00000O1000001O0000000O2O00000000000O101O00000O100000001O0O1000001O00000000iG"}, {"size": [848, 480], "counts": "]9h1hh000O10000000000O10000000O1000000O10O100000000000000O1000O100000O10000000O1000O10000O1000O3N00O1O10000O01000O100O100O101N8I2M4L4M3L3M3M3M3M4L3M4LXTd12fk[Nd0lUODnh0Q1N1O0000O100000000000000000O0100000000000000O01000000000000O1000O10000000000O100000OZOQWOMoh01UWOMkh03UWOMkh02VWONjh02WWOLjh03WWOMih03WWOLjh04WWOKih04YWOJhh06WWOKhh05YWOKgh04ZWOLfh02\\WONdh01^WOMch00`WO0`i00O100O1O1O1N2M3O1O0O2N2M3N2N2N2O1N1N3M20100001O000O101O0[OmVOLUi00RWOLRi0GZWO6ai0O1O0O10O1000000O0O2O1O1O10000000000000O0100O1O1O1O1O1M3L4M3O100000O10O11O00000O101O0000000O2O00000O2O00010N1000000O2O001O00000O2O0000001O00001N1000001O00001N100O100O2O1N100O2N10nZX4"}, {"size": [848, 480], "counts": 
"kl<71Nbi0V1ZO9K5H7G9J6L5J6J5J7I6J6J7J5J7K4L4M4L3N2M3M4N1N2N2O2M2N2N2O1O2N100O1O1O100O2O000O1O1000000000001N]JZ[O^5ld00O12N1O1O01O000000000000000000001N100001OO^LU[Oa1kd0SNl[Ob1Ud0YNS\\Oc1mc0[NX\\Ob1hc0\\N[\\Oc1ec0\\N_\\Oa1ac0]Nc\\Oa1]c0]Nh\\Oa1Wc0]Nm\\Oa1Sc0^NQ]O_1Pc0_NS]O_1mb0`NU]O_1lb0_NW]O_1jb0_NY]O_1gb0`N\\]O^1eb0`N]]O`1bb0^Na]Oa1_b0^Nd]O`1]b0_Ne]O`1Zb0_Ng]Oa1Zb0^Ng]Oa1Yb0^Ni]Oa1Yb0\\Ni]Oc1Xb0[Ni]Oc1Zb0\\Nf]Ob1]b0\\Nd]Oa1ab0\\N`]O[1kb0aNW]OZ1Rf0M3N2L4M3M3M2J8I^PX9"}, {"size": [848, 480], "counts": "S;h0hi000000000000O010000000000O0100000O10000000O10000000O100000O2O001OO0O2N2O1N2O1N1O2N2N2O1OO10O2O1O1O1O100O100N2OgV_18Ri`N2M1O00010000000O100O010O100O010O10000O1O0O2O1O1O10O1000O100O01000000O1000O10000000O1000000000O1000O10000000O100000O10000000O100000O100000O1O010000000000O010000000O100000O10O10000000O1000O1000O1000000O10O10000O01000000O0100000000O10O10000000O1000000000O1000O100000000000000O10O1000O100000000O10000000O1000O10000000O10O10O10000000O100000O1000O1000O10000000O10O1000O100000O010000000O01000O10000O0100000O10000000O100000000000000O10000000000O0100000O2O0000000O0100000000000O10000000001O001O1O1N3N0O2O2KgbP3"}, {"size": [848, 480], "counts": "khl7l0Ri0d0YOe0\\Oe0VOi0@`0B>K5N3M200O101O000000O100000O1000O10000O1O1O1N2O2L3L5H8C=^Oc0D>ZOh0YOS\\a3"}, {"size": [848, 480], "counts": "Tdc17Pj0`0D8K4K6L3M3M3M2N2N3M3M3N1O2N1N100O1000000O10O10000O1O1O1N2N2N2O1O2M2O002N1O1O100O1O2O000O101O00001O1O000O101N1O2O001N2N2N1N3N2N3M2N3L4J9FWmo8"}, {"size": [848, 480], "counts": "o2c1mh000O1001OO100O1O1000000000000O1000000O2O011AoVOZOYi0=jVOAXi0=iVODXi08jVODHOdi06eVOLdi0M^VO0lof;"}, {"size": [848, 480], "counts": "YRV62_32Zc05Z\\OV1ab0POT]OW1jb0mNo\\OW1Pc0nN^\\O_OSNh1_e0nNT\\Oa1kc0aNQ\\Oa1nc0bNn[O`1Rd0bNl[O]1Ud0eNh[O\\1Xd0eNe[O\\1\\d0gN`[OX1bd0kNY[OU1id0QOoZOm0Ve0WOcZOTO47]e0h1nZOiM\\e0R2d1I7I7G;K6K4L8G8G8I8GYQa5"}], [{"size": [848, 480], "counts": 
"Zd0V6[d0O2N1O1O2N2N1O2N2N2N1O1O2N2N1O2N2N2N2N2N1O2N2N3M1O2N1O2N2N2N2N2N1O2N2N2N1O1O2N2N2N1O2N2N2N1O2N2N2N2N1O2N2N2N1O2N2N2N1O2N2N1O2N2N2N1O2N2N2N2N1O2N2N1O2N2N1O2N2N2N2N2N1O2N1O2N1O2N2N1O2N1O2N1O2N2N2N1O2N2N1O2N2N1O1O2N1O2N2N1O1O2N3M3MS^]9"}, {"size": [848, 480], "counts": "h5l0di00O11O000001O000001OO10010O0000001O00001O00O1001O0000000000001O01O01O0001O0001O0000010OO00101O2N4M0N6Jhl>I_SA4M3M3M2N4M1N3M2N2N1O100O1O1O0000O101O0O10000010OO1O1L5L4N1O2N2N3N4K3N2M[\\c12bc\\N4L3N2N2N2N2J5G:]Od0L20N2002N1O10O0O1M3L40000N2001O001O1O011N00001O0001O000001OO2O0O1O1000001O0003M0000O11O01O01O000001O000010O01O000000001O001N10000000000O1O1O1N2L4K5M3J6J6K7KTYU1LPgjN5I7M3L3ZWO\\O^g0d0]XO^Ofg0f0QXO_Oog0h0eWO]O[h0]1O00001O010O001O000010O0000010O000001O0000000010O0000010O0001O00000010O000001O001O01O0001O00010O000000001O00010O0000000000000001O00000001O01O0O110O000000001O000O100000001O000O100O1O1L4J7I6J7C>Be`72Z_H=_O`0D;F;K4N3M201N1000001O0000001O00001O01O0001O0001O01O0000001O01O01O00000000010O0000001O0001O00000000010O0O1O1J6I7K6L3N200O10SH"}, {"size": [848, 480], "counts": "a9k1eh000O101O000O0100000000000O10000000O10O10000000000000000O10O10O100000000O10O100000O100000000000000O010000000000000O100O1N2M2N3K5L4K5L4K4K6L:EVo^30oPaL3L3L4L5M4K2M2O1O1O10OO3N10000O100000O100O1000000O1000000N20000@aVONai0OdVOM^i0Mofi00fYVO1N101N001M3O1N2O1O1O1O100O1O100N2N10100O1O1O100O010O1O1O1O100O11OO10O10O1001N10001O00001O000O101O000000001N1000000O2O000O101O00001O00001N10001N1000001N10001O0O101N1000001O0O2O0O3N^]Q3"}, {"size": [848, 480], "counts": "fm]1o0Ui0`0F8J5I8I5K6K4L4J6L5J5K5K6K4L4K6J5K5I7L5K4L4M4L3L4M3N3N1N2N3M2O1O1O1N2O1O2O000001O1O2N000000000000O2O00000O10000O1O100O1000000O1000000O100000001O0000dKi\\O_1Wc0SN`]Od1`b0XNh]Od1Xb0WNP^Of1Pb0WNT^Oh1la0UNX^Oj1ha0TN\\^Ok1da0RN`^Ol1`a0SNb^Ol1^a0QNe^Oo1\\a0oMf^OP2Za0oMg^OQ2Za0mMg^OT2Ya0iMi^OW2Xa0gMi^OX2Ya0eMi^O[2Ya0aMi^O_2dd0000O1O1O100O10O1O10OO2M3M3L4J6G:I6VOTWO]OiYU8"}, {"size": [848, 480], "counts": 
"Z;h0hi0000O1000O100000O10O100000O100000O1000000O11O00O010000000O10O1000000000O010000000000000O0100000O100000O1000O1O1M3M3K6JgdP5M_[oJ3M2N2O1N110O1O1O1O1O1N1O200O100O1000O10O100000000000O1000O10000000O1000000O10000000O1000O2O00O1000O1000O1000O010000000O1000000000000O0100O10000000O10O100000O1000O100000O01000O1000O100000O0100000000O01000000000O10O1000O10O100000000O1000000O100000000O100000000O10000000000O1000000000O10000000000001N101O1O1N3N3LWei1"}, {"size": [848, 480], "counts": "ZlR9:_i0i0XOg0^Oa0QOP1_O`0G9H8G9M3N2O101N100000000000000000O100O100O1O1O1O002M2N2K5L4G:E:A`0_Oa0[Oh0nN[jX2"}, {"size": [848, 480], "counts": "V_e2;Rj06I7J5L3M4K6L3L3N4L4L3N1N2N2O0O2O0O100O100O01000O1000O001O001O1O1O1M3N2N3N1O2N1O2N1O2O00001O01O000001O0O011O10OO2O1N2N2N2O1O2M3M2N3M4L5J6FQRn7"}, {"size": [848, 480], "counts": "g36Sj08M8I4M1N101M3TOTOPXOP1ng0POQXOR1ng0oN`WOM9Z1Uh0oNbWOY1]h0>8G4L2OO01O1O1O002N6J010N1001O01N2Nj0mN^c65e\\I5L2O3J8I7K;D6K2N10O00001N1O101OO11O1OO1O100000000000O101O001O001O2M5K;mNcVO5RX_:"}, {"size": [848, 480], "counts": "Vj]7:Uj05K5K3N2M4nWOTOZf0o0cYOTOZf0m0cYO]OUf0e0gYO]OYf0d0eYOBVf0`0fYOBZf0`0cYO^O`f0d0\\YO\\Off0l0RYOROQg0R1hXOnNZg0R23J7K4I7I9L6J6I7I8H6J7I6If^X4"}], [{"size": [848, 480], "counts": "ja0f8ka0O2N2N1O1O2N2N1O2N1O2N2N2N1O2N2N2N2N1O2N1O2N1O2N2N1O2N2N2N2N1O2N2N2N2N1O2N1O2N2N2N1O2N2N2N2N1O2N2N2N2N1O2N2N2N2N2N2N1O2N1O2N2N2N2N1O2N1O2N2N2N2N2N1O2N2N2N1O2N2N2N2N1O2N2N1O2N2N1O2N2N1O2N2N2N2N1O2N2N2N1O2N2N1O2N1O2N2N1O2N2N2N2N2N1O2N1O2N1O2N2N1O2N2N2N1O2N2N2N2N2N1O2N2N1O2N2N2N1O2N2N1O1O2N2N2N2N1O2N2N1O3M2N1O2N1O2N1O2NeUX8"}, {"size": [848, 480], "counts": 
"d5d0li0000000000O101O0000001O0000010O00001O0000001O0000001O0000001N100000001O000000001O0000000000000000001O000000001O001O000000001O0000001O00000001O01O01O000000000001O0001OO100000000001N3N1O001O3K7H7HlaR30T^mL5O2N1O1O1O1O1O0000000001O000O100000001O00000O10001O0O1000000O101O00000O101O0O10001O0O101N3N1ObVO2Zi03N2N2N2N1O10001N2O1N101N3N1O0O100O100O10O1M3M3L4L7FTZ80neG7J5M3M3L3002N<\\OcUg07VjXOe0[VOVOdh0`1K3O2L3O2O001O00010O0000001O000010O01O000000010O000000010O00001O01O0001O001O0010O0001O000000001O0000010O00010O000000001O0000001O01O0000000000000000010OO02O00001O001O000000000000001O0O100000001O0O100O1N2N3J6H7I7EA=@?@`0E;D<^Ob0A?\\Oe0H7O100O100O100000000O100000O100O100O1O1O1O1N2N2M3L4I7I8G9VOi0K6B>D?dN]WOBidW1"}, {"size": [848, 480], "counts": "eUh3:Sj0:F6J6K5K4M3L4L2O2N2M2O1O1O1N2O1O1O00100O010O01O1O001N2N2N2O1O1O100O1O1O2N100O1000001O000010O0000010O0001O1O1O1N101O1O1O0O2O1O1N2N2O0O2M4M2N3M3M3M4K4M_\\f6"}, {"size": [848, 480], "counts": "R42^j0OM3LMmUO3Rj0;K3M1K4M30002M01010O00O11N2M1O2O1O2N0010010N01000O1000001O00O1010O0O1O1O11O000000HiNVWOX1jh0iNUWOW1kh0iNTWOX1lh0hNSWOY1mh050000O100001O00000000002N2N001O1N2O2nNmVOa0Vi0ZOYWO3oh0MZ_70ZaH;I:F7J:G6J4L1O001O00O1000000000000000001O00000000O100001O1O1O006J4Kd0UOkT^9"}, {"size": [848, 480], "counts": "SnZ83Ti0^1H4kL\\NX]Oh1db0aNU]Oa1ib0fNP]O\\1ob0fNb\\Oh1]c0\\N^\\Oe1bc0bNV\\O_1jc0eNR\\O\\1nc0fNP\\OY1Qd0iNm[OU1Ud0mNh[OR1Zd0QOc[Om0_d0WO][Oe0gd0^OU[O?od0BoZOoNF4ce0l0gZOkNhf0S1YYOiNlf0V1X1O1O1O3N5J6I6K5J6K4L_jY3"}], [{"size": [848, 480], "counts": "Qa0_9Ra0O001O2N1O2N2N2N2N1O2N2N2N2N2N1O2N1O2N2N2N2N1O2N2N1O2N2N2N1O2N2N1O2N2N2N2N2N1O2N2N2N2N2N1O2N2N2N2N1O2N2N1O2N2N2N2N2N2N2N1O2N2N2N2N2N2N1O2N1O2N2N2N2N1O2N2N1O2N3M2N1O2N2N2N2N1O2N2N2N1O2N2N2N2N1O2N2N2N2N2N1O2N2N1O1O2N2N2N1O2N2N2N2N2N2N1O2N2N1O2N2N2N2N2N2N1O2N2N2N2N2N1O2N2N2N2N1O2N2N1O2N2N2N2N1O3M1O2N2N2N1O2N2N2N2N1O1O2N2N2N2N2N2N1O2N2N2N2Ndlo7"}, {"size": [848, 480], "counts": 
"Qe2;Tj05L0O20N1001O000000001O00001O000000001O00001O00001O00001N2O00010O00001O0000001O00000O1000000000001O00001O000000000000001O0000001O00001O00000000000000000000010O0001O01O00000000001O001O00000O101O000000001O001OO11O00O1O1N2M4O1N2O2L5K6K6Icof2KSPYM`0I9L4O0O11O0000000000001O0O10000000001O000O10001OPWOXOXh0h0hWOXOXh0h0hWOXOXh0h0hWOXOXh0g0iWOYOWh0g0iWOYOWh0g0hWOZOXh0f0hWOZOXh0f0hWOZOYh0d0hWO\\OXh0d0hWO\\OXh0d0hWO\\OXh0d0iWO[OWh0e0hWO\\OYh0c0gWO^OXh0b0hWO^OXh0a0iWO@Vh0`0iWOBVh0>jWOBVh0>jWOCUh0E:B?CbW8_OdbH5K2N4L7I3UVO_O`i0T1D2O1O000O101O0O1000000000000000000000000001O001O001O1N5L5lNfVOh0cPU9"}, {"size": [848, 480], "counts": "fkd8a0ci0=[Oe0J5N2nL\\Ni\\OOkMg1Ze0bN_\\OQ2_c0TN\\\\On1cc0VNY\\Ok1fc0WNX\\Oi1ic0]NP\\Oc1Pd0cNk[O[1Wd0fNg[OX1\\d0kNa[OR1bd0QOY[Om0kd0XOoZOd0Ve0@dZOUO1Ede0]1QZOmNWg0Q26K5I7L4L4K6K7J8I7H8F:G9GTRo2"}], [{"size": [848, 480], "counts": "\\a0T9^a0N2N1O2N2N2N2N1O2N2N2N2N2N1O2N2N2N2N2N1O2N2N1O3M2N1O2N2N2N2N2N2N2N1O2N2N2N1O2N3M1O2N1O2N2N2N2N2N2N1O2N2N2N2N2N2N1O2N1O2N2N2N2N2N2N1O3M2N1O2N2N2N2N1O2N2N2N2N1O3M1O2N2N2N2N2N2N1O2N2N2N1O2N1O2N2N2N2N2N2N1O2N2N1O2N1O2N2N1O2N2N2N1O2N2N2N2N2N2N1O2N2N2N2N2N1O2N2N2N2N2N2N2N1O2N2N2N2N2N1O2N1O2N2N2N2N2N1O2N2N2N2N2N1O2N1O2N2N2NdUX8"}, {"size": [848, 480], "counts": "]5`0Pj00000001O00001O1O00001O000000000000001O00001O00001O001O000000001O0000001O000000001O00000O10001O0000000000000010O00000000000001O0000001O0O10001O000000000010O001O01O01O01O00001O000O10001O001O0O10001O001O0000O2N1O1N2O1N2N2O1O1000001O1O002N3M5K4J5Lhfd0LRY[O2SVO0ii05SVONji0>N10000000000001O2N9G5KV^Y14dafN5L3N0O2N1O1N3L3L4B>F:N1100O001O2O0O100O1000001O000001O001O01O0000010O0001O0000000000000001O01O0000100O1O1O1O001O000001O000010OO2O00001O00001O001N1000000O1O1N101M3K5L4K6I6L4M4K5K]dS1=lakNDah0\\1J4K4M4M200000001O00001O0001O0001O00010O0000010O000010O0001O00010O000000001O01O01O0000001O000010O000001O01O01O00001O0001O0001O000001O0001O0000000000000001O0000000000001O0O1000001O000000001O0O1O10000O1O2I6J6L5G8H9G:G]`78V_H=E;F9H9G8M3N2O2O0O101O0000000eH"}, {"size": [848, 
480], "counts": "[9f1jh00O11O0000O100000O10000000O10000000O100000000000O10O1000000000000O1000O1000000000O10000000000000O01000000000000O1000O100000O10000000O10O100000O1000O100000000O10O1000000000O10000000000000O10000000O100000000000O1000O1O1O1O1O1N2O1O001O1O1O100N2O100O1O1O00100O10000O10O10O1O100000000001N100000000000000003L4M2N2N3M2N5I_fa1KdY^N5K4L4L4A?H8O1O10O100000000001O0000O1000000000O100000O10000000000000000O10O100000000000O100000000000O10YOVWOHjh07WWOHjh08VWOHjh07WWOIih07WWOIih06XWOJhh06XWOIih06XWOJhh06XWOJhh05YWOJhh05YWOKgh03[WOMeh02\\WONdh0O_WO1bh0JbWO6\\i0O1000O1N2M3M3N101O1O1N2N2M2O2N2N2N2N2N2O00100001O0@kVOEUi07QWOGPi06TWOGmh04YWOKhh0N_WO1ai00O0000O100000O0O2O1O001O100000000000O100000O100N2O1O001N2L4M3N20000000O1000001O000000001N100000001O00001O000O2O00001O0O101O00001N1000001O00001N1000001O00001N10001N10000O3N001NY_o1"}, {"size": [848, 480], "counts": "WRl12Uj0>E8H7L4J6L4I7J5O21O0O10O010O010ROXNSYOg1gf0`NYYO_1cf0fN]YOZ1_f0iNbYOW1Zf0lNgYOT1Vf0mNlYOS1Qf0oNPZOQ1le0QOVZOP1ge0QOZZOP1be0RO_ZOn0^e0SOdZOl0Ze0VOgZOj0Ue0YOlZOf0Re0[OP[Oe0md0]OT[Ob0id0_OZ[Oa0cd0@_[Oa0]d0Ad[O?Zd0Ah[O>Vd0Cl[O=Qd0DP\\O=nc0DS\\O;kc0EX\\O;fc0D]\\O;bc0E\\[OkN3_1_d0F\\[OoN5Z1^d0G\\[ORO6V1]d0H^[ORO6V1Yd0Ia[OSO6S1Yd0Ib[OVO4Q1Xd0Id[OXO4n0Wd0Ke[OYO3k0Wd0Lg[OYO3j0Ud0Mi[OZO2i0Sd0Ml[O[O1g0Rd0Om[O[O1e0Qd00o[O[O2c0nc03P\\O[O2b0mc01T\\O]O2>ic05X\\O\\O1fc05e\\OZOF`0ec05i\\OYOBa0ec06l\\OWO_Ob0dc07P]OUO^Ob0bc08Q]OXO\\O`0bc08S]OXO[O?bc08U]OXO[O>`c09W]OYOYO=`c09Y]OZOXO;_c0:[]OZOVO<`c09[]O[OUO;_c0:^]O\\OSO9^c0;c]OYOQO9]c0=a]O\\ORO6]c0=b]O]ORO5\\c0=d]O^OPO5[c0=e]O_OPO3[c0=f]OAoN1[c0>e]OCPON[c0?e]OCQOL[c0`0d]OFQOI[c0a0d]OGPOH\\c0`0e]OIoNE]c0a0e]OJnNE]c0a0e]OKmND_c0?e]ONkNC`c0?d]OOlNB`c0?c]O1lN@ac0>c]O3lN_Obc0=b]O5jN_Odc0;b]O7iN_Ofc09a]O9gN_Ohc08`]O;fN^Okc06_]O`NCRd0N^]O`0\\NDXd0K[]Od0TNHbd0CZ]Og0nMJhd0_OY]Ol0gMIQe0ZOW]OU2jb0jMU]OW2kb0iMT]OX2mb0gMR]OZ2nb0fMQ]OZ2Qc0eMm\\O]2Sc0cMl\\O]2Vc0bMh\\O_2Yc0aMe\\O`2]c0`Ma\\O\\1aNoNod0F]\\O[1iNkNld0IY\\O\\1oNgNkd0LT\\O]1TOeNjd0MP\\O]1[OaNhd02l[OV1ce0iN[ZOR1ne0lNQZOh0^f0U
ObYOe0ff0ZOXYO`0Pg0AkXOa0Xg0^OfXO>`g0A^XO:lh01O0NO02402I_[X7"}, {"size": [848, 480], "counts": "Q;j0fi0000000000O01000000000000O1000O1000000000O1000O1000O10000000O1000O1000000000O10O100000000000O10O10000000O0100000000O10O10000000O10O1000000000O10O10000000000O1000O100000000O100000O2O0000O1O100O1O1N2N2N2M3N2N2N2Md[[2M^ddM5M4N2M4M2N00000O10O010O1O0010N1O2N2O1000000O010000000O100000O1000O10O100000000O10000000O10O1000000O1000000000O10O100000000000O010000000000000O1000O10O10000O01000000000000O01000000000O1000O100000O100000O100000O10000000O10O100000O100001N10O1000O1000O10000O10O1000000O10O10000000O1001OO100000O10O1000O10000000000000O100000000O10O100000O10000000O10O100000O1000O1000O10O1000000000O0100000000O0100000000O10O100000O10000O10O1000O0100000000O1000000000001O00O1000O100000000000000O1001O00O10O10000000000000O10O11O1O001O1N2O1N2O2M2MRSa0"}, {"size": [848, 480], "counts": "\\iY::mi0=^O`0C<@?F:E;H8B?@?B>Dmi09H8J4L4L5L4L3M3N0O2N2N1O2N1O100O1O1000000O010000O1O1O1N2O1N2O1N20O02N1O1O1O2O0O2O00001O00001O001N110O1O0O100O2O1N102N1N2N2O1M3N3M3L4M3MWWf6"}, {"size": [848, 480], "counts": "Va<=ii0F\\VOVc0^OS]O=mb0A\\]O8cb0Ef]O6Zb0Gk]O7Ub0Gn]O8Rb0DR^Oha0AY^O?fa0@\\^O`0da0]O_^Oc0aa0[Oa^Oe0_a0YOc^Og0]a0XOd^Oh0\\a0VOf^Oj0Ya0VOh^Oj0Xa0UOi^Ok0Wa0TOk^Ok0Va0SOk^Om0Ua0ROl^On0Ta0QOl^Oo0Ua0POl^OP1Ua0nNl^OR1Ta0mNm^OR1Ta0nNl^OR1Ua0lNl^OS1Va0kNk^OU1Ua0jNl^OV1Ta0jNl^OV1Ta0hNn^OW1Ta0fNn^OZ1Te000O100O100O0N3L4M3M4K8F]jc7"}, {"size": [848, 480], "counts": 
"o:j0fi0O01000000000O10O10000000O10000000O100000000O01000000000000O10O100000O100000O1000O10000000O010000000O10O100000000O10O10000000000000000O01000000000000O10000000O01000000000000O10O10000000O100000O1000000000O01000000O1000O100O1N2N2N1N300N2N200O0O200M3N3NZoi10ePVN3N100N2O1J6L4N1O200O100000000000O10O100000O1000000O01000000000O1000O1000000000O10000000O100000O1000000000O1000O100000O1000O10O1000000000O1000O10000000O10O1000000000O100000O10O100000O100000O1000O100000000000O10O10000O0100000000O0100000O10000000000O100000O100000O100000O1000000000O1000000000O10O10000000000O100000O1000O100000O10O100000O1000O10000000O1000O1000O1000O100000O10O10000000O100O100000O010O10000000000O100000000000000O1000000000000O100000000000O100000O11O000000O10O11O001O000O2O1O2M101N3MRfl0"}, {"size": [848, 480], "counts": "Vko9b0`i0a0A=B=DB=G9E;C=F:N3N100000000O01000000000O0101O0O1N2O1N1N4K4L4G9I8G8F;D\\h0AeWO?\\h0@dWO`0\\h0@dWO`0\\h0AcWO?]h0@dWO`0\\h0@dWO`0\\h0AcWO?]h0AcWO?]h0AcWO?^h0_OcWOa0]h0@bWO`0^h0@bWO`0`h0[ObWOe0Ui00\\O\\O\\WOd0bh0^O^WOb0bh0]O_WOb0bh0@\\WO`0dh0A[WO?gh0BVWO=lh0CSWO=mh0DRWO;oh0FPWO:Qi0FmVO;Si0FlVO9Ui0IiVO6Xi0a0O0100O101OO010000O1000000O100O010K6G9L3M5Ii^T1\\1l_kN5M2L4M3O101O00001O01O000001O0001O0001O000010O00010O0001O0001O01O00001O0000010O0000001O000010O000001O01O01O0010O0001O00000001O0001O000000000001O000000001O000000001O0000000000001O000O101N1000000O1N3K4K5L4J7D;J7FB>F9F:B>@a0E9FD>@j0lNhbf1"}, {"size": [848, 480], "counts": "RR_3a0ki08I5L5K3M4L4M3L4M3L5L4L2N1O101N100O1000O0100000000O10O1O1O1M3H8M3N2N2O1N2O10000000O100001O0000001O001O0O10100N2O0O2O1N2O1N2N2N2N1O2N2J6M4J5N2MZjR7"}, {"size": [848, 480], "counts": "c2a1oh001O00000000000O110OO010000000000000O11O00O1N3O0001O002N1OO01001O0001O1OO001001O00002N1O1O1O0O3M=_OfX8AlgG6K5K6K6JBSRm9"}, {"size": [848, 480], "counts": "k[j79Rj07[Oc0I8L3G9N2kL]N\\]Od1ab0cNf\\OGlM50g1]e0dN_\\Ol1ac0[NX\\Of1hc0aNP\\O_1Qd0dNk[O[1Wd0hNe[OW1\\d0lNa[OR1bd0QOY[Om0kd0YOmZOd0Ye0@`ZOZONAje0]1nYOjN\\g0R23K6K5K4J8I7M5K7I9D8J:F8Hdgh3"}], [{"size": [848, 
480], "counts": "kc0e6mc0N2N1O2N2N2N1O2N2N2N1O2N2N2N2N1O2N2N2N2N2N1O2N1O2N2N2N2N1O2N1O2N2N2N2N2N2N2N1O2N1O2N2N2N2N1O2N2N2N2N1O2N2N1O2N2N2N2N1O2N2N2N2N2N2N2N2N1O2N2N1O2N2N1O2N2N2N2N2N2N2N1O2N2N2N2N1O2N2N2N2N2N2N2N2N2N1O2N2N2N1O2N2N2N2N1O2N2N2N1O2N1O2N2N2N2N2NcnZ9"}, {"size": [848, 480], "counts": "Y5i0gi00O1001O01O00001N10010O00001N10001O01O0001O0000001O0000001OO1001O0000001O01O000001O01O0000000001O00001O0O2O001O0000000000O1N2K5L4L5J6M2M4K^ok22TPTMc0K6J3N3M3M00001O1O0M3N3N2001O0000000000000010O00000000O101O0000001O01O00001O0001O01O00100O1O1O1O1O000000001O0001O1O1O1O001O1O001O1O1O0O101O0O1N3L4J5L6I6K4L^e6LhZI3L3N3L3O1N2N200:Ejog0o0ToWO5L4K5M3N1O101O0001O01O00000010O000001O01O01O0001O0001O01O01O000010O0001O0000001O00000010O0001O00010O0000010O00001O000010O00000000000010O000000000001O00000000001O0000000000001N100000001O00000O1O1M4I6K5J7D;H9Fc`74U_H>F9I8D;F:M4M201O000O110N10000010O01O00000010O0001O00001O01O01O000000010O001O000001O0001O0001O000001OO100N2K5L4M3N3O^H"}, {"size": [848, 480], "counts": "U9f1jh000O10000000O1O1000O10000000O01000000000000O100000O100000O10000000000O0100000000000O10000000O100000000000000000O1000000000O10O10000000O02O1O1O2N4K4M3N2L2O1O1O1N2O001O1O001N2O1O1O3M2N1O2M3N2M3Mf`U23R_jMa0D4K8I6K2N2N1O1O00000000000000O1000000000O100000O1000000000000O01000000000O10000000O010000000000O1[OoVOKQi03TWOJlh06VWOHjh08VWOHjh07WWOIih06XWOJhh06XWOJhh06XWOIih06XWOJgh05[WOKeh04\\WOLdh03]WOLdh01^WO0bh0O_WO1dh0H`WO8\\i000000O1O1M201N2N2O1N2M2N3O1N2M3N2N2O0O2O1O1001N10000AkVOCUi09RWOCoh0:VWODkh05\\WOJeh02_WOMci0O0O1000O100O0O2N3M2O1O010001O000000000000O10O10N2O1O1N2M3L4N2O100001O000O1000001O00000O101O000000001O0O101O000O101O00001O0000001N101O00001N10001N1000001O00001N100O100O2M20[cP3"}, {"size": [848, 480], "counts": "ola1C>_O`0B>B>DF9J6O1O10001OO10000O10O100000O100O1O1O1O1N2M3J6I7F;K4I8A?FHf0UOaUR2"}, {"size": [848, 480], "counts": "[Wl3e0hi0G9^Oc0^Oa0G9RO^LiZOh3Ve0ZLaZOm3^e0c0000000O100000O10000O0100000O1O1O1O1O1N2M3K5J7E:K6F:D@dha:"}, {"size": [848, 480], "counts": 
"i_V7N2N201O000O10O100O01000O1000000O100O1O1N2N2M4I6E;H9K5DE9D=F9K5M3O2N1O110O001O0000001O01O01O0000001O00010O00000010O01O00001O0001O00010O0000000001O01O01O00O2N1L4K5J6M3M3N3N1O11O00O101O000000001O00000000001O000000000O101O0000000000000OWH"}, {"size": [848, 480], "counts": "Q9f1jh0000000000000O01000000000O100000000000O1000O100000000000000000O100000O1000000000000O01001O00O10O100000000000000O1000O1000000000O10000000O100000O1000000000O10O100000000000000000O1000000000O1000O10000000O100000000000O10O2O1O1O6J:Ea0UOTVO0Ved03iT\\Oe0]O]Oc0I7_Ob0B=N2O1000001O00O10O01000000000000N2O1O1O1N2N2K5L5B=G:E;B=I8E^ORVW2NRPhMJmi0`0L2O1O000000O100000000O1O1M5IeVOKai07J4M3L3O2N1O1N2N2N2N2N2N2O0O2M3N101O1O10O2O00010\\OnVOHRi07QWOGoh05WWOIjh00]WONgh0K^WO4`i0O0O100O10O1O1N1O3N100O0100000001O00O1000O10O1O1O1O1N2N2M3M3O1000000000O100000001O000000001O001O0O101O00001O0O10001O00000O2O00000O2O0000001O0O101O000O1010O00O101O0O100O1OllP4"}, {"size": [848, 480], "counts": "nYl2;gi0e0[Oa0J4K5J5K5L5H8G9SOXMjYOn2je0S1J5M3M4M2J7K4M3N3M2N3N1O1O2N1O1O101N100O2O0O2O000OZLY[Of1fd0RNn[Ob1Rd0YNY\\Oa1gc0\\N_\\Oa1`c0]Ng\\O`1Xc0^Nl\\O`1Tc0]NQ]Oa1nb0^NW]O_1ib0`NZ]O^1fb0`N^]O^1ab0bNb]O\\1]b0cNg]O[1Yb0dNi]O[1Wb0dNl]OZ1Tb0eNo]OY1Qb0eNS^OY1ma0fNV^OX1ja0hNX^OV1ga0kN[^OS1ea0mN\\^OR1ea0lN]^OS1ca0lN`^OR1`a0nNa^OQ1`a0mNc^OQ1]a0nNe^OQ1\\a0nNe^OQ1[a0oNe^OQ1\\a0mNf^OR1[a0mNf^OR1[a0lNf^OT1Za0lNg^OS1Za0kNg^OU1Za0jNf^OV1[a0gNg^OZ1Za0cNg^O]1[a0`Ng^O^1Ve00000N1O2L4K5K5M4F;AkY]7"}, {"size": [848, 480], "counts": 
"e:i0gi00O010000000000O0100000000000O100000O1000000000O10O10000000O10O1000000000O010000000000000O10O1000000000O10O1000000000O100000O1000O100000000O1000O100000000000O1000000000O100000000000000O0100000000000O1000000000O10001O1N102N0000000O1O1N2KbRS2G_mlMb0J2O0O1000000O10O2O0000000O1000O1000000000000O01000O10000000O1000000O1000O100O010O1000O1000000000000O10000000000000000O0101O0000O10000000O100000O100000000000000O1000O10O1000000O10O10000000O0100000O10O1000000000O010000000000O010000000O10O1000O10000O10O10000O100000000O10O11O000000O10O100000000000000O100000000000000O1001O00O10O100001O1O1O0O2O1O0O3N1M4MPPe2"}, {"size": [848, 480], "counts": "h[X8b0Zi0g0]Oa0B=A`0D;[Oe0D^Oh0ROSZS3"}, {"size": [848, 480], "counts": "gln3;Qj0;G5J5K6L4L4M3L2N2N1N3N101N10O010000O1000O010O100O1O1N2O1O001O2L3O1O2N100000001O00010O100N2O001O001O0O2O1N101N2O001O1N2O2M3M3L9Fhhh6"}, {"size": [848, 480], "counts": "_2;Uj01O1O002N3M5K5K6J;E5K1O1O000001O001O000000O100001O001O0000N20000001O001O00O17H7J:F8FRiY;"}, {"size": [848, 480], "counts": "on`66Si0X1I7L3jL]N^]Oe1_b0cNT]Od1lb0`Nf\\Oi1Zc0^N[\\Og1ec0[NX\\Of1gc0bNP\\O_1Qd0eNj[O\\1Vd0fNg[OZ1Zd0jN`[OU1cd0oNX[OP1jd0UOoZOg0Xe0_O^ZOQO9Kbe0U2`1M2K6J7J5I9K6J8H8H5K9F9GUdT5"}], [{"size": [848, 480], "counts": "Pg0`3Qg0O2N2N2N1O2N2N2N2N1O2N2N2N1O2N2N2N2N1O1O2N3M1O2N2N2N1O2N2N2N1O2N2N2N1O2N2N2N2N2N2N1O2N1O2N2N2N2N2N2N2N1O1O2N2N2N2N1O1O2N1O2N1Oceg:"}, {"size": [848, 480], "counts": "X5R1^i0O001O1O1O1M300O11O00000000001O1O100O2N2N2O0O2N1O001O000100O010O001O001O0000001O00001O000O1O1N2N2N2N3N1N2N2O2N100O1O101O000001O001O0001G8O4MQiQ31XVnLj0J3B>AcNaWOe1\\h095K1O00001O2N2N1O000O10000O1M2M4L4L4L4J7K5J6Kh^T1>k`kNk0UO5K4L301O00000001O01O000010O00000001O010O0000010O001O00010O00000010O0000000010O00001O00010O00000010O000000010O0001O00001O0001O0001O000001O0000000000001O00000000001O00000000001O000O101O000O100M4J5I7I8E;Fod0DP[O;Qe0FmZO;Se0GkZO8Ve0JgZO7Ye0NaZO3_e0_2000O100O1N2O1N2M3K6F9H9H8_Oa0C=EB4L2N000O1O110O000000000000O10000000000000001O00001O1O011N3M5J:Ga0UOeb_;"}, {"size": 
[848, 480], "counts": "\\U[67Yi0S1H6J5L4M3B>K5N2O0^NhMmZOY2Qe0jMlZOW2Te0mMhZOS2Xe0QNcZOo1^e0YNYZOe1ke0`NnYOJ45Tf0;]YO]Ol0DQf0h1\\1M4K5H9J6M4M3K5H;Ha0^OPX[5"}], [{"size": [848, 480], "counts": "Wg0Y3Yg0N1O2N2N3M1O2N1O2N2N2N2N1O2N2N2N1O2N2N2N2N2N1O1O2N2N2N1O1O2N2N2N2N1O2N2N2N1O2N2N2N1O3M1O3M2N1O3M2N1O1O2N2N2N1O2N1O2NSjk:"}, {"size": [848, 480], "counts": "T5R1^i000001O01O0001O1O100O2N001O010O001O000001O01O001O010O01O1O1O1O00001O01O01O000O1O1N2N2M4L3N2N2O1O100O2O00000001O000010O01O2N00100O11^O[VO6Pe[3:bTeL8K4L3M4K7KO000M3M3N2L4I7I7N3L5IeP51[oJ6L2M4M2N3N1004K:EeZf0?jdYOk0VO6I4O2N11O000001O0010O0000001O01O00010O0001O01O000010O0001O0001O01O0000010O00001O0001O01O001O0001O01O001O00010O000010O000000001O01O00000000000001O0000000000001O000000001O000000001O0O1000001N1N2L4G:J5C>F:Cg`79Q_H;D=H7EF:I8M2N2000000O2OO10000O100000000O100O1O1O1O1N3J5I8F9J6ED=D=ARX]3"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "\\2f1jh00O11O00000000000000000O1000001O0000000000001O001O002N5K;DPbd;"}, {"size": [848, 480], "counts": "ePW6?ei0>@?G8fLcNg\\O1TNa1Ue0_Ne\\O2RNb1Ye0]Nb\\OX2^c0iMa\\OW2^c0VNU\\Ok1kc0\\Nn[Od1Rd0_Nj[Oa1Wd0bNe[O_1[d0cNb[O\\1`d0gN\\[OX1fd0mNS[OR1Pe0UOfZOSO58\\e0j0WZOmNf0J]e0T2b1J5L5C=M7J6H8H8G:H8If\\_5"}], [{"size": [848, 480], "counts": "[g0U3\\g0O1O2N2N2N1O2N2N2N1O2N2N1O2N2N2N2N2N1O2N2N1O2N2N1O2N2N1O2N2N2N2N2N1O2N2N2N1O2N2N2N2N2N1O2N2N2N2N1O2N1O2N1O2N2N2N2Nbdl:"}, {"size": [848, 480], "counts": 
"T5X1Xi00000001O01O01O00001O0001O000001O0001O00010O10O0001O010O1O001O0000001N1N2N2M3L5M2N2O1O1O10001O0001O00001O1O100O1O1O1O1O1O01O2M[`\\3Fo_cL9Gc0^O5K3M0000K5O1K5K5K5K6L4JgP5M`oJ1M3N3L4M3M200004L>_OP`e0d0\\_ZOb0_VOiNhh0d1J4M3N2O00001O01O000001O010O000001O0001O01O000010O000010O0001O0010O0001O0001O01O00001O00001O00010O00001O010O001O010O0000000001O00000001O000001O000000001O000000000O20O00000000O2O000000000O2O000N3I6H8J7C>F:Cb`7:S_H>D:E_O`\\n2KbcQM=F5K4M3M2O100O11O01O01N10001XORWOLQi0H[WO5ai0N2O0000O10OO2O1N2O1O10000001O0O100000000000O1O001O1O1M3M3L4N2000000000O2O0000000O2O000000001O000O101O0000001O000O101O001O001O0O101O0000001N101O000O2O00001O000O2O0O100O2N1O\\e[4"}, {"size": [848, 480], "counts": "i[a21[g0m0`ZOCRe0S3H8L3M3M4M2M4L3M3L4N2N3M2N2O1N2O2N1O100N3O0O1O1O2N1O100N2O2N1O100N2O2N1O1O2N1O100O1O1O2O00000O101N1000000O2O00kKa\\OY1_c0aNj\\O\\1Uc0bNR]OZ1nb0aNY]O]1gb0`N^]O^1bb0_Nd]O_1[b0^Ni]Oa1Wb0]Nm]Oa1Sb0\\NQ^Oc1oa0\\NT^Ob1la0\\NW^Oc1ia0\\NY^Oc1ga0[N\\^Od1da0ZN_^Oe1aa0YNb^Of1_a0WNd^Oi1[a0VNf^Oj1Za0UNg^Ok1Ya0TNh^Ol1Xa0SNi^Om1Xa0QNi^Oo1Wa0PNj^OP2Va0oMj^OQ2Wa0nMj^OR2Va0mMk^OS2Va0kMj^OV2Va0iMk^OW2Ua0iMj^OY2Ua0fMk^OZ2Wa0dMj^O\\2Va0cMj^O^2Wa0aMh^Oa2Wa0_Mi^Oa2Xa0`Me^O`2]a0^Mb^Od2_a0ZMb^Of2aa0WM^^Oj2dd000hNgXOYOYg0:_YOZOcf0C>G;_OSm^3"}, {"size": [848, 480], "counts": "jZU43Xj0:I9F5N3M2N1O1O1O001O00000000001O0000O1000000000000O1N2O1O1N2O1N2O1N200O1O1O100001O1O1O000000001O2N000000000000000000000000O1O1F[VODjTc6"}, {"size": [848, 480], "counts": "\\2d1lh0001O0000000000O10000000000000000000001O001O1O2N5K5K:DmVf;"}, {"size": [848, 480], "counts": "^aT61ni0f0VOg0F8M4M2\\O`NUXOc1jg0b0O1O001^NdMoZO]2Pe0gMlZOX2Ve0jMgZOV2Ze0mMaZOR2be0SNXZO3G`0Uf0ClYOJ<4oe09\\YOC\\h0S15K5E;N2N3N1O2L4L5L7I8FSW`5"}], [{"size": [848, 480], "counts": "[g0U3]g0N1O3M1O2N2N2N1O2N2N2N1O2N2N2N2N2N1O2N2N1O2N2N1O2N2N2N1O2N2N1O2N2N1O2N2N2N2N2N2N2N1O2N2N2N2N1O2N2N2N1O1O2N2N2N2NR_m:"}, {"size": [848, 480], "counts": 
"T5Y1Wi0O1001O01O0001O001O00000010O000001O0001O0010O01O10O01O001N101N1J6K5N2N3M2O1O1000001O01O001O00100O1O1O002N1O1O1O0010O01O1O01L6]Ocmj44bRUKc0Ah0WO7K2O101N101O010O000001O01O000000010O000010O0000001O01O01O0001O0000010O001O0000010O00001O0000001O010O00001O010O1O001O01O00001O0001O01O0000000000000000001O000000001O0000001O00000O10001O0O1000000M4K4G9I8C=GQ[8NndG=A>F9DA4K4[Od0M3MTZ]31mebL0O1O1O2M3M4N0O10000001O00000000O10O0100O1O1N2N2L4N2N200000000000O2O0O100000001O00000O2O0000000O2O00001O00000O2O00001O0O101O0000001O0O101O0O10001O001N101O0O2O0N2O10k_\\4"}, {"size": [848, 480], "counts": "od`2B>DC=F:E=Bcg_3"}, {"size": [848, 480], "counts": "ioR4>li0=F7J5J3O2N1N2O2O0O1O2O0O1000001O000O101O0O1000000000O1O1O1O1O1O1O1N2O1O2N1O100O1000000001O0001O1O1O0010OO101O0O2O00001N2O3L7Cd`e6"}, {"size": [848, 480], "counts": "\\2e1kh0000000000000O100000000000000000001O0000002N1O7I;E9DXQg;"}, {"size": [848, 480], "counts": "[aT61me02a]OW1Vb0kN_]O`1_b0bN^]Oa1`b0cNh\\OLSNd1Te0cNd\\O0QN`1[e0cN`\\OP2`c0SN]\\Om1bc0YNX\\Og1ic0`NP\\O`1Pd0dNk[O\\1Vd0gNf[OY1[d0iNa[OX1ad0jN[[OU1hd0POQ[OTOLa0We0`0fZOnN=JXODoe0i1UZOfNcg0o15E:K6L4N2N2N5Ke0ZO9F9BWla5"}], [{"size": [848, 480], "counts": "Rh0^2Sh0O2N1O2N2N2N2N1O1O2N2N1O2N2N2N1O2N2N2N1O2N2N1O2N2N2N1O2N2N1O2N2N2N2N2N1O2N2N2N1O3M1O2N2NR]W;"}, {"size": [848, 480], "counts": "X5V1Zi000100O001O0000O11O10O00000001O001O000001O1O0010O01O001N1O1@a0L3N3N1O100000001O00000010O001O001O101N001O1O2N001O001O1O0100O00b0]OQTn18bkQN>BBTfd06bY[O9J3O0O101OO1000000001N1000O100000O10O010000000O101O00O01000000O1N2M3M3L4M3N2O1N2N2O1O1O1O1O001O1O10000O1000000O10000000O1000O10000O11O0000O10000O2O1O0N6GoS93PlF6J7J4L5J5K7K3L3O1O01N100^OlVOHUi05PWOHQi02VWOLkh0OZWOOei0O0000000000O001N2O1O1O100O11N101O000O10000000O002N1O1N1O2N2L4N200O11O000O1000001O000000001O0O2O00001O0000001N1000001O00000O2O00001O0O10001O000O2O001O0O101O001N101N10hi_4"}, {"size": [848, 480], "counts": 
"bRn2Clc0@b\\O4^c0Kf\\O2Zc0Lj\\O2Vc0Ln\\O2Qc0MR]O2nb0LU]O3kb0KX]O3ib0LX]O4gb0K[]O5fb0I[]O6fb0I\\]O6eb0H\\]O8db0G]]O8db0G]]O8eb0F\\]O:eb0D\\]O;eb0D\\]OE;D=]Ob0A?DB_`7=Q_H>B?D:F:K6N1O100O2O00001O00001O01O01O00001O01O0001O00010O0000001O01O01O000000010O0000001O01O000001OO101L3I7K5M3M3N2O1O2O00000000001O000000001O00000O2O00000000001O000000000O2O0000000000000O2O000000001N100000001O0O10000000001O00fG"}, {"size": [848, 480], "counts": "W9g1ih0O100000000000O10O10000000000000000O100000O10000000O10000000O1000O100000000O100000O100000000000O1000000000O100000O10000000000000000000O100000O1000000000000000000000O010000000001N2O2M=CXQc0^OcU\\OFXi0R1E4M3N101O00N2N2^OTWO[OINTi0e0ZWOYOih0c0d0N2O1O1O1O101M[[`2Fkd_ME;DE:E=Cd0]NSWO2ie_3"}, {"size": [848, 480], "counts": "QWR48Tj07I9F9J4L3N3M2N2N2O3L2O3L101O2N0000000O11OO10000001N1O100O1N2O100N3M2N2N2N2O1O100O101O0O11O01O0O101O0O2O1N3M3M3M2N3N2L7J6Jlmg6"}, {"size": [848, 480], "counts": "i2d1lh000000000000000O10000000000000000O100000001O1O1O1O1O8Ga0\\On[e;"}, {"size": [848, 480], "counts": "fWU61]47\\a0Ma]OW1Yb0lN^]O]1`b0fNZ]O_1eb0dNf\\OISNe1We0eN`\\On1_c0WN\\\\Oj1dc0ZNX\\Oe1ic0^NS\\Oc1mc0cNl[O]1Vd0fNd[O[1]d0fNa[OY1bd0iNY[OX1hd0lNS[OXOJa0We0L3M8H5L:F5K9F6B_fb5"}], [{"size": [848, 480], "counts": "ch0m1dh0O2N2N1O2N1O2N2N2N2N2N3M2N2N2N2N2N2N2N2N2N2N1O2N2N2N1O2N1O2N1O2N1Ob``;"}, {"size": [848, 480], "counts": "o5n0bi01O1O001O00100O2N2N1O000010O01O0010O0001O01O010O0001O01O1O1O0O1D=M2M4M2O1N2O101N10000O101O0001O001O001O101N1O2N1O2N1O100O1O000001NWXi4EmgVKa0Dg0XO7K2O101O001O000001O01O1O001O0010O00001O01O000010O0001O01O01O01O000001O01O01O0000001O00010O00000010O00010O0001O1O001O0001O0001O01O000000001O01O00O1000001O00000000001O0000000000001O000O10000O2N1L4G:I6F;F:Eb`73Y_H?_O`0E9H9I6N2N201N101N11O01O0000001O01O01O0000010O000010O000001O001O01O000001O01O01O00000001O01O000000O1O2I6J6L4M4M2O100O101O000000001N100000001O0000000000001O0000000O1000001O0000000000001N100000000O2O00000O101O000000001N10000000WG"}, {"size": [848, 480], "counts": 
"f9f1jh00000000000O11N10O1000000000000000000O1000O100000000000O100000000000000O01000000000000000000000O01001N010000000000000O10O100000000000000000O10O11OO100000000000000000O1000000O10000O2O1NUe0CjZO?Ue0AkZO?Ue0@lZO`0Te0@lZO`0Te0@lZO`0Te0_OmZOa0Se0_OmZO`0Te0_OmZOa0Se0_OmZOa0Se0_OmZOa0Se0_OmZO`0Te0_OmZOa0Se0_OmZOa0Se0_OmZOa0Se0^OmZOb0Te0^OlZOa0Te0@lZO?Ve0_OkZO?We0AiZO=Ye0BiZO:Ze0FfZO5`e0IaZOJle06TZOCSf0AVad;"}, {"size": [848, 480], "counts": "S\\U6g0Pi0m0XOaNTXOd1gg0`NVXOb1ig0cNoWOa1Ph0`0O1O1O1O0hN`MaZO`2`e0aM^ZO_2be0eMZZOZ2he0kMRZO;G?[f0\\OfYO2:7Uf0MYYOK_h0n05D=L4M4J6L6J5InZd5"}], [{"size": [848, 480], "counts": "Vi0Z1Wi0O2N1O1O2N2N2N1O2N2N2N1O2N2N2N2N3M2N1O2N2N2NSdi;"}, {"size": [848, 480], "counts": "a6g0hi010001O0010O02N3M2N3M1O1O1O10O01O1O000010O0001O010O000001O001O10O0001O001O0O1O1O1K6J5M3N2N3N100O100O101O0O100001O0010O0100O1O2N2N2N1002JRVOFn\\h44R]XK`0C`0A=D4L3O1O001O01O0001O0000010O00001O01O01O000010O01O0001O01O000001O01O01O0000010O0001O00001O01O01O001O01O000001O001O0001O01O01O0000000010O00000000O110O000000001O00000000001O0000000O10001O0O1O1O1L5DA>A?@`0E;H8N3N1O100O100000000O0100000000O001O1O1O1O2L3K6F9H9DCe0UOlj^3"}, {"size": [848, 480], "counts": "ZjQ67Sj0;H5K5M2L4M2O2N1O2N1O1O2OO10O100O100O100O01O10O01O10O01O001O1O100O1O10001O01O2N10O0O12N10O10O01O001O100O1O2N1O2N1N3N2N1M5K8E]`g4"}, {"size": [848, 480], "counts": "\\4e0ki01N3N3L4^VOWOWi0S1N2M2N2O1O000000000O10O11O000001O00O11O0000000O10001O1N103L:E\\a_;"}, {"size": [848, 480], "counts": "PU[6;Rj09I5K5iMUOTYOOg0n0Tf0VOmXO4l0h0Uf0[OeXO3R1d0Xf0LeYO3]f01^YOGkf0>oXOZOZg0l0^XOSOeg0V1oWOhNVh0c16J7G9M6H6POhVO:bYd5"}], [{"size": [848, 480], "counts": "li0d0mi0O2N2N2N2N2N1O1O2N2NSbS<"}, {"size": [848, 480], "counts": 
"R7R1^i000001O001O001N101O1O01O0001O00001O00010O00001O00000001O010O0000001O001O10O01O00001O00001O00000000001O0O1O1O1N2N2M3L4N3N1O1O1O2N100O1000001O0000YUi1NajVN>I0O21M2O101N1O1N2N2O1M4M2NbPb2`0kn]M=[VO[Olh0X1J5N2O0010O01O001O00000010O01O00010O0001O01O00001O01O01O01O01O000010O0001O01O01O000000010O00001O01O0001O01O0001O000000010O00001O0001O01O00000000O2O0000000000001O0001OO10001O000000001O000O1O2M2L4G:I6EG8L4N3N101O0O110O00001O00001O01O01O00001O01O0001O00010O00001O0001O01O00000010O0001O0001O000000000O1J7I6L4L4N2O101N10000000001O00000000001O0000001O00000O10001O0000000O101O0000000000001O0O100000001O0O100000001O00000OWF"}, {"size": [848, 480], "counts": "f:e1kh0000000O100000O100000O1000000000000000O100000000000O10000000000O100000O1001OO10O1000000000000O1000000000000000O100000O10000000000O10000000000000O1000000000O1000000000000000O0100000000O1000001N5Kj0TOmPc00[U\\OH[i0j0L7G6N2N0000000000O10000000001O000O2O3JSf^2TOjZaM4M2O0O10000000O02O00O01O100O001L5L3M20100000O2O0000001O0O101O00001O0O101O0000001O0000001O0O2O00001O001O0O11O01N1000001O000O2O000000001N10001O0O2O001O1N_YX4"}, {"size": [848, 480], "counts": "dYZ37Wj03N2M3M2N3M2N3M3N1N3M2O2M3N2N2M3N2M2O1O2M2O3L3N2M2O2M3M3L4N2L4M3M2N3M3M2N3M3M3M2N3M3M2M4M2N3M2N3M2N3M2N3M2N3M2O1N3N1N1O11N1N2N3L3N2O1M4N2N1O2M2N3J6ZNe1K6N2N101O100O1O001O1O1O001O1O1O100OfZOgN^a0Y1V]O4jb0KR]O:nb0El\\Ob0Tc0]Oh\\Oh0Xc0WOe\\Om0[c0SOd\\On0\\c0QOd\\OP1\\c0oNe\\OQ1[c0nNf\\OR1Zc0nNf\\OQ1[c0nNf\\O>ZNiNQe0h0e\\O9Ud0Fl[O6Xd0Jh[O3[d0Le[O1_d0Nb[OMcd02][OKgd04Z[OHkd07T[OFPe0:nZOCWe0=iZOAYe0>gZOBZe0>fZO@]e0>cZO^Obe0b0]ZO^Ode0a0\\ZO_Oee0a0\\ZO\\Ofe0d0\\ZOYOfe0e0\\ZOWOhe0h0Q2O1N2O1O011N001O0O110O1O001O1O1O101N101O0319C0O02N001O1O2M2N4J6J^mR5"}, {"size": [848, 480], "counts": "ZSd0Al[O`0Td0_Om[Oa0Sd0]Oo[Ob0Sd0]Om[Oc0Sd0\\On[Od0Sd0[Om[Oe0Sd0ZOn[Of0Sd0YOm[Og0Td0XOl[Og0Ud0XOk[Oi0Vd0VOj[Oj0Vd0UOk[Ok0Vd0TOj[Ok0Xd0TOh[Ol0Xd0TOg[Ol0Zd0SOg[Ol0Zd0SOg[OAa0^OlU]3"}, {"size": [848, 480], "counts": 
"Xg]6>ni05K4M3N2O2N1N3N2N1O1O1O100O2N100000O10000O1O100O10000O100O100O1000001O0002N2O0O10O01O1O010O01O1N101N3M3M3M3L6IP]a4"}, {"size": [848, 480], "counts": "Rd26Yj04L8H6J?A7J1N1000000000000000000000000000000000O2O00000000001O0O2N8C`l];"}, {"size": [848, 480], "counts": "a[Z6X1Qi08XOiNPXO\\1ng0hNoWOZ1ee0]N\\\\OW2bc0lM[\\OT2fc0nMW\\OS2ic0PNS\\OP2oc0QNn[Oo1Td0_N\\[Oa1fd0bNU[O^1ld0fNoZOW1Ue0mNfZOBD7ke0=ZZOVO9Mfe0Q1kYOoNgg0S1RXOmNRh0^1;M5B=L4M7I8G:EiZ_5"}], [{"size": [848, 480], "counts": "Pj0`0Pj001O1O2N1O2N2N2NUWU<"}, {"size": [848, 480], "counts": "l6S1]i00000001O000O10001O00001O00001O0010O000000010O01O000000010O0001O00010O001O001O00001O0000010O0O1000001O0N2N2N2N2M3O1M4N1N2O1O1O2N10000O2O00001N2N_Vd1O]i[N8K3N3M2N4M1O2O2N2N2N5K3M2N2O1N2N001O1O00010O0000000001O0000000000001O000001O010O001O1O100O1O001O01O010O001O00001O1O0000001O00000O11O0000O2N1N2H8K4N3L5J6K5L4GVV47iiK3M2N3L3N200002N7A^`e0b0T_ZOk0VO6I5M21N1O11O001O01O0000010O0001O01O000001O010O000001O0001O0010O000001O001O010O0000001O00010O0000001O01O01O00010O001O0010O000001O00000001O000000000000001O0000001O000000001O00000000001O000O1O2N1K5G:I6A`0Ci`7KY_Hc0E9C@a0_Oa0B=F:N2O1O100000000O0100000000000000O1N2O1N2N3M2K5H8I8F9E@`0@g0SOf[\\3"}, {"size": [848, 480], "counts": "Zm^73\\j04M1N2N3M2N2O2M101N2O1N101O001O001O1O001O001O3N0O002N6gVOhNnh0_1N01O001N1O2O1O1M3N2M3N2L5L4L5IaTj3"}, {"size": [848, 480], "counts": "lc28Uj08J8I7H?A3N001O000000000001O0000000000O010001O0000000000001O00001O5Ic0VOjQ];"}, {"size": [848, 480], "counts": "jaY6L7E;F8K;CeU`5"}], [{"size": [848, 480], "counts": "]i0S1^i0O2N1O2N2N2N1O1O1O2N2N2N2N2N2N2N2N2N2NbSl;"}, {"size": [848, 480], "counts": 
"g6S1]i0001O0000001N10001O010O00001O1O010O000000001O01O0000000010O0001O00001O001O0010O0001O000000001N1000001O0N2N2N2L4N3M2N2N2O1O1O10000O101O00002M2KbVd12Wi[N;L3N1O2M3M3N2O2N2N2N1O2N1O3N3L2N1O1O00001O01O0001O01O01O00000001O01O00000000010O1O001O1O100O1O001O0001O1O00001O1O001O001O00001O0O100001O00O100M3G9L3M4L5K4L5LgoQ1FVPnNc0J3L4L5N10000000001O00010O000010O01O001O01O01O0010O01O1O010O002N1O1O2O0O2N001O3M4L3N1N2N3M2O0O001O00O1O20O1O0001O01O000000000010O0001O0001O000000000000000O10001O10O000000O2O000000001O0000000O1O2N1J7H7J7@`0Ch`7K\\_Hb0C9E;F9EJ3O10O10K@YVOa0fi041000000O100000O1000O1000000O10O10000000O100000000000000000O1000M3N2L5M1O2N00N22O1N2O1N2N2001OOgZX30WegL:J0O2N1N2O1O1N10000000O0100000O0100000O1000000000000O10O100000000000O100000000000000O100000000000O01000001O1O002N1N2O2NS\\Q3"}, {"size": [848, 480], "counts": "VTP8f0[i0a0@>B>F;@?E;\\Od0E;G9M4N1O1O10000000000O100000O10000O1O1O1O2L3L4L5G8I7F;E;B`0@`0Bd0]NPWO71HkT]3"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "Yi14Wj0:J9H;E;D3N0O101O000000000000000000O11OO100000000000001O000000001O4L8Edl];"}, {"size": [848, 480], "counts": "^R[61^j02N2N2O001O1kWOHmN0Rf0?VYODW12ZONZf0j0XZOCROFef0i0VZO6ke0MPZO3Rf00hYO[Oof0l0hXORO^g0P1]XOnNgg0U1SXOjNQh0]1kWO[N\\h0b1;J5H=G:A?EhU`5"}], [{"size": [848, 480], "counts": "Ri0^1Ti0N2N2N2N2N2N1O1O2N2N2N2N2N2N2N1O2N2N2N2N1O2N2N2NRog;"}, {"size": [848, 480], "counts": "e6S1]i0001O0000000O2O001O10O01O00001O0010O00000001O000001O0001O01O0000001O00001O1O01O0001O0000001N10000O101N1O1M3M3N2N3M2N2O1N2O101N1000001O1N11O00O5Ih[c15Rd\\N7J5L4L4M3M4N5J3M8H3MO101O0000000001O10O00000010O00000000100O000000000001O00000001O010O1O00100O001O01O1O4L7I6J2N1O2N1O1O3M2N2N4K7J4Jfj`11YU_N5L3M2O2N1O100O1O001O1O1O10O01O010O001O00010O000010O0001O0010O01O00010O001O1O2O1N2N2N2N3M3M3N2M4L5K3N2M1O1O1O001O01O00000001O01O00000O10001O01O00O10001O000000001O000000001O00000O101N1O1M4M2H9H8B>Cg`7M[_H`0C;G8E_O`0CK6ZOf0D=L3O1O101N1000O1000000000000O100O1O1O1N2N2M3M4I6G9G:C>@`0B?Dl0aNk[\\3"}, {"size": 
[848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "Si1]VOkNPi0c1E5N00L5H8J6N2O2M2N2O1N2M4M2O1O1N3O0O2O0O1000001N1000001O00001O01O0001O0001O001O01O01O0010O00000O101O000001O00O2O0O1000000000001N1001O000000000001O01O001O10O02N01O01O001O1O0010O01O1O00100O001O10O001N2HSo93kPF8B?H7L5L2O3M1N3N3M2N4M3L3M3M1O1O1O0010O000000010O000001O00001O01O000001O0000010O0000010O000000O1O1N3J5J6L4M3N201N10000000001O0000000O2O000000001O0000001O000000001N1000000000001O0O1000000000000O2O00000000001O0O10hF"}, {"size": [848, 480], "counts": "X:d1lh0000000000000O10O10000000000000O10O11OO10000000O10000000O10O100000000O100001OO100000O1000O1000000000000000000000O010000000000O10000000000O01000000000O1000000O100000O100000000000O100000000O2O002N5I=Cgkc0ZOlT\\Oe0gUO_O1OXi0W1I3O1O0O10000000000O1000O10O1001N10000000000O010000000000000O1000000000O1000O10000000000000000000000O10O1000000000000000WOSWOOmh00WWOMih03XWOLhh04XWOKih05WWOKih04XWOLhh04XWOLgh04ZWOLfh03[WOMfh02ZWONgh00YWO1hh0MYWO3gh0LZWO4fh0I]WO7ch0F`WO9bh0CaWO=Yi0O01N2L4O1N2O001N2N2N1O2N2O1M201N3N0O2O10O101O1O00001]OkVOIVi02PWOLRi0KVWO3fi0N001O0O0100000OO2O1O1O01000001N11O000000O01000O3K`Ui5"}, {"size": [848, 480], "counts": "Y`R41_j03L1QN3[YONcf04\\YOLdf05[YOLcf05]YOLaf06^YOJaf07^YOJbf06^YOJaf07_YOJ`f06`YOJ_f07`YOJ`f06]YONaf04[YOOef03PYO7nf0MTXO[O033OMg0lg0e1O2O00001N10gNbXO@^g0>gXO_OXg0a0lXO\\OTg0c0PYOZOof0g0UYOVOif0j0ZYOTOff0k0\\YOTOcf0m0]YOTOaf0m0`YORO_f0o0aYOQO_f0n0bYOSO\\f0n0dYOROZf0P1gYOPOWf0P1jYOPOVf0P1kYOPOSf0Q1nYOnNQf0R1PZOoNne0R1SZOmNme0S1SZOmNle0T1TZOmNke0S1UZOmNje0T1WZOlNhe0S1YZOmNfe0T1[ZOkNee0U1[ZOlNce0U1]ZOkNce0U1]ZOkNbe0V1^ZOjNbe0V1^ZOjNae0W1_ZOiN`e0X1`ZOgNae0Y1^ZOhNae0Y1_ZOgN`e0Z1`ZOeN`e0\\1`ZOdN`e0]1_ZObN`e0`1aZO^N`e0b1_ZO_N`e0b1`ZO]Nae0c1`ZOZNae0g1_ZOXNbe0h1^ZOVNde0j1\\ZOSNfe0n1^11O0O1O1O100O2O0O2O00001O001O010001O1O1N1O101N1O2O1N1O1O1O10O01O010O001O1O001O2N1O2N1O1O1O1O1O1O1O00001O1N10001N2O1O1N2O0O2O1N2N3L4M2M_Rl4"}, {"size": [848, 480], "counts": 
"l;h0hi00O10O10000000000000O01000000000000000O10O100000O100000O10000000O100000O10000000O1000O1000O10O10000000000000000O100000O10O10000000O10000000O10O100000000O1000O1000000000000O1000O10000000O1000000000001O1N3N1O3M]R>LdmA5M2N1O1N2O1000000000O100000O10000000O01000000000O1000000O0100000000000O10O10000000000O100000O1000000000000O10O100000O10000000O01001OO1000000000001OO01000O1000000O1000O100000O100000000000O0100000000O010000000000000000O0100O01000000O100O10O10000000000000O1000000O100N4JeZh5"}, {"size": [848, 480], "counts": "cTP81ji0j0\\O`0A?A>A`0]Ob0B>C=B>M4N1O10O01000000000000000000000O100N2M3N2M3L5I6I7H9D;C?@`0F=@P1`NXa[3"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "Ui14Xj0:I8H7I>B5L1N10000000000001O01O0O0101OO1001O0000000000000O101O001N3N=jN`VON55Rf^;"}, {"size": [848, 480], "counts": "i[Z6570Ri0V1D9M3lL\\Ng\\O1RNc1Ve0_Nd\\O6mM^1_e0^Na\\OU2_c0lM_\\OU2ac0mM[\\OU2ec0mMX\\OS2ic0aNb[O`1_d0cN\\[O]1ed0fNX[OV1md0nNmZOYOI9de0c0]ZOQOH7O2N1O2O000000001O00001O01O01O000010O000001O0001O01O0000001O01O01O00000010O000001O0001O0000000O1L5H7L4L5M2N200O1000000000001O00000000001O000O1000001O00000000000O2O0000000000000O101O000000001N1000001O0O10001OjF"}, {"size": [848, 480], "counts": "W:c1mh000000000O0100000000000O10000000O100000000000000O01000000000000O1000O100000000000O100000000000O01000000000000000000O1000O10O1000000000000O100000000000O010000000000000000O10000000000O1000000O2O003L;B\\fd0OcY[O>F5K3N100000000O10000O1001O00O02O00000O1000000000000000000O1000O1000000000000000000000O10O10O2OO1000O10000000000O1000O10XORWONnh02TWOLmh02UWOMjh04VWOKkh04VWOLjh04VWOLjh03WWOMih02YWOMgh03YWOLhh03YWOMhh01XWO0hh0OYWO1hh0LZWO4fh0J]WO5fh0D]WO<\\i000001M1O2O1O1N2N2M3N2M3O1N1O2N2N2N2O001O10O10001O001N1^OlVOHUi02RWOLRi0GYWO7ai0O0O2OO1000O10N2O1N200O1000000001O0000O100000O01M3M3M3M3O1O1O100O11O000000000O10001O00001O0O10001O00000O101O1O0O2O00001O0O10001O00001O0O10001O000O101N101O01OO2O0O101O0MVZX4"}, {"size": [848, 480], "counts": 
"PRk3`0mi06J5L3M2O1O1O2O0O2O0O2O0O101N1000001N10001O000N3N3N1O001N2O1O1N4M000O101N1O101N2O0O2O1N2N2O2M2N3N00QNbXOn0[g0oNkXOP1Tg0nNQYOP1mf0nNWYOR1gf0mN\\YOS1af0lNbYOV1\\f0hNhYOW1Vf0iNkYOX1Sf0fNQZOZ1me0eNWZOZ1ge0gNZZOY1de0hN\\ZOY1ae0iN_ZOW1_e0kNbZOT1^e0lNbZOU1\\e0kNfZOU1Ye0kNgZOV1Xe0jNiZOU1We0kNiZOV1Ve0iNmZOU1Se0kNnZOU1Qe0kNR[OR1nd0mNU[OR1kd0lNY[OQ1hd0nNZ[OQ1ed0oN][OP1cd0oN^[OP1cd0mN`[OS1nf0001O000010OO100010O0O100O2O1N3N2M2OnMWO^ZOg0ce0\\O\\ZOa0ee0CYZO:ie0IVZO4je00UZOLme06RZOFPf0=oYO@Tf0b0jYOZOXf0j0f14K2O2M5K2N2O1N1O2O1N3N001O0O100O100O100000O1O101M3N1N3L3N3M2N2N3N1N4K7Iodm4"}, {"size": [848, 480], "counts": "i;i0gi000000000000O10O1000000000000O1000O100000000000O10O10000000O10000000O1000O1000O1000O1000000000O10O100000000000O01000000000O10O10000000000000000O0100000000000O01000000000O100000O100000000000O0100001O1O3L3N1O3L8Hcl>1ZY@Mhi0?N1N2N200O10000000O100000O10O100000000000O010000000O10O10000000000000000O100000O100000O10O100000000O10O10000000000O0100000000000O10O1000000000O10O10000000000000O10O1000000O1000O100000O10000000O01000O1O10O1000000000O010000000000O02OO1000000000O101O00O100000O10000000000O10O1001OO10O10000000000000O0100000O10O1000000000O0100000O10000000O10O10000000O10O10000000O0100000000O100000O10O100000O10000O100000000000000O1000000000O0100000000000000O1000000000000O100010O00O10001N2O1O2M\\\\Q3"}, {"size": [848, 480], "counts": "YnP8l0Ti0a0A?A>@a0]Ob0C=C=D=M2O1O1O10000000000000000O10000000N2O2M2N2M3L5I6H9G8E=_Oa0B>Gb0oNS\\\\3"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "Ri1:Tj07J8H>B6J3M2O00000000000001O0000000000O100000000001O0O10001O001O2N6Ij0ROVl];"}, {"size": [848, 480], "counts": "T[Z6=h0E\\h0]1bLcN[]Oe1db0_Nf\\ONTNc1Ve0aNc\\O0SNa1Xe0bNa\\O4oM]1`e0bN]\\OQ2cc0SNX\\On1hc0YNP\\Og1Qd0`Ng[Oa1Yd0bNc[O^1_d0dN][OZ1fd0iNW[Oo0Re0UOhZOZOF7ge0d0\\ZOQOXg0V1]XOkNfg0j16I7E=J8I5K7G9I9EPa^5"}], [{"size": [848, 480], "counts": "oh0a1Pi0O1O2N2N2N1O1O2N1O2N2N2N2N2N1O2N2N2N2N2N2N2N1O2N2N2N2Nb_e;"}, {"size": [848, 480], "counts": 
"a6S1]i00O1000001O000O101O01O0001O000000001O0000010O001O001O0001O001O001O00001O01O01O00001O0000001N1O1N2M3L5L3L4N2O2N1O100O101O0000000000010N4Icef15XZYN7K4N4L6J5K8H8I2M00000001N100001O00001O10O000000010O00000000000000001O0001O010O0000100O1O1O1O0010O00001O001O001O00001O001O00001O000000000O1O1O1L4J6L4K4M5K5K5KdP5O^oJ4L3N2N2N2001O00;^OY`e0c0X_ZO`0]VOQOhh0b1G5M20O1O1001O000001O01O000010O0001O0010O0001O0001O01O00010O0000010O0000001O01O01O00001O01O0001O00010O0000000010O000001O001O0001O01O000000000001O00000000001O000000001O0O10001O000000001O0O100O1N3G8H8C>ER[8LReGee0_O\\ZOc0ce0[O^ZOg0dg010O001O1M3O3KUhg4"}, {"size": [848, 480], "counts": "g;j0fi000O10000000O1000O10000000O100000O10000000O010000000000O10O10000000000O01000000O1000O10000000000O01000000000000O10000000O100000O100000O10000000O1000O1000000000O100000O100000000O11N10O10000000O11O01N2O1O1O1O8GjW=2ShB4M1O1O001O1O100O10O1000000000000O10000000O100000O1000O10O100000O10O100001OO010000000000O0100000000O10O100000000000O10O10000000O1000O10000000000O10O1000000O10O10000000O10O10O100000O100000O1000O100000O1000O1000000000O1000O10000000O100000000O01000000O1000O10000000000O1000000000O0100000000000000O10O100000000O10O10000000000O010000000O10O10O1000000000O0100000O10000O0100000O10000O100000000000O1000000O10000000000O1000000000000O10O10O101O000000O010000000000O1000000000001O001O1N101O2M4LlaP3"}, {"size": [848, 480], "counts": "gYo75mi0d0ZOb0@>D=@?]Oc0YOg0DC>Aa0\\Ol0TO^V]3"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "oh1c0li05Jb0@3L2O1N101O0000000000O2O0000000000O100000O1000001O002N1N3N8YOeVOHdi0LaVO1Q[`;"}, {"size": [848, 480], "counts": "T]Y61Z42gK4ge00^]OU1`b0lNb\\OL[N`1Re0fN_\\OOYN^1We0fN^\\OMWN`1[e0eN]\\ONPNa1be0eNX\\Ol1hc0\\No[Oe1Qd0^Nk[Ob1Vd0aNf[O`1Zd0cNb[O]1_d0eN^[O[1dd0gNW[OX1ld0lNoZOP1We0UOaZOTO95\\e0l0SZOnNeg0k14I8HiWOQN\\h0e1`0I5K6K5K6J7I4L6Ifk\\5"}], [{"size": [848, 480], "counts": "gh0i1hh0O2N2N1O2N2N2N2N1O3M2N1O2N2N1O2N2N2N2N2N1O2N1O3M2N2N1O2N2N1O2NbUb;"}, {"size": [848, 480], "counts": 
"]6T1\\i00O101O0001OO101O00000010O0001O000010O000001O000010O0000001O1O0000001O001O01O01O0000001N1O1O1J7K4M3M3N2O1N3O0O1000001O00000000O10101OMoje17kTZN6K5K4L7ICa0_O4N1O20O01O000000001O001O01O01O0001O01O0010O000000010O000001O01O0001O000010O0001O001O00010O00001O01O00000001O000010O0001O0000000001O0001O000000O2O0001O0001O00000000001O000O101O000000001N1O1O1L5D;H9C=FmZ84ldG>B=B>G8M3N3N101O001O00001O0000010O00001O0000010O00000010O0001O00000010O000001O01O0001O0000010OO10000O1N2I8J5M3M3N2O1O2O000000001O0000000000001O0O100000001O000000000O2O000000000O2O00000000000O101O0000000000001N100000000O2O0000UG"}, {"size": [848, 480], "counts": "o9d1lh000000000O10O10000O01000000000000000O10O10000000000O10000000000000O01000000000000000000O11OO1000000O01001OO10O1000000O10000000O100000O10000000000000000O100000000000000O10000000002M2N2O5J?_Oked04QZ[Oa0B7H2O000O20O00O10000O010O10000000O100000000000000O011OO10000000O10000000O1000O1000000000000O10000000O1000O1000000O100ZORWOJnh05TWOJlh05UWOKkh05UWOJlh05UWOKkh05UWOKjh05WWOKjh03WWOMih03XWOLhh03YWOMgh03YWOMgh02ZWONfh00\\WO0ch0O_WO0ch0L`WO4dh0CaWO=Yi0000O100O1N2N2N2M2O2O1N2M2O2M3M3O1N1O2O1O020O0000O2O001XORWOLoh0NYWOOei01N00000000000OO2N11O1O100O100O10000000O10O01000O1O1O1O1N1L5L4O100000O1000001O000O101O00001O00001O0O10001O00001O0O10001O00001O00001N1000001N10001N10001O001O0O100O2O00001N2O[d[4"}, {"size": [848, 480], "counts": "]]T53\\j0[1dN8I5YXO`NUf0j1\\YOhNWf0m2K2O1N1O2O0O2O0O10001N1O10000001O0000001O1O00001O001O000000000000001O0000O1O10000000000N200O100O01000O100O100O100O1O1O1O1001O000O2O00000O101N10000O2O00001N2O1N`ZOnNb00\\e0l1g1J7E=G9J4L8I5K6I8F8DVka5"}], [{"size": [848, 480], "counts": "ah0o1bh0O2N2N1O2N2N1O1O2N1O2N2N2N2N2N2N1O2N2N2N1O2N1O2N2N2N2N3M2N2N2N3M3Mb``;"}, {"size": [848, 480], "counts": 
"U6W1Yi0001O1O010O00100O0000001O01O000000000010O0001O001O0000001O001O0000001O0000001O0000000O101N1L4H8K5N2O2M201O00000000000O4L_ef19a`XN8dh0o0M1N11O01O000000O100O101N10000O101O01O001O001O01O0000001O000001O0001O00000001O00001O010O1O1O1O001O10O001O1O001O001O0000001O00001N100001O00O100O1M3K5K5L4L4J7J6KfP5J`oJ6K4M2N3M2N21O1O0OoZf03jdYOl0VO5K4M1100001O00001O01O000001O0001O00010O0001O00010O000010O0001O0000010O00000010O000001O00001O01O0000010O01O001O001O01O01O000000010O0000000000000000000001O00001O00000O101O00000000001O000O101N100N2F;G9]Of0AnZ8:hdG=A=G:@?N2O2N1O101O00001O00001O000010O01O00000010O0000010O000001O00001O01O000001O01O0001O00000001O00000O1J6I7L5L3N2N2000001O00000000001N10000000001O000000000000001O0O10000000001O0O1000000O10001O00000O10001O000O100000001O000000000OZG"}, {"size": [848, 480], "counts": "k9e1kh0000000000000O1000O1000O1000000000000000O1000O1000000O1000000000000O10O11O00000O1000O100000000000O1000000000O10O100000000000000O010000000000000O01001OO1000O100000001N1000000O101O3L9Ga0^Obed0e0jY[O8H5L0O2O0000000000000000000000O100000000O1000O10000000O100000O1000O100000000000000O1000O1000000000000000O10O10000000O100YORWOLnh03WWOIih07WWOIih06XWOJhh06XWOJhh05YWOKfh06ZWOIgh07YWOIgh06ZWOJgh04YWOLhh03YWOMgh01[WOOeh0N^WO2dh0I_WO7bh0DbWO;Zi0000O001O1N2N2M3O1M3N2N2N1O2N2N2L4N2O00100000O20O01O0\\OnVOJSi02RWOKPi0NXWO0kh0G]WO6`i0O000O100000O0O2O1O2N001000000001O00O01000O100O1O1N2N2N2M3N1O2O10000001O000000000O101O0000001O000O101O00001O000O2O00001O0O10001O000O2O0000001O0O10001O000O101N101O0O2N0O2ObY]4"}, {"size": [848, 480], "counts": "^_j49Vj0c0\\O4N1N3M3M2N3M3M3M2O2cWOWOnf0]2H3N3L2N3N1N2O0O2O00000O2O00000001O00000001O00000O10000O10000O1O10000O1O1O1000000O1N2000000O10000N2O1001O001O0000O11O0000000000001O00000O2O1O1O2N4L7I6J7I7I5K2M4M3M3L4M2N3L3N2M2O2M2N2N2N2N3M4K4L5K4K9GgVf4"}, {"size": [848, 480], "counts": 
"_;i0gi0000000O1000O1000000000O1000O10000000O100000O100000O10O1000000000O10O100000000000O0100000000000O100000O0100000O1000O10000000000O100000000000O0100000000000O1000O100001N01000000000000O2O1O1O1N3N6IeR>IbmA1O2B3UVOOii0=N2O10O100000O1000O10000000O10O10000000O01000000O01000O1000000000O0100000000000000O1000O100000O1000000000O10O1000000000O01000O1000000O010000000000O01000000000O1000O10000000O100000O1000000O10O10000O1000O1000O10O10000O10O1000000000O010000000000000000O10000000000O100000O100001N01000000000O1000000000O010000000000O01000000O01000000O10O100000000O10O100000O1000O1000O10000O100000O01O1000O100O1000000000000O10000000000O1000O1000000000O10000000000O1000000O10000000000O101O002N1N3N2M3MefT3"}, {"size": [848, 480], "counts": "ZZj77`i0P1UOe0C=]Ob0@`0XOi0@?M3N2O100O2O00000000000000O100000000O100N2N2O1M4J5J7C=E;@a0@b0\\Om0lN_Pc3"}, {"size": [848, 480], "counts": "Zgm49Tj0a0_O7K2M3N2M4N0O1O1O1O1O2O1N2O1O1O1O1N2O001OO1000000O110O0O1000001N2N101O0O2N101N2O1O1N2O1O3I6L9I6I4L5IXfT6"}, {"size": [848, 480], "counts": "^3d1mh0O0000000000000000000000000O100000001O00001N2O3L7I:TO^VO2Tjg;"}, {"size": [848, 480], "counts": "i]T63c4NTa06`]OU1^b0nN[]OX1db0kNU]OY1jb0mNo\\OU1Qc0mNZ\\Of1fc0^NS\\Oe1mc0aNk[Oa1Ud0aNh[O_1Yd0cNd[O^1\\d0dNa[O\\1ad0hNY[OY1gd0kNT[OS1Pe0QOjZOYOF?de0=_ZOnN=4^e0R1mYOiNjg0i16ClWOUN]h0e10^SA4lUOMei0a0M2O1O1000O1000O011O000000O10O010000000000O100000O10O1000000000O010000000O1000O100000000O10O10000000O10O10000000O100000O1000O10000000O10000O010000000O1000O0100000O10O1000000000O10O1001O0O1000O1000O10000000O1000O10000000000000O01000000000000O100000O100000O10000000000O10O10000000O10O100000O100000O100000O10O1000000000O010000000O10O10O100000O0100000O100000O0100000000O100000O10O10000000000000O1000O01000000O10001O0O10O1000O1000000000000O100000000000000O10O1000000001O001N3N1O2M3N2MhfT3"}, {"size": [848, 480], "counts": "nYj7e0Si0j0@?B>\\Oc0[ORMWYO_3Sf0j0DBojg;"}, {"size": [848, 480], "counts": 
"f]T65Y3d0jLDee0I\\]OX1cb0jNW]O\\1hb0fNT]O]1kb0fNm\\O_1Rc0fN[\\Oi1ec0]NQ\\Og1oc0`Ni[O`1Xd0bNe[O_1[d0cNa[O_1_d0cN^[O]1cd0gNW[OX1md0mNlZOR1Ve0ROdZOUO3;`e0e0TZOnNl0B`e0X2_1J7A`0L5I7K7H5L6J6I7C]`c5"}], [{"size": [848, 480], "counts": "\\h0T2]h0O2N1O2N1O2N2N1O2N1O2N2N1O2N2N1O3M1O2N2N1O2N2N1O2N2N2N1O2N2N2N2N2N2N1O2N1O2N2Nba[;"}, {"size": [848, 480], "counts": "T6Q1`i0O2N2N2O1N1OO2M3J5J600O2O0001O01O000001O00010O00001O00001O00001O0000001O0000001N1O1E;J6N2N2O2O000000000001O01O01O001O100O1005JYdk14eaSN?jh0FRWO>lh0c00000001O0O1000001O01O000010O01O000001N1001O00001O01O00000001O001O010O1O1O1O011N0001O001O00010O001O0000001O0O2O0000O100000O1O2J4L5M3L4J6L5K6FVk54lTJ2M3M4M2O1N23M4LkZf04ldYOm0UO5K3N20001O00001O00001O0010O00001O0001O01O01O0001O01O00001O01O01O0000010O00001O01O01O00001O0001O01O0001O01O0000001O01O0001O0000000001O01O00000000000001O00001O00000000000O20O00O10001O0000000O2O0N2J6D_Oa0_Oa0]Oc0^Ob0I7N2O1O2O000O1000000000000000O10000O1O1O1O1N3M2M3L5G9A>C?@`0Bb0TOaVb3"}, {"size": [848, 480], "counts": "lV_57hi0c0K5L2O2M2N3N1O2N2N3M3N2M2N2O000O101O00000O10000000O2O0O10001N10001N2O001O1N2O1M3L4M4M2O2M4L8H:Ehfe5"}, {"size": [848, 480], "counts": "W3g1ih000000000000000O1O1O100000000O1000000000001N3N1O4K8H`0UOYVOLeof;"}, {"size": [848, 480], "counts": "PXU64\\3T1\\b0PO[]OW1bb0jN[]OY1db0jNU]O\\1ib0gNR]O\\1nb0hN\\\\Oj1dc0ZNV\\Oh1jc0\\NQ\\Od1Pd0`Nj[Ob1Vd0bNe[O^1\\d0fN_[OZ1cd0hNX[OY1id0lNP[OU1Re0POgZOYOJCki0a0AD=@a0^Oh0UNXWOi0UZa3"}, {"size": [848, 480], "counts": "hca5>1Mbi0c0L5M1O1N3N1O2N2N2N3M4M1N2O0O2O1O0O2O001O00000000001O000000O1O1O2O001N2N2N2O0O2N2N3L3J6N2N3N2M5K9GP_b5"}, {"size": [848, 480], "counts": "R3`1Pi0O10000O2O000M2O2O2O00000O10O101O103L3M2OO0O1O3N1N4L8[O`VOJmi0F[VO3]Uf;"}, {"size": [848, 480], "counts": "llV67W3U1^b0nN\\]OW1bb0kNc\\OKYN_1Se0iN_\\OMVN_1[e0gN[\\Om1dc0XNW\\Oi1ic0\\NQ\\Oe1oc0_Nl[Oa1Ud0bNg[O_1Yd0cNd[O^1\\d0eN_[O\\1cd0fNZ[OY1gd0lNS[OS1Pe0SOgZOTO2;^e0e0XZOnNh0F_e0U2b1I7E=I5J7K5L9F6I:G6ESQa5"}], [{"size": [848, 480], "counts": 
"gg0i2hg0O002N1O1O1O2N1O2N1O2N1O2N1O2N2N2N1O2N2N1O2N1O2N2N2N1O2N2N2N1O3M1O2N2N1O2N1O2N2N1O2N2N2N2N1O2N3M2N1O1O1O2N2NRTo:"}, {"size": [848, 480], "counts": "i5V1Zi000001O1O001O0001O010O3M5K2O0O01O0O100O10000011N2N2N1O1O00000OO2O01AdVOJWi0MbVO7ni0KQVO3Pj04300O10001O000001O001O001O1O011N1O1O1O1O1O00100O1OO100N2MTRX2CiTgMOXO9Qi0KbWO>\\h0ORWO8kh0h000O000000001O01O0000001O000000000010O001O1O1O1O010O0010O001O00001O001O00001O001N1000O1000000000N2L4L4K5K5J7J5M5JVdS1?Z[lN>D7H6M2N3O2M2O1N1O1O000010N11O0010O00010O01O00010O0000001O010O0000001O0001O0001O00000010O0001O0000010O00001O010O000000001O0001O000001O00000001O000000000000000O20O0001O0000001N1000000000001N100N2O2H7H9I6@dU91^jF=G:CO2O00000000000001O000O2O001O0000001N10001N101O0000001N10001O00000O2O0000001N10001N100O1OoY]4"}, {"size": [848, 480], "counts": "PXP5h0ai0>M4^MiN[[O^1\\d0gN`[O]1\\d0gN_[O]1^d0gN][O]1`d0jNU[O\\1id0lNhZO^1Ve0P2M2N2O1O1O1N2O000O10000O100O10000O1000000000000001O000000000000000000000000000000O1000000O100O100O10O10O101N10O10001O00O100000O10000000000000000O101O1O001O2N1N2O2N2N1O3L4M3M3M3M4K4M5K9G6J6JlZOGSe0k200O2O1N100O2O1O00000000000O10O1000000000000O10000000O10000000000000000O10000O1000O010000O10000O10000000000001O0O2O00000000001O000O101O00001O00001N2O001O002N1O1O1O1N2O1O2N2N2N2M3N1O3M4L6J5K9G=C=Cb0^O8H3L5L5K2N3M3M4L2M3N2N2N4L5HgWX4"}, {"size": [848, 480], "counts": "S;i0gi0000000O100000O1000O10000000O100000O100000O10000000O100000O1000O1000O10O1000O100000O100000O10000000O10000O10O100000000O0100000000O10000000O1000O1000000000000O010000000000O0100000000000000O01001O001O1N2O2N2N5JTX=JfgB=N2O1O1N20000O0100000000O0100000000O0100000000O10O100000O0100000000O02O000000O10O1000O1000O1K6DeWg34`hXL4N3M2N2N2M2O1ON2O100000000O1000000000000O10000000000O10O100000O2OO1000O100001OO1000001O1N101O2N3L3Mdmn2"}, {"size": [848, 480], "counts": 
"aXo7f0Xi0d0[Od0B>A>@`0XOh0]Oc0N3M200O10000O101O00O100000000O1000000O1O1O1M3N2L5H7ERjVN;Hc0\\O8I4O1O1O1O00001O01O01O001O1O010O00100O001O00100O000000010O000001O0O11O00001O00O2M2001O1O001O010O001O010O000000010O00000000010O0000000000000001O0000000000001O0000000000001N100000001O000O10001N1N2I7H9A`0CT[8NPeG:C=E:EEUmA?N2O100N2O010O1000O1000O1000000O0100000000O10000000O10O10000000O01000O10O1N2M3K5N2M3N2Noh_31QW`L3M4M2N3L3M2O000O1000000000O10O1000O10000000000000000O10000000000O100000000O1000000000000O1000O10O1001O00O101O001O1O002M3N2N3LSSn2"}, {"size": [848, 480], "counts": "SSP8a0[i0i0WOe0\\Oc0DD6J5L2M3O1N2N3M2N101N2N2N2N100O2N2OO010001O0O100000000000000010O001O000O2O1O01OO2O00001N101O0O2O0O2N2N1O2N1M4M5L3J5I9I6EbgP6"}, {"size": [848, 480], "counts": "o2b1oh0O000000000000O11O0000000000000O1001O0001OO1001O0O10001O1N5K6I=[OVVOL[j0JmVa;"}, {"size": [848, 480], "counts": "Y\\Y63[j06K3N2VYOLTc07d\\O2Wc04T\\O_OgMc0Uf00\\[OQ1cd0TOW[Oo0gd0TOU[Om0kd0UOR[Ol0nd0VOoZOl0Qe0VOjZOl0Ve0ZOcZOg0]e0_O[ZOc0ee0BTZOQOcWE9E=ZOnN]WOV1ah0=00O02K4N2O11O0000O20O00001O001O010O1O1O010O000010O0O1010O01O000O2O00001O0000000000000O1N2N2J6J5N3K6J6K5Lbe6MaZI3N2L4N200N26]OZ[f0a0]dYOk0WO7H6L110O101O1O00010O0000010O00000010O000001O01O00000010O001O01O000001O0000001O01O000001O001O01O0001O01O01O00000010O01O00000010O0000000000000001O01O00000O1000001O00000000001O00000000001N10000O1O1K6F9J7A`0DS[81jdG=A?G9E:I7N2O2O0O1O101O00001O0000010O00001O0001O01O00000010O000000010O000001O00010O0000000000000001O00000L4I7L5L3M3N2O100O101O0000000000001O00000O1000001O0000001O000O1000001O000000000000000O101O00000O10001O00000000jG"}, {"size": [848, 480], "counts": 
"^9e1kh00O10O100000000000000O100000000000000000O1000O10000000000O10000000O01000000000000O1000000000O10O10000000000000000O10O10000000O10000000000000000O1000000000000O01001O0000000O1000000O1000001O0O102M5K:Eakc0YOTU\\Oh0[O?B4L2M2O00000000000000O100000001O000O100000000000000O010000O100O001000000O10O100000O100000O100000000000O010000000000O010000000AQWO\\OPi0>YWO^Ohh0>g0N2M3N1N3L4M3O1N2Oc\\^1`0jbaN6K3O1O1000O100000001N1000001N1000001O00001N1000000O2O00001N10001O00001O000O10001O0O1000001N101O0O10001O0O101O1N101N10neV4"}, {"size": [848, 480], "counts": "e\\f48Vj09E8K5J6bLPOS]OU1fb0SOR]OW1TM_N_e0>X]O`1_b0cN]]Ob1`b0aN[]Ob1cb0bNX]O`1gb0gNP]O]1ob0gNh\\O_1Vc0fNb\\O_1]c0dN]\\Oa1`c0dNX\\Oa1fc0nNg[OX1Xd0a2N2N1O2N2O0O1O1O2N1O1O1O1O11O1O0000000000100O000O3N0010O00000001O0O11O000000O100000O1O100O1000000N2M300O11OO1N101L400O11O00000O2O00000O101O000O2O001N10001N2O1O1N2O2M100O2O3L2O3L5K4M8G6Jh1XN:F8H:F7J4K4L6J6J4L6I7J6H_Wa4"}, {"size": [848, 480], "counts": "R;i0gi000000000000O010000000000O10O1000000000000O01000000000O10O10000000O01000000000O1000O10000000O100000O100000000O1000O1000O100000O10O1000000000000O100000O1000000000O02O00O100000O100000000000O100000001O002M2O2N3L:DUR>1iSA0fi0>N200N2N200O0100000000000O100000O10000O10O10O100000O1000O100000O1000O1000000000000O100000O10O10000000O100000O1000O10000000O100000000O6K8G3MkTd1?ej[N1100O10O100O1000O10O100O10O1000O100000O10O1000000000O01000000000O010000000000O0100000O1000O1000O010000000O1000O10O100000O1000O10000000O1000000000000O10000O10000000000O01001O00000O10O10O1000010O0O01001O001O1N2O1O2N2MVSn2"}, {"size": [848, 480], "counts": "YmP8o0oh0c0\\Od0\\Oc0_Ob0VOh0E=H7O1O1O1O10000000000O1000O100000000O100O1O1N2N2K6I6J7G8Fdc0DU\\Ob0ic0Dd[Ok0[d0ZO[[Ok0fd0VOV[Om0id0UOT[OP1hd0ROT[OQ1kd0TOoZOn0Pe0WOiZOi0Ze0[O`ZOZOG5ke0e0VZOoNb0Hde0U2^1K4J7C?K5L6J8G5L7I6I6Jdl\\5"}], [{"size": [848, 480], "counts": "bg0n2cg0O2N1O1O2N1O2N2N2N1O2N1O2N2N2N1O2N2N2N1O2N2N2N1O2N2N1O2N2N2N1O3M2N1O2N1O2N2N2N2N1O1O2N2N2N1O1O3M1O2N1O2N2N1O2NbYn:"}, {"size": [848, 480], "counts": 
"h5V1Zi000001O0O1001O0002N1O00001O000001O000001O00O1000001O01O01N2G8I8M2O101N10001O0001O00010O001O1O00100O1O00001O01O1O001O101N2O1O0O10O10O1O4K2N2Jll`14PS_NF8F:_OmNYWO[1_h0>N1000O1M3O10001O000O1000000010O001O00100O1O000010O00001O001O001O00001O00001O1O00O1001O0O1O1N3K9F;E>DoYh1b0_eWN4M3N2O1O1O00010O1O00010O00001O2N1O101N1O4L3M3M4L3M2N100O1O1O010O001O1O0010O01O001OO2O00000000000000001O000000000000001O000000000O2O01O00O101O0O100O100O2L3G:J5D>DY[8FkdGa0@`0G8F;H7M3O1O2O0O101O00001O0000010O00001O0001O01O0000000010O000001O01O01O00000010O00000001O0001O00000O1O1J7I6L4M3N2O1O10001O000000000O2O00000000001O0O10001O000000000O101O0000000000000O10001O0000000O2O000000000O10iG"}, {"size": [848, 480], "counts": "^9e1kh0000000000000O10O100000000000000000O100000000000O10O10000000000O1000000O11N10O1000000O1000000000000O011O0000O10000000O1000O1000O100000000000O10O1000001O00O10O1000000O10000O2O0N2O1O1O1O2N100N3M4M4K5LVfd0]OYZ[OR1SO4L5K1O1OO100001O000000000000O101O000000000O100000O1000O10O1O1O100O1000O10000000O1000000000O1O1O1O1N2N1O2O1N2N101O1N2O1N2O100O1O1O10O0100O00100O1N2O1M4DPTU7"}, {"size": [848, 480], "counts": "X^j4d0ki0?A9G7H:G8H:F4L4J6L2jLYM`^Oi2^a0[M^^Og2aa0\\MZ^Oh2ca0^Mm[O\\OT2Y3na0gMj]OZ2Vb0QNU]OY2kb0TNS\\Oa2lc0R20000O10000000000O10O10O10000000000O1000000001O00000000O1001O000000001OO20O0000001O1OO2O01O1O0000000001O00000000000O1O100O100O1O10O10O1O1O010O1O0010000O0100O2O000O1000000O2O00001N101O1N2O0O2O1O1O1N3N2N6I5L;Ej1VN9G6I:G;E8G>C=B;F9F8G_kT4"}, {"size": [848, 480], "counts": "S;i0gi00O100000O10000000O10O10000000000O01000000000000O10O1000000000O0100000O01000000000O1000O1000000000O1000O2OO100000O10O10000000O10O100000O10000000000O10O1000000000O10O10000000O01O100O11N:G1N7J002M]jh02`UWOXO]]U5"}, {"size": [848, 480], "counts": "Q3a1oh0001O000000000O02O00000000000000000000000000O11O000O10001O1O6I8He0XOPWa;"}, {"size": [848, 480], "counts": "iVZ65Zj03M3M4M6I5L5K4L1O1O2O3L2OO001O1O001O101N4L9G4K7J7GWg]5"}], [{"size": [848, 480], "counts": 
"ag0o2ag002N1O1O1O2N2N1O2N2N1O2N1O2N2N2N2N1O2N2N1O2N2N2N1O2N2N2N1O2N2N2N2N2N1O2N2N1O2N1O3M1O2N1O2N1O2N2N2N2N1O2N1O2N1O3MR_m:"}, {"size": [848, 480], "counts": "i5U1[i00O2O0000001O00001O1O010O000000000001O0001OO10000O101L3I7N3N2N1O2O00001O000001O01O01O001O001O00100O00001O01O000000010O10O10O2O2N1N100O4L001MRnS2NnQlM9I3M2K5_OZOVWOh0ih0]OoVOe0Si0;100O1N2N2O1O2M2N2O1N2O1N3M2N2O1N3M2N2O1N2O1N3LiSj4O[lUK1O?@2O1O001_NDlXOBUfd0HTZ[O=C4L10O1O10O01O1O010N101O1O001N2O001N101O001N1000001O00000O0100O10000000O0100O100O10000000000O1O100O01000O2O0O10O101O00001O000O2O0O10001O000O2O0O2MfgV7"}, {"size": [848, 480], "counts": "mhV59Qj0e0\\Ob0_OXi0k0oNQ1_O`0G9N3M2O2O0O2O1O001O001O1O1O1O1N2O1O00001O000001O00001O0N3H8]Od0@b0B`0WOWWOQOV[\\3"}, {"size": [848, 480], "counts": "RSl5b0ji07I6L5J5K3N3M4M4L3M2N2N4K4N2M2N2N1O2O0O101O0O100000001N1001OO10000000000000000000O1001O01O000000001O00001O001O001O0O2O1O1O1O1N102M3I7L4I8G:H;DUie4"}, {"size": [848, 480], "counts": "Q3\\1Ti02N2N1O1O000O100001O00000000000000000O100000O11O0000001O001O5J7J:E>\\Ob\\`;"}, {"size": [848, 480], "counts": "P`]<"}], [{"size": [848, 480], "counts": "cg0m2dg0O2N1O1O2N2N2N1O2N1O2N1O2N2N2N2N2N1O2N1O2N2N1O2N2N2N1O2N1O2N2N1O2N2N2N1O2N2N2N2N2N1O2N2N2N2N2N2N1O2N2N1O2Ncno:"}, {"size": [848, 480], "counts": "i5U1[i00O2O001O1O01O01O1O00000010O0O1001O000001O000O100N2L4L5M2O2N1O2O001O00001O01O001O0010O01O1O001O1O001O01O00010O000000010O01O001O001O01O1O1OZdP2_OS\\oM9SOC[WOb0ah0CTWOf0jh0d0L2N4L2N2NO2M2O1001O00O10001O000O101N100000001O01O1O001O0001N100O1O1O2M3N1N2O2L4M2O1O2N1N2O2N1N2N2O1N2O1N2O1N2N2N2O3N0O2MlS`41Ul_K2N3M3M2N2N2N2N1O3N1N2N2N4M6I6J3M2N2N3N2M5dWOUNig0Z2O1O00010O00001O000001O0000000O1I8I6M3L4N2O2O0O1000000O101O0000000000001O000O10001O000000001O00000000000000001O000O100000001O0O1000000O101O00fG"}, {"size": [848, 480], "counts": 
"_9e1kh00000000000000000O1000000000000000000O10O100000000000O1000000000000O10O100000000000O10O1000000000O10000000O1000O100000000000000O010000000000000O10000000000000000O101OO1000O2OO11O0000000O101O002M3M7Ik`e0ZOieYO2ji0>K0O00O1O10O01O2N00001O001N101O000O101O002N1N200N3N001O0O2OTU[8"}, {"size": [848, 480], "counts": "e[g5a0ci0P1SOc0D>C3M2N2O0O1eLWMn^Oj2o`0YMQ_Og2n`0\\Mm^Og2Ra0]Mj^Od2Ua0`Mg^Oa2Ya0iM[^OY2da0lMV^OV2ja0SNi]OQ2Vb0WN`]Ol1_b0^NU]Of1jb0mN\\\\OZ1cc0j200O1O100O1000001N100O100O2N100O100O2O01OO101O01OO101O000000001O00000000010O0001O000000O2O01O0000000000000000000000O100000000O100O010O100O010000000000O1000000O10000O1000000O1000001O0O10001O0O10001O001N101O001O1N2O1O0O3N1O1O2hLk[OmMd0=dc0Z1n]O]NWb0S1b^O_Nea0X1P4Db0]Oh0TO`TS3"}, {"size": [848, 480], "counts": "T;i0gi00O0100000000000000O01000000000000000O10O1000000000O10O100000000O01000000000O10O1000O10000000O100000000000O0100000000000O010000000O010000000000O100000000000000O10O1000O1000O1000000000000000O10001O001O1O1N3N4K[g?Id^_ONRj0;M4J3O1O1O10O0100000000000O1000O1000O1000O1000O100000000O1000000O1000001O0O2O1O1O1O1O1N2O00001O100O2M4M0OQhi7"}, {"size": [848, 480], "counts": "Z\\S85Vj08K5[M1aZO2Ue0;bZOK[e0`0aYOoNDJNn0lf0`2O2O0000000O100000001O00000000001N11O000000010O0O2F;]Oc0Aa0Bg0VOc\\\\3"}, {"size": [848, 480], "counts": "]e\\6o0\\i0:I3M5K4L5L4L3M2N2N2N2M201N1O2N1O2N100O2O0O100O2O00000O100000000O100000O10O1001O0000001O000O11O01N10001O2N1O1O0O2O1O002N2N001O1O2M3L3N3CdWO_Nbh0U1XWOmNXi0i0f0WOiVU4"}, {"size": [848, 480], "counts": "Q3`1Pi01O1O001O000000000000000000O1000000000000000O11O0000001N102N6I=D?ZORWa;"}, {"size": [848, 480], "counts": "P`]<"}], [{"size": [848, 480], "counts": "kg0e2lg0O2N1O2N2N1O2N2N1O2N2N2N2N1O2N2N1O2N2N2N1O2N2N2N2N2N1O2N1O1O2N1O2N1O2N2N2N1O3M1O2N2N2N1O2N2N1OfmT;"}, {"size": [848, 480], "counts": 
"i5U1[i00O20O01O1O00001O0010O000001OO2O01O000001O0O1M3L4M3O101N101N2O00001O00001O01O001O01O01O1O001O001O00000010O000001O01O01O1O001O00010O02M2J\\VO]OeTi1b0bdWNAPWOf0mh0\\OPWOh0mh0[OnVOi0Qi0=N1O1O1O1O1O01O01O001O000010O000000001O0000000001O01O0O1000010O0001O001O1O10O0001O01O000010O01O00001O001O000O2O0000000O1000000M3M3N2J6J6K6J5M4L7HjiR17iUmN>G5L3O0103L3N1N2O1O100O1O2N1O1O101N101N1O101N1000000O2N10001O0O2O000000QX=2mgB10O01O1O1O00002N1O1O1O100O101N2N2N4iVOJlg0`1M2N000000000001O001O000000001O000O1000001O000O1O1O2M2L4EJ6K6M2O101O00001O00001O0000010O0000001O01O01O00000010O000001O000010O000001O0001O0001O00000001O000N2K5K5K6K4N2O1O101N1000000000001O000O1000001O00000000001O00000000000O10001O000000000O10001O0000000O101O00000O100000fG"}, {"size": [848, 480], "counts": "`9d1lh000000000000O1000O10000000000000000O100000O10000000O10000000000O100000O10000000O1000O100000O10000000O01000000000000O10000000O100000O10000000000000001N2OO11O0000000O1000O11O0O10001O00O11N2O1O2M6Ickc0^OnT\\Og0\\O7I3L3N001O00000000000001O00000O100000000O10O1000000O01000O10O100000000000O10O10000000000000000O10000000000O0100000000000XOTWOLlh04TWOKmh04TWOLlh04UWOKkh05VWOJjh06VWOJjh05WWOKih04XWOLhh03YWOLhh04XWOLhh03YWOMhh00ZWO0gh0NZWO2gh0K[WO4gh0H\\WO8^i00000OO2O1M3M3N2O001O1O1O1N2N2N1O2N2O1N2N2O11N2OO10]OlVOJSi04UWOHjh07XWOIhh02]WOLhh0L\\WO4ai0N10000O10O1000O10O001O001O01000O10000000O10001OO03LhPj5"}, {"size": [848, 480], "counts": "gcP6n0\\i08M2M2N3L3L5J6G9L3_LlMZ[OOV3X2^a0TNZ^Oo1ea0TNV^On1ia0WNQ^Ol1na0[N^[OUOY2a2Xb0gN\\]O]1cb0mNQ]OU1nb0QOj\\OS1Uc0ROc\\OQ1]c0SO\\\\OP1dc0ZOj[Oo0Ud0f201N101O1O0O101O001O1O001O00001O010O1O000001O01O0O11O01O0000000000000001O000000000000000O1000O010000O100O010O01000O01000O01O001000O1O100O10O10O10O0100O10O1000O10000O2O00O1001N10001O0000001O001O0O2N2K4K6_N][OoLhd0]2o1]Od0DA?EC>_OlQ^3"}, {"size": [848, 480], "counts": "bme6h0ai0c0A5L3M3M3N3M2N2N2N2N2N3M2N100O100O101N2O0O10000O10001O0O1000001O00000000000000O10010O00001O00001O001O001N101L3M3N3M3M4L3M3M3K4L5L5K5J7I8Gicm3"}, {"size": 
[848, 480], "counts": "Q3c1mh0000000000000000000000001O00O100001O00000000O0101O00001O2M:Fh0TOPlb;"}, {"size": [848, 480], "counts": "PlW64Si0[1I5iL`Nh\\ONSNf1Te0bNa\\ONVNb1Xe0cN_\\OQ2ac0RN\\\\On1cc0VNY\\Ok1gc0bNj[O`1Vd0bNf[O`1[d0aNb[O_1_d0dN][O\\1dd0hNW[OW1ld0mNnZOj0]e0\\OXZOa0oe0a0`1K5J6I7K5K8I6K9H6J6J6I7Haa^5"}], [{"size": [848, 480], "counts": "Xh0X2Zh0N2N2N1O2N1O2N2N2N1O2N2N1O2N2N1O2N2N2N1O2N3M1O2N2N2N2N1O2N2N2N2N2N1O2N1O2N1O2N2NRgZ;"}, {"size": [848, 480], "counts": "i5P1`i01O2N100O001O001O0000010OO1000000O1O1N3N10001N100O101O001O1O00001O0001O01O0001O1O00001O000010O0000000010O1O00001O010000N2O2N2M3N3Ah]^1MebaN3N2O1N2N2]Oe0\\OROYWOV1dh0AXkc03cT\\O>F8G6K000000001O00000000O1000000000O10000000O10000000000O10O10000000O1000000000000O010000000000000O1000000000000O10O1000WOUWOLlh03WWOKih05WWOKih05WWOKih04XWOLhh03YWOMgh03YWOMgh03YWOMgh02ZWONfh02ZWOMgh03YWOMgh01[WOOeh00\\WO0dh0N_WO1bh0L`WO3eh0E^WO<[i0000O100N1O2N2O1N2N2O10OO2N2N2K4O2N2N2N1010010OO10000[OQWOHQi03VWOJjh03ZWOLjh0K]WO4_i01O00O1000000N100100O1O10OO2O100000O1000O100O1O1O010O1O1O1M3M201O0010O2N1O1M4O0O100O1O2N10001N2O0O10001M3NRUV5"}, {"size": [848, 480], "counts": "_lj5m0Pi0d0VOj0E:K6M2WMVMg]Oo2Wb0h2M1O2N101N2O0O2O2M2O1O1O1N2O1O2N1N2O0O2O1O0O2O1001N00N30O4L100O2N0000001O1O100O001O002N1O2N1O001O00001O00O11O0000O1000000O10O100000O1O11OO1O1O1N1101OO1O1N10100O10O01O1O1O0101N01O1O0101O0O101O0O100O2N1O101N1O2N1O1O2N1O2N1O2N1gKa\\O`1bc0TMX[On0_1h1Wf0H9QOn0C=I8H7Bj_Y3"}, {"size": [848, 480], "counts": 
"U;g0ii000000O10000000O10O100000000000O10O1000000000O010000O100001N10O100000000O10O10O1O1000O01000000000000000O1000000000O1000O1000000000O11N1000000000O0100000000O1000O11O0000O10O1000000000001O1N101O1O3L6K5IRR>:emA5J3N1O1O1000O10O10000000O1000000000000O010000000O10O10000000O10O100000O10O10000000O10000000O1000O10000000O10O10000000000O010000000000O11O00O1000O10O1001O00O1000O10O11O0000O10O10000O0100000O010000000O1000O10000O1O01000O100000O101OO1000O1000O1000000O10O1000000000000O1O1M3M4H^V_1Nki`N0O2O1O1O2N2N2N2N1O0O101OO10000000000000O10O1000000000000O10000001OO10O10000001O001N2O2M2O1N101N3LcbP3"}, {"size": [848, 480], "counts": "ohl7l0oh0f0^Ob0XOh0]Ob0_Oa0C=G9M3O1O2O0000000O1001O00000O10O10O100O101N1N2O1N2K6G8D=_Oa0G:_Ob0]Og0]OYa`3"}, {"size": [848, 480], "counts": "Xka6g0ci08H7L5L3M3M3N2N2N1O2N3M3M2N2O0O2O1N101N101N2O1O1N2O0O2O000000000O100000O101N0100000O101O00000O2O0O101N101N2M2O2N2N2M3L4H\\WO`Ngh0]17O2N3M2N3L6E`0\\Oi`R4"}, {"size": [848, 480], "counts": "Q3c1mh000000000000000000000000000000000000000O101O001N2N5L7FdVf;"}, {"size": [848, 480], "counts": "kUV6:ef0Kb\\OZ1]b0lNd\\OF\\Nd1od0gNa\\OKYNb1Ue0fN_\\OORN_1_e0eNZ\\On1fc0TNW\\Om1ic0\\Nm[Oe1Sd0`Ng[Oa1Yd0bNc[O^1^d0eN^[O[1cd0hNY[OX1id0lNQ[OT1Pe0ROhZOn0[e0XOZZOPOb09We0a1][OeMnd0V2i1G9J8H9J6J6K6I6J6J8HRQa5"}], [{"size": [848, 480], "counts": "Qh0_2Sh0N1O1O2N2N2N2N1O1O2N2N2N2N2N2N1O2N1O2N2N2N1O2N1O2N1O2N2N2N2N1O2N1O2N2N1O2N2N2N2N2N1O2N2N2NbbV;"}, {"size": [848, 480], "counts": "aba04\\j03M4K3N2OO02N1O102M2N2N100O000000000000O10000O1O2O0O1O100O2O0L5N1O2N11OO2Oc]^11[baN8I1N10000O1O1M4I6@a0D;OO2O11O000001N10000000001O01O00O2N11O00O1000010O01O00000001O0001O0000O101O00000001O001O010O1O0000O2L3O1L5M3L3J7K5M2K5N3N1N3N2N1O2N1O2Lhhl21XWSM3N2M201N101O00001O001O001O01O01O001O1O001O10O01O1O100O1O001O010O001O1O00010O001O1O1O10O0O1L6EQf6EbZI3J9]Oa0C;D>D[O_[_8"}, {"size": [848, 480], "counts": 
"fQQ5=ni09bN\\OdXOk0jf07]XO0[g0h1I7L5H7A`0E9K5F:K6H8I7D=L4K4I6L4N1J7M3N101N10001N10000O2O000000001O0000001O0001O00002N1O2O1N2N3M3M1O1O2N2OO01O001O1O1O0000001O001O00000000000001O000O1000000O100O100000000O100O1O100O100O1O100O1O01000O1O100O100O10O10001O000000000O101O000O101N1PLa\\Oo0`c0kNh\\OQ1Yc0gNU]OR1lb0jNZ]OT1gb0fNa]OV1`b0gNd]OW1^b0^Nm]Oa1Tb0YNR^Oe1Pb0UNU^Ok1la0nMZ^OP2ja0iM[^Og1[b0nMm]Ok0`f0IB`0\\Oaeh;"}, {"size": [848, 480], "counts": "]hR67Wj05L4L2QXOAke0a0QZOAne0e0oYO[Ooe0g0oYOZOPf0j0jYOXOVf0i0fYOZOZf0h0bYOZO^f0h0_YOYOaf0j0[YOWOff0k0VYOVOjf0P1PYOPOQg0Q1lXOoNVg0U1aXOnNag0l0`XOTOeg0JYYO5kf0EaYONifk5"}], [{"size": [848, 480], "counts": "Xh0X2Yh0O2N2N2N1O2N2N1O2N1O2N2N2N1O2N2N2N1O2N2N1O2N2N1O2N2N2N1O2N2N2N2N1O2N1O2N1O1O2N2N2NclY;"}, {"size": [848, 480], "counts": "m5Y1Wi00000000001N03N10N10O01N3N1N2O2N3L4L4J8ClbZ25V]eM8I3NO10O2O0O1L4\\Od0L4O10000000O100000O1000000O10O1O11O001O000001O000001O01O00000000001O00000000001O01O01O01N2O10O1O3M?A:POZVOb0Qj0In_\\38PfbLA_i0P1G5H7H7L5M20000001O1O1O1O1O1O1O2N1O2M3N101M2O3M1O1O1O1O1O001O1N1000000O101O0O1O1M4FfS9@SmF000ZVOb0Yh0@UWOR1hh0=M2O2N100O2N101N10001N10000O2O0O100O2N100O2O0O100O101O0O101O0O101O000000010O000000001O000001O0000000O2J5J6K5K5N3N10000000001N1000000000001O0O10000000001O000000001O000O1000000000000000001N100000000O101O001O0O100000001O0000000O1000000000dG"}, {"size": [848, 480], "counts": "a9e1kh00O1001O0000O101OO100000000O10000O1000000O1001OB]WOmNgh08XWOHi_j0N[ZVO6QVOJ\\i09cVOG[i0>aVOD]i0?_VOC_i0h0O2N1O1N2M3M3N2O001O100O10000000O100000000O2O002N4Ka0_OdPc02Wo\\O;I?A4L00000000000000000000000O11O0000000O010000000O100000O1000O100000000000O01000000O1000O100000O100000000O100PO\\WO4dh0HaWO7_h0FeWO8\\h0FgWO9Yi0O2M3N2Nj]k7"}, {"size": [848, 480], "counts": "oW\\42WSA9I0O100M3O1000O100000000000O010000000000O10O1000000000O10O1000000000O0100000O1000O100000O01000000O01000000000000O010000O1000O2O001O2M9EVeV39bZiL3O1O000O11O00O010000000000O10O011O0000O100000000000O01000000000O10000O01000001O00001O1O0O2O1N3M2O2L2NcaU3"}, {"size": 
[848, 480], "counts": "]Qk7>ei0?F9J6J6L3N3M3M2O3N001O1O1O1N2O2N1O2N1O101N1O2N3N2M10O1N3M4I9F`0ZOoTg3"}, {"size": [848, 480], "counts": "mUP6>ni09I5I7K5J6K5K4J6L3N2M2O2N101N1N3N100O1O100O1000000O010O10000O01000000O100O1O1O1N2O1M3N200O1O2O0O1O2N100O2O1N101O1O1N2O01O10O1O001N3N1O100O1N4M2N1N3N1O103H7Jll[4"}, {"size": [848, 480], "counts": "T3c1mh000000O11O00O100000000001O1N2N4M7H>\\Ojdm;"}, {"size": [848, 480], "counts": "P`]<"}], [{"size": [848, 480], "counts": "eh0k1fh0O2N2N2N1O2N1O1O2N2N1O2N2N2N1O3M1O2N1O2N1O2N2N2N2N1O1O2N2N2N1O2N2N2NRf_;"}, {"size": [848, 480], "counts": "P6W1Yi000O10000O1O2M200O1O101O0O102N001O01O01O0000001O000001O00000001O0000001O00001O1O1N5KnVd1LWi[N3L3N1N100L5G8O1O100O100O10000O011O0O1O1O001O100O1O101N01O010001O0O2O1O00000O2O00001O101NK6WO]oP3BhQoL3M4M2N1OO001010N01002M1000O1O100001N2O000000001O000000001O0000000oNYOYXOg0ag0_O^XOb0ag0_O_XOa0ag0_O_XOb0_g0_OaXOa0_g0@`XO`0_g0AaXO?_g0AaXO?_g0AaXO?_g0B`XO>`g0B`XO>`g0AaXO>`g0B`XO>`g0B`XO>`g0B`XO>ag0A^XO`0bg0@^XO`0bg0@]XOa0cg0_O]XOa0dg0^O\\XOb0dg0^O\\XOb0dg0_O[XOa0eg0_O[XOa0eg0^O\\XOa0eg0@ZXO`0fg0_O[XOa0eg0_O[XOa0eg0_O[XOa0fg0^OZXOb0fg0^OZXOa0gg0_OXXOb0hg0^OYXOa0hg0^OXXOb0ig0]OWXOb0jg0]OWXO`0mg0_OSXO?og0ASXO;og0E^10dT9M`QF?lh0f0L2O2O0O110O00001O0000010O01O01O0000001O000010O00000010O0010O000000010O0000010O001O0010O00000O100O1N2VOcN^XO`1bg0fNmWOc1Uh066J6J6N2N111N1000000000001O0O10000000000000000O2O000000001O00000O10000000001O0O10000000001O000O2O00000O1000001O0000000O1000001O0000000O10000000bG"}, {"size": [848, 480], "counts": "h9a1oh000O2O001O1O8H3M001O1N100N3VObVO8oPc0JTi]O6I7K3M2N100M3O2M2M3N20000O100O1N2M3M3O1N2O0010O1000000O10O100001N1000O10O10O11O1N2O3M4Lb0\\OTkc0NR[[OFbi0Q1D5L2N2N1O000000000000000000000O2O0000O100000000000O1O10O10000000000O1000O02O000O1O2@jU`8"}, {"size": [848, 480], "counts": 
"R[m3k0^h0Y1POo0XOg0@a0H7K6L3M3M4L4M2O2L3M3N2O1N2O1O1O1O101N1O100O2O0O1O2N100O2N10000O2N10001O00000011N100O0010O00001O00001N100O2O0O2O00000O2O000O2N1000010O00000O100001O000000000000000eKe\\Oa1\\c0WNX]O\\1hb0^Nc]O\\1^b0`Nj]O\\1Vb0]NU^O_1ka0_NY^O_1ga0_N]^O_1ca0^Na^Oa1_a0]Ne^Oa1\\a0\\Nh^Ob1Xa0\\Nl^Ob1Ta0\\No^Oc1Ra0ZNQ_Od1Pa0[NQ_Oe1o`0YNT_Of1m`0WNU_Oi1k`0UNW_Ok1j`0RNX_On1h`0QNY_Oo1h`0oMY_OP2h`0nM[_OQ2f`0mM[_OS2e`0lM]_OR2e`0lM]_OS2c`0lM__OR2c`0lM__OS2c`0jM`_OS2ad0N3L3N2M8H7J8G:E;Ffm[5"}, {"size": [848, 480], "counts": "X;h0hi000000O1000O10000000001O0O100000000000000O101O0000O01000000000000000O1O001O1O1000O0100000000000O100000O10O10O100000000O010000O10O10000000000O1000O1000000000000O011O001O1N2O2N1O6IY]Vi0m0mNQ1VOk0ZOd0K6L3M3N1N3N3M4L2M200O1O1O0O2O0000000000O1O1O2M2O1M4K5I7C>@a0\\Oi0VOknl3"}, {"size": [848, 480], "counts": "n_[5?ki0=F6K7I5K5K4M2N2N1N4K4N2N2N1O2N10001O0000000O1000O1000000O100O1O0O2N2O1O2M2N2O1N2O1O101M200O2O0O101N10010O001O100O1O001O1O0O201M3N1O1O0O3M2O2M3M3M4J7HTRS5"}, {"size": [848, 480], "counts": "V3d1lh00000O1000000000001O002N;Dmc0^OZ\\O>fc0^Oe\\O;[c0Bm\\O9Sc0EW]O3ib0Ga]O5_b0Jg]O1Yb0Nj]O0Vb0Nm]O1Sb0MP^O1Rb0LQ^O3oa0KT^O4la0JV^O6ja0IW^O7ia0IW^O7ia0IW^O7ia0GZ^O8ga0FZ^O9ga0E[^O;ea0D\\^Oba0A_^O?aa0@`^O`0`a0_Ob^O?_a0@b^O`0_a0^Ob^Oa0_a0^Ob^Ob0^a0]Oc^Oc0^a0[Oc^Od0^a0\\Ob^Od0_a0ZOb^Oe0`a0YOa^Og0`a0WOa^Oh0aa0VOa^Og0ba0WO_^Oh0ee0O1O2O1N4L>_O`oW6"}, {"size": [848, 480], "counts": "Z;i0gi0000000000O010000000000O0100000O01000000000O1000000000000O010000000O1000000000000O10000000000000001N1000O1000O0100O0010000O1000000O1000O10000000O10O2O001N101O1O1O7HXg?JnX@4L8J0O0O10O1000O100000000O010000000000O10O1000O101N1000001N10006Geam23W^RMN3N1N2O1O001N2N1O101N1O1O2N1M3N201N10O0100000O11O00010O00001O1O10O01O00001O01O001O010N10010O001O001O00000O100000000O1N2M3L4K5I7K7GYYU17bfjNeWOC[h0Qe0Kd\\OH[N;Re0Mc\\OH[N;Se0Kb\\OLZN8Ue0Kb\\ONXN6Xe0L_\\ONZN5Xe0L]\\O1[N0Ze0N\\\\O2ZN0[e0MZ\\O4[NN`e0KS\\O8^NKae0La[Oi0nN[Oce0IW[OV1VOoNQg0[1PYOcNnf0`1RYO_Nef0j1]YOSNdf0l0lRk6"}, {"size": [848, 
480], "counts": "[;j0fi000000000O010000O1000O1000O100000O10000000000O100000000O10O10000000O1000O100001OO0101O0000001N100000O011O00O0100O10000O1000O10000O10O1000000001N101O2N4K7J3MRR>;bmA1O10000000O0100000O10000O010000O010O1O1O00100O1000O01000000O1000O10O100000O100000000O1000O1O10O10000000O10O100000000O01000000000O1000O100000000000O01000O4M2N3M4KkTd1:kj[N1O101OO1000O1000000000O0100O100000O010000000O10O1000O10O1000000000O10O100000000O01000O01000000O10O1000O100000O10O101O0000O01000000O10000000000O10O1000000000000O01000000000O10O101O1O1O000O2O1N2N2OPdc3"}, {"size": [848, 480], "counts": "TmX7k0ih0o0aN]1YOf0ZOf0M3O1O100O10001O0000001OO01000O1000000000001N1O1N3N1L4L5F:Eia0@X^O`0ha0_OZ^O`0fa0_O[^Oa0da0@]^O?da0_O]^O`0da0_O]^Oa0ca0^O_^Oa0ba0]O_^Ob0ba0]O_^Oc0aa0]O_^Oc0ba0[O`]O_OZNU1Wd0\\O^]OCXNQ1[d0ZO]]OGWNo0\\d0ZO\\]OJVNk0`d0ZOY]OLWNj0ad0YOW]OOWNg0dd0YOT]O2WNe0fd0XOQ]O7XN`0hd0ZOj\\O:^N;kd0YOe\\O?_N8nd0WO`\\Od0bN4Qe0VOZ\\Oj0eNOUe0SOT\\OP1gNLof0N1O10000O10O10000000O01000O100O100O00100N110000O10O1000O100000O1000O10000O1000001N10OO1G`RS24emlM4L10000O100O010O10000O100O01000000000000O010000O100000O100000O10O10000O0100O100000O010O10000000O10O1000O1000000000O10O100000O10O10O100000O10O1000000O100000000O10O100000O1000000O10000000O1000O10000000O100000O10000000001O001N102N1O1N2O2Lnmf3"}, {"size": [848, 480], "counts": "_cU7a0kh0V1cN]NiXOd2Zf0X1^Oa0L4N2O2O0O101O00000000000O10O10000O1000000O2O0O1O2M3M3I9G9Ab0[Oh0kNVXO[N^nZ4"}, {"size": [848, 480], "counts": 
"Xcc4>ni0D3L3N3M3M3M2N01O02N1O10O01O1O1O10O02N3M2N005K1ON3O0K6L3E;I7H9Bn\\[24QcdMhb0@Z]O`0eb0_O^]O`0bb0^O`]Ob0`b0[Oc]Oe0\\b0ZOg]Oe0Xb0ZOj]Of0Vb0YOl]Of0Tb0XOn]Oh0Rb0VOQ^Oi0na0UOU^Ok0ka0SOW^Om0ia0QOZ^On0fa0QO\\^Oo0ba0QO_^Oo0aa0POa^Oo0_a0POc^Oo0]a0oNf^OP1Za0oNg^OQ1Ya0nNi^OQ1Wa0mNk^OS1Ua0lNm^OS1Sa0mNn^OR1Ra0lNQ_OR1Qa0mNP_OR1Pa0mNQ_OS1Pa0kNR_OT1n`0kNT_OS1n`0jNT_OV1l`0jNU_OU1l`0iNV_OV1j`0jNV_OU1l`0iNV_OV1j`0iNW_OW1j`0hNV_OW1l`0gNX^OLiM\\1oc0gNU^O4iMU1Ud0cNQ^O;iMR1Vg0oNjXOo0Vg0SOjXOl0Vg0UOiXOk0Wg0UOiXOj0Wg0XOiXOf0Wg0\\OhXOd0Ug0AjXO=Tg0FmXO9lf0OSYO1jf02WYOLef0:ZYOF[f0f0eYOYOPf0R1QZOlNPf0T1PZOlNoe0WO^YOb1f0TOif0YOeXOc0e00gig6"}, {"size": [848, 480], "counts": "];h0hi000O0101OO10O1000000O010000000000O010000000O01000000O01000000000000O1000000000000O2O00000O1000000000O100000O1000O1000000O010001O000O2O002M4M5K3LQR>>amA1O100O001O100000000O010O100000O1000000O102M3N00001O0O0100O1000O01O100O1O1O10O10001N2O1N4L_aU22_^jM2N2O0O1O2O000O010O1000O1000000O10O100O100000O10O100000O2O00O010001O001N2O001N2O0O2O2O0N3M3NYg?0cX@5K4N2O100N3O00000O1000O10000000O010000000000000O10O100000O1000000000000001N101O001O1N102N2L3Nmbh3"}, {"size": [848, 480], "counts": "ZnS7f0hh0V1mMo1D=Cbg0B_XO8fg0H\\XO1ig0OZXOKig06VXOHlg09RXOEQh0;oWOAUh0`0kWO]O\\h0GYWOa0oi0J5K01013L`VX6"}], [{"size": [848, 480], "counts": "oi0a0Pj0O2N2N1O2N1O2N002N2NRbS<"}, {"size": [848, 480], "counts": "jQm5:ni0=J3M2O1O1O001O100O1O101N1O00100O001O0010O01O1O001O010O1O1O2N0010O01O2N002O0O1O1O1O2N2N2O1N3M1O2N01O01O001O0001O0O1001O001O00010O00000000000001O0000001O000O100000001O00000O101O000O100O1O2J5G:_Oc0_OiU9HVjFn0[O?]Oc0M3M200O101O00001O0010O00000001O01O0001O0000001O0001O0001O00000010O000001O00001O01O00000000001O000M3I7L4J6N3M200O100000001O000O10001O00000000000O2O0000000000001O0O100000000O10001O00000O10001O0000000O2O0000000O101O000O100000000000000O101O000O101O0000001N1000000000001O00000000001NZG"}, {"size": [848, 480], "counts": 
"i9f1jh00000O10000000O10O1000000O1000000O100000O10O100000000O10O100000000000000O1000000000O1000000000000O100000O1000000O02O002M4L2O5J=CXkc0_OYU\\O;E4M0cVOFch0:^WOH`h07aWOJ^h05bWOL^h04aWOM^h03aWOO_h01`WO0`h0O`WO2_h0NaWO3_h0MaWO3_h0M`WO4`h0L`WO4`h0L`WO3bh0K^WO6bh0I_WO7ah0I^WO8bh0G_WO9ah0G^WO:bh0E_WO:bh0E^WOch0@^WO`0ch0^O^WOb0bh0]O^WOd0ch0YO^WOh0Ui000O103N3M6]OPVO7Wj0M\\cc2a0P\\\\M7L100000O101O001N101N2O2N3M3M2N1N2O2M3Nm^Q6"}, {"size": [848, 480], "counts": "Qn[34Wj08ZOe0CY[O@ed0>_[OA`d0?b[O@]d0=h[OCVd0Pd0XOl]O;SNSYOAkf0b0UYO]Oif0f0VYOYOhf0k0XYOTOgf0m0ZYOQOef0R1[YOlNbf0X1V15O0N]Ob0QOl016HoP^6"}, {"size": [848, 480], "counts": "];h0hi000000000000O0100000O1000O10O10000000O0100000O0100000000000O10O1001OO10O10000000000O1000000000O0100000000000000O1000000000000O101O001N3N3M4K4MRR>:bmA3M2O1O1O1000O101O0000000O0100O10O10000000O0100000000O10O11O0O2O00001O0O100000O100O100O0101N2O1O1N2O1O1N2O0O2O001N101O0OiRn19nlQN4M1O00O0100O100O10000O01000O1000000000O10O100001N2O0O2O001O100O0O101O00000O0100O100O10OO101N2O0O2O1O100O100000O1000O10000O10O10000O1000000000000O10000O1000000O100000O100000O1000O100O10000000000001O001O1N2O0O100O3N3Jnbh3"}, {"size": [848, 480], "counts": "eSS7m0`h0U1UNk1@>F:O1N200O2O00000000001O0O1000O10000O10000O2O000N3N1N3K5I7E=B`0QOU1TOU1fNnj[4"}, {"size": [848, 480], "counts": "ZRo4?ni08I5K6J4L4M2M4M2N2N2M200N2O1O2O0O01O010O100O100O0O2O1N2O1N2O1O1O00100O100O1O2O000001O01O00000010O01O000O2O1O2N001O1N101N3N1O2O0O2N2M7HhSf5"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "P`]<"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": 
"fYX27Yj02M2O1N2O1N2O1N1PODTWOT1ih0Whl2L]QTM9J6H7I8F9L5M2O10000000010O0000000010O000001O0000010O000000001O010O0000000010O0000010O000001O000010O0001O0000001O0000000001O000000001O0O100000001O0000000O2O0O10000O1O2N1J6G;]Od0^O[U9`0VjF`0\\Oc0E:O2M200O2O001O00001O01O0001O0000000010O000001O01O0001O0000001O01O01O000000010O0000000001O0000000O1L4J6L5I6N2O1O1000001O000000000O2O00000000001O0000000000000O2O0000001O000O10000000000O101O000O1000001O000O1000001O000O100000000O10001O0O101O000000000O2O0000000O10001O00000O1000001O1OWG"}, {"size": [848, 480], "counts": "j9f1jh000000000000O01000000O1000000000O10O1000000O1000000O0100000000O1000O10000001N0100000000O2O000O1000000000O2OO1000000O2O2M4L2O:Ea0]O_Pc0KfU\\OKii0m0A;E7I4M10OO1O1000000O11N10O100O100O10O0100001N2O00O1000O101O00000O10000O10OO2B=01N4F9L7^OTVO1jch22[VXM2C=M301O00O2O0O100000001N101O0O101O000000001N10000O2O000O2O00000O101O0O101N101O0O2O0000aUQ5"}, {"size": [848, 480], "counts": "Ue]3:ni0:H7F:I7J6J6J5J7L4H8J5K6J5K5J7K4M4J6K5J5M4K5M2N2N3M3N1N3N2M2O2N1O2O0O2O0O2O0O2O0O101O0O10000OPNW[O[Nid0b1\\[O]Ncd0^1c[OaN]d0[1i[OcNXd0m0W\\OSOkc0`0a\\O_O_c0>d\\OB\\c0;h\\ODYc09j\\OGUc06P]OHPc07Q]OIob04U]OKjb04Y]OKgb03[]OMeb01^]ONbb00a]OO_b0Oc]O1]b0Ne]O1[b0Mh]O2Xb0Mj]O2Vb0Ml]O2Ub0Lm]O2Tb0Lo]O3Qb0LR^O2na0NS^O1ma0MU^O3ka0LW^O3ia0MX^O1ia0J\\^O6ea0I\\^O6da0I]^O7ba0KW]OB_Nb0Zd0KW]OF^N=]d0KT]OL^N8^d0KT]O0\\N5`d0KS]O2]N2`d0KS]O5]NO`d0KS]O7]NMad0KR]O;\\NIcd0KP]O?\\NDed0Lo\\Ob0]N@ed0Mm\\Oe0^N]Oed0Ml\\Oi0^NZOgd0Lj\\Ol0_NVOid0Mg\\Oo0`NSOid0Mf\\OT1`NnNjd0Md\\OY1bNiNkd0Mb\\O\\1cNdNod0M^\\O`1fN^NPe00Y\\Od1ef02NM3J3F>C?[OPdZ6"}, {"size": [848, 480], "counts": 
"^;g0ii0000000000000O01000000O1000O10000000O10O10O100000000O10O10000000O010000000000O100000000O0100000000000O1000O100000O2OO10000001N101O1N2O2M5L6Ial>=QSA4M1O10O01000000O010000000O10000000O10O100000O0100000O10O100000000000O10000O101O2N`^d2Lca[M1O10000N2O1O10O1000000000000000O010000000O01000O1000O10O1000000O0100000O100O0100000O0100000000000O010000000000O100000000O1000000O1000000O10000O1000000000O0100000O101O00O10O100001O1O1O1O0O3N1OO02NmWj3"}, {"size": [848, 480], "counts": "[SS7W1Xh0T1\\Nc1_O?H8O1O2O0O100000001O00000O1000O1000000O100O101N1O2M2N3I6J7E?XOg0ZOS1hM[WOP1\\^]4"}, {"size": [848, 480], "counts": "TPP55Vj09G:I4M2N2M201N1N4L3M201N2N2N2N1O2N1O10000O100O10000O0100O100O001N3N1N2N2O1N2O100O1000000000010O00001O00001O001N2O0O2O1O1N2N3M3M2N3M4FaVe5"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "P`]<"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "X_W26Yj03N1N2O2N1O1N2N1M3POP1N2000001OO1000001O00000000001O000O11O01O00000000000000001O00001O0001O00O100000001O0001O0000002OO0001O00O2OO21KTWObNfY`2eb0A\\]O>db0A]]O>eb0A[]O>fb0A\\]O>eb0A[]O>hb0^O[]Oa0gb0\\OZ]Ob0\\f0LiV`6"}, {"size": [848, 480], "counts": "_;h0hi00000000000O0100000000O10O10000000000O010000O10O100000O10000000O1000O1000O2OO100000O10O1000000O1000O10000000000O10000000O1001N2O2N1N4M4K4LTg?1kX@;C3O100O10O1000O10O10000000O100O1000O10O10000O010000000O010000O10O1000O1000O10000O10O1000001O0O1000001O00O01000O1000O0100000000O0100000000000O1000O100000O10000000O10O1000O1O100O001O1O1O1N3M2N2O2L\\_g18Z`XN5O10000O100O1000000O010000O1000O1000O10O100O1000O1000O10O10000000000O1000000000000O10000O100000O010001O000O10O1000000000000000O01001O1O001O1N103M1O0O2NiWj3"}, {"size": [848, 480], "counts": "`YR7a0hh0[1`N]1WOi0A?K5N1O100O1000001O000000000O10O100000000O1O2N1O2N1O1M4I7D=Ac0UOT1oNX`]4"}, {"size": [848, 480], "counts": "R^S55Uj0>B:I5L5L5K3M101N2N1O2N101O0O1O1000O01000O1O1O001O1M3O010N2O1O100O100000001OO100001O001O0O10001N101O1N2O2M4L3M3L4L7AoVi5"}, {"size": [848, 480], 
"counts": "P`]<"}, {"size": [848, 480], "counts": "W]\\54[j02mWOMfe05XZOOee05WZOMfe0;SZOFle0L0fYOgL`e0Y3^ZOlL`e0S3_ZOoLae0P3_ZOQMae0o2^ZOSM`e0n2`ZORM`e0m2aZOSM_e0m2aZOTM^e0l2hZOnLXe0R3kZOlLTe0T3kZOlLUe0X3hZOhLXe0Z3\\ZOVL5a0_e0_3^ZObLbe0U4100O10000O100O2N1O2M2M4I7D>\\Oh0SOZ1hNaZ^4"}, {"size": [848, 480], "counts": "Yo^5>ni06K4N3L4M2M4K4N1O1N3N1O1O10000O100O10O0100O01O1N101O1O1O001O1O010O101O0O101O00010O0000100O1O00001O1O0010OO2O1O2N2M4M2L6HaP\\5"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "gb[52\\j05L2N4M2O1M3M4N0^VO]OVi0Q1N3M6J3L6K1O0OO2O0O2N3^NVWOZ1\\i0C3M5K4L4K6He`[6"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "ZjU25[j03M1N2O1N2O2L3I6UOl0L30000000000001O00000O1000001O000000001N10000000000010O001O0000000001O000001O000000O2O01O001O1O00100O001O0001O1O1O001O001O0000000000000000000O010L4E;H8I7GcTS2IbQlMOii0d0J5LO2IaVOA^i0b071000O1O6EoXOEQg0=kXOFTg0E@?@WPc0=]o\\Ob0@7H8I2M2O000001OO100000000000000000O10000000000O10O1001O0O10O1000000O10O10000O1000000000000O100000O100000O1000000000O0100000000O10000O1000O100000O2O00000O103Mj0VO1O3L3La\\P8"}, {"size": [848, 480], "counts": "mQ]42Zj06L4M2N2M3M3N2N3L3M3N2M2M4K4D=H7G:F:D;G9J7DcL\\YOd3]f0;J7L3N3M2N2O1O2N1O1O1O2N100O1O2N1O10001N10000O2O000000001O000O100O101N1O10000O2O000000001O000000TLU[OaN1W3jd0oMn[Ol1Rd0QNX\\Oh1hc0UN^\\Oh1bc0VNc\\Og1]c0WNi\\Of1Vc0WNo\\Og1Qc0XNR]Of1nb0YNT]Of1lb0YNW]Oe1jb0YNX]Of1hb0YNZ]Of1fb0YN\\]Of1db0YN^]Of1cb0WN`]Oh1ab0VNa]Oi1_b0VNb]Oi1`b0UNb]Oj1^b0TNd]Ol1]b0RNd]Om1]b0SNc]Om1]b0SNd]Ol1]b0SNf]Oj1[b0TN\\^OL]LT1Ze0nNR_O=Xca5"}, {"size": [848, 480], "counts": 
"c;h0hi00000000O10O100000O1000O10000000O100000O100000O1000O10000000O0100000000000O10O10000000O10O1000O1000O10O1000000000O01000O102N2N2N2M4M7EWg?3gX@;G3L2O1000000O1000O1O100000O10000O10O100000O10O1000O1000O10000000O1000O100000O1000O100000O1000O10O100000O010000000O10O10O1000O10O10000000O1000O1000000000O010001O000O101O1N2O1O1O1N10001O1N2O1N3N1Ne[V2;QdiM2N1O000O10000O100O010O100O01000000O100O1001OO01000O1000O100000O1000O100000000O0100O11O1O000000O101O2M101N3N1O3Leam3"}, {"size": [848, 480], "counts": "`jo67Wj04I7L3J5O100O1N2000001N100100O00001N1O10iMZOdZOf0Ye0_OcZOc0Ze0AcZO?]e0CbZO>Rd0RO^[Oc0?;Td0TOX[Oc0c09Vd0UOU[Od0c08Xd0UOQ[Of0g04Zd0WOkZOh0j00]d0ZO_ZOm0U1F^d0CTZOj0f1mNZNOQf0:jYOn0ch0?5Ji0]NRWO1ig`4"}, {"size": [848, 480], "counts": "caS6=Qj09G3M2K7M1N2O2N1O1O1O1O2N100O1O1O10O100O10O010O0100O01000O1O001O1O100O2O000000001O0100O1O10O001O100O1O1O1O001N201N1O1O2M2O1N3L5K8F^Yc4"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "P`]<"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "e6Anl`15QS_N3N1N101O1N2L4C=[Od0N200O1001O000000001O00000O100000010O000O1O1001O0001O01O0000000001O00000O11O0001O000000000100O001O1O001O10O0010O01O001O001O00001O00000O1000000000000O1N2K5M3K6I5J7L5Hnok21QPTM8K5K4O2M2N2O1O101O00000O10000000O100000O100O100O1O001O1O0000O2H9ElS>OekAAXWOk0eh0WO]WOg0bh0YO_WOg0^h0VOWWO1:j0]h0VOYWO09k0`h0YO\\WOJKn0jh0VOZWONJm0mh0VOUWOR1jh0:O1102M012`NSWOW1Ri0101O000N2010O000001N100O101O0O1O1O1[O`NYXOb1fg0gNPXOZ1og0hNoWOY1Rh0eNlWO^1Th0bNkWO_1Uh0?O0000001O0001N1O1N2K5K5K5M3N3O0O1000001O0O100000000000001O000000000O10001O00000O100000001N1000000000000O10001O000O101O000O10001O00000O1000000000001N10000000000O2O1O00001OO0101O00001O0O10O11N10001O000O1000001ORG"}, {"size": [848, 480], "counts": 
"o9e1kh00000000000O10O10000000O10O10O1000000000000O100000O100000000000O100000000000000O10000000O010000O10000O101O1O0O2O2N3L>C=@kjc0;S[[O_Oei0Q1E8G5L1N2O00000000000O0100000000000000O10000000000O100000000O1000O10000000O1000O10O10000000000O1000O100000O1000000000O10O10000XOTWOLlh03VWOKkh04VWOLjh04VWOLjh04VWOLjh03WWOLjh04VWOLjh03XWOLhh03YWOMgh02ZWONgh00ZWO0fh0N\\WO2dh0M]WO2dh0J`WO6`h0FdWO:Yi0000O100N2M2O2M3N2O1N2M3N1M4O1N1O2M3O10000001OO01O1O2N2L5I8L>@ef[7"}, {"size": [848, 480], "counts": "Zkl46Xj04K5L3M3M3M3M3N2L4M3kNT1J7gNY1K4M4M2N2O2N1O2N1O2O1N1O101O0O2N2O001N101O0O2O1O001O001O00001O001O000O2O001O00000O2O00000O101O0001O000001O00000000\\LQ[Og1od0oMg[Oe1Zd0WNm[Oe1Sd0XNU\\Oc1kc0ZNZ\\Od1fc0YN_\\Oe1bc0XNa\\Og1_c0WNf\\Of1Zc0XNk\\Oe1Vc0YNm\\Of1Rc0XNQ]Of1Pc0YNQ]Og1ob0VNU]Oi1lb0UNV]Oj1jb0UNX]Oi1ib0WNX]Oh1ib0UNY]Oj1ib0TNY]Ok1hb0RN\\]Ok1oe0[N\\WOj0gi0]ObXU5"}, {"size": [848, 480], "counts": "c;h0gi01000O10O1000000000O1000O100000O100000O10O1000O10000000O100000000000000O01000000O010000000O1000O10O1000000000O10O1000O101O2N2N2M3N3Lil>MZSA6J3M1O2N1O10000O010O1000O1000O100000O1000O1000O1000O10O10000000O10O1000O100000O1000O1000000O01000000O10O100000O10O1000O100000O01000O100000O100000000000000O10O1000000O01000000O10000000O010000000000O10002M8H6K1O1ObPX21\\ogM=C1O1O10001N02O0000O010000000000O10000000O1000O10000000O100000O100O11O001O00O10001N2O1N2O2N1N2N3LVgl3"}, {"size": [848, 480], "counts": "a[Q7]1kh09L3O1O2O00000O2O00001O001O1O001O1O1O001O1O1O100O1O1O100001N4I`0TOVj`4"}, {"size": [848, 480], "counts": 
"ibX6Rj0001O100O1O1O100O1O1O1O10O01OO1001OO10000O1O1O5JcR`1Jdm_N2N3N0O102M2N1J7UOj0M300O100001O000000000000001N11O00000001N1O1001O00000010O0000000O11O01O000001O00000001O0001O001O100O1O001O01O00001O1O10O000001O000O2O00000000000000000N2N2L4J6K5I7K6I7Jce62\\ZI5L4N2M3O00O6APVO3oTg0OQeYOd0@f0[O8H4N110O00000010O00000000010O0001O01O0001O01O0001O000010O00001O010O00000000001O00O11ZNgWOS1Pi0K2N2M3N3M4K8H>BhW=:dgB=H7O0O100000O0O2N2N2K5O2O0M2O3M2Mmae0IQ^ZObg0A_XO`0ag0\\OaXOe0bg0QOeXOo0\\h03N1N3N00003L2O2O0O101N001N3M2M3N2N2O000M11O1N4L4K5O2N1O10001N1M4K4C=H9FPZ83RfG6L2K6L4L4M2M3O2O01O010O01O000000000001O000000000O101O0000001N100000000O10001O000O100000001N101O001O0O10001O00000O1000001N1000001O0000001NSG"}, {"size": [848, 480], "counts": "n9c1lh0100000000000O10000000O01000000000000O100000000000000O100000000000O0101O00000O10O1000O10O100O10000O0101O001O00001N2dNQWOU1`i0E9EQkc0OQU\\O`0A8H6I3N1O000000000O100000O1000000000O100000O1000000000000O10O100000000000O100O1000O1000O100000000O100000O10O100000000000000ZOQWOJPi05RWOJnh05TWOJlh05UWOKjh06VWOIkh07UWOIkh06VWOJjh06WWOIih06XWOJhh05YWOKgh05YWOKhh02ZWOMgh02ZWONfh00\\WO0dh0N^WO2bh0JbWO6\\i000O01O1N2M3N1N3N2O1N1N3M3M3N2O1M201O1000O100000000_OmVOETi07QWOGPi05UWOIii0O2O000000000O10O1N200O1O1O1000O10O100000O010000O001N2N2L4N1O2O10000001O00000O100000001O00000O2O01O0001N1000001N100O101O0O1O1O2M3M4LTag5"}, {"size": [848, 480], "counts": "jRV56Wj06L2L3O2N2N1N3M3dN]OgXOf0Pg0JgXO8mf0:iXOKSg0;fXOIkf0V2M2N2O1N3N2N1O2N101N101N2O1O1N2O001O0O2O1O1O1O001O1O002N0010O01O010O1O10O01O001N10001O0001O0000000001O01O0000001O00000lL`ZOX1`e0bNmZOW1Se0gNR[OV1nd0eN[[OW1ed0hN^[OV1cd0fNb[OX1^d0gNe[OV1\\d0gNi[OW1Xd0fNl[OX1Td0fNo[OX1Rd0fNP\\OZ1Pd0eNR\\OZ1oc0cNU\\OZ1lc0fNY\\OB_Nf0Ze0FP4N2M4M2N2NTZk4"}, {"size": [848, 480], "counts": 
"`;i0gi0000000000O010000000000O01000000O01000000000000O01000000000O10O10000000O1000O10000000O10O100000O100000000000000O100000O02O1O1O3L3N3L5KTg?1jX@5J9I2O001N01000000O0100000O100000O1000O10000000O0100000O1000O10000000O01000000O0100000000O01000000O1000O100000O10000000O01000O100000O10O10000O1000O10O100000O1000000O011O0000000O01000000O1000O1000O1000O1000000000O010O100000O10O100000O1000000O01000000000O1000O1000O10000000O1000000000O1000000000000O010000000O10000O100000001N1NZa]1DR_bN5K4N2N4L000O10010O0000O101O1N101O1O2M2O2Lham3"}, {"size": [848, 480], "counts": "]\\Q7482ch0f1D6N1O2OHgWOUN\\h0e1;O100O10000O1001N101O001O3M>B1O00010O00O101O1N102N2N8Gao_4"}, {"size": [848, 480], "counts": "^Pa6?ni07J4M4K5K5L2N2N1O2N10001N100O100O11O00O2O00O1000000O1O1M3O1O1N3N2O0O101N1O100000001O001N101O001N2O1N2O3M2L3M4IgS^4"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "jXX55Yj04M2nXO4^c0N\\\\OP1jb0TOm\\OU1ob0lN\\\\O_O`Ni1Se0iN[\\O@_Ni1Ve0kN^ZO\\Of1W2mc0eNn[O\\1Sd0fNf[O^1[d0bNb[O`1^d0cN][O^1ed0dNW[O]1id0gNQ[OZ1Qe0jNhZOW1Ze0nN]ZOT1ee0MgZObN\\e0Z1oZOUN_e0c1k1E:G:I:J8H5L4K4M1N2N2N2N2M4M2MmPY6"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "a6>Rj00O200O1O1O1O1O1O1O1O010O1O0001O00000O1O2O0O1N3L4JWX_10mg`N2N2O1O0O2N2M3H8XOg0M3N2000000001O00000000000O2O00000O101O00000000010O01O0000000000000001O01O0000000001O000001O001O100O001O0010O001O1O00001O001O001O00001O00O100000000000O1L4N2H8K5J6K6HU`7OP`H4L4M2N3O01O2MR[f0FPeYOe0@h0YO6K3O0000000010O0000001O0001O00001O01O0001O000010O000000010O00000010O0000001O01O0001O0000001O00000010O01O0001O01O0001O01O01UNQXOR1jh0L3N4J;CY^A=Agjc0a0kT\\ObZOPNJm0de0R1][OfNdd0Y1i[OZNXd0e1U\\OmMlc0T2X2O1O100O100O1O2N1K5G9O2N101N3N2N6I8lNcVOg0\\j0VOoS_4"}, {"size": [848, 480], "counts": "cXe69Sj07J7J4L4M3M2N3M3M2N3N1N2N2O0O10001O0O100O10000000O100O1N2O1O1N2N2O101N100O2O0O2O0O101O1N1O2O1N2N2N2O1O2L4L6HTfZ4"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": 
"gXX52]j04M2TYOLWc07]\\OP1fb0TOP]OV1lb0nN[\\O@`Nf1Te0kNZ\\OA_Nf1We0jNaZO\\Oc1Z2mc0eNl[O\\1Td0eNg[O_1Zd0bNb[O`1^d0cN^[O^1bd0eNZ[O\\1gd0gNS[OZ1nd0jNmZOW1Te0POaZOQ1ae0OlZOaNWe0\\1S[OSNYe0e1l1K7EDh0WO5N10O100001O0000010O0000001O00010O00001O01O00000010O00000010O000000010O000001O001O0001O0000000000000O2O000000001O000000000O2O1O0000001O0O1N2N2O2N1N3M2O1M3N3M3L5G?AoQh09hmWO8[Oe0H9L4OO1O010O001O001O001O100O2M3bNPWOY1Pj0eNVVO7Vj0JaQ90_nF4ZVO;Zh0T1J1O1O1O010O001O002N1O001O10O0001O1ON2N2K5M4O0O1000001O0O100000001O00000O10001O0O1O1G9N2N2O2M2M3L4N2L`j5KhUJ4M2J4M3N2N2L4N2O010N2O1O1O1O02O1O0000000O100000001O00000O1000O11N2XOeVO9ji0O001N2O1O2N001O001O1N10002NRk3"}, {"size": [848, 480], "counts": "j9d1lh0000000O1000O100000O1000000O10000O1000O1001O000000001N011O001OO1001OO10O100000000000000O0100O1001OO010001OO010001N2O2iNiVOP1di0H=@ojc07lT\\O`0A6J6J1O1O0O10O100000000000000O10O1000000000O1000O100000O10000000O10000000O1000000000O10O1000000O10000000O10O10000000000O100ZOPWOLPi03UWOHlh08VWOFjh0:VWOFjh09WWOGih09WWOGhh09YWOFhh0:XWOFih08WWOIih06XWOJhh04ZWOLfh03ZWONfh00\\WO0dh0M_WO2ch0JaWO5]i00O100O1N1O2M2O2O1O2M1N3M3M3N1N3N2N2O01000O01000001O0O2]OnVOFTi01TWONii0N1O0O100000O1O1O001O1O1O1O100000010O0O1000O0100O1O1M3N2L4N2O010O100001O0O1000000O2O000000000O2O00001O0O1000001O00000O101O0010O01O0O1O2M2O1Ni\\c5"}, {"size": [848, 480], "counts": "Tdf5\\1gh0?K5N8C>@:L3M2L5K3K6H8H7J7M3L3N3N1O2M2O2N2O001N101O1O0O2O010O000010O002O2M1O2N3M1O000001O000001OO100000001O0000O02O0000000000000000000000000000hLoZOQ1Qe0jNW[OS1jd0gN_[OU1ad0eNh[OX1Yd0dNm[OY1Td0dNP\\OZ1Qd0bNS\\O]1nc0`NU\\O_1mc0\\NX\\Oc1hc0ZN[\\Oe1gc0VN]\\Oi1Vf0N4RO]WOInVa4"}, {"size": [848, 480], "counts": 
"\\;j0fi00000O100000O10O100000O10O100000000000O010000000O10O1000000O1000000000O01000000000000O1000O100000O10O100000000000O011O001O1O2M2N5L5JQb`0Md]_O7UVOKhi0`0O2N0100000O1000O100000O100000O1000000O0100000O1000O01000000O100000000O010000000000O10O100000O1000O10O100000O100000O1000O1000O1000O10000000O10O100000O100000O10O1000000O0100000O10O100000O100000O01000O10O10000000O1000O1000O1000000000000000O0101O00O1001N0100000O100000O10O1000000000O0100000000O011OO10O10O100000000000O10O1000000000O101O1O1NWV_1Lmi`N2O000001O001O1O1O1N3N3Mham3"}, {"size": [848, 480], "counts": "PdP7n0Yh0^1aN\\1VOh0L4N3O0O1000000000000000O010O1O1O100O10000O1ZNUZOgNle0V1YZOfNhe0W1^ZOfNce0V1fZOcN\\e0Z1nZO\\NUe0a1o1N4K6I;Ff0[Ok0TOSY^4"}, {"size": [848, 480], "counts": "kif6`0mi06K4M2M4M2N2M4L4M3N4L1N101O000O1000001O00000O11N1000000O001O10O01M3O2N2M2N3N100O2O001O0O2O001O0O2N2N3M2N4L4L3JXZX4"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "fmY52]j02N3kXO=UMI\\e0KW]O]1fb0eNb\\OKYNd1Te0cNa\\OMSNd1\\e0cN`ZOBe1^2kc0`Nh[Ob1Xd0aNd[O`1\\d0bNa[O_1`d0bN][O^1dd0fNV[O[1ld0jNmZOW1Se0oNeZOR1]e0TOYZOVO2a0he0T1jZOZN_e0a1T[OaM]e0Y2`1J7Aa0I;F9I3M4J4M4L2M3M3N2NbkY6"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "^6?Qj000001O1O001O100N2O100O1O1O0000N2O1N2N3L3N4L5KRm`16iR_N2N3N000O2N1M4J6ZOe0J6N2O1001O0000001O0000000000001O0O10000010O00000000001O01O000000001O00000000001O0000000001O1O00100O001O1O0001O0010OO1010O0001O001O001N1000O1000000000O1N2N2H8K5K5I8I9FUjR1OfUmN=L3M4L3O02M301N100O100O1O1O1O101N1N2O2M2O1N2NYWY31chfL7K4J5K5@`0000000000000000N2O1O2OO02O0O1O10001N1O100O1O1O1O101N100O100O1O1O2N1OjVOB\\h0>bWOE]h0:dWOF]h08dWOH\\h07dWOJ\\h04fWOLZh03fWON[h01cWO1\\h0OdWO2]h0MbWO4^h0KcWO5]h0IdWO7[i01O0O001O100000O10000O10002N2M3N1O001O00M3L3M4N2N2O1N2O10O0100O100O1O100O001N2N2N2O1N2O1O1N20O01O100O1O10O10N20000000000lG"}, {"size": [848, 480], "counts": 
"i9d1lh000000000O010000000O10000000000O100000O100001O0O10001O00001O00O0101O0000000000O02O0000O10O10O1001OO01000000O101O2M7J7H>Aojc0NTU\\O>B=D6J3M2M10000000O1000O10000000O10000000O100000O100000O1000O100000O10000000O10000000O1000O1000000O1000O10O10000000000O10O1000ZORWOJnh05UWOIkh07WWOFjh0:UWOGkh08VWOHjh07VWOJjh06WWOIih06YWOHhh07YWOIfh07[WOIeh06[WOKeh03]WOMch00`WO0`h0NbWO2^h0LdWO3^h0HfWO8Xi01O1M3N2O0O2N2N2O1M3N1O2M3M3N102M101001OO01000001O0O2\\OnVOHVi0KTWO4fi0N001N10O10000O1N20O01O1O10000000010O00000O10O01000O01O1O1O1OOYfc6"}, {"size": [848, 480], "counts": "X\\g57ni0?F9DAc0^O5M10O1001O01O0001O00000010O000001O01O0001O00000010O000001O0000010O00O1000001N100000000000000010O0001mNaWO3`h0FoWO1Qh0JXXO2hg0L\\XO2eg0JaXO1bg0MaXO0bg0LQP\\13TQdN?C:D;1N1O001O01O01O0001O00000O2O000M3F;M2O1N3M2N2O1N2O_OfWOnNXh0o0oWOoNPh0P1SXOoNlg0o0XXOPOhg0P1YXOoNfg0Q1]XOmNcg0Q1aXOmN`g0P1dXOnN\\g0P1gXOPOYg0m0iXOSOZg0g0iXOYO\\g0MeWObVOA]i0b063M2M3O00O1O1J510O2OO2O1N2O001O100N2O1O1O1O1O1N2O1N2O1N2O100O10000000000000000000000001O000O10001OO10O2O0O10001O000O3N2Nh0VOl\\<`0fbC4L1O000000000001O00O11N100O100O1000001O0O[G"}, {"size": [848, 480], "counts": "h9d1lh00000O10000000O1000O100000000000000O100000000O100000001O0O20O000001N1000000O10000000000000O010001OO10O10O1000O2O1O2M9H=ASkc0B\\U\\O?A>B:F6K1O1O00O1000O10000000000000O10O2O00O10000000000O010000000O100000O10000000O10000000000O0100000O10000000O011O00O1000O1000YOSWOJnh05UWOIkh06WWOIih07VWOJjh05WWOJjh06VWOJih06YWOIgh06ZWOJfh05[WOJfh06[WOIeh05\\WOLdh03]WOMch00`WO0`h0LdWO4]h0GgWO9Xi0O01O1K5O1N1O2N2N2O1N2N2M2N3N2N2O1O1O0100000000O2O001]OlVOHUi02RWOLji0O1N2O000000O10O01N2O1O1000000000O11O0O100000O0100N2N2O0M4M3O100O0100000000O101O0000000O1000001O001O00001O0O2O0M3L5L4MlZm5"}, {"size": [848, 480], "counts": 
"h[Q6`0ji08E:F;H7K5K5G:I7K3L4K5K6J5K6_Oa0G9L3N3M3M3N2N2O0O2O1N2O001O1N2O1O010O00001O10O00010O001O0010O00000000000010O00001eLaZOc1_e0WNT[O\\1ld0_N`[OZ1`d0cNg[OY1Yd0eNm[OX1Rd0gNS\\OU1mc0iNX\\OT1ic0iN\\\\OT1dc0kN_\\OS1ac0lNa\\OS1`c0jNc\\OU1]c0kNd\\OT1\\c0kNh\\OQ1Yc0nNj\\OP1Wc0nNk\\OQ1Uc0nNn\\OP1Sc0nNn\\OQ1Sc0oNn\\OP1Rc0oNP]OP1Pc0POP]OP1Qc0oNQ]On0Pc0QOS]Om0nb0ROV]Oj0jb0UO[]Og0fb0XO\\]Oe0eb0[O^]Ob0bb0\\Oc]O`0^b0_Of]O>[b0Af]O>Zb0Cf]O;[b0Ef]O:Zb0Id]O4_b00]]OGlb0:e31N1O101O00001O1O10O1O002OO30IVi0P1YNe1hNW1D;N2O2O0O10001O00O01000O100O1O100O011O0O100O100O2O0N4K4I8H9@b0kNb1kMXWO8Si`4"}, {"size": [848, 480], "counts": "^US7:Qj09I7J4K6L2N3L4M3M2N2M4M2N1O1O100O101O00O01000000000O1O1O1O2N1O1N2N2O1O1N2N2O100O10001N100O101O1N2O1O1N3N1N3N1N2N2N2N3M5JZjg3"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "dXX54[j03M2O2nYOFia0?o\\Om0ib0ZOh\\OP1Uc0SOe\\OQ1[c0TOo[O]O_Nd1ae0ROj[O^1Vd0eNe[O]1[d0dNc[O\\1_d0fN][O[1dd0fNY[O[1gd0kNQ[OV1Qe0nNhZOR1Ze0TO\\ZOo0fe00gZO`N]e0\\1lZOXN\\e0a1n1K5F:F>G9H7I6K5K3M2N3JY[\\6"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "TUT29Vj02N1O2O1N2O0M4QOn0M3O1001O0000000000000000001O0O100000001O0000000001O01O00000000000O101O01O00001O01O00000001O010O1O1O001O0010O001O001O001O0001O01O00001O000O100000000N2N2M3L4J6K5H9J7Gi^T1?i`kN=Ed0]O7L1N101O01O0000000000010O0001O000010O000001O01O0001O000000001O0001O01O01O01O00001O00001O00001O01O010O000010O00000010O0eNlWO:Th0_O\\XO8dg0EaXO:^g0DfXO:Zg0EiXO9Xg0DlXO9Vg0FlXO8Tg0FPYO8Qg0FSYO6gQV1JPgjN6L1N2M2M3N4K5H7J6O101O01O01O0O2O000M3H8O2N100O1O1O2M2O100N3N1O1O1O2O0O1O1O101O0000000000001O01N2N2Gk0XO=D6J5N3L4M3IVY8LRgG2N1O2N5K10000O100O101M2O100O1O2N0011O00O100O100O10000O0100000O1O1000O0100O10O100OO2\\X=@ogB6I6M3L4E;E:01O1O100L4K400100000000000aH"}, {"size": [848, 480], "counts": 
"h9c1mh00000000000O100000000000000O2OO1O10000O0101O1O0000000O1000000000001N10O11O00O100000O1000001O00O01000O11N3N2M5L7H;Dbed0NaZ[Ob0_O9F5L1O0O2O00000000000O10000000000000O1000O10000000O1000O10000000O10000000O1000O100000000O10O100000O1000000O10O10000000O1ZOQWOKoh04SWOKmh05SWOKmh05TWOJlh05VWOJih06XWOIih07WWOIih06XWOJhh06XWOJhh05YWOJhh05YWOKgh03[WOMdh02^WONah01aWOO_h0KgWO4[i000000O0O2K5N2N2O1N2N1N3N2O1M3M3N2N101O11O000O101O00000_OmVOESi06TWOGoh01XWONei001O1OO11N10O1O1O1O1O1O10000000000000O0101N00100O1O1N2N1L5N200O0100000O0101O00000O101O0000001O000O101O00000O2O00000O2O000000001O0O2O1O0O101O00000O2O00001O0O101O0O1O2N10TYZ5"}, {"size": [848, 480], "counts": "gbh6f0ci09G9D:C>ZOf0]Ob0B?I7H8I6I8L3N3M2N3N1O2N1O101N100011N:F2N10O00000N2O10001N1000000O2O00000O100000000000000000bLlZO`1Te0VNQ\\Oo0oc0kN]\\On0dc0oNa\\Oo0_c0nNd\\OR1\\c0kNh\\OT1Xc0gNo\\OW1Qc0fNR]OY1ob0eNS]O[1mb0cNU]O]1kb0bNU]O^1mb0_NU]Oa1lb0]NU]Oc1lb0\\NS]Oe1nb0YNS]Of1Rc0VNm\\Ok1ke00O101N100002Ld0WOooe3"}, {"size": [848, 480], "counts": "Z;j0fi000O1000000000O10O1000000000O010000000O10O100000000O1000O100000O100000O011O0000O10000000O1000O10000000O10O10001O0O3N1O1N3N5K4K`g?NUX@>N2N2O1O1O100000O1000O100000O10000000O10O100O100000O10O1000000O1000O100000O1000O1000O1000O100000O1000O100000O100000O10000000O10O1000000O010000000O1000O10O10000000O010000000O010000000O100000O0100000O10O100000000O0100000000000O010000000O10O1000000O010000000000O1000000O1000000O10O1000000O010000000000O01000000000O010000000O10O100000O10O1000O10O1000000000O010000000O1000O10000OO3M1O2N2M3O1O1N2O1O1NSeS5"}, {"size": [848, 480], "counts": "`Zm6=ih0_1lNUN^XOh2_f0U1\\Od0L4O2O0O100000001N1000000000O1O10000O11O000001N1N2N3L3L6G8D?UOn0TO^1VNmda4"}, {"size": [848, 480], "counts": "[TX7d0ii06K4L4M2N3M3M2N2N3N1N2N3N1N2N2O1N1000001O00O1000000O100O1O1O1O1N2N1O2M3N1O2O100O1O2N2O1O1O1N101N2O1O0O2N2N2N2N6J4KgZe3"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": 
"gcV51\\j04N1QYO6oL5^e0F[]OY1bb0jNT]O_1jb0dNb\\OFXNi1Ue0lNU\\OC^Ne1\\e0jN`ZOYO_1]2Rd0bNi[O_1Wd0dNe[O\\1\\d0eNb[O[1_d0iNZ[OZ1gd0hNU[OX1md0mNkZOT1Ve0POdZOQ1]e07bZO`Nae0W1Q[OXNWe0d1T[ObM`e0V2b1L5B>K8I9E8H8J3M4L3Ldj^6"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "Tam18Wj03N1N2N2O0O2N2VOi0G9O1O11O000000000000001N10000000001O000O11O01O000001O000000O2O01O00000001O0000000000010O000002N1O0010O01O001O01O00001O001O000000001O001O00000000000O1O1N2M3J6K5J6I8K5Ih^T1>l`kN9Gg0[O5L2N1O20O000000010O00000001O01O0001O0001O0001O0000010O00000010O00001O01O00001O00010O00001O00001O01O01O01O0001O000001O000001OO100N3M2H9A`0\\OfPo1I^oPNj0F5O20O10O10O001O0000O1O1G9N20O1L5I7K7I6M:D5L4N0O1O2N2O001N1OPj03l[N0_i08^VOI[i0`0`VOB^i0j0N10010O01O001O00001O001O0000000000000000O100O100O001O010N200O1O01000000O10000O2O1N1H8D>HTRc05lm\\O3M100O1O1O1O1O1000O0100O100O1001OUG"}, {"size": [848, 480], "counts": "i9b1nh0O01000000O11O0O10O10000001N100000O10000000O100000000000000O1000000000000O2O00O10000O02O2M4M2M8I;DQ`e0;e_ZO;D6K1O001N101OO10000000000000O10O2OO10000000000000O10O10000000O10O10000000000000O10O1000O100000000O100000O100000O10000O100YOQWOMnh04SWOKmh04UWOJlh06UWOIkh07VWOHjh07WWOIih06XWOJhh06XWOJhh05YWOKgh04ZWOKgh04ZWOLfh02\\WONdh00^WO0bh0N`WO1ch0GcWO9Yi01000O01N2L5M1O2O1N2N2N1O3M2M3M2O2N2O010O11O00000O10001]OoVOEQi08SWOFoh02YWOMei001O00001O00O0O2O1O1N2O11O0000000O100000O100O100O010M3L4K6N00100000O100O2O000000000O2O000000001O0O1000001O000O101O000O2O0000000O2O1O00001N1000001N10001N101O000O2O0000Tm`5"}, {"size": [848, 480], "counts": "^]R7>gi0>F8I7K4J5L6G8@a0nNQ1XOh0L5K5L3N3M2N2O2N1O1O2N100O101O0O100010O2N2N00000001N10000O100O101N1000000000000O1000000000000000TLg[Oa1Zd0VNX\\O`1hc0[Nd\\O^1\\c0\\Nm\\Oa1Sc0\\NS]Oa1mb0\\NX]Ob1ib0[N[]Oc1eb0\\N]]Oc1cb0[N`]Oc1ab0[Nb]Od1_b0YNc]Og1]b0XNd]Oh1\\b0VNf]Oi1\\b0UNe]Ok1^b0oMe]OQ2]e0O1000O10O01OL2I:G9G:FWbU3"}, {"size": [848, 480], "counts": 
"Z;i0gi00000O10O10000O100000O0100000O100000000O1000O100000O100000O1000O1000000000O10000000O1000000000001N2O2N0O3M4M5KSg?9cX@2M2O0O2O1000000O10O100000000O1000O1000O10000O1000O100000O010000000O100000000000O10O100000O1000O1000O10O100000O1000O1000O10000000O10O1000000000O10O100000O10O1000000O10O1000O1000O1000000000O01000000000O010000O10O100000O0100000000000O1000O10000000O10O1000O1000000000000O10000000O010000000O0100000000O1000O100000O1000O100000O1000O100000O10O1000O10O10000000O010000O10O10000000O10O100000000000O10O100N2L5HWdX5"}, {"size": [848, 480], "counts": "Rag69Ri0]1fNT1fNX1B?J5O1O101O000O10000000000000O10000M3I7H8L4L4N3M2M3O2M2O3M3M8Aa0YNkYh4"}, {"size": [848, 480], "counts": "QlP85Rj0?D8L5L4L3M2M3L4M2N3M201N2O000000000O1000000000000000O1O1O100N2O1O1N2M3O1O1O1N2O101N10001N101O010O001O001O1N3N1O2N2M3N2M4L5Hk^h2"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "foo43[j03N1O1kUOIo39fa04\\]Ol0_b0WOX]OQ1gb0XOg\\OR1Xc0POS\\O_ObNc1[e0ROn[O`1Rd0bNk[O^1Vd0eNe[O]1[d0eNb[O[1`d0gN[[O[1fd0hNU[OY1kd0lNnZOU1Te0oNeZOS1[e0VOZZOlN?c0\\e0V1nZO[N\\e0`1nZOoM_e0j1i1J6C=K7I8H5K6K4L3N1N2N2NUob6"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "dhb1:Uj02O1N101N2N2K5C<@`0M3O1000O100000001O0000000000001O0O1001O0001O0000000001O01O000O10001O00000000001O0001O00001O01O01O1O1O1O0010O01O0001O001O0000001O00001O001O0000000000000N2N2M3I6L5K6H9IYYU16bfjN;Eh0XO5N1O2O0010O01O000000001O01O01O01O0001O0001O01OO11O01O0001O0000001O01O01O0000010O00001O00001O00010O00010O00001O01O000000001O0M3K5I7K6H7K4I9HRXW2g0SghM=M2N2O001O01O0O1L4XOUWOJlh0MdWOK^h01fWOM[h0OkWONWh0KRXO1Yi0MXkc0?QT\\O9M5M3M2O1N201N0001O0000001O0O10001O0O100000001N1000000O100O1O1L5F9L3N2O3M3MY_71f`H2M3M3N2O1M3N2O1N1O2O1O1O1O1O1N2O0100000O0O2OSOfVOf0[i0XOjVOd0ci0M2O1O00000000000OUG"}, {"size": [848, 480], "counts": 
"h9a1oh001O0000O1000O010000O1001O00O010O1000000000000O010O100000O101O0O3N2N4eNlVOQ1bi0oN`VOb0hYf0Oo_ZO>C6J6I3N1O0000000O01001O00O1000000000000000000O01000000000O100000O100000000000O1000O010000000000000O01000000O1000000O100000000YORWOKoh04TWOJlh06TWOJlh06TWOJlh05UWOKjh06VWOJjh05XWOIih07WWOIih06XWOJhh05YWOKgh04ZWOLfh02\\WOMdh02^WONbh0OaWO1`h0KcWO5\\i0000O1N2N1O2O1M3O1N2M3N1O2N2M3N2M3O0O20000O11N10001O0O1@iVOGXi03PWOJPi0OXWO0gi0M1O0O100000O1N2O1N2O1O10O10001O1O000000O100O1O00100O1N2K5K500O1001N100000001O0O1000001O0O1000001O00000O101O0O1000001N101O00001O000O2O000O2O00000O2O000O2O00001N101O001N10a[h5"}, {"size": [848, 480], "counts": "gbl6:mi0:I8H7F9H9G9G8J5FI8N1O100O10001O0000000000O100O100O100O1N2O1H8F;]Ob0J7K4M3M4M7G>XOR1YNnaP5"}, {"size": [848, 480], "counts": "PiZ8l0^i07K5M2N2O1O2M2O001O100O1O100O10001N100000000O010O01O100O100N2N2O1O1O002O0O100O2N1O2O001O0101N01O1O1O1O1O1O2N1O2N1O1O2N2M4JV\\_2"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "ffg43\\j02N2kYOJmK6ge02\\]OT1`b0nNX]OZ1eb0UOi\\Oo0Wc0UOa\\Oo0_c0UOT\\OU1kc0mNk[O[1Ud0fNg[O]1Zd0dNc[O\\1^d0gN][O[1dd0iNU[OX1ld0mNmZOT1Ue0POdZOQ1^e04dZOcN]e0X1nZO]NXe0^1o1K6H8G?F7K8I4J7J5J4L4M3M2McRl6"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "U[V17Wj04N1O1N101N2N1N3C<@`0K6M110001O000000001O000000000000001O000000000O2O0001O000001O0000000000001O000000001O01O00000001O01O1O1O1O1O010O1O01O00001O00001O001O00001O000000000O10000O1N2M4J5J6I8I6I8Hee67XZI4L3N3L3O2N1003MW[f0BmdYOb0Ch0ZO5L4M2O000000000010O0001O0000010O001O01O00001O01O0000001O01O0001O0001O01O0001O01O001O0000001O0010O000010O00001O0001O0001O00010O0000001O0001O0000000000001O00001O0N2L4K6H7K5K6J7GPmU3HYSjL6A?M3001O00000000000O101O0O100000O11O1O1O1O0O1O2N2N0O2O2O00O01N4L2O2M3MPh?KVX@1O1O1O1O1O1O1O1O01O01O1O100000O1000O10000000OQG"}, {"size": [848, 480], "counts": 
"d9c1mh01O00000000O1000000000000O10O100001N2O4K=Da0ROTVO2\\_e05eZ[Of0ZO8I6J3M1O000O10O100000000000000O010001OO1000000000O010000000000000O10O1000000000000O10O100000000000000000O010000000O1000000O100XOSWOMmh02TWOMmh03TWOLkh05VWOJjh06VWOJjh05WWOJjh06WWOIih07WWOIih06XWOJhh06XWOJhh06XWOIih05YWOKgh04ZWOLfh02\\WONdh0N`WO2_i00O100000OM4N2M3O1N2O0O2N2N2M3N2M3N2N101O10O10000O2O00000_OkVOGUi05SWOFoh02ZWOLfi0O1O0000O010O1O1N2N2O1O1O11O1O00001O0O10O100O100O1O1O1O1K5K500O10O10000000000O101O00000O101O0000001N1000001O00000O101O00001N10001O001O0O101O000O2O000O101O000O101O0O101O0O2O000O1000aon5"}, {"size": [848, 480], "counts": "Uoj6:Pj08I6J5K5J6I7I7I8H7J7J5L6nN_MkYOh2le0R1N3L4N001O1O010O1000000000001N11O00O1000001O000O100O101N100000001O00000O101O0000000O101O01O000001OkLbZOX1^e0_NV[OV1jd0fN\\[OX1dd0eN`[OZ1`d0cNf[OZ1[d0cNi[O[1Wd0bNm[O]1Sd0aNP\\O^1Pd0`NT\\O^1lc0^NY\\Oa1gc0^NZ\\Ob1gc0[N\\\\Oc1fc0[N\\\\Od1dc0[N]\\Od1ec0[N\\\\Od1ec0ZN]\\Oe1cc0ZN_\\Od1dc0YN_\\Od1Yf0N3L2O1lNoVOf0Vi0QOQWOk0_i0K6IPeY3"}, {"size": [848, 480], "counts": "W;h0hi0000000000O01001OO100000O10000000O100000002M3N1O1O2M6K1N5J[g?5WX@;M3N3N10000000O010000000O10O100000O100000000O0100000O1000O010000000000O0100000000000O10000000O1000O10000000O1000000O0100000000O0100000000O010000000000O010000000O1000O10000O0100000000000O010000000O10O10O100000O0100000O010000000O10000000000O10O100000000000O10O11O0000000O10O11O0000O010000000000O0100000000O0100000000O01000000000O1000O10O1000O1000O10000000O1000O1000O1000O1000O1000O100000O010000000000O1000O1000O100O2O00O1000000000O2OO02L4L`]^5"}, {"size": [848, 480], "counts": "Q_T6S1Xh0X1eNY1[Od0E;M4N10000O101O00000000000000O0100000000001M2O1O1N3N1M4H8B?@e0POc1jMfZ[5"}, {"size": [848, 480], "counts": "ikZ88Qj0:I5L3N3N1N2O2M2N2O1O1O1O2N1O1O1O2N100O1000000O100O0010O1O1O100O1O1O1O2N1O2O001N101N1001O0100O001O001O000O2O2N1O2N1M3M4I6KmY_2"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": 
"fh]43\\j02N2QYOQ1Zb0SO^]OT1`b0nNV]OZ1jb0oNi\\OV1Vc0kNW\\OBZNi1_e0nNl[O`1Td0cNg[O_1Yd0dNc[O\\1^d0gN][O[1dd0iNV[OW1ld0lNnZOU1Se0QOeZOQ1\\e0TO\\ZOm0ee0OmZO`NWe0[1Z[OhMWe0R2g1I8F;I9J:F5J7H6L4M2M2O2L4M_PV7"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "cbk08Wj01O2O0O3N1N1O2M2Ede0ZOdYOmNo0f1]e0ZOlZOd0Ue0XOP[Oe0Qe0XOR[Oh0nd0UOV[Oj0jd0TOY[Oj0hd0TO[[Ok0ed0SO][Om0dd0PO_[On0bd0PO`[OP1ad0nN`[OQ1ad0oN`[On0cd0POa[Ok0bd0SOc[Og0`d0WOf[O1od0JbUa3"}, {"size": [848, 480], "counts": "T;i0gi0000O01000001O1O2M2O1O1N3N5JVg?8aX@2O1M3N200O1000O1000000O10O100000O10000000000O10O10O10O100000O10000000O10O1000000000000O1000O1000O100000O10O10000000O100000O1000O100000O100000000000O10O10000000000O010000O10000000O0100000O10O1000000O01000000000O0100000O10O10000000000O010000000000000O100000O1001O00O10O2OO100000O10000000000O10O10000000O010000000O1000O10O1000000000O10O10O10000000O10000000O1000O1000O100000O10O1000O1000O100000O100000O1000O10000000000O10000O10000000000O0100000O10000000000O1000000000000O100001O01N101O001N2N4M2LY_T5"}, {"size": [848, 480], "counts": "baj5=Ti0U1cNY1QOn0ZOe0I7N3N1000001O0O1000000000O10000000000O10000O1O2N1N3L3K7B=@b0^Of0oNlXe5"}, {"size": [848, 480], "counts": "Z`X8d0hi06M1N3N1O2N1O1O1O100O2O0O1O2O0O10000O011N10O011O001N1O10001N1O100O2N2O001O1O001O001O001O101M2O2M6I7FUhj2"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "TeT44U4;ba0G_]OZ1^b0gNY]Ob1fb0_NV]Od1ib0aNb\\OKSNf1[e0iNY\\Oe1gc0fNj[O^1Vd0dNf[O^1Zd0dNb[O^1_d0dN][O]1cd0hNW[OX1kd0lNnZOU1Se0QOeZOP1]e0o13PNjZO`N[e0\\1oZOnMae0k1g1K5E;I9I:H6J4L5K4M2M3M2N2N2NRT_7"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": 
"`Yc08Xj02M2O000O2N2O2L2M3[Oe0F:N3N11N10000000001O0000001N1001O00O10001O0000000010O00000010O000O100000010O0000000001O01O000001O00100O1O001O0010O00001O0010O01O001O0000001O0000001OO1000O101N1N2K5J6K5I7K6K6Gfe60^ZI6K2M4L3O2N2O05J`Ug0JajXOmiFg0ZOe0I7L3O110O001O001O0000001O01O01O0000001O01O0001O000000010O001O000010O0000001O01O000001O000000000O1I7J7J5N2O1O1O2O00000O101O000000001O000000001O000O100000001O0O10001O000O1O3_O]o9_O`QF4M2L4I7J6M3M300O11N10000000000O1000000000002N2M4M3M6I5Kjm95mQF5010OO010000000000000000000O10O10001O000O100000001N10001O0O2OO100001N1000000O10001O0O10001O001O0O1O2N\\`5"}, {"size": [848, 480], "counts": "QPe0:Sj0e0]OLmmA?A1O0O2O10O0100000O10O0100000000O1000000000O0100000000O01000000000O1000O10000000O01000000000O100000O100000O10000000O010000000000000000O100000000000000O010000O10O0100000000O010O1000O10O10000000O1000000000O100000O1000000000000O10000000O100000O10O1000O1000000000000O010000O11O00O0101O00O10O1000000000O1000O100000O100000O1000O1000O1000O10O10000000O10O1000000000O010000000O0100000000000O01000000000000O010000000O1000O010000000000O10000000000O100000000O100000000O0100000000000000000000001O1N2O1O0O2O3L[S[5"}, {"size": [848, 480], "counts": "bmc5:2Inh0`1ROl0mNS1[Od0B?M2N2000001N1000001O00O1000000O1000000O101N1O1O2M2M4I7D<@a0^Oe0VOllk5"}, {"size": [848, 480], "counts": "hYb7e0gi08K3M3M3M2N1O2O0O101N100O10000O10001N10O11OO01000001O0001O000O2O1O00001O01O01N1N3N2M3N2L4L5L4K6Jjcb3"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "dak34Yj05M3N3mXOj0eb0YOQ]OS1kb0POm\\OV1Rc0nNW\\O@_Ne1Ze0nNS\\OB]Nb1_e0QOo[O_1Qd0dNj[O^1Vd0eNf[O\\1[d0eNa[O]1_d0fN][O[1dd0hNW[OX1kd0lNnZOV1Se0nNgZOR1[e02eZOfN^e0V1mZO]N[e0]1oZORN]e0i1j1I6E;K6K8H7I6K5K5K2N3M2M6Hebf7"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": 
"]o?4[j03N1N3N1N3M2N2M2I8]Ob0H8O100000O1000001O00000000001O0O10000000001O000O110O000001O0001O00000O110O00000001O01O00000001O010O1O1O1O1O010O001O01O010O1O00001O00001O0000000000000000001N1O1M3K5K5K4K7I8IT`7LQ`H7K2M3O2N1001O:EWUg07^jXO=Dg0ZO4N2N101O01O0001O1O00010O000001O01O01O0001O0000010O00001O00010O00000010O01O000000001O0010O0O11O01O01O01O01O00001O01O000010O0000001O000001O0000000000001O00001O000000001O000000001O000O1O101N1N2C>G9]OWP:D\\VE5^i0k0YOf0F9M2O2O0O101O001O0001O01O00001O010O0000001O0001O0001O0010O00000001O01O000001O00010O000010O000O1O1F:L4L5L3O100O1O101O0O1000001O00000O10001O000000000O20O0000000000O2O01O00O2O00000O1J6EK41M2O100001O00O10000O2O0000001N10000O2O00000O10000000001O0O101N100000000O2O000O101O00000N`U7"}, {"size": [848, 480], "counts": "jea033=fi0b0E9G3M1O1N10O11O0000000000O100000O10000000O100000000O1000000000O10000000O1000000000O01000000000O1000O1000000000000O010000000O10000[OoVOKQi03QWOMnh03TWOLlh05TWOImh06UWOIkh07UWOIkh06VWOJjh05XWOJhh06XWOJhh05YWOJhh06XWOJhh05YWOKgh04ZWOLfh03[WOMeh02\\WOMdh02^WONbh0OaWO1`h0JdWO6[i00O1O1O1N2M3N2N2N2N2M3N2N2M2O2N2O1O1O10O10001O00000_OmVOETi07PWOHPi04TWOLmh0NYWO1ei0N1O000000O100N101ON12O1001O1O00000000000000O010O2N1N1O2M3M3M3O1O1001O000O101O00000O2O0000001O000O101O00001O000O101O00001O000O101O00001O00001O0O101O0000000O2O00001O0O101O001N101N101O00TQb6"}, {"size": [848, 480], "counts": "gkT6=Qj06I5K7J3M3M3L4L3M4L4K5L4L4L4L4K4M4K5L4L3N3M3L4M2N3M3N2N1N2N3N1O2M201N1O2O0O101N10001O00001O000000010OO2O1O2N3L3M2N2M2O1N2N3N1O1O1N3_N[YOSOgf0h0bYOQOaf0m0bYOoNaf0o0aYOoNaf0o0aYOPO`f0o0aYOnNcf0o0_YOoNcf0n0aYOoNaf0n0b1N3N1N200O2N1N3N3L4L3N3M2N2N1N4KeZm3"}, {"size": [848, 480], "counts": 
"`R`0a0ni02N1N2N110O01000O10O10000000O10O10000000000O0100000O1000O1000O100000000O10O1000000000O01000000000O10O1000000000000O10O1000O10O100000000O10O100000000O0100000000000O10O1000000000000O1000000000000O100000O10000000O10000000000O100000O100O010000000O1000O10O100000O100000000000O010000000000O01000000000000O0100000000O10O1000O1000O100000O10O10000000O100000000000O0100000000000000O10O1000O10O10000000O1000O1000O1000O10O1000O0100000000000000O100000000000000O100000O100000O1000000000000O10001O0O110O1N3N1O1O2M2N1O\\h\\5"}, {"size": [848, 480], "counts": "nb`5P1fh0m0SOk0UOj0[Of0B=N2O100O1000001O001O0000O100000000O1000000O2N2N1N2N3K5D=Aa0WOn0kNoVo5"}, {"size": [848, 480], "counts": "fd`73Wj0?E5J7K4K6fVOjNQi0\\1N1O102N1N2O1O0O10001N1000000O100000001O000000O2N1N21O00O101O0O2O0O2N1O2M3M3M3M4L4I7K6Ja^c3"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "_Wh38Vj04N3lYOGja0;R]On0hb0TOQ]OS1mb0POm\\OT1Qc0QOi\\OR1Vc0ROS\\Oa1mc0bNn[O`1Rd0bNj[O`1Wd0aNf[O`1Zd0cNb[O]1_d0fN\\[O[1fd0hNT[OZ1md0kNlZOU1Ve0POcZOo0`e03fZOaN_e0Y1lZOUNae0e1j1H8E;H8I8J5L4K5L6J4L3M2O2LXgj7"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "m_=1\\j08K1N101O0O2N1O3K4G8YOg0N2O10000001O000000001O000000001N1000000000001O000000010O00000000001O00000000001O01O000000001O01O01O1O1O00100O1O10O00001O010O1O00001O0000001N1000000000000O100N2L5J5K5J6J7I6LfSV1?kkiN;Ff0[O3N1001O000001O00010O00001O01O0001O01O01O000010O000000010O01O0000001O0001O01O00001O01O01O00000000010O000010O00000010O00001O01O01O00000000000001O000000001O0000001O0000000000001O0O10001O1N1L4O2K4C`0TOkU98WjFa0C<@?K5M4N101N101O01O0001O00001O1O01O000001O0001O0001O10O000001O0001O01O0000010O00001O00010O0000O100N3E:K5N2N2N2O1O2O000O10001O000000001O0O100000001O00000000001O0000000000001O000000000O2O0000000000001N10001N100O1B?K4LhY80[fG100O1N200N2O1O1O1N2O1O1O010000M3B^S>KnlA9J4M3O010000O2O000000001N10001O0O1000000O2O00000000001N101N10000O101O001O001N1000000000oZ6"}, {"size": [848, 480], "counts": 
"gP`0l0ci09H8H3M0000001O0000O02O00O100000000000000O100000O1000000000O1000000000O10000000O100000000O1000O1000O10000000O100000O10000000O10O10000YOTWOJlh06TWOJlh05UWOKkh05UWOKkh04VWOLjh04WWOJjh06VWOJjh05WWOKih05WWOKih04YWOKfh05[WOKfh02\\WONdh01]WOOch0O_WO1ah0LbWO3_h0JdWO6\\h0IeWO7[h0FhWO:Vi0100N1N3M3N2N2N2N2N2M3N2M3M2O2O001000O101O0000010\\OPWOFPi06UWOIlh04WWOKkh0M[WO3bi0O00000N20O100O1O1L4000O2O001O001O0000O1000O010O01O1M3N3K4N1O2O1O10O1000000O2O001O001O0000001N1000001O00001N1000001O00001O0O101O0000001O0O10001N101O000O101O000000000O2O1N10010N1O2OW[e6"}, {"size": [848, 480], "counts": "Rjb6j0`i08A?H7M3M3M4L4M1N3M3N2M3N1O1M3O2M2N3M2N3L3M4M3N1N3M3M2N3N1N3M3M2O1O2N100O1O2O0O10000O2O00000000000000O101O001O1O1O1N2O1O1O1N101N1O1O1N2O002O0N2O1O1N2M3K5N2N2iNgXOVOZg0a0[YOQOgf0j0`YORObf0j0eYOPO^f0o0cYOoN^f0Q1eYOiN`f0T1_1M3M2O2K6N2M2O2M2O2M2M4N4KehX3"}, {"size": [848, 480], "counts": "^]>`0oi02N00100M3O10000O01000000O10000000O010000000000O0100000000O010000000000O10O100000000000O10O10000O10O1000O10O10000000000O100000O100000O10000000O10000000O100000000000O10O100000000000O010000000000O10000000O10O10O100000O10000000O10O100000O100000O1000000000O10O1000000000O1000O1000O0100000000O1000O1000000O010000000000O100000O100000O100000000000O0100000000O1000O1000O10000000O1000O01000000000000O0100000O10O1000000000O010000000000O100000O10O1000000O11OO10000O1000000000000O10O1000000001O001N101O1O2M4M1M`R`5"}, {"size": [848, 480], "counts": "oS^54ai0V1hNo0ROl0SOn0@?K5O1O1O2O000000000000000000000000O1000001N1O1O2N1M4L4K5^Oc0Ab0ROY1fNgkP6"}, {"size": [848, 480], "counts": "W]o7b0li04K5L4M2M4M3M4L3M100O1O1O10001O00000000000000001OO2O00000O1O1O2M2O1M4M2O10001O001O1O1O1O100M4L4K6JleT3"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "]bf39Vj03M3PYOh0db0[OQ]OQ1jb0SOm\\OT1Rc0QOU\\OYOgNi1Te0QOQ\\Ob1mc0aNR\\O^1nc0dNn[O^1Rd0dNk[O]1Vd0dNf[O^1Zd0eNa[O\\1`d0gN[[O[1fd0hNU[OX1md0nNjZOS1Xe0P22QNjZO^N^e0\\1mZORN_e0g1l1H8E9J7H:I5K7J4L8G5L5IXQn7"}], [{"size": [848, 480], "counts": "P`]<"}, 
{"size": [848, 480], "counts": "\\e<5Yj04M2O1O0O2O0N3J6UOk0J5O1O100001O00000O101O00000000001O0O10000000001O0001O01O01O00000001O0O100000001O01O000000000010O0001O1O00100O1O001O10O0010O01O001O001O0000001O0000000O1000000O1N2L4M4H7K5K5I9KeSV1Pj04L200O001N2000O10O100000O1000O100000O100000O10O10000000O0100000000000O010000000O1000O10000000O10O1000O10O1000O1000O10000000000O1000O100000000000O100000000000O10O100000000000000O1000O1000O100000O10O10O1000000000O10O100000O1000O100000O100000O10000000O10000000O10O100000O0100000O010000000000O10O100000000O10O10000000000O10000000O10O100000000O10000000O10O1000000000000O01000000O01000O1000O010000000000O10000O0100000000O10000O011O000O01000000000O1000000000000O1000000000000O0100000000001O001N2O1O1N2O3L3Mol`5"}, {"size": [848, 480], "counts": "dX]5Z1Th0S1POP1QOn0E;O2N1O100000000O10001O00O1000000000000O2N100O1N3N1N2M4I7@a0_Od0ROW1lNg`R6"}, {"size": [848, 480], "counts": "WdR8d0ii06L3L3N3M2N2M3N2N2O2M4M2M3N1N10000O100000000O11OO10000010O0000010O01O0001N10001N2N2M3N2M3N2L3N4I7L5G`dP3"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "jge34[j03hYO7ea0MZ]OU1ab0mNc\\OGZNc1Qe0kN^\\OJWNa1Ze0QOR\\Oc1mc0_NP\\Ob1Pd0aNl[O`1Td0bNi[O_1Xd0bNe[O^1\\d0eN_[O]1ad0gNZ[OY1hd0kNR[OV1od0mNlZOS1Ue0ROdZOo0^e01lZOaNWe0[1R[OVNYe0e1l1I8F=F8J8H;G5J7J5L4K3LZfo7"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": 
"jj;8Wj03N1N2N101M2L5VOj0I6O1000000000000001O000000000O2O0000000O2O00000000010O0001O01O0000000000000O2O01O0000001OO2O01O001O001O10O01O1O1O0010O10O01O001O00001O001O0000001O00000000000O1N2L4K5I8K4J6J7Lee6GdZI7H5L4M2N3N100006J8FdZf0=mdYO`0@c0_O3M201N1001O010O0001O00000100O01O00001O0001O01O0001O01O0001O01O00001O01O000001O001O00001O01O01O000010O0O1010O00001O01O000001O01O01O0000000000000001O0000001O0000001O000000001O000O1000001N1O1N3G9D<^OoU9EYjFL_VOc0oh0g0D=E:O1O2O0O101O1O000001N1O1M3N2N2O1N2N3M2O1N2O1O1O1O101N100O2O0O1O1O100O2N100O100O101O0O10001N2O2M3N2N2N2M3N2N1N3L4Lgdn0OY[QO4J7I6M2O2O010O001O1O001O001O1O001O1O001O0001O1O1O000O2O00000O3N0O2N2LjWb0_Oih]O1DEROkVOT1Pi0:K4L4M2N3L3M4L3TOWNnXOl1mf0o0K5K4N2O1O2N1O1O1O2O0O1000000O2O000O10000O101O000000000001O0000100N101O10O000001OO1000000000000O10000000O0100000000000O010000001_MmYOc0Sf0[OSZO`0ne0\\OZZO`0fe0^O`ZO>`e0@X[OJid03][OHdd06a[OG_d07f[OF[d07m[OAUd0=Q\\O]ORd0`0P3N3M2N3N1N3M5Kak`2"}, {"size": [848, 480], "counts": "mm;b0mi02N1O1O001O10O100000O10O10000000000O010000000O10O1000O10000000O10O10000000000O010000000000O0100000O10O1000O1000O1000O10O100000000O1000O10000000000000O0100000000000O1000000000O1000O100000000O100000O10O1000O10000000O10O100000O1000O10000000O1000O10000000O100000O1000O100000O10O100000O01000000000O01000000000O0100000000O100000O10000000O100000000O100000O100000O100000O100000O10O1000O10O10000O100000O10O1000000000O010000000O1000O0100000000O100000000O1000000000000O1000O1000000000O10O11O001O001O1O1N2O1N3MRbb5"}, {"size": [848, 480], "counts": "cc[5Y1ng0\\1fNX1YOf0M4M2O100O1000010N100000000000O1000000O10000N3N1O2N1N3K4L5^Ob0]Oi0oNd1SNjVO:\\nT6"}, {"size": [848, 480], "counts": "hWY8>oi07I5M4K3M4M2N2N2N3N1N4L4M1O1N10001N100000O100000O1000000000000001O00000000001O001N1O2M3GTWOkNnh0R1UWOkNnh0S1;K4L8^Ofkj2"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": 
"Zmd36[4O^a04_]OR1[b0SO]]OT1`b0nNd\\OB^Nf1nd0jNaZOK`11cN`1[e0SOP\\O`1Pd0bNl[O`1Td0bNi[O_1Xd0cNd[O^1\\d0eN_[O\\1cd0gNX[OY1id0kNR[OV1od0mNlZOS1Ve0ROaZOP1`e01kZObNWe0Y1R[O^NSe0\\1U2I:D:H8I9J7H5L6J4K5L2N4Kj`P8"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "]f71\\j09I10001O1N2N1O2J6SOl0M4N1000O101O000000000000001O0000001N100000001O00000000010O000000001O000000000000001O01O000000001O010O1O001O1O100O0010O0010O01O001O00001O0000001O0000000O1000000M3L4L4J6K6H7K6Lee6IbZI6J4M2N3L20100104KZUg01`jXO;G=C`0B2N10001O01O0001O0000010O000010O0000010O01O000001O01O0000010O000000010O001O00001O01O01O00000010O0001O01O00000001O01O0001O01O01O0000000001O01O0000O1000001O0000001O000000001O0000000O2O0O1O2N1M4D=YOZP:LloEd0ZOf0H8K4O101O00001O001O0000001O00000010O01O0000001O010O0000000001N100O2O0O100O100O2O000O100O101N10000O1O1N3L3N2M3O100oN_WO2ch0HfWO4Zh0IlWO3Uh0IRXO3Wi0O\\nQ11_QnN4E;D<00O10000000O10000000001O001O0O2O001N100000001N2O1GdVg0EmiXO2M3M3N2000O1000001O00000O10001O00000O101O000O100000001N101N101N10Rd>"}, {"size": [848, 480], "counts": "WW:e0ki0me0]OZZOD:I7K8G7K4K5K4K7IXdY8"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "j7:Uj02O001N101N2E;UOk0N1O1000000001O000000000O2O000000001O0000000O2O0001O000001O0001OO2O0000000000001O0000000001O00000010O01O001O2N100O1O0001O010O001O001O001O0000001O000000000000000O1M4J5M3J6K5J7G\\nV17`QiN:Fj0YO5L2N11O00000010O0000010O000010O0001O0000010O00000010O000010O0000010O00001O000001O01O1O00001O0000010O01O01O00010O000000010O0000000000001O000001O00000000000O20O000001O000O1000001O00000O101N1N3L3G:D?YOlU90niFk0\\Ob0J7L3N2011N3M101N1O1O1O01O01O01N10001O000O1O2M2O100O2O0N2O2N1O2O0000001O000001O01O00001O0001O0N2VOVXOiNog0R1j0L5E;I_f[4"}, {"size": [848, 480], "counts": 
"X9>Rj0:F8H8H7I1O0O101O00O10O1000000000000000O10O100000000000O100000000O0100000000000O10000000O10000000O100000O10O100000000O010000000O1000000[OQWOHPi07QWOIoh06RWOJnh05TWOJkh06VWOJjh06VWOJjh06VWOJjh05XWOIih07WWOIih07WWOIih06XWOJhh05YWOKgh04ZWOLfh03[WOMfh0O]WO1dh0J`WO5bh0GaWO9ah0CaWO=Yi00O100O1N2M2O2N2N2N2N1O2M4M1N3N2N2N1010000001O000O2O0\\OQWOHnh04XWOJih03YWOMhh0O[WO1ci0O0000000O10O100O1O1N2O0101N1000000000O100O10O10O10OO3L3N2M3N2O00100O0101O0O1000001O0O2O0000001O000O101O00001O000O101O0000001O0O101O00001O0O1000001O000O2O000O2O00001N101N2O0000XXT7"}, {"size": [848, 480], "counts": "San7;Rj06H7G8H8TOk0eN`N_YOh1\\f0Y1J5N3N1N3N2N1O1O2O0O2O0O101N10000O10001N1000000001O0001O0001O00100O00001O000000000000000000O2O000000000O100000O100000O100000000000000001O0O10001O00000UM^ZOg0ce0SOeZOk0\\e0POlZOj0Ve0QOR[O>]e0[OlZO2de0KaZOMee00`ZOCle08Y2M4KYlR2"}, {"size": [848, 480], "counts": "o:c0li01O0O2O1O1O10O10000O010000000O1000O10O1000000000O10O10000000O0100000000O01000000000O1000O100000O100000O100O10O1000000O1000O100000O10O10000000O010000000O10000000O0100000000000000O100000O10000000O1000O10000000O1000O10O100000000O010000000O10O100000000000O100000O100000O10000000O1000O1000O1000O100000O01000O1000O100000O1000O1000000000O1000O100000000O100000O010000000O10O1000000000O010O100000O10O100000O10O1000000000O0100000O10000000000O100000000O10O010000000000000000000000O01000000000000001N2O2N002M3M4K`Zm5"}, {"size": [848, 480], "counts": "fjP581S1cg0e1hNV1^Oa0M2O100O2O01OO100000001O00O1000000O10001O0O100O2N1O2M3L3K7]OhYO]Lif0k2T1iNd1]Nem^6"}, {"size": [848, 480], "counts": "^\\f8i0di07I5L3M3M3N1O2O1N2N1O2N101N100O2O0O101N1000000O1000O10001O00O10O2O0O2N101N2O10OO2N2M3N3J5K7E:Jm[_2"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "m_X36Xj06K6oYO_Oia0j0m\\O=lb0GP\\OVOoN_1nd0\\Oo[O\\OiN_1We0VOn[O`1Rd0bNk[O_1Ud0bNi[O_1Wd0dNf[O\\1[d0fN`[O\\1`d0hNZ[OZ1gd0hNV[OW1kd0nNnZOS1Te0POfZOR1Ze0Q22PNhZOdN[e0V1oZObNUe0W1[[ORNSe0h1Q2D;I7K6J:F6K5K4L3M2N2O2Lk^Z8"}], [{"size": [848, 480], "counts": 
"P`]<"}, {"size": [848, 480], "counts": "i7=Sj00O2M3]Oc0^Ob0M3O00O1000001N1000000000001O0000001N1001O000001O0O1001O000010O000O101O000000000000001O0001O0000000010O01O1O1O100O2N01O01O01O001O001O00001O00001O00000000000000000O1M3K5J6J6K6H8IiSV19PliN9G>Ca0A2O0000001O010O0000001O000100O000001O001O01O0001O01O0001O01O000001O01O01O0000001O01O01O00001O0001O01O0001O01O00010O0000001O01O0001O00000001O000000000000001O0000001O00000O10001O00000O101O0O1N3I7^Ob0\\Ono9EfVEd0mh0i0A`0J5N3O000O2O0010O000000000O1N2M4L3N2O1O1O1O1N2O1O1O1O2N1O1O2N1O1O1O1O1N3N1O1N2O1N2O2N1O100O1O2N1O1O100O1O2O000O1000001N100O1O2O000O2O00000O100O10001N1OgYj3"}, {"size": [848, 480], "counts": "W9^1Ri01O1O1O001O00O100000O1000O100000000000O10000000O100000000000O10O1000000O11O00O1000O10000000O1000O10O100000000O010000000O1000000O100ZORWOJnh05SWOKmh05TWOJlh06TWOJkh07UWOIkh06VWOIkh06VWOJjh06WWOIih07WWOIih06XWOJhh05YWOKgh04ZWOLfh03[WOLfh02\\WONdh0N`WO2`h0JdWO6[i0000O100O1N2M3M2O2N2O1N2M3N1O2M3N2N2N2O10O011O001O000O2^OnVODRi08SWOGnh04VWOLkh0N[WO2bi0O00000000000O0001O1O001O1000000000000000O0100O100O00100N2L4K5O100O10O101O0O100000001N10001O00001O0O101O00001O0O10001O0000001O0O101O0000001N10000O2O00001O000O101O0O101O1N101OXbW7"}, {"size": [848, 480], "counts": "YYk7:mi0b0QNWO`YOP1Rf0HVYOe0bf0i1L4M2O1N3L3O2N2N2N1O2O0O100O2O0O10001O0000000O101O01O001O1O010O1O001O0010O00000000000000000000000000000O10000O10O10000000000O1000001N10000O100O2^MnYOd0Rf0YOSZOd0oe0TO[ZOKcf01fYO@df0=j1L4L4K5L5JcR[2"}, {"size": [848, 480], "counts": 
"o:a0ki04M210O10000000O100000O1000O100000000O10O1000000000O1000O100000O1000O1000O1000000000O0100000O1000000O010000000O0100000000O10O100000O10O10000000O100000O01000000000000000000O10000000O100000O1000000O01000000O1000O1000000000O0100000000O10000000O10O10000000O100000O10000000O10O100000O10O100000O10O100000O0100000000O10O10000000000O01000000000000O10O100000O100000O10O100000O10O10O100000O1000O1000O010000000000O01000000O100000000O1000000O1000O10O100000000000000000000O1000O100000000O1001O1O001N2O1O1N3M2N4LoTn5"}, {"size": [848, 480], "counts": "Z\\n48Yh0[2\\NY1WOi0L3O2O000O10001O000000000000000000O100000001N1O1O2M2O2M3L3J:SOQ1QOf1mMnVO5RVb6"}, {"size": [848, 480], "counts": "he`8b0hi07L4L5L3M3O1N3M3M2O0O2O1N101O1N2O1O1O0O10010N1000000000O100000001O0O2O1N1O2L4M3M3M3M4L5H7G=@Tme2"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "]PV33\\j07J2M3mXO3_M@^e0d0UZO^O_2[1Sc0@d\\Oh0Zc0YOT\\OROfNh1Ve0XOk[O^1Td0cNj[O^1Vd0eNe[O]1[d0eNa[O]1_d0gN[[OZ1gd0jNS[OW1md0lNoZOT1Se0oNhZOR1Xe0SO_ZOP1ce0OdZOiN^e0Q1oZOeNUe0V1T[O`NRe0V1[2F9I8J6K8I6J6J6J3M2N3M4KXn\\8"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "h7>Qj02H9SOl0L4O0O10O1000001N1000000000001O0000001N1O11O0001O00000O11O1O01O000000001O000O1001O000001O000001O0000010O1O1O002N10O01O10O0010O0001O001O00001N10001O000000000000O11M2N2K5J6J6K6I7JjSV11UliNL5N1O101O0000001O010O0000001O00010O00001O00001O01O01O00000010O000001O01O0001O00010O00O10000O101N1I7J6M3L4N3N10000O101O0O1N2\\Oe0J5N2N3N1N2O1O100O1O2M20000OXkh01gTWO8Hf0[O1O00O1000001O001N101O000O101O0000000O101O00000000001N1000001N100001O01N001O1O1N2N2N3M2M3N[fi1"}, {"size": [848, 480], "counts": 
"W9b1nh00000000000O0100000000000O1000O100000O10000000000000O010000000000000000O10O1000000000O10O10000000O1000O100000O1000000O100000000YORWOKnh06SWOImh06TWOJlh06UWOIkh07UWOIkh06VWOJjh06VWOJjh05XWOIih07WWOIih06XWOJhh05YWOKgh04ZWOLfh03[WOMeh02\\WONdh0O_WO0bh0LbWO4_h0GeWO9[h0DhWOPj03C?UOi0N100000O1000001N1000000000001O0000001N100000000010O00O1001O01O00000000001N10000001O0001O000001O000000100O1O002N1O10O010O0010O01O0O110O000O2O000000000000000000000M3M3K5I7K6H8I7Kfe6K`ZI6J4L4M2M3O1001O109EbZf0;QeYO7I=Ca0B1O1O00001O0001O01O0000010O000010O0001O000010O000000010O000010O00000010O000001O000010O0001O001O0001O01O0000010O00000010O0001O00000001O00000000000000001O00001O0000000000001O00000O101O000O1O2N1M4^Oe0UOXP:7coEg0XOg0J5O2O0O10001O00001O010O000000001O01O01O001O000000O2N1N2L4L4N3N1N2O1N2O2N100O100N2N3J5K5M3N2O1O2N1N2O1O100O1O101N1O1O100O2O0O10001O0000001N11O00000000000001O01O00001O1O1O002O0O2N2O2M3M6J7I1O1OO2O000000000000000O10001O0000000000O1000O10001O000O100000N21O1N100O2O001O0O2O1N101O2N3L3N2N1N3N2M4KnnQ2"}, {"size": [848, 480], "counts": "W9b1nh00000000000O0100000000000O10O10000000O100000O100000O10000000000000O10O1000000000O10O1000000000000O10O100000O1000000O1000O1000O1ZORWOJnh06SWOImh06TWOJlh06UWOIkh07UWOIkh06VWOJjh06VWOJjh05WWOJjh06WWOIih06XWOJhh05YWOKgh04ZWOLfh02\\WONdh01]WONdh0O_WO1ah0KcWO5^h0GeWO9Yi0000O100O1M3L4N1O2N2N2M3N2M2O2N2N2M300O010001N1000001]OPWODPi0:SWOEnh06VWOJkh00[WONdi000100N1000000O1O000O101O100001N20O00000000O010O01O1O1O1O1M3L4M2010000000O101O000O10001O00001N1000001O00001N10001O0000001O0O10001O000O2O00001O000O101O0O110O0O10001N10001O0O3N0Oi\\X7"}, {"size": [848, 480], "counts": 
"fP^7h0bi0Pj02H9WOj0I6O0O10O2O0O1000000000001O000O100000001O000000001O00000001O01O01O00O100O2O00000001O0001O0001O00000000100O1O001O1O2OO01O01O1O010N1010O01O0O10001O0000000O10000001OO1N2K5J6I7J7H9JU`7KP`H6K5L2M3N20010O08GnZf0FTeYOd0A8Ie0\\O7J2O1O00001O0001O01O0000000010O001O01O000010O00010O000000010O00000010O0000001O00010O0000001O001O01O01O00000010O0001O01O0001O000001O01O0000000001O00000000001O0O10001O0000000000001O0O10000O2O0O1H9Ccj:QOjUEW1XO`0J401O01O1O1O1N2O1O2O0O100O1001O011N4L:F001O010O001O00000010O000001O1O0001O0001O0001O00000000001N1J6I7K5L4O1O2O00000O101N1000001O0N2I7L4N2N3L3M3M3M4M3JfU4O]jK4K5M2M3M3N2N2O1L4N2N101O100001O00000000000O10001O000O011O000O10000000001O0O01001O00000O10000O1000001O0O2O3M1O1O1N3N2N0O1001N001N2O1O1O010000000O11O0O10O11N2O000O10001O000000000O2O0K5MknY1"}, {"size": [848, 480], "counts": "V9b1nh000000000000O10O10000000O1000O100000000O10O1000000000O100001O00O010000000000000O0100000O10000000O0100000000O10000000O11N0100000O1[ORWOHnh07SWOImh06UWOIkh07VWOHjh07WWOIih07WWOIih06XWOJgh07YWOHhh07YWOIgh06[WOIeh06\\WOJeh04\\WOLdh03]WOMch01_WOOah0McWO3]i0000O1O1O1M2O2N2N2N1O3M2N2M2O2M3N1N3O1O1O10O10001O001O0\\OQWOGPi05UWOHlh07VWOHkh03YWOMih0I`WO6]i0001O0000000O001O001N200O10O101O000000000O10O10O01O1O1O1N1N3L4N20000000O10000000001N1000001O00001N10001O0000001O000O2O000000001O0O101O00001O000O101N100000001O001N10001O0O2O1Oi\\X7"}, {"size": [848, 480], "counts": "gUo63Qj0?J5I7M2O1O00100000O2O1N:G3L2O1N2O001N100O100O0010O010O0000100O000010O01O001O00001O000O100N3N1O1O2M2M3kN]O`WOOd0g0ig0EkWOe0Th0\\OeWOk0[h0e001O0000001O00001O0O11O000000001O0kN\\XO^Odg0`0`XO^O`g0`0cXO_O^g0?cXOA]g0>cXOC]g0=cXOC^g0;cXOE]g0;bXOF^g09bXOH_g07aXOI_g06bXOJ_g05`XOKag05^XOLbg04^XOLcg03\\XOMfg02ZXONfg02YXOOgg01YXONig01WXOOjg00UXO1kg0OUXO0mg0OSXO0og0NRXO1og0OQXO0Qh0OoWO0Sh0NnWO1Sh0OmWO0Vh0NjWO0Xh00iWONYh00hWON]h0N[Ri2"}, {"size": [848, 480], "counts": 
"g:h0hi000000O100000O10O010000000000O10O1000O100000O1000000000O100000000O100000O1000O100000O100O010000000O0100000O1000O100O100000O1000O10000000O10O100000000000000O100000000000O10O10000000000O01000000O100000O1000O10O1000000O1000O100000O10000000O1000O10000000O1000O100000O01000O1000O10O1000O100000O1000O1000000000000O1000O100000000000O1000O10O100000000O10O10000000O100000O10O10O100000O10O100000000O10O10O1000000O010000O1000000O1000000O1000O1000O1000000000000000000O10O1000001OO1000O1001O1O1O001N3N1N5KadP6"}, {"size": [848, 480], "counts": "Wam4n0`h0Z1[M^2M3O1N1000001O0O100001OO100000000000O10001O000O2N1O2M2N3L4J6D?SO]1]NmRc6"}, {"size": [848, 480], "counts": "PdQ76Vj08I6K4L3N2M4M2N3M2N2N2N2O2M2N00100O10000O2O00000O2OO11O1O1O000001N101O001O0O2N102M2M2M5L3N3L5K>]OfYS4"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "lUU3;Sj09H5L3eXO3jM[Obe0d0^\\Ol0[c0WOa\\Ol0^c0UO]\\OP1bc0ROW\\OS1hc0POm[OY1Sd0iNg[O[1Yd0hNb[OZ1_d0iN][OW1cd0lNY[OT1id0QOoZOP1Re0TOhZOn0Ye0R23nMnZOcNTe0X1T[ObNPe0W1a[OPNmd0g1T2G;H8I9GRj01N2M2J8ROl0N3OO1O10000O1000001O00000000000O2O0000000000001O000000010O0000001O000O1000000001O01O000001O0001O01O001O1O1O100O001O0010O001O1O0010O01O00001N10001O000000O101O00O1N2L4I7J6K6I7IY`7GR`H7J4M2M3N201O0001Na0^O`Zf0=QeYO;E`0@a0B3N0000001O01O000001O0001O01O0001O01O00010O000010O000001O0001O00010O0000000001O010O00001O000010O0001O000010O0001O01O0001O000001O01O0000000001N100001O00000001N100010O0000O10001O0O10001O0O100N3J7@`0XOYP:MZVEMZi0Q1SOl0N1O2N1000001O00001O001O00001O01O000000010O000001O001O01OO11O00100O1O1O100O1O01O01O001O001O0000000K6J5H9K5M3N2O1O3L4M3M3M3M3M3M8H4Lib6;k\\I7J6J2cVOVOPi0W1NO1000010N10000000000000000O101O00001O000000000O10001O0O100000000O10000000001N2O1O00001O0O100O10001O00000O2O000000001O000O10001O000O100000O1000O1000O1000O2O000O10001N10001O000O101O00000N2O2K\\TY1"}, {"size": [848, 480], "counts": 
"U9a1oh00000001N1000000000O10O100000000000O1000O10000000O100000000000000O01000O11O00O10000O01000000000O1000000000O1000O1000000000000O10O1000YOUWOIkh06WWOHjh07WWOIih07WWOIih07WWOIih06YWOIgh06ZWOJfh06ZWOJfh05[WOKeh04\\WOLdh03]WOMch01_WOOah00`WO0`h0KeWO4]h0HfWO8Yi00O100O1O0N3M3M3O1N2N1O2M3N2N1N4M2O001N20O10001O00001O0^OnVOFSi04TWOJlh04WWOJlh0J_WO5^i001O000000O010O1O1N2O1O10O011O0000001OO0100000O001O100O1N1M4L4O100O1000O10000000001O0O10001O00001O000O2O0000001O00001N1000001O00000O2O00001O0O101O0000001O0O10001N101O001O0O2M\\bW7"}, {"size": [848, 480], "counts": "hhd77Uj05L4N1N3N1O2M2O2M3N2N3M2N4M1N3M3[XOeNQf0^1jYOfNSf0]1hYOgNWf0[1cYOiN]f0Y1XYOQOgf0Q1nXOXOSg0h0jXOZOVg0P20000000O101O000O1O2O0[OiXOPNYg0m1mXOmMUg0o1h0K6M3L3N3M2N201N2N1O2O0O2O1N1N3N1O2M3N1O2N2N2N1O2N2N2N2N3M3L3LX]P3"}, {"size": [848, 480], "counts": "e:j0fi00O11N1000000O10O100000O100000000O100000O1000O10000000O10O10000000O1000O100000O1000O100000O10O10000000O01000000O1000O1000O1000000O10O10000000000O1001N01000000000000000O010000000O1000O1000000000O010000000O1000000O0100000O10000000O100000O01000000000O10O1000000O010000O10O10000000O10O10000000O10O100000000000O100000000000000O1000O010000000000O10O100000O1000O1000O10O100000O10O100000000O10O10O10000O10O100000O010000O10000O1000000000O0100000000000000O1001O0000O1000O10001OO100000O11O001O1O0O2O1N3M2O2Mcon5"}, {"size": [848, 480], "counts": "m[n4812ag0HhXO2M_2je0n1H7M2O100O2O000000000000001O0000O1000000000001O0N3N1N3M2N3K5H9SOR1nNjXb6"}, {"size": [848, 480], "counts": "eof69Wj03N2M1N3N1O0O2N2G\\OaVOh0[i08O10001O1O2N001O1N4M0N31N0001I8M2O2N00000O1001O0YN_WO_1bh0`N]WOa1ih001N1O10O0M32N2N1N3N2L4L4N2N4K5Am^[4"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": 
"mUU34Yj09I6J4M4lXOMaMBae0d0a\\Oj0Yc0YOb\\Ol0\\c0UO]\\OQ1cc0QOP\\OZ1Pd0gNk[O]1Ud0eNg[O]1Yd0fNc[O[1]d0hN_[OY1ad0lNX[OU1jd0mNR[OT1od0POjZOQ1We0S22SNaZOfNae0Q1R[ObNQe0Z1W[O[NQe0Z1Y2E=H8I8I:DBc0A2M101O01O001O0001O0001O0001O01O01O00000010O0001O01O000001O01O01O000000010O000000010O00001O001O01O01O01O01O00001O000010O000001O0001O0000000000001O0000000000001O001O0000000O2O000000001O0O101M3K4F;\\Oi0[OPU91YQF>Xi0c0K2N1O2O0N2O2L3L4L5M2N2N2M4L3N2O2O0O1010O00000001O01O0001O000010O001O0010O01O010O0000010O1O1O1O1O001L4H7K6J7K4N2N2O1O2N1O2N2N1O000000O001A?O1000000000000000O2O00O11N1000001O000O100000001O000O1000001O00000O100000000O2O00000O1000000000001O1N1000001N2O00000O101O000O101O0000001O0O10001O000000000O10000000O1000O10000O10000O2O00000O101O001O000000001N100O1N2N[jU1"}, {"size": [848, 480], "counts": "T9o0ai03L4M8H3M001O1N2O00O1000000000000O10000000O0100000000O100000000000000O010000000O100000000000O010000000000O100000O1000O100000000000O10O1WOYWOIgh07YWOIgh07YWOIgh07YWOIgh06ZWOJfh05[WOKeh05[WOJfh06ZWOJfh06ZWOJfh06ZWOJfh05[WOKeh04\\WOLdh02^WOMch00`WO0ah0JdWO6[i000O010000N2M3K5O1N2O0O2N2N2N2M3N1N3O1N2O00110O01N10001O001XOTWOJlh03XWOLkh0J]WO5`i0O000O100000O1O001N2O1O100000O101O0000000O0100O10O01O1N2M3N2M201O1O100000O1000001O000000001O0O10001O001N10001O000O10001O00001N10001O0000001N100000001O0O101O000O2O000O3M2N\\bW7"}, {"size": [848, 480], "counts": "dee77Xj02N2N2O2N1O1O1N4M2N4L4K101O1O3cWOmNWg0V1cXOnN\\g0X1\\XOjNdg0R2O00000000[OnWOlNRh0m0YXOoNgg0m0`XOoNbg0P1aXOmN_g0R1eXOkN[g0S1S1M2O2N1O2O1O0O2N2N1O2M3M2N3M3M2M4MRZ8OleG7M1O1N2O1O0O1O2O00O2O2M3M4LTPm2"}, {"size": [848, 480], "counts": 
"g:g0ii0000O1N20O101O000O1000O100000O10O1000O10000000O10O10000000000O01000000O1000O10O100000O100000000O10O1000O0100000O1000000000000O1000O100000000000O1000O10000000O10O10000000O01000000000O100000O1000000O0100000000O10000O01000000000O100000O100000O010000000O10O1000O10O1000O10000000O10O10000000O100000O100000000000000O10O100000O1000000000O01000000000O01000O1000O10000000O10O10000000O01000000000O1O0100000O10O100000O10O100O100000000000000O10000000O100000O1000000000000O100001O00O100000000000001N101O2M2O1N3MTUn5"}, {"size": [848, 480], "counts": "bPP5R1Yh0oNkWOW2df0b1TOk0L3O101N100000001O0000001N1000O10000O11O01O001M2O1N3N2L4J6G:YOl0YOS1VNSi_6"}, {"size": [848, 480], "counts": "dWf66Uj09G7K4K5K6J4N3N2L3O14M01N1O001O1O1O1O0O200O1N100000001O100O01O0O1001O01N102L3N101O02O00N3DVWOmNmh0n0`0J6K4OO@_VO1`i0LQW\\4"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "ljV33Zj0:H6I5M4iYO^OWb0e0g\\Oa0Sc0AQ\\OYOhN]1Ue0[Oo[O^1Pd0dNl[O^1Td0cNj[O^1Vd0cNh[O^1Wd0fNe[O[1[d0hN`[OZ1ad0jNY[OW1gd0lNU[OT1ld0QOmZOP1Ue0TOdZOm0]e0S22jMR[OcNRe0X1V[O^NRe0W1Z2D?G9J9D;F;[OoX`8"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "g7;Tj02O000O101O1N1M4A>]Oc0M4O00O100O2O00001O0000000000001O0000000000001O0000001O01O00000001N10000O11O0001O000001O01O000001O001O1O100O1O1O001O01O001O0010O01O1O00001O00001O000O011O00O10O2N1N2M3I7L4H9I7JV`7KP`H5M3L4M2N3N20O1N8ImZf0ETeYOg0_OVc0Dk[OXOnN_1Re0ZOn[OYOnN_1Re0[Om[O^OfN\\1\\e0WOl[O^1Td0dNi[O]1Wd0eNe[O]1\\d0eN`[O[1ad0iNY[OY1gd0kNT[OU1nd0nNlZOS1Ue0VO^ZOn0be0o13jMQ[OdNSe0W1V[O_NQe0W1[2B=J8J:F9F9G9GTi]8"}], [{"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": 
"h75Zj03M4M00000O101N2N2L4B=\\Oe0L2101N10001O000000001O0000000000001O0000000O110O000000010O0000000O1000001O00001O0001O000001O01O001O001O100O1O1O1O01O0000100O001O001O00001O00000000000O1000000O2L3N2K5I7I7I8JW`7JQ`H5K4L3N3L300002N4LP[f0BVeYOa0B;Ec0A:G001O1O001O01O0001O0001O01O0001O0001O01O01O000010O000001O01O01O000010O0000000010O0000010O0O1010O010O0010=B1O01O0N3O0O1O1N2O2M2N2O1O1O1001O00000000001O001O1O00001O1O001O1O1O1O1O001N2O1N2O1N2N4B=QOiVO1Sc;j0iUE>[Od0K6O0O1O2O0000001O001O0001O01O000010O000001O000000010O0000001O01O01O00000010O00000001O0001O000O1M3I7K5L5L3O1O101O00000O110O00000000001O00000000001N1000000O101O0000000O10001O0000001O0O100000001O0O100000000O101O0000000O1000001O000O1000001O1N2O000O101O00001N100000001O0O100000001N1000001O0000000O10000O0100000O10000000000O2O000000000O2N10001O001O000O1O1M\\`R1"}, {"size": [848, 480], "counts": "kh21Wj0c0D>C6J7I3M1O0O1000000000000000000000O010000000O1000001OO11N10O10000000O100000O10O10000000000O1000O100000O1000000000O10O100000000000000WOSWOOlh00XWONhh02YWOMgh03YWOLhh04ZWOJfh06ZWOJfh06ZWOJfh05[WOJfh06ZWOJfh05[WOKdh06\\WOJdh06\\WOJdh04^WOLbh02`WOMah01aWOO_h0NdWO2\\h0IiWO7Xi0O1000O1N2M3N2M3N1O2M3N2M3N2O1M2O2M4N0011O00000O20O00\\OQWOGPi06SWOImh04WWOJlh0M\\WO2fh0H_WO7^i0O00000O1000O1O001O1O1O010O1001N1000000000O10O0100N2N2O1N2M3M2O200000000000O10001O0000000O2O0000001O000O101O1O00001O0O101O0000001O000O101O00000O2O00000O2O001O0O2O0O2O2MlRU7"}, {"size": [848, 480], "counts": "fc[72[j06K4L4K5L3M3M4L3N2N2N1N3N3L3N102N1O100101M`0B0O001O0000000000O2N10O01O1O001N2O0O2O000O2O0O1O100O2O0O1O1O1O1O2N1O1O1O1N2N2GRNnWOP2Qh080000001O002M2ROlWO^OTh0DfWO6d05hg0_OnXO?bh00001O000O011OO0100O10O0001O00100O10O01O10O01O0010O10N2O2LUW^2"}, {"size": [848, 480], "counts": 
"j:e0ji01O1O010000OO2O010000000000O1000O10O100000000O01000000000O10O100000000O01000O1000O1000000O1000000O10O100O1000O10000000000O010000000000000O1000000000O100000O10O100000O1000O100000O1000000O01000000O100000O100000000O01000000000O100000O1000O100000O10O100000O1O10O1000O10O10000000000O1000000000O1000O100000000000000O10O10000000O10000000O01000000000O0100000O0100000000000O010000000O010000000O10O10O100000O010000O010000O01000000000O1000000000000O10000000000000000O1000000000000O100000000000O01000001O001O1N2O1O2M3M3MbPj5"}, {"size": [848, 480], "counts": "P[S5a0Xh0ERXOMJd1if0n1oNo0N3OmM\\ZOYOde0>YZOhM=d1[e0a0P[OXOTe0g0nZOSOWe0l0mZOlNXe0T1kZOgNWe0Y1P[O[NUe0d1l100O1O100O1GjWOXNVh0d1=N2O10000000001O1N11O01O101N2Nh0lNZ^\\6"}, {"size": [848, 480], "counts": "kdi76Vj08J5J6L3M4K5K4M6K0O2N10O01O100O2OO01000003M5KO100O1O100O20OO1O2M2O2M2N3N1O2L4M3N2N2M4K<]OQc^3"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "kTZ34Yj09I6J5K3oXOFhc0Vf0@mYO?Sf0@nYO?Sf0_OoYOa0Rf0^OoYO`0Sf0^OoYOa0Vf0XOlYOe0Sh0I7Jkf`2"}, {"size": [848, 480], "counts": "j:e0ki00O100O1O010O100O001000O10000000O01000000O1000O1000O10000000O10O10000000O10O10O100000O100000000O1000O01000O100000O10000000000O1000000000O1000O1000000000O10O1000000000O10O1000O1000O100000000O10O10000O1000O10000000O1000O100000000O10O10000000O10000000O010O100O1000O1000000O10O10000000O1000000000000O10O100000000000O10O100000O1000000000000O100000O10O01000000000O0100000000O0100000O1000O1000O1000O10O100000O01O1000O10000O011O00O10O1000000000000O10000000000000000000O02O01OO100000O01010O00O0100000O2O010N2O001N3N1N3M2Nc[h5"}, {"size": [848, 480], "counts": "fgU5b0Yi0g0B>kYO]NXc0e1a\\OcN\\c0`1`\\OcN_c0`1[ZOYNc1;Qd0j1l[OXNTd0j1g[OZNXd0g1c[O]N\\d0e1][OaNcd0`1Z[OdNdd0]1V[OhNjd0Z1nZOlNRe0W1kZOjNTe0Y1hZOhNXe0Z1eZOgN[e0W30O1000000000001N1N3N1O1N3L4G:]Of0POV1QO\\d[6"}, {"size": [848, 480], "counts": "Zda83Uj0;I6K4M3N2M3N3M3M2O1N2O2M101O001N10000001O000000000000000O100O1O2N1O2N1O2N2O2L4J7G9CkXh2"}, {"size": [848, 480], "counts": "P`]<"}, 
{"size": [848, 480], "counts": "Wd\\3K6I;E:EB8G2O1N2O0N2O1N2O2N100000001O01O0001O01O01O01O00010O0000010O010O3M3M4M;D3M1O0000O2N1O1I7M4H7M4N10001N11O01O0010O0001O000001O01O000000001O000000O101O01O01O0O100000001O000000001O000O10001O000O1M4H9B>VOoo9d0ioE=[Oe0K4O1O2N1000001O000010O000001O00010O00001O0001O0001O0000010O001O000001O01O0000001O01O0001O000000O2K4G9J6M3O2N100O10001O00000000001O00000000001O000O10001O0000000O1000001O000O101O00000000001N1000000O10001O000O10000000001O0O10000000000O101O2N0O1000001O0O10001O0000001N10000000001N10001O000O10000000O1000O100000O02O000O1000001O0O10001O001O000O2OO11N100O100M4MZVo0"}, {"size": [848, 480], "counts": "o]47ni0f0D;F2N7H2O1O0000001O0000O010000000000000O1000O10000000O2OO100000O1000000O1000000000O10O10000000000O10O100000000000O10O1000000000O1000000VOTWO0lh0OYWOMgh02[WOLeh05[WOKeh05[WOKeh05[WOKeh04\\WOLdh04\\WOKeh05[WOKeh04\\WOLdh04\\WOLdh03]WOMch01_WOOah0NbWO2^h0IgWO6[h0GgWO9Xi0000O1O100L3M4O1N2N2N1N3N2N2M3N2N2N101000O1000010O0O101O1XORWOLoh0HkVOK?;ci0O1O000O10000O10O1N2N2O010O1001O000O100000O10000O010O1N2O1M3L4N10100O100000O101O000000000O20OO101O00000O2O00001O000O2O00001O0O101O000000001O0O10001O0O10001O001N101O0O102Mm]S7"}, {"size": [848, 480], "counts": "V^T8184n0aH[;5SD[7?dH[;5SDX7a0eHX;8TDT7b0hHW;7TDR7d0iHT;;TDn6f0hHU;>QDk6g0jHW;=QDi6f0lHX;?nCf6f0oHZ;?mCd6f0PI[;`0lCa6e0RI^;b0hC^6g0RIa;e0`C]6l0PIc;`9ZDbFe;`9XDcFf;_9WDcFi;^9UDbFl;_9QDcFn;_9oCcFPT5a8nJSGHa0X5]8PKSGDc0\\5[8oJSGCd0\\5Z8QKTGAb0]5\\8QKSGAb0]5[8RKTGA`0\\5]8SKUGaNXOlNW1[7a8WKQGYN_OVOo0^7\\8RKYGQND^Og0_7\\8RK]GbMKN;b0aNe5l9XLYIbMZN^1ea0[Nm]O7`0X1ga0bNh]O7a0R1la0gNb]O7d0k0oa0PO[]O6f0e0Tb0UOU]O6i0?Wb0[Oo\\O6m06[b0Dg\\O7o0Kcb0N]\\O7cf0I[YO7gf0IWYO8jf0HTYO8mf0IQYO7Pg0JnXO7Rg0ImXO7Wg0FhXO;[g0AeXO?_g0\\ObXOd0`g0XObXOi0bh0000O101O000000001O0O10001O000010O0001O00001O0001O00001O010O00001O1O001O1N2O1O001N2O0O2O1N2N3N2M2N5J[R]4"}, {"size": [848, 480], "counts": 
"\\le62]j03M2N1O100O100O100O1O1N2N2O1M3N2N2N2N1N2K4M004M3J6L3M2M3M3O1]NaNXZO]1he0nNnYOQ1Uf0TOcYOo0^f0UOYYOn0hf0XOnXOl0Sg0V12ON2J402O0gZOcL`c0^3a\\OfL[c0\\3e\\OfLYc0Y3i\\OgLWc0X3h\\OkLWc0T3f\\OPMZc0P3b\\OTM^c0l2a\\OVM]c0Y3S\\OiLmc0l3W[O[Lid0f40O2N001O1M4N1O3M2N3M2M6bN\\1H8I7L5J9I4E;K5^OaWOlNch0P1b0K5L4L5K6I>_OdYf3"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "Zeo2b0ji07J5K5M4J4M4M3M3N1N3L4L5K4N2M4L5K4L7I6K3L3N2M2O1N2N1O00001O2O0O2O0O2N100O1O100O10001O0O1O10000O1000000O100000000O100000000000000001O00000O2N2O1N2N2N100N3ZNTYO^N0c03ROnf0g1f1H8J7J8K4M4L7@idV7"}], [{"size": [848, 480], "counts": "jSQ46Sj09K6J5M1O20O001O0010O000001O0000000000000O10000001O000000000O01000000000O10O10O2N010O1O1O1O1O1O1N2N101O100O001O1000000000O10001O000O1000000O1O2O000O100O1O100O010O1O2M2O1L6L3MQaS6"}, {"size": [848, 480], "counts": "\\i`2`0ki08F9F:I6I7I7J6J6I8I7I8C>G9F;A=F:C=D=^Ob0F8I7I6L4L4M2N3L3N2M3M4\\MoHdAU7Q>YIhAl6n1UH]8U1bEl6i1YH[8Q1hEk6h1[HZ8m0lEl6d1[H\\8l0nEl6b1[H^8l0mEm6`1[H_8k0oEn6Z1]He8h0nEn6U1`Hl8d0mEQ7o0`HS9a0jEX7f0]H_9=hEo7LjG[:9fEZ8^OdGj:5eE_:Z:cEdE^:[:dEcE]:]:fE^E\\:a:iEZEX:f:i200O100000O1000000000O010lFj@n6V?oHPAn6P?PIUAn6k>oHYAo6h>nH[AP7i>kHZAT7m>aHXA^7m>YHXAf7l>RHXAn7f`0O0O101O000000001O0001O000001O01O01O0010O000100O2NUAVHo:l7mDZHP;f7oD\\HP;d7oD_Ho:b7PE_Ho:a7PEaHo:_7QEbHn:_7QEbHn:_7QEbHm:`7REaHm:_7SEbHk:`7TEcHi:^7VEbHj:^7VEaHk:`7SEaHn:^7REbH`MXOdb0]OSOXNR6i1^J?b0SOVOdNo5c1_Jc0a0kNWOoNo5[1]Ji0a0eNWOVOP6V1]Jk0a0`NWO^OP6R1VH\\Nd0a2T2YNXOGn5l0SHhNb0Z2\\2nMWO4U6^Og0^O]HZ6nMWIl97XHb6mMVIf>j6[AUIe>k6\\ASIe>m6\\ARId>n6^AoHc>Q7`AlH`>T7cAhH^>X7eAdH\\>\\7gAaHY>_7iA\\HY>e7iAPH`>P8o1YJj]OV3Vb0iLm]OU3Sb0gLS^OW3la0hLW^OW3ha0hLZ^OX3fa0dL^^O\\3ba0aLa^O_3_a0_Ld^O`3]a0dKU^OXOa0T5_a0[K^_Od4l`0oJW_OQ5Tc00001O0000000000000000001O0000000000O1001O0000000000000000O11O0000000000000000O10000lNcJU]O]5gb0gJY]OY5fb0iJY]OW5X2eJP=7g@T5W2kJk0Kj78SES5V2oJ?c0d7^OfEP5U2TK6a1g6hNnFc4U2WKMT2\\6WN`G_4V2XKMZ2R6SNjG[4V2\\KJ[2o5RNPHW4W2]KI`2h5nMXHU4W2^KHi2\\5hMdHQ4X2_KHQ3o4cMQIl3W2bKHU3
f4aMZIi3V2eKHU3`4bMaId3W2fKHW3X4cMiI`3V2iKHX3P4dMQJ[3U2lKIY3h3eMZJV3T2nKJY3a3hM`JQ3U2oKJ]3W3hMiJl2U2RLJ_3m2hMTKg2T2TLJe3b2eM_Kb2T2WLJj3T2dMmK[2R2\\LKn3g1aM\\LU2n1aLOo3W1eMkLk1k1eL3R4i0fMXMc1i1iL6S4=gMcM^1f1mL:T4NjMQNU1\\1YMd0R4^OhMaNm0X1_Mi0R4QOjMkNf0W1bMn0V4bNg33]H\\1U4nMh3:^Hi1Td0101N101N2N1O10mM`XOW1^g0hNfXOV1Zg0jNgXOU1Yg0jNhXOV1Wg0jNkXOU1Ug0kNlXOT1Tg0lNlXOT1Sg0lNoXOS1Qg0mNoXOS1Qg0mNoXOS1Qg0mNnXOT1Qg0mNoXOS1Qg0nNnXOR1Qg0nNPYOQ1Pg0POPYOP1of0PORYOP1lf0QOVYOn0if0ROXYOn0gf0SOXYOm0if0ROXYOn0gf0SOXYOm0hf0TOXYOl0hf0TOWYOl0if0TOXYOk0hf0VOWYOj0jf0UOWYOi0kf0WOTYOh0mf0YORYOg0of0XORYOg0of0YOPYOg0Qg0XOoXOi0Pg0XOoXOh0Rg0XOmXOi0Sg0VOnXOi0Sg0VOnXOi0Rg0WOnXOh0Tg0XOlXOe0Wg0[OhXOe0Xg0[OiXOd0Xg0\\OgXOd0Zg0\\OfXOd0[g0[OdXOf0]g0YObXOg0_g0XO_XOk0ag0UO]XOl0eg0SO[XOl0fg0UOZXOh0ig0WOXXOg0ig0XOWXOh0jg0XOUXOg0mg0YOSXOf0ng0ZOQXOf0Qh0YOoWOf0Rh0ZOnWOe0Th0ZOkWOf0Vh0YOkWOf0Wh0TOmWOm0Sh0SOjWOo0Wh0QOiWOo0Xh0POgWOP1Zh0POdWOR1\\h0nNbWOS1`h0lN^WOV1ch0iN[WOX1gh072M6I4L2N3O0010O10O01O00N2M301O0001OO01000001O001O1O001N2O0O2N2N1000001O0O1001O01O0O1O11O000100O01O100O10O01O100O100O1O100O1O100O100O10000O10001O00000O101N101O0O2O001O0O2O001O001N101O0O1O2M3M6ITPn1"}, {"size": [848, 480], "counts": "S^c65[j01N100O10001N1O100O100O1N2N2M2N3M3K5M1O1N2]MUOb[Ol0_d0YOX[Oj0jd0VOR[Ol0Re0ROgZOS1]e0kN^ZOX1ce0hN[ZOZ1fe0dN[ZO[1de0gN\\ZOV1ee0lN\\ZOP1ge0POYZOn0je0QOVZOm0le0TOSZOk0oe0UOQZOh0Sf0WOlYOg0Wf0YOiYOd0Zf0[OfYOb0^f0^ObYO`0`f0@_YO>ce0@iYO1d0_[O[Oh0=41cc08n\\OH^O3cc06l\\OM\\ONhc05l\\O?fNA]d01k\\Om1Uc0SNj\\On1Vc0RNj\\On1Vc0TNg\\Om1Yc0TNe\\Ol1\\c0UNa\\Om1^c0WN\\\\Ok1ec0VNW\\Ol1kc0SNT\\Ok1oc0UNP\\Oj1Sd0UNl[Ok1Vd0SNj[Ok1[d0SNd[Ok1`d0SNa[Ok0QORObe00][Oc0Pf0[OQZOHkN6[g00jYOJPO0Yg03gYONUOIWg06eYO1Rg0JQYO5eh00000000001N2N2Nk\\j3"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": 
"SSd3>ii0f0C2O2_Oe0F6J5J;G5K4L4K6K5J5M3M2M4M2M2N4M4L2N2N2N2N1N3N1O2N101O0O100O101O00000000000000001OO101O01OO2O0O100O2O001N1O2O1O1M3O5K2M2O0O2N3K6H7G9I7L3N2O1O001O1O2N5K2N1O1O2N2GlWOPNXh0l18M3M3N2N2N3M2N102M3M3M3N2M3M2M5L4L5K3LP^R6"}], [{"size": [848, 480], "counts": "fk_46Wj06I6L3L4M2N3M3M2N3N1000000010O001O000000001O00000000000000000000000000000000O1000000O100000000000010O00O1010O01O1O001O001O00001O00001O1O0000001O00000O101O0O100O101N101N101N2O0O2O1M4L_^a5"}, {"size": [848, 480], "counts": "m[f2a0di0E=VOP1[OS1ROe0\\O:H6J4nDTG\\4o8_KXG[4l8bKWGZ4l8dKWGY4k8fKWGW4l8gKVGV4l8iKUGU4m8iKVGU4k8jKVGT4l8jKVGT4m8hKVGW4k8TKjGi4Z8SKiGk4Y8SKiGj4[8SKhGi4[8UKfGi4^8mJkGQ5X8cJSH[5Q8^JTH`5P8XJVHf5n7SJWHi5o7QJTHk5S8PJPHk5X8oIkGn5Z8mIiGS6X8jIjGV6Y8dIjG\\6X8_IkGa6W8ZIkGg6W8UIkGk6X8oHkGQ7W8lHjGT7Z8fHhGZ7_8^HbGb7c8WH_Gi7i8lGZGT8a=000001O00001O00001O0000lNcGU@]8h?gGW@Y8h?iGX@V8g?kGY@U8e?nGZ@f0Bh5S`0dIZ@a0Hi5l?hI\\@TJ[Ak5S?dIPA\\6ba0000001OO1000000001O00000000O10000000000000000000000000000001O00000000O1fNYIk^Og6Ta0[Ik^Oe6X4WIgJb1o:cNQFd6U4aK[5lMaFb6Y3XIZLh2^9`MnF`6Y3\\IWLo2X9VMWG_6Z3]IVLR3U9RM[G_6Y3_IULV3Q9nL`G]6Y3`IVL^3g8fLjG\\6Y3aIVLa3`8dLQHZ6X3bIWLd3Z8bLWHX61YIP2:]Mh3U8`L\\HU60]IP26_Ml3n7_LcHR6O`Io13aMP4[7kLUIb50aIn12cMS4S7lL\\I]50cIl11fMW4j6lLdIY5OeIl10fMY4e6lLjIV5OgIi1OiM\\4_6jLPJT5OhIg10jM_4X6iLWJP50iIe11kMc4o5gL`Jm41jIb10nMg4g5eLhJj40mI`1OPNk4^5dLQKe41oI]1NSNo4U5cLZKa40QJ]1LTNU5k4aLdK]4OTJ[1JWNX5c4bLkKX4NWJ[1HYN]5Y4`LULT4L[J[1D[Nc5o3^L_LP4J^J\\1A\\Ni5e3\\LiLl3HbJ]1\\O]NP6Z3[LSMh3GdJ_1XO]NV6o2]L]Ma3FhJ`1SO_N[6^2dLmMV3ClJc1oN^N`6o1jL\\Nl2@QKe1iN`Ne6_1PMlNa2ZOZKi1_NcNl6n0SM[Od4S1\\GeNR7A9G=C:F7F9E:C>I6M3L4L4M4L3M3M4M2O2M2O2N2M4M3L4L5J4nDZFW5i9dJ[FY5h9dJ\\FY5g9dJ[FZ5g9cJ[F\\5h9_J[F`5h9jIlFT6X9fIlFZ6V9`InF_6T9hHfESM[1T:R9]HkGa7Y8WHmGi7U8QHoGm7W8jGnGT8V8fGnGY8V8aGmG^8X8[GkGd8Y8WGhGi8]8QGeGn8b8iFaGV9g8bFYG^9R9SFSGk9i<11N100O1UO^_OgGb`0o7Z@_Gg?[8a@cG_?Z8]1N2M3Gh]OXH[b0g77O1N2O1N3N1O1N2OSO`Hl^O`7o`0kHl^OT7Qa0QIm^On6Ra0UIm^Ok6Ra0WIm^Of6Ta0\\Il^Od6Qa0`Io^O_6n`0eIQ_O[6m`0gIS_OY6Pa0eIo^O[6Sa0dIl^
O\\6Va0cIh^O^6[a0`Ic^O`0M[4ea0SKZ^Oa05Z4fa0QKS^Oc0;[4fa0nJo]Oe0>\\4ja0gJk]Oh0?`4eb0QK_]Oo4jb0fJX]OZ5Pd0O0000001O000000000000O1001O00lCZK^2f4^MfKY2[4dMnKV2R4gMVLT2j3jMdLj1\\3UNjLf1V3ZNnLb1R3_NPM^1P3eNQMW1o2lNSMo0m2ROVMj0j2WOZMd0f2\\O^M`0b2AaM;_2FeM5[2MgMOY22iMKW24mMIS26QNGo19TNDl1i0R5X2dKoN@nNg3d3hLWL_4U2cKSOAkNg3_3kL]L[4T2dKUOAiNg3Y3nLeLW4Q2fKXO_OgNi3T3oLlLU4n1fK]O[OfNl3l2RMSMS4i1iKDTOcNo3f2VMZMQ4`1mKNlNbNQ4]2\\McMm3W1RL6dNcNS4T2`MkMj3f0aLh0QNcNW4i1eMUNe3>eLQ1jMcN[4\\1iMbN`39fLU1gMcN_47dJROj3e0j20iL^1bMdNa9N]LBfLl1]McN_9Oa0]1QFdN]9Nc0]1QFeNZ9Of0[1RFfNV90h0Y1SFhNS9Ol0W1QFlNQ9LP1W1PFoNn8IT1W1nEPOm8IX1U1kEROm8G[1U1hEVOk8E_1T1fE]Oe8^Oh1R1dEIlN`NX9e0[2P1aE2S8mNc2j0[E8R8nNo2?oDb0S8mNP3`0oDa0R8mNQ3b0mD`0T8kNP3e0nD?T8fNR3k0kD=od0CS[O;md0EU[O:kd0EW[O9id0HX[O6gd0KZ[O3gd0NY[O1hd0N[[OOfd00a[OH`d08b[OE_d0;c[OC^d0=c[O@^d0`0d[O\\O_d0c0c[OZO^d0f0f[OSO]d0n0b201O1O2ROaVOh0di0O1O1O2OO01O010O001O0010O0100O101OO010O01000O01000O010O010O100O10O01O10O0101N100O2O001O0O3N0O2O1N11000O001O1N4L5K2OO01O01O010O002N2N2O3L5K6F^\\^2"}, {"size": [848, 480], "counts": "aPa61_j0000\\j01cUO3L10O1N3Nn`;:c^D9G8K4N2N2O1O1iWOXOef0g0]YOVOdf0k0a1O1O1O1GoMAbZO<\\e0KbZO2_e00`ZO0^e04PYO@o0:Rf08kXOAR19Qf0g0oYOJ`e05bZO0Xe0OiZO4Te0KnZO6Qe0KlZO7Se0JkZO9Se0HlZO9Te0FlZO=Qe0DnZO`0nd0@S[Oa0ld0_OS[Od0jd0\\OV[Of0hd0ZOY[Og0ed0ZO[[Of0fd0XOZ[Oh0Qe0mNoZOEmN^Oh01je0g0bZOFgf0;WYOBnf0>QYO]OVg0a0iXOYO_g0i0S120O002L7J3L5J4MhnP4"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": 
"gTW4d0di0j6BZI:f6^N_HdIn0j7c6YNlHjId0i7`6[NPIkId0d7^6_NQIkIf0a7Y6dNRIkIg0^7W6fNTIlIh0Z7U6iNTImIj0V7R6mNUImIk0U7n5nNXImIl0S7k5POZImIl0P7k5SOYInIn0l6i5TO\\IPJm0i6g5UO_ISJj0f6h5TNaJWKHc6f5bMWKkKUO`6d5`M^KQLnN\\6d5_McKVLiNY6e5ZMiK^LcNT6f5SL\\HfM_34_NQ6f5PLkHUM]3l0RNk5g5RLZMVNnLd5i5TL]MWNkLc5h5TL`MXNkLa5e5RLfM\\NiL_5`5SLjMZNlL_5[5VLkMSNSMe5U5TLiMTNXMe3TMPL7g0l7U1eM]NWM_3RNlLW7R1bMbNWM\\3TNnLT7R1cMcNWMX3UNRMT7P1bMeNWMT3VNWMS7m0aMiNWMm2ZN\\MP7l0`MkNWMh2\\NbMm6j0aMlNWMb2_NhMi6i0cMmNVM[2cNoMe6h0bMnNYMS2eNWNb6e0bMROYMi1hN_N`6`0cMYO[M[1hNlN]6;cM^OVN4RN3V69dMAd16h08fMBa15l06dMF_14o02eMJ[14S1NdMNX14Y1FcM6T13j8MUG2l8OTG0m80SGNo82QGMo84QGIR97RGAQ9`0`80O100O1O0001O01O3N1N100O100O10000O01O1N3N1M5K7Gbal3"}, {"size": [848, 480], "counts": "V]V75^40Ya05a^ON]a0;W^OJga0c[OGYd0a5kAcJX_O:J9H2N2M4L5J5[XOWM]g0Q3K5M2M3N1O3N1N3N2M20001O0O1O1O100N2N5L1000O1O10O01O1O00100N2O1N1O2N2O00100O100O3GUYOeLPg0V38M3M4L4M6I6J9Gb0^OQHeNeF_1T9lNgFU1T9ROiFo0R9XOkFi0Q9]OkFd0P9DmF=o8KkF7o8U1hEnNm9d1iE_Ni9T2oEnMg9`2RFdMj9b2PFbMn9c2kEaMn9g2kE_Mo9g2kE_Mo9h2lE[Me9V3TFPMd9Y3UFmL`9_3YFgLb9_3ZFdLa9b3\\F`L`9g3[F[Lb9j3ZFXLb9P4XFRLc9X4VFjKg9_4QFcKm9h4hEZKU:V5[EmJc:m5^DZJ`;]:L3M3N2O1NQOlDfBS;Y=QEeBn:Z=UEeBi:Z=ZEfBe:V=aEiB]:P=lEQCQ:h000001O0000001O00001O001O00001O000000001OfNmHV_OT7g`0RIV_On6h`0UIW_Ok6h`0VIX_Oj6h`0WIX_Oh6g`0ZIX_Of6h`0[IW_Oe6i`0\\IW_Oc6h`0_IW_Oa6h`0bIV_O^6i`0dIV_O\\6i`0fIV_OZ6h`0iIW_OW6g`0lIX_OT6f`0PJX_OP6e`0UJY_Ok5\\7QJTHQO^7[1QIb5n6TLeNbNZJZ5h6mLYNRNkJQ5a6^MZNgMTKj4Z6nMXN[M^Kg4o5[N_NPMbKe4h5gN^NgLjKb4h5nNTNdLSL^4Q2RIW1R6:cL^LY4m1XIY1Q62cLhLT4l1[IX1Q6MdLoLP4k1_IV1P6FhLXMj3l1`IU1o5_OnL_Mc3l1cIT1n5XOSMhM\\3k1eIV1l5mN^MlMT3o1fIX1g5cNR52\\E]1`5VN]58_E\\1[5kM_5e0eEU1Ub0fNU^OV1ja0eN`^OV1`a0fNh^OV1Xa0gNm^OW1Sa0eNT_OX1l`0`N^_O^1k`0QN`_Oj1hd0K4M2N3M3M2N3M1O4L3M2N00001O0000000001O0010O0000001O001O001O1O00000000000000000O101O000000000000000000001O01O00001O00000001O001O0000001O00001O00000000000000000O101O0001O000000000010O000000000000001O00000000000000O10001O0000000000000O101OO10O10001O000000000000000000001O00000001
0O2N10O1O001O1O001O00100O001O1O1O1O1N3N3L4M5JbXo0"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "[fZ5i0`i0a0A:I5I7K5eWOmMQh0]2L3L4L4M3L5K4M3M3L3M3M4M2M3L4N2N2N2N2O1OO2O0001O000000000100O001O1O1O1O2N1O1O2N1O1O1O2N2N1O1O1O1O1O3M2M3N2M2O1N2M4L5I8J9G5L5K7I4K6K6J5K4M3L4L5K6K4KP`P5"}], [{"size": [848, 480], "counts": "]^Y5:Tj04L3M3M3N1L5M3N100O2O001O01O000010O000001O00000000000001O00000000000000001O000010O1O1O100O1O1O1O100O1O100O1O001O10O01O00001O001O0O2O001N2O1N101N4L4KemU5"}, {"size": [848, 480], "counts": "Q[^31^j01O1IOoUO1fi04WVON2Obi0>^VOB]i0e0`VO[O[i0n0O3K6J;FV2hM9H6g@iKS8[4jGjKo7\\4nGfKn7^4PHeKk7_4SHdKi7_4VHbKd7e4YH^Ka7h4dGgJQKb0U=m4fGQKdJ8a=k4hGPLS8U4kGmKo7Z4nGhKi7b4TH`Kf7h4UH[Kf7o4SHSKf7X5RHlJi7_5iGhJT8e5]GaJ`8m5nFZJn8R6aFWJ]9P6PF^Jm9n5dEWJ[:Q6\\ERJa:W6VElIh:[6PEgIo:d6bDbI[;k:O1N2N2N2N2N2O1O1N2N2O1N3N1N2NPOfEoAV:P>oEoAP:o=TFPBj9P>XFPBe9X=mEbBa06b9T=[GkBe8Q=_GoBa8nkF\\AT9c>nF\\AR9c>PG\\AP9d>QG[Ao8e>SGYAm8f>UGYAk8g>VGXAj8g>XGYAg8k5iFo1c0THd8l4THd2\\O_H_8b4lHa2hNlH\\8T4gIb2PNYIY8i3\\Jb2_MdIU8i3bJW2`MPJn7i3iJg1cM_Jf7h3nJ]1aMkJc7f3UKQ1^MYK^7e3\\Ke0ZMfK[7d3bK;XMPLX7c3gK1VM\\LV7`3hKKWMeLS7^3lK_OZMRMj6`3SLPOYM_Mf6`3VLdN\\MlMo5FoFh3l5WN_MZNf5ETGg3o5iMaMjN\\5BZGj3n5[MfMWOl5\\3iLgLgMLg4D]Gi3Z;bLZME^Gg3Y;bLWN^3i1aLYN]3i1`LYM1ZG_3Qe0cLmZO^3Re0dLQ[OX3nd0fLW[OX3hd0fL^[OW3ad0hLc[OV3\\d0iLj[OR3Vd0mLP\\Oo2oc0oLV\\Oo2le0J6J5^NRXO6Rh0BTXO=ng0\\OXXOb0mg0SO[XOl0fh0O0O100O101O0O10000O100O10O10O101N100O0100000O1000O11N1O10000O10O01000O010O0100O010O01O10O10O100000O1O1O100O1O01000000O2O1O1O1N10O10O01O010O01000000O100O1O10O001O10000O1O010010OO1O010O1O2O1N1O101N8HhjW2"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": 
"fS]5f0ai0b0BdF^A\\9`>gF_AX9a>kF^AT9b>mF]AS9b>oF^Ao8c>SG[Am8o5\\Gn0IRIk8j5fGP1@UIi8l5jGl0^OWIh8n5mGg0\\O[Ie8P6YH9TOfIc8S6`HMoNoI]8Y6jH_OkNXJZ8[6QISOiNaJU8_6UIhNjNhJP8b6YI^NlNoJk7e6[ITNoNVKe7j6\\IjMRO\\Kb7k6^IcMTOaK^7m6_I\\MXOeKY7R7cIoLZOmKR7V7jIbLZOVLl6Y7lIZL^OZLf6_7lIRLB^Lb6a7nIjKFbL\\6f7oIbKKfLV6j7PJYKU9j4lFlJZ9j2VDSL_2l0^9j2]DZLV2d0b9^2YEcLV1f0g9c2\\EeLn0=m9i2]EhLl0NQ:U3ZEmLo0VOS:h3UERMQ?k2SAXMh>b2bAQNg=f1fBYNW=\\1XCbNgQ5ZAZKf>j4QAZKo>T8N101N2O1O1O1O0O2O1O001O2O0O10WO[AgEd>V:bAhE]>V:fAjEY>P:PBmEP>e9aBXF`=d9eBZF[=d9hB\\FW=b9mB]FR=b9RC\\FmeNTK_:\\4QG;dNXKZ:\\4WG7cN[KU:^4[G2eN]KQ:^4^G2dN_Kn9\\4bG2a:NaENb:OcEL`:0eEbL[LZ2S>Q1gE_L]LZ2o=U1hE]L]LY2n=W1jE\\L\\LX2m=Z1lEXL[LZ2l=\\1mETL]LZ2j=_1lF]NW9a1lFZNW9e1lFVNY9g1kFSNY9k1kFoMY9P2kFiMY9U2mFbMX9]2oFWMW9f2[7K5K5K5M300O1H8O1N2O1N2N2O100N3L3M5K5L4L4K3O1O1O2M4L3M3M4HfhV4"}, {"size": [848, 480], "counts": "gnX45Uj07H3M50L5NcNFgXO;Wg0MbXO3]g03_XOM`g0V7eAQH6m0T>Q7hARH2o0U>n6kASHNQ1U>m6oARHJR1V>j6SBSHFU1V>g6VBTHCU1W>e6ZBUH]OX1X>b6^BUHXO[1Y>^6cBUHSO_1Y>[6gBRHnNh1Z>U6mBnGjNn1Y>R6RCWJlb8O1000O0100000O1000000000O0100QOmA`ES>a:nA^ER>_:SB_Em=^:YB^Eg=`:^B]Ec=`:dBZE^=c:iBRE]=l:T1N2M@[ERA1Le:l>mK[AhMf>Y2hAWM[>k2lAkLU>W3PB_LS>o1QAULQ1_1T>[2QATLm0P1]>l2k@RLl0>g>`3a@PLl05h>j3a@PLm0Hf>X4b@nKm0]Og>d4`@PLla0n3Y^OPLea0n3`^OQL_a0l3f^OSLXa0k3m^OTLRa0j3R_OVLk`0h3Z_OXLd`0e3a_O[L]`0b3h_O_LT`0`3o_OaLn?_1j]OVO]2[Og?_1o]OQO^2_Ob?_1S^OlNa2GX?]1Z^OfNc2MQ?\\1_^OaNf22j>]1b^O[Ni28d>\\1g^OUNj2>_>\\1l^OlMl2h0W>\\1QCdNn<[1TCdNl<\\1TCeNkQf0Q1`2K8H9I1O1N2N2N3L4M7I^h[7"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "cR`5e0hi06E:WOSOgWOQ1Rh0k0MO10M4N2N2O2N2O0O2N2N2N2O1O1N3O0N2O1O101M2O10000O1O1O01O010O001O1M3M3N3M3L3M4L5L4K6J5L6I5L4K4K7H7J6J8C[S_5"}], [{"size": [848, 480], "counts": "U_`3>mi0:J3N1N3N010O001O001O000000001O000O10000O100O100O1O1O1O1O1O1O1O001O1O10O01O100001N2O1O1O1O001N101O0O2O0O101O0O100O1O2O0O2O1N2N2N3L3N3ISgT7"}, {"size": [848, 480], "counts": 
"bg\\2b1dh0a0SOn0^O?F9E:I8H>B;D7K3L5L3M4L3M3M2N3M3L4L2N200010L3O1O1O1O1N3N2J5N2101M8F9J6I:H8Gb0_O9G8I7H6JY3YBgJnLk0e`0S4QCmKm<^3iCcLUQj02M4M101M3N1O100N2O100N200N20OO110OOO20O0110N2O1K5M3J6M3O1M3N2M3O1N2O000N3O0O020O1O100000O1O100O1N2O1L4O2M3L3M5L3SOoVO7gi0Nigj:"}, {"size": [848, 480], "counts": "al;2Zj09K3L2[OAVWOa0ih0_OWWOb0hh0^OWWOf0eh0[OWWOk0fh0VOYWOm0ah0WO]WOl0bh0TO^WOn0Ri01N1O1ZOiNRXOW1mg0lNnWOW1Ph0POiWOQ1Wh0oNhWOR1Xh0nNhWOS1Wh0lNjWOU1Vh0hNlWOZ1eh000O01O1O1O1O1O001O2N1N2O1N20O2OBUN\\XOd1Yh0O10G^NaWOd1]h0:N1O100ORNcWOi1^h0UNcWOl1ah0O1000N2HSNjWOo1Uh0SNiWOn1]h000O01H8NoMlWOi1]h0001O0N3N12O00M3J5N31O003M0O01001O0O1O0100O01000O1001N1O1N1000002N1O001O001O1O1O1O1O000O10jYO`NkMDYe0k1P\\OLjc04d[Ob0Xd0^Oc[Oj0Yd0XOd[OR1Td0nNk[OV1Rd0jNm[OX1Rd0hNm[OZ1Qd0hNn[OY1Qd0gNo[OY1Pd0hNP\\OX1nc0jNQ\\OW1nc0kNQ\\OU1nc0lNR\\OS1nc0nNR\\OCD_OZd0o0R\\OTOY;O1@RD`BY2Ok6Y=VGeBn1:g6b<^JcC^5\\C6[NoXOCWg08f1ZOkVOEYi07mVOEYRo8"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "WZk45Yj03N3L2L5M2N3N1O1O2N1O2L3N3M20000N2N2O1O0N02M3O10000O2O0000100O100O2O1N2O2O8G9F\\Ra6"}], [{"size": [848, 480], "counts": "jih75bi0`0aVOB]i0`0aVOB\\i0j000O1O100000O1O1M3000001O1ON200O1O10000000O1O100O1O2L4N1K7L3NRbh3"}, {"size": [848, 480], "counts": "jcl09Pj0oi07K4K3N3N3M2N1O2N2N1O001O2N1O2O0O10000O2N3M3M1O2N2N2N1O1O100O001O010O1O1O100O1O010O011N1O1O1O0000O2N101N3N1N3N4K4M4Jc0^O?B`ic7"}], [{"size": [848, 480], "counts": "^[k7e0\\i0e0H4M3M3N00001O0001O001O1N101O0O2N2N3L3L5K5K5I;AY^m3"}, {"size": [848, 480], "counts": 
"dck26Wj0:UVONhh0P1L3N2O001N101O0O100O0100O1O1O1O101M2O1O010M1O2O0O21N101O2M3N4L9G6I5L3M2O2L3M3M3O1O1WObM]YOa2]f0fM`YO[2\\f0kM`YOW2]f0lMaYOV2]f0P1M3N2N2O1N2O1N101N2N2O001N3N4L1N2O1O1O1N101O001O0000001O0000000001O0000001O001O001O1O1O1O00001O00O10000O1000000O1000000O10O10000000O1000000O10000O011N100O100O1000O1WOlZOVLSe0i3oZOWLQe0h3P[OXLPe0g3Q[OYLnd0g3S[OYLmd0e3U[O[Ljd0d3W[O]Lid0`1W[O_O3POfd0]1`[O\\OLWOdd0\\1e[OVOI^Obd0]1m[OiNEJdc0Nl[O^1U1[N@:]c0On[O[1^2fNda0Oo[OZ1^2fNca0@h[O5IfNRa0h0V_O`0IiNTa0c0T_Ob0JkNVa0?o^Oe0NjNaa03b^OQ1OkNba02_^OQ12lN_a02`^OQ12mN^a02`^OP14lN^a03^^OP15lN^a04]^Oo07lN]a02^^OP18mN[a01]^OR1;jN\\a0O[^OU1>hNec0X1]\\OgNcc0X1j2000001O00010O000010O00000001O0O2N1000001O00001O00010O010O01O01O0010O01O01O01O001O00001O0O2O01OOO2K5N1O2L3O2M3O1O11O01O0O1001N101N2O1N3N1N3N2M3L4M3M3N2L5K7IZ\\Q3"}, {"size": [848, 480], "counts": "a]\\22]j03N0O101N100001O00O1N2O2M2M3L4K5N1L4E;G7J6N3M2ZN\\N`ZOh1]e0aN[ZOa1ce0fNUZO]1je0nNdYOX1^f0W15N1001N02N3O0O2N011N2O2M201002M;G2kYOfLZOLZe0m4OO1F;O03M3NO010O2M3N2O1N3N4\\MQZO\\\\O=cNSOQe0?^\\O=bNTOod0?_\\O=cNTOmd0>b\\Og\\O9dNZOdd0=h\\O9eNZObd0>h\\O9eNYObd0?i\\O8eNZOad0>j\\O9eNXOad0?k\\O8eNYO_d0?l\\O8eNYO_d0>m\\O9fNWO]d0`0l\\O9iNWOZd0?n\\O:iNVOYd0`0n\\O:kNUOVd0`0Q]O:kNTOTd0b0Q]O:lNSOSd0c0Q]O:nNROPd0c0S]O;oNoNoc0f0S]O:POnNmc0g0T]O;QOjNlc0l0S]O:Sd0En[O;Rd0En[O;Rd0Eo[O:Qd0Fo[O:oc0HR\\O8ic0KX\\O5bc01^\\OO\\c06f\\OIob0cNW[Og1j1Fnb0hNS[Ob1P2Flb0c0U]O]Ojb0c0V]O]Ojb0c0W]O\\Ojb0c0V]O^Oib0b0W]O^Ojb0`0X]O@hb0`0W]O@kb0>V]OBkb0=T]OCmb0dZOC\\e0=dZOC\\e0>cZOB]e0>cZOB^e0>aZOB`e0>_ZOC`e0=`ZOCae0=_ZOCae0<_ZODbe0;^ZOFbe07`ZOIbe0JjZO6Ve0BQ[O?Pe0[OT[Oe0od0TOV[Ol0od0dN[[O]1lf0O010O1O00010O001O010O00001N101O0000M2K6N100ON2O1101N2O1000000001O1N2O1N3M2N3M3N1N3M3M3M4M3K7Gdo\\3"}, {"size": [848, 480], "counts": 
"^_j22]j01O2N1O10001O00000O101N2N0O200N2N2M2I6M3N003I8I6J6G9I6[N]N\\ZOf1be0eNPZO_1Qf0gNfYO]1Yf0^1O1NON1MN4N4O4K4M3O1O2O0O2O01dZOjL^c0X3^\\OkL`c0_3U\\OeLic0`3P\\OcLoc0k3a[OXL^d0l3S[O_Lnd0b40O0O101N2N2O3L3M2N4M3L3M3M5]MRZO;Sf0_OSZOSO9Khe0l0YZOnNb0EYe0X1P\\OfNRd0W1o[OfNVd0W1k[OjNXd0P1i[OQO[d0j0f[OVO`d0c0b[OZOed0?n2EmUd7"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "aQn2:Rj06K6K3L3M4N2M3L3N3N2N2M2O2N100L5N2O1M3O1N2N1N201N10001N10001O01O000000001O1O1O001O1O001O100N2O1O1O1O2M4L6K2M2N2N3POiVOb0ei0M4L4M3J4M4MT^j7"}], [{"size": [848, 480], "counts": "[^\\78Tj06M3N2M3M2L4G8FnNRWOW1gh0hNZWO^1fh0cNWWO^1jh040O100O2N1O1N2O2N1N3N1O2M3M3M5K4K6K5IZWX4"}, {"size": [848, 480], "counts": "_Zk1j0Si0k0\\O?H8I6L3M4L4L2O2N2M3N1O2N2N2N1O2N2N1O2O0O2N2O1N1O2N1O2O0O2N1N3B=K6L3M3N3M2O1N2O101N1O100O1O10000O100O10000O2O00000O100000000O1000O1000000000000000001N10000`LR[O^1nd0aNT[O^1ld0aNV[O^1jd0aNX[O]1id0bNW[O^1id0cNW[O]1id0bNW[O^1hd0cNY[O\\1ed0fN[[O[1bd0gN][OZ1\\d0lNe[OT1Yd0nNg[OS1Xd0mNi[OR1Wd0nNi[OS1Vd0mNi[OT1Wd0kNj[OV1Ud0bMh[O04^2Td0aMm[OL0d2Qd0aMW\\O[O0T3ic0`MX]O^2ib0aMX]O_2ib0_MY]Oa2fb0[M^]Oe2cb0XM_]Oi2ab0TMa]Ol2`b0oLd]OQ3^b0hLh]OW3Zb0eLh]O\\3ed001O00000001N101O001O000001O00000O101O001O00010O0010O00010O00N2100O0100O001O001O1O0100O10N2N1N3L4O1O1N101O001O1N11O01N2O1O0N2O1O2N2O001O1O100O1O1O100O100O100O001O100O100O010O1N20O01O10O000O0O100O1N201N2O1O1O2OO2O1N1O2N2N1N3M4M2N2O1O1O1O1O1O2N1O2M2O2M3N2M3N1N3N2N3L7H[a[4"}, {"size": [848, 480], "counts": "ncn23\\j03M2O0O101O0O100000O10O100O100O2O0O1O1N2K5H7J7J5M4M2L4K5I7G2cN^NTZOf1je0cNnYO`1Qf0fNgYO[1Zf0Z13O10N2L11O4M2N3N101N2O2NcZOmL`c0T3Z\\OVMac0k2^\\OVMbc0j2^\\OVMbc0l2\\\\OTMcc0T3U\\OmLkc0\\3l[OeLSd0h3][O[Lcd0i41O001O1O1N2N2N3M2N3M4L6K4J5^MSZO9Rf0AWZOcNbg0Y1l0N3N2L6K4L7I5K;Bma]7"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": 
"S^Q37Wj04M2M3N1O2M2O1N2O1O2N1O1O1O1O1O1O1O1O1O1O1O1O100O2N1O1O1O2O0O101O00000O10001O0O1O2O00002O0O000O1000O2O010N2O0O3L4M2N3N2N2M2N1O2GbVO^O^i0`0eVO^O]i0`0eVO^O\\i0a0;N3K`Sb7"}], [{"size": [848, 480], "counts": "cmc72[j04M4M2M4M2M4L3N1O1O1O100M3O001O1O001O1O01O0000010O00001O1N2O2N2M3O1N4K7Icnj3"}, {"size": [848, 480], "counts": "X\\^2e0di0;DSd0Co[O;Pd0FQ\\O9nc0hMi[OT19T1mc0fMn[OT17U1ic0gMR\\OT16S1hc0hMU\\OS14U1fc0hMW\\OR15U1cc0hM[\\OQ12W1cc0gM]\\OQ11X1ac0gM`\\Oo00Y1_c0hMc\\On0NZ1^c0hMg\\Ol0L\\1Wc0mMo\\Od0K_1ob0TNX]O;Ja1jb0WN_]O5Ge1gb0XNe]O0Di1db0XNk]OLBl1ab0ZNP^OF@Q2_b0YNU^O\\OCZ2Xb0ZNi_Og1U`0ZNk_Og1T`0YNl_Og1T`0YNm_Og1S`0XNm_Oh1T`0VNm_Ok1S`0QNP@P2R`0]M_@c2fc000100O0010O10O010O00010O010O0010O010O10O01O0001O01O01O01O010O1O001O1O00100O1O001O100O1O1O001O100O1O001O100O1O1O1O001O1O1O1O1O001O1O010O1O1O1O1O010O1O001O1N200O001O001O1O1O001O0010O0100O01O0N3N2K5K5M2O2O1O001O0100O010O100O100O1O2N1O1O3M3M3M4L3N0O1O1]O]WOXOch0h0^WOWOch0h0_WOVObh0i0aWOSO`h0m0b0O1O2N1O2M3N3M2M6JUik3"}, {"size": [848, 480], "counts": "b]T33]j0001O00000O2O000000O02O0O101N100O1O1N1N3H8C=F9K4L5K5J5M4M2gNkMWZOX2^e0[NWZOd1ke0dNkYO[1Zf0W14N3O10ON2ON0122N110O2N10O3N3aZOjL`c0\\3T\\OmLic0W3P\\OmLoc0\\3f[OgLYd0h400001O1O1N2M3N2O2N1N3K7YNh1G6VO[XOaNlg0X1ZXOaNmg0\\1f0N1O1N6H7K5K7H7J:EW]Y7"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "Sjb3:Sj05L4L4L2O1N3M3M3N2N2M2O2N2N1O2M200N3O0O100O010O0100O010O1O0010000O010O2O010O1O10000N2N2N2O1O1N2O1M3M2O2O0O101N1000001O0O101O001M2O2O1M2N3L6Ijlo6"}], [{"size": [848, 480], "counts": "fUl71^j03N2L5M;E0O100O1O100O10O01O1O10O01O00100O010O010O00010O01O2O0O001O1O2M2O3L5KXQa3"}, {"size": [848, 480], "counts": 
"bfo2=fi0a0F9F9EB6J4L4L3M3L6K5K3M2N100O1O001O0SORJX]Oo5db0XJX]Oh5fb0\\JX]Oo3_`0d5W^OSJX18a`0d1S^Ol12YLZ17b`0`1X^OP2IYL]17b`0_1\\^Of1FaL02]16b`0a1^^Oc1CdL02]16c`0_1b^O`1_OiLO2^15b`0`1f^OY1@mLK5^14b`0`1g^OW1EUMS12b`0b1h^OT1EVMT12_`0c1j^OR1FWMT11^`0d1j^OR1EYM_c0d1l\\OR1oc0kNT\\OR1Pd0jNS\\OS1Pd0kNR\\OS1Qd0iNR\\OT1Qd0hNS\\OV1Pd0fNS\\OW1Rd0cNQ\\O\\1Vd0WNR\\Of1^f0N2N1N3N2O0O2M2N3N1N3M2N3M2O1N3N1O2L3O2L4N2L5K5K`[o4"}, {"size": [848, 480], "counts": "Yjm25[j01N100000001O000O10O100000O1I8N1O1O1L4M3N2K4M1N112K6H8M3L3F;LNhNVNRZOh1me0gNhYOX1Yf0PO^YOP1cf0^10O2N0OM3101O2N2O1N2O01O2UZOXMlc0j2l[O^MSd0c2j[OaMUd0a2f[ObMYd0j2P[OcMod0o3O11N10001O0O2N1O1O1O1^NnZOeMTe0Z1hZOSNQg0j1QYOTNRg0h1QYOUNSg0e1RYOXNQg0e1P1M3N3ZN[WO[1Xi0E4L4K5L8EfP`7"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "ka]5;Rj08I5K6\\VOVOYi0W1I4L2M3N3N1L3O2O0O2N1O1N3N1O1O101N101N2O2M00010O100O01000O01O10O01O1N2N3N2N2M3L4J6EZli5"}], [{"size": [848, 480], "counts": "_jW58Vj04L2N3M3N1O2M2N2O100000000001O001O1O00001O00001O000000O010O10O01O100O1000O1000000O1000000O10000O10000O2O002M4L8EW_f5"}, {"size": [848, 480], "counts": 
"_eT34Yj07I5K4L3N3L5J5K5L4L4L4L4K6J6K5J6J6K4M4L3M3J6I7L4L3M4L3M3M3M4L3N2M4M2N2N2N3M2N2N2O1O1eNoJW]OQ5ab0cKR]O^4lb0iKo\\OW4ob0mKn\\OT4Qc0nKm\\OS4Rc0PLl\\Oo3Tc0SLj\\On3Uc0TLj\\Ol3Vc0TLi\\Om3Vc0ULi\\Ok3Vc0WLi\\Oi3Vc0YLi\\Of3Wc0]Lf\\Od3Yc0_Le\\Oa3Zc0k100000O10O100000O0100c]OXIRa0h6l^OZITa0e6j^O_IUa0a6h^ObIXa0^6e^OeI[a0[6`^OjI`a0c700000O10000O100O1O1O1O2M2O1O2N1O2N1O2N10001O10O`J_]OQ3bb0lLb]OS3^b0kLe]OS3\\b0kLg]OS3[b0jLh]OU3Yb0iLj]OT3Yb0hLk]OU3\\b0cLg]O[3ab0\\Lb]Oc3ab0XLb]Of3ab0WLa]Og3bb0UL`]Oi3gb0PL\\]On3gb0oK\\]On3db0QL_]Ol3bb0SLa]OIeNa3ic0gLX^OS3ha0mL^^Om2ca0RMb^Oi2]a0YMc^Of2\\a0[Mf^Oc2Ya0_Mg^O_2Ya0bMg^O^2Ya0bMh^O]2Wa0dMj^O[2Wa0eMi^OY2Xa0gMi^OX2Wa0hMj^OW2Ua0kMk^OS2Va0mMk^OR2Ua0nMm^OP2Ta0oMm^OP2Sa0PNo^Om1Ra0RNQ_Ok1Qa0TNP_Ol1o`0SNT_Ok1m`0SNU_On1i`0QNZ_OQ2b`0nMa_OW2Y`0gMj_Oa2k?^MX@h2b?UMb@U3S?hLPAY3n>fLUAY3k>eLWA[3i>cLYA]3h>aLYAg1TLHif04\\YOJdf04_YOKbf02bYOL^f02eYOM\\f01fYONZf01hYONYf0OkYO0Uf0LoYO3Rf0JRZO4oe0HVZO6ke0F\\ZO6fe0G]ZO7de0FaZO6be0EcZO8Qh0GSSU5"}, {"size": [848, 480], "counts": "ajm27Yj000000O100O101N1O1O100O1O100L4J6N2M3M2N3L2K3K7M2M3O3L5M2`N_NPZOb1ke0mNeYOY1Zf0Y16J5O2L3O01N3L3O0O1011O10100O12Ne0QYORMRe0X4N0O1001O0O10000O1N2N2N2I8eNnZORMWe0g2]1YOSYOcMVg0W2g0K3YNfWOS1oh0L3M4L3M5K6I7J:BVea7"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "Zjc4;Tj03L4L4L4M2N3L4M3M2M3N2M3N2N2M4M1O3L3O0O2N2N3N2L3N2VXOeM\\g0\\2<2M100O10001O0O100O100O1000O101O001O1O2L3L5M3N2M3M3M3M3Nc0]O5J6J6ZOZVO4`cY6"}], [{"size": [848, 480], "counts": "fQR54[j03K30102L2O1O1O000000001O0O10000O10O0100O100O1O001O010O0O2O1N2M2O1O20OO2O0100O100O2O1N10N2O01O010O00001O001O1O1M3M4M3K]ii5"}, {"size": [848, 480], "counts": 
"dZo15Yj09Fb0]O?B7H7K4L4M2N2M3L3M4L3N2N2M3L4J6L5L3L4N2M3N2N2N2M4K4N2M4L3N3L3L5K5I7K4L5L3N3L4M2N3M2O2M2O2N2N101cNkIX]OOn0X6ba0`JS^Oa5ja0dJT^O]5ja0eJT^O]5ia0gJT^OZ5ja0kJR^OW5ka0mJR^OU5la0PKo]OR5oa0h1N3N2M3M3N2N2N2L4O0O2O1O0O2O1O001O001O10O0001O0100O10O10003L6KXMV_OmKe`0n3b_OULY`0c3Q@_Lk?[3]@fL`?X3c@iL[?V3g@mLU?Q3n@SMn>j2TA[Mh>a2\\AcM`>Z2cAiMZ>Q2lARNQ>e1XB\\Ng=_1^BcN`=[1aBgN^=X1cBiN\\=T1gBmNW=YNg@?T2X1U=VNm@>o1^1Q=QNUA=l1g1jh1l1eg1Q2a<^MjA`0f1R2bDRE=j0FWL4Q>GQE>R2Jm8HSE>o1Km8EVEa0m1Im8EWEc0k1Hn8CZEe0h1Gn8C[Ef0h1Fn8B\\Eh0f1DQ9AZEk0i1_OW=b0kBZOV=f0kBXOV=h0kBWOU=i0kBTOW=m0`BRO]J0Tc0n0[BXOk=h0TBZOk=g0RB\\On=d0RB\\On=d0QB]Oo=b0RBZOR>f0oAVOS>j0oAQOU>o0f5O00100O101N10O0100O1O00100O01O0010O1O0010O01O000001O00000O10000O1O101O010O0000000010O0001000O010O2N1O1O3M2L6HX_\\4"}, {"size": [848, 480], "counts": "Zmd38Qj0>E;iVORO[h0c1N1O2O1K4N3N2M3N2M3N2O2N1O10001O1O1N101O1O0010O000000001N3M3L7I3N2N101Na0@3M3K5L5J7J4K9Fjda7"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "dRj37mi0a0H5L6I5L3M3L4N1N3N001N2N2M3N2O0O2O1O1O001O2N0O101OO11O001OO2O0001O1OO200O1O1O2O000010O1O01000O010O01O1O1O2N0O101N101N3N4L2M2O2M2O2M2N3M2O1N2O2N1O1O2N1O2N1O1M3L4N2M4M2M[Q]6"}], [{"size": [848, 480], "counts": "Wd`4:Uj03M3N1O001O1O000O1000000O10000O0100O01O1N101O001N1O1O2N2N10010O010O1O01O1O01000O10O01O10O010O01O100O010O001O1O1N2O2M3L7GX\\Z6"}, {"size": [848, 480], "counts": 
"eY:e0ai0d0A>D9I6J3M2N2M3N2M2O2M2N2O2M2O1O1O2N1M3O2M2O1O1N3N1RNhLc\\OZ3Yc0lLc\\OU3\\c0mLb\\OU3]c0kLb\\OV3]c0kLa\\OX3^c0hLa\\OZ3]c0gLb\\OZ3^c0fLa\\O\\3]c0eLa\\O]3_c0cL`\\O_3^c0bL`\\Oa3^c0_Lc\\Oa3]c0_Lb\\Oc3\\c0^Lb\\Od3]c0]Lb\\Od3^c0\\L`\\Og3^c0YLb\\Oh3^c0XL`\\Ok3^c0ULb\\Ol3]c0ULb\\Ol3]c0TLb\\Oo3\\c0QLd\\OP4[c0PLe\\OR4Yc0oKf\\OR4Xc0oKh\\OS4Wc0mKg\\OU4Wc0lKh\\OW4Vc0iKj\\OX4Uc0iKi\\OY4Uc0hKj\\O[4Tc0aKP]O`4nb0aKR]Oa4jb0aKW]O_4gb0cKW]O`4hb0_KX]Ob4gb0_KX]Oc4gb0]KW]Oe4hb0[KX]Of4hb0ZKW]Oh4hb0WKX]Oj4hb0RKZ]OQ5eb0mJ\\]OT5db0kJ\\]OV5cb0kJZ]OY5eb0gJZ]OZ5fb0eJZ]O]5eb0bJZ]O`5fb0_JY]Oc5fb0]JY]Of5fb0XJ[]Oi5eb0VJ\\]Ok5cb0TJ]]Om5cb0QJ_]OP6`b0nIa]OS6_b0kIb]OW6^b0gIb]OZ6_b0cIb]O^6_b0`Ia]Ob6_b0]I`]Od6`b0\\I_]Oe6P1nIn?S6P@YJ_NWOm`0a6a@eJ]?[5a@iJ\\?Y5a@jJ]?W5a@lJa>dNi@a6a0oJ`>iNk@Y6a0RKa>jNl@T66_Kk>cNk@P61eKR?aNh@n5OeKW?_80OO2OO010dJVATOi>k0ZASOf>l0\\AROf>m0\\AQOd>n0_AoNc>Q1]AkNg>T1[AkNd>U1]AjNd>S1`AkNa>S1bAkN_>T1cAjN^>T1eAiN]>U1eAhN]>V1gAgN[>V1iAgNY>V1kAfNX>X1jAfNX>Y1jAeNW>Z1jAdNW>\\1kAaNW>^1jA`NX>`1iA]NY>b1iAZNZ>e1hAXNZ>g1hAVNZ>i1gATN\\>k1fAQN]>o1cAmMa>Q2bAkMb>S2_AiMe>V2]AeMg>Z2[AaMj>]2XAaMi>^2YAfMa>[2_AkM\\>T2eAoMW>Q2iARNU>m1lAUNQ>j1PBXNo=g1RBZNl=f1TB\\Nk=d1UB\\Nj=d1VB^Ni=a1XB`Ng=_1YBcNf=]1ZBcNf=\\1ZBfNf=Y1ZBhNe=W1[BkNd=U1\\BlNc=S1^BnN`=R1`BQO_=n0aBUO\\=j0dBXO[=g0fB[OX=d0iB]OV=b0jB_OV=a0jB@U=>lBCU=;lBHS=5mB2R=GPC:X=[OiBg0f=_L__Oc1m2n1_>jMcAV2^>gMcAY2_>cMcA]2b>\\MaAc2h>SMYAm2g>RM[Am2f>QM\\An2e>PM]Ao2c>PM_Ao2b>PM_Ao2a>QM`An2`>RMbAl2^>TMdAi2]>VMgAg2Y>YMkAc2V>\\MPB^2o=cMWBW2i=iM^Bo1c=QNcBi1^=VNgBd1Z=\\NaCi0_g6O1000000000O1000O10000000O1000000000000000000O10000001O00000O0100000000001O1O00000000O0100000000000O1000000O100O10000O010000O1O10O0100000O1000O10O01O2OO010000O100O2O000O2O0O2O0O3L3LjlP4"}, {"size": [848, 480], "counts": "[\\l3?ji0:H7L3M2O2M2N3K5K5I7L5L3N2N2O002N1O1N2O1O1O1O2O2M6J:F:Ge0ZO3N4M2OlMoYOdNN0Vf0W1PZO`Nb0H_e0e1SZOXNi0K\\e0i1U2L5J5K2N4M2M6J5J7I9FZP[7"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": 
"klm28Qj0:F9L2M4M3L3O2L3N3M2O1O1O1N2O1O1N2O1O1O0010O00O110O000001O10L31O000100O01O01O1O101N1O1O101O0O11O00OQOmWODRh0;PXODPh0:VXOCjg0;XXODig0:XXOFig09WXOGig08XXOHig06YXOJhg04XXOLig02XXONhg01YXOOjg0NWXO1kg0LVXO3ng0HTXO7Qh0DPXO;Th0AnWO=Si0N4M3J6L\\de7"}], [{"size": [848, 480], "counts": "ZeS24Zj04L3O1N2N2N1O2O0O2N2N1O2O0O2N2N3M2M4N0O2N3M2N3N010O1O01O1O1O1TOPWO6Pi0IQWO7Pi0GSWO7mh0ITWO6mh0GVWO8kh0GWWO7ih0HYWO6ih0HYWO6jh0EYWO;`i0L3NXQW9"}, {"size": [848, 480], "counts": "l:a0j16WOa18WOT`0RO\\@7XO_18YOS`0ROZ@:YO[1;XOS`0ROW@>[OW1;ZOR`0ROU@b0\\OQ1<]OS`0oNR@l0YOh0b0]OS`0oNP@S1VOa0g0^OS`0nNl_OZ1UO;l0^OR`0mNk_Oa1oN7T1[OS`0mNh_Oo36SMT`0mNe_OQ46RMW`0kNb_OT48PMW`0lN__OV49oLX`0lN]_OU4jGn@U8Q?PHl@o7S?VHi@k7U?[Hg@d7X?aHd@_7[?gH`@Y7_?m1O0000010O01O00\\A`EZ=_:fBbEZ=]:eBfEY=Y:iBhEV=W:jBjEV=T:jBoEU=o9lBSFR=l9oBVFP=h9RCXFnU3YClLeNOT>Q3ZCPMaNOW>m2\\CSM[N2Z>g2_CVMVN3]>i0l@XOb2LRN5b>d0m@XOa2OnM7e>4WABV21hM;l>K]AGQ23cM=Q?CaAJl16_M?W?XOgA0e18[Mb0_?lNfA8a1:XMd0Qb0POi@;TMg0Ub0kNh@?oLh0\\b0fNf@c0jLj0bb0_Ng@Q3Z?mLg@R3[?jLh@V3`?XLj@g3Qc0N1O1O2N100O1O2N1O1O1N2O2N1O1O1N2O1O2M2O1N2N3N1N3M2O2M2O2M2O2M2N3M3N1N2N2O2N1O2N1N3N1O1O2N1O2N2N2M3N2M4L3N4K6Hf\\W7"}, {"size": [848, 480], "counts": "[cU4b0ji03M4M3L4K4K4J6O1L4M4aN\\NPZOf1ke0kNgYOW1Xf0QO_YOQ1`f0^12N1N3M3J5O2OL3O2O01N2011O1O2N6J>PZO]LXd0Q5M1O00O10000O1O2M3N2N3M4YNhZOnM]e0m1U[O_MRe0j1kYO\\Nig0`1ZXO\\Nng0\\1k0H4K3N3M3M4L7G9G_ci6"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "Pad12\\j03N2N1M4L4N2L4L4M2N2M4N2N2L4N2M3N2N2N3M2M3N2N1O2N2O0O1O2N2O0O2O0O2O000O2O001N100O100000O10O1000O10000O1O100O2M2O0O3M2N2M4M3L5L4I8Cj0[O^`Q9"}], [{"size": [848, 480], "counts": "R_]18Uj05J5K5K4N3N1O1010O0000001O00000000000O100O10000O1000000O10O10O10O010O1O010O001O001N101N2O0O1010O10O0010O1O10O1O01O01O10O1O1O1O1O1O100O1O2L4G802L3NRiR9"}, {"size": [848, 480], "counts": 
"^U34Yj05K4M3K4I7C=M3O1O1O001N1]JEQA=f>0TA1g>9RAJh>b0n@Ak>k0n@XOl>P1PAROj>U1TAkNh>\\1TAfNd>c1XA`Na>i1YA[Na>l1[AWN`>o1]ARN`>T2[AoM_>Y2]AhM_>`2\\AbM`>f2ZA[Ma>S3TAoLj>^3j@dLR?i3b@YL]?R4W@PLg?]7O1O1O2N100O010O1010O01OZOW@fFi?X9]@eFb?Z9a@dF_?Z9f@cF[?[9g@fFW?Z9k@eFU?Y9Q1O2N100O2NeAoFZ;Q9eDVGU;i8kDnG_:R8`ERHl8lNhEP9\\1WH\\8BmEW8AlG?`0o9DlEQ8BQH>>Q:EjEg9NiFV:CiEg9GnF^:^OiEh9CmFOlM^:b1mEQH8I4J6L4M3M2M4M4L2N3M2N3M2M4M3M3N2M3N1N2M01101N1O1O2O0001O001N20O01O1N3N1N2O1O1O1O1N2N3N1N2O1O2M3N2N3L3N2M3N3M2M5K4L4L6J6K4J7J4K6KSk`8"}], [{"size": [848, 480], "counts": "g]m2>li09K2N4N0010O001O00000000O2O00O1000000000O10000O10O0100O1O010O010O100O001O1O001O1O00100O10O1000000O2O0O100O101N100O1O2O0O2O0O2N2N2M4M3L8HSnf7"}, {"size": [848, 480], "counts": "Y^\\1>Ve0EZ_Oo0cK[Odc0b0V@U1V?[Of_O]2\\?lMQ@g2i?_Mj_Om2P`0ZMe_OP3V`0XM`_On2[`0]MX_Oj2c`0d3K5I6L5L4K5M2M4K5J5K6K4L4M4J6L3N3M2O1N2N2O2N101N1O10QN[BSGf=i8iBkFW=S9PCiFP=U9UCgFl]9XN1V1dFa0[9XN6Q1bFf0X9XN>h0^FP1W9SNa0d0]FX1W9lMn2R2i:O1O100O1O1N3N1N3N2M3M2N3M2ON11O01O010O1N111N1O1N201N1O2N3N2M3M\\N[XO`0dg0]OcXO?\\g0_OhXOa0Wg0^OkXOa0Ug0^OlXOc0Rg0^OoXOa0Qg0^OQYOa0nf0@RYOa0lf0@TYO`0lf0_OUYOb0jf0^OVYOb0if0^OXYOb0hf0]OYYOc0gf0\\O[YOd0df0\\O\\YOd0df0\\O\\YOd0df0[O]YOe0cf0[O]YOe0cf0[O]YOe0cf0ZO^YOf0cf0YO]YOg0cf0YO\\YOg0ff0XOZYOh0ff0WO[YOi0ef0WO[YOh0gf0WOYYOi0gf0WOXYOi0if0WOWYOi0if0WOWYOh0jf0XOUYOh0lf0XOSYOh0Pg0UOPYOl0Yh00000000001OO1000000000010O0001OO010O1[OUOdWOm0[h0WOaWOk0]h0WO]WOm0dh0TO[WOl0`h0c0M4N2O1O10PNiWOh1Vh0XNnWOf1Qh0[NPXOc1Rh0;4L3N3N1N1O1N1000M3001O1O100O001N2O3MJeNYWOZ1dh0:001O001O1N10O2N11O:E6K4K7I5K3M5K3M202KlkZ4"}, {"size": [848, 480], "counts": 
"QP\\6>ei0>G:L2O1M3J5K5M4K4J6bNZNiYOn1Vf0_NXYOh1hf0P1O^O[YO\\Mdf0d2`YOXMbf0f2`YOTMef0l2\\YOSMdf0h2d0L3O01O1O2N1O2O3M4M2O2M2O0000O001L4O1000001O0O5L2N201F]XOaMfg0X2`0J4BbWObNch0[1;M3M3L3O101NW;e0O2O1N1O2N1OlAeDV=Z;jBlDR=Q;oBREP=l:RCTEmo9]2eFh1Z9UNPGd1o8ZNYGb1f8ZNbGc1]8[NgGc1Y8\\NjGc1T8]NoGa1Q8`NoG_1R8`NoG_1Q8aNPH^1P8cNPH\\1P8cMWFYKk1S7m7\\MdF^K_1U7m7XMjFcKZ1S7l7UMPGgKW1Q7i7QMWGoKQ1i5oM_Ji9g3ZGQLP1e5PNaJf9g3]GRLo0d5QN`Jc9i3_GSLn0a5SNaJ`9k3`GRLo0_5SNcJ^9l3`GRLQ1]5SNcJ\\9j2XF]MY1JP1Z5TNeJ[9c2`FcMR1JQ1X5TNgJY9c2bFcMP1LR1U5TNhJX9c2cFcMQ1LR1S5TNjJW9c2bFdMR1LR1P5UNlJU9c2dFcMR1NQ1l4WNoJR9d2cFdMT1LR1i4WNSKQ9d2aFcMW1MP1d4[NXKm8d2aFcMX1MP1^4^N_Ki8a2cFcMX1NP1Z4`NbKg8a2bFeMX1NP1V4aNfKf8_2bFgMX1MQ1R4bNkKd8^2aFhMY1MR1l3dNoKb8^2`FiMZ1MR1h3fNSL_8[2bFmMX1MR1c3iNVL\\8[2cFoMW1LS1_3kNYLZ8[2bFQNV1MT1Z3mN[LX8\\2bFRNU1MV1U3a9iNUDVNU1LU1R3c9kNUDVNT1LV1m2f9nNRDYNS1LW1g2l9hNQDdNn0NU1a2R:_NVDROd0OT1Z2Z:]NRDZOa0OU1T2]:aMZB5Q2730V1m1c:bMYB4m1=20V1h1e:fMWB4m1>10X1b1g:nMPB3o1>20Y1[1k:SNZDb0B1Z1S1j=lNm@1Y1m0o=QOi@3Y1e0S>XOe@3Z1=V>@a@3Z16Y>G^@4[1M\\>OZ@3_1C]>:U@2Td0Mn[O3Pd0NR\\O0nc00S\\O0lc0OW\\OOic01X\\ONhc02Y\\ONec03\\\\OLdc04\\\\OMcc02_\\OMac03_\\ON`c02a\\OM_c03a\\ON]c03d\\OM[c03e\\ONZc02g\\ONXc02i\\ONVc01l\\OORc02P]OMob03S]OLlb03W]OLhb04X]OMgb03Z]ONdb02\\]O0bb00`]OO_b00c]O1[b0Og]O0Xb00i]O0Ub01m]ONRb02P^OMoa03R^OMma03U^OKla04V^OJja06X^OGha0:W4000O10000O10000000000O1000000O01000001O000000000000001O00001O000000000010O100O10O1O1O1O010O1O100O1O1O001N2O1O001N2OPUk3"}, {"size": [848, 480], "counts": "j[R75[j0000O10001O00O10O100000000M3L4M2N2N3M3M2N3L3K5NOO3O1N4N01O2M2O3J5N2NmMFUYOG2`0jf00lXOE97lf0P1VYOlNjf0W1UYOfNlf0\\1TYO`Nmf0d1RYOZNnf0i1RYOSNPg0o1f03K4J7L4M3dYOnMWd0T2d[OTNWd0n1Q[OZMNk0Qe0k1Q[OiNnd0Y1P[OhNod0Y1Q[OgNod0Z1P[OfNPe0[1oZOeNPe0]1nZOcNTe0]1kZObNVe0_1hZOaNYe0Z3100O1O2M2\\N_ZOoMke0[1WZOlMVg0R2oXOiMUg0U2kXOkMXg0Q2hXOoM_g0k1aXOSNdg0i1e0L5K7I3M3M3M6K5J7ITjY3"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": 
"Vg\\3f0ci0;G8J5K5I7K5L4L4L3M3M3N2M4MbNeXOJYg04mXOJPg07RYOHnf07TYOIjf07WYOIhf07ZYOIdf08\\YOHcf09^YOG`f0:`YOG_f08bYOH]f09cYOG\\f0:dYOF[f0;eYOEZf0fYOCYf0=gYOCXf0>hYOBXf0>gYOCXf0>hYOAYf0>hYOBWf0?iYOAWf0?iYOAWf0>jYOBUf0?jYOAWf0?iYOAWf0?iYOAWf0>jYOBVf0>jYOAWf0?iYOAXf0>hYOBYf0`YO@bf0`0^YO^Oef0a0[YO]Ohf0b0XYO\\Okf0c0UYOZOnf0e0SYOXOPg0i0PYOSOTg0l0mXOnNXg0R1T1O1O1O1N2O2O1N1N2O2N1N2O5DnUU7"}], [{"size": [848, 480], "counts": "PTg32Yj08I6L3M2N3M3N1N3N1N2O1000000001O0000001O001O0000001O0000000000O10000O10O0100O010O100O1000O01000000O10O010O10001N101O0O01001N101N101N2N2N2N3L4Kehj6"}, {"size": [848, 480], "counts": "^ll1a0ji06K5L4L4K5I7XOg0N3N1O2N101N1O2N1K6M2M4L3L5L4W^OlLRj4lAUKS>Q5hAPKV>S5iAmJV>V5gAkJW>X5hAhJW>Y5hAhJW>Z5fAgJY>[5fAfJY>\\5eAeJZ>\\5eAeJZ>\\5eAeJZ>]5dAdJ[>]5dAdJ[>]5dAdJ[>^5bAdJ]>]5aAdJ_>]5_AeJ`>\\5_AdJb>]5ZAfJe>[5ZAfJf>Z5XAgJh>Z5VAhJi>Z5UAgJk>Y5TAfJm>[5RAeJo>\\5n@fJR?[5l@eJT?\\5j@_J]?b5a@[Jc?f5[@ZJe?h5Y@VJj?k5S@SJQ`0n5m_OQJV`0o5h_OPJZ`0P6d_OQJ4fN]?[7Y@RJK4N6SNiWOX1kh0L2N3M3M4L4L5K:DUT]3"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "m^k22[j04M2N3N101N2O0O2O2L3N3M2O1N2N4M1M4L5L4L3M4L3M3M2M2O2N2O0N3O1N3N1N2O0O2O0O1O1000000O1O1000000O1000O1000O10L400O10000O100O1O10000O1O1N2O1N3L3O1O1N2N2O2M2N3M2N2O1M3N3M3L4L6B>C=LnUY7"}], [{"size": [848, 480], "counts": "Ub[31Zj0:H4L4L4M3M2O2O0001O00001O001O00000000001O000000000000000O10000000O10O010000O010O1O001O1O1O0O2O001N2N1O2O1O0010O100O10000O2O1N1O3M3N1M3N2N5JUUW7"}, {"size": [848, 480], "counts": 
"ZmR21Wj0hEk8Q2kFS8`0iEc8R2QGS8`0iE^8b1fGb80jEY8b1jGb80jEU8c1nGa81jEP8d1QHa81jEm7e1RHa85gEg7h1VH`86dEd7m1WH^8Q:aGQF^8P:_GRF`8R:[GQFe8R:UGQFj8S:PGPFP9S:jFPFV9S:dFPF]9R:\\FRFd9Q:TFTFk9P:kEWFU:Q=10000000000000001O000O100000000O1O1O1000000O1UC`Ef9a:WFcEf9^:YFeEe9[:[FeEd9\\:\\FeEb9\\:^FeEb2B`1k:mKeE0@C8[4c:RLfEJFC:\\4[:VLfEFI_Of0^4l9[LhECJcNFAc1c5U9fLjE]OLhNP2g3eM\\Lf:g1lEZOJjNV2`3aMfLd:d1oE^N^O[O9<[2V3`MQM`:d1SFRN^OI4:_2o2^MZM^:a1XGPN\\N6d2d2_MgM\\:\\1TGTN]N5h2Y2bMRNW:Y1TGVN[N6k2k1lM\\Nm9[1RGXNZN5o2Z1YNkN^9\\1PG[NZN4S3Q1\\NQOY9]1oF]NYN4V3j0_NTOV9_1mF_NXN4Y3a0dNWOS9c1iFaNWN4_35k:S1`CdNVN4]c0T1_^OgNTN5`c0P1]^OlNSN4fc0g0Z^OUOPN3oc0>S^O_OnM3Wd03n]OJjM4`d0Hh]O4hM4lg0LTXO3mg0MSXO3lg0NTXO2lg0NSXO2ng0OQXO1og0OQXO1og0OQXO1og0OQXO1og0OQXO1Ph0NoWO3Qh0LPXO4Rh0JmWO7Sh0ImWO6Uh0HlWO8Uh0GjWO:Wh0FhWO9Zh0FeWO;^h0^OeWOc0Ti00O10O010000000000O1000000000O10O101O000000000001N10000000001N101N3M5Gm\\g5"}, {"size": [848, 480], "counts": "UYd66Zj01N1O10001N00100000000000000O1N3K4N2K5M3N1O2M1M102N2N2O1L4L4L4N010hNeN^YO[1bf0nNRYOU1mf0WOgXOj0Zg0S12O1O100O2N2O00N4M2M3lYO_MWd0f2^[OgM]d0d2P[OhMmd0n3O10O00001O1N2M3O010O1@cZOkK`e0k3lZOoKWe0^3X1I7M3M2O2L4^OcXOeMhg0X2;L5L7TNdWOW1nh0L3M4L5K7I8EURg3"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "mV`3i0`i09H8J5K5L3N3M1N2N4M1O2N2M3O1M2O3N1O2M2O1O1O2L3O1O100O100O1O1O1O100O010O1O0100O2O1N2O2M2N2L4M3M3K6J5I8B>_Oa0J;BUi_7"}], [{"size": [848, 480], "counts": "`[m23[j08I2M3M3M4M1N3N2N1N3O001O00010O00010O001O0001O00000001O000001O0000000000O2O0O100O10000000000010O00002N001O1O002N1O2N1O2N10O01N3N1O1O002M2N102L3Ngec7"}, {"size": [848, 480], "counts": 
"alc16Wj05J5M3L3H9F9O1O2N1O1O1O1M3K5H8G9hJnMgA]2g=_NhAh1P>cNdAf1U>cNeAa1W>gNaA^1Y>jNaAX1]>nN]AU1`>SOYAo0c>[OUAh0g>APAb0m>IX@j0f?Bc_Ok0Z`0g4L5L3N2M3M2M4M3N1NWOV@QGf?R9_@jFW?_9l@`F^>U:dAiEP>a:TB]Ei=e:n02N2N2N3N2N1000O2O0OTDnD_8P;aGTE]8k:cGYEY8e:iG^EV8]:mGhEo7V:QHnEn7o9RHUFl7j9YG_E^M:Oa0X;f9PGaGo8]8jElHV:R7`EZI^:f6`E^I^:b6`EcId9oKSF3GY:b0hIT:oKYEX:b0mIV:hKXEY:b0SJY:m5fEVJY:j5eEZJT9_K\\F0a0MBY:;^JR9RLaF]O0W:4`JX9ZLeF[>X9hAhFX>S9kAmFcNO_?Q9PBPGbNO^?o8QBXGP>g8QBXGP>h8QBWGo=g8UBUGm=j8m1O1N2O011N1O102M1O1O1O1O10001N10O1000O10001O0000hNdG\\@\\8_?VHT@j7g?^HV@b7f?cHY@]7b?iH]@W7_?oH^@R7[?WIb@j6n>hIo@Y6d>WJTAn5d>S3I7nKaDlHg;o6cDdHd;X7dDaH`;Y7lD[HY;V7\\EaHg:j6VFjHn9m6cFhH`9c5]HnIj7P6\\6O1YI\\\\O`6lc0O1O1O0O2gNV\\OjKjc0P4]\\OoKdc0P4\\\\OoKec0Q4[\\OoKfc0P4Z\\OPLfc0Y4Q\\OfKPd0P4Z\\OPLgc0m3[\\OSLec0Y3n[OXL?>cc0Y3P\\OXLjc0^2_\\OXME;nc0j1k\\OnMPO@Hh0cd0Y1o\\O`NmN6Yd0T1j\\OgNmN5_d0\\OT[Oj0n1E_N5Se0Ka\\OO\\N6og0IRXO6lg0LTXO3lg0NSXO3lg0NTXO2lg0NUXO1kg0OUXO0lg00SXO1ng0NRXO3mg0LSXO5mg0KSXO5ng0ISXO7lg0IUXO6lg0HVXO8ig0HXXO8hg0HXXO8ig0HVXO8jg0GWXO9ig0GWXO9ig0FYXO9gg0FZXO:fg0D\\XO;eg0E[XO;dg0HZXO8fg0HZXO8fg0G[XO8fg0GZXO:fg0FZXO:fg0C]XO>bg0A_XO?bg0_O^XOb0bg0^O^XOb0bg0^O]XOb0cg0@\\XO`0dg0AZXO`0fg0_O[XOa0eg0_OYXOc0hg0\\OXXOd0hg0[OZXOd0fg0\\O[XOc0hg0YOXXOh0hh0O10O1O1O1O1O1O100000O100000000O02O0XO^OaWOd0^h0TOlWOi0oh0O000000000O10O2O00O2O00O1001M2O02O0102L5J8Ienh4"}, {"size": [848, 480], "counts": "ckW69Vj01O100000000O10000O1000000O2M4K4N101N100N101OO3E:N11000L4N10bN2\\XON`g07`XOG_g0;cXOCZg0`0iXO\\One0?`YO8gg0NSXO5ig02oWO3og0X1O001O1N3N1N101K5G9fYOiMXd0Z2e[OkMWd0W2f[OlMYd0V2e[OkMZd0Y2b[OhM^d0[2^[OfMad0^2[[ObMfd0a2T[OaMmd0P41N2M3L4K5N2O2N1VOk0I7K5M3M3WOiXOVN\\g0c1kXOWN^g0a1gXOXNbg0a1l0J4L3N1N4L5K6J:Ec_S4"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "Xf\\3k0^i0c0@:G5I8I6L4K4K6L3M2O1O1N2N2O1N2O1N2O1N2O1O1O1O0001E:O2O10O010000OM4O0101O001O0O20O01N2O001O1O1O1O1N2O2N2M2N3L7dNoWOBM^Och0b0oj`7"}], [{"size": [848, 480], "counts": 
"Vck28Vj04M2N2N2M3N1O2N1N2O1N3M2N2O2O00010O01O0010O000010O0001O01O01O010O1O01O01O0011O1N2N2O0O1O2O1N2O1N2N2N102M2N1N2O001O001O1N1OWan7"}, {"size": [848, 480], "counts": "VXn05Yj05L2N3M2N2N2N1O2O1N2N1O2O1N2N2O1M2O2M3L5L3M3jI[N\\Cj1[W4gC[LbMZOd>_4gCXLkMTO[>g4iCULPNQOT>n4iCSLTNnNP>T5hCoK\\NkNh=Z5iClKcNgN`=b5lCgKfNfNZ=h5mCbKjNfNV=m5mC^KmNfNR=P6oCZKQOdNna5nA`JP>d5mA\\JR>g5lAYJR>l5jAUJT>P6hAQJW>R6fAPJW>U6dAmI[>X6aAhI]>^6^AcI`>b6[A`Id>e6XA[Ig>k6SAVIk>X9hCkDR9X;nFkDn8W;QGlDm8T;SGnDk8R;UGoDj8Q;UGREi8n:WGSEh8m:XGUEg8j:XGXEh8f:XG]Eg8b:WGaEh8_:VGdEi8\\:TGiEk8V:nFRFQ9m9bFbF^9]9_FgF`9Y9[FnFc9R9SFYGl9g8oE_GQ:`8mEdGR:[8lEhGS:X8kEkGU:U8iEnGV:Q8hERHX:m7fEVHY:j7fEXHZ:f7eE^HZ:a7dEbH\\:]7cEeH]:Y7cEjH\\:U7bEnH^:Q7`ESI_:l6`EVI`:i6]E\\Ic:a6\\EcIc:]6[EeIe:[6VEjIj:o:O001QOQA[Fg=OdBa9L[Fa=8bBX9a>jF^AS9d>PG[Ao8d>TGZAl8e>VGZAj8f>XGYAg8f>]GWAc8j>^GTAb8l>_GSAa8m>`GRA`8n>aGQA_8n>dGPA\\8P?eGo@[8Q?fGn@Z8Q?hGn@X8Q?jGn@V8Q?lGn@T8P?QHl@P8S?g1N2M3N2N2M3N2oHiDfNZ;d0SFcNo9V1cF]N_9Z1lFbNV9\\1mFaNU9]1oF_NS9_1RG\\NP9b1WGVNl8e1]GUNe8JXDYLFE9OZ3S4b8MYD[LAD]4n3l71YD[L_OE_4j3l76VDYL\\4Z3a7=SDXLa4T3_7e0oCXLe4l2_7l0mCXLf4e2a7S1iCZLi4[2a7[1fC[Lm4Q2a7e1bCYLP5l1a7k1_CYLT5e1`7R2\\CZLV5]1b7i0iBeN?UOY5S1e7U1bBbN`0XO\\5h0f7`1^B^Na0[O]5>i7i1ZB]N`0\\Oa53l7T2TB\\N`0\\Ok5@j7h2nAYN=_OYd0X2Z[OZN=^OXd0X2\\[OZN<]OXd0Y2][OYN;^OXd0X2_[OZN9]OXd0X2a[O[N7\\OXd0X2c[O[N6\\OWd0W2f[O]N2]OWd0U2i[O]N1]OVd0U2k[O]N0]OUd0S2o[O_NL_OTd0P2R\\ObNJ^OSd0o1U\\ObNI_OQd0l1Z\\OeNE^OQd0j1^\\OgNA@Pd0f1c\\OiN^O@oc0`1j\\OQOWO_Onc0^1n\\OROUO_Omc0]1Q]OTORO^Omc0Z1V]OWOmN@lc0V1[]OYOjNAjc0Q1b]O^OcNBjc0i0k]OD\\NBic0`0V^OMQNDhc07_^O5jMDfc06b^O5hMFec03f^O7eMEec01i^O;bMDdc0MP_O>[MFag0:_XOFag09_XOHag08^XOIag08_XOGbg09^XOGag0:^XOFcg0:]XOFbg0:_XOFag0:_XOG`g09_XOH`g08aXOI^g07aXOJ_g06aXOJ_g06`XOK_g06aXOJ_g06aXOJ_g06`XOJ`g08_XOGag0;^XOEag0=^XOBcg0?[XOAeg0a0YXO@gg0a0XXO^Ohg0d0VXO\\Okg0f0SXOZOmg0g0QXOYOQh0g0mWOYOTh0i0gWOZOYh0_1O1O101O001N0100O10O1O110O0O10O100O1OgWOkMZh0R2fWOnM\\h0Q22N3M3M3K5L5J5C=H;CaUm4"}, {"size": [848, 480], "counts": 
"T]P67Wj03O0O10000O2O010O0O2N1O100N2O1L4M2N1O2M4M1M4I5J5000201O1N1O0N23N2L4N3J5L5L4M3L5KbWOKhf01WYO4if0IXYO8if0EXYOG5I6K6J5L3M4K5M2M3N2M3N2N2M3N1O2N2N101N2NgMnXOV1Qg0hNXYOS1ff0mN^YOP1bf0oN`YOP1_f0QObYOn0]f0SOcYOm0]f0ROeYOm0[f0SOeYOm0Zf0SOgYOm0Xf0TOiYOk0Wf0VOhYOj0Xf0VOiYOi0Vf0XOjYOh0Vf0XOjYOh0Uf0ZOlYOd0Tf0\\OmYOc0Sf0]OnYOa0Sf0_OnYO?Sf0BlYO>Tf0BlYO=Uf0ClYO;Uf0EkYO;Uf0FkYO8Vf0IiYO6Yf0IhYO5Yf0KgYO4Zf0MfYO0]f0OcYO0^f00cYOM`f03_YOLbf05]YOIff07ZYOGgf09YYOFif09XYODkf0SYO@Pg0?PYO_ORg0a0oXO[OUg0d0nXOVOZOOXg0k0QZOROSf0m0i1O100O102M3M2M5K5JPc]7"}], [{"size": [848, 480], "counts": "VQX32Zj07K3M3M3L3N3M2N3M3N1O100001O00010O0000001O0001O000000001O01O01O0000001O000100O1O10O01O1O010O1O001O1O100O001O1O2N1O1O1O1O001N2O1N3N2LTo]7"}, {"size": [848, 480], "counts": "Sc^19Sj07I;E;ET6jBhJTOPOQ>Z6iBfJYOoNl=^6hBbJ@nNf=c6hB_JFkNa=i6fB\\JMiN[=n6fBYJ0hNZ=P7dBXJ4hNV=T7bBTJ:gNS=W7aBSJ=eNQ=[7_BPJb0dNn<_7]BmIg0cNkj1SAVNl>k1TAUNk>k1VAVNi>i1YAVNf>k1ZAUNe>k1\\AUNd>j1^AUNb>j1^AVNa>k1aATN]>m1dASN[>m1gASNX>l1iAUNV>j1lAWNR>g1QBYNn=e1VBZNi=d1YB]Nf=`1]B_Nd=_1^BaNb=]1_BcNb=[1_BfNa=X1`BiNb=S1aBlN_=R1cBnN]=Q1dBoN\\=P1eBoN\\=P1eBPO\\=m0eBTO\\=j0eBVO]=`0kBBV=9kBJU=2mB1oC=N2N2N01M2K6O2000012Mb0^Oo0RO1O1N3N1O001O000001N1O1N3gNeZOWM_e0h2V1N2O2TOVYOmMnf0k1R1J5VN`WO]1nh0K3M5L3L6J9F7I`[g4"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "gZT3a0ki07I6J6K5L4L2O2M3M3M2N3N2M2N3M2O1O2M2O1O1O1O1O1O1O2N10O01O1O1O1O100O1O00100O010O0100O01O1O01O1O1O1O1O001O2M2O2N1O1O2N3M2M3N2N2N2M4M1O2M3L4N3J4N3K4M4J6K5I7M4KQiW7"}], [{"size": [848, 480], "counts": 
"ZiY35Rj0k7UAQIX>V7ZAXI_>c9Io0QO5aBcCbe6Z4XGkKV5Ia3Y4]GPLT5C^3l3QHaLa4B\\3n3UH`L^4B\\3n3XH_L\\4C\\3n3ZH]LY4F\\3n3\\H[LX4F]3o3]HYLV4H\\3P4cHRLT4J[3T4lHeKm3O]3\\4U;101O0O11O000000000000O100O1O_NgZOmMYe0n1nZOPNRe0o1P[OPNPe0P2P[OPNPe0o1R[OPNnd0P2R[OYMSOc0ke0S2R[OZMTOb0je0S2S[O[MSOb0je0R2T[O[MSOc0ie0R2T[O[MTOb0he0R2U[O[MTOc0ge0R2V[OZMSOd0ge0Q2][OoMdd0P2\\[OPNed0n1\\[OSNcd0m1][OTNbd0k1`[OTN`d0k1a[OVN_d0h1b[OXN^d0g1c[OZN\\d0d1f[O\\NZd0c1g[O]NYd0a1i[O_NXd0_1i[ObNVd0\\1m[OcNTd0[1n[OdNRd0Z1P\\OgNPd0W1Q\\OiNPd0U1Q\\OkNoc0T1R\\OlNoc0R1R\\OnNnc0Q1S\\OoNnc0o0S\\OQOnc0k0U\\OVOkc0g0W\\OYOjc0c0Y\\O^Ogc0?[\\OAfc0ZOPZOj0^2Nic0H^\\O9dc0]Od\\Oc0df010O1O010O100O2O0O2N1O1O0O20O00000000000000001O2N2O1N1O001O002N2N1000O100O100003M1OO1O10O20O001N010O1O2N2N1N3MY^a4"}, {"size": [848, 480], "counts": "Q\\P6=ki09L4I6K5I7L4M3L4K5N2M3L4K3NMoWOlMog0S2O7H73NJfM\\XO[2cg081ON4N5Je0kXO\\LYf0S4I4N2M4M001O00000000000000O2oNkZOeLXe0W3U1POPYOVNVg0c1P1L4K9H5L1N3N5J3L6K7I9Gn^P5"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "ia]3?ki0:H5L4L4M4L3M2N3M3N1N3M3N1N3N1N3N1O1N2O2N1O1O1O1O1O1O1O1000O0100O2N010O10O10O010000O00100O001O0N3M3N2N3N1O2N1O101N3N001O001OO14L1O002N001N101O2N2N1N3L6\\OWWP7"}], [{"size": [848, 480], "counts": "Wno28Vj04L3M3M3L3N3M2N2O2O000001O100O001O010O1O1O000000001O000000000001O000000000000000O100000000O10000001O000000001N101O001O0O2O0O2N2O1N2M3N3L[]d7"}, {"size": [848, 480], "counts": 
"ll_2;f15Wf0NcYO8Zf0L`YO9^f0J[YO;cf0IWYOQ4jD^JnLb1W>R4fDaJQM^1W>T4QD\\JcM72[1Y>U4mCUKiMg0Y>V4kCVKiMf0[>W4iCUKhMg0^>W4fCXMZf701N3M2N1O100O1O10000O010O10000O2QNcCiE]h101UNZDkDg;P;hDcD[;Z;nD^DT;^;RE_Do:];XE^Dj:^;`EWDe:f;T2K5cN^1[Nm_O\\Hb`0]7\\1M4L4L4J6PK\\]Om1ib0QNd]OU1lb0iN^]Ob0nb0]OV]O:Qc0ER]O6Pc0JS]O2ob0MU]ONlb01[]OIfb07a]OPNnN;bc0d1f^O\\N[a0c1f^O\\N[a0c1e^O\\N\\a0d1d^O\\N]a0c1c^O]N]a0c1c^O]N^a0a1c^O_N]a0a1c^O^N_a0a1b^O^N^a0a1b^O_N`a0^1b^OaN`a0Z1d^OeN^a0Y1c^OgN_a0V1c^OiN^a0U1c^OkN_a0n0e^OSO`a0=k^OCZa03k^OL`a0Cg^O=fe000000000O10O10000000O10000000O10000000O1000O010000000O01000O1000000O10O10O10O010O010O010O0100O100O001O2OO010000O100000O2O0000001O00O1001N10O101O0001N10001O1O001O00010N3M3L6JTh_4"}, {"size": [848, 480], "counts": "e]S56Xj04N100O2M3L3O1N010O0O1OO20N2N3M3M3H7L4L4K3N5J7I5L2N1iNXNhYOg1[f0dNYYO[1jf0T11N2J6N200OO1M4lYOUMdd0k2^[OWM_d0i2b[OYM[d0h2f[OXMYd0h2g[OZMVd0g2k[OZMSd0i2i[OYMVd0a401N01O1000000O1O2N1O2N1M3VOWZOjLme0i2T1L4F:]Oc0M4M4L6J5J5ZN[WOY1Si0K6I6K7H9Faa^5"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "Q\\h3f0ei0=D7J7I5L4L5K4L4M2N2M4N3L4L2N4L2N3M2O1N1O1O010O10N1N3J6N20O01000000O2O1O00001O1O1N3N4L2N3M3L4M2N2M5I6J8F;Eo\\X7"}], [{"size": [848, 480], "counts": "_QX32Xj09K3M4L3N1N3L3N3M2N2O2O0001O10O00000010O000001O01O00001O00010O001O01O02O1N2N1O100O1O1O1O2N1O1O1O1O1N2O1O1O1N2O2N1N4Jhbd7"}, {"size": [848, 480], "counts": "V]a21Qj0?E;Db0@?_O`0@`0C=D<[Oc0[Oc0B?J6J6G9QAVJQ9n5jFYJo8n5kFVJQ9o5kFTJQ9Q6YFcJd9c5VFaJg9e5SF^Jj9h5PFZJo9k5iEYJV:Q6mDdJQ;S6`CfJ_W9YAdFi>V9cAaF`>Y9_1M3M2N2N2N2O1O1N20SOmF]@R9c?QG[@o8e?RGZ@m8g?TGX@l8g?UGY@k8f?XGX@g8c?bGZ@\\8`?nG]@S8_?TH]@m7`?ZHZ@h7c?\\H\\@d7b?^H^@b7^?cH`@^7\\?m1K5J6J6J6J6WJRDhLVn[OBVd04P\\OLXd0Gm[O9Ug000O1000O1000000000000000001O0O1000000000O1000O10O10000O10000O010O100O001000O01O010O10O01O00100O100000O10001N0100000O10O100000O1000O0100O1O1O01001O0000O02O000000000000000001O001O1O1N2O1O1O001N1000000O2N101O4Kk]j3"}, {"size": [848, 480], "counts": 
"PlZ56ni09@c0J210N4L5K5L3M4L3eN]NfYOd1[f0kNSYOX1nf0S11N3O10O01J5L5N3M3O001O0101]YOmMgd0]2kZOjMRe0]2dZOfM\\e0g3000000001J5N200O1000000gNRZOoMoe0h1kZOdMXe0a1eYOjNY1AWe0a1gYOhNmg0U1VXOfNlg0X1g0O2O1N3M4L4L3M6J5K7Gb`c5"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "fld3?mi08E;GXg0BhXO=Yg0BiXOI4H7K5J5GLnWOXNSh0j18002M1gNUNnYOn1oe0\\NhYO^1^f0W13M4K4N3OK3O3O1N30O1NSZOXMWd0g2j[O\\MTd0e2j[O]MTd0e2k[O[MUd0h2h[OXMWd0P3c[OmL^d0a3U[OWLSe0P5B3N000000O2M2L5M3M3cNiZO^MZe0_2jZO\\MZe0a2T[OPMPe0n2S[OmLRe0n2]1G:Aa0^O=J5N4L3M5K6J6I9DSeg5"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "kcj3`0ii0=D;F9H9I4K5L5J5M2L5L4M2N2N2M3N2N2L3O2N]NSYOElf09[YOCef0<^YOC`f0=cYOA\\f0?fYO@Yf0`0jYO^OUf0c0mYO[OSf0e0nYOZOQf0g0oYOYOQf0f0PZOZOPf0f0PZOYOQf0g0PZOXOQf0g0oYOYOQf0g0oYOYOQf0f0PZOYOQf0g0oYOYOPf0h0PZOXOPf0h0oYOYOQf0g0oYOXOSf0g0mYOYOSf0g0mYOYOTf0e0mYOYOWf0e0iYOZOYf0e0fYO[O]f0b0dYO\\Oaf0a0_YO\\Oef0c0[YO[Oif0c0VYO]Omf0`0TYO^OQg0?nXOBTg0Be0\\O:F6J4L3M4M3\\AfIm8]6QGfIl8]6PGgIl8_6XF`IjK:k=[6TFZJk9k5oEXJo9j5oEWJo9l5nEVJQ:l5mEUJR:l5lEVJS:Q6eEQJZ:S6aEoI_:S6QE[Jn:j5dDaJZ;b5\\DfJc;_5SDgJl;]5lChJS<]5cCiJ\\<]5ZChJdbHXOb7g0aHSOc7m0_HjNh7V1YHeNk7[1XH\\Nn7d1VHRNP8IiBZO_5;R8ITCIW5UOWJmNS>R2dBJfd05][OIcd06`[OH`d06d[OH\\d06l[ODTd08R\\OGmc0Ma\\O1dc0mN[]OQ1Sf0O2N1O1O2O4K2N1O1O1O001O001O010O1O00001O00001O001O001O000000001O0010O1O0100O0100O0100O00010O0010O010O010O011N1000O000010O01O001O10O01O0000010O00000100O1O1O010O1O1O0010O010O01O1O1O0010O00010O00010O001O0010O0001O100O1O1O1O1O1O1O0000002M5I\\^`3"}, {"size": [848, 480], "counts": "UaW52Tj0`0F7G:G9G5J6M0O1fNdNcYO]1[f0lN\\YOU1df0[10N3M3N2M3J7N2O1O1N101000`YOhMhd0Z2S[OjMld0`2iZOaMWe0l3001N1N2N2O10000O11O01M2fNZ1K5ZOVXObNng0]1d0M3M3N2N3M2O2N2M4L6J6J8FSZi5"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "mTm3810ei0g0G9G7I6K5L3L4K6L3N2M4L2M4M3M2O2N1N3N1O2O0O001O100O1O100N2O0010O100O011N2OO02N1O3K4L4A`XOeMig0W2;M3M2O2M2O1O2N1O2N2N1O2N2N2O2N0O1O2N1O3M3N2M2O0O2N3N2M5K2N2O1N2N1O2N2MfR^6"}], [{"size": 
[848, 480], "counts": "\\Zb4Q5aAUK_>n4ZAVKf>n4SAUKl>X800O1O1O100O1N1O2N1N3O1H8M3O1O1MYOmAWEQ>e:WBYEg=b:aB]E[=^:oBbEmW1dC=dM[Nj>Bj_OZ1Y4kNkKGZ2`0f><`FAe9;^FBe9<[FBg9=ZFAh9?XF_Oj9`0S81000O10000O1O010O101N1O010O010O01O01O01O1O10O002O0O2N10O01OO1010O1O001O010O0010O101N2O1O0O10O010O0001O0010O100000O10000000001N2O1O1O1N1O2OO010O1N2M5K5Jdg_4"}, {"size": [848, 480], "counts": "Vc`58Wj09F7Bo1d]ONa3WNh>Q4UARLh>Q4VAPLi>Q4UAQLi>R4RARLl>P4f@^LX?e3d@^LZ?e3b@^L\\?d3c@]L\\?f3`@\\L^?l3[@TLd?T4U@mKj?W4Q@jKo?X4o_OhKQ`0Z4n_OeKR`0]4k_OeKT`0]4j_OeKT`0]4i_OdKW`0]4g_OeKX`0]4e_OeKZ`0\\4d_OeK]`0\\4__OgK``0Z4]_OiKc`0W4[_OkKe`0U4Y_OmKf`0T4X_OnKh`0S4T_OoKm`0Q4P_OQLPa0P4n^ORLRa0o3k^ORLVa0n3h^OTLWa0m3f^OUL[a0l3b^OUL^a0P4Z^OSLga0Z601_NeGl@\\8R?iGj@X8S?nGi@T8T?PHi@P8W?RHg@o7X?THf@X7@WHi?c0f@U7AXHi?f0d@o6F[He?h0d@l6G[He?m0a@h6LXHc?S1_@d60XHa?X1[@`65WH`?b9b@^F]?b9e@\\F\\?b9f@^FY?b9i@]FV?c9l@[FU?c9o@ZFR?e9l0O10WBcFa:[9]EkF`:U9ZESGe:k8kDgGT;X8kDkGU;S8jDQHT;n7mDSHR;m7mDUHS;i7mDYHS;e7nD\\HR;b7nD`HR;`7kDcHT;_7hDdHX;\\7eDgH[;[7`DhH`;Z7ZDjHf;X7RDmHo;T7lCPITHTD[1_MnNh0`3[O]L]>@bDZ1PMXOj0Z3[OcLoa0a0l]OBo0R3[OhLSb0;e]OJQ1k2\\OoLVb03^]O3S1d2]OUM^b0FU]Oa0T1\\2^O[Mmb08g]OW2@bMhb07j]OQ2BhMdb07l]Oi1GPN]b06o]Ob1KTNXb09P^O\\1NXNTb0li07H7J7J7I6I4M3M2lNiNPYOZ1of0T1M2001M2N201M3M3N2000L4K5O3bYOgM`d0_2S[OnMid0m300O100000O1O1O1O2N2M3M3L6bN_1H:QO^XObNfg0[1^XO_Nfg0_1\\XO\\Njg0a1g0J3N4L3N3L4M3M5I6J8G_Tj5"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": 
"mbV4=Pj07J4N2I6L5L4M3K4M4L2N2N3M3N2M3L3N2O0O2O1O1O1O1N20000O1O1O100O1O100O01000O1001O0O2O0O2O1N3N1N2O1O3M6IVH^Ag7d>\\HZAa7i>`HUA]7n>fHPAW7R?lHk@P7Y?SIb@l6b?m13K6J5J7K5J6M3NcCPG]7Q9dGQHY8b8`EoFhNn0g;X8nD\\IR;h6cD_I];j:000000000000000O1000001O000000001O00000O1O2O1O0O2O001O0O2N1O3M3bGeCc2][2fAhMW>^2dAdMZ>d2^A^M`>m2WASMf>U3SAlLi>_3o@dLc>R4o@SLi>n7K5N2M3N2N2O1OZO^AaE_>]:gAaEX>^:kAaES>^:QBaEm=_:UBaEj=]:ZBbEe=]:]BcEb=[:aBeE^=X:gBhEW=T:oBkEQ=Q:SCoEmRFgAl9`>iEfAU:\\?O000O2O000000000O10O1000000O10000O10000000000O10000O10000UOaEeA_:Y>eEeA\\:V5\\FdK>g3mN]KY:T5gFZKl0b3VNnKY:o4mJXNkJiLX:k4_KdL`JcNLMV:j4kKjKgJZOYO2U:i4^NmJ`G:R:h4[OVKh0g4ZOXKg0g4[OVKg0h4\\OUKf0j4]OQKf0n4_:UMfZO>[e0^OkZO?Ue0_OP[O>Qe0^OT[O`0nd0\\OX[Oa0jd0VO^[Og0gd0POb[Ol0ad0mNf[OP1ad0dNf[OZ1jf0N4L4L5K3M5J2O1O1O1N101N100O1O100O2O00000O100O1000O02N1N20000O10000000000000000000000000000O1000000000000000000000000000000000O10000000O1000O10000O100O01000O1000000O100000O10O100O100O1O101N2O0O10000O1000O100000001O000000001O001O001O001O1O1N2N6FVcl2"}, {"size": [848, 480], "counts": "Raj5o0`11]e0:UZO2`e0>RZOGie0b2O0000L4O2O0O2M3O10000SOm0@a0M1O2^OiWOiNYh0W1gWOhN[h0W1eWOfN^h0Z1bWOdNah0Z1W8K5I7L4K5M3MTOaBmD]=S;hBkDV=T;nBjDP=V;TChDk_7TOPEd0_3:a7QOmDh0a39a7nNkDo0a34d7nNfDb1aM^Nn4U1k8lNaDf2U2aNY9jN]DS3m1WNe9gNYDW3n1UNj9dNTD[3n1UNm9aNRD\\3n1VNP:_NnC_3n1VNT:ZNiCe3Q2SNV:YNdCi3R2RNZ:UN`Co3P2oM`:VNYCR4P2lMf:TNTCX4P2fMl:SNZBUO>[5U2`MS;QNWBXO;_5T2[MZ;[4]DiKc;jMVBe5n1dLl;gMVBi5f1dLU<`MWBP6\\1eL\\UG\\On8e0UGUOm8R1oFhNT9\\1lF\\NW9f1kFSNY9n1iFkM[9U2gFfM\\9[2eF`M^9a2cFZM_9k2_FPMd9T3ZFgLi9\\3VF^Ln9d3SFTLR:m3PFlKT:T4nEfKV:[4mE\\KW:\\2QAkNk4_N[:c2m@mNgd0Q1\\[OnNdd0m0c[OQO]d0h0k[OXOSd0a0V\\O^Okc0a0V\\O^Ojc0b0W\\O]Oic0b0Y\\O]Ogc0c0Z\\O\\Ogc0b0\\\\O\\Oec0b0]\\O]Odc0a0^\\O^Occ0`0_\\O^Obc0a0`\\O]Obc0`0a\\O_O`c0?b\\O^Obc0>b\\O_O`c0=e\\O^O`c0:i\\O@[c04T]O@Uc0mXO_OUg0b0kXO[OVg0g0kXOTOXg0m0V10O2O0O2O0O11N1002OOOO2N2L4M3K6B?IZib5"}], [{"size": [848, 480], "counts": 
"TRR56Xj05K3N2M3N2M2O1O2M2N2N3N1O2O01O01O010O00010O010O00010O001O01000O001001N3N1N2N1O101O1N2N3N2M1O2O1N2N1O1O001N100O2O2KW\\k5"}, {"size": [848, 480], "counts": "_`k03]j01O0O2O1N1O0010O10O2O00001O00001O001O0000001O000000010O00000001O000001O00O10001N101O01O000000000000O2N10000N2O15L0O00000001O1O01O1N2O001N10000000O11O001O001O1N110O010O00000001OO1O2O00O10O2O0VYOCac0=[\\OHec08X\\OKgc05W\\ONhc03T\\O0lc00R\\O2oc0MP\\O5Qd0Im[O:Sd0Ek[O=Vd0Bh[O`0Yd0_Oe[Od0[d0ZOe[Og0\\d0XOd[Oi0\\d0UOd[Ol0\\d0UOa[OQ1Zd0POd[Ob1]4WNh88iBi2Q3YMT:MiBo2j2\\M[:EjBR3e2^M`:AiBS3b2`Me:]OgBV3`2`Mj:XOfB[3Z2aMR;QOdB_3X2cMT;lNdBc3T2dMW;lN`Be3V2`MY;mN_Be3U2`M\\;kN]Bh3U2^M_;iN[Bk3S2^Mc;fNYBn3Q2^Mf;dNVBR4Q2\\Mi;cNQBV4R2ZMj;dNPBX4o1XMPm3jATLV>o3fARLZ>P4cAQL^>Q4]ARLb>Q4YAXKMhMj>Q:^AjEb>S:cAkE^>R:eAmE]>o9eAQF\\>l9gASFW>l9mASFR>k9RBTFm=i9XBVFg=g9^BYFa=d9cB[F[=d9hB\\FW=b9lB^FS=a9PC^FoMPAb0[ME^>ITDg0ZMCa>FSDk0ZM_Oc>FRDm0YM^Oe>FoCo0ZM\\Og>DnCR1iLLY?ROlCT1gLN]?nNdC\\1lLIa?iNaC`1mLIb?gN_Cb1mLIca0oNT_O[1YOEaa0UOQ_OY1]OCf?eN^@e0;U1@Ce?fN\\@f0;T1CBe?dN\\@g02\\1MZOe?cNZ@i0Ka17TOd?bNX@l0@i1d0iNe?bNV@m0ZOm1l0eNd?aNU@n0SOR2U1`Nd?^NT@T1iNT2_1[Nd?]NR@c4:PMe?]NP@b4Z:VBfEk=X:VBhEk=U:WBkEi=S:ZBkEh=Q:\\BmEf=P:]BnEd=o9`BPFa=l9bBSF_=c3WB^1=mJ\\=_3oBn0M^KV=`3ZCf0^>ROQBe0R>SOZBe0i=VO`Bc0d=XOdBa0_=[OhB?[=_OjB;Y=ClB7Q=LUCMV<_Mi_OV3V4TOTU1mCcNSlVOASi0a0mVO@oh0d0oVO@kh0T1O0L3110BYWOTOeh0m0[WOTOdh0l0]WOROdh0n0\\WOQOeh0o0ZWOSOfh0[1010N2ON1O1001N2F;M3N2O1N2NO101O02N1O2O00001N100O2VO_VOc0ai0[ObVOc0_i0]OaVOd0^i0\\OaVOe0_i07YXO@\\e0a0_ZOI[e08aZOO[e01bZO4]e0M^ZO9_e03oYO4Pf07ZYO5ef0l10O0O2O000O3L4J6F:L4K3L4K5M3N2O100O1O100O1N2M3N2O1O1O1N22O0O0000O1001O00010O00000O2O0000000000001O1N1000000O2O0000000O1O1O1M3M3M3N2M3N2M4F9K5N2N2O1O1N2N2N2O1O1O10000O10000O10O11O0O1O1N2N2L4M3N2M2N3N2N2O1N2N2O1O1N2O0O2O1N2O1N2N101N2N2N1O200O10O010O01O0100O010O100O10000O11O1O3L4M4L4K3N2N1N2O2N1O1O1O1O1O001O000O101O0O100O10000001O00000O101O00000002a\\OPJZb0U7N000001O0O1O2N2N1O2N1O1N2`Mf\\OhM]c0V2g\\OdM\\c0\\2e\\OaM]c0_2d\\O^M_c0a2b\\O]M`c0c2`\\O[Mac0e2_\\OZMcc0f2]\\OWM
fc0h2Z\\OVMhc0k2U\\OdL_d0\\3_[O_Lid0`3U[O_Lnd0^45L3M4M2N3L5L5J9H6JgMZOb`0jMfAT2Y>PNiAk1U>YNPB`1o=dNSBW1k=nNSBP1l=TOSBk0l=XORBh0l=[OQBg0m=]OoAe0o=^OmAe0R>^OhAe0X>]OdAf0[>\\O^Aj0a>YOYAj0f>YOSAl0m>XOj@n0U?WOd@k0[?U5]@`EW?`:i@dES?[:n@gEP?Y:QAgEn>Y:RAhEl>Y:UAgEj>Y:QAmEn>S:o@QFP?P9VAlFN6j>m8QBUGn=l8PBVGo=k8PBVGo=k8PBVGo=k8PBVGo=k8PBUGP>m8mAUGR>m8lASGT>o8jARGU>o8jARGU>P9iAPGX>P9gAQGX>P9hAoFX>R9gAnFY>S9gAmFX>T9gAlFY>U9fAkFZ>W9eAiFZ>X9eAhFZ>Z9eAgFZ>[9eAdF[>]9dAbF]>_9cA`F]>b9aA]Fa>c9_A[Fb>f9^AXFc>i9\\AWFe>i2ZAX30nIf>g2aAY3GoIi>f2dAZ3CoIj>c2iA\\3]OPJj>`2oA_3WOPJk>^2RBb3oNRJo>[2UBX4k=hKXBU4h=kKZBS4g=lK[BR4e=oK\\Bo3e=PL^Bm3b=RLaBl3_=TLcBj3^=ULdBi3\\=VLfBi3[=ULhBi3X=WLiBi3V=XLkBf3X=VLjBi3[=RLgBl3_=fIe@k1o1_4\\=eIg@i1o1a4[=fIh@e1o1e4g=XK]Bf4d=XK^Bg4c=VK`Bj4`=TKbBk4_=RKeBl4e=iJ]BV5h=eJYB[5i=bJYB\\5h=bJZB]5i=`JXB`5i=]JZBa5g=]J[Bb5h=ZJ[Be5k=RJXBm5Wa0N2N2O2M3PNV[OUNmd0b1c[OUN`d0]1R\\O[NSd0X1Y\\OdNlc0R1[\\OlNhc0P1Z\\OoNkc0k0X\\OSOlc0i0U\\OVOQd0c0R\\O[OUd0=m[OCYd01m[ON\\d0WO_YO8Y2`0Yg0N1O1O2N1O2N1N3N3K8HkgP4"}, {"size": [848, 480], "counts": "afh45Zj02O00000000001O0O1001O0000O1M3M4M3L3N3M10O11M2J51M5M3N5K3@cok6"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "nj]61[j05L4L4N1M4N1O1N3M2O2M102L3O1O1N2N2O1N2N3M2O0O2O0O2O01OO101O0001O0O2O001O001O001N2O1N1O3M2N2M3O2M3N2N2N3M7_OQVO4^ic4"}], [{"size": [848, 480], "counts": "bY`9:Uj02O00000O101N1N2N2O1O1N2O1N20O01O1M30000O10O100000000O10O101O0000O1000000000000O100O100O1O1O101N2N:B]ob1"}, {"size": [848, 480], "counts": 
"kn_2235li0`0G6L3M3M3N2M2O2M2O1O1O1M3M2O2N2N1O2O0O2N100O101N100O100O100O10O11N100O01000O10O100O100O10000O1O10O01O1O1O1O1N110O11O2N1O1N2N2O1N1O1O2O0O100O1O2O000O2O0O101N10001O0O100O2N100O2O0O101N100O2O0O2N1O2N1O2N1O101O0O2N100O100O100O100O100O001O1O010O000100O001O0O2N100100O010O1O100O100000000O100O001O100O1O010O1O1O00001O0O1O1I7^Ob0G9K6N2N2K5L4M3M3N2J6J6J6M3N2M3N2M3O1N2O1O1O1O10000O100000O100O011N100iNa\\O\\K`c0`4e\\O^K\\c0`4f\\O_K[c0_4g\\O`K[c0^4e\\ObK\\c0\\4e\\OdK]c0Z4c\\OfK_c0X4b\\OgK_c0W4b\\OiK_c0V4a\\OjK`c0U4`\\OlK`c0S4a\\OkKac0U4^\\OkKdc0S4[\\OnKfc0P4[\\OPLfc0o3Z\\OQLgc0n3X\\OSLic0l3W\\OTLjc0j3W\\OVLjc0i3V\\OWLkc0i3T\\OVLnc0h3R\\OYLoc0f3Q\\OYLQd0f3o[OZLRd0d3o[O\\LRd0b3n[O_LSd0`3m[O_LUd0`3j[OaLWd0^3h[OcLYd0\\3f[OdL[d0\\3e[OdL\\d0[3d[OdL^d0Z3b[OgL_d0X3a[OgLad0W3_[OjLbd0U3^[OkLcd0S3^[OmLcd0R3][OnLed0m2^[OSMcd0a2h[O_MYd0[2k[OfMVd0f1lZOaMX1i0mc0]1d\\OcN]c0P1P]OoNRc0o0n\\OQOSc0n0n\\OQOTc0n0k\\OROVc0m0k\\OROWc0m0h\\OTOYc0l0f\\OSO[c0m0d\\OTO]c0l0b\\OSO_c0n0_\\OSOac0P1\\\\OoNfc0R1W\\OoNic0R1V\\OmNlc0T1R\\OlNnc0V1o[OjNSd0X1j[OhNWd0Z1e[OgN[d0\\1b[OcN`d0_1][OaNdd0`1Y[O`Nid0`1V[O`Nkd0`1S[O`Nnd0a1Q[O^NQe0b1mZO_NTe0a1kZO^NVe0c1iZO]NXe0c1fZO]N\\e0d1aZO]Nbe0a1]ZO^Nfe0b1XZO^Nie0e1cYOaM;i0Tf0Q2eYOnM^f0R2`YOnMaf0R2_YOlMdf0R2^YOjMef0U2\\YOiMff0V2ZYOiMhf0V2YYOhMjf0V2ZYOdMkf0Y2h0N2N2N2M3N2N3L3N2L5I8UOm0@[`]2"}, {"size": [848, 480], "counts": "Pai43Zj030001O01O00O1O2N1O1O1O1O1O1M4M3K3O2N2N1O1L4J4M201O2M3L4N2N2O1mN]NYYOc1ef0hNoXOZ1Sg0POcXOo0_g0o020O1N1K310002O1M31N2N4MX1hNa0_O2N10000001M3M2O2K4I8UOjYOWMZf0c2m0M4ROdXO`Nag0]1dXO^Nag0^1cXO^Nkg0T1\\XOdNig0X1h001N2O3L3M5K5J9G:ERTe5"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "Zfd4=Qj05L5K5K4M2M2O0O101O0O101N101O001O1O001O0O1000001O0000000000000000000000O10010OUWOcNch0]18000000010O00000001N11O01O001O001O1O0O2O2N2N1O0O10000O101O000O1O100O2N2M3N3M6IZaf5"}], [{"size": [848, 480], "counts": "XgU9>ki09K5K3000O00100O1001OO1O11O0000000000O100000000000O1002N10OO1001O01N1010O10N2O010N2N110O1O1O1N10000N200O2N2MZWj1"}, 
{"size": [848, 480], "counts": "]iX3`0li07K4L4L3N2M3M3N2N2N2N2N2N2N2N101N2N1O2O0O2mWOeNnf0\\1nXOnNlf0S1oXOWOkf0Z2L4M4L2N2N3M2N2N2Na0_O9G5L1N2N3M2O1N2O1N2O0O2O1N2O1N10nMo[OeMPd0X2U\\OfMjc0X2`\\O`Mac0^2e\\O^M[c0_2k\\O]MUc0a2P]O]MPc0`2S]O_Mnb0_2S]ObMmb0^2R]ObMob0]2Q]OcMPc0]2o\\OdMPc0]2P]ObMQc0]2o\\OcMQc0^2n\\ObMSc0]2l\\OdMUc0\\2j\\OcMXc0^2g\\O`M[c0`2d\\O_M_c0`2`\\O^Mdc0a2\\\\O^Mfc0a2Y\\O^Mjc0a2V\\O^Mlc0a2S\\O_MPd0_2P\\O_MSd0a2l[O^MVd0a2j[O]MWd0d2j[OYMXd0g2h[OWMYd0j2g[OTMZd0m2i[OmLYd0T3_13N2M3O1N2O1O001O1O001O001O10O01O10O01000O010O100O1O1O1O1O1O001N2O1N3N1N2N2O2M2N2N3M2M3N2N1O2O01O010O001O1O00100O1O100O011N101N2hXOYMdf0k2QYO\\Mnf0T3O010O000000000O100O10O010O010O01O100O10mN`YOVN_f0i1fYOTNZf0j1hYOVNWf0k1jYOTNVf0k1lYOTNTf0l1lYOTNTf0k1nYOTNRf0l1nYOTNQf0m1PZORNPf0n1QZOQNPf0m1RZORNne0n1SZORNle0n1UZOQNke0n1WZOQNie0o1XZOPNhe0o1ZZOPNge0o1ZZOPNfe0P2[ZOoMee0P2`ZOhMde0X2Z1O100O1O1O010O000O2O0O1O1O2O0O2O0O101O000O101O000O2N1000001O000O1O10000000010O00001O00UXOjN^f0U1`YOPOF0ae0o0gZOYOXO;ie0>hZOi0Te0]2MG9K5O10000000001O000O100000001N10001O1O00001O001O1J7E\\YOcLhf0j2WYOcMPg0W2VYOhMkf0T2XYOlMHCmf0]2_YOTNaf0j1aYOVN_f0f1fYOWN]f0c1iYOTN`f0e1fYOYN^f05RYO=h0[O`cV2"}, {"size": [848, 480], "counts": "\\[j43\\j01O11O0010M3N1O2N100O1O1O1N3M2O1O1O1O0O2I6E9M3O2O1L3M4N101N2kNeNUYOZ1kf0VOcXOl0_g0Q11O2O1OO0L301010001O2N2O4Mk0lXOfL[e0[4M000001N100N3M3N3M2M3L5aN]ZOkMfe0R2nZOVMZe0g2Y1XO]XO`Neg0]1`XO`Nag0_1k0L8I2M3N3L4L5J6K:Djne5"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "kTQ56Uj0KnUO3Rj0:K7I4K5K4M4M101L4M3N2N1O101OO10O2O001O1N2O1O2N3M2N2N2N1O2M2N10000001O00001O1O1O1O1O1O101N2N2O0O3L4M4J]WO`N]h0]1=M4L4L4L5K5J5M6K1O4J5Lc_f5"}], [{"size": [848, 480], "counts": "VQf81^j03N105J1O1O2OO0100O1O0O2O1N3M2N2M101O2N2N1O1N2O0O101O1O1O001O1O0000O100O010O1O2N1O2N2O2M2N2N2N2O1N3N2N3L3N3LRnY2"}, {"size": [848, 480], "counts": 
"aaa47Qj0`0E6K5K4L4M2M2N3M2O1N3N1M3L3M4N2N1N3N1O101N10QOWXOXOhg0h0\\XOUOcg0k0aXORO_g0m0dXOROZg0P1dXOQO[g0Q1dXOoN[g0U1cXOjN\\g0Z1aXOfN_g0_1\\XOaNcg0V20001N3O1N4M3M3M2N2N2lNnLeZOT3Ue0VMfZOl2Ve0XMhZOj2Te0[MiZOg2Se0]MjZOf2Se0^MjZOc2Se0bMhZOb2Ue0^1N2N1O2N2N101N1O2O0O2O0O2O0O2O001N2O00001N10001O00000001O00001O001N110O010N2O1N3N2M4M2M5`KjZO^3Ze0YLlZOe3Ze0PLmZOn3ke000O100O2O0001O2M3L4L4L4L4L4L3N3M3N2M2N101N2OjNV[OgLhd0Y3\\[OeLcd0Y3`[OfL`d0Z3b[OeL]d0Z3f[OdLZd0[3i[OdLVd0Z3m[OeLSd0Y3P\\OgLoc0X3R\\OhLnc0W3T\\OiLkc0U3W\\OlLhc0Q3[\\OoLfc0n2]\\OQMcc0k2a\\OUM`c0Q1W[OQO9[O]1c0Sc0m0]^OSOda0k0]^OUOca0k0]^OUOda0k0\\^OTOda0l0\\^OTOea0l0Z^OTOfa0n0Y^OQOha0o0W^OROha0o0X^OoNja0Q1V^OnNka0S1S^OmNna0S1Q^OnNoa0S1P^OlNQb0V1m]OiNTb0X1j]OhNWb0Y1h]OfNYb0[1f]OdN[b0]1d]ObN]b0_1b]O`N_b0`1a]O_N`b0a1`]O^Nab0c1^]O\\Ncb0e1]]OYNdb0g1^]OUNdb0l1P3000001N101O0O2O00001O0010O1O10O01000O100O2N2M2N7I5K3L3M0100O001O01O000O000100000N2O1O2M3N2N2N2O1M3N5KeR^2"}, {"size": [848, 480], "counts": "k`i43[j03O000000000O1O2N1O1O1O1O1O2M2M3N2N1O2M2N3H5I7M3N3N3M2O1N3M1mN[N\\YOg1af0cNXYO\\1hf0T12L5M20O1J6H610XZOQMTd0n2k[OVMTd0h2m[OYMRd0g2n[OZMRd0g2l[OZMUd0f2h[O\\MXd0i2`[OZM`d0[4O0101O00000O1O1O2M3M4J5K5kNV1SOeXO`N^g0\\1hXO^N\\g0_1m0M:G4K3M3M3M4L3M6I8H_if5"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "lQl6;Qj04L4M3N2M3N2N2M4M2M3N2N3L5J4O2M2EXNmWOj1Rh0VNmWOk1Th0TNjWOo1Yh05K2O02N1O101N3M2N2N2N2K5J7H7G`0WO`VOMh^d4"}], [{"size": [848, 480], "counts": "WVX94Zj03N2N3M2ZOGQWO;oh0KjVO7Ui0b00001O001O00000001O1O2M3N2M3M2M4M3M5HTi_2"}, {"size": [848, 480], "counts": 
"iaR44Xj09I5K7J4L3M3M2O1N2O1O1O001O00001bWOZOPg0g0eXOFXg0k1O0O01O001O010N101O0O2O0O1O100O00O1O001O100O00O2O0101N4M2M4M;E5K4M2M2O0O2O0O2O00000O1O2O000000001N10000O100000000POWLY[Oi3`d0`L][Oa3^d0eLa[O[3\\d0iLb[OX3]d0jLa[OW3]d0lL`[OV3_d0mL^[OT3`d0_1O1O1O1O100O100O100O101O0O10000O10000000O1000000000001O001O00001O1O1_Kg[Oi2Zd0UMg[Ok2Zd0TMf[Ok2\\d0SMf[Ol2[d0SMe[Om2]d0QMc[On2ad0nL`[OR3cd0gLa[OX3jd0oKe[OQ4]e04K3N5J:F9F5K5L2O1N2O1O001N2O001OoNW[O[Lid0^2V[O[Mf01Sd0X2T]OgMkb0V2Y]OiMgb0V2Z]OkMfb0S2\\]OmMcb0S2]]OnMbb0R2_]OnMab0Q2_]OoMbb0P2_]OPN`b0P2`]OPNab0o1`]OQN`b0m1a]OSN`b0m^OATa0?k^OAUa0?l^O@Ta0a0k^O_OVa0`0k^O_OUa0b0j^O^OWa0a0j^O^OWa0a0i^O@Va0a0j^O^OWa0a0i^O_OXa0`0h^O@Xa0a0h^O]OZa0b0f^O^O[a0b0d^O^O]a0b0c^O\\O^a0d0b^O\\O_a0d0a^OZO`a0g0a^OWO`a0i0`^OVO`a0k0a^OROaa0n0Q400000O0100O1O010O1O100O1O001O0000100O0010O00010O010O1O1O10O01O1O001O001O01O001O1O1O1N6K2L4M4K7FSi_2"}, {"size": [848, 480], "counts": "gVf44[j02N1O100O100000001O0N3N1O101O0O1O1M3J6M3M2N2N1N3M2L2L5N003O1N0O2iNhNXYOZ1ff0POoXOR1Qg0V10N2O1OL5M3N1002O2NUZOUMTd0k2h[O^MTd0b2j[O`MUd0a2i[ObMVd0a2e[OaMZd0[40000000000000O1O1M5I7I7WOi0dNWYO^NZg0]1lXOYN^g0c1k0K5K8I4M2N2L5L4M4K7H;C]^h5"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "_nU7;Pj07L3L3N2M2N102M4K4N2K6L3N3N2L4J6N2N12OO10O10O01N102N2N2M3N3M5K5J6K:E[bZ4"}], [{"size": [848, 480], "counts": "fUS91]j05K5QOL[WO6ch01UWO2ih02RWO1mh0j0O001O000001O000000000O1M3M3L4J7H8Ce_f2"}, {"size": [848, 480], "counts": 
"ghZ4:l2JRd0b0g[ODSd0`0k[OBRd0a0l[OARd0a0m[O@Pd0c0n[O^OQd0d0n[O]Ooc0g0n[O[OPd0g0n[OZORd0h0i[O\\OUd0g0`[OC^d0?S[OOmd04hZO5We0McZO8\\e0^200O101O00000O10000000O0nNaZOUM^e0g2jZOUMVe0h2P[OVMod0i2U[OTMkd0i2[[OSMgd0k2_12N7I6K8G2O2fN^La[Od3Qd0[LQ[O8k0_3lc0QMS\\OQ3gc0TMW\\On2gc0TMW\\On2gc0SMY\\Om2fc0UMY\\Ok2ec0WMZ\\Oj2ec0WM[\\Oi2ec0YMW\\Oi2hc0ZMS\\Oj2kc0YMo[Ok2Pd0k1O100O10000O1000000O10000000000001O000000001O0O2O1O001O1O1O001O2N2RKf[Ob3[d0\\Lg[Oc3Zd0\\Lg[Oc3[d0[Le[Oe3^d0WLc[Oi3jd0iKV[OX4ae001O0000001O1N2O2N2N1N101O000O100O10001N10001O000O2O001N1O2O0O2N1O1N2O1N2N2O1N2O1O1M4M110O1101N2N2O1O1OYNQYONmf0OZYOOdf0NbYO1[f0NiYO2Uf0LPZO4me0IXZO>_e0^OiZOi0SOkNdd0:^\\Om1ac0TN`\\Ok1`c0TNc\\Oi1^c0VNe\\Oh1[c0XNh\\Od1Xc0\\Nl\\Oa1Tc0_No\\O]1Rc0cNR]OY1nb0gNW]OS1jb0mN[]On0db0TO_]Og0bb0YOb]Oa0`b0_Oc]O=]b0Dg]O7Zb0Im]ONUb01d^ORO`a0o0Q400O100O2O0O1O100O1O001O1O010O00100O01O1O3M2O3nVOkNah0h1J2N1O1O1O000O01JcWOXN^h0[1b000000000O2N2L3I:\\OgTh2"}, {"size": [848, 480], "counts": "TQg42]j02N101N1O1000001O0M4L3N2O1N2O1N2M2M3L2O3L5N1L4L4H7K4OM4NoN_NZYOa1bf0jNUYOY1hf0VOeXOQ1Zg0Q1010O1N002O20O0O101O2QZOTMYd0l2e[O[MVd0f2g[O^MXd0e2c[O^M[d0U3jZOTMVe0R41M3L41O01O0O2O1N2O0N3iNeZOSMbe0i1mYOfNXg0W1lXO_N]g0_1eXO]Nag0^1eXOYNbg0d1g0M6J3M3N2M4L5J8I8G:DmXi5"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "QUY7154bi0i0J`0@5L005K3M2O1N000dWOSNUh0i1;O2N3N4L4L1O4L3N1N2N2N3N1N2N2M2O4K3L5K6IQjY4"}], [{"size": [848, 480], "counts": "mjd74Yj07F7M3M4N1O001O1O001O001O1O1O1O100O0001O2N2N3M2N3M1O2N1O2M3L5KiTn3"}, {"size": [848, 480], "counts": 
"h_l23Yj07J4M3N1O1O2N1O1O001O0001O001O10O01O1O100O10O0100O10O0100O10O001O0O10001N10000O2O0O1O10000O2O001N100O1N2F:M3M4L3M3mWO]NVg0g1iXO]NRg0f1lXO]NPg0e1PYO\\Njf0i1VYOXNdf0n1[YOTN_f0P2aYOQN[f0R2fYOnMXf0S2iYOmMUf0T2lYOlMSf0T2mYOnMQf0R2PZOnMPf0P2QZOQNne0k1WZOUNhe0o1TZORNke0P2SZOQNme0Q2QZOnMoe0V2mYOkMRf0X2kYOiMTf0[2hYOfMXf0\\2fYOdMYf0Z3O2QNXLS]Oi3gb0jLk\\OV3Tc0oLh\\OR3Vc0QMh\\OP3Wc0RMh\\Om2Xc0UMf\\Ol2Xc0WMf\\Oj2Yc0WMf\\Oj2Zc0VMf\\Oi2Zc0XMe\\Oi2Zc0YMe\\Og2[c0YMd\\Oh2[c0ZMd\\Oe2]c0\\Mb\\Od2]c0_M`\\Ob2`c0_M^\\Ob2bc0`MY\\Oc2fc0T20N2N2N200O1N2O1O1O100000001O00001O010O1O001O1O1O1O1O2N2N2M6K7TKmZOm3Ve0nKnZOQ4Te0jKnZOV4fe001O000O1000001O00001O1O0O100000000000001O00000O2O00001O1O001O0O2O00000O2O00001N100O2O1N101N1O2N1O1O1O1N2O0O2N2O1O13nXOjLdf0b3O1O1O1O1O1O100O100O100O0100O1O1O2N2mN`YORNdf0g1eYOSNcf0b1cYO[Nhf0W1]YOgNmf0m0WYOQOof0;`YODaf0:cYOB^f0>eYO]O^f0b0dYOZO_f0e0f110O10O0100O100O10000O10000O10000O10O10O10000O2O00000001O001O0O1O1O1O100O01O01O01O000000001O001O000O101M2MP_a2"}, {"size": [848, 480], "counts": "Tgc41^j01O1O1O2N100000000O1000O01L4K6M2N101O1N2L3H8K5L4L3N3K3O1N0jN]NdYOd1\\f0iNVYOW1jf0W1010O101N01M3N11O2O1O1O02N0VZOXMnc0l2m[OXMRd0P3_[OXM_d0]400000010O01N1O3M2N2N2N2N2hNdZOhLUf0m1\\YOdNmg0T1ZXO`Nng0\\1e0N2N3M3M4L4L4L6K3L8Fdhk5"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "Vam63]j06J3M3N5J9H4K4L3M3N1N1O3M2N2O2M2O000O100O100O1N22L3L3L3N3M3N3M2O2O002M1O1N3N1N2M3M300O1O1N4M2LdUX4"}], [{"size": [848, 480], "counts": "XUU79Uj05L2N1O2O0O100O1O1O100O1O1O01000O10O0100O010O0100O010O2O0O1O2O0O2O1N2N2N2M4MaQX4"}, {"size": [848, 480], "counts": 
"n`]25Yj07I4M2O1N2O0000001O00010O00010O001O01O10O01O0100O010O100O10000O101O01O10O000001O0000000000000O1000O10O0100O010O100O001O0O1O1O1O1N2O1N101N2N1N3N2N2N2N2bWOdNeg0_1ZXOdN^g0a1bXOdNWg0_1hXOdNSg0_1mXObNof0a1QYO`Nkf0d1SYO^Njf0d1SYO`Nkf0d2N1O1O2O1N2N101N100O101O001O001O00001O0000000POWLY[Oi3Rd0RMh[On2Ud0WMi[Oj2Ud0XMj[Oh2Td0ZMl[Of2Sd0\\Ml[Od2Sd0]Mm[Oc2Rd0^Mn[Ob2Pd0aMn[O`2Rd0`Mm[Oa2Rd0bMj[O`2Ud0cMh[O^2Xd0dMd[O^2[d0l100O100O1000001O000O1001O000000001O0000001O001O001O1O1O1O2N1N3N1_KV[OZ3nd0_LX[O^3ld0]LV[Ob3Qe0VLQ[Oi3Se0RLP[Ol3je001O000O1000O100000O1000001O0000000O1000000O10000010O1O003M2OO01O01O0000000O2O00001O0000001N10000O101N1O1O100O2ZO\\YO^Mdf0`2`YO^M`f0_2dYO`M\\f0_2eYO`M\\f0\\2iYOcMWf0Z2lYOfMUf0V2oYOhMRf0T2SZOkMme0o1ZZOoMge0i1cZOUN\\e0Y1Y[OcNhd0T1^2_O`02N1N3K5J6\\OmN]WOa1`h09N101O10O0O2N1N3M3K4\\Od0K6N1N4M2N4K7Gbeb3"}, {"size": [848, 480], "counts": "oQb42[j04N10001O00000O1000O02O0O1O1O1M4L3N2L4N2N1N3K3L4M3M3M3M3M2N3N0iNaN]YOb1df0gNRYOY1of0QOfXOo0^g0o02O00O1M2N30O2ON1011002M5lYOQMjNN^e0Z3Q[OaMmd0R4N100O11O1O0O2M2O2N2M3K5D=RNhYOmNlf0Q1WYO^N[g0^1hXOXNjg0^1g0L4L3N1N2O3L4L4L7I5K5Ijhk5"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "XSk6:Vj05K3M5L8G=CJnVOPOPi0[1M7I2N2O1N2N100O1O0000001O01N1N2O2N1N3M2N2KWWObNjh0]16M2O1O1O2N1O1KgVORO[i0l05M3O2J5O1O1N3K7G`Y\\4"}], [{"size": [848, 480], "counts": "UVU77Wj03N2N1O1N2O1O1O1M3M3N110N101N1O1O2O00001O01O00001O1O1O2N1O1O2M201N2M3N2M3M5KTQX4"}, {"size": [848, 480], "counts": "`Uj3<`1;Zf0J^YO>^f0FZYOb0cf0AXYOc0ff0@UYOd0jf0_OnXOf0Rg0\\100010O001O1O2O0O10OgNPYOROQg0d0XYO]Ogf0>_YOAbf0;aYOF_f06dYOK\\f00hYO0Zf0JjYO7a0BZc01Y\\O>9G`c0A\\\\Oi02I[e08cZOI]e08aZOJ^e08_ZOIae08^ZOIae09]ZOGce0:\\ZOGce0n03Rc0_Ok[O<`1Lhb0Z1i]OXNYb0c1l]O[NWb0_1m]O`NUb0Z1Q^OeNQb0U1T^OiNoa0S1S^OlNPb0Q1R^OmNQb0o0R^OPOPb0m0R^OQOQb0l0P^OSORb0k0P^OTORb0h0Q^OVORb0g0P^OWOSb0f0o]OXOTb0e0o]OXOTb0d0o3M3L5L6H\\kT4"}, {"size": [848, 480], "counts": 
"ifc44[j02O1O00001O0O101N10000O0N3L5L4M2M3N2M3N1O1M2M2O1N3N3L3M4K2kNdNXYO]1ef0UOmXOi0Tg0Y10002O0O1O02N2N2N2OO002O000PZO\\MVd0g2b[OcMYd0a2X[OmMfd0g2aZO`M^e0i300000O1O1N2M4J5N4K401N3nNU1_O>UOXXOdNng0X1i0L3M3M3L3M4M6K3L5K9F6JZnj5"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "cnk61^j02N3N6I5L4L`0^VOlNgh0n1A3L2O2M6K1N2ON2O0010O00M4FlWORNYh0m1501M21O01N2O1N3M2H9J5M4L4K6I9CZmb4"}], [{"size": [848, 480], "counts": "nTU7?ni05L3O1O0O2O0O100O100O10OO2N1N2O2O00000001O1O1O1O1O1O1O1N2O2N1O2N2N1N3M5JTgY4"}, {"size": [848, 480], "counts": "YRZ2;Sj03N3M3N2N2M2O2M2O1O100O100O10000O101O1N101O2O1N2O1N100002N3M1O1N102N10O0O0101O000O1O1O000O100O1O1O100O1O1O002N2O1N001O010O0001O00001O1O1O102M2N2N4L3M1O100O001O0hYOmNSc0R1l\\OQOSc0o0l\\OTORc0l0n\\OUOQc0k0n\\OXOPc0i0n\\OYOQc0g0m\\O\\ORc0d0m\\O^ORc0b0m\\O_OSc0a0m\\O@Rc0`0n\\O@Sc0?m\\OBRc0>iZOVOV1=Pd0>fZO[OW17Rd0`0bZOAX10Ud0`0_ZOFY1KWd0i1e[OWN[d0n1_[OTN`d0n301O1O1O1O001O1O1O0000001N10000000O010O1N2O2N100O1O1OmNWKj[O54f4ic0RLT\\On3ic0VLV\\Oj3hc0YLW\\Oh3fc0[LY\\Oe3gc0[LX\\Og3fc0[LY\\Oe3fc0_LU\\Oc3jc0c1N100O2N10000O100O10001O0000001N1000001O000001O00001O001O1O001O1O1O1N2O2N1bJW\\OS4kc0jKX\\OT4kc0iKU\\OW4mc0eKU\\OZ4Rd0^KP\\Ob4\\d0oJg[OP5md000O10O001O10O010000000O10O012N6J3MO010O1N20O2O1ON1O2hL\\[Oc0ed0WOf[Od0Zd0YOn[Ob0Qd0\\OW\\O?ic0_O`\\O:`c0DQ]OMnb02n]OTORb0j0Q^OTOPb0k0Q^OUOna0k0S^OSOoa0l0R^OROPb0l0R^OTOma0l0T^OSOma0l0T^OTOla0j0W^OUOha0k0Y^OUOga0j0Z^OUOga0i0[^OWOda0i0^^OUOba0j0a^OTO`a0g0f^OVO[a0d0[4N2O2N11O01O0000O1000001O00000O101O0O2O000O2N100O1O1O2O0O2N0M3H8J51O1O101N101N010O2O2M2O2N1N3M3N1O2M3N2N3N1N4L6Gbbk2"}, {"size": [848, 480], "counts": "efc45Zj01O2N101O0O10000O10000O2N100O100O1N1O2M3M2N3M1N3M2M3M4L4J4H71N02lNZN_YOi1^f0fNUYO\\1hf0nNgXO[1Yg0m001ON3K5M2O0002M2N201kYO_M[d0b2^[OiM^d0X2_[OlM`d0U2\\[OnMcd0P400001O0O10001O0O1O2N5I9\\MRZO@mf09WYOoN`g0P1aXOiNfg0U1]XOfNgg0X1l0L4M2M2N3M5L2M6J7I9EmSj5"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": 
"mPP71]j04M4M3M5K3L4M2M2O6J=eVOeNgh0e1M3M4K3N1O1O3L2O1OL3M4K5N2K4O2N2N101N2O001N2O1O1O1O1N201M4L6Dm0cNc`0o0j^O2g0jNc`0S1n^OHnc08V\\OAnc0>W\\OZOlc0f0P30O0001O001O001O1O01O0010O1O001O001N1O2N2N1O2M2O2O1O1O001O01O001O010O001O0010O100O01000O0100O001O010O01O01O0010O010O000010O010O0010O010O1O00010O0010O010O01O010O0010O01O010O1O010O010O10O010O01O10O01000O0010O10O10000O010O10O10O10O010O0100O010O1O10O0010O01O01O01O0001O000001O0000000000001O00000001O01O000000001O001O001O1O1O1O001O1O1N101O1N1LV="}, {"size": [848, 480], "counts": "g[e41]j03M2O1O100000010OO5K_j06ZUO3O0O2O0O1O1O1L4N2N2N2M3N2G9F:A?K5M3K5L3LFVOUNiYOh1Zf0bNZYO_1hf0R13I7N2O100OK4L2111O111O01O4M=C1O000000000001O1N5L5J4M2FgXOXM]g0`2jXO^MWg0]2a0M4\\Oe0K5M2N1N4L4L5K6J7FU_h5"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "WQY73\\j04L4M2N1O2N4L3N3L8H9H0O2N4M1N10N101N2O1O0O2O0O2O1O010O00001O1O001O001O1O1O0010O00010O01O1N101N2N1N3M3L4C>LQgg3"}], [{"size": [848, 480], "counts": "hSU77Yj02O2M3N1N1O2N001O0000000000O010O002M7B[jg4"}, {"size": [848, 480], "counts": "gQk3a0ii0;H7J4K4M3M2N2N1N2N2M3L5K4M2M0N0O4M0N4J9nMVN[[Oo1[d0_Na[O`1[d0iN`[OU1^d0SO\\[Om0ed0XOW[Oh0id0\\OS[Od0md0]21O^[OkKnb0S4R]O[Lbb0c3_]OfLZb0Y3f]OQMQb0m2o]O]Mia0b2W^OeMca0Y2]^ORNYa0o1f^O\\NQa0b1o^OlNFhLV`0Z4U@5^?Ic@jNRAY1l>eNUA^1i>`NXAc1g>[NZAg1d>XN\\Ak1b>UN]An1b>VNWAm1h>XNRAj1keZO]O`e0b0aZOXOfe0f0ZZOUOle0j0UZOPOQf0o0RZOfNVf0Y1b110O1O1O10OO10010O0001O0O1O100000001O1N101O2M2N3N2O4K:G6JN2O2M2N2N3M4L3L4M2M4M3L3N2M3NiVOjNVi0W13L3M2O0O2O000101ON2N1N3N1O2N1N3M3N1O4HZ[b2"}, {"size": [848, 480], "counts": "ZWb53Yj05N1N2L4N2N2M3K5M3N2N2O001O1O2N1000001O00000001O100O1O1O100O1O2O2O4K6K1O1N2N3N3M3OT_h5"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "jh^73]j04L3N2M4L4L6J7[VOROZi0X1K8H6K3L0003M1O1N2O1O0D=M4M2M3M3N3L4M3M1O3M2N4KeUT4"}], [{"size": [848, 480], "counts": "Yfm65Yj04M2I7M2O2N2O001N102N1O1O1O1O001O0O10000000000O1000O2O1N6IRof4"}, {"size": [848, 480], "counts": 
"QRf3=ni08K4L5L2N1O2N100O1O000OL5OO2O0iMLTZO5P6UOj7Z1QBCS6YO_7_1ZBZOU6\\OY7`1^BVOX6^OW7^1_BUOZ6^OT7`1`BSO\\6_OP7a1aBQO`6_Om6b1aBoNQ3_ORO3Y;a1bBmNk2e0]NoNRP3R9LfF5\\9Y87N1M3M4J6CaHeA_7[>^HfAd7Z>XHiAj7V>SHkAo7U>nGlAU8Z>_GiAf8b>aFhAd9_?5L4K4L3M3M3M4L6J8H4K3N2N1O2N2N1O1O1O1O010O011NQL_APLa>m3eAPL[>n3iAPLV>o3nAoKR>n3RBQLn=m3VBQLj=m3ZBQLf=l3_BRLb=k3dBRL[=k3kBRLV=k3nBSLS=WO\\AS2g1cNm]OSf0i1dYO[Nfg0_1j0K3M3M4M4J5L7H8H7Iaoe5"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "nWX71^j0101O2N3M3M3M2N6J6KW1hN5NO0O0O101N2O1O1N2N2O1N2O2M2O1N2N101N2O1N2N2O0O1O2N1O2N1O2N2L4M2N3L5JVSo3"}], [{"size": [848, 480], "counts": "Ynl67Yj0002N6J001O2N2N1O2N01000O1O1003M1O0O20N10000O2O0001OO100O1O2O0O100O100O001O001O01O00O2O00O10000O101N7Gm^U4"}, {"size": [848, 480], "counts": "TP_2;Pj09J4L4M2O001N2O1N2O1O1O1O1O1N2O1O1O3M2O1N2N1O2O1O2N2N1O1O1O10O114M:E1M3M1O2N1O1O2OO0O101N2O0N3N1N2O1N2O1N1O2N2O0O2N1O1O1O1N2O1PAgNg4Z1XKkNb4X1\\KlN`4V1_KlN^4U1bKmN[4U1dKmNZ4S1fKPOW4P1iKROT4P1jKSOS4n0lKUOP4m0oKVOn3k0QLXOl3i0SL[Oi3f0WL\\Of3d0ZL_Ob3c0^L_O^3c0bL_OZ3c0fL_OV3c0iL@T3b0iLBT3`0WLQMUHe2c;;gKf0W4[OaKn0]4RO]KV1a4kNZKZ1e4gNVK^1i4dNPKc1n4eNfJ`1Y5nNVJW1h5TOjIP1U6ZO^Ij0a6@PIf0o6DcHa0\\7JSH>k7M_G?`8FUGa0j8BmFe0R9^OeFh0[9ZO^Fl0a9WOWFo0i9ROoET1R:oNeEV1\\:WOjDUINj7Z;f6jDaA];Y>?N3M3M2N3N1N3N1O1OdMiBbGV=Y8aCVG^g:Y1ZER;h:iDZEW;i:dDYE[;k:]DZEc;j:UDZEj;j:nC[ERTN^D\\OcNf1S=lNWEYNXN_2dT2_OTN;m1CXN8i1HZN4h1J[N3f1M\\N0f1O[NOf11\\NMe13[NLe15\\NIe17[NGf1:ZNDh1;ZNCf1>ZN@h1`0XN_Oh1a0YN]Oi1DcMmEe0]:h1CkMlE=`:i1_OTNjE6e:g1\\ODc0[6ZAdIf=TO^AT7Q1hI\\=_O\\Ah6Z1hIU=0TAY6j1eIo0lJdDTK\\;h4jDVKV;g4nDXKR;f4QEYKo:e4TEZKm:c4WE[Ki:c4ZE\\Kf:b4]E]Kd:`4`E^K`:`4dE^K\\:_4iE`KV:^4nE`KR:^4RF`Kn9^4VF`Kj9]4\\FaKd9\\4aFaK_9[4hFcKX9W4QGfKo8S4\\GiKc8P4jGjKW8o3XHiKg7Q4iHeKV7T4YIeKf6W4dIdK\\6W4nIdKR6T1lDPOg5\\O]5Y1]FPN^4d0U5U1QGgMo3R1P5U1_GWMi3b1h4W1oN[NQ1d1WOTNj0m1ZOnMf0R2]OkMd0U2]OhMd0X2AbM`0]2E^M>a2JPM=P3c;N5L101N2M2O1N101O1O1O1O1O1O010O10010O001N002N0O101N4M1N2N2N2O1N2N2N3M5J6K4K4L4L3M5I9Ei_j1"}, {"size": [848, 480], "counts": 
"^Wb5f0fi0?C3L3O1M3L4M2M4L4M2O100010O1O1N200O10001N100O2O1N2O1O1N102N2M6K8G4M1O2N2M3N3L6JmYi5"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "YVV97U2Jme0;cYO8Vf0OYYO>ef0h1N01O0O2N2O010O100001O01O0O100O2O1O1N101O1O1O1O1N2O1M4M2M3N2N2M2N5J4M3M2M4M3L3N4K4M4KeTQ2"}], [{"size": [848, 480], "counts": "lZ`8e7oAWHo=f7ZBVHc=j7bBTHY8VOoJh8lLoGR8K^JX8fMkGh7h0^HYNhNh9V1UG`7]1TGR9_1_E[7]=iHaBU7`=mH_BP7c=RI[Bm6f=UIXBj6i=YITBf6m=gI`A_6`>g22N1M2E=N1O1100O10010O0000000000001O0O2O1O002M3N3M3M3M2N3M2N2O0O3M3M2N3N1N2N4M4K4L3N3MaEiBf7U=RHSCP8ko;K8H4L2O0O1OaKfBiKY=U4kBjKT=S4PCmKP=Q4RCQLke5C^J9a5HcJ4]5MeJO\\52eJLZ56hJFY5P8BWH5k7J^HMc73`HHb77`HFf74^HHh73[HIj73ZHHl72^HDg78[HDl76WHGl76YHUNPIi0h>Q1RIlNn6T1UIhNa5\\OXCl1\\7aN]5NmBa1m7XNW5=fB\\1[f0jN^YOX1af0\\10O00100O1N2O0O3N1N2M3K5K6K4L4G9A`0E;H8K5L4M2N3L4L5H[UP2"}, {"size": [848, 480], "counts": "SVf45Zj02N2O0O2O1N10001O000O1O1N2N2N2M3N2M2N2NOO3K5N3L5J5H6N10MnNbN[YO`1cf0hNUYOZ1kf0POcXOY1\\g0n0O1NOO20O2O2M3O0001O011mYOWM]d0k2Z[OeM^d0\\2][OjMad0h2iZO]MWe0n301O0000O1N3N2N2O1N1O4H9\\Nd1VOXXOaNng0Y1YXO_Nlg0_1e0M4M2M4L3M2N4K5L6J6I;BPTj5"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "Ufk84ii0]1PNVO[XOZ1_g0l001O1O2O0O1O100000001O001O10O2N1N3M4L6I7I9F9H5K5K6J3M4L4L5J4L4L>Annc2"}], [{"size": [848, 480], "counts": "XbU87Tj08J4L4L3C>^Oa0N2O100001N10001O0O100O100O100O1O1O1O100O100O100O1N2N3N5J9G9EVZY3"}, {"size": [848, 480], "counts": 
"`Un3S1Yi09H4M3M3N1O2N001O1O1O1O001O1N2O001O001O010O1O001O1N110O10O01I7O1@bMjXOb2Tg0cMeXO`2Zg0bMeXO_2Zg0bMfXO^2Yg0cMhXO[2Xg0eMkXOY2Sg0iMoXOU2Qg0kMmXOW2Rg0kMlXOV2m1lMdb0O^[OV2k1RNbb0Ib[OV2i1VNbb0Ed[OV2e1[Ncb0]2o[OQOnc0j3O2N1gBUJk5n5UI^K\\6_5]HoJ^7^5PHjJn7f5\\GdJ`8k5kF^JR9i5cF^JZ9g5\\FfJRN`Lk:m8kFiKm8oJbEk8T1_La9e3QFhLm9o9M2O2N1N2N2N2O1O0000001O1N101O0000000001OO10O010O010O001]N]FVBe9b=kFTBW9h=j1O1O1O1N3L3N]NdBWFX=j9nBQFa:H_FV:]OiET:>nEk9:[Ej9X1WEb9ec6^AgIa>T901N2O1kEkEm3U:gK]FS4d9gKhFR4Y9dKXGS4j8eKkGm3V8mKYHi3h7SL`Ig2a6WMiIa2X6]MmI^2U6`MPJ\\2P6dMSJX2n5iMTJS2n5lMYJm1g5TN]Jf1e5YN`Ja1b5^NbJ]1`5cNcJX1_5gNgJR1[5nNgJn0[5ROhJh0[5WOiJb0[5]OiJXOgHhNdjIYBlNMW7d`0]He_Ob7ea000O001N100O100O10000O1O1O10000O10000001O005K00K[]O`Hdb0f7oN^HS_Oc7k`0dHP_O\\7m`0hHR_OY7k`0kHS_OV7j`0mHU_OT7i`0oHU_OR7j`0oHU_OR7P3RH[:n0cBR7i2ZHa:g0cBR7g2[Hd:f0aBR7^2eHh1^ETB5XO\\:S>\\FXB\\OCV:T>iFmAc9U>[1000O2O0O1N11`FSD_4k;bKZDZ4d;hK_DW4[;nKgDP4U;SLmDl3P;VLSEi3j:XLXEf3h:XL]Ef3c:^JZGb5f8VJcGh5_8PJhGP6Z8iIkGV6W8eImG[6U8_IoGa6S8YIRHf6R8RIRHn6b8ZHbGf7d=00000000000000000001O000001O0000001O0000001nN]GU@d8g?dGT@]8i?fGV@Z8i?iGU@W8j?jGU@W8j?iGW@W8h?jGX@W8f?kGY@V8d?mG[@T8b?PH[@R8b?QH\\@Q8a?RH]@Q8_?UH[@o7Z7lFb0g:ZOeE;]:DmE1V:NmELW:jKnDW1V1a2R:RLlD\\1c1n1e9bLmD^1e1e1c9eLPEe1b1]1c9hLREi1b1T1a9mLTEm1a1l0a9RMSEP2d1a0_9XMVES2j12X9YM`Ea2b1B\\9aM^Ej2b1oN_9kMWEU3d1ZNdYi02O0000O1000O11O00O100000000O01000000000000000O10000001O0O11O000O10O10O1000000000001N01000000000001N101O00000000000010O0O2O0O1000001O0001O000000000001O00000000001O000010O01O000010O1O101M2N3M3KePm1"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "i`o51Vj0;kUOIbi0l0E:I:D8J6K5L3L6K4L3M4M2M3M3N1N2N3N1O1O1N2O1O0000O2I6O2O00100O100O101O0O10001O001O2N1O1N4M2N1O2N2N2N1O2N3M3M2N2M4M3L4M5J5L2N3L4M2N2M3N1O4L2M3N3L4M2M1O2LT\\]4"}], [{"size": [848, 480], "counts": 
"]ld44Wj09I4L4L4K4O2O00001O01O0000001N10001O00000O10000000O010O100O001O1O10O10O100O001O001O10O010O01O0100O2O000O100O2O0O101N2O2N2N2M101N2N2N3MTZP6"}, {"size": [848, 480], "counts": "ThY38`d03i@5n>3m@0o>5i@1T?3`@9W=ROW@h0]2;X=ROW@f0]2P2jAQNU>S2hAnMV>Z2cAgM\\>b2[A_Mc>l2SAUMl>S3l@nLS?]3b@cL^?h3W@YLh?m3R@TLm?R4m_OnKS`0X7O100O0010O010000OYOl_OSGU`0k8n_OTGR`0j8P@VGo?i8S@WGm?f8W@YGi?c8[@]Ge?`8^@`Gb?]8a@cG^?\\8d@cG]?[8f@dGZ?Z8h@fGW?X8m@gGR?X8QAfGP?Y8QAgGn>X8UAgGj>Y8WAfGh>Z8ZAfGe>Z8]AeG`02g;X8kCfG:8h;Q8PDfG5?i;j7TDdGNj0l;a7dElH\\:R7]EWIb:h6REfIc9WNkDS8[1oIc9ZNkDh7a1PJa9^NfDe7a1WJf9V7WFoHg9P7XFTIh9i6XFZIh9d6WF^Ii9a6VFbIi9^6VFdIi9^6SFeIl9\\6QFfIo9h6aE[I^:i6\\EYId:j6PE`Io:Q;O1O010O1O1O11O1N10O10O0100000000O000010O10O100O10O1lNhDPCW;P=kDoBT;P=oDnBR;P=PEPCP;nl^OFo`0?m^ODo`0`0l^OEPa0?j^OFSa0>h^OFVa0=e^OGYa09UAU79bHP11m:a0eCi6b0cHe0>n:;iCc6e0dH?d0P;:hC]6k0dH4o0W;4hCZ6l0cH0T1[;2hCX6k0aH0Y1Z;1iCY6k0YH3_1X;2gC]6S2cIU:5cC_6o1_I]:e8YE_Ge:f8TE^Gj:f8QE]Gn:g8kD]GT;f8hD\\GW;g8dD\\G[;g8`D\\G`;k8iCeGVo0UAQOn>i0UAXOl>d0VA\\OR?;o@ET?6n@JW?Ln@4gd01O0001O0001O00001O0000001O001O0010O01O1N101O001O1O10O0001O000100O010O000010O01O10O01N101O0O101N2N1O1O2N2N2N2N2Mbkg4"}, {"size": [848, 480], "counts": "ZdU44[j02N102N00O1000O1O001O1N3M2H8MO2O0100O2N1N3H8H8H7aNROZYOR1df0WORYOm0lf0\\1N2M3M4M10M4K4N3M2O1O2O010010O107I;F:E1O01000000O1O2N101N1O1M302N3L3I6aNiXO@[g0;iXO@\\g06QYOVOag0d0[1K6I4N4L4K`WOLmk]6"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "]SY49ii0e0C6K6K4M3N2N2M3M3M3O1N2O1N2O2M2N2O1O1O100O1N2O100O2N1O1000001N010O01O1O2N10000O1000000O0L5I9K6D>GaQk6"}], [{"size": [848, 480], "counts": "^g`44Vj0_8YAaGe>]8`AcG^>Z8fAeGY>Z8jAfGU>X8oAfGQ>X8RBgGn=X8UBgGj=W8YBgGh=X8[BfGe=X8_BeGc=Y8_BdG`=^8Z2M3N2M3M3M2010QBZHT9g7cFdHY9b7YFhHf9b7eEiH[:g8oC`Gf:dNoDS:0^Gg:XOiDc9HeG[;c;eAEcd01O010O001O010O00100O1O002O0O1O010O1O1O00100O1O010O1O010O1O10O010O100O1O1O001O0O2O001O001N2O1N101N2N3M3M4J[ih4"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], 
"counts": "XeU4g0ei06J5J4M4L4M3N2M3M3O0O2N2N2M3O1N2O1O1N2O1O100O2N1O1O2N2N101N101O0O101N2OO10O10O1L5O1O1L4L3N3K5K6J8H7H9mNgVONTdm6"}], [{"size": [848, 480], "counts": "\\o^49Sj06K4L4M2O2N101O1O01O001O000001O000001O00000000000000O101O00000O2O01O00001O1O1O001O1O1O0010O0001O1O0O2O001O1O001O001O002M1O2O2L_`Y6"}, {"size": [848, 480], "counts": "mkS3:Sj04N1O100O1TJ@gAa0W>BeAb0b=Fk\\O2U5e0`=]2jAhMT>Z2jAiMR>[2kAgMP>^2jAhMm=b2]ATN^>P2YAYNd>k1SA]Nj>i1k@`NR?`6M3O2M2O2M2O1NYO^AdEa>Y:fAdEY>Z:jAfEU>Z:mAeER>Z:RBeEk=Z:\\BbEa=V:mBgER=U:TCjEjgFiAY9m7gFjL4XKV9Z7UFWHn0i4NdKo8a3UFVMk0AR1h3TOPLj8Y3aFQMi0Mm0^3WOXLc8Z3cFoLj01m0S3ZObL\\8Y3fFPMi00Q1g2]OoLT8X3fFRMi0OV1Z2@\\Mk7X3hFSMf0O^1j1BlMb7X3iFRMf00_1`1HUN[7X3hFSMf00b1W1K^NU7V3lFTMb00e1Q1LeNQ7U3mFUMa01h1j0KkNP7Q3PGXM>1j1d0LROm6m2SG[M;2n17O]Og6m2SG[M;4T2D2O_6j2RG\\M;8[6_O[2e2WGbM4;]6XO[2h2WGdM2;Pa0P2R_O_M1b0m`0n1S_O[M5f0j`0m1WASNj>n1UARNl>l1UASNl>j1VAWNj>f1YAZNf>f1ZA\\Ne>b1]A^Nc>a1^A_Nb>[1dAfN\\>W1eAkN[>R1gAnNY>P1hASOV>j0lAWOT>e0PB[OQ>?SBBn=9VBFm=L^B6R>oNXBQ1cc010O01O1O010O10001O0O01O01O0100N2O001O1O1N2O2N1N4M1M3N3M3N2M4HVfn4"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "[aU4244gi0c0H4K5L4L4N2L4N2N2N2O2L3N2N3N1O2M2O1O1N3O0N3N1O2N101N3M2O0O2N2O1N2N2N2O1N1000N1013M10N1O1N3N1O2N2N2M3M3M4L4F;C>Ln0ROYng6"}], [{"size": [848, 480], "counts": "nY]48Vj04K3N4L2N3N1O101N1O1001O01O01O001O01O01O000001O000000001O00001N1010O001O1O100O1O100O1O1O001O1O001O001O001O1N2O1O1N3N1O2KYe]6"}, {"size": [848, 480], "counts": 
"Qn[38Qj0:]Ob0F;Q]OgNXk2SAjMk>n6N2O1N2O1O1O1O1O2O0O1O2N1OYO]AcEc>Y:cAfE[>Y:hAfEW>Z:kAeEU>Y:nAeE:EhWFaAi9^>YFaAg9^>\\F`Ad9_>_F_Aa9_>bF`A]9`>dF`A\\9\\>iFcAW9Y>nFfAR9V>RGjAm8R>WGPBh8k=]GUBb8f=eGYB[8c=iG\\BX8^=oGaBQ8a7kGlK9bLm7a7nGdK=iLe7V6`FXJb1`2`0oL^7c5YGmJl0Y2c0UMX7`5^GRKk0o1f0]MS7T5jG`K`0g1i0`MP7^3oEkMT2@6`1S;T1YF]MgNW1S;[1XF^MiNn0T;c1TF_MoNb0R;n1PFbMQO5T;Y2lEbM\\O_OR;n2cEcMV>[2lAfMS>Y2nAhMQ>X2PBiMn=V2SBlMj=S2XBnMg=Q2ZBPNf=n1[BSNd=l1\\BUNd=j1]BVNd=h1]BXNd=e1^B[Nc=c1^B\\Ne=`1\\BaNd=^1]BbNc=\\1^BeNb=Y1`BfNb=T1bBmN^=l0iBSOY=g0lBYOU=;UCEm<2[CNgn3eAYLZ>e4c@aK[?k7O1N101O00O101N11O10O10eA^Ehm1bATN\\>k1fAUNY>j1hAXNW>f1kAZNT>f1mAZNR>e1oA]NP>a1RB_Nn=^1UBcNi=Z1[BgNc=V1aBkN]=R1fBoNZ=n0iBSOV=j0mBWOS=d0QC\\OP=>UCCkn1SAUNi>n1UASNi>P2SASNW>Cm]Ob2Y3ZNc>Ef]O_2Y3QNn>P3f@RMW?Z3^@gL`?a3Y@aLf?b3V@`Lh?d3T@^Lk?d3Q@_Ln?c3o_O_LP`0c3l_O`LS`0a3i_OcLV`0_3g_OcLX`0^3d_OeL\\`0\\3`_OhL^`0[3]_OiL``0Z3[_OkLc`0X3X_OlL?jMe>]5f@mLe0gMb>d5^@jLo0cMb>e8\\A_G`>d8]A_Ga>c8WAfGf>[8QAoGn>R8m@SHQ?o7l@THS?m7j@VHU?k7g@YHX?`9N2O1O1N2O`Mo@cIo>W6[AgIc>X6`AhI]>Y6cAiI[>X6`AnI_>S6\\ASJb>n8N2N2J6RO`DgBe;W=k0N2N2L4M3O1IeBYC[=gh0c@4^OYNX2ba0h1d]OiMk0?aa0g1i]OeMh0c0_a0h1R@XNo?f1R@[Nn?c1T@\\Nm?a1U@_Nm?]1U@cNP`0V1S@iNn?S1U@mNl?P1V@QOk?l0W@SOk?j0V@VOl?f0V@[Om?@j[O`0b40We0O01O01O00010O000001O010O00001O010O1O001O1O00100O001O001O001O010O1O001O010O0010O001O000O2O0O2O0O2O1N2M3N2N3M4JbdZ4"}, {"size": [848, 480], "counts": "`cc44Xj07J7H7Kj0UO5K4@UNRXOX2gg0=K6I4L3N1O1O10O02O4L2N010O1O01O001O1O1N2O1N2O1M3K5M5K9E8HChU\\6"}], [{"size": [848, 480], "counts": "c^X49Tj04L5K3N3M2O2O000010O00000010O01O00001O000000001O00O101O0O1O100O10000O100000O0100000000001O00001N1000001N2N2O1N2N2M4KUVd6"}, {"size": [848, 480], "counts": 
"odh3m0`i05K3L4L4K5J6K5J6G9H8K5K5J6RMWMl]OR3ia0aMi]Oe2Sb0eMa]Oa2Zb0jMZ]O\\2ab0PNT]OT2hb0[Nc\\On1Yc0g2i]ORIi`0Q7S_OUIg`0o6U_OVIe`0o6W_OVId`0m6[_OUIb`0m6V_O\\If`0h6T_O^Ih`0f6T_O^Ig`0g6W_O\\Ib`0\\8L4MiMj_OZJQ`0f5V@XJe?j5_@UJ\\?m5e@UJV?n5k@SJo>o5SASJg>P6WAUJe>l5XAZJe>g5UAaJh>_5XAeJe>\\5VAjJh>`8N2O1N2O2M2O1O1O1O100O100001OO100O2M2N2J6M3UOIhBlDV=Q;SCkDkBlC=ZOZ2oMgMh>EdC]1fN[1POcMe>F^Cg1cNU1\\O^Ma>GXCQ2`NP1IYM^>GmB_2bNe06UMY>HeBo6SOZIW>H[BV7@SIS>HWBZ7HnHo=JSB\\7OkHm=JnA_73kHm=HlA]78nHj=HhA\\7=PIh=HeAZ7c0QIg=HaAV7h0VIf=P8YBTHe=k7[BXHd=f7]B\\Hb=jNhAo7g0YI`=hNiAl7i0_I\\=W7eBkH[=S7fBoHY=n6iBSIW=j6jBXIU=f6mB\\IR=b6oB_IQ=_6oBbIR=\\6nBeIS=Y6nBoHk=P7TBjHS>U7nAjHS>U7mAlHS>S7mAnHS>Q7mAPIS>o6mASIR>m6mATIT>j6lAWIT>h6lAZIS>e6lA_IR>_6oAcIQ>Z6oAiIP>V6PBkIP>V6nAlIR>T6TA[HKf1o>U901O0100O100O1O1N101O1O1N2O101N0O101N1O2O0O1O1O01O1N11O1O1O1O_Oa0H9M200000O1O2O001N2N2N2N3nKk@gLV?Q2XAaJ5Q3f>o1bAoJ3g2]>Q2hAVK9\\2Q>[2gA[Kc0n1g=e2eAaKo0]1^=o2dAfKW1n0X=Z3cAhKh17j^OUObd03W\\Og0XOYOgf0f0^YOWOaf0h0bYOWO]f0h0fYOUO[f0j0gYOTOZf0j0g1O1O1O1O10O0\\XOZO`e0f0U2O10O0100O0100O1O100O010O10O10O01O1000O01000O1O1O1O1N1100000000O10O100000000O2O00O100000O01O1O1001O00000000O100O1O1000000O101N101N1O3M2N2O3Jln`2"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "]\\k5e0fi0;G7G7J6L4K4J7K5L3M2N3N1N2N2OgNbXOB]g0=fXOAZg0?hXO_OXg0a0iXO^OWg0c0jXOZOWg0f0kXOXOVg0g0kXOXOUg0h0mXOVOSg0j0nXOUORg0l0oXOROQg0o0UYOkNkf0U1WYOgNjf0Y1XYOdNjf0[1W1O0O100O10001N101N2O1O1N2O2M3M5J6IgQ\\5"}], [{"size": [848, 480], "counts": "n[i35Wj07K4M5K4L5fVOCYh0^1K3N2N1O0O2O000O10O1O1N2M3K5J6L4N2O1N2N2M3N2N3L4M3M4L4L8HcTg7"}, {"size": [848, 480], "counts": "lVm25>e0L3J6O2O0O101O0000000100O1O10O0100O001O001O0001O10O1O0O2iInAOR>LbBE_=9lB]OU=a0VCUOke0VBlNj=Q1_BiNa=V1dBeN^=X1kBaNU=]1SC\\Nnki0b0B6J7H6K7K2N2L5L3N3M2M4M2N1O2N3M1O2O0O2OO2N1N2O2M2O1N2N3K5L3L5L3M4L4N2O1N2O0O10000000O100O10O100000O1O01O101N1O1N2O1M3O1O2N2N2N3L3J6DgnP4"}], [{"size": [848, 480], "counts": 
"Uaa49Uj0200M3L4L40L43K2M3O100O1L30100L40N200000M2O2M3I6M4M3N0O3O1O1O100O10000O1001O0000001O1O1O0O2N2M3BUWOUOmh0c0TWOTO17mh0a0\\WO]Ofh0a0f0N2L3N4M3Mhi\\6"}, {"size": [848, 480], "counts": "TVa36Uj05O10O2K4M13M3M4M1M3N2O1O1N2O1O00000000100O1N2O1000O010000000000001O00000000O101O0O1001O00O1000000000000001O01O01O01O001O0010MSKSO[@l0a?AW@>j?GQ@7P`04f_OK[`06d_OI\\`09c_OE_`0<`_OCa`0=o\\OWOc0<^b0>h\\OA2BlNa0Yd0?c\\OECn0kc0JQ\\O^OHR1Vd0i3O1N2O1O2N1O2N2N2N2O0O3M2O1f\\OfI\\b0[6`]OiI^b0Z6_]OgIab0Z6\\]OiIcb0X6Z]OjIfb0W6W]OkIhb0P7O00000O1000000000100O0000nNX]O\\Jib0^5^]OaJab0Y5f]OfJZb0V5j]OjJVb0U5k]OmJSb0R5n]OnJRb0T5l]OmJSb0U5k]OkJUb0V5j]OjJVb0Y5g]OgJVa0]OP_On5IeJTa0Fm^Og5NcJSa0Mg^Od54_JSa0i6l^OVIRa0n6k^OTISa0n6k^OSITa0V8MQOm^OfHPa0\\7P_OeHn`0_8O1O2O0O2O0O2O0010OO2O000000000000001O0000O10001oJT@\\Ol?>l@POT?l0WAmNj>o0_AkNa>R1eAkN[>S1jAjNW>S1oAiNQ>U1UBgNk=V1\\BeNf=X1_BeNb=X1bBfN^=W1hBfNY=X1jBfNV=^OW_O1f3>T=]O[_O3d3=Q=^O^_O4d3;o<^O`_O7c37o<^Oa_O;c34n6\\ADg><^A[Oe>e0_AQOh>n0^5O1N2O0O101N1O2O1N2O1O1N101O000000100O010O001O001O0010O01O00O2M2L4K6J5J6lNT1M3M4L3L4N2O1O2O0000000000O1O2N1N2QNPYO;Sg0BYYOWOVO4Uh0a0a1K6IYhj2"}, {"size": [848, 480], "counts": "dQj31^j03N01N10N4M2N10000O100O1O2M2O1O1O0O2N1I7F:M101L5K6I6K6`N[NnYOk1le0eN\\YOh1cf0n04L5M02O1ON011N4M3O1N3N10O01007hYOPMcd0b3`ZOoL[e0T4OO1000010M5L2N2O1N2O1O1N4cNaZObMde0o1i1J5J4A>I7N3M3L5L6I8H9FSSg6"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "W[n65Uj0?D7G8I7K4L5L3K5M1N2N3N1O2M3N1O2O0OO2N1O2N2N2N2N2M3N2O1N2N200O2O1O010O2O2M2N3N0O1O1O2N1O1N2O1N1O2N101N1O2O1M3N1O1N3L6GW_m3"}], [{"size": [848, 480], "counts": "l\\l44Zj03N2N2M2O1O2M3N1O1O1N3N1M3O1O1N2N2N2M2L5I7K5N101O00010O01O100O001N3M2M3N2N2O1O2N1O1O102D;K5O0O2N2N3N5G_RV6"}, {"size": [848, 480], "counts": 
"bZ\\37Uj06L3M2O2M2N2O1O1O1O10000O010O2N010O1O1O1O1O0000100O001O010O1O10O010O01000O001O100O10O10O10000O010O1O1000O10O10O10O1000mLROd\\Om0Yc0[Oc\\Oe0[c0^Od\\Ob0eNUOPc0dNZOPc09[^O>dNZOPc09\\^O=cN\\Oob08^^OS]OP28`Meb0a0Q]OQ2:^Mcb0c0Q]OP2=^M`b0c0P]OQ2`0]M^b0e0n\\OQ2?^Mbb0d0k\\OQ2c0YMbb0b4_]O]K`b0d4a]OZK_b0g4b]ORKWO_OWc0`5a]ORKbb0o4^]OQKbb0P5\\]OQKdb0P5[]OnJ\\OBRc0`5b]OnJ]OCPc0_5a]OQK_O[OSc0e5\\]OSKib0m4U]OUKjb0k4U]OWKkb0i4m\\O_KRc0a4l\\OdKRc0\\4l\\OgKRc0Y4P]OfKPc0Y4R]OgKmb0Y4T]OfKkb0[4U]OeKkb0[4U]OeKkb0[4U]OeKjb0\\4V]OdKjb0\\4V]OeKib0[4W]OeKib0[4X]OdKhb0]4W]OcKib0^4V]ObKjb0_4V]O`Kjb0`4U]OaKkb0`4T]O`Kkb0b4T]O^Klb0e4P]O\\Kob0S6mNfHP_OZ7j`0PIQ_OQ7l`0UIQ_Ok6m`0ZIP_Of6n`0^IP_Ob6o`0aIo^O_6Pa0dIn^O\\6Pa0hIm^OY6Ra0jIk^OW6Ta0kIj^OV6Va0a10100O100O100O10O1000O10000000O1000000O10000XLa_O^M``0`2e_O]M\\`0]Oe^Om0Q1EZ`0ZOl^Om0m0FX`0[OP_Ol0i0HW`0\\OS_Oi0h0IU`0]OV_Oh0g0IT`0]OW_Oi0g0HR`0^OX_Ok0h0CR`0AW_Ok0i0BP`0BY_Ol0h0@P`0BZ_Om0i0^Om?D]_Om0g0\\On?E\\_OP1g0YOm?F^_OQ1h0UOk?H__OS1j0oNi?K`_OW1i0iNi?Ma_OZ1h0eNi?Mc_O]1h0aNg?Nf_O`1e0^Nh?Li_Oe1c0YNf?Lo_Oh1>WNh?JP@n1:SNeb0g1Y3I8H7H5L2M3N2M1O1N10O010001N10001N010O2O000O1O1O10O01O01O1O100O100O1N2N101O1O02N2M3O001O001O10O0100O00100O1O1O1O010O2O001O1N3M3M3L^nc2"}, {"size": [848, 480], "counts": "d\\h31]j03M201O000000000000000O100N2K4K6N2N2O1N2N1M3K31001L5I6J7I6`N^NmYOg1le0mNYYO`1hf0m05N3O1O010NN4OO1O102N3N101O01O6mYOPM\\d0\\3mZOSMod0W4001O0O101N2M3N2N2O001N2hNRZOlMPf0i1gZOeM_e0_1kYO`Nlg0Z1VXOcNRh0U1g0M3M2N4L4L4L5K7H7JUSg6"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "jhV74mi0f0E;F6J5K4L5J4N3M3N2N100O1N1ON31O0001N1O2N2O1O1O2O001N2O1N3N2M4M3M6I5L;D9G5K4L\\U4MS^l3"}], [{"size": [848, 480], "counts": "]ld47Vj04M2O2L3N3N1N2O1N3N1O2M2O1N2O0O2N2F:J6L3O2O1O000010O001O1O1O0O2N2O1N2N2O1O101O0L4O1O1N3M202_OgVOGZi07gVOIYi07hVOG^i02eVOL^i0OS^^6"}, {"size": [848, 480], "counts": 
"Uko23Xj09H6M2M2N3L3O2M2O1O1N2O1O001O1O1O1O001O1O00O100010O010O001O010O0001O0010OO2N2O1N101O1O01O10O01O10O0010O0010O1000O0100O0100O10O100000O100_XORO`e0m0^ZOWO`e0i0_ZOZO`e0f0^ZO]O`e0c0_ZO_O[`0FnBj0fLCZ`0EnBi0gLCZ`0FnBg0gLDY`0GoBe0hLEY`0FoBe0fLHY`0DQCc0eLKY`0CQCc0dLL[`0APCc0cLO\\`0_OQCb0`L2_`0\\OPCb0^L6a`0YOQC`0ZLUOR@P1X1Fh>XOR@n0^1Db>\\OQ@n0e1_O\\>@Q@o0m1VOW>BT@S1_2_Nf=5Q@W1gd0_N`[O]1kf0O2N2O1N2O1O2M2O2N1O1O1O2N1O2N1O101N1O1O1O1O100O1000O010O1O001O1M2N3L4J6L3K6E:[Oe0J7K4M3N1O2N2O0000O1N3K5K5H8E;B>G9G9J7K4M4M3M3L5J7Eh^b2"}, {"size": [848, 480], "counts": "dQj31[j04001O1ON2O2O0000000O001M3M3N2M201N2M2K6J5M201L4K5K6I6J5]NjNeYO^1[f0V14J70O20O0N01O0N23N2N3M2O1001O2N8hYOUM]d0S3V[OSMid0[4O0000O2N3M2N2O1N2O2N2M2cNVZOPNme0o0QZO`Nbg0_1`XO^Ndg0`1\\XO^Njg0]1WXO`NQh0Z1QXO`NYh0Z1a0M3N3L5J6K6J7HhXf6"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "km^7Yh0_OVXO4mg0HYXO1lg0Jdji5"}, {"size": [848, 480], "counts": "b]i33Wj0U4P@mKZ11_>]4n_OcKc14X>l5gAXJT>i5lA[Jn=h5RB[Ji=f5WB]Jf=d5YB_Jd=a5\\BaJb=`5\\BcJb=]5^BeJ`=[5`BgJ^=Z5aBhJ]=X5cBiJ\\=W5cBkJ\\=V5cBkJ\\=U5eBkJ[=U5dBbInNF]>h6eBbISOAY>m6dB`I[O[OS>T7cB`I@SOP>^7_B`Ig>`6ZA_Ig>a6XA_Ih>a6XA`Ih>_6XAaIi>_6VAbIi>^6WAbIj>^6UAcIk>\\6UAeIk>Z6TAhIl>W6TAiIm>U6TAlIl>S6TAnIl>P6VAPJi>o5XARJg>m5[ASJd>j5`AVJ_>k5`AVJ`>j5_AWJ`>k5]AWJc>k5ZAVJY=nNVBm6?UJW=\\OPBb6f0SJX=AmA_6h0QJY=EkA\\6j0PJY=KfAY6n0lI[=j7bBXH]=k7`BVH_=m7^BSHa=P8]BPHc=S8ZBnGf=S8WBnGi=T8UBlGk=Y8oAhGQ>\\:0O001O001O1N2O1O001O1O100O1N2O1SJQBWOQ>e0XBUOi=j0]BoNe=P1bBhN`=W1eBeN\\=Y1iBbNX=]1lB_NU=_1PC]NQ=`1UC[Nmm0]AjNi>S1]5L11O0101N2O1O1O1O1O00000O10010O0O00000010O0001O001O000oNgVOk0Yi0TOhVOl0Xi0SOiVOl0Yi0ROiVOm0]i0O1O1O100O100O10O01O100O1O001O01O001O00010O001O00010O0001O00010O01O01O1O0010O01OO1O11O01O01O00001O01O001O001O01O00010O0001O010O01N1100O01O0100O100O0010O010O00100O100O1O0001OO2O000O2O1N2N2O1O0000001N2N2N2NiV7"}, {"size": [848, 480], "counts": 
"Tlj31\\j04N1O2O000O100000000O1N1L5L4N2N2N1O2I6I7O1M3J5K7H7K5]NcNoYOb1ge0e12I8K4NM4010N3N2L4N2N3M2N3O0101O3M7I4L6K?@2N6PZObKce0f40O2N2N1O2iNQZOiMPf0R2UZOmMle0Q1QZOgN85je0Q1PZOaNfg0Z1]XOcNkg0W1VXOeNRh0V1f0N1N2N2M6K4L7I6I8HVSg6"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "oiY7c0hi09J4J6K5L6J5K4L4M3M1O003L5L2M3N2O1N1O1O2O0O100O2N000000O3N1N2O2M3M2M3O101O1N2O2N001O0010000N2O100N2O1N3N3M2M3N2N2N1O1O1O1N2N3L4M2N3L3O1N3L5L3L4L5Lc\\V3"}], [{"size": [848, 480], "counts": "Xkg67Wj03M3M3M3N2M2N3M2N2O2N1O11O001O01O01O001O001O01O01O0000100O1O0VO_VOe0gi0O01O00001O00010O0001O1O1O1O1O1O3M1O1O1O1O001N2O0O101NmhT4"}, {"size": [848, 480], "counts": "jSV55Yj03M3N2N2N1N3L4L3M4M3L4L3N3M3M2QKfNe@^1T?jNh@X1S?nNj@U1m>TOQAm0h>[OUAh0e>]OYAe0a>C\\A>_>H^A;Z>3^ANZ>a0]AA^>k0ZAWO_>T1ZAmN`>`1XAbNa>i1YAYNa>P2[ARNa>T2ZAnMc>X2XAoLUNgNb`0_4TAiL_NeN\\`0e4RAgLdNbNY`0j4PAdLkN`NR`0P5QA^LRO]Nn?W5n@]Lo?f3n_OZLR`0i3l_OWLR`0m3k_OTLT`0P4g_ORLX`0Q4e_OoK[`0T4a_OnK]`0V4^_OmKb`0U4[_OkKe`0X4X_OiKh`0Y4U_OhKj`0\\4S_OdKm`0^4Q_OaKPa0a4o^O^KQa0d4m^O\\KSa0g4j^OXKWa0k4e^OUK]a0m61N2O2O0O1000YA[Ge;d8WDeGd;[8[DhGc;W8]DlGb;S8\\DQHb;n7^DUHa;j7^DXHa;h7_DZH`;d7`D^H_;b7`D`H`;_7^DdHa;[7_DhH`;W7UDUIj;j6UDYIk;e6UD]Ik;b6TD`Il;^6TDdIk;\\6SDgIm;X6RDkIl;U6RDnIm;R6RDPJl;R6RDPJn;o5QDSJo;l5oCWJPni06K4L3N3N1O20O001O001O001O00000O10000000001O0O100000O1O100O10000O10000O10O10001O000O2O001O000O2O1O00001N101O0O1O1O1O10000O2N2Ngef3"}, {"size": [848, 480], "counts": "joh5f3gA[LX>i3cAXL]>n3\\ASLe>l7O01O001000O10O11O[Om@mER?Q:QAoEo>o9SAQFm>m9UASFk>k9WAVFh>f9\\AZFd>c9_A^F=DjU^OTOl0?Qa05ZAKQ?]O\\Ac0Yd0O010O010O01O010000O10000O1000O010O100O100O010O1O1O1O010O1O100O100O01O01O000001O00001N2O002N1O1O1N1O2N3M3JoRl1"}, {"size": [848, 480], "counts": "Pko31^j0100O1O1O1N2O0O3M3L3N2M3I7F9N2L3K7J5I7K5^N^NRZOf1ke0eNfYO_1]f0iNWYOT1Pg0S12000O1ON30ON2O102N2N2O200O05bYOXMkd0`4G1O0O20O00001N1O2N2N2O1O1O1bN_ZObMme0T2[1WObXO]Nag0BcXOc11bNcg0IbXO_1dh0L2N3M5K4K8H6J9EXSg6"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], 
"counts": "TWa7g0hi05In0PO7L3M002N2O0O101N2O1O100N201N1O100O1O101N1000000O101OO1O10O10O1O10000O2O0O2N101O1N101N1O1O2N1N4J5M4M3L4L4K5K7J:Dc][3"}], [{"size": [848, 480], "counts": "gbS77Uj07J3M4L4N1001O001O0001O0000O1000000000O100O10O01O1O1O001O1O100O100O010O100O1000O100O10001N101O0O100O1O2O000O1O100O1O2N1O1O1N2N3O00001N2N\\i`3"}, {"size": [848, 480], "counts": "i[Q6?ji0:E:F9I7K5H9Hg0ZO?[]OWMl_3eAfLU>a3eAcLW>b3eAaLV>e3cAaLW>f3cA_Lh=W4QBoKl=T4oAQLo=R4jATLT>n3aA]L]>f3\\A`Lc>b3XAaLh>b3o@eLP?_3]@QMc?n6O010O100000O100O1O100O10001O0O1OjNlESB[9XOmF`>3RBl8CPG:B28R<`0mCa8JSG00L4X<fNQLg[O[4Rd0T1K6E:D=\\Oc0[Oe0E;J7K5J5I7`@kFc=X9cAfGX>\\8`AmG^>Y7ZAkGMW1g>f6jAoIT>S6fARJY>g5\\AhG1k2b>j8ZNaD[D`;`;hD[DY;a;mD\\DT;a;REZDP;d;TEYDm:f;VEWDk:g;YEVDh:i;\\ESDe:l;^EQDc:n;`EoCa:PDXAQ4ITOg0YMj=k8WBVGd=m8\\BTGa=n8_BTG^=l8cBVGZ=j8fBYGW=T8[CPHbf3jB]K[=\\4_CmJfgi0i0^O7I;D9I5L5J5K4N3M1N2O2M2O100O1O10000O10O100O101M2O1O2N1O1M3L5G8N2L4O2O001O1O01000N2O1O3L3N1N2O2N1N101N2N1N2O1N3N1O2N2O1N2O0O2M2N3M3L4L4K5Khkh0OcSa0"}], [{"size": [848, 480], "counts": "nXb71[j05L5M2L5L4M2O1N2O1O1N2N3M101O0O2O1O1O0O2O0O2O1O1O10O01000000O11O2N1O1N3M2O4K4L7I7I5K3M3Ldme3"}, {"size": [848, 480], "counts": "gbl6:Pj07I7I7J6E=D?Ab5XAdJd>^5ZAfJc>\\5[AfJc>[5\\AgJb>Y5_AhJ`>W5`AkJ^>V5`AmJ^>T5aAoJ\\>R5bAQK]>n4cATK[>m4cAVK\\>j4aA[K\\>f4aA^K^>b4^AcK`>^4]AfKa>[4^AgKa>Z4\\AjKa>X4]AjKb>W4[AlKc>V4[AkKe>V4YAkKg>U4XAmKf>U4XAlKh>T4WAmKi>T4UAmKj>T4UAmKk>T4SAmKl>T4RAoKm>R4QAoKn>S4o@oKQ?Q4n@PLQ?R4l@QLR?P4l@RLS?P4j@SLR?Q4k@RLm>U4QAmK[>h4bA[KU>m4iAVKQ>P5kATKP>Q5mARKo=R5nAQKn=S5mARKP>i8N3K4L4OkNdBVE[=j:jBREV=m:SCkDmQ1VBiNk=P1_BlNb=l0hBQOZ=j0kBTOV=k0lBSOU=k0oBROS=l0oBROR=l0RCQOo`_OAb`0>__O_Od`0`0]_O^Oe`0a0\\_O\\Of`0c0\\_OZOg`0e0Z_OXOj`0e0X_OYOj`0e0X_OXOk`0f0X_OVOk`0g0Y_OUOi`0j0Z_OQOj`0m0Y_OoNi`0o0^4N3K5L3O001O001O2N100O01005J01O1O01O000001O01O01O000001O00001O000001O01O000000001N1001O00O100O100000000000001O000000001O001O001O0000000000000000]H"}, {"size": [848, 480], "counts": 
"`_Y35Zj0101O00000000000000000000000O1DiXO@Yg0>iXO@Wg0`0kXO^OVg0a0mXO\\OTg0c0oXOYOTg0e0PYOVORg0j0PYOROTg0k0oXOQOTg0m0oXOmNVg0P1V1O1O1O1N2O2N1N201N2N3K5M4JQmT1"}, {"size": [848, 480], "counts": "TRm21[j05O00000O101O00001N100O100O10O01O1N2K5M2K6M2I6G9K4N3L4N3_N\\NUZOf1ee0fNSZOZ1ke0QOhYOS1Xf0^13N2O2N1M40O1001N1O3M2O10010O04lYOZMXd0b4M1O000010O0O101N2N2N3M3L4cMSZOOSf0NnYOJ\\f03fYOjNag0P1bXOfNmg0Q1n0K4L4L3M4L5K6I6JXXc7"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "gb\\8d0ei0hf0BYYOcMgYOgNmg0T1UXOhNQh0T1RXOgNSh0V1RXOcNSh0Z1d0N1M4M3L8G7K4M2N3KhoR8"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "Udo7b0ii07I7K5J6L3M3M2K6M3N1N2O2M21O2N010O0100O01O000O2O1O0O2O001M4M2O2M3N2N3M3L6L3K3M2N2N1N3M2DdcX3"}], [{"size": [848, 480], "counts": "hW`;2]j03M3L4M5K5K5K100O1O1N1N3O00001O00O11O00O110O1O1O1O010O2N1O3L3N2N3L5K5Jg^O"}, {"size": [848, 480], "counts": "fQj57Vj05M2N1O2N1O1O1O1O1O100O100bWO_Olf0b0PYOGkf09PYO0kf02RYO2lf0NQYO7mf0IQYO:mf0GPYO=of0CnXO`0Qg0BkXOb0Tg0^OjXOe0Tg0\\OiXOh0Ug0YOfXOm0Yg0V1N101N1O2N2N1O2N2N2N2N2N2M3N1O2O0O2O0O2O0O2N101O001O001O00001O1O001O0001O00000000SOSLV[On3fd0ZLV[Of3gd0cLP[O`3nd0R1O1N2O1O010O0010O0100O10O100O010000000O1O100O1O100O100aLP[O]1Qe0^NW[O_1id0`NZ[O]1hd0aNZ[O^1fd0bN[[O\\1fd0cN\\[O\\1dd0dN][OZ1ed0dN\\[O[1ed0eN[[O[1ed0dN\\[O[1fd0dNY[O]1gd0bNZ[O]1hd0`NY[Oa1hd0\\NZ[Oc1gd0[N[[Oe1fd0WN\\[Oi1gd0SN[[Ol1gd0PN[[OP2hd0jM\\[OV2gd0cM][O]2gd0YM_[Og2Uf00O101O000O10001N1O2O0O2O0O1000PNjXOf0Ug0ZOmXOe0Sg0ZOnXOf0Qg0[OnXOf0Rg0ZOmXOg0Sg0WOlXOl0Sg0TOPYOk0of0TORYOl0mf0TOUYOk0kf0SOWYOm0hf0SOYYOm0ff0TOZYOl0ff0SO[YOm0df0SO\\YO9ITOjf0c0^YO8IVOhf0b0_YO7KVOdf0d0bYO5KWOne0FWZOn015JWOie0]1]ZO[OKYOee0]1aZOYOLYObe0_1bZOXOLYObe0^1dZOWOK[O`e0^1fZOWOKZO_e0^1gZOWOK[O^e0^1hZOVOI]O`e0[1hZOXOH]Obe0X1gZOZOI]Obe0V1fZO]OH^Oce0R1fZO_OH_Oce0P1gZO@G_Oce0o0gZOBF_Oce0n0hZOCE_Ode0m0gZOCG_Ode0l0eZOEG@fe0g0dZOIF@ie0d0aZOLF@Rf0:YZO6F_OUf07UZO:F_OWf04TZOFCWf0LSZOb0FBXf0JSZOd0EC\\f0BQZOj0DDbg0<^XOE`g0;aXOE_g0;aXOE_g0;aXOE^g0;cXOE]g0;cXOE]g0;cXOF[g0:
fXOFZg0:fXOFZg0:eXOG[g09eXOHYg09gXOGYg08hXOHWg09iXOGWg08jXOHVg08jXOHUg08lXOHSg09mXOGSg08nXOHQg09oXOHPg08PYOHof09QYOGof08RYOHmf09TYOFlf09UYOGjf0:VYOFjf09XYOFgf0;YYOEgf0:ZYOFef0;\\YODdf0;]YOEcf0;]YOEcf0:_YOE`f0^OZf0d1[YOhNeg0T1`XOaNmg0X1h0N3K3N3M3M5K6J;DknW8"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "kPi7VWO_Ojh0b0b0000O101O0gWO0Qf02cYO?Wf0A]YOQ1\\f0QO_YO_1Uf0bNhYOf1Rf0ZNlYOi1Rf0YNkYOk1Sf0UNkYOn1Tf0SNjYOm1Vf0UNfYOm1Zf0S12J5001O100O2O0O1O100O2O1N101N101O0O1O100O1000001N1001O0001O00000O101OO10000000001O0000lNoKi[OQ4Sd0VLi[Ok3Td0[Lh[Of3Vd0`Le[Oa3Zd0cLa[O_3]d0[1O1O001O1O100O1O10O010O100O10000O010O2O0O100O2O0O1O1O1O1O2O0O1O2N1O1ZLS[Oh1od0VNR[Oi1Pe0TNS[Ok1md0TNT[Ok1nd0RNU[Ol1nd0oMV[Oo1ld0lM\\[Oo1fd0mM^[OQ2dd0kM`[OT2bd0hM`[OW2bd0cMd[O\\2`d0YMg[Of2Sf0N2O1N3M2N2O2M2N4L5J7J3L6K4L3M2N2N1O1O1O010O010O01O10000O101O0O3N1O1N2O2N1O100O101O000O10O02O0O10000000010O1O2N2O0O0001N1O2N1O2N1O2N3M2N1N2O000O100O00100O10O0100O0010O00100O10O10OVOoVO6Ri0JoVO5Qi0KoVO4Ri0KPWO4Pi0LPWO3Pi0NPWO2Pi0MQWO2Pi0NPWO1Qi0OoVO0Ri0OoVO1Qi0OoVO0Ri0OoVO1Qi0OoVO0Ri0OoVO0Ri0OPWOOQi01oVOMSi03e000O10000000O11O000O101N2N3M2NTA"}, {"size": [848, 480], "counts": "WfV26Yj02N100O1001N0100O1O2M3H7M3O1N2N2O1O1O00O1N02L4K4M4F9H7cNVNUZOl1ge0cNmYO_1Rf0hNeYOZ1\\f0kN\\YOX1cf0X12NO1OO301N3O001N3O0O1OYZOSMnc0n2n[OYMoc0g2o[O\\Moc0e2o[O^MPd0d2m[O]MSd0T3Z[OoLed0^40000001O0O2N2N3M4K6J7F7XNQZOgNTf0U1`ZOVNee0Z1]YOaNVh0\\1`0N4M5L3L4L5K4L6J7I6H^iX8"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "kab8e0gi06K5J6K4M3N1N2L5L4M2N2N2O1O1N1O2O2M2O1O10001O0001O001O1O100NUXOkM]g0U2`0O2M3_OaWOkNch0Q1eWOgN_h0T1c0L7J3L3N3M2M3N2M7EZkd2"}], [{"size": [848, 480], "counts": "nek86Yj03L4M2O1N1O2O1N2O2M1O2N3M2O2M2N2O100000O010O100O1O2SOnVO8li0H2Okhi2"}, {"size": [848, 480], "counts": 
"kcf55Yj03M2N2N101N10O1000O10000010O00O2M2M4K4L5K4M3N2N3M2M3O0ON3L4O2N1O3M101O000O2O1O1O1O1O1O1O00100O100O0100N2O101O1O1N2O2PWO0]g0e1K4L4L4M2N2N2N1O2N3M2WZOlLRd0W3i[OQMRd0Q3j[OTM]OWOK4Ud0b3_\\OTMMUOfc0i3Y\\OSM^d0o2`[OQMad0P3\\[ORMdd0o2Z[ORMfd0o2V[OTMid0n2U[OSMkd0Z4O0O4M0O10O010O001N1O2M3LeNVKX]Od4kb0cKQ]OY4Qc0lKl\\OQ4Uc0SLi\\Oj3Yc0XLe\\Og3[c0]Lc\\Oa3^c0aLa\\O^3^c0eL`\\O[3`c0hL]\\OX3cc0jLZ\\OX3fc0jLV\\OX3ic0h100O101O0O100O1000001O0000000O11O0O1000h\\O_Iab0a6^]OdI^b0\\6`]OhI^b0X6a]OjI^b0V6a]OlI_b0S6`]OoI_b0Q6`]OQJ_b0n5a]OTJ_b0k5a]OVJ^b0j5a]OXJ^b0h5a]OYJ`b0f5_]O\\J`b0P1Z]Oc26`L_b0i0_]Of21dL_b0c0c]Oh2MgL`b0?e]Oi2JjLab0:g]Ok2HmLab04j]OP3@QMfb0Jo]Oe4Pb0WKS^Oi4oa0QKU^OP5ma0cJ^^O]5Zc000010O00O2ROo[O[KRd0b4R\\O[KPd0b4P101O1N2WM]ZOc0de0\\O^ZOa0de0]O]ZOb0fe0\\OZZOc0he0\\OXZOd0je0ZOVZOe0me0YORZOg0Pf0XOPZOh0Sf0UOmYOj0Qg0XNoXOi1Vg0RNjXOn1Yg0oMgXOR2\\g0iMeXOW2jg01O1O1O2N1N2O1O001O1N2OSO_XOmN^g0S1dXOoNYg0P1hXOQOWg0n0kXOSOSg0l0nXOUOPg0j0RYOWOmf0h0UYOXOjf0g0WYOZOhf0e0ZYOZOff0e0\\YO[Ocf0d0_YO[O`f0e0bYO[O]f0d0dYO\\O\\f0c0gYO[OYf0d0iYO[OWf0d0kYO[OUf0d0nYOZORf0e0PZOZOPf0e0SZOXOoe0f0VZOTOle0j0n1O1N2O2M2O2N1O2N2M3N2N2Me[W1"}, {"size": [848, 480], "counts": "gaR22[j06L100O10000O10001O000O100O2O0O1N3N1M3M3N1M4L4M1M2N3M3J6J7L4L4_N_NoYOa1Qf0kNbYOS1df0Z12100O00N00O2002M4N1O2N1010O011QZOZMRd0l2c[O[MZd0n2Z[OWMed0T3lZOPMUe0T410001O0O3N2I7K5L5M2N4J5eMeYOmNkg0P1XXOkNmg0Q1XXOiNmg0T1j0N2M3M4L3M5K5K5JSY[8"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "nfo85Vj0;F6K3M3N2N2N2N2O1N2N2O2M2O2M3N1O1O2N101M3N1O11O00001O1N3N2N1O2N2N3M4POPWO8hi0L4L1O3M4KVj[2"}], [{"size": [848, 480], "counts": "PaQ97Sj08I7L3N2O1O1O2N0100O100O01O001O001O01O010O1O1O3M3M2M2O1N2N2N4K4LWd`2"}, {"size": [848, 480], "counts": 
"Xfl6;Qj04M3N2N2N1mNS1M3OO0001K6L5K4M3M3N2M2N^OgXOSNXg0j1h0O010NnWO`NT3L\\`0b2\\_OfM``0_2X_OgMe`0\\2b^O^NZa0e1^^ObNaa0`1Z^OdNea0^1W^OeNha0^1R^OfNma0`1j]OcNVb0a1c]OcN]b0`1\\]OcNdb0a1V]ObNjb0`1Q]ObNob0a1k\\OcNUc0[4000001OO2N1M4CQ7l@gIQ?X6h@SJU?m5e@ZJZ?f5c@^J\\?b5b@aJ]?^5b@eJ]?[5a@gJ`?X5^@kJb?T5]@nJc?Q5Z@SKe?La_OP3g0WMi?Fc_OR3b0ZMl?Ad_OU3=^Mn?ZOh_OX36bMS`0SOi_O[31fMW`0jNk_O`3JjM^`0^Nm_Oh3CmMb`0jMZ@Y4ROnMi`0]M\\@e4jNoM^b0R5O01O0O1nNc]OQJ^b0k5k]OoIUb0n5o]OPJSb0m5P^ORJPb0l5T^OPJoa0n5T^OnIoa0P6T^OlIoa0Q6Y1O2M2N3M4L4]La[Oo0bd0nNc[Om0ad0POc[Ok0`d0TOd[Of0`d0WOe[Ob0_d0]Oe[O>^d0_Og[O;]d0Cf[O8]d0Gf[O4^d0Jf[O0_d0Le[ONbd0N_[O0hd0I[[O5jd0EY[O9md0@U[Oa0md0ZOV[Of0Zg01O2O00O2N2N2N0O2N1O2N1N3N1M4L4IhWa1"}, {"size": [848, 480], "counts": "V\\S27Yj00O10000O1O100O101O0O1O1O1N2N2M3M3M2O2O1M3K4I6M1M4L5J5cN[NRZOg1he0gNmYO\\1oe0nNeYOW1Wf0^14K5O11N10N1O2O1O101N3O0O2O0012N8fYOSMad0U3P[OTMnd0X4O000001N101M3M3M4M4K5K4hMjYO0[f0KjYORO[g0g0jXOoNbg0k0bXOlNgg0Q1\\XOdNng0Y1f0O3L4L3M5J6K6IQn\\8"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "hS\\9VOl^OQO[1n1f?ROi^O[OS1m1S`0hNb^OIR1c1Z`0eN]^O3S1[1_`0eNQ^Ob0W1j0g`0W1l^OoNTa0Z1]^OlNba0P51L4N3O001O0O1N2I8H7N3N2O1N2N1N3M2N3M2L5K4L4L4K5L5L4M2N3MkNYJf]Od5Vb0RK\\]Ol4cb0ZKZ]Oe4eb0^KY]Oc4eb0`KZ]O_4fb0dKW]O]4hb0eKW]O\\4gb0gKW]OZ4gb0kKQ]Of4ab0i1K5K3M3N2M3N3X@_G[=i8dAWHT>_8SAgGk>^8l@gGS?l9O0000000000000000O100O2O000O101N101N1hFe@[7\\?aHi@\\7Y?aHi@_7W?_Hm@`6ClHb?`0PAn54_Io>>RAk56fIi>;UAj56jIg>7YAi55nIf>1ZAl56QJh>C\\AS5mN\\KT1L\\`0[3Q@aLH3X`0T3[@bLA8U`0k2hBSMY=e2PCYMR=[2[CbMf]O>J5L5L4K5L7I6I8Fmb^8"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": 
"ana98Qj0]MZAi2f>XMUAl2j>VMQAm2o>UMl@o2S?RMf@T3Z?mLQ@_NcNf4`a0lL]_Oj3j`0WLS_Oi3n`0YLo^Oh3Ta0XLg^Oj3[a0VL`^On3aa0TL[^Ol3ga0YLl]Oo3Tb0U21N3N2N1M4L4K5I9H;G5K4M3N2N2N1O2N2N2N2MnNjIP^OT6fa0bJP^O[5Pb0jJm]OU5Rb0PKl]Om4Ub0XKg]Of4Zb0^Kc]O`4_b0cK^]O[4db0jKV]OV4lb0f1010O01O2Z^OiIP?X6c@\\JU?f5a@gJETNOL^>[7cAZLY>P8O1O0000001N1000000000000000000000O100000001O2M3gEbA\\8_>aGeA]8\\>aGgA\\8[>\\GmAc8T>WGRBh8o=RGXBl8j=mF[BR9k=^F`BU6oN`K0gNY`0b5T@ZKJROS`0^5_@UKE[Ol?Z5WA]JZO7a?U5oBdJT=U5Y4jNU1QOo0E;I6N3O0O2M2J7K401O0O101O0001O01O001O010O2N2N10O01O0O101O5J9H2M3N1O1N2O1O1O2N100O2O0O100000O001N1fND_XO=^g0H^XO:_g0OWXO5hg03kWO2Uh0U1N2F92O0O1O1O1N2L5J6I6I8A`0G8K8IZ^d0"}, {"size": [848, 480], "counts": "jbm12\\j03N2O0O2O1N1N2O100O100O1N2O1N1N3M3N1O2N2L3J6M0O103H8L4L2]NdNRZO`1me0gNhYO\\1Yf0jN^YOW1df0nNVYOT1jf0V11001M4N00N2O2N3O1O2N1010aZOkLcc0U3W\\OSMgc0m2o[O^MPd0d2l[O^MTd0m2][OXMbd0R3iZOYMXe0o30000001N2N2N2O1N2N4L6J6RNo1XOQXOaNTh0X1VXO^NPh0^1c0N4L3M4L4K6K7H7Inla8"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "moj96Rj0e0A3J6J8I6I7J6K4M2O0ON4M3N2O1001OO2O001O100O1O100O100O1O2N1O1N2N2O2N2M4M1N4L3N1N2O1O1O2O0O1O101N1O1O0O3CkcV1"}], [{"size": [848, 480], "counts": "\\_V9>mi05L5N2O01O00010O000O101O000O1000000O101O2N1O2M2O1O1O1O1O0O1O2N``\\2"}, {"size": [848, 480], "counts": 
"]Po62\\j03M3L4M2O2N1N2O1N1O2L4L301O0O101P^O^OP:d0nD?o:CkDd0R;_OhDf0V;]OeDg0Z;ZObDj0];WO_Dm0_;UO]DP1a;RO[DQ1d;POXDS1i;mNTDV1k;kNRDW1o;iNmCZ1S[3]`0[NmAU3R>lLhAY3Y>fLdA^3[>cLaA`3`>`LQA[3[NTKd`0a1c@e3T`0[Lg_Og3[`0[L[_Ok3h`0]L\\^OT4ga0W24M2N3M3L4M2M4K5L3N3M2N3N1O2M2O2M2O2M2O2N1O1ObNVJP^OVOJe6Vb0]1OO2O100O2M3N2N2O20O010O10000O1a^OeHBEZ?g7j@UJj>m5n@]Jj0kMabLiAa3Y>_LcAc3_>^Lc@kN^No4Pa0WL]@_4e?bKS@c4o?_Ki_Od4Z`0f22L5K5L4L4L4K4K6K5L3K5L4M4L3N3M3M3M3N2M2N3N2N1O`NYJ[^Oe5ca0aJ[^O\\5ea0iJX^OV5ha0mJX^On4ja0UKU^Og4na0[KR^Oa4Pb0bKl]O]4Vb0l11O2O010010O1O100O101Z^OeHU`0\\7]_OZIY`0\\9SO4L3N2M2Ne0SAeD[=ZVOR@QGo?m8V@PGj?o8Y@oFh?n8[@PGf?o8[@PGf?f6f_OZKf0nMe?_6T@ZKm`0^4`_OXKd`0f2P_OoLk0L\\`0S3X_O\\Lbc0c3f1cNlYO^NUf0o2O002N4L2NJ7N2L4M4L3M4L3L4M4L4J8J6I6H8H8H7K5N1O1O2N1O2N1O1O2M2O4K2O2N1N3M2O1N101O0O1000O01000O01O1O00000000F;1N1O010O1N2O002N1O1N11000N2O00100O1000O100000000001O0000bH"}, {"size": [848, 480], "counts": "[Tf11]j04M1O2N100O1O1O101O0O10000O001M3N2O001N2N2M3M2N3N1M3J6I8F9UN^N`ZOk1\\e0cNRZOb1oe0kN\\YOY1if0S14N10O01NO3NO2N3N4M2N2O2O000OYZOUMlc0k2n[O^MPd0c2l[OaMSd0`2g[OeMYd0b2\\[ObMdd0l2fZO\\MZe0m3001N10001N2O1N2O3L4K5L6^MlYOPO>Ole0l0jYOnNf0Jie0R1Z2M4M4L3M4K5L5L5J6H_fg8"}, {"size": [848, 480], "counts": "P`]<"}, {"size": [848, 480], "counts": "clo9?li0;D9J5H7L5J7H6L4L4L30O1N21O10N2N110O11O00O101N2O1N2M5K4K4L4K4N3L3N2N3M3N1N3N1O1O2M2O3L3M3N3LbeV1"}], [{"size": [848, 480], "counts": "o`V9>oi06K4N00100O01O01O0000000000O1000O00100O0010OO1100O1O1O2N1O1N3N2M2O4M1OSUY2"}, {"size": [848, 480], "counts": "cXn6:Pj08I5M3K5M3O1N101O001O01O01OO100O101OO10Oi\\OD]<RH_Am7b>oGaAQ8a>lG`AS8c>iG_AW8d>bG`A]8Z`00000O1O1O2N1O1N2mIV_O_2l`0QM]@a1TNaLba0j1f@fNRNV1LWN^a0k1PAUNXN\\1_OaN[a0m1aBVOZLjNVa0Q2eBkNZ>U1lA`NY>`1[53O1N1000O1N01001N3N3L3M2N3M2N3M3M5K5J4M2N2N4K5L1O1N101O0O2O000O10000000O010000O10OO2O00010O01O1O10O01O1O01000O01000O01O001000O10O1000000O1O101N4Fbm<"}, {"size": [848, 480], "counts": 
[… COCO-style RLE mask payloads ({"size": [848, 480], "counts": "…"}) omitted …] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "sav_dataset.visualize_annotation(\n", "    frames, manual_annot, auto_annot, \n", "    annotated_frame_id=0,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Show the SA-V annotations in another frame - auto + manual" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "image/png":
"<base64-encoded PNG image omitted>"
6MuWjnGcydgbPsJBWEEMelM5Ra2DJsIP8MydYpNwXfZXS50VEIolrFDIfoIxuxwGllEpxdn5Og0IWJXG74Xy1JCU5LDstEFqRQlbD0+hch6n83nYGQCqGGjldv78vJLDXWQ3XP5f/Kg2DAIPjE7vomb/PO4/te2RZwq6HGSydD7iuhRjzduW2I1aKKECFNCDEOweWMxUhJYSUt9p7kFEznRxQ1Qc4l6PPdDpFqoQpTNZWipG6yrO6o3qEKQxt32eq304pMMXrjOIaAc1/QPTu+p6nYc50ENDIdL0UcvkSJLH1zIxGep/LMwUFWeZUpUD0EUfOxlA5o9hpKv1Zrj8Do8cMwEmit4FNs6btLT5BfL7AWotSktV6ydGNExKeJBJRCBQRlRIBDzqbik8JiUT5rGWzN5+xvlpwdnqKdxbISgMpZrUzmbJ+rBASY8wrArKQiOutXq9VUbtzNjwMXvtzUnYgbdsilGQ6n9G2LcF7UvHKcPO/07Whi13tM3j23TqG4OMQ+cg9uLgTAovDe8sT+xnUiMOhTzmPGpS9d7zTfHBe7WCJkCX6Q0BLRZKSpmnQZT1MZwxZeozEEClMXgenlH6tUBSIVJKHqIffIV6rH/+YB//C7RL5vscB8TYy3+9ruCZFtBJ5f2iMpOCzWJq1tE3WPZJC5/lLawddpTgY9fChRc7CQhJst1uqsuTBg/sZVFQFdtAx2pvvE2LCd3mpkZJy+DuJ8IHClAip6fs8jC2lJCS5W+3yqn6PWWDaJ0u7bUkhDWOluV8qZZYcvXZUEUqhqYTAaE1MAiczkGMIlCJmXrUcWoeDQQIgBYLwb6dPmT2ZQRpJ22/pbKDrHFGIDHSInI8v1pd0/YakaqRWyJw7IgfNHaFzahgQtD6BlmybjrIaE1pHu2mxvUUlMRxScr9v8NZyIDw7EspkMCPF9NoRzgaBSF/oWSYGrGeg4Fkis+mMG0dHTOua8f4Bzeqc2C8HxYE8exiHyiBHGzXca3F9uHcel6EOjNHnxnjKQNNwDBBIYtLZycQdu0YONW5AJEMUnpgcu5ovpd3CbkEgUSqVBcYS11qxhTH0jc0MG2sRSqCMZP9ozg8//gFGG757711IOt8XmUW102tP9ovPmS+0NHfc5RA8IkXEsLjV49EoYvQczic0RaboCXaLfsFHl5FSkajqAl0W2BjQUaBk5idLJa+dp48QeM7VZsnxzWP6bV6317c94FGqQErBZttwcHCQNW6tpdCGiCfGhO17IG/gstYSkgCprjO3lBIhRNKwb7PvMiCU77VCKI0qCpQxoAxSKHSUTIsKve2GpybRCFx0lCpSFwqdXFbjN4og4uCw8+rBkEJ2yH9aq/jfxChVUWZ1uhCJSYI2TOYeHxzWK7bdmhAd682CzXaF0KBSkdPOmNDBAz5L6KNwWuOKcuhiZCMYV2PeePgWP/qXf4RrGpSqULJAiDz5IaSEmEWSuhBQUiPEMCB8rduZh6J3US3xqgEgpcjpqMpEeuk61s+e4sQpyllMtIMBqeE1smFldo6GwTnkdCS/dt7TKIbWTCISiOLVUHZ+H2GIhAPcIMQAwuS0VwxaMYkWUkcSWXoiAA6wKSKkxpgKZH5lmYZ6J+a+se19PmxSUNYVVVXxybMzpBBMjQK2INr8DNilscO1S/8Fr9Lp4YqDo0spkMWKHdKo7HCiJ7g+745MPVdXVxS2IlYl2A7nPUIphJJ4H7BNh48BlXK2o7RGxjgsfxK4EFgslnR9z7ZrqIopTdvx/PkZnW2orMAYTbNt2J/P8wynVtc9WgBneyajOj+fBC7EvJ9TyEyoGBYUpxTpvcXHgS4hVJaPrEtMYbIekZKZlyUVtZCEtkWHkCdkRCJGz6Q2AzE+AzopDsaXciqfN2cPs5RftlHqskSWUCCo96YZ5pf5jTRbx+Xikr3ZHs5HrhaXtK7FFGWOESlRpIh0jvF0xmz/iLZvWNclSSZWTY/3oIqKk1u3Obx5k7OnT7Omz5D
2hRhyeuQ9RkvstkFXI4TYwfzZq2YW/PAAkmAnbLTLZ5VUmIEJdBR7HtgVhc09Ju86qpTww17NmCJKeMKAqMGOzP0qXd1FGjkYHSINs4waicnjZCjktXZRnmpIYfeTubmsBaiUWyqZ/RB3lAmiUCidmULZrHP8VUiiz32zrHwQabuOyXQPJYtXaaiwIDdARMQq0/6EvTbMV9HxXz01SSRi8oTggIT1gVrKvJouOtp2w8vVJZPbx9y6eZMgE6Hbcn5+jvMJYwy6rEgIepv3c8qY6Np+IP/v6jzJpulYLJYoafj80WO+9va3iSHx8vyUGDyjesRsOueR9VRFRaEMxVhzcXlFiLmk6Jpm2MnCADwpQhKESF5/MCQ1Wmcdn6bN43aIQTi7yJM3aUBxRQSdItJ7Yt/lszbUiEJE9kYVYkBYlcjnIZcwYaiV83kQ/zZaIlGq69cUiPywMl42TMIbSjOBZHny+XPqyYSyrknJ07VbhHNIH5iWY47v3uPsyWd8PtOYQvHs9/8FrdYc377Ff/tP/jFSgzCCKFxWiIthqJk0vbdMTd4tODESIUPu6YkEQiFiCUhSFDllEwOLhTw6RsqJo/ctN4Tnd6oD6kLSKM3SwouLKzY4gvAI4UB4pDDXtR/XB3lIal9vBYq80kDqHI2lEDndGQCcGBwxZmKCFDn1lsO4hNxF1UDW7EnDNObAKjJqx219NSgtVRpkM10GJmKk7x2z/czbfGVYOcqJWOXaUrYg+yG7+FfbIz95+oyUEr/yzV/n7PGWJBwRSxK5L1oWGS12DoSGtm0pux4lBHuzGeX+nKX3XH7yCUlKxtMpt27eoovZuTKAbiHkO+m8xTmPUIZxPUUIzWQ8Z39+iOslNliQgv3Dfd58801+/OM/YDwe5e3SZZmj0jBKFZzPS2NjxKcIwoBQhDCsYxT5PsUU2fYdnc2pv4seLfIwgZQi83lTohJQJ4mKASccSsWsGJESOgr2RjXJu+y4ZKYOxrjDGzJaH0XWIP7y0Vfk9XG8hsWJ+ZeFQPKei/MzTFFy6+YNHr75NgdHh9SjkvVyQegtq/NLfvyDP0JKxTol1lqjjQDvCDGBh2dnLzBhACPiMDI1aIGHlAdLk0j45BCFGDhhcqj+dmlg7lftVgcA160TMUTOQETEjknsqKNHm5KoIxfCIaOHgR6VDWPXKcifPS/dEflXvzYGlQCURBqB1DltJ2YtWiEEUQA+AFkdQJJrLynSLibmfuCQunoysTlTHONA4h6EBmQEuZv4yAE4DuTuy4sF79x8ZZQZSxl6oMKSyGvExQ6653UwTPBisSClxLSaceq3Gb0UAanyjGaKYphKyc8hiUhZlezYPETomhbXZ16zVILeWbwUCJX7xEoqjFAkIsYbYoTRZM5q03G1WHD71h1m8znj0Qz7D7us8CciulAcHB7kszjsCzk8OmC12CCA8XiKKYqcNsb8nEIirz7wjkjCesvVYsVmsyEOEy8pxQHgERQmL7HSElRKVFpgokeJvNMlbzxLFEoyropr/MD5/Doie9oBTBN4H/Hh1b3+Eo1yOPDXgF0cJPvyrGQIge12izYFh0cnVGVFCB5nBeNRjVeGMHWElHj09Cm6VLQRiiSRHpCZkmekgGFCPXOWcx24o7zllEcOkLokpWFy/bVrZ5yDHQ0gDRmU0SLHh0IRcSQZcGTldNKujsx1YM4GXkdxYUeovgYNro13QCMV2SBVQomIlAotQMSED4P3RGSyALmPq0QGiZJwtK5BhI4oEh0qL9VJkegtShXoQmdvPFC3tMg1m4xpeL8Rb/u8xuGP3ROZNCmVeQ/lF6ptcW2Y8QuRP/9cjB7vLfPDMdO9cX6vKTfoffQkmahGoyy6HRPaZLJIDCHvO0Fk1buh9bNzuHEAfLTKEyLL9ZbnT5/jQ2Sz3nJ5dYUxFb1vCThC7Lm8Ouf23Tt51WAI9MqyP9vHdWEgfdv8voVEyl0LSlDVhkpURBLr7YZNs8n3f1BgYFd
yDFS8FMFbj4sZOc/ZCtf9VAUUCiZVQa0jJuXvy2leLjNCiEQ3fNYkCF92+rojDux6S0Budoec2SmV09uYUiYGEwgxYF3KtKOYx6pMVeAWkeQ1286hCvOFKJTyHc1q5cPvyeBQJr/FFNls1njvWW+a64U6w8kb3uPQbxtGghIxr6YbUooEFMZQeo8OOQeI1/Vhuj6QO0N7HWTNv2QAbIbXy8Ll+eeud6EogVbk953ydukMDqSMBApF5rwC5PUHMgV87Im2A6lyLVtXjApDSipr7pBXMOw0R5UctoCJfO9C2k15pOuvfCfD0EOV1+YYX4N7rt1KesU82fUzY/L4YBEyOwSGdlBK+RlHKZFFNZQz2ar7tmNUj4i2YzwZU5QlVVmAFIQQSD7fT4HAh4AdJDQ363WObL1ls1pSlxV9n1dbCClZbTfszfY4O3tJ17akCJPRlGJQbu/7Hhc81np8GOpvJdGFGbAJT9u2AyMqV4Np8LhikEsJzmO9x4isUqGEQgk1TEp5hiyVQglGpUaLHpVTn2F0MDu6LGMUcmQV0H7ZYsy73lm6ns5IA8QehxVkZvg+socY+nVxiHYp5eWluiivBZOcC9ezj685bIQUWcCJPCIWI5jBKfjgefriGVfWc77e4L27BoOGmJhTwyQILmE7T6khpIgLnpQGmN5ZCB4Z80hYkrlxvDNmMbRkXp9AFH/sK732FUUAGdEmG6RSOx5PXv5jpIYkuPXmlBDgxWdb+q3LD1DnZTpaCKrKgKgISGwSONuzCZ579x4yGc95fn6WRZ2EILieaBjS0sztVSKv9Pti9jAYnnR5ryY79SAxIIXDoWSolYeUPMMX2cEivwjpxwHZDnFI8ZUkV66R5AO2b6nKAps8o8l4+CEIKQyiVCpPFwkB3pO6vPC263pG4zGzyRQtZG5xDEuGy3LE1dWS6f07hJCwLlCaxGq9pqgK2qbD+h6ZMriTfWgm2IthGW6MEWMKbt26w+n5GWHgCgsyKaXQmrqqKbSiVJpxkkydQbZDH3YXj4gUSqBlIgWHd3lJMFIgZBbbzvdm59rkvx2j3PlUds8RAUniPWT9X0UMghQz0JLBlh2HJH/ysiphgJPDQFAeEjrUgMbBEHljrldD8KjEEH0To/GUZdyw2m6JYnewdlSmDMiEKOh6R3CBTiUkjijzVLj3loqcIu6IwyHlVkH2BRHkANwwRF4xpC1px+4Zsof8rgjCk2Tk4GjObH9C12RyOWQx5TtvzPhH3/+7fPKTj+ltx/3bb/Dd934LlnOahUUYjRaJkSkglrgk6GLKzbuYePr0KbNZSxxmRKXQBB+Iyg/vM5uhFiKPtcV47UwkQ00d8uoIlXYTo0MtmXaOk1fehp3rdaTUI6WgLsbXBq8k+PQqM5FSIoSk7zt0DLi2wfU9xITWWbUixh4fMm1PpqxB5EMixEjw0PeOlCKFMRidl0Ntlku89dSmRqusVo6Awuy2kyU22w2T0RjrLXEAa8oiEwB2iukoiY/ZHZnKUE9nXCyWOUBEECjKsspatsMmbCUVhdRUUSNdm/m+ccjkgLrQiOCJIaBELn9IOZsLnkHILAeJmBI+/HyTIj9/TbmT0CenEZlqldXq8iIaOSBP+RfHga+UrtOD/Dp1XSGHxTkZ1BjSRYaB0JRIwTKZjHJNEgKETNVyUhBiQpUG1WWQ4PpN7eoiMnSfhCQMKKeCIZIOdWAIeCLD9s9cf8Y4UAYGHHyXEQuRdVlFbu28InnvWjWOuw9uc+PePt57fvr+D/jgOfzmL/828+kh2pdc9af8X/7W/5GmbbhxdMTBbMqjp5/w6eOP+PZ7v8Svfesvs/ysRflXglYk0AjqoqSXEhfBB480Zb5PUqJ1Rlm9z7qpIg0JZILxqCDEyKSu8/LeVCLijCQyqT1PV+bOqriWhBTwhQgb8zKn5JBSUZUjlCwRKWBKhet7UsobrxBZLTxGlyd7fEbMhZTU4zH1aEwaBslzIJY4F8EG8JEYMkI
phLoWqxZCQggc7R8QyTtH1n1Ps1nnle1GkUTEOU/Td/TO5XTatvgwLO0dajyhNEnkpT1S543cTbPNwSZGshRsvgN97wkp4ZRCo4mhQrqQ10JEjxYBlRyVVqRgcwlzfYrzs9txpnfAYADsl22UWUx0eOIDqph2BBqRUHogVMeQDUP4gbZkiCJDyEnm9XFZhDd7GDF8oDyInGcPo4vMDg5BiDzelfJgcxAJH0MmPBfmWrz4tW4bCJsdgkh4PEqURKG5zk2HNsNrfgKBQqZhPAk1DPHuBrhy8zm3NtLAgxXXCeA77zzkBz/9/SwgjRhaJ/D/+N3/cmAPaXzwvPngPr/1S99lWo9QSrHabvn06VN+/49+yKeff8jf/Kv/Sy5/sEC5vLk4JEhJZlHjFJmM9vCDbIpIZAMQApQmxZ7oA97bgQYomRxXbNr/L21/GmxZdt13Yr89nHPu+ObMl3NVZc0DqlAACyiCIERSHJukWhJFSU1JHVa43SE7rAiHwg77g786HO0PDneEu6Ojox0tt1oSNbHFeQBIgCBAECgUap5zqKycX+ab73TO2YM/rH3OvS+rQFTJ0AGy3nzvGfbaa63/+q//GvFjj3+eJb0MvgA1IepJ2tHDQgzewMvCjto7HGG0pig6ZBmYmQwWiFERfUxbl8iUWWM5vr7B4egQu9elYxS6muFnJSoEsm7OYDhgsDyUPFqJKFlIgtkhyamORxPKuibvFDz08MOsb2yQmZz3L18R8nmuGQ57bN0u0UipTacSlMkMWZYxmc7wQZhlztXiJJxLaITGeUftHSYrqJyXPkpkfkskddgoGYVYV45IzSxqXIyiBeVk0lxQHqoJ3WwF5R2tikLjIOKcbk8LDsqo+B+qUYrHSKHmQnEuQqJLzeUOQlLYlbwjNXwqmYLkcHgk1HO+FqJ726tJanbN6PWXII3Q00pKHlFJ54mvHXlK3JvukAZ8UEZAHWHYeGL0wpoxmiP9hfccGpUMMWKRMLAxdhtBejsCTaCoY2R9fZUb2x9w49YWavCjaLOM6TyA0ho3fZ/o9qnH34YY+fyzz3D25HHGIyniL/e7PP3Iw5w+sclvf+Ur/OF3fpPnlh5gVI4xODwwC1A6ESY+e/9xbt7eoq5KQnBoleptLTgW5HoVmEzoa6RnE6PF6zFKldKloZzkP22Ekf4pmDrPd997Tzi0oyl5Dko1iyl1vsRINFbKC96zv7PN/U8+SW/Qx3jPbO9ARhHEiLEWHyVVUTq2YXKIEn014KAyHucrur0OT33qKR584EGIirffeZcQI0v9AYrIcDig2+lQTWfS3WFkGpgLgt4vr67QH/SAkOQyRaEfNLWrKGuh3u0eHIDyCXFPqYnRFEVGJy9kVmnl6ASFcR5XSz+oC44QK4yrJXxttIZI/ioZxaJioE9ZSOV+2J6yfW6LC7vJJ2TgThsaOic3GmmR0lGJ0aROgMZDBedRGGhaZVJ7S4xRdFHaFho5AaUNOgacC4TMtjlRbGQVU1lggeUgtCpSfqibOsn3ubwgIapW4hW9mpcFhPgtoYhCWpAeeew8X/3zL6OKRymWnpfXSTttNngMRWQax7jxS9JJoYPMGjFGSNMusNTv8/lnP81XvvFNnv7ieeoPnAAsCUmVtDImForhcDSWwT/apOG2DSgTEwk7ozdcWtAYDQQ9gaBQvouJlsiEqOtGqIGmhDSnmTc3xRF0jaMSmU/v8EnepK4dVe2YliWT6Mk6Gb1en76VmZJbQcLPTqcQac5qKjTJpgSR8IgQ5LzrWhqN9w92uXjpIrNZydJgGRcDprCsHV8n6xSMJlOWV4ZMZ1PZcJSiqmsyK4CNzYzUGG0ueS4aZYQ2F2IndSdJ2qGiwyiwWmOtIc+tlMJ8jUHYWTlgEtmAIPNTvfdYFelkFqKQMJxPhknzPCA09MsQcZ6EBv/g4+MbpW4S/0ZmspFOUDSPU2kJg6o6sfSVSXIIAsDE1CeoU92RGLGN8hySt4Gw5fb2D+S
GqgT6OGH9GyT3IO30Db81RiD65HSb+tMiPtqUOb7foeY9ffAhDmhCQ5AlrMmsZjQ9kO6J7hMLxfeUpjadJVpQaWs1/V6X3BicD0wmEmpFNR9+VIcam0ArlAiI6RiwZByMDhkuLbG9dwAoOkVBd9CXnChGvPM4F8gzi8FAI5EfQbsuqjomjB6zD2aSnmOYG+E82WvvkXczynrGtJ4xq2fsH+6j9RLKZEL4rmuquqYYdLFWanN17RLYITq3WV7ggmdWzcA0m0ijii6N0iHCtJzJzFBr0VbjQk3pKg7Gh2AU3X6PytW44Cl6XSazMUWnkFKHD1hjqas6AYMSwgohQGGQkQFCf5RNxdVCzG8AsUaTNXghHRgdsSrQMRmF1mRaSbtW6vAZdHOKTEMtvNsYU69tg9enDS8xh3FB/fA9pWl4qIkRE8IchkylRbQxdLIshZhyg4zRaKXxyWOGWvrTfIyyGFUqRSjZWbWSBmCTJj+pKHoqIUlAGqAO84ubG6S8SCQtahZ65xTzr5Xi+x2LBtvgkzHlp1p2AAnHtRjS3Z078rt2KIs7CrFa2nWaHWAOnFirIWTUbpoemiC/Ta+nVlAYAyHDJ3BgqdNBK8V0MuH45iaoa5BArNxmVHV1JHoJBKajkYR2pBy53sDUq0S7h8u28WbShuYqQa7zemVzLyLXt67x/tVrzGZCr9vb28HNpuTaYrXBOxE2W+qtS0ksCsAWQyJdBCl7zWYlCo8MPJaGhMYoI4JoHo5nFL0+JwdDhstLVN6xd+sml65c4ub2bU6MTtIbdAiI0UWg8o566tMwIyPtg1ODsVnKX0Mr6aGVEllUa1HaUJZVu6FDpNst6HW7dLtdCqVQBDpEVlXBEOiUlUhPukAdS5Z6Gp2eXYgNaV/qkTE124t+unjMQBT1i49xfDLlAWgX9dwTSd6SZYIELi8vo7XmYH9fcgqj08YhiKsG7jt7lt3du7TSF6QSS+o3jCj6vYEsNh8lN1gAWaPz0iGxCN/HhZNcsLu4uNKOXsmRT1UIAoAgD1Gp1LQVBPjRKiPGGq8insCg18V7Bw14QZDzFwxMCvpA1nuU+vAv+M4rr/HMI+dlmG0aT64SJ9UYoeHV3jEsZNydj5Ha1/hqRukDTmuqWUme5VS1Tw3lFZ1OjgtSC7VWiPbelUkIKl283SPauwQ7Ai25OaqJCRbC/aYJGzHKaTnF+RofvTBdvGc0OkTHSCfLMUZ4ot1up73egBDjnZJw8s7dbQ6+9zLkRkC+zBDS9prZjCzvoHXG6HDC7t4eKNjd22Nj4xg+eMqywvma2WTMwcEBvX4fpRXaZsxKaSju5R1igNlshg+R0gXyoiNavVq36UeeZWS5kAEODw5EyycRYoqiYDAYsLK0RKEVVkWGPrAyC3QmM3Ijw4pRQovsW9HimafkAu4YpBQSIvQGPVQMHE5EfrXmh12n/IjATyXPZKyhyHPyPGcwGAgzYyRKasZaXF1TljO888zGhygSUBMTV1A1OWV6FxW5vXUrNQuLZwpBZgQK8Ju6NFr4sDEy1aKK4nnVHHSdnzRohfFWkD9lyWLdei0JycEECblD8pkNnhYSF/ehh8/xjRe+CtlJrB3wod2hbbwWTzmZTKl8jU7IaTuvk8iJjXWKPOdbr73Krz38FJNrNzBETHSCqvqaqHLu3NmS0odWaKOZzmYcP7HJbGuGUoJAVs4JAb1RT1OBmO/IfUPYU4v3Ri/cndhEGkhuNBgOGQ4nhL0RCkUnz+hkFpMQ1Dp4olbYTiGj5VKFyWaWvBAu7LSaMts/IBhFHR2lqyUPLKUuHLWi1xvQ6w2YzmZ473nt9ddZWV5hOFiiqmZ0TM50OmE8FsNpwuTpdIbNMjLlyG2B9xArx6yawLjEq9COGdRaGpQV0nN5d2tL1lGM0haXdzDWiiJfQv1tjNjaE2qRfPFe2rCcd+Smh07N7DJXp4mOFMoY6ghbO9s4awlZQRlmqQb+g49P4CmTZEIbwsZ2/Vl
l6OQ5Rin2d/eJMXDs2HGWV1fpdLssLy9TlTNmkyk3rl7hxe9+G1dPOXX61DzkPGIzirtbW7jgyBQ0jaI6LRdBPyI6amJc3H1k1xcvp0mVqgQGCa1MaQXaYHwmw2uLDDOrEaU0AXlEk2fuPSASlScQQEmjcdGVcd06OyZiUjGmn6dkP6WkRvcxxRl29m5zZ2eXE+tr5CbD2BJavyRH7QSaJ3qijgQTcUYWQ8Sze7BLFUR9QGnDaHSISe1IIOUB5x2dPEk9pnsizC951CGGpPymRFCsjVDkbN7dvkNZlXz+05+hOixlQI4aC+KspO8wImlAwMsooCxLSnYRa2T+ZojS3bN+/BibZ06DNXiEvuYDvPH6m2R5zgMPnCdE2N7eZd+NqOuaD65+wHBpyHDY58zpk4wnI6aTEeWgS8/3AMP29jaj0SG9bg/fH2JNxsHBCG0M0qxsUFahbcI8lGwW9WzGwf4+o+kUH9LABCXcKx9hUlUYpNDfLR1u5ogu8bxjlN5WoJ8X6BDapgRlTJJzafJIiNqgTM6scozGo/lG+QOOT6T72nxsC/7phKw2onDuPIcHI7K8g7RoyIzD8WQkf5sMwoeAj5GqrlNTcqpVagFRlBKeqEljEYIX3Vaj5o5IkFvmvNVGakPN2TjC24wElaf4WBTOdDSY5CMS/QCUhB6iPCcgQfP9JqiOqS4WNWzvbgFgug+JEWrQQfSCVEKe5VAUK88zuf1v+fMXX+Nv/uxPYIyh2+0xHk+p/aLko5xzjI46iIBkTcTrpuNASjsmjR8oy5rJeCr3OgoA5pyD4kgLswQQjfiyEgM/8sOFY28ywfnAoNvj9p1Dopd4XCXCgg8elAyKDd6QdbvtM7VRE71nfHhIVU6lu9979kYHKCvRkDWWyWSGqz2Z1WiM8KWd5GTDpSUefvhBTp48QTmbCUCkIuvra0xGE5YHKwQnhfxut8PNWzfonn+Yuzt3uXrzumyiWmh2cp4kil0DRDaNFKlBXUFUgf2DPep6hrVG1rMLTOrA8cEq3kVcSLXG4MmIdPIMFctWkY/FZ56IA1lWUKLxrsZXqSD7MY5PlFMeNcgmGROkMCrNbDqjLmtcHRmNxqysrxMRvmry7Ng8w2QZzpeCoiZPKItfpzkWaVaJarpCEPSsQUQDlFWZoPw5NKHahSfkd5FzVIlJIsF/TMxyk9giKgR0ClktGqtkspNwX5PSGkJ8UDqSZZqNk8e4fec26C7arsr5h/nDaNb6PAYQBHY0mbX3qum8r0ODgKa/iwk8CIE6VuneaYbDISFqJqVHmQxrZWhsXXu0koU9m5UopcUjpEbfYa8rRqhEQyB127ZAj+j1NHDHHJ6WMkZNWdeEKCPnrbUpj0+81xjJiw537t6lzix5p6ATArObN6irCh2lYXx/dCgGEMVDe9dwo+HOnR2c97ha2s589PSHQw5HEwiBu9s7dLsFmyeG7O2NKDoF4/GI5aUlxpMRq6vLrK4scXBwyKnTp5OSg8VFCXHLcoq1hlldCnmgCiI7EnwKX0UhYjQZMZlI+1f0gY4P5LYgdlfwMWCidETF4MlUFKX+mEgxMRJdo3YBaN2GswoR79JCC/vhG2WrErcA8qCkv0y6Q0i1yppZOZOTSNBsTMZirMIYQd/q2kuZxDSv3dC/F9DEZEAtR0IJYuu8lwG02iRUKxldakZucTWVvr/AQmneIYQGzp5vNKIg0JxBFAVYncIgFRn0uzz0yP387h+8hh58HpulVqYjQXhsU12lIMtXUXaJm1t3OJxMscZxOBoREMSaNH6AKKGUKMaFBAp5bGY5c/o02uS8/vZFQqgZDodMpzP5e6UhcTJzm7N5/Bjfe+t1FPDjjz5C00cjZ9g2o6XndxQraL5yITCZlVRN/U1rKSdo1W7O3jtmzlN0pRY5mUyYjsfoyURKXTGwtLJMzJM39CIgVZbCee72ujKBy2icr9EalodLTMdjyrKiyAt86j8djcaiPkGUEeyu4vDwEKs
1mc0JAQaDAVpbesMhxmappbBGKan1Ou+ZTqfs7u9x8/YtXJROFxMVRlsybZNebiSra/o2k0qBq1usA+/JjZRINCIDGlKt3Mdm40mBIlImqmoZrfeXIf+Lxydi9Nz7dZNXtsVg3chlxKSCplswqDUqrVBKGmUbhS9DQlejklawxt8oUSKziVIm2K2Z1wBNQkiVT8hhg/+ltadNAmsac08Mj6AEzve+YQySYmHJA0kEguRJVMojMwOPPPUY333pBVR2gu7KZzFaJymKBultrhYaSUalM7TucTi6xa072wy6XWiCdqVbkoSCNGzXyM5ci3/3ZcXFd94jGIsPInFxZ/suO7vbLC0PqasKpTWdoivj+oxpOchHJiJ+aE3ED33Wbi5K+gsbaQ1jE8E7AbdyXyQFeeSB+1k+flxSjNmM3cpzsLuL84per0vILQ2rl6jY3tmjdBVFt8NgeYBzFWU9xWaazWPH8M6zPFzicHQISJNwXYuCuQzLgdHhoYyw0xYC7O3s4gOCylYV2mY0ItoxikpGi4o3/Oxmq1aK3Fj63R4dm2GAoq5Z1QU6Cj/YJeZZDJ4skwZo+Vsj8qFJ0IwgonB12gp90h4K6p6U4i85/oP6KedAT/NPfI8MbYk0HFYZs93kVvJQ6krYDyGxVIIPIlIUkypdbLycwWYdadQ3ObWdMNMCZ/sQiHiUFTkQHcW4VFsvshKlKS31KmSkHUpAEh8j3kvnSgwKjxaKGonAEEkobpCNJolJK6s4rEbcvHWTwam/y7Hjp/Hesbe7K745LLS0pXvWhDHZ8Fn89u/zzqUrPPvEY4ltIg960VaM1lijRZw5isCvjslzpuZVCfFloI3zwraRAcyBqiqP1sNShD/nQqj5D2i3jwXfKV83ejXOO2Lq+Fda5n8Kd1U8bmYsvSS+XXRycm04SLVKa6VLQ3rjFtYOkqNmnYyikxMmjrou6XQyrFV45+kUBYcH+3R7hZQyTIZGarOlkbk1eSaIf6eTSQlIWbyvGU+cMGyiKDoQYyvQ5qPicDKSzpDE+TVRk5tMhhUZiwUGNmPddumExMdG0OroHUv9AZkx2GggRrQSvR+FBCy197jgqREp1ZkX1ccfuqdsvWEDSLTo6/whLnrTuqqJLhIz2mRYzFKLEYprJHhPDBaTVOl0BJnsq+kvrUI2JWIoNjIOa49OsiReCRSNc5IXpg2i0SSVOqaglBiVKF4ZJP6uU5YYIllMUpIxtuhYe/8UkuvGgI+Bcw8/xNdeehHbOc1Tz/w0/f6A0XiEq2uqssTVQpZvGqa1kb7QCOS9c5QHa1y6epUnHjpPt9ttw+gFMpB4ywQoCZnCJp1YjVJWQvbMoJWIUkkJV56LMtKv6GnAI6ELKmLKreW8ju7X8zwyAofTKQD9bk9C1cR5JkZUUmiLUXJ8QqBTdOnkHZmhoS09aV7FYMgyLXloJpxlouSRIqmaZCu1xFB1XdHv9XF1JcaXGQaDHk898SS3b9wUL6zFKEkMHmWV3Eet6fa6VD6knkZNJ9V+VYiJfVTJz0OTP6f7A2i01EyznNwaMqXoo1iKBcXYY7AiIm0jRSdnqddJvbIS6YAR/ENLDVtUBqSc5tJ2HxqGzMc4Pr5Rtptvyidj8zDnHjTLMsoysTVCxDtH06atjYYQ0rw/g1KCqtJ0PoQFRJdA6WqqGAmmIOic2O8wPhyJXIWKeCWJum/6HxuB46YZW0d8TGG3UikXzRLg5JnYAo+V99Y+PSZFVFLnU2kmpoQ4GhMVnW6P8WTMcONhllc3CN5RFAUrK6vs7e2jVEVdVamdLS0AI+FRYQaE1c+wtfUVLl29xhMPn0d4QvPZHuKgndBCgkaR0fSIagpU0lktjITw/X6/7VltNF+jCtS+bA1PR09UNUQZ8UcDeS08wybB8BG++fob8iwbu06P2dU13kgfpVbgBS1h2O9JM4IWFYXZaMrscIIKEaOtTJtWKZNNj6iqaxEytknYOQbO338/23e3Cc6R5x0
6ecFrly+zvXOXleVl+oMB3SLDWml8KIpCaHl5Rukd0cocTaLMvCkK8aJZqgc755jWNdPaMSqnRJXIJ8GgokaTYUyOMqZFNopoRMlOWZSJGDzd3NDPDSoht013jXB6G7FrWsHt2rlW7eKHHr6GhZDoCM8zGaSwO6SDPgaYzqbMplO6phB1gSh1TmuztvUHJ4pgWZCdUxBTyUdrH6mDooyKKkClc6ZZN4VwoY0wAVC69RZy6ckw1bybPrYhaZrEpGAWPcHXaOOJSiev1XiWRBmIqciuFLVoCDJYfUzU11IYtry8itaW3d1dqqpqK4Q+SL3LGOl06S0/xmT767x14RKPPHi/cICZh5CQuJdp3JrWTU3QoE2GS2rsAgI5kadAeCRlLQs0K6RAfnRPlmSn6e+DxjTnS0XRpCjpTxIRwLtI8IJkhiIjTz2vaCHvW2PT38rGqkOEOuBcTT8byKYY5+hyVCTEWGGMTaF2ZOv2LY5vbLK5eRKrc/p9UStQStHpyDCkbrcHRPIiZzgccjgZkxU5LoaWvN/sIg2s16zbhnU2qx0NX7hZu0qJREk5KwkWahfombk0ZqDJS2syG8lMFI+eVA1QihAUXklE5UNs9V9d7Vrv/DFt8pOhr81xhGKH6LApBcZIYp/nHYKvmUwOiaoi6+SSX0RhgnSyQgroSgr8KtWzXARlZNepiIQso/KOaYjEIiNmItMYYxsUSIi6GFqruUJCTIBNAqvbnMkoLawLXaPT2DMfaMsfQv8LLfPFaI3VlitbtwFYWn9cCvpR8pUss/R6PSaTCfsHsZUYVKkWq53CWsPa+jHciR9j68bXuHZzi3OnTnFEcg+oA5QhUgVR7NYarLYURYdyOkVoeUI4z/OMPCtwdYlzkl/aVGppn1Xz6q2+KguGeQSHPXpoTSPHqBLbqUlADPKctFIp8rF4FTBooqtQ0eGjxxZZCjNju1gVUNc13W6XTqegrism4xGjgwMee+RxnnnmGYyREfW1Fxri+QfPpzUnddiTJzd5/kd/lBdeepFbt27hnE/j72R2NipNWUubQYM+a20k2mhaICMoDFobvPdMxmOMgY4HtMOpLmXw2BjRUWrgVkc0XnJ3IpnVGC20uqBbmLGNJus6JHzl6Ob7lx0fH+hZoAgtAgYCOEgNpygKTOpvm04nXLv6AQGHyTMGg6HsXjEy6A/QG5HJ4SHNPA5hRShBWX2g0pbQsehMkXlNjSbvWlRd4r0IFUWERRFbrxbTaQmc2tQtRX9VYbIe0UjvZx41+Si1UsUC9Fx2sUkqVSIhCBTbYsLEGClnNc6XaN1wc+U+aG2EmZPafJr8snae8WjCsVM/xs71r/HKG29z9uRJWTQL4jdjpdnXhjIoZkqDDfTyPKXg88GmIXhGo4pub0CMXkZBVBWdfo4tslTfFS9vokTErfk1+5eaf6vJRhYcZfJyXmQXiwJjM0BEzfKEGFe+xhMoq1oaD6YT6nqGUtAb9sk7ObVKnSNeVA7KqqLX69HtdJhNp4zHE2azksuXLgEiPD0YLHE4GqGV4d0L7+G949y5M9hM896Fa1z+4Ap1cGR5LoseklQnYoxNJqNUSx9UykgLWwAbdTtAyWQZWZ4Lf9taBiHQc0rmjCppnoh4TPRkJklPqkgMTnSJlUsbgYh0xfSeIZBKOkmxQn3k9veh4xOgr4ufL1p8atHJDMYoRocTjM7oFB0ef+xxTK7wSrGzu8skcWLHo31Wh0uUs6kw6DObwnIBODI0pZK6j80zVJB5DlnmKV1G7Y3IPbQlGSEWaBIrJyR+aFvEldKA1wUYaZzOsfTiFIWMDTdKpdmCqg1pJI8XXzNYX+dgZwebDcjzIZPJmLIcozTCOa1qDg4OWqdn0lju4EO7U0+nUzaObZD3TrO1fZvRZEKv0zsSbG4H2KqhjvJPpmUHfDkFX6OSfEVVV1QuUf+01Nt0dORZl1P33cdLb77LI2dOc6LfTTIh81ih9Z+LaQgwT1Dk0CkistZ
SdApRf1Mk1lqkP+gzmoy4evUDbt29w1JWsFx5ynKGMopev4PSMtNSB8nNQy36qJ1+X3pCURijGQwGrK+vE6M0J0+mE5qxdYNBn9HkkKzImFYzfPRUM1EROLG8IhPTilwQ9ZB6NKJMC1fGCMinRY60DVt10xbYyKp0yIucwlp6IdCvkXEITiRHdXDYWFL0M6ySfBhvWuwgxkBM4mLDlRXKgzEhpKpAk1z9sMNXtVjxWnjxxQlRTWzu60CRF+R5Tnepw9LqCpsnT1CXM2bjCS+//CJ3t+9Q+QpvFN4ISCBthIIyjg8PuHVzB68sQdm064FXARfEW+oo6m06hYBaiUKPIrSGpVMiroEqlu17ZVEz9jPoRrRyopbA0Q1HASpoCqWxK+tceuVlit4Zut1VylJCxhA8znlGoxGTqTTegrBfjNYYE3G1TBtWSkbH2WKVyeQa09mMTtFptVYjsOsid6JKA2kClhrtRYAElTRmVZD2NZ0iGBXx0RGDo/Yymryqazp5Tp7IFjThU4zzcHShOhIRwoAPnk6eSzhIgr/a5vDYevXaeUZlSTZYQRvDUn/I6dU1/NYdtl1FJxdJRq01RgtF0SWmkneOfq/Xag0Za1ldXeWB8+fZ2dlDa8V0OmsZPsaKhqu1hqoqZeSAc1itKbKcidYURUHlmoG9KR9O/bUm5ZllPeVwPJoP3IkCCCojzdHKKIzV5A46RPCiNKCCR1OTx5quLaSODYKOG9WOLYxK4yKMDse4NNczhCZq+5hsdD5RTqk/lHzE9qMmRpF6dN4TI9TBSYE4FjgvimaurDFGuhn2RocYpZjWjqzTIyRvJdGixs8qwrQUiF/Vbfc9yqGpyUItyG2QepFCi8SI0EDnwXWUHRqlcMpQG01TR53UI/Ad8lwSdO1lerNB5EBcc6GpvALgveNgdwdjk6FEqQ1WZSUzPbzHGo3RMv5dtSBXwGqNc46l9eeY7L7G2xcu8qOf/UzKhyJLSyuMKscsRQAqemKomEUpBygrE62MMdTBpbDME2Ka7oSmrHxS6F7MVOcwmHx1FAhqALEr27ts7e7x7BNPcPXKNZoJVU1hfB7aasromFYVx48dZ3VjndXVddaLDre37gohxGryzKYGAGHJWCOSkd55Op0e1tpWke7gcMTZc+cYTUucC8zqCq0tg/6APO9i7UyU5ppp3A2wmNQFMis1ypjeKzLnKiutqWvH/v4+VVUd3XgVZJml0yuku0WLQXarSDOXuikHqeBl8hm0mENjCFqJyLc4JomMau9bp7WgTvwDj0/kKT8E6cZIDCp1IDgUlhAEfXWOJD1pGE9cQmUVCktmipY8EKOo4qReYkzj7p3H1h4dFKWObdFfh4QkpjAuAjVJkS1ECDa1yIhHaXb7CMJfRdBJ1AwXJkS6WA0xKrRX6BCwNIS8KKWdpkOElK64Kslo1tRVhQvS4RGaXCbVDq2xVFXZIrkiyFTOGSWJCXR76y5lVfHQ+Ud4+7WLECUMF+X0GqUDxmZULqBUhrWWzIseT/BCso8BJqpD5gq0L9IzS9BMImc3WO/icxRYy6d5J0kdXhmmTlHXYpRKG+rakWlREI9aMalqIor+YJA2Wiuq6K6WUFprjDVSt0suXZtIWc5wzouwtBKK4PLyKmVRcjAds3ZsnZs3twgxsLaxxmOPP4J3jr2DXbIsp3Y1q+sbbG3dFv6vlu4UoxRWCwoaYjJIJS1uxlpG4wmz2ZQmcw7pWrNcs7TUpd+Tmm9GoO+hb6BnrBTNjEKFwEB16BcdrHJpfqWg9EYJCSWCdIk0wtghjaFIT+PjwTz/gdxXWZwLMG+Q3dNaCTPFnUcqL8wKXEWuBG1VaSS2CuJBfFL4askIEXSoObexzKlBn52tA0ZBcWtvTK0SKKLE8xzJgRauWBa9MHIEg5EBoDpGlJeulGg8PoovFHRWIgFFQ8lrXIP4lXnLsJQrrNZkxjD1HpqG3cy3ujnSCiRoirVZG1LHGMiLVZTusLe/3zJ
j5LUDhBodPR1ryIyAYKeOHWdldY13P7jGwaSUPKvXlZFsQQx+aWmZMJ1hbD4PxY/4x3STGj7gAubTmGvjP4OCCkXlFUoXuBgpfVNCkPxpXNWgDZ1eT3IuldDr6NHpuVauhiBha1SioO98bDtOYoRZWXI4GaO15tLly5w6fRZtDKPxIVU9wxrL+soqV96/hFKiDXtsc5OdnR3yPMdmNqUMIWn4xiOiVTphCzYziErAAsKuoFt06BSdpKULWYh0laavFR0VRalfyTDEpTynZwxWCXkhRFkLizXYZKcEmPOqY5Nm/LDR14UyyD0/SB801hQ01uGjdIcIkWD+GgqD1Tk6yqg6V6cfJk6lJrK53OeDyT4vXX4Fayzdostfef4ZtLFMD0ru7Nzh/asf4JPcQkyjyxrwIkKaWpVIBWpubE1yGhJVLCpPHUR6IyiN0VlrlJI+GXQwtCPHo7y2IpJbQ7fTYVp5lpYHFJ2ukMmT4HCMsqOa1FEQgii8dYollOmwtb2Lq127oegYyVzJwCjWh6LIdugUalKhO7UMKq1qQl3TLfptL6XWMBwOyIdL6RwXk34SCn0POJcQQUkXZChEs5h9iEyqgKshKpHgqIPU/BwwIzJyNbqTYTOLw2O0oaMNxnuyhEBUriZUUCMS/kTFZDpFGUNURmZHEtnZ26Oua5ZWVvjMZ3+Eg/0DRqN9JpMxxmhWV5ZoRiRELePr+v2BbOohCClf+zYiavGEBuhTkZXlIUbDzVu3hNQSgwBYRZGiCIWJgY5XdKMYpFWiKAERS6BnE5kgzjELFUElFYymK6jpp5SSjjp66z/G8R/UTwn3IrARrQxZlmOzjKoWuMUl+Q+tdOt1BFK3KGVReBldgIaUfGsdGS71+OpvfENusDYQ97l66zqAQNb9Ps89+zgrg2XKUUkZ4WA84eKlazQaLi6NEkdZQCQwQnMKMUIanaaiQpeeiCYjx6ITX9NigixQHZobL57GJ56lUpDnGZPKYazFpsReaUVuMrwDYhDAR8NsKh3szSSuiIA54/EYgKVej6GObHQyVnPNpCoZ1RVZ8FjnoKxQPmC8pxcVeZazU86YeYcuumRZIUJUDTsKaPpB4ahhtpUfFCQg7YULlwHY3DzL5SvvoJ0meunEl9Yrj0bOf1o5up2MNnRUoF0NVUWmJN+u6op6GqhixFUi9Tk6PIQoTJeD0SHj6RTnPbVz1LVne3ubzePHqKoKgufO1i3uP3uGU6dOyTiM6Nm6u83K0hKT0SGTyYzJtJRWLBIw1SL5KvkMjzGWXr8jrJs0ZctYI/XzZGAGTRGTrGQIImKmZb3kQD+3qCCzVZSVlEAmUUcadfRmOwxBxmSQ1n9QR5v5/7LjE/dT3nu0Mxm1JN8CbMQk3T5fHPO8Roq4VqedMtQp3JpzV03q8t588El+8q/+DaZ1zXsX38S7muvvvMzhZMSffOfb6RHIsba8zNn7TtLv9Tizsk5V19RRUfnIhfc+YOok5PJRC48xRgbDJeJgyGhjhVu9IbUp+K3r17i2v9em5Y+dPMWjFJSzGcc3Nrh9d4eD8W26nbOoEDBpDghRZqp43zBnaGIW8dxJ3j4mEGi49ln2b3+Fi1ev8u6l98XDTmd0DHQNFCpw6EqUCQyWl1oZj9xq+tbSS7s0RErvOdzdQ9kpg+EylZeOiodPnpLIoY1i267S9HkKtZME6M2dHQCmtad0gcwL+txyZlNPZohRCCNGiP/GGsn1q4pYVWgNLnp87YVpEwKuclJeqGVwjq8rDmZjZrOpgFpKjOPqlWucf+C88IgDXLlyhRPHN1ldXWdr+w515SnrGSc2T+Aqz3gyYTorE9oaaZIPpeYfjREB8IZvCxKGF7nFGGlysNqQKZmsldURFUTnVQWPwmEJdI1Oa1PyUbTBGJtCcRlfQYz4qKl9EC2lSBIj1/8xjHKOtbaxceOZlUFrYZpYuyDV513bbL1Y17c6zewIaaYkwiPUCSCRgit0+8sMo6KfFaw
9/hmUhupTzzELjg9uXGVv+zY3L70FwKSa8so7b3/orIss49jaGk8+dJ76zi53DkpOP/wg5Ip33n2df3zrFvuXLnL34PAjr/ov3nmHzFpWh0ts7aZFe/AO1dJxjBayRGYsoZa+Pz2/KWir0QmAiL5idOjJraCnNhsQY2T/cEQIQcYP2IwwGFB3NGWmqa1ijMMPC8azmkorbC4MmsbTxfQ8QlTgArPxtM3TM6uTESY/qZJZLswSabWEFjZcj+jvqKiT3AXz9ACJl5UxqblaJXpzxFeOaibAloy4M5SzGSoErBeubOYDylqiF7XD4CXF0BiiVxzuj9ne3pMxDQDacOvOFpubmzgn80l9CExnJevHN7l8+bLMsREfSdOVExJo2CgiKCXj+EKr7UQq2TSEkVQeUbIeQxQ9HvDoIKMbchOFvVPLum6UDJr+11buNMnXNMroklZ+PNEs+ERizAsQMKkNJ0kkNGJTWSbj38Dh64iKDudLTFBYq1IOUGOMMO6Vhxh8O504KjA24xsvvwzA+fOfwlc+TcOVdzYqMtCKJ0/fD2cewD/zPJHI1NXcPdijrEouv/m9BKBAXc64fu0SN7bu8Df+ypc43+vzL778RynETiGcVaw90hGK30cc0y3HnZ0dWdQRZjt/ytXpe5x68O8xNGvoBLk3ugMx7dDaiAqAVeDKGlc7VLcz7+AH3nrvEjFGnnjqaS5ub2OGQw6NJxiYWMs0y9gzirpjOFTCrqkVjHxNoTKIkAcoaolOcgOdpDX75tVrqE89RWtP9yQ3kdgOnwlN2xkyw8SiBbhp5FWYy0YCZFkHpSy7O/vkmabSkXw8YlrNIASRAp1NwdVon1IAH8i8p9fvkSEeOtZB9JYCaC8P+uaNW9R1QOmMqDV37t5luDRMNLsaFwLXrl+n3x+gjMbHRrV/Pl9TyFweHxTeS/1xNitT21+KGBpASEnXkU8fQ6gJ3lP5moAjCw6iIzOgUzsfURFTpKeNWqT3yhrwadq2Vsy7Nz/e8Qlodh9h6WmHCdElmlYFqqbbE9qSMgEfxpRVSQyZwMvBoawD7YhRBvioZrZH2smbnd7Q6GU2TBt5vxgjMQkyC9dRMVCKpdUNbG55+r77cSGCVzgPW9MJr7/0DX7ja18Tj2EU+ZLh5HN9Vh/oYBTkHd16hLlXSD1ydZDNIcCdt8bcemlCPb3G9ff+GSce+s/JiyWiCngvObBSUojWxogKGkLGP7G5yfLSULRu0jU25ZGzDz/Ci2++Q5HnTHUAE5jajNJadquSTrdHpWRT8rmhNobaVcQQ6ERY6vcp60AwkabCev748aOgj5p/ImAYLWIYzBzoWfaR1egIRhOSekREyPA+SpiojKJ2Ne++965QDesp/bpkBU+uFbnWqNmULKR2u6DR3tNVio5X9KtI5jVlCVmlyPIc66WcNhpNmU1LbCbPfG9/l8g5stzCJIAXdfbJwT7VdIL2XjpkIMGgKYgNSryxMdSVYzye4luCuqRaXpOMEZyWzqDOzBPLmjLJqJgQ0SqSa02mNcraNCUaQvSCT0gYIkeUSNFoIyW+KE3WP/QukY+GkBooP/E+jUUpxdr6Cp1Owd3tm5RuSpZ3pISgZbzZdDRiZX0dbadkRS5hgGe+UJjfWI1qF4tEhotyH4s4tKinRZca9rRIg2irqcspN976DhGwueLBX16jd7KLNrIqdZQxL01nfPP+7bsYsB0hD5z6bJ+NJwq23yq59hc3uXnhn7H+wK/R7Qwlj2kmSzUgg5eJTkWvS6fIUEgHzcHdF47cSa8iTkVMbqmMAhWYmoxp0NTKMugMULYD0TN1geHyCuPxmDrKxrScNEZLHFeufwDAj9x3H6p5TAsI7OLiUEZhlW1rpgAnuzmPDjJqpamUiF87DJVVHM5qpgGqqmR97QSPPvkog0GHro6Mr33A3VffYKWfCRndKdAZaItTmlnw9IshuelQhFyIHd5gVEaRd4lRY01OPZkImFYHMiUSllanDT3
UhFkpJArnqUcjjHMolzJlLb2zpqHRKQmv67pOBI80W6apY1pDNLL5yyS5gKtEMyhoh1KBSI01QijR0UgEkZ6vD4oQBZVuKJ+NVIpRChtFB/bjwzyfxCjjEWGJdFGRkASFadg4GmbTCc5V9JeGHD9+jMFwhU5WsLO9y3h0yKx22E6PJd0VBEtrgvfo1EXSXECzDUjk1Xy1WIqNCyGj/MeHCN6nMEaxOx3z7T/+t9LqlCse/qVVemd66fWC/L9RKG+R5cV3ii0Hs8kYso7m1Ke75APDpa/cYPvq77Fx7pchaqbTCp/O3ChNbhRry0M6nR6EQFnXXL/5Hq7aZmnQ52A0Zv3YMfamY7KOpeh1pI+zrommoHQwqyI26+GDAgxliJRBobIOvgx4E5lYjUm5jW6sK6GQ9zZyNWBAW1MLnspHZnVFJ88ZAsczSyXBBs5YSqUYh4h3kbosqX3g2MY6RZbTyTrkBlw+YNhfZ8lrMqVQ0RJNhssyKQ0NLcU6suCBEBW57dGtZ3z62Wc5ffoU+6NDLn3z67K5Lw/4iS/8KHf27lJ0C/pVhwfPfYpyNGNne4elPMNMZxIKh4bdK3VKQdA1OoD3UfR4XNO/K00EimS0SgkRJThC5VAxoHUTRsjMkV7RQyV1Rh1ZILlbIhL1NS2CMTVokLykSnXK/whAzwKcfoSmpBFFcZ0QNMPOzjaByAPDAUtLy/R6S/SKPoP+ClU5Y2dnm5dfeZmd7X3OnNpMZHKZZLxo+qI4JrQxgVCapBkWGfdSWpBzTBU4+d0Qee2VbzLavonJFI/+4iqdMx0goHyQMszC+7Udh1G1r3u08DO/F1HB+kM5Wi1z4ctvcvfSDsunfxVfS4E8xojudiiKHkUho7/rquLuzi12rv5Ljq31+cLnPsNv/v4f0+/3uXuwT9btoLNCCA5Bpv+avMtgeR10ho9SU7R5QZnmdFRqBrlhlOcQPOBohrarKLl4PJJPNtFFs/PI3bo7nXDh+g3OnzrNlau32S361IgXqEOgCp5aK2y3h40B/IzhsJemfymcyZl1VhhtnqVWlrwOeGXwWqPyHLRBW0MwkZKANwaXgKKe0YzXT7BVDMhsj7i8Snf/DudrRXbpFvt+l8F9m4TphOtvvYXVFqfgYD+wpCOmTkX8CIEgLVRB47URj2sswUeMCxRRkSlpbyuUJlQVlVJkKmKCwzrIFWRa1AKsUnQDLHcLVBRurY+eqL1ERpHURhjbTEFIBcyBH6MXtCB+8PEf1E/ZNIeCGKUUn6VWabRNg3ykLqeixlWeUjkZma4sNisoy5LpdEZdSVNvc4FHyEgq3eTai1E2xOjkkY9eZPIILSIJt0e73LjwKhFYe7BgcMLK7Utk7xZ5hFS+WOw0TNeKmotiMV/aQYmBbDzUgQgXvnKLg1u/w7HTfxui7NK9YZ9+r0sMnv3DQ+7cucbBna+SWcePP/8cNhdAJsRIgSHr5VROhv7oVFe1nS7D9bWW7IAWwkTpZFyeRIiWuhlvHhS9BY/foJJHzz6Vt1pIHKoUdmWdLtfyJXZioFaCxBIi2tepPBCJvsbPDsk7mTwTY3Cmx6iIXOsHZkYmgnkvxP9cG4KrqKczNJGZ8gQt8pMChmou7l/Be0ecTjHTinPLazzXXcEfjLk7u4M5MeCsydnod6hiZDuX7oxjw2Xi4ZRpKCmDl/wwRmofqIPHBRGoVs7Tqyt5lqmDp/CBOJuhvJDYdXD0TIeuUhRGgzZkKjBQlmGRY5M8qVD0kHxT6SRpIqGHML5iOzBXKyW6vdFKFPcxjk/AfT36gu105kY/psm/tHTDhxCEGJB+LmPX0yLXuqUghVTXUzHpy+j5++RFTq/fE+qel9HqwTtpmF0cK5ZCYJrTSedy4/YHOCcyGcc/28NZWn4qKaxonUaixB29zI++ifNAQT459nCBr4Zc+tr7jPZe4uTpHycqRZZZJtMp4/GIvZ3LzO78LxBnfOrpJ1lZXmEyEz0cX1WsGMXIVeR
JzClDZlfIYCGfNsJAlhVkRd4Wpo0xYDSlq4SsoWMLZsCRgCIpoav2sanmySrNN955D4AnHn+Wb767TWmKBGYEIjIyLjcGGz3eWKI2FL2eaM9YS600d6c1Nw5Lqkzy0egiWVRkEep6hq8rrIooa8Ea6axAZo/61BhunWKgLUu6Q9fArWpKzKxMdC7hXBUZa9HDAc9pW5B1LQe9Dk7NvdL26JAqhNQwHihjZFDkTFSQ4bsm6eqEiHZyfVlwDLWiby0mCuE/U4oiibHo6IRGmNQx0NIw3yprpC2w9mWr+RSTKobWhoPR+CPX073HJ/CU819tc620LptmYqUgyy2N7mwjshQSzDcfCCRexsSYRpIJWBEQwKPZvYOKyPhKhc2yZEuZvF4ipHvf/PMiY2iaBDxw5e3vAdBd1qjVxOiJQl5vHEVzQZKTfpQR3vO9ZLhthptaso490uX698Yc3vkjvHfk3SfpDgZMq4rRwVXc7m/RzSPPPvUsjz3yiJRRZiUAt27d4tzJTR7c2CAqLYXxsuK13R0GmWVQllTO0dOafpHLuDsrOWuVNGCm04pyNqXfs+2uIcanFvCwha5KBaTwsYyRrb19rDF4lVM6KPEC3kV5jh5ZiEFFUYiwOTYvqJ2TIblEJmWFqyuIChMguKRciISCESWtdVGJXqr0W8jUagVKmSR+qClINcLCstZdwVSOrA7YCqKOWAOd4Fnxiq6GzEAZIzF4qhCwxpD3ehS2Q1YU+BgYlTMmdc24rJhVFU6DMzJvUiMN6kMQ+UhjCDqgoifHYoAYaqJAOjJNTqcpcUoL0quSIn+QiC+GSLDSMGCjpgxHcZkfbGk/4Igf6gcTI5SBrQGlRD26oTgppRKAkkbhhdBOz7I6o8hyDJ5QT7HRQZDw09mFZluj09CYQCB5MS3FeIWZg09BSgtVCLjoIXiuH+ww3d8GYPXhglzk1mVhJosUp/kxQopFSmEL+kpemXpHsIXm7HMDLvzxPpOdPyGugc4/zejwOm73N1Gx5NNPPctTjz2ectfApfcFJY0x8u3vvXzPWyqefexRNtfWeXzjOD54uqMJ127dIsxGzLyj2x8QlWdWVsyqGmMUJnhOnTjJa7zF5f19njh+TAL7xAIQZT7p6XExUvnAxd0DXnrvAo89+CgXLm2lpoI0JbqRNknSkjEK26fTG2CygtL51J0f8InbS/AEF2W8gPOotIBDE4hEiK6JUFLpKW3kSit0kFrsfgbT2tHRGb3ag4rsGM9d7XE6wziHUxGnoWMk5NQ6IxJxtYWoicFRHUyI3rMUYagV5Dmq1ydYabCWck/F3vYOqpyh8p4InkXpsLHGSFcSUcYgapOuKc1NIYmlxJBIByI23UaXweOj59LH5A98sn7Ke45mcSsFKgSik9qQRmZOBB9BJUHkNM9SR9Aqw5ocoia4SCNzpFsCsByTcsZ4NsHaDJtEioU9kc5G4p80WUmTo8mUheiZbVV4JwSB9Uc7rTeMaVWkyLV1jn8ZZB3a36HlizZtUKYpNSjYfKzH7vsl2xdmzPa/STF4mDB9D8KMp598nE8/8XiL0KkgnfEAeW/Io5//mTTlWrFz8332bn3AK+9exPuaP3/lFQBWhkPWVpapJlOee/Zput0ud25vc1CW7IxGTGYTVPT0ex0Avv7mm3z6/vtE3c1KG1KGTL8ahcikqql85PdeeQUfAo88+Cm++cYNapXhSbS1BBJpJPSXaDVjc3OdtELbGlwMNUp5ea5RGF0aUt4lNzsQEZqXT+WjmKZipTw/VLIhLA25XniuXdth0D9GP2q8EkphLGSKtw5AXYsos4faKHyURvIMQ6YtIUgnTWEsnSwTNQiUDKAtpebpowA13ldMpzNmdZ0muzm0KzmxOkzRnQYVBXhM0qEihSMOKKQIzSWWElo4wCpKHr2r2g7dv/T4xIT0xa9jCGKMkvRJ60tqPm2oFc55rA7yABKYoALkJic6eTYxNdIqpel1O8zKGSbLiRFGsxl
alYnCJzMwM21EXr6Z+BOazUE8sVeai2+/OLck3ZzvXK3tSPj6Edd35EgLU+w5EKNKk5rauIuIDIUdnszZvjAj+hnV9CbGDgnA6vISRjfdEoG6rHnvsnjKvNPniQceE+1bpYkPPg5acWd/h7IuufHBRfZvX2M22uPyVSHmX7ryAUZrBv0+Cnjg3DnOn7sPlMLXnpXhkLc/uMo/+af/X86dPMF9p05ileb+jWOsDwd86733RKzMB1597z1+/PM/xnsf3MFpnW5nmrdJ0vOV2B9FkPECK0uEKPq8SViREGu0EmaTUQGsRwWFMQGdSSEf71GI15PWugTtJfVxQkntZrBxmoujW1ypxjzePc0wRqihEy0zRLw7i6IaUCnNVDmqmETcbAevjDyn6CTNiRW1q9J60FijyJXBWi0Sopnh5LBPRP5uWpZMxofU0wM6nUyikCRf0na/NdGWZFML/1IFQEkXclAel0Vm+Q95PuURBZfkYmJwEFx7Q2UIjIymjl6Y+NPpmNzkgrxGUggZZTqz1XgLXtdYJUJTx1bXeO/y+ww3z7G6stx6MEEca2Z1jfOBvcMD8izHGM3u7h1OnTzD+nCZDHAkzVlg+XRG1hXv2tRTGm/ZWlxsEMq/LJRtE09BhaOM22tUy1Qrdjw/ZgcvsXLyl9jZ/1Nef/tdnnjoQZqIoyxn1Knv8dQjn06j5ZP4V5D62cmlNZRRPHzqLEpbxqWMG/fA5YtvMtnb5vaFV4nB88pbb/PSG2/IFal5m914MuWti5d56+Ll9mcNV7P5+kvPf5FCDxnN9kHnosquJFwLKqCiFq2dGImVZzI+ZPvuHc6cPi73QgnxvKxHODcmGtFn1UqYV9pClkWskcKAMZBlAoBMplPW11YhBqrpDK06BOep+x1meonVxx+n7BdMy4qpiqx66M00FJoqz9k2hiozHBrPyIv+z2w8oRpN2BwMWcl72MzRjYFhVRPqUrydS3NRtKWvZKzfzAeiB+8VNQoyy6odciy3dGNo4jkBv7yQAiTUb+JyJTKgCVGWpiJJ6XKj+ezxzY9laR/bKOvJvsC+KWfMjCXTGm0lr7TKk2eK/YOA1jW9rsbVE7ybESjJEj/QR4jeUxhNR0E3Bga+pNCOrNfDT8fEGFk+ca4tUzTh5sF0xqULb7F1+W3G27eOeLes02V47BQPPfkjzMopezfelwssElsjAuijeWT75w0S8hEXfo+tRhYRTfG+4ok/7H0honUP23uY3f2LbO/tsra8igJubW21VLtefyg13vQGKsZEcA/EKuBTi1BhDJv9ATbLuP9zX8IDh1/6BWrvubO7zZ3b1wHFeO8Od99/Czcb46ryyOUsTk6779w5Pv3ok6wVPS5evEYnBKoYhDodJfJBGzwREyHUFeV4H1+VdDu51GINSRojsDTMKIYbDJaWmJUVe7t7HOyN6PcLTp08TpFbrBHPOJpMCAFu3z5gY73D1q3bWBPo9TqE2OXATdCDLnaQMXYztA+8vwzTnlDj9voZk25ByDt420W5yN3tHcowwecKX1huDzdY6nXw0zEr3nPfbMbpyZi+9zgVmRqFyzVrLtDHg/Z4BTU5B2Tc7ig2S8cDMaKSgFYDCIaYQlQWKsAJzZ2Pr9BJiiYji4qTtvN9rOvo8bGN8r03XsVo0dfp97vc98B9LC0NKXKL1iGxVTyHh3exKvLgw+fZH4042L2D1VCWBd57Mp2hnAgwP7p5En14l8dNTr26wgsXL/CtP3yd3spxHnz803JBBMZVxfXrH/DOt75MPf1oWLmeTdm5epHvXL14dBEu/Ffqkqq1HLnBjdV9Hy+54FWPVELa2ntseyi9ixxcnxtBqPcoywnF0nOMb73Ly6+/zU9+4XliCNy4LfMtbZazsXFCBMPaOKgBokI7hsEHha/F+1ezGdqkFjQl8yrPrm9ybuMU1ghKXX3h57l7sM3ocI/333qR/VsfMBvtH7m0rdu3+c50CjHy/Kc+zZlenyzrsX0w5u7
dUgjO2zQ7l353xvKGJp9bFcS6EccM0oiStLsJWY4GMgzoJDMvG1qaIj7DQijye8l8j4n+ElJktecPsRcTtjdrErtM0izS91Cxjyq+8JTeeoMYJ1MqSFZSjrclZp9cYDpxTMYVW63nsLbN746d+3njK+/h4o1DvvjMNT71hae4eOk6oZwh1YxquoOf7BAZo1IYCB8FdTT1nP16wf6+gQQmvmI6mlCNZmxsbjGpjHuq1mwZJJc8Y5YFcr2rlRsmH1yS9Tek2Px3OF5ClUjScBqJEeZPfYwYA+PRq8w691DgmU0eZlS9jGX9JfYPPsjH//X/yLm7HuXV7/9DVFubKFZiFVtLfWQfT0PodtOOqKrL4bnuPSEHL5TYNqzmC1YHc1aLJYv5IW29hKbBReNvdb6g8j7Rfgirgz0Wh1eYbu3w+j/61/jE5QXj0KJxhTQL3GpFszik0JYtH5jFJadmFVun4K5z21w4t8PW1ojZuGRSFUzLAomt+bLBEu2WbE/RPjV6k1T/QClWKibJNM19R7wIhTOBFJcT9j1+T/NePIDXRQ3Jj0xuwECoOuJnyRHUxMSetG0+enNVjr0m9Gz3KkkjpzSJ0VrSg+IRCt/zGFl7NegIrhOIObvgknzZbk7T7mEF7iGB7FOaxtk9F17YGBeMKiEEmFWwM5ny8rteznve9AquXp/z+LNX+MwTz/DUpUssdEK1czej6TahnFHLiFo8UUoEn6wfw1IfrFqcm1Pt3WRjOuHU9hbjYmyBIk3Uoul5yHGMbK0zNGRl8NdLdCbTcdtCGcPgDpCuYYmmWknvSQ+gFH7EbPQqRv4CN5Y/z+XnPs7qp27xsje9n3OveCXeiWnFvBMpMFygSSvmnbeHoGWsrLWcq5dL6sMFzXJF06xQFJ/wrpOioiwLw0mqmcLtskbrBSGuuOOBR/nsU88Tdg8oF3uMtGGDhp2xY2djzLk7znD+9DaTIjLbGFFNCmZjT1lqKvdyeEkoIWddf733tmhdXnxWrtXZBCKUYpFs512/MFN5l6Qc6RB16JwBDUw0c9rH5iCzNNi56aKGthlISltoQjUlYe56i9Dd04sektr4JWhZwEq/zBweEFaL3X+ZKnns+r47vz1XXtCuc1vSRUgGgG08oqgLXQBJAIJ1fME7ipR7LZwwKh1b6gjqCKHg3HSDB+7c4J1vejlXby347BPP89kvX+b5a1+idTNkeo5q4wxazIiMIdWBNq5Im2nDqq1Z7C3YPdhnXI2ZlmM2JjPGozFSWrs865jmiEe0YdeQN0ODNG+C2Ve4PSG9ffBAP1OdGoe0Yyc0To6cekCdx1dnOef/EIv6KXZv/jyf+uUf4dSnX84r3vZtbN59H94cLXJxc1+KlBZCiqpGNcpHC+bEROFhEVbvHeXGDMeGaY7QUChUOELbpmqSJaE2Aq3nnvkNJtMpj7z2Nbxw6RNMJgX3P3AnF07fwc50zMakYjQucGWJKwtwfSTRu4hI1gD05mCq4hexapYcfnHOcJfZnSiAwmVSZLf2rFkjxTTVqqlfSZvG1pnwum6cXBLuXjBddJ3J33s/qTi8Qw0x0K4nHznnm7+fhdI7Z8BwsfZ+JCH03kidcxF6PsdwCWatbTdBv+HSY6Ehr2fpNn9INY/OwB7irBWAfcUstDJdazoq2ApCHeD0bML9d7yC93zNw1y6ucvnvvQcn/vSJV649AIrmVJMz1LOzqIyI7oxIfWWjBJp1FO3Lfv1Hj4eULhbjMcVs+mU2Xhq+dCyNL8TCMGqVLwGMgtHSIvEeokk2/c2DdrbFspu4JKvcvRNS0u4Ixrb4d2IyfghnPsuDurf5Prlx9n/+ee4+2Vv557XfS2zM+es5Cbv5tpHFGMCAITQ0tRGQZI7U5EWjS8KymQ2WU7TE5Y1+weHtKm2Mqb6xPnVLzJf3eTP/kd/jvNnJ5SvOkfhPdPRmHExonQeJFUxOCu+jSIpvaD0KMvczsDoHAliQaoBA10uUiZBHlM6G/G
2uMX3C1ec68RI0qQmQgUEq1v1Pi/2jCiyXcGlhd6zNGTLop8I8+myZWOIoiHL+3EBTWkUETwuKTPb9b13JoRprHIbBtcl5nOKZXC2NWE0E8ClIJApkkR8lTabNTM8vS/Zj8uwNoDEioCm4BeOqvCUhWdcQt0GFnXLxlnPPTv38+43PMS13RVffOoyn/j801x87mmi36Kc3Uk13UKqEdF71E9onNKgNA5WMXKwPOTa8pCSglE5YjyasbGzxebGBpNyTFF66nbOsmlYta2VcmlqgKyO6Lqn+arH7Zuv2UxCKVKFYD4scpoGuRuwrtcWTmHsLjAafSercJn95oM8+dgv8vyTH+HBV30j97zhHUjprbokBTE0RxRz96tu0WtvEWAaIKrSLJfUqxX1asny4IC4WFGm4EHhBJaHvHDlkzzyykd41asexjV7ndZx4iicR3INgsuES+bLGnbRwAEae8YEq9WMaJsrFLIGlQQV6bVhR7voxBjnxM6ZtyCr+bPgiZcC74q1RkkxGutev9YzPHEQLe3M5H7ecthfXE+96TkiiAO/ct2adQO/MQuxpmJzQfDk6ovBaujONbhAdx82BoN57O4zPZNAjhrJ8L1M0gwJlJCulXcAqUGEGD2iHo9n7ITxuGCzgtAqdVQuTCY8dOYB3vWa+7l0/ZDHn3yeLz5ziYvXnuSggen2edz0NL7cQNyYVkA8qbu40AaDVdbzffaXcy47YVyVbG1OOHVmytnTW4wnE5bLhus391i0IaXYojGr38Zx+3nKBB4QFGmUDlHXwebisKDCAhxZt3TV3Z6xuxNffTuL9gvMm0/w2Kd+lv0bF7n/Td/I6NSOmabJPA25En6gQV2OFCjJJNNkQrTmYy4XCYRgA+Cx8pzVYo+2XfHwI49As6CU7tZBhKCOmHZ71Yz5zMWvimpLTICGEPsAldOIhkTRmJBH3ns0USJaKi8Jd/JDvBPE01UuCAZoL3wOcNGdr8vPJjM+laPk2+6WrvnjeRGva0+SX9hBGiERhaXPRjOBh5uABaGKpGGzAGln1vYSBl2oo7tuuo/uvV7DW9JaUrH4EZa7NJuWPshpnRR3SCkKxSXQQhZsi8STNihN0W5bd4ldQByu8MYJi7JqAptl4PR0xkP3Psi76oe4fGPBE09e4YtfvsyzVy4xD2Pc5l0U0y0YW2uDIAUhrbmgqXBLI4vFPjfnNS9cC2xtbHBm5xz33ns3r3z5XVB4bu4fcuXaHsv65K5mR4+XYL72pk6omzVTbTg3svb5VJ2RYR5pxysYs1G8gUn5Kg6aj/Pcs7/FzetP8rLXvI+dl7+KQGYrj4S2NXA4WQPlfn+k2kJzusuiwG9uoNMp9XxBW69SQl6gabi094x9RzJznHarR2OkdckUytRNYjAz702DKa3dUxdsSkn4aGiZrCatAiaD2TOEy/4uCmcMCKXl6VwCZGRtEzVYP5O27a4zHPvcrStrHgto9oUCXRrfrft1Cuv0KiJIMfg98+0eKy2ywEw2Mzv/cvB2cqo7Lbz21pEVstYtTdYFMgtYsoHzIsp3bymlSEptDM5JVhBJTLvvJMhgKgDIHrL3MB0J06ogqGMVG2a1cmZjxqvveZjV217Jsxf3+PxTl/nUE1d55vJTtNUEJttWzTKaEqQiaoFKCQKxiAaICnB4sKSZX+ba5euMJ2NOn9vh7nvP86bXXKA+uX3MseO2hXL/8EYai0i7v6TtQL05kjbYeaHLO52Urck5Lc+YjeJteN1mPv8wn//oT3LPjXdz5pE34CZjYjDSq877yeZa2jmjKhIsMOIg4RyF2eYGXjZx3tIB7XzJqt5Ltyuo87RG35bOlZjZB76TiAHPY0x9LEkVMEIHbxNN7GhHOnE5LxSlo/D2rywtIlwW2TeMhnBKhyX8Q0dlkV2FfL9DQXCi3RhYzWwPSMg9NjWmZr2D75q/SjdHOWoqaubZ8FrDzUBDb2Aj62avHBG9/B17j3U7uvsASSu6/oVBwKI
T/6QlpWsPIQNmfZuzrtTKSSfwlkZJ84ymGkuLTEoSbEm9+bwTpkXBeFQQAoQ2UruazftLHn7gQd77NY/w1MUbFiR64jmuXnyOuRvD5BR+vIMbbaLFCJcY0C2HWxDwaID6YMHe/JCnn32B8XTMXRfOw7e+4fiYHDluWyjr3Jw0QdlkNk0TXmMMjW6wyWUzMA1eFxpO05X9TrUi06l/hIrzHMaP8fQTv8Tl5z7KQ49+O1v3vozoE72DZqxmFysAlDbfT0wgAe8oExt1iCZc1XjULc7FYm50/SHZr9EinSGZ4M7nBWt+c1QzJwMBTWZgZzql5yic6xa5c84oSMqSQgRfeHyRo8khAcVjQsr0i3FgbxzTWGsR6fyN4Xd1+O3jh2CgBUvV2Fx03S0SN6pdBzgiaC9+3tsLWqzfb0J0daZsMoxz4bRLJnrezDUXGGMWVyIGzv/lI4Te8FURRINtyEO1K31cOrteUcCK1EsKHN5DMbb8eQiR0WzJ9kPbPHL/Gb7pnY9y8doen/3is3z6i8/z/PNPs4glxdYFRlvnoZxCUREKe57oC0LaylHlYDHn5lNPAW/4qmN1+ymRZpVHFzxc33wQEcei/SQb1aMIbn2HzZ/lhGkWeviS2oB72WJD3oHXCYvVZ3n84/+MO6++lTsefhOzU6dZtjWtpvYCyc9DoW0CsWkscprychkoLZimXM3n1M0SEH7p3/4ib3jD67n/njO0i0Pz9VRp20AIDUhKWbgcZSzwhevSPSlTg3gLDhWqjMqSqhrhvOvMWJcCEEKAkAM9afFoDwLIA9SblkdHXgbCkoJB0Gm09cjpMCUxeFVAJEdMk5bvPpAN0zxdsXtGMnpTNU/Y2sbQA9KP3PHAVD72egqYaAbwDzSlgRt6SOVguZiAOTqA/BBxC9kSSPMviX4l+dlmKA83gnzXfYMpyY6pRIoSRmOPuII2wHK5YloopydTXnHna/j6Nz7MC1dv8fknLvKFp6/ywguXmceCYmOH4tQ5mG4DM3K3rSieoEL7u52n9CFjHgGiNX3BDA7bvXoWMeDIYmFtANFcNUCP6IkRjcKIN1LyKhbhgzzz9K9z6YWPcde9b+GOh99MMZ3QZLqM9B00Mh6PKZwzh14En03nEKlXS5746L/lcHGTLf822vgM/83f+W95+OGX84M/8H2MvELm+xRne1vSYiLGoEZMJqlLRFyF9RssxGoks4DF5As656y3SrIULPBDMtvyuPUBHUsj6LGRWmd/z4t6/e81c3JNonNQR7vIqd3bEcESyK0Gh79nH89lkGzekbopTJ85oqV7c3m4GUsXmVZNudhE+twFh7qNYbDRJBNWIfHEZkO6F8jj2dDjlkW2TiDnbNO3VRBCgsolKGEqeo8tlhvFMRrBaFSiAqFtmVaRM5szXnn/I8yXD3P1+pwnnrvCpx5/mseef5pDP2V0+h5Gm2dhNCOWU2ufd5vWxW2z2d39/T+SBipAvc8j25GP/tO/jegm5yZ/HHSwQLLtmi+S57UzbrOPmKBfWYOkezY854pl/ALL+Akic6bTC9z/inczu+9lxoimibRQDAHkRTp2a20Dq+WSxeEh+5ee4tknf4lSH2AiX0OhEMtn2Gt/h2oMf/Wv/GU2JiVtu0K1QclkXMFMUVdQeGsFUBSuW6Qu1elVReJ1Tc/mfKLJT8B6K79K2M9kqR0d8ExWfWxysn/u+p6dAgOWgDzMtuIz71F+o9OF4pJQyDHB7t8fXn8oGIPfj/ib+b1jdz70VtJNdwEp+jyqkDGx6VnEfPdc+9nt3FnEk5/PUCjTc9tL6fkswmfjHSUF8O07PsEbO+sjj4VkILkF+wRQabuxVcp0UkeM9tkYYLWK1K1QB8+igacu3eRjn3+Szz95kSu7DbWbUe1coNg6j5YTnv/R7z82z0eP289TPv8JGwhtcWGXnZ17OulyqZ6o49zUdQBSb/24buByyZXRcaZFlfCRpgUrKn0L0/gqFs0XWCw+wRc+9S8
4f+V13PHIWym3dnBF0QU5cpFz0zSsFivauiYuFty4/Ck0RjZmX0OhW8TVCte8jLPlvazC5/jrf/Nv883v/ya+9f3vZrXaRzO+UxSRNi32qjOF2+S/GDQu4p1B+3ICPZu50D9PnvQ1F2coFKzn7F5snzSzMAenGPi36TrOaBtJQ7q2oMnImYyn7c+5Zi6uvZ4N7sFErn9o7fVhkGf4UTN/83UzBMMl7qT8DbvX/O4QSJIDCXaffWArKVIbC3WdtZFZFKM4+u7A+QqxE+Csn1Vd/yjYBmLJBU8uwILW7l0NBuLVUxRC5UsUR9NGVk3LznibV937Jg6Wkeeu7PHJx57iU489xeUrX2JFCfwuCuUb71DOnj3L5saEs6cnnNoes//UO/jIBz9M0JtUxTlAiLFN9n1KHaRH7aq7xZq0WjW4Ee72W7r9sOhjMg1lhnNvYBxexkH9QS5f/DhXLn+S7e17raEnUI03OHf3q1G1wudmccCNy19gfnidw8MrzMqvxbtTlFIgUtA2NW1b4+NrmVaOf/kv/xUXL17k+/7wd1FVnlV90N1PLzu9ryaSKjwk4z4TuDzvvOmDnRIQM2ORAZnhIJYzrPMcmqTrZmvScIMGOd3nsYWU0zDr37fF3AvJugmVc6tHzeDh3y/2+9HXup/QBWi6NEi3WZn2U8niqXQbheoAdpn8yKFw5wEduEndriGpMiaPcCTlQkFIYJAjFoJ2t5M0vi3Q3vrI26UOy+YkYbajbQTYBl5Uwqj0tG2gbmompbI1mfHwfW/kW9/5Bp6+eIMvPPncsbE76bhtofyO97zR2nE7Eigbzp47i9IQ48Ko4p0nBKGr2teEgkET+kU6vyLE0C/6PPD5jzQohuQxH88xY7t6H8v2SxyGj3Pr5nPY7mXH809/5Ng9OxmxOXonO9vvQJuIthHnHaXYvTa14MPrOF/cx8c++gv8zkf/S972ljfxp/7EH2S+uIFRDSYGAFG8dymc3vcI8d53k567O9mzpMdIMDtrjZFNNpLRkFBSmcd1cGhitOuWZ16IYiftZWNdSI4HWOxi/TBn07fXTkcFbShkXVnYEcHLv6tqYoBYP0cuSevlR7sNwJ6Zrqgh71wimvLffeF1PqsFyk17+oFgZVqPbDN38tSto2wa91cannnNRklr9ShGN1+f7H9kPFHq2NYbK0pVClXpmERoWmjahs3ScW7jLI++/By3c9y2UF44O6NNnY4t0CsUfkDXryDOaOdl8NrQxLFcXDZv89hpx3KdK05yACfn7zSVAZVFycg9TKEvo403URYARFmx0s91Hqt3O4zLRyiLs/jyFE1tVR0uQfJEBFdUeITQ1Ph4hvPybRzKZ/nt3/k0ly5f5T//Cz9ECCs8ESfRyICTUIpYOZb5gtlETUMxmOOc4He5dV9eGAlq17Vs0+PRSkn8sSH0gIX8fMOgSOdfySAoc+zo56Az+bqXjwtbPtai6Ue14ZH31vOXSYF10Z3hwAQsIRM6Iejel5xjDMc0u4jFDRgIVzdmR2k5BvcXRRG8aTbEgOKu/+5a8unofobQ97VZM3htzu0KA/OeDizjHIwrz6gQ2mgBovFJU3PCcfvRV3HgCySV/qi4Ljwfoy0eL5aj86536vMi0tDSYnC1HHQAQ680bUNoU/FzWjDajz8ivqOZt52qwHEWhM6Hm8jDdL6VeAyOZX0eyrJiMplQiAdV6tWSplnZJlIU0LYUrWNH386Gfy1Xn/8Qf/4v/lV+33d8O9/yvnej8QBxEXGK99LB+6RbSLmBWy+ZGaCdynj7FInLQtn7MQzMV7pnlg743zRNXwmylsPshdI5l6ByvRaUIwLR+boD9oAsPEcF6ySBHL5+9F7XDh1qxWS2J61pgb3UkmGYIui0UkJVab7ffp3BMFc7eMa1kwz/n+Yo7ULdxpZt5/UzsaZD1p+w/9mtwf5zmoDykNkY+iJyUiDSjZTx+hbwosdtR1//1c/9lBFehQYl0KoyX3n+4//gP2D
iXslm8Y3gMpeM68zUPOkxtklThjRpg/cHQ5C1i92cvW8RTNPKOaATM8wvCWa39TuXPusQX1COR4zGU+qmpg2R6XTCZDyiqVccHOzT1EukbfGxwWnqxaE115sPsIzPMRqV/I2/9p9z57kJQkhCmaYwVc4X4g3Pag/QF2gnZjvLe5K0LGRm8aFUDpdWrpRQNSKwtg0dZQoDjcvauBk3raQ6x7XFqtncNROxe08MONH7nDoQ0LSI5bggnvT3EIWU5ynPp8UBXXdOTSDpoTlr5xHoUmt22NhZG0T77CA1p0eFsh8fs67yOBRpb8rRWemslEH74cEWR4oEZ5MiDuanMO5eyQw8VgPbA/noNLemsYdcTwpve+f38tWO2wcPxMZQNUJKuAqjcWnv6T5grQRWyRRbM6SSs2+5IJO07JSv8fbkQaPzBMhRy4Gllb7Xnbw3aaTXVlEhtoEwX9EGg9Y1TWAxv8mocsw2Ntg5c55mtWK+t0u92E9mZsTjOTf6Zppwmf34cX74b/w3/MX/25/jTa990JgNkho3FvOAk3bguaWocmJS7yB5aXENN931o3Os6REr9mzOC+oEjS4B0/OCT6MhJAFjLQiUhcAGvK/n6wXI8stZGO07R7TmSXt2EmKlF4r14JB05k5vYafXJKGKsnWgcMz8PRLgKorEYK5y5H26zcQ2tvVzqA6QSmkBdakQcuRXGdY6Dt5O1Sx5TjCAu6YZioM8cJqnTKnSw8K7VdyPxW0cLwFm1wx8GFsyRSFMZzPahTWEES/oICckRyZXBpnmbGatoT/yIOngHEkwh3R+PXyq15b58TNeU53VWoYIy2WNr8b42SmcU5rFIbu7cw4OalzhmUxmRkfVrlLr+JZKPJW7k/NygaV/jL/1//x7XLjjPP/+n/xjvOnRVxObGicNUFPQYPUL6WYSgRQ5n5jC+XFgxsJwUQ0OgbU5dTnvKcQEsu6KAQYaUdNYZW2qg0XQkZElzdS3i4O+3GddQ/W30AtrN/6D6OhabloMSWM1qcmViP1ZDAZnPp4Qu4Y8Q8jg0cS/c67TxENwyhC/S3rufp9LW+TAIlg36webUMz6bqCxu42zD4ppJibOvEdpnrMgdlqaHLQbmNzHRvUrH7dfJUKgFxP7LbQ1j7zyVXz8ox8lxj2c306Caeq0E+FudZCstnXDXY9OfK/8OnOgX6g9piPNewe0zlQXIQoQGE+mlKMpTassVytCs6Da2aHc2CQcLqjnh6yWc9pmxdbmjNgULOcHqCp1aA3+p5EyPsjZSrh+9RP8zb/z33PhwgX+7A/+AG9/06tolzcQrcldnlSyyWe+kT1X1grJ/BLbhLuI3tpgSKK6l8Hz9m8fM02PzlO3gOlNfLDrdoLaQ9k6k5peaIcQvhDj2nv9ehgaz2kyUFQsKm33bbZDDt6pKNEpPo/VEc02fL4ejdP7n8civDn6LZLMzWT2y4sJQl5nQN5Q4gnmeOddSGryNHxPB0X+g40qDkZFsil+bJRu67htoWyTjd5LGqhENjY20wuZMKRPe6CsBca6pdbbtelzJ934QPUPD9WUr0qDmS2zzqzrNW+oG6pC2d7YZnMG+4fXWN24gk6mlNWIcmtCOGxpm5rdvT1bLOKIzhOSUCqCBkfFyzjv7mHpnuPalU/w1/7W3+Geu+/mB3/gj/B1b3wI2sNOd2cYV9D1AehMI1JUsHe66Xci7Vno8jY40HIZxndcq/VaJyZO3N58zR+xHd/8KsvL5RZ4GdUzFGob3PWUyHAu7ebyXMTu/g38Yb5/BpJnE86CYutBnpM2mmyV9bWtx4+14JeD2Pn1/V2mbXCt1LAzV5WBWhxYAHmTT8n13Cu19xUVYuocraRu0rZ5dCU3J4zZ7R63Hej5P//5P0zfoFPlUR2Hc8df/E9/iDEPs1W+G1VnRGYxc5EeYU5bHzU4umN2F+m+tXYfuYK/W7QKuZbQWOIcksk4xCN+jC8mVNNN3Khg2TTUywUaW8rCpwlrWS0
PQSNlYVQdbdvAaoUj4tHkP2ItAqhZ6NMcxk9Rx1u84+1v5T/5s3+C89sjNKxANNGbxG4DsdsRijIHLCwPmVNEZqVnjKjr/MbhMTQ7T8oXDn/PgTR7zdaT/W1aTAycgiNz8vpOiGNnVuYa0xdZIj0zdGeaenGIJtC5Sg8iSD6GOAWneJwFsAasd8cjqes7+no4rP+Oc4J2mN5hIMwOx9E8cFKFWSiHl0lrtDNq3OCcOti8ohJyG8fBmIGkU+qJuua93/EnTx7L4d3drlD+i5/9scGdQ4yGfA8647HPfp7/4e/+t8z8qxjLoxBniLq1m7UcYYbfHR2g4aFH5uL4Dt259GsaJkfqfKd9je/AFrqrpkw2tgx6pdCszH90mH8bCcRQoxqsPULpiW1NrGtr1Z4ikUgvnE4aVvo0u+ET4A/5jm/9Zv7DP/2HcWGOaot2JFhJYCTzuHpbz8O+nfn5dNADck3w0mgcCYoMP7f+en9W87MTLWL2gkSIYtxFQ4EwMzFrysiwJdyx44hQ4kzzZqFEctAvfU4BZ3WpokIIhjM+2STvr9ulhzLj32AMXEpPxUwh0p0mb+99N9ETHyEeT1PkqKu6wfhDMltTgyTFWlocHffsImTBPnK89zt/F4Xy537un6QbsAkL0aU+8xXKjC899jn+7n/zt2jbhpE7j2OTDf8oXneAKm0oR0VsuBzzw2eh1OOfz7uYyhCXfMTQle6f+WaJ8tEVkGB2VTnBFSWr5YoYGnAOXzrEQbtaom1DOSopqoK2adCmMV7aTFwsGdFjej6EfebyGebxy2xtT/mB7//DfPv73oHqHDLDuKZ7ITMZ5DxnBtVbuiLGmHvvmGbsTMn+GU+KVB79e11TKrENaZnncyT/96tMv3T3edwcEzcw1dJCtr4xLuWxDSeKihkDqqgLZF87DjobH68X7TV0Bp3nFul5bBhcPaYRPToOIon86ysI5Vr2KFsaBLo2dl2AKYH4lM78HroV9kIv5Nql6vrjvd/5Ayfex9o93bZQ/sz/DtBpPqNnt6ruqGMCwsHeHj/x4z/O5z/3OW7duglA6TbwMmWjeB0u3E2MPt8xPefbcbMlV4+8iBWb5bb/5trD5yBJr6H6QS0QV1CNphRlSd20tLHtCoAzmgiN+NJACwRDMsW8sxN7Qep0T4u6PW6F32AVr3H2zGl+6D/8Qd79ta+DsEzgiNABD0KwFFNU6VkJhCSUmcUv0Dbtiaa8ae1svmXzLPuEZs7n2EP2gfpxs0VMMlfzzy7i3eXz3FrU8ujwZihdvxHk2ZTu3kAgijXrVkW7HhwpBiCD62UTFwGNa/cDlqvMFz9qyufnOXrkFNvaM6Qx6bR4/oQkFghtzaxWcoAgfS8JZf7wiQG3bA4nxbG2SOEbv/NPnfCdI2e4XaH86Z/8UZJV2EXSkmubNCagLvWHKKmXNb/6a7/JRz7yWzzz9NPEqBRui5m8nlLvBwpU8yAPdshOKPtKkzVNeFLt02A8+l+Gpm1ekPlaPgnmmKIaE4HlamnNU71199IkHKKRUhyqwdorhBayyYXVW4oq1hYgIrQEd4kD/RQNN9jZ3uaPfd/38p3vfycalgnoEFBtiRpps1rM/EAD7Kdpys5AAHLdbIqQYj5Ph/JR7XhxYzKful18MGbm0Q60QOR4ekJAcmQzz9IgMiqSmq4OWBeOAQg0RVkHLfWGPTgUOlKvYdQXetPefOx0S0eCKL12Oh6ZHa6FnEc8auabb9j78dmPtia/mgqlM6n04HwiiZXwZO2b2zqYbMYjQvmDJ35n7dZvVyj/2Y//SP8gJFRNcmijkiJtklIUOUztKcoNFMfnPvsZ/vGP/SOuXbvGxN3PRN6AxLOortv0WXyGO6EcDeG+iCly/NBuAfTnJpkYhhIvRhPK8ZQ2BDPxNLdK7xE1XpXC26S1TZNMwRTZE8iID8vDBrxEnFdqfY6b4UMoNe9659fyw3/pPyS2C4QA0iSAtXHGtm1DCHVayENjvA8
0OCdUVdXn7tRA/0MTKsZcdM7aTp6FMNsLHeW+KoRe8xw1jWPaA+1ZByTSQGZjz4LZDe+awNh453ygDvKipMU9rBftwgSdC3N703y0sW1vCucEWj5fHqPE+ROzj9q/P1x3zgh6u7nIFkr2b08KiWQf9vdcKH/ix/9+v8PAgDOnB5STFpTdh4ArcTJGXEFQA+l+9rOf4+//L3+f1bJm07+VikdAy7VBHUYQ+ztdf/AXV5fDY11T9idMPmcq4fDlhHI0aEYaowUtnDcBaU1YimQ+xaho6o0pGjofEzT9TYpaBlRrai6zH3+bt3zNI/yXf/nP4uLCAkGxh5vFGAihXavKyGZbbh0wzMsxGPvhQhqmCdwgMJK5b+zxBws2Kelhgr67tvS+e7Yx1gSIbMXlzmnaXTuboya+JpQk/tMuf0uvzaW7rwR8O2FZnrRUu0qUE4Ry/fOafNEwiJQmYVY6H1zpLRV7zHVNns+NH4AT1m7y/0Kh/Mkf/wedoKjRSHWZSeNENT/MOiiTrHVP4U0owfI5koh8f/Kf/zS//Mu/goaCSu5gyushboIWZkrl/Vlyh3p6AVOyx5SfYm1Q6D47rI0bmsEDVIYI4kqq8QxxJepI5WK2EgtfWLVG2xBipKxGyYfDSMSa2vxKJ5Cq5s3PtPGwXGQg6C2u66/wte94NT/8l36QZrVPjC0DTNY60iY/xkkLonszLewkSDFYqZtB04qupUAnlDlyGQfaIy3o7Jcdi+6macipAkG6FgiD+gjzxaOubR5djWkqNbMEf0wlff0GYJuA77oud5DAI7xAGa20Nj7d37L2KqwL8XDTynlY+zcwZTvhjh1N53AO1n7/PRTK2wYPiNpu2Cty4z213TIB0GO/09hUgIuBAjNTcLZYnXP8iT/yB3nvN7yP/+Xv/888/fRTLMMzFhCSN+D1HkQLNAWCumfq7c/uNZX1RZxiBQODtfdZlXVUSp4ZjS31ak5RTfBuZOBnTBDbUCMilOUIFyOhbgkiuLLAFR4npYH0u96aMTHFJatBc0Bhgy15Hb/+m7/JL3zgjbzn7a8gBgsmDCFlJw/+4JbTptSlBgYfy9xCURVtm9S3MnYL13Jt2lsMnf+2fhnnrBBd0wK2bmKgqd9GXnRrd5u4ltb8TsS6i0lh5GRk66LPg4bYoiq2LrSAIQBgYJaL5HHqcaonDlD3XOsBqGNChUW7TXv246FKaowka5v98By/18dL8Cl/tDNRbLcLg9xQ2nlSK3KitUn3zujjXU7qJw2DCPgSkSmjyYy6XvB//rOf5Nd+/de5fuM6pdtmg0eRcB50k54jNIubdn8eTbIMP9Vryv57eVwH8m3viwV/XFFRliMMxWH0lsYJK6k8KmkktVTLaFQiqFXQtA0xNIhGwwJnAQKEiLiaG/HfsHVa+LG//3eg3h2Y1Ol2TghYrFnxR5TY8Pmbpum+fzTF4FOK4ujiskBPgi4OMLVGhB1oQtvRYapYPtD51F/S9f6WlacNoW8+bdb2L5JN6OSPx9hVDnUoI/E41+uJPsDTC6S1lsiO55GJ1zzna1v52rMe1YzZXTmW0ui09GDsu9SMpIjb+nv5Hv4v05Sttmlp5wHpPYaOjNl5chvvXBEuZoHiiyxYfSGr8zUSlWnl+ME/+of5U9/3fXzs05/hR370H/Lsc7+BcyUT9yBl+0okbiBS2IIZLMTjQy/d/zOKpDNyzPZe+16nUTWisSE01rTU+4Iik2IVjrpukGhh+bIqCDFYLWhdd7QmviiTNgpoaPv7EcuTChVjdz/Xr3+C5y/e4v47RxB6AuaTduNuvAePtxaUoF8Q4gywkU3/mAVNlSgRCTlgNMzbZfRPukb6fNsG2iZ00WYjXIgUGqmKgspjQRvnKF3B6vCAw71dyqLg1PlzuNGIuoQgiqZOZTmK089QzhHqYBNJ7oUOhSJFSAczNpxnGFpMyew9Ej/ohq/TvBkOKt35TZYH2rHDXKeEUyd31q6iW0vSJUrsXGv
b5fBebu+4/dKtEAcYwmSSORmAOtLt58GErlRJMvW/5pA+WGPlBu9jaq4TQB1vefXLecvf+6/56Gc+zy/+yq/wG7/xQRbuCablg5ThTpzeCVoOpvaoKjn654uZHLr2IzmqQGq519bE1uOLEleNKEpvPS6bFSFRf+QATNu0pvwLjy8KnCtoa0FDTBtTugcHk/gw+3yC/+Mnfob//D/9w4Ox6wXyuF95/N6HwQr7jF0nDq2ZAWLIvmXlSrnfZT650C/WSG6mFDs4nEShAuSg5tZzF9EoBDyL1QpXFEyKEbtXrlDPDymrkubUFnp2h52HH0Ano9RPJZmFdrEkGDkQFdf8xpgSrH2gKpveeaykm+hu6bsjC797aB3+ODa2LgHO44Bgq5uv40srab++5jePrAxGORJO0ha3fdx+J+fUL1FzA9NU4GsFxT14zmCxfUbQOau8LkufmtBkU0U77huRmBLRZj20zYJHX3kvj77mT/Jn/tQf4x//k5/gAx/4dQ7axxn5M1TcRdE+DDoGcgeoPCT9IUf+/srPN9hp049A6r0ZWsS7xL7gkp8VQRzlaIQvCuq6JrQRvKMsR3hXsVqtLImbTC1xQiFTfJhy8eJlCl8R4iIFQfI9rOuDE02kwe/DYEn2+bvgSfdoPY/OCcwZgyGw7znnE0Y3bb4KZRtZ3roOz94k1GZyurRol1EZhZZKA7KKrOZXuXnzOqOz28zuvpAoWFwS9N4nXPcV7WF7zTmg1RwIZF99ccTWOSFAdjuHgJFoJwskd+C2wNYRlj8V4591Rur2FUby3+le8vES6ECyMGUb36rpfUJ8KGLdm9Tyl33+ShGnuMRb0vGgDgczpVfobH5DzcTQMBvBD/0H38ef+v4/yMc++QV+9Mf+CZcvfxrvHqd0Z5jE1+DDBZyUyfKSgWp5sYEbapiBLAwnNkWuFEHbGiM6t3C9T5QnQSG0gaIaUY7GKTCkhGXDqBwx3ZgQ6xXNakEuP/NuRinbfOGxL7KoI6V02JsBeF3W7yXftVp/SOiDQpnH6GhToKFPmdMkfX/P/ggh0DQNbdvivWc0StFlhyFoxLRcKZ5VE6haKFqb8egwtqZocL3cXLeKQtUqOl/igTarnJifNYG21VybvlFR9iuH8zQwJ4dBKgY/j8rp8BjszMcEtzuxNYlCelNX0ga8fjjIUeL0rF3ke5jKonexToyaf5XjtoWyHNDl51bg9rd2kbaYQjJRehPV3Dpz6vNOqF0VfB94MW0b0qTlBjjmi8SwYGPieffXPsrXv/P/xQtXbvC//oN/xIc//BFWepFReYEt92ZccxqN5cCdOElXHl/sGbWy/uJRnYsJqRiwPlOUiEBT1xRVyXgyBYTlckXdBkajMWU5Ag20oTHfbkD3qDGm/hm2kQ19lKFVvXaffQVu2viyphSGFTlHF0EPODg57eIKj/OFmesutU5Pi1KCRUwTh1snfKZZBvMoZp46FQqEw91dNtsWzUXPqoN8JYnAIf+X78noRY8eneXa76CD+2dtrtef/SsJQ/Jr+yt0H5fMRbx2LgOcuIGmXLNUBnb2i7tNX/24faEsjrQDH9ysaUFjjg5D5oGsVoHU7ggDZ4dknmh3LpeDEk6tWY/EbqK8pPSLNsT2gAtnpvzw//3P07bw87/wq/zoj/1vXD381zgpmVQPUcWH0LgFjDpX8fgQnTBoJ45jMj3TTpqtBdRItCYbGzhXsFyuaJqWyXTK9nhMs2qo69oGuKvHA/Bs+Feyai/y2x9/jLe/6b4+45rWlRXrS7cSj5qh3cdliEbJf/evQ37P7jkDD3o4nLPA1aiy351L/Ubs+nFAeWGBsECrAdcR8CdgffpM5k21wJZQNw0xKMHF1IQ3L/S0NgaEWOs5yKE2HAZsTnJO1oX0uEZ6MRWapkSzVuvPl628o1Ue3XruzKt1i6RzEY7e6e+Z+ep7hEhnxqYLZkY56+fougStQe2y4KU+Fi5PinaBkt5oIJOlm/2uiew4mY3
m/5tGbWpr2PNN73sH7/vGd/Fvf+lX+amf/lkuXXqMg/g5SmdVKtJeQOJ2Nwk9z8uRB9T1gcxHXkAidORH+X41tNTLBePpJtV4xGpVs1jMmUzGOIkEDSzrGuNIMe1qebtNAL789HO89Y3305vyw/s5Yral4yjiZxhNhFxFod3PoTWimrsw5yNrpIRgSdHQwZWtmZJGCIF2ueqYAHrxyblr7cjIDdkoNK0xykdVtMueRDK5l2bLKWYAQS8U3aQMx2PtzywUdIJw0nilWeRFjyxUmjffHva3Bo5IPjvOQYgJmbR+eOcghjWarX+X47aFshpV64M1fHCXbj9xXIjQtSWwMjjHUXxk3o1JEIQ0j+kNvzZQSuobjy2CRH/d3QqqvP8b3sa3vf9dXLl+wAd+9Tf5zQ9+iC996YM4Kbhj69vQ1XlWq2Zglh4xceRoCW3/3nDDIAUANP1X64pWoSxHlKVnuVgxP9ijEAdB0aYBDbYZkbQIBYJj/2CO4hFaW/h51x4+W7db9L7L0HfMY3o0L5mPIWxuKJykYIqdLwMRkmmdtIEmTesjLBdLdvf38Az4pFJgJN2dvRDs+64FaRQfIbiEM0VxODqimNSTwwHH1FK+jzX7PY+Hdpask5MshGOjMJjANKP5XIPL5oxCtgDW5S7tViEkSyb238lDhrWuH35VB9jr2z1uP/pKv2jWHe40ycOHywuQVF/n3FrtXWe+GOYH1Wh9/MQSr7nkJZ8jdosGM3+Tr9m5GOm9OszZmXq+69vewXf/vm/gC196gf/5f/kHPPXkv+L+nX+Puh4diZ8MvYmjRsf6+7mltyD9NSXazhgaGjdHy8pohmOkDsHyxsmH0mjcMapC5c/g2xk//wu/wp/+ge+CZi9pAk1rrtcKeQw6geQ41vVFYXj0C3U9Wpt5dzLRc/+slvdO50rT6tMctjH0PuTAt+1Hq3/NqULbIjHgnQWMovSCL51TKQN2Prcuh7IecxgesTPPrX4TXX/29XHQI6/3/uNQGx5tdNyTYKX51kFQLnfvTgKuOtxcOgxVMtl5Scftw+wGtIQM/BjAKB60361t18jCmHbdwaKRxIXjME4KTQMxeMZuBwZjXLAekm1iL1hvIHT0PkMIxOWSO+/Y4c/+uT/DX/5Lf4Xd5RMIr1r/8DGlefLoHYWzrW+8EQ0NEWhDYw1p1UyyjPu0zTISJbdet++H0JofHbOgQGah6yhvpB8D47pZD9R0ApfTC12wYT14lV2KkNpA9A/fo26cG4x6mgwRSzoV4q2xqoLXlExfu8bxsYurGq0b3LikTfOGyjrxlPYuhT3LEDh/1Jxdn5Phsw2PkzapkyB3ptF63zAPXQd7HKzB3n+35+gCZonRTgZr/MVsrts9blsoV/Vi7QZ7/KANqhtmU1OsO/Od9uFuSXZ7Etr8B+Y/ajbXkgmbJ0clYszqFuiIA67SY2Yx/XfQhtlsjIhQN4dUGlH1x5XiS7AuMvbUBqIbENMCMZr0yJBoUPO3sMLdFBzRCo1z2lbRJoBadNZJgXiHS0DyzlKUrLHlpPXfjbsi5FRO1kBDrRpCrkYJaRH2BM4GYndd2iUHtFDBR4eP4KPio2S2xTWB0fVbQZtIu6wpt6eDYBXZfBho1jznpvm6Cs5knRwF6g8Djl/JZIe+YuXF0VL9d9ZqO9fuj27dRhVisF0zs2mQ16EYsH5w1vxULzJhJx+33wpvsBv1fky623zfWeAGJkLvOLvB60ou0ckM0rbTJA+jwx/2C2ot9Nydo3+92wklgre8mWjk859/nBgjhZyhq90c+ES9OlJe4tgNvj/YUdfe12RqJxMnCYoTx8w9wq3wQT75qcd47SvO9IwHHpyUHfVj3qgGm/v6YqEXjIyEsf8nz22wQPM4Gdu4G+A+zaRsmgbvHVVVUlUlOeeW9LOV3wEn0Nqka+Y5SIITIouDQ6rzp3oh7kzl4eZy3BR+sWMtAzDYkG8Twn38fKQxleNCeHQ9dFZM52nEhA1
JKzKDYQabzFF74naO2xbKowGF9cFZfz1/3v62FuXdwpXeVMsCObTiuwJk+wCmRfsmN5gSZoj4WDs60xkK5/jlX/5VhIJCLxDWKgzSwjjiu73Y0WlIWf9kesnueLBJrSNn0nOkxR0HPlzTtv1nktmrEsx7TW64PWYWTP+iOFkYaMUcJBsIpZ2rX9De+9RWPhBCTCZuquQQpSiKZMgkCF7QlD3uEVvZj8k44267VKCJxFVrqCYPODFC6iPjfXRzP6k28uj6ss3Fr33mJMFc0+Sqxz830PY5FtJtcoMKm+xi5U5hZhnm9hJZYXRW/9oaeanG7EtIiawPwJr/KMeFNjPLDcHPPXonD0ou8s27tWL5TN+dywbAdXWBIgL+SMIWTpgQIbTKxRcuAYJGv/72S9q8+tzTYCkN/N+kAtbMb9blP/+tNrFed3A4Hvvik7zl0XsIjQVRrNQrAci7GEEfOQVZ0w7d89sLx8aj94WGD9wv7KI4ToehqqxWNTFGfO6kFiOhaZEmd9DOvtTg0SBRgNpYeFXa+cJIx3y+Zr9h5Ygpa3eWC6u1+3z3zosGcl78taPHmlU1uKZtrIOxUtAw+GK34fRWWl7jw/G2P/4v8im9rDu93e6WhTMhze1lAedwrkQY+orDf5B7i2STNK1Z0GhdiZGEnBFKVyBeUi+N0CVqjzr5+VAR6rrtLdXMhTMwS7KZYYsp7/y9HwiGixyNJ9SrFU0Tkmb0vZx1KsMNJizr/fUtU6CjqizcDhocv/YbH+aPf+83ok2d+v7YogyYL1cAPgmNdxY0imLMDyFfO/TPEWMfjMjBh6MbmHm8/XwV3SoY5jSzBjU+3aIO1MsaaSNRfArgaWeydtaOGL7WqdFNhlUNbYCq6MbcpiFbS0fM2H4nsnNqvieHqdt8fy8+9y/ma+b31uaD1BctUY/GkOhUutvoxzYvkM6qS8/bX9N8Tonr5WMn3+WLH7efpyzLtb+7nSHxmDLQlFmVtyEmQHa+PzOL8hGxhLGpfQMu+dzwJwmjOEsjiHfgDKDQtuuNSk8yW2wfKGiblkI2QLuS7Bd9xrVhTMGIsiw5dXrEweGC3RsRTQn6XGzb+wu9wOdcYz5fXgdrsUpxFLLBalWjlIAzoXRCcMnyCIpvlCJE5ge7LA/mtKua2LaUVUUxGePHFaNqRFGURCcE51hl3p5Bv8rc+9PuQxLfraUgXOHTvPTLJ0Zb2DEIIShhsSSsAgSxdm+aKFByflfM88yRVY3goqCtNevtepsMnDJJWlUGCzzbJNJ7FifO2e34kCdp1nVhTeHGJJAh03l2wSbtBPHY9fMGQv7Y2pMMYhwn3v5XPG5bKIt+O+3C6/2D9Ucu3QrBSqByBFGErsxLJOUvRRLgIJupPpXS9DVpMSQai6EJPDhebHJUlctXdjk4OGCnfBMuJh6gvBAGfl33HdYFB8CXLU9c/XGasM9M3odwimxOdgZAf9FuHz0O0eqF3j5TMpb72N//FC9c3uWuUwUSTbOLQqEOFyPlsqG+dcDlp55m9/I1XBMoGqVyBWHkWRXCdHPG9qlTjDZmrApHmHjKUUU5GlkbeEgpLatycElXeht02/w8qMqxudU0lqGNFK6w1H+UnrDLZYB8P5yqavLtMYFMhTIkc1XjIKUmQ92Tcai9TTz8+3bm/iThe7H0SL7G0W7VFo8cmFQnHr20yeB+fzeOlwAeSNcdDEQOBzca14IPQz8mDiKpxsad0D6p32BOrWRUiWFnE4eL2Xv2HrbT5rD+iRPC+rjkjcQ5RzUaJfNzYM7F49/vjVqDFt6sf51FewmAVp6n0lPr5hVY5YUwWGhyBDi0JpH2S/ZRFBS/tgiKVqwMat6w++TzzJ+6SHVzn7N1i1eHRIe6SL2IeA9xd8X1F26A94SiQAuhnFSMZlNG0wmj6QgKj9+YUc1mFIWnKMxyCDESJUPKhoihPBb2bOW4opqNKdrWAjhqG6YSEnrLUYiY6xEN2BOjEhKRdTdfA3/
ShuqExa9HtWRvIn81tfNim/RQIPucZbZqXuRcx7Sk9D9f9Db0yGePauevftx+160TUhNgQtJqLyg5KiZZJXbMYHlwhUykFDTa2+rWzilY1Uwe34h2OaGObU6T7zIwHDJesc9R2WAUhWfsRrTt3O4x+yokN1NOnhfvhVb3u7+jrFjPX9kiNM3bozgGDsjAZ8omUfqIsPa8LgtyjNR7Sw4u36C5vsf84jXkYEkRlCJf3SVTX62fhY9YlXwbKJc2LmG3oS0WhAIOC6H1go7GFNMJs/GY7dmUYlQg4xKdVcRxiRSuK0aWVJpWqOKCUo1HbJ07y3x+BUGJjRI0EFUppUzPpB1zgd2EUi/mhLZGdWSa2BYNfXNbevt+OB9dQOYl2n63cfTnTm7Gkeh0NymcLJJHf18/+ckvvxTBfMl5yk4Dxp7GIgyYyYY4VxOS9Zsafj/EJj2D70yXDDCI2UVLCy/qUbN58MD0Ed8+imt+W/7pnEs1h/23vuqE+yWr1ZXuzyVfYiJvNL/SHqY7x7FF1CmAZJ4n37mfG80f6IVZLDHt2kB764D55Rv4NlooQtTM/wGgXjTiYu/7OLGgkI+gIVqSu7H2cwDiI3GvppY9rhNptaV2kXZS4mYjqumEYlxRjEZU4zHjiZWeOe9wpefU/Xfhi4r5zT1YLtBmhTY14nwCPtrExdgaVYoGlqxYHh7itqeE9ByZL6g/8tgZL9K/a87xxY61DX9gyb3Y/A/31JNSZZ0C7Sbzd/d+X7L5Cic8ZLrxTCs4NF/z0stC3LatadPCIy6DesGIb12nwZLvnaKtmGCuETT3O9wwzO0SSZMqtCkHGEJg1dadQHbCceLEaPcvcAuAr3vnu/nIhz9IbNc/lY/1bMhx82U9BDA8Bp+JxgOrqviiYGNzC91YUB8sTPk0KT/WXQcjKNMIQRM/U277neYkIU8kEZnF2BLbQBuVQi2Y5UTxq5Z2b8HS71pdp3O4osCXJbMzpzj7wN0wm6CnpkzGd1MtzlM1DT60tPND2kXN/u4uoW5ZLOfWLCkKQYXoHau2pQohdaqQnr2dHGM4qoN681LkxYUHTtZAR3ORR9frkLsWZe29PKH9RnnydbPrduJ7+ebz47zE4/axrxYJAGyCnc+sYhGPo+uHGDV1U0oQKTCqwqEAgQGWPWlWIuISy3qalJCuZX6pWr/DQbObo/m3zLEiuNSGHH7rtz4JQMXdLJfL/MV+nAYDtvaagvOeZXwGgG/+lm+hKDy/9oFfJ+oeTs/kcM7gO4MA0sBPGsQT13zW4cUzc0ARAy4o0TvKO89w/sIZQt2wnC8I8yXtcklY1dTzBW1TG4PeqkZWLb7FzNtoPDvaAfvTZoXiotq4J1Y554XKOaJ6RjEFYLpxaFFtqecNq9NbuFMTCI4aYTqdIgeHLPZu0WoLBWyeP4t3nkYircf8Dwc68sj2jFqDpXPwqEYIreWfJfHTpV0td4DOR3dH2gtMP1V9TnQ9QLcu2CZ4+bORKIEulSceN0j35RwlKQ/bX7e7IfKuePSKQ2FUNNHiJEzUS8iL3D6iRzKPydD3i7QdljKnNoZwONZM234nSvZpSn9IsgeMeTxfMXfw7f9F+hKdk44YkoAGYbWKfPITnwaEUI+IicRqaJoMj279KimyGFmE5zh79ix33XkHD73sZfzqB36FyB6ip/uF0/mLduSJP2rSZBpFm+91wQyhpQmrJMAFoXAsK8O/OjfBsc2IyCQqsYnQNGjbom1NvTen2ZvT7B7SHCyoD1fEpoHESi5RKTCBdUoqoaKjSHTiwDnUubXGqg4hiFIrzLUlVDBbKr5uuPzUsyxv3aJZHFqXsqiMqhHVeALbU2Z3nWV0Zgs3Ka01fIhIKs7WCK3Ejr0A3KDYOa2hDGVS7eelX/b9mA6CNG5t1ffxC4sESxfUya3mEUu3OZFE7DaE68lgc02nGpy+dzzWX+t+qgFEoiRq0lQN9dVQY/l4CZpyEFHNZqSGASdqNlH7apI
YIpG+PCgDDpwTAuCCR1K0DifEkARLFaTtBsm0pWmbYYR37f7ygGPt1dsgPPPss8zK+yiY0UozUGwvZpP0zyp+Thv2eeObvokYWr7uHe/gH/zDf8CKLzKV+9NOOthkpJ/MpOOPrKNshqlVhQw0dibiUlHURXtPpCM/FlGCRbVQL7hxiVAgcUSxOaG6cAofBOqaMF9R7x0y35tTz5e08xpd1sSmhSbhcFN+0aeia8QRkY4VPPv2gqN2CfcalbioufGFp5k/d5XYNihtqq1U2npBmNfo/gGxjUjpodqkdEIVIuNg0e42tXMQ53ClUCTq0cj6mGRCb1IQLcNLugxidnM6aeh8kv797Kfm1gR59pNOMJZIW8OZAgUSIVgGBZyI8z0aj0jKgL4hn2Yi7C7qe/KSO+m4faEk4SKjURD2bNyxW5C6tgBtM7YC5aMwr1SKJGZOOS9oa7kwwxsEyFyhDDWwDW5PaymdbygqoM5IotVxOG9ZrVZMx9t4tULi23XHi6Ik+CsQlDe+4Y00zZJqMubBBx/kqS9fRHWBYza4B+mIplVzzWOu0hhGn/OGkPba9KuZVuvghrUp1978tSGQNO5AVUIFVoI1wrPJRnuG2SoQlg3tsqY9WNDsHtDuz4l1Q1g1xBT06t0Km8YMqMnBs1kNo9rhDwPNjQMOr91C6pYy1UPGZJaaJxJgFVk+e5VrLnJ+8xUUhUd3D7nx/EXiyszGWhtG0xnlbMJ0Y4YvPdPZlFgJ0QktwayV7BuK627QsKa92Tp0DJIpZ+md5KsaJjk/UUIfp/RTUOz+AdeGLiCYc+iWW++pLod+aKeFB3pTE647pt9Zs+r09tffbX6uS0Vk0HJIDW7sZgawu4R0MV7gXP+Xi3HtXFmjZl5YjdLlH+smTQhhLTRuSeekKeM6I54JZTB/Vq2l+k/885/GS8Vd229n/9btPmUWhsg8PMNoNOK+++8j6oLFquGd73wHX/7yPybIdZxMUnQ3b0gDX2fNpM3InxzZyBPVm/PSNbW1PpkO1yXl7XxJ8LGNLhfTmsybZaFgpF5AXSjRQxwLMZTIqQnTO04zCQJNZFU31O2KZr5kfmufOF8i8xVt3RBra6XgJfnDowJXCNWiJeIoxmNWN60ZrtNoOPMIxQA2pwTaW4f4VcT5kmZRc/jcNfx+nXLVyry4Siwch0WJAOXmFLc1ZXL6FG5jgq8q/KhCnTd4petkAUfq3Jx8yYyXzeiZ7CLY7ahhpXNrxZwKS0OXe4fYMkpIsZy3lTTeKZvgfS5DHB7afVdVjxMoDNy2202LvKSUSDZd+yYp2aCnu+i6aZmjfrkrVUy+ZzIzYi4wMoFv25amyYXMfaizi5YN2uatU80rEgMxKG0UaoWP/NZvc3r8KAUbxLC77gJ+hUMBcTV1vMojD72cwnuredTAe7/xXfzUT/0My71PUsmd3YNbI9l+0STHqHvtqLms0DHI53Fz2b9OLA0iviuBcs511ohL7POoX1toFpPQ7vzkIEZCbDROIHrcRIgyBY0UMbK5rOFwCQdLVnsHLPYOWS1XrNqAc45Td58jbE/Z14aqcvhTM+obt4jLlgLFq+C8dIXcrWAlTN5BG5AY0bqlaGAUPMSE2VWgVZzWSIjUu0tqd4PrxXP42YTRdEI1mzHe2aYYTyjGpZW1VRWuLPGFM8EURVyuvFHUr/MMZc4nR2bXlLQxajK9xcDEtqrS+Im5XYk600Xrth1jLrBwIGU398OJlaHpm4q0bZ362071vKSUyFHgwOCNfhAGu4E50pZ7MkHOFSEgqrSxJcR6zVdt2jZ1vFpnbXPO6ixN6G3gO8GMRssR2kgbC378Z34aoeBU9QZu3tqlbptEbXjUFzj2hIhAcFeIoebrvu7rqJcLfGkt72Jb8we+6w/wY//oxwh6mULuzvJwxPPvd8/+IY68LTAp7mQ/fJLHv/QUd7zlQfPxfIHzhTXFGVgKRj+BmYwMfaReUw6FvHsekpZRIUgimBZoxQr
LxVUwKpCNCeXZLdyipqprQtvgxdFMS/ZdxIswceBPz9gM51lcuc7y4BAXI7VCERNb3mjEaHuKnJ0Rc8Aeh8qwa3dAgqLOUjlFruyKUAZF2yVyUNOUhywvXScWhaXQCodWFeV4RDU13K+flIxmU8R7itGIWAp46Zj1nPalYr2XSv9bkthjc6VZiSSsk/R5eJECcW3+YD+tyZztjMZ8oQiBYcnJVz5eUqCnT9APVPZwNQ6fS5IVn9A6RKPCV8Nf0QZrHtO2oWsggwyKSAlrat9QPBlgOayllM6MrINSR88nP/1ZKrdDaEfGXB4V7Wgepb/nZP7kATTzUNmPn2Jzc5M3v/a1SKwh1qi0NHXkne98Cx/4lQ9w6dmPs6mn6WkspY/3cJIH0adEsu9WcSfT8g5+/Cf/Be9759+Eds+eLQVfEAZ9GhNtSrYSc2SyWwiStGN6sNwjA0s3ha4dgL0VMc6cHAuhcAQPrRRoBdo6QoysfKBVRXEsneLGwuju01RnZ4S9Bc1yhQYr8RqNx4w3NilnE7RyuLJMAZ0KxNNo25maEYtFRC80jrSxZMYGZ12U24gPEVatmaVOieIJTpiLBYdaIn48ppxOqWZTJjtbjDY3GG3McFUBhbP2i2KxCkWtkiNJTI5GZ3B9GtQUv/BE2xnJlCldP0tyEKev/c3vddxCeT3303Rbx0uqpxwK5lFzNh/9jiPd3xqU2GrXnNTM1Jo6tLQD0Hp/ku7ra4fgGZIOZ7CCLT4H3vMz/+oXqVcNLz/9+2jrEpW29zEkUehq749018MEKbpLLMNV/tB3fh8sDijGjqA1UWvqdsF8/yZ/+s/8cf7aD/9tVvIFRvooiOta3vX32v8f4ch7yXNtla3iDVy68W/4pd/4KN/6ztcRNRpuNFdMJP8zpgi2hfftznuOqxyRjImPd333DjEkjpxk5XQLJcEDo0vPDloJmho1haiGZ61bpKmJoTG8a1lQjks2trYpfIFLFCK+LInirGBabWE6V1CMxsRqRHCtEZDlKKfSAQkg06iZQLooSDT/0UAOyQYlkDJ/SSgVVnPi7pIlN1h6D0VBNZvhpxPcuGLz7ClGmxv4UUU5HdGK0X43aOqz2u/Kku5H1CCHuV5yrWi995q6P7R/gaEGNd91qDa/+vGSmQfyRHeM2wMf8yjaPqcy7D3tKfJDTd02tNFazR0/ZPCA6RXJwui6t4Zdi3N+6ZlnnwOg9BOCYDu1p/NH18U/BVzSr84JhzyG9543vvbVeCdoDITY0GpNSIJ37vwWDz/8MF96/AuM5BWITr+CE78uJCYQJhVBQerTjP05/vGP/zN+/ze9i3q5R5BcCJ4WsGbXAcDRaMnh4YLtzRG0B8kqyYu7tyAyZZCq0IYjcQB609gl9yHEFJBTBe8o8JQKumi4+vTzxPmSGFpwjmJasXtqh9F4zGxzk+nWhgVURBF1FDKgMxlVVOdOUatH6xV1HdEQ8EhnCXQFxiQ/sdMsdh6zQlIqPj2sk1xhmcnXHDEoumqJ810a3UWBVfUcVCU6KvFbU0anN9m6cIZia4YWaWvuWOhsu1PB8ujaj1WeP9OKeTVJasSQNznzJZMOSKm8PC2/yz7l0SDOSVCm/LPPZdIthiyQdV3ThoYmGuqno5VZ87myHThc0A7pomQ9fnFY1dCzC/SlXiKuE7q8sfVg5O7jOCcU1ZLF6jne+tY3UnqDoYUE88t1lGVZcHC4zw/+mT/BD//Vv8GqfpyxvH4QLKA/6fCRuu1VBq8BlEx4FTf2fpVf+PWP8u63vRqRGiex94GcR2XMl596no9+9NP861/4JQ4P5zxw/338P/7qf4I0cysG99m8NfPV+owobRsJwX6P2T/o5pV+c03vZVPYOUehAq0iN+f4eU0ZraNU3Fuwf+uQA+e55T2jyZhqOqWcTJjONqgmY/ykJIxLXOXZvP887ZltQt0w373FfPcW2rRd/jREgwP6vI4ASSvaZ4ER6XhVs3+dxzqnP1yah6B
p3lRxi5Zi0RAFmmu32H1KuTb1TO85z5nXPIQbTdBBPXCOFA2FbTir61Vd2s1u50LGVI8bLVXWEeX+bpuva0idQbDnGKAgvWZV68HyjkkgcyOZEFuCWuemLjt79IaP8IraOCm5SFr1+EYgKSJm47puQuQj7959zsl2tar07Oun8YXwvd/z3QgNvXCX3R4KRvVdjpT3f/P7+Zmf/llKuQfPuaMPwFAwO/jW4PWYfKiS83gZ8fO/9Cu89+veRB0DzhcgJZev7vJrv/Hb/NRP/Rzz+QJxjp3X3cNsa8yzv/0k/8Vf/+/57/6r/4y2mScDIwmWREJUQog0beh+Pxqos3GI3S0PA/7BwcoJVSV4lMYLq8IxCcpEHWWTgPkxwHwBtxra4pC9ahf1AuOC6twmm3eehlGJViNKN+XUXVvsLC+gy5p6/5DV4YJ6Mac+XNKurHuZi6l5lAqFZvqRXKY59KeTO5CLyjX54onAStJ45Ai5EyiCooeBxcXLXB45zr3iIfxo3LMrpvWTTdKsAGyz0k5fHI135rVl8mCBHacOpyfn6l/seMkpka8kkEMzNoRA21ppT9NYIr9tW8sDkf0b0ARFOr5QOPaamakZ55rMns6MVlKTqO51JAzEsscqdskIAcRRVQWuWrB/+Bhv+5o3szEbQ912QQBr896zWDnnCES++dvexy/94i+xWHySTd7LMOydNaPttKQ+pulVUTIiOwIFE06PXs/nvvBRfu23P0U1KvjFX/l1Pvzh37HF4IStV97BXV/zZs4+ej+nHjhH3dR85L/4CZ7/8kVC0iQx5G7JaWGgNHVDCC2tHk9lHQVn5/8rCfkmjlg4irKkxIMYVLFUpRBw1qfbnlYFDcnbCwF1SlPX7K/muLFnfH6HmPKNReEpR1PcxpjxzgaTGExjrhrCqkYXNe1yhTaR0ARi3RDrltA0SNtSRHCD8j3RiKjrSqRNyw0glWK5X1VQpxSlp5gWlOe2aKqKGEJqqOjzjo6ltELnNskgz9KtoySFR20kTTGEnmd3zQn9qsdL0pS5ZnLNZzziU2YN2abURoxKXdedlsxh4yh54UoPRNf1Bx4+gz1/HOxGvQmtaSfU3FIP+2zy3buRHFKR5Fo+7zwbGxOuLT7CbDbmD37v99K01pgnI3NsoBMMTByF9wmqFvmWb/1W/tlP/jM2ipvQniKV8Hea9eg0nOR5igh377yV+c2n+Tv/3f9gEzOtmD14lvNvfRnn3/4Qd77+AcqyJKwiB/uHIMKd3/xavvT//RV+59OP88ZH7k5CnyB70egtmtbao3dzpEeFUikK3423kqk4pAuOiatwvsIRcCno4hQ0OpwrDJklAm5QuWODZqbzKqW9Es2LitA6tTEsPSIFMKKIUETFR7WWD62hhSQq2gTqxZJ6/4Bm/5C4XBGb1jaAxiplXKqK8eJToCZv/NDkeyoEmVSMz22z87K7iVtTmqpIm7jQxR5Uj4UDhnO4Zn0d8Ttl4D71r99ekAdeiqZMWL4uAKFWta7RFsC6hmw7PGedhHGoaUUSRlQGzWEMd9fplxxVHB72XT0mkDmQ5AZAcAtZJ9lJr+T3suXqgMlkRB2f5cbi0/zAD3w/aIOGlmi5G4NiIV3O0EXBq+DxSGh4//vfy8/+7M8wD59kJu+mbxhH2jn6O+qhVzm1k8whD3v1Myzq67jS89o//8088J6HuOvVFwit43A/0DZKaANGRR1RB25sub+9/QOa0JLxvyGNRxP6TdICbjnwk+bApcXTWq2mF0loFpdgZp7Sl/hCkLKiWEU0tgRRmqRRguZSsbSjSq6WcbhgTW+0EQOZO2/jk8Zfcq/SRJIW0zoLaeisE7YxQBTiqHSbcnWGIig+WLAo1DVxWbN/cw9dNQbla1qoW9pljcsuUkIRSFUwO3eKnfvuxJ3eYlW6PoIvQMoHZ08jby7dfPa22AnCljfwBOl0pkF0sPZu53gJQhmyDWYD1xH5KhoYqOqh1myp66ZnC2Dgyw1
Zh/TjxUYJ7coD7969s9Ay4f37B4SQsCzZUqyAZcncVavidDoRUV0LqiGt67K0Xkj3luuaTWSJbUSdJC44HA8NqOgMf178fj+jlBW7OeHu7oA0mSHnjLvj0bpRbIQe0EjtbvBO64vmIdvDse4DVe6yNMZsN5woaykLcx97iD4azQEoL1iP/2L8ThyrYykmeDnAASOpLT8FWD7REFAlUuFdIqpEFBVwwOpkBjqhaIJiAmSHigRF8u2AQFIQtlPZfQgG7HR2T2+1olecIBaapjQjxD4RjffImS+Agy+lrMhlNQSbXq/YOAnPfz3vrdWKRMI+WwJpGUCGCLVvXF3dQR/fXP2LsjHeTuWGZK3Nsm2k4Ga5TUXaUm//vU1TOk/2qQfdeEbt5ZtWufpxhK/VcrbHx0d87/Pv4Xq9NibOcr0iJSep1zYuDdo7OcZdSpXKdVoL+yjthO/vXyGX3Hm202TF/bkxiEQEp9PFpDW4+C+Xy+C9MuZd6vmL9U2ONDmIYLkuTduHm+F2p/Sulyb8ZQ9tNsDJZ2dG06ZlLrQFcjpI/twxoHYV2Iztw/ZF7YFKJ6fXqq3JenyvrpPbAOD2doohcpZe8Hcv6pA/P9MWXgzW3REQp8TpXVN60uY2Hm1sQb6i5gUlX7GsV1yXC3JZbIMrZqS5bX64eb/x/V1y0iUr3cORtijwwb+tfixDV00rMdVmJE6hvA1BR2UDf+w9x+1I9LMo+vC8YPfbo4SPOV5OHjBa3H6/x/lyRq0Vn332Ga7LpU129hF51+sFx+PBXDkXzuGwaznFbrfDcrkgThHX9YoQ2ORcykrGDggseUmF/XyCy+VsYbTLLpL/6G1a0SZQudo3pS7NcGpGXhcc9jvkZYGWgvW6oKycj1EyuYxjfgEMRO1C0vh6vRrBWWxGBR8YZS/Z7FsNTXRk9dZLdosb0FrN0JoxIqGbDopWHwubwUa+SOFGWE2Ma7MQrMSiir72B68tEdSu5ZeXKhxdBLAB0HzRqtfgANzuPO65anWjK8iZ6unFar2lZnrHwSBvnck2zxPr0eVoAc+P2zUOno1GpSjKklYpKzeBws6RTrDYCrp1j+ZlDm6WJStqwcYob8n67l1bq5295wd25Q8eH9VP2SQhJeDVq1f44Q9/Da9e3ZFKVDL2h72pDHAmpQTFfr9DCqwRTZN3gFQgCnIp2O13yDXjcNy3m1ZKQYoBy/WCaU4mhAUDQNjlD4nYH45kkgRvVwoWklROsYJinidfrRAoHh/edyOpoDSjrYt5mqClYjJ2kpdjCCAFNikX8mmrhWEuKUE0kvMOxXMnoBWtt1+w/JFfMShzUxS4tGLvxJeNgW4X3VieMU+dbUZH8RKMeVIVkPsuG5STqo00Wnhfo3tRhzQsInCjbBsX4BPknuwBI/Ls11Csdu1eUpXKb86kkbHuqv370sjsXTDtFtWmUQDeUaOqrTTnkc9Tr/ZUO3e8136BWrkJsFzj19ZLTn70TW8oK/WY/sXHi43y7u6ueUpV4Hq9IKVgMXvBNEcsS29gvl5P2O0mlHxBihUiBetywX43tTAiGqoYQ8TO1OR2O0ddXViLeeO6XpCSoJQFEsBcpWYUrTgcDrhcL4AtmGW5tt7LZVmwLleIAg8PD/Q0RtEqmXmKFiq7jfM7+k3edkv0li6XLsntgY/oq0vWt6Vd1SmgCCo+xAnWUUltINn23G1rhk9/xoWJZhFeNuF5E8yhhIajp4MmrVkSX+LE8vBkM/D63Igq357Tc8e2RrtFNxuLpkHS9jWQH8bP2V7XNpKhd7wJdR288QscnuUtEDd+zu0GWLVvfq3eW/3z6pNr0/FhtGv5cJj7oePFRvn48Ij9fo/37x+aHCOZLkQl12XFlOa2uF+9eo3L5TrIT7Lr4nw5oxYgxRmXyxXztMf1wjByntiB8urVK9YIgxeiA+bdHq7bupsTYlBMk2A3J5wvZ8y7GcuyolYghKl5CMqScIc7Ho8k1E+z6c3uUAoVvVG
dpmc32rr+oYoowyyTSt2YFCKL2wNgMXozPmk3tAqRCkjp/w5hrRttQmBHp479il2j9fah9s/xToZgANuWwuYd/SO1bQzZnjPE/uViWoJeEO/55y2A4aDPuFHocD3but1zigPPAyJMLbxM5CinkyOcnDGS6MlphWkc3R6jEX1og2HovZDLXBcoMkq1unYD/XrpyoElRjxOoaTT8q+XHB+h+xoZx1sX++VywavXd5YLKva7Hd4/vMcPfuInTZEuYjcTItdVkCJLJ9O0x3LNqFU5NHZdcby7w5Jt8lMlopZiwjTNePf+Hd68+QTn8wUxsjZZcsbd3R7LuiBNCdlzK1PcC4mh5Ol0wf5wxO4gyGuGoNLrLgtSoAnMaQcFcF0XCnoZUksGkIl3BUGt1ptoYSkMWZMExCki6wpAkTWzWwYFqOxbYN7Su1lU2cfJxeCLmKG3QFj6UAd3wPmNql2ZbvNcOjILUTYMS4VEAMZi4pr0trNu7C3sHkbnQUKrg6p9voDX76GkX48nS35NjFJ6EwFJMeSZhpRYFglDB0kTwrIopBZ4+7BW8lB7uKxg3dVD0WLAC8wopUUPHiaza6dufK8Ytu3RQh9px0282y8NLRe7Hs0AYtfKtcM3AGk5e6c79s/BR4WwH5VTOr/xcjnj/p4NyiIBk53kbrfD6XTCshTs93fIWbEuFfv9HYCI16/eADVgv79HrUCUiXnXwKOELY7zecGyZKQ44XK+mtxIaB5XK6UDd7veOH2+XKAaME87rCtD0MfHE3LOuF7PyPnKfNPg/JwLBDZ8J3qXeWhjAmCG8Bxn0T3CbrdDsqnIuXLYaC4rEK0DPbDeGRIw7QKmXSDtsK6mNs73GYv5OngZDEiff+/HrffhD7kcHHnkmr8FI7YQ/W3uCsD4mrZIwjbHey6sfhLmtYuARanuzVvhpf3aWULwUBn9ulw0mu/HMsuyXNAQ3uqfV7Zf8H9rB8HUqY80uH7XxhDUoxLWHiU0kAAuwsUva8xTB61WlLra59bhq3v3H3mXyOvXr3E+nxsQU02ThSLNue1SDw8PmOcJv/qrv4rD4dBqkA7RpYm0t5wpZrSsF6SJGqqn8yNUtRmZKxHEFFsNcZ53eHxkKD3vdsjriiklfPXlV3jz6acoteByvQx542IgT0GpGfNusrrmjABgN004n0/QzJmZ3HyCha9o/YqANtVz2GKndxEsxgDyRdQ8mgNBYJAGpaqdqG+cyglVm8G22x1VRJoCoI+8c08Uh1phe70vswbGfHMec1sSUNWN9xvzptZY/EzYOtL8ns+3QGQJ2SRQGE0EVETp93nkxarqJscfQRsHWry0NYI/GML2EfGmcdrG90yOOp4rwTuqJwRTdG/Ilt3d22vvXv1pDvkjzyn94p3OxoWScD6fcHd3xMMjc83j8Yh1XXA47Ez3xg7hSHStBZfL2bpAFtzd7XA6vYMqaVG59DIKF+3EcDN1pYHPPvsMp9MZtbCRep7m1syc2AyHtdATxSliWa9AlM42AUswtRScTydrQ+ODJ8eROi7caLjzsTLg4aDvttV28Q75hyBY1wXzlNqDVSUP2PNPV04rhYJdMXbwKLXQEG3hdeWFrcaRj1e49Z7jInkKzoxwP8Cc1UO3HsKpeg1wC5KM6vHjV+M73yCS4+vtv+xn1kVkXijETi/0HtdR+8nfd1SccAN17+zIK+9Rz59t+bWUwHPvWsmnXtcumtbJDL1ZwCOZjQF/0L6eAlXj/XrJ8WKjfHh4wKeffmpy/3cN1hYBSuVcStVqqnC8yMvlAgmC0+kR0cK5NAkgVAinuDFHCqSYOGJsZq/jfj/j/tURD49vcbyjx72/v29SFS4Zcj5TfSDZ0NFSS+sw8ZHf++OuIbatDxMkROQ1cyBL4DiGxUb5kTwQrK7Vd3Hx3I38QIQo5vU9xAF2+9koeoNKnA4aLUbRolflefn8Tb+ucQG0QvgQQvb8FJvFe3vcGq3/DEDjnqoylCf5W26+0M5
jDHP5+qdTo90w/XuglzH4MwweyiOO0iIH/stNebebGwndvaF7Tjei26luTkjvnyVWYpKGD7Tqj3RFBT/Hbpjbrht/b7+usd1tvFbf0H49x0coDxA4ubu7b8RvEWme07V7gvUk7ve71k1+PB5wPp84AUuzIVkcBjTPc6O63d3d4XK94Hjc43I9AyjY7SYrvyS7MR1QcEbR5XJpzCFVnmsxJDUEDpElYuHzGCsu5zNmA41EaaCMHrmIXeS5IXm+gGzBqIWyOa9I1JFoRuZ/Nyp/e8jnZPdlXeCqZ7e76O1idyMcuyD8nvmzuTXKMT+8VRFwAsLoRW/R3edCreegfT/X288f66i3dLoylkTgZSf7mfTP9usaUW2//m03xrYO6ffa18TTa9Lh62n46c98O4+k31d+gyc/63/bIJ4Xh6zj8RFADxt8S824LswDmftxlgi7NebWQrUs16ZNWiswTbvGOyUYqLi/f4W3X71HShPu7u5xOV+w3+1RimJdCubpgOs1w3VV/ca7CoJ7lr6jMWfL186TXNcVooKSgSROzaPlrOuCeTcZ+SA41ZRNtqUAxiRiODvQ58wovNved35vOUspkRQPsn6mmIBaoblgjglaCg77Q1/M0r1dHQzxdqGHwFJMWTNHFihlV+KNp/SNoHvQLbgzGrkvyPF4LhwGuqGNP3cPMi5M/1v3ZKNXK7W2AUdul1oq8prb5C//jFFO5FYL53ZzGEGm21CzCWPfXNttHnlLTBi9I4B2HiE4qeL599n8zEposDr1S44XG+XhuEeaAq7XswE1Gff393j//r0hZEsTeGoPXSgNspupLhdCRF4LtFZMVut8/foTpDQh54LD4QjvhXv9+jW++uot9vsjlqUT13POePXqlXlHktPnOWGebRBQVWq7LisO+x0e3z8QtKkVEI7KlhixuztAJ4FECgQfdgesa8Z+t8P58QFTirhezthPk01R4w0P0hdJigFRgDklTDEhQDAZ9O/i1PM0mRfuiKIYMH/YzYgCTClCkrC0UtgW1upaUlurGGrBLs6YZMIcZwholAKBmE5u7zNkE3e0XNoNZ/Rq4zl9nZe8zddGD3773+OGML7GjbMWbU3TaiJFpShKrkZN5L1zb+gKEH7+t8wmXgdLQ7ks4PRrtSljAHV0SZETUPDatYVKVkCIpoaIJvPStHc18/daOFIhmP6sCFFZdC8LoHnxoCBZxJhVqArNBSg/co2eB4hQLOhyOQ36OQH7vauZJ6zL2h54KRm7ecbj6RHTtGvTt+bdhMvlbLtTxrpesd/vCAAddljzAoaLASVTvxWAUfgUDw8PADrZPaWEd+/eWiN2xsP7B+znPU6PF7y6e41afO4kpzbP84zJuZxTQi7kUwZEXM9XCATX8wn7/bzpuvFOlZ7jAQLtatoiVtPrXn3Mv0ZPE0PA9XKhYZneUYih6QB1uQsdvE+XuBTGyE10eTQI5xiP3/sz8XMZDW78nf+sbUJD6H3rBZ7zkKM3Hj2LGy9BlYJlyS0f3Ia328XuIeTooZ/7LAm9C2SMDngd/V8KKt+G4mMniNrve2QkNyUhCV8P2qivFWXrmBuk1h+xUQLSxpTTSHiix+Md1jXbSDzdnLAzShjSkhS+21F5breb7UGxjnNdzri7P6KUBbvdBKBAgiJGsEVqXW3kwQXTFHE+P6KUjOv1isfHR5MOYZi0LFdcrhdME2UKY0qYZnaczPOM6/mEFALWZW2j86h0QK+S1wV3hwNKXlFMqrLldMNuJwLjsbLc4ZvEsiymY7timk102BZ1F5de7bVoGkFqyOxtGOQ1U7WyTK0uBIxWrHfnNhqCGx+bkMOTDcJz3NG4Nk/8mbKH/zvmbc/mXPZ3o5K+f5Z7Tf/yvx2L/yPIMr7vc2F6G7H3DOrr/z6fNz71dqpe/P/wfejpAIZ/veSzfX9v5wPw4mbnFxvl8XiHt2/f4dX9JzifF+x2ByzXjGVxHc0uvOz5pQMvux1ri3d3d0031j2
JJ/MuVEvie8TlesInn9zhfHnA4bCDM2qOxyNqrU2zh+8DvH9P75lLBoIiJODx/ICQAlbNWOuC0/mMy5klkPfv39HYDHhIQUAxX8V+P+Ph4R1mQ3S5AHt40h+4F5vZGTLu1CVnSBKs+Up2jzjSyPcSM8ycV6zXKy6Pj0AtyNcrUggm6jVBa2F4K6byZgTaCpc/iShm1A6ajRQ4AE0QzLtm2sMfgKO23G4M7LbE4n9zG+6OHvc2jPVn2w2453Aj8sncsxury6+MuZ6ft0cBfaPZypz4+41efcwdb2fGjNfiEdF4jMCZP8dgXUkiPYQGboW5n96vbzpebJTv3r3Hmzef4Xy+4LA/4ny6oFYgmCwEvd8Ojw+P7JUcQrdaFIfDocfcthgOhwPevXuH168/weVyaTeSI+sSTqdHHA57hChNonDccc/nizGMCDidTidMid4xlwwVxVpWxIkDStMUzbtc2FGiFXldIFpIpodyvJuiXZMDRk7K1gE0aTde0fJaV9MrNSNEYMlXFGX91I1nbCG6XE7N4zWhYAHlLlSxmyagVkymvmf+ElULclmQUjA5lDjcoy3gwu+5iOZ5GhgyWy/xHLjkz3Fb1tBnv27DWffOI1AzztNwA+k1wg7OkYV1bdcwGtxo9PxZr8k6yDMKRyu2m8r4u1t0tv1MsWEWjV9jxDFep4N2twDYc/f1644XG6VIwPl0wrLwhr1//74Z2t5AGFWTzrCCrBvQ3d09XMez5IIYElIiIf3Nmzd49+4tdrvJeIXUhm3DZ4PgemUt8nw+N7S1FJ+2Rd7h+XwFZ1fwIaZ5goKe+Hy5UKE6CGqhIsHj4wOW5YIocMQBOS94PD1gmmcsS8HxeI9cSMW73enaCPLE6ctVM5b11MawkcTc629XU5Pv3mcotjfDD20hjgtw9EKOBjsRwh98zhkVtcmW3OZ/bhCOgI+hnnswP5/bnM1DSr+349+Oi/oW9BkNcnw/fvX/dlK5e9BmTM8Ywpjr9nsS4TrBH/L6o8fsBIevYfSYgY3HS73d6FW/zfERRil4eHxASgFrXvD69SuczyfUUrDa8B2ibT3EAIDD4UBgRjmhebfbW6OqYjcfcL1kHA73UBVcLhTOenw845NP3iDnihQT8lqa3Mj794+oFbhcFpRSEUJCLcDd8RWWJWNdMo77O5zen3Dc3yEvGff7I9YLSzSzlVx28x5BqDUUQmrvNaUdTo9X7OYDLucV+x1LL7Ohqt5F7sT4IJxqHYXCTuuSjWBO8Si1aCLYrXaFPFVOnioFmOc9OBQWmOcD2Lwr6Lqi/X722iML2L3jXpun9b+7zbGAp/D/+DM3oue85W0YCPSw8naxfihsGwEYDHmbl7N8M3nuvJ9b5Ntw9mlr2Rh+902kI8gu3E0haNm8FsO5P3c/RoPebCD1+U3kY0LYjzLK4/EIdb5mWY2IXfH4+IjXrz+FWt1sDCPc44UQcL1csdtRJWCe9jidTnj9ySe4XhfEMGFKMy7nBfvdEdfrgnneoxRQH0ZotD4KAQg4Hu5xuSz45JNPcbYaZ0w7XC4LUpxxfrxA18rReFWAoihFkaYJay6AcCbJcl2xm3eo1sxKqZGCGDkXYk4RSYRetRZAC87nR/Jzrxm7+QBBhGhAEtY7k0TUVSk8JQnrdUWUBFRBMKGqWpj8Pz46HdGU5iRy0I8GiLK1aU4789iccOULyf8VCU2yUZUd+i7POS6ScWFdLhfs9/tm2F3Nri+oMXT0w70NqYnVwDAgGBWRbCWmIevKsRYePjtJgDkgf05FAgJ0XdtmS+NzZs+WuL8FaAA0jrSHqFqZu7vSQR0I4oqKWmCAH2unrl2rCnjniX/emBL4JgoEQ8Vh66fiuU3sx2KUrqnKUXVLY7zEGHF/f4+vvvqKNy/2t/RY20GZ+/t7PDw84P7+HpfLhY3TpwtSTFjWBXf390R41WlaBacTCe7LknE83hmkHbHfHZFzxfFwxPv3D/TApWLNBbsdF9phd8A
UJ449hyCFCYf9Hc7nBce7V7heMq6XFbvdHqUCMSXc3d1BTKyrd64Yymnho/fM1ZIBBEgVlLUgSkRt0iCJBmL1MVICz22jgrLksyw01vPjBUEi8lJsEG2xgau9UN0IBgagAazxURIyNI6mh7N+bPLf4b89twd6wZ30st5gML7HLbLqUdFYMhlLHO6VluVq7CNtn5cs7C82jet24T7n6dzYRoSZYE7PDd0IGj+29Vl2po0zCDt62qVSxvuDbzAi/0zPhcfc2t/nY0Ee4COM0pGw6/WK+/v7Fpc7L3GEvkUEDw8POB6PANDCqlJKm6YcImlvx7sZISqu10fECFwXcmuv1wuconc6nWw2JjV6RIDz+cSdVyuOxz2uyxnTFLGbJyxX6tK60sD1ckVeCSQ8Pj5gniNqXXG8O5B0UAsO9zOQCkpYULCgYEXyNitUzPOEJvZUufOKFelLycjrinliK9rd8UigQDioKK/kAR8OB7x9+xYAcHp4RL4ueHV/j3W54vWre5wfTzjsj7hezzTGAIQYUGo21o88edCei65r75wAtgY4AjCj13PdXBpSaTmwamfv3BqJf+8hI/M7gTcZK55TqjMFiHXdhLx+LSOi/aEF7J/vihJjT+M4lt7fwymI4/34uq8WWg9ADUSeO5V2jJtQM8qv6cv5kXtKdmmsBrDMeHx8wN3dEafTI8js50U5RH+8O+Dx8YGk87w2TuhiHR+OUl6vF6gW3N/f4fHxEdOU8PbtV7i/Z7eJaq//hcAx6VUL3nz2KR4e3+Pu7oDrcsF+P2NZLkRTI4a5IrCHOWFdF+wPM94/vDVNoBXTHFHAdq/dfkZMAXGKSFMwhHNFSoLH0wNC5PjvFANZOgCyaRIFAcq64rDfs/YpAWmKWJcVApaFvvzySxyPR7x7+xZv3rzhw1fguN+bHAiFrhUVKny4yeq7lAji8KAgxAX9njj/+Hw+b6Zs07hsQvJgXGPOx9ICv/ye+YCdcsNAGWufLkgWAvsufWRAXomaLktXC1TVZpR+jL8bwZuqpZHFnUTua0fsc7R16zxFhm891m3JYzzGKC74JhjQ+M0inrs/Nfrb9++hPjb5qH/OjwV9BdRasvZ4a3ND1pUsnp4jUPo/mPdIU+R0KlM2S1My4rqHR9takiOLu93cHiABpDhQtphMk0zAc3GYP6WINCduBonNxmqiymJqwV9++RV2O7aVzTvmqsFqkF6WCUEaqcD7R0fidSkF80SB5mgSJZ5jXU6PECXB+no9c4YK+tyUWit2+z2++uqrtos72aDkPHi02pp5Q5DW4uXCW9XYUl6MZ859wfV6tTy+q9HVuuXS3jJ0vDPG140vLp/RMi7A0TC54Lg2urfwkK4DKu4hReRJ7XFD+jbA0zd478QRAWDPsXGRBxXz0es+ZywfAp7cpfVmB39N75vtbJ/nCQcjK6n9rt2b3vL1IYLGc8eLjfJyOePVq3v4XMFpSqiVdbdlWVpR38MLHw7rO6CIK6z12Rjv3r3F/f09zudzW1QeivkuTC2daXhvNlh7QXq326HUzK4LUZwX9mper2QI5bLi7tUdis3QpI5QwTTNnHGfAnZ7khNqWXE+PWCeJiOsr43FNC4AintxLDw5vYIQxUCGitPje/5trViuF2gtmNOEmgsO844e9XDYhE6+OL2ZOZpX9k6YEU1lCYDn4yqDIlRB8J+No9/H7z3fHEsT/vv9ft/yXoBA0OjNvJwC9F5LX3iNCFC3fZUOvPi59Ty238+24eHrJ1ONG8n4Z7cliK/zjpu8Wjj8KU0cXDv+7raeelui+iB4Q4Rok1eO7/GS48VG+erVa5PWKEhptjCW5Y3Xr1/jhz/8YVML8HFzqtoG48QYsRphfV1XBAl48+YzvH//Hnd393j79h3med+QSBdpnqYZqsD5fMU871tOez6fKAWyLIABTFw0AKA4ujr6sCB9SNE87/Hu3QNqVVwtrDoc9vY3M2pRiEYIElxHhufNYv3Dw3u8en2
PXK5IU0BMgcwgVGgAwpwgoqg5I6giL1cEAfa7mdHGbrdZOKOygKOR63pFzSuglLPkGHBH+ra5kIf5gFoUcHlS7/Tvu2FbTAhpqDNHEh4GUKY0Ix7RTd5PK9jHaAN6uica8yz/OTCOH/dRFrynzSjJjtgYdQ8LnwOD0GidY2fIWLQfj9scFoDhAs54mtr7jLREP27PYfSO43X692PD921t+OuOj2hyPmGe9w0GBiIulwVAwMPDCff3r1oe4t3hjtS+evXK6FYJuSzWBUHQxRXlRCKmNEOrKaVnGs/DwyNiTEiRcw5fvXrNeSUTGT+n0yPO5zOuF+Yxy3LF4bDH4+N7HI97uOqYt5y9/uQVFQ9eHRECEEVx3M/I64rj8ciwK1eUpWC9rAgq0Fxxf9hzCE+pmGLA+fEBofIhLyUjzSw/pDhhXTiioWaOadjNM6CKslKG5Gr9n+4BU6LieLAukf1ujxgipkSRsZwzHh4eIWCuXMpq5QhGLf4v04czSiHB43Q6NW8FoHlILrqIZVktbOvkdwd7fLgS1efdAVSsK0WzU4rM7Qza6GFdsU4KtTyYEiwxCabZVBWs8yVEQakZMQmaho1syx3+72igtdqUM1RMU4SEp9zWPkzIpUq7Qdy2dnn05sY9Tnp7bqO57ZZ5EhIrFQxhihQYv36URgnA+iQX6qxeLtjvuyKA1xBfvXqFx8fHlmeN5GffvY/HveWnR5zPF5TCet7lsiKlGb/2a7+GECK+/PJLTNOEy+WKGBMoB1kxT3MDEpIZ7PFwwG6acdwd8Pj+EYfdEZfzdSghVKQYjLy+p+K53fSHhwfkUvDu3TvEwPphkAQBR7156CxQpCkgZ8p9qFYcDvt2rTxXhnwSyEWFwuqw5GouNo7BJ4ypsm5aar9XMU7IK2XyRSJUI9a14ny+2uIsBCSs5uZ1Qt/ZyS3uNDJnVz0+nlBrtSiEntLvDz+7tH93uxk5L3Y+cQh72SLl4weui9cht2GjNqCkNsDNQSH/XqQDPi6XAmyL9dva4Lbn8bax+Zbmdxta3oagAFrlYAyBx9AV2KKsY4TyfI6qtq7sM7HtQ33J8fJ+ysOhEc7fvXvXvKCqthzGcywf2jrmQX5hPo/EUa9p4t+ueWV5oS6tX3OeZ6RkZOYUcV0uuLu/w+n82Opw67oixQmqgsPxDoCNx1FgzWTXpBgxTQmX8wWv7o54eP+uQepsOZuxXhcc9gfklVo4825nN57I7eWyIKUZIrH1doZABTxRqqtfLxdj9SzkzFohOQ2Lep5nzPPEbpXrtRmiLw4HWTzPPp9pQEGCRQILci4Ds2db7vD5KTsLkf0915XtacuSzegLRzWI4Hpd2jn4OapWk2U54HA4gDNIZ1yXFT7Y9fGRUcrpdMLlctmgoL5oR/lKpgesd7v39s36duHe1lb9e1dc8M1mLLuM6O74PuNrHPO4NaqRpHBbj30+dO7G36+7Az2ODj/JOV9wvNgoRxd/PnOWyOPjYyMCfPLJJxxzZyWQ+/s7rOt1U4Ny3qoP5wmB/70sC968+QTv3n+J+/ujCWv1Ia0U6rpCNWNZzihlwd3dEctyxQ9+8BO4XM7Y7WacTydIYPf+UjKmeYfT+cJFVyp2hz0eT49tUbx/T9WDZVnx6u41Lo8XaK64LlecTKU954IYdkhxhyAT8lqR14oUd+yQKRWH3QHX84XKdQrcH+9wPV9aS5dvTk7U10qFhJQS1pyb9+gLKBsFT6EVWJeMqiw3AOz6uFwuA1pMJpDPR/SyRic6aJNNcdBsnmcbEIRmwCP31qOclCK1lkRMlZBzV2rhAj+fz7her23xjd5m9GS3wlRU2tc2O+bWO3pY7//tXso3Mt+Ub8cRPFeX9esdSRP+7+3nPsdg8ve59cJjftrOQZ92E7nBv9RbflT4ejwesSwLPv/88wa4eGH//fv3rRk5hID3D+8bAT3YzPspTfBugBgpTem718PDe7x58ym++OKLVmbxfGddrzbwp9r
37Kfc72d8+dUPcTzu+d+HGSoFBStiEmilmsHlugISsOQMtakV5+sV026Hy3LBvJ9xXa+YdjMVBK2F63J5wH7PReM7+fW64LA/ouSC3TxjnnbtXjiFcFkWcnaHTgdfHI5gihlqTJEaq8Nn+OtyXnFdLggByOuC/WFveWNfmJ2rylIORb/WZoz+5QavqpgMqBtV9Lxc44QCR8EvlzMoxOxCzhW5rHj3/m37ew8r/dy9rcq9tofPl8ulrSVvMwOw8V6+fjoS3csm7g3HvM6/Hw3GF7+fy9hR0ubhDJuHG04vuz2VyBxf478fZTj9mQJoadGHQKBvOj5KjNnLE5fLBff39x3FMqXpu7t7+90rxJjw+vVr9ipGzwsnLNcVh8Mdci7t4fjh4YeLczmCend3h8dHMoSu1wuqZty/OiBNwOEw43J9RJgEua5YljMO+wnr9YQ3bz7F9XLF61evIYg47u8RJKJWgUJwuV5wd39vPZik2anwPX/4xa9gmkNjGpVCgCVFTv/ivETF27dvkVLCV199RQMJgsPhQLh9nhBShESis96ZzM4Sy28gzYM5Q8pLTKWsuLvbG6OHKg00PE5ASynh8fFxgxaOqcSYK+V8hc/BXPMVMUlX9tvkldsmbho7ABM6EyMNsHHgcVOKua1FAl3Bz0PUjrqubYOKMbYOmjFH/FBddQRtxjxvVHvwv3XdKHcYbnSjEY/0xTHkv81Lb0Gh0XjH9q3bhusfm6f0UocvhtFFn05nvLp/jV/55V/FT//m34qvvnqL4/EOX331jrzS0uuOqsBhT+qc52a73a7t/vv9wQS5uPC87DJNM3KuEImt4Xq/u8NyzZAwcXr0ZQVKgK6C/XyH0+kCCQHrarLyVgLxmuF+v8fjwwMfalagAFOgpMlnn70x3qRiXTPO54UzOjRgv9tBQSh9SgmX8xUBAdWU8a7rijo8tKoVIdI4IWjatiIc03B/OCJfFwQFas6YzDB3uxnLcgXHhi9QzTaG/tLqwN6O5ZKe1LexKrx5txCAlASKDEhG1StUV8TgCwZNObyVMeBsH7HQ1ZQKSwXHdDAd8U3BPayjvb4oR/TXw+DHx8fmVcfWLjeG0QMCaPjD6H08fB1pe6OB+Pv56x3/8Pr2aCy3xv5cLvihDWIEiEIIjkVvDPG2jvpNx4v/8s2bN1jXFd///vfx5ZdfUC6ysmVrnmc8PLzH8XjEL/3yL3Eq87XvmixvzPjqq68wzzN+4Rd+AZ9//hnevn2L3bzH+3ePuDveI8aE0+lsnoA767t3bwGhZ1NV7HYHGxzEMQUpTZinGaUo7u5eYZr3SPMB6nMqVFtIBwDH44G7+7SDVLZHJSTUwvrfw/uTiXUt2FkXiFrHwPW6AIjIK7sJnCnjYexu3iOEZP2jgsP9AWtd2dc5R1yWC6pUPD68Ry0Z63KBVs6lLOvSBbvO9JzX6xXzLuG6nLE/zKhKafw0OUK6QzSQywfOLisH0IhUhATEWbCWKyBs7vY6YZomQCtpa0reK0yWX4K2/w4mTlaVukHzPNmYQTKNdrvJ3nfF8biHi3bdSli6N7wtJbjh3BrnWOfzMPGWTud/7wCYh/Rj6OmGEWM04LBTDv09OQ+n0/58Qxj7Ov3vxxzRv0YertFnNzmtf84tqvvrNkqCOBGn0wmffPIJAAw3vkvoi5BEnstieUppIse73Q5rNn3XM0si62qI5G6Ht2/f4fXr13h4eGzE9RgTUpqw2+3ZpiSCNO2w5gpFQJp3yIV9mst1xTyxjWqad1iuGbMp6XEBR7ZpWbgVA0O8y/UKICCviuPxFU4PF+zSAY/vz+wXvWasVxIeuNsr1vVqedME5/5mu/HMI6WVbVwYS4TqdilGm4wdjYzwDvMu4XDcNVLE+/fvUEppBAjySQl25bxshKNXC0N9rMS6rkhTgpdMXMN0vz/Cp2gBFTEJ8rq2xmiWPNyIQlMo8JBVRLDmdUDOUzM2X9giHbDyhoAxHxwX8O3AVjfSEbV
f17V5WEYJ1y5VaiGr//0t0uvh+Da03CKrwNiKth3fN4I+z73Oz3uk6ZVaAGw96S0o9E3Hi42SXmqHh4cHvHr1Cg8PD5zkfL1g3s3WsqW4u7vHsqzY7w9NneB0fsS8m7jwDGJ/eHhAtV0qhICvvvwK9/f3Nlyn5zgElSiWtWRybZdlxfe/9/0GpgQW7TjRWStSsl1ZC9XKK7mq79+9w6tXr/Du3bumJbSbCfdP04TdbsbpdMK6Ui5TBAb0KA6HPWot2B9mLPmCEClhKMIBtVULYuTAIdWK/W5GLQWvX7/CulxRcsb5dMJsXttZJLVW3N3dcZdG62TEYb/H48M7xCB4+9VXmFLEuiwQrdhNCaIVKQAoK6YYsZsm7Hc7EvLB2ZsCQV65uN1DuD6SL3iOWeizUHwxAgzHPO8cF7rXZGutG+mX0+kEH2khskU3x/fwBTzOBPH/9lBZtZPYPUR2o3W0+LZsMtbEx88d/9YPrwQ0Yxo4uCyTTZv38XN3VNmPW4/4HOr6sWWRj6pTPjw84PPPP2/GRq4qgYe3b7/E3f0d3r19i9evXuOLL77Ap59+2oy4Vk4YVq04nU7slnj/Hp+++QRffvUF7u6PuCxXlFow73ZYc8abz97gcr0gBsHp8RGv7wkATVY3vFzY2Ayrz5WSWwO2tz+tyxWH/R55WXE83uF6uVAO5OEBMYS2iFUV5/MJx+OeYxYSuzNiDEgp2MYwtQey3/cuk5yX1tq1XC+YU8L1fEYU4PH9O9SSkaIgBuDh4R1CAtIUBq6vjZVXSnZOU4Ko4tXdPS6nE+6PB4gq9qYB9PDwHtNMhHq3Z9eLQJGXpQn/igqW64rkfZ1DCOZhnjcHjF04XgrxfPW2tOH5EcPBnRES2JHiLCFvDnbk1b2pI66j0fsi9vzylormhuqbt7+vG4SHt6MRufcaDX08PJy9bXPzf8drHg2WFM253aMRsW2f20YcPuW73p7Hh44XG+Xj42PzlA5111pxPB7x+PiIN2/eNANcfbb9AFUvy4LdftcMe11XfO/zz/H3/t6vsua4LhRoVqUsyPGAh8dHlFpwOlGD9e3bLzFNCcc7Egg++/wNzpdTa+MKJs/oYwzWhZS1s40oyCsVBpbrgk9ef8JFYQN9rtcLOaxlweG4Q9EMFdb8nDRxuVzatQOO5BXMOxqkCNX0rld+/sPjI169etUoa3yIDBl9zqF7In+AKSUsK8PtEELzoiEESGDXR0rR5CgjzucTDgdygmmoXEhu8ECH8r3Fyz3NahKYXr5wDzEuZF9s4+t6PieY5x1yJuFiWVYT1s6t3OXvN94zB/dGds5ocO4tx8U+MsRuNxjPB91THg6HRtQfJSz939F7juT853o9PSQea7H+nHxTUXMK32R0L/WYLzbKzz//HI+Pj60bxHcZp4uNqN26rvjBD36AL774ovURHg4HfPnll5jn2WZYLjidT23BrmvGvJvxcHrEJ59+ii+//BL39/fQqvj+979v9L49p0fnbGEyPdTj44MViKdGZHj71Vd48+ZTvH/3Dm8++QTv37/H0R5Wg6aVHs9Ry1LYV+djBKY04Xy+4HgkHW6/3zcWjD+4nTF/OjiR2+Z1PBzx7t07HI9HfPnll2339sXpXuS2r9ARSl+o9Mq9cZidK6VtkPS0Ex4fT0SU7bXn87l5JzdMyn9w8aQ0NXKDe05fcL6gRyN0IxmFtoDu8fz6vCFh/Fv3Kv7lRuGhqpfclmVpJSKv8fo98s1mPEYDGcPgEQgCsAFt/HMdiR03htGA/L/HMPS2HjuGuM3oBBYVbnPPH7lROntEtfcy0oPt8eWXXzbwxGHnd+/e4dNPPzWUcLIBP7ORqx+a5/npn/5pnM9nfPLJJ3j3jgju+4eHJjeSDCTxMgDJCvTOTVEuRnzyyWucTqcGRh2PR1wv19anOE0TFPTYTnbY7Xb46quvKEtyPrdr8JucrS64rqyZ7nY76hQND+xiKufj6x4eqHG7LL0U4CGh/53
XIv1zu6iyohbbxQfEzkNzgLW3xQj0HtZR+Is7/+FwsHMPrfwwhqbuAYiwysaQ/PzcY7kxjTmTL0wAzYB9ofoG4jRCX/gepneBNW7qHkb6gh/zXfdensNuCfWh5bSjwqFvIq15OfR+Rv8Mz4cBGGA12T0uFn2c2zn7PXGQyr3+7abkBxH//v1LQ9bxEP3YLPS747vju+PHenx7ccrvju+O744fy/GdUX53fHf8Bju+M8rvju+O32DHd0b53fHd8Rvs+M4ovzu+O36DHd8Z5XfHd8dvsOM7o/zu+O74DXZ8Z5TfHd8dv8GO74zyu+O74zfY8f8HGomHBNi5TOsAAAAASUVORK5CYII=", "text/plain": [ "
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "sav_dataset.visualize_annotation(\n", " frames, manual_annot, auto_annot, \n", " annotated_frame_id=30,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Show the SA-V annotations in frame 0 - manual only" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAOUAAAGFCAYAAAACSjT4AAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8pXeV/AAAACXBIWXMAAA9hAAAPYQGoP6dpAAEAAElEQVR4nOz9WZCt2XXfif328E1nzPnO91bdmmcUUAQIEiQgiLOolkK0Qt1Si+GWKYXDYT04/GC/9lM/+MFhu0Mh2W7b0WpR3VKraUokBRIkQYKYgUIBVah5uPOQc57xG/bkh/2dk3kLBeJWGA7zoXZUZt46efKc8+1vr73X+q//+i8RQgh8ND4aH42/MkP+//sDfDQ+Gh+Ne8dHRvnR+Gj8FRsfGeVH46PxV2x8ZJQfjY/GX7HxkVF+ND4af8XGR0b50fho/BUbHxnlR+Oj8VdsfGSUH42Pxl+xoe/3ic898wTeewaDAVJKZrMZ1lqkVshUcvrCWXRRsHP3EE0ODsrpERfPDJFYBoMVrly/zWQyZ9jJcPWMg6M5DxSSX37kDH0/Zbi+wlul5M29OaPZhLIsefiRhxlPpyRpytbWFkIo0AleSKQQeONIpCKEgCnn1JMpdTnDWEPwAe89dVNjg8c6hxcBqRS9TsbcNxyVEwoHo66lc+Es+U3PzFrq9YxidYDfH5NmGh8cG6sbXH37CK9X+OwnnqJ++yr/5ht/xryuEEh+7jOfo9MbUqytUltDliQIIQjOMz444ltf+yqTyR6dVPOxx5+gSBTTyQRnHRKBgPZnIIQAPiC8B28xdUNpasgV6UqHo2pGnuaMxyOSXoc8ycmsZCPv0ZkaBiHFzSrOvPAw166+Sdoonr98kbI27E8NO2bO/nzCk2vnCWPDHTzX7h5ikxwdcibJKvXZX+HawREXH7jDen8f6Q1WTDCdilNbG5S3Rsy295GlIwka13i8h9F4hFaC4foKjfJ46VFCkLuC2cGc6dEUUzV4AnkvZbC6jg1TTCX4qUd+jnG4xm2zjT/V57zaYP/tbcbjORIQQvDAgw9y5cZN7u7tUdY1mU5Y7/VYXRmwt7/PUWW4duvW/9fG8Xd+7Zf53osvsra2xtbWFlmWEUJgdDTj2pU7GONwzkFwSBGQUrC2tsJwdcDKcMCdu3dJsoK86PHmm29hjOGdO1d/7Pvet1EqpZBSUlXV8t9KKaRSCCE4e+Ysm+fOcnDwLbK0g5YJSIsX0Cs6OO/odnKmsykBg/cGRAO6wKU5Vd1AA73BKheKFS4lKQjBg5ceYG24wsHuHnu7e8yqmsNqxqSqKIqCRCoq69nZ3mYyndLYBmMalIqGCuC8QyUJRbdDZRqqqmRsa7orfVSnjy0dqkkIZp2pO0JqjZk16KSmJzNs1SAl3L1+m16aIdwh1f5N0n5CkiioIQRwziH+kjkMBJy11N5y584dBp2cEAISAd7jrMOH1iAFWOMQSIIXeA/OapQJ4D2CBGskZq5AKaQNBNswqkeYmUP6lEeylGdsydPDFdS84kJouD6dMb07RjWWvjOMDm5Ql46Q5CitcKfXOdotqcoNDq6nFGc3uf7WnHl+h77coUgVvmf
YvnWT1FmkcWit0FmKAHwIJElOt8jIV7poLZBA4gRHN484HB8yNzVpJ2VrcwtrKzwgRUGe9ZFqhbq+gpOCEDJEKDAmoKTCmoYsy3DOI0V08jqdDq4xdLs96rpGymPn73/3G3+LC8Mh4IEfJq6FEO9JiDcHj2DiHP/dn3+Ft67fREiBlBJrLSHEDR4g0ZLVIsMlnuA9KnhSLbDGQFlSi8Bh01DP5lgbCEFG473Pcd9GmedxAQkhlj+llCAFPjiqssR6R+0tWglkopCdDG8rsrSP955uJ+P8uQ26mUKzBkJyttejP+wTJj2sEFihkTRMpmOq2rCzs0s5n1PNK7z3uACiU3Bre5ter0e/0wXnSZOUsqyxQGMFmUpw3hIClGWFTgUqlxinsF4TtOTU+Qs0kxHl/h7lqufCL53nylv7OC0p8y6bjzzEwEqsb+h2OzjnSIOmv+/J3ytRNmNrfY3DyQQQHI5G9FbWQHyQaQoEAq0TlAgIKTHtCYkAbz1N01Abg2gX+VE5BSVRiSbrZFjjMWWJFQHZK/CZprO2juhk5EVBEkT8mjvkFNbqOQ+PblA0KaJx1Fdv4qeGMPF0HGQh4HyFCAphG9Is4WBi2T14kKPJk/zW//EX6D7Q45/9029zeE0ihrcZXnLoTCBSjZIJQRa4REKWoPDI4Em7ls5wiMw1WknM0ZzRjT0Odw8obYMYJmw+cIrBYMD+zbsIozC1IpOr2LCKkYLaWTAJWg8JTgISgmBtbY26rvHeI4DQbmJFp8N4dIBUqp1teGRzk0v9HuBaw7x3+BBwrUWGIPBCcGQdK71efA0hEEIsjXHxM9OSB7wiuIALAiEFiU6orGU8K5nUNfPDEY1zGDmDLU1TN6xvbvxkjbKpKzgxCculJgSJUgRj+fjzH+farV329icIpVBZRlIaOlmOEDDodfE48A34uAhnpuFu5cDKaNzVjPF8RjZYJdEZ1nuM9TipqBtDOZ+R+4BOM4RSbGxtsXt3B5UkSK3AWLSQKER0dQWIrKCqKiYHRxhnkVIii4KRrQhrXQaXV6g+q7n1qER++hGCEmgluREsHZ+QhA7KBhKrWBcdujcDzd2K0a19nn36Sd68eo0gFI11mOAJywUg2i+QUqC0ijcfkDpBaIlCIoHgAaEIWFSakhYdOsHjlafTK+gP+6BBZJD2EopejzztYBtPbS14AaUl8RJrZuy7GXNb48YN1kJIAl4n1CnUqiJ4ASHgRTwtrAgkHjp1BtPHKO1DqDOSTiEJE4mbZFRdS+gK9DAlpBCExgkISkAiIAjwAt+AUSCkoj6cMr6+y96tPYyxrJ3b4NyZLZLg8aaiKyXCBIIHqeJ+1jQNla3JTI3MDM4HQFI3NfP5jKLTRRCQQGWaaFQy4AlLlCTOeoi/e99aFid+uzDIxW+0VMvnOeshQCfvkKgk3tUAmU8YupxgDS7MCcKjdMAR0ASEdwgPqRAEBK6pMc6QdbL7srX7Nkrv7Af/IggIcPfWXb7/3e/T0Qkb3S55UXD9xj71fI43FqkEt2/eZjafEj2MQKff4+yFx/hnv/MHS2M/f/Y0jz78IF/7s+9S1809b3XmzGmeeOwJ8ixj82zJ3s42p8+eYzKeorRGakWudetueFSisMaQFxm1qVFK0jSWJM05tbZOaBwHsxHlyDH7kuPgL95BBI/yFuktBM+R0GjVQzpB4RVeZcxHgktiC3Nk6BcZSgosghAEDk8QARHXPEJEw5RSorWOG4UElMJ6T5ZqukUXKaLR1sYgs4S0yAkEPJ5ZOSX4gC0NftYw3x2z09yFNgqdTqeooOjKFCw0dUPSBOxwgPQdaFISOUUhSbwj9RbnNXW7YOPSCSACrrGk2pCHHl/4b7YR9Tazuzv09TZoDzJFoOInkx4h4yL31qOURGqFdQ6cYP/OPkfv3ESMGoxtWHnkDGcfvsDqu/t0jkrKBDqVZ3I4jvOUpBS6oZ7M0ZnGzsdU4gAfdyysNYzHI7q9btzFgqexDWm
aEmTA4Tlz4QJvXv0GF0+dYqMoWJ4fJ7yXe41ULB9ZGPri994H1lfXWVtdiw/KaM7KCoS5RHD7IEsUAWFdxC1kQAZQQhCAVEqyJEFpSdHv3Jet3bdRihMXlbQAxoWLD5CmKdbD7t4e3/rS19je3uHmjZsIIXjhhY8zXFmhLOfR9Usy9qb7rK2vUzUNq6ce5P/5L/8tqo1LAd5+7xpvvP0eOtHLHW0x3n3vKm++9Q5CCNZWV/iFz36Ko9GILM+pq4rdvT16vR6dTgdjDForQvDkRc5oPMJaQ1EUaK3Yv3sXmSqyROJrQ3JFobVmOBhSpDmJStjfP6CuKsDiA3gapswxWnLm4oBB6ZGhoUBTOgHOExD3uPgn508pHRdJiK7TfDolGyQRMJOSotNh9+CA+X7NvK5o6gbnXBtbgneQyox+t49OC4KwoByiliQ6RagCKRSCCmfLaHBC4oNAmJxZnrPrLbMgMHhM8IjoQAMglUTnlrWzb7KRnGbvBx9nPhlxqnsVm7yLSkHLBNkuOLFYviJeX/CeEAJpkOxducVk5xAzr5CJ4vQzlxk+uoVqLJ07h6yXgjKByjmYB+beQabJwgw/cxRpQeVAt2vD+4C1FmPMcrMLPsT4UkqElDRNQ5JkTOdzHjl7mo5W4ONm037gv3yNB9AnnuSdx3uPtTZiFO2KFEGAOQNMAQheEEJ8bnSr45wGAWkS50sISNOf8EmZ5TmnT59huLLGW2+/y7ws+fOvfIP5fH7P84pOwXB1CAG+853v8tzTT/HC0w9xdHiIlIoQJEqmDAddvvnNlwD4h7/1n9Lt9wg+UM1L7t7e4eIDF5BKLCfLe0dV1dy6eZvrV27y+mtvcTSpENYwGA458h6lFNZ6pNSkicQ6i7UWrTVpmjKfz0mSBGMsHk8vy8iSHDMZkZhARykSZ0mGkv7mGgd3x/zU8z/NJ174BEFLGjzT6RhZHVG/9X02msBr5ZQ0OOogqKsabxzBBkS7XhcegJQCrRTRXYprxFiLNYZGSMbTSRsXlxyMxpRNTTy8BCJEn0yg8ICSCYnS0Z0SChkswUa3WGkNQoGUOB8R5yBz3vWCV/bHvGQMMynIpKXvJL1lVswjpSDNHZW5zfr6l9k4M2I+n3L76reYNncRyiFUL94QcbzGBe2mHQJNVbG/v8/0+iFZ7cmHOWuPXyK5vEkpawZThxEepxRWgnfRHxB4BA6tGzZW1zlScwQevMAZh7EWgiTLitY7kwRkRKsV5FlCdsL1bGf/hCGGE5/25CPH3xfBxmIzFVKSJAla6+gIBxc3VR8QcgbM42YdBMGD9QHvlx8PL2LY0jRNu/H+mF2hHfdtlJcuX+bdd6/zxT/9i+Vjp89ssbm1wUOPXWarDWJXVod0ux0IgRe//T3+4s+/zseffZJEK6rG4r3HNA1bZ89y5xsvcv7B83TX+igHQinSYZ/hcIAIxF0QQZBxT06ynCeGQy5ffpC33nqX77z0Mr/6uZ8hIMk7XRwC5aCuDCsrKxweHmCtZzye4tzxtDsXUErhao9IFc4EGm9Q/R5WBXyo2RpmqDRw+YnLvPDXfgajBSJLmE2nDA4Pef2r32RjdISvMxLh8SpgnQUbEDYgvSA4G+Oc4FFakKQS8LjgUYmO1+U9RZoxYkIiFalKqBsbYz5CuzuLNlECQQSc9AQFEkXAEULACQ8yEKRHKKhkYOINSmaMCs0XD2e8XJXcQuBDQOM5TeABqSm8B2lBpARf4wKEbIpS3yUtLOS74CukArTDidDOZARfBCB83GmayZzJzbuIuqCzNWDz4xfxgwxDhRMBI2PMLYJAuYB2Ae8FQUQ0vzb7SBlQXpHUAm+iwQZnSfMChGZe1gQkXmiSJCOknlk5InUBGRabTODYdz3psLZGGOKGefJXAVDi+IGFcTrvscEjgotuch2Q2Wt41+CtxAiP8Q5jPR4ZgTsR8CKGM3U9QyoZLfU+xn0b5Re+8Kd0uh0+8cnnefrZJ0mThMGgh5A
S0RpN3JjaU0IInn32Sb7zrZf4/T/6U/7hb/wNbty8Dgk47SGRzOcl6xfPoE71Se5McQunSIDudpAIzGyOODlxAdIs45FHH+btN9+hO1jl9rV3uHjhPEqqJfSstSbPc+bljO3tbZIkQSlFnhdU1RFpmiKEwhnHYDCgahqQAZXIGFtJj9SCuinxCrwMIDxOgRIS5QPCOTQW1d64uq5x1mFqg1AxfSQ0aCmRMqC1ZG1thclkwnB1hdHhfjxFRcxOOmsRIp5w7bKI0Z4QyCAQcZsmoomLU+CEuwVYbxFKIL0H53FpwjujMdvCMVcC18T4rEFyaBtWU01HCqKLbgneoZMsgp24eDq0KYXFphCCQwQFIRonIZ4OECi6XVbOn0aIhNMXz1IPBVZZ8BGpbJdGfO2FgYh4jSCYz2fs7e1Cv4Mxjiqr2nhcsX+wj9YJRRFwzlE1BuMasjSnKFLKulnOx0ko54NHNNpwz8kZ19/ikYWX0zQNQUtkMEgVME27VQoikIPACnCLjXPx96K9W96jdRJP+/sY922U1lp+/W//KufOngKIfryIb8iJ+Em2gW1RFPQv9PnsX/tZ/uD3vohXGYMip6tAlFNk+wFdcKhBAXcnhHbXDQFqPMPNdVSeUh6OED4ggohnhxAMh32stTTGMhqNmK6tkqQJTdkQgOl8hk4T4BjKXsR6i8+ptUZKSariqeXbazFNg2sMQgjm8zkZklRIpErx2pLkOWmSgg9orcjSlAdOX0T4jKODQ4RSIAVBeFAtiGIMBHj00UfZ2dlBIMizHBFoYw6BMxat9II+cGL2PwikCMvfLNJTEUgSZFlGESDram5nCd+sDnmnLBkFT6gFqZfoTCMBG3x0d9t7fDQa0WQ95JGik/ejy+wdPjicC1hrwHgkAZWcWD4hxEWYKjYfPo+Xnlo21BYI0c30eLwLy3ym9x4hVARP2jlorME5j3Ae2xgmk2n0BJyjrus2tnaYpsEag/GWLM8RIgJnH+SmfujR/qlv14rWGhIV3XYcYE76LiBO3K0l4B5/m2U5ZVnGdN5P2igBsjyNgIUQZFnK6toqSmkm0ymTyRTvHWdOn+bU1hYAUgnW11bjbnN4xIWyYX3zHHuTI3pNRb/XZfe9W7i7o8X2CQikELjaUJUVab9D2hjstERrhdQKFUAn0eCSNEUnKUejMVqnVMKC0szKiiLP4k4oJAiJVBrrPDpJEVLhfCDTLdASJFIqnI0u88HOPs467uzscvXKFXSSIDsZ86ZG1SYmjX0g1ZAlEpsXVEcNd27fZudwnyTP0IlEKOIJZy3z0Yg8S0l0wsH+PrYxICVKSlKt8M6SKNUitycX1XGOLfg259bGdQFQWqFUazze0zQVGs8NkTA7qnjTQuVAIcgI5CKQAEVWkFmPlIvpDxwcHlLpioPpAcPeCptbQwIVQtiYlzMGadoNTjiQsj1BwUvwIm4oQXqkCCReRmBLtuvVgw7ggscGRxIUCkGQkkp4bmxv4xFoF6hmJaZsSLRGCUmqNXmWYZoGJSW9Toej0lBJj3EeS6CsawCev3wJiC7kcWLqww1BNEipZBuGiBbIC8jgEMETMV+/PCGjSyyXbxhCoNvt0V9fXxr5jxsfyiiVkAwHA0QInD13muFwGHd459ne3mZ3d4/Tp0+R53nL/BEsCBb50YQz167jheCU9JS379DPc7ZvbcPBnLzXJ02zFu1y1GVNM56hspTu+gpHZYUUEdcKwfPoYw/zlT//Gm++/S6pTqiqBp1mGD8lyTJm0ylBSHSSIpXGBzDWkQXIi05Eja1F6wQQ+KZBKR3d3CC4dPYCNxykWc5Djz2KSjQNntF4zEZj2e91kXdBCU+WSPYnM1aLFfIs4+HnnqI/HLC6NiTgccZQTia8+eoPeP3VV5FAv9cl1A0iT9FSRuDGOLTS7c08iXgfx0bx5rcngjj+bdzP2l07BEazKS/OZqiQYgGcpKslK8OM1X4v7uIW0qMZqrGtkbXGGTx1bTh
yRwxXJYia6JwJgncEL8EFhAqE4AkEnGgBj9b9C9LjFcgW/fSLD+kCWI+xFt/iMpLoms9cw91be6xvbdJRCfW8xBpLp9NhMBiwtrpKlqQ0TYNE0C06HMxHpIMeLgicFNy8cweAh7Y2l3PzoQ2y3RC9D5GoQIjhCx6lFYV1cR6cwweDJ+B8wAWPB+KMSAIC6xzGB3SaMp9N7+vtP5RRAiip6BQZnaJAKdmiSrC1tcZsOkLr6NaOjo7Y3d+lNDHXKFxD7uZEgligpzMSqeLa8vDApUusrq3jvWc2mfHmm2/ia8P8cITs91FKY+oGYSwQSLMUgGs3bvBLP//TzKYTGtPQ7XYpihwfPEmSkLZ8xQVN6iQ7QylFkiTHjzlHUAopFWmSxJi0KnGJgjwhyEBoUnSWkK70MToijkpKtnd2mYUZDz0c00TOe/b295EStBT4lqrVNA0iBIosjZPfuv6JSmiahrzlFh/Hle9fMItvx8tNtKeB9x7nHFLEJLiTCT5kaCGRsqHIJFubK6z3Y/I9jOeY0Tgin6LlHJ1I48T5iidzzFNrpGgRTyGXn2VBVfO+RZalx+EgEJ8vNTLRKCkI3uKta8OImNaI8R00jcE4F11xFWNW5xxCCKy1dDvd6AKGQF1VCJWiE02Spbi5x/toCvB+gsDJf/CXWulR3fDGteusrqywt30XYwydXheZRuTcOYtzluDbDSnEXLJr00H3vpWg1+/z3s1bpMby6DNP/+g3PjE+ZJWIYDyZMBqNMMYgAmzfvctsOiVRmizLEATmsylFnlHOZjSVAaCWgUZJZEiQViHHE/7Gs8/iguftd66RZDmJTphPZ1hrjkPuyjDfP4yLGnAi3DupIXDnzh1u3bqFVIqqqWnqhmZeYpsGIWhduxasCIG6ZScthm8nV0pJlmVkeY7UmqzI45kkogschCIoBUmKyguCkIgQ3UIRAqE1JNVSvWIezdEYg3GWNMtIlEYGEU+aILGNxVQ1iZA4a9u8lrzn+n4kYBGia6yERMnjXK8QgmHW5+LWRS6ee5CHT11gZdCnW2QkWqOVQBPQIqCUWBokogVd2nsd54aW12uRQqKkRuu0xRREy4HWJFqTJZo00aQqIVEJUmgEcZOTUpEoTSLjOWBtPHmNs+3pAtY7pIzuYWNqnIsnTpLm1LVdblbee0LwONtQFBmzcSyOiCQDTnz6uMDFB07fD7u1AmicYzKfo7WmnM3x3pPnGXmek+cZqdZIG3Oji68AGO/aE/LEi4WAVBrnI+IePpB++cPjQ52UovVF69pQVTUbaxtYY3HWoZRCK42WktFsyvr6GlmqsDaelNpDJziCMIBDOsm6ziDAfFbSVBWuY7h27Sre+ei+BYl2IS74Nu6In8DT6eacOrPF/v4BG1s/zd7uLoO8Q20sKoAwhkR3Kat4o0KI+TBEhOR9sCgZUwqLZehcZP0HralMjcpSaueQQaKJhqbSFJGmrKyucSAUSQikBLT3YA1CBIypkULHWG2xYwtBkqUoqfDeYCpDpyjwtmFvezu67iGQ6oREKWpjj+8tMeel37+42o1EiUj6XsQKjTF4lfLY+QeQViPclCYcImykONZVQAfIhSQkCdnEUCkQKiznRwQBweFdPAt98HhsPBG8xtt4OiDbdSFjGgBBBM1CwLV4unUBZyMibUuDIBIV4iYLXgkckeoXPPgQU2cOgQ8KjyLJUirj6HZzZvMZOIuZz0nWCnpZHz8/JoyfHDK8/2AU7T53r2O7+Jc7Efe5EDnKQkiCa9FiD6mNa8W3uUjnY1TpCHgRaXuL0NKGgAmeNFGg7u8M/FAnpRQCqTU60TSNQWnF6soq6+sbWGtJ05Qsyzh95hRZEUnSD1y+SJJovvTWu+j+MLprLZT+mLBkScLbb7zD0eEheZZy8fyFluuoEUIjRBIRunZXFlKAFEgt6Q/6TKZT8rxYupxbp06hhEIK0FpFl/QEiqZUZO649nHvPUJGt63T6ZBlGVIpatOQ5Rm
1aXDWoYVCtkn5oDVZnrfukieRbU7SW6SKiwr8CZ5wPIWk1pH/SmSneOdwLlLQWDzXeqSOe+Xi5FsAKfcsrQXi2p7KJxFmgPFkyvUbt3n3rfciyybRMZFd11RViTMGoRTZsEfVTWgSEa9tcdq27+uiVcZY0zvqqqGqGuqmiQT6uqZpaowxeOvwNm5scWMVKKHRMoI5GoluEXZk6120p7MXARcCznuECOhEtQYKAUljLfuHBzTWRjZUsExETb7aZdgf0jTNj6jEWBjfyXMxHN+bE18hBL5/8zYhBJ554gmsNeR5jjEmutbGxAqdEONG15YDurCM8tuwObr9kU0Vz8+8KCI3+z7GhzopF+iS84H5fE4IgdNnzmBNw+3bOyilqKqKu9t3GQz6dIoOnaLHxQtneePWbeZPfRp9uN++mkNNduhkKYeHR2RZQlXVlOU87vzS471oKVJh6S6F4MGzBH2AZWqj1+8jdMLk7g46SUjSZPl3QsQKDe8CqnW1vXfx9drFrLWO8DfRvcqLSNdzxqKkRMsIdkkpSYqcllRHqhOCi64OAoLz8H6qXQhoGb0JKSRSyEhLa+9mCPEmG2tI9ALsOZE3+xEerPfHrvcJNB4hBAcH+5zbOMfq2hpra11uX3+P8mgPX1tUR8IgY7i5gd1ax+8fsrM/ws9pwd6WvNC+/mJ+FpTIJElQiQAZWStBimVqILQusQkxPkylJlMCqSRdxfK+Ld3kE/lCrRTD4RCIMdyilG02m1GWVTR+3z6OoB7PELMKW9dtbePJxcoJVL8dYmGD4Z6nLcbuaEwIgSxJ6LYAU/R/Q7tWYgTt2xykFycMfDH/IdL/VNaWbAlBt9tdrq0fNz7USbnILTnvmZcVjTEIGdG68dEIQuDGlavs3rjF9vWbdIXmlM74xZ//DLOq5nf3DZXK4mlJwO5e59eef5rZbM5L3/keHkuqFMM0YYhjVRo2heO0hrO55FyesKIzNJKeUjz/9GMAHI6nDIZDACbjEUFEN1NKidIKrROkVBAiQKFkJBZAaIEMlrkwu8ifOkeeZ3GXLCsO9/bZ39lhPp4wPjhkbzbGJhLlBR2pkQG8FNH98YHgF6yXdnGHgBaCIsvaQuZIqBZIukGTBhlrU0Mg10mb5+Qe38t/QEiyXAwi0vgWm5VzjouXL/Los48iixyR5SS9PlZr5iIwDp66k8H6KsnFcwyfeIiLzz3BmTOn0UIgCcgQWkAjGn2SJOhExzBFa5RUMb+IaNetWGbMhQ+R5WMcGEewxz+XFETaFEoAGwQCSa/XQ7UItGg3GmNi7jK08xp8ACeQBvxoDoczhPHIk1zjxdy/38U4YZAfNBZP9z4ChfH0jbB0QCBsRFhN8DTeYjmutAkLMkULeCWJxre1n0maLrGGHzc+HPoqQAiJaI/txhnGu0esDldYX1unnM+woymnQoaaOgauouuhe+ES586e5QsvvcTf+PzPwhsvtTuWJ5fxRGnKilvXr7LpYCV4SCOzRkmJ1gKtoRKSq85QIci14JFLpwF4970rXDq3SaI0h3v7dPOcwaDPvCxJ04Rur0tdmdbgDFpHqFtphYjMN1i4lN7T7/ejq0IkFBzu7XHu/AWGgwGpM4R6SrrSw8hAGgQdlaKFwmkVX8PEKpUQSyiWO7H0kCYpVkHaorZ4T+4kWmhmOnIrU5Us/26xgMW9gOu9C6k9yVQLPIkg2Nzc4OyFUxyVh3jnSBR0N9ZxNBzu70Kvi+92GLcMHZ8lbKyt85lTZ/nuS6/x5lvvEIJdbCn3vFcgRFfVx/gztP6tX1yrFARFzOdJiQyRE4oI2KbBGYeSCuMsNR5nPbYlPkRmVIXWWYw9WzBFCEmiErRKcDZgjSc4EC6WnCkXkFIti0FiIZJYug4nHNflaflBE/pBIO2JFDq2bqitYd5YbEsb9LQprNbghYwxfpblVM61QBr/vzHKpN+Ffo6oDEpIjDGkWYbUcfGVkyk9AxtkeB9QkwYpAx0teeKxh/n
4E9uX+L4CZmgRjUsjWevqU7M8dwiCRdeNZQZLSRYtc8KZ/eUpr196DYBOo0Pm+ZjcQAAWRS0IuHb3LkdOPsby4uKjitqjC+Wnztc4NtclV5qPnZ4jxuOVy4bv/fQOzSjmcy+co+NZknFMrHNSavSby1ytNTFBnRSBNZLxXIZYvMMXj9R56XyDiG1kOMDbGPFYIhCyxng85i8yx8yligWcW9DSBQxMkYJwjbAzpoHAtZjNmFBVtLwKr9si9O0EukRYij3JtTSj1Wpx4YkLpO1F3r92k6XRgG6yS5LGCEIiAanSTHIJexNO1CTLDY+WnECoWSHjncEuQZZgTB1rFaaAhFBYZEELJaUlUgohPQZSMbh1my+mW7SEh2r7xAhqWUAoLfruTXQwZD+FdKdP46imrhLkRKCNYDy2jMea0STnJaP4wm98iv/0az/mz/7o/81Tn/4Vjq+szXhwUxazElQaY4izjCvvv0k86PKFT3+Km3e6+H6NPLMu6motNTxOPvNx/ldf+2MGk5j/7C9/hWb/FoxGaFyALQgC3v/gXQBOnjjuxKN4CLoQoDyZMIonSBWgh2NKHN1gOCGUEEhJIwiJfIXvyaKSyoDOnUYs6lVNiSLo8ituoxZl0X+p5x5sOAohkEqglDt3re5Pv9dorIE0M67GVxtybciMpmwcQDiiX1OUCGqjnbmqHci1tSC1di5SYSk86nhkoVxph9TkiEzk1HRONMlYSj2enA9YbGREdkBDSGoCrCeprzTYVzmXdQ7WGRf+MCW5uU2SJ7zb3eFj+/uEniRNFTbNCPspZwKPOAz4icoZZlnVtqPzmVKvMn906BrtNAxZTfz0TacNnKUrik1V4JLzropjZHw2TJ165rNAnabuMOh1ud5cwr9wBiPnCHp3iSZ3aA6GMBqxbCW5TBjWDLUg5O6tDQa9DKuDYhcX1f9lk68z21yhtKc81udrXD6zwMXhmKCTEdQNnvapp5ZxnBNnGVm8zUoQYY5L5ucUAovGoxtn3NjZY5xachTSaprXX+H/9Fc+w3/6ez/gp9/4//BmvUUQNVg9//Q0d1dFPmFv8za9u9dJR32eOHWC58+c449e38AYRVaY/l4Kj586wT977316iYt+/t3vvsxvPP8Zfvru+1URg+/7iKQU+uk5OPS8pFL4QVDgtCqMtUwyzcQajMmQIkEK167meQpfeYSewJeSUHp4ymlAD4u01vVmGl0I6P25kNlrsLiIcvmihYqr06ESAkoUyAmiir66QgJDnuWkuSDJ0upMtVqN/UmMZyXJcMJAWxqLzeoEJYL7o4xHFsqmrOFrjbY5uxlcub7NjbtDIlOnpgPibort1IiUh9E542xMmBoSPUR5LaxO6O3eJe7d5ol6i+WmYTzcY2R99vcUQxvi+yFrmWLgSTy/8DMQB+AfYRqifqBwCqoC8nIXO/y50nmpdrDSDMoVZqK49OYNnn66zRc/8wv88EffZjPtYZsL3NrS2M2cF5ZWOH+0TW1/h814j3E8ZtGvMehOuLc/IqwfxaoY32QkiUMkt8J1k4B0kTkh8LwAT/js+XCvbXis0QTdBa3xyfBDTaRC9shZbkoW6x6mJdDjPlb7ZMajm0zoWYiVj0AgjSKzgsnln/K//rWn+M6tnH/0nR+R9Pfob966byMDt3ktzc3x/LOf5fkjS7x3457jHM1icmEJhaWpYO30KX7vn/weq0++QKOzyOUf/DGxLwvKOkkQBCwsLLC9c5cgCFDKc/JfOouz56xaQC2gEdZBdCIUsvB7c2sYJSk2Tl26RQgwBs86P9T3oO57NKOAtvLwle/ckjIGUVoEpjB/Zy7CFucWqIPBmJLjonjFFoJa1eYqCJRHnRBtHMAYQFCrsbNxEzO3wMrCMkeXF5ivO8ug1my7ZvRHHI+ep0zHWAmJFmxvWva2aq4iw1q6I83NbUs3j/E8TafWIM4zxHAEepPUX0GYEZMb66wMBnxqRXH2SEZQXyTNGhhhuLlu2R1
KrmvNMEtda5d1sPKmyEuVnRYO9cTcpxmrSKK002MP7FCyeBAuMlbBeViBlQppJS2/Rp7npHsjPnjrXXobPQI15sY3X4HaHPV4TN2MGXkDMjRYyUoYUm/52FDw/Mopjg06/PkH1xmNYDJKMJXvI3CxRhc88YManvSpxYbjowyyIcLTSN/lf5UnCf2cuoJ2B4IwI0lyhrkhQTPJx+xnIxIESiukzdGiwW3ZJp1boR4scuHZBf6LT7xEMki5urlVFJcLjPQwXgjSY/XIeWwWsrPT5+2sx7q+jFU9PN9AapE25excwGuXP8AKwfGLHyeZOLr0re0t5hAMrMMDWlha4OY3b7K2eoR6vQ44P/8w6PTmzQ9oz32astvDFikaa8tAp+tdDaREKR8hHRZrmmkmWjvfLTeoWONPHCOYQhNKCD3X9REFAZHv4yuJNBqMw4+1wmlXt1nMwoCKwqIqhzm46Yvp6wC74yH7G7c4dmSFd2/eYG5hgTPnztH0a2jl8f3XHKj1xQtPPRCL6WHjkY/8xms3OHk0QhufO9uWQZaRSshtTk9nDPsCBpY8GbPaGPDk6eMse4bT401ubYbkcx6jyYhIGkKdEQQtBr2Mexs7pCKkaTukuWUioev7JMORwzqxeZElE5U9ZGzRCVCGsw9EtEuGq8NjiunqUldihvVXYIUhizPefuMtPOVz7YNrLqBgDCIZEvg+v/WLL3Fza599z2e4chI1GlJLevT1kKw/YGIyjIXddEKqxkAEuASzA6OaGRaEVEihaGcZje4eA7vPdcYs10I6zTp1JRFCo+qCOEsZpzkm99A2QEifQS7YyWEofZSKwPMYRXNs0mSQLBDFa3RpQqrJRjlGzGFEAS0sfKSuIY3i6rpFSIPnz2GVJhcCY3WFuO4JOH3hcf7vX/sT2kfPsDy/RNrMCepNvv/Wm3z22RcYbu5U7GOHR7UxzmyQ6bBHiVRfmvaVczKzAR/A4gU836Hc50W6RlqLNtDPNcIYlMkJPUXgKVQc44mEQAp8KQiUKJjBXLuWwgFrF3sAYsbjtoV0PiwRYrGkOiNPJgTBIoPBkBOnTru0DBJpS5YR8EIP/19G9PXKKGHvpiFAIYxgsVVHNZoQePSzEd1+Qncvp+FHHD1ynOEwJdE5H6PP4qTL24FGRgJ/bOn4iizxuX01Zb/vM9QxmQ1RwnWja2kIgogwdEnizBb9g8qlMNI8xxcBaZpW/Ayz/uMUTOqwc10i0rqZnjbQCoy0QOHYF0XHgENZMB41k7A0uY45v8zOmSf55r0xC/VFVrjNSFsWhoI0EYzHY3azIVbVkEKQ5znj8ZhmpzN9oNa6BSQswmp8EppiRKcWkGQ5G5MRcShYUCHYCGmbQMQ4FlzP6oioCarGXhhwZ0kx0jW8rIaQdYZRgJY+bev6+ZNUkqc5qdFu3xdloEtjRYIBMuta3oQGFXdJk4kDD85zpJWEKDzpkeU5kdEMJzH93h55mvCxc2cZDEZIWdABPgD5vVrY9z0PHpwDmX1iMyxnpWNnMQfyz55UDmfXGtAZWEual6zdUwBlhSXwJKHvEygfX0o8aQk8D195jp3ZWsqyQitslUY6aPoedJaNtdSbTZqNFhiJwcMz003fSFyp6SOORxbKX3nsLBOTMxyOaNd9zp46ChIG4yExluFkiffYYnVtmTidEKeZ41UwY06MrpI1VvHnIuZkQGOlyW6m+NN7Ccr4tLw2Ya2O9RUoST0VNBsRjeYcuTb0hyNya/H9ACEEcZaTmxxH0lPCbVQuPBS4KAeF0mLNTCzugB8hi9jhTEqlqgGzICG1ln5tjcnCKnuZoq8kAyu4HcyzPoHH8Vn0hgQ2JZQBrZpgs+dqOYfDIc1OZ6rFjSZLx0RhzudfuMjmB68TN44ysZodGZA0mlhRQ0zapLaOETUQPkYrRtTw8ggPHy19RqEksRYTCSyKXDiui9wl9TBYEmHRoSA1hqxsKC7h5aCoMdVYm6FGe5g4RhSZCIklEI7
eb67Vortxk+/97n9VdAxZnj//GD989xoycGiGWeYi2EmaoPMc5QeFxjnQp0FtbvGhAimqf2c1bBEs45B/KJx+k1qjhOM5FQJQspDhgufDWlKdE6camWmUyFBSEEhXSOJ5kkBIAula4AJf4SkH66mKiKstoVtsGRQqc5iWZqOD74cOO1aUCGuGCmnww3aeQ+ORhTIJYsLxmNMn5qi1Q6Q3Js9zan5CYHKkn3L2SEgW75OlFmMFRlhykxNlOcf3B7SjGidPdei0aty+vk2CT2+ScI+EvLePF/oEQYDwQ+JUMck04/HYpUMsWMbuwRuXCC7Zj8qd1BaRt9J8ne34r/yFh2xYQpSho2JDnolNaCXQUnE76tDXPuNRhu8HGF8hGzX2TJOtSZeF0QZHsh6NMKGxL1k2q6RYut1eoSkFSZJg8pR0PGT37g2eOXuSUWONV5IniNoLDGoNun6LzEq0cIQ2qliS0pco46Es+FagVEQGTHRCSu56BAvBV8or8JISUm3IpCYlw3puswo95RagtshJhk0nqHiIGfeQ1uJJ6eqbZcaZp57g7/zpN5jEcbEIXR/ib33mM3ibe9gCnc9ay90791g7eowPLr3PaDwm6ExR42bH4rFzLnVl73/PneP+V8Whn+X32kLzm9LcnYnACuEQGUCA8g8FDC2x0VidO98Zh3If+B4qEyjpQLGklITKI5QeQiiUBGk0u/sO3uTE6hq21nDzhVtzC3MNfvz22zTnFjm+ssxWUUf8KOORhfLm+Y9xtL+PxwSjJwzHY8IwRCEJbMRu2OaONphkSCQsPi7ClgcBnoioTcYcqRuWog5X7va4vTVB+wFDqckLlDaBhnSMSSZoq0gSTZq5CKwRzhARAqwUmGIXKlHHZ8esXzl9T1BWjs4eV/3OIaFk+sANkiGCa9YnUC3IpesBxCC1JYl8NvAI8oi4uYJXG+M11qnnNbLxhPX1e/RHQ1qtNu12GyUEjcY8/nGP777+U3pmjjw6jV9bJTeGvCCy9dAoIVFuNp3Sto52LZASZRImJmNiYxKduQCOgDRJiYKAZtAgEQlz7YgjDUna3cAagzIZdz94gzs3rmCxPPXsi6zvTZj092l4PtrkxS4PR48t8/r77+B7OcdONcDCaKJZ3475yfXrfOHp50j3BwTWYLR7XiVEpqAs6HiAgImyyqg8svTpZjfOMic4+0Hu+8N9pmxyE8V/VWPbgXRIeSaXp1QI4c88+QIA2yXCsTZ3pjIOhNsvTHQPQag8bt68igVazTZJGLnUBwJpcjxyhqMRnXqL4BEBs8rxyEL549oqx5sL9Ec7rHV3acUZGo88qtNrdrjRWGXv9DJi/y7x9hVau3tEwwm5MHRJyHRC1o+5tbnDlc2Y/SygG4aYQFaMt4430lG4GjyMo94pwtduF3LB0mm425gpKs106menf/adg4+pdPDd6pHTAEP5YIUAI1AiIPc8bm+OuXC0Q+iFgEChCEQT33hYM+FeMOKuEHjZiJ3Jy3S7m8go4tjRY5y78BjtdotareaES1s21+/x+qs/IbUDkmwXJVp4JBg6DnAMg6rKAZULLkhFjHGaEkEqoNaoc6yuyHPtNi2dYdc/wG5OyLOUt77+PX7U3S3KytyQVZMxfOfrf8iv/upvcGOYFxUuzkyTEvzI5+q1a/wH/+YJXnhyESsgzTV/8dNtfv/r69w8ddrh4WR5UdhhqRiwRVltJSpxKUfBd1ZEZilM6VJKTYHXM9WilfzamZ+zwm6L77JMXZDirPbAWigoD3EPv4yHU7g7tgK6Bit8t2qsw9lNtEZYjck1wqYO2xjIpQsc2Twn1ZbETthnPL2u4iyPOh5ZKNevbSGOrTKYO8eWnOfpxCFI35k7x52VJxlGkrEXopZPcuT0KVbuvol5/zJJf4QfKDLrsZdLJjZEdtrEgwnNwCNPLClUms/t9q6zwSqJ1dIhqRUT5/rrpruvKYSojJQ5YTKVAJbrwKKoYDmK70fYqllZSIe
ZI4RAKIUqWLmkCpnMtamHNcLGCp7uMN9aRCDJE4vOJbmWGFNHiw5JmIO3iolu0Bu8D5OAo6fPopRPnmuyLHNVJ8Y6vB6twITYPMSXTbQNCYpSLk8GtOqNIg8XY43zm4RQjCdjpMm4/Pp32NvZ5JX3D4IyVY3PAmqRx3NPtVg7ElUL5YnzbebaIe9c6vI7/+0tet1dPOEsEFtQBLbaTV59/TU8T/DE2TZIt7xCX3LsiEP71sZihcRoiyGvNs7iNEWU3Pmv/cGgur7Lf/HP8GoNADqnnqCzdIxms+1gQirxsVVQDlHCdbhXp8AhJWwHM58rx/1+3AGBnCkyL5uj5cxnrJ1+QkqFJ32wllxp4jyne/sSzXodZTzu3r7FXaBWq9Oea3N8brV6DlqDflBC4CHjkYWy2x2RZ3c5fvIIQXOet1ZTloYj1mOP3tY23lyDTs1jTmecHK9z3u7DkQ7rQUg/M7QWQkxniXFjHtVLGV65Q9wdY9KM7iRD4+MYigrf0Lqpt8JFwQpFiuN1MAVhZyFsRdOPWwQgRQRCoAXo4ikI6yNsDSkDhKcQvsJ6jnnZC0JUECKVh1CKzBfoyEMGPoEKaIgQX/qEtQb9PmSTIb7nE/hhEQ10zb/GGsdUpctWN2cCKqUKf3YauNAFvRrWMXwJmdLpBIRW0jT7oDXWat787j+gu7vN+p0bD302tZpibTWq/vY8yRc+tUzou+Lqx061iEIPTx6MGgLMd1zRusOfEQV/isFazdG1FX762st84qkWjZrTGuU4UEVbaEJTaI5Ku1mKyhn3/tVbU8qByc696vfB7cvc80PCuRUWLzxH1GzTnlt2pnuBn1oGbWYjoVWo4MOqZWZt19kXC2mv3raFqV35osVGX2l5d88WCpcJssmIWi1iNBrRabdZmF+gM79AWAtYnqsBoIKI8Tgly/TDr/HQeHSMHiJGk5RbN+8xWponawYY22Wue4WlsSI6Mo+/dITlOGOuf5c4GzPKBc35BowSdveHpKMEEfboiDorkeLyqMvR1QWCgaA/hkmWOUJWXXoGDrzRoXA4LBsByCLUXe6YRnhYIdDCIw8i/FrTlbBJ193u+T7Sq2NUC1RQgR4hXcmbFRIrPaxy2jHwnECGYUjoBygkSnjUaiGBUmijybKcUTIuFoUDfpKFaV0jIgijKt3i0LIPFjI481ARhAEkKfM1zfi9b/DyD/6c0aA/1XRAre7xzJPzM0/DLaZ6zeOzLyzRbgbMdaLCWigqUAoUtjLtY2fbjKfWPbOrVUqJ0Sl5miOEYH9/D2MMzz+z4BATENXCHI2dKdxsNNnaGxcpBKCgrYdiEy1SGlU1VTHOHK/TanpkmebSjTE6S5hs3+bO9m2Xv/VD6kdO0D72GItrZ4mimstrmENSNmPOzm42duaXD4t7lgYu1YqaRlTLn/e5R/agKT4ej1k9ctSxmxUxjktXLgNw4WPPE3kh9l+GppTGJ5MZwzgnvHuHZhizqLvMCcNSe4luw+M2ivXc0NULTMQKUvrM2R3WTJdanhJYjU22CFVAP5aI0S5n59q8dKJFFmsGieZu33J7Z8J+Umdfj8kQLpktPayUaKnQvo8MAvwwwgQtRNhGeZ6z632F8JzQVZa8tRjrYwgxhZ/gWk0MUgk85THXatGo1wn9kDCoEaoAtEtMWyEdVZ01pFmOsCV4skB6wvEQUuTCRICfaALPx0rnMDmwLI0xRZWSi4CgPCeURyKf9Vf/nJs3rtNoePzCZ5aLML3ghecWqUcezUZQNSgfXmRTE29qjLm1Mw1f3bcwrTMFf/KWiyCeOXWed959nzTJyLOM5ZVlrl27QrOuOHms4YTcTk3S7/1kF3DI+aUZaQRovIp3xvN8lJDkZQph5vT/7m8eZ2WhhrGG/iBlv5/y+js9fvp2jyS1ZMmYwa0PGNy+xD0haRx/jKA1z/zpp4jqraLBoESyP+w5HpLQ+2bLFj7oLM5dWVM0M6ezk1X+VqjnOJmAtawuL7t
o6/wC4/HYwW8KQZbHhVYVFXv4o45HT4koNwmtNOeCl/ExqalrD50ZtrKceyPNq3HG9ignzSNIDcbmNITHaixY3Olzul6nUWuyn1o2+xNGtNjJF6k1TkEzwzMTlhc8xKImTk4zGsWoOEYYgycV0guwnk/ue1ip3I4qFVaoAqOzoFvDgTiV7FVFtAElnEMupazYeKWUBErRbtWpRyF56vg3c5EhbaFtCmlQSlGrBUReVDwcQ+RZ6raPMZr9e1e58vZrKCm4+MTzvEcEOieOY7I0IwgKEp8iyCKUQnkKXxg2tncIfMl/9j96isX5yJ2zEKxKFYiDi6WKc1RK4/CuXh7IfUHM8nNbuw76UXoe3f0uWZahkBxbW+XVN17m6YstmvVpKmGWTwYgtE5IpRDk1c7gDqgHEaGS6Nwl9PUMiVC5iQkhme9EzLVDTh9r8utfPMI4zrl0Y8TNOxPevTJgv58zvOUYwPbe+SHBwhHqxx4jWjqKHzZotuZI8xhfukICITxKI9cJXRF3F7Mb2NT/PBidvV+gZ33c8tp3N25gdM65Yye4fHPLNU1o7W5fgM4cB6byPKzRFf3Bo4xHFsrlpQXIIjq9DR43E5r7PRINRuWkeZ+5/oTn6l3eSSOumzrKD0iBfdnGBm36aAajlFPnPk6vNY+SgvM0GFHnVSshCJgkMDCGYWgZRyFJx1VDSK0x4JgkBI40tEzmFn6bLCZVFVunlFNtWU1umUYREs8IPCuRQhFIIDW06pLlhYh07zZgK1hTiXGVPZnl3tUPeP3tVyklQecZ3f29+/ya/Z0N1lYXubu+UYEZz5qwU+bnAKOT4jWo1/yCNqQIZn2IuwRMBfZnDDG78ooxmmTs7iccPXqUjfVNLOApjygI2di+h1KCX/rckYJKfLpUx3HGzl7K2WPHORNE/NgYjBe6lEg+wS/IULI0gXoN5QWMh0Nu3LoNwMpiSLt5sOyszDdHUUAU+Xzq2RovfhziNOf2+oSXX91jr5ty8+6EdG+DdG/Dfc7zUWG9urb64hEWTzzO0topEIIoCIs1MUVQqDTsrG4Us1BwD53F++baGEMYRgV+rXttvtXi//W9H+CHdc6unfiZz+bwePQuESDONGE2ZK6mIXO1h57OyK1PYxQz118nbxxhtxbRjwKkjfCjJnP1I9RbF9mfDNlvrWKDOXJrMHhYIrQ1ZFlOZgWjKCEmR+dAZpwgFbuvlCWCtUMvR7mFa8vidCAQQdElIjBSVEgEFkM9lJyacxE03wii4Tr9rRuFH6Z577s/4c/v3HA7+0OGlOJAtVi95vGZFxaRCC6eb3H8WJ3bd0f83X94g1MnTrCxJQ/skhX7lnV+n/QV2ty/EMpo4OFl8ICU30cYhbYtQLjubk7o9jKeeeYid+7crQCmz547xTe+9cdcONPi1LEW7kOi0pbDkabbzzi+WuOVS+8iAAUoa2m0G+zv7BKGAcKTBM0mOjWocVJBtsy1PcKg3EYPX+H0mSEsUc3jsTMtzp9uYjTsdWNee6fHu1cG3F6PsSYnH/Wqz/dGPXq3L3NDKhCSudOPE7YWWDv9BK1Gqwh2lTvMwU1PVP/M/H0golt61dNLtwhU6JNrB54FrgoqSV13kCoQBD9U1g+NRxbKyb179EcjzsocEShGqsXYazAWdXpinjQMyfDZCZvE7TkmUZtaVicQNQZ5xK6fISKwqqjQx02atVkBfS8IPEXgNUmkJs0hyXJyo8mtZpIl5DpHVbctC9/MBX0QgkApIt/h7Zydk0iTVakT3d/g6k+/w/fefnWaRzs0ajWPC+eaPPFYZ7qDWhBievyZE01WlmrV30pK6vViGosHbYzTiDs724AgzQ4KuS2xg4RFBR65Fhw7fowrVz6g349pRo3qOdrqe/mZCvGhz17AtDRNgLDk2vLHf75OrVZj7egxtrf2CQIfYSw7+5tgDV/5haN4yqM03Gxhmm3tOc2+evIUN2/dRkkfaTSesJw8tsK7b7zM2tFVlo4
ccWmOyQQmU8342ecXXZDOTi9wutHZKkc5e88C15S8sljnK1+o8+XPHCFNNYNRxntXB1y5NeLGnQlYV9xgtAtE7V15E4D1179DY/k4yg+reTg8V/PHzjC3dBQhBIvzSw4Cs9yQsPSGA0aTIfcuv8Fg4xZREJBkGX69RmZyNz/Souz0gU3BSx59PDqanV1EtU+yWWvwXSXRzYCJFxILn4Ftoot9NpM5EwE2DUitT2w0ZBMEgkAEhH5UQMY7KH5rDUmckaYuca1zQ54bciPQxUIy0mJsXgRtpkQ4C43QYSdhWQsnDG+9ybU3v4+w8M2NO858mhm+Jzm+Vqv2vueeXmR1OXKLWRhOn2jSbPgoT07DBoVUPFRDVU4KxbeKg7uvEOSFUJa020IIfOXAlY4eW2PY61KPfN5//32u3RxybKVx4OurRzx11w6d/2fFF6e/GeMixe9d6XP11ohf/qWvsL/fL/JpmiefuMjX/+yPuXC2xalj7YNLV0CWGf742xs0m01OnjhHlnqOZTqeMBqOKFBei3RGxF63R0PAztYm1lrqNcXJtfpDQyofNmaP9D1J4EmadZ+jy3W+8NJ0M7x+Z0iWGnb2U/7pn22S5ZbVxUX2u5uM0vSh39+7W9CxC0HYmj9AawiQjQfk6RTD9fjqKmGzRRA2XGNEsYs6OJTppv5Rx6MHeo68ROIpbqmQm1qAKHaRwkwEi2dSlIGaFgjlgSdRkV9Qa0uUlYzHCXluSNOEPE8wNncBAOO+y9qyIMDlLI8vRvjS0jYjJnt3HcFOUTXywZ9/m96eo3B7bzLGFMzBAsGF823qURkZhPNnWlw832FlIarmqQz2HA6gVIx3TGOG1RI6JASllmbmqDIlIKXk/Plz7E9yV6BtMobDgeteTzP293bp7u1y8tgaB/v6/sUPYy2vvr3La+/0yHPDu5cHrKysUgtX2Nq+TVCLiAKfrb1NkjThq188g++pqb1c/Hj/apc762O+9Atf5tqVDfLcwa+Eno+0knRrn2eeeppr16/y7T/9E9689AEWyDIH1vxrX1xhrvXhbUyldhIzG54zGafGfPlWOfNe9TDh7PEm3X7KrfWYMHApsS99+gV8P0R5HkHoWLeMsfzO7/8hw9Ho4AVYS9Lfe9jFVee8s7HBxs4Oc3PzfPL5l2jPL3H33h0843gw650OHgKlS4q/RxuPLJSx33T5QlcKg7Cuml4piRauEl9LhS4K5LM0IZsYdE9URdI6d1EooRzWSq2gPPADxUo7ou07YlWTawKbcvet7/D+N3/AsN8FDjcsQxioCoP3pU8s8NQTc5xcc6ZfqxEU0PdUocdyMmcjbKZ43VQCVQYAqI6ZktlQ9WOW4yD0ykEPUAhBGIaMtntcvXKVMPIJAp8wCKnXahw/fgwl4M033+SJi+dnv5X7ttgiFC8epC1/xtAGvvujTX7va/cqwOSXXvos82c/wY+v7OBllrrNOba6yhuv/Zj5dsDqcmPmG9wk5Znmj7+1ju8HnDlylpc/uMxENzDK0Ih7BHkKXgewDMdjfvTmm9U3BL7kL/3yUT77/PJ0/qrHcPCGqhzrzAboZnYaXS4Z5bu9hEmsiRPND366h7WwvhNzdyOu5qnZiPjg9j1ee/N+KrrZ3lvlS6RXWjrTczWORKycb5OmGUsXmng1Ra4t3csTxjsp2+/v8fWv/xGe8vhr//7fJDSGJE05d/ZJAk8V+ImP/tAenV7dXWdh6jmksDTX6Nj5fLkx5FqTGUMOLkVRkNh4UtKMfKwNaEaSFbmHsobRnXe4c8k9uPf3ttnb3rjvvAvzIctLAR9/ap6VpagQGueTXTzXIQgUWEEYFHARB+zMw5yP4r73xCHBKyNyws4KmJ0KaymUovrU/dNdaEpjDRsbGwihOP/YeTqdFp1Om0a9TpZm5FlKd2+v6gm972sOf7d9YMbxQ4dFcOV6l9//2j0+/tTHefoTX6bVaDFIJH96+S593WE+T8hMQqPdZmd3h5eeXaA
e+ZQoD8UXkWvY76VI6bM+iTGe4smozWaeg1J0VhfZHOxy5dp7rCyF1TU8frbFL3xqmcX5sACVstVG+UBrnCmeUXlyYx1f58ZOTLeX8hev7DIpCuPj5MPV0HA04o1336axHFb3YrGoQHLyxUVXmGItrSM1ZM2xa1NYd0IKao06tXqERRBFEVEUEQQ+/md9jDZsbeywe7XP5W/f5nf+m79Lq9kqfHBDanNXqfYRInSPLJTD4ZTKy0GgVB44FoenstoJ8aRh0Ruh+9sgHIjzZLDH29/4BhbLdp7zZu9+02B5KeKJx9rFdwraTY/PvnCEY2tNPE8UOUX3dgXzOhNEmz7gD1u04sDvQhzUdNUiKEqrqrhI8Vpp687owun7xYM+2Dco6Ha7ELaoRTXnPycJnvJQhXn2qJTbP/ew8OM39siN5flPfpmX78bUGZHHGckkJswlTwWai/NH+d6V9/F9yZc+s8ps3aktJkJ5gpc+ucS3f7BNmo84mvX5tVbESOZsPPY4f/L6q9y7e4X/2d+8wNJCyMx0zhDvFuZ9IZziAWrfWIjjvDIJLIJvvbzFj17fYzTOyfKfvcCDmmL+dAssHHlqjoXTLWpzAcZa4iQhTRO3fpQr6PACn2azzWgS4/s+vu9Tq9cJw4BavcZkMkF5ilq9VhgybsOSnsfC6iKN+RZzxzt87T//LvvdfcAh6/mB74hpA/+RH9kjC2VWJH6VcIC3pxcDpIAg67J/483Cx/s+w/4+Rjt+hXIIISjXXqvp84VPrSCBJy+2WV2uFa8H1KKAWSOzKD/kQIkYU7iG2Xad6ecetmvOHkPVlDB9xRY9laI6x1QgS7N2Nq5y0Bi+/1xOyH3fR3iSPIkxuWv/SQsCUlHQDEzDSo86HnXXdRf9b/zqCd691OcP/ujvc+HxpwDD0eU15nKHfHBcWt68fZ3X33qDf+srxzi63Ki0sp3eJkoJfv1Lx8Fa/vxP/oDHzp3nv6t7CCzf/cPfx/My/ta/f54jy3V+Zl1ZWXdX0PbtdRNefbvLcJQxHGve+qA3vQsLubEHzHblCcJWwLFnFzn36aNYa7ny8jqtIxFLZ9qOFsF30JDaWLTUDCcTFhYXyIx20CBlWWUYUqvVSbKMhaVlavW6s4ikrBolIgSj0YhavajKsu75lrSGkyS5Tx8k4yFeoJzgfwSn8pGF8rnFBJMMuPbqt8jThO988NYBwQNoNXwaNXj6qUVWl2qUemW+4/PYmU4hnIIodNTaVhy2tB/01wxmSjGclSoKhLhS202jeQ9e4IKSrWIqkFPz9PBnxX2fnRHqMk9RCmdZHG+rTbS60Gaz5fRnlhF4EUqDjlOGZowfKNfZr2RVLC4q9S+r+5+2nBX/PlQmZ9+Y+mL1us9f+7dO8Xf+wTW+/xffetiHeebxDi8+u3TwXqEKXAF4SvDrXz6OkvCdH17h/cuXEAI++fQ8X/niKRbnXbXT7Ewaa9ntxvSHLgotrKU/TPnBq06jJKnlxu3hQ33lsOVTU4IjTy7QOd5AKjj+9JIrnpcak2fkueXYZ+YccW4rot8fYvKiWMT38XwP3/cxFprtNgsLCwRhWBXiIwRpmjOZxA7t3miM1QUrtAAhSbOcLNNoUzKDGbTOybOUYW/E5hsu6Pgbv/hlhrsbfPftV/idnQ2kUqST8UPn/fB4ZKH82v/tb08/5AlOrLnd5PyZFufPtBFCcOZkB78A0FVF+5GoOuceLC73v3L/k3mgk1z6HIKpNHzYmNV6B85+v9H7aFrLPSxb/JdlDgz5/Uv73NlwD+Do6lEajRZKSd55+6eMJyNAUKvV+OznPo/VAcvLi1y8cGHKKnz/WQ78rPYBuD/iM1vVMPOWFPDE+QX+F/9xQHeQgoV3L/e4ecddZ7vp8+lPLHDuZJsw8A58jTvN1FUBJ5hf/dIJXnxuiZ29GKUE5061Hf1AkZbQuPTJ9dsD3vmgxytv7H+o7xc0fNpFO1h56atPLLBwus3csUbBsKxJ0gS
w5CJnMBxUUVSnsBypj0FSbzbpdOYIwgAvCFCeQiqHW7Tf3afWaFbdSKZ4lsrziLOUWu6E0aHvOzM7yzSTcU6e9h18ioFkNGH70g7ZJOXODzaZ7MQcP3GKZ174FY7QwoT/Dd995YcPveeHDWE/tO9lOlqtFp9/aZnPvbSK70mWF+pOY8mSDmD2SRYL3T40FOKOEg/a2x8kJjNHTYswD/xd1YiK+z8yPf5At9zBs4npWQ8IQPHZWSS82W/IteXHr27y5rtd3v6gR5qZ6hJ930cKSZImD1jo5VcLgiAgz3O01vylXz7Kr33pxIFev0pH2oNmbrmoZufj4IZz/92WnyuxaxzcZdkhMf2crSKQs8/PTun4Zm7IWstgmKKtpd9PefuDHnGieeX1feKkpAGgcAOmn2su11h9coHHPn+SsBlQa4ekaeYA0bKcJEkdKZM2mIJyzlpLEAZ02m1GozFBGBAEAVEU4QcBvu87EDAhabVarj8UO22IFpLRaIQQijAKMXrqAhkLuzs7tDsdstwwGSbEw5i7r99hvDdh/2qX4caMxjOgi5asdmeejz37Ao8//2Xe3mkhwiUePz6Pn65TtzlLekBTj/mP/vO/xc8aH4k09oufOcraSnMmugJVdX4pHAcCJ/f/Njs+PEx8WPsdOvbQXlJpzQ895dRnvP87xX1/VzJe+A/TxegqYu6uD/hnf3aP19/pYoxFeoKlp5rU5n1Mbtl6a4BXFxx5Yo6li+2iRM/tvNsfDLC5YzLefnuIowgX/Mm3NphrBzx2rs3SfFTVwTqf9/BNze5CPOD+Z7aXIppcVhOVLkA5ldPDnVBW/nwJq4IT4iw3bO/GlE3Cg1HOD17Z4f0rfdJigR7e5lWgOP6JJY49s8zqhUWsKeqWhUVLS5pmxJOY3UGG1q6WeWlpmdFkQpKmSCnxg4BaLUIVPbBGa5ZWlllcXKKEhQEKDQfD4QipPLIsQchCAVjjisOFdXhDUmGMu6c8z8iSnM1re+zIPrd/dIe9y10mO+PqfsKoTuhPU0ULK2ucffrTyCCiXz/JvcQjTVaxnQW09Xh922BYAjS+6aBMzn/Ezx4fSShlAS5VPMricR8ySosbeCTqhENr6r73HvzHP8c4LIwHzbIDm8iMxj1sdo/GOX//967wxrtdstwSdjzWnm/RORlRW4iq7pTjL86BsBXmayUEwrL2iWbhjlrWnncPeu/KhFvf6fH3/8lNolDywrMLfPmzR1hZqqFkxUz54FurXLiHOQT2wNulJpx9sbI5LOS5y9zudids78YI4PrtEW++t8/mVnxw+xKCxkqNUB3cFU69uEJjqUbnaJPaQkCapfTGXbI8x/d9kiQhyzRCqYJ0ViI9hVSKSRJTa9ZZOrJMvVZ3eFCecqjruM2s3+/jh0Hl4011umASJ0SJq7d1fUMGY3KyLGcyjBn2Jly/dQdtDNpo1l/ZIN6JGW9PqsxCZ2GZuZPHePLFryDDBiN/gY2JdtVrQpLm8PIgQdqQucYSndV5BkbgaYsoikG01GhhSJQC79Ei7R9JKA+Ph4VU3Hv/fzI+4oWOxhl/5x9c4u33e0TzHidfaLN0sY4KVKVFy51V+tPAUAUuTGE+iinwr/RdGd/y4w3mT9V4559sM97K+N6Pdvjxa3v88udX+OqXjuOpsuxrRiQOXP9Hu5lqa7KuY+SdS/sMhhmjkeZHr+1igSQxJOmD/V0v8jj72TXOfvoY88fmEEqQJqnTOmlGlqckScIg7zPYKozhorFgrjOPkh6pn1OrN4iiiEajgac8x7WiFOPxmHa74za1oniiiKshpCTJUiZJgpLKVYIhsNpgrHMr0iwnzTWptmRZzs7NTa59/Sp71/rY3JBPDuaHgzDi4tOfQgYhq09/iR0zx739jJeHOXbgECac/1k8O6mohfMsLSzTqDcRuYvMGmPR5exqUaW/7rd0Hjw+klBWXootyVjLvfXgyQ5swg/9tukCrY4rNdYjxG3+ZQw7+9sDVL0
TyMu8c6nH6V+Y4+jH20hfPuBa7aEftrDGi/YtYcFWNUTFnLqfQU3x9F9dYefSmDs/HJAMcv7Zn29wdzPmr/7GKebnopnziIO2YmlzFm/NHIW1Dn8nSTW9fsJonPHDV/cQQhAnhtfe2Udrc+DrpJIIJWgdqbP29HJ1Q51jLdaeWAYlCJsBaZqw39tjPJmQZ67BoFaLqEU1+oMBQkiXfvD9wvfzUFIR1SWnj64RBmE1Xe4MBuV5JGmK53tYXE20M6cd7Iq1FqQkyTKktBgjHHB3mpEmOTt39ultjeje7bH53g7ZIGX3vV2kUCjP48jqSc48/kky6VZiePxphsbj0k5CbiRXNgXSjKj8TWkxQiCMQ8FTQYgfRLSaTcIwoGxmvw+xH4GR6lCA8cPHR9OUdlr1fiD28iHVCj+rvEgevIODXy1mfvmotWWPMKp9RTzg9QOrWxCnmv/HP7jEpdsjLv7lZZbPN+4LYd13ieX9FIX11k4pzisaE2bTB+5Y6QmOPNlk8bE6+zcmJH3NGy/32Nj+gL/1H1xgfq5GVUFYXOIUVLFgNM4MmTbcvOPAqi5dH3Dl+pDBKGN9M+ZBw488Gis1zn/+OFIJ5o93WDg2h1QKL1AkScJ4PCbLMsZ6TDyOybv6vq4bVdC153nG/Pw87fYcjWYT3w+mrUzA7t6ug0SJInTuitG0NiCcr2eFYGdvn3a7PQ34QBXhjdMcbSboHPLU0r21z2BjwN1X7tG92TvQMucHIU9+8osceeZX2Yo9Jrnl9X5SmKICe8eCMFjhSIIkTugrWBAhMQLCMKBZb9GoN5FS4ikHf+IwiJ0/a8V0VQvsAUbnRxn/XOZrNT7MN/zXYBhr+ekb27x7ecCZL86xdL7uGL0sFXwiPGDeq03GVCZPmdOcDSQ5IM2ZjxXCqnxYvlBHIKkvBlz6ox3+i//Lu/zmr67x2RdWK5hIJ9yGLDP0hynvXurxF6/sMBjl7Pce3hUBID3Jwqk2naMNPvbr55k/0gYlyFIH9jSejJjEMVmWuS4X7eqXl5aXQDqaBqV8oigkjELCoPzpE8cJQRDQbLZLQ32qzAU0Gi0mcUy92STNc2f66RLRHLQWJKlmHOdkmWMwm4xirBEMt/r07nTZemOH8e4EmxvG+y4AFYQRjz/5CYRQnLzwHGr1HIPU8pMrO7z5zgZho0UQhtVeL4vIegGFMPMcykCYRUhHjDs3N0er0SJQoUtjFdqxtCJt1c3w8wvEvxih/FdkfJhWnq2jfMAHy4Me+NZ7l/f4nd+/werHGxx9tu00XmFiP0qKtNSIhw8Us++X9rud3oW1uK4YaVg4W+PCbyxx6Y92+N0/vM312yOade9AuubazSE3bo+qKOiDhvIlx55d5tynj9NZbWKxNBZrxGlCksTc3biH1o7p2mhLXjAbO5fOBa38KCRJUpIs5cTJkzSbTacxyoCWcAzcQRSxu7vP6toxkjQnTVzapBQ6Y6HbH+IFAbpg69YatLbkWrO3u086ztga7bL+5gY6Ntz7ybqb81yj84I+D0Gj2eYLX/oqayefoLFwnuvrCaMcrscp23dztDWYsIPMRwy7Q6QY4/s+YbNBUC9Q3A+ll2xBc2cBFQQ0221qUQ0p1EwheymAJVXBfVHDjzz+tRJK4KEScl8pt3jAaw/IqQyGKb//x3cwAtaeb7viZVOao/fbvQeikpWQPWQjEPf/XdYvzb5sravvWThT58JfWuLSP93hBwVw1YcNqSRBXWGBqBlw+jNrnPrkURoLNbLMBWGGoxHbt7eLRVZoDAtRFGGFwA8DolqNRr1BGIUF76SLlu7s7CClotFoFOVm7q6tdTDaSnloren2+k4IDQUDszNTtTHk2jIcJaRxzqg/ZrgzQiC488oddi/tMdwo26osSima7Tmw8OQzL7J24iILq0+R5D77WnBtc8TtXkba2yUxOXkR/NFausocqag36zTbNawxTCYJcTwhy3KiKMIrorvOHHVldtJzlUBRo4E
XRigpXBTdzOQdCiSHCsnf/vMZjh9NKB8xevSv7LAzv0wDow+YQPeKsa7l6fa9MavPNQnaXtHrWX720KcPyd5sh8X9BxyU4EpTPtB9doltISwLZxtc/A3Be3+wNX1XQGu14dqOinH0yUXWnl5i4USbNMsYTSbk2rA/2We4OaBWq9Pt9hz+rHAax/MDZ34GIVJKJnHMqTOnCQLXrS8ogj/CLcj5hQW2trZZWl52C9IWfnOBoSSEm8NJnKKUR55rJpOEJMnJtWGw22X3yi53v7+O0ZZslDHZn1T3sLi8yvJjpzh64jynH/8U0o/Yz1tobZnEmhujlCvXBiTaMEESW0siIEdiCKhQySur1CCFj+9JBJYgdAXreZ6T5znxJMZTisAPEMqn0WhRbxV5eaUKlD5RacODRZmlGyOrQpSfd/zrpyl/7nG/aGpteP3tLgCrz7kOlinFezHuK3Wb+f2Bwvjw808DXDM7RvGzJF41RtM6GdFcDRluOGSFtWeX+ezffAaDcSkJnRc+YMbW7lZhDhqHcWsdtblQkma7je971OoRjVaDKKq5lEShMbe2t8mznHa7U0QWS2xYTZpmLteYJkziSbFZlb6gAQO51kxijdkZcO/ddfr3+gzujdj5YA9BYabGeaWdjx4/zRNPnAcEjz3/q4zlPPe6CRNreb3rCjayfFzkFil+FvQHKkdLQS4dcp1nCvIgYbHCQYIiXUOFLPLDQkjAgZf5fgDWOkoC4dq34jgGJYlqNYRwUXNbOsQza0GUxdTVJnwo9P0Rx784ofyom8OHXuz9Ed6D7/48O9GHRYjvK4EAYGtnzJ31MY0jAVHbp8T8vE9DMuOPzIwHfaedEdSp4VEm36YBA3e5MxvAjIBKKQia09CQbBjWtzeKShnL3FwHY12lDLgyvrof4Pk+9XqdKKqRJgm1xTrz83NFvbU4sPMLKel05tjZ22V+cZEsc1AmVYmeccl4ayzjSYIxhizLyLXBaMNga4jRhjs/ucfuB10XDbUWrxQA4NTZx7jw3OcQa0+RZZL+BLYGBqMF37+bk+vejEUjnSlqwZiiJlVIrHKhMqF0cR8SZQVSlVFtiSk0prWGgnKy2AIlVsz4jZ4irLu2rajWQFsYxxM2t3fJs4woDGm1HB+MEj4Wx5sqZjrPp/izU9ejfMSPOj5ynvLw+Nni8ZCr+RkR2wNvzZp5P5dAFp+drvbq/GV5Xjmls+PlV3exFoKWQkhRBSju8/vsDCrCzOUJqEra3HHFHZSJ8OJ8sx32sirkpxKA6iE7ueVwZlRbQ5bnrk/T87AIPC+gc2SeWr1Gu9XG87wCG9eF8JMkZb+7T7PddlUv1pCleSF4GptrXJ3omH5/4ICkzfSa8jxlNB4wGsRcunkVnVpufu+2432xsH+z5yjQgc7cAr/+V/5D6gvHkM1VttI6uVAMcs2bgxR9x5DnqfMzMU7g75tPFwhyJDwFbV05sUKA9KqA1+GAjayeEYfSz6XACqSSeGGEDENE4GO8ACkkzahGvdUhG09IBgO21zewCELfp91u0Wg08YOgEkVhD66kEuf/o4x/9c3Xfz7z/P4ve8T5SdKcd4uevtVnW5T9m9ocuqBZgaQ0ZagirqUguc3UHDB5ptD4VP2aiJLxywmuE87iMxKEUO5Bp9McXOAHrK4dpVFrUAtreL5Ht9vj+PHjU0jL4nzGWnSuEUqS55put4/FdaFaYwpKe1MInibXMJnkGCPc33lOOklIJgk3f3yDO99dJ+66zo1yeJ7PxadeQCmPj336q5jaEW7uaQZjRTKSTNKYcZaSl/i81pXN2YJBuax+Kc3CctOabZRWQh74u2xueKT+inITFi7CqpRPEIQEYYTvB454afZwpQhaDYJmnY5ZwaYZvZ1N9jbvsZEkNNtztBaXabU7VXrkEfs8Hjg+klAePo09EGb6+YJAH0Xz/fNoSTdKM/Hw97p/hqOUJHEo12++s8e9zQnSE/hVY6s9kLJwHz54TQcEtPj
3MDHN4ePu+/yMAqgWJoAVjHoZt3/UZ7xhOXH8OLfv3KHeqHP82AmUdPTzyvMYTSaM45hOp0O9VmM8iZnECXmWo40z7XJjidMMIR26tzGGTDttqXNNv9ujvzPCxttsvLFJ0k+RQrL+5gbj3TEmt6wcPc6Ln/o0QdRi6cxnEdIjMXBl3zDRmh/sZOg8ITeQCofnpBHkwsMIhzwgy1yfUAXtX2HKF6zKiJKq3DLNCxYBF3Fo7oo5PxyTrBiVrZkamEKBFyKCEDwfFUYgFKYK5BTPrEyRCMgCgecHLNfPoIwhyzU7OzvcuXMHz9tgeXGRTqeD582SOkkekBF76PjIFT0Hb56faYb+jC/82YcUluKBSObDznf42OplcfAg66KC2jjD4u76kD/97gZXbgzpDw5itIZNj2jOr8wf9w2z2tEe/O6Zc1lmTNOfcauVtWZNtQicqVqgBlnY+GDIpW/s40uP3/43fwupFH/vH/wuvuczv7CAtThQ6zwnjGrsd7tIpegPhq7AvDiHw9l19aGjkQvSJEnq0gSDCZNuwmh7xKWvXXGtSrP2M4JWZ46llRM8/6t/nf3649zqZmjgtY2MXCeuqsUaHGW7rTYaQ+yehhWFmV6a+Moxj5XCNGM1mFLzCApg7pk5m/nDlL6CfQAKXtU65qp3XIQ1QgU1rB9AQX+B9CpArqnQH3xw0giMlCRSIgMPT0kW63UWj66wt36X61c/oN5ssLC4SLPZwvNCvMIUftTx/0Pz1X50gf5ZMny/ZUmaakbjjGu3HMZQKSQ3bg15/+oAa2FnN6ngTh70pfZAhYCt/rflSR54EeJAQO7DbnNaalfmPqdAXWDJE8N7f7JH/3bK048/wSefeoLtOzdYPXsOcJtLr9d3sBeFP5ZpzXA0Imo0XNRWC7S2xOMJ8Thm64NNsiQjDCO2P9hh54qrg00GKXHfRXXnF1c4//h5EIJPfOnfQPh1tLHsy1W2Y3h5b0LSd2V8VjjNqymJCQstL+XsjSIEqIIwXsABbegm6tBzKKbe4FDYETOclcWslsto5qVqOC/ABcCkdHWrQRQRBA1UEGGEKhDZRQU2XX5utgRSCFttGFgXBc/ynP5kzHDQB52Q9/fB5AwGPUbjIbUootFq02g0adQbBMGHQ2uW4199n3J2PGBlW1wt5HCUVvJx+XqPrZ2ENDX86LU90tQQJ4/IDyiqeZ+eowijzfqAs788yBKdpVdz13nQfD3Y/1mqePfTGFfWJQRYDe98bYfezYzf/JVfZrS1zs1L7xPWpvi1OssZTyYYO60LzXLDJE659c5t7rx6r5q8ez91ZmgW51WLkpCyurST5y5y5BNnWFp7DLnyPHe6mlQIftDPSPOMVOfEduzItwsuS8d47c5QgjFLSt9wxs/G/a6YMcnFwXk5gO1TCQGVoE8PPWiVzFiZB4Ytzun5Lg/r+QHKC1BehBCeU4YzpZLYAjLGgsA4Rm2lUMqS5jGTyZhBf0CeabAQpy7yLKzGpo7UJwh98txh/TLsEcdDej2fIHw08KyfSygPRLiqCXoUY/TRAaIOfFeh3SwUXQKQ55rtnQlpbvnT767z1nu9A+mGn8fP7qw1eOY3zzN3tMk3/o8/JhlkVbQRM736GffmAWNm9/6Qi6gsI3EIzcC6Xd0Kg7GSrUtjujdTfvOXf5ne7VtkJscLAoxx7MnNZpPbr22x/etdpK9IJjG9jT79mz023txh570us3Tn9XqDufYKL/3GVyqMo8aZFxiJDtpotuOcG92Yq0ZitzLQAiMgExJjPTSSXCisNChbBJ7M7L2WfBqmwOQVB6TFCeiU5Pe+VXOfZLn3DVPBdNvddFNzrx0yD0thlgI/cGkOVZACC+G7jUhYlHR0CO5UCil8lPTIk4Q8S8n0hP5kRJ5lDo8HTRIn2NxpTutZhHQsW5gYYVM848oMrQSMxmSGROek5sPrkMvxc1f0VMSg1QzMTMaBYR/4a3VwZe2VJmFpjDgQ57wwK7d3J+z
uJ/zglR0Gw5w40axvTg5/4SOPsOUTNn1OfeooQct1ss+fbRM2PCbpiJVnO9z+3g5gp1U81lYJeKc9f8YWc2CBHbTVpwJ5f11I1ZMXG258t8fRlRXMoEeaJgjlUh45gu29Ic8+9zzf/4vv8K3/w/fxAoWONb31IVhLGEZ87he/yvmPfZpECzSSuH6GnYnhvX5OrjW5geEtS27GWBzHZI4qwKk1jjXbwSUiACnwigJ7YYvrl1P03NLslmJm17pP0GboAB4QKBMPfuugYjwY40HYGaEsGNeklHieV3WtOOrBAPABB+Lm+RIrHKsaJice98jTjCyO0XmKNpnLx1qJtAKpIAx8tNVocqyM8cjJ4z3s7l3yzV1kq02tPY8J2ygirFJooTDy0TTFvxrma/EgJ3HGXi/hg6suTN8f5Pzw1V0EMJ7ohzbbAiyeaVObj7jwiydRZYd38b0Wp3200ejcdZ8HHQ8VSYxxYX6sRIsU368jpWTxQoM7f7FLPjGkwxxZL3ygQ4L4sGmuTK0Hym2ZKjkcgCqEX7j+wHtvjiBVfOGF57n17jsV5EUZ1RtOMh5/5vNEtTm++Y0/AKDdWeC55z/PY0+9SP3Yk1zaVfxkDIMsJzYw2R+SalOZdWVwpKynNoVVUoImi5nrrHCKigqcUjuW11UdZwv6wJmo5ew8WTMzLYcE9gBq7uG93k73Qcc8N/XfxUwgRSiFLOAjPc8DRdFm5eHJAE86KNM0TciyHpPJmDzPMTp1+dnClxTCCSECtHUepi/A2gwTJlg9Rk12sKMuy3nCcjMjTjM29i8z6ipU4zheewW/3sLUA/JHtBN/bqE8nKB94DHuQMof1hr2uwnGWDa2Jrx7uUfZGCqk4Mr1Ieubk/tzgZSnEvh1jxOfXEF67iEceWKRpZNztJeaeJ7zEfI8J01TtHYCmGUZWZZh0oy8sPtzkZEX1kQZlLBArnM8pVg42WHp8Q7b73XZeXPC0c+00AWk5gHD/YEO5dSjnI37TOeu9FsLQ6xwYisZtjDcybj7oxEvPPtx7l76oPKgJM4uMki0CLixOaJ18S/z28/9NnkoiTVc3Rrz8iQluwFjI8gF5Bgyq0mtqDSfJ7wCd7YAjypktKQSpKBpnxXK2ahKeU2zSXtbBlZmNdesoDErTA8Z9wnqzHmK16Q42IcqOHg+pRRBELiiCeX8XpPnTMZDdJZgtUbrHIGrBJK2LExgeo9FM7pAOwa3LEGmCSIeE5lN5kJYrAk6izU6c8t4XoqSJxjupKxvdbl+b5OtrbcxNkK16phHlLZ/KZrSFHWJxhjurA/Icssrr+6wtZty7dZwim72kOHXPIK6N/O3z7nPH6NztMnqhWXCelgtDGOcjT8YDgrk65Q0yxxRaxFxW1hcdBUr4wyUa0BVSuH7PghBFIb4nkOyDoOAeBLz5FNPsfI/OMof/m++w867IzrnagTLM17Nh91CIXUHFOV9TnJpqpuZGJ87SGeWK1/rMdfpsBwF7O7FRU2qAiux+BjbYEKdbVmjm2oGozFZ6qKsLthTdGvYtFjQriQskgLriUIAtVvMVlZVQqXPVy3+UvrKK6+qHNx9HmjQFiXkZplXnJmS2Xv/EGDiB23xpQ9Z5idLwZ4Vbmndxi6Fw/mJohCpFEk6JssTtM3RWYbNYhQGKUAJhbAeGBd5dX0yRWOzsUiTIie7eOMt/GyHVpSxMNdm9XSHY8vHmW+GLHRatJoNMILeIKbfmzBqpBxfPcLHnjjDoD9ka3uHje1Nbt29+9D7nh0/t1DOml4WxyW53024fmvAcJzzF69so7Vlr5v+TCGUSrJ4usMTXzzF8sl5wjmP1lLLhfiLjnPPd21AcRzT604q7TceT9AFF2Fp2kwp4ywIidYa5Xn4YUCj3qBWr9NqNomiqNAEsiCjdYtia3OLIIw4cmqFJ3/1LD/9R+9y7/s9zvzmwkE3mAdYqPf5kfcrSzH7x8wxAreh3X55iKdDvvKFT3PjnbcdrZwQIGQ
RwvfJhE9CQC9V7I80qcbZnrbUK8qlH4rzKCVRqkR9c8BPQkjcXxZZBYPkAUGbRZw/UE544M7F9LVSq6pZv3HGLBXWaftD0egPG+UdHZh8DprVVckdrghiPMkL1yTDsdsYpAUpFMLKwg2RaBxigLQaledInSEme9i4RzMYstySrC5LVpdXmJ+v0WrVmZ9r0IoUzboPWGyeEsgaxzvLmGMewzhna2eX/mBIvNji7ImjxMlFdnZ3Hul+P1pFT6EgjLH0BylvXdpjd8+Zg+9e7rG+EZPmD/f7/JrHZ/7Dp+kcbRa7rHuGvuexdvYIYeQjBOTaMJ5oev0ukzgubP+MNE3QWiOlw/SUSlUlYdZahJIo5RFEIWEQ4Hk+Qej6/5I45onHn8Dz/CLaNvXhSsr18rn6vk+v16PVbPGxX3qMd/7kCqPNhNG9lPqxRwtrTxfSoYBEtWEceh3IY8u9Hw0ZXRP8+pd/getvv42STiBLrsRcCjIlyJQk8UISE2AShS/DA/GTAyank2dU1VlfahuBpIC+mL3aB3U5WICyudsJ3/3bjyhgNF1QZCqUM78LMzWXD4yHuCzVT1HRIJaxApjCZRqbF2V6uoCnceeQSiBwjdouRypdKtQYTJYS5Tvo0T4q2SeUGfPNgMW24sT5DoudReZbAYsLLer1kCgK8QPpYCqNxtPWbXQekI9R1hB4EV5LUW/Oo3WHOMnY2euxv58S1aIH3uPh8ZGE8nf+2+t0Wj5Zbnn3Ug+tD06k8iVB4/5F21iqcerFVS587hTzKx1kGcE0uurgTvOM4d6QXq/PcDRmEmdkWcE3WS4scA69cMXhURgSBCFhzQEYNZstvMCn3nBcGGlBEGqNYX19g6hWJwwjJpPJgewhtjAii01HKp9ub0Ct3qS50OHCl8/w5u9/wMaPBpz5rYVpPrxYy7OaUNgSn9UWJpbA5JAOdFVq59Ukqlb4nRbifk7vesrG62MiUeM3vvR5brz7Jr6UeKIsQWNG8SiMCEDV8VRETUmMkugDbtW0VrTMeR4IawoDwmCtxDCt9bR25rjZIYo7ktPKIDlVhDPHFZg3YhqUkrNbVHUjs6j5dub/mQ2l+PzUn7QgnZ9X/l+mq0oTXQgJ0sF0lNaSFRZFjqc1MotJR13MZICnY1bqA1bmQtYWG6ws1FhaatGo12jV64SeRxQEBdqgcZwvxiKEjxUeHhJhLBiNzjOM1QibYY3n2sOEoR7ByWM1jh/xSfL2/fP6gPGRhPLqjeGBv4VwzbVnXjzGwsk29cWI5ZOL02bX8uFJgRd4GKMZjsYO/yXPGY/HDiJCOxzQZrPJ/v4+ee4A+oo4CFLIgtcxcKSfgUNFG41GnDpzpuh3ExXnfJZrBK4wutzPPS+g2xvSbHIwkFTmN4QTXofHokhzS5y5AEFQkJzG+xn5yLiuEcr+QVvNRbmSSmR4k2tu/2DE4E5G0p9aEF5dVq1X1kC8r7HacmRlmV948Xluv/cWoSihCS1CGLCysi6s8LFEGFHH92t4oWI0WxVTPR9RXZu1ckZ6hPNNLVU1SzEZcPjvsu2pDF2VwR54aPeDZWoCu7qCqXZ2N1D2MpYfKLVeIZhi9hvsNFcsCrRzqxFGF8RIzkUpRMZpTitQFrxck096mHQdGQ+p25y6zFjpBJxYC1hb7LDQnqfZrNNuN6nXQuq1oJgv55eLosJIyuK1Ip4gpCArCum1NoCL6Mri3FOT3fnCnnQF748yPmKe0nW0f+zXzqF8x913+okTqNDHWMdPaYqFneucNM3IkpQkTYnjmCxL0bkjTgFRdAa4heCHEbVGkyTNGAyH1KIa7XabTqdDo+nAipV05qoonrbZ2sIPXJ+gNrYgGy2ffGFqW9f7p3yfwWCI8nymTakFZL8t+E6K/GCWCyZpTm8wYbw/4Par6wCYzLL75pi1z7QOTEvlvhZmlZBgU8ut7w7Zu5ywtLDI40+tFi1Jhv1uj3sbjotTKcWFM+e4ePY0yX6X2++9iy8
EnqQCdS4frDMPJa5W1EN5DiVAexIPVQnl/cXuRTH2Q4WoeO9DIukHPOhKZg/6eEyPKJMK7m1BJWQOKc4l7quMTLVJFo1ORSmdKO/FlvCShYAINzeyahCwGAw2T/D0iHwyIR0PibJdOmrC8Vqf1eMdji636cw3qdUj5poh7Uaduh8QBn5BMGwxJq/WsNuQTHUfJW0CON+/uueZjbmkqb9vjqU8lAJ7+PhIQvml/+STnPvkKaRyyVilFHEck4xzEILBYMBoPCIp0g5KyaIXz0xNkFl3RUo8qRBSEkURw+GQZqvF0bVjhFHkEr9KzRQ1C6RQlI++FtUYDEdEtQbW5OgqHF/4PkUEGMALAobjPgZc65JVlKVzWZ6TZBnWGvIko3t3n6vfvYq0gq23d8jHOf/eX/8f8s7bP+Wdd3/C8scbqDqlW1r4WGCs49HECG5+Z0C27vM3fuurTPpd7t26xXAwJM1Szh9Z5ZOPnXVax1jWb95g/d13CDyfwJN4wlWESDHFfnG4sBJPKLwi5WIUGOmWjUQ5k+kBvpm7RvEhQil58FtTM9P9OXNQGZSdEdCqfYppD2O1EAWFfyzxpDM93eLXDtTKarf0bRm/nW6twilP5y8WBfsC55taa0hGfeToHna8S0Sfo80GR1fmWVroMN9e5GinRqsVUG/6NOsRUaBQylUiWWNA5xhdCjdVGsgRB5nq1kuhLDfIg6WYtkoLPXAmy/zuI4yPJJTzax0ynZOOJxhtiJOEXq9Ppl05mrFOE4CgM9chDCPqdcNoNHQq3HMgS1EtIghCojBycPTKTdJwMCDXmvmFefJcO9+kePruhg9OghcE7O53abXnKqGFslDAFQ07M9g1AcdpyniSYC1keYrOLFmSkY5ibvzFNfav7JONMgbrrshaSMnpMxf463/jf8rZxz7FJz99hf/l/+S32Xs7ZumlWrFkimU4Zb6hdzNm73LMV37xi2xcv4opYBm1dkSog71densO+EoAnpD4nueE0VJhupaat1yiJR6pZw3S5kibYmxCbgKslDiuqwetiCLiev8bPEgaDy6rIlJ66LBZm8RSmsEzeKfWId+V/Cmq6tA3YFO0zhy5jrUIY1BGI6xF4aYyl27DkYVQSlN8Lo0RSYYd7SNGO0Rpj+N1wZFlw/KJNosLJ2k1a7QaDVrtOo1GQC1QRIGPVAaFQRgwmYO0FIZq0zcugYtAOqumaNg+cN9CzgCnlVqzFFxR/byv4Xrm/Z81PpJQbmxu0k16pGlOnuWO1EYWCdfiEQlRdOkbx5hbq9eYm+9Qq9Wp1SJHoloFbmRhorkbqDUabG5uFibP1ASdtk3Z6vdc6+Kn2xzKompXouZwYvLcmdTpOGbjnXUG3RHb4Z6LxCkPqTzuvXKb/u0uycB1Riwur/K5L32Bz/3yXyEMa8wtnWGx2aHWikiaFzl99jGuv/UB8881kOH9D81oy71XhjQbDdq+omdzBsM+w+GQOI6LhLakaoDGdT8oUeLHFD2GQFmCMG32FUhpUcKgRIq0E3I9IhE1J3hWVxtYOaZr4gHmaeFT2gP+3UFdW/l55XfM/iZM4T8DQlZVSp6UeMqVsVkA44h18izD6Ixcpwh01StitXbGrpLkuFpbIwyCDLIUbxyT9fqkehtphjT9lKVmzqlTPqeX5lls1GnXm9SjEM9X+L5HvRERRj6+FChhkCQOCUEbJ/zWWRimYBwzFBu5dl021qEqHxAwKUtagqkFNp02e59QyhmT9WA66cPHRxJKY0xBTza9ICEdhXS90Si44IOCGbfGcDRCSlhaWsIYfSjwMH3I5QVL5ZGkGd1en6hWd6S0xcGlJrZFL2Bpuzu0tARrLIPdIZmRaO1MycHdHptv3GXrtQ1Mqh8wKa6V59S5J/il//F/n87KMZpzy+RRh1hbJlnGW7c3GY7X6cYp2+MuJz/xJa5eeo/B7ZTOuUOtOBYGtxPi3Zxf/PxnWL91E09YV01kjCOwkbIi/ClnQFpxSF4
8AjUP8nd8gK7tdeEqTpGcESH6RFtLmi4aIERu3dLiXA9fucrLu1/DmYh1ArXDhq4HStRczJtJ7P3CYVyUzmAAkcDB7i12JsJ7L55h49QW2xcu8mf+5B9jOrG8+a0X2ZpVrM2mOLE03lNZS1XXKZaZiAE+gA2Ic72fqVpZw0ImmFHgX0EjBxLw+AElJvvY2hDJd13hH6vADADOoO3K+RKjAFjuoB3juKpb/k4uNwlDWK00w1fLxkwmk2RKDq3mNJQkVM5QAkvl9ZRhl7H1IJDCRaYQorzOtFDygJ73prCPvX8c+m0ulZM5wQzN1yySEwq0m3Mg6ua8bHB3Do5976TjP8p8zf+eZIqu+ifDYxC8sjbo6hGSq6ckggjtnP/mv/8/85f+q7/Er/+z/zcf/uhP8NB7n1YOaAw90SBEofORLpdoKI6M5PYAT9be+bqN4czZJ7m79ypvXXsDGzuiBCY2YCYGGw3LrDH8wIrJQplwXoha29tKIIduqhgwviO2LeFwztGbb3Lt9i3qumIyqfDTmvl0ytpsRlU54mRCaNHKa0bPn00s74L6U06Le2XkNo9pGWy3uVizpMXZeZ2DMMQ22+USRHDWUte1xgETa6ojJNN6qM1T5k3mxZiLI69uzuVzY3K50dWsI0YpXSdVYzfWouDfuDHPKimhfD/XG+o6zVWdTqdYa/suWtZaTMFS6uPuxihVlFxJY2yuloqolIf+ddI6jwFnBCqHXZvRyuqKPPl4xzV6smBl4VolCZQTUULfOomJcL0y4KssEm0xZnqgRzNDLC4E/upf++/4P/zMX+S3fv3n+QRw9X1P442hMoIYtWmbzqdAbaGdJWkUSYNpEoKK1u7Jn64nG4gYbt2+zal6xnJ+RMBjCEwri4k1vusINoMoAwsnJHAKNFHaWZt2zogzYCViCbjgmSw7JsFglp7ucM6BFZq6YjGbUs9qJmtruogmjpitMhlKMmqrc834UPJ4Alyi+r+I8kGpKhCDjYGuaVgul+qLhry5+CE+6Bx26nHTKRhPEE3e7VIXqTSpiJihUzGJNVNo1dK3LM06XbiaLJDXRdZMZe/I1fKXGVhkRejy+ikZQydVtSjPW36mTvHMVStgcKHCsO0WtNJV5bNaaLkUVmctRgyT9XWOfi/6U+aL9r4rzJbxjZy0S5Y3rQoqC6RNi3qliWj2qdIk24RyVhKJtPz3/4+/yj//57/IL/ydn+cTBK6+78MaazAGcTCpKpoQ6EKbgumpG3AI9Dh3alGAL6lRUJ86x9b6JZ5//jn+6E/9NC889zVtxxcFEy2VRLQoWSBIwNsimM+A7FmjrdGt1TDOdFLhrKFGcCEg0qnbKxbjgaDtt7vlnMWhoZpUTKYz6tkEUztsZrFELSLWWi2RaYrN0iRztjfpqorAPN27ZzFf0DRLzXTIlSPiEJM0kwkBrUQfnSUYo9kXlZbm1LmJIAEDOMmEBb1zY8ac2CwsucCzXufQ9DULWCksefNfbRSr5rVJaP/xqu2lcsgCquvNYiuDWIdJmriqJ5pa5RytD4QoPak+HyEJZUxKJKOu+bx5M1k12XPhNxNygS/d3GLtwD1Y7tY7Ml9XkadS2EqWyepxP15sCEP7tZiAk15g83+SY1YK9UcTCGHOH/7pH8NY4R/8rX/A98fIQx94GoxVInZV4XzAdD7lsuhmkH0w9dPSbi89LKLC6hyPPPmDPPfsP+K3PvubPP74e7hx/U0MHpsg8SiSyodoDNWYMCCcZDPTYIg4Z5lMKtYmFXXlqK3RQQ+pIh1aerCf1k79vGY5x8+PmB9UUBmsq6isxYrptUZVOSqXFnz2jwcMFueqgQoXAvP5kYIiPugj6rVOJlqX1PpIFEtnWrwIwRioK2RtAqnXJJAq6Zl+jmIIiT+qv5wXaRa6EqzRNTDUiF0Vxhy7LM3TnoonRtdJjGpJFeuy/Hy5MQhCFIOGUA3R+56yqEW/dO15v0IRTGuy/P0sjPne7of4GorYq0+
c19phZt9hRk8OZqdhOCaEpbO+qi0H1GpsIuQbKY/VneckM1yI0LX84T/6ozgx/N2f/ft8gsjV93wQ66YEa5m4ijYtLq1OnUxhgdCTH1Gzr78rJUWYjU0evvp9vPrab7C+tcXD73kXb776Kkd3DxJYq7EyEyIugomZaWRQy9vgrMUaoaosk7qiqnIbB13URqI2F40RbLnjBsCDqBkqTduHM1oRbC5kIfTxyzxGpVCKDGMbIylToYUQyCnqBsFOJhr/q2rt8GUaLWDctnQEZDql9uvainA6UeAjBowZhCEUc2pTGCML2kkmKiFiirZ8oMJYktTz+UYmbIj9eYwxvem9umbyWBhjaL3XgmHFWsv0vhijziXH292pSTom0q+u0VUusLW2b6CbD2OEtg34uiJuf4cLZy2Xy2LHyL7UWMDKi76fOXuS31k67hlFI+NeamPSLzYEhyZA0wR+6id/CCHyt//m3+Pqt57l+37/n6KerjOtLI2zLNs0oUWIRdJ5MNpIyIsG7gWtZYoxbF56gitd4KWv/QZvvvYal69c4V3ve3e6P8O1Gze4e/MW+IhDNQcIUTRgrnSvqOarqEYNrbagE2PAQUUGqZSALknTIh6RAGIwUbAhWw1goif7yYFuSOTm+Aa3+loWWScWKyqYdQDnhcqAaTwxLAkhsFwsWLQNrE2Zdh0T75ltga20J6YPPpnfukk6q2UkJ1WVNoTEKuo1kk8uCXpfZhwuWQ1LlH6itVaBFxnWQYga7M+ATAaZ9F/T0zE7Bu5yadpmJWPdpNewpZJR8zpzbYcc13K9ltq+B4Dy2i7nwhi8ATZnx+bnpOMdEdIHrVjGAO8fayyF8KT8v9UbHO14fTQj9zI0SZMpyGSiSUz/jp/6w58kEPk7f/PnOPoX9/jw9/wkO488xbKuOGoS4yNT1pCB7ZRDGHF8H0GEaAzr5x7hiWqDN9/6LC9+/eu88LWv6aA5x7mLF/ngRz6Cny/xzZI3XnsN8R1VNFij/UJCDMQuIilxtw2RRYx4BeToqpQJIgGEVLAYjIkYm9LTOl35Nsdki0oPrGyG/d/xOG1CwyNpwQpJU5KyGbQVQrcUJKgALRcLDhZzfLtg4RvWY4fEiK1rve7pJJ+496Fj2tbUpFWwKWbrKYFR2opcN/fZbDYqvpzJACUQ2FtYxFRZQRsZ1VKl+8m1B9XfNAKuTm3fHSCWZduN1nC51sbZT6spbG1at1opPX+uFMZSU5duXO9EGEPw0MVIvbnJgxwSV9XcfY5f+Vc/P0yuyRQtvZCyZEN55IvM2m/VGb/vRQn0wf/CNDtGTk7oaRSDrytu7y358//bv8jh3pxP/f4/x+ShR7hxcEDnvcL6rQbXNRdT42yxa9VMDHFY1KLV5eJ8CU0H3YJuecj+/hsQ4WB+nflyFx+U8L2xucnDjz7K1asPsz6Z4ozh9s0bLPf3qaOWI+y6JZhAFSIV4IxgnUkk6cTzNEpst06oaouzTn1WlAnjjGCNkqzLjmDleEMSxBgLRlPy0Y2CJZWxuNS8qKpcTz7Qbs8Vvus4PDrkYD6nsRE7qZnNZqytrVNPp5itNWZbmxhnqaYTnZfURLXM+OhbIIhBErCiyLEu8Mlk0n82C0xJDjhJAPLrzmhJzZNYROX327alDUOoqOu6vkCYIqzjQl9ZyHJXsOwTZwoeMGIg5c+Xwp3XPAk1XrQNLgqTJvDj/9n/+r7rPh/vOE453Pjx1+Hk+FOpLcvd5f6COYQxyt/sydL9xwTdnyN1F7lQV/y//p//A3/n7/0Cv/yvf5aP/+CfJZw+q26bNRAMoVNzkcDJG0l5fhTqFzfBuZpTazsQIzvhKUK75O7uS+wfXqNddDz/7LM8/+yz/fVeuHixp4k9cvVhHrv6Hu7ceIvd11/HeXDWYL2oliTD+RFrwDmYdFBXBisRCR5jtKSks0LlLK5yg2BKBjQS+yeX0+/vIhECBHLfzmCULN6FDonqD7kouOhpfMth13LUNjR
twHpP13nmiyXiHNVijS3fMV2bsVk56lQcO3M987iW8VIN4TiqyiUkf9isSz8xg30Z3c9V1DWk4QYFAEl7lessKwpGDy0qZnDW9H+DnMQB6Df/4+loRZkWBsuvtPaGtZRCbSlBwtmK6D3ivsMhkWpSp5uMlK3MYaBUlZ2gyps8hoiVNvgJirpXWKTwBYUpHJXonfHZZH8pTB48UxP4L/6zP40V4d/84s/x1Hf9MdzFh3SwIWUtpHL8uTboYCun68koa0JqQ/JxRe86GoOpak7PnuZ0eBqCp13usVjc4eDwGlEiN69fV9ob8Pprr/GbIly+coXv+cj38OpXX4J2gfH0nY4RkjaE2nc0zQJrG6YTR5UEqjNCFYUuRqyPOJeBpRwuGANuPRAhOSYrEIVgUrWBtHgE8F6J2zEa2i7StIG2i3giMXiCb2DZ0kSPnR/hfWRja4v1eo16uqEhKO/7zUDEYJySEUQMiBLIq8Td1d/SjIu8gVvr6HyiwiFYaxTVDTEVozZEVCsbY+gS6waSqwDEmEuK6JxFbLomg1hB+s3YaoYTQ+w6s5Iyeh58l+KUgVzCTGOmQ5y+qpy+Y7SMjHadjtpbpct+tKZ5tSftAiccD566NeIp6kOSABmGUhAlopaPUnOWqOv9gCJgtMvmdKE0+vTFn6UIZQiaOkQkti1/7j/9aT796d/km8/9W57Y+ZPYyUyzyEXoJBnHxc+OfLDhf/p/EXIxZ/17eF0MiHNMpxdYO3OZM+5DRGsJ7QJSFkF3eI/dWy/x1hvf4J+98U/4w3/wp/nql59HCCn1SE9qDNQCaybS0rE+mzCZreOspnoZk8xeq1cWYyCQEEjNlSEUvFExJgl6znTImkzzA/UGTa9JTNdR+0DntTpBRlWDVyFuJXDUNoTDQ0KItMuW09unUipdxMfQd4uujOBcpS3nrCNGCjMwx6v1PvR5rqLnCVlzOzVP29DSdQFrBVCBlMQqSvskXaIahkC6H92Eolgimj7UF/SSjIvkQdD7z9c1hOFSmp016jPHLJTD+jBmKGzmU8xTw1aWIG06t/6UZ0wyuN/xjgjpg2CNgR7NrwsjU/Wk41iuXnGcBBYN/0rf/kDlUIrBA53YHLdKvlSY8zN/4Wf4Cz/zF3jr2d/gyoc+CeIG4APVUiaYVGkuFkI6tgTK17IGTbddjAHKqbUWqRxuuqXgTYTpxg5bZ67C14Trt55j0bUcet1ENK1L0mcNnaSQhe84e+E8F688rMBPArkMPglgRKKH6GnbhsV8kUw+XZid9/i2QcRQO9cjrnqhhXkbhxq+1qVOy76jTZnzw/zkwmgdnQ/cu3eP5XLJhQsX2Nrewq5NsdYlE1mY1HVvkqq5OqwB51xPHsgbfA8EpoWv1pf62KpRFfDLvNzSBcrxwryXZhMWUBS8MDNHIZbClcpHGebI/iRprUgsUOJy6hN4lt0kK5pTuRp5OAkdP+n4j25bkN7toeDS+S0pTychrvcjpI/PnU3LXI1H/aLRIdms1onTHNTAY49e4of+0E/ya//qF1nfuMipq+/ui2wBqagzqZuvZHFOd0S29nrLtdfyx0DP2Aebc1nWmL4TouZ9dss5R/PbbJ86zWGAuRiqUdYDqfBToBX1e9dOrXP2oXPpvgMmhkIgI8SO6BXAWiwWNMuWLmh1geVyydH8iGbZ0DUdtROcFFRJXf29KxAA41sdkxh6HzGzqtJwIEG1R+w8+3t73Lp1i53Tp9lanynwQhxyPzO7xVoE6bM5yg2+FBBJnw1xTC6o6xrvQ6q4oNeSq+973yW+bFWELnImx+payrWF8jrTc4N2RlsN0ThXJQsxxVoD/bpOec5JIC0+xJRArbVvAbwdymoGjbk90PHAQjmfz3tCLwy8RWCAvXvzdiyg5d992CEcr1KWnysAMNQYHZe0DOTYWC6wnCLDBK+iq+aJ0PkF09kOH/2+T/KF3/xlDnbf5NwjT+M2Tw07XUwE4mRGSbZNrdGWZzGTi0f
imo6c5Z42qhBTWzUVnCgQfaA5mvPKi7+CqRve9z0/yOe//DVc1O9WIS1ao4LhBWRSs75xmoeffJizl89gyJnyIMl3khgxvmN5uE+zXDLd3OTw6AiAZrlksVxSLWYcHR2xmC/pukgTYh8GGcH2MWXFq0rCoOlVVpz6aHnko2pxZ21f57RbNsm0VoTVmMTgSZtNVdd9Qa4S6MtrJ1dWAFLr+cBkUqPmfFkhL2h7wghdF8jUTGXgpWJkYnuQqOv8qLdlBn00hKI+ph49sRgw6X1tpiSEBCZatC7tYOkZkwuC6YaTm6OpwZQ0efL0RFKFvbeJOJTHO2qvXsLU2Q84KfaYLzzXiDkZzBm/VoZNShNCB5mCa5u+n9skr5wvG9aaUtkiHbx8I/DhT3yKz//6L3Pn9te5+ugPsn7hMW2pl3zjPGsJF1HfxxgMQhuWxKCAQMFL1jEg2wkCwUOrzr5BtUo3P+T1V34dN2t4z9Of4Iufew5jVN+1ScPaFAqwxtBJZLq+xdUrpzl7+QKz7U2c0arjZRq1xIj1HbPZlC5VNt9KpReaZcPR0RF7+/tUB/vUBwvaecPiaEmzVO5r8L7faCKenM1iiakBmTZo9V0a93SfViS1kRPEVb156pxDJjX05uqwDsSIZtUU66OqqlEeZNaWdVWndnZjplhZO6h8b5iLAfldpXWWZupqdCBXNzjpkRvzVlU1As6yppQ0FjHm6gp6LWU1iPz7ZUjw2x3vICQSCcEnG1vJz12n5RCjH+8AGQ4vJ6G8uDwR5fNyMFYHlxH6GpIpMP7NXohJG35CDdemU/b3DnlBZnz8D/wxvvnsZ/nmS7/E7I3TXH7yU7i1bYxzJFhAr6ccRGvpRHfJkAgIQ8euweQV7xnidIpEh8WcN9/6bexkyXs/9Am++IXncUbrxKRNNG02el5vDKYSzl24wKOPX2V9awtb1whOhTIGYtcls0k0y306oa6rBHSptmralrX5nLWtDQ72Nzm4d8Dh3iFVfcT88Ij50SG0KXwStGmq+nXJbJOkAcQgJvSNg0KMSaub/rmtKmxVUVU1pppo89REr+vxABFCGAoqW+vIYQkjuYV5Tk+zWtFALAkgJxASYTy5GSH3PjX92gBlZYUAvksF2tCvqA/akVukG+MwOVsotXMfwjSD1ZY3nHHVBcMQHskaOCN/w9ovN4TVtf/tjncglIY2HCDeY8Vpd6QQoAuj+NQqVSpfaPm8jHWO/QvoQZZksurNDSbvYK+OhT332BBSGlNURsZsWiEe9m4f8Olbtzl79mGe/t7H+eoXPsvLz/1TJpNt1jcfwroJm2cexc42sH0g3PYTG/J5844//DoZJPLdgsX8Fk1zj6a5x+H8Bpvbmzz1we/ld77wFWwBEISYGiCFiEggWksMkQ5YW5tx9txpqloXSqIuEaPHx1YBByMEIxr7yhufFRUKUzFxgplW1Gsz3HSCm06o1qfYgwpzWNG1Wmyra1pkqa3RlUuqpUecrWjRdoQ+sXUQzTyppzPqqqaaTtk5c47p2gZVPSPkzTWR/TVLQoXdmqQ9RRBTpYbaVTJd85pRKCvjE5l8kFsZaCt2RTsjghjtfB1DpOvUyklTlFBW01emUNTWARbrkg+IUPUmrm4WuqFqsyVJ1KcoOWHcYE1NjFlItcqFkDdZbeXoU1lSlYe0PvL5HuB4YKFcxI6qmxNaR+WEiROtG2NSqzbJsabjhOIydlbuHFm4SlT3uICOKXsyEogs5PrZzDQixuTHBGWPiEFCwAXhxus3uPmm8MEP/yDN7mu88co3uX1Tg/433/o8GxuXOHPhQ3D6Eq6q04JP2qxrtUJcuyD4JTEG9vZf0aJZBI4WNwhRYXpXVZw6e5qrTz7NZ37ny0P81oCE1XBQ0ijW4rvItetv4cOjODtFG/+oz6mWQqKvYfAYsrOpwI0ndnnhG9xkgptMwBrWtjY4PDxiur/G0ZE2tFkuG9pFA0uvJfm7DhMDlVXTdbFYaPgo8+a
tZba5wZnTZ9jc3mG2scmFSxdZ3z5FNZ3SxlRYLPnlGkvUNvUaFom9sOja0JijgnOq2owk5Dhpm5IaZ3MmpmileEkAHaJmaL/WYtElO/ViIYFI9CZsMkVB5y+tVRVKeqEcEtm1rpKRCh8Uie1dqfQIcUgRlLQh5feLRf1tjwcWym/tdrxne8okWoLVXSbXWREG9n6ZTApjQGfV5ywXZgkAlNozH/1n9Y/ep83vac7dYIImLw/nlK7WJRaIOuWB555/iZ3tDR790PdzZmuCGHj5+We4deM633r537J5/RKnT7+Lze3LGO9pdt9gMd9l7/ANGn9ECOM+I2IMp8+e5bEn34ubzGjawLdeeYPf+dwzqJmTgC9jsIaVzWmgd2ENt2/f5q23rnHx4oYmMdP1HZhLJslqMnFv0qf3cx7jbG0NYwz1ZMJkOunboS+XS9pFS1x4fCGUzghd23J4eMiibfoSkVVdc/bMGc6eO0c9W8fMpqydOYXdmNHF0HNaSz5rnteyz0hg8CGzhswhNXxg2TTHaGwlSpvHK4dGSh9y1Q3q52fltf659P8rxjKbmxobPhbKiD0EkcqdDD1CQogFwPS7Ox5YKL+5u8VTW8JssmRhQ0pDyghZ6IUyD1BZrj4/VhdRSY3K3xviV4WfWApv//z4NeZ+mFl7msQiMb1pqyZpSOGP23sH3Lq7BzEgJrKxsc4HvveHObz9Ji9+9SvsH7yBs0q87vyy/52Llx5iY3MLV1XsnL2SENrIiy+9xue+9HIqoU/Kxle/RyH6FMeN2seirBNLtiaMYX9vwQvfeIVHH73I1uYMO62UKRJLxHTcJkDHMJtwYbSI8/OqqtjZ2enHu21b2kVLO2/pWm2wGppG9VEIrG1v0iRieQY8tre32dnZoZ6t0QJiU3uEHCRHyQYxXY+1TmvmStIsouZeDu7ruOSyG5FFu9BEZOf6NLAyDJc1YTYRM4ZArxmlr0CXr7t3dkrcgbRBo2wjhblcQksDIuDc4CbFCCGohjXiCDJU0Ri4y5J8cf28MUrPG4gK32Gf8vWXXuJbV3+AGdfVXwsBcowtEX5zkaQ8gXkxrLLyy12y1JKrmrNcUAPQs+p7js/V49CoZTObzcgAgQ6eQt0hdgnCFhBL9B379w74rc89y3Q646M/8AdhuceLX30WEeHJpz7GbGOLvYMFb755nRdf2yVEoX3hVjIdU+ZFzJPdX4ZOf4kUo7mSzo7vUaLmDLZLwyvfvMYbr99i492P6GZDBO8VmF9BF7PZlRd36cOXQXaRoTGOiDCZaPn9sBHo2o62aQhtgwRNxYoiLJslYi31RGuxbm5ssr6xTuUm6kOazAiSvhWiEaU45MLIuThV8F4BG5MRVMF3nhzSCCL4gI6rjxinWjVnggxJyZEuhZ+stdqnMoUo9D4LBprkyn/JH03zEXOejAjGinoFxmqsU4SIQ4wnxFzQOs2xVZM4mgQupdCRY6hdFFIjHyW+x6R4TtC49zkeWCj/zc/+FV787Q/z5//8f8nFWUeXFlmZzArDD5cl7PMiKYPHWbuuJqmWar801QbzNfcwPCnmM9AAo+giKBn99H2gGXFBIWLisIt184bPfFaF8dGr70Oi5/NfeWUkCOqxqjYMK0KYh16K5yUwRRLKLg6Z9sMhBC/cu3fEM888z9UrF6gri5FI8B0uCXCG8kdZOAXtOAtlLkRlre2LF+fX8+dM7ainUzV7fYcTQ51brafW69PZjBi15GNd1xjjaFvfF6KKRjB2tUzjMPdjMzOb31rwTE3Ewa3J38vrpTRbTzryOltNrC6Bx1V3IWtqTNX7piI2CbdLfmZLCKYHDtUNyWCW9Ci4IFSuYjqdslwu+14rqiU7nMvX9B0mpAO89PyX+Av/1c/w1//af8fGtMUjmFpRKedcv1BKUkB+5AEejiHsUAaVV4sQlQOZNWX/nBMyPWQQmg7BGZNaNh7PCBmuhMReGRcBizHyyqtvYOR4ZbyxwL2zo7z2tm3
7uJ6I0EY1fY2JvPrK63zrm6/ygQ+8G6FTs7IoKrWaLoQMVdlKrVgnytvqeGXTLs+RqypMrHBGF1m+UU/scx/zkVHxnMGvwfqh3EUGdrKLkh/eB9oU/1OBzOlX2e8cxilbXdknLRlfZSy7LLRVbuT5c6XlkL+jfw9rTzms9GVHJdqhwzRDgnUU6WmWw1wKrffQLHvAKR8iskIr/PbHg30qD/Sln2B395D/+r/9q1TO4KOm/sRQIGR2MCPypAx9McYB5KGcxTiWs5roWgrl8P0hy8D7oJB+iAkpTOyJ4LGxw8UOq0VARoN1kliV1wfKYgkxahNk0GyAhCCqz4IaRhLBRC0HadSMltzIJD2X5KspsyOHbTxd19CFji50tLGlCy1diOwfdnz9G6+yXGj2RRdaWt/12kvbtCcGk0b5AIOxDtJiqOuayWSiHaXSosstG8QI9bSinlQYq8TralJhJzVuWmNqRzXRIlM92kmOIfsUhgJXOzUXRdK4OC0BFC0RR+cF74XOQ9uRslA8GJcI4+rTdWEoV5kFrdys83A6Y7BihoT1ZCLq7yfzUjLyatTntU7T8Iwt/lVz1Fihqh3ilDCCsdoXNaDXJwYxFWIdUaKuedFYuI9aU3fZLjhcHCl5PvrU7j7iKou12mLPfKdDIgDx/O/DxsALL/4bvvbyDd5z+TwSOkJV981qcj3NrB3LoOnbJTaX/mXpWx7zGeOQYZCFMgtXjOSoIQASPVaESlDz7wERsVWN6gvTU8G6DBgM1zP6fvFvFv2+JKOkN4rnIcU5FQRJ4exW/aBXXrnGG29c58l3ndcGMsIoj1LHIZVRTCBHrhXTo50h+XHFpmasBtG1WnoqfOxSVYCYskhsYsYaQ5fAFxKIp/FDr/WDjKj/F0nlSUyiv2l4IaO9IpKSijP7JQXfEzod2qLzWGF29loujasR03fxHoCv4ZEHJyYAIiJEm1hJ1gyeT4ha0VD0GlAMTktror0/jBiC5Hq1Wt7ER48pADxjtD6uBGU8ZaDRB5/KjMqoKPW3O96RphQccuUPIXaDv/43/y6xEppuqYmyhVlQxpjyazCERwbhHBZ7qQWzObYa2xw+S3++1e/74Pss8xADzlnqukoL+XcDU8dBiOJANxtFS2PxuN8xcvTHH86IYr5vHzyd9zRNy82bt/n6179B5z3T6ZTJZEKdHlVdU1UVk8mE6XTav19VSqQus+JzIL4s1py/6yrHZDIZyPEjs3TcWr20ZjQVSz+fu0IPc6wWTNO22pEqPZDMG7V0fjCle2uqiFWrn2yHniFW/x6sqcL0ZGzh5GvNY1++llO/MhAV0/3lWY0UtWv776ZgW0aYg1pQne/oVj5bjlH++52ESN6RprQxMFm/BGc+wYvf+Dc888p1Hn1kk9p7XHUcXSpR1ZNMUdV4Y0HM791Pc6pmGgttPvQcuoMDYAxV7VhbF+yBmrP5HG83SP1v9h8pt+BBMzN6Nj5yDZzyGG8wChXl8pbJgFe/i6xFhbDseOmbb3D9xl0ef/wqtXNQEKPzArDWIglEycycEIZYZkCtCg2sm6LXh6GubD/+maNKjKr14vA7OqR6V865fBU9uUB7K6W8QrTubg5NZE1tc85rgJC6mMVkjoO6NJIqBHQ+t2+oCH5ICsiIqlrh2h07Z/mfPJdhtI5KLVzOyaqFNlp3xbpYjZ3qdQ/kd1dpOCcK9BTtGPt477c73pGmNBiCNZhH/ihmepW/8Tf+NlO3CXHcyDQvkoyKtW07QgnzjWbBWg0Alzt7fi8fIQxsj1WBzGbRcJ4WVy95zwcusb5p+l6GA+j0ALtX2jpl9NDwhckPuM8j9o9SeEvBHOemxtSyDjyitVet47W3bvDl575B40GSv1VqyNlsppquqjDOEo3Qdp6288wXjWbzi8G4CldNmM7WqNwEm4juNlU8l6iIdd8jpRie1TxDXZAae4tRMzc0cUBSWdmYKgPYVHfVoaU51c/rfCCSKgtgEOPSIz+3RFS
Q899aB8j27wkmmZHDGhit1xOuOT8/SejKf0tcowTmyvVYgmm99jYpxILiBUj2OyPd27hvo+t+oE+lQ2/KEDYu4q78NK986xX+0T/9Fep6MhqY1Z1m9Qby4PRmZggjYSy166rghjDWkuPfGYcmQvR4v+TJJx/lypVLWHN8gk6+x5VBSr7LqtBlf/EkgSzPdzLqFscPGZ6rDtW80BAN83nHl7/8PLdu3aWqFEmtk+mawxxZUMv4cNd1zOdzFosFwSuCub6+znQ67T+XN6k81v3Ob0ozcVy0rATwfCpK1rUe7yO+U7MuAhjpQamAlmBRcCRiKwdGtB16ajUv1qQQU+KKWgVpApFEgyrSsxVkq4BJLDo2F/NbPvIY3e9YFb77WVK9m5R85Jwh04OcKW6aO83E4nsPevwuhFJr4ZiL30d96uP8/b///+XVN3eJMdI0S2LwiWgdhn4VhRAZY0YLIpu4J2nN+2lORVqPF34eLjQFtLFYY9laX+Ndjz/M+XM7zCYGZ7QYlST0M8ZMdJfiMTrhyivD5+4r3Dpg932/PHOuNk6P6mrAPIilixGPcP36LV584WXapmO2tq5JvUablIYu4NskHMkKyMLaWxkydLvqN60UkskNavW7mWRhEzquD3qtJnSpPIgPkeWy7X3DGHLbvcHcWwX38t9ZSEqhH9rN65EXe0wm96DpVOkaG3ASmYV+xPvv9e3W5XjobdX0zNeV/x6XUx27XXA8zdBae+z8/ewmlPskbOR+xzsSys7rRETFkqmf+LNEd5b/+r/5q4ChafaJfqnORdB6p1omIRBCh/ctXdcSgk/1Vhjd8Ek2fQkOBZ9SxWI49pne/E0BXh+E4A0mWOqw5PFLW3zs6Ud5/JEdttcttfHY0GJDg/HacyQGgWiQVFPWxDRAgqJ9UBikmnkgmBN35gzJ5h1TVaokpA9FBSUBDuIQcWCsFgc2E0KocLNNwmSNzjiapfCVL32d69du07VBGSjBIB1IGwmNp1k2NI3mVrpKF2Y2bZ1xEKBrOppGO0Sr1ZF6N6JZF4gGj0IEH4QhrAGdF5ZtpO2gC0bZN6QwiDFJQ6SUPcbhDGAU5shapswqKhftZDJJRbfyeDKsEdEeLZGIt8KyTnxrnSqtS+SGVvAWkpuha1Ji6Pk9WlQ8s3DU5YkpI0d90SHRwTktqGVtin/HTEdJTK4YKbs9iyQBllzOc5UocvLxjoCe0Gsm1Rx+dorq0h/i5it/m3/xy5/hRz/1HmJsESpy2EJE65Y0zUDgLv3KHog4wRw9Zr7m9+/jCyqKllFYxbdjBNvNmZmOx65u0rSnmDrD7p0Ddnd3OZwf0bQhASEy6Mk47L3ZXNKbEjQVKKG5chxJ7o+xIzkMCEW4JFdWBzAWqSdELN3hksrWTNYmzO+02Njy4guv8Ju/+RmM+W4uX7zEzNWIh+hjylSJhBSEn9Q1Uk/6UhneK5WuRLRLkrgKkaZBabv4hIoWpeX7+Y8RKxDF9ret16+4Q9d6JA4NerKpt8r+WhXIfE2lT7da17V/bgQvkc5CtOpd2lwFgLzBpxIyfYZNvtj8PG/qmWFmTrDAUizaFD4p9L5sXiv9IKT1GcLQ4j0TEt7OfC6Pd6Qp+6x+dHdtxRIf+hRu+yP8/N//BXYXU1qfry9l8adCR5PJZIRerYI49/Mhy9dX/dVVbRkL3zQkQkGXsuxjbJnUHY89cor3vfcy73riEk88fpWHLp3n1PYG00mtaTcxgy/DaOvg0g9yMlyLj9zP/7g/kkc68+CvpZqkxtCu13SVZdl1uMkak83TBLvGvaOO3/rtz/LZz/wON27cpGkaOu/p2o6YgLRcXzWTvpUJMzB/crysRBFX0+30Mfj8bdvSFWEun3z78h4hb7ZFbxHGvOfs95ZmbWnulSbtyeM1CNsw+HHQkHboUO2LcEvmxYKM1lhMgtNvDCuYRDb1RaRHrNVvNGmM6QX97U3TBzdd4Z2SB0QDq72WQKDaQK78UQ6f+0v
8rb/5D/jz/5s/iw+CkQaMR2R6bHGu2uslBW81BHKMCZR+9qTJGv1GX8pCEscy4Ezk9PqMzWqTqanYmtWc2Zqwuzfnzt0ld/fnLOYL2nZJ1yX/KGVeUHBl+/Y4ZuDwltefJ+L+wqrXZQxELK6aaIu7piF2YN0EO410i4Z2HllbP8Vi0dHe63jtzWv89ueeYevUGWbfNWNzOiNE5fhOJw4ffTKRY18bJ4SBfwoDKnmSnxOCdnbuaXIMn8sab7VQcTlvMYbC/xvSmEoubkl9WxXI/Jk8Tr22zOYhWTtpETEdYiWJSMr+ySZkHn5rszmuNXcQMnEZQmr6FNLGgcZbbZHtQVTNmAuPGaOlO1ufoJyYi7plTTo8V1/7QQR3ON6ZUKYRKWGPGCGefhy7/V187rO/w/Xd/yXntj1RGoxdIL4idz3NxIKTfMdSm5wE/vQLf8ApRsLQa8vib93pE4cyCg6b0rkcVy+d4czWGvfuTbi7d8SF84bDRcfRfMHBwSFH8yVHR3OaZaMmMag5HAzLZcdyoZzVthub0+PN4n6CqeaihgwiFoOtK3zTEbvAqVDRzir2mz2WRw2TakK9vkk8OqJdnuKb37rNr/36bzCd1HzkQx/CWZOQy7SQUUTVMM5RLWOMJdK4ulgychtCQKzTsESxWebv5DHPpPQeSBItniyBnmRQJiOUgmqtHb1X/pvnves6MBGL9Myo4DuCbzFiCV6rLZCqVQjKd/YhWUyS11qXNnX5/7f3p8G2ZdldH/qbzVpr79Pc/mZlU43KUqEGMAY/GwHi0RhBgFHRBDZ22GEbN2E7/MHh9+G9F44XNg7z4dmAeQJbopHVUkIqySohGocNEkiFLaGOQiUkS6oGSVWprGwqM29z9t5rzeZ9GHPMNdc6++ZtslIuyTlvrHv22Wfv1c4xxxj/McZ/QPErjdFocapaESKuhM+0QkfI0CQ/1hfy62hK6VmpQCJnTBbQLjOTN68XoIeNx2tbYNRYUNRQVsNkTzHv/COEn/oJ/sdv/Gv8v/8f/5oAANNtLIHOzyVdmrS+1pxrsEcfziUtVOZ5u0rr98TfnF9ra+2cCyMCrvoCnYPN+YbzjefalTPuHCbuXhzYH3r2hxOmmJhCZBwDYVLzGcIEL774Cp/+1PPIHE+zg7gYb6wpU2Fjm0LCxMSw9Zi+Y3+YuLc/cH71Ku4wMl2M7HcT/ekA5yc4DozTxE//7McYBsezzz7Ne971LoKQ2cwLWLkXrRCoaaiaqPXn5f6WzKuc54ZINi/up+6rbfrauiKKlMqkXTbUiU3VznprBX3hO6qpW0iU9XytofQDzeQUSqmJgDgV9TQlK6kQi2WozYgNQbiErbBSGBNLHBQMAWuFyCzZov1thQOkxAttQV/8U10Qdf9lrubmOTzqeOxeIvPN0hicrBbx+pdin/1j/PgPf5B/9OO/jd/4m25gkyMTFits69Mc8xVbf0N/f5CZdVTjlpNrhbZqCqeFuCU53EDfOc79lv5sw/m1E8Zx5GJXAu6U3MykiwhMU+aZd1wjjHd5+eXPFivAVmtggRA99B4KQDOFkY4ThtNTRi64P+3p4ynD6ZawG9ntL0jbDd3JFhv3sD8QwwU/87Of5Id++Ed55pln2Q5dPUdNP6MuWhYtjVqv3O2i1/qebYVGO/TzWg3Rhrf0nmuJUkpzdcdaI65jt+tn3ZrF9Wfx31MM+G5mkZNrCxgTSyuCojRyrMBLPTfvxGzNQT8FWQiuLUkSQwiSlmCEYCtnsWhM4ci1JkN2xYTOcp+NEWTXCOug0WmQm5h2Pr5Ir8cTCGUJbmdJ8vZK524NPPd74NUf5y9+3Tfxdf+/P8nAHfZWej4YQ6k0kEUt6U6MtH2LQUt8jvlnNO/NP/MibUkhKKPow2I/alZlsVEKLoeYI4ifuOk8bAbCqRg08j1bwixR5ni2hAl6Ej/0Qz/GC5+9Q5SmI+VgbYbOg+8hWjVCJMWJFBL96Rnb5En3LoRnd7ulPxk
47HeEw0jfeeg3EE9JBu5MB37ghz/Kb/xN/wK//ku/mBgPzf0xcwV+ZX3Te+EQlgJBkcWCCY1JKX00jCmsdSu0vFo1SOaQdx7r5C7KTdVEdifunRE+J8tcpWGdZ5xC6dEpiexTiCWENi8cChI6a6QtPIk47QkKL9R5kIlxFG2cSi5umqoVI1WvEaNP25brMJZEKOV50szXGiGldrU9hv4QBF5CKWMNrRibyTmKUGMx0dGXUJmzshjZlLlEIv6A8Vjo62JaVU0H3sjqYYar2Hf/MV555RW+5pu+i5xHhLU8kFJA6DCEZEriPwI1L4Vr3rdOLmh5QNcIHMym4uWLbuNc6hPLq+KkG/EXvMl4Mt5kNt5w2llOe8vZ4LiydVw567l+dcOtGyfcunHCl7zvC/gN/+yX8fTTt0taVZvn85BhQHpSFqEMI+P+QqoK+h7befa7HSFMDNse62Ha72Wl7wfMcILpr5D9GS++ep9/+KP/iHu7XVn02lo+W3/Pea6ouayl2ucpDGxSMTIL43qBrH05ciG9YqbmqIt2IaoyRpvoqg0oCK1m/uj3pEHQcg7U1EwD2SYwkZxHETgimCThoBRIaSTFkRAOTGHPNO6Yph05H8h5JMaRnIP4h6UxrzOJzsOmdwydxZqIMwlDgDRh0lQoysqWAzZPkEcwI8ZOOBsw+YA18toRcSQ6Z+gceJuxJmLNW1Alsh6S4EzRfuWG3/wy7NmX8X1/7+/zUjylGNuz4sqqIeRBqjPc2uCtf6JCFWOsDNNr5PXoMHOyttr57VDBXmjmsmnZk7NCUeicpXOezluci/RD5OlnrvBrvvg9fOmX/Rpu3b75ZDdQLp6cA/vDXXYXr2NcIJ14mCL54oAz0pcjhFB7Jvb9IJrJejKWn/4/foZPP/88IZR4oCZsN5esvmVr0sNcVqVpdPp3771ot5Uwzhk4y5zT9rm1IZBjYxlCWS28zTPPq9e68Ba5rtdXmxolKUgIMZTwTSCEiRii8A/FIABP6ZgtbSKoPL/eS/N6Z01hG8iLzeRMjpIIgzmQzR7sAetGUr4gc4GxB7AHMjusPWDMAecnnJ+wblzfiqPjTQklaKkRZSUEXId/9x8j5oG/8k0fwrsrmCwt1ZTnROxr5oJfrQDIiRyF3DmFCCkt7fGcq9/TPsyjF1ZQv7bxjwrhGs4ve6Msx9SmN+Wn0ER4Zkp76PrMc++6zRd90bv4gvc+izEj1pei2Sbl6vIda89Gjuoy5HFk3N2FOGJPOjprmC52TOPIZtjgrWO/2xOmhPc94EjZkXLHZ158nX/8kZ/m3t0DOWqcLyIEyIFcWrLr+Wjaot7DNQLb+pzGLrXl2sdvY4rt99d+Y+sralgEWAjxseeqQBFQCgGSWCY5inVGIscJUpDJLLGPkmFTfhd1DqWT9yITR3/PNT9rlaCfSnYawkUVQpmXGZtFezqTMDlUbWryiGXCMmGQ184EvH3LNaVM4kxpgZYLj6dxmJtfQnf7d/MP/t7f4yMf/SR2CkyTIU+ZHKRrss2ZcNgTx4M45ClAyaxPOZDSRIxTmVgR60x9reauPsx5m6F/3wSqjSm6OS2FcjaTzaVN51nOkOLMJzObqInt1vHMczd45tkbDFuHc7mkYUneqHLjXAo5MAunsgY5IE8HwuECkzOn52ckk7h3/x4pZk42p5Ash8PEFHJJx+tJdOz2iY9+9GP89E99nIuLQzEHk4BsiJmX0oyQ6kRvBUVDIMeqI/T8W84lDZOsOy+3QrsWVrV42uQF/Z4k2XeXwLuq2TOCkiLMA3WxSZEco5B+6SKbK4RTNNxy1qLXlufEgxafq0ojixZWrSnEX6FqTQmrlM8bUzRvURwmituWJzG1TSQVqpeHjScXytZ8yEmC1kj2hMkG/8zvxfqr/JVv/BayHfABTLTkmIhTIIVYKR0WvS1y4RwrN2Eax6XGLHe9FcY2a6RW3FeiqHKJWWCgoxUm1dmcBTIXUE19MaUbmRcjMNZwdrb
l9lPXuHbtrMDmy0VCE61r/WJz/xQXkdXZ1WZA3LvA9o7ubEsorc59KdU6HIRE2bsO53qkySv84qd+iR/70X/MJz/xCxwOh3IuCn4tqVlawVjfvzVnqbomusBV0KcJiLcIqSt+ZKsBj6HrLQKfcy7Paq4gkimwXBgsRVuaUoubonTlNpnD/kCcJDaba7aO/LNGFj4VtKytJyg9TkoBBXkWRn1EOabFplqTAg7q5ozDWS+1nUmb+2RSEKE2GeK05qk6Pt6U+VpBzjKBpYst+Gwx2+dwz7yfX/jFT/H9H/k5wnBBdCPV5G1QvHYFbx+I/k1H+/Dbh6yTwbommFQ+p/SGus+lhpyBCfV7NRFg7duuh8iUpLGdn11BW30bsziFeh6aYqZJ2O0Col8wGabDgXR3xxQmXCfCHCbxJ7cnJ1jnOYzC4dP3GzCeMSRev3/BT/3Mx/iHP/KPeOnFz5JKcr10NpbrbJkg2snf1rnOJqciqctc5VnQ0iWftRZbMwtra6a2WlDjvmQj5V4hE6dYhEpQWBFCbWSgafNFcxpTMYbFsRUZXiws83lo2qBWxRxL6WwmzEJj178ZUxeldv/tfdTR+tmXyeOOjzfpUy6HmLKZyVqSM9h3/wHs6fv481/9l7h/cUUSp9NczByScLxkI2jcFCPZGGJu0qvg0g07ZhZV86E8OFPSsHpvGXov5XhHkNFawYLEuESvzP+iVgzoqpwzZE2AloqBKUyk6MjJloVqXm6zpudhi+b0dL6j9730MayEU0UGpsi43xHuXdCnzKbzZDKHcSRmGE7PCTh2YwTT47szJjdwkTwv3Zv48Y9+jJ/6yU8y7jLkDujI2dWUQ1POow3oV+/KOAm5FbvEGofBSbOdIjxCaO3JSfh3gsCsYB0RajG1ls/FmEkxS+FQlClXW8tFyAFMMIRDwCWDCQEzjvQkfArYOGLCHp8DJk/SLD1GcmnDrmbkulZ3CSZZaTlgPcrGoAzxbeKDCqn+XQiylvMhkYgpEFMgJelWphUmapnoYlUBSp3vv3xCmRevUpbK+WTB+C39+/4TYnT8N//NVzP0m3rRqQRcc5aUrLGwE2jBrSKC7cqjyQrr95avi59Q4oDWZobe03ce9wDwRcCqSEgjMU8IpWAiE0lIyVmMY0ntKnmYZtbKgmAqVhVBpufCL219V2cdQz+wGTb0w0Y6axWk0mSIaSLsLmCaGPoe33eEHNmNB1w/4PoNhykRkuHs/BrDyVWiHThkxy+9fId/8k8+xp3XdpA9OVlSVL3e/myvX8wFa2yhA5FP5az33BXNMH9ffVJNQdSwRsrqnJlqRWnr8pwlD1RMZHUNxFfPwWCzKaDNiM0CjHQ2QjpAGiFO1Ayquu+y5NklvrD2h9fv6xwbx7Ei0OuiewnZaGxb3BVl8Gt7pqq/Ltc4J+vr3Mp5blb0KONzrCl1mPq7PXuO7bv/DT7x8Y/zt7//I6g/Jg9xjkVpQ9o5+8ZdMgVaMKc1SVpBk30oOdL8QLq+o+u7oyaOgECN3xVlFayZIIUQKsQ5O6nsgHv37vPiZ15kv9sL1eWKrqTV8GvzvJpdfvY79fpDmNgf9mCgH3p810nFS8oMwxbnvJBS5cz25IST01Os88QY+bmf+xif+cyLZZGYy7Q0xQ0zm6Rris/WFJsXEhltEbJ0qLILc3+e2GIv68Kqx9D7UjW0EXQz1Xsv8eyUZRE0plBxCo8fgignCVUVRvjF/MuzH3vM7VnPJR2tlnygu7LCL9rRfm/dR1P9eDWZH2W8OaE09b/mjZasNhNdIj/zWzD9c3zbt38IuitMGIKxBUhZNuPUn13X4WqCc7sKzj7aGmSQn7R0nxiB7bDG0Df0GWsYvh0tKNKa2y1AkbIQEL/00su8+OJL7A/3RbM2D2Kx6q72E0JhQTNUxrauF+FzhWd1t9+zO+ywzjBsNmAch8OE9wMnp+eEDBfjiPUbrl67yenJGcYYXnrpJV544YWjQI4xGg+mak61WEQ
YNaRCoaGcc2VVKEXIXdEY+dJ9a6+zrc5fI6vz+SRSOhDjnpQmUhTzUP0+adirtY5zg1t93rJP1VyypaQ+byFTq9psXizbRf9Yuudiqhtz9P12wV36nrkerzWJH2W8eU1ZBVO1Y64BOHnXQn8V/9wf4e7du/z1v/ODTLZnMhCZqSFaWgXZT0mMviQ45sECVbBsxU5UgPUUjTGVx+Y4b85y6E1uTZtxHKVb1Tjy0ksv8+lPP8+nPvW89CYpZqsI3VQJw9r9hBCqwI5h4jBNxBShZNH43uP6DuOkjdx+vyPGUAS2Z4qZEBPdsOX06lXcRkIjfXfC6dk53nfVQtCJ1yYG1LrCEt5R9DBnIcISU7MUh9u59QDMC18VKC4DbpWypTAHKqBXwTi7bEXRDx3GZkLcM4UdKQdiGkkpMIWRcZQUxFRCEWRNPhHgRwEnmfxqpcQivGJajuOBaRoL88WsEdtrWj93dYXasQ7ltJ/X3+XZTkzTyH5/sSjEOKaBj40nTkhfD700BeEFVKheFP07vpz86o/xoe/6bv6F3/abubrZQ5yrD7KinysETFbSJcDzAAW3DDeoYaVBp5zAaOmWILJjG9t6wGjPZc1n+9JLL/GpT32az3721eXVq6+VlxqmHTJxiomThAhZWqwLoVTGMk4jeZrY379Pt73C2Y1zDocgGZxDT99tIE24IOlpw7Dh9PSEFO+y2WwW97CdSOuAfytsl9HpZYhC74NaMTpJ22qP+ftp0VNG25TrpM+oSZ8J4QBOcltTLJ2/cJAyMUwFoFLHR1kNqD9zlgweubdy/733C74dfXZ93y/MzGPP/LIFdjzu2n6+NVVTSkUQZfq+ZW0Ljo1WGKmv8yXhwm/o3vl+DofI1//lb+Ck3+BLVn67CpKl4efi+4sb16Tssfjaal1rP5/rawF/RGNKo5rLt2A9ORd+YU6EmLh3f8cLv/Qizz//IuMhXPq+/lTNuNyHAB4a+0pBOl4d9qKBc0x4Zxm6jt44fBJTs98MnN+4xvbqFfzpKX67Ydhu6TdbIoau7zk5Oan8PHLrqqFX71NtnW60wkWS1HNi7p+RjbxWASoLDRSe15SrVtXR+ls5CzF2a8auQ0IGCc6HECTWnVK5v4EYQwl/RGIS8K29t+uwGCw1WasRVUu1z6NFQ9Xnbpnpjs2JdrSVTK2WXBK8SfhFqXAelQ7kyTSlaVdPQTobq7UYF6KpBB0rK/SV9+Cv/4v81Ef/d175zMhZdwHpOtZ2YHYYE/FmDjyr79auxql0aFYEbz4ikG3VkuUNZl5V9UeLFrcG03fEGMR85OEs1mJOS0XDK69d8PO/+AqffWWPs1uyM4QcCuHu0jSSFTxfetiixMUHlzJAYYozLs1AV6nCGXcHun5ic77BdZ5oPC4nfI74QcIN9y9eojdS8DxsN6VviCkwiVRupEgJvzhiiuWeikaMKjy+I46j/O6FldxgSsaawdiOEA04i3e9JGgYIbNKZSKYDH3fFQ1haglXa8rGEAlxzsKJIRABYzqpLLLSXkH7DEiX5bmwWkISppaF6ftT4SJSYVRLYe03tgLeuk8KyFjrCCFWk1/mu3xnisK+n4BxChUslLbxErNOaW4D+CCtfGw8vqZc79gsf6qBUSx+tKopGcjOM7z3j2P8U/x3X/tN+O0ZmTtg7mHMBDkRQ1zQCeoFLXlb5oqH+eDz60vxSAOYhFQHlCpym3FaX7nSsg/SkiAplPtD4IXPvMYLL95hmizW9BXtzVkgdGM180SOl7NWyYjQGrQyvSRhS/YFxECcDkzjjjDuoaSSHe7d4/6d14jhgPMG4z3GdhLuyB7bdVwcDoxJuj5vTraloavEfYWF0JYJSymyFhqMmEsYC1l4IrnGj2VhbX1NWzOJYG5VrqERBeZUEFsU/JjJrKExaUIsAJOxrtyhAtqpC9NgCW06YJtr3AbxW0FY+4FrJLUV0GOhlfV8EE0pVDNjCalMIRTKET3dRyfLasdjC+UMrM5
2Y770R3mdjVDwR0OJXVo4fYbunX+UT37in/K9H/5ZfG/J5gIVi3ojGv9Fb0rd9coHesPzbeH+1X7mS8k1JHJcGDXmZkjR8dprB1544S53Xj+gyQEzMsxicrSglPpP7XGBSviVivlWEcIUSdMBm0Zs2HG49zqvvPQZ7t69UwLWmfv37vLKKy9Xc+xwGIU9fbMpJWXFnJsmphCYglBMSnwucCjt21ISKpUpRqYwiWB5W5HZGo6yRXPa5TNQkEXu9Rxy6boOg6lJ6K3ZZ4yR3iGAd7PpaJvkhmoEQW25CFwSvDacpK5J+95aCDVUo6Go9nm04TYdKtQSrgp1nxrjbGPsarquK2/eOqAnizmaV+8dPZyBmEtc0gjWlw2Y534X7rM/wjd+y7fxm3/Tn+KKvUvMPcYmjG1WT0xpXlr6MOTiTxqt4WMWNDPrx8yxVU8LBhSKshU9jDFKobJZfme+toJWGst+P/HiC6/wwvMvM42J3DmySTVBW01sY4wktZSmtDJ5E7MZ1K7QuoFeYE5GOmzZTA4B44XjdbqTmMaR85uJkyvX8Hni1VdfoZsGwjhiGelOugJyJElEqC0FZPIcpnHhk6WccWlGSUmm5vHWczNG6riTEebybDGWUjUhdbWYtkek2CzCNDGnVbaAh8Q5kW7I2UJ2OMmtEwFoXBONT+o9axHfnHMtSxPf1WLMDPDoc2n7gCq7ooI+6vetfdV2cRaGvEnisMYWf3FCaVPEPLdy1ilJl7IyS611i6SCNxqPJ5RZJ81DRgODSuqdvpa/WWPxz/0h4p1/wjd+2/fwn/+Jr+LOXkzLzHyDKTfElmPrRaWctSZ8nkhZQiFCuDufw1LAUq0YkCoK5vIeLi8s8j2ZgPLADffvXfALv/ApXnrpZTH9osTRdAc5SSqb7HzWevW+1JIFXT3n4m0Q8ERz7nIuxdMGyAEbohTdjgfixUR4ak847OjCnrwL5DQRc2S7PYOcCwXlWITS6GVi4qwtQghCHJUEJe18J9o6qeknK73zwpJu3Fx1owFhqaktnLHWSGLDYSJGqZpptaiGlqppZxAa0mwhWjpnSwLBPOdk4Qw42yFVQhr2mhMFFhaRMcDMNdvOgRbQWSL/yxhrK/T68FKS2HSMsRZx67FTCTGJ8JVsICOs8pLW2LDVP2Q8llCaB7zOD/hQfXCIIFkKAwjgrn8x22e+in/4wx/kw1/+m/mN73sHmQer+Qq3pyRZAQ/AWtvR3uCcK14gr2k1ajntlR87C7PY4jlbXn/9Dr/0wvNMYSQmy1gYxqt53KJyxMW11P012NTlu1i0rfrVRhjI1Q4wSVr8jfmC1179DNkmrBmZ9plkJhKJ8/Nz0UrTxBSn0mJA45BN5YahsrcrE0BIEVfMT9Ocl68+nyGUhrGxmLwYyZPNKZJixnRzXm2M7pL5phPZWifmsHe47EhxrL4YUErtZGFTbQuatpZr23KYWzK0wM66rlXNSY1T6wLR9z3jOC7i0W18FeYa1AogGYvLM+jUzlMdzjmm8dGyeNrxeD5lY2W10Mobf2f2D3UYAGtx7/kDdKdfytd+3dczldbcdb+qJctNrchdI5BHbfTMIiQzC2ZaCWku2rVMjiNXo5+ZNXXilVc+y+uv3UFRiPbBtceTBaQ9PxHsWu5TmNGNmcMT1s5NerSBTzvZcs6l2xeYlAhhT4w7MiMxHaSyPkWunp/jjJVJVMqNcpTwU1tTWjXCKl1xnTYmx6fxI0szhmblXwMj+szUz9L708Y2IWMNuAIMqTuhI5WEgNZnXWu19jraZ5Dz7E60Qnnp+R75Xb+vGUUae1yTWOtn1Ydsj6fa+I2O96DxZHHKvNweRThz/Z7y2RiMP2V47v3cu3+fb/6f/lecHbjoM8lkkhH/s20nHtLM4SKsY1CcpVLmo6ANxJAXeZUpLQVVN3H2rWjL0mfCpDS/pvSgSJnpMHJx/4CxPcb5YnpRUvkyFN6dzBwW0YvX6gpjPNb2pV13h3WdvLY
d1vc4P2C9xzmDczJZ6yQutXyV3CkFbBE2Y1Ktgr929QxnIU8RkxI2aZFwxqQIMdZYsI7anbnJYGpTBEMQKg3lXArTAWOkfYEtwiWPI9V+JkCdrLofY0xlevPW4k0mHC5qi3nJBCpCXZLjRUNqAgQCUB3GKjSHw4GQNENqrN29FMvQniTtAl9BNZ2bxVzS9vPjNDLFwBgmxkna2ksLHVfbEKwBJD0/rcbRgvhUsqX2h7eIDuSRNeR6ZA2TqKkkIJC5+evpz7+MD//A9zMSGeIOY0KZ6OK7SI6oEb9mYTevNFPJ0NeuXEtBTEdfW2sZNhs67xoN3fCzlFhnShP37t7n3r0d2ownL3rYN06QrlTVHRHTz/uerhvoup6u2+D7LX7Y4HrZun6L67o5u6dJS6sk1uo721SoJySlDCMhnpPTgWvXz7EmE+Ik2tqwOLdWI9Tn2qz87U+dbON4IMTCYqDx2JxLEfnys22Op5qRC2unoKpSWByZDnvh0knSgmEcDzUIvwZatB1BWxOpr9fF22vg7lJ4JBdtnEWEM9R7HS8h4nIOXd/V4nkVyJZ9ISVpRDVNgVgKCMSVgRB+mXJfZSI/2mczEAxEAxK1M4zuFP/sH2KaEt/8nd9L5wArFeRrEwFWwA1Lk1Ee2IN7jrQrvzrjIF2atC15ayrmLOwHu/2e1+/e5fkXXuDuvfvVf7Rqdq3MpMU1N5O7nrsxtceKtTM0bwoaud5PDbHoPup9z4twirWWq1evcvPGDVnQ9J7Vu9/cj6T9V/LskhxBHltzv63yWAMrbdH6g65//qmCPhKjoMnaCWwcR/b7vfQuya1Z2vSJSfPzFi0eq3bWc64dvlbz5NjcabdWsJcLwrKVgmRFLZMOdL6M08ThcKj1mq2v+yjjiTXlorMxxTyCBwuomgZQA9UBIxUjN38D7uzX8vf/3vdzN94i2EAyqa50c+3e0neA9QpfQib1TNuDH3kIaa6Js85Jy7jiB1SHP8Od3YEXX7vPL/zSy9y9t8MgHZ68daXnxHys9aKh59ZW4M8+5+Vq/NZ8VGDBey/n1vni/815o0pjIcrZc35+ztVr50iEuM1aKYhliWznJKat0hK0WrOdgHrumranJv/6muWzlB4kS42rz1CvP2dJWpimRAiZEEveRBTwSDNxWq2bs1hMKohKz5JLyE0yfZTCxczngamLnZySqdpLwSM9pxhT0yKwPQbVpNbrapMU2sSFTJZMJTWrS0yz9UMfNh5LKFXoVBB1m5OQyniI5sxALFk+yWQmP2Df9x+S7Sn/9Z/6C5y4p8rNvdzmer2S62tdgdszyppOVGz6xepXG8QURoECyigYUIPxIfHavT3/9NMv8vOffpHX7l5wmCamUILG+QE9T9r7ZiQeVsEbNXdYLhT1/jTXtXjwzokwMqcfksvVGou1HdeuXuPq1TMwEsLR/ZkMrgilsxS6RbkPrM6jvY42lNAKVztBdXbkzCIe12pREGE/7KU3SwyZaUpI2p6v8WzMjLKLFTDXpsYoJNmx+G1a1WKNq/m87fmoQCkpdEjyfXFxJLyx348cDlMVSDExJcQRi0+r+wLN9bUL0EjvUas8jDFzRVHRnI8aEnli89U2W6Vof8gwLKIOjexm8slT9O/+d3j++ef5W9/3k1jb1w+0gtle2DGtBCwmeft67VscE3Tl0hkGaRm/u7jg+Rde5FO/9CK7MTCGII5/ReYi68Vhvamm0UaoXddVDbu+Jr3eNivl6HXm5QtJtNhw8+ZNTk62dN7RdVrYLKCRop1WzW/bIKarxaTVCOu80UVYpXzumCDra/X3xE9MgOTbSvK7MCAqHYka56m0AFCtu1iENeatV27UDfBIy4TGv2tQZDFLl6V46pPu9/tFdo5eU+uWtKme62s89p6GT3a7XQ25PMp4ooR00/wvIy/eyfNbi1GR0yPvZmOZnvmt2Bf+Nt/1oQ/xu7/iT5KRZkA6iZSBbKFV4OiEms3ZpRaq12BMfeAzlCw
CJFlEMik+8/JneOGFF7l3bwfGEkMqHZ8j2FzROj1P26BwemqSTdLhfI8kdmc6I+bjeqKvmREW5rrus7mJFUsiMXSGWzeusd0MmNhmRsmqn2IC58FYnBXNlwyFI0n2rBkp1SIqMUytqGgnrJaetYvmfG/nIgItkWuBIN85prF0S7ZzxhRZS8AyvrOkkKvpqPfhmLvQujn6ORW8NiZpWHZ0a7V5W+up31mn6j3SMMt5qPfhLWkam5khY/X3irvWhDwebT/tx7Mpk9sPmPf+2+wOlm/+9r/JSd8XQChXikC9LQlqAXRqNjUJZ/QVWmO7Ptzy+ZgL3QeaKJ7wxtBZR9cNvPr6Ba+9vmcKhhAzCUfGkU1Jms6Ft9UK16w1QuYrnTM8znqG/gTrBjA92chP64RZwXmH7zp8oSuxKyRvAUKkSDKpafUuSePGZLwNXD2J3LrSEcYDXddjjcOZjLMZbyzbYWDT95wMA1fPz9gOA7119M5Jt6oQcTnjsRAjcZxIsYSYpsR0mDjsRw57KS9TMrGcI2RloBPC5FyoG5U+tLI4kEk5YGzCevUuLFhHMrby2eqCOR4OjPuD7Ke0vMAknJeGTc5bumEuWl+7N5cYILIQtklW2BLEgXnRaTVka7a3+MBiTjcWgzVzl2xlutA0v0cZT2y+CmgjF5bXC0ieheBBW8YswgYJwGTSzS/F3fzt/P2///184jMXJGUIs+Zo7iPWVACp/BFoTYnLAqn7qK/L/7MZJ9Xsrut5+bOvcrEbSdmSssG4DuN7jHe4wnszDL0QXDlXWgbMx9tuTzg/v4LzA9l4knFEioYqMHxdWWGuK2wE8pK/qgthcZkxGWsSV848pxtDCLsZwIgBYgAjGkmp+p3JmBzFEIwJX/or5ig5pjnMXKcppmp6Ckt4rHcupcLZC0ifmCI4QCi1hAqcqFWSciATMbYwxRnA2driHuPK4poW6LKW4alFIXFiyWtem9IPomOp97PxpVtkX38+slaECuqohWRK4rzyImnd7luDvip6SlPi81g7WI72uxWZtRae/X0Yd8o3fMt3cNJ5oYO3M+hB49PMYYbmNFcAybxdzoW8fE6NuZuh67yU4xiJk86ATU8/yOu2cLd+0cAw9JydnbHdbgU5NVwSshZlrIK00pCtOX7pxpXnYozh6pVTht4RQ2C3u0+Io9QohlAmuMZcYwk7TCh/jZGTq2GSagamJQqqPqbe4zlMIgnvVRByqujjWnutM3Paez9NUxXEnHMRusIzyxx6aU1VzUeFmV1gTftRj7Fa6Na+8FoLrt2f1kyfE1Oa8zUzWGiKxtTXjzoez3xVP61oSN3ySjhraOSSxOZL2/K7xcm/8hz+2T/CJz7+SX7ipz8jFQTGgncYJ+VDukrKtxYSeWnFOyaEDwaISrWjEfPovV/wTgy+7qMKZN/h3ZJnNOcW8MhsTzo2W48v9JYZ4ZjVpOqjxz+SeTT/vaCO1dqY/+6c5fTsrPpH07QnJZ2YhXI/HohxJISRmMZqdmYmcp6kSsekwjckPENtlf4x1FvPuzUXc5ZSsVAa7GgyhrQAKLHVRuvmGi9VRjuh/I9xwnlTtW8ujHZ2Zvcs5zDfq2kaGcexAiutYK7dgWPjQde3UABcFsj2Gerc1jK8NsPpUcbnhGLy2OGqZZpX75lyQ9tt8Q1DMh7e+fswwzv52r/8TcTuKtFI0kF2tjKhpyIISku/OP4DBLK92UskVq9FJn3KBlLi1q1r0h68mGIz7f8ynihbw7ydA9ttz3bb4b0r6WMy2XL1fh9yX9faknY5M6Vnjezr7OyUW7duFk0XkGa9hSMmJlKYCNOOMO0J4SDVLVlS5mIYyWnCGaHfUHrNEIS4a61R2pS8BYhiZp6aXMiJU2ncmmLisD9AFvRUe3TklKpgSnVPJpdziGlO0dPicLkXl0ML2ndGCLJm7TwnHMzCs9Z86wX6mNC22lETH9qsosWCVRYQcYPmkNCjCubnlPf1czPEFI2bLf6
[base64 PNG data truncated]",
"text/plain": [ "
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "sav_dataset.visualize_annotation(\n", " frames, manual_annot, auto_annot,\n", " annotated_frame_id=0,\n", " show_auto=False,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Show the SA-V annotations in frame 0 - auto only" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "image/png": "iVBORw0KGgo
cfLZY17bwi+wRxgT9aNLKhO5SJ7DwwXzURPWJxlOT4qohCoMvK48z2XOk7/4IW+bikNleVp1HOqRsW15hNBvDJ/5nqAHjvfnNLWQHIykHZuFmEiF+mdMbv9yzgEdm82WunaMo2e9XuNDz+07RzhnGXqP0jl5z2FtIoWAUhnSn7Uz7r3yCkErmrFHzx3D669yUbeEfiyIpiGqTDL33uN+2npuuOSoyB5Uq2vkTymsc/RdhzW65LrXPv5vwu2mGzkfZMVrr72eZVBSNhpRoKxh784t5gdLrNGYqsIju8tuvVmTRPP++/e5uHjGk+fPyk34s29CaY0xKrOBioNNqjB1MpR23a1B5vgqYxiHwHYMuasjBEYSISii1iQMwWeq4bRuph/GZJKEVpkAfrMWv0O+tcaaibJ540p+OeATURBTdiZ1XeOcQ+lrkAhUzhPLZ941TzAZfHaOPkXapmG93ZBSNtQv3CjljWNGA9JqnNLoiy1XF1f8vb/36zx79pi//MmP6LseHTXRKR7qwGATXUxsR9iIxnrJpF2Ti7tWFFpgMQzMjdAmzcLWPIiKPx81/2BhOKPinz7dciGOioG///Yxn44G5/tSm4tIEi4vL5k3bSY1DyOSEs7Y0j5T7W5BkUwVM2aqkeWltUZEeOWVu1xerYgiLJdLrFI5xFXl0CdBmwp3cIu2atgoT2sdldYMQAiJFPNtBVznLrsDkvO0KAmURueKHkaBMxZfeLlMUaEASud8Hsig/fW6STBQKLQxWKVp2za3uhW6jnEGVxmcGKq6ykDI9PqFfxtTwocRbTRvvf0mL85e8FMZV/6ZMkWtuXyBVtmBKEBiYcNMwMp1GG5Q4GNuBg8etCKqHI24uuZgvmQTN9R1RSj03pwfCqagurkVa7cZO2eUUjYULRpHbqXK+V2hw+1qiPI5Y/6811NI0jn1MJq2aXBVRSylmspZgu5zmcZqlBKcirl/dPeE1e79uaYhrbe5TqssWruXsrWXNspNHBl//Ayz7vBJCDFxdnLG8xcn/LN/8c95fn7CTEUqYxis8Be251xFFt7wVdfyLHUEp+nHPRIXaAQXwEtLr47AVeyvt6iUeOrmfGwUv9mMRNWyrRo2RnC95TDNeCpbqqrCiKAkEcbA5mqDUxmo8cOIkkyEr9p2h1TGUIAXyZo3khSbdVcScMEYxWJvwcV6BcD+wQGVMqzWm0KRyqJbgiLNZlgtKBKv4KgMxKjoukiICp1M7toQAbkuLIcQYAwkoxGl0aIJOqPlravYyEjbzmhci4+BDMTnMFhPdK4beeTNs6WsIY2CrRxNXTGOA/P5PO9VZVA653laDIqJezrVaRVdP7LtOuqm4c6dWxwdHJZwWIAIKmWpDIHkc7i4s49yu2VkU3JDxNTHmDKf1KFwCXRVU9lcx0WEoBLbccuzfs0r9+5Q1ZZx3SHRo1VF8iNOa2aLORBRKlMD69phrcbHACY3f7Ves+8TqqLwiq/dyYSYXqOtO2gMgCSGmCp8DLlh24AiokvNWCPUMdFGBU6jtNCgGCXuIoSdAzUWbR3W1bz73ldYd56Ts8sv1iidc7hbB8iLFTMRxArf/e53+PFPPqTvVizEckc57uI4TI4P1JIHVWAvwRvach48Y99TxYFoSm9cLCFPHHhHNaimonEKrw250yp7fq0FVKaCKYHaR6StGMZxx5Jp25bVek1VVZkbaQxVVdO2DV03sr+/zzgEUoJ+2OyQ0okxklLu1YwpYitH5z11XdM0DSHkcCsTpzKfM5g5F23LVT0wqIQTsEqx3mwYfWQcS9uZZHElbhALmqbBaLCzmqpqkKpGzxqquiIZi1vucefWLR4/zeoC000jU51iMsRinNdoYv7e8fExxhiuzs44ODigbupCAC0xX2nKjSnR1DUxerrtlu12iw+Z5KC15tatW5xeXDDZnHWauq6wVSbyO2vIzegZ5Yzl4E8Ajy7PCy0Yk8shlTIY6zIRXWAz9nREIpEgkUePH7F3
0HxOHEtJ/owhXotXpVQI4yqX6mJRJyCB0xoo+keSEdJYst94w0EiU6lpimoyVXAYI9q4HUNKSoifRk8VEvv1jMFERsnCcDkauX42Qi5/Ka1Yd1tGa1neuk07X3yxRnn16Cn1K8fo/YbtoxOSCnTbNfNZwwf7x7wfW+4lx6FO2CB83SzY2jUzP/CaREgQ6oo6WHqVGAWiyl6y6i/5tbTHXtUQXWBrNdLl0EQrTW66EIwFZx2GBU+GAX++Zu0T1Tjyxv37rNdroh9pm4bFYkFKDdpZ+v557uMs4E0OHyNTJJTzlgx3n56eEmJm9Pddh9GaH/zgBxweHlE3FY/PnhOHhD894aFrWM+WfMqWbeu4fXiHzXrk9PSEwXdgc9eCSEK8J1ytM4fz+IgHn33KrddeZTGfo0VwWlMry1wZ+qQ4ODri0dOnfN6Xf379NH1wMv7KWlIIxNEjqTSITzGnkG/ulAjeY7Qm+IEQckN4DAWY0HB4eMDZxXkOH7Widi7XAHUu6RjRJf8rubESIhqdUm6vg8KnVTRNTa0sTiyVdiWEzA0BW51wVUUdM6mjqhySQo5KYq43S8i1P2stIcRC6SuN2VW1Iw4ocseSKtY83ZBJJtS1ILA34tZMNp+SjEyUMFrfqP3mMFqLYHyikox4qiSEGDNKffO5lIcWRbi6usIJvPeL3+Tw1u0v1igXz1bE9YDpB169fYvXbx1zcvqcJ48fIamhliWb0JMkoY3mI9PzuArcDYmohFnM4dNgFcHnEKcHTAqMVHw/Bcy24+vzPRqdDTaaGb04ohpJrqXXPX9y8YTnG0MwsF4ZxGY+5Itnz9l2ORR98ew5s8pxeXnG/tHh9cGVafOv60lTAj4d7HHowRgqZ/HjgDOGP//Of+Iv//IvmM9nnK0vkaQwKWEHT6g0J4MgtWV/f0nyKz7+6CeIFmxraWY1kiJOa2TbUaF57yvv46oa0QZVVTukNWXxHrIPizsjUhRUlet6mNa5zqkLiyf6gMSISoll3cA4EPsO3w/EELJR5jNYQteIFEelyHW1YRgLUJHD6f39BabkiVoSKoFJCpMKIpp0btmTHEOkGFGjJ24D626b80ZyR0pczKju3EJJrtNSwC4A5yqqqqIhh6Qi2TC0MiSlsNbgQ8YJcsh9swUvA00hhowiaw/G5fdLicTShBHnftBsoNeZ/lQLTZLwEgghEgXGwVPV7loxoBDpO2OyHi8wpatqykl3CLTObWgIlc2tfN04fLFGWYVAPBvxfkBaTe0DS2W4xHDit/zpTDGqkW/EitfMgg/thgcmcawVvRYeOji0gh412mfpwaQ1VhJrrflzIr2J7AWLNbDVwkNxPAuRzgiIxY3QHjSI7/mle7fp4z6f/ugjJCWODw54cXHGskhBaqXxPuzC02v2RjbGJHmDK53DDGN18cY5TDaVgZSwWtMNIyHmyGB6EGNKDMoQjBCMIXmP7zoq57j/+j3e/eB9bt+7zWtvvMajh5+hJKG2A7//736Xv/jLHzKMA2/kQl/x6opACdWUFIQvexA1lTzIodIwDBmJLISBHGJF+u0WI8LhfM75+SXKBwwCEpnajig1teBzs/pNZcIkOW8yKiHjls3FFSl6jBZSTHTrDS8eP+PgcB+jLaI1r7z6GtbVDL5HgI8//SHr9ZptuZ01OX9PlyP92vPqNnNHIWvfjNEjYui6jh7Fe+++w9X6BZDLS9a0KK0IYaRqqt2zzGRwdjqvuQ4dcnlCR1Sc+m/VziCnko1AadZWu5s0d7oIq7ClHzxRWZ49e87+wR5V5ai0wg8Bui0yCHuVw0dPnwJJYnmOhdmTi/C8ODtlDJ5lVaGMzjTEL9IoJ2ja2pz7+HGkspbbd+7wohI6o6ltTaSFakFFZKYSjdV4Kp66HpUCR0LptC8HshSOgmhOqhn/No70vSda4Q+erTiz0CuHCZ5FgLfmR5yOz3jx4oQx9cSxYxVGbh3fon3+hKqqAHbCWbv3znUOhlKkUj4B
u4Ps1+vcUNvMFqxPtvzu7/4ue/uHmap3I4YUKajvjXKGiPD48SOSON595z0ODw8ZR8/z58+pm4Z+vSaWPHX17EmmZ8V4Iw8pwpQFlZ1geChQe6kPxhSzooEvPZ2ljkdMDJsN9199lRQDfbclxYwapp+Ti8aYdrno5LBiSnTDhqY54M5iyfj8Muv5+gAI3WZL121wriaqOdvthj/44/+Z1WrNwcE+fd/z27/5bZbmko8fPACdATYD+HFk0/e0g8H6Ut9MCS+RJIXUHbJ6g/cerXLXiG0aFos5q9XFTmA753pZSc/arDYRQyjuLZfD2IkfZ2SYcnOKynlnLq1eF04ECFZx0m8ZROEThNWKzXaV9ZVQzBLcUQ0hClsvJJUyXxfFjUAEEIZxYLVZI8De/l5pani5usxLG2W/HXaMhcvtFcs2ohUM24HteUcoB/N72ws+Sc/44SxyUgmbPvFwI5zNHFSwJwXzkEhICmIkqcj2xSU/Xm34gYmQhHbQnAfDVQODElKseBZ7Pv3kit5oVoNDp5ax1jhtuby8Ym+5RxgHNtsNo/fcun2HMfh8aMsNsYsvJEsBalGoJGy3W/q+p64aUoisr1b044irGsSYHdwtBW0RmXK6XKTWKnt2pa4PevCR9WqbnUPKSqXKulzjUyp30qvsX3cXlmR1B81UsimPuRhWbq/K7JrJWCE7nllV8cb9+3SbbZbWbBqstvjPlTVyjTHXbyU7nNIVH8eRbrOhW68xTc1rx4fUxtKJzhiw0ty79zoXa/jBX/5Zfi17APqI08sBRcO/+De/y//+f/O/5uLklLOLC0KKWeOmSKl0wTJ6jbFqqjIVVl++xS8uL7j/xqs8efI8i2kTOTzc4+zsGVVdFcPMXSJt02Yebu8JUTACujCnRLHrDjHFGFPZp5i4vm1vGMogsPGJ0UPSGiUpvzldJD8iqCo/mxAjrrGQchTB7vmUnS7MJa018/kcJfKzDQX/vxrl2fOzTBVzhhQTR4caYiA5x3Y7sg6BFANrpXDOMAaNRHiK4mSu2assumpZ1I7zkzUSe4wYjFjaUbFoK+7OWuroMUvHvhgQw4XRnJlEV3iHQ7qD0467tqF2S7bDSBw2dOst77z6Og+ePGQ2nyNac+eVe7x48RytN9wExhWlzlb4kdHnTvX5bEFd1RzuHxAjHN29TVSaoVCrrs/1DRFfgeATWjskFXJyzOwyZUyOCGL+WUkbqGpEGURi7q2caoXFYUyGZlAk79GuuqG3pLMRxsyoqatMvE4pH4C7t47ZX845f3ZCGAdM0+bOngJsZUrhdZHbOYdPU0FUkcZAGiJXLy4Y25rZfI9XD25xdfmYpBXWWN54423+yf/4L9HVK9RH/znaHWTidRwBTX/yT/l//Yt/zT/+b/9Lvvsnf8DZ5TljHElaUKIYRNC6LfVHslq8UWANLsFmfcVs9iaKjIYrHem6FU1bUddVDhFjINMUhRASMSpiNKAisxhZ+hx1GDLonInxJS0gKwPF8vCydGhBscUhNKAjlXXMm5a6zvNgtEAdEnYIGBXRxtC2M7rNihjTrsE5J8tC5Rw+CUZFZrMZThncSwpnvbRRvv766ziXUbFb1nF//5Crywsu+g5XNbROaJqaxd4c4zRj9KA0wWqMsbReaKJl3jZs1qfEQSBl1Tct8EaT+NZbr/FabWlrTaodKdb8h9ML/uDiAiMWUFhToZKjqff4+te+iV00PHv6gMsnn7KvIovFAmuz/OLF5SXHx8ecnZ197rNMt55KkqVHYu6qjzERfKTbdpASFxeX1IsFqZQlrkNAnY2yKLgV8mFBBGXHo1XGsPOdAmidRZZ1Dp/HMXe3I3JTQWL3c1K5NSdO6dRilhUBi3KgyuGcM4a7t+/QrTcEPxKCZ26XGGPxqRxiQG7U6XajJ4rSnxYgwtnJKfHuAbZS7C0WWeJSMsCyHXLYWy9/iaq5w0ToN7YhpUR79Dusnv8TvvOjj3nz9dfxY8/F2GONRUIO
WWdVDSR0cWyDKKq6Zr8ydH7Ler0q0o4G5wxdv+Xw8IAxRGLIGkrWZvkTpxwx5MFCqgwSMjdyR5jCf7VDY6/lJa9L/vmm0wim1GP17jlqnRu6JV0T861SbDcboiS6sccLUHi8UQmqyLNonUs2zpis+PcS628xS6RlvV5zdXnO8+Q56a/QSnjy5DnbjScI1HWFDyNvvHqXW6ahTsI8BVxl2cdwVxmsmWFMg6iOmDpELIKwNJG9KnHpe063wmMUPlU8GDxjmgjBuUGZ6Om7nj/6o98nGoVJI8ezioeffMrrr79GHEeiH4ne56ZhpoZWAa2wGGLSu0g23xw65zDGc3F5RUzCOPTcXSzKA8oh0PRKuQE3P/lcRJ+84A1ie6W5gfWiyANpdAmlMtE6Fs2Xn/Wif2UpZPoqBWulFHWVa6rd6oooGarPGrBkwSq5bg6WSbRCct1UouS2rRIKd93AYrnAVkX7p/yNECLDOJSdvJYt2bk5pbDVAcoe84d/9h2++d//dyyaR+gYaWczzk9OqRM02mJ0AYEiJB/ZbDesoqZuLVdXV3m4jlal/S4xm89hO7AZtxhjaNuW0fvr/sjyZpxxmAmtfgkbkBvPJ6vPZSX22jlEIjEJItkoc3dPFlieW8eWwGrYlE6UYuSlm8ppTfKCcwZnbaFMfsFGud0OnJycsdmsSWlk1a3Z31uyPD5mtoQnT54xDp7zF2fctTW/9tqb3ArCUReoExx46KxlPY7YyhK3ciOcUqw74S+fDTynJihDTyRKwANJN0gc6bue7bYjhVRumlxTe+f+6xgxPHr4kL2mgRCIw0Bb15w+P8FojTPmmgFlFJLyLaZNJkITswr26D0xAUozxoBRBmUcUShyE9koMx+zFMu1zUJcRa07SSREjxVHcaBMfjnnRPkw+0JcVzfM71q8+abck/BXmaiU9zFvWmIIdONA70fO12tu368KH/XzRnndipSV30zx4rveT9HUVYUjkcQTNQia/b0D/vg/fReUpWrv7yhvUridk+sx7buk4SFdgnnV0GoLRnERE5VrqKzDFmR2I4Gkcv+lF8EkOD8/x5iKGANVZTHW4sfAcm+PzbZH6axe3vd9UbLLe6EEGmOx5dafCEV/pXf7qTXp1CUJOKupm0yLU6oMThKNGkecU7u6tkwEWGRXbqMQ8Z21zJZL+m2HrXtE+5d6Hy9tlE+fvmC93qKApmr59re+yfvvvYv3kY8+ecTDR8+IAjoaPv3slDtmRn90yIM61yJfmVtcjLQxdyUYBK0i6IQSQ0iaEwyr2pEIBDFZRiEmUohs11suLy9LaJlvNqVy3qFNJBKQFLg6P6d2FpUioR+4vLrC6MxjvFqtmYrDMUXGEDDWZbl+Ak1q6QePtgZjHEMXCT6glSHKBMVPnQvZuJQGrVMR5xKQlIWmUywhUT7oasdHK+09TP15aldrJH8buO6O+HnrJqFAyJ3v+4sl4zAy+JE+BHRVsX90hI++AEnXRINslDnsjFni+0apQSGiM80sDPjkiSpT/V5/8x2+/+l/wMzeR+t69w4mqUits65Ou/g6YfUn/Kvf/0P+0S9+hfOnj9n2HWOMuEVduG+Tm6JQCbODiTHn2s7qEm5mpPjJk2e8/e57hFJvjmnSkL1OD5RSODQmUUC8a1zvZQbRjZJY+5BlKjU0Tc3EXbZB0EFwRmN1KW8Uw1cyOer8vEXAVRU6RrabLd//3vdIrmZ8ifcAfwujvDi/KLeDYEzN1eWK//SfvsN2O/D85IKYMicyKFhp4Q9WT3l+p6Gt5lRRs61qTLfhg6g4qBtWhSWR1JQrWTKTMOupTrMZYgh02y7PniyzNUhF5q8YRD90LJuK+WIGCE1VkSS3ADWFMrZZbwpA9vm8L6ZMjra2wpqEUoEYBVeZLFlRNnnqVABdhJyn2meZ3VnC14mEMAlz5RtK5d5KpTBmMsIp5CsbPKF3JUSeOiKyZs5NV1/uo13nQnZQR4eHRSQ7cLVagTIorXc0
sWllBPpGxpXj4N3vBx8wOhvpMPRZekNnsESZjDBX88PMeGEy8uv3F1PEuYZq79s8P/2fSfM9fHzIVbelXs4RZxhuqEGEEkZP5Ibp2bjK0bZNnlZmcgg7DtN8yrhj2xhjioHmXTLqWntHTZ/vett+Zk3VIiEb5SZ6VJ1HJcYYdkavFLuhtZNzC2kS0JIShWV2kQ8+R13DQLu/4PX33md+fIvxix5bkKHhXCS9Wq350+9+F0kRrTOPcSIkJ+2Z7Td87RtvoLWjVwlfWU4RmtaRRo0ZXe7gV6XFRhQuDTi1BeUI5FAxhMBYFNmD9zhnmbUt202/I1NnZXFBkuKtN9/GD5kylgpkPo5lBsnNJPvGIaqqim6zzVxFuT4Uxhi0ZKaMbbKs/mSUQOk6uT5IkzKCCDtZCq11kcWYFAhyOeNzNUhVRFdvAA6icif/RFGb1rUpXd+VGo3ViqZtOTvb0Hdj1is6OMo6txMDZnoF4TrZkpvmXgr6caAmMfaRi6uO9XqDVRrb1Pzljz8CoFl+vVDgsrKcLTVsKf1URmtmy68yXvx7/s0f/BG/8fZ9wuqSpIQ+JXqfD3NSMBawzJT9fe21u8zmFUOfcFXPYm/JcrkkpLzHWak/MTnH+XzO0Je5KwiNgOHms+baOosz2qn2Sc4Xp/3xShOUoa6yONjnDLlsfkyRzbhBEAZNJj9MP6c8T+MqfEwMIXL74JC7r76GWy4Zv+g6pUGjdSClLGSUdEQKM99MiihKuH33mHuv7LPUNbWuOI2RwSVOtLBEEaJFokJbh/ElPBNFG4Uq5RYYJZkgnqmc02HNB7uZ5Zsv35q5vDEOkW478ODiM4aup53VPH78mHv3EqurNUdHR/l1po0j06/WfYc1lrZpSNNoAWd2D0wVA8uyEtdPJyXZ5Wcp5nmKzrmdUU56MZ9/oooJR823qs6k7J+S8cgNwwlRiaTSzpinukgqe11kSNFaWDZz+nGkHwPrbYeIom5muas/lltVlb9dhLEmlGq6h3MlLqK05+233sKqGU/OHtEPES0BR6AvNDGtLbqAP/nzXANZE2BkbYNp3+H0/COWv/ltwiefoHTFGDWB3C0yasGnjGpa66gs/Ppvf4Xl7A7/7J/+O1abLbNug66yJtO6W5NSzktBIyqrRPT9gCQwKnKg8hyRdONZKyVFfSCL/mg0BlOodgqdFMFArwxjsozbDjffI6mpq2ZKwxNRAn0ZvzCKyk3a6joUD9GjZ3O2SRhEMz84AGNB21KG+ZvXyxulhVfu3eLiYsN66xGlS8gwdTFkyLmaOQ6Pj6D3tFWDVrmgIEDQJsfVMe3i/nLzo1Ouze3ynxth5k6rpoSQ6savjdbEmA3FGIMPnoN6b3dTOeeo65q6rtBFyGgiJvd9j1LQNg1dv82fRa5HMEz1KV2kOa7D3msO6o06ye49TSPbphs7iir1xPxh27bNPNXd5/n86wjsNGDU9Nmv77oSgpaSiRhefeUu3g9s+i2dH1Faszw8KOrlETONZChUlvw88q2miqRjZv/Ar33rl/jW+x/Qn19x1fdZdZy0a1/K7+Fa12hCtvNtoXaiaEoptN1n3AaGEAlJ0ErlvE3lcQ9pkt8sxfwYI//P/8e/QJIhxSqXG/oe6yw+eEbfg8qqEVfrK4TAYjErpYdcw22MxijJVR74mbD1Jgsrh0bZKMXAKEIo116MEdG6cH9VJhIQScSdXGQQlXPc3WtLmakKY4qI0cz39/ApgkTa+d5L2dpLq9lVlfD+e/e4ddyiTQYxbqKGqNy+Y+oacQaDxobytFTu9ohGsYkjlTYsTXVjMmJmYhhJeVKRVkViIYMyn0sObtjCLneQXKx1tny5ihAipYSWZ2u07U70yVrLBIHGlDBFDn+6xVQBHHaSk+XP5Vzx2lGoAtJMHeVKKW7dukXTNOjCUc1KaANd39Ftt2w2W/YPDrh///4Ng/w5S6mdY/r82pll3jdjuHXrFtu+42x9hVegm4rbr9xByIdf
qbK/FN1uyXumJKIlk7R9DCznC27vHfHkwWOeX1wwWksqt//de6+xWm/Q9gilCwBy02lOTcWf+wiZ8hhiboHquo5t3/Hcd2xVPsxTzpuxAkHCDFJu3QKhvdEPa4xBUqKqKiYlAudyx8mUDjlrb2YCf+OaENoJMMs16yzTWRKO3et8vsSibohVl+NEISKozPjRJs+ZiTFidc4xX2a9/ICfKMyNsNfoLF9ZorPs1TPtVyFlQEtG6yRDk2B0UYOzxEYRVol6mklfdsRIovUBK0Ivmc62UwtTssvDphadzNQoOYLW+eDpxKzNyOswZElApVRpw8rTm+J4DRJMSm2jzyPgjda5adbn/kk7BHzfZx3VXR3yOq+ckvt88AXnLPfuvcqTJ0/4yU9+jHqQ1buVZIXxrF4XSN5zeHCQdVOVzp5e3Yw42H3lz5gFtm56JIXGaMX+/h77+/s8+OyzLHWqNbaqyui1UIxFoSQW+pkiprgDsKYiejeMpG7g5NkLZPAkpxl8zJ0aKJb7R1xc/jl29h5G28/1cE6dGwhZFLrk0M3yKwyXv8d3P/wRbdvS90MG3Yh8tr3itfleQWwT/Q7NzANgrckotSn6PEbbXX49pQpTkT87U0OVNFkOPNwQ55pSh5+zSgifgAD0pKKprggx0Kqm0EJyiO6MxXi/66FUxUkryUBeLK9ptSH5gco6TCGymOzCX8rWXh7o8WCD5/bBgh/bC7oYbnzWtAuKLs8vkddfRzBEpTL8bTRt3XKwWHJQCfV6Bdv8N6eXMCSqccAmyXzUEtqmUgaawtYp15vgaGs0xhlEa5rKYvqASR5DnvdgTR6xroC3336bH3/8EZvNJm9cDEWWUOXxZuUGSSkzfOZBcreBVhhtdvmdyDUCJyIYqzMv02ouLs4JwXN8+zavvHGfg8MDDvb2oIwHPD894ff+w7/no48/Jk+ZCjlEQmXDVNOO7oIMrr/z+SUI7777DsboXe0TNN225/Lygjfvv8bjx49ZXVzsOliapsboUvIouVAUGLqeKsLlWVZdGAz0ISLKsH9wzPPzS5RuWN7+7cwSirmVSitV1Od1QYLLexOhckt0/RqPnz7jH377V/jw+9/PKYYoOi2cjz37rsKmCDruSkVIYhwHtNGFeC4Ya0vEEtlsNlRVxVBmoCCCxuCUoS1KaDs5lr/2uszha0IIipwfakAycj61TWgyGWChLSomlBNQGu810LPTk1U5f2+aBr3tdhETNjuqNL5cnfKlw9eUhNXVFXvLBZVzu8+0+0/58F3XsVlvrj2/ygJEB/v7mMqxHnt05bBN/Tn/5SRRhZEmBeoUudU0zIxFaYfSFo3eecpdERyhqWcc330fO3+dtj1mvneEaxd88PVvcnCUQ0kBlNZcXl6WRuecJ2qlSk3M7YxMa4OrKlrR3EoVy2B2Wq85B/x8/fBaUzbfHo8ePcp6QYs5t46P888v6GyKuV1M0nVj8jQe7boyoj73/278oBu1zryapuHw8JAnT57Q9f0uDrPWsbdc4JxjPp9xeHhAVTk26yu26xWzpuK1e3d59dVXOJjPsD7ifGLP1TTGUbmabd8zjJ4UE3fvv8Ef/dl3eO0r/5hf/83/iqOjo0w1M3anBTvN2FSq6PVI1tdtFu/T9T26qndpAUBwmrPQ06msbGdz3QWj1M5xTCypCR+47hCJu26gTP2U/BomtwTeRKz/pnUzV59SlKqqc4g8PYNS/2ySppWsI3V4eLij3900holTPL03pa/TIaNeztxe2ig9gRcn59TGMqssRq7fyDU6aPBD5OzkNAM2OmErS+0q+mFgvd0QK0MvMXs50SWmzx7JJaEWxZt3jvlvf+e3uLW3oDKuqJNPzV5T21TuYwvJcNXVvLjUeFnQu306d8DgDvC6AW0zvqg1Cc3B3iGzdj4RzYhlxHksYME4DOgktB72Tc3MtJ97yKmoYv9029O0QswCXav1mpiEEIS+H9hu+wLlG1Dm8wrt6meNT2uHMQ1K
1WhVo6Yv7RBtSTojek8ePeb85IzVuseLQVSVHWHliL6jrR1t7Xjr9df4+gfv8Y2vvs/9115lVlf4vkNLolKJeWXYn7fMG8dy2e7y+a//8q/wh3/6Zxy+8uv86q//91Su5ujomL29fVydlettkeMIMWT8wGQDUUoz23sfpQwfP3xEO2uzUytefFCKlfdZFT2zMjB2qgGXA6oU86bhcH//urYZQhZ5LrllLKJZDoVT16kF/HQemM/s9XMjnwKVIz0f82vN5rOs9VpexxRGjym6tXH0XFxcstlushQKZfztTukAfBTQBlH5TFxcXBSBrb95vXT4Ophct2qs4njuOD1VbCWSRZ1MrveUXCIMmToVVB7iQogoY5ntLbD35sRhy3y1ZnVxlpHQKQSJgh48ry8tv/Pr9/nJw7s8/uNzVEpZ9g8wKWcAKXliDAzec9F56nZBcIHNsATVsBKLVQlrAmIMIQ0M3mOSY3//iE4FalNzcXXOctbiigRFf7XineVdjjcJvTdD3bmLmFSGz+jMhYTr1iCJUMoJu3ZZpQgxF8VTlPyAdQ6rjG1QpsKHtPOgk1FqrVE6d9sb19C0B8QxAw9T25xWuQyitSIoYVxtWIjF2jmjRKw4vB8xOrK3rNHKUytH6kbaWUOMgZOnj9l2XeaQNjW1VZiZZW9ZMWsX9CKkJ5GmnfPhJ5+y3fZ87e/97yiibSzmC8bDI7QxrNeZdkkpqg/jiHOWyllSjDSzI2xzm7/80Y/5r3/rN3nw4U8I3oMyYCrAEZQhhSKONZ9xuV6BumblJB+xKnclZcZU5gwvFktA53HwknAITWntSui/9sbJuXR2BskISRmGYDJl0uTbPjHd+pnaYlJCO4sPgdFnXFqZ3PETRfLIQW3wSrE8OsJrzYOHD3l+dsrzpy946+0PvlijTBqGIXB0cMBrr3b85LOzPDTzZnG7AAd914ESos6CQ846Dm8dY5osMaa1zWBKCR6SApsUTYBaWX7w+JL/5UXAfvNXkUcvSE8jfutzXc8osANxCEQiWkcUAW0dvQ8MPqGNyu1RpkKrrLjdj1e8uFhTKeFrt++xtELVmCz5oTOU22B4/+AOr4SGKJFH8yM+2m55pc6TqtLU5nTz4ZJDaVXy3lLByDdhKiBVCXsTgLY7St4Eskwo7I41VP68UvnWUBnQY+IZ5Bm4Ci+RJ6sN82ZGr1weSJNAhTxPsm0sRs0IzhCN4fLsnM1mk0smixmztmXsB5BIVTt8GviT739IJ9B5zXy+x9Nnj0CBq5akEl5WVZXH2bssc3Fy8iJHHRPhXaRIlSjwgaNXf4vnH/3fEVfYLkrjnGU7CmMINI2jNjXR6mu2juSpailG+q6j29TX0Qk5Tbpz934maKR8Dk2SQhyQG5KcP2WMN3491TCVniKgny5Llbk3AjoKKYRcL9eKUHjLuwC41JiN1iQRNt0W3c5p5zMWY8/i3SW/9Mu//FK29rcYr55RymdPn3F0fMx81rJd565xhSqHLEIKXJ5fsF1v0Ps2q1crhTiDcgZCrpNECVMUjhKFiYpmFMY+8KyZ8fvPHIuvvs9bv/ycZ//qXzKfaE9JEaylaVuSCPuzBmUNQ1Iocbx+bPCiYUgsa0vTzjK17UwxJI3UiqP5nHeXDX3akqp9Vt3AmDx1Urw3O4ALz/nhMT/yFZ9dXXC8bHG22t1Wn6tS7OpeU+2LnNRPejI/pb+rtcrdG7ALfXNucrMuO/VrZkx797Ipq61NP0cpxYM0ovsEoS2HKyJk8sWUv+3v7+PtluQ95+dn1G3Dcn8vj+qLW1TKGrdn/Zan/YpgHWJaXr97h+1f/JB2fh9XtbmJ2+TnXVUVTWpJ6RRjbM7JQtzVenOImTVzZ/PXULriweMnLA8O2FxeoUzR6ZEsCyImlz9WfZ91ScpKMZFucnfL5ktKbLcbnNtDJCKYIv6cz5Sm1MEpxlmKqZ8z1AIeGp17XnN4mY0sknstDQmV5zXlM15UEKezO9EAJ4ek
tcaPgc1mwxv3XmexmLPYWzKbLaiqmpdZL22UlsDAyIOTZyz3I4cHM64uNgSXdVAnzZQkOelePT3hIGkWxtAMmsX5KXOtaa7WDGcvWOzPWV1q0pBK+45F6ciBrLH1FYd/8M/Q36t4b3POO3Uk7ilGn0iiibFlMI6gwLY1l3XHZzpkut22Y2/ecGXyaIUx9jSNBh2J0rDpN/zowQO+vjfH+DWDSVjTINqxTAG1WrNpl3xSNTy76HFFNmQCGkRysX0iOEw5ysRVnU7BdS3z8yu3WVVQUMspT5r+3O7gydQupokT5KBy7j4NBaJ0u8SQqG0m/EflkaQJ3UhKkb7v8H3P5vyCF8+f8fTsOXfu3OX84iw/rzFQW4upG7ptYDN7my4ZTNCIO2C9XnN471tYV7NabyDF3B/oLKP3bLfbDLYgJKMzu0copPwcFRkzxzW3+PHHn/A7v/YtLi4uEJ8nIA9GoS1s+p7UWby/RnW1LuPyYrjRRJz3NETP1eqCxaItGjkGpxWV0ZiYR+FNhPTEDfT+xnNAZJeGRMmlGlGOwU8pRbbqiClqGflJxZhAl5Rkp3hXSC06l9SMzvVjax3z5QJna6be0y/MKH/rjbeodWDhYTw7x/drrJhM+7oBWLiqwdUGGTy3Be5uO5zvkJPn1BFaH6kl0ccBVSkclkqyeneoLG8qD/0FzfYEyE3HfhTW6wE/BlCGgNDFxMr3XL644vJACK/cYUwKcXucXFzS64bgRw5Mw+g7mrYFvSWJ4eHlmv3twJsu0ZiYw10zsnC5HLOyis+u1vRBqHTcQf5T+Lq7FclhpmihadoyiHbNVFj23uehofEaSQR2Uh7Adc10OijA5O+1cblsM31fZTBtopBZl8cDttZiylBWmpFZEHQzwxpDVTnCMHJ4tI9OnssXL7BIOcgaUzmcrcBVBCoGc0QnGp0CY7wmRUz1zfXVCmstTVNxfnHOdttlHqxkFXlnHdYYhnEogFyu4+3d/hVefPo/YtoZkYT3hZRBuZusZVsEwbTNqKY1uR909Lnm7IMvGrzZ0ox2+W9LvnErbaiUKgBixKQ8sOem0UxrIpELGkmKPlm2KLRxdCmhRoVR2cOOWrNVCqtz50ofB1bDwDZ4xpQZW6ItFAR6u92SRNjbW5bxDrkl0PwVXT8/vV7aKH/79l2seGzouXQVH6sVnc6baibIOkEImnM8vjW8sWw5GnpMKBsjgkWwBFytuPPqURlkmqULlxYOTFEFczbru6DoQ6Q2lmCzXnVvFC7lXkghe1nXd6go3Lr/Pg8fbYgJ+n7A7NWgLJWtUbonInTUPPWRO9ZwHAMhbXDRgWno9g/5uIuchRGjK6LKWcokQ/m5pQrjQ+ldaJskT9tKMU+LcmU2yU2kOhe+syfeaYtOX1qjcNSVZn9vH73N+keQb82kLaqM8nPO7soFxjhQFdFt8duRZBzO5Wlj4iySIseu5e3QEAahmwnW5qE5bTOjblqEzWQihBR373hqDpiIDSEEtl1kdbUixIgEj7G5mddaQ+WqfHMWuX8Bju58kxef/lN+8OGPmS8WXF1e7owkFsR6YunoQuG01pTp2hnlVWOuF1dVjTEVi/khIhWKGqWhsjCp0d8sLN1cArvoZKqt+2RYp4bBgKpmRBPw4lBBGFLCS2ItEZ3GHBGmQFPmb6aUW90mKXaRjCdYY6irGl8GtSzmM/o+vJStvbRRzohoiVgFXina4sWMEgyJ4+Wck8srkmhS0JxfXaLCiLLXpDAhkYdpQ6UdxmcEI/fdKpyG1iq0MznpxyJJWMwUh4dFPlBporEEFAORfhRO0oxPnUZ3G9zmhHtzi7/YQuyJfaSZZSnLCs8oiagNZ0QejyMLl2i1obEtaX7MZ9UBP3rxjF4ptEok5UhqIqkXs1LX4aQqZIIQwu4Q54NcAKAS4k75XWYe5fHfWgxDHxjHuBubJ0qR4kCIhrZq8UEIhQCNUgQ0XiJ+7Ak+qyVkyQqDkYroOtx2ZOwOdwbv
nMWMgXlIvF01rEzFAzLLpHaWps7EC6sUqFT6GyfpRBj6jvOT51gFVkGQRIqqzIrMoEwMWZS4cTn3nri/s7YpZA+Dtg2XV1fcfvUVri4vs7rAZDop8PWvfo2D/X0+ffSQB4+f4qqaxmkaV1MZQ4zCYrnk6OgufR9JStP7iHIznCRam3VvJzBoWqqguPkQ3qAuFhDNI6wx9Di8ZF1igsdH2Q0dFpXJ2qJybn+UFC5pbAKjcsSYExoFKdG2Myqbe4N9CjAOyMvZ5N9i6haeaThDpYVlWxWy+UilNW8eL9l2F6zHhI2KIXpUCsxcPpSq1DUNkms+KLS2JFHEiRGkFA6DIk87Fg2iNSIKa2pAlelOCacjjYL9SnFXBr7GY6R2nI0nWFfzlutZ7ydWac3MztjWsPfKgquhYxM9IWpsMEQRFkYxNxv6/fv86HRDV5S3jc63WWeEsejeZNJaKWqT85UQsyJ4Sjk0XS73CN4Txh6YEWPOgVRB6JzLN7eEyOX5Fd12yJS7LBiKhMToBUk1CcUYsodOJdyTmDnCUwdvCTLxSpOagSYqwrDls4efcnR0hLMKJ4GDceDQJ56drjmf7eOd4BCivyK5ivmsJcmGIB5kZBxGnKtIfoX0l4h2+JDV7ZXOvZU70CuWATslCph+L/ics7aupVl+wMXFd7j9rW/x8OEjrHMYhNm8IawD6uyMvT7hug4vwjh66u2W/VnFdvAYUzOOgSQblKsBzRgEbyqqIMxkQIdpuPEUgWiMTN1GUsCe3BqnjMaPIx2JM6PolC3CZ4oUi4CzzgN/cokm77lCs7GJRUy45Mv1q4kqh7IhBFqdqaVaNF7l2rrtv+CcsqlM8TC5w/y4bjP8XKqzcwO353O2XShobA6ErNO71h5SQktCR0qHgWWQxGUMjCI5P9K2zPAtiXZZP02X0qX8IhMUrfI3XzVXhJRYtoFhyALBOvWkGt6+UzFKaZfSCo2hTTUmKEbp8RX8F/uOb9+5y4pA1wnr7cCgTZ7EpPWNtqtyW6aMNrZtm0clCMxmcy4uL/jkk49xTx8D2VjbtiWlRNdteeutt7g4PePo6IjDo2N8KOLCRV/m8nLDw89e5KK2H7mGla4jD61vCHORO1okRSwWJYn9/X2szfVRp6BNiVoCTRCaKEXCUiAqLi4u2HaBtl8zD4KNifjgCYd7e7w4+xGh31DP9mmcod/2YBLOZTEzpSgzWza7Lpmqbmib7HiM0sQQMcbtivZT2UTt+hYT49DhxRLHAVGKum647SoOk+JBEoIIYxT6MFLVmqptCZJTHBWhMlm5nptEm7JDOT3Mu6i1RjQEyePUbdWw3cCIJccJJV2gEAJKKhin7yqFBAXaoFXIyoOStWF1yj24VZUnvRmnUcbglGPYXLyUrb287uvgsx4KYKPwxvKARfWEdRqIJMarLb90/x2ev/hR9gwTYWKSSigQv0xcQ4E+Jp6HkQfjmqvkqVXDXT1j3zTMCxlhOohIniWfm2MTTBopCmQaGgMkpbPmi1xP9s18WaESUD5hJFI5AzrlYnztcMqh9ZajA020WQ5S7TVEfcx/dA2frjf5IQg7Yd3sbAxax8JssYzjyGw2R6Fyy1jb4r1ntVqx2WwYxxFiYr9paduGtm2Yz2elRlb2qTCBMqhhcm2sdC9O4fKkuaOm5uuSR2kR2rrmjdfvM29zucFohUtCFQWXEk0QZj7feMkaFtUeunZcpgtc7bE2ociS/ZPTFSW7uR1BEtshcHzr1u42NEbR1A7SdcM3kOmHMXfsHN/9NquTP+LBs6dYm/dKiYVtJI2B6rBlPatYDbnmabVGkXK+jGJE6JOi3wpmGNhXI/0wgChqUSy0oUqJoP9q+Y98YaQsTYliSLAZhD7qYozxBuB2ExlX5Za9Ye4iBB9yNGMU3pTyijZUdU2ShFUWieBSYLtav5StvbxGz/OTHa3JKoOuZlS1hm0OpWQMvHew4HszzUe9ZkwVY0y5N+5GU+3k50UpLpTw
PHieBc/KB+ywZmw88UBhXYNDdi5PFCStUEah0UghJ0i58RS2dBIINpVaopgdjK0RnESMZjfINAeLKou0GiFpjVYRKwpiRGGoGs0t2/KA3Dy8u6dEdlB7QhElz8Toup5heEbbtrzxxhu89sb9PPFrHBmGgb7vefLwM77/Z3/K0G3ZO5hzxGEpo5T6owal880waZfapDAJOlsMZHfIppBsqmVmBLnfbFF7NbWzIBHRUJVIRRixMZKUotYaW1maumK5nCMHM67CgMSBsbvgF77+Vf7N7/4+l1ePWM4OgMiiaeiHFXVV0w8DIokQslOZqgTW5siqshW935SZoJkT2g9DoeIZUgh4hHq2oH7jNU5XK+KsRa07mkJy9wZImiiGkBRiHZ6IDwk3StZzqjWNHdCmQ6kpl/8pokf51tR+N/qRwQtqvs9QmsrRlqhHfMrt3xLBYssEr9xUoFTCKNAkjDUkJRmR1hZxlnY5B2e4uLoCrRjiSLhcwfOrL9YoRVkGnz/soCI+CLfbhs8uL5DK4UOkP3vB3Znmky7RBceHj55w981X8rBQJqPMswq11gza0mPok8bHXAu6wlNt1hy7GjVNiSLnlVFPN1Rp55Fct8t6NA6UQVLIwBHXinP5gQiV1aSkPxcYG6Vym5NR1xKSohBt0LrCmApXhKMQjai4q0VOBAEfIj6EMg2qtIONI6v1mmEYuLy8ZLFY0LYtIeRxCttuTRwHQhhQKuaSicphe8IQjCNiiWjEWKLJNMSoJnkRsALmOoPIOa5Av9nw9OEj7hzdR7QCAjEFXIxYlUB5VBgJIWYUUUWQwNXVii4kugRGW1bR886dOxitWF1+SLzzdSQkjMtqd76osI/DCCRC8Dht8ONI2zY5D46ZLJJTjPwwdCFtD12HNQbjLOsQ2GjL6Bzb0RNSgEqhrMYnzWByN6jHMygBpQna0dnIpqjtZY9mCingr+aZTs9HgMVynwGLDT1744C1c2IdsMZgEfro8WkkpMw8UzqidcLESFtZWqfyEJ+M4HEat5jK8fDJYz559jzPODGKhVK8Xs1fytZe2ijjDeJ0IjP133JLPlQv6KKwDp7L1RXvHB3y4fkZZ9HwFyfn3LYKVzpGUvBorWjnM5p2QUyaLkT6BH0CHQQI1MNADBFX6esLdqK1qCzdkJRl00PnA9YpjvcqnCRMGR6jkKwtWg5tzoan0JMp58/hKyX5n3IJBdroPHavdOZPFLrPZ7qfX9chZalTFlmQGONuRIFSGmtdLrFoRYyh1MxKXx45T51L4jWbGP3ASMJrAWUwqeTrcmNLpsNW/hVDZHV1hQKGfsA56M4voRtokqZK4EJC9Z5xvWWsDDolLi8vMQnoPEkr9Kyl14rl3h5XVz+gH/87Km2wDozJIWJuNLZZEd5mgKVpW5q2YTFrefTpA0IILOY1YUrOyjmKItRVhXKGftOzHnpM5ehiwKvEVkdW7ZLLUdhaYS4DMxlojUJZS9IBzcg8wWz0HLY+j3vQpvCwZfezds8oJYZ+wDpLU89Zec3oez6YW+4tFoi0bJqB5PNnOU0dz/stXmvGGNEmT9xunEOlRKCQ8CkqhSmx2W45vn2X++99lcO9I8QYfOj40R//p5eytZc2Sk854BQEUIQj5biz2OPB5SVX4jnbbnntsOawNpxsE8+6wKjyjxjiiCQY+wHZ9Gh7hatbeqsQnacCRwLbEKh7RT9G9udtrllKVpAxJgsoj6J4Pnj+lx9/ypPzc4ieX7j3Br/+lfc5nrxxQXP1VI5RoGIeHUopNqukyvA5VaQuJwqboXI1zlWI1nmTJJU247L5O37qtfLcpGg3dX9MlLBJ02fKCY11eaCqyh3qO/qeyih1mzperQO//LVXSN3Iuu9Zx5GLYeBqqOnKjTwoRScwpkAUUD5/RhszdXG53Ge13bA9PyV++phG9rBonFhcyLMhR+9z/moTEhOzyrJaBeZ7e9Rty+Uw8Ku/8W3+9T//V5yefcTyla/njhql
8H2fZ0MyEfYVOtfA6PseW9QAFULd1KS+uz5PIkjlsIs54zAgoth0A01TZRFjW5OUZpzPOI9rUoq0acue8rxVtQQVOYsrXq8Ne9Yxm1veYEBkREtGZnMYex3IKnL+WzmHriuuNld4WaC1YykBoyyDiowpsPIj/ZjRVKtsrgC4CuNyU7vRB+iUQAWiFrYJlLb0cWDsV9y7d5/DgwMWswXKVVz1jlc/+MUv2Ch1RvecaEatMBFatry3rHmxrlh54dHVBW/cnvPmnT1+/Mk5lxg2Y0bDemPQGEadWf4GYQgjUkSYlFJElcVwg0989OwFdAOLmSFZSzNb4GymHvgBvv/kId85P2M1BiQGnj/4jE/6xG/ea/jK8oA9bfBMvYr5tjPOZE5oiigJ5KlQpYFjp66nipargEQ0QqPzGHelElFPyOvEg9U7jVZdwvKpLStrz1zzWMl/K+dSKaeyY5/VQHdhtoJgFG/Hjm/Yc2K1gVqIWhG1YSASUKSkSbGhp+EMzypFukvP+WgYes2rh8ecX16x6jtWj0/4hcuA1WtM0jSqxvoMhHkJnF6c51mURqGcUN3eg6qCquJq6Hn78BgQxuEMHxI6KJy1aPGsLs/ZjkMeax9yyGi1welMJlguZuzvLZi3LedEjN3n8uqKu6+8ytOLM6rDQ/yLMywObWtu332Fzz78GEuu/XUK7KJlfbVmrByymPPImMI46rktHbdiZN5ZmI/EZDFJ53YsZfJYB/JloiTLj/ogbDcdbd3S6AV90KyM4dlswZVpWI+eS90jlcLNHbKIpOCpmqrIhsJoGkZ7w3wkYWPP9uQpSVcs9/aZlMWSCkRleHbtk74Yo7zE5yEnpUhaFV21W7VwZzHj8fmG5+stWx84Xi5pnWYbHWehZ9nMQQJRQdD5QCGUCUmaRmdWRAaENIeSN/3s5JwXRvDEXOvaW6BMzRmGT05Ocr3MOEISzkJgePaYZ58I37i7x+98/VVuz1pMukZ/J3cpCEoLqIS1ZKeAzYoHpYivo6BJKElYpXZhrOjrGxIyqHGz9Wpqdr7ZkDt973qp3Vi3MLVuqUmAStBG0xjLq8sDVFOTyKySQGIIYx6YI4KYAW03vGuyeHTYd/io2Y6a9SsLLlCsL1dsz1fc3dtn4QXpxpyDxyx0rWtNEsWm2xKIxNqBdrj5HGsqNs/Pme9nwacULhiCR3vBaUvtLGn0zNuWhFDV8yxWZgyNtWiJHCxmHLQtCtiuLojhkuXyXZ5fnLO4dYStG1LI7Jj50SG2bTHWor0nJVitN+zt7eVIQBwb15LaBhuF0K9RYoljx90q34rmBuovZUxd+R8AfvQkNLPZjDEm4jhyWrX8ZLHPZbtPwCGNoT2wjCHuKI1VyqP5QvA5LQkjJnoosZhKARMHtiGibM1sscwc2aJ5k5Jwfv5Fo6+2RseU5TqU4IxQK4hac3vfcbLesO01n5xueP3eLVozciqaJ92Kd+s5LsGgY64VlaE0rbUczhf0SrNG0ZiRxhu+3sy5bzpcVFyJ5rOx5yqNXF1e4JPh4TCShoFj25KSYZs0axkJEU5tzR++uKT+0PPf/MLXmO84p6kgrdMDc2ht0JKBH1N0VnJOByqlLDRlslp2bpnKdcHPG9j1DA6jTb4xvL8OYSdua+k1nartk1HG+LM0jyRC2PSsV2fUFKV1o5jPGvaqOaouenQqEcnlH5EsJtwkw9pZnjjDZ2fnnD99zqFxtBjqytDNgHGEFDIVMHiieJpk2esSd1rLvF2goiIFwR3sc/H8BQD7t79BSIIfM5K6qBsk5TMgNh9O7wPj0OON4qCp2Z816OS52m45vTwDoJ61pHqBW87RASox9M4hbcXl2DEEj0VhRJVhPjYXK3QimoQYg1GwUYqVFT6VNd+yFZLAxixlkmuTquTq7KzSOYcyjlW3Ba1prWWtLY/NnI2q8oRuAT8E+pD1eCaRtBRiFmQrkZ2PReEuCSl6bBrxkljOWuqmYdttC2GkDI8KXzDN
blXPd3UwJOZwbuJ0WmG+V3GVNKskbPzAbK44267YDolNGdWWKJou2nKsE//Z/hFSL/le9EStEKO4NXjejIpbUdOPPYMIMXoCuWXGaMfhzHJbWyRZQlA4EQyazo+MamBlDY+vIk+eXPD6wZzKOaxVOGVxtqiQkZXgkuR5hqqosZlJviFFBM/B/pImxgLyZnTj81zWideTb19tyjzfMtPxppbP9KcnQnsOcX8WNlICJgraR0ieRC47dX2+5bQ2aKNwtaNtKrAaUYrG5BB81DVV20IKzNs5Nm7ZrLc81xmdDSoxaxqWiwVX8QKjFIeq5htmzi/3sBn68vkVwc34zrxMm3IO3TSs15eI1uxZS2stHz95RpcmhcNEpRWHiwWzvQUouNxGnl50nDz/PZxz3Hv3TR50l0gCrR3trKUjILXJBh1GLBpjcwfS6dkZXiK2cWijIHSEmMWthjHSzo84GUaCFrRcj/sDrrtByldQiTF02FmFAF3YMuo9pFeM45akElZy0/4oZfpzKjW2ndpE2tW+d89YrlUHFvtLjNU7vd1s0OxYRl+YUfryAaemW4qMffb8kaM7+zQzy3breXJ1Rd069FXHNhoe9BsW1qF1oJLEbSz/8OCYd1LiUbemWbZZVEjGLBbsR84FVj5yGgaeO43g2AvC6IQWOKgNV6LpBkUTbZbeiIlOAiqBiY44Cs8fP8s6MkqYNY7FYsZs1pKIoD0iAaUsBruTpJxE+JJEum6Nj7mxdepEZ3pIwDTANI9zyM27WmcqVt/1u8PwU2b3V84JgRwGV9rs6qlG6aLeNwl2RWIU/DiwvixyAFqhrMmASbOgG46JWuNTJISR1XrDC4mlUqxIoqldhejc4+qScEtpXunHLOtJPkzM9/i//PF/QNsZy6OvEGKkuzxHhgFrLIvZgqPDI15cXSIkdArcPtzn7dt3SMlzue55dr7m04f/BhMf8Pf/wd/n48dPqQ4PMFqx6Tasw4atCfQmsjRZhEwFwTUVYwz4kPN/4zLJ/unzF7imJWlFH3UewDSvCHEgxp6UcbtsjKUeHWMkJI9VinndMibhsu94ah1Xyub6J0LUFpVKO6LKjfSpoPmobHxKFR1cNRlZNtBUxtxXTbVTYZxQ9Rh+to3vr1ovn1NeXubOBGuprMm5EFOJIHF865jDg0PWpxfEBHMsbxzeRcfIoyqPPpuJok4WNY7Mup6LcMVl29IkReV7uvWKC0m8UPvMk+bSw1XSvFCK6BZUCLdSz5HW3HIV0Qeep0gXhZg0MQivactbOvFe42h8wLqKqnRJnK9XXG5XBaKf0TYNi/kcUYKp1DVgozXKaKxyjOOAMi4zWRJI/DwBcApPb07JmqQvw88JTSexZ+vcTi/2Zt9lBnYzW6iAvEyd1TflEhX5+7lBuszjSHluSSpaqiFmka4Ycj9oCiGj0KWGa10RJAvX6KSVXJKZkGJtHM/PzlBYLrYjq6tLCFlYufMB0w/Mm4qYZjhrqI3icH/BGLasVp5nZ2uenX6PsP6P/PZv/gY/+LM/pf6Ft9mfK1xdMaQ167jh+NYBB1bjz1Y4Y7C1pVnO2XMVIUbOSpuY9D0pBLQIlavY6p5BJ3yUPEGtSE7KdJOVMpbWGqsdnSied5GLoLjwFT+0mk+tZbDXuadIxmy1JHTKUeFEe1GUMlr+g1lbSMrvxYgm5Rx4J/OSv8ZxvC4o/w3rpY2ybhoO9g+4urpktR6YNU3WMhW4c/cO77//Lj/4/g8IE7UNxde/8nX2lgs+OnnC6fkZQ1BQGYLR/FnX8RobOq2ZbTwLP7IePBd+5FMxvGP36IeazmvWRjhrHDr2/NrxPm9p4Zm2bIc1mzCwGTyEyLEV/vHRLb4xsyRn+O6m57QfmTmHVYpYOvxHL7AJnJ9fYcwWY+Bg0dA2DU3TZIUEo5i1GqtrtFEEPxIlhyEUg7g5n2RSa580SZu6LdzUz6+JGldXde6//LnuM6JT3JVzYOJCFdzg5jfLXgvsum2U
mmQ1ypxKY1E6EKaGaokklaUrKlEIGpMUukzfyj8gu4ChrbLKm5rx4uyMgo+xHcYc9qP4/1D3X8+2Zdl5J/abZrltj7/nmsybrtJUZhkQAOEIECQFkESz2VRI0d2kIhShaD3pQa+KkP4FvUih/0DREYqQ1CRFsrtJgAAIgCRQQBmUy6r05vp7zLbLTaeHudbe+9ybmbgFFtXUyrh5ztl2rbnmmHOMb4zxfcWgYH8yZr1YgpcsFkvKuuVyXjMrL2jmv8tzN2+yuDgnC55fQfGSk7SVYZ3lvDvMqVzL3vkF69qhmhZP4PzxI0xjGIwnFMMhZr1msVozKAoSFNJ4Eg+pkoR1xYLArG1YS4lSkIpIEpkET4Ni5j2fGMenlWNmYXhynbutZZFk2LjKxbGUHSeuE+Bll1bxdJXJ9CrSwZsNai6IzBtZmjIcDDBtixRRt9QDTVPRtOtnsrVnNsr9g2PK9Zq6NlRlRV0bijxHKcWqrPj2d7/H7HwGIaozCwGf3H3AdDxCIcithuAxqqFKJe8lI/xZYLL2SL9iqiRzF5h5+LZd836VsHIHGD/kwnrOZw6pS0y+ptY1x3XOV1XGewLuGcvAB74+yXmJBbZSzNyYP75M0aMDbosVe9FhRXiP1AlOClQROrRV0RqJaRouLytccCAjKKCylIeTfdxgn4BCBkGnIUoQPcluZEHrd9k8jyV267Lm7Oxsw+TdsxeUywWTyZjRIOfs/Ayt9VWa/eAQ1iC86/RE5IZcbFMscKWKrAOQQpeblRKpFKqLf9IsRaUO19YEKdDOx/hLCcY6xZKQ1TKizX2JUCfA811fU7cNev9NZFA4b6PWi/Ms1lV034Un0TEeXDeOoFMumgZjLQ4PIdJ4rtqGHHhz/pCvNA2tgDoR/NXjfRoJzXJGFUZ8/fVXOLcVNsspRUKV5DRSM8hH5OMmcvGWNVVdMtSK69N9biYj7tLysB3w2LeUOQxs4GRueX045L3W8/sXCz5tWypaJpni9eMhF/MJrUpw1nZSHJEwK/iYt5YdNtDLNgi68ME3WFeTat3lp+O9unnjBrdu3KCuK3SSEDVPBNlQc3Jj76drlN/93ndjpYqLgq3eOq5du0ZR5Nx/dEbbVCRSR7Ef6xkNhnhnWS9WEZQgruC5TNnLE5at5Z3geT4IDquacaK5aaB0cF6veccrKlOgW2hkjfWSB7bm47Vhf2gZ4rleFNwUKd83c15Lc35VDUiDxwrF4ybwyTpW8AyvDalpmFRRt172kubd6uhCoDKWRGqE6AZZRmS0LS1LNDYdo3TkgN2oRwg2seGGuV1KsiwlSRKWy8dw/z7j8Zg8z2maJiKubYNvmtifqJ++BQ5HU1Ws1ysMBqU0KklJlOr4b3dfvXWm+9+8j65Tzx8zLHLEsuk6w2JnfpAxPk6kJgRLrhISLIlSBN3X0AbKHj2W6Wa37l1bpMJ6qNsWQsJkMmGxKLlcldgQC0JUfkCt9/jwk4/4pV/8qywXc/aLFIlFBc/AxN3da4kNHqka2mBpE0EtA5X01D5gvKSUgmVqadc1rW2x0xEhyxiUhlQ45Kjg3CdcFEcspznz2YxRWpGfHvLdD+7ybgiUSRpJqNOENBuQZlnUD3EhNnb3caiIJNDBd2RZWBCRLMsLS5IKhJNICVpIlE5onIrUop2Sd15EacaAQ6eBW7ePf7pGua7qzY0XId6UdVlFBWMfUCo2twYX0EIxzId427I3GqJUZP8a5gUiOISpWdgFj4qMorYcO08ePMdK4kkgH7CqAybUkblaVFiZcEHgo6rly0PNSAWG7YLnZM6BkfzS3j63QkUpLFYq1gjqRHL9+WscvjphffGQ+fszjlSO9kQXrzNM2a2QJkTNyxi4xw6SWEIlesh02+wcOsidqy6slJLlcskH73+ASlK+9rWvMZ1OOT4+xhhD0zRUqyXf/9Y3+OC9d0mzqDWB7LsiI7Hz4cEex6mjaiqscVRNzaqMytKxsz8ivn13vuzQXBkE
AsV4MiFbrzZtVaa1Mf0SYq9gawzlek1QjmAsichJ0SQCrJZxIc3H/Lff/GMQEj16g75/s28ibo3DGUOeRZ4giWI4GrFqWkTbdux7W0BLChgnGUMlsW0bFzQfa29P94/JshxBVGmumgovLI1dY0wJJFgpqVVAFQIGIy6CozZLEhVoaTliSKhq2qblxtGU79crpK2psz38OOH0jdvMa4tdLxlai/JxXjoRc9DWVfjgSLOURCsUDiliq91qtWQyHvLcrVtdeBCoqgrvDHhBnmQ8/HTNcjHfhA6ibxoPlvc/+DHpePDTNcrQl5D1hknHdBriVJIdWiilIE1SGhcLt9dNQ11VKCmx1sfCXe+oA9Ra8EgFXg8JQSrmiadxkhvpiFmW0l5WPDiLk5KgqDV8ZDyNizuSsZYjIfl6NuS5/SG1jzSBwgsIGdPCcFwYLu9fcPFoTn1uqPdGnOjAMM9jnadvqBPJwKmNaxJCdFe8iLib6iozguy8xBCuxnb9pJMSLSOr2mq1JB+OkEJijGE2m22KDLTWaJ1QVbHEI9JHaHoyLusdaQ4jVTBhtCnHa4zBmBbrPMZG9vK6LGlsbF8SQjCQOUjBvF2zXM24/vwNBhaqtkEah0Rgg6NcrzmYTpld3iM0jlRFPiTnO6kAAX6U8XgxBzkiyLRbjHb7OmP9alnVWBsZ01WSMhwPqRcudgntIM+Z9NwYpUy8QAs6ACVgECzPZlRCbeg+DsZDslyjkwBEMqrWeirTYFtLXa+Z+o7GO4GQKkbacnuiqILHPL7Hq8ZAKvHLBa8WCSd7U2qd064rzMNzdGu4LQyl1Ky0oM1TGt8icxnpUfMsFua/c4lcldTVkksdGEz2WDeR6xbnkD5gnaGpY14yyTKqpo0NDiIya9Rtw/2PH/90jdJ3VSxbvdGAcR6dpEgZXYKofBTdVBsEoFi1FlAEIVmbhkrEOM4pSSUkRqUsVcFZoXhPtcyspCpLQqo4mg4ZLgwXrqYpFFbmrBcls6rmWp7SSMnQVbw59hQskBgSFN4JWqk5euOrPFg43OMaffoV6kHJcH/Icv2Qk+sHDBFkd++jyxrniC6L6JCMHeRMdol/L9iUkj1pkH3zrurQPyEi0dZyuSTNUnpl5954Y+pE4nxMKidpBiLC8sI7hG9xwqKl7jhUIS8S0kFCv0GLrnnYtG08I+8J65aKQL1asn9ySCsEy0/uczRKSXPIlEYJyIsxDxZLWuc5neyRLT1BaLzSm3P8YFjwaHaJGv8sUuZ4Z+gJp3po2PsIMPnGkCYG6+YMR2MOD/a59+BRt2jHSVNIx5vHU4bCR4rJrh5YOous2ogUB09FYH0/INOIwiZpQpplZMWAcZIhBgVipLu3S5q6RghPcAZpK/aCx9uW54cahKCxJV/WCev1nCVzyuAoiwaP4cvDKCi1Egm1SFk5cMpR+sDDxjJb1kyqCidSjHHM7s4QLkVlGct61RWXCNq2oWlrRsNR3KBkTE8FZDcXAuJJvtH/UKMcDIY4Z7HGRsmyQCzTUpokiQpQSig2gq5E6N9JjUjirloHj1Sy4wkNBOdoC8X9NCeMFA9X51wGqJVDVSV7QfDmwYAkGfJISVbBc3AsqFcPaCQY4dC+4VgbfO0ISnfahhIXHMtsjwfDAY1Zgh7ShIR3zs558aVrtF+6jg0VJ3uawYcPOP54RqpBa7FRmN4Acv3C5ENsldp5LNJrhs0Oupsacc7RtM2mgKBXiIIIIiF2aCZ3gJzUQqoULYHWtbHZV/eAQ5cS6T6o1/LAOxQGkSRokfBwtubdiwWPmjVfrVJuHSYUMkEFgQqSy3TEPRRV0CwenZPICZ6tC+2k5P/+w28DoMZfj6BTxzl0hYNVdLxCPrAqS4osVkqNxhMOJhMeX9wHV/LSiy8yrJe8VOTgOk9YSJSn63GNu3PfWKdELMc0jaBtLeuVJZprJwsQoHE+UrF4R1Hk7I0H7I8K8lx1Nc3xPhauQjjPQb3EGI8RgjrzGARO
azCVzVyjxlpm6+t8+CMf5MSpUzz7zLM4T6R/0InxTkEWF6sS9pct8/Vtnnr2Bna+RQiipmjf4r2mRXVdT9cpi9n1mzvs9XM+c7llsTYjyJxu7S7W7zjFfLZBbdcgVLjecWV/idtb4cVRW8tcHBuNYdZUNPMNlouem8Hx9FN7XP7Eszx7a4/9gyWrHnaXQrWxRWu0+He+vsnezqfYMK9FqJCUmSPqg0n00Vxk+gYFcoIpdtNykRe+GKNXVUOGDPDoInChpwu32NraxgG9MbHI14yMXUj8QALY0ZeOdd6gRcc7fmR/M+ldRXzTCLU1gwC7HuMcoe/onUe8J9Ap8qo3W+OhsXDYR+Y76zuqfkVY7eGXO2zsPUEVWtbnwsVTM+560R2cOrFge2tBXcG8tjRiaGrLfFYjtsEaQ23BVtHSIFDRI47Yhj0Vi+lGQtBmQ05ii78i7TGMb8TzCteRHQNGn2WwcI5JBnBsoXzw4Zfy7LNX+Lpv+FLqZqb5o7aibpqhYiTOvwuBvu+xleHy5cs8+tFHmc/X1ATNgWSD8Wqfa9OWntZ5TL3gyvVd1rcP6J2wOgisVh3L1QFtu6LvoZudZra2yaUHXscdrziFCzN6Dz1wfecWy67l1s1bhP4Gs9i30NgaO58jZk4vsAM8c7Bk/9otVv1VbhBoIxN4MNEolUi32Aihc1iBm67m7pe8lo++41dx3KJiu4BQPCnupilu5NVvYxs0zXfVMbm9+FD0VknujKQwCPlTrTtg2T/NpTMP89SNns7WeBOwZZA+C1sCcsZ+bPlHGQVJO7+Ilh8loTSi4JVBCM4TnKNrl3R9pxaC1xieHrKYD1FIhlBB8BjfYbo93HIH0+3D3lXq/iZnFp5zmzX33d1w6tQZtjY3WMwrrIlhsapWoMgaqjpmjVmhqgJ1bagj7qCFDS5aVyGDSeV1IUMcnWipDcwP0XOf5ng/z7g928Wwb00s3tuOYwvlF3/Jl/LmN7+Zs2fP0swXYJT01xjDqu20x4T3WpHdKTOZd4HZYovPP3mNixciIVHwMf0pgIO+93Sdol+rVcfBjmNn74BOruLNDOcbnKyxfeJ+Tp88wcbWNgemZn/Vsotw46ajO7hB71YsXUtfWcysIcxrMHOWxuKcZ+kc+we7rHzsmegdne9Vx4nR1CuRgjW9LEsycX0H9lrPqbteSnjHm+m5Ti1bOZI3LO7hBqWdV2yCaYaFW74z6ilt8JNx2XgWoqiqYikG13cAnDpzgZWcALM2UIbkb55mh1Iccfi+kE/Sx0VvY2w5aIGviULrHaF39L2jbzsSyVU6roQIUkEElizBeSpZqouybPH7N+h3L7Pob3BCDrj7RM1dD25y4cxdbC1q6trQrFXYShvnBN9r0N4YbKWCV9UWW6lw1VVNXce+lmbw+QgB7zRs1/d9xiymlDbPNZJQJYVTfu44puiUL+j3xKes6gUbmyd58qnLnLt4ga5XP8mHwKrtNUk8QAiRJdtp5kfdrPPM0zfY3DqDtQ7nO1ZLzWftVo6u61m1PS4YFhvb3PPSV3Hi1FmuyCmqZgNrZvSdsL93wM39PS5f3mdv1SqlYAzoiTUEs0lbC/sEpcRwjiCettdW586pJuyJQigWI7OMesLgt8VKpryIrfG50FrEMN+4X2+M2dNW4aCCF2LRrOjvKSUtp8mVZtH0Jg53kySypYBJDJAjhqv7v4OI8JLXfA3vP9jCmTUsQqDNRxoLo8REgUGA0vNCwIoixU1TU1uL63T+etfh+54QHL5XoTQ+FMnmXnNHUYG0xFQ410Pb0i9v0O89hbRX2bQHXFwL3HPvBhfObHP6xJ0s5hVNDZiepNmNeASPd722ETDqDtV1TV3BYtFQ1Taag5FWMtknIWnGFPDXTaaMHZbAy2j+nyOUMX1PLhMsTNejBDW5Mwn5/oKjrwcHLadPn+Mjj36M2fo6GEVH+85Fgt7Y489rm7LedfStQ5zh5rVrLPdvsVp13NoP3DgQgtkA
u8HG5hbbF08yX9+imq8jxnIrBA72V9x89gZd3+NCwMf4prc17XzBymkr7aXr6HqPC8l0VgbtZMz7RLYkQrAhpq5JhNpTPEPnLcQbHKcwXnmsHYixCiOWG/P7ETHsu09xpnmlfiLod6ZjwxC6GXbwrJueZ2gBcQmcegQRS+t6bq4eZWNzmysnH2ZfZtpZ2OtmE8I4LGIlGpMxa6ayGsa3xHiqgIhHgsctV6y6Dh81sSGw0e+hOGgVQx8+bloG7wKmi0LUdfjVHtXyKmH3CmflFutrjrPnKs6cmXPHhXvY3lijEhCfvj8qVaOCk3JbBY1pVsZQ15a6qZnPGmojVFVMrM+drQPBO5yq/Bg2SymemiXlnArl7UzSKV3J6E4cIbylUCYAMoQwFDUU/qfeE90k7IS68nbj2EJ589YBYhquXL3Jzq0VWN192zb2DfSBzgt9HyKFvaPrA245I5jzPH1rm94uaM6c4dTiDMZqaY1zjmUI7Oyv6HauYa3FBcOKCmfXcCaw9J4lgS62ym5DRy/6dxt5UXJ3qTABNXI1RRhpO2FcQZRStBAhGMlopsFToWl4upPDWqULvzZrzOoag8G5nuBd1JoFuVOB6pXDT2SzhH/yDc3nZhAqkIq99hl6v8/LX/8N3JAtZhsVbW9Zdit8nxzRtLgGWF5i0a81RrWf6/G9mne+1+ysEPNJrQyLc79qtCifWHnvAwaH7w7o9q8S9q4za28w83tszYXzWzV3v+gEZ7ZPsbkOGxuW2byikiqWJ6rbQuwaLdGvU7ZxUVKzyBZgraGqJGpMDbFQgEUh5r5qqVphIgYV1NKEzPe5QEaBvGGOEviL9w3ZPePPp/emkN5RAq/3PG3qyoJwnHFsodzZ2WG5WnGwXHHlyg2a+RqrVcf+/ordtuGg9aw6CGZOs6aNUeYntlm/4y62qzl4w/7+DZbLW7T7nra/CcZhKhP5UipMNWfZOfaWK65JiwtC77U1dpcEJa52n0zkeDNLXtCjhkTQwxjtLFVFBDHdLhfjGTmFLb5gkm+JAgYLWrpHfwHvHSfWHqGpawRtZ+968jkl4UgJ78654djDy/ncBlCnMHMlZdJaEEvAcnP1OGLg7pd/GR/1DcY4mgoIFa13BImAh7GavC+acwuOvl3ReY9rWyRvIIBPXKpF3k/yzWKhL90tZLULu9cIyxtU7R4nq5ucPzHn/vu3Ob19ihMn11ibN8znM6Q2WVMZqQhOEyh86Ami9YzqhEbz18a+mjZmbcWTs8ZSRZ6lRBWSUM2x1pJCWMhoaprhXBRQCM9RPmbaPEvhPUoop0I8Ok5ci+rnG4yxuRLpOOPYQvns1ZvsH7Tc2BM+/eQtZuvCQevBzmHzfmZntjkxW8OYmoChw3O9c+w/c51KLE0fEN8jldDNLe18QWsDXd/Sdj1t3+FjAHrVdTjqATGUBH2k2k3wPYQg6lcYQ0hMArdjQE92vwhidLEqx4xOpJVY4Z99P9WGJqhmML7l7o3A1rPv4a1v+lEW1T2cWjxIZQYhFh9iq+5U7AtEcEerQ+TITSNdYwnUqPUdhVIExHBr+Xmu7v82b/jKr+OT5l5WUeNYYGYspqroXQI2oOuWCrp5F836PvqUHoOjCqq9vNFE8+ADNR7reqQ9gNUedHv49hbtwXXWzC4Xtj13XtzmjrNnOLt5JxvzWqv7IxoqlaFJfxOTumOllmGo9LF1yvYBiUzpJgb6874oaqqqgowh/qKRa2lGJiEtUdYUDy9BmzJ+PNWEpQBO/cWjhLMU8tK6Ea9/mxgatFEov+A+5RMHa5w8/TAv/5ovo1lsYU2DMRWdC+wuO1adY/+gw/UHiBX6SuiCwEyp6pe10DvUzNpr8UZY1oI3gveWPpozAGKq6OulbJnkF6n5EyJqkQAUiS3KjLl9G7I0ITYBMyWMImqW2mgmBecIzmNEqAAb4N4t4Ylf/Id84GMfZtE8yF0n/hALIxj6LDwhZiqpkaUrMYFHXu/k8LWHmosm
oIOMyISo3YMInV/xzO4vcfbiRU5/8Z/kw6s1fNvReI/3SrTc9R1dbEFnrdC7NgI1AWcCXjRWGMQQvKEKXkMcIRD6FfXqFixv0O9eo9m9zoZZcWq9544LG5w7t8mZU6fZXI/McGJp7Fz5Z0WvON0/HHgX81cjYmYgh9FsLCYfDAcfq2K01qOyZM2iwlZU1iT2chlyUYfk8cFsFyGDPKUgd12X7/vttGa5XkahlGIcFb8UUV9ZYggpCWZdq685dWFuN44tlHe9+CtxWK7vtewtHb7dQ7oDIOCrGjubUVv1PfZCx62+ZeXB9Q29jwI16pAj4Gv6mO2BLzMnkzsWJ5PxpE1TmJJdr7vR8wMpkvwPCo2UdtogzIxlVi1orGVmAw+e8rzvx/8qT3zis5xc+wZm4W7CQUuz3kDcAb14CFXsneiz0ZutURnkMJk36cVUyCWiaV8aY46cAKIVJtf23oW3N/jiP/o3+cithpX3+L6nDy7XabqgfnNqmgMxgZvYsyp2nbJYpPfI6iZu/wZVt4+7+Syz9gqn1oRLZze558WnOH96g/W5sLFRUc8CmIYgC7VaQkKok7aY3J/ob6UNb4CtojksKRVNrQCJRk5qVTeg4oOf6AmEPoE2Q8+RRKKdNWHQ7xg04WHEtBzTdLj0c5QYEG9XqTVzPDdeZxJKG8+lTLebhmSeaxxbKD/12A0OXIssZqzN17CzBWG2xoFz3OyWrA5aRUq90IuhlbnG/ILLJlu0wgZl0Xf0weeb40aOVkozk6HdWH5pvGuVkxrfQVokCVwdPS+SKxYkfr4WsJVQV4b1quKEqWiMcNfWinf9q/+Zz3/8KU7Mvw36Cu9WWvBNrC80ikZiiPE1zWjJ1xC/WnN5Nccy+ZW5ZKrwYz02TpQFLPvdFa4vf5tXf8lX8JFum73VHriVpo9FypAk2CELiSfZjbX3LPoO6VtW+3us9q/jV9ep3RW26xUv2mi49JItLpx5kBPbm2xuzJnNYdY0SJghVvAsWa76mGIXNUTcXETG8Tx9zSvnaRQm1ZpxsVJqF0hFt4ljJwnbVFulkYTSRs5fa/WRzyEEjQpEZLRUUNP81NvlppbXl0aOHufrGK/TtJYMQ5plmXL5BTdfb53YJvhA73tu0rN3sEcfieRcPCnE4suaIoiLqwA+shOeTJFYqW6GYLcQd8qkEZOaSYecCGC58APJRCyM1HQjwnAzjIg64EaD0rVAUwvWeqyFE43l7hOG3/xn38NnP/Yk57b/cw72esR1muI1b6AyGFGHPmigjhAMQh3zXYvTDKotPAEvsT4Qo0Idb2iIGtKjZnAQw6q7ylN7/56Xv+6LmT/ytTy93IXggBlGLEEcHov3LvuK3neE0BL6JabbJRzswc4+dXeTU/YWZzYN9z9whgun7+HkiS2aSpjNGprZTE0+QwTgtN7UB8H7GYIj0OfyMRsLCqyxsWTO0TuXc2BLNLSyiqYqNQyarEFkRxAKQdUbFQTNQRUQazITYkIxE4+RJocH2raNzPhGXZAiafwoUKbcRKa5rET0PLkium5D3uGNGKrKImVYpvi8YZyQnkzXuv4CM6Q/ffNGWlv4WFaTDZJcazRoKEWfIPlK6eSHq4dUPW8ieW5+aWKn60Z69C4z3X3EhCNfSwtgJJQItqrIGKcIhJ7T64arb/77vO29b2X3xk0evPPP0R1s4MxNgoVFUymbgBWqmPWiKXKq9Z0Z+GhHwIFPSQgmakhLkIDzfTFvhhBW7HaP0flr3Oo+wste8zoWL/tGnmjXcc0Wq1DjfQz8h4D0K8LyALu/Q99ex/mrhIPr1O0e241wcqPh4j2b3HvXGc6euMh6VbPRrDOztZrMJiixVhUr7An40OHaLqKamtY2r2rEJLPRQnzN+Q7nY+le32s3ZD8E6xOIM5hzsUi5EEKJJm5CrkMUSCOaOJDCCSOENdGBELJ56JyW7iWtpZ8zh8zXowqRSyAoAsNF3erYUiu1bLIGTKRCKRVKev/v
CdDTFWju2IMY/1EmehHiDqMvkAinSrM0lXGVQ8b/xQ9PXi+/UcrfJyU0+aVhB0u7cu78JHqy3gfuPr/g+i98Lx/4tV9CxPLQQ9/JlrmPq3tPUdeqXeezSgPZ1lAZ0Z05mXXG0osuir7wZfKJB/3CIBXL/iZXVm9l6T43mW1FGo0xvPy1X8zay7+RT/cL+madeW+Z+Z6+24N2D9m/jtm/Qti9xvrBVawsuXBhgwv3n+bOc5c4v73G5vqcelFjGkMnDvGGECo6YusGCfQCxkdhCsRETe1OVaUYsPQEFDjSqg/NcdZuXSle6DMPb7kIS62ZrZjC8knWu4FMopzqUUsfbgjW+wLomYLaOtFJSJM5fMg8HW3aulmmdEUTfVzJr40FsRS43rloeZlDx07nfdxwCLwg4qyCXKn4P+qg4p066RJXe+o1mE+2KFuSkn6NuPuVR5J0xEHbZhkvbdbyAMnyT0I9ep+BYGPsOb4mLm6LnvOnGq7+wvfy6Fvfziu+9O+xdeYOLsoGT7zvw1jf4wQqW2GjllC2OdXyEpSPRzldYkK087luX9J3Rh9s1d/gqYOfxTQdJ0+cHvm+G1vbPPLKN9KsneQzu4Yn9zzO1LTLJVurz9Leuorbu0rV32QhB5xbr7jzvhPcfeE+Lpw5w9r6BlI3mMqwqLSaQzAEgRUOb512iU1J2RJZ6kxakCYi0RV1FUm38LmOUNnEXfw9skKQYp16j7VsJd25oHNCqsiI/rJo7W1d25ggYMG7rGVCCJm/N2XqDBpvcnsxGTQKxGL4BC5C1tTDUolKoVhvRO2Y0dN4T45CWdMaExGaqsn1vmM50J8pMV7jxc8/XoBQFrsekApATaRCLEc+8RjDK8myp5q0vI5QPi0yWKylj5hNimG3LfY/JFZ4pNMNJNNDtUPARhdXAI8xDlvDfefm3Pj5v8eHfuNXeOmX/++cefhlVOJYe3KP+c1bLELAG4P14PqAtx5v1P/SjSL6PcknOpTIHH0PgVW/yxP7P8PdD97Dg7/vT/LYXsDLjMbvMXMO0wc+cW2X/pmOUIG3zyAHj8ON63TVFc6t19x1dp2L509z/sIptre3WF9fp6ltgXgmNruIXIrHB4/xsaZR0qIt4n0++j94fDCxeqeL1JcqnMmH84mdvmCVk1jFYa2JWU6pltRDMHgHlW2wlaWqDE0T823rKms+QU3L5COmhyLjg4+o/t6AcA4rIApscpkC6IYy9i3V9ys3DY2zWmMyHUjS7CNBjIswUZYElOrl0NpmvAGkNLzjjBfO+5q+yAwpRjkYVVxwfv/UjC7Vugwo5CFrWwxy6Ml4IiEF1CdfIKW5PNag2gDGkDplqbmpDvt95yp2/uP/zgd/9Re4dP8f59T9L4YqUTB2rPeeVVBT1UkAH+h7h7MSQQ6iT5w2IxNN48kFBEACK3+TPuxy8q77eftODY1QY5k7i13u0az2aFc3ONjbpWp32ao67jgp3PmKM1w692Iunt7m1GbDbLGGmcWKHe9zMD5lvpgoeM57nAw+lOTNS0Eq12n8zhclTMm/skgmqxaRLCCpBjFf2ihupwBYChF432OssJgrK/18MaeqBkFcLg80TTEExI8D9WVYwYodhKW0noo1VX5meG7I0smCSVQokiy7cZgtmcbGDCVho1sZChUydU+K83quWOjtxvGrRGaz/LvGhIZFF8x4V9DY1dDgpawRLM1eZCyQo/0ujE3bNLQ8yBzxSvyc+HxsfSqaz140ZQyXuaCCGO46PWPn5/4WH/mt3+beR/5f3PuGb6aa1wRR8CUEz5bR/M8Kz4E4WpRWAhdwOK3At6BUh0krDppdDYIBxNhpPwHAhZd8OZ96Zka/u0u/t8f+/g2ag+vU4VNcWptz10vv4MTJM5w9vc329jZW1pnVjnljmImP2UwaPulJ1RHxnvhAH5PTxejmAcPiuF2C9lG+F4wRyhyXDIFQ9KjUDUFRZ7GGpq6p6wXGCLNZTVUPzWLbto3+aKd5
qnFdmMKsmsb3xilwaR2OtWWJeA7nWmb+hMKKYySM5SNptTIuOR3T5/JmEdd5NrMZzuc449hCWc8Wt38xnke+ycS0rVCKXPq1MIOnu0/+JRXxHRa95A/ebs9JlR45ghJT9SSScUoIUbEL951r2PuFv8Wjb3snL3nj3+DMg49oxy/RJAIvBmeE+sQ6WyvLrO9Yup6l18oVvCPMLJ4IjhiyFg6ODOoQhhsUJND5HQCWO9eQxx7nhHuSi1stFy+c5Y6LFzl7+ivZ2lRmd2M9fegIwWDFxxrCGoMmXUhKqwtemdmS7yRo49mA0i/6MBLC0izM5mG8Zwm8qaoK4veVWtTHLCL84E8ao7WO89mcpmmYNdGMjqGLgGN1cBDZGIoYbgbbptptcl9jre5YUw7v894Pllv6TGF6pr9Hxy6W51QopwkF08T08vjp8/ncJQlj+oKhF+pxxvHN19v1bC+6u5Y3Xi31ZEPdTq8d/bx+9ujvU3/xdp/Tzx4CeYQYIXSIDdTW8qLzDauf+5t89G1v56Vv/DucevARbKO7amr0ggh9XVNtzjENVH3FzPVI39N3PZ3TXNLgFBYXEzIK6RIZVJqm+H95zacXO3z1q7a44/RFTm81rC/mNPWcup5hTBWtjZ5KYisDqWNrcp13I1bn3EW64yLs4PGH8junKOaUACo9n8AV1/f0pqOyNjZd1e9O1Q+VEWbzOne3appZZh10LoV5tK15iCziaX3kDKzizpVCdEgQCm15JKo7eS0hntPUy1JbamuIw/My1bq/m5GFUMi/HPdQL8CnPOKIEeBQc1MYtcYmWa+FcEyGTAVv9L4w3QyLCR8u8JCvm56L/mVGYoMmnFfWcP+ZmuXP/A0+/tvv4JE3/m1O3/8SpLJRHANdBAEqAv2sgpMb+JVFnKNxgar3uL6nW61wXukSNQ4ZgQMXYjOZuEsXiJUIbDePcLB8mu2w4mUvu4+6NhogN7Go1zaEoL4Y2GhqDRUS080sa8DSvJOAC+7IhZ7QzClIkuY4CV/Wmr3GH1OYomka6kpDQ7Ya0tuUdaKFEGIXar1D3odRL8sUP1S/8mgtVJqNw3nefhsPXhMyrLExSD/WiMm0t3Y4Xy2AcWhT2LEpf7s5Osq0v91I9/6YCjKPFyCUR8RZRAUEkZyZcuhz4m8zleNQigpSYY5ExFRvRXTW042ZmralbBeomVaxxxIlcVS+pRK4dPV9vOk3fpGL9/9Jtu5+MVIZzcg55O8GljNDs94gDYhTzlDTe2zbU80apUzsVrjIxOC6Ppp2gYD2vHRAhDwAYcPczcyc5Kd+6qd5/f/3b1DV2ihGTASiQsknM8RxNbMp5Y+kesKheUC542M4tKCSr5SETkRisL/PpNIiMKtrQl1pSMe5SMuvnbvqutLsn7qC0OP6jn5iAqtQJFAmplHGPh9DjvK4wiKdf7mPJ3wzJR8oOizRJ9YkBq0s0SSQqqpjlo/L60Si34hU2ayXgDIp+OQTh3TXIx6SxF/XtBYqlz5iXrGHV/XEJw85RHiY9e5243cVpxyfRfIND2OoatUW/GpSfAhN35I48YEUAok7VmY3jheVWbwLzZN8WYMinmipFXGya7+i8i0EF/leOu4+WfG+f/Z/UTcnOfeSP4RpKk3xS/VFyGhSVvOK5ckFdmWxyz72xQmYVY9dVfShgq4idYvsVi3+YIW0LfQdPibam6jtggiGjpPNa3nm+pt5/wc/wuu+6KVZiBLUjiQakSETxPuICDNU2A8hAPJ78iKz4/uRFkYSDBg0TMq+SprSWpvN0K7rokBbRDxtu6RvNYPJmKKiPmgbi8oarFUhcK6PFCnjlLejNKMi1oeZ4wbT0uFcaf6m8Eske0azakyh6Q/5fD5uaUW8U0Gjcp1qkohu7pJIJaIhGIXraB10eGTrO3zh45S3O4N0IVPQZnh1sEGDlDpz0IHZ/8xed4i0AEP8Rxi6SOF9FmUTAVUTPBalLPT9Cryj8S21W0W6iB5vPZeu
P8HvfOZTXLz/T7J57gJ1bcCEI/e9EAJiK25tzjWmVvVI1yPeU1cW21R4VxFcnU1107Ywa7DLFX5/n2qJ7vBe6LzHG6iqmrXuThp7kh//8X/Da7/o7+Bch9gE0xsgRO3F2PcTIvWKCmSy8ofbFBcfg+n3XGhqai8RwpDGlpBHXUQGkTouYI8S4KuhX0ULpdR+KUPrkMlXaK0kZGVa2xDzjtUyhSle/symZ3p/DNvkhBQZUtpckX97O6DmdtorBA0Tlelx+f2SVu5hNyKdV9/Hkr4wnPsXvHSL20G6gVidftSLojGM4sXkEUkxkZneAd1JxKhHl95hDFp2JAAesWp6iA8YFzC91j+G0NGHA3y3IgSPp6WjxYSAWOHVdyx49z/9ATZPvJy7XvVHmM1nUSATDJPPOv8WAF83rIylrSroOkznaHqP7RzGV0q7n0CB5Qqpa2wdi7TFwKrFOAehpwue2ta0znNq9kU8c/OXefe7PsBrX/sSZV4rMknyyYRoocswc8YotJqA1tJvDEH9ySGfqLhdEwGt65pKNFG6j75j6Xum/NJRDDDfuUHrpodEqykEF31HG3mB3Oi7c/L4EYu+3ExyjDKm2/V9P/L1vPd5RZXXeFRcsQS4piGKaeglmfrHMTnL70rHd87FMrKofI44p9uNYwulew6hzDG4I0bOKkkXEP8ffqrfpAJQZ79SsAQ8YnotNHHaZdcq/TC928f1TjVXO9QUBvG4oH0plAGgQTBcPFHxiR//29x4+iov/5q/ztrps1rVUZz9kNZHbk6a2gUEY+irgJcKKkPrA9Z7Ku8jO4F2gZJKwQSxFqzQVAazu0+3bGMWDDgsdVWzxr1car6GH/6hH+Xk93wX9993UXl+Slg+z1Q8t+z/FD5TOd9xcaWY6O1253JROw5r5KkWSQsqN9hFECXbyWZZmWSQPyOgaW+DgKXNoDRTM6DD+LnpKBPT8yNqxXQ9SQuXx/XeZ81XVdUIkU4Cn447AFeHTe5BsMJIuEsCrZFvH3jO6zlqHFsoxRz91gSqHL0JDFpy/LoUoQ2BUGnv+RABAVosSwgt+B5xDt+ucO1SUTTvCX0fWbgDfQEQhNhKLVvNwYF0PGCf5lc+9VHO3vGH2bhwJ7aGBMGAmtZZU4bknscUTgAjCnYZyRynEOiDH7hivUfq2EejMsxi/mglFYY9xAvQadsFo2borHmY3dWn+N6/9w/5/u//+1icCrgU7EDR70vzWXrwR5muIqnd3DhxWgESN9II3kf2Pw4LYq6YKALxNoZHKmMQPH3fHhJkFcaikFki68Ak0J80V/oO1U7KeXSUZik1Ztu2rFYKsHmcot3xGGmTSJ8vwzm382nLv9McHRWzTBSWzvV5I0gCfBRwRbpPx9SS8EKE8rYHPRx/KUHg52ICiLdcTdTQqQD5HhN6fHuDvj3A+xaCx7ieKsTYXzCEcUJtBn1gSBxQQKTlrhOBR3/uR6jsNne+5BtpZhUiqlV1dQsptDO9koSf6QpP5yupYjm391HASWObElsNzPtAaCpk3lD3Trs9rSC4EMGDir7reOj8N/GxZ36Sv/t3/w/++l/5n3DtKqEPevTCn0pznTah2wUJymyY0mxLr6V4ZAg+Um0OpujIZ4v8OW3X0bYa7shgkegGlE3mvqfvOiXAmlWDaZoskMIsLTVTKcyzeliS5SYjEY9wTjcSgjpCgaCJBaDsFeZwhX/ph442jnjsbGoW7y2rOkqBTCVfpZ96VLbO1Kf+PTFfjwyJwKF1LPgM22CM+pvpBnsiHUjAeIf1EaHsHN4vcf0S13WaeBzUNEoZ/4Rk2UakFhsrPQSfAI/QI/4A0+/iuuvUqxuY7hZnb9zgXZ/8GPe9/H9g69LdmEoGnSOQiqtH5hqH3eTkS6kGDYAuaBNC7KEYEBeo4nOpMSwGpDLYWUUTami9XllwBGeoQsP9Z76W33n8x/iFX/xVvvbLv0gR0Vk9QgRH5yIS3/4mcgAA3C5JREFUNeoE
gk9C6IYsHt3ZffYP00LsI7+rjXKTsm/KWJ5N3209WMtsPle2uQhHCoEQE7KtCCaah8HHRjnGIFZwefGGkSbRcEtBq2ESy2CMqyaf1ofM3J6+xzbCql3Ruj4y4I3vWtpOu64lAE1Tq7tVAk8IxhZbm2hBdkApVZLpW5qnKvig0exYaRPX0lExTV/Ulh5nHD8kcjuhjBeiP0KCIXK3JjEglVCJULke363oVit8tyT0e9At8b6L1INgQszyiIteBaeKCeUunotqOQO6INySqt/H37qK33kWs7rKGpc5OW953Wtexs//yC+xtn4PFx/5Eqp5pVXtKHn00C68vJmHLpBkNka3HVDKDxM8lfNUPdpD0Tlk1cOqw7terys4UnmLtUrJXwWPD1VMkYPzW/fTuW/hp376pzhz4gSvecmDhFopHoedXs8loZjJB4OxxnHO0fs+Az3JTIVp9ks8n8JE0++I15r9PKirSmOpEgih1xzkSOORyqKyNkJb363aNjOvm1xCMzaR9ZhGU/NmTUyi143EFyCTCeSOzyH6acZAZQ29RIFjAGkApWURiTQjQzVHuskSuWlykkecR+djjWh8e9KwZRZUcr2EgTe2BMCyVeBTTFVG2ve5xgvwKZ8roXboKWGDKN8NQnAd0q9g1eO6FavVAX65T+hjYmhk5kZSAFwvUoobJ8lPBASH9R20S/xql6rbw+/fwCyvs3A32TKO85tz7rrnBPeceQX1+S0+8dhnWR20PPLab2W+vYk34KQwSfMCKWKEhfpPk5xDDwLiPVUfH85jux7bORxeW7+tOkKnQfnQ9eprxiC/JL/MqDFgCVS1so9fPPUIz9x8MT/+kz/Na//WX8vzajKxkQpFCZCUQlnu6D54upRplOyCAnBIWTXWWuxEUJKGSqmGQ7giZRkpg7kpXJMRmBGfTmCOLuJxojgMTXWH52PH6wlLQNZuEbsvfct0HanKJZ1/qZVKM74Ef0ofMH02ZfIEH/B+7DqkY5ejDM3oPYigY5zv1IMVfg8Y0tPNobiQtFjT4g7BQb9Ewoq2W+FWK2gPCD4uTAlI1rmCDwaoMd5rVXvQ+rxM0ug0O8b7juBXuH4Pt3ON9etPsxn2OVW1XNhecPHeE9x1x70s1mdsbq9TNZZ5qAgnNvn/fe//ydb2w5x9+A30sVnrkSBY8ntI4hn39YCGXpLv7D3iHVXrqDtP0zpM22PaDudaet9x/ZnP0C53lF0ueE5v3k3j1RT0qO9pRXMvvfHY2hJEqGzDQ+e/hnc+/gP8kx/9cf77P/9fs/RtXuRhOM24o2tLwLL4OOe7Kj1gIQTjuF8OYTC2DEbhifiKsTH5Aa2DDCEQejfESkc+jIJ+ZVVKSggY+HXG/DglCCWTME7IIN7QDXoaQklaqBS+EqgZTM7hShMYNE2lK0EeghlItCfzk8+tPHfRuKQPhYUS19vvCdDjRct/NGDtsd5jw1JjcF2Hb7tYhnPAkoNMGx8iIJLQRIJkI1RvYEugJkiF84DvCH7J2t5NzMFNuHkdbl2mCftsbTWcPrHg0kuFu+94gFMnTrCYz2lqTYgWL9TG0tgaWazxA//397O/u8cbvua/xa5v0EaNEQKjAmqZLM1kzWY/0YMhMHdQ907jk20HvWO5e4NnnnyU3Z2n+PyVt9G5fZbddXwY+EVn1Ukau8lLz/4xtptTJCPYVgZbG2wT+0p6z+b8FJWs8e73vZ+qmWF71b7ea7lYYFhgmio25LKW2kWMxK7RqTjZjV7PWgqQI2KRxhjqSpkHBtM5aB4uaBfkbqjqHxDKJBBDyEDBOWKuqRZJ6+dSOEGrbFJ7u3JkQCZEHzEcRonTscrnp/HI8vmj5uso7VkmJExDN6OwR3EO06GtAofw03HGsYVyLexEkl1H6Fr6gz1Cd4AET5fzB0MGPgi6zzrUb/MSYvdGiw1q5uIPqIKn8R3dapdu7wrV8hrsX2G22uHUmuGOk1ucv/8MZ05tcer0BrOZZW2tYjZT
ZvSqqhGxsULCYqigqpBZzccf/SgbG/eyOHsnvZEY3og7eyBSYBYCSsaUACVhNl7JqeZ9YH3psX1Hv9xn58pTXLv6GB/41L+gc7uKYkZz7mVnFmzWdQaTPnRlh932Ou9/6kd4xfnvYHtxFkTJn2VeY5oKgsO3LXjDmbUX89TuO3jqmStsnV7HE+giBYd4N6QIR4Cp1BZZi0gYLXBrGWkEGEIMJiKYSUuksEdtKyo7IOiJYBqEylp8qIpjDTnJGtoYa8MuobftEDudz+caXon0kArAlOwChfnilfsnaaE0RjHR4rlkKqZkiCniqtN3GCBLr0csiCb2X00CVx5veowpH5POQ6XE1BPT/bnGsYXS3LyqJT19j6KOLZXvQTzBKhoZ939yFoNeYTQXAyIOExy27bGrFtt+ltWNy5jVPnO/y/mF49KJOXfet83Z7Ts4c2aD9fUZtakiRWHUuHaOsfVAUxGpGr0Izgh9bfjNX/oldm7c5GWv+U5kbR03mQ89pWnKw2AnGg91r5ypa62jaXvqg45rlz/Luz7yz7ly68MIMK+EL7t7gzvWKr760jYGz9pMuViNUyz6St9yyzv+6Qcv8/6n/xlbzf08ePpr2apP642qrXqzkUJtzZzEh8BnP/cEL966j871uqB97JCcMk5MSUI9FrhRgr0omJK0WdkWTkDJg2sLdaObhTGqZRFqqxOiCxFcBIxENAk8VaaUvlUCYabxu9IUFJFM+bFYLAaNxuDjjbRh2uiL50rNlf7Wt4YsPOX7Sv/wEK0kY40occ5Ks7YEesqk+jLUBGRrIj2XzPepdr/dOLZQLndv5ZtprMSuvqpmJBjqELDB44KhDzVBHBWepu+RrqVql4TlZdq9p5H9FbV3nJtf5vzWgrsfOM25s5c4cXKdxWKGFaGu5lSNwdaiOzYmJzWHoOwq0fhRUwnBRh4eb2v+w0/8JBub93LyRV9EL7G0rNxNKQxWn4AmFdLawawLzJyj6nrW93v6vVt88OO/wsef/DlCuMUdmxWvPrfGdzx0gbmoH7HC0wnsGuWEsRicE8RUnLA1f+4VF/j+DzzFR66+j4NnnuUB//VcWnuJQlsxgRsfMh+S8w6/irHBrsc7T20U0Uv5+TknWGRAohJgFQbfWCSM2szl0AeasGFliFEaGxHeoCy0efOKX5O0iHLTHrWgB+BpCpSU70sbStLQda09SEpTM9+v5Fs+xyjDFlNBKcfUTE/vG4FVAEFyMn7azNJnjoqzlteYz/2YgliO48cpfRvzL8EETbESegIGHyw2CMZ3eDzGt5iDm8xWt6huXcHsXmO9O2BhdrlwbsG5u89w4fwlTm3fxfpillu0KX1ErRNkKsQMF+2C0/wbAUlkxjKEBwAkqPb45V98E8v9JQ+95pvpNtYUpAlSzE8YpDIMGkVQc3DRBTaWnrpvkeWSz3zi3Xzksz/LjYPHePhUw//7Nfdx3/qc0CrH6I4E9o3QGYOTCCaJgBj6oDts5aAKwn/z0nO849kdfurjT/Chp3+Elm/jvgfeSDA1YkT7PMZiRB88oWsRtEuYl0iIlTRj8ob94OslDSkRhEnyqj6b/p4WYOrpoUQMARHl+bExVmggc9dKmvuJj1YKWXpefcZxilrZuxGGPh8wEGW1rWE2r4/M/EkSmeKXMKUHCSPzu6qqQ9lLpYCW2rJEcxOKm3zhUhAzqx6MgKXxHAx5w8aYGDk4fssCeCEhEYghA4GQSKXUjJ31u4hbwf515OA6s+VNtlZ7bJmOS6dmXLxrzoUzZ9nYuIf5Ys5sNmc2m1E3MyBElm2TuTNFJFP8l7QVg+9UmmrF/imCry3/4d/8BOvr93D6odfhJVUxRpCAYRekeE7wNC6w6GCzFeYHPXJwwMc+8Rbe/fg/x0jLd77iLF9/6SQL79nb22cJ7FaGFjWbkZSEkBaT0Cl+QxsCxntmPvDGc5ssGsOPfug6n3z2Zzh356to1k5CCLSVwUdf0Ihos9yg1JbBKICS
d+SkJdP9GZxNFS4xyuQ4UjFRYMTgg9N0uVRATWJuDykWpcJojvbbpkBHKajJt5xWgZRoaGlG6989ywOHsSnlrhoyjAIZ6AIy2jzNdU1aaoSilus4Hjd9Z2rB3rZtPqfh2oaYZDpu+s6sPERGm01I6ymZ1ookHnkutxsvSCjVJPKaOUNHv9ylvf4sJ298klloOT2D05sNd9y7yaVzpzlxYhNrahaLhroJzJomVtZXGsOy9bCbTcyTUgincHZ5bcN5gVjhl//Tf2J5sOSR134zsrah2qUARkL+XIxKRk1Zu8Daqmd7GZivPO7WLh/79Ft53+d/lK++Z8Yfe+ASd9QNruu56R07lWffxmyiaNNF3RKPGf2imBJGTOtrRdsLvPzsnK+8d4Nf/fQ+Tz37KJdOfQliDEurncjSAjLWxETZQcvFkN6o10V6b8qaMUJu7xcKAfLB5w3JVnVMl0vznB7kpAYtsPZp/8q+5fQ+6b3S91ex4WsphGUMNY0y8dumLBrXsWpX9H3PbDZjsVjoWgljX3EqlEk7ludUClT5XemzZQeukZ9dmKsl2lr6kaWJnLVi2pRMiYIfS7xG49hCuc2KbvcGcnAd2XucRfs0Z2rLhdNbnLprwblzl9jeXGc2q5jNauYzLQmqaajrJjKqESvQQTC43scAbfwSGZKtp+bRMApzMwIVtfM4DGZe8eY3/Sdstcbi7kfo4w6V0uhk+I8QNNlcCNTOs3Hg2NrraG7t8bmnPsqHP/cLXNn/KN/y4Drf8cApcJ7drmXHePbr2AGKmFAYYmA9oKVa0SwOIe7souZkkIAzam66tuXL7lrnVz69y1PP/Cp33P86+vmcAyMjNDjv0um6JeGfmqlCFBIzaY6jnasK31mUsV1ivNWKoU4VHzlriEgdGk3bEBA/MEd4Qo69ej8IYxraHDUVHw9xyvSzDEWkOGlZ3WHEQCyuTn03Ul2iCURWh8NkXzCEQMpNIJ1faZom4StBp9KUTqZrMsNLoZ5eS/r+9L0508eHHLoyYnII6Ljj2EK58YlfZW48d56Yc+7ubc6cvZONE3O2T6xhpaepTWa7rmvtxmupqZmDl0jK6wikxjcxXphvaij+v30GhS6zIYNI0C7MTuCzn3mKZ556iov3fCPViTO46H+krk8qoEmohR6oQ2DWetb2O7h6g9/80I/ymZu/DfR86/2b/Ol7ttntVuyHwJ4YuiB4F30rGULnIf3LLMdkgUzaWLtB61V4H1iznvtONHxu51l2rjxFc+HuoVkSg7ugWSxxc4rV/lkI06xEQRRRYdWWcgFCop7UlDpFVlUgqszbO3Q+E9H3JcQ10TOmc84YyMidKLNyji4Xy+GBGB5J75kKKWKyP5jqO51z2pC37bLGm2q+dKypuZx+n8Yjp8+XApqeTxvDNIQyBZKmfmVg2DDEyKjhz3HGsYXy67/0Ds5tr7M9g3m9oJmvU9cK44dZhWQbO/osweO6gHMhlizF16KtmZRWnhjJe3oeU/g7PsvIjwyBYAS7WPDP/+k/oKo2Of/SP6QgUEDp+ZNXmQVEjdeKwFYLW7sd1TPXePuH/gVP7P02bzg/44/dc4pL6xVX3JJVgKVYWmMi0XLIWh1AKHly1OdOSRNCarfuSbwSkoS2d2zW0PsDVns3mblIuBB5bSpbxbBPnD8DxoTBvJQxn6vSZ0b0NXZEJlJYwFDvmEAXYwaqf8nzPySx4z0hJoxr+Zhkk7cUynLRTkGYabihrMg4vJj1/kyrWbz3hN5lLTfViM456lrZEdK1laRc0zHd6MuCa2PMUB9qqkObR3nd6bpK31K/QPISTRr3qOqV241jC+WDL7qbrbUZFT1VVVMFwyK2Ke+McgaEEIbKBEB80ARtYqemrKXiyYnqj6P8X3WSw1QEGQuuCoY3hr1Vzyc//nG2T7+K5uQ5Xf+SdE3IrdW0Xt9TBVjvPCf2HTc+/SjvevQf0/kn+OuvPcPL1xtW3nE1OA5EfUWXFp9ElwxJ
hAkZxA1E5DGTIhcgSNbR6dp1Uc2ruKC7JXiHcx1Xr70bay0ve/FDeH+AoG3XJMYxk4maKyqST5s1W/IJ9P3GqICX7ADGDJoxzXDaG3MCe9w0CWCrCmOqjPzCOFg+XXClgJaCeyjswADaKGAjmQalfG/y3V3ifo1a21pLU9mMypaIq5gjkgAn35u+o/RJM1BlJEIEh2k60/UkYRttVEYNfp+yl4pO0scZxxbKO05sYKzB2gUEj+8dvUH9pgDe9bk8SG1xMEk7FktS0mKJwxwhkKRPhAL8CeUCj/oyAahVxc//5M9hTM35F30j3hjd5WFg2QvgTaByKuwLF9jad8jl67z7o/+E1j/BX33lKV6yVnG9X7EjngNTEcTG64gLWAq/ToYwS0jaJG5KqW2BVlUMn8n9OVH/7g2X5rzzqRVPX/1ttu99lfK1+iUi2sC29xE+lUFAEr9raiUnEisdsoAQfRrlA5rNZhNhSAKbiqfj/Hjlc01dkpOJmxarcy6blaXZepRA3q5qv4xLlgI9LPxxQkAWBgArOBdbLEaGOalin0sKk5RA23fZmkDsoe9K35eurdTeGbTi8AZyFIKarI7ks9pKCiElh0S+4OZrIDV2cbrggyf0Zfvq4sSJpcAmxQcH/2QK1By1eYx2yKPMj7TzBiGIoXWWN/3sz7O5/RI2735pZCyLscIQ8vvFabnWzHk2V4H1Wy2f/tQ7ubn6LF9/1xovXq+54jt2xLGq9DM1lhDNVpGoHTWwpwwFcvROGjLgErtAxX6S5JusajaFNILvlbqyKCSwVYXvTZGnm2KIqeEqWTgTocpQ2WExJlXdD4shaYEy1CRiSRX1IlDXWm5VVRUmxuq6TitOEmp5FEhSjnGYYJwvOu0NWc4fDEnhJhYQJFAm+fGlKYwI3vnMnpevMX6fsirImBM3DKCT9z6n0pWbwe3WZBrlNaRrS0LpQ7Fhec3t/T3xKft2gI+D1yLkbCElwEGK1O4QItFtFFhJWGXUmPkCx5Og5pO+p4S8p5OjCQYWL8L73v0+nPOcvPQGQt1k8yf5qpLOJwpQ3fcsDhxPfPrdvOvJH+PUzPC1l9Z4Qnp2TdAuw6INZfIiKM52KKUaNHp+pItI35/+NIK4gRWB/F6G9/owshySZrYMmnEwOUNOs9OwhmClIif6G0gxzUFLxDMOodhE1X9Un6rK4QwI9F0X6wEHpLEMc6xWK5qmOWSelt9ZBtzLjaC8l+WGlkxZPTPJGmvIvLEqrEH9azXJK+VrStcZTfLkk2aTVqB3Lltd1toMJiW/svye6WZTXlvJQDAV0P39g+xiVLnfy+9B6VYoJo6QhAuOEq3EcZPBB53h4k2C92ntph0s/Z20JKQuwmFYP/HjCcW0uI0ZP/vT/x4jFRv3vjabDER0V0S0AVHcLead48xuy7Mfexe/9ekfYqNe8T0v3cbNPfsBNVcT67cY4urO6WSDsSSxiDiaYgmtLK4jRkL1EbUs6b0+4C18akcX4Mn6QT2OawHP1vYWwThEfAR0Ks1RNRJPSWK6XEK6tQ6+gGOzyTtYKoNwpgVWghQmgkQqSEP8ODkgkk3cYTH2fT8K3B8lZKWmnKbeHQUSZcGI16vdwVTAdD5j6McYBJtL2EwMCjrnNRtMDM7pXBN9vbSm9JotVWwf2HUduD5rXM1GHnPwTP3hdK2lSR+Cp6pstih646iM+vO1/QK3wlNXSrIvd0jJT9V+GGKOEjUUgZEWK2/EcJgxmpWRwOHAysmEmsMHuwdceeZZtk69Art5Qlu0FceVRPgcJ3qx8rjLV3jnY/+SRbXkf3rlJutz4VYIIBZjK0xtsSLRFzh8sen6U0A7bQSDD3zYBMqLNr3qPVIZHruhFsh6fQ4vsHf1Mdr+Fr//9/8RpF/RWN0CrNEq/Yy8Gm1zkIuHA7FgPG4e1mS/txQMEZMzokqTKpmmiaGgtGZKoS59s6MAn3LRTp8rXZFSaNN70s+kfQczdvDHkrYt/dy0trLf
TVxn3g/rAa2JTZZTuvYEMFVVlTVmLu+yFXXRUTp9f1mLWW5Aaf5MahNprfZ56Rxy0A19YJ5nHL/IOY7RwrrNmHoY6eZMTdHnEsipDZ936PSItt9HfucjtG3L+fOvxtka8eUCUhNOvJp+1gfW9zqufu5j7PdX+OJzNRtrFbv67RruyLE7YRTF13cc2nyCL3bi55qTEPKOi0hsCFS09CvN2OCprWW9brRzlqiLACl/dRCmzNIWQkYK0yKkmM+yI9UUQUzC4fPcHc1u54u26VNO1JGZWNzPqcmWBHSqcdJIwlYK61EP4ND3ZZ9Nhs8bY5RytACCplo9acEUrkmf7VyLL+KlpS873WiSKawUIyGn8lXA3qc/T//sTfZqA99+2yUyzMHzvyVecMlKV5ivUCJ6cUby22JWy2214vh3yT5TmsCI4sbKAR+CgjsI4gyyWOMnf/xfM5+f59T9XzLKMlGwx2tnqoiYNssDzDNX+cCzP8t2Y/iG+9ZYBkcQC0kgYytyiUIvjBdZ2QxWv8dPrv94wxttNpQmwYvBEQj9CoBTJ7epahNZrSSfy+BzRlMPMnTvCMO/QCbVSqbp1IT0PuQrDD6GoMIAggzatIgjwpGLOs1HqQ1HKXQT4TlK48IAoJTCmTNlis+nz6apsHVFchk4JMCDk5VwBbVWIk9TSARdPifhmwB4j/MeiTHSkeXgh3hrmteq0ioXMQqWeR+w+yv8k9eobhzg5Atc5JwuKySBHJkohSCmx8jkOUwZny5m/Hc6QvouCEVGCnFZ+KD5oM9evcGVZ57lzKWvwi7WcDE8o2ZyZBmzmrdpgI3dJbtPf5qb/ed55eka5kLntUOU2FhDKBFISf5svDpFOwcTNAmkHzm9xxtBAsGCqS0rB0ZqbD3HBce1y28D4Ov/wJdibJv5aKX0TxkWpOvboQWeAYLJ2mxsbYyBnXTv1P2WyEnjY9hDs2hSsXPy1RRJPpzidhTIk8zC6UZQFmIfRbUxPe5Uo5c/s78c14UYRWJhTLGZQJvk8zmvKHMf+phbkc4n0MeO0hZiUXnAWt3UQ4j5tr3D990oCaOuNfRU1RWm0o206zus6ak6T+UH1+35xvGBnjARlpF/UAqlZKEsrve2owy6TstrjtawAR96pJnxr37kh6mqNc489Afp8LiAtueO5zgKp4iwsd/znuvvBOC15y3BOUDZzK2t1J80mruqGOZEs6e4F+FIc+1YFzwcjF/71A6fvNJz35k/ztbmRfaDJ4REJFyUY+XvmB5bd3gTu16X+ablAi4BlOn5Krg0rcYpWAkKtBXG1RlTU7C8l8m8Tfd1WuB81M/yfnddNwrMl2DL9NpK9HPwnYcNaeiZeQSDOYPmTyZnKr72IRB8jw9F2wTnlBAtWhNJIOfzeY7reokhMWNykYCxBnPMHNgX7FOmCy9vYDIRgv5BSAZesu3Tbpd8nQH9wSdoOR7bezcyGocJjNo2CDYEehE+8L73sb5xP9WpSzgf2Q+SHKYJCMpibj3s37jGk+1HWK+FC9sVQYwKoiiTeY5FRg09jBhzlXSOg4k30pLpDc8hmGmh3Fh5fv3Te6zPHub0qReD1KOd1EfNlY4V0KLyBCQlysajAJs09yHtTqIbZ1kJkefWp3sIiRwrobDWVnneyxS3ZPoaM2wYJZIKQ/yy9F/TuknnWyadl6MMoZRrII3y76OQ3um5lAhqef15vgohzmaoyIiIrPQpdZMY2i6Um6E2X3KK+JMwGIlI8u+hpkxfNgA36SYTBU53brVGY8pZNC2zZxIC40PGSQ7D6+V3lxMaxPKu970XgNOXvgJvYvA7CnppXmK1R6RtPe7aDbqw5GRNNAstIhaMjalFJmK6hwVNYpv2EhTx0e8k/0gg2NRkPDyXT+z2OA/ri7vw1rCqLAcHN+i66zz44P00tcG5mOsa3UrtVzL432ljIwqXEP1IYdiUhLGQMmgG732uNEnnpa/ZWHjcZaQzWUPOJerF8Zoo/b5cEztK
nzOUQlmasLebo6OEadgUhsqM8jOHx+FQTPn+5POm48OQlJ6S51MWUrIcmkVDZYZzG2nf5PCk848xbb2eL3DyQHK842US/KDSB9MnCWCIeZmD6VNyphw1ieVxpiO9L/sjleXzTz6JwVKvnYtaJRRFvkR/j5jrCnjPZ3c+gqfn1WcbLEa53MXE4G6RMlacly70YWKDT9SO08U0QO3lZ0fviIvRec/OSm/evD5JEHDW4A72aPsdLl54BELamckhDI3KHc4sSecciu9Ilkk5h9OslbxYR9k9g7Ck+JtzLvtP5QKftrw76r6WZuf0nEvTdIqEJvP3qM+VGjHNa/m9Y4142B8tN4/SBJ5q2NSDJF1/0uyVMZrXTRhdWwjKTC8VuOyKlUDhoak6chxbKBeL+fBHCLi+GzI9GIKoqeuvkVhRkTrrxouLH4+mq88dhCN8kXMYJ9GIYbJEMM2MX//lN2Psgub8/bm/hDeCVQuXrMCNwXgHbjCVRYgd+kLMkrGZ5yaDH/GNaeGla/Reuxtn03B0ksW5HmFZDCac471PrRAxbG7cR7AGLxQsAtE/A6wZmBZiPvpoEUz9saPmLF1R6UPlNLPYrTm9XmbglH7iwcGBMs8VvK1ln5jp90/9uqk/WPqY5ffdjr916ieX/mj5e+k7DoRigyacmtvl5lAeO322rPnMmrHX5IoxkFaUcSEE5+ndtIvZ8aTy2EKZ0icDKI1kpSfsvUe8AWMI8W/v1QBMi8zHBS8iWQhDzD7RlnVF5UM87yBGWwsUO9Hw0+BDwFQLrZVMfqvzIJaUVVMaC8YVNxXIPFqxfXmpJXV7SJpPP5H92mgRDAGCNAYz/iizNWkx7/ucDlZ+zIt25NQLTKEPGaXd5a2rNJcOmU76OZEkjDJafKWmUTDDF8DcEFJJPT5AsJXVwLcQKyDiMUem21joS4EyxhwphOn38n2JCiT4iE8ENbF9UPDESEonjPdV0s+UzTMmzgKN7Y7Xz+E4eDrnqZAnwCttZHpb0uIZhHus8V3ue5ISZiRokvxxxvHjlOUKT1iNDzmGONjNw44lqTW6BKyJkL5PeZchUkbIaKcUydmxJH9gZMaEwGcf+xyr5ZJzd34N1DMlwUm+ScpkSZ6dD+BVcEsC5qR2ylSIECcxOrX6fPx9MPWG3MmxYJYCOThyAsP1iAJIO51nt4X15h6quLF4AnbkyibfaxBCEwZXMZ9vfv+wUNJpp/uRzi2ZpGlkfyjHNkM2uwyDyVvXNWY2O4Q424gdDNfNkYJXjqR9pr5gAlYMsbV7PC8fQm4olNaGsUWDV0cU4MM+ZjrflBE0NXfL30tkurRqUq3mkD+brIchBlteU9rcbDqe8ZFv2NPexn+ejhfgUxaZGUHwLmiw2h2VOpV8huEjY5TQx511cKynu1RgsMdHQmksb/ql/4Rg2Lr4Or0JLmqwuLOmsp6EXjqnPRgz900Y4o3Tcx9pnAA+dtYahQz0A7eBcW4zhGwRHDihdZ712SbO2ljr6WNDHcOVK9e4fqvn5NYM7T5GFu7SU53u9kHKe1BowwjWTIP8pUlZbn7jrszjbJ0xWDOYwprEPiQrTM3V8v6PQULG59SPzz+tj3SMFIZIiG2qzEjJ+clsTWa4lhP2eG9GApTGUYKaRvKpp+CSCepXlp8vr8NWtjj3kOs6q/oLnPtakhKp5TlA3r4/vDtNUbZDu/rErCpH4hOtEIJRM9Z6IRiQesbHP/Zx5ouLmM3TcadUAQ9Be2gYq0s4OI8Xh09CmXZFyLm8gmKuyfwqRS2Fb8pFMj3X5xv53UH9V10YyX+JPEHWYhHWF6d58NzX8tGPvplv/RN/jle+/BG+68/+aR6477ySMbleNV9Mz4uzmMXUhxBpV8aZOSGEUSPVwbf1WQDTgk45oOV1TrVeWgt918WEgsPvSc19yrVQ1l8mQq4yRNL3/aizlrbwc4firwn9Vmsrbh6B0XXnci+BvndYS76+coNIm9U0kSFddwghM95l
MzuAWcwUuU6xZIk4iOhzkkKE1mjfUob+Ks83ju9TjpBWDi3O6U57KBWq+H0wyw7vpNkME8F4bXdgESpB2cb39rh5/QYnzzwMzSIxbBSZNV7927h5hE6Fsg/94AdEM4igfqJIWgQTsCJNbDhaILP2mmjd6UgfUy1iWcb1W9kNddaN9u60neehU29kuz7LZ66/kw9/6JP8t3/+f+HMmVO85MUP8Y1f/zW85lUPYkVL5yQi3emsnHejmF/SKqXfloQnVdqnhVqGHsrPDIu70MhxLqq61u+I/mXa8pJ8VpWNydn6vU3TxLrMIcY90HGCMZWSc7l03jHLph/HLVPnL+L3qVkbot+Zzj25P9F6YmxZTHOsS5+yJNdK15u+OwvvcomRggYz+cJGcHgwghb7B3wVLYhjbujHFsqU5RBCiH5VRJYKNDD9LHfEspj08DhcFpO1Z17mGnezQUGGt//m2wE4ceGLsjkX8jaFSonPHim+d9ph2DlscxLZE262QdOeBELeVY2mmgVlbytNWMI4nS47+y9oqP8q1vC2zy0B2Np6OcFqGzwbiZfbvufs7E4u3X0PB/0en7j6Tp64/F7e8uw7+Y3ffDtVZfnqr/oy7rvnLr7hD3wZa/MaQxvNd0ZB7tG3ly5AvFeJ10ZkYAJPCzAt2qEkaWy6pmNkIMenTXkwM0tzOa2hqRZOv/d9T2o4lBRAMh/T+5KWm15PaY6n80o+cgnCTDt+pXPK/mlC1ycx0MNzB6uuI/W7zCl2VaWs+MGj3PKBygrVYkYbPF9w9LXvet3hotlA3n0Op29NJ2uKaMV3oQLlc2V8qTUrY7BBi3WDiW0SqorPPP4ZAMzaSUgRcRkEJmGIOWk8mqCOwPzUvdgbc97z7Io/cNcCrOY39l2HreqcLxtkohULgRw9d2zBDGli6IFbrceadcxsHW+jKeo9bbti1fXqr3uhqee88uLv55Hzv4/WLXlq5+N85vq7+U+//Fac7/i/f+hH2dra5NWveClveN0rec0rX8zZk+s4t9Q5SOBbRFDL2F8JTEwFpUxrS4s41U0eBYjoZxRnKP3T9JnkZ06b40w344QBBAJd19N1fWFlHBaOqfUyvD5eayVxcvreslh5akmUczQFj9Lwvs3Pp82maRqNbTZWTVyUKWIxX7A65kqBF5KQnlDHkLDEQZMkW3pQ83FSiCZ3fAwQfWnCljEpPaTC0IbagDeCWKX8D1WdBSG2NCWxGYx8wTCgwUCiSCbM1zg1fw2Xl7/FJ2/0PHSyJuDxrlffKIYHInRJccAi2FD4icUI+arHrw8zoedo64pFY/G7SzocVgyVh7kx9KGn7VtcCNRUBFEN3tiaxWzO6QtfwovOv46ru5dZ9js8dvXdLNubvOW33s5vvFUtiDsuXeBv/a/fzT13bCmTQdTOVVNn/8c7T9t32dLxYVikKswSaTQGwUtaqhTI9Lv+1FTEros8ramwPN4v510sVQsRfIlrBqKZl8A5WC2T9rLx/lY41wIDkprOIXGzxtWmwof2SylN0vT+dL5TTZs0dTluh9YmN2SaodO2HSHA3MYifiMErzxJIByX//X4IZFIlaiap7gxqN0MyjSWaEL05FVojCjoT6R6HPIAh9BBXrghmi/WascnKwTRxeXrBZ/8+CeZLS5it8+TKEZ0k0jQszICGEwsxwETNMbVG+HEqddz9cn38kufPeDBbY29edfhjCnCFrefvMMCN/wRKC7p0PviNlZpoXLAxTaBBhs8W7MZt+qgtBYYjWmJaHPZCIoEoBLD+bVzGHuBB8+8mN57ru1e5vrBszy18zGuXf4s3/U9f5vv/76/zv0Xt3XjMwlwideQNsF8conpXQXEyGC+Jw1TAiJHoaYJC5BY4BuiJZHimq6IExtbRUEYktVBGQO8S2tLwyAuIqiCyUKo1pUWmqa1pppNz5Og66I0x0utWv6cgpGlZpyaunpeyUWrDh0j+Zt9p4wUxhgqZ6LwSr625xsvuHQLptB8
0h2BcskOOypxVxFCGGDkYffSY4z+FkWsKqdBda9pLPS949qVq6xtPqjxSTQGKhQuZQJ4otmaTtgDywoWG9us13fz1P7HuLzynF3Tch+xGkrxehASonesCXmekRnsstUwIHYAlbXs7z7N+z73T7DScOfmF3N27X5tkd73zCOSKSgvrADiFDiYieHSxgUubV7gkXOv5NZqj/c9+ev8uf/xb/BP/9Hf4d6Lm6y6Fi8R6S20xzQckuavzAfVrlxjH7UEPdLfZTF1GZJIx0yPtNATEdeQCebiOhlPbjrHqqow9nCyAYzLtKqqihUct8mp9YoPTHl1jhLA0r8sgaHyvNK1l+0IM5UKGkKn2HiOM15Ap4O4E4qmnqnpMSClTG6cMcqYXtU1VV0xm83yo2ma/KjrmrquqCoT6S6ieRs1cDq8AT77mScBWN9+MSA55Ux3UZPOUicfcglN8jZ7oLPCmVNfjZU1/uPjS3zQQDlB0bvbhT6KEPuh517IKH19C0jwrHae4r0f/UEW64YTp2s+ce0XePTqL3Lgb2GMaLpWCJnlIHhNUXSdo2972lXLatmy3D+ALvDAydfRt4a/8/d/kKZZ01S6CeiS76qMY4Z17HyW2+IdYfKVCzTXXBbPJeEo2QlCGOKHq9WK5XKZwcOUoNA0NbPZvFgXNU3T5M9ltvQCYDITM1XiNaXO0emRdEaeu5QiGn2/ytjCf02g1jhmWl5/yoUtUw91QzIjpeMJ9GjmlHsOC6wcx9aUOYSddv2MXQiSqCNtIl8i37A0kt8RX4xLeexAh9QJKwE1kr5Ss1ne/s53A7A4cV+c+EEbp5Ga3oQokETGdhP0+F4M1cZptm6+mk/c+C0+sdPzyKl5zJSJC0jiBQoDElu6mOX5lX/fbkQzPrEmpGEA4z23bnycVX+T7/nu7+H3ffEr+c23vosf+pF/xQee/tc8cvY/49LWPfReZ9kiaK+KYQPy0SdyXqH4moq7tl/Hhz/6G3zwY5/j3ovr0YIYC0jSXOXuX2qfEAL4gcu29CfTe9I9Ts1ySvO2fD0J1BThnKKiRjTEMEL7w8Col855mtzgnaOxNbLqaG/eoj9YKdGXqONk6kq5f2qDVEbpUWWoLApBkXpvLT4LccjXBGM2ujLSkO9n3iAGwe5CwNdGXZXwBc7oGSNQqp1yTqDvCMGNdtFyB7ud9tE2Bn7yXFrpJWCUUsBCNjGTmVqeUakls7nIYG6LsXgb6KuKjROv5ebyvbzp8QMePjXHlNZq9Ie+kCPt2Ku25+ZBT2W3qOo1guu4fP0diAivf/3LCaHn97/xFXzZ61/Bf/3df4UPPvFv+ezOPazVJ7n75GtobMPW7ASVtaz6jlk9x7uOttfduF9pBsoDJ17D9eXj/N1/+I/5v7/3f8GYVZ63pB2T4DVNc8h8TQvRhEG7lp8pj5MepQnYtu3IF10ul/kYIpbZbDbSeoO/OVSylJtAXde5m1XuO0Jsd+481gt17zA39rn+6Cc5uHZThRLog6OZz5kv5phTm5iTG1R1ja2qGCceEk+CVVoYQW+YLdrmTQGjAfSZcsYO1oUPgWo+w1jDqv0C576WpVsK1gw7lTJQpxMZgs6lQE7hbx1DQW4+ckbSZMi0ETD1nF95869QV1sszj7AFAdN4pt0cCAwFDFErWYNBIuvwCy2Obv5dTx762d46xMH/L57NmMO5eSEfhcjmbQy3jcAaJqK7UXFld0dutUthEDvDrj/vnsh9PTOY/EsasMP/6O/zY/+25/n3//cL/HEzuM8sfM+ADZm56nMjM7tU9t17jzxGk6v3cFavUZYBnznqU3Dlz/wp/nVj/8Tvusv/+/8o+/9y4ishiKCYmGV5mDSaMUNUZ88Cl0pSGVcEIawSQkILZfL/N7pd5YCno/FEJcs0c+qsgT8aE2lY/gAZtVx64kruKeu4Z++TnPQ5siAJcBux0p2Cc9cUxCxqaG29I3FzBtMZTFba8jmQoV3PiNUyhgYAtGtMnl9
1km0tCxUQW7iU9KLI2G9jlivEw4PV7jrrrvwlgfuxxe+8IQeSRbA0t+DdXoOjAjvHX7X2x7H448/js3mAEfXLkvcrjk/Nzv5ZB5mD+y4L3nC/BGQ8hEYCQ888KAijwQfOiGYZgDOC1qdE1K0NDNL7lYCK5bMk5gyusToe2niajw2AAp6aIvM/pY1NGULty34rVoqlYawMUobBtY2eQAhJ0JODHaM4Hpshg186HA0bBSkchg2Dk65aX12WA8Oy0WHjoNszLlD5iCYBhtgZcLjRdATkFEJknOyNgdQP9UeF02+ti9LMW1DQib8reC2Qmvzddpxwxk9JpDmC7I5lZPVsvXbrW85Hr0xM5mk8gQeTBmXwxk81T+EPm2wO34et3UJQIBzIxAcyEwRrtjVHNp2Dcoq8mu7J9UbzBJmiWPGsIngnPHCC0/jm77pG/H1734XNpsBTzzxBMZhxOXLz+OWC+dw+x2342u+9mtw6eJLuHTxOTjr6EyE9Vpig7vu7ail0zXrZzqun6h+7N3MGPgpEAHv/Lp3YRhXOLu3EDM09CCgJLb75LWbGKQwOaZStpaVlIs4IW8GBU9q7LENe1RCbJSsmFYA1+s1cmbEsYYBYkwIQYL+h0dH4rPHmpdqIQ7nGANJuMFHh6NVRs7S9oIBJB4AR8jkMKYAnwIoAh0IGT3A1p6dAY4ABkA7LZv2FA2akZyUfjkTLqUckKQhKi8i2TBks4Yeb0q72YJK7bprY6Kn1ZLA60yzM4H8sg8iZPjiu13yS/yGvwsPrC7hvL+CzkWArf2dols8BZSnQlnjkySgo0xmEsQvRdkOiQDOrjRryTHimWeewjBGxMR45LHH4XzAHbffildeeRHjuMEXv/hZBAI8tH8lCCCPzVoQV+G1nVxZc43ys4UPTjuYI47Sb2KxWGB3/wzGow0WCzlP6Bo+WqBk1ci01sXilJO3nafKEO5Ku4E2FilM5UlKqFwtzdpsNliv1yK4qS2D41KFEccRDGstEIuVYkkAGQlDjPAckLQxbRcccnbI6I1lF2MK4AEYOWGHAU8BDNtElqAcxZ/XvELSLm9zZNbu08Y8waClMqnjeFjH7qHNDjLLYO6/vta4AfS1PrStLlm10iYmm/yJ2x/nh6sHsLcxIGSRuhODgJzAaQ+HuA/PrZ7Co91lnF1qX0cyHlHzK6utX5Cxct0S1HZgbbyqvUvgIekHqS6QnMVHG0fhi2H53OroED4EfPGL1+BYcksDkQiltbUjB/Ye2bJeoJQehU+1nYPWz7Fr3e6DkH2QAMIAxoDb77gLMQ7oul4XhZif4DTxb9oczAlqCNUUOpfMEmo4OlphsegnFB62+IpQMpX44Gq10oY+HcB1QQKCYJZi38zyfLOsDe8so8eE1WpdSdBL9WEZCuwwRGA3ETGKFrP08uA8nJKaEXVgHss8z12C4ss2Bd0n+Y1tLHzugbRC2WrLNgY7D5Fcb7w+OpDGZD12Gp79MHHw5h9obGIAxpnKjNLh3mx5MOGoIxAHfHa4HbcfbfB13QF6t9b+hSREvs0FHKevqFFMAhC87ohe2pVnJzur94T9vV0cHh4CnEAsYQHZdTPIRfRee3mQCkHw6LwDQWr24DzYBayi5DtZ/iazPzaZrc8ymZ2ZYModlBWGEc8j8Qbv/oZvwOboGnb3l9K8PHh4kpCBWA8Sl5QsF1ng3nd6TCHqYnYglq+2OcTIcD6BScrZQEJBmaNwurITEGUzDGDOGOIoBFo5g0jihQTR0nFMEGI9jzRI2ECS9AHfLcUqYRG6mIEAB3YeCRlMrhBcW3u9zFp9QgLGZZeFrtIJwbMjggtONK33oFjLu6woxNbHPK3u2KbVrGHbZLYBcnN01oTdLIZt1SXbxutuW3Ba6/W1dwjRT5z5GMUI60olhfWtquTp7k58eriGR9Zr3LyonCyiMY9X0rdAziRnVideOYTL3YXgcPOFcxACYODo6Eh8JV/7Y/SLXgnA5FPCwJbVF8nI
IFy49Q78k596Hzq6CR3dJZtNlpbpkxkgQ2C3C2aZlBLHBZhHHPJvYGdnBw8++ADGzRH6Pui1eAWZcjGlBYwJsGRzm3vM0gYtPzUleW7rzQYh1ea1BialJIyD5D3G2PRepNoxzapLPMvzG3OWTYCFDoQo6G4k5i5z1jzbjBTqZlXCHXq8nEVQU0q1h0pH8Mzo5E3ybJ2TXiuEibZqtWL78/w97fy32lUE3xgG63tM67ZjLuynGV9aK7w3aJRYVsMunhto1rJaraL/yuIMPofb8bHDi/jaQLig6KJkt9AkVgVg6w44MUcmu4z4TF3f4dZbb8XOzg6uXLmC1WoFgKUnpH6WMjd5pRk5RwEYmJHhEJlw5coV3OR/N0B9uR9mq0Kp2rvm41Yz/jozhsivYpNfxDe84xtxcHCAM3sL3ZjEJMxZ+G8qqrwty4RUGKYJ5UbJQUQYh6HxlwipMLk5YSYMlTSqdHoGTRYxSLSlxD3FtLSN1HzLGKOQOxMQx4jBDeWhzGN/khdrXLDyuF1yiCkjKXgDAhw5JQE/PoPtPLQabh7eaM9v3x93P6Z/a9fXXDmcZvz2C6UBk1AjslqUMkzz2I/cJIET4DjBc8ahO4OPHdyMu/aBW+gVIG9A2SNr+hUgzWUsDxOok9KaEWaqmVliPltwDn7Row83YXe5UKGcCrg6ipJ3OQyIcRAoHox+sYdf+vcfA8Fh4R4UMAVZeW/n1oP4uSaY9bqmD7RaEhnsDoDMePChB9D1QNdrj0WtLyRQKWsjIqWu3M6Sxwp+1aRtubZxHJG4kkmFIKY3UW1j0ApXBTdSceJtfu2epBTLK3HDFLkcx1Gn1HpKdhOQhK2yBdDUPYZTvzbFjOQI0UcEkjI/73wjOFXA2rnfpgnnc9T60lXYJm+pVtcMKLKEhWl+7/XHb4tQlpuc3IgF/eXlQBMyLoIs+skEEcDkMFLAJu/j2f5hfOJqxAN7V7DfOcRhhLhIclsWQLYOwi0rm/29NeUIrnRHtg3Du4Au7GFnuZjGVHU/QWYM44hx2GCMgyZ7J+yeuQVf+MIX0dEtCP4mpKwZLZzhWBbL1MWYPrC5GVXewlI3ueJPI4SARx59CEerKwjBofKyis+WU0VRucmumVgKqgHn2kA4hSScIMkDkqFjWr1lSbB5Nf8p5VHzZsvWWrSvs45fejpLTBjHUbJ1tDWC9CFB0d52HKkWEYAvhA7eeQg3kPaGIQ+rfhG3SKfuNeSh3Rxa4StTT9UnNXTZPrdNsNtjtubxacaXVSivH5thKY8C1J/kAvK0WkHeWV0qhkOkJQ5Dh2GP8fHVBdx5sMK7zx/C01XklBD0wyaAlq8IYBIQb3c9AqRBqfwFcyEJfsr/0j60ftEh5yWkq5VUTAzscO3gGvb9vei6HXAUBnbzHevzocaXrCDCiRkgBGRcwzo+g3e9++sw5gFd5+H9lFWdwZOAP5zDoKEOC/brDyD1z0QboMxdR5027PFq1uqRi//YXBZVjlPZGJprYWjNpPj0YGGPM01pLQFSzq1z33ze8nzdBDTpFBUOwcM7oTLp+yUoJ3BK2MRBmhwpENV2sJ4LSCs8bTjEEGsrrC6cPfl4QfT0eqcJ+5YTe5pxA0I5RQExERqar+Hy9/kOPH1Lw3PDzZLkGq0zZ9/OTOwQsoMHMPoeLy3vxIevHuLBRcQt/Q44SYFvZkkwnleVlyM1mkLRENWSdr6TdjXbIlpE13wn8S37xRK/8KsfBQCc7d4K7xyCN1+v7YUiPqTdX2WH4+NWQjP3K3wOjIxv/n3fDOKE5bIXoizIOUilXNBI9dm5mlG2OGXhkYYgZBOQFDi9ruY+LUzUFg2DIaRXzSJkFr9biClEQ7rmPTElcMraCUxN/zEiKGk0OYlNbjOznaubSfAenQ+aLdOhCwTvxWUZ15LmZ12ykARc886euYJQhImJb+dpe1ZanNZ7L5Q0KpTe+ZI80YZUbB7sfW0mzxuuKRtQWBeM7ThOHkyDElqf
D4Z01yo1FZOJFuSvAT0nI0+2YetcDIAYTBkJGSCHwZ3Bs/5u/PKlq/jP7o5AYORxAziHrg/ShhwkTG7aBwKKQGYNuWSw/l4EwsIIstdQ0ZvVN+MyJ9zmGxIjJcLC9Xj6mWf113uaKC3a1riNrPaSG5XCqKDENoBA6DgHHKXP4tZbb8X5s0swRvSO4V2Gh69CDWhdpZljuuizdUjWLB2qZrSYfwKgxFg3RHIOBPm9CwGlRCozgvOIYyw/b9Yjlju7GDcRQorswexKmts6DiDfIaaEDCoJCOQDOEuIqY35tTm4LgThTbL3eE2IUCDJ6i/N+mEFBin0iDnCw6nQR3kOELCLHCv1UkboHKTfrCaQk2gM0rUhvEIRMQ7Y2dlR2kwGcpZqGuWQmrOir9cnl+3Nxw20V7fFYo+vLqXpaBZZ1qrDrdrSFhqVn7bvIxoyMY0GiEZSvgOGx8rt4vOrM/jc0SEe3d+ANdE5QmzFDADe6PkVmVXBSOZXkgiVD7rJMCq1P8yX8KAJVEuoGUR2TUDo9vD88y8g4Cxy6jEgqgazhVIkb7LDbtWM5t9ANsAxPYvIB/ju7/pebFbXcObMDoL6kgSgul/WmSo3n1eNhfacwqhnJiJDFl0FKwjJNFlxC6pWqcnttWuxWRtOM2mGYQRnaDIBwWudZC4WEU1s+laTt6+yZ5GuCVTcAF39zHStVUum1PWIPgCcAwWnz7cB8OoRiskOEtS3WAQ51826sCNMGfDaOOjxY588bjh5wA47kS0VVAaAhqB3sgAx15TX8zdnpz4msbo7MoHhsHJ7eHbxEH7ucsZ9O69gvwuyiyYRTLvEnKdQ9STzv/HznCGVjuHZS+bkxPHn+oXkw4p3wnuPo6MNXnrpJezQY8jJIfOoWqg+mMliOzY3ekkN4CB+9ogVfw7eO7zjHW/D6vAS9vb3BNBhSxaQbJha0VEfkplVY0zwYQ5OVFKXdoMQf3KA9z2IpxUPOVdz1kIpdn9dFxBjKowA3olPNsYkIYvm3CVs07gY87Iqe08BhmLUTUgyrcbkgRSxRiUKZ7SuT6V/tGEmZt10pz5i+6xan7mdozac1KYLlpXKlbfoDed9rcCL6H2bWLAmJqsfw1MnEOVNMHOvHEXNw6nmnH+mfG4ixLbViVRkv8Aa5/BMugOfv/oZvO12guM1KCzgxkoQlbSy3hbSMXN65ifWOKbqcyJonZB+hMqGZC8iwr/7lV8DACzpIa3WSI0ymPWbnN1rOxe2SEk1Q84bbPKLuO8t92G58AAvsbOzg3FDSCljaIiiTLNklhCHda5OOUkOcErSiTjVcAeztGEPwWMYRvWdCMF7eOfgAGk3Z3M4DoCCLSklOF/vK8VpeVjJM6VpOGoe15tXVMw3ruKbRskxTimBKQuhdBwRY5KSNU3kSBo6ERZB6PP3wpAYFPWnKqAtm8CJYBvqhtlem3MOoeuwGWMRRnvPabN5gBvyKc3xYGX21n71SWzuqhMbvwit0Nlx2u+Px+vqJ+uPXBb97H1KlkUsN3KZLuDfXbsHi8UzeOjMBl3fAVGMFlaonVwofDL1cKLl2p1+OlzxwyRPtCHUajYiLg++mYAGzJKRGz8Uk4daTZ8qkEUoc8aAV5H4CO94x9swxiN0XdAwCGEYpEzMQg8xjvAkCdjeexgTnFjqWvHA0kgWOSHHChB5CkqAzQBnBE/Su5McHIQTtutCMZFFKDMsbzgq+TLP/m7aon2S3vuSAN8Kb300jdBypffImZFJgJzsHNIwIo4Dcg7glNA5wqJnjCkDQwZzgg89Ajk41tCXoA0gqmGdefJ5q83rxrLd2bLnRKi4gW0wXxY6kJm4FANBFlyLlk5P3Gq409rUp78a+Z5I4XXX4TP9w7jlKOOuPqNLI7rQK6jA6EKP3GjKYp7Mrn3bdba+ncTr6mdqHE39CJ5+ptmnjvnkcx/Izt0Gq1mBIudl8d52263o+oBetdQ4RozDAMD8OimL8sEfO4/5joWsuTEX
2/pAuw77fVKWPmtDN44DCNVvsnUQQkCKEUyhnG+K+rL619O6zMl8obZCl/sbi99vceAsNWgwZDlHxhiTglkMCgSfGDGlsrFAfelsQAwgFoPz5frMPC49KlugybS+ugj27C1hHxC/GTNNyzovpx03wDwwXTxG+V4edvv3E8Afw1Mmf2mFeKYMafL32THV/jUzmOHAFHDobsZnDs7joc1FvP32gL1OSZ6U7j74mlhwUpbFMTi+Oo8qmMUmgMX8pGCXsVjs4tee+BAC7aEPtyFnB+sa1wrdtvPYaAGD2lMz41r8GMgR3vXOtyOEhE5pFIdhQNLAvAE73gmBdee6Y+dsQZS58IlpWBnfZO0bMAOQq13JiFzJSaXm+A2+osDScTOwrdFsr882hjbGut0Pb8x/Bsib7+3AnCQ317MCTk7YCkpfS+XtidI5uhR7H8vamW6Sdj3WvBZAOdYwDFgul+quVF+zfe6n1ZSnNnSVJ1sqt5GQyysjUSp/F9acXF+WpkOAhRwUdkGhMbRXQTDzbNK5ddsUVrGcWElUT84hUYDPPQ7dnfiVa+fw9NEOUmQEMECdhEeoZmbYuK7+Fuu8JJ/XTpGzrBaSsENGh5deehlAANEC1iqd9OJZG3+fBOy0i9DOj5xBuIJNegHf+O53o/OQPg8R2BxtwCMQEGTakhShhSbIbueyjSiOlUhKTLKMzEmflTzDmAap+s9J5heiYZz3AqIRKYNA1XLtedoGve3fzQxFTqCcEIeNpvVlUEMtWdeIXl+uwI6NVrvDObAjKYKGsNEnJpDvQD4ATvreGNt7jBFDjHomJyE4mgJOtnHkRjtbuCUzsB5GbIYRQ4xIDIwxIbOgynMf8kbM11MLZYa1R8/SP0RfGbnsXMcBC/MbG8R7Yo5PhVRomBOmLXvsOPZqd2aW5kDEkveoQrHpLuDl7g48+SphnXvRICeI3msWn27Bn07wKMCZcfHVK8g5Y+nuB8HDKjQsaK3O1omnK0LZnIDAOMwfR+YRf+T7/1NQysij9K/YrDd6D6RF3HxMCCfaIWfEJGReLfJsz6H9LMBTuN+RAlak37cXOdUEufm+jVObOQ/9PrMgwXUzNy1smrFpDdBo+cn1N8+prA7yIApwThLg4RzGVO/fkuCZWbRbu8HOBKqAVw1/LENdgIIjoPxuboW0xznNuCHztTV76u//w49sJqUjjOhw2e3iYO9h/OblV/HQ+ip2zzr0KSIhbDXdAAOlZMwnk/j45J7kd/76R34DYKCn+6UGkESntokB20arJefmZhc8nO7SXdeVFvAt16oc3JqWKllxSgVUqvFEt/U62ria/eydLzmzmPiGAciWa2ro4jR8YF/bGJ35syb80haimUvdtKag17Tjld2HGE/VdSrRAX2QgqZKSh8zAOcEFOq7kukzDiN40Vfh0g+bWZ5T1qT4On8t0XN7nbamWrKt+bM87Ti9+apB4ZI0rg+q2WS3v5r3Tl62SLe+mtAKb3kfmmPoSTIIiQijI2wc4dCfwTPuZnzmkkMcAhzUsedKNzgBP8qlVoGtGk7MzsxTRrO5xgxdj5/92Z8HENDRbaKdnWQBkTEkFJOhDjtPy/pu1wIkwF3BteFJvOc934q93QqgCB9OCyiIdqs7OsoCnCwiOk57MYf3TRjafFGgqTds3jcHq8THr/E7cNU+OQsPKkOYz1OSjBgHLUx20+uYH7v1KesfgDYJwmsig9Pi6JQdmD1S9sgcwNyDuUfKHjF7DJExpogxJSSWTK+Y5OuYspimKWvrwIRhjGVZGpF1SrloTelmbcRdMm9GdH2acWqhJPOn1KAmE1D9/bYXAdU9OEFg6aRXI4ykxmd5zY8JlD6JTAmgFRJFXFzcjk9fPosXXhzAWbhG1+s1NptNRQ3VFMnNA293QpQdVBa9dHJuL36aN2nz7r1XP1O6TjkLVJtfq+afLXxLeC4gAxicM7xzWOenEPMhur6Dc7VHxnJnt1Tbxxix3N3H7t4eljs7
yjIA5MSFY7Vcpxn/zT1vMwmnGxNNOh0TpoJt72fmEoss2pJblNfmXUxU7wicojYGJtQ26ZheY3Nttnm1QBADhQWiFBYQkJmU68cD6JBzB+aFvLBAzB6JPWJijDEjZi6+odVnpsyIKWOISb6O4kNmlpWZGIhZCiyEJcGV33FDyLWlBcrWcQNpdqdHj46N633sxrX79Q9BTf5jt8Qr3e34yCsXsXN+xCKkYu4VcwrmuFczy8wu50SzzZHFShxgTqKZ91wWvAidwUKNhnEi3HYsARRoElTXA4II6BaMV4++CCLCz/z0+/HJj38S3/s93w0i4Dc/9Rns7e7i7rvvwnq9xk+998eRsoRkvvZrvgpf98534NxepxvQ1ExsQaqanF7Lr4qg2u9zBjWrhWgar7X3tddf56yat4AWV2dz9Vz5XUpZABuuNZRFw+uFtwUG7TWTphnCnltBdsWKyhqjZhWcDIIPC+TUgbXcLEnuOqTdKYGcx3ozlASE9nrajczM2rl8xBhBISBq89jWPbjeeN2t8Jo/nPyh2cM//efboEN5YzkmgOpD0FzmCaKnM3JY4nJ/K566eg73X34O953XMAhPy5zkEyS4aFlEDcAxu8zJ76ELLzPGNODBBx/Ahz70UWRew9NOAZjMarW+F6aByVBcrlatLELJwz0aP47D+Ft4+Ae/BTc/dic+8/d/Ef/bj/1tEWhtRZZjAjnC2cfvwNEXX0HOGT//y7+Cn/uFX8Kf/6EfxM1n+7pRqQUyr+7YFrOU1gJ+9h5fJ232dCY+pGNYJywi1qRvmcQWYW6FLHMCLAap89C2Rydf+1Ja3Wh5TkSAJsU7cshIoqHYIWehQsmZRZOp7eX8AoCHowBwr6ZtVA3oxG9W+tGUAFAnHENZ6j1TikoPI0Irexih9jbJSFmKMiIDQ3yDhdJW5RYw8sRBzULbNvJ1eyucYFnL07/eWcsre49Lu3v4GD2O3YOIW/g5LM9mOCxR0V77lNY1aq8LyrK7U643UXZGzpLxQipc2qVq2GzwTb/76/HEhz6M1fg09vqb0M4Yz+dQ2zATm3kM8dOdVCVkfgUvHf0ibv7dD+Gr/vg3Y2c/4MHfdw8+/s8+hrQBzj9yJ8ajDa4+8wp853Hv730rhvWAmCIuf/5F/Ob/8j78xD/9Kfw3f+EHcXD1lXJeSbKvNB5z07VWZwDIUTYsbdRLhNKrElmC9Z0XAfadVW0kBCebnPnkjhLYJaUNSsVPNwOBkZFThHT/AtI4wgXRYN73gLIAFyEnJ+mPTrqwOWN0kDuAeq1I2cF5Ebjlzo5sLpGQ0IPgJL2QCL7bAROBKSFmAlNA6Dx8JxxN5DskTnDssN40dZIswhkjMIzm4lTNOYwbbDZSGbNaDddZt3XcUJHztpjUacZJAfqTRKtqyRPeMRFMrm/j+jsiQnILJNrBZfT47NWLePDoZbztHLBHHdbjoZqN5rBXlPS1LOopACFCKZUV7SajmT5tXVRz3+Yni+msZq9qUwIDtMLFzc9i5+EL+Kq/8AcRegemjAv3ncFX/fFvxOYVu14gfz0XjckEeA647e3345U/+HY8+08+hIuXrmKnqz7e9YL5bZs774NcpxPLIucMFzqMY7SbEV+u8MKaOamVKUWAgJxHeO8QOZffyfts821NQs2lDVqOBaFYYc5aooXiR+qElXuTI2lsE05riRwyvHAIk0NGRIIv/Tij+ocxO2kslaTcLGuGlAumYUfE7BHZgaIAVomluxk5J1oxyaZkFSUAgcljGDNiOp3MfEX1p3yjBhEpJaGDd8AY9vD84l48wRfw0riE9wnsR9lNT7G5bNuAjiW0KwI3f09OSenxp3C6BNiTFBnqvg7OcCRyfGX8AEb/Kh74L78FN912QU/BQMiAB7LzYKW+8E5eloJm+5QF98cYJxvppFBZZqyYnra4rezLKmtKaqJdSQOKkc25+cglCI+JKRqCL7HUtjKk9SFtStvPFU1u23UrgBXirJs/l/8m
w9L1CnrLjJgzYhKByYmQc4cUA3LuAF4ipx7gJTgvwHmhv+sQIyFGAmcHcj2ADqAOMRHGkTGOwDgAKZFkdWWHrt+9ziqr4yuCze6NHmoVyo6MBDjCerGPp/hefPKFK7j33g69D1gN0lvDe3fcLG8Ww0mWgT1cZhHucUx49JEH4b3HKn8Ki/AQcvaq86mxus0/phnIk5DwMl5ZfxAr9wze8kPvwR1f9aAuNg+mNVLWjBV2UrRLgDnWMWaly9zuyZfypKwhhHIfGZY3a+VJRp5sWVki9KKt5PsoPUp0AwjqJ3rnkTiJb9doNftKjjDGEX2/IyVdGnLJyYqxUT/DwgZBID1XrUkV3IgLiivtJiACSih+IyB8QCmL5bL0wnObmTCOA7rgRaOmLHFNMKBmatSNdjOI7xhZNvrQL8F5I6Ci8xU19x3CQoC7qC3VUxK3Bi6gX85Z8reP31FCOSEB0J8zAkCMte/xkn8cT66u4vOXn8NbzkvukPlyQRkK5uli87S8bVkahqQuFjv4N+//BdGIeBGX4/uwQ29FwJ2A6yAcfcZcwAAyHHlkHGCdnsW1+HHEfAXoCG/54W/HA+95J3xouA8oa+5pVhPXACLx+WJMMH+63UKMSMoEwMCbFoGeJ2RborggpQl9v0DajACkFCylBEfC7pCZEXwQ8IQEdWbv4ZrjmWbsQodh3Bybx6ymqQF43nvpkelCMe05J3EwNB5LzIXlApzh4JTrSQrbM0Org2pYI2UR2MzA0XrEcsEILPeAlECI6H2H9TCgZ8nSigbk5YzgHFzotFIqlXBXCAHOe+z0IqSH+VBK4zgiBEF6id5ojh6efqXmV5O/23gDQh2vZ5TLmMCrAsp4t4+LOw/jVw5XWJwdcMEfTsykdqGW4zUaszW5tp6blvjpf/Nv0d+yj3BmB0dPPY9Nfg6EDpQcFu5+dHSTXueAw/Rp9YdGMBLIE8584wO45w+/G3c8/iC6oCVUGqDu+x5g1u5ZHsRKPs3WSdnSxeTpTLDrGZjTVso4P00Ot0SGWtnRZuWYH1hLk0g1WhYHWTSnl0LzNo8056zMgii/b8METpsNtRk8LdpqFHLbEh3aa7fc4m0x2EmqnJqvnCS1k8YE8AgQYxgiOBOcC0g5I/OIMUZ02q4hUFBt7JDZIbMXpJYIvuvBWCOzQ0xAv9gRn5ZP5y3eUOlW5Qm23Ullr/l9qaHQHaupRX4dY3u45EZGSXJwDq47xKs7t+Gzq8dw78VLeNct2ixVs5VKuIXmRcjVx5kksgv4CuGiWeD/+efvxxhHvO2/+8PYv+c2XH3+El7+xU/iygeeRLqywnrzOazM73QEvys7596jd+P8u96CW979CPZuPod+sVA2cyk4dl4gftc5gFlCIE19K6vZ2Qbzx80Gl371SSyXC9xx2zlcu3yx9plMGUSu2YhqG7e+X2AYNlLxoOVLBv44p8CJARjN/JBOhglq1FisMbh1XZDCaXKlCa2jpvSLWUmsBHl2TrKRnBOzfFJxAQWJlFcHGk4h16EtjMiZFM2dMhHO46BiPQCRDQFWAC7LZ1sADBCT1hnXrXeaoCA20MgM5yWRIPge5BihWwovEJ9u7Z5aKN10263CyVPRKRFGhsDeTDjltUzHiZ95LQnXxZrrhiFVHgSmHaz6iFV3AR+49hbcd/BR3HuWwDAWu+1JxHMwogqmRDgzE+B38Mu/8ms4+1X34Pxb7gCcx/n778BN992O8fu+EXGMOPitl7B+8QocEbozO7jtHQ8gBI9uuUBYdNodWpU8mWnkpJjXjeh3MhADeLMDyk7T1VQbZDVpdd6RE4ZLB+idtIQHi4kr9+PgqaFLJDHTiJwGueOkzrHve+SUJQBPpMIiBMjVD1TO16QTr/Qii0WHYRiLf8rI8MTgNMJ7qZWUBT6Cpe5YNKOGWYQ0LQIk5F2kvnDcjHpdEaHrsEZCpoQxD/DUorbqc2oFTAtSkZPydc5G8kxg7ZOZEkshThQy77azm/ce
iVlMYZLFlpiU9Y4QMwMUhAyMIpiE+Y63NH7cNm6wyLmaRdRQndPkfRJsrclpr28Un2k+XuOA1PwvWx6mzqaTGNSrdB8+ffAibjn/AnYpIScPIEyEsJxypjGPaVIGPvmpJzEMA+79fY+JIOlxPBFod4kuZyx/11tAbwP6vsNi0aPrgixWgjALjIOw7mnIwL5mTuh2HHyfsbmWkTYZHWpmEMASeNfJIUhHKmTG+fPnAIKk3cE6bglC2vKQhrArrOQ+YLlcwnuPvu+LdiVyYIX+YxwVaZ0mIJSi6uYZSUrciJq9YzWJYkmN44gAoQMFJG9U4qih0XIRzK6EnMRETRKSiCM68uVn0Zz2khhpe1FTk1Zcg4ikYSJhp5BzpslG3GraOTIsc9Ai0hZGasrwLHvkFON18L7y7Hu15VE36SIT5qDP1emXe3CNBdoQNBbFpD7cvYCPHj6MO9OAd+Aa2nKxk2Kw9QE0DYMA9Msd/NIv/yr8bo9zb71bNioNEdST2zUYe0GtZIcii8b+V+BjiGnMnLC3wyCXcHi1pgWSCmPOCdFih2pSP/vzn0Q+GvFHf/D7sVmvhLSpgFZewkUNCVTrQ4bOT1rEWZhjjJuGeIoLuhhCJ41hu77MlZi6TW5qY4WIKTjOFrUASlKMbM9CNJt0ZTZQLutil+dpoRuAMY5D8/npOrUNzICrbB3JtspJE26ZDYvhTt2Y45U9bd6wCfJrlgnqeF1CqSHrssilzE6kUZpOVTo/05onjhLw/dKHAuCqrTVpXh1fIkZQXC4xIXYJL3Rn8O+f3sOddwLn/TWEPGKgAMe89ZqPZb6w+BTke3z0ox/D3mN3YP+O8wUFm7rTddfOjOYhUVlcVjNIxAriiF8F5bwZosf6kIGsnK7qA6U8Ki8OYzzc4Llf+CSe/Ykn8IM/8Cdw87k95BwRCLAMuS70MOLnvhdBsm5iXdchdAHr9Vq0q5q1TkMTOSnDgc6z+J2dms4M7yTEtFwusdqsS7K9HRsAfPBw4wjRmgCQEYLTSiR5cY76VRHXnDSGK0kJcYwgZDiSYnlCVla/BKfxaWjmFUjjqzyCOQCqgR2R9pSd2FeT570t26kV1hasmofRWmG1kq7TjBvmfQUsDuZkkojKwtfngmSgIItDfhq1rSb99JynvbhyEIa1aK0CqhgZA51z8oCy7OKr5RK/tbobH7n6Er7lplexEw+wDmelfpJO1phyKjXjifChX/8EmIHb/8Db1fFv9ukJYCR3lGLEZmPoYs3tVGUgoRxmUMpIaYM8bLC6HLA+DNgcJlAeinbIKWKMEZujFfJ6xKd+9N/i2keexX/1p/4EHrj3DgCM0PdKeyHX23V9YZsrccJcOXRyYngXAKbSx5IVYV2vJZwhC1mE0EEaIhFYO59xqX4BROBjjFgsFmqmEpzXWJ4j8Qu9x5glE4cgwugcwJwU9InwLiCnKBQnYBVKCAFyltQ9UqH2qqESp7JJcx5B6GGEYV4CnCWhYdsztrkxjp05ANh2uZ4wIczGaQUSuBGgx9XdJBAhcG37wJQa/allW54EIi7I5m9PjMQCAiagTk+tPOnyMJVseaQFjvbvxSc3CY+vfgvnQkLgAQxrW4f6tZik050wM+NIO3K5zqu5jmK6msalWQwppYTNJhdNUhnn1XRlRspCv88p4erzRrXCgH7NKeLg4hWsXrqCz/yN9yMdrHHL2Qv4U3/+h3DHrecAzuj7Xky6NMA5lBikoyo4bdKAVeSbP2n3HGNlK7BNZFJy5qppHDXFrDWJLVHAzHoJsWZ0vWhl5wMoyYxJhY0IuGimXNoarNcrhOARgi9mqm1QJjApRQWsMqYPsrH2dI7tfsZx3JLpNC1utjFHnU8CB+dF46clzzq1UHprYU5AB2nOaZeSQEhUQQYGEFNCUgTONMZknGAytGO7ptryuRMVmpMaS1hAWXZShoQBmHoc9Ht4nhI+dPAU7r6ZQbyRmkw4DQGZdOl5Zw8hpVRaDwgSqtem
pr1k4ojqnDw7mppFE5+k1HNmcNTQQOb6GSQMBys89/99HE//gw+AU8b+/h5uvv0u/KW/8MNYrw7AOcni7WQBpyFLIkLj15ngbJvndnESkQpVN/EPSzOfnNH3S4wpoev6slvPW+TVBIVcPt827DkpVmrmb/Dh2LW2VSL2t5SSILN6vpRSs15sHrM8F0g2kwiqsdDp+5pzHAv/oGrNVju2PnJLDWIYwmnGqYWyY2HAFqJbDw+5NyYgExe0lFn8ncSExFrOskVqZOG+Hu3ZIEnHjkn1b6ax9P9sf6UAy4DhFJEd40p3Hh/1D+POw0/g6/YYoA0cd82naXZWS8fKEzBJvs06Tw4gFezGR20VJjffld8wmia0yowGBsOBY8LmaIUXf+nTeP69H0F+dY0H7r8fjz36CL7jPb8PDhnD+kCS7AkgJBB7IGEikJIWmCd1k1Yobe+xNLthGEorQaLKMmd9NOz9EiZQnwval9FXGslWmC3kYsc27W0a2nJU7TNd12Gz2QAhY7nTQ/swSRGAUxTVC/0JaYob5xHgCEfSt4S8JKAjRwQnVStgjVEGQ1ulqMAIvCRmmhVlThq3nHYCt2tsBbedF5vfNs75WuOG4pREgDeYv0CtECgZko4Uc8bIltdvqWxf3kGzr8fBo/ozqSFrTWKAhMwBh/l+PLV6AQ8uVji3k4GYUTcAmuy0zBJYll4ZKFaAmE4Zzou9YGh8Qae5FUzbpQltaAMMDWCLg766doTh2gqXPvgkjp66iCsfeRrxYI277rgT/8WP/ABuu+WsAkXrJh7MqBUrScqqmswcR1N01b5vuzGbf5RzLq0ETcCMPNlGbWYjWsp38p79nTPFJ6taSIqZF4vFxDowIMi+bjabYwgm0GZVYaoJ9ViiMVumPpRnI1+zhi+spUOevKzBkfzdl2ek2wCETb47dtx2mGlvfxvHEYvF4o1HX0s5LhnRh6jJDEYEY+QsAplzoSPkiV54gwZP9SQd++a4zX/SMB8ITFi5C3ia34LffPkyvubOiH3avp0wC6FSecjO4evf9Q78g3/4k3jlV57Ebd/wCMwvKgXMdn2sN1AEkgUJzFI/eO25V5DHiOff/3GkdQSnjMtPPIW0GiQPNATcesst+GM/8Efw4AN34+jwAI6shweqr6agjCyiacJD1pL/OfGT9Yhs817b8I8IKJe/tcIQQlCCYoeUIjxkYRoTvTXuHccRfb+YmKzt8zFhrARdfuL3mkC2rQXsvWbitsCV1X2aALeF63ZN5ivb9cj1xibTx6G1Jqq70Qr69G/HQz351FoSuBGhJC/kVEwS74KYgIlFKIecEHPW8lKt3P4yCCQAWP7/5E+q1OYCeT0EteyunpDcgMvxAj579TGcv/okvvrMC0jESBb4Q/UboQ95HCPgPLou4s/90J/Bj/3dv4fP/M3345ZvewwXHr0bXb+Acx6+C8gxas6qXOfBy5exevkKVs++iku/+iTyasDB51+eTJlzDrfffhvuevxO/P73fCvuuvMO7O12GIcVwAl7Oz1iioI8QjaYrLmnlj5HWtIVY0K/WCCloeSXtg1rSnpcSRaYpsDJzi+C3Pd90ar2ucPDIywWS7ATTZVzxuHhYXmvgT07OzsTv9aObVq0fS62oG0uWt/UvpYyuFmYYhrgr+DL3KxsAZgWZZ2DevZ9u0FYkXx7rDaxxITVNojrrcV2nFooszppOWckIg15qHZUTSm9Fxu/7sugKeXoc5FXfTQ71XwS2gdSH7wI2ugTruzs4PP0MPaHNR4ZLyL0GZm038bEhLXjA5QZ4zDg0YfuxA/94A/gb/7o38alX/oc+gt7IE9Y3HETbv2mR3Dtsy/gym88W845XtsgHW3Ksfb39/Hur38X9nZ38d3f9R2SjO4dzuxK3mRMIxwRgsvodxaCOHJGjB7jsEFMWarw1bzMzCDvZYPUPNeYMthIrVQLmKaIMZYFaT6daU5j/7ZhWnKxWKgpKxtBTgmd9xhVYA9XR4Wb
piU3JhLB6fse6/UaIQRsNptJuKE1V4EqwNuEasIphGmyvWwsVO6v3Uwqw0JNnDdE2YSo1dhybGGLF1qU8dg1VYGtGnQcx0na4muNUwtlMntYtmEFdARhjSjFRLA3fTk9yTda5AmEwAQODkdL4Fk+j48f3YyHFgfYpSsAwskosXbkynHEY4/ei7/+1/9n/NzP/SJ+4Rd+GddevYrDS0e49hvPwDmPSh5FeOc7vhpveeB+PP7Yw3j44fulKn9ca0zVfFmJrYEEjCACAkleKGnAJwRCH4Bx1KSIqGhgA5TYQjHtEXzAZlyXEEjbos12+1YLmrloJtvcJaimYBP31HOagK/X60Zwpr02WnPWFq6Zlq1mbDWQAUUhBKxWq63hhiIEzeW2vjRQQx7z+5rTfdrciL9+vE50/tXmu9Wgb3jyQLSiWIa0HVNMMBZztQ4z9k5zCZOJOyZhtPXbLQd5TeE8eUIkEL4/CCyeKONauBmfPzyHm46u4YFd4Q6tm5yGPdoQhvrXcdgA3uPbvu0b8N3f9e1wLDmen/zNz+ChB+7H/s5SKSEJOQ8Y0yioaF4jZ8ARw5eyGpsXhvPUXK3cL2nM0TGDqYN3jMQe67J71+mcp4ClnCYCC0x7ZdiiN4GyTJy+77HZDGXBtgCRdwFmzuWcAa+sdEp0PEcqc0rwKkimiVr/rjVdWwFqj9Fqpfk9tD6jd93kGK3GMm0438DmAFP7fkvVk81k+8qbI7Stef5a49RCObrpAVmFU9DD+bsbn++0YY/ZMeg68jgPJNzI2LZzERw2XZAzccYhzuLpnftx7/oq3naWMSJjVALn6cgV0SW1D8wsXB3AqbH91ofvAjBiGEYAFbGd0FA6lF6JBgjZvLrJHFZiZLl+IX6Ck0qYnWWvmTDAqOl5bZlSSgmsfiZQY4nL5RJHR0fY2dnBMAxFW5pQWnC9FCt3neS2dgFHmzV2lh3YUSnZSpsNvCOkKKZbjiOWyx2sj46w3NnFZhyEvKsRFhEGQUa9F9Co70UjinZLajYKGGMpekBtALy1YABVOFoAattoO2iZ6dquHXsGOUedC2u3QJNnZue0OW4tgtcaN9RLxF7CIm37+UxkBAdqXrT1NRdWmr0MZa0G3/Szx7s7X3+0Tnu7Y5FuLCAPhvDeJMdYu9twKT2K34oLKcG5rj8gmskRJJ2tMANkgBKmlQtNF5YUhexK45H1mqB+FzQ1jsv3gPGmVtNofm9930kOazhe9WKC2RZsm0Asl8sicC21o5zTFUEtIZHms8M4TICitgax+lOiZXxj1rYopnPVP3RuirbOkxmMnMuC/e392fMmNRfmfuh8TbTacxJeKX5kNVdtLnOOyFkdN60BtXzwCdpBNRGitQSuN35HEmddb2yz7Q0RlXiiB6jHhs7jOfcIPnztNqxj7Yw8AZWuq6qPV7+3LxPCye+aC7LrbIGIuUlkC9Z+Z3w1snEBi8WiBOdb9JAb/pwQQokZWpJAK7CtQA/DUP1EoJy/PWYJl+SMftEjdJ2GJajEIVuzt1gDTpLeLRNnDoy02q/1j9tUQNOEpjFTSuJ/U93E5mEKa2HRZt+0Zn3LXL9tDRk4NH9P+14AN2S+nr6XyGzxbF1MX8Fju4Y8YXCH0Tm80nl8ke/Dk1cZRB5+ZFBytVUDlVQEFdKaHNB6hsdfrGAZF4thYknMrnlSdd9op1YrOa3OiHGEd2KGOwKWC8l97YKHIyD4SkRti2mxWJSf7Xm2aXh2Hqt5bNPaTMAyiyB1XSfC2EvssuulXrRep8MwbNB1AZvNGt47jMOA4L1WhAiq78zE13/B1/4gYEjc1nthUWgfXbM+bROQj4ht570rsdg2DlmLAmrSQSvkwDSu2nUdFosF+r6fbJpyvCnyP7dWXmu8Lk35H5MgTuoWt/z9+C8lbTD6hGt9wsXudnxm8wBeTQ7IAxABlyUk4xUDrVUIDCsRYRXS2nWzvlg1mQsePngJZRTgggosb7u0fQUwEVCL/Yn2
GUuWkdUlpiRVGH3wOLO/h+Wix3LRY3dnB33fY3d3tyxK28lNGBeLRVmQ1t3LFqmBPG1z2UW/KBrReY/FTo+YI1xwGOIAEGG9Xuu8AwAjxhGOgDgOCN4hjmPxywugBYhAgtD5IACRc0qEraEdchMF0QqAmcJSrdT0GQEmubc2YoMdtPHQ1p/ehsrOc3zbz9k4bUbP73jz9SQT8KTBALIDBs8YPGPjA56mr8YnDs8hLjw8iT9Vuo+1n52Zp2wHnL8v1523FTpZJKEslrYW0Xbkdpe3BWVfmTMWi75ZDFXrhRBw5swZ7O/v4+xNZ7FcLrG3t1e0hAE+hrQSEYZhKCl25g8Z4GNxRTNf7RpMO/WLBVivbb1eA41vOAxD+azlutYA+/TZmXDZBlCzi2ThbzabMvd2fa1mml8j51pDavdduXfSREhb7Wdz2tKC2Nc2zFI/U0NLgjl8mTXlb9c4yVw+5p9tGe2EztHW+fHFRQey9pnIIGRySC5gcB0u9ufwxfEevLLZEfIqSCjCGtxsvS5UCkTKzYtZy8nqZjHfNEwwW7/PzMw2pmdfnXPFd7TfMXPRen3fI4SAvu/Lz7bAFkvRcPv7++VcphFDCA0tyDgRgvY8RKS9JlF8zs4HsJJjeecRulDS3kxwgVrcXWsS54kGlbhr/uxaRBVQd8LVxPrWT5yn6rUa7BgaP3Nz2uZErTVjG1XObXULo++DUr0QmCME4GKAT5dq9xUtlDZuxFzeNrmn0o6QzkxSTeIgdTALMO9goCVe6h/GJ4bb8Yo7A4KHzwmJRmkk07SUL0jeCQ6lY1JfVAqEF520BnBa0JtzhKWptb5Ma5oBKKCMCZFpjtLnA1zMSfmMXF/XSUih6z1AGWfP7qPrPXb3lqUCI3QOKY9Y7vTwgeCD09xWaChCBHpvb6+ipdS4CZlB2YEj4NjDO68sdrVQ2DSPJZ5XNNnDuaAMdh6bzWaSt2omcPusmQFkcR9aC6Zt3Npu0C3oZcdpXZwy7x6SuOEJoZMCeUPOhc4kApyEbcEBfeexXHQIHggO2Fn0IGQEJ758F04nbl+yUJ7GJJy8/xSvN+qatiFmWzUmuAn51CAMwyNTwBAIh+EmvEB34aVxH5l6uOzgMjXNeXI5mqVwbbtf5xyCakL7uetqz8kWpm8FrUVagQrKtLv+1C8NcMrebULUmlOLRQeD80MQ/3NnZwHngN3dJbwn7OwsFDAJCKq5d3d3tXDaS1qdbg4gTKpHOGf09jeu/tlqtTpWImbarc2B3Ww2RRvNk+fLOcq9T3NPbX5q7LMNp+HYsVokdfvxjw+vlUAxSvF4CA4hOHhP8J7gPEqjpq736DqH/TOna1tww+jr7+ihz8nir0yE5IAxEA76m/Di4u14Lt+JI95FyHvYjQt4toUwzYHclijd+oqtoNhnTNCSEkW1pl1LImyLuOVjZeYiLCF06HrRlsvlUoWxK+ag+aqtmRrjWFBY80FDCNjZWaLrAs6cPYOuFzPXOcJyuUAsn40IXhgEWqEQ1IzLxtIKwrVr18p9A1Wgzb9s/c556Mf8XdtkWp/WTPzWx7PPteh1q0Vb/7Asheb9bQjGrpn1WXjvBXFWF8F7j+VyqZsBFDTrsVwu4P0brClb3+4rdchOaIkFSuIFVNQTfJJV2RwEWrhtL/ET2DtcW5zHs+5OfHFzE9Z+geQ2AFn6hPmIVTvZaM1Q8//sAeacpdOydvsFA8OmLkhDPttypXbBmUnYgj7eOyx6oTSx0IIjoO9CIaDarNdY9B04J/R9p/1BUuHM6bsA7xzOn7sJwXvccvMF7O4ssb+/CwKjC5IPvFgsig9tAmRpdA7AsNmAWHiJ+tCVxWzPq41J2nyM44hxHAvC25qctpF0XTfJzNFHL4hzGypRASKQhF6caPg2OUUaJHm9J0HSg/cIXkzvnIT1QEjDSO4hSU8VTw47iyW6END5IFlcnOAdwClh0QX0XRBOoFlW3EnjhszXdmJuxGx9I03TE89h
1+NI2qU5mgimgTna7Xr20idqn3Hyyo4q2yMPOETEc+5mfNHdhecp4KgbkQmlxTkAeO+KljGNOPdl7HoBM1O10WnKiKNoRANUzLybQ+5tUbLF5MzHHMcRnDO64LVFOwk9iPcYhwFdCEhRqk4cEXKMEMa4rPHCjJ3lUmObhEXfSdUKmW8UsLNcYne5U4L9k7ggEcZhBJgxbDZwIKQxSlewMZbNxLRsq40MPW1zYafhDdcALNXE7/sOlUxM4p1CzCWIa4pRWBaTCKgJqjMO3BAQfEDfdXDksOgX2lyoCi4B5fPeOQTn0XedCC+50lhJsv9s3qOQAmio5zTjPwqg50bGjW4Yxw8w+wrorioxyE1a4tLyTnw27eIAN4FV4Lw/ntkx92NbM8lerflq72kXazvmwW4zRVuN2Qpm9TWnGS32e0sKsMr49XpdTFgLGywWfdFCfd+h7zvsn9kriQIpJezu7iL4gN3d3aLFUkpSDK6dusZxFJPTTQuNd3d3J9QjtumUihYN/7SB/Tb+Z3m5ZllU6hKaHMtCF61ZOl83ZsW0KLcdwzYN0+bM1cw/ngElFCKG2qYk4N1pl+QNm6/lZ0zNvonjfMrjnRji+FJMZD45lDL5uTFl6zXVU7fvaWAaZLeHw+XNeMGfw8X1DqyQWFjHlT2Pa+jDqzYqsq7+p11PW98HCECSlS5jTtPRLuYWFDFBMpOunseeG5WMnFYIzHR0zhWfqNXwNUQj5rJc04j9/V0wEnZ2FkBOOLO3i+AJnSdFGqdMbpMNx3t4kGgWlm5nnBKQMlJDO9JmEllMtM20mQtXa863oRIjZy5AGlo/c7p0iKiEbJyT6/Mg6erN8iyRMgI5eCIsur6YsPLc6wvMSpvD4JQRxxHdKdnsTi2UUU0i12ghTQWtS5aMD/R0Jqu1JZ+/5hH362m9uVbkJue01Q6vBVRx42BypvrSMEmGQyJCRsJq7PHy8hvwmfFhpM0CoKzmXYTLGS4lUIygnOA4w7EcwSo4iDBZKG1Bba+aKTg3Eew+BPjGB7MiYwN4TEAN5JBzRAjnTMSi64Cc0XmPNI7lqyfCGIdC+U9OSsUYSUmeB+zuLQHKWC57OdaiA1HGTWf3sewDln1AHFYIngEe4bWXifHSjGMs2sR7qQyhLIs1OIccI5AT8jiWGG6b+gfU+KltIqYpW2vBtJnMkSChEl6SihKnyf2grCEOVhIubXmgX2MapI5Vm/oSZ/maMzyk4GDReRAnIMlzzuMI4owcBwRH4BRBzMhRjuOY4fgNNl9bBm8LuKtYADOTcW623bg5+dqaclvI45QfbQ4iL57/YnIpGi5R//OQPC5FwmV/E15Y3oMn421Y855eD0CcQVkeVooDUhpVOGQ3n5dAMXMpTRrHsfg3c9OLmdGr9rSOWG2ZEVCLja3S3QQ3KrLZputZMoFk8YSJVrKsoN3d3cLXGuOIrvfCGscRXR8Q4wBov0oiCQ8sFh28J+zt7TbgVJogmDGmiW8Irpk63tUwiF2nvKXGFufpa/Z3e08IFVk1jW/gmoSLCFZ+ZYEw72kSGBPUTZ5lIJQNNjhCIELX+brpcRRmPZL4JZEy65Fdt5jwpwVJT2++JmsJnsE3tPJvfMy12vVuhrZ9/0ZdHk+/z5AQSeINDsY1Xulvxa/nB3HZnUWODLiulJSpBaMEYnzMDC3FwfOSHqobjqW8lep/1DQ200JWyGtmbVs+ZfMY1BwbxxGbzeYYq0Df9VitVhOBbnNb50nZKSUEXzN0LKZoaXO2UZivB1TUVDaaGuppSZonUz9bAy2zQOtbF5Y+NmvNQBsBYdrv+67HousFuCFBiLsQsOh6MS0bRJa5ch214RAz9edhF0tHbOPM7Wizll5rnFoonSZbm0h+uQXzeqM1jUn9tfb7L8/ZnOb6aGVIzFiNHa70N+O3ru1h5Xswb5AJwsYQHNgD7FgQXCdEY7XBjJjpRCiCKUhqXfiWMWM7/jiONbe0MVktxmeLwntftO96
vcai4cKxryZQKSX44BuNGCdf21KrNu9Vqjxqyp1tEiXNzdVi6DbtDQCgIIkt7DYWK37nlI+nBW3s9yUtTxPUg3NYdj0COYzrDQI5BPII5LHseng49D4gOI+gPqBjQuc8NquVZFoxyouT+Pa2sRRrRTeddvNrc3fngForpG+4piTpSoOsgvkfalTxACpTjQrK60VcX3MorSYTiD0SdgBeIucFBr/A85ub8cXDHpvESMxYxRHsSFo3EITsF6yZOyKYQkjsMY4DlstFWcCtb2jkxO0DtodrwXVDLG2BmHC3tY9J09N2d3cnIAiRtWWvXDjzQmcjt+q6DkdHR43ZKVk3fd9jGAbs7OyogDFWqxWyxljFbJ369wAmJjxQAS+QhJXsWZp2bVHVCacOSVgjOA8kVqEiUJb2Gr0P6JyAS0gZjhkehM4F6bmaGB5OP8e1jWUWcKaaxKEkBXSdxCzN4rB4qYFpbaE0UDfLN9581dxCOfhpP/X6xon+ov0M1BzHJtfxy3xVKKiWDmZglT2u+dvx+aMH8PL6PAZjLgPDqZbzXsqz5qimxRgtvGB/SykXU9J+X/2xabWGmcAmhCZAJYg/EwYAJQtovV6XoPiZM2dwdHRUmOVMM+7uSmqYLTxDhYFpwkLNMNKi4QYpTSmWeTA/2a7fNLvNARiACt7e3h4AYGdnp+TIthlM0ydjsUdg0feaDMHFpAWzxmx9qcU0VJwAZaUX8CnFKIkXzpUicEvM996DIaRxJoDm35rFYhpzotG3+MEnjdPTgeRUJpRPiSJ9qWMreIT683/wQQkjAg5xFlf9A7h4KBpvuVwWZre2gr6Nfdkua0njVsIk758mCtjf2kVpoI4dt9V+puHaEErfi98o1ybmqPmXYo52BfW0uOXR0VERwJLjCkw2gxZMMUEGMPGfLFRjPmy7SbTCCgDjrC9m+/zNb52XWDnnSouEdh7acIn5xiYYZt7b54HqA5svvVwuiy/blrTlVLuUtT6nWSztRmjXZH8/1bLi0+rUN8eb483x2zJ+x2X0vDneHP+xjzeF8s3x5vgKG28K5ZvjzfEVNt4UyjfHm+MrbLwplG+ON8dX2HhTKN8cb46vsPGmUL453hxfYeNNoXxzvDm+wsabQvnmeHN8hY3/H2M53/5tSGLEAAAAAElFTkSuQmCC", "text/plain": [ "
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "sav_dataset.visualize_annotation(\n", " frames, manual_annot, auto_annot,\n", " annotated_frame_id=0,\n", " show_manual=False,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Masklet annotations and Metadata" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Manual annotations and metadata" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
video_idvideo_durationvideo_frame_countvideo_heightvideo_widthvideo_resolutionvideo_environmentvideo_splitmaskletmasklet_idmasklet_size_relmasklet_size_absmasklet_size_bucketmasklet_visibility_changesmasklet_first_appeared_framemasklet_frame_countmasklet_edited_frame_countmasklet_typemasklet_stability_scoremasklet_num
0sav_00000120.125483.0848.0480.0407040.0Indoortrain[[{'size': [848, 480], 'counts': 'i\\Y4<Qj05K4L...[0, 1, 2, 3, 4][0.0035249812, 0.0946159778, 0.011285757, 0.00...[1434.8083333333, 38512.4876033058, 4593.75454...[medium, large, medium, medium, medium][2, 0, 10, 0, 0][0.0, 0.0, 0.0, 113.0, 0.0][121, 121, 121, 121, 121][41, 11, 22, 4, 115][manual, manual, manual, manual, manual][None, None, None, None, None]5
\n", "
" ], "text/plain": [ " video_id video_duration video_frame_count video_height video_width \\\n", "0 sav_000001 20.125 483.0 848.0 480.0 \n", "\n", " video_resolution video_environment video_split \\\n", "0 407040.0 Indoor train \n", "\n", " masklet masklet_id \\\n", "0 [[{'size': [848, 480], 'counts': 'i\\Y4\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
video_idvideo_durationvideo_frame_countvideo_heightvideo_widthvideo_resolutionvideo_environmentvideo_splitmaskletmasklet_idmasklet_size_relmasklet_size_absmasklet_size_bucketmasklet_visibility_changesmasklet_first_appeared_framemasklet_frame_countmasklet_edited_frame_countmasklet_typemasklet_stability_scoremasklet_num
0sav_00000120.125483.0848.0480.0407040.0Indoortrain[[{'size': [848, 480], 'counts': 'ka0e8ka001O1...[0, 1, 2, 3, 4, 5, 6, 7, 8][0.010841089678796047, 0.038489445267425544, 0...[4412.757142857143, 15666.743801652892, 7663.1...[medium, large, medium, large, medium, medium,...[5, 0, 0, 0, 0, 0, 10, 3, 12][0, 0, 0, 0, 0, 0, 0, 0, 0][121, 121, 121, 121, 121, 121, 121, 121, 121][0, 0, 0, 0, 0, 0, 0, 0, 0][auto, auto, auto, auto, auto, auto, auto, aut...[[1.0, 0.999616265296936, 1.0, 1.0, 1.0, 1.0, ...9
\n", "
" ], "text/plain": [ " video_id video_duration video_frame_count video_height video_width \\\n", "0 sav_000001 20.125 483.0 848.0 480.0 \n", "\n", " video_resolution video_environment video_split \\\n", "0 407040.0 Indoor train \n", "\n", " masklet \\\n", "0 [[{'size': [848, 480], 'counts': 'ka0e8ka001O1... \n", "\n", " masklet_id \\\n", "0 [0, 1, 2, 3, 4, 5, 6, 7, 8] \n", "\n", " masklet_size_rel \\\n", "0 [0.010841089678796047, 0.038489445267425544, 0... \n", "\n", " masklet_size_abs \\\n", "0 [4412.757142857143, 15666.743801652892, 7663.1... \n", "\n", " masklet_size_bucket \\\n", "0 [medium, large, medium, large, medium, medium,... \n", "\n", " masklet_visibility_changes masklet_first_appeared_frame \\\n", "0 [5, 0, 0, 0, 0, 0, 10, 3, 12] [0, 0, 0, 0, 0, 0, 0, 0, 0] \n", "\n", " masklet_frame_count masklet_edited_frame_count \\\n", "0 [121, 121, 121, 121, 121, 121, 121, 121, 121] [0, 0, 0, 0, 0, 0, 0, 0, 0] \n", "\n", " masklet_type \\\n", "0 [auto, auto, auto, auto, auto, auto, auto, aut... \n", "\n", " masklet_stability_score masklet_num \n", "0 [[1.0, 0.999616265296936, 1.0, 1.0, 1.0, 1.0, ... 9 " ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "pd.DataFrame([auto_annot])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Video info" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "sav_000001 is 20.125 seconds long with 483.0 frames. 
The video resolution is 848.0 x 480.0.\n", "This video is captured in Indoor environment.\n" ] } ], "source": [ "video_id = manual_annot[\"video_id\"]\n", "video_duration = manual_annot[\"video_duration\"]\n", "video_frame_count = manual_annot[\"video_frame_count\"]\n", "H = manual_annot[\"video_height\"]\n", "W = manual_annot[\"video_width\"]\n", "environment = manual_annot[\"video_environment\"]\n", "print(\n", " f\"{video_id} is {video_duration} seconds long with {video_frame_count} frames. The video resolution is {H} x {W}.\"\n", ")\n", "print(f\"This video is captured in {environment} environment.\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Masklet info" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "There are 5 manually labeled masklets and 9 automatically generated masklets.\n", "In SA-V, videos are annotated every 4 frames. Therefore, there are 121 frames being annotated.\n" ] } ], "source": [ "print(\n", " f\"There are {manual_annot['masklet_num']} manually labeled masklets and {auto_annot['masklet_num']} automatically generated masklets.\"\n", ")\n", "print(\n", " f\"In SA-V, videos are annotated every 4 frames. 
Therefore, there are {manual_annot['masklet_frame_count'][0]} frames being annotated.\"\n", ")" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'size': [848, 480],\n", " 'counts': 'i\\\\Y40` to get the binary segmentation mask" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'size': [848, 480],\n", " 'counts': 'Q_T6S1Xh0X1eNY1[Od0E;M4N10000O101O00000000000000O0100000000001M2O1O1N3N1M4H8B?@e0POc1jMfZ[5'}" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Get the rle of the masklet with masklet_id=5 in frame 100\n", "masklet_id = 5\n", "annotated_frame_id = 100\n", "auto_annot[\"masklet\"][annotated_frame_id][masklet_id]\n", "# decode the rle using `mask_util.decode(rle)>0` to get the binary segmentation mask" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "onevision_ta_2_pseudo_labeling", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.12" } }, "nbformat": 4, "nbformat_minor": 2 } ================================================ FILE: auto-seg/submodules/segment-anything-2/sav_dataset/utils/sav_utils.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the sav_dataset directory of this source tree. 
import json
import os
from typing import Dict, List, Optional, Tuple

import cv2
import matplotlib.pyplot as plt
import numpy as np
import pycocotools.mask as mask_util


def decode_video(video_path: str) -> List[np.ndarray]:
    """
    Decode the video and return the RGB frames
    """
    video = cv2.VideoCapture(video_path)
    video_frames = []
    while video.isOpened():
        ret, frame = video.read()
        if ret:
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            video_frames.append(frame)
        else:
            break
    return video_frames


def show_anns(masks, colors: List, borders=True) -> None:
    """
    show the annotations
    """
    # return if no masks
    if len(masks) == 0:
        return

    # sort masks by size
    sorted_annot_and_color = sorted(
        zip(masks, colors), key=(lambda x: x[0].sum()), reverse=True
    )
    H, W = sorted_annot_and_color[0][0].shape[0], sorted_annot_and_color[0][0].shape[1]

    canvas = np.ones((H, W, 4))
    canvas[:, :, 3] = 0  # set the alpha channel
    contour_thickness = max(1, int(min(5, 0.01 * min(H, W))))
    for mask, color in sorted_annot_and_color:
        canvas[mask] = np.concatenate([color, [0.55]])
        if borders:
            contours, _ = cv2.findContours(
                np.array(mask, dtype=np.uint8), cv2.RETR_TREE, cv2.CHAIN_APPROX_NONE
            )
            cv2.drawContours(
                canvas, contours, -1, (0.05, 0.05, 0.05, 1), thickness=contour_thickness
            )

    ax = plt.gca()
    ax.imshow(canvas)


class SAVDataset:
    """
    SAVDataset is a class to load the SAV dataset and visualize the annotations.
    """

    def __init__(self, sav_dir, annot_sample_rate=4):
        """
        Args:
            sav_dir: the directory of the SAV dataset
            annot_sample_rate: the sampling rate of the annotations.
                The annotations are aligned with the videos at 6 fps.
        """
        self.sav_dir = sav_dir
        self.annot_sample_rate = annot_sample_rate
        self.manual_mask_colors = np.random.random((256, 3))
        self.auto_mask_colors = np.random.random((256, 3))

    def read_frames(self, mp4_path: str) -> Optional[List[np.ndarray]]:
        """
        Read the frames and downsample them to align with the annotations.
        Returns None if the video file does not exist.
        """
        if not os.path.exists(mp4_path):
            print(f"{mp4_path} doesn't exist.")
            return None
        else:
            # decode the video
            frames = decode_video(mp4_path)
            print(f"There are {len(frames)} frames decoded from {mp4_path} (24fps).")

            # downsample the frames to align with the annotations
            frames = frames[:: self.annot_sample_rate]
            print(
                f"Videos are annotated every {self.annot_sample_rate} frames. "
                "To align with the annotations, "
                f"downsample the video to {len(frames)} frames."
            )
            return frames

    def get_frames_and_annotations(
        self, video_id: str
    ) -> Tuple[List | None, Dict | None, Dict | None]:
        """
        Get the frames and annotations for a video.
        """
        # load the video
        mp4_path = os.path.join(self.sav_dir, video_id + ".mp4")
        frames = self.read_frames(mp4_path)
        if frames is None:
            return None, None, None

        # load the manual annotations
        manual_annot_path = os.path.join(self.sav_dir, video_id + "_manual.json")
        if not os.path.exists(manual_annot_path):
            print(f"{manual_annot_path} doesn't exist. Something might be wrong.")
            manual_annot = None
        else:
            manual_annot = json.load(open(manual_annot_path))

        # load the automatic annotations
        auto_annot_path = os.path.join(self.sav_dir, video_id + "_auto.json")
        if not os.path.exists(auto_annot_path):
            print(f"{auto_annot_path} doesn't exist.")
            auto_annot = None
        else:
            auto_annot = json.load(open(auto_annot_path))

        return frames, manual_annot, auto_annot

    def visualize_annotation(
        self,
        frames: List[np.ndarray],
        auto_annot: Optional[Dict],
        manual_annot: Optional[Dict],
        annotated_frame_id: int,
        show_auto=True,
        show_manual=True,
    ) -> None:
        """
        Visualize the annotations on the annotated_frame_id.
        If show_manual is True, show the manual annotations.
        If show_auto is True, show the auto annotations.
        By default, show both auto and manual annotations.
        """
        if annotated_frame_id >= len(frames):
            print("invalid annotated_frame_id")
            return

        rles = []
        colors = []
        if show_manual and manual_annot is not None:
            rles.extend(manual_annot["masklet"][annotated_frame_id])
            colors.extend(
                self.manual_mask_colors[
                    : len(manual_annot["masklet"][annotated_frame_id])
                ]
            )
        if show_auto and auto_annot is not None:
            rles.extend(auto_annot["masklet"][annotated_frame_id])
            colors.extend(
                self.auto_mask_colors[: len(auto_annot["masklet"][annotated_frame_id])]
            )

        plt.imshow(frames[annotated_frame_id])
        if len(rles) > 0:
            masks = [mask_util.decode(rle) > 0 for rle in rles]
            show_anns(masks, colors)
        else:
            print("No annotation will be shown")

        plt.axis("off")
        plt.show()



================================================
FILE: auto-seg/submodules/segment-anything-2/setup.py
================================================
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
import os

from setuptools import find_packages, setup

# Package metadata
NAME = "SAM-2"
VERSION = "1.0"
DESCRIPTION = "SAM 2: Segment Anything in Images and Videos"
URL = "https://github.com/facebookresearch/sam2"
AUTHOR = "Meta AI"
AUTHOR_EMAIL = "segment-anything@meta.com"
LICENSE = "Apache 2.0"

# Read the contents of README file
with open("README.md", "r", encoding="utf-8") as f:
    LONG_DESCRIPTION = f.read()

# Required dependencies
REQUIRED_PACKAGES = [
    "torch>=2.3.1",
    "torchvision>=0.18.1",
    "numpy>=1.24.4",
    "tqdm>=4.66.1",
    "hydra-core>=1.3.2",
    "iopath>=0.1.10",
    "pillow>=9.4.0",
]

EXTRA_PACKAGES = {
    "notebooks": [
        "matplotlib>=3.9.1",
        "jupyter>=1.0.0",
        "opencv-python>=4.7.0",
        "eva-decord>=0.6.1",
    ],
    "interactive-demo": [
        "Flask>=3.0.3",
        "Flask-Cors>=5.0.0",
        "av>=13.0.0",
        "dataclasses-json>=0.6.7",
        "eva-decord>=0.6.1",
        "gunicorn>=23.0.0",
        "imagesize>=1.4.1",
        "pycocotools>=2.0.8",
        "strawberry-graphql>=0.243.0",
    ],
    "dev": [
        "black==24.2.0",
        "usort==1.0.2",
        "ufmt==2.0.0b2",
        "fvcore>=0.1.5.post20221221",
        "pandas>=2.2.2",
        "scikit-image>=0.24.0",
        "tensorboard>=2.17.0",
        "pycocotools>=2.0.8",
        "tensordict>=0.5.0",
        "opencv-python>=4.7.0",
        "submitit>=1.5.1",
    ],
}

# By default, we also build the SAM 2 CUDA extension.
# You may turn off CUDA build with `export SAM2_BUILD_CUDA=0`.
BUILD_CUDA = os.getenv("SAM2_BUILD_CUDA", "1") == "1"
# By default, we allow SAM 2 installation to proceed even with build errors.
# You may force stopping on errors with `export SAM2_BUILD_ALLOW_ERRORS=0`.
BUILD_ALLOW_ERRORS = os.getenv("SAM2_BUILD_ALLOW_ERRORS", "1") == "1"

# Catch and skip errors during extension building and print a warning message
# (note that this message only shows up under verbose build mode
# "pip install -v -e ." or "python setup.py build_ext -v")
CUDA_ERROR_MSG = (
    "{}\n\n"
    "Failed to build the SAM 2 CUDA extension due to the error above. "
    "You can still use SAM 2 and it's OK to ignore the error above, although some "
    "post-processing functionality may be limited (which doesn't affect the results in most cases; "
    "see https://github.com/facebookresearch/sam2/blob/main/INSTALL.md).\n"
)


def get_extensions():
    if not BUILD_CUDA:
        return []

    try:
        from torch.utils.cpp_extension import CUDAExtension

        srcs = ["sam2/csrc/connected_components.cu"]
        compile_args = {
            "cxx": [],
            "nvcc": [
                "-DCUDA_HAS_FP16=1",
                "-D__CUDA_NO_HALF_OPERATORS__",
                "-D__CUDA_NO_HALF_CONVERSIONS__",
                "-D__CUDA_NO_HALF2_OPERATORS__",
            ],
        }
        ext_modules = [CUDAExtension("sam2._C", srcs, extra_compile_args=compile_args)]
    except Exception as e:
        if BUILD_ALLOW_ERRORS:
            print(CUDA_ERROR_MSG.format(e))
            ext_modules = []
        else:
            raise e

    return ext_modules


try:
    from torch.utils.cpp_extension import BuildExtension

    class BuildExtensionIgnoreErrors(BuildExtension):

        def finalize_options(self):
            try:
                super().finalize_options()
            except Exception as e:
                print(CUDA_ERROR_MSG.format(e))
                self.extensions = []

        def build_extensions(self):
            try:
                super().build_extensions()
            except Exception as e:
                print(CUDA_ERROR_MSG.format(e))
                self.extensions = []

        def get_ext_filename(self, ext_name):
            try:
                return super().get_ext_filename(ext_name)
            except Exception as e:
                print(CUDA_ERROR_MSG.format(e))
                self.extensions = []
                return "_C.so"

    cmdclass = {
        "build_ext": (
            BuildExtensionIgnoreErrors.with_options(no_python_abi_suffix=True)
            if BUILD_ALLOW_ERRORS
            else BuildExtension.with_options(no_python_abi_suffix=True)
        )
    }
except Exception as e:
    cmdclass = {}
    if BUILD_ALLOW_ERRORS:
        print(CUDA_ERROR_MSG.format(e))
    else:
        raise e

# Setup configuration
setup(
    name=NAME,
    version=VERSION,
    description=DESCRIPTION,
    long_description=LONG_DESCRIPTION,
    long_description_content_type="text/markdown",
    url=URL,
    author=AUTHOR,
    author_email=AUTHOR_EMAIL,
    license=LICENSE,
    packages=find_packages(exclude="notebooks"),
    include_package_data=True,
    install_requires=REQUIRED_PACKAGES,
    extras_require=EXTRA_PACKAGES,
    python_requires=">=3.10.0",
    ext_modules=get_extensions(),
    cmdclass=cmdclass,
)



================================================
FILE: auto-seg/submodules/segment-anything-2/tools/README.md
================================================
## SAM 2 toolkits

This directory provides toolkits for additional SAM 2 use cases.

### Semi-supervised VOS inference

The `vos_inference.py` script can be used to generate predictions for semi-supervised video object segmentation (VOS) evaluation on datasets such as [DAVIS](https://davischallenge.org/index.html), [MOSE](https://henghuiding.github.io/MOSE/) or the SA-V dataset.

After installing SAM 2 and its dependencies, it can be used as follows ([DAVIS 2017 dataset](https://davischallenge.org/davis2017/code.html) as an example). This script saves the prediction PNG files to the `--output_mask_dir`.

```bash
python ./tools/vos_inference.py \
  --sam2_cfg configs/sam2.1/sam2.1_hiera_b+.yaml \
  --sam2_checkpoint ./checkpoints/sam2.1_hiera_base_plus.pt \
  --base_video_dir /path-to-davis-2017/JPEGImages/480p \
  --input_mask_dir /path-to-davis-2017/Annotations/480p \
  --video_list_file /path-to-davis-2017/ImageSets/2017/val.txt \
  --output_mask_dir ./outputs/davis_2017_pred_pngs
```

(replace `/path-to-davis-2017` with the path to the DAVIS 2017 dataset)

To evaluate on the SA-V dataset with per-object PNG files for the object masks, we need to **add the `--per_obj_png_file` flag** as follows (using SA-V val as an example). With this flag, the script also saves the output masks as per-object PNG files.
```bash
python ./tools/vos_inference.py \
  --sam2_cfg configs/sam2.1/sam2.1_hiera_b+.yaml \
  --sam2_checkpoint ./checkpoints/sam2.1_hiera_base_plus.pt \
  --base_video_dir /path-to-sav-val/JPEGImages_24fps \
  --input_mask_dir /path-to-sav-val/Annotations_6fps \
  --video_list_file /path-to-sav-val/sav_val.txt \
  --per_obj_png_file \
  --output_mask_dir ./outputs/sav_val_pred_pngs
```

(replace `/path-to-sav-val` with the path to SA-V val)

Then, we can use each dataset's evaluation tools or servers to score the prediction PNG files above.

Note: by default, the `vos_inference.py` script above assumes that all objects to track already appear on frame 0 in each video (as is the case in DAVIS, MOSE or SA-V). **For VOS datasets that don't have all objects to track appearing in the first frame (such as LVOS or YouTube-VOS), please add the `--track_object_appearing_later_in_video` flag when using `vos_inference.py`**.



================================================
FILE: auto-seg/submodules/segment-anything-2/tools/vos_inference.py
================================================
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
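The README above contrasts the DAVIS-style format (all object masks packed into one index-valued PNG per frame) with SA-V's per-object binary PNGs. A small numpy sketch of converting between the two representations, mirroring the semantics of the `get_per_obj_mask`/`put_per_obj_mask` helpers in this file (the function names here are illustrative, not the script's API):

```python
import numpy as np


def split_mask(packed):
    """Packed index mask -> {object_id: boolean mask}; id 0 is background."""
    ids = np.unique(packed)
    return {int(i): packed == i for i in ids if i > 0}


def pack_masks(per_obj, height, width):
    """{object_id: boolean mask} -> packed index mask.

    Writing higher ids first means the lower id wins wherever two
    objects overlap, matching `put_per_obj_mask`'s descending-id order."""
    packed = np.zeros((height, width), dtype=np.uint8)
    for obj_id in sorted(per_obj, reverse=True):
        packed[per_obj[obj_id]] = obj_id
    return packed


packed = np.array([[0, 1], [2, 2]], dtype=np.uint8)
per_obj = split_mask(packed)
roundtrip = pack_masks(per_obj, 2, 2)
```

With non-overlapping masks the two conversions round-trip exactly; only overlapping per-object inputs lose information in the packed form.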
import argparse import os from collections import defaultdict import numpy as np import torch from PIL import Image from sam2.build_sam import build_sam2_video_predictor # the PNG palette for DAVIS 2017 dataset DAVIS_PALETTE = b"\x00\x00\x00\x80\x00\x00\x00\x80\x00\x80\x80\x00\x00\x00\x80\x80\x00\x80\x00\x80\x80\x80\x80\x80@\x00\x00\xc0\x00\x00@\x80\x00\xc0\x80\x00@\x00\x80\xc0\x00\x80@\x80\x80\xc0\x80\x80\x00@\x00\x80@\x00\x00\xc0\x00\x80\xc0\x00\x00@\x80\x80@\x80\x00\xc0\x80\x80\xc0\x80@@\x00\xc0@\x00@\xc0\x00\xc0\xc0\x00@@\x80\xc0@\x80@\xc0\x80\xc0\xc0\x80\x00\x00@\x80\x00@\x00\x80@\x80\x80@\x00\x00\xc0\x80\x00\xc0\x00\x80\xc0\x80\x80\xc0@\x00@\xc0\x00@@\x80@\xc0\x80@@\x00\xc0\xc0\x00\xc0@\x80\xc0\xc0\x80\xc0\x00@@\x80@@\x00\xc0@\x80\xc0@\x00@\xc0\x80@\xc0\x00\xc0\xc0\x80\xc0\xc0@@@\xc0@@@\xc0@\xc0\xc0@@@\xc0\xc0@\xc0@\xc0\xc0\xc0\xc0\xc0 \x00\x00\xa0\x00\x00 \x80\x00\xa0\x80\x00 \x00\x80\xa0\x00\x80 \x80\x80\xa0\x80\x80`\x00\x00\xe0\x00\x00`\x80\x00\xe0\x80\x00`\x00\x80\xe0\x00\x80`\x80\x80\xe0\x80\x80 @\x00\xa0@\x00 \xc0\x00\xa0\xc0\x00 @\x80\xa0@\x80 \xc0\x80\xa0\xc0\x80`@\x00\xe0@\x00`\xc0\x00\xe0\xc0\x00`@\x80\xe0@\x80`\xc0\x80\xe0\xc0\x80 \x00@\xa0\x00@ \x80@\xa0\x80@ \x00\xc0\xa0\x00\xc0 \x80\xc0\xa0\x80\xc0`\x00@\xe0\x00@`\x80@\xe0\x80@`\x00\xc0\xe0\x00\xc0`\x80\xc0\xe0\x80\xc0 @@\xa0@@ \xc0@\xa0\xc0@ @\xc0\xa0@\xc0 \xc0\xc0\xa0\xc0\xc0`@@\xe0@@`\xc0@\xe0\xc0@`@\xc0\xe0@\xc0`\xc0\xc0\xe0\xc0\xc0\x00 \x00\x80 \x00\x00\xa0\x00\x80\xa0\x00\x00 \x80\x80 \x80\x00\xa0\x80\x80\xa0\x80@ \x00\xc0 \x00@\xa0\x00\xc0\xa0\x00@ \x80\xc0 \x80@\xa0\x80\xc0\xa0\x80\x00`\x00\x80`\x00\x00\xe0\x00\x80\xe0\x00\x00`\x80\x80`\x80\x00\xe0\x80\x80\xe0\x80@`\x00\xc0`\x00@\xe0\x00\xc0\xe0\x00@`\x80\xc0`\x80@\xe0\x80\xc0\xe0\x80\x00 @\x80 @\x00\xa0@\x80\xa0@\x00 \xc0\x80 \xc0\x00\xa0\xc0\x80\xa0\xc0@ @\xc0 @@\xa0@\xc0\xa0@@ \xc0\xc0 
\xc0@\xa0\xc0\xc0\xa0\xc0\x00`@\x80`@\x00\xe0@\x80\xe0@\x00`\xc0\x80`\xc0\x00\xe0\xc0\x80\xe0\xc0@`@\xc0`@@\xe0@\xc0\xe0@@`\xc0\xc0`\xc0@\xe0\xc0\xc0\xe0\xc0 \x00\xa0 \x00 \xa0\x00\xa0\xa0\x00 \x80\xa0 \x80 \xa0\x80\xa0\xa0\x80` \x00\xe0 \x00`\xa0\x00\xe0\xa0\x00` \x80\xe0 \x80`\xa0\x80\xe0\xa0\x80 `\x00\xa0`\x00 \xe0\x00\xa0\xe0\x00 `\x80\xa0`\x80 \xe0\x80\xa0\xe0\x80``\x00\xe0`\x00`\xe0\x00\xe0\xe0\x00``\x80\xe0`\x80`\xe0\x80\xe0\xe0\x80 @\xa0 @ \xa0@\xa0\xa0@ \xc0\xa0 \xc0 \xa0\xc0\xa0\xa0\xc0` @\xe0 @`\xa0@\xe0\xa0@` \xc0\xe0 \xc0`\xa0\xc0\xe0\xa0\xc0 `@\xa0`@ \xe0@\xa0\xe0@ `\xc0\xa0`\xc0 \xe0\xc0\xa0\xe0\xc0``@\xe0`@`\xe0@\xe0\xe0@``\xc0\xe0`\xc0`\xe0\xc0\xe0\xe0\xc0" def load_ann_png(path): """Load a PNG file as a mask and its palette.""" mask = Image.open(path) palette = mask.getpalette() mask = np.array(mask).astype(np.uint8) return mask, palette def save_ann_png(path, mask, palette): """Save a mask as a PNG file with the given palette.""" assert mask.dtype == np.uint8 assert mask.ndim == 2 output_mask = Image.fromarray(mask) output_mask.putpalette(palette) output_mask.save(path) def get_per_obj_mask(mask): """Split a mask into per-object masks.""" object_ids = np.unique(mask) object_ids = object_ids[object_ids > 0].tolist() per_obj_mask = {object_id: (mask == object_id) for object_id in object_ids} return per_obj_mask def put_per_obj_mask(per_obj_mask, height, width): """Combine per-object masks into a single mask.""" mask = np.zeros((height, width), dtype=np.uint8) object_ids = sorted(per_obj_mask)[::-1] for object_id in object_ids: object_mask = per_obj_mask[object_id] object_mask = object_mask.reshape(height, width) mask[object_mask] = object_id return mask def load_masks_from_dir( input_mask_dir, video_name, frame_name, per_obj_png_file, allow_missing=False ): """Load masks from a directory as a dict of per-object masks.""" if not per_obj_png_file: input_mask_path = os.path.join(input_mask_dir, video_name, f"{frame_name}.png") if allow_missing and not 
os.path.exists(input_mask_path): return {}, None input_mask, input_palette = load_ann_png(input_mask_path) per_obj_input_mask = get_per_obj_mask(input_mask) else: per_obj_input_mask = {} input_palette = None # each object is a directory named in "{object_id:03d}" format for object_name in os.listdir(os.path.join(input_mask_dir, video_name)): object_id = int(object_name) input_mask_path = os.path.join( input_mask_dir, video_name, object_name, f"{frame_name}.png" ) if allow_missing and not os.path.exists(input_mask_path): continue input_mask, input_palette = load_ann_png(input_mask_path) per_obj_input_mask[object_id] = input_mask > 0 return per_obj_input_mask, input_palette def save_masks_to_dir( output_mask_dir, video_name, frame_name, per_obj_output_mask, height, width, per_obj_png_file, output_palette, ): """Save masks to a directory as PNG files.""" os.makedirs(os.path.join(output_mask_dir, video_name), exist_ok=True) if not per_obj_png_file: output_mask = put_per_obj_mask(per_obj_output_mask, height, width) output_mask_path = os.path.join( output_mask_dir, video_name, f"{frame_name}.png" ) save_ann_png(output_mask_path, output_mask, output_palette) else: for object_id, object_mask in per_obj_output_mask.items(): object_name = f"{object_id:03d}" os.makedirs( os.path.join(output_mask_dir, video_name, object_name), exist_ok=True, ) output_mask = object_mask.reshape(height, width).astype(np.uint8) output_mask_path = os.path.join( output_mask_dir, video_name, object_name, f"{frame_name}.png" ) save_ann_png(output_mask_path, output_mask, output_palette) @torch.inference_mode() @torch.autocast(device_type="cuda", dtype=torch.bfloat16) def vos_inference( predictor, base_video_dir, input_mask_dir, output_mask_dir, video_name, score_thresh=0.0, use_all_masks=False, per_obj_png_file=False, ): """Run VOS inference on a single video with the given predictor.""" # load the video frames and initialize the inference state on this video video_dir = os.path.join(base_video_dir, 
video_name) frame_names = [ os.path.splitext(p)[0] for p in os.listdir(video_dir) if os.path.splitext(p)[-1] in [".jpg", ".jpeg", ".JPG", ".JPEG"] ] frame_names.sort(key=lambda p: int(os.path.splitext(p)[0])) inference_state = predictor.init_state( video_path=video_dir, async_loading_frames=False ) height = inference_state["video_height"] width = inference_state["video_width"] input_palette = None # fetch mask inputs from input_mask_dir (either only the mask for the first frame, or all available masks) if not use_all_masks: # use only the video's first-frame ground-truth mask as the input mask input_frame_inds = [0] else: # use all mask files available in the input_mask_dir as the input masks if not per_obj_png_file: input_frame_inds = [ idx for idx, name in enumerate(frame_names) if os.path.exists( os.path.join(input_mask_dir, video_name, f"{name}.png") ) ] else: input_frame_inds = [ idx for object_name in os.listdir(os.path.join(input_mask_dir, video_name)) for idx, name in enumerate(frame_names) if os.path.exists( os.path.join(input_mask_dir, video_name, object_name, f"{name}.png") ) ] # check and make sure we got at least one input frame if len(input_frame_inds) == 0: raise RuntimeError( f"In {video_name=}, got no input masks in {input_mask_dir=}. " "Please make sure the input masks are available in the correct format." ) input_frame_inds = sorted(set(input_frame_inds)) # add those input masks to SAM 2 inference state before propagation object_ids_set = None for input_frame_idx in input_frame_inds: try: per_obj_input_mask, input_palette = load_masks_from_dir( input_mask_dir=input_mask_dir, video_name=video_name, frame_name=frame_names[input_frame_idx], per_obj_png_file=per_obj_png_file, ) except FileNotFoundError as e: raise RuntimeError( f"In {video_name=}, failed to load input mask for frame {input_frame_idx=}. 
" "Please add the `--track_object_appearing_later_in_video` flag " "for VOS datasets that don't have all objects to track appearing " "in the first frame (such as LVOS or YouTube-VOS)." ) from e # get the list of object ids to track from the first input frame if object_ids_set is None: object_ids_set = set(per_obj_input_mask) for object_id, object_mask in per_obj_input_mask.items(): # check and make sure no new object ids appear only in later frames if object_id not in object_ids_set: raise RuntimeError( f"In {video_name=}, got a new {object_id=} appearing only in a " f"later {input_frame_idx=} (but not appearing in the first frame). " "Please add the `--track_object_appearing_later_in_video` flag " "for VOS datasets that don't have all objects to track appearing " "in the first frame (such as LVOS or YouTube-VOS)." ) predictor.add_new_mask( inference_state=inference_state, frame_idx=input_frame_idx, obj_id=object_id, mask=object_mask, ) # check and make sure we have at least one object to track if object_ids_set is None or len(object_ids_set) == 0: raise RuntimeError( f"In {video_name=}, got no object ids on {input_frame_inds=}. " "Please add the `--track_object_appearing_later_in_video` flag " "for VOS datasets that don't have all objects to track appearing " "in the first frame (such as LVOS or YouTube-VOS)." 
) # run propagation throughout the video and collect the results in a dict os.makedirs(os.path.join(output_mask_dir, video_name), exist_ok=True) output_palette = input_palette or DAVIS_PALETTE video_segments = {} # video_segments contains the per-frame segmentation results for out_frame_idx, out_obj_ids, out_mask_logits in predictor.propagate_in_video( inference_state ): per_obj_output_mask = { out_obj_id: (out_mask_logits[i] > score_thresh).cpu().numpy() for i, out_obj_id in enumerate(out_obj_ids) } video_segments[out_frame_idx] = per_obj_output_mask # write the output masks as palette PNG files to output_mask_dir for out_frame_idx, per_obj_output_mask in video_segments.items(): save_masks_to_dir( output_mask_dir=output_mask_dir, video_name=video_name, frame_name=frame_names[out_frame_idx], per_obj_output_mask=per_obj_output_mask, height=height, width=width, per_obj_png_file=per_obj_png_file, output_palette=output_palette, ) @torch.inference_mode() @torch.autocast(device_type="cuda", dtype=torch.bfloat16) def vos_separate_inference_per_object( predictor, base_video_dir, input_mask_dir, output_mask_dir, video_name, score_thresh=0.0, use_all_masks=False, per_obj_png_file=False, ): """ Run VOS inference on a single video with the given predictor. Unlike `vos_inference`, this function runs inference separately for each object in a video, which could be applied to datasets like LVOS or YouTube-VOS that don't have all objects to track appearing in the first frame (i.e. some objects might appear only later in the video). 
""" # load the video frames and initialize the inference state on this video video_dir = os.path.join(base_video_dir, video_name) frame_names = [ os.path.splitext(p)[0] for p in os.listdir(video_dir) if os.path.splitext(p)[-1] in [".jpg", ".jpeg", ".JPG", ".JPEG"] ] frame_names.sort(key=lambda p: int(os.path.splitext(p)[0])) inference_state = predictor.init_state( video_path=video_dir, async_loading_frames=False ) height = inference_state["video_height"] width = inference_state["video_width"] input_palette = None # collect all the object ids and their input masks inputs_per_object = defaultdict(dict) for idx, name in enumerate(frame_names): if per_obj_png_file or os.path.exists( os.path.join(input_mask_dir, video_name, f"{name}.png") ): per_obj_input_mask, input_palette = load_masks_from_dir( input_mask_dir=input_mask_dir, video_name=video_name, frame_name=frame_names[idx], per_obj_png_file=per_obj_png_file, allow_missing=True, ) for object_id, object_mask in per_obj_input_mask.items(): # skip empty masks if not np.any(object_mask): continue # if `use_all_masks=False`, we only use the first mask for each object if len(inputs_per_object[object_id]) > 0 and not use_all_masks: continue print(f"adding mask from frame {idx} as input for {object_id=}") inputs_per_object[object_id][idx] = object_mask # run inference separately for each object in the video object_ids = sorted(inputs_per_object) output_scores_per_object = defaultdict(dict) for object_id in object_ids: # add those input masks to SAM 2 inference state before propagation input_frame_inds = sorted(inputs_per_object[object_id]) predictor.reset_state(inference_state) for input_frame_idx in input_frame_inds: predictor.add_new_mask( inference_state=inference_state, frame_idx=input_frame_idx, obj_id=object_id, mask=inputs_per_object[object_id][input_frame_idx], ) # run propagation throughout the video and collect the results in a dict for out_frame_idx, _, out_mask_logits in predictor.propagate_in_video( 
inference_state, start_frame_idx=min(input_frame_inds), reverse=False, ): obj_scores = out_mask_logits.cpu().numpy() output_scores_per_object[object_id][out_frame_idx] = obj_scores # post-processing: consolidate the per-object scores into per-frame masks os.makedirs(os.path.join(output_mask_dir, video_name), exist_ok=True) output_palette = input_palette or DAVIS_PALETTE video_segments = {} # video_segments contains the per-frame segmentation results for frame_idx in range(len(frame_names)): scores = torch.full( size=(len(object_ids), 1, height, width), fill_value=-1024.0, dtype=torch.float32, ) for i, object_id in enumerate(object_ids): if frame_idx in output_scores_per_object[object_id]: scores[i] = torch.from_numpy( output_scores_per_object[object_id][frame_idx] ) if not per_obj_png_file: scores = predictor._apply_non_overlapping_constraints(scores) per_obj_output_mask = { object_id: (scores[i] > score_thresh).cpu().numpy() for i, object_id in enumerate(object_ids) } video_segments[frame_idx] = per_obj_output_mask # write the output masks as palette PNG files to output_mask_dir for frame_idx, per_obj_output_mask in video_segments.items(): save_masks_to_dir( output_mask_dir=output_mask_dir, video_name=video_name, frame_name=frame_names[frame_idx], per_obj_output_mask=per_obj_output_mask, height=height, width=width, per_obj_png_file=per_obj_png_file, output_palette=output_palette, ) def main(): parser = argparse.ArgumentParser() parser.add_argument( "--sam2_cfg", type=str, default="configs/sam2.1/sam2.1_hiera_b+.yaml", help="SAM 2 model configuration file", ) parser.add_argument( "--sam2_checkpoint", type=str, default="./checkpoints/sam2.1_hiera_b+.pt", help="path to the SAM 2 model checkpoint", ) parser.add_argument( "--base_video_dir", type=str, required=True, help="directory containing videos (as JPEG files) to run VOS prediction on", ) parser.add_argument( "--input_mask_dir", type=str, required=True, help="directory containing input masks (as PNG files) of each 
video", ) parser.add_argument( "--video_list_file", type=str, default=None, help="text file containing the list of video names to run VOS prediction on", ) parser.add_argument( "--output_mask_dir", type=str, required=True, help="directory to save the output masks (as PNG files)", ) parser.add_argument( "--score_thresh", type=float, default=0.0, help="threshold for the output mask logits (default: 0.0)", ) parser.add_argument( "--use_all_masks", action="store_true", help="whether to use all available PNG files in input_mask_dir " "(default without this flag: just the first PNG file as input to the SAM 2 model; " "usually we don't need this flag, since semi-supervised VOS evaluation usually takes input from the first frame only)", ) parser.add_argument( "--per_obj_png_file", action="store_true", help="whether use separate per-object PNG files for input and output masks " "(default without this flag: all object masks are packed into a single PNG file on each frame following DAVIS format; " "note that the SA-V dataset stores each object mask as an individual PNG file and requires this flag)", ) parser.add_argument( "--apply_postprocessing", action="store_true", help="whether to apply postprocessing (e.g. hole-filling) to the output masks " "(we don't apply such post-processing in the SAM 2 model evaluation)", ) parser.add_argument( "--track_object_appearing_later_in_video", action="store_true", help="whether to track objects that appear later in the video (i.e. 
not on the first frame; " "some VOS datasets like LVOS or YouTube-VOS don't have all objects appearing in the first frame)", ) args = parser.parse_args() # if we use per-object PNG files, they could possibly overlap in inputs and outputs hydra_overrides_extra = [ "++model.non_overlap_masks=" + ("false" if args.per_obj_png_file else "true") ] predictor = build_sam2_video_predictor( config_file=args.sam2_cfg, ckpt_path=args.sam2_checkpoint, apply_postprocessing=args.apply_postprocessing, hydra_overrides_extra=hydra_overrides_extra, ) if args.use_all_masks: print("using all available masks in input_mask_dir as input to the SAM 2 model") else: print( "using only the first frame's mask in input_mask_dir as input to the SAM 2 model" ) # if a video list file is provided, read the video names from the file # (otherwise, we use all subdirectories in base_video_dir) if args.video_list_file is not None: with open(args.video_list_file, "r") as f: video_names = [v.strip() for v in f.readlines()] else: video_names = [ p for p in os.listdir(args.base_video_dir) if os.path.isdir(os.path.join(args.base_video_dir, p)) ] print(f"running VOS prediction on {len(video_names)} videos:\n{video_names}") for n_video, video_name in enumerate(video_names): print(f"\n{n_video + 1}/{len(video_names)} - running on {video_name}") if not args.track_object_appearing_later_in_video: vos_inference( predictor=predictor, base_video_dir=args.base_video_dir, input_mask_dir=args.input_mask_dir, output_mask_dir=args.output_mask_dir, video_name=video_name, score_thresh=args.score_thresh, use_all_masks=args.use_all_masks, per_obj_png_file=args.per_obj_png_file, ) else: vos_separate_inference_per_object( predictor=predictor, base_video_dir=args.base_video_dir, input_mask_dir=args.input_mask_dir, output_mask_dir=args.output_mask_dir, video_name=video_name, score_thresh=args.score_thresh, use_all_masks=args.use_all_masks, per_obj_png_file=args.per_obj_png_file, ) print( f"completed VOS prediction on 
{len(video_names)} videos -- " f"output masks saved to {args.output_mask_dir}" ) if __name__ == "__main__": main()



================================================
FILE: auto-seg/submodules/segment-anything-2/training/README.md
================================================
# Training Code for SAM 2

This folder contains the training code for SAM 2, a foundation model for promptable visual segmentation in images and videos. The code allows users to train and fine-tune SAM 2 on their own datasets (image, video, or both).

## Structure

The training code is organized into the following subfolders:

* `dataset`: This folder contains the image and video dataset and dataloader classes as well as their transforms.
* `model`: This folder contains the main model class (`SAM2Train`) for training/fine-tuning. `SAM2Train` inherits from the `SAM2Base` model and provides functions to enable training or fine-tuning SAM 2. It also accepts all training-time parameters used for simulating user prompts (e.g. iterative point sampling).
* `utils`: This folder contains training utils such as loggers and distributed training utils.
* `scripts`: This folder contains the script to extract the frames of the SA-V dataset to be used in training.
* `loss_fns.py`: This file has the main loss class (`MultiStepMultiMasksAndIous`) used for training.
* `optimizer.py`: This file contains all optimizer utils that support arbitrary schedulers.
* `trainer.py`: This file contains the `Trainer` class that accepts all the `Hydra` configurable modules (model, optimizer, datasets, etc.) and implements the main train/eval loop.
* `train.py`: This script is used to launch training jobs. It supports single and multi-node jobs.
For usage, please check the [Getting Started](README.md#getting-started) section or run `python training/train.py -h`.

## Getting Started

To get started with the training code, we provide a simple example to fine-tune our checkpoints on the [MOSE](https://henghuiding.github.io/MOSE/) dataset, which can be extended to your custom datasets.

#### Requirements:
- We assume training on A100 GPUs with **80 GB** of memory.
- Download the MOSE dataset using one of the provided links from [here](https://github.com/henghuiding/MOSE-api?tab=readme-ov-file#download).

#### Steps to fine-tune on MOSE:
- Install the packages required for training by running `pip install -e ".[dev]"`.
- Set the paths for the MOSE dataset in `configs/sam2.1_training/sam2.1_hiera_b+_MOSE_finetune.yaml`.

  ```yaml
  dataset:
    # PATHS to Dataset
    img_folder: null # PATH to MOSE JPEGImages folder
    gt_folder: null # PATH to MOSE Annotations folder
    file_list_txt: null # Optional PATH to filelist containing a subset of videos to be used for training
  ```

- To fine-tune the base model on MOSE using 8 GPUs, run

  ```bash
  python training/train.py \
    -c configs/sam2.1_training/sam2.1_hiera_b+_MOSE_finetune.yaml \
    --use-cluster 0 \
    --num-gpus 8
  ```

We also support multi-node training on a cluster using [SLURM](https://slurm.schedmd.com/documentation.html); for example, you can train on 2 nodes by running

```bash
python training/train.py \
  -c configs/sam2.1_training/sam2.1_hiera_b+_MOSE_finetune.yaml \
  --use-cluster 1 \
  --num-gpus 8 \
  --num-nodes 2 \
  --partition $PARTITION \
  --qos $QOS \
  --account $ACCOUNT
```

where partition, qos, and account are optional and depend on your SLURM configuration.

By default, the checkpoint and logs will be saved under the `sam2_logs` directory in the root of the repo. 
Alternatively, you can set the experiment log directory in the config file as follows:

```yaml
experiment_log_dir: null # Path to log directory, defaults to ./sam2_logs/${config_name}
```

The training losses can be monitored using `tensorboard` logs stored under `tensorboard/` in the experiment log directory. We also provide a sample validation [split](../training/assets/MOSE_sample_val_list.txt) for evaluation purposes. To generate predictions, follow this [guide](../tools/README.md) on how to use our `vos_inference.py` script. After generating the predictions, you can run `sav_evaluator.py` as detailed [here](../sav_dataset/README.md#sa-v-val-and-test-evaluation). The expected MOSE J&F after fine-tuning the Base Plus model is 79.4.

After training/fine-tuning, you can then use the new checkpoint (saved in `checkpoints/` in the experiment log directory) in the same way as the released SAM 2 checkpoints (as illustrated [here](../README.md#image-prediction)).

## Training on images and videos

The code supports training on images and videos (similar to how SAM 2 is trained). We provide classes for loading SA-1B as a sample image dataset, SA-V as a sample video dataset, as well as any DAVIS-style video dataset (e.g. MOSE). Note that to train on SA-V, you must first extract all videos to JPEG frames using the provided extraction [script](./scripts/sav_frame_extraction_submitit.py).
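When several datasets are mixed during training and no explicit per-dataset sampling probability is configured, `TorchTrainMixedDataset` (in `training/dataset/sam2_datasets.py`) weights each dataset by the number of batches it contributes per epoch. A minimal standalone sketch of that default weighting (plain Python, not the repo's implementation; the function name is ours):

```python
import math

def default_mixing_probs(dataset_lens, batch_sizes, drop_last=True):
    """Mimic the default dataset_prob: proportional to batches per epoch."""
    n_batches = [
        (math.floor(n / bs) if drop_last else math.ceil(n / bs))
        for n, bs in zip(dataset_lens, batch_sizes)
    ]
    total = sum(n_batches)
    return [b / total for b in n_batches]

# An image dataset of 1000 samples (batch size 8) mixed with a
# video dataset of 3000 samples (batch size 4): the video dataset
# contributes 750 of the 875 batches, so it is sampled more often.
probs = default_mixing_probs([1000, 3000], [8, 4])
```

Note that sampling stops drawing from a dataset once its dataloader is exhausted (its probability is zeroed out), so the realized mix can drift from these nominal weights late in an epoch.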
Below is an example of how to setup the datasets in your config to train on a mix of image and video datasets:

```yaml
data:
  train:
    _target_: training.dataset.sam2_datasets.TorchTrainMixedDataset
    phases_per_epoch: ${phases_per_epoch} # Chunks a single epoch into smaller phases
    batch_sizes: # List of batch sizes corresponding to each dataset
      - ${bs1} # Batch size of dataset 1
      - ${bs2} # Batch size of dataset 2
    datasets:
      # SA1B as an example of an image dataset
      - _target_: training.dataset.vos_dataset.VOSDataset
        training: true
        video_dataset:
          _target_: training.dataset.vos_raw_dataset.SA1BRawDataset
          img_folder: ${path_to_img_folder}
          gt_folder: ${path_to_gt_folder}
          file_list_txt: ${path_to_train_filelist} # Optional
        sampler:
          _target_: training.dataset.vos_sampler.RandomUniformSampler
          num_frames: 1
          max_num_objects: ${max_num_objects_per_image}
        transforms: ${image_transforms}
      # SA-V as an example of a video dataset
      - _target_: training.dataset.vos_dataset.VOSDataset
        training: true
        video_dataset:
          _target_: training.dataset.vos_raw_dataset.JSONRawDataset
          img_folder: ${path_to_img_folder}
          gt_folder: ${path_to_gt_folder}
          file_list_txt: ${path_to_train_filelist} # Optional
          ann_every: 4
        sampler:
          _target_: training.dataset.vos_sampler.RandomUniformSampler
          num_frames: 8 # Number of frames per video
          max_num_objects: ${max_num_objects_per_video}
          reverse_time_prob: ${reverse_time_prob} # probability to reverse video
        transforms: ${video_transforms}
    shuffle: True
    num_workers: ${num_train_workers}
    pin_memory: True
    drop_last: True
    collate_fn:
      _target_: training.utils.data_utils.collate_fn
      _partial_: true
      dict_key: all
```

================================================
FILE: auto-seg/submodules/segment-anything-2/training/__init__.py
================================================

# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
================================================ FILE: auto-seg/submodules/segment-anything-2/training/assets/MOSE_sample_train_list.txt ================================================ 28191f94 662487fe 80906bf9 7e704f2e efa25913 b6f03bd9 6834d249 5a723c30 07779415 4ce088c6 199995b5 54273925 4fa342f5 110da3cf 65856fa0 46705bb3 d869a3cf 555aa049 8f01fb2c 37b07a28 5e80b3dd ba0e4dd4 6f5144b6 acec8407 93723f88 c7c7528c 97f58761 e71f9faa e64c13dc 8830d59d 0e4aeed9 63437cf3 95215aa1 255f86ef dc54aab2 327cd258 198021ad c690220c d25ff89d 7875b874 4fa6d325 9fc933f6 4d8baafe 55ae6921 6a3bc149 89f8163f 2d65d2ac dba172b1 a14de179 4017d1b3 52ddf44c 3ba93641 34a5f964 da7dee28 872b76de 1dc12eca 265a69f4 86a2b59f 51e5ca25 ddf80bcd 6786602e 4fa28c89 f56942e9 2184bb93 d883e976 bfe1469e bc4e7b11 1c80acb0 2b0e34d3 56b9ce41 15f0b0cd cc5d0dd1 1b7eada8 7286b176 0ab42ab1 adb82dc9 c060b1e6 3da63bd5 5488796e d7066e20 aab5ed11 17f66311 24df9789 208fa934 7ce2c865 debe4249 4c56bbea 149dbae2 beb693c9 49eb0315 e7ad4717 4e016d5a 95e24093 07b5d86c 80701b6c 337dfa1e b624a46e 3f849de8 5db21df2 47891b4c a966d7fd 013103f6 da5e4bc5 ba9ea03d 526195de 57f3a53e b3aff7f8 26048547 bb7ee856 aef0d049 e35a8262 57ad022e f45d3823 e5e9eb29 39cc637e a4fc4f17 dd5a4739 bbe97d18 33602f6b 9061dac9 23454d80 a20baeec 794f01d4 02de2f2a 055fca57 a69df343 e307510e d07ad1be 1fc5e086 db6533a5 fe9706b7 87e32230 8ba58e4c 561f6380 2ab9ba0f 86571569 756cc6c9 aa185af5 c6d7f94b 7f54c579 71f4b40e 4190c83a fef0aba4 2f7c71bb e4b6f2ef 76adaeea 11cdeb64 733f2a02 e50dbddb f643141f d2e75e95 84559bc3 7ade3068 e69db797 0b787263 57895315 d7969c29 62529cd4 203733e7 48fd97a6 723fd024 849f0efb aafea009 dd4eb8f1 d18554ae f3c0f0cf 90fe55b9 b0ffaf3b e79ecd47 d670ce7b 56a5643a 90ff1d09 1fb378d9 57014c7d 994ed763 5bc7ea74 e99bd793 cbb66185 5f3fcff6 05ed1023 85efa9e3 652929ce 905d8740 a6fcde01 0fdf67f7 a5cf4c8d e1c48bdd 782551f7 6acd353f c30641cf 81d12756 51befc31 9d5ab5ca d262b7e4 2cd705a9 f7360199 d3f3bf9d 028f6f64 94767cb4 3a739934 72433603 
ec66879d 6149becc 5845c157 c5082b3c f89b54d0 f3ada126 409dcb8a 4411fdee eb93ed20 9cb1ba0e b8e1ec26 7edd8b4f 5e9412c0 2744f35a dafeb75e f3f072f2 6f1df574 5a064706 89c76ac4 a6adef89 76303516 dbd67417 a53ef3fa 10552818 ac7deb19 2d403c59 55c157f1 214aeac3 a9f5e251 d7807996 d1dba33b 1367e367 44476e77 0644075b eda37457 f2de4198 9a4ce701 46e00caf 2ae75f99 cd49fb99 4e4483e7 a0669957 a6f0d882 9ce1d54a 1fc2314b 21f363b3 32ecef67 70bcaf68 115348f9 60827ada a218e951 6d30d5ac 6da17988 f22c39ce 5825f0e0 f415f9ad 0d4feda2 832fc243 414ca58b a92390a0 ddd383cc 43dc67f7 962ae0e2 6dd74e7b 2bcd6c3b b394847f 637fd121 d46e771b f6bfc699 63f138de 932ad0a6 2080824a 52fa9174 843d3bf7 f3431885 5c20c48a 134a2ab0 2ea465de f6786ab5 2bf49664 a49ce97b 6a50e93a a7c21e95 616ad8ec 0a8d7b41 b0c90527 2d893fb7 19310598 7744dc51 4539b907 9d299f60 e495537a 0b02886a f4c4a2ca e957b2b5 e6f3bf07 258944c8 54364322 ebb77f95 0af03282 cbdbc6c3 494ecef0 ee91f783 9698f06e 11e16068 b942ce0a 423a50e6 fb16e746 9c88ae45 8620c024 d3af3c85 780a25de e569a15f c4f9f19e 1106f3a7 d37e29a7 e53611da fdb2e432 18ad3117 6fcd426d 3bfa8379 3b19c5c3 ff1142df cd182615 b60ea255 b3f5d019 6dc5e55d 103166c7 37af9ac1 ad1881d1 731149b3 90e3338a 6aa0b6f2 a25316a3 dc8679e0 571fb490 80afed16 983a551b a58578e5 2bc0bba4 1143b3fe fdd8dd49 7fe2bf77 890ef032 8466eeb2 c791ddbb 631b82bd 78bf9b51 a99df45f 2bdb692f e89b1501 4e6aa1e8 e5665030 fe21fd5c 635577d5 4414cd3a 03c99e83 ff041cd1 c33adbc2 a988ec74 576031e0 03c21af7 79b25f4b bbc485d6 d36d5a0d efdab888 b20e6781 81fdc526 e1c26a53 7c6d3504 52a04667 f22e34d4 bb936ead 13f0606c d2abc61e af509e8f bea1c144 e15e4de8 e727099f b30744df ffb6a2e4 0d31d3a6 a23048fe 7d452630 6c736334 046ed4f4 94f4c2aa c290cfd3 f7203226 2fdae3c5 7c78e351 02b72b8d 2d22d3be ba28d02e 197f6587 43199a98 b563b04f 9293b755 9cef7489 d156b96f 15e9161e 6d094cd5 0d876a65 c818d30a 8094b12b a4a8e24b 14655f54 11c14893 8a48f62a 7f3d9c22 d952481c 03e0f9b8 28980657 6a0b5563 5879983c 37549a79 4a7162bd 7a6aa1ef 0dc1b78c f6dba17b 1dba51af b2f4d608 
e2e6f421 464066da 5d24e4ea 1e75004d a02ed92c 673adbcc c2a0c0fd 85addee5 54b8f502 f5d2d8d3 a19507e1 803e1756 0d1fe009 5968c2d8 b926e1ad a9162e14 ae470d2b bd731802 68c879f2 21fe05d9 c1ed21d0 831498e4 cc45a7f2 cb170015 59750be4 30d1cb6b 03e5f069 106d33db 3f003746 3e5ad020 8bc5a91c 64b89eb5 bfd28682 f8687b9a 7bbf38ee d6d92b30 ceaa6c65 677c8ed7 dc33acf8 cfd1de31 e5be4781 85585220 5d2316f6 dd3f4a07 34535f5f 3ae0bc5d f521e3c5 74c2284f 12a42fd9 61403519 88cd32f3 662a1846 825a1944 cf376cf1 8465d99c 61a2e246 62d44645 103b3ca8 c7e745ed 4ed71139 230c2edf 529c6889 9e509c0d 54b9dea2 a8934c0d 29cffe2f 48017512 c9f7f69d ce691ee6 21c89360 3b97c07b ebd82d35 2895bb8b 7043c5c1 85d694d7 88fd7507 18d8931e aa718745 89b671bb 0d8d30ae 26163977 a6121689 1589579d 159789c4 f5ca8271 fcc16740 3158be0b 860fc1f7 3f54a330 82f24ce7 069f6a2a 2fa9c523 c9f1d87f efe9cbca 8f969ea5 4f5db794 62c501f8 2d3b0320 c99637f0 0f3b1fcb 6e4ee861 e0d9aff0 230ddb91 e14d1f96 c83aa6a1 eabdf66a 6783a303 81659eb2 ce954bd7 9a48c0c9 0ab807b4 f0617f71 fe86f2f8 61d80e22 e4b6d2a0 ac093040 0e05fabe d0b507c3 3d828137 c4fa0bab f7783321 ec27366a 404e4c58 073baf48 0f685e01 b0e98fdd b4891f7f a46b7b77 ee059f99 3c87888e 8d23ddcc 2d8d7d35 5680be79 fc79c03e 20660b72 53f67585 90956534 7e709e2d dae93f5c 54b9dbba cc41ba05 1e207fe0 a9c6abf2 35e0ca09 e3dcd186 1b8bb699 92162474 cdad6812 50b91533 570215ac 6042d64a b6e2c041 08746283 7a056996 b8651773 adf443e1 6a6e0e3b 886ed981 c1d57fea 43030c4c 7ebfbf57 0770ad03 e85301d5 31ac3d98 acaef45e 8f415dd1 fe2dc281 2c0b9d99 8e24501e 911ec4ad 8036b58e c3b350b9 b6cadd11 a3a80cf7 88ab50cd 59c755a8 1339321a 91b2f707 97b0811e 1da33959 31b09833 c1a40349 708098a9 1f220f98 999e07cb 0b5e5d29 94c63453 b826d642 a598602d 4c83eab8 2efd5e50 6ec5da3a 9fcd95eb 9a2c6b5b c205a718 e638e950 cb43141c 494dd91d c4957274 4975a81d a1f4c54d 51e6fafa 514490e5 b0d09e6a c6726eb8 06772c9a 5a65ffd7 3657c62b 03012cfd 529df209 f1c38e66 ab417352 118a067e 8957514f 22e8b380 3b1a4616 a4457543 57c9f6e0 e362c16b 0f809e41 857e375e 9cff25e3 
d754fb65 6ad44b86 051052d8 a4564b94 f68507d0 80a7cf7b ad8cd1e0 60b19cd3 274fe944 f06632aa 628a337b 92c96c05 87fc565c 6f6e6c37 228a0234 6487110a aa911a8e 40c47fa3 9606508b 6ba9e61f c8c1d5a9 cf01df5b 9421b9ad 006e6b64 1c28e081 06273084 8925e11b b46c822b 00501424 cfd946b2 2e92a7dc 1c5f5bb6 1d29944c 8248698e 19247506 1eac1aff ee9caa47 4a41cbf8 d97c9309 4ca87c14 9707f1e3 8bb9a221 6605e67d 95cf72d7 1c6fb814 033130b2 4344808d 5f14e5d2 a810399b e325a6d4 7014ddf4 725d4bfb 790285e8 1a6a731f fbfb6e30 0d4d88f6 80ce18a4 572495b7 4b44dc50 95dce33c 4a6fb202 3142014e a3c56751 96b2a414 c4aa176c fd1e394f 93f0f509 f494e9fa bfa42a75 db5319c7 aa92e070 81220a93 e4a72496 fc467bf1 5397b01d 1dc0c9a0 f6f8b4a6 53dc7db4 8ef303eb 62ca45c9 e9d3465e 3784e3f6 8c934e67 5ba84e3f 30e41f1e 61cf0ec8 e93e8f01 fc6086dd a95f0aea 33a04ef2 6f295adb d2aa8c66 724cc810 d8623d26 8d0d641a 4bda7a76 38030c69 56199c41 d2f4b9e2 a7b8ac96 64044df1 fd1078cc 0165667b 16e1cca7 915f0d9a eeaaa67e 378430d5 a84c60e6 b4ae36cc 2a3a0571 13e6df75 aa348c45 59d7a11d 68954daf d6f883c6 f28b429a 32dc49d4 ccf14ee0 7d512591 9bdabdb2 ed878d94 54eda06d 132561ee 3c4b6736 0367af42 531c1c36 843d8f25 333bdbdc c3c21268 07b00746 c7fe0584 49fc9f2e 9ed4317a d29991b4 98b0033d f0b922bf 89fe6899 58264713 2f49220a 6ff85ca5 4b96b2c8 a42f54f5 aa425600 22fdee40 dde85a9d 3722f6fe e7529cbc 5ae23f9f cc32235b 730bc486 b12701b7 a96b3010 16130bd3 2c713560 f7935d24 a7eb6616 0d6e7177 100edaef 0442a954 60f4fa43 37bf7edf 76b18413 ab0646a9 c575434d 1e356390 5416fbb7 df7cf932 269872de 9033b607 c2e88575 932542cd 23e046fb 3d08dadd 7999adc5 ed81c485 3bd7facd 1feae28e 8d72533b 6a8d35d6 65308bdc 7f0b7662 98290486 fee3371f c463c7e5 faf7d852 75c34dc5 96a6722e e5605136 851bc5d9 15c41c4b 6a39e104 5fbff256 0e7001dd 5411113f 3ea2f7f2 242b74b1 87727003 ec6dd0e9 980baf58 9d0b7bf1 9113c9d4 5ebef6bd a5f70ce7 b0240233 06ad78e0 8745edd0 d8e8d984 ac32a655 38568758 d48c552d 0b27d5f7 c65d0736 800e3c14 d37a5857 bcebc660 d3ab52cc 405e3ee7 e33cddc9 b0197182 89fd5681 9e192417 8554c402 
aae923b8 31af515d 75b26f88 60471744 460945aa c0fe8e1a 1731babb 2e85e35d f9c20062 115da184 ddfa88c7 359003f8 dfa99126 bf04814f f407a414 e18723c4 0a7a3629 c07ab37e 1251a1c9 4d09d22a 5984ed74 34504f63 ced51047 08ff419c d942e98c 2697f864 3b671a61 72a2f7e2 48e7cafe 6adad2f7 18840617 1e44f47e 36cc4055 8c494902 2982de7a 6a428397 c4a0ecfb 231d6945 fe470104 f93e1bd0 bd18bc5a 7bd70d93 8f81a0ee db78e7a1 7593caea 86d5b29b 5457b298 0d967fd1 62372d4c 68259db3 f0944ea2 7b017dbf bcb6e338 03692b14 f7d36a47 1ca2531a 6728528d 1fc0e6a8 0ba9c5ad a386eaa2 b0c5459f 1d64aff3 b97d4f1a b3745d91 c461003e 910bf878 ae42601c 8d2ddeff aaecaa39 250b5034 edb11192 7bfe9b57 6d533759 51586b36 a38d648a 8fdb48e5 6075d6b0 3588ea03 bc844942 398d41f5 660e3b70 0b99f522 f169fd1b 7bfa2ab5 ab461319 25153e58 002b4dce a2df1bee 550a7357 b604f2dd 2f477d05 bdf9eb5a 857ddc6e c8f0fd41 6df96f15 e147ab26 788da8e8 02221fb0 d1d95c61 a3f0cb28 3a6e6ace 67c2909a 220382ab eaed776d aff08a61 b99d1bd6 9d9ae988 34ccea00 41dae436 18513251 ad57acd1 67f110fc 3f09f5c9 25ef7d43 12a5d0d7 3ff48b8b 26ed56e6 c047a092 bb8639e1 8788747f 584838d4 f8e5f837 657242e8 cb8eedf4 74a917f1 578f71da c9b27125 22e1f53c f40145c2 4795259b 3f313a2f c9012bf6 22167a50 6e7f9437 ef51a724 356e0fcb d3ea999d 08a5c662 85aa3b0e 579fadec 7bc95dc2 c097af8e f01d8b9f 80fb79c6 ea65e6b7 29ff29f6 9e1f739d b7fb59c9 e2160f17 0be33bc1 e96b9b04 b1affe79 c4f4b2e2 f4c8ffb1 6a009e50 a8828854 2786f841 a64e724c 5f54d077 7040385d 6e0f0ecc f33d3c15 8108b358 46a502de 1e0fb02a ddbdfa32 e7b34ab6 c9080ed1 395224b3 33f9ab47 c245ecda c28d81a9 37303a3b 6380dd6f 2fb5a55b 83b7c53c 41c8d0d2 3aab2d13 dc7d21fb 86a88668 37bb38fe ab6413a8 bbe585b2 a0ca072a 9d5940d2 ddb1d0b1 a946317a 988b29a4 89dc0432 5df8490d 5e167efa 50a86faa fe6a535a a9f8b8b4 6e2dce1b d0696759 c09da3b2 f07dd347 67408899 406165ff a4a9d03d 9b5f0f47 5f3e8022 1d7a23e0 25af2eeb 82a3db34 c9351029 6c93d44c f088ad1c 9ee59f51 b5276b3f ca74a924 781af187 fa3e0b85 b898c99e 1ca51f06 5a92a0c1 138c81fe d0722d0f 05a7d84d e18f1dea 799a2d61 
8276e558 f0ba8748 ce733e8a 2f9d0911 58f24fa4 66a25278 3135d31d 4b9223ee bdd5e6b3 ddbebec1 8dbebbd9 3020b38f e607450d 724a5d1c 91b754c5 2e85e790 3a407bd9 fd137178 a304029b 4023fc77 440d5072 2eb73c7c 164a7305 b33ade7c 277ad883 b0f7e75c 74107936 83924bdb b72beb78 86c01d64 f6f441eb 23b9a3ea 80b73f1a 93c6411d 1e95ef5e 800b5eac 9519832a ae043406 b06a902e 1dbca5cc 571f88a1 b1faf52b 45572497 8d016cdb f92cdae8 316931f8 f9884439 e1b7f212 e23c6392 ccfae073 5aa1efda 74f0687c eaff3301 b6520a94 c5398714 15e7e4d1 0fc00006 8cf49218 3a8ddc0a e7e2a0b9 eec4c008 8d73085e 77e246da 00e92ab4 f76f6cf9 19801183 233406ef b80e028c 342c0b2a a2768c47 99350a74 adbd400b f3978ade b87a4f6c fa95a6a2 6dff20c9 935b5ad8 dbbbb401 1b6472c1 9c0e6331 04ae7a6b 4c94e4f3 90cb46cb 2831ecf5 ff77a145 79af6097 ba61a719 abcb7665 7e87750e c4c7bc5d 3a670b81 3d9a7023 82667d52 a4587f62 ca619b7f 7c5462f5 bda5c60d e6e48ac8 405c6000 7981f344 f7375ab3 bb467ff9 cfc68a82 e417a6d8 1a6177c1 7b75dace b1af350d 484d48a3 1f805416 7416ab4e 1291276c 9e85179b 5a74660c 7e6d00df 01e3cec8 ee2c0688 f6de8226 a217538c b432c3ef 49e5ff4e 035359e5 8ae8e7ed 2da12766 cac39070 115adda4 1a2872dc fac3378e 294e7bf8 a1a4991f c062f4d7 72b2b77d 158062aa 9ae447a7 a7b05677 fdfd5d56 eac1a9e6 a5905593 59992293 84298fae f708e55f 093d3d93 75d26197 924f5d88 3184a7ec b454fdbc 2d9101b8 ae70fb7c 4385b2c4 63b37343 0b4b662c 2883ae72 ffcab778 0f96e2d7 897066e3 f23e98ad 797a7b7e 2fc476f9 ================================================ FILE: auto-seg/submodules/segment-anything-2/training/assets/MOSE_sample_val_list.txt ================================================ 32e5d721 5bad0bab 267bfd6c 0a43a414 56c56ca9 9a1146b3 c6ad7aaf 78a1f4b1 fc455e73 072e7b3f 77ccb57d a76ee415 8cdcfc17 5d518b42 376dd830 0e843fc8 2af0e766 2bd4e845 de2f2a6a ade9ee91 001ca3cb fc4c1c67 8ef55579 b84ce852 4cc8528a 767ffaaa 112a2ef0 a338c8aa cbd144f5 5ff72128 86a949e2 9f2323ac 1fab1d1c 75924351 ef55817b 02deca50 4d979d99 4d65f873 28470fa0 0d1575fe 06ea172e 29a6ddc2 797f1bec 780e7a99 
b9ed5b44 02a236b4 607d8ff5 af5666b2 0558d0ed a938c6b2 103df575 77110e80 739e5a07 6763a576 06ebc138 ba4b3b09 b35cc2f3 4e0597a0 5949ee84 5348d547 323c4236 b3b51117 55727ddd ab2714f3 d2878895 c0734cb3 94f7c53e 2a2745e5 442ffb54 3592425a 50ae03b0 5f150435 3067f9fa 9ffb2818 adeaf5aa 31caacec 1cd99b86 aa22f9d0 8fa50320 e6348d2c 42ff84a5 8c8b7913 c96adcbc 495be321 db735509 ee113fc4 a678cdab c409ca4d 68d2b259 592b4dee 4e2b4dc7 eb4d26e1 2009a00f bec5c89d 67191f24 a3e85b4b da7080cd 80d978e9 36dcb93f a41e8c44 12fdc864 46d140ea 657c9dd9 a86f84ee 90c1c43d 33015509 afc7664d 23df06e1 291d4799 0ab75563 251bf059 bcefdcc4 ce9a2796 94d3403a 8f2e04bc f9cda066 9dfa2cc5 66924c91 e765a09e 15654ee1 48e0bd39 ee095221 2463609b 544d0d1f 51b8c2e1 d321dde4 4cb11a5f d7058a0d 37af282a fabae187 7be91184 181ec185 2d16ceeb b56be4b1 6699eff0 79acac96 d61c4665 0c13e1e7 100f6ecf 71217dfc 82df0888 4c42c747 c9fdf703 d2efeb4b 69ed9d14 64914fb6 255bedbc 4ea934d8 a034feb2 e4f4ddae e36a3026 c1489591 111bb373 e1d9fb32 93e22d48 c1ec4b26 d9638e69 60ab04c5 cfe7773a 62132822 2f5fb2a3 7bdd197d 033333fd 130fcdbe 12e509c2 67138c33 6f90cc5f 4e3020fe bbdd8bb7 b399ccdb fecd10d2 2e0967f7 f509054f 792c6ff7 48e2afc5 d904c048 111e0a5c b83024e2 e6a7b79c bdc5ccf7 b8146d00 9d394f1a 645b84f9 95ab2d0f e6f8a31d b4f876fb dc2c570d 3afd02d7 5c80c82c b1b32ddd 9f25fc61 ba538072 f8916fef 43c04ad2 a658e949 2861dd53 f6e40aba 09d305d1 aac33bff 8d9d4c08 ================================================ FILE: auto-seg/submodules/segment-anything-2/training/dataset/__init__.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. ================================================ FILE: auto-seg/submodules/segment-anything-2/training/dataset/sam2_datasets.py ================================================ # Copyright (c) Meta Platforms, Inc. 
and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. import logging import math from typing import Callable, Iterable, List, Optional, Sequence import torch from torch.utils.data import BatchSampler, DataLoader, Dataset, IterableDataset, Subset from torch.utils.data.distributed import DistributedSampler class MixedDataLoader: def __init__(self, dataloaders: List[DataLoader], mixing_prob: torch.FloatTensor): """ Args: dataloaders (List[DataLoader]): List of DataLoaders to be mixed. mixing_prob (torch.FloatTensor): Probability of each dataloader to be sampled from """ assert len(dataloaders) == mixing_prob.shape[0] self.dataloaders = dataloaders self.mixing_prob = mixing_prob # Iterator state self._iter_dls = None self._iter_mixing_prob = None self.random_generator = torch.Generator() def __len__(self): return sum([len(d) for d in self.dataloaders]) def __iter__(self): # Synchronize dataloader seeds self.random_generator.manual_seed(42) self._iter_dls = [iter(loader) for loader in self.dataloaders] self._iter_mixing_prob = self.mixing_prob.clone() return self def __next__(self): """ Sample a dataloader to sample from based on mixing probabilities. If one of the dataloaders is exhausted, we continue sampling from the other loaders until all are exhausted. """ if self._iter_dls is None: raise TypeError(f"{type(self).__name__} object is not an iterator") while self._iter_mixing_prob.any(): # at least one dataloader with non-zero prob. dataset_idx = self._iter_mixing_prob.multinomial( 1, generator=self.random_generator ).item() try: item = next(self._iter_dls[dataset_idx]) return item except StopIteration: # No more iterations for this dataset, set its mixing probability to zero and try again. self._iter_mixing_prob[dataset_idx] = 0 except Exception as e: # log and raise any other unexpected error.
logging.error(e) raise e # Exhausted all iterators raise StopIteration class TorchTrainMixedDataset: def __init__( self, datasets: List[Dataset], batch_sizes: List[int], num_workers: int, shuffle: bool, pin_memory: bool, drop_last: bool, collate_fn: Optional[Callable] = None, worker_init_fn: Optional[Callable] = None, phases_per_epoch: int = 1, dataset_prob: Optional[List[float]] = None, ) -> None: """ Args: datasets (List[Dataset]): List of Datasets to be mixed. batch_sizes (List[int]): Batch sizes for each dataset in the list. num_workers (int): Number of workers per dataloader. shuffle (bool): Whether or not to shuffle data. pin_memory (bool): If True, use pinned memory when loading tensors from disk. drop_last (bool): Whether or not to drop the last batch of data. collate_fn (Callable): Function to merge a list of samples into a mini-batch. worker_init_fn (Callable): Function to init each dataloader worker. phases_per_epoch (int): Number of phases per epoch. dataset_prob (List[float]): Probability of choosing the dataloader to sample from. Should sum to 1.0 """ self.datasets = datasets self.batch_sizes = batch_sizes self.num_workers = num_workers self.shuffle = shuffle self.pin_memory = pin_memory self.drop_last = drop_last self.collate_fn = collate_fn self.worker_init_fn = worker_init_fn assert len(self.datasets) > 0 for dataset in self.datasets: assert not isinstance(dataset, IterableDataset), "Not supported" # `RepeatFactorWrapper` requires calling set_epoch first to get its length self._set_dataset_epoch(dataset, 0) self.phases_per_epoch = phases_per_epoch self.chunks = [None] * len(datasets) if dataset_prob is None: # If not provided, assign each dataset a probability proportional to its length. 
dataset_lens = [ (math.floor(len(d) / bs) if drop_last else math.ceil(len(d) / bs)) for d, bs in zip(datasets, batch_sizes) ] total_len = sum(dataset_lens) dataset_prob = torch.tensor([d_len / total_len for d_len in dataset_lens]) else: assert len(dataset_prob) == len(datasets) dataset_prob = torch.tensor(dataset_prob) logging.info(f"Dataset mixing probabilities: {dataset_prob.tolist()}") assert dataset_prob.sum().item() == 1.0, "Probabilities should sum to 1.0" self.dataset_prob = dataset_prob def _set_dataset_epoch(self, dataset, epoch: int) -> None: if hasattr(dataset, "epoch"): dataset.epoch = epoch if hasattr(dataset, "set_epoch"): dataset.set_epoch(epoch) def get_loader(self, epoch) -> Iterable: dataloaders = [] for d_idx, (dataset, batch_size) in enumerate( zip(self.datasets, self.batch_sizes) ): if self.phases_per_epoch > 1: # Major epoch that loops over the entire dataset # len(main_epoch) == phases_per_epoch * len(epoch) main_epoch = epoch // self.phases_per_epoch # Phase within the main epoch local_phase = epoch % self.phases_per_epoch # Start of a new data-epoch, or the job was resumed after preemption. if local_phase == 0 or self.chunks[d_idx] is None: # set seed for dataset epoch # If using RepeatFactorWrapper, this step correctly re-samples indices before chunking.
self._set_dataset_epoch(dataset, main_epoch) # Separate random generator for subset sampling g = torch.Generator() g.manual_seed(main_epoch) self.chunks[d_idx] = torch.chunk( torch.randperm(len(dataset), generator=g), self.phases_per_epoch, ) dataset = Subset(dataset, self.chunks[d_idx][local_phase]) else: self._set_dataset_epoch(dataset, epoch) sampler = DistributedSampler(dataset, shuffle=self.shuffle) sampler.set_epoch(epoch) batch_sampler = BatchSampler(sampler, batch_size, drop_last=self.drop_last) dataloaders.append( DataLoader( dataset, num_workers=self.num_workers, pin_memory=self.pin_memory, batch_sampler=batch_sampler, collate_fn=self.collate_fn, worker_init_fn=self.worker_init_fn, ) ) return MixedDataLoader(dataloaders, self.dataset_prob) ================================================ FILE: auto-seg/submodules/segment-anything-2/training/dataset/transforms.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. """ Transforms and data augmentation for both image + bbox. 
""" import logging import random from typing import Iterable import torch import torchvision.transforms as T import torchvision.transforms.functional as F import torchvision.transforms.v2.functional as Fv2 from PIL import Image as PILImage from torchvision.transforms import InterpolationMode from training.utils.data_utils import VideoDatapoint def hflip(datapoint, index): datapoint.frames[index].data = F.hflip(datapoint.frames[index].data) for obj in datapoint.frames[index].objects: if obj.segment is not None: obj.segment = F.hflip(obj.segment) return datapoint def get_size_with_aspect_ratio(image_size, size, max_size=None): w, h = image_size if max_size is not None: min_original_size = float(min((w, h))) max_original_size = float(max((w, h))) if max_original_size / min_original_size * size > max_size: size = max_size * min_original_size / max_original_size if (w <= h and w == size) or (h <= w and h == size): return (h, w) if w < h: ow = int(round(size)) oh = int(round(size * h / w)) else: oh = int(round(size)) ow = int(round(size * w / h)) return (oh, ow) def resize(datapoint, index, size, max_size=None, square=False, v2=False): # size can be min_size (scalar) or (w, h) tuple def get_size(image_size, size, max_size=None): if isinstance(size, (list, tuple)): return size[::-1] else: return get_size_with_aspect_ratio(image_size, size, max_size) if square: size = size, size else: cur_size = ( datapoint.frames[index].data.size()[-2:][::-1] if v2 else datapoint.frames[index].data.size ) size = get_size(cur_size, size, max_size) old_size = ( datapoint.frames[index].data.size()[-2:][::-1] if v2 else datapoint.frames[index].data.size ) if v2: datapoint.frames[index].data = Fv2.resize( datapoint.frames[index].data, size, antialias=True ) else: datapoint.frames[index].data = F.resize(datapoint.frames[index].data, size) new_size = ( datapoint.frames[index].data.size()[-2:][::-1] if v2 else datapoint.frames[index].data.size ) for obj in datapoint.frames[index].objects: if 
obj.segment is not None: obj.segment = F.resize(obj.segment[None, None], size).squeeze() h, w = size datapoint.frames[index].size = (h, w) return datapoint def pad(datapoint, index, padding, v2=False): old_h, old_w = datapoint.frames[index].size h, w = old_h, old_w if len(padding) == 2: # assumes that we only pad on the bottom right corners datapoint.frames[index].data = F.pad( datapoint.frames[index].data, (0, 0, padding[0], padding[1]) ) h += padding[1] w += padding[0] else: # left, top, right, bottom datapoint.frames[index].data = F.pad( datapoint.frames[index].data, (padding[0], padding[1], padding[2], padding[3]), ) h += padding[1] + padding[3] w += padding[0] + padding[2] datapoint.frames[index].size = (h, w) for obj in datapoint.frames[index].objects: if obj.segment is not None: if v2: if len(padding) == 2: obj.segment = Fv2.pad(obj.segment, (0, 0, padding[0], padding[1])) else: obj.segment = Fv2.pad(obj.segment, tuple(padding)) else: if len(padding) == 2: obj.segment = F.pad(obj.segment, (0, 0, padding[0], padding[1])) else: obj.segment = F.pad(obj.segment, tuple(padding)) return datapoint class RandomHorizontalFlip: def __init__(self, consistent_transform, p=0.5): self.p = p self.consistent_transform = consistent_transform def __call__(self, datapoint, **kwargs): if self.consistent_transform: if random.random() < self.p: for i in range(len(datapoint.frames)): datapoint = hflip(datapoint, i) return datapoint for i in range(len(datapoint.frames)): if random.random() < self.p: datapoint = hflip(datapoint, i) return datapoint class RandomResizeAPI: def __init__( self, sizes, consistent_transform, max_size=None, square=False, v2=False ): if isinstance(sizes, int): sizes = (sizes,) assert isinstance(sizes, Iterable) self.sizes = list(sizes) self.max_size = max_size self.square = square self.consistent_transform = consistent_transform self.v2 = v2 def __call__(self, datapoint, **kwargs): if self.consistent_transform: size = random.choice(self.sizes) for i in 
range(len(datapoint.frames)): datapoint = resize( datapoint, i, size, self.max_size, square=self.square, v2=self.v2 ) return datapoint for i in range(len(datapoint.frames)): size = random.choice(self.sizes) datapoint = resize( datapoint, i, size, self.max_size, square=self.square, v2=self.v2 ) return datapoint class ToTensorAPI: def __init__(self, v2=False): self.v2 = v2 def __call__(self, datapoint: VideoDatapoint, **kwargs): for img in datapoint.frames: if self.v2: img.data = Fv2.to_image_tensor(img.data) else: img.data = F.to_tensor(img.data) return datapoint class NormalizeAPI: def __init__(self, mean, std, v2=False): self.mean = mean self.std = std self.v2 = v2 def __call__(self, datapoint: VideoDatapoint, **kwargs): for img in datapoint.frames: if self.v2: img.data = Fv2.convert_image_dtype(img.data, torch.float32) img.data = Fv2.normalize(img.data, mean=self.mean, std=self.std) else: img.data = F.normalize(img.data, mean=self.mean, std=self.std) return datapoint class ComposeAPI: def __init__(self, transforms): self.transforms = transforms def __call__(self, datapoint, **kwargs): for t in self.transforms: datapoint = t(datapoint, **kwargs) return datapoint def __repr__(self): format_string = self.__class__.__name__ + "(" for t in self.transforms: format_string += "\n" format_string += " {0}".format(t) format_string += "\n)" return format_string class RandomGrayscale: def __init__(self, consistent_transform, p=0.5): self.p = p self.consistent_transform = consistent_transform self.Grayscale = T.Grayscale(num_output_channels=3) def __call__(self, datapoint: VideoDatapoint, **kwargs): if self.consistent_transform: if random.random() < self.p: for img in datapoint.frames: img.data = self.Grayscale(img.data) return datapoint for img in datapoint.frames: if random.random() < self.p: img.data = self.Grayscale(img.data) return datapoint class ColorJitter: def __init__(self, consistent_transform, brightness, contrast, saturation, hue): self.consistent_transform = 
consistent_transform self.brightness = ( brightness if isinstance(brightness, list) else [max(0, 1 - brightness), 1 + brightness] ) self.contrast = ( contrast if isinstance(contrast, list) else [max(0, 1 - contrast), 1 + contrast] ) self.saturation = ( saturation if isinstance(saturation, list) else [max(0, 1 - saturation), 1 + saturation] ) self.hue = hue if isinstance(hue, list) or hue is None else ([-hue, hue]) def __call__(self, datapoint: VideoDatapoint, **kwargs): if self.consistent_transform: # Create a color jitter transformation params ( fn_idx, brightness_factor, contrast_factor, saturation_factor, hue_factor, ) = T.ColorJitter.get_params( self.brightness, self.contrast, self.saturation, self.hue ) for img in datapoint.frames: if not self.consistent_transform: ( fn_idx, brightness_factor, contrast_factor, saturation_factor, hue_factor, ) = T.ColorJitter.get_params( self.brightness, self.contrast, self.saturation, self.hue ) for fn_id in fn_idx: if fn_id == 0 and brightness_factor is not None: img.data = F.adjust_brightness(img.data, brightness_factor) elif fn_id == 1 and contrast_factor is not None: img.data = F.adjust_contrast(img.data, contrast_factor) elif fn_id == 2 and saturation_factor is not None: img.data = F.adjust_saturation(img.data, saturation_factor) elif fn_id == 3 and hue_factor is not None: img.data = F.adjust_hue(img.data, hue_factor) return datapoint class RandomAffine: def __init__( self, degrees, consistent_transform, scale=None, translate=None, shear=None, image_mean=(123, 116, 103), log_warning=True, num_tentatives=1, image_interpolation="bicubic", ): """ The mask is required for this transform. If consistent_transform is True, then the same random affine is applied to all frames and masks.
""" self.degrees = degrees if isinstance(degrees, list) else ([-degrees, degrees]) self.scale = scale self.shear = ( shear if isinstance(shear, list) else ([-shear, shear] if shear else None) ) self.translate = translate self.fill_img = image_mean self.consistent_transform = consistent_transform self.log_warning = log_warning self.num_tentatives = num_tentatives if image_interpolation == "bicubic": self.image_interpolation = InterpolationMode.BICUBIC elif image_interpolation == "bilinear": self.image_interpolation = InterpolationMode.BILINEAR else: raise NotImplementedError def __call__(self, datapoint: VideoDatapoint, **kwargs): for _tentative in range(self.num_tentatives): res = self.transform_datapoint(datapoint) if res is not None: return res if self.log_warning: logging.warning( f"Skip RandomAffine for zero-area mask in first frame after {self.num_tentatives} tentatives" ) return datapoint def transform_datapoint(self, datapoint: VideoDatapoint): _, height, width = F.get_dimensions(datapoint.frames[0].data) img_size = [width, height] if self.consistent_transform: # Create a random affine transformation affine_params = T.RandomAffine.get_params( degrees=self.degrees, translate=self.translate, scale_ranges=self.scale, shears=self.shear, img_size=img_size, ) for img_idx, img in enumerate(datapoint.frames): this_masks = [ obj.segment.unsqueeze(0) if obj.segment is not None else None for obj in img.objects ] if not self.consistent_transform: # if not consistent we create a new affine params for every frame&mask pair Create a random affine transformation affine_params = T.RandomAffine.get_params( degrees=self.degrees, translate=self.translate, scale_ranges=self.scale, shears=self.shear, img_size=img_size, ) transformed_bboxes, transformed_masks = [], [] for i in range(len(img.objects)): if this_masks[i] is None: transformed_masks.append(None) # Dummy bbox for a dummy target transformed_bboxes.append(torch.tensor([[0, 0, 1, 1]])) else: transformed_mask = F.affine( 
this_masks[i], *affine_params, interpolation=InterpolationMode.NEAREST, fill=0.0, ) if img_idx == 0 and transformed_mask.max() == 0: # We are dealing with a video and the object is not visible in the first frame # Return the datapoint without transformation return None transformed_masks.append(transformed_mask.squeeze()) for i in range(len(img.objects)): img.objects[i].segment = transformed_masks[i] img.data = F.affine( img.data, *affine_params, interpolation=self.image_interpolation, fill=self.fill_img, ) return datapoint def random_mosaic_frame( datapoint, index, grid_h, grid_w, target_grid_y, target_grid_x, should_hflip, ): # Step 1: downsize the images and paste them into a mosaic image_data = datapoint.frames[index].data is_pil = isinstance(image_data, PILImage.Image) if is_pil: H_im = image_data.height W_im = image_data.width image_data_output = PILImage.new("RGB", (W_im, H_im)) else: H_im = image_data.size(-2) W_im = image_data.size(-1) image_data_output = torch.zeros_like(image_data) downsize_cache = {} for grid_y in range(grid_h): for grid_x in range(grid_w): y_offset_b = grid_y * H_im // grid_h x_offset_b = grid_x * W_im // grid_w y_offset_e = (grid_y + 1) * H_im // grid_h x_offset_e = (grid_x + 1) * W_im // grid_w H_im_downsize = y_offset_e - y_offset_b W_im_downsize = x_offset_e - x_offset_b if (H_im_downsize, W_im_downsize) in downsize_cache: image_data_downsize = downsize_cache[(H_im_downsize, W_im_downsize)] else: image_data_downsize = F.resize( image_data, size=(H_im_downsize, W_im_downsize), interpolation=InterpolationMode.BILINEAR, antialias=True, # antialiasing for downsizing ) downsize_cache[(H_im_downsize, W_im_downsize)] = image_data_downsize if should_hflip[grid_y, grid_x].item(): image_data_downsize = F.hflip(image_data_downsize) if is_pil: image_data_output.paste(image_data_downsize, (x_offset_b, y_offset_b)) else: image_data_output[:, y_offset_b:y_offset_e, x_offset_b:x_offset_e] = ( image_data_downsize ) datapoint.frames[index].data = 
image_data_output # Step 2: downsize the masks and paste them into the target grid of the mosaic for obj in datapoint.frames[index].objects: if obj.segment is None: continue assert obj.segment.shape == (H_im, W_im) and obj.segment.dtype == torch.uint8 segment_output = torch.zeros_like(obj.segment) target_y_offset_b = target_grid_y * H_im // grid_h target_x_offset_b = target_grid_x * W_im // grid_w target_y_offset_e = (target_grid_y + 1) * H_im // grid_h target_x_offset_e = (target_grid_x + 1) * W_im // grid_w target_H_im_downsize = target_y_offset_e - target_y_offset_b target_W_im_downsize = target_x_offset_e - target_x_offset_b segment_downsize = F.resize( obj.segment[None, None], size=(target_H_im_downsize, target_W_im_downsize), interpolation=InterpolationMode.BILINEAR, antialias=True, # antialiasing for downsizing )[0, 0] if should_hflip[target_grid_y, target_grid_x].item(): segment_downsize = F.hflip(segment_downsize[None, None])[0, 0] segment_output[ target_y_offset_b:target_y_offset_e, target_x_offset_b:target_x_offset_e ] = segment_downsize obj.segment = segment_output return datapoint class RandomMosaicVideoAPI: def __init__(self, prob=0.15, grid_h=2, grid_w=2, use_random_hflip=False): self.prob = prob self.grid_h = grid_h self.grid_w = grid_w self.use_random_hflip = use_random_hflip def __call__(self, datapoint, **kwargs): if random.random() > self.prob: return datapoint # select a random location to place the target mask in the mosaic target_grid_y = random.randint(0, self.grid_h - 1) target_grid_x = random.randint(0, self.grid_w - 1) # whether to flip each grid in the mosaic horizontally if self.use_random_hflip: should_hflip = torch.rand(self.grid_h, self.grid_w) < 0.5 else: should_hflip = torch.zeros(self.grid_h, self.grid_w, dtype=torch.bool) for i in range(len(datapoint.frames)): datapoint = random_mosaic_frame( datapoint, i, grid_h=self.grid_h, grid_w=self.grid_w, target_grid_y=target_grid_y, target_grid_x=target_grid_x, should_hflip=should_hflip, 
) return datapoint ================================================ FILE: auto-seg/submodules/segment-anything-2/training/dataset/utils.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. """Some wrapping utilities extended from pytorch's to support repeat factor sampling in particular""" from typing import Iterable import torch from torch.utils.data import ( ConcatDataset as TorchConcatDataset, Dataset, Subset as TorchSubset, ) class ConcatDataset(TorchConcatDataset): def __init__(self, datasets: Iterable[Dataset]) -> None: super(ConcatDataset, self).__init__(datasets) self.repeat_factors = torch.cat([d.repeat_factors for d in datasets]) def set_epoch(self, epoch: int): for dataset in self.datasets: if hasattr(dataset, "epoch"): dataset.epoch = epoch if hasattr(dataset, "set_epoch"): dataset.set_epoch(epoch) class Subset(TorchSubset): def __init__(self, dataset, indices) -> None: super(Subset, self).__init__(dataset, indices) self.repeat_factors = dataset.repeat_factors[indices] assert len(indices) == len(self.repeat_factors) # Adapted from Detectron2 class RepeatFactorWrapper(Dataset): """ Thin wrapper around a dataset to implement repeat factor sampling. The underlying dataset must have a repeat_factors member to indicate the per-image factor. Set it to uniformly ones to disable repeat factor sampling """ def __init__(self, dataset, seed: int = 0): self.dataset = dataset self.epoch_ids = None self._seed = seed # Split into whole number (_int_part) and fractional (_frac_part) parts. self._int_part = torch.trunc(dataset.repeat_factors) self._frac_part = dataset.repeat_factors - self._int_part def _get_epoch_indices(self, generator): """ Create a list of dataset indices (with repeats) to use for one epoch. 
Args: generator (torch.Generator): pseudo random number generator used for stochastic rounding. Returns: torch.Tensor: list of dataset indices to use in one epoch. Each index is repeated based on its calculated repeat factor. """ # Since repeat factors are fractional, we use stochastic rounding so # that the target repeat factor is achieved in expectation over the # course of training rands = torch.rand(len(self._frac_part), generator=generator) rep_factors = self._int_part + (rands < self._frac_part).float() # Construct a list of indices in which we repeat images as specified indices = [] for dataset_index, rep_factor in enumerate(rep_factors): indices.extend([dataset_index] * int(rep_factor.item())) return torch.tensor(indices, dtype=torch.int64) def __len__(self): if self.epoch_ids is None: # Here we raise an error instead of returning len(self.dataset) to avoid # accidentally using the unwrapped length, which is error-prone since the # length changes to len(self.epoch_ids) after set_epoch is called. raise RuntimeError("please call set_epoch first to get wrapped length") # return len(self.dataset) return len(self.epoch_ids) def set_epoch(self, epoch: int): g = torch.Generator() g.manual_seed(self._seed + epoch) self.epoch_ids = self._get_epoch_indices(g) if hasattr(self.dataset, "set_epoch"): self.dataset.set_epoch(epoch) def __getitem__(self, idx): if self.epoch_ids is None: raise RuntimeError( "Repeat ids haven't been computed. Did you forget to call set_epoch?" ) return self.dataset[self.epoch_ids[idx]] ================================================ FILE: auto-seg/submodules/segment-anything-2/training/dataset/vos_dataset.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree.
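The stochastic rounding in `RepeatFactorWrapper._get_epoch_indices` can be illustrated with a small pure-Python sketch (the real implementation uses torch tensors and a seeded `torch.Generator`; the toy repeat factors below are made-up illustration values):

```python
import math
import random

# Toy per-image repeat factors (assumption, for illustration only).
repeat_factors = [1.0, 2.5, 0.3]
int_part = [math.trunc(f) for f in repeat_factors]
frac_part = [f - i for f, i in zip(repeat_factors, int_part)]

rng = random.Random(0)  # per-epoch seeding mirrors manual_seed(seed + epoch)
# Each fractional part becomes one extra repeat with matching probability,
# so the target factor is achieved in expectation over many epochs.
rep_factors = [
    i + (1 if rng.random() < fr else 0) for i, fr in zip(int_part, frac_part)
]

# Repeat each dataset index according to its rounded factor.
epoch_indices = []
for dataset_index, rep in enumerate(rep_factors):
    epoch_indices.extend([dataset_index] * rep)
```

An image with factor 2.5 thus appears 2 or 3 times in any given epoch, but 2.5 times on average.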
import logging import random from copy import deepcopy import numpy as np import torch from iopath.common.file_io import g_pathmgr from PIL import Image as PILImage from torchvision.datasets.vision import VisionDataset from training.dataset.vos_raw_dataset import VOSRawDataset from training.dataset.vos_sampler import VOSSampler from training.dataset.vos_segment_loader import JSONSegmentLoader from training.utils.data_utils import Frame, Object, VideoDatapoint MAX_RETRIES = 100 class VOSDataset(VisionDataset): def __init__( self, transforms, training: bool, video_dataset: VOSRawDataset, sampler: VOSSampler, multiplier: int, always_target=True, target_segments_available=True, ): self._transforms = transforms self.training = training self.video_dataset = video_dataset self.sampler = sampler self.repeat_factors = torch.ones(len(self.video_dataset), dtype=torch.float32) self.repeat_factors *= multiplier print(f"Raw dataset length = {len(self.video_dataset)}") self.curr_epoch = 0 # Used in case data loader behavior changes across epochs self.always_target = always_target self.target_segments_available = target_segments_available def _get_datapoint(self, idx): for retry in range(MAX_RETRIES): try: if isinstance(idx, torch.Tensor): idx = idx.item() # sample a video video, segment_loader = self.video_dataset.get_video(idx) # sample frames and object indices to be used in a datapoint sampled_frms_and_objs = self.sampler.sample( video, segment_loader, epoch=self.curr_epoch ) break # Successfully loaded video except Exception as e: if self.training: logging.warning( f"Loading failed (id={idx}); Retry {retry} with exception: {e}" ) idx = random.randrange(0, len(self.video_dataset)) else: # Shouldn't fail to load a val video raise e datapoint = self.construct(video, sampled_frms_and_objs, segment_loader) for transform in self._transforms: datapoint = transform(datapoint, epoch=self.curr_epoch) return datapoint def construct(self, video, sampled_frms_and_objs, segment_loader): """
Constructs a VideoDatapoint sample to pass to transforms """ sampled_frames = sampled_frms_and_objs.frames sampled_object_ids = sampled_frms_and_objs.object_ids images = [] rgb_images = load_images(sampled_frames) # Iterate over the sampled frames and store their rgb data and object data (bbox, segment) for frame_idx, frame in enumerate(sampled_frames): w, h = rgb_images[frame_idx].size images.append( Frame( data=rgb_images[frame_idx], objects=[], ) ) # We load the gt segments associated with the current frame if isinstance(segment_loader, JSONSegmentLoader): segments = segment_loader.load( frame.frame_idx, obj_ids=sampled_object_ids ) else: segments = segment_loader.load(frame.frame_idx) for obj_id in sampled_object_ids: # Extract the segment if obj_id in segments: assert ( segments[obj_id] is not None ), "None targets are not supported" # segment is uint8 and remains uint8 throughout the transforms segment = segments[obj_id].to(torch.uint8) else: # There is no target, we either use a zero mask target or drop this object if not self.always_target: continue segment = torch.zeros(h, w, dtype=torch.uint8) images[frame_idx].objects.append( Object( object_id=obj_id, frame_index=frame.frame_idx, segment=segment, ) ) return VideoDatapoint( frames=images, video_id=video.video_id, size=(h, w), ) def __getitem__(self, idx): return self._get_datapoint(idx) def __len__(self): return len(self.video_dataset) def load_images(frames): all_images = [] cache = {} for frame in frames: if frame.data is None: # Load the frame rgb data from file path = frame.image_path if path in cache: all_images.append(deepcopy(all_images[cache[path]])) continue with g_pathmgr.open(path, "rb") as fopen: all_images.append(PILImage.open(fopen).convert("RGB")) cache[path] = len(all_images) - 1 else: # The frame rgb data has already been loaded # Convert it to a PILImage all_images.append(tensor_2_PIL(frame.data)) return all_images def tensor_2_PIL(data: torch.Tensor) -> PILImage.Image: data = 
data.cpu().numpy().transpose((1, 2, 0)) * 255.0 data = data.astype(np.uint8) return PILImage.fromarray(data) ================================================ FILE: auto-seg/submodules/segment-anything-2/training/dataset/vos_raw_dataset.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. import glob import logging import os from dataclasses import dataclass from typing import List, Optional import pandas as pd import torch from iopath.common.file_io import g_pathmgr from omegaconf.listconfig import ListConfig from training.dataset.vos_segment_loader import ( JSONSegmentLoader, MultiplePNGSegmentLoader, PalettisedPNGSegmentLoader, SA1BSegmentLoader, ) @dataclass class VOSFrame: frame_idx: int image_path: str data: Optional[torch.Tensor] = None is_conditioning_only: Optional[bool] = False @dataclass class VOSVideo: video_name: str video_id: int frames: List[VOSFrame] def __len__(self): return len(self.frames) class VOSRawDataset: def __init__(self): pass def get_video(self, idx): raise NotImplementedError() class PNGRawDataset(VOSRawDataset): def __init__( self, img_folder, gt_folder, file_list_txt=None, excluded_videos_list_txt=None, sample_rate=1, is_palette=True, single_object_mode=False, truncate_video=-1, frames_sampling_mult=False, ): self.img_folder = img_folder self.gt_folder = gt_folder self.sample_rate = sample_rate self.is_palette = is_palette self.single_object_mode = single_object_mode self.truncate_video = truncate_video # Read the subset defined in file_list_txt if file_list_txt is not None: with g_pathmgr.open(file_list_txt, "r") as f: subset = [os.path.splitext(line.strip())[0] for line in f] else: subset = os.listdir(self.img_folder) # Read and process excluded files if provided if excluded_videos_list_txt is not None: with 
g_pathmgr.open(excluded_videos_list_txt, "r") as f: excluded_files = [os.path.splitext(line.strip())[0] for line in f] else: excluded_files = [] # Check if it's not in excluded_files self.video_names = sorted( [video_name for video_name in subset if video_name not in excluded_files] ) if self.single_object_mode: # single object mode self.video_names = sorted( [ os.path.join(video_name, obj) for video_name in self.video_names for obj in os.listdir(os.path.join(self.gt_folder, video_name)) ] ) if frames_sampling_mult: video_names_mult = [] for video_name in self.video_names: num_frames = len(os.listdir(os.path.join(self.img_folder, video_name))) video_names_mult.extend([video_name] * num_frames) self.video_names = video_names_mult def get_video(self, idx): """ Given a VOSVideo object, return the mask tensors. """ video_name = self.video_names[idx] if self.single_object_mode: video_frame_root = os.path.join( self.img_folder, os.path.dirname(video_name) ) else: video_frame_root = os.path.join(self.img_folder, video_name) video_mask_root = os.path.join(self.gt_folder, video_name) if self.is_palette: segment_loader = PalettisedPNGSegmentLoader(video_mask_root) else: segment_loader = MultiplePNGSegmentLoader( video_mask_root, self.single_object_mode ) all_frames = sorted(glob.glob(os.path.join(video_frame_root, "*.jpg"))) if self.truncate_video > 0: all_frames = all_frames[: self.truncate_video] frames = [] for _, fpath in enumerate(all_frames[:: self.sample_rate]): fid = int(os.path.basename(fpath).split(".")[0]) frames.append(VOSFrame(fid, image_path=fpath)) video = VOSVideo(video_name, idx, frames) return video, segment_loader def __len__(self): return len(self.video_names) class SA1BRawDataset(VOSRawDataset): def __init__( self, img_folder, gt_folder, file_list_txt=None, excluded_videos_list_txt=None, num_frames=1, mask_area_frac_thresh=1.1, # no filtering by default uncertain_iou=-1, # no filtering by default ): self.img_folder = img_folder self.gt_folder = gt_folder 
self.num_frames = num_frames self.mask_area_frac_thresh = mask_area_frac_thresh self.uncertain_iou = uncertain_iou # stability score # Read the subset defined in file_list_txt if file_list_txt is not None: with g_pathmgr.open(file_list_txt, "r") as f: subset = [os.path.splitext(line.strip())[0] for line in f] else: subset = os.listdir(self.img_folder) subset = [ path.split(".")[0] for path in subset if path.endswith(".jpg") ] # remove extension # Read and process excluded files if provided if excluded_videos_list_txt is not None: with g_pathmgr.open(excluded_videos_list_txt, "r") as f: excluded_files = [os.path.splitext(line.strip())[0] for line in f] else: excluded_files = [] # Check if it's not in excluded_files and it exists self.video_names = [ video_name for video_name in subset if video_name not in excluded_files ] def get_video(self, idx): """ Given a VOSVideo object, return the mask tensors. """ video_name = self.video_names[idx] video_frame_path = os.path.join(self.img_folder, video_name + ".jpg") video_mask_path = os.path.join(self.gt_folder, video_name + ".json") segment_loader = SA1BSegmentLoader( video_mask_path, mask_area_frac_thresh=self.mask_area_frac_thresh, video_frame_path=video_frame_path, uncertain_iou=self.uncertain_iou, ) frames = [] for frame_idx in range(self.num_frames): frames.append(VOSFrame(frame_idx, image_path=video_frame_path)) video_name = video_name.split("_")[-1] # filename is sa_{int} # video id needs to be image_id to be able to load correct annotation file during eval video = VOSVideo(video_name, int(video_name), frames) return video, segment_loader def __len__(self): return len(self.video_names) class JSONRawDataset(VOSRawDataset): """ Dataset where the annotation in the format of SA-V json files """ def __init__( self, img_folder, gt_folder, file_list_txt=None, excluded_videos_list_txt=None, sample_rate=1, rm_unannotated=True, ann_every=1, frames_fps=24, ): self.gt_folder = gt_folder self.img_folder = img_folder 
self.sample_rate = sample_rate self.rm_unannotated = rm_unannotated self.ann_every = ann_every self.frames_fps = frames_fps # Read and process excluded files if provided excluded_files = [] if excluded_videos_list_txt is not None: if isinstance(excluded_videos_list_txt, str): excluded_videos_lists = [excluded_videos_list_txt] elif isinstance(excluded_videos_list_txt, ListConfig): excluded_videos_lists = list(excluded_videos_list_txt) else: raise NotImplementedError for excluded_videos_list_txt in excluded_videos_lists: with open(excluded_videos_list_txt, "r") as f: excluded_files.extend( [os.path.splitext(line.strip())[0] for line in f] ) excluded_files = set(excluded_files) # Read the subset defined in file_list_txt if file_list_txt is not None: with g_pathmgr.open(file_list_txt, "r") as f: subset = [os.path.splitext(line.strip())[0] for line in f] else: subset = os.listdir(self.img_folder) self.video_names = sorted( [video_name for video_name in subset if video_name not in excluded_files] ) def get_video(self, video_idx): """ Given a VOSVideo object, return the mask tensors. 
""" video_name = self.video_names[video_idx] video_json_path = os.path.join(self.gt_folder, video_name + "_manual.json") segment_loader = JSONSegmentLoader( video_json_path=video_json_path, ann_every=self.ann_every, frames_fps=self.frames_fps, ) frame_ids = [ int(os.path.splitext(frame_name)[0]) for frame_name in sorted( os.listdir(os.path.join(self.img_folder, video_name)) ) ] frames = [ VOSFrame( frame_id, image_path=os.path.join( self.img_folder, f"{video_name}/%05d.jpg" % (frame_id) ), ) for frame_id in frame_ids[:: self.sample_rate] ] if self.rm_unannotated: # Eliminate the frames that have not been annotated valid_frame_ids = [ i * segment_loader.ann_every for i, annot in enumerate(segment_loader.frame_annots) if annot is not None and None not in annot ] frames = [f for f in frames if f.frame_idx in valid_frame_ids] video = VOSVideo(video_name, video_idx, frames) return video, segment_loader def __len__(self): return len(self.video_names) ================================================ FILE: auto-seg/submodules/segment-anything-2/training/dataset/vos_sampler.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. 
import random from dataclasses import dataclass from typing import List from training.dataset.vos_segment_loader import LazySegments MAX_RETRIES = 1000 @dataclass class SampledFramesAndObjects: frames: List[int] object_ids: List[int] class VOSSampler: def __init__(self, sort_frames=True): # frames are ordered by frame id when sort_frames is True self.sort_frames = sort_frames def sample(self, video): raise NotImplementedError() class RandomUniformSampler(VOSSampler): def __init__( self, num_frames, max_num_objects, reverse_time_prob=0.0, ): self.num_frames = num_frames self.max_num_objects = max_num_objects self.reverse_time_prob = reverse_time_prob def sample(self, video, segment_loader, epoch=None): for retry in range(MAX_RETRIES): if len(video.frames) < self.num_frames: raise Exception( f"Cannot sample {self.num_frames} frames from video {video.video_name} as it only has {len(video.frames)} annotated frames." ) start = random.randrange(0, len(video.frames) - self.num_frames + 1) frames = [video.frames[start + step] for step in range(self.num_frames)] if random.uniform(0, 1) < self.reverse_time_prob: # Reverse time frames = frames[::-1] # Get first frame object ids visible_object_ids = [] loaded_segms = segment_loader.load(frames[0].frame_idx) if isinstance(loaded_segms, LazySegments): # LazySegments for SA1BRawDataset visible_object_ids = list(loaded_segms.keys()) else: for object_id, segment in segment_loader.load( frames[0].frame_idx ).items(): if segment.sum(): visible_object_ids.append(object_id) # First frame needs to have at least a target to track if len(visible_object_ids) > 0: break if retry >= MAX_RETRIES - 1: raise Exception("No visible objects") object_ids = random.sample( visible_object_ids, min(len(visible_object_ids), self.max_num_objects), ) return SampledFramesAndObjects(frames=frames, object_ids=object_ids) class EvalSampler(VOSSampler): """ VOS Sampler for evaluation: sampling all the frames and all the objects in a video """ def __init__( 
self, ): super().__init__() def sample(self, video, segment_loader, epoch=None): """ Sampling all the frames and all the objects """ if self.sort_frames: # ordered by frame id frames = sorted(video.frames, key=lambda x: x.frame_idx) else: # use the original order frames = video.frames object_ids = segment_loader.load(frames[0].frame_idx).keys() if len(object_ids) == 0: raise Exception("First frame of the video has no objects") return SampledFramesAndObjects(frames=frames, object_ids=object_ids) ================================================ FILE: auto-seg/submodules/segment-anything-2/training/dataset/vos_segment_loader.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. import glob import json import os import numpy as np import pandas as pd import torch from PIL import Image as PILImage try: from pycocotools import mask as mask_utils except ImportError: pass class JSONSegmentLoader: def __init__(self, video_json_path, ann_every=1, frames_fps=24, valid_obj_ids=None): # Annotations in the json are provided every ann_every-th frame self.ann_every = ann_every # Ids of the objects to consider when sampling this video self.valid_obj_ids = valid_obj_ids with open(video_json_path, "r") as f: data = json.load(f) if isinstance(data, list): self.frame_annots = data elif isinstance(data, dict): masklet_field_name = "masklet" if "masklet" in data else "masks" self.frame_annots = data[masklet_field_name] if "fps" in data: if isinstance(data["fps"], list): annotations_fps = int(data["fps"][0]) else: annotations_fps = int(data["fps"]) assert frames_fps % annotations_fps == 0 self.ann_every = frames_fps // annotations_fps else: raise NotImplementedError def load(self, frame_id, obj_ids=None): assert frame_id % self.ann_every == 0 rle_mask = self.frame_annots[frame_id // self.ann_every]
valid_objs_ids = set(range(len(rle_mask))) if self.valid_obj_ids is not None: # Remove the masklets that have been filtered out for this video valid_objs_ids &= set(self.valid_obj_ids) if obj_ids is not None: # Only keep the objects that have been sampled valid_objs_ids &= set(obj_ids) valid_objs_ids = sorted(list(valid_objs_ids)) # Construct rle_masks_filtered that only contains the rle masks we are interested in id_2_idx = {} rle_mask_filtered = [] for obj_id in valid_objs_ids: if rle_mask[obj_id] is not None: id_2_idx[obj_id] = len(rle_mask_filtered) rle_mask_filtered.append(rle_mask[obj_id]) else: id_2_idx[obj_id] = None # Decode the masks raw_segments = torch.from_numpy(mask_utils.decode(rle_mask_filtered)).permute( 2, 0, 1 ) # (num_obj, h, w) segments = {} for obj_id in valid_objs_ids: if id_2_idx[obj_id] is None: segments[obj_id] = None else: idx = id_2_idx[obj_id] segments[obj_id] = raw_segments[idx] return segments def get_valid_obj_frames_ids(self, num_frames_min=None): # For each object, find all the frames with a valid (not None) mask num_objects = len(self.frame_annots[0]) # The result dict associates each obj_id with the id of its valid frames res = {obj_id: [] for obj_id in range(num_objects)} for annot_idx, annot in enumerate(self.frame_annots): for obj_id in range(num_objects): if annot[obj_id] is not None: res[obj_id].append(int(annot_idx * self.ann_every)) if num_frames_min is not None: # Remove masklets that have less than num_frames_min valid masks for obj_id, valid_frames in list(res.items()): if len(valid_frames) < num_frames_min: res.pop(obj_id) return res class PalettisedPNGSegmentLoader: def __init__(self, video_png_root): """ SegmentLoader for datasets with masks stored as palettised PNGs. video_png_root: the folder contains all the masks stored in png """ self.video_png_root = video_png_root # build a mapping from frame id to their PNG mask path # note that in some datasets, the PNG paths could have more # than 5 digits, e.g. 
"00000000.png" instead of "00000.png" png_filenames = os.listdir(self.video_png_root) self.frame_id_to_png_filename = {} for filename in png_filenames: frame_id, _ = os.path.splitext(filename) self.frame_id_to_png_filename[int(frame_id)] = filename def load(self, frame_id): """ load the single palettised mask from the disk (path: f'{self.video_png_root}/{frame_id:05d}.png') Args: frame_id: int, define the mask path Return: binary_segments: dict """ # check the path mask_path = os.path.join( self.video_png_root, self.frame_id_to_png_filename[frame_id] ) # load the mask masks = PILImage.open(mask_path).convert("P") masks = np.array(masks) object_id = pd.unique(masks.flatten()) object_id = object_id[object_id != 0] # remove background (0) # convert into N binary segmentation masks binary_segments = {} for i in object_id: bs = masks == i binary_segments[i] = torch.from_numpy(bs) return binary_segments def __len__(self): return class MultiplePNGSegmentLoader: def __init__(self, video_png_root, single_object_mode=False): """ video_png_root: the folder contains all the masks stored in png single_object_mode: whether to load only a single object at a time """ self.video_png_root = video_png_root self.single_object_mode = single_object_mode # read a mask to know the resolution of the video if self.single_object_mode: tmp_mask_path = glob.glob(os.path.join(video_png_root, "*.png"))[0] else: tmp_mask_path = glob.glob(os.path.join(video_png_root, "*", "*.png"))[0] tmp_mask = np.array(PILImage.open(tmp_mask_path)) self.H = tmp_mask.shape[0] self.W = tmp_mask.shape[1] if self.single_object_mode: self.obj_id = ( int(video_png_root.split("/")[-1]) + 1 ) # offset by 1 as bg is 0 else: self.obj_id = None def load(self, frame_id): if self.single_object_mode: return self._load_single_png(frame_id) else: return self._load_multiple_pngs(frame_id) def _load_single_png(self, frame_id): """ load single png from the disk (path: f'{self.obj_id}/{frame_id:05d}.png') Args: frame_id: int, 
define the mask path Return: binary_segments: dict """ mask_path = os.path.join(self.video_png_root, f"{frame_id:05d}.png") binary_segments = {} if os.path.exists(mask_path): mask = np.array(PILImage.open(mask_path)) else: # if png doesn't exist, empty mask mask = np.zeros((self.H, self.W), dtype=bool) binary_segments[self.obj_id] = torch.from_numpy(mask > 0) return binary_segments def _load_multiple_pngs(self, frame_id): """ load multiple png masks from the disk (path: f'{obj_id}/{frame_id:05d}.png') Args: frame_id: int, define the mask path Return: binary_segments: dict """ # get the path all_objects = sorted(glob.glob(os.path.join(self.video_png_root, "*"))) num_objects = len(all_objects) assert num_objects > 0 # load the masks binary_segments = {} for obj_folder in all_objects: # obj_folder is {video_name}/{obj_id}, obj_id is specified by the name of the folder obj_id = int(obj_folder.split("/")[-1]) obj_id = obj_id + 1 # offset 1 as bg is 0 mask_path = os.path.join(obj_folder, f"{frame_id:05d}.png") if os.path.exists(mask_path): mask = np.array(PILImage.open(mask_path)) else: mask = np.zeros((self.H, self.W), dtype=bool) binary_segments[obj_id] = torch.from_numpy(mask > 0) return binary_segments def __len__(self): return class LazySegments: """ Only decodes segments that are actually used. 
""" def __init__(self): self.segments = {} self.cache = {} def __setitem__(self, key, item): self.segments[key] = item def __getitem__(self, key): if key in self.cache: return self.cache[key] rle = self.segments[key] mask = torch.from_numpy(mask_utils.decode([rle])).permute(2, 0, 1)[0] self.cache[key] = mask return mask def __contains__(self, key): return key in self.segments def __len__(self): return len(self.segments) def keys(self): return self.segments.keys() class SA1BSegmentLoader: def __init__( self, video_mask_path, mask_area_frac_thresh=1.1, video_frame_path=None, uncertain_iou=-1, ): with open(video_mask_path, "r") as f: self.frame_annots = json.load(f) if mask_area_frac_thresh <= 1.0: # Lazily read frame orig_w, orig_h = PILImage.open(video_frame_path).size area = orig_w * orig_h self.frame_annots = self.frame_annots["annotations"] rle_masks = [] for frame_annot in self.frame_annots: if not frame_annot["area"] > 0: continue if ("uncertain_iou" in frame_annot) and ( frame_annot["uncertain_iou"] < uncertain_iou ): # uncertain_iou is stability score continue if ( mask_area_frac_thresh <= 1.0 and (frame_annot["area"] / area) >= mask_area_frac_thresh ): continue rle_masks.append(frame_annot["segmentation"]) self.segments = LazySegments() for i, rle in enumerate(rle_masks): self.segments[i] = rle def load(self, frame_idx): return self.segments ================================================ FILE: auto-seg/submodules/segment-anything-2/training/loss_fns.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. 
from collections import defaultdict from typing import Dict, List import torch import torch.distributed import torch.nn as nn import torch.nn.functional as F from training.trainer import CORE_LOSS_KEY from training.utils.distributed import get_world_size, is_dist_avail_and_initialized def dice_loss(inputs, targets, num_objects, loss_on_multimask=False): """ Compute the DICE loss, similar to generalized IOU for masks Args: inputs: A float tensor of arbitrary shape. The predictions for each example. targets: A float tensor with the same shape as inputs. Stores the binary classification label for each element in inputs (0 for the negative class and 1 for the positive class). num_objects: Number of objects in the batch loss_on_multimask: True if multimask prediction is enabled Returns: Dice loss tensor """ inputs = inputs.sigmoid() if loss_on_multimask: # inputs and targets are [N, M, H, W] where M corresponds to multiple predicted masks assert inputs.dim() == 4 and targets.dim() == 4 # flatten spatial dimension while keeping multimask channel dimension inputs = inputs.flatten(2) targets = targets.flatten(2) numerator = 2 * (inputs * targets).sum(-1) else: inputs = inputs.flatten(1) numerator = 2 * (inputs * targets).sum(1) denominator = inputs.sum(-1) + targets.sum(-1) loss = 1 - (numerator + 1) / (denominator + 1) if loss_on_multimask: return loss / num_objects return loss.sum() / num_objects def sigmoid_focal_loss( inputs, targets, num_objects, alpha: float = 0.25, gamma: float = 2, loss_on_multimask=False, ): """ Loss used in RetinaNet for dense detection: https://arxiv.org/abs/1708.02002. Args: inputs: A float tensor of arbitrary shape. The predictions for each example. targets: A float tensor with the same shape as inputs. Stores the binary classification label for each element in inputs (0 for the negative class and 1 for the positive class). 
        num_objects: Number of objects in the batch
        alpha: (optional) Weighting factor in range (0, 1) to balance
               positive vs negative examples. Defaults to 0.25; pass a
               negative value to disable the weighting.
        gamma: Exponent of the modulating factor (1 - p_t) to
               balance easy vs hard examples.
        loss_on_multimask: True if multimask prediction is enabled
    Returns:
        focal loss tensor
    """
    prob = inputs.sigmoid()
    ce_loss = F.binary_cross_entropy_with_logits(inputs, targets, reduction="none")
    p_t = prob * targets + (1 - prob) * (1 - targets)
    loss = ce_loss * ((1 - p_t) ** gamma)

    if alpha >= 0:
        alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
        loss = alpha_t * loss

    if loss_on_multimask:
        # loss is [N, M, H, W] where M corresponds to multiple predicted masks
        assert loss.dim() == 4
        return loss.flatten(2).mean(-1) / num_objects  # average over spatial dims

    return loss.mean(1).sum() / num_objects


def iou_loss(
    inputs, targets, pred_ious, num_objects, loss_on_multimask=False, use_l1_loss=False
):
    """
    Args:
        inputs: A float tensor of arbitrary shape.
                The predictions for each example.
        targets: A float tensor with the same shape as inputs. Stores the binary
                 classification label for each element in inputs
                 (0 for the negative class and 1 for the positive class).
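The focal modulation above can be checked numerically: a confidently-correct pixel is down-weighted by `(1 - p_t)^gamma` to nearly nothing, while a confidently-wrong one keeps almost its full cross-entropy. A standalone sketch of the single-mask branch with toy logits:

```python
import torch
import torch.nn.functional as F

def focal(inputs, targets, num_objects, alpha=0.25, gamma=2.0):
    # single-mask branch of sigmoid_focal_loss, restated for a standalone check
    prob = inputs.sigmoid()
    ce = F.binary_cross_entropy_with_logits(inputs, targets, reduction="none")
    p_t = prob * targets + (1 - prob) * (1 - targets)
    loss = ce * ((1 - p_t) ** gamma)
    if alpha >= 0:
        loss = (alpha * targets + (1 - alpha) * (1 - targets)) * loss
    return loss.mean(1).sum() / num_objects

# confident-correct pixel contributes ~0; confident-wrong pixel dominates
easy = focal(torch.tensor([[5.0]]), torch.tensor([[1.0]]), num_objects=1)
hard = focal(torch.tensor([[-5.0]]), torch.tensor([[1.0]]), num_objects=1)
assert easy.item() < 1e-6 and hard.item() > 1.0
```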
        pred_ious: A float tensor containing the predicted IoU scores per mask
        num_objects: Number of objects in the batch
        loss_on_multimask: True if multimask prediction is enabled
        use_l1_loss: Whether to use L1 loss instead of MSE loss
    Returns:
        IoU loss tensor
    """
    assert inputs.dim() == 4 and targets.dim() == 4
    pred_mask = inputs.flatten(2) > 0
    gt_mask = targets.flatten(2) > 0
    area_i = torch.sum(pred_mask & gt_mask, dim=-1).float()
    area_u = torch.sum(pred_mask | gt_mask, dim=-1).float()
    actual_ious = area_i / torch.clamp(area_u, min=1.0)

    if use_l1_loss:
        loss = F.l1_loss(pred_ious, actual_ious, reduction="none")
    else:
        loss = F.mse_loss(pred_ious, actual_ious, reduction="none")

    if loss_on_multimask:
        return loss / num_objects
    return loss.sum() / num_objects


class MultiStepMultiMasksAndIous(nn.Module):
    def __init__(
        self,
        weight_dict,
        focal_alpha=0.25,
        focal_gamma=2,
        supervise_all_iou=False,
        iou_use_l1_loss=False,
        pred_obj_scores=False,
        focal_gamma_obj_score=0.0,
        focal_alpha_obj_score=-1,
    ):
        """
        This class computes the multi-step multi-mask and IoU losses.
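`iou_loss` regresses the network's predicted IoU scores onto the *actual* IoU of the masks thresholded at logit 0. A small worked check of that regression target (toy values):

```python
import torch

# the regression target in iou_loss: IoU of masks thresholded at logit 0
pred_logits = torch.tensor([[[1.0, 1.0, -1.0, -1.0]]])  # [N=1, M=1, HW=4]
gt = torch.tensor([[[0.0, 1.0, 1.0, 0.0]]])
pred_mask = pred_logits.flatten(2) > 0
gt_mask = gt.flatten(2) > 0
area_i = torch.sum(pred_mask & gt_mask, dim=-1).float()  # intersection = 1 pixel
area_u = torch.sum(pred_mask | gt_mask, dim=-1).float()  # union = 3 pixels
actual_ious = area_i / torch.clamp(area_u, min=1.0)
assert abs(actual_ious.item() - 1 / 3) < 1e-6
```

The `clamp(min=1.0)` on the union keeps the target at 0 (rather than NaN) when both masks are empty.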
Args: weight_dict: dict containing weights for focal, dice, iou losses focal_alpha: alpha for sigmoid focal loss focal_gamma: gamma for sigmoid focal loss supervise_all_iou: if True, back-prop iou losses for all predicted masks iou_use_l1_loss: use L1 loss instead of MSE loss for iou pred_obj_scores: if True, compute loss for object scores focal_gamma_obj_score: gamma for sigmoid focal loss on object scores focal_alpha_obj_score: alpha for sigmoid focal loss on object scores """ super().__init__() self.weight_dict = weight_dict self.focal_alpha = focal_alpha self.focal_gamma = focal_gamma assert "loss_mask" in self.weight_dict assert "loss_dice" in self.weight_dict assert "loss_iou" in self.weight_dict if "loss_class" not in self.weight_dict: self.weight_dict["loss_class"] = 0.0 self.focal_alpha_obj_score = focal_alpha_obj_score self.focal_gamma_obj_score = focal_gamma_obj_score self.supervise_all_iou = supervise_all_iou self.iou_use_l1_loss = iou_use_l1_loss self.pred_obj_scores = pred_obj_scores def forward(self, outs_batch: List[Dict], targets_batch: torch.Tensor): assert len(outs_batch) == len(targets_batch) num_objects = torch.tensor( (targets_batch.shape[1]), device=targets_batch.device, dtype=torch.float ) # Number of objects is fixed within a batch if is_dist_avail_and_initialized(): torch.distributed.all_reduce(num_objects) num_objects = torch.clamp(num_objects / get_world_size(), min=1).item() losses = defaultdict(int) for outs, targets in zip(outs_batch, targets_batch): cur_losses = self._forward(outs, targets, num_objects) for k, v in cur_losses.items(): losses[k] += v return losses def _forward(self, outputs: Dict, targets: torch.Tensor, num_objects): """ Compute the losses related to the masks: the focal loss and the dice loss. and also the MAE or MSE loss between predicted IoUs and actual IoUs. 
Here "multistep_pred_multimasks_high_res" is a list of multimasks (tensors of shape [N, M, H, W], where M could be 1 or larger, corresponding to one or multiple predicted masks from a click. We back-propagate focal, dice losses only on the prediction channel with the lowest focal+dice loss between predicted mask and ground-truth. If `supervise_all_iou` is True, we backpropagate ious losses for all predicted masks. """ target_masks = targets.unsqueeze(1).float() assert target_masks.dim() == 4 # [N, 1, H, W] src_masks_list = outputs["multistep_pred_multimasks_high_res"] ious_list = outputs["multistep_pred_ious"] object_score_logits_list = outputs["multistep_object_score_logits"] assert len(src_masks_list) == len(ious_list) assert len(object_score_logits_list) == len(ious_list) # accumulate the loss over prediction steps losses = {"loss_mask": 0, "loss_dice": 0, "loss_iou": 0, "loss_class": 0} for src_masks, ious, object_score_logits in zip( src_masks_list, ious_list, object_score_logits_list ): self._update_losses( losses, src_masks, target_masks, ious, num_objects, object_score_logits ) losses[CORE_LOSS_KEY] = self.reduce_loss(losses) return losses def _update_losses( self, losses, src_masks, target_masks, ious, num_objects, object_score_logits ): target_masks = target_masks.expand_as(src_masks) # get focal, dice and iou loss on all output masks in a prediction step loss_multimask = sigmoid_focal_loss( src_masks, target_masks, num_objects, alpha=self.focal_alpha, gamma=self.focal_gamma, loss_on_multimask=True, ) loss_multidice = dice_loss( src_masks, target_masks, num_objects, loss_on_multimask=True ) if not self.pred_obj_scores: loss_class = torch.tensor( 0.0, dtype=loss_multimask.dtype, device=loss_multimask.device ) target_obj = torch.ones( loss_multimask.shape[0], 1, dtype=loss_multimask.dtype, device=loss_multimask.device, ) else: target_obj = torch.any((target_masks[:, 0] > 0).flatten(1), dim=-1)[ ..., None ].float() loss_class = sigmoid_focal_loss( 
object_score_logits, target_obj, num_objects, alpha=self.focal_alpha_obj_score, gamma=self.focal_gamma_obj_score, ) loss_multiiou = iou_loss( src_masks, target_masks, ious, num_objects, loss_on_multimask=True, use_l1_loss=self.iou_use_l1_loss, ) assert loss_multimask.dim() == 2 assert loss_multidice.dim() == 2 assert loss_multiiou.dim() == 2 if loss_multimask.size(1) > 1: # take the mask indices with the smallest focal + dice loss for back propagation loss_combo = ( loss_multimask * self.weight_dict["loss_mask"] + loss_multidice * self.weight_dict["loss_dice"] ) best_loss_inds = torch.argmin(loss_combo, dim=-1) batch_inds = torch.arange(loss_combo.size(0), device=loss_combo.device) loss_mask = loss_multimask[batch_inds, best_loss_inds].unsqueeze(1) loss_dice = loss_multidice[batch_inds, best_loss_inds].unsqueeze(1) # calculate the iou prediction and slot losses only in the index # with the minimum loss for each mask (to be consistent w/ SAM) if self.supervise_all_iou: loss_iou = loss_multiiou.mean(dim=-1).unsqueeze(1) else: loss_iou = loss_multiiou[batch_inds, best_loss_inds].unsqueeze(1) else: loss_mask = loss_multimask loss_dice = loss_multidice loss_iou = loss_multiiou # backprop focal, dice and iou loss only if obj present loss_mask = loss_mask * target_obj loss_dice = loss_dice * target_obj loss_iou = loss_iou * target_obj # sum over batch dimension (note that the losses are already divided by num_objects) losses["loss_mask"] += loss_mask.sum() losses["loss_dice"] += loss_dice.sum() losses["loss_iou"] += loss_iou.sum() losses["loss_class"] += loss_class def reduce_loss(self, losses): reduced_loss = 0.0 for loss_key, weight in self.weight_dict.items(): if loss_key not in losses: raise ValueError(f"{type(self)} doesn't compute {loss_key}") if weight != 0: reduced_loss += losses[loss_key] * weight return reduced_loss ================================================ FILE: auto-seg/submodules/segment-anything-2/training/model/__init__.py 
================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. ================================================ FILE: auto-seg/submodules/segment-anything-2/training/optimizer.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. import fnmatch import inspect import itertools import logging import types from typing import ( Any, Callable, Dict, Iterable, List, Mapping, Optional, Set, Tuple, Type, Union, ) import hydra import torch import torch.nn as nn from omegaconf import DictConfig from torch import Tensor class Optimizer: def __init__(self, optimizer, schedulers=None) -> None: self.optimizer = optimizer self.schedulers = schedulers self._validate_optimizer_schedulers() self.step_schedulers(0.0, 0) def _validate_optimizer_schedulers(self): if self.schedulers is None: return for _, set_of_schedulers in enumerate(self.schedulers): for option, _ in set_of_schedulers.items(): assert option in self.optimizer.defaults, ( "Optimizer option " f"{option} not found in {self.optimizer}. 
Valid options are "
                    f"{self.optimizer.defaults.keys()}"
                )

    def step_schedulers(self, where: float, step: int) -> None:
        if self.schedulers is None:
            return
        for i, param_group in enumerate(self.optimizer.param_groups):
            for option, scheduler in self.schedulers[i].items():
                if "step" in inspect.signature(scheduler.__call__).parameters:
                    new_value = scheduler(step=step, where=where)
                elif (
                    hasattr(scheduler, "scheduler")
                    and "step"
                    in inspect.signature(scheduler.scheduler.__call__).parameters
                ):
                    # To handle ValueScaler wrappers
                    new_value = scheduler(step=step, where=where)
                else:
                    new_value = scheduler(where)
                param_group[option] = new_value

    def step(self, where, step, closure=None):
        self.step_schedulers(where, step)
        return self.optimizer.step(closure)

    def zero_grad(self, *args, **kwargs):
        return self.optimizer.zero_grad(*args, **kwargs)


def set_default_parameters(
    scheduler_cfgs: List[DictConfig], all_parameter_names: Set[str]
) -> None:
    """Set up the "default" scheduler with the right parameters.

    Args:
        scheduler_cfgs: A list of scheduler configs, where each scheduler also
            specifies which parameters it applies to, based on the names of
            parameters or the class of the modules. At most one scheduler is
            allowed to skip this specification, which is used as a "default"
            specification for any remaining parameters.
        all_parameter_names: Names of all the parameters to consider.
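The "default" mechanism boils down to a set difference: any parameter not claimed by a constrained scheduler config falls through to the one unconstrained config. A minimal sketch with made-up parameter names:

```python
# parameters not claimed by any constrained scheduler config fall through
# to the single "default" scheduler, as in set_default_parameters
all_parameter_names = {"backbone.weight", "backbone.bias", "head.weight"}
constraints = [{"backbone.weight"}, {"backbone.bias"}]
default_params = all_parameter_names - set.union(*constraints)
assert default_params == {"head.weight"}
```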
""" constraints = [ scheduler_cfg.parameter_names for scheduler_cfg in scheduler_cfgs if scheduler_cfg.parameter_names is not None ] if len(constraints) == 0: default_params = set(all_parameter_names) else: default_params = all_parameter_names - set.union(*constraints) default_count = 0 for scheduler_cfg in scheduler_cfgs: if scheduler_cfg.parameter_names is None: scheduler_cfg.parameter_names = default_params default_count += 1 assert default_count <= 1, "Only one scheduler per option can be default" if default_count == 0: # No default scheduler specified, add a default, but without any scheduler # for that option scheduler_cfgs.append({"parameter_names": default_params}) def name_constraints_to_parameters( param_constraints: List[Set[str]], named_parameters: Dict[str, Tensor] ) -> List[torch.nn.Parameter]: """Return parameters which match the intersection of parameter constraints. Note that this returns the parameters themselves, not their names. Args: param_constraints: A list, with each element being a set of allowed parameters. named_parameters: Mapping from a parameter name to the parameter itself. Returns: A list containing the parameters which overlap with _each_ constraint set from param_constraints. """ matching_names = set.intersection(*param_constraints) return [value for name, value in named_parameters.items() if name in matching_names] def map_scheduler_cfgs_to_param_groups( all_scheduler_cfgs: Iterable[List[Dict]], named_parameters: Dict[str, Tensor], ) -> Tuple[List[Dict[Any, Any]], List[Dict[str, List[torch.nn.Parameter]]]]: """Produce parameter groups corresponding to all the scheduler configs. Takes all the scheduler configs, each of which applies to a specific optimizer option (like "lr" or "weight_decay") and has a set of parameter names which it applies to, and produces a final set of param groups where each param group covers all the options which apply to a particular set of parameters. 
Args: all_scheduler_cfgs: All the scheduler configs covering every option. named_parameters: Mapping from a parameter name to the parameter itself. Returns: Tuple of lists of schedulers and param_groups, where schedulers[i] applies to param_groups[i]. """ scheduler_cfgs_per_param_group = itertools.product(*all_scheduler_cfgs) schedulers = [] param_groups = [] for scheduler_cfgs in scheduler_cfgs_per_param_group: param_constraints = [ scheduler_cfg["parameter_names"] for scheduler_cfg in scheduler_cfgs ] matching_parameters = name_constraints_to_parameters( param_constraints, named_parameters ) if len(matching_parameters) == 0: # If no overlap of parameters, skip continue schedulers_for_group = { scheduler_cfg["option"]: scheduler_cfg["scheduler"] for scheduler_cfg in scheduler_cfgs if "option" in scheduler_cfg } schedulers.append(schedulers_for_group) param_groups.append({"params": matching_parameters}) return schedulers, param_groups def validate_param_group_params(param_groups: List[Dict], model: nn.Module): """Check that the param groups are non-overlapping and cover all the parameters. Args: param_groups: List of all param groups model: Model to validate against. The check ensures that all the model parameters are part of param_groups """ for pg in param_groups: # no param should be repeated within a group assert len(pg["params"]) == len(set(pg["params"])) parameters = [set(param_group["params"]) for param_group in param_groups] model_parameters = {parameter for _, parameter in model.named_parameters()} for p1, p2 in itertools.permutations(parameters, 2): assert p1.isdisjoint(p2), "Scheduler generated param_groups should be disjoint" assert set.union(*parameters) == model_parameters, ( "Scheduler generated param_groups must include all parameters of the model." 
f" Found {len(set.union(*parameters))} params whereas model has" f" {len(model_parameters)} params" ) def unix_module_cls_pattern_to_parameter_names( filter_module_cls_names: List[str], module_cls_to_param_names: Dict[Type, str], ) -> Union[None, Set[str]]: """Returns param names which pass the filters specified in filter_module_cls_names. Args: filter_module_cls_names: A list of filter strings containing class names, like ["torch.nn.LayerNorm", "torch.nn.BatchNorm2d"] module_cls_to_param_names: Mapping from module classes to the parameter names they contain. See `get_module_cls_to_param_names`. """ if filter_module_cls_names is None: return set() allowed_parameter_names = [] for module_cls_name in filter_module_cls_names: module_cls = hydra.utils.get_class(module_cls_name) if module_cls not in module_cls_to_param_names: raise AssertionError( f"module_cls_name {module_cls_name} does not " "match any classes in the model" ) matching_parameters = module_cls_to_param_names[module_cls] assert ( len(matching_parameters) > 0 ), f"module_cls_name {module_cls_name} does not contain any parameters in the model" logging.info( f"Matches for module_cls_name [{module_cls_name}]: {matching_parameters} " ) allowed_parameter_names.append(matching_parameters) return set.union(*allowed_parameter_names) def unix_param_pattern_to_parameter_names( filter_param_names: Optional[List[str]], parameter_names: Dict[str, torch.Tensor], ) -> Union[None, Set[str]]: """Returns param names which pass the filters specified in filter_param_names. Args: filter_param_names: A list of unix-style filter strings with optional wildcards, like ["block.2.*", "block.2.linear.weight"] module_cls_to_param_names: Mapping from module classes to the parameter names they contain. See `get_module_cls_to_param_names`. 
""" if filter_param_names is None: return set() allowed_parameter_names = [] for param_name in filter_param_names: matching_parameters = set(fnmatch.filter(parameter_names, param_name)) assert ( len(matching_parameters) >= 1 ), f"param_name {param_name} does not match any parameters in the model" logging.info(f"Matches for param_name [{param_name}]: {matching_parameters}") allowed_parameter_names.append(matching_parameters) return set.union(*allowed_parameter_names) def _unix_pattern_to_parameter_names( scheduler_cfg: DictConfig, parameter_names: Set[str], module_cls_to_param_names: Dict[Type, str], ) -> Union[None, Set[str]]: """Returns param names which pass the filters specified in scheduler_cfg. Args: scheduler_cfg: The config for the scheduler parameter_names: The set of all parameter names which will be filtered """ if "param_names" not in scheduler_cfg and "module_cls_names" not in scheduler_cfg: return None return unix_param_pattern_to_parameter_names( scheduler_cfg.get("param_names"), parameter_names ).union( unix_module_cls_pattern_to_parameter_names( scheduler_cfg.get("module_cls_names"), module_cls_to_param_names ) ) def get_module_cls_to_param_names( model: nn.Module, param_allowlist: Set[str] = None ) -> Dict[Type, str]: """Produce a mapping from all the modules classes to the names of parames they own. Only counts a parameter as part of the immediate parent module, i.e. recursive parents do not count. 
    Args:
        model: Model to iterate over
        param_allowlist: If specified, only these param names will be processed
    """
    module_cls_to_params = {}
    for module_name, module in model.named_modules():
        module_cls = type(module)
        module_cls_to_params.setdefault(module_cls, set())
        for param_name, _ in module.named_parameters(recurse=False):
            full_param_name = get_full_parameter_name(module_name, param_name)
            if param_allowlist is None or full_param_name in param_allowlist:
                module_cls_to_params[module_cls].add(full_param_name)
    return module_cls_to_params


def construct_optimizer(
    model: torch.nn.Module,
    optimizer_conf: Any,
    options_conf: Mapping[str, List] = None,
    param_group_modifiers_conf: List[Callable] = None,
    param_allowlist: Optional[Set[str]] = None,
    validate_param_groups=True,
) -> Optimizer:
    """
    Constructs a stochastic gradient descent or ADAM (or ADAMw) optimizer
    with momentum, i.e., constructs a torch.optim.Optimizer with zero-weight-decay
    Batchnorm and/or no-update 1-D parameters support, based on the config.

    Supports wrapping the optimizer with Layer-wise Adaptive Rate Scaling
    (LARS): https://arxiv.org/abs/1708.03888

    Args:
        model: model to perform stochastic gradient descent
            optimization or ADAM optimization.
        optimizer_conf: Hydra config consisting of a partial torch optimizer like SGD
            or ADAM, still missing the params argument which this function provides
            to produce the final optimizer
        param_group_modifiers_conf: Optional user specified functions which can modify
            the final scheduler configs before the optimizer's param groups are built
        param_allowlist: The parameters to optimize. Parameters which are not part of
            this allowlist will be skipped.
        validate_param_groups: If enabled, validates that the produced param_groups
            don't overlap and cover all the model parameters.
""" if param_allowlist is None: param_allowlist = {name for name, _ in model.named_parameters()} named_parameters = { name: param for name, param in model.named_parameters() if name in param_allowlist } if not options_conf: optimizer = hydra.utils.instantiate(optimizer_conf, named_parameters.values()) return Optimizer(optimizer) all_parameter_names = { name for name, _ in model.named_parameters() if name in param_allowlist } module_cls_to_all_param_names = get_module_cls_to_param_names( model, param_allowlist ) scheduler_cfgs_per_option = hydra.utils.instantiate(options_conf) all_scheduler_cfgs = [] for option, scheduler_cfgs in scheduler_cfgs_per_option.items(): for config in scheduler_cfgs: config.option = option config.parameter_names = _unix_pattern_to_parameter_names( config, all_parameter_names, module_cls_to_all_param_names ) set_default_parameters(scheduler_cfgs, all_parameter_names) all_scheduler_cfgs.append(scheduler_cfgs) if param_group_modifiers_conf: for custom_param_modifier in param_group_modifiers_conf: custom_param_modifier = hydra.utils.instantiate(custom_param_modifier) all_scheduler_cfgs = custom_param_modifier( scheduler_cfgs=all_scheduler_cfgs, model=model ) schedulers, param_groups = map_scheduler_cfgs_to_param_groups( all_scheduler_cfgs, named_parameters ) if validate_param_groups: validate_param_group_params(param_groups, model) optimizer = hydra.utils.instantiate(optimizer_conf, param_groups) return Optimizer(optimizer, schedulers) def get_full_parameter_name(module_name, param_name): if module_name == "": return param_name return f"{module_name}.{param_name}" class GradientClipper: """ Gradient clipping utils that works for DDP """ def __init__(self, max_norm: float = 1.0, norm_type: int = 2): assert isinstance(max_norm, (int, float)) or max_norm is None self.max_norm = max_norm if max_norm is None else float(max_norm) self.norm_type = norm_type def __call__(self, model: nn.Module): if self.max_norm is None: return # no-op 
        nn.utils.clip_grad_norm_(
            model.parameters(), max_norm=self.max_norm, norm_type=self.norm_type
        )


class ValueScaler:
    def __init__(self, scheduler, mult_val: float):
        self.scheduler = scheduler
        self.mult_val = mult_val

    def __call__(self, *args, **kwargs):
        val = self.scheduler(*args, **kwargs)
        return val * self.mult_val


def rgetattr(obj, rattrs: str = None):
    """
    Like getattr(), but supports dotted notation for nested objects.
    rattrs is a str of form 'attr1.attr2', returns obj.attr1.attr2
    """
    if rattrs is None:
        return obj
    attrs = rattrs.split(".")
    for attr in attrs:
        obj = getattr(obj, attr)
    return obj


def layer_decay_param_modifier(
    scheduler_cfgs: List[List[Dict]],
    model,
    layer_decay_value: float,
    layer_decay_min: Optional[float] = None,
    apply_to: Optional[str] = None,
    overrides: List[Dict] = (),
) -> List[List[Dict]]:
    """
    Args
    - scheduler_cfgs: a list of omegaconf.ListConfigs. Each element in the list
        is an omegaconf.DictConfig with the following structure:
        {
            "scheduler": <the scheduler>,
            "option": possible options are "lr", "weight_decay" etc.,
            "parameter_names": Set of str indicating param names that this
                scheduler applies to,
        }
    - model: a model that implements a method `get_layer_id` that maps layer_name
        to an integer and a method get_num_layers. Alternatively, use the apply_to
        argument to select a specific component of the model.
    - layer_decay_value: float
    - layer_decay_min: min val for layer decay
    - apply_to: optional arg to select which component of the model to apply the
        layer decay modifier to
    - overrides: to manually override lr for specific patterns. Is a list of dicts.
        Each dict has keys "pattern" and "value".
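The per-layer scale factors in the body below are `layer_decay_value ** (num_layers - i)`, so earlier layers get geometrically smaller learning rates and the top layer is unscaled. A sketch for a hypothetical model whose `get_num_layers()` returns 3:

```python
# per-layer scale factors as computed in layer_decay_param_modifier,
# for a hypothetical model where get_num_layers() returns 3
layer_decay_value = 0.9
num_layers = 3 + 1  # model.get_num_layers() + 1
layer_decays = [layer_decay_value ** (num_layers - i) for i in range(num_layers + 1)]
assert layer_decays[-1] == 1.0      # params mapped to the top layer id are unscaled
assert layer_decays[0] == 0.9 ** 4  # earliest layers are decayed the most
assert all(a < b for a, b in zip(layer_decays, layer_decays[1:]))  # monotone
```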
Returns - scheduler_configs: same structure as the input, elements can be modified """ model = rgetattr(model, apply_to) num_layers = model.get_num_layers() + 1 layer_decays = [ layer_decay_value ** (num_layers - i) for i in range(num_layers + 1) ] if layer_decay_min is not None: layer_decays = [max(val, layer_decay_min) for val in layer_decays] final_scheduler_cfgs = [] # scheduler_cfgs is a list of lists for scheduler_cfg_group in scheduler_cfgs: curr_cfg_group = [] # scheduler_cfg_group is a list of dictionaries for scheduler_cfg in scheduler_cfg_group: if scheduler_cfg["option"] != "lr": curr_cfg_group.append(scheduler_cfg) continue # Need sorted so that the list of parameter names is deterministic and consistent # across re-runs of this job. Else it was causing issues with loading the optimizer # state during a job restart (D38591759) parameter_names = sorted(scheduler_cfg["parameter_names"]) # Only want one cfg group per layer layer_cfg_groups = {} for param_name in parameter_names: layer_id = num_layers this_scale = layer_decays[layer_id] if param_name.startswith(apply_to): layer_id = model.get_layer_id(param_name) this_scale = layer_decays[layer_id] # Overrides for override in overrides: if fnmatch.fnmatchcase(param_name, override["pattern"]): this_scale = float(override["value"]) layer_id = override["pattern"] break if layer_id not in layer_cfg_groups: curr_param = { "option": scheduler_cfg["option"], "scheduler": ValueScaler( scheduler_cfg["scheduler"], this_scale ), "parameter_names": {param_name}, } else: curr_param = layer_cfg_groups[layer_id] curr_param["parameter_names"].add(param_name) layer_cfg_groups[layer_id] = curr_param for layer_cfg in layer_cfg_groups.values(): curr_cfg_group.append(layer_cfg) final_scheduler_cfgs.append(curr_cfg_group) return final_scheduler_cfgs ================================================ FILE: auto-seg/submodules/segment-anything-2/training/scripts/sav_frame_extraction_submitit.py 
================================================
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.

import argparse
import os
from pathlib import Path

import cv2
import numpy as np
import submitit
import tqdm


def get_args_parser():
    parser = argparse.ArgumentParser(
        description="[SA-V Preprocessing] Extracting JPEG frames",
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
    )

    # ------------
    # DATA
    # ------------
    data_parser = parser.add_argument_group(
        title="SA-V dataset data root",
        description="What data to load and how to process it.",
    )
    data_parser.add_argument(
        "--sav-vid-dir",
        type=str,
        required=True,
        help=("Where to find the SAV videos"),
    )
    data_parser.add_argument(
        "--sav-frame-sample-rate",
        type=int,
        default=4,
        help="Rate at which to sub-sample frames",
    )

    # ------------
    # LAUNCH
    # ------------
    launch_parser = parser.add_argument_group(
        title="Cluster launch settings",
        description="Number of jobs and retry settings.",
    )
    launch_parser.add_argument(
        "--n-jobs",
        type=int,
        required=True,
        help="Shard the run over this many jobs.",
    )
    launch_parser.add_argument(
        "--timeout", type=int, required=True, help="SLURM timeout parameter in minutes."
    )
    launch_parser.add_argument(
        "--partition", type=str, required=True, help="Partition to launch on."
    )
    launch_parser.add_argument(
        "--account", type=str, required=True, help="Account to launch on."
    )
    launch_parser.add_argument("--qos", type=str, required=True, help="QOS.")

    # ------------
    # OUTPUT
    # ------------
    output_parser = parser.add_argument_group(
        title="Setting for results output", description="Where and how to save results."
) output_parser.add_argument( "--output-dir", type=str, required=True, help=("Where to dump the extracted jpeg frames"), ) output_parser.add_argument( "--slurm-output-root-dir", type=str, required=True, help=("Where to save slurm outputs"), ) return parser def decode_video(video_path: str): assert os.path.exists(video_path) video = cv2.VideoCapture(video_path) video_frames = [] while video.isOpened(): ret, frame = video.read() if ret: video_frames.append(frame) else: break return video_frames def extract_frames(video_path, sample_rate): frames = decode_video(video_path) return frames[::sample_rate] def submitit_launch(video_paths, sample_rate, save_root): for path in tqdm.tqdm(video_paths): frames = extract_frames(path, sample_rate) output_folder = os.path.join(save_root, Path(path).stem) if not os.path.exists(output_folder): os.makedirs(output_folder) for fid, frame in enumerate(frames): frame_path = os.path.join(output_folder, f"{fid*sample_rate:05d}.jpg") cv2.imwrite(frame_path, frame) print(f"Saved output to {save_root}") if __name__ == "__main__": parser = get_args_parser() args = parser.parse_args() sav_vid_dir = args.sav_vid_dir save_root = args.output_dir sample_rate = args.sav_frame_sample_rate # List all SA-V videos mp4_files = sorted([str(p) for p in Path(sav_vid_dir).glob("*/*.mp4")]) mp4_files = np.array(mp4_files) chunked_mp4_files = [x.tolist() for x in np.array_split(mp4_files, args.n_jobs)] print(f"Processing videos in: {sav_vid_dir}") print(f"Processing {len(mp4_files)} files") print(f"Beginning processing in {args.n_jobs} processes") # Submitit params jobs_dir = os.path.join(args.slurm_output_root_dir, "%j") cpus_per_task = 4 executor = submitit.AutoExecutor(folder=jobs_dir) executor.update_parameters( timeout_min=args.timeout, gpus_per_node=0, tasks_per_node=1, slurm_array_parallelism=args.n_jobs, cpus_per_task=cpus_per_task, slurm_partition=args.partition, slurm_account=args.account, slurm_qos=args.qos, ) 
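The frame sub-sampling in `extract_frames`/`submitit_launch` above is a plain stride slice, with output file names encoding the *original* frame index. A standalone sketch with stand-in frames:

```python
# frames[::sample_rate] keeps every sample_rate-th decoded frame, and the
# saved file name encodes the original frame index, as in submitit_launch
sample_rate = 4
frames = list(range(10))  # stand-ins for decoded frames 0..9
kept = frames[::sample_rate]
names = [f"{fid * sample_rate:05d}.jpg" for fid, _ in enumerate(kept)]
assert kept == [0, 4, 8]
assert names == ["00000.jpg", "00004.jpg", "00008.jpg"]
```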
executor.update_parameters(slurm_srun_args=["-vv", "--cpu-bind", "none"]) # Launch jobs = [] with executor.batch(): for _, mp4_chunk in tqdm.tqdm(enumerate(chunked_mp4_files)): job = executor.submit( submitit_launch, video_paths=mp4_chunk, sample_rate=sample_rate, save_root=save_root, ) jobs.append(job) for j in jobs: print(f"Slurm JobID: {j.job_id}") print(f"Saving outputs to {save_root}") print(f"Slurm outputs at {args.slurm_output_root_dir}") ================================================ FILE: auto-seg/submodules/segment-anything-2/training/train.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. import logging import os import random import sys import traceback from argparse import ArgumentParser import submitit import torch from hydra import compose, initialize_config_module from hydra.utils import instantiate from iopath.common.file_io import g_pathmgr from omegaconf import OmegaConf from training.utils.train_utils import makedir, register_omegaconf_resolvers os.environ["HYDRA_FULL_ERROR"] = "1" def single_proc_run(local_rank, main_port, cfg, world_size): """Single GPU process""" os.environ["MASTER_ADDR"] = "localhost" os.environ["MASTER_PORT"] = str(main_port) os.environ["RANK"] = str(local_rank) os.environ["LOCAL_RANK"] = str(local_rank) os.environ["WORLD_SIZE"] = str(world_size) try: register_omegaconf_resolvers() except Exception as e: logging.info(e) trainer = instantiate(cfg.trainer, _recursive_=False) trainer.run() def single_node_runner(cfg, main_port: int): assert cfg.launcher.num_nodes == 1 num_proc = cfg.launcher.gpus_per_node torch.multiprocessing.set_start_method( "spawn" ) # CUDA runtime does not support `fork` if num_proc == 1: # directly call single_proc so we can easily set breakpoints # mp.spawn does not let us set breakpoints 
single_proc_run(local_rank=0, main_port=main_port, cfg=cfg, world_size=num_proc) else: mp_runner = torch.multiprocessing.start_processes args = (main_port, cfg, num_proc) # Note: using "fork" below, "spawn" causes time and error regressions. Using # spawn changes the default multiprocessing context to spawn, which doesn't # interact well with the dataloaders (likely due to the use of OpenCV). mp_runner(single_proc_run, args=args, nprocs=num_proc, start_method="spawn") def format_exception(e: Exception, limit=20): traceback_str = "".join(traceback.format_tb(e.__traceback__, limit=limit)) return f"{type(e).__name__}: {e}\nTraceback:\n{traceback_str}" class SubmititRunner(submitit.helpers.Checkpointable): """A callable which is passed to submitit to launch the jobs.""" def __init__(self, port, cfg): self.cfg = cfg self.port = port self.has_setup = False def run_trainer(self): job_env = submitit.JobEnvironment() # Need to add this again so the hydra.job.set_env PYTHONPATH # is also set when launching jobs. add_pythonpath_to_sys_path() os.environ["MASTER_ADDR"] = job_env.hostnames[0] os.environ["MASTER_PORT"] = str(self.port) os.environ["RANK"] = str(job_env.global_rank) os.environ["LOCAL_RANK"] = str(job_env.local_rank) os.environ["WORLD_SIZE"] = str(job_env.num_tasks) register_omegaconf_resolvers() cfg_resolved = OmegaConf.to_container(self.cfg, resolve=False) cfg_resolved = OmegaConf.create(cfg_resolved) trainer = instantiate(cfg_resolved.trainer, _recursive_=False) trainer.run() def __call__(self): job_env = submitit.JobEnvironment() self.setup_job_info(job_env.job_id, job_env.global_rank) try: self.run_trainer() except Exception as e: # Log the exception. Then raise it again (as what SubmititRunner currently does). 
message = format_exception(e) logging.error(message) raise e def setup_job_info(self, job_id, rank): """Set up slurm job info""" self.job_info = { "job_id": job_id, "rank": rank, "cluster": self.cfg.get("cluster", None), "experiment_log_dir": self.cfg.launcher.experiment_log_dir, } self.has_setup = True def add_pythonpath_to_sys_path(): if "PYTHONPATH" not in os.environ or not os.environ["PYTHONPATH"]: return sys.path = os.environ["PYTHONPATH"].split(":") + sys.path def main(args) -> None: cfg = compose(config_name=args.config) if cfg.launcher.experiment_log_dir is None: cfg.launcher.experiment_log_dir = os.path.join( os.getcwd(), "sam2_logs", args.config ) print("###################### Train App Config ####################") print(OmegaConf.to_yaml(cfg)) print("############################################################") add_pythonpath_to_sys_path() makedir(cfg.launcher.experiment_log_dir) with g_pathmgr.open( os.path.join(cfg.launcher.experiment_log_dir, "config.yaml"), "w" ) as f: f.write(OmegaConf.to_yaml(cfg)) cfg_resolved = OmegaConf.to_container(cfg, resolve=False) cfg_resolved = OmegaConf.create(cfg_resolved) with g_pathmgr.open( os.path.join(cfg.launcher.experiment_log_dir, "config_resolved.yaml"), "w" ) as f: f.write(OmegaConf.to_yaml(cfg_resolved, resolve=True)) submitit_conf = cfg.get("submitit", None) assert submitit_conf is not None, "Missing submitit config" submitit_dir = cfg.launcher.experiment_log_dir submitit_dir = os.path.join(submitit_dir, "submitit_logs") # Prioritize cmd line args cfg.launcher.gpus_per_node = ( args.num_gpus if args.num_gpus is not None else cfg.launcher.gpus_per_node ) cfg.launcher.num_nodes = ( args.num_nodes if args.num_nodes is not None else cfg.launcher.num_nodes ) submitit_conf.use_cluster = ( args.use_cluster if args.use_cluster is not None else submitit_conf.use_cluster ) if submitit_conf.use_cluster: executor = submitit.AutoExecutor(folder=submitit_dir) submitit_conf.partition = ( args.partition if args.partition is
not None else submitit_conf.get("partition", None) ) submitit_conf.account = ( args.account if args.account is not None else submitit_conf.get("account", None) ) submitit_conf.qos = ( args.qos if args.qos is not None else submitit_conf.get("qos", None) ) job_kwargs = { "timeout_min": 60 * submitit_conf.timeout_hour, "name": ( submitit_conf.name if hasattr(submitit_conf, "name") else args.config ), "slurm_partition": submitit_conf.partition, "gpus_per_node": cfg.launcher.gpus_per_node, "tasks_per_node": cfg.launcher.gpus_per_node, # one task per GPU "cpus_per_task": submitit_conf.cpus_per_task, "nodes": cfg.launcher.num_nodes, "slurm_additional_parameters": { "exclude": " ".join(submitit_conf.get("exclude_nodes", [])), }, } if "include_nodes" in submitit_conf: assert ( len(submitit_conf["include_nodes"]) >= cfg.launcher.num_nodes ), "Not enough nodes" job_kwargs["slurm_additional_parameters"]["nodelist"] = " ".join( submitit_conf["include_nodes"] ) if submitit_conf.account is not None: job_kwargs["slurm_additional_parameters"]["account"] = submitit_conf.account if submitit_conf.qos is not None: job_kwargs["slurm_additional_parameters"]["qos"] = submitit_conf.qos if submitit_conf.get("mem_gb", None) is not None: job_kwargs["mem_gb"] = submitit_conf.mem_gb elif submitit_conf.get("mem", None) is not None: job_kwargs["slurm_mem"] = submitit_conf.mem if submitit_conf.get("constraints", None) is not None: job_kwargs["slurm_constraint"] = submitit_conf.constraints if submitit_conf.get("comment", None) is not None: job_kwargs["slurm_comment"] = submitit_conf.comment # Supports only cpu-bind option within srun_args. 
New options can be added here if submitit_conf.get("srun_args", None) is not None: job_kwargs["slurm_srun_args"] = [] if submitit_conf.srun_args.get("cpu_bind", None) is not None: job_kwargs["slurm_srun_args"].extend( ["--cpu-bind", submitit_conf.srun_args.cpu_bind] ) print("###################### SLURM Config ####################") print(job_kwargs) print("##########################################") executor.update_parameters(**job_kwargs) main_port = random.randint( submitit_conf.port_range[0], submitit_conf.port_range[1] ) runner = SubmititRunner(main_port, cfg) job = executor.submit(runner) print(f"Submitit Job ID: {job.job_id}") runner.setup_job_info(job.job_id, rank=0) else: cfg.launcher.num_nodes = 1 main_port = random.randint( submitit_conf.port_range[0], submitit_conf.port_range[1] ) single_node_runner(cfg, main_port) if __name__ == "__main__": initialize_config_module("sam2", version_base="1.2") parser = ArgumentParser() parser.add_argument( "-c", "--config", required=True, type=str, help="path to config file (e.g. 
configs/sam2.1_training/sam2.1_hiera_b+_MOSE_finetune.yaml)", ) parser.add_argument( "--use-cluster", type=int, default=None, help="whether to launch on a cluster, 0: run locally, 1: run on a cluster", ) parser.add_argument("--partition", type=str, default=None, help="SLURM partition") parser.add_argument("--account", type=str, default=None, help="SLURM account") parser.add_argument("--qos", type=str, default=None, help="SLURM qos") parser.add_argument( "--num-gpus", type=int, default=None, help="number of GPUS per node" ) parser.add_argument("--num-nodes", type=int, default=None, help="Number of nodes") args = parser.parse_args() args.use_cluster = bool(args.use_cluster) if args.use_cluster is not None else None register_omegaconf_resolvers() main(args) ================================================ FILE: auto-seg/submodules/segment-anything-2/training/trainer.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. 
import gc import json import logging import math import os import time from collections import OrderedDict from dataclasses import dataclass, field from typing import Any, Dict, List, Mapping, Optional import numpy as np import torch import torch.distributed as dist import torch.nn as nn from hydra.utils import instantiate from iopath.common.file_io import g_pathmgr from training.optimizer import construct_optimizer from training.utils.checkpoint_utils import ( assert_skipped_parameters_are_frozen, exclude_params_matching_unix_pattern, load_state_dict_into_model, with_check_parameter_frozen, ) from training.utils.data_utils import BatchedVideoDatapoint from training.utils.distributed import all_reduce_max, barrier, get_rank from training.utils.logger import Logger, setup_logging from training.utils.train_utils import ( AverageMeter, collect_dict_keys, DurationMeter, get_amp_type, get_machine_local_and_dist_rank, get_resume_checkpoint, human_readable_time, is_dist_avail_and_initialized, log_env_variables, makedir, MemMeter, Phase, ProgressMeter, set_seeds, setup_distributed_backend, ) CORE_LOSS_KEY = "core_loss" def unwrap_ddp_if_wrapped(model): if isinstance(model, torch.nn.parallel.DistributedDataParallel): return model.module return model @dataclass class OptimAMPConf: enabled: bool = False amp_dtype: str = "float16" @dataclass class OptimConf: optimizer: torch.optim.Optimizer = None options: Optional[Dict[str, Any]] = None param_group_modifiers: Optional[List] = None amp: Optional[Dict[str, Any]] = None gradient_clip: Any = None gradient_logger: Any = None def __post_init__(self): # amp if not isinstance(self.amp, OptimAMPConf): if self.amp is None: self.amp = {} assert isinstance(self.amp, Mapping) self.amp = OptimAMPConf(**self.amp) @dataclass class DistributedConf: backend: Optional[str] = None # inferred from accelerator type comms_dtype: Optional[str] = None find_unused_parameters: bool = False timeout_mins: int = 30 @dataclass class CudaConf: 
cudnn_deterministic: bool = False cudnn_benchmark: bool = True allow_tf32: bool = False # if not None, `matmul_allow_tf32` key will override `allow_tf32` for matmul matmul_allow_tf32: Optional[bool] = None # if not None, `cudnn_allow_tf32` key will override `allow_tf32` for cudnn cudnn_allow_tf32: Optional[bool] = None @dataclass class CheckpointConf: save_dir: str save_freq: int save_list: List[int] = field(default_factory=list) model_weight_initializer: Any = None save_best_meters: List[str] = None skip_saving_parameters: List[str] = field(default_factory=list) initialize_after_preemption: Optional[bool] = None # if not None, training will be resumed from this checkpoint resume_from: Optional[str] = None def infer_missing(self): if self.initialize_after_preemption is None: with_skip_saving = len(self.skip_saving_parameters) > 0 self.initialize_after_preemption = with_skip_saving return self @dataclass class LoggingConf: log_dir: str log_freq: int # In iterations tensorboard_writer: Any log_level_primary: str = "INFO" log_level_secondary: str = "ERROR" log_scalar_frequency: int = 100 log_visual_frequency: int = 100 scalar_keys_to_log: Optional[Dict[str, Any]] = None log_batch_stats: bool = False class Trainer: """ Trainer supporting the DDP training strategies. 
""" EPSILON = 1e-8 def __init__( self, *, # the order of these args can change at any time, so they are keyword-only data: Dict[str, Any], model: Dict[str, Any], logging: Dict[str, Any], checkpoint: Dict[str, Any], max_epochs: int, mode: str = "train", accelerator: str = "cuda", seed_value: int = 123, val_epoch_freq: int = 1, distributed: Dict[str, bool] = None, cuda: Dict[str, bool] = None, env_variables: Optional[Dict[str, Any]] = None, optim: Optional[Dict[str, Any]] = None, optim_overrides: Optional[List[Dict[str, Any]]] = None, meters: Optional[Dict[str, Any]] = None, loss: Optional[Dict[str, Any]] = None, ): self._setup_env_variables(env_variables) self._setup_timers() self.data_conf = data self.model_conf = model self.logging_conf = LoggingConf(**logging) self.checkpoint_conf = CheckpointConf(**checkpoint).infer_missing() self.max_epochs = max_epochs self.mode = mode self.val_epoch_freq = val_epoch_freq self.optim_conf = OptimConf(**optim) if optim is not None else None self.meters_conf = meters self.loss_conf = loss distributed = DistributedConf(**distributed or {}) cuda = CudaConf(**cuda or {}) self.where = 0.0 self._infer_distributed_backend_if_none(distributed, accelerator) self._setup_device(accelerator) self._setup_torch_dist_and_backend(cuda, distributed) makedir(self.logging_conf.log_dir) setup_logging( __name__, output_dir=self.logging_conf.log_dir, rank=self.rank, log_level_primary=self.logging_conf.log_level_primary, log_level_secondary=self.logging_conf.log_level_secondary, ) set_seeds(seed_value, self.max_epochs, self.distributed_rank) log_env_variables() assert ( is_dist_avail_and_initialized() ), "Torch distributed needs to be initialized before calling the trainer." self._setup_components() # Except Optimizer everything is setup here. 
self._move_to_device() self._construct_optimizers() self._setup_dataloaders() self.time_elapsed_meter = DurationMeter("Time Elapsed", self.device, ":.2f") if self.checkpoint_conf.resume_from is not None: assert os.path.exists( self.checkpoint_conf.resume_from ), f"The 'resume_from' checkpoint {self.checkpoint_conf.resume_from} does not exist!" dst = os.path.join(self.checkpoint_conf.save_dir, "checkpoint.pt") if self.distributed_rank == 0 and not os.path.exists(dst): # Copy the "resume_from" checkpoint to the checkpoint folder # if there is not a checkpoint to resume from already there makedir(self.checkpoint_conf.save_dir) g_pathmgr.copy(self.checkpoint_conf.resume_from, dst) barrier() self.load_checkpoint() self._setup_ddp_distributed_training(distributed, accelerator) barrier() def _setup_timers(self): """ Initializes counters for elapsed time and eta. """ self.start_time = time.time() self.ckpt_time_elapsed = 0 self.est_epoch_time = dict.fromkeys([Phase.TRAIN, Phase.VAL], 0) def _get_meters(self, phase_filters=None): if self.meters is None: return {} meters = {} for phase, phase_meters in self.meters.items(): if phase_filters is not None and phase not in phase_filters: continue for key, key_meters in phase_meters.items(): if key_meters is None: continue for name, meter in key_meters.items(): meters[f"{phase}_{key}/{name}"] = meter return meters def _infer_distributed_backend_if_none(self, distributed_conf, accelerator): if distributed_conf.backend is None: distributed_conf.backend = "nccl" if accelerator == "cuda" else "gloo" def _setup_env_variables(self, env_variables_conf) -> None: if env_variables_conf is not None: for variable_name, value in env_variables_conf.items(): os.environ[variable_name] = value def _setup_torch_dist_and_backend(self, cuda_conf, distributed_conf) -> None: if torch.cuda.is_available(): torch.backends.cudnn.deterministic = cuda_conf.cudnn_deterministic torch.backends.cudnn.benchmark = cuda_conf.cudnn_benchmark 
torch.backends.cuda.matmul.allow_tf32 = ( cuda_conf.matmul_allow_tf32 if cuda_conf.matmul_allow_tf32 is not None else cuda_conf.allow_tf32 ) torch.backends.cudnn.allow_tf32 = ( cuda_conf.cudnn_allow_tf32 if cuda_conf.cudnn_allow_tf32 is not None else cuda_conf.allow_tf32 ) self.rank = setup_distributed_backend( distributed_conf.backend, distributed_conf.timeout_mins ) def _setup_device(self, accelerator): self.local_rank, self.distributed_rank = get_machine_local_and_dist_rank() if accelerator == "cuda": self.device = torch.device("cuda", self.local_rank) torch.cuda.set_device(self.local_rank) elif accelerator == "cpu": self.device = torch.device("cpu") else: raise ValueError(f"Unsupported accelerator: {accelerator}") def _setup_ddp_distributed_training(self, distributed_conf, accelerator): assert isinstance(self.model, torch.nn.Module) self.model = nn.parallel.DistributedDataParallel( self.model, device_ids=[self.local_rank] if accelerator == "cuda" else [], find_unused_parameters=distributed_conf.find_unused_parameters, ) if distributed_conf.comms_dtype is not None: # noqa from torch.distributed.algorithms import ddp_comm_hooks amp_type = get_amp_type(distributed_conf.comms_dtype) if amp_type == torch.bfloat16: hook = ddp_comm_hooks.default_hooks.bf16_compress_hook logging.info("Enabling bfloat16 grad communication") else: hook = ddp_comm_hooks.default_hooks.fp16_compress_hook logging.info("Enabling fp16 grad communication") process_group = None self.model.register_comm_hook(process_group, hook) def _move_to_device(self): logging.info( f"Moving components to device {self.device} and local rank {self.local_rank}." ) self.model.to(self.device) logging.info( f"Done moving components to device {self.device} and local rank {self.local_rank}." 
) def save_checkpoint(self, epoch, checkpoint_names=None): checkpoint_folder = self.checkpoint_conf.save_dir makedir(checkpoint_folder) if checkpoint_names is None: checkpoint_names = ["checkpoint"] if ( self.checkpoint_conf.save_freq > 0 and (int(epoch) % self.checkpoint_conf.save_freq == 0) ) or int(epoch) in self.checkpoint_conf.save_list: checkpoint_names.append(f"checkpoint_{int(epoch)}") checkpoint_paths = [] for ckpt_name in checkpoint_names: checkpoint_paths.append(os.path.join(checkpoint_folder, f"{ckpt_name}.pt")) state_dict = unwrap_ddp_if_wrapped(self.model).state_dict() state_dict = exclude_params_matching_unix_pattern( patterns=self.checkpoint_conf.skip_saving_parameters, state_dict=state_dict ) checkpoint = { "model": state_dict, "optimizer": self.optim.optimizer.state_dict(), "epoch": epoch, "loss": self.loss.state_dict(), "steps": self.steps, "time_elapsed": self.time_elapsed_meter.val, "best_meter_values": self.best_meter_values, } if self.optim_conf.amp.enabled: checkpoint["scaler"] = self.scaler.state_dict() # DDP checkpoints are only saved on rank 0 (all workers are identical) if self.distributed_rank != 0: return for checkpoint_path in checkpoint_paths: self._save_checkpoint(checkpoint, checkpoint_path) def _save_checkpoint(self, checkpoint, checkpoint_path): """ Save a checkpoint while guarding against the job being killed in the middle of checkpoint saving (which corrupts the checkpoint file and ruins the entire training since usually only the last checkpoint is kept per run). We first save the new checkpoint to a temp file (with a '.tmp' suffix), and then move it to overwrite the old checkpoint_path.
""" checkpoint_path_tmp = f"{checkpoint_path}.tmp" with g_pathmgr.open(checkpoint_path_tmp, "wb") as f: torch.save(checkpoint, f) # after torch.save is completed, replace the old checkpoint with the new one if g_pathmgr.exists(checkpoint_path): # remove the old checkpoint_path file first (otherwise g_pathmgr.mv fails) g_pathmgr.rm(checkpoint_path) success = g_pathmgr.mv(checkpoint_path_tmp, checkpoint_path) assert success def load_checkpoint(self): ckpt_path = get_resume_checkpoint(self.checkpoint_conf.save_dir) if ckpt_path is None: self._init_model_state() else: if self.checkpoint_conf.initialize_after_preemption: self._call_model_initializer() self._load_resuming_checkpoint(ckpt_path) def _init_model_state(self): # Checking that parameters that won't be saved are indeed frozen # We do this check here before even saving the model to catch errors # are early as possible and not at the end of the first epoch assert_skipped_parameters_are_frozen( patterns=self.checkpoint_conf.skip_saving_parameters, model=self.model, ) # Checking that parameters that won't be saved are initialized from # within the model definition, unless `initialize_after_preemption` # is explicitly set to `True`. 
If not, this is a bug, and after # preemption, the `skip_saving_parameters` will have random values allow_init_skip_parameters = self.checkpoint_conf.initialize_after_preemption with with_check_parameter_frozen( patterns=self.checkpoint_conf.skip_saving_parameters, model=self.model, disabled=allow_init_skip_parameters, ): self._call_model_initializer() def _call_model_initializer(self): model_weight_initializer = instantiate( self.checkpoint_conf.model_weight_initializer ) if model_weight_initializer is not None: logging.info( f"Loading pretrained checkpoint from {self.checkpoint_conf.model_weight_initializer}" ) self.model = model_weight_initializer(model=self.model) def _load_resuming_checkpoint(self, ckpt_path: str): logging.info(f"Resuming training from {ckpt_path}") with g_pathmgr.open(ckpt_path, "rb") as f: checkpoint = torch.load(f, map_location="cpu") load_state_dict_into_model( model=self.model, state_dict=checkpoint["model"], ignore_missing_keys=self.checkpoint_conf.skip_saving_parameters, ) self.optim.optimizer.load_state_dict(checkpoint["optimizer"]) self.loss.load_state_dict(checkpoint["loss"], strict=True) self.epoch = checkpoint["epoch"] self.steps = checkpoint["steps"] self.ckpt_time_elapsed = checkpoint.get("time_elapsed") if self.optim_conf.amp.enabled and "scaler" in checkpoint: self.scaler.load_state_dict(checkpoint["scaler"]) self.best_meter_values = checkpoint.get("best_meter_values", {}) if "train_dataset" in checkpoint and self.train_dataset is not None: self.train_dataset.load_checkpoint_state(checkpoint["train_dataset"]) def is_intermediate_val_epoch(self, epoch): return epoch % self.val_epoch_freq == 0 and epoch < self.max_epochs - 1 def _step( self, batch: BatchedVideoDatapoint, model: nn.Module, phase: str, ): outputs = model(batch) targets = batch.masks batch_size = len(batch.img_batch) key = batch.dict_key # key for dataset loss = self.loss[key](outputs, targets) loss_str = f"Losses/{phase}_{key}_loss" loss_log_str = 
os.path.join("Step_Losses", loss_str) # loss contains multiple sub-components we wish to log step_losses = {} if isinstance(loss, dict): step_losses.update( {f"Losses/{phase}_{key}_{k}": v for k, v in loss.items()} ) loss = self._log_loss_detailed_and_return_core_loss( loss, loss_log_str, self.steps[phase] ) if self.steps[phase] % self.logging_conf.log_scalar_frequency == 0: self.logger.log( loss_log_str, loss, self.steps[phase], ) self.steps[phase] += 1 ret_tuple = {loss_str: loss}, batch_size, step_losses if phase in self.meters and key in self.meters[phase]: meters_dict = self.meters[phase][key] if meters_dict is not None: for _, meter in meters_dict.items(): meter.update( find_stages=outputs, find_metadatas=batch.metadata, ) return ret_tuple def run(self): assert self.mode in ["train", "train_only", "val"] if self.mode == "train": if self.epoch > 0: logging.info(f"Resuming training from epoch: {self.epoch}") # resuming from a checkpoint if self.is_intermediate_val_epoch(self.epoch - 1): logging.info("Running previous val epoch") self.epoch -= 1 self.run_val() self.epoch += 1 self.run_train() self.run_val() elif self.mode == "val": self.run_val() elif self.mode == "train_only": self.run_train() def _setup_dataloaders(self): self.train_dataset = None self.val_dataset = None if self.mode in ["train", "val"]: self.val_dataset = instantiate(self.data_conf.get(Phase.VAL, None)) if self.mode in ["train", "train_only"]: self.train_dataset = instantiate(self.data_conf.train) def run_train(self): while self.epoch < self.max_epochs: dataloader = self.train_dataset.get_loader(epoch=int(self.epoch)) barrier() outs = self.train_epoch(dataloader) self.logger.log_dict(outs, self.epoch) # Logged only on rank 0 # log train to text file. 
if self.distributed_rank == 0: with g_pathmgr.open( os.path.join(self.logging_conf.log_dir, "train_stats.json"), "a", ) as f: f.write(json.dumps(outs) + "\n") # Save checkpoint before validating self.save_checkpoint(self.epoch + 1) del dataloader gc.collect() # Run val, not running on last epoch since will run after the # loop anyway if self.is_intermediate_val_epoch(self.epoch): self.run_val() if self.distributed_rank == 0: self.best_meter_values.update(self._get_trainer_state("train")) with g_pathmgr.open( os.path.join(self.logging_conf.log_dir, "best_stats.json"), "a", ) as f: f.write(json.dumps(self.best_meter_values) + "\n") self.epoch += 1 # epoch was incremented in the loop but the val step runs out of the loop self.epoch -= 1 def run_val(self): if not self.val_dataset: return dataloader = self.val_dataset.get_loader(epoch=int(self.epoch)) outs = self.val_epoch(dataloader, phase=Phase.VAL) del dataloader gc.collect() self.logger.log_dict(outs, self.epoch) # Logged only on rank 0 if self.distributed_rank == 0: with g_pathmgr.open( os.path.join(self.logging_conf.log_dir, "val_stats.json"), "a", ) as f: f.write(json.dumps(outs) + "\n") def val_epoch(self, val_loader, phase): batch_time = AverageMeter("Batch Time", self.device, ":.2f") data_time = AverageMeter("Data Time", self.device, ":.2f") mem = MemMeter("Mem (GB)", self.device, ":.2f") iters_per_epoch = len(val_loader) curr_phases = [phase] curr_models = [self.model] loss_names = [] for p in curr_phases: for key in self.loss.keys(): loss_names.append(f"Losses/{p}_{key}_loss") loss_mts = OrderedDict( [(name, AverageMeter(name, self.device, ":.2e")) for name in loss_names] ) extra_loss_mts = {} for model in curr_models: model.eval() if hasattr(unwrap_ddp_if_wrapped(model), "on_validation_epoch_start"): unwrap_ddp_if_wrapped(model).on_validation_epoch_start() progress = ProgressMeter( iters_per_epoch, [batch_time, data_time, mem, self.time_elapsed_meter, *loss_mts.values()], self._get_meters(curr_phases), 
prefix="Val Epoch: [{}]".format(self.epoch), ) end = time.time() for data_iter, batch in enumerate(val_loader): # measure data loading time data_time.update(time.time() - end) batch = batch.to(self.device, non_blocking=True) # compute output with torch.no_grad(): with torch.cuda.amp.autocast( enabled=(self.optim_conf.amp.enabled if self.optim_conf else False), dtype=( get_amp_type(self.optim_conf.amp.amp_dtype) if self.optim_conf else None ), ): for phase, model in zip(curr_phases, curr_models): loss_dict, batch_size, extra_losses = self._step( batch, model, phase, ) assert len(loss_dict) == 1 loss_key, loss = loss_dict.popitem() loss_mts[loss_key].update(loss.item(), batch_size) for k, v in extra_losses.items(): if k not in extra_loss_mts: extra_loss_mts[k] = AverageMeter(k, self.device, ":.2e") extra_loss_mts[k].update(v.item(), batch_size) # measure elapsed time batch_time.update(time.time() - end) end = time.time() self.time_elapsed_meter.update( time.time() - self.start_time + self.ckpt_time_elapsed ) if torch.cuda.is_available(): mem.update(reset_peak_usage=True) if data_iter % self.logging_conf.log_freq == 0: progress.display(data_iter) if data_iter % self.logging_conf.log_scalar_frequency == 0: # Log progress meters. 
for progress_meter in progress.meters: self.logger.log( os.path.join("Step_Stats", phase, progress_meter.name), progress_meter.val, self.steps[Phase.VAL], ) if data_iter % 10 == 0: dist.barrier() self.est_epoch_time[phase] = batch_time.avg * iters_per_epoch self._log_timers(phase) for model in curr_models: if hasattr(unwrap_ddp_if_wrapped(model), "on_validation_epoch_end"): unwrap_ddp_if_wrapped(model).on_validation_epoch_end() out_dict = self._log_meters_and_save_best_ckpts(curr_phases) for k, v in loss_mts.items(): out_dict[k] = v.avg for k, v in extra_loss_mts.items(): out_dict[k] = v.avg for phase in curr_phases: out_dict.update(self._get_trainer_state(phase)) self._reset_meters(curr_phases) logging.info(f"Meters: {out_dict}") return out_dict def _get_trainer_state(self, phase): return { "Trainer/where": self.where, "Trainer/epoch": self.epoch, f"Trainer/steps_{phase}": self.steps[phase], } def train_epoch(self, train_loader): # Init stat meters batch_time_meter = AverageMeter("Batch Time", self.device, ":.2f") data_time_meter = AverageMeter("Data Time", self.device, ":.2f") mem_meter = MemMeter("Mem (GB)", self.device, ":.2f") data_times = [] phase = Phase.TRAIN iters_per_epoch = len(train_loader) loss_names = [] for batch_key in self.loss.keys(): loss_names.append(f"Losses/{phase}_{batch_key}_loss") loss_mts = OrderedDict( [(name, AverageMeter(name, self.device, ":.2e")) for name in loss_names] ) extra_loss_mts = {} progress = ProgressMeter( iters_per_epoch, [ batch_time_meter, data_time_meter, mem_meter, self.time_elapsed_meter, *loss_mts.values(), ], self._get_meters([phase]), prefix="Train Epoch: [{}]".format(self.epoch), ) # Model training loop self.model.train() end = time.time() for data_iter, batch in enumerate(train_loader): # measure data loading time data_time_meter.update(time.time() - end) data_times.append(data_time_meter.val) batch = batch.to( self.device, non_blocking=True ) # move tensors in a tensorclass try: self._run_step(batch, phase, 
loss_mts, extra_loss_mts) # compute gradient and do optim step exact_epoch = self.epoch + float(data_iter) / iters_per_epoch self.where = float(exact_epoch) / self.max_epochs assert self.where <= 1 + self.EPSILON if self.where < 1.0: self.optim.step_schedulers( self.where, step=int(exact_epoch * iters_per_epoch) ) else: logging.warning( f"Skipping scheduler update since the training is at the end, i.e, {self.where} of [0,1]." ) # Log schedulers if data_iter % self.logging_conf.log_scalar_frequency == 0: for j, param_group in enumerate(self.optim.optimizer.param_groups): for option in self.optim.schedulers[j]: optim_prefix = ( "" + f"{j}_" if len(self.optim.optimizer.param_groups) > 1 else "" ) self.logger.log( os.path.join("Optim", f"{optim_prefix}", option), param_group[option], self.steps[phase], ) # Clipping gradients and detecting diverging gradients if self.gradient_clipper is not None: self.scaler.unscale_(self.optim.optimizer) self.gradient_clipper(model=self.model) if self.gradient_logger is not None: self.gradient_logger( self.model, rank=self.distributed_rank, where=self.where ) # Optimizer step: the scaler will make sure gradients are not # applied if the gradients are infinite self.scaler.step(self.optim.optimizer) self.scaler.update() # measure elapsed time batch_time_meter.update(time.time() - end) end = time.time() self.time_elapsed_meter.update( time.time() - self.start_time + self.ckpt_time_elapsed ) mem_meter.update(reset_peak_usage=True) if data_iter % self.logging_conf.log_freq == 0: progress.display(data_iter) if data_iter % self.logging_conf.log_scalar_frequency == 0: # Log progress meters. 
for progress_meter in progress.meters: self.logger.log( os.path.join("Step_Stats", phase, progress_meter.name), progress_meter.val, self.steps[phase], ) # Catching NaN/Inf errors in the loss except FloatingPointError as e: raise e self.est_epoch_time[Phase.TRAIN] = batch_time_meter.avg * iters_per_epoch self._log_timers(Phase.TRAIN) self._log_sync_data_times(Phase.TRAIN, data_times) out_dict = self._log_meters_and_save_best_ckpts([Phase.TRAIN]) for k, v in loss_mts.items(): out_dict[k] = v.avg for k, v in extra_loss_mts.items(): out_dict[k] = v.avg out_dict.update(self._get_trainer_state(phase)) logging.info(f"Losses and meters: {out_dict}") self._reset_meters([phase]) return out_dict def _log_sync_data_times(self, phase, data_times): data_times = all_reduce_max(torch.tensor(data_times)).tolist() steps = range(self.steps[phase] - len(data_times), self.steps[phase]) for step, data_time in zip(steps, data_times): if step % self.logging_conf.log_scalar_frequency == 0: self.logger.log( os.path.join("Step_Stats", phase, "Data Time Synced"), data_time, step, ) def _run_step( self, batch: BatchedVideoDatapoint, phase: str, loss_mts: Dict[str, AverageMeter], extra_loss_mts: Dict[str, AverageMeter], raise_on_error: bool = True, ): """ Run the forward / backward """ # it's important to set grads to None, especially with Adam since 0 # grads will also update a model even if the step doesn't produce # gradients self.optim.zero_grad(set_to_none=True) with torch.cuda.amp.autocast( enabled=self.optim_conf.amp.enabled, dtype=get_amp_type(self.optim_conf.amp.amp_dtype), ): loss_dict, batch_size, extra_losses = self._step( batch, self.model, phase, ) assert len(loss_dict) == 1 loss_key, loss = loss_dict.popitem() if not math.isfinite(loss.item()): error_msg = f"Loss is {loss.item()}, attempting to stop training" logging.error(error_msg) if raise_on_error: raise FloatingPointError(error_msg) else: return self.scaler.scale(loss).backward() loss_mts[loss_key].update(loss.item(), 
batch_size) for extra_loss_key, extra_loss in extra_losses.items(): if extra_loss_key not in extra_loss_mts: extra_loss_mts[extra_loss_key] = AverageMeter( extra_loss_key, self.device, ":.2e" ) extra_loss_mts[extra_loss_key].update(extra_loss.item(), batch_size) def _log_meters_and_save_best_ckpts(self, phases: List[str]): logging.info("Synchronizing meters") out_dict = {} checkpoint_save_keys = [] for key, meter in self._get_meters(phases).items(): meter_output = meter.compute_synced() is_better_check = getattr(meter, "is_better", None) for meter_subkey, meter_value in meter_output.items(): out_dict[os.path.join("Meters_train", key, meter_subkey)] = meter_value if is_better_check is None: continue tracked_meter_key = os.path.join(key, meter_subkey) if tracked_meter_key not in self.best_meter_values or is_better_check( meter_value, self.best_meter_values[tracked_meter_key], ): self.best_meter_values[tracked_meter_key] = meter_value if ( self.checkpoint_conf.save_best_meters is not None and key in self.checkpoint_conf.save_best_meters ): checkpoint_save_keys.append(tracked_meter_key.replace("/", "_")) if len(checkpoint_save_keys) > 0: self.save_checkpoint(self.epoch + 1, checkpoint_save_keys) return out_dict def _log_timers(self, phase): time_remaining = 0 epochs_remaining = self.max_epochs - self.epoch - 1 val_epochs_remaining = sum( n % self.val_epoch_freq == 0 for n in range(self.epoch, self.max_epochs) ) # Adding the guaranteed val run at the end if val_epoch_freq doesn't coincide with # the end epoch. 
if (self.max_epochs - 1) % self.val_epoch_freq != 0: val_epochs_remaining += 1 # Remove the current val run from estimate if phase == Phase.VAL: val_epochs_remaining -= 1 time_remaining += ( epochs_remaining * self.est_epoch_time[Phase.TRAIN] + val_epochs_remaining * self.est_epoch_time[Phase.VAL] ) self.logger.log( os.path.join("Step_Stats", phase, self.time_elapsed_meter.name), self.time_elapsed_meter.val, self.steps[phase], ) logging.info(f"Estimated time remaining: {human_readable_time(time_remaining)}") def _reset_meters(self, phases: str) -> None: for meter in self._get_meters(phases).values(): meter.reset() def _check_val_key_match(self, val_keys, phase): if val_keys is not None: # Check if there are any duplicates assert len(val_keys) == len( set(val_keys) ), f"Duplicate keys in val datasets, keys: {val_keys}" # Check that the keys match the meter keys if self.meters_conf is not None and phase in self.meters_conf: assert set(val_keys) == set(self.meters_conf[phase].keys()), ( f"Keys in val datasets do not match the keys in meters." f"\nMissing in meters: {set(val_keys) - set(self.meters_conf[phase].keys())}" f"\nMissing in val datasets: {set(self.meters_conf[phase].keys()) - set(val_keys)}" ) if self.loss_conf is not None: loss_keys = set(self.loss_conf.keys()) - set(["all"]) assert all([k in loss_keys for k in val_keys]), ( f"Keys in val datasets do not match the keys in losses." 
f"\nMissing in losses: {set(val_keys) - loss_keys}" f"\nMissing in val datasets: {loss_keys - set(val_keys)}" ) def _setup_components(self): # Get the keys for all the val datasets, if any val_phase = Phase.VAL val_keys = None if self.data_conf.get(val_phase, None) is not None: val_keys = collect_dict_keys(self.data_conf[val_phase]) # Additional checks on the sanity of the config for val datasets self._check_val_key_match(val_keys, phase=val_phase) logging.info("Setting up components: Model, loss, optim, meters etc.") self.epoch = 0 self.steps = {Phase.TRAIN: 0, Phase.VAL: 0} self.logger = Logger(self.logging_conf) self.model = instantiate(self.model_conf, _convert_="all") print_model_summary(self.model) self.loss = None if self.loss_conf: self.loss = { key: el # wrap_base_loss(el) for (key, el) in instantiate(self.loss_conf, _convert_="all").items() } self.loss = nn.ModuleDict(self.loss) self.meters = {} self.best_meter_values = {} if self.meters_conf: self.meters = instantiate(self.meters_conf, _convert_="all") self.scaler = torch.amp.GradScaler( self.device, enabled=self.optim_conf.amp.enabled if self.optim_conf else False, ) self.gradient_clipper = ( instantiate(self.optim_conf.gradient_clip) if self.optim_conf else None ) self.gradient_logger = ( instantiate(self.optim_conf.gradient_logger) if self.optim_conf else None ) logging.info("Finished setting up components: Model, loss, optim, meters etc.") def _construct_optimizers(self): self.optim = construct_optimizer( self.model, self.optim_conf.optimizer, self.optim_conf.options, self.optim_conf.param_group_modifiers, ) def _log_loss_detailed_and_return_core_loss(self, loss, loss_str, step): core_loss = loss.pop(CORE_LOSS_KEY) if step % self.logging_conf.log_scalar_frequency == 0: for k in loss: log_str = os.path.join(loss_str, k) self.logger.log(log_str, loss[k], step) return core_loss def print_model_summary(model: torch.nn.Module, log_dir: str = ""): """ Prints the model and the number of parameters in the 
model. # Multiple packages provide this info in a nice table format # However, they need us to provide an `input` (as they also write down the output sizes) # Our models are complex, and a single input is restrictive. # https://github.com/sksq96/pytorch-summary # https://github.com/nmhkahn/torchsummaryX """ if get_rank() != 0: return param_kwargs = {} trainable_parameters = sum( p.numel() for p in model.parameters(**param_kwargs) if p.requires_grad ) total_parameters = sum(p.numel() for p in model.parameters(**param_kwargs)) non_trainable_parameters = total_parameters - trainable_parameters logging.info("==" * 10) logging.info(f"Summary for model {type(model)}") logging.info(f"Model is {model}") logging.info(f"\tTotal parameters {get_human_readable_count(total_parameters)}") logging.info( f"\tTrainable parameters {get_human_readable_count(trainable_parameters)}" ) logging.info( f"\tNon-Trainable parameters {get_human_readable_count(non_trainable_parameters)}" ) logging.info("==" * 10) if log_dir: output_fpath = os.path.join(log_dir, "model.txt") with g_pathmgr.open(output_fpath, "w") as f: print(model, file=f) PARAMETER_NUM_UNITS = [" ", "K", "M", "B", "T"] def get_human_readable_count(number: int) -> str: """ Abbreviates an integer number with K, M, B, T for thousands, millions, billions and trillions, respectively. Examples: >>> get_human_readable_count(123) '123 ' >>> get_human_readable_count(1234) # (one thousand) '1.2 K' >>> get_human_readable_count(2e6) # (two million) '2.0 M' >>> get_human_readable_count(3e9) # (three billion) '3.0 B' >>> get_human_readable_count(4e14) # (four hundred trillion) '400 T' >>> get_human_readable_count(5e15) # (more than trillion) '5,000 T' Args: number: a positive integer number Return: A string formatted according to the pattern described above. 
""" assert number >= 0 labels = PARAMETER_NUM_UNITS num_digits = int(np.floor(np.log10(number)) + 1 if number > 0 else 1) num_groups = int(np.ceil(num_digits / 3)) num_groups = min(num_groups, len(labels)) # don't abbreviate beyond trillions shift = -3 * (num_groups - 1) number = number * (10**shift) index = num_groups - 1 if index < 1 or number >= 100: return f"{int(number):,d} {labels[index]}" else: return f"{number:,.1f} {labels[index]}" ================================================ FILE: auto-seg/submodules/segment-anything-2/training/utils/__init__.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. ================================================ FILE: auto-seg/submodules/segment-anything-2/training/utils/checkpoint_utils.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. import contextlib import fnmatch import logging from typing import ( Any, Callable, Dict, List, Mapping, Optional, Sequence, Set, Tuple, Union, ) import numpy as np import torch import torch.nn as nn from iopath.common.file_io import g_pathmgr from torch.jit._script import RecursiveScriptModule def unix_pattern_to_parameter_names( constraints: List[str], all_parameter_names: Sequence[str] ) -> Union[None, Set[str]]: """ Go through the list of parameter names and select those that match any of the provided constraints """ parameter_names = [] for param_name in constraints: matching_parameters = set(fnmatch.filter(all_parameter_names, param_name)) assert ( len(matching_parameters) > 0 ), f"param_names {param_name} don't match any param in the given names." 
parameter_names.append(matching_parameters) return set.union(*parameter_names) def filter_params_matching_unix_pattern( patterns: List[str], state_dict: Dict[str, torch.Tensor] ) -> Dict[str, torch.Tensor]: """ Keep in the state dictionary only the parameters matching the provided unix patterns Args: patterns: the list of unix patterns to include state_dict: the dictionary to filter Returns: A new state dictionary """ if len(patterns) == 0: return {} all_keys = list(state_dict.keys()) included_keys = unix_pattern_to_parameter_names(patterns, all_keys) return {k: state_dict[k] for k in included_keys} def exclude_params_matching_unix_pattern( patterns: List[str], state_dict: Dict[str, torch.Tensor] ) -> Dict[str, torch.Tensor]: """ Remove from the state dictionary the parameters matching the provided unix patterns Args: patterns: the list of unix patterns to exclude state_dict: the dictionary to filter Returns: A new state dictionary """ if len(patterns) == 0: return state_dict all_keys = list(state_dict.keys()) excluded_keys = unix_pattern_to_parameter_names(patterns, all_keys) return {k: v for k, v in state_dict.items() if k not in excluded_keys} def _get_state_dict_summary(state_dict: Dict[str, torch.Tensor]): keys = [] trace = [] for k, v in state_dict.items(): keys.append(k) trace.append(v.sum().item()) trace = np.array(trace)[np.argsort(keys)] return trace def assert_skipped_parameters_are_frozen(model: nn.Module, patterns: List[str]): """ Verifies that all the parameters matching the provided patterns are frozen - this acts as a safeguard when ignoring parameters during checkpoint saving, in case they are in fact trainable """ if not patterns: return frozen_state_dict = filter_params_matching_unix_pattern( patterns=patterns, state_dict=model.state_dict() ) non_frozen_keys = { n for n, p in model.named_parameters() if n in frozen_state_dict and p.requires_grad } if non_frozen_keys: raise ValueError( f"Parameters excluded with `skip_saving_parameters` should
be frozen: {non_frozen_keys}" ) @contextlib.contextmanager def with_check_parameter_frozen( model: nn.Module, patterns: List[str], disabled: bool = True ): """ Context manager that inspects a model around a piece of code and verifies whether the model has been updated by that piece of code. The function will raise an exception if the model has been updated on at least one of the parameters that match one of the patterns Args: model: the model that might have been updated patterns: unix patterns for the parameters we want to observe disabled: if True, the check is skipped entirely """ if not patterns or disabled: yield return frozen_state_dict = filter_params_matching_unix_pattern( patterns=patterns, state_dict=model.state_dict() ) summary_before = _get_state_dict_summary(frozen_state_dict) yield frozen_state_dict = filter_params_matching_unix_pattern( patterns=patterns, state_dict=model.state_dict() ) summary_after = _get_state_dict_summary(frozen_state_dict) if not np.allclose(summary_before, summary_after, atol=1e-6): raise ValueError( f""" The `model_weight_initializer` has initialized parameters frozen with `skip_saving_parameters`. You can resolve this error by either initializing those parameters from within the model definition or setting the flag `trainer.checkpoint.initialize_after_preemption` to True. """ ) class CkptExcludeKernel: """ Removes the keys from the given model state_dict that match the key_pattern. Args: key_pattern: Patterns used to select the keys in the state_dict that are eligible for this kernel. """ def __init__(self, key_pattern: List[str]): self.key_pattern = key_pattern def __call__(self, state_dict: Dict): """ Args: state_dict: A dictionary representing the given checkpoint's state dict.
""" if len(self.key_pattern) == 0: return state_dict exclude_keys = unix_pattern_to_parameter_names( self.key_pattern, state_dict.keys() ) return {k: v for k, v in state_dict.items() if k not in exclude_keys} def load_checkpoint( path_list: List[str], pick_recursive_keys: Optional[List[str]] = None, map_location: str = "cpu", ) -> Any: """ Loads a checkpoint from the specified path. Args: path_list: A list of paths which contain the checkpoint. Each element is tried (in order) until a file that exists is found. That file is then used to read the checkpoint. pick_recursive_keys: Picks sub dicts from the loaded checkpoint if not None. For pick_recursive_keys = ["a", "b"], will return checkpoint_dict["a"]["b"] map_location (str): a function, torch.device, string or a dict specifying how to remap storage locations Returns: Model with the matchin pre-trained weights loaded. """ path_exists = False for path in path_list: if g_pathmgr.exists(path): path_exists = True break if not path_exists: raise ValueError(f"No path exists in {path_list}") with g_pathmgr.open(path, "rb") as f: checkpoint = torch.load(f, map_location=map_location) logging.info(f"Loaded checkpoint from {path}") if pick_recursive_keys is not None: for key in pick_recursive_keys: checkpoint = checkpoint[key] return checkpoint def get_state_dict(checkpoint, ckpt_state_dict_keys): if isinstance(checkpoint, RecursiveScriptModule): # This is a torchscript JIT model return checkpoint.state_dict() pre_train_dict = checkpoint for i, key in enumerate(ckpt_state_dict_keys): if (isinstance(pre_train_dict, Mapping) and key not in pre_train_dict) or ( isinstance(pre_train_dict, Sequence) and key >= len(pre_train_dict) ): key_str = ( '["' + '"]["'.join(list(map(ckpt_state_dict_keys[:i], str))) + '"]' ) raise KeyError( f"'{key}' not found in checkpoint{key_str} " f"with keys: {pre_train_dict.keys()}" ) pre_train_dict = pre_train_dict[key] return pre_train_dict def load_checkpoint_and_apply_kernels( checkpoint_path: str, 
checkpoint_kernels: List[Callable] = None, ckpt_state_dict_keys: Tuple[str, ...] = ("state_dict",), map_location: str = "cpu", ) -> Dict[str, torch.Tensor]: """ Performs checkpoint loading with a variety of pre-processing kernels applied in sequence. Args: checkpoint_path (str): Path to the checkpoint. checkpoint_kernels List(Callable): A list of checkpoint processing kernels to apply in the specified order. Supported kernels include `CkptIncludeKernel`, `CkptExcludeKernel`, etc. These kernels are applied in the given order. ckpt_state_dict_keys (str): Keys containing the model state dict. map_location (str): a function, torch.device, string or a dict specifying how to remap storage locations Returns: The checkpoint's state dict with the matching pre-trained weights, after all kernels have been applied. """ assert g_pathmgr.exists(checkpoint_path), "Checkpoint '{}' not found".format( checkpoint_path ) # Load the checkpoint on CPU to avoid GPU mem spike. with g_pathmgr.open(checkpoint_path, "rb") as f: checkpoint = torch.load(f, map_location=map_location) pre_train_dict = get_state_dict(checkpoint, ckpt_state_dict_keys) # Not logging into info etc since it's a huge log logging.debug( "Loaded Checkpoint State Dict pre-kernel application: %s" % str(", ".join(list(pre_train_dict.keys()))) ) # Apply kernels if checkpoint_kernels is not None: for f in checkpoint_kernels: pre_train_dict = f(state_dict=pre_train_dict) logging.debug( "Loaded Checkpoint State Dict Post-kernel application %s" % str(", ".join(list(pre_train_dict.keys()))) ) return pre_train_dict def check_load_state_dict_errors( missing_keys, unexpected_keys, strict: bool, ignore_missing_keys: List[str] = None, ignore_unexpected_keys: List[str] = None, ): if ignore_missing_keys is not None and len(ignore_missing_keys) > 0: ignored_keys = unix_pattern_to_parameter_names( ignore_missing_keys, missing_keys ) missing_keys = [key for key in missing_keys if key not in ignored_keys] if ignore_unexpected_keys is not None and len(ignore_unexpected_keys) > 0: ignored_unexpected_keys =
unix_pattern_to_parameter_names( ignore_unexpected_keys, unexpected_keys ) unexpected_keys = [ key for key in unexpected_keys if key not in ignored_unexpected_keys ] err = "State key mismatch." if unexpected_keys: err += f" Unexpected keys: {unexpected_keys}." if missing_keys: err += f" Missing keys: {missing_keys}." if unexpected_keys or missing_keys: logging.warning(err) if unexpected_keys or strict: raise KeyError(err) def load_state_dict_into_model( state_dict: Dict, model: nn.Module, strict: bool = True, ignore_missing_keys: List[str] = None, ignore_unexpected_keys: List[str] = None, checkpoint_kernels: List[Callable] = None, ): """ Loads a state dict into the given model. Args: state_dict: A dictionary containing the model's state dict, or a subset if strict is False model: Model to load the checkpoint weights into strict: raise if the state_dict has missing state keys ignore_missing_keys: unix pattern of keys to ignore """ # Apply kernels if checkpoint_kernels is not None: for f in checkpoint_kernels: state_dict = f(state_dict=state_dict) missing_keys, unexpected_keys = model.load_state_dict(state_dict, strict=False) check_load_state_dict_errors( missing_keys, unexpected_keys, strict=strict, ignore_missing_keys=ignore_missing_keys, ignore_unexpected_keys=ignore_unexpected_keys, ) return model ================================================ FILE: auto-seg/submodules/segment-anything-2/training/utils/data_utils.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. """ Misc functions, including distributed helpers. Mostly copy-paste from torchvision references. 
""" from dataclasses import dataclass from typing import List, Optional, Tuple, Union import torch from PIL import Image as PILImage from tensordict import tensorclass @tensorclass class BatchedVideoMetaData: """ This class represents metadata about a batch of videos. Attributes: unique_objects_identifier: A tensor of shape Bx3 containing unique identifiers for each object in the batch. Index consists of (video_id, obj_id, frame_id) frame_orig_size: A tensor of shape Bx2 containing the original size of each frame in the batch. """ unique_objects_identifier: torch.LongTensor frame_orig_size: torch.LongTensor @tensorclass class BatchedVideoDatapoint: """ This class represents a batch of videos with associated annotations and metadata. Attributes: img_batch: A [TxBxCxHxW] tensor containing the image data for each frame in the batch, where T is the number of frames per video, and B is the number of videos in the batch. obj_to_frame_idx: A [TxOx2] tensor containing the image_batch index which the object belongs to. O is the number of objects in the batch. masks: A [TxOxHxW] tensor containing binary masks for each object in the batch. metadata: An instance of BatchedVideoMetaData containing metadata about the batch. dict_key: A string key used to identify the batch. """ img_batch: torch.FloatTensor obj_to_frame_idx: torch.IntTensor masks: torch.BoolTensor metadata: BatchedVideoMetaData dict_key: str def pin_memory(self, device=None): return self.apply(torch.Tensor.pin_memory, device=device) @property def num_frames(self) -> int: """ Returns the number of frames per video. """ return self.batch_size[0] @property def num_videos(self) -> int: """ Returns the number of videos in the batch. """ return self.img_batch.shape[1] @property def flat_obj_to_img_idx(self) -> torch.IntTensor: """ Returns a flattened tensor containing the object to img index. 
The flat index can be used to access a flattened img_batch of shape [(T*B)xCxHxW] """ frame_idx, video_idx = self.obj_to_frame_idx.unbind(dim=-1) flat_idx = video_idx * self.num_frames + frame_idx return flat_idx @property def flat_img_batch(self) -> torch.FloatTensor: """ Returns a flattened img_batch_tensor of shape [(B*T)xCxHxW] """ return self.img_batch.transpose(0, 1).flatten(0, 1) @dataclass class Object: # Id of the object in the media object_id: int # Index of the frame in the media (0 if single image) frame_index: int segment: Union[torch.Tensor, dict] # RLE dict or binary mask @dataclass class Frame: data: Union[torch.Tensor, PILImage.Image] objects: List[Object] @dataclass class VideoDatapoint: """Refers to an image/video and all its annotations""" frames: List[Frame] video_id: int size: Tuple[int, int] def collate_fn( batch: List[VideoDatapoint], dict_key, ) -> BatchedVideoDatapoint: """ Args: batch: A list of VideoDatapoint instances. dict_key (str): A string key used to identify the batch. """ img_batch = [] for video in batch: img_batch += [torch.stack([frame.data for frame in video.frames], dim=0)] img_batch = torch.stack(img_batch, dim=0).permute((1, 0, 2, 3, 4)) T = img_batch.shape[0] # Prepare data structures for sequential processing. Per-frame processing but batched across videos. 
step_t_objects_identifier = [[] for _ in range(T)] step_t_frame_orig_size = [[] for _ in range(T)] step_t_masks = [[] for _ in range(T)] step_t_obj_to_frame_idx = [ [] for _ in range(T) ] # List to store frame indices for each time step for video_idx, video in enumerate(batch): orig_video_id = video.video_id orig_frame_size = video.size for t, frame in enumerate(video.frames): objects = frame.objects for obj in objects: orig_obj_id = obj.object_id orig_frame_idx = obj.frame_index step_t_obj_to_frame_idx[t].append( torch.tensor([t, video_idx], dtype=torch.int) ) step_t_masks[t].append(obj.segment.to(torch.bool)) step_t_objects_identifier[t].append( torch.tensor([orig_video_id, orig_obj_id, orig_frame_idx]) ) step_t_frame_orig_size[t].append(torch.tensor(orig_frame_size)) obj_to_frame_idx = torch.stack( [ torch.stack(obj_to_frame_idx, dim=0) for obj_to_frame_idx in step_t_obj_to_frame_idx ], dim=0, ) masks = torch.stack([torch.stack(masks, dim=0) for masks in step_t_masks], dim=0) objects_identifier = torch.stack( [torch.stack(id, dim=0) for id in step_t_objects_identifier], dim=0 ) frame_orig_size = torch.stack( [torch.stack(id, dim=0) for id in step_t_frame_orig_size], dim=0 ) return BatchedVideoDatapoint( img_batch=img_batch, obj_to_frame_idx=obj_to_frame_idx, masks=masks, metadata=BatchedVideoMetaData( unique_objects_identifier=objects_identifier, frame_orig_size=frame_orig_size, ), dict_key=dict_key, batch_size=[T], ) ================================================ FILE: auto-seg/submodules/segment-anything-2/training/utils/distributed.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. 
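The `flat_obj_to_img_idx` / `flat_img_batch` pair in `data_utils.py` above only works because the two flattenings agree: `img_batch` is laid out `[T, B, ...]`, the flattened batch is video-major `[B*T, ...]`, and an object's flat index is `video_idx * num_frames + frame_idx`. A minimal sketch with toy shapes (the dimension values are illustrative, not from the repo) checking that the index formula and the `transpose(0, 1).flatten(0, 1)` layout line up:

```python
import torch

# Toy dimensions (hypothetical): T frames per video, B videos, 1x4x4 "images".
T, B = 3, 2
img_batch = torch.arange(T * B).float().view(T, B, 1, 1, 1).expand(T, B, 1, 4, 4)

# An object observed in video 1 at frame 2.
frame_idx, video_idx = 2, 1
flat_idx = video_idx * T + frame_idx  # same formula as flat_obj_to_img_idx

# flat_img_batch: [T, B, C, H, W] -> [(B*T), C, H, W], video-major ordering.
flat_img_batch = img_batch.transpose(0, 1).flatten(0, 1)

# The flat index retrieves exactly the frame the object came from.
assert torch.equal(flat_img_batch[flat_idx], img_batch[frame_idx, video_idx])
```

The video-major ordering (`b * T + t` rather than `t * B + b`) is what makes the `transpose(0, 1)` in `flat_img_batch` necessary before flattening.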
import datetime import functools import io import logging import os import random import tempfile import time from typing import Any, Callable, List, Tuple import torch import torch.autograd as autograd import torch.distributed as dist # Default to GPU 0 _cuda_device_index: int = 0 # Setting _cuda_device_index to -1 internally implies that we should use CPU _CPU_DEVICE_INDEX = -1 _PRIMARY_RANK = 0 @functools.lru_cache() def _get_global_gloo_group(): """ Return a process group based on gloo backend, containing all the ranks The result is cached. """ if dist.get_backend() == "nccl": # Increase timeout from 1800 sec to 43200 sec (12 hr) to avoid some processes # being much slower than others causing a timeout (which can happen in relation # or LVIS class mAP evaluation). timeout = 43200 return dist.new_group( backend="gloo", timeout=datetime.timedelta(seconds=timeout), ) return dist.group.WORLD def is_main_process(): """Return true if the current process is the main one""" return get_rank() == 0 def all_gather_via_filesys(data, filesys_save_dir=None, gather_to_rank_0_only=False): """ Run all_gather on arbitrary picklable data (not necessarily tensors), similar to `all_gather` above, but using filesystem instead of collective ops. If gather_to_rank_0_only is True, only rank 0 will load the gathered object list (and other ranks will have an empty list). 
world_size = get_world_size() if world_size == 1: return [data] print("gathering via files") cpu_group = _get_global_gloo_group() # if unspecified, we will save to the current python file dir if filesys_save_dir is not None: save_dir = filesys_save_dir elif "EXP_DIR" in os.environ: save_dir = os.environ["EXP_DIR"] else: # try the same directory where the code is stored save_dir = os.path.dirname(__file__) save_dir = os.path.join(save_dir, "all_gather_via_filesys") if is_main_process(): os.makedirs(save_dir, exist_ok=True) # use a timestamp and salt to distinguish different all_gather timestamp = int(time.time()) if is_main_process() else 0 salt = random.randint(0, 2**31 - 1) if is_main_process() else 0 # broadcast the timestamp and salt across ranks # (all-reduce will do the broadcasting since only rank 0 is non-zero) timestamp_and_salt = torch.tensor([timestamp, salt], dtype=torch.long) dist.all_reduce(timestamp_and_salt, group=cpu_group) timestamp, salt = timestamp_and_salt.tolist() # save the data to a file on the disk rank_save = get_rank() save_data_filename = f"data_to_gather_{timestamp}_{salt}_{rank_save}.pkl" save_data_path = os.path.join(save_dir, save_data_filename) assert not os.path.exists(save_data_path), f"{save_data_path} already exists" torch.save(data, save_data_path) dist.barrier(group=cpu_group) # read the data from the files data_list = [] if rank_save == 0 or not gather_to_rank_0_only: for rank_load in range(world_size): load_data_filename = f"data_to_gather_{timestamp}_{salt}_{rank_load}.pkl" load_data_path = os.path.join(save_dir, load_data_filename) assert os.path.exists(load_data_path), f"cannot read {load_data_path}" data_list.append(torch.load(load_data_path)) dist.barrier(group=cpu_group) # delete the saved file os.remove(save_data_path) return data_list def all_gather(data, force_cpu=False, force_filesys=False, filesys_save_dir=None): """ Run all_gather on arbitrary picklable data (not necessarily tensors) Args:
data: any picklable object Returns: list[data]: list of data gathered from each rank """ world_size = get_world_size() if world_size == 1: return [data] if os.getenv("MDETR_FILESYS_REDUCE_RANK_0_ONLY") == "1": return all_gather_via_filesys( data, filesys_save_dir, gather_to_rank_0_only=True ) if os.getenv("MDETR_FILESYS_REDUCE") == "1" or force_filesys: return all_gather_via_filesys(data, filesys_save_dir) cpu_group = None if os.getenv("MDETR_CPU_REDUCE") == "1" or force_cpu: cpu_group = _get_global_gloo_group() buffer = io.BytesIO() torch.save(data, buffer) data_view = buffer.getbuffer() device = "cuda" if cpu_group is None else "cpu" tensor = torch.ByteTensor(data_view).to(device) # obtain Tensor size of each rank local_size = torch.tensor([tensor.numel()], device=device, dtype=torch.long) size_list = [ torch.tensor([0], device=device, dtype=torch.long) for _ in range(world_size) ] if cpu_group is None: dist.all_gather(size_list, local_size) else: print("gathering on cpu") dist.all_gather(size_list, local_size, group=cpu_group) size_list = [int(size.item()) for size in size_list] max_size = max(size_list) assert isinstance(local_size.item(), int) local_size = int(local_size.item()) # receiving Tensor from all ranks # we pad the tensor because torch all_gather does not support # gathering tensors of different shapes tensor_list = [] for _ in size_list: tensor_list.append(torch.empty((max_size,), dtype=torch.uint8, device=device)) if local_size != max_size: padding = torch.empty( size=(max_size - local_size,), dtype=torch.uint8, device=device ) tensor = torch.cat((tensor, padding), dim=0) if cpu_group is None: dist.all_gather(tensor_list, tensor) else: dist.all_gather(tensor_list, tensor, group=cpu_group) data_list = [] for size, tensor in zip(size_list, tensor_list): tensor = torch.split(tensor, [size, max_size - size], dim=0)[0] buffer = io.BytesIO(tensor.cpu().numpy()) obj = torch.load(buffer) data_list.append(obj) return data_list def 
convert_to_distributed_tensor(tensor: torch.Tensor) -> Tuple[torch.Tensor, str]: """ For some backends, such as NCCL, communication only works if the tensor is on the GPU. This helper function converts to the correct device and returns the tensor + original device. """ orig_device = "cpu" if not tensor.is_cuda else "gpu" if ( torch.distributed.is_available() and torch.distributed.get_backend() == torch.distributed.Backend.NCCL and not tensor.is_cuda ): tensor = tensor.cuda() return (tensor, orig_device) def convert_to_normal_tensor(tensor: torch.Tensor, orig_device: str) -> torch.Tensor: """ For some backends, such as NCCL, communication only works if the tensor is on the GPU. This converts the tensor back to original device. """ if tensor.is_cuda and orig_device == "cpu": tensor = tensor.cpu() return tensor def is_distributed_training_run() -> bool: return ( torch.distributed.is_available() and torch.distributed.is_initialized() and (torch.distributed.get_world_size() > 1) ) def is_primary() -> bool: """ Returns True if this is rank 0 of a distributed training job OR if it is a single trainer job. Otherwise False. """ return get_rank() == _PRIMARY_RANK def all_reduce_mean(tensor: torch.Tensor) -> torch.Tensor: """ Wrapper over torch.distributed.all_reduce for performing mean reduction of tensor over all processes. """ return all_reduce_op( tensor, torch.distributed.ReduceOp.SUM, lambda t: t / torch.distributed.get_world_size(), ) def all_reduce_sum(tensor: torch.Tensor) -> torch.Tensor: """ Wrapper over torch.distributed.all_reduce for performing sum reduction of tensor over all processes in both distributed / non-distributed scenarios. """ return all_reduce_op(tensor, torch.distributed.ReduceOp.SUM) def all_reduce_min(tensor: torch.Tensor) -> torch.Tensor: """ Wrapper over torch.distributed.all_reduce for performing min reduction of tensor over all processes in both distributed / non-distributed scenarios. 
""" return all_reduce_op(tensor, torch.distributed.ReduceOp.MIN) def all_reduce_max(tensor: torch.Tensor) -> torch.Tensor: """ Wrapper over torch.distributed.all_reduce for performing min reduction of tensor over all processes in both distributed / non-distributed scenarios. """ return all_reduce_op(tensor, torch.distributed.ReduceOp.MAX) def all_reduce_op( tensor: torch.Tensor, op: torch.distributed.ReduceOp, after_op_func: Callable[[torch.Tensor], torch.Tensor] = None, ) -> torch.Tensor: """ Wrapper over torch.distributed.all_reduce for performing reduction of tensor over all processes in both distributed / non-distributed scenarios. """ if is_distributed_training_run(): tensor, orig_device = convert_to_distributed_tensor(tensor) torch.distributed.all_reduce(tensor, op) if after_op_func is not None: tensor = after_op_func(tensor) tensor = convert_to_normal_tensor(tensor, orig_device) return tensor def gather_tensors_from_all(tensor: torch.Tensor) -> List[torch.Tensor]: """ Wrapper over torch.distributed.all_gather for performing 'gather' of 'tensor' over all processes in both distributed / non-distributed scenarios. """ if tensor.ndim == 0: # 0 dim tensors cannot be gathered. 
so unsqueeze tensor = tensor.unsqueeze(0) if is_distributed_training_run(): tensor, orig_device = convert_to_distributed_tensor(tensor) gathered_tensors = [ torch.zeros_like(tensor) for _ in range(torch.distributed.get_world_size()) ] torch.distributed.all_gather(gathered_tensors, tensor) gathered_tensors = [ convert_to_normal_tensor(_tensor, orig_device) for _tensor in gathered_tensors ] else: gathered_tensors = [tensor] return gathered_tensors def gather_from_all(tensor: torch.Tensor) -> torch.Tensor: gathered_tensors = gather_tensors_from_all(tensor) gathered_tensor = torch.cat(gathered_tensors, 0) return gathered_tensor def broadcast(tensor: torch.Tensor, src: int = 0) -> torch.Tensor: """ Wrapper over torch.distributed.broadcast for broadcasting a tensor from the source to all processes in both distributed / non-distributed scenarios. """ if is_distributed_training_run(): tensor, orig_device = convert_to_distributed_tensor(tensor) torch.distributed.broadcast(tensor, src) tensor = convert_to_normal_tensor(tensor, orig_device) return tensor def barrier() -> None: """ Wrapper over torch.distributed.barrier, returns without waiting if the distributed process group is not initialized instead of throwing error. 
""" if not torch.distributed.is_available() or not torch.distributed.is_initialized(): return torch.distributed.barrier() def get_world_size() -> int: """ Simple wrapper for correctly getting worldsize in both distributed / non-distributed settings """ return ( torch.distributed.get_world_size() if torch.distributed.is_available() and torch.distributed.is_initialized() else 1 ) def get_rank() -> int: """ Simple wrapper for correctly getting rank in both distributed / non-distributed settings """ return ( torch.distributed.get_rank() if torch.distributed.is_available() and torch.distributed.is_initialized() else 0 ) def get_primary_rank() -> int: return _PRIMARY_RANK def set_cuda_device_index(idx: int) -> None: global _cuda_device_index _cuda_device_index = idx torch.cuda.set_device(_cuda_device_index) def set_cpu_device() -> None: global _cuda_device_index _cuda_device_index = _CPU_DEVICE_INDEX def get_cuda_device_index() -> int: return _cuda_device_index def init_distributed_data_parallel_model( model: torch.nn.Module, broadcast_buffers: bool = False, find_unused_parameters: bool = True, bucket_cap_mb: int = 25, ) -> torch.nn.parallel.DistributedDataParallel: global _cuda_device_index if _cuda_device_index == _CPU_DEVICE_INDEX: # CPU-only model, don't specify device return torch.nn.parallel.DistributedDataParallel( model, broadcast_buffers=broadcast_buffers, find_unused_parameters=find_unused_parameters, bucket_cap_mb=bucket_cap_mb, ) else: # GPU model return torch.nn.parallel.DistributedDataParallel( model, device_ids=[_cuda_device_index], output_device=_cuda_device_index, broadcast_buffers=broadcast_buffers, find_unused_parameters=find_unused_parameters, bucket_cap_mb=bucket_cap_mb, ) def broadcast_object(obj: Any, src: int = _PRIMARY_RANK, use_disk: bool = True) -> Any: """Broadcast an object from a source to all workers. 
Args: obj: Object to broadcast, must be serializable src: Source rank for broadcast (default is primary) use_disk: If enabled, removes redundant CPU memory copies by writing to disk """ # Either broadcast from primary to the fleet (default), # or use the src setting as the original rank if get_rank() == src: # Emit data buffer = io.BytesIO() torch.save(obj, buffer) data_view = buffer.getbuffer() length_tensor = torch.LongTensor([len(data_view)]) length_tensor = broadcast(length_tensor, src=src) data_tensor = torch.ByteTensor(data_view) data_tensor = broadcast(data_tensor, src=src) else: # Fetch from the source length_tensor = torch.LongTensor([0]) length_tensor = broadcast(length_tensor, src=src) data_tensor = torch.empty([length_tensor.item()], dtype=torch.uint8) data_tensor = broadcast(data_tensor, src=src) if use_disk: with tempfile.TemporaryFile("r+b") as f: f.write(data_tensor.numpy()) # remove reference to the data tensor and hope that Python garbage # collects it del data_tensor f.seek(0) obj = torch.load(f) else: buffer = io.BytesIO(data_tensor.numpy()) obj = torch.load(buffer) return obj def all_gather_tensor(tensor: torch.Tensor, world_size=None): if world_size is None: world_size = get_world_size() # make contiguous because NCCL won't gather the tensor otherwise assert tensor.is_contiguous(), f"{tensor.shape} is not contiguous!" tensor, orig_device = convert_to_distributed_tensor(tensor) tensor_all = [torch.ones_like(tensor) for _ in range(world_size)] dist.all_gather(tensor_all, tensor, async_op=False) # performance opt tensor_all = [ convert_to_normal_tensor(tensor, orig_device) for tensor in tensor_all ] return tensor_all def all_gather_batch(tensors: List[torch.Tensor]): """ Performs all_gather operation on the provided tensors. 
""" # Queue the gathered tensors world_size = get_world_size() # There is no need for reduction in the single-proc case if world_size == 1: return tensors tensor_list = [] output_tensor = [] for tensor in tensors: tensor_all = all_gather_tensor(tensor, world_size) tensor_list.append(tensor_all) for tensor_all in tensor_list: output_tensor.append(torch.cat(tensor_all, dim=0)) return output_tensor class GatherLayer(autograd.Function): """ Gather tensors from all workers with support for backward propagation: This implementation does not cut the gradients as torch.distributed.all_gather does. """ @staticmethod def forward(ctx, x): output = [torch.zeros_like(x) for _ in range(dist.get_world_size())] dist.all_gather(output, x) return tuple(output) @staticmethod def backward(ctx, *grads): all_gradients = torch.stack(grads) dist.all_reduce(all_gradients) return all_gradients[dist.get_rank()] def all_gather_batch_with_grad(tensors): """ Performs all_gather operation on the provided tensors. Graph remains connected for backward grad computation. """ # Queue the gathered tensors world_size = get_world_size() # There is no need for reduction in the single-proc case if world_size == 1: return tensors tensor_list = [] output_tensor = [] for tensor in tensors: tensor_all = GatherLayer.apply(tensor) tensor_list.append(tensor_all) for tensor_all in tensor_list: output_tensor.append(torch.cat(tensor_all, dim=0)) return output_tensor def unwrap_ddp_if_wrapped(model): if isinstance(model, torch.nn.parallel.DistributedDataParallel): return model.module return model def create_new_process_group(group_size): """ Creates process groups of a gives `group_size` and returns process group that current GPU participates in. `group_size` must divide the total number of GPUs (world_size). 
Modified from https://github.com/NVIDIA/apex/blob/4e1ae43f7f7ac69113ef426dd15f37123f0a2ed3/apex/parallel/__init__.py#L60 Args: group_size (int): number of GPU's to collaborate for sync bn """ assert group_size > 0 world_size = torch.distributed.get_world_size() if world_size <= 8: if group_size > world_size: logging.warning( f"Requested group size [{group_size}] > world size [{world_size}]. " "Assuming local debug run and capping it to world size." ) group_size = world_size assert world_size >= group_size assert world_size % group_size == 0 group = None for group_num in range(world_size // group_size): group_ids = range(group_num * group_size, (group_num + 1) * group_size) cur_group = torch.distributed.new_group(ranks=group_ids) if torch.distributed.get_rank() // group_size == group_num: group = cur_group # can not drop out and return here, every process must go through creation of all subgroups assert group is not None return group def is_dist_avail_and_initialized(): if not dist.is_available(): return False if not dist.is_initialized(): return False return True ================================================ FILE: auto-seg/submodules/segment-anything-2/training/utils/logger.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. 
# Code borrowed from TLC - https://www.internalfb.com/code/fbsource/fbcode/pytorch/tlc/torchtlc/loggers/tensorboard.py import atexit import functools import logging import sys import uuid from typing import Any, Dict, Optional, Union from hydra.utils import instantiate from iopath.common.file_io import g_pathmgr from numpy import ndarray from torch import Tensor from torch.utils.tensorboard import SummaryWriter from training.utils.train_utils import get_machine_local_and_dist_rank, makedir Scalar = Union[Tensor, ndarray, int, float] def make_tensorboard_logger(log_dir: str, **writer_kwargs: Any): makedir(log_dir) summary_writer_method = SummaryWriter return TensorBoardLogger( path=log_dir, summary_writer_method=summary_writer_method, **writer_kwargs ) class TensorBoardWriterWrapper: """ A wrapper around a SummaryWriter object. """ def __init__( self, path: str, *args: Any, filename_suffix: str = None, summary_writer_method: Any = SummaryWriter, **kwargs: Any, ) -> None: """Create a new TensorBoard logger. On construction, the logger creates a new events file that logs will be written to. If the environment variable `RANK` is defined, logger will only log if RANK = 0. NOTE: If using the logger with distributed training: - This logger can call collective operations - Logs will be written on rank 0 only - Logger must be constructed synchronously *after* initializing distributed process group. Args: path (str): path to write logs to *args, **kwargs: Extra arguments to pass to SummaryWriter """ self._writer: Optional[SummaryWriter] = None _, self._rank = get_machine_local_and_dist_rank() self._path: str = path if self._rank == 0: logging.info( f"TensorBoard SummaryWriter instantiated. 
Files will be stored in: {path}"
            )
            self._writer = summary_writer_method(
                log_dir=path,
                *args,
                filename_suffix=filename_suffix or str(uuid.uuid4()),
                **kwargs,
            )
        else:
            logging.debug(
                f"Not logging meters on this host because env RANK: {self._rank} != 0"
            )
        atexit.register(self.close)

    @property
    def writer(self) -> Optional[SummaryWriter]:
        return self._writer

    @property
    def path(self) -> str:
        return self._path

    def flush(self) -> None:
        """Writes pending logs to disk."""
        if not self._writer:
            return
        self._writer.flush()

    def close(self) -> None:
        """Close writer, flushing pending logs to disk.

        Logs cannot be written after `close` is called.
        """
        if not self._writer:
            return
        self._writer.close()
        self._writer = None


class TensorBoardLogger(TensorBoardWriterWrapper):
    """
    A simple logger for TensorBoard.
    """

    def log_dict(self, payload: Dict[str, Scalar], step: int) -> None:
        """Add multiple scalar values to TensorBoard.

        Args:
            payload (dict): dictionary of tag name and scalar value
            step (int, Optional): step value to record
        """
        if not self._writer:
            return
        for k, v in payload.items():
            self.log(k, v, step)

    def log(self, name: str, data: Scalar, step: int) -> None:
        """Add scalar data to TensorBoard.

        Args:
            name (string): tag name used to group scalars
            data (float/int/Tensor): scalar data to log
            step (int, optional): step value to record
        """
        if not self._writer:
            return
        self._writer.add_scalar(name, data, global_step=step, new_style=True)

    def log_hparams(
        self, hparams: Dict[str, Scalar], meters: Dict[str, Scalar]
    ) -> None:
        """Add hyperparameter data to TensorBoard.

        Args:
            hparams (dict): dictionary of hyperparameter names and corresponding values
            meters (dict): dictionary of meter names and corresponding values
        """
        if not self._writer:
            return
        self._writer.add_hparams(hparams, meters)


class Logger:
    """
    A logger class that can interface with multiple loggers. It now supports tensorboard
    only for simplicity, but you can extend it with your own logger.
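    The `should_log` handling in `__init__` below pops the flag so it is never
    forwarded to the writer constructor; sketched here on a plain dict
    (`resolve_tb_config` is a hypothetical helper for illustration):

    ```python
    def resolve_tb_config(logging_conf):
        # Mirrors Logger.__init__: `should_log` is popped so it is not passed
        # to the TensorBoard writer constructor; a missing key defaults to True,
        # and an absent/empty tensorboard_writer config disables logging.
        tb_config = dict(logging_conf.get("tensorboard_writer") or {})
        should_log = bool(tb_config) and tb_config.pop("should_log", True)
        return tb_config if should_log else None
    ```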
""" def __init__(self, logging_conf): # allow turning off TensorBoard with "should_log: false" in config tb_config = logging_conf.tensorboard_writer tb_should_log = tb_config and tb_config.pop("should_log", True) self.tb_logger = instantiate(tb_config) if tb_should_log else None def log_dict(self, payload: Dict[str, Scalar], step: int) -> None: if self.tb_logger: self.tb_logger.log_dict(payload, step) def log(self, name: str, data: Scalar, step: int) -> None: if self.tb_logger: self.tb_logger.log(name, data, step) def log_hparams( self, hparams: Dict[str, Scalar], meters: Dict[str, Scalar] ) -> None: if self.tb_logger: self.tb_logger.log_hparams(hparams, meters) # cache the opened file object, so that different calls to `setup_logger` # with the same file name can safely write to the same file. @functools.lru_cache(maxsize=None) def _cached_log_stream(filename): # we tune the buffering value so that the logs are updated # frequently. log_buffer_kb = 10 * 1024 # 10KB io = g_pathmgr.open(filename, mode="a", buffering=log_buffer_kb) atexit.register(io.close) return io def setup_logging( name, output_dir=None, rank=0, log_level_primary="INFO", log_level_secondary="ERROR", ): """ Setup various logging streams: stdout and file handlers. For file handlers, we only setup for the master gpu. 
""" # get the filename if we want to log to the file as well log_filename = None if output_dir: makedir(output_dir) if rank == 0: log_filename = f"{output_dir}/log.txt" logger = logging.getLogger(name) logger.setLevel(log_level_primary) # create formatter FORMAT = "%(levelname)s %(asctime)s %(filename)s:%(lineno)4d: %(message)s" formatter = logging.Formatter(FORMAT) # Cleanup any existing handlers for h in logger.handlers: logger.removeHandler(h) logger.root.handlers = [] # setup the console handler console_handler = logging.StreamHandler(sys.stdout) console_handler.setFormatter(formatter) logger.addHandler(console_handler) if rank == 0: console_handler.setLevel(log_level_primary) else: console_handler.setLevel(log_level_secondary) # we log to file as well if user wants if log_filename and rank == 0: file_handler = logging.StreamHandler(_cached_log_stream(log_filename)) file_handler.setLevel(log_level_primary) file_handler.setFormatter(formatter) logger.addHandler(file_handler) logging.root = logger def shutdown_logging(): """ After training is done, we ensure to shut down all the logger streams. """ logging.info("Shutting down loggers...") handlers = logging.root.handlers for handler in handlers: handler.close() ================================================ FILE: auto-seg/submodules/segment-anything-2/training/utils/train_utils.py ================================================ # Copyright (c) Meta Platforms, Inc. and affiliates. # All rights reserved. # This source code is licensed under the license found in the # LICENSE file in the root directory of this source tree. 
import logging
import math
import os
import random
import re
from datetime import timedelta
from typing import Optional

import hydra
import numpy as np
import omegaconf
import torch
import torch.distributed as dist
from iopath.common.file_io import g_pathmgr
from omegaconf import OmegaConf


def multiply_all(*args):
    return np.prod(np.array(args)).item()


def collect_dict_keys(config):
    """This function recursively iterates through a dataset configuration and
    collects all the dict_key values that are defined."""
    val_keys = []
    # If this config points to the collate function, then it has a key
    if "_target_" in config and re.match(r".*collate_fn.*", config["_target_"]):
        val_keys.append(config["dict_key"])
    else:
        # Recursively proceed
        for v in config.values():
            if isinstance(v, type(config)):
                val_keys.extend(collect_dict_keys(v))
            elif isinstance(v, omegaconf.listconfig.ListConfig):
                for item in v:
                    if isinstance(item, type(config)):
                        val_keys.extend(collect_dict_keys(item))
    return val_keys


class Phase:
    TRAIN = "train"
    VAL = "val"


def register_omegaconf_resolvers():
    OmegaConf.register_new_resolver("get_method", hydra.utils.get_method)
    OmegaConf.register_new_resolver("get_class", hydra.utils.get_class)
    OmegaConf.register_new_resolver("add", lambda x, y: x + y)
    OmegaConf.register_new_resolver("times", multiply_all)
    OmegaConf.register_new_resolver("divide", lambda x, y: x / y)
    OmegaConf.register_new_resolver("pow", lambda x, y: x**y)
    OmegaConf.register_new_resolver("subtract", lambda x, y: x - y)
    OmegaConf.register_new_resolver("range", lambda x: list(range(x)))
    OmegaConf.register_new_resolver("int", lambda x: int(x))
    OmegaConf.register_new_resolver("ceil_int", lambda x: int(math.ceil(x)))
    OmegaConf.register_new_resolver("merge", lambda *x: OmegaConf.merge(*x))


def setup_distributed_backend(backend, timeout_mins):
    """
    Initialize torch.distributed and set the CUDA device.
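    For illustration, the environment variables this function expects can be parsed as
    follows (`read_dist_env` is a hypothetical helper; a dict is passed in so the sketch
    does not touch the real `os.environ`):

    ```python
    def read_dist_env(environ):
        # setup_distributed_backend relies on the standard torch.distributed
        # env variables; LOCAL_RANK additionally selects the CUDA device.
        local_rank = int(environ["LOCAL_RANK"])
        rank = int(environ["RANK"])
        world_size = int(environ["WORLD_SIZE"])
        return local_rank, rank, world_size
    ```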
    Expects environment variables to be set as per
    https://pytorch.org/docs/stable/distributed.html#environment-variable-initialization
    along with the environment variable "LOCAL_RANK" which is used to set the CUDA device.
    """
    # enable TORCH_NCCL_ASYNC_ERROR_HANDLING to ensure dist nccl ops time out after
    # timeout_mins of waiting
    os.environ["TORCH_NCCL_ASYNC_ERROR_HANDLING"] = "1"
    logging.info(f"Setting up torch.distributed with a timeout of {timeout_mins} mins")
    dist.init_process_group(backend=backend, timeout=timedelta(minutes=timeout_mins))
    return dist.get_rank()


def get_machine_local_and_dist_rank():
    """
    Get the distributed and local rank of the current gpu.
    """
    local_rank = int(os.environ.get("LOCAL_RANK", None))
    distributed_rank = int(os.environ.get("RANK", None))
    assert (
        local_rank is not None and distributed_rank is not None
    ), "Please set the RANK and LOCAL_RANK environment variables."
    return local_rank, distributed_rank


def print_cfg(cfg):
    """
    Supports printing both Hydra DictConfig and also the AttrDict config
    """
    logging.info("Training with config:")
    logging.info(OmegaConf.to_yaml(cfg))


def set_seeds(seed_value, max_epochs, dist_rank):
    """
    Set the python random, numpy and torch seed for each gpu. Also set the CUDA
    seeds if CUDA is available. This ensures the deterministic nature of the training.
    """
    # Since in the pytorch sampler, we increment the seed by 1 for every epoch.
    seed_value = (seed_value + dist_rank) * max_epochs
    logging.info(f"MACHINE SEED: {seed_value}")
    random.seed(seed_value)
    np.random.seed(seed_value)
    torch.manual_seed(seed_value)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed_value)


def makedir(dir_path):
    """
    Create the directory if it does not exist.
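    The same contract can be sketched with the standard library alone (a hypothetical
    `makedir_sketch`, with `os.makedirs` standing in for `g_pathmgr.mkdirs`): it returns
    True on success and stays a no-op when the directory already exists:

    ```python
    import os
    import tempfile


    def makedir_sketch(dir_path):
        # Mirrors makedir: swallow filesystem errors and report success/failure,
        # treating an already-existing directory as success.
        try:
            os.makedirs(dir_path, exist_ok=True)
            return True
        except OSError:
            return False
    ```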
""" is_success = False try: if not g_pathmgr.exists(dir_path): g_pathmgr.mkdirs(dir_path) is_success = True except BaseException: logging.info(f"Error creating directory: {dir_path}") return is_success def is_dist_avail_and_initialized(): if not dist.is_available(): return False if not dist.is_initialized(): return False return True def get_amp_type(amp_type: Optional[str] = None): if amp_type is None: return None assert amp_type in ["bfloat16", "float16"], "Invalid Amp type." if amp_type == "bfloat16": return torch.bfloat16 else: return torch.float16 def log_env_variables(): env_keys = sorted(list(os.environ.keys())) st = "" for k in env_keys: v = os.environ[k] st += f"{k}={v}\n" logging.info("Logging ENV_VARIABLES") logging.info(st) class AverageMeter: """Computes and stores the average and current value""" def __init__(self, name, device, fmt=":f"): self.name = name self.fmt = fmt self.device = device self.reset() def reset(self): self.val = 0 self.avg = 0 self.sum = 0 self.count = 0 self._allow_updates = True def update(self, val, n=1): self.val = val self.sum += val * n self.count += n self.avg = self.sum / self.count def __str__(self): fmtstr = "{name}: {val" + self.fmt + "} ({avg" + self.fmt + "})" return fmtstr.format(**self.__dict__) class MemMeter: """Computes and stores the current, avg, and max of peak Mem usage per iteration""" def __init__(self, name, device, fmt=":f"): self.name = name self.fmt = fmt self.device = device self.reset() def reset(self): self.val = 0 # Per iteration max usage self.avg = 0 # Avg per iteration max usage self.peak = 0 # Peak usage for lifetime of program self.sum = 0 self.count = 0 self._allow_updates = True def update(self, n=1, reset_peak_usage=True): self.val = torch.cuda.max_memory_allocated() // 1e9 self.sum += self.val * n self.count += n self.avg = self.sum / self.count self.peak = max(self.peak, self.val) if reset_peak_usage: torch.cuda.reset_peak_memory_stats() def __str__(self): fmtstr = ( "{name}: {val" + 
self.fmt + "} ({avg" + self.fmt + "}/{peak" + self.fmt + "})" ) return fmtstr.format(**self.__dict__) def human_readable_time(time_seconds): time = int(time_seconds) minutes, seconds = divmod(time, 60) hours, minutes = divmod(minutes, 60) days, hours = divmod(hours, 24) return f"{days:02}d {hours:02}h {minutes:02}m" class DurationMeter: def __init__(self, name, device, fmt=":f"): self.name = name self.device = device self.fmt = fmt self.val = 0 def reset(self): self.val = 0 def update(self, val): self.val = val def add(self, val): self.val += val def __str__(self): return f"{self.name}: {human_readable_time(self.val)}" class ProgressMeter: def __init__(self, num_batches, meters, real_meters, prefix=""): self.batch_fmtstr = self._get_batch_fmtstr(num_batches) self.meters = meters self.real_meters = real_meters self.prefix = prefix def display(self, batch, enable_print=False): entries = [self.prefix + self.batch_fmtstr.format(batch)] entries += [str(meter) for meter in self.meters] entries += [ " | ".join( [ f"{os.path.join(name, subname)}: {val:.4f}" for subname, val in meter.compute().items() ] ) for name, meter in self.real_meters.items() ] logging.info(" | ".join(entries)) if enable_print: print(" | ".join(entries)) def _get_batch_fmtstr(self, num_batches): num_digits = len(str(num_batches // 1)) fmt = "{:" + str(num_digits) + "d}" return "[" + fmt + "/" + fmt.format(num_batches) + "]" def get_resume_checkpoint(checkpoint_save_dir): if not g_pathmgr.isdir(checkpoint_save_dir): return None ckpt_file = os.path.join(checkpoint_save_dir, "checkpoint.pt") if not g_pathmgr.isfile(ckpt_file): return None return ckpt_file ================================================ FILE: cogvideox_interpolation/datasets.py ================================================ import json import torch import cv2 from typing import Any, Dict, List, Optional, Tuple from torch.utils.data import DataLoader, Dataset import torchvision.transforms as TT from torchvision import transforms from 
torchvision.transforms.functional import center_crop, resize from torchvision.transforms import InterpolationMode import numpy as np import random, os try: import decord except ImportError: raise ImportError( "The `decord` package is required for loading the video dataset. Install with `pip install decord`" ) decord.bridge.set_bridge("torch") class ImageVideoDataset(Dataset): def __init__( self, root_path, annotation_json, tokenizer, max_sequence_length: int = 226, height: int = 480, width: int = 640, video_reshape_mode: str = "center", fps: int = 8, stripe: int = 2, max_num_frames: int = 49, skip_frames_start: int = 0, skip_frames_end: int = 0, random_flip: Optional[float] = None, ) -> None: super().__init__() self.root_path = root_path with open(annotation_json, 'r') as f: self.data_list = json.load(f) self.tokenizer = tokenizer self.max_sequence_length = max_sequence_length self.height = height self.width = width self.video_reshape_mode = video_reshape_mode self.fps = fps self.max_num_frames = max_num_frames self.skip_frames_start = skip_frames_start self.skip_frames_end = skip_frames_end self.stripe = stripe self.video_transforms = transforms.Compose( [ transforms.RandomHorizontalFlip(random_flip) if random_flip else transforms.Lambda(lambda x: x), transforms.Lambda(lambda x: x / 255.0), transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5], inplace=True), ] ) def __len__(self): return len(self.data_list) def _resize_for_rectangle_crop(self, arr): image_size = self.height, self.width reshape_mode = self.video_reshape_mode if arr.shape[3] / arr.shape[2] > image_size[1] / image_size[0]: arr = resize( arr, size=[image_size[0], int(arr.shape[3] * image_size[0] / arr.shape[2])], interpolation=InterpolationMode.BICUBIC, ) else: arr = resize( arr, size=[int(arr.shape[2] * image_size[1] / arr.shape[3]), image_size[1]], interpolation=InterpolationMode.BICUBIC, ) h, w = arr.shape[2], arr.shape[3] arr = arr.squeeze(0) delta_h = h - image_size[0] delta_w = w - 
image_size[1] if reshape_mode == "random" or reshape_mode == "none": top = np.random.randint(0, delta_h + 1) left = np.random.randint(0, delta_w + 1) elif reshape_mode == "center": top, left = delta_h // 2, delta_w // 2 else: raise NotImplementedError arr = TT.functional.crop(arr, top=top, left=left, height=image_size[0], width=image_size[1]) return arr def __getitem__(self, index): while True: try: video_path = os.path.join(self.root_path, self.data_list[index]['clip_path']) video_reader = decord.VideoReader(video_path, width=self.width, height=self.height) video_num_frames = len(video_reader) # print(video_num_frames, video_reader.get_avg_fps()) if self.stripe * self.max_num_frames > video_num_frames: stripe = 1 else: stripe = self.stripe random_range = video_num_frames - stripe * self.max_num_frames - 1 random_range = max(1, random_range) start_frame = random.randint(1, random_range) if random_range > 0 else 1 indices = list(range(start_frame, start_frame + stripe * self.max_num_frames, stripe)) # (end_frame - start_frame) // self.max_num_frames)) frames = video_reader.get_batch(indices) # Ensure that we don't go over the limit frames = frames[: self.max_num_frames] selected_num_frames = frames.shape[0] # Choose first (4k + 1) frames as this is how many is required by the VAE remainder = (3 + (selected_num_frames % 4)) % 4 if remainder != 0: frames = frames[:-remainder] selected_num_frames = frames.shape[0] assert (selected_num_frames - 1) % 4 == 0 if selected_num_frames == self.max_num_frames: break else: index = (index + 1) % len(self.data_list) continue except Exception as e: index = (index + 1) % len(self.data_list) print(video_num_frames, start_frame, indices) print( "Error encounter during audio feature extraction: ", e, ) continue # Training transforms # frames = (frames - 127.5) / 127.5 frames = frames.permute(0, 3, 1, 2).contiguous() # [F, C, H, W] frames = self._resize_for_rectangle_crop(frames) frames = torch.stack([self.video_transforms(frame) for 
frame in frames], dim=0) text_inputs = self.tokenizer( [self.data_list[index]['caption']], padding="max_length", max_length=self.max_sequence_length, truncation=True, add_special_tokens=True, return_tensors="pt", ) text_input_ids = text_inputs.input_ids[0] return frames.contiguous(), text_input_ids class AutoEncoderDataset(ImageVideoDataset): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) def __getitem__(self, index): while True: try: video_path = os.path.join(self.root_path, self.data_list[index]['clip_path']) video_reader = decord.VideoReader(video_path, width=self.width, height=self.height) video_num_frames = len(video_reader) # print(video_num_frames, video_reader.get_avg_fps()) if self.stripe * self.max_num_frames > video_num_frames: stripe = 1 else: stripe = self.stripe random_indice = [random.randint(1, video_num_frames - 1)] # random selects a frame from the video frames = video_reader.get_batch(random_indice) break except Exception as e: print("[WARN] Get problem when loading video: ", self.data_list[index]['clip_path']) print( "Error encounter during audio feature extraction: ", e, ) index = random.randint(0, len(self.data_list) - 1) continue return frames class LvisDataset(Dataset): def __init__( self, root_path, annotation_json, height: int = 480, width: int = 640, random_flip: Optional[float] = None, ) -> None: super().__init__() self.root_path = root_path with open(annotation_json, 'r') as f: self.data_list = json.load(f)['images'] self.height = height self.width = width self.width = width self.video_transforms = transforms.Compose( [ transforms.Lambda(lambda x: x / 255.0), transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5], inplace=True), ] ) def __len__(self): return len(self.data_list) def __getitem__(self, index): image_path = os.path.join(self.root_path, "unlabeled2017", self.data_list[index]['file_name']) image = cv2.imread(image_path) image = cv2.resize(image, (self.width, self.height)) image = 
self.video_transforms(torch.from_numpy(image).permute(2, 0, 1))
        return image.contiguous()


================================================
FILE: cogvideox_interpolation/losses.py
================================================
import torch
import torch.nn as nn
import torch.nn.functional as F
from einops import rearrange, repeat

from .lpips import LPIPS


def l2_loss(network_output, gt):
    return ((network_output - gt) ** 2).mean()


def cos_loss(network_output, gt):
    return 1 - F.cosine_similarity(network_output, gt, dim=1).mean()


def hinge_d_loss(logits_real, logits_fake):
    loss_real = torch.mean(F.relu(1.0 - logits_real))
    loss_fake = torch.mean(F.relu(1.0 + logits_fake))
    d_loss = 0.5 * (loss_real + loss_fake)
    return d_loss


def vanilla_d_loss(logits_real, logits_fake):
    d_loss = 0.5 * (
        torch.mean(torch.nn.functional.softplus(-logits_real))
        + torch.mean(torch.nn.functional.softplus(logits_fake))
    )
    return d_loss


# from MAGVIT, used in place of hinge_d_loss
def sigmoid_cross_entropy_with_logits(labels, logits):
    # The final formulation is: max(x, 0) - x * z + log(1 + exp(-abs(x)))
    zeros = torch.zeros_like(logits, dtype=logits.dtype)
    condition = logits >= zeros
    relu_logits = torch.where(condition, logits, zeros)
    neg_abs_logits = torch.where(condition, -logits, logits)
    return relu_logits - logits * labels + torch.log1p(torch.exp(neg_abs_logits))


def lecam_reg(real_pred, fake_pred, ema_real_pred, ema_fake_pred):
    assert real_pred.ndim == 0 and ema_fake_pred.ndim == 0
    lecam_loss = torch.mean(torch.pow(nn.ReLU()(real_pred - ema_fake_pred), 2))
    lecam_loss += torch.mean(torch.pow(nn.ReLU()(ema_real_pred - fake_pred), 2))
    return lecam_loss


def gradient_penalty_fn(images, output):
    gradients = torch.autograd.grad(
        outputs=output,
        inputs=images,
        grad_outputs=torch.ones(output.size(), device=images.device),
        create_graph=True,
        retain_graph=True,
        only_inputs=True,
    )[0]
    gradients = rearrange(gradients, "b ...
-> b (...)") return ((gradients.norm(2, dim=1) - 1) ** 2).mean() class VAELoss(nn.Module): def __init__( self, logvar_init=0.0, perceptual_loss_weight=0.1, kl_loss_weight=0.000001, device="cpu", dtype="bf16", ): super().__init__() if type(dtype) == str: if dtype == "bf16": dtype = torch.bfloat16 elif dtype == "fp16": dtype = torch.float16 else: raise NotImplementedError(f"dtype: {dtype}") # KL Loss self.kl_loss_weight = kl_loss_weight # Perceptual Loss self.perceptual_loss_fn = LPIPS().eval().to(device, dtype) self.perceptual_loss_weight = perceptual_loss_weight self.logvar = nn.Parameter(torch.ones(size=()) * logvar_init) def forward( self, video, recon_video, posterior, nll_weights=None, no_perceptual=False, ): video = rearrange(video, "b c t h w -> (b t) c h w").contiguous() recon_video = rearrange(recon_video, "b c t h w -> (b t) c h w").contiguous() # reconstruction loss recon_loss = torch.abs(video - recon_video) # perceptual loss if self.perceptual_loss_weight is not None and self.perceptual_loss_weight > 0.0 and not no_perceptual: # handle channels channels = video.shape[1] assert channels in {1, 3} if channels == 1: input_vgg_input = repeat(video, "b 1 h w -> b c h w", c=3) recon_vgg_input = repeat(recon_video, "b 1 h w -> b c h w", c=3) else: input_vgg_input = video recon_vgg_input = recon_video perceptual_loss = self.perceptual_loss_fn(input_vgg_input, recon_vgg_input) recon_loss = recon_loss + self.perceptual_loss_weight * perceptual_loss nll_loss = recon_loss / torch.exp(self.logvar) + self.logvar weighted_nll_loss = nll_loss if nll_weights is not None: weighted_nll_loss = nll_weights * nll_loss weighted_nll_loss = torch.sum(weighted_nll_loss) / weighted_nll_loss.shape[0] nll_loss = torch.sum(nll_loss) / nll_loss.shape[0] # KL Loss weighted_kl_loss = 0 if self.kl_loss_weight is not None and self.kl_loss_weight > 0.0: kl_loss = posterior.kl() kl_loss = torch.sum(kl_loss) / kl_loss.shape[0] weighted_kl_loss = kl_loss * self.kl_loss_weight return 
nll_loss, weighted_nll_loss, weighted_kl_loss def adopt_weight(weight, global_step, threshold=0, value=0.0): if global_step < threshold: weight = value return weight class AdversarialLoss(nn.Module): def __init__( self, discriminator_factor=1.0, discriminator_start=50001, generator_factor=0.5, generator_loss_type="non-saturating", ): super().__init__() self.discriminator_factor = discriminator_factor self.discriminator_start = discriminator_start self.generator_factor = generator_factor self.generator_loss_type = generator_loss_type def calculate_adaptive_weight(self, nll_loss, g_loss, last_layer): nll_grads = torch.autograd.grad(nll_loss, last_layer, retain_graph=True)[0] g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0] d_weight = torch.norm(nll_grads) / (torch.norm(g_grads) + 1e-4) d_weight = torch.clamp(d_weight, 0.0, 1e4).detach() d_weight = d_weight * self.generator_factor return d_weight def forward( self, fake_logits, nll_loss, last_layer, global_step, is_training=True, ): # NOTE: following MAGVIT to allow non_saturating assert self.generator_loss_type in ["hinge", "vanilla", "non-saturating"] if self.generator_loss_type == "hinge": gen_loss = -torch.mean(fake_logits) elif self.generator_loss_type == "non-saturating": gen_loss = torch.mean( sigmoid_cross_entropy_with_logits(labels=torch.ones_like(fake_logits), logits=fake_logits) ) else: raise ValueError("Generator loss {} not supported".format(self.generator_loss_type)) if self.discriminator_factor is not None and self.discriminator_factor > 0.0: try: d_weight = self.calculate_adaptive_weight(nll_loss, gen_loss, last_layer) except RuntimeError: assert not is_training d_weight = torch.tensor(0.0) else: d_weight = torch.tensor(0.0) disc_factor = adopt_weight(self.discriminator_factor, global_step, threshold=self.discriminator_start) weighted_gen_loss = d_weight * disc_factor * gen_loss return weighted_gen_loss class LeCamEMA: def __init__(self, ema_real=0.0, ema_fake=0.0, decay=0.999, 
dtype=torch.bfloat16, device="cpu"): self.decay = decay self.ema_real = torch.tensor(ema_real).to(device, dtype) self.ema_fake = torch.tensor(ema_fake).to(device, dtype) def update(self, ema_real, ema_fake): self.ema_real = self.ema_real * self.decay + ema_real * (1 - self.decay) self.ema_fake = self.ema_fake * self.decay + ema_fake * (1 - self.decay) def get(self): return self.ema_real, self.ema_fake class DiscriminatorLoss(nn.Module): def __init__( self, discriminator_factor=1.0, discriminator_start=50001, discriminator_loss_type="non-saturating", lecam_loss_weight=None, gradient_penalty_loss_weight=None, # SCH: following MAGVIT config.vqgan.grad_penalty_cost ): super().__init__() assert discriminator_loss_type in ["hinge", "vanilla", "non-saturating"] self.discriminator_factor = discriminator_factor self.discriminator_start = discriminator_start self.lecam_loss_weight = lecam_loss_weight self.gradient_penalty_loss_weight = gradient_penalty_loss_weight self.discriminator_loss_type = discriminator_loss_type def forward( self, real_logits, fake_logits, global_step, lecam_ema_real=None, lecam_ema_fake=None, real_video=None, split="train", ): if self.discriminator_factor is not None and self.discriminator_factor > 0.0: disc_factor = adopt_weight(self.discriminator_factor, global_step, threshold=self.discriminator_start) if self.discriminator_loss_type == "hinge": disc_loss = hinge_d_loss(real_logits, fake_logits) elif self.discriminator_loss_type == "non-saturating": if real_logits is not None: real_loss = sigmoid_cross_entropy_with_logits( labels=torch.ones_like(real_logits), logits=real_logits ) else: real_loss = 0.0 if fake_logits is not None: fake_loss = sigmoid_cross_entropy_with_logits( labels=torch.zeros_like(fake_logits), logits=fake_logits ) else: fake_loss = 0.0 disc_loss = 0.5 * (torch.mean(real_loss) + torch.mean(fake_loss)) elif self.discriminator_loss_type == "vanilla": disc_loss = vanilla_d_loss(real_logits, fake_logits) else: raise 
ValueError(f"Unknown GAN loss '{self.discriminator_loss_type}'.") weighted_d_adversarial_loss = disc_factor * disc_loss else: weighted_d_adversarial_loss = 0 lecam_loss = torch.tensor(0.0) if self.lecam_loss_weight is not None and self.lecam_loss_weight > 0.0: real_pred = torch.mean(real_logits) fake_pred = torch.mean(fake_logits) lecam_loss = lecam_reg(real_pred, fake_pred, lecam_ema_real, lecam_ema_fake) lecam_loss = lecam_loss * self.lecam_loss_weight gradient_penalty = torch.tensor(0.0) if self.gradient_penalty_loss_weight is not None and self.gradient_penalty_loss_weight > 0.0: assert real_video is not None gradient_penalty = gradient_penalty_fn(real_video, real_logits) gradient_penalty *= self.gradient_penalty_loss_weight return (weighted_d_adversarial_loss, lecam_loss, gradient_penalty) ================================================ FILE: cogvideox_interpolation/lpips.py ================================================ import hashlib import os from collections import namedtuple import requests import torch import torch.nn as nn from torchvision import models from tqdm import tqdm URL_MAP = {"vgg_lpips": "https://heibox.uni-heidelberg.de/f/607503859c864bc1b30b/?dl=1"} CKPT_MAP = {"vgg_lpips": "vgg.pth"} MD5_MAP = {"vgg_lpips": "d507d7349b931f0638a25a48a722f98a"} def md5_hash(path): with open(path, "rb") as f: content = f.read() return hashlib.md5(content).hexdigest() def download(url, local_path, chunk_size=1024): os.makedirs(os.path.split(local_path)[0], exist_ok=True) with requests.get(url, stream=True) as r: total_size = int(r.headers.get("content-length", 0)) with tqdm(total=total_size, unit="B", unit_scale=True) as pbar: with open(local_path, "wb") as f: for data in r.iter_content(chunk_size=chunk_size): if data: f.write(data) pbar.update(chunk_size) def get_ckpt_path(name, root, check=False): assert name in URL_MAP path = os.path.join(root, CKPT_MAP[name]) if not os.path.exists(path) or (check and not md5_hash(path) == MD5_MAP[name]): 
print("Downloading {} model from {} to {}".format(name, URL_MAP[name], path))
        download(URL_MAP[name], path)
        md5 = md5_hash(path)
        assert md5 == MD5_MAP[name], md5
    return path


class LPIPS(nn.Module):
    # Learned perceptual metric
    def __init__(self, use_dropout=True):
        super().__init__()
        self.scaling_layer = ScalingLayer()
        self.chns = [64, 128, 256, 512, 512]  # vgg16 features
        self.net = vgg16(pretrained=True, requires_grad=False)
        self.lin0 = NetLinLayer(self.chns[0], use_dropout=use_dropout)
        self.lin1 = NetLinLayer(self.chns[1], use_dropout=use_dropout)
        self.lin2 = NetLinLayer(self.chns[2], use_dropout=use_dropout)
        self.lin3 = NetLinLayer(self.chns[3], use_dropout=use_dropout)
        self.lin4 = NetLinLayer(self.chns[4], use_dropout=use_dropout)
        self.load_from_pretrained()
        for param in self.parameters():
            param.requires_grad = False

    def load_from_pretrained(self, name="vgg_lpips"):
        ckpt = get_ckpt_path(name, "model_zoo/taming/modules/autoencoder/lpips")
        self.load_state_dict(torch.load(ckpt, map_location=torch.device("cpu")), strict=False)
        # print("loaded pretrained LPIPS loss from {}".format(ckpt))

    @classmethod
    def from_pretrained(cls, name="vgg_lpips"):
        if name != "vgg_lpips":
            raise NotImplementedError
        model = cls()
        # `get_ckpt_path` has no default root; reuse the same cache root as load_from_pretrained
        ckpt = get_ckpt_path(name, "model_zoo/taming/modules/autoencoder/lpips")
        model.load_state_dict(torch.load(ckpt, map_location=torch.device("cpu")), strict=False)
        return model

    def forward(self, input, target):
        in0_input, in1_input = (self.scaling_layer(input), self.scaling_layer(target))
        outs0, outs1 = self.net(in0_input), self.net(in1_input)
        feats0, feats1, diffs = {}, {}, {}
        lins = [self.lin0, self.lin1, self.lin2, self.lin3, self.lin4]
        for kk in range(len(self.chns)):
            feats0[kk], feats1[kk] = normalize_tensor(outs0[kk]), normalize_tensor(outs1[kk])
            diffs[kk] = (feats0[kk] - feats1[kk]) ** 2
        res = [spatial_average(lins[kk].model(diffs[kk]), keepdim=True) for kk in range(len(self.chns))]
        val = res[0]
        for l in range(1, len(self.chns)):
            val += res[l]
        return val


class ScalingLayer(nn.Module):
    def __init__(self):
super(ScalingLayer, self).__init__() self.register_buffer("shift", torch.Tensor([-0.030, -0.088, -0.188])[None, :, None, None]) self.register_buffer("scale", torch.Tensor([0.458, 0.448, 0.450])[None, :, None, None]) def forward(self, inp): return (inp - self.shift) / self.scale class NetLinLayer(nn.Module): """A single linear layer which does a 1x1 conv""" def __init__(self, chn_in, chn_out=1, use_dropout=False): super(NetLinLayer, self).__init__() layers = ( [ nn.Dropout(), ] if (use_dropout) else [] ) layers += [ nn.Conv2d(chn_in, chn_out, 1, stride=1, padding=0, bias=False), ] self.model = nn.Sequential(*layers) class vgg16(torch.nn.Module): def __init__(self, requires_grad=False, pretrained=True): super(vgg16, self).__init__() vgg_pretrained_features = models.vgg16(pretrained=pretrained).features self.slice1 = torch.nn.Sequential() self.slice2 = torch.nn.Sequential() self.slice3 = torch.nn.Sequential() self.slice4 = torch.nn.Sequential() self.slice5 = torch.nn.Sequential() self.N_slices = 5 for x in range(4): self.slice1.add_module(str(x), vgg_pretrained_features[x]) for x in range(4, 9): self.slice2.add_module(str(x), vgg_pretrained_features[x]) for x in range(9, 16): self.slice3.add_module(str(x), vgg_pretrained_features[x]) for x in range(16, 23): self.slice4.add_module(str(x), vgg_pretrained_features[x]) for x in range(23, 30): self.slice5.add_module(str(x), vgg_pretrained_features[x]) if not requires_grad: for param in self.parameters(): param.requires_grad = False def forward(self, X): h = self.slice1(X) h_relu1_2 = h h = self.slice2(h) h_relu2_2 = h h = self.slice3(h) h_relu3_3 = h h = self.slice4(h) h_relu4_3 = h h = self.slice5(h) h_relu5_3 = h vgg_outputs = namedtuple("VggOutputs", ["relu1_2", "relu2_2", "relu3_3", "relu4_3", "relu5_3"]) out = vgg_outputs(h_relu1_2, h_relu2_2, h_relu3_3, h_relu4_3, h_relu5_3) return out def normalize_tensor(x, eps=1e-10): norm_factor = torch.sqrt(torch.sum(x**2, dim=1, keepdim=True)) return x / (norm_factor + eps) def 
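`LPIPS.forward` unit-normalizes each VGG feature map across channels, squares the difference, applies the learned 1x1 weights, and spatially averages, summing over the five layers. A tiny pure-Python sketch of that distance with unit layer weights (illustrative only; the real version operates on tensors and learned `NetLinLayer` weights):

```python
import math

def normalize_feat(feat, eps=1e-10):
    # channel-wise unit normalization, mirroring normalize_tensor above
    norm = math.sqrt(sum(c * c for c in feat))
    return [c / (norm + eps) for c in feat]

def lpips_like_distance(feats_x, feats_y):
    # feats_*: per-layer lists of per-pixel channel vectors
    total = 0.0
    for layer_x, layer_y in zip(feats_x, feats_y):
        per_pixel = [
            sum((a - b) ** 2 for a, b in zip(normalize_feat(px), normalize_feat(py)))
            for px, py in zip(layer_x, layer_y)
        ]
        total += sum(per_pixel) / len(per_pixel)  # spatial average per layer
    return total
```

Identical features give distance 0; orthogonal unit features give 2 per pixel per layer.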
spatial_average(x, keepdim=True):
    return x.mean([2, 3], keepdim=keepdim)



================================================
FILE: cogvideox_interpolation/pipeline.py
================================================
import inspect
import math
from typing import Callable, Dict, List, Optional, Tuple, Union

import PIL
import torch
from diffusers.callbacks import MultiPipelineCallbacks, PipelineCallback
from diffusers.image_processor import PipelineImageInput
from diffusers.models import (AutoencoderKLCogVideoX, CogVideoXTransformer3DModel)
from diffusers.models.embeddings import get_3d_rotary_pos_embed
from diffusers.pipelines.pipeline_utils import DiffusionPipeline
from diffusers.schedulers import CogVideoXDDIMScheduler, CogVideoXDPMScheduler
from diffusers.utils import logging, replace_example_docstring
from diffusers.utils.torch_utils import randn_tensor
from diffusers.video_processor import VideoProcessor
from transformers import T5EncoderModel, T5Tokenizer

# `logger` is used by `_get_t5_prompt_embeds` and `unfuse_qkv_projections` below
# but was never defined in this module.
logger = logging.get_logger(__name__)


def get_resize_crop_region_for_grid(src, tgt_width, tgt_height):
    tw = tgt_width
    th = tgt_height
    h, w = src
    r = h / w
    if r > (th / tw):
        resize_height = th
        resize_width = int(round(th / h * w))
    else:
        resize_width = tw
        resize_height = int(round(tw / w * h))

    crop_top = int(round((th - resize_height) / 2.0))
    crop_left = int(round((tw - resize_width) / 2.0))

    return (crop_top, crop_left), (crop_top + resize_height, crop_left + resize_width)


def retrieve_timesteps(
    scheduler,
    num_inference_steps: Optional[int] = None,
    device: Optional[Union[str, torch.device]] = None,
    timesteps: Optional[List[int]] = None,
    sigmas: Optional[List[float]] = None,
    **kwargs,
):
    """
    Calls the scheduler's `set_timesteps` method and retrieves timesteps from the scheduler after the call. Handles
    custom timesteps. Any kwargs will be supplied to `scheduler.set_timesteps`.

    Args:
        scheduler (`SchedulerMixin`):
            The scheduler to get timesteps from.
num_inference_steps (`int`): The number of diffusion steps used when generating samples with a pre-trained model. If used, `timesteps` must be `None`. device (`str` or `torch.device`, *optional*): The device to which the timesteps should be moved to. If `None`, the timesteps are not moved. timesteps (`List[int]`, *optional*): Custom timesteps used to override the timestep spacing strategy of the scheduler. If `timesteps` is passed, `num_inference_steps` and `sigmas` must be `None`. sigmas (`List[float]`, *optional*): Custom sigmas used to override the timestep spacing strategy of the scheduler. If `sigmas` is passed, `num_inference_steps` and `timesteps` must be `None`. Returns: `Tuple[torch.Tensor, int]`: A tuple where the first element is the timestep schedule from the scheduler and the second element is the number of inference steps. """ if timesteps is not None and sigmas is not None: raise ValueError("Only one of `timesteps` or `sigmas` can be passed. Please choose one to set custom values") if timesteps is not None: accepts_timesteps = "timesteps" in set(inspect.signature(scheduler.set_timesteps).parameters.keys()) if not accepts_timesteps: raise ValueError( f"The current scheduler class {scheduler.__class__}'s `set_timesteps` does not support custom" f" timestep schedules. Please check whether you are using the correct scheduler." ) scheduler.set_timesteps(timesteps=timesteps, device=device, **kwargs) timesteps = scheduler.timesteps num_inference_steps = len(timesteps) elif sigmas is not None: accept_sigmas = "sigmas" in set(inspect.signature(scheduler.set_timesteps).parameters.keys()) if not accept_sigmas: raise ValueError( f"The current scheduler class {scheduler.__class__}'s `set_timesteps` does not support custom" f" sigmas schedules. Please check whether you are using the correct scheduler." 
) scheduler.set_timesteps(sigmas=sigmas, device=device, **kwargs) timesteps = scheduler.timesteps num_inference_steps = len(timesteps) else: scheduler.set_timesteps(num_inference_steps, device=device, **kwargs) timesteps = scheduler.timesteps return timesteps, num_inference_steps # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.retrieve_latents def retrieve_latents( encoder_output: torch.Tensor, generator: Optional[torch.Generator] = None, sample_mode: str = "sample" ): if hasattr(encoder_output, "latent_dist") and sample_mode == "sample": return encoder_output.latent_dist.sample(generator) elif hasattr(encoder_output, "latent_dist") and sample_mode == "argmax": return encoder_output.latent_dist.mode() elif hasattr(encoder_output, "latents"): return encoder_output.latents else: raise AttributeError("Could not access latents of provided encoder_output") class CogVideoXInterpolationPipeline(DiffusionPipeline): _optional_components = [] model_cpu_offload_seq = "text_encoder->transformer->vae" _callback_tensor_inputs = [ "latents", "prompt_embeds", "negative_prompt_embeds", ] def __init__( self, tokenizer: T5Tokenizer, text_encoder: T5EncoderModel, vae: AutoencoderKLCogVideoX, transformer: CogVideoXTransformer3DModel, scheduler: Union[CogVideoXDDIMScheduler, CogVideoXDPMScheduler], ): super().__init__() self.register_modules( tokenizer=tokenizer, text_encoder=text_encoder, vae=vae, transformer=transformer, scheduler=scheduler, ) self.vae_scale_factor_spatial = ( 2 ** (len(self.vae.config.block_out_channels) - 1) if hasattr(self, "vae") and self.vae is not None else 8 ) self.vae_scale_factor_temporal = ( self.vae.config.temporal_compression_ratio if hasattr(self, "vae") and self.vae is not None else 4 ) self.video_processor = VideoProcessor(vae_scale_factor=self.vae_scale_factor_spatial) # Copied from diffusers.pipelines.cogvideo.pipeline_cogvideox.CogVideoXPipeline._get_t5_prompt_embeds def _get_t5_prompt_embeds( self, prompt: 
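`retrieve_timesteps` dispatches on which of `timesteps`, `sigmas`, or `num_inference_steps` was supplied, probing the scheduler's `set_timesteps` signature with `inspect` before forwarding custom schedules. A reduced sketch of that dispatch, using a hypothetical `DummyScheduler` stand-in rather than a real diffusers scheduler:

```python
import inspect

class DummyScheduler:
    # hypothetical stand-in exposing just the subset of the interface exercised here
    def set_timesteps(self, num_inference_steps=None, device=None, timesteps=None):
        if timesteps is not None:
            self.timesteps = list(timesteps)
        else:
            self.timesteps = list(range(num_inference_steps - 1, -1, -1))

def retrieve_timesteps_sketch(scheduler, num_inference_steps=None, timesteps=None):
    if timesteps is not None:
        # custom schedules are only legal if set_timesteps accepts a `timesteps` kwarg
        if "timesteps" not in inspect.signature(scheduler.set_timesteps).parameters:
            raise ValueError("scheduler does not support custom timestep schedules")
        scheduler.set_timesteps(timesteps=timesteps)
        return scheduler.timesteps, len(scheduler.timesteps)
    scheduler.set_timesteps(num_inference_steps=num_inference_steps)
    return scheduler.timesteps, num_inference_steps
```

When a custom schedule is passed, the effective step count is recomputed from its length, exactly as in the full function.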
Union[str, List[str]] = None, num_videos_per_prompt: int = 1, max_sequence_length: int = 226, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None, ): device = device or self._execution_device dtype = dtype or self.text_encoder.dtype prompt = [prompt] if isinstance(prompt, str) else prompt batch_size = len(prompt) text_inputs = self.tokenizer( prompt, padding="max_length", max_length=max_sequence_length, truncation=True, add_special_tokens=True, return_tensors="pt", ) text_input_ids = text_inputs.input_ids untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(text_input_ids, untruncated_ids): removed_text = self.tokenizer.batch_decode(untruncated_ids[:, max_sequence_length - 1 : -1]) logger.warning( "The following part of your input was truncated because `max_sequence_length` is set to " f" {max_sequence_length} tokens: {removed_text}" ) prompt_embeds = self.text_encoder(text_input_ids.to(device))[0] prompt_embeds = prompt_embeds.to(dtype=dtype, device=device) # duplicate text embeddings for each generation per prompt, using mps friendly method _, seq_len, _ = prompt_embeds.shape prompt_embeds = prompt_embeds.repeat(1, num_videos_per_prompt, 1) prompt_embeds = prompt_embeds.view(batch_size * num_videos_per_prompt, seq_len, -1) return prompt_embeds # Copied from diffusers.pipelines.cogvideo.pipeline_cogvideox.CogVideoXPipeline.encode_prompt def encode_prompt( self, prompt: Union[str, List[str]], negative_prompt: Optional[Union[str, List[str]]] = None, do_classifier_free_guidance: bool = True, num_videos_per_prompt: int = 1, prompt_embeds: Optional[torch.Tensor] = None, negative_prompt_embeds: Optional[torch.Tensor] = None, max_sequence_length: int = 226, device: Optional[torch.device] = None, dtype: Optional[torch.dtype] = None, ): r""" Encodes the prompt into text encoder hidden states. 
Args: prompt (`str` or `List[str]`, *optional*): prompt to be encoded negative_prompt (`str` or `List[str]`, *optional*): The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). do_classifier_free_guidance (`bool`, *optional*, defaults to `True`): Whether to use classifier free guidance or not. num_videos_per_prompt (`int`, *optional*, defaults to 1): Number of videos that should be generated per prompt. torch device to place the resulting embeddings on prompt_embeds (`torch.Tensor`, *optional*): Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, text embeddings will be generated from `prompt` input argument. negative_prompt_embeds (`torch.Tensor`, *optional*): Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input argument. device: (`torch.device`, *optional*): torch device dtype: (`torch.dtype`, *optional*): torch dtype """ device = device or self._execution_device prompt = [prompt] if isinstance(prompt, str) else prompt if prompt is not None: batch_size = len(prompt) else: batch_size = prompt_embeds.shape[0] if prompt_embeds is None: prompt_embeds = self._get_t5_prompt_embeds( prompt=prompt, num_videos_per_prompt=num_videos_per_prompt, max_sequence_length=max_sequence_length, device=device, dtype=dtype, ) if do_classifier_free_guidance and negative_prompt_embeds is None: negative_prompt = negative_prompt or "" negative_prompt = batch_size * [negative_prompt] if isinstance(negative_prompt, str) else negative_prompt if prompt is not None and type(prompt) is not type(negative_prompt): raise TypeError( f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" f" {type(prompt)}." 
) elif batch_size != len(negative_prompt): raise ValueError( f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" " the batch size of `prompt`." ) negative_prompt_embeds = self._get_t5_prompt_embeds( prompt=negative_prompt, num_videos_per_prompt=num_videos_per_prompt, max_sequence_length=max_sequence_length, device=device, dtype=dtype, ) return prompt_embeds, negative_prompt_embeds def prepare_latents( self, first_image: torch.Tensor, last_image: torch.Tensor, batch_size: int = 1, num_channels_latents: int = 16, num_frames: int = 13, height: int = 60, width: int = 90, dtype: Optional[torch.dtype] = None, device: Optional[torch.device] = None, generator: Optional[torch.Generator] = None, latents: Optional[torch.Tensor] = None, ): num_frames = (num_frames - 1) // self.vae_scale_factor_temporal + 1 shape = ( batch_size, num_frames, num_channels_latents, height // self.vae_scale_factor_spatial, width // self.vae_scale_factor_spatial, ) if isinstance(generator, list) and len(generator) != batch_size: raise ValueError( f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" f" size of {batch_size}. Make sure the batch size matches the length of the generators." 
) first_image = first_image.unsqueeze(2) # [B, C, F, H, W] last_image = last_image.unsqueeze(2) # [B, C, F, H, W] if isinstance(generator, list): first_image_latents = [ retrieve_latents(self.vae.encode(first_image[i].unsqueeze(0)), generator[i]) for i in range(batch_size) ] else: first_image_latents = [retrieve_latents(self.vae.encode(first_img.unsqueeze(0)), generator) for first_img in first_image] if isinstance(generator, list): last_image_latents = [ retrieve_latents(self.vae.encode(last_image[i].unsqueeze(0)), generator[i]) for i in range(batch_size) ] else: last_image_latents = [retrieve_latents(self.vae.encode(last_img.unsqueeze(0)), generator) for last_img in last_image] first_image_latents = torch.cat(first_image_latents, dim=0).to(dtype).permute(0, 2, 1, 3, 4) # [B, F, C, H, W] first_image_latents = self.vae.config.scaling_factor * first_image_latents last_image_latents = torch.cat(last_image_latents, dim=0).to(dtype).permute(0, 2, 1, 3, 4) # [B, F, C, H, W] last_image_latents = self.vae.config.scaling_factor * last_image_latents padding_shape = ( batch_size, num_frames - 2, num_channels_latents, height // self.vae_scale_factor_spatial, width // self.vae_scale_factor_spatial, ) latent_padding = torch.zeros(padding_shape, device=device, dtype=dtype) image_latents = torch.cat([first_image_latents, latent_padding, last_image_latents], dim=1) if latents is None: latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) else: latents = latents.to(device) # scale the initial noise by the standard deviation required by the scheduler latents = latents * self.scheduler.init_noise_sigma return latents, image_latents # Copied from diffusers.pipelines.cogvideo.pipeline_cogvideox.CogVideoXPipeline.decode_latents def decode_latents(self, latents: torch.Tensor) -> torch.Tensor: latents = latents.permute(0, 2, 1, 3, 4) # [batch_size, num_channels, num_frames, height, width] latents = 1 / self.vae.config.scaling_factor * latents frames = 
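`prepare_latents` derives the latent grid from the pixel-space request: temporal compression keeps one extra frame (`(num_frames - 1) // ratio + 1`), spatial dims shrink by the VAE scale factor, and the first/last-image latents are joined by `num_frames - 2` zero-padded latent frames. That arithmetic restated as a small helper (the default ratios assume CogVideoX's 4x temporal / 8x spatial VAE factors):

```python
def latent_layout(num_frames, height, width, temporal_ratio=4, spatial_ratio=8):
    # returns (latent_frames, latent_height, latent_width, padding_frames)
    latent_frames = (num_frames - 1) // temporal_ratio + 1
    padding_frames = latent_frames - 2  # zero frames between first/last image latents
    return latent_frames, height // spatial_ratio, width // spatial_ratio, padding_frames
```

For the default 49-frame, 480x720 call this gives a 13x60x90 latent grid with 11 zero frames between the two conditioning latents.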
self.vae.decode(latents).sample return frames # Copied from diffusers.pipelines.animatediff.pipeline_animatediff_video2video.AnimateDiffVideoToVideoPipeline.get_timesteps def get_timesteps(self, num_inference_steps, timesteps, strength, device): # get the original timestep using init_timestep init_timestep = min(int(num_inference_steps * strength), num_inference_steps) t_start = max(num_inference_steps - init_timestep, 0) timesteps = timesteps[t_start * self.scheduler.order :] return timesteps, num_inference_steps - t_start # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs def prepare_extra_step_kwargs(self, generator, eta): # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 # and should be between [0, 1] accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) extra_step_kwargs = {} if accepts_eta: extra_step_kwargs["eta"] = eta # check if the scheduler accepts generator accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) if accepts_generator: extra_step_kwargs["generator"] = generator return extra_step_kwargs def check_inputs( self, first_image, last_image, prompt, height, width, negative_prompt, callback_on_step_end_tensor_inputs, video=None, latents=None, prompt_embeds=None, negative_prompt_embeds=None, ): if ( not isinstance(first_image, torch.Tensor) and not isinstance(first_image, PIL.Image.Image) and not isinstance(first_image, list) ): raise ValueError( "`image` has to be of type `torch.Tensor` or `PIL.Image.Image` or `List[PIL.Image.Image]` but is" f" {type(first_image)}" ) if ( not isinstance(last_image, torch.Tensor) and not isinstance(last_image, PIL.Image.Image) and not isinstance(last_image, list) ): 
raise ValueError( "`image` has to be of type `torch.Tensor` or `PIL.Image.Image` or `List[PIL.Image.Image]` but is" f" {type(last_image)}" ) if height % 8 != 0 or width % 8 != 0: raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") if callback_on_step_end_tensor_inputs is not None and not all( k in self._callback_tensor_inputs for k in callback_on_step_end_tensor_inputs ): raise ValueError( f"`callback_on_step_end_tensor_inputs` has to be in {self._callback_tensor_inputs}, but found {[k for k in callback_on_step_end_tensor_inputs if k not in self._callback_tensor_inputs]}" ) if prompt is not None and prompt_embeds is not None: raise ValueError( f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" " only forward one of the two." ) elif prompt is None and prompt_embeds is None: raise ValueError( "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." ) elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") if prompt is not None and negative_prompt_embeds is not None: raise ValueError( f"Cannot forward both `prompt`: {prompt} and `negative_prompt_embeds`:" f" {negative_prompt_embeds}. Please make sure to only forward one of the two." ) if negative_prompt is not None and negative_prompt_embeds is not None: raise ValueError( f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" f" {negative_prompt_embeds}. Please make sure to only forward one of the two." 
) if prompt_embeds is not None and negative_prompt_embeds is not None: if prompt_embeds.shape != negative_prompt_embeds.shape: raise ValueError( "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" f" {negative_prompt_embeds.shape}." ) if video is not None and latents is not None: raise ValueError("Only one of `video` or `latents` should be provided") # Copied from diffusers.pipelines.cogvideo.pipeline_cogvideox.CogVideoXPipeline.fuse_qkv_projections def fuse_qkv_projections(self) -> None: r"""Enables fused QKV projections.""" self.fusing_transformer = True self.transformer.fuse_qkv_projections() # Copied from diffusers.pipelines.cogvideo.pipeline_cogvideox.CogVideoXPipeline.unfuse_qkv_projections def unfuse_qkv_projections(self) -> None: r"""Disable QKV projection fusion if enabled.""" if not self.fusing_transformer: logger.warning("The Transformer was not initially fused for QKV projections. 
Doing nothing.") else: self.transformer.unfuse_qkv_projections() self.fusing_transformer = False # Copied from diffusers.pipelines.cogvideo.pipeline_cogvideox.CogVideoXPipeline._prepare_rotary_positional_embeddings def _prepare_rotary_positional_embeddings( self, height: int, width: int, num_frames: int, device: torch.device, ) -> Tuple[torch.Tensor, torch.Tensor]: grid_height = height // (self.vae_scale_factor_spatial * self.transformer.config.patch_size) grid_width = width // (self.vae_scale_factor_spatial * self.transformer.config.patch_size) base_size_width = 720 // (self.vae_scale_factor_spatial * self.transformer.config.patch_size) base_size_height = 480 // (self.vae_scale_factor_spatial * self.transformer.config.patch_size) grid_crops_coords = get_resize_crop_region_for_grid( (grid_height, grid_width), base_size_width, base_size_height ) freqs_cos, freqs_sin = get_3d_rotary_pos_embed( embed_dim=self.transformer.config.attention_head_dim, crops_coords=grid_crops_coords, grid_size=(grid_height, grid_width), temporal_size=num_frames, ) freqs_cos = freqs_cos.to(device=device) freqs_sin = freqs_sin.to(device=device) return freqs_cos, freqs_sin @property def guidance_scale(self): return self._guidance_scale @property def num_timesteps(self): return self._num_timesteps @property def interrupt(self): return self._interrupt @torch.no_grad() def __call__( self, first_image: PipelineImageInput, last_image: PipelineImageInput, prompt: Optional[Union[str, List[str]]] = None, negative_prompt: Optional[Union[str, List[str]]] = None, height: int = 480, width: int = 720, num_frames: int = 49, num_inference_steps: int = 50, timesteps: Optional[List[int]] = None, guidance_scale: float = 6, use_dynamic_cfg: bool = False, num_videos_per_prompt: int = 1, eta: float = 0.0, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, latents: Optional[torch.FloatTensor] = None, prompt_embeds: Optional[torch.FloatTensor] = None, negative_prompt_embeds: 
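`_prepare_rotary_positional_embeddings` converts pixel sizes into transformer-token grid sizes by dividing out the VAE spatial factor and the patch size, with the base grid taken from the 720x480 resolution hard-coded above. A sketch of that bookkeeping (patch size 2 is an assumption matching CogVideoX's published config, not read from this file):

```python
def rope_grid(height, width, patch_size=2, vae_spatial=8):
    # token-grid dimensions used to build the 3D rotary embeddings
    grid_height = height // (vae_spatial * patch_size)
    grid_width = width // (vae_spatial * patch_size)
    base_height = 480 // (vae_spatial * patch_size)
    base_width = 720 // (vae_spatial * patch_size)
    return grid_height, grid_width, base_height, base_width
```

At the native 480x720 resolution the request grid and the base grid coincide, so the crop region returned by `get_resize_crop_region_for_grid` covers the whole grid.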
Optional[torch.FloatTensor] = None,
        output_type: str = "pil",
        return_dict: bool = True,
        callback_on_step_end: Optional[
            Union[Callable[[int, int, Dict], None], PipelineCallback, MultiPipelineCallbacks]
        ] = None,
        callback_on_step_end_tensor_inputs: List[str] = ["latents"],
        max_sequence_length: int = 226,
    ):
        """
        Function invoked when calling the pipeline for generation.

        Args:
            first_image (`PipelineImageInput`):
                The image to condition the start of the generated video on. Must be an image, a list of images or a
                `torch.Tensor`.
            last_image (`PipelineImageInput`):
                The image to condition the end of the generated video on. Must be an image, a list of images or a
                `torch.Tensor`.
            prompt (`str` or `List[str]`, *optional*):
                The prompt or prompts to guide the video generation. If not defined, one has to pass `prompt_embeds`
                instead.
            negative_prompt (`str` or `List[str]`, *optional*):
                The prompt or prompts not to guide the video generation. If not defined, one has to pass
                `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale`
                is less than `1`).
            height (`int`, *optional*, defaults to `480`):
                The height in pixels of the generated video.
            width (`int`, *optional*, defaults to `720`):
                The width in pixels of the generated video.
            num_frames (`int`, defaults to `49`):
                Number of frames to generate. The generated video contains 1 extra frame because CogVideoX is
                conditioned with (num_seconds * fps + 1) frames, where num_seconds is 6 and fps is 4, so
                `num_frames - 1` should be divisible by `self.vae_scale_factor_temporal`. However, since videos can
                be saved at any fps, the only condition that needs to be satisfied is that of divisibility mentioned
                above.
            num_inference_steps (`int`, *optional*, defaults to 50):
                The number of denoising steps. More denoising steps usually lead to a higher quality video at the
                expense of slower inference.
timesteps (`List[int]`, *optional*):
                Custom timesteps to use for the denoising process with schedulers which support a `timesteps` argument
                in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is
                passed will be used. Must be in descending order.
            guidance_scale (`float`, *optional*, defaults to `6`):
                Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
                `guidance_scale` is defined as `w` of equation 2. of [Imagen
                Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
                1`. Higher guidance scale encourages generating videos that are closely linked to the text `prompt`,
                usually at the expense of lower video quality.
            num_videos_per_prompt (`int`, *optional*, defaults to 1):
                The number of videos to generate per prompt.
            generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
                One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
                to make generation deterministic.
            latents (`torch.FloatTensor`, *optional*):
                Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for video
                generation. Can be used to tweak the same generation with different prompts. If not provided, a
                latents tensor will be generated by sampling using the supplied random `generator`.
            prompt_embeds (`torch.FloatTensor`, *optional*):
                Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If
                not provided, text embeddings will be generated from `prompt` input argument.
            negative_prompt_embeds (`torch.FloatTensor`, *optional*):
                Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
                weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
                argument.
            output_type (`str`, *optional*, defaults to `"pil"`):
                The output format of the generated video.
Choose between [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. return_dict (`bool`, *optional*, defaults to `True`): Whether or not to return a [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] instead of a plain tuple. callback_on_step_end (`Callable`, *optional*): A function that calls at the end of each denoising steps during the inference. The function is called with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by `callback_on_step_end_tensor_inputs`. callback_on_step_end_tensor_inputs (`List`, *optional*): The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list will be passed as `callback_kwargs` argument. You will only be able to include variables listed in the `._callback_tensor_inputs` attribute of your pipeline class. max_sequence_length (`int`, defaults to `226`): Maximum sequence length in encoded prompt. Must be consistent with `self.transformer.config.max_text_seq_length` otherwise may lead to poor results. Examples: Returns: [`~pipelines.cogvideo.pipeline_output.CogVideoXPipelineOutput`] or `tuple`: [`~pipelines.cogvideo.pipeline_output.CogVideoXPipelineOutput`] if `return_dict` is True, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated images. """ if num_frames > 49: raise ValueError( "The number of frames must be less than 49 for now due to static positional embeddings. This will be updated in the future to remove this limitation." 
)

        if isinstance(callback_on_step_end, (PipelineCallback, MultiPipelineCallbacks)):
            callback_on_step_end_tensor_inputs = callback_on_step_end.tensor_inputs

        height = height or self.transformer.config.sample_size * self.vae_scale_factor_spatial
        width = width or self.transformer.config.sample_size * self.vae_scale_factor_spatial
        num_videos_per_prompt = 1

        # 1. Check inputs. Raise error if not correct
        # NOTE: pass the embeddings by keyword; positionally they would land in
        # `check_inputs`' `video` and `latents` slots.
        self.check_inputs(
            first_image,
            last_image,
            prompt,
            height,
            width,
            negative_prompt,
            callback_on_step_end_tensor_inputs,
            prompt_embeds=prompt_embeds,
            negative_prompt_embeds=negative_prompt_embeds,
        )
        self._guidance_scale = guidance_scale
        self._interrupt = False

        # 2. Default call parameters
        if prompt is not None and isinstance(prompt, str):
            batch_size = 1
        elif prompt is not None and isinstance(prompt, list):
            batch_size = len(prompt)
        else:
            batch_size = prompt_embeds.shape[0]

        device = self._execution_device

        # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
        # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
        # corresponds to doing no classifier free guidance.
        do_classifier_free_guidance = guidance_scale > 1.0

        # 3. Encode input prompt
        prompt_embeds, negative_prompt_embeds = self.encode_prompt(
            prompt=prompt,
            negative_prompt=negative_prompt,
            do_classifier_free_guidance=do_classifier_free_guidance,
            num_videos_per_prompt=num_videos_per_prompt,
            prompt_embeds=prompt_embeds,
            negative_prompt_embeds=negative_prompt_embeds,
            max_sequence_length=max_sequence_length,
            device=device,
        )
        if do_classifier_free_guidance:
            prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds], dim=0)

        # 4. Prepare timesteps
        timesteps, num_inference_steps = retrieve_timesteps(self.scheduler, num_inference_steps, device, timesteps)
        self._num_timesteps = len(timesteps)

        # 5.
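Classifier-free guidance is enabled when `guidance_scale > 1`: the negative and positive prompt embeddings are batched together, and each denoising step blends the two noise predictions. The blending arithmetic, plus the cosine-power ramp applied when `use_dynamic_cfg` is set, can be sketched as (list-based, for illustration; in the loop the ramp is evaluated with the raw scheduler timestep):

```python
import math

def cfg_combine(uncond, text, scale):
    # noise_pred = uncond + scale * (text - uncond); scale == 1 recovers the
    # purely conditional prediction
    return [u + scale * (t - u) for u, t in zip(uncond, text)]

def dynamic_cfg_scale(guidance_scale, t, num_inference_steps):
    # the 1 + g * (1 - cos(pi * ((N - t) / N) ** 5)) / 2 ramp used with use_dynamic_cfg
    frac = ((num_inference_steps - t) / num_inference_steps) ** 5.0
    return 1 + guidance_scale * ((1 - math.cos(math.pi * frac)) / 2)
```

The ramp rises from 1 (no guidance) toward `1 + guidance_scale` as the fraction of completed steps grows.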
Prepare latents first_image = self.video_processor.preprocess(first_image, height=height, width=width).to( device, dtype=prompt_embeds.dtype ) last_image = self.video_processor.preprocess(last_image, height=height, width=width).to( device, dtype=prompt_embeds.dtype ) latent_channels = self.transformer.config.in_channels // 2 latents, image_latents = self.prepare_latents( first_image, last_image, batch_size * num_videos_per_prompt, latent_channels, num_frames, height, width, prompt_embeds.dtype, device, generator, latents, ) # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) # 7. Create rotary embeds if required image_rotary_emb = ( self._prepare_rotary_positional_embeddings(height, width, latents.size(1), device) if self.transformer.config.use_rotary_positional_embeddings else None ) # 8. Denoising loop num_warmup_steps = max(len(timesteps) - num_inference_steps * self.scheduler.order, 0) with self.progress_bar(total=num_inference_steps) as progress_bar: # for DPM-solver++ old_pred_original_sample = None for i, t in enumerate(timesteps): if self.interrupt: continue latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) latent_image_input = torch.cat([image_latents] * 2) if do_classifier_free_guidance else image_latents latent_model_input = torch.cat([latent_model_input, latent_image_input], dim=2) # broadcast to batch dimension in a way that's compatible with ONNX/Core ML timestep = t.expand(latent_model_input.shape[0]) # predict noise model_output noise_pred = self.transformer( hidden_states=latent_model_input, encoder_hidden_states=prompt_embeds, timestep=timestep, image_rotary_emb=image_rotary_emb, return_dict=False, )[0] noise_pred = noise_pred.float() # perform guidance if use_dynamic_cfg: self._guidance_scale = 1 + guidance_scale * ( (1 - 
math.cos(math.pi * ((num_inference_steps - t.item()) / num_inference_steps) ** 5.0)) / 2 ) if do_classifier_free_guidance: noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) noise_pred = noise_pred_uncond + self.guidance_scale * (noise_pred_text - noise_pred_uncond) # compute the previous noisy sample x_t -> x_t-1 if not isinstance(self.scheduler, CogVideoXDPMScheduler): latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0] else: latents, old_pred_original_sample = self.scheduler.step( noise_pred, old_pred_original_sample, t, timesteps[i - 1] if i > 0 else None, latents, **extra_step_kwargs, return_dict=False, ) latents = latents.to(prompt_embeds.dtype) # call the callback, if provided if callback_on_step_end is not None: callback_kwargs = {} for k in callback_on_step_end_tensor_inputs: callback_kwargs[k] = locals()[k] callback_outputs = callback_on_step_end(self, i, t, callback_kwargs) latents = callback_outputs.pop("latents", latents) prompt_embeds = callback_outputs.pop("prompt_embeds", prompt_embeds) negative_prompt_embeds = callback_outputs.pop("negative_prompt_embeds", negative_prompt_embeds) if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): progress_bar.update() if not output_type == "latent": video = self.decode_latents(latents) video = self.video_processor.postprocess_video(video=video, output_type=output_type) else: video = latents # Offload all models self.maybe_free_model_hooks() if not return_dict: return (video,) return (video,) ================================================ FILE: cogvideox_interpolation/utils/colormaps.py ================================================ # Copyright 2022 the Regents of the University of California, Nerfstudio Team and contributors. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Helper functions for visualizing outputs """ from dataclasses import dataclass from typing import Optional import matplotlib import mediapy as media import torch from jaxtyping import Bool, Float from torch import Tensor from . import colors # Colormaps = Literal["default", "turbo", "viridis", "magma", "inferno", "cividis", "gray", "pca"] Colormaps = "turbo" @dataclass(frozen=True) class ColormapOptions: """Options for colormap""" colormap: Colormaps = "default" """ The colormap to use """ normalize: bool = False """ Whether to normalize the input tensor image """ colormap_min: float = 0 """ Minimum value for the output colormap """ colormap_max: float = 1 """ Maximum value for the output colormap """ invert: bool = False """ Whether to invert the output colormap """ def apply_colormap( image: Float[Tensor, "*bs channels"], colormap_options: ColormapOptions = ColormapOptions(), eps: float = 1e-9, mask_thresh=None, ): """ Applies a colormap to a tensor image. If single channel, applies a colormap to the image. If 3 channel, treats the channels as RGB. If more than 3 channel, applies a PCA reduction on the dimensions to 3 channels Args: image: Input tensor image. eps: Epsilon value for numerical stability. Returns: Tensor with the colormap applied. 
""" # default for rgb images if image.shape[-1] == 3: return image # rendering depth outputs if image.shape[-1] == 1 and torch.is_floating_point(image): output = image if mask_thresh is not None: mask = output < mask_thresh if colormap_options.normalize: output = output - torch.min(output) output = output / (torch.max(output) + eps) output = ( output * (colormap_options.colormap_max - colormap_options.colormap_min) + colormap_options.colormap_min ) output = torch.clip(output, 0, 1) if mask_thresh is not None: output[mask] = 0 if colormap_options.invert: output = 1 - output return apply_float_colormap(output, colormap=colormap_options.colormap) # rendering boolean outputs if image.dtype == torch.bool: return apply_boolean_colormap(image) if image.shape[-1] > 3: return apply_pca_colormap(image) raise NotImplementedError def apply_float_colormap(image: Float[Tensor, "*bs 1"], colormap: Colormaps = "viridis"): """Convert single channel to a color image. Args: image: Single channel image. colormap: Colormap for image. Returns: Tensor: Colored image with colors in [0, 1] """ if colormap == "default": colormap = "turbo" image = torch.nan_to_num(image, 0) if colormap == "gray": return image.repeat(1, 1, 3) image_long = (image * 255).long() image_long_min = torch.min(image_long) image_long_max = torch.max(image_long) assert image_long_min >= 0, f"the min value is {image_long_min}" assert image_long_max <= 255, f"the max value is {image_long_max}" return torch.tensor(matplotlib.colormaps[colormap].colors, device=image.device)[image_long[..., 0]] def apply_depth_colormap( depth: Float[Tensor, "*bs 1"], accumulation: Optional[Float[Tensor, "*bs 1"]] = None, near_plane: Optional[float] = None, far_plane: Optional[float] = None, colormap_options: ColormapOptions = ColormapOptions(), ): """Converts a depth image to color for easier analysis. Args: depth: Depth image. accumulation: Ray accumulation used for masking vis. near_plane: Closest depth to consider. 
If None, use min image value. far_plane: Furthest depth to consider. If None, use max image value. colormap: Colormap to apply. Returns: Colored depth image with colors in [0, 1] """ near_plane = near_plane or float(torch.min(depth)) far_plane = far_plane or float(torch.max(depth)) depth = (depth - near_plane) / (far_plane - near_plane + 1e-10) depth = torch.clip(depth, 0, 1) # depth = torch.nan_to_num(depth, nan=0.0) # TODO(ethan): remove this colored_image = apply_colormap(depth, colormap_options=colormap_options) if accumulation is not None: colored_image = colored_image * accumulation + (1 - accumulation) return colored_image def apply_boolean_colormap( image: Bool[Tensor, "*bs 1"], true_color = colors.WHITE, false_color = colors.BLACK, ): """Converts a boolean image to color for easier analysis. Args: image: Boolean image. true_color: Color to use for True. false_color: Color to use for False. Returns: Colored boolean image """ colored_image = torch.ones(image.shape[:-1] + (3,)) colored_image[image[..., 0], :] = true_color colored_image[~image[..., 0], :] = false_color return colored_image def apply_pca_colormap(image: Float[Tensor, "*bs dim"]): """Convert feature image to 3-channel RGB via PCA.
The first three principal components are used for the color channels, with outlier rejection per-channel Args: image: image of arbitrary vectors Returns: Tensor: Colored image """ original_shape = image.shape image = image.contiguous().view(-1, image.shape[-1]) _, _, v = torch.pca_lowrank(image) image = torch.matmul(image, v[..., :3]) d = torch.abs(image - torch.median(image, dim=0).values) mdev = torch.median(d, dim=0).values s = d / mdev m = 3.0 # hyperparameter: how many deviations from the median count as an outlier rins = image[s[:, 0] < m, 0] gins = image[s[:, 1] < m, 1] bins = image[s[:, 2] < m, 2] if len(rins) == 0 or len(gins) == 0 or len(bins) == 0: return image.new_zeros(*original_shape[:-1], 3) image[:, 0] -= rins.min() image[:, 1] -= gins.min() image[:, 2] -= bins.min() image[:, 0] /= rins.max() - rins.min() image[:, 1] /= gins.max() - gins.min() image[:, 2] /= bins.max() - bins.min() image = torch.clamp(image, 0, 1) image_long = (image * 255).long() image_long_min = torch.min(image_long) image_long_max = torch.max(image_long) assert image_long_min >= 0, f"the min value is {image_long_min}" assert image_long_max <= 255, f"the max value is {image_long_max}" return image.view(*original_shape[:-1], 3) def colormap_saving(image: torch.Tensor, colormap_options, mask_thresh=None, save_path=None): """ if image's shape is (h, w, 1): draw colored relevance map; if image's shape is (h, w, 3): return directly; if image's shape is (h, w, c): execute PCA and transform it into (h, w, 3).
""" output_image = ( apply_colormap( image=image, mask_thresh=mask_thresh, colormap_options=colormap_options, ).cpu().numpy() ) if save_path is not None: media.write_image(save_path.with_suffix(".png"), output_image, fmt="png") return output_image ================================================ FILE: cogvideox_interpolation/utils/colors.py ================================================ # Copyright 2022 the Regents of the University of California, Nerfstudio Team and contributors. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
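The colors module that follows exposes a small preset palette and a `get_color` helper that accepts either a preset name or a 3-element RGB list. A minimal standalone sketch of the same lookup logic (re-implemented here with plain Python lists in place of torch tensors, for illustration only):

```python
# Simplified sketch of the get_color lookup in colors.py (lists instead of tensors)
COLORS_DICT = {
    "white": [1.0, 1.0, 1.0],
    "black": [0.0, 0.0, 0.0],
    "red":   [1.0, 0.0, 0.0],
    "green": [0.0, 1.0, 0.0],
    "blue":  [0.0, 0.0, 1.0],
}

def get_color(color):
    """Parse a preset name (case-insensitive) or a 3-element RGB list."""
    if isinstance(color, str):
        color = color.lower()
        if color not in COLORS_DICT:
            raise ValueError(f"{color} is not a valid preset color")
        return COLORS_DICT[color]
    if isinstance(color, list):
        if len(color) != 3:
            raise ValueError(f"Color should be 3 values (RGB) instead got {color}")
        return color
    raise ValueError(f"Color should be an RGB list or string, instead got {type(color)}")

print(get_color("Red"))  # -> [1.0, 0.0, 0.0]
```

The real module returns `torch.tensor` values so the result can be assigned directly into image tensors, as `apply_boolean_colormap` does above.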
"""Common Colors""" from typing import Union import torch from jaxtyping import Float from torch import Tensor WHITE = torch.tensor([1.0, 1.0, 1.0]) BLACK = torch.tensor([0.0, 0.0, 0.0]) RED = torch.tensor([1.0, 0.0, 0.0]) GREEN = torch.tensor([0.0, 1.0, 0.0]) BLUE = torch.tensor([0.0, 0.0, 1.0]) COLORS_DICT = { "white": WHITE, "black": BLACK, "red": RED, "green": GREEN, "blue": BLUE, } def get_color(color: Union[str, list]) -> Float[Tensor, "3"]: """ Args: Color as a string or an RGB list Returns: Parsed color """ if isinstance(color, str): color = color.lower() if color not in COLORS_DICT: raise ValueError(f"{color} is not a valid preset color") return COLORS_DICT[color] if isinstance(color, list): if len(color) != 3: raise ValueError(f"Color should be 3 values (RGB) instead got {color}") return torch.tensor(color) raise ValueError(f"Color should be an RGB list or string, instead got {type(color)}") ================================================ FILE: cogvideox_interpolation/utils/config_utils.py ================================================ import argparse import os from mmengine.config import Config def parse_args(): parser = argparse.ArgumentParser(description="Simple example of a training script.") parser.add_argument("--config", help="model config file path") parser.add_argument( "--pretrained_model_name_or_path", type=str, default=None, required=True, help="Path to pretrained model or model identifier from huggingface.co/models.", ) parser.add_argument( "--pretrained_model_ae", type=str, default=None, help="Path to pretrained model or model identifier from huggingface.co/models.", ) parser.add_argument( "--root_path", type=str, help="Root path of the training data.", ) parser.add_argument( "--annotation_json", type=str, help="Path to the annotation json file.", ) parser.add_argument( "--output_dir", type=str, default="results", help="The output directory
where the model predictions and checkpoints will be written.", ) parser.add_argument( "--logging_dir", type=str, default="logs", ) parser.add_argument( "--mixed_precision", type=str, default=None, choices=["no", "fp16", "bf16"], help=( "Whether to use mixed precision. Choose between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >=" " 1.10 and an Nvidia Ampere GPU. Defaults to the value of the accelerate config of the current system or the" " flag passed with the `accelerate.launch` command. Use this argument to override the accelerate config." ), ) parser.add_argument( "--report_to", type=str, default="tensorboard", help=( 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' ), ) parser.add_argument( "--gradient_accumulation_steps", type=int, default=8, help="Number of update steps to accumulate before performing a backward/update pass.", ) parser.add_argument( "--revision", type=str, default=None, required=False, help="Revision of pretrained model identifier from huggingface.co/models.", ) parser.add_argument( "--variant", type=str, default=None, help="Variant of the model files of the pretrained model identifier from huggingface.co/models, 'e.g.' fp16", ) parser.add_argument( "--gradient_checkpointing", type=bool, default=True, help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", ) parser.add_argument( "--learning_rate", type=float, default=1e-4, help="Initial learning rate (after the potential warmup period) to use.", ) parser.add_argument( "--use_8bit_adam", type=bool, default=True, help="Whether or not to use 8-bit Adam from bitsandbytes." ) parser.add_argument( "--use_came", type=bool, default=False, help="Whether to use the CAME optimizer.", ) parser.add_argument( "--allow_tf32", type=bool, default=True, help=( "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training.
For more information, see" " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" ), ) parser.add_argument( "--lr_scheduler", type=str, default="constant", help=( 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' ' "constant", "constant_with_warmup"]' ), ) parser.add_argument( "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." ) parser.add_argument( "--max_train_steps", type=int, default=None, help="Total number of training steps to perform. If provided, overrides num_train_epochs.", ) parser.add_argument( "--dataloader_num_workers", type=int, default=0, help=( "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process." ), ) parser.add_argument( "--train_batch_size", type=int, default=1, help="Batch size (per device) for the training dataloader." ) parser.add_argument( "--checkpointing_steps", type=int, default=4000, help=( "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming" " training using `--resume_from_checkpoint`." 
), ) parser.add_argument("--seed", type=int, default=42, help="A seed for reproducible training.") parser.add_argument("--num_train_epochs", type=int, default=100) parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") parser.add_argument("--classes", type=str, nargs="+") parser.add_argument("--img_path", type=str) parser.add_argument("--save_path", type=str) args = parser.parse_args() env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) if env_local_rank != -1 and env_local_rank != args.local_rank: args.local_rank = env_local_rank return args def merge_args(cfg, args): # if args.ckpt_path is not None: # cfg.model["from_pretrained"] = args.ckpt_path # if cfg.get("discriminator") is not None: # cfg.discriminator["from_pretrained"] = args.ckpt_path # args.ckpt_path = None for k, v in vars(args).items(): if v is not None: cfg[k] = v return cfg def read_config(config_path): cfg = Config.fromfile(config_path) return cfg def parse_configs(): args = parse_args() cfg = read_config(args.config) cfg = merge_args(cfg, args) return cfg def str2bool(v): if isinstance(v, bool): return v if v.lower() in ("yes", "true", "t", "y", "1"): return True elif v.lower() in ("no", "false", "f", "n", "0"): return False else: raise argparse.ArgumentTypeError("Boolean value expected.") ================================================ FILE: cogvideox_interpolation/utils/misc.py ================================================ 
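The `Timer` class in this file wraps `torch.cuda.synchronize()` around `time.time()` calls so that asynchronous GPU work is included in the measurement. A CPU-only sketch of the same context-manager pattern (CUDA synchronization omitted, and a `log` flag added as an assumption about the unused constructor argument):

```python
import time


class Timer:
    """Context manager recording wall-clock time between __enter__ and __exit__."""

    def __init__(self, name, log=False):
        self.name = name
        self.log = log  # assumption: print the elapsed time when True
        self.start_time = None
        self.end_time = None

    @property
    def elapsed_time(self):
        return self.end_time - self.start_time

    def __enter__(self):
        # the original calls torch.cuda.synchronize() here to flush queued GPU work
        self.start_time = time.time()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # ...and again here, so elapsed_time covers kernels launched inside the block
        self.end_time = time.time()
        if self.log:
            print(f"[{self.name}] {self.elapsed_time:.4f}s")


with Timer("sleep", log=True) as t:
    time.sleep(0.01)
```

Without the synchronize calls, timings of CUDA code only measure kernel launch overhead, which is why the original version forces a sync on both entry and exit.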
import torch, time class Timer: def __init__(self, name, log=False): self.name = name self.start_time = None self.end_time = None @property def elapsed_time(self): return self.end_time - self.start_time def __enter__(self): torch.cuda.synchronize() self.start_time = time.time() return self def __exit__(self, exc_type, exc_val, exc_tb): torch.cuda.synchronize() self.end_time = time.time() ================================================ FILE: configs/field_construction.yaml ================================================ wandb: enable_wandb: False project_name: "proj" pipeline: rgb_video_path: outputs/kitchen/rgb/video_ckpt_800.mp4 seg_video_path: outputs/kitchen/seg/video_ckpt_800.mp4 normal_video_path: outputs/kitchen/normal/video_ckpt_800.mp4 data_path: "field_construction/data/kitchen" skip_video_process: False skip_pose_estimate: True skip_lang_feature_extraction: False load_iteration: 5_000 selection: False selected_idxs: [] chunk_num: 8 keep_num_per_chunk: 3 mode: "train" video_processor: img_format: "png" feature_extractor: type: "open-seg" model_path: "/home/lff/data1/cjw/langscene/model_zoo/openseg_exported_clip" pose_estimator: type: "vggt" device: "cuda" lseg: model_path: "model_zoo/lseg/demo_e200.ckpt" device: "cuda" ae: model_path: "model_zoo/ae/model.safetensors" device: "cuda" gaussian: debug_from: -100 detect_anomaly: False test_iterations: [100, 500, 1000, 2000, 5000, 10_000, 12_000] save_iterations: [100, 500, 1000, 2000, 5000, 10_000, 12_000] quiet: False checkpoint_iterations: [100, 500, 1000, 5000, 10_000, 12_000] start_checkpoint: None dataset: sh_degree: 3 source_path: "/home/lff/data1/cjw/langscene/field_construction/data/kitchen" model_path: "/home/lff/data1/cjw/langscene/field_construction/outputs/kitchen" images: "input" normal: "normal" resolution: -1 white_background: False data_device: "cuda" # "cuda" or "cpu" eval: False preload_img: True ncc_scale: 1.0 multi_view_num: 8 multi_view_max_angle: 30 multi_view_min_dis: 0.01 
multi_view_max_dis: 1.5 language_features_name: "lang_features_dim3" opt: pp_optimizer: False optim_pose: True pose_until_iter: 2000 iterations: 12_000 max_geo_iter: 1500 normal_optim: False position_lr_init: 0.00016 position_lr_final: 0.0000016 position_lr_delay_mult: 0.01 position_lr_max_steps: 1000 feature_lr: 0.0025 opacity_lr: 0.05 language_feature_lr: 0.0050 instance_feature_lr: 0.0050 scaling_lr: 0.005 rotation_lr: 0.001 percent_dense: 0.001 lambda_dssim: 0.2 densification_interval: 100 opacity_reset_interval: 999_999 densify_from_iter: 500 densify_until_iter: 1200 # densify_from_iter: 999_999 # densify_until_iter: -1 # densify_grad_threshold: 0.0002 densify_grad_threshold: 0.004 scale_loss_weight: 100.0 wo_image_weight: False single_view_weight: 0.10 single_view_weight_from_iter: 500 single_view_weight_end_iter: 2000 instance_supervision_from_iter: 12_001 use_virtul_cam: False virtul_cam_prob: 0.5 use_multi_view_trim: True multi_view_ncc_weight: 0.15 multi_view_geo_weight: 0.03 multi_view_weight_from_iter: 500 multi_view_weight_end_iter: 2000 multi_view_patch_size: 3 multi_view_sample_num: 102400 multi_view_pixel_noise_th: 1.0 wo_use_geo_occ_aware: False opacity_cull_threshold: 0.05 # densify_abs_grad_threshold: 0.0008 densify_abs_grad_threshold: 0.016 abs_split_radii2D_threshold: 20 max_abs_split_points: 0 max_all_points: 12_000_000 exposure_compensation: False random_background: False reg3d_start: 2 reg3d_k: 5 reg3d_lambda_val: 4 lang_loss_start_iter: 1200 grouping_loss: True loss_obj_3d: True pipe: convert_SHs_python: False compute_cov3D_python: False debug: False render: load_iteration: 5_000 pose_optim_iter: 100 voxel_size: 0.01 normalized: True include_features: True eval: eval_data_path: "" pose_optim_iter: 100 ================================================ FILE: configs/test_config.py ================================================ # validation val_steps=10 wandb=False record_time = True # autoencoder config # exp_name = "AE-gpu8-bs1-channel-16" 
# autoencoder_name = "ae-channel-16" exp_name = "test" autoencoder_name = "ae-channel-3" dataset_name = "lvis" lseg_weights="model_zoo/lseg/demo_e200.ckpt" # loss weights perceptual_loss_weight = 0.1 # use vgg is not None and more than 0 kl_loss_weight = 1e-6 mixed_strategy = "mixed_video_image" mixed_image_ratio = 0.2 use_real_rec_loss = False use_z_rec_loss = True use_image_identity_loss = True ================================================ FILE: configs/unet_config_c16.py ================================================ # == datasets == dataset_path = "/mnt/juicefs/datasets/lvis" json_path = "/mnt/juicefs/datasets/lvis/annotations/image_info_unlabeled2017.json" #== train == mixed_precision = "no" num_train_epochs = 5 train_batch_size = 4 wandb=True exp_name = "train-acc-unet-c16" record_time = False pretrained_model_ae = None # validation val_steps=100 checkpointing_steps = 50000 #== model == in_channels = 512 out_channels = 512 latent_channels = 16 encoder_block_out_channels=[256, 128, 64, 32, 16] decoder_block_out_channels=[16, 32, 64, 128, 256] num_encoder_blocks=(1, 1, 1, 1, 1) num_decoder_blocks=(1, 1, 1, 1, 1) ================================================ FILE: configs/unet_config_c32.py ================================================ # == datasets == dataset_path = "/mnt/juicefs/datasets/lvis" json_path = "/mnt/juicefs/datasets/lvis/annotations/image_info_unlabeled2017.json" #== train == mixed_precision = "no" num_train_epochs = 5 train_batch_size = 4 wandb=True exp_name = "train-acc-unet-c16" record_time = False pretrained_model_ae = None # validation val_steps=100 checkpointing_steps = 50000 #== model == in_channels = 512 out_channels = 512 latent_channels = 32 encoder_block_out_channels=[256, 64, 16] decoder_block_out_channels=[16, 64, 256] num_encoder_blocks=(1, 1, 1) num_decoder_blocks=(1, 1, 1) ================================================ FILE: entry_point.py ================================================ import logging import os import 
random import warnings from random import randint import hydra import numpy as np import torch from omegaconf import DictConfig from field_construction.pipeline import FieldConstructionPipeline def setup_seed(seed): torch.manual_seed(seed) torch.cuda.manual_seed_all(seed) np.random.seed(seed) random.seed(seed) @hydra.main(config_path="configs", config_name="field_construction", version_base=None) def main(cfg: DictConfig): logging.basicConfig( level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s", handlers=[logging.StreamHandler()] ) # ignore PIL debug messages. pil_logger = logging.getLogger("PIL") pil_logger.setLevel(logging.WARNING) os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' warnings.filterwarnings("ignore", category=FutureWarning) setup_seed(42) pipeline = FieldConstructionPipeline(cfg) if cfg.pipeline.mode == "train": pipeline.construct_field() elif cfg.pipeline.mode == "render": pipeline.render_result() elif cfg.pipeline.mode == "eval": pipeline.eval() else: raise NotImplementedError if __name__ == "__main__": main() ================================================ FILE: field_construction/auto_encoder.py ================================================ import glob import os import numpy as np import torch import torch.nn as nn from torch.utils.data import Dataset class Autoencoder_dataset(Dataset): def __init__(self, data_dir): data_names = sorted(glob.glob(os.path.join(data_dir, '*.npy'))) data = [] for i in range(len(data_names)): features = torch.from_numpy(np.load(data_names[i])) data.append(features) self.data = torch.cat(data, dim=0).float() def __getitem__(self, index): data = self.data[index] return data def __len__(self): return self.data.shape[0] class Autoencoder(nn.Module): def __init__(self, encoder_hidden_dims=None, decoder_hidden_dims=None): super(Autoencoder, self).__init__() encoder_layers = [] if not encoder_hidden_dims: encoder_hidden_dims = [512, 256, 128, 64, 32, 16, 3] if not decoder_hidden_dims:
decoder_hidden_dims = [16, 32, 64, 128, 256, 512, 768] for i in range(len(encoder_hidden_dims)): if i == 0: encoder_layers.append(nn.Linear(768, encoder_hidden_dims[i])) else: encoder_layers.append(torch.nn.BatchNorm1d(encoder_hidden_dims[i-1])) encoder_layers.append(nn.ReLU()) encoder_layers.append(nn.Linear(encoder_hidden_dims[i-1], encoder_hidden_dims[i])) self.encoder = nn.ModuleList(encoder_layers) decoder_layers = [] for i in range(len(decoder_hidden_dims)): if i == 0: decoder_layers.append(nn.Linear(encoder_hidden_dims[-1], decoder_hidden_dims[i])) else: decoder_layers.append(nn.ReLU()) decoder_layers.append(nn.Linear(decoder_hidden_dims[i-1], decoder_hidden_dims[i])) self.decoder = nn.ModuleList(decoder_layers) print(self.encoder, self.decoder) def forward(self, x): for m in self.encoder: x = m(x) x = x / x.norm(dim=-1, keepdim=True) for m in self.decoder: x = m(x) x = x / x.norm(dim=-1, keepdim=True) return x def encode(self, x): for m in self.encoder: x = m(x) x = x / x.norm(dim=-1, keepdim=True) return x def decode(self, x): for m in self.decoder: x = m(x) x = x / x.norm(dim=-1, keepdim=True) return x ================================================ FILE: field_construction/extract_with_openseg.py ================================================ import os from argparse import ArgumentParser import cv2 import numpy as np import torch from tqdm import tqdm def extract_with_openseg(cfg): import tensorflow as tf2 import tensorflow._api.v2.compat.v1 as tf gpus = tf.config.experimental.list_physical_devices('GPU') if gpus: try: for gpu in gpus: tf.config.experimental.set_memory_growth(gpu, True) except RuntimeError as e: print(e) openseg = tf2.saved_model.load( cfg.feature_extractor.model_path, tags=[tf.saved_model.tag_constants.SERVING] ) imgs_path = os.path.join(cfg.pipeline.data_path, "input") img_names = list( filter( lambda x: x.endswith("png") or x.endswith("jpg"), os.listdir(imgs_path) ) ) img_list = [] np_image_string_list = [] for img_name in 
img_names: img_path = os.path.join(imgs_path, img_name) image = cv2.imread(img_path) with tf.gfile.GFile(img_path, 'rb') as f: np_image_string = np.array([f.read()]) image = torch.from_numpy(image) img_list.append(image) np_image_string_list.append(np_image_string) images = [img_list[i].permute(2, 0, 1)[None, ...] for i in range(len(img_list))] imgs = torch.cat(images) save_path = os.path.join(cfg.pipeline.data_path, "lang_features") os.makedirs(save_path, exist_ok=True) embed_size = 768 for i, (img, np_image_string) in enumerate(tqdm((zip(imgs, np_image_string_list)), desc="Extracting lang features")): text_emb = tf.zeros([1, 1, embed_size]) results = openseg.signatures["serving_default"]( inp_image_bytes=tf.convert_to_tensor(np_image_string[0]), inp_text_emb=text_emb ) img_info = results['image_info'] crop_sz = [ int(img_info[0, 0] * img_info[2, 0]), int(img_info[0, 1] * img_info[2, 1]) ] image_embedding_feat = results['image_embedding_feat'][:, :crop_sz[0], :crop_sz[1]] img_size = (img.shape[1], img.shape[2]) feat_2d = tf.cast( tf.image.resize_nearest_neighbor( image_embedding_feat, img_size, align_corners=True )[0], dtype=tf.float16 ).numpy() # save feat_2d np.save(os.path.join(save_path, str(i+1).zfill(4)+".npy"), feat_2d) if __name__ == "__main__": arg_parser = ArgumentParser() arg_parser.add_argument("--cfg") args = arg_parser.parse_args() extract_with_openseg(args.cfg) ================================================ FILE: field_construction/gaussian_field.py ================================================ # # Copyright (C) 2023, Inria # GRAPHDECO research group, https://team.inria.fr/graphdeco # All rights reserved. # # This software is free for non-commercial, research and evaluation use # under the terms of the LICENSE.md file. 
# # For inquiries contact george.drettakis@inria.fr # import copy import logging import os import random from random import randint import cv2 import numpy as np import open3d as o3d import torch import torch.nn.functional as F import torchvision from tqdm import tqdm from cogvideox_interpolation.utils.colormaps import apply_pca_colormap from .gaussian_renderer import render from .scene import GaussianModel, Scene from .scene.app_model import AppModel from .scene.cameras import Camera from .utils.camera_utils import gen_virtul_cam from .utils.general_utils import safe_state from .utils.graphics_utils import patch_offsets, patch_warp from .utils.image_utils import psnr from .utils.loss_utils import (get_img_grad_weight, get_loss_instance_group, get_loss_semantic_group, l1_loss, lncc, loss_cls_3d, ranking_loss, ssim) from .utils.pose_utils import (get_camera_from_tensor, get_tensor_from_camera, post_pose_process, quad2rotation) def post_process_mesh(mesh, cluster_to_keep=3): """ Post-process a mesh to filter out floaters and disconnected parts """ print("post processing the mesh to keep the {} largest clusters".format(cluster_to_keep)) mesh_0 = copy.deepcopy(mesh) with o3d.utility.VerbosityContextManager(o3d.utility.VerbosityLevel.Debug) as cm: triangle_clusters, cluster_n_triangles, cluster_area = (mesh_0.cluster_connected_triangles()) triangle_clusters = np.asarray(triangle_clusters) cluster_n_triangles = np.asarray(cluster_n_triangles) cluster_area = np.asarray(cluster_area) n_cluster = np.sort(cluster_n_triangles.copy())[-cluster_to_keep] n_cluster = max(n_cluster, 50) # filter out clusters with fewer than 50 triangles triangles_to_remove = cluster_n_triangles[triangle_clusters] < n_cluster mesh_0.remove_triangles_by_mask(triangles_to_remove) mesh_0.remove_unreferenced_vertices() mesh_0.remove_degenerate_triangles() print("num vertices raw {}".format(len(mesh.vertices))) print("num vertices post {}".format(len(mesh_0.vertices))) return mesh_0 def permuted_pca(image): return
apply_pca_colormap(image.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)


def save_pose(path, quat_pose, train_cams):
    # Get camera IDs and convert quaternion poses to camera matrices
    camera_ids = [cam.colmap_id for cam in train_cams]
    world_to_camera = [get_camera_from_tensor(quat) for quat in quat_pose]
    # Reorder poses according to COLMAP IDs
    colmap_poses = []
    for i in range(len(camera_ids)):
        idx = camera_ids.index(i + 1)  # Find position of camera i+1
        pose = world_to_camera[idx]
        colmap_poses.append(pose)
    # Convert to numpy array and save
    colmap_poses = torch.stack(colmap_poses).detach().cpu().numpy()
    np.save(path, colmap_poses)


def load_and_prepare_confidence(confidence_path, device='cuda', scale=(0.1, 1.0)):
    """
    Loads, normalizes, inverts, and scales confidence values to obtain learning rate modifiers.

    Args:
        confidence_path (str): Path to the .npy confidence file.
        device (str): Device to load the tensor onto.
        scale (tuple): Desired range for the learning rate modifiers.

    Returns:
        torch.Tensor: Learning rate modifiers.
    """
    # Load and normalize
    confidence_np = np.load(confidence_path)
    confidence_tensor = torch.from_numpy(confidence_np).float().to(device)
    normalized_confidence = torch.sigmoid(confidence_tensor)

    # Invert confidence and scale to desired range
    inverted_confidence = 1.0 - normalized_confidence
    min_scale, max_scale = scale
    lr_modifiers = inverted_confidence * (max_scale - min_scale) + min_scale
    return lr_modifiers


class GaussianField:
    def __init__(self, cfg):
        self.cfg = cfg

    def train(self):
        cfg = self.cfg
        dataset = cfg.gaussian.dataset
        opt = cfg.gaussian.opt
        pipe = cfg.gaussian.pipe
        device = cfg.gaussian.dataset.data_device
        self.gaussians = GaussianModel(cfg.gaussian.dataset.sh_degree)
        self.scene = Scene(cfg.gaussian.dataset, self.gaussians)
        self.app_model = AppModel()
        self.app_model.train().cuda()
        logging.info("Optimizing " + dataset.model_path)
        safe_state(cfg.gaussian.quiet)

        if opt.pp_optimizer:
            confidence_path = os.path.join(dataset.source_path, "sparse/0", "confidence_dsp.npy")
            try:
                confidence_lr = load_and_prepare_confidence(confidence_path, device='cuda', scale=(2, 100))
                self.gaussians.training_setup_pp(opt, confidence_lr, device)
            except Exception:
                logging.warning("Cannot load confidence; falling back to the standard optimizer.")
                opt.pp_optimizer = False
                self.gaussians.training_setup(opt, device)
        else:
            self.gaussians.training_setup(opt, device)

        train_cams_init = self.scene.getTrainCameras().copy()
        for save_iter in cfg.gaussian.save_iterations:
            os.makedirs(self.scene.model_path + f'/pose/iter_{save_iter}', exist_ok=True)
            save_pose(self.scene.model_path + f'/pose/iter_{save_iter}/pose_org.npy', self.gaussians.P, train_cams_init)

        first_iter = 0
        if cfg.gaussian.start_checkpoint != "None":
            model_params, first_iter = torch.load(cfg.gaussian.start_checkpoint)
            self.gaussians.restore(model_params, opt)
            self.app_model.load_weights(self.scene.model_path)

        bg_color = [1, 1, 1] if dataset.white_background else [0, 0, 0]
        background = torch.tensor(bg_color, dtype=torch.float32, device=device)
        iter_start = torch.cuda.Event(enable_timing=True)
        iter_end = torch.cuda.Event(enable_timing=True)

        viewpoint_stack = None
        ema_loss_for_log = 0.0
        ema_single_view_for_log = 0.0
        ema_multi_view_geo_for_log = 0.0
        ema_multi_view_pho_for_log = 0.0
        ema_language_loss_for_log = 0.0
        ema_grouping_loss = 0.0
        ema_loss_obj_3d = 0.0
        ema_ins_grouping_loss = 0.0
        ema_ins_obj_3d_loss = 0.0
        normal_loss, geo_loss, ncc_loss = None, None, None
        language_loss = None
        grouping_loss = None
        include_feature = True

        progress_bar = tqdm(range(first_iter, opt.iterations), desc="Training progress")
        first_iter += 1
        debug_path = os.path.join(self.scene.model_path, "debug")
        os.makedirs(debug_path, exist_ok=True)
        camera_list = self.scene.getTrainCameras().copy()
        last_cam_id = -1
        self.gaussians.change_reqiures_grad("semantic", iteration=first_iter, quiet=False)
        if not opt.optim_pose:
            self.gaussians.P.requires_grad_(False)

        for iteration in range(first_iter, opt.iterations + 1):
            iter_start.record()
            self.gaussians.update_learning_rate(iteration)
            if iteration % 100 == 0:
                self.gaussians.oneupSHdegree()
            if not viewpoint_stack:
                viewpoint_stack = camera_list.copy()
            # update camera lists:
            for cam_idx, cam in enumerate(camera_list):
                if cam.uid == last_cam_id:
updated_pose = self.gaussians.get_RT(self.gaussians.index_mapping[last_cam_id]).clone().detach() extrinsics = get_camera_from_tensor(updated_pose) camera_list[cam_idx].R = extrinsics[:3, :3].T camera_list[cam_idx].T = extrinsics[:3, 3] break viewpoint_cam: Camera = viewpoint_stack.pop(randint(0, len(viewpoint_stack) - 1)) last_cam_id = viewpoint_cam.uid pose = self.gaussians.get_RT(self.gaussians.index_mapping[last_cam_id]) # quad t if (iteration - 1) == cfg.gaussian.debug_from: pipe.debug = True bg = torch.rand((3), device="cuda") if opt.random_background else background if not opt.optim_pose: render_pkg = render(viewpoint_cam, self.gaussians, pipe, bg, app_model=self.app_model, return_depth_normal=iteration > opt.single_view_weight_from_iter, include_feature=include_feature) else: render_pkg = render(viewpoint_cam, self.gaussians, pipe, bg, app_model=self.app_model, return_depth_normal=iteration > opt.single_view_weight_from_iter, include_feature=include_feature, camera_pose=pose) image, viewspace_point_tensor, visibility_filter, radii, language_feature, instance_feature = \ render_pkg["render"], render_pkg["viewspace_points"], render_pkg["visibility_filter"], render_pkg["radii"], \ render_pkg["language_feature"], render_pkg["instance_feature"] overall_loss = 0 image_loss = None obj_3d_loss = None grouping_loss = None ins_obj_3d_loss = None ins_grouping_loss = None if iteration == opt.max_geo_iter: self.gaussians.change_reqiures_grad("semantic_only", iteration=iteration, quiet=False) if iteration < opt.max_geo_iter: gt_image, gt_image_gray = viewpoint_cam.get_image() ssim_loss = (1.0 - ssim(image, gt_image)) if 'app_image' in render_pkg and ssim_loss < 0.5: app_image = render_pkg['app_image'] Ll1 = l1_loss(app_image, gt_image) else: Ll1 = l1_loss(image, gt_image) image_loss = (1.0 - opt.lambda_dssim) * Ll1 + opt.lambda_dssim * ssim_loss overall_loss = overall_loss + image_loss # scale loss if visibility_filter.sum() > 0: scale = 
self.gaussians.get_scaling[visibility_filter] sorted_scale, _ = torch.sort(scale, dim=-1) min_scale_loss = sorted_scale[..., 0] overall_loss = overall_loss + opt.scale_loss_weight * min_scale_loss.mean() # single view loss: if opt.single_view_weight_from_iter < iteration < opt.single_view_weight_end_iter: weight = opt.single_view_weight normal = render_pkg["rendered_normal"] depth_normal = render_pkg["depth_normal"] image_weight = (1.0 - get_img_grad_weight(gt_image)) image_weight = (image_weight).clamp(0, 1).detach() ** 2 if opt.normal_optim: render_normal = (normal.permute(1, 2, 0) @ (viewpoint_cam.world_view_transform[:3, :3].T)).permute(2, 0, 1) rendered_depth_normal = (depth_normal.permute(1, 2, 0) @ (viewpoint_cam.world_view_transform[:3, :3].T)).permute(2, 0, 1) normal_gt, normal_mask = viewpoint_cam.get_normal() prior_normal = normal_gt prior_normal_mask = normal_mask[0] normal_prior_error = (1 - F.cosine_similarity(prior_normal, render_normal, dim=0)) + \ (1 - F.cosine_similarity(prior_normal, rendered_depth_normal, dim=0)) normal_prior_error = ranking_loss(normal_prior_error[prior_normal_mask], penalize_ratio=1.0, type="mean") normal_loss = weight * normal_prior_error else: if not opt.wo_image_weight: normal_loss = weight * (image_weight * (((depth_normal - normal)).abs().sum(0))).mean() else: normal_loss = weight * (((depth_normal - normal)).abs().sum(0)).mean() overall_loss = overall_loss + normal_loss # multi-view loss if opt.multi_view_weight_from_iter < iteration < opt.multi_view_weight_end_iter: nearest_cam = None if len(viewpoint_cam.nearest_id) == 0 else camera_list[ random.sample(viewpoint_cam.nearest_id, 1)[0]] use_virtul_cam = False if opt.use_virtul_cam and (np.random.random() < opt.virtul_cam_prob or nearest_cam is None): nearest_cam = gen_virtul_cam(viewpoint_cam, trans_noise=dataset.multi_view_max_dis, deg_noise=dataset.multi_view_max_angle, device=device) use_virtul_cam = True if nearest_cam is not None: patch_size = 
opt.multi_view_patch_size sample_num = opt.multi_view_sample_num pixel_noise_th = opt.multi_view_pixel_noise_th total_patch_size = (patch_size * 2 + 1) ** 2 ncc_weight = opt.multi_view_ncc_weight geo_weight = opt.multi_view_geo_weight H, W = render_pkg['plane_depth'].squeeze().shape ix, iy = torch.meshgrid( torch.arange(W), torch.arange(H), indexing='xy') pixels = torch.stack([ix, iy], dim=-1).float().to(render_pkg['plane_depth'].device) if not use_virtul_cam: nearest_pose = self.gaussians.get_RT(self.gaussians.index_mapping[nearest_cam.uid]) # quad t if not opt.optim_pose: nearest_render_pkg = render(nearest_cam, self.gaussians, pipe, bg, app_model=self.app_model, return_plane=True, return_depth_normal=False) else: nearest_render_pkg = render(nearest_cam, self.gaussians, pipe, bg, app_model=self.app_model, return_plane=True, return_depth_normal=False, camera_pose=nearest_pose.clone().detach()) else: nearest_render_pkg = render(nearest_cam, self.gaussians, pipe, bg, app_model=self.app_model, return_plane=True, return_depth_normal=False) pts = self.gaussians.get_points_from_depth(viewpoint_cam, render_pkg['plane_depth']) pts_in_nearest_cam = pts @ nearest_cam.world_view_transform[:3, :3] + nearest_cam.world_view_transform[3, :3] map_z, d_mask = self.gaussians.get_points_depth_in_depth_map(nearest_cam, nearest_render_pkg['plane_depth'], pts_in_nearest_cam) pts_in_nearest_cam = pts_in_nearest_cam / (pts_in_nearest_cam[:, 2:3]) pts_in_nearest_cam = pts_in_nearest_cam * map_z.squeeze()[..., None] R = torch.tensor(nearest_cam.R).float().cuda() T = torch.tensor(nearest_cam.T).float().cuda() pts_ = (pts_in_nearest_cam - T) @ R.transpose(-1, -2) pts_in_view_cam = pts_ @ viewpoint_cam.world_view_transform[:3, :3] + viewpoint_cam.world_view_transform[3, :3] pts_projections = torch.stack( [pts_in_view_cam[:, 0] * viewpoint_cam.Fx / pts_in_view_cam[:, 2] + viewpoint_cam.Cx, pts_in_view_cam[:, 1] * viewpoint_cam.Fy / pts_in_view_cam[:, 2] + viewpoint_cam.Cy], -1).float() 
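The `pts_projections` computed above map camera-space points back onto the image plane with the standard pinhole model (`x * Fx / z + Cx`, `y * Fy / z + Cy`). A minimal NumPy sketch of that projection — the helper name is ours, not part of the repo:

```python
import numpy as np

def project_to_pixels(pts_cam, fx, fy, cx, cy):
    """Project Nx3 camera-space points to Nx2 pixel coordinates (pinhole model)."""
    x, y, z = pts_cam[:, 0], pts_cam[:, 1], pts_cam[:, 2]
    u = x * fx / z + cx  # horizontal pixel coordinate
    v = y * fy / z + cy  # vertical pixel coordinate
    return np.stack([u, v], axis=-1)
```

The multi-view geometric term then measures `pixel_noise` as the distance between these reprojections and the original pixel grid, rejecting points whose round-trip error exceeds `multi_view_pixel_noise_th`.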
pixel_noise = torch.norm(pts_projections - pixels.reshape(*pts_projections.shape), dim=-1) if not opt.wo_use_geo_occ_aware: d_mask = d_mask & (pixel_noise < pixel_noise_th) weights = (1.0 / torch.exp(pixel_noise)).detach() weights[~d_mask] = 0 else: d_mask = d_mask weights = torch.ones_like(pixel_noise) weights[~d_mask] = 0 if iteration % 200 == 0: gt_img_show = ((gt_image).permute(1, 2, 0).clamp(0, 1)[:, :, [2, 1, 0]] * 255).detach().cpu().numpy().astype(np.uint8) if 'app_image' in render_pkg: img_show = ((render_pkg['app_image']).permute(1, 2, 0).clamp(0, 1)[:, :, [2, 1, 0]] * 255).detach().cpu().numpy().astype(np.uint8) else: img_show = ((image).permute(1, 2, 0).clamp(0, 1)[:, :, [2, 1, 0]] * 255).detach().cpu().numpy().astype(np.uint8) normal_show = (((normal + 1.0) * 0.5).permute(1, 2, 0).clamp(0,1) * 255).detach().cpu().numpy().astype(np.uint8) depth_normal_show = (((depth_normal + 1.0) * 0.5).permute(1, 2, 0).clamp(0,1) * 255).detach().cpu().numpy().astype(np.uint8) if not opt.normal_optim: normal_gt = torch.zeros_like(normal) normal_gt_show = (normal_gt.permute(1, 2, 0) @ (viewpoint_cam.world_view_transform[:3, :3])).permute(2, 0, 1) normal_gt_show = (((normal_gt_show + 1.0) * 0.5).permute(1, 2, 0).clamp(0, 1) * 255).detach().cpu().numpy().astype(np.uint8) d_mask_show = (weights.float() * 255).detach().cpu().numpy().astype(np.uint8).reshape(H, W) d_mask_show_color = cv2.applyColorMap(d_mask_show, cv2.COLORMAP_JET) depth = render_pkg['plane_depth'].squeeze().detach().cpu().numpy() depth_i = (depth - depth.min()) / (depth.max() - depth.min() + 1e-20) depth_i = (depth_i * 255).clip(0, 255).astype(np.uint8) depth_color = cv2.applyColorMap(depth_i, cv2.COLORMAP_JET) distance = render_pkg['rendered_distance'].squeeze().detach().cpu().numpy() distance_i = (distance - distance.min()) / (distance.max() - distance.min() + 1e-20) distance_i = (distance_i * 255).clip(0, 255).astype(np.uint8) distance_color = cv2.applyColorMap(distance_i, cv2.COLORMAP_JET) image_weight 
= image_weight.detach().cpu().numpy() image_weight = (image_weight * 255).clip(0, 255).astype(np.uint8) image_weight_color = cv2.applyColorMap(image_weight, cv2.COLORMAP_JET) row0 = np.concatenate([gt_img_show, img_show, normal_show, distance_color], axis=1) row1 = np.concatenate([d_mask_show_color, depth_color, depth_normal_show, normal_gt_show], axis=1) image_to_show = np.concatenate([row0, row1], axis=0) cv2.imwrite( os.path.join(debug_path, "%05d" % iteration + "_" + viewpoint_cam.image_name + ".jpg"), image_to_show) if d_mask.sum() > 0: geo_loss = geo_weight * ((weights * pixel_noise)[d_mask]).mean() overall_loss += geo_loss if use_virtul_cam is False: with torch.no_grad(): # sample mask d_mask = d_mask.reshape(-1) valid_indices = torch.arange(d_mask.shape[0], device=d_mask.device)[d_mask] if d_mask.sum() > sample_num: index = np.random.choice(d_mask.sum().cpu().numpy(), sample_num, replace=False) valid_indices = valid_indices[index] weights = weights.reshape(-1)[valid_indices] # sample ref frame patch pixels = pixels.reshape(-1, 2)[valid_indices] offsets = patch_offsets(patch_size, pixels.device) ori_pixels_patch = pixels.reshape(-1, 1, 2) / viewpoint_cam.ncc_scale + offsets.float() H, W = gt_image_gray.squeeze().shape pixels_patch = ori_pixels_patch.clone() pixels_patch[:, :, 0] = 2 * pixels_patch[:, :, 0] / (W - 1) - 1.0 pixels_patch[:, :, 1] = 2 * pixels_patch[:, :, 1] / (H - 1) - 1.0 ref_gray_val = F.grid_sample(gt_image_gray.unsqueeze(1), pixels_patch.view(1, -1, 1, 2), align_corners=True) ref_gray_val = ref_gray_val.reshape(-1, total_patch_size) ref_to_neareast_r = nearest_cam.world_view_transform[:3, :3].transpose(-1, -2) @ viewpoint_cam.world_view_transform[ :3, :3] ref_to_neareast_t = -ref_to_neareast_r @ viewpoint_cam.world_view_transform[3, :3] + nearest_cam.world_view_transform[3, :3] # compute Homography ref_local_n = render_pkg["rendered_normal"].permute(1, 2, 0) ref_local_n = ref_local_n.reshape(-1, 3)[valid_indices] ref_local_d = 
render_pkg['rendered_distance'].squeeze() ref_local_d = ref_local_d.reshape(-1)[valid_indices] H_ref_to_neareast = ref_to_neareast_r[None] - \ torch.matmul( ref_to_neareast_t[None, :, None].expand(ref_local_d.shape[0], 3, 1), ref_local_n[:, :, None].expand(ref_local_d.shape[0], 3, 1).permute( 0, 2, 1)) / ref_local_d[..., None, None] H_ref_to_neareast = torch.matmul( nearest_cam.get_k(nearest_cam.ncc_scale)[None].expand(ref_local_d.shape[0], 3, 3), H_ref_to_neareast) H_ref_to_neareast = H_ref_to_neareast @ viewpoint_cam.get_inv_k(viewpoint_cam.ncc_scale) # compute neareast frame patch grid = patch_warp(H_ref_to_neareast.reshape(-1, 3, 3), ori_pixels_patch) grid[:, :, 0] = 2 * grid[:, :, 0] / (W - 1) - 1.0 grid[:, :, 1] = 2 * grid[:, :, 1] / (H - 1) - 1.0 _, nearest_image_gray = nearest_cam.get_image() sampled_gray_val = F.grid_sample(nearest_image_gray[None], grid.reshape(1, -1, 1, 2), align_corners=True) sampled_gray_val = sampled_gray_val.reshape(-1, total_patch_size) # compute loss ncc, ncc_mask = lncc(ref_gray_val, sampled_gray_val) mask = ncc_mask.reshape(-1) ncc = ncc.reshape(-1) * weights ncc = ncc[mask].squeeze() if mask.sum() > 0: ncc_loss = ncc_weight * ncc.mean() overall_loss = overall_loss + ncc_loss if opt.lang_loss_start_iter <= iteration < opt.instance_supervision_from_iter: # language feature loss lf_path = os.path.join(dataset.source_path, dataset.language_features_name) gt_language_feature, language_feature_mask, gt_seg = viewpoint_cam.get_language_feature(lf_path) language_loss = l1_loss(language_feature * language_feature_mask, gt_language_feature * language_feature_mask) overall_loss = overall_loss + language_loss language_feature_mask = language_feature_mask.reshape(-1) if opt.grouping_loss: grouping_loss = get_loss_semantic_group(gt_seg.reshape(-1)[language_feature_mask], language_feature.permute(1, 2, 0).reshape(-1, 3)[ language_feature_mask]) overall_loss = overall_loss + grouping_loss if opt.loss_obj_3d: obj_3d_loss = 
loss_cls_3d(self.gaussians._xyz.detach().squeeze(), self.gaussians._language_feature.squeeze(), opt.reg3d_k, opt.reg3d_lambda_val, 2000000, 800) overall_loss += obj_3d_loss elif iteration >= opt.instance_supervision_from_iter: # change the grad mode and copy the semantic featuers into instance-level if iteration == opt.instance_supervision_from_iter: self.gaussians._instance_feature.data.copy_(self.gaussians._language_feature.detach().clone()) self.gaussians.change_reqiures_grad("instance", iteration=iteration, quiet=False) _, language_feature_mask, gt_seg = viewpoint_cam.get_language_feature(lf_path) language_feature_mask = language_feature_mask.reshape(-1) # supervise the instance features if opt.grouping_loss: ins_grouping_loss = get_loss_instance_group(gt_seg.reshape(-1)[language_feature_mask], instance_feature.permute(1, 2, 0).reshape(-1, 3)[ language_feature_mask], language_feature.permute(1, 2, 0).reshape(-1, 3)[ language_feature_mask]) overall_loss = overall_loss + ins_grouping_loss if opt.loss_obj_3d: ins_obj_3d_loss = loss_cls_3d(self.gaussians._xyz.detach().squeeze(), self.gaussians._instance_feature.squeeze(), opt.reg3d_k, opt.reg3d_lambda_val, 2000000, 800) overall_loss += ins_obj_3d_loss overall_loss.backward() iter_end.record() with torch.no_grad(): ema_loss_for_log = 0.4 * image_loss.item() + 0.6 * ema_loss_for_log if image_loss is not None else 0.0 + 0.6 * ema_loss_for_log ema_single_view_for_log = 0.4 * normal_loss.item() if normal_loss is not None else 0.0 + 0.6 * ema_single_view_for_log ema_multi_view_geo_for_log = 0.4 * geo_loss.item() if geo_loss is not None else 0.0 + 0.6 * ema_multi_view_geo_for_log ema_multi_view_pho_for_log = 0.4 * ncc_loss.item() if ncc_loss is not None else 0.0 + 0.6 * ema_multi_view_pho_for_log ema_language_loss_for_log = 0.4 * language_loss.item() if language_loss is not None else 0.0 + 0.6 * ema_language_loss_for_log ema_grouping_loss = 0.4 * grouping_loss.item() if grouping_loss is not None else 0.0 + 0.6 * 
ema_grouping_loss ema_loss_obj_3d = 0.4 * obj_3d_loss.item() if obj_3d_loss is not None else 0.0 + 0.6 * ema_loss_obj_3d ema_ins_obj_3d_loss = 0.4 * ins_obj_3d_loss.item() if ins_obj_3d_loss is not None else 0.0 + 0.6 * ema_ins_obj_3d_loss ema_ins_grouping_loss = 0.4 * ins_grouping_loss.item() if ins_grouping_loss is not None else 0.0 + 0.6 * ema_ins_grouping_loss if iteration % 10 == 0: loss_dict = { "Loss": f"{ema_loss_for_log:.{5}f}", "Lang": f"{ema_language_loss_for_log:.{5}f}", "Points": f"{len(self.gaussians.get_xyz)}", "gp": f"{ema_grouping_loss:.{5}f}", "3d": f"{ema_loss_obj_3d:.{5}f}", "Ins": f"{ema_ins_grouping_loss:.{5}f}", } progress_bar.set_postfix(loss_dict) progress_bar.update(10) if iteration == opt.iterations: progress_bar.close() self.training_report(iteration, camera_list, l1_loss, render, (pipe, background)) if (iteration in cfg.gaussian.save_iterations): print("\n[ITER {}] Saving Gaussians".format(iteration)) self.scene.save(iteration, include_feature=include_feature) save_pose(self.scene.model_path + f'/pose/iter_{iteration}/pose_optimized.npy', self.gaussians.P, train_cams_init) # Densification if iteration < min(opt.max_geo_iter, opt.densify_until_iter): # Keep track of max radii in image-space for pruning mask = (render_pkg["out_observe"] > 0) & visibility_filter self.gaussians.max_radii2D[mask] = torch.max(self.gaussians.max_radii2D[mask], radii[mask]) viewspace_point_tensor_abs = render_pkg["viewspace_points_abs"] self.gaussians.add_densification_stats(viewspace_point_tensor, viewspace_point_tensor_abs, visibility_filter) if opt.densify_from_iter < iteration < min(opt.max_geo_iter, opt.densify_until_iter) and iteration % opt.densification_interval == 0: logging.info("densifying and pruning...") size_threshold = 20 if iteration > opt.opacity_reset_interval else None self.gaussians.densify_and_prune(opt.densify_grad_threshold, opt.densify_abs_grad_threshold, opt.opacity_cull_threshold, self.scene.cameras_extent, size_threshold) if iteration 
% opt.opacity_reset_interval == 0 or (dataset.white_background and iteration == opt.densify_from_iter): self.gaussians.reset_opacity() if iteration < opt.iterations: self.gaussians.optimizer.step() self.gaussians.cam_optimizer.step() self.app_model.optimizer.step() self.gaussians.optimizer.zero_grad(set_to_none=True) self.gaussians.cam_optimizer.zero_grad(set_to_none=True) self.app_model.optimizer.zero_grad(set_to_none=True) if (iteration in cfg.gaussian.checkpoint_iterations): print("\n[ITER {}] Saving Checkpoint".format(iteration)) torch.save((self.gaussians.capture(include_feature=include_feature), iteration), self.scene.model_path + "/chkpnt" + str(iteration) + ".pth") self.app_model.save_weights(self.scene.model_path, iteration) self.app_model.save_weights(self.scene.model_path, opt.iterations) torch.cuda.empty_cache() # move camera poses to target path. max_save_iter = max(cfg.gaussian.save_iterations) orig_path = self.scene.model_path + f'/pose/iter_{max_save_iter}/pose_optimized.npy' camera_path = os.path.join(cfg.pipeline.data_path, "camera") eg_file = os.listdir(camera_path)[0] logging.info("Post processing pose & move to data path...") post_pose_process(orig_path, os.path.join(camera_path, eg_file), os.path.join(cfg.pipeline.data_path, "render_camera")) def training_report(self, iteration, camera_list, l1_loss, renderFunc, renderArgs): # Report test and samples of training set # do not use the optimized poses. 
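A note on the EMA logging in the training loop above: Python's conditional expression binds more loosely than `+`, so `0.4 * v.item() if v is not None else 0.0 + 0.6 * ema` parses as `(0.4 * v.item()) if v is not None else (0.0 + 0.6 * ema)` — several of the updates therefore drop the history term whenever the loss is present. A guarded update that keeps the intended 0.4/0.6 blend could look like this (a sketch; the function name is ours):

```python
def ema_update(ema, value, alpha=0.4):
    """Blend a new loss value into a running EMA; keep decaying when the value is absent."""
    if value is None:
        return (1.0 - alpha) * ema  # mirrors the original `0.0 + 0.6 * ema` branch
    return alpha * value + (1.0 - alpha) * ema
```

Used as `ema_single_view_for_log = ema_update(ema_single_view_for_log, normal_loss.item() if normal_loss is not None else None)`, the smoothing behaves the same whether or not the loss was computed this iteration.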
if iteration in self.cfg.gaussian.test_iterations: torch.cuda.empty_cache() validation_configs = ({'name': 'test', 'cameras': camera_list}, {'name': 'train', 'cameras': [self.scene.getTrainCameras()[idx % len(self.scene.getTrainCameras())] for idx in range(5, 30, 5)]}) for config in validation_configs: if config['cameras'] and len(config['cameras']) > 0: l1_test = 0.0 psnr_test = 0.0 for idx, viewpoint in enumerate(config['cameras']): if self.cfg.gaussian.opt.optim_pose: camera_pose = get_tensor_from_camera(viewpoint.world_view_transform.transpose(0, 1)) out = renderFunc(viewpoint, self.scene.gaussians, *renderArgs, app_model=self.app_model, camera_pose=camera_pose) else: out = renderFunc(viewpoint, self.scene.gaussians, *renderArgs, app_model=self.app_model) image = out["render"] if 'app_image' in out: image = out['app_image'] image = torch.clamp(image, 0.0, 1.0) gt_image, _ = viewpoint.get_image() gt_image = torch.clamp(gt_image.to("cuda"), 0.0, 1.0) l1_test += l1_loss(image, gt_image).mean().double() psnr_test += psnr(image, gt_image).mean().double() img_show = ((image).permute(1, 2, 0).clamp(0, 1)[:, :, [2, 1, 0]] * 255).detach().cpu().numpy().astype(np.uint8) img_gt_show = ((gt_image).permute(1, 2, 0).clamp(0, 1)[:, :, [2, 1, 0]] * 255).detach().cpu().numpy().astype(np.uint8) img_tosave = np.concatenate([img_show, img_gt_show], axis=1) valid_path = os.path.join(self.cfg.gaussian.dataset.model_path, "valid") os.makedirs(valid_path, exist_ok=True) cv2.imwrite(os.path.join(valid_path, f"{iteration}_{viewpoint.uid}.png"), img_tosave) psnr_test /= len(config['cameras']) l1_test /= len(config['cameras']) logging.info("\n[ITER {}] Evaluating {}: L1 {} PSNR {}".format(iteration, config['name'], l1_test, psnr_test)) torch.cuda.empty_cache() def render(self): cfg = self.cfg dataset = cfg.gaussian.dataset pipe = cfg.gaussian.pipe device = cfg.gaussian.dataset.data_device render_cfg = cfg.gaussian.render opt = cfg.gaussian.opt logging.info("Rendering " + 
dataset.model_path) safe_state(cfg.gaussian.quiet) voxel_size = 0.01 volume = o3d.pipelines.integration.ScalableTSDFVolume( voxel_length=voxel_size, sdf_trunc=4.0 * voxel_size, color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8 ) volume_feature = o3d.pipelines.integration.ScalableTSDFVolume( voxel_length=voxel_size, sdf_trunc=4.0 * voxel_size, color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8 ) with torch.no_grad(): self.gaussians = GaussianModel(cfg.gaussian.dataset.sh_degree) self.scene = Scene(cfg.gaussian.dataset, self.gaussians, load_iteration=cfg.pipeline.load_iteration, shuffle=False) self.app_model = AppModel() self.scene.loaded_iter = None bg_color = [1, 1, 1] if dataset.white_background else [0, 0, 0] background = torch.tensor(bg_color, dtype=torch.float32, device=device) render_path = os.path.join(dataset.model_path, "test", "renders_rgb") render_depth_path = os.path.join(dataset.model_path, "test", "renders_depth") render_depth_npy_path = os.path.join(dataset.model_path, "test", "renders_depth_npy") render_normal_path = os.path.join(dataset.model_path, "test", "renders_normal") os.makedirs(render_path, exist_ok=True) os.makedirs(render_depth_path, exist_ok=True) os.makedirs(render_depth_npy_path, exist_ok=True) os.makedirs(render_normal_path, exist_ok=True) depths_tsdf_fusion = [] all_language_feature = [] all_gt_language_feature = [] all_instance_feature = [] for idx, view in enumerate(tqdm(self.scene.getTrainCameras(), desc="Rendering progress")): camera_pose = get_tensor_from_camera(view.world_view_transform.transpose(0, 1)) gt, _ = view.get_image() if not opt.optim_pose: out = render(view, self.gaussians, pipe, background, app_model=None) else: out = render(view, self.gaussians, pipe, background, app_model=None, camera_pose=camera_pose) rendering = out["render"].clamp(0.0, 1.0) _, H, W = rendering.shape depth = out["plane_depth"].squeeze() depth_tsdf = depth.clone() depth = depth.detach().cpu().numpy() depth_i = (depth - 
depth.min()) / (depth.max() - depth.min() + 1e-20) depth_i = (depth_i * 255).clip(0, 255).astype(np.uint8) depth_color = cv2.applyColorMap(depth_i, cv2.COLORMAP_JET) normal = out["rendered_normal"].permute(1, 2, 0) normal = normal @ view.world_view_transform[:3, :3] normal = normal / (normal.norm(dim=-1, keepdim=True) + 1.0e-8) # normal = normal.detach().cpu().numpy() # normal = ((normal + 1) * 127.5).astype(np.uint8).clip(0, 255) normal = normal.detach().cpu().numpy()[:, :, ::-1] normal = ((1-normal) * 127.5).astype(np.uint8).clip(0, 255) language_feature = out["language_feature"] instance_feature = out["instance_feature"] all_language_feature.append(language_feature) all_instance_feature.append(instance_feature) lf_path = os.path.join(dataset.source_path, dataset.language_features_name) if os.path.exists(lf_path): gt_language, _, _ = view.get_language_feature(lf_path) all_gt_language_feature.append(gt_language) gts_path = os.path.join(dataset.model_path, "test", "gt_rgb") os.makedirs(gts_path, exist_ok=True) torchvision.utils.save_image(gt.clamp(0.0, 1.0), os.path.join(gts_path, view.image_name + ".png")) torchvision.utils.save_image(rendering, os.path.join(render_path, view.image_name + ".png")) cv2.imwrite(os.path.join(render_depth_path, view.image_name + ".jpg"), depth_color) np.save(os.path.join(render_depth_npy_path, view.image_name + ".npy"), depth) cv2.imwrite(os.path.join(render_normal_path, view.image_name + ".jpg"), normal) view_dir = torch.nn.functional.normalize(view.get_rays(), p=2, dim=-1) depth_normal = out["depth_normal"].permute(1, 2, 0) depth_normal = torch.nn.functional.normalize(depth_normal, p=2, dim=-1) dot = torch.sum(view_dir * depth_normal, dim=-1).abs() angle = torch.acos(dot) mask = angle > (80.0 / 180 * 3.14159) depth_tsdf[mask] = 0 depths_tsdf_fusion.append(depth_tsdf.squeeze().cpu()) depths_tsdf_fusion = torch.stack(depths_tsdf_fusion, dim=0) max_depth = 5.0 for idx, view in enumerate(tqdm(self.scene.getTrainCameras(), desc="TSDF 
Fusion progress")): ref_depth = depths_tsdf_fusion[idx].cuda() if view.mask is not None: ref_depth[view.mask.squeeze() < 0.5] = 0 ref_depth[ref_depth > max_depth] = 0 ref_depth = ref_depth.detach().cpu().numpy() pose = np.identity(4) pose[:3, :3] = view.R.transpose(-1, -2) pose[:3, 3] = view.T color = o3d.io.read_image(os.path.join(render_path, view.image_name + ".png")) depth = o3d.geometry.Image((ref_depth * 1000).astype(np.uint16)) rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth( color, depth, depth_scale=1000.0, depth_trunc=max_depth, convert_rgb_to_intensity=False) volume.integrate( rgbd, o3d.camera.PinholeCameraIntrinsic(W, H, view.Fx, view.Fy, view.Cx, view.Cy), pose ) num_cluster = 3 path = os.path.join(dataset.model_path, "mesh") os.makedirs(path, exist_ok=True) mesh = volume.extract_triangle_mesh() o3d.io.write_triangle_mesh(os.path.join(path, "tsdf_fusion.ply"), mesh, write_triangle_uvs=True, write_vertex_colors=True, write_vertex_normals=True) mesh = post_process_mesh(mesh, num_cluster) o3d.io.write_triangle_mesh(os.path.join(path, "tsdf_fusion_post.ply"), mesh, write_triangle_uvs=True, write_vertex_colors=True, write_vertex_normals=True) # perform pca among all lang/instance features render_language_path = os.path.join(dataset.model_path, "test", "renders_language") render_instance_path = os.path.join(dataset.model_path, "test", "renders_instance") gts_language_path = os.path.join(dataset.model_path, "test", "gt_language") render_language_npy_path = os.path.join(dataset.model_path, "test", "renders_language_npy") render_instance_npy_path = os.path.join(dataset.model_path, "test", "renders_instance_npy") gts_language_npy_path = os.path.join(dataset.model_path, "test", "gt_language_npy") os.makedirs(render_language_path, exist_ok=True) os.makedirs(gts_language_path, exist_ok=True) os.makedirs(render_language_npy_path, exist_ok=True) os.makedirs(gts_language_npy_path, exist_ok=True) os.makedirs(render_instance_path, exist_ok=True) 
os.makedirs(render_instance_npy_path, exist_ok=True) all_language_feature = torch.stack(all_language_feature) all_instance_feature = torch.stack(all_instance_feature) if len(all_gt_language_feature): all_gt_language_feature = torch.stack(all_gt_language_feature) if render_cfg.normalized: all_language_feature = torch.clamp(all_language_feature, min=-1, max=2) min_value = torch.min(all_language_feature) max_value = torch.max(all_language_feature) normalized_language_feature = (all_language_feature - min_value) / (max_value - min_value) pca_language_feature = permuted_pca(normalized_language_feature) for idx, view in enumerate(self.scene.getTrainCameras()): torchvision.utils.save_image(normalized_language_feature[idx], os.path.join(render_language_path, view.image_name + ".png")) all_instance_feature = torch.clamp(all_instance_feature, min=-1, max=2) min_value = torch.min(all_instance_feature) max_value = torch.max(all_instance_feature) normalized_instance_feature = (all_instance_feature - min_value) / (max_value - min_value) pca_instance_feature = permuted_pca(normalized_instance_feature) for idx, view in enumerate(self.scene.getTrainCameras()): torchvision.utils.save_image( # pca_instance_feature[idx], normalized_instance_feature[idx], os.path.join(render_instance_path, view.image_name + ".png") ) if os.path.exists(lf_path): all_gt_language_feature = torch.clamp(all_gt_language_feature, min=-1, max=2) min_value = torch.min(all_gt_language_feature) max_value = torch.max(all_gt_language_feature) normalized_gt_language = (all_gt_language_feature - min_value) / (max_value - min_value) pca_gt_language = permuted_pca(normalized_gt_language) for idx, view in enumerate(self.scene.getTrainCameras()): torchvision.utils.save_image( pca_gt_language[idx], os.path.join(gts_language_path, view.image_name + ".png") ) else: breakpoint() all_language_feature = torch.clamp(all_language_feature, min=-1, max=2) pca_language_feature = permuted_pca(all_language_feature) for idx, view in 
enumerate(self.scene.getTrainCameras()): torchvision.utils.save_image( pca_language_feature[idx], os.path.join(render_language_path, view.image_name + ".png") ) all_instance_feature = torch.clamp(all_instance_feature, min=-1, max=2) pca_instance_feature = permuted_pca(all_instance_feature) for idx, view in enumerate(self.scene.getTrainCameras()): torchvision.utils.save_image( pca_instance_feature[idx], os.path.join(render_instance_path, view.image_name + ".png") ) if os.path.exists(lf_path): all_gt_language_feature = torch.clamp(all_gt_language_feature, min=-1, max=2) pca_gt_language = permuted_pca(all_gt_language_feature) for idx, view in enumerate(self.scene.getTrainCameras()): torchvision.utils.save_image( pca_gt_language[idx], os.path.join(gts_language_path, view.image_name + ".png") ) for idx, view in enumerate(self.scene.getTrainCameras()): np.save( os.path.join(render_language_npy_path, view.image_name + ".npy"), all_language_feature[idx].permute(1, 2, 0).cpu().numpy() ) np.save( os.path.join(render_instance_npy_path, view.image_name + ".npy"), all_instance_feature[idx].permute(1, 2, 0).cpu().numpy() ) if os.path.exists(lf_path): np.save( os.path.join(gts_language_npy_path, view.image_name + ".npy"), all_gt_language_feature[idx].permute(1, 2, 0).cpu().numpy() ) for idx, view in enumerate(tqdm(self.scene.getTrainCameras(), desc="TSDF Fusion progress")): ref_depth = depths_tsdf_fusion[idx].cuda() if view.mask is not None: ref_depth[view.mask.squeeze() < 0.5] = 0 ref_depth[ref_depth > max_depth] = 0 ref_depth = ref_depth.detach().cpu().numpy() pose = np.identity(4) pose[:3, :3] = view.R.transpose(-1, -2) pose[:3, 3] = view.T color_feature = o3d.io.read_image(os.path.join(render_language_path, view.image_name + ".png")) depth = o3d.geometry.Image((ref_depth * 1000).astype(np.uint16)) rgbd_feature = o3d.geometry.RGBDImage.create_from_color_and_depth( color_feature, depth, depth_scale=1000.0, depth_trunc=max_depth, convert_rgb_to_intensity=False ) 
volume_feature.integrate( rgbd_feature, o3d.camera.PinholeCameraIntrinsic(W, H, view.Fx, view.Fy, view.Cx, view.Cy), pose ) num_cluster = 3 mesh_feature = volume_feature.extract_triangle_mesh() o3d.io.write_triangle_mesh(os.path.join(path, "feature_tsdf_fusion.ply"), mesh_feature, write_triangle_uvs=True, write_vertex_colors=True, write_vertex_normals=True) mesh_feature = post_process_mesh(mesh_feature, num_cluster) o3d.io.write_triangle_mesh(os.path.join(path, "feature_tsdf_fusion_post.ply"), mesh_feature, write_triangle_uvs=True, write_vertex_colors=True, write_vertex_normals=True) def eval(self): cfg = self.cfg dataset = cfg.gaussian.dataset opt = cfg.gaussian.opt pipe = cfg.gaussian.pipe device = cfg.gaussian.dataset.data_device dataset.source_path = cfg.gaussian.eval.eval_data_path logging.info("Evaling " + dataset.model_path) safe_state(cfg.gaussian.quiet) # optimizing poses: self.gaussians = GaussianModel(cfg.gaussian.dataset.sh_degree) self.scene = Scene(cfg.gaussian.dataset, self.gaussians, load_iteration=cfg.pipeline.load_iteration, shuffle=False) self.gaussians.training_setup(opt, device) self.scene.loaded_iter = None bg_color = [1, 1, 1] if dataset.white_background else [0, 0, 0] background = torch.tensor(bg_color, dtype=torch.float32, device=device) render_path = os.path.join(dataset.model_path, "eval", "renders_rgb") render_depth_path = os.path.join(dataset.model_path, "eval", "renders_depth") render_depth_npy_path = os.path.join(dataset.model_path, "eval", "renders_depth_npy") render_normal_path = os.path.join(dataset.model_path, "eval", "renders_normal") render_lang_path = os.path.join(dataset.model_path, "eval", "renders_lang") render_instance_path = os.path.join(dataset.model_path, "eval", "renders_instance") render_lang_npy_path = os.path.join(dataset.model_path, "eval", "renders_lang_npy") render_instance_npy_path = os.path.join(dataset.model_path, "eval", "renders_instance_npy") os.makedirs(render_path, exist_ok=True) 
os.makedirs(render_depth_path, exist_ok=True) os.makedirs(render_depth_npy_path, exist_ok=True) os.makedirs(render_normal_path, exist_ok=True) os.makedirs(render_lang_path, exist_ok=True) os.makedirs(render_instance_path, exist_ok=True) os.makedirs(render_lang_npy_path, exist_ok=True) os.makedirs(render_instance_npy_path, exist_ok=True) self.gaussians.change_reqiures_grad("pose_only", iteration=0, quiet=False) for cam_idx, cam in enumerate(self.scene.getTrainCameras().copy()): # optim pose iter: first_iter = 1 ema_loss_for_log = 0.0 include_feature = True progress_bar = tqdm(range(first_iter, cfg.gaussian.eval.pose_optim_iter + 1)) logging.info(f"Optimizing camera {cam_idx}") iter_start = torch.cuda.Event(enable_timing=True) iter_end = torch.cuda.Event(enable_timing=True) for iteration in progress_bar: iter_start.record() self.gaussians.update_learning_rate(iteration) pose = self.gaussians.get_RT(self.gaussians.index_mapping[cam.uid]) bg = torch.rand((3), device="cuda") if opt.random_background else background render_pkg = render(cam, self.gaussians, pipe, bg, app_model=None, return_plane=False, return_depth_normal=False, include_feature=include_feature, camera_pose=pose) image = render_pkg["render"] gt_image, _ = cam.get_image() ssim_loss = (1.0 - ssim(image, gt_image)) Ll1 = l1_loss(image, gt_image) image_loss = (1.0 - opt.lambda_dssim) * Ll1 + opt.lambda_dssim * ssim_loss image_loss.backward() iter_end.record() with torch.no_grad(): ema_loss_for_log = 0.4 * image_loss + 0.6 * ema_loss_for_log if iteration % 10 == 0: loss_dict = { "Loss": f"{ema_loss_for_log:.5f}" } progress_bar.set_postfix(loss_dict) progress_bar.update(10) if iteration < cfg.gaussian.eval.pose_optim_iter: self.gaussians.cam_optimizer.step() self.gaussians.cam_optimizer.zero_grad(set_to_none=True) if iteration == cfg.gaussian.eval.pose_optim_iter: # saving results: progress_bar.close() logging.info("Saving results...") language_feature, instance_feature = render_pkg["language_feature"], 
render_pkg["instance_feature"] image_tosave = torch.cat([image, gt_image], dim=2).clamp(0, 1) torchvision.utils.save_image(image_tosave, os.path.join(render_path, cam.image_name + ".png")) min_value = torch.min(language_feature) max_value = torch.max(language_feature) normalized_language_feature = (language_feature - min_value) / (max_value - min_value) torchvision.utils.save_image(permuted_pca(normalized_language_feature), os.path.join(render_lang_path, cam.image_name + ".png")) np.save(os.path.join(render_lang_npy_path, cam.image_name + ".npy"), language_feature.permute(1, 2, 0).cpu().numpy()) min_value = torch.min(instance_feature) max_value = torch.max(instance_feature) normalized_instance_feature = (instance_feature - min_value) / (max_value - min_value) torchvision.utils.save_image(permuted_pca(normalized_instance_feature), os.path.join(render_instance_path, cam.image_name + ".png")) np.save(os.path.join(render_instance_npy_path, cam.image_name + ".npy"), instance_feature.permute(1, 2, 0).cpu().numpy()) torch.cuda.empty_cache() ================================================ FILE: field_construction/gaussian_renderer/__init__.py ================================================ # # Copyright (C) 2023, Inria # GRAPHDECO research group, https://team.inria.fr/graphdeco # All rights reserved. # # This software is free for non-commercial, research and evaluation use # under the terms of the LICENSE.md file. 
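For context on the evaluation loop that ends above: `eval()` freezes the Gaussians (`pose_only` gradients) and refines each camera pose by descending a photometric loss, `(1 - lambda_dssim) * L1 + lambda_dssim * (1 - SSIM)`, while smoothing the logged loss with a 0.4/0.6 exponential moving average. A toy PyTorch sketch of that pattern — the "renderer" here is a stand-in identity map and all names are illustrative, not the repo's API:

```python
import torch

target = torch.tensor([1.0, -2.0, 0.5])    # stand-in for the ground-truth image
pose = torch.zeros(3, requires_grad=True)  # stand-in for the learnable camera pose
optimizer = torch.optim.Adam([pose], lr=0.05)
lambda_dssim = 0.2
ema_loss = 0.0

for it in range(1, 401):
    optimizer.zero_grad(set_to_none=True)
    rendered = pose                         # a real renderer would rasterize here
    l1 = (rendered - target).abs().mean()
    ssim_term = l1                          # stand-in for (1 - SSIM)
    loss = (1.0 - lambda_dssim) * l1 + lambda_dssim * ssim_term
    loss.backward()
    optimizer.step()
    # Same 0.4/0.6 smoothing the repo uses for the progress-bar loss.
    ema_loss = 0.4 * loss.item() + 0.6 * ema_loss
```

In the real loop, `render()` rasterizes the frozen Gaussians under the current pose and only the camera optimizer (`cam_optimizer`) steps; the Gaussian parameters themselves stay fixed.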
# # For inquiries contact george.drettakis@inria.fr # import math import torch from diff_LangSurf_rasterization import \ GaussianRasterizationSettings as PlaneGaussianRasterizationSettings from diff_LangSurf_rasterization import \ GaussianRasterizer as PlaneGaussianRasterizer from field_construction.scene.app_model import AppModel from field_construction.scene.gaussian_model import GaussianModel from field_construction.utils.graphics_utils import normal_from_depth_image from field_construction.utils.pose_utils import (get_camera_from_tensor, quadmultiply) from field_construction.utils.sh_utils import eval_sh def render_normal(viewpoint_cam, depth, offset=None, normal=None, scale=1): # depth: (H, W), bg_color: (3), alpha: (H, W) # normal_ref: (3, H, W) intrinsic_matrix, extrinsic_matrix = viewpoint_cam.get_calib_matrix_nerf(scale=scale) st = max(int(scale/2)-1,0) if offset is not None: offset = offset[st::scale,st::scale] normal_ref = normal_from_depth_image(depth[st::scale,st::scale], intrinsic_matrix.to(depth.device), extrinsic_matrix.to(depth.device), offset) normal_ref = normal_ref.permute(2,0,1) return normal_ref def render( viewpoint_camera, pc : GaussianModel, pipe, bg_color : torch.Tensor, scaling_modifier=1.0, override_color=None, app_model: AppModel=None, return_plane=True, return_depth_normal=True, include_feature=True, camera_pose=None ): """ Render the scene. Background tensor (bg_color) must be on GPU! """ # Create zero tensor. 
We will use it to make pytorch return gradients of the 2D (screen-space) means screenspace_points = torch.zeros_like(pc.get_xyz, dtype=pc.get_xyz.dtype, requires_grad=True, device="cuda") + 0 screenspace_points_abs = torch.zeros_like(pc.get_xyz, dtype=pc.get_xyz.dtype, requires_grad=True, device="cuda") + 0 try: screenspace_points.retain_grad() screenspace_points_abs.retain_grad() except: pass # Set up rasterization configuration tanfovx = math.tan(viewpoint_camera.FoVx * 0.5) tanfovy = math.tan(viewpoint_camera.FoVy * 0.5) w2c = torch.eye(4).cuda() projmatrix = ( w2c.unsqueeze(0).bmm(viewpoint_camera.projection_matrix.unsqueeze(0)) ).squeeze(0) camera_pos = w2c.inverse()[3, :3] if camera_pose is not None: rel_w2c = get_camera_from_tensor(camera_pose) gaussians_xyz = pc._xyz.clone() gaussians_rot = pc._rotation.clone() xyz_ones = torch.ones(gaussians_xyz.shape[0], 1).cuda().float() xyz_homo = torch.cat((gaussians_xyz, xyz_ones), dim=1) gaussians_xyz_trans = (rel_w2c @ xyz_homo.T).T[:, :3] gaussians_rot_trans = quadmultiply(camera_pose[:4], gaussians_rot) means3D = gaussians_xyz_trans else: means3D = pc.get_xyz means2D = screenspace_points means2D_abs = screenspace_points_abs opacity = pc.get_opacity # If precomputed 3d covariance is provided, use it. If not, then it will be computed from # scaling / rotation by the rasterizer. scales = None rotations = None cov3D_precomp = None if pipe.compute_cov3D_python: cov3D_precomp = pc.get_covariance(scaling_modifier) else: scales = pc.get_scaling rotations = gaussians_rot_trans if camera_pose is not None else pc.get_rotation # rotations = pc.get_rotation # If precomputed colors are provided, use them. Otherwise, if it is desired to precompute colors # from SHs in Python, do it. If not, then SH -> RGB conversion will be done by rasterizer. 
shs = None colors_precomp = None if override_color is None: if pipe.convert_SHs_python: shs_view = pc.get_features.transpose(1, 2).view(-1, 3, (pc.max_sh_degree+1)**2) dir_pp = (pc.get_xyz - viewpoint_camera.camera_center.repeat(pc.get_features.shape[0], 1)) dir_pp_normalized = dir_pp/dir_pp.norm(dim=1, keepdim=True) sh2rgb = eval_sh(pc.active_sh_degree, shs_view, dir_pp_normalized) colors_precomp = torch.clamp_min(sh2rgb + 0.5, 0.0) else: shs = pc.get_features else: colors_precomp = override_color if include_feature: language_feature_precomp = pc.get_language_feature instance_feature_precomp = pc.get_instance_feature # language_feature_precomp = language_feature_precomp / (language_feature_precomp.norm(dim=-1, keepdim=True) + 1e-9) # instance_feature_precomp = instance_feature_precomp / (instance_feature_precomp.norm(dim=-1, keepdim=True) + 1e-9) # language_feature_precomp = torch.sigmoid(language_feature_precomp) else: language_feature_precomp = torch.zeros((1,), dtype=opacity.dtype, device=opacity.device) instance_feature_precomp = torch.zeros((1,), dtype=opacity.dtype, device=opacity.device) return_dict = None raster_settings = PlaneGaussianRasterizationSettings( image_height=int(viewpoint_camera.image_height), image_width=int(viewpoint_camera.image_width), tanfovx=tanfovx, tanfovy=tanfovy, bg=bg_color, scale_modifier=scaling_modifier, # viewmatrix=viewpoint_camera.world_view_transform, # projmatrix=viewpoint_camera.full_proj_transform, viewmatrix=w2c if camera_pose is not None else viewpoint_camera.world_view_transform, projmatrix=projmatrix if camera_pose is not None else viewpoint_camera.full_proj_transform, sh_degree=pc.active_sh_degree, # campos=viewpoint_camera.camera_center, campos=camera_pos if camera_pose is not None else viewpoint_camera.camera_center, prefiltered=False, render_geo=return_plane, debug=pipe.debug, include_feature=include_feature, ) rasterizer = PlaneGaussianRasterizer(raster_settings=raster_settings) if not return_plane: 
rendered_image, language_feature, instance_feature, radii, out_observe, _, _ = rasterizer( means3D = means3D, means2D = means2D, means2D_abs = means2D_abs, shs = shs, colors_precomp = colors_precomp, language_feature_precomp = language_feature_precomp, language_feature_instance_precomp = instance_feature_precomp, opacities = opacity, scales = scales, rotations = rotations, cov3D_precomp = cov3D_precomp) return_dict = {"render": rendered_image, "viewspace_points": screenspace_points, "viewspace_points_abs": screenspace_points_abs, "visibility_filter" : radii > 0, "radii": radii, "out_observe": out_observe, "language_feature": language_feature, "instance_feature": instance_feature, } if app_model is not None and pc.use_app: appear_ab = app_model.appear_ab[torch.tensor(viewpoint_camera.uid).cuda()] app_image = torch.exp(appear_ab[0]) * rendered_image + appear_ab[1] return_dict.update({"app_image": app_image}) return return_dict global_normal = pc.get_normal(viewpoint_camera) local_normal = global_normal @ viewpoint_camera.world_view_transform[:3,:3] pts_in_cam = means3D @ viewpoint_camera.world_view_transform[:3,:3] + viewpoint_camera.world_view_transform[3,:3] depth_z = pts_in_cam[:, 2] local_distance = (local_normal * pts_in_cam).sum(-1).abs() input_all_map = torch.zeros((means3D.shape[0], 5)).cuda().float() input_all_map[:, :3] = local_normal input_all_map[:, 3] = 1.0 input_all_map[:, 4] = local_distance rendered_image, language_feature, instance_feature, radii, out_observe, out_all_map, plane_depth = rasterizer( means3D = means3D, means2D = means2D, means2D_abs = means2D_abs, shs = shs, colors_precomp = colors_precomp, language_feature_precomp = language_feature_precomp, language_feature_instance_precomp = instance_feature_precomp, opacities = opacity, scales = scales, rotations = rotations, all_map = input_all_map, cov3D_precomp = cov3D_precomp) rendered_normal = out_all_map[0:3] rendered_alpha = out_all_map[3:4, ] rendered_distance = out_all_map[4:5, ] 
return_dict = {"render": rendered_image, "viewspace_points": screenspace_points, "viewspace_points_abs": screenspace_points_abs, "visibility_filter" : radii > 0, "radii": radii, "out_observe": out_observe, "rendered_normal": rendered_normal, "plane_depth": plane_depth, "rendered_distance": rendered_distance, "language_feature": language_feature, "instance_feature": instance_feature, } if app_model is not None: appear_ab = app_model.appear_ab[torch.tensor(viewpoint_camera.uid).cuda()] app_image = torch.exp(appear_ab[0]) * rendered_image + appear_ab[1] return_dict.update({"app_image": app_image}) if return_depth_normal: depth_normal = render_normal(viewpoint_camera, plane_depth.squeeze()) * (rendered_alpha).detach() return_dict.update({"depth_normal": depth_normal}) # Those Gaussians that were frustum culled or had a radius of 0 were not visible. # They will be excluded from value updates used in the splitting criteria. return return_dict ================================================ FILE: field_construction/gaussian_renderer/network_gui.py ================================================ # # Copyright (C) 2023, Inria # GRAPHDECO research group, https://team.inria.fr/graphdeco # All rights reserved. # # This software is free for non-commercial, research and evaluation use # under the terms of the LICENSE.md file. 
# # For inquiries contact george.drettakis@inria.fr # import json import socket import traceback import torch from field_construction.scene.cameras import MiniCam host = "127.0.0.1" port = 6009 conn = None addr = None listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM) def init(wish_host, wish_port): global host, port, listener host = wish_host port = wish_port listener.bind((host, port)) listener.listen() listener.settimeout(0) def try_connect(): global conn, addr, listener try: conn, addr = listener.accept() print(f"\nConnected by {addr}") conn.settimeout(None) except Exception as inst: pass def read(): global conn messageLength = conn.recv(4) messageLength = int.from_bytes(messageLength, 'little') message = conn.recv(messageLength) return json.loads(message.decode("utf-8")) def send(message_bytes, verify): global conn if message_bytes != None: conn.sendall(message_bytes) conn.sendall(len(verify).to_bytes(4, 'little')) conn.sendall(bytes(verify, 'ascii')) def receive(): message = read() width = message["resolution_x"] height = message["resolution_y"] if width != 0 and height != 0: try: do_training = bool(message["train"]) fovy = message["fov_y"] fovx = message["fov_x"] znear = message["z_near"] zfar = message["z_far"] do_shs_python = bool(message["shs_python"]) do_rot_scale_python = bool(message["rot_scale_python"]) keep_alive = bool(message["keep_alive"]) scaling_modifier = message["scaling_modifier"] world_view_transform = torch.reshape(torch.tensor(message["view_matrix"]), (4, 4)).cuda() world_view_transform[:,1] = -world_view_transform[:,1] world_view_transform[:,2] = -world_view_transform[:,2] full_proj_transform = torch.reshape(torch.tensor(message["view_projection_matrix"]), (4, 4)).cuda() full_proj_transform[:,1] = -full_proj_transform[:,1] custom_cam = MiniCam(width, height, fovy, fovx, znear, zfar, world_view_transform, full_proj_transform) except Exception as e: print("") traceback.print_exc() raise e return custom_cam, do_training, 
do_shs_python, do_rot_scale_python, keep_alive, scaling_modifier else: return None, None, None, None, None, None ================================================ FILE: field_construction/lpipsPyTorch/__init__.py ================================================ import torch from .modules.lpips import LPIPS def lpips(x: torch.Tensor, y: torch.Tensor, net_type: str = 'alex', version: str = '0.1'): r"""Function that measures Learned Perceptual Image Patch Similarity (LPIPS). Arguments: x, y (torch.Tensor): the input tensors to compare. net_type (str): the network type to compare the features: 'alex' | 'squeeze' | 'vgg'. Default: 'alex'. version (str): the version of LPIPS. Default: 0.1. """ device = x.device criterion = LPIPS(net_type, version).to(device) return criterion(x, y) ================================================ FILE: field_construction/lpipsPyTorch/modules/lpips.py ================================================ import torch import torch.nn as nn from .networks import get_network, LinLayers from .utils import get_state_dict class LPIPS(nn.Module): r"""Creates a criterion that measures Learned Perceptual Image Patch Similarity (LPIPS). Arguments: net_type (str): the network type to compare the features: 'alex' | 'squeeze' | 'vgg'. Default: 'alex'. version (str): the version of LPIPS. Default: 0.1. 
""" def __init__(self, net_type: str = 'alex', version: str = '0.1'): assert version in ['0.1'], 'v0.1 is only supported now' super(LPIPS, self).__init__() # pretrained network self.net = get_network(net_type) # linear layers self.lin = LinLayers(self.net.n_channels_list) self.lin.load_state_dict(get_state_dict(net_type, version)) def forward(self, x: torch.Tensor, y: torch.Tensor): feat_x, feat_y = self.net(x), self.net(y) diff = [(fx - fy) ** 2 for fx, fy in zip(feat_x, feat_y)] res = [l(d).mean((2, 3), True) for d, l in zip(diff, self.lin)] return torch.sum(torch.cat(res, 0), 0, True) ================================================ FILE: field_construction/lpipsPyTorch/modules/networks.py ================================================ from typing import Sequence from itertools import chain import torch import torch.nn as nn from torchvision import models from .utils import normalize_activation def get_network(net_type: str): if net_type == 'alex': return AlexNet() elif net_type == 'squeeze': return SqueezeNet() elif net_type == 'vgg': return VGG16() else: raise NotImplementedError('choose net_type from [alex, squeeze, vgg].') class LinLayers(nn.ModuleList): def __init__(self, n_channels_list: Sequence[int]): super(LinLayers, self).__init__([ nn.Sequential( nn.Identity(), nn.Conv2d(nc, 1, 1, 1, 0, bias=False) ) for nc in n_channels_list ]) for param in self.parameters(): param.requires_grad = False class BaseNet(nn.Module): def __init__(self): super(BaseNet, self).__init__() # register buffer self.register_buffer( 'mean', torch.Tensor([-.030, -.088, -.188])[None, :, None, None]) self.register_buffer( 'std', torch.Tensor([.458, .448, .450])[None, :, None, None]) def set_requires_grad(self, state: bool): for param in chain(self.parameters(), self.buffers()): param.requires_grad = state def z_score(self, x: torch.Tensor): return (x - self.mean) / self.std def forward(self, x: torch.Tensor): x = self.z_score(x) output = [] for i, (_, layer) in 
enumerate(self.layers._modules.items(), 1): x = layer(x) if i in self.target_layers: output.append(normalize_activation(x)) if len(output) == len(self.target_layers): break return output class SqueezeNet(BaseNet): def __init__(self): super(SqueezeNet, self).__init__() self.layers = models.squeezenet1_1(True).features self.target_layers = [2, 5, 8, 10, 11, 12, 13] self.n_channels_list = [64, 128, 256, 384, 384, 512, 512] self.set_requires_grad(False) class AlexNet(BaseNet): def __init__(self): super(AlexNet, self).__init__() self.layers = models.alexnet(True).features self.target_layers = [2, 5, 8, 10, 12] self.n_channels_list = [64, 192, 384, 256, 256] self.set_requires_grad(False) class VGG16(BaseNet): def __init__(self): super(VGG16, self).__init__() self.layers = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features self.target_layers = [4, 9, 16, 23, 30] self.n_channels_list = [64, 128, 256, 512, 512] self.set_requires_grad(False) ================================================ FILE: field_construction/lpipsPyTorch/modules/utils.py ================================================ from collections import OrderedDict import torch def normalize_activation(x, eps=1e-10): norm_factor = torch.sqrt(torch.sum(x ** 2, dim=1, keepdim=True)) return x / (norm_factor + eps) def get_state_dict(net_type: str = 'alex', version: str = '0.1'): # build url url = 'https://raw.githubusercontent.com/richzhang/PerceptualSimilarity/' \ + f'master/lpips/weights/v{version}/{net_type}.pth' # download old_state_dict = torch.hub.load_state_dict_from_url( url, progress=True, map_location=None if torch.cuda.is_available() else torch.device('cpu') ) # rename keys new_state_dict = OrderedDict() for key, val in old_state_dict.items(): new_key = key new_key = new_key.replace('lin', '') new_key = new_key.replace('model.', '') new_state_dict[new_key] = val return new_state_dict ================================================ FILE: field_construction/pipeline.py 
================================================ from random import randint from .gaussian_field import GaussianField from .preprocessor import Preprocessor class FieldConstructionPipeline: def __init__(self, cfg): self.cfg = cfg if cfg.pipeline.mode == "train": self.preprocessor = Preprocessor(cfg) else: self.preprocessor = None def construct_field(self): self.preprocessor.preprocess() del self.preprocessor self.preprocessor = None self.gaussian_field = GaussianField(self.cfg) self.gaussian_field.train() def render_result(self): self.gaussian_field = GaussianField(self.cfg) self.gaussian_field.render() def eval(self): self.gaussian_field = GaussianField(self.cfg) self.gaussian_field.eval() ================================================ FILE: field_construction/pose_estimator/__init__.py ================================================ import logging import os import shutil import time from abc import ABC, abstractmethod from pathlib import Path import numpy as np import open3d as o3d import torch from utils.sfm_utils import (compute_co_vis_masks, get_sorted_image_files, load_images, save_extrinsic, save_intrinsics, save_points3D) from .utils import prepare_input, prepare_output, storePly class BaseEstimator(ABC): @abstractmethod def get_poses(): pass class ColmapEstimator(BaseEstimator): def __init__(self, cfg): self.cfg = cfg def get_poses(self, camera_model="OPENCV", use_gpu=True): save_path = self.cfg.pipeline.data_path database_path = os.path.join(save_path, "distorted", "database.db") raw_img_path = os.path.join(save_path, "input") sparse_path = os.path.join(save_path, "distorted", "sparse") os.makedirs(os.path.join(save_path, "distorted"), exist_ok=True) os.makedirs(sparse_path, exist_ok=True) feat_extraction_cmd = [ "colmap", "feature_extractor", "--database_path", database_path, "--image_path", raw_img_path, "--ImageReader.single_camera", "1", "--ImageReader.camera_model", camera_model, "--SiftExtraction.use_gpu", str(int(use_gpu)) ] feat_extraction_cmd 
= " ".join(feat_extraction_cmd) exit_code = os.system(feat_extraction_cmd) if exit_code != 0: logging.error(f"Feature extraction failed with code {exit_code}. Exiting.") exit(exit_code) feat_matching_cmd = [ "colmap", "exhaustive_matcher", "--database_path", database_path, "--SiftMatching.use_gpu", str(int(use_gpu)) ] feat_matching_cmd = " ".join(feat_matching_cmd) exit_code = os.system(feat_matching_cmd) if exit_code != 0: logging.error(f"Feature matching failed with code {exit_code}. Exiting.") exit(exit_code) mapper_cmd = [ "colmap", "mapper", "--database_path", database_path, "--image_path", raw_img_path, "--output_path", sparse_path, "--Mapper.ba_global_function_tolerance=0.000001" ] mapper_cmd = " ".join(mapper_cmd) exit_code = os.system(mapper_cmd) if exit_code != 0: logging.error(f"Mapper failed with code {exit_code}. Exiting.") exit(exit_code) img_undist_cmd = [ "colmap", "image_undistorter", "--image_path", raw_img_path, "--input_path", os.path.join(sparse_path, "0"), "--output_path", save_path, "--output_type", "COLMAP" ] img_undist_cmd = " ".join(img_undist_cmd) exit_code = os.system(img_undist_cmd) if exit_code != 0: logging.error(f"Image undistortion failed with code {exit_code}. 
Exiting.") exit(exit_code) # move data: curr_path = os.path.join(save_path, "sparse") dest_path = os.path.join(curr_path, "0") os.makedirs(dest_path, exist_ok=True) files = list(filter(lambda x: x != "0", os.listdir(curr_path))) for file in files: src_file = os.path.join(curr_path, file) dest_file = os.path.join(dest_path, file) shutil.move(src_file, dest_file) class MASt3REstimator(BaseEstimator): def __init__(self, cfg): from mast3r.model import AsymmetricMASt3R self.cfg = cfg self.device = cfg.pose_estimator.device self.model = AsymmetricMASt3R.from_pretrained(cfg.pose_estimator.model_path).to(self.device) def get_poses(self): from dust3r.cloud_opt import GlobalAlignerMode, global_aligner from dust3r.image_pairs import make_pairs from dust3r.inference import inference from dust3r.utils.device import to_numpy from dust3r.utils.geometry import inv save_path = self.cfg.pipeline.data_path co_vis_dsp = self.cfg.pose_estimator.co_vis_dsp sparse_path = os.path.join(save_path, "sparse", "0") os.makedirs(sparse_path, exist_ok=True) image_dir = Path(save_path) / "input" image_files, image_suffix = get_sorted_image_files(image_dir) n_views = len(image_files) images, org_imgs_shape = load_images(image_files, size=512) logging.info(">> Making pairs...") pairs = make_pairs(images) logging.info(">> Inference...") output = inference(pairs, self.model, self.device, batch_size=1, verbose=True) logging.info(f'>> Global alignment...') scene = global_aligner(output, device=self.device, mode=GlobalAlignerMode.PointCloudOptimizer) extrinsics_w2c = inv(to_numpy(scene.get_im_poses())) intrinsics = to_numpy(scene.get_intrinsics()) focals = to_numpy(scene.get_focals()) imgs = np.array(scene.imgs) pts3d = to_numpy(scene.get_pts3d()) pts3d = np.array(pts3d) depthmaps = to_numpy(scene.im_depthmaps.detach().cpu().numpy()) values = [param.detach().cpu().numpy() for param in scene.im_conf] confs = np.array(values) logging.info(f'>> Confidence-aware Ranking...') avg_conf_scores = 
confs.mean(axis=(1, 2)) sorted_conf_indices = np.argsort(avg_conf_scores)[::-1] sorted_conf_avg_conf_scores = avg_conf_scores[sorted_conf_indices] logging.info("Sorted indices: %s", sorted_conf_indices) logging.info("Sorted average confidence scores: %s", sorted_conf_avg_conf_scores) logging.info(f'>> Calculating the co-visibility mask...') depth_thre = self.cfg.pose_estimator.depth_thre if depth_thre > 0: overlapping_masks = compute_co_vis_masks(sorted_conf_indices, depthmaps, pts3d, intrinsics, extrinsics_w2c, imgs.shape, depth_threshold=depth_thre) overlapping_masks = ~overlapping_masks else: co_vis_dsp = False overlapping_masks = None focals = np.repeat(focals[0], n_views) logging.info(f'>> Saving results...') save_extrinsic(Path(sparse_path), extrinsics_w2c, image_files, image_suffix) save_intrinsics(Path(sparse_path), focals, org_imgs_shape, imgs.shape, save_focals=True) pts_num = save_points3D(Path(sparse_path), imgs, pts3d, confs.reshape(pts3d.shape[0], -1), overlapping_masks, use_masks=co_vis_dsp, save_all_pts=True, save_txt_path=save_path, depth_threshold=depth_thre) # save_images_and_masks(Path(sparse_path), n_views, imgs, overlapping_masks, image_files, image_suffix) logging.info(f'MASt3R reconstruction successfully converted to COLMAP files in: {sparse_path}') logging.info(f'Number of points: {pts3d.reshape(-1, 3).shape[0]}') logging.info(f'Number of points after downsampling: {pts_num}') class CUT3REstimator(BaseEstimator): def __init__(self, cfg): self.cfg = cfg self.device = cfg.pose_estimator.device def get_poses(self): cfg = self.cfg if self.device == "cuda" and not torch.cuda.is_available(): print("cuda not available. 
switching to cpu.") self.device = "cpu" from cut3r.dust3r.inference import inference from cut3r.dust3r.model import ARCroco3DStereo save_path = self.cfg.pipeline.data_path img_folder_path = os.path.join(save_path, "input") img_paths = [os.path.join(img_folder_path, img_name) for img_name in os.listdir(img_folder_path)] img_mask = [True] * len(img_paths) views, orig_shape = prepare_input( img_paths=img_paths, img_mask=img_mask, size=512, revisit=1, update=True, ) model = ARCroco3DStereo.from_pretrained(cfg.pose_estimator.model_path).to(self.device) model.eval() logging.info("Running inference...") start_time = time.time() outputs, state_args = inference(views, model, self.device) total_time = time.time() - start_time per_frame_time = total_time / len(views) print( f"Inference completed in {total_time:.2f} seconds (average {per_frame_time:.2f} s per frame)." ) pts3ds_other, colors, conf, cam_dict = prepare_output( outputs, orig_shape, save_path, 1, True ) conf = torch.cat(conf, dim=0) if self.cfg.pipeline.selection: conf_score = conf.mean(dim=(1, 2)) chunk_num = self.cfg.pipeline.chunk_num keep_num_per_chunk = self.cfg.pipeline.keep_num_per_chunk conf_scores_tuple = conf_score.chunk(chunk_num) selected_idxs = [] total_conf_len = 0 for conf_scores_chunk in conf_scores_tuple: _, idxs = conf_scores_chunk.sort(descending=True) idxs = idxs[:keep_num_per_chunk] selected_idxs += [(idx + total_conf_len).item() for idx in idxs] total_conf_len += len(conf_scores_chunk) self.cfg.pipeline.selected_idxs = sorted(selected_idxs) pts3ds_to_save = [pts3ds_other[idx].cpu().numpy() for idx in self.cfg.pipeline.selected_idxs] colors_to_save = [colors[idx].cpu().numpy() for idx in self.cfg.pipeline.selected_idxs] all_pts3ds = np.stack(pts3ds_to_save).reshape(-1, 3) all_colors = np.stack(colors_to_save).reshape(-1, 3) storePly(os.path.join(save_path, "points3D.ply"), all_pts3ds, all_colors) class VGGTEstimator(BaseEstimator): def __init__(self, cfg): self.cfg = cfg self.device = 
cfg.pose_estimator.device def get_poses(self): from vggt.models.vggt import VGGT from vggt.utils.geometry import unproject_depth_map_to_point_map from vggt.utils.load_fn import load_and_preprocess_images from vggt.utils.pose_enc import pose_encoding_to_extri_intri cfg = self.cfg if self.device == "cuda" and not torch.cuda.is_available(): print("cuda not available. switching to cpu.") self.device = "cpu" dtype = torch.bfloat16 if torch.cuda.get_device_capability()[0] >= 8 else torch.float16 logging.info("Loading vggt...") model = VGGT.from_pretrained("facebook/VGGT-1B").to(self.device) save_path = self.cfg.pipeline.data_path img_folder_path = os.path.join(save_path, "input") img_paths = [os.path.join(img_folder_path, img_name) for img_name in os.listdir(img_folder_path)] images = load_and_preprocess_images(img_paths).to(self.device) with torch.no_grad(), torch.amp.autocast("cuda", dtype=dtype): images = images[None] aggregated_tokens_list, ps_idx = model.aggregator(images) pose_enc = model.camera_head(aggregated_tokens_list)[-1] extrinsic, intrinsic = pose_encoding_to_extri_intri(pose_enc, images.shape[-2:]) depth_map, depth_conf = model.depth_head(aggregated_tokens_list, images, ps_idx) point_map = unproject_depth_map_to_point_map( depth_map.squeeze(0), extrinsic.squeeze(0), intrinsic.squeeze(0) ) extrinsic, intrinsic = extrinsic.squeeze(), intrinsic.squeeze() extrinsics_w2c = torch.eye(4)[None].repeat(len(extrinsic), 1, 1) extrinsics_w2c[:, :3, :4] = extrinsic.cpu() extrinsics_w2c = extrinsics_w2c.cpu().numpy() intrinsics = intrinsic.cpu().numpy() scaled_y, scaled_x = images.shape[-2:] intrinsics[:, 0, 0] *= 720 / scaled_x intrinsics[:, 1, 1] *= 480 / scaled_y intrinsics[:, 0, 2] *= 720 / scaled_x intrinsics[:, 1, 2] *= 480 / scaled_y images = torch.stack([images[:, 0], images[:, -1]], dim=1) point_map = np.stack([point_map[0], point_map[-1]], axis=0) colors = images.permute(0, 1, 3, 4, 2).detach().cpu().numpy() colors = colors.reshape(-1, 3) point_map = 
point_map.reshape(-1, 3).astype(np.float32) pcd = o3d.geometry.PointCloud() pcd.points = o3d.utility.Vector3dVector(point_map) pcd.colors = o3d.utility.Vector3dVector(colors) o3d.io.write_point_cloud(os.path.join(save_path, "points3D.ply"), pcd) camera_dir = os.path.join(save_path, "camera") os.makedirs(camera_dir, exist_ok=True) for i, (w2c, intrinsic) in enumerate(zip(extrinsics_w2c, intrinsics)): c2w = np.eye(4) c2w[:3, :3] = w2c[:3, :3].T c2w[:3, 3] = - w2c[:3, :3].T @ w2c[:3, 3] np.savez( os.path.join(camera_dir, f"{i+1:04d}.npz"), pose=c2w, intrinsics=intrinsic ) def get_pose_estimator(cfg): POSE_ESTIMATOR = { "colmap": ColmapEstimator, "mast3r": MASt3REstimator, "cut3r": CUT3REstimator, "vggt": VGGTEstimator, } return POSE_ESTIMATOR[cfg.pose_estimator.type](cfg) ================================================ FILE: field_construction/pose_estimator/utils.py ================================================ import os from copy import deepcopy import numpy as np import open3d as o3d import torch def storePly(path, xyz, rgb): pcd = o3d.geometry.PointCloud() pcd.points = o3d.utility.Vector3dVector(xyz) pcd.colors = o3d.utility.Vector3dVector(rgb) o3d.io.write_point_cloud(path, pcd) def prepare_input( img_paths, img_mask, size, raymaps=None, raymap_mask=None, revisit=1, update=True ): """ Prepare input views for inference from a list of image paths. Args: img_paths (list): List of image file paths. img_mask (list of bool): Flags indicating valid images. size (int): Target image size. raymaps (list, optional): List of ray maps. raymap_mask (list, optional): Flags indicating valid ray maps. revisit (int): How many times to revisit each view. update (bool): Whether to update the state on revisits. Returns: tuple: (views, orig_shape) - the list of view dictionaries and the original image shape. """ # Import image loader (delayed import needed after adding ckpt path). 
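`get_pose_estimator()` above is a small factory: a dict maps the configured backend string (`colmap`, `mast3r`, `cut3r`, `vggt`) to an estimator class, so adding a backend is one registry entry. A self-contained sketch of the same dispatch, with toy classes and a plain dict standing in for the repo's config object:

```python
class ColmapEstimator:  # toy stand-ins for the real estimator classes
    def __init__(self, cfg):
        self.cfg = cfg

class VGGTEstimator:
    def __init__(self, cfg):
        self.cfg = cfg

# Registry: configured backend name -> estimator class.
POSE_ESTIMATOR = {
    "colmap": ColmapEstimator,
    "vggt": VGGTEstimator,
}

def get_pose_estimator(cfg):
    # Look up the configured backend; fail loudly on an unknown type
    # instead of surfacing a bare KeyError to the caller.
    try:
        cls = POSE_ESTIMATOR[cfg["type"]]
    except KeyError:
        raise ValueError(f"unknown pose estimator type: {cfg['type']!r}")
    return cls(cfg)
```

The explicit `ValueError` is our addition for the sketch; the repo version indexes the dict directly, which raises `KeyError` on a misconfigured `pose_estimator.type`.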
from cut3r.dust3r.utils.image import load_images images, orig_shape = load_images(img_paths, size=size) views = [] if raymaps is None and raymap_mask is None: # Only images are provided. for i in range(len(images)): view = { "img": images[i]["img"], "ray_map": torch.full( ( images[i]["img"].shape[0], 6, images[i]["img"].shape[-2], images[i]["img"].shape[-1], ), torch.nan, ), "true_shape": torch.from_numpy(images[i]["true_shape"]), "idx": i, "instance": str(i), "camera_pose": torch.from_numpy(np.eye(4, dtype=np.float32)).unsqueeze( 0 ), "img_mask": torch.tensor(True).unsqueeze(0), "ray_mask": torch.tensor(False).unsqueeze(0), "update": torch.tensor(True).unsqueeze(0), "reset": torch.tensor(False).unsqueeze(0), } views.append(view) else: # Combine images and raymaps. num_views = len(images) + len(raymaps) assert len(img_mask) == len(raymap_mask) == num_views assert sum(img_mask) == len(images) and sum(raymap_mask) == len(raymaps) j = 0 k = 0 for i in range(num_views): view = { "img": ( images[j]["img"] if img_mask[i] else torch.full_like(images[0]["img"], torch.nan) ), "ray_map": ( raymaps[k] if raymap_mask[i] else torch.full_like(raymaps[0], torch.nan) ), "true_shape": ( torch.from_numpy(images[j]["true_shape"]) if img_mask[i] else torch.from_numpy(np.int32([raymaps[k].shape[1:-1][::-1]])) ), "idx": i, "instance": str(i), "camera_pose": torch.from_numpy(np.eye(4, dtype=np.float32)).unsqueeze( 0 ), "img_mask": torch.tensor(img_mask[i]).unsqueeze(0), "ray_mask": torch.tensor(raymap_mask[i]).unsqueeze(0), "update": torch.tensor(img_mask[i]).unsqueeze(0), "reset": torch.tensor(False).unsqueeze(0), } if img_mask[i]: j += 1 if raymap_mask[i]: k += 1 views.append(view) assert j == len(images) and k == len(raymaps) if revisit > 1: new_views = [] for r in range(revisit): for i, view in enumerate(views): new_view = deepcopy(view) new_view["idx"] = r * len(views) + i new_view["instance"] = str(r * len(views) + i) if r > 0 and not update: new_view["update"] = 
torch.tensor(False).unsqueeze(0) new_views.append(new_view) return new_views return views, orig_shape def prepare_output(outputs, orig_shape, outdir, revisit=1, use_pose=True): """ Process inference outputs to generate point clouds and camera parameters for visualization. Args: outputs (dict): Inference outputs. revisit (int): Number of revisits per view. use_pose (bool): Whether to transform points using camera pose. Returns: tuple: (points, colors, confidence, camera parameters dictionary) """ from cut3r.dust3r.post_process import estimate_focal_knowing_depth from cut3r.dust3r.utils.camera import pose_encoding_to_camera from cut3r.dust3r.utils.geometry import geotrf # Only keep the outputs corresponding to one full pass. valid_length = len(outputs["pred"]) // revisit outputs["pred"] = outputs["pred"][-valid_length:] outputs["views"] = outputs["views"][-valid_length:] pts3ds_self_ls = [output["pts3d_in_self_view"].cpu() for output in outputs["pred"]] pts3ds_other = [output["pts3d_in_other_view"].cpu() for output in outputs["pred"]] conf_self = [output["conf_self"].cpu() for output in outputs["pred"]] conf_other = [output["conf"].cpu() for output in outputs["pred"]] pts3ds_self = torch.cat(pts3ds_self_ls, 0) # Recover camera poses. pr_poses = [ pose_encoding_to_camera(pred["camera_pose"].clone()).cpu() for pred in outputs["pred"] ] R_c2w = torch.cat([pr_pose[:, :3, :3] for pr_pose in pr_poses], 0) t_c2w = torch.cat([pr_pose[:, :3, 3] for pr_pose in pr_poses], 0) if use_pose: transformed_pts3ds_other = [] for pose, pself in zip(pr_poses, pts3ds_self): transformed_pts3ds_other.append(geotrf(pose, pself.unsqueeze(0))) pts3ds_other = transformed_pts3ds_other conf_other = conf_self # Estimate focal length based on depth. 
B, H, W, _ = pts3ds_self.shape orig_H, orig_W = orig_shape pp = torch.tensor([orig_W // 2, orig_H // 2], device=pts3ds_self.device).float().repeat(B, 1) focal = estimate_focal_knowing_depth(pts3ds_self, pp, focal_mode="weiszfeld") # focal = focal.mean().repeat(len(focal)) focal_x = focal * orig_W / W focal_y = focal * orig_H / H colors = [ 0.5 * (output["img"].permute(0, 2, 3, 1) + 1.0) for output in outputs["views"] ] cam_dict = { "focal": focal.cpu().numpy(), "pp": pp.cpu().numpy(), "R": R_c2w.cpu().numpy(), "t": t_c2w.cpu().numpy(), } cam2world_tosave = torch.cat(pr_poses) # B, 4, 4 intrinsics_tosave = ( torch.eye(3).unsqueeze(0).repeat(cam2world_tosave.shape[0], 1, 1) ) # B, 3, 3 intrinsics_tosave[:, 0, 0] = focal_x.detach().cpu() intrinsics_tosave[:, 1, 1] = focal_y.detach().cpu() intrinsics_tosave[:, 0, 2] = pp[:, 0] intrinsics_tosave[:, 1, 2] = pp[:, 1] os.makedirs(os.path.join(outdir, "camera"), exist_ok=True) for f_id in range(len(cam2world_tosave)): c2w = cam2world_tosave[f_id].cpu().numpy() intrins = intrinsics_tosave[f_id].cpu().numpy() np.savez( os.path.join(outdir, "camera", f"{f_id+1:04d}.npz"), pose=c2w, intrinsics=intrins, ) return pts3ds_other, colors, conf_other, cam_dict ================================================ FILE: field_construction/preprocessor.py ================================================ import glob import logging import os import shutil import subprocess import cv2 import numpy as np import torch from diffusers.models.autoencoders.vq_model import VQModel from safetensors.torch import load_file from torch.utils.data import DataLoader from torchvision import transforms from tqdm import tqdm from .auto_encoder import Autoencoder, Autoencoder_dataset from .pose_estimator import get_pose_estimator from .utils.loss_utils import cos_loss, l2_loss from .video_preprocessor import VideoPreprocessor def extract_with_openseg(cfg): import tensorflow as tf2 import tensorflow._api.v2.compat.v1 as tf gpus = 
tf.config.experimental.list_physical_devices('GPU') if gpus: try: for gpu in gpus: tf.config.experimental.set_memory_growth(gpu, True) except RuntimeError as e: print(e) openseg = tf2.saved_model.load( cfg.feature_extractor.model_path, tags=[tf.saved_model.tag_constants.SERVING] ) imgs_path = os.path.join(cfg.pipeline.data_path, "input") img_names = list( filter( lambda x: x.endswith("png") or x.endswith("jpg"), sorted(os.listdir(imgs_path)) ) ) img_list = [] np_image_string_list = [] for img_name in img_names: img_path = os.path.join(imgs_path, img_name) image = cv2.imread(img_path) with tf.gfile.GFile(img_path, 'rb') as f: np_image_string = np.array([f.read()]) image = torch.from_numpy(image) img_list.append(image) np_image_string_list.append(np_image_string) images = [img_list[i].permute(2, 0, 1)[None, ...] for i in range(len(img_list))] imgs = torch.cat(images) save_path = os.path.join(cfg.pipeline.data_path, "lang_features") os.makedirs(save_path, exist_ok=True) embed_size = 768 for i, (img, np_image_string) in enumerate(tqdm((zip(imgs, np_image_string_list)), desc="Extracting lang features", total=(len(imgs)))): text_emb = tf.zeros([1, 1, embed_size]) results = openseg.signatures["serving_default"]( inp_image_bytes=tf.convert_to_tensor(np_image_string[0]), inp_text_emb=text_emb ) img_info = results['image_info'] crop_sz = [ int(img_info[0, 0] * img_info[2, 0]), int(img_info[0, 1] * img_info[2, 1]) ] image_embedding_feat = results['image_embedding_feat'][:, :crop_sz[0], :crop_sz[1]] img_size = (img.shape[1], img.shape[2]) feat_2d = tf.cast( tf.image.resize_nearest_neighbor( image_embedding_feat, img_size, align_corners=True )[0], dtype=tf.float32 ).numpy() # perform mask-pooling over feat2d feat_2d = np.transpose(feat_2d, axes=(2, 0, 1)) pooled_feats2d = [] curr_mask = np.load(os.path.join(cfg.pipeline.data_path, "lang_features_dim3", str(i+1).zfill(4)+"_s.npy")) for color_id in range(-1, curr_mask.max() + 1): if not feat_2d[:, curr_mask == 
color_id].shape[-1]: continue pooled = feat_2d[:, curr_mask == color_id].mean(axis=-1) pooled /= np.linalg.norm(pooled) pooled_feats2d.append(pooled) pooled_feats2d = np.stack(pooled_feats2d) np.save(os.path.join(save_path, str(i+1).zfill(4)+".npy"), pooled_feats2d) class Preprocessor: def __init__(self, cfg): self.cfg = cfg if not cfg.pipeline.skip_video_process: self.video_processor = VideoPreprocessor(cfg) else: self.video_processor = None if not cfg.pipeline.skip_pose_estimate: self.pose_estimator = get_pose_estimator(cfg) else: self.pose_estimator = None if not cfg.pipeline.skip_lang_feature_extraction: # load feature extractor if cfg.feature_extractor.type == "open-seg": self.lseg = None self.sem_ae = Autoencoder() self.sem_ae.cuda() elif cfg.feature_extractor.type == "lseg": self.lseg = LSegFeatureExtractor.from_pretrained(cfg.lseg.model_path) self.lseg.to(cfg.lseg.device, dtype=torch.float32).eval() self.sem_ae = VQModel( in_channels=512, out_channels=512, latent_channels=4, norm_num_groups=2, block_out_channels=[256, 64, 16], down_block_types=["DownEncoderBlock2D"] * 3, up_block_types=["UpDecoderBlock2D"] * 3, layers_per_block=1, norm_type="spatial", num_vq_embeddings=1024, ) self.sem_ae.load_state_dict(load_file(cfg.ae.model_path)) self.sem_ae.to(cfg.ae.device, dtype=torch.float32).eval() self.img_transform = transforms.Compose( [ transforms.Lambda(lambda x: x / 255), transforms.Normalize( mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5], inplace=True ), ] ) else: self.lseg = None self.sem_ae = None self.img_transform = None def generate_lang_features_with_openseg(self): extract_with_openseg(self.cfg) logging.info("Done feature extraction.") num_epochs = 400 os.makedirs(os.path.join(self.cfg.pipeline.data_path, "ckpt"), exist_ok=True) save_path = os.path.join(self.cfg.pipeline.data_path, "lang_features") train_dataset = Autoencoder_dataset(save_path) train_loader = DataLoader( dataset=train_dataset, batch_size=512, shuffle=True, num_workers=32, drop_last=False ) 
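The OpenSeg extraction above pools the dense feature map inside each segment of the precomputed mask (ids run from -1 upward, empty ids are skipped) and L2-normalizes each pooled vector. A standalone NumPy sketch of that mask pooling, with illustrative names and shapes:

```python
import numpy as np

def mask_pool(feat_2d: np.ndarray, seg_map: np.ndarray) -> np.ndarray:
    """Average a (C, H, W) feature map within each segment id and L2-normalize.

    Follows the convention above: ids range from -1 (unassigned) to
    seg_map.max(), and segments with no pixels are skipped.
    """
    pooled = []
    for seg_id in range(-1, seg_map.max() + 1):
        sel = feat_2d[:, seg_map == seg_id]   # (C, n_pixels_in_segment)
        if sel.shape[-1] == 0:
            continue
        v = sel.mean(axis=-1)
        pooled.append(v / np.linalg.norm(v))
    return np.stack(pooled)                   # (num_nonempty_segments, C)

feats = np.random.rand(8, 4, 4).astype(np.float32)
seg = np.array([[0, 0, 1, 1]] * 4)            # two segments, no -1 pixels
out = mask_pool(feats, seg)
assert out.shape == (2, 8)
assert np.allclose(np.linalg.norm(out, axis=1), 1.0)
```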
        test_loader = DataLoader(
            dataset=train_dataset,
            batch_size=512,
            shuffle=False,
            num_workers=32,
            drop_last=False
        )
        optimizer = torch.optim.Adam(self.sem_ae.parameters(), lr=1e-4)
        pbar = tqdm(range(num_epochs))
        best_eval_loss = 100.0
        best_epoch = 0
        for epoch in pbar:
            self.sem_ae.train()
            for idx, feature in enumerate(train_loader):
                data = feature.to("cuda")
                outputs_dim3 = self.sem_ae.encode(data)
                outputs = self.sem_ae.decode(outputs_dim3)
                l2loss = l2_loss(outputs, data)
                cosloss = cos_loss(outputs, data)
                loss = l2loss + cosloss * 0.001
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
            if epoch > 300:
                eval_loss = 0.0
                self.sem_ae.eval()
                for idx, feature in enumerate(test_loader):
                    data = feature.to("cuda")
                    with torch.no_grad():
                        outputs = self.sem_ae(data)
                        loss = l2_loss(outputs, data) + cos_loss(outputs, data)
                    eval_loss += loss * len(feature)
                eval_loss = eval_loss / len(train_dataset)
                print("eval_loss:{:.8f}".format(eval_loss))
                if eval_loss < best_eval_loss:
                    best_eval_loss = eval_loss
                    best_epoch = epoch
                    torch.save(self.sem_ae.state_dict(), os.path.join(self.cfg.pipeline.data_path, "ckpt", "best_ckpt.pth"))
            # Iterating over `pbar` already advances it once per epoch; an extra
            # pbar.update(1) here would double-count, so only refresh the postfix.
            pbar.set_postfix({"Loss": f"{loss.item():.7f}"})
        print(f"best_epoch: {best_epoch}")
        print("best_loss: {:.8f}".format(best_eval_loss))
        # compress lang_feats with ae
        logging.info("Compressing language features with best ckpt...")
        best_state_dict = torch.load(os.path.join(self.cfg.pipeline.data_path, "ckpt", "best_ckpt.pth"), weights_only=False)
        self.sem_ae.load_state_dict(best_state_dict)
        # check device
        orig_lang_feat_names = sorted(glob.glob(os.path.join(save_path, "*.npy")))
        dim3_save_path = os.path.join(self.cfg.pipeline.data_path, "lang_features_dim3")
        with torch.no_grad():
            for idx, orig_lang_feat_name in enumerate(orig_lang_feat_names):
                orig_lang_feat = torch.from_numpy(np.load(orig_lang_feat_name)).cuda()
                mask = np.load(os.path.join(dim3_save_path, str(idx+1).zfill(4)+"_s.npy"))
                # check dtype
                lang_feat = self.sem_ae.encode(orig_lang_feat).detach().cpu().numpy()
                full_lang_feat = np.zeros((3, mask.shape[0], mask.shape[1]))
                curr_id = 0
                for color_id in range(-1, mask.max() + 1):
                    if not mask[mask == color_id].shape[-1]:
                        continue
                    full_lang_feat[:, mask == color_id] = lang_feat[curr_id][:, None]
                    curr_id += 1
                np.save(os.path.join(dim3_save_path, str(idx+1).zfill(4)+"_f.npy"), full_lang_feat)

    def generate_lang_features_with_lseg(self):
        from cogvideox_interpolation.lseg import LSegFeatureExtractor
        imgs_path = os.path.join(self.cfg.pipeline.data_path, "input")
        img_names = list(
            filter(
                lambda x: x.endswith("png") or x.endswith("jpg"),
                os.listdir(imgs_path)
            )
        )
        save_path = os.path.join(self.cfg.pipeline.data_path, "lang_features_dim4")
        os.makedirs(save_path, exist_ok=True)
        for img_name in tqdm(img_names):
            img_path = os.path.join(imgs_path, img_name)
            img = cv2.imread(img_path)
            resolution = (640, 480)
            img = cv2.resize(img, resolution)
            frame_embed = self.img_transform(torch.from_numpy(img).permute(2, 0, 1)).to(
                self.cfg.lseg.device, dtype=torch.float32
            )[None, ...]
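The compression pass above writes one low-dimensional code per segment back onto every pixel of that segment, consuming rows of the encoded matrix in segment-id order and skipping empty ids. A small NumPy sketch of that scatter-back (names and shapes are illustrative):

```python
import numpy as np

def scatter_segment_feats(lang_feat: np.ndarray, seg_map: np.ndarray) -> np.ndarray:
    """Broadcast per-segment vectors (N, C) onto a dense (C, H, W) map.

    Rows of lang_feat are consumed in segment-id order, skipping ids with no
    pixels, mirroring the loop in generate_lang_features_with_openseg.
    """
    full = np.zeros((lang_feat.shape[1], *seg_map.shape))
    row = 0
    for seg_id in range(-1, seg_map.max() + 1):
        sel = seg_map == seg_id
        if not sel.any():
            continue
        full[:, sel] = lang_feat[row][:, None]
        row += 1
    return full

seg = np.array([[0, 0], [1, 1]])
feats = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])  # one row per segment
dense = scatter_segment_feats(feats, seg)
assert dense.shape == (3, 2, 2)
assert np.allclose(dense[:, 0, 0], [1.0, 2.0, 3.0])
assert np.allclose(dense[:, 1, 1], [4.0, 5.0, 6.0])
```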
lseg_features = self.lseg.extract_features(frame_embed) if lseg_features.device != self.sem_ae.device: lseg_features = lseg_features.to("cpu").to(self.sem_ae.device) z = self.sem_ae.encode(lseg_features).latents # [1, 4, 240, 320] np.save( os.path.join(save_path, f"{img_name.split('.')[0]}_f.npy"), z.detach().cpu().numpy(), ) def select_valid_data(self): cfg = self.cfg curr_data_path = cfg.pipeline.data_path raw_data_path = os.path.join(curr_data_path, "raw") os.makedirs(raw_data_path, exist_ok=True) dirs_to_move = ["camera", "input", "lang_features_dim3", "normal"] orig_view_nums = len(os.listdir(os.path.join(curr_data_path, "camera"))) indexs = np.linspace(0, orig_view_nums-1, cfg.pipeline.chunk_num * cfg.pipeline.keep_num_per_chunk) indexs = indexs.astype(np.int32).tolist() cfg.pipeline.selected_idxs = indexs for dir_to_move in dirs_to_move: shutil.move(os.path.join(curr_data_path, dir_to_move), raw_data_path) src_dir = os.path.join(raw_data_path, dir_to_move) tar_dir = os.path.join(curr_data_path, dir_to_move) os.makedirs(tar_dir, exist_ok=True) file_lst = sorted(os.listdir(src_dir)) file_suffix = file_lst[0].split(".")[-1] if dir_to_move == "lang_features_dim3": f_file_lst = [file_lst[2 * idx] for idx in cfg.pipeline.selected_idxs] s_file_lst = [file_lst[2 * idx + 1] for idx in cfg.pipeline.selected_idxs] for file_idx in range(len(f_file_lst)): shutil.copy( os.path.join(src_dir, f_file_lst[file_idx]), os.path.join(tar_dir, f"{file_idx+1:04d}_f.{file_suffix}"), ) shutil.copy( os.path.join(src_dir, s_file_lst[file_idx]), os.path.join(tar_dir, f"{file_idx+1:04d}_s.{file_suffix}"), ) else: file_lst = [file_lst[idx] for idx in cfg.pipeline.selected_idxs] for file_idx, file_name in enumerate(file_lst): shutil.copy( os.path.join(src_dir, file_name), os.path.join(tar_dir, f"{file_idx+1:04d}.{file_suffix}"), ) def preprocess(self): if not self.cfg.pipeline.skip_video_process: logging.info("Processing input videos...") self.video_processor.video_process() if not 
self.cfg.pipeline.skip_pose_estimate:
            logging.info("Estimating poses...")
            self.pose_estimator.get_poses()
        if not self.cfg.pipeline.skip_lang_feature_extraction:
            logging.info("Generating language features...")
            if self.cfg.feature_extractor.type == "lseg":
                self.generate_lang_features_with_lseg()
            elif self.cfg.feature_extractor.type == "open-seg":
                self.generate_lang_features_with_openseg()
        if self.cfg.pipeline.selection:
            logging.info("Selecting views with higher confidence...")
            self.select_valid_data()
        logging.info("Done with all preprocessing!")


================================================
FILE: field_construction/scene/__init__.py
================================================
#
# Copyright (C) 2023, Inria
# GRAPHDECO research group, https://team.inria.fr/graphdeco
# All rights reserved.
#
# This software is free for non-commercial, research and evaluation use
# under the terms of the LICENSE.md file.
#
# For inquiries contact george.drettakis@inria.fr
#
import json
import os
import random

import numpy as np
import torch

from field_construction.scene.dataset_readers import sceneLoadTypeCallbacks
from field_construction.scene.gaussian_model import GaussianModel
from field_construction.utils.camera_utils import (camera_to_JSON,
                                                  cameraList_from_camInfos)
from field_construction.utils.system_utils import searchForMaxIteration


class Scene:
    gaussians: GaussianModel

    def __init__(self, args, gaussians: GaussianModel, load_iteration=None, shuffle=True, resolution_scales=[1.0]):
        """
        :param path: Path to colmap scene main folder.
""" self.model_path = args.model_path os.makedirs(self.model_path, exist_ok=True) self.loaded_iter = None self.gaussians = gaussians self.source_path = args.source_path if load_iteration: if load_iteration == -1: self.loaded_iter = searchForMaxIteration(os.path.join(self.model_path, "point_cloud")) else: self.loaded_iter = load_iteration print("Loading trained model at iteration {}".format(self.loaded_iter)) self.train_cameras = {} self.test_cameras = {} if os.path.exists(os.path.join(args.source_path, "sparse")): scene_info = sceneLoadTypeCallbacks["Colmap"](args.source_path, "images", args.eval, loaded_iter=self.loaded_iter) elif os.path.exists(os.path.join(args.source_path, "transforms_train.json")): print("Found transforms_train.json file, assuming Blender data set!") scene_info = sceneLoadTypeCallbacks["Blender"](args.source_path, args.white_background, args.eval) else: print("Assuming CUT3R data set...") scene_info = sceneLoadTypeCallbacks["CUT3R"](args.source_path, args.white_background, args.eval, loaded_iter=self.loaded_iter) if not self.loaded_iter: with open(scene_info.ply_path, 'rb') as src_file, open(os.path.join(self.model_path, "input.ply"), 'wb') as dest_file: dest_file.write(src_file.read()) json_cams = [] camlist = [] if scene_info.test_cameras: camlist.extend(scene_info.test_cameras) if scene_info.train_cameras: camlist.extend(scene_info.train_cameras) for id, cam in enumerate(camlist): json_cams.append(camera_to_JSON(id, cam)) with open(os.path.join(self.model_path, "cameras.json"), 'w') as file: json.dump(json_cams, file) if shuffle: random.shuffle(scene_info.train_cameras) # Multi-res consistent random shuffling random.shuffle(scene_info.test_cameras) # Multi-res consistent random shuffling self.cameras_extent = scene_info.nerf_normalization["radius"] print(f"cameras_extent {self.cameras_extent}") self.multi_view_num = args.multi_view_num for resolution_scale in resolution_scales: print("Loading Training Cameras") 
            self.train_cameras[resolution_scale] = cameraList_from_camInfos(scene_info.train_cameras, resolution_scale, args)
            print("Loading Test Cameras")
            self.test_cameras[resolution_scale] = cameraList_from_camInfos(scene_info.test_cameras, resolution_scale, args)
            print("computing nearest_id")
            self.world_view_transforms = []
            camera_centers = []
            center_rays = []
            for id, cur_cam in enumerate(self.train_cameras[resolution_scale]):
                self.world_view_transforms.append(cur_cam.world_view_transform)
                camera_centers.append(cur_cam.camera_center)
                R = torch.tensor(cur_cam.R).float().cuda()
                T = torch.tensor(cur_cam.T).float().cuda()
                center_ray = torch.tensor([0.0, 0.0, 1.0]).float().cuda()
                center_ray = center_ray @ R.transpose(-1, -2)
                center_rays.append(center_ray)
            self.world_view_transforms = torch.stack(self.world_view_transforms)
            camera_centers = torch.stack(camera_centers, dim=0)
            center_rays = torch.stack(center_rays, dim=0)
            center_rays = torch.nn.functional.normalize(center_rays, dim=-1)
            diss = torch.norm(camera_centers[:, None] - camera_centers[None], dim=-1).detach().cpu().numpy()
            tmp = torch.sum(center_rays[:, None] * center_rays[None], dim=-1)
            # Clamp before arccos: dot products of unit rays can land just
            # outside [-1, 1] in floating point and would yield NaN angles.
            angles = torch.arccos(torch.clamp(tmp, -1.0, 1.0)) * 180 / 3.14159
            angles = angles.detach().cpu().numpy()
            with open(os.path.join(self.model_path, "multi_view.json"), 'w') as file:
                for id, cur_cam in enumerate(self.train_cameras[resolution_scale]):
                    sorted_indices = np.lexsort((angles[id], diss[id]))
                    # sorted_indices = np.lexsort((diss[id], angles[id]))
                    mask = (angles[id][sorted_indices] < args.multi_view_max_angle) & \
                           (diss[id][sorted_indices] > args.multi_view_min_dis) & \
                           (diss[id][sorted_indices] < args.multi_view_max_dis)
                    sorted_indices = sorted_indices[mask]
                    multi_view_num = min(self.multi_view_num, len(sorted_indices))
                    json_d = {'ref_name': cur_cam.image_name, 'nearest_name': []}
                    for index in sorted_indices[:multi_view_num]:
                        cur_cam.nearest_id.append(index)
                        cur_cam.nearest_names.append(self.train_cameras[resolution_scale][index].image_name)
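The neighbor search above ranks candidate views by camera-center distance with viewing-ray angle as a tie-breaker (`np.lexsort` treats its *last* key as primary), then filters by angle and distance bounds. A NumPy-only sketch of that ranking, with illustrative threshold values:

```python
import numpy as np

def select_neighbors(dists, angles, max_angle, min_dis, max_dis, k):
    """Rank by distance (primary) then angle; keep candidates within bounds."""
    order = np.lexsort((angles, dists))  # last key (dists) is the primary sort key
    keep = (angles[order] < max_angle) & (dists[order] > min_dis) & (dists[order] < max_dis)
    return order[keep][:k]

dists = np.array([0.0, 0.2, 0.5, 3.0])   # distance of each candidate to the ref view
angles = np.array([0.0, 10.0, 80.0, 5.0])
# Candidate 0 is the reference view itself (dist = 0 fails min_dis), candidate 2
# fails the angle bound, candidate 3 fails max_dis -- only candidate 1 survives.
assert select_neighbors(dists, angles, max_angle=30.0, min_dis=0.01, max_dis=1.5, k=4).tolist() == [1]
```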
                        json_d["nearest_name"].append(self.train_cameras[resolution_scale][index].image_name)
                    json_str = json.dumps(json_d, separators=(',', ':'))
                    file.write(json_str)
                    file.write('\n')
                    # print(f"frame {cur_cam.image_name}, nearest {cur_cam.nearest_names}, \
                    #       angle {angles[id][cur_cam.nearest_id]}, diss {diss[id][cur_cam.nearest_id]}")

        if self.loaded_iter:
            self.gaussians.load_ply(os.path.join(self.model_path, "point_cloud",
                                                 "iteration_" + str(self.loaded_iter),
                                                 "point_cloud.ply"))
        else:
            self.gaussians.create_from_pcd(scene_info.point_cloud, self.cameras_extent)
            self.gaussians.init_RT_seq(self.train_cameras)

    def save(self, iteration, mask=None, include_feature=False, finetune=False):
        # Both branches previously built the identical path, so the
        # include_feature flag only matters for save_ply below.
        point_cloud_path = os.path.join(self.model_path, "point_cloud/iteration_{}".format(iteration))
        if finetune:
            self.gaussians.save_ply(os.path.join(point_cloud_path, "finetune.ply"), mask, include_feature)
        else:
            self.gaussians.save_ply(os.path.join(point_cloud_path, "point_cloud.ply"), mask, include_feature)

    def getTrainCameras(self, scale=1.0):
        return self.train_cameras[scale]

    def getTestCameras(self, scale=1.0):
        return self.test_cameras[scale]


================================================
FILE: field_construction/scene/app_model.py
================================================
import os

import torch
import torch.nn as nn


def searchForMaxIteration(folder):
    saved_iters = [int(fname.split("_")[-1]) for fname in os.listdir(folder)]
    return max(saved_iters)


class AppModel(nn.Module):
    def __init__(self, num_images=1600):
        super().__init__()
        self.appear_ab = nn.Parameter(torch.zeros(num_images, 2).cuda())
        self.optimizer = torch.optim.Adam([
            {'params': self.appear_ab, 'lr': 0.001, "name": "appear_ab"},
        ], betas=(0.9, 0.99))

    def save_weights(self, model_path, iteration):
        out_weights_path = os.path.join(model_path, "app_model/iteration_{}".format(iteration))
os.makedirs(out_weights_path, exist_ok=True) print(f"save app model. path: {out_weights_path}") torch.save(self.state_dict(), os.path.join(out_weights_path, 'app.pth')) def load_weights(self, model_path, iteration=-1): if iteration == -1: loaded_iter = searchForMaxIteration(os.path.join(model_path, "app_model")) else: loaded_iter = iteration weights_path = os.path.join(model_path, "app_model/iteration_{}/app.pth".format(loaded_iter)) state_dict = torch.load(weights_path) self.load_state_dict(state_dict) def freeze(self): self.appear_ab.requires_grad_(False) ================================================ FILE: field_construction/scene/cameras.py ================================================ # # Copyright (C) 2023, Inria # GRAPHDECO research group, https://team.inria.fr/graphdeco # All rights reserved. # # This software is free for non-commercial, research and evaluation use # under the terms of the LICENSE.md file. # # For inquiries contact george.drettakis@inria.fr # import copy import os import numpy as np import torch import torch.nn.functional as F from PIL import Image from torch import nn from field_construction.utils.general_utils import PILtoTorch from field_construction.utils.graphics_utils import ( fov2focal, getProjectionMatrix, getProjectionMatrixCenterShift, getWorld2View2) def dilate(bin_img, ksize=6): pad = (ksize - 1) // 2 bin_img = F.pad(bin_img, pad=[pad, pad, pad, pad], mode='reflect') out = F.max_pool2d(bin_img, kernel_size=ksize, stride=1, padding=0) return out def erode(bin_img, ksize=12): out = 1 - dilate(1 - bin_img, ksize) return out def process_image(image_path, resolution, ncc_scale): image = Image.open(image_path) if len(image.split()) > 3: resized_image_rgb = torch.cat([PILtoTorch(im, resolution) for im in image.split()[:3]], dim=0) loaded_mask = PILtoTorch(image.split()[3], resolution) gt_image = resized_image_rgb if ncc_scale != 1.0: ncc_resolution = (int(resolution[0]/ncc_scale), int(resolution[1]/ncc_scale)) resized_image_rgb = 
torch.cat([PILtoTorch(im, ncc_resolution) for im in image.split()[:3]], dim=0) else: resized_image_rgb = PILtoTorch(image, resolution) loaded_mask = None gt_image = resized_image_rgb if ncc_scale != 1.0: ncc_resolution = (int(resolution[0]/ncc_scale), int(resolution[1]/ncc_scale)) resized_image_rgb = PILtoTorch(image, ncc_resolution) gray_image = (0.299 * resized_image_rgb[0] + 0.587 * resized_image_rgb[1] + 0.114 * resized_image_rgb[2])[None] return gt_image, gray_image, loaded_mask class Camera(nn.Module): def __init__(self, colmap_id, R, T, FoVx, FoVy, image_width, image_height, image_path, image_name, uid, trans=np.array([0.0, 0.0, 0.0]), scale=1.0, ncc_scale=1.0, preload_img=True, data_device = "cuda" ): super(Camera, self).__init__() self.uid = uid self.nearest_id = [] self.nearest_names = [] self.colmap_id = colmap_id self.R = R self.T = T self.FoVx = FoVx self.FoVy = FoVy self.image_name = image_name self.image_path = image_path self.image_width = image_width self.image_height = image_height self.resolution = (image_width, image_height) self.Fx = fov2focal(FoVx, self.image_width) self.Fy = fov2focal(FoVy, self.image_height) self.Cx = 0.5 * self.image_width self.Cy = 0.5 * self.image_height base_image_path = "/".join(self.image_path.split("/")[:-2]) self.normal_path = os.path.join(base_image_path, "normal", self.image_path.split("/")[-1]) try: self.data_device = torch.device(data_device) except Exception as e: print(e) print(f"[Warning] Custom device {data_device} failed, fallback to default cuda device" ) self.data_device = torch.device("cuda") self.original_image, self.image_gray, self.mask = None, None, None self.preload_img = preload_img self.ncc_scale = ncc_scale if self.preload_img: gt_image, gray_image, loaded_mask = process_image(self.image_path, self.resolution, ncc_scale) self.original_image = gt_image.to(self.data_device) self.original_image_gray = gray_image.to(self.data_device) self.mask = loaded_mask self.zfar = 100.0 self.znear = 0.01 
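`Camera.__init__` above caches pixel focal lengths derived from the fields of view via `fov2focal`. Assuming the standard pinhole relation (which is what a `fov2focal` helper in `graphics_utils` typically computes, though the repo's exact definition is not shown here), half the image width subtends half the FoV, so `focal = pixels / (2 * tan(fov / 2))`:

```python
import math

def fov2focal(fov: float, pixels: float) -> float:
    """Pinhole relation: half the sensor width in pixels subtends half the FoV."""
    return pixels / (2 * math.tan(fov / 2))

def focal2fov(focal: float, pixels: float) -> float:
    """Inverse relation, recovering the field of view from a pixel focal length."""
    return 2 * math.atan(pixels / (2 * focal))

# A 90-degree horizontal FoV on a 1000-px-wide image gives fx = 500.
fx = fov2focal(math.pi / 2, 1000)
assert abs(fx - 500.0) < 1e-9
assert abs(focal2fov(fx, 1000) - math.pi / 2) < 1e-12
```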
self.trans = trans self.scale = scale self.world_view_transform = torch.tensor(getWorld2View2(R, T, trans, scale)).transpose(0, 1).cuda() self.projection_matrix = getProjectionMatrix(znear=self.znear, zfar=self.zfar, fovX=self.FoVx, fovY=self.FoVy).transpose(0,1).cuda() self.full_proj_transform = (self.world_view_transform.unsqueeze(0).bmm(self.projection_matrix.unsqueeze(0))).squeeze(0) self.camera_center = self.world_view_transform.inverse()[3, :3] self.plane_mask, self.non_plane_mask = None, None def get_image(self): if self.preload_img: return self.original_image.cuda(), self.original_image_gray.cuda() else: gt_image, gray_image, _ = process_image(self.image_path, self.resolution, self.ncc_scale) return gt_image.cuda(), gray_image.cuda() def get_normal(self): _normal = Image.open(self.normal_path) resized_normal = PILtoTorch(_normal, self.resolution) resized_normal = resized_normal[:3] _normal = - (resized_normal * 2 - 1).cuda() # normalize normal _normal = _normal.permute(1, 2, 0) @ (torch.linalg.inv(torch.as_tensor(self.R).float()).cuda()) normal_gt = _normal.permute(2, 0, 1) normal_norm = torch.norm(normal_gt, dim=0, keepdim=True) normal_mask = ~((normal_norm > 1.1) | (normal_norm < 0.9)) normal_gt /= normal_norm return normal_gt, normal_mask def get_language_feature(self, language_feature_dir): language_feature_name = os.path.join(language_feature_dir, self.image_name) feature_map = torch.from_numpy(np.load(language_feature_name + '_f.npy')).to(self.data_device) if len(feature_map.shape) < 4: feature_map = feature_map[None] point_feature = F.interpolate(feature_map, (self.image_height, self.image_width), mode="bilinear", align_corners=False) seg_map = torch.from_numpy(np.load(language_feature_name + "_s.npy")).to(self.data_device) # (h, w) seg_map = seg_map.long() mask = seg_map != -1 # perform mask_pooling: point_feature = point_feature.squeeze(0) # (feat_dim, h, w) # for color_id in range(seg_map.max() + 1): # point_feature[:, seg_map == color_id] = 
point_feature[:, seg_map == color_id].mean(dim=-1, keepdim=True) return point_feature, mask, seg_map def get_calib_matrix_nerf(self, scale=1.0): intrinsic_matrix = torch.tensor([[self.Fx/scale, 0, self.Cx/scale], [0, self.Fy/scale, self.Cy/scale], [0, 0, 1]]).float() extrinsic_matrix = self.world_view_transform.transpose(0,1).contiguous() # cam2world return intrinsic_matrix, extrinsic_matrix def get_rays(self, scale=1.0): W, H = int(self.image_width/scale), int(self.image_height/scale) ix, iy = torch.meshgrid( torch.arange(W), torch.arange(H), indexing='xy') rays_d = torch.stack( [(ix-self.Cx/scale) / self.Fx * scale, (iy-self.Cy/scale) / self.Fy * scale, torch.ones_like(ix)], -1).float().cuda() return rays_d def get_k(self, scale=1.0): K = torch.tensor([[self.Fx / scale, 0, self.Cx / scale], [0, self.Fy / scale, self.Cy / scale], [0, 0, 1]]).cuda() return K def get_inv_k(self, scale=1.0): K_T = torch.tensor([[scale/self.Fx, 0, -self.Cx/self.Fx], [0, scale/self.Fy, -self.Cy/self.Fy], [0, 0, 1]]).cuda() return K_T class MiniCam: def __init__(self, width, height, fovy, fovx, znear, zfar, world_view_transform, full_proj_transform): self.image_width = width self.image_height = height self.FoVy = fovy self.FoVx = fovx self.znear = znear self.zfar = zfar self.world_view_transform = world_view_transform self.full_proj_transform = full_proj_transform view_inv = torch.inverse(self.world_view_transform) self.camera_center = view_inv[3][:3] def sample_cam(cam_l: Camera, cam_r: Camera): cam = copy.copy(cam_l) Rt = np.zeros((4, 4)) Rt[:3, :3] = cam_l.R.transpose() Rt[:3, 3] = cam_l.T Rt[3, 3] = 1.0 Rt2 = np.zeros((4, 4)) Rt2[:3, :3] = cam_r.R.transpose() Rt2[:3, 3] = cam_r.T Rt2[3, 3] = 1.0 C2W = np.linalg.inv(Rt) C2W2 = np.linalg.inv(Rt2) w = np.random.rand() pose_c2w_at_unseen = w * C2W + (1 - w) * C2W2 Rt = np.linalg.inv(pose_c2w_at_unseen) cam.R = Rt[:3, :3] cam.T = Rt[:3, 3] cam.world_view_transform = torch.tensor(getWorld2View2(cam.R, cam.T, cam.trans, 
cam.scale)).transpose(0, 1).cuda() cam.projection_matrix = getProjectionMatrix(znear=cam.znear, zfar=cam.zfar, fovX=cam.FoVx, fovY=cam.FoVy).transpose(0,1).cuda() cam.full_proj_transform = (cam.world_view_transform.unsqueeze(0).bmm(cam.projection_matrix.unsqueeze(0))).squeeze(0) cam.camera_center = cam.world_view_transform.inverse()[3, :3] return cam ================================================ FILE: field_construction/scene/colmap_loader.py ================================================ # # Copyright (C) 2023, Inria # GRAPHDECO research group, https://team.inria.fr/graphdeco # All rights reserved. # # This software is free for non-commercial, research and evaluation use # under the terms of the LICENSE.md file. # # For inquiries contact george.drettakis@inria.fr # import collections import struct import numpy as np CameraModel = collections.namedtuple( "CameraModel", ["model_id", "model_name", "num_params"]) Camera = collections.namedtuple( "Camera", ["id", "model", "width", "height", "params"]) BaseImage = collections.namedtuple( "Image", ["id", "qvec", "tvec", "camera_id", "name", "xys", "point3D_ids"]) Point3D = collections.namedtuple( "Point3D", ["id", "xyz", "rgb", "error", "image_ids", "point2D_idxs"]) CAMERA_MODELS = { CameraModel(model_id=0, model_name="SIMPLE_PINHOLE", num_params=3), CameraModel(model_id=1, model_name="PINHOLE", num_params=4), CameraModel(model_id=2, model_name="SIMPLE_RADIAL", num_params=4), CameraModel(model_id=3, model_name="RADIAL", num_params=5), CameraModel(model_id=4, model_name="OPENCV", num_params=8), CameraModel(model_id=5, model_name="OPENCV_FISHEYE", num_params=8), CameraModel(model_id=6, model_name="FULL_OPENCV", num_params=12), CameraModel(model_id=7, model_name="FOV", num_params=5), CameraModel(model_id=8, model_name="SIMPLE_RADIAL_FISHEYE", num_params=4), CameraModel(model_id=9, model_name="RADIAL_FISHEYE", num_params=5), CameraModel(model_id=10, model_name="THIN_PRISM_FISHEYE", num_params=12) } CAMERA_MODEL_IDS = 
dict([(camera_model.model_id, camera_model) for camera_model in CAMERA_MODELS])
CAMERA_MODEL_NAMES = dict([(camera_model.model_name, camera_model)
                           for camera_model in CAMERA_MODELS])


def qvec2rotmat(qvec):
    return np.array([
        [1 - 2 * qvec[2]**2 - 2 * qvec[3]**2,
         2 * qvec[1] * qvec[2] - 2 * qvec[0] * qvec[3],
         2 * qvec[3] * qvec[1] + 2 * qvec[0] * qvec[2]],
        [2 * qvec[1] * qvec[2] + 2 * qvec[0] * qvec[3],
         1 - 2 * qvec[1]**2 - 2 * qvec[3]**2,
         2 * qvec[2] * qvec[3] - 2 * qvec[0] * qvec[1]],
        [2 * qvec[3] * qvec[1] - 2 * qvec[0] * qvec[2],
         2 * qvec[2] * qvec[3] + 2 * qvec[0] * qvec[1],
         1 - 2 * qvec[1]**2 - 2 * qvec[2]**2]])


def rotmat2qvec(R):
    Rxx, Ryx, Rzx, Rxy, Ryy, Rzy, Rxz, Ryz, Rzz = R.flat
    K = np.array([
        [Rxx - Ryy - Rzz, 0, 0, 0],
        [Ryx + Rxy, Ryy - Rxx - Rzz, 0, 0],
        [Rzx + Rxz, Rzy + Ryz, Rzz - Rxx - Ryy, 0],
        [Ryz - Rzy, Rzx - Rxz, Rxy - Ryx, Rxx + Ryy + Rzz]]) / 3.0
    eigvals, eigvecs = np.linalg.eigh(K)
    qvec = eigvecs[[3, 0, 1, 2], np.argmax(eigvals)]
    if qvec[0] < 0:
        qvec *= -1
    return qvec


class Image(BaseImage):
    def qvec2rotmat(self):
        return qvec2rotmat(self.qvec)


def read_next_bytes(fid, num_bytes, format_char_sequence, endian_character="<"):
    """Read and unpack the next bytes from a binary file.
    :param fid: open binary file object.
    :param num_bytes: Sum of combination of {2, 4, 8}, e.g. 2, 6, 16, 30, etc.
    :param format_char_sequence: List of {c, e, f, d, h, H, i, I, l, L, q, Q}.
    :param endian_character: Any of {@, =, <, >, !}
    :return: Tuple of read and unpacked values.
    """
    data = fid.read(num_bytes)
    return struct.unpack(endian_character + format_char_sequence, data)


def read_points3D_text(path):
    """
    see: src/base/reconstruction.cc
        void Reconstruction::ReadPoints3DText(const std::string& path)
        void Reconstruction::WritePoints3DText(const std::string& path)
    """
    xyzs = None
    rgbs = None
    errors = None
    num_points = 0
    with open(path, "r") as fid:
        while True:
            line = fid.readline()
            if not line:
                break
            line = line.strip()
            if len(line) > 0 and line[0] != "#":
                num_points += 1

    xyzs = np.empty((num_points, 3))
    rgbs = np.empty((num_points, 3))
    errors = np.empty((num_points, 1))
    count = 0
    with open(path, "r") as fid:
        while True:
            line = fid.readline()
            if not line:
                break
            line = line.strip()
            if len(line) > 0 and line[0] != "#":
                elems = line.split()
                xyz = np.array(tuple(map(float, elems[1:4])))
                rgb = np.array(tuple(map(int, elems[4:7])))
                error = np.array(float(elems[7]))
                if error > 2.0:
                    continue
                xyzs[count] = xyz
                rgbs[count] = rgb
                errors[count] = error
                count += 1

    # Drop the unused tail left over by the error filter above.
    xyzs = np.delete(xyzs, np.arange(count, num_points), axis=0)
    rgbs = np.delete(rgbs, np.arange(count, num_points), axis=0)
    errors = np.delete(errors, np.arange(count, num_points), axis=0)
    return xyzs, rgbs, errors


def read_points3D_binary(path_to_model_file):
    """
    see: src/base/reconstruction.cc
        void Reconstruction::ReadPoints3DBinary(const std::string& path)
        void Reconstruction::WritePoints3DBinary(const std::string& path)
    """
    with open(path_to_model_file, "rb") as fid:
        num_points = read_next_bytes(fid, 8, "Q")[0]

        xyzs = np.empty((num_points, 3))
        rgbs = np.empty((num_points, 3))
        errors = np.empty((num_points, 1))

        count = 0
        for p_id in range(num_points):
            binary_point_line_properties = read_next_bytes(
                fid, num_bytes=43, format_char_sequence="QdddBBBd")
            xyz = np.array(binary_point_line_properties[1:4])
            rgb = np.array(binary_point_line_properties[4:7])
            error = np.array(binary_point_line_properties[7])
            track_length = read_next_bytes(
                fid, num_bytes=8, format_char_sequence="Q")[0]
            track_elems = read_next_bytes(
                fid, num_bytes=8 * track_length,
                format_char_sequence="ii" * track_length)
            if error > 2.0 or track_length < 3:
                continue
            xyzs[count] = xyz
            rgbs[count] = rgb
            errors[count] = error
            count += 1
        xyzs = np.delete(xyzs, np.arange(count, num_points), axis=0)
        rgbs = np.delete(rgbs, np.arange(count, num_points), axis=0)
        errors = np.delete(errors, np.arange(count, num_points), axis=0)
    return xyzs, rgbs, errors


def read_intrinsics_text(path):
    """
    Taken from https://github.com/colmap/colmap/blob/dev/scripts/python/read_write_model.py
    """
    cameras = {}
    with open(path, "r") as fid:
        while True:
            line = fid.readline()
            if not line:
                break
            line = line.strip()
            if len(line) > 0 and line[0] != "#":
                elems = line.split()
                camera_id = int(elems[0])
                model = elems[1]
                assert model == "PINHOLE", \
                    "While the loader supports other types, the rest of the code assumes PINHOLE"
                width = int(elems[2])
                height = int(elems[3])
                params = np.array(tuple(map(float, elems[4:])))
                cameras[camera_id] = Camera(id=camera_id, model=model,
                                            width=width, height=height,
                                            params=params)
    return cameras


def read_extrinsics_binary(path_to_model_file):
    """
    see: src/base/reconstruction.cc
        void Reconstruction::ReadImagesBinary(const std::string& path)
        void Reconstruction::WriteImagesBinary(const std::string& path)
    """
    images = {}
    with open(path_to_model_file, "rb") as fid:
        num_reg_images = read_next_bytes(fid, 8, "Q")[0]
        for _ in range(num_reg_images):
            binary_image_properties = read_next_bytes(
                fid, num_bytes=64, format_char_sequence="idddddddi")
            image_id = binary_image_properties[0]
            qvec = np.array(binary_image_properties[1:5])
            tvec = np.array(binary_image_properties[5:8])
            camera_id = binary_image_properties[8]
            image_name = ""
            current_char = read_next_bytes(fid, 1, "c")[0]
            while current_char != b"\x00":  # look for the ASCII 0 entry
                image_name += current_char.decode("utf-8")
                current_char = read_next_bytes(fid, 1, "c")[0]
            num_points2D = read_next_bytes(fid, num_bytes=8,
                                           format_char_sequence="Q")[0]
            x_y_id_s = read_next_bytes(
                fid, num_bytes=24 * num_points2D,
                format_char_sequence="ddq" * num_points2D)
            xys = np.column_stack([tuple(map(float, x_y_id_s[0::3])),
                                   tuple(map(float, x_y_id_s[1::3]))])
            point3D_ids = np.array(tuple(map(int, x_y_id_s[2::3])))
            images[image_id] = Image(
                id=image_id, qvec=qvec, tvec=tvec,
                camera_id=camera_id, name=image_name,
                xys=xys, point3D_ids=point3D_ids)
    return images


def read_intrinsics_binary(path_to_model_file):
    """
    see: src/base/reconstruction.cc
        void Reconstruction::WriteCamerasBinary(const std::string& path)
        void Reconstruction::ReadCamerasBinary(const std::string& path)
    """
    cameras = {}
    with open(path_to_model_file, "rb") as fid:
        num_cameras = read_next_bytes(fid, 8, "Q")[0]
        for _ in range(num_cameras):
            camera_properties = read_next_bytes(
                fid, num_bytes=24, format_char_sequence="iiQQ")
            camera_id = camera_properties[0]
            model_id = camera_properties[1]
            model_name = CAMERA_MODEL_IDS[model_id].model_name
            width = camera_properties[2]
            height = camera_properties[3]
            num_params = CAMERA_MODEL_IDS[model_id].num_params
            params = read_next_bytes(fid, num_bytes=8 * num_params,
                                     format_char_sequence="d" * num_params)
            cameras[camera_id] = Camera(id=camera_id,
                                        model=model_name,
                                        width=width,
                                        height=height,
                                        params=np.array(params))
        assert len(cameras) == num_cameras
    return cameras


def read_extrinsics_text(path):
    """
    Taken from https://github.com/colmap/colmap/blob/dev/scripts/python/read_write_model.py
    """
    images = {}
    with open(path, "r") as fid:
        while True:
            line = fid.readline()
            if not line:
                break
            line = line.strip()
            if len(line) > 0 and line[0] != "#":
                elems = line.split()
                image_id = int(elems[0])
                qvec = np.array(tuple(map(float, elems[1:5])))
                tvec = np.array(tuple(map(float, elems[5:8])))
                camera_id = int(elems[8])
                image_name = elems[9]
                elems = fid.readline().split()
                xys = np.column_stack([tuple(map(float, elems[0::3])),
                                       tuple(map(float, elems[1::3]))])
                point3D_ids = np.array(tuple(map(int, elems[2::3])))
                images[image_id] = Image(
                    id=image_id, qvec=qvec, tvec=tvec,
                    camera_id=camera_id, name=image_name,
                    xys=xys, point3D_ids=point3D_ids)
    return images


def read_colmap_bin_array(path):
    """
    Taken from https://github.com/colmap/colmap/blob/dev/scripts/python/read_dense.py

    :param path: path to the colmap binary file.
    :return: nd array with the floating point values in the file.
    """
    with open(path, "rb") as fid:
        width, height, channels = np.genfromtxt(fid, delimiter="&", max_rows=1,
                                                usecols=(0, 1, 2), dtype=int)
        fid.seek(0)
        num_delimiter = 0
        byte = fid.read(1)
        while True:
            if byte == b"&":
                num_delimiter += 1
                if num_delimiter >= 3:
                    break
            byte = fid.read(1)
        array = np.fromfile(fid, np.float32)
    array = array.reshape((width, height, channels), order="F")
    return np.transpose(array, (1, 0, 2)).squeeze()


def write_cameras_text(cameras, path):
    """
    see: src/colmap/scene/reconstruction.cc
        void Reconstruction::WriteCamerasText(const std::string& path)
        void Reconstruction::ReadCamerasText(const std::string& path)
    """
    HEADER = (
        "# Camera list with one line of data per camera:\n"
        + "# CAMERA_ID, MODEL, WIDTH, HEIGHT, PARAMS[]\n"
        + "# Number of cameras: {}\n".format(len(cameras))
    )
    with open(path, "w") as fid:
        fid.write(HEADER)
        for _, cam in cameras.items():
            to_write = [cam.id, cam.model, cam.width, cam.height, *cam.params]
            line = " ".join([str(elem) for elem in to_write])
            fid.write(line + "\n")


def write_next_bytes(fid, data, format_char_sequence, endian_character="<"):
    """Pack and write to a binary file.
    :param fid: open binary file object.
    :param data: data to send; if multiple elements are sent at the same time,
        they should be encapsulated either in a list or a tuple
    :param format_char_sequence: List of {c, e, f, d, h, H, i, I, l, L, q, Q};
        should be the same length as the data list or tuple
    :param endian_character: Any of {@, =, <, >, !}
    """
    if isinstance(data, (list, tuple)):
        packed = struct.pack(endian_character + format_char_sequence, *data)
    else:
        packed = struct.pack(endian_character + format_char_sequence, data)
    fid.write(packed)


def write_cameras_binary(cameras, path_to_model_file):
    """
    see: src/colmap/scene/reconstruction.cc
        void Reconstruction::WriteCamerasBinary(const std::string& path)
        void Reconstruction::ReadCamerasBinary(const std::string& path)
    """
    with open(path_to_model_file, "wb") as fid:
        write_next_bytes(fid, len(cameras), "Q")
        for _, cam in cameras.items():
            model_id = CAMERA_MODEL_NAMES[cam.model].model_id
            camera_properties = [cam.id, model_id, cam.width, cam.height]
            write_next_bytes(fid, camera_properties, "iiQQ")
            for p in cam.params:
                write_next_bytes(fid, float(p), "d")
    return cameras


def write_images_text(images, path):
    """
    see: src/colmap/scene/reconstruction.cc
        void Reconstruction::ReadImagesText(const std::string& path)
        void Reconstruction::WriteImagesText(const std::string& path)
    """
    if len(images) == 0:
        mean_observations = 0
    else:
        mean_observations = sum(
            (len(img.point3D_ids) for _, img in images.items())
        ) / len(images)
    HEADER = (
        "# Image list with two lines of data per image:\n"
        + "# IMAGE_ID, QW, QX, QY, QZ, TX, TY, TZ, CAMERA_ID, NAME\n"
        + "# POINTS2D[] as (X, Y, POINT3D_ID)\n"
        + "# Number of images: {}, mean observations per image: {}\n".format(
            len(images), mean_observations
        )
    )
    with open(path, "w") as fid:
        fid.write(HEADER)
        for _, img in images.items():
            image_header = [
                img.id,
                *img.qvec,
                *img.tvec,
                img.camera_id,
                img.name,
            ]
            first_line = " ".join(map(str, image_header))
            fid.write(first_line + "\n")

            points_strings = []
            for xy, point3D_id in zip(img.xys, img.point3D_ids):
                points_strings.append(" ".join(map(str, [*xy, point3D_id])))
            fid.write(" ".join(points_strings) + "\n")


def write_images_binary(images, path_to_model_file):
    """
    see: src/colmap/scene/reconstruction.cc
        void Reconstruction::ReadImagesBinary(const std::string& path)
        void Reconstruction::WriteImagesBinary(const std::string& path)
    """
    with open(path_to_model_file, "wb") as fid:
        write_next_bytes(fid, len(images), "Q")
        for _, img in images.items():
            write_next_bytes(fid, img.id, "i")
            write_next_bytes(fid, img.qvec.tolist(), "dddd")
            write_next_bytes(fid, img.tvec.tolist(), "ddd")
            write_next_bytes(fid, img.camera_id, "i")
            for char in img.name:
                write_next_bytes(fid, char.encode("utf-8"), "c")
            write_next_bytes(fid, b"\x00", "c")
            write_next_bytes(fid, len(img.point3D_ids), "Q")
            for xy, p3d_id in zip(img.xys, img.point3D_ids):
                write_next_bytes(fid, [*xy, p3d_id], "ddq")


def write_points3D_text(points3D, path):
    """
    see: src/colmap/scene/reconstruction.cc
        void Reconstruction::ReadPoints3DText(const std::string& path)
        void Reconstruction::WritePoints3DText(const std::string& path)
    """
    if len(points3D) == 0:
        mean_track_length = 0
    else:
        mean_track_length = sum(
            (len(pt.image_ids) for _, pt in points3D.items())
        ) / len(points3D)
    HEADER = (
        "# 3D point list with one line of data per point:\n"
        + "# POINT3D_ID, X, Y, Z, R, G, B, ERROR, TRACK[] as (IMAGE_ID, POINT2D_IDX)\n"
        + "# Number of points: {}, mean track length: {}\n".format(
            len(points3D), mean_track_length
        )
    )
    with open(path, "w") as fid:
        fid.write(HEADER)
        for _, pt in points3D.items():
            point_header = [pt.id, *pt.xyz, *pt.rgb, pt.error]
            fid.write(" ".join(map(str, point_header)) + " ")
            track_strings = []
            for image_id, point2D in zip(pt.image_ids, pt.point2D_idxs):
                track_strings.append(" ".join(map(str, [image_id, point2D])))
            fid.write(" ".join(track_strings) + "\n")


def write_points3D_binary(points3D, path_to_model_file):
    """
    see: src/colmap/scene/reconstruction.cc
        void Reconstruction::ReadPoints3DBinary(const std::string& path)
        void Reconstruction::WritePoints3DBinary(const std::string& path)
    """
    with open(path_to_model_file, "wb") as fid:
        write_next_bytes(fid, len(points3D), "Q")
        for _, pt in points3D.items():
            write_next_bytes(fid, pt.id, "Q")
            write_next_bytes(fid, pt.xyz.tolist(), "ddd")
            write_next_bytes(fid, pt.rgb.tolist(), "BBB")
            write_next_bytes(fid, pt.error, "d")
            track_length = pt.image_ids.shape[0]
            write_next_bytes(fid, track_length, "Q")
            for image_id, point2D_id in zip(pt.image_ids, pt.point2D_idxs):
                write_next_bytes(fid, [image_id, point2D_id], "ii")



================================================
FILE: field_construction/scene/dataset_readers.py
================================================
#
# Copyright (C) 2023, Inria
# GRAPHDECO research group, https://team.inria.fr/graphdeco
# All rights reserved.
#
# This software is free for non-commercial, research and evaluation use
# under the terms of the LICENSE.md file.
#
# For inquiries contact george.drettakis@inria.fr
#

import json
import os
import sys
from pathlib import Path
from typing import NamedTuple

import numpy as np
import open3d as o3d
from PIL import Image
from plyfile import PlyData, PlyElement
from scipy.spatial.transform import Rotation as R

# NOTE: the COLMAP `Image` record is imported under an alias so it does not
# shadow `PIL.Image`, which `readCamerasFromTransforms` relies on.
from field_construction.scene.colmap_loader import (Camera,
                                                    Image as ColmapImage,
                                                    qvec2rotmat,
                                                    read_extrinsics_binary,
                                                    read_extrinsics_text,
                                                    read_intrinsics_binary,
                                                    read_intrinsics_text,
                                                    read_points3D_binary,
                                                    read_points3D_text)
from field_construction.scene.gaussian_model import BasicPointCloud
from field_construction.utils.graphics_utils import (focal2fov, fov2focal,
                                                     getWorld2View2)
from field_construction.utils.sh_utils import SH2RGB


class CameraInfo(NamedTuple):
    uid: int
    global_id: int
    R: np.array
    T: np.array
    FovY: np.array
    FovX: np.array
    image_path: str
    image_name: str
    width: int
    height: int
    fx: float
    fy: float
    image: object = None  # optional pre-loaded PIL image (used by the Blender loader)


class SceneInfo(NamedTuple):
    point_cloud: BasicPointCloud
    train_cameras: list
    test_cameras: list
    nerf_normalization: dict
    ply_path: str


def getNerfppNorm(cam_info):
    def get_center_and_diag(cam_centers):
        cam_centers = np.hstack(cam_centers)
        avg_cam_center = np.mean(cam_centers, axis=1, keepdims=True)
        center = avg_cam_center
        dist = np.linalg.norm(cam_centers - center, axis=0, keepdims=True)
        diagonal = np.max(dist)
        return center.flatten(), diagonal

    cam_centers = []
    for cam in cam_info:
        W2C = getWorld2View2(cam.R, cam.T)
        C2W = np.linalg.inv(W2C)
        cam_centers.append(C2W[:3, 3:4])

    center, diagonal = get_center_and_diag(cam_centers)
    radius = diagonal * 1.1
    translate = -center
    return {"translate": translate, "radius": radius}


def load_poses(pose_path, num):
    poses = []
    with open(pose_path, "r") as f:
        lines = f.readlines()
    for i in range(num):
        line = lines[i]
        c2w = np.array(list(map(float, line.split()))).reshape(4, 4)
        c2w[:3, 3] = c2w[:3, 3] * 10.0
        w2c = np.linalg.inv(c2w)
        poses.append(w2c)
    poses = np.stack(poses, axis=0)
    return poses


def readColmapCameras(cam_extrinsics, cam_intrinsics, images_folder):
    cam_infos = []
    for idx, key in enumerate(cam_extrinsics):
        sys.stdout.write('\r')
        sys.stdout.write("Reading camera {}/{}".format(idx + 1, len(cam_extrinsics)))
        sys.stdout.flush()

        extr = cam_extrinsics[key]
        intr = cam_intrinsics[extr.camera_id]
        height = intr.height
        width = intr.width

        uid = intr.id
        R = np.transpose(qvec2rotmat(extr.qvec))
        T = np.array(extr.tvec)

        if intr.model == "SIMPLE_PINHOLE":
            focal_length_x = intr.params[0]
            focal_length_y = focal_length_x  # single shared focal length
            FovY = focal2fov(focal_length_x, height)
            FovX = focal2fov(focal_length_x, width)
        elif intr.model == "PINHOLE":
            focal_length_x = intr.params[0]
            focal_length_y = intr.params[1]
            FovY = focal2fov(focal_length_y, height)
            FovX = focal2fov(focal_length_x, width)
        else:
            assert False, "Colmap camera model not handled: only undistorted datasets (PINHOLE or SIMPLE_PINHOLE cameras) supported!"

        image_path = os.path.join(images_folder, os.path.basename(extr.name))
        image_name = os.path.basename(image_path).split(".")[0]
        cam_info = CameraInfo(uid=uid, global_id=idx, R=R, T=T, FovY=FovY, FovX=FovX,
                              image_path=image_path, image_name=image_name,
                              width=width, height=height,
                              fx=focal_length_x, fy=focal_length_y)
        cam_infos.append(cam_info)
    sys.stdout.write('\n')
    return cam_infos


def fetchPly_o3d(path):
    pcd = o3d.io.read_point_cloud(path)
    positions = np.asarray(pcd.points)
    colors = np.asarray(pcd.colors)
    normals = np.zeros_like(positions)
    return BasicPointCloud(points=positions, colors=colors, normals=normals)


def fetchPly(path):
    plydata = PlyData.read(path)
    vertices = plydata['vertex']
    positions = np.vstack([vertices['x'], vertices['y'], vertices['z']]).T
    colors = np.vstack([vertices['red'], vertices['green'], vertices['blue']]).T / 255.0
    normals = np.vstack([vertices['nx'], vertices['ny'], vertices['nz']]).T
    return BasicPointCloud(points=positions, colors=colors, normals=normals)


def storePly(path, xyz, rgb):
    # Define the dtype for the structured array
    dtype = [('x', 'f4'), ('y', 'f4'), ('z', 'f4'),
             ('nx', 'f4'), ('ny', 'f4'), ('nz', 'f4'),
             ('red', 'u1'), ('green', 'u1'), ('blue', 'u1')]

    normals = np.zeros_like(xyz)

    elements = np.empty(xyz.shape[0], dtype=dtype)
    attributes = np.concatenate((xyz, normals, rgb), axis=1)
    elements[:] = list(map(tuple, attributes))

    # Create the PlyData object and write to file
    vertex_element = PlyElement.describe(elements, 'vertex')
    ply_data = PlyData([vertex_element])
    ply_data.write(path)


def readColmapSceneInfo(path, images, eval, llffhold=10, loaded_iter=None):
    try:
        cameras_extrinsic_file = os.path.join(path, "sparse/0", "images.txt")
        cameras_intrinsic_file = os.path.join(path, "sparse/0", "cameras.txt")
        cam_extrinsics = read_extrinsics_text(cameras_extrinsic_file)
        cam_intrinsics = read_intrinsics_text(cameras_intrinsic_file)
    except:
        cameras_extrinsic_file = os.path.join(path, "sparse/0", "images.bin")
        cameras_intrinsic_file = os.path.join(path, "sparse/0", "cameras.bin")
        cam_extrinsics = read_extrinsics_binary(cameras_extrinsic_file)
        cam_intrinsics = read_intrinsics_binary(cameras_intrinsic_file)

    reading_dir = "input" if images is None else images
    cam_infos_unsorted = readColmapCameras(cam_extrinsics=cam_extrinsics,
                                           cam_intrinsics=cam_intrinsics,
                                           images_folder=os.path.join(path, reading_dir))
    # cam_infos = sorted(cam_infos_unsorted.copy(), key=lambda x: int(x.image_name.split('_')[-1]))
    cam_infos = sorted(cam_infos_unsorted.copy(), key=lambda x: x.image_name)

    js_file = f"{path}/split.json"
    train_list = None
    test_list = None
    if os.path.exists(js_file):
        with open(js_file) as file:
            meta = json.load(file)
            train_list = meta["train"]
            test_list = meta["test"]
            print(f"train_list {len(train_list)}, test_list {len(test_list)}")

    if train_list is not None:
        train_cam_infos = [c for idx, c in enumerate(cam_infos) if c.image_name in train_list]
        test_cam_infos = [c for idx, c in enumerate(cam_infos) if c.image_name in test_list]
        print(f"train_cam_infos {len(train_cam_infos)}, test_cam_infos {len(test_cam_infos)}")
    elif eval:
        train_cam_infos = [c for idx, c in enumerate(cam_infos) if idx % llffhold != 0]
        test_cam_infos = [c for idx, c in enumerate(cam_infos) if idx % llffhold == 0]
        print("train_cam_infos: ", len(train_cam_infos))
        print("test_cam_infos: ", len(test_cam_infos))
    else:
        train_cam_infos = cam_infos
        test_cam_infos = []
        print("only train_cam_infos: ", len(train_cam_infos))

    nerf_normalization = getNerfppNorm(train_cam_infos)

    ply_path = os.path.join(path, "sparse/0/points3D.ply")
    bin_path = os.path.join(path, "sparse/0/points3D.bin")
    txt_path = os.path.join(path, "sparse/0/points3D.txt")
    if not loaded_iter:
        if not os.path.exists(ply_path):
            print("Converting point3d.bin to .ply, will happen only the first time you open the scene.")
            try:
                xyz, rgb, _ = read_points3D_binary(bin_path)
                print(f"xyz {xyz.shape}")
            except:
                xyz, rgb, _ = read_points3D_text(txt_path)
            storePly(ply_path, xyz, rgb)
        try:
            pcd = fetchPly(ply_path)
        except:
            pcd = None
    else:
        pcd = None

    scene_info = SceneInfo(point_cloud=pcd,
                           train_cameras=train_cam_infos,
                           test_cameras=test_cam_infos,
                           nerf_normalization=nerf_normalization,
                           ply_path=ply_path)
    return scene_info


def read_camera_npz(camera_dir):
    images = {}
    cameras = {}
    for file_name in sorted(os.listdir(camera_dir)):
        if not file_name.endswith(".npz"):
            continue
        file_path = os.path.join(camera_dir, file_name)
        data = np.load(file_path)
        pose = data["pose"]
        intrinsics = data["intrinsics"]

        # Convert the camera-to-world pose into COLMAP's world-to-camera convention.
        R_c2w = pose[:3, :3]
        t_c2w = pose[:3, 3]
        R_w2c = R_c2w.T
        t_w2c = -R_w2c @ t_c2w
        rotation = R.from_matrix(R_w2c)
        quat = rotation.as_quat()  # scipy returns (x, y, z, w)
        qvec = np.array([quat[3], quat[0], quat[1], quat[2]])  # COLMAP expects (w, x, y, z)
        tvec = t_w2c

        fx = intrinsics[0, 0]
        fy = intrinsics[1, 1]
        cx = intrinsics[0, 2]
        cy = intrinsics[1, 2]
        model_name = 'PINHOLE'
        params = np.array([fx, fy, cx, cy], dtype=np.float64)
        # Image size is recovered assuming a centered principal point.
        width = int(cx * 2)
        height = int(cy * 2)

        try:
            image_id = int(os.path.splitext(file_name)[0])
        except:
            image_id = int(os.path.splitext(file_name.split("_")[1])[0])
        camera_id = image_id

        cameras[camera_id] = Camera(
            id=camera_id, model=model_name,
            width=width, height=height, params=params
        )
        image_name = os.path.splitext(file_name)[0] + ".png"
        images[image_id] = ColmapImage(
            id=image_id, qvec=qvec, tvec=tvec,
            camera_id=camera_id, name=image_name,
            xys=np.zeros((0, 2)), point3D_ids=np.zeros(0, dtype=int)
        )
    return images, cameras


def readCUT3RInfo(path, images, eval, llffhold=10, loaded_iter=None):
    cameras_file = os.path.join(path, "camera")
    extrinsics, intrinsics = read_camera_npz(cameras_file)

    reading_dir = "input"
    cam_infos_unsorted = readColmapCameras(cam_extrinsics=extrinsics,
                                           cam_intrinsics=intrinsics,
                                           images_folder=os.path.join(path, reading_dir))
    # cam_infos = sorted(cam_infos_unsorted.copy(), key=lambda x: int(x.image_name.split('_')[-1]))
    cam_infos = sorted(cam_infos_unsorted.copy(), key=lambda x: x.image_name)

    js_file = f"{path}/split.json"
    train_list = None
    test_list = None
    if os.path.exists(js_file):
        with open(js_file) as file:
            meta = json.load(file)
            train_list = meta["train"]
            test_list = meta["test"]
            print(f"train_list {len(train_list)}, test_list {len(test_list)}")

    if train_list is not None:
        train_cam_infos = [c for idx, c in enumerate(cam_infos) if c.image_name in train_list]
        test_cam_infos = [c for idx, c in enumerate(cam_infos) if c.image_name in test_list]
        print(f"train_cam_infos {len(train_cam_infos)}, test_cam_infos {len(test_cam_infos)}")
    elif eval:
        train_cam_infos = [c for idx, c in enumerate(cam_infos) if idx % llffhold != 0]
        test_cam_infos = [c for idx, c in enumerate(cam_infos) if idx % llffhold == 0]
        print("train_cam_infos: ", len(train_cam_infos))
        print("test_cam_infos: ", len(test_cam_infos))
    else:
        train_cam_infos = cam_infos
        test_cam_infos = []
        print("only train_cam_infos: ", len(train_cam_infos))

    nerf_normalization = getNerfppNorm(train_cam_infos)

    ply_path = os.path.join(path, "points3D.ply")
    bin_path = os.path.join(path, "points3D.bin")
    txt_path = os.path.join(path, "points3D.txt")
    if not loaded_iter:
        if not os.path.exists(ply_path):
            print("Converting point3d.bin to .ply, will happen only the first time you open the scene.")
            try:
                xyz, rgb, _ = read_points3D_binary(bin_path)
                print(f"xyz {xyz.shape}")
            except:
                xyz, rgb, _ = read_points3D_text(txt_path)
            storePly(ply_path, xyz, rgb)
        try:
            pcd = fetchPly_o3d(ply_path)
        except:
            pcd = None
    else:
        pcd = None

    scene_info = SceneInfo(point_cloud=pcd,
                           train_cameras=train_cam_infos,
                           test_cameras=test_cam_infos,
                           nerf_normalization=nerf_normalization,
                           ply_path=ply_path)
    return scene_info


def readCamerasFromTransforms(path, transformsfile, white_background, extension=".png"):
    cam_infos = []

    with open(os.path.join(path, transformsfile)) as json_file:
        contents = json.load(json_file)
        fovx = contents["camera_angle_x"]

        frames = contents["frames"]
        for idx, frame in enumerate(frames):
            cam_name = os.path.join(path, frame["file_path"] + extension)

            # NeRF 'transform_matrix' is a camera-to-world transform
            c2w = np.array(frame["transform_matrix"])
            # change from OpenGL/Blender camera axes (Y up, Z back) to COLMAP (Y down, Z forward)
            c2w[:3, 1:3] *= -1

            # get the world-to-camera transform and set R, T
            w2c = np.linalg.inv(c2w)
            R = np.transpose(w2c[:3, :3])  # R is stored transposed due to 'glm' in CUDA code
            T = w2c[:3, 3]

            image_path = os.path.join(path, cam_name)
            image_name = Path(cam_name).stem
            image = Image.open(image_path)

            im_data = np.array(image.convert("RGBA"))

            bg = np.array([1, 1, 1]) if white_background else np.array([0, 0, 0])

            # Composite the alpha channel over the chosen background.
            norm_data = im_data / 255.0
            arr = norm_data[:, :, :3] * norm_data[:, :, 3:4] + bg * (1 - norm_data[:, :, 3:4])
            image = Image.fromarray(np.array(arr * 255.0, dtype=np.uint8), "RGB")

            fovy = focal2fov(fov2focal(fovx, image.size[0]), image.size[1])
            FovY = fovy
            FovX = fovx

            cam_infos.append(CameraInfo(uid=idx, global_id=idx, R=R, T=T, FovY=FovY, FovX=FovX,
                                        image_path=image_path, image_name=image_name,
                                        width=image.size[0], height=image.size[1],
                                        fx=fov2focal(FovX, image.size[0]),
                                        fy=fov2focal(FovY, image.size[1]),
                                        image=image))

    return cam_infos


def readNerfSyntheticInfo(path, white_background, eval, extension=".png"):
    print("Reading Training Transforms")
    train_cam_infos = readCamerasFromTransforms(path, "transforms_train.json", white_background, extension)
    print("Reading Test Transforms")
    test_cam_infos = readCamerasFromTransforms(path, "transforms_test.json", white_background, extension)

    if not eval:
        train_cam_infos.extend(test_cam_infos)
        test_cam_infos = []

    nerf_normalization = getNerfppNorm(train_cam_infos)

    ply_path = os.path.join(path, "points3d.ply")
    if not os.path.exists(ply_path):
        # Since this data set has no colmap data, we start with random points
        num_pts = 100_000
        print(f"Generating random point cloud ({num_pts})...")

        # We create random points inside the bounds of the synthetic Blender scenes
        xyz = np.random.random((num_pts, 3)) * 2.6 - 1.3
        shs = np.random.random((num_pts, 3)) / 255.0
        pcd = BasicPointCloud(points=xyz, colors=SH2RGB(shs), normals=np.zeros((num_pts, 3)))

        storePly(ply_path, xyz, SH2RGB(shs)
* 255) try: pcd = fetchPly(ply_path) except: pcd = None scene_info = SceneInfo(point_cloud=pcd, train_cameras=train_cam_infos, test_cameras=test_cam_infos, nerf_normalization=nerf_normalization, ply_path=ply_path) return scene_info sceneLoadTypeCallbacks = { "Colmap": readColmapSceneInfo, "Blender" : readNerfSyntheticInfo, "CUT3R": readCUT3RInfo } ================================================ FILE: field_construction/scene/gaussian_model.py ================================================ # # Copyright (C) 2023, Inria # GRAPHDECO research group, https://team.inria.fr/graphdeco # All rights reserved. # # This software is free for non-commercial, research and evaluation use # under the terms of the LICENSE.md file. # # For inquiries contact george.drettakis@inria.fr # import os import numpy as np import torch from plyfile import PlyData, PlyElement from pytorch3d.transforms import quaternion_to_matrix from simple_knn._C import distCUDA2 from torch import nn from field_construction.scene.per_point_adam import PerPointAdam from field_construction.utils.general_utils import (build_rotation, build_scaling, build_scaling_rotation, get_expon_lr_func, inverse_sigmoid, strip_symmetric) from field_construction.utils.graphics_utils import BasicPointCloud from field_construction.utils.pose_utils import get_tensor_from_camera from field_construction.utils.sh_utils import RGB2SH from field_construction.utils.system_utils import mkdir_p def dilate(bin_img, ksize=5): pad = (ksize - 1) // 2 bin_img = torch.nn.functional.pad(bin_img, pad=[pad, pad, pad, pad], mode='reflect') out = torch.nn.functional.max_pool2d(bin_img, kernel_size=ksize, stride=1, padding=0) return out def erode(bin_img, ksize=5): out = 1 - dilate(1 - bin_img, ksize) return out class GaussianModel: def setup_functions(self): def build_covariance_from_scaling_rotation(scaling, scaling_modifier, rotation): L = build_scaling_rotation(scaling_modifier * scaling, rotation) actual_covariance = L @ L.transpose(1, 2) 
symm = strip_symmetric(actual_covariance) return symm self.scaling_activation = torch.exp self.scaling_inverse_activation = torch.log self.covariance_activation = build_covariance_from_scaling_rotation self.opacity_activation = torch.sigmoid self.inverse_opacity_activation = inverse_sigmoid self.rotation_activation = torch.nn.functional.normalize def __init__(self, sh_degree : int): self.active_sh_degree = 0 self.max_sh_degree = sh_degree self._xyz = torch.empty(0) self._knn_f = torch.empty(0) self._features_dc = torch.empty(0) self._features_rest = torch.empty(0) self._scaling = torch.empty(0) self._rotation = torch.empty(0) self._opacity = torch.empty(0) self._language_feature = torch.empty(0) self._instance_feature=torch.empty(0) self.max_radii2D = torch.empty(0) self.max_weight = torch.empty(0) self.xyz_gradient_accum = torch.empty(0) self.xyz_gradient_accum_abs = torch.empty(0) self.denom = torch.empty(0) self.denom_abs = torch.empty(0) self.optimizer = None self.cam_optimizer = None self.percent_dense = 0 self.spatial_lr_scale = 0 self.knn_dists = None self.knn_idx = None self.setup_functions() self.use_app = False def capture(self, include_feature=False): if include_feature: return ( self.active_sh_degree, self._xyz, self._knn_f, self._features_dc, self._features_rest, self._scaling, self._rotation, self._opacity, self._language_feature, self._instance_feature, self.max_radii2D, self.max_weight, self.xyz_gradient_accum, self.xyz_gradient_accum_abs, self.denom, self.denom_abs, self.optimizer.state_dict(), self.cam_optimizer.state_dict(), self.spatial_lr_scale, self.P ) else: return ( self.active_sh_degree, self._xyz, self._knn_f, self._features_dc, self._features_rest, self._scaling, self._rotation, self._opacity, self.max_radii2D, self.max_weight, self.xyz_gradient_accum, self.xyz_gradient_accum_abs, self.denom, self.denom_abs, self.optimizer.state_dict(), self.cam_optimizer.state_dict(), self.spatial_lr_scale, self.P ) def restore(self, model_args, 
training_args, mode='train'): # Ckpt with training feature (20 arguments) if len(model_args) == 20: (self.active_sh_degree, self._xyz, self._knn_f, self._features_dc, self._features_rest, self._scaling, self._rotation, self._opacity, self._language_feature, # Added training feature: language feature self._instance_feature, # Added training feature: instance feature self.max_radii2D, self.max_weight, xyz_gradient_accum, xyz_gradient_accum_abs, denom, denom_abs, opt_dict, cam_opt_dict, self.spatial_lr_scale, self.P ) = model_args # Ckpt without training feature (18 arguments) elif len(model_args) == 18: (self.active_sh_degree, self._xyz, self._knn_f, self._features_dc, self._features_rest, self._scaling, self._rotation, self._opacity, self.max_radii2D, self.max_weight, xyz_gradient_accum, xyz_gradient_accum_abs, denom, denom_abs, opt_dict, cam_opt_dict, self.spatial_lr_scale, self.P ) = model_args if mode == 'train': if isinstance(self.optimizer, PerPointAdam): self.training_setup_pp(training_args) else: self.training_setup(training_args) self.xyz_gradient_accum = xyz_gradient_accum self.xyz_gradient_accum_abs = xyz_gradient_accum_abs self.denom = denom self.denom_abs = denom_abs self.optimizer.load_state_dict(opt_dict) self.cam_optimizer.load_state_dict(cam_opt_dict) @property def get_scaling(self): return self.scaling_activation(self._scaling) @property def get_rotation(self): return self.rotation_activation(self._rotation) @property def get_xyz(self): return self._xyz @property def get_features(self): features_dc = self._features_dc features_rest = self._features_rest return torch.cat((features_dc, features_rest), dim=1) @property def get_opacity(self): return self.opacity_activation(self._opacity) @property def get_language_feature(self): return self._language_feature @property def get_instance_feature(self): return self._instance_feature def get_smallest_axis(self, return_idx=False): rotation_matrices = self.get_rotation_matrix() smallest_axis_idx = 
self.get_scaling.min(dim=-1)[1][..., None, None].expand(-1, 3, -1) smallest_axis = rotation_matrices.gather(2, smallest_axis_idx) if return_idx: return smallest_axis.squeeze(dim=2), smallest_axis_idx[..., 0, 0] return smallest_axis.squeeze(dim=2) def get_normal(self, view_cam): normal_global = self.get_smallest_axis() gaussian_to_cam_global = view_cam.camera_center - self._xyz neg_mask = (normal_global * gaussian_to_cam_global).sum(-1) < 0.0 normal_global[neg_mask] = -normal_global[neg_mask] return normal_global def init_RT_seq(self, cam_list): poses =[] index_mapping = {} for cam_idx, cam in enumerate(cam_list[1.0]): p = get_tensor_from_camera(cam.world_view_transform.transpose(0, 1)) # R T -> quat t poses.append(p) index_mapping[cam.uid] = cam_idx poses = torch.stack(poses) self.index_mapping = index_mapping self.P = poses.cuda().requires_grad_(True) def get_RT(self, idx): pose = self.P[idx] return pose def get_RT_test(self, idx): pose = self.test_P[idx] return pose def get_rotation_matrix(self): return quaternion_to_matrix(self.get_rotation) def get_covariance(self, scaling_modifier = 1): return self.covariance_activation(self.get_scaling, scaling_modifier, self._rotation) def oneupSHdegree(self): if self.active_sh_degree < self.max_sh_degree: self.active_sh_degree += 1 def create_from_pcd(self, pcd : BasicPointCloud, spatial_lr_scale : float): self.spatial_lr_scale = spatial_lr_scale fused_point_cloud = torch.tensor(np.asarray(pcd.points)).float().cuda() fused_color = RGB2SH(torch.tensor(np.asarray(pcd.colors)).float().cuda()) features = torch.zeros((fused_color.shape[0], 3, (self.max_sh_degree + 1) ** 2)).float().cuda() features[:, :3, 0 ] = fused_color features[:, 3:, 1:] = 0.0 print("Number of points at initialisation : ", fused_point_cloud.shape[0]) dist = torch.sqrt(torch.clamp_min(distCUDA2(torch.from_numpy(np.asarray(pcd.points)).float().cuda()), 0.0000001)) # print(f"new scale {torch.quantile(dist, 0.1)}") scales = torch.log(dist)[...,None].repeat(1, 3) 
rots = torch.zeros((fused_point_cloud.shape[0], 4), device="cuda") rots[:, 0] = 1 opacities = inverse_sigmoid(0.1 * torch.ones((fused_point_cloud.shape[0], 1), dtype=torch.float, device="cuda")) knn_f = torch.randn((fused_point_cloud.shape[0], 6)).float().cuda() self._xyz = nn.Parameter(fused_point_cloud.requires_grad_(True)) self._knn_f = nn.Parameter(knn_f.requires_grad_(True)) self._features_dc = nn.Parameter(features[:,:,0:1].transpose(1, 2).contiguous().requires_grad_(True)) self._features_rest = nn.Parameter(features[:,:,1:].transpose(1, 2).contiguous().requires_grad_(True)) self._scaling = nn.Parameter(scales.requires_grad_(True)) self._rotation = nn.Parameter(rots.requires_grad_(True)) self._opacity = nn.Parameter(opacities.requires_grad_(True)) self.max_radii2D = torch.zeros((self.get_xyz.shape[0]), device="cuda") self.max_weight = torch.zeros((self.get_xyz.shape[0]), device="cuda") language_feature = torch.zeros((fused_point_cloud.shape[0], 3), device="cuda") self._language_feature = nn.Parameter(language_feature.requires_grad_(True)).requires_grad_(True) # dont train feature at first # NOTE for instance distinguish instance_feature = torch.zeros((fused_point_cloud.shape[0], 3), device="cuda") self._instance_feature = nn.Parameter(instance_feature.requires_grad_(False)).requires_grad_(False) # just train feature at last def training_setup(self, training_args, device): self.percent_dense = training_args.percent_dense self.xyz_gradient_accum = torch.zeros((self.get_xyz.shape[0], 1), device=device) self.xyz_gradient_accum_abs = torch.zeros((self.get_xyz.shape[0], 1), device=device) self.denom = torch.zeros((self.get_xyz.shape[0], 1), device=device) self.denom_abs = torch.zeros((self.get_xyz.shape[0], 1), device=device) self.abs_split_radii2D_threshold = training_args.abs_split_radii2D_threshold self.max_abs_split_points = training_args.max_abs_split_points self.max_all_points = training_args.max_all_points l = [ {'params': [self._xyz], 'lr': 
             training_args.position_lr_init * self.spatial_lr_scale, "name": "xyz"},
            {'params': [self._knn_f], 'lr': 0.01, "name": "knn_f"},
            {'params': [self._features_dc], 'lr': training_args.feature_lr, "name": "f_dc"},
            {'params': [self._features_rest], 'lr': training_args.feature_lr / 20.0, "name": "f_rest"},
            {'params': [self._opacity], 'lr': training_args.opacity_lr, "name": "opacity"},
            {'params': [self._scaling], 'lr': training_args.scaling_lr, "name": "scaling"},
            {'params': [self._rotation], 'lr': training_args.rotation_lr, "name": "rotation"},
            {'params': [self._language_feature], 'lr': training_args.language_feature_lr, "name": "language_feature"},  # semantic
            {'params': [self._instance_feature], 'lr': training_args.language_feature_lr, "name": "instance_feature"},  # instance
        ]
        l_cam = [{'params': [self.P], 'lr': training_args.rotation_lr * 0.1, "name": "pose"},]
        # l += l_cam
        self.optimizer = torch.optim.Adam(l, lr=0.0, eps=1e-15)
        self.cam_optimizer = torch.optim.Adam(l_cam, lr=0.0, eps=1e-15)
        self.xyz_scheduler_args = get_expon_lr_func(lr_init=training_args.position_lr_init * self.spatial_lr_scale,
                                                    lr_final=training_args.position_lr_final * self.spatial_lr_scale,
                                                    lr_delay_mult=training_args.position_lr_delay_mult,
                                                    max_steps=training_args.position_lr_max_steps)
        self.cam_scheduler_args = get_expon_lr_func(lr_init=training_args.rotation_lr * 0.1,
                                                    lr_final=training_args.rotation_lr * 0.001,
                                                    lr_delay_mult=training_args.position_lr_delay_mult,
                                                    max_steps=training_args.iterations)

    # per-point optimizer
    def training_setup_pp(self, training_args, confidence_lr=None, device="cuda"):
        self.percent_dense = training_args.percent_dense
        self.xyz_gradient_accum = torch.zeros((self.get_xyz.shape[0], 1), device=device)
        self.xyz_gradient_accum_abs = torch.zeros((self.get_xyz.shape[0], 1), device=device)
        self.denom = torch.zeros((self.get_xyz.shape[0], 1), device=device)
        self.denom_abs = torch.zeros((self.get_xyz.shape[0], 1), device=device)
        self.abs_split_radii2D_threshold = training_args.abs_split_radii2D_threshold
        self.max_abs_split_points = training_args.max_abs_split_points
        self.max_all_points = training_args.max_all_points
        self.per_point_lr = confidence_lr
        l = [
            {'params': [self._xyz], 'per_point_lr': self.per_point_lr, 'lr': training_args.position_lr_init * self.spatial_lr_scale, "name": "xyz"},
            {'params': [self._knn_f], 'lr': 0.01, "name": "knn_f"},
            {'params': [self._features_dc], 'lr': training_args.feature_lr, "name": "f_dc"},
            {'params': [self._features_rest], 'lr': training_args.feature_lr / 20.0, "name": "f_rest"},
            {'params': [self._opacity], 'lr': training_args.opacity_lr, "name": "opacity"},
            {'params': [self._scaling], 'lr': training_args.scaling_lr, "name": "scaling"},
            {'params': [self._rotation], 'lr': training_args.rotation_lr, "name": "rotation"},
            {'params': [self._language_feature], 'lr': training_args.language_feature_lr, "name": "language_feature"},  # semantic
            {'params': [self._instance_feature], 'lr': training_args.language_feature_lr, "name": "instance_feature"},  # instance
        ]
        l_cam = [{'params': [self.P], 'lr': training_args.rotation_lr * 0.1, "name": "pose"},]
        # l += l_cam
        self.optimizer = PerPointAdam(l, lr=0, betas=(0.9, 0.999), eps=1e-15, weight_decay=0.0)
        self.cam_optimizer = torch.optim.Adam(l_cam, lr=0.0, eps=1e-15)
        self.xyz_scheduler_args = get_expon_lr_func(lr_init=training_args.position_lr_init * self.spatial_lr_scale,
                                                    lr_final=training_args.position_lr_final * self.spatial_lr_scale,
                                                    lr_delay_mult=training_args.position_lr_delay_mult,
                                                    max_steps=training_args.position_lr_max_steps)
        self.cam_scheduler_args = get_expon_lr_func(lr_init=training_args.rotation_lr * 0.1,
                                                    lr_final=training_args.rotation_lr * 0.001,
                                                    lr_delay_mult=training_args.position_lr_delay_mult,
                                                    max_steps=training_args.iterations)

    def clip_grad(self, norm=1.0):
        for group in self.optimizer.param_groups:
            torch.nn.utils.clip_grad_norm_(group["params"][0], norm)

    def update_learning_rate(self, iteration):
        ''' Learning rate scheduling per step '''
        for
param_group in self.cam_optimizer.param_groups:
            if param_group["name"] == "pose":
                lr = self.cam_scheduler_args(iteration)
                param_group['lr'] = lr
        for param_group in self.optimizer.param_groups:
            if param_group["name"] == "xyz":
                lr = self.xyz_scheduler_args(iteration)
                param_group['lr'] = lr

    def construct_list_of_attributes(self, include_feature=False):
        l = ['x', 'y', 'z', 'nx', 'ny', 'nz']
        # All channels except the 3 DC
        for i in range(self._features_dc.shape[1] * self._features_dc.shape[2]):
            l.append('f_dc_{}'.format(i))
        for i in range(self._features_rest.shape[1] * self._features_rest.shape[2]):
            l.append('f_rest_{}'.format(i))
        l.append('opacity')
        for i in range(self._scaling.shape[1]):
            l.append('scale_{}'.format(i))
        for i in range(self._rotation.shape[1]):
            l.append('rot_{}'.format(i))
        if include_feature:
            for i in range(self._language_feature.shape[1]):
                l.append('language_feature_{}'.format(i))
            for i in range(self._instance_feature.shape[1]):
                l.append('instance_feature_{}'.format(i))
        return l

    def save_ply(self, path, mask=None, include_feature=False):
        mkdir_p(os.path.dirname(path))

        xyz = self._xyz.detach().cpu().numpy()
        normals = np.zeros_like(xyz)
        f_dc = self._features_dc.detach().transpose(1, 2).flatten(start_dim=1).contiguous().cpu().numpy()
        f_rest = self._features_rest.detach().transpose(1, 2).flatten(start_dim=1).contiguous().cpu().numpy()
        opacities = self._opacity.detach().cpu().numpy()
        scale = self._scaling.detach().cpu().numpy()
        rotation = self._rotation.detach().cpu().numpy()
        language_feature = self._language_feature.detach().cpu().numpy()
        instance_feature = self._instance_feature.detach().cpu().numpy()

        dtype_full = [(attribute, 'f4') for attribute in self.construct_list_of_attributes(include_feature)]
        elements = np.empty(xyz.shape[0], dtype=dtype_full)
        if include_feature:
            attributes = np.concatenate((xyz, normals, f_dc, f_rest, opacities, scale, rotation, language_feature, instance_feature), axis=1)
        else:
            attributes = np.concatenate((xyz, normals, f_dc, f_rest, opacities, scale, rotation), axis=1)
        elements[:] = list(map(tuple, attributes))
        el = PlyElement.describe(elements, 'vertex')
        PlyData([el]).write(path)

    def reset_opacity(self):
        opacities_new = inverse_sigmoid(torch.min(self.get_opacity, torch.ones_like(self.get_opacity) * 0.01))
        optimizable_tensors = self.replace_tensor_to_optimizer(opacities_new, "opacity")
        self._opacity = optimizable_tensors["opacity"]

    def load_ply(self, path):
        plydata = PlyData.read(path)

        xyz = np.stack((np.asarray(plydata.elements[0]["x"]),
                        np.asarray(plydata.elements[0]["y"]),
                        np.asarray(plydata.elements[0]["z"])), axis=1)
        opacities = np.asarray(plydata.elements[0]["opacity"])[..., np.newaxis]

        features_dc = np.zeros((xyz.shape[0], 3, 1))
        features_dc[:, 0, 0] = np.asarray(plydata.elements[0]["f_dc_0"])
        features_dc[:, 1, 0] = np.asarray(plydata.elements[0]["f_dc_1"])
        features_dc[:, 2, 0] = np.asarray(plydata.elements[0]["f_dc_2"])

        extra_f_names = [p.name for p in plydata.elements[0].properties if p.name.startswith("f_rest_")]
        extra_f_names = sorted(extra_f_names, key=lambda x: int(x.split('_')[-1]))
        assert len(extra_f_names) == 3 * (self.max_sh_degree + 1) ** 2 - 3
        features_extra = np.zeros((xyz.shape[0], len(extra_f_names)))
        for idx, attr_name in enumerate(extra_f_names):
            features_extra[:, idx] = np.asarray(plydata.elements[0][attr_name])
        # Reshape (P,F*SH_coeffs) to (P, F, SH_coeffs except DC)
        features_extra = features_extra.reshape((features_extra.shape[0], 3, (self.max_sh_degree + 1) ** 2 - 1))

        scale_names = [p.name for p in plydata.elements[0].properties if p.name.startswith("scale_")]
        scale_names = sorted(scale_names, key=lambda x: int(x.split('_')[-1]))
        scales = np.zeros((xyz.shape[0], len(scale_names)))
        for idx, attr_name in enumerate(scale_names):
            scales[:, idx] = np.asarray(plydata.elements[0][attr_name])

        rot_names = [p.name for p in plydata.elements[0].properties if p.name.startswith("rot")]
        rot_names = sorted(rot_names, key=lambda x: int(x.split('_')[-1]))
        rots =
np.zeros((xyz.shape[0], len(rot_names)))
        for idx, attr_name in enumerate(rot_names):
            rots[:, idx] = np.asarray(plydata.elements[0][attr_name])

        language_feature_names = [p.name for p in plydata.elements[0].properties if p.name.startswith("language_feature")]
        language_feature_names = sorted(language_feature_names, key=lambda x: int(x.split('_')[-1]))
        language_feature = np.zeros((xyz.shape[0], len(language_feature_names)))
        for idx, attr_name in enumerate(language_feature_names):
            language_feature[:, idx] = np.asarray(plydata.elements[0][attr_name])

        # NOTE instance
        instance_feature_names = [p.name for p in plydata.elements[0].properties if p.name.startswith("instance_feature")]
        instance_feature_names = sorted(instance_feature_names, key=lambda x: int(x.split('_')[-1]))
        instance_feature = np.zeros((xyz.shape[0], len(instance_feature_names)))
        for idx, attr_name in enumerate(instance_feature_names):
            instance_feature[:, idx] = np.asarray(plydata.elements[0][attr_name])

        self._xyz = nn.Parameter(torch.tensor(xyz, dtype=torch.float, device="cuda").requires_grad_(True))
        self._features_dc = nn.Parameter(torch.tensor(features_dc, dtype=torch.float, device="cuda").transpose(1, 2).contiguous().requires_grad_(True))
        self._features_rest = nn.Parameter(torch.tensor(features_extra, dtype=torch.float, device="cuda").transpose(1, 2).contiguous().requires_grad_(True))
        self._opacity = nn.Parameter(torch.tensor(opacities, dtype=torch.float, device="cuda").requires_grad_(True))
        self._scaling = nn.Parameter(torch.tensor(scales, dtype=torch.float, device="cuda").requires_grad_(True))
        self._rotation = nn.Parameter(torch.tensor(rots, dtype=torch.float, device="cuda").requires_grad_(True))
        self._language_feature = nn.Parameter(torch.tensor(language_feature, dtype=torch.float, device="cuda").requires_grad_(False))
        self._instance_feature = nn.Parameter(torch.tensor(instance_feature, dtype=torch.float, device="cuda").requires_grad_(False))

        self.active_sh_degree = self.max_sh_degree

    def replace_tensor_to_optimizer(self, tensor, name):
        optimizable_tensors = {}
        for group in self.optimizer.param_groups:
            if group["name"] == name:
                stored_state = self.optimizer.state.get(group['params'][0], None)
                stored_state["exp_avg"] = torch.zeros_like(tensor)
                stored_state["exp_avg_sq"] = torch.zeros_like(tensor)

                del self.optimizer.state[group['params'][0]]
                group["params"][0] = nn.Parameter(tensor.requires_grad_(True))
                self.optimizer.state[group['params'][0]] = stored_state

                optimizable_tensors[group["name"]] = group["params"][0]
        return optimizable_tensors

    def _prune_optimizer(self, mask):
        optimizable_tensors = {}
        for group in self.optimizer.param_groups:
            stored_state = self.optimizer.state.get(group['params'][0], None)
            if stored_state is not None:
                stored_state["exp_avg"] = stored_state["exp_avg"][mask]
                stored_state["exp_avg_sq"] = stored_state["exp_avg_sq"][mask]

                del self.optimizer.state[group['params'][0]]
                group["params"][0] = nn.Parameter((group["params"][0][mask].requires_grad_(True)))
                self.optimizer.state[group['params'][0]] = stored_state

                optimizable_tensors[group["name"]] = group["params"][0]
            else:
                group["params"][0] = nn.Parameter(group["params"][0][mask].requires_grad_(True))
                optimizable_tensors[group["name"]] = group["params"][0]
        return optimizable_tensors

    def prune_points(self, mask):
        valid_points_mask = ~mask
        optimizable_tensors = self._prune_optimizer(valid_points_mask)

        self._xyz = optimizable_tensors["xyz"]
        self._knn_f = optimizable_tensors["knn_f"]
        self._features_dc = optimizable_tensors["f_dc"]
        self._features_rest = optimizable_tensors["f_rest"]
        self._opacity = optimizable_tensors["opacity"]
        self._scaling = optimizable_tensors["scaling"]
        self._rotation = optimizable_tensors["rotation"]
        self._language_feature = optimizable_tensors["language_feature"]
        self._instance_feature = optimizable_tensors["instance_feature"]

        self.xyz_gradient_accum = self.xyz_gradient_accum[valid_points_mask]
        self.xyz_gradient_accum_abs =
self.xyz_gradient_accum_abs[valid_points_mask]
        self.denom = self.denom[valid_points_mask]
        self.denom_abs = self.denom_abs[valid_points_mask]
        self.max_radii2D = self.max_radii2D[valid_points_mask]
        self.max_weight = self.max_weight[valid_points_mask]

    def cat_tensors_to_optimizer(self, tensors_dict):
        optimizable_tensors = {}
        for group in self.optimizer.param_groups:
            assert len(group["params"]) == 1
            extension_tensor = tensors_dict[group["name"]]
            stored_state = self.optimizer.state.get(group['params'][0], None)
            if stored_state is not None:
                stored_state["exp_avg"] = torch.cat((stored_state["exp_avg"], torch.zeros_like(extension_tensor)), dim=0)
                stored_state["exp_avg_sq"] = torch.cat((stored_state["exp_avg_sq"], torch.zeros_like(extension_tensor)), dim=0)

                del self.optimizer.state[group['params'][0]]
                group["params"][0] = nn.Parameter(torch.cat((group["params"][0], extension_tensor), dim=0).requires_grad_(True))
                self.optimizer.state[group['params'][0]] = stored_state

                optimizable_tensors[group["name"]] = group["params"][0]
            else:
                group["params"][0] = nn.Parameter(torch.cat((group["params"][0], extension_tensor), dim=0).requires_grad_(True))
                optimizable_tensors[group["name"]] = group["params"][0]
        return optimizable_tensors

    def densification_postfix(self, new_xyz, new_knn_f, new_features_dc, new_features_rest, new_opacities, new_scaling, new_rotation, new_language_feature, new_instance_feature):
        d = {"xyz": new_xyz,
             "knn_f": new_knn_f,
             "f_dc": new_features_dc,
             "f_rest": new_features_rest,
             "opacity": new_opacities,
             "scaling": new_scaling,
             "rotation": new_rotation,
             "language_feature": new_language_feature,
             "instance_feature": new_instance_feature,
             }

        optimizable_tensors = self.cat_tensors_to_optimizer(d)
        self._xyz = optimizable_tensors["xyz"]
        self._knn_f = optimizable_tensors["knn_f"]
        self._features_dc = optimizable_tensors["f_dc"]
        self._features_rest = optimizable_tensors["f_rest"]
        self._opacity = optimizable_tensors["opacity"]
        self._scaling = optimizable_tensors["scaling"]
        self._rotation = optimizable_tensors["rotation"]
        self._language_feature = optimizable_tensors["language_feature"]
        self._instance_feature = optimizable_tensors["instance_feature"]

        self.xyz_gradient_accum = torch.zeros((self.get_xyz.shape[0], 1), device="cuda")
        self.xyz_gradient_accum_abs = torch.zeros((self.get_xyz.shape[0], 1), device="cuda")
        self.denom = torch.zeros((self.get_xyz.shape[0], 1), device="cuda")
        self.denom_abs = torch.zeros((self.get_xyz.shape[0], 1), device="cuda")
        self.max_radii2D = torch.zeros((self.get_xyz.shape[0]), device="cuda")
        self.max_weight = torch.zeros((self.get_xyz.shape[0]), device="cuda")

    def densify_and_split(self, grads, grad_threshold, grads_abs, grad_abs_threshold, scene_extent, max_radii2D, N=2):
        n_init_points = self.get_xyz.shape[0]
        # Extract points that satisfy the gradient condition
        padded_grad = torch.zeros((n_init_points), device="cuda")
        padded_grad[:grads.shape[0]] = grads.squeeze()
        padded_grads_abs = torch.zeros((n_init_points), device="cuda")
        padded_grads_abs[:grads_abs.shape[0]] = grads_abs.squeeze()
        padded_max_radii2D = torch.zeros((n_init_points), device="cuda")
        padded_max_radii2D[:max_radii2D.shape[0]] = max_radii2D.squeeze()

        selected_pts_mask = torch.where(padded_grad >= grad_threshold, True, False)
        selected_pts_mask = torch.logical_and(selected_pts_mask,
                                              torch.max(self.get_scaling, dim=1).values > self.percent_dense * scene_extent)
        if selected_pts_mask.sum() + n_init_points > self.max_all_points:
            limited_num = self.max_all_points - n_init_points
            padded_grad[~selected_pts_mask] = 0
            ratio = limited_num / float(n_init_points)
            threshold = torch.quantile(padded_grad, (1.0 - ratio))
            selected_pts_mask = torch.where(padded_grad > threshold, True, False)
            # print(f"split {selected_pts_mask.sum()}, raddi2D {padded_max_radii2D.max()} ,{padded_max_radii2D.median()}")
        else:
            padded_grads_abs[selected_pts_mask] = 0
            mask = (torch.max(self.get_scaling, dim=1).values > self.percent_dense * scene_extent) & (padded_max_radii2D >
self.abs_split_radii2D_threshold)
            padded_grads_abs[~mask] = 0
            selected_pts_mask_abs = torch.where(padded_grads_abs >= grad_abs_threshold, True, False)
            limited_num = min(self.max_all_points - n_init_points - selected_pts_mask.sum(), self.max_abs_split_points)
            if selected_pts_mask_abs.sum() > limited_num:
                ratio = limited_num / float(n_init_points)
                threshold = torch.quantile(padded_grads_abs, (1.0 - ratio))
                selected_pts_mask_abs = torch.where(padded_grads_abs > threshold, True, False)
            selected_pts_mask = torch.logical_or(selected_pts_mask, selected_pts_mask_abs)
            # print(f"split {selected_pts_mask.sum()}, abs {selected_pts_mask_abs.sum()}, raddi2D {padded_max_radii2D.max()} ,{padded_max_radii2D.median()}")

        stds = self.get_scaling[selected_pts_mask].repeat(N, 1)
        means = torch.zeros((stds.size(0), 3), device="cuda")
        samples = torch.normal(mean=means, std=stds)
        rots = build_rotation(self._rotation[selected_pts_mask]).repeat(N, 1, 1)
        new_xyz = torch.bmm(rots, samples.unsqueeze(-1)).squeeze(-1) + self.get_xyz[selected_pts_mask].repeat(N, 1)
        new_scaling = self.scaling_inverse_activation(self.get_scaling[selected_pts_mask].repeat(N, 1) / (0.8 * N))
        new_rotation = self._rotation[selected_pts_mask].repeat(N, 1)
        new_features_dc = self._features_dc[selected_pts_mask].repeat(N, 1, 1)
        new_features_rest = self._features_rest[selected_pts_mask].repeat(N, 1, 1)
        new_opacity = self._opacity[selected_pts_mask].repeat(N, 1)
        new_knn_f = self._knn_f[selected_pts_mask].repeat(N, 1)
        new_language_feature = self._language_feature[selected_pts_mask].repeat(N, 1)
        new_instance_feature = self._instance_feature[selected_pts_mask].repeat(N, 1)

        self.densification_postfix(new_xyz, new_knn_f, new_features_dc, new_features_rest, new_opacity, new_scaling, new_rotation, new_language_feature, new_instance_feature)

        prune_filter = torch.cat((selected_pts_mask, torch.zeros(N * selected_pts_mask.sum(), device="cuda", dtype=bool)))
        self.prune_points(prune_filter)

    def densify_and_clone(self, grads, grad_threshold, scene_extent):
        n_init_points = self.get_xyz.shape[0]
        # Extract points that satisfy the gradient condition
        selected_pts_mask = torch.where(torch.norm(grads, dim=-1) >= grad_threshold, True, False)
        selected_pts_mask = torch.logical_and(selected_pts_mask,
                                              torch.max(self.get_scaling, dim=1).values <= self.percent_dense * scene_extent)
        if selected_pts_mask.sum() + n_init_points > self.max_all_points:
            limited_num = self.max_all_points - n_init_points
            grads_tmp = grads.squeeze().clone()
            grads_tmp[~selected_pts_mask] = 0
            ratio = min(limited_num / float(n_init_points), 1)
            threshold = torch.quantile(grads_tmp, (1.0 - ratio))
            selected_pts_mask = torch.where(grads_tmp > threshold, True, False)

        if selected_pts_mask.sum() > 0:
            # print(f"clone {selected_pts_mask.sum()}")
            new_xyz = self._xyz[selected_pts_mask]
            stds = self.get_scaling[selected_pts_mask]
            means = torch.zeros((stds.size(0), 3), device="cuda")
            samples = torch.normal(mean=means, std=stds)
            rots = build_rotation(self._rotation[selected_pts_mask])
            new_xyz = torch.bmm(rots, samples.unsqueeze(-1)).squeeze(-1) + self.get_xyz[selected_pts_mask]

            new_features_dc = self._features_dc[selected_pts_mask]
            new_features_rest = self._features_rest[selected_pts_mask]
            new_opacities = self._opacity[selected_pts_mask]
            new_scaling = self._scaling[selected_pts_mask]
            new_rotation = self._rotation[selected_pts_mask]
            new_knn_f = self._knn_f[selected_pts_mask]
            new_language_feature = self._language_feature[selected_pts_mask]
            new_instance_feature = self._instance_feature[selected_pts_mask]

            self.densification_postfix(new_xyz, new_knn_f, new_features_dc, new_features_rest, new_opacities, new_scaling, new_rotation, new_language_feature, new_instance_feature)

    def densify_and_prune(self, max_grad, abs_max_grad, min_opacity, extent, max_screen_size):
        grads = self.xyz_gradient_accum / self.denom
        grads_abs = self.xyz_gradient_accum_abs / self.denom_abs
        grads[grads.isnan()] = 0.0
        grads_abs[grads_abs.isnan()] = 0.0
        max_radii2D = self.max_radii2D.clone()
        self.densify_and_clone(grads, max_grad, extent)
        self.densify_and_split(grads, max_grad, grads_abs, abs_max_grad, extent, max_radii2D)

        prune_mask = (self.get_opacity < min_opacity).squeeze()
        if max_screen_size:
            big_points_vs = self.max_radii2D > max_screen_size
            big_points_ws = self.get_scaling.max(dim=1).values > 0.1 * extent
            prune_mask = torch.logical_or(torch.logical_or(prune_mask, big_points_vs), big_points_ws)
        self.prune_points(prune_mask)
        # print(f"all points {self._xyz.shape[0]}")
        torch.cuda.empty_cache()

    def add_densification_stats(self, viewspace_point_tensor, viewspace_point_tensor_abs, update_filter):
        self.xyz_gradient_accum[update_filter] += torch.norm(viewspace_point_tensor.grad[update_filter, :2], dim=-1, keepdim=True)
        self.xyz_gradient_accum_abs[update_filter] += torch.norm(viewspace_point_tensor_abs.grad[update_filter, :2], dim=-1, keepdim=True)
        self.denom[update_filter] += 1
        self.denom_abs[update_filter] += 1

    def get_points_depth_in_depth_map(self, fov_camera, depth, points_in_camera_space, scale=1):
        st = max(int(scale / 2) - 1, 0)
        depth_view = depth[None, :, st::scale, st::scale]
        W, H = int(fov_camera.image_width / scale), int(fov_camera.image_height / scale)
        depth_view = depth_view[:H, :W]
        pts_projections = torch.stack(
            [points_in_camera_space[:, 0] * fov_camera.Fx / points_in_camera_space[:, 2] + fov_camera.Cx,
             points_in_camera_space[:, 1] * fov_camera.Fy / points_in_camera_space[:, 2] + fov_camera.Cy], -1).float() / scale
        mask = (pts_projections[:, 0] > 0) & (pts_projections[:, 0] < W) &\
               (pts_projections[:, 1] > 0) & (pts_projections[:, 1] < H) & (points_in_camera_space[:, 2] > 0.1)

        pts_projections[..., 0] /= ((W - 1) / 2)
        pts_projections[..., 1] /= ((H - 1) / 2)
        pts_projections -= 1
        pts_projections = pts_projections.view(1, -1, 1, 2)
        map_z = torch.nn.functional.grid_sample(input=depth_view,
                                                grid=pts_projections,
                                                mode='bilinear',
                                                padding_mode='border',
                                                align_corners=True)[0, :, :, 0]
        return map_z, mask

    def get_points_from_depth(self, fov_camera, depth, scale=1):
        st = int(max(int(scale / 2) - 1, 0))
        depth_view = depth.squeeze()[st::scale, st::scale]
        rays_d = fov_camera.get_rays(scale=scale)
        depth_view = depth_view[:rays_d.shape[0], :rays_d.shape[1]]
        pts = (rays_d * depth_view[..., None]).reshape(-1, 3)
        R = torch.tensor(fov_camera.R).float().cuda()
        T = torch.tensor(fov_camera.T).float().cuda()
        pts = (pts - T) @ R.transpose(-1, -2)
        return pts

    def change_reqiures_grad(self, change, iteration, quiet=True):
        if change == "geometry":
            self._xyz.requires_grad_(True)
            self._knn_f.requires_grad_(True)
            self._features_dc.requires_grad_(True)
            self._features_rest.requires_grad_(True)
            self._scaling.requires_grad_(True)
            self._rotation.requires_grad_(True)
            self._opacity.requires_grad_(True)
            self.P.requires_grad_(True)
            self._language_feature.requires_grad_(False)
            self._instance_feature.requires_grad_(False)
            if not quiet:
                print(f'\n[ITER {iteration}] Training gaussian params')
        elif change == 'semantic':
            self._xyz.requires_grad_(True)
            self._knn_f.requires_grad_(True)
            self._features_dc.requires_grad_(True)
            self._features_rest.requires_grad_(True)
            self._scaling.requires_grad_(True)
            self._rotation.requires_grad_(True)
            self._opacity.requires_grad_(True)
            self.P.requires_grad_(True)
            self._language_feature.requires_grad_(True)
            self._instance_feature.requires_grad_(False)
            if not quiet:
                print(f'\n[ITER {iteration}] Training gaussian params and language feature')
        elif change == 'semantic_only':
            self._xyz.requires_grad_(False)
            self._knn_f.requires_grad_(False)
            self._features_dc.requires_grad_(False)
            self._features_rest.requires_grad_(False)
            self._scaling.requires_grad_(False)
            self._rotation.requires_grad_(False)
            self._opacity.requires_grad_(False)
            self.P.requires_grad_(False)
            self._language_feature.requires_grad_(True)
            self._instance_feature.requires_grad_(False)
            if not quiet:
                print(f'\n[ITER {iteration}] Training language feature')
        elif change == 'instance':
            self._xyz.requires_grad_(False)
            self._knn_f.requires_grad_(False)
            self._features_dc.requires_grad_(False)
            self._features_rest.requires_grad_(False)
            self._scaling.requires_grad_(False)
            self._rotation.requires_grad_(False)
            self._opacity.requires_grad_(False)
            self.P.requires_grad_(False)
            self._language_feature.requires_grad_(False)
            self._instance_feature.requires_grad_(True)
            if not quiet:
                print(f'\n[ITER {iteration}] Training instance feature')
        elif change == "pose_only":
            self._xyz.requires_grad_(False)
            self._knn_f.requires_grad_(False)
            self._features_dc.requires_grad_(False)
            self._features_rest.requires_grad_(False)
            self._scaling.requires_grad_(False)
            self._rotation.requires_grad_(False)
            self._opacity.requires_grad_(False)
            self.P.requires_grad_(True)
            self._language_feature.requires_grad_(False)
            self._instance_feature.requires_grad_(False)
            if not quiet:
                print(f'\n[ITER {iteration}] Training camera pose only')
        elif change == 'finetune':
            self._xyz.requires_grad_(False)
            self._knn_f.requires_grad_(False)
            self._features_dc.requires_grad_(True)
            self._features_rest.requires_grad_(True)
            self._scaling.requires_grad_(False)
            self._rotation.requires_grad_(False)
            self._opacity.requires_grad_(False)
            self.P.requires_grad_(False)
            self._language_feature.requires_grad_(False)
            self._instance_feature.requires_grad_(False)
            if not quiet:
                print(f'\n[ITER {iteration}] finetune')
        else:
            raise ValueError('Unknown type!')



================================================
FILE: field_construction/scene/per_point_adam.py
================================================
import torch
from torch.optim import Optimizer


class PerPointAdam(Optimizer):
    """Implements Adam optimizer with per-point learning rates.

    Allows unique learning rates for each point in specified parameter
    tensors, useful for point cloud optimization.
    Args:
        params: Iterable of parameters to optimize or parameter groups
        lr (float, optional): Default learning rate (default: 1e-3)
        betas (tuple, optional): Coefficients for moving averages (default: (0.9, 0.999))
        eps (float, optional): Term for numerical stability (default: 1e-8)
        weight_decay (float, optional): Weight decay (L2 penalty) (default: 0)
    """

    def __init__(self, params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=0):
        if not all(0.0 <= x for x in [lr, eps, weight_decay]):
            raise ValueError(f"Invalid learning parameters: lr={lr}, eps={eps}, weight_decay={weight_decay}")
        if not all(0.0 <= beta < 1.0 for beta in betas):
            raise ValueError(f"Invalid beta parameters: {betas}")
        defaults = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay, per_point_lr=None)
        super().__init__(params, defaults)

    def _adjust_per_point_lr(self, per_point_lr, grad, mask):
        """Adjusts per-point learning rates based on gradient magnitudes."""
        grad_magnitude = grad.norm(dim=-1)
        scaling_factor = torch.ones_like(grad_magnitude)
        grad_sigmoid = torch.sigmoid(grad_magnitude[mask])
        scaling_factor[mask] = 0.99 + (grad_sigmoid * 0.02)
        return per_point_lr * scaling_factor.unsqueeze(1)

    def step(self, closure=None):
        """Performs a single optimization step."""
        loss = closure() if closure is not None else None

        for group in self.param_groups:
            per_point_lr = group.get('per_point_lr')
            for p in group['params']:
                if p.grad is None:
                    continue
                grad = p.grad.data
                if grad.is_sparse:
                    raise RuntimeError('PerPointAdam does not support sparse gradients')

                # Initialize state if needed
                state = self.state[p]
                if len(state) == 0:
                    state['step'] = 0
                    state['exp_avg'] = torch.zeros_like(p.data)
                    state['exp_avg_sq'] = torch.zeros_like(p.data)

                # Get state values
                exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
                beta1, beta2 = group['betas']
                state['step'] += 1

                # Apply weight decay if specified
                if group['weight_decay'] != 0:
                    grad = grad.add(p.data, alpha=group['weight_decay'])

                # Compute mask for non-zero gradients
                grad_norm = grad.norm()
                mask = grad_norm > 0

                # Update momentum terms
                exp_avg.masked_scatter_(mask, exp_avg[mask].mul_(beta1).add_(grad[mask], alpha=1 - beta1))
                exp_avg_sq.masked_scatter_(mask, exp_avg_sq[mask].mul_(beta2).addcmul_(grad[mask], grad[mask], value=1 - beta2))

                # Compute bias corrections
                bias_correction1 = 1 - beta1 ** state['step']
                bias_correction2 = 1 - beta2 ** state['step']

                # Compute step size
                denom = exp_avg_sq.sqrt().add_(group['eps'])
                step_size = group['lr'] * (bias_correction2 ** 0.5 / bias_correction1)

                # Apply updates
                if per_point_lr is not None:
                    if not isinstance(per_point_lr, torch.Tensor):
                        raise TypeError("per_point_lr must be a torch.Tensor")
                    if per_point_lr.device != p.data.device:
                        raise ValueError("per_point_lr must be on the same device as parameter")
                    expected_shape = p.data.shape[:1] + (1,) * (p.data.dim() - 1)
                    if per_point_lr.shape != expected_shape:
                        raise ValueError(f"{group['name']}: Invalid per_point_lr shape. Expected {expected_shape}, got {per_point_lr.shape}")
                    scaled_step_size = step_size * per_point_lr
                    p.data.add_(-scaled_step_size * (exp_avg / denom))
                    per_point_lr = self._adjust_per_point_lr(per_point_lr, grad, mask)
                else:
                    p.data.addcdiv_(exp_avg, denom, value=-step_size)

        return loss



================================================
FILE: field_construction/submodules/diff-langsurf-rasterizer/CMakeLists.txt
================================================
#
# Copyright (C) 2023, Inria
# GRAPHDECO research group, https://team.inria.fr/graphdeco
# All rights reserved.
#
# This software is free for non-commercial, research and evaluation use
# under the terms of the LICENSE.md file.
#
# For inquiries contact george.drettakis@inria.fr
#

cmake_minimum_required(VERSION 3.20)

project(DiffRast LANGUAGES CUDA CXX)

set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_EXTENSIONS OFF)
set(CMAKE_CUDA_STANDARD 17)

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")

add_library(CudaRasterizer
    cuda_rasterizer/backward.h
    cuda_rasterizer/backward.cu
    cuda_rasterizer/forward.h
    cuda_rasterizer/forward.cu
    cuda_rasterizer/auxiliary.h
    cuda_rasterizer/rasterizer_impl.cu
    cuda_rasterizer/rasterizer_impl.h
    cuda_rasterizer/rasterizer.h
)

set_target_properties(CudaRasterizer PROPERTIES CUDA_ARCHITECTURES "70;75;86")

target_include_directories(CudaRasterizer PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/cuda_rasterizer)
target_include_directories(CudaRasterizer PRIVATE third_party/glm ${CMAKE_CUDA_TOOLKIT_INCLUDE_DIRECTORIES})



================================================
FILE: field_construction/submodules/diff-langsurf-rasterizer/LICENSE.md
================================================
Gaussian-Splatting License
===========================

**Inria** and **the Max Planck Institut for Informatik (MPII)** hold all the ownership rights on the *Software* named **gaussian-splatting**. The *Software* is in the process of being registered with the Agence pour la Protection des Programmes (APP).

The *Software* is still being developed by the *Licensor*.

*Licensor*'s goal is to allow the research community to use, test and evaluate the *Software*.

## 1. Definitions

*Licensee* means any person or entity that uses the *Software* and distributes its *Work*.

*Licensor* means the owners of the *Software*, i.e Inria and MPII

*Software* means the original work of authorship made available under this License ie gaussian-splatting.

*Work* means the *Software* and any additions to or derivative works of the *Software* that are made available under this License.

## 2. Purpose

This license is intended to define the rights granted to the *Licensee* by Licensors under the *Software*.

## 3.
Rights granted For the above reasons Licensors have decided to distribute the *Software*. Licensors grant non-exclusive rights to use the *Software* for research purposes to research users (both academic and industrial), free of charge, without right to sublicense.. The *Software* may be used "non-commercially", i.e., for research and/or evaluation purposes only. Subject to the terms and conditions of this License, you are granted a non-exclusive, royalty-free, license to reproduce, prepare derivative works of, publicly display, publicly perform and distribute its *Work* and any resulting derivative works in any form. ## 4. Limitations **4.1 Redistribution.** You may reproduce or distribute the *Work* only if (a) you do so under this License, (b) you include a complete copy of this License with your distribution, and (c) you retain without modification any copyright, patent, trademark, or attribution notices that are present in the *Work*. **4.2 Derivative Works.** You may specify that additional or different terms apply to the use, reproduction, and distribution of your derivative works of the *Work* ("Your Terms") only if (a) Your Terms provide that the use limitation in Section 2 applies to your derivative works, and (b) you identify the specific derivative works that are subject to Your Terms. Notwithstanding Your Terms, this License (including the redistribution requirements in Section 3.1) will continue to apply to the *Work* itself. **4.3** Any other use without of prior consent of Licensors is prohibited. Research users explicitly acknowledge having received from Licensors all information allowing to appreciate the adequacy between of the *Software* and their needs and to undertake all necessary precautions for its execution and use. **4.4** The *Software* is provided both as a compiled library file and as source code. 
In case of using the *Software* for a publication or other results obtained through the use of the *Software*, users are strongly encouraged to cite the corresponding publications as explained in the documentation of the *Software*. ## 5. Disclaimer THE USER CANNOT USE, EXPLOIT OR DISTRIBUTE THE *SOFTWARE* FOR COMMERCIAL PURPOSES WITHOUT PRIOR AND EXPLICIT CONSENT OF LICENSORS. YOU MUST CONTACT INRIA FOR ANY UNAUTHORIZED USE: stip-sophia.transfert@inria.fr . ANY SUCH ACTION WILL CONSTITUTE A FORGERY. THIS *SOFTWARE* IS PROVIDED "AS IS" WITHOUT ANY WARRANTIES OF ANY NATURE AND ANY EXPRESS OR IMPLIED WARRANTIES, WITH REGARDS TO COMMERCIAL USE, PROFESSIONNAL USE, LEGAL OR NOT, OR OTHER, OR COMMERCIALISATION OR ADAPTATION. UNLESS EXPLICITLY PROVIDED BY LAW, IN NO EVENT, SHALL INRIA OR THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES, LOSS OF USE, DATA, OR PROFITS OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING FROM, OUT OF OR IN CONNECTION WITH THE *SOFTWARE* OR THE USE OR OTHER DEALINGS IN THE *SOFTWARE*. ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/README.md ================================================ # Differential Gaussian Rasterization for LangSurf Used as the rasterization engine for LangSurf, which can render RGB, depth, normals, language features, and instance-level language features. If you make use of it in your own research, please be so kind to cite us.
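As background for the gradient helpers in `cuda_rasterizer/auxiliary.h` further down, `dnormvdv` implements the directional derivative of the normalization map n(v) = v / ||v||, which the backward pass uses when propagating gradients through normalized view directions. A minimal NumPy sketch (standalone, not part of this repository) that mirrors the closed form and checks it against a finite-difference estimate:

```python
import numpy as np

def dnormvdv(v, dv):
    # Mirrors the float3 dnormvdv helper in auxiliary.h:
    # directional derivative of n(v) = v / ||v|| applied to dv,
    # i.e. (dv * ||v||^2 - v * (v . dv)) / ||v||^3.
    sum2 = np.dot(v, v)
    invsum32 = 1.0 / np.sqrt(sum2 ** 3)
    return (dv * sum2 - v * np.dot(v, dv)) * invsum32

rng = np.random.default_rng(0)
v = rng.normal(size=3)
dv = rng.normal(size=3)

# Central finite-difference estimate of the same directional derivative.
eps = 1e-6
n = lambda x: x / np.linalg.norm(x)
fd = (n(v + eps * dv) - n(v - eps * dv)) / (2 * eps)

assert np.allclose(dnormvdv(v, dv), fd, atol=1e-5)
```

Having this derivative in closed form lets each CUDA thread backpropagate through the normalization in a handful of fused multiply-adds instead of re-evaluating the normalization map.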
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/cuda_rasterizer/auxiliary.h ================================================ /* * Copyright (C) 2023, Inria * GRAPHDECO research group, https://team.inria.fr/graphdeco * All rights reserved. * * This software is free for non-commercial, research and evaluation use * under the terms of the LICENSE.md file.
* * For inquiries contact george.drettakis@inria.fr */ #ifndef CUDA_RASTERIZER_AUXILIARY_H_INCLUDED #define CUDA_RASTERIZER_AUXILIARY_H_INCLUDED #include "config.h" #include "stdio.h" #define BLOCK_SIZE (BLOCK_X * BLOCK_Y) #define NUM_WARPS (BLOCK_SIZE/32) // Spherical harmonics coefficients __device__ const float SH_C0 = 0.28209479177387814f; __device__ const float SH_C1 = 0.4886025119029199f; __device__ const float SH_C2[] = { 1.0925484305920792f, -1.0925484305920792f, 0.31539156525252005f, -1.0925484305920792f, 0.5462742152960396f }; __device__ const float SH_C3[] = { -0.5900435899266435f, 2.890611442640554f, -0.4570457994644658f, 0.3731763325901154f, -0.4570457994644658f, 1.445305721320277f, -0.5900435899266435f }; __forceinline__ __device__ float ndc2Pix(float v, int S) { return ((v + 1.0) * S - 1.0) * 0.5; } __forceinline__ __device__ void getRect(const float2 p, int max_radius, uint2& rect_min, uint2& rect_max, dim3 grid) { rect_min = { min(grid.x, max((int)0, (int)((p.x - max_radius) / BLOCK_X))), min(grid.y, max((int)0, (int)((p.y - max_radius) / BLOCK_Y))) }; rect_max = { min(grid.x, max((int)0, (int)((p.x + max_radius + BLOCK_X - 1) / BLOCK_X))), min(grid.y, max((int)0, (int)((p.y + max_radius + BLOCK_Y - 1) / BLOCK_Y))) }; } __forceinline__ __device__ float3 transformPoint4x3(const float3& p, const float* matrix) { float3 transformed = { matrix[0] * p.x + matrix[4] * p.y + matrix[8] * p.z + matrix[12], matrix[1] * p.x + matrix[5] * p.y + matrix[9] * p.z + matrix[13], matrix[2] * p.x + matrix[6] * p.y + matrix[10] * p.z + matrix[14], }; return transformed; } __forceinline__ __device__ float4 transformPoint4x4(const float3& p, const float* matrix) { float4 transformed = { matrix[0] * p.x + matrix[4] * p.y + matrix[8] * p.z + matrix[12], matrix[1] * p.x + matrix[5] * p.y + matrix[9] * p.z + matrix[13], matrix[2] * p.x + matrix[6] * p.y + matrix[10] * p.z + matrix[14], matrix[3] * p.x + matrix[7] * p.y + matrix[11] * p.z + matrix[15] }; return transformed; 
} __forceinline__ __device__ float3 transformVec4x3(const float3& p, const float* matrix) { float3 transformed = { matrix[0] * p.x + matrix[4] * p.y + matrix[8] * p.z, matrix[1] * p.x + matrix[5] * p.y + matrix[9] * p.z, matrix[2] * p.x + matrix[6] * p.y + matrix[10] * p.z, }; return transformed; } __forceinline__ __device__ float3 transformVec4x3Transpose(const float3& p, const float* matrix) { float3 transformed = { matrix[0] * p.x + matrix[1] * p.y + matrix[2] * p.z, matrix[4] * p.x + matrix[5] * p.y + matrix[6] * p.z, matrix[8] * p.x + matrix[9] * p.y + matrix[10] * p.z, }; return transformed; } __forceinline__ __device__ float dnormvdz(float3 v, float3 dv) { float sum2 = v.x * v.x + v.y * v.y + v.z * v.z; float invsum32 = 1.0f / sqrt(sum2 * sum2 * sum2); float dnormvdz = (-v.x * v.z * dv.x - v.y * v.z * dv.y + (sum2 - v.z * v.z) * dv.z) * invsum32; return dnormvdz; } __forceinline__ __device__ float3 dnormvdv(float3 v, float3 dv) { float sum2 = v.x * v.x + v.y * v.y + v.z * v.z; float invsum32 = 1.0f / sqrt(sum2 * sum2 * sum2); float3 dnormvdv; dnormvdv.x = ((+sum2 - v.x * v.x) * dv.x - v.y * v.x * dv.y - v.z * v.x * dv.z) * invsum32; dnormvdv.y = (-v.x * v.y * dv.x + (sum2 - v.y * v.y) * dv.y - v.z * v.y * dv.z) * invsum32; dnormvdv.z = (-v.x * v.z * dv.x - v.y * v.z * dv.y + (sum2 - v.z * v.z) * dv.z) * invsum32; return dnormvdv; } __forceinline__ __device__ float4 dnormvdv(float4 v, float4 dv) { float sum2 = v.x * v.x + v.y * v.y + v.z * v.z + v.w * v.w; float invsum32 = 1.0f / sqrt(sum2 * sum2 * sum2); float4 vdv = { v.x * dv.x, v.y * dv.y, v.z * dv.z, v.w * dv.w }; float vdv_sum = vdv.x + vdv.y + vdv.z + vdv.w; float4 dnormvdv; dnormvdv.x = ((sum2 - v.x * v.x) * dv.x - v.x * (vdv_sum - vdv.x)) * invsum32; dnormvdv.y = ((sum2 - v.y * v.y) * dv.y - v.y * (vdv_sum - vdv.y)) * invsum32; dnormvdv.z = ((sum2 - v.z * v.z) * dv.z - v.z * (vdv_sum - vdv.z)) * invsum32; dnormvdv.w = ((sum2 - v.w * v.w) * dv.w - v.w * (vdv_sum - vdv.w)) * invsum32; return dnormvdv; 
} __forceinline__ __device__ float sigmoid(float x) { return 1.0f / (1.0f + expf(-x)); } __forceinline__ __device__ bool in_frustum(int idx, const float* orig_points, const float* viewmatrix, const float* projmatrix, bool prefiltered, float3& p_view) { float3 p_orig = { orig_points[3 * idx], orig_points[3 * idx + 1], orig_points[3 * idx + 2] }; // Bring points to screen space float4 p_hom = transformPoint4x4(p_orig, projmatrix); float p_w = 1.0f / (p_hom.w + 0.0000001f); float3 p_proj = { p_hom.x * p_w, p_hom.y * p_w, p_hom.z * p_w }; p_view = transformPoint4x3(p_orig, viewmatrix); if (p_view.z <= 0.2f)// || ((p_proj.x < -1.3 || p_proj.x > 1.3 || p_proj.y < -1.3 || p_proj.y > 1.3))) { if (prefiltered) { printf("Point is filtered although prefiltered is set. This shouldn't happen!"); __trap(); } return false; } return true; } #define CHECK_CUDA(A, debug) \ A; if(debug) { \ auto ret = cudaDeviceSynchronize(); \ if (ret != cudaSuccess) { \ std::cerr << "\n[CUDA ERROR] in " << __FILE__ << "\nLine " << __LINE__ << ": " << cudaGetErrorString(ret); \ throw std::runtime_error(cudaGetErrorString(ret)); \ } \ } #endif ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/cuda_rasterizer/backward.cu ================================================ /* * Copyright (C) 2023, Inria * GRAPHDECO research group, https://team.inria.fr/graphdeco * All rights reserved. * * This software is free for non-commercial, research and evaluation use * under the terms of the LICENSE.md file. * * For inquiries contact george.drettakis@inria.fr */ #include "backward.h" #include "auxiliary.h" #include <cooperative_groups.h> #include <cooperative_groups/reduce.h> namespace cg = cooperative_groups; // Backward pass for conversion of spherical harmonics to RGB for // each Gaussian.
__device__ void computeColorFromSH(int idx, int deg, int max_coeffs, const glm::vec3* means, glm::vec3 campos, const float* shs, const bool* clamped, const glm::vec3* dL_dcolor, glm::vec3* dL_dmeans, glm::vec3* dL_dshs) { // Compute intermediate values, as it is done during forward glm::vec3 pos = means[idx]; glm::vec3 dir_orig = pos - campos; glm::vec3 dir = dir_orig / glm::length(dir_orig); glm::vec3* sh = ((glm::vec3*)shs) + idx * max_coeffs; // Use PyTorch rule for clamping: if clamping was applied, // gradient becomes 0. glm::vec3 dL_dRGB = dL_dcolor[idx]; dL_dRGB.x *= clamped[3 * idx + 0] ? 0 : 1; dL_dRGB.y *= clamped[3 * idx + 1] ? 0 : 1; dL_dRGB.z *= clamped[3 * idx + 2] ? 0 : 1; glm::vec3 dRGBdx(0, 0, 0); glm::vec3 dRGBdy(0, 0, 0); glm::vec3 dRGBdz(0, 0, 0); float x = dir.x; float y = dir.y; float z = dir.z; // Target location for this Gaussian to write SH gradients to glm::vec3* dL_dsh = dL_dshs + idx * max_coeffs; // No tricks here, just high school-level calculus. float dRGBdsh0 = SH_C0; dL_dsh[0] = dRGBdsh0 * dL_dRGB; if (deg > 0) { float dRGBdsh1 = -SH_C1 * y; float dRGBdsh2 = SH_C1 * z; float dRGBdsh3 = -SH_C1 * x; dL_dsh[1] = dRGBdsh1 * dL_dRGB; dL_dsh[2] = dRGBdsh2 * dL_dRGB; dL_dsh[3] = dRGBdsh3 * dL_dRGB; dRGBdx = -SH_C1 * sh[3]; dRGBdy = -SH_C1 * sh[1]; dRGBdz = SH_C1 * sh[2]; if (deg > 1) { float xx = x * x, yy = y * y, zz = z * z; float xy = x * y, yz = y * z, xz = x * z; float dRGBdsh4 = SH_C2[0] * xy; float dRGBdsh5 = SH_C2[1] * yz; float dRGBdsh6 = SH_C2[2] * (2.f * zz - xx - yy); float dRGBdsh7 = SH_C2[3] * xz; float dRGBdsh8 = SH_C2[4] * (xx - yy); dL_dsh[4] = dRGBdsh4 * dL_dRGB; dL_dsh[5] = dRGBdsh5 * dL_dRGB; dL_dsh[6] = dRGBdsh6 * dL_dRGB; dL_dsh[7] = dRGBdsh7 * dL_dRGB; dL_dsh[8] = dRGBdsh8 * dL_dRGB; dRGBdx += SH_C2[0] * y * sh[4] + SH_C2[2] * 2.f * -x * sh[6] + SH_C2[3] * z * sh[7] + SH_C2[4] * 2.f * x * sh[8]; dRGBdy += SH_C2[0] * x * sh[4] + SH_C2[1] * z * sh[5] + SH_C2[2] * 2.f * -y * sh[6] + SH_C2[4] * 2.f * -y * sh[8]; dRGBdz 
+= SH_C2[1] * y * sh[5] + SH_C2[2] * 2.f * 2.f * z * sh[6] + SH_C2[3] * x * sh[7]; if (deg > 2) { float dRGBdsh9 = SH_C3[0] * y * (3.f * xx - yy); float dRGBdsh10 = SH_C3[1] * xy * z; float dRGBdsh11 = SH_C3[2] * y * (4.f * zz - xx - yy); float dRGBdsh12 = SH_C3[3] * z * (2.f * zz - 3.f * xx - 3.f * yy); float dRGBdsh13 = SH_C3[4] * x * (4.f * zz - xx - yy); float dRGBdsh14 = SH_C3[5] * z * (xx - yy); float dRGBdsh15 = SH_C3[6] * x * (xx - 3.f * yy); dL_dsh[9] = dRGBdsh9 * dL_dRGB; dL_dsh[10] = dRGBdsh10 * dL_dRGB; dL_dsh[11] = dRGBdsh11 * dL_dRGB; dL_dsh[12] = dRGBdsh12 * dL_dRGB; dL_dsh[13] = dRGBdsh13 * dL_dRGB; dL_dsh[14] = dRGBdsh14 * dL_dRGB; dL_dsh[15] = dRGBdsh15 * dL_dRGB; dRGBdx += ( SH_C3[0] * sh[9] * 3.f * 2.f * xy + SH_C3[1] * sh[10] * yz + SH_C3[2] * sh[11] * -2.f * xy + SH_C3[3] * sh[12] * -3.f * 2.f * xz + SH_C3[4] * sh[13] * (-3.f * xx + 4.f * zz - yy) + SH_C3[5] * sh[14] * 2.f * xz + SH_C3[6] * sh[15] * 3.f * (xx - yy)); dRGBdy += ( SH_C3[0] * sh[9] * 3.f * (xx - yy) + SH_C3[1] * sh[10] * xz + SH_C3[2] * sh[11] * (-3.f * yy + 4.f * zz - xx) + SH_C3[3] * sh[12] * -3.f * 2.f * yz + SH_C3[4] * sh[13] * -2.f * xy + SH_C3[5] * sh[14] * -2.f * yz + SH_C3[6] * sh[15] * -3.f * 2.f * xy); dRGBdz += ( SH_C3[1] * sh[10] * xy + SH_C3[2] * sh[11] * 4.f * 2.f * yz + SH_C3[3] * sh[12] * 3.f * (2.f * zz - xx - yy) + SH_C3[4] * sh[13] * 4.f * 2.f * xz + SH_C3[5] * sh[14] * (xx - yy)); } } } // The view direction is an input to the computation. View direction // is influenced by the Gaussian's mean, so SHs gradients // must propagate back into 3D position. glm::vec3 dL_ddir(glm::dot(dRGBdx, dL_dRGB), glm::dot(dRGBdy, dL_dRGB), glm::dot(dRGBdz, dL_dRGB)); // Account for normalization of direction float3 dL_dmean = dnormvdv(float3{ dir_orig.x, dir_orig.y, dir_orig.z }, float3{ dL_ddir.x, dL_ddir.y, dL_ddir.z }); // Gradients of loss w.r.t. Gaussian means, but only the portion // that is caused because the mean affects the view-dependent color. 
// Additional mean gradient is accumulated in below methods. dL_dmeans[idx] += glm::vec3(dL_dmean.x, dL_dmean.y, dL_dmean.z); } // Backward version of INVERSE 2D covariance matrix computation // (due to length launched as separate kernel before other // backward steps contained in preprocess) __global__ void computeCov2DCUDA(int P, const float3* means, const int* radii, const float* cov3Ds, const float h_x, float h_y, const float tan_fovx, float tan_fovy, const float* view_matrix, const float* dL_dconics, float3* dL_dmeans, float* dL_dcov) { auto idx = cg::this_grid().thread_rank(); if (idx >= P || !(radii[idx] > 0)) return; // Reading location of 3D covariance for this Gaussian const float* cov3D = cov3Ds + 6 * idx; // Fetch gradients, recompute 2D covariance and relevant // intermediate forward results needed in the backward. float3 mean = means[idx]; float3 dL_dconic = { dL_dconics[4 * idx], dL_dconics[4 * idx + 1], dL_dconics[4 * idx + 3] }; float3 t = transformPoint4x3(mean, view_matrix); const float limx = 1.3f * tan_fovx; const float limy = 1.3f * tan_fovy; const float txtz = t.x / t.z; const float tytz = t.y / t.z; t.x = min(limx, max(-limx, txtz)) * t.z; t.y = min(limy, max(-limy, tytz)) * t.z; const float x_grad_mul = txtz < -limx || txtz > limx ? 0 : 1; const float y_grad_mul = tytz < -limy || tytz > limy ? 0 : 1; glm::mat3 J = glm::mat3(h_x / t.z, 0.0f, -(h_x * t.x) / (t.z * t.z), 0.0f, h_y / t.z, -(h_y * t.y) / (t.z * t.z), 0, 0, 0); glm::mat3 W = glm::mat3( view_matrix[0], view_matrix[4], view_matrix[8], view_matrix[1], view_matrix[5], view_matrix[9], view_matrix[2], view_matrix[6], view_matrix[10]); glm::mat3 Vrk = glm::mat3( cov3D[0], cov3D[1], cov3D[2], cov3D[1], cov3D[3], cov3D[4], cov3D[2], cov3D[4], cov3D[5]); glm::mat3 T = W * J; glm::mat3 cov2D = glm::transpose(T) * glm::transpose(Vrk) * T; // Use helper variables for 2D covariance entries. More compact. 
float a = cov2D[0][0] += 0.3f; float b = cov2D[0][1]; float c = cov2D[1][1] += 0.3f; float denom = a * c - b * b; float dL_da = 0, dL_db = 0, dL_dc = 0; float denom2inv = 1.0f / ((denom * denom) + 0.0000001f); if (denom2inv != 0) { // Gradients of loss w.r.t. entries of 2D covariance matrix, // given gradients of loss w.r.t. conic matrix (inverse covariance matrix). // e.g., dL / da = dL / d_conic_a * d_conic_a / d_a dL_da = denom2inv * (-c * c * dL_dconic.x + 2 * b * c * dL_dconic.y + (denom - a * c) * dL_dconic.z); dL_dc = denom2inv * (-a * a * dL_dconic.z + 2 * a * b * dL_dconic.y + (denom - a * c) * dL_dconic.x); dL_db = denom2inv * 2 * (b * c * dL_dconic.x - (denom + 2 * b * b) * dL_dconic.y + a * b * dL_dconic.z); // Gradients of loss L w.r.t. each 3D covariance matrix (Vrk) entry, // given gradients w.r.t. 2D covariance matrix (diagonal). // cov2D = transpose(T) * transpose(Vrk) * T; dL_dcov[6 * idx + 0] = (T[0][0] * T[0][0] * dL_da + T[0][0] * T[1][0] * dL_db + T[1][0] * T[1][0] * dL_dc); dL_dcov[6 * idx + 3] = (T[0][1] * T[0][1] * dL_da + T[0][1] * T[1][1] * dL_db + T[1][1] * T[1][1] * dL_dc); dL_dcov[6 * idx + 5] = (T[0][2] * T[0][2] * dL_da + T[0][2] * T[1][2] * dL_db + T[1][2] * T[1][2] * dL_dc); // Gradients of loss L w.r.t. each 3D covariance matrix (Vrk) entry, // given gradients w.r.t. 2D covariance matrix (off-diagonal). // Off-diagonal elements appear twice --> double the gradient. // cov2D = transpose(T) * transpose(Vrk) * T; dL_dcov[6 * idx + 1] = 2 * T[0][0] * T[0][1] * dL_da + (T[0][0] * T[1][1] + T[0][1] * T[1][0]) * dL_db + 2 * T[1][0] * T[1][1] * dL_dc; dL_dcov[6 * idx + 2] = 2 * T[0][0] * T[0][2] * dL_da + (T[0][0] * T[1][2] + T[0][2] * T[1][0]) * dL_db + 2 * T[1][0] * T[1][2] * dL_dc; dL_dcov[6 * idx + 4] = 2 * T[0][2] * T[0][1] * dL_da + (T[0][1] * T[1][2] + T[0][2] * T[1][1]) * dL_db + 2 * T[1][1] * T[1][2] * dL_dc; } else { for (int i = 0; i < 6; i++) dL_dcov[6 * idx + i] = 0; } // Gradients of loss w.r.t. 
upper 2x3 portion of intermediate matrix T // cov2D = transpose(T) * transpose(Vrk) * T; float dL_dT00 = 2 * (T[0][0] * Vrk[0][0] + T[0][1] * Vrk[0][1] + T[0][2] * Vrk[0][2]) * dL_da + (T[1][0] * Vrk[0][0] + T[1][1] * Vrk[0][1] + T[1][2] * Vrk[0][2]) * dL_db; float dL_dT01 = 2 * (T[0][0] * Vrk[1][0] + T[0][1] * Vrk[1][1] + T[0][2] * Vrk[1][2]) * dL_da + (T[1][0] * Vrk[1][0] + T[1][1] * Vrk[1][1] + T[1][2] * Vrk[1][2]) * dL_db; float dL_dT02 = 2 * (T[0][0] * Vrk[2][0] + T[0][1] * Vrk[2][1] + T[0][2] * Vrk[2][2]) * dL_da + (T[1][0] * Vrk[2][0] + T[1][1] * Vrk[2][1] + T[1][2] * Vrk[2][2]) * dL_db; float dL_dT10 = 2 * (T[1][0] * Vrk[0][0] + T[1][1] * Vrk[0][1] + T[1][2] * Vrk[0][2]) * dL_dc + (T[0][0] * Vrk[0][0] + T[0][1] * Vrk[0][1] + T[0][2] * Vrk[0][2]) * dL_db; float dL_dT11 = 2 * (T[1][0] * Vrk[1][0] + T[1][1] * Vrk[1][1] + T[1][2] * Vrk[1][2]) * dL_dc + (T[0][0] * Vrk[1][0] + T[0][1] * Vrk[1][1] + T[0][2] * Vrk[1][2]) * dL_db; float dL_dT12 = 2 * (T[1][0] * Vrk[2][0] + T[1][1] * Vrk[2][1] + T[1][2] * Vrk[2][2]) * dL_dc + (T[0][0] * Vrk[2][0] + T[0][1] * Vrk[2][1] + T[0][2] * Vrk[2][2]) * dL_db; // Gradients of loss w.r.t. upper 3x2 non-zero entries of Jacobian matrix // T = W * J float dL_dJ00 = W[0][0] * dL_dT00 + W[0][1] * dL_dT01 + W[0][2] * dL_dT02; float dL_dJ02 = W[2][0] * dL_dT00 + W[2][1] * dL_dT01 + W[2][2] * dL_dT02; float dL_dJ11 = W[1][0] * dL_dT10 + W[1][1] * dL_dT11 + W[1][2] * dL_dT12; float dL_dJ12 = W[2][0] * dL_dT10 + W[2][1] * dL_dT11 + W[2][2] * dL_dT12; float tz = 1.f / t.z; float tz2 = tz * tz; float tz3 = tz2 * tz; // Gradients of loss w.r.t. 
transformed Gaussian mean t float dL_dtx = x_grad_mul * -h_x * tz2 * dL_dJ02; float dL_dty = y_grad_mul * -h_y * tz2 * dL_dJ12; float dL_dtz = -h_x * tz2 * dL_dJ00 - h_y * tz2 * dL_dJ11 + (2 * h_x * t.x) * tz3 * dL_dJ02 + (2 * h_y * t.y) * tz3 * dL_dJ12; // Account for transformation of mean to t // t = transformPoint4x3(mean, view_matrix); float3 dL_dmean = transformVec4x3Transpose({ dL_dtx, dL_dty, dL_dtz }, view_matrix); // Gradients of loss w.r.t. Gaussian means, but only the portion // that is caused because the mean affects the covariance matrix. // Additional mean gradient is accumulated in BACKWARD::preprocess. dL_dmeans[idx] = dL_dmean; } // Backward pass for the conversion of scale and rotation to a // 3D covariance matrix for each Gaussian. __device__ void computeCov3D(int idx, const glm::vec3 scale, float mod, const glm::vec4 rot, const float* dL_dcov3Ds, glm::vec3* dL_dscales, glm::vec4* dL_drots) { // Recompute (intermediate) results for the 3D covariance computation. glm::vec4 q = rot;// / glm::length(rot); float r = q.x; float x = q.y; float y = q.z; float z = q.w; glm::mat3 R = glm::mat3( 1.f - 2.f * (y * y + z * z), 2.f * (x * y - r * z), 2.f * (x * z + r * y), 2.f * (x * y + r * z), 1.f - 2.f * (x * x + z * z), 2.f * (y * z - r * x), 2.f * (x * z - r * y), 2.f * (y * z + r * x), 1.f - 2.f * (x * x + y * y) ); glm::mat3 S = glm::mat3(1.0f); glm::vec3 s = mod * scale; S[0][0] = s.x; S[1][1] = s.y; S[2][2] = s.z; glm::mat3 M = S * R; const float* dL_dcov3D = dL_dcov3Ds + 6 * idx; glm::vec3 dunc(dL_dcov3D[0], dL_dcov3D[3], dL_dcov3D[5]); glm::vec3 ounc = 0.5f * glm::vec3(dL_dcov3D[1], dL_dcov3D[2], dL_dcov3D[4]); // Convert per-element covariance loss gradients to matrix form glm::mat3 dL_dSigma = glm::mat3( dL_dcov3D[0], 0.5f * dL_dcov3D[1], 0.5f * dL_dcov3D[2], 0.5f * dL_dcov3D[1], dL_dcov3D[3], 0.5f * dL_dcov3D[4], 0.5f * dL_dcov3D[2], 0.5f * dL_dcov3D[4], dL_dcov3D[5] ); // Compute loss gradient w.r.t. 
matrix M // dSigma_dM = 2 * M glm::mat3 dL_dM = 2.0f * M * dL_dSigma; glm::mat3 Rt = glm::transpose(R); glm::mat3 dL_dMt = glm::transpose(dL_dM); // Gradients of loss w.r.t. scale glm::vec3* dL_dscale = dL_dscales + idx; dL_dscale->x = glm::dot(Rt[0], dL_dMt[0]); dL_dscale->y = glm::dot(Rt[1], dL_dMt[1]); dL_dscale->z = glm::dot(Rt[2], dL_dMt[2]); dL_dMt[0] *= s.x; dL_dMt[1] *= s.y; dL_dMt[2] *= s.z; // Gradients of loss w.r.t. normalized quaternion glm::vec4 dL_dq; dL_dq.x = 2 * z * (dL_dMt[0][1] - dL_dMt[1][0]) + 2 * y * (dL_dMt[2][0] - dL_dMt[0][2]) + 2 * x * (dL_dMt[1][2] - dL_dMt[2][1]); dL_dq.y = 2 * y * (dL_dMt[1][0] + dL_dMt[0][1]) + 2 * z * (dL_dMt[2][0] + dL_dMt[0][2]) + 2 * r * (dL_dMt[1][2] - dL_dMt[2][1]) - 4 * x * (dL_dMt[2][2] + dL_dMt[1][1]); dL_dq.z = 2 * x * (dL_dMt[1][0] + dL_dMt[0][1]) + 2 * r * (dL_dMt[2][0] - dL_dMt[0][2]) + 2 * z * (dL_dMt[1][2] + dL_dMt[2][1]) - 4 * y * (dL_dMt[2][2] + dL_dMt[0][0]); dL_dq.w = 2 * r * (dL_dMt[0][1] - dL_dMt[1][0]) + 2 * x * (dL_dMt[2][0] + dL_dMt[0][2]) + 2 * y * (dL_dMt[1][2] + dL_dMt[2][1]) - 4 * z * (dL_dMt[1][1] + dL_dMt[0][0]); // Gradients of loss w.r.t. 
unnormalized quaternion float4* dL_drot = (float4*)(dL_drots + idx); *dL_drot = float4{ dL_dq.x, dL_dq.y, dL_dq.z, dL_dq.w };//dnormvdv(float4{ rot.x, rot.y, rot.z, rot.w }, float4{ dL_dq.x, dL_dq.y, dL_dq.z, dL_dq.w }); } // Backward pass of the preprocessing steps, except // for the covariance computation and inversion // (those are handled by a previous kernel call) template <int C> __global__ void preprocessCUDA( int P, int D, int M, const float3* means, const int* radii, const float* shs, const bool* clamped, const glm::vec3* scales, const glm::vec4* rotations, const float scale_modifier, const float* proj, const glm::vec3* campos, const float3* dL_dmean2D, glm::vec3* dL_dmeans, float* dL_dcolor, float* dL_dcov3D, float* dL_dsh, glm::vec3* dL_dscale, glm::vec4* dL_drot) { auto idx = cg::this_grid().thread_rank(); if (idx >= P || !(radii[idx] > 0)) return; float3 m = means[idx]; // Taking care of gradients from the screenspace points float4 m_hom = transformPoint4x4(m, proj); float m_w = 1.0f / (m_hom.w + 0.0000001f); // Compute loss gradient w.r.t. 3D means due to gradients of 2D means // from rendering procedure glm::vec3 dL_dmean; float mul1 = (proj[0] * m.x + proj[4] * m.y + proj[8] * m.z + proj[12]) * m_w * m_w; float mul2 = (proj[1] * m.x + proj[5] * m.y + proj[9] * m.z + proj[13]) * m_w * m_w; dL_dmean.x = (proj[0] * m_w - proj[3] * mul1) * dL_dmean2D[idx].x + (proj[1] * m_w - proj[3] * mul2) * dL_dmean2D[idx].y; dL_dmean.y = (proj[4] * m_w - proj[7] * mul1) * dL_dmean2D[idx].x + (proj[5] * m_w - proj[7] * mul2) * dL_dmean2D[idx].y; dL_dmean.z = (proj[8] * m_w - proj[11] * mul1) * dL_dmean2D[idx].x + (proj[9] * m_w - proj[11] * mul2) * dL_dmean2D[idx].y; // That's the second part of the mean gradient. Previous computation // of cov2D and following SH conversion also affects it.
dL_dmeans[idx] += dL_dmean; // Compute gradient updates due to computing colors from SHs if (shs) computeColorFromSH(idx, D, M, (glm::vec3*)means, *campos, shs, clamped, (glm::vec3*)dL_dcolor, (glm::vec3*)dL_dmeans, (glm::vec3*)dL_dsh); // Compute gradient updates due to computing covariance from scale/rotation if (scales) computeCov3D(idx, scales[idx], scale_modifier, rotations[idx], dL_dcov3D, dL_dscale, dL_drot); } // Backward version of the rendering procedure. template <uint32_t C, uint32_t F, uint32_t F_ins, uint32_t MAP_N> __global__ void __launch_bounds__(BLOCK_X * BLOCK_Y) renderCUDA( const uint2* __restrict__ ranges, const uint32_t* __restrict__ point_list, int W, int H, float fx, float fy, const float* __restrict__ bg_color, const float2* __restrict__ points_xy_image, const float4* __restrict__ conic_opacity, const float* __restrict__ colors, const float* __restrict__ language_feature, const float* __restrict__ language_feature_instance, const float* __restrict__ all_maps, const float* __restrict__ all_map_pixels, const float* __restrict__ final_Ts, const uint32_t* __restrict__ n_contrib, const float* __restrict__ dL_dpixels, const float* __restrict__ dL_dpixels_F, const float* __restrict__ dL_dpixels_F_instance, const float* __restrict__ dL_dout_all_maps, const float* __restrict__ dL_dout_plane_depths, float3* __restrict__ dL_dmean2D, float3* __restrict__ dL_dmean2D_abs, float4* __restrict__ dL_dconic2D, float* __restrict__ dL_dopacity, float* __restrict__ dL_dcolors, float* __restrict__ dL_dlanguage_feature, float* __restrict__ dL_dlanguage_feature_instance, float* __restrict__ dL_dall_map, const bool render_geo, bool include_feature) { // We rasterize again. Compute necessary block info.
auto block = cg::this_thread_block(); const uint32_t horizontal_blocks = (W + BLOCK_X - 1) / BLOCK_X; const uint2 pix_min = { block.group_index().x * BLOCK_X, block.group_index().y * BLOCK_Y }; const uint2 pix_max = { min(pix_min.x + BLOCK_X, W), min(pix_min.y + BLOCK_Y , H) }; const uint2 pix = { pix_min.x + block.thread_index().x, pix_min.y + block.thread_index().y }; const uint32_t pix_id = W * pix.y + pix.x; const float2 pixf = { (float)pix.x, (float)pix.y }; const float2 ray = { (pixf.x - W * 0.5) / fx, (pixf.y - H * 0.5) / fy }; const bool inside = pix.x < W&& pix.y < H; const uint2 range = ranges[block.group_index().y * horizontal_blocks + block.group_index().x]; const int rounds = ((range.y - range.x + BLOCK_SIZE - 1) / BLOCK_SIZE); bool done = !inside; int toDo = range.y - range.x; __shared__ int collected_id[BLOCK_SIZE]; __shared__ float2 collected_xy[BLOCK_SIZE]; __shared__ float4 collected_conic_opacity[BLOCK_SIZE]; __shared__ float collected_colors[C * BLOCK_SIZE]; __shared__ float collected_feature[F * BLOCK_SIZE]; __shared__ float collected_feature_instance[F_ins * BLOCK_SIZE]; __shared__ float collected_all_maps[MAP_N * BLOCK_SIZE]; // In the forward, we stored the final value for T, the // product of all (1 - alpha) factors. const float T_final = inside ? final_Ts[pix_id] : 0; float T = T_final; // We start from the back. The ID of the last contributing // Gaussian is known from each pixel from the forward. uint32_t contributor = toDo; const int last_contributor = inside ? 
n_contrib[pix_id] : 0; float accum_rec[C] = { 0 }; float accum_all_map[MAP_N] = { 0 }; float accum_rec_F[F] = { 0 }; float accum_rec_F_instance[F_ins] = { 0 }; float dL_dpixel[C]; float dL_dout_all_map[MAP_N]; float dL_dpixel_F[F] = { 0 }; float dL_dpixel_F_instance[F_ins] = { 0 }; // float grad_sum = 0; if (inside) { for (int i = 0; i < C; i++) { dL_dpixel[i] = dL_dpixels[i * H * W + pix_id]; // grad_sum += fabs(dL_dpixel[i]); } if (include_feature) { for (int i = 0; i < F; i++) { dL_dpixel_F[i] = dL_dpixels_F[i * H * W + pix_id]; } for (int i = 0; i < F_ins; i++) { dL_dpixel_F_instance[i] = dL_dpixels_F_instance[i * H * W + pix_id]; } } if(render_geo) { for (int i = 0; i < MAP_N; i++) { dL_dout_all_map[i] = dL_dout_all_maps[i * H * W + pix_id]; // grad_sum += fabs(dL_dout_all_map[i]); } const float3 normal = {all_map_pixels[pix_id], all_map_pixels[H * W + pix_id], all_map_pixels[2 * H * W + pix_id]}; const float distance = all_map_pixels[4 * H * W + pix_id]; const float tmp = (normal.x * ray.x + normal.y * ray.y + normal.z + 1.0e-8); dL_dout_all_map[MAP_N-1] += (-dL_dout_plane_depths[pix_id] / tmp); dL_dout_all_map[0] += dL_dout_plane_depths[pix_id] * (distance / (tmp * tmp) * ray.x); dL_dout_all_map[1] += dL_dout_plane_depths[pix_id] * (distance / (tmp * tmp) * ray.y); dL_dout_all_map[2] += dL_dout_plane_depths[pix_id] * (distance / (tmp * tmp)); } } // If grad is too small, skip // if (grad_sum < 0.000001f) { // done = true; // } float last_alpha = 0; float last_color[C] = { 0 }; float last_language_feature[F] = { 0 }; float last_language_feature_instance[F_ins] = { 0 }; float last_all_map[MAP_N] = { 0 }; // Gradient of pixel coordinate w.r.t. normalized // screen-space viewport corrdinates (-1 to 1) const float ddelx_dx = 0.5 * W; const float ddely_dy = 0.5 * H; // Traverse all Gaussians for (int i = 0; i < rounds; i++, toDo -= BLOCK_SIZE) { // Load auxiliary data into shared memory, start in the BACK // and load them in revers order. 
block.sync(); const int progress = i * BLOCK_SIZE + block.thread_rank(); if (range.x + progress < range.y) { const int coll_id = point_list[range.y - progress - 1]; collected_id[block.thread_rank()] = coll_id; collected_xy[block.thread_rank()] = points_xy_image[coll_id]; collected_conic_opacity[block.thread_rank()] = conic_opacity[coll_id]; for (int i = 0; i < C; i++) collected_colors[i * BLOCK_SIZE + block.thread_rank()] = colors[coll_id * C + i]; if (include_feature) { for (int i = 0; i < F; i++) collected_feature[i * BLOCK_SIZE + block.thread_rank()] = language_feature[coll_id * F + i]; for (int i = 0; i < F_ins; i++) collected_feature_instance[i * BLOCK_SIZE + block.thread_rank()] = language_feature_instance[coll_id * F_ins + i]; } if (render_geo) { for (int i = 0; i < MAP_N; i++) collected_all_maps[i * BLOCK_SIZE + block.thread_rank()] = all_maps[coll_id * MAP_N + i]; } } block.sync(); // Iterate over Gaussians for (int j = 0; !done && j < min(BLOCK_SIZE, toDo); j++) { // Keep track of current Gaussian ID. Skip, if this one // is behind the last contributor for this pixel. contributor--; if (contributor >= last_contributor) continue; // Compute blending values, as before. const float2 xy = collected_xy[j]; const float2 d = { xy.x - pixf.x, xy.y - pixf.y }; const float4 con_o = collected_conic_opacity[j]; const float power = -0.5f * (con_o.x * d.x * d.x + con_o.z * d.y * d.y) - con_o.y * d.x * d.y; if (power > 0.0f) continue; const float G = exp(power); const float alpha = min(0.99f, con_o.w * G); if (alpha < 1.0f / 255.0f) continue; T = T / (1.f - alpha); const float dchannel_dcolor = alpha * T; // Propagate gradients to per-Gaussian colors and keep // gradients w.r.t. alpha (blending factor for a Gaussian/pixel // pair). 
float dL_dalpha = 0.0f; const int global_id = collected_id[j]; for (int ch = 0; ch < C; ch++) { const float c = collected_colors[ch * BLOCK_SIZE + j]; // Update last color (to be used in the next iteration) accum_rec[ch] = last_alpha * last_color[ch] + (1.f - last_alpha) * accum_rec[ch]; last_color[ch] = c; const float dL_dchannel = dL_dpixel[ch]; dL_dalpha += (c - accum_rec[ch]) * dL_dchannel; // Update the gradients w.r.t. color of the Gaussian. // Atomic, since this pixel is just one of potentially // many that were affected by this Gaussian. atomicAdd(&(dL_dcolors[global_id * C + ch]), dchannel_dcolor * dL_dchannel); } if (include_feature) { for (int ch = 0; ch < F; ch++) { const float f = collected_feature[ch * BLOCK_SIZE + j]; // Update last color (to be used in the next iteration) accum_rec_F[ch] = last_alpha * last_language_feature[ch] + (1.f - last_alpha) * accum_rec_F[ch]; last_language_feature[ch] = f; const float dL_dchannel_F = dL_dpixel_F[ch]; dL_dalpha += (f - accum_rec_F[ch]) * dL_dchannel_F; // Update the gradients w.r.t. color of the Gaussian. // Atomic, since this pixel is just one of potentially // many that were affected by this Gaussian. atomicAdd(&(dL_dlanguage_feature[global_id * F + ch]), dchannel_dcolor * dL_dchannel_F); } for (int ch = 0; ch < F_ins; ch++) { // instance const float f_ins = collected_feature_instance[ch * BLOCK_SIZE + j]; // Update last color (to be used in the next iteration) accum_rec_F_instance[ch] = last_alpha * last_language_feature_instance[ch] + (1.f - last_alpha) * accum_rec_F_instance[ch]; last_language_feature_instance[ch] = f_ins; const float dL_dchannel_F_instance = dL_dpixel_F_instance[ch]; dL_dalpha += (f_ins - accum_rec_F_instance[ch]) * dL_dchannel_F_instance; // Update the gradients w.r.t. color of the Gaussian. // Atomic, since this pixel is just one of potentially // many that were affected by this Gaussian. 
atomicAdd(&(dL_dlanguage_feature_instance[global_id * F_ins + ch]), dchannel_dcolor * dL_dchannel_F_instance); } } if (render_geo) { for (int ch = 0; ch < MAP_N; ch++) { const float c = collected_all_maps[ch * BLOCK_SIZE + j]; // Update last color (to be used in the next iteration) accum_all_map[ch] = last_alpha * last_all_map[ch] + (1.f - last_alpha) * accum_all_map[ch]; last_all_map[ch] = c; const float dL_dchannel = dL_dout_all_map[ch]; dL_dalpha += (c - accum_all_map[ch]) * dL_dchannel; // Update the gradients w.r.t. color of the Gaussian. // Atomic, since this pixel is just one of potentially // many that were affected by this Gaussian. atomicAdd(&(dL_dall_map[global_id * MAP_N + ch]), dchannel_dcolor * dL_dchannel); } } dL_dalpha *= T; // Update last alpha (to be used in the next iteration) last_alpha = alpha; // Account for fact that alpha also influences how much of // the background color is added if nothing left to blend float bg_dot_dpixel = 0; for (int i = 0; i < C; i++) bg_dot_dpixel += bg_color[i] * dL_dpixel[i]; dL_dalpha += (-T_final / (1.f - alpha)) * bg_dot_dpixel; // Helpful reusable temporary variables const float dL_dG = con_o.w * dL_dalpha; const float gdx = G * d.x; const float gdy = G * d.y; const float dG_ddelx = -gdx * con_o.x - gdy * con_o.y; const float dG_ddely = -gdy * con_o.z - gdx * con_o.y; // Update gradients w.r.t. 2D mean position of the Gaussian atomicAdd(&dL_dmean2D[global_id].x, dL_dG * dG_ddelx * ddelx_dx); atomicAdd(&dL_dmean2D[global_id].y, dL_dG * dG_ddely * ddely_dy); atomicAdd(&dL_dmean2D_abs[global_id].x, fabs(dL_dG * dG_ddelx * ddelx_dx)); atomicAdd(&dL_dmean2D_abs[global_id].y, fabs(dL_dG * dG_ddely * ddely_dy)); // Update gradients w.r.t. 2D covariance (2x2 matrix, symmetric) atomicAdd(&dL_dconic2D[global_id].x, -0.5f * gdx * d.x * dL_dG); atomicAdd(&dL_dconic2D[global_id].y, -0.5f * gdx * d.y * dL_dG); atomicAdd(&dL_dconic2D[global_id].w, -0.5f * gdy * d.y * dL_dG); // Update gradients w.r.t. 
opacity of the Gaussian atomicAdd(&(dL_dopacity[global_id]), G * dL_dalpha); } } } void BACKWARD::preprocess( int P, int D, int M, const float3* means3D, const int* radii, const float* shs, const bool* clamped, const glm::vec3* scales, const glm::vec4* rotations, const float scale_modifier, const float* cov3Ds, const float* viewmatrix, const float* projmatrix, const float focal_x, float focal_y, const float tan_fovx, float tan_fovy, const glm::vec3* campos, const float3* dL_dmean2D, const float* dL_dconic, glm::vec3* dL_dmean3D, float* dL_dcolor, float* dL_dcov3D, float* dL_dsh, glm::vec3* dL_dscale, glm::vec4* dL_drot) { // Propagate gradients for the path of 2D conic matrix computation. // Somewhat long, thus it is its own kernel rather than being part of // "preprocess". When done, loss gradient w.r.t. 3D means has been // modified and gradient w.r.t. 3D covariance matrix has been computed. computeCov2DCUDA << <(P + 255) / 256, 256 >> > ( P, means3D, radii, cov3Ds, focal_x, focal_y, tan_fovx, tan_fovy, viewmatrix, dL_dconic, (float3*)dL_dmean3D, dL_dcov3D); // Propagate gradients for remaining steps: finish 3D mean gradients, // propagate color gradients to SH (if desireD), propagate 3D covariance // matrix gradients to scale and rotation. 
	preprocessCUDA<NUM_CHANNELS> << < (P + 255) / 256, 256 >> > (
		P, D, M,
		(float3*)means3D,
		radii,
		shs,
		clamped,
		(glm::vec3*)scales,
		(glm::vec4*)rotations,
		scale_modifier,
		projmatrix,
		campos,
		(float3*)dL_dmean2D,
		(glm::vec3*)dL_dmean3D,
		dL_dcolor,
		dL_dcov3D,
		dL_dsh,
		dL_dscale,
		dL_drot);
}

void BACKWARD::render(
	const dim3 grid, const dim3 block,
	const uint2* ranges,
	const uint32_t* point_list,
	int W, int H,
	float fx, float fy,
	const float* bg_color,
	const float2* means2D,
	const float4* conic_opacity,
	const float* colors,
	const float* language_feature,
	const float* language_feature_instance,
	const float* all_maps,
	const float* all_map_pixels,
	const float* final_Ts,
	const uint32_t* n_contrib,
	const float* dL_dpixels,
	const float* dL_dpixels_F,
	const float* dL_dpixels_F_instance,
	const float* dL_dout_all_map,
	const float* dL_dout_plane_depth,
	float3* dL_dmean2D,
	float3* dL_dmean2D_abs,
	float4* dL_dconic2D,
	float* dL_dopacity,
	float* dL_dcolors,
	float* dL_dlanguage_feature,
	float* dL_dlanguage_feature_instance,
	float* dL_dall_map,
	const bool render_geo,
	bool include_feature)
{
	renderCUDA<NUM_CHANNELS, NUM_CHANNELS_language_feature, NUM_CHANNELS_instance_feature, NUM_ALL_MAP> << <grid, block >> > (
		ranges,
		point_list,
		W, H,
		fx, fy,
		bg_color,
		means2D,
		conic_opacity,
		colors,
		language_feature,
		language_feature_instance,
		all_maps,
		all_map_pixels,
		final_Ts,
		n_contrib,
		dL_dpixels,
		dL_dpixels_F,
		dL_dpixels_F_instance,
		dL_dout_all_map,
		dL_dout_plane_depth,
		dL_dmean2D,
		dL_dmean2D_abs,
		dL_dconic2D,
		dL_dopacity,
		dL_dcolors,
		dL_dlanguage_feature,
		dL_dlanguage_feature_instance,
		dL_dall_map,
		render_geo,
		include_feature
		);
}

================================================
FILE: field_construction/submodules/diff-langsurf-rasterizer/cuda_rasterizer/backward.h
================================================
/*
 * Copyright (C) 2023, Inria
 * GRAPHDECO research group, https://team.inria.fr/graphdeco
 * All rights reserved.
 *
 * This software is free for non-commercial, research and evaluation use
 * under the terms of the LICENSE.md file.
 *
 * For inquiries contact george.drettakis@inria.fr
 */

#ifndef CUDA_RASTERIZER_BACKWARD_H_INCLUDED
#define CUDA_RASTERIZER_BACKWARD_H_INCLUDED

#include <cuda.h>
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#define GLM_FORCE_CUDA
#include <glm/glm.hpp>

namespace BACKWARD
{
	void render(
		const dim3 grid, dim3 block,
		const uint2* ranges,
		const uint32_t* point_list,
		int W, int H,
		float fx, float fy,
		const float* bg_color,
		const float2* means2D,
		const float4* conic_opacity,
		const float* colors,
		const float* language_feature,
		const float* language_feature_instance,
		const float* all_maps,
		const float* all_map_pixels,
		const float* final_Ts,
		const uint32_t* n_contrib,
		const float* dL_dpixels,
		const float* dL_dpixels_F,
		const float* dL_dpixels_F_instance,
		const float* dL_dout_all_map,
		const float* dL_dout_plane_depth,
		float3* dL_dmean2D,
		float3* dL_dmean2D_abs,
		float4* dL_dconic2D,
		float* dL_dopacity,
		float* dL_dcolors,
		float* dL_dlanguage_feature,
		float* dL_dlanguage_feature_instance,
		float* dL_dall_map,
		const bool render_geo,
		bool include_feature);

	void preprocess(
		int P, int D, int M,
		const float3* means,
		const int* radii,
		const float* shs,
		const bool* clamped,
		const glm::vec3* scales,
		const glm::vec4* rotations,
		const float scale_modifier,
		const float* cov3Ds,
		const float* view,
		const float* proj,
		const float focal_x, float focal_y,
		const float tan_fovx, float tan_fovy,
		const glm::vec3* campos,
		const float3* dL_dmean2D,
		const float* dL_dconics,
		glm::vec3* dL_dmeans,
		float* dL_dcolor,
		float* dL_dcov3D,
		float* dL_dsh,
		glm::vec3* dL_dscale,
		glm::vec4* dL_drot);
}

#endif

================================================
FILE: field_construction/submodules/diff-langsurf-rasterizer/cuda_rasterizer/config.h
================================================
/*
 * Copyright (C) 2023, Inria
 * GRAPHDECO research group, https://team.inria.fr/graphdeco
 * All rights reserved.
 *
 * This software is free for non-commercial, research and evaluation use
 * under the terms of the LICENSE.md file.
 *
 * For inquiries contact george.drettakis@inria.fr
 */

#ifndef CUDA_RASTERIZER_CONFIG_H_INCLUDED
#define CUDA_RASTERIZER_CONFIG_H_INCLUDED

#define NUM_CHANNELS 3 // Default 3, RGB
#define NUM_CHANNELS_language_feature 3
#define NUM_CHANNELS_instance_feature 3
#define NUM_ALL_MAP 5
#define BLOCK_X 16
#define BLOCK_Y 16

#endif

================================================
FILE: field_construction/submodules/diff-langsurf-rasterizer/cuda_rasterizer/forward.cu
================================================
/*
 * Copyright (C) 2023, Inria
 * GRAPHDECO research group, https://team.inria.fr/graphdeco
 * All rights reserved.
 *
 * This software is free for non-commercial, research and evaluation use
 * under the terms of the LICENSE.md file.
 *
 * For inquiries contact george.drettakis@inria.fr
 */

#include "forward.h"
#include "auxiliary.h"
#include <cooperative_groups.h>
#include <cooperative_groups/reduce.h>
#include <glm/glm.hpp>
namespace cg = cooperative_groups;

// Forward method for converting the input spherical harmonics
// coefficients of each Gaussian to a simple RGB color.
__device__ glm::vec3 computeColorFromSH(int idx, int deg, int max_coeffs, const glm::vec3 *means, glm::vec3 campos, const float *shs, bool *clamped)
{
	// The implementation is loosely based on code for
	// "Differentiable Point-Based Radiance Fields for
	// Efficient View Synthesis" by Zhang et al.
(2022) glm::vec3 pos = means[idx]; glm::vec3 dir = pos - campos; dir = dir / glm::length(dir); glm::vec3 *sh = ((glm::vec3 *)shs) + idx * max_coeffs; glm::vec3 result = SH_C0 * sh[0]; if (deg > 0) { float x = dir.x; float y = dir.y; float z = dir.z; result = result - SH_C1 * y * sh[1] + SH_C1 * z * sh[2] - SH_C1 * x * sh[3]; if (deg > 1) { float xx = x * x, yy = y * y, zz = z * z; float xy = x * y, yz = y * z, xz = x * z; result = result + SH_C2[0] * xy * sh[4] + SH_C2[1] * yz * sh[5] + SH_C2[2] * (2.0f * zz - xx - yy) * sh[6] + SH_C2[3] * xz * sh[7] + SH_C2[4] * (xx - yy) * sh[8]; if (deg > 2) { result = result + SH_C3[0] * y * (3.0f * xx - yy) * sh[9] + SH_C3[1] * xy * z * sh[10] + SH_C3[2] * y * (4.0f * zz - xx - yy) * sh[11] + SH_C3[3] * z * (2.0f * zz - 3.0f * xx - 3.0f * yy) * sh[12] + SH_C3[4] * x * (4.0f * zz - xx - yy) * sh[13] + SH_C3[5] * z * (xx - yy) * sh[14] + SH_C3[6] * x * (xx - 3.0f * yy) * sh[15]; } } } result += 0.5f; // RGB colors are clamped to positive values. If values are // clamped, we need to keep track of this for the backward pass. clamped[3 * idx + 0] = (result.x < 0); clamped[3 * idx + 1] = (result.y < 0); clamped[3 * idx + 2] = (result.z < 0); return glm::max(result, 0.0f); } // Forward version of 2D covariance matrix computation __device__ float3 computeCov2D(const float3 &mean, float focal_x, float focal_y, float tan_fovx, float tan_fovy, const float *cov3D, const float *viewmatrix) { // The following models the steps outlined by equations 29 // and 31 in "EWA Splatting" (Zwicker et al., 2002). // Additionally considers aspect / scaling of viewport. // Transposes used to account for row-/column-major conventions. 
float3 t = transformPoint4x3(mean, viewmatrix); const float limx = 1.3f * tan_fovx; const float limy = 1.3f * tan_fovy; const float txtz = t.x / t.z; const float tytz = t.y / t.z; t.x = min(limx, max(-limx, txtz)) * t.z; t.y = min(limy, max(-limy, tytz)) * t.z; glm::mat3 J = glm::mat3( focal_x / t.z, 0.0f, -(focal_x * t.x) / (t.z * t.z), 0.0f, focal_y / t.z, -(focal_y * t.y) / (t.z * t.z), 0, 0, 0); glm::mat3 W = glm::mat3( viewmatrix[0], viewmatrix[4], viewmatrix[8], viewmatrix[1], viewmatrix[5], viewmatrix[9], viewmatrix[2], viewmatrix[6], viewmatrix[10]); glm::mat3 T = W * J; glm::mat3 Vrk = glm::mat3( cov3D[0], cov3D[1], cov3D[2], cov3D[1], cov3D[3], cov3D[4], cov3D[2], cov3D[4], cov3D[5]); glm::mat3 cov = glm::transpose(T) * glm::transpose(Vrk) * T; // Apply low-pass filter: every Gaussian should be at least // one pixel wide/high. Discard 3rd row and column. cov[0][0] += 0.3f; cov[1][1] += 0.3f; return {float(cov[0][0]), float(cov[0][1]), float(cov[1][1])}; } // Forward method for converting scale and rotation properties of each // Gaussian to a 3D covariance matrix in world space. Also takes care // of quaternion normalization. 
__device__ void computeCov3D(const glm::vec3 scale, float mod, const glm::vec4 rot, float *cov3D)
{
	// Create scaling matrix
	glm::mat3 S = glm::mat3(1.0f);
	S[0][0] = mod * scale.x;
	S[1][1] = mod * scale.y;
	S[2][2] = mod * scale.z;

	// Normalize quaternion to get valid rotation
	glm::vec4 q = rot; // / glm::length(rot);
	float r = q.x;
	float x = q.y;
	float y = q.z;
	float z = q.w;

	// Compute rotation matrix from quaternion
	glm::mat3 R = glm::mat3(
		1.f - 2.f * (y * y + z * z), 2.f * (x * y - r * z), 2.f * (x * z + r * y),
		2.f * (x * y + r * z), 1.f - 2.f * (x * x + z * z), 2.f * (y * z - r * x),
		2.f * (x * z - r * y), 2.f * (y * z + r * x), 1.f - 2.f * (x * x + y * y));

	glm::mat3 M = S * R;

	// Compute 3D world covariance matrix Sigma
	glm::mat3 Sigma = glm::transpose(M) * M;

	// Covariance is symmetric, only store upper right
	cov3D[0] = Sigma[0][0];
	cov3D[1] = Sigma[0][1];
	cov3D[2] = Sigma[0][2];
	cov3D[3] = Sigma[1][1];
	cov3D[4] = Sigma[1][2];
	cov3D[5] = Sigma[2][2];
}

// Perform initial steps for each Gaussian prior to rasterization.
template<int C>
__global__ void preprocessCUDA(int P, int D, int M,
	const float *orig_points,
	const glm::vec3 *scales,
	const float scale_modifier,
	const glm::vec4 *rotations,
	const float *opacities,
	const float *shs,
	bool *clamped,
	const float *cov3D_precomp,
	const float *colors_precomp,
	const float *viewmatrix,
	const float *projmatrix,
	const glm::vec3 *cam_pos,
	const int W, int H,
	const float tan_fovx, float tan_fovy,
	const float focal_x, float focal_y,
	int *radii,
	float2 *points_xy_image,
	float *depths,
	float *cov3Ds,
	float *rgb,
	float4 *conic_opacity,
	const dim3 grid,
	uint32_t *tiles_touched,
	bool prefiltered)
{
	auto idx = cg::this_grid().thread_rank();
	if (idx >= P)
		return;

	// Initialize radius and touched tiles to 0. If this isn't changed,
	// this Gaussian will not be processed further.
	radii[idx] = 0;
	tiles_touched[idx] = 0;

	// Perform near culling, quit if outside.
float3 p_view; if (!in_frustum(idx, orig_points, viewmatrix, projmatrix, prefiltered, p_view)) return; // Transform point by projecting float3 p_orig = {orig_points[3 * idx], orig_points[3 * idx + 1], orig_points[3 * idx + 2]}; float4 p_hom = transformPoint4x4(p_orig, projmatrix); float p_w = 1.0f / (p_hom.w + 0.0000001f); float3 p_proj = {p_hom.x * p_w, p_hom.y * p_w, p_hom.z * p_w}; // If 3D covariance matrix is precomputed, use it, otherwise compute // from scaling and rotation parameters. const float *cov3D; if (cov3D_precomp != nullptr) { cov3D = cov3D_precomp + idx * 6; } else { computeCov3D(scales[idx], scale_modifier, rotations[idx], cov3Ds + idx * 6); cov3D = cov3Ds + idx * 6; } // Compute 2D screen-space covariance matrix float3 cov = computeCov2D(p_orig, focal_x, focal_y, tan_fovx, tan_fovy, cov3D, viewmatrix); // Invert covariance (EWA algorithm) float det = (cov.x * cov.z - cov.y * cov.y); if (det == 0.0f) return; float det_inv = 1.f / det; float3 conic = {cov.z * det_inv, -cov.y * det_inv, cov.x * det_inv}; // Compute extent in screen space (by finding eigenvalues of // 2D covariance matrix). Use extent to compute a bounding rectangle // of screen-space tiles that this Gaussian overlaps with. Quit if // rectangle covers 0 tiles. float mid = 0.5f * (cov.x + cov.z); float lambda1 = mid + sqrt(max(0.1f, mid * mid - det)); float lambda2 = mid - sqrt(max(0.1f, mid * mid - det)); float my_radius = ceil(3.f * sqrt(max(lambda1, lambda2))); float2 point_image = {ndc2Pix(p_proj.x, W), ndc2Pix(p_proj.y, H)}; uint2 rect_min, rect_max; getRect(point_image, my_radius, rect_min, rect_max, grid); if ((rect_max.x - rect_min.x) * (rect_max.y - rect_min.y) == 0) return; // If colors have been precomputed, use them, otherwise convert // spherical harmonics coefficients to RGB color. 
if (colors_precomp == nullptr) { glm::vec3 result = computeColorFromSH(idx, D, M, (glm::vec3 *)orig_points, *cam_pos, shs, clamped); rgb[idx * C + 0] = result.x; rgb[idx * C + 1] = result.y; rgb[idx * C + 2] = result.z; } // if (opacities[idx] > 0.9 && point_image.x > 0 && point_image.x < 1264 && point_image.y > 0 && point_image.y < 832) { // glm::vec4 q = rotations[idx]; // glm::vec3 cp = *cam_pos; // printf("q(wxyz) %lf %lf %lf %lf, scale %lf %lf %lf, mean3d %lf %lf %lf, c %lf %lf %lf\n viewmatrix %lf %lf %lf %lf, %lf %lf %lf %lf, %lf %lf %lf %lf, %lf %lf %lf %lf\n", // q.x, q.y, q.z, q.w, scales[idx].x, scales[idx].y, scales[idx].z, // p_orig.x, p_orig.y, p_orig.z, cp.x, cp.y, cp.z, // viewmatrix[0],viewmatrix[4],viewmatrix[8],viewmatrix[12], // viewmatrix[1],viewmatrix[5],viewmatrix[9],viewmatrix[13], // viewmatrix[2],viewmatrix[6],viewmatrix[10],viewmatrix[14], // viewmatrix[3],viewmatrix[7],viewmatrix[11],viewmatrix[15]); // } // Store some useful helper data for the next steps. depths[idx] = p_view.z; radii[idx] = my_radius; points_xy_image[idx] = point_image; // Inverse 2D covariance and opacity neatly pack into one float4 conic_opacity[idx] = {conic.x, conic.y, conic.z, opacities[idx]}; tiles_touched[idx] = (rect_max.y - rect_min.y) * (rect_max.x - rect_min.x); } // Main rasterization method. Collaboratively works on one tile per // block, each thread treats one pixel. Alternates between fetching // and rasterizing data. 
template <uint32_t CHANNELS, uint32_t CHANNELS_language_feature, uint32_t CHANNELS_instance_feature, uint32_t ALL_MAP>
__global__ void __launch_bounds__(BLOCK_X *BLOCK_Y) renderCUDA(
	const uint2 *__restrict__ ranges,
	const uint32_t *__restrict__ point_list,
	int W, int H,
	const float focal_x, const float focal_y,
	const float cx, const float cy,
	const float *__restrict__ viewmatrix,
	const float *__restrict__ cam_pos,
	const float2 *__restrict__ points_xy_image,
	const float *__restrict__ features,
	const float *__restrict__ language_feature,
	const float *__restrict__ language_feature_instance,
	const float *__restrict__ all_map,
	const float4 *__restrict__ conic_opacity,
	float *__restrict__ final_T,
	uint32_t *__restrict__ n_contrib,
	const float *__restrict__ bg_color,
	float *__restrict__ out_color,
	float *__restrict__ out_language_feature,
	float *__restrict__ out_language_feature_instance,
	int *__restrict__ out_observe,
	float *__restrict__ out_all_map,
	float *__restrict__ out_plane_depth,
	const bool render_geo,
	bool include_feature)
{
	// Identify current tile and associated min/max pixel range.
	auto block = cg::this_thread_block();
	const uint32_t horizontal_blocks = (W + BLOCK_X - 1) / BLOCK_X;
	const uint2 pix_min = {block.group_index().x * BLOCK_X, block.group_index().y * BLOCK_Y};
	const uint2 pix_max = {min(pix_min.x + BLOCK_X, W), min(pix_min.y + BLOCK_Y, H)};
	const uint2 pix = {pix_min.x + block.thread_index().x, pix_min.y + block.thread_index().y};
	const uint32_t pix_id = W * pix.y + pix.x;
	const float2 pixf = {(float)pix.x, (float)pix.y};
	const float2 ray = {(pixf.x - cx) / focal_x, (pixf.y - cy) / focal_y};

	// Check if this thread is associated with a valid pixel or outside.
	bool inside = pix.x < W && pix.y < H;
	// Done threads can help with fetching, but don't rasterize
	bool done = !inside;

	// Load start/end range of IDs to process in bit sorted list.
uint2 range = ranges[block.group_index().y * horizontal_blocks + block.group_index().x]; const int rounds = ((range.y - range.x + BLOCK_SIZE - 1) / BLOCK_SIZE); int toDo = range.y - range.x; // Allocate storage for batches of collectively fetched data. __shared__ int collected_id[BLOCK_SIZE]; __shared__ float2 collected_xy[BLOCK_SIZE]; __shared__ float4 collected_conic_opacity[BLOCK_SIZE]; // Initialize helper variables float T = 1.0f; uint32_t contributor = 0; uint32_t last_contributor = 0; float C[CHANNELS] = {0}; float F[CHANNELS_language_feature] = {0}; float F_ins[CHANNELS_instance_feature] = {0}; float All_map[ALL_MAP] = {0}; // Iterate over batches until all done or range is complete for (int i = 0; i < rounds; i++, toDo -= BLOCK_SIZE) { // End if entire block votes that it is done rasterizing int num_done = __syncthreads_count(done); if (num_done == BLOCK_SIZE) break; // Collectively fetch per-Gaussian data from global to shared int progress = i * BLOCK_SIZE + block.thread_rank(); if (range.x + progress < range.y) { int coll_id = point_list[range.x + progress]; collected_id[block.thread_rank()] = coll_id; collected_xy[block.thread_rank()] = points_xy_image[coll_id]; collected_conic_opacity[block.thread_rank()] = conic_opacity[coll_id]; } block.sync(); // Iterate over current batch for (int j = 0; !done && j < min(BLOCK_SIZE, toDo); j++) { // Keep track of current position in range contributor++; // Resample using conic matrix (cf. "Surface // Splatting" by Zwicker et al., 2001) float2 xy = collected_xy[j]; float2 d = {xy.x - pixf.x, xy.y - pixf.y}; float4 con_o = collected_conic_opacity[j]; float power = -0.5f * (con_o.x * d.x * d.x + con_o.z * d.y * d.y) - con_o.y * d.x * d.y; if (power > 0.0f) continue; // Eq. (2) from 3D Gaussian splatting paper. // Obtain alpha by multiplying with Gaussian opacity // and its exponential falloff from mean. // Avoid numerical instabilities (see paper appendix). 
float alpha = min(0.99f, con_o.w * exp(power)); if (alpha < 1.0f / 255.0f) continue; float test_T = T * (1 - alpha); if (test_T < 0.0001f) { done = true; continue; } // Eq. (3) from 3D Gaussian splatting paper. for (int ch = 0; ch < CHANNELS; ch++) C[ch] += features[collected_id[j] * CHANNELS + ch] * alpha * T; if (include_feature) { for (int ch = 0; ch < CHANNELS_language_feature; ch++) F[ch] += language_feature[collected_id[j] * CHANNELS_language_feature + ch] * alpha * T; for (int ch = 0; ch < CHANNELS_instance_feature; ch++) F_ins[ch] += language_feature_instance[collected_id[j] * CHANNELS_instance_feature + ch] * alpha * T; } if (render_geo) { for (int ch = 0; ch < ALL_MAP; ch++) All_map[ch] += all_map[collected_id[j] * ALL_MAP + ch] * alpha * T; } if (T > 0.5) { atomicAdd(&(out_observe[collected_id[j]]), 1); } T = test_T; // Keep track of last range entry to update this // pixel. last_contributor = contributor; } } // All threads that treat valid pixel write out their final // rendering data to the frame and auxiliary buffers. if (inside) { final_T[pix_id] = T; n_contrib[pix_id] = last_contributor; for (int ch = 0; ch < CHANNELS; ch++) out_color[ch * H * W + pix_id] = C[ch] + T * bg_color[ch]; if (include_feature) { for (int ch = 0; ch < CHANNELS_language_feature; ch++) out_language_feature[ch * H * W + pix_id] = F[ch]; // bg_color ??? 
		for (int ch = 0; ch < CHANNELS_instance_feature; ch++)
			out_language_feature_instance[ch * H * W + pix_id] = F_ins[ch];
		}
		if (render_geo)
		{
			for (int ch = 0; ch < ALL_MAP; ch++)
				out_all_map[ch * H * W + pix_id] = All_map[ch];
			out_plane_depth[pix_id] = All_map[4] / -(All_map[0] * ray.x + All_map[1] * ray.y + All_map[2] + 1.0e-8);
		}
	}
}

void FORWARD::render(
	const dim3 grid, dim3 block,
	const uint2 *ranges,
	const uint32_t *point_list,
	int W, int H,
	const float focal_x, const float focal_y,
	const float cx, const float cy,
	const float *viewmatrix,
	const float *cam_pos,
	const float2 *means2D,
	const float *colors,
	const float *language_feature,
	const float *language_feature_instance,
	const float *all_map,
	const float4 *conic_opacity,
	float *final_T,
	uint32_t *n_contrib,
	const float *bg_color,
	float *out_color,
	float *out_language_feature,
	float *out_language_feature_instance,
	int *out_observe,
	float *out_all_map,
	float *out_plane_depth,
	const bool render_geo,
	bool include_feature)
{
	renderCUDA<NUM_CHANNELS, NUM_CHANNELS_language_feature, NUM_CHANNELS_instance_feature, NUM_ALL_MAP><<<grid, block>>>(
		ranges,
		point_list,
		W, H,
		focal_x, focal_y,
		cx, cy,
		viewmatrix,
		cam_pos,
		means2D,
		colors,
		language_feature,
		language_feature_instance,
		all_map,
		conic_opacity,
		final_T,
		n_contrib,
		bg_color,
		out_color,
		out_language_feature,
		out_language_feature_instance,
		out_observe,
		out_all_map,
		out_plane_depth,
		render_geo,
		include_feature);
}

void FORWARD::preprocess(int P, int D, int M,
	const float *means3D,
	const glm::vec3 *scales,
	const float scale_modifier,
	const glm::vec4 *rotations,
	const float *opacities,
	const float *shs,
	bool *clamped,
	const float *cov3D_precomp,
	const float *colors_precomp,
	const float *viewmatrix,
	const float *projmatrix,
	const glm::vec3 *cam_pos,
	const int W, int H,
	const float focal_x, float focal_y,
	const float tan_fovx, float tan_fovy,
	int *radii,
	float2 *means2D,
	float *depths,
	float *cov3Ds,
	float *rgb,
	float4 *conic_opacity,
	const dim3 grid,
	uint32_t *tiles_touched,
	bool prefiltered)
{
	preprocessCUDA<NUM_CHANNELS><<<(P + 255) / 256, 256>>>(
		P, D, M,
		means3D,
		scales,
		scale_modifier,
		rotations,
		opacities,
		shs,
		clamped,
		cov3D_precomp,
		colors_precomp,
		viewmatrix,
		projmatrix,
		cam_pos,
		W, H,
		tan_fovx, tan_fovy,
		focal_x, focal_y,
		radii,
		means2D,
		depths,
		cov3Ds,
		rgb,
		conic_opacity,
		grid,
		tiles_touched,
		prefiltered);
}

================================================
FILE: field_construction/submodules/diff-langsurf-rasterizer/cuda_rasterizer/forward.h
================================================
/*
 * Copyright (C) 2023, Inria
 * GRAPHDECO research group, https://team.inria.fr/graphdeco
 * All rights reserved.
 *
 * This software is free for non-commercial, research and evaluation use
 * under the terms of the LICENSE.md file.
 *
 * For inquiries contact george.drettakis@inria.fr
 */

#ifndef CUDA_RASTERIZER_FORWARD_H_INCLUDED
#define CUDA_RASTERIZER_FORWARD_H_INCLUDED

#include <cuda.h>
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#define GLM_FORCE_CUDA
#include <glm/glm.hpp>

namespace FORWARD
{
	// Perform initial steps for each Gaussian prior to rasterization.
	void preprocess(int P, int D, int M,
		const float* orig_points,
		const glm::vec3* scales,
		const float scale_modifier,
		const glm::vec4* rotations,
		const float* opacities,
		const float* shs,
		bool* clamped,
		const float* cov3D_precomp,
		const float* colors_precomp,
		const float* viewmatrix,
		const float* projmatrix,
		const glm::vec3* cam_pos,
		const int W, int H,
		const float focal_x, float focal_y,
		const float tan_fovx, float tan_fovy,
		int* radii,
		float2* points_xy_image,
		float* depths,
		float* cov3Ds,
		float* colors,
		float4* conic_opacity,
		const dim3 grid,
		uint32_t* tiles_touched,
		bool prefiltered);

	// Main rasterization method.
void render( const dim3 grid, dim3 block, const uint2* ranges, const uint32_t* point_list, int W, int H, const float focal_x, const float focal_y, const float cx, const float cy, const float* viewmatrix, const float* cam_pos, const float2* points_xy_image, const float* features, const float* language_feature, const float* language_feature_instance, const float* all_map, const float4* conic_opacity, float* final_T, uint32_t* n_contrib, const float* bg_color, float* out_color, float* out_language_feature, float* out_language_feature_instance, int* out_observe, float* out_all_map, float* out_plane_depth, const bool render_geo, bool include_feature); } #endif ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/cuda_rasterizer/rasterizer.h ================================================ /* * Copyright (C) 2023, Inria * GRAPHDECO research group, https://team.inria.fr/graphdeco * All rights reserved. * * This software is free for non-commercial, research and evaluation use * under the terms of the LICENSE.md file. 
* * For inquiries contact george.drettakis@inria.fr */ #ifndef CUDA_RASTERIZER_H_INCLUDED #define CUDA_RASTERIZER_H_INCLUDED #include #include namespace CudaRasterizer { class Rasterizer { public: static void markVisible( int P, float* means3D, float* viewmatrix, float* projmatrix, bool* present); static int forward( std::function geometryBuffer, std::function binningBuffer, std::function imageBuffer, const int P, int D, int M, const float* background, const int width, int height, const float* means3D, const float* shs, const float* colors_precomp, const float* language_feature_precomp, const float* language_feature_instance_precomp, const float* opacities, const float* scales, const float scale_modifier, const float* rotations, const float* cov3D_precomp, const float* all_map, const float* viewmatrix, const float* projmatrix, const float* cam_pos, const float tan_fovx, float tan_fovy, const bool prefiltered, float* out_color, float* out_language_feature, float* out_language_feature_instance, int* radii, int* out_observe, float* out_all_map, float* out_plane_depth, const bool render_geo, bool debug = false, bool include_feature = false); static void backward( const int P, int D, int M, int R, const float* background, const float* all_map_pixels, const int width, int height, const float* means3D, const float* shs, const float* colors_precomp, const float* language_feature_precomp, const float* language_feature_instance_precomp, const float* all_maps, const float* scales, const float scale_modifier, const float* rotations, const float* cov3D_precomp, const float* viewmatrix, const float* projmatrix, const float* campos, const float tan_fovx, float tan_fovy, const int* radii, char* geom_buffer, char* binning_buffer, char* image_buffer, const float* dL_dpix, const float* dL_dpix_F, const float* dL_dpix_F_instance, const float* dL_dout_all_map, const float* dL_dout_plane_depth, float* dL_dmean2D, float* dL_dmean2D_abs, float* dL_dconic, float* dL_dopacity, float* 
dL_dcolor, float* dL_dlanguage_feature, float* dL_dlanguage_feature_instance, float* dL_dmean3D, float* dL_dcov3D, float* dL_dsh, float* dL_dscale, float* dL_drot, float* dL_dall_map, const bool render_geo, bool debug, bool include_feature); }; }; #endif ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/cuda_rasterizer/rasterizer_impl.cu ================================================ /* * Copyright (C) 2023, Inria * GRAPHDECO research group, https://team.inria.fr/graphdeco * All rights reserved. * * This software is free for non-commercial, research and evaluation use * under the terms of the LICENSE.md file. * * For inquiries contact george.drettakis@inria.fr */ #include "rasterizer_impl.h" #include <iostream> #include <fstream> #include <algorithm> #include <numeric> #include "cuda_runtime.h" #include "device_launch_parameters.h" #include <cub/cub.cuh> #include <cub/device/device_radix_sort.cuh> #define GLM_FORCE_CUDA #include <glm/glm.hpp> #include <cooperative_groups.h> #include <cooperative_groups/reduce.h> namespace cg = cooperative_groups; #include "auxiliary.h" #include "forward.h" #include "backward.h" // Helper function to find the next-highest bit of the MSB // on the CPU. uint32_t getHigherMsb(uint32_t n) { uint32_t msb = sizeof(n) * 4; uint32_t step = msb; while (step > 1) { step /= 2; if (n >> msb) msb += step; else msb -= step; } if (n >> msb) msb++; return msb; } // Wrapper method to call auxiliary coarse frustum containment test. // Mark all Gaussians that pass it. __global__ void checkFrustum(int P, const float* orig_points, const float* viewmatrix, const float* projmatrix, bool* present) { auto idx = cg::this_grid().thread_rank(); if (idx >= P) return; float3 p_view; present[idx] = in_frustum(idx, orig_points, viewmatrix, projmatrix, false, p_view); } // Generates one key/value pair for all Gaussian / tile overlaps. // Run once per Gaussian (1:N mapping).
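The binning scheme described in the comments above (pack the tile ID into the high 32 bits of a 64-bit key, the raw float bits of the depth into the low 32, sort, then scan the sorted list for tile boundaries) can be modeled in plain Python. This is an illustrative sketch, not part of the extension: `higher_msb`, `make_key`, and `identify_tile_ranges` are hypothetical names mirroring `getHigherMsb`, `duplicateWithKeys`, and `identifyTileRanges`.

```python
import struct

def higher_msb(n: int) -> int:
    """Python model of getHigherMsb: binary-search for the position just
    above the most significant set bit of a 32-bit value."""
    msb = 16  # sizeof(uint32_t) * 4
    step = 16
    while step > 1:
        step //= 2
        if n >> msb:
            msb += step
        else:
            msb -= step
    if n >> msb:
        msb += 1
    return msb

def make_key(tile_id: int, depth: float) -> int:
    """Pack | tile ID | depth | into one 64-bit key, reinterpreting the
    float depth as its raw 32-bit pattern like duplicateWithKeys does."""
    depth_bits = struct.unpack("<I", struct.pack("<f", depth))[0]
    return (tile_id << 32) | depth_bits

def identify_tile_ranges(sorted_keys):
    """Model of identifyTileRanges: [start, end) index range per tile
    in the sorted key list."""
    ranges = {}
    for i, key in enumerate(sorted_keys):
        tile = key >> 32
        if i == 0 or tile != (sorted_keys[i - 1] >> 32):
            ranges[tile] = [i, i]
        ranges[tile][1] = i + 1
    return ranges

# Two Gaussians: one touches tiles 0 and 1, one touches tile 0 only.
# Sorting the packed keys groups entries by tile, then front-to-back
# by depth within a tile.
keys = sorted([make_key(0, 2.5), make_key(1, 2.5), make_key(0, 1.0)])
print(identify_tile_ranges(keys))  # -> {0: [0, 2], 1: [2, 3]}
```

The real forward pass only radix-sorts the low `32 + getHigherMsb(num_tiles)` bits via `cub::DeviceRadixSort::SortPairs`; since all higher key bits are zero, the full-key sort in this model produces the same order.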
__global__ void duplicateWithKeys( int P, const float2* points_xy, const float* depths, const uint32_t* offsets, uint64_t* gaussian_keys_unsorted, uint32_t* gaussian_values_unsorted, int* radii, dim3 grid) { auto idx = cg::this_grid().thread_rank(); if (idx >= P) return; // Generate no key/value pair for invisible Gaussians if (radii[idx] > 0) { // Find this Gaussian's offset in buffer for writing keys/values. uint32_t off = (idx == 0) ? 0 : offsets[idx - 1]; uint2 rect_min, rect_max; getRect(points_xy[idx], radii[idx], rect_min, rect_max, grid); // For each tile that the bounding rect overlaps, emit a // key/value pair. The key is | tile ID | depth |, // and the value is the ID of the Gaussian. Sorting the values // with this key yields Gaussian IDs in a list, such that they // are first sorted by tile and then by depth. for (int y = rect_min.y; y < rect_max.y; y++) { for (int x = rect_min.x; x < rect_max.x; x++) { uint64_t key = y * grid.x + x; key <<= 32; key |= *((uint32_t*)&depths[idx]); gaussian_keys_unsorted[off] = key; gaussian_values_unsorted[off] = idx; off++; } } } } // Check keys to see if it is at the start/end of one tile's range in // the full sorted list. If yes, write start/end of this tile. // Run once per instanced (duplicated) Gaussian ID. __global__ void identifyTileRanges(int L, uint64_t* point_list_keys, uint2* ranges) { auto idx = cg::this_grid().thread_rank(); if (idx >= L) return; // Read tile ID from key. Update start/end of tile range if at limit. 
uint64_t key = point_list_keys[idx]; uint32_t currtile = key >> 32; if (idx == 0) ranges[currtile].x = 0; else { uint32_t prevtile = point_list_keys[idx - 1] >> 32; if (currtile != prevtile) { ranges[prevtile].y = idx; ranges[currtile].x = idx; } } if (idx == L - 1) ranges[currtile].y = L; } // Mark Gaussians as visible/invisible, based on view frustum testing void CudaRasterizer::Rasterizer::markVisible( int P, float* means3D, float* viewmatrix, float* projmatrix, bool* present) { checkFrustum << <(P + 255) / 256, 256 >> > ( P, means3D, viewmatrix, projmatrix, present); } CudaRasterizer::GeometryState CudaRasterizer::GeometryState::fromChunk(char*& chunk, size_t P) { GeometryState geom; obtain(chunk, geom.depths, P, 128); obtain(chunk, geom.clamped, P * 3, 128); obtain(chunk, geom.internal_radii, P, 128); obtain(chunk, geom.means2D, P, 128); obtain(chunk, geom.cov3D, P * 6, 128); obtain(chunk, geom.conic_opacity, P, 128); obtain(chunk, geom.rgb, P * 3, 128); obtain(chunk, geom.tiles_touched, P, 128); cub::DeviceScan::InclusiveSum(nullptr, geom.scan_size, geom.tiles_touched, geom.tiles_touched, P); obtain(chunk, geom.scanning_space, geom.scan_size, 128); obtain(chunk, geom.point_offsets, P, 128); return geom; } CudaRasterizer::ImageState CudaRasterizer::ImageState::fromChunk(char*& chunk, size_t N) { ImageState img; obtain(chunk, img.accum_alpha, N, 128); obtain(chunk, img.n_contrib, N, 128); obtain(chunk, img.ranges, N, 128); return img; } CudaRasterizer::BinningState CudaRasterizer::BinningState::fromChunk(char*& chunk, size_t P) { BinningState binning; obtain(chunk, binning.point_list, P, 128); obtain(chunk, binning.point_list_unsorted, P, 128); obtain(chunk, binning.point_list_keys, P, 128); obtain(chunk, binning.point_list_keys_unsorted, P, 128); cub::DeviceRadixSort::SortPairs( nullptr, binning.sorting_size, binning.point_list_keys_unsorted, binning.point_list_keys, binning.point_list_unsorted, binning.point_list, P); obtain(chunk, binning.list_sorting_space, 
binning.sorting_size, 128); return binning; } // Forward rendering procedure for differentiable rasterization // of Gaussians. int CudaRasterizer::Rasterizer::forward( std::function<char* (size_t)> geometryBuffer, std::function<char* (size_t)> binningBuffer, std::function<char* (size_t)> imageBuffer, const int P, int D, int M, const float* background, const int width, int height, const float* means3D, const float* shs, const float* colors_precomp, const float* language_feature_precomp, const float* language_feature_instance_precomp, const float* opacities, const float* scales, const float scale_modifier, const float* rotations, const float* cov3D_precomp, const float* all_map, const float* viewmatrix, const float* projmatrix, const float* cam_pos, const float tan_fovx, float tan_fovy, const bool prefiltered, float* out_color, float* out_language_feature, float* out_language_feature_instance, int* radii, int* out_observe, float* out_all_map, float* out_plane_depth, const bool render_geo, bool debug, bool include_feature) { const float focal_y = height / (2.0f * tan_fovy); const float focal_x = width / (2.0f * tan_fovx); size_t chunk_size = required<GeometryState>(P); char* chunkptr = geometryBuffer(chunk_size); GeometryState geomState = GeometryState::fromChunk(chunkptr, P); if (radii == nullptr) { radii = geomState.internal_radii; } dim3 tile_grid((width + BLOCK_X - 1) / BLOCK_X, (height + BLOCK_Y - 1) / BLOCK_Y, 1); dim3 block(BLOCK_X, BLOCK_Y, 1); // Dynamically resize image-based auxiliary buffers during training size_t img_chunk_size = required<ImageState>(width * height); char* img_chunkptr = imageBuffer(img_chunk_size); ImageState imgState = ImageState::fromChunk(img_chunkptr, width * height); if (NUM_CHANNELS != 3 && colors_precomp == nullptr) { throw std::runtime_error("For non-RGB, provide precomputed Gaussian colors!"); } // Run preprocessing per-Gaussian (transformation, bounding, conversion of SHs to RGB) CHECK_CUDA(FORWARD::preprocess( P, D, M, means3D, (glm::vec3*)scales, scale_modifier, (glm::vec4*)rotations, opacities, shs, geomState.clamped, cov3D_precomp, colors_precomp, viewmatrix, projmatrix, (glm::vec3*)cam_pos, width, height, focal_x, focal_y, tan_fovx, tan_fovy, radii, geomState.means2D, geomState.depths, geomState.cov3D, geomState.rgb, geomState.conic_opacity, tile_grid, geomState.tiles_touched, prefiltered ), debug) // Compute prefix sum over full list of touched tile counts by Gaussians // E.g., [2, 3, 0, 2, 1] -> [2, 5, 5, 7, 8] CHECK_CUDA(cub::DeviceScan::InclusiveSum(geomState.scanning_space, geomState.scan_size, geomState.tiles_touched, geomState.point_offsets, P), debug) // Retrieve total number of Gaussian instances to launch and resize aux buffers int num_rendered; CHECK_CUDA(cudaMemcpy(&num_rendered, geomState.point_offsets + P - 1, sizeof(int), cudaMemcpyDeviceToHost), debug); size_t binning_chunk_size = required<BinningState>(num_rendered); char* binning_chunkptr = binningBuffer(binning_chunk_size); BinningState binningState = BinningState::fromChunk(binning_chunkptr, num_rendered); // For each instance to be rendered, produce adequate [ tile | depth ] key // and corresponding duplicated Gaussian indices to be sorted duplicateWithKeys << <(P + 255) / 256, 256 >> > ( P, geomState.means2D, geomState.depths, geomState.point_offsets, binningState.point_list_keys_unsorted, binningState.point_list_unsorted, radii, tile_grid) CHECK_CUDA(, debug) int bit = getHigherMsb(tile_grid.x * tile_grid.y); // Sort complete list of (duplicated) Gaussian indices by keys CHECK_CUDA(cub::DeviceRadixSort::SortPairs( binningState.list_sorting_space, binningState.sorting_size, binningState.point_list_keys_unsorted, binningState.point_list_keys, binningState.point_list_unsorted, binningState.point_list, num_rendered, 0, 32 + bit), debug) CHECK_CUDA(cudaMemset(imgState.ranges, 0, tile_grid.x * tile_grid.y * sizeof(uint2)), debug); // Identify start and end of per-tile workloads in sorted list if (num_rendered > 0) identifyTileRanges << <(num_rendered + 255) / 256, 256 >> > ( num_rendered,
binningState.point_list_keys, imgState.ranges); CHECK_CUDA(, debug) // Let each tile blend its range of Gaussians independently in parallel const float* feature_ptr = colors_precomp != nullptr ? colors_precomp : geomState.rgb; const float* language_feature_ptr = language_feature_precomp; const float* language_feature_instance_ptr = language_feature_instance_precomp; CHECK_CUDA(FORWARD::render( tile_grid, block, imgState.ranges, binningState.point_list, width, height, focal_x, focal_y, float(width*0.5f), float(height*0.5f), viewmatrix, cam_pos, geomState.means2D, feature_ptr, language_feature_ptr, language_feature_instance_ptr, all_map, geomState.conic_opacity, imgState.accum_alpha, imgState.n_contrib, background, out_color, out_language_feature, out_language_feature_instance, out_observe, out_all_map, out_plane_depth, render_geo, include_feature), debug) return num_rendered; } // Produce necessary gradients for optimization, corresponding // to forward render pass void CudaRasterizer::Rasterizer::backward( const int P, int D, int M, int R, const float* background, const float* all_map_pixels, const int width, int height, const float* means3D, const float* shs, const float* colors_precomp, const float* language_feature_precomp, const float* language_feature_instance_precomp, const float* all_maps, const float* scales, const float scale_modifier, const float* rotations, const float* cov3D_precomp, const float* viewmatrix, const float* projmatrix, const float* campos, const float tan_fovx, float tan_fovy, const int* radii, char* geom_buffer, char* binning_buffer, char* img_buffer, const float* dL_dpix, const float* dL_dpix_F, const float* dL_dpix_F_instance, const float* dL_dout_all_map, const float* dL_dout_plane_depth, float* dL_dmean2D, float* dL_dmean2D_abs, float* dL_dconic, float* dL_dopacity, float* dL_dcolor, float* dL_dlanguage_feature, float* dL_dlanguage_feature_instance, float* dL_dmean3D, float* dL_dcov3D, float* dL_dsh, float* dL_dscale, float* dL_drot, 
float* dL_dall_map, const bool render_geo, bool debug, bool include_feature) { GeometryState geomState = GeometryState::fromChunk(geom_buffer, P); BinningState binningState = BinningState::fromChunk(binning_buffer, R); ImageState imgState = ImageState::fromChunk(img_buffer, width * height); if (radii == nullptr) { radii = geomState.internal_radii; } const float focal_y = height / (2.0f * tan_fovy); const float focal_x = width / (2.0f * tan_fovx); const dim3 tile_grid((width + BLOCK_X - 1) / BLOCK_X, (height + BLOCK_Y - 1) / BLOCK_Y, 1); const dim3 block(BLOCK_X, BLOCK_Y, 1); // Compute loss gradients w.r.t. 2D mean position, conic matrix, // opacity and RGB of Gaussians from per-pixel loss gradients. // If we were given precomputed colors and not SHs, use them. const float* color_ptr = (colors_precomp != nullptr) ? colors_precomp : geomState.rgb; const float* language_feature_ptr = language_feature_precomp; const float* language_feature_instance_ptr = language_feature_instance_precomp; CHECK_CUDA(BACKWARD::render( tile_grid, block, imgState.ranges, binningState.point_list, width, height, focal_x, focal_y, background, geomState.means2D, geomState.conic_opacity, color_ptr, language_feature_ptr, language_feature_instance_ptr, all_maps, all_map_pixels, imgState.accum_alpha, imgState.n_contrib, dL_dpix, dL_dpix_F, dL_dpix_F_instance, dL_dout_all_map, dL_dout_plane_depth, (float3*)dL_dmean2D, (float3*)dL_dmean2D_abs, (float4*)dL_dconic, dL_dopacity, dL_dcolor, dL_dlanguage_feature, dL_dlanguage_feature_instance, dL_dall_map, render_geo, include_feature), debug) // Take care of the rest of preprocessing. Was the precomputed covariance // given to us or a scales/rot pair? If precomputed, pass that. If not, // use the one we computed ourselves. const float* cov3D_ptr = (cov3D_precomp != nullptr) ? 
cov3D_precomp : geomState.cov3D; CHECK_CUDA(BACKWARD::preprocess(P, D, M, (float3*)means3D, radii, shs, geomState.clamped, (glm::vec3*)scales, (glm::vec4*)rotations, scale_modifier, cov3D_ptr, viewmatrix, projmatrix, focal_x, focal_y, tan_fovx, tan_fovy, (glm::vec3*)campos, (float3*)dL_dmean2D, dL_dconic, (glm::vec3*)dL_dmean3D, dL_dcolor, dL_dcov3D, dL_dsh, (glm::vec3*)dL_dscale, (glm::vec4*)dL_drot), debug) } ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/cuda_rasterizer/rasterizer_impl.h ================================================ /* * Copyright (C) 2023, Inria * GRAPHDECO research group, https://team.inria.fr/graphdeco * All rights reserved. * * This software is free for non-commercial, research and evaluation use * under the terms of the LICENSE.md file. * * For inquiries contact george.drettakis@inria.fr */ #pragma once #include <iostream> #include <vector> #include "rasterizer.h" #include <cuda_runtime_api.h> namespace CudaRasterizer { template <typename T> static void obtain(char*& chunk, T*& ptr, std::size_t count, std::size_t alignment) { std::size_t offset = (reinterpret_cast<std::uintptr_t>(chunk) + alignment - 1) & ~(alignment - 1); ptr = reinterpret_cast<T*>(offset); chunk = reinterpret_cast<char*>(ptr + count); } struct GeometryState { size_t scan_size; float* depths; char* scanning_space; bool* clamped; int* internal_radii; float2* means2D; float* cov3D; float4* conic_opacity; float* rgb; uint32_t* point_offsets; uint32_t* tiles_touched; static GeometryState fromChunk(char*& chunk, size_t P); }; struct ImageState { uint2* ranges; uint32_t* n_contrib; float* accum_alpha; static ImageState fromChunk(char*& chunk, size_t N); }; struct BinningState { size_t sorting_size; uint64_t* point_list_keys_unsorted; uint64_t* point_list_keys; uint32_t* point_list_unsorted; uint32_t* point_list; char* list_sorting_space; static BinningState fromChunk(char*& chunk, size_t P); }; template<typename T> size_t required(size_t P) { char* size = nullptr; T::fromChunk(size, P); return
((size_t)size) + 128; } }; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/diff_LangSurf_rasterization/__init__.py ================================================ # # Copyright (C) 2023, Inria # GRAPHDECO research group, https://team.inria.fr/graphdeco # All rights reserved. # # This software is free for non-commercial, research and evaluation use # under the terms of the LICENSE.md file. # # For inquiries contact george.drettakis@inria.fr # from typing import NamedTuple import torch.nn as nn import torch from . import _C def cpu_deep_copy_tuple(input_tuple): copied_tensors = [item.cpu().clone() if isinstance(item, torch.Tensor) else item for item in input_tuple] return tuple(copied_tensors) def rasterize_gaussians( means3D, means2D, means2D_abs, sh, colors_precomp, language_feature_precomp, language_feature_instance_precomp, opacities, scales, rotations, cov3Ds_precomp, all_map, raster_settings, ): return _RasterizeGaussians.apply( means3D, means2D, means2D_abs, sh, colors_precomp, language_feature_precomp, language_feature_instance_precomp, opacities, scales, rotations, cov3Ds_precomp, all_map, raster_settings, ) class _RasterizeGaussians(torch.autograd.Function): @staticmethod def forward( ctx, means3D, means2D, means2D_abs, sh, colors_precomp, language_feature_precomp, language_feature_instance_precomp, opacities, scales, rotations, cov3Ds_precomp, all_maps, raster_settings, ): # Restructure arguments the way that the C++ lib expects them args = ( raster_settings.bg, means3D, colors_precomp, language_feature_precomp, language_feature_instance_precomp, opacities, scales, rotations, raster_settings.scale_modifier, cov3Ds_precomp, all_maps, raster_settings.viewmatrix, raster_settings.projmatrix, raster_settings.tanfovx, raster_settings.tanfovy, raster_settings.image_height, raster_settings.image_width, sh, raster_settings.sh_degree, raster_settings.campos, raster_settings.prefiltered, 
raster_settings.render_geo, raster_settings.debug, raster_settings.include_feature ) # Invoke C++/CUDA rasterizer if raster_settings.debug: cpu_args = cpu_deep_copy_tuple(args) # Copy them before they can be corrupted try: num_rendered, color, language_feature, language_feature_instance, radii, out_observe, out_all_map, out_plane_depth, geomBuffer, binningBuffer, imgBuffer = _C.rasterize_gaussians(*args) except Exception as ex: torch.save(cpu_args, "snapshot_fw.dump") print("\nAn error occurred in forward. Please forward snapshot_fw.dump for debugging.") raise ex else: num_rendered, color, language_feature, language_feature_instance, radii, out_observe, out_all_map, out_plane_depth, geomBuffer, binningBuffer, imgBuffer = _C.rasterize_gaussians(*args) # Keep relevant tensors for backward ctx.raster_settings = raster_settings ctx.num_rendered = num_rendered ctx.save_for_backward(out_all_map, colors_precomp, language_feature_precomp, language_feature_instance_precomp, all_maps, means3D, scales, rotations, cov3Ds_precomp, radii, sh, geomBuffer, binningBuffer, imgBuffer) return color, language_feature, language_feature_instance, radii, out_observe, out_all_map, out_plane_depth @staticmethod def backward(ctx, grad_out_color, grad_out_language_feature, grad_out_language_feature_instance, grad_radii, grad_out_observe, grad_out_all_map, grad_out_plane_depth): # Restore necessary values from context num_rendered = ctx.num_rendered raster_settings = ctx.raster_settings all_map_pixels, colors_precomp, language_feature_precomp, language_feature_instance_precomp, all_maps, means3D, scales, rotations, cov3Ds_precomp, radii, sh, geomBuffer, binningBuffer, imgBuffer = ctx.saved_tensors # Restructure args as C++ method expects them args = (raster_settings.bg, all_map_pixels, means3D, radii, colors_precomp, language_feature_precomp, language_feature_instance_precomp, all_maps, scales, rotations, raster_settings.scale_modifier, cov3Ds_precomp, raster_settings.viewmatrix, raster_settings.projmatrix,
raster_settings.tanfovx, raster_settings.tanfovy, grad_out_color, grad_out_language_feature, grad_out_language_feature_instance, grad_out_all_map, grad_out_plane_depth, sh, raster_settings.sh_degree, raster_settings.campos, geomBuffer, num_rendered, binningBuffer, imgBuffer, raster_settings.render_geo, raster_settings.debug, raster_settings.include_feature) # Compute gradients for relevant tensors by invoking backward method if raster_settings.debug: cpu_args = cpu_deep_copy_tuple(args) # Copy them before they can be corrupted try: grad_means2D, grad_means2D_abs, grad_colors_precomp, grad_language_feature_precomp, grad_language_feature_instance_precomp, grad_opacities, grad_means3D, grad_cov3Ds_precomp, grad_sh, grad_scales, grad_rotations, grad_all_map = _C.rasterize_gaussians_backward(*args) except Exception as ex: torch.save(cpu_args, "snapshot_bw.dump") print("\nAn error occurred in backward. Writing snapshot_bw.dump for debugging.\n") raise ex else: grad_means2D, grad_means2D_abs, grad_colors_precomp, grad_language_feature_precomp, grad_language_feature_instance_precomp, grad_opacities, grad_means3D, grad_cov3Ds_precomp, grad_sh, grad_scales, grad_rotations, grad_all_map = _C.rasterize_gaussians_backward(*args) # print(f"grad_means2D {grad_means2D.sum()}, grad_means2D_abs {grad_means2D_abs.sum()}") grads = ( grad_means3D, grad_means2D, grad_means2D_abs, grad_sh, grad_colors_precomp, grad_language_feature_precomp, grad_language_feature_instance_precomp, grad_opacities, grad_scales, grad_rotations, grad_cov3Ds_precomp, grad_all_map, None, ) return grads class GaussianRasterizationSettings(NamedTuple): image_height: int image_width: int tanfovx : float tanfovy : float bg : torch.Tensor scale_modifier : float viewmatrix : torch.Tensor projmatrix : torch.Tensor sh_degree : int campos : torch.Tensor prefiltered : bool render_geo : bool debug : bool include_feature: bool class GaussianRasterizer(nn.Module): def __init__(self, raster_settings): super().__init__()
self.raster_settings = raster_settings def markVisible(self, positions): # Mark visible points (based on frustum culling for camera) with a boolean with torch.no_grad(): raster_settings = self.raster_settings visible = _C.mark_visible( positions, raster_settings.viewmatrix, raster_settings.projmatrix) return visible def forward(self, means3D, means2D, means2D_abs, opacities, shs = None, colors_precomp = None, language_feature_precomp = None, language_feature_instance_precomp = None, scales = None, rotations = None, cov3D_precomp = None, all_map=None): raster_settings = self.raster_settings if (shs is None and colors_precomp is None) or (shs is not None and colors_precomp is not None): raise Exception('Please provide exactly one of either SHs or precomputed colors!') if ((scales is None or rotations is None) and cov3D_precomp is None) or ((scales is not None or rotations is not None) and cov3D_precomp is not None): raise Exception('Please provide exactly one of either scale/rotation pair or precomputed 3D covariance!') if shs is None: shs = torch.Tensor([]) if colors_precomp is None: colors_precomp = torch.Tensor([]) if language_feature_precomp is None: language_feature_precomp = torch.Tensor([]) if language_feature_instance_precomp is None: language_feature_instance_precomp = torch.Tensor([]) if scales is None: scales = torch.Tensor([]) if rotations is None: rotations = torch.Tensor([]) if cov3D_precomp is None: cov3D_precomp = torch.Tensor([]) if all_map is None: all_map = torch.Tensor([]) # Invoke C++/CUDA rasterization routine return rasterize_gaussians( means3D, means2D, means2D_abs, shs, colors_precomp, language_feature_precomp, language_feature_instance_precomp, opacities, scales, rotations, cov3D_precomp, all_map, raster_settings, ) ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/diff_LangSurf_rasterization.egg-info/PKG-INFO ================================================ Metadata-Version: 2.4
Name: diff_LangSurf_rasterization Version: 0.0.0 License-File: LICENSE.md Dynamic: license-file ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/diff_LangSurf_rasterization.egg-info/SOURCES.txt ================================================ LICENSE.md README.md ext.cpp rasterize_points.cu setup.py cuda_rasterizer/backward.cu cuda_rasterizer/forward.cu cuda_rasterizer/rasterizer_impl.cu diff_LangSurf_rasterization/__init__.py diff_LangSurf_rasterization.egg-info/PKG-INFO diff_LangSurf_rasterization.egg-info/SOURCES.txt diff_LangSurf_rasterization.egg-info/dependency_links.txt diff_LangSurf_rasterization.egg-info/top_level.txt ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/diff_LangSurf_rasterization.egg-info/dependency_links.txt ================================================ ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/diff_LangSurf_rasterization.egg-info/top_level.txt ================================================ diff_LangSurf_rasterization ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/ext.cpp ================================================ /* * Copyright (C) 2023, Inria * GRAPHDECO research group, https://team.inria.fr/graphdeco * All rights reserved. * * This software is free for non-commercial, research and evaluation use * under the terms of the LICENSE.md file. 
* * For inquiries contact george.drettakis@inria.fr */ #include #include "rasterize_points.h" PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { m.def("rasterize_gaussians", &RasterizeGaussiansCUDA); m.def("rasterize_gaussians_backward", &RasterizeGaussiansBackwardCUDA); m.def("mark_visible", &markVisible); } ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/rasterize_points.cu ================================================ /* * Copyright (C) 2023, Inria * GRAPHDECO research group, https://team.inria.fr/graphdeco * All rights reserved. * * This software is free for non-commercial, research and evaluation use * under the terms of the LICENSE.md file. * * For inquiries contact george.drettakis@inria.fr */ #include #include #include #include #include #include #include #include #include #include "cuda_rasterizer/config.h" #include "cuda_rasterizer/rasterizer.h" #include #include #include std::function resizeFunctional(torch::Tensor& t) { auto lambda = [&t](size_t N) { t.resize_({(long long)N}); return reinterpret_cast(t.contiguous().data_ptr()); }; return lambda; } std::tuple RasterizeGaussiansCUDA( const torch::Tensor& background, const torch::Tensor& means3D, const torch::Tensor& colors, const torch::Tensor& language_feature, const torch::Tensor& language_feature_instance, const torch::Tensor& opacity, const torch::Tensor& scales, const torch::Tensor& rotations, const float scale_modifier, const torch::Tensor& cov3D_precomp, const torch::Tensor& all_map, const torch::Tensor& viewmatrix, const torch::Tensor& projmatrix, const float tan_fovx, const float tan_fovy, const int image_height, const int image_width, const torch::Tensor& sh, const int degree, const torch::Tensor& campos, const bool prefiltered, const bool render_geo, const bool debug, const bool include_feature) { if (means3D.ndimension() != 2 || means3D.size(1) != 3) { AT_ERROR("means3D must have dimensions (num_points, 3)"); } const int P = 
means3D.size(0);
  const int H = image_height;
  const int W = image_width;

  auto int_opts = means3D.options().dtype(torch::kInt32);
  auto float_opts = means3D.options().dtype(torch::kFloat32);

  torch::Tensor out_color = torch::full({NUM_CHANNELS, H, W}, 0.0, float_opts);
  torch::Tensor out_language_feature;
  torch::Tensor out_language_feature_instance;
  if (include_feature) {
    out_language_feature = torch::full({NUM_CHANNELS_language_feature, H, W}, 0.0, float_opts);
    out_language_feature_instance = torch::full({NUM_CHANNELS_instance_feature, H, W}, 0.0, float_opts);
  } else {
    out_language_feature = torch::full({1}, 0.0, float_opts);
    out_language_feature_instance = torch::full({1}, 0.0, float_opts);
  }
  torch::Tensor radii = torch::full({P}, 0, means3D.options().dtype(torch::kInt32));
  torch::Tensor out_observe = torch::full({P}, 0, means3D.options().dtype(torch::kInt32));
  torch::Tensor out_all_map = torch::full({NUM_ALL_MAP, H, W}, 0, float_opts);
  torch::Tensor out_plane_depth = torch::full({1, H, W}, 0, float_opts);

  torch::Device device(torch::kCUDA);
  torch::TensorOptions options(torch::kByte);
  torch::Tensor geomBuffer = torch::empty({0}, options.device(device));
  torch::Tensor binningBuffer = torch::empty({0}, options.device(device));
  torch::Tensor imgBuffer = torch::empty({0}, options.device(device));
  std::function<char*(size_t)> geomFunc = resizeFunctional(geomBuffer);
  std::function<char*(size_t)> binningFunc = resizeFunctional(binningBuffer);
  std::function<char*(size_t)> imgFunc = resizeFunctional(imgBuffer);

  int rendered = 0;
  if(P != 0)
  {
    int M = 0;
    if(sh.size(0) != 0)
    {
      M = sh.size(1);
    }
    rendered = CudaRasterizer::Rasterizer::forward(
      geomFunc,
      binningFunc,
      imgFunc,
      P, degree, M,
      background.contiguous().data<float>(),
      W, H,
      means3D.contiguous().data<float>(),
      sh.contiguous().data_ptr<float>(),
      colors.contiguous().data<float>(),
      language_feature.contiguous().data<float>(),
      language_feature_instance.contiguous().data<float>(),
      opacity.contiguous().data<float>(),
      scales.contiguous().data_ptr<float>(),
      scale_modifier,
      rotations.contiguous().data_ptr<float>(),
      cov3D_precomp.contiguous().data<float>(),
      all_map.contiguous().data<float>(),
      viewmatrix.contiguous().data<float>(),
      projmatrix.contiguous().data<float>(),
      campos.contiguous().data<float>(),
      tan_fovx,
      tan_fovy,
      prefiltered,
      out_color.contiguous().data<float>(),
      out_language_feature.contiguous().data<float>(),
      out_language_feature_instance.contiguous().data<float>(),
      radii.contiguous().data<int>(),
      out_observe.contiguous().data<int>(),
      out_all_map.contiguous().data<float>(),
      out_plane_depth.contiguous().data<float>(),
      render_geo,
      debug,
      include_feature);
  }
  return std::make_tuple(rendered, out_color, out_language_feature, out_language_feature_instance, radii, out_observe, out_all_map, out_plane_depth, geomBuffer, binningBuffer, imgBuffer);
}

std::tuple<torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor>
RasterizeGaussiansBackwardCUDA(
  const torch::Tensor& background,
  const torch::Tensor& all_map_pixels,
  const torch::Tensor& means3D,
  const torch::Tensor& radii,
  const torch::Tensor& colors,
  const torch::Tensor& language_feature,
  const torch::Tensor& language_feature_instance,
  const torch::Tensor& all_maps,
  const torch::Tensor& scales,
  const torch::Tensor& rotations,
  const float scale_modifier,
  const torch::Tensor& cov3D_precomp,
  const torch::Tensor& viewmatrix,
  const torch::Tensor& projmatrix,
  const float tan_fovx,
  const float tan_fovy,
  const torch::Tensor& dL_dout_color,
  const torch::Tensor& dL_dout_language_feature,
  const torch::Tensor& dL_dout_language_feature_instance,
  const torch::Tensor& dL_dout_all_map,
  const torch::Tensor& dL_dout_plane_depth,
  const torch::Tensor& sh,
  const int degree,
  const torch::Tensor& campos,
  const torch::Tensor& geomBuffer,
  const int R,
  const torch::Tensor& binningBuffer,
  const torch::Tensor& imageBuffer,
  const bool render_geo,
  const bool debug,
  const bool include_feature)
{
  const int P = means3D.size(0);
  const int H = dL_dout_color.size(1);
  const int W = dL_dout_color.size(2);

  int M = 0;
  if(sh.size(0) != 0)
  {
    M = sh.size(1);
  }

  torch::Tensor dL_dmeans3D = torch::zeros({P, 3}, means3D.options());
  torch::Tensor dL_dmeans2D = torch::zeros({P, 3}, means3D.options());
  torch::Tensor dL_dmeans2D_abs = torch::zeros({P, 3}, means3D.options());
  torch::Tensor dL_dcolors = torch::zeros({P, NUM_CHANNELS}, means3D.options());
  torch::Tensor dL_dlanguage_feature;
  torch::Tensor dL_dlanguage_feature_instance;
  if (include_feature) {
    dL_dlanguage_feature = torch::zeros({P, NUM_CHANNELS_language_feature}, means3D.options());
    dL_dlanguage_feature_instance = torch::zeros({P, NUM_CHANNELS_instance_feature}, means3D.options());
  } else {
    dL_dlanguage_feature = torch::zeros({1}, means3D.options());
    dL_dlanguage_feature_instance = torch::zeros({1}, means3D.options());
  }
  torch::Tensor dL_dall_map = torch::zeros({P, NUM_ALL_MAP}, means3D.options());
  torch::Tensor dL_dconic = torch::zeros({P, 2, 2}, means3D.options());
  torch::Tensor dL_dopacity = torch::zeros({P, 1}, means3D.options());
  torch::Tensor dL_dcov3D = torch::zeros({P, 6}, means3D.options());
  torch::Tensor dL_dsh = torch::zeros({P, M, 3}, means3D.options());
  torch::Tensor dL_dscales = torch::zeros({P, 3}, means3D.options());
  torch::Tensor dL_drotations = torch::zeros({P, 4}, means3D.options());

  if(P != 0)
  {
    CudaRasterizer::Rasterizer::backward(P, degree, M, R,
      background.contiguous().data<float>(),
      all_map_pixels.contiguous().data<float>(),
      W, H,
      means3D.contiguous().data<float>(),
      sh.contiguous().data<float>(),
      colors.contiguous().data<float>(),
      language_feature.contiguous().data<float>(),
      language_feature_instance.contiguous().data<float>(),
      all_maps.contiguous().data<float>(),
      scales.data_ptr<float>(),
      scale_modifier,
      rotations.data_ptr<float>(),
      cov3D_precomp.contiguous().data<float>(),
      viewmatrix.contiguous().data<float>(),
      projmatrix.contiguous().data<float>(),
      campos.contiguous().data<float>(),
      tan_fovx,
      tan_fovy,
      radii.contiguous().data<int>(),
      reinterpret_cast<char*>(geomBuffer.contiguous().data_ptr()),
      reinterpret_cast<char*>(binningBuffer.contiguous().data_ptr()),
      reinterpret_cast<char*>(imageBuffer.contiguous().data_ptr()),
      dL_dout_color.contiguous().data<float>(),
      dL_dout_language_feature.contiguous().data<float>(),
      dL_dout_language_feature_instance.contiguous().data<float>(),
      dL_dout_all_map.contiguous().data<float>(),
      dL_dout_plane_depth.contiguous().data<float>(),
      dL_dmeans2D.contiguous().data<float>(),
      dL_dmeans2D_abs.contiguous().data<float>(),
      dL_dconic.contiguous().data<float>(),
      dL_dopacity.contiguous().data<float>(),
      dL_dcolors.contiguous().data<float>(),
      dL_dlanguage_feature.contiguous().data<float>(),
      dL_dlanguage_feature_instance.contiguous().data<float>(),
      dL_dmeans3D.contiguous().data<float>(),
      dL_dcov3D.contiguous().data<float>(),
      dL_dsh.contiguous().data<float>(),
      dL_dscales.contiguous().data<float>(),
      dL_drotations.contiguous().data<float>(),
      dL_dall_map.contiguous().data<float>(),
      render_geo,
      debug,
      include_feature);
  }

  return std::make_tuple(dL_dmeans2D, dL_dmeans2D_abs, dL_dcolors, dL_dlanguage_feature, dL_dlanguage_feature_instance, dL_dopacity, dL_dmeans3D, dL_dcov3D, dL_dsh, dL_dscales, dL_drotations, dL_dall_map);
}

torch::Tensor markVisible(
  torch::Tensor& means3D,
  torch::Tensor& viewmatrix,
  torch::Tensor& projmatrix)
{
  const int P = means3D.size(0);

  torch::Tensor present = torch::full({P}, false, means3D.options().dtype(at::kBool));

  if(P != 0)
  {
    CudaRasterizer::Rasterizer::markVisible(P,
      means3D.contiguous().data<float>(),
      viewmatrix.contiguous().data<float>(),
      projmatrix.contiguous().data<float>(),
      present.contiguous().data<bool>());
  }

  return present;
}

================================================
FILE: field_construction/submodules/diff-langsurf-rasterizer/rasterize_points.h
================================================
/*
 * Copyright (C) 2023, Inria
 * GRAPHDECO research group, https://team.inria.fr/graphdeco
 * All rights reserved.
 *
 * This software is free for non-commercial, research and evaluation use
 * under the terms of the LICENSE.md file.
 *
 * For inquiries contact george.drettakis@inria.fr
 */
#pragma once
#include <torch/extension.h>
#include <cstdio>
#include <tuple>
#include <string>

std::tuple<int, torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor>
RasterizeGaussiansCUDA(
  const torch::Tensor& background,
  const torch::Tensor& means3D,
  const torch::Tensor& colors,
  const torch::Tensor& language_feature,
  const torch::Tensor& language_feature_instance,
  const torch::Tensor& opacity,
  const torch::Tensor& scales,
  const torch::Tensor& rotations,
  const float scale_modifier,
  const torch::Tensor& cov3D_precomp,
  const torch::Tensor& all_map,
  const torch::Tensor& viewmatrix,
  const torch::Tensor& projmatrix,
  const float tan_fovx,
  const float tan_fovy,
  const int image_height,
  const int image_width,
  const torch::Tensor& sh,
  const int degree,
  const torch::Tensor& campos,
  const bool prefiltered,
  const bool render_geo,
  const bool debug,
  const bool include_feature);

std::tuple<torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor>
RasterizeGaussiansBackwardCUDA(
  const torch::Tensor& background,
  const torch::Tensor& all_map_pixels,
  const torch::Tensor& means3D,
  const torch::Tensor& radii,
  const torch::Tensor& colors,
  const torch::Tensor& language_feature,
  const torch::Tensor& language_feature_instance,
  const torch::Tensor& all_maps,
  const torch::Tensor& scales,
  const torch::Tensor& rotations,
  const float scale_modifier,
  const torch::Tensor& cov3D_precomp,
  const torch::Tensor& viewmatrix,
  const torch::Tensor& projmatrix,
  const float tan_fovx,
  const float tan_fovy,
  const torch::Tensor& dL_dout_color,
  const torch::Tensor& dL_dout_language_feature,
  const torch::Tensor& dL_dout_language_feature_instance,
  const torch::Tensor& dL_dout_all_map,
  const torch::Tensor& dL_dout_plane_depth,
  const torch::Tensor& sh,
  const int degree,
  const torch::Tensor& campos,
  const torch::Tensor& geomBuffer,
  const int R,
  const torch::Tensor& binningBuffer,
  const torch::Tensor& imageBuffer,
  const bool render_geo,
  const bool debug,
  const bool include_feature);

torch::Tensor markVisible(
  torch::Tensor& means3D,
  torch::Tensor& viewmatrix,
  torch::Tensor& projmatrix);
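For orientation, `markVisible` wraps a CUDA frustum-culling kernel that flags which Gaussian centers are worth rasterizing. The following is a rough NumPy re-implementation of that visibility test as a sketch, assuming a simple camera-space near-plane check with the rasterizer's row-vector matrix convention; the real kernel lives in `cuda_rasterizer` and its exact thresholds may differ:

```python
# Hypothetical NumPy sketch of the visibility test behind markVisible.
# The threshold and conventions are illustrative assumptions, not the CUDA code.
import numpy as np

def mark_visible(means3D: np.ndarray, viewmatrix: np.ndarray, near: float = 0.2) -> np.ndarray:
    """Boolean mask of points in front of the camera's near plane.

    means3D:    (P, 3) world-space Gaussian centers.
    viewmatrix: (4, 4) world-to-camera transform, applied to row vectors.
    """
    P = means3D.shape[0]
    homog = np.hstack([means3D, np.ones((P, 1))])  # (P, 4) homogeneous points
    p_view = homog @ viewmatrix                    # transform into camera space
    return p_view[:, 2] > near                     # keep points beyond the near plane

# Example: one point ahead of the (identity) camera, one behind it.
pts = np.array([[0.0, 0.0, 5.0], [0.0, 0.0, -5.0]])
print(mark_visible(pts, np.eye(4)))  # [ True False]
```

On the GPU side the same per-point test is fused into the kernel launch, so no intermediate `(P, 4)` buffer is materialized.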
================================================
FILE: field_construction/submodules/diff-langsurf-rasterizer/setup.py
================================================
#
# Copyright (C) 2023, Inria
# GRAPHDECO research group, https://team.inria.fr/graphdeco
# All rights reserved.
#
# This software is free for non-commercial, research and evaluation use
# under the terms of the LICENSE.md file.
#
# For inquiries contact george.drettakis@inria.fr
#

import os

from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

os.path.dirname(os.path.abspath(__file__))

setup(
    name="diff_LangSurf_rasterization",
    packages=['diff_LangSurf_rasterization'],
    ext_modules=[
        CUDAExtension(
            name="diff_LangSurf_rasterization._C",
            sources=[
                "cuda_rasterizer/rasterizer_impl.cu",
                "cuda_rasterizer/forward.cu",
                "cuda_rasterizer/backward.cu",
                "rasterize_points.cu",
                "ext.cpp"],
            extra_compile_args={"nvcc": ["-I" + os.path.join(os.path.dirname(os.path.abspath(__file__)), "third_party/glm/")]})
    ],
    cmdclass={
        'build_ext': BuildExtension
    }
)

================================================
FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/.appveyor.yml
================================================
shallow_clone: true platform: - x86 - x64 configuration: - Debug - Release image: - Visual Studio 2013 - Visual Studio 2015 - Visual Studio 2017 - Visual Studio 2019 environment: matrix: - GLM_ARGUMENTS: -DGLM_TEST_FORCE_PURE=ON - GLM_ARGUMENTS: -DGLM_TEST_ENABLE_SIMD_SSE2=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON - GLM_ARGUMENTS: -DGLM_TEST_ENABLE_SIMD_AVX=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON - GLM_ARGUMENTS: -DGLM_TEST_ENABLE_SIMD_AVX=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON -DGLM_TEST_ENABLE_CXX_14=ON - GLM_ARGUMENTS: -DGLM_TEST_ENABLE_SIMD_AVX=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON -DGLM_TEST_ENABLE_CXX_17=ON matrix: exclude: - image: Visual Studio 2013 GLM_ARGUMENTS: -DGLM_TEST_ENABLE_SIMD_AVX=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON -
image: Visual Studio 2013 GLM_ARGUMENTS: -DGLM_TEST_ENABLE_SIMD_AVX=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON -DGLM_TEST_ENABLE_CXX_14=ON - image: Visual Studio 2013 GLM_ARGUMENTS: -DGLM_TEST_ENABLE_SIMD_AVX=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON -DGLM_TEST_ENABLE_CXX_17=ON - image: Visual Studio 2013 configuration: Debug - image: Visual Studio 2015 GLM_ARGUMENTS: -DGLM_TEST_ENABLE_SIMD_SSE2=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON - image: Visual Studio 2015 GLM_ARGUMENTS: -DGLM_TEST_ENABLE_SIMD_AVX=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON -DGLM_TEST_ENABLE_CXX_14=ON - image: Visual Studio 2015 GLM_ARGUMENTS: -DGLM_TEST_ENABLE_SIMD_AVX=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON -DGLM_TEST_ENABLE_CXX_17=ON - image: Visual Studio 2015 platform: x86 - image: Visual Studio 2015 configuration: Debug - image: Visual Studio 2017 platform: x86 - image: Visual Studio 2017 configuration: Debug - image: Visual Studio 2019 platform: x64 branches: only: - master before_build: - ps: | mkdir build cd build if ("$env:APPVEYOR_JOB_NAME" -match "Image: Visual Studio 2013") { $env:generator="Visual Studio 12 2013" } if ("$env:APPVEYOR_JOB_NAME" -match "Image: Visual Studio 2015") { $env:generator="Visual Studio 14 2015" } if ("$env:APPVEYOR_JOB_NAME" -match "Image: Visual Studio 2017") { $env:generator="Visual Studio 15 2017" } if ("$env:APPVEYOR_JOB_NAME" -match "Image: Visual Studio 2019") { $env:generator="Visual Studio 16 2019" } if ($env:PLATFORM -eq "x64") { $env:generator="$env:generator Win64" } echo generator="$env:generator" cmake .. -G "$env:generator" -DCMAKE_INSTALL_PREFIX="$env:APPVEYOR_BUILD_FOLDER/install" -DGLM_QUIET=ON -DGLM_TEST_ENABLE=ON "$env:GLM_ARGUMENTS" build_script: - cmake --build . --parallel --config %CONFIGURATION% -- /m /v:minimal - cmake --build . --target install --parallel --config %CONFIGURATION% -- /m /v:minimal test_script: - ctest --parallel 4 --verbose -C %CONFIGURATION% - cd .. 
- ps: | mkdir build_test_cmake cd build_test_cmake cmake ..\test\cmake\ -G "$env:generator" -DCMAKE_PREFIX_PATH="$env:APPVEYOR_BUILD_FOLDER/install" - cmake --build . --parallel --config %CONFIGURATION% -- /m /v:minimal deploy: off ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/.gitignore ================================================ # Compiled Object files *.slo *.lo *.o *.obj # Precompiled Headers *.gch *.pch # Compiled Dynamic libraries *.so *.dylib *.dll # Fortran module files *.mod # Compiled Static libraries *.lai *.la *.a *.lib # Executables *.exe *.out *.app # CMake CMakeCache.txt CMakeFiles cmake_install.cmake install_manifest.txt *.cmake !glmConfig.cmake !glmConfig-version.cmake # ^ May need to add future .cmake files as exceptions # Test logs Testing/* # Test input test/gtc/*.dds # Project Files Makefile *.cbp *.user # Misc. *.log # local build(s) build* /.vs /.vscode /CMakeSettings.json .DS_Store *.swp ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/.travis.yml ================================================ language: cpp branches: only: - master - stable jobs: include: - name: "Xcode 7.3 C++98 pure release" os: osx osx_image: xcode7.3 env: - MATRIX_EVAL="" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Release -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_98=ON -DGLM_TEST_FORCE_PURE=ON" - name: "Xcode 7.3 C++98 sse2 release" os: osx osx_image: xcode7.3 env: - MATRIX_EVAL="" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Release -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_98=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON -DGLM_TEST_ENABLE_SIMD_SSE2=ON" - name: "Xcode 7.3 C++98 ms release" os: osx osx_image: xcode7.3 env: - MATRIX_EVAL="" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Release -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_98=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON" - name: "XCode 7.3 C++11 pure release" os: osx 
osx_image: xcode7.3 env: - MATRIX_EVAL="" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Release -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_11=ON -DGLM_TEST_FORCE_PURE=ON" - name: "XCode 7.3 C++11 sse2 release" os: osx osx_image: xcode7.3 env: - MATRIX_EVAL="" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Release -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_11=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON -DGLM_TEST_ENABLE_SIMD_SSE3=ON" - name: "XCode 10.3 C++11 sse2 release" os: osx osx_image: xcode10.3 env: - MATRIX_EVAL="" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Release -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_11=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON -DGLM_TEST_ENABLE_SIMD_SSE3=ON" - name: "XCode 12.2 C++11 sse2 release" os: osx osx_image: xcode12.2 env: - MATRIX_EVAL="" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Release -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_11=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON -DGLM_TEST_ENABLE_SIMD_SSE3=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "XCode 12.2 C++11 sse2 debug" os: osx osx_image: xcode12.2 env: - MATRIX_EVAL="" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Debug -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_11=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON -DGLM_TEST_ENABLE_SIMD_SSE3=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "XCode 12.2 C++11 avx debug" os: osx osx_image: xcode12.2 env: - MATRIX_EVAL="" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Debug -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_11=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON -DGLM_TEST_ENABLE_SIMD_AVX=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "XCode 12.2 C++14 avx debug" os: osx osx_image: xcode12.2 env: - MATRIX_EVAL="" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Debug -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_14=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON -DGLM_TEST_ENABLE_SIMD_AVX=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "XCode 12.2 C++14 pure 
debug" os: osx osx_image: xcode12.2 env: - MATRIX_EVAL="" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Debug -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_14=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON -DGLM_TEST_FORCE_PURE=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "XCode 12.2 C++17 pure debug" os: osx osx_image: xcode12.2 env: - MATRIX_EVAL="" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Debug -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_17=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON -DGLM_TEST_FORCE_PURE=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "XCode 12.2 C++17 sse2 debug" os: osx osx_image: xcode12.2 env: - MATRIX_EVAL="" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Debug -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_17=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON -DGLM_TEST_ENABLE_SIMD_SSE2=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "XCode 12.2 C++17 sse2 release" os: osx osx_image: xcode12.2 env: - MATRIX_EVAL="" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Release -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_17=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON -DGLM_TEST_ENABLE_SIMD_SSE2=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "XCode 12.2 C++17 avx release" os: osx osx_image: xcode12.2 env: - MATRIX_EVAL="" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Release -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_17=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON -DGLM_TEST_ENABLE_SIMD_AVX=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "GCC 4.9 C++98 pure release" os: linux dist: Xenial addons: apt: sources: - ubuntu-toolchain-r-test packages: - g++-4.9 env: - MATRIX_EVAL="CC=gcc-4.9 && CXX=g++-4.9" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Release -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_98=ON -DGLM_TEST_FORCE_PURE=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "GCC 4.9 C++98 pure debug" os: linux dist: 
Xenial addons: apt: sources: - ubuntu-toolchain-r-test packages: - g++-4.9 env: - MATRIX_EVAL="CC=gcc-4.9 && CXX=g++-4.9" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Debug -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_98=ON -DGLM_TEST_FORCE_PURE=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "GCC 4.9 C++98 ms debug" os: linux dist: Xenial addons: apt: sources: - ubuntu-toolchain-r-test packages: - g++-4.9 env: - MATRIX_EVAL="CC=gcc-4.9 && CXX=g++-4.9" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Debug -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_98=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "GCC 4.9 C++11 ms debug" os: linux dist: Xenial addons: apt: sources: - ubuntu-toolchain-r-test packages: - g++-4.9 env: - MATRIX_EVAL="CC=gcc-4.9 && CXX=g++-4.9" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Debug -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_11=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "GCC 4.9 C++11 pure debug" os: linux dist: Xenial addons: apt: sources: - ubuntu-toolchain-r-test packages: - g++-4.9 env: - MATRIX_EVAL="CC=gcc-4.9 && CXX=g++-4.9" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Debug -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_11=ON -DGLM_TEST_FORCE_PURE=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "GCC 6 C++14 pure debug" os: linux dist: bionic addons: apt: sources: - ubuntu-toolchain-r-test packages: - g++-6 env: - MATRIX_EVAL="CC=gcc-6 && CXX=g++-6" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Debug -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_14=ON -DGLM_TEST_FORCE_PURE=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "GCC 6 C++14 ms debug" os: linux dist: bionic addons: apt: sources: - ubuntu-toolchain-r-test packages: - g++-6 env: - MATRIX_EVAL="CC=gcc-6 && CXX=g++-6" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Debug 
-DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_14=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "GCC 7 C++17 ms debug" os: linux dist: bionic addons: apt: sources: - ubuntu-toolchain-r-test packages: - g++-7 env: - MATRIX_EVAL="CC=gcc-7 && CXX=g++-7" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Debug -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_17=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "GCC 7 C++17 pure debug" os: linux dist: bionic addons: apt: sources: - ubuntu-toolchain-r-test packages: - g++-7 env: - MATRIX_EVAL="CC=gcc-7 && CXX=g++-7" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Debug -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_17=ON -DGLM_TEST_FORCE_PURE=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "GCC 10 C++17 pure debug" os: linux dist: bionic addons: apt: sources: - ubuntu-toolchain-r-test packages: - g++-10 env: - MATRIX_EVAL="CC=gcc-10 && CXX=g++-10" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Debug -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_17=ON -DGLM_TEST_FORCE_PURE=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "GCC 10 C++17 pure release" os: linux dist: bionic addons: apt: sources: - ubuntu-toolchain-r-test packages: - g++-10 env: - MATRIX_EVAL="CC=gcc-10 && CXX=g++-10" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Release -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_17=ON -DGLM_TEST_FORCE_PURE=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "Clang C++14 pure release" os: linux dist: Xenial env: - MATRIX_EVAL="CC=clang && CXX=clang++" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Release -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_14=ON -DGLM_TEST_FORCE_PURE=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "Clang C++14 pure debug" os: linux dist: Xenial env: - MATRIX_EVAL="CC=clang && 
CXX=clang++" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Debug -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_14=ON -DGLM_TEST_FORCE_PURE=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "Clang C++14 sse2 debug" os: linux dist: Xenial env: - MATRIX_EVAL="CC=clang && CXX=clang++" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Debug -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_14=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON -DGLM_TEST_ENABLE_SIMD_SSE2=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "Clang C++14 sse2 debug" os: linux dist: focal env: - MATRIX_EVAL="CC=clang && CXX=clang++" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Debug -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_14=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON -DGLM_TEST_ENABLE_SIMD_SSE2=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "Clang C++17 sse2 debug" os: linux dist: focal env: - MATRIX_EVAL="CC=clang && CXX=clang++" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Debug -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_17=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON -DGLM_TEST_ENABLE_SIMD_SSE2=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "Clang C++17 avx2 debug" os: linux dist: focal env: - MATRIX_EVAL="CC=clang && CXX=clang++" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Debug -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_17=ON -DGLM_TEST_ENABLE_LANG_EXTENSIONS=ON -DGLM_TEST_ENABLE_SIMD_AVX2=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "Clang C++17 pure debug" os: linux dist: focal env: - MATRIX_EVAL="CC=clang && CXX=clang++" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Debug -DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_17=ON -DGLM_TEST_FORCE_PURE=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" - name: "Clang C++17 pure release" os: linux dist: focal env: - MATRIX_EVAL="CC=clang && CXX=clang++" - CMAKE_BUILD_ENV="-DCMAKE_BUILD_TYPE=Release 
-DGLM_TEST_ENABLE=ON -DGLM_TEST_ENABLE_CXX_17=ON -DGLM_TEST_FORCE_PURE=ON" - CTEST_ENV="--parallel 4 --output-on-failure" - CMAKE_ENV="--parallel" before_script: - cmake --version - eval "${MATRIX_EVAL}" script: - ${CC} --version - mkdir ./build - cd ./build - cmake -DCMAKE_INSTALL_PREFIX=$TRAVIS_BUILD_DIR/install -DCMAKE_CXX_COMPILER=$COMPILER ${CMAKE_BUILD_ENV} .. - cmake --build . ${CMAKE_ENV} - ctest ${CTEST_ENV} - cmake --build . --target install ${CMAKE_ENV} - cd $TRAVIS_BUILD_DIR - mkdir ./build_test_cmake - cd ./build_test_cmake - cmake -DCMAKE_CXX_COMPILER=$COMPILER $TRAVIS_BUILD_DIR/test/cmake/ -DCMAKE_PREFIX_PATH=$TRAVIS_BUILD_DIR/install - cmake --build . ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/CMakeLists.txt ================================================ cmake_minimum_required(VERSION 3.2 FATAL_ERROR) cmake_policy(VERSION 3.2) file(READ "glm/detail/setup.hpp" GLM_SETUP_FILE) string(REGEX MATCH "#define[ ]+GLM_VERSION_MAJOR[ ]+([0-9]+)" _ ${GLM_SETUP_FILE}) set(GLM_VERSION_MAJOR "${CMAKE_MATCH_1}") string(REGEX MATCH "#define[ ]+GLM_VERSION_MINOR[ ]+([0-9]+)" _ ${GLM_SETUP_FILE}) set(GLM_VERSION_MINOR "${CMAKE_MATCH_1}") string(REGEX MATCH "#define[ ]+GLM_VERSION_PATCH[ ]+([0-9]+)" _ ${GLM_SETUP_FILE}) set(GLM_VERSION_PATCH "${CMAKE_MATCH_1}") string(REGEX MATCH "#define[ ]+GLM_VERSION_REVISION[ ]+([0-9]+)" _ ${GLM_SETUP_FILE}) set(GLM_VERSION_REVISION "${CMAKE_MATCH_1}") set(GLM_VERSION ${GLM_VERSION_MAJOR}.${GLM_VERSION_MINOR}.${GLM_VERSION_PATCH}.${GLM_VERSION_REVISION}) project(glm VERSION ${GLM_VERSION} LANGUAGES CXX) message(STATUS "GLM: Version " ${GLM_VERSION}) add_subdirectory(glm) add_library(glm::glm ALIAS glm) if(${CMAKE_SOURCE_DIR} STREQUAL ${CMAKE_CURRENT_SOURCE_DIR}) include(CPack) install(DIRECTORY glm DESTINATION ${CMAKE_INSTALL_INCLUDEDIR} PATTERN "CMakeLists.txt" EXCLUDE) install(EXPORT glm FILE glmConfig.cmake DESTINATION 
${CMAKE_INSTALL_LIBDIR}/cmake/glm NAMESPACE glm::) include(CMakePackageConfigHelpers) write_basic_package_version_file("glmConfigVersion.cmake" COMPATIBILITY AnyNewerVersion) install(FILES ${CMAKE_CURRENT_BINARY_DIR}/glmConfigVersion.cmake DESTINATION ${CMAKE_INSTALL_LIBDIR}/cmake/glm) include(CTest) if(BUILD_TESTING) add_subdirectory(test) endif() endif(${CMAKE_SOURCE_DIR} STREQUAL ${CMAKE_CURRENT_SOURCE_DIR}) if (NOT TARGET uninstall) configure_file(cmake/cmake_uninstall.cmake.in cmake_uninstall.cmake IMMEDIATE @ONLY) add_custom_target(uninstall "${CMAKE_COMMAND}" -P "${CMAKE_BINARY_DIR}/cmake_uninstall.cmake") endif() ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/cmake/cmake_uninstall.cmake.in ================================================ if(NOT EXISTS "@CMAKE_BINARY_DIR@/install_manifest.txt") message(FATAL_ERROR "Cannot find install manifest: @CMAKE_BINARY_DIR@/install_manifest.txt") endif() file(READ "@CMAKE_BINARY_DIR@/install_manifest.txt" files) string(REGEX REPLACE "\n" ";" files "${files}") foreach(file ${files}) message(STATUS "Uninstalling $ENV{DESTDIR}${file}") if(IS_SYMLINK "$ENV{DESTDIR}${file}" OR EXISTS "$ENV{DESTDIR}${file}") exec_program( "@CMAKE_COMMAND@" ARGS "-E remove \"$ENV{DESTDIR}${file}\"" OUTPUT_VARIABLE rm_out RETURN_VALUE rm_retval ) if(NOT "${rm_retval}" STREQUAL 0) message(FATAL_ERROR "Problem when removing $ENV{DESTDIR}${file}") endif() else(IS_SYMLINK "$ENV{DESTDIR}${file}" OR EXISTS "$ENV{DESTDIR}${file}") message(STATUS "File $ENV{DESTDIR}${file} does not exist.") endif() endforeach() ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/copying.txt ================================================ ================================================================================ OpenGL Mathematics (GLM) 
-------------------------------------------------------------------------------- GLM is licensed under The Happy Bunny License or MIT License ================================================================================ The Happy Bunny License (Modified MIT License) -------------------------------------------------------------------------------- Copyright (c) 2005 - G-Truc Creation Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. Restrictions: By making use of the Software for military purposes, you choose to make a Bunny unhappy. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
================================================================================ The MIT License -------------------------------------------------------------------------------- Copyright (c) 2005 - G-Truc Creation Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00001_source.html ================================================ 0.9.9 API documentation: _features.hpp Source File
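The listing that follows enumerates GLM's C++ feature-detection stubs, pairing each C++11 capability with the first GCC release that supports it. As a quick sanity check, those pairs can be expressed as a small lookup table (a subset transcribed from the listing's comments; the helper below is illustrative only, not part of GLM):

```python
# C++11 feature macro -> first supporting GCC release, transcribed from the
# GLM _features.hpp comments below (subset; illustrative only).
CXX11_FEATURE_GCC = {
    "GLM_CXX11_RVALUE_REFERENCES": "4.3",
    "GLM_CXX11_VARIADIC_TEMPLATE": "4.3",
    "GLM_CXX11_STATIC_ASSERT": "4.3",
    "GLM_CXX11_DECLTYPE": "4.3",
    "GLM_CXX11_AUTO_TYPE": "4.4",
    "GLM_CXX11_GENERALIZED_INITIALIZERS": "4.4",
    "GLM_CXX11_STRONG_ENUMS": "4.4",
    "GLM_CXX11_INLINE_NAMESPACES": "4.4",
    "GLM_CXX11_LAMBDAS": "4.5",
    "GLM_CXX11_EXPLICIT_CONVERSIONS": "4.5",
    "GLM_CXX11_NULLPTR": "4.6",
    "GLM_CXX11_ALIAS_TEMPLATE": "4.7",
    "GLM_CXX11_USER_LITERALS": "4.7",
    "GLM_CXX11_DELEGATING_CONSTRUCTORS": "4.7",
}

def supported(feature: str, gcc_version: str) -> bool:
    """True if the given GCC version (e.g. '4.6') ships the feature."""
    need = tuple(int(x) for x in CXX11_FEATURE_GCC[feature].split("."))
    have = tuple(int(x) for x in gcc_version.split("."))
    return have >= need

print(supported("GLM_CXX11_NULLPTR", "4.6"))        # True
print(supported("GLM_CXX11_USER_LITERALS", "4.6"))  # False
```

GLM itself performs the equivalent checks at preprocessing time via `__GNUC__`/`__GNUC_MINOR__` rather than at runtime.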
#pragma once

// #define GLM_CXX98_EXCEPTIONS
// #define GLM_CXX98_RTTI

// #define GLM_CXX11_RVALUE_REFERENCES
// Rvalue references - GCC 4.3
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2118.html

// GLM_CXX11_TRAILING_RETURN
// Rvalue references for *this - GCC not supported
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2439.htm

// GLM_CXX11_NONSTATIC_MEMBER_INIT
// Initialization of class objects by rvalues - GCC any
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2004/n1610.html

// GLM_CXX11_NONSTATIC_MEMBER_INIT
// Non-static data member initializers - GCC 4.7
// http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2008/n2756.htm

// #define GLM_CXX11_VARIADIC_TEMPLATE
// Variadic templates - GCC 4.3
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2242.pdf

//
// Extending variadic template template parameters - GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2555.pdf

// #define GLM_CXX11_GENERALIZED_INITIALIZERS
// Initializer lists - GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2672.htm

// #define GLM_CXX11_STATIC_ASSERT
// Static assertions - GCC 4.3
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2004/n1720.html

// #define GLM_CXX11_AUTO_TYPE
// auto-typed variables - GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n1984.pdf

// #define GLM_CXX11_AUTO_TYPE
// Multi-declarator auto - GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2004/n1737.pdf

// #define GLM_CXX11_AUTO_TYPE
// Removal of auto as a storage-class specifier - GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2546.htm

// #define GLM_CXX11_AUTO_TYPE
// New function declarator syntax - GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2541.htm

// #define GLM_CXX11_LAMBDAS
// New wording for C++0x lambdas - GCC 4.5
// http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2009/n2927.pdf

// #define GLM_CXX11_DECLTYPE
// Declared type of an expression - GCC 4.3
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2343.pdf

//
// Right angle brackets - GCC 4.3
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1757.html

//
// Default template arguments for function templates DR226 GCC 4.3
// http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_defects.html#226

//
// Solving the SFINAE problem for expressions DR339 GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2634.html

// #define GLM_CXX11_ALIAS_TEMPLATE
// Template aliases N2258 GCC 4.7
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2258.pdf

//
// Extern templates N1987 Yes
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n1987.htm

// #define GLM_CXX11_NULLPTR
// Null pointer constant N2431 GCC 4.6
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2431.pdf

// #define GLM_CXX11_STRONG_ENUMS
// Strongly-typed enums N2347 GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2347.pdf

//
// Forward declarations for enums N2764 GCC 4.6
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2764.pdf

//
// Generalized attributes N2761 GCC 4.8
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2761.pdf

//
// Generalized constant expressions N2235 GCC 4.6
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2235.pdf

//
// Alignment support N2341 GCC 4.8
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2341.pdf

// #define GLM_CXX11_DELEGATING_CONSTRUCTORS
// Delegating constructors N1986 GCC 4.7
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n1986.pdf

//
// Inheriting constructors N2540 GCC 4.8
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2540.htm

// #define GLM_CXX11_EXPLICIT_CONVERSIONS
// Explicit conversion operators N2437 GCC 4.5
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2437.pdf

//
// New character types N2249 GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2249.html

//
// Unicode string literals N2442 GCC 4.5
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2442.htm

//
// Raw string literals N2442 GCC 4.5
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2442.htm

//
// Universal character name literals N2170 GCC 4.5
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2170.html

// #define GLM_CXX11_USER_LITERALS
// User-defined literals N2765 GCC 4.7
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2765.pdf

//
// Standard Layout Types N2342 GCC 4.5
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2342.htm

// #define GLM_CXX11_DEFAULTED_FUNCTIONS
// #define GLM_CXX11_DELETED_FUNCTIONS
// Defaulted and deleted functions N2346 GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2346.htm

//
// Extended friend declarations N1791 GCC 4.7
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1791.pdf

//
// Extending sizeof N2253 GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2253.html

// #define GLM_CXX11_INLINE_NAMESPACES
// Inline namespaces N2535 GCC 4.4
// http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2535.htm
158 
159 // #define GLM_CXX11_UNRESTRICTED_UNIONS
160 // Unrestricted unions N2544 GCC 4.6
161 // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2544.pdf
162 
163 // #define GLM_CXX11_LOCAL_TYPE_TEMPLATE_ARGS
164 // Local and unnamed types as template arguments N2657 GCC 4.5
165 // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2657.htm
166 
167 // #define GLM_CXX11_RANGE_FOR
168 // Range-based for N2930 GCC 4.6
169 // http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2009/n2930.html
170 
171 // #define GLM_CXX11_OVERRIDE_CONTROL
172 // Explicit virtual overrides N2928 N3206 N3272 GCC 4.7
173 // http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2009/n2928.htm
174 // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n3206.htm
175 // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2011/n3272.htm
176 
177 //
178 // Minimal support for garbage collection and reachability-based leak detection N2670 No
179 // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2670.htm
180 
181 // #define GLM_CXX11_NOEXCEPT
182 // Allowing move constructors to throw [noexcept] N3050 GCC 4.6 (core language only)
183 // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n3050.html
184 
185 //
186 // Defining move special member functions N3053 GCC 4.6
187 // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n3053.html
188 
189 //
190 // Sequence points N2239 Yes
191 // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2239.html
192 
193 //
194 // Atomic operations N2427 GCC 4.4
195 // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2427.html
196 
197 //
198 // Strong Compare and Exchange N2748 GCC 4.5
199 // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2748.html
200 
201 //
202 // Bidirectional Fences N2752 GCC 4.8
203 // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2752.htm
204 
205 //
206 // Memory model N2429 GCC 4.8
207 // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2429.htm
208 
209 //
210 // Data-dependency ordering: atomics and memory model N2664 GCC 4.4
211 // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2664.htm
212 
213 //
214 // Propagating exceptions N2179 GCC 4.4
215 // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2179.html
216 
217 //
218 // Abandoning a process and at_quick_exit N2440 GCC 4.8
219 // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2440.htm
220 
221 //
222 // Allow atomics use in signal handlers N2547 Yes
223 // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2547.htm
224 
225 //
226 // Thread-local storage N2659 GCC 4.8
227 // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2659.htm
228 
229 //
230 // Dynamic initialization and destruction with concurrency N2660 GCC 4.3
231 // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2660.htm
232 
233 //
234 // __func__ predefined identifier N2340 GCC 4.3
235 // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2340.htm
236 
237 //
238 // C99 preprocessor N1653 GCC 4.3
239 // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2004/n1653.htm
240 
241 //
242 // long long N1811 GCC 4.3
243 // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1811.pdf
244 
245 //
246 // Extended integral types N1988 Yes
247 // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n1988.pdf
248 
249 #if(GLM_COMPILER & GLM_COMPILER_GCC)
250 
251 # define GLM_CXX11_STATIC_ASSERT
252 
253 #elif(GLM_COMPILER & GLM_COMPILER_CLANG)
254 # if(__has_feature(cxx_exceptions))
255 # define GLM_CXX98_EXCEPTIONS
256 # endif
257 
258 # if(__has_feature(cxx_rtti))
259 # define GLM_CXX98_RTTI
260 # endif
261 
262 # if(__has_feature(cxx_access_control_sfinae))
263 # define GLM_CXX11_ACCESS_CONTROL_SFINAE
264 # endif
265 
266 # if(__has_feature(cxx_alias_templates))
267 # define GLM_CXX11_ALIAS_TEMPLATE
268 # endif
269 
270 # if(__has_feature(cxx_alignas))
271 # define GLM_CXX11_ALIGNAS
272 # endif
273 
274 # if(__has_feature(cxx_attributes))
275 # define GLM_CXX11_ATTRIBUTES
276 # endif
277 
278 # if(__has_feature(cxx_constexpr))
279 # define GLM_CXX11_CONSTEXPR
280 # endif
281 
282 # if(__has_feature(cxx_decltype))
283 # define GLM_CXX11_DECLTYPE
284 # endif
285 
286 # if(__has_feature(cxx_default_function_template_args))
287 # define GLM_CXX11_DEFAULT_FUNCTION_TEMPLATE_ARGS
288 # endif
289 
290 # if(__has_feature(cxx_defaulted_functions))
291 # define GLM_CXX11_DEFAULTED_FUNCTIONS
292 # endif
293 
294 # if(__has_feature(cxx_delegating_constructors))
295 # define GLM_CXX11_DELEGATING_CONSTRUCTORS
296 # endif
297 
298 # if(__has_feature(cxx_deleted_functions))
299 # define GLM_CXX11_DELETED_FUNCTIONS
300 # endif
301 
302 # if(__has_feature(cxx_explicit_conversions))
303 # define GLM_CXX11_EXPLICIT_CONVERSIONS
304 # endif
305 
306 # if(__has_feature(cxx_generalized_initializers))
307 # define GLM_CXX11_GENERALIZED_INITIALIZERS
308 # endif
309 
310 # if(__has_feature(cxx_implicit_moves))
311 # define GLM_CXX11_IMPLICIT_MOVES
312 # endif
313 
314 # if(__has_feature(cxx_inheriting_constructors))
315 # define GLM_CXX11_INHERITING_CONSTRUCTORS
316 # endif
317 
318 # if(__has_feature(cxx_inline_namespaces))
319 # define GLM_CXX11_INLINE_NAMESPACES
320 # endif
321 
322 # if(__has_feature(cxx_lambdas))
323 # define GLM_CXX11_LAMBDAS
324 # endif
325 
326 # if(__has_feature(cxx_local_type_template_args))
327 # define GLM_CXX11_LOCAL_TYPE_TEMPLATE_ARGS
328 # endif
329 
330 # if(__has_feature(cxx_noexcept))
331 # define GLM_CXX11_NOEXCEPT
332 # endif
333 
334 # if(__has_feature(cxx_nonstatic_member_init))
335 # define GLM_CXX11_NONSTATIC_MEMBER_INIT
336 # endif
337 
338 # if(__has_feature(cxx_nullptr))
339 # define GLM_CXX11_NULLPTR
340 # endif
341 
342 # if(__has_feature(cxx_override_control))
343 # define GLM_CXX11_OVERRIDE_CONTROL
344 # endif
345 
346 # if(__has_feature(cxx_reference_qualified_functions))
347 # define GLM_CXX11_REFERENCE_QUALIFIED_FUNCTIONS
348 # endif
349 
350 # if(__has_feature(cxx_range_for))
351 # define GLM_CXX11_RANGE_FOR
352 # endif
353 
354 # if(__has_feature(cxx_raw_string_literals))
355 # define GLM_CXX11_RAW_STRING_LITERALS
356 # endif
357 
358 # if(__has_feature(cxx_rvalue_references))
359 # define GLM_CXX11_RVALUE_REFERENCES
360 # endif
361 
362 # if(__has_feature(cxx_static_assert))
363 # define GLM_CXX11_STATIC_ASSERT
364 # endif
365 
366 # if(__has_feature(cxx_auto_type))
367 # define GLM_CXX11_AUTO_TYPE
368 # endif
369 
370 # if(__has_feature(cxx_strong_enums))
371 # define GLM_CXX11_STRONG_ENUMS
372 # endif
373 
374 # if(__has_feature(cxx_trailing_return))
375 # define GLM_CXX11_TRAILING_RETURN
376 # endif
377 
378 # if(__has_feature(cxx_unicode_literals))
379 # define GLM_CXX11_UNICODE_LITERALS
380 # endif
381 
382 # if(__has_feature(cxx_unrestricted_unions))
383 # define GLM_CXX11_UNRESTRICTED_UNIONS
384 # endif
385 
386 # if(__has_feature(cxx_user_literals))
387 # define GLM_CXX11_USER_LITERALS
388 # endif
389 
390 # if(__has_feature(cxx_variadic_templates))
391 # define GLM_CXX11_VARIADIC_TEMPLATES
392 # endif
393 
394 #endif//(GLM_COMPILER & GLM_COMPILER_CLANG)
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00002_source.html ================================================ 0.9.9 API documentation: _fixes.hpp Source File
0.9.9 API documentation
_fixes.hpp
1 #include <cmath>
2 
4 #ifdef max
5 #undef max
6 #endif
7 
9 #ifdef min
10 #undef min
11 #endif
12 
14 #ifdef isnan
15 #undef isnan
16 #endif
17 
19 #ifdef isinf
20 #undef isinf
21 #endif
22 
24 #ifdef log2
25 #undef log2
26 #endif
27 
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00003_source.html ================================================ 0.9.9 API documentation: _noise.hpp Source File
_noise.hpp
1 #pragma once
2 
3 #include "../common.hpp"
4 
5 namespace glm{
6 namespace detail
7 {
8  template<typename T>
9  GLM_FUNC_QUALIFIER T mod289(T const& x)
10  {
11  return x - floor(x * (static_cast<T>(1.0) / static_cast<T>(289.0))) * static_cast<T>(289.0);
12  }
13 
14  template<typename T>
15  GLM_FUNC_QUALIFIER T permute(T const& x)
16  {
17  return mod289(((x * static_cast<T>(34)) + static_cast<T>(1)) * x);
18  }
19 
20  template<typename T, qualifier Q>
21  GLM_FUNC_QUALIFIER vec<2, T, Q> permute(vec<2, T, Q> const& x)
22  {
23  return mod289(((x * static_cast<T>(34)) + static_cast<T>(1)) * x);
24  }
25 
26  template<typename T, qualifier Q>
27  GLM_FUNC_QUALIFIER vec<3, T, Q> permute(vec<3, T, Q> const& x)
28  {
29  return mod289(((x * static_cast<T>(34)) + static_cast<T>(1)) * x);
30  }
31 
32  template<typename T, qualifier Q>
33  GLM_FUNC_QUALIFIER vec<4, T, Q> permute(vec<4, T, Q> const& x)
34  {
35  return mod289(((x * static_cast<T>(34)) + static_cast<T>(1)) * x);
36  }
37 
38  template<typename T>
39  GLM_FUNC_QUALIFIER T taylorInvSqrt(T const& r)
40  {
41  return static_cast<T>(1.79284291400159) - static_cast<T>(0.85373472095314) * r;
42  }
43 
44  template<typename T, qualifier Q>
45  GLM_FUNC_QUALIFIER vec<2, T, Q> taylorInvSqrt(vec<2, T, Q> const& r)
46  {
47  return static_cast<T>(1.79284291400159) - static_cast<T>(0.85373472095314) * r;
48  }
49 
50  template<typename T, qualifier Q>
51  GLM_FUNC_QUALIFIER vec<3, T, Q> taylorInvSqrt(vec<3, T, Q> const& r)
52  {
53  return static_cast<T>(1.79284291400159) - static_cast<T>(0.85373472095314) * r;
54  }
55 
56  template<typename T, qualifier Q>
57  GLM_FUNC_QUALIFIER vec<4, T, Q> taylorInvSqrt(vec<4, T, Q> const& r)
58  {
59  return static_cast<T>(1.79284291400159) - static_cast<T>(0.85373472095314) * r;
60  }
61 
62  template<typename T, qualifier Q>
63  GLM_FUNC_QUALIFIER vec<2, T, Q> fade(vec<2, T, Q> const& t)
64  {
65  return (t * t * t) * (t * (t * static_cast<T>(6) - static_cast<T>(15)) + static_cast<T>(10));
66  }
67 
68  template<typename T, qualifier Q>
69  GLM_FUNC_QUALIFIER vec<3, T, Q> fade(vec<3, T, Q> const& t)
70  {
71  return (t * t * t) * (t * (t * static_cast<T>(6) - static_cast<T>(15)) + static_cast<T>(10));
72  }
73 
74  template<typename T, qualifier Q>
75  GLM_FUNC_QUALIFIER vec<4, T, Q> fade(vec<4, T, Q> const& t)
76  {
77  return (t * t * t) * (t * (t * static_cast<T>(6) - static_cast<T>(15)) + static_cast<T>(10));
78  }
79 }//namespace detail
80 }//namespace glm
81 
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00004_source.html ================================================ 0.9.9 API documentation: _swizzle.hpp Source File
_swizzle.hpp
1 #pragma once
2 
3 namespace glm{
4 namespace detail
5 {
6  // Internal class for implementing swizzle operators
7  template<typename T, int N>
8  struct _swizzle_base0
9  {
10  protected:
11  GLM_FUNC_QUALIFIER T& elem(size_t i){ return (reinterpret_cast<T*>(_buffer))[i]; }
12  GLM_FUNC_QUALIFIER T const& elem(size_t i) const{ return (reinterpret_cast<const T*>(_buffer))[i]; }
13 
14  // Use an opaque buffer to *ensure* the compiler doesn't call a constructor.
15  // The size-1 buffer is assumed to be aligned with the actual members so that
16  // the elem() accessors return valid references into the vector's storage.
17  char _buffer[1];
18  };
19 
20  template<int N, typename T, qualifier Q, int E0, int E1, int E2, int E3, bool Aligned>
21  struct _swizzle_base1 : public _swizzle_base0<T, N>
22  {
23  };
24 
25  template<typename T, qualifier Q, int E0, int E1, bool Aligned>
26  struct _swizzle_base1<2, T, Q, E0,E1,-1,-2, Aligned> : public _swizzle_base0<T, 2>
27  {
28  GLM_FUNC_QUALIFIER vec<2, T, Q> operator ()() const { return vec<2, T, Q>(this->elem(E0), this->elem(E1)); }
29  };
30 
31  template<typename T, qualifier Q, int E0, int E1, int E2, bool Aligned>
32  struct _swizzle_base1<3, T, Q, E0,E1,E2,-1, Aligned> : public _swizzle_base0<T, 3>
33  {
34  GLM_FUNC_QUALIFIER vec<3, T, Q> operator ()() const { return vec<3, T, Q>(this->elem(E0), this->elem(E1), this->elem(E2)); }
35  };
36 
37  template<typename T, qualifier Q, int E0, int E1, int E2, int E3, bool Aligned>
38  struct _swizzle_base1<4, T, Q, E0,E1,E2,E3, Aligned> : public _swizzle_base0<T, 4>
39  {
40  GLM_FUNC_QUALIFIER vec<4, T, Q> operator ()() const { return vec<4, T, Q>(this->elem(E0), this->elem(E1), this->elem(E2), this->elem(E3)); }
41  };
42 
43  // Internal class for implementing swizzle operators
44  /*
45  Template parameters:
46 
47  T = type of scalar values (e.g. float, double)
48  N = number of components in the vector (e.g. 3)
49  E0...3 = what index the n-th element of this swizzle refers to in the unswizzled vec
50 
51  DUPLICATE_ELEMENTS = 1 if there is a repeated element, 0 otherwise (used to specialize swizzles
52  containing duplicate elements so that they cannot be used as r-values).
53  */
54  template<int N, typename T, qualifier Q, int E0, int E1, int E2, int E3, int DUPLICATE_ELEMENTS>
55  struct _swizzle_base2 : public _swizzle_base1<N, T, Q, E0,E1,E2,E3, detail::is_aligned<Q>::value>
56  {
57  struct op_equal
58  {
59  GLM_FUNC_QUALIFIER void operator() (T& e, T& t) const{ e = t; }
60  };
61 
62  struct op_minus
63  {
64  GLM_FUNC_QUALIFIER void operator() (T& e, T& t) const{ e -= t; }
65  };
66 
67  struct op_plus
68  {
69  GLM_FUNC_QUALIFIER void operator() (T& e, T& t) const{ e += t; }
70  };
71 
72  struct op_mul
73  {
74  GLM_FUNC_QUALIFIER void operator() (T& e, T& t) const{ e *= t; }
75  };
76 
77  struct op_div
78  {
79  GLM_FUNC_QUALIFIER void operator() (T& e, T& t) const{ e /= t; }
80  };
81 
82  public:
83  GLM_FUNC_QUALIFIER _swizzle_base2& operator= (const T& t)
84  {
85  for (int i = 0; i < N; ++i)
86  (*this)[i] = t;
87  return *this;
88  }
89 
90  GLM_FUNC_QUALIFIER _swizzle_base2& operator= (vec<N, T, Q> const& that)
91  {
92  _apply_op(that, op_equal());
93  return *this;
94  }
95 
96  GLM_FUNC_QUALIFIER void operator -= (vec<N, T, Q> const& that)
97  {
98  _apply_op(that, op_minus());
99  }
100 
101  GLM_FUNC_QUALIFIER void operator += (vec<N, T, Q> const& that)
102  {
103  _apply_op(that, op_plus());
104  }
105 
106  GLM_FUNC_QUALIFIER void operator *= (vec<N, T, Q> const& that)
107  {
108  _apply_op(that, op_mul());
109  }
110 
111  GLM_FUNC_QUALIFIER void operator /= (vec<N, T, Q> const& that)
112  {
113  _apply_op(that, op_div());
114  }
115 
116  GLM_FUNC_QUALIFIER T& operator[](size_t i)
117  {
118  const int offset_dst[4] = { E0, E1, E2, E3 };
119  return this->elem(offset_dst[i]);
120  }
121  GLM_FUNC_QUALIFIER T operator[](size_t i) const
122  {
123  const int offset_dst[4] = { E0, E1, E2, E3 };
124  return this->elem(offset_dst[i]);
125  }
126 
127  protected:
128  template<typename U>
129  GLM_FUNC_QUALIFIER void _apply_op(vec<N, T, Q> const& that, const U& op)
130  {
131  // Make a copy of the data in case this == &that.
132  // The copier should optimize out the copy in cases where the function is
133  // properly inlined and the copy is not necessary.
134  T t[N];
135  for (int i = 0; i < N; ++i)
136  t[i] = that[i];
137  for (int i = 0; i < N; ++i)
138  op( (*this)[i], t[i] );
139  }
140  };
141 
142  // Specialization for swizzles containing duplicate elements. These cannot be modified.
143  template<int N, typename T, qualifier Q, int E0, int E1, int E2, int E3>
144  struct _swizzle_base2<N, T, Q, E0,E1,E2,E3, 1> : public _swizzle_base1<N, T, Q, E0,E1,E2,E3, detail::is_aligned<Q>::value>
145  {
146  struct Stub {};
147 
148  GLM_FUNC_QUALIFIER _swizzle_base2& operator= (Stub const&) { return *this; }
149 
150  GLM_FUNC_QUALIFIER T operator[] (size_t i) const
151  {
152  const int offset_dst[4] = { E0, E1, E2, E3 };
153  return this->elem(offset_dst[i]);
154  }
155  };
156 
157  template<int N, typename T, qualifier Q, int E0, int E1, int E2, int E3>
158  struct _swizzle : public _swizzle_base2<N, T, Q, E0, E1, E2, E3, (E0 == E1 || E0 == E2 || E0 == E3 || E1 == E2 || E1 == E3 || E2 == E3)>
159  {
160  typedef _swizzle_base2<N, T, Q, E0, E1, E2, E3, (E0 == E1 || E0 == E2 || E0 == E3 || E1 == E2 || E1 == E3 || E2 == E3)> base_type;
161 
162  using base_type::operator=;
163 
164  GLM_FUNC_QUALIFIER operator vec<N, T, Q> () const { return (*this)(); }
165  };
166 
167 //
168 // To prevent the C++ syntax from getting entirely overwhelming, define some alias macros
169 //
170 #define GLM_SWIZZLE_TEMPLATE1 template<int N, typename T, qualifier Q, int E0, int E1, int E2, int E3>
171 #define GLM_SWIZZLE_TEMPLATE2 template<int N, typename T, qualifier Q, int E0, int E1, int E2, int E3, int F0, int F1, int F2, int F3>
172 #define GLM_SWIZZLE_TYPE1 _swizzle<N, T, Q, E0, E1, E2, E3>
173 #define GLM_SWIZZLE_TYPE2 _swizzle<N, T, Q, F0, F1, F2, F3>
174 
175 //
176 // Wrapper for a binary operator (e.g. u.yy + v.zy)
177 //
178 #define GLM_SWIZZLE_VECTOR_BINARY_OPERATOR_IMPLEMENTATION(OPERAND) \
179  GLM_SWIZZLE_TEMPLATE2 \
180  GLM_FUNC_QUALIFIER vec<N, T, Q> operator OPERAND ( const GLM_SWIZZLE_TYPE1& a, const GLM_SWIZZLE_TYPE2& b) \
181  { \
182  return a() OPERAND b(); \
183  } \
184  GLM_SWIZZLE_TEMPLATE1 \
185  GLM_FUNC_QUALIFIER vec<N, T, Q> operator OPERAND ( const GLM_SWIZZLE_TYPE1& a, const vec<N, T, Q>& b) \
186  { \
187  return a() OPERAND b; \
188  } \
189  GLM_SWIZZLE_TEMPLATE1 \
190  GLM_FUNC_QUALIFIER vec<N, T, Q> operator OPERAND ( const vec<N, T, Q>& a, const GLM_SWIZZLE_TYPE1& b) \
191  { \
192  return a OPERAND b(); \
193  }
194 
195 //
196 // Wrapper for a binary operator between a swizzle and a scalar (e.g. 1.0f - u.xyz)
197 //
198 #define GLM_SWIZZLE_SCALAR_BINARY_OPERATOR_IMPLEMENTATION(OPERAND) \
199  GLM_SWIZZLE_TEMPLATE1 \
200  GLM_FUNC_QUALIFIER vec<N, T, Q> operator OPERAND ( const GLM_SWIZZLE_TYPE1& a, const T& b) \
201  { \
202  return a() OPERAND b; \
203  } \
204  GLM_SWIZZLE_TEMPLATE1 \
205  GLM_FUNC_QUALIFIER vec<N, T, Q> operator OPERAND ( const T& a, const GLM_SWIZZLE_TYPE1& b) \
206  { \
207  return a OPERAND b(); \
208  }
209 
210 //
211 // Macro for wrapping a function taking one argument (e.g. abs())
212 //
213 #define GLM_SWIZZLE_FUNCTION_1_ARGS(RETURN_TYPE,FUNCTION) \
214  GLM_SWIZZLE_TEMPLATE1 \
215  GLM_FUNC_QUALIFIER typename GLM_SWIZZLE_TYPE1::RETURN_TYPE FUNCTION(const GLM_SWIZZLE_TYPE1& a) \
216  { \
217  return FUNCTION(a()); \
218  }
219 
220 //
221 // Macro for wrapping a function taking two vector arguments (e.g. dot()).
222 //
223 #define GLM_SWIZZLE_FUNCTION_2_ARGS(RETURN_TYPE,FUNCTION) \
224  GLM_SWIZZLE_TEMPLATE2 \
225  GLM_FUNC_QUALIFIER typename GLM_SWIZZLE_TYPE1::RETURN_TYPE FUNCTION(const GLM_SWIZZLE_TYPE1& a, const GLM_SWIZZLE_TYPE2& b) \
226  { \
227  return FUNCTION(a(), b()); \
228  } \
229  GLM_SWIZZLE_TEMPLATE1 \
230  GLM_FUNC_QUALIFIER typename GLM_SWIZZLE_TYPE1::RETURN_TYPE FUNCTION(const GLM_SWIZZLE_TYPE1& a, const GLM_SWIZZLE_TYPE1& b) \
231  { \
232  return FUNCTION(a(), b()); \
233  } \
234  GLM_SWIZZLE_TEMPLATE1 \
235  GLM_FUNC_QUALIFIER typename GLM_SWIZZLE_TYPE1::RETURN_TYPE FUNCTION(const GLM_SWIZZLE_TYPE1& a, const vec<N, T, Q>& b) \
236  { \
237  return FUNCTION(a(), b); \
238  } \
239  GLM_SWIZZLE_TEMPLATE1 \
240  GLM_FUNC_QUALIFIER typename GLM_SWIZZLE_TYPE1::RETURN_TYPE FUNCTION(const vec<N, T, Q>& a, const GLM_SWIZZLE_TYPE1& b) \
241  { \
242  return FUNCTION(a, b()); \
243  }
244 
245 //
246 // Macro for wrapping a function taking two vec arguments followed by a scalar (e.g. mix()).
247 //
248 #define GLM_SWIZZLE_FUNCTION_2_ARGS_SCALAR(RETURN_TYPE,FUNCTION) \
249  GLM_SWIZZLE_TEMPLATE2 \
250  GLM_FUNC_QUALIFIER typename GLM_SWIZZLE_TYPE1::RETURN_TYPE FUNCTION(const GLM_SWIZZLE_TYPE1& a, const GLM_SWIZZLE_TYPE2& b, const T& c) \
251  { \
252  return FUNCTION(a(), b(), c); \
253  } \
254  GLM_SWIZZLE_TEMPLATE1 \
255  GLM_FUNC_QUALIFIER typename GLM_SWIZZLE_TYPE1::RETURN_TYPE FUNCTION(const GLM_SWIZZLE_TYPE1& a, const GLM_SWIZZLE_TYPE1& b, const T& c) \
256  { \
257  return FUNCTION(a(), b(), c); \
258  } \
259  GLM_SWIZZLE_TEMPLATE1 \
260  GLM_FUNC_QUALIFIER typename GLM_SWIZZLE_TYPE1::RETURN_TYPE FUNCTION(const GLM_SWIZZLE_TYPE1& a, const vec<N, T, Q>& b, const T& c)\
261  { \
262  return FUNCTION(a(), b, c); \
263  } \
264  GLM_SWIZZLE_TEMPLATE1 \
265  GLM_FUNC_QUALIFIER typename GLM_SWIZZLE_TYPE1::RETURN_TYPE FUNCTION(const vec<N, T, Q>& a, const GLM_SWIZZLE_TYPE1& b, const T& c) \
266  { \
267  return FUNCTION(a, b(), c); \
268  }
269 
270 }//namespace detail
271 }//namespace glm
272 
273 namespace glm
274 {
275  namespace detail
276  {
277  GLM_SWIZZLE_SCALAR_BINARY_OPERATOR_IMPLEMENTATION(-)
278  GLM_SWIZZLE_SCALAR_BINARY_OPERATOR_IMPLEMENTATION(*)
279  GLM_SWIZZLE_VECTOR_BINARY_OPERATOR_IMPLEMENTATION(+)
280  GLM_SWIZZLE_VECTOR_BINARY_OPERATOR_IMPLEMENTATION(-)
281  GLM_SWIZZLE_VECTOR_BINARY_OPERATOR_IMPLEMENTATION(*)
282  GLM_SWIZZLE_VECTOR_BINARY_OPERATOR_IMPLEMENTATION(/)
283  }
284 
285  //
286  // Swizzles are distinct types from the unswizzled type. The below macros will
287  // provide template specializations for the swizzle types for the given functions
288  // so that the compiler does not have any ambiguity in choosing how to handle
289  // the function.
290  //
291  // The alternative is to use the operator()() when calling the function in order
292  // to explicitly convert the swizzled type to the unswizzled type.
293  //
294 
295  //GLM_SWIZZLE_FUNCTION_1_ARGS(vec_type, abs);
296  //GLM_SWIZZLE_FUNCTION_1_ARGS(vec_type, acos);
297  //GLM_SWIZZLE_FUNCTION_1_ARGS(vec_type, acosh);
298  //GLM_SWIZZLE_FUNCTION_1_ARGS(vec_type, all);
299  //GLM_SWIZZLE_FUNCTION_1_ARGS(vec_type, any);
300 
301  //GLM_SWIZZLE_FUNCTION_2_ARGS(value_type, dot);
302  //GLM_SWIZZLE_FUNCTION_2_ARGS(vec_type, cross);
303  //GLM_SWIZZLE_FUNCTION_2_ARGS(vec_type, step);
304  //GLM_SWIZZLE_FUNCTION_2_ARGS_SCALAR(vec_type, mix);
305 }
306 
307 #define GLM_SWIZZLE2_2_MEMBERS(T, Q, E0,E1) \
308  struct { detail::_swizzle<2, T, Q, 0,0,-1,-2> E0 ## E0; }; \
309  struct { detail::_swizzle<2, T, Q, 0,1,-1,-2> E0 ## E1; }; \
310  struct { detail::_swizzle<2, T, Q, 1,0,-1,-2> E1 ## E0; }; \
311  struct { detail::_swizzle<2, T, Q, 1,1,-1,-2> E1 ## E1; };
312 
313 #define GLM_SWIZZLE2_3_MEMBERS(T, Q, E0,E1) \
314  struct { detail::_swizzle<3,T, Q, 0,0,0,-1> E0 ## E0 ## E0; }; \
315  struct { detail::_swizzle<3,T, Q, 0,0,1,-1> E0 ## E0 ## E1; }; \
316  struct { detail::_swizzle<3,T, Q, 0,1,0,-1> E0 ## E1 ## E0; }; \
317  struct { detail::_swizzle<3,T, Q, 0,1,1,-1> E0 ## E1 ## E1; }; \
318  struct { detail::_swizzle<3,T, Q, 1,0,0,-1> E1 ## E0 ## E0; }; \
319  struct { detail::_swizzle<3,T, Q, 1,0,1,-1> E1 ## E0 ## E1; }; \
320  struct { detail::_swizzle<3,T, Q, 1,1,0,-1> E1 ## E1 ## E0; }; \
321  struct { detail::_swizzle<3,T, Q, 1,1,1,-1> E1 ## E1 ## E1; };
322 
323 #define GLM_SWIZZLE2_4_MEMBERS(T, Q, E0,E1) \
324  struct { detail::_swizzle<4,T, Q, 0,0,0,0> E0 ## E0 ## E0 ## E0; }; \
325  struct { detail::_swizzle<4,T, Q, 0,0,0,1> E0 ## E0 ## E0 ## E1; }; \
326  struct { detail::_swizzle<4,T, Q, 0,0,1,0> E0 ## E0 ## E1 ## E0; }; \
327  struct { detail::_swizzle<4,T, Q, 0,0,1,1> E0 ## E0 ## E1 ## E1; }; \
328  struct { detail::_swizzle<4,T, Q, 0,1,0,0> E0 ## E1 ## E0 ## E0; }; \
329  struct { detail::_swizzle<4,T, Q, 0,1,0,1> E0 ## E1 ## E0 ## E1; }; \
330  struct { detail::_swizzle<4,T, Q, 0,1,1,0> E0 ## E1 ## E1 ## E0; }; \
331  struct { detail::_swizzle<4,T, Q, 0,1,1,1> E0 ## E1 ## E1 ## E1; }; \
332  struct { detail::_swizzle<4,T, Q, 1,0,0,0> E1 ## E0 ## E0 ## E0; }; \
333  struct { detail::_swizzle<4,T, Q, 1,0,0,1> E1 ## E0 ## E0 ## E1; }; \
334  struct { detail::_swizzle<4,T, Q, 1,0,1,0> E1 ## E0 ## E1 ## E0; }; \
335  struct { detail::_swizzle<4,T, Q, 1,0,1,1> E1 ## E0 ## E1 ## E1; }; \
336  struct { detail::_swizzle<4,T, Q, 1,1,0,0> E1 ## E1 ## E0 ## E0; }; \
337  struct { detail::_swizzle<4,T, Q, 1,1,0,1> E1 ## E1 ## E0 ## E1; }; \
338  struct { detail::_swizzle<4,T, Q, 1,1,1,0> E1 ## E1 ## E1 ## E0; }; \
339  struct { detail::_swizzle<4,T, Q, 1,1,1,1> E1 ## E1 ## E1 ## E1; };
340 
341 #define GLM_SWIZZLE3_2_MEMBERS(T, Q, E0,E1,E2) \
342  struct { detail::_swizzle<2,T, Q, 0,0,-1,-2> E0 ## E0; }; \
343  struct { detail::_swizzle<2,T, Q, 0,1,-1,-2> E0 ## E1; }; \
344  struct { detail::_swizzle<2,T, Q, 0,2,-1,-2> E0 ## E2; }; \
345  struct { detail::_swizzle<2,T, Q, 1,0,-1,-2> E1 ## E0; }; \
346  struct { detail::_swizzle<2,T, Q, 1,1,-1,-2> E1 ## E1; }; \
347  struct { detail::_swizzle<2,T, Q, 1,2,-1,-2> E1 ## E2; }; \
348  struct { detail::_swizzle<2,T, Q, 2,0,-1,-2> E2 ## E0; }; \
349  struct { detail::_swizzle<2,T, Q, 2,1,-1,-2> E2 ## E1; }; \
350  struct { detail::_swizzle<2,T, Q, 2,2,-1,-2> E2 ## E2; };
351 
352 #define GLM_SWIZZLE3_3_MEMBERS(T, Q ,E0,E1,E2) \
353  struct { detail::_swizzle<3, T, Q, 0,0,0,-1> E0 ## E0 ## E0; }; \
354  struct { detail::_swizzle<3, T, Q, 0,0,1,-1> E0 ## E0 ## E1; }; \
355  struct { detail::_swizzle<3, T, Q, 0,0,2,-1> E0 ## E0 ## E2; }; \
356  struct { detail::_swizzle<3, T, Q, 0,1,0,-1> E0 ## E1 ## E0; }; \
357  struct { detail::_swizzle<3, T, Q, 0,1,1,-1> E0 ## E1 ## E1; }; \
358  struct { detail::_swizzle<3, T, Q, 0,1,2,-1> E0 ## E1 ## E2; }; \
359  struct { detail::_swizzle<3, T, Q, 0,2,0,-1> E0 ## E2 ## E0; }; \
360  struct { detail::_swizzle<3, T, Q, 0,2,1,-1> E0 ## E2 ## E1; }; \
361  struct { detail::_swizzle<3, T, Q, 0,2,2,-1> E0 ## E2 ## E2; }; \
362  struct { detail::_swizzle<3, T, Q, 1,0,0,-1> E1 ## E0 ## E0; }; \
363  struct { detail::_swizzle<3, T, Q, 1,0,1,-1> E1 ## E0 ## E1; }; \
364  struct { detail::_swizzle<3, T, Q, 1,0,2,-1> E1 ## E0 ## E2; }; \
365  struct { detail::_swizzle<3, T, Q, 1,1,0,-1> E1 ## E1 ## E0; }; \
366  struct { detail::_swizzle<3, T, Q, 1,1,1,-1> E1 ## E1 ## E1; }; \
367  struct { detail::_swizzle<3, T, Q, 1,1,2,-1> E1 ## E1 ## E2; }; \
368  struct { detail::_swizzle<3, T, Q, 1,2,0,-1> E1 ## E2 ## E0; }; \
369  struct { detail::_swizzle<3, T, Q, 1,2,1,-1> E1 ## E2 ## E1; }; \
370  struct { detail::_swizzle<3, T, Q, 1,2,2,-1> E1 ## E2 ## E2; }; \
371  struct { detail::_swizzle<3, T, Q, 2,0,0,-1> E2 ## E0 ## E0; }; \
372  struct { detail::_swizzle<3, T, Q, 2,0,1,-1> E2 ## E0 ## E1; }; \
373  struct { detail::_swizzle<3, T, Q, 2,0,2,-1> E2 ## E0 ## E2; }; \
374  struct { detail::_swizzle<3, T, Q, 2,1,0,-1> E2 ## E1 ## E0; }; \
375  struct { detail::_swizzle<3, T, Q, 2,1,1,-1> E2 ## E1 ## E1; }; \
376  struct { detail::_swizzle<3, T, Q, 2,1,2,-1> E2 ## E1 ## E2; }; \
377  struct { detail::_swizzle<3, T, Q, 2,2,0,-1> E2 ## E2 ## E0; }; \
378  struct { detail::_swizzle<3, T, Q, 2,2,1,-1> E2 ## E2 ## E1; }; \
379  struct { detail::_swizzle<3, T, Q, 2,2,2,-1> E2 ## E2 ## E2; };
380 
381 #define GLM_SWIZZLE3_4_MEMBERS(T, Q, E0,E1,E2) \
382  struct { detail::_swizzle<4,T, Q, 0,0,0,0> E0 ## E0 ## E0 ## E0; }; \
383  struct { detail::_swizzle<4,T, Q, 0,0,0,1> E0 ## E0 ## E0 ## E1; }; \
384  struct { detail::_swizzle<4,T, Q, 0,0,0,2> E0 ## E0 ## E0 ## E2; }; \
385  struct { detail::_swizzle<4,T, Q, 0,0,1,0> E0 ## E0 ## E1 ## E0; }; \
386  struct { detail::_swizzle<4,T, Q, 0,0,1,1> E0 ## E0 ## E1 ## E1; }; \
387  struct { detail::_swizzle<4,T, Q, 0,0,1,2> E0 ## E0 ## E1 ## E2; }; \
388  struct { detail::_swizzle<4,T, Q, 0,0,2,0> E0 ## E0 ## E2 ## E0; }; \
389  struct { detail::_swizzle<4,T, Q, 0,0,2,1> E0 ## E0 ## E2 ## E1; }; \
390  struct { detail::_swizzle<4,T, Q, 0,0,2,2> E0 ## E0 ## E2 ## E2; }; \
391  struct { detail::_swizzle<4,T, Q, 0,1,0,0> E0 ## E1 ## E0 ## E0; }; \
392  struct { detail::_swizzle<4,T, Q, 0,1,0,1> E0 ## E1 ## E0 ## E1; }; \
393  struct { detail::_swizzle<4,T, Q, 0,1,0,2> E0 ## E1 ## E0 ## E2; }; \
394  struct { detail::_swizzle<4,T, Q, 0,1,1,0> E0 ## E1 ## E1 ## E0; }; \
395  struct { detail::_swizzle<4,T, Q, 0,1,1,1> E0 ## E1 ## E1 ## E1; }; \
396  struct { detail::_swizzle<4,T, Q, 0,1,1,2> E0 ## E1 ## E1 ## E2; }; \
397  struct { detail::_swizzle<4,T, Q, 0,1,2,0> E0 ## E1 ## E2 ## E0; }; \
398  struct { detail::_swizzle<4,T, Q, 0,1,2,1> E0 ## E1 ## E2 ## E1; }; \
399  struct { detail::_swizzle<4,T, Q, 0,1,2,2> E0 ## E1 ## E2 ## E2; }; \
400  struct { detail::_swizzle<4,T, Q, 0,2,0,0> E0 ## E2 ## E0 ## E0; }; \
401  struct { detail::_swizzle<4,T, Q, 0,2,0,1> E0 ## E2 ## E0 ## E1; }; \
402  struct { detail::_swizzle<4,T, Q, 0,2,0,2> E0 ## E2 ## E0 ## E2; }; \
403  struct { detail::_swizzle<4,T, Q, 0,2,1,0> E0 ## E2 ## E1 ## E0; }; \
404  struct { detail::_swizzle<4,T, Q, 0,2,1,1> E0 ## E2 ## E1 ## E1; }; \
405  struct { detail::_swizzle<4,T, Q, 0,2,1,2> E0 ## E2 ## E1 ## E2; }; \
406  struct { detail::_swizzle<4,T, Q, 0,2,2,0> E0 ## E2 ## E2 ## E0; }; \
407  struct { detail::_swizzle<4,T, Q, 0,2,2,1> E0 ## E2 ## E2 ## E1; }; \
408  struct { detail::_swizzle<4,T, Q, 0,2,2,2> E0 ## E2 ## E2 ## E2; }; \
409  struct { detail::_swizzle<4,T, Q, 1,0,0,0> E1 ## E0 ## E0 ## E0; }; \
410  struct { detail::_swizzle<4,T, Q, 1,0,0,1> E1 ## E0 ## E0 ## E1; }; \
411  struct { detail::_swizzle<4,T, Q, 1,0,0,2> E1 ## E0 ## E0 ## E2; }; \
412  struct { detail::_swizzle<4,T, Q, 1,0,1,0> E1 ## E0 ## E1 ## E0; }; \
413  struct { detail::_swizzle<4,T, Q, 1,0,1,1> E1 ## E0 ## E1 ## E1; }; \
414  struct { detail::_swizzle<4,T, Q, 1,0,1,2> E1 ## E0 ## E1 ## E2; }; \
415  struct { detail::_swizzle<4,T, Q, 1,0,2,0> E1 ## E0 ## E2 ## E0; }; \
416  struct { detail::_swizzle<4,T, Q, 1,0,2,1> E1 ## E0 ## E2 ## E1; }; \
417  struct { detail::_swizzle<4,T, Q, 1,0,2,2> E1 ## E0 ## E2 ## E2; }; \
418  struct { detail::_swizzle<4,T, Q, 1,1,0,0> E1 ## E1 ## E0 ## E0; }; \
419  struct { detail::_swizzle<4,T, Q, 1,1,0,1> E1 ## E1 ## E0 ## E1; }; \
420  struct { detail::_swizzle<4,T, Q, 1,1,0,2> E1 ## E1 ## E0 ## E2; }; \
421  struct { detail::_swizzle<4,T, Q, 1,1,1,0> E1 ## E1 ## E1 ## E0; }; \
422  struct { detail::_swizzle<4,T, Q, 1,1,1,1> E1 ## E1 ## E1 ## E1; }; \
423  struct { detail::_swizzle<4,T, Q, 1,1,1,2> E1 ## E1 ## E1 ## E2; }; \
424  struct { detail::_swizzle<4,T, Q, 1,1,2,0> E1 ## E1 ## E2 ## E0; }; \
425  struct { detail::_swizzle<4,T, Q, 1,1,2,1> E1 ## E1 ## E2 ## E1; }; \
426  struct { detail::_swizzle<4,T, Q, 1,1,2,2> E1 ## E1 ## E2 ## E2; }; \
427  struct { detail::_swizzle<4,T, Q, 1,2,0,0> E1 ## E2 ## E0 ## E0; }; \
428  struct { detail::_swizzle<4,T, Q, 1,2,0,1> E1 ## E2 ## E0 ## E1; }; \
429  struct { detail::_swizzle<4,T, Q, 1,2,0,2> E1 ## E2 ## E0 ## E2; }; \
430  struct { detail::_swizzle<4,T, Q, 1,2,1,0> E1 ## E2 ## E1 ## E0; }; \
431  struct { detail::_swizzle<4,T, Q, 1,2,1,1> E1 ## E2 ## E1 ## E1; }; \
432  struct { detail::_swizzle<4,T, Q, 1,2,1,2> E1 ## E2 ## E1 ## E2; }; \
433  struct { detail::_swizzle<4,T, Q, 1,2,2,0> E1 ## E2 ## E2 ## E0; }; \
434  struct { detail::_swizzle<4,T, Q, 1,2,2,1> E1 ## E2 ## E2 ## E1; }; \
435  struct { detail::_swizzle<4,T, Q, 1,2,2,2> E1 ## E2 ## E2 ## E2; }; \
436  struct { detail::_swizzle<4,T, Q, 2,0,0,0> E2 ## E0 ## E0 ## E0; }; \
437  struct { detail::_swizzle<4,T, Q, 2,0,0,1> E2 ## E0 ## E0 ## E1; }; \
438  struct { detail::_swizzle<4,T, Q, 2,0,0,2> E2 ## E0 ## E0 ## E2; }; \
439  struct { detail::_swizzle<4,T, Q, 2,0,1,0> E2 ## E0 ## E1 ## E0; }; \
440  struct { detail::_swizzle<4,T, Q, 2,0,1,1> E2 ## E0 ## E1 ## E1; }; \
441  struct { detail::_swizzle<4,T, Q, 2,0,1,2> E2 ## E0 ## E1 ## E2; }; \
442  struct { detail::_swizzle<4,T, Q, 2,0,2,0> E2 ## E0 ## E2 ## E0; }; \
443  struct { detail::_swizzle<4,T, Q, 2,0,2,1> E2 ## E0 ## E2 ## E1; }; \
444  struct { detail::_swizzle<4,T, Q, 2,0,2,2> E2 ## E0 ## E2 ## E2; }; \
445  struct { detail::_swizzle<4,T, Q, 2,1,0,0> E2 ## E1 ## E0 ## E0; }; \
446  struct { detail::_swizzle<4,T, Q, 2,1,0,1> E2 ## E1 ## E0 ## E1; }; \
447  struct { detail::_swizzle<4,T, Q, 2,1,0,2> E2 ## E1 ## E0 ## E2; }; \
448  struct { detail::_swizzle<4,T, Q, 2,1,1,0> E2 ## E1 ## E1 ## E0; }; \
449  struct { detail::_swizzle<4,T, Q, 2,1,1,1> E2 ## E1 ## E1 ## E1; }; \
450  struct { detail::_swizzle<4,T, Q, 2,1,1,2> E2 ## E1 ## E1 ## E2; }; \
451  struct { detail::_swizzle<4,T, Q, 2,1,2,0> E2 ## E1 ## E2 ## E0; }; \
452  struct { detail::_swizzle<4,T, Q, 2,1,2,1> E2 ## E1 ## E2 ## E1; }; \
453  struct { detail::_swizzle<4,T, Q, 2,1,2,2> E2 ## E1 ## E2 ## E2; }; \
454  struct { detail::_swizzle<4,T, Q, 2,2,0,0> E2 ## E2 ## E0 ## E0; }; \
455  struct { detail::_swizzle<4,T, Q, 2,2,0,1> E2 ## E2 ## E0 ## E1; }; \
456  struct { detail::_swizzle<4,T, Q, 2,2,0,2> E2 ## E2 ## E0 ## E2; }; \
457  struct { detail::_swizzle<4,T, Q, 2,2,1,0> E2 ## E2 ## E1 ## E0; }; \
458  struct { detail::_swizzle<4,T, Q, 2,2,1,1> E2 ## E2 ## E1 ## E1; }; \
459  struct { detail::_swizzle<4,T, Q, 2,2,1,2> E2 ## E2 ## E1 ## E2; }; \
460  struct { detail::_swizzle<4,T, Q, 2,2,2,0> E2 ## E2 ## E2 ## E0; }; \
461  struct { detail::_swizzle<4,T, Q, 2,2,2,1> E2 ## E2 ## E2 ## E1; }; \
462  struct { detail::_swizzle<4,T, Q, 2,2,2,2> E2 ## E2 ## E2 ## E2; };
463 
464 #define GLM_SWIZZLE4_2_MEMBERS(T, Q, E0,E1,E2,E3) \
465  struct { detail::_swizzle<2,T, Q, 0,0,-1,-2> E0 ## E0; }; \
466  struct { detail::_swizzle<2,T, Q, 0,1,-1,-2> E0 ## E1; }; \
467  struct { detail::_swizzle<2,T, Q, 0,2,-1,-2> E0 ## E2; }; \
468  struct { detail::_swizzle<2,T, Q, 0,3,-1,-2> E0 ## E3; }; \
469  struct { detail::_swizzle<2,T, Q, 1,0,-1,-2> E1 ## E0; }; \
470  struct { detail::_swizzle<2,T, Q, 1,1,-1,-2> E1 ## E1; }; \
471  struct { detail::_swizzle<2,T, Q, 1,2,-1,-2> E1 ## E2; }; \
472  struct { detail::_swizzle<2,T, Q, 1,3,-1,-2> E1 ## E3; }; \
473  struct { detail::_swizzle<2,T, Q, 2,0,-1,-2> E2 ## E0; }; \
474  struct { detail::_swizzle<2,T, Q, 2,1,-1,-2> E2 ## E1; }; \
475  struct { detail::_swizzle<2,T, Q, 2,2,-1,-2> E2 ## E2; }; \
476  struct { detail::_swizzle<2,T, Q, 2,3,-1,-2> E2 ## E3; }; \
477  struct { detail::_swizzle<2,T, Q, 3,0,-1,-2> E3 ## E0; }; \
478  struct { detail::_swizzle<2,T, Q, 3,1,-1,-2> E3 ## E1; }; \
479  struct { detail::_swizzle<2,T, Q, 3,2,-1,-2> E3 ## E2; }; \
480  struct { detail::_swizzle<2,T, Q, 3,3,-1,-2> E3 ## E3; };
481 
482 #define GLM_SWIZZLE4_3_MEMBERS(T, Q, E0,E1,E2,E3) \
483  struct { detail::_swizzle<3, T, Q, 0,0,0,-1> E0 ## E0 ## E0; }; \
484  struct { detail::_swizzle<3, T, Q, 0,0,1,-1> E0 ## E0 ## E1; }; \
485  struct { detail::_swizzle<3, T, Q, 0,0,2,-1> E0 ## E0 ## E2; }; \
486  struct { detail::_swizzle<3, T, Q, 0,0,3,-1> E0 ## E0 ## E3; }; \
487  struct { detail::_swizzle<3, T, Q, 0,1,0,-1> E0 ## E1 ## E0; }; \
488  struct { detail::_swizzle<3, T, Q, 0,1,1,-1> E0 ## E1 ## E1; }; \
489  struct { detail::_swizzle<3, T, Q, 0,1,2,-1> E0 ## E1 ## E2; }; \
490  struct { detail::_swizzle<3, T, Q, 0,1,3,-1> E0 ## E1 ## E3; }; \
491  struct { detail::_swizzle<3, T, Q, 0,2,0,-1> E0 ## E2 ## E0; }; \
492  struct { detail::_swizzle<3, T, Q, 0,2,1,-1> E0 ## E2 ## E1; }; \
493  struct { detail::_swizzle<3, T, Q, 0,2,2,-1> E0 ## E2 ## E2; }; \
494  struct { detail::_swizzle<3, T, Q, 0,2,3,-1> E0 ## E2 ## E3; }; \
495  struct { detail::_swizzle<3, T, Q, 0,3,0,-1> E0 ## E3 ## E0; }; \
496  struct { detail::_swizzle<3, T, Q, 0,3,1,-1> E0 ## E3 ## E1; }; \
497  struct { detail::_swizzle<3, T, Q, 0,3,2,-1> E0 ## E3 ## E2; }; \
498  struct { detail::_swizzle<3, T, Q, 0,3,3,-1> E0 ## E3 ## E3; }; \
499  struct { detail::_swizzle<3, T, Q, 1,0,0,-1> E1 ## E0 ## E0; }; \
500  struct { detail::_swizzle<3, T, Q, 1,0,1,-1> E1 ## E0 ## E1; }; \
501  struct { detail::_swizzle<3, T, Q, 1,0,2,-1> E1 ## E0 ## E2; }; \
502  struct { detail::_swizzle<3, T, Q, 1,0,3,-1> E1 ## E0 ## E3; }; \
503  struct { detail::_swizzle<3, T, Q, 1,1,0,-1> E1 ## E1 ## E0; }; \
504  struct { detail::_swizzle<3, T, Q, 1,1,1,-1> E1 ## E1 ## E1; }; \
505  struct { detail::_swizzle<3, T, Q, 1,1,2,-1> E1 ## E1 ## E2; }; \
506  struct { detail::_swizzle<3, T, Q, 1,1,3,-1> E1 ## E1 ## E3; }; \
507  struct { detail::_swizzle<3, T, Q, 1,2,0,-1> E1 ## E2 ## E0; }; \
508  struct { detail::_swizzle<3, T, Q, 1,2,1,-1> E1 ## E2 ## E1; }; \
509  struct { detail::_swizzle<3, T, Q, 1,2,2,-1> E1 ## E2 ## E2; }; \
510  struct { detail::_swizzle<3, T, Q, 1,2,3,-1> E1 ## E2 ## E3; }; \
511  struct { detail::_swizzle<3, T, Q, 1,3,0,-1> E1 ## E3 ## E0; }; \
512  struct { detail::_swizzle<3, T, Q, 1,3,1,-1> E1 ## E3 ## E1; }; \
513  struct { detail::_swizzle<3, T, Q, 1,3,2,-1> E1 ## E3 ## E2; }; \
514  struct { detail::_swizzle<3, T, Q, 1,3,3,-1> E1 ## E3 ## E3; }; \
515  struct { detail::_swizzle<3, T, Q, 2,0,0,-1> E2 ## E0 ## E0; }; \
516  struct { detail::_swizzle<3, T, Q, 2,0,1,-1> E2 ## E0 ## E1; }; \
517  struct { detail::_swizzle<3, T, Q, 2,0,2,-1> E2 ## E0 ## E2; }; \
518  struct { detail::_swizzle<3, T, Q, 2,0,3,-1> E2 ## E0 ## E3; }; \
519  struct { detail::_swizzle<3, T, Q, 2,1,0,-1> E2 ## E1 ## E0; }; \
520  struct { detail::_swizzle<3, T, Q, 2,1,1,-1> E2 ## E1 ## E1; }; \
521  struct { detail::_swizzle<3, T, Q, 2,1,2,-1> E2 ## E1 ## E2; }; \
522  struct { detail::_swizzle<3, T, Q, 2,1,3,-1> E2 ## E1 ## E3; }; \
523  struct { detail::_swizzle<3, T, Q, 2,2,0,-1> E2 ## E2 ## E0; }; \
524  struct { detail::_swizzle<3, T, Q, 2,2,1,-1> E2 ## E2 ## E1; }; \
525  struct { detail::_swizzle<3, T, Q, 2,2,2,-1> E2 ## E2 ## E2; }; \
526  struct { detail::_swizzle<3, T, Q, 2,2,3,-1> E2 ## E2 ## E3; }; \
527  struct { detail::_swizzle<3, T, Q, 2,3,0,-1> E2 ## E3 ## E0; }; \
528  struct { detail::_swizzle<3, T, Q, 2,3,1,-1> E2 ## E3 ## E1; }; \
529  struct { detail::_swizzle<3, T, Q, 2,3,2,-1> E2 ## E3 ## E2; }; \
530  struct { detail::_swizzle<3, T, Q, 2,3,3,-1> E2 ## E3 ## E3; }; \
531  struct { detail::_swizzle<3, T, Q, 3,0,0,-1> E3 ## E0 ## E0; }; \
532  struct { detail::_swizzle<3, T, Q, 3,0,1,-1> E3 ## E0 ## E1; }; \
533  struct { detail::_swizzle<3, T, Q, 3,0,2,-1> E3 ## E0 ## E2; }; \
534  struct { detail::_swizzle<3, T, Q, 3,0,3,-1> E3 ## E0 ## E3; }; \
535  struct { detail::_swizzle<3, T, Q, 3,1,0,-1> E3 ## E1 ## E0; }; \
536  struct { detail::_swizzle<3, T, Q, 3,1,1,-1> E3 ## E1 ## E1; }; \
537  struct { detail::_swizzle<3, T, Q, 3,1,2,-1> E3 ## E1 ## E2; }; \
538  struct { detail::_swizzle<3, T, Q, 3,1,3,-1> E3 ## E1 ## E3; }; \
539  struct { detail::_swizzle<3, T, Q, 3,2,0,-1> E3 ## E2 ## E0; }; \
540  struct { detail::_swizzle<3, T, Q, 3,2,1,-1> E3 ## E2 ## E1; }; \
541  struct { detail::_swizzle<3, T, Q, 3,2,2,-1> E3 ## E2 ## E2; }; \
542  struct { detail::_swizzle<3, T, Q, 3,2,3,-1> E3 ## E2 ## E3; }; \
543  struct { detail::_swizzle<3, T, Q, 3,3,0,-1> E3 ## E3 ## E0; }; \
544  struct { detail::_swizzle<3, T, Q, 3,3,1,-1> E3 ## E3 ## E1; }; \
545  struct { detail::_swizzle<3, T, Q, 3,3,2,-1> E3 ## E3 ## E2; }; \
546  struct { detail::_swizzle<3, T, Q, 3,3,3,-1> E3 ## E3 ## E3; };
547 
548 #define GLM_SWIZZLE4_4_MEMBERS(T, Q, E0,E1,E2,E3) \
549  struct { detail::_swizzle<4, T, Q, 0,0,0,0> E0 ## E0 ## E0 ## E0; }; \
550  struct { detail::_swizzle<4, T, Q, 0,0,0,1> E0 ## E0 ## E0 ## E1; }; \
551  struct { detail::_swizzle<4, T, Q, 0,0,0,2> E0 ## E0 ## E0 ## E2; }; \
552  struct { detail::_swizzle<4, T, Q, 0,0,0,3> E0 ## E0 ## E0 ## E3; }; \
553  struct { detail::_swizzle<4, T, Q, 0,0,1,0> E0 ## E0 ## E1 ## E0; }; \
554  struct { detail::_swizzle<4, T, Q, 0,0,1,1> E0 ## E0 ## E1 ## E1; }; \
555  struct { detail::_swizzle<4, T, Q, 0,0,1,2> E0 ## E0 ## E1 ## E2; }; \
556  struct { detail::_swizzle<4, T, Q, 0,0,1,3> E0 ## E0 ## E1 ## E3; }; \
557  struct { detail::_swizzle<4, T, Q, 0,0,2,0> E0 ## E0 ## E2 ## E0; }; \
558  struct { detail::_swizzle<4, T, Q, 0,0,2,1> E0 ## E0 ## E2 ## E1; }; \
559  struct { detail::_swizzle<4, T, Q, 0,0,2,2> E0 ## E0 ## E2 ## E2; }; \
560  struct { detail::_swizzle<4, T, Q, 0,0,2,3> E0 ## E0 ## E2 ## E3; }; \
561  struct { detail::_swizzle<4, T, Q, 0,0,3,0> E0 ## E0 ## E3 ## E0; }; \
562  struct { detail::_swizzle<4, T, Q, 0,0,3,1> E0 ## E0 ## E3 ## E1; }; \
563  struct { detail::_swizzle<4, T, Q, 0,0,3,2> E0 ## E0 ## E3 ## E2; }; \
564  struct { detail::_swizzle<4, T, Q, 0,0,3,3> E0 ## E0 ## E3 ## E3; }; \
565  struct { detail::_swizzle<4, T, Q, 0,1,0,0> E0 ## E1 ## E0 ## E0; }; \
566  struct { detail::_swizzle<4, T, Q, 0,1,0,1> E0 ## E1 ## E0 ## E1; }; \
567  struct { detail::_swizzle<4, T, Q, 0,1,0,2> E0 ## E1 ## E0 ## E2; }; \
568  struct { detail::_swizzle<4, T, Q, 0,1,0,3> E0 ## E1 ## E0 ## E3; }; \
569  struct { detail::_swizzle<4, T, Q, 0,1,1,0> E0 ## E1 ## E1 ## E0; }; \
570  struct { detail::_swizzle<4, T, Q, 0,1,1,1> E0 ## E1 ## E1 ## E1; }; \
571  struct { detail::_swizzle<4, T, Q, 0,1,1,2> E0 ## E1 ## E1 ## E2; }; \
572  struct { detail::_swizzle<4, T, Q, 0,1,1,3> E0 ## E1 ## E1 ## E3; }; \
573  struct { detail::_swizzle<4, T, Q, 0,1,2,0> E0 ## E1 ## E2 ## E0; }; \
574  struct { detail::_swizzle<4, T, Q, 0,1,2,1> E0 ## E1 ## E2 ## E1; }; \
575  struct { detail::_swizzle<4, T, Q, 0,1,2,2> E0 ## E1 ## E2 ## E2; }; \
576  struct { detail::_swizzle<4, T, Q, 0,1,2,3> E0 ## E1 ## E2 ## E3; }; \
577  struct { detail::_swizzle<4, T, Q, 0,1,3,0> E0 ## E1 ## E3 ## E0; }; \
578  struct { detail::_swizzle<4, T, Q, 0,1,3,1> E0 ## E1 ## E3 ## E1; }; \
579  struct { detail::_swizzle<4, T, Q, 0,1,3,2> E0 ## E1 ## E3 ## E2; }; \
580  struct { detail::_swizzle<4, T, Q, 0,1,3,3> E0 ## E1 ## E3 ## E3; }; \
581  struct { detail::_swizzle<4, T, Q, 0,2,0,0> E0 ## E2 ## E0 ## E0; }; \
582  struct { detail::_swizzle<4, T, Q, 0,2,0,1> E0 ## E2 ## E0 ## E1; }; \
583  struct { detail::_swizzle<4, T, Q, 0,2,0,2> E0 ## E2 ## E0 ## E2; }; \
584  struct { detail::_swizzle<4, T, Q, 0,2,0,3> E0 ## E2 ## E0 ## E3; }; \
585  struct { detail::_swizzle<4, T, Q, 0,2,1,0> E0 ## E2 ## E1 ## E0; }; \
586  struct { detail::_swizzle<4, T, Q, 0,2,1,1> E0 ## E2 ## E1 ## E1; }; \
587  struct { detail::_swizzle<4, T, Q, 0,2,1,2> E0 ## E2 ## E1 ## E2; }; \
588  struct { detail::_swizzle<4, T, Q, 0,2,1,3> E0 ## E2 ## E1 ## E3; }; \
589  struct { detail::_swizzle<4, T, Q, 0,2,2,0> E0 ## E2 ## E2 ## E0; }; \
590  struct { detail::_swizzle<4, T, Q, 0,2,2,1> E0 ## E2 ## E2 ## E1; }; \
591  struct { detail::_swizzle<4, T, Q, 0,2,2,2> E0 ## E2 ## E2 ## E2; }; \
592  struct { detail::_swizzle<4, T, Q, 0,2,2,3> E0 ## E2 ## E2 ## E3; }; \
593  struct { detail::_swizzle<4, T, Q, 0,2,3,0> E0 ## E2 ## E3 ## E0; }; \
594  struct { detail::_swizzle<4, T, Q, 0,2,3,1> E0 ## E2 ## E3 ## E1; }; \
595  struct { detail::_swizzle<4, T, Q, 0,2,3,2> E0 ## E2 ## E3 ## E2; }; \
596  struct { detail::_swizzle<4, T, Q, 0,2,3,3> E0 ## E2 ## E3 ## E3; }; \
597  struct { detail::_swizzle<4, T, Q, 0,3,0,0> E0 ## E3 ## E0 ## E0; }; \
598  struct { detail::_swizzle<4, T, Q, 0,3,0,1> E0 ## E3 ## E0 ## E1; }; \
599  struct { detail::_swizzle<4, T, Q, 0,3,0,2> E0 ## E3 ## E0 ## E2; }; \
600  struct { detail::_swizzle<4, T, Q, 0,3,0,3> E0 ## E3 ## E0 ## E3; }; \
601  struct { detail::_swizzle<4, T, Q, 0,3,1,0> E0 ## E3 ## E1 ## E0; }; \
602  struct { detail::_swizzle<4, T, Q, 0,3,1,1> E0 ## E3 ## E1 ## E1; }; \
603  struct { detail::_swizzle<4, T, Q, 0,3,1,2> E0 ## E3 ## E1 ## E2; }; \
604  struct { detail::_swizzle<4, T, Q, 0,3,1,3> E0 ## E3 ## E1 ## E3; }; \
605  struct { detail::_swizzle<4, T, Q, 0,3,2,0> E0 ## E3 ## E2 ## E0; }; \
606  struct { detail::_swizzle<4, T, Q, 0,3,2,1> E0 ## E3 ## E2 ## E1; }; \
607  struct { detail::_swizzle<4, T, Q, 0,3,2,2> E0 ## E3 ## E2 ## E2; }; \
608  struct { detail::_swizzle<4, T, Q, 0,3,2,3> E0 ## E3 ## E2 ## E3; }; \
609  struct { detail::_swizzle<4, T, Q, 0,3,3,0> E0 ## E3 ## E3 ## E0; }; \
610  struct { detail::_swizzle<4, T, Q, 0,3,3,1> E0 ## E3 ## E3 ## E1; }; \
611  struct { detail::_swizzle<4, T, Q, 0,3,3,2> E0 ## E3 ## E3 ## E2; }; \
612  struct { detail::_swizzle<4, T, Q, 0,3,3,3> E0 ## E3 ## E3 ## E3; }; \
613  struct { detail::_swizzle<4, T, Q, 1,0,0,0> E1 ## E0 ## E0 ## E0; }; \
614  struct { detail::_swizzle<4, T, Q, 1,0,0,1> E1 ## E0 ## E0 ## E1; }; \
615  struct { detail::_swizzle<4, T, Q, 1,0,0,2> E1 ## E0 ## E0 ## E2; }; \
616  struct { detail::_swizzle<4, T, Q, 1,0,0,3> E1 ## E0 ## E0 ## E3; }; \
617  struct { detail::_swizzle<4, T, Q, 1,0,1,0> E1 ## E0 ## E1 ## E0; }; \
618  struct { detail::_swizzle<4, T, Q, 1,0,1,1> E1 ## E0 ## E1 ## E1; }; \
619  struct { detail::_swizzle<4, T, Q, 1,0,1,2> E1 ## E0 ## E1 ## E2; }; \
620  struct { detail::_swizzle<4, T, Q, 1,0,1,3> E1 ## E0 ## E1 ## E3; }; \
621  struct { detail::_swizzle<4, T, Q, 1,0,2,0> E1 ## E0 ## E2 ## E0; }; \
622  struct { detail::_swizzle<4, T, Q, 1,0,2,1> E1 ## E0 ## E2 ## E1; }; \
623  struct { detail::_swizzle<4, T, Q, 1,0,2,2> E1 ## E0 ## E2 ## E2; }; \
624  struct { detail::_swizzle<4, T, Q, 1,0,2,3> E1 ## E0 ## E2 ## E3; }; \
625  struct { detail::_swizzle<4, T, Q, 1,0,3,0> E1 ## E0 ## E3 ## E0; }; \
626  struct { detail::_swizzle<4, T, Q, 1,0,3,1> E1 ## E0 ## E3 ## E1; }; \
627  struct { detail::_swizzle<4, T, Q, 1,0,3,2> E1 ## E0 ## E3 ## E2; }; \
628  struct { detail::_swizzle<4, T, Q, 1,0,3,3> E1 ## E0 ## E3 ## E3; }; \
629  struct { detail::_swizzle<4, T, Q, 1,1,0,0> E1 ## E1 ## E0 ## E0; }; \
630  struct { detail::_swizzle<4, T, Q, 1,1,0,1> E1 ## E1 ## E0 ## E1; }; \
631  struct { detail::_swizzle<4, T, Q, 1,1,0,2> E1 ## E1 ## E0 ## E2; }; \
632  struct { detail::_swizzle<4, T, Q, 1,1,0,3> E1 ## E1 ## E0 ## E3; }; \
633  struct { detail::_swizzle<4, T, Q, 1,1,1,0> E1 ## E1 ## E1 ## E0; }; \
634  struct { detail::_swizzle<4, T, Q, 1,1,1,1> E1 ## E1 ## E1 ## E1; }; \
635  struct { detail::_swizzle<4, T, Q, 1,1,1,2> E1 ## E1 ## E1 ## E2; }; \
636  struct { detail::_swizzle<4, T, Q, 1,1,1,3> E1 ## E1 ## E1 ## E3; }; \
637  struct { detail::_swizzle<4, T, Q, 1,1,2,0> E1 ## E1 ## E2 ## E0; }; \
638  struct { detail::_swizzle<4, T, Q, 1,1,2,1> E1 ## E1 ## E2 ## E1; }; \
639  struct { detail::_swizzle<4, T, Q, 1,1,2,2> E1 ## E1 ## E2 ## E2; }; \
640  struct { detail::_swizzle<4, T, Q, 1,1,2,3> E1 ## E1 ## E2 ## E3; }; \
641  struct { detail::_swizzle<4, T, Q, 1,1,3,0> E1 ## E1 ## E3 ## E0; }; \
642  struct { detail::_swizzle<4, T, Q, 1,1,3,1> E1 ## E1 ## E3 ## E1; }; \
643  struct { detail::_swizzle<4, T, Q, 1,1,3,2> E1 ## E1 ## E3 ## E2; }; \
644  struct { detail::_swizzle<4, T, Q, 1,1,3,3> E1 ## E1 ## E3 ## E3; }; \
645  struct { detail::_swizzle<4, T, Q, 1,2,0,0> E1 ## E2 ## E0 ## E0; }; \
646  struct { detail::_swizzle<4, T, Q, 1,2,0,1> E1 ## E2 ## E0 ## E1; }; \
647  struct { detail::_swizzle<4, T, Q, 1,2,0,2> E1 ## E2 ## E0 ## E2; }; \
648  struct { detail::_swizzle<4, T, Q, 1,2,0,3> E1 ## E2 ## E0 ## E3; }; \
649  struct { detail::_swizzle<4, T, Q, 1,2,1,0> E1 ## E2 ## E1 ## E0; }; \
650  struct { detail::_swizzle<4, T, Q, 1,2,1,1> E1 ## E2 ## E1 ## E1; }; \
651  struct { detail::_swizzle<4, T, Q, 1,2,1,2> E1 ## E2 ## E1 ## E2; }; \
652  struct { detail::_swizzle<4, T, Q, 1,2,1,3> E1 ## E2 ## E1 ## E3; }; \
653  struct { detail::_swizzle<4, T, Q, 1,2,2,0> E1 ## E2 ## E2 ## E0; }; \
654  struct { detail::_swizzle<4, T, Q, 1,2,2,1> E1 ## E2 ## E2 ## E1; }; \
655  struct { detail::_swizzle<4, T, Q, 1,2,2,2> E1 ## E2 ## E2 ## E2; }; \
656  struct { detail::_swizzle<4, T, Q, 1,2,2,3> E1 ## E2 ## E2 ## E3; }; \
657  struct { detail::_swizzle<4, T, Q, 1,2,3,0> E1 ## E2 ## E3 ## E0; }; \
658  struct { detail::_swizzle<4, T, Q, 1,2,3,1> E1 ## E2 ## E3 ## E1; }; \
659  struct { detail::_swizzle<4, T, Q, 1,2,3,2> E1 ## E2 ## E3 ## E2; }; \
660  struct { detail::_swizzle<4, T, Q, 1,2,3,3> E1 ## E2 ## E3 ## E3; }; \
661  struct { detail::_swizzle<4, T, Q, 1,3,0,0> E1 ## E3 ## E0 ## E0; }; \
662  struct { detail::_swizzle<4, T, Q, 1,3,0,1> E1 ## E3 ## E0 ## E1; }; \
663  struct { detail::_swizzle<4, T, Q, 1,3,0,2> E1 ## E3 ## E0 ## E2; }; \
664  struct { detail::_swizzle<4, T, Q, 1,3,0,3> E1 ## E3 ## E0 ## E3; }; \
665  struct { detail::_swizzle<4, T, Q, 1,3,1,0> E1 ## E3 ## E1 ## E0; }; \
666  struct { detail::_swizzle<4, T, Q, 1,3,1,1> E1 ## E3 ## E1 ## E1; }; \
667  struct { detail::_swizzle<4, T, Q, 1,3,1,2> E1 ## E3 ## E1 ## E2; }; \
668  struct { detail::_swizzle<4, T, Q, 1,3,1,3> E1 ## E3 ## E1 ## E3; }; \
669  struct { detail::_swizzle<4, T, Q, 1,3,2,0> E1 ## E3 ## E2 ## E0; }; \
670  struct { detail::_swizzle<4, T, Q, 1,3,2,1> E1 ## E3 ## E2 ## E1; }; \
671  struct { detail::_swizzle<4, T, Q, 1,3,2,2> E1 ## E3 ## E2 ## E2; }; \
672  struct { detail::_swizzle<4, T, Q, 1,3,2,3> E1 ## E3 ## E2 ## E3; }; \
673  struct { detail::_swizzle<4, T, Q, 1,3,3,0> E1 ## E3 ## E3 ## E0; }; \
674  struct { detail::_swizzle<4, T, Q, 1,3,3,1> E1 ## E3 ## E3 ## E1; }; \
675  struct { detail::_swizzle<4, T, Q, 1,3,3,2> E1 ## E3 ## E3 ## E2; }; \
676  struct { detail::_swizzle<4, T, Q, 1,3,3,3> E1 ## E3 ## E3 ## E3; }; \
677  struct { detail::_swizzle<4, T, Q, 2,0,0,0> E2 ## E0 ## E0 ## E0; }; \
678  struct { detail::_swizzle<4, T, Q, 2,0,0,1> E2 ## E0 ## E0 ## E1; }; \
679  struct { detail::_swizzle<4, T, Q, 2,0,0,2> E2 ## E0 ## E0 ## E2; }; \
680  struct { detail::_swizzle<4, T, Q, 2,0,0,3> E2 ## E0 ## E0 ## E3; }; \
681  struct { detail::_swizzle<4, T, Q, 2,0,1,0> E2 ## E0 ## E1 ## E0; }; \
682  struct { detail::_swizzle<4, T, Q, 2,0,1,1> E2 ## E0 ## E1 ## E1; }; \
683  struct { detail::_swizzle<4, T, Q, 2,0,1,2> E2 ## E0 ## E1 ## E2; }; \
684  struct { detail::_swizzle<4, T, Q, 2,0,1,3> E2 ## E0 ## E1 ## E3; }; \
685  struct { detail::_swizzle<4, T, Q, 2,0,2,0> E2 ## E0 ## E2 ## E0; }; \
686  struct { detail::_swizzle<4, T, Q, 2,0,2,1> E2 ## E0 ## E2 ## E1; }; \
687  struct { detail::_swizzle<4, T, Q, 2,0,2,2> E2 ## E0 ## E2 ## E2; }; \
688  struct { detail::_swizzle<4, T, Q, 2,0,2,3> E2 ## E0 ## E2 ## E3; }; \
689  struct { detail::_swizzle<4, T, Q, 2,0,3,0> E2 ## E0 ## E3 ## E0; }; \
690  struct { detail::_swizzle<4, T, Q, 2,0,3,1> E2 ## E0 ## E3 ## E1; }; \
691  struct { detail::_swizzle<4, T, Q, 2,0,3,2> E2 ## E0 ## E3 ## E2; }; \
692  struct { detail::_swizzle<4, T, Q, 2,0,3,3> E2 ## E0 ## E3 ## E3; }; \
693  struct { detail::_swizzle<4, T, Q, 2,1,0,0> E2 ## E1 ## E0 ## E0; }; \
694  struct { detail::_swizzle<4, T, Q, 2,1,0,1> E2 ## E1 ## E0 ## E1; }; \
695  struct { detail::_swizzle<4, T, Q, 2,1,0,2> E2 ## E1 ## E0 ## E2; }; \
696  struct { detail::_swizzle<4, T, Q, 2,1,0,3> E2 ## E1 ## E0 ## E3; }; \
697  struct { detail::_swizzle<4, T, Q, 2,1,1,0> E2 ## E1 ## E1 ## E0; }; \
698  struct { detail::_swizzle<4, T, Q, 2,1,1,1> E2 ## E1 ## E1 ## E1; }; \
699  struct { detail::_swizzle<4, T, Q, 2,1,1,2> E2 ## E1 ## E1 ## E2; }; \
700  struct { detail::_swizzle<4, T, Q, 2,1,1,3> E2 ## E1 ## E1 ## E3; }; \
701  struct { detail::_swizzle<4, T, Q, 2,1,2,0> E2 ## E1 ## E2 ## E0; }; \
702  struct { detail::_swizzle<4, T, Q, 2,1,2,1> E2 ## E1 ## E2 ## E1; }; \
703  struct { detail::_swizzle<4, T, Q, 2,1,2,2> E2 ## E1 ## E2 ## E2; }; \
704  struct { detail::_swizzle<4, T, Q, 2,1,2,3> E2 ## E1 ## E2 ## E3; }; \
705  struct { detail::_swizzle<4, T, Q, 2,1,3,0> E2 ## E1 ## E3 ## E0; }; \
706  struct { detail::_swizzle<4, T, Q, 2,1,3,1> E2 ## E1 ## E3 ## E1; }; \
707  struct { detail::_swizzle<4, T, Q, 2,1,3,2> E2 ## E1 ## E3 ## E2; }; \
708  struct { detail::_swizzle<4, T, Q, 2,1,3,3> E2 ## E1 ## E3 ## E3; }; \
709  struct { detail::_swizzle<4, T, Q, 2,2,0,0> E2 ## E2 ## E0 ## E0; }; \
710  struct { detail::_swizzle<4, T, Q, 2,2,0,1> E2 ## E2 ## E0 ## E1; }; \
711  struct { detail::_swizzle<4, T, Q, 2,2,0,2> E2 ## E2 ## E0 ## E2; }; \
712  struct { detail::_swizzle<4, T, Q, 2,2,0,3> E2 ## E2 ## E0 ## E3; }; \
713  struct { detail::_swizzle<4, T, Q, 2,2,1,0> E2 ## E2 ## E1 ## E0; }; \
714  struct { detail::_swizzle<4, T, Q, 2,2,1,1> E2 ## E2 ## E1 ## E1; }; \
715  struct { detail::_swizzle<4, T, Q, 2,2,1,2> E2 ## E2 ## E1 ## E2; }; \
716  struct { detail::_swizzle<4, T, Q, 2,2,1,3> E2 ## E2 ## E1 ## E3; }; \
717  struct { detail::_swizzle<4, T, Q, 2,2,2,0> E2 ## E2 ## E2 ## E0; }; \
718  struct { detail::_swizzle<4, T, Q, 2,2,2,1> E2 ## E2 ## E2 ## E1; }; \
719  struct { detail::_swizzle<4, T, Q, 2,2,2,2> E2 ## E2 ## E2 ## E2; }; \
720  struct { detail::_swizzle<4, T, Q, 2,2,2,3> E2 ## E2 ## E2 ## E3; }; \
721  struct { detail::_swizzle<4, T, Q, 2,2,3,0> E2 ## E2 ## E3 ## E0; }; \
722  struct { detail::_swizzle<4, T, Q, 2,2,3,1> E2 ## E2 ## E3 ## E1; }; \
723  struct { detail::_swizzle<4, T, Q, 2,2,3,2> E2 ## E2 ## E3 ## E2; }; \
724  struct { detail::_swizzle<4, T, Q, 2,2,3,3> E2 ## E2 ## E3 ## E3; }; \
725  struct { detail::_swizzle<4, T, Q, 2,3,0,0> E2 ## E3 ## E0 ## E0; }; \
726  struct { detail::_swizzle<4, T, Q, 2,3,0,1> E2 ## E3 ## E0 ## E1; }; \
727  struct { detail::_swizzle<4, T, Q, 2,3,0,2> E2 ## E3 ## E0 ## E2; }; \
728  struct { detail::_swizzle<4, T, Q, 2,3,0,3> E2 ## E3 ## E0 ## E3; }; \
729  struct { detail::_swizzle<4, T, Q, 2,3,1,0> E2 ## E3 ## E1 ## E0; }; \
730  struct { detail::_swizzle<4, T, Q, 2,3,1,1> E2 ## E3 ## E1 ## E1; }; \
731  struct { detail::_swizzle<4, T, Q, 2,3,1,2> E2 ## E3 ## E1 ## E2; }; \
732  struct { detail::_swizzle<4, T, Q, 2,3,1,3> E2 ## E3 ## E1 ## E3; }; \
733  struct { detail::_swizzle<4, T, Q, 2,3,2,0> E2 ## E3 ## E2 ## E0; }; \
734  struct { detail::_swizzle<4, T, Q, 2,3,2,1> E2 ## E3 ## E2 ## E1; }; \
735  struct { detail::_swizzle<4, T, Q, 2,3,2,2> E2 ## E3 ## E2 ## E2; }; \
736  struct { detail::_swizzle<4, T, Q, 2,3,2,3> E2 ## E3 ## E2 ## E3; }; \
737  struct { detail::_swizzle<4, T, Q, 2,3,3,0> E2 ## E3 ## E3 ## E0; }; \
738  struct { detail::_swizzle<4, T, Q, 2,3,3,1> E2 ## E3 ## E3 ## E1; }; \
739  struct { detail::_swizzle<4, T, Q, 2,3,3,2> E2 ## E3 ## E3 ## E2; }; \
740  struct { detail::_swizzle<4, T, Q, 2,3,3,3> E2 ## E3 ## E3 ## E3; }; \
741  struct { detail::_swizzle<4, T, Q, 3,0,0,0> E3 ## E0 ## E0 ## E0; }; \
742  struct { detail::_swizzle<4, T, Q, 3,0,0,1> E3 ## E0 ## E0 ## E1; }; \
743  struct { detail::_swizzle<4, T, Q, 3,0,0,2> E3 ## E0 ## E0 ## E2; }; \
744  struct { detail::_swizzle<4, T, Q, 3,0,0,3> E3 ## E0 ## E0 ## E3; }; \
745  struct { detail::_swizzle<4, T, Q, 3,0,1,0> E3 ## E0 ## E1 ## E0; }; \
746  struct { detail::_swizzle<4, T, Q, 3,0,1,1> E3 ## E0 ## E1 ## E1; }; \
747  struct { detail::_swizzle<4, T, Q, 3,0,1,2> E3 ## E0 ## E1 ## E2; }; \
748  struct { detail::_swizzle<4, T, Q, 3,0,1,3> E3 ## E0 ## E1 ## E3; }; \
749  struct { detail::_swizzle<4, T, Q, 3,0,2,0> E3 ## E0 ## E2 ## E0; }; \
750  struct { detail::_swizzle<4, T, Q, 3,0,2,1> E3 ## E0 ## E2 ## E1; }; \
751  struct { detail::_swizzle<4, T, Q, 3,0,2,2> E3 ## E0 ## E2 ## E2; }; \
752  struct { detail::_swizzle<4, T, Q, 3,0,2,3> E3 ## E0 ## E2 ## E3; }; \
753  struct { detail::_swizzle<4, T, Q, 3,0,3,0> E3 ## E0 ## E3 ## E0; }; \
754  struct { detail::_swizzle<4, T, Q, 3,0,3,1> E3 ## E0 ## E3 ## E1; }; \
755  struct { detail::_swizzle<4, T, Q, 3,0,3,2> E3 ## E0 ## E3 ## E2; }; \
756  struct { detail::_swizzle<4, T, Q, 3,0,3,3> E3 ## E0 ## E3 ## E3; }; \
757  struct { detail::_swizzle<4, T, Q, 3,1,0,0> E3 ## E1 ## E0 ## E0; }; \
758  struct { detail::_swizzle<4, T, Q, 3,1,0,1> E3 ## E1 ## E0 ## E1; }; \
759  struct { detail::_swizzle<4, T, Q, 3,1,0,2> E3 ## E1 ## E0 ## E2; }; \
760  struct { detail::_swizzle<4, T, Q, 3,1,0,3> E3 ## E1 ## E0 ## E3; }; \
761  struct { detail::_swizzle<4, T, Q, 3,1,1,0> E3 ## E1 ## E1 ## E0; }; \
762  struct { detail::_swizzle<4, T, Q, 3,1,1,1> E3 ## E1 ## E1 ## E1; }; \
763  struct { detail::_swizzle<4, T, Q, 3,1,1,2> E3 ## E1 ## E1 ## E2; }; \
764  struct { detail::_swizzle<4, T, Q, 3,1,1,3> E3 ## E1 ## E1 ## E3; }; \
765  struct { detail::_swizzle<4, T, Q, 3,1,2,0> E3 ## E1 ## E2 ## E0; }; \
766  struct { detail::_swizzle<4, T, Q, 3,1,2,1> E3 ## E1 ## E2 ## E1; }; \
767  struct { detail::_swizzle<4, T, Q, 3,1,2,2> E3 ## E1 ## E2 ## E2; }; \
768  struct { detail::_swizzle<4, T, Q, 3,1,2,3> E3 ## E1 ## E2 ## E3; }; \
769  struct { detail::_swizzle<4, T, Q, 3,1,3,0> E3 ## E1 ## E3 ## E0; }; \
770  struct { detail::_swizzle<4, T, Q, 3,1,3,1> E3 ## E1 ## E3 ## E1; }; \
771  struct { detail::_swizzle<4, T, Q, 3,1,3,2> E3 ## E1 ## E3 ## E2; }; \
772  struct { detail::_swizzle<4, T, Q, 3,1,3,3> E3 ## E1 ## E3 ## E3; }; \
773  struct { detail::_swizzle<4, T, Q, 3,2,0,0> E3 ## E2 ## E0 ## E0; }; \
774  struct { detail::_swizzle<4, T, Q, 3,2,0,1> E3 ## E2 ## E0 ## E1; }; \
775  struct { detail::_swizzle<4, T, Q, 3,2,0,2> E3 ## E2 ## E0 ## E2; }; \
776  struct { detail::_swizzle<4, T, Q, 3,2,0,3> E3 ## E2 ## E0 ## E3; }; \
777  struct { detail::_swizzle<4, T, Q, 3,2,1,0> E3 ## E2 ## E1 ## E0; }; \
778  struct { detail::_swizzle<4, T, Q, 3,2,1,1> E3 ## E2 ## E1 ## E1; }; \
779  struct { detail::_swizzle<4, T, Q, 3,2,1,2> E3 ## E2 ## E1 ## E2; }; \
780  struct { detail::_swizzle<4, T, Q, 3,2,1,3> E3 ## E2 ## E1 ## E3; }; \
781  struct { detail::_swizzle<4, T, Q, 3,2,2,0> E3 ## E2 ## E2 ## E0; }; \
782  struct { detail::_swizzle<4, T, Q, 3,2,2,1> E3 ## E2 ## E2 ## E1; }; \
783  struct { detail::_swizzle<4, T, Q, 3,2,2,2> E3 ## E2 ## E2 ## E2; }; \
784  struct { detail::_swizzle<4, T, Q, 3,2,2,3> E3 ## E2 ## E2 ## E3; }; \
785  struct { detail::_swizzle<4, T, Q, 3,2,3,0> E3 ## E2 ## E3 ## E0; }; \
786  struct { detail::_swizzle<4, T, Q, 3,2,3,1> E3 ## E2 ## E3 ## E1; }; \
787  struct { detail::_swizzle<4, T, Q, 3,2,3,2> E3 ## E2 ## E3 ## E2; }; \
788  struct { detail::_swizzle<4, T, Q, 3,2,3,3> E3 ## E2 ## E3 ## E3; }; \
789  struct { detail::_swizzle<4, T, Q, 3,3,0,0> E3 ## E3 ## E0 ## E0; }; \
790  struct { detail::_swizzle<4, T, Q, 3,3,0,1> E3 ## E3 ## E0 ## E1; }; \
791  struct { detail::_swizzle<4, T, Q, 3,3,0,2> E3 ## E3 ## E0 ## E2; }; \
792  struct { detail::_swizzle<4, T, Q, 3,3,0,3> E3 ## E3 ## E0 ## E3; }; \
793  struct { detail::_swizzle<4, T, Q, 3,3,1,0> E3 ## E3 ## E1 ## E0; }; \
794  struct { detail::_swizzle<4, T, Q, 3,3,1,1> E3 ## E3 ## E1 ## E1; }; \
795  struct { detail::_swizzle<4, T, Q, 3,3,1,2> E3 ## E3 ## E1 ## E2; }; \
796  struct { detail::_swizzle<4, T, Q, 3,3,1,3> E3 ## E3 ## E1 ## E3; }; \
797  struct { detail::_swizzle<4, T, Q, 3,3,2,0> E3 ## E3 ## E2 ## E0; }; \
798  struct { detail::_swizzle<4, T, Q, 3,3,2,1> E3 ## E3 ## E2 ## E1; }; \
799  struct { detail::_swizzle<4, T, Q, 3,3,2,2> E3 ## E3 ## E2 ## E2; }; \
800  struct { detail::_swizzle<4, T, Q, 3,3,2,3> E3 ## E3 ## E2 ## E3; }; \
801  struct { detail::_swizzle<4, T, Q, 3,3,3,0> E3 ## E3 ## E3 ## E0; }; \
802  struct { detail::_swizzle<4, T, Q, 3,3,3,1> E3 ## E3 ## E3 ## E1; }; \
803  struct { detail::_swizzle<4, T, Q, 3,3,3,2> E3 ## E3 ## E3 ## E2; }; \
804  struct { detail::_swizzle<4, T, Q, 3,3,3,3> E3 ## E3 ## E3 ## E3; };
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00005_source.html ================================================ 0.9.9 API documentation: _swizzle_func.hpp Source File
_swizzle_func.hpp
1 #pragma once
2 
3 #define GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, CONST, A, B) \
4  vec<2, T, Q> A ## B() CONST \
5  { \
6  return vec<2, T, Q>(this->A, this->B); \
7  }
8 
9 #define GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, CONST, A, B, C) \
10  vec<3, T, Q> A ## B ## C() CONST \
11  { \
12  return vec<3, T, Q>(this->A, this->B, this->C); \
13  }
14 
15 #define GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, CONST, A, B, C, D) \
16  vec<4, T, Q> A ## B ## C ## D() CONST \
17  { \
18  return vec<4, T, Q>(this->A, this->B, this->C, this->D); \
19  }
20 
21 #define GLM_SWIZZLE_GEN_VEC2_ENTRY_DEF(T, P, L, CONST, A, B) \
22  template<typename T> \
23  vec<L, T, Q> vec<L, T, Q>::A ## B() CONST \
24  { \
25  return vec<2, T, Q>(this->A, this->B); \
26  }
27 
28 #define GLM_SWIZZLE_GEN_VEC3_ENTRY_DEF(T, P, L, CONST, A, B, C) \
29  template<typename T> \
30  vec<3, T, Q> vec<L, T, Q>::A ## B ## C() CONST \
31  { \
32  return vec<3, T, Q>(this->A, this->B, this->C); \
33  }
34 
35 #define GLM_SWIZZLE_GEN_VEC4_ENTRY_DEF(T, P, L, CONST, A, B, C, D) \
36  template<typename T> \
37  vec<4, T, Q> vec<L, T, Q>::A ## B ## C ## D() CONST \
38  { \
39  return vec<4, T, Q>(this->A, this->B, this->C, this->D); \
40  }
41 
42 #define GLM_MUTABLE
43 
44 #define GLM_SWIZZLE_GEN_REF2_FROM_VEC2_SWIZZLE(T, P, A, B) \
45  GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, 2, GLM_MUTABLE, A, B) \
46  GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, 2, GLM_MUTABLE, B, A)
47 
48 #define GLM_SWIZZLE_GEN_REF_FROM_VEC2(T, P) \
49  GLM_SWIZZLE_GEN_REF2_FROM_VEC2_SWIZZLE(T, P, x, y) \
50  GLM_SWIZZLE_GEN_REF2_FROM_VEC2_SWIZZLE(T, P, r, g) \
51  GLM_SWIZZLE_GEN_REF2_FROM_VEC2_SWIZZLE(T, P, s, t)
52 
#define GLM_SWIZZLE_GEN_REF2_FROM_VEC3_SWIZZLE(T, P, A, B, C) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, GLM_MUTABLE, A, B) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, GLM_MUTABLE, A, C) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, GLM_MUTABLE, B, A) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, GLM_MUTABLE, B, C) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, GLM_MUTABLE, C, A) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, GLM_MUTABLE, C, B)

#define GLM_SWIZZLE_GEN_REF3_FROM_VEC3_SWIZZLE(T, P, A, B, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, GLM_MUTABLE, A, B, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, GLM_MUTABLE, A, C, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, GLM_MUTABLE, B, A, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, GLM_MUTABLE, B, C, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, GLM_MUTABLE, C, A, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, GLM_MUTABLE, C, B, A)

#define GLM_SWIZZLE_GEN_REF_FROM_VEC3_COMP(T, P, A, B, C) \
	GLM_SWIZZLE_GEN_REF3_FROM_VEC3_SWIZZLE(T, P, A, B, C) \
	GLM_SWIZZLE_GEN_REF2_FROM_VEC3_SWIZZLE(T, P, A, B, C)

#define GLM_SWIZZLE_GEN_REF_FROM_VEC3(T, P) \
	GLM_SWIZZLE_GEN_REF_FROM_VEC3_COMP(T, P, x, y, z) \
	GLM_SWIZZLE_GEN_REF_FROM_VEC3_COMP(T, P, r, g, b) \
	GLM_SWIZZLE_GEN_REF_FROM_VEC3_COMP(T, P, s, t, p)

#define GLM_SWIZZLE_GEN_REF2_FROM_VEC4_SWIZZLE(T, P, A, B, C, D) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, GLM_MUTABLE, A, B) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, GLM_MUTABLE, A, C) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, GLM_MUTABLE, A, D) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, GLM_MUTABLE, B, A) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, GLM_MUTABLE, B, C) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, GLM_MUTABLE, B, D) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, GLM_MUTABLE, C, A) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, GLM_MUTABLE, C, B) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, GLM_MUTABLE, C, D) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, GLM_MUTABLE, D, A) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, GLM_MUTABLE, D, B) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, GLM_MUTABLE, D, C)

#define GLM_SWIZZLE_GEN_REF3_FROM_VEC4_SWIZZLE(T, P, A, B, C, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, , A, B, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, , A, B, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, , A, C, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, , A, C, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, , A, D, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, , A, D, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, , B, A, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, , B, A, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, , B, C, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, , B, C, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, , B, D, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, , B, D, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, , C, A, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, , C, A, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, , C, B, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, , C, B, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, , C, D, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, , C, D, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, , D, A, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, , D, A, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, , D, B, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, , D, B, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, , D, C, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, , D, C, B)

#define GLM_SWIZZLE_GEN_REF4_FROM_VEC4_SWIZZLE(T, P, A, B, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, , A, C, B, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, , A, C, D, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, , A, D, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, , A, D, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, , A, B, D, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, , A, B, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, , B, C, A, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, , B, C, D, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, , B, D, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, , B, D, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, , B, A, D, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, , B, A, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, , C, B, A, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, , C, B, D, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, , C, D, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, , C, D, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, , C, A, D, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, , C, A, B, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, , D, C, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, , D, C, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, , D, A, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, , D, A, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, , D, B, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, , D, B, C, A)

#define GLM_SWIZZLE_GEN_REF_FROM_VEC4_COMP(T, P, A, B, C, D) \
	GLM_SWIZZLE_GEN_REF2_FROM_VEC4_SWIZZLE(T, P, A, B, C, D) \
	GLM_SWIZZLE_GEN_REF3_FROM_VEC4_SWIZZLE(T, P, A, B, C, D) \
	GLM_SWIZZLE_GEN_REF4_FROM_VEC4_SWIZZLE(T, P, A, B, C, D)

#define GLM_SWIZZLE_GEN_REF_FROM_VEC4(T, P) \
	GLM_SWIZZLE_GEN_REF_FROM_VEC4_COMP(T, P, x, y, z, w) \
	GLM_SWIZZLE_GEN_REF_FROM_VEC4_COMP(T, P, r, g, b, a) \
	GLM_SWIZZLE_GEN_REF_FROM_VEC4_COMP(T, P, s, t, p, q)

#define GLM_SWIZZLE_GEN_VEC2_FROM_VEC2_SWIZZLE(T, P, A, B) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, A, A) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, A, B) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, B, A) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, B, B)

#define GLM_SWIZZLE_GEN_VEC3_FROM_VEC2_SWIZZLE(T, P, A, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, A, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, A, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, B, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, B, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, A, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, A, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, B, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, B, B)

#define GLM_SWIZZLE_GEN_VEC4_FROM_VEC2_SWIZZLE(T, P, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, B, B)

#define GLM_SWIZZLE_GEN_VEC_FROM_VEC2_COMP(T, P, A, B) \
	GLM_SWIZZLE_GEN_VEC2_FROM_VEC2_SWIZZLE(T, P, A, B) \
	GLM_SWIZZLE_GEN_VEC3_FROM_VEC2_SWIZZLE(T, P, A, B) \
	GLM_SWIZZLE_GEN_VEC4_FROM_VEC2_SWIZZLE(T, P, A, B)

#define GLM_SWIZZLE_GEN_VEC_FROM_VEC2(T, P) \
	GLM_SWIZZLE_GEN_VEC_FROM_VEC2_COMP(T, P, x, y) \
	GLM_SWIZZLE_GEN_VEC_FROM_VEC2_COMP(T, P, r, g) \
	GLM_SWIZZLE_GEN_VEC_FROM_VEC2_COMP(T, P, s, t)

#define GLM_SWIZZLE_GEN_VEC2_FROM_VEC3_SWIZZLE(T, P, A, B, C) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, A, A) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, A, B) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, A, C) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, B, A) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, B, B) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, B, C) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, C, A) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, C, B) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, C, C)

#define GLM_SWIZZLE_GEN_VEC3_FROM_VEC3_SWIZZLE(T, P, A, B, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, A, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, A, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, A, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, B, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, B, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, B, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, C, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, C, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, C, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, A, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, A, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, A, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, B, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, B, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, B, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, C, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, C, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, C, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, C, A, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, C, A, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, C, A, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, C, B, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, C, B, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, C, B, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, C, C, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, C, C, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, C, C, C)

#define GLM_SWIZZLE_GEN_VEC4_FROM_VEC3_SWIZZLE(T, P, A, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, C, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, C, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, C, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, C, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, C, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, C, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, C, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, C, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, C, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, C, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, C, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, C, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, C, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, C, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, C, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, C, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, C, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, C, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, A, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, A, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, A, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, A, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, A, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, A, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, A, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, A, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, A, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, B, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, B, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, B, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, B, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, B, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, B, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, B, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, B, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, B, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, C, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, C, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, C, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, C, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, C, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, C, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, C, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, C, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, C, C, C)

#define GLM_SWIZZLE_GEN_VEC_FROM_VEC3_COMP(T, P, A, B, C) \
	GLM_SWIZZLE_GEN_VEC2_FROM_VEC3_SWIZZLE(T, P, A, B, C) \
	GLM_SWIZZLE_GEN_VEC3_FROM_VEC3_SWIZZLE(T, P, A, B, C) \
	GLM_SWIZZLE_GEN_VEC4_FROM_VEC3_SWIZZLE(T, P, A, B, C)

#define GLM_SWIZZLE_GEN_VEC_FROM_VEC3(T, P) \
	GLM_SWIZZLE_GEN_VEC_FROM_VEC3_COMP(T, P, x, y, z) \
	GLM_SWIZZLE_GEN_VEC_FROM_VEC3_COMP(T, P, r, g, b) \
	GLM_SWIZZLE_GEN_VEC_FROM_VEC3_COMP(T, P, s, t, p)

#define GLM_SWIZZLE_GEN_VEC2_FROM_VEC4_SWIZZLE(T, P, A, B, C, D) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, A, A) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, A, B) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, A, C) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, A, D) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, B, A) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, B, B) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, B, C) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, B, D) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, C, A) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, C, B) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, C, C) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, C, D) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, D, A) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, D, B) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, D, C) \
	GLM_SWIZZLE_GEN_VEC2_ENTRY(T, P, const, D, D)

#define GLM_SWIZZLE_GEN_VEC3_FROM_VEC4_SWIZZLE(T, P, A, B, C, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, A, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, A, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, A, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, A, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, B, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, B, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, B, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, B, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, C, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, C, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, C, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, C, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, D, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, D, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, D, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, A, D, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, A, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, A, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, A, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, A, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, B, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, B, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, B, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, B, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, C, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, C, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, C, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, C, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, D, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, D, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, D, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, B, D, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, C, A, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, C, A, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, C, A, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, C, A, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, C, B, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, C, B, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, C, B, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, C, B, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, C, C, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, C, C, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, C, C, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, C, C, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, C, D, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, C, D, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, C, D, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, C, D, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, D, A, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, D, A, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, D, A, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, D, A, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, D, B, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, D, B, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, D, B, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, D, B, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, D, C, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, D, C, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, D, C, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, D, C, D) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, D, D, A) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, D, D, B) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, D, D, C) \
	GLM_SWIZZLE_GEN_VEC3_ENTRY(T, P, const, D, D, D)

#define GLM_SWIZZLE_GEN_VEC4_FROM_VEC4_SWIZZLE(T, P, A, B, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, A, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, B, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, D, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, D, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, D, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, A, D, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, A, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, B, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, D, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, D, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, D, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, B, D, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, C, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, C, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, C, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, C, A, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, C, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, C, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, C, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, C, B, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, C, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, C, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, C, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, C, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, C, D, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, C, D, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, C, D, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, C, D, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, D, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, D, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, D, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, D, A, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, D, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, D, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, D, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, D, B, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, D, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, D, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, D, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, D, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, D, D, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, D, D, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, D, D, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, A, D, D, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, A, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, B, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, D, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, D, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, D, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, A, D, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, A, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, B, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, D, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, D, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, D, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, B, D, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, C, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, C, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, C, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, C, A, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, C, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, C, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, C, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, C, B, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, C, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, C, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, C, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, C, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, C, D, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, C, D, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, C, D, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, C, D, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, D, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, D, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, D, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, D, A, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, D, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, D, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, D, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, D, B, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, D, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, D, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, D, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, D, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, D, D, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, D, D, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, D, D, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, B, D, D, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, A, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, A, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, A, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, A, A, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, A, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, A, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, A, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, A, B, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, A, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, A, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, A, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, A, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, A, D, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, A, D, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, A, D, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, A, D, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, B, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, B, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, B, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, B, A, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, B, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, B, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, B, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, B, B, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, B, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, B, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, B, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, B, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, B, D, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, B, D, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, B, D, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, B, D, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, C, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, C, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, C, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, C, A, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, C, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, C, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, C, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, C, B, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, C, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, C, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, C, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, C, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, C, D, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, C, D, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, C, D, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, C, D, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, D, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, D, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, D, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, D, A, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, D, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, D, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, D, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, D, B, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, D, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, D, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, D, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, D, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, D, D, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, D, D, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, D, D, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, C, D, D, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, A, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, A, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, A, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, A, A, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, A, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, A, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, A, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, A, B, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, A, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, A, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, A, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, A, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, A, D, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, A, D, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, A, D, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, A, D, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, B, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, B, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, B, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, B, A, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, B, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, B, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, B, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, B, B, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, B, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, B, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, B, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, B, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, B, D, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, B, D, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, B, D, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, B, D, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, C, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, C, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, C, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, C, A, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, C, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, C, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, C, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, C, B, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, C, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, C, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, C, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, C, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, C, D, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, C, D, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, C, D, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, C, D, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, D, A, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, D, A, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, D, A, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, D, A, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, D, B, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, D, B, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, D, B, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, D, B, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, D, C, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, D, C, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, D, C, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, D, C, D) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, D, D, A) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, D, D, B) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, D, D, C) \
	GLM_SWIZZLE_GEN_VEC4_ENTRY(T, P, const, D, D, D, D)

#define GLM_SWIZZLE_GEN_VEC_FROM_VEC4_COMP(T, P, A, B, C, D) \
	GLM_SWIZZLE_GEN_VEC2_FROM_VEC4_SWIZZLE(T, P, A, B, C, D) \
	GLM_SWIZZLE_GEN_VEC3_FROM_VEC4_SWIZZLE(T, P, A, B, C, D) \
	GLM_SWIZZLE_GEN_VEC4_FROM_VEC4_SWIZZLE(T, P, A, B, C, D)

#define GLM_SWIZZLE_GEN_VEC_FROM_VEC4(T, P) \
	GLM_SWIZZLE_GEN_VEC_FROM_VEC4_COMP(T, P, x, y, z, w) \
	GLM_SWIZZLE_GEN_VEC_FROM_VEC4_COMP(T, P, r, g, b, a) \
	GLM_SWIZZLE_GEN_VEC_FROM_VEC4_COMP(T, P, s, t, p, q)

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00006_source.html ================================================ 0.9.9 API documentation: _vectorize.hpp Source File
_vectorize.hpp
1 #pragma once
2 
3 namespace glm{
4 namespace detail
5 {
6  template<template<length_t L, typename T, qualifier Q> class vec, length_t L, typename R, typename T, qualifier Q>
7  struct functor1{};
8 
9  template<template<length_t L, typename T, qualifier Q> class vec, typename R, typename T, qualifier Q>
10  struct functor1<vec, 1, R, T, Q>
11  {
12  GLM_FUNC_QUALIFIER GLM_CONSTEXPR static vec<1, R, Q> call(R (*Func) (T x), vec<1, T, Q> const& v)
13  {
14  return vec<1, R, Q>(Func(v.x));
15  }
16  };
17 
18  template<template<length_t L, typename T, qualifier Q> class vec, typename R, typename T, qualifier Q>
19  struct functor1<vec, 2, R, T, Q>
20  {
21  GLM_FUNC_QUALIFIER GLM_CONSTEXPR static vec<2, R, Q> call(R (*Func) (T x), vec<2, T, Q> const& v)
22  {
23  return vec<2, R, Q>(Func(v.x), Func(v.y));
24  }
25  };
26 
27  template<template<length_t L, typename T, qualifier Q> class vec, typename R, typename T, qualifier Q>
28  struct functor1<vec, 3, R, T, Q>
29  {
30  GLM_FUNC_QUALIFIER GLM_CONSTEXPR static vec<3, R, Q> call(R (*Func) (T x), vec<3, T, Q> const& v)
31  {
32  return vec<3, R, Q>(Func(v.x), Func(v.y), Func(v.z));
33  }
34  };
35 
36  template<template<length_t L, typename T, qualifier Q> class vec, typename R, typename T, qualifier Q>
37  struct functor1<vec, 4, R, T, Q>
38  {
39  GLM_FUNC_QUALIFIER GLM_CONSTEXPR static vec<4, R, Q> call(R (*Func) (T x), vec<4, T, Q> const& v)
40  {
41  return vec<4, R, Q>(Func(v.x), Func(v.y), Func(v.z), Func(v.w));
42  }
43  };
44 
45  template<template<length_t L, typename T, qualifier Q> class vec, length_t L, typename T, qualifier Q>
46  struct functor2{};
47 
48  template<template<length_t L, typename T, qualifier Q> class vec, typename T, qualifier Q>
49  struct functor2<vec, 1, T, Q>
50  {
51  GLM_FUNC_QUALIFIER static vec<1, T, Q> call(T (*Func) (T x, T y), vec<1, T, Q> const& a, vec<1, T, Q> const& b)
52  {
53  return vec<1, T, Q>(Func(a.x, b.x));
54  }
55  };
56 
57  template<template<length_t L, typename T, qualifier Q> class vec, typename T, qualifier Q>
58  struct functor2<vec, 2, T, Q>
59  {
60  GLM_FUNC_QUALIFIER static vec<2, T, Q> call(T (*Func) (T x, T y), vec<2, T, Q> const& a, vec<2, T, Q> const& b)
61  {
62  return vec<2, T, Q>(Func(a.x, b.x), Func(a.y, b.y));
63  }
64  };
65 
66  template<template<length_t L, typename T, qualifier Q> class vec, typename T, qualifier Q>
67  struct functor2<vec, 3, T, Q>
68  {
69  GLM_FUNC_QUALIFIER static vec<3, T, Q> call(T (*Func) (T x, T y), vec<3, T, Q> const& a, vec<3, T, Q> const& b)
70  {
71  return vec<3, T, Q>(Func(a.x, b.x), Func(a.y, b.y), Func(a.z, b.z));
72  }
73  };
74 
75  template<template<length_t L, typename T, qualifier Q> class vec, typename T, qualifier Q>
76  struct functor2<vec, 4, T, Q>
77  {
78  GLM_FUNC_QUALIFIER static vec<4, T, Q> call(T (*Func) (T x, T y), vec<4, T, Q> const& a, vec<4, T, Q> const& b)
79  {
80  return vec<4, T, Q>(Func(a.x, b.x), Func(a.y, b.y), Func(a.z, b.z), Func(a.w, b.w));
81  }
82  };
83 
84  template<template<length_t L, typename T, qualifier Q> class vec, length_t L, typename T, qualifier Q>
85  struct functor2_vec_sca{};
86 
87  template<template<length_t L, typename T, qualifier Q> class vec, typename T, qualifier Q>
88  struct functor2_vec_sca<vec, 1, T, Q>
89  {
90  GLM_FUNC_QUALIFIER static vec<1, T, Q> call(T (*Func) (T x, T y), vec<1, T, Q> const& a, T b)
91  {
92  return vec<1, T, Q>(Func(a.x, b));
93  }
94  };
95 
96  template<template<length_t L, typename T, qualifier Q> class vec, typename T, qualifier Q>
97  struct functor2_vec_sca<vec, 2, T, Q>
98  {
99  GLM_FUNC_QUALIFIER static vec<2, T, Q> call(T (*Func) (T x, T y), vec<2, T, Q> const& a, T b)
100  {
101  return vec<2, T, Q>(Func(a.x, b), Func(a.y, b));
102  }
103  };
104 
105  template<template<length_t L, typename T, qualifier Q> class vec, typename T, qualifier Q>
106  struct functor2_vec_sca<vec, 3, T, Q>
107  {
108  GLM_FUNC_QUALIFIER static vec<3, T, Q> call(T (*Func) (T x, T y), vec<3, T, Q> const& a, T b)
109  {
110  return vec<3, T, Q>(Func(a.x, b), Func(a.y, b), Func(a.z, b));
111  }
112  };
113 
114  template<template<length_t L, typename T, qualifier Q> class vec, typename T, qualifier Q>
115  struct functor2_vec_sca<vec, 4, T, Q>
116  {
117  GLM_FUNC_QUALIFIER static vec<4, T, Q> call(T (*Func) (T x, T y), vec<4, T, Q> const& a, T b)
118  {
119  return vec<4, T, Q>(Func(a.x, b), Func(a.y, b), Func(a.z, b), Func(a.w, b));
120  }
121  };
122 
123  template<length_t L, typename T, qualifier Q>
124  struct functor2_vec_int {};
125 
126  template<typename T, qualifier Q>
127  struct functor2_vec_int<1, T, Q>
128  {
129  GLM_FUNC_QUALIFIER static vec<1, int, Q> call(int (*Func) (T x, int y), vec<1, T, Q> const& a, vec<1, int, Q> const& b)
130  {
131  return vec<1, int, Q>(Func(a.x, b.x));
132  }
133  };
134 
135  template<typename T, qualifier Q>
136  struct functor2_vec_int<2, T, Q>
137  {
138  GLM_FUNC_QUALIFIER static vec<2, int, Q> call(int (*Func) (T x, int y), vec<2, T, Q> const& a, vec<2, int, Q> const& b)
139  {
140  return vec<2, int, Q>(Func(a.x, b.x), Func(a.y, b.y));
141  }
142  };
143 
144  template<typename T, qualifier Q>
145  struct functor2_vec_int<3, T, Q>
146  {
147  GLM_FUNC_QUALIFIER static vec<3, int, Q> call(int (*Func) (T x, int y), vec<3, T, Q> const& a, vec<3, int, Q> const& b)
148  {
149  return vec<3, int, Q>(Func(a.x, b.x), Func(a.y, b.y), Func(a.z, b.z));
150  }
151  };
152 
153  template<typename T, qualifier Q>
154  struct functor2_vec_int<4, T, Q>
155  {
156  GLM_FUNC_QUALIFIER static vec<4, int, Q> call(int (*Func) (T x, int y), vec<4, T, Q> const& a, vec<4, int, Q> const& b)
157  {
158  return vec<4, int, Q>(Func(a.x, b.x), Func(a.y, b.y), Func(a.z, b.z), Func(a.w, b.w));
159  }
160  };
161 }//namespace detail
162 }//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00007.html ================================================ 0.9.9 API documentation: associated_min_max.hpp File Reference
associated_min_max.hpp File Reference

GLM_GTX_associated_min_max More...


Functions

template<typename T , typename U >
GLM_FUNC_DECL U associatedMax (T x, U a, T y, U b)
 Maximum comparison between 2 variables, returning the 2 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< 2, U, Q > associatedMax (vec< L, T, Q > const &x, vec< L, U, Q > const &a, vec< L, T, Q > const &y, vec< L, U, Q > const &b)
 Maximum comparison between 2 variables, returning the 2 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > associatedMax (T x, vec< L, U, Q > const &a, T y, vec< L, U, Q > const &b)
 Maximum comparison between 2 variables, returning the 2 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, U, Q > associatedMax (vec< L, T, Q > const &x, U a, vec< L, T, Q > const &y, U b)
 Maximum comparison between 2 variables, returning the 2 associated variable values. More...
 
template<typename T , typename U >
GLM_FUNC_DECL U associatedMax (T x, U a, T y, U b, T z, U c)
 Maximum comparison between 3 variables, returning the 3 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, U, Q > associatedMax (vec< L, T, Q > const &x, vec< L, U, Q > const &a, vec< L, T, Q > const &y, vec< L, U, Q > const &b, vec< L, T, Q > const &z, vec< L, U, Q > const &c)
 Maximum comparison between 3 variables, returning the 3 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > associatedMax (T x, vec< L, U, Q > const &a, T y, vec< L, U, Q > const &b, T z, vec< L, U, Q > const &c)
 Maximum comparison between 3 variables, returning the 3 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, U, Q > associatedMax (vec< L, T, Q > const &x, U a, vec< L, T, Q > const &y, U b, vec< L, T, Q > const &z, U c)
 Maximum comparison between 3 variables, returning the 3 associated variable values. More...
 
template<typename T , typename U >
GLM_FUNC_DECL U associatedMax (T x, U a, T y, U b, T z, U c, T w, U d)
 Maximum comparison between 4 variables, returning the 4 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, U, Q > associatedMax (vec< L, T, Q > const &x, vec< L, U, Q > const &a, vec< L, T, Q > const &y, vec< L, U, Q > const &b, vec< L, T, Q > const &z, vec< L, U, Q > const &c, vec< L, T, Q > const &w, vec< L, U, Q > const &d)
 Maximum comparison between 4 variables, returning the 4 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, U, Q > associatedMax (T x, vec< L, U, Q > const &a, T y, vec< L, U, Q > const &b, T z, vec< L, U, Q > const &c, T w, vec< L, U, Q > const &d)
 Maximum comparison between 4 variables, returning the 4 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, U, Q > associatedMax (vec< L, T, Q > const &x, U a, vec< L, T, Q > const &y, U b, vec< L, T, Q > const &z, U c, vec< L, T, Q > const &w, U d)
 Maximum comparison between 4 variables, returning the 4 associated variable values. More...
 
template<typename T , typename U , qualifier Q>
GLM_FUNC_DECL U associatedMin (T x, U a, T y, U b)
 Minimum comparison between 2 variables, returning the 2 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< 2, U, Q > associatedMin (vec< L, T, Q > const &x, vec< L, U, Q > const &a, vec< L, T, Q > const &y, vec< L, U, Q > const &b)
 Minimum comparison between 2 variables, returning the 2 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, U, Q > associatedMin (T x, const vec< L, U, Q > &a, T y, const vec< L, U, Q > &b)
 Minimum comparison between 2 variables, returning the 2 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, U, Q > associatedMin (vec< L, T, Q > const &x, U a, vec< L, T, Q > const &y, U b)
 Minimum comparison between 2 variables, returning the 2 associated variable values. More...
 
template<typename T , typename U >
GLM_FUNC_DECL U associatedMin (T x, U a, T y, U b, T z, U c)
 Minimum comparison between 3 variables, returning the 3 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, U, Q > associatedMin (vec< L, T, Q > const &x, vec< L, U, Q > const &a, vec< L, T, Q > const &y, vec< L, U, Q > const &b, vec< L, T, Q > const &z, vec< L, U, Q > const &c)
 Minimum comparison between 3 variables, returning the 3 associated variable values. More...
 
template<typename T , typename U >
GLM_FUNC_DECL U associatedMin (T x, U a, T y, U b, T z, U c, T w, U d)
 Minimum comparison between 4 variables, returning the 4 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, U, Q > associatedMin (vec< L, T, Q > const &x, vec< L, U, Q > const &a, vec< L, T, Q > const &y, vec< L, U, Q > const &b, vec< L, T, Q > const &z, vec< L, U, Q > const &c, vec< L, T, Q > const &w, vec< L, U, Q > const &d)
 Minimum comparison between 4 variables, returning the 4 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, U, Q > associatedMin (T x, vec< L, U, Q > const &a, T y, vec< L, U, Q > const &b, T z, vec< L, U, Q > const &c, T w, vec< L, U, Q > const &d)
 Minimum comparison between 4 variables, returning the 4 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, U, Q > associatedMin (vec< L, T, Q > const &x, U a, vec< L, T, Q > const &y, U b, vec< L, T, Q > const &z, U c, vec< L, T, Q > const &w, U d)
 Minimum comparison between 4 variables, returning the 4 associated variable values. More...
 

Detailed Description

GLM_GTX_associated_min_max

See also
Core features (dependence)
gtx_extended_min_max (dependence)

Definition in file associated_min_max.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00007_source.html ================================================ 0.9.9 API documentation: associated_min_max.hpp Source File
associated_min_max.hpp
1 
14 #pragma once
15 
16 // Dependency:
17 #include "../glm.hpp"
18 
19 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
20 # ifndef GLM_ENABLE_EXPERIMENTAL
21 # pragma message("GLM: GLM_GTX_associated_min_max is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
22 # else
23 # pragma message("GLM: GLM_GTX_associated_min_max extension included")
24 # endif
25 #endif
26 
27 namespace glm
28 {
31 
34  template<typename T, typename U, qualifier Q>
35  GLM_FUNC_DECL U associatedMin(T x, U a, T y, U b);
36 
39  template<length_t L, typename T, typename U, qualifier Q>
40  GLM_FUNC_DECL vec<2, U, Q> associatedMin(
41  vec<L, T, Q> const& x, vec<L, U, Q> const& a,
42  vec<L, T, Q> const& y, vec<L, U, Q> const& b);
43 
46  template<length_t L, typename T, typename U, qualifier Q>
47  GLM_FUNC_DECL vec<L, U, Q> associatedMin(
48  T x, const vec<L, U, Q>& a,
49  T y, const vec<L, U, Q>& b);
50 
53  template<length_t L, typename T, typename U, qualifier Q>
54  GLM_FUNC_DECL vec<L, U, Q> associatedMin(
55  vec<L, T, Q> const& x, U a,
56  vec<L, T, Q> const& y, U b);
57 
60  template<typename T, typename U>
61  GLM_FUNC_DECL U associatedMin(
62  T x, U a,
63  T y, U b,
64  T z, U c);
65 
68  template<length_t L, typename T, typename U, qualifier Q>
69  GLM_FUNC_DECL vec<L, U, Q> associatedMin(
70  vec<L, T, Q> const& x, vec<L, U, Q> const& a,
71  vec<L, T, Q> const& y, vec<L, U, Q> const& b,
72  vec<L, T, Q> const& z, vec<L, U, Q> const& c);
73 
76  template<typename T, typename U>
77  GLM_FUNC_DECL U associatedMin(
78  T x, U a,
79  T y, U b,
80  T z, U c,
81  T w, U d);
82 
85  template<length_t L, typename T, typename U, qualifier Q>
86  GLM_FUNC_DECL vec<L, U, Q> associatedMin(
87  vec<L, T, Q> const& x, vec<L, U, Q> const& a,
88  vec<L, T, Q> const& y, vec<L, U, Q> const& b,
89  vec<L, T, Q> const& z, vec<L, U, Q> const& c,
90  vec<L, T, Q> const& w, vec<L, U, Q> const& d);
91 
94  template<length_t L, typename T, typename U, qualifier Q>
95  GLM_FUNC_DECL vec<L, U, Q> associatedMin(
96  T x, vec<L, U, Q> const& a,
97  T y, vec<L, U, Q> const& b,
98  T z, vec<L, U, Q> const& c,
99  T w, vec<L, U, Q> const& d);
100 
103  template<length_t L, typename T, typename U, qualifier Q>
104  GLM_FUNC_DECL vec<L, U, Q> associatedMin(
105  vec<L, T, Q> const& x, U a,
106  vec<L, T, Q> const& y, U b,
107  vec<L, T, Q> const& z, U c,
108  vec<L, T, Q> const& w, U d);
109 
112  template<typename T, typename U>
113  GLM_FUNC_DECL U associatedMax(T x, U a, T y, U b);
114 
117  template<length_t L, typename T, typename U, qualifier Q>
118  GLM_FUNC_DECL vec<2, U, Q> associatedMax(
119  vec<L, T, Q> const& x, vec<L, U, Q> const& a,
120  vec<L, T, Q> const& y, vec<L, U, Q> const& b);
121 
124  template<length_t L, typename T, typename U, qualifier Q>
125  GLM_FUNC_DECL vec<L, T, Q> associatedMax(
126  T x, vec<L, U, Q> const& a,
127  T y, vec<L, U, Q> const& b);
128 
131  template<length_t L, typename T, typename U, qualifier Q>
132  GLM_FUNC_DECL vec<L, U, Q> associatedMax(
133  vec<L, T, Q> const& x, U a,
134  vec<L, T, Q> const& y, U b);
135 
138  template<typename T, typename U>
139  GLM_FUNC_DECL U associatedMax(
140  T x, U a,
141  T y, U b,
142  T z, U c);
143 
146  template<length_t L, typename T, typename U, qualifier Q>
147  GLM_FUNC_DECL vec<L, U, Q> associatedMax(
148  vec<L, T, Q> const& x, vec<L, U, Q> const& a,
149  vec<L, T, Q> const& y, vec<L, U, Q> const& b,
150  vec<L, T, Q> const& z, vec<L, U, Q> const& c);
151 
154  template<length_t L, typename T, typename U, qualifier Q>
155  GLM_FUNC_DECL vec<L, T, Q> associatedMax(
156  T x, vec<L, U, Q> const& a,
157  T y, vec<L, U, Q> const& b,
158  T z, vec<L, U, Q> const& c);
159 
162  template<length_t L, typename T, typename U, qualifier Q>
163  GLM_FUNC_DECL vec<L, U, Q> associatedMax(
164  vec<L, T, Q> const& x, U a,
165  vec<L, T, Q> const& y, U b,
166  vec<L, T, Q> const& z, U c);
167 
170  template<typename T, typename U>
171  GLM_FUNC_DECL U associatedMax(
172  T x, U a,
173  T y, U b,
174  T z, U c,
175  T w, U d);
176 
179  template<length_t L, typename T, typename U, qualifier Q>
180  GLM_FUNC_DECL vec<L, U, Q> associatedMax(
181  vec<L, T, Q> const& x, vec<L, U, Q> const& a,
182  vec<L, T, Q> const& y, vec<L, U, Q> const& b,
183  vec<L, T, Q> const& z, vec<L, U, Q> const& c,
184  vec<L, T, Q> const& w, vec<L, U, Q> const& d);
185 
188  template<length_t L, typename T, typename U, qualifier Q>
189  GLM_FUNC_DECL vec<L, U, Q> associatedMax(
190  T x, vec<L, U, Q> const& a,
191  T y, vec<L, U, Q> const& b,
192  T z, vec<L, U, Q> const& c,
193  T w, vec<L, U, Q> const& d);
194 
197  template<length_t L, typename T, typename U, qualifier Q>
198  GLM_FUNC_DECL vec<L, U, Q> associatedMax(
199  vec<L, T, Q> const& x, U a,
200  vec<L, T, Q> const& y, U b,
201  vec<L, T, Q> const& z, U c,
202  vec<L, T, Q> const& w, U d);
203 
205 } //namespace glm
206 
207 #include "associated_min_max.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00008.html ================================================ 0.9.9 API documentation: bit.hpp File Reference
bit.hpp File Reference

GLM_GTX_bit More...


Functions

template<typename genIUType >
GLM_FUNC_DECL genIUType highestBitValue (genIUType Value)
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > highestBitValue (vec< L, T, Q > const &value)
 Find the highest bit set to 1 in an integer variable and return its value. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType lowestBitValue (genIUType Value)
 
template<typename genIUType >
GLM_DEPRECATED GLM_FUNC_DECL genIUType powerOfTwoAbove (genIUType Value)
 Return the power of two whose value is just higher than the input value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_DEPRECATED GLM_FUNC_DECL vec< L, T, Q > powerOfTwoAbove (vec< L, T, Q > const &value)
 Return the power of two whose value is just higher than the input value. More...
 
template<typename genIUType >
GLM_DEPRECATED GLM_FUNC_DECL genIUType powerOfTwoBelow (genIUType Value)
 Return the power of two whose value is just lower than the input value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_DEPRECATED GLM_FUNC_DECL vec< L, T, Q > powerOfTwoBelow (vec< L, T, Q > const &value)
 Return the power of two whose value is just lower than the input value. More...
 
template<typename genIUType >
GLM_DEPRECATED GLM_FUNC_DECL genIUType powerOfTwoNearest (genIUType Value)
 Return the power of two whose value is the closest to the input value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_DEPRECATED GLM_FUNC_DECL vec< L, T, Q > powerOfTwoNearest (vec< L, T, Q > const &value)
 Return the power of two whose value is the closest to the input value. More...
 

Detailed Description

GLM_GTX_bit

See also
Core features (dependence)

Definition in file bit.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00008_source.html ================================================ 0.9.9 API documentation: bit.hpp Source File
bit.hpp
1 
13 #pragma once
14 
15 // Dependencies
16 #include "../gtc/bitfield.hpp"
17 
18 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
19 # ifndef GLM_ENABLE_EXPERIMENTAL
20 # pragma message("GLM: GLM_GTX_bit is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
21 # else
22 # pragma message("GLM: GLM_GTX_bit extension included")
23 # endif
24 #endif
25 
26 namespace glm
27 {
30 
32  template<typename genIUType>
33  GLM_FUNC_DECL genIUType highestBitValue(genIUType Value);
34 
36  template<typename genIUType>
37  GLM_FUNC_DECL genIUType lowestBitValue(genIUType Value);
38 
42  template<length_t L, typename T, qualifier Q>
43  GLM_FUNC_DECL vec<L, T, Q> highestBitValue(vec<L, T, Q> const& value);
44 
50  template<typename genIUType>
51  GLM_DEPRECATED GLM_FUNC_DECL genIUType powerOfTwoAbove(genIUType Value);
52 
58  template<length_t L, typename T, qualifier Q>
59  GLM_DEPRECATED GLM_FUNC_DECL vec<L, T, Q> powerOfTwoAbove(vec<L, T, Q> const& value);
60 
66  template<typename genIUType>
67  GLM_DEPRECATED GLM_FUNC_DECL genIUType powerOfTwoBelow(genIUType Value);
68 
74  template<length_t L, typename T, qualifier Q>
75  GLM_DEPRECATED GLM_FUNC_DECL vec<L, T, Q> powerOfTwoBelow(vec<L, T, Q> const& value);
76 
82  template<typename genIUType>
83  GLM_DEPRECATED GLM_FUNC_DECL genIUType powerOfTwoNearest(genIUType Value);
84 
90  template<length_t L, typename T, qualifier Q>
91  GLM_DEPRECATED GLM_FUNC_DECL vec<L, T, Q> powerOfTwoNearest(vec<L, T, Q> const& value);
92 
94 } //namespace glm
95 
96 
97 #include "bit.inl"
98 
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00009.html ================================================ 0.9.9 API documentation: bitfield.hpp File Reference
bitfield.hpp File Reference

GLM_GTC_bitfield More...


Functions

GLM_FUNC_DECL glm::u8vec2 bitfieldDeinterleave (glm::uint16 x)
 Deinterleaves the bits of x. More...
 
GLM_FUNC_DECL glm::u16vec2 bitfieldDeinterleave (glm::uint32 x)
 Deinterleaves the bits of x. More...
 
GLM_FUNC_DECL glm::u32vec2 bitfieldDeinterleave (glm::uint64 x)
 Deinterleaves the bits of x. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType bitfieldFillOne (genIUType Value, int FirstBit, int BitCount)
 Set a range of bits to 1. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > bitfieldFillOne (vec< L, T, Q > const &Value, int FirstBit, int BitCount)
 Set a range of bits to 1. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType bitfieldFillZero (genIUType Value, int FirstBit, int BitCount)
 Set a range of bits to 0. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > bitfieldFillZero (vec< L, T, Q > const &Value, int FirstBit, int BitCount)
 Set a range of bits to 0. More...
 
GLM_FUNC_DECL int16 bitfieldInterleave (int8 x, int8 y)
 Interleaves the bits of x and y. More...
 
GLM_FUNC_DECL uint16 bitfieldInterleave (uint8 x, uint8 y)
 Interleaves the bits of x and y. More...
 
GLM_FUNC_DECL uint16 bitfieldInterleave (u8vec2 const &v)
 Interleaves the bits of x and y. More...
 
GLM_FUNC_DECL int32 bitfieldInterleave (int16 x, int16 y)
 Interleaves the bits of x and y. More...
 
GLM_FUNC_DECL uint32 bitfieldInterleave (uint16 x, uint16 y)
 Interleaves the bits of x and y. More...
 
GLM_FUNC_DECL uint32 bitfieldInterleave (u16vec2 const &v)
 Interleaves the bits of x and y. More...
 
GLM_FUNC_DECL int64 bitfieldInterleave (int32 x, int32 y)
 Interleaves the bits of x and y. More...
 
GLM_FUNC_DECL uint64 bitfieldInterleave (uint32 x, uint32 y)
 Interleaves the bits of x and y. More...
 
GLM_FUNC_DECL uint64 bitfieldInterleave (u32vec2 const &v)
 Interleaves the bits of x and y. More...
 
GLM_FUNC_DECL int32 bitfieldInterleave (int8 x, int8 y, int8 z)
 Interleaves the bits of x, y and z. More...
 
GLM_FUNC_DECL uint32 bitfieldInterleave (uint8 x, uint8 y, uint8 z)
 Interleaves the bits of x, y and z. More...
 
GLM_FUNC_DECL int64 bitfieldInterleave (int16 x, int16 y, int16 z)
 Interleaves the bits of x, y and z. More...
 
GLM_FUNC_DECL uint64 bitfieldInterleave (uint16 x, uint16 y, uint16 z)
 Interleaves the bits of x, y and z. More...
 
GLM_FUNC_DECL int64 bitfieldInterleave (int32 x, int32 y, int32 z)
 Interleaves the bits of x, y and z. More...
 
GLM_FUNC_DECL uint64 bitfieldInterleave (uint32 x, uint32 y, uint32 z)
 Interleaves the bits of x, y and z. More...
 
GLM_FUNC_DECL int32 bitfieldInterleave (int8 x, int8 y, int8 z, int8 w)
 Interleaves the bits of x, y, z and w. More...
 
GLM_FUNC_DECL uint32 bitfieldInterleave (uint8 x, uint8 y, uint8 z, uint8 w)
 Interleaves the bits of x, y, z and w. More...
 
GLM_FUNC_DECL int64 bitfieldInterleave (int16 x, int16 y, int16 z, int16 w)
 Interleaves the bits of x, y, z and w. More...
 
GLM_FUNC_DECL uint64 bitfieldInterleave (uint16 x, uint16 y, uint16 z, uint16 w)
 Interleaves the bits of x, y, z and w. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType bitfieldRotateLeft (genIUType In, int Shift)
 Rotate all bits to the left. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > bitfieldRotateLeft (vec< L, T, Q > const &In, int Shift)
 Rotate all bits to the left. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType bitfieldRotateRight (genIUType In, int Shift)
 Rotate all bits to the right. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > bitfieldRotateRight (vec< L, T, Q > const &In, int Shift)
 Rotate all bits to the right. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType mask (genIUType Bits)
 Build a mask of 'count' bits. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > mask (vec< L, T, Q > const &v)
 Build a mask of 'count' bits. More...
 

Detailed Description

GLM_GTC_bitfield

See also
Core features (dependence)
GLM_GTC_bitfield (dependence)

Definition in file bitfield.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00009_source.html ================================================ 0.9.9 API documentation: bitfield.hpp Source File
bitfield.hpp
1 
14 #include "../detail/setup.hpp"
15 
16 #pragma once
17 
18 // Dependencies
19 #include "../ext/scalar_int_sized.hpp"
20 #include "../ext/scalar_uint_sized.hpp"
21 #include "../detail/qualifier.hpp"
22 #include "../detail/_vectorize.hpp"
23 #include "type_precision.hpp"
24 #include <limits>
25 
26 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
27 # pragma message("GLM: GLM_GTC_bitfield extension included")
28 #endif
29 
30 namespace glm
31 {
34 
38  template<typename genIUType>
39  GLM_FUNC_DECL genIUType mask(genIUType Bits);
40 
48  template<length_t L, typename T, qualifier Q>
49  GLM_FUNC_DECL vec<L, T, Q> mask(vec<L, T, Q> const& v);
50 
54  template<typename genIUType>
55  GLM_FUNC_DECL genIUType bitfieldRotateRight(genIUType In, int Shift);
56 
64  template<length_t L, typename T, qualifier Q>
65  GLM_FUNC_DECL vec<L, T, Q> bitfieldRotateRight(vec<L, T, Q> const& In, int Shift);
66 
70  template<typename genIUType>
71  GLM_FUNC_DECL genIUType bitfieldRotateLeft(genIUType In, int Shift);
72 
80  template<length_t L, typename T, qualifier Q>
81  GLM_FUNC_DECL vec<L, T, Q> bitfieldRotateLeft(vec<L, T, Q> const& In, int Shift);
82 
86  template<typename genIUType>
87  GLM_FUNC_DECL genIUType bitfieldFillOne(genIUType Value, int FirstBit, int BitCount);
88 
96  template<length_t L, typename T, qualifier Q>
97  GLM_FUNC_DECL vec<L, T, Q> bitfieldFillOne(vec<L, T, Q> const& Value, int FirstBit, int BitCount);
98 
102  template<typename genIUType>
103  GLM_FUNC_DECL genIUType bitfieldFillZero(genIUType Value, int FirstBit, int BitCount);
104 
112  template<length_t L, typename T, qualifier Q>
113  GLM_FUNC_DECL vec<L, T, Q> bitfieldFillZero(vec<L, T, Q> const& Value, int FirstBit, int BitCount);
114 
120  GLM_FUNC_DECL int16 bitfieldInterleave(int8 x, int8 y);
121 
127  GLM_FUNC_DECL uint16 bitfieldInterleave(uint8 x, uint8 y);
128 
134  GLM_FUNC_DECL uint16 bitfieldInterleave(u8vec2 const& v);
135 
140 
146  GLM_FUNC_DECL int32 bitfieldInterleave(int16 x, int16 y);
147 
153  GLM_FUNC_DECL uint32 bitfieldInterleave(uint16 x, uint16 y);
154 
160  GLM_FUNC_DECL uint32 bitfieldInterleave(u16vec2 const& v);
161 
166 
172  GLM_FUNC_DECL int64 bitfieldInterleave(int32 x, int32 y);
173 
179  GLM_FUNC_DECL uint64 bitfieldInterleave(uint32 x, uint32 y);
180 
186  GLM_FUNC_DECL uint64 bitfieldInterleave(u32vec2 const& v);
187 
192 
198  GLM_FUNC_DECL int32 bitfieldInterleave(int8 x, int8 y, int8 z);
199 
205  GLM_FUNC_DECL uint32 bitfieldInterleave(uint8 x, uint8 y, uint8 z);
206 
212  GLM_FUNC_DECL int64 bitfieldInterleave(int16 x, int16 y, int16 z);
213 
219  GLM_FUNC_DECL uint64 bitfieldInterleave(uint16 x, uint16 y, uint16 z);
220 
226  GLM_FUNC_DECL int64 bitfieldInterleave(int32 x, int32 y, int32 z);
227 
233  GLM_FUNC_DECL uint64 bitfieldInterleave(uint32 x, uint32 y, uint32 z);
234 
240  GLM_FUNC_DECL int32 bitfieldInterleave(int8 x, int8 y, int8 z, int8 w);
241 
247  GLM_FUNC_DECL uint32 bitfieldInterleave(uint8 x, uint8 y, uint8 z, uint8 w);
248 
254  GLM_FUNC_DECL int64 bitfieldInterleave(int16 x, int16 y, int16 z, int16 w);
255 
261  GLM_FUNC_DECL uint64 bitfieldInterleave(uint16 x, uint16 y, uint16 z, uint16 w);
262 
264 } //namespace glm
265 
266 #include "bitfield.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00010.html ================================================ 0.9.9 API documentation: closest_point.hpp File Reference
0.9.9 API documentation
closest_point.hpp File Reference

GLM_GTX_closest_point More...

Go to the source code of this file.

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > closestPointOnLine (vec< 3, T, Q > const &point, vec< 3, T, Q > const &a, vec< 3, T, Q > const &b)
 Find the point on a straight line which is closest to a given point. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 2, T, Q > closestPointOnLine (vec< 2, T, Q > const &point, vec< 2, T, Q > const &a, vec< 2, T, Q > const &b)
 The 2D overload works the same way.
 

Detailed Description

GLM_GTX_closest_point

See also
Core features (dependence)

Definition in file closest_point.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00010_source.html ================================================ 0.9.9 API documentation: closest_point.hpp Source File
0.9.9 API documentation
closest_point.hpp
Go to the documentation of this file.
1 
13 #pragma once
14 
15 // Dependency:
16 #include "../glm.hpp"
17 
18 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
19 # ifndef GLM_ENABLE_EXPERIMENTAL
20 # pragma message("GLM: GLM_GTX_closest_point is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
21 # else
22 # pragma message("GLM: GLM_GTX_closest_point extension included")
23 # endif
24 #endif
25 
26 namespace glm
27 {
30 
33  template<typename T, qualifier Q>
34  GLM_FUNC_DECL vec<3, T, Q> closestPointOnLine(
35  vec<3, T, Q> const& point,
36  vec<3, T, Q> const& a,
37  vec<3, T, Q> const& b);
38 
40  template<typename T, qualifier Q>
41  GLM_FUNC_DECL vec<2, T, Q> closestPointOnLine(
42  vec<2, T, Q> const& point,
43  vec<2, T, Q> const& a,
44  vec<2, T, Q> const& b);
45 
47 }// namespace glm
48 
49 #include "closest_point.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00011.html ================================================ 0.9.9 API documentation: color_encoding.hpp File Reference
0.9.9 API documentation
color_encoding.hpp File Reference

GLM_GTX_color_encoding More...

Go to the source code of this file.

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > convertD65XYZToD50XYZ (vec< 3, T, Q > const &ColorD65XYZ)
 Convert a D65 XYZ color to D50 XYZ.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > convertD65XYZToLinearSRGB (vec< 3, T, Q > const &ColorD65XYZ)
 Convert a D65 XYZ color to linear sRGB.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > convertLinearSRGBToD50XYZ (vec< 3, T, Q > const &ColorLinearSRGB)
 Convert a linear sRGB color to D50 XYZ.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > convertLinearSRGBToD65XYZ (vec< 3, T, Q > const &ColorLinearSRGB)
 Convert a linear sRGB color to D65 XYZ.
 

Detailed Description

GLM_GTX_color_encoding

See also
Core features (dependence)
GLM_GTX_color_encoding (dependence)

Definition in file color_encoding.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00011_source.html ================================================ 0.9.9 API documentation: color_encoding.hpp Source File
0.9.9 API documentation
color_encoding.hpp
Go to the documentation of this file.
1 
14 #pragma once
15 
16 // Dependencies
17 #include "../detail/setup.hpp"
18 #include "../detail/qualifier.hpp"
19 #include "../vec3.hpp"
20 #include <limits>
21 
22 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
23 # ifndef GLM_ENABLE_EXPERIMENTAL
24 # pragma message("GLM: GLM_GTC_color_encoding is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
25 # else
26 # pragma message("GLM: GLM_GTC_color_encoding extension included")
27 # endif
28 #endif
29 
30 namespace glm
31 {
34 
36  template<typename T, qualifier Q>
37  GLM_FUNC_DECL vec<3, T, Q> convertLinearSRGBToD65XYZ(vec<3, T, Q> const& ColorLinearSRGB);
38 
40  template<typename T, qualifier Q>
41  GLM_FUNC_DECL vec<3, T, Q> convertLinearSRGBToD50XYZ(vec<3, T, Q> const& ColorLinearSRGB);
42 
44  template<typename T, qualifier Q>
45  GLM_FUNC_DECL vec<3, T, Q> convertD65XYZToLinearSRGB(vec<3, T, Q> const& ColorD65XYZ);
46 
48  template<typename T, qualifier Q>
49  GLM_FUNC_DECL vec<3, T, Q> convertD65XYZToD50XYZ(vec<3, T, Q> const& ColorD65XYZ);
50 
52 } //namespace glm
53 
54 #include "color_encoding.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00012.html ================================================ 0.9.9 API documentation: color_space.hpp File Reference
0.9.9 API documentation
gtc/color_space.hpp File Reference

GLM_GTC_color_space More...

Go to the source code of this file.

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > convertLinearToSRGB (vec< L, T, Q > const &ColorLinear)
 Convert a linear color to sRGB color using a standard gamma correction. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > convertLinearToSRGB (vec< L, T, Q > const &ColorLinear, T Gamma)
 Convert a linear color to sRGB color using a custom gamma correction. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > convertSRGBToLinear (vec< L, T, Q > const &ColorSRGB)
 Convert a sRGB color to linear color using a standard gamma correction. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > convertSRGBToLinear (vec< L, T, Q > const &ColorSRGB, T Gamma)
 Convert a sRGB color to linear color using a custom gamma correction.
 

Detailed Description

GLM_GTC_color_space

See also
Core features (dependence)
GLM_GTC_color_space (dependence)

Definition in file gtc/color_space.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00012_source.html ================================================ 0.9.9 API documentation: color_space.hpp Source File
0.9.9 API documentation
gtc/color_space.hpp
Go to the documentation of this file.
1 
14 #pragma once
15 
16 // Dependencies
17 #include "../detail/setup.hpp"
18 #include "../detail/qualifier.hpp"
19 #include "../exponential.hpp"
20 #include "../vec3.hpp"
21 #include "../vec4.hpp"
22 #include <limits>
23 
24 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
25 # pragma message("GLM: GLM_GTC_color_space extension included")
26 #endif
27 
28 namespace glm
29 {
32 
35  template<length_t L, typename T, qualifier Q>
36  GLM_FUNC_DECL vec<L, T, Q> convertLinearToSRGB(vec<L, T, Q> const& ColorLinear);
37 
40  template<length_t L, typename T, qualifier Q>
41  GLM_FUNC_DECL vec<L, T, Q> convertLinearToSRGB(vec<L, T, Q> const& ColorLinear, T Gamma);
42 
45  template<length_t L, typename T, qualifier Q>
46  GLM_FUNC_DECL vec<L, T, Q> convertSRGBToLinear(vec<L, T, Q> const& ColorSRGB);
47 
49  // IEC 61966-2-1:1999 / Rec. 709 specification https://www.w3.org/Graphics/Color/srgb
50  template<length_t L, typename T, qualifier Q>
51  GLM_FUNC_DECL vec<L, T, Q> convertSRGBToLinear(vec<L, T, Q> const& ColorSRGB, T Gamma);
52 
54 } //namespace glm
55 
56 #include "color_space.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00013.html ================================================ 0.9.9 API documentation: color_space.hpp File Reference
0.9.9 API documentation
gtx/color_space.hpp File Reference

GLM_GTX_color_space More...

Go to the source code of this file.

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > hsvColor (vec< 3, T, Q > const &rgbValue)
 Converts a color from RGB color space to its color in HSV color space. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T luminosity (vec< 3, T, Q > const &color)
 Compute color luminosity associating ratios (0.33, 0.59, 0.11) to RGB channels. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > rgbColor (vec< 3, T, Q > const &hsvValue)
 Converts a color from HSV color space to its color in RGB color space. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > saturation (T const s)
 Build a saturation matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > saturation (T const s, vec< 3, T, Q > const &color)
 Modify the saturation of a color. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, T, Q > saturation (T const s, vec< 4, T, Q > const &color)
 Modify the saturation of a color. More...
 

Detailed Description

GLM_GTX_color_space

See also
Core features (dependence)

Definition in file gtx/color_space.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00013_source.html ================================================ 0.9.9 API documentation: color_space.hpp Source File
0.9.9 API documentation
gtx/color_space.hpp
Go to the documentation of this file.
1 
13 #pragma once
14 
15 // Dependency:
16 #include "../glm.hpp"
17 
18 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
19 # ifndef GLM_ENABLE_EXPERIMENTAL
20 # pragma message("GLM: GLM_GTX_color_space is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
21 # else
22 # pragma message("GLM: GLM_GTX_color_space extension included")
23 # endif
24 #endif
25 
26 namespace glm
27 {
30 
33  template<typename T, qualifier Q>
34  GLM_FUNC_DECL vec<3, T, Q> rgbColor(
35  vec<3, T, Q> const& hsvValue);
36 
39  template<typename T, qualifier Q>
40  GLM_FUNC_DECL vec<3, T, Q> hsvColor(
41  vec<3, T, Q> const& rgbValue);
42 
45  template<typename T>
46  GLM_FUNC_DECL mat<4, 4, T, defaultp> saturation(
47  T const s);
48 
51  template<typename T, qualifier Q>
52  GLM_FUNC_DECL vec<3, T, Q> saturation(
53  T const s,
54  vec<3, T, Q> const& color);
55 
58  template<typename T, qualifier Q>
59  GLM_FUNC_DECL vec<4, T, Q> saturation(
60  T const s,
61  vec<4, T, Q> const& color);
62 
65  template<typename T, qualifier Q>
66  GLM_FUNC_DECL T luminosity(
67  vec<3, T, Q> const& color);
68 
70 }//namespace glm
71 
72 #include "color_space.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00014.html ================================================ 0.9.9 API documentation: color_space_YCoCg.hpp File Reference
0.9.9 API documentation
color_space_YCoCg.hpp File Reference

GLM_GTX_color_space_YCoCg More...

Go to the source code of this file.

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > rgb2YCoCg (vec< 3, T, Q > const &rgbColor)
 Convert a color from RGB color space to YCoCg color space. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > rgb2YCoCgR (vec< 3, T, Q > const &rgbColor)
 Convert a color from RGB color space to YCoCgR color space. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > YCoCg2rgb (vec< 3, T, Q > const &YCoCgColor)
 Convert a color from YCoCg color space to RGB color space. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > YCoCgR2rgb (vec< 3, T, Q > const &YCoCgColor)
 Convert a color from YCoCgR color space to RGB color space. More...
 

Detailed Description

GLM_GTX_color_space_YCoCg

See also
Core features (dependence)

Definition in file color_space_YCoCg.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00014_source.html ================================================ 0.9.9 API documentation: color_space_YCoCg.hpp Source File
0.9.9 API documentation
color_space_YCoCg.hpp
Go to the documentation of this file.
1 
13 #pragma once
14 
15 // Dependency:
16 #include "../glm.hpp"
17 
18 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
19 # ifndef GLM_ENABLE_EXPERIMENTAL
20 # pragma message("GLM: GLM_GTX_color_space_YCoCg is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
21 # else
22 # pragma message("GLM: GLM_GTX_color_space_YCoCg extension included")
23 # endif
24 #endif
25 
26 namespace glm
27 {
30 
33  template<typename T, qualifier Q>
34  GLM_FUNC_DECL vec<3, T, Q> rgb2YCoCg(
35  vec<3, T, Q> const& rgbColor);
36 
39  template<typename T, qualifier Q>
40  GLM_FUNC_DECL vec<3, T, Q> YCoCg2rgb(
41  vec<3, T, Q> const& YCoCgColor);
42 
46  template<typename T, qualifier Q>
47  GLM_FUNC_DECL vec<3, T, Q> rgb2YCoCgR(
48  vec<3, T, Q> const& rgbColor);
49 
53  template<typename T, qualifier Q>
54  GLM_FUNC_DECL vec<3, T, Q> YCoCgR2rgb(
55  vec<3, T, Q> const& YCoCgColor);
56 
58 }//namespace glm
59 
60 #include "color_space_YCoCg.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00015.html ================================================ 0.9.9 API documentation: common.hpp File Reference
0.9.9 API documentation
common.hpp File Reference

Core features More...

Go to the source code of this file.

Functions

template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType abs (genType x)
 Returns x if x >= 0; otherwise, it returns -x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > abs (vec< L, T, Q > const &x)
 Returns x if x >= 0; otherwise, it returns -x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > ceil (vec< L, T, Q > const &x)
 Returns a value equal to the nearest integer that is greater than or equal to x. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType clamp (genType x, genType minVal, genType maxVal)
 Returns min(max(x, minVal), maxVal) for each component in x using the floating-point values minVal and maxVal. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > clamp (vec< L, T, Q > const &x, T minVal, T maxVal)
 Returns min(max(x, minVal), maxVal) for each component in x using the floating-point values minVal and maxVal. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > clamp (vec< L, T, Q > const &x, vec< L, T, Q > const &minVal, vec< L, T, Q > const &maxVal)
 Returns min(max(x, minVal), maxVal) for each component in x using the floating-point values minVal and maxVal. More...
 
GLM_FUNC_DECL int floatBitsToInt (float const &v)
 Returns a signed integer value representing the encoding of a floating-point value. More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL vec< L, int, Q > floatBitsToInt (vec< L, float, Q > const &v)
 Returns a signed integer value representing the encoding of a floating-point value. More...
 
GLM_FUNC_DECL uint floatBitsToUint (float const &v)
 Returns an unsigned integer value representing the encoding of a floating-point value. More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL vec< L, uint, Q > floatBitsToUint (vec< L, float, Q > const &v)
 Returns an unsigned integer value representing the encoding of a floating-point value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > floor (vec< L, T, Q > const &x)
 Returns a value equal to the nearest integer that is less than or equal to x. More...
 
template<typename genType >
GLM_FUNC_DECL genType fma (genType const &a, genType const &b, genType const &c)
 Computes and returns a * b + c. More...
 
template<typename genType >
GLM_FUNC_DECL genType fract (genType x)
 Return x - floor(x). More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fract (vec< L, T, Q > const &x)
 Return x - floor(x). More...
 
template<typename genType >
GLM_FUNC_DECL genType frexp (genType x, int &exp)
 Splits x into a floating-point significand in the range [0.5, 1.0) and an integral exponent of two, such that: x = significand * exp(2, exponent) More...
 
GLM_FUNC_DECL float intBitsToFloat (int const &v)
 Returns a floating-point value corresponding to a signed integer encoding of a floating-point value. More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL vec< L, float, Q > intBitsToFloat (vec< L, int, Q > const &v)
 Returns a floating-point value corresponding to a signed integer encoding of a floating-point value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, bool, Q > isinf (vec< L, T, Q > const &x)
 Returns true if x holds a positive infinity or negative infinity representation in the underlying implementation's set of floating point representations. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, bool, Q > isnan (vec< L, T, Q > const &x)
 Returns true if x holds a NaN (not a number) representation in the underlying implementation's set of floating point representations. More...
 
template<typename genType >
GLM_FUNC_DECL genType ldexp (genType const &x, int const &exp)
 Builds a floating-point number from x and the corresponding integral exponent of two in exp, returning: significand * exp(2, exponent) More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType max (genType x, genType y)
 Returns y if x < y; otherwise, it returns x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > max (vec< L, T, Q > const &x, T y)
 Returns y if x < y; otherwise, it returns x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > max (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Returns y if x < y; otherwise, it returns x. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType min (genType x, genType y)
 Returns y if y < x; otherwise, it returns x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > min (vec< L, T, Q > const &x, T y)
 Returns y if y < x; otherwise, it returns x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > min (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Returns y if y < x; otherwise, it returns x. More...
 
template<typename genTypeT , typename genTypeU >
GLM_FUNC_DECL genTypeT mix (genTypeT x, genTypeT y, genTypeU a)
 If genTypeU is a floating scalar or vector: Returns x * (1.0 - a) + y * a, i.e., the linear blend of x and y using the floating-point value a. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > mod (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Modulus. More...
 
template<typename genType >
GLM_FUNC_DECL genType modf (genType x, genType &i)
 Returns the fractional part of x and sets i to the integer part (as a whole number floating point value). More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > round (vec< L, T, Q > const &x)
 Returns a value equal to the nearest integer to x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > roundEven (vec< L, T, Q > const &x)
 Returns a value equal to the nearest integer to x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > sign (vec< L, T, Q > const &x)
 Returns 1.0 if x > 0, 0.0 if x == 0, or -1.0 if x < 0. More...
 
template<typename genType >
GLM_FUNC_DECL genType smoothstep (genType edge0, genType edge1, genType x)
 Returns 0.0 if x <= edge0 and 1.0 if x >= edge1 and performs smooth Hermite interpolation between 0 and 1 when edge0 < x < edge1. More...
 
template<typename genType >
GLM_FUNC_DECL genType step (genType edge, genType x)
 Returns 0.0 if x < edge, otherwise it returns 1.0 for each component of a genType. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > step (T edge, vec< L, T, Q > const &x)
 Returns 0.0 if x < edge, otherwise it returns 1.0. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > step (vec< L, T, Q > const &edge, vec< L, T, Q > const &x)
 Returns 0.0 if x < edge, otherwise it returns 1.0. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > trunc (vec< L, T, Q > const &x)
 Returns a value equal to the nearest integer to x whose absolute value is not larger than the absolute value of x. More...
 
GLM_FUNC_DECL float uintBitsToFloat (uint const &v)
 Returns a floating-point value corresponding to an unsigned integer encoding of a floating-point value. More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL vec< L, float, Q > uintBitsToFloat (vec< L, uint, Q > const &v)
 Returns a floating-point value corresponding to an unsigned integer encoding of a floating-point value. More...
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00015_source.html ================================================ 0.9.9 API documentation: common.hpp Source File
0.9.9 API documentation
common.hpp
Go to the documentation of this file.
1 
15 #pragma once
16 
17 #include "detail/qualifier.hpp"
18 #include "detail/_fixes.hpp"
19 
20 namespace glm
21 {
24 
31  template<typename genType>
32  GLM_FUNC_DECL GLM_CONSTEXPR genType abs(genType x);
33 
42  template<length_t L, typename T, qualifier Q>
43  GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> abs(vec<L, T, Q> const& x);
44 
53  template<length_t L, typename T, qualifier Q>
54  GLM_FUNC_DECL vec<L, T, Q> sign(vec<L, T, Q> const& x);
55 
64  template<length_t L, typename T, qualifier Q>
65  GLM_FUNC_DECL vec<L, T, Q> floor(vec<L, T, Q> const& x);
66 
76  template<length_t L, typename T, qualifier Q>
77  GLM_FUNC_DECL vec<L, T, Q> trunc(vec<L, T, Q> const& x);
78 
91  template<length_t L, typename T, qualifier Q>
92  GLM_FUNC_DECL vec<L, T, Q> round(vec<L, T, Q> const& x);
93 
105  template<length_t L, typename T, qualifier Q>
106  GLM_FUNC_DECL vec<L, T, Q> roundEven(vec<L, T, Q> const& x);
107 
117  template<length_t L, typename T, qualifier Q>
118  GLM_FUNC_DECL vec<L, T, Q> ceil(vec<L, T, Q> const& x);
119 
126  template<typename genType>
127  GLM_FUNC_DECL genType fract(genType x);
128 
137  template<length_t L, typename T, qualifier Q>
138  GLM_FUNC_DECL vec<L, T, Q> fract(vec<L, T, Q> const& x);
139 
140  template<typename genType>
141  GLM_FUNC_DECL genType mod(genType x, genType y);
142 
143  template<length_t L, typename T, qualifier Q>
144  GLM_FUNC_DECL vec<L, T, Q> mod(vec<L, T, Q> const& x, T y);
145 
155  template<length_t L, typename T, qualifier Q>
156  GLM_FUNC_DECL vec<L, T, Q> mod(vec<L, T, Q> const& x, vec<L, T, Q> const& y);
157 
167  template<typename genType>
168  GLM_FUNC_DECL genType modf(genType x, genType& i);
169 
176  template<typename genType>
177  GLM_FUNC_DECL GLM_CONSTEXPR genType min(genType x, genType y);
178 
187  template<length_t L, typename T, qualifier Q>
188  GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> min(vec<L, T, Q> const& x, T y);
189 
198  template<length_t L, typename T, qualifier Q>
199  GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> min(vec<L, T, Q> const& x, vec<L, T, Q> const& y);
200 
207  template<typename genType>
208  GLM_FUNC_DECL GLM_CONSTEXPR genType max(genType x, genType y);
209 
218  template<length_t L, typename T, qualifier Q>
219  GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> max(vec<L, T, Q> const& x, T y);
220 
229  template<length_t L, typename T, qualifier Q>
230  GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> max(vec<L, T, Q> const& x, vec<L, T, Q> const& y);
231 
239  template<typename genType>
240  GLM_FUNC_DECL GLM_CONSTEXPR genType clamp(genType x, genType minVal, genType maxVal);
241 
251  template<length_t L, typename T, qualifier Q>
252  GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> clamp(vec<L, T, Q> const& x, T minVal, T maxVal);
253 
263  template<length_t L, typename T, qualifier Q>
264  GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> clamp(vec<L, T, Q> const& x, vec<L, T, Q> const& minVal, vec<L, T, Q> const& maxVal);
265 
308  template<typename genTypeT, typename genTypeU>
309  GLM_FUNC_DECL genTypeT mix(genTypeT x, genTypeT y, genTypeU a);
310 
311  template<length_t L, typename T, typename U, qualifier Q>
312  GLM_FUNC_DECL vec<L, T, Q> mix(vec<L, T, Q> const& x, vec<L, T, Q> const& y, vec<L, U, Q> const& a);
313 
314  template<length_t L, typename T, typename U, qualifier Q>
315  GLM_FUNC_DECL vec<L, T, Q> mix(vec<L, T, Q> const& x, vec<L, T, Q> const& y, U a);
316 
321  template<typename genType>
322  GLM_FUNC_DECL genType step(genType edge, genType x);
323 
332  template<length_t L, typename T, qualifier Q>
333  GLM_FUNC_DECL vec<L, T, Q> step(T edge, vec<L, T, Q> const& x);
334 
343  template<length_t L, typename T, qualifier Q>
344  GLM_FUNC_DECL vec<L, T, Q> step(vec<L, T, Q> const& edge, vec<L, T, Q> const& x);
345 
360  template<typename genType>
361  GLM_FUNC_DECL genType smoothstep(genType edge0, genType edge1, genType x);
362 
363  template<length_t L, typename T, qualifier Q>
364  GLM_FUNC_DECL vec<L, T, Q> smoothstep(T edge0, T edge1, vec<L, T, Q> const& x);
365 
366  template<length_t L, typename T, qualifier Q>
367  GLM_FUNC_DECL vec<L, T, Q> smoothstep(vec<L, T, Q> const& edge0, vec<L, T, Q> const& edge1, vec<L, T, Q> const& x);
368 
383  template<length_t L, typename T, qualifier Q>
384  GLM_FUNC_DECL vec<L, bool, Q> isnan(vec<L, T, Q> const& x);
385 
398  template<length_t L, typename T, qualifier Q>
399  GLM_FUNC_DECL vec<L, bool, Q> isinf(vec<L, T, Q> const& x);
400 
407  GLM_FUNC_DECL int floatBitsToInt(float const& v);
408 
418  template<length_t L, qualifier Q>
419  GLM_FUNC_DECL vec<L, int, Q> floatBitsToInt(vec<L, float, Q> const& v);
420 
427  GLM_FUNC_DECL uint floatBitsToUint(float const& v);
428 
438  template<length_t L, qualifier Q>
439  GLM_FUNC_DECL vec<L, uint, Q> floatBitsToUint(vec<L, float, Q> const& v);
440 
449  GLM_FUNC_DECL float intBitsToFloat(int const& v);
450 
462  template<length_t L, qualifier Q>
463  GLM_FUNC_DECL vec<L, float, Q> intBitsToFloat(vec<L, int, Q> const& v);
464 
473  GLM_FUNC_DECL float uintBitsToFloat(uint const& v);
474 
486  template<length_t L, qualifier Q>
487  GLM_FUNC_DECL vec<L, float, Q> uintBitsToFloat(vec<L, uint, Q> const& v);
488 
495  template<typename genType>
496  GLM_FUNC_DECL genType fma(genType const& a, genType const& b, genType const& c);
497 
512  template<typename genType>
513  GLM_FUNC_DECL genType frexp(genType x, int& exp);
514 
515  template<length_t L, typename T, qualifier Q>
516  GLM_FUNC_DECL vec<L, T, Q> frexp(vec<L, T, Q> const& v, vec<L, int, Q>& exp);
517 
529  template<typename genType>
530  GLM_FUNC_DECL genType ldexp(genType const& x, int const& exp);
531 
532  template<length_t L, typename T, qualifier Q>
533  GLM_FUNC_DECL vec<L, T, Q> ldexp(vec<L, T, Q> const& v, vec<L, int, Q> const& exp);
534 
536 }//namespace glm
537 
538 #include "detail/func_common.inl"
539 
GLM_FUNC_DECL vec< L, T, Q > floor(vec< L, T, Q > const &x)
Returns a value equal to the nearest integer that is less than or equal to x.
GLM_FUNC_DECL genType fma(genType const &a, genType const &b, genType const &c)
Computes and returns a * b + c.
GLM_FUNC_DECL vec< L, T, Q > trunc(vec< L, T, Q > const &x)
Returns a value equal to the nearest integer to x whose absolute value is not larger than the absolut...
GLM_FUNC_DECL vec< L, T, Q > mod(vec< L, T, Q > const &x, vec< L, T, Q > const &y)
Modulus.
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > clamp(vec< L, T, Q > const &x, vec< L, T, Q > const &minVal, vec< L, T, Q > const &maxVal)
Returns min(max(x, minVal), maxVal) for each component in x using the floating-point values minVal an...
GLM_FUNC_DECL vec< L, T, Q > round(vec< L, T, Q > const &x)
Returns a value equal to the nearest integer to x.
GLM_FUNC_DECL vec< L, float, Q > uintBitsToFloat(vec< L, uint, Q > const &v)
Returns a floating-point value corresponding to an unsigned integer encoding of a floating-point value...
GLM_FUNC_DECL vec< L, T, Q > sign(vec< L, T, Q > const &x)
Returns 1.0 if x > 0, 0.0 if x == 0, or -1.0 if x < 0.
GLM_FUNC_DECL vec< L, bool, Q > isinf(vec< L, T, Q > const &x)
Returns true if x holds a positive infinity or negative infinity representation in the underlying imp...
GLM_FUNC_DECL vec< L, T, Q > roundEven(vec< L, T, Q > const &x)
Returns a value equal to the nearest integer to x.
GLM_FUNC_DECL genType modf(genType x, genType &i)
Returns the fractional part of x and sets i to the integer part (as a whole number floating point val...
GLM_FUNC_DECL vec< L, T, Q > ceil(vec< L, T, Q > const &x)
Returns a value equal to the nearest integer that is greater than or equal to x.
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > min(vec< L, T, Q > const &x, vec< L, T, Q > const &y)
Returns y if y < x; otherwise, it returns x.
GLM_FUNC_DECL vec< L, float, Q > intBitsToFloat(vec< L, int, Q > const &v)
Returns a floating-point value corresponding to a signed integer encoding of a floating-point value...
GLM_FUNC_DECL vec< L, bool, Q > isnan(vec< L, T, Q > const &x)
Returns true if x holds a NaN (not a number) representation in the underlying implementation's set of...
GLM_FUNC_DECL vec< L, T, Q > exp(vec< L, T, Q > const &v)
Returns the natural exponentiation of x, i.e., e^x.
GLM_FUNC_DECL vec< L, uint, Q > floatBitsToUint(vec< L, float, Q > const &v)
Returns an unsigned integer value representing the encoding of a floating-point value.
GLM_FUNC_DECL genType smoothstep(genType edge0, genType edge1, genType x)
Returns 0.0 if x <= edge0 and 1.0 if x >= edge1 and performs smooth Hermite interpolation between 0 a...
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > abs(vec< L, T, Q > const &x)
Returns x if x >= 0; otherwise, it returns -x.
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > max(vec< L, T, Q > const &x, vec< L, T, Q > const &y)
Returns y if x < y; otherwise, it returns x.
GLM_FUNC_DECL vec< L, T, Q > step(vec< L, T, Q > const &edge, vec< L, T, Q > const &x)
Returns 0.0 if x < edge, otherwise it returns 1.0.
GLM_FUNC_DECL vec< L, T, Q > fract(vec< L, T, Q > const &x)
Return x - floor(x).
GLM_FUNC_DECL genType ldexp(genType const &x, int const &exp)
Builds a floating-point number from x and the corresponding integral exponent of two in exp...
GLM_FUNC_DECL vec< L, int, Q > floatBitsToInt(vec< L, float, Q > const &v)
Returns a signed integer value representing the encoding of a floating-point value.
GLM_FUNC_DECL genTypeT mix(genTypeT x, genTypeT y, genTypeU a)
If genTypeU is a floating scalar or vector: Returns x * (1.0 - a) + y * a, i.e., the linear blend of ...
GLM_FUNC_DECL genType frexp(genType x, int &exp)
Splits x into a floating-point significand in the range [0.5, 1.0) and an integral exponent of two...
Definition: common.hpp:20
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00016.html ================================================ 0.9.9 API documentation: common.hpp File Reference
gtx/common.hpp File Reference

GLM_GTX_common More...


Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, bool, Q > closeBounded (vec< L, T, Q > const &Value, vec< L, T, Q > const &Min, vec< L, T, Q > const &Max)
 Returns whether vector component values are within an interval. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fmod (vec< L, T, Q > const &v)
 Similar to 'mod' but with a different rounding and integer support. More...
 
template<typename genType >
GLM_FUNC_DECL genType::bool_type isdenormal (genType const &x)
 Returns true if x is a denormalized number. Numbers whose absolute value is too small to be represented in the normal format are represented in an alternate, denormalized format. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, bool, Q > openBounded (vec< L, T, Q > const &Value, vec< L, T, Q > const &Min, vec< L, T, Q > const &Max)
 Returns whether vector component values are within an interval. More...
 

Detailed Description

GLM_GTX_common

See also
Core features (dependence)

Definition in file gtx/common.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00016_source.html ================================================ 0.9.9 API documentation: common.hpp Source File
gtx/common.hpp
1 
13 #pragma once
14 
15 // Dependencies:
16 #include "../vec2.hpp"
17 #include "../vec3.hpp"
18 #include "../vec4.hpp"
19 #include "../gtc/vec1.hpp"
20 
21 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
22 # ifndef GLM_ENABLE_EXPERIMENTAL
23 # pragma message("GLM: GLM_GTX_common is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
24 # else
25 # pragma message("GLM: GLM_GTX_common extension included")
26 # endif
27 #endif
28 
29 namespace glm
30 {
33 
42  template<typename genType>
43  GLM_FUNC_DECL typename genType::bool_type isdenormal(genType const& x);
44 
50  template<length_t L, typename T, qualifier Q>
51  GLM_FUNC_DECL vec<L, T, Q> fmod(vec<L, T, Q> const& v);
52 
60  template <length_t L, typename T, qualifier Q>
61  GLM_FUNC_DECL vec<L, bool, Q> openBounded(vec<L, T, Q> const& Value, vec<L, T, Q> const& Min, vec<L, T, Q> const& Max);
62 
70  template <length_t L, typename T, qualifier Q>
71  GLM_FUNC_DECL vec<L, bool, Q> closeBounded(vec<L, T, Q> const& Value, vec<L, T, Q> const& Min, vec<L, T, Q> const& Max);
72 
74 }//namespace glm
75 
76 #include "common.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00017.html ================================================ 0.9.9 API documentation: compatibility.hpp File Reference
compatibility.hpp File Reference

GLM_GTX_compatibility More...


Typedefs

typedef bool bool1
 boolean type with 1 component. (From GLM_GTX_compatibility extension)
 
typedef bool bool1x1
 boolean matrix with 1 x 1 component. (From GLM_GTX_compatibility extension)
 
typedef vec< 2, bool, highp > bool2
 boolean type with 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 2, 2, bool, highp > bool2x2
 boolean matrix with 2 x 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 2, 3, bool, highp > bool2x3
 boolean matrix with 2 x 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 2, 4, bool, highp > bool2x4
 boolean matrix with 2 x 4 components. (From GLM_GTX_compatibility extension)
 
typedef vec< 3, bool, highp > bool3
 boolean type with 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 3, 2, bool, highp > bool3x2
 boolean matrix with 3 x 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 3, 3, bool, highp > bool3x3
 boolean matrix with 3 x 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 3, 4, bool, highp > bool3x4
 boolean matrix with 3 x 4 components. (From GLM_GTX_compatibility extension)
 
typedef vec< 4, bool, highp > bool4
 boolean type with 4 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 4, 2, bool, highp > bool4x2
 boolean matrix with 4 x 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 4, 3, bool, highp > bool4x3
 boolean matrix with 4 x 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 4, 4, bool, highp > bool4x4
 boolean matrix with 4 x 4 components. (From GLM_GTX_compatibility extension)
 
typedef double double1
 double-qualifier floating-point vector with 1 component. (From GLM_GTX_compatibility extension)
 
typedef double double1x1
 double-qualifier floating-point matrix with 1 component. (From GLM_GTX_compatibility extension)
 
typedef vec< 2, double, highp > double2
 double-qualifier floating-point vector with 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 2, 2, double, highp > double2x2
 double-qualifier floating-point matrix with 2 x 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 2, 3, double, highp > double2x3
 double-qualifier floating-point matrix with 2 x 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 2, 4, double, highp > double2x4
 double-qualifier floating-point matrix with 2 x 4 components. (From GLM_GTX_compatibility extension)
 
typedef vec< 3, double, highp > double3
 double-qualifier floating-point vector with 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 3, 2, double, highp > double3x2
 double-qualifier floating-point matrix with 3 x 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 3, 3, double, highp > double3x3
 double-qualifier floating-point matrix with 3 x 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 3, 4, double, highp > double3x4
 double-qualifier floating-point matrix with 3 x 4 components. (From GLM_GTX_compatibility extension)
 
typedef vec< 4, double, highp > double4
 double-qualifier floating-point vector with 4 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 4, 2, double, highp > double4x2
 double-qualifier floating-point matrix with 4 x 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 4, 3, double, highp > double4x3
 double-qualifier floating-point matrix with 4 x 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 4, 4, double, highp > double4x4
 double-qualifier floating-point matrix with 4 x 4 components. (From GLM_GTX_compatibility extension)
 
typedef float float1
 single-qualifier floating-point vector with 1 component. (From GLM_GTX_compatibility extension)
 
typedef float float1x1
 single-qualifier floating-point matrix with 1 component. (From GLM_GTX_compatibility extension)
 
typedef vec< 2, float, highp > float2
 single-qualifier floating-point vector with 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 2, 2, float, highp > float2x2
 single-qualifier floating-point matrix with 2 x 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 2, 3, float, highp > float2x3
 single-qualifier floating-point matrix with 2 x 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 2, 4, float, highp > float2x4
 single-qualifier floating-point matrix with 2 x 4 components. (From GLM_GTX_compatibility extension)
 
typedef vec< 3, float, highp > float3
 single-qualifier floating-point vector with 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 3, 2, float, highp > float3x2
 single-qualifier floating-point matrix with 3 x 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 3, 3, float, highp > float3x3
 single-qualifier floating-point matrix with 3 x 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 3, 4, float, highp > float3x4
 single-qualifier floating-point matrix with 3 x 4 components. (From GLM_GTX_compatibility extension)
 
typedef vec< 4, float, highp > float4
 single-qualifier floating-point vector with 4 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 4, 2, float, highp > float4x2
 single-qualifier floating-point matrix with 4 x 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 4, 3, float, highp > float4x3
 single-qualifier floating-point matrix with 4 x 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 4, 4, float, highp > float4x4
 single-qualifier floating-point matrix with 4 x 4 components. (From GLM_GTX_compatibility extension)
 
typedef int int1
 integer vector with 1 component. (From GLM_GTX_compatibility extension)
 
typedef int int1x1
 integer matrix with 1 component. (From GLM_GTX_compatibility extension)
 
typedef vec< 2, int, highp > int2
 integer vector with 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 2, 2, int, highp > int2x2
 integer matrix with 2 x 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 2, 3, int, highp > int2x3
 integer matrix with 2 x 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 2, 4, int, highp > int2x4
 integer matrix with 2 x 4 components. (From GLM_GTX_compatibility extension)
 
typedef vec< 3, int, highp > int3
 integer vector with 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 3, 2, int, highp > int3x2
 integer matrix with 3 x 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 3, 3, int, highp > int3x3
 integer matrix with 3 x 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 3, 4, int, highp > int3x4
 integer matrix with 3 x 4 components. (From GLM_GTX_compatibility extension)
 
typedef vec< 4, int, highp > int4
 integer vector with 4 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 4, 2, int, highp > int4x2
 integer matrix with 4 x 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 4, 3, int, highp > int4x3
 integer matrix with 4 x 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 4, 4, int, highp > int4x4
 integer matrix with 4 x 4 components. (From GLM_GTX_compatibility extension)
 

Functions

template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER T atan2 (T x, T y)
 Arc tangent. Returns an angle whose tangent is y/x. The signs of x and y are used to determine what quadrant the angle is in. The range of values returned by this function is [-PI, PI]. Results are undefined if x and y are both 0. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER vec< 2, T, Q > atan2 (const vec< 2, T, Q > &x, const vec< 2, T, Q > &y)
 Arc tangent. Returns an angle whose tangent is y/x. The signs of x and y are used to determine what quadrant the angle is in. The range of values returned by this function is [-PI, PI]. Results are undefined if x and y are both 0. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER vec< 3, T, Q > atan2 (const vec< 3, T, Q > &x, const vec< 3, T, Q > &y)
 Arc tangent. Returns an angle whose tangent is y/x. The signs of x and y are used to determine what quadrant the angle is in. The range of values returned by this function is [-PI, PI]. Results are undefined if x and y are both 0. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER vec< 4, T, Q > atan2 (const vec< 4, T, Q > &x, const vec< 4, T, Q > &y)
 Arc tangent. Returns an angle whose tangent is y/x. The signs of x and y are used to determine what quadrant the angle is in. The range of values returned by this function is [-PI, PI]. Results are undefined if x and y are both 0. (From GLM_GTX_compatibility)
 
template<typename genType >
GLM_FUNC_DECL bool isfinite (genType const &x)
 Test whether or not a scalar or each vector component is a finite value. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 1, bool, Q > isfinite (const vec< 1, T, Q > &x)
 Test whether or not a scalar or each vector component is a finite value. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 2, bool, Q > isfinite (const vec< 2, T, Q > &x)
 Test whether or not a scalar or each vector component is a finite value. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, bool, Q > isfinite (const vec< 3, T, Q > &x)
 Test whether or not a scalar or each vector component is a finite value. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, bool, Q > isfinite (const vec< 4, T, Q > &x)
 Test whether or not a scalar or each vector component is a finite value. (From GLM_GTX_compatibility)
 
template<typename T >
GLM_FUNC_QUALIFIER T lerp (T x, T y, T a)
 Returns x * (1.0 - a) + y * a, i.e., the linear blend of x and y using the floating-point value a. The value for a is not restricted to the range [0, 1]. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER vec< 2, T, Q > lerp (const vec< 2, T, Q > &x, const vec< 2, T, Q > &y, T a)
 Returns x * (1.0 - a) + y * a, i.e., the linear blend of x and y using the floating-point value a. The value for a is not restricted to the range [0, 1]. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER vec< 3, T, Q > lerp (const vec< 3, T, Q > &x, const vec< 3, T, Q > &y, T a)
 Returns x * (1.0 - a) + y * a, i.e., the linear blend of x and y using the floating-point value a. The value for a is not restricted to the range [0, 1]. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER vec< 4, T, Q > lerp (const vec< 4, T, Q > &x, const vec< 4, T, Q > &y, T a)
 Returns x * (1.0 - a) + y * a, i.e., the linear blend of x and y using the floating-point value a. The value for a is not restricted to the range [0, 1]. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER vec< 2, T, Q > lerp (const vec< 2, T, Q > &x, const vec< 2, T, Q > &y, const vec< 2, T, Q > &a)
 Returns the component-wise result of x * (1.0 - a) + y * a, i.e., the linear blend of x and y using vector a. The value for a is not restricted to the range [0, 1]. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER vec< 3, T, Q > lerp (const vec< 3, T, Q > &x, const vec< 3, T, Q > &y, const vec< 3, T, Q > &a)
 Returns the component-wise result of x * (1.0 - a) + y * a, i.e., the linear blend of x and y using vector a. The value for a is not restricted to the range [0, 1]. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER vec< 4, T, Q > lerp (const vec< 4, T, Q > &x, const vec< 4, T, Q > &y, const vec< 4, T, Q > &a)
 Returns the component-wise result of x * (1.0 - a) + y * a, i.e., the linear blend of x and y using vector a. The value for a is not restricted to the range [0, 1]. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER T saturate (T x)
 Returns clamp(x, 0, 1) for each component in x. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER vec< 2, T, Q > saturate (const vec< 2, T, Q > &x)
 Returns clamp(x, 0, 1) for each component in x. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER vec< 3, T, Q > saturate (const vec< 3, T, Q > &x)
 Returns clamp(x, 0, 1) for each component in x. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER vec< 4, T, Q > saturate (const vec< 4, T, Q > &x)
 Returns clamp(x, 0, 1) for each component in x. (From GLM_GTX_compatibility)
 

Detailed Description

GLM_GTX_compatibility

See also
Core features (dependence)

Definition in file compatibility.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00017_source.html ================================================ 0.9.9 API documentation: compatibility.hpp Source File
compatibility.hpp
1 
13 #pragma once
14 
15 // Dependency:
16 #include "../glm.hpp"
17 #include "../gtc/quaternion.hpp"
18 
19 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
20 # ifndef GLM_ENABLE_EXPERIMENTAL
21 # pragma message("GLM: GLM_GTX_compatibility is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
22 # else
23 # pragma message("GLM: GLM_GTX_compatibility extension included")
24 # endif
25 #endif
26 
27 #if GLM_COMPILER & GLM_COMPILER_VC
28 # include <cfloat>
29 #elif GLM_COMPILER & GLM_COMPILER_GCC
30 # include <cmath>
31 # if(GLM_PLATFORM & GLM_PLATFORM_ANDROID)
32 # undef isfinite
33 # endif
34 #endif//GLM_COMPILER
35 
36 namespace glm
37 {
40 
41  template<typename T> GLM_FUNC_QUALIFIER T lerp(T x, T y, T a){return mix(x, y, a);}
42  template<typename T, qualifier Q> GLM_FUNC_QUALIFIER vec<2, T, Q> lerp(const vec<2, T, Q>& x, const vec<2, T, Q>& y, T a){return mix(x, y, a);}
43 
44  template<typename T, qualifier Q> GLM_FUNC_QUALIFIER vec<3, T, Q> lerp(const vec<3, T, Q>& x, const vec<3, T, Q>& y, T a){return mix(x, y, a);}
45  template<typename T, qualifier Q> GLM_FUNC_QUALIFIER vec<4, T, Q> lerp(const vec<4, T, Q>& x, const vec<4, T, Q>& y, T a){return mix(x, y, a);}
46  template<typename T, qualifier Q> GLM_FUNC_QUALIFIER vec<2, T, Q> lerp(const vec<2, T, Q>& x, const vec<2, T, Q>& y, const vec<2, T, Q>& a){return mix(x, y, a);}
47  template<typename T, qualifier Q> GLM_FUNC_QUALIFIER vec<3, T, Q> lerp(const vec<3, T, Q>& x, const vec<3, T, Q>& y, const vec<3, T, Q>& a){return mix(x, y, a);}
48  template<typename T, qualifier Q> GLM_FUNC_QUALIFIER vec<4, T, Q> lerp(const vec<4, T, Q>& x, const vec<4, T, Q>& y, const vec<4, T, Q>& a){return mix(x, y, a);}
49 
50  template<typename T, qualifier Q> GLM_FUNC_QUALIFIER T saturate(T x){return clamp(x, T(0), T(1));}
51  template<typename T, qualifier Q> GLM_FUNC_QUALIFIER vec<2, T, Q> saturate(const vec<2, T, Q>& x){return clamp(x, T(0), T(1));}
52  template<typename T, qualifier Q> GLM_FUNC_QUALIFIER vec<3, T, Q> saturate(const vec<3, T, Q>& x){return clamp(x, T(0), T(1));}
53  template<typename T, qualifier Q> GLM_FUNC_QUALIFIER vec<4, T, Q> saturate(const vec<4, T, Q>& x){return clamp(x, T(0), T(1));}
54 
55  template<typename T, qualifier Q> GLM_FUNC_QUALIFIER T atan2(T x, T y){return atan(x, y);}
56  template<typename T, qualifier Q> GLM_FUNC_QUALIFIER vec<2, T, Q> atan2(const vec<2, T, Q>& x, const vec<2, T, Q>& y){return atan(x, y);}
57  template<typename T, qualifier Q> GLM_FUNC_QUALIFIER vec<3, T, Q> atan2(const vec<3, T, Q>& x, const vec<3, T, Q>& y){return atan(x, y);}
58  template<typename T, qualifier Q> GLM_FUNC_QUALIFIER vec<4, T, Q> atan2(const vec<4, T, Q>& x, const vec<4, T, Q>& y){return atan(x, y);}
59 
60  template<typename genType> GLM_FUNC_DECL bool isfinite(genType const& x);
61  template<typename T, qualifier Q> GLM_FUNC_DECL vec<1, bool, Q> isfinite(const vec<1, T, Q>& x);
62  template<typename T, qualifier Q> GLM_FUNC_DECL vec<2, bool, Q> isfinite(const vec<2, T, Q>& x);
63  template<typename T, qualifier Q> GLM_FUNC_DECL vec<3, bool, Q> isfinite(const vec<3, T, Q>& x);
64  template<typename T, qualifier Q> GLM_FUNC_DECL vec<4, bool, Q> isfinite(const vec<4, T, Q>& x);
65 
66  typedef bool bool1;
67  typedef vec<2, bool, highp> bool2;
68  typedef vec<3, bool, highp> bool3;
69  typedef vec<4, bool, highp> bool4;
70 
71  typedef bool bool1x1;
72  typedef mat<2, 2, bool, highp> bool2x2;
73  typedef mat<2, 3, bool, highp> bool2x3;
74  typedef mat<2, 4, bool, highp> bool2x4;
75  typedef mat<3, 2, bool, highp> bool3x2;
76  typedef mat<3, 3, bool, highp> bool3x3;
77  typedef mat<3, 4, bool, highp> bool3x4;
78  typedef mat<4, 2, bool, highp> bool4x2;
79  typedef mat<4, 3, bool, highp> bool4x3;
80  typedef mat<4, 4, bool, highp> bool4x4;
81 
82  typedef int int1;
83  typedef vec<2, int, highp> int2;
84  typedef vec<3, int, highp> int3;
85  typedef vec<4, int, highp> int4;
86 
87  typedef int int1x1;
88  typedef mat<2, 2, int, highp> int2x2;
89  typedef mat<2, 3, int, highp> int2x3;
90  typedef mat<2, 4, int, highp> int2x4;
91  typedef mat<3, 2, int, highp> int3x2;
92  typedef mat<3, 3, int, highp> int3x3;
93  typedef mat<3, 4, int, highp> int3x4;
94  typedef mat<4, 2, int, highp> int4x2;
95  typedef mat<4, 3, int, highp> int4x3;
96  typedef mat<4, 4, int, highp> int4x4;
97 
98  typedef float float1;
99  typedef vec<2, float, highp> float2;
100  typedef vec<3, float, highp> float3;
101  typedef vec<4, float, highp> float4;
102 
103  typedef float float1x1;
104  typedef mat<2, 2, float, highp> float2x2;
105  typedef mat<2, 3, float, highp> float2x3;
106  typedef mat<2, 4, float, highp> float2x4;
107  typedef mat<3, 2, float, highp> float3x2;
108  typedef mat<3, 3, float, highp> float3x3;
109  typedef mat<3, 4, float, highp> float3x4;
110  typedef mat<4, 2, float, highp> float4x2;
111  typedef mat<4, 3, float, highp> float4x3;
112  typedef mat<4, 4, float, highp> float4x4;
113 
114  typedef double double1;
115  typedef vec<2, double, highp> double2;
116  typedef vec<3, double, highp> double3;
117  typedef vec<4, double, highp> double4;
118 
119  typedef double double1x1;
120  typedef mat<2, 2, double, highp> double2x2;
121  typedef mat<2, 3, double, highp> double2x3;
122  typedef mat<2, 4, double, highp> double2x4;
123  typedef mat<3, 2, double, highp> double3x2;
124  typedef mat<3, 3, double, highp> double3x3;
125  typedef mat<3, 4, double, highp> double3x4;
126  typedef mat<4, 2, double, highp> double4x2;
127  typedef mat<4, 3, double, highp> double4x3;
128  typedef mat<4, 4, double, highp> double4x4;
129 
131 }//namespace glm
132 
133 #include "compatibility.inl"
mat< 2, 4, int, highp > int2x4
integer matrix with 2 x 4 components. (From GLM_GTX_compatibility extension)
mat< 2, 4, bool, highp > bool2x4
boolean matrix with 2 x 4 components. (From GLM_GTX_compatibility extension)
mat< 3, 4, double, highp > double3x4
double-qualifier floating-point matrix with 3 x 4 components. (From GLM_GTX_compatibility extension) ...
Definition: common.hpp:20
mat< 3, 4, float, highp > float3x4
single-qualifier floating-point matrix with 3 x 4 components. (From GLM_GTX_compatibility extension) ...
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00018.html ================================================ 0.9.9 API documentation: component_wise.hpp File Reference
component_wise.hpp File Reference

GLM_GTX_component_wise More...


Functions

template<typename genType >
GLM_FUNC_DECL genType::value_type compAdd (genType const &v)
 Add all vector components together. More...
 
template<typename genType >
GLM_FUNC_DECL genType::value_type compMax (genType const &v)
 Find the maximum value between single vector components. More...
 
template<typename genType >
GLM_FUNC_DECL genType::value_type compMin (genType const &v)
 Find the minimum value between single vector components. More...
 
template<typename genType >
GLM_FUNC_DECL genType::value_type compMul (genType const &v)
 Multiply all vector components together. More...
 
template<typename floatType , length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, floatType, Q > compNormalize (vec< L, T, Q > const &v)
 Convert an integer vector to a normalized float vector. More...
 
template<length_t L, typename T , typename floatType , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > compScale (vec< L, floatType, Q > const &v)
 Convert a normalized float vector to an integer vector. More...
 

Detailed Description

GLM_GTX_component_wise

Date
2007-05-21 / 2011-06-07
Author
Christophe Riccio
See also
Core features (dependence)

Definition in file component_wise.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00018_source.html ================================================ 0.9.9 API documentation: component_wise.hpp Source File
component_wise.hpp
1 
15 #pragma once
16 
17 // Dependencies
18 #include "../detail/setup.hpp"
19 #include "../detail/qualifier.hpp"
20 
21 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
22 # ifndef GLM_ENABLE_EXPERIMENTAL
23 # pragma message("GLM: GLM_GTX_component_wise is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
24 # else
25 # pragma message("GLM: GLM_GTX_component_wise extension included")
26 # endif
27 #endif
28 
29 namespace glm
30 {
33 
37  template<typename floatType, length_t L, typename T, qualifier Q>
38  GLM_FUNC_DECL vec<L, floatType, Q> compNormalize(vec<L, T, Q> const& v);
39 
43  template<length_t L, typename T, typename floatType, qualifier Q>
44  GLM_FUNC_DECL vec<L, T, Q> compScale(vec<L, floatType, Q> const& v);
45 
48  template<typename genType>
49  GLM_FUNC_DECL typename genType::value_type compAdd(genType const& v);
50 
53  template<typename genType>
54  GLM_FUNC_DECL typename genType::value_type compMul(genType const& v);
55 
58  template<typename genType>
59  GLM_FUNC_DECL typename genType::value_type compMin(genType const& v);
60 
63  template<typename genType>
64  GLM_FUNC_DECL typename genType::value_type compMax(genType const& v);
65 
67 }//namespace glm
68 
69 #include "component_wise.inl"
GLM_FUNC_DECL genType::value_type compMax(genType const &v)
Find the maximum value between single vector components.
GLM_FUNC_DECL genType::value_type compMul(genType const &v)
Multiply all vector components together.
GLM_FUNC_DECL vec< L, T, Q > compScale(vec< L, floatType, Q > const &v)
Convert a normalized float vector to an integer vector.
GLM_FUNC_DECL vec< L, floatType, Q > compNormalize(vec< L, T, Q > const &v)
Convert an integer vector to a normalized float vector.
GLM_FUNC_DECL genType::value_type compMin(genType const &v)
Find the minimum value between single vector components.
GLM_FUNC_DECL genType::value_type compAdd(genType const &v)
Add all vector components together.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00019_source.html ================================================ 0.9.9 API documentation: compute_common.hpp Source File
compute_common.hpp
1 #pragma once
2 
3 #include "setup.hpp"
4 #include <limits>
5 
6 namespace glm{
7 namespace detail
8 {
9  template<typename genFIType, bool /*signed*/>
10  struct compute_abs
11  {};
12 
13  template<typename genFIType>
14  struct compute_abs<genFIType, true>
15  {
16  GLM_FUNC_QUALIFIER GLM_CONSTEXPR static genFIType call(genFIType x)
17  {
18  GLM_STATIC_ASSERT(
19  std::numeric_limits<genFIType>::is_iec559 || std::numeric_limits<genFIType>::is_signed,
20  "'abs' only accepts floating-point and integer scalar or vector inputs");
21 
22  return x >= genFIType(0) ? x : -x;
23  // TODO, perf comp with: *(((int *) &x) + 1) &= 0x7fffffff;
24  }
25  };
26 
27 #if GLM_COMPILER & GLM_COMPILER_CUDA
28  template<>
29  struct compute_abs<float, true>
30  {
31  GLM_FUNC_QUALIFIER GLM_CONSTEXPR static float call(float x)
32  {
33  return fabsf(x);
34  }
35  };
36 #endif
37 
38  template<typename genFIType>
39  struct compute_abs<genFIType, false>
40  {
41  GLM_FUNC_QUALIFIER GLM_CONSTEXPR static genFIType call(genFIType x)
42  {
43  GLM_STATIC_ASSERT(
44  (!std::numeric_limits<genFIType>::is_signed && std::numeric_limits<genFIType>::is_integer),
45  "'abs' only accepts floating-point and integer scalar or vector inputs");
46  return x;
47  }
48  };
49 }//namespace detail
50 }//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00020_source.html ================================================ 0.9.9 API documentation: compute_vector_relational.hpp Source File
compute_vector_relational.hpp
1 #pragma once
2 
3 //#include "compute_common.hpp"
4 #include "setup.hpp"
5 #include <limits>
6 
7 namespace glm{
8 namespace detail
9 {
10  template <typename T, bool isFloat>
11  struct compute_equal
12  {
13  GLM_FUNC_QUALIFIER GLM_CONSTEXPR static bool call(T a, T b)
14  {
15  return a == b;
16  }
17  };
18 /*
19  template <typename T>
20  struct compute_equal<T, true>
21  {
22  GLM_FUNC_QUALIFIER GLM_CONSTEXPR static bool call(T a, T b)
23  {
24  return detail::compute_abs<T, std::numeric_limits<T>::is_signed>::call(b - a) <= static_cast<T>(0);
25  //return std::memcmp(&a, &b, sizeof(T)) == 0;
26  }
27  };
28 */
29 }//namespace detail
30 }//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00021.html ================================================ 0.9.9 API documentation: constants.hpp File Reference
constants.hpp File Reference

GLM_GTC_constants More...


Functions

template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType e ()
 Return e constant. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType euler ()
 Return Euler's constant (the Euler–Mascheroni constant). More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType four_over_pi ()
 Return 4 / pi. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType golden_ratio ()
 Return the golden ratio constant. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType half_pi ()
 Return pi / 2. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType ln_ln_two ()
 Return ln(ln(2)). More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType ln_ten ()
 Return ln(10). More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType ln_two ()
 Return ln(2). More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType one ()
 Return 1. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType one_over_pi ()
 Return 1 / pi. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType one_over_root_two ()
 Return 1 / sqrt(2). More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType one_over_two_pi ()
 Return 1 / (pi * 2). More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType quarter_pi ()
 Return pi / 4. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType root_five ()
 Return sqrt(5). More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType root_half_pi ()
 Return sqrt(pi / 2). More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType root_ln_four ()
 Return sqrt(ln(4)). More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType root_pi ()
 Return square root of pi. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType root_three ()
 Return sqrt(3). More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType root_two ()
 Return sqrt(2). More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType root_two_pi ()
 Return sqrt(2 * pi). More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType third ()
 Return 1 / 3. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType three_over_two_pi ()
 Return 3 * pi / 2. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType two_over_pi ()
 Return 2 / pi. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType two_over_root_pi ()
 Return 2 / sqrt(pi). More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType two_pi ()
 Return pi * 2. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType two_thirds ()
 Return 2 / 3. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType zero ()
 Return 0. More...
 

Detailed Description

GLM_GTC_constants

See also
Core features (dependence)

Definition in file constants.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00021_source.html ================================================ 0.9.9 API documentation: constants.hpp Source File
constants.hpp
1 
13 #pragma once
14 
15 // Dependencies
16 #include "../ext/scalar_constants.hpp"
17 
18 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
19 # pragma message("GLM: GLM_GTC_constants extension included")
20 #endif
21 
22 namespace glm
23 {
26 
29  template<typename genType>
30  GLM_FUNC_DECL GLM_CONSTEXPR genType zero();
31 
34  template<typename genType>
35  GLM_FUNC_DECL GLM_CONSTEXPR genType one();
36 
39  template<typename genType>
40  GLM_FUNC_DECL GLM_CONSTEXPR genType two_pi();
41 
44  template<typename genType>
45  GLM_FUNC_DECL GLM_CONSTEXPR genType root_pi();
46 
49  template<typename genType>
50  GLM_FUNC_DECL GLM_CONSTEXPR genType half_pi();
51 
54  template<typename genType>
55  GLM_FUNC_DECL GLM_CONSTEXPR genType three_over_two_pi();
56 
59  template<typename genType>
60  GLM_FUNC_DECL GLM_CONSTEXPR genType quarter_pi();
61 
64  template<typename genType>
65  GLM_FUNC_DECL GLM_CONSTEXPR genType one_over_pi();
66 
69  template<typename genType>
70  GLM_FUNC_DECL GLM_CONSTEXPR genType one_over_two_pi();
71 
74  template<typename genType>
75  GLM_FUNC_DECL GLM_CONSTEXPR genType two_over_pi();
76 
79  template<typename genType>
80  GLM_FUNC_DECL GLM_CONSTEXPR genType four_over_pi();
81 
84  template<typename genType>
85  GLM_FUNC_DECL GLM_CONSTEXPR genType two_over_root_pi();
86 
89  template<typename genType>
90  GLM_FUNC_DECL GLM_CONSTEXPR genType one_over_root_two();
91 
94  template<typename genType>
95  GLM_FUNC_DECL GLM_CONSTEXPR genType root_half_pi();
96 
99  template<typename genType>
100  GLM_FUNC_DECL GLM_CONSTEXPR genType root_two_pi();
101 
104  template<typename genType>
105  GLM_FUNC_DECL GLM_CONSTEXPR genType root_ln_four();
106 
109  template<typename genType>
110  GLM_FUNC_DECL GLM_CONSTEXPR genType e();
111 
114  template<typename genType>
115  GLM_FUNC_DECL GLM_CONSTEXPR genType euler();
116 
119  template<typename genType>
120  GLM_FUNC_DECL GLM_CONSTEXPR genType root_two();
121 
124  template<typename genType>
125  GLM_FUNC_DECL GLM_CONSTEXPR genType root_three();
126 
129  template<typename genType>
130  GLM_FUNC_DECL GLM_CONSTEXPR genType root_five();
131 
134  template<typename genType>
135  GLM_FUNC_DECL GLM_CONSTEXPR genType ln_two();
136 
139  template<typename genType>
140  GLM_FUNC_DECL GLM_CONSTEXPR genType ln_ten();
141 
144  template<typename genType>
145  GLM_FUNC_DECL GLM_CONSTEXPR genType ln_ln_two();
146 
149  template<typename genType>
150  GLM_FUNC_DECL GLM_CONSTEXPR genType third();
151 
154  template<typename genType>
155  GLM_FUNC_DECL GLM_CONSTEXPR genType two_thirds();
156 
159  template<typename genType>
160  GLM_FUNC_DECL GLM_CONSTEXPR genType golden_ratio();
161 
163 } //namespace glm
164 
165 #include "constants.inl"
GLM_FUNC_DECL GLM_CONSTEXPR genType third()
Return 1 / 3.
GLM_FUNC_DECL GLM_CONSTEXPR genType root_two()
Return sqrt(2).
GLM_FUNC_DECL GLM_CONSTEXPR genType one_over_root_two()
Return 1 / sqrt(2).
GLM_FUNC_DECL GLM_CONSTEXPR genType euler()
Return Euler's constant (the Euler–Mascheroni constant).
GLM_FUNC_DECL GLM_CONSTEXPR genType two_thirds()
Return 2 / 3.
GLM_FUNC_DECL GLM_CONSTEXPR genType two_pi()
Return pi * 2.
GLM_FUNC_DECL GLM_CONSTEXPR genType golden_ratio()
Return the golden ratio constant.
GLM_FUNC_DECL GLM_CONSTEXPR genType quarter_pi()
Return pi / 4.
GLM_FUNC_DECL GLM_CONSTEXPR genType one()
Return 1.
GLM_FUNC_DECL GLM_CONSTEXPR genType root_five()
Return sqrt(5).
GLM_FUNC_DECL GLM_CONSTEXPR genType three_over_two_pi()
Return 3 * pi / 2.
GLM_FUNC_DECL GLM_CONSTEXPR genType zero()
Return 0.
GLM_FUNC_DECL GLM_CONSTEXPR genType ln_ten()
Return ln(10).
GLM_FUNC_DECL GLM_CONSTEXPR genType root_three()
Return sqrt(3).
GLM_FUNC_DECL GLM_CONSTEXPR genType root_pi()
Return square root of pi.
GLM_FUNC_DECL GLM_CONSTEXPR genType e()
Return e constant.
GLM_FUNC_DECL GLM_CONSTEXPR genType one_over_pi()
Return 1 / pi.
GLM_FUNC_DECL GLM_CONSTEXPR genType two_over_pi()
Return 2 / pi.
GLM_FUNC_DECL GLM_CONSTEXPR genType four_over_pi()
Return 4 / pi.
GLM_FUNC_DECL GLM_CONSTEXPR genType root_two_pi()
Return sqrt(2 * pi).
GLM_FUNC_DECL GLM_CONSTEXPR genType ln_two()
Return ln(2).
GLM_FUNC_DECL GLM_CONSTEXPR genType root_ln_four()
Return sqrt(ln(4)).
GLM_FUNC_DECL GLM_CONSTEXPR genType two_over_root_pi()
Return 2 / sqrt(pi).
GLM_FUNC_DECL GLM_CONSTEXPR genType ln_ln_two()
Return ln(ln(2)).
GLM_FUNC_DECL GLM_CONSTEXPR genType root_half_pi()
Return sqrt(pi / 2).
GLM_FUNC_DECL GLM_CONSTEXPR genType half_pi()
Return pi / 2.
GLM_FUNC_DECL GLM_CONSTEXPR genType one_over_two_pi()
Return 1 / (pi * 2).
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00022.html ================================================ 0.9.9 API documentation: dual_quaternion.hpp File Reference
dual_quaternion.hpp File Reference

GLM_GTX_dual_quaternion More...


Typedefs

typedef highp_ddualquat ddualquat
 Dual-quaternion of default double-qualifier floating-point numbers. More...
 
typedef highp_fdualquat dualquat
 Dual-quaternion of floating-point numbers. More...
 
typedef highp_fdualquat fdualquat
 Dual-quaternion of single-qualifier floating-point numbers. More...
 
typedef tdualquat< double, highp > highp_ddualquat
 Dual-quaternion of high double-qualifier floating-point numbers. More...
 
typedef tdualquat< float, highp > highp_dualquat
 Dual-quaternion of high single-qualifier floating-point numbers. More...
 
typedef tdualquat< float, highp > highp_fdualquat
 Dual-quaternion of high single-qualifier floating-point numbers. More...
 
typedef tdualquat< double, lowp > lowp_ddualquat
 Dual-quaternion of low double-qualifier floating-point numbers. More...
 
typedef tdualquat< float, lowp > lowp_dualquat
 Dual-quaternion of low single-qualifier floating-point numbers. More...
 
typedef tdualquat< float, lowp > lowp_fdualquat
 Dual-quaternion of low single-qualifier floating-point numbers. More...
 
typedef tdualquat< double, mediump > mediump_ddualquat
 Dual-quaternion of medium double-qualifier floating-point numbers. More...
 
typedef tdualquat< float, mediump > mediump_dualquat
 Dual-quaternion of medium single-qualifier floating-point numbers. More...
 
typedef tdualquat< float, mediump > mediump_fdualquat
 Dual-quaternion of medium single-qualifier floating-point numbers. More...
 

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL tdualquat< T, Q > dual_quat_identity ()
 Creates an identity dual quaternion. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL tdualquat< T, Q > dualquat_cast (mat< 2, 4, T, Q > const &x)
 Converts a 2 * 4 matrix (matrix which holds real and dual parts) to a dual quaternion. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL tdualquat< T, Q > dualquat_cast (mat< 3, 4, T, Q > const &x)
 Converts a 3 * 4 matrix (augmented matrix rotation + translation) to a dual quaternion. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL tdualquat< T, Q > inverse (tdualquat< T, Q > const &q)
 Returns the q inverse. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL tdualquat< T, Q > lerp (tdualquat< T, Q > const &x, tdualquat< T, Q > const &y, T const &a)
 Returns the linear interpolation of two dual quaternions. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 2, 4, T, Q > mat2x4_cast (tdualquat< T, Q > const &x)
 Converts a dual quaternion to a 2 * 4 matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 4, T, Q > mat3x4_cast (tdualquat< T, Q > const &x)
 Converts a dual quaternion to a 3 * 4 matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL tdualquat< T, Q > normalize (tdualquat< T, Q > const &q)
 Returns the normalized quaternion. More...
 

Detailed Description

GLM_GTX_dual_quaternion

Author
Maksim Vorobiev (msomeone@gmail.com)
See also
Core features (dependence)
GLM_GTC_constants (dependence)
GLM_GTC_quaternion (dependence)

Definition in file dual_quaternion.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00022_source.html ================================================ 0.9.9 API documentation: dual_quaternion.hpp Source File
dual_quaternion.hpp
1 
16 #pragma once
17 
18 // Dependency:
19 #include "../glm.hpp"
20 #include "../gtc/constants.hpp"
21 #include "../gtc/quaternion.hpp"
22 
23 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
24 # ifndef GLM_ENABLE_EXPERIMENTAL
25 # pragma message("GLM: GLM_GTX_dual_quaternion is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
26 # else
27 # pragma message("GLM: GLM_GTX_dual_quaternion extension included")
28 # endif
29 #endif
30 
31 namespace glm
32 {
35 
36  template<typename T, qualifier Q = defaultp>
37  struct tdualquat
38  {
39  // -- Implementation detail --
40 
41  typedef T value_type;
42  typedef qua<T, Q> part_type;
43 
44  // -- Data --
45 
46  qua<T, Q> real, dual;
47 
48  // -- Component accesses --
49 
50  typedef length_t length_type;
52  GLM_FUNC_DECL static GLM_CONSTEXPR length_type length(){return 2;}
53 
54  GLM_FUNC_DECL part_type & operator[](length_type i);
55  GLM_FUNC_DECL part_type const& operator[](length_type i) const;
56 
57  // -- Implicit basic constructors --
58 
59  GLM_FUNC_DECL GLM_CONSTEXPR tdualquat() GLM_DEFAULT;
60  GLM_FUNC_DECL GLM_CONSTEXPR tdualquat(tdualquat<T, Q> const& d) GLM_DEFAULT;
61  template<qualifier P>
62  GLM_FUNC_DECL GLM_CONSTEXPR tdualquat(tdualquat<T, P> const& d);
63 
64  // -- Explicit basic constructors --
65 
66  GLM_FUNC_DECL GLM_CONSTEXPR tdualquat(qua<T, Q> const& real);
67  GLM_FUNC_DECL GLM_CONSTEXPR tdualquat(qua<T, Q> const& orientation, vec<3, T, Q> const& translation);
68  GLM_FUNC_DECL GLM_CONSTEXPR tdualquat(qua<T, Q> const& real, qua<T, Q> const& dual);
69 
70  // -- Conversion constructors --
71 
72  template<typename U, qualifier P>
73  GLM_FUNC_DECL GLM_CONSTEXPR GLM_EXPLICIT tdualquat(tdualquat<U, P> const& q);
74 
75  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR tdualquat(mat<2, 4, T, Q> const& holder_mat);
76  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR tdualquat(mat<3, 4, T, Q> const& aug_mat);
77 
78  // -- Unary arithmetic operators --
79 
80  GLM_FUNC_DECL tdualquat<T, Q> & operator=(tdualquat<T, Q> const& m) GLM_DEFAULT;
81 
82  template<typename U>
83  GLM_FUNC_DECL tdualquat<T, Q> & operator=(tdualquat<U, Q> const& m);
84  template<typename U>
85  GLM_FUNC_DECL tdualquat<T, Q> & operator*=(U s);
86  template<typename U>
87  GLM_FUNC_DECL tdualquat<T, Q> & operator/=(U s);
88  };
89 
90  // -- Unary bit operators --
91 
92  template<typename T, qualifier Q>
93  GLM_FUNC_DECL tdualquat<T, Q> operator+(tdualquat<T, Q> const& q);
94 
95  template<typename T, qualifier Q>
96  GLM_FUNC_DECL tdualquat<T, Q> operator-(tdualquat<T, Q> const& q);
97 
98  // -- Binary operators --
99 
100  template<typename T, qualifier Q>
101  GLM_FUNC_DECL tdualquat<T, Q> operator+(tdualquat<T, Q> const& q, tdualquat<T, Q> const& p);
102 
103  template<typename T, qualifier Q>
104  GLM_FUNC_DECL tdualquat<T, Q> operator*(tdualquat<T, Q> const& q, tdualquat<T, Q> const& p);
105 
106  template<typename T, qualifier Q>
107  GLM_FUNC_DECL vec<3, T, Q> operator*(tdualquat<T, Q> const& q, vec<3, T, Q> const& v);
108 
109  template<typename T, qualifier Q>
110  GLM_FUNC_DECL vec<3, T, Q> operator*(vec<3, T, Q> const& v, tdualquat<T, Q> const& q);
111 
112  template<typename T, qualifier Q>
113  GLM_FUNC_DECL vec<4, T, Q> operator*(tdualquat<T, Q> const& q, vec<4, T, Q> const& v);
114 
115  template<typename T, qualifier Q>
116  GLM_FUNC_DECL vec<4, T, Q> operator*(vec<4, T, Q> const& v, tdualquat<T, Q> const& q);
117 
118  template<typename T, qualifier Q>
119  GLM_FUNC_DECL tdualquat<T, Q> operator*(tdualquat<T, Q> const& q, T const& s);
120 
121  template<typename T, qualifier Q>
122  GLM_FUNC_DECL tdualquat<T, Q> operator*(T const& s, tdualquat<T, Q> const& q);
123 
124  template<typename T, qualifier Q>
125  GLM_FUNC_DECL tdualquat<T, Q> operator/(tdualquat<T, Q> const& q, T const& s);
126 
127  // -- Boolean operators --
128 
129  template<typename T, qualifier Q>
130  GLM_FUNC_DECL bool operator==(tdualquat<T, Q> const& q1, tdualquat<T, Q> const& q2);
131 
132  template<typename T, qualifier Q>
133  GLM_FUNC_DECL bool operator!=(tdualquat<T, Q> const& q1, tdualquat<T, Q> const& q2);
134 
138  template <typename T, qualifier Q>
139  GLM_FUNC_DECL tdualquat<T, Q> dual_quat_identity();
140 
144  template<typename T, qualifier Q>
145  GLM_FUNC_DECL tdualquat<T, Q> normalize(tdualquat<T, Q> const& q);
146 
150  template<typename T, qualifier Q>
151  GLM_FUNC_DECL tdualquat<T, Q> lerp(tdualquat<T, Q> const& x, tdualquat<T, Q> const& y, T const& a);
152 
156  template<typename T, qualifier Q>
157  GLM_FUNC_DECL tdualquat<T, Q> inverse(tdualquat<T, Q> const& q);
158 
162  template<typename T, qualifier Q>
163  GLM_FUNC_DECL mat<2, 4, T, Q> mat2x4_cast(tdualquat<T, Q> const& x);
164 
168  template<typename T, qualifier Q>
169  GLM_FUNC_DECL mat<3, 4, T, Q> mat3x4_cast(tdualquat<T, Q> const& x);
170 
174  template<typename T, qualifier Q>
175  GLM_FUNC_DECL tdualquat<T, Q> dualquat_cast(mat<2, 4, T, Q> const& x);
176 
180  template<typename T, qualifier Q>
181  GLM_FUNC_DECL tdualquat<T, Q> dualquat_cast(mat<3, 4, T, Q> const& x);
182 
183 
187  typedef tdualquat<float, lowp> lowp_dualquat;
188 
192  typedef tdualquat<float, mediump> mediump_dualquat;
193 
197  typedef tdualquat<float, highp> highp_dualquat;
198 
199 
203  typedef tdualquat<float, lowp> lowp_fdualquat;
204 
208  typedef tdualquat<float, mediump> mediump_fdualquat;
209 
213  typedef tdualquat<float, highp> highp_fdualquat;
214 
215 
219  typedef tdualquat<double, lowp> lowp_ddualquat;
220 
224  typedef tdualquat<double, mediump> mediump_ddualquat;
225 
229  typedef tdualquat<double, highp> highp_ddualquat;
230 
231 
232 #if(!defined(GLM_PRECISION_HIGHP_FLOAT) && !defined(GLM_PRECISION_MEDIUMP_FLOAT) && !defined(GLM_PRECISION_LOWP_FLOAT))
233  typedef highp_fdualquat dualquat;
237 
241  typedef highp_fdualquat fdualquat;
242 #elif(defined(GLM_PRECISION_HIGHP_FLOAT) && !defined(GLM_PRECISION_MEDIUMP_FLOAT) && !defined(GLM_PRECISION_LOWP_FLOAT))
243  typedef highp_fdualquat dualquat;
244  typedef highp_fdualquat fdualquat;
245 #elif(!defined(GLM_PRECISION_HIGHP_FLOAT) && defined(GLM_PRECISION_MEDIUMP_FLOAT) && !defined(GLM_PRECISION_LOWP_FLOAT))
246  typedef mediump_fdualquat dualquat;
247  typedef mediump_fdualquat fdualquat;
248 #elif(!defined(GLM_PRECISION_HIGHP_FLOAT) && !defined(GLM_PRECISION_MEDIUMP_FLOAT) && defined(GLM_PRECISION_LOWP_FLOAT))
249  typedef lowp_fdualquat dualquat;
250  typedef lowp_fdualquat fdualquat;
251 #else
252 # error "GLM error: multiple default precision requested for single-precision floating-point types"
253 #endif
254 
255 
256 #if(!defined(GLM_PRECISION_HIGHP_DOUBLE) && !defined(GLM_PRECISION_MEDIUMP_DOUBLE) && !defined(GLM_PRECISION_LOWP_DOUBLE))
257  typedef highp_ddualquat ddualquat;
261 #elif(defined(GLM_PRECISION_HIGHP_DOUBLE) && !defined(GLM_PRECISION_MEDIUMP_DOUBLE) && !defined(GLM_PRECISION_LOWP_DOUBLE))
262  typedef highp_ddualquat ddualquat;
263 #elif(!defined(GLM_PRECISION_HIGHP_DOUBLE) && defined(GLM_PRECISION_MEDIUMP_DOUBLE) && !defined(GLM_PRECISION_LOWP_DOUBLE))
264  typedef mediump_ddualquat ddualquat;
265 #elif(!defined(GLM_PRECISION_HIGHP_DOUBLE) && !defined(GLM_PRECISION_MEDIUMP_DOUBLE) && defined(GLM_PRECISION_LOWP_DOUBLE))
266  typedef lowp_ddualquat ddualquat;
267 #else
268 # error "GLM error: Multiple default precision requested for double-precision floating-point types"
269 #endif
270 
272 } //namespace glm
273 
274 #include "dual_quaternion.inl"
highp_ddualquat ddualquat
Dual-quaternion of default double-qualifier floating-point numbers.
highp_fdualquat fdualquat
Dual-quaternion of single-qualifier floating-point numbers.
GLM_FUNC_DECL mat< 2, 4, T, Q > mat2x4_cast(tdualquat< T, Q > const &x)
Converts a dual quaternion to a 2 * 4 matrix.
tdualquat< double, highp > highp_ddualquat
Dual-quaternion of high double-qualifier floating-point numbers.
GLM_FUNC_DECL tdualquat< T, Q > normalize(tdualquat< T, Q > const &q)
Returns the normalized quaternion.
GLM_FUNC_DECL tdualquat< T, Q > dual_quat_identity()
Creates an identity dual quaternion.
GLM_FUNC_DECL tdualquat< T, Q > inverse(tdualquat< T, Q > const &q)
Returns the q inverse.
GLM_FUNC_DECL tdualquat< T, Q > lerp(tdualquat< T, Q > const &x, tdualquat< T, Q > const &y, T const &a)
Returns the linear interpolation of two dual quaternions.
tdualquat< float, lowp > lowp_dualquat
Dual-quaternion of low single-qualifier floating-point numbers.
tdualquat< float, lowp > lowp_fdualquat
Dual-quaternion of low single-qualifier floating-point numbers.
GLM_FUNC_DECL T length(qua< T, Q > const &q)
Returns the norm of a quaternion.
tdualquat< double, lowp > lowp_ddualquat
Dual-quaternion of low double-qualifier floating-point numbers.
GLM_FUNC_DECL mat< 3, 4, T, Q > mat3x4_cast(tdualquat< T, Q > const &x)
Converts a dual quaternion to a 3 * 4 matrix.
highp_fdualquat dualquat
Dual-quaternion of floating-point numbers.
tdualquat< float, highp > highp_fdualquat
Dual-quaternion of high single-qualifier floating-point numbers.
GLM_FUNC_DECL mat< 4, 4, T, Q > orientation(vec< 3, T, Q > const &Normal, vec< 3, T, Q > const &Up)
Builds a rotation matrix from a normal and an up vector.
tdualquat< float, mediump > mediump_dualquat
Dual-quaternion of medium single-qualifier floating-point numbers.
tdualquat< float, mediump > mediump_fdualquat
Dual-quaternion of medium single-qualifier floating-point numbers.
tdualquat< double, mediump > mediump_ddualquat
Dual-quaternion of medium double-qualifier floating-point numbers.
GLM_FUNC_DECL tdualquat< T, Q > dualquat_cast(mat< 3, 4, T, Q > const &x)
Converts a 3 * 4 matrix (augmented matrix rotation + translation) to a dual quaternion.
tdualquat< float, highp > highp_dualquat
Dual-quaternion of high single-qualifier floating-point numbers.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00023.html ================================================ 0.9.9 API documentation: easing.hpp File Reference
easing.hpp File Reference

GLM_GTX_easing More...


Functions

template<typename genType >
GLM_FUNC_DECL genType backEaseIn (genType const &a)
 
template<typename genType >
GLM_FUNC_DECL genType backEaseIn (genType const &a, genType const &o)
 
template<typename genType >
GLM_FUNC_DECL genType backEaseInOut (genType const &a)
 
template<typename genType >
GLM_FUNC_DECL genType backEaseInOut (genType const &a, genType const &o)
 
template<typename genType >
GLM_FUNC_DECL genType backEaseOut (genType const &a)
 
template<typename genType >
GLM_FUNC_DECL genType backEaseOut (genType const &a, genType const &o)
 
template<typename genType >
GLM_FUNC_DECL genType bounceEaseIn (genType const &a)
 
template<typename genType >
GLM_FUNC_DECL genType bounceEaseInOut (genType const &a)
 
template<typename genType >
GLM_FUNC_DECL genType bounceEaseOut (genType const &a)
 
template<typename genType >
GLM_FUNC_DECL genType circularEaseIn (genType const &a)
 Modelled after shifted quadrant IV of unit circle. More...
 
template<typename genType >
GLM_FUNC_DECL genType circularEaseInOut (genType const &a)
 Modelled after the piecewise circular function y = (1/2)(1 - sqrt(1 - 4x^2)) ; [0, 0.5) y = (1/2)(sqrt(-(2x - 3)*(2x - 1)) + 1) ; [0.5, 1]. More...
 
template<typename genType >
GLM_FUNC_DECL genType circularEaseOut (genType const &a)
 Modelled after shifted quadrant II of unit circle. More...
 
template<typename genType >
GLM_FUNC_DECL genType cubicEaseIn (genType const &a)
 Modelled after the cubic y = x^3.
 
template<typename genType >
GLM_FUNC_DECL genType cubicEaseInOut (genType const &a)
 Modelled after the piecewise cubic y = (1/2)((2x)^3) ; [0, 0.5) y = (1/2)((2x-2)^3 + 2) ; [0.5, 1]. More...
 
template<typename genType >
GLM_FUNC_DECL genType cubicEaseOut (genType const &a)
 Modelled after the cubic y = (x - 1)^3 + 1. More...
 
template<typename genType >
GLM_FUNC_DECL genType elasticEaseIn (genType const &a)
 Modelled after the damped sine wave y = sin(13pi/2*x)*pow(2, 10 * (x - 1)) More...
 
template<typename genType >
GLM_FUNC_DECL genType elasticEaseInOut (genType const &a)
 Modelled after the piecewise exponentially-damped sine wave: y = (1/2)*sin(13pi/2*(2*x))*pow(2, 10 * ((2*x) - 1)) ; [0,0.5) y = (1/2)*(sin(-13pi/2*((2x-1)+1))*pow(2,-10(2*x-1)) + 2) ; [0.5, 1]. More...
 
template<typename genType >
GLM_FUNC_DECL genType elasticEaseOut (genType const &a)
 Modelled after the damped sine wave y = sin(-13pi/2*(x + 1))*pow(2, -10x) + 1. More...
 
template<typename genType >
GLM_FUNC_DECL genType exponentialEaseIn (genType const &a)
 Modelled after the exponential function y = 2^(10(x - 1)) More...
 
template<typename genType >
GLM_FUNC_DECL genType exponentialEaseInOut (genType const &a)
 Modelled after the piecewise exponential y = (1/2)2^(10(2x - 1)) ; [0,0.5) y = -(1/2)*2^(-10(2x - 1)) + 1 ; [0.5,1]. More...
 
template<typename genType >
GLM_FUNC_DECL genType exponentialEaseOut (genType const &a)
 Modelled after the exponential function y = -2^(-10x) + 1. More...
 
template<typename genType >
GLM_FUNC_DECL genType linearInterpolation (genType const &a)
 Modelled after the line y = x. More...
 
template<typename genType >
GLM_FUNC_DECL genType quadraticEaseIn (genType const &a)
 Modelled after the parabola y = x^2. More...
 
template<typename genType >
GLM_FUNC_DECL genType quadraticEaseInOut (genType const &a)
 Modelled after the piecewise quadratic y = (1/2)((2x)^2) ; [0, 0.5) y = -(1/2)((2x-1)*(2x-3) - 1) ; [0.5, 1]. More...
 
template<typename genType >
GLM_FUNC_DECL genType quadraticEaseOut (genType const &a)
 Modelled after the parabola y = -x^2 + 2x. More...
 
template<typename genType >
GLM_FUNC_DECL genType quarticEaseIn (genType const &a)
 Modelled after the quartic y = x^4. More...
 
template<typename genType >
GLM_FUNC_DECL genType quarticEaseInOut (genType const &a)
 Modelled after the piecewise quartic y = (1/2)((2x)^4) ; [0, 0.5) y = -(1/2)((2x-2)^4 - 2) ; [0.5, 1]. More...
 
template<typename genType >
GLM_FUNC_DECL genType quarticEaseOut (genType const &a)
 Modelled after the quartic y = 1 - (x - 1)^4. More...
 
template<typename genType >
GLM_FUNC_DECL genType quinticEaseIn (genType const &a)
 Modelled after the quintic y = x^5. More...
 
template<typename genType >
GLM_FUNC_DECL genType quinticEaseInOut (genType const &a)
 Modelled after the piecewise quintic y = (1/2)((2x)^5) ; [0, 0.5) y = (1/2)((2x-2)^5 + 2) ; [0.5, 1]. More...
 
template<typename genType >
GLM_FUNC_DECL genType quinticEaseOut (genType const &a)
 Modelled after the quintic y = (x - 1)^5 + 1. More...
 
template<typename genType >
GLM_FUNC_DECL genType sineEaseIn (genType const &a)
 Modelled after quarter-cycle of sine wave. More...
 
template<typename genType >
GLM_FUNC_DECL genType sineEaseInOut (genType const &a)
 Modelled after half sine wave. More...
 
template<typename genType >
GLM_FUNC_DECL genType sineEaseOut (genType const &a)
 Modelled after quarter-cycle of sine wave (different phase) More...
 

Detailed Description

GLM_GTX_easing

Author
Robert Chisholm
See also
Core features (dependence)

Definition in file easing.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00023_source.html ================================================ 0.9.9 API documentation: easing.hpp Source File
1 
17 #pragma once
18 
19 // Dependency:
20 #include "../glm.hpp"
21 #include "../gtc/constants.hpp"
22 #include "../detail/qualifier.hpp"
23 
24 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
25 # ifndef GLM_ENABLE_EXPERIMENTAL
26 # pragma message("GLM: GLM_GTX_easing is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
27 # else
28 # pragma message("GLM: GLM_GTX_easing extension included")
29 # endif
30 #endif
31 
32 namespace glm{
35 
38  template <typename genType>
39  GLM_FUNC_DECL genType linearInterpolation(genType const & a);
40 
43  template <typename genType>
44  GLM_FUNC_DECL genType quadraticEaseIn(genType const & a);
45 
48  template <typename genType>
49  GLM_FUNC_DECL genType quadraticEaseOut(genType const & a);
50 
55  template <typename genType>
56  GLM_FUNC_DECL genType quadraticEaseInOut(genType const & a);
57 
59  template <typename genType>
60  GLM_FUNC_DECL genType cubicEaseIn(genType const & a);
61 
64  template <typename genType>
65  GLM_FUNC_DECL genType cubicEaseOut(genType const & a);
66 
71  template <typename genType>
72  GLM_FUNC_DECL genType cubicEaseInOut(genType const & a);
73 
76  template <typename genType>
77  GLM_FUNC_DECL genType quarticEaseIn(genType const & a);
78 
81  template <typename genType>
82  GLM_FUNC_DECL genType quarticEaseOut(genType const & a);
83 
88  template <typename genType>
89  GLM_FUNC_DECL genType quarticEaseInOut(genType const & a);
90 
93  template <typename genType>
94  GLM_FUNC_DECL genType quinticEaseIn(genType const & a);
95 
98  template <typename genType>
99  GLM_FUNC_DECL genType quinticEaseOut(genType const & a);
100 
105  template <typename genType>
106  GLM_FUNC_DECL genType quinticEaseInOut(genType const & a);
107 
110  template <typename genType>
111  GLM_FUNC_DECL genType sineEaseIn(genType const & a);
112 
115  template <typename genType>
116  GLM_FUNC_DECL genType sineEaseOut(genType const & a);
117 
120  template <typename genType>
121  GLM_FUNC_DECL genType sineEaseInOut(genType const & a);
122 
125  template <typename genType>
126  GLM_FUNC_DECL genType circularEaseIn(genType const & a);
127 
130  template <typename genType>
131  GLM_FUNC_DECL genType circularEaseOut(genType const & a);
132 
137  template <typename genType>
138  GLM_FUNC_DECL genType circularEaseInOut(genType const & a);
139 
142  template <typename genType>
143  GLM_FUNC_DECL genType exponentialEaseIn(genType const & a);
144 
147  template <typename genType>
148  GLM_FUNC_DECL genType exponentialEaseOut(genType const & a);
149 
154  template <typename genType>
155  GLM_FUNC_DECL genType exponentialEaseInOut(genType const & a);
156 
159  template <typename genType>
160  GLM_FUNC_DECL genType elasticEaseIn(genType const & a);
161 
164  template <typename genType>
165  GLM_FUNC_DECL genType elasticEaseOut(genType const & a);
166 
171  template <typename genType>
172  GLM_FUNC_DECL genType elasticEaseInOut(genType const & a);
173 
175  template <typename genType>
176  GLM_FUNC_DECL genType backEaseIn(genType const& a);
177 
179  template <typename genType>
180  GLM_FUNC_DECL genType backEaseOut(genType const& a);
181 
183  template <typename genType>
184  GLM_FUNC_DECL genType backEaseInOut(genType const& a);
185 
189  template <typename genType>
190  GLM_FUNC_DECL genType backEaseIn(genType const& a, genType const& o);
191 
195  template <typename genType>
196  GLM_FUNC_DECL genType backEaseOut(genType const& a, genType const& o);
197 
201  template <typename genType>
202  GLM_FUNC_DECL genType backEaseInOut(genType const& a, genType const& o);
203 
205  template <typename genType>
206  GLM_FUNC_DECL genType bounceEaseIn(genType const& a);
207 
209  template <typename genType>
210  GLM_FUNC_DECL genType bounceEaseOut(genType const& a);
211 
213  template <typename genType>
214  GLM_FUNC_DECL genType bounceEaseInOut(genType const& a);
215 
217 }//namespace glm
218 
219 #include "easing.inl"
GLM_FUNC_DECL genType bounceEaseIn(genType const &a)
GLM_FUNC_DECL genType circularEaseInOut(genType const &a)
Modelled after the piecewise circular function y = (1/2)(1 - sqrt(1 - 4x^2)) ; [0, 0.5) y = (1/2)(sqrt(-(2x - 3)*(2x - 1)) + 1) ; [0.5, 1].
GLM_FUNC_DECL genType cubicEaseIn(genType const &a)
Modelled after the cubic y = x^3.
GLM_FUNC_DECL genType elasticEaseIn(genType const &a)
Modelled after the damped sine wave y = sin(13pi/2*x)*pow(2, 10 * (x - 1))
GLM_FUNC_DECL genType quinticEaseIn(genType const &a)
Modelled after the quintic y = x^5.
GLM_FUNC_DECL genType sineEaseInOut(genType const &a)
Modelled after half sine wave.
GLM_FUNC_DECL genType circularEaseOut(genType const &a)
Modelled after shifted quadrant II of unit circle.
GLM_FUNC_DECL genType elasticEaseOut(genType const &a)
Modelled after the damped sine wave y = sin(-13pi/2*(x + 1))*pow(2, -10x) + 1.
GLM_FUNC_DECL genType elasticEaseInOut(genType const &a)
Modelled after the piecewise exponentially-damped sine wave: y = (1/2)*sin(13pi/2*(2*x))*pow(2, 10 * ((2*x) - 1)) ; [0,0.5) y = (1/2)*(sin(-13pi/2*((2x-1)+1))*pow(2,-10(2*x-1)) + 2) ; [0.5, 1].
GLM_FUNC_DECL genType sineEaseIn(genType const &a)
Modelled after quarter-cycle of sine wave.
GLM_FUNC_DECL genType linearInterpolation(genType const &a)
Modelled after the line y = x.
GLM_FUNC_DECL genType quarticEaseIn(genType const &a)
Modelled after the quartic y = x^4.
GLM_FUNC_DECL genType quarticEaseOut(genType const &a)
Modelled after the quartic y = 1 - (x - 1)^4.
GLM_FUNC_DECL genType sineEaseOut(genType const &a)
Modelled after quarter-cycle of sine wave (different phase)
GLM_FUNC_DECL genType quadraticEaseInOut(genType const &a)
Modelled after the piecewise quadratic y = (1/2)((2x)^2) ; [0, 0.5) y = -(1/2)((2x-1)*(2x-3) - 1) ; [0.5, 1].
GLM_FUNC_DECL genType circularEaseIn(genType const &a)
Modelled after shifted quadrant IV of unit circle.
GLM_FUNC_DECL genType quadraticEaseOut(genType const &a)
Modelled after the parabola y = -x^2 + 2x.
GLM_FUNC_DECL genType exponentialEaseOut(genType const &a)
Modelled after the exponential function y = -2^(-10x) + 1.
GLM_FUNC_DECL genType quinticEaseOut(genType const &a)
Modelled after the quintic y = (x - 1)^5 + 1.
GLM_FUNC_DECL genType cubicEaseOut(genType const &a)
Modelled after the cubic y = (x - 1)^3 + 1.
GLM_FUNC_DECL genType exponentialEaseInOut(genType const &a)
Modelled after the piecewise exponential y = (1/2)2^(10(2x - 1)) ; [0,0.5) y = -(1/2)*2^(-10(2x - 1)) + 1 ; [0.5,1].
GLM_FUNC_DECL genType bounceEaseOut(genType const &a)
GLM_FUNC_DECL genType quinticEaseInOut(genType const &a)
Modelled after the piecewise quintic y = (1/2)((2x)^5) ; [0, 0.5) y = (1/2)((2x-2)^5 + 2) ; [0.5, 1].
GLM_FUNC_DECL genType backEaseIn(genType const &a, genType const &o)
GLM_FUNC_DECL genType exponentialEaseIn(genType const &a)
Modelled after the exponential function y = 2^(10(x - 1))
GLM_FUNC_DECL genType quadraticEaseIn(genType const &a)
Modelled after the parabola y = x^2.
GLM_FUNC_DECL genType quarticEaseInOut(genType const &a)
Modelled after the piecewise quartic y = (1/2)((2x)^4) ; [0, 0.5) y = -(1/2)((2x-2)^4 - 2) ; [0.5, 1].
GLM_FUNC_DECL genType cubicEaseInOut(genType const &a)
Modelled after the piecewise cubic y = (1/2)((2x)^3) ; [0, 0.5) y = (1/2)((2x-2)^3 + 2) ; [0.5, 1].
GLM_FUNC_DECL genType bounceEaseInOut(genType const &a)
GLM_FUNC_DECL genType backEaseInOut(genType const &a, genType const &o)
GLM_FUNC_DECL genType backEaseOut(genType const &a, genType const &o)
Definition: common.hpp:20
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00024.html ================================================ 0.9.9 API documentation: epsilon.hpp File Reference

GLM_GTC_epsilon More...


Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, bool, Q > epsilonEqual (vec< L, T, Q > const &x, vec< L, T, Q > const &y, T const &epsilon)
 Returns the component-wise comparison of |x - y| < epsilon. More...
 
template<typename genType >
GLM_FUNC_DECL bool epsilonEqual (genType const &x, genType const &y, genType const &epsilon)
 Returns the component-wise comparison of |x - y| < epsilon. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, bool, Q > epsilonNotEqual (vec< L, T, Q > const &x, vec< L, T, Q > const &y, T const &epsilon)
 Returns the component-wise comparison of |x - y| >= epsilon. More...
 
template<typename genType >
GLM_FUNC_DECL bool epsilonNotEqual (genType const &x, genType const &y, genType const &epsilon)
 Returns the component-wise comparison of |x - y| >= epsilon. More...
 

Detailed Description

GLM_GTC_epsilon

See also
Core features (dependence)
GLM_GTC_quaternion (dependence)

Definition in file epsilon.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00024_source.html ================================================ 0.9.9 API documentation: epsilon.hpp Source File
1 
14 #pragma once
15 
16 // Dependencies
17 #include "../detail/setup.hpp"
18 #include "../detail/qualifier.hpp"
19 
20 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
21 # pragma message("GLM: GLM_GTC_epsilon extension included")
22 #endif
23 
24 namespace glm
25 {
28 
33  template<length_t L, typename T, qualifier Q>
34  GLM_FUNC_DECL vec<L, bool, Q> epsilonEqual(vec<L, T, Q> const& x, vec<L, T, Q> const& y, T const& epsilon);
35 
40  template<typename genType>
41  GLM_FUNC_DECL bool epsilonEqual(genType const& x, genType const& y, genType const& epsilon);
42 
47  template<length_t L, typename T, qualifier Q>
48  GLM_FUNC_DECL vec<L, bool, Q> epsilonNotEqual(vec<L, T, Q> const& x, vec<L, T, Q> const& y, T const& epsilon);
49 
54  template<typename genType>
55  GLM_FUNC_DECL bool epsilonNotEqual(genType const& x, genType const& y, genType const& epsilon);
56 
58 }//namespace glm
59 
60 #include "epsilon.inl"
GLM_FUNC_DECL bool epsilonEqual(genType const &x, genType const &y, genType const &epsilon)
Returns the component-wise comparison of |x - y| < epsilon.
GLM_FUNC_DECL bool epsilonNotEqual(genType const &x, genType const &y, genType const &epsilon)
Returns the component-wise comparison of |x - y| >= epsilon.
GLM_FUNC_DECL GLM_CONSTEXPR genType epsilon()
Return the epsilon constant for floating point types.
Definition: common.hpp:20
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00025.html ================================================ 0.9.9 API documentation: euler_angles.hpp File Reference

GLM_GTX_euler_angles More...


Functions

template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > derivedEulerAngleX (T const &angleX, T const &angularVelocityX)
 Creates a 3D 4 * 4 homogeneous derived matrix from the rotation matrix about X-axis. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > derivedEulerAngleY (T const &angleY, T const &angularVelocityY)
 Creates a 3D 4 * 4 homogeneous derived matrix from the rotation matrix about Y-axis. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > derivedEulerAngleZ (T const &angleZ, T const &angularVelocityZ)
 Creates a 3D 4 * 4 homogeneous derived matrix from the rotation matrix about Z-axis. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleX (T const &angleX)
 Creates a 3D 4 * 4 homogeneous rotation matrix from an euler angle X. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleXY (T const &angleX, T const &angleY)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (X * Y). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleXYX (T const &t1, T const &t2, T const &t3)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (X * Y * X). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleXYZ (T const &t1, T const &t2, T const &t3)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (X * Y * Z). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleXZ (T const &angleX, T const &angleZ)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (X * Z). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleXZX (T const &t1, T const &t2, T const &t3)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (X * Z * X). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleXZY (T const &t1, T const &t2, T const &t3)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (X * Z * Y). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleY (T const &angleY)
 Creates a 3D 4 * 4 homogeneous rotation matrix from an euler angle Y. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleYX (T const &angleY, T const &angleX)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * X). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleYXY (T const &t1, T const &t2, T const &t3)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * X * Y). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleYXZ (T const &yaw, T const &pitch, T const &roll)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * X * Z). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleYZ (T const &angleY, T const &angleZ)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * Z). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleYZX (T const &t1, T const &t2, T const &t3)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * Z * X). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleYZY (T const &t1, T const &t2, T const &t3)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * Z * Y). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleZ (T const &angleZ)
 Creates a 3D 4 * 4 homogeneous rotation matrix from an euler angle Z. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleZX (T const &angle, T const &angleX)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Z * X). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleZXY (T const &t1, T const &t2, T const &t3)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Z * X * Y). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleZXZ (T const &t1, T const &t2, T const &t3)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Z * X * Z). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleZY (T const &angleZ, T const &angleY)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Z * Y). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleZYX (T const &t1, T const &t2, T const &t3)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Z * Y * X). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleZYZ (T const &t1, T const &t2, T const &t3)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Z * Y * Z). More...
 
template<typename T >
GLM_FUNC_DECL void extractEulerAngleXYX (mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
 Extracts the (X * Y * X) Euler angles from the rotation matrix M. More...
 
template<typename T >
GLM_FUNC_DECL void extractEulerAngleXYZ (mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
 Extracts the (X * Y * Z) Euler angles from the rotation matrix M. More...
 
template<typename T >
GLM_FUNC_DECL void extractEulerAngleXZX (mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
 Extracts the (X * Z * X) Euler angles from the rotation matrix M. More...
 
template<typename T >
GLM_FUNC_DECL void extractEulerAngleXZY (mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
 Extracts the (X * Z * Y) Euler angles from the rotation matrix M. More...
 
template<typename T >
GLM_FUNC_DECL void extractEulerAngleYXY (mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
 Extracts the (Y * X * Y) Euler angles from the rotation matrix M. More...
 
template<typename T >
GLM_FUNC_DECL void extractEulerAngleYXZ (mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
 Extracts the (Y * X * Z) Euler angles from the rotation matrix M. More...
 
template<typename T >
GLM_FUNC_DECL void extractEulerAngleYZX (mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
 Extracts the (Y * Z * X) Euler angles from the rotation matrix M. More...
 
template<typename T >
GLM_FUNC_DECL void extractEulerAngleYZY (mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
 Extracts the (Y * Z * Y) Euler angles from the rotation matrix M. More...
 
template<typename T >
GLM_FUNC_DECL void extractEulerAngleZXY (mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
 Extracts the (Z * X * Y) Euler angles from the rotation matrix M. More...
 
template<typename T >
GLM_FUNC_DECL void extractEulerAngleZXZ (mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
 Extracts the (Z * X * Z) Euler angles from the rotation matrix M. More...
 
template<typename T >
GLM_FUNC_DECL void extractEulerAngleZYX (mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
 Extracts the (Z * Y * X) Euler angles from the rotation matrix M. More...
 
template<typename T >
GLM_FUNC_DECL void extractEulerAngleZYZ (mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
 Extracts the (Z * Y * Z) Euler angles from the rotation matrix M. More...
 
template<typename T >
GLM_FUNC_DECL mat< 2, 2, T, defaultp > orientate2 (T const &angle)
 Creates a 2D 2 * 2 rotation matrix from an euler angle. More...
 
template<typename T >
GLM_FUNC_DECL mat< 3, 3, T, defaultp > orientate3 (T const &angle)
 Creates a 2D 3 * 3 homogeneous rotation matrix from an euler angle. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 3, T, Q > orientate3 (vec< 3, T, Q > const &angles)
 Creates a 3D 3 * 3 rotation matrix from euler angles (Y * X * Z). More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > orientate4 (vec< 3, T, Q > const &angles)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * X * Z). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > yawPitchRoll (T const &yaw, T const &pitch, T const &roll)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * X * Z). More...
 

Detailed Description

GLM_GTX_euler_angles

See also
Core features (dependence)

Definition in file euler_angles.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00025_source.html ================================================ 0.9.9 API documentation: euler_angles.hpp Source File
1 
16 #pragma once
17 
18 // Dependency:
19 #include "../glm.hpp"
20 
21 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
22 # ifndef GLM_ENABLE_EXPERIMENTAL
23 # pragma message("GLM: GLM_GTX_euler_angles is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
24 # else
25 # pragma message("GLM: GLM_GTX_euler_angles extension included")
26 # endif
27 #endif
28 
29 namespace glm
30 {
33 
36  template<typename T>
37  GLM_FUNC_DECL mat<4, 4, T, defaultp> eulerAngleX(
38  T const& angleX);
39 
42  template<typename T>
43  GLM_FUNC_DECL mat<4, 4, T, defaultp> eulerAngleY(
44  T const& angleY);
45 
48  template<typename T>
49  GLM_FUNC_DECL mat<4, 4, T, defaultp> eulerAngleZ(
50  T const& angleZ);
51 
54  template <typename T>
55  GLM_FUNC_DECL mat<4, 4, T, defaultp> derivedEulerAngleX(
56  T const & angleX, T const & angularVelocityX);
57 
60  template <typename T>
61  GLM_FUNC_DECL mat<4, 4, T, defaultp> derivedEulerAngleY(
62  T const & angleY, T const & angularVelocityY);
63 
66  template <typename T>
67  GLM_FUNC_DECL mat<4, 4, T, defaultp> derivedEulerAngleZ(
68  T const & angleZ, T const & angularVelocityZ);
69 
72  template<typename T>
73  GLM_FUNC_DECL mat<4, 4, T, defaultp> eulerAngleXY(
74  T const& angleX,
75  T const& angleY);
76 
79  template<typename T>
80  GLM_FUNC_DECL mat<4, 4, T, defaultp> eulerAngleYX(
81  T const& angleY,
82  T const& angleX);
83 
86  template<typename T>
87  GLM_FUNC_DECL mat<4, 4, T, defaultp> eulerAngleXZ(
88  T const& angleX,
89  T const& angleZ);
90 
93  template<typename T>
94  GLM_FUNC_DECL mat<4, 4, T, defaultp> eulerAngleZX(
95  T const& angle,
96  T const& angleX);
97 
100  template<typename T>
101  GLM_FUNC_DECL mat<4, 4, T, defaultp> eulerAngleYZ(
102  T const& angleY,
103  T const& angleZ);
104 
107  template<typename T>
108  GLM_FUNC_DECL mat<4, 4, T, defaultp> eulerAngleZY(
109  T const& angleZ,
110  T const& angleY);
111 
114  template<typename T>
115  GLM_FUNC_DECL mat<4, 4, T, defaultp> eulerAngleXYZ(
116  T const& t1,
117  T const& t2,
118  T const& t3);
119 
122  template<typename T>
123  GLM_FUNC_DECL mat<4, 4, T, defaultp> eulerAngleYXZ(
124  T const& yaw,
125  T const& pitch,
126  T const& roll);
127 
130  template <typename T>
131  GLM_FUNC_DECL mat<4, 4, T, defaultp> eulerAngleXZX(
132  T const & t1,
133  T const & t2,
134  T const & t3);
135 
138  template <typename T>
139  GLM_FUNC_DECL mat<4, 4, T, defaultp> eulerAngleXYX(
140  T const & t1,
141  T const & t2,
142  T const & t3);
143 
146  template <typename T>
147  GLM_FUNC_DECL mat<4, 4, T, defaultp> eulerAngleYXY(
148  T const & t1,
149  T const & t2,
150  T const & t3);
151 
154  template <typename T>
155  GLM_FUNC_DECL mat<4, 4, T, defaultp> eulerAngleYZY(
156  T const & t1,
157  T const & t2,
158  T const & t3);
159 
162  template <typename T>
163  GLM_FUNC_DECL mat<4, 4, T, defaultp> eulerAngleZYZ(
164  T const & t1,
165  T const & t2,
166  T const & t3);
167 
170  template <typename T>
171  GLM_FUNC_DECL mat<4, 4, T, defaultp> eulerAngleZXZ(
172  T const & t1,
173  T const & t2,
174  T const & t3);
175 
178  template <typename T>
179  GLM_FUNC_DECL mat<4, 4, T, defaultp> eulerAngleXZY(
180  T const & t1,
181  T const & t2,
182  T const & t3);
183 
186  template <typename T>
187  GLM_FUNC_DECL mat<4, 4, T, defaultp> eulerAngleYZX(
188  T const & t1,
189  T const & t2,
190  T const & t3);
191 
194  template <typename T>
195  GLM_FUNC_DECL mat<4, 4, T, defaultp> eulerAngleZYX(
196  T const & t1,
197  T const & t2,
198  T const & t3);
199 
202  template <typename T>
203  GLM_FUNC_DECL mat<4, 4, T, defaultp> eulerAngleZXY(
204  T const & t1,
205  T const & t2,
206  T const & t3);
207 
210  template<typename T>
211  GLM_FUNC_DECL mat<4, 4, T, defaultp> yawPitchRoll(
212  T const& yaw,
213  T const& pitch,
214  T const& roll);
215 
218  template<typename T>
219  GLM_FUNC_DECL mat<2, 2, T, defaultp> orientate2(T const& angle);
220 
223  template<typename T>
224  GLM_FUNC_DECL mat<3, 3, T, defaultp> orientate3(T const& angle);
225 
228  template<typename T, qualifier Q>
229  GLM_FUNC_DECL mat<3, 3, T, Q> orientate3(vec<3, T, Q> const& angles);
230 
233  template<typename T, qualifier Q>
234  GLM_FUNC_DECL mat<4, 4, T, Q> orientate4(vec<3, T, Q> const& angles);
235 
238  template<typename T>
239  GLM_FUNC_DECL void extractEulerAngleXYZ(mat<4, 4, T, defaultp> const& M,
240  T & t1,
241  T & t2,
242  T & t3);
243 
246  template <typename T>
247  GLM_FUNC_DECL void extractEulerAngleYXZ(mat<4, 4, T, defaultp> const & M,
248  T & t1,
249  T & t2,
250  T & t3);
251 
254  template <typename T>
255  GLM_FUNC_DECL void extractEulerAngleXZX(mat<4, 4, T, defaultp> const & M,
256  T & t1,
257  T & t2,
258  T & t3);
259 
262  template <typename T>
263  GLM_FUNC_DECL void extractEulerAngleXYX(mat<4, 4, T, defaultp> const & M,
264  T & t1,
265  T & t2,
266  T & t3);
267 
270  template <typename T>
271  GLM_FUNC_DECL void extractEulerAngleYXY(mat<4, 4, T, defaultp> const & M,
272  T & t1,
273  T & t2,
274  T & t3);
275 
278  template <typename T>
279  GLM_FUNC_DECL void extractEulerAngleYZY(mat<4, 4, T, defaultp> const & M,
280  T & t1,
281  T & t2,
282  T & t3);
283 
286  template <typename T>
287  GLM_FUNC_DECL void extractEulerAngleZYZ(mat<4, 4, T, defaultp> const & M,
288  T & t1,
289  T & t2,
290  T & t3);
291 
294  template <typename T>
295  GLM_FUNC_DECL void extractEulerAngleZXZ(mat<4, 4, T, defaultp> const & M,
296  T & t1,
297  T & t2,
298  T & t3);
299 
302  template <typename T>
303  GLM_FUNC_DECL void extractEulerAngleXZY(mat<4, 4, T, defaultp> const & M,
304  T & t1,
305  T & t2,
306  T & t3);
307 
310  template <typename T>
311  GLM_FUNC_DECL void extractEulerAngleYZX(mat<4, 4, T, defaultp> const & M,
312  T & t1,
313  T & t2,
314  T & t3);
315 
318  template <typename T>
319  GLM_FUNC_DECL void extractEulerAngleZYX(mat<4, 4, T, defaultp> const & M,
320  T & t1,
321  T & t2,
322  T & t3);
323 
326  template <typename T>
327  GLM_FUNC_DECL void extractEulerAngleZXY(mat<4, 4, T, defaultp> const & M,
328  T & t1,
329  T & t2,
330  T & t3);
331 
333 }//namespace glm
334 
335 #include "euler_angles.inl"
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleXY(T const &angleX, T const &angleY)
Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (X * Y).
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleYZY(T const &t1, T const &t2, T const &t3)
Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * Z * Y).
GLM_FUNC_DECL void extractEulerAngleYXZ(mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
Extracts the (Y * X * Z) Euler angles from the rotation matrix M.
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleXYZ(T const &t1, T const &t2, T const &t3)
Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (X * Y * Z).
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleXZY(T const &t1, T const &t2, T const &t3)
Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (X * Z * Y).
GLM_FUNC_DECL mat< 4, 4, T, defaultp > derivedEulerAngleZ(T const &angleZ, T const &angularVelocityZ)
Creates a 3D 4 * 4 homogeneous derived matrix from the rotation matrix about Z-axis.
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleYX(T const &angleY, T const &angleX)
Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * X).
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleY(T const &angleY)
Creates a 3D 4 * 4 homogeneous rotation matrix from an euler angle Y.
GLM_FUNC_DECL T angle(qua< T, Q > const &x)
Returns the quaternion rotation angle.
GLM_FUNC_DECL void extractEulerAngleZYZ(mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
Extracts the (Z * Y * Z) Euler angles from the rotation matrix M.
GLM_FUNC_DECL mat< 4, 4, T, defaultp > derivedEulerAngleX(T const &angleX, T const &angularVelocityX)
Creates a 3D 4 * 4 homogeneous derived matrix from the rotation matrix about X-axis.
GLM_FUNC_DECL void extractEulerAngleXYX(mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
Extracts the (X * Y * X) Euler angles from the rotation matrix M.
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleZXY(T const &t1, T const &t2, T const &t3)
Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Z * X * Y).
GLM_FUNC_DECL T roll(qua< T, Q > const &x)
Returns roll value of euler angles expressed in radians.
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleX(T const &angleX)
Creates a 3D 4 * 4 homogeneous rotation matrix from an euler angle X.
GLM_FUNC_DECL mat< 2, 2, T, defaultp > orientate2(T const &angle)
Creates a 2D 2 * 2 rotation matrix from an euler angle.
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleXYX(T const &t1, T const &t2, T const &t3)
Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (X * Y * X).
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleYXZ(T const &yaw, T const &pitch, T const &roll)
Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * X * Z).
GLM_FUNC_DECL void extractEulerAngleXZX(mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
Extracts the (X * Z * X) Euler angles from the rotation matrix M.
GLM_FUNC_DECL T yaw(qua< T, Q > const &x)
Returns yaw value of euler angles expressed in radians.
GLM_FUNC_DECL void extractEulerAngleYXY(mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
Extracts the (Y * X * Y) Euler angles from the rotation matrix M.
GLM_FUNC_DECL void extractEulerAngleZXY(mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
Extracts the (Z * X * Y) Euler angles from the rotation matrix M.
GLM_FUNC_DECL void extractEulerAngleXZY(mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
Extracts the (X * Z * Y) Euler angles from the rotation matrix M.
GLM_FUNC_DECL void extractEulerAngleYZX(mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
Extracts the (Y * Z * X) Euler angles from the rotation matrix M.
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleXZX(T const &t1, T const &t2, T const &t3)
Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (X * Z * X).
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleZYX(T const &t1, T const &t2, T const &t3)
Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Z * Y * X).
GLM_FUNC_DECL mat< 4, 4, T, Q > orientate4(vec< 3, T, Q > const &angles)
Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * X * Z).
GLM_FUNC_DECL void extractEulerAngleZYX(mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
Extracts the (Z * Y * X) Euler angles from the rotation matrix M.
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleZ(T const &angleZ)
Creates a 3D 4 * 4 homogeneous rotation matrix from an euler angle Z.
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleYXY(T const &t1, T const &t2, T const &t3)
Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * X * Y).
GLM_FUNC_DECL void extractEulerAngleYZY(mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
Extracts the (Y * Z * Y) Euler angles from the rotation matrix M.
GLM_FUNC_DECL mat< 4, 4, T, defaultp > yawPitchRoll(T const &yaw, T const &pitch, T const &roll)
Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * X * Z).
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleXZ(T const &angleX, T const &angleZ)
Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (X * Z).
GLM_FUNC_DECL void extractEulerAngleXYZ(mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
Extracts the (X * Y * Z) Euler angles from the rotation matrix M.
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleZXZ(T const &t1, T const &t2, T const &t3)
Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Z * X * Z).
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleYZX(T const &t1, T const &t2, T const &t3)
Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * Z * X).
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleZY(T const &angleZ, T const &angleY)
Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Z * Y).
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleZYZ(T const &t1, T const &t2, T const &t3)
Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Z * Y * Z).
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleYZ(T const &angleY, T const &angleZ)
Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * Z).
GLM_FUNC_DECL mat< 3, 3, T, Q > orientate3(vec< 3, T, Q > const &angles)
Creates a 3D 3 * 3 rotation matrix from euler angles (Y * X * Z).
GLM_FUNC_DECL void extractEulerAngleZXZ(mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
Extracts the (Z * X * Z) Euler angles from the rotation matrix M.
GLM_FUNC_DECL mat< 4, 4, T, defaultp > derivedEulerAngleY(T const &angleY, T const &angularVelocityY)
Creates a 3D 4 * 4 homogeneous derived matrix from the rotation matrix about Y-axis.
GLM_FUNC_DECL T pitch(qua< T, Q > const &x)
Returns pitch value of euler angles expressed in radians.
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleZX(T const &angle, T const &angleX)
Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Z * X).
Definition: common.hpp:20
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00026.html ================================================ 0.9.9 API documentation: exponential.hpp File Reference
0.9.9 API documentation
exponential.hpp File Reference

Core features More...


Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > exp (vec< L, T, Q > const &v)
 Returns the natural exponentiation of x, i.e., e^x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > exp2 (vec< L, T, Q > const &v)
 Returns 2 raised to the v power. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > inversesqrt (vec< L, T, Q > const &v)
 Returns the reciprocal of the positive square root of v. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > log (vec< L, T, Q > const &v)
 Returns the natural logarithm of v, i.e., returns the value y which satisfies the equation x = e^y. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > log2 (vec< L, T, Q > const &v)
 Returns the base 2 log of x, i.e., returns the value y, which satisfies the equation x = 2 ^ y. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > pow (vec< L, T, Q > const &base, vec< L, T, Q > const &exponent)
 Returns 'base' raised to the power 'exponent'. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > sqrt (vec< L, T, Q > const &v)
 Returns the positive square root of v. More...
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00026_source.html ================================================ 0.9.9 API documentation: exponential.hpp Source File
exponential.hpp
#pragma once

#include "detail/type_vec1.hpp"
#include "detail/type_vec2.hpp"
#include "detail/type_vec3.hpp"
#include "detail/type_vec4.hpp"
#include <cmath>

namespace glm
{
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> pow(vec<L, T, Q> const& base, vec<L, T, Q> const& exponent);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> exp(vec<L, T, Q> const& v);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> log(vec<L, T, Q> const& v);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> exp2(vec<L, T, Q> const& v);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> log2(vec<L, T, Q> const& v);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> sqrt(vec<L, T, Q> const& v);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> inversesqrt(vec<L, T, Q> const& v);

}//namespace glm

#include "detail/func_exponential.inl"
Core features
GLM_FUNC_DECL vec< L, T, Q > sqrt(vec< L, T, Q > const &v)
Returns the positive square root of v.
GLM_FUNC_DECL vec< L, T, Q > exp2(vec< L, T, Q > const &v)
Returns 2 raised to the v power.
GLM_FUNC_DECL vec< L, T, Q > inversesqrt(vec< L, T, Q > const &v)
Returns the reciprocal of the positive square root of v.
Core features
Core features
GLM_FUNC_DECL vec< L, T, Q > pow(vec< L, T, Q > const &base, vec< L, T, Q > const &exponent)
Returns 'base' raised to the power 'exponent'.
GLM_FUNC_DECL vec< L, T, Q > exp(vec< L, T, Q > const &v)
Returns the natural exponentiation of x, i.e., e^x.
GLM_FUNC_DECL vec< L, T, Q > log(vec< L, T, Q > const &v)
Returns the natural logarithm of v, i.e., returns the value y which satisfies the equation x = e^y...
Core features
GLM_FUNC_DECL vec< L, T, Q > log2(vec< L, T, Q > const &v)
Returns the base 2 log of x, i.e., returns the value y, which satisfies the equation x = 2 ^ y...
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00027.html ================================================ 0.9.9 API documentation: ext.hpp File Reference
ext.hpp File Reference

Core features (Dependence) More...


Detailed Description

Core features (Dependence)

Definition in file ext.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00027_source.html ================================================ 0.9.9 API documentation: ext.hpp Source File
ext.hpp
#include "detail/setup.hpp"

#pragma once

#include "glm.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_MESSAGE_EXT_INCLUDED_DISPLAYED)
#	define GLM_MESSAGE_EXT_INCLUDED_DISPLAYED
#	pragma message("GLM: All extensions included (not recommended)")
#endif//GLM_MESSAGES

#include "./ext/matrix_float4x2_precision.hpp"

#include "./ext/vector_bool1.hpp"
#include "./ext/vector_bool2.hpp"
#include "./ext/vector_bool3.hpp"
#include "./ext/vector_bool4.hpp"

#include "./ext/vector_double1.hpp"
#include "./ext/vector_double2.hpp"
#include "./ext/vector_double3.hpp"
#include "./ext/vector_double4.hpp"

#include "./ext/vector_float1.hpp"
#include "./ext/vector_float2.hpp"
#include "./ext/vector_float3.hpp"
#include "./ext/vector_float4.hpp"

#include "./ext/vector_int1.hpp"
#include "./ext/vector_int2.hpp"
#include "./ext/vector_int3.hpp"
#include "./ext/vector_int4.hpp"

#include "./ext/vector_uint1.hpp"
#include "./ext/vector_uint2.hpp"
#include "./ext/vector_uint3.hpp"
#include "./ext/vector_uint4.hpp"

#include "./gtc/bitfield.hpp"
#include "./gtc/color_space.hpp"
#include "./gtc/constants.hpp"
#include "./gtc/epsilon.hpp"
#include "./gtc/integer.hpp"
#include "./gtc/matrix_access.hpp"
#include "./gtc/matrix_integer.hpp"
#include "./gtc/matrix_inverse.hpp"
#include "./gtc/noise.hpp"
#include "./gtc/packing.hpp"
#include "./gtc/quaternion.hpp"
#include "./gtc/random.hpp"
#include "./gtc/reciprocal.hpp"
#include "./gtc/round.hpp"
#include "./gtc/type_precision.hpp"
#include "./gtc/type_ptr.hpp"
#include "./gtc/ulp.hpp"
#include "./gtc/vec1.hpp"
#if GLM_CONFIG_ALIGNED_GENTYPES == GLM_ENABLE
#	include "./gtc/type_aligned.hpp"
#endif

#ifdef GLM_ENABLE_EXPERIMENTAL
#include "./gtx/bit.hpp"
#include "./gtx/closest_point.hpp"
#include "./gtx/color_encoding.hpp"
#include "./gtx/color_space.hpp"
#include "./gtx/compatibility.hpp"
#include "./gtx/component_wise.hpp"
#include "./gtx/dual_quaternion.hpp"
#include "./gtx/euler_angles.hpp"
#include "./gtx/extend.hpp"
#include "./gtx/functions.hpp"
#include "./gtx/gradient_paint.hpp"
#include "./gtx/integer.hpp"
#include "./gtx/intersect.hpp"
#include "./gtx/log_base.hpp"
#include "./gtx/matrix_query.hpp"
#include "./gtx/mixed_product.hpp"
#include "./gtx/norm.hpp"
#include "./gtx/normal.hpp"
#include "./gtx/normalize_dot.hpp"
#include "./gtx/optimum_pow.hpp"
#include "./gtx/orthonormalize.hpp"
#include "./gtx/perpendicular.hpp"
#include "./gtx/projection.hpp"
#include "./gtx/quaternion.hpp"
#include "./gtx/raw_data.hpp"
#include "./gtx/rotate_vector.hpp"
#include "./gtx/spline.hpp"
#include "./gtx/std_based_type.hpp"
#if !(GLM_COMPILER & GLM_COMPILER_CUDA)
#	include "./gtx/string_cast.hpp"
#endif
#include "./gtx/transform.hpp"
#include "./gtx/transform2.hpp"
#include "./gtx/vec_swizzle.hpp"
#include "./gtx/vector_angle.hpp"
#include "./gtx/vector_query.hpp"
#include "./gtx/wrap.hpp"

#if GLM_HAS_TEMPLATE_ALIASES
#endif

#if GLM_HAS_RANGE_FOR
#	include "./gtx/range.hpp"
#endif
#endif//GLM_ENABLE_EXPERIMENTAL
GLM_GTC_epsilon
GLM_EXT_vector_relational
GLM_GTX_dual_quaternion
GLM_GTX_polar_coordinates
GLM_GTX_closest_point
Core features
GLM_GTX_handed_coordinate_space
Core features
GLM_GTX_raw_data
Core features
GLM_GTX_string_cast
GLM_EXT_vector_uint1_precision
GLM_GTX_intersect
GLM_EXT_vector_int1_precision
GLM_GTX_normalize_dot
GLM_GTX_integer
GLM_GTX_rotate_vector
GLM_GTX_matrix_major_storage
Core features
Core features
GLM_GTX_matrix_interpolation
GLM_GTX_vector_angle
GLM_GTX_transform2
GLM_GTX_wrap
GLM_GTX_vector_query
GLM_GTX_projection
GLM_GTC_constants
GLM_GTX_perpendicular
Core features
Core features
Core features
Core features
GLM_GTX_std_based_type
Core features
GLM_GTX_component_wise
GLM_GTC_ulp
GLM_GTC_round
Core features
GLM_GTX_orthonormalize
GLM_GTC_integer
GLM_EXT_vector_float1
GLM_GTX_matrix_query
GLM_EXT_vector_double1_precision
GLM_GTX_vec_swizzle
Core features
GLM_GTC_type_ptr
Core features
GLM_GTX_gradient_paint
GLM_GTC_bitfield
GLM_GTX_range
Core features
GLM_GTC_matrix_transform
GLM_GTX_matrix_cross_product
GLM_EXT_vector_bool1_precision
GLM_GTC_type_aligned
GLM_EXT_vector_uint1
GLM_GTX_quaternion
GLM_GTX_color_space_YCoCg
GLM_EXT_vector_int1
GLM_GTX_normal
GLM_GTC_color_space
Core features
GLM_GTC_noise
Core features
Core features
GLM_GTC_matrix_integer
GLM_GTC_matrix_access
GLM_GTX_extented_min_max
GLM_GTC_vec1
GLM_GTX_transform
GLM_EXT_quaternion_double_precision
GLM_GTX_log_base
GLM_GTX_compatibility
GLM_EXT_scalar_int_sized
GLM_GTX_optimum_pow
GLM_GTX_functions
GLM_EXT_quaternion_relational
GLM_GTX_fast_square_root
Core features
GLM_EXT_quaternion_float_precision
Core features
GLM_EXT_scalar_relational
Core features
GLM_GTC_random
GLM_GTX_euler_angles
GLM_GTX_spline
GLM_GTC_quaternion
GLM_GTX_color_space
GLM_GTX_norm
GLM_GTX_color_encoding
GLM_GTC_reciprocal
Core features
GLM_GTX_mixed_producte
Core features
GLM_EXT_vector_double1
Core features
GLM_GTC_type_precision
GLM_EXT_scalar_constants
GLM_GTX_fast_trigonometry
GLM_GTX_bit
GLM_EXT_quaternion_geometric
Core features
GLM_GTX_fast_exponential
GLM_EXT_quaternion_float
GLM_EXT_vector_bool1
Core features
Core features
Core features
Core features
GLM_GTX_extend
Core features
GLM_EXT_quaternion_double
Core features
GLM_GTX_number_precision
Core features
GLM_GTX_matrix_operation
Core features
GLM_GTC_matrix_inverse
Core features
Experimental extensions
GLM_GTC_packing
Core features
GLM_GTX_associated_min_max
GLM_EXT_vector_float1_precision
GLM_EXT_matrix_relational
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00028.html ================================================ 0.9.9 API documentation: extend.hpp File Reference
extend.hpp File Reference

GLM_GTX_extend More...


Functions

template<typename genType >
GLM_FUNC_DECL genType extend (genType const &Origin, genType const &Source, typename genType::value_type const Length)
 Extends of Length the Origin position using the (Source - Origin) direction. More...
 

Detailed Description

GLM_GTX_extend

See also
Core features (dependence)

Definition in file extend.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00028_source.html ================================================ 0.9.9 API documentation: extend.hpp Source File
extend.hpp
#pragma once

// Dependency:
#include "../glm.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	ifndef GLM_ENABLE_EXPERIMENTAL
#		pragma message("GLM: GLM_GTX_extend is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#	else
#		pragma message("GLM: GLM_GTX_extend extension included")
#	endif
#endif

namespace glm
{
	template<typename genType>
	GLM_FUNC_DECL genType extend(
		genType const& Origin,
		genType const& Source,
		typename genType::value_type const Length);

}//namespace glm

#include "extend.inl"
GLM_FUNC_DECL genType extend(genType const &Origin, genType const &Source, typename genType::value_type const Length)
Extends of Length the Origin position using the (Source - Origin) direction.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00029.html ================================================ 0.9.9 API documentation: extended_min_max.hpp File Reference
extended_min_max.hpp File Reference

GLM_GTX_extented_min_max More...


Functions

template<typename genType >
GLM_FUNC_DECL genType fclamp (genType x, genType minVal, genType maxVal)
 Returns min(max(x, minVal), maxVal) for each component in x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fclamp (vec< L, T, Q > const &x, T minVal, T maxVal)
 Returns min(max(x, minVal), maxVal) for each component in x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fclamp (vec< L, T, Q > const &x, vec< L, T, Q > const &minVal, vec< L, T, Q > const &maxVal)
 Returns min(max(x, minVal), maxVal) for each component in x. More...
 
template<typename genType >
GLM_FUNC_DECL genType fmax (genType x, genType y)
 Returns y if x < y; otherwise, it returns x. More...
 
template<typename genType >
GLM_FUNC_DECL genType fmin (genType x, genType y)
 Returns y if y < x; otherwise, it returns x. More...
 
template<typename T >
GLM_FUNC_DECL T max (T const &x, T const &y, T const &z)
 Return the maximum component-wise values of 3 inputs. More...
 
template<typename T , template< typename > class C>
GLM_FUNC_DECL C< T > max (C< T > const &x, typename C< T >::T const &y, typename C< T >::T const &z)
 Return the maximum component-wise values of 3 inputs. More...
 
template<typename T , template< typename > class C>
GLM_FUNC_DECL C< T > max (C< T > const &x, C< T > const &y, C< T > const &z)
 Return the maximum component-wise values of 3 inputs. More...
 
template<typename T >
GLM_FUNC_DECL T max (T const &x, T const &y, T const &z, T const &w)
 Return the maximum component-wise values of 4 inputs. More...
 
template<typename T , template< typename > class C>
GLM_FUNC_DECL C< T > max (C< T > const &x, typename C< T >::T const &y, typename C< T >::T const &z, typename C< T >::T const &w)
 Return the maximum component-wise values of 4 inputs. More...
 
template<typename T , template< typename > class C>
GLM_FUNC_DECL C< T > max (C< T > const &x, C< T > const &y, C< T > const &z, C< T > const &w)
 Return the maximum component-wise values of 4 inputs. More...
 
template<typename T >
GLM_FUNC_DECL T min (T const &x, T const &y, T const &z)
 Return the minimum component-wise values of 3 inputs. More...
 
template<typename T , template< typename > class C>
GLM_FUNC_DECL C< T > min (C< T > const &x, typename C< T >::T const &y, typename C< T >::T const &z)
 Return the minimum component-wise values of 3 inputs. More...
 
template<typename T , template< typename > class C>
GLM_FUNC_DECL C< T > min (C< T > const &x, C< T > const &y, C< T > const &z)
 Return the minimum component-wise values of 3 inputs. More...
 
template<typename T >
GLM_FUNC_DECL T min (T const &x, T const &y, T const &z, T const &w)
 Return the minimum component-wise values of 4 inputs. More...
 
template<typename T , template< typename > class C>
GLM_FUNC_DECL C< T > min (C< T > const &x, typename C< T >::T const &y, typename C< T >::T const &z, typename C< T >::T const &w)
 Return the minimum component-wise values of 4 inputs. More...
 
template<typename T , template< typename > class C>
GLM_FUNC_DECL C< T > min (C< T > const &x, C< T > const &y, C< T > const &z, C< T > const &w)
 Return the minimum component-wise values of 4 inputs. More...
 

Detailed Description

GLM_GTX_extented_min_max

See also
Core features (dependence)

Definition in file extended_min_max.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00029_source.html ================================================ 0.9.9 API documentation: extended_min_max.hpp Source File
extended_min_max.hpp
#pragma once

// Dependency:
#include "../glm.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	ifndef GLM_ENABLE_EXPERIMENTAL
#		pragma message("GLM: GLM_GTX_extented_min_max is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#	else
#		pragma message("GLM: GLM_GTX_extented_min_max extension included")
#	endif
#endif

namespace glm
{
	template<typename T>
	GLM_FUNC_DECL T min(
		T const& x,
		T const& y,
		T const& z);

	template<typename T, template<typename> class C>
	GLM_FUNC_DECL C<T> min(
		C<T> const& x,
		typename C<T>::T const& y,
		typename C<T>::T const& z);

	template<typename T, template<typename> class C>
	GLM_FUNC_DECL C<T> min(
		C<T> const& x,
		C<T> const& y,
		C<T> const& z);

	template<typename T>
	GLM_FUNC_DECL T min(
		T const& x,
		T const& y,
		T const& z,
		T const& w);

	template<typename T, template<typename> class C>
	GLM_FUNC_DECL C<T> min(
		C<T> const& x,
		typename C<T>::T const& y,
		typename C<T>::T const& z,
		typename C<T>::T const& w);

	template<typename T, template<typename> class C>
	GLM_FUNC_DECL C<T> min(
		C<T> const& x,
		C<T> const& y,
		C<T> const& z,
		C<T> const& w);

	template<typename T>
	GLM_FUNC_DECL T max(
		T const& x,
		T const& y,
		T const& z);

	template<typename T, template<typename> class C>
	GLM_FUNC_DECL C<T> max(
		C<T> const& x,
		typename C<T>::T const& y,
		typename C<T>::T const& z);

	template<typename T, template<typename> class C>
	GLM_FUNC_DECL C<T> max(
		C<T> const& x,
		C<T> const& y,
		C<T> const& z);

	template<typename T>
	GLM_FUNC_DECL T max(
		T const& x,
		T const& y,
		T const& z,
		T const& w);

	template<typename T, template<typename> class C>
	GLM_FUNC_DECL C<T> max(
		C<T> const& x,
		typename C<T>::T const& y,
		typename C<T>::T const& z,
		typename C<T>::T const& w);

	template<typename T, template<typename> class C>
	GLM_FUNC_DECL C<T> max(
		C<T> const& x,
		C<T> const& y,
		C<T> const& z,
		C<T> const& w);

	template<typename genType>
	GLM_FUNC_DECL genType fmin(genType x, genType y);

	template<typename genType>
	GLM_FUNC_DECL genType fmax(genType x, genType y);

	template<typename genType>
	GLM_FUNC_DECL genType fclamp(genType x, genType minVal, genType maxVal);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> fclamp(vec<L, T, Q> const& x, T minVal, T maxVal);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> fclamp(vec<L, T, Q> const& x, vec<L, T, Q> const& minVal, vec<L, T, Q> const& maxVal);

}//namespace glm

#include "extended_min_max.inl"
GLM_FUNC_DECL vec< L, T, Q > fclamp(vec< L, T, Q > const &x, vec< L, T, Q > const &minVal, vec< L, T, Q > const &maxVal)
Returns min(max(x, minVal), maxVal) for each component in x.
GLM_FUNC_DECL genType fmin(genType x, genType y)
Returns y if y < x; otherwise, it returns x.
GLM_FUNC_DECL genType fmax(genType x, genType y)
Returns y if x < y; otherwise, it returns x.
GLM_FUNC_DECL C< T > max(C< T > const &x, C< T > const &y, C< T > const &z, C< T > const &w)
Return the maximum component-wise values of 4 inputs.
GLM_FUNC_DECL C< T > min(C< T > const &x, C< T > const &y, C< T > const &z, C< T > const &w)
Return the minimum component-wise values of 4 inputs.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00030.html ================================================ 0.9.9 API documentation: exterior_product.hpp File Reference
exterior_product.hpp File Reference

GLM_GTX_exterior_product More...


Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL T cross (vec< 2, T, Q > const &v, vec< 2, T, Q > const &u)
 Returns the cross product of x and y. More...
 

Detailed Description

GLM_GTX_exterior_product

See also
Core features (dependence)
GLM_GTX_exterior_product (dependence)

Definition in file exterior_product.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00030_source.html ================================================ 0.9.9 API documentation: exterior_product.hpp Source File
exterior_product.hpp
#pragma once

// Dependencies
#include "../detail/setup.hpp"
#include "../detail/qualifier.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	ifndef GLM_ENABLE_EXPERIMENTAL
#		pragma message("GLM: GLM_GTX_exterior_product is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#	else
#		pragma message("GLM: GLM_GTX_exterior_product extension included")
#	endif
#endif

namespace glm
{
	template<typename T, qualifier Q>
	GLM_FUNC_DECL T cross(vec<2, T, Q> const& v, vec<2, T, Q> const& u);

} //namespace glm

#include "exterior_product.inl"
GLM_FUNC_DECL T cross(vec< 2, T, Q > const &v, vec< 2, T, Q > const &u)
Returns the cross product of x and y.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00031.html ================================================ 0.9.9 API documentation: fast_exponential.hpp File Reference
fast_exponential.hpp File Reference

GLM_GTX_fast_exponential More...


Functions

template<typename T >
GLM_FUNC_DECL T fastExp (T x)
 Faster than the common exp function but less accurate. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fastExp (vec< L, T, Q > const &x)
 Faster than the common exp function but less accurate. More...
 
template<typename T >
GLM_FUNC_DECL T fastExp2 (T x)
 Faster than the common exp2 function but less accurate. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fastExp2 (vec< L, T, Q > const &x)
 Faster than the common exp2 function but less accurate. More...
 
template<typename T >
GLM_FUNC_DECL T fastLog (T x)
 Faster than the common log function but less accurate. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fastLog (vec< L, T, Q > const &x)
 Faster than the common log function but less accurate. More...
 
template<typename T >
GLM_FUNC_DECL T fastLog2 (T x)
 Faster than the common log2 function but less accurate. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fastLog2 (vec< L, T, Q > const &x)
 Faster than the common log2 function but less accurate. More...
 
template<typename genType >
GLM_FUNC_DECL genType fastPow (genType x, genType y)
 Faster than the common pow function but less accurate. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fastPow (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Faster than the common pow function but less accurate. More...
 
template<typename genTypeT , typename genTypeU >
GLM_FUNC_DECL genTypeT fastPow (genTypeT x, genTypeU y)
 Faster than the common pow function but less accurate. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fastPow (vec< L, T, Q > const &x)
 Faster than the common pow function but less accurate. More...
 

Detailed Description

GLM_GTX_fast_exponential

See also
Core features (dependence)
gtx_half_float (dependence)

Definition in file fast_exponential.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00031_source.html ================================================ 0.9.9 API documentation: fast_exponential.hpp Source File
fast_exponential.hpp
#pragma once

// Dependency:
#include "../glm.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	ifndef GLM_ENABLE_EXPERIMENTAL
#		pragma message("GLM: GLM_GTX_fast_exponential is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#	else
#		pragma message("GLM: GLM_GTX_fast_exponential extension included")
#	endif
#endif

namespace glm
{
	template<typename genType>
	GLM_FUNC_DECL genType fastPow(genType x, genType y);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> fastPow(vec<L, T, Q> const& x, vec<L, T, Q> const& y);

	template<typename genTypeT, typename genTypeU>
	GLM_FUNC_DECL genTypeT fastPow(genTypeT x, genTypeU y);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> fastPow(vec<L, T, Q> const& x);

	template<typename T>
	GLM_FUNC_DECL T fastExp(T x);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> fastExp(vec<L, T, Q> const& x);

	template<typename T>
	GLM_FUNC_DECL T fastLog(T x);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> fastLog(vec<L, T, Q> const& x);

	template<typename T>
	GLM_FUNC_DECL T fastExp2(T x);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> fastExp2(vec<L, T, Q> const& x);

	template<typename T>
	GLM_FUNC_DECL T fastLog2(T x);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> fastLog2(vec<L, T, Q> const& x);

}//namespace glm

#include "fast_exponential.inl"
GLM_FUNC_DECL vec< L, T, Q > fastLog(vec< L, T, Q > const &x)
Faster than the common log function but less accurate.
GLM_FUNC_DECL vec< L, T, Q > fastPow(vec< L, T, Q > const &x)
Faster than the common pow function but less accurate.
GLM_FUNC_DECL vec< L, T, Q > fastLog2(vec< L, T, Q > const &x)
Faster than the common log2 function but less accurate.
GLM_FUNC_DECL vec< L, T, Q > fastExp2(vec< L, T, Q > const &x)
Faster than the common exp2 function but less accurate.
GLM_FUNC_DECL vec< L, T, Q > fastExp(vec< L, T, Q > const &x)
Faster than the common exp function but less accurate.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00032.html ================================================ 0.9.9 API documentation: fast_square_root.hpp File Reference
fast_square_root.hpp File Reference

GLM_GTX_fast_square_root More...


Functions

template<typename genType >
GLM_FUNC_DECL genType fastDistance (genType x, genType y)
 Faster than the common distance function but less accurate. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T fastDistance (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Faster than the common distance function but less accurate. More...
 
template<typename genType >
GLM_FUNC_DECL genType fastInverseSqrt (genType x)
 Faster than the common inversesqrt function but less accurate. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fastInverseSqrt (vec< L, T, Q > const &x)
 Faster than the common inversesqrt function but less accurate. More...
 
template<typename genType >
GLM_FUNC_DECL genType fastLength (genType x)
 Faster than the common length function but less accurate. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T fastLength (vec< L, T, Q > const &x)
 Faster than the common length function but less accurate. More...
 
template<typename genType >
GLM_FUNC_DECL genType fastNormalize (genType const &x)
 Faster than the common normalize function but less accurate. More...
 
template<typename genType >
GLM_FUNC_DECL genType fastSqrt (genType x)
 Faster than the common sqrt function but less accurate. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fastSqrt (vec< L, T, Q > const &x)
 Faster than the common sqrt function but less accurate. More...
 

Detailed Description

GLM_GTX_fast_square_root

See also
Core features (dependence)

Definition in file fast_square_root.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00032_source.html ================================================ 0.9.9 API documentation: fast_square_root.hpp Source File
fast_square_root.hpp
1 
15 #pragma once
16 
17 // Dependency:
18 #include "../common.hpp"
19 #include "../exponential.hpp"
20 #include "../geometric.hpp"
21 
22 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
23 # ifndef GLM_ENABLE_EXPERIMENTAL
24 # pragma message("GLM: GLM_GTX_fast_square_root is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
25 # else
26 # pragma message("GLM: GLM_GTX_fast_square_root extension included")
27 # endif
28 #endif
29 
30 namespace glm
31 {
34 
38  template<typename genType>
39  GLM_FUNC_DECL genType fastSqrt(genType x);
40 
44  template<length_t L, typename T, qualifier Q>
45  GLM_FUNC_DECL vec<L, T, Q> fastSqrt(vec<L, T, Q> const& x);
46 
50  template<typename genType>
51  GLM_FUNC_DECL genType fastInverseSqrt(genType x);
52 
56  template<length_t L, typename T, qualifier Q>
57  GLM_FUNC_DECL vec<L, T, Q> fastInverseSqrt(vec<L, T, Q> const& x);
58 
62  template<typename genType>
63  GLM_FUNC_DECL genType fastLength(genType x);
64 
68  template<length_t L, typename T, qualifier Q>
69  GLM_FUNC_DECL T fastLength(vec<L, T, Q> const& x);
70 
74  template<typename genType>
75  GLM_FUNC_DECL genType fastDistance(genType x, genType y);
76 
80  template<length_t L, typename T, qualifier Q>
81  GLM_FUNC_DECL T fastDistance(vec<L, T, Q> const& x, vec<L, T, Q> const& y);
82 
86  template<typename genType>
87  GLM_FUNC_DECL genType fastNormalize(genType const& x);
88 
90 }// namespace glm
91 
92 #include "fast_square_root.inl"
GLM_FUNC_DECL T fastLength(vec< L, T, Q > const &x)
Faster than the common length function but less accurate.
GLM_FUNC_DECL T fastDistance(vec< L, T, Q > const &x, vec< L, T, Q > const &y)
Faster than the common distance function but less accurate.
GLM_FUNC_DECL vec< L, T, Q > fastSqrt(vec< L, T, Q > const &x)
Faster than the common sqrt function but less accurate.
GLM_FUNC_DECL genType fastNormalize(genType const &x)
Faster than the common normalize function but less accurate.
GLM_FUNC_DECL vec< L, T, Q > fastInverseSqrt(vec< L, T, Q > const &x)
Faster than the common inversesqrt function but less accurate.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00033.html ================================================ 0.9.9 API documentation: fast_trigonometry.hpp File Reference
fast_trigonometry.hpp File Reference

GLM_GTX_fast_trigonometry More...


Functions

template<typename T >
GLM_FUNC_DECL T fastAcos (T angle)
 Faster than the common acos function but less accurate. More...
 
template<typename T >
GLM_FUNC_DECL T fastAsin (T angle)
 Faster than the common asin function but less accurate. More...
 
template<typename T >
GLM_FUNC_DECL T fastAtan (T y, T x)
 Faster than the common atan function but less accurate. More...
 
template<typename T >
GLM_FUNC_DECL T fastAtan (T angle)
 Faster than the common atan function but less accurate. More...
 
template<typename T >
GLM_FUNC_DECL T fastCos (T angle)
 Faster than the common cos function but less accurate. More...
 
template<typename T >
GLM_FUNC_DECL T fastSin (T angle)
 Faster than the common sin function but less accurate. More...
 
template<typename T >
GLM_FUNC_DECL T fastTan (T angle)
 Faster than the common tan function but less accurate. More...
 
template<typename T >
GLM_FUNC_DECL T wrapAngle (T angle)
 Wrap an angle to [0, 2pi). From GLM_GTX_fast_trigonometry extension. More...
 

Detailed Description

GLM_GTX_fast_trigonometry

See also
Core features (dependence)

Definition in file fast_trigonometry.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00033_source.html ================================================ 0.9.9 API documentation: fast_trigonometry.hpp Source File
fast_trigonometry.hpp
1 
13 #pragma once
14 
15 // Dependency:
16 #include "../gtc/constants.hpp"
17 
18 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
19 # ifndef GLM_ENABLE_EXPERIMENTAL
20 # pragma message("GLM: GLM_GTX_fast_trigonometry is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
21 # else
22 # pragma message("GLM: GLM_GTX_fast_trigonometry extension included")
23 # endif
24 #endif
25 
26 namespace glm
27 {
30 
33  template<typename T>
34  GLM_FUNC_DECL T wrapAngle(T angle);
35 
38  template<typename T>
39  GLM_FUNC_DECL T fastSin(T angle);
40 
43  template<typename T>
44  GLM_FUNC_DECL T fastCos(T angle);
45 
49  template<typename T>
50  GLM_FUNC_DECL T fastTan(T angle);
51 
55  template<typename T>
56  GLM_FUNC_DECL T fastAsin(T angle);
57 
61  template<typename T>
62  GLM_FUNC_DECL T fastAcos(T angle);
63 
67  template<typename T>
68  GLM_FUNC_DECL T fastAtan(T y, T x);
69 
73  template<typename T>
74  GLM_FUNC_DECL T fastAtan(T angle);
75 
77 }//namespace glm
78 
79 #include "fast_trigonometry.inl"
GLM_FUNC_DECL T fastAsin(T angle)
Faster than the common asin function but less accurate.
GLM_FUNC_DECL T angle(qua< T, Q > const &x)
Returns the quaternion rotation angle.
GLM_FUNC_DECL T fastAcos(T angle)
Faster than the common acos function but less accurate.
GLM_FUNC_DECL T fastTan(T angle)
Faster than the common tan function but less accurate.
GLM_FUNC_DECL T fastCos(T angle)
Faster than the common cos function but less accurate.
GLM_FUNC_DECL T fastAtan(T angle)
Faster than the common atan function but less accurate.
GLM_FUNC_DECL T fastSin(T angle)
Faster than the common sin function but less accurate.
GLM_FUNC_DECL T wrapAngle(T angle)
Wrap an angle to [0, 2pi). From GLM_GTX_fast_trigonometry extension.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00034.html ================================================ 0.9.9 API documentation: functions.hpp File Reference
functions.hpp File Reference

GLM_GTX_functions More...


Functions

template<typename T >
GLM_FUNC_DECL T gauss (T x, T ExpectedValue, T StandardDeviation)
 1D gauss function More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T gauss (vec< 2, T, Q > const &Coord, vec< 2, T, Q > const &ExpectedValue, vec< 2, T, Q > const &StandardDeviation)
 2D gauss function More...
 

Detailed Description

GLM_GTX_functions

See also
Core features (dependence)
GLM_GTC_quaternion (dependence)

Definition in file functions.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00034_source.html ================================================ 0.9.9 API documentation: functions.hpp Source File
functions.hpp
1 
14 #pragma once
15 
16 // Dependencies
17 #include "../detail/setup.hpp"
18 #include "../detail/qualifier.hpp"
19 #include "../detail/type_vec2.hpp"
20 
21 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
22 # ifndef GLM_ENABLE_EXPERIMENTAL
23 # pragma message("GLM: GLM_GTX_functions is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
24 # else
25 # pragma message("GLM: GLM_GTX_functions extension included")
26 # endif
27 #endif
28 
29 namespace glm
30 {
33 
37  template<typename T>
38  GLM_FUNC_DECL T gauss(
39  T x,
40  T ExpectedValue,
41  T StandardDeviation);
42 
46  template<typename T, qualifier Q>
47  GLM_FUNC_DECL T gauss(
48  vec<2, T, Q> const& Coord,
49  vec<2, T, Q> const& ExpectedValue,
50  vec<2, T, Q> const& StandardDeviation);
51 
53 }//namespace glm
54 
55 #include "functions.inl"
56 
GLM_FUNC_DECL T gauss(vec< 2, T, Q > const &Coord, vec< 2, T, Q > const &ExpectedValue, vec< 2, T, Q > const &StandardDeviation)
2D gauss function
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00035_source.html ================================================ 0.9.9 API documentation: fwd.hpp Source File
fwd.hpp
1 #pragma once
2 
3 #include "detail/qualifier.hpp"
4 
5 namespace glm
6 {
7 #if GLM_HAS_EXTENDED_INTEGER_TYPE
8  typedef std::int8_t int8;
9  typedef std::int16_t int16;
10  typedef std::int32_t int32;
11  typedef std::int64_t int64;
12 
13  typedef std::uint8_t uint8;
14  typedef std::uint16_t uint16;
15  typedef std::uint32_t uint32;
16  typedef std::uint64_t uint64;
17 #else
18  typedef signed char int8;
19  typedef signed short int16;
20  typedef signed int int32;
21  typedef detail::int64 int64;
22 
23  typedef unsigned char uint8;
24  typedef unsigned short uint16;
25  typedef unsigned int uint32;
26  typedef detail::uint64 uint64;
27 #endif
28 
29  // Scalar int
30 
31  typedef int8 lowp_i8;
32  typedef int8 mediump_i8;
33  typedef int8 highp_i8;
34  typedef int8 i8;
35 
36  typedef int8 lowp_int8;
37  typedef int8 mediump_int8;
38  typedef int8 highp_int8;
39 
40  typedef int8 lowp_int8_t;
41  typedef int8 mediump_int8_t;
42  typedef int8 highp_int8_t;
43  typedef int8 int8_t;
44 
45  typedef int16 lowp_i16;
46  typedef int16 mediump_i16;
47  typedef int16 highp_i16;
48  typedef int16 i16;
49 
50  typedef int16 lowp_int16;
51  typedef int16 mediump_int16;
52  typedef int16 highp_int16;
53 
54  typedef int16 lowp_int16_t;
55  typedef int16 mediump_int16_t;
56  typedef int16 highp_int16_t;
57  typedef int16 int16_t;
58 
59  typedef int32 lowp_i32;
60  typedef int32 mediump_i32;
61  typedef int32 highp_i32;
62  typedef int32 i32;
63 
64  typedef int32 lowp_int32;
65  typedef int32 mediump_int32;
66  typedef int32 highp_int32;
67 
68  typedef int32 lowp_int32_t;
69  typedef int32 mediump_int32_t;
70  typedef int32 highp_int32_t;
71  typedef int32 int32_t;
72 
73  typedef int64 lowp_i64;
74  typedef int64 mediump_i64;
75  typedef int64 highp_i64;
76  typedef int64 i64;
77 
78  typedef int64 lowp_int64;
79  typedef int64 mediump_int64;
80  typedef int64 highp_int64;
81 
82  typedef int64 lowp_int64_t;
83  typedef int64 mediump_int64_t;
84  typedef int64 highp_int64_t;
85  typedef int64 int64_t;
86 
87  // Scalar uint
88 
89  typedef uint8 lowp_u8;
90  typedef uint8 mediump_u8;
91  typedef uint8 highp_u8;
92  typedef uint8 u8;
93 
94  typedef uint8 lowp_uint8;
95  typedef uint8 mediump_uint8;
96  typedef uint8 highp_uint8;
97 
98  typedef uint8 lowp_uint8_t;
99  typedef uint8 mediump_uint8_t;
100  typedef uint8 highp_uint8_t;
101  typedef uint8 uint8_t;
102 
103  typedef uint16 lowp_u16;
104  typedef uint16 mediump_u16;
105  typedef uint16 highp_u16;
106  typedef uint16 u16;
107 
108  typedef uint16 lowp_uint16;
109  typedef uint16 mediump_uint16;
110  typedef uint16 highp_uint16;
111 
112  typedef uint16 lowp_uint16_t;
113  typedef uint16 mediump_uint16_t;
114  typedef uint16 highp_uint16_t;
115  typedef uint16 uint16_t;
116 
117  typedef uint32 lowp_u32;
118  typedef uint32 mediump_u32;
119  typedef uint32 highp_u32;
120  typedef uint32 u32;
121 
122  typedef uint32 lowp_uint32;
123  typedef uint32 mediump_uint32;
124  typedef uint32 highp_uint32;
125 
126  typedef uint32 lowp_uint32_t;
127  typedef uint32 mediump_uint32_t;
128  typedef uint32 highp_uint32_t;
129  typedef uint32 uint32_t;
130 
131  typedef uint64 lowp_u64;
132  typedef uint64 mediump_u64;
133  typedef uint64 highp_u64;
134  typedef uint64 u64;
135 
136  typedef uint64 lowp_uint64;
137  typedef uint64 mediump_uint64;
138  typedef uint64 highp_uint64;
139 
140  typedef uint64 lowp_uint64_t;
141  typedef uint64 mediump_uint64_t;
142  typedef uint64 highp_uint64_t;
143  typedef uint64 uint64_t;
144 
145  // Scalar float
146 
147  typedef float lowp_f32;
148  typedef float mediump_f32;
149  typedef float highp_f32;
150  typedef float f32;
151 
152  typedef float lowp_float32;
153  typedef float mediump_float32;
154  typedef float highp_float32;
155  typedef float float32;
156 
157  typedef float lowp_float32_t;
158  typedef float mediump_float32_t;
159  typedef float highp_float32_t;
160  typedef float float32_t;
161 
162 
163  typedef double lowp_f64;
164  typedef double mediump_f64;
165  typedef double highp_f64;
166  typedef double f64;
167 
168  typedef double lowp_float64;
169  typedef double mediump_float64;
170  typedef double highp_float64;
171  typedef double float64;
172 
173  typedef double lowp_float64_t;
174  typedef double mediump_float64_t;
175  typedef double highp_float64_t;
176  typedef double float64_t;
177 
178  // Vector bool
179 
180  typedef vec<1, bool, lowp> lowp_bvec1;
181  typedef vec<2, bool, lowp> lowp_bvec2;
182  typedef vec<3, bool, lowp> lowp_bvec3;
183  typedef vec<4, bool, lowp> lowp_bvec4;
184 
185  typedef vec<1, bool, mediump> mediump_bvec1;
186  typedef vec<2, bool, mediump> mediump_bvec2;
187  typedef vec<3, bool, mediump> mediump_bvec3;
188  typedef vec<4, bool, mediump> mediump_bvec4;
189 
190  typedef vec<1, bool, highp> highp_bvec1;
191  typedef vec<2, bool, highp> highp_bvec2;
192  typedef vec<3, bool, highp> highp_bvec3;
193  typedef vec<4, bool, highp> highp_bvec4;
194 
195  typedef vec<1, bool, defaultp> bvec1;
196  typedef vec<2, bool, defaultp> bvec2;
197  typedef vec<3, bool, defaultp> bvec3;
198  typedef vec<4, bool, defaultp> bvec4;
199 
200  // Vector int
201 
202  typedef vec<1, i32, lowp> lowp_ivec1;
203  typedef vec<2, i32, lowp> lowp_ivec2;
204  typedef vec<3, i32, lowp> lowp_ivec3;
205  typedef vec<4, i32, lowp> lowp_ivec4;
206 
207  typedef vec<1, i32, mediump> mediump_ivec1;
208  typedef vec<2, i32, mediump> mediump_ivec2;
209  typedef vec<3, i32, mediump> mediump_ivec3;
210  typedef vec<4, i32, mediump> mediump_ivec4;
211 
212  typedef vec<1, i32, highp> highp_ivec1;
213  typedef vec<2, i32, highp> highp_ivec2;
214  typedef vec<3, i32, highp> highp_ivec3;
215  typedef vec<4, i32, highp> highp_ivec4;
216 
217  typedef vec<1, i32, defaultp> ivec1;
218  typedef vec<2, i32, defaultp> ivec2;
219  typedef vec<3, i32, defaultp> ivec3;
220  typedef vec<4, i32, defaultp> ivec4;
221 
222  typedef vec<1, i8, lowp> lowp_i8vec1;
223  typedef vec<2, i8, lowp> lowp_i8vec2;
224  typedef vec<3, i8, lowp> lowp_i8vec3;
225  typedef vec<4, i8, lowp> lowp_i8vec4;
226 
227  typedef vec<1, i8, mediump> mediump_i8vec1;
228  typedef vec<2, i8, mediump> mediump_i8vec2;
229  typedef vec<3, i8, mediump> mediump_i8vec3;
230  typedef vec<4, i8, mediump> mediump_i8vec4;
231 
232  typedef vec<1, i8, highp> highp_i8vec1;
233  typedef vec<2, i8, highp> highp_i8vec2;
234  typedef vec<3, i8, highp> highp_i8vec3;
235  typedef vec<4, i8, highp> highp_i8vec4;
236 
237  typedef vec<1, i8, defaultp> i8vec1;
238  typedef vec<2, i8, defaultp> i8vec2;
239  typedef vec<3, i8, defaultp> i8vec3;
240  typedef vec<4, i8, defaultp> i8vec4;
241 
242  typedef vec<1, i16, lowp> lowp_i16vec1;
243  typedef vec<2, i16, lowp> lowp_i16vec2;
244  typedef vec<3, i16, lowp> lowp_i16vec3;
245  typedef vec<4, i16, lowp> lowp_i16vec4;
246 
247  typedef vec<1, i16, mediump> mediump_i16vec1;
248  typedef vec<2, i16, mediump> mediump_i16vec2;
249  typedef vec<3, i16, mediump> mediump_i16vec3;
250  typedef vec<4, i16, mediump> mediump_i16vec4;
251 
252  typedef vec<1, i16, highp> highp_i16vec1;
253  typedef vec<2, i16, highp> highp_i16vec2;
254  typedef vec<3, i16, highp> highp_i16vec3;
255  typedef vec<4, i16, highp> highp_i16vec4;
256 
257  typedef vec<1, i16, defaultp> i16vec1;
258  typedef vec<2, i16, defaultp> i16vec2;
259  typedef vec<3, i16, defaultp> i16vec3;
260  typedef vec<4, i16, defaultp> i16vec4;
261 
262  typedef vec<1, i32, lowp> lowp_i32vec1;
263  typedef vec<2, i32, lowp> lowp_i32vec2;
264  typedef vec<3, i32, lowp> lowp_i32vec3;
265  typedef vec<4, i32, lowp> lowp_i32vec4;
266 
267  typedef vec<1, i32, mediump> mediump_i32vec1;
268  typedef vec<2, i32, mediump> mediump_i32vec2;
269  typedef vec<3, i32, mediump> mediump_i32vec3;
270  typedef vec<4, i32, mediump> mediump_i32vec4;
271 
272  typedef vec<1, i32, highp> highp_i32vec1;
273  typedef vec<2, i32, highp> highp_i32vec2;
274  typedef vec<3, i32, highp> highp_i32vec3;
275  typedef vec<4, i32, highp> highp_i32vec4;
276 
277  typedef vec<1, i32, defaultp> i32vec1;
278  typedef vec<2, i32, defaultp> i32vec2;
279  typedef vec<3, i32, defaultp> i32vec3;
280  typedef vec<4, i32, defaultp> i32vec4;
281 
282  typedef vec<1, i64, lowp> lowp_i64vec1;
283  typedef vec<2, i64, lowp> lowp_i64vec2;
284  typedef vec<3, i64, lowp> lowp_i64vec3;
285  typedef vec<4, i64, lowp> lowp_i64vec4;
286 
287  typedef vec<1, i64, mediump> mediump_i64vec1;
288  typedef vec<2, i64, mediump> mediump_i64vec2;
289  typedef vec<3, i64, mediump> mediump_i64vec3;
290  typedef vec<4, i64, mediump> mediump_i64vec4;
291 
292  typedef vec<1, i64, highp> highp_i64vec1;
293  typedef vec<2, i64, highp> highp_i64vec2;
294  typedef vec<3, i64, highp> highp_i64vec3;
295  typedef vec<4, i64, highp> highp_i64vec4;
296 
297  typedef vec<1, i64, defaultp> i64vec1;
298  typedef vec<2, i64, defaultp> i64vec2;
299  typedef vec<3, i64, defaultp> i64vec3;
300  typedef vec<4, i64, defaultp> i64vec4;
301 
302  // Vector uint
303 
304  typedef vec<1, u32, lowp> lowp_uvec1;
305  typedef vec<2, u32, lowp> lowp_uvec2;
306  typedef vec<3, u32, lowp> lowp_uvec3;
307  typedef vec<4, u32, lowp> lowp_uvec4;
308 
309  typedef vec<1, u32, mediump> mediump_uvec1;
310  typedef vec<2, u32, mediump> mediump_uvec2;
311  typedef vec<3, u32, mediump> mediump_uvec3;
312  typedef vec<4, u32, mediump> mediump_uvec4;
313 
314  typedef vec<1, u32, highp> highp_uvec1;
315  typedef vec<2, u32, highp> highp_uvec2;
316  typedef vec<3, u32, highp> highp_uvec3;
317  typedef vec<4, u32, highp> highp_uvec4;
318 
319  typedef vec<1, u32, defaultp> uvec1;
320  typedef vec<2, u32, defaultp> uvec2;
321  typedef vec<3, u32, defaultp> uvec3;
322  typedef vec<4, u32, defaultp> uvec4;
323 
324  typedef vec<1, u8, lowp> lowp_u8vec1;
325  typedef vec<2, u8, lowp> lowp_u8vec2;
326  typedef vec<3, u8, lowp> lowp_u8vec3;
327  typedef vec<4, u8, lowp> lowp_u8vec4;
328 
329  typedef vec<1, u8, mediump> mediump_u8vec1;
330  typedef vec<2, u8, mediump> mediump_u8vec2;
331  typedef vec<3, u8, mediump> mediump_u8vec3;
332  typedef vec<4, u8, mediump> mediump_u8vec4;
333 
334  typedef vec<1, u8, highp> highp_u8vec1;
335  typedef vec<2, u8, highp> highp_u8vec2;
336  typedef vec<3, u8, highp> highp_u8vec3;
337  typedef vec<4, u8, highp> highp_u8vec4;
338 
339  typedef vec<1, u8, defaultp> u8vec1;
340  typedef vec<2, u8, defaultp> u8vec2;
341  typedef vec<3, u8, defaultp> u8vec3;
342  typedef vec<4, u8, defaultp> u8vec4;
343 
344  typedef vec<1, u16, lowp> lowp_u16vec1;
345  typedef vec<2, u16, lowp> lowp_u16vec2;
346  typedef vec<3, u16, lowp> lowp_u16vec3;
347  typedef vec<4, u16, lowp> lowp_u16vec4;
348 
349  typedef vec<1, u16, mediump> mediump_u16vec1;
350  typedef vec<2, u16, mediump> mediump_u16vec2;
351  typedef vec<3, u16, mediump> mediump_u16vec3;
352  typedef vec<4, u16, mediump> mediump_u16vec4;
353 
354  typedef vec<1, u16, highp> highp_u16vec1;
355  typedef vec<2, u16, highp> highp_u16vec2;
356  typedef vec<3, u16, highp> highp_u16vec3;
357  typedef vec<4, u16, highp> highp_u16vec4;
358 
359  typedef vec<1, u16, defaultp> u16vec1;
360  typedef vec<2, u16, defaultp> u16vec2;
361  typedef vec<3, u16, defaultp> u16vec3;
362  typedef vec<4, u16, defaultp> u16vec4;
363 
364  typedef vec<1, u32, lowp> lowp_u32vec1;
365  typedef vec<2, u32, lowp> lowp_u32vec2;
366  typedef vec<3, u32, lowp> lowp_u32vec3;
367  typedef vec<4, u32, lowp> lowp_u32vec4;
368 
369  typedef vec<1, u32, mediump> mediump_u32vec1;
370  typedef vec<2, u32, mediump> mediump_u32vec2;
371  typedef vec<3, u32, mediump> mediump_u32vec3;
372  typedef vec<4, u32, mediump> mediump_u32vec4;
373 
374  typedef vec<1, u32, highp> highp_u32vec1;
375  typedef vec<2, u32, highp> highp_u32vec2;
376  typedef vec<3, u32, highp> highp_u32vec3;
377  typedef vec<4, u32, highp> highp_u32vec4;
378 
379  typedef vec<1, u32, defaultp> u32vec1;
380  typedef vec<2, u32, defaultp> u32vec2;
381  typedef vec<3, u32, defaultp> u32vec3;
382  typedef vec<4, u32, defaultp> u32vec4;
383 
384  typedef vec<1, u64, lowp> lowp_u64vec1;
385  typedef vec<2, u64, lowp> lowp_u64vec2;
386  typedef vec<3, u64, lowp> lowp_u64vec3;
387  typedef vec<4, u64, lowp> lowp_u64vec4;
388 
389  typedef vec<1, u64, mediump> mediump_u64vec1;
390  typedef vec<2, u64, mediump> mediump_u64vec2;
391  typedef vec<3, u64, mediump> mediump_u64vec3;
392  typedef vec<4, u64, mediump> mediump_u64vec4;
393 
394  typedef vec<1, u64, highp> highp_u64vec1;
395  typedef vec<2, u64, highp> highp_u64vec2;
396  typedef vec<3, u64, highp> highp_u64vec3;
397  typedef vec<4, u64, highp> highp_u64vec4;
398 
399  typedef vec<1, u64, defaultp> u64vec1;
400  typedef vec<2, u64, defaultp> u64vec2;
401  typedef vec<3, u64, defaultp> u64vec3;
402  typedef vec<4, u64, defaultp> u64vec4;
403 
404  // Vector float
405 
406  typedef vec<1, float, lowp> lowp_vec1;
407  typedef vec<2, float, lowp> lowp_vec2;
408  typedef vec<3, float, lowp> lowp_vec3;
409  typedef vec<4, float, lowp> lowp_vec4;
410 
411  typedef vec<1, float, mediump> mediump_vec1;
412  typedef vec<2, float, mediump> mediump_vec2;
413  typedef vec<3, float, mediump> mediump_vec3;
414  typedef vec<4, float, mediump> mediump_vec4;
415 
416  typedef vec<1, float, highp> highp_vec1;
417  typedef vec<2, float, highp> highp_vec2;
418  typedef vec<3, float, highp> highp_vec3;
419  typedef vec<4, float, highp> highp_vec4;
420 
421  typedef vec<1, float, defaultp> vec1;
422  typedef vec<2, float, defaultp> vec2;
423  typedef vec<3, float, defaultp> vec3;
424  typedef vec<4, float, defaultp> vec4;
425 
426  typedef vec<1, float, lowp> lowp_fvec1;
427  typedef vec<2, float, lowp> lowp_fvec2;
428  typedef vec<3, float, lowp> lowp_fvec3;
429  typedef vec<4, float, lowp> lowp_fvec4;
430 
431  typedef vec<1, float, mediump> mediump_fvec1;
432  typedef vec<2, float, mediump> mediump_fvec2;
433  typedef vec<3, float, mediump> mediump_fvec3;
434  typedef vec<4, float, mediump> mediump_fvec4;
435 
436  typedef vec<1, float, highp> highp_fvec1;
437  typedef vec<2, float, highp> highp_fvec2;
438  typedef vec<3, float, highp> highp_fvec3;
439  typedef vec<4, float, highp> highp_fvec4;
440 
441  typedef vec<1, f32, defaultp> fvec1;
442  typedef vec<2, f32, defaultp> fvec2;
443  typedef vec<3, f32, defaultp> fvec3;
444  typedef vec<4, f32, defaultp> fvec4;
445 
446  typedef vec<1, f32, lowp> lowp_f32vec1;
447  typedef vec<2, f32, lowp> lowp_f32vec2;
448  typedef vec<3, f32, lowp> lowp_f32vec3;
449  typedef vec<4, f32, lowp> lowp_f32vec4;
450 
451  typedef vec<1, f32, mediump> mediump_f32vec1;
452  typedef vec<2, f32, mediump> mediump_f32vec2;
453  typedef vec<3, f32, mediump> mediump_f32vec3;
454  typedef vec<4, f32, mediump> mediump_f32vec4;
455 
456  typedef vec<1, f32, highp> highp_f32vec1;
457  typedef vec<2, f32, highp> highp_f32vec2;
458  typedef vec<3, f32, highp> highp_f32vec3;
459  typedef vec<4, f32, highp> highp_f32vec4;
460 
461  typedef vec<1, f32, defaultp> f32vec1;
462  typedef vec<2, f32, defaultp> f32vec2;
463  typedef vec<3, f32, defaultp> f32vec3;
464  typedef vec<4, f32, defaultp> f32vec4;
465 
466  typedef vec<1, f64, lowp> lowp_dvec1;
467  typedef vec<2, f64, lowp> lowp_dvec2;
468  typedef vec<3, f64, lowp> lowp_dvec3;
469  typedef vec<4, f64, lowp> lowp_dvec4;
470 
471  typedef vec<1, f64, mediump> mediump_dvec1;
472  typedef vec<2, f64, mediump> mediump_dvec2;
473  typedef vec<3, f64, mediump> mediump_dvec3;
474  typedef vec<4, f64, mediump> mediump_dvec4;
475 
476  typedef vec<1, f64, highp> highp_dvec1;
477  typedef vec<2, f64, highp> highp_dvec2;
478  typedef vec<3, f64, highp> highp_dvec3;
479  typedef vec<4, f64, highp> highp_dvec4;
480 
481  typedef vec<1, f64, defaultp> dvec1;
482  typedef vec<2, f64, defaultp> dvec2;
483  typedef vec<3, f64, defaultp> dvec3;
484  typedef vec<4, f64, defaultp> dvec4;
485 
486  typedef vec<1, f64, lowp> lowp_f64vec1;
487  typedef vec<2, f64, lowp> lowp_f64vec2;
488  typedef vec<3, f64, lowp> lowp_f64vec3;
489  typedef vec<4, f64, lowp> lowp_f64vec4;
490 
491  typedef vec<1, f64, mediump> mediump_f64vec1;
492  typedef vec<2, f64, mediump> mediump_f64vec2;
493  typedef vec<3, f64, mediump> mediump_f64vec3;
494  typedef vec<4, f64, mediump> mediump_f64vec4;
495 
496  typedef vec<1, f64, highp> highp_f64vec1;
497  typedef vec<2, f64, highp> highp_f64vec2;
498  typedef vec<3, f64, highp> highp_f64vec3;
499  typedef vec<4, f64, highp> highp_f64vec4;
500 
501  typedef vec<1, f64, defaultp> f64vec1;
502  typedef vec<2, f64, defaultp> f64vec2;
503  typedef vec<3, f64, defaultp> f64vec3;
504  typedef vec<4, f64, defaultp> f64vec4;
505 
506  // Matrix NxN
507 
508  typedef mat<2, 2, f32, lowp> lowp_mat2;
509  typedef mat<3, 3, f32, lowp> lowp_mat3;
510  typedef mat<4, 4, f32, lowp> lowp_mat4;
511 
512  typedef mat<2, 2, f32, mediump> mediump_mat2;
513  typedef mat<3, 3, f32, mediump> mediump_mat3;
514  typedef mat<4, 4, f32, mediump> mediump_mat4;
515 
516  typedef mat<2, 2, f32, highp> highp_mat2;
517  typedef mat<3, 3, f32, highp> highp_mat3;
518  typedef mat<4, 4, f32, highp> highp_mat4;
519 
520  typedef mat<2, 2, f32, defaultp> mat2;
521  typedef mat<3, 3, f32, defaultp> mat3;
522  typedef mat<4, 4, f32, defaultp> mat4;
523 
524  typedef mat<2, 2, f32, lowp> lowp_fmat2;
525  typedef mat<3, 3, f32, lowp> lowp_fmat3;
526  typedef mat<4, 4, f32, lowp> lowp_fmat4;
527 
528  typedef mat<2, 2, f32, mediump> mediump_fmat2;
529  typedef mat<3, 3, f32, mediump> mediump_fmat3;
530  typedef mat<4, 4, f32, mediump> mediump_fmat4;
531 
532  typedef mat<2, 2, f32, highp> highp_fmat2;
533  typedef mat<3, 3, f32, highp> highp_fmat3;
534  typedef mat<4, 4, f32, highp> highp_fmat4;
535 
536  typedef mat<2, 2, f32, defaultp> fmat2;
537  typedef mat<3, 3, f32, defaultp> fmat3;
538  typedef mat<4, 4, f32, defaultp> fmat4;
539 
540  typedef mat<2, 2, f32, lowp> lowp_f32mat2;
541  typedef mat<3, 3, f32, lowp> lowp_f32mat3;
542  typedef mat<4, 4, f32, lowp> lowp_f32mat4;
543 
544  typedef mat<2, 2, f32, mediump> mediump_f32mat2;
545  typedef mat<3, 3, f32, mediump> mediump_f32mat3;
546  typedef mat<4, 4, f32, mediump> mediump_f32mat4;
547 
548  typedef mat<2, 2, f32, highp> highp_f32mat2;
549  typedef mat<3, 3, f32, highp> highp_f32mat3;
	typedef mat<4, 4, f32, highp> highp_f32mat4;

	typedef mat<2, 2, f32, defaultp> f32mat2;
	typedef mat<3, 3, f32, defaultp> f32mat3;
	typedef mat<4, 4, f32, defaultp> f32mat4;

	typedef mat<2, 2, f64, lowp> lowp_dmat2;
	typedef mat<3, 3, f64, lowp> lowp_dmat3;
	typedef mat<4, 4, f64, lowp> lowp_dmat4;

	typedef mat<2, 2, f64, mediump> mediump_dmat2;
	typedef mat<3, 3, f64, mediump> mediump_dmat3;
	typedef mat<4, 4, f64, mediump> mediump_dmat4;

	typedef mat<2, 2, f64, highp> highp_dmat2;
	typedef mat<3, 3, f64, highp> highp_dmat3;
	typedef mat<4, 4, f64, highp> highp_dmat4;

	typedef mat<2, 2, f64, defaultp> dmat2;
	typedef mat<3, 3, f64, defaultp> dmat3;
	typedef mat<4, 4, f64, defaultp> dmat4;

	typedef mat<2, 2, f64, lowp> lowp_f64mat2;
	typedef mat<3, 3, f64, lowp> lowp_f64mat3;
	typedef mat<4, 4, f64, lowp> lowp_f64mat4;

	typedef mat<2, 2, f64, mediump> mediump_f64mat2;
	typedef mat<3, 3, f64, mediump> mediump_f64mat3;
	typedef mat<4, 4, f64, mediump> mediump_f64mat4;

	typedef mat<2, 2, f64, highp> highp_f64mat2;
	typedef mat<3, 3, f64, highp> highp_f64mat3;
	typedef mat<4, 4, f64, highp> highp_f64mat4;

	typedef mat<2, 2, f64, defaultp> f64mat2;
	typedef mat<3, 3, f64, defaultp> f64mat3;
	typedef mat<4, 4, f64, defaultp> f64mat4;

	// Matrix MxN

	typedef mat<2, 2, f32, lowp> lowp_mat2x2;
	typedef mat<2, 3, f32, lowp> lowp_mat2x3;
	typedef mat<2, 4, f32, lowp> lowp_mat2x4;
	typedef mat<3, 2, f32, lowp> lowp_mat3x2;
	typedef mat<3, 3, f32, lowp> lowp_mat3x3;
	typedef mat<3, 4, f32, lowp> lowp_mat3x4;
	typedef mat<4, 2, f32, lowp> lowp_mat4x2;
	typedef mat<4, 3, f32, lowp> lowp_mat4x3;
	typedef mat<4, 4, f32, lowp> lowp_mat4x4;

	typedef mat<2, 2, f32, mediump> mediump_mat2x2;
	typedef mat<2, 3, f32, mediump> mediump_mat2x3;
	typedef mat<2, 4, f32, mediump> mediump_mat2x4;
	typedef mat<3, 2, f32, mediump> mediump_mat3x2;
	typedef mat<3, 3, f32, mediump> mediump_mat3x3;
	typedef mat<3, 4, f32, mediump> mediump_mat3x4;
	typedef mat<4, 2, f32, mediump> mediump_mat4x2;
	typedef mat<4, 3, f32, mediump> mediump_mat4x3;
	typedef mat<4, 4, f32, mediump> mediump_mat4x4;

	typedef mat<2, 2, f32, highp> highp_mat2x2;
	typedef mat<2, 3, f32, highp> highp_mat2x3;
	typedef mat<2, 4, f32, highp> highp_mat2x4;
	typedef mat<3, 2, f32, highp> highp_mat3x2;
	typedef mat<3, 3, f32, highp> highp_mat3x3;
	typedef mat<3, 4, f32, highp> highp_mat3x4;
	typedef mat<4, 2, f32, highp> highp_mat4x2;
	typedef mat<4, 3, f32, highp> highp_mat4x3;
	typedef mat<4, 4, f32, highp> highp_mat4x4;

	typedef mat<2, 2, f32, defaultp> mat2x2;
	typedef mat<3, 2, f32, defaultp> mat3x2;
	typedef mat<4, 2, f32, defaultp> mat4x2;
	typedef mat<2, 3, f32, defaultp> mat2x3;
	typedef mat<3, 3, f32, defaultp> mat3x3;
	typedef mat<4, 3, f32, defaultp> mat4x3;
	typedef mat<2, 4, f32, defaultp> mat2x4;
	typedef mat<3, 4, f32, defaultp> mat3x4;
	typedef mat<4, 4, f32, defaultp> mat4x4;

	typedef mat<2, 2, f32, lowp> lowp_fmat2x2;
	typedef mat<2, 3, f32, lowp> lowp_fmat2x3;
	typedef mat<2, 4, f32, lowp> lowp_fmat2x4;
	typedef mat<3, 2, f32, lowp> lowp_fmat3x2;
	typedef mat<3, 3, f32, lowp> lowp_fmat3x3;
	typedef mat<3, 4, f32, lowp> lowp_fmat3x4;
	typedef mat<4, 2, f32, lowp> lowp_fmat4x2;
	typedef mat<4, 3, f32, lowp> lowp_fmat4x3;
	typedef mat<4, 4, f32, lowp> lowp_fmat4x4;

	typedef mat<2, 2, f32, mediump> mediump_fmat2x2;
	typedef mat<2, 3, f32, mediump> mediump_fmat2x3;
	typedef mat<2, 4, f32, mediump> mediump_fmat2x4;
	typedef mat<3, 2, f32, mediump> mediump_fmat3x2;
	typedef mat<3, 3, f32, mediump> mediump_fmat3x3;
	typedef mat<3, 4, f32, mediump> mediump_fmat3x4;
	typedef mat<4, 2, f32, mediump> mediump_fmat4x2;
	typedef mat<4, 3, f32, mediump> mediump_fmat4x3;
	typedef mat<4, 4, f32, mediump> mediump_fmat4x4;

	typedef mat<2, 2, f32, highp> highp_fmat2x2;
	typedef mat<2, 3, f32, highp> highp_fmat2x3;
	typedef mat<2, 4, f32, highp> highp_fmat2x4;
	typedef mat<3, 2, f32, highp> highp_fmat3x2;
	typedef mat<3, 3, f32, highp> highp_fmat3x3;
	typedef mat<3, 4, f32, highp> highp_fmat3x4;
	typedef mat<4, 2, f32, highp> highp_fmat4x2;
	typedef mat<4, 3, f32, highp> highp_fmat4x3;
	typedef mat<4, 4, f32, highp> highp_fmat4x4;

	typedef mat<2, 2, f32, defaultp> fmat2x2;
	typedef mat<3, 2, f32, defaultp> fmat3x2;
	typedef mat<4, 2, f32, defaultp> fmat4x2;
	typedef mat<2, 3, f32, defaultp> fmat2x3;
	typedef mat<3, 3, f32, defaultp> fmat3x3;
	typedef mat<4, 3, f32, defaultp> fmat4x3;
	typedef mat<2, 4, f32, defaultp> fmat2x4;
	typedef mat<3, 4, f32, defaultp> fmat3x4;
	typedef mat<4, 4, f32, defaultp> fmat4x4;

	typedef mat<2, 2, f32, lowp> lowp_f32mat2x2;
	typedef mat<2, 3, f32, lowp> lowp_f32mat2x3;
	typedef mat<2, 4, f32, lowp> lowp_f32mat2x4;
	typedef mat<3, 2, f32, lowp> lowp_f32mat3x2;
	typedef mat<3, 3, f32, lowp> lowp_f32mat3x3;
	typedef mat<3, 4, f32, lowp> lowp_f32mat3x4;
	typedef mat<4, 2, f32, lowp> lowp_f32mat4x2;
	typedef mat<4, 3, f32, lowp> lowp_f32mat4x3;
	typedef mat<4, 4, f32, lowp> lowp_f32mat4x4;

	typedef mat<2, 2, f32, mediump> mediump_f32mat2x2;
	typedef mat<2, 3, f32, mediump> mediump_f32mat2x3;
	typedef mat<2, 4, f32, mediump> mediump_f32mat2x4;
	typedef mat<3, 2, f32, mediump> mediump_f32mat3x2;
	typedef mat<3, 3, f32, mediump> mediump_f32mat3x3;
	typedef mat<3, 4, f32, mediump> mediump_f32mat3x4;
	typedef mat<4, 2, f32, mediump> mediump_f32mat4x2;
	typedef mat<4, 3, f32, mediump> mediump_f32mat4x3;
	typedef mat<4, 4, f32, mediump> mediump_f32mat4x4;

	typedef mat<2, 2, f32, highp> highp_f32mat2x2;
	typedef mat<2, 3, f32, highp> highp_f32mat2x3;
	typedef mat<2, 4, f32, highp> highp_f32mat2x4;
	typedef mat<3, 2, f32, highp> highp_f32mat3x2;
	typedef mat<3, 3, f32, highp> highp_f32mat3x3;
	typedef mat<3, 4, f32, highp> highp_f32mat3x4;
	typedef mat<4, 2, f32, highp> highp_f32mat4x2;
	typedef mat<4, 3, f32, highp> highp_f32mat4x3;
	typedef mat<4, 4, f32, highp> highp_f32mat4x4;

	typedef mat<2, 2, f32, defaultp> f32mat2x2;
	typedef mat<3, 2, f32, defaultp> f32mat3x2;
	typedef mat<4, 2, f32, defaultp> f32mat4x2;
	typedef mat<2, 3, f32, defaultp> f32mat2x3;
	typedef mat<3, 3, f32, defaultp> f32mat3x3;
	typedef mat<4, 3, f32, defaultp> f32mat4x3;
	typedef mat<2, 4, f32, defaultp> f32mat2x4;
	typedef mat<3, 4, f32, defaultp> f32mat3x4;
	typedef mat<4, 4, f32, defaultp> f32mat4x4;

	typedef mat<2, 2, double, lowp> lowp_dmat2x2;
	typedef mat<2, 3, double, lowp> lowp_dmat2x3;
	typedef mat<2, 4, double, lowp> lowp_dmat2x4;
	typedef mat<3, 2, double, lowp> lowp_dmat3x2;
	typedef mat<3, 3, double, lowp> lowp_dmat3x3;
	typedef mat<3, 4, double, lowp> lowp_dmat3x4;
	typedef mat<4, 2, double, lowp> lowp_dmat4x2;
	typedef mat<4, 3, double, lowp> lowp_dmat4x3;
	typedef mat<4, 4, double, lowp> lowp_dmat4x4;

	typedef mat<2, 2, double, mediump> mediump_dmat2x2;
	typedef mat<2, 3, double, mediump> mediump_dmat2x3;
	typedef mat<2, 4, double, mediump> mediump_dmat2x4;
	typedef mat<3, 2, double, mediump> mediump_dmat3x2;
	typedef mat<3, 3, double, mediump> mediump_dmat3x3;
	typedef mat<3, 4, double, mediump> mediump_dmat3x4;
	typedef mat<4, 2, double, mediump> mediump_dmat4x2;
	typedef mat<4, 3, double, mediump> mediump_dmat4x3;
	typedef mat<4, 4, double, mediump> mediump_dmat4x4;

	typedef mat<2, 2, double, highp> highp_dmat2x2;
	typedef mat<2, 3, double, highp> highp_dmat2x3;
	typedef mat<2, 4, double, highp> highp_dmat2x4;
	typedef mat<3, 2, double, highp> highp_dmat3x2;
	typedef mat<3, 3, double, highp> highp_dmat3x3;
	typedef mat<3, 4, double, highp> highp_dmat3x4;
	typedef mat<4, 2, double, highp> highp_dmat4x2;
	typedef mat<4, 3, double, highp> highp_dmat4x3;
	typedef mat<4, 4, double, highp> highp_dmat4x4;

	typedef mat<2, 2, double, defaultp> dmat2x2;
	typedef mat<3, 2, double, defaultp> dmat3x2;
	typedef mat<4, 2, double, defaultp> dmat4x2;
	typedef mat<2, 3, double, defaultp> dmat2x3;
	typedef mat<3, 3, double, defaultp> dmat3x3;
	typedef mat<4, 3, double, defaultp> dmat4x3;
	typedef mat<2, 4, double, defaultp> dmat2x4;
	typedef mat<3, 4, double, defaultp> dmat3x4;
	typedef mat<4, 4, double, defaultp> dmat4x4;

	typedef mat<2, 2, f64, lowp> lowp_f64mat2x2;
	typedef mat<2, 3, f64, lowp> lowp_f64mat2x3;
	typedef mat<2, 4, f64, lowp> lowp_f64mat2x4;
	typedef mat<3, 2, f64, lowp> lowp_f64mat3x2;
	typedef mat<3, 3, f64, lowp> lowp_f64mat3x3;
	typedef mat<3, 4, f64, lowp> lowp_f64mat3x4;
	typedef mat<4, 2, f64, lowp> lowp_f64mat4x2;
	typedef mat<4, 3, f64, lowp> lowp_f64mat4x3;
	typedef mat<4, 4, f64, lowp> lowp_f64mat4x4;

	typedef mat<2, 2, f64, mediump> mediump_f64mat2x2;
	typedef mat<2, 3, f64, mediump> mediump_f64mat2x3;
	typedef mat<2, 4, f64, mediump> mediump_f64mat2x4;
	typedef mat<3, 2, f64, mediump> mediump_f64mat3x2;
	typedef mat<3, 3, f64, mediump> mediump_f64mat3x3;
	typedef mat<3, 4, f64, mediump> mediump_f64mat3x4;
	typedef mat<4, 2, f64, mediump> mediump_f64mat4x2;
	typedef mat<4, 3, f64, mediump> mediump_f64mat4x3;
	typedef mat<4, 4, f64, mediump> mediump_f64mat4x4;

	typedef mat<2, 2, f64, highp> highp_f64mat2x2;
	typedef mat<2, 3, f64, highp> highp_f64mat2x3;
	typedef mat<2, 4, f64, highp> highp_f64mat2x4;
	typedef mat<3, 2, f64, highp> highp_f64mat3x2;
	typedef mat<3, 3, f64, highp> highp_f64mat3x3;
	typedef mat<3, 4, f64, highp> highp_f64mat3x4;
	typedef mat<4, 2, f64, highp> highp_f64mat4x2;
	typedef mat<4, 3, f64, highp> highp_f64mat4x3;
	typedef mat<4, 4, f64, highp> highp_f64mat4x4;

	typedef mat<2, 2, f64, defaultp> f64mat2x2;
	typedef mat<3, 2, f64, defaultp> f64mat3x2;
	typedef mat<4, 2, f64, defaultp> f64mat4x2;
	typedef mat<2, 3, f64, defaultp> f64mat2x3;
	typedef mat<3, 3, f64, defaultp> f64mat3x3;
	typedef mat<4, 3, f64, defaultp> f64mat4x3;
	typedef mat<2, 4, f64, defaultp> f64mat2x4;
	typedef mat<3, 4, f64, defaultp> f64mat3x4;
	typedef mat<4, 4, f64, defaultp> f64mat4x4;

	// Quaternion

	typedef qua<float, lowp> lowp_quat;
	typedef qua<float, mediump> mediump_quat;
	typedef qua<float, highp> highp_quat;
	typedef qua<float, defaultp> quat;

	typedef qua<float, lowp> lowp_fquat;
	typedef qua<float, mediump> mediump_fquat;
	typedef qua<float, highp> highp_fquat;
	typedef qua<float, defaultp> fquat;

	typedef qua<f32, lowp> lowp_f32quat;
	typedef qua<f32, mediump> mediump_f32quat;
	typedef qua<f32, highp> highp_f32quat;
	typedef qua<f32, defaultp> f32quat;

	typedef qua<double, lowp> lowp_dquat;
	typedef qua<double, mediump> mediump_dquat;
	typedef qua<double, highp> highp_dquat;
	typedef qua<double, defaultp> dquat;

	typedef qua<f64, lowp> lowp_f64quat;
	typedef qua<f64, mediump> mediump_f64quat;
	typedef qua<f64, highp> highp_f64quat;
	typedef qua<f64, defaultp> f64quat;
}//namespace glm
vec< 1, u16, highp > highp_u16vec1
High qualifier 16 bit unsigned integer scalar type.
Definition: fwd.hpp:354
mat< 4, 2, float, mediump > mediump_mat4x2
4 columns of 2 components matrix of single-precision floating-point numbers using medium precision ar...
mat< 4, 2, f32, highp > highp_f32mat4x2
High single-qualifier floating-point 4x2 matrix.
Definition: fwd.hpp:696
mat< 4, 3, float, highp > highp_mat4x3
4 columns of 3 components matrix of single-precision floating-point numbers using high precision arit...
mat< 4, 4, float, defaultp > mat4x4
4 columns of 4 components matrix of single-precision floating-point numbers.
vec< 4, unsigned int, mediump > mediump_uvec4
4 components vector of medium qualifier unsigned integer numbers.
uint64 highp_u64
High qualifier 64 bit unsigned integer type.
Definition: fwd.hpp:133
vec< 1, f64, mediump > mediump_f64vec1
Medium double-qualifier floating-point vector of 1 component.
Definition: fwd.hpp:491
vec< 3, f32, defaultp > f32vec3
Single-qualifier floating-point vector of 3 components.
Definition: fwd.hpp:463
mat< 2, 2, f32, mediump > mediump_fmat2
Medium single-qualifier floating-point 1x1 matrix.
Definition: fwd.hpp:528
double highp_float64_t
High 64 bit double-qualifier floating-point scalar.
Definition: fwd.hpp:175
mat< 4, 4, f64, defaultp > f64mat4
Double-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:586
vec< 1, int, mediump > mediump_ivec1
1 component vector of signed integer values.
vec< 4, double, mediump > mediump_dvec4
4 components vector of medium double-qualifier floating-point numbers.
vec< 3, float, highp > highp_vec3
3 components vector of high single-qualifier floating-point numbers.
mat< 4, 2, double, lowp > lowp_dmat4x2
4 columns of 2 components matrix of double-precision floating-point numbers using low precision arith...
mat< 2, 2, float, defaultp > mat2x2
2 columns of 2 components matrix of single-precision floating-point numbers.
mat< 2, 2, f64, defaultp > f64mat2
Double-qualifier floating-point 1x1 matrix.
Definition: fwd.hpp:584
mat< 4, 3, f32, mediump > mediump_fmat4x3
Medium single-qualifier floating-point 4x3 matrix.
Definition: fwd.hpp:647
mat< 3, 3, f32, mediump > mediump_f32mat3
Medium single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:545
uint32 mediump_uint32_t
Medium qualifier 32 bit unsigned integer type.
Definition: fwd.hpp:127
uint64 lowp_uint64
Low qualifier 64 bit unsigned integer type.
Definition: fwd.hpp:136
mat< 3, 3, float, mediump > mediump_mat3x3
3 columns of 3 components matrix of single-precision floating-point numbers using medium precision ar...
mat< 2, 2, f32, mediump > mediump_fmat2x2
Medium single-qualifier floating-point 1x1 matrix.
Definition: fwd.hpp:640
vec< 1, f32, defaultp > f32vec1
Single-qualifier floating-point vector of 1 component.
Definition: fwd.hpp:461
mat< 4, 4, f32, highp > highp_f32mat4
High single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:550
qua< float, highp > highp_quat
Quaternion of single-precision floating-point numbers using high precision arithmetic in term of ULPs...
double highp_float64
High 64 bit double-qualifier floating-point scalar.
Definition: fwd.hpp:170
mat< 3, 2, double, mediump > mediump_dmat3x2
3 columns of 2 components matrix of double-precision floating-point numbers using medium precision ar...
uint8 lowp_u8
Low qualifier 8 bit unsigned integer type.
Definition: fwd.hpp:89
mat< 3, 2, double, lowp > lowp_dmat3x2
3 columns of 2 components matrix of double-precision floating-point numbers using low precision arith...
uint32 u32
Default qualifier 32 bit unsigned integer type.
Definition: fwd.hpp:120
mat< 3, 3, f64, defaultp > f64mat3
Double-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:585
vec< 2, int, highp > highp_ivec2
2 components vector of high qualifier signed integer numbers.
mat< 4, 3, double, highp > highp_dmat4x3
4 columns of 3 components matrix of double-precision floating-point numbers using medium precision ar...
mat< 2, 3, float, mediump > mediump_mat2x3
2 columns of 3 components matrix of single-precision floating-point numbers using medium precision ar...
double lowp_float64
Low 64 bit double-qualifier floating-point scalar.
Definition: fwd.hpp:168
vec< 1, i32, defaultp > i32vec1
32 bit signed integer scalar type.
Definition: fwd.hpp:277
uint16 highp_uint16
High qualifier 16 bit unsigned integer type.
Definition: fwd.hpp:110
mat< 2, 4, f64, mediump > mediump_f64mat2x4
Medium double-qualifier floating-point 2x4 matrix.
Definition: fwd.hpp:762
vec< 4, i64, highp > highp_i64vec4
High qualifier 64 bit signed integer vector of 4 components type.
Definition: fwd.hpp:295
mat< 4, 4, double, mediump > mediump_dmat4x4
4 columns of 4 components matrix of double-precision floating-point numbers using medium precision ar...
mat< 3, 4, f64, defaultp > f64mat3x4
Double-qualifier floating-point 3x4 matrix.
Definition: fwd.hpp:787
vec< 4, double, highp > highp_dvec4
4 components vector of high double-qualifier floating-point numbers.
mat< 2, 2, f32, defaultp > fmat2
Single-qualifier floating-point 1x1 matrix.
Definition: fwd.hpp:536
mat< 3, 4, double, lowp > lowp_dmat3x4
3 columns of 4 components matrix of double-precision floating-point numbers using low precision arith...
vec< 3, i16, defaultp > i16vec3
16 bit signed integer vector of 3 components type.
Definition: fwd.hpp:259
uint32 lowp_uint32_t
Low qualifier 32 bit unsigned integer type.
Definition: fwd.hpp:126
vec< 2, float, lowp > lowp_fvec2
Low single-qualifier floating-point vector of 2 components.
Definition: fwd.hpp:427
uint32 mediump_uint32
Medium qualifier 32 bit unsigned integer type.
Definition: fwd.hpp:123
mat< 4, 4, f32, mediump > mediump_fmat4
Medium single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:530
uint64 highp_uint64
High qualifier 64 bit unsigned integer type.
Definition: fwd.hpp:138
mat< 2, 2, f32, lowp > lowp_fmat2
Low single-qualifier floating-point 1x1 matrix.
Definition: fwd.hpp:524
uint32 lowp_uint32
Low qualifier 32 bit unsigned integer type.
Definition: fwd.hpp:122
vec< 3, float, lowp > lowp_fvec3
Low single-qualifier floating-point vector of 3 components.
Definition: fwd.hpp:428
vec< 2, float, mediump > mediump_fvec2
Medium Single-qualifier floating-point vector of 2 components.
Definition: fwd.hpp:432
mat< 2, 3, float, highp > highp_mat2x3
2 columns of 3 components matrix of single-precision floating-point numbers using high precision arit...
mat< 3, 4, f32, lowp > lowp_fmat3x4
Low single-qualifier floating-point 3x4 matrix.
Definition: fwd.hpp:635
vec< 2, float, defaultp > vec2
2 components vector of single-precision floating-point numbers.
mat< 2, 2, f64, lowp > lowp_f64mat2x2
Low double-qualifier floating-point 1x1 matrix.
Definition: fwd.hpp:750
vec< 4, i64, defaultp > i64vec4
64 bit signed integer vector of 4 components type.
Definition: fwd.hpp:300
vec< 3, u16, defaultp > u16vec3
Default qualifier 16 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:361
vec< 1, u64, lowp > lowp_u64vec1
Low qualifier 64 bit unsigned integer scalar type.
Definition: fwd.hpp:384
mat< 2, 2, double, mediump > mediump_dmat2
2 columns of 2 components matrix of double-precision floating-point numbers using medium precision ar...
vec< 1, u16, mediump > mediump_u16vec1
Medium qualifier 16 bit unsigned integer scalar type.
Definition: fwd.hpp:349
vec< 2, float, highp > highp_vec2
2 components vector of high single-qualifier floating-point numbers.
vec< 2, i8, defaultp > i8vec2
8 bit signed integer vector of 2 components type.
Definition: fwd.hpp:238
mat< 2, 3, f64, mediump > mediump_f64mat2x3
Medium double-qualifier floating-point 2x3 matrix.
Definition: fwd.hpp:761
vec< 4, u32, lowp > lowp_u32vec4
Low qualifier 32 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:367
vec< 4, f32, highp > highp_f32vec4
High single-qualifier floating-point vector of 4 components.
Definition: fwd.hpp:459
vec< 3, unsigned int, defaultp > uvec3
3 components vector of unsigned integer numbers.
vec< 1, f32, lowp > lowp_f32vec1
Low single-qualifier floating-point vector of 1 component.
Definition: fwd.hpp:446
mat< 2, 3, f32, highp > highp_f32mat2x3
High single-qualifier floating-point 2x3 matrix.
Definition: fwd.hpp:691
int64 highp_int64
High qualifier 64 bit signed integer type.
Definition: fwd.hpp:80
vec< 2, i32, mediump > mediump_i32vec2
Medium qualifier 32 bit signed integer vector of 2 components type.
Definition: fwd.hpp:268
vec< 1, double, lowp > lowp_dvec1
1 component vector of double-precision floating-point numbers using low precision arithmetic in term ...
mat< 4, 4, f64, lowp > lowp_f64mat4
Low double-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:574
mat< 4, 4, f32, defaultp > fmat4
Single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:538
mat< 3, 4, f32, mediump > mediump_fmat3x4
Medium single-qualifier floating-point 3x4 matrix.
Definition: fwd.hpp:645
mat< 3, 3, double, lowp > lowp_dmat3
3 columns of 3 components matrix of double-precision floating-point numbers using low precision arith...
int16 lowp_int16_t
Low qualifier 16 bit signed integer type.
Definition: fwd.hpp:54
vec< 4, i32, highp > highp_i32vec4
High qualifier 32 bit signed integer vector of 4 components type.
Definition: fwd.hpp:275
mat< 4, 2, f32, defaultp > f32mat4x2
Single-qualifier floating-point 4x2 matrix.
Definition: fwd.hpp:702
mat< 3, 2, f32, highp > highp_fmat3x2
High single-qualifier floating-point 3x2 matrix.
Definition: fwd.hpp:653
mat< 2, 4, float, defaultp > mat2x4
2 columns of 4 components matrix of single-precision floating-point numbers.
mat< 2, 3, f32, mediump > mediump_fmat2x3
Medium single-qualifier floating-point 2x3 matrix.
Definition: fwd.hpp:641
uint32 mediump_u32
Medium qualifier 32 bit unsigned integer type.
Definition: fwd.hpp:118
mat< 3, 2, f32, lowp > lowp_fmat3x2
Low single-qualifier floating-point 3x2 matrix.
Definition: fwd.hpp:633
mat< 2, 3, float, lowp > lowp_mat2x3
2 columns of 3 components matrix of single-precision floating-point numbers using low precision arith...
mat< 2, 2, float, lowp > lowp_mat2
2 columns of 2 components matrix of single-precision floating-point numbers using low precision arith...
mat< 4, 2, f64, mediump > mediump_f64mat4x2
Medium double-qualifier floating-point 4x2 matrix.
Definition: fwd.hpp:766
vec< 4, bool, lowp > lowp_bvec4
4 components vector of low qualifier bool numbers.
vec< 2, u16, highp > highp_u16vec2
High qualifier 16 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:355
vec< 1, f64, highp > highp_f64vec1
High double-qualifier floating-point vector of 1 component.
Definition: fwd.hpp:496
vec< 3, int, defaultp > ivec3
3 components vector of signed integer numbers.
Definition: vector_int3.hpp:15
vec< 2, i16, mediump > mediump_i16vec2
Medium qualifier 16 bit signed integer vector of 2 components type.
Definition: fwd.hpp:248
mat< 2, 4, f32, highp > highp_fmat2x4
High single-qualifier floating-point 2x4 matrix.
Definition: fwd.hpp:652
vec< 3, u64, defaultp > u64vec3
Default qualifier 64 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:401
uint8 lowp_uint8
Low qualifier 8 bit unsigned integer type.
Definition: fwd.hpp:94
mat< 3, 2, f32, lowp > lowp_f32mat3x2
Low single-qualifier floating-point 3x2 matrix.
Definition: fwd.hpp:673
vec< 4, bool, mediump > mediump_bvec4
4 components vector of medium qualifier bool numbers.
mat< 3, 2, float, defaultp > mat3x2
3 columns of 2 components matrix of single-precision floating-point numbers.
uint64 lowp_u64
Low qualifier 64 bit unsigned integer type.
Definition: fwd.hpp:131
vec< 1, unsigned int, mediump > mediump_uvec1
1 component vector of unsigned integer values.
vec< 3, i64, highp > highp_i64vec3
High qualifier 64 bit signed integer vector of 3 components type.
Definition: fwd.hpp:294
int8 mediump_int8
Medium qualifier 8 bit signed integer type.
Definition: fwd.hpp:37
int64 lowp_int64
Low qualifier 64 bit signed integer type.
Definition: fwd.hpp:78
vec< 1, float, lowp > lowp_vec1
1 component vector of single-precision floating-point numbers using low precision arithmetic in term ...
mat< 4, 2, f32, mediump > mediump_f32mat4x2
Medium single-qualifier floating-point 4x2 matrix.
Definition: fwd.hpp:686
mat< 3, 3, float, highp > highp_mat3x3
3 columns of 3 components matrix of single-precision floating-point numbers using high precision arit...
vec< 3, f64, lowp > lowp_f64vec3
Low double-qualifier floating-point vector of 3 components.
Definition: fwd.hpp:488
mat< 3, 4, float, defaultp > mat3x4
3 columns of 4 components matrix of single-precision floating-point numbers.
mat< 3, 3, float, lowp > lowp_mat3x3
3 columns of 3 components matrix of single-precision floating-point numbers using low precision arith...
vec< 2, u64, defaultp > u64vec2
Default qualifier 64 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:400
vec< 3, i64, lowp > lowp_i64vec3
Low qualifier 64 bit signed integer vector of 3 components type.
Definition: fwd.hpp:284
vec< 2, i8, mediump > mediump_i8vec2
Medium qualifier 8 bit signed integer vector of 2 components type.
Definition: fwd.hpp:228
vec< 4, float, lowp > lowp_vec4
4 components vector of low single-qualifier floating-point numbers.
mat< 4, 3, float, defaultp > mat4x3
4 columns of 3 components matrix of single-precision floating-point numbers.
mat< 3, 4, f32, defaultp > f32mat3x4
Single-qualifier floating-point 3x4 matrix.
Definition: fwd.hpp:707
mat< 4, 2, double, mediump > mediump_dmat4x2
4 columns of 2 components matrix of double-precision floating-point numbers using medium precision ar...
vec< 2, float, lowp > lowp_vec2
2 components vector of low single-qualifier floating-point numbers.
vec< 3, i16, highp > highp_i16vec3
High qualifier 16 bit signed integer vector of 3 components type.
Definition: fwd.hpp:254
mat< 2, 3, double, mediump > mediump_dmat2x3
2 columns of 3 components matrix of double-precision floating-point numbers using medium precision ar...
vec< 3, i16, mediump > mediump_i16vec3
Medium qualifier 16 bit signed integer vector of 3 components type.
Definition: fwd.hpp:249
uint64 u64
Default qualifier 64 bit unsigned integer type.
Definition: fwd.hpp:134
vec< 2, int, mediump > mediump_ivec2
2 components vector of medium qualifier signed integer numbers.
mat< 3, 2, f32, mediump > mediump_fmat3x2
Medium single-qualifier floating-point 3x2 matrix.
Definition: fwd.hpp:643
vec< 1, f64, defaultp > f64vec1
Double-qualifier floating-point vector of 1 component.
Definition: fwd.hpp:501
vec< 1, i64, mediump > mediump_i64vec1
Medium qualifier 64 bit signed integer scalar type.
Definition: fwd.hpp:287
vec< 1, i16, defaultp > i16vec1
16 bit signed integer scalar type.
Definition: fwd.hpp:257
mat< 2, 2, double, lowp > lowp_dmat2
2 columns of 2 components matrix of double-precision floating-point numbers using low precision arith...
mat< 2, 4, double, highp > highp_dmat2x4
2 columns of 4 components matrix of double-precision floating-point numbers using medium precision ar...
mat< 3, 3, f64, lowp > lowp_f64mat3x3
Low double-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:754
vec< 2, f64, lowp > lowp_f64vec2
Low double-qualifier floating-point vector of 2 components.
Definition: fwd.hpp:487
mat< 2, 3, f32, highp > highp_fmat2x3
High single-qualifier floating-point 2x3 matrix.
Definition: fwd.hpp:651
mat< 4, 3, f32, lowp > lowp_f32mat4x3
Low single-qualifier floating-point 4x3 matrix.
Definition: fwd.hpp:677
mat< 3, 3, f64, lowp > lowp_f64mat3
Low double-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:573
vec< 3, u64, mediump > mediump_u64vec3
Medium qualifier 64 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:391
double mediump_float64
Medium 64 bit double-qualifier floating-point scalar.
Definition: fwd.hpp:169
double float64
Double-qualifier floating-point scalar.
Definition: fwd.hpp:171
vec< 2, bool, highp > highp_bvec2
2 components vector of high qualifier bool numbers.
vec< 2, i16, highp > highp_i16vec2
High qualifier 16 bit signed integer vector of 2 components type.
Definition: fwd.hpp:253
mat< 4, 2, f32, defaultp > fmat4x2
Single-qualifier floating-point 4x2 matrix.
Definition: fwd.hpp:662
mat< 2, 3, f64, lowp > lowp_f64mat2x3
Low double-qualifier floating-point 2x3 matrix.
Definition: fwd.hpp:751
mat< 3, 4, f32, defaultp > fmat3x4
Single-qualifier floating-point 3x4 matrix.
Definition: fwd.hpp:667
mat< 3, 3, double, lowp > lowp_dmat3x3
3 columns of 3 components matrix of double-precision floating-point numbers using low precision arith...
vec< 3, u32, lowp > lowp_u32vec3
Low qualifier 32 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:366
mat< 2, 4, f32, defaultp > f32mat2x4
Single-qualifier floating-point 2x4 matrix.
Definition: fwd.hpp:706
vec< 4, float, lowp > lowp_fvec4
Low single-qualifier floating-point vector of 4 components.
Definition: fwd.hpp:429
vec< 4, f32, mediump > mediump_f32vec4
Medium single-qualifier floating-point vector of 4 components.
Definition: fwd.hpp:454
vec< 4, i16, defaultp > i16vec4
16 bit signed integer vector of 4 components type.
Definition: fwd.hpp:260
uint8 lowp_uint8_t
Low qualifier 8 bit unsigned integer type.
Definition: fwd.hpp:98
uint32 highp_uint32_t
High qualifier 32 bit unsigned integer type.
Definition: fwd.hpp:128
mat< 3, 3, f32, defaultp > fmat3x3
Single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:664
mat< 3, 4, f64, mediump > mediump_f64mat3x4
Medium double-qualifier floating-point 3x4 matrix.
Definition: fwd.hpp:765
mat< 2, 3, f32, lowp > lowp_fmat2x3
Low single-qualifier floating-point 2x3 matrix.
Definition: fwd.hpp:631
vec< 1, u32, lowp > lowp_u32vec1
Low qualifier 32 bit unsigned integer scalar type.
Definition: fwd.hpp:364
mat< 3, 2, float, lowp > lowp_mat3x2
3 columns of 2 components matrix of single-precision floating-point numbers using low precision arith...
mat< 2, 3, f32, defaultp > f32mat2x3
Single-qualifier floating-point 2x3 matrix.
Definition: fwd.hpp:703
vec< 1, i32, mediump > mediump_i32vec1
Medium qualifier 32 bit signed integer scalar type.
Definition: fwd.hpp:267
vec< 4, u16, highp > highp_u16vec4
High qualifier 16 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:357
vec< 1, i32, lowp > lowp_i32vec1
Low qualifier 32 bit signed integer scalar type.
Definition: fwd.hpp:262
vec< 1, i64, lowp > lowp_i64vec1
Low qualifier 64 bit signed integer scalar type.
Definition: fwd.hpp:282
vec< 1, u32, highp > highp_u32vec1
High qualifier 32 bit unsigned integer scalar type.
Definition: fwd.hpp:374
vec< 1, bool, highp > highp_bvec1
1 component vector of bool values.
int16 mediump_int16
Medium qualifier 16 bit signed integer type.
Definition: fwd.hpp:51
uint16 mediump_u16
Medium qualifier 16 bit unsigned integer type.
Definition: fwd.hpp:104
qua< f64, defaultp > f64quat
Double-qualifier floating-point quaternion.
Definition: fwd.hpp:815
vec< 4, float, mediump > mediump_vec4
4 components vector of medium single-qualifier floating-point numbers.
vec< 3, f64, mediump > mediump_f64vec3
Medium double-qualifier floating-point vector of 3 components.
Definition: fwd.hpp:493
qua< double, defaultp > dquat
Quaternion of double-precision floating-point numbers.
vec< 1, u64, defaultp > u64vec1
Default qualifier 64 bit unsigned integer scalar type.
Definition: fwd.hpp:399
int64 int64_t
64 bit signed integer type.
Definition: fwd.hpp:85
vec< 1, u8, defaultp > u8vec1
Default qualifier 8 bit unsigned integer scalar type.
Definition: fwd.hpp:339
vec< 1, i8, highp > highp_i8vec1
High qualifier 8 bit signed integer scalar type.
Definition: fwd.hpp:232
vec< 4, u8, defaultp > u8vec4
Default qualifier 8 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:342
int8 int8_t
8 bit signed integer type.
Definition: fwd.hpp:43
int32 i32
32 bit signed integer type.
Definition: fwd.hpp:62
vec< 1, u32, mediump > mediump_u32vec1
Medium qualifier 32 bit unsigned integer scalar type.
Definition: fwd.hpp:369
mat< 2, 2, f64, defaultp > f64mat2x2
Double-qualifier floating-point 1x1 matrix.
Definition: fwd.hpp:780
mat< 2, 2, f32, lowp > lowp_f32mat2x2
Low single-qualifier floating-point 1x1 matrix.
Definition: fwd.hpp:670
vec< 4, f32, lowp > lowp_f32vec4
Low single-qualifier floating-point vector of 4 components.
Definition: fwd.hpp:449
vec< 3, float, highp > highp_fvec3
High Single-qualifier floating-point vector of 3 components.
Definition: fwd.hpp:438
mat< 4, 2, f64, lowp > lowp_f64mat4x2
Low double-qualifier floating-point 4x2 matrix.
Definition: fwd.hpp:756
mat< 3, 3, f32, mediump > mediump_fmat3x3
Medium single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:644
vec< 1, i64, highp > highp_i64vec1
High qualifier 64 bit signed integer scalar type.
Definition: fwd.hpp:292
vec< 4, i8, defaultp > i8vec4
8 bit signed integer vector of 4 components type.
Definition: fwd.hpp:240
vec< 1, int, highp > highp_ivec1
1 component vector of signed integer values.
vec< 3, bool, mediump > mediump_bvec3
3 components vector of medium qualifier bool numbers.
int32 highp_int32
High qualifier 32 bit signed integer type.
Definition: fwd.hpp:66
mat< 2, 3, f32, mediump > mediump_f32mat2x3
Medium single-qualifier floating-point 2x3 matrix.
Definition: fwd.hpp:681
mat< 3, 4, double, mediump > mediump_dmat3x4
3 columns of 4 components matrix of double-precision floating-point numbers using medium precision ar...
mat< 3, 2, f64, lowp > lowp_f64mat3x2
Low double-qualifier floating-point 3x2 matrix.
Definition: fwd.hpp:753
mat< 4, 2, float, defaultp > mat4x2
4 columns of 2 components matrix of single-precision floating-point numbers.
vec< 1, float, mediump > mediump_vec1
1 component vector of single-precision floating-point numbers using medium precision arithmetic in te...
uint32 highp_u32
High qualifier 32 bit unsigned integer type.
Definition: fwd.hpp:119
int32 highp_i32
High qualifier 32 bit signed integer type.
Definition: fwd.hpp:61
vec< 4, int, defaultp > ivec4
4 components vector of signed integer numbers.
Definition: vector_int4.hpp:15
mat< 4, 4, float, mediump > mediump_mat4x4
4 columns of 4 components matrix of single-precision floating-point numbers using medium precision ar...
vec< 4, u64, defaultp > u64vec4
Default qualifier 64 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:402
vec< 2, int, lowp > lowp_ivec2
2 components vector of low qualifier signed integer numbers.
vec< 4, f32, defaultp > f32vec4
Single-qualifier floating-point vector of 4 components.
Definition: fwd.hpp:464
mat< 2, 3, f64, defaultp > f64mat2x3
Double-qualifier floating-point 2x3 matrix.
Definition: fwd.hpp:783
mat< 4, 4, f64, mediump > mediump_f64mat4x4
Medium double-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:768
mat< 2, 2, double, mediump > mediump_dmat2x2
2 columns of 2 components matrix of double-precision floating-point numbers using medium precision arithmetic in term of ULPs.
vec< 4, u16, lowp > lowp_u16vec4
Low qualifier 16 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:347
vec< 4, unsigned int, highp > highp_uvec4
4 components vector of high qualifier unsigned integer numbers.
uint32 highp_uint32
High qualifier 32 bit unsigned integer type.
Definition: fwd.hpp:124
mat< 4, 4, f32, lowp > lowp_f32mat4
Low single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:542
mat< 3, 2, f64, defaultp > f64mat3x2
Double-qualifier floating-point 3x2 matrix.
Definition: fwd.hpp:781
float mediump_float32
Medium 32 bit single-qualifier floating-point scalar.
Definition: fwd.hpp:153
vec< 1, u32, defaultp > u32vec1
Default qualifier 32 bit unsigned integer scalar type.
Definition: fwd.hpp:379
mat< 4, 2, float, lowp > lowp_mat4x2
4 columns of 2 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs.
vec< 4, f64, mediump > mediump_f64vec4
Medium double-qualifier floating-point vector of 4 components.
Definition: fwd.hpp:494
mat< 3, 3, f64, defaultp > f64mat3x3
Double-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:784
float highp_float32
High 32 bit single-qualifier floating-point scalar.
Definition: fwd.hpp:154
uint8 highp_uint8
High qualifier 8 bit unsigned integer type.
Definition: fwd.hpp:96
int8 highp_i8
High qualifier 8 bit signed integer type.
Definition: fwd.hpp:33
mat< 2, 4, f64, lowp > lowp_f64mat2x4
Low double-qualifier floating-point 2x4 matrix.
Definition: fwd.hpp:752
mat< 3, 4, f64, lowp > lowp_f64mat3x4
Low double-qualifier floating-point 3x4 matrix.
Definition: fwd.hpp:755
vec< 3, float, lowp > lowp_vec3
3 components vector of low single-qualifier floating-point numbers.
mat< 3, 4, float, highp > highp_mat3x4
3 columns of 4 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
mat< 4, 4, float, lowp > lowp_mat4
4 columns of 4 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs.
int8 mediump_i8
Medium qualifier 8 bit signed integer type.
Definition: fwd.hpp:32
int64 highp_int64_t
High qualifier 64 bit signed integer type.
Definition: fwd.hpp:84
mat< 4, 4, f32, defaultp > f32mat4x4
Single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:708
float float32_t
Default 32 bit single-qualifier floating-point scalar.
Definition: fwd.hpp:160
mat< 2, 2, f32, defaultp > f32mat2x2
Single-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:700
vec< 2, i64, lowp > lowp_i64vec2
Low qualifier 64 bit signed integer vector of 2 components type.
Definition: fwd.hpp:283
mat< 2, 4, f32, lowp > lowp_f32mat2x4
Low single-qualifier floating-point 2x4 matrix.
Definition: fwd.hpp:672
vec< 4, bool, highp > highp_bvec4
4 components vector of high qualifier bool numbers.
uint32 uint32_t
Default qualifier 32 bit unsigned integer type.
Definition: fwd.hpp:129
mat< 3, 3, f32, highp > highp_f32mat3
High single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:549
mat< 3, 3, f64, mediump > mediump_f64mat3x3
Medium double-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:764
vec< 2, bool, defaultp > bvec2
2 components vector of boolean.
vec< 4, float, defaultp > vec4
4 components vector of single-precision floating-point numbers.
uint8 u8
Default qualifier 8 bit unsigned integer type.
Definition: fwd.hpp:92
vec< 3, i32, highp > highp_i32vec3
High qualifier 32 bit signed integer vector of 3 components type.
Definition: fwd.hpp:274
float float32
Single-qualifier floating-point scalar.
Definition: fwd.hpp:155
vec< 4, f32, defaultp > fvec4
Single-qualifier floating-point vector of 4 components.
Definition: fwd.hpp:444
vec< 1, i32, highp > highp_i32vec1
High qualifier 32 bit signed integer scalar type.
Definition: fwd.hpp:272
mat< 3, 3, double, highp > highp_dmat3
3 columns of 3 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
mat< 3, 3, f32, lowp > lowp_f32mat3
Low single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:541
vec< 1, u16, defaultp > u16vec1
Default qualifier 16 bit unsigned integer scalar type.
Definition: fwd.hpp:359
mat< 2, 4, float, lowp > lowp_mat2x4
2 columns of 4 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs.
vec< 1, double, defaultp > dvec1
1 component vector of double-precision floating-point numbers.
vec< 1, i8, defaultp > i8vec1
8 bit signed integer scalar type.
Definition: fwd.hpp:237
vec< 3, i32, mediump > mediump_i32vec3
Medium qualifier 32 bit signed integer vector of 3 components type.
Definition: fwd.hpp:269
vec< 2, i32, defaultp > i32vec2
32 bit signed integer vector of 2 components type.
Definition: fwd.hpp:278
vec< 2, bool, mediump > mediump_bvec2
2 components vector of medium qualifier bool numbers.
vec< 2, i16, lowp > lowp_i16vec2
Low qualifier 16 bit signed integer vector of 2 components type.
Definition: fwd.hpp:243
vec< 2, float, mediump > mediump_vec2
2 components vector of medium single-qualifier floating-point numbers.
vec< 2, u64, mediump > mediump_u64vec2
Medium qualifier 64 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:390
vec< 4, u8, lowp > lowp_u8vec4
Low qualifier 8 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:327
mat< 3, 3, f32, highp > highp_f32mat3x3
High single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:694
vec< 1, u8, highp > highp_u8vec1
High qualifier 8 bit unsigned integer scalar type.
Definition: fwd.hpp:334
uint8 highp_uint8_t
High qualifier 8 bit unsigned integer type.
Definition: fwd.hpp:100
vec< 4, u32, mediump > mediump_u32vec4
Medium qualifier 32 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:372
mat< 2, 2, f32, highp > highp_f32mat2x2
High single-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:690
vec< 4, f64, highp > highp_f64vec4
High double-qualifier floating-point vector of 4 components.
Definition: fwd.hpp:499
mat< 3, 3, double, highp > highp_dmat3x3
3 columns of 3 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
vec< 3, u8, lowp > lowp_u8vec3
Low qualifier 8 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:326
float highp_f32
High 32 bit single-qualifier floating-point scalar.
Definition: fwd.hpp:149
uint64 mediump_uint64
Medium qualifier 64 bit unsigned integer type.
Definition: fwd.hpp:137
int32 highp_int32_t
High qualifier 32 bit signed integer type.
Definition: fwd.hpp:70
mat< 2, 3, f32, lowp > lowp_f32mat2x3
Low single-qualifier floating-point 2x3 matrix.
Definition: fwd.hpp:671
vec< 3, f64, defaultp > f64vec3
Double-qualifier floating-point vector of 3 components.
Definition: fwd.hpp:503
vec< 3, u16, mediump > mediump_u16vec3
Medium qualifier 16 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:351
mat< 2, 4, f64, defaultp > f64mat2x4
Double-qualifier floating-point 2x4 matrix.
Definition: fwd.hpp:786
qua< double, mediump > mediump_dquat
Quaternion of medium double-qualifier floating-point numbers using medium precision arithmetic in term of ULPs.
mat< 3, 3, f32, defaultp > f32mat3
Single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:553
mat< 2, 2, f64, mediump > mediump_f64mat2x2
Medium double-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:760
vec< 1, double, highp > highp_dvec1
1 component vector of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
mat< 3, 3, float, defaultp > mat3x3
3 columns of 3 components matrix of single-precision floating-point numbers.
uint64 mediump_u64
Medium qualifier 64 bit unsigned integer type.
Definition: fwd.hpp:132
mat< 4, 4, float, mediump > mediump_mat4
4 columns of 4 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs.
vec< 4, i16, highp > highp_i16vec4
High qualifier 16 bit signed integer vector of 4 components type.
Definition: fwd.hpp:255
mat< 4, 4, f32, lowp > lowp_fmat4
Low single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:526
vec< 2, u32, mediump > mediump_u32vec2
Medium qualifier 32 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:370
vec< 3, u64, highp > highp_u64vec3
High qualifier 64 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:396
vec< 2, unsigned int, defaultp > uvec2
2 components vector of unsigned integer numbers.
uint16 lowp_u16
Low qualifier 16 bit unsigned integer type.
Definition: fwd.hpp:103
vec< 3, i16, lowp > lowp_i16vec3
Low qualifier 16 bit signed integer vector of 3 components type.
Definition: fwd.hpp:244
vec< 3, u16, lowp > lowp_u16vec3
Low qualifier 16 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:346
vec< 1, unsigned int, defaultp > uvec1
1 component vector of unsigned integer numbers.
vec< 3, f32, lowp > lowp_f32vec3
Low single-qualifier floating-point vector of 3 components.
Definition: fwd.hpp:448
mat< 4, 4, f32, highp > highp_fmat4
High single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:534
mat< 3, 3, f32, lowp > lowp_fmat3
Low single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:525
int16 highp_i16
High qualifier 16 bit signed integer type.
Definition: fwd.hpp:47
qua< f32, mediump > mediump_f32quat
Medium single-qualifier floating-point quaternion.
Definition: fwd.hpp:803
int8 highp_int8
High qualifier 8 bit signed integer type.
Definition: fwd.hpp:38
mat< 4, 4, f64, defaultp > f64mat4x4
Double-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:788
mat< 4, 3, f32, defaultp > fmat4x3
Single-qualifier floating-point 4x3 matrix.
Definition: fwd.hpp:665
mat< 2, 4, f32, lowp > lowp_fmat2x4
Low single-qualifier floating-point 2x4 matrix.
Definition: fwd.hpp:632
mat< 3, 3, f64, highp > highp_f64mat3
High double-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:581
vec< 3, i8, mediump > mediump_i8vec3
Medium qualifier 8 bit signed integer vector of 3 components type.
Definition: fwd.hpp:229
vec< 1, f32, highp > highp_f32vec1
High single-qualifier floating-point vector of 1 component.
Definition: fwd.hpp:456
vec< 3, i8, lowp > lowp_i8vec3
Low qualifier 8 bit signed integer vector of 3 components type.
Definition: fwd.hpp:224
mat< 3, 3, double, mediump > mediump_dmat3x3
3 columns of 3 components matrix of double-precision floating-point numbers using medium precision arithmetic in term of ULPs.
mat< 4, 3, f64, lowp > lowp_f64mat4x3
Low double-qualifier floating-point 4x3 matrix.
Definition: fwd.hpp:757
vec< 4, u64, highp > highp_u64vec4
High qualifier 64 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:397
mat< 3, 3, float, mediump > mediump_mat3
3 columns of 3 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs.
vec< 3, f32, defaultp > fvec3
Single-qualifier floating-point vector of 3 components.
Definition: fwd.hpp:443
vec< 2, i16, defaultp > i16vec2
16 bit signed integer vector of 2 components type.
Definition: fwd.hpp:258
vec< 1, bool, mediump > mediump_bvec1
1 component vector of bool values.
mat< 4, 4, double, lowp > lowp_dmat4
4 columns of 4 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
mat< 3, 4, double, highp > highp_dmat3x4
3 columns of 4 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
mat< 4, 3, f32, defaultp > f32mat4x3
Single-qualifier floating-point 4x3 matrix.
Definition: fwd.hpp:705
mat< 2, 2, f32, defaultp > f32mat2
Single-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:552
mat< 2, 4, f32, mediump > mediump_fmat2x4
Medium single-qualifier floating-point 2x4 matrix.
Definition: fwd.hpp:642
vec< 2, u16, mediump > mediump_u16vec2
Medium qualifier 16 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:350
mat< 4, 4, f32, lowp > lowp_f32mat4x4
Low single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:678
vec< 2, unsigned int, lowp > lowp_uvec2
2 components vector of low qualifier unsigned integer numbers.
mat< 3, 3, float, lowp > lowp_mat3
3 columns of 3 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs.
vec< 2, u8, lowp > lowp_u8vec2
Low qualifier 8 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:325
vec< 2, double, lowp > lowp_dvec2
2 components vector of low double-qualifier floating-point numbers.
mat< 3, 3, f64, mediump > mediump_f64mat3
Medium double-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:577
int16 lowp_i16
Low qualifier 16 bit signed integer type.
Definition: fwd.hpp:45
vec< 1, float, defaultp > vec1
1 component vector of single-precision floating-point numbers.
vec< 3, unsigned int, mediump > mediump_uvec3
3 components vector of medium qualifier unsigned integer numbers.
mat< 3, 4, f32, highp > highp_fmat3x4
High single-qualifier floating-point 3x4 matrix.
Definition: fwd.hpp:655
double float64_t
Default 64 bit double-qualifier floating-point scalar.
Definition: fwd.hpp:176
mat< 4, 4, f64, highp > highp_f64mat4x4
High double-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:778
mat< 2, 2, float, highp > highp_mat2
2 columns of 2 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
mat< 4, 3, f32, mediump > mediump_f32mat4x3
Medium single-qualifier floating-point 4x3 matrix.
Definition: fwd.hpp:687
int16 lowp_int16
Low qualifier 16 bit signed integer type.
Definition: fwd.hpp:50
vec< 3, int, lowp > lowp_ivec3
3 components vector of low qualifier signed integer numbers.
mat< 3, 3, f32, mediump > mediump_fmat3
Medium single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:529
mat< 4, 4, double, mediump > mediump_dmat4
4 columns of 4 components matrix of double-precision floating-point numbers using medium precision arithmetic in term of ULPs.
mat< 4, 4, f32, highp > highp_f32mat4x4
High single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:698
int64 lowp_int64_t
Low qualifier 64 bit signed integer type.
Definition: fwd.hpp:82
vec< 4, int, lowp > lowp_ivec4
4 components vector of low qualifier signed integer numbers.
uint16 uint16_t
Default qualifier 16 bit unsigned integer type.
Definition: fwd.hpp:115
vec< 4, unsigned int, lowp > lowp_uvec4
4 components vector of low qualifier unsigned integer numbers.
vec< 2, f64, highp > highp_f64vec2
High double-qualifier floating-point vector of 2 components.
Definition: fwd.hpp:497
vec< 2, u64, lowp > lowp_u64vec2
Low qualifier 64 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:385
mat< 3, 3, f32, defaultp > fmat3
Single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:537
mat< 3, 2, f32, mediump > mediump_f32mat3x2
Medium single-qualifier floating-point 3x2 matrix.
Definition: fwd.hpp:683
mat< 3, 3, double, defaultp > dmat3x3
3 columns of 3 components matrix of double-precision floating-point numbers.
mat< 3, 3, double, defaultp > dmat3
3 columns of 3 components matrix of double-precision floating-point numbers.
mat< 4, 2, f32, lowp > lowp_f32mat4x2
Low single-qualifier floating-point 4x2 matrix.
Definition: fwd.hpp:676
int32 lowp_int32
Low qualifier 32 bit signed integer type.
Definition: fwd.hpp:64
vec< 4, i64, mediump > mediump_i64vec4
Medium qualifier 64 bit signed integer vector of 4 components type.
Definition: fwd.hpp:290
vec< 4, bool, defaultp > bvec4
4 components vector of boolean.
uint8 uint8_t
Default qualifier 8 bit unsigned integer type.
Definition: fwd.hpp:101
vec< 1, i8, mediump > mediump_i8vec1
Medium qualifier 8 bit signed integer scalar type.
Definition: fwd.hpp:227
int32 mediump_int32_t
Medium qualifier 32 bit signed integer type.
Definition: fwd.hpp:69
mat< 4, 3, double, mediump > mediump_dmat4x3
4 columns of 3 components matrix of double-precision floating-point numbers using medium precision arithmetic in term of ULPs.
float highp_float32_t
High 32 bit single-qualifier floating-point scalar.
Definition: fwd.hpp:159
mat< 3, 3, f32, defaultp > f32mat3x3
Single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:704
mat< 4, 4, double, highp > highp_dmat4x4
4 columns of 4 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
uint8 highp_u8
High qualifier 8 bit unsigned integer type.
Definition: fwd.hpp:91
mat< 2, 3, double, highp > highp_dmat2x3
2 columns of 3 components matrix of double-precision floating-point numbers using medium precision ar...
uint8 mediump_uint8
Medium qualifier 8 bit unsigned integer type.
Definition: fwd.hpp:95
mat< 4, 2, f32, highp > highp_fmat4x2
High single-qualifier floating-point 4x2 matrix.
Definition: fwd.hpp:656
vec< 2, f32, highp > highp_f32vec2
High single-qualifier floating-point vector of 2 components.
Definition: fwd.hpp:457
mat< 2, 4, double, mediump > mediump_dmat2x4
2 columns of 4 components matrix of double-precision floating-point numbers using medium precision arithmetic in term of ULPs.
mat< 2, 2, double, defaultp > dmat2
2 columns of 2 components matrix of double-precision floating-point numbers.
vec< 4, float, highp > highp_vec4
4 components vector of high single-qualifier floating-point numbers.
int64 mediump_int64_t
Medium qualifier 64 bit signed integer type.
Definition: fwd.hpp:83
vec< 3, u64, lowp > lowp_u64vec3
Low qualifier 64 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:386
mat< 4, 4, double, defaultp > dmat4x4
4 columns of 4 components matrix of double-precision floating-point numbers.
vec< 1, bool, lowp > lowp_bvec1
1 component vector of bool values.
mat< 2, 2, f64, highp > highp_f64mat2x2
High double-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:770
vec< 3, u32, highp > highp_u32vec3
High qualifier 32 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:376
vec< 3, bool, highp > highp_bvec3
3 components vector of high qualifier bool numbers.
int8 highp_int8_t
High qualifier 8 bit signed integer type.
Definition: fwd.hpp:42
qua< f32, lowp > lowp_f32quat
Low single-qualifier floating-point quaternion.
Definition: fwd.hpp:802
vec< 4, i32, lowp > lowp_i32vec4
Low qualifier 32 bit signed integer vector of 4 components type.
Definition: fwd.hpp:265
vec< 1, i16, highp > highp_i16vec1
High qualifier 16 bit signed integer scalar type.
Definition: fwd.hpp:252
mat< 4, 4, f32, lowp > lowp_fmat4x4
Low single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:638
mat< 4, 3, double, lowp > lowp_dmat4x3
4 columns of 3 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
mat< 3, 2, f32, defaultp > f32mat3x2
Single-qualifier floating-point 3x2 matrix.
Definition: fwd.hpp:701
mat< 3, 3, f32, lowp > lowp_f32mat3x3
Low single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:674
vec< 2, i8, lowp > lowp_i8vec2
Low qualifier 8 bit signed integer vector of 2 components type.
Definition: fwd.hpp:223
vec< 4, i32, defaultp > i32vec4
32 bit signed integer vector of 4 components type.
Definition: fwd.hpp:280
mat< 2, 2, f32, highp > highp_f32mat2
High single-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:548
float lowp_f32
Low 32 bit single-qualifier floating-point scalar.
Definition: fwd.hpp:147
vec< 1, unsigned int, highp > highp_uvec1
1 component vector of unsigned integer values.
vec< 4, u16, mediump > mediump_u16vec4
Medium qualifier 16 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:352
vec< 3, unsigned int, highp > highp_uvec3
3 components vector of high qualifier unsigned integer numbers.
vec< 3, u32, defaultp > u32vec3
Default qualifier 32 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:381
vec< 2, u8, defaultp > u8vec2
Default qualifier 8 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:340
vec< 3, double, mediump > mediump_dvec3
3 components vector of medium double-qualifier floating-point numbers.
int16 mediump_i16
Medium qualifier 16 bit signed integer type.
Definition: fwd.hpp:46
vec< 2, u64, highp > highp_u64vec2
High qualifier 64 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:395
vec< 1, int, lowp > lowp_ivec1
1 component vector of signed integer values.
vec< 3, i8, defaultp > i8vec3
8 bit signed integer vector of 3 components type.
Definition: fwd.hpp:239
mat< 2, 2, f32, mediump > mediump_f32mat2x2
Medium single-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:680
mat< 4, 4, float, defaultp > mat4
4 columns of 4 components matrix of single-precision floating-point numbers.
uint16 mediump_uint16_t
Medium qualifier 16 bit unsigned integer type.
Definition: fwd.hpp:113
mat< 4, 3, f64, mediump > mediump_f64mat4x3
Medium double-qualifier floating-point 4x3 matrix.
Definition: fwd.hpp:767
vec< 3, u8, defaultp > u8vec3
Default qualifier 8 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:341
double highp_f64
High 64 bit double-qualifier floating-point scalar.
Definition: fwd.hpp:165
vec< 3, float, mediump > mediump_fvec3
Medium single-qualifier floating-point vector of 3 components.
Definition: fwd.hpp:433
int64 mediump_int64
Medium qualifier 64 bit signed integer type.
Definition: fwd.hpp:79
vec< 4, u64, mediump > mediump_u64vec4
Medium qualifier 64 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:392
mat< 2, 2, double, highp > highp_dmat2x2
2 columns of 2 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
uint64 uint64_t
Default qualifier 64 bit unsigned integer type.
Definition: fwd.hpp:143
vec< 2, u32, highp > highp_u32vec2
High qualifier 32 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:375
vec< 1, double, mediump > mediump_dvec1
1 component vector of double-precision floating-point numbers using medium precision arithmetic in term of ULPs.
vec< 1, float, highp > highp_fvec1
High single-qualifier floating-point vector of 1 component.
Definition: fwd.hpp:436
vec< 4, i64, lowp > lowp_i64vec4
Low qualifier 64 bit signed integer vector of 4 components type.
Definition: fwd.hpp:285
vec< 4, int, highp > highp_ivec4
4 components vector of high qualifier signed integer numbers.
vec< 3, i32, defaultp > i32vec3
32 bit signed integer vector of 3 components type.
Definition: fwd.hpp:279
mat< 2, 4, f32, highp > highp_f32mat2x4
High single-qualifier floating-point 2x4 matrix.
Definition: fwd.hpp:692
vec< 1, i8, lowp > lowp_i8vec1
Low qualifier 8 bit signed integer scalar type.
Definition: fwd.hpp:222
mat< 2, 2, f64, highp > highp_f64mat2
High double-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:580
vec< 3, double, lowp > lowp_dvec3
3 components vector of low double-qualifier floating-point numbers.
uint16 lowp_uint16_t
Low qualifier 16 bit unsigned integer type.
Definition: fwd.hpp:112
vec< 2, double, defaultp > dvec2
2 components vector of double-precision floating-point numbers.
mat< 3, 2, f64, highp > highp_f64mat3x2
High double-qualifier floating-point 3x2 matrix.
Definition: fwd.hpp:773
vec< 3, u32, mediump > mediump_u32vec3
Medium qualifier 32 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:371
uint16 lowp_uint16
Low qualifier 16 bit unsigned integer type.
Definition: fwd.hpp:108
mat< 3, 3, float, highp > highp_mat3
3 columns of 3 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
vec< 3, u8, highp > highp_u8vec3
High qualifier 8 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:336
vec< 4, f64, defaultp > f64vec4
Double-qualifier floating-point vector of 4 components.
Definition: fwd.hpp:504
vec< 2, i8, highp > highp_i8vec2
High qualifier 8 bit signed integer vector of 2 components type.
Definition: fwd.hpp:233
mat< 2, 2, double, highp > highp_dmat2
2 columns of 2 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
vec< 3, i32, lowp > lowp_i32vec3
Low qualifier 32 bit signed integer vector of 3 components type.
Definition: fwd.hpp:264
int32 lowp_i32
Low qualifier 32 bit signed integer type.
Definition: fwd.hpp:59
mat< 4, 4, f32, mediump > mediump_fmat4x4
Medium single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:648
vec< 3, float, defaultp > vec3
3 components vector of single-precision floating-point numbers.
mat< 4, 4, double, lowp > lowp_dmat4x4
4 columns of 4 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
int64 mediump_i64
Medium qualifier 64 bit signed integer type.
Definition: fwd.hpp:74
mat< 4, 4, double, highp > highp_dmat4
4 columns of 4 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
vec< 4, i16, lowp > lowp_i16vec4
Low qualifier 16 bit signed integer vector of 4 components type.
Definition: fwd.hpp:245
vec< 1, bool, defaultp > bvec1
1 component vector of boolean.
mat< 4, 3, f64, highp > highp_f64mat4x3
High double-qualifier floating-point 4x3 matrix.
Definition: fwd.hpp:777
vec< 2, u8, highp > highp_u8vec2
High qualifier 8 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:335
vec< 3, int, mediump > mediump_ivec3
3 components vector of medium qualifier signed integer numbers.
vec< 3, i8, highp > highp_i8vec3
High qualifier 8 bit signed integer vector of 3 components type.
Definition: fwd.hpp:234
vec< 3, f64, highp > highp_f64vec3
High double-qualifier floating-point vector of 3 components.
Definition: fwd.hpp:498
vec< 2, f32, defaultp > fvec2
Single-qualifier floating-point vector of 2 components.
Definition: fwd.hpp:442
vec< 4, f64, lowp > lowp_f64vec4
Low double-qualifier floating-point vector of 4 components.
Definition: fwd.hpp:489
qua< double, highp > highp_dquat
Quaternion of high double-qualifier floating-point numbers using high precision arithmetic in term of ULPs.
vec< 3, f32, mediump > mediump_f32vec3
Medium single-qualifier floating-point vector of 3 components.
Definition: fwd.hpp:453
double lowp_f64
Low 64 bit double-qualifier floating-point scalar.
Definition: fwd.hpp:163
mat< 4, 2, f32, lowp > lowp_fmat4x2
Low single-qualifier floating-point 4x2 matrix.
Definition: fwd.hpp:636
vec< 3, int, highp > highp_ivec3
3 components vector of high qualifier signed integer numbers.
mat< 2, 4, f64, highp > highp_f64mat2x4
High double-qualifier floating-point 2x4 matrix.
Definition: fwd.hpp:772
mat< 4, 4, f64, highp > highp_f64mat4
High double-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:582
vec< 4, i32, mediump > mediump_i32vec4
Medium qualifier 32 bit signed integer vector of 4 components type.
Definition: fwd.hpp:270
mat< 2, 2, f32, lowp > lowp_f32mat2
Low single-qualifier floating-point 1x1 matrix.
Definition: fwd.hpp:540
int16 int16_t
16 bit signed integer type.
Definition: fwd.hpp:57
mat< 3, 4, double, defaultp > dmat3x4
3 columns of 4 components matrix of double-precision floating-point numbers.
mat< 2, 3, double, lowp > lowp_dmat2x3
2 columns of 3 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
int64 highp_i64
High qualifier 64 bit signed integer type.
Definition: fwd.hpp:75
mat< 2, 4, float, mediump > mediump_mat2x4
2 columns of 4 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs.
mat< 3, 4, f64, highp > highp_f64mat3x4
High double-qualifier floating-point 3x4 matrix.
Definition: fwd.hpp:775
mat< 3, 3, f32, highp > highp_fmat3
High single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:533
mat< 3, 3, f32, mediump > mediump_f32mat3x3
Medium single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:684
qua< f64, mediump > mediump_f64quat
Medium double-qualifier floating-point quaternion.
Definition: fwd.hpp:813
int32 int32_t
32 bit signed integer type.
Definition: fwd.hpp:71
vec< 2, f64, defaultp > f64vec2
Double-qualifier floating-point vector of 2 components.
Definition: fwd.hpp:502
vec< 4, unsigned int, defaultp > uvec4
4 components vector of unsigned integer numbers.
uint64 lowp_uint64_t
Low qualifier 64 bit unsigned integer type.
Definition: fwd.hpp:140
detail::uint64 uint64
64 bit unsigned integer type.
int16 highp_int16
High qualifier 16 bit signed integer type.
Definition: fwd.hpp:52
mat< 2, 2, double, defaultp > dmat2x2
2 columns of 2 components matrix of double-precision floating-point numbers.
vec< 1, i16, mediump > mediump_i16vec1
Medium qualifier 16 bit signed integer scalar type.
Definition: fwd.hpp:247
mat< 2, 4, double, defaultp > dmat2x4
2 columns of 4 components matrix of double-precision floating-point numbers.
mat< 3, 2, double, highp > highp_dmat3x2
3 columns of 2 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
mat< 2, 4, f32, defaultp > fmat2x4
Single-qualifier floating-point 2x4 matrix.
Definition: fwd.hpp:666
mat< 2, 2, f32, highp > highp_fmat2x2
High single-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:650
vec< 4, float, highp > highp_fvec4
High single-qualifier floating-point vector of 4 components.
Definition: fwd.hpp:439
mat< 3, 3, f64, highp > highp_f64mat3x3
High double-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:774
int32 mediump_i32
Medium qualifier 32 bit signed integer type.
Definition: fwd.hpp:60
vec< 3, float, mediump > mediump_vec3
3 components vector of medium single-qualifier floating-point numbers.
vec< 2, u16, lowp > lowp_u16vec2
Low qualifier 16 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:345
vec< 4, u32, highp > highp_u32vec4
High qualifier 32 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:377
mat< 4, 2, double, defaultp > dmat4x2
4 columns of 2 components matrix of double-precision floating-point numbers.
vec< 4, double, lowp > lowp_dvec4
4 components vector of low double-qualifier floating-point numbers.
float lowp_float32_t
Low 32 bit single-qualifier floating-point scalar.
Definition: fwd.hpp:157
uint64 highp_uint64_t
High qualifier 64 bit unsigned integer type.
Definition: fwd.hpp:142
vec< 2, f32, lowp > lowp_f32vec2
Low single-qualifier floating-point vector of 2 components.
Definition: fwd.hpp:447
vec< 4, u32, defaultp > u32vec4
Default qualifier 32 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:382
mat< 2, 2, f64, mediump > mediump_f64mat2
Medium double-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:576
qua< float, mediump > mediump_quat
Quaternion of single-precision floating-point numbers using medium precision arithmetic in term of ULPs.
mat< 4, 3, f32, highp > highp_f32mat4x3
High single-qualifier floating-point 4x3 matrix.
Definition: fwd.hpp:697
vec< 3, unsigned int, lowp > lowp_uvec3
3 components vector of low qualifier unsigned integer numbers.
mat< 2, 2, float, lowp > lowp_mat2x2
2 columns of 2 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs.
qua< f32, defaultp > f32quat
Single-qualifier floating-point quaternion.
Definition: fwd.hpp:805
detail::int64 int64
64 bit signed integer type.
qua< double, lowp > lowp_dquat
Quaternion of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
vec< 1, u64, highp > highp_u64vec1
High qualifier 64 bit unsigned integer scalar type.
Definition: fwd.hpp:394
mat< 3, 4, float, mediump > mediump_mat3x4
3 columns of 4 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs.
mat< 2, 3, f64, highp > highp_f64mat2x3
High double-qualifier floating-point 2x3 matrix.
Definition: fwd.hpp:771
vec< 4, i8, lowp > lowp_i8vec4
Low qualifier 8 bit signed integer vector of 4 components type.
Definition: fwd.hpp:225
mat< 4, 3, f32, lowp > lowp_fmat4x3
Low single-qualifier floating-point 4x3 matrix.
Definition: fwd.hpp:637
float f32
Default 32 bit single-qualifier floating-point scalar.
Definition: fwd.hpp:150
vec< 2, i32, highp > highp_i32vec2
High qualifier 32 bit signed integer vector of 2 components type.
Definition: fwd.hpp:273
vec< 1, u8, mediump > mediump_u8vec1
Medium qualifier 8 bit unsigned integer scalar type.
Definition: fwd.hpp:329
mat< 4, 3, f32, highp > highp_fmat4x3
High single-qualifier floating-point 4x3 matrix.
Definition: fwd.hpp:657
mat< 3, 2, double, defaultp > dmat3x2
3 columns of 2 components matrix of double-precision floating-point numbers.
vec< 4, i16, mediump > mediump_i16vec4
Medium qualifier 16 bit signed integer vector of 4 components type.
Definition: fwd.hpp:250
mat< 4, 2, f64, defaultp > f64mat4x2
Double-qualifier floating-point 4x2 matrix.
Definition: fwd.hpp:782
mat< 2, 3, f32, defaultp > fmat2x3
Single-qualifier floating-point 2x3 matrix.
Definition: fwd.hpp:663
mat< 4, 4, f64, mediump > mediump_f64mat4
Medium double-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:578
vec< 4, u8, mediump > mediump_u8vec4
Medium qualifier 8 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:332
vec< 3, double, highp > highp_dvec3
3 components vector of high double-qualifier floating-point numbers.
mat< 3, 4, f32, lowp > lowp_f32mat3x4
Low single-qualifier floating-point 3x4 matrix.
Definition: fwd.hpp:675
double mediump_float64_t
Medium 64 bit double-qualifier floating-point scalar.
Definition: fwd.hpp:174
mat< 2, 2, float, highp > highp_mat2x2
2 columns of 2 components matrix of single-precision floating-point numbers using high precision arit...
mat< 4, 3, float, lowp > lowp_mat4x3
4 columns of 3 components matrix of single-precision floating-point numbers using low precision arith...
vec< 2, float, highp > highp_fvec2
High single-qualifier floating-point vector of 2 components.
Definition: fwd.hpp:437
uint16 u16
Default qualifier 16 bit unsigned integer type.
Definition: fwd.hpp:106
int64 lowp_i64
Low qualifier 64 bit signed integer type.
Definition: fwd.hpp:73
vec< 1, unsigned int, lowp > lowp_uvec1
1 component vector of low qualifier unsigned integer numbers.
vec< 2, int, defaultp > ivec2
2 components vector of signed integer numbers.
Definition: vector_int2.hpp:15
mat< 4, 4, f32, defaultp > f32mat4
Single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:554
mat< 4, 2, f32, mediump > mediump_fmat4x2
Medium single-qualifier floating-point 4x2 matrix.
Definition: fwd.hpp:646
mat< 2, 2, f64, lowp > lowp_f64mat2
Low double-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:572
int8 mediump_int8_t
Medium qualifier 8 bit signed integer type.
Definition: fwd.hpp:41
mat< 3, 3, f32, lowp > lowp_fmat3x3
Low single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:634
double lowp_float64_t
Low 64 bit double-qualifier floating-point scalar.
Definition: fwd.hpp:173
int16 highp_int16_t
High qualifier 16 bit signed integer type.
Definition: fwd.hpp:56
mat< 3, 3, f32, highp > highp_fmat3x3
High single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:654
mat< 4, 4, double, defaultp > dmat4
4 columns of 4 components matrix of double-precision floating-point numbers.
vec< 1, i64, defaultp > i64vec1
64 bit signed integer scalar type.
Definition: fwd.hpp:297
uint32 lowp_u32
Low qualifier 32 bit unsigned integer type.
Definition: fwd.hpp:117
mat< 4, 3, float, mediump > mediump_mat4x3
4 columns of 3 components matrix of single-precision floating-point numbers using medium precision ar...
vec< 1, u8, lowp > lowp_u8vec1
Low qualifier 8 bit unsigned integer scalar type.
Definition: fwd.hpp:324
vec< 3, i64, mediump > mediump_i64vec3
Medium qualifier 64 bit signed integer vector of 3 components type.
Definition: fwd.hpp:289
vec< 1, int, defaultp > ivec1
1 component vector of signed integer numbers.
Definition: vector_int1.hpp:28
qua< f32, highp > highp_f32quat
High single-qualifier floating-point quaternion.
Definition: fwd.hpp:804
uint16 highp_u16
High qualifier 16 bit unsigned integer type.
Definition: fwd.hpp:105
vec< 1, f32, defaultp > fvec1
Single-qualifier floating-point vector of 1 component.
Definition: fwd.hpp:441
mat< 3, 2, float, mediump > mediump_mat3x2
3 columns of 2 components matrix of single-precision floating-point numbers using medium precision ar...
vec< 2, bool, lowp > lowp_bvec2
2 components vector of low qualifier bool numbers.
vec< 2, u8, mediump > mediump_u8vec2
Medium qualifier 8 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:330
int32 lowp_int32_t
Low qualifier 32 bit signed integer type.
Definition: fwd.hpp:68
vec< 1, u16, lowp > lowp_u16vec1
Low qualifier 16 bit unsigned integer scalar type.
Definition: fwd.hpp:344
mat< 4, 4, f32, highp > highp_fmat4x4
High single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:658
mat< 3, 4, f32, highp > highp_f32mat3x4
High single-qualifier floating-point 3x4 matrix.
Definition: fwd.hpp:695
vec< 3, bool, defaultp > bvec3
3 components vector of boolean.
vec< 2, f32, defaultp > f32vec2
Single-qualifier floating-point vector of 2 components.
Definition: fwd.hpp:462
vec< 3, u16, highp > highp_u16vec3
High qualifier 16 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:356
float mediump_float32_t
Medium 32 bit single-qualifier floating-point scalar.
Definition: fwd.hpp:158
mat< 2, 2, f32, defaultp > fmat2x2
Single-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:660
float mediump_f32
Medium 32 bit single-qualifier floating-point scalar.
Definition: fwd.hpp:148
mat< 4, 4, f32, mediump > mediump_f32mat4x4
Medium single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:688
qua< float, lowp > lowp_quat
Quaternion of single-precision floating-point numbers using low precision arithmetic in term of ULPs...
vec< 2, f32, mediump > mediump_f32vec2
Medium single-qualifier floating-point vector of 2 components.
Definition: fwd.hpp:452
int8 lowp_int8
Low qualifier 8 bit signed integer type.
Definition: fwd.hpp:36
mat< 2, 3, float, defaultp > mat2x3
2 columns of 3 components matrix of single-precision floating-point numbers.
vec< 1, f64, lowp > lowp_f64vec1
Low double-qualifier floating-point vector of 1 component.
Definition: fwd.hpp:486
mat< 3, 2, f32, highp > highp_f32mat3x2
High single-qualifier floating-point 3x2 matrix.
Definition: fwd.hpp:693
mat< 3, 2, f64, mediump > mediump_f64mat3x2
Medium double-qualifier floating-point 3x2 matrix.
Definition: fwd.hpp:763
mat< 3, 3, double, mediump > mediump_dmat3
3 columns of 3 components matrix of double-precision floating-point numbers using medium precision ar...
vec< 3, u8, mediump > mediump_u8vec3
Medium qualifier 8 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:331
mat< 2, 3, double, defaultp > dmat2x3
2 columns of 3 components matrix of double-precision floating-point numbers.
mat< 4, 4, f64, lowp > lowp_f64mat4x4
Low double-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:758
vec< 1, i16, lowp > lowp_i16vec1
Low qualifier 16 bit signed integer scalar type.
Definition: fwd.hpp:242
vec< 3, double, defaultp > dvec3
3 components vector of double-precision floating-point numbers.
mat< 2, 4, double, lowp > lowp_dmat2x4
2 columns of 4 components matrix of double-precision floating-point numbers using low precision arith...
int8 lowp_int8_t
Low qualifier 8 bit signed integer type.
Definition: fwd.hpp:40
vec< 2, u32, lowp > lowp_u32vec2
Low qualifier 32 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:365
mat< 2, 4, f32, mediump > mediump_f32mat2x4
Medium single-qualifier floating-point 2x4 matrix.
Definition: fwd.hpp:682
mat< 4, 3, f64, defaultp > f64mat4x3
Double-qualifier floating-point 4x3 matrix.
Definition: fwd.hpp:785
vec< 2, i64, highp > highp_i64vec2
High qualifier 64 bit signed integer vector of 2 components type.
Definition: fwd.hpp:293
mat< 4, 4, f32, mediump > mediump_f32mat4
Medium single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:546
mat< 3, 2, float, highp > highp_mat3x2
3 columns of 2 components matrix of single-precision floating-point numbers using high precision arit...
mat< 4, 4, float, highp > highp_mat4x4
4 columns of 4 components matrix of single-precision floating-point numbers using high precision arit...
vec< 2, double, mediump > mediump_dvec2
2 components vector of medium double-qualifier floating-point numbers.
mat< 2, 2, double, lowp > lowp_dmat2x2
2 columns of 2 components matrix of double-precision floating-point numbers using low precision arith...
int64 i64
64 bit signed integer type.
Definition: fwd.hpp:76
double f64
Default 64 bit double-qualifier floating-point scalar.
Definition: fwd.hpp:166
vec< 3, bool, lowp > lowp_bvec3
3 components vector of low qualifier bool numbers.
mat< 3, 4, float, lowp > lowp_mat3x4
3 columns of 4 components matrix of single-precision floating-point numbers using low precision arith...
mat< 4, 4, float, lowp > lowp_mat4x4
4 columns of 4 components matrix of single-precision floating-point numbers using low precision arith...
vec< 1, float, highp > highp_vec1
1 component vector of single-precision floating-point numbers using high precision arithmetic in term...
vec< 1, f32, mediump > mediump_f32vec1
Medium single-qualifier floating-point vector of 1 component.
Definition: fwd.hpp:451
mat< 3, 4, f32, mediump > mediump_f32mat3x4
Medium single-qualifier floating-point 3x4 matrix.
Definition: fwd.hpp:685
mat< 2, 2, f32, highp > highp_fmat2
High single-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:532
vec< 2, unsigned int, highp > highp_uvec2
2 components vector of high qualifier unsigned integer numbers.
vec< 3, f32, highp > highp_f32vec3
High single-qualifier floating-point vector of 3 components.
Definition: fwd.hpp:458
mat< 2, 2, float, mediump > mediump_mat2x2
2 columns of 2 components matrix of single-precision floating-point numbers using medium precision ar...
vec< 4, i8, mediump > mediump_i8vec4
Medium qualifier 8 bit signed integer vector of 4 components type.
Definition: fwd.hpp:230
float lowp_float32
Low 32 bit single-qualifier floating-point scalar.
Definition: fwd.hpp:152
vec< 2, u32, defaultp > u32vec2
Default qualifier 32 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:380
vec< 2, unsigned int, mediump > mediump_uvec2
2 components vector of medium qualifier unsigned integer numbers.
qua< float, defaultp > quat
Quaternion of single-precision floating-point numbers.
vec< 2, double, highp > highp_dvec2
2 components vector of high double-qualifier floating-point numbers.
vec< 4, float, mediump > mediump_fvec4
Medium single-qualifier floating-point vector of 4 components.
Definition: fwd.hpp:434
int32 mediump_int32
Medium qualifier 32 bit signed integer type.
Definition: fwd.hpp:65
vec< 2, i64, defaultp > i64vec2
64 bit signed integer vector of 2 components type.
Definition: fwd.hpp:298
int16 i16
16 bit signed integer type.
Definition: fwd.hpp:48
vec< 4, double, defaultp > dvec4
4 components vector of double-precision floating-point numbers.
mat< 4, 4, f32, defaultp > fmat4x4
Single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:668
mat< 2, 2, float, mediump > mediump_mat2
2 columns of 2 components matrix of single-precision floating-point numbers using medium precision ar...
qua< f64, lowp > lowp_f64quat
Low double-qualifier floating-point quaternion.
Definition: fwd.hpp:812
mat< 2, 2, float, defaultp > mat2
2 columns of 2 components matrix of single-precision floating-point numbers.
mat< 3, 2, f32, defaultp > fmat3x2
Single-qualifier floating-point 3x2 matrix.
Definition: fwd.hpp:661
mat< 4, 3, double, defaultp > dmat4x3
4 columns of 3 components matrix of double-precision floating-point numbers.
mat< 4, 2, double, highp > highp_dmat4x2
4 columns of 2 components matrix of double-precision floating-point numbers using high precision arithmetic...
vec< 4, u16, defaultp > u16vec4
Default qualifier 16 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:362
vec< 2, u16, defaultp > u16vec2
Default qualifier 16 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:360
uint8 mediump_u8
Medium qualifier 8 bit unsigned integer type.
Definition: fwd.hpp:90
mat< 2, 2, f32, lowp > lowp_fmat2x2
Low single-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:630
vec< 4, i8, highp > highp_i8vec4
High qualifier 8 bit signed integer vector of 4 components type.
Definition: fwd.hpp:235
vec< 4, u64, lowp > lowp_u64vec4
Low qualifier 64 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:387
vec< 2, i64, mediump > mediump_i64vec2
Medium qualifier 64 bit signed integer vector of 2 components type.
Definition: fwd.hpp:288
mat< 4, 2, f64, highp > highp_f64mat4x2
High double-qualifier floating-point 4x2 matrix.
Definition: fwd.hpp:776
mat< 4, 4, float, highp > highp_mat4
4 columns of 4 components matrix of single-precision floating-point numbers using high precision arit...
int16 mediump_int16_t
Medium qualifier 16 bit signed integer type.
Definition: fwd.hpp:55
int8 lowp_i8
Low qualifier 8 bit signed integer type.
Definition: fwd.hpp:31
mat< 4, 2, float, highp > highp_mat4x2
4 columns of 2 components matrix of single-precision floating-point numbers using high precision arit...
vec< 3, i64, defaultp > i64vec3
64 bit signed integer vector of 3 components type.
Definition: fwd.hpp:299
vec< 2, i32, lowp > lowp_i32vec2
Low qualifier 32 bit signed integer vector of 2 components type.
Definition: fwd.hpp:263
qua< f64, highp > highp_f64quat
High double-qualifier floating-point quaternion.
Definition: fwd.hpp:814
mat< 3, 3, float, defaultp > mat3
3 columns of 3 components matrix of single-precision floating-point numbers.
vec< 2, f64, mediump > mediump_f64vec2
Medium double-qualifier floating-point vector of 2 components.
Definition: fwd.hpp:492
uint16 highp_uint16_t
High qualifier 16 bit unsigned integer type.
Definition: fwd.hpp:114
vec< 1, float, lowp > lowp_fvec1
Low single-qualifier floating-point vector of 1 component.
Definition: fwd.hpp:426
int8 i8
8 bit signed integer type.
Definition: fwd.hpp:34
uint64 mediump_uint64_t
Medium qualifier 64 bit unsigned integer type.
Definition: fwd.hpp:141
vec< 1, u64, mediump > mediump_u64vec1
Medium qualifier 64 bit unsigned integer scalar type.
Definition: fwd.hpp:389
mat< 2, 2, f32, mediump > mediump_f32mat2
Medium single-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:544
vec< 4, int, mediump > mediump_ivec4
4 components vector of medium qualifier signed integer numbers.
mat< 2, 4, float, highp > highp_mat2x4
2 columns of 4 components matrix of single-precision floating-point numbers using high precision arit...
uint8 mediump_uint8_t
Medium qualifier 8 bit unsigned integer type.
Definition: fwd.hpp:99
Definition: common.hpp:20
double mediump_f64
Medium 64 bit double-qualifier floating-point scalar.
Definition: fwd.hpp:164
vec< 1, float, mediump > mediump_fvec1
Medium single-qualifier floating-point vector of 1 component.
Definition: fwd.hpp:431
uint16 mediump_uint16
Medium qualifier 16 bit unsigned integer type.
Definition: fwd.hpp:109
vec< 4, u8, highp > highp_u8vec4
High qualifier 8 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:337
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00036.html ================================================ 0.9.9 API documentation: geometric.hpp File Reference
0.9.9 API documentation
geometric.hpp File Reference

Core features More...

Go to the source code of this file.

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > cross (vec< 3, T, Q > const &x, vec< 3, T, Q > const &y)
 Returns the cross product of x and y. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T distance (vec< L, T, Q > const &p0, vec< L, T, Q > const &p1)
 Returns the distance between p0 and p1, i.e., length(p0 - p1). More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T dot (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Returns the dot product of x and y, i.e., result = x * y. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > faceforward (vec< L, T, Q > const &N, vec< L, T, Q > const &I, vec< L, T, Q > const &Nref)
 If dot(Nref, I) < 0.0, return N, otherwise, return -N. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T length (vec< L, T, Q > const &x)
 Returns the length of x, i.e., sqrt(x * x). More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > normalize (vec< L, T, Q > const &x)
 Returns a vector in the same direction as x but with length of 1. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > reflect (vec< L, T, Q > const &I, vec< L, T, Q > const &N)
 For the incident vector I and surface orientation N, returns the reflection direction : result = I - 2.0 * dot(N, I) * N. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > refract (vec< L, T, Q > const &I, vec< L, T, Q > const &N, T eta)
 For the incident vector I and surface normal N, and the ratio of indices of refraction eta, return the refraction vector. More...
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00036_source.html ================================================ 0.9.9 API documentation: geometric.hpp Source File
0.9.9 API documentation
geometric.hpp
Go to the documentation of this file.
1 
13 #pragma once
14 
15 #include "detail/type_vec3.hpp"
16 
17 namespace glm
18 {
21 
29  template<length_t L, typename T, qualifier Q>
30  GLM_FUNC_DECL T length(vec<L, T, Q> const& x);
31 
39  template<length_t L, typename T, qualifier Q>
40  GLM_FUNC_DECL T distance(vec<L, T, Q> const& p0, vec<L, T, Q> const& p1);
41 
49  template<length_t L, typename T, qualifier Q>
50  GLM_FUNC_DECL T dot(vec<L, T, Q> const& x, vec<L, T, Q> const& y);
51 
58  template<typename T, qualifier Q>
59  GLM_FUNC_DECL vec<3, T, Q> cross(vec<3, T, Q> const& x, vec<3, T, Q> const& y);
60 
69  template<length_t L, typename T, qualifier Q>
70  GLM_FUNC_DECL vec<L, T, Q> normalize(vec<L, T, Q> const& x);
71 
79  template<length_t L, typename T, qualifier Q>
80  GLM_FUNC_DECL vec<L, T, Q> faceforward(
81  vec<L, T, Q> const& N,
82  vec<L, T, Q> const& I,
83  vec<L, T, Q> const& Nref);
84 
93  template<length_t L, typename T, qualifier Q>
94  GLM_FUNC_DECL vec<L, T, Q> reflect(
95  vec<L, T, Q> const& I,
96  vec<L, T, Q> const& N);
97 
107  template<length_t L, typename T, qualifier Q>
108  GLM_FUNC_DECL vec<L, T, Q> refract(
109  vec<L, T, Q> const& I,
110  vec<L, T, Q> const& N,
111  T eta);
112 
114 }//namespace glm
115 
116 #include "detail/func_geometric.inl"
GLM_FUNC_DECL vec< L, T, Q > reflect(vec< L, T, Q > const &I, vec< L, T, Q > const &N)
For the incident vector I and surface orientation N, returns the reflection direction : result = I - ...
GLM_FUNC_DECL vec< L, T, Q > faceforward(vec< L, T, Q > const &N, vec< L, T, Q > const &I, vec< L, T, Q > const &Nref)
If dot(Nref, I) < 0.0, return N, otherwise, return -N.
GLM_FUNC_DECL T length(vec< L, T, Q > const &x)
Returns the length of x, i.e., sqrt(x * x).
GLM_FUNC_DECL vec< 3, T, Q > cross(vec< 3, T, Q > const &x, vec< 3, T, Q > const &y)
Returns the cross product of x and y.
GLM_FUNC_DECL vec< L, T, Q > refract(vec< L, T, Q > const &I, vec< L, T, Q > const &N, T eta)
For the incident vector I and surface normal N, and the ratio of indices of refraction eta...
GLM_FUNC_DECL vec< L, T, Q > normalize(vec< L, T, Q > const &x)
Returns a vector in the same direction as x but with length of 1.
Core features
GLM_FUNC_DECL T distance(vec< L, T, Q > const &p0, vec< L, T, Q > const &p1)
Returns the distance between p0 and p1, i.e., length(p0 - p1).
GLM_FUNC_DECL T dot(vec< L, T, Q > const &x, vec< L, T, Q > const &y)
Returns the dot product of x and y, i.e., result = x * y.
Definition: common.hpp:20
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00037.html ================================================ 0.9.9 API documentation: glm.hpp File Reference
0.9.9 API documentation
glm.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00037_source.html ================================================ 0.9.9 API documentation: glm.hpp Source File
0.9.9 API documentation
glm.hpp
Go to the documentation of this file.
1 
103 #include "detail/_fixes.hpp"
104 
105 #include "detail/setup.hpp"
106 
107 #pragma once
108 
109 #include <cmath>
110 #include <climits>
111 #include <cfloat>
112 #include <limits>
113 #include <cassert>
114 #include "fwd.hpp"
115 
116 #include "vec2.hpp"
117 #include "vec3.hpp"
118 #include "vec4.hpp"
119 #include "mat2x2.hpp"
120 #include "mat2x3.hpp"
121 #include "mat2x4.hpp"
122 #include "mat3x2.hpp"
123 #include "mat3x3.hpp"
124 #include "mat3x4.hpp"
125 #include "mat4x2.hpp"
126 #include "mat4x3.hpp"
127 #include "mat4x4.hpp"
128 
129 #include "trigonometric.hpp"
130 #include "exponential.hpp"
131 #include "common.hpp"
132 #include "packing.hpp"
133 #include "geometric.hpp"
134 #include "matrix.hpp"
135 #include "vector_relational.hpp"
136 #include "integer.hpp"
Core features
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00038.html ================================================ 0.9.9 API documentation: gradient_paint.hpp File Reference
0.9.9 API documentation
gradient_paint.hpp File Reference

GLM_GTX_gradient_paint More...

Go to the source code of this file.

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL T linearGradient (vec< 2, T, Q > const &Point0, vec< 2, T, Q > const &Point1, vec< 2, T, Q > const &Position)
 Return a color from a linear gradient. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T radialGradient (vec< 2, T, Q > const &Center, T const &Radius, vec< 2, T, Q > const &Focal, vec< 2, T, Q > const &Position)
 Return a color from a radial gradient. More...
 

Detailed Description

GLM_GTX_gradient_paint

See also
Core features (dependence)
GLM_GTX_optimum_pow (dependence)

Definition in file gradient_paint.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00038_source.html ================================================ 0.9.9 API documentation: gradient_paint.hpp Source File
0.9.9 API documentation
gradient_paint.hpp
Go to the documentation of this file.
1 
14 #pragma once
15 
16 // Dependency:
17 #include "../glm.hpp"
18 #include "../gtx/optimum_pow.hpp"
19 
20 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
21 # ifndef GLM_ENABLE_EXPERIMENTAL
22 # pragma message("GLM: GLM_GTX_gradient_paint is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
23 # else
24 # pragma message("GLM: GLM_GTX_gradient_paint extension included")
25 # endif
26 #endif
27 
28 namespace glm
29 {
32 
35  template<typename T, qualifier Q>
36  GLM_FUNC_DECL T radialGradient(
37  vec<2, T, Q> const& Center,
38  T const& Radius,
39  vec<2, T, Q> const& Focal,
40  vec<2, T, Q> const& Position);
41 
44  template<typename T, qualifier Q>
45  GLM_FUNC_DECL T linearGradient(
46  vec<2, T, Q> const& Point0,
47  vec<2, T, Q> const& Point1,
48  vec<2, T, Q> const& Position);
49 
51 }// namespace glm
52 
53 #include "gradient_paint.inl"
GLM_FUNC_DECL T radialGradient(vec< 2, T, Q > const &Center, T const &Radius, vec< 2, T, Q > const &Focal, vec< 2, T, Q > const &Position)
Return a color from a radial gradient.
GLM_FUNC_DECL T linearGradient(vec< 2, T, Q > const &Point0, vec< 2, T, Q > const &Point1, vec< 2, T, Q > const &Position)
Return a color from a linear gradient.
Definition: common.hpp:20
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00039.html ================================================ 0.9.9 API documentation: handed_coordinate_space.hpp File Reference
0.9.9 API documentation
handed_coordinate_space.hpp File Reference

GLM_GTX_handed_coordinate_space More...

Go to the source code of this file.

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL bool leftHanded (vec< 3, T, Q > const &tangent, vec< 3, T, Q > const &binormal, vec< 3, T, Q > const &normal)
 Returns whether a trihedron is left handed or not. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL bool rightHanded (vec< 3, T, Q > const &tangent, vec< 3, T, Q > const &binormal, vec< 3, T, Q > const &normal)
 Returns whether a trihedron is right handed or not. More...
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00039_source.html ================================================ 0.9.9 API documentation: handed_coordinate_space.hpp Source File
0.9.9 API documentation
handed_coordinate_space.hpp
Go to the documentation of this file.
1 
13 #pragma once
14 
15 // Dependency:
16 #include "../glm.hpp"
17 
18 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
19 # ifndef GLM_ENABLE_EXPERIMENTAL
20 # pragma message("GLM: GLM_GTX_handed_coordinate_space is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
21 # else
22 # pragma message("GLM: GLM_GTX_handed_coordinate_space extension included")
23 # endif
24 #endif
25 
26 namespace glm
27 {
30 
33  template<typename T, qualifier Q>
34  GLM_FUNC_DECL bool rightHanded(
35  vec<3, T, Q> const& tangent,
36  vec<3, T, Q> const& binormal,
37  vec<3, T, Q> const& normal);
38 
41  template<typename T, qualifier Q>
42  GLM_FUNC_DECL bool leftHanded(
43  vec<3, T, Q> const& tangent,
44  vec<3, T, Q> const& binormal,
45  vec<3, T, Q> const& normal);
46 
48 }// namespace glm
49 
50 #include "handed_coordinate_space.inl"
GLM_FUNC_DECL bool leftHanded(vec< 3, T, Q > const &tangent, vec< 3, T, Q > const &binormal, vec< 3, T, Q > const &normal)
Returns whether a trihedron is left handed or not.
GLM_FUNC_DECL bool rightHanded(vec< 3, T, Q > const &tangent, vec< 3, T, Q > const &binormal, vec< 3, T, Q > const &normal)
Returns whether a trihedron is right handed or not.
Definition: common.hpp:20
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00040.html ================================================ 0.9.9 API documentation: hash.hpp File Reference
0.9.9 API documentation
hash.hpp File Reference

GLM_GTX_hash More...

Go to the source code of this file.

Detailed Description

GLM_GTX_hash

See also
Core features (dependence)

Definition in file hash.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00040_source.html ================================================ 0.9.9 API documentation: hash.hpp Source File
0.9.9 API documentation
hash.hpp
Go to the documentation of this file.
1 
13 #pragma once
14 
15 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
16 # ifndef GLM_ENABLE_EXPERIMENTAL
17 # pragma message("GLM: GLM_GTX_hash is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
18 # else
19 # pragma message("GLM: GLM_GTX_hash extension included")
20 # endif
21 #endif
22 
23 #include <functional>
24 
25 #include "../vec2.hpp"
26 #include "../vec3.hpp"
27 #include "../vec4.hpp"
28 #include "../gtc/vec1.hpp"
29 
30 #include "../gtc/quaternion.hpp"
31 #include "../gtx/dual_quaternion.hpp"
32 
33 #include "../mat2x2.hpp"
34 #include "../mat2x3.hpp"
35 #include "../mat2x4.hpp"
36 
37 #include "../mat3x2.hpp"
38 #include "../mat3x3.hpp"
39 #include "../mat3x4.hpp"
40 
41 #include "../mat4x2.hpp"
42 #include "../mat4x3.hpp"
43 #include "../mat4x4.hpp"
44 
45 #if !GLM_HAS_CXX11_STL
46 # error "GLM_GTX_hash requires C++11 standard library support"
47 #endif
48 
49 namespace std
50 {
51  template<typename T, glm::qualifier Q>
52  struct hash<glm::vec<1, T,Q> >
53  {
54  GLM_FUNC_DECL size_t operator()(glm::vec<1, T, Q> const& v) const;
55  };
56 
57  template<typename T, glm::qualifier Q>
58  struct hash<glm::vec<2, T,Q> >
59  {
60  GLM_FUNC_DECL size_t operator()(glm::vec<2, T, Q> const& v) const;
61  };
62 
63  template<typename T, glm::qualifier Q>
64  struct hash<glm::vec<3, T,Q> >
65  {
66  GLM_FUNC_DECL size_t operator()(glm::vec<3, T, Q> const& v) const;
67  };
68 
69  template<typename T, glm::qualifier Q>
70  struct hash<glm::vec<4, T,Q> >
71  {
72  GLM_FUNC_DECL size_t operator()(glm::vec<4, T, Q> const& v) const;
73  };
74 
75  template<typename T, glm::qualifier Q>
76  struct hash<glm::qua<T,Q>>
77  {
78  GLM_FUNC_DECL size_t operator()(glm::qua<T, Q> const& q) const;
79  };
80 
81  template<typename T, glm::qualifier Q>
82  struct hash<glm::tdualquat<T,Q> >
83  {
84  GLM_FUNC_DECL size_t operator()(glm::tdualquat<T,Q> const& q) const;
85  };
86 
87  template<typename T, glm::qualifier Q>
88  struct hash<glm::mat<2, 2, T,Q> >
89  {
90  GLM_FUNC_DECL size_t operator()(glm::mat<2, 2, T,Q> const& m) const;
91  };
92 
93  template<typename T, glm::qualifier Q>
94  struct hash<glm::mat<2, 3, T,Q> >
95  {
96  GLM_FUNC_DECL size_t operator()(glm::mat<2, 3, T,Q> const& m) const;
97  };
98 
99  template<typename T, glm::qualifier Q>
100  struct hash<glm::mat<2, 4, T,Q> >
101  {
102  GLM_FUNC_DECL size_t operator()(glm::mat<2, 4, T,Q> const& m) const;
103  };
104 
105  template<typename T, glm::qualifier Q>
106  struct hash<glm::mat<3, 2, T,Q> >
107  {
108  GLM_FUNC_DECL size_t operator()(glm::mat<3, 2, T,Q> const& m) const;
109  };
110 
111  template<typename T, glm::qualifier Q>
112  struct hash<glm::mat<3, 3, T,Q> >
113  {
114  GLM_FUNC_DECL size_t operator()(glm::mat<3, 3, T,Q> const& m) const;
115  };
116 
117  template<typename T, glm::qualifier Q>
118  struct hash<glm::mat<3, 4, T,Q> >
119  {
120  GLM_FUNC_DECL size_t operator()(glm::mat<3, 4, T,Q> const& m) const;
121  };
122 
123  template<typename T, glm::qualifier Q>
124  struct hash<glm::mat<4, 2, T,Q> >
125  {
126  GLM_FUNC_DECL size_t operator()(glm::mat<4, 2, T,Q> const& m) const;
127  };
128 
129  template<typename T, glm::qualifier Q>
130  struct hash<glm::mat<4, 3, T,Q> >
131  {
132  GLM_FUNC_DECL size_t operator()(glm::mat<4, 3, T,Q> const& m) const;
133  };
134 
135  template<typename T, glm::qualifier Q>
136  struct hash<glm::mat<4, 4, T,Q> >
137  {
138  GLM_FUNC_DECL size_t operator()(glm::mat<4, 4, T,Q> const& m) const;
139  };
140 } // namespace std
141 
142 #include "hash.inl"
Definition: hash.hpp:49
Definition: common.hpp:20
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00041.html ================================================ 0.9.9 API documentation: integer.hpp File Reference
0.9.9 API documentation
gtc/integer.hpp File Reference

GLM_GTC_integer More...

Go to the source code of this file.

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, int, Q > iround (vec< L, T, Q > const &x)
 Returns a value equal to the nearest integer to x. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType log2 (genIUType x)
 Returns the log2 of x for integer values. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, uint, Q > uround (vec< L, T, Q > const &x)
 Returns a value equal to the nearest integer to x. More...
 

Detailed Description

GLM_GTC_integer

See also
Core features (dependence)
GLM_GTC_integer (dependence)

Definition in file gtc/integer.hpp.
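A scalar, plain-C++ sketch of what these GLM_GTC_integer functions compute (glm applies iround/uround per component on vectors; the function names below are ours, not glm's). GLSL leaves rounding of the exact .5 case implementation-chosen, so the sketch avoids relying on it:

```cpp
#include <cassert>
#include <cmath>

// Round to the nearest int / unsigned int, like glm::iround / glm::uround
// do per component (scalar sketch only).
inline int iround_scalar(float x) {
    return static_cast<int>(std::lround(x));
}
inline unsigned uround_scalar(float x) {
    return static_cast<unsigned>(std::lround(x));
}

// Integer log2: index of the most significant set bit (x must be > 0).
inline int ilog2(unsigned x) {
    int r = -1;
    while (x) { x >>= 1; ++r; }
    return r;
}
```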

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00041_source.html ================================================ 0.9.9 API documentation: integer.hpp Source File
gtc/integer.hpp
Go to the documentation of this file.
1 
14 #pragma once
15 
16 // Dependencies
17 #include "../detail/setup.hpp"
18 #include "../detail/qualifier.hpp"
19 #include "../common.hpp"
20 #include "../integer.hpp"
21 #include "../exponential.hpp"
22 #include <limits>
23 
24 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
25 # pragma message("GLM: GLM_GTC_integer extension included")
26 #endif
27 
28 namespace glm
29 {
32 
35  template<typename genIUType>
36  GLM_FUNC_DECL genIUType log2(genIUType x);
37 
47  template<length_t L, typename T, qualifier Q>
48  GLM_FUNC_DECL vec<L, int, Q> iround(vec<L, T, Q> const& x);
49 
59  template<length_t L, typename T, qualifier Q>
60  GLM_FUNC_DECL vec<L, uint, Q> uround(vec<L, T, Q> const& x);
61 
63 } //namespace glm
64 
65 #include "integer.inl"
GLM_FUNC_DECL vec< L, uint, Q > uround(vec< L, T, Q > const &x)
Returns a value equal to the nearest integer to x.
GLM_FUNC_DECL genIUType log2(genIUType x)
Returns the log2 of x for integer values.
GLM_FUNC_DECL vec< L, int, Q > iround(vec< L, T, Q > const &x)
Returns a value equal to the nearest integer to x.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00042.html ================================================ 0.9.9 API documentation: integer.hpp File Reference
gtx/integer.hpp File Reference

GLM_GTX_integer More...

Go to the source code of this file.

Typedefs

typedef signed int sint
 32bit signed integer. More...
 

Functions

template<typename genType >
GLM_FUNC_DECL genType factorial (genType const &x)
 Return the factorial value of a number (12! max, integer only). From the GLM_GTX_integer extension. More...
 
GLM_FUNC_DECL unsigned int floor_log2 (unsigned int x)
 Returns the floor log2 of x. More...
 
GLM_FUNC_DECL int mod (int x, int y)
 Modulus. More...
 
GLM_FUNC_DECL uint mod (uint x, uint y)
 Modulus. More...
 
GLM_FUNC_DECL uint nlz (uint x)
 Returns the number of leading zeros. More...
 
GLM_FUNC_DECL int pow (int x, uint y)
 Returns x raised to the y power. More...
 
GLM_FUNC_DECL uint pow (uint x, uint y)
 Returns x raised to the y power. More...
 
GLM_FUNC_DECL int sqrt (int x)
 Returns the positive square root of x. More...
 
GLM_FUNC_DECL uint sqrt (uint x)
 Returns the positive square root of x. More...
 

Detailed Description

GLM_GTX_integer

See also
Core features (dependence)

Definition in file gtx/integer.hpp.
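Two of the GLM_GTX_integer helpers can be sketched in a few lines of plain C++ (our own names, not glm's): floor_log2 is the index of the highest set bit, and nlz counts leading zeros in a 32-bit word, so the two are related by `floor_log2(x) == 31 - nlz(x)` for nonzero x:

```cpp
#include <cassert>

// Index of the highest set bit (x must be > 0).
inline unsigned my_floor_log2(unsigned x) {
    unsigned r = 0;
    while (x >>= 1) ++r;
    return r;
}

// Number of leading zero bits in a 32-bit word.
inline unsigned my_nlz(unsigned x) {
    if (x == 0) return 32u;
    unsigned n = 0;
    while (!(x & 0x80000000u)) { x <<= 1; ++n; }
    return n;
}
```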

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00042_source.html ================================================ 0.9.9 API documentation: integer.hpp Source File
gtx/integer.hpp
Go to the documentation of this file.
1 
13 #pragma once
14 
15 // Dependency:
16 #include "../glm.hpp"
17 #include "../gtc/integer.hpp"
18 
19 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
20 # ifndef GLM_ENABLE_EXPERIMENTAL
21 # pragma message("GLM: GLM_GTX_integer is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
22 # else
23 # pragma message("GLM: GLM_GTX_integer extension included")
24 # endif
25 #endif
26 
27 namespace glm
28 {
31 
34  GLM_FUNC_DECL int pow(int x, uint y);
35 
38  GLM_FUNC_DECL int sqrt(int x);
39 
42  GLM_FUNC_DECL unsigned int floor_log2(unsigned int x);
43 
46  GLM_FUNC_DECL int mod(int x, int y);
47 
50  template<typename genType>
51  GLM_FUNC_DECL genType factorial(genType const& x);
52 
55  typedef signed int sint;
56 
59  GLM_FUNC_DECL uint pow(uint x, uint y);
60 
63  GLM_FUNC_DECL uint sqrt(uint x);
64 
67  GLM_FUNC_DECL uint mod(uint x, uint y);
68 
71  GLM_FUNC_DECL uint nlz(uint x);
72 
74 }//namespace glm
75 
76 #include "integer.inl"
GLM_FUNC_DECL uint nlz(uint x)
Returns the number of leading zeros.
GLM_FUNC_DECL uint mod(uint x, uint y)
Modulus.
GLM_FUNC_DECL unsigned int floor_log2(unsigned int x)
Returns the floor log2 of x.
signed int sint
32bit signed integer.
Definition: gtx/integer.hpp:55
GLM_FUNC_DECL genType factorial(genType const &x)
Return the factorial value of a number (12! max, integer only). From the GLM_GTX_integer extension...
GLM_FUNC_DECL uint pow(uint x, uint y)
Returns x raised to the y power.
GLM_FUNC_DECL uint sqrt(uint x)
Returns the positive square root of x.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00043.html ================================================ 0.9.9 API documentation: integer.hpp File Reference
integer.hpp File Reference

Core features More...

Go to the source code of this file.

Functions

template<typename genType >
GLM_FUNC_DECL int bitCount (genType v)
 Returns the number of bits set to 1 in the binary representation of value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, int, Q > bitCount (vec< L, T, Q > const &v)
 Returns the number of bits set to 1 in the binary representation of value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > bitfieldExtract (vec< L, T, Q > const &Value, int Offset, int Bits)
 Extracts bits [offset, offset + bits - 1] from value, returning them in the least significant bits of the result. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > bitfieldInsert (vec< L, T, Q > const &Base, vec< L, T, Q > const &Insert, int Offset, int Bits)
 Returns the result of inserting the Bits least-significant bits of Insert into Base. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > bitfieldReverse (vec< L, T, Q > const &v)
 Returns the reversal of the bits of value. More...
 
template<typename genIUType >
GLM_FUNC_DECL int findLSB (genIUType x)
 Returns the bit number of the least significant bit set to 1 in the binary representation of value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, int, Q > findLSB (vec< L, T, Q > const &v)
 Returns the bit number of the least significant bit set to 1 in the binary representation of value. More...
 
template<typename genIUType >
GLM_FUNC_DECL int findMSB (genIUType x)
 Returns the bit number of the most significant bit in the binary representation of value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, int, Q > findMSB (vec< L, T, Q > const &v)
 Returns the bit number of the most significant bit in the binary representation of value. More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL void imulExtended (vec< L, int, Q > const &x, vec< L, int, Q > const &y, vec< L, int, Q > &msb, vec< L, int, Q > &lsb)
 Multiplies 32-bit integers x and y, producing a 64-bit result. More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL vec< L, uint, Q > uaddCarry (vec< L, uint, Q > const &x, vec< L, uint, Q > const &y, vec< L, uint, Q > &carry)
 Adds 32-bit unsigned integer x and y, returning the sum modulo pow(2, 32). More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL void umulExtended (vec< L, uint, Q > const &x, vec< L, uint, Q > const &y, vec< L, uint, Q > &msb, vec< L, uint, Q > &lsb)
 Multiplies 32-bit integers x and y, producing a 64-bit result. More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL vec< L, uint, Q > usubBorrow (vec< L, uint, Q > const &x, vec< L, uint, Q > const &y, vec< L, uint, Q > &borrow)
 Subtracts the 32-bit unsigned integer y from x, returning the difference if non-negative, or pow(2, 32) plus the difference otherwise. More...
 

Detailed Description
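The bitfield declarations above follow GLSL semantics. A scalar uint32 sketch of what bitfieldExtract and bitfieldInsert compute (our own names, not glm's; assumes 0 <= offset and offset + bits <= 32):

```cpp
#include <cassert>
#include <cstdint>

// Extract bits [offset, offset + bits - 1] of value into the
// least-significant bits of the result.
inline uint32_t bitfield_extract(uint32_t value, int offset, int bits) {
    if (bits == 0) return 0u;
    uint32_t mask = (bits >= 32) ? 0xFFFFFFFFu : ((1u << bits) - 1u);
    return (value >> offset) & mask;
}

// Insert the bits least-significant bits of insert into base at offset.
inline uint32_t bitfield_insert(uint32_t base, uint32_t insert, int offset, int bits) {
    if (bits == 0) return base;
    uint32_t mask = ((bits >= 32) ? 0xFFFFFFFFu : ((1u << bits) - 1u)) << offset;
    return (base & ~mask) | ((insert << offset) & mask);
}
```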

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00043_source.html ================================================ 0.9.9 API documentation: integer.hpp Source File
integer.hpp
Go to the documentation of this file.
1 
17 #pragma once
18 
19 #include "detail/qualifier.hpp"
20 #include "common.hpp"
21 #include "vector_relational.hpp"
22 
23 namespace glm
24 {
27 
36  template<length_t L, qualifier Q>
37  GLM_FUNC_DECL vec<L, uint, Q> uaddCarry(
38  vec<L, uint, Q> const& x,
39  vec<L, uint, Q> const& y,
40  vec<L, uint, Q> & carry);
41 
50  template<length_t L, qualifier Q>
51  GLM_FUNC_DECL vec<L, uint, Q> usubBorrow(
52  vec<L, uint, Q> const& x,
53  vec<L, uint, Q> const& y,
54  vec<L, uint, Q> & borrow);
55 
64  template<length_t L, qualifier Q>
65  GLM_FUNC_DECL void umulExtended(
66  vec<L, uint, Q> const& x,
67  vec<L, uint, Q> const& y,
68  vec<L, uint, Q> & msb,
69  vec<L, uint, Q> & lsb);
70 
79  template<length_t L, qualifier Q>
80  GLM_FUNC_DECL void imulExtended(
81  vec<L, int, Q> const& x,
82  vec<L, int, Q> const& y,
83  vec<L, int, Q> & msb,
84  vec<L, int, Q> & lsb);
85 
102  template<length_t L, typename T, qualifier Q>
103  GLM_FUNC_DECL vec<L, T, Q> bitfieldExtract(
104  vec<L, T, Q> const& Value,
105  int Offset,
106  int Bits);
107 
123  template<length_t L, typename T, qualifier Q>
124  GLM_FUNC_DECL vec<L, T, Q> bitfieldInsert(
125  vec<L, T, Q> const& Base,
126  vec<L, T, Q> const& Insert,
127  int Offset,
128  int Bits);
129 
139  template<length_t L, typename T, qualifier Q>
140  GLM_FUNC_DECL vec<L, T, Q> bitfieldReverse(vec<L, T, Q> const& v);
141 
148  template<typename genType>
149  GLM_FUNC_DECL int bitCount(genType v);
150 
158  template<length_t L, typename T, qualifier Q>
159  GLM_FUNC_DECL vec<L, int, Q> bitCount(vec<L, T, Q> const& v);
160 
169  template<typename genIUType>
170  GLM_FUNC_DECL int findLSB(genIUType x);
171 
181  template<length_t L, typename T, qualifier Q>
182  GLM_FUNC_DECL vec<L, int, Q> findLSB(vec<L, T, Q> const& v);
183 
193  template<typename genIUType>
194  GLM_FUNC_DECL int findMSB(genIUType x);
195 
206  template<length_t L, typename T, qualifier Q>
207  GLM_FUNC_DECL vec<L, int, Q> findMSB(vec<L, T, Q> const& v);
208 
210 }//namespace glm
211 
212 #include "detail/func_integer.inl"
Core features
GLM_FUNC_DECL vec< L, int, Q > findMSB(vec< L, T, Q > const &v)
Returns the bit number of the most significant bit in the binary representation of value...
GLM_FUNC_DECL void umulExtended(vec< L, uint, Q > const &x, vec< L, uint, Q > const &y, vec< L, uint, Q > &msb, vec< L, uint, Q > &lsb)
Multiplies 32-bit integers x and y, producing a 64-bit result.
GLM_FUNC_DECL void imulExtended(vec< L, int, Q > const &x, vec< L, int, Q > const &y, vec< L, int, Q > &msb, vec< L, int, Q > &lsb)
Multiplies 32-bit integers x and y, producing a 64-bit result.
GLM_FUNC_DECL vec< L, int, Q > bitCount(vec< L, T, Q > const &v)
Returns the number of bits set to 1 in the binary representation of value.
GLM_FUNC_DECL vec< L, uint, Q > uaddCarry(vec< L, uint, Q > const &x, vec< L, uint, Q > const &y, vec< L, uint, Q > &carry)
Adds 32-bit unsigned integer x and y, returning the sum modulo pow(2, 32).
GLM_FUNC_DECL vec< L, T, Q > bitfieldExtract(vec< L, T, Q > const &Value, int Offset, int Bits)
Extracts bits [offset, offset + bits - 1] from value, returning them in the least significant bits of...
GLM_FUNC_DECL vec< L, T, Q > bitfieldInsert(vec< L, T, Q > const &Base, vec< L, T, Q > const &Insert, int Offset, int Bits)
Returns the result of inserting the Bits least-significant bits of Insert into Base.

Core features
GLM_FUNC_DECL vec< L, T, Q > bitfieldReverse(vec< L, T, Q > const &v)
Returns the reversal of the bits of value.
GLM_FUNC_DECL vec< L, uint, Q > usubBorrow(vec< L, uint, Q > const &x, vec< L, uint, Q > const &y, vec< L, uint, Q > &borrow)
Subtracts the 32-bit unsigned integer y from x, returning the difference if non-negative, or pow(2, 32) plus the difference otherwise.
GLM_FUNC_DECL vec< L, int, Q > findLSB(vec< L, T, Q > const &v)
Returns the bit number of the least significant bit set to 1 in the binary representation of value...
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00044.html ================================================ 0.9.9 API documentation: intersect.hpp File Reference
intersect.hpp File Reference

GLM_GTX_intersect More...

Go to the source code of this file.

Functions

template<typename genType >
GLM_FUNC_DECL bool intersectLineSphere (genType const &point0, genType const &point1, genType const &sphereCenter, typename genType::value_type sphereRadius, genType &intersectionPosition1, genType &intersectionNormal1, genType &intersectionPosition2=genType(), genType &intersectionNormal2=genType())
 Compute the intersection of a line and a sphere. More...
 
template<typename genType >
GLM_FUNC_DECL bool intersectLineTriangle (genType const &orig, genType const &dir, genType const &vert0, genType const &vert1, genType const &vert2, genType &position)
 Compute the intersection of a line and a triangle. More...
 
template<typename genType >
GLM_FUNC_DECL bool intersectRayPlane (genType const &orig, genType const &dir, genType const &planeOrig, genType const &planeNormal, typename genType::value_type &intersectionDistance)
 Compute the intersection of a ray and a plane. More...
 
template<typename genType >
GLM_FUNC_DECL bool intersectRaySphere (genType const &rayStarting, genType const &rayNormalizedDirection, genType const &sphereCenter, typename genType::value_type const sphereRadiusSquared, typename genType::value_type &intersectionDistance)
 Compute the intersection distance of a ray and a sphere. More...
 
template<typename genType >
GLM_FUNC_DECL bool intersectRaySphere (genType const &rayStarting, genType const &rayNormalizedDirection, genType const &sphereCenter, const typename genType::value_type sphereRadius, genType &intersectionPosition, genType &intersectionNormal)
 Compute the intersection of a ray and a sphere. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL bool intersectRayTriangle (vec< 3, T, Q > const &orig, vec< 3, T, Q > const &dir, vec< 3, T, Q > const &v0, vec< 3, T, Q > const &v1, vec< 3, T, Q > const &v2, vec< 2, T, Q > &baryPosition, T &distance)
 Compute the intersection of a ray and a triangle. More...
 

Detailed Description

GLM_GTX_intersect

See also
Core features (dependence)
GLM_GTX_closest_point (dependence)

Definition in file intersect.hpp.
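A minimal plain-C++ sketch of the test intersectRayPlane performs, using a bare three-float struct instead of glm types (names are ours; glm's exact epsilon and facing conventions may differ):

```cpp
#include <cassert>
#include <cmath>

struct v3 { float x, y, z; };
inline float dot3(v3 a, v3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
inline v3 sub3(v3 a, v3 b) { return v3{a.x - b.x, a.y - b.y, a.z - b.z}; }

// Writes the distance t along dir to the hit point and returns true
// when the ray (not parallel to the plane) hits in front of its origin.
inline bool ray_plane(v3 orig, v3 dir, v3 planeOrig, v3 planeNormal, float& t) {
    float d = dot3(dir, planeNormal);
    if (std::fabs(d) > 1e-6f) {       // ray is not parallel to the plane
        t = dot3(sub3(planeOrig, orig), planeNormal) / d;
        return t > 0.0f;              // hit must lie in front of the origin
    }
    return false;
}
```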

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00044_source.html ================================================ 0.9.9 API documentation: intersect.hpp Source File
intersect.hpp
Go to the documentation of this file.
1 
14 #pragma once
15 
16 // Dependency:
17 #include <cfloat>
18 #include <limits>
19 #include "../glm.hpp"
20 #include "../geometric.hpp"
21 #include "../gtx/closest_point.hpp"
22 #include "../gtx/vector_query.hpp"
23 
24 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
25 # ifndef GLM_ENABLE_EXPERIMENTAL
26 # pragma message("GLM: GLM_GTX_closest_point is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
27 # else
28 # pragma message("GLM: GLM_GTX_closest_point extension included")
29 # endif
30 #endif
31 
32 namespace glm
33 {
36 
40  template<typename genType>
41  GLM_FUNC_DECL bool intersectRayPlane(
42  genType const& orig, genType const& dir,
43  genType const& planeOrig, genType const& planeNormal,
44  typename genType::value_type & intersectionDistance);
45 
49  template<typename T, qualifier Q>
50  GLM_FUNC_DECL bool intersectRayTriangle(
51  vec<3, T, Q> const& orig, vec<3, T, Q> const& dir,
52  vec<3, T, Q> const& v0, vec<3, T, Q> const& v1, vec<3, T, Q> const& v2,
53  vec<2, T, Q>& baryPosition, T& distance);
54 
57  template<typename genType>
58  GLM_FUNC_DECL bool intersectLineTriangle(
59  genType const& orig, genType const& dir,
60  genType const& vert0, genType const& vert1, genType const& vert2,
61  genType & position);
62 
66  template<typename genType>
67  GLM_FUNC_DECL bool intersectRaySphere(
68  genType const& rayStarting, genType const& rayNormalizedDirection,
69  genType const& sphereCenter, typename genType::value_type const sphereRadiusSquared,
70  typename genType::value_type & intersectionDistance);
71 
74  template<typename genType>
75  GLM_FUNC_DECL bool intersectRaySphere(
76  genType const& rayStarting, genType const& rayNormalizedDirection,
77  genType const& sphereCenter, const typename genType::value_type sphereRadius,
78  genType & intersectionPosition, genType & intersectionNormal);
79 
82  template<typename genType>
83  GLM_FUNC_DECL bool intersectLineSphere(
84  genType const& point0, genType const& point1,
85  genType const& sphereCenter, typename genType::value_type sphereRadius,
86  genType & intersectionPosition1, genType & intersectionNormal1,
87  genType & intersectionPosition2 = genType(), genType & intersectionNormal2 = genType());
88 
90 }//namespace glm
91 
92 #include "intersect.inl"
GLM_FUNC_DECL bool intersectRayTriangle(vec< 3, T, Q > const &orig, vec< 3, T, Q > const &dir, vec< 3, T, Q > const &v0, vec< 3, T, Q > const &v1, vec< 3, T, Q > const &v2, vec< 2, T, Q > &baryPosition, T &distance)
Compute the intersection of a ray and a triangle.
GLM_FUNC_DECL bool intersectRaySphere(genType const &rayStarting, genType const &rayNormalizedDirection, genType const &sphereCenter, const typename genType::value_type sphereRadius, genType &intersectionPosition, genType &intersectionNormal)
Compute the intersection of a ray and a sphere.
GLM_FUNC_DECL bool intersectRayPlane(genType const &orig, genType const &dir, genType const &planeOrig, genType const &planeNormal, typename genType::value_type &intersectionDistance)
Compute the intersection of a ray and a plane.
GLM_FUNC_DECL bool intersectLineTriangle(genType const &orig, genType const &dir, genType const &vert0, genType const &vert1, genType const &vert2, genType &position)
Compute the intersection of a line and a triangle.
GLM_FUNC_DECL bool intersectLineSphere(genType const &point0, genType const &point1, genType const &sphereCenter, typename genType::value_type sphereRadius, genType &intersectionPosition1, genType &intersectionNormal1, genType &intersectionPosition2=genType(), genType &intersectionNormal2=genType())
Compute the intersection of a line and a sphere.
GLM_FUNC_DECL T distance(vec< L, T, Q > const &p0, vec< L, T, Q > const &p1)
Returns the distance between p0 and p1, i.e., length(p0 - p1).
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00045.html ================================================ 0.9.9 API documentation: io.hpp File Reference
io.hpp File Reference

GLM_GTX_io More...

Go to the source code of this file.

Detailed Description

GLM_GTX_io

Author
Jan P Springer (regnirpsj@gmail.com)
See also
Core features (dependence)
GLM_GTC_matrix_access (dependence)
GLM_GTC_quaternion (dependence)

Definition in file io.hpp.
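GLM_GTX_io streams vectors and matrices as delimiter-wrapped, comma-separated component lists. A self-contained sketch of that kind of formatted output, with a hypothetical struct and function (not glm's API):

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Hypothetical stand-in for glm::vec3, to illustrate the output style.
struct vec3s { float x, y, z; };

// Print the components between configurable delimiters, the way the
// io extension's delimeter/precision facets let you customize output.
inline std::string format_vec(vec3s v, char left = '[', char right = ']') {
    std::ostringstream os;
    os << left << v.x << ", " << v.y << ", " << v.z << right;
    return os.str();
}
```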

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00045_source.html ================================================ 0.9.9 API documentation: io.hpp Source File
io.hpp
Go to the documentation of this file.
1 
20 #pragma once
21 
22 // Dependency:
23 #include "../glm.hpp"
24 #include "../gtx/quaternion.hpp"
25 
26 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
27 # ifndef GLM_ENABLE_EXPERIMENTAL
28 # pragma message("GLM: GLM_GTX_io is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
29 # else
30 # pragma message("GLM: GLM_GTX_io extension included")
31 # endif
32 #endif
33 
34 #include <iosfwd> // std::basic_ostream<> (fwd)
35 #include <locale> // std::locale, std::locale::facet, std::locale::id
36 #include <utility> // std::pair<>
37 
38 namespace glm
39 {
42 
43  namespace io
44  {
45  enum order_type { column_major, row_major};
46 
47  template<typename CTy>
48  class format_punct : public std::locale::facet
49  {
50  typedef CTy char_type;
51 
52  public:
53 
54  static std::locale::id id;
55 
56  bool formatted;
57  unsigned precision;
58  unsigned width;
59  char_type separator;
60  char_type delim_left;
61  char_type delim_right;
62  char_type space;
63  char_type newline;
64  order_type order;
65 
66  GLM_FUNC_DECL explicit format_punct(size_t a = 0);
67  GLM_FUNC_DECL explicit format_punct(format_punct const&);
68  };
69 
70  template<typename CTy, typename CTr = std::char_traits<CTy> >
71  class basic_state_saver {
72 
73  public:
74 
75  GLM_FUNC_DECL explicit basic_state_saver(std::basic_ios<CTy,CTr>&);
76  GLM_FUNC_DECL ~basic_state_saver();
77 
78  private:
79 
80  typedef ::std::basic_ios<CTy,CTr> state_type;
81  typedef typename state_type::char_type char_type;
82  typedef ::std::ios_base::fmtflags flags_type;
83  typedef ::std::streamsize streamsize_type;
84  typedef ::std::locale const locale_type;
85 
86  state_type& state_;
87  flags_type flags_;
88  streamsize_type precision_;
89  streamsize_type width_;
90  char_type fill_;
91  locale_type locale_;
92 
93  GLM_FUNC_DECL basic_state_saver& operator=(basic_state_saver const&);
94  };
95 
96  typedef basic_state_saver<char> state_saver;
97  typedef basic_state_saver<wchar_t> wstate_saver;
98 
99  template<typename CTy, typename CTr = std::char_traits<CTy> >
100  class basic_format_saver
101  {
102  public:
103 
104  GLM_FUNC_DECL explicit basic_format_saver(std::basic_ios<CTy,CTr>&);
105  GLM_FUNC_DECL ~basic_format_saver();
106 
107  private:
108 
109  basic_state_saver<CTy> const bss_;
110 
111  GLM_FUNC_DECL basic_format_saver& operator=(basic_format_saver const&);
112  };
113 
114  typedef basic_format_saver<char> format_saver;
115  typedef basic_format_saver<wchar_t> wformat_saver;
116 
117  struct precision
118  {
119  unsigned value;
120 
121  GLM_FUNC_DECL explicit precision(unsigned);
122  };
123 
124  struct width
125  {
126  unsigned value;
127 
128  GLM_FUNC_DECL explicit width(unsigned);
129  };
130 
131  template<typename CTy>
132  struct delimeter
133  {
134  CTy value[3];
135 
136  GLM_FUNC_DECL explicit delimeter(CTy /* left */, CTy /* right */, CTy /* separator */ = ',');
137  };
138 
139  struct order
140  {
141  order_type value;
142 
143  GLM_FUNC_DECL explicit order(order_type);
144  };
145 
146  // functions, inlined (inline)
147 
148  template<typename FTy, typename CTy, typename CTr>
149  FTy const& get_facet(std::basic_ios<CTy,CTr>&);
150  template<typename FTy, typename CTy, typename CTr>
151  std::basic_ios<CTy,CTr>& formatted(std::basic_ios<CTy,CTr>&);
152  template<typename FTy, typename CTy, typename CTr>
153  std::basic_ios<CTy,CTr>& unformattet(std::basic_ios<CTy,CTr>&);
154 
155  template<typename CTy, typename CTr>
156  std::basic_ostream<CTy, CTr>& operator<<(std::basic_ostream<CTy, CTr>&, precision const&);
157  template<typename CTy, typename CTr>
158  std::basic_ostream<CTy, CTr>& operator<<(std::basic_ostream<CTy, CTr>&, width const&);
159  template<typename CTy, typename CTr>
160  std::basic_ostream<CTy, CTr>& operator<<(std::basic_ostream<CTy, CTr>&, delimeter<CTy> const&);
161  template<typename CTy, typename CTr>
162  std::basic_ostream<CTy, CTr>& operator<<(std::basic_ostream<CTy, CTr>&, order const&);
163  }//namespace io
164 
165  template<typename CTy, typename CTr, typename T, qualifier Q>
166  GLM_FUNC_DECL std::basic_ostream<CTy,CTr>& operator<<(std::basic_ostream<CTy,CTr>&, qua<T, Q> const&);
167  template<typename CTy, typename CTr, typename T, qualifier Q>
168  GLM_FUNC_DECL std::basic_ostream<CTy,CTr>& operator<<(std::basic_ostream<CTy,CTr>&, vec<1, T, Q> const&);
169  template<typename CTy, typename CTr, typename T, qualifier Q>
170  GLM_FUNC_DECL std::basic_ostream<CTy,CTr>& operator<<(std::basic_ostream<CTy,CTr>&, vec<2, T, Q> const&);
171  template<typename CTy, typename CTr, typename T, qualifier Q>
172  GLM_FUNC_DECL std::basic_ostream<CTy,CTr>& operator<<(std::basic_ostream<CTy,CTr>&, vec<3, T, Q> const&);
173  template<typename CTy, typename CTr, typename T, qualifier Q>
174  GLM_FUNC_DECL std::basic_ostream<CTy,CTr>& operator<<(std::basic_ostream<CTy,CTr>&, vec<4, T, Q> const&);
175  template<typename CTy, typename CTr, typename T, qualifier Q>
176  GLM_FUNC_DECL std::basic_ostream<CTy,CTr>& operator<<(std::basic_ostream<CTy,CTr>&, mat<2, 2, T, Q> const&);
177  template<typename CTy, typename CTr, typename T, qualifier Q>
178  GLM_FUNC_DECL std::basic_ostream<CTy,CTr>& operator<<(std::basic_ostream<CTy,CTr>&, mat<2, 3, T, Q> const&);
179  template<typename CTy, typename CTr, typename T, qualifier Q>
180  GLM_FUNC_DECL std::basic_ostream<CTy,CTr>& operator<<(std::basic_ostream<CTy,CTr>&, mat<2, 4, T, Q> const&);
181  template<typename CTy, typename CTr, typename T, qualifier Q>
182  GLM_FUNC_DECL std::basic_ostream<CTy,CTr>& operator<<(std::basic_ostream<CTy,CTr>&, mat<3, 2, T, Q> const&);
183  template<typename CTy, typename CTr, typename T, qualifier Q>
184  GLM_FUNC_DECL std::basic_ostream<CTy,CTr>& operator<<(std::basic_ostream<CTy,CTr>&, mat<3, 3, T, Q> const&);
185  template<typename CTy, typename CTr, typename T, qualifier Q>
186  GLM_FUNC_DECL std::basic_ostream<CTy,CTr>& operator<<(std::basic_ostream<CTy,CTr>&, mat<3, 4, T, Q> const&);
187  template<typename CTy, typename CTr, typename T, qualifier Q>
188  GLM_FUNC_DECL std::basic_ostream<CTy,CTr>& operator<<(std::basic_ostream<CTy,CTr>&, mat<4, 2, T, Q> const&);
189  template<typename CTy, typename CTr, typename T, qualifier Q>
190  GLM_FUNC_DECL std::basic_ostream<CTy,CTr>& operator<<(std::basic_ostream<CTy,CTr>&, mat<4, 3, T, Q> const&);
191  template<typename CTy, typename CTr, typename T, qualifier Q>
192  GLM_FUNC_DECL std::basic_ostream<CTy,CTr>& operator<<(std::basic_ostream<CTy,CTr>&, mat<4, 4, T, Q> const&);
193 
194  template<typename CTy, typename CTr, typename T, qualifier Q>
195  GLM_FUNC_DECL std::basic_ostream<CTy,CTr> & operator<<(std::basic_ostream<CTy,CTr> &,
196  std::pair<mat<4, 4, T, Q> const, mat<4, 4, T, Q> const> const&);
197 
199 }//namespace glm
200 
201 #include "io.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00046.html ================================================ 0.9.9 API documentation: log_base.hpp File Reference
log_base.hpp File Reference

GLM_GTX_log_base More...

Go to the source code of this file.

Functions

template<typename genType >
GLM_FUNC_DECL genType log (genType const &x, genType const &base)
 Logarithm for any base. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > sign (vec< L, T, Q > const &x, vec< L, T, Q > const &base)
 Logarithm for any base. More...
 

Detailed Description

GLM_GTX_log_base

See also
Core features (dependence)

Definition in file log_base.hpp.
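The scalar log(x, base) declared above reduces to the change-of-base identity; a plain-C++ sketch (the vector overload applies the same computation per component):

```cpp
#include <cassert>
#include <cmath>

// Logarithm of x in an arbitrary base: log_b(x) = ln(x) / ln(base).
inline double log_base(double x, double base) {
    return std::log(x) / std::log(base);
}
```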

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00046_source.html ================================================ 0.9.9 API documentation: log_base.hpp Source File
log_base.hpp
Go to the documentation of this file.
1 
13 #pragma once
14 
15 // Dependency:
16 #include "../glm.hpp"
17 
18 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
19 # ifndef GLM_ENABLE_EXPERIMENTAL
20 # pragma message("GLM: GLM_GTX_log_base is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
21 # else
22 # pragma message("GLM: GLM_GTX_log_base extension included")
23 # endif
24 #endif
25 
26 namespace glm
27 {
30 
33  template<typename genType>
34  GLM_FUNC_DECL genType log(
35  genType const& x,
36  genType const& base);
37 
40  template<length_t L, typename T, qualifier Q>
41  GLM_FUNC_DECL vec<L, T, Q> sign(
42  vec<L, T, Q> const& x,
43  vec<L, T, Q> const& base);
44 
46 }//namespace glm
47 
48 #include "log_base.inl"
GLM_FUNC_DECL vec< L, T, Q > sign(vec< L, T, Q > const &x, vec< L, T, Q > const &base)
Logarithm for any base.
GLM_FUNC_DECL genType log(genType const &x, genType const &base)
Logarithm for any base.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00047_source.html ================================================ 0.9.9 API documentation: man.doxy Source File
man.doxy
1 # Doxyfile 1.8.10
2 
3 # This file describes the settings to be used by the documentation system
4 # doxygen (www.doxygen.org) for a project.
5 #
6 # All text after a double hash (##) is considered a comment and is placed in
7 # front of the TAG it is preceding.
8 #
9 # All text after a single hash (#) is considered a comment and will be ignored.
10 # The format is:
11 # TAG = value [value, ...]
12 # For lists, items can also be appended using:
13 # TAG += value [value, ...]
14 # Values that contain spaces should be placed between quotes (\" \").
15 
16 #---------------------------------------------------------------------------
17 # Project related configuration options
18 #---------------------------------------------------------------------------
19 
20 # This tag specifies the encoding used for all characters in the config file
21 # that follow. The default is UTF-8 which is also the encoding used for all text
22 # before the first occurrence of this tag. Doxygen uses libiconv (or the iconv
23 # built into libc) for the transcoding. See http://www.gnu.org/software/libiconv
24 # for the list of possible encodings.
25 # The default value is: UTF-8.
26 
27 DOXYFILE_ENCODING = UTF-8
28 
29 # The PROJECT_NAME tag is a single word (or a sequence of words surrounded by
30 # double-quotes, unless you are using Doxywizard) that should identify the
31 # project for which the documentation is generated. This name is used in the
32 # title of most generated pages and in a few other places.
33 # The default value is: My Project.
34 
35 PROJECT_NAME = "0.9.9 API documentation"
36 
37 # The PROJECT_NUMBER tag can be used to enter a project or revision number. This
38 # could be handy for archiving the generated documentation or if some version
39 # control system is used.
40 
41 PROJECT_NUMBER =
42 
43 # Using the PROJECT_BRIEF tag one can provide an optional one line description
44 # for a project that appears at the top of each page and should give viewer a
45 # quick idea about the purpose of the project. Keep the description short.
46 
47 PROJECT_BRIEF =
48 
49 # With the PROJECT_LOGO tag one can specify a logo or an icon that is included
50 # in the documentation. The maximum height of the logo should not exceed 55
51 # pixels and the maximum width should not exceed 200 pixels. Doxygen will copy
52 # the logo to the output directory.
53 
54 PROJECT_LOGO = theme/logo-mini.png
55 
56 # The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path
57 # into which the generated documentation will be written. If a relative path is
58 # entered, it will be relative to the location where doxygen was started. If
59 # left blank the current directory will be used.
60 
61 OUTPUT_DIRECTORY = .
62 
63 # If the CREATE_SUBDIRS tag is set to YES then doxygen will create 4096 sub-
64 # directories (in 2 levels) under the output directory of each output format and
65 # will distribute the generated files over these directories. Enabling this
66 # option can be useful when feeding doxygen a large number of source files, where
67 # putting all generated files in the same directory would otherwise cause
68 # performance problems for the file system.
69 # The default value is: NO.
70 
71 CREATE_SUBDIRS = NO
72 
73 # If the ALLOW_UNICODE_NAMES tag is set to YES, doxygen will allow non-ASCII
74 # characters to appear in the names of generated files. If set to NO, non-ASCII
75 # characters will be escaped, for example _xE3_x81_x84 will be used for Unicode
76 # U+3044.
77 # The default value is: NO.
78 
79 ALLOW_UNICODE_NAMES = NO
80 
81 # The OUTPUT_LANGUAGE tag is used to specify the language in which all
82 # documentation generated by doxygen is written. Doxygen will use this
83 # information to generate all constant output in the proper language.
84 # Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese,
85 # Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States),
86 # Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian,
87 # Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages),
88 # Korean, Korean-en (Korean with English messages), Latvian, Lithuanian,
89 # Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian,
90 # Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish,
91 # Ukrainian and Vietnamese.
92 # The default value is: English.
93 
94 OUTPUT_LANGUAGE = English
95 
96 # If the BRIEF_MEMBER_DESC tag is set to YES, doxygen will include brief member
97 # descriptions after the members that are listed in the file and class
98 # documentation (similar to Javadoc). Set to NO to disable this.
99 # The default value is: YES.
100 
101 BRIEF_MEMBER_DESC = YES
102 
103 # If the REPEAT_BRIEF tag is set to YES, doxygen will prepend the brief
104 # description of a member or function before the detailed description
105 #
106 # Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the
107 # brief descriptions will be completely suppressed.
108 # The default value is: YES.
109 
110 REPEAT_BRIEF = YES
111 
112 # This tag implements a quasi-intelligent brief description abbreviator that is
113 # used to form the text in various listings. Each string in this list, if found
114 # as the leading text of the brief description, will be stripped from the text
115 # and the result, after processing the whole list, is used as the annotated
116 # text. Otherwise, the brief description is used as-is. If left blank, the
117 # following values are used ($name is automatically replaced with the name of
118 # the entity):The $name class, The $name widget, The $name file, is, provides,
119 # specifies, contains, represents, a, an and the.
120 
121 ABBREVIATE_BRIEF = "The $name class " \
122  "The $name widget " \
123  "The $name file " \
124  is \
125  provides \
126  specifies \
127  contains \
128  represents \
129  a \
130  an \
131  the
132 
133 # If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then
134 # doxygen will generate a detailed section even if there is only a brief
135 # description.
136 # The default value is: NO.
137 
138 ALWAYS_DETAILED_SEC = NO
139 
140 # If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all
141 # inherited members of a class in the documentation of that class as if those
142 # members were ordinary class members. Constructors, destructors and assignment
143 # operators of the base classes will not be shown.
144 # The default value is: NO.
145 
146 INLINE_INHERITED_MEMB = NO
147 
148 # If the FULL_PATH_NAMES tag is set to YES, doxygen will prepend the full path
149 # before file names in the file list and in the header files. If set to NO, the
150 # shortest path that makes the file name unique will be used.
151 # The default value is: YES.
152 
153 FULL_PATH_NAMES = NO
154 
155 # The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path.
156 # Stripping is only done if one of the specified strings matches the left-hand
157 # part of the path. The tag can be used to show relative paths in the file list.
158 # If left blank the directory from which doxygen is run is used as the path to
159 # strip.
160 #
161 # Note that you can specify absolute paths here, but also relative paths, which
162 # will be relative from the directory where doxygen is started.
163 # This tag requires that the tag FULL_PATH_NAMES is set to YES.
164 
165 STRIP_FROM_PATH = "C:/Documents and Settings/Groove/ "
166 
167 # The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the
168 # path mentioned in the documentation of a class, which tells the reader which
169 # header file to include in order to use a class. If left blank only the name of
170 # the header file containing the class definition is used. Otherwise one should
171 # specify the list of include paths that are normally passed to the compiler
172 # using the -I flag.
173 
174 STRIP_FROM_INC_PATH =
175 
176 # If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but
177 # less readable) file names. This can be useful if your file system doesn't
178 # support long names, as on DOS, Mac, or CD-ROM.
179 # The default value is: NO.
180 
181 SHORT_NAMES = YES
182 
183 # If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the
184 # first line (until the first dot) of a Javadoc-style comment as the brief
185 # description. If set to NO, the Javadoc-style will behave just like regular Qt-
186 # style comments (thus requiring an explicit @brief command for a brief
187 # description.)
188 # The default value is: NO.
189 
190 JAVADOC_AUTOBRIEF = YES
191 
192 # If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first
193 # line (until the first dot) of a Qt-style comment as the brief description. If
194 # set to NO, the Qt-style will behave just like regular Qt-style comments (thus
195 # requiring an explicit \brief command for a brief description.)
196 # The default value is: NO.
197 
198 QT_AUTOBRIEF = NO
199 
200 # The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a
201 # multi-line C++ special comment block (i.e. a block of //! or /// comments) as
202 # a brief description. This used to be the default behavior. The new default is
203 # to treat a multi-line C++ comment block as a detailed description. Set this
204 # tag to YES if you prefer the old behavior instead.
205 #
206 # Note that setting this tag to YES also means that Rational Rose comments are
207 # not recognized any more.
208 # The default value is: NO.
209 
210 MULTILINE_CPP_IS_BRIEF = NO
211 
212 # If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the
213 # documentation from any documented member that it re-implements.
214 # The default value is: YES.
215 
216 INHERIT_DOCS = YES
217 
218 # If the SEPARATE_MEMBER_PAGES tag is set to YES then doxygen will produce a new
219 # page for each member. If set to NO, the documentation of a member will be part
220 # of the file/class/namespace that contains it.
221 # The default value is: NO.
222 
223 SEPARATE_MEMBER_PAGES = NO
224 
225 # The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen
226 # uses this value to replace tabs by spaces in code fragments.
227 # Minimum value: 1, maximum value: 16, default value: 4.
228 
229 TAB_SIZE = 8
230 
231 # This tag can be used to specify a number of aliases that act as commands in
232 # the documentation. An alias has the form:
233 # name=value
234 # For example adding
235 # "sideeffect=@par Side Effects:\n"
236 # will allow you to put the command \sideeffect (or @sideeffect) in the
237 # documentation, which will result in a user-defined paragraph with heading
238 # "Side Effects:". You can put \n's in the value part of an alias to insert
239 # newlines.
240 
241 ALIASES =
242 
243 # This tag can be used to specify a number of word-keyword mappings (TCL only).
244 # A mapping has the form "name=value". For example adding "class=itcl::class"
245 # will allow you to use the command class in the itcl::class meaning.
246 
247 TCL_SUBST =
248 
249 # Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources
250 # only. Doxygen will then generate output that is more tailored for C. For
251 # instance, some of the names that are used will be different. The list of all
252 # members will be omitted, etc.
253 # The default value is: NO.
254 
255 OPTIMIZE_OUTPUT_FOR_C = NO
256 
257 # Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or
258 # Python sources only. Doxygen will then generate output that is more tailored
259 # for that language. For instance, namespaces will be presented as packages,
260 # qualified scopes will look different, etc.
261 # The default value is: NO.
262 
263 OPTIMIZE_OUTPUT_JAVA = NO
264 
265 # Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran
266 # sources. Doxygen will then generate output that is tailored for Fortran.
267 # The default value is: NO.
268 
269 OPTIMIZE_FOR_FORTRAN = NO
270 
271 # Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL
272 # sources. Doxygen will then generate output that is tailored for VHDL.
273 # The default value is: NO.
274 
275 OPTIMIZE_OUTPUT_VHDL = NO
276 
277 # Doxygen selects the parser to use depending on the extension of the files it
278 # parses. With this tag you can assign which parser to use for a given
279 # extension. Doxygen has a built-in mapping, but you can override or extend it
280 # using this tag. The format is ext=language, where ext is a file extension, and
281 # language is one of the parsers supported by doxygen: IDL, Java, Javascript,
282 # C#, C, C++, D, PHP, Objective-C, Python, Fortran (fixed format Fortran:
283 # FortranFixed, free formatted Fortran: FortranFree, unknown formatted Fortran:
284 # Fortran. In the latter case the parser tries to guess whether the code is fixed
285 # or free formatted code, this is the default for Fortran type files), VHDL. For
286 # instance to make doxygen treat .inc files as Fortran files (default is PHP),
287 # and .f files as C (default is Fortran), use: inc=Fortran f=C.
288 #
289 # Note: For files without extension you can use no_extension as a placeholder.
290 #
291 # Note that for custom extensions you also need to set FILE_PATTERNS otherwise
292 # the files are not read by doxygen.
293 
294 EXTENSION_MAPPING =
295 
296 # If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments
297 # according to the Markdown format, which allows for more readable
298 # documentation. See http://daringfireball.net/projects/markdown/ for details.
299 # The output of markdown processing is further processed by doxygen, so you can
300 # mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in
301 # case of backward compatibility issues.
302 # The default value is: YES.
303 
304 MARKDOWN_SUPPORT = YES
305 
306 # When enabled doxygen tries to link words that correspond to documented
307 # classes, or namespaces to their corresponding documentation. Such a link can
308 # be prevented in individual cases by putting a % sign in front of the word or
309 # globally by setting AUTOLINK_SUPPORT to NO.
310 # The default value is: YES.
311 
312 AUTOLINK_SUPPORT = YES
313 
314 # If you use STL classes (i.e. std::string, std::vector, etc.) but do not want
315 # to include (a tag file for) the STL sources as input, then you should set this
316 # tag to YES in order to let doxygen match function declarations and
317 # definitions whose arguments contain STL classes (e.g. func(std::string);
318 # versus func(std::string) {}). This also makes the inheritance and collaboration
319 # diagrams that involve STL classes more complete and accurate.
320 # The default value is: NO.
321 
322 BUILTIN_STL_SUPPORT = NO
323 
324 # If you use Microsoft's C++/CLI language, you should set this option to YES to
325 # enable parsing support.
326 # The default value is: NO.
327 
328 CPP_CLI_SUPPORT = NO
329 
330 # Set the SIP_SUPPORT tag to YES if your project consists of sip (see:
331 # http://www.riverbankcomputing.co.uk/software/sip/intro) sources only. Doxygen
332 # will parse them like normal C++ but will assume all classes use public instead
333 # of private inheritance when no explicit protection keyword is present.
334 # The default value is: NO.
335 
336 SIP_SUPPORT = NO
337 
338 # For Microsoft's IDL there are propget and propput attributes to indicate
339 # getter and setter methods for a property. Setting this option to YES will make
340 # doxygen replace the get and set methods by a property in the documentation.
341 # This will only work if the methods are indeed getting or setting a simple
342 # type. If this is not the case, or you want to show the methods anyway, you
343 # should set this option to NO.
344 # The default value is: YES.
345 
346 IDL_PROPERTY_SUPPORT = YES
347 
348 # If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC
349 # tag is set to YES then doxygen will reuse the documentation of the first
350 # member in the group (if any) for the other members of the group. By default
351 # all members of a group must be documented explicitly.
352 # The default value is: NO.
353 
354 DISTRIBUTE_GROUP_DOC = NO
355 
356 # If one adds a struct or class to a group and this option is enabled, then also
357 # any nested class or struct is added to the same group. By default this option
358 # is disabled and one has to add nested compounds explicitly via \ingroup.
359 # The default value is: NO.
360 
361 GROUP_NESTED_COMPOUNDS = NO
362 
363 # Set the SUBGROUPING tag to YES to allow class member groups of the same type
364 # (for instance a group of public functions) to be put as a subgroup of that
365 # type (e.g. under the Public Functions section). Set it to NO to prevent
366 # subgrouping. Alternatively, this can be done per class using the
367 # \nosubgrouping command.
368 # The default value is: YES.
369 
370 SUBGROUPING = NO
371 
372 # When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions
373 # are shown inside the group in which they are included (e.g. using \ingroup)
374 # instead of on a separate page (for HTML and Man pages) or section (for LaTeX
375 # and RTF).
376 #
377 # Note that this feature does not work in combination with
378 # SEPARATE_MEMBER_PAGES.
379 # The default value is: NO.
380 
381 INLINE_GROUPED_CLASSES = NO
382 
383 # When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions
384 # with only public data fields or simple typedef fields will be shown inline in
385 # the documentation of the scope in which they are defined (i.e. file,
386 # namespace, or group documentation), provided this scope is documented. If set
387 # to NO, structs, classes, and unions are shown on a separate page (for HTML and
388 # Man pages) or section (for LaTeX and RTF).
389 # The default value is: NO.
390 
391 INLINE_SIMPLE_STRUCTS = NO
392 
393 # When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or
394 # enum is documented as struct, union, or enum with the name of the typedef. So
395 # typedef struct TypeS {} TypeT, will appear in the documentation as a struct
396 # with name TypeT. When disabled the typedef will appear as a member of a file,
397 # namespace, or class. And the struct will be named TypeS. This can typically be
398 # useful for C code in case the coding convention dictates that all compound
399 # types are typedef'ed and only the typedef is referenced, never the tag name.
400 # The default value is: NO.
401 
402 TYPEDEF_HIDES_STRUCT = NO
403 
404 # The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This
405 # cache is used to resolve symbols given their name and scope. Since this can be
406 # an expensive process and often the same symbol appears multiple times in the
407 # code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small
408 # doxygen will become slower. If the cache is too large, memory is wasted. The
409 # cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range
410 # is 0..9, the default is 0, corresponding to a cache size of 2^16=65536
411 # symbols. At the end of a run doxygen will report the cache usage and suggest
412 # the optimal cache size from a speed point of view.
413 # Minimum value: 0, maximum value: 9, default value: 0.
414 
415 LOOKUP_CACHE_SIZE = 0
416 
417 #---------------------------------------------------------------------------
418 # Build related configuration options
419 #---------------------------------------------------------------------------
420 
421 # If the EXTRACT_ALL tag is set to YES, doxygen will assume all entities in
422 # documentation are documented, even if no documentation was available. Private
423 # class members and static file members will be hidden unless the
424 # EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES.
425 # Note: This will also disable the warnings about undocumented members that are
426 # normally produced when WARNINGS is set to YES.
427 # The default value is: NO.
428 
429 EXTRACT_ALL = NO
430 
431 # If the EXTRACT_PRIVATE tag is set to YES, all private members of a class will
432 # be included in the documentation.
433 # The default value is: NO.
434 
435 EXTRACT_PRIVATE = NO
436 
437 # If the EXTRACT_PACKAGE tag is set to YES, all members with package or internal
438 # scope will be included in the documentation.
439 # The default value is: NO.
440 
441 EXTRACT_PACKAGE = NO
442 
443 # If the EXTRACT_STATIC tag is set to YES, all static members of a file will be
444 # included in the documentation.
445 # The default value is: NO.
446 
447 EXTRACT_STATIC = YES
448 
449 # If the EXTRACT_LOCAL_CLASSES tag is set to YES, classes (and structs) defined
450 # locally in source files will be included in the documentation. If set to NO,
451 # only classes defined in header files are included. Does not have any effect
452 # for Java sources.
453 # The default value is: YES.
454 
455 EXTRACT_LOCAL_CLASSES = NO
456 
457 # This flag is only useful for Objective-C code. If set to YES, local methods,
458 # which are defined in the implementation section but not in the interface are
459 # included in the documentation. If set to NO, only methods in the interface are
460 # included.
461 # The default value is: NO.
462 
463 EXTRACT_LOCAL_METHODS = NO
464 
465 # If this flag is set to YES, the members of anonymous namespaces will be
466 # extracted and appear in the documentation as a namespace called
467 # 'anonymous_namespace{file}', where file will be replaced with the base name of
468 # the file that contains the anonymous namespace. By default, anonymous
469 # namespaces are hidden.
470 # The default value is: NO.
471 
472 EXTRACT_ANON_NSPACES = NO
473 
474 # If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all
475 # undocumented members inside documented classes or files. If set to NO these
476 # members will be included in the various overviews, but no documentation
477 # section is generated. This option has no effect if EXTRACT_ALL is enabled.
478 # The default value is: NO.
479 
480 HIDE_UNDOC_MEMBERS = YES
481 
482 # If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all
483 # undocumented classes that are normally visible in the class hierarchy. If set
484 # to NO, these classes will be included in the various overviews. This option
485 # has no effect if EXTRACT_ALL is enabled.
486 # The default value is: NO.
487 
488 HIDE_UNDOC_CLASSES = YES
489 
490 # If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend
491 # (class|struct|union) declarations. If set to NO, these declarations will be
492 # included in the documentation.
493 # The default value is: NO.
494 
495 HIDE_FRIEND_COMPOUNDS = YES
496 
497 # If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any
498 # documentation blocks found inside the body of a function. If set to NO, these
499 # blocks will be appended to the function's detailed documentation block.
500 # The default value is: NO.
501 
502 HIDE_IN_BODY_DOCS = YES
503 
504 # The INTERNAL_DOCS tag determines if documentation that is typed after a
505 # \internal command is included. If the tag is set to NO then the documentation
506 # will be excluded. Set it to YES to include the internal documentation.
507 # The default value is: NO.
508 
509 INTERNAL_DOCS = NO
510 
511 # If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file
512 # names in lower-case letters. If set to YES, upper-case letters are also
513 # allowed. This is useful if you have classes or files whose names only differ
514 # in case and if your file system supports case sensitive file names. Windows
515 # and Mac users are advised to set this option to NO.
516 # The default value is: system dependent.
517 
518 CASE_SENSE_NAMES = YES
519 
520 # If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with
521 # their full class and namespace scopes in the documentation. If set to YES, the
522 # scope will be hidden.
523 # The default value is: NO.
524 
525 HIDE_SCOPE_NAMES = YES
526 
527 # If the HIDE_COMPOUND_REFERENCE tag is set to NO (default) then doxygen will
528 # append additional text to a page's title, such as Class Reference. If set to
529 # YES the compound reference will be hidden.
530 # The default value is: NO.
531 
532 HIDE_COMPOUND_REFERENCE= NO
533 
534 # If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of
535 # the files that are included by a file in the documentation of that file.
536 # The default value is: YES.
537 
538 SHOW_INCLUDE_FILES = NO
539 
540 # If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each
541 # grouped member an include statement to the documentation, telling the reader
542 # which file to include in order to use the member.
543 # The default value is: NO.
544 
545 SHOW_GROUPED_MEMB_INC = NO
546 
547 # If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include
548 # files with double quotes in the documentation rather than with sharp brackets.
549 # The default value is: NO.
550 
551 FORCE_LOCAL_INCLUDES = NO
552 
553 # If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the
554 # documentation for inline members.
555 # The default value is: YES.
556 
557 INLINE_INFO = NO
558 
559 # If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the
560 # (detailed) documentation of file and class members alphabetically by member
561 # name. If set to NO, the members will appear in declaration order.
562 # The default value is: YES.
563 
564 SORT_MEMBER_DOCS = YES
565 
566 # If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief
567 # descriptions of file, namespace and class members alphabetically by member
568 # name. If set to NO, the members will appear in declaration order. Note that
569 # this will also influence the order of the classes in the class list.
570 # The default value is: NO.
571 
572 SORT_BRIEF_DOCS = YES
573 
574 # If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the
575 # (brief and detailed) documentation of class members so that constructors and
576 # destructors are listed first. If set to NO the constructors will appear in the
577 # respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS.
578 # Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief
579 # member documentation.
580 # Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting
581 # detailed member documentation.
582 # The default value is: NO.
583 
584 SORT_MEMBERS_CTORS_1ST = NO
585 
586 # If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy
587 # of group names into alphabetical order. If set to NO the group names will
588 # appear in their defined order.
589 # The default value is: NO.
590 
591 SORT_GROUP_NAMES = NO
592 
593 # If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by
594 # fully-qualified names, including namespaces. If set to NO, the class list will
595 # be sorted only by class name, not including the namespace part.
596 # Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.
597 # Note: This option applies only to the class list, not to the alphabetical
598 # list.
599 # The default value is: NO.
600 
601 SORT_BY_SCOPE_NAME = YES
602 
603 # If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper
604 # type resolution of all parameters of a function it will reject a match between
605 # the prototype and the implementation of a member function even if there is
606 # only one candidate or it is obvious which candidate to choose by doing a
607 # simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still
608 # accept a match between prototype and implementation in such cases.
609 # The default value is: NO.
610 
611 STRICT_PROTO_MATCHING = NO
612 
613 # The GENERATE_TODOLIST tag can be used to enable (YES) or disable (NO) the todo
614 # list. This list is created by putting \todo commands in the documentation.
615 # The default value is: YES.
616 
617 GENERATE_TODOLIST = YES
618 
619 # The GENERATE_TESTLIST tag can be used to enable (YES) or disable (NO) the test
620 # list. This list is created by putting \test commands in the documentation.
621 # The default value is: YES.
622 
623 GENERATE_TESTLIST = YES
624 
625 # The GENERATE_BUGLIST tag can be used to enable (YES) or disable (NO) the bug
626 # list. This list is created by putting \bug commands in the documentation.
627 # The default value is: YES.
628 
629 GENERATE_BUGLIST = YES
630 
631 # The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or disable (NO)
632 # the deprecated list. This list is created by putting \deprecated commands in
633 # the documentation.
634 # The default value is: YES.
635 
636 GENERATE_DEPRECATEDLIST= YES
637 
638 # The ENABLED_SECTIONS tag can be used to enable conditional documentation
639 # sections, marked by \if <section_label> ... \endif and \cond <section_label>
640 # ... \endcond blocks.
641 
642 ENABLED_SECTIONS =
643 
644 # The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the
645 # initial value of a variable or macro / define can have for it to appear in the
646 # documentation. If the initializer consists of more lines than specified here
647 # it will be hidden. Use a value of 0 to hide initializers completely. The
648 # appearance of the value of individual variables and macros / defines can be
649 # controlled using \showinitializer or \hideinitializer command in the
650 # documentation regardless of this setting.
651 # Minimum value: 0, maximum value: 10000, default value: 30.
652 
653 MAX_INITIALIZER_LINES = 30
654 
655 # Set the SHOW_USED_FILES tag to NO to disable the list of files generated at
656 # the bottom of the documentation of classes and structs. If set to YES, the
657 # list will mention the files that were used to generate the documentation.
658 # The default value is: YES.
659 
660 SHOW_USED_FILES = NO
661 
662 # Set the SHOW_FILES tag to NO to disable the generation of the Files page. This
663 # will remove the Files entry from the Quick Index and from the Folder Tree View
664 # (if specified).
665 # The default value is: YES.
666 
667 SHOW_FILES = YES
668 
669 # Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces
670 # page. This will remove the Namespaces entry from the Quick Index and from the
671 # Folder Tree View (if specified).
672 # The default value is: YES.
673 
674 SHOW_NAMESPACES = YES
675 
676 # The FILE_VERSION_FILTER tag can be used to specify a program or script that
677 # doxygen should invoke to get the current version for each file (typically from
678 # the version control system). Doxygen will invoke the program by executing (via
679 # popen()) the command <command> <input-file>, where <command> is the value of
680 # the FILE_VERSION_FILTER tag, and <input-file> is the name of an input file
681 # provided by doxygen. Whatever the program writes to standard output is used as
682 # the file version. For an example see the documentation.
683 
684 FILE_VERSION_FILTER =
685 
686 # The LAYOUT_FILE tag can be used to specify a layout file which will be parsed
687 # by doxygen. The layout file controls the global structure of the generated
688 # output files in an output format independent way. To create the layout file
689 # that represents doxygen's defaults, run doxygen with the -l option. You can
690 # optionally specify a file name after the option, if omitted DoxygenLayout.xml
691 # will be used as the name of the layout file.
692 #
693 # Note that if you run doxygen from a directory containing a file called
694 # DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE
695 # tag is left empty.
696 
697 LAYOUT_FILE =
698 
699 # The CITE_BIB_FILES tag can be used to specify one or more bib files containing
700 # the reference definitions. This must be a list of .bib files. The .bib
701 # extension is automatically appended if omitted. This requires the bibtex tool
702 # to be installed. See also http://en.wikipedia.org/wiki/BibTeX for more info.
703 # For LaTeX the style of the bibliography can be controlled using
# LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the
# search path. See also \cite for info how to create references.

CITE_BIB_FILES =

#---------------------------------------------------------------------------
# Configuration options related to warning and progress messages
#---------------------------------------------------------------------------

# The QUIET tag can be used to turn on/off the messages that are generated to
# standard output by doxygen. If QUIET is set to YES this implies that the
# messages are off.
# The default value is: NO.

QUIET = NO

# The WARNINGS tag can be used to turn on/off the warning messages that are
# generated to standard error (stderr) by doxygen. If WARNINGS is set to YES
# this implies that the warnings are on.
#
# Tip: Turn warnings on while writing the documentation.
# The default value is: YES.

WARNINGS = YES

# If the WARN_IF_UNDOCUMENTED tag is set to YES then doxygen will generate
# warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag
# will automatically be disabled.
# The default value is: YES.

WARN_IF_UNDOCUMENTED = YES

# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for
# potential errors in the documentation, such as not documenting some parameters
# in a documented function, or documenting parameters that don't exist or using
# markup commands wrongly.
# The default value is: YES.

WARN_IF_DOC_ERROR = YES

# The WARN_NO_PARAMDOC option can be enabled to get warnings for functions that
# are documented, but have no documentation for their parameters or return
# value. If set to NO, doxygen will only warn about wrong or incomplete
# parameter documentation, but not about the absence of documentation.
# The default value is: NO.

WARN_NO_PARAMDOC = NO

# The WARN_FORMAT tag determines the format of the warning messages that doxygen
# can produce. The string should contain the $file, $line, and $text tags, which
# will be replaced by the file and line number from which the warning originated
# and the warning text. Optionally the format may contain $version, which will
# be replaced by the version of the file (if it could be obtained via
# FILE_VERSION_FILTER).
# The default value is: $file:$line: $text.

WARN_FORMAT = "$file:$line: $text"

# The WARN_LOGFILE tag can be used to specify a file to which warning and error
# messages should be written. If left blank the output is written to standard
# error (stderr).

WARN_LOGFILE =

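# As a hypothetical example (not part of this configuration), an IDE-style
# format like the one below makes warnings clickable in editors that parse
# "file(line)" style output:
#
# WARN_FORMAT = "$file($line): $text"
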
#---------------------------------------------------------------------------
# Configuration options related to the input files
#---------------------------------------------------------------------------

# The INPUT tag is used to specify the files and/or directories that contain
# documented source files. You may enter file names like myfile.cpp or
# directories like /usr/src/myproject. Separate the files or directories with
# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING.
# Note: If this tag is empty the current directory is searched.

INPUT = ../glm \
 .

# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
# libiconv (or the iconv built into libc) for the transcoding. See the libiconv
# documentation (see: http://www.gnu.org/software/libiconv) for the list of
# possible encodings.
# The default value is: UTF-8.

INPUT_ENCODING = UTF-8

# If the value of the INPUT tag contains directories, you can use the
# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
# *.h) to filter out the source-files in the directories.
#
# Note that for custom extensions or not directly supported extensions you also
# need to set EXTENSION_MAPPING for the extension otherwise the files are not
# read by doxygen.
#
# If left blank the following patterns are tested: *.c, *.cc, *.cxx, *.cpp,
# *.c++, *.java, *.ii, *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h,
# *.hh, *.hxx, *.hpp, *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc,
# *.m, *.markdown, *.md, *.mm, *.dox, *.py, *.f90, *.f, *.for, *.tcl, *.vhd,
# *.vhdl, *.ucf, *.qsf, *.as and *.js.

FILE_PATTERNS = *.hpp \
 *.doxy

# The RECURSIVE tag can be used to specify whether or not subdirectories should
# be searched for input files as well.
# The default value is: NO.

RECURSIVE = YES

# The EXCLUDE tag can be used to specify files and/or directories that should be
# excluded from the INPUT source files. This way you can easily exclude a
# subdirectory from a directory tree whose root is specified with the INPUT tag.
#
# Note that relative paths are relative to the directory from which doxygen is
# run.

EXCLUDE =

# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or
# directories that are symbolic links (a Unix file system feature) are excluded
# from the input.
# The default value is: NO.

EXCLUDE_SYMLINKS = NO

# If the value of the INPUT tag contains directories, you can use the
# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude
# certain files from those directories.
#
# Note that the wildcards are matched against the file with absolute path, so to
# exclude all test directories for example use the pattern */test/*

EXCLUDE_PATTERNS =

# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names
# (namespaces, classes, functions, etc.) that should be excluded from the
# output. The symbol name can be a fully qualified name, a word, or if the
# wildcard * is used, a substring. Examples: ANamespace, AClass,
# AClass::ANamespace, ANamespace::*Test
#
# Note that the wildcards are matched against the file with absolute path, so to
# exclude all test directories use the pattern */test/*

EXCLUDE_SYMBOLS =

# The EXAMPLE_PATH tag can be used to specify one or more files or directories
# that contain example code fragments that are included (see the \include
# command).

EXAMPLE_PATH =

# If the value of the EXAMPLE_PATH tag contains directories, you can use the
# EXAMPLE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
# *.h) to filter out the source-files in the directories. If left blank all
# files are included.

EXAMPLE_PATTERNS = *

# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
# searched for input files to be used with the \include or \dontinclude commands
# irrespective of the value of the RECURSIVE tag.
# The default value is: NO.

EXAMPLE_RECURSIVE = NO

# The IMAGE_PATH tag can be used to specify one or more files or directories
# that contain images that are to be included in the documentation (see the
# \image command).

IMAGE_PATH =

# The INPUT_FILTER tag can be used to specify a program that doxygen should
# invoke to filter for each input file. Doxygen will invoke the filter program
# by executing (via popen()) the command:
#
# <filter> <input-file>
#
# where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the
# name of an input file. Doxygen will then use the output that the filter
# program writes to standard output. If FILTER_PATTERNS is specified, this tag
# will be ignored.
#
# Note that the filter must not add or remove lines; it is applied before the
# code is scanned, but not when the output code is generated. If lines are added
# or removed, the anchors will not be placed correctly.

INPUT_FILTER =

# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern
# basis. Doxygen will compare the file name with each pattern and apply the
# filter if there is a match. The filters are a list of the form: pattern=filter
# (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how
# filters are used. If the FILTER_PATTERNS tag is empty or if none of the
# patterns match the file name, INPUT_FILTER is applied.

FILTER_PATTERNS =

# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using
# INPUT_FILTER) will also be used to filter the input files that are used for
# producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES).
# The default value is: NO.

FILTER_SOURCE_FILES = NO

# The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file
# pattern. A pattern will override the setting for FILTER_PATTERNS (if any) and
# it is also possible to disable source filtering for a specific pattern using
# *.ext= (so without naming a filter).
# This tag requires that the tag FILTER_SOURCE_FILES is set to YES.

FILTER_SOURCE_PATTERNS =

# If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that
# is part of the input, its contents will be placed on the main page
# (index.html). This can be useful if you have a project on, for instance,
# GitHub and want to reuse the introduction page also for the doxygen output.

USE_MDFILE_AS_MAINPAGE =
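# As a hypothetical example, an implementation namespace could be hidden from
# the output like this (the glm::detail name is only an illustration, not part
# of this configuration):
#
# EXCLUDE_SYMBOLS = glm::detail::*
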
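# A per-pattern filter assignment could look like the sketch below (the filter
# script path is a placeholder, not part of this configuration):
#
# FILTER_PATTERNS = *.md=/usr/local/bin/markdown_filter
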

#---------------------------------------------------------------------------
# Configuration options related to source browsing
#---------------------------------------------------------------------------

# If the SOURCE_BROWSER tag is set to YES then a list of source files will be
# generated. Documented entities will be cross-referenced with these sources.
#
# Note: To get rid of all source code in the generated output, make sure that
# also VERBATIM_HEADERS is set to NO.
# The default value is: NO.

SOURCE_BROWSER = YES

# Setting the INLINE_SOURCES tag to YES will include the body of functions,
# classes and enums directly into the documentation.
# The default value is: NO.

INLINE_SOURCES = NO

# Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any
# special comment blocks from generated source code fragments. Normal C, C++ and
# Fortran comments will always remain visible.
# The default value is: YES.

STRIP_CODE_COMMENTS = YES

# If the REFERENCED_BY_RELATION tag is set to YES then for each documented
# function all documented functions referencing it will be listed.
# The default value is: NO.

REFERENCED_BY_RELATION = YES

# If the REFERENCES_RELATION tag is set to YES then for each documented function
# all documented entities called/used by that function will be listed.
# The default value is: NO.

REFERENCES_RELATION = YES

# If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set
# to YES then the hyperlinks from functions in REFERENCES_RELATION and
# REFERENCED_BY_RELATION lists will link to the source code. Otherwise they will
# link to the documentation.
# The default value is: YES.

REFERENCES_LINK_SOURCE = YES

# If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the
# source code will show a tooltip with additional information such as prototype,
# brief description and links to the definition and documentation. Since this
# will make the HTML file larger and loading of large files a bit slower, you
# can opt to disable this feature.
# The default value is: YES.
# This tag requires that the tag SOURCE_BROWSER is set to YES.

SOURCE_TOOLTIPS = YES

# If the USE_HTAGS tag is set to YES then the references to source code will
# point to the HTML generated by the htags(1) tool instead of doxygen's built-in
# source browser. The htags tool is part of GNU's global source tagging system
# (see http://www.gnu.org/software/global/global.html). You will need version
# 4.8.6 or higher.
#
# To use it do the following:
# - Install the latest version of global
# - Enable SOURCE_BROWSER and USE_HTAGS in the config file
# - Make sure the INPUT points to the root of the source tree
# - Run doxygen as normal
#
# Doxygen will invoke htags (and that will in turn invoke gtags), so these
# tools must be available from the command line (i.e. in the search path).
#
# The result: instead of the source browser generated by doxygen, the links to
# source code will now point to the output of htags.
# The default value is: NO.
# This tag requires that the tag SOURCE_BROWSER is set to YES.

USE_HTAGS = NO

# If the VERBATIM_HEADERS tag is set to YES then doxygen will generate a
# verbatim copy of the header file for each class for which an include is
# specified. Set to NO to disable this.
# See also: Section \class.
# The default value is: YES.

VERBATIM_HEADERS = YES

# If the CLANG_ASSISTED_PARSING tag is set to YES then doxygen will use the
# clang parser (see: http://clang.llvm.org/) for more accurate parsing at the
# cost of reduced performance. This can be particularly helpful with template
# rich C++ code for which doxygen's built-in parser lacks the necessary type
# information.
# Note: The availability of this option depends on whether or not doxygen was
# compiled with the --with-libclang option.
# The default value is: NO.

CLANG_ASSISTED_PARSING = NO

# If clang assisted parsing is enabled you can provide the compiler with command
# line options that you would normally use when invoking the compiler. Note that
# the include paths will already be set by doxygen for the files and directories
# specified with INPUT and INCLUDE_PATH.
# This tag requires that the tag CLANG_ASSISTED_PARSING is set to YES.

CLANG_OPTIONS =
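# As a sketch: if a readme were part of INPUT, it could serve as the main page
# (the file name below is an assumption, not part of this configuration):
#
# USE_MDFILE_AS_MAINPAGE = README.md
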

#---------------------------------------------------------------------------
# Configuration options related to the alphabetical class index
#---------------------------------------------------------------------------

# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all
# compounds will be generated. Enable this if the project contains a lot of
# classes, structs, unions or interfaces.
# The default value is: YES.

ALPHABETICAL_INDEX = NO

# The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in
# which the alphabetical index list will be split.
# Minimum value: 1, maximum value: 20, default value: 5.
# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.

COLS_IN_ALPHA_INDEX = 5

# In case all classes in a project start with a common prefix, all classes will
# be put under the same header in the alphabetical index. The IGNORE_PREFIX tag
# can be used to specify a prefix (or a list of prefixes) that should be ignored
# while generating the index headers.
# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.

IGNORE_PREFIX =

#---------------------------------------------------------------------------
# Configuration options related to the HTML output
#---------------------------------------------------------------------------

# If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output.
# The default value is: YES.

GENERATE_HTML = YES

# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
# it.
# The default directory is: html.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_OUTPUT = html

# The HTML_FILE_EXTENSION tag can be used to specify the file extension for each
# generated HTML page (for example: .htm, .php, .asp).
# The default value is: .html.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_FILE_EXTENSION = .html

# The HTML_HEADER tag can be used to specify a user-defined HTML header file for
# each generated HTML page. If the tag is left blank doxygen will generate a
# standard header.
#
# To get valid HTML, the header file must include any scripts and style sheets
# that doxygen needs, which depend on the configuration options used (e.g. the
# setting GENERATE_TREEVIEW). It is highly recommended to start with a default
# header using
# doxygen -w html new_header.html new_footer.html new_stylesheet.css
# YourConfigFile
# and then modify the file new_header.html. See also section "Doxygen usage"
# for information on how to generate the default header that doxygen normally
# uses.
# Note: The header is subject to change so you typically have to regenerate the
# default header when upgrading to a newer version of doxygen. For a description
# of the possible markers and block names see the documentation.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_HEADER =

# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each
# generated HTML page. If the tag is left blank doxygen will generate a standard
# footer. See HTML_HEADER for more information on how to generate a default
# footer and what special commands can be used inside the footer. See also
# section "Doxygen usage" for information on how to generate the default footer
# that doxygen normally uses.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_FOOTER =

# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style
# sheet that is used by each HTML page. It can be used to fine-tune the look of
# the HTML output. If left blank doxygen will generate a default style sheet.
# See also section "Doxygen usage" for information on how to generate the style
# sheet that doxygen normally uses.
# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as
# it is more robust and this tag (HTML_STYLESHEET) will in the future become
# obsolete.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_STYLESHEET =

# The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined
# cascading style sheets that are included after the standard style sheets
# created by doxygen. Using this option one can overrule certain style aspects.
# This is preferred over using HTML_STYLESHEET since it does not replace the
# standard style sheet and is therefore more robust against future updates.
# Doxygen will copy the style sheet files to the output directory.
# Note: The order of the extra style sheet files is of importance (e.g. the last
# style sheet in the list overrules the setting of the previous ones in the
# list). For an example see the documentation.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_EXTRA_STYLESHEET =

# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or
# other source files which should be copied to the HTML output directory. Note
# that these files will be copied to the base HTML output directory. Use the
# $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these
# files. In the HTML_STYLESHEET file, use the file name only. Also note that the
# files will be copied as-is; there are no commands or markers available.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_EXTRA_FILES =

# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen
# will adjust the colors in the style sheet and background images according to
# this color. Hue is specified as an angle on a colorwheel, see
# http://en.wikipedia.org/wiki/Hue for more information. For instance the value
# 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300
# purple, and 360 is red again.
# Minimum value: 0, maximum value: 359, default value: 220.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_COLORSTYLE_HUE = 220

# The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors
# in the HTML output. For a value of 0 the output will use grayscales only. A
# value of 255 will produce the most vivid colors.
# Minimum value: 0, maximum value: 255, default value: 100.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_COLORSTYLE_SAT = 100

# The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the
# luminance component of the colors in the HTML output. Values below 100
# gradually make the output lighter, whereas values above 100 make the output
# darker. The value divided by 100 is the actual gamma applied, so 80 represents
# a gamma of 0.8, the value 220 represents a gamma of 2.2, and 100 does not
# change the gamma.
# Minimum value: 40, maximum value: 240, default value: 80.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_COLORSTYLE_GAMMA = 80

# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML
# page will contain the date and time when the page was generated. Setting this
# to YES can help to show when doxygen was last run and thus if the
# documentation is up to date.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_TIMESTAMP = NO

# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML
# documentation will contain sections that can be hidden and shown after the
# page has loaded.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_DYNAMIC_SECTIONS = NO

# With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries
# shown in the various tree structured indices initially; the user can expand
# and collapse entries dynamically later on. Doxygen will expand the tree to
# such a level that at most the specified number of entries are visible (unless
# a fully collapsed tree already exceeds this amount). So setting the number of
# entries to 1 will produce a fully collapsed tree by default. 0 is a special
# value representing an infinite number of entries and will result in a fully
# expanded tree by default.
# Minimum value: 0, maximum value: 9999, default value: 100.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_INDEX_NUM_ENTRIES = 100

# If the GENERATE_DOCSET tag is set to YES, additional index files will be
# generated that can be used as input for Apple's Xcode 3 integrated development
# environment (see: http://developer.apple.com/tools/xcode/), introduced with
# OSX 10.5 (Leopard). To create a documentation set, doxygen will generate a
# Makefile in the HTML output directory. Running make will produce the docset in
# that directory and running make install will install the docset in
# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at
# startup. See http://developer.apple.com/tools/creatingdocsetswithdoxygen.html
# for more information.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_DOCSET = NO

# This tag determines the name of the docset feed. A documentation feed provides
# an umbrella under which multiple documentation sets from a single provider
# (such as a company or product suite) can be grouped.
# The default value is: Doxygen generated docs.
# This tag requires that the tag GENERATE_DOCSET is set to YES.

DOCSET_FEEDNAME = "Doxygen generated docs"

# This tag specifies a string that should uniquely identify the documentation
# set bundle. This should be a reverse domain-name style string, e.g.
# com.mycompany.MyDocSet. Doxygen will append .docset to the name.
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_DOCSET is set to YES.

DOCSET_BUNDLE_ID = org.doxygen.Project

# The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify
# the documentation publisher. This should be a reverse domain-name style
# string, e.g. com.mycompany.MyDocSet.documentation.
# The default value is: org.doxygen.Publisher.
# This tag requires that the tag GENERATE_DOCSET is set to YES.

DOCSET_PUBLISHER_ID = org.doxygen.Publisher

# The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher.
# The default value is: Publisher.
# This tag requires that the tag GENERATE_DOCSET is set to YES.

DOCSET_PUBLISHER_NAME = Publisher

# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three
# additional HTML index files: index.hhp, index.hhc, and index.hhk. The
# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop
# (see: http://www.microsoft.com/en-us/download/details.aspx?id=21138) on
# Windows.
#
# The HTML Help Workshop contains a compiler that can convert all HTML output
# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML
# files are now used as the Windows 98 help format, and will replace the old
# Windows help format (.hlp) on all Windows platforms in the future. Compressed
# HTML files also contain an index, a table of contents, and you can search for
# words in the documentation. The HTML workshop also contains a viewer for
# compressed HTML files.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_HTMLHELP = NO

# The CHM_FILE tag can be used to specify the file name of the resulting .chm
# file. You can add a path in front of the file if the result should not be
# written to the html output directory.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

CHM_FILE =

# The HHC_LOCATION tag can be used to specify the location (absolute path
# including file name) of the HTML help compiler (hhc.exe). If non-empty,
# doxygen will try to run the HTML help compiler on the generated index.hhp.
# The file has to be specified with full path.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

HHC_LOCATION =

# The GENERATE_CHI flag controls if a separate .chi index file is generated
# (YES) or that it should be included in the master .chm file (NO).
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

GENERATE_CHI = NO

# The CHM_INDEX_ENCODING is used to encode HtmlHelp index (hhk), content (hhc)
# and project file content.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

CHM_INDEX_ENCODING =

# The BINARY_TOC flag controls whether a binary table of contents is generated
# (YES) or a normal table of contents (NO) in the .chm file. Furthermore it
# enables the Previous and Next buttons.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

BINARY_TOC = NO

# The TOC_EXPAND flag can be set to YES to add extra items for group members to
# the table of contents of the HTML help documentation and to the tree view.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

TOC_EXPAND = NO

# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and
# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that
# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help
# (.qch) of the generated HTML documentation.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_QHP = NO

# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify
# the file name of the resulting .qch file. The path specified is relative to
# the HTML output folder.
# This tag requires that the tag GENERATE_QHP is set to YES.

QCH_FILE =

# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help
# Project output. For more information please see Qt Help Project / Namespace
# (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#namespace).
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_NAMESPACE = org.doxygen.Project

# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt
# Help Project output. For more information please see Qt Help Project / Virtual
# Folders (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#virtual-
# folders).
# The default value is: doc.
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_VIRTUAL_FOLDER = doc

# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom
# filter to add. For more information please see Qt Help Project / Custom
# Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-
# filters).
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_CUST_FILTER_NAME =

# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the
# custom filter to add. For more information please see Qt Help Project / Custom
# Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-
# filters).
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_CUST_FILTER_ATTRS =

# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this
# project's filter section matches. See Qt Help Project / Filter Attributes
# (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#filter-attributes).
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_SECT_FILTER_ATTRS =

# The QHG_LOCATION tag can be used to specify the location of Qt's
# qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the
# generated .qhp file.
# This tag requires that the tag GENERATE_QHP is set to YES.

QHG_LOCATION =

# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be
# generated that, together with the HTML files, form an Eclipse help plugin. To
# install this plugin and make it available under the help contents menu in
# Eclipse, the contents of the directory containing the HTML and XML files need
# to be copied into the plugins directory of Eclipse. The name of the directory
# within the plugins directory should be the same as the ECLIPSE_DOC_ID value.
# After copying, Eclipse needs to be restarted before the help appears.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_ECLIPSEHELP = NO

# A unique identifier for the Eclipse help plugin. When installing the plugin
# the directory name containing the HTML and XML files should also have this
# name. Each documentation set should have its own identifier.
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES.

ECLIPSE_DOC_ID = org.doxygen.Project

# If you want full control over the layout of the generated HTML pages it might
# be necessary to disable the index and replace it with your own. The
# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top
# of each HTML page. A value of NO enables the index and the value YES disables
# it. Since the tabs in the index contain the same information as the navigation
# tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

DISABLE_INDEX = NO

# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index
# structure should be generated to display hierarchical information. If the tag
# value is set to YES, a side panel will be generated containing a tree-like
# index structure (just like the one that is generated for HTML Help). For this
# to work a browser that supports JavaScript, DHTML, CSS and frames is required
# (i.e. any modern browser). Windows users are probably better off using the
# HTML help feature. Via custom style sheets (see HTML_EXTRA_STYLESHEET) one can
# further fine-tune the look of the index. As an example, the default style
# sheet generated by doxygen has an example that shows how to put an image at
# the root of the tree instead of the PROJECT_NAME. Since the tree basically has
# the same information as the tab index, you could consider setting
# DISABLE_INDEX to YES when enabling this option.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_TREEVIEW = NO

# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that
# doxygen will group on one line in the generated HTML documentation.
#
# Note that a value of 0 will completely suppress the enum values from appearing
# in the overview section.
# Minimum value: 0, maximum value: 20, default value: 4.
# This tag requires that the tag GENERATE_HTML is set to YES.

ENUM_VALUES_PER_LINE = 4

# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used
# to set the initial width (in pixels) of the frame in which the tree is shown.
# Minimum value: 0, maximum value: 1500, default value: 250.
# This tag requires that the tag GENERATE_HTML is set to YES.

TREEVIEW_WIDTH = 250

# If the EXT_LINKS_IN_WINDOW option is set to YES, doxygen will open links to
# external symbols imported via tag files in a separate window.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

EXT_LINKS_IN_WINDOW = NO

# Use this tag to change the font size of LaTeX formulas included as images in
# the HTML documentation. When you change the font size after a successful
# doxygen run you need to manually remove any form_*.png images from the HTML
# output directory to force them to be regenerated.
# Minimum value: 8, maximum value: 50, default value: 10.
# This tag requires that the tag GENERATE_HTML is set to YES.
1449 
1450 FORMULA_FONTSIZE = 10
1451 
# Use the FORMULA_TRANSPARENT tag to determine whether or not the images
# generated for formulas are transparent PNGs. Transparent PNGs are not
# supported properly by IE 6.0, but are supported by all modern browsers.
#
# Note that when changing this option you need to delete any form_*.png files in
# the HTML output directory before the changes take effect.
# The default value is: YES.
# This tag requires that the tag GENERATE_HTML is set to YES.

FORMULA_TRANSPARENT = YES

# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see
# http://www.mathjax.org) which uses client-side JavaScript for the rendering
# instead of using pre-rendered bitmaps. Use this if you do not have LaTeX
# installed or if you want the formulas to look prettier in the HTML output.
# When enabled you may also need to install MathJax separately and configure
# the path to it using the MATHJAX_RELPATH option.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

USE_MATHJAX = NO

# When MathJax is enabled you can set the default output format to be used for
# the MathJax output. See the MathJax site (see:
# http://docs.mathjax.org/en/latest/output.html) for more details.
# Possible values are: HTML-CSS (which is slower, but has the best
# compatibility), NativeMML (i.e. MathML) and SVG.
# The default value is: HTML-CSS.
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_FORMAT = HTML-CSS

# When MathJax is enabled you need to specify the location relative to the HTML
# output directory using the MATHJAX_RELPATH option. The destination directory
# should contain the MathJax.js script. For instance, if the mathjax directory
# is located at the same level as the HTML output directory, then
# MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax
# Content Delivery Network so you can quickly see the result without installing
# MathJax. However, it is strongly recommended to install a local copy of
# MathJax from http://www.mathjax.org before deployment.
# The default value is: http://cdn.mathjax.org/mathjax/latest.
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_RELPATH = http://www.mathjax.org/mathjax

# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax
# extension names that should be enabled during MathJax rendering. For example
# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_EXTENSIONS =

# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces
# of code that will be used on startup of the MathJax code. See the MathJax site
# (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an
# example see the documentation.
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_CODEFILE =

# When the SEARCHENGINE tag is enabled doxygen will generate a search box for
# the HTML output. The underlying search engine uses javascript and DHTML and
# should work on any modern browser. Note that when using HTML help
# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)
# there is already a search function so this one should typically be disabled.
# For large projects the javascript based search engine can be slow, then
# enabling SERVER_BASED_SEARCH may provide a better solution. It is possible to
# search using the keyboard; to jump to the search box use <access key> + S
# (what the <access key> is depends on the OS and browser, but it is typically
# <CTRL>, <ALT>/<option>, or both). Inside the search box use the <cursor down
# key> to jump into the search results window, the results can be navigated
# using the <cursor keys>. Press <Enter> to select an item or <escape> to cancel
# the search. The filter options can be selected when the cursor is inside the
# search box by pressing <Shift>+<cursor down>. Also here use the <cursor keys>
# to select a filter and <Enter> or <escape> to activate or cancel the filter
# option.
# The default value is: YES.
# This tag requires that the tag GENERATE_HTML is set to YES.

SEARCHENGINE = YES

# When the SERVER_BASED_SEARCH tag is enabled the search engine will be
# implemented using a web server instead of a web client using Javascript. There
# are two flavors of web server based searching depending on the EXTERNAL_SEARCH
# setting. When disabled, doxygen will generate a PHP script for searching and
# an index file used by the script. When EXTERNAL_SEARCH is enabled the indexing
# and searching needs to be provided by external tools. See the section
# "External Indexing and Searching" for details.
# The default value is: NO.
# This tag requires that the tag SEARCHENGINE is set to YES.

SERVER_BASED_SEARCH = NO

# When EXTERNAL_SEARCH tag is enabled doxygen will no longer generate the PHP
# script for searching. Instead the search results are written to an XML file
# which needs to be processed by an external indexer. Doxygen will invoke an
# external search engine pointed to by the SEARCHENGINE_URL option to obtain the
# search results.
#
# Doxygen ships with an example indexer (doxyindexer) and search engine
# (doxysearch.cgi) which are based on the open source search engine library
# Xapian (see: http://xapian.org/).
#
# See the section "External Indexing and Searching" for details.
# The default value is: NO.
# This tag requires that the tag SEARCHENGINE is set to YES.

EXTERNAL_SEARCH = NO

# The SEARCHENGINE_URL should point to a search engine hosted by a web server
# which will return the search results when EXTERNAL_SEARCH is enabled.
#
# Doxygen ships with an example indexer (doxyindexer) and search engine
# (doxysearch.cgi) which are based on the open source search engine library
# Xapian (see: http://xapian.org/). See the section "External Indexing and
# Searching" for details.
# This tag requires that the tag SEARCHENGINE is set to YES.

SEARCHENGINE_URL =

# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the unindexed
# search data is written to a file for indexing by an external tool. With the
# SEARCHDATA_FILE tag the name of this file can be specified.
# The default file is: searchdata.xml.
# This tag requires that the tag SEARCHENGINE is set to YES.

SEARCHDATA_FILE = searchdata.xml

# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the
# EXTERNAL_SEARCH_ID tag can be used as an identifier for the project. This is
# useful in combination with EXTRA_SEARCH_MAPPINGS to search through multiple
# projects and redirect the results back to the right project.
# This tag requires that the tag SEARCHENGINE is set to YES.

EXTERNAL_SEARCH_ID =

# The EXTRA_SEARCH_MAPPINGS tag can be used to enable searching through doxygen
# projects other than the one defined by this configuration file, but that are
# all added to the same external search index. Each project needs to have a
# unique id set via EXTERNAL_SEARCH_ID. The search mapping then maps the id of
# each project to a relative location where the documentation can be found.
# The format is:
# EXTRA_SEARCH_MAPPINGS = tagname1=loc1 tagname2=loc2 ...
# This tag requires that the tag SEARCHENGINE is set to YES.

EXTRA_SEARCH_MAPPINGS =
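
# As an illustrative sketch (the project id and path below are hypothetical,
# not part of this configuration), two doxygen projects sharing one external
# search index could be linked like this:
#
# EXTERNAL_SEARCH_ID    = glm
# EXTRA_SEARCH_MAPPINGS = otherproject=../otherproject/html
#
# With such a mapping, search hits belonging to the project whose id is
# "otherproject" are redirected to its HTML output at the given relative
# location.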

#---------------------------------------------------------------------------
# Configuration options related to the LaTeX output
#---------------------------------------------------------------------------

# If the GENERATE_LATEX tag is set to YES, doxygen will generate LaTeX output.
# The default value is: YES.

GENERATE_LATEX = NO

# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
# it.
# The default directory is: latex.
# This tag requires that the tag GENERATE_LATEX is set to YES.

LATEX_OUTPUT = latex

# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be
# invoked.
#
# Note that when enabling USE_PDFLATEX this option is only used for generating
# bitmaps for formulas in the HTML output, but not in the Makefile that is
# written to the output directory.
# The default file is: latex.
# This tag requires that the tag GENERATE_LATEX is set to YES.

LATEX_CMD_NAME = latex

# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to generate
# index for LaTeX.
# The default file is: makeindex.
# This tag requires that the tag GENERATE_LATEX is set to YES.

MAKEINDEX_CMD_NAME = makeindex

# If the COMPACT_LATEX tag is set to YES, doxygen generates more compact LaTeX
# documents. This may be useful for small projects and may help to save some
# trees in general.
# The default value is: NO.
# This tag requires that the tag GENERATE_LATEX is set to YES.

COMPACT_LATEX = NO

# The PAPER_TYPE tag can be used to set the paper type that is used by the
# printer.
# Possible values are: a4 (210 x 297 mm), letter (8.5 x 11 inches), legal (8.5 x
# 14 inches) and executive (7.25 x 10.5 inches).
# The default value is: a4.
# This tag requires that the tag GENERATE_LATEX is set to YES.

PAPER_TYPE = a4

# The EXTRA_PACKAGES tag can be used to specify one or more LaTeX package names
# that should be included in the LaTeX output. The package can be specified just
# by its name or with the correct syntax as to be used with the LaTeX
# \usepackage command. To get the times font for instance you can specify :
# EXTRA_PACKAGES=times or EXTRA_PACKAGES={times}
# To use the option intlimits with the amsmath package you can specify:
# EXTRA_PACKAGES=[intlimits]{amsmath}
# If left blank no extra packages will be included.
# This tag requires that the tag GENERATE_LATEX is set to YES.

EXTRA_PACKAGES =

# The LATEX_HEADER tag can be used to specify a personal LaTeX header for the
# generated LaTeX document. The header should contain everything until the first
# chapter. If it is left blank doxygen will generate a standard header. See
# section "Doxygen usage" for information on how to let doxygen write the
# default header to a separate file.
#
# Note: Only use a user-defined header if you know what you are doing! The
# following commands have a special meaning inside the header: $title,
# $datetime, $date, $doxygenversion, $projectname, $projectnumber,
# $projectbrief, $projectlogo. Doxygen will replace $title with the empty
# string, for the replacement values of the other commands the user is referred
# to HTML_HEADER.
# This tag requires that the tag GENERATE_LATEX is set to YES.

LATEX_HEADER =

# The LATEX_FOOTER tag can be used to specify a personal LaTeX footer for the
# generated LaTeX document. The footer should contain everything after the last
# chapter. If it is left blank doxygen will generate a standard footer. See
# LATEX_HEADER for more information on how to generate a default footer and what
# special commands can be used inside the footer.
#
# Note: Only use a user-defined footer if you know what you are doing!
# This tag requires that the tag GENERATE_LATEX is set to YES.

LATEX_FOOTER =

# The LATEX_EXTRA_STYLESHEET tag can be used to specify additional user-defined
# LaTeX style sheets that are included after the standard style sheets created
# by doxygen. Using this option one can overrule certain style aspects. Doxygen
# will copy the style sheet files to the output directory.
# Note: The order of the extra style sheet files is of importance (e.g. the last
# style sheet in the list overrules the setting of the previous ones in the
# list).
# This tag requires that the tag GENERATE_LATEX is set to YES.

LATEX_EXTRA_STYLESHEET =

# The LATEX_EXTRA_FILES tag can be used to specify one or more extra images or
# other source files which should be copied to the LATEX_OUTPUT output
# directory. Note that the files will be copied as-is; there are no commands or
# markers available.
# This tag requires that the tag GENERATE_LATEX is set to YES.

LATEX_EXTRA_FILES =

# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated is
# prepared for conversion to PDF (using ps2pdf or pdflatex). The PDF file will
# contain links (just like the HTML output) instead of page references. This
# makes the output suitable for online browsing using a PDF viewer.
# The default value is: YES.
# This tag requires that the tag GENERATE_LATEX is set to YES.

PDF_HYPERLINKS = NO

# If the USE_PDFLATEX tag is set to YES, doxygen will use pdflatex to generate
# the PDF file directly from the LaTeX files. Set this option to YES to get
# higher quality PDF documentation.
# The default value is: YES.
# This tag requires that the tag GENERATE_LATEX is set to YES.

USE_PDFLATEX = YES

# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \batchmode
# command to the generated LaTeX files. This will instruct LaTeX to keep running
# if errors occur, instead of asking the user for help. This option is also used
# when generating formulas in HTML.
# The default value is: NO.
# This tag requires that the tag GENERATE_LATEX is set to YES.

LATEX_BATCHMODE = NO

# If the LATEX_HIDE_INDICES tag is set to YES then doxygen will not include the
# index chapters (such as File Index, Compound Index, etc.) in the output.
# The default value is: NO.
# This tag requires that the tag GENERATE_LATEX is set to YES.

LATEX_HIDE_INDICES = NO

# If the LATEX_SOURCE_CODE tag is set to YES then doxygen will include source
# code with syntax highlighting in the LaTeX output.
#
# Note that which sources are shown also depends on other settings such as
# SOURCE_BROWSER.
# The default value is: NO.
# This tag requires that the tag GENERATE_LATEX is set to YES.

LATEX_SOURCE_CODE = NO

# The LATEX_BIB_STYLE tag can be used to specify the style to use for the
# bibliography, e.g. plainnat, or ieeetr. See
# http://en.wikipedia.org/wiki/BibTeX and \cite for more info.
# The default value is: plain.
# This tag requires that the tag GENERATE_LATEX is set to YES.

LATEX_BIB_STYLE = plain

#---------------------------------------------------------------------------
# Configuration options related to the RTF output
#---------------------------------------------------------------------------

# If the GENERATE_RTF tag is set to YES, doxygen will generate RTF output. The
# RTF output is optimized for Word 97 and may not look too pretty with other RTF
# readers/editors.
# The default value is: NO.

GENERATE_RTF = NO

# The RTF_OUTPUT tag is used to specify where the RTF docs will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
# it.
# The default directory is: rtf.
# This tag requires that the tag GENERATE_RTF is set to YES.

RTF_OUTPUT = glm.rtf

# If the COMPACT_RTF tag is set to YES, doxygen generates more compact RTF
# documents. This may be useful for small projects and may help to save some
# trees in general.
# The default value is: NO.
# This tag requires that the tag GENERATE_RTF is set to YES.

COMPACT_RTF = NO

# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated will
# contain hyperlink fields. The RTF file will contain links (just like the HTML
# output) instead of page references. This makes the output suitable for online
# browsing using Word or some other Word compatible readers that support those
# fields.
#
# Note: WordPad (write) and others do not support links.
# The default value is: NO.
# This tag requires that the tag GENERATE_RTF is set to YES.

RTF_HYPERLINKS = YES

# Load stylesheet definitions from file. Syntax is similar to doxygen's config
# file, i.e. a series of assignments. You only have to provide replacements,
# missing definitions are set to their default value.
#
# See also section "Doxygen usage" for information on how to generate the
# default style sheet that doxygen normally uses.
# This tag requires that the tag GENERATE_RTF is set to YES.

RTF_STYLESHEET_FILE =

# Set optional variables used in the generation of an RTF document. Syntax is
# similar to doxygen's config file. A template extensions file can be generated
# using doxygen -e rtf extensionFile.
# This tag requires that the tag GENERATE_RTF is set to YES.

RTF_EXTENSIONS_FILE =

# If the RTF_SOURCE_CODE tag is set to YES then doxygen will include source code
# with syntax highlighting in the RTF output.
#
# Note that which sources are shown also depends on other settings such as
# SOURCE_BROWSER.
# The default value is: NO.
# This tag requires that the tag GENERATE_RTF is set to YES.

RTF_SOURCE_CODE = NO

#---------------------------------------------------------------------------
# Configuration options related to the man page output
#---------------------------------------------------------------------------

# If the GENERATE_MAN tag is set to YES, doxygen will generate man pages for
# classes and files.
# The default value is: NO.

GENERATE_MAN = NO

# The MAN_OUTPUT tag is used to specify where the man pages will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
# it. A directory man3 will be created inside the directory specified by
# MAN_OUTPUT.
# The default directory is: man.
# This tag requires that the tag GENERATE_MAN is set to YES.

MAN_OUTPUT = man

# The MAN_EXTENSION tag determines the extension that is added to the generated
# man pages. In case the manual section does not start with a number, the number
# 3 is prepended. The dot (.) at the beginning of the MAN_EXTENSION tag is
# optional.
# The default value is: .3.
# This tag requires that the tag GENERATE_MAN is set to YES.

MAN_EXTENSION = .3

# The MAN_SUBDIR tag determines the name of the directory created within
# MAN_OUTPUT in which the man pages are placed. It defaults to man followed by
# MAN_EXTENSION with the initial . removed.
# This tag requires that the tag GENERATE_MAN is set to YES.

MAN_SUBDIR =

# If the MAN_LINKS tag is set to YES and doxygen generates man output, then it
# will generate one additional man file for each entity documented in the real
# man page(s). These additional files only source the real man page, but without
# them the man command would be unable to find the correct page.
# The default value is: NO.
# This tag requires that the tag GENERATE_MAN is set to YES.

MAN_LINKS = NO

#---------------------------------------------------------------------------
# Configuration options related to the XML output
#---------------------------------------------------------------------------

# If the GENERATE_XML tag is set to YES, doxygen will generate an XML file that
# captures the structure of the code including all documentation.
# The default value is: NO.

GENERATE_XML = NO

# The XML_OUTPUT tag is used to specify where the XML pages will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
# it.
# The default directory is: xml.
# This tag requires that the tag GENERATE_XML is set to YES.

XML_OUTPUT = xml

# If the XML_PROGRAMLISTING tag is set to YES, doxygen will dump the program
# listings (including syntax highlighting and cross-referencing information) to
# the XML output. Note that enabling this will significantly increase the size
# of the XML output.
# The default value is: YES.
# This tag requires that the tag GENERATE_XML is set to YES.

XML_PROGRAMLISTING = YES

#---------------------------------------------------------------------------
# Configuration options related to the DOCBOOK output
#---------------------------------------------------------------------------

# If the GENERATE_DOCBOOK tag is set to YES, doxygen will generate Docbook files
# that can be used to generate PDF.
# The default value is: NO.

GENERATE_DOCBOOK = NO

# The DOCBOOK_OUTPUT tag is used to specify where the Docbook pages will be put.
# If a relative path is entered the value of OUTPUT_DIRECTORY will be put in
# front of it.
# The default directory is: docbook.
# This tag requires that the tag GENERATE_DOCBOOK is set to YES.

DOCBOOK_OUTPUT = docbook

# If the DOCBOOK_PROGRAMLISTING tag is set to YES, doxygen will include the
# program listings (including syntax highlighting and cross-referencing
# information) to the DOCBOOK output. Note that enabling this will significantly
# increase the size of the DOCBOOK output.
# The default value is: NO.
# This tag requires that the tag GENERATE_DOCBOOK is set to YES.

DOCBOOK_PROGRAMLISTING = NO

#---------------------------------------------------------------------------
# Configuration options for the AutoGen Definitions output
#---------------------------------------------------------------------------

# If the GENERATE_AUTOGEN_DEF tag is set to YES, doxygen will generate an
# AutoGen Definitions (see http://autogen.sf.net) file that captures the
# structure of the code including all documentation. Note that this feature is
# still experimental and incomplete at the moment.
# The default value is: NO.

GENERATE_AUTOGEN_DEF = NO

#---------------------------------------------------------------------------
# Configuration options related to the Perl module output
#---------------------------------------------------------------------------

# If the GENERATE_PERLMOD tag is set to YES, doxygen will generate a Perl module
# file that captures the structure of the code including all documentation.
#
# Note that this feature is still experimental and incomplete at the moment.
# The default value is: NO.

GENERATE_PERLMOD = NO

# If the PERLMOD_LATEX tag is set to YES, doxygen will generate the necessary
# Makefile rules, Perl scripts and LaTeX code to be able to generate PDF and DVI
# output from the Perl module output.
# The default value is: NO.
# This tag requires that the tag GENERATE_PERLMOD is set to YES.

PERLMOD_LATEX = NO

# If the PERLMOD_PRETTY tag is set to YES, the Perl module output will be nicely
# formatted so it can be parsed by a human reader. This is useful if you want to
# understand what is going on. On the other hand, if this tag is set to NO, the
# size of the Perl module output will be much smaller and Perl will parse it
# just the same.
# The default value is: YES.
# This tag requires that the tag GENERATE_PERLMOD is set to YES.

PERLMOD_PRETTY = YES

# The names of the make variables in the generated doxyrules.make file are
# prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. This is useful
# so different doxyrules.make files included by the same Makefile don't
# overwrite each other's variables.
# This tag requires that the tag GENERATE_PERLMOD is set to YES.

PERLMOD_MAKEVAR_PREFIX =

#---------------------------------------------------------------------------
# Configuration options related to the preprocessor
#---------------------------------------------------------------------------

# If the ENABLE_PREPROCESSING tag is set to YES, doxygen will evaluate all
# C-preprocessor directives found in the sources and include files.
# The default value is: YES.

ENABLE_PREPROCESSING = YES

# If the MACRO_EXPANSION tag is set to YES, doxygen will expand all macro names
# in the source code. If set to NO, only conditional compilation will be
# performed. Macro expansion can be done in a controlled way by setting
# EXPAND_ONLY_PREDEF to YES.
# The default value is: NO.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.

MACRO_EXPANSION = NO

# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES then
# the macro expansion is limited to the macros specified with the PREDEFINED and
# EXPAND_AS_DEFINED tags.
# The default value is: NO.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.

EXPAND_ONLY_PREDEF = NO

# If the SEARCH_INCLUDES tag is set to YES, the include files in the
# INCLUDE_PATH will be searched if a #include is found.
# The default value is: YES.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.

SEARCH_INCLUDES = YES

# The INCLUDE_PATH tag can be used to specify one or more directories that
# contain include files that are not input files but should be processed by the
# preprocessor.
# This tag requires that the tag SEARCH_INCLUDES is set to YES.

INCLUDE_PATH =

# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard
# patterns (like *.h and *.hpp) to filter out the header-files in the
# directories. If left blank, the patterns specified with FILE_PATTERNS will be
# used.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.

INCLUDE_FILE_PATTERNS =

# The PREDEFINED tag can be used to specify one or more macro names that are
# defined before the preprocessor is started (similar to the -D option of e.g.
# gcc). The argument of the tag is a list of macros of the form: name or
# name=definition (no spaces). If the definition and the "=" are omitted, "=1"
# is assumed. To prevent a macro definition from being undefined via #undef or
# recursively expanded use the := operator instead of the = operator.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.

PREDEFINED =
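
# As a hedged illustration (the macro names below are hypothetical, not taken
# from this project), the forms accepted by PREDEFINED look like this:
#
# PREDEFINED = DOXYGEN_SHOULD_SKIP_THIS \
#              API_VERSION=3 \
#              FORCE_INLINE:=
#
# The first form is shorthand for "=1", the second supplies an explicit
# definition, and the ":=" form keeps the definition from being undefined via
# #undef or recursively expanded.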

# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this
# tag can be used to specify a list of macro names that should be expanded. The
# macro definition that is found in the sources will be used. Use the PREDEFINED
# tag if you want to use a different macro definition that overrules the
# definition found in the source code.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.

EXPAND_AS_DEFINED =

# If the SKIP_FUNCTION_MACROS tag is set to YES then doxygen's preprocessor will
# remove all references to function-like macros that are alone on a line, have
# an all uppercase name, and do not end with a semicolon. Such function macros
# are typically used for boiler-plate code, and will confuse the parser if not
# removed.
# The default value is: YES.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.

SKIP_FUNCTION_MACROS = YES

#---------------------------------------------------------------------------
# Configuration options related to external references
#---------------------------------------------------------------------------

# The TAGFILES tag can be used to specify one or more tag files. For each tag
# file the location of the external documentation should be added. The format of
# a tag file without this location is as follows:
# TAGFILES = file1 file2 ...
# Adding location for the tag files is done as follows:
# TAGFILES = file1=loc1 "file2 = loc2" ...
# where loc1 and loc2 can be relative or absolute paths or URLs. See the
# section "Linking to external documentation" for more information about the use
# of tag files.
# Note: Each tag file must have a unique name (where the name does NOT include
# the path). If a tag file is not located in the directory in which doxygen is
# run, you must also specify the path to the tagfile here.

TAGFILES =
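
# As a hypothetical example (the file name and URL below are placeholders, not
# part of this configuration), linking against another project's documentation
# could look like this:
#
# TAGFILES = ../otherlib/otherlib.tag=https://example.com/otherlib/html
#
# Here otherlib.tag would have been produced by that project's GENERATE_TAGFILE
# setting, and the location after "=" tells doxygen where its HTML pages live.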

# When a file name is specified after GENERATE_TAGFILE, doxygen will create a
# tag file that is based on the input files it reads. See section "Linking to
# external documentation" for more information about the usage of tag files.

GENERATE_TAGFILE =

2076 # If the ALLEXTERNALS tag is set to YES, all external class will be listed in
2077 # the class index. If set to NO, only the inherited external classes will be
2078 # listed.
2079 # The default value is: NO.
2080 
2081 ALLEXTERNALS = NO
2082 
2083 # If the EXTERNAL_GROUPS tag is set to YES, all external groups will be listed
2084 # in the modules index. If set to NO, only the current project's groups will be
2085 # listed.
2086 # The default value is: YES.
2087 
2088 EXTERNAL_GROUPS = YES
2089 
2090 # If the EXTERNAL_PAGES tag is set to YES, all external pages will be listed in
2091 # the related pages index. If set to NO, only the current project's pages will
2092 # be listed.
2093 # The default value is: YES.
2094 
2095 EXTERNAL_PAGES = YES
2096 
2097 # The PERL_PATH should be the absolute path and name of the perl script
2098 # interpreter (i.e. the result of 'which perl').
2099 # The default file (with absolute path) is: /usr/bin/perl.
2100 
2101 PERL_PATH = /usr/bin/perl
2102 
2103 #---------------------------------------------------------------------------
2104 # Configuration options related to the dot tool
2105 #---------------------------------------------------------------------------
2106 
2107 # If the CLASS_DIAGRAMS tag is set to YES, doxygen will generate a class diagram
2108 # (in HTML and LaTeX) for classes with base or super classes. Setting the tag to
2109 # NO turns the diagrams off. Note that this option also works with HAVE_DOT
2110 # disabled, but it is recommended to install and use dot, since it yields more
2111 # powerful graphs.
2112 # The default value is: YES.
2113 
2114 CLASS_DIAGRAMS = YES
2115 
2116 # You can define message sequence charts within doxygen comments using the \msc
2117 # command. Doxygen will then run the mscgen tool (see:
2118 # http://www.mcternan.me.uk/mscgen/)) to produce the chart and insert it in the
2119 # documentation. The MSCGEN_PATH tag allows you to specify the directory where
2120 # the mscgen tool resides. If left empty the tool is assumed to be found in the
2121 # default search path.
2122 
2123 MSCGEN_PATH =
2124 
2125 # You can include diagrams made with dia in doxygen documentation. Doxygen will
2126 # then run dia to produce the diagram and insert it in the documentation. The
2127 # DIA_PATH tag allows you to specify the directory where the dia binary resides.
2128 # If left empty dia is assumed to be found in the default search path.
2129 
2130 DIA_PATH =
2131 
2132 # If set to YES the inheritance and collaboration graphs will hide inheritance
2133 # and usage relations if the target is undocumented or is not a class.
2134 # The default value is: YES.
2135 
2136 HIDE_UNDOC_RELATIONS = YES
2137 
2138 # If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is
2139 # available from the path. This tool is part of Graphviz (see:
2140 # http://www.graphviz.org/), a graph visualization toolkit from AT&T and Lucent
2141 # Bell Labs. The other options in this section have no effect if this option is
2142 # set to NO
2143 # The default value is: NO.
2144 
2145 HAVE_DOT = NO
2146 
2147 # The DOT_NUM_THREADS specifies the number of dot invocations doxygen is allowed
2148 # to run in parallel. When set to 0 doxygen will base this on the number of
2149 # processors available in the system. You can set it explicitly to a value
2150 # larger than 0 to get control over the balance between CPU load and processing
2151 # speed.
2152 # Minimum value: 0, maximum value: 32, default value: 0.
2153 # This tag requires that the tag HAVE_DOT is set to YES.
2154 
2155 DOT_NUM_THREADS = 0
2156 
2157 # When you want a differently looking font in the dot files that doxygen
2158 # generates you can specify the font name using DOT_FONTNAME. You need to make
2159 # sure dot is able to find the font, which can be done by putting it in a
2160 # standard location or by setting the DOTFONTPATH environment variable or by
2161 # setting DOT_FONTPATH to the directory containing the font.
2162 # The default value is: Helvetica.
2163 # This tag requires that the tag HAVE_DOT is set to YES.
2164 
2165 DOT_FONTNAME = Helvetica
2166 
2167 # The DOT_FONTSIZE tag can be used to set the size (in points) of the font of
2168 # dot graphs.
2169 # Minimum value: 4, maximum value: 24, default value: 10.
2170 # This tag requires that the tag HAVE_DOT is set to YES.
2171 
2172 DOT_FONTSIZE = 10
2173 
2174 # By default doxygen will tell dot to use the default font as specified with
2175 # DOT_FONTNAME. If you specify a different font using DOT_FONTNAME you can set
2176 # the path where dot can find it using this tag.
2177 # This tag requires that the tag HAVE_DOT is set to YES.
2178 
2179 DOT_FONTPATH =
2180 
2181 # If the CLASS_GRAPH tag is set to YES then doxygen will generate a graph for
2182 # each documented class showing the direct and indirect inheritance relations.
2183 # Setting this tag to YES will force the CLASS_DIAGRAMS tag to NO.
2184 # The default value is: YES.
2185 # This tag requires that the tag HAVE_DOT is set to YES.
2186 
2187 CLASS_GRAPH = YES
2188 
2189 # If the COLLABORATION_GRAPH tag is set to YES then doxygen will generate a
2190 # graph for each documented class showing the direct and indirect implementation
2191 # dependencies (inheritance, containment, and class references variables) of the
2192 # class with other documented classes.
2193 # The default value is: YES.
2194 # This tag requires that the tag HAVE_DOT is set to YES.
2195 
2196 COLLABORATION_GRAPH = YES
2197 
2198 # If the GROUP_GRAPHS tag is set to YES then doxygen will generate a graph for
2199 # groups, showing the direct groups dependencies.
2200 # The default value is: YES.
2201 # This tag requires that the tag HAVE_DOT is set to YES.
2202 
2203 GROUP_GRAPHS = YES
2204 
2205 # If the UML_LOOK tag is set to YES, doxygen will generate inheritance and
2206 # collaboration diagrams in a style similar to the OMG's Unified Modeling
2207 # Language.
2208 # The default value is: NO.
2209 # This tag requires that the tag HAVE_DOT is set to YES.
2210 
2211 UML_LOOK = NO
2212 
2213 # If the UML_LOOK tag is enabled, the fields and methods are shown inside the
2214 # class node. If there are many fields or methods and many nodes the graph may
2215 # become too big to be useful. The UML_LIMIT_NUM_FIELDS threshold limits the
2216 # number of items for each type to make the size more manageable. Set this to 0
2217 # for no limit. Note that the threshold may be exceeded by 50% before the limit
2218 # is enforced. So when you set the threshold to 10, up to 15 fields may appear,
2219 # but if the number exceeds 15, the total amount of fields shown is limited to
2220 # 10.
2221 # Minimum value: 0, maximum value: 100, default value: 10.
2222 # This tag requires that the tag HAVE_DOT is set to YES.
2223 
2224 UML_LIMIT_NUM_FIELDS = 10
2225 
2226 # If the TEMPLATE_RELATIONS tag is set to YES then the inheritance and
2227 # collaboration graphs will show the relations between templates and their
2228 # instances.
2229 # The default value is: NO.
2230 # This tag requires that the tag HAVE_DOT is set to YES.
2231 
2232 TEMPLATE_RELATIONS = NO
2233 
2234 # If the INCLUDE_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are set to
2235 # YES then doxygen will generate a graph for each documented file showing the
2236 # direct and indirect include dependencies of the file with other documented
2237 # files.
2238 # The default value is: YES.
2239 # This tag requires that the tag HAVE_DOT is set to YES.
2240 
2241 INCLUDE_GRAPH = YES
2242 
2243 # If the INCLUDED_BY_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are
2244 # set to YES then doxygen will generate a graph for each documented file showing
2245 # the direct and indirect include dependencies of the file with other documented
2246 # files.
2247 # The default value is: YES.
2248 # This tag requires that the tag HAVE_DOT is set to YES.
2249 
2250 INCLUDED_BY_GRAPH = YES
2251 
2252 # If the CALL_GRAPH tag is set to YES then doxygen will generate a call
2253 # dependency graph for every global function or class method.
2254 #
2255 # Note that enabling this option will significantly increase the time of a run.
2256 # So in most cases it will be better to enable call graphs for selected
2257 # functions only using the \callgraph command. Disabling a call graph can be
2258 # accomplished by means of the command \hidecallgraph.
2259 # The default value is: NO.
2260 # This tag requires that the tag HAVE_DOT is set to YES.
2261 
2262 CALL_GRAPH = YES
2263 
2264 # If the CALLER_GRAPH tag is set to YES then doxygen will generate a caller
2265 # dependency graph for every global function or class method.
2266 #
2267 # Note that enabling this option will significantly increase the time of a run.
2268 # So in most cases it will be better to enable caller graphs for selected
2269 # functions only using the \callergraph command. Disabling a caller graph can be
2270 # accomplished by means of the command \hidecallergraph.
2271 # The default value is: NO.
2272 # This tag requires that the tag HAVE_DOT is set to YES.
2273 
2274 CALLER_GRAPH = YES
2275 
2276 # If the GRAPHICAL_HIERARCHY tag is set to YES then doxygen will graphical
2277 # hierarchy of all classes instead of a textual one.
2278 # The default value is: YES.
2279 # This tag requires that the tag HAVE_DOT is set to YES.
2280 
2281 GRAPHICAL_HIERARCHY = YES
2282 
2283 # If the DIRECTORY_GRAPH tag is set to YES then doxygen will show the
2284 # dependencies a directory has on other directories in a graphical way. The
2285 # dependency relations are determined by the #include relations between the
2286 # files in the directories.
2287 # The default value is: YES.
2288 # This tag requires that the tag HAVE_DOT is set to YES.
2289 
2290 DIRECTORY_GRAPH = YES
2291 
2292 # The DOT_IMAGE_FORMAT tag can be used to set the image format of the images
2293 # generated by dot. For an explanation of the image formats see the section
2294 # output formats in the documentation of the dot tool (Graphviz (see:
2295 # http://www.graphviz.org/)).
2296 # Note: If you choose svg you need to set HTML_FILE_EXTENSION to xhtml in order
2297 # to make the SVG files visible in IE 9+ (other browsers do not have this
2298 # requirement).
2299 # Possible values are: png, jpg, gif, svg, png:gd, png:gd:gd, png:cairo,
2300 # png:cairo:gd, png:cairo:cairo, png:cairo:gdiplus, png:gdiplus and
2301 # png:gdiplus:gdiplus.
2302 # The default value is: png.
2303 # This tag requires that the tag HAVE_DOT is set to YES.
2304 
2305 DOT_IMAGE_FORMAT = png
2306 
2307 # If DOT_IMAGE_FORMAT is set to svg, then this option can be set to YES to
2308 # enable generation of interactive SVG images that allow zooming and panning.
2309 #
2310 # Note that this requires a modern browser other than Internet Explorer. Tested
2311 # and working are Firefox, Chrome, Safari, and Opera.
2312 # Note: For IE 9+ you need to set HTML_FILE_EXTENSION to xhtml in order to make
2313 # the SVG files visible. Older versions of IE do not have SVG support.
2314 # The default value is: NO.
2315 # This tag requires that the tag HAVE_DOT is set to YES.
2316 
2317 INTERACTIVE_SVG = NO
2318 
2319 # The DOT_PATH tag can be used to specify the path where the dot tool can be
2320 # found. If left blank, it is assumed the dot tool can be found in the path.
2321 # This tag requires that the tag HAVE_DOT is set to YES.
2322 
2323 DOT_PATH =
2324 
2325 # The DOTFILE_DIRS tag can be used to specify one or more directories that
2326 # contain dot files that are included in the documentation (see the \dotfile
2327 # command).
2328 # This tag requires that the tag HAVE_DOT is set to YES.
2329 
2330 DOTFILE_DIRS =
2331 
2332 # The MSCFILE_DIRS tag can be used to specify one or more directories that
2333 # contain msc files that are included in the documentation (see the \mscfile
2334 # command).
2335 
2336 MSCFILE_DIRS =
2337 
2338 # The DIAFILE_DIRS tag can be used to specify one or more directories that
2339 # contain dia files that are included in the documentation (see the \diafile
2340 # command).
2341 
2342 DIAFILE_DIRS =
2343 
2344 # When using plantuml, the PLANTUML_JAR_PATH tag should be used to specify the
2345 # path where java can find the plantuml.jar file. If left blank, it is assumed
2346 # PlantUML is not used or called during a preprocessing step. Doxygen will
2347 # generate a warning when it encounters a \startuml command in this case and
2348 # will not generate output for the diagram.
2349 
2350 PLANTUML_JAR_PATH =
2351 
2352 # When using plantuml, the specified paths are searched for files specified by
2353 # the !include statement in a plantuml block.
2354 
2355 PLANTUML_INCLUDE_PATH =
2356 
2357 # The DOT_GRAPH_MAX_NODES tag can be used to set the maximum number of nodes
2358 # that will be shown in the graph. If the number of nodes in a graph becomes
2359 # larger than this value, doxygen will truncate the graph, which is visualized
2360 # by representing a node as a red box. Note that doxygen if the number of direct
2361 # children of the root node in a graph is already larger than
2362 # DOT_GRAPH_MAX_NODES then the graph will not be shown at all. Also note that
2363 # the size of a graph can be further restricted by MAX_DOT_GRAPH_DEPTH.
2364 # Minimum value: 0, maximum value: 10000, default value: 50.
2365 # This tag requires that the tag HAVE_DOT is set to YES.
2366 
2367 DOT_GRAPH_MAX_NODES = 50
2368 
2369 # The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the graphs
2370 # generated by dot. A depth value of 3 means that only nodes reachable from the
2371 # root by following a path via at most 3 edges will be shown. Nodes that lay
2372 # further from the root node will be omitted. Note that setting this option to 1
2373 # or 2 may greatly reduce the computation time needed for large code bases. Also
2374 # note that the size of a graph can be further restricted by
2375 # DOT_GRAPH_MAX_NODES. Using a depth of 0 means no depth restriction.
2376 # Minimum value: 0, maximum value: 1000, default value: 0.
2377 # This tag requires that the tag HAVE_DOT is set to YES.
2378 
2379 MAX_DOT_GRAPH_DEPTH = 1000
2380 
2381 # Set the DOT_TRANSPARENT tag to YES to generate images with a transparent
2382 # background. This is disabled by default, because dot on Windows does not seem
2383 # to support this out of the box.
2384 #
2385 # Warning: Depending on the platform used, enabling this option may lead to
2386 # badly anti-aliased labels on the edges of a graph (i.e. they become hard to
2387 # read).
2388 # The default value is: NO.
2389 # This tag requires that the tag HAVE_DOT is set to YES.
2390 
2391 DOT_TRANSPARENT = NO
2392 
2393 # Set the DOT_MULTI_TARGETS tag to YES to allow dot to generate multiple output
2394 # files in one run (i.e. multiple -o and -T options on the command line). This
2395 # makes dot run faster, but since only newer versions of dot (>1.8.10) support
2396 # this, this feature is disabled by default.
2397 # The default value is: NO.
2398 # This tag requires that the tag HAVE_DOT is set to YES.
2399 
2400 DOT_MULTI_TARGETS = NO
2401 
2402 # If the GENERATE_LEGEND tag is set to YES doxygen will generate a legend page
2403 # explaining the meaning of the various boxes and arrows in the dot generated
2404 # graphs.
2405 # The default value is: YES.
2406 # This tag requires that the tag HAVE_DOT is set to YES.
2407 
2408 GENERATE_LEGEND = YES
2409 
2410 # If the DOT_CLEANUP tag is set to YES, doxygen will remove the intermediate dot
2411 # files that are used to generate the various graphs.
2412 # The default value is: YES.
2413 # This tag requires that the tag HAVE_DOT is set to YES.
2414 
2415 DOT_CLEANUP = YES
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00048.html ================================================ 0.9.9 API documentation: mat2x2.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00048_source.html ================================================ 0.9.9 API documentation: mat2x2.hpp Source File
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00049.html ================================================ 0.9.9 API documentation: mat2x3.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00049_source.html ================================================ 0.9.9 API documentation: mat2x3.hpp Source File
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00050.html ================================================ 0.9.9 API documentation: mat2x4.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00050_source.html ================================================ 0.9.9 API documentation: mat2x4.hpp Source File
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00051.html ================================================ 0.9.9 API documentation: mat3x2.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00051_source.html ================================================ 0.9.9 API documentation: mat3x2.hpp Source File
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00052.html ================================================ 0.9.9 API documentation: mat3x3.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00052_source.html ================================================ 0.9.9 API documentation: mat3x3.hpp Source File
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00053.html ================================================ 0.9.9 API documentation: mat3x4.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00053_source.html ================================================ 0.9.9 API documentation: mat3x4.hpp Source File
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00054.html ================================================ 0.9.9 API documentation: mat4x2.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00054_source.html ================================================ 0.9.9 API documentation: mat4x2.hpp Source File
#pragma once
#include "./ext/matrix_float4x2_precision.hpp"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00055.html ================================================ 0.9.9 API documentation: mat4x3.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00055_source.html ================================================ 0.9.9 API documentation: mat4x3.hpp Source File
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00056.html ================================================ 0.9.9 API documentation: mat4x4.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00056_source.html ================================================ 0.9.9 API documentation: mat4x4.hpp Source File
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00057.html ================================================ 0.9.9 API documentation: matrix.hpp File Reference

Core features

Functions

template<length_t C, length_t R, typename T, qualifier Q>
GLM_FUNC_DECL T determinant(mat<C, R, T, Q> const& m)
    Return the determinant of a square matrix.

template<length_t C, length_t R, typename T, qualifier Q>
GLM_FUNC_DECL mat<C, R, T, Q> inverse(mat<C, R, T, Q> const& m)
    Return the inverse of a square matrix.

template<length_t C, length_t R, typename T, qualifier Q>
GLM_FUNC_DECL mat<C, R, T, Q> matrixCompMult(mat<C, R, T, Q> const& x, mat<C, R, T, Q> const& y)
    Multiply matrix x by matrix y component-wise, i.e., result[i][j] is the scalar product of x[i][j] and y[i][j].

template<length_t C, length_t R, typename T, qualifier Q>
GLM_FUNC_DECL detail::outerProduct_trait<C, R, T, Q>::type outerProduct(vec<C, T, Q> const& c, vec<R, T, Q> const& r)
    Treats the first parameter c as a column vector and the second parameter r as a row vector and does a linear algebraic matrix multiply c * r.

template<length_t C, length_t R, typename T, qualifier Q>
GLM_FUNC_DECL mat<C, R, T, Q>::transpose_type transpose(mat<C, R, T, Q> const& x)
    Returns the transposed matrix of x.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00057_source.html ================================================ 0.9.9 API documentation: matrix.hpp Source File
#pragma once

// Dependencies
#include "detail/qualifier.hpp"
#include "detail/setup.hpp"
#include "vec2.hpp"
#include "vec3.hpp"
#include "vec4.hpp"
#include "mat2x2.hpp"
#include "mat2x3.hpp"
#include "mat2x4.hpp"
#include "mat3x2.hpp"
#include "mat3x3.hpp"
#include "mat3x4.hpp"
#include "mat4x2.hpp"
#include "mat4x3.hpp"
#include "mat4x4.hpp"

namespace glm {
namespace detail
{
	template<length_t C, length_t R, typename T, qualifier Q>
	struct outerProduct_trait{};

	template<typename T, qualifier Q>
	struct outerProduct_trait<2, 2, T, Q>
	{
		typedef mat<2, 2, T, Q> type;
	};

	template<typename T, qualifier Q>
	struct outerProduct_trait<2, 3, T, Q>
	{
		typedef mat<3, 2, T, Q> type;
	};

	template<typename T, qualifier Q>
	struct outerProduct_trait<2, 4, T, Q>
	{
		typedef mat<4, 2, T, Q> type;
	};

	template<typename T, qualifier Q>
	struct outerProduct_trait<3, 2, T, Q>
	{
		typedef mat<2, 3, T, Q> type;
	};

	template<typename T, qualifier Q>
	struct outerProduct_trait<3, 3, T, Q>
	{
		typedef mat<3, 3, T, Q> type;
	};

	template<typename T, qualifier Q>
	struct outerProduct_trait<3, 4, T, Q>
	{
		typedef mat<4, 3, T, Q> type;
	};

	template<typename T, qualifier Q>
	struct outerProduct_trait<4, 2, T, Q>
	{
		typedef mat<2, 4, T, Q> type;
	};

	template<typename T, qualifier Q>
	struct outerProduct_trait<4, 3, T, Q>
	{
		typedef mat<3, 4, T, Q> type;
	};

	template<typename T, qualifier Q>
	struct outerProduct_trait<4, 4, T, Q>
	{
		typedef mat<4, 4, T, Q> type;
	};
}//namespace detail

	template<length_t C, length_t R, typename T, qualifier Q>
	GLM_FUNC_DECL mat<C, R, T, Q> matrixCompMult(mat<C, R, T, Q> const& x, mat<C, R, T, Q> const& y);

	template<length_t C, length_t R, typename T, qualifier Q>
	GLM_FUNC_DECL typename detail::outerProduct_trait<C, R, T, Q>::type outerProduct(vec<C, T, Q> const& c, vec<R, T, Q> const& r);

	template<length_t C, length_t R, typename T, qualifier Q>
	GLM_FUNC_DECL typename mat<C, R, T, Q>::transpose_type transpose(mat<C, R, T, Q> const& x);

	template<length_t C, length_t R, typename T, qualifier Q>
	GLM_FUNC_DECL T determinant(mat<C, R, T, Q> const& m);

	template<length_t C, length_t R, typename T, qualifier Q>
	GLM_FUNC_DECL mat<C, R, T, Q> inverse(mat<C, R, T, Q> const& m);

}//namespace glm

#include "detail/func_matrix.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00058.html ================================================ 0.9.9 API documentation: matrix_access.hpp File Reference

GLM_GTC_matrix_access

Functions

template<typename genType>
GLM_FUNC_DECL genType::col_type column(genType const& m, length_t index)
    Get a specific column of a matrix.

template<typename genType>
GLM_FUNC_DECL genType column(genType const& m, length_t index, typename genType::col_type const& x)
    Set a specific column of a matrix.

template<typename genType>
GLM_FUNC_DECL genType::row_type row(genType const& m, length_t index)
    Get a specific row of a matrix.

template<typename genType>
GLM_FUNC_DECL genType row(genType const& m, length_t index, typename genType::row_type const& x)
    Set a specific row of a matrix.

Detailed Description

GLM_GTC_matrix_access

See also
Core features (dependence)

Definition in file matrix_access.hpp.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00058_source.html ================================================ 0.9.9 API documentation: matrix_access.hpp Source File
#pragma once

// Dependency:
#include "../detail/setup.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_GTC_matrix_access extension included")
#endif

namespace glm
{
	template<typename genType>
	GLM_FUNC_DECL typename genType::row_type row(
		genType const& m,
		length_t index);

	template<typename genType>
	GLM_FUNC_DECL genType row(
		genType const& m,
		length_t index,
		typename genType::row_type const& x);

	template<typename genType>
	GLM_FUNC_DECL typename genType::col_type column(
		genType const& m,
		length_t index);

	template<typename genType>
	GLM_FUNC_DECL genType column(
		genType const& m,
		length_t index,
		typename genType::col_type const& x);

}//namespace glm

#include "matrix_access.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00059.html ================================================ 0.9.9 API documentation: matrix_clip_space.hpp File Reference

GLM_EXT_matrix_clip_space

Functions

template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > frustum (T left, T right, T bottom, T top, T near, T far)
 Creates a frustum matrix with default handedness, using the default handedness and default near and far clip planes definition. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > frustumLH (T left, T right, T bottom, T top, T near, T far)
 Creates a left handed frustum matrix. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > frustumLH_NO (T left, T right, T bottom, T top, T near, T far)
 Creates a left handed frustum matrix. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > frustumLH_ZO (T left, T right, T bottom, T top, T near, T far)
 Creates a left handed frustum matrix. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > frustumNO (T left, T right, T bottom, T top, T near, T far)
 Creates a frustum matrix using left-handed coordinates if GLM_FORCE_LEFT_HANDED if defined or right-handed coordinates otherwise. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > frustumRH (T left, T right, T bottom, T top, T near, T far)
 Creates a right handed frustum matrix. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > frustumRH_NO (T left, T right, T bottom, T top, T near, T far)
 Creates a right handed frustum matrix. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > frustumRH_ZO (T left, T right, T bottom, T top, T near, T far)
 Creates a right handed frustum matrix. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > frustumZO (T left, T right, T bottom, T top, T near, T far)
 Creates a frustum matrix using left-handed coordinates if GLM_FORCE_LEFT_HANDED if defined or right-handed coordinates otherwise. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > infinitePerspective (T fovy, T aspect, T near)
 Creates a matrix for a symmetric perspective-view frustum with the far plane at infinity, using the default handedness. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > infinitePerspectiveLH (T fovy, T aspect, T near)
 Creates a matrix for a left handed, symmetric perspective-view frustum with the far plane at infinity. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > infinitePerspectiveRH (T fovy, T aspect, T near)
 Creates a matrix for a right handed, symmetric perspective-view frustum with the far plane at infinity. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > ortho (T left, T right, T bottom, T top)
 Creates a matrix for projecting two-dimensional coordinates onto the screen. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > ortho (T left, T right, T bottom, T top, T zNear, T zFar)
 Creates a matrix for an orthographic parallel viewing volume, using the default handedness and the default near and far clip plane definitions. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > orthoLH (T left, T right, T bottom, T top, T zNear, T zFar)
 Creates a matrix for an orthographic parallel viewing volume, using left-handed coordinates. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > orthoLH_NO (T left, T right, T bottom, T top, T zNear, T zFar)
 Creates a matrix for an orthographic parallel viewing volume, using left-handed coordinates. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > orthoLH_ZO (T left, T right, T bottom, T top, T zNear, T zFar)
 Creates a matrix for an orthographic parallel viewing volume, using left-handed coordinates. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > orthoNO (T left, T right, T bottom, T top, T zNear, T zFar)
 Creates a matrix for an orthographic parallel viewing volume, using left-handed coordinates if GLM_FORCE_LEFT_HANDED is defined, or right-handed coordinates otherwise. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > orthoRH (T left, T right, T bottom, T top, T zNear, T zFar)
 Creates a matrix for an orthographic parallel viewing volume, using right-handed coordinates. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > orthoRH_NO (T left, T right, T bottom, T top, T zNear, T zFar)
 Creates a matrix for an orthographic parallel viewing volume, using right-handed coordinates. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > orthoRH_ZO (T left, T right, T bottom, T top, T zNear, T zFar)
 Creates a matrix for an orthographic parallel viewing volume, using right-handed coordinates. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > orthoZO (T left, T right, T bottom, T top, T zNear, T zFar)
 Creates a matrix for an orthographic parallel viewing volume, using left-handed coordinates if GLM_FORCE_LEFT_HANDED is defined, or right-handed coordinates otherwise. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspective (T fovy, T aspect, T near, T far)
 Creates a matrix for a symmetric perspective-view frustum based on the default handedness and the default near and far clip plane definitions. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveFov (T fov, T width, T height, T near, T far)
 Builds a perspective projection matrix based on a field of view, using the default handedness and the default near and far clip plane definitions. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveFovLH (T fov, T width, T height, T near, T far)
 Builds a left handed perspective projection matrix based on a field of view. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveFovLH_NO (T fov, T width, T height, T near, T far)
 Builds a perspective projection matrix based on a field of view using left-handed coordinates. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveFovLH_ZO (T fov, T width, T height, T near, T far)
 Builds a perspective projection matrix based on a field of view using left-handed coordinates. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveFovNO (T fov, T width, T height, T near, T far)
 Builds a perspective projection matrix based on a field of view using left-handed coordinates if GLM_FORCE_LEFT_HANDED is defined, or right-handed coordinates otherwise. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveFovRH (T fov, T width, T height, T near, T far)
 Builds a right handed perspective projection matrix based on a field of view. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveFovRH_NO (T fov, T width, T height, T near, T far)
 Builds a perspective projection matrix based on a field of view using right-handed coordinates. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveFovRH_ZO (T fov, T width, T height, T near, T far)
 Builds a perspective projection matrix based on a field of view using right-handed coordinates. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveFovZO (T fov, T width, T height, T near, T far)
 Builds a perspective projection matrix based on a field of view using left-handed coordinates if GLM_FORCE_LEFT_HANDED is defined, or right-handed coordinates otherwise. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveLH (T fovy, T aspect, T near, T far)
 Creates a matrix for a left handed, symmetric perspective-view frustum. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveLH_NO (T fovy, T aspect, T near, T far)
 Creates a matrix for a left handed, symmetric perspective-view frustum. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveLH_ZO (T fovy, T aspect, T near, T far)
 Creates a matrix for a left handed, symmetric perspective-view frustum. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveNO (T fovy, T aspect, T near, T far)
 Creates a matrix for a symmetric perspective-view frustum using left-handed coordinates if GLM_FORCE_LEFT_HANDED is defined, or right-handed coordinates otherwise. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveRH (T fovy, T aspect, T near, T far)
 Creates a matrix for a right handed, symmetric perspective-view frustum. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveRH_NO (T fovy, T aspect, T near, T far)
 Creates a matrix for a right handed, symmetric perspective-view frustum. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveRH_ZO (T fovy, T aspect, T near, T far)
 Creates a matrix for a right handed, symmetric perspective-view frustum. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveZO (T fovy, T aspect, T near, T far)
 Creates a matrix for a symmetric perspective-view frustum using left-handed coordinates if GLM_FORCE_LEFT_HANDED is defined, or right-handed coordinates otherwise. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > tweakedInfinitePerspective (T fovy, T aspect, T near)
 Creates a matrix for a symmetric perspective-view frustum with the far plane at infinity, for graphics hardware that doesn't support depth clamping. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > tweakedInfinitePerspective (T fovy, T aspect, T near, T ep)
 Creates a matrix for a symmetric perspective-view frustum with the far plane at infinity, for graphics hardware that doesn't support depth clamping. More...
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00059_source.html ================================================ 0.9.9 API documentation: matrix_clip_space.hpp Source File
0.9.9 API documentation
matrix_clip_space.hpp
#pragma once

// Dependencies
#include "../ext/scalar_constants.hpp"
#include "../geometric.hpp"
#include "../trigonometric.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#   pragma message("GLM: GLM_EXT_matrix_clip_space extension included")
#endif

namespace glm
{
    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> ortho(
        T left, T right, T bottom, T top);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> orthoLH_ZO(
        T left, T right, T bottom, T top, T zNear, T zFar);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> orthoLH_NO(
        T left, T right, T bottom, T top, T zNear, T zFar);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> orthoRH_ZO(
        T left, T right, T bottom, T top, T zNear, T zFar);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> orthoRH_NO(
        T left, T right, T bottom, T top, T zNear, T zFar);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> orthoZO(
        T left, T right, T bottom, T top, T zNear, T zFar);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> orthoNO(
        T left, T right, T bottom, T top, T zNear, T zFar);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> orthoLH(
        T left, T right, T bottom, T top, T zNear, T zFar);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> orthoRH(
        T left, T right, T bottom, T top, T zNear, T zFar);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> ortho(
        T left, T right, T bottom, T top, T zNear, T zFar);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> frustumLH_ZO(
        T left, T right, T bottom, T top, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> frustumLH_NO(
        T left, T right, T bottom, T top, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> frustumRH_ZO(
        T left, T right, T bottom, T top, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> frustumRH_NO(
        T left, T right, T bottom, T top, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> frustumZO(
        T left, T right, T bottom, T top, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> frustumNO(
        T left, T right, T bottom, T top, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> frustumLH(
        T left, T right, T bottom, T top, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> frustumRH(
        T left, T right, T bottom, T top, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> frustum(
        T left, T right, T bottom, T top, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> perspectiveRH_ZO(
        T fovy, T aspect, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> perspectiveRH_NO(
        T fovy, T aspect, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> perspectiveLH_ZO(
        T fovy, T aspect, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> perspectiveLH_NO(
        T fovy, T aspect, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> perspectiveZO(
        T fovy, T aspect, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> perspectiveNO(
        T fovy, T aspect, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> perspectiveRH(
        T fovy, T aspect, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> perspectiveLH(
        T fovy, T aspect, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> perspective(
        T fovy, T aspect, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> perspectiveFovRH_ZO(
        T fov, T width, T height, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> perspectiveFovRH_NO(
        T fov, T width, T height, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> perspectiveFovLH_ZO(
        T fov, T width, T height, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> perspectiveFovLH_NO(
        T fov, T width, T height, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> perspectiveFovZO(
        T fov, T width, T height, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> perspectiveFovNO(
        T fov, T width, T height, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> perspectiveFovRH(
        T fov, T width, T height, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> perspectiveFovLH(
        T fov, T width, T height, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> perspectiveFov(
        T fov, T width, T height, T near, T far);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> infinitePerspectiveLH(
        T fovy, T aspect, T near);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> infinitePerspectiveRH(
        T fovy, T aspect, T near);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> infinitePerspective(
        T fovy, T aspect, T near);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> tweakedInfinitePerspective(
        T fovy, T aspect, T near);

    template<typename T>
    GLM_FUNC_DECL mat<4, 4, T, defaultp> tweakedInfinitePerspective(
        T fovy, T aspect, T near, T ep);
}//namespace glm

#include "matrix_clip_space.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00060.html ================================================ 0.9.9 API documentation: matrix_common.hpp File Reference
matrix_common.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00060_source.html ================================================ 0.9.9 API documentation: matrix_common.hpp Source File
matrix_common.hpp
#pragma once

#include "../detail/qualifier.hpp"
#include "../detail/_fixes.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#   pragma message("GLM: GLM_EXT_matrix_transform extension included")
#endif

namespace glm
{
    template<length_t C, length_t R, typename T, typename U, qualifier Q>
    GLM_FUNC_DECL mat<C, R, T, Q> mix(mat<C, R, T, Q> const& x, mat<C, R, T, Q> const& y, mat<C, R, U, Q> const& a);

    template<length_t C, length_t R, typename T, typename U, qualifier Q>
    GLM_FUNC_DECL mat<C, R, T, Q> mix(mat<C, R, T, Q> const& x, mat<C, R, T, Q> const& y, U a);
}//namespace glm

#include "matrix_common.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00061.html ================================================ 0.9.9 API documentation: matrix_cross_product.hpp File Reference
matrix_cross_product.hpp File Reference

GLM_GTX_matrix_cross_product More...


Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 3, T, Q > matrixCross3 (vec< 3, T, Q > const &x)
 Build a cross product matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > matrixCross4 (vec< 3, T, Q > const &x)
 Build a cross product matrix. More...
 

Detailed Description

GLM_GTX_matrix_cross_product

See also
Core features (dependence)
gtx_extended_min_max (dependence)

Definition in file matrix_cross_product.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00061_source.html ================================================ 0.9.9 API documentation: matrix_cross_product.hpp Source File
matrix_cross_product.hpp
#pragma once

// Dependency:
#include "../glm.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#   ifndef GLM_ENABLE_EXPERIMENTAL
#       pragma message("GLM: GLM_GTX_matrix_cross_product is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#   else
#       pragma message("GLM: GLM_GTX_matrix_cross_product extension included")
#   endif
#endif

namespace glm
{
    template<typename T, qualifier Q>
    GLM_FUNC_DECL mat<3, 3, T, Q> matrixCross3(
        vec<3, T, Q> const& x);

    template<typename T, qualifier Q>
    GLM_FUNC_DECL mat<4, 4, T, Q> matrixCross4(
        vec<3, T, Q> const& x);
}//namespace glm

#include "matrix_cross_product.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00062.html ================================================ 0.9.9 API documentation: matrix_decompose.hpp File Reference
matrix_decompose.hpp File Reference

GLM_GTX_matrix_decompose More...


Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL bool decompose (mat< 4, 4, T, Q > const &modelMatrix, vec< 3, T, Q > &scale, qua< T, Q > &orientation, vec< 3, T, Q > &translation, vec< 3, T, Q > &skew, vec< 4, T, Q > &perspective)
 Decomposes a model matrix into translation, rotation and scale components. More...
 

Detailed Description

GLM_GTX_matrix_decompose

See also
Core features (dependence)

Definition in file matrix_decompose.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00062_source.html ================================================ 0.9.9 API documentation: matrix_decompose.hpp Source File
matrix_decompose.hpp
#pragma once

// Dependencies
#include "../mat4x4.hpp"
#include "../vec3.hpp"
#include "../vec4.hpp"
#include "../geometric.hpp"
#include "../gtc/quaternion.hpp"
#include "../gtc/matrix_transform.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#   ifndef GLM_ENABLE_EXPERIMENTAL
#       pragma message("GLM: GLM_GTX_matrix_decompose is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#   else
#       pragma message("GLM: GLM_GTX_matrix_decompose extension included")
#   endif
#endif

namespace glm
{
    template<typename T, qualifier Q>
    GLM_FUNC_DECL bool decompose(
        mat<4, 4, T, Q> const& modelMatrix,
        vec<3, T, Q> & scale, qua<T, Q> & orientation, vec<3, T, Q> & translation, vec<3, T, Q> & skew, vec<4, T, Q> & perspective);
}//namespace glm

#include "matrix_decompose.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00063.html ================================================ 0.9.9 API documentation: matrix_double2x2.hpp File Reference
matrix_double2x2.hpp File Reference

Core features More...


Typedefs

typedef mat< 2, 2, double, defaultp > dmat2
 2 columns of 2 components matrix of double-precision floating-point numbers. More...
 
typedef mat< 2, 2, double, defaultp > dmat2x2
 2 columns of 2 components matrix of double-precision floating-point numbers. More...
 

Detailed Description

Core features

Definition in file matrix_double2x2.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00063_source.html ================================================ 0.9.9 API documentation: matrix_double2x2.hpp Source File
matrix_double2x2.hpp
#pragma once
#include "../detail/type_mat2x2.hpp"

namespace glm
{
    typedef mat<2, 2, double, defaultp> dmat2x2;

    typedef mat<2, 2, double, defaultp> dmat2;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00064.html ================================================ 0.9.9 API documentation: matrix_double2x2_precision.hpp File Reference
matrix_double2x2_precision.hpp File Reference

Core features More...


Typedefs

typedef mat< 2, 2, double, highp > highp_dmat2
 2 columns of 2 components matrix of double-precision floating-point numbers using high precision arithmetic in terms of ULPs. More...
 
typedef mat< 2, 2, double, highp > highp_dmat2x2
 2 columns of 2 components matrix of double-precision floating-point numbers using high precision arithmetic in terms of ULPs. More...
 
typedef mat< 2, 2, double, lowp > lowp_dmat2
 2 columns of 2 components matrix of double-precision floating-point numbers using low precision arithmetic in terms of ULPs. More...
 
typedef mat< 2, 2, double, lowp > lowp_dmat2x2
 2 columns of 2 components matrix of double-precision floating-point numbers using low precision arithmetic in terms of ULPs. More...
 
typedef mat< 2, 2, double, mediump > mediump_dmat2
 2 columns of 2 components matrix of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs. More...
 
typedef mat< 2, 2, double, mediump > mediump_dmat2x2
 2 columns of 2 components matrix of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs. More...
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00064_source.html ================================================ 0.9.9 API documentation: matrix_double2x2_precision.hpp Source File
matrix_double2x2_precision.hpp
#pragma once
#include "../detail/type_mat2x2.hpp"

namespace glm
{
    typedef mat<2, 2, double, lowp> lowp_dmat2;

    typedef mat<2, 2, double, mediump> mediump_dmat2;

    typedef mat<2, 2, double, highp> highp_dmat2;

    typedef mat<2, 2, double, lowp> lowp_dmat2x2;

    typedef mat<2, 2, double, mediump> mediump_dmat2x2;

    typedef mat<2, 2, double, highp> highp_dmat2x2;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00065.html ================================================ 0.9.9 API documentation: matrix_double2x3.hpp File Reference
0.9.9 API documentation
matrix_double2x3.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 2, 3, double, defaultp > dmat2x3
 2 columns of 3 components matrix of double-precision floating-point numbers. More...
 

Detailed Description

Core features

Definition in file matrix_double2x3.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00065_source.html ================================================ 0.9.9 API documentation: matrix_double2x3.hpp Source File
0.9.9 API documentation
matrix_double2x3.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat2x3.hpp"

namespace glm
{
    typedef mat<2, 3, double, defaultp> dmat2x3;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00066.html ================================================ 0.9.9 API documentation: matrix_double2x3_precision.hpp File Reference
0.9.9 API documentation
matrix_double2x3_precision.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 2, 3, double, highp > highp_dmat2x3
 2 columns of 3 components matrix of double-precision floating-point numbers using high precision arithmetic in terms of ULPs. More...

typedef mat< 2, 3, double, lowp > lowp_dmat2x3
 2 columns of 3 components matrix of double-precision floating-point numbers using low precision arithmetic in terms of ULPs. More...

typedef mat< 2, 3, double, mediump > mediump_dmat2x3
 2 columns of 3 components matrix of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs. More...
 

Detailed Description

Core features

Definition in file matrix_double2x3_precision.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00066_source.html ================================================ 0.9.9 API documentation: matrix_double2x3_precision.hpp Source File
0.9.9 API documentation
matrix_double2x3_precision.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat2x3.hpp"

namespace glm
{
    typedef mat<2, 3, double, lowp> lowp_dmat2x3;
    typedef mat<2, 3, double, mediump> mediump_dmat2x3;
    typedef mat<2, 3, double, highp> highp_dmat2x3;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00067.html ================================================ 0.9.9 API documentation: matrix_double2x4.hpp File Reference
0.9.9 API documentation
matrix_double2x4.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 2, 4, double, defaultp > dmat2x4
 2 columns of 4 components matrix of double-precision floating-point numbers. More...
 

Detailed Description

Core features

Definition in file matrix_double2x4.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00067_source.html ================================================ 0.9.9 API documentation: matrix_double2x4.hpp Source File
0.9.9 API documentation
matrix_double2x4.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat2x4.hpp"

namespace glm
{
    typedef mat<2, 4, double, defaultp> dmat2x4;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00068.html ================================================ 0.9.9 API documentation: matrix_double2x4_precision.hpp File Reference
0.9.9 API documentation
matrix_double2x4_precision.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 2, 4, double, highp > highp_dmat2x4
 2 columns of 4 components matrix of double-precision floating-point numbers using high precision arithmetic in terms of ULPs. More...

typedef mat< 2, 4, double, lowp > lowp_dmat2x4
 2 columns of 4 components matrix of double-precision floating-point numbers using low precision arithmetic in terms of ULPs. More...

typedef mat< 2, 4, double, mediump > mediump_dmat2x4
 2 columns of 4 components matrix of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs. More...
 

Detailed Description

Core features

Definition in file matrix_double2x4_precision.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00068_source.html ================================================ 0.9.9 API documentation: matrix_double2x4_precision.hpp Source File
0.9.9 API documentation
matrix_double2x4_precision.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat2x4.hpp"

namespace glm
{
    typedef mat<2, 4, double, lowp> lowp_dmat2x4;
    typedef mat<2, 4, double, mediump> mediump_dmat2x4;
    typedef mat<2, 4, double, highp> highp_dmat2x4;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00069.html ================================================ 0.9.9 API documentation: matrix_double3x2.hpp File Reference
0.9.9 API documentation
matrix_double3x2.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 3, 2, double, defaultp > dmat3x2
 3 columns of 2 components matrix of double-precision floating-point numbers. More...
 

Detailed Description

Core features

Definition in file matrix_double3x2.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00069_source.html ================================================ 0.9.9 API documentation: matrix_double3x2.hpp Source File
0.9.9 API documentation
matrix_double3x2.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat3x2.hpp"

namespace glm
{
    typedef mat<3, 2, double, defaultp> dmat3x2;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00070.html ================================================ 0.9.9 API documentation: matrix_double3x2_precision.hpp File Reference
0.9.9 API documentation
matrix_double3x2_precision.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 3, 2, double, highp > highp_dmat3x2
 3 columns of 2 components matrix of double-precision floating-point numbers using high precision arithmetic in terms of ULPs. More...

typedef mat< 3, 2, double, lowp > lowp_dmat3x2
 3 columns of 2 components matrix of double-precision floating-point numbers using low precision arithmetic in terms of ULPs. More...

typedef mat< 3, 2, double, mediump > mediump_dmat3x2
 3 columns of 2 components matrix of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs. More...
 

Detailed Description

Core features

Definition in file matrix_double3x2_precision.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00070_source.html ================================================ 0.9.9 API documentation: matrix_double3x2_precision.hpp Source File
0.9.9 API documentation
matrix_double3x2_precision.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat3x2.hpp"

namespace glm
{
    typedef mat<3, 2, double, lowp> lowp_dmat3x2;
    typedef mat<3, 2, double, mediump> mediump_dmat3x2;
    typedef mat<3, 2, double, highp> highp_dmat3x2;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00071.html ================================================ 0.9.9 API documentation: matrix_double3x3.hpp File Reference
0.9.9 API documentation
matrix_double3x3.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 3, 3, double, defaultp > dmat3
 3 columns of 3 components matrix of double-precision floating-point numbers. More...
 
typedef mat< 3, 3, double, defaultp > dmat3x3
 3 columns of 3 components matrix of double-precision floating-point numbers. More...
 

Detailed Description

Core features

Definition in file matrix_double3x3.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00071_source.html ================================================ 0.9.9 API documentation: matrix_double3x3.hpp Source File
0.9.9 API documentation
matrix_double3x3.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat3x3.hpp"

namespace glm
{
    typedef mat<3, 3, double, defaultp> dmat3x3;
    typedef mat<3, 3, double, defaultp> dmat3;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00072.html ================================================ 0.9.9 API documentation: matrix_double3x3_precision.hpp File Reference
0.9.9 API documentation
matrix_double3x3_precision.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 3, 3, double, highp > highp_dmat3
 3 columns of 3 components matrix of double-precision floating-point numbers using high precision arithmetic in terms of ULPs. More...

typedef mat< 3, 3, double, highp > highp_dmat3x3
 3 columns of 3 components matrix of double-precision floating-point numbers using high precision arithmetic in terms of ULPs. More...

typedef mat< 3, 3, double, lowp > lowp_dmat3
 3 columns of 3 components matrix of double-precision floating-point numbers using low precision arithmetic in terms of ULPs. More...

typedef mat< 3, 3, double, lowp > lowp_dmat3x3
 3 columns of 3 components matrix of double-precision floating-point numbers using low precision arithmetic in terms of ULPs. More...

typedef mat< 3, 3, double, mediump > mediump_dmat3
 3 columns of 3 components matrix of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs. More...

typedef mat< 3, 3, double, mediump > mediump_dmat3x3
 3 columns of 3 components matrix of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs. More...
 

Detailed Description

Core features

Definition in file matrix_double3x3_precision.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00072_source.html ================================================ 0.9.9 API documentation: matrix_double3x3_precision.hpp Source File
0.9.9 API documentation
matrix_double3x3_precision.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat3x3.hpp"

namespace glm
{
    typedef mat<3, 3, double, lowp> lowp_dmat3;
    typedef mat<3, 3, double, mediump> mediump_dmat3;
    typedef mat<3, 3, double, highp> highp_dmat3;

    typedef mat<3, 3, double, lowp> lowp_dmat3x3;
    typedef mat<3, 3, double, mediump> mediump_dmat3x3;
    typedef mat<3, 3, double, highp> highp_dmat3x3;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00073.html ================================================ 0.9.9 API documentation: matrix_double3x4.hpp File Reference
0.9.9 API documentation
matrix_double3x4.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 3, 4, double, defaultp > dmat3x4
 3 columns of 4 components matrix of double-precision floating-point numbers. More...
 

Detailed Description

Core features

Definition in file matrix_double3x4.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00073_source.html ================================================ 0.9.9 API documentation: matrix_double3x4.hpp Source File
0.9.9 API documentation
matrix_double3x4.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat3x4.hpp"

namespace glm
{
    typedef mat<3, 4, double, defaultp> dmat3x4;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00074.html ================================================ 0.9.9 API documentation: matrix_double3x4_precision.hpp File Reference
0.9.9 API documentation
matrix_double3x4_precision.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 3, 4, double, highp > highp_dmat3x4
 3 columns of 4 components matrix of double-precision floating-point numbers using high precision arithmetic in terms of ULPs. More...

typedef mat< 3, 4, double, lowp > lowp_dmat3x4
 3 columns of 4 components matrix of double-precision floating-point numbers using low precision arithmetic in terms of ULPs. More...

typedef mat< 3, 4, double, mediump > mediump_dmat3x4
 3 columns of 4 components matrix of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs. More...
 

Detailed Description

Core features

Definition in file matrix_double3x4_precision.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00074_source.html ================================================ 0.9.9 API documentation: matrix_double3x4_precision.hpp Source File
0.9.9 API documentation
matrix_double3x4_precision.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat3x4.hpp"

namespace glm
{
    typedef mat<3, 4, double, lowp> lowp_dmat3x4;
    typedef mat<3, 4, double, mediump> mediump_dmat3x4;
    typedef mat<3, 4, double, highp> highp_dmat3x4;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00075.html ================================================ 0.9.9 API documentation: matrix_double4x2.hpp File Reference
0.9.9 API documentation
matrix_double4x2.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 4, 2, double, defaultp > dmat4x2
 4 columns of 2 components matrix of double-precision floating-point numbers. More...
 

Detailed Description

Core features

Definition in file matrix_double4x2.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00075_source.html ================================================ 0.9.9 API documentation: matrix_double4x2.hpp Source File
0.9.9 API documentation
matrix_double4x2.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat4x2.hpp"

namespace glm
{
    typedef mat<4, 2, double, defaultp> dmat4x2;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00076.html ================================================ 0.9.9 API documentation: matrix_double4x2_precision.hpp File Reference
0.9.9 API documentation
matrix_double4x2_precision.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 4, 2, double, highp > highp_dmat4x2
 4 columns of 2 components matrix of double-precision floating-point numbers using high precision arithmetic in terms of ULPs. More...

typedef mat< 4, 2, double, lowp > lowp_dmat4x2
 4 columns of 2 components matrix of double-precision floating-point numbers using low precision arithmetic in terms of ULPs. More...

typedef mat< 4, 2, double, mediump > mediump_dmat4x2
 4 columns of 2 components matrix of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs. More...
 

Detailed Description

Core features

Definition in file matrix_double4x2_precision.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00076_source.html ================================================ 0.9.9 API documentation: matrix_double4x2_precision.hpp Source File
0.9.9 API documentation
matrix_double4x2_precision.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat4x2.hpp"

namespace glm
{
    typedef mat<4, 2, double, lowp> lowp_dmat4x2;
    typedef mat<4, 2, double, mediump> mediump_dmat4x2;
    typedef mat<4, 2, double, highp> highp_dmat4x2;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00077.html ================================================ 0.9.9 API documentation: matrix_double4x3.hpp File Reference
0.9.9 API documentation
matrix_double4x3.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 4, 3, double, defaultp > dmat4x3
 4 columns of 3 components matrix of double-precision floating-point numbers. More...
 

Detailed Description

Core features

Definition in file matrix_double4x3.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00077_source.html ================================================ 0.9.9 API documentation: matrix_double4x3.hpp Source File
0.9.9 API documentation
matrix_double4x3.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat4x3.hpp"

namespace glm
{
    typedef mat<4, 3, double, defaultp> dmat4x3;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00078.html ================================================ 0.9.9 API documentation: matrix_double4x3_precision.hpp File Reference
0.9.9 API documentation
matrix_double4x3_precision.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 4, 3, double, highp > highp_dmat4x3
 4 columns of 3 components matrix of double-precision floating-point numbers using high precision arithmetic in terms of ULPs. More...

typedef mat< 4, 3, double, lowp > lowp_dmat4x3
 4 columns of 3 components matrix of double-precision floating-point numbers using low precision arithmetic in terms of ULPs. More...

typedef mat< 4, 3, double, mediump > mediump_dmat4x3
 4 columns of 3 components matrix of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs. More...
 

Detailed Description

Core features

Definition in file matrix_double4x3_precision.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00078_source.html ================================================ 0.9.9 API documentation: matrix_double4x3_precision.hpp Source File
0.9.9 API documentation
matrix_double4x3_precision.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat4x3.hpp"

namespace glm
{
    typedef mat<4, 3, double, lowp> lowp_dmat4x3;
    typedef mat<4, 3, double, mediump> mediump_dmat4x3;
    typedef mat<4, 3, double, highp> highp_dmat4x3;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00079.html ================================================ 0.9.9 API documentation: matrix_double4x4.hpp File Reference
0.9.9 API documentation
matrix_double4x4.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 4, 4, double, defaultp > dmat4
 4 columns of 4 components matrix of double-precision floating-point numbers. More...
 
typedef mat< 4, 4, double, defaultp > dmat4x4
 4 columns of 4 components matrix of double-precision floating-point numbers. More...
 

Detailed Description

Core features

Definition in file matrix_double4x4.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00079_source.html ================================================ 0.9.9 API documentation: matrix_double4x4.hpp Source File
0.9.9 API documentation
matrix_double4x4.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat4x4.hpp"

namespace glm
{
    typedef mat<4, 4, double, defaultp> dmat4x4;
    typedef mat<4, 4, double, defaultp> dmat4;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00080.html ================================================ 0.9.9 API documentation: matrix_double4x4_precision.hpp File Reference
0.9.9 API documentation
matrix_double4x4_precision.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 4, 4, double, highp > highp_dmat4
 4 columns of 4 components matrix of double-precision floating-point numbers using high precision arithmetic in terms of ULPs. More...

typedef mat< 4, 4, double, highp > highp_dmat4x4
 4 columns of 4 components matrix of double-precision floating-point numbers using high precision arithmetic in terms of ULPs. More...

typedef mat< 4, 4, double, lowp > lowp_dmat4
 4 columns of 4 components matrix of double-precision floating-point numbers using low precision arithmetic in terms of ULPs. More...

typedef mat< 4, 4, double, lowp > lowp_dmat4x4
 4 columns of 4 components matrix of double-precision floating-point numbers using low precision arithmetic in terms of ULPs. More...

typedef mat< 4, 4, double, mediump > mediump_dmat4
 4 columns of 4 components matrix of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs. More...

typedef mat< 4, 4, double, mediump > mediump_dmat4x4
 4 columns of 4 components matrix of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs. More...
 

Detailed Description

Core features

Definition in file matrix_double4x4_precision.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00080_source.html ================================================ 0.9.9 API documentation: matrix_double4x4_precision.hpp Source File
0.9.9 API documentation
matrix_double4x4_precision.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat4x4.hpp"

namespace glm
{
    typedef mat<4, 4, double, lowp> lowp_dmat4;
    typedef mat<4, 4, double, mediump> mediump_dmat4;
    typedef mat<4, 4, double, highp> highp_dmat4;

    typedef mat<4, 4, double, lowp> lowp_dmat4x4;
    typedef mat<4, 4, double, mediump> mediump_dmat4x4;
    typedef mat<4, 4, double, highp> highp_dmat4x4;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00081.html ================================================ 0.9.9 API documentation: matrix_factorisation.hpp File Reference
0.9.9 API documentation
matrix_factorisation.hpp File Reference

GLM_GTX_matrix_factorisation More...

Go to the source code of this file.

Functions

template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL mat< C, R, T, Q > fliplr (mat< C, R, T, Q > const &in)
 Flips the matrix columns right and left. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL mat< C, R, T, Q > flipud (mat< C, R, T, Q > const &in)
 Flips the matrix rows up and down. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL void qr_decompose (mat< C, R, T, Q > const &in, mat<(C< R?C:R), R, T, Q > &q, mat< C,(C< R?C:R), T, Q > &r)
 Performs QR factorisation of a matrix. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL void rq_decompose (mat< C, R, T, Q > const &in, mat<(C< R?C:R), R, T, Q > &r, mat< C,(C< R?C:R), T, Q > &q)
 Performs RQ factorisation of a matrix. More...
 

Detailed Description

GLM_GTX_matrix_factorisation

See also
Core features (dependence)

Definition in file matrix_factorisation.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00081_source.html ================================================ 0.9.9 API documentation: matrix_factorisation.hpp Source File
0.9.9 API documentation
matrix_factorisation.hpp
Go to the documentation of this file.
#pragma once

// Dependency:
#include "../glm.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	ifndef GLM_ENABLE_EXPERIMENTAL
#		pragma message("GLM: GLM_GTX_matrix_factorisation is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#	else
#		pragma message("GLM: GLM_GTX_matrix_factorisation extension included")
#	endif
#endif

/*
Suggestions:
 - Move helper functions flipud and fliplr to another file: They may be helpful in more general circumstances.
 - Implement other types of matrix factorisation, such as: QL and LQ, L(D)U, eigendecompositions, etc...
*/

namespace glm
{
	template <length_t C, length_t R, typename T, qualifier Q>
	GLM_FUNC_DECL mat<C, R, T, Q> flipud(mat<C, R, T, Q> const& in);

	template <length_t C, length_t R, typename T, qualifier Q>
	GLM_FUNC_DECL mat<C, R, T, Q> fliplr(mat<C, R, T, Q> const& in);

	template <length_t C, length_t R, typename T, qualifier Q>
	GLM_FUNC_DECL void qr_decompose(mat<C, R, T, Q> const& in, mat<(C < R ? C : R), R, T, Q>& q, mat<C, (C < R ? C : R), T, Q>& r);

	template <length_t C, length_t R, typename T, qualifier Q>
	GLM_FUNC_DECL void rq_decompose(mat<C, R, T, Q> const& in, mat<(C < R ? C : R), R, T, Q>& r, mat<C, (C < R ? C : R), T, Q>& q);
}

#include "matrix_factorisation.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00082.html ================================================ 0.9.9 API documentation: matrix_float2x2.hpp File Reference
0.9.9 API documentation
matrix_float2x2.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 2, 2, float, defaultp > mat2
 2 columns of 2 components matrix of single-precision floating-point numbers. More...
 
typedef mat< 2, 2, float, defaultp > mat2x2
 2 columns of 2 components matrix of single-precision floating-point numbers. More...
 

Detailed Description

Core features

Definition in file matrix_float2x2.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00082_source.html ================================================ 0.9.9 API documentation: matrix_float2x2.hpp Source File
0.9.9 API documentation
matrix_float2x2.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat2x2.hpp"

namespace glm
{
	typedef mat<2, 2, float, defaultp> mat2x2;

	typedef mat<2, 2, float, defaultp> mat2;

}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00083.html ================================================ 0.9.9 API documentation: matrix_float2x2_precision.hpp File Reference
0.9.9 API documentation
matrix_float2x2_precision.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 2, 2, float, highp > highp_mat2
 2 columns of 2 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 2, float, highp > highp_mat2x2
 2 columns of 2 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 2, float, lowp > lowp_mat2
 2 columns of 2 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 2, float, lowp > lowp_mat2x2
 2 columns of 2 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 2, float, mediump > mediump_mat2
 2 columns of 2 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 2, float, mediump > mediump_mat2x2
 2 columns of 2 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 

Detailed Description

Core features

Definition in file matrix_float2x2_precision.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00083_source.html ================================================ 0.9.9 API documentation: matrix_float2x2_precision.hpp Source File
0.9.9 API documentation
matrix_float2x2_precision.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat2x2.hpp"

namespace glm
{
	typedef mat<2, 2, float, lowp> lowp_mat2;

	typedef mat<2, 2, float, mediump> mediump_mat2;

	typedef mat<2, 2, float, highp> highp_mat2;

	typedef mat<2, 2, float, lowp> lowp_mat2x2;

	typedef mat<2, 2, float, mediump> mediump_mat2x2;

	typedef mat<2, 2, float, highp> highp_mat2x2;

}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00084.html ================================================ 0.9.9 API documentation: matrix_float2x3.hpp File Reference
0.9.9 API documentation
matrix_float2x3.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 2, 3, float, defaultp > mat2x3
 2 columns of 3 components matrix of single-precision floating-point numbers. More...
 

Detailed Description

Core features

Definition in file matrix_float2x3.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00084_source.html ================================================ 0.9.9 API documentation: matrix_float2x3.hpp Source File
0.9.9 API documentation
matrix_float2x3.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat2x3.hpp"

namespace glm
{
	typedef mat<2, 3, float, defaultp> mat2x3;

}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00085.html ================================================ 0.9.9 API documentation: matrix_float2x3_precision.hpp File Reference
0.9.9 API documentation
matrix_float2x3_precision.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 2, 3, float, highp > highp_mat2x3
 2 columns of 3 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 3, float, lowp > lowp_mat2x3
 2 columns of 3 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 3, float, mediump > mediump_mat2x3
 2 columns of 3 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 

Detailed Description

Core features

Definition in file matrix_float2x3_precision.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00085_source.html ================================================ 0.9.9 API documentation: matrix_float2x3_precision.hpp Source File
0.9.9 API documentation
matrix_float2x3_precision.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat2x3.hpp"

namespace glm
{
	typedef mat<2, 3, float, lowp> lowp_mat2x3;

	typedef mat<2, 3, float, mediump> mediump_mat2x3;

	typedef mat<2, 3, float, highp> highp_mat2x3;

}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00086.html ================================================ 0.9.9 API documentation: matrix_float2x4.hpp File Reference
0.9.9 API documentation
matrix_float2x4.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 2, 4, float, defaultp > mat2x4
 2 columns of 4 components matrix of single-precision floating-point numbers. More...
 

Detailed Description

Core features

Definition in file matrix_float2x4.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00086_source.html ================================================ 0.9.9 API documentation: matrix_float2x4.hpp Source File
0.9.9 API documentation
matrix_float2x4.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat2x4.hpp"

namespace glm
{
	typedef mat<2, 4, float, defaultp> mat2x4;

}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00087.html ================================================ 0.9.9 API documentation: matrix_float2x4_precision.hpp File Reference
0.9.9 API documentation
matrix_float2x4_precision.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 2, 4, float, highp > highp_mat2x4
 2 columns of 4 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 4, float, lowp > lowp_mat2x4
 2 columns of 4 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 4, float, mediump > mediump_mat2x4
 2 columns of 4 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 

Detailed Description

Core features

Definition in file matrix_float2x4_precision.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00087_source.html ================================================ 0.9.9 API documentation: matrix_float2x4_precision.hpp Source File
0.9.9 API documentation
matrix_float2x4_precision.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat2x4.hpp"

namespace glm
{
	typedef mat<2, 4, float, lowp> lowp_mat2x4;

	typedef mat<2, 4, float, mediump> mediump_mat2x4;

	typedef mat<2, 4, float, highp> highp_mat2x4;

}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00088.html ================================================ 0.9.9 API documentation: matrix_float3x2.hpp File Reference
0.9.9 API documentation
matrix_float3x2.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 3, 2, float, defaultp > mat3x2
 3 columns of 2 components matrix of single-precision floating-point numbers. More...
 

Detailed Description

Core features

Definition in file matrix_float3x2.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00088_source.html ================================================ 0.9.9 API documentation: matrix_float3x2.hpp Source File
0.9.9 API documentation
matrix_float3x2.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat3x2.hpp"

namespace glm
{
	typedef mat<3, 2, float, defaultp> mat3x2;

}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00089.html ================================================ 0.9.9 API documentation: matrix_float3x2_precision.hpp File Reference
0.9.9 API documentation
matrix_float3x2_precision.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 3, 2, float, highp > highp_mat3x2
 3 columns of 2 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 2, float, lowp > lowp_mat3x2
 3 columns of 2 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 2, float, mediump > mediump_mat3x2
 3 columns of 2 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 

Detailed Description

Core features

Definition in file matrix_float3x2_precision.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00089_source.html ================================================ 0.9.9 API documentation: matrix_float3x2_precision.hpp Source File
0.9.9 API documentation
matrix_float3x2_precision.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat3x2.hpp"

namespace glm
{
	typedef mat<3, 2, float, lowp> lowp_mat3x2;

	typedef mat<3, 2, float, mediump> mediump_mat3x2;

	typedef mat<3, 2, float, highp> highp_mat3x2;

}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00090.html ================================================ 0.9.9 API documentation: matrix_float3x3.hpp File Reference
0.9.9 API documentation
matrix_float3x3.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 3, 3, float, defaultp > mat3
 3 columns of 3 components matrix of single-precision floating-point numbers. More...
 
typedef mat< 3, 3, float, defaultp > mat3x3
 3 columns of 3 components matrix of single-precision floating-point numbers. More...
 

Detailed Description

Core features

Definition in file matrix_float3x3.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00090_source.html ================================================ 0.9.9 API documentation: matrix_float3x3.hpp Source File
0.9.9 API documentation
matrix_float3x3.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat3x3.hpp"

namespace glm
{
	typedef mat<3, 3, float, defaultp> mat3x3;

	typedef mat<3, 3, float, defaultp> mat3;

}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00091.html ================================================ 0.9.9 API documentation: matrix_float3x3_precision.hpp File Reference
0.9.9 API documentation
matrix_float3x3_precision.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 3, 3, float, highp > highp_mat3
 3 columns of 3 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 3, float, highp > highp_mat3x3
 3 columns of 3 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 3, float, lowp > lowp_mat3
 3 columns of 3 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 3, float, lowp > lowp_mat3x3
 3 columns of 3 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 3, float, mediump > mediump_mat3
 3 columns of 3 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 3, float, mediump > mediump_mat3x3
 3 columns of 3 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 

Detailed Description

Core features

Definition in file matrix_float3x3_precision.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00091_source.html ================================================ 0.9.9 API documentation: matrix_float3x3_precision.hpp Source File
0.9.9 API documentation
matrix_float3x3_precision.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat3x3.hpp"

namespace glm
{
	typedef mat<3, 3, float, lowp> lowp_mat3;

	typedef mat<3, 3, float, mediump> mediump_mat3;

	typedef mat<3, 3, float, highp> highp_mat3;

	typedef mat<3, 3, float, lowp> lowp_mat3x3;

	typedef mat<3, 3, float, mediump> mediump_mat3x3;

	typedef mat<3, 3, float, highp> highp_mat3x3;

}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00092.html ================================================ 0.9.9 API documentation: matrix_float3x4.hpp File Reference
0.9.9 API documentation
matrix_float3x4.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 3, 4, float, defaultp > mat3x4
 3 columns of 4 components matrix of single-precision floating-point numbers. More...
 

Detailed Description

Core features

Definition in file matrix_float3x4.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00092_source.html ================================================ 0.9.9 API documentation: matrix_float3x4.hpp Source File
0.9.9 API documentation
matrix_float3x4.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat3x4.hpp"

namespace glm
{
	typedef mat<3, 4, float, defaultp> mat3x4;

}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00093.html ================================================ 0.9.9 API documentation: matrix_float3x4_precision.hpp File Reference
0.9.9 API documentation
matrix_float3x4_precision.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 3, 4, float, highp > highp_mat3x4
 3 columns of 4 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 4, float, lowp > lowp_mat3x4
 3 columns of 4 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 4, float, mediump > mediump_mat3x4
 3 columns of 4 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 

Detailed Description

Core features

Definition in file matrix_float3x4_precision.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00093_source.html ================================================ 0.9.9 API documentation: matrix_float3x4_precision.hpp Source File
0.9.9 API documentation
matrix_float3x4_precision.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat3x4.hpp"

namespace glm
{
	typedef mat<3, 4, float, lowp> lowp_mat3x4;

	typedef mat<3, 4, float, mediump> mediump_mat3x4;

	typedef mat<3, 4, float, highp> highp_mat3x4;

}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00094.html ================================================ 0.9.9 API documentation: matrix_float4x2.hpp File Reference
0.9.9 API documentation
matrix_float4x2.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 4, 2, float, defaultp > mat4x2
 4 columns of 2 components matrix of single-precision floating-point numbers. More...
 

Detailed Description

Core features

Definition in file matrix_float4x2.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00094_source.html ================================================ 0.9.9 API documentation: matrix_float4x2.hpp Source File
0.9.9 API documentation
matrix_float4x2.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat4x2.hpp"

namespace glm
{
	typedef mat<4, 2, float, defaultp> mat4x2;

}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00095_source.html ================================================ 0.9.9 API documentation: matrix_float4x2_precision.hpp Source File
0.9.9 API documentation
matrix_float4x2_precision.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat4x2.hpp"

namespace glm
{
	typedef mat<4, 2, float, lowp> lowp_mat4x2;

	typedef mat<4, 2, float, mediump> mediump_mat4x2;

	typedef mat<4, 2, float, highp> highp_mat4x2;

}//namespace glm
mat< 4, 2, float, mediump > mediump_mat4x2
4 columns of 2 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs.
mat< 4, 2, float, lowp > lowp_mat4x2
4 columns of 2 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs.
mat< 4, 2, float, highp > highp_mat4x2
4 columns of 2 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00096.html ================================================ 0.9.9 API documentation: matrix_float4x3.hpp File Reference
0.9.9 API documentation
matrix_float4x3.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 4, 3, float, defaultp > mat4x3
 4 columns of 3 components matrix of single-precision floating-point numbers. More...
 

Detailed Description

Core features

Definition in file matrix_float4x3.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00096_source.html ================================================ 0.9.9 API documentation: matrix_float4x3.hpp Source File
0.9.9 API documentation
matrix_float4x3.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat4x3.hpp"

namespace glm
{
	typedef mat<4, 3, float, defaultp> mat4x3;

}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00097.html ================================================ 0.9.9 API documentation: matrix_float4x3_precision.hpp File Reference
0.9.9 API documentation
matrix_float4x3_precision.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 4, 3, float, highp > highp_mat4x3
 4 columns of 3 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 3, float, lowp > lowp_mat4x3
 4 columns of 3 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 3, float, mediump > mediump_mat4x3
 4 columns of 3 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 

Detailed Description

Core features

Definition in file matrix_float4x3_precision.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00097_source.html ================================================ 0.9.9 API documentation: matrix_float4x3_precision.hpp Source File
0.9.9 API documentation
matrix_float4x3_precision.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat4x3.hpp"

namespace glm
{
	typedef mat<4, 3, float, lowp> lowp_mat4x3;

	typedef mat<4, 3, float, mediump> mediump_mat4x3;

	typedef mat<4, 3, float, highp> highp_mat4x3;

}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00098.html ================================================ 0.9.9 API documentation: matrix_float4x4.hpp File Reference
0.9.9 API documentation
matrix_float4x4.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 4, 4, float, defaultp > mat4x4
 4 columns of 4 components matrix of single-precision floating-point numbers. More...
 
typedef mat< 4, 4, float, defaultp > mat4
 4 columns of 4 components matrix of single-precision floating-point numbers. More...
 

Detailed Description

Core features

Definition in file matrix_float4x4.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00098_source.html ================================================ 0.9.9 API documentation: matrix_float4x4.hpp Source File
0.9.9 API documentation
matrix_float4x4.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat4x4.hpp"

namespace glm
{
	typedef mat<4, 4, float, defaultp> mat4x4;

	typedef mat<4, 4, float, defaultp> mat4;

}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00099.html ================================================ 0.9.9 API documentation: matrix_float4x4_precision.hpp File Reference
0.9.9 API documentation
matrix_float4x4_precision.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef mat< 4, 4, float, highp > highp_mat4
 4 columns of 4 components matrix of single-precision floating-point numbers using high precision arithmetic in terms of ULPs. More...
 
typedef mat< 4, 4, float, highp > highp_mat4x4
 4 columns of 4 components matrix of single-precision floating-point numbers using high precision arithmetic in terms of ULPs. More...
 
typedef mat< 4, 4, float, lowp > lowp_mat4
 4 columns of 4 components matrix of single-precision floating-point numbers using low precision arithmetic in terms of ULPs. More...
 
typedef mat< 4, 4, float, lowp > lowp_mat4x4
 4 columns of 4 components matrix of single-precision floating-point numbers using low precision arithmetic in terms of ULPs. More...
 
typedef mat< 4, 4, float, mediump > mediump_mat4
 4 columns of 4 components matrix of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs. More...
 
typedef mat< 4, 4, float, mediump > mediump_mat4x4
 4 columns of 4 components matrix of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs. More...
 

Detailed Description

Core features

Definition in file matrix_float4x4_precision.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00099_source.html ================================================ 0.9.9 API documentation: matrix_float4x4_precision.hpp Source File
0.9.9 API documentation
matrix_float4x4_precision.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_mat4x4.hpp"

namespace glm
{
	typedef mat<4, 4, float, lowp> lowp_mat4;

	typedef mat<4, 4, float, mediump> mediump_mat4;

	typedef mat<4, 4, float, highp> highp_mat4;

	typedef mat<4, 4, float, lowp> lowp_mat4x4;

	typedef mat<4, 4, float, mediump> mediump_mat4x4;

	typedef mat<4, 4, float, highp> highp_mat4x4;

}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00100.html ================================================ 0.9.9 API documentation: matrix_integer.hpp File Reference
0.9.9 API documentation
matrix_integer.hpp File Reference

GLM_GTC_matrix_integer More...

Go to the source code of this file.

Typedefs

typedef mat< 2, 2, int, highp > highp_imat2
 High-qualifier signed integer 2x2 matrix. More...
 
typedef mat< 2, 2, int, highp > highp_imat2x2
 High-qualifier signed integer 2x2 matrix. More...
 
typedef mat< 2, 3, int, highp > highp_imat2x3
 High-qualifier signed integer 2x3 matrix. More...
 
typedef mat< 2, 4, int, highp > highp_imat2x4
 High-qualifier signed integer 2x4 matrix. More...
 
typedef mat< 3, 3, int, highp > highp_imat3
 High-qualifier signed integer 3x3 matrix. More...
 
typedef mat< 3, 2, int, highp > highp_imat3x2
 High-qualifier signed integer 3x2 matrix. More...
 
typedef mat< 3, 3, int, highp > highp_imat3x3
 High-qualifier signed integer 3x3 matrix. More...
 
typedef mat< 3, 4, int, highp > highp_imat3x4
 High-qualifier signed integer 3x4 matrix. More...
 
typedef mat< 4, 4, int, highp > highp_imat4
 High-qualifier signed integer 4x4 matrix. More...
 
typedef mat< 4, 2, int, highp > highp_imat4x2
 High-qualifier signed integer 4x2 matrix. More...
 
typedef mat< 4, 3, int, highp > highp_imat4x3
 High-qualifier signed integer 4x3 matrix. More...
 
typedef mat< 4, 4, int, highp > highp_imat4x4
 High-qualifier signed integer 4x4 matrix. More...
 
typedef mat< 2, 2, uint, highp > highp_umat2
 High-qualifier unsigned integer 2x2 matrix. More...
 
typedef mat< 2, 2, uint, highp > highp_umat2x2
 High-qualifier unsigned integer 2x2 matrix. More...
 
typedef mat< 2, 3, uint, highp > highp_umat2x3
 High-qualifier unsigned integer 2x3 matrix. More...
 
typedef mat< 2, 4, uint, highp > highp_umat2x4
 High-qualifier unsigned integer 2x4 matrix. More...
 
typedef mat< 3, 3, uint, highp > highp_umat3
 High-qualifier unsigned integer 3x3 matrix. More...
 
typedef mat< 3, 2, uint, highp > highp_umat3x2
 High-qualifier unsigned integer 3x2 matrix. More...
 
typedef mat< 3, 3, uint, highp > highp_umat3x3
 High-qualifier unsigned integer 3x3 matrix. More...
 
typedef mat< 3, 4, uint, highp > highp_umat3x4
 High-qualifier unsigned integer 3x4 matrix. More...
 
typedef mat< 4, 4, uint, highp > highp_umat4
 High-qualifier unsigned integer 4x4 matrix. More...
 
typedef mat< 4, 2, uint, highp > highp_umat4x2
 High-qualifier unsigned integer 4x2 matrix. More...
 
typedef mat< 4, 3, uint, highp > highp_umat4x3
 High-qualifier unsigned integer 4x3 matrix. More...
 
typedef mat< 4, 4, uint, highp > highp_umat4x4
 High-qualifier unsigned integer 4x4 matrix. More...
 
typedef mediump_imat2 imat2
 Signed integer 2x2 matrix. More...
 
typedef mediump_imat2x2 imat2x2
 Signed integer 2x2 matrix. More...
 
typedef mediump_imat2x3 imat2x3
 Signed integer 2x3 matrix. More...
 
typedef mediump_imat2x4 imat2x4
 Signed integer 2x4 matrix. More...
 
typedef mediump_imat3 imat3
 Signed integer 3x3 matrix. More...
 
typedef mediump_imat3x2 imat3x2
 Signed integer 3x2 matrix. More...
 
typedef mediump_imat3x3 imat3x3
 Signed integer 3x3 matrix. More...
 
typedef mediump_imat3x4 imat3x4
 Signed integer 3x4 matrix. More...
 
typedef mediump_imat4 imat4
 Signed integer 4x4 matrix. More...
 
typedef mediump_imat4x2 imat4x2
 Signed integer 4x2 matrix. More...
 
typedef mediump_imat4x3 imat4x3
 Signed integer 4x3 matrix. More...
 
typedef mediump_imat4x4 imat4x4
 Signed integer 4x4 matrix. More...
 
typedef mat< 2, 2, int, lowp > lowp_imat2
 Low-qualifier signed integer 2x2 matrix. More...
 
typedef mat< 2, 2, int, lowp > lowp_imat2x2
 Low-qualifier signed integer 2x2 matrix. More...
 
typedef mat< 2, 3, int, lowp > lowp_imat2x3
 Low-qualifier signed integer 2x3 matrix. More...
 
typedef mat< 2, 4, int, lowp > lowp_imat2x4
 Low-qualifier signed integer 2x4 matrix. More...
 
typedef mat< 3, 3, int, lowp > lowp_imat3
 Low-qualifier signed integer 3x3 matrix. More...
 
typedef mat< 3, 2, int, lowp > lowp_imat3x2
 Low-qualifier signed integer 3x2 matrix. More...
 
typedef mat< 3, 3, int, lowp > lowp_imat3x3
 Low-qualifier signed integer 3x3 matrix. More...
 
typedef mat< 3, 4, int, lowp > lowp_imat3x4
 Low-qualifier signed integer 3x4 matrix. More...
 
typedef mat< 4, 4, int, lowp > lowp_imat4
 Low-qualifier signed integer 4x4 matrix. More...
 
typedef mat< 4, 2, int, lowp > lowp_imat4x2
 Low-qualifier signed integer 4x2 matrix. More...
 
typedef mat< 4, 3, int, lowp > lowp_imat4x3
 Low-qualifier signed integer 4x3 matrix. More...
 
typedef mat< 4, 4, int, lowp > lowp_imat4x4
 Low-qualifier signed integer 4x4 matrix. More...
 
typedef mat< 2, 2, uint, lowp > lowp_umat2
 Low-qualifier unsigned integer 2x2 matrix. More...
 
typedef mat< 2, 2, uint, lowp > lowp_umat2x2
 Low-qualifier unsigned integer 2x2 matrix. More...
 
typedef mat< 2, 3, uint, lowp > lowp_umat2x3
 Low-qualifier unsigned integer 2x3 matrix. More...
 
typedef mat< 2, 4, uint, lowp > lowp_umat2x4
 Low-qualifier unsigned integer 2x4 matrix. More...
 
typedef mat< 3, 3, uint, lowp > lowp_umat3
 Low-qualifier unsigned integer 3x3 matrix. More...
 
typedef mat< 3, 2, uint, lowp > lowp_umat3x2
 Low-qualifier unsigned integer 3x2 matrix. More...
 
typedef mat< 3, 3, uint, lowp > lowp_umat3x3
 Low-qualifier unsigned integer 3x3 matrix. More...
 
typedef mat< 3, 4, uint, lowp > lowp_umat3x4
 Low-qualifier unsigned integer 3x4 matrix. More...
 
typedef mat< 4, 4, uint, lowp > lowp_umat4
 Low-qualifier unsigned integer 4x4 matrix. More...
 
typedef mat< 4, 2, uint, lowp > lowp_umat4x2
 Low-qualifier unsigned integer 4x2 matrix. More...
 
typedef mat< 4, 3, uint, lowp > lowp_umat4x3
 Low-qualifier unsigned integer 4x3 matrix. More...
 
typedef mat< 4, 4, uint, lowp > lowp_umat4x4
 Low-qualifier unsigned integer 4x4 matrix. More...
 
typedef mat< 2, 2, int, mediump > mediump_imat2
 Medium-qualifier signed integer 2x2 matrix. More...
 
typedef mat< 2, 2, int, mediump > mediump_imat2x2
 Medium-qualifier signed integer 2x2 matrix. More...
 
typedef mat< 2, 3, int, mediump > mediump_imat2x3
 Medium-qualifier signed integer 2x3 matrix. More...
 
typedef mat< 2, 4, int, mediump > mediump_imat2x4
 Medium-qualifier signed integer 2x4 matrix. More...
 
typedef mat< 3, 3, int, mediump > mediump_imat3
 Medium-qualifier signed integer 3x3 matrix. More...
 
typedef mat< 3, 2, int, mediump > mediump_imat3x2
 Medium-qualifier signed integer 3x2 matrix. More...
 
typedef mat< 3, 3, int, mediump > mediump_imat3x3
 Medium-qualifier signed integer 3x3 matrix. More...
 
typedef mat< 3, 4, int, mediump > mediump_imat3x4
 Medium-qualifier signed integer 3x4 matrix. More...
 
typedef mat< 4, 4, int, mediump > mediump_imat4
 Medium-qualifier signed integer 4x4 matrix. More...
 
typedef mat< 4, 2, int, mediump > mediump_imat4x2
 Medium-qualifier signed integer 4x2 matrix. More...
 
typedef mat< 4, 3, int, mediump > mediump_imat4x3
 Medium-qualifier signed integer 4x3 matrix. More...
 
typedef mat< 4, 4, int, mediump > mediump_imat4x4
 Medium-qualifier signed integer 4x4 matrix. More...
 
typedef mat< 2, 2, uint, mediump > mediump_umat2
 Medium-qualifier unsigned integer 2x2 matrix. More...
 
typedef mat< 2, 2, uint, mediump > mediump_umat2x2
 Medium-qualifier unsigned integer 2x2 matrix. More...
 
typedef mat< 2, 3, uint, mediump > mediump_umat2x3
 Medium-qualifier unsigned integer 2x3 matrix. More...
 
typedef mat< 2, 4, uint, mediump > mediump_umat2x4
 Medium-qualifier unsigned integer 2x4 matrix. More...
 
typedef mat< 3, 3, uint, mediump > mediump_umat3
 Medium-qualifier unsigned integer 3x3 matrix. More...
 
typedef mat< 3, 2, uint, mediump > mediump_umat3x2
 Medium-qualifier unsigned integer 3x2 matrix. More...
 
typedef mat< 3, 3, uint, mediump > mediump_umat3x3
 Medium-qualifier unsigned integer 3x3 matrix. More...
 
typedef mat< 3, 4, uint, mediump > mediump_umat3x4
 Medium-qualifier unsigned integer 3x4 matrix. More...
 
typedef mat< 4, 4, uint, mediump > mediump_umat4
 Medium-qualifier unsigned integer 4x4 matrix. More...
 
typedef mat< 4, 2, uint, mediump > mediump_umat4x2
 Medium-qualifier unsigned integer 4x2 matrix. More...
 
typedef mat< 4, 3, uint, mediump > mediump_umat4x3
 Medium-qualifier unsigned integer 4x3 matrix. More...
 
typedef mat< 4, 4, uint, mediump > mediump_umat4x4
 Medium-qualifier unsigned integer 4x4 matrix. More...
 
typedef mediump_umat2 umat2
 Unsigned integer 2x2 matrix. More...
 
typedef mediump_umat2x2 umat2x2
 Unsigned integer 2x2 matrix. More...
 
typedef mediump_umat2x3 umat2x3
 Unsigned integer 2x3 matrix. More...
 
typedef mediump_umat2x4 umat2x4
 Unsigned integer 2x4 matrix. More...
 
typedef mediump_umat3 umat3
 Unsigned integer 3x3 matrix. More...
 
typedef mediump_umat3x2 umat3x2
 Unsigned integer 3x2 matrix. More...
 
typedef mediump_umat3x3 umat3x3
 Unsigned integer 3x3 matrix. More...
 
typedef mediump_umat3x4 umat3x4
 Unsigned integer 3x4 matrix. More...
 
typedef mediump_umat4 umat4
 Unsigned integer 4x4 matrix. More...
 
typedef mediump_umat4x2 umat4x2
 Unsigned integer 4x2 matrix. More...
 
typedef mediump_umat4x3 umat4x3
 Unsigned integer 4x3 matrix. More...
 
typedef mediump_umat4x4 umat4x4
 Unsigned integer 4x4 matrix. More...
 

Detailed Description

GLM_GTC_matrix_integer

See also
Core features (dependence)

Definition in file matrix_integer.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00100_source.html ================================================ 0.9.9 API documentation: matrix_integer.hpp Source File
0.9.9 API documentation
matrix_integer.hpp
Go to the documentation of this file.
#pragma once

// Dependency:
#include "../mat2x2.hpp"
#include "../mat2x3.hpp"
#include "../mat2x4.hpp"
#include "../mat3x2.hpp"
#include "../mat3x3.hpp"
#include "../mat3x4.hpp"
#include "../mat4x2.hpp"
#include "../mat4x3.hpp"
#include "../mat4x4.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_GTC_matrix_integer extension included")
#endif

namespace glm
{
	typedef mat<2, 2, int, highp> highp_imat2;
	typedef mat<3, 3, int, highp> highp_imat3;
	typedef mat<4, 4, int, highp> highp_imat4;

	typedef mat<2, 2, int, highp> highp_imat2x2;
	typedef mat<2, 3, int, highp> highp_imat2x3;
	typedef mat<2, 4, int, highp> highp_imat2x4;
	typedef mat<3, 2, int, highp> highp_imat3x2;
	typedef mat<3, 3, int, highp> highp_imat3x3;
	typedef mat<3, 4, int, highp> highp_imat3x4;
	typedef mat<4, 2, int, highp> highp_imat4x2;
	typedef mat<4, 3, int, highp> highp_imat4x3;
	typedef mat<4, 4, int, highp> highp_imat4x4;

	typedef mat<2, 2, int, mediump> mediump_imat2;
	typedef mat<3, 3, int, mediump> mediump_imat3;
	typedef mat<4, 4, int, mediump> mediump_imat4;

	typedef mat<2, 2, int, mediump> mediump_imat2x2;
	typedef mat<2, 3, int, mediump> mediump_imat2x3;
	typedef mat<2, 4, int, mediump> mediump_imat2x4;
	typedef mat<3, 2, int, mediump> mediump_imat3x2;
	typedef mat<3, 3, int, mediump> mediump_imat3x3;
	typedef mat<3, 4, int, mediump> mediump_imat3x4;
	typedef mat<4, 2, int, mediump> mediump_imat4x2;
	typedef mat<4, 3, int, mediump> mediump_imat4x3;
	typedef mat<4, 4, int, mediump> mediump_imat4x4;

	typedef mat<2, 2, int, lowp> lowp_imat2;
	typedef mat<3, 3, int, lowp> lowp_imat3;
	typedef mat<4, 4, int, lowp> lowp_imat4;

	typedef mat<2, 2, int, lowp> lowp_imat2x2;
	typedef mat<2, 3, int, lowp> lowp_imat2x3;
	typedef mat<2, 4, int, lowp> lowp_imat2x4;
	typedef mat<3, 2, int, lowp> lowp_imat3x2;
	typedef mat<3, 3, int, lowp> lowp_imat3x3;
	typedef mat<3, 4, int, lowp> lowp_imat3x4;
	typedef mat<4, 2, int, lowp> lowp_imat4x2;
	typedef mat<4, 3, int, lowp> lowp_imat4x3;
	typedef mat<4, 4, int, lowp> lowp_imat4x4;

	typedef mat<2, 2, uint, highp> highp_umat2;
	typedef mat<3, 3, uint, highp> highp_umat3;
	typedef mat<4, 4, uint, highp> highp_umat4;

	typedef mat<2, 2, uint, highp> highp_umat2x2;
	typedef mat<2, 3, uint, highp> highp_umat2x3;
	typedef mat<2, 4, uint, highp> highp_umat2x4;
	typedef mat<3, 2, uint, highp> highp_umat3x2;
	typedef mat<3, 3, uint, highp> highp_umat3x3;
	typedef mat<3, 4, uint, highp> highp_umat3x4;
	typedef mat<4, 2, uint, highp> highp_umat4x2;
	typedef mat<4, 3, uint, highp> highp_umat4x3;
	typedef mat<4, 4, uint, highp> highp_umat4x4;

	typedef mat<2, 2, uint, mediump> mediump_umat2;
	typedef mat<3, 3, uint, mediump> mediump_umat3;
	typedef mat<4, 4, uint, mediump> mediump_umat4;

	typedef mat<2, 2, uint, mediump> mediump_umat2x2;
	typedef mat<2, 3, uint, mediump> mediump_umat2x3;
	typedef mat<2, 4, uint, mediump> mediump_umat2x4;
	typedef mat<3, 2, uint, mediump> mediump_umat3x2;
	typedef mat<3, 3, uint, mediump> mediump_umat3x3;
	typedef mat<3, 4, uint, mediump> mediump_umat3x4;
	typedef mat<4, 2, uint, mediump> mediump_umat4x2;
	typedef mat<4, 3, uint, mediump> mediump_umat4x3;
	typedef mat<4, 4, uint, mediump> mediump_umat4x4;

	typedef mat<2, 2, uint, lowp> lowp_umat2;
	typedef mat<3, 3, uint, lowp> lowp_umat3;
	typedef mat<4, 4, uint, lowp> lowp_umat4;

	typedef mat<2, 2, uint, lowp> lowp_umat2x2;
	typedef mat<2, 3, uint, lowp> lowp_umat2x3;
	typedef mat<2, 4, uint, lowp> lowp_umat2x4;
	typedef mat<3, 2, uint, lowp> lowp_umat3x2;
	typedef mat<3, 3, uint, lowp> lowp_umat3x3;
	typedef mat<3, 4, uint, lowp> lowp_umat3x4;
	typedef mat<4, 2, uint, lowp> lowp_umat4x2;
	typedef mat<4, 3, uint, lowp> lowp_umat4x3;
	typedef mat<4, 4, uint, lowp> lowp_umat4x4;

#if(defined(GLM_PRECISION_HIGHP_INT))
	typedef highp_imat2 imat2;
	typedef highp_imat3 imat3;
	typedef highp_imat4 imat4;
	typedef highp_imat2x2 imat2x2;
	typedef highp_imat2x3 imat2x3;
	typedef highp_imat2x4 imat2x4;
	typedef highp_imat3x2 imat3x2;
	typedef highp_imat3x3 imat3x3;
	typedef highp_imat3x4 imat3x4;
	typedef highp_imat4x2 imat4x2;
	typedef highp_imat4x3 imat4x3;
	typedef highp_imat4x4 imat4x4;
#elif(defined(GLM_PRECISION_LOWP_INT))
	typedef lowp_imat2 imat2;
	typedef lowp_imat3 imat3;
	typedef lowp_imat4 imat4;
	typedef lowp_imat2x2 imat2x2;
	typedef lowp_imat2x3 imat2x3;
	typedef lowp_imat2x4 imat2x4;
	typedef lowp_imat3x2 imat3x2;
	typedef lowp_imat3x3 imat3x3;
	typedef lowp_imat3x4 imat3x4;
	typedef lowp_imat4x2 imat4x2;
	typedef lowp_imat4x3 imat4x3;
	typedef lowp_imat4x4 imat4x4;
#else //if(defined(GLM_PRECISION_MEDIUMP_INT))
	typedef mediump_imat2 imat2;
	typedef mediump_imat3 imat3;
	typedef mediump_imat4 imat4;
	typedef mediump_imat2x2 imat2x2;
	typedef mediump_imat2x3 imat2x3;
	typedef mediump_imat2x4 imat2x4;
	typedef mediump_imat3x2 imat3x2;
	typedef mediump_imat3x3 imat3x3;
	typedef mediump_imat3x4 imat3x4;
	typedef mediump_imat4x2 imat4x2;
	typedef mediump_imat4x3 imat4x3;
	typedef mediump_imat4x4 imat4x4;
#endif//GLM_PRECISION

#if(defined(GLM_PRECISION_HIGHP_UINT))
	typedef highp_umat2 umat2;
	typedef highp_umat3 umat3;
	typedef highp_umat4 umat4;
	typedef highp_umat2x2 umat2x2;
	typedef highp_umat2x3 umat2x3;
	typedef highp_umat2x4 umat2x4;
	typedef highp_umat3x2 umat3x2;
	typedef highp_umat3x3 umat3x3;
	typedef highp_umat3x4 umat3x4;
	typedef highp_umat4x2 umat4x2;
	typedef highp_umat4x3 umat4x3;
	typedef highp_umat4x4 umat4x4;
#elif(defined(GLM_PRECISION_LOWP_UINT))
	typedef lowp_umat2 umat2;
	typedef lowp_umat3 umat3;
	typedef lowp_umat4 umat4;
	typedef lowp_umat2x2 umat2x2;
	typedef lowp_umat2x3 umat2x3;
	typedef lowp_umat2x4 umat2x4;
	typedef lowp_umat3x2 umat3x2;
	typedef lowp_umat3x3 umat3x3;
	typedef lowp_umat3x4 umat3x4;
	typedef lowp_umat4x2 umat4x2;
	typedef lowp_umat4x3 umat4x3;
	typedef lowp_umat4x4 umat4x4;
#else //if(defined(GLM_PRECISION_MEDIUMP_UINT))
	typedef mediump_umat2 umat2;
	typedef mediump_umat3 umat3;
	typedef mediump_umat4 umat4;
	typedef mediump_umat2x2 umat2x2;
	typedef mediump_umat2x3 umat2x3;
	typedef mediump_umat2x4 umat2x4;
	typedef mediump_umat3x2 umat3x2;
	typedef mediump_umat3x3 umat3x3;
	typedef mediump_umat3x4 umat3x4;
	typedef mediump_umat4x2 umat4x2;
	typedef mediump_umat4x3 umat4x3;
	typedef mediump_umat4x4 umat4x4;
#endif//GLM_PRECISION

}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00101.html ================================================ 0.9.9 API documentation: matrix_interpolation.hpp File Reference
0.9.9 API documentation
matrix_interpolation.hpp File Reference

GLM_GTX_matrix_interpolation More...

Go to the source code of this file.

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL void axisAngle (mat< 4, 4, T, Q > const &Mat, vec< 3, T, Q > &Axis, T &Angle)
 Get the axis and angle of the rotation from a matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > axisAngleMatrix (vec< 3, T, Q > const &Axis, T const Angle)
 Build a matrix from axis and angle. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > extractMatrixRotation (mat< 4, 4, T, Q > const &Mat)
 Extracts the rotation part of a matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > interpolate (mat< 4, 4, T, Q > const &m1, mat< 4, 4, T, Q > const &m2, T const Delta)
 Build an interpolation of 4 x 4 matrices. More...
 

Detailed Description

GLM_GTX_matrix_interpolation

Definition in file matrix_interpolation.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00101_source.html ================================================ 0.9.9 API documentation: matrix_interpolation.hpp Source File
0.9.9 API documentation
matrix_interpolation.hpp
Go to the documentation of this file.
#pragma once

// Dependency:
#include "../glm.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	ifndef GLM_ENABLE_EXPERIMENTAL
#		pragma message("GLM: GLM_GTX_matrix_interpolation is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#	else
#		pragma message("GLM: GLM_GTX_matrix_interpolation extension included")
#	endif
#endif

namespace glm
{
	template<typename T, qualifier Q>
	GLM_FUNC_DECL void axisAngle(
		mat<4, 4, T, Q> const& Mat, vec<3, T, Q> & Axis, T & Angle);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> axisAngleMatrix(
		vec<3, T, Q> const& Axis, T const Angle);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> extractMatrixRotation(
		mat<4, 4, T, Q> const& Mat);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> interpolate(
		mat<4, 4, T, Q> const& m1, mat<4, 4, T, Q> const& m2, T const Delta);

}//namespace glm

#include "matrix_interpolation.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00102.html ================================================ 0.9.9 API documentation: matrix_inverse.hpp File Reference
0.9.9 API documentation
matrix_inverse.hpp File Reference

GLM_GTC_matrix_inverse More...

Go to the source code of this file.

Functions

template<typename genType >
GLM_FUNC_DECL genType affineInverse (genType const &m)
 Fast matrix inverse for an affine matrix. More...
 
template<typename genType >
GLM_FUNC_DECL genType inverseTranspose (genType const &m)
 Compute the inverse transpose of a matrix. More...
 

Detailed Description

GLM_GTC_matrix_inverse

See also
Core features (dependence)

Definition in file matrix_inverse.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00102_source.html ================================================ 0.9.9 API documentation: matrix_inverse.hpp Source File
0.9.9 API documentation
matrix_inverse.hpp
Go to the documentation of this file.
#pragma once

// Dependencies
#include "../detail/setup.hpp"
#include "../matrix.hpp"
#include "../mat2x2.hpp"
#include "../mat3x3.hpp"
#include "../mat4x4.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_GTC_matrix_inverse extension included")
#endif

namespace glm
{
	template<typename genType>
	GLM_FUNC_DECL genType affineInverse(genType const& m);

	template<typename genType>
	GLM_FUNC_DECL genType inverseTranspose(genType const& m);

}//namespace glm

#include "matrix_inverse.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00103.html ================================================ 0.9.9 API documentation: matrix_major_storage.hpp File Reference
matrix_major_storage.hpp File Reference

GLM_GTX_matrix_major_storage More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 2, 2, T, Q > colMajor2 (vec< 2, T, Q > const &v1, vec< 2, T, Q > const &v2)
 Build a column major matrix from column vectors. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 2, 2, T, Q > colMajor2 (mat< 2, 2, T, Q > const &m)
 Build a column major matrix from other matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 3, T, Q > colMajor3 (vec< 3, T, Q > const &v1, vec< 3, T, Q > const &v2, vec< 3, T, Q > const &v3)
 Build a column major matrix from column vectors. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 3, T, Q > colMajor3 (mat< 3, 3, T, Q > const &m)
 Build a column major matrix from other matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > colMajor4 (vec< 4, T, Q > const &v1, vec< 4, T, Q > const &v2, vec< 4, T, Q > const &v3, vec< 4, T, Q > const &v4)
 Build a column major matrix from column vectors. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > colMajor4 (mat< 4, 4, T, Q > const &m)
 Build a column major matrix from other matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 2, 2, T, Q > rowMajor2 (vec< 2, T, Q > const &v1, vec< 2, T, Q > const &v2)
 Build a row major matrix from row vectors. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 2, 2, T, Q > rowMajor2 (mat< 2, 2, T, Q > const &m)
 Build a row major matrix from other matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 3, T, Q > rowMajor3 (vec< 3, T, Q > const &v1, vec< 3, T, Q > const &v2, vec< 3, T, Q > const &v3)
 Build a row major matrix from row vectors. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 3, T, Q > rowMajor3 (mat< 3, 3, T, Q > const &m)
 Build a row major matrix from other matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > rowMajor4 (vec< 4, T, Q > const &v1, vec< 4, T, Q > const &v2, vec< 4, T, Q > const &v3, vec< 4, T, Q > const &v4)
 Build a row major matrix from row vectors. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > rowMajor4 (mat< 4, 4, T, Q > const &m)
 Build a row major matrix from other matrix. More...
 

Detailed Description

GLM_GTX_matrix_major_storage

See also
Core features (dependence)
gtx_extented_min_max (dependence)

Definition in file matrix_major_storage.hpp.
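The *Major helpers only decide whether the supplied vectors become the matrix's columns or its rows; the two results on the same inputs are transposes of each other. A plain-C++ sketch of the 2x2 pair (hypothetical free functions mirroring the GLM signatures, not GLM's implementation):

```cpp
#include <array>
#include <cassert>

using Vec2 = std::array<double, 2>;
using Mat2 = std::array<std::array<double, 2>, 2>; // mat[col][row], GLM-style column-major

// colMajor2: the given vectors become the matrix's columns.
inline Mat2 colMajor2(const Vec2& v1, const Vec2& v2) {
    return {{{v1[0], v1[1]}, {v2[0], v2[1]}}};
}

// rowMajor2: the given vectors become the matrix's rows,
// i.e. the transpose of colMajor2 on the same inputs.
inline Mat2 rowMajor2(const Vec2& v1, const Vec2& v2) {
    return {{{v1[0], v2[0]}, {v1[1], v2[1]}}};
}
```

This is handy when pasting matrices from row-major sources (textbooks, Direct3D code) into GLM's column-major storage without transposing by hand.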

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00103_source.html ================================================ 0.9.9 API documentation: matrix_major_storage.hpp Source File
matrix_major_storage.hpp
#pragma once

// Dependency:
#include "../glm.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	ifndef GLM_ENABLE_EXPERIMENTAL
#		pragma message("GLM: GLM_GTX_matrix_major_storage is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#	else
#		pragma message("GLM: GLM_GTX_matrix_major_storage extension included")
#	endif
#endif

namespace glm
{
	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<2, 2, T, Q> rowMajor2(
		vec<2, T, Q> const& v1,
		vec<2, T, Q> const& v2);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<2, 2, T, Q> rowMajor2(
		mat<2, 2, T, Q> const& m);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<3, 3, T, Q> rowMajor3(
		vec<3, T, Q> const& v1,
		vec<3, T, Q> const& v2,
		vec<3, T, Q> const& v3);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<3, 3, T, Q> rowMajor3(
		mat<3, 3, T, Q> const& m);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> rowMajor4(
		vec<4, T, Q> const& v1,
		vec<4, T, Q> const& v2,
		vec<4, T, Q> const& v3,
		vec<4, T, Q> const& v4);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> rowMajor4(
		mat<4, 4, T, Q> const& m);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<2, 2, T, Q> colMajor2(
		vec<2, T, Q> const& v1,
		vec<2, T, Q> const& v2);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<2, 2, T, Q> colMajor2(
		mat<2, 2, T, Q> const& m);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<3, 3, T, Q> colMajor3(
		vec<3, T, Q> const& v1,
		vec<3, T, Q> const& v2,
		vec<3, T, Q> const& v3);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<3, 3, T, Q> colMajor3(
		mat<3, 3, T, Q> const& m);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> colMajor4(
		vec<4, T, Q> const& v1,
		vec<4, T, Q> const& v2,
		vec<4, T, Q> const& v3,
		vec<4, T, Q> const& v4);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> colMajor4(
		mat<4, 4, T, Q> const& m);
}//namespace glm

#include "matrix_major_storage.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00104.html ================================================ 0.9.9 API documentation: matrix_operation.hpp File Reference
matrix_operation.hpp File Reference

GLM_GTX_matrix_operation More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 2, 2, T, Q > adjugate (mat< 2, 2, T, Q > const &m)
 Build an adjugate matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 3, T, Q > adjugate (mat< 3, 3, T, Q > const &m)
 Build an adjugate matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > adjugate (mat< 4, 4, T, Q > const &m)
 Build an adjugate matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 2, 2, T, Q > diagonal2x2 (vec< 2, T, Q > const &v)
 Build a diagonal matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 2, 3, T, Q > diagonal2x3 (vec< 2, T, Q > const &v)
 Build a diagonal matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 2, 4, T, Q > diagonal2x4 (vec< 2, T, Q > const &v)
 Build a diagonal matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 2, T, Q > diagonal3x2 (vec< 2, T, Q > const &v)
 Build a diagonal matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 3, T, Q > diagonal3x3 (vec< 3, T, Q > const &v)
 Build a diagonal matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 4, T, Q > diagonal3x4 (vec< 3, T, Q > const &v)
 Build a diagonal matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 2, T, Q > diagonal4x2 (vec< 2, T, Q > const &v)
 Build a diagonal matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 3, T, Q > diagonal4x3 (vec< 3, T, Q > const &v)
 Build a diagonal matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > diagonal4x4 (vec< 4, T, Q > const &v)
 Build a diagonal matrix. More...
 

Detailed Description

GLM_GTX_matrix_operation

See also
Core features (dependence)

Definition in file matrix_operation.hpp.
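The adjugate satisfies adjugate(m) * m == det(m) * I, which makes it a division-free building block for inverses, and it stays well defined even when m is singular. A self-contained sketch of the 2x2 case (plain C++, not GLM's implementation):

```cpp
#include <array>
#include <cassert>

using Mat2 = std::array<std::array<double, 2>, 2>; // row-major, for readability

// Adjugate of [[a, b], [c, d]] is [[d, -b], [-c, a]].
inline Mat2 adjugate2(const Mat2& m) {
    return {{{m[1][1], -m[0][1]}, {-m[1][0], m[0][0]}}};
}

inline double det2(const Mat2& m) {
    return m[0][0] * m[1][1] - m[0][1] * m[1][0];
}

inline Mat2 mul2(const Mat2& a, const Mat2& b) {
    Mat2 r{};
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            for (int k = 0; k < 2; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}
```

Dividing the adjugate by the determinant, when it is nonzero, yields the usual inverse; the diagonalNxM helpers simply place a vector on the main diagonal of an otherwise zero matrix.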

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00104_source.html ================================================ 0.9.9 API documentation: matrix_operation.hpp Source File
matrix_operation.hpp
#pragma once

// Dependency:
#include "../glm.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	ifndef GLM_ENABLE_EXPERIMENTAL
#		pragma message("GLM: GLM_GTX_matrix_operation is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#	else
#		pragma message("GLM: GLM_GTX_matrix_operation extension included")
#	endif
#endif

namespace glm
{
	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<2, 2, T, Q> diagonal2x2(
		vec<2, T, Q> const& v);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<2, 3, T, Q> diagonal2x3(
		vec<2, T, Q> const& v);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<2, 4, T, Q> diagonal2x4(
		vec<2, T, Q> const& v);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<3, 2, T, Q> diagonal3x2(
		vec<2, T, Q> const& v);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<3, 3, T, Q> diagonal3x3(
		vec<3, T, Q> const& v);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<3, 4, T, Q> diagonal3x4(
		vec<3, T, Q> const& v);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 2, T, Q> diagonal4x2(
		vec<2, T, Q> const& v);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 3, T, Q> diagonal4x3(
		vec<3, T, Q> const& v);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> diagonal4x4(
		vec<4, T, Q> const& v);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<2, 2, T, Q> adjugate(mat<2, 2, T, Q> const& m);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<3, 3, T, Q> adjugate(mat<3, 3, T, Q> const& m);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> adjugate(mat<4, 4, T, Q> const& m);
}//namespace glm

#include "matrix_operation.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00105.html ================================================ 0.9.9 API documentation: matrix_projection.hpp File Reference
matrix_projection.hpp File Reference

GLM_EXT_matrix_projection More...

Functions

template<typename T , qualifier Q, typename U >
GLM_FUNC_DECL mat< 4, 4, T, Q > pickMatrix (vec< 2, T, Q > const &center, vec< 2, T, Q > const &delta, vec< 4, U, Q > const &viewport)
 Define a picking region. More...
 
template<typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > project (vec< 3, T, Q > const &obj, mat< 4, 4, T, Q > const &model, mat< 4, 4, T, Q > const &proj, vec< 4, U, Q > const &viewport)
 Map the specified object coordinates (obj.x, obj.y, obj.z) into window coordinates using default near and far clip planes definition. More...
 
template<typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > projectNO (vec< 3, T, Q > const &obj, mat< 4, 4, T, Q > const &model, mat< 4, 4, T, Q > const &proj, vec< 4, U, Q > const &viewport)
 Map the specified object coordinates (obj.x, obj.y, obj.z) into window coordinates. More...
 
template<typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > projectZO (vec< 3, T, Q > const &obj, mat< 4, 4, T, Q > const &model, mat< 4, 4, T, Q > const &proj, vec< 4, U, Q > const &viewport)
 Map the specified object coordinates (obj.x, obj.y, obj.z) into window coordinates. More...
 
template<typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > unProject (vec< 3, T, Q > const &win, mat< 4, 4, T, Q > const &model, mat< 4, 4, T, Q > const &proj, vec< 4, U, Q > const &viewport)
 Map the specified window coordinates (win.x, win.y, win.z) into object coordinates using default near and far clip planes definition. More...
 
template<typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > unProjectNO (vec< 3, T, Q > const &win, mat< 4, 4, T, Q > const &model, mat< 4, 4, T, Q > const &proj, vec< 4, U, Q > const &viewport)
 Map the specified window coordinates (win.x, win.y, win.z) into object coordinates. More...
 
template<typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > unProjectZO (vec< 3, T, Q > const &win, mat< 4, 4, T, Q > const &model, mat< 4, 4, T, Q > const &proj, vec< 4, U, Q > const &viewport)
 Map the specified window coordinates (win.x, win.y, win.z) into object coordinates. More...
 

Detailed Description
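The ZO/NO suffixes select the clip-space depth convention: ZO maps depth to [0, 1] (Direct3D/Vulkan style), NO to [-1, 1] (OpenGL style); the unsuffixed project/unProject follow whichever convention GLM was configured with. After the model and projection transforms and the perspective divide, only the final viewport mapping differs between the two. A sketch of that last step (plain C++, hypothetical helper names, not GLM's code):

```cpp
#include <array>
#include <cassert>

using Vec3 = std::array<double, 3>;
using Vec4 = std::array<double, 4>; // viewport: x, y, width, height

// Map a point already in normalized device coordinates to window
// coordinates, as the tail end of projectNO() does.
// NO convention: depth in [-1, 1] remapped to [0, 1] window depth.
inline Vec3 ndcToWindowNO(const Vec3& ndc, const Vec4& viewport) {
    return {
        viewport[0] + viewport[2] * (ndc[0] + 1.0) / 2.0,
        viewport[1] + viewport[3] * (ndc[1] + 1.0) / 2.0,
        (ndc[2] + 1.0) / 2.0,
    };
}

// ZO convention: depth is already in [0, 1] and passes through unchanged.
inline Vec3 ndcToWindowZO(const Vec3& ndc, const Vec4& viewport) {
    return {
        viewport[0] + viewport[2] * (ndc[0] + 1.0) / 2.0,
        viewport[1] + viewport[3] * (ndc[1] + 1.0) / 2.0,
        ndc[2],
    };
}
```

unProject* runs the same mapping in reverse and then applies the inverse of proj * model, which is why it takes the same matrix and viewport arguments.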

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00105_source.html ================================================ 0.9.9 API documentation: matrix_projection.hpp Source File
matrix_projection.hpp
#pragma once

// Dependencies
#include "../gtc/constants.hpp"
#include "../geometric.hpp"
#include "../trigonometric.hpp"
#include "../matrix.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_EXT_matrix_projection extension included")
#endif

namespace glm
{
	template<typename T, typename U, qualifier Q>
	GLM_FUNC_DECL vec<3, T, Q> projectZO(
		vec<3, T, Q> const& obj, mat<4, 4, T, Q> const& model, mat<4, 4, T, Q> const& proj, vec<4, U, Q> const& viewport);

	template<typename T, typename U, qualifier Q>
	GLM_FUNC_DECL vec<3, T, Q> projectNO(
		vec<3, T, Q> const& obj, mat<4, 4, T, Q> const& model, mat<4, 4, T, Q> const& proj, vec<4, U, Q> const& viewport);

	template<typename T, typename U, qualifier Q>
	GLM_FUNC_DECL vec<3, T, Q> project(
		vec<3, T, Q> const& obj, mat<4, 4, T, Q> const& model, mat<4, 4, T, Q> const& proj, vec<4, U, Q> const& viewport);

	template<typename T, typename U, qualifier Q>
	GLM_FUNC_DECL vec<3, T, Q> unProjectZO(
		vec<3, T, Q> const& win, mat<4, 4, T, Q> const& model, mat<4, 4, T, Q> const& proj, vec<4, U, Q> const& viewport);

	template<typename T, typename U, qualifier Q>
	GLM_FUNC_DECL vec<3, T, Q> unProjectNO(
		vec<3, T, Q> const& win, mat<4, 4, T, Q> const& model, mat<4, 4, T, Q> const& proj, vec<4, U, Q> const& viewport);

	template<typename T, typename U, qualifier Q>
	GLM_FUNC_DECL vec<3, T, Q> unProject(
		vec<3, T, Q> const& win, mat<4, 4, T, Q> const& model, mat<4, 4, T, Q> const& proj, vec<4, U, Q> const& viewport);

	template<typename T, qualifier Q, typename U>
	GLM_FUNC_DECL mat<4, 4, T, Q> pickMatrix(
		vec<2, T, Q> const& center, vec<2, T, Q> const& delta, vec<4, U, Q> const& viewport);
}//namespace glm

#include "matrix_projection.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00106.html ================================================ 0.9.9 API documentation: matrix_query.hpp File Reference
matrix_query.hpp File Reference

GLM_GTX_matrix_query More...

Functions

template<length_t C, length_t R, typename T , qualifier Q, template< length_t, length_t, typename, qualifier > class matType>
GLM_FUNC_DECL bool isIdentity (matType< C, R, T, Q > const &m, T const &epsilon)
 Return whether a matrix is an identity matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL bool isNormalized (mat< 2, 2, T, Q > const &m, T const &epsilon)
 Return whether a matrix is a normalized matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL bool isNormalized (mat< 3, 3, T, Q > const &m, T const &epsilon)
 Return whether a matrix is a normalized matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL bool isNormalized (mat< 4, 4, T, Q > const &m, T const &epsilon)
 Return whether a matrix is a normalized matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL bool isNull (mat< 2, 2, T, Q > const &m, T const &epsilon)
 Return whether a matrix is a null matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL bool isNull (mat< 3, 3, T, Q > const &m, T const &epsilon)
 Return whether a matrix is a null matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL bool isNull (mat< 4, 4, T, Q > const &m, T const &epsilon)
 Return whether a matrix is a null matrix. More...
 
template<length_t C, length_t R, typename T , qualifier Q, template< length_t, length_t, typename, qualifier > class matType>
GLM_FUNC_DECL bool isOrthogonal (matType< C, R, T, Q > const &m, T const &epsilon)
 Return whether a matrix is an orthonormalized matrix. More...
 

Detailed Description

GLM_GTX_matrix_query

See also
Core features (dependence)
GLM_GTX_vector_query (dependence)

Definition in file matrix_query.hpp.
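Every query takes an explicit epsilon because exact comparison is useless after chained floating-point transforms: a matrix that is "identity" only up to rounding should still pass. A sketch of the isIdentity idea for the 3x3 case (plain C++, not GLM's implementation):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Epsilon-tolerant identity test, in the spirit of glm::isIdentity:
// diagonal entries must be within epsilon of 1, off-diagonal entries
// within epsilon of 0.
inline bool isIdentity3(const Mat3& m, double epsilon) {
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            double expected = (i == j) ? 1.0 : 0.0;
            if (std::abs(m[i][j] - expected) > epsilon)
                return false;
        }
    return true;
}
```

isNull, isNormalized, and isOrthogonal follow the same pattern, comparing against zero entries, unit-length columns, and mutually perpendicular unit columns respectively.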

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00106_source.html ================================================ 0.9.9 API documentation: matrix_query.hpp Source File
matrix_query.hpp
#pragma once

// Dependency:
#include "../glm.hpp"
#include "../gtx/vector_query.hpp"
#include <limits>

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	ifndef GLM_ENABLE_EXPERIMENTAL
#		pragma message("GLM: GLM_GTX_matrix_query is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#	else
#		pragma message("GLM: GLM_GTX_matrix_query extension included")
#	endif
#endif

namespace glm
{
	template<typename T, qualifier Q>
	GLM_FUNC_DECL bool isNull(mat<2, 2, T, Q> const& m, T const& epsilon);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL bool isNull(mat<3, 3, T, Q> const& m, T const& epsilon);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL bool isNull(mat<4, 4, T, Q> const& m, T const& epsilon);

	template<length_t C, length_t R, typename T, qualifier Q, template<length_t, length_t, typename, qualifier> class matType>
	GLM_FUNC_DECL bool isIdentity(matType<C, R, T, Q> const& m, T const& epsilon);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL bool isNormalized(mat<2, 2, T, Q> const& m, T const& epsilon);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL bool isNormalized(mat<3, 3, T, Q> const& m, T const& epsilon);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL bool isNormalized(mat<4, 4, T, Q> const& m, T const& epsilon);

	template<length_t C, length_t R, typename T, qualifier Q, template<length_t, length_t, typename, qualifier> class matType>
	GLM_FUNC_DECL bool isOrthogonal(matType<C, R, T, Q> const& m, T const& epsilon);
}//namespace glm

#include "matrix_query.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00107.html ================================================ 0.9.9 API documentation: matrix_relational.hpp File Reference
matrix_relational.hpp File Reference

GLM_EXT_matrix_relational More...

Functions

template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< C, bool, Q > equal (mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y)
 Perform a component-wise equal-to comparison of two matrices. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< C, bool, Q > equal (mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, T epsilon)
 Returns the component-wise comparison of |x - y| < epsilon. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< C, bool, Q > equal (mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, vec< C, T, Q > const &epsilon)
 Returns the component-wise comparison of |x - y| < epsilon. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< C, bool, Q > equal (mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, int ULPs)
 Returns the component-wise comparison between two matrices in terms of ULPs. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< C, bool, Q > equal (mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, vec< C, int, Q > const &ULPs)
 Returns the component-wise comparison between two matrices in terms of ULPs. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< C, bool, Q > notEqual (mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y)
 Perform a component-wise not-equal-to comparison of two matrices. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< C, bool, Q > notEqual (mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, T epsilon)
 Returns the component-wise comparison of |x - y| >= epsilon. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< C, bool, Q > notEqual (mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, vec< C, T, Q > const &epsilon)
 Returns the component-wise comparison of |x - y| >= epsilon. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< C, bool, Q > notEqual (mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, int ULPs)
 Returns the component-wise comparison between two matrices in terms of ULPs. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< C, bool, Q > notEqual (mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, vec< C, int, Q > const &ULPs)
 Returns the component-wise comparison between two matrices in terms of ULPs. More...
 

Detailed Description
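For matrices, equal and notEqual return a vec&lt;C, bool&gt;: one flag per column, true only when every component of that column pair passes the test. A plain-C++ sketch of the scalar-epsilon overload's shape (2x3 case, hypothetical helper, not GLM's code):

```cpp
#include <array>
#include <cassert>
#include <cmath>

constexpr int C = 2, R = 3;                       // 2 columns, 3 rows
using Mat = std::array<std::array<double, R>, C>; // mat[col][row], GLM-style

// Component-wise |x - y| < epsilon, collapsed to one bool per column,
// mirroring glm::equal(mat, mat, epsilon) -> vec<C, bool>.
inline std::array<bool, C> equalEps(const Mat& x, const Mat& y, double epsilon) {
    std::array<bool, C> result{};
    for (int c = 0; c < C; ++c) {
        bool all = true;
        for (int r = 0; r < R; ++r)
            all = all && std::abs(x[c][r] - y[c][r]) < epsilon;
        result[c] = all;
    }
    return result;
}
```

The ULP overloads replace the absolute-epsilon test with a count of representable floats between the two values, which scales the tolerance with magnitude.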

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00107_source.html ================================================ 0.9.9 API documentation: matrix_relational.hpp Source File
matrix_relational.hpp
#pragma once

// Dependencies
#include "../detail/qualifier.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_EXT_matrix_relational extension included")
#endif

namespace glm
{
	template<length_t C, length_t R, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<C, bool, Q> equal(mat<C, R, T, Q> const& x, mat<C, R, T, Q> const& y);

	template<length_t C, length_t R, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<C, bool, Q> notEqual(mat<C, R, T, Q> const& x, mat<C, R, T, Q> const& y);

	template<length_t C, length_t R, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<C, bool, Q> equal(mat<C, R, T, Q> const& x, mat<C, R, T, Q> const& y, T epsilon);

	template<length_t C, length_t R, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<C, bool, Q> equal(mat<C, R, T, Q> const& x, mat<C, R, T, Q> const& y, vec<C, T, Q> const& epsilon);

	template<length_t C, length_t R, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<C, bool, Q> notEqual(mat<C, R, T, Q> const& x, mat<C, R, T, Q> const& y, T epsilon);

	template<length_t C, length_t R, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<C, bool, Q> notEqual(mat<C, R, T, Q> const& x, mat<C, R, T, Q> const& y, vec<C, T, Q> const& epsilon);

	template<length_t C, length_t R, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<C, bool, Q> equal(mat<C, R, T, Q> const& x, mat<C, R, T, Q> const& y, int ULPs);

	template<length_t C, length_t R, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<C, bool, Q> equal(mat<C, R, T, Q> const& x, mat<C, R, T, Q> const& y, vec<C, int, Q> const& ULPs);

	template<length_t C, length_t R, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<C, bool, Q> notEqual(mat<C, R, T, Q> const& x, mat<C, R, T, Q> const& y, int ULPs);

	template<length_t C, length_t R, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<C, bool, Q> notEqual(mat<C, R, T, Q> const& x, mat<C, R, T, Q> const& y, vec<C, int, Q> const& ULPs);
}//namespace glm

#include "matrix_relational.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00108.html ================================================ 0.9.9 API documentation: matrix_transform.hpp File Reference
ext/matrix_transform.hpp File Reference

GLM_EXT_matrix_transform More...

Functions

template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType identity ()
 Builds an identity matrix.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > lookAt (vec< 3, T, Q > const &eye, vec< 3, T, Q > const &center, vec< 3, T, Q > const &up)
 Build a look at view matrix based on the default handedness. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > lookAtLH (vec< 3, T, Q > const &eye, vec< 3, T, Q > const &center, vec< 3, T, Q > const &up)
 Build a left handed look at view matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > lookAtRH (vec< 3, T, Q > const &eye, vec< 3, T, Q > const &center, vec< 3, T, Q > const &up)
 Build a right handed look at view matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > rotate (mat< 4, 4, T, Q > const &m, T angle, vec< 3, T, Q > const &axis)
 Builds a rotation 4 * 4 matrix created from an axis vector and an angle. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > scale (mat< 4, 4, T, Q > const &m, vec< 3, T, Q > const &v)
 Builds a scale 4 * 4 matrix created from 3 scalars. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > translate (mat< 4, 4, T, Q > const &m, vec< 3, T, Q > const &v)
 Builds a translation 4 * 4 matrix created from a vector of 3 components. More...
 

Detailed Description
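translate(m, v) right-multiplies m by a pure translation matrix T(v), whose offset sits in the last column; rotate and scale compose in the same way. A plain-C++ sketch of T(v) and its effect on a point (row-major layout here for readability, whereas GLM stores matrices column-major; not GLM's code):

```cpp
#include <array>
#include <cassert>

using Vec3 = std::array<double, 3>;
using Mat4 = std::array<std::array<double, 4>, 4>; // row-major, for readability

// Build the pure translation matrix T(v): identity with v in the last column.
inline Mat4 translation(const Vec3& v) {
    Mat4 t{};
    for (int i = 0; i < 4; ++i) t[i][i] = 1.0;
    t[0][3] = v[0];
    t[1][3] = v[1];
    t[2][3] = v[2];
    return t;
}

// Apply a 4x4 transform to a point (implicit w = 1).
inline Vec3 transformPoint(const Mat4& m, const Vec3& p) {
    Vec3 out{};
    for (int i = 0; i < 3; ++i)
        out[i] = m[i][0] * p[0] + m[i][1] * p[1] + m[i][2] * p[2] + m[i][3];
    return out;
}
```

Because these functions multiply on the right, a chain like translate(rotate(identity, a, axis), v) applies the translation first and the rotation last when the result transforms a point.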

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00108_source.html ================================================ 0.9.9 API documentation: matrix_transform.hpp Source File
ext/matrix_transform.hpp
#pragma once

// Dependencies
#include "../gtc/constants.hpp"
#include "../geometric.hpp"
#include "../trigonometric.hpp"
#include "../matrix.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_EXT_matrix_transform extension included")
#endif

namespace glm
{
	template<typename genType>
	GLM_FUNC_DECL GLM_CONSTEXPR genType identity();

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> translate(
		mat<4, 4, T, Q> const& m, vec<3, T, Q> const& v);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> rotate(
		mat<4, 4, T, Q> const& m, T angle, vec<3, T, Q> const& axis);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> scale(
		mat<4, 4, T, Q> const& m, vec<3, T, Q> const& v);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> lookAtRH(
		vec<3, T, Q> const& eye, vec<3, T, Q> const& center, vec<3, T, Q> const& up);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> lookAtLH(
		vec<3, T, Q> const& eye, vec<3, T, Q> const& center, vec<3, T, Q> const& up);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> lookAt(
		vec<3, T, Q> const& eye, vec<3, T, Q> const& center, vec<3, T, Q> const& up);
}//namespace glm

#include "matrix_transform.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00109.html ================================================ 0.9.9 API documentation: matrix_transform.hpp File Reference
0.9.9 API documentation
gtc/matrix_transform.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00109_source.html ================================================ 0.9.9 API documentation: matrix_transform.hpp Source File
0.9.9 API documentation
gtc/matrix_transform.hpp
Go to the documentation of this file.
#pragma once

// Dependencies
#include "../mat4x4.hpp"
#include "../vec2.hpp"
#include "../vec3.hpp"
#include "../vec4.hpp"
#include "../ext/matrix_projection.hpp"
#include "../ext/matrix_clip_space.hpp"
#include "../ext/matrix_transform.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_GTC_matrix_transform extension included")
#endif

#include "matrix_transform.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00110.html ================================================ 0.9.9 API documentation: matrix_transform_2d.hpp File Reference
0.9.9 API documentation
matrix_transform_2d.hpp File Reference

GLM_GTX_matrix_transform_2d More...

Go to the source code of this file.

Functions

template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER mat< 3, 3, T, Q > rotate (mat< 3, 3, T, Q > const &m, T angle)
 Builds a 3 * 3 rotation matrix from an angle. More...
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER mat< 3, 3, T, Q > scale (mat< 3, 3, T, Q > const &m, vec< 2, T, Q > const &v)
 Builds a 3 * 3 scale matrix from a vector of 2 components. More...
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER mat< 3, 3, T, Q > shearX (mat< 3, 3, T, Q > const &m, T y)
 Builds a horizontal (parallel to the x axis) 3 * 3 shear matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER mat< 3, 3, T, Q > shearY (mat< 3, 3, T, Q > const &m, T x)
 Builds a vertical (parallel to the y axis) 3 * 3 shear matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER mat< 3, 3, T, Q > translate (mat< 3, 3, T, Q > const &m, vec< 2, T, Q > const &v)
 Builds a 3 * 3 translation matrix from a vector of 2 components. More...
 

Detailed Description

GLM_GTX_matrix_transform_2d

Author
Miguel Ángel Pérez Martínez
See also
Core features (dependence)

Definition in file matrix_transform_2d.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00110_source.html ================================================ 0.9.9 API documentation: matrix_transform_2d.hpp Source File
0.9.9 API documentation
matrix_transform_2d.hpp
Go to the documentation of this file.
#pragma once

// Dependency:
#include "../mat3x3.hpp"
#include "../vec2.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	ifndef GLM_ENABLE_EXPERIMENTAL
#		pragma message("GLM: GLM_GTX_matrix_transform_2d is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#	else
#		pragma message("GLM: GLM_GTX_matrix_transform_2d extension included")
#	endif
#endif

namespace glm
{
	template<typename T, qualifier Q>
	GLM_FUNC_QUALIFIER mat<3, 3, T, Q> translate(
		mat<3, 3, T, Q> const& m,
		vec<2, T, Q> const& v);

	template<typename T, qualifier Q>
	GLM_FUNC_QUALIFIER mat<3, 3, T, Q> rotate(
		mat<3, 3, T, Q> const& m,
		T angle);

	template<typename T, qualifier Q>
	GLM_FUNC_QUALIFIER mat<3, 3, T, Q> scale(
		mat<3, 3, T, Q> const& m,
		vec<2, T, Q> const& v);

	template<typename T, qualifier Q>
	GLM_FUNC_QUALIFIER mat<3, 3, T, Q> shearX(
		mat<3, 3, T, Q> const& m,
		T y);

	template<typename T, qualifier Q>
	GLM_FUNC_QUALIFIER mat<3, 3, T, Q> shearY(
		mat<3, 3, T, Q> const& m,
		T x);
}//namespace glm

#include "matrix_transform_2d.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00111.html ================================================ 0.9.9 API documentation: mixed_product.hpp File Reference
0.9.9 API documentation
mixed_product.hpp File Reference

GLM_GTX_mixed_product More...

Go to the source code of this file.

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL T mixedProduct (vec< 3, T, Q > const &v1, vec< 3, T, Q > const &v2, vec< 3, T, Q > const &v3)
 Mixed product of 3 vectors (from GLM_GTX_mixed_product extension)
 

Detailed Description

GLM_GTX_mixed_product

See also
Core features (dependence)

Definition in file mixed_product.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00111_source.html ================================================ 0.9.9 API documentation: mixed_product.hpp Source File
0.9.9 API documentation
mixed_product.hpp
Go to the documentation of this file.
#pragma once

// Dependency:
#include "../glm.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	ifndef GLM_ENABLE_EXPERIMENTAL
#		pragma message("GLM: GLM_GTX_mixed_product is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#	else
#		pragma message("GLM: GLM_GTX_mixed_product extension included")
#	endif
#endif

namespace glm
{
	template<typename T, qualifier Q>
	GLM_FUNC_DECL T mixedProduct(
		vec<3, T, Q> const& v1,
		vec<3, T, Q> const& v2,
		vec<3, T, Q> const& v3);
}// namespace glm

#include "mixed_product.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00112.html ================================================ 0.9.9 API documentation: noise.hpp File Reference
0.9.9 API documentation
noise.hpp File Reference

GLM_GTC_noise More...

Go to the source code of this file.

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T perlin (vec< L, T, Q > const &p)
 Classic Perlin noise. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T perlin (vec< L, T, Q > const &p, vec< L, T, Q > const &rep)
 Periodic Perlin noise. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T simplex (vec< L, T, Q > const &p)
 Simplex noise. More...
 

Detailed Description

GLM_GTC_noise

See also
Core features (dependence)

Definition in file noise.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00112_source.html ================================================ 0.9.9 API documentation: noise.hpp Source File
0.9.9 API documentation
noise.hpp
Go to the documentation of this file.
#pragma once

// Dependencies
#include "../detail/setup.hpp"
#include "../detail/qualifier.hpp"
#include "../detail/_noise.hpp"
#include "../geometric.hpp"
#include "../common.hpp"
#include "../vector_relational.hpp"
#include "../vec2.hpp"
#include "../vec3.hpp"
#include "../vec4.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_GTC_noise extension included")
#endif

namespace glm
{
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL T perlin(
		vec<L, T, Q> const& p);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL T perlin(
		vec<L, T, Q> const& p,
		vec<L, T, Q> const& rep);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL T simplex(
		vec<L, T, Q> const& p);
}//namespace glm

#include "noise.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00113.html ================================================ 0.9.9 API documentation: norm.hpp File Reference
0.9.9 API documentation
norm.hpp File Reference

GLM_GTX_norm More...

Go to the source code of this file.

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T distance2 (vec< L, T, Q > const &p0, vec< L, T, Q > const &p1)
 Returns the squared distance between p0 and p1, i.e., length2(p0 - p1). More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T l1Norm (vec< 3, T, Q > const &x, vec< 3, T, Q > const &y)
 Returns the L1 norm between x and y. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T l1Norm (vec< 3, T, Q > const &v)
 Returns the L1 norm of v. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T l2Norm (vec< 3, T, Q > const &x, vec< 3, T, Q > const &y)
 Returns the L2 norm between x and y. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T l2Norm (vec< 3, T, Q > const &x)
 Returns the L2 norm of x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T length2 (vec< L, T, Q > const &x)
 Returns the squared length of x. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T lMaxNorm (vec< 3, T, Q > const &x, vec< 3, T, Q > const &y)
 Returns the LMax norm between x and y. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T lMaxNorm (vec< 3, T, Q > const &x)
 Returns the LMax norm of x. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T lxNorm (vec< 3, T, Q > const &x, vec< 3, T, Q > const &y, unsigned int Depth)
 Returns the Lx norm of order Depth between x and y. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T lxNorm (vec< 3, T, Q > const &x, unsigned int Depth)
 Returns the Lx norm of order Depth of x. More...
 

Detailed Description

GLM_GTX_norm

See also
Core features (dependence)
GLM_GTX_quaternion (dependence)
GLM_GTX_component_wise (dependence)

Definition in file norm.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00113_source.html ================================================ 0.9.9 API documentation: norm.hpp Source File
0.9.9 API documentation
norm.hpp
Go to the documentation of this file.
#pragma once

// Dependency:
#include "../geometric.hpp"
#include "../gtx/quaternion.hpp"
#include "../gtx/component_wise.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	ifndef GLM_ENABLE_EXPERIMENTAL
#		pragma message("GLM: GLM_GTX_norm is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#	else
#		pragma message("GLM: GLM_GTX_norm extension included")
#	endif
#endif

namespace glm
{
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL T length2(vec<L, T, Q> const& x);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL T distance2(vec<L, T, Q> const& p0, vec<L, T, Q> const& p1);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL T l1Norm(vec<3, T, Q> const& x, vec<3, T, Q> const& y);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL T l1Norm(vec<3, T, Q> const& v);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL T l2Norm(vec<3, T, Q> const& x, vec<3, T, Q> const& y);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL T l2Norm(vec<3, T, Q> const& x);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL T lxNorm(vec<3, T, Q> const& x, vec<3, T, Q> const& y, unsigned int Depth);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL T lxNorm(vec<3, T, Q> const& x, unsigned int Depth);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL T lMaxNorm(vec<3, T, Q> const& x, vec<3, T, Q> const& y);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL T lMaxNorm(vec<3, T, Q> const& x);
}//namespace glm

#include "norm.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00114.html ================================================ 0.9.9 API documentation: normal.hpp File Reference
0.9.9 API documentation
normal.hpp File Reference

GLM_GTX_normal More...

Go to the source code of this file.

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > triangleNormal (vec< 3, T, Q > const &p1, vec< 3, T, Q > const &p2, vec< 3, T, Q > const &p3)
 Computes triangle normal from triangle points. More...
 

Detailed Description

GLM_GTX_normal

See also
Core features (dependence)
gtx_extended_min_max (dependence)

Definition in file normal.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00114_source.html ================================================ 0.9.9 API documentation: normal.hpp Source File
0.9.9 API documentation
normal.hpp
Go to the documentation of this file.
#pragma once

// Dependency:
#include "../glm.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	ifndef GLM_ENABLE_EXPERIMENTAL
#		pragma message("GLM: GLM_GTX_normal is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#	else
#		pragma message("GLM: GLM_GTX_normal extension included")
#	endif
#endif

namespace glm
{
	template<typename T, qualifier Q>
	GLM_FUNC_DECL vec<3, T, Q> triangleNormal(vec<3, T, Q> const& p1, vec<3, T, Q> const& p2, vec<3, T, Q> const& p3);
}//namespace glm

#include "normal.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00115.html ================================================ 0.9.9 API documentation: normalize_dot.hpp File Reference
0.9.9 API documentation
normalize_dot.hpp File Reference

GLM_GTX_normalize_dot More...

Go to the source code of this file.

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T fastNormalizeDot (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Normalizes the parameters and returns the dot product of x and y, using the fast approximate inverse square root from GLM_GTX_fast_square_root. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T normalizeDot (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Normalizes the parameters and returns the dot product of x and y. More...
 

Detailed Description

GLM_GTX_normalize_dot

See also
Core features (dependence)
GLM_GTX_fast_square_root (dependence)

Definition in file normalize_dot.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00115_source.html ================================================ 0.9.9 API documentation: normalize_dot.hpp Source File
0.9.9 API documentation
normalize_dot.hpp
Go to the documentation of this file.
#pragma once

// Dependency:
#include "../gtx/fast_square_root.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	ifndef GLM_ENABLE_EXPERIMENTAL
#		pragma message("GLM: GLM_GTX_normalize_dot is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#	else
#		pragma message("GLM: GLM_GTX_normalize_dot extension included")
#	endif
#endif

namespace glm
{
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL T normalizeDot(vec<L, T, Q> const& x, vec<L, T, Q> const& y);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL T fastNormalizeDot(vec<L, T, Q> const& x, vec<L, T, Q> const& y);
}//namespace glm

#include "normalize_dot.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00116.html ================================================ 0.9.9 API documentation: number_precision.hpp File Reference
0.9.9 API documentation
number_precision.hpp File Reference

GLM_GTX_number_precision More...

Go to the source code of this file.

Typedefs

typedef f32 f32mat1
 Single-qualifier floating-point scalar. (from GLM_GTX_number_precision extension)
 
typedef f32 f32mat1x1
 Single-qualifier floating-point scalar. (from GLM_GTX_number_precision extension)
 
typedef f32 f32vec1
 Single-qualifier floating-point scalar. (from GLM_GTX_number_precision extension)
 
typedef f64 f64mat1
 Double-qualifier floating-point scalar. (from GLM_GTX_number_precision extension)
 
typedef f64 f64mat1x1
 Double-qualifier floating-point scalar. (from GLM_GTX_number_precision extension)
 
typedef f64 f64vec1
 Double-qualifier floating-point scalar. (from GLM_GTX_number_precision extension)
 
typedef u16 u16vec1
 16bit unsigned integer scalar. (from GLM_GTX_number_precision extension)
 
typedef u32 u32vec1
 32bit unsigned integer scalar. (from GLM_GTX_number_precision extension)
 
typedef u64 u64vec1
 64bit unsigned integer scalar. (from GLM_GTX_number_precision extension)
 
typedef u8 u8vec1
 8bit unsigned integer scalar. (from GLM_GTX_number_precision extension)
 

Detailed Description

GLM_GTX_number_precision

See also
Core features (dependence)
GLM_GTC_type_precision (dependence)
GLM_GTC_quaternion (dependence)

Definition in file number_precision.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00116_source.html ================================================ 0.9.9 API documentation: number_precision.hpp Source File
0.9.9 API documentation
number_precision.hpp
Go to the documentation of this file.
#pragma once

// Dependency:
#include "../glm.hpp"
#include "../gtc/type_precision.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	ifndef GLM_ENABLE_EXPERIMENTAL
#		pragma message("GLM: GLM_GTX_number_precision is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#	else
#		pragma message("GLM: GLM_GTX_number_precision extension included")
#	endif
#endif

namespace glm{
namespace gtx
{
	// Unsigned int vector types
	typedef u8 u8vec1;
	typedef u16 u16vec1;
	typedef u32 u32vec1;
	typedef u64 u64vec1;

	// Float vector types
	typedef f32 f32vec1;
	typedef f64 f64vec1;

	// Float matrix types
	typedef f32 f32mat1;
	typedef f32 f32mat1x1;
	typedef f64 f64mat1;
	typedef f64 f64mat1x1;
}//namespace gtx
}//namespace glm

#include "number_precision.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00117.html ================================================ 0.9.9 API documentation: optimum_pow.hpp File Reference
0.9.9 API documentation
optimum_pow.hpp File Reference

GLM_GTX_optimum_pow More...

Go to the source code of this file.

Functions

template<typename genType >
GLM_FUNC_DECL genType pow2 (genType const &x)
 Returns x raised to the power of 2. More...
 
template<typename genType >
GLM_FUNC_DECL genType pow3 (genType const &x)
 Returns x raised to the power of 3. More...
 
template<typename genType >
GLM_FUNC_DECL genType pow4 (genType const &x)
 Returns x raised to the power of 4. More...
 

Detailed Description

GLM_GTX_optimum_pow

See also
Core features (dependence)

Definition in file optimum_pow.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00117_source.html ================================================ 0.9.9 API documentation: optimum_pow.hpp Source File
0.9.9 API documentation
optimum_pow.hpp
Go to the documentation of this file.
#pragma once

// Dependency:
#include "../glm.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	ifndef GLM_ENABLE_EXPERIMENTAL
#		pragma message("GLM: GLM_GTX_optimum_pow is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#	else
#		pragma message("GLM: GLM_GTX_optimum_pow extension included")
#	endif
#endif

namespace glm{
namespace gtx
{
	template<typename genType>
	GLM_FUNC_DECL genType pow2(genType const& x);

	template<typename genType>
	GLM_FUNC_DECL genType pow3(genType const& x);

	template<typename genType>
	GLM_FUNC_DECL genType pow4(genType const& x);
}//namespace gtx
}//namespace glm

#include "optimum_pow.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00118.html ================================================ 0.9.9 API documentation: orthonormalize.hpp File Reference
0.9.9 API documentation
orthonormalize.hpp File Reference

GLM_GTX_orthonormalize More...

Go to the source code of this file.

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 3, T, Q > orthonormalize (mat< 3, 3, T, Q > const &m)
 Returns the orthonormalized matrix of m. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > orthonormalize (vec< 3, T, Q > const &x, vec< 3, T, Q > const &y)
 Orthonormalizes x with respect to y. More...
 

Detailed Description

GLM_GTX_orthonormalize

See also
Core features (dependence)
gtx_extended_min_max (dependence)

Definition in file orthonormalize.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00118_source.html ================================================ 0.9.9 API documentation: orthonormalize.hpp Source File
0.9.9 API documentation
orthonormalize.hpp
Go to the documentation of this file.
#pragma once

// Dependency:
#include "../vec3.hpp"
#include "../mat3x3.hpp"
#include "../geometric.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	ifndef GLM_ENABLE_EXPERIMENTAL
#		pragma message("GLM: GLM_GTX_orthonormalize is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#	else
#		pragma message("GLM: GLM_GTX_orthonormalize extension included")
#	endif
#endif

namespace glm
{
	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<3, 3, T, Q> orthonormalize(mat<3, 3, T, Q> const& m);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL vec<3, T, Q> orthonormalize(vec<3, T, Q> const& x, vec<3, T, Q> const& y);
}//namespace glm

#include "orthonormalize.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00119.html ================================================ 0.9.9 API documentation: packing.hpp File Reference
0.9.9 API documentation
gtc/packing.hpp File Reference

GLM_GTC_packing More...

Go to the source code of this file.

Functions

GLM_FUNC_DECL uint32 packF2x11_1x10 (vec3 const &v)
 First, converts the first two components of the normalized floating-point value v into 11-bit signless floating-point values. More...
 
GLM_FUNC_DECL uint32 packF3x9_E1x5 (vec3 const &v)
 First, converts the first three components of the normalized floating-point value v into 9-bit mantissas sharing a single 5-bit exponent (the RGB9E5 shared-exponent format). More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL vec< L, uint16, Q > packHalf (vec< L, float, Q > const &v)
 Returns an unsigned integer vector obtained by converting the components of a floating-point vector to the 16-bit floating-point representation found in the OpenGL Specification. More...
 
GLM_FUNC_DECL uint16 packHalf1x16 (float v)
 Returns an unsigned integer obtained by converting the components of a floating-point scalar to the 16-bit floating-point representation found in the OpenGL Specification, and then packing this 16-bit value into a 16-bit unsigned integer. More...
 
GLM_FUNC_DECL uint64 packHalf4x16 (vec4 const &v)
 Returns an unsigned integer obtained by converting the components of a four-component floating-point vector to the 16-bit floating-point representation found in the OpenGL Specification, and then packing these four 16-bit values into a 64-bit unsigned integer. More...
 
GLM_FUNC_DECL uint32 packI3x10_1x2 (ivec4 const &v)
 Returns an unsigned integer obtained by converting the components of a four-component signed integer vector to the 10-10-10-2-bit signed integer representation found in the OpenGL Specification, and then packing these four values into a 32-bit unsigned integer. More...
 
GLM_FUNC_DECL int packInt2x16 (i16vec2 const &v)
 Convert each component from an integer vector into a packed integer. More...
 
GLM_FUNC_DECL int64 packInt2x32 (i32vec2 const &v)
 Convert each component from an integer vector into a packed integer. More...
 
GLM_FUNC_DECL int16 packInt2x8 (i8vec2 const &v)
 Convert each component from an integer vector into a packed integer. More...
 
GLM_FUNC_DECL int64 packInt4x16 (i16vec4 const &v)
 Convert each component from an integer vector into a packed integer. More...
 
GLM_FUNC_DECL int32 packInt4x8 (i8vec4 const &v)
 Convert each component from an integer vector into a packed integer. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, T, Q > packRGBM (vec< 3, T, Q > const &rgb)
 Returns a four-component vector obtained by encoding an RGB color in the RGBM format, where the fourth component stores a shared multiplier for the RGB channels. More...
 
template<typename intType , length_t L, typename floatType , qualifier Q>
GLM_FUNC_DECL vec< L, intType, Q > packSnorm (vec< L, floatType, Q > const &v)
 Convert each component of the normalized floating-point vector into signed integer values. More...
 
GLM_FUNC_DECL uint16 packSnorm1x16 (float v)
 First, converts the normalized floating-point value v into a 16-bit signed integer value. More...
 
GLM_FUNC_DECL uint8 packSnorm1x8 (float s)
 First, converts the normalized floating-point value s into an 8-bit signed integer value. More...
 
GLM_FUNC_DECL uint16 packSnorm2x8 (vec2 const &v)
 First, converts each component of the normalized floating-point value v into 8-bit integer values. More...
 
GLM_FUNC_DECL uint32 packSnorm3x10_1x2 (vec4 const &v)
 First, converts the first three components of the normalized floating-point value v into 10-bit signed integer values. More...
 
GLM_FUNC_DECL uint64 packSnorm4x16 (vec4 const &v)
 First, converts each component of the normalized floating-point value v into 16-bit integer values. More...
 
GLM_FUNC_DECL uint32 packU3x10_1x2 (uvec4 const &v)
 Returns an unsigned integer obtained by converting the components of a four-component unsigned integer vector to the 10-10-10-2-bit unsigned integer representation found in the OpenGL Specification, and then packing these four values into a 32-bit unsigned integer. More...
 
GLM_FUNC_DECL uint packUint2x16 (u16vec2 const &v)
 Convert each component from an integer vector into a packed unsigned integer. More...
 
GLM_FUNC_DECL uint64 packUint2x32 (u32vec2 const &v)
 Convert each component from an integer vector into a packed unsigned integer. More...
 
GLM_FUNC_DECL uint16 packUint2x8 (u8vec2 const &v)
 Convert each component from an integer vector into a packed unsigned integer. More...
 
GLM_FUNC_DECL uint64 packUint4x16 (u16vec4 const &v)
 Convert each component from an integer vector into a packed unsigned integer. More...
 
GLM_FUNC_DECL uint32 packUint4x8 (u8vec4 const &v)
 Convert each component from an integer vector into a packed unsigned integer. More...
 
template<typename uintType , length_t L, typename floatType , qualifier Q>
GLM_FUNC_DECL vec< L, uintType, Q > packUnorm (vec< L, floatType, Q > const &v)
 Convert each component of the normalized floating-point vector into unsigned integer values. More...
 
GLM_FUNC_DECL uint16 packUnorm1x16 (float v)
 First, converts the normalized floating-point value v into a 16-bit integer value. More...
 
GLM_FUNC_DECL uint16 packUnorm1x5_1x6_1x5 (vec3 const &v)
 Convert each component of the normalized floating-point vector into unsigned integer values. More...
 
GLM_FUNC_DECL uint8 packUnorm1x8 (float v)
 First, converts the normalized floating-point value v into an 8-bit integer value. More...
 
GLM_FUNC_DECL uint8 packUnorm2x3_1x2 (vec3 const &v)
 Convert each component of the normalized floating-point vector into unsigned integer values. More...
 
GLM_FUNC_DECL uint8 packUnorm2x4 (vec2 const &v)
 Convert each component of the normalized floating-point vector into unsigned integer values. More...
 
GLM_FUNC_DECL uint16 packUnorm2x8 (vec2 const &v)
 First, converts each component of the normalized floating-point value v into 8-bit integer values. More...
 
GLM_FUNC_DECL uint32 packUnorm3x10_1x2 (vec4 const &v)
 First, converts the first three components of the normalized floating-point value v into 10-bit unsigned integer values. More...
 
GLM_FUNC_DECL uint16 packUnorm3x5_1x1 (vec4 const &v)
 Convert each component of the normalized floating-point vector into unsigned integer values. More...
 
GLM_FUNC_DECL uint64 packUnorm4x16 (vec4 const &v)
 First, converts each component of the normalized floating-point value v into 16-bit integer values. More...
 
GLM_FUNC_DECL uint16 packUnorm4x4 (vec4 const &v)
 Convert each component of the normalized floating-point vector into unsigned integer values. More...
 
GLM_FUNC_DECL vec3 unpackF2x11_1x10 (uint32 p)
 First, unpacks a single 32-bit unsigned integer p into two 11-bit signless floating-point values and one 10-bit signless floating-point value. More...
 
GLM_FUNC_DECL vec3 unpackF3x9_E1x5 (uint32 p)
 First, unpacks a single 32-bit unsigned integer p into three 9-bit mantissa values and a shared 5-bit exponent, reconstructing a three-component floating-point vector. More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL vec< L, float, Q > unpackHalf (vec< L, uint16, Q > const &p)
 Returns a floating-point vector with components obtained by reinterpreting an integer vector as 16-bit floating-point numbers and converting them to 32-bit floating-point values. More...
 
GLM_FUNC_DECL float unpackHalf1x16 (uint16 v)
 Returns a floating-point scalar obtained by unpacking a 16-bit unsigned integer into a 16-bit value, interpreting that value as a 16-bit floating-point number according to the OpenGL Specification, and converting it to a 32-bit floating-point value. More...
 
GLM_FUNC_DECL vec4 unpackHalf4x16 (uint64 p)
 Returns a four-component floating-point vector with components obtained by unpacking a 64-bit unsigned integer into four 16-bit values, interpreting those values as 16-bit floating-point numbers according to the OpenGL Specification, and converting them to 32-bit floating-point values. More...
 
GLM_FUNC_DECL ivec4 unpackI3x10_1x2 (uint32 p)
 Unpacks a single 32-bit unsigned integer p into three 10-bit and one 2-bit signed integers. More...
 
GLM_FUNC_DECL i16vec2 unpackInt2x16 (int p)
 Convert a packed integer into an integer vector. More...
 
GLM_FUNC_DECL i32vec2 unpackInt2x32 (int64 p)
 Convert a packed integer into an integer vector. More...
 
GLM_FUNC_DECL i8vec2 unpackInt2x8 (int16 p)
 Convert a packed integer into an integer vector. More...
 
GLM_FUNC_DECL i16vec4 unpackInt4x16 (int64 p)
 Convert a packed integer into an integer vector. More...
 
GLM_FUNC_DECL i8vec4 unpackInt4x8 (int32 p)
 Convert a packed integer into an integer vector. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > unpackRGBM (vec< 4, T, Q > const &rgbm)
 Returns a three-component floating-point vector obtained by decoding an RGBM-encoded vector, multiplying the RGB channels by the shared multiplier stored in the fourth component. More...
 
template<typename floatType , length_t L, typename intType , qualifier Q>
GLM_FUNC_DECL vec< L, floatType, Q > unpackSnorm (vec< L, intType, Q > const &v)
 Convert a packed integer to a normalized floating-point vector. More...
 
GLM_FUNC_DECL float unpackSnorm1x16 (uint16 p)
 First, unpacks a single 16-bit unsigned integer p into a single 16-bit signed integer. More...
 
GLM_FUNC_DECL float unpackSnorm1x8 (uint8 p)
 First, unpacks a single 8-bit unsigned integer p into a single 8-bit signed integer. More...
 
GLM_FUNC_DECL vec2 unpackSnorm2x8 (uint16 p)
 First, unpacks a single 16-bit unsigned integer p into a pair of 8-bit signed integers. More...
 
GLM_FUNC_DECL vec4 unpackSnorm3x10_1x2 (uint32 p)
 First, unpacks a single 32-bit unsigned integer p into three 10-bit signed integers and one 2-bit signed integer. More...
 
GLM_FUNC_DECL vec4 unpackSnorm4x16 (uint64 p)
 First, unpacks a single 64-bit unsigned integer p into four 16-bit signed integers. More...
 
GLM_FUNC_DECL uvec4 unpackU3x10_1x2 (uint32 p)
 Unpacks a single 32-bit unsigned integer p into three 10-bit and one 2-bit unsigned integers. More...
 
GLM_FUNC_DECL u16vec2 unpackUint2x16 (uint p)
 Convert a packed integer into an integer vector. More...
 
GLM_FUNC_DECL u32vec2 unpackUint2x32 (uint64 p)
 Convert a packed integer into an integer vector. More...
 
GLM_FUNC_DECL u8vec2 unpackUint2x8 (uint16 p)
 Convert a packed integer into an integer vector. More...
 
GLM_FUNC_DECL u16vec4 unpackUint4x16 (uint64 p)
 Convert a packed integer into an integer vector. More...
 
GLM_FUNC_DECL u8vec4 unpackUint4x8 (uint32 p)
 Convert a packed integer into an integer vector. More...
 
template<typename floatType , length_t L, typename uintType , qualifier Q>
GLM_FUNC_DECL vec< L, floatType, Q > unpackUnorm (vec< L, uintType, Q > const &v)
 Convert a packed integer to a normalized floating-point vector. More...
 
GLM_FUNC_DECL float unpackUnorm1x16 (uint16 p)
 First, unpacks a single 16-bit unsigned integer p into a single 16-bit unsigned integer. More...
 
GLM_FUNC_DECL vec3 unpackUnorm1x5_1x6_1x5 (uint16 p)
 Convert a packed integer to a normalized floating-point vector. More...
 
GLM_FUNC_DECL float unpackUnorm1x8 (uint8 p)
 Convert a single 8-bit integer to a normalized floating-point value. More...
 
GLM_FUNC_DECL vec3 unpackUnorm2x3_1x2 (uint8 p)
 Convert a packed integer to a normalized floating-point vector. More...
 
GLM_FUNC_DECL vec2 unpackUnorm2x4 (uint8 p)
 Convert a packed integer to a normalized floating-point vector. More...
 
GLM_FUNC_DECL vec2 unpackUnorm2x8 (uint16 p)
 First, unpacks a single 16-bit unsigned integer p into a pair of 8-bit unsigned integers. More...
 
GLM_FUNC_DECL vec4 unpackUnorm3x10_1x2 (uint32 p)
 First, unpacks a single 32-bit unsigned integer p into three 10-bit unsigned integers and one 2-bit unsigned integer. More...
 
GLM_FUNC_DECL vec4 unpackUnorm3x5_1x1 (uint16 p)
 Convert a packed integer to a normalized floating-point vector. More...
 
GLM_FUNC_DECL vec4 unpackUnorm4x16 (uint64 p)
 First, unpacks a single 64-bit unsigned integer p into four 16-bit unsigned integers. More...
 
GLM_FUNC_DECL vec4 unpackUnorm4x4 (uint16 p)
 Convert a packed integer to a normalized floating-point vector. More...
 

Detailed Description

GLM_GTC_packing

See also
Core features (dependence)

Definition in file gtc/packing.hpp.
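The unorm/snorm routines listed above all follow the same recipe: clamp, scale to the target bit width, round, and (for multi-component variants) shift the results into one integer. A minimal plain-C++ sketch of that recipe for the 8-bit cases (an illustration of the semantics, not GLM's actual implementation; the low-byte-first ordering assumed for packUnorm2x8 follows the usual GLSL convention):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Sketch of packUnorm1x8: clamp to [0, 1], scale to 8 bits, round to nearest.
std::uint8_t pack_unorm_1x8(float v) {
    float c = std::clamp(v, 0.0f, 1.0f);
    return static_cast<std::uint8_t>(std::round(c * 255.0f));
}

// Sketch of packSnorm1x8: clamp to [-1, 1], scale to a signed 8-bit range,
// then reinterpret the signed byte as unsigned storage.
std::uint8_t pack_snorm_1x8(float s) {
    float c = std::clamp(s, -1.0f, 1.0f);
    return static_cast<std::uint8_t>(
        static_cast<std::int8_t>(std::round(c * 127.0f)));
}

// Sketch of packUnorm2x8: two 8-bit unorms in one 16-bit word,
// first component in the least-significant byte (assumed ordering).
std::uint16_t pack_unorm_2x8(float x, float y) {
    return static_cast<std::uint16_t>(pack_unorm_1x8(x)
                                      | (pack_unorm_1x8(y) << 8));
}
```

The matching unpack functions invert this: mask out each field, reinterpret its signedness, and divide by the same scale factor.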

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00119_source.html ================================================ 0.9.9 API documentation: packing.hpp Source File
gtc/packing.hpp
#pragma once

// Dependency:
#include "type_precision.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_GTC_packing extension included")
#endif

namespace glm
{
	GLM_FUNC_DECL uint8 packUnorm1x8(float v);
	GLM_FUNC_DECL float unpackUnorm1x8(uint8 p);
	GLM_FUNC_DECL uint16 packUnorm2x8(vec2 const& v);
	GLM_FUNC_DECL vec2 unpackUnorm2x8(uint16 p);
	GLM_FUNC_DECL uint8 packSnorm1x8(float s);
	GLM_FUNC_DECL float unpackSnorm1x8(uint8 p);
	GLM_FUNC_DECL uint16 packSnorm2x8(vec2 const& v);
	GLM_FUNC_DECL vec2 unpackSnorm2x8(uint16 p);
	GLM_FUNC_DECL uint16 packUnorm1x16(float v);
	GLM_FUNC_DECL float unpackUnorm1x16(uint16 p);
	GLM_FUNC_DECL uint64 packUnorm4x16(vec4 const& v);
	GLM_FUNC_DECL vec4 unpackUnorm4x16(uint64 p);
	GLM_FUNC_DECL uint16 packSnorm1x16(float v);
	GLM_FUNC_DECL float unpackSnorm1x16(uint16 p);
	GLM_FUNC_DECL uint64 packSnorm4x16(vec4 const& v);
	GLM_FUNC_DECL vec4 unpackSnorm4x16(uint64 p);
	GLM_FUNC_DECL uint16 packHalf1x16(float v);
	GLM_FUNC_DECL float unpackHalf1x16(uint16 v);
	GLM_FUNC_DECL uint64 packHalf4x16(vec4 const& v);
	GLM_FUNC_DECL vec4 unpackHalf4x16(uint64 p);
	GLM_FUNC_DECL uint32 packI3x10_1x2(ivec4 const& v);
	GLM_FUNC_DECL ivec4 unpackI3x10_1x2(uint32 p);
	GLM_FUNC_DECL uint32 packU3x10_1x2(uvec4 const& v);
	GLM_FUNC_DECL uvec4 unpackU3x10_1x2(uint32 p);
	GLM_FUNC_DECL uint32 packSnorm3x10_1x2(vec4 const& v);
	GLM_FUNC_DECL vec4 unpackSnorm3x10_1x2(uint32 p);
	GLM_FUNC_DECL uint32 packUnorm3x10_1x2(vec4 const& v);
	GLM_FUNC_DECL vec4 unpackUnorm3x10_1x2(uint32 p);
	GLM_FUNC_DECL uint32 packF2x11_1x10(vec3 const& v);
	GLM_FUNC_DECL vec3 unpackF2x11_1x10(uint32 p);
	GLM_FUNC_DECL uint32 packF3x9_E1x5(vec3 const& v);
	GLM_FUNC_DECL vec3 unpackF3x9_E1x5(uint32 p);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<4, T, Q> packRGBM(vec<3, T, Q> const& rgb);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<3, T, Q> unpackRGBM(vec<4, T, Q> const& rgbm);

	template<length_t L, qualifier Q>
	GLM_FUNC_DECL vec<L, uint16, Q> packHalf(vec<L, float, Q> const& v);

	template<length_t L, qualifier Q>
	GLM_FUNC_DECL vec<L, float, Q> unpackHalf(vec<L, uint16, Q> const& p);

	template<typename uintType, length_t L, typename floatType, qualifier Q>
	GLM_FUNC_DECL vec<L, uintType, Q> packUnorm(vec<L, floatType, Q> const& v);

	template<typename floatType, length_t L, typename uintType, qualifier Q>
	GLM_FUNC_DECL vec<L, floatType, Q> unpackUnorm(vec<L, uintType, Q> const& v);

	template<typename intType, length_t L, typename floatType, qualifier Q>
	GLM_FUNC_DECL vec<L, intType, Q> packSnorm(vec<L, floatType, Q> const& v);

	template<typename floatType, length_t L, typename intType, qualifier Q>
	GLM_FUNC_DECL vec<L, floatType, Q> unpackSnorm(vec<L, intType, Q> const& v);

	GLM_FUNC_DECL uint8 packUnorm2x4(vec2 const& v);
	GLM_FUNC_DECL vec2 unpackUnorm2x4(uint8 p);
	GLM_FUNC_DECL uint16 packUnorm4x4(vec4 const& v);
	GLM_FUNC_DECL vec4 unpackUnorm4x4(uint16 p);
	GLM_FUNC_DECL uint16 packUnorm1x5_1x6_1x5(vec3 const& v);
	GLM_FUNC_DECL vec3 unpackUnorm1x5_1x6_1x5(uint16 p);
	GLM_FUNC_DECL uint16 packUnorm3x5_1x1(vec4 const& v);
	GLM_FUNC_DECL vec4 unpackUnorm3x5_1x1(uint16 p);
	GLM_FUNC_DECL uint8 packUnorm2x3_1x2(vec3 const& v);
	GLM_FUNC_DECL vec3 unpackUnorm2x3_1x2(uint8 p);

	GLM_FUNC_DECL int16 packInt2x8(i8vec2 const& v);
	GLM_FUNC_DECL i8vec2 unpackInt2x8(int16 p);
	GLM_FUNC_DECL uint16 packUint2x8(u8vec2 const& v);
	GLM_FUNC_DECL u8vec2 unpackUint2x8(uint16 p);
	GLM_FUNC_DECL int32 packInt4x8(i8vec4 const& v);
	GLM_FUNC_DECL i8vec4 unpackInt4x8(int32 p);
	GLM_FUNC_DECL uint32 packUint4x8(u8vec4 const& v);
	GLM_FUNC_DECL u8vec4 unpackUint4x8(uint32 p);
	GLM_FUNC_DECL int packInt2x16(i16vec2 const& v);
	GLM_FUNC_DECL i16vec2 unpackInt2x16(int p);
	GLM_FUNC_DECL int64 packInt4x16(i16vec4 const& v);
	GLM_FUNC_DECL i16vec4 unpackInt4x16(int64 p);
	GLM_FUNC_DECL uint packUint2x16(u16vec2 const& v);
	GLM_FUNC_DECL u16vec2 unpackUint2x16(uint p);
	GLM_FUNC_DECL uint64 packUint4x16(u16vec4 const& v);
	GLM_FUNC_DECL u16vec4 unpackUint4x16(uint64 p);
	GLM_FUNC_DECL int64 packInt2x32(i32vec2 const& v);
	GLM_FUNC_DECL i32vec2 unpackInt2x32(int64 p);
	GLM_FUNC_DECL uint64 packUint2x32(u32vec2 const& v);
	GLM_FUNC_DECL u32vec2 unpackUint2x32(uint64 p);
}// namespace glm

#include "packing.inl"
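The 10-10-10-2 formats declared above share one 32-bit layout: three 10-bit fields plus a 2-bit field. A hedged plain-C++ sketch of the unsigned variant (placing x in the least-significant bits is an assumption based on the usual OpenGL packed-format ordering; GLM's actual layout lives in packing.inl):

```cpp
#include <cstdint>

// Sketch of a packU3x10_1x2-style layout: three 10-bit unsigned fields and
// one 2-bit field packed into a 32-bit word, x in the least-significant bits.
std::uint32_t pack_u3x10_1x2(std::uint32_t x, std::uint32_t y,
                             std::uint32_t z, std::uint32_t w) {
    return (x & 0x3FFu)
         | ((y & 0x3FFu) << 10)
         | ((z & 0x3FFu) << 20)
         | ((w & 0x3u)   << 30);
}

// Matching unpack: shift and mask each field back out of the word.
void unpack_u3x10_1x2(std::uint32_t p, std::uint32_t out[4]) {
    out[0] = p & 0x3FFu;
    out[1] = (p >> 10) & 0x3FFu;
    out[2] = (p >> 20) & 0x3FFu;
    out[3] = (p >> 30) & 0x3u;
}
```

The signed (I3x10_1x2) and normalized (Snorm/Unorm 3x10_1x2) variants use the same field layout but add sign extension or a clamp-scale-round step per component.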
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00120.html ================================================ 0.9.9 API documentation: packing.hpp File Reference
packing.hpp File Reference

Core features More...


Functions

GLM_FUNC_DECL double packDouble2x32 (uvec2 const &v)
 Returns a double-qualifier value obtained by packing the components of v into a 64-bit value. More...
 
GLM_FUNC_DECL uint packHalf2x16 (vec2 const &v)
 Returns an unsigned integer obtained by converting the components of a two-component floating-point vector to the 16-bit floating-point representation found in the OpenGL Specification, and then packing these two 16- bit integers into a 32-bit unsigned integer. More...
 
GLM_FUNC_DECL uint packSnorm2x16 (vec2 const &v)
 First, converts each component of the normalized floating-point value v into 16-bit signed integer values. More...
 
GLM_FUNC_DECL uint packSnorm4x8 (vec4 const &v)
 First, converts each component of the normalized floating-point value v into 8-bit signed integer values. More...
 
GLM_FUNC_DECL uint packUnorm2x16 (vec2 const &v)
 First, converts each component of the normalized floating-point value v into 16-bit unsigned integer values. More...
 
GLM_FUNC_DECL uint packUnorm4x8 (vec4 const &v)
 First, converts each component of the normalized floating-point value v into 8-bit unsigned integer values. More...
 
GLM_FUNC_DECL uvec2 unpackDouble2x32 (double v)
 Returns a two-component unsigned integer vector representation of v. More...
 
GLM_FUNC_DECL vec2 unpackHalf2x16 (uint v)
 Returns a two-component floating-point vector with components obtained by unpacking a 32-bit unsigned integer into a pair of 16-bit values, interpreting those values as 16-bit floating-point numbers according to the OpenGL Specification, and converting them to 32-bit floating-point values. More...
 
GLM_FUNC_DECL vec2 unpackSnorm2x16 (uint p)
 First, unpacks a single 32-bit unsigned integer p into a pair of 16-bit signed integers. More...
 
GLM_FUNC_DECL vec4 unpackSnorm4x8 (uint p)
 First, unpacks a single 32-bit unsigned integer p into four 8-bit signed integers. More...
 
GLM_FUNC_DECL vec2 unpackUnorm2x16 (uint p)
 First, unpacks a single 32-bit unsigned integer p into a pair of 16-bit unsigned integers. More...
 
GLM_FUNC_DECL vec4 unpackUnorm4x8 (uint p)
 First, unpacks a single 32-bit unsigned integer p into four 8-bit unsigned integers. More...
 

Detailed Description
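packDouble2x32 and unpackDouble2x32 are pure bit reinterpretations, not numeric conversions. A plain-C++ sketch of those semantics (the low-word-first ordering is an assumption, and memcpy stands in for whatever type punning GLM performs internally):

```cpp
#include <cstdint>
#include <cstring>

// Sketch of packDouble2x32 semantics: fuse two 32-bit words into the bit
// pattern of a double, with no numeric conversion. Low word first (assumed).
double pack_double_2x32(std::uint32_t lo, std::uint32_t hi) {
    std::uint64_t bits = static_cast<std::uint64_t>(lo)
                       | (static_cast<std::uint64_t>(hi) << 32);
    double d;
    std::memcpy(&d, &bits, sizeof d);  // reinterpret, don't convert
    return d;
}

// Sketch of unpackDouble2x32: split the double's bit pattern back apart.
void unpack_double_2x32(double d, std::uint32_t out[2]) {
    std::uint64_t bits;
    std::memcpy(&bits, &d, sizeof bits);
    out[0] = static_cast<std::uint32_t>(bits);
    out[1] = static_cast<std::uint32_t>(bits >> 32);
}
```

Because no rounding or scaling happens, the round trip is exact for any bit pattern that forms a valid double.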

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00120_source.html ================================================ 0.9.9 API documentation: packing.hpp Source File
packing.hpp
#pragma once

#include "./ext/vector_uint2.hpp"
#include "./ext/vector_float2.hpp"
#include "./ext/vector_float4.hpp"

namespace glm
{
	GLM_FUNC_DECL uint packUnorm2x16(vec2 const& v);
	GLM_FUNC_DECL uint packSnorm2x16(vec2 const& v);
	GLM_FUNC_DECL uint packUnorm4x8(vec4 const& v);
	GLM_FUNC_DECL uint packSnorm4x8(vec4 const& v);
	GLM_FUNC_DECL vec2 unpackUnorm2x16(uint p);
	GLM_FUNC_DECL vec2 unpackSnorm2x16(uint p);
	GLM_FUNC_DECL vec4 unpackUnorm4x8(uint p);
	GLM_FUNC_DECL vec4 unpackSnorm4x8(uint p);
	GLM_FUNC_DECL double packDouble2x32(uvec2 const& v);
	GLM_FUNC_DECL uvec2 unpackDouble2x32(double v);
	GLM_FUNC_DECL uint packHalf2x16(vec2 const& v);
	GLM_FUNC_DECL vec2 unpackHalf2x16(uint v);
}//namespace glm

#include "detail/func_packing.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00121.html ================================================ 0.9.9 API documentation: perpendicular.hpp File Reference
perpendicular.hpp File Reference

GLM_GTX_perpendicular

Functions

template<typename genType >
GLM_FUNC_DECL genType perp (genType const &x, genType const &Normal)
 Projects x on a perpendicular axis of Normal.
 

Detailed Description

GLM_GTX_perpendicular

See also
Core features (dependence)
GLM_GTX_projection (dependence)

Definition in file perpendicular.hpp.
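perp removes from x its orthogonal projection onto Normal, leaving the component of x perpendicular to Normal. A standalone C++ sketch of that math (an illustration, not GLM's code; Normal need not be unit length):

```cpp
#include <array>

using vec3 = std::array<double, 3>;

double dot(vec3 const& a, vec3 const& b)
{
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// perp(x, Normal) = x - proj(x, Normal): the part of x orthogonal to Normal.
vec3 perp(vec3 const& x, vec3 const& n)
{
    double s = dot(x, n) / dot(n, n); // projection coefficient onto n
    return {x[0] - s * n[0], x[1] - s * n[1], x[2] - s * n[2]};
}
```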

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00121_source.html ================================================ 0.9.9 API documentation: perpendicular.hpp Source File
perpendicular.hpp
#pragma once

// Dependency:
#include "../glm.hpp"
#include "../gtx/projection.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	ifndef GLM_ENABLE_EXPERIMENTAL
#		pragma message("GLM: GLM_GTX_perpendicular is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#	else
#		pragma message("GLM: GLM_GTX_perpendicular extension included")
#	endif
#endif

namespace glm
{
	template<typename genType>
	GLM_FUNC_DECL genType perp(genType const& x, genType const& Normal);

}//namespace glm

#include "perpendicular.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00122.html ================================================ 0.9.9 API documentation: polar_coordinates.hpp File Reference
polar_coordinates.hpp File Reference

GLM_GTX_polar_coordinates

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > euclidean (vec< 2, T, Q > const &polar)
 Convert Polar to Euclidean coordinates.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > polar (vec< 3, T, Q > const &euclidean)
 Convert Euclidean to Polar coordinates: x is the xz distance, y the latitude and z the longitude.
 

Detailed Description

GLM_GTX_polar_coordinates

See also
Core features (dependence)

Definition in file polar_coordinates.hpp.
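The euclidean direction maps a latitude/longitude pair back onto the unit sphere. A standalone C++ sketch of that conversion, using one common convention (y up, latitude measured from the xz-plane, longitude around the y axis); treat this as an illustration of the math, not necessarily GLM's exact component layout:

```cpp
#include <array>
#include <cmath>

// Latitude/longitude (radians) to a unit direction on the sphere.
// The function name is illustrative, not a GLM symbol.
std::array<double, 3> euclidean_from_polar(double latitude, double longitude)
{
    return {
        std::cos(latitude) * std::sin(longitude), // x
        std::sin(latitude),                       // y: height above the xz-plane
        std::cos(latitude) * std::cos(longitude)  // z
    };
}
```

With this convention, (0, 0) maps to the +z axis and latitude pi/2 maps to +y; the polar function inverts the mapping by recovering the xz distance and the two angles.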

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00122_source.html ================================================ 0.9.9 API documentation: polar_coordinates.hpp Source File
polar_coordinates.hpp
#pragma once

// Dependency:
#include "../glm.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	ifndef GLM_ENABLE_EXPERIMENTAL
#		pragma message("GLM: GLM_GTX_polar_coordinates is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#	else
#		pragma message("GLM: GLM_GTX_polar_coordinates extension included")
#	endif
#endif

namespace glm
{
	template<typename T, qualifier Q>
	GLM_FUNC_DECL vec<3, T, Q> polar(
		vec<3, T, Q> const& euclidean);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL vec<3, T, Q> euclidean(
		vec<2, T, Q> const& polar);

}//namespace glm

#include "polar_coordinates.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00123.html ================================================ 0.9.9 API documentation: projection.hpp File Reference
projection.hpp File Reference

GLM_GTX_projection

Functions

template<typename genType >
GLM_FUNC_DECL genType proj (genType const &x, genType const &Normal)
 Projects x on Normal.
 

Detailed Description

GLM_GTX_projection

See also
Core features (dependence)

Definition in file projection.hpp.
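proj is the orthogonal projection of x onto the axis spanned by Normal: proj(x, N) = (dot(x, N) / dot(N, N)) * N. A standalone C++ sketch of that formula (an illustration, not GLM's code):

```cpp
#include <array>

using vec3 = std::array<double, 3>;

// Orthogonal projection of x onto the axis spanned by n.
// n need not be unit length; the dot(n, n) divisor normalizes it.
vec3 proj(vec3 const& x, vec3 const& n)
{
    double d  = x[0] * n[0] + x[1] * n[1] + x[2] * n[2]; // dot(x, n)
    double nn = n[0] * n[0] + n[1] * n[1] + n[2] * n[2]; // dot(n, n)
    double s = d / nn;
    return {s * n[0], s * n[1], s * n[2]};
}
```

Note that perp from GLM_GTX_perpendicular is exactly x - proj(x, Normal), which is why that extension depends on this one.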

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00123_source.html ================================================ 0.9.9 API documentation: projection.hpp Source File
projection.hpp
#pragma once

// Dependency:
#include "../geometric.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	ifndef GLM_ENABLE_EXPERIMENTAL
#		pragma message("GLM: GLM_GTX_projection is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#	else
#		pragma message("GLM: GLM_GTX_projection extension included")
#	endif
#endif

namespace glm
{
	template<typename genType>
	GLM_FUNC_DECL genType proj(genType const& x, genType const& Normal);

}//namespace glm

#include "projection.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00124_source.html ================================================ 0.9.9 API documentation: qualifier.hpp Source File
qualifier.hpp
#pragma once

#include "setup.hpp"

namespace glm
{
	enum qualifier
	{
		packed_highp,
		packed_mediump,
		packed_lowp,

#	if GLM_CONFIG_ALIGNED_GENTYPES == GLM_ENABLE
		aligned_highp,
		aligned_mediump,
		aligned_lowp, ///< Typed data is aligned in memory allowing SIMD optimizations and operations are executed with high precision in term of ULPs to maximize performance
		aligned = aligned_highp,
#	endif

		highp = packed_highp,
		mediump = packed_mediump,
		lowp = packed_lowp,
		packed = packed_highp,

#	if GLM_CONFIG_ALIGNED_GENTYPES == GLM_ENABLE && defined(GLM_FORCE_DEFAULT_ALIGNED_GENTYPES)
		defaultp = aligned_highp
#	else
		defaultp = highp
#	endif
	};

	typedef qualifier precision;

	template<length_t L, typename T, qualifier Q = defaultp> struct vec;
	template<length_t C, length_t R, typename T, qualifier Q = defaultp> struct mat;
	template<typename T, qualifier Q = defaultp> struct qua;

#	if GLM_HAS_TEMPLATE_ALIASES
	template <typename T, qualifier Q = defaultp> using tvec1 = vec<1, T, Q>;
	template <typename T, qualifier Q = defaultp> using tvec2 = vec<2, T, Q>;
	template <typename T, qualifier Q = defaultp> using tvec3 = vec<3, T, Q>;
	template <typename T, qualifier Q = defaultp> using tvec4 = vec<4, T, Q>;
	template <typename T, qualifier Q = defaultp> using tmat2x2 = mat<2, 2, T, Q>;
	template <typename T, qualifier Q = defaultp> using tmat2x3 = mat<2, 3, T, Q>;
	template <typename T, qualifier Q = defaultp> using tmat2x4 = mat<2, 4, T, Q>;
	template <typename T, qualifier Q = defaultp> using tmat3x2 = mat<3, 2, T, Q>;
	template <typename T, qualifier Q = defaultp> using tmat3x3 = mat<3, 3, T, Q>;
	template <typename T, qualifier Q = defaultp> using tmat3x4 = mat<3, 4, T, Q>;
	template <typename T, qualifier Q = defaultp> using tmat4x2 = mat<4, 2, T, Q>;
	template <typename T, qualifier Q = defaultp> using tmat4x3 = mat<4, 3, T, Q>;
	template <typename T, qualifier Q = defaultp> using tmat4x4 = mat<4, 4, T, Q>;
	template <typename T, qualifier Q = defaultp> using tquat = qua<T, Q>;
#	endif

namespace detail
{
	template<glm::qualifier P>
	struct is_aligned
	{
		static const bool value = false;
	};

#	if GLM_CONFIG_ALIGNED_GENTYPES == GLM_ENABLE
	template<>
	struct is_aligned<glm::aligned_lowp>
	{
		static const bool value = true;
	};

	template<>
	struct is_aligned<glm::aligned_mediump>
	{
		static const bool value = true;
	};

	template<>
	struct is_aligned<glm::aligned_highp>
	{
		static const bool value = true;
	};
#	endif

	template<length_t L, typename T, bool is_aligned>
	struct storage
	{
		typedef struct type {
			T data[L];
		} type;
	};

#	if GLM_HAS_ALIGNOF
	template<length_t L, typename T>
	struct storage<L, T, true>
	{
		typedef struct alignas(L * sizeof(T)) type {
			T data[L];
		} type;
	};

	template<typename T>
	struct storage<3, T, true>
	{
		typedef struct alignas(4 * sizeof(T)) type {
			T data[4];
		} type;
	};
#	endif

#	if GLM_ARCH & GLM_ARCH_SSE2_BIT
	template<>
	struct storage<4, float, true>
	{
		typedef glm_f32vec4 type;
	};

	template<>
	struct storage<4, int, true>
	{
		typedef glm_i32vec4 type;
	};

	template<>
	struct storage<4, unsigned int, true>
	{
		typedef glm_u32vec4 type;
	};

	template<>
	struct storage<2, double, true>
	{
		typedef glm_f64vec2 type;
	};

	template<>
	struct storage<2, detail::int64, true>
	{
		typedef glm_i64vec2 type;
	};

	template<>
	struct storage<2, detail::uint64, true>
	{
		typedef glm_u64vec2 type;
	};
#	endif

#	if (GLM_ARCH & GLM_ARCH_AVX_BIT)
	template<>
	struct storage<4, double, true>
	{
		typedef glm_f64vec4 type;
	};
#	endif

#	if (GLM_ARCH & GLM_ARCH_AVX2_BIT)
	template<>
	struct storage<4, detail::int64, true>
	{
		typedef glm_i64vec4 type;
	};

	template<>
	struct storage<4, detail::uint64, true>
	{
		typedef glm_u64vec4 type;
	};
#	endif

#	if GLM_ARCH & GLM_ARCH_NEON_BIT
	template<>
	struct storage<4, float, true>
	{
		typedef glm_f32vec4 type;
	};

	template<>
	struct storage<4, int, true>
	{
		typedef glm_i32vec4 type;
	};

	template<>
	struct storage<4, unsigned int, true>
	{
		typedef glm_u32vec4 type;
	};
#	endif

	enum genTypeEnum
	{
		GENTYPE_VEC,
		GENTYPE_MAT,
		GENTYPE_QUAT
	};

	template <typename genType>
	struct genTypeTrait
	{};

	template <length_t C, length_t R, typename T>
	struct genTypeTrait<mat<C, R, T> >
	{
		static const genTypeEnum GENTYPE = GENTYPE_MAT;
	};

	template<typename genType, genTypeEnum type>
	struct init_gentype
	{
	};

	template<typename genType>
	struct init_gentype<genType, GENTYPE_QUAT>
	{
		GLM_FUNC_QUALIFIER GLM_CONSTEXPR static genType identity()
		{
			return genType(1, 0, 0, 0);
		}
	};

	template<typename genType>
	struct init_gentype<genType, GENTYPE_MAT>
	{
		GLM_FUNC_QUALIFIER GLM_CONSTEXPR static genType identity()
		{
			return genType(1);
		}
	};
}//namespace detail
}//namespace glm
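The detail::storage specializations above implement the aligned qualifiers: when alignment is requested, a 3-component vector is padded to 4 components and aligned so that a single SIMD load or store covers it. A minimal self-contained sketch of that trick, using the same shape as the header (names are illustrative):

```cpp
#include <cstddef>

// Primary template: packed storage, no padding, natural alignment.
template<std::size_t L, typename T, bool Aligned>
struct storage
{
    struct type { T data[L]; };
};

// Aligned 3-component case: pad to 4 elements and over-align the struct,
// so e.g. a float vec3 occupies one 16-byte SIMD lane.
template<typename T>
struct storage<3, T, true>
{
    struct alignas(4 * sizeof(T)) type { T data[4]; };
};

static_assert(sizeof(storage<3, float, false>::type) == 12, "packed vec3: 3 floats");
static_assert(sizeof(storage<3, float, true>::type) == 16, "aligned vec3: padded to 4");
static_assert(alignof(storage<3, float, true>::type) == 16, "16-byte alignment");
```

The cost is one wasted component per vec3; the benefit is that aligned_* types can back the glm_f32vec4-style intrinsic typedefs chosen in the SSE2/AVX/NEON branches.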
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00125.html ================================================ 0.9.9 API documentation: quaternion.hpp File Reference
gtc/quaternion.hpp File Reference

GLM_GTC_quaternion

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > eulerAngles (qua< T, Q > const &x)
 Returns Euler angles, pitch as x, yaw as y, roll as z.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, bool, Q > greaterThan (qua< T, Q > const &x, qua< T, Q > const &y)
 Returns the component-wise comparison result of x > y.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, bool, Q > greaterThanEqual (qua< T, Q > const &x, qua< T, Q > const &y)
 Returns the component-wise comparison result of x >= y.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, bool, Q > lessThan (qua< T, Q > const &x, qua< T, Q > const &y)
 Returns the component-wise comparison result of x < y.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, bool, Q > lessThanEqual (qua< T, Q > const &x, qua< T, Q > const &y)
 Returns the component-wise comparison result of x <= y.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 3, T, Q > mat3_cast (qua< T, Q > const &x)
 Converts a quaternion to a 3 * 3 matrix.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > mat4_cast (qua< T, Q > const &x)
 Converts a quaternion to a 4 * 4 matrix.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T pitch (qua< T, Q > const &x)
 Returns the pitch value of Euler angles, expressed in radians.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > quat_cast (mat< 3, 3, T, Q > const &x)
 Converts a pure rotation 3 * 3 matrix to a quaternion.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > quat_cast (mat< 4, 4, T, Q > const &x)
 Converts a pure rotation 4 * 4 matrix to a quaternion.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > quatLookAt (vec< 3, T, Q > const &direction, vec< 3, T, Q > const &up)
 Build a look at quaternion based on the default handedness.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > quatLookAtLH (vec< 3, T, Q > const &direction, vec< 3, T, Q > const &up)
 Build a left-handed look at quaternion.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > quatLookAtRH (vec< 3, T, Q > const &direction, vec< 3, T, Q > const &up)
 Build a right-handed look at quaternion.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T roll (qua< T, Q > const &x)
 Returns the roll value of Euler angles, expressed in radians.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T yaw (qua< T, Q > const &x)
 Returns the yaw value of Euler angles, expressed in radians.
 

Detailed Description

GLM_GTC_quaternion

See also
Core features (dependence)
GLM_GTC_constants (dependence)

Definition in file gtc/quaternion.hpp.
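mat3_cast expands a unit quaternion into the standard 3x3 rotation matrix. A standalone C++ sketch of that formula (row-major here for readability; GLM itself stores matrices column-major, so treat this as the math, not the library's memory layout):

```cpp
#include <array>
#include <cmath>

using mat3 = std::array<std::array<double, 3>, 3>;

// Standard unit quaternion (w, x, y, z) -> 3x3 rotation matrix, acting on
// column vectors. The quaternion must be normalized.
mat3 mat3_from_quat(double w, double x, double y, double z)
{
    return {{
        {1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)},
        {2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)},
        {2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)},
    }};
}
```

For example, a 90-degree rotation about z, q = (cos 45, 0, 0, sin 45), yields a matrix that maps the x axis onto the y axis. quat_cast is the inverse direction, recovering the quaternion from a pure rotation matrix.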

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00125_source.html ================================================ 0.9.9 API documentation: quaternion.hpp Source File
gtc/quaternion.hpp
#pragma once

// Dependency:
#include "../gtc/constants.hpp"
#include "../gtc/matrix_transform.hpp"
#include "../ext/vector_relational.hpp"
#include "../ext/quaternion_common.hpp"
#include "../ext/quaternion_float.hpp"
#include "../ext/quaternion_float_precision.hpp"
#include "../ext/quaternion_double.hpp"
#include "../ext/quaternion_double_precision.hpp"
#include "../ext/quaternion_relational.hpp"
#include "../ext/quaternion_geometric.hpp"
#include "../ext/quaternion_trigonometric.hpp"
#include "../ext/quaternion_transform.hpp"
#include "../detail/type_mat3x3.hpp"
#include "../detail/type_mat4x4.hpp"
#include "../detail/type_vec3.hpp"
#include "../detail/type_vec4.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_GTC_quaternion extension included")
#endif

namespace glm
{
	template<typename T, qualifier Q>
	GLM_FUNC_DECL vec<3, T, Q> eulerAngles(qua<T, Q> const& x);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL T roll(qua<T, Q> const& x);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL T pitch(qua<T, Q> const& x);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL T yaw(qua<T, Q> const& x);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<3, 3, T, Q> mat3_cast(qua<T, Q> const& x);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> mat4_cast(qua<T, Q> const& x);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL qua<T, Q> quat_cast(mat<3, 3, T, Q> const& x);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL qua<T, Q> quat_cast(mat<4, 4, T, Q> const& x);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL vec<4, bool, Q> lessThan(qua<T, Q> const& x, qua<T, Q> const& y);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL vec<4, bool, Q> lessThanEqual(qua<T, Q> const& x, qua<T, Q> const& y);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL vec<4, bool, Q> greaterThan(qua<T, Q> const& x, qua<T, Q> const& y);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL vec<4, bool, Q> greaterThanEqual(qua<T, Q> const& x, qua<T, Q> const& y);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL qua<T, Q> quatLookAt(
		vec<3, T, Q> const& direction,
		vec<3, T, Q> const& up);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL qua<T, Q> quatLookAtRH(
		vec<3, T, Q> const& direction,
		vec<3, T, Q> const& up);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL qua<T, Q> quatLookAtLH(
		vec<3, T, Q> const& direction,
		vec<3, T, Q> const& up);
} //namespace glm

#include "quaternion.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00126.html ================================================ 0.9.9 API documentation: quaternion.hpp File Reference
gtx/quaternion.hpp File Reference

GLM_GTX_quaternion

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > cross (qua< T, Q > const &q, vec< 3, T, Q > const &v)
 Compute a cross product between a quaternion and a vector.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > cross (vec< 3, T, Q > const &v, qua< T, Q > const &q)
 Compute a cross product between a vector and a quaternion.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T extractRealComponent (qua< T, Q > const &q)
 Extract the real component of a quaternion.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > fastMix (qua< T, Q > const &x, qua< T, Q > const &y, T const &a)
 Quaternion normalized linear interpolation.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > intermediate (qua< T, Q > const &prev, qua< T, Q > const &curr, qua< T, Q > const &next)
 Returns an intermediate control point for squad interpolation.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T length2 (qua< T, Q > const &q)
 Returns the squared length of q.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > quat_identity ()
 Create an identity quaternion.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > rotate (qua< T, Q > const &q, vec< 3, T, Q > const &v)
 Rotates a 3 components vector by a quaternion.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, T, Q > rotate (qua< T, Q > const &q, vec< 4, T, Q > const &v)
 Rotates a 4 components vector by a quaternion.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > rotation (vec< 3, T, Q > const &orig, vec< 3, T, Q > const &dest)
 Compute the rotation between two vectors.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > shortMix (qua< T, Q > const &x, qua< T, Q > const &y, T const &a)
 Quaternion interpolation using the rotation short path.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > squad (qua< T, Q > const &q1, qua< T, Q > const &q2, qua< T, Q > const &s1, qua< T, Q > const &s2, T const &h)
 Compute a point on a path according to the squad equation.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 3, T, Q > toMat3 (qua< T, Q > const &x)
 Converts a quaternion to a 3 * 3 matrix.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > toMat4 (qua< T, Q > const &x)
 Converts a quaternion to a 4 * 4 matrix.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > toQuat (mat< 3, 3, T, Q > const &x)
 Converts a 3 * 3 matrix to a quaternion.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > toQuat (mat< 4, 4, T, Q > const &x)
 Converts a 4 * 4 matrix to a quaternion.
 

Detailed Description

GLM_GTX_quaternion

See also
Core features (dependence)
gtx_extented_min_max (dependence)

Definition in file gtx/quaternion.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00126_source.html ================================================ 0.9.9 API documentation: quaternion.hpp Source File
gtx/quaternion.hpp
#pragma once

// Dependency:
#include "../glm.hpp"
#include "../gtc/constants.hpp"
#include "../gtc/quaternion.hpp"
#include "../ext/quaternion_exponential.hpp"
#include "../gtx/norm.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	ifndef GLM_ENABLE_EXPERIMENTAL
#		pragma message("GLM: GLM_GTX_quaternion is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#	else
#		pragma message("GLM: GLM_GTX_quaternion extension included")
#	endif
#endif

namespace glm
{
	template<typename T, qualifier Q>
	GLM_FUNC_DECL qua<T, Q> quat_identity();

	template<typename T, qualifier Q>
	GLM_FUNC_DECL vec<3, T, Q> cross(
		qua<T, Q> const& q,
		vec<3, T, Q> const& v);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL vec<3, T, Q> cross(
		vec<3, T, Q> const& v,
		qua<T, Q> const& q);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL qua<T, Q> squad(
		qua<T, Q> const& q1,
		qua<T, Q> const& q2,
		qua<T, Q> const& s1,
		qua<T, Q> const& s2,
		T const& h);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL qua<T, Q> intermediate(
		qua<T, Q> const& prev,
		qua<T, Q> const& curr,
		qua<T, Q> const& next);

	//template<typename T, qualifier Q>
	//qua<T, Q> sqrt(
	//	qua<T, Q> const& q);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL vec<3, T, Q> rotate(
		qua<T, Q> const& q,
		vec<3, T, Q> const& v);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL vec<4, T, Q> rotate(
		qua<T, Q> const& q,
		vec<4, T, Q> const& v);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL T extractRealComponent(
		qua<T, Q> const& q);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<3, 3, T, Q> toMat3(
		qua<T, Q> const& x){return mat3_cast(x);}

	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> toMat4(
		qua<T, Q> const& x){return mat4_cast(x);}

	template<typename T, qualifier Q>
	GLM_FUNC_DECL qua<T, Q> toQuat(
		mat<3, 3, T, Q> const& x){return quat_cast(x);}

	template<typename T, qualifier Q>
	GLM_FUNC_DECL qua<T, Q> toQuat(
		mat<4, 4, T, Q> const& x){return quat_cast(x);}

	template<typename T, qualifier Q>
	GLM_FUNC_DECL qua<T, Q> shortMix(
		qua<T, Q> const& x,
		qua<T, Q> const& y,
		T const& a);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL qua<T, Q> fastMix(
		qua<T, Q> const& x,
		qua<T, Q> const& y,
		T const& a);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL qua<T, Q> rotation(
		vec<3, T, Q> const& orig,
		vec<3, T, Q> const& dest);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL T length2(qua<T, Q> const& q);

}//namespace glm

#include "quaternion.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00127.html ================================================ 0.9.9 API documentation: quaternion_common.hpp File Reference
quaternion_common.hpp File Reference

GLM_EXT_quaternion_common

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > conjugate (qua< T, Q > const &q)
 Returns the q conjugate.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > inverse (qua< T, Q > const &q)
 Returns the q inverse.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, bool, Q > isinf (qua< T, Q > const &x)
 Returns true if x holds a positive infinity or negative infinity representation in the underlying implementation's set of floating point representations.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, bool, Q > isnan (qua< T, Q > const &x)
 Returns true if x holds a NaN (not a number) representation in the underlying implementation's set of floating point representations.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > lerp (qua< T, Q > const &x, qua< T, Q > const &y, T a)
 Linear interpolation of two quaternions.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > mix (qua< T, Q > const &x, qua< T, Q > const &y, T a)
 Spherical linear interpolation of two quaternions.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > slerp (qua< T, Q > const &x, qua< T, Q > const &y, T a)
 Spherical linear interpolation of two quaternions.
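slerp interpolates between two unit quaternions at constant angular speed along the shortest great-circle arc. A standalone C++ sketch of the textbook formula (an illustration of the math, not GLM's exact implementation):

```cpp
#include <cmath>

struct quat { double w, x, y, z; };

// Spherical linear interpolation between unit quaternions x and y at t = a.
quat slerp(quat const& x, quat const& y, double a)
{
    double cosTheta = x.w * y.w + x.x * y.x + x.y * y.y + x.z * y.z;
    quat z = y;
    if (cosTheta < 0.0) // q and -q encode the same rotation: take the short path
    {
        z = {-y.w, -y.x, -y.y, -y.z};
        cosTheta = -cosTheta;
    }
    if (cosTheta > 1.0 - 1e-9) // nearly parallel: sin(theta) ~ 0, fall back to lerp
        return {x.w + a * (z.w - x.w), x.x + a * (z.x - x.x),
                x.y + a * (z.y - x.y), x.z + a * (z.z - x.z)};
    double theta = std::acos(cosTheta);
    double s0 = std::sin((1.0 - a) * theta) / std::sin(theta);
    double s1 = std::sin(a * theta) / std::sin(theta);
    return {s0 * x.w + s1 * z.w, s0 * x.x + s1 * z.x,
            s0 * x.y + s1 * z.y, s0 * x.z + s1 * z.z};
}
```

Halfway between the identity and a 90-degree rotation about z this yields a 45-degree rotation about z, which is exactly the constant-speed property lerp lacks.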
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00127_source.html ================================================ 0.9.9 API documentation: quaternion_common.hpp Source File
quaternion_common.hpp
#pragma once

// Dependency:
#include "../ext/scalar_constants.hpp"
#include "../ext/quaternion_geometric.hpp"
#include "../common.hpp"
#include "../trigonometric.hpp"
#include "../exponential.hpp"
#include <limits>

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_EXT_quaternion_common extension included")
#endif

namespace glm
{
	template<typename T, qualifier Q>
	GLM_FUNC_DECL qua<T, Q> mix(qua<T, Q> const& x, qua<T, Q> const& y, T a);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL qua<T, Q> lerp(qua<T, Q> const& x, qua<T, Q> const& y, T a);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL qua<T, Q> slerp(qua<T, Q> const& x, qua<T, Q> const& y, T a);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL qua<T, Q> conjugate(qua<T, Q> const& q);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL qua<T, Q> inverse(qua<T, Q> const& q);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL vec<4, bool, Q> isnan(qua<T, Q> const& x);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL vec<4, bool, Q> isinf(qua<T, Q> const& x);

} //namespace glm

#include "quaternion_common.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00128.html ================================================ 0.9.9 API documentation: quaternion_double.hpp File Reference
quaternion_double.hpp File Reference

GLM_EXT_quaternion_double More...

Go to the source code of this file.

Typedefs

typedef qua< double, defaultp > dquat
 Quaternion of double-precision floating-point numbers.
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00128_source.html ================================================ 0.9.9 API documentation: quaternion_double.hpp Source File
quaternion_double.hpp
Go to the documentation of this file.
1 
20 #pragma once
21 
22 // Dependency:
23 #include "../detail/type_quat.hpp"
24 
25 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
26 # pragma message("GLM: GLM_EXT_quaternion_double extension included")
27 #endif
28 
29 namespace glm
30 {
33 
35  typedef qua<double, defaultp> dquat;
36 
38 } //namespace glm
39 
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00129.html ================================================ 0.9.9 API documentation: quaternion_double_precision.hpp File Reference
quaternion_double_precision.hpp File Reference

GLM_EXT_quaternion_double_precision More...

Go to the source code of this file.

Typedefs

typedef qua< double, highp > highp_dquat
 Quaternion of double-precision floating-point numbers using high-precision arithmetic in terms of ULPs. More...
 
typedef qua< double, lowp > lowp_dquat
 Quaternion of double-precision floating-point numbers using low-precision arithmetic in terms of ULPs. More...
 
typedef qua< double, mediump > mediump_dquat
 Quaternion of double-precision floating-point numbers using medium-precision arithmetic in terms of ULPs. More...
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00129_source.html ================================================ 0.9.9 API documentation: quaternion_double_precision.hpp Source File
quaternion_double_precision.hpp
Go to the documentation of this file.
1 
11 #pragma once
12 
13 // Dependency:
14 #include "../detail/type_quat.hpp"
15 
16 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
17 # pragma message("GLM: GLM_EXT_quaternion_double_precision extension included")
18 #endif
19 
20 namespace glm
21 {
24 
28  typedef qua<double, lowp> lowp_dquat;
29 
33  typedef qua<double, mediump> mediump_dquat;
34 
38  typedef qua<double, highp> highp_dquat;
39 
41 } //namespace glm
42 
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00130.html ================================================ 0.9.9 API documentation: quaternion_exponential.hpp File Reference
quaternion_exponential.hpp File Reference

GLM_EXT_quaternion_exponential More...

Go to the source code of this file.

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > exp (qua< T, Q > const &q)
 Returns the exponential of a quaternion. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > log (qua< T, Q > const &q)
 Returns the logarithm of a quaternion. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > pow (qua< T, Q > const &q, T y)
 Returns a quaternion raised to a power. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > sqrt (qua< T, Q > const &q)
 Returns the square root of a quaternion. More...
 

Detailed Description
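A minimal sketch of what pow means for a unit quaternion (qpow and this Quat struct are hypothetical stand-ins, not GLM code): writing q = (cos t, n sin t) with a unit axis n, raising to a power scales the angle, so q^y = (cos(y t), n sin(y t)).

```cpp
#include <cassert>
#include <cmath>

struct Quat { double w, x, y, z; };  // hypothetical stand-in for glm::quat (w, x, y, z)

// qpow(q, y) for a unit quaternion: scale the rotation angle by y.
Quat qpow(const Quat& q, double y) {
    const double vlen = std::sqrt(q.x * q.x + q.y * q.y + q.z * q.z);
    if (vlen == 0.0)                        // purely real quaternion
        return {std::pow(q.w, y), 0.0, 0.0, 0.0};
    const double t = std::atan2(vlen, q.w); // half rotation angle
    const double s = std::sin(y * t) / vlen;
    return {std::cos(y * t), q.x * s, q.y * s, q.z * s};
}
```

exp, log and sqrt follow from the same identity: sqrt(q) corresponds to qpow(q, 0.5), and pow(q, y) equals exp(y * log(q)).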

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00130_source.html ================================================ 0.9.9 API documentation: quaternion_exponential.hpp Source File
quaternion_exponential.hpp
Go to the documentation of this file.
1 
15 #pragma once
16 
17 // Dependency:
18 #include "../common.hpp"
19 #include "../trigonometric.hpp"
20 #include "../geometric.hpp"
21 #include "../ext/scalar_constants.hpp"
22 
23 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
24 # pragma message("GLM: GLM_EXT_quaternion_exponential extension included")
25 #endif
26 
27 namespace glm
28 {
31 
36  template<typename T, qualifier Q>
37  GLM_FUNC_DECL qua<T, Q> exp(qua<T, Q> const& q);
38 
43  template<typename T, qualifier Q>
44  GLM_FUNC_DECL qua<T, Q> log(qua<T, Q> const& q);
45 
50  template<typename T, qualifier Q>
51  GLM_FUNC_DECL qua<T, Q> pow(qua<T, Q> const& q, T y);
52 
57  template<typename T, qualifier Q>
58  GLM_FUNC_DECL qua<T, Q> sqrt(qua<T, Q> const& q);
59 
61 } //namespace glm
62 
63 #include "quaternion_exponential.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00131.html ================================================ 0.9.9 API documentation: quaternion_float.hpp File Reference
quaternion_float.hpp File Reference

GLM_EXT_quaternion_float More...

Go to the source code of this file.

Typedefs

typedef qua< float, defaultp > quat
 Quaternion of single-precision floating-point numbers.
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00131_source.html ================================================ 0.9.9 API documentation: quaternion_float.hpp Source File
quaternion_float.hpp
Go to the documentation of this file.
1 
20 #pragma once
21 
22 // Dependency:
23 #include "../detail/type_quat.hpp"
24 
25 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
26 # pragma message("GLM: GLM_EXT_quaternion_float extension included")
27 #endif
28 
29 namespace glm
30 {
33 
35  typedef qua<float, defaultp> quat;
36 
38 } //namespace glm
39 
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00132.html ================================================ 0.9.9 API documentation: quaternion_float_precision.hpp File Reference
quaternion_float_precision.hpp File Reference

GLM_EXT_quaternion_float_precision More...

Go to the source code of this file.

Typedefs

typedef qua< float, highp > highp_quat
 Quaternion of single-precision floating-point numbers using high-precision arithmetic in terms of ULPs.
 
typedef qua< float, lowp > lowp_quat
 Quaternion of single-precision floating-point numbers using low-precision arithmetic in terms of ULPs.
 
typedef qua< float, mediump > mediump_quat
 Quaternion of single-precision floating-point numbers using medium-precision arithmetic in terms of ULPs.
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00132_source.html ================================================ 0.9.9 API documentation: quaternion_float_precision.hpp Source File
quaternion_float_precision.hpp
Go to the documentation of this file.
1 
11 #pragma once
12 
13 // Dependency:
14 #include "../detail/type_quat.hpp"
15 
16 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
17 # pragma message("GLM: GLM_EXT_quaternion_float_precision extension included")
18 #endif
19 
20 namespace glm
21 {
24 
26  typedef qua<float, lowp> lowp_quat;
27 
29  typedef qua<float, mediump> mediump_quat;
30 
32  typedef qua<float, highp> highp_quat;
33 
35 } //namespace glm
36 
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00133.html ================================================ 0.9.9 API documentation: quaternion_geometric.hpp File Reference
quaternion_geometric.hpp File Reference

GLM_EXT_quaternion_geometric More...

Go to the source code of this file.

Functions

template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER qua< T, Q > cross (qua< T, Q > const &q1, qua< T, Q > const &q2)
 Compute a cross product. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T dot (qua< T, Q > const &x, qua< T, Q > const &y)
 Returns the dot product of x and y, i.e., x[0] * y[0] + x[1] * y[1] + ... More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T length (qua< T, Q > const &q)
 Returns the norm of a quaternion. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > normalize (qua< T, Q > const &q)
 Returns the normalized quaternion. More...
 

Detailed Description
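The geometric operations treat a quaternion exactly like a 4D vector; as a sketch (Quat, qdot, qlength and qnormalize are hypothetical stand-ins for glm::quat, dot, length and normalize):

```cpp
#include <cassert>
#include <cmath>

struct Quat { double w, x, y, z; };  // hypothetical stand-in for glm::quat

// dot(x, y): sum of component products, exactly as for a 4D vector.
double qdot(const Quat& a, const Quat& b) {
    return a.w * b.w + a.x * b.x + a.y * b.y + a.z * b.z;
}

// length(q): Euclidean norm of the four components.
double qlength(const Quat& q) { return std::sqrt(qdot(q, q)); }

// normalize(q): q scaled to unit norm, so it represents a pure rotation.
Quat qnormalize(const Quat& q) {
    const double l = qlength(q);
    return {q.w / l, q.x / l, q.y / l, q.z / l};
}
```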

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00133_source.html ================================================ 0.9.9 API documentation: quaternion_geometric.hpp Source File
quaternion_geometric.hpp
Go to the documentation of this file.
1 
15 #pragma once
16 
17 // Dependency:
18 #include "../geometric.hpp"
19 #include "../exponential.hpp"
20 #include "../ext/vector_relational.hpp"
21 
22 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
23 # pragma message("GLM: GLM_EXT_quaternion_geometric extension included")
24 #endif
25 
26 namespace glm
27 {
30 
37  template<typename T, qualifier Q>
38  GLM_FUNC_DECL T length(qua<T, Q> const& q);
39 
46  template<typename T, qualifier Q>
47  GLM_FUNC_DECL qua<T, Q> normalize(qua<T, Q> const& q);
48 
55  template<typename T, qualifier Q>
56  GLM_FUNC_DECL T dot(qua<T, Q> const& x, qua<T, Q> const& y);
57 
64  template<typename T, qualifier Q>
65  GLM_FUNC_QUALIFIER qua<T, Q> cross(qua<T, Q> const& q1, qua<T, Q> const& q2);
66 
68 } //namespace glm
69 
70 #include "quaternion_geometric.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00134.html ================================================ 0.9.9 API documentation: quaternion_relational.hpp File Reference
quaternion_relational.hpp File Reference

GLM_EXT_quaternion_relational More...

Go to the source code of this file.

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, bool, Q > equal (qua< T, Q > const &x, qua< T, Q > const &y)
 Returns the component-wise comparison of x == y. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, bool, Q > equal (qua< T, Q > const &x, qua< T, Q > const &y, T epsilon)
 Returns the component-wise comparison of |x - y| < epsilon. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, bool, Q > notEqual (qua< T, Q > const &x, qua< T, Q > const &y)
 Returns the component-wise comparison of x != y. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, bool, Q > notEqual (qua< T, Q > const &x, qua< T, Q > const &y, T epsilon)
 Returns the component-wise comparison of |x - y| >= epsilon. More...
 

Detailed Description
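A sketch of the epsilon comparison (Quat and qequal are hypothetical stand-ins, and the component ordering of the returned flags is an assumption of this sketch, not taken from GLM):

```cpp
#include <array>
#include <cassert>
#include <cmath>

struct Quat { double w, x, y, z; };  // hypothetical stand-in for glm::quat

// Component-wise |a - b| < eps, mirroring the vec<4, bool, Q> result of
// equal(x, y, epsilon); notEqual is the same test with >= instead of <.
std::array<bool, 4> qequal(const Quat& a, const Quat& b, double eps) {
    return {std::fabs(a.w - b.w) < eps, std::fabs(a.x - b.x) < eps,
            std::fabs(a.y - b.y) < eps, std::fabs(a.z - b.z) < eps};
}
```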

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00134_source.html ================================================ 0.9.9 API documentation: quaternion_relational.hpp Source File
quaternion_relational.hpp
Go to the documentation of this file.
1 
17 #pragma once
18 
19 // Dependency:
20 #include "../vector_relational.hpp"
21 
22 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
23 # pragma message("GLM: GLM_EXT_quaternion_relational extension included")
24 #endif
25 
26 namespace glm
27 {
30 
35  template<typename T, qualifier Q>
36  GLM_FUNC_DECL vec<4, bool, Q> equal(qua<T, Q> const& x, qua<T, Q> const& y);
37 
42  template<typename T, qualifier Q>
43  GLM_FUNC_DECL vec<4, bool, Q> equal(qua<T, Q> const& x, qua<T, Q> const& y, T epsilon);
44 
49  template<typename T, qualifier Q>
50  GLM_FUNC_DECL vec<4, bool, Q> notEqual(qua<T, Q> const& x, qua<T, Q> const& y);
51 
56  template<typename T, qualifier Q>
57  GLM_FUNC_DECL vec<4, bool, Q> notEqual(qua<T, Q> const& x, qua<T, Q> const& y, T epsilon);
58 
60 } //namespace glm
61 
62 #include "quaternion_relational.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00135.html ================================================ 0.9.9 API documentation: quaternion_transform.hpp File Reference
quaternion_transform.hpp File Reference

GLM_EXT_quaternion_transform More...

Go to the source code of this file.

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > rotate (qua< T, Q > const &q, T const &angle, vec< 3, T, Q > const &axis)
 Rotates a quaternion by an angle about a 3-component axis. More...
 

Detailed Description
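rotate appends an axis-angle rotation to an existing quaternion, i.e. it composes q with angleAxis(angle, axis). A sketch under that assumption (Quat, qmul and qrotate are hypothetical names, not GLM code):

```cpp
#include <cassert>
#include <cmath>

struct Quat { double w, x, y, z; };  // hypothetical stand-in for glm::quat

// Hamilton product a * b: composes the two rotations.
Quat qmul(const Quat& a, const Quat& b) {
    return {a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
            a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
            a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
            a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w};
}

// rotate(q, angle, axis): append a rotation of `angle` radians about a
// unit-length axis, i.e. q * angleAxis(angle, axis).
Quat qrotate(const Quat& q, double angle, double ax, double ay, double az) {
    const double s = std::sin(angle * 0.5);
    return qmul(q, Quat{std::cos(angle * 0.5), ax * s, ay * s, az * s});
}
```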

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00135_source.html ================================================ 0.9.9 API documentation: quaternion_transform.hpp Source File
quaternion_transform.hpp
Go to the documentation of this file.
1 
18 #pragma once
19 
20 // Dependency:
21 #include "../common.hpp"
22 #include "../trigonometric.hpp"
23 #include "../geometric.hpp"
24 
25 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
26 # pragma message("GLM: GLM_EXT_quaternion_transform extension included")
27 #endif
28 
29 namespace glm
30 {
33 
42  template<typename T, qualifier Q>
43  GLM_FUNC_DECL qua<T, Q> rotate(qua<T, Q> const& q, T const& angle, vec<3, T, Q> const& axis);
45 } //namespace glm
46 
47 #include "quaternion_transform.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00136.html ================================================ 0.9.9 API documentation: quaternion_trigonometric.hpp File Reference
quaternion_trigonometric.hpp File Reference

GLM_EXT_quaternion_trigonometric More...

Go to the source code of this file.

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL T angle (qua< T, Q > const &x)
 Returns the quaternion rotation angle. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > angleAxis (T const &angle, vec< 3, T, Q > const &axis)
 Build a quaternion from an angle and a normalized axis. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > axis (qua< T, Q > const &x)
 Returns the q rotation axis. More...
 

Detailed Description
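The angle/axis pair and the quaternion components are related by (w, v) = (cos(angle/2), axis * sin(angle/2)). A sketch of that round trip (Quat and qangle are hypothetical names standing in for glm::quat and angle):

```cpp
#include <cassert>
#include <cmath>

struct Quat { double w, x, y, z; };  // hypothetical stand-in for glm::quat

// angleAxis(angle, axis): unit quaternion for `angle` radians about a
// normalized axis; (w, v) = (cos(angle/2), axis * sin(angle/2)).
Quat angleAxis(double angle, double ax, double ay, double az) {
    const double s = std::sin(angle * 0.5);
    return {std::cos(angle * 0.5), ax * s, ay * s, az * s};
}

// angle(q): recover the rotation angle of a unit quaternion, 2 * acos(w).
double qangle(const Quat& q) { return 2.0 * std::acos(q.w); }
```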

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00136_source.html ================================================ 0.9.9 API documentation: quaternion_trigonometric.hpp Source File
quaternion_trigonometric.hpp
Go to the documentation of this file.
1 
18 #pragma once
19 
20 // Dependency:
21 #include "../trigonometric.hpp"
22 #include "../exponential.hpp"
23 #include "scalar_constants.hpp"
24 #include "vector_relational.hpp"
25 #include <limits>
26 
27 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
28 # pragma message("GLM: GLM_EXT_quaternion_trigonometric extension included")
29 #endif
30 
31 namespace glm
32 {
35 
40  template<typename T, qualifier Q>
41  GLM_FUNC_DECL T angle(qua<T, Q> const& x);
42 
47  template<typename T, qualifier Q>
48  GLM_FUNC_DECL vec<3, T, Q> axis(qua<T, Q> const& x);
49 
57  template<typename T, qualifier Q>
58  GLM_FUNC_DECL qua<T, Q> angleAxis(T const& angle, vec<3, T, Q> const& axis);
59 
61 } //namespace glm
62 
63 #include "quaternion_trigonometric.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00137.html ================================================ 0.9.9 API documentation: random.hpp File Reference
random.hpp File Reference

GLM_GTC_random More...

Go to the source code of this file.

Functions

template<typename T >
GLM_FUNC_DECL vec< 3, T, defaultp > ballRand (T Radius)
 Generate a random 3D vector whose coordinates are regularly distributed within the volume of a ball of a given radius. More...
 
template<typename T >
GLM_FUNC_DECL vec< 2, T, defaultp > circularRand (T Radius)
 Generate a random 2D vector whose coordinates are regularly distributed on a circle of a given radius. More...
 
template<typename T >
GLM_FUNC_DECL vec< 2, T, defaultp > diskRand (T Radius)
 Generate a random 2D vector whose coordinates are regularly distributed within the area of a disk of a given radius. More...
 
template<typename genType >
GLM_FUNC_DECL genType gaussRand (genType Mean, genType Deviation)
 Generate random numbers according to a Gaussian distribution with the given Mean and Deviation. More...
 
template<typename genType >
GLM_FUNC_DECL genType linearRand (genType Min, genType Max)
 Generate random numbers in the interval [Min, Max] according to a linear distribution. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > linearRand (vec< L, T, Q > const &Min, vec< L, T, Q > const &Max)
 Generate random numbers in the interval [Min, Max] according to a linear distribution. More...
 
template<typename T >
GLM_FUNC_DECL vec< 3, T, defaultp > sphericalRand (T Radius)
 Generate a random 3D vector whose coordinates are regularly distributed on a sphere of a given radius. More...
 

Detailed Description

GLM_GTC_random

See also
Core features (dependence)
gtx_random (extended)

Definition in file random.hpp.
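GLM keeps its own internal generator, so the following <random>-based sketch matches the documented distributions but not GLM's sequences; the function names mirror the declarations above, and sampleCircleRadius is a hypothetical helper added for checking:

```cpp
#include <cassert>
#include <cmath>
#include <random>

std::mt19937 rng(42);  // fixed seed here; glm manages its own generator internally

// linearRand(Min, Max): uniform ("linear") distribution over [Min, Max).
double linearRand(double mn, double mx) {
    return std::uniform_real_distribution<double>(mn, mx)(rng);
}

// circularRand(Radius): point uniformly distributed on a circle of that
// radius, via a uniformly random angle.
void circularRand(double radius, double& x, double& y) {
    const double a = linearRand(0.0, 2.0 * std::acos(-1.0));
    x = radius * std::cos(a);
    y = radius * std::sin(a);
}

// hypothetical helper: distance from the origin of one circularRand sample.
double sampleCircleRadius(double radius) {
    double x, y;
    circularRand(radius, x, y);
    return std::hypot(x, y);
}
```

diskRand and ballRand differ only in that the radius is also randomized (with the density weighting needed for uniform area/volume coverage).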

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00137_source.html ================================================ 0.9.9 API documentation: random.hpp Source File
random.hpp
Go to the documentation of this file.
1 
14 #pragma once
15 
16 // Dependency:
17 #include "../ext/scalar_int_sized.hpp"
18 #include "../ext/scalar_uint_sized.hpp"
19 #include "../detail/qualifier.hpp"
20 
21 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
22 # pragma message("GLM: GLM_GTC_random extension included")
23 #endif
24 
25 namespace glm
26 {
29 
36  template<typename genType>
37  GLM_FUNC_DECL genType linearRand(genType Min, genType Max);
38 
46  template<length_t L, typename T, qualifier Q>
47  GLM_FUNC_DECL vec<L, T, Q> linearRand(vec<L, T, Q> const& Min, vec<L, T, Q> const& Max);
48 
52  template<typename genType>
53  GLM_FUNC_DECL genType gaussRand(genType Mean, genType Deviation);
54 
58  template<typename T>
59  GLM_FUNC_DECL vec<2, T, defaultp> circularRand(T Radius);
60 
64  template<typename T>
65  GLM_FUNC_DECL vec<3, T, defaultp> sphericalRand(T Radius);
66 
70  template<typename T>
71  GLM_FUNC_DECL vec<2, T, defaultp> diskRand(T Radius);
72 
76  template<typename T>
77  GLM_FUNC_DECL vec<3, T, defaultp> ballRand(T Radius);
78 
80 }//namespace glm
81 
82 #include "random.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00138.html ================================================ 0.9.9 API documentation: range.hpp File Reference
range.hpp File Reference

GLM_GTX_range More...

Go to the source code of this file.

Detailed Description

GLM_GTX_range

Author
Joshua Moerman

Definition in file range.hpp.
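The extension's idea can be sketched on a plain struct (Vec3 and sum are hypothetical stand-ins; GLM itself builds begin/end from value_ptr plus a components count):

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };  // stand-in for glm::vec3

// begin/end expose the contiguous components as a pointer range, so the
// vector works with pointer loops and range-based for. This assumes x, y, z
// are laid out contiguously, which is how GLM's value_ptr treats its types.
inline double* begin(Vec3& v) { return &v.x; }
inline double* end(Vec3& v)   { return &v.x + 3; }

// Sum the components via the pointer range.
double sum(Vec3 v) {
    double s = 0.0;
    for (double* p = begin(v); p != end(v); ++p) s += *p;
    return s;
}
```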

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00138_source.html ================================================ 0.9.9 API documentation: range.hpp Source File
range.hpp
Go to the documentation of this file.
1 
13 #pragma once
14 
15 // Dependencies
16 #include "../detail/setup.hpp"
17 
18 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
19 # ifndef GLM_ENABLE_EXPERIMENTAL
20 # pragma message("GLM: GLM_GTX_range is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
21 # else
22 # pragma message("GLM: GLM_GTX_range extension included")
23 # endif
24 #endif
25 
26 #include "../gtc/type_ptr.hpp"
27 #include "../gtc/vec1.hpp"
28 
29 namespace glm
30 {
33 
34 # if GLM_COMPILER & GLM_COMPILER_VC
35 # pragma warning(push)
36 # pragma warning(disable : 4100) // unreferenced formal parameter
37 # endif
38 
39  template<typename T, qualifier Q>
40  inline length_t components(vec<1, T, Q> const& v)
41  {
42  return v.length();
43  }
44 
45  template<typename T, qualifier Q>
46  inline length_t components(vec<2, T, Q> const& v)
47  {
48  return v.length();
49  }
50 
51  template<typename T, qualifier Q>
52  inline length_t components(vec<3, T, Q> const& v)
53  {
54  return v.length();
55  }
56 
57  template<typename T, qualifier Q>
58  inline length_t components(vec<4, T, Q> const& v)
59  {
60  return v.length();
61  }
62 
63  template<typename genType>
64  inline length_t components(genType const& m)
65  {
66  return m.length() * m[0].length();
67  }
68 
69  template<typename genType>
70  inline typename genType::value_type const * begin(genType const& v)
71  {
72  return value_ptr(v);
73  }
74 
75  template<typename genType>
76  inline typename genType::value_type const * end(genType const& v)
77  {
78  return begin(v) + components(v);
79  }
80 
81  template<typename genType>
82  inline typename genType::value_type * begin(genType& v)
83  {
84  return value_ptr(v);
85  }
86 
87  template<typename genType>
88  inline typename genType::value_type * end(genType& v)
89  {
90  return begin(v) + components(v);
91  }
92 
93 # if GLM_COMPILER & GLM_COMPILER_VC
94 # pragma warning(pop)
95 # endif
96 
98 }//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00139.html ================================================ 0.9.9 API documentation: raw_data.hpp File Reference
raw_data.hpp File Reference

GLM_GTX_raw_data More...

Go to the source code of this file.

Typedefs

typedef detail::uint8 byte
 Type for byte numbers. More...
 
typedef detail::uint32 dword
 Type for dword numbers. More...
 
typedef detail::uint64 qword
 Type for qword numbers. More...
 
typedef detail::uint16 word
 Type for word numbers. More...
 

Detailed Description

GLM_GTX_raw_data

See also
Core features (dependence)

Definition in file raw_data.hpp.
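The four typedefs correspond to the sized unsigned integer types; a sketch using the <cstdint> equivalents (the detail::uintN types are GLM-internal):

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the GLM_GTX_raw_data typedefs via <cstdint>.
typedef std::uint8_t  byte;   //  8-bit
typedef std::uint16_t word;   // 16-bit
typedef std::uint32_t dword;  // 32-bit
typedef std::uint64_t qword;  // 64-bit

static_assert(sizeof(byte) == 1 && sizeof(word) == 2, "sizes as documented");
static_assert(sizeof(dword) == 4 && sizeof(qword) == 8, "sizes as documented");
```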

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00139_source.html ================================================ 0.9.9 API documentation: raw_data.hpp Source File
raw_data.hpp
Go to the documentation of this file.
1 
13 #pragma once
14 
15 // Dependencies
16 #include "../ext/scalar_uint_sized.hpp"
17 #include "../detail/setup.hpp"
18 
19 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
20 # ifndef GLM_ENABLE_EXPERIMENTAL
21 # pragma message("GLM: GLM_GTX_raw_data is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
22 # else
23 # pragma message("GLM: GLM_GTX_raw_data extension included")
24 # endif
25 #endif
26 
27 namespace glm
28 {
31 
34  typedef detail::uint8 byte;
35 
38  typedef detail::uint16 word;
39 
42  typedef detail::uint32 dword;
43 
46  typedef detail::uint64 qword;
47 
49 }// namespace glm
50 
51 #include "raw_data.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00140.html ================================================ 0.9.9 API documentation: reciprocal.hpp File Reference
reciprocal.hpp File Reference

GLM_GTC_reciprocal More...

Go to the source code of this file.

Functions

template<typename genType >
GLM_FUNC_DECL genType acot (genType x)
 Inverse cotangent function. More...
 
template<typename genType >
GLM_FUNC_DECL genType acoth (genType x)
 Inverse hyperbolic cotangent function. More...
 
template<typename genType >
GLM_FUNC_DECL genType acsc (genType x)
 Inverse cosecant function. More...
 
template<typename genType >
GLM_FUNC_DECL genType acsch (genType x)
 Inverse hyperbolic cosecant function. More...
 
template<typename genType >
GLM_FUNC_DECL genType asec (genType x)
 Inverse secant function. More...
 
template<typename genType >
GLM_FUNC_DECL genType asech (genType x)
 Inverse hyperbolic secant function. More...
 
template<typename genType >
GLM_FUNC_DECL genType cot (genType angle)
 Cotangent function. More...
 
template<typename genType >
GLM_FUNC_DECL genType coth (genType angle)
 Hyperbolic cotangent function. More...
 
template<typename genType >
GLM_FUNC_DECL genType csc (genType angle)
 Cosecant function. More...
 
template<typename genType >
GLM_FUNC_DECL genType csch (genType angle)
 Cosecant hyperbolic function. More...
 
template<typename genType >
GLM_FUNC_DECL genType sec (genType angle)
 Secant function. More...
 
template<typename genType >
GLM_FUNC_DECL genType sech (genType angle)
 Secant hyperbolic function. More...
 

Detailed Description

GLM_GTC_reciprocal

See also
Core features (dependence)

Definition in file reciprocal.hpp.
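The reciprocal functions above are conventionally defined through the standard trigonometric identities (sec = 1/cos, csc = 1/sin, cot = 1/tan, and their inverses). A minimal plain-C++ sketch of those identities, independent of GLM (the *_ref names are illustrative, not GLM's implementation):

```cpp
#include <cmath>

// Illustrative stand-ins mirroring the identities the GTC_reciprocal
// functions are conventionally defined by; not GLM's actual code.
double sec_ref(double angle) { return 1.0 / std::cos(angle); }
double csc_ref(double angle) { return 1.0 / std::sin(angle); }
double cot_ref(double angle) { return 1.0 / std::tan(angle); }
double asec_ref(double x)    { return std::acos(1.0 / x); }  // inverse of sec
double acsc_ref(double x)    { return std::asin(1.0 / x); }  // inverse of csc
double acot_ref(double x)    { return std::atan(1.0 / x); }  // inverse of cot, x > 0
```

The hyperbolic variants (sech, csch, coth and inverses) follow the same pattern with cosh, sinh, and tanh.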

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00140_source.html ================================================ 0.9.9 API documentation: reciprocal.hpp Source File
reciprocal.hpp
1 
13 #pragma once
14 
15 // Dependencies
16 #include "../detail/setup.hpp"
17 
18 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
19 # pragma message("GLM: GLM_GTC_reciprocal extension included")
20 #endif
21 
22 namespace glm
23 {
26 
33  template<typename genType>
34  GLM_FUNC_DECL genType sec(genType angle);
35 
42  template<typename genType>
43  GLM_FUNC_DECL genType csc(genType angle);
44 
51  template<typename genType>
52  GLM_FUNC_DECL genType cot(genType angle);
53 
60  template<typename genType>
61  GLM_FUNC_DECL genType asec(genType x);
62 
69  template<typename genType>
70  GLM_FUNC_DECL genType acsc(genType x);
71 
78  template<typename genType>
79  GLM_FUNC_DECL genType acot(genType x);
80 
86  template<typename genType>
87  GLM_FUNC_DECL genType sech(genType angle);
88 
94  template<typename genType>
95  GLM_FUNC_DECL genType csch(genType angle);
96 
102  template<typename genType>
103  GLM_FUNC_DECL genType coth(genType angle);
104 
111  template<typename genType>
112  GLM_FUNC_DECL genType asech(genType x);
113 
120  template<typename genType>
121  GLM_FUNC_DECL genType acsch(genType x);
122 
129  template<typename genType>
130  GLM_FUNC_DECL genType acoth(genType x);
131 
133 }//namespace glm
134 
135 #include "reciprocal.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00141.html ================================================ 0.9.9 API documentation: rotate_normalized_axis.hpp File Reference
rotate_normalized_axis.hpp File Reference

GLM_GTX_rotate_normalized_axis More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > rotateNormalizedAxis (mat< 4, 4, T, Q > const &m, T const &angle, vec< 3, T, Q > const &axis)
 Builds a 4 * 4 rotation matrix from a normalized axis and an angle. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > rotateNormalizedAxis (qua< T, Q > const &q, T const &angle, vec< 3, T, Q > const &axis)
 Rotates a quaternion about a normalized 3-component axis by an angle. More...
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00141_source.html ================================================ 0.9.9 API documentation: rotate_normalized_axis.hpp Source File
rotate_normalized_axis.hpp
1 
15 #pragma once
16 
17 // Dependency:
18 #include "../glm.hpp"
19 #include "../gtc/epsilon.hpp"
20 #include "../gtc/quaternion.hpp"
21 
22 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
23 # ifndef GLM_ENABLE_EXPERIMENTAL
24 # pragma message("GLM: GLM_GTX_rotate_normalized_axis is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
25 # else
26 # pragma message("GLM: GLM_GTX_rotate_normalized_axis extension included")
27 # endif
28 #endif
29 
30 namespace glm
31 {
34 
46  template<typename T, qualifier Q>
47  GLM_FUNC_DECL mat<4, 4, T, Q> rotateNormalizedAxis(
48  mat<4, 4, T, Q> const& m,
49  T const& angle,
50  vec<3, T, Q> const& axis);
51 
59  template<typename T, qualifier Q>
60  GLM_FUNC_DECL qua<T, Q> rotateNormalizedAxis(
61  qua<T, Q> const& q,
62  T const& angle,
63  vec<3, T, Q> const& axis);
64 
66 }//namespace glm
67 
68 #include "rotate_normalized_axis.inl"
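Mathematically, the rotation built here is Rodrigues' rotation about a pre-normalized axis; because the axis is assumed to already be unit-length, the normalization step of the general rotate() path can be skipped, which is the point of this extension. A plain-C++ sketch of the underlying formula (illustrative names, not GLM's implementation):

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Rodrigues' rotation of vector v about a unit axis k by angle a:
//   v' = v*cos(a) + (k x v)*sin(a) + k*(k . v)*(1 - cos(a))
Vec3 rotate_about_axis(const Vec3& v, double a, const Vec3& k) {
    double c = std::cos(a), s = std::sin(a);
    // cross product k x v
    Vec3 kxv = {k[1]*v[2] - k[2]*v[1],
                k[2]*v[0] - k[0]*v[2],
                k[0]*v[1] - k[1]*v[0]};
    // dot product k . v
    double kdv = k[0]*v[0] + k[1]*v[1] + k[2]*v[2];
    Vec3 r;
    for (int i = 0; i < 3; ++i)
        r[i] = v[i]*c + kxv[i]*s + k[i]*kdv*(1.0 - c);
    return r;
}
```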
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00142.html ================================================ 0.9.9 API documentation: rotate_vector.hpp File Reference
rotate_vector.hpp File Reference

GLM_GTX_rotate_vector More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > orientation (vec< 3, T, Q > const &Normal, vec< 3, T, Q > const &Up)
 Build a rotation matrix from a normal and an up vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 2, T, Q > rotate (vec< 2, T, Q > const &v, T const &angle)
 Rotate a two dimensional vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > rotate (vec< 3, T, Q > const &v, T const &angle, vec< 3, T, Q > const &normal)
 Rotate a three dimensional vector around an axis. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, T, Q > rotate (vec< 4, T, Q > const &v, T const &angle, vec< 3, T, Q > const &normal)
 Rotate a four dimensional vector around an axis. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > rotateX (vec< 3, T, Q > const &v, T const &angle)
 Rotate a three dimensional vector around the X axis. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, T, Q > rotateX (vec< 4, T, Q > const &v, T const &angle)
 Rotate a four dimensional vector around the X axis. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > rotateY (vec< 3, T, Q > const &v, T const &angle)
 Rotate a three dimensional vector around the Y axis. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, T, Q > rotateY (vec< 4, T, Q > const &v, T const &angle)
 Rotate a four dimensional vector around the Y axis. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > rotateZ (vec< 3, T, Q > const &v, T const &angle)
 Rotate a three dimensional vector around the Z axis. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, T, Q > rotateZ (vec< 4, T, Q > const &v, T const &angle)
 Rotate a four dimensional vector around the Z axis. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > slerp (vec< 3, T, Q > const &x, vec< 3, T, Q > const &y, T const &a)
 Returns the spherical interpolation between two vectors. More...
 

Detailed Description

GLM_GTX_rotate_vector

See also
Core features (dependence)
GLM_GTX_transform (dependence)

Definition in file rotate_vector.hpp.
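For the two-dimensional overload, rotate(v, angle) reduces to the familiar 2 x 2 rotation matrix applied to the vector. A plain-C++ sketch of what such a counter-clockwise rotation computes (illustrative, not GLM's code):

```cpp
#include <cmath>
#include <utility>

// Counter-clockwise rotation of the 2D point (x, y) by `angle` radians:
//   x' = x*cos(a) - y*sin(a)
//   y' = x*sin(a) + y*cos(a)
std::pair<double, double> rotate2d(double x, double y, double angle) {
    double c = std::cos(angle), s = std::sin(angle);
    return {x * c - y * s, x * s + y * c};
}
```

The 3D and 4D overloads generalize this to an arbitrary rotation axis.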

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00142_source.html ================================================ 0.9.9 API documentation: rotate_vector.hpp Source File
rotate_vector.hpp
1 
14 #pragma once
15 
16 // Dependency:
17 #include "../gtx/transform.hpp"
18 #include "../gtc/epsilon.hpp"
19 #include "../ext/vector_relational.hpp"
20 #include "../glm.hpp"
21 
22 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
23 # ifndef GLM_ENABLE_EXPERIMENTAL
24 # pragma message("GLM: GLM_GTX_rotate_vector is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
25 # else
26 # pragma message("GLM: GLM_GTX_rotate_vector extension included")
27 # endif
28 #endif
29 
30 namespace glm
31 {
34 
42  template<typename T, qualifier Q>
43  GLM_FUNC_DECL vec<3, T, Q> slerp(
44  vec<3, T, Q> const& x,
45  vec<3, T, Q> const& y,
46  T const& a);
47 
50  template<typename T, qualifier Q>
51  GLM_FUNC_DECL vec<2, T, Q> rotate(
52  vec<2, T, Q> const& v,
53  T const& angle);
54 
57  template<typename T, qualifier Q>
58  GLM_FUNC_DECL vec<3, T, Q> rotate(
59  vec<3, T, Q> const& v,
60  T const& angle,
61  vec<3, T, Q> const& normal);
62 
65  template<typename T, qualifier Q>
66  GLM_FUNC_DECL vec<4, T, Q> rotate(
67  vec<4, T, Q> const& v,
68  T const& angle,
69  vec<3, T, Q> const& normal);
70 
73  template<typename T, qualifier Q>
74  GLM_FUNC_DECL vec<3, T, Q> rotateX(
75  vec<3, T, Q> const& v,
76  T const& angle);
77 
80  template<typename T, qualifier Q>
81  GLM_FUNC_DECL vec<3, T, Q> rotateY(
82  vec<3, T, Q> const& v,
83  T const& angle);
84 
87  template<typename T, qualifier Q>
88  GLM_FUNC_DECL vec<3, T, Q> rotateZ(
89  vec<3, T, Q> const& v,
90  T const& angle);
91 
94  template<typename T, qualifier Q>
95  GLM_FUNC_DECL vec<4, T, Q> rotateX(
96  vec<4, T, Q> const& v,
97  T const& angle);
98 
101  template<typename T, qualifier Q>
102  GLM_FUNC_DECL vec<4, T, Q> rotateY(
103  vec<4, T, Q> const& v,
104  T const& angle);
105 
108  template<typename T, qualifier Q>
109  GLM_FUNC_DECL vec<4, T, Q> rotateZ(
110  vec<4, T, Q> const& v,
111  T const& angle);
112 
115  template<typename T, qualifier Q>
116  GLM_FUNC_DECL mat<4, 4, T, Q> orientation(
117  vec<3, T, Q> const& Normal,
118  vec<3, T, Q> const& Up);
119 
121 }//namespace glm
122 
123 #include "rotate_vector.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00143.html ================================================ 0.9.9 API documentation: round.hpp File Reference
round.hpp File Reference

GLM_GTC_round More...

Functions

template<typename genType >
GLM_FUNC_DECL genType ceilMultiple (genType v, genType Multiple)
 Return the smallest multiple of Multiple greater than or equal to v. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > ceilMultiple (vec< L, T, Q > const &v, vec< L, T, Q > const &Multiple)
 Return the smallest multiple of Multiple greater than or equal to v. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType ceilPowerOfTwo (genIUType v)
 Return the smallest power of two greater than or equal to the input value (round up to a power of two). More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > ceilPowerOfTwo (vec< L, T, Q > const &v)
 Return the smallest power of two greater than or equal to the input value (round up to a power of two). More...
 
template<typename genType >
GLM_FUNC_DECL genType floorMultiple (genType v, genType Multiple)
 Return the largest multiple of Multiple less than or equal to v. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > floorMultiple (vec< L, T, Q > const &v, vec< L, T, Q > const &Multiple)
 Return the largest multiple of Multiple less than or equal to v. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType floorPowerOfTwo (genIUType v)
 Return the largest power of two less than or equal to the input value (round down to a power of two). More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > floorPowerOfTwo (vec< L, T, Q > const &v)
 Return the largest power of two less than or equal to the input value (round down to a power of two). More...
 
template<typename genType >
GLM_FUNC_DECL genType roundMultiple (genType v, genType Multiple)
 Return the multiple of Multiple closest to v. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > roundMultiple (vec< L, T, Q > const &v, vec< L, T, Q > const &Multiple)
 Return the multiple of Multiple closest to v. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType roundPowerOfTwo (genIUType v)
 Return the power of two closest to the input value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > roundPowerOfTwo (vec< L, T, Q > const &v)
 Return the power of two closest to the input value. More...
 

Detailed Description

GLM_GTC_round

See also
Core features (dependence)
GLM_GTC_round (dependence)

Definition in file round.hpp.
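The ceil* helpers round an input up to the next power of two or the next multiple of a step. A plain-C++ sketch of the usual implementations of two of them for unsigned integers (illustrative names, not GLM's code):

```cpp
#include <cstdint>

// Smallest power of two >= v, for v > 0: the classic bit-smearing trick
// fills all bits below the highest set bit, then adds one.
std::uint32_t ceil_pow2(std::uint32_t v) {
    --v;
    v |= v >> 1; v |= v >> 2; v |= v >> 4; v |= v >> 8; v |= v >> 16;
    return v + 1;
}

// Smallest multiple of m >= v (m > 0).
std::uint32_t ceil_multiple(std::uint32_t v, std::uint32_t m) {
    std::uint32_t r = v % m;
    return r == 0 ? v : v + (m - r);
}
```

The floor* variants round down instead, and the round* variants pick whichever of the two neighbors is closer.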

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00143_source.html ================================================ 0.9.9 API documentation: round.hpp Source File
round.hpp
1 
14 #pragma once
15 
16 // Dependencies
17 #include "../detail/setup.hpp"
18 #include "../detail/qualifier.hpp"
19 #include "../detail/_vectorize.hpp"
20 #include "../vector_relational.hpp"
21 #include "../common.hpp"
22 #include <limits>
23 
24 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
25 # pragma message("GLM: GLM_GTC_round extension included")
26 #endif
27 
28 namespace glm
29 {
32 
37  template<typename genIUType>
38  GLM_FUNC_DECL genIUType ceilPowerOfTwo(genIUType v);
39 
48  template<length_t L, typename T, qualifier Q>
49  GLM_FUNC_DECL vec<L, T, Q> ceilPowerOfTwo(vec<L, T, Q> const& v);
50 
55  template<typename genIUType>
56  GLM_FUNC_DECL genIUType floorPowerOfTwo(genIUType v);
57 
66  template<length_t L, typename T, qualifier Q>
67  GLM_FUNC_DECL vec<L, T, Q> floorPowerOfTwo(vec<L, T, Q> const& v);
68 
72  template<typename genIUType>
73  GLM_FUNC_DECL genIUType roundPowerOfTwo(genIUType v);
74 
82  template<length_t L, typename T, qualifier Q>
83  GLM_FUNC_DECL vec<L, T, Q> roundPowerOfTwo(vec<L, T, Q> const& v);
84 
93  template<typename genType>
94  GLM_FUNC_DECL genType ceilMultiple(genType v, genType Multiple);
95 
106  template<length_t L, typename T, qualifier Q>
107  GLM_FUNC_DECL vec<L, T, Q> ceilMultiple(vec<L, T, Q> const& v, vec<L, T, Q> const& Multiple);
108 
117  template<typename genType>
118  GLM_FUNC_DECL genType floorMultiple(genType v, genType Multiple);
119 
130  template<length_t L, typename T, qualifier Q>
131  GLM_FUNC_DECL vec<L, T, Q> floorMultiple(vec<L, T, Q> const& v, vec<L, T, Q> const& Multiple);
132 
141  template<typename genType>
142  GLM_FUNC_DECL genType roundMultiple(genType v, genType Multiple);
143 
154  template<length_t L, typename T, qualifier Q>
155  GLM_FUNC_DECL vec<L, T, Q> roundMultiple(vec<L, T, Q> const& v, vec<L, T, Q> const& Multiple);
156 
158 } //namespace glm
159 
160 #include "round.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00144.html ================================================ 0.9.9 API documentation: scalar_common.hpp File Reference
scalar_common.hpp File Reference

GLM_EXT_scalar_common More...

Functions

template<typename T >
GLM_FUNC_DECL T fmax (T a, T b)
 Returns the maximum component-wise values of 2 inputs. More...
 
template<typename T >
GLM_FUNC_DECL T fmax (T a, T b, T C)
 Returns the maximum component-wise values of 3 inputs. More...
 
template<typename T >
GLM_FUNC_DECL T fmax (T a, T b, T C, T D)
 Returns the maximum component-wise values of 4 inputs. More...
 
template<typename T >
GLM_FUNC_DECL T fmin (T a, T b)
 Returns the minimum component-wise values of 2 inputs. More...
 
template<typename T >
GLM_FUNC_DECL T fmin (T a, T b, T c)
 Returns the minimum component-wise values of 3 inputs. More...
 
template<typename T >
GLM_FUNC_DECL T fmin (T a, T b, T c, T d)
 Returns the minimum component-wise values of 4 inputs. More...
 
template<typename T >
GLM_FUNC_DECL T max (T a, T b, T c)
 Returns the maximum component-wise values of 3 inputs. More...
 
template<typename T >
GLM_FUNC_DECL T max (T a, T b, T c, T d)
 Returns the maximum component-wise values of 4 inputs. More...
 
template<typename T >
GLM_FUNC_DECL T min (T a, T b, T c)
 Returns the minimum component-wise values of 3 inputs. More...
 
template<typename T >
GLM_FUNC_DECL T min (T a, T b, T c, T d)
 Returns the minimum component-wise values of 4 inputs. More...
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00144_source.html ================================================ 0.9.9 API documentation: scalar_common.hpp Source File
0.9.9 API documentation
scalar_common.hpp
Go to the documentation of this file.
1 
14 #pragma once
15 
16 // Dependency:
17 #include "../common.hpp"
18 
19 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
20 # pragma message("GLM: GLM_EXT_scalar_common extension included")
21 #endif
22 
23 namespace glm
24 {
27 
31  template<typename T>
32  GLM_FUNC_DECL T min(T a, T b, T c);
33 
37  template<typename T>
38  GLM_FUNC_DECL T min(T a, T b, T c, T d);
39 
43  template<typename T>
44  GLM_FUNC_DECL T max(T a, T b, T c);
45 
49  template<typename T>
50  GLM_FUNC_DECL T max(T a, T b, T c, T d);
51 
57  template<typename T>
58  GLM_FUNC_DECL T fmin(T a, T b);
59 
65  template<typename T>
66  GLM_FUNC_DECL T fmin(T a, T b, T c);
67 
73  template<typename T>
74  GLM_FUNC_DECL T fmin(T a, T b, T c, T d);
75 
81  template<typename T>
82  GLM_FUNC_DECL T fmax(T a, T b);
83 
89  template<typename T>
90  GLM_FUNC_DECL T fmax(T a, T b, T C);
91 
97  template<typename T>
98  GLM_FUNC_DECL T fmax(T a, T b, T C, T D);
99 
101 }//namespace glm
102 
103 #include "scalar_common.inl"
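The practical difference between the min/max and fmin/fmax families is NaN handling: the f-variants follow C's std::fmin/std::fmax, which return the non-NaN argument when one input is NaN, while plain min/max simply chain comparisons. A plain-C++ sketch of the 3-argument case (illustrative; assumes GLM mirrors the std::fmin semantics):

```cpp
#include <algorithm>
#include <cmath>

// Chained comparison: a NaN input propagates through the result.
double min3(double a, double b, double c)  { return std::min(std::min(a, b), c); }

// std::fmin ignores a NaN operand, so a single NaN input is dropped.
double fmin3(double a, double b, double c) { return std::fmin(std::fmin(a, b), c); }
```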
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00145.html ================================================ 0.9.9 API documentation: scalar_constants.hpp File Reference
scalar_constants.hpp File Reference

GLM_EXT_scalar_constants More...

Functions

template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType epsilon ()
 Return the epsilon constant for floating point types.
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType pi ()
 Return the pi constant for floating point types.
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00145_source.html ================================================ 0.9.9 API documentation: scalar_constants.hpp Source File
scalar_constants.hpp
1 
11 #pragma once
12 
13 // Dependencies
14 #include "../detail/setup.hpp"
15 
16 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
17 # pragma message("GLM: GLM_EXT_scalar_constants extension included")
18 #endif
19 
20 namespace glm
21 {
24 
26  template<typename genType>
27  GLM_FUNC_DECL GLM_CONSTEXPR genType epsilon();
28 
30  template<typename genType>
31  GLM_FUNC_DECL GLM_CONSTEXPR genType pi();
32 
34 } //namespace glm
35 
36 #include "scalar_constants.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00146.html ================================================ 0.9.9 API documentation: scalar_int_sized.hpp File Reference
scalar_int_sized.hpp File Reference

GLM_EXT_scalar_int_sized More...

Typedefs

typedef detail::int16 int16
 16 bit signed integer type.
 
typedef detail::int32 int32
 32 bit signed integer type.
 
typedef detail::int64 int64
 64 bit signed integer type.
 
typedef detail::int8 int8
 8 bit signed integer type.
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00146_source.html ================================================ 0.9.9 API documentation: scalar_int_sized.hpp Source File
scalar_int_sized.hpp
1 
13 #pragma once
14 
15 #include "../detail/setup.hpp"
16 
17 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
18 # pragma message("GLM: GLM_EXT_scalar_int_sized extension included")
19 #endif
20 
21 namespace glm{
22 namespace detail
23 {
24 # if GLM_HAS_EXTENDED_INTEGER_TYPE
25  typedef std::int8_t int8;
26  typedef std::int16_t int16;
27  typedef std::int32_t int32;
28 # else
29  typedef signed char int8;
30  typedef signed short int16;
31  typedef signed int int32;
32 #endif//
33 
34  template<>
35  struct is_int<int8>
36  {
37  enum test {value = ~0};
38  };
39 
40  template<>
41  struct is_int<int16>
42  {
43  enum test {value = ~0};
44  };
45 
46  template<>
47  struct is_int<int64>
48  {
49  enum test {value = ~0};
50  };
51 }//namespace detail
52 
53 
56 
58  typedef detail::int8 int8;
59 
61  typedef detail::int16 int16;
62 
64  typedef detail::int32 int32;
65 
67  typedef detail::int64 int64;
68 
70 }//namespace glm
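When the standard sized types are available (the GLM_HAS_EXTENDED_INTEGER_TYPE branch above), the typedefs resolve to the exact-width std::int*_t types; the width guarantee they provide can be stated directly in plain C++:

```cpp
#include <cstdint>

// Compile-time check of the widths the int8/int16/int32/int64 typedefs
// are meant to guarantee (plain C++ sketch, not GLM's code).
static_assert(sizeof(std::int8_t)  == 1, "8 bit signed integer type");
static_assert(sizeof(std::int16_t) == 2, "16 bit signed integer type");
static_assert(sizeof(std::int32_t) == 4, "32 bit signed integer type");
static_assert(sizeof(std::int64_t) == 8, "64 bit signed integer type");
```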
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00147.html ================================================ 0.9.9 API documentation: scalar_integer.hpp File Reference
scalar_integer.hpp File Reference

GLM_EXT_scalar_integer More...

Functions

template<typename genIUType >
GLM_FUNC_DECL int findNSB (genIUType x, int significantBitCount)
 Returns the bit number of the Nth significant bit set to 1 in the binary representation of value. More...
 
template<typename genIUType >
GLM_FUNC_DECL bool isMultiple (genIUType v, genIUType Multiple)
 Return true if the 'Value' is a multiple of 'Multiple'. More...
 
template<typename genIUType >
GLM_FUNC_DECL bool isPowerOfTwo (genIUType v)
 Return true if the value is a power of two number. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType nextMultiple (genIUType v, genIUType Multiple)
 Return the smallest multiple of Multiple greater than or equal to v. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType nextPowerOfTwo (genIUType v)
 Return the smallest power of two greater than or equal to the input value (round up to a power of two). More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType prevMultiple (genIUType v, genIUType Multiple)
 Return the largest multiple of Multiple less than or equal to v. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType prevPowerOfTwo (genIUType v)
 Return the largest power of two less than or equal to the input value (round down to a power of two). More...
 

Detailed Description

GLM_EXT_scalar_integer

See also
Core features (dependence)

Definition in file scalar_integer.hpp.
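Typical implementations of these integer helpers are small bit and modulo manipulations. A plain-C++ sketch of isPowerOfTwo- and findNSB-like helpers (illustrative names; the 1-based bit count for findNSB is an assumption about its contract):

```cpp
#include <cstdint>

// A positive integer is a power of two iff exactly one bit is set,
// i.e. clearing the lowest set bit yields zero.
bool is_pow2(std::uint64_t v) { return v != 0 && (v & (v - 1)) == 0; }

// Bit index of the n-th (1-based) set bit, scanning from the LSB;
// -1 if there are fewer than n set bits.
int find_nsb(std::uint32_t x, int n) {
    for (int bit = 0; bit < 32; ++bit)
        if ((x >> bit) & 1u)
            if (--n == 0) return bit;
    return -1;
}
```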

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00147_source.html ================================================ 0.9.9 API documentation: scalar_integer.hpp Source File
scalar_integer.hpp
1 
11 #pragma once
12 
13 // Dependencies
14 #include "../detail/setup.hpp"
15 #include "../detail/qualifier.hpp"
16 #include "../detail/_vectorize.hpp"
17 #include "../detail/type_float.hpp"
18 #include "../vector_relational.hpp"
19 #include "../common.hpp"
20 #include <limits>
21 
22 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
23 # pragma message("GLM: GLM_EXT_scalar_integer extension included")
24 #endif
25 
26 namespace glm
27 {
30 
34  template<typename genIUType>
35  GLM_FUNC_DECL bool isPowerOfTwo(genIUType v);
36 
41  template<typename genIUType>
42  GLM_FUNC_DECL genIUType nextPowerOfTwo(genIUType v);
43 
48  template<typename genIUType>
49  GLM_FUNC_DECL genIUType prevPowerOfTwo(genIUType v);
50 
54  template<typename genIUType>
55  GLM_FUNC_DECL bool isMultiple(genIUType v, genIUType Multiple);
56 
65  template<typename genIUType>
66  GLM_FUNC_DECL genIUType nextMultiple(genIUType v, genIUType Multiple);
67 
76  template<typename genIUType>
77  GLM_FUNC_DECL genIUType prevMultiple(genIUType v, genIUType Multiple);
78 
86  template<typename genIUType>
87  GLM_FUNC_DECL int findNSB(genIUType x, int significantBitCount);
88 
90 } //namespace glm
91 
92 #include "scalar_integer.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00148.html ================================================ 0.9.9 API documentation: scalar_multiplication.hpp File Reference
scalar_multiplication.hpp File Reference

Experimental extensions More...

Detailed Description

Experimental extensions

Author
Joshua Moerman

Include <glm/gtx/scalar_multiplication.hpp> to use the features of this extension.

Enables scalar multiplication for all types

Since GLSL is very strict about types, the following (often used) combinations do not work: double * vec4, int * vec4, vec4 / int. So we'll fix that! Of course "float * vec4" should remain the same (hence the enable_if magic).

Definition in file scalar_multiplication.hpp.
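The enable_if trick can be reproduced in miniature for a single vec2-like type. This sketch (illustrative, not GLM's code) shows why float * vec keeps using the ordinary overloads while int and double scalars get the new operator:

```cpp
#include <type_traits>

struct V2 { float x, y; };  // stand-in for a GLM vector type

// Enabled only for arithmetic scalars other than float, so the library's
// ordinary float overloads keep winning overload resolution for float * V2.
template<typename T>
typename std::enable_if<!std::is_same<T, float>::value
                        && std::is_arithmetic<T>::value, V2>::type
operator*(T s, V2 v) {
    float f = static_cast<float>(s);
    return {v.x * f, v.y * f};
}
```

With this in scope, 3 * v and 2.5 * v compile where GLSL-style strictness would otherwise reject the mixed scalar type.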

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00148_source.html ================================================ 0.9.9 API documentation: scalar_multiplication.hpp Source File
scalar_multiplication.hpp
1 
15 #pragma once
16 
17 #include "../detail/setup.hpp"
18 
19 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
20 # ifndef GLM_ENABLE_EXPERIMENTAL
21 # pragma message("GLM: GLM_GTX_scalar_multiplication is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
22 # else
23 # pragma message("GLM: GLM_GTX_scalar_multiplication extension included")
24 # endif
25 #endif
26 
27 #include "../vec2.hpp"
28 #include "../vec3.hpp"
29 #include "../vec4.hpp"
30 #include "../mat2x2.hpp"
31 #include <type_traits>
32 
33 namespace glm
34 {
35  template<typename T, typename Vec>
36  using return_type_scalar_multiplication = typename std::enable_if<
37  !std::is_same<T, float>::value // T may not be a float
38  && std::is_arithmetic<T>::value, Vec // But it may be an int or double (no vec3 or mat3, ...)
39  >::type;
40 
41 #define GLM_IMPLEMENT_SCAL_MULT(Vec) \
42  template<typename T> \
43  return_type_scalar_multiplication<T, Vec> \
44  operator*(T const& s, Vec rh){ \
45  return rh *= static_cast<float>(s); \
46  } \
47  \
48  template<typename T> \
49  return_type_scalar_multiplication<T, Vec> \
50  operator*(Vec lh, T const& s){ \
51  return lh *= static_cast<float>(s); \
52  } \
53  \
54  template<typename T> \
55  return_type_scalar_multiplication<T, Vec> \
56  operator/(Vec lh, T const& s){ \
57  return lh *= 1.0f / s; \
58  }
59 
60 GLM_IMPLEMENT_SCAL_MULT(vec2)
61 GLM_IMPLEMENT_SCAL_MULT(vec3)
62 GLM_IMPLEMENT_SCAL_MULT(vec4)
63 
64 GLM_IMPLEMENT_SCAL_MULT(mat2)
65 GLM_IMPLEMENT_SCAL_MULT(mat2x3)
66 GLM_IMPLEMENT_SCAL_MULT(mat2x4)
67 GLM_IMPLEMENT_SCAL_MULT(mat3x2)
68 GLM_IMPLEMENT_SCAL_MULT(mat3)
69 GLM_IMPLEMENT_SCAL_MULT(mat3x4)
70 GLM_IMPLEMENT_SCAL_MULT(mat4x2)
71 GLM_IMPLEMENT_SCAL_MULT(mat4x3)
72 GLM_IMPLEMENT_SCAL_MULT(mat4)
73 
74 #undef GLM_IMPLEMENT_SCAL_MULT
75 } // namespace glm
vec< 2, float, defaultp > vec2
2 components vector of single-precision floating-point numbers.
mat< 2, 4, float, defaultp > mat2x4
2 columns of 4 components matrix of single-precision floating-point numbers.
mat< 3, 2, float, defaultp > mat3x2
3 columns of 2 components matrix of single-precision floating-point numbers.
mat< 3, 4, float, defaultp > mat3x4
3 columns of 4 components matrix of single-precision floating-point numbers.
mat< 4, 3, float, defaultp > mat4x3
4 columns of 3 components matrix of single-precision floating-point numbers.
mat< 4, 2, float, defaultp > mat4x2
4 columns of 2 components matrix of single-precision floating-point numbers.
vec< 4, float, defaultp > vec4
4 components vector of single-precision floating-point numbers.
mat< 4, 4, float, defaultp > mat4
4 columns of 4 components matrix of single-precision floating-point numbers.
vec< 3, float, defaultp > vec3
3 components vector of single-precision floating-point numbers.
mat< 2, 3, float, defaultp > mat2x3
2 columns of 3 components matrix of single-precision floating-point numbers.
mat< 2, 2, float, defaultp > mat2
2 columns of 2 components matrix of single-precision floating-point numbers.
mat< 3, 3, float, defaultp > mat3
3 columns of 3 components matrix of single-precision floating-point numbers.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00149.html ================================================ 0.9.9 API documentation: scalar_relational.hpp File Reference
ext/scalar_relational.hpp File Reference

GLM_EXT_scalar_relational More...


Functions

template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR bool equal (genType const &x, genType const &y, genType const &epsilon)
 Returns the component-wise comparison of |x - y| < epsilon. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR bool equal (genType const &x, genType const &y, int ULPs)
 Returns the component-wise comparison between two scalars in terms of ULPs. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR bool notEqual (genType const &x, genType const &y, genType const &epsilon)
 Returns the component-wise comparison of |x - y| >= epsilon. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR bool notEqual (genType const &x, genType const &y, int ULPs)
 Returns the component-wise comparison between two scalars in terms of ULPs. More...
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00149_source.html ================================================ 0.9.9 API documentation: scalar_relational.hpp Source File
ext/scalar_relational.hpp
1 
15 #pragma once
16 
17 // Dependencies
18 #include "../detail/qualifier.hpp"
19 
20 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
21 # pragma message("GLM: GLM_EXT_scalar_relational extension included")
22 #endif
23 
24 namespace glm
25 {
30  template<typename genType>
31  GLM_FUNC_DECL GLM_CONSTEXPR bool equal(genType const& x, genType const& y, genType const& epsilon);
32 
37  template<typename genType>
38  GLM_FUNC_DECL GLM_CONSTEXPR bool notEqual(genType const& x, genType const& y, genType const& epsilon);
39 
48  template<typename genType>
49  GLM_FUNC_DECL GLM_CONSTEXPR bool equal(genType const& x, genType const& y, int ULPs);
50 
59  template<typename genType>
60  GLM_FUNC_DECL GLM_CONSTEXPR bool notEqual(genType const& x, genType const& y, int ULPs);
61 
63 }//namespace glm
64 
65 #include "scalar_relational.inl"
GLM_FUNC_DECL GLM_CONSTEXPR vec< C, bool, Q > notEqual(mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y)
Perform a component-wise not-equal-to comparison of two matrices.
GLM_FUNC_DECL GLM_CONSTEXPR vec< C, bool, Q > equal(mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y)
Perform a component-wise equal-to comparison of two matrices.
GLM_FUNC_DECL GLM_CONSTEXPR genType epsilon()
Return the epsilon constant for floating point types.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00150.html ================================================ 0.9.9 API documentation: scalar_relational.hpp File Reference
gtx/scalar_relational.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00150_source.html ================================================ 0.9.9 API documentation: scalar_relational.hpp Source File
gtx/scalar_relational.hpp
1 
13 #pragma once
14 
15 // Dependency:
16 #include "../glm.hpp"
17 
18 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
19 # ifndef GLM_ENABLE_EXPERIMENTAL
20 # pragma message("GLM: GLM_GTX_scalar_relational is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
21 # else
22 # pragma message("GLM: GLM_GTX_scalar_relational extension included")
23 # endif
24 #endif
25 
26 namespace glm
27 {
30 
31 
32 
34 }//namespace glm
35 
36 #include "scalar_relational.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00151.html ================================================ 0.9.9 API documentation: scalar_uint_sized.hpp File Reference
scalar_uint_sized.hpp File Reference

GLM_EXT_scalar_uint_sized More...


Typedefs

typedef detail::uint16 uint16
 16 bit unsigned integer type.
 
typedef detail::uint32 uint32
 32 bit unsigned integer type.
 
typedef detail::uint64 uint64
 64 bit unsigned integer type.
 
typedef detail::uint8 uint8
 8 bit unsigned integer type.
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00151_source.html ================================================ 0.9.9 API documentation: scalar_uint_sized.hpp Source File
scalar_uint_sized.hpp
1 
13 #pragma once
14 
15 #include "../detail/setup.hpp"
16 
17 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
18 # pragma message("GLM: GLM_EXT_scalar_uint_sized extension included")
19 #endif
20 
21 namespace glm{
22 namespace detail
23 {
24 # if GLM_HAS_EXTENDED_INTEGER_TYPE
25  typedef std::uint8_t uint8;
26  typedef std::uint16_t uint16;
27  typedef std::uint32_t uint32;
28 # else
29  typedef unsigned char uint8;
30  typedef unsigned short uint16;
31  typedef unsigned int uint32;
32 #endif
33 
34  template<>
35  struct is_int<uint8>
36  {
37  enum test {value = ~0};
38  };
39 
40  template<>
41  struct is_int<uint16>
42  {
43  enum test {value = ~0};
44  };
45 
46  template<>
47  struct is_int<uint64>
48  {
49  enum test {value = ~0};
50  };
51 }//namespace detail
52 
53 
56 
58  typedef detail::uint8 uint8;
59 
61  typedef detail::uint16 uint16;
62 
64  typedef detail::uint32 uint32;
65 
67  typedef detail::uint64 uint64;
68 
70 }//namespace glm
detail::uint32 uint32
32 bit unsigned integer type.
uint32 uint32_t
Default qualifier 32 bit unsigned integer type.
Definition: fwd.hpp:129
detail::uint16 uint16
16 bit unsigned integer type.
uint16 uint16_t
Default qualifier 16 bit unsigned integer type.
Definition: fwd.hpp:115
uint8 uint8_t
Default qualifier 8 bit unsigned integer type.
Definition: fwd.hpp:101
detail::uint64 uint64
64 bit unsigned integer type.
detail::uint8 uint8
8 bit unsigned integer type.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00152.html ================================================ 0.9.9 API documentation: scalar_ulp.hpp File Reference
scalar_ulp.hpp File Reference

GLM_EXT_scalar_ulp More...


Functions

GLM_FUNC_DECL int floatDistance (float x, float y)
 Return the distance in ULPs between two single-precision floating-point scalars. More...
 
GLM_FUNC_DECL int64 floatDistance (double x, double y)
 Return the distance in ULPs between two double-precision floating-point scalars. More...
 
template<typename genType >
GLM_FUNC_DECL genType nextFloat (genType x)
 Return the next ULP value(s) after the input value(s). More...
 
template<typename genType >
GLM_FUNC_DECL genType nextFloat (genType x, int ULPs)
 Return the value(s) ULP distance after the input value(s). More...
 
template<typename genType >
GLM_FUNC_DECL genType prevFloat (genType x)
 Return the previous ULP value(s) before the input value(s). More...
 
template<typename genType >
GLM_FUNC_DECL genType prevFloat (genType x, int ULPs)
 Return the value(s) ULP distance before the input value(s). More...
 

Detailed Description

GLM_EXT_scalar_ulp

Definition in file scalar_ulp.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00152_source.html ================================================ 0.9.9 API documentation: scalar_ulp.hpp Source File
scalar_ulp.hpp
1 
16 #pragma once
17 
18 // Dependencies
19 #include "../ext/scalar_int_sized.hpp"
20 #include "../common.hpp"
21 #include "../detail/qualifier.hpp"
22 
23 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
24 # pragma message("GLM: GLM_EXT_scalar_ulp extension included")
25 #endif
26 
27 namespace glm
28 {
34  template<typename genType>
35  GLM_FUNC_DECL genType nextFloat(genType x);
36 
42  template<typename genType>
43  GLM_FUNC_DECL genType prevFloat(genType x);
44 
50  template<typename genType>
51  GLM_FUNC_DECL genType nextFloat(genType x, int ULPs);
52 
58  template<typename genType>
59  GLM_FUNC_DECL genType prevFloat(genType x, int ULPs);
60 
64  GLM_FUNC_DECL int floatDistance(float x, float y);
65 
69  GLM_FUNC_DECL int64 floatDistance(double x, double y);
70 
72 }//namespace glm
73 
74 #include "scalar_ulp.inl"
detail::int64 int64
64 bit signed integer type.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00153_source.html ================================================ 0.9.9 API documentation: setup.hpp Source File
setup.hpp
1 #ifndef GLM_SETUP_INCLUDED
2 
3 #include <cassert>
4 #include <cstddef>
5 
6 #define GLM_VERSION_MAJOR 0
7 #define GLM_VERSION_MINOR 9
8 #define GLM_VERSION_PATCH 9
9 #define GLM_VERSION_REVISION 6
10 #define GLM_VERSION 996
11 #define GLM_VERSION_MESSAGE "GLM: version 0.9.9.6"
12 
13 #define GLM_SETUP_INCLUDED GLM_VERSION
14 
16 // Active states
17 
18 #define GLM_DISABLE 0
19 #define GLM_ENABLE 1
20 
22 // Messages
23 
24 #if defined(GLM_FORCE_MESSAGES)
25 # define GLM_MESSAGES GLM_ENABLE
26 #else
27 # define GLM_MESSAGES GLM_DISABLE
28 #endif
29 
31 // Detect the platform
32 
33 #include "../simd/platform.h"
34 
36 // Build model
37 
38 #if defined(__arch64__) || defined(__LP64__) || defined(_M_X64) || defined(__ppc64__) || defined(__x86_64__)
39 # define GLM_MODEL GLM_MODEL_64
40 #elif defined(__i386__) || defined(__ppc__)
41 # define GLM_MODEL GLM_MODEL_32
42 #else
43 # define GLM_MODEL GLM_MODEL_32
44 #endif//
45 
46 #if !defined(GLM_MODEL) && GLM_COMPILER != 0
47 # error "GLM_MODEL undefined, your compiler may not be supported by GLM. Add #define GLM_MODEL 0 to ignore this message."
48 #endif//GLM_MODEL
49 
51 // C++ Version
52 
53 // User defines: GLM_FORCE_CXX98, GLM_FORCE_CXX03, GLM_FORCE_CXX11, GLM_FORCE_CXX14, GLM_FORCE_CXX17, GLM_FORCE_CXX2A
54 
55 #define GLM_LANG_CXX98_FLAG (1 << 1)
56 #define GLM_LANG_CXX03_FLAG (1 << 2)
57 #define GLM_LANG_CXX0X_FLAG (1 << 3)
58 #define GLM_LANG_CXX11_FLAG (1 << 4)
59 #define GLM_LANG_CXX14_FLAG (1 << 5)
60 #define GLM_LANG_CXX17_FLAG (1 << 6)
61 #define GLM_LANG_CXX2A_FLAG (1 << 7)
62 #define GLM_LANG_CXXMS_FLAG (1 << 8)
63 #define GLM_LANG_CXXGNU_FLAG (1 << 9)
64 
65 #define GLM_LANG_CXX98 GLM_LANG_CXX98_FLAG
66 #define GLM_LANG_CXX03 (GLM_LANG_CXX98 | GLM_LANG_CXX03_FLAG)
67 #define GLM_LANG_CXX0X (GLM_LANG_CXX03 | GLM_LANG_CXX0X_FLAG)
68 #define GLM_LANG_CXX11 (GLM_LANG_CXX0X | GLM_LANG_CXX11_FLAG)
69 #define GLM_LANG_CXX14 (GLM_LANG_CXX11 | GLM_LANG_CXX14_FLAG)
70 #define GLM_LANG_CXX17 (GLM_LANG_CXX14 | GLM_LANG_CXX17_FLAG)
71 #define GLM_LANG_CXX2A (GLM_LANG_CXX17 | GLM_LANG_CXX2A_FLAG)
72 #define GLM_LANG_CXXMS GLM_LANG_CXXMS_FLAG
73 #define GLM_LANG_CXXGNU GLM_LANG_CXXGNU_FLAG
74 
75 #if (defined(_MSC_EXTENSIONS))
76 # define GLM_LANG_EXT GLM_LANG_CXXMS_FLAG
77 #elif ((GLM_COMPILER & (GLM_COMPILER_CLANG | GLM_COMPILER_GCC)) && (GLM_ARCH & GLM_ARCH_SIMD_BIT))
78 # define GLM_LANG_EXT GLM_LANG_CXXMS_FLAG
79 #else
80 # define GLM_LANG_EXT 0
81 #endif
82 
83 #if (defined(GLM_FORCE_CXX_UNKNOWN))
84 # define GLM_LANG 0
85 #elif defined(GLM_FORCE_CXX2A)
86 # define GLM_LANG (GLM_LANG_CXX2A | GLM_LANG_EXT)
87 # define GLM_LANG_STL11_FORCED
88 #elif defined(GLM_FORCE_CXX17)
89 # define GLM_LANG (GLM_LANG_CXX17 | GLM_LANG_EXT)
90 # define GLM_LANG_STL11_FORCED
91 #elif defined(GLM_FORCE_CXX14)
92 # define GLM_LANG (GLM_LANG_CXX14 | GLM_LANG_EXT)
93 # define GLM_LANG_STL11_FORCED
94 #elif defined(GLM_FORCE_CXX11)
95 # define GLM_LANG (GLM_LANG_CXX11 | GLM_LANG_EXT)
96 # define GLM_LANG_STL11_FORCED
97 #elif defined(GLM_FORCE_CXX03)
98 # define GLM_LANG (GLM_LANG_CXX03 | GLM_LANG_EXT)
99 #elif defined(GLM_FORCE_CXX98)
100 # define GLM_LANG (GLM_LANG_CXX98 | GLM_LANG_EXT)
101 #else
102 # if GLM_COMPILER & GLM_COMPILER_VC && defined(_MSVC_LANG)
103 # if GLM_COMPILER >= GLM_COMPILER_VC15_7
104 # define GLM_LANG_PLATFORM _MSVC_LANG
105 # elif GLM_COMPILER >= GLM_COMPILER_VC15
106 # if _MSVC_LANG > 201402L
107 # define GLM_LANG_PLATFORM 201402L
108 # else
109 # define GLM_LANG_PLATFORM _MSVC_LANG
110 # endif
111 # else
112 # define GLM_LANG_PLATFORM 0
113 # endif
114 # else
115 # define GLM_LANG_PLATFORM 0
116 # endif
117 
118 # if __cplusplus > 201703L || GLM_LANG_PLATFORM > 201703L
119 # define GLM_LANG (GLM_LANG_CXX2A | GLM_LANG_EXT)
120 # elif __cplusplus == 201703L || GLM_LANG_PLATFORM == 201703L
121 # define GLM_LANG (GLM_LANG_CXX17 | GLM_LANG_EXT)
122 # elif __cplusplus == 201402L || __cplusplus == 201500L || GLM_LANG_PLATFORM == 201402L
123 # define GLM_LANG (GLM_LANG_CXX14 | GLM_LANG_EXT)
124 # elif __cplusplus == 201103L || GLM_LANG_PLATFORM == 201103L
125 # define GLM_LANG (GLM_LANG_CXX11 | GLM_LANG_EXT)
126 # elif defined(__INTEL_CXX11_MODE__) || defined(_MSC_VER) || defined(__GXX_EXPERIMENTAL_CXX0X__)
127 # define GLM_LANG (GLM_LANG_CXX0X | GLM_LANG_EXT)
128 # elif __cplusplus == 199711L
129 # define GLM_LANG (GLM_LANG_CXX98 | GLM_LANG_EXT)
130 # else
131 # define GLM_LANG (0 | GLM_LANG_EXT)
132 # endif
133 #endif
134 
136 // C++ feature detection
137 
138 // http://clang.llvm.org/cxx_status.html
139 // http://gcc.gnu.org/projects/cxx0x.html
140 // http://msdn.microsoft.com/en-us/library/vstudio/hh567368(v=vs.120).aspx
141 
142 // Android has multiple STLs but C++11 STL detection doesn't always work #284 #564
143 #if GLM_PLATFORM == GLM_PLATFORM_ANDROID && !defined(GLM_LANG_STL11_FORCED)
144 # define GLM_HAS_CXX11_STL 0
145 #elif GLM_COMPILER & GLM_COMPILER_CLANG
146 # if (defined(_LIBCPP_VERSION) || (GLM_LANG & GLM_LANG_CXX11_FLAG) || defined(GLM_LANG_STL11_FORCED))
147 # define GLM_HAS_CXX11_STL 1
148 # else
149 # define GLM_HAS_CXX11_STL 0
150 # endif
151 #elif GLM_LANG & GLM_LANG_CXX11_FLAG
152 # define GLM_HAS_CXX11_STL 1
153 #else
154 # define GLM_HAS_CXX11_STL ((GLM_LANG & GLM_LANG_CXX0X_FLAG) && (\
155  ((GLM_COMPILER & GLM_COMPILER_GCC) && (GLM_COMPILER >= GLM_COMPILER_GCC48)) || \
156  ((GLM_COMPILER & GLM_COMPILER_VC) && (GLM_COMPILER >= GLM_COMPILER_VC12)) || \
157  ((GLM_PLATFORM != GLM_PLATFORM_WINDOWS) && (GLM_COMPILER & GLM_COMPILER_INTEL) && (GLM_COMPILER >= GLM_COMPILER_INTEL15))))
158 #endif
159 
160 // N1720
161 #if GLM_COMPILER & GLM_COMPILER_CLANG
162 # define GLM_HAS_STATIC_ASSERT __has_feature(cxx_static_assert)
163 #elif GLM_LANG & GLM_LANG_CXX11_FLAG
164 # define GLM_HAS_STATIC_ASSERT 1
165 #else
166 # define GLM_HAS_STATIC_ASSERT ((GLM_LANG & GLM_LANG_CXX0X_FLAG) && (\
167  ((GLM_COMPILER & GLM_COMPILER_CUDA)) || \
168  ((GLM_COMPILER & GLM_COMPILER_VC))))
169 #endif
170 
171 // N1988
172 #if GLM_LANG & GLM_LANG_CXX11_FLAG
173 # define GLM_HAS_EXTENDED_INTEGER_TYPE 1
174 #else
175 # define GLM_HAS_EXTENDED_INTEGER_TYPE (\
176  ((GLM_LANG & GLM_LANG_CXX0X_FLAG) && (GLM_COMPILER & GLM_COMPILER_VC)) || \
177  ((GLM_LANG & GLM_LANG_CXX0X_FLAG) && (GLM_COMPILER & GLM_COMPILER_CUDA)) || \
178  ((GLM_LANG & GLM_LANG_CXX0X_FLAG) && (GLM_COMPILER & GLM_COMPILER_CLANG)))
179 #endif
180 
181 // N2672 Initializer lists http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2672.htm
182 #if GLM_COMPILER & GLM_COMPILER_CLANG
183 # define GLM_HAS_INITIALIZER_LISTS __has_feature(cxx_generalized_initializers)
184 #elif GLM_LANG & GLM_LANG_CXX11_FLAG
185 # define GLM_HAS_INITIALIZER_LISTS 1
186 #else
187 # define GLM_HAS_INITIALIZER_LISTS ((GLM_LANG & GLM_LANG_CXX0X_FLAG) && (\
188  ((GLM_COMPILER & GLM_COMPILER_VC) && (GLM_COMPILER >= GLM_COMPILER_VC15)) || \
189  ((GLM_COMPILER & GLM_COMPILER_INTEL) && (GLM_COMPILER >= GLM_COMPILER_INTEL14)) || \
190  ((GLM_COMPILER & GLM_COMPILER_CUDA))))
191 #endif
192 
193 // N2544 Unrestricted unions http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2544.pdf
194 #if GLM_COMPILER & GLM_COMPILER_CLANG
195 # define GLM_HAS_UNRESTRICTED_UNIONS __has_feature(cxx_unrestricted_unions)
196 #elif GLM_LANG & GLM_LANG_CXX11_FLAG
197 # define GLM_HAS_UNRESTRICTED_UNIONS 1
198 #else
199 # define GLM_HAS_UNRESTRICTED_UNIONS (GLM_LANG & GLM_LANG_CXX0X_FLAG) && (\
200  (GLM_COMPILER & GLM_COMPILER_VC) || \
201  ((GLM_COMPILER & GLM_COMPILER_CUDA)))
202 #endif
203 
204 // N2346
205 #if GLM_COMPILER & GLM_COMPILER_CLANG
206 # define GLM_HAS_DEFAULTED_FUNCTIONS __has_feature(cxx_defaulted_functions)
207 #elif GLM_LANG & GLM_LANG_CXX11_FLAG
208 # define GLM_HAS_DEFAULTED_FUNCTIONS 1
209 #else
210 # define GLM_HAS_DEFAULTED_FUNCTIONS ((GLM_LANG & GLM_LANG_CXX0X_FLAG) && (\
211  ((GLM_COMPILER & GLM_COMPILER_VC) && (GLM_COMPILER >= GLM_COMPILER_VC12)) || \
212  ((GLM_COMPILER & GLM_COMPILER_INTEL)) || \
213  (GLM_COMPILER & GLM_COMPILER_CUDA)))
214 #endif
215 
216 // N2118
217 #if GLM_COMPILER & GLM_COMPILER_CLANG
218 # define GLM_HAS_RVALUE_REFERENCES __has_feature(cxx_rvalue_references)
219 #elif GLM_LANG & GLM_LANG_CXX11_FLAG
220 # define GLM_HAS_RVALUE_REFERENCES 1
221 #else
222 # define GLM_HAS_RVALUE_REFERENCES ((GLM_LANG & GLM_LANG_CXX0X_FLAG) && (\
223  ((GLM_COMPILER & GLM_COMPILER_VC)) || \
224  ((GLM_COMPILER & GLM_COMPILER_CUDA))))
225 #endif
226 
227 // N2437 http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2437.pdf
228 #if GLM_COMPILER & GLM_COMPILER_CLANG
229 # define GLM_HAS_EXPLICIT_CONVERSION_OPERATORS __has_feature(cxx_explicit_conversions)
230 #elif GLM_LANG & GLM_LANG_CXX11_FLAG
231 # define GLM_HAS_EXPLICIT_CONVERSION_OPERATORS 1
232 #else
233 # define GLM_HAS_EXPLICIT_CONVERSION_OPERATORS ((GLM_LANG & GLM_LANG_CXX0X_FLAG) && (\
234  ((GLM_COMPILER & GLM_COMPILER_INTEL) && (GLM_COMPILER >= GLM_COMPILER_INTEL14)) || \
235  ((GLM_COMPILER & GLM_COMPILER_VC) && (GLM_COMPILER >= GLM_COMPILER_VC12)) || \
236  ((GLM_COMPILER & GLM_COMPILER_CUDA))))
237 #endif
238 
239 // N2258 http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2258.pdf
240 #if GLM_COMPILER & GLM_COMPILER_CLANG
241 # define GLM_HAS_TEMPLATE_ALIASES __has_feature(cxx_alias_templates)
242 #elif GLM_LANG & GLM_LANG_CXX11_FLAG
243 # define GLM_HAS_TEMPLATE_ALIASES 1
244 #else
245 # define GLM_HAS_TEMPLATE_ALIASES ((GLM_LANG & GLM_LANG_CXX0X_FLAG) && (\
246  ((GLM_COMPILER & GLM_COMPILER_INTEL)) || \
247  ((GLM_COMPILER & GLM_COMPILER_VC) && (GLM_COMPILER >= GLM_COMPILER_VC12)) || \
248  ((GLM_COMPILER & GLM_COMPILER_CUDA))))
249 #endif
250 
251 // N2930 http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2930.html
252 #if GLM_COMPILER & GLM_COMPILER_CLANG
253 # define GLM_HAS_RANGE_FOR __has_feature(cxx_range_for)
254 #elif GLM_LANG & GLM_LANG_CXX11_FLAG
255 # define GLM_HAS_RANGE_FOR 1
256 #else
257 # define GLM_HAS_RANGE_FOR ((GLM_LANG & GLM_LANG_CXX0X_FLAG) && (\
258  ((GLM_COMPILER & GLM_COMPILER_INTEL)) || \
259  ((GLM_COMPILER & GLM_COMPILER_VC)) || \
260  ((GLM_COMPILER & GLM_COMPILER_CUDA))))
261 #endif
262 
263 // N2341 http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2341.pdf
264 #if GLM_COMPILER & GLM_COMPILER_CLANG
265 # define GLM_HAS_ALIGNOF __has_feature(cxx_alignas)
266 #elif GLM_LANG & GLM_LANG_CXX11_FLAG
267 # define GLM_HAS_ALIGNOF 1
268 #else
269 # define GLM_HAS_ALIGNOF ((GLM_LANG & GLM_LANG_CXX0X_FLAG) && (\
270  ((GLM_COMPILER & GLM_COMPILER_INTEL) && (GLM_COMPILER >= GLM_COMPILER_INTEL15)) || \
271  ((GLM_COMPILER & GLM_COMPILER_VC) && (GLM_COMPILER >= GLM_COMPILER_VC14)) || \
272  ((GLM_COMPILER & GLM_COMPILER_CUDA))))
273 #endif
274 
275 // N2235 Generalized Constant Expressions http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2235.pdf
276 // N3652 Extended Constant Expressions http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3652.html
277 #if (GLM_ARCH & GLM_ARCH_SIMD_BIT) // Compiler SIMD intrinsics don't support constexpr...
278 # define GLM_HAS_CONSTEXPR 0
279 #elif (GLM_COMPILER & GLM_COMPILER_CLANG)
280 # define GLM_HAS_CONSTEXPR __has_feature(cxx_relaxed_constexpr)
281 #elif (GLM_LANG & GLM_LANG_CXX14_FLAG)
282 # define GLM_HAS_CONSTEXPR 1
283 #else
284 # define GLM_HAS_CONSTEXPR ((GLM_LANG & GLM_LANG_CXX0X_FLAG) && GLM_HAS_INITIALIZER_LISTS && (\
285  ((GLM_COMPILER & GLM_COMPILER_INTEL) && (GLM_COMPILER >= GLM_COMPILER_INTEL17)) || \
286  ((GLM_COMPILER & GLM_COMPILER_VC) && (GLM_COMPILER >= GLM_COMPILER_VC15))))
287 #endif
288 
289 #if GLM_HAS_CONSTEXPR
290 # define GLM_CONSTEXPR constexpr
291 #else
292 # define GLM_CONSTEXPR
293 #endif
294 
295 //
296 #if GLM_HAS_CONSTEXPR
297 # if (GLM_COMPILER & GLM_COMPILER_CLANG)
298 # if __has_feature(cxx_if_constexpr)
299 # define GLM_HAS_IF_CONSTEXPR 1
300 # else
301 # define GLM_HAS_IF_CONSTEXPR 0
302 # endif
303 # elif (GLM_LANG & GLM_LANG_CXX17_FLAG)
304 # define GLM_HAS_IF_CONSTEXPR 1
305 # else
306 # define GLM_HAS_IF_CONSTEXPR 0
307 # endif
308 #else
309 # define GLM_HAS_IF_CONSTEXPR 0
310 #endif
311 
312 #if GLM_HAS_IF_CONSTEXPR
313 # define GLM_IF_CONSTEXPR if constexpr
314 #else
315 # define GLM_IF_CONSTEXPR if
316 #endif
317 
318 //
319 #if GLM_LANG & GLM_LANG_CXX11_FLAG
320 # define GLM_HAS_ASSIGNABLE 1
321 #else
322 # define GLM_HAS_ASSIGNABLE ((GLM_LANG & GLM_LANG_CXX0X_FLAG) && (\
323  ((GLM_COMPILER & GLM_COMPILER_VC) && (GLM_COMPILER >= GLM_COMPILER_VC15)) || \
324  ((GLM_COMPILER & GLM_COMPILER_GCC) && (GLM_COMPILER >= GLM_COMPILER_GCC49))))
325 #endif
326 
327 //
328 #define GLM_HAS_TRIVIAL_QUERIES 0
329 
330 //
331 #if GLM_LANG & GLM_LANG_CXX11_FLAG
332 # define GLM_HAS_MAKE_SIGNED 1
333 #else
334 # define GLM_HAS_MAKE_SIGNED ((GLM_LANG & GLM_LANG_CXX0X_FLAG) && (\
335  ((GLM_COMPILER & GLM_COMPILER_VC) && (GLM_COMPILER >= GLM_COMPILER_VC12)) || \
336  ((GLM_COMPILER & GLM_COMPILER_CUDA))))
337 #endif
338 
339 //
340 #if defined(GLM_FORCE_INTRINSICS)
341 # define GLM_HAS_BITSCAN_WINDOWS ((GLM_PLATFORM & GLM_PLATFORM_WINDOWS) && (\
342  ((GLM_COMPILER & GLM_COMPILER_INTEL)) || \
343  ((GLM_COMPILER & GLM_COMPILER_VC) && (GLM_COMPILER >= GLM_COMPILER_VC14) && (GLM_ARCH & GLM_ARCH_X86_BIT))))
344 #else
345 # define GLM_HAS_BITSCAN_WINDOWS 0
346 #endif
347 
349 // OpenMP
350 #ifdef _OPENMP
351 # if GLM_COMPILER & GLM_COMPILER_GCC
352 # if GLM_COMPILER >= GLM_COMPILER_GCC61
353 # define GLM_HAS_OPENMP 45
354 # elif GLM_COMPILER >= GLM_COMPILER_GCC49
355 # define GLM_HAS_OPENMP 40
356 # elif GLM_COMPILER >= GLM_COMPILER_GCC47
357 # define GLM_HAS_OPENMP 31
358 # else
359 # define GLM_HAS_OPENMP 0
360 # endif
361 # elif GLM_COMPILER & GLM_COMPILER_CLANG
362 # if GLM_COMPILER >= GLM_COMPILER_CLANG38
363 # define GLM_HAS_OPENMP 31
364 # else
365 # define GLM_HAS_OPENMP 0
366 # endif
367 # elif GLM_COMPILER & GLM_COMPILER_VC
368 # define GLM_HAS_OPENMP 20
369 # elif GLM_COMPILER & GLM_COMPILER_INTEL
370 # if GLM_COMPILER >= GLM_COMPILER_INTEL16
371 # define GLM_HAS_OPENMP 40
372 # else
373 # define GLM_HAS_OPENMP 0
374 # endif
375 # else
376 # define GLM_HAS_OPENMP 0
377 # endif
378 #else
379 # define GLM_HAS_OPENMP 0
380 #endif
381 
383 // nullptr
384 
385 #if GLM_LANG & GLM_LANG_CXX0X_FLAG
386 # define GLM_CONFIG_NULLPTR GLM_ENABLE
387 #else
388 # define GLM_CONFIG_NULLPTR GLM_DISABLE
389 #endif
390 
391 #if GLM_CONFIG_NULLPTR == GLM_ENABLE
392 # define GLM_NULLPTR nullptr
393 #else
394 # define GLM_NULLPTR 0
395 #endif
396 
398 // Static assert
399 
400 #if GLM_HAS_STATIC_ASSERT
401 # define GLM_STATIC_ASSERT(x, message) static_assert(x, message)
402 #elif GLM_COMPILER & GLM_COMPILER_VC
403 # define GLM_STATIC_ASSERT(x, message) typedef char __CASSERT__##__LINE__[(x) ? 1 : -1]
404 #else
405 # define GLM_STATIC_ASSERT(x, message) assert(x)
406 #endif//GLM_LANG
407 
409 // Qualifiers
410 
411 #if GLM_COMPILER & GLM_COMPILER_CUDA
412 # define GLM_CUDA_FUNC_DEF __device__ __host__
413 # define GLM_CUDA_FUNC_DECL __device__ __host__
414 #else
415 # define GLM_CUDA_FUNC_DEF
416 # define GLM_CUDA_FUNC_DECL
417 #endif
418 
419 #if defined(GLM_FORCE_INLINE)
420 # if GLM_COMPILER & GLM_COMPILER_VC
421 # define GLM_INLINE __forceinline
422 # define GLM_NEVER_INLINE __declspec((noinline))
423 # elif GLM_COMPILER & (GLM_COMPILER_GCC | GLM_COMPILER_CLANG)
424 # define GLM_INLINE inline __attribute__((__always_inline__))
425 # define GLM_NEVER_INLINE __attribute__((__noinline__))
426 # elif GLM_COMPILER & GLM_COMPILER_CUDA
427 # define GLM_INLINE __forceinline__
428 # define GLM_NEVER_INLINE __noinline__
429 # else
430 # define GLM_INLINE inline
431 # define GLM_NEVER_INLINE
432 # endif//GLM_COMPILER
433 #else
434 # define GLM_INLINE inline
435 # define GLM_NEVER_INLINE
436 #endif//defined(GLM_FORCE_INLINE)
437 
438 #define GLM_FUNC_DECL GLM_CUDA_FUNC_DECL
439 #define GLM_FUNC_QUALIFIER GLM_CUDA_FUNC_DEF GLM_INLINE
440 
442 // Swizzle operators
443 
444 // User defines: GLM_FORCE_SWIZZLE
445 
446 #define GLM_SWIZZLE_DISABLED 0
447 #define GLM_SWIZZLE_OPERATOR 1
448 #define GLM_SWIZZLE_FUNCTION 2
449 
450 #if defined(GLM_FORCE_XYZW_ONLY)
451 # undef GLM_FORCE_SWIZZLE
452 #endif
453 
454 #if defined(GLM_SWIZZLE)
455 # pragma message("GLM: GLM_SWIZZLE is deprecated, use GLM_FORCE_SWIZZLE instead.")
456 # define GLM_FORCE_SWIZZLE
457 #endif
458 
459 #if defined(GLM_FORCE_SWIZZLE) && (GLM_LANG & GLM_LANG_CXXMS_FLAG)
460 # define GLM_CONFIG_SWIZZLE GLM_SWIZZLE_OPERATOR
461 #elif defined(GLM_FORCE_SWIZZLE)
462 # define GLM_CONFIG_SWIZZLE GLM_SWIZZLE_FUNCTION
463 #else
464 # define GLM_CONFIG_SWIZZLE GLM_SWIZZLE_DISABLED
465 #endif
466 
468 // Allows using not basic types as genType
469 
470 // #define GLM_FORCE_UNRESTRICTED_GENTYPE
471 
472 #ifdef GLM_FORCE_UNRESTRICTED_GENTYPE
473 # define GLM_CONFIG_UNRESTRICTED_GENTYPE GLM_ENABLE
474 #else
475 # define GLM_CONFIG_UNRESTRICTED_GENTYPE GLM_DISABLE
476 #endif
477 
479 // Clip control, define GLM_FORCE_DEPTH_ZERO_TO_ONE before including GLM
480 // to use a clip space between 0 to 1.
481 // Coordinate system, define GLM_FORCE_LEFT_HANDED before including GLM
482 // to use left handed coordinate system by default.
483 
484 #define GLM_CLIP_CONTROL_ZO_BIT (1 << 0) // ZERO_TO_ONE
485 #define GLM_CLIP_CONTROL_NO_BIT (1 << 1) // NEGATIVE_ONE_TO_ONE
486 #define GLM_CLIP_CONTROL_LH_BIT (1 << 2) // LEFT_HANDED, For DirectX, Metal, Vulkan
487 #define GLM_CLIP_CONTROL_RH_BIT (1 << 3) // RIGHT_HANDED, For OpenGL, default in GLM
488 
489 #define GLM_CLIP_CONTROL_LH_ZO (GLM_CLIP_CONTROL_LH_BIT | GLM_CLIP_CONTROL_ZO_BIT)
490 #define GLM_CLIP_CONTROL_LH_NO (GLM_CLIP_CONTROL_LH_BIT | GLM_CLIP_CONTROL_NO_BIT)
491 #define GLM_CLIP_CONTROL_RH_ZO (GLM_CLIP_CONTROL_RH_BIT | GLM_CLIP_CONTROL_ZO_BIT)
492 #define GLM_CLIP_CONTROL_RH_NO (GLM_CLIP_CONTROL_RH_BIT | GLM_CLIP_CONTROL_NO_BIT)
493 
494 #ifdef GLM_FORCE_DEPTH_ZERO_TO_ONE
495 # ifdef GLM_FORCE_LEFT_HANDED
496 # define GLM_CONFIG_CLIP_CONTROL GLM_CLIP_CONTROL_LH_ZO
497 # else
498 # define GLM_CONFIG_CLIP_CONTROL GLM_CLIP_CONTROL_RH_ZO
499 # endif
500 #else
501 # ifdef GLM_FORCE_LEFT_HANDED
502 # define GLM_CONFIG_CLIP_CONTROL GLM_CLIP_CONTROL_LH_NO
503 # else
504 # define GLM_CONFIG_CLIP_CONTROL GLM_CLIP_CONTROL_RH_NO
505 # endif
506 #endif
507 
509 // Qualifiers
510 
511 #if (GLM_COMPILER & GLM_COMPILER_VC) || ((GLM_COMPILER & GLM_COMPILER_INTEL) && (GLM_PLATFORM & GLM_PLATFORM_WINDOWS))
512 # define GLM_DEPRECATED __declspec(deprecated)
513 # define GLM_ALIGNED_TYPEDEF(type, name, alignment) typedef __declspec(align(alignment)) type name
514 #elif GLM_COMPILER & (GLM_COMPILER_GCC | GLM_COMPILER_CLANG | GLM_COMPILER_INTEL)
515 # define GLM_DEPRECATED __attribute__((__deprecated__))
516 # define GLM_ALIGNED_TYPEDEF(type, name, alignment) typedef type name __attribute__((aligned(alignment)))
517 #elif GLM_COMPILER & GLM_COMPILER_CUDA
518 # define GLM_DEPRECATED
519 # define GLM_ALIGNED_TYPEDEF(type, name, alignment) typedef type name __align__(alignment)
520 #else
521 # define GLM_DEPRECATED
522 # define GLM_ALIGNED_TYPEDEF(type, name, alignment) typedef type name
523 #endif
524 
526 
527 #ifdef GLM_FORCE_EXPLICIT_CTOR
528 # define GLM_EXPLICIT explicit
529 #else
530 # define GLM_EXPLICIT
531 #endif
532 
534 // SYCL
535 
536 #if GLM_COMPILER==GLM_COMPILER_SYCL
537 
538 #include <CL/sycl.hpp>
539 #include <limits>
540 
541 namespace glm {
542 namespace std {
543  // Import SYCL's functions into the namespace glm::std to force their usages.
544  // It's important to use the math built-in function (sin, exp, ...)
545  // of SYCL instead the std ones.
546  using namespace cl::sycl;
547 
549  // Import some "harmless" std's stuffs used by glm into
550  // the new glm::std namespace.
551  template<typename T>
552  using numeric_limits = ::std::numeric_limits<T>;
553 
554  using ::std::size_t;
555 
560 
565 
566  using ::std::make_unsigned;
568 } //namespace std
569 } //namespace glm
570 
571 #endif
572 
574 
576 // Length type: all length functions returns a length_t type.
577 // When GLM_FORCE_SIZE_T_LENGTH is defined, length_t is a typedef of size_t otherwise
578 // length_t is a typedef of int like GLSL defines it.
579 
580 #define GLM_LENGTH_INT 1
581 #define GLM_LENGTH_SIZE_T 2
582 
583 #ifdef GLM_FORCE_SIZE_T_LENGTH
584 # define GLM_CONFIG_LENGTH_TYPE GLM_LENGTH_SIZE_T
585 #else
586 # define GLM_CONFIG_LENGTH_TYPE GLM_LENGTH_INT
587 #endif
588 
589 namespace glm
590 {
591  using std::size_t;
592 # if GLM_CONFIG_LENGTH_TYPE == GLM_LENGTH_SIZE_T
593  typedef size_t length_t;
594 # else
595  typedef int length_t;
596 # endif
597 }//namespace glm
598 
600 // constexpr
601 
602 #if GLM_HAS_CONSTEXPR
603 # define GLM_CONFIG_CONSTEXP GLM_ENABLE
604 
605  namespace glm
606  {
607  template<typename T, std::size_t N>
608  constexpr std::size_t countof(T const (&)[N])
609  {
610  return N;
611  }
612  }//namespace glm
613 # define GLM_COUNTOF(arr) glm::countof(arr)
614 #elif defined(_MSC_VER)
615 # define GLM_CONFIG_CONSTEXP GLM_DISABLE
616 
617 # define GLM_COUNTOF(arr) _countof(arr)
618 #else
619 # define GLM_CONFIG_CONSTEXP GLM_DISABLE
620 
621 # define GLM_COUNTOF(arr) (sizeof(arr) / sizeof(arr[0]))
622 #endif
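The constexpr branch above deduces the element count from a reference-to-array parameter, so the array never decays to a pointer; a minimal standalone sketch:

```cpp
#include <cstddef>

// Same technique as glm::countof above: N is deduced from the array type,
// making the count available at compile time.
template<typename T, std::size_t N>
constexpr std::size_t countof(T const (&)[N])
{
    return N;
}

static_assert(countof("abc") == 4, "string literal includes the terminator");
```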
623 
625 // uint
626 
627 namespace glm{
628 namespace detail
629 {
630  template<typename T>
631  struct is_int
632  {
633  enum test {value = 0};
634  };
635 
636  template<>
637  struct is_int<unsigned int>
638  {
639  enum test {value = ~0};
640  };
641 
642  template<>
643  struct is_int<signed int>
644  {
645  enum test {value = ~0};
646  };
647 }//namespace detail
648 
649  typedef unsigned int uint;
650 }//namespace glm
651 
653 // 64-bit int
654 
655 #if GLM_HAS_EXTENDED_INTEGER_TYPE
656 # include <cstdint>
657 #endif
658 
659 namespace glm{
660 namespace detail
661 {
662 # if GLM_HAS_EXTENDED_INTEGER_TYPE
663  typedef std::uint64_t uint64;
664  typedef std::int64_t int64;
665 # elif (defined(__STDC_VERSION__) && (__STDC_VERSION__ >= 199901L)) // C99 detected, 64 bit types available
666  typedef uint64_t uint64;
667  typedef int64_t int64;
668 # elif GLM_COMPILER & GLM_COMPILER_VC
669  typedef unsigned __int64 uint64;
670  typedef signed __int64 int64;
671 # elif GLM_COMPILER & GLM_COMPILER_GCC
672 # pragma GCC diagnostic ignored "-Wlong-long"
673  __extension__ typedef unsigned long long uint64;
674  __extension__ typedef signed long long int64;
675 # elif (GLM_COMPILER & GLM_COMPILER_CLANG)
676 # pragma clang diagnostic ignored "-Wc++11-long-long"
677  typedef unsigned long long uint64;
678  typedef signed long long int64;
679 # else//unknown compiler
680  typedef unsigned long long uint64;
681  typedef signed long long int64;
682 # endif
683 }//namespace detail
684 }//namespace glm
685 
687 // make_unsigned
688 
689 #if GLM_HAS_MAKE_SIGNED
690 # include <type_traits>
691 
692 namespace glm{
693 namespace detail
694 {
695  using std::make_unsigned;
696 }//namespace detail
697 }//namespace glm
698 
699 #else
700 
701 namespace glm{
702 namespace detail
703 {
704  template<typename genType>
705  struct make_unsigned
706  {};
707 
708  template<>
709  struct make_unsigned<char>
710  {
711  typedef unsigned char type;
712  };
713 
714  template<>
715  struct make_unsigned<signed char>
716  {
717  typedef unsigned char type;
718  };
719 
720  template<>
721  struct make_unsigned<short>
722  {
723  typedef unsigned short type;
724  };
725 
726  template<>
727  struct make_unsigned<int>
728  {
729  typedef unsigned int type;
730  };
731 
732  template<>
733  struct make_unsigned<long>
734  {
735  typedef unsigned long type;
736  };
737 
738  template<>
739  struct make_unsigned<int64>
740  {
741  typedef uint64 type;
742  };
743 
744  template<>
745  struct make_unsigned<unsigned char>
746  {
747  typedef unsigned char type;
748  };
749 
750  template<>
751  struct make_unsigned<unsigned short>
752  {
753  typedef unsigned short type;
754  };
755 
756  template<>
757  struct make_unsigned<unsigned int>
758  {
759  typedef unsigned int type;
760  };
761 
762  template<>
763  struct make_unsigned<unsigned long>
764  {
765  typedef unsigned long type;
766  };
767 
768  template<>
769  struct make_unsigned<uint64>
770  {
771  typedef uint64 type;
772  };
773 }//namespace detail
774 }//namespace glm
775 #endif
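Where &lt;type_traits&gt; is unavailable, the specialization table above hand-rolls the signed-to-unsigned mapping; a cut-down sketch checked against the standard trait (assumes a C++11 compiler; the `_compat` name is hypothetical):

```cpp
#include <type_traits>

// Reduced reproduction of the fallback above: one specialization per
// integer type, each exposing its unsigned counterpart as ::type.
template<typename T> struct make_unsigned_compat {};
template<> struct make_unsigned_compat<int>          { typedef unsigned int type; };
template<> struct make_unsigned_compat<long>         { typedef unsigned long type; };
template<> struct make_unsigned_compat<unsigned int> { typedef unsigned int type; };

// The hand-rolled table agrees with the standard trait where both exist.
static_assert(std::is_same<make_unsigned_compat<int>::type,
                           std::make_unsigned<int>::type>::value,
              "fallback matches std::make_unsigned");
```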
776 
778 // Only use x, y, z, w as vector type components
779 
780 #ifdef GLM_FORCE_XYZW_ONLY
781 # define GLM_CONFIG_XYZW_ONLY GLM_ENABLE
782 #else
783 # define GLM_CONFIG_XYZW_ONLY GLM_DISABLE
784 #endif
785 
787 // Configure the use of defaulted initialized types
788 
789 #define GLM_CTOR_INIT_DISABLE 0
790 #define GLM_CTOR_INITIALIZER_LIST 1
791 #define GLM_CTOR_INITIALISATION 2
792 
793 #if defined(GLM_FORCE_CTOR_INIT) && GLM_HAS_INITIALIZER_LISTS
794 # define GLM_CONFIG_CTOR_INIT GLM_CTOR_INITIALIZER_LIST
795 #elif defined(GLM_FORCE_CTOR_INIT) && !GLM_HAS_INITIALIZER_LISTS
796 # define GLM_CONFIG_CTOR_INIT GLM_CTOR_INITIALISATION
797 #else
798 # define GLM_CONFIG_CTOR_INIT GLM_CTOR_INIT_DISABLE
799 #endif
800 
802 // Use SIMD instruction sets
803 
804 #if GLM_HAS_ALIGNOF && (GLM_LANG & GLM_LANG_CXXMS_FLAG) && (GLM_ARCH & GLM_ARCH_SIMD_BIT)
805 # define GLM_CONFIG_SIMD GLM_ENABLE
806 #else
807 # define GLM_CONFIG_SIMD GLM_DISABLE
808 #endif
809 
811 // Configure the use of defaulted function
812 
813 #if GLM_HAS_DEFAULTED_FUNCTIONS && GLM_CONFIG_CTOR_INIT == GLM_CTOR_INIT_DISABLE
814 # define GLM_CONFIG_DEFAULTED_FUNCTIONS GLM_ENABLE
815 # define GLM_DEFAULT = default
816 #else
817 # define GLM_CONFIG_DEFAULTED_FUNCTIONS GLM_DISABLE
818 # define GLM_DEFAULT
819 #endif
820 
822 // Configure the use of aligned gentypes
823 
824 #ifdef GLM_FORCE_ALIGNED // Legacy define
825 # define GLM_FORCE_DEFAULT_ALIGNED_GENTYPES
826 #endif
827 
828 #ifdef GLM_FORCE_DEFAULT_ALIGNED_GENTYPES
829 # define GLM_FORCE_ALIGNED_GENTYPES
830 #endif
831 
832 #if GLM_HAS_ALIGNOF && (GLM_LANG & GLM_LANG_CXXMS_FLAG) && (defined(GLM_FORCE_ALIGNED_GENTYPES) || (GLM_CONFIG_SIMD == GLM_ENABLE))
833 # define GLM_CONFIG_ALIGNED_GENTYPES GLM_ENABLE
834 #else
835 # define GLM_CONFIG_ALIGNED_GENTYPES GLM_DISABLE
836 #endif
837 
839 // Configure the use of anonymous structure as implementation detail
840 
841 #if ((GLM_CONFIG_SIMD == GLM_ENABLE) || (GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_OPERATOR) || (GLM_CONFIG_ALIGNED_GENTYPES == GLM_ENABLE))
842 # define GLM_CONFIG_ANONYMOUS_STRUCT GLM_ENABLE
843 #else
844 # define GLM_CONFIG_ANONYMOUS_STRUCT GLM_DISABLE
845 #endif
846 
848 // Silent warnings
849 
850 #ifdef GLM_FORCE_SILENT_WARNINGS
851 # define GLM_SILENT_WARNINGS GLM_ENABLE
852 #else
853 # define GLM_SILENT_WARNINGS GLM_DISABLE
854 #endif
855 
857 // Precision
858 
859 #define GLM_HIGHP 1
860 #define GLM_MEDIUMP 2
861 #define GLM_LOWP 3
862 
863 #if defined(GLM_FORCE_PRECISION_HIGHP_BOOL) || defined(GLM_PRECISION_HIGHP_BOOL)
864 # define GLM_CONFIG_PRECISION_BOOL GLM_HIGHP
865 #elif defined(GLM_FORCE_PRECISION_MEDIUMP_BOOL) || defined(GLM_PRECISION_MEDIUMP_BOOL)
866 # define GLM_CONFIG_PRECISION_BOOL GLM_MEDIUMP
867 #elif defined(GLM_FORCE_PRECISION_LOWP_BOOL) || defined(GLM_PRECISION_LOWP_BOOL)
868 # define GLM_CONFIG_PRECISION_BOOL GLM_LOWP
869 #else
870 # define GLM_CONFIG_PRECISION_BOOL GLM_HIGHP
871 #endif
872 
873 #if defined(GLM_FORCE_PRECISION_HIGHP_INT) || defined(GLM_PRECISION_HIGHP_INT)
874 # define GLM_CONFIG_PRECISION_INT GLM_HIGHP
875 #elif defined(GLM_FORCE_PRECISION_MEDIUMP_INT) || defined(GLM_PRECISION_MEDIUMP_INT)
876 # define GLM_CONFIG_PRECISION_INT GLM_MEDIUMP
877 #elif defined(GLM_FORCE_PRECISION_LOWP_INT) || defined(GLM_PRECISION_LOWP_INT)
878 # define GLM_CONFIG_PRECISION_INT GLM_LOWP
879 #else
880 # define GLM_CONFIG_PRECISION_INT GLM_HIGHP
881 #endif
882 
883 #if defined(GLM_FORCE_PRECISION_HIGHP_UINT) || defined(GLM_PRECISION_HIGHP_UINT)
884 # define GLM_CONFIG_PRECISION_UINT GLM_HIGHP
885 #elif defined(GLM_FORCE_PRECISION_MEDIUMP_UINT) || defined(GLM_PRECISION_MEDIUMP_UINT)
886 # define GLM_CONFIG_PRECISION_UINT GLM_MEDIUMP
887 #elif defined(GLM_FORCE_PRECISION_LOWP_UINT) || defined(GLM_PRECISION_LOWP_UINT)
888 # define GLM_CONFIG_PRECISION_UINT GLM_LOWP
889 #else
890 # define GLM_CONFIG_PRECISION_UINT GLM_HIGHP
891 #endif
892 
893 #if defined(GLM_FORCE_PRECISION_HIGHP_FLOAT) || defined(GLM_PRECISION_HIGHP_FLOAT)
894 # define GLM_CONFIG_PRECISION_FLOAT GLM_HIGHP
895 #elif defined(GLM_FORCE_PRECISION_MEDIUMP_FLOAT) || defined(GLM_PRECISION_MEDIUMP_FLOAT)
896 # define GLM_CONFIG_PRECISION_FLOAT GLM_MEDIUMP
897 #elif defined(GLM_FORCE_PRECISION_LOWP_FLOAT) || defined(GLM_PRECISION_LOWP_FLOAT)
898 # define GLM_CONFIG_PRECISION_FLOAT GLM_LOWP
899 #else
900 # define GLM_CONFIG_PRECISION_FLOAT GLM_HIGHP
901 #endif
902 
903 #if defined(GLM_FORCE_PRECISION_HIGHP_DOUBLE) || defined(GLM_PRECISION_HIGHP_DOUBLE)
904 # define GLM_CONFIG_PRECISION_DOUBLE GLM_HIGHP
905 #elif defined(GLM_FORCE_PRECISION_MEDIUMP_DOUBLE) || defined(GLM_PRECISION_MEDIUMP_DOUBLE)
906 # define GLM_CONFIG_PRECISION_DOUBLE GLM_MEDIUMP
907 #elif defined(GLM_FORCE_PRECISION_LOWP_DOUBLE) || defined(GLM_PRECISION_LOWP_DOUBLE)
908 # define GLM_CONFIG_PRECISION_DOUBLE GLM_LOWP
909 #else
910 # define GLM_CONFIG_PRECISION_DOUBLE GLM_HIGHP
911 #endif
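Each force macro must be defined before the first GLM include so the #elif ladders above can see it; a typical configuration fragment (assumption: standard usage pattern with glm/glm.hpp on the include path):

```cpp
// Defines must precede the first GLM include.
#define GLM_FORCE_PRECISION_MEDIUMP_FLOAT  // float vecs/mats use mediump
#define GLM_FORCE_PRECISION_HIGHP_INT      // explicit, same as the default
#include <glm/glm.hpp>
```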
912 
914 // Check inclusions of different versions of GLM
915 
916 #elif ((GLM_SETUP_INCLUDED != GLM_VERSION) && !defined(GLM_FORCE_IGNORE_VERSION))
917 # error "GLM error: A different version of GLM is already included. Define GLM_FORCE_IGNORE_VERSION before including GLM headers to ignore this error."
918 #elif GLM_SETUP_INCLUDED == GLM_VERSION
919 
921 // Messages
922 
923 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_MESSAGE_DISPLAYED)
924 # define GLM_MESSAGE_DISPLAYED
925 # define GLM_STR_HELPER(x) #x
926 # define GLM_STR(x) GLM_STR_HELPER(x)
927 
928  // Report GLM version
929 # pragma message (GLM_STR(GLM_VERSION_MESSAGE))
930 
931  // Report C++ language
932 # if (GLM_LANG & GLM_LANG_CXX2A_FLAG) && (GLM_LANG & GLM_LANG_EXT)
933 # pragma message("GLM: C++ 2A with extensions")
934 # elif (GLM_LANG & GLM_LANG_CXX2A_FLAG)
935 # pragma message("GLM: C++ 2A")
936 # elif (GLM_LANG & GLM_LANG_CXX17_FLAG) && (GLM_LANG & GLM_LANG_EXT)
937 # pragma message("GLM: C++ 17 with extensions")
938 # elif (GLM_LANG & GLM_LANG_CXX17_FLAG)
939 # pragma message("GLM: C++ 17")
940 # elif (GLM_LANG & GLM_LANG_CXX14_FLAG) && (GLM_LANG & GLM_LANG_EXT)
941 # pragma message("GLM: C++ 14 with extensions")
942 # elif (GLM_LANG & GLM_LANG_CXX14_FLAG)
943 # pragma message("GLM: C++ 14")
944 # elif (GLM_LANG & GLM_LANG_CXX11_FLAG) && (GLM_LANG & GLM_LANG_EXT)
945 # pragma message("GLM: C++ 11 with extensions")
946 # elif (GLM_LANG & GLM_LANG_CXX11_FLAG)
947 # pragma message("GLM: C++ 11")
948 # elif (GLM_LANG & GLM_LANG_CXX0X_FLAG) && (GLM_LANG & GLM_LANG_EXT)
949 # pragma message("GLM: C++ 0x with extensions")
950 # elif (GLM_LANG & GLM_LANG_CXX0X_FLAG)
951 # pragma message("GLM: C++ 0x")
952 # elif (GLM_LANG & GLM_LANG_CXX03_FLAG) && (GLM_LANG & GLM_LANG_EXT)
953 # pragma message("GLM: C++ 03 with extensions")
954 # elif (GLM_LANG & GLM_LANG_CXX03_FLAG)
955 # pragma message("GLM: C++ 03")
956 # elif (GLM_LANG & GLM_LANG_CXX98_FLAG) && (GLM_LANG & GLM_LANG_EXT)
957 # pragma message("GLM: C++ 98 with extensions")
958 # elif (GLM_LANG & GLM_LANG_CXX98_FLAG)
959 # pragma message("GLM: C++ 98")
960 # else
961 # pragma message("GLM: C++ language undetected")
962 # endif//GLM_LANG
963 
964  // Report compiler detection
965 # if GLM_COMPILER & GLM_COMPILER_CUDA
966 # pragma message("GLM: CUDA compiler detected")
967 # elif GLM_COMPILER & GLM_COMPILER_VC
968 # pragma message("GLM: Visual C++ compiler detected")
969 # elif GLM_COMPILER & GLM_COMPILER_CLANG
970 # pragma message("GLM: Clang compiler detected")
971 # elif GLM_COMPILER & GLM_COMPILER_INTEL
972 # pragma message("GLM: Intel Compiler detected")
973 # elif GLM_COMPILER & GLM_COMPILER_GCC
974 # pragma message("GLM: GCC compiler detected")
975 # else
976 # pragma message("GLM: Compiler not detected")
977 # endif
978 
979  // Report build target
980 # if (GLM_ARCH & GLM_ARCH_AVX2_BIT) && (GLM_MODEL == GLM_MODEL_64)
981 # pragma message("GLM: x86 64 bits with AVX2 instruction set build target")
982 # elif (GLM_ARCH & GLM_ARCH_AVX2_BIT) && (GLM_MODEL == GLM_MODEL_32)
983 # pragma message("GLM: x86 32 bits with AVX2 instruction set build target")
984 
985 # elif (GLM_ARCH & GLM_ARCH_AVX_BIT) && (GLM_MODEL == GLM_MODEL_64)
986 # pragma message("GLM: x86 64 bits with AVX instruction set build target")
987 # elif (GLM_ARCH & GLM_ARCH_AVX_BIT) && (GLM_MODEL == GLM_MODEL_32)
988 # pragma message("GLM: x86 32 bits with AVX instruction set build target")
989 
990 # elif (GLM_ARCH & GLM_ARCH_SSE42_BIT) && (GLM_MODEL == GLM_MODEL_64)
991 # pragma message("GLM: x86 64 bits with SSE4.2 instruction set build target")
992 # elif (GLM_ARCH & GLM_ARCH_SSE42_BIT) && (GLM_MODEL == GLM_MODEL_32)
993 # pragma message("GLM: x86 32 bits with SSE4.2 instruction set build target")
994 
995 # elif (GLM_ARCH & GLM_ARCH_SSE41_BIT) && (GLM_MODEL == GLM_MODEL_64)
996 # pragma message("GLM: x86 64 bits with SSE4.1 instruction set build target")
997 # elif (GLM_ARCH & GLM_ARCH_SSE41_BIT) && (GLM_MODEL == GLM_MODEL_32)
998 # pragma message("GLM: x86 32 bits with SSE4.1 instruction set build target")
999 
1000 # elif (GLM_ARCH & GLM_ARCH_SSSE3_BIT) && (GLM_MODEL == GLM_MODEL_64)
1001 # pragma message("GLM: x86 64 bits with SSSE3 instruction set build target")
1002 # elif (GLM_ARCH & GLM_ARCH_SSSE3_BIT) && (GLM_MODEL == GLM_MODEL_32)
1003 # pragma message("GLM: x86 32 bits with SSSE3 instruction set build target")
1004 
1005 # elif (GLM_ARCH & GLM_ARCH_SSE3_BIT) && (GLM_MODEL == GLM_MODEL_64)
1006 # pragma message("GLM: x86 64 bits with SSE3 instruction set build target")
1007 # elif (GLM_ARCH & GLM_ARCH_SSE3_BIT) && (GLM_MODEL == GLM_MODEL_32)
1008 # pragma message("GLM: x86 32 bits with SSE3 instruction set build target")
1009 
1010 # elif (GLM_ARCH & GLM_ARCH_SSE2_BIT) && (GLM_MODEL == GLM_MODEL_64)
1011 # pragma message("GLM: x86 64 bits with SSE2 instruction set build target")
1012 # elif (GLM_ARCH & GLM_ARCH_SSE2_BIT) && (GLM_MODEL == GLM_MODEL_32)
1013 # pragma message("GLM: x86 32 bits with SSE2 instruction set build target")
1014 
1015 # elif (GLM_ARCH & GLM_ARCH_X86_BIT) && (GLM_MODEL == GLM_MODEL_64)
1016 # pragma message("GLM: x86 64 bits build target")
1017 # elif (GLM_ARCH & GLM_ARCH_X86_BIT) && (GLM_MODEL == GLM_MODEL_32)
1018 # pragma message("GLM: x86 32 bits build target")
1019 
1020 # elif (GLM_ARCH & GLM_ARCH_NEON_BIT) && (GLM_MODEL == GLM_MODEL_64)
1021 # pragma message("GLM: ARM 64 bits with Neon instruction set build target")
1022 # elif (GLM_ARCH & GLM_ARCH_NEON_BIT) && (GLM_MODEL == GLM_MODEL_32)
1023 # pragma message("GLM: ARM 32 bits with Neon instruction set build target")
1024 
1025 # elif (GLM_ARCH & GLM_ARCH_ARM_BIT) && (GLM_MODEL == GLM_MODEL_64)
1026 # pragma message("GLM: ARM 64 bits build target")
1027 # elif (GLM_ARCH & GLM_ARCH_ARM_BIT) && (GLM_MODEL == GLM_MODEL_32)
1028 # pragma message("GLM: ARM 32 bits build target")
1029 
1030 # elif (GLM_ARCH & GLM_ARCH_MIPS_BIT) && (GLM_MODEL == GLM_MODEL_64)
1031 # pragma message("GLM: MIPS 64 bits build target")
1032 # elif (GLM_ARCH & GLM_ARCH_MIPS_BIT) && (GLM_MODEL == GLM_MODEL_32)
1033 # pragma message("GLM: MIPS 32 bits build target")
1034 
1035 # elif (GLM_ARCH & GLM_ARCH_PPC_BIT) && (GLM_MODEL == GLM_MODEL_64)
1036 # pragma message("GLM: PowerPC 64 bits build target")
1037 # elif (GLM_ARCH & GLM_ARCH_PPC_BIT) && (GLM_MODEL == GLM_MODEL_32)
1038 # pragma message("GLM: PowerPC 32 bits build target")
1039 # else
1040 # pragma message("GLM: Unknown build target")
1041 # endif//GLM_ARCH
1042 
1043  // Report platform name
1044 # if(GLM_PLATFORM & GLM_PLATFORM_QNXNTO)
1045 # pragma message("GLM: QNX platform detected")
1046 //# elif(GLM_PLATFORM & GLM_PLATFORM_IOS)
1047 //# pragma message("GLM: iOS platform detected")
1048 # elif(GLM_PLATFORM & GLM_PLATFORM_APPLE)
1049 # pragma message("GLM: Apple platform detected")
1050 # elif(GLM_PLATFORM & GLM_PLATFORM_WINCE)
1051 # pragma message("GLM: WinCE platform detected")
1052 # elif(GLM_PLATFORM & GLM_PLATFORM_WINDOWS)
1053 # pragma message("GLM: Windows platform detected")
1054 # elif(GLM_PLATFORM & GLM_PLATFORM_CHROME_NACL)
1055 # pragma message("GLM: Native Client detected")
1056 # elif(GLM_PLATFORM & GLM_PLATFORM_ANDROID)
1057 # pragma message("GLM: Android platform detected")
1058 # elif(GLM_PLATFORM & GLM_PLATFORM_LINUX)
1059 # pragma message("GLM: Linux platform detected")
1060 # elif(GLM_PLATFORM & GLM_PLATFORM_UNIX)
1061 # pragma message("GLM: UNIX platform detected")
1062 # elif(GLM_PLATFORM & GLM_PLATFORM_UNKNOWN)
1063 # pragma message("GLM: platform unknown")
1064 # else
1065 # pragma message("GLM: platform not detected")
1066 # endif
1067 
1068  // Report whether only xyzw components are used
1069 # if defined GLM_FORCE_XYZW_ONLY
1070 # pragma message("GLM: GLM_FORCE_XYZW_ONLY is defined. Only x, y, z and w component are available in vector type. This define disables swizzle operators and SIMD instruction sets.")
1071 # endif
1072 
1073  // Report swizzle operator support
1074 # if GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_OPERATOR
1075 # pragma message("GLM: GLM_FORCE_SWIZZLE is defined, swizzling operators enabled.")
1076 # elif GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_FUNCTION
1077 # pragma message("GLM: GLM_FORCE_SWIZZLE is defined, swizzling functions enabled. Enable compiler C++ language extensions to enable swizzle operators.")
1078 # else
1079 # pragma message("GLM: GLM_FORCE_SWIZZLE is undefined. swizzling functions or operators are disabled.")
1080 # endif
1081 
1082  // Report .length() type
1083 # if GLM_CONFIG_LENGTH_TYPE == GLM_LENGTH_SIZE_T
1084 # pragma message("GLM: GLM_FORCE_SIZE_T_LENGTH is defined. .length() returns a glm::length_t, a typedef of std::size_t.")
1085 # else
1086 # pragma message("GLM: GLM_FORCE_SIZE_T_LENGTH is undefined. .length() returns a glm::length_t, a typedef of int following GLSL.")
1087 # endif
1088 
1089 # if GLM_CONFIG_UNRESTRICTED_GENTYPE == GLM_ENABLE
1090 # pragma message("GLM: GLM_FORCE_UNRESTRICTED_GENTYPE is defined. Removes GLSL restrictions on valid function genTypes.")
1091 # else
1092 # pragma message("GLM: GLM_FORCE_UNRESTRICTED_GENTYPE is undefined. Follows strictly GLSL on valid function genTypes.")
1093 # endif
1094 
1095 # if GLM_SILENT_WARNINGS == GLM_ENABLE
1096 # pragma message("GLM: GLM_FORCE_SILENT_WARNINGS is defined. Ignores C++ warnings from using C++ language extensions.")
1097 # else
1098 # pragma message("GLM: GLM_FORCE_SILENT_WARNINGS is undefined. Shows C++ warnings from using C++ language extensions.")
1099 # endif
1100 
1101 # ifdef GLM_FORCE_SINGLE_ONLY
1102 # pragma message("GLM: GLM_FORCE_SINGLE_ONLY is defined. Using only single precision floating-point types.")
1103 # endif
1104 
1105 # if defined(GLM_FORCE_ALIGNED_GENTYPES) && (GLM_CONFIG_ALIGNED_GENTYPES == GLM_ENABLE)
1106 # undef GLM_FORCE_ALIGNED_GENTYPES
1107 # pragma message("GLM: GLM_FORCE_ALIGNED_GENTYPES is defined, allowing aligned types. This prevents the use of C++ constexpr.")
1108 # elif defined(GLM_FORCE_ALIGNED_GENTYPES) && (GLM_CONFIG_ALIGNED_GENTYPES == GLM_DISABLE)
1109 # undef GLM_FORCE_ALIGNED_GENTYPES
1110 # pragma message("GLM: GLM_FORCE_ALIGNED_GENTYPES is defined but is disabled. It requires C++11 and language extensions.")
1111 # endif
1112 
1113 # if defined(GLM_FORCE_DEFAULT_ALIGNED_GENTYPES)
1114 # if GLM_CONFIG_ALIGNED_GENTYPES == GLM_DISABLE
1115 # undef GLM_FORCE_DEFAULT_ALIGNED_GENTYPES
1116 # pragma message("GLM: GLM_FORCE_DEFAULT_ALIGNED_GENTYPES is defined but is disabled. It requires C++11 and language extensions.")
1117 # elif GLM_CONFIG_ALIGNED_GENTYPES == GLM_ENABLE
1118 # pragma message("GLM: GLM_FORCE_DEFAULT_ALIGNED_GENTYPES is defined. All gentypes (e.g. vec3) will be aligned and padded by default.")
1119 # endif
1120 # endif
1121 
1122 # if GLM_CONFIG_CLIP_CONTROL & GLM_CLIP_CONTROL_ZO_BIT
1123 # pragma message("GLM: GLM_FORCE_DEPTH_ZERO_TO_ONE is defined. Using zero to one depth clip space.")
1124 # else
1125 # pragma message("GLM: GLM_FORCE_DEPTH_ZERO_TO_ONE is undefined. Using negative one to one depth clip space.")
1126 # endif
1127 
1128 # if GLM_CONFIG_CLIP_CONTROL & GLM_CLIP_CONTROL_LH_BIT
1129 # pragma message("GLM: GLM_FORCE_LEFT_HANDED is defined. Using left handed coordinate system.")
1130 # else
1131 # pragma message("GLM: GLM_FORCE_LEFT_HANDED is undefined. Using right handed coordinate system.")
1132 # endif
1133 #endif//GLM_MESSAGES
1134 
1135 #endif//GLM_SETUP_INCLUDED
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00154.html ================================================ 0.9.9 API documentation: spline.hpp File Reference
spline.hpp File Reference

GLM_GTX_spline More...


Functions

template<typename genType >
GLM_FUNC_DECL genType catmullRom (genType const &v1, genType const &v2, genType const &v3, genType const &v4, typename genType::value_type const &s)
 Return a point from a Catmull-Rom curve. More...
 
template<typename genType >
GLM_FUNC_DECL genType cubic (genType const &v1, genType const &v2, genType const &v3, genType const &v4, typename genType::value_type const &s)
 Return a point from a cubic curve. More...
 
template<typename genType >
GLM_FUNC_DECL genType hermite (genType const &v1, genType const &t1, genType const &v2, genType const &t2, typename genType::value_type const &s)
 Return a point from a Hermite curve. More...
 

Detailed Description

GLM_GTX_spline

See also
Core features (dependence)

Definition in file spline.hpp.
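The Catmull-Rom evaluation can be sketched on scalars (assumption: the same cubic basis weights GLM applies per vector component; the function name here is hypothetical):

```cpp
// Hypothetical scalar reduction of catmullRom(v1, v2, v3, v4, s).
float catmull_rom(float v1, float v2, float v3, float v4, float s)
{
    float s2 = s * s, s3 = s2 * s;
    float f1 = -s3 + 2.f * s2 - s;          // weight of v1
    float f2 =  3.f * s3 - 5.f * s2 + 2.f;  // weight of v2
    float f3 = -3.f * s3 + 4.f * s2 + s;    // weight of v3
    float f4 =  s3 - s2;                    // weight of v4
    return (f1 * v1 + f2 * v2 + f3 * v3 + f4 * v4) / 2.f;
}
```

At s = 0 the result is v2 and at s = 1 it is v3, so chained segments pass through the interior control points and join continuously.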

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00154_source.html ================================================ 0.9.9 API documentation: spline.hpp Source File
spline.hpp
1 
13 #pragma once
14 
15 // Dependency:
16 #include "../glm.hpp"
17 #include "../gtx/optimum_pow.hpp"
18 
19 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
20 # ifndef GLM_ENABLE_EXPERIMENTAL
21 # pragma message("GLM: GLM_GTX_spline is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
22 # else
23 # pragma message("GLM: GLM_GTX_spline extension included")
24 # endif
25 #endif
26 
27 namespace glm
28 {
31 
34  template<typename genType>
35  GLM_FUNC_DECL genType catmullRom(
36  genType const& v1,
37  genType const& v2,
38  genType const& v3,
39  genType const& v4,
40  typename genType::value_type const& s);
41 
44  template<typename genType>
45  GLM_FUNC_DECL genType hermite(
46  genType const& v1,
47  genType const& t1,
48  genType const& v2,
49  genType const& t2,
50  typename genType::value_type const& s);
51 
54  template<typename genType>
55  GLM_FUNC_DECL genType cubic(
56  genType const& v1,
57  genType const& v2,
58  genType const& v3,
59  genType const& v4,
60  typename genType::value_type const& s);
61 
63 }//namespace glm
64 
65 #include "spline.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00155.html ================================================ 0.9.9 API documentation: std_based_type.hpp File Reference
std_based_type.hpp File Reference

GLM_GTX_std_based_type More...


Typedefs

typedef vec< 1, std::size_t, defaultp > size1
 Vector type based on one std::size_t component. More...
 
typedef vec< 1, std::size_t, defaultp > size1_t
 Vector type based on one std::size_t component. More...
 
typedef vec< 2, std::size_t, defaultp > size2
 Vector type based on two std::size_t components. More...
 
typedef vec< 2, std::size_t, defaultp > size2_t
 Vector type based on two std::size_t components. More...
 
typedef vec< 3, std::size_t, defaultp > size3
 Vector type based on three std::size_t components. More...
 
typedef vec< 3, std::size_t, defaultp > size3_t
 Vector type based on three std::size_t components. More...
 
typedef vec< 4, std::size_t, defaultp > size4
 Vector type based on four std::size_t components. More...
 
typedef vec< 4, std::size_t, defaultp > size4_t
 Vector type based on four std::size_t components. More...
 

Detailed Description

GLM_GTX_std_based_type

See also
Core features (dependence)
gtx_extended_min_max (dependence)

Definition in file std_based_type.hpp.
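The point of these typedefs is carrying unsigned extents (texture sizes, grid dimensions) in vector form; a minimal stand-in sketch (assumption: a plain struct substituting for glm's vec template):

```cpp
#include <cstddef>

// Hypothetical 2-component stand-in for glm::size2 (vec<2, std::size_t, defaultp>).
struct size2 { std::size_t x, y; };

// Typical use: an image extent that never needs signed/unsigned conversions.
std::size_t area(size2 const& extent)
{
    return extent.x * extent.y;
}
```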

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00155_source.html ================================================ 0.9.9 API documentation: std_based_type.hpp Source File
std_based_type.hpp
1 
14 #pragma once
15 
16 // Dependency:
17 #include "../glm.hpp"
18 #include <cstdlib>
19 
20 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
21 # ifndef GLM_ENABLE_EXPERIMENTAL
22 # pragma message("GLM: GLM_GTX_std_based_type is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
23 # else
24 # pragma message("GLM: GLM_GTX_std_based_type extension included")
25 # endif
26 #endif
27 
28 namespace glm
29 {
32 
35  typedef vec<1, std::size_t, defaultp> size1;
36 
39  typedef vec<2, std::size_t, defaultp> size2;
40 
43  typedef vec<3, std::size_t, defaultp> size3;
44 
47  typedef vec<4, std::size_t, defaultp> size4;
48 
51  typedef vec<1, std::size_t, defaultp> size1_t;
52 
55  typedef vec<2, std::size_t, defaultp> size2_t;
56 
59  typedef vec<3, std::size_t, defaultp> size3_t;
60 
63  typedef vec<4, std::size_t, defaultp> size4_t;
64 
66 }//namespace glm
67 
68 #include "std_based_type.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00156.html ================================================ 0.9.9 API documentation: string_cast.hpp File Reference
string_cast.hpp File Reference

GLM_GTX_string_cast More...


Functions

template<typename genType >
GLM_FUNC_DECL std::string to_string (genType const &x)
 Create a string from a GLM vector or matrix typed variable. More...
 

Detailed Description

GLM_GTX_string_cast

See also
Core features (dependence)
GLM_GTX_integer (dependence)
GLM_GTX_quaternion (dependence)

Definition in file string_cast.hpp.
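A sketch of the kind of output to_string produces (assumption: "vec3(x, y, z)"-style formatting on a hand-rolled 3-component struct; the real function is templated over all GLM vector, matrix and quaternion types and its exact component formatting may differ, e.g. fixed-point decimals):

```cpp
#include <sstream>
#include <string>

struct vec3 { float x, y, z; };  // stand-in for glm::vec3

// Hypothetical reduction of glm::to_string for a single type.
std::string to_string(vec3 const& v)
{
    std::ostringstream os;
    os << "vec3(" << v.x << ", " << v.y << ", " << v.z << ")";
    return os.str();
}
```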

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00156_source.html ================================================ 0.9.9 API documentation: string_cast.hpp Source File
string_cast.hpp
1 
17 #pragma once
18 
19 // Dependency:
20 #include "../glm.hpp"
21 #include "../gtc/type_precision.hpp"
22 #include "../gtc/quaternion.hpp"
23 #include "../gtx/dual_quaternion.hpp"
24 #include <string>
25 #include <cmath>
26 
27 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
28 # ifndef GLM_ENABLE_EXPERIMENTAL
29 # pragma message("GLM: GLM_GTX_string_cast is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
30 # else
31 # pragma message("GLM: GLM_GTX_string_cast extension included")
32 # endif
33 #endif
34 
35 #if(GLM_COMPILER & GLM_COMPILER_CUDA)
36 # error "GLM_GTX_string_cast is not supported on CUDA compiler"
37 #endif
38 
39 namespace glm
40 {
43 
46  template<typename genType>
47  GLM_FUNC_DECL std::string to_string(genType const& x);
48 
50 }//namespace glm
51 
52 #include "string_cast.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00157.html ================================================ 0.9.9 API documentation: texture.hpp File Reference
texture.hpp File Reference

GLM_GTX_texture More...


Functions

template<length_t L, typename T , qualifier Q>
T levels (vec< L, T, Q > const &Extent)
 Compute the number of mipmaps levels necessary to create a mipmap complete texture. More...
 

Detailed Description

GLM_GTX_texture

See also
Core features (dependence)

Definition in file texture.hpp.
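A mipmap-complete chain halves the largest dimension down to 1; a standalone sketch of the count (assumption: levels() returns floor(log2(max extent)) + 1, which matches that definition; the function name here is hypothetical):

```cpp
// Hypothetical scalar version of glm::levels for a 2D extent.
int mip_levels(int width, int height)
{
    int m = width > height ? width : height;  // largest dimension drives the chain
    int n = 1;                                // count the base level itself
    while (m >>= 1)
        ++n;                                  // one level per halving
    return n;
}
```

A 1024x512 texture yields 11 levels (1024, 512, ..., 1 along the larger axis).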

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00157_source.html ================================================ 0.9.9 API documentation: texture.hpp Source File
texture.hpp
1 
13 #pragma once
14 
15 // Dependency:
16 #include "../glm.hpp"
17 #include "../gtc/integer.hpp"
18 #include "../gtx/component_wise.hpp"
19 
20 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
21 # ifndef GLM_ENABLE_EXPERIMENTAL
22 # pragma message("GLM: GLM_GTX_texture is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
23 # else
24 # pragma message("GLM: GLM_GTX_texture extension included")
25 # endif
26 #endif
27 
28 namespace glm
29 {
32 
39  template <length_t L, typename T, qualifier Q>
40  T levels(vec<L, T, Q> const& Extent);
41 
43 }// namespace glm
44 
45 #include "texture.inl"
46 
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00158.html ================================================ 0.9.9 API documentation: transform.hpp File Reference
transform.hpp File Reference

GLM_GTX_transform More...


Functions

template<typename T, qualifier Q>
GLM_FUNC_DECL mat<4, 4, T, Q> rotate(T angle, vec<3, T, Q> const& v)
    Builds a rotation 4 * 4 matrix created from an axis of 3 scalars and an angle expressed in radians.

template<typename T, qualifier Q>
GLM_FUNC_DECL mat<4, 4, T, Q> scale(vec<3, T, Q> const& v)
    Transforms a matrix with a scale 4 * 4 matrix created from a vector of 3 components.

template<typename T, qualifier Q>
GLM_FUNC_DECL mat<4, 4, T, Q> translate(vec<3, T, Q> const& v)
    Transforms a matrix with a translation 4 * 4 matrix created from 3 scalars.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00158_source.html ================================================ 0.9.9 API documentation: transform.hpp Source File
#pragma once

// Dependency:
#include "../glm.hpp"
#include "../gtc/matrix_transform.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	ifndef GLM_ENABLE_EXPERIMENTAL
#		pragma message("GLM: GLM_GTX_transform is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#	else
#		pragma message("GLM: GLM_GTX_transform extension included")
#	endif
#endif

namespace glm
{
	/// Transforms a matrix with a translation 4 * 4 matrix created from 3 scalars.
	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> translate(
		vec<3, T, Q> const& v);

	/// Builds a rotation 4 * 4 matrix created from an axis of 3 scalars and an angle expressed in radians.
	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> rotate(
		T angle,
		vec<3, T, Q> const& v);

	/// Transforms a matrix with a scale 4 * 4 matrix created from a vector of 3 components.
	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> scale(
		vec<3, T, Q> const& v);
}// namespace glm

#include "transform.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00159.html ================================================ 0.9.9 API documentation: transform2.hpp File Reference

GLM_GTX_transform2


Functions

template<typename T, qualifier Q>
GLM_FUNC_DECL mat<3, 3, T, Q> proj2D(mat<3, 3, T, Q> const& m, vec<3, T, Q> const& normal)
    Build planar projection matrix along normal axis.

template<typename T, qualifier Q>
GLM_FUNC_DECL mat<4, 4, T, Q> proj3D(mat<4, 4, T, Q> const& m, vec<3, T, Q> const& normal)
    Build planar projection matrix along normal axis.

template<typename T, qualifier Q>
GLM_FUNC_DECL mat<4, 4, T, Q> scaleBias(T scale, T bias)
    Build a scale bias matrix.

template<typename T, qualifier Q>
GLM_FUNC_DECL mat<4, 4, T, Q> scaleBias(mat<4, 4, T, Q> const& m, T scale, T bias)
    Build a scale bias matrix.

template<typename T, qualifier Q>
GLM_FUNC_DECL mat<3, 3, T, Q> shearX2D(mat<3, 3, T, Q> const& m, T y)
    Transforms a matrix with a shearing on X axis.

template<typename T, qualifier Q>
GLM_FUNC_DECL mat<4, 4, T, Q> shearX3D(mat<4, 4, T, Q> const& m, T y, T z)
    Transforms a matrix with a shearing on X axis. From GLM_GTX_transform2 extension.

template<typename T, qualifier Q>
GLM_FUNC_DECL mat<3, 3, T, Q> shearY2D(mat<3, 3, T, Q> const& m, T x)
    Transforms a matrix with a shearing on Y axis.

template<typename T, qualifier Q>
GLM_FUNC_DECL mat<4, 4, T, Q> shearY3D(mat<4, 4, T, Q> const& m, T x, T z)
    Transforms a matrix with a shearing on Y axis.

template<typename T, qualifier Q>
GLM_FUNC_DECL mat<4, 4, T, Q> shearZ3D(mat<4, 4, T, Q> const& m, T x, T y)
    Transforms a matrix with a shearing on Z axis.

Detailed Description

GLM_GTX_transform2

See also
Core features (dependence)
GLM_GTX_transform (dependence)

Definition in file transform2.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00159_source.html ================================================ 0.9.9 API documentation: transform2.hpp Source File
#pragma once

// Dependency:
#include "../glm.hpp"
#include "../gtx/transform.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	ifndef GLM_ENABLE_EXPERIMENTAL
#		pragma message("GLM: GLM_GTX_transform2 is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#	else
#		pragma message("GLM: GLM_GTX_transform2 extension included")
#	endif
#endif

namespace glm
{
	/// Transforms a matrix with a shearing on X axis.
	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<3, 3, T, Q> shearX2D(mat<3, 3, T, Q> const& m, T y);

	/// Transforms a matrix with a shearing on Y axis.
	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<3, 3, T, Q> shearY2D(mat<3, 3, T, Q> const& m, T x);

	/// Transforms a matrix with a shearing on X axis.
	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> shearX3D(mat<4, 4, T, Q> const& m, T y, T z);

	/// Transforms a matrix with a shearing on Y axis.
	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> shearY3D(mat<4, 4, T, Q> const& m, T x, T z);

	/// Transforms a matrix with a shearing on Z axis.
	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> shearZ3D(mat<4, 4, T, Q> const& m, T x, T y);

	//template<typename T> GLM_FUNC_QUALIFIER mat<4, 4, T, Q> shear(const mat<4, 4, T, Q> & m, shearPlane, planePoint, angle)
	// Identity + tan(angle) * cross(Normal, OnPlaneVector)	0
	// - dot(PointOnPlane, normal) * OnPlaneVector		1

	// Reflect functions seem to don't work
	//template<typename T> mat<3, 3, T, Q> reflect2D(const mat<3, 3, T, Q> & m, const vec<3, T, Q>& normal){return reflect2DGTX(m, normal);} //!< \brief Build a reflection matrix (from GLM_GTX_transform2 extension)
	//template<typename T> mat<4, 4, T, Q> reflect3D(const mat<4, 4, T, Q> & m, const vec<3, T, Q>& normal){return reflect3DGTX(m, normal);} //!< \brief Build a reflection matrix (from GLM_GTX_transform2 extension)

	/// Build planar projection matrix along normal axis.
	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<3, 3, T, Q> proj2D(mat<3, 3, T, Q> const& m, vec<3, T, Q> const& normal);

	/// Build planar projection matrix along normal axis.
	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> proj3D(mat<4, 4, T, Q> const& m, vec<3, T, Q> const& normal);

	/// Build a scale bias matrix.
	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> scaleBias(T scale, T bias);

	/// Build a scale bias matrix.
	template<typename T, qualifier Q>
	GLM_FUNC_DECL mat<4, 4, T, Q> scaleBias(mat<4, 4, T, Q> const& m, T scale, T bias);
}// namespace glm

#include "transform2.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00160.html ================================================ 0.9.9 API documentation: trigonometric.hpp File Reference

Core features


Functions

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> acos(vec<L, T, Q> const& x)
    Arc cosine.

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> acosh(vec<L, T, Q> const& x)
    Arc hyperbolic cosine; returns the non-negative inverse of cosh.

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> asin(vec<L, T, Q> const& x)
    Arc sine.

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> asinh(vec<L, T, Q> const& x)
    Arc hyperbolic sine; returns the inverse of sinh.

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> atan(vec<L, T, Q> const& y, vec<L, T, Q> const& x)
    Arc tangent.

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> atan(vec<L, T, Q> const& y_over_x)
    Arc tangent.

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> atanh(vec<L, T, Q> const& x)
    Arc hyperbolic tangent; returns the inverse of tanh.

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> cos(vec<L, T, Q> const& angle)
    The standard trigonometric cosine function.

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> cosh(vec<L, T, Q> const& angle)
    Returns the hyperbolic cosine function, (exp(x) + exp(-x)) / 2.

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> degrees(vec<L, T, Q> const& radians)
    Converts radians to degrees and returns the result.

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> radians(vec<L, T, Q> const& degrees)
    Converts degrees to radians and returns the result.

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> sin(vec<L, T, Q> const& angle)
    The standard trigonometric sine function.

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> sinh(vec<L, T, Q> const& angle)
    Returns the hyperbolic sine function, (exp(x) - exp(-x)) / 2.

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> tan(vec<L, T, Q> const& angle)
    The standard trigonometric tangent function.

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> tanh(vec<L, T, Q> const& angle)
    Returns the hyperbolic tangent function, sinh(angle) / cosh(angle).

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00160_source.html ================================================ 0.9.9 API documentation: trigonometric.hpp Source File
#pragma once

#include "detail/setup.hpp"
#include "detail/qualifier.hpp"

namespace glm
{
	/// Converts degrees to radians and returns the result.
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> radians(vec<L, T, Q> const& degrees);

	/// Converts radians to degrees and returns the result.
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> degrees(vec<L, T, Q> const& radians);

	/// The standard trigonometric sine function.
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> sin(vec<L, T, Q> const& angle);

	/// The standard trigonometric cosine function.
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> cos(vec<L, T, Q> const& angle);

	/// The standard trigonometric tangent function.
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> tan(vec<L, T, Q> const& angle);

	/// Arc sine.
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> asin(vec<L, T, Q> const& x);

	/// Arc cosine.
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> acos(vec<L, T, Q> const& x);

	/// Arc tangent.
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> atan(vec<L, T, Q> const& y, vec<L, T, Q> const& x);

	/// Arc tangent.
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> atan(vec<L, T, Q> const& y_over_x);

	/// Returns the hyperbolic sine function, (exp(x) - exp(-x)) / 2.
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> sinh(vec<L, T, Q> const& angle);

	/// Returns the hyperbolic cosine function, (exp(x) + exp(-x)) / 2.
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> cosh(vec<L, T, Q> const& angle);

	/// Returns the hyperbolic tangent function, sinh(angle) / cosh(angle).
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> tanh(vec<L, T, Q> const& angle);

	/// Arc hyperbolic sine; returns the inverse of sinh.
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> asinh(vec<L, T, Q> const& x);

	/// Arc hyperbolic cosine; returns the non-negative inverse of cosh.
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> acosh(vec<L, T, Q> const& x);

	/// Arc hyperbolic tangent; returns the inverse of tanh.
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> atanh(vec<L, T, Q> const& x);
}//namespace glm

#include "detail/func_trigonometric.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00161.html ================================================ 0.9.9 API documentation: type_aligned.hpp File Reference
gtc/type_aligned.hpp File Reference

GLM_GTC_type_aligned


Typedefs

typedef aligned_highp_bvec1 aligned_bvec1
 1 component vector aligned in memory of bool values.
 
typedef aligned_highp_bvec2 aligned_bvec2
 2 components vector aligned in memory of bool values.
 
typedef aligned_highp_bvec3 aligned_bvec3
 3 components vector aligned in memory of bool values.
 
typedef aligned_highp_bvec4 aligned_bvec4
 4 components vector aligned in memory of bool values.
 
typedef aligned_highp_dmat2 aligned_dmat2
 2 by 2 matrix tightly aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dmat2x2 aligned_dmat2x2
 2 by 2 matrix tightly aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dmat2x3 aligned_dmat2x3
 2 by 3 matrix tightly aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dmat2x4 aligned_dmat2x4
 2 by 4 matrix tightly aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dmat3 aligned_dmat3
 3 by 3 matrix tightly aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dmat3x2 aligned_dmat3x2
 3 by 2 matrix tightly aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dmat3x3 aligned_dmat3x3
 3 by 3 matrix tightly aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dmat3x4 aligned_dmat3x4
 3 by 4 matrix tightly aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dmat4 aligned_dmat4
 4 by 4 matrix tightly aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dmat4x2 aligned_dmat4x2
 4 by 2 matrix tightly aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dmat4x3 aligned_dmat4x3
 4 by 3 matrix tightly aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dmat4x4 aligned_dmat4x4
 4 by 4 matrix tightly aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dvec1 aligned_dvec1
 1 component vector aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dvec2 aligned_dvec2
 2 components vector aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dvec3 aligned_dvec3
 3 components vector aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dvec4 aligned_dvec4
 4 components vector aligned in memory of double-precision floating-point numbers.
 
typedef vec< 1, bool, aligned_highp > aligned_highp_bvec1
 1 component vector aligned in memory of bool values.
 
typedef vec< 2, bool, aligned_highp > aligned_highp_bvec2
 2 components vector aligned in memory of bool values.
 
typedef vec< 3, bool, aligned_highp > aligned_highp_bvec3
 3 components vector aligned in memory of bool values.
 
typedef vec< 4, bool, aligned_highp > aligned_highp_bvec4
 4 components vector aligned in memory of bool values.
 
typedef mat< 2, 2, double, aligned_highp > aligned_highp_dmat2
 2 by 2 matrix aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 2, 2, double, aligned_highp > aligned_highp_dmat2x2
 2 by 2 matrix aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 2, 3, double, aligned_highp > aligned_highp_dmat2x3
 2 by 3 matrix aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 2, 4, double, aligned_highp > aligned_highp_dmat2x4
 2 by 4 matrix aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 3, 3, double, aligned_highp > aligned_highp_dmat3
 3 by 3 matrix aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 3, 2, double, aligned_highp > aligned_highp_dmat3x2
 3 by 2 matrix aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 3, 3, double, aligned_highp > aligned_highp_dmat3x3
 3 by 3 matrix aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 3, 4, double, aligned_highp > aligned_highp_dmat3x4
 3 by 4 matrix aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 4, 4, double, aligned_highp > aligned_highp_dmat4
 4 by 4 matrix aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 4, 2, double, aligned_highp > aligned_highp_dmat4x2
 4 by 2 matrix aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 4, 3, double, aligned_highp > aligned_highp_dmat4x3
 4 by 3 matrix aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 4, 4, double, aligned_highp > aligned_highp_dmat4x4
 4 by 4 matrix aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef vec< 1, double, aligned_highp > aligned_highp_dvec1
 1 component vector aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef vec< 2, double, aligned_highp > aligned_highp_dvec2
 2 components vector aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef vec< 3, double, aligned_highp > aligned_highp_dvec3
 3 components vector aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef vec< 4, double, aligned_highp > aligned_highp_dvec4
 4 components vector aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef vec< 1, int, aligned_highp > aligned_highp_ivec1
 1 component vector aligned in memory of signed integer numbers.
 
typedef vec< 2, int, aligned_highp > aligned_highp_ivec2
 2 components vector aligned in memory of signed integer numbers.
 
typedef vec< 3, int, aligned_highp > aligned_highp_ivec3
 3 components vector aligned in memory of signed integer numbers.
 
typedef vec< 4, int, aligned_highp > aligned_highp_ivec4
 4 components vector aligned in memory of signed integer numbers.
 
typedef mat< 2, 2, float, aligned_highp > aligned_highp_mat2
 2 by 2 matrix aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 2, 2, float, aligned_highp > aligned_highp_mat2x2
 2 by 2 matrix aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 2, 3, float, aligned_highp > aligned_highp_mat2x3
 2 by 3 matrix aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 2, 4, float, aligned_highp > aligned_highp_mat2x4
 2 by 4 matrix aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 3, 3, float, aligned_highp > aligned_highp_mat3
 3 by 3 matrix aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 3, 2, float, aligned_highp > aligned_highp_mat3x2
 3 by 2 matrix aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 3, 3, float, aligned_highp > aligned_highp_mat3x3
 3 by 3 matrix aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 3, 4, float, aligned_highp > aligned_highp_mat3x4
 3 by 4 matrix aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 4, 4, float, aligned_highp > aligned_highp_mat4
 4 by 4 matrix aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 4, 2, float, aligned_highp > aligned_highp_mat4x2
 4 by 2 matrix aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 4, 3, float, aligned_highp > aligned_highp_mat4x3
 4 by 3 matrix aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 4, 4, float, aligned_highp > aligned_highp_mat4x4
 4 by 4 matrix aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef vec< 1, uint, aligned_highp > aligned_highp_uvec1
 1 component vector aligned in memory of unsigned integer numbers.
 
typedef vec< 2, uint, aligned_highp > aligned_highp_uvec2
 2 components vector aligned in memory of unsigned integer numbers.
 
typedef vec< 3, uint, aligned_highp > aligned_highp_uvec3
 3 components vector aligned in memory of unsigned integer numbers.
 
typedef vec< 4, uint, aligned_highp > aligned_highp_uvec4
 4 components vector aligned in memory of unsigned integer numbers.
 
typedef vec< 1, float, aligned_highp > aligned_highp_vec1
 1 component vector aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef vec< 2, float, aligned_highp > aligned_highp_vec2
 2 components vector aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef vec< 3, float, aligned_highp > aligned_highp_vec3
 3 components vector aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef vec< 4, float, aligned_highp > aligned_highp_vec4
 4 components vector aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef aligned_highp_ivec1 aligned_ivec1
 1 component vector aligned in memory of signed integer numbers.
 
typedef aligned_highp_ivec2 aligned_ivec2
 2 components vector aligned in memory of signed integer numbers.
 
typedef aligned_highp_ivec3 aligned_ivec3
 3 components vector aligned in memory of signed integer numbers.
 
typedef aligned_highp_ivec4 aligned_ivec4
 4 components vector aligned in memory of signed integer numbers.
 
typedef vec< 1, bool, aligned_lowp > aligned_lowp_bvec1
 1 component vector aligned in memory of bool values.
 
typedef vec< 2, bool, aligned_lowp > aligned_lowp_bvec2
 2 components vector aligned in memory of bool values.
 
typedef vec< 3, bool, aligned_lowp > aligned_lowp_bvec3
 3 components vector aligned in memory of bool values.
 
typedef vec< 4, bool, aligned_lowp > aligned_lowp_bvec4
 4 components vector aligned in memory of bool values.
 
typedef mat< 2, 2, double, aligned_lowp > aligned_lowp_dmat2
 2 by 2 matrix aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 2, 2, double, aligned_lowp > aligned_lowp_dmat2x2
 2 by 2 matrix aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 2, 3, double, aligned_lowp > aligned_lowp_dmat2x3
 2 by 3 matrix aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 2, 4, double, aligned_lowp > aligned_lowp_dmat2x4
 2 by 4 matrix aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 3, 3, double, aligned_lowp > aligned_lowp_dmat3
 3 by 3 matrix aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 3, 2, double, aligned_lowp > aligned_lowp_dmat3x2
 3 by 2 matrix aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 3, 3, double, aligned_lowp > aligned_lowp_dmat3x3
 3 by 3 matrix aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 3, 4, double, aligned_lowp > aligned_lowp_dmat3x4
 3 by 4 matrix aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 4, 4, double, aligned_lowp > aligned_lowp_dmat4
 4 by 4 matrix aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 4, 2, double, aligned_lowp > aligned_lowp_dmat4x2
 4 by 2 matrix aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 4, 3, double, aligned_lowp > aligned_lowp_dmat4x3
 4 by 3 matrix aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 4, 4, double, aligned_lowp > aligned_lowp_dmat4x4
 4 by 4 matrix aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef vec< 1, double, aligned_lowp > aligned_lowp_dvec1
 1 component vector aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef vec< 2, double, aligned_lowp > aligned_lowp_dvec2
 2 components vector aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef vec< 3, double, aligned_lowp > aligned_lowp_dvec3
 3 components vector aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef vec< 4, double, aligned_lowp > aligned_lowp_dvec4
 4 components vector aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef vec< 1, int, aligned_lowp > aligned_lowp_ivec1
 1 component vector aligned in memory of signed integer numbers.
 
typedef vec< 2, int, aligned_lowp > aligned_lowp_ivec2
 2 components vector aligned in memory of signed integer numbers.
 
typedef vec< 3, int, aligned_lowp > aligned_lowp_ivec3
 3 components vector aligned in memory of signed integer numbers.
 
typedef vec< 4, int, aligned_lowp > aligned_lowp_ivec4
 4 components vector aligned in memory of signed integer numbers.
 
typedef mat< 2, 2, float, aligned_lowp > aligned_lowp_mat2
 2 by 2 matrix aligned in memory of single-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 2, 2, float, aligned_lowp > aligned_lowp_mat2x2
 2 by 2 matrix aligned in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 2, 3, float, aligned_lowp > aligned_lowp_mat2x3
 2 by 3 matrix aligned in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 2, 4, float, aligned_lowp > aligned_lowp_mat2x4
 2 by 4 matrix aligned in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 3, 3, float, aligned_lowp > aligned_lowp_mat3
 3 by 3 matrix aligned in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 3, 2, float, aligned_lowp > aligned_lowp_mat3x2
 3 by 2 matrix aligned in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 3, 3, float, aligned_lowp > aligned_lowp_mat3x3
 3 by 3 matrix aligned in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 3, 4, float, aligned_lowp > aligned_lowp_mat3x4
 3 by 4 matrix aligned in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 4, 4, float, aligned_lowp > aligned_lowp_mat4
 4 by 4 matrix aligned in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 4, 2, float, aligned_lowp > aligned_lowp_mat4x2
 4 by 2 matrix aligned in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 4, 3, float, aligned_lowp > aligned_lowp_mat4x3
 4 by 3 matrix aligned in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 4, 4, float, aligned_lowp > aligned_lowp_mat4x4
 4 by 4 matrix aligned in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef vec< 1, uint, aligned_lowp > aligned_lowp_uvec1
 1-component vector aligned in memory of unsigned integer numbers.
 
typedef vec< 2, uint, aligned_lowp > aligned_lowp_uvec2
 2-component vector aligned in memory of unsigned integer numbers.
 
typedef vec< 3, uint, aligned_lowp > aligned_lowp_uvec3
 3-component vector aligned in memory of unsigned integer numbers.
 
typedef vec< 4, uint, aligned_lowp > aligned_lowp_uvec4
 4-component vector aligned in memory of unsigned integer numbers.
 
typedef vec< 1, float, aligned_lowp > aligned_lowp_vec1
 1-component vector aligned in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef vec< 2, float, aligned_lowp > aligned_lowp_vec2
 2-component vector aligned in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef vec< 3, float, aligned_lowp > aligned_lowp_vec3
 3-component vector aligned in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef vec< 4, float, aligned_lowp > aligned_lowp_vec4
 4-component vector aligned in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef aligned_highp_mat2 aligned_mat2
 2 by 2 matrix aligned in memory of single-precision floating-point numbers.
 
typedef aligned_highp_mat2x2 aligned_mat2x2
 2 by 2 matrix aligned in memory of single-precision floating-point numbers.
 
typedef aligned_highp_mat2x3 aligned_mat2x3
 2 by 3 matrix aligned in memory of single-precision floating-point numbers.
 
typedef aligned_highp_mat2x4 aligned_mat2x4
 2 by 4 matrix aligned in memory of single-precision floating-point numbers.
 
typedef aligned_highp_mat3 aligned_mat3
 3 by 3 matrix aligned in memory of single-precision floating-point numbers.
 
typedef aligned_highp_mat3x2 aligned_mat3x2
 3 by 2 matrix aligned in memory of single-precision floating-point numbers.
 
typedef aligned_highp_mat3x3 aligned_mat3x3
 3 by 3 matrix aligned in memory of single-precision floating-point numbers.
 
typedef aligned_highp_mat3x4 aligned_mat3x4
 3 by 4 matrix aligned in memory of single-precision floating-point numbers.
 
typedef aligned_highp_mat4 aligned_mat4
 4 by 4 matrix aligned in memory of single-precision floating-point numbers.
 
typedef aligned_highp_mat4x2 aligned_mat4x2
 4 by 2 matrix aligned in memory of single-precision floating-point numbers.
 
typedef aligned_highp_mat4x3 aligned_mat4x3
 4 by 3 matrix aligned in memory of single-precision floating-point numbers.
 
typedef aligned_highp_mat4x4 aligned_mat4x4
 4 by 4 matrix aligned in memory of single-precision floating-point numbers.
 
typedef vec< 1, bool, aligned_mediump > aligned_mediump_bvec1
 1-component vector aligned in memory of bool values.
 
typedef vec< 2, bool, aligned_mediump > aligned_mediump_bvec2
 2-component vector aligned in memory of bool values.
 
typedef vec< 3, bool, aligned_mediump > aligned_mediump_bvec3
 3-component vector aligned in memory of bool values.
 
typedef vec< 4, bool, aligned_mediump > aligned_mediump_bvec4
 4-component vector aligned in memory of bool values.
 
typedef mat< 2, 2, double, aligned_mediump > aligned_mediump_dmat2
 2 by 2 matrix aligned in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 2, 2, double, aligned_mediump > aligned_mediump_dmat2x2
 2 by 2 matrix aligned in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 2, 3, double, aligned_mediump > aligned_mediump_dmat2x3
 2 by 3 matrix aligned in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 2, 4, double, aligned_mediump > aligned_mediump_dmat2x4
 2 by 4 matrix aligned in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 3, 3, double, aligned_mediump > aligned_mediump_dmat3
 3 by 3 matrix aligned in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 3, 2, double, aligned_mediump > aligned_mediump_dmat3x2
 3 by 2 matrix aligned in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 3, 3, double, aligned_mediump > aligned_mediump_dmat3x3
 3 by 3 matrix aligned in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 3, 4, double, aligned_mediump > aligned_mediump_dmat3x4
 3 by 4 matrix aligned in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 4, 4, double, aligned_mediump > aligned_mediump_dmat4
 4 by 4 matrix aligned in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 4, 2, double, aligned_mediump > aligned_mediump_dmat4x2
 4 by 2 matrix aligned in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 4, 3, double, aligned_mediump > aligned_mediump_dmat4x3
 4 by 3 matrix aligned in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 4, 4, double, aligned_mediump > aligned_mediump_dmat4x4
 4 by 4 matrix aligned in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef vec< 1, double, aligned_mediump > aligned_mediump_dvec1
 1-component vector aligned in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef vec< 2, double, aligned_mediump > aligned_mediump_dvec2
 2-component vector aligned in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef vec< 3, double, aligned_mediump > aligned_mediump_dvec3
 3-component vector aligned in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef vec< 4, double, aligned_mediump > aligned_mediump_dvec4
 4-component vector aligned in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef vec< 1, int, aligned_mediump > aligned_mediump_ivec1
 1-component vector aligned in memory of signed integer numbers.
 
typedef vec< 2, int, aligned_mediump > aligned_mediump_ivec2
 2-component vector aligned in memory of signed integer numbers.
 
typedef vec< 3, int, aligned_mediump > aligned_mediump_ivec3
 3-component vector aligned in memory of signed integer numbers.
 
typedef vec< 4, int, aligned_mediump > aligned_mediump_ivec4
 4-component vector aligned in memory of signed integer numbers.
 
typedef mat< 2, 2, float, aligned_mediump > aligned_mediump_mat2
 2 by 2 matrix aligned in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 2, 2, float, aligned_mediump > aligned_mediump_mat2x2
 2 by 2 matrix aligned in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 2, 3, float, aligned_mediump > aligned_mediump_mat2x3
 2 by 3 matrix aligned in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 2, 4, float, aligned_mediump > aligned_mediump_mat2x4
 2 by 4 matrix aligned in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 3, 3, float, aligned_mediump > aligned_mediump_mat3
 3 by 3 matrix aligned in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 3, 2, float, aligned_mediump > aligned_mediump_mat3x2
 3 by 2 matrix aligned in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 3, 3, float, aligned_mediump > aligned_mediump_mat3x3
 3 by 3 matrix aligned in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 3, 4, float, aligned_mediump > aligned_mediump_mat3x4
 3 by 4 matrix aligned in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 4, 4, float, aligned_mediump > aligned_mediump_mat4
 4 by 4 matrix aligned in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 4, 2, float, aligned_mediump > aligned_mediump_mat4x2
 4 by 2 matrix aligned in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 4, 3, float, aligned_mediump > aligned_mediump_mat4x3
 4 by 3 matrix aligned in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 4, 4, float, aligned_mediump > aligned_mediump_mat4x4
 4 by 4 matrix aligned in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef vec< 1, uint, aligned_mediump > aligned_mediump_uvec1
 1-component vector aligned in memory of unsigned integer numbers.
 
typedef vec< 2, uint, aligned_mediump > aligned_mediump_uvec2
 2-component vector aligned in memory of unsigned integer numbers.
 
typedef vec< 3, uint, aligned_mediump > aligned_mediump_uvec3
 3-component vector aligned in memory of unsigned integer numbers.
 
typedef vec< 4, uint, aligned_mediump > aligned_mediump_uvec4
 4-component vector aligned in memory of unsigned integer numbers.
 
typedef vec< 1, float, aligned_mediump > aligned_mediump_vec1
 1-component vector aligned in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef vec< 2, float, aligned_mediump > aligned_mediump_vec2
 2-component vector aligned in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef vec< 3, float, aligned_mediump > aligned_mediump_vec3
 3-component vector aligned in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef vec< 4, float, aligned_mediump > aligned_mediump_vec4
 4-component vector aligned in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef aligned_highp_uvec1 aligned_uvec1
 1-component vector aligned in memory of unsigned integer numbers.
 
typedef aligned_highp_uvec2 aligned_uvec2
 2-component vector aligned in memory of unsigned integer numbers.
 
typedef aligned_highp_uvec3 aligned_uvec3
 3-component vector aligned in memory of unsigned integer numbers.
 
typedef aligned_highp_uvec4 aligned_uvec4
 4-component vector aligned in memory of unsigned integer numbers.
 
typedef aligned_highp_vec1 aligned_vec1
 1-component vector aligned in memory of single-precision floating-point numbers.
 
typedef aligned_highp_vec2 aligned_vec2
 2-component vector aligned in memory of single-precision floating-point numbers.
 
typedef aligned_highp_vec3 aligned_vec3
 3-component vector aligned in memory of single-precision floating-point numbers.
 
typedef aligned_highp_vec4 aligned_vec4
 4-component vector aligned in memory of single-precision floating-point numbers.
 
typedef packed_highp_bvec1 packed_bvec1
 1-component vector tightly packed in memory of bool values.
 
typedef packed_highp_bvec2 packed_bvec2
 2-component vector tightly packed in memory of bool values.
 
typedef packed_highp_bvec3 packed_bvec3
 3-component vector tightly packed in memory of bool values.
 
typedef packed_highp_bvec4 packed_bvec4
 4-component vector tightly packed in memory of bool values.
 
typedef packed_highp_dmat2 packed_dmat2
 2 by 2 matrix tightly packed in memory of double-precision floating-point numbers.
 
typedef packed_highp_dmat2x2 packed_dmat2x2
 2 by 2 matrix tightly packed in memory of double-precision floating-point numbers.
 
typedef packed_highp_dmat2x3 packed_dmat2x3
 2 by 3 matrix tightly packed in memory of double-precision floating-point numbers.
 
typedef packed_highp_dmat2x4 packed_dmat2x4
 2 by 4 matrix tightly packed in memory of double-precision floating-point numbers.
 
typedef packed_highp_dmat3 packed_dmat3
 3 by 3 matrix tightly packed in memory of double-precision floating-point numbers.
 
typedef packed_highp_dmat3x2 packed_dmat3x2
 3 by 2 matrix tightly packed in memory of double-precision floating-point numbers.
 
typedef packed_highp_dmat3x3 packed_dmat3x3
 3 by 3 matrix tightly packed in memory of double-precision floating-point numbers.
 
typedef packed_highp_dmat3x4 packed_dmat3x4
 3 by 4 matrix tightly packed in memory of double-precision floating-point numbers.
 
typedef packed_highp_dmat4 packed_dmat4
 4 by 4 matrix tightly packed in memory of double-precision floating-point numbers.
 
typedef packed_highp_dmat4x2 packed_dmat4x2
 4 by 2 matrix tightly packed in memory of double-precision floating-point numbers.
 
typedef packed_highp_dmat4x3 packed_dmat4x3
 4 by 3 matrix tightly packed in memory of double-precision floating-point numbers.
 
typedef packed_highp_dmat4x4 packed_dmat4x4
 4 by 4 matrix tightly packed in memory of double-precision floating-point numbers.
 
typedef packed_highp_dvec1 packed_dvec1
 1-component vector tightly packed in memory of double-precision floating-point numbers.
 
typedef packed_highp_dvec2 packed_dvec2
 2-component vector tightly packed in memory of double-precision floating-point numbers.
 
typedef packed_highp_dvec3 packed_dvec3
 3-component vector tightly packed in memory of double-precision floating-point numbers.
 
typedef packed_highp_dvec4 packed_dvec4
 4-component vector tightly packed in memory of double-precision floating-point numbers.
 
typedef vec< 1, bool, packed_highp > packed_highp_bvec1
 1-component vector tightly packed in memory of bool values.
 
typedef vec< 2, bool, packed_highp > packed_highp_bvec2
 2-component vector tightly packed in memory of bool values.
 
typedef vec< 3, bool, packed_highp > packed_highp_bvec3
 3-component vector tightly packed in memory of bool values.
 
typedef vec< 4, bool, packed_highp > packed_highp_bvec4
 4-component vector tightly packed in memory of bool values.
 
typedef mat< 2, 2, double, packed_highp > packed_highp_dmat2
 2 by 2 matrix tightly packed in memory of double-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef mat< 2, 2, double, packed_highp > packed_highp_dmat2x2
 2 by 2 matrix tightly packed in memory of double-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef mat< 2, 3, double, packed_highp > packed_highp_dmat2x3
 2 by 3 matrix tightly packed in memory of double-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef mat< 2, 4, double, packed_highp > packed_highp_dmat2x4
 2 by 4 matrix tightly packed in memory of double-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef mat< 3, 3, double, packed_highp > packed_highp_dmat3
 3 by 3 matrix tightly packed in memory of double-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef mat< 3, 2, double, packed_highp > packed_highp_dmat3x2
 3 by 2 matrix tightly packed in memory of double-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef mat< 3, 3, double, packed_highp > packed_highp_dmat3x3
 3 by 3 matrix tightly packed in memory of double-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef mat< 3, 4, double, packed_highp > packed_highp_dmat3x4
 3 by 4 matrix tightly packed in memory of double-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef mat< 4, 4, double, packed_highp > packed_highp_dmat4
 4 by 4 matrix tightly packed in memory of double-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef mat< 4, 2, double, packed_highp > packed_highp_dmat4x2
 4 by 2 matrix tightly packed in memory of double-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef mat< 4, 3, double, packed_highp > packed_highp_dmat4x3
 4 by 3 matrix tightly packed in memory of double-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef mat< 4, 4, double, packed_highp > packed_highp_dmat4x4
 4 by 4 matrix tightly packed in memory of double-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef vec< 1, double, packed_highp > packed_highp_dvec1
 1-component vector tightly packed in memory of double-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef vec< 2, double, packed_highp > packed_highp_dvec2
 2-component vector tightly packed in memory of double-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef vec< 3, double, packed_highp > packed_highp_dvec3
 3-component vector tightly packed in memory of double-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef vec< 4, double, packed_highp > packed_highp_dvec4
 4-component vector tightly packed in memory of double-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef vec< 1, int, packed_highp > packed_highp_ivec1
 1-component vector tightly packed in memory of signed integer numbers.
 
typedef vec< 2, int, packed_highp > packed_highp_ivec2
 2-component vector tightly packed in memory of signed integer numbers.
 
typedef vec< 3, int, packed_highp > packed_highp_ivec3
 3-component vector tightly packed in memory of signed integer numbers.
 
typedef vec< 4, int, packed_highp > packed_highp_ivec4
 4-component vector tightly packed in memory of signed integer numbers.
 
typedef mat< 2, 2, float, packed_highp > packed_highp_mat2
 2 by 2 matrix tightly packed in memory of single-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef mat< 2, 2, float, packed_highp > packed_highp_mat2x2
 2 by 2 matrix tightly packed in memory of single-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef mat< 2, 3, float, packed_highp > packed_highp_mat2x3
 2 by 3 matrix tightly packed in memory of single-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef mat< 2, 4, float, packed_highp > packed_highp_mat2x4
 2 by 4 matrix tightly packed in memory of single-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef mat< 3, 3, float, packed_highp > packed_highp_mat3
 3 by 3 matrix tightly packed in memory of single-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef mat< 3, 2, float, packed_highp > packed_highp_mat3x2
 3 by 2 matrix tightly packed in memory of single-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef mat< 3, 3, float, packed_highp > packed_highp_mat3x3
 3 by 3 matrix tightly packed in memory of single-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef mat< 3, 4, float, packed_highp > packed_highp_mat3x4
 3 by 4 matrix tightly packed in memory of single-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef mat< 4, 4, float, packed_highp > packed_highp_mat4
 4 by 4 matrix tightly packed in memory of single-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef mat< 4, 2, float, packed_highp > packed_highp_mat4x2
 4 by 2 matrix tightly packed in memory of single-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef mat< 4, 3, float, packed_highp > packed_highp_mat4x3
 4 by 3 matrix tightly packed in memory of single-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef mat< 4, 4, float, packed_highp > packed_highp_mat4x4
 4 by 4 matrix tightly packed in memory of single-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef vec< 1, uint, packed_highp > packed_highp_uvec1
 1-component vector tightly packed in memory of unsigned integer numbers.
 
typedef vec< 2, uint, packed_highp > packed_highp_uvec2
 2-component vector tightly packed in memory of unsigned integer numbers.
 
typedef vec< 3, uint, packed_highp > packed_highp_uvec3
 3-component vector tightly packed in memory of unsigned integer numbers.
 
typedef vec< 4, uint, packed_highp > packed_highp_uvec4
 4-component vector tightly packed in memory of unsigned integer numbers.
 
typedef vec< 1, float, packed_highp > packed_highp_vec1
 1-component vector tightly packed in memory of single-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef vec< 2, float, packed_highp > packed_highp_vec2
 2-component vector tightly packed in memory of single-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef vec< 3, float, packed_highp > packed_highp_vec3
 3-component vector tightly packed in memory of single-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef vec< 4, float, packed_highp > packed_highp_vec4
 4-component vector tightly packed in memory of single-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef packed_highp_ivec1 packed_ivec1
 1-component vector tightly packed in memory of signed integer numbers.
 
typedef packed_highp_ivec2 packed_ivec2
 2-component vector tightly packed in memory of signed integer numbers.
 
typedef packed_highp_ivec3 packed_ivec3
 3-component vector tightly packed in memory of signed integer numbers.
 
typedef packed_highp_ivec4 packed_ivec4
 4-component vector tightly packed in memory of signed integer numbers.
 
typedef vec< 1, bool, packed_lowp > packed_lowp_bvec1
 1-component vector tightly packed in memory of bool values.
 
typedef vec< 2, bool, packed_lowp > packed_lowp_bvec2
 2-component vector tightly packed in memory of bool values.
 
typedef vec< 3, bool, packed_lowp > packed_lowp_bvec3
 3-component vector tightly packed in memory of bool values.
 
typedef vec< 4, bool, packed_lowp > packed_lowp_bvec4
 4-component vector tightly packed in memory of bool values.
 
typedef mat< 2, 2, double, packed_lowp > packed_lowp_dmat2
 2 by 2 matrix tightly packed in memory of double-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 2, 2, double, packed_lowp > packed_lowp_dmat2x2
 2 by 2 matrix tightly packed in memory of double-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 2, 3, double, packed_lowp > packed_lowp_dmat2x3
 2 by 3 matrix tightly packed in memory of double-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 2, 4, double, packed_lowp > packed_lowp_dmat2x4
 2 by 4 matrix tightly packed in memory of double-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 3, 3, double, packed_lowp > packed_lowp_dmat3
 3 by 3 matrix tightly packed in memory of double-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 3, 2, double, packed_lowp > packed_lowp_dmat3x2
 3 by 2 matrix tightly packed in memory of double-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 3, 3, double, packed_lowp > packed_lowp_dmat3x3
 3 by 3 matrix tightly packed in memory of double-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 3, 4, double, packed_lowp > packed_lowp_dmat3x4
 3 by 4 matrix tightly packed in memory of double-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 4, 4, double, packed_lowp > packed_lowp_dmat4
 4 by 4 matrix tightly packed in memory of double-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 4, 2, double, packed_lowp > packed_lowp_dmat4x2
 4 by 2 matrix tightly packed in memory of double-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 4, 3, double, packed_lowp > packed_lowp_dmat4x3
 4 by 3 matrix tightly packed in memory of double-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 4, 4, double, packed_lowp > packed_lowp_dmat4x4
 4 by 4 matrix tightly packed in memory of double-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef vec< 1, double, packed_lowp > packed_lowp_dvec1
 1-component vector tightly packed in memory of double-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef vec< 2, double, packed_lowp > packed_lowp_dvec2
 2-component vector tightly packed in memory of double-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef vec< 3, double, packed_lowp > packed_lowp_dvec3
 3-component vector tightly packed in memory of double-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef vec< 4, double, packed_lowp > packed_lowp_dvec4
 4-component vector tightly packed in memory of double-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef vec< 1, int, packed_lowp > packed_lowp_ivec1
 1-component vector tightly packed in memory of signed integer numbers.
 
typedef vec< 2, int, packed_lowp > packed_lowp_ivec2
 2-component vector tightly packed in memory of signed integer numbers.
 
typedef vec< 3, int, packed_lowp > packed_lowp_ivec3
 3-component vector tightly packed in memory of signed integer numbers.
 
typedef vec< 4, int, packed_lowp > packed_lowp_ivec4
 4-component vector tightly packed in memory of signed integer numbers.
 
typedef mat< 2, 2, float, packed_lowp > packed_lowp_mat2
 2 by 2 matrix tightly packed in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 2, 2, float, packed_lowp > packed_lowp_mat2x2
 2 by 2 matrix tightly packed in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 2, 3, float, packed_lowp > packed_lowp_mat2x3
 2 by 3 matrix tightly packed in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 2, 4, float, packed_lowp > packed_lowp_mat2x4
 2 by 4 matrix tightly packed in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 3, 3, float, packed_lowp > packed_lowp_mat3
 3 by 3 matrix tightly packed in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 3, 2, float, packed_lowp > packed_lowp_mat3x2
 3 by 2 matrix tightly packed in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 3, 3, float, packed_lowp > packed_lowp_mat3x3
 3 by 3 matrix tightly packed in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 3, 4, float, packed_lowp > packed_lowp_mat3x4
 3 by 4 matrix tightly packed in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 4, 4, float, packed_lowp > packed_lowp_mat4
 4 by 4 matrix tightly packed in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 4, 2, float, packed_lowp > packed_lowp_mat4x2
 4 by 2 matrix tightly packed in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 4, 3, float, packed_lowp > packed_lowp_mat4x3
 4 by 3 matrix tightly packed in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef mat< 4, 4, float, packed_lowp > packed_lowp_mat4x4
 4 by 4 matrix tightly packed in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef vec< 1, uint, packed_lowp > packed_lowp_uvec1
 1-component vector tightly packed in memory of unsigned integer numbers.
 
typedef vec< 2, uint, packed_lowp > packed_lowp_uvec2
 2-component vector tightly packed in memory of unsigned integer numbers.
 
typedef vec< 3, uint, packed_lowp > packed_lowp_uvec3
 3-component vector tightly packed in memory of unsigned integer numbers.
 
typedef vec< 4, uint, packed_lowp > packed_lowp_uvec4
 4-component vector tightly packed in memory of unsigned integer numbers.
 
typedef vec< 1, float, packed_lowp > packed_lowp_vec1
 1-component vector tightly packed in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef vec< 2, float, packed_lowp > packed_lowp_vec2
 2-component vector tightly packed in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef vec< 3, float, packed_lowp > packed_lowp_vec3
 3-component vector tightly packed in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef vec< 4, float, packed_lowp > packed_lowp_vec4
 4-component vector tightly packed in memory of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef packed_highp_mat2 packed_mat2
 2 by 2 matrix tightly packed in memory of single-precision floating-point numbers.
 
typedef packed_highp_mat2x2 packed_mat2x2
 2 by 2 matrix tightly packed in memory of single-precision floating-point numbers.
 
typedef packed_highp_mat2x3 packed_mat2x3
 2 by 3 matrix tightly packed in memory of single-precision floating-point numbers.
 
typedef packed_highp_mat2x4 packed_mat2x4
 2 by 4 matrix tightly packed in memory of single-precision floating-point numbers.
 
typedef packed_highp_mat3 packed_mat3
 3 by 3 matrix tightly packed in memory of single-precision floating-point numbers.
 
typedef packed_highp_mat3x2 packed_mat3x2
 3 by 2 matrix tightly packed in memory of single-precision floating-point numbers.
 
typedef packed_highp_mat3x3 packed_mat3x3
 3 by 3 matrix tightly packed in memory of single-precision floating-point numbers.
 
typedef packed_highp_mat3x4 packed_mat3x4
 3 by 4 matrix tightly packed in memory of single-precision floating-point numbers.
 
typedef packed_highp_mat4 packed_mat4
 4 by 4 matrix tightly packed in memory of single-precision floating-point numbers.
 
typedef packed_highp_mat4x2 packed_mat4x2
 4 by 2 matrix tightly packed in memory of single-precision floating-point numbers.
 
typedef packed_highp_mat4x3 packed_mat4x3
 4 by 3 matrix tightly packed in memory of single-precision floating-point numbers.
 
typedef packed_highp_mat4x4 packed_mat4x4
 4 by 4 matrix tightly packed in memory of single-precision floating-point numbers.
 
typedef vec< 1, bool, packed_mediump > packed_mediump_bvec1
 1-component vector tightly packed in memory of bool values.
 
typedef vec< 2, bool, packed_mediump > packed_mediump_bvec2
 2-component vector tightly packed in memory of bool values.
 
typedef vec< 3, bool, packed_mediump > packed_mediump_bvec3
 3-component vector tightly packed in memory of bool values.
 
typedef vec< 4, bool, packed_mediump > packed_mediump_bvec4
 4-component vector tightly packed in memory of bool values.
 
typedef mat< 2, 2, double, packed_mediump > packed_mediump_dmat2
 2 by 2 matrix tightly packed in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 2, 2, double, packed_mediump > packed_mediump_dmat2x2
 2 by 2 matrix tightly packed in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 2, 3, double, packed_mediump > packed_mediump_dmat2x3
 2 by 3 matrix tightly packed in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 2, 4, double, packed_mediump > packed_mediump_dmat2x4
 2 by 4 matrix tightly packed in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 3, 3, double, packed_mediump > packed_mediump_dmat3
 3 by 3 matrix tightly packed in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 3, 2, double, packed_mediump > packed_mediump_dmat3x2
 3 by 2 matrix tightly packed in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 3, 3, double, packed_mediump > packed_mediump_dmat3x3
 3 by 3 matrix tightly packed in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 3, 4, double, packed_mediump > packed_mediump_dmat3x4
 3 by 4 matrix tightly packed in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 4, 4, double, packed_mediump > packed_mediump_dmat4
 4 by 4 matrix tightly packed in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 4, 2, double, packed_mediump > packed_mediump_dmat4x2
 4 by 2 matrix tightly packed in memory of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 4, 3, double, packed_mediump > packed_mediump_dmat4x3
 4 by 3 matrix tightly packed in memory of double-precision floating-point numbers using medium precision arithmetic in term of ULPs.
 
typedef mat< 4, 4, double, packed_mediump > packed_mediump_dmat4x4
 4 by 4 matrix tightly packed in memory of double-precision floating-point numbers using medium precision arithmetic in term of ULPs.
 
typedef vec< 1, double, packed_mediump > packed_mediump_dvec1
 1 component vector tightly packed in memory of double-precision floating-point numbers using medium precision arithmetic in term of ULPs.
 
typedef vec< 2, double, packed_mediump > packed_mediump_dvec2
 2 components vector tightly packed in memory of double-precision floating-point numbers using medium precision arithmetic in term of ULPs.
 
typedef vec< 3, double, packed_mediump > packed_mediump_dvec3
 3 components vector tightly packed in memory of double-precision floating-point numbers using medium precision arithmetic in term of ULPs.
 
typedef vec< 4, double, packed_mediump > packed_mediump_dvec4
 4 components vector tightly packed in memory of double-precision floating-point numbers using medium precision arithmetic in term of ULPs.
 
typedef vec< 1, int, packed_mediump > packed_mediump_ivec1
 1 component vector tightly packed in memory of signed integer numbers.
 
typedef vec< 2, int, packed_mediump > packed_mediump_ivec2
 2 components vector tightly packed in memory of signed integer numbers.
 
typedef vec< 3, int, packed_mediump > packed_mediump_ivec3
 3 components vector tightly packed in memory of signed integer numbers.
 
typedef vec< 4, int, packed_mediump > packed_mediump_ivec4
 4 components vector tightly packed in memory of signed integer numbers.
 
typedef mat< 2, 2, float, packed_mediump > packed_mediump_mat2
 2 by 2 matrix tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in term of ULPs.
 
typedef mat< 2, 2, float, packed_mediump > packed_mediump_mat2x2
 2 by 2 matrix tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in term of ULPs.
 
typedef mat< 2, 3, float, packed_mediump > packed_mediump_mat2x3
 2 by 3 matrix tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in term of ULPs.
 
typedef mat< 2, 4, float, packed_mediump > packed_mediump_mat2x4
 2 by 4 matrix tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in term of ULPs.
 
typedef mat< 3, 3, float, packed_mediump > packed_mediump_mat3
 3 by 3 matrix tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in term of ULPs.
 
typedef mat< 3, 2, float, packed_mediump > packed_mediump_mat3x2
 3 by 2 matrix tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in term of ULPs.
 
typedef mat< 3, 3, float, packed_mediump > packed_mediump_mat3x3
 3 by 3 matrix tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in term of ULPs.
 
typedef mat< 3, 4, float, packed_mediump > packed_mediump_mat3x4
 3 by 4 matrix tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in term of ULPs.
 
typedef mat< 4, 4, float, packed_mediump > packed_mediump_mat4
 4 by 4 matrix tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in term of ULPs.
 
typedef mat< 4, 2, float, packed_mediump > packed_mediump_mat4x2
 4 by 2 matrix tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in term of ULPs.
 
typedef mat< 4, 3, float, packed_mediump > packed_mediump_mat4x3
 4 by 3 matrix tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in term of ULPs.
 
typedef mat< 4, 4, float, packed_mediump > packed_mediump_mat4x4
 4 by 4 matrix tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in term of ULPs.
 
typedef vec< 1, uint, packed_mediump > packed_mediump_uvec1
 1 component vector tightly packed in memory of unsigned integer numbers.
 
typedef vec< 2, uint, packed_mediump > packed_mediump_uvec2
 2 components vector tightly packed in memory of unsigned integer numbers.
 
typedef vec< 3, uint, packed_mediump > packed_mediump_uvec3
 3 components vector tightly packed in memory of unsigned integer numbers.
 
typedef vec< 4, uint, packed_mediump > packed_mediump_uvec4
 4 components vector tightly packed in memory of unsigned integer numbers.
 
typedef vec< 1, float, packed_mediump > packed_mediump_vec1
 1 component vector tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in term of ULPs.
 
typedef vec< 2, float, packed_mediump > packed_mediump_vec2
 2 components vector tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in term of ULPs.
 
typedef vec< 3, float, packed_mediump > packed_mediump_vec3
 3 components vector tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in term of ULPs.
 
typedef vec< 4, float, packed_mediump > packed_mediump_vec4
 4 components vector tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in term of ULPs.
 
typedef packed_highp_uvec1 packed_uvec1
 1 component vector tightly packed in memory of unsigned integer numbers.
 
typedef packed_highp_uvec2 packed_uvec2
 2 components vector tightly packed in memory of unsigned integer numbers.
 
typedef packed_highp_uvec3 packed_uvec3
 3 components vector tightly packed in memory of unsigned integer numbers.
 
typedef packed_highp_uvec4 packed_uvec4
 4 components vector tightly packed in memory of unsigned integer numbers.
 
typedef packed_highp_vec1 packed_vec1
 1 component vector tightly packed in memory of single-precision floating-point numbers.
 
typedef packed_highp_vec2 packed_vec2
 2 components vector tightly packed in memory of single-precision floating-point numbers.
 
typedef packed_highp_vec3 packed_vec3
 3 components vector tightly packed in memory of single-precision floating-point numbers.
 
typedef packed_highp_vec4 packed_vec4
 4 components vector tightly packed in memory of single-precision floating-point numbers.
 

Detailed Description

GLM_GTC_type_aligned

See also
Core features (dependence)

Definition in file gtc/type_aligned.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00161_source.html ================================================
0.9.9 API documentation: type_aligned.hpp Source File
gtc/type_aligned.hpp
1 
13 #pragma once
14 
15 #if (GLM_CONFIG_ALIGNED_GENTYPES == GLM_DISABLE)
16 # error "GLM: Aligned gentypes require to enable C++ language extensions. Define GLM_FORCE_ALIGNED_GENTYPES before including GLM headers to use aligned types."
17 #endif
18 
19 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
20 # pragma message("GLM: GLM_GTC_type_aligned extension included")
21 #endif
22 
23 #include "../mat4x4.hpp"
24 #include "../mat4x3.hpp"
25 #include "../mat4x2.hpp"
26 #include "../mat3x4.hpp"
27 #include "../mat3x3.hpp"
28 #include "../mat3x2.hpp"
29 #include "../mat2x4.hpp"
30 #include "../mat2x3.hpp"
31 #include "../mat2x2.hpp"
32 #include "../gtc/vec1.hpp"
33 #include "../vec2.hpp"
34 #include "../vec3.hpp"
35 #include "../vec4.hpp"
36 
37 namespace glm
38 {
41 
42  // -- *vec1 --
43 
45  typedef vec<1, float, aligned_highp> aligned_highp_vec1;
46 
48  typedef vec<1, float, aligned_mediump> aligned_mediump_vec1;
49 
51  typedef vec<1, float, aligned_lowp> aligned_lowp_vec1;
52 
54  typedef vec<1, double, aligned_highp> aligned_highp_dvec1;
55 
57  typedef vec<1, double, aligned_mediump> aligned_mediump_dvec1;
58 
60  typedef vec<1, double, aligned_lowp> aligned_lowp_dvec1;
61 
63  typedef vec<1, int, aligned_highp> aligned_highp_ivec1;
64 
66  typedef vec<1, int, aligned_mediump> aligned_mediump_ivec1;
67 
69  typedef vec<1, int, aligned_lowp> aligned_lowp_ivec1;
70 
72  typedef vec<1, uint, aligned_highp> aligned_highp_uvec1;
73 
75  typedef vec<1, uint, aligned_mediump> aligned_mediump_uvec1;
76 
78  typedef vec<1, uint, aligned_lowp> aligned_lowp_uvec1;
79 
81  typedef vec<1, bool, aligned_highp> aligned_highp_bvec1;
82 
84  typedef vec<1, bool, aligned_mediump> aligned_mediump_bvec1;
85 
87  typedef vec<1, bool, aligned_lowp> aligned_lowp_bvec1;
88 
90  typedef vec<1, float, packed_highp> packed_highp_vec1;
91 
93  typedef vec<1, float, packed_mediump> packed_mediump_vec1;
94 
96  typedef vec<1, float, packed_lowp> packed_lowp_vec1;
97 
99  typedef vec<1, double, packed_highp> packed_highp_dvec1;
100 
102  typedef vec<1, double, packed_mediump> packed_mediump_dvec1;
103 
105  typedef vec<1, double, packed_lowp> packed_lowp_dvec1;
106 
108  typedef vec<1, int, packed_highp> packed_highp_ivec1;
109 
111  typedef vec<1, int, packed_mediump> packed_mediump_ivec1;
112 
114  typedef vec<1, int, packed_lowp> packed_lowp_ivec1;
115 
117  typedef vec<1, uint, packed_highp> packed_highp_uvec1;
118 
120  typedef vec<1, uint, packed_mediump> packed_mediump_uvec1;
121 
123  typedef vec<1, uint, packed_lowp> packed_lowp_uvec1;
124 
126  typedef vec<1, bool, packed_highp> packed_highp_bvec1;
127 
129  typedef vec<1, bool, packed_mediump> packed_mediump_bvec1;
130 
132  typedef vec<1, bool, packed_lowp> packed_lowp_bvec1;
133 
134  // -- *vec2 --
135 
137  typedef vec<2, float, aligned_highp> aligned_highp_vec2;
138 
140  typedef vec<2, float, aligned_mediump> aligned_mediump_vec2;
141 
143  typedef vec<2, float, aligned_lowp> aligned_lowp_vec2;
144 
146  typedef vec<2, double, aligned_highp> aligned_highp_dvec2;
147 
149  typedef vec<2, double, aligned_mediump> aligned_mediump_dvec2;
150 
152  typedef vec<2, double, aligned_lowp> aligned_lowp_dvec2;
153 
155  typedef vec<2, int, aligned_highp> aligned_highp_ivec2;
156 
158  typedef vec<2, int, aligned_mediump> aligned_mediump_ivec2;
159 
161  typedef vec<2, int, aligned_lowp> aligned_lowp_ivec2;
162 
164  typedef vec<2, uint, aligned_highp> aligned_highp_uvec2;
165 
167  typedef vec<2, uint, aligned_mediump> aligned_mediump_uvec2;
168 
170  typedef vec<2, uint, aligned_lowp> aligned_lowp_uvec2;
171 
173  typedef vec<2, bool, aligned_highp> aligned_highp_bvec2;
174 
176  typedef vec<2, bool, aligned_mediump> aligned_mediump_bvec2;
177 
179  typedef vec<2, bool, aligned_lowp> aligned_lowp_bvec2;
180 
182  typedef vec<2, float, packed_highp> packed_highp_vec2;
183 
185  typedef vec<2, float, packed_mediump> packed_mediump_vec2;
186 
188  typedef vec<2, float, packed_lowp> packed_lowp_vec2;
189 
191  typedef vec<2, double, packed_highp> packed_highp_dvec2;
192 
194  typedef vec<2, double, packed_mediump> packed_mediump_dvec2;
195 
197  typedef vec<2, double, packed_lowp> packed_lowp_dvec2;
198 
200  typedef vec<2, int, packed_highp> packed_highp_ivec2;
201 
203  typedef vec<2, int, packed_mediump> packed_mediump_ivec2;
204 
206  typedef vec<2, int, packed_lowp> packed_lowp_ivec2;
207 
209  typedef vec<2, uint, packed_highp> packed_highp_uvec2;
210 
212  typedef vec<2, uint, packed_mediump> packed_mediump_uvec2;
213 
215  typedef vec<2, uint, packed_lowp> packed_lowp_uvec2;
216 
218  typedef vec<2, bool, packed_highp> packed_highp_bvec2;
219 
221  typedef vec<2, bool, packed_mediump> packed_mediump_bvec2;
222 
224  typedef vec<2, bool, packed_lowp> packed_lowp_bvec2;
225 
226  // -- *vec3 --
227 
229  typedef vec<3, float, aligned_highp> aligned_highp_vec3;
230 
232  typedef vec<3, float, aligned_mediump> aligned_mediump_vec3;
233 
235  typedef vec<3, float, aligned_lowp> aligned_lowp_vec3;
236 
238  typedef vec<3, double, aligned_highp> aligned_highp_dvec3;
239 
241  typedef vec<3, double, aligned_mediump> aligned_mediump_dvec3;
242 
244  typedef vec<3, double, aligned_lowp> aligned_lowp_dvec3;
245 
247  typedef vec<3, int, aligned_highp> aligned_highp_ivec3;
248 
250  typedef vec<3, int, aligned_mediump> aligned_mediump_ivec3;
251 
253  typedef vec<3, int, aligned_lowp> aligned_lowp_ivec3;
254 
256  typedef vec<3, uint, aligned_highp> aligned_highp_uvec3;
257 
259  typedef vec<3, uint, aligned_mediump> aligned_mediump_uvec3;
260 
262  typedef vec<3, uint, aligned_lowp> aligned_lowp_uvec3;
263 
265  typedef vec<3, bool, aligned_highp> aligned_highp_bvec3;
266 
268  typedef vec<3, bool, aligned_mediump> aligned_mediump_bvec3;
269 
271  typedef vec<3, bool, aligned_lowp> aligned_lowp_bvec3;
272 
274  typedef vec<3, float, packed_highp> packed_highp_vec3;
275 
277  typedef vec<3, float, packed_mediump> packed_mediump_vec3;
278 
280  typedef vec<3, float, packed_lowp> packed_lowp_vec3;
281 
283  typedef vec<3, double, packed_highp> packed_highp_dvec3;
284 
286  typedef vec<3, double, packed_mediump> packed_mediump_dvec3;
287 
289  typedef vec<3, double, packed_lowp> packed_lowp_dvec3;
290 
292  typedef vec<3, int, packed_highp> packed_highp_ivec3;
293 
295  typedef vec<3, int, packed_mediump> packed_mediump_ivec3;
296 
298  typedef vec<3, int, packed_lowp> packed_lowp_ivec3;
299 
301  typedef vec<3, uint, packed_highp> packed_highp_uvec3;
302 
304  typedef vec<3, uint, packed_mediump> packed_mediump_uvec3;
305 
307  typedef vec<3, uint, packed_lowp> packed_lowp_uvec3;
308 
310  typedef vec<3, bool, packed_highp> packed_highp_bvec3;
311 
313  typedef vec<3, bool, packed_mediump> packed_mediump_bvec3;
314 
316  typedef vec<3, bool, packed_lowp> packed_lowp_bvec3;
317 
318  // -- *vec4 --
319 
321  typedef vec<4, float, aligned_highp> aligned_highp_vec4;
322 
324  typedef vec<4, float, aligned_mediump> aligned_mediump_vec4;
325 
327  typedef vec<4, float, aligned_lowp> aligned_lowp_vec4;
328 
330  typedef vec<4, double, aligned_highp> aligned_highp_dvec4;
331 
333  typedef vec<4, double, aligned_mediump> aligned_mediump_dvec4;
334 
336  typedef vec<4, double, aligned_lowp> aligned_lowp_dvec4;
337 
339  typedef vec<4, int, aligned_highp> aligned_highp_ivec4;
340 
342  typedef vec<4, int, aligned_mediump> aligned_mediump_ivec4;
343 
345  typedef vec<4, int, aligned_lowp> aligned_lowp_ivec4;
346 
348  typedef vec<4, uint, aligned_highp> aligned_highp_uvec4;
349 
351  typedef vec<4, uint, aligned_mediump> aligned_mediump_uvec4;
352 
354  typedef vec<4, uint, aligned_lowp> aligned_lowp_uvec4;
355 
357  typedef vec<4, bool, aligned_highp> aligned_highp_bvec4;
358 
360  typedef vec<4, bool, aligned_mediump> aligned_mediump_bvec4;
361 
363  typedef vec<4, bool, aligned_lowp> aligned_lowp_bvec4;
364 
366  typedef vec<4, float, packed_highp> packed_highp_vec4;
367 
369  typedef vec<4, float, packed_mediump> packed_mediump_vec4;
370 
372  typedef vec<4, float, packed_lowp> packed_lowp_vec4;
373 
375  typedef vec<4, double, packed_highp> packed_highp_dvec4;
376 
378  typedef vec<4, double, packed_mediump> packed_mediump_dvec4;
379 
381  typedef vec<4, double, packed_lowp> packed_lowp_dvec4;
382 
384  typedef vec<4, int, packed_highp> packed_highp_ivec4;
385 
387  typedef vec<4, int, packed_mediump> packed_mediump_ivec4;
388 
390  typedef vec<4, int, packed_lowp> packed_lowp_ivec4;
391 
393  typedef vec<4, uint, packed_highp> packed_highp_uvec4;
394 
396  typedef vec<4, uint, packed_mediump> packed_mediump_uvec4;
397 
399  typedef vec<4, uint, packed_lowp> packed_lowp_uvec4;
400 
402  typedef vec<4, bool, packed_highp> packed_highp_bvec4;
403 
405  typedef vec<4, bool, packed_mediump> packed_mediump_bvec4;
406 
408  typedef vec<4, bool, packed_lowp> packed_lowp_bvec4;
409 
410  // -- *mat2 --
411 
413  typedef mat<2, 2, float, aligned_highp> aligned_highp_mat2;
414 
416  typedef mat<2, 2, float, aligned_mediump> aligned_mediump_mat2;
417 
419  typedef mat<2, 2, float, aligned_lowp> aligned_lowp_mat2;
420 
422  typedef mat<2, 2, double, aligned_highp> aligned_highp_dmat2;
423 
425  typedef mat<2, 2, double, aligned_mediump> aligned_mediump_dmat2;
426 
428  typedef mat<2, 2, double, aligned_lowp> aligned_lowp_dmat2;
429 
431  typedef mat<2, 2, float, packed_highp> packed_highp_mat2;
432 
434  typedef mat<2, 2, float, packed_mediump> packed_mediump_mat2;
435 
437  typedef mat<2, 2, float, packed_lowp> packed_lowp_mat2;
438 
440  typedef mat<2, 2, double, packed_highp> packed_highp_dmat2;
441 
443  typedef mat<2, 2, double, packed_mediump> packed_mediump_dmat2;
444 
446  typedef mat<2, 2, double, packed_lowp> packed_lowp_dmat2;
447 
448  // -- *mat3 --
449 
451  typedef mat<3, 3, float, aligned_highp> aligned_highp_mat3;
452 
454  typedef mat<3, 3, float, aligned_mediump> aligned_mediump_mat3;
455 
457  typedef mat<3, 3, float, aligned_lowp> aligned_lowp_mat3;
458 
460  typedef mat<3, 3, double, aligned_highp> aligned_highp_dmat3;
461 
463  typedef mat<3, 3, double, aligned_mediump> aligned_mediump_dmat3;
464 
466  typedef mat<3, 3, double, aligned_lowp> aligned_lowp_dmat3;
467 
469  typedef mat<3, 3, float, packed_highp> packed_highp_mat3;
470 
472  typedef mat<3, 3, float, packed_mediump> packed_mediump_mat3;
473 
475  typedef mat<3, 3, float, packed_lowp> packed_lowp_mat3;
476 
478  typedef mat<3, 3, double, packed_highp> packed_highp_dmat3;
479 
481  typedef mat<3, 3, double, packed_mediump> packed_mediump_dmat3;
482 
484  typedef mat<3, 3, double, packed_lowp> packed_lowp_dmat3;
485 
486  // -- *mat4 --
487 
489  typedef mat<4, 4, float, aligned_highp> aligned_highp_mat4;
490 
492  typedef mat<4, 4, float, aligned_mediump> aligned_mediump_mat4;
493 
495  typedef mat<4, 4, float, aligned_lowp> aligned_lowp_mat4;
496 
498  typedef mat<4, 4, double, aligned_highp> aligned_highp_dmat4;
499 
501  typedef mat<4, 4, double, aligned_mediump> aligned_mediump_dmat4;
502 
504  typedef mat<4, 4, double, aligned_lowp> aligned_lowp_dmat4;
505 
507  typedef mat<4, 4, float, packed_highp> packed_highp_mat4;
508 
510  typedef mat<4, 4, float, packed_mediump> packed_mediump_mat4;
511 
513  typedef mat<4, 4, float, packed_lowp> packed_lowp_mat4;
514 
516  typedef mat<4, 4, double, packed_highp> packed_highp_dmat4;
517 
519  typedef mat<4, 4, double, packed_mediump> packed_mediump_dmat4;
520 
522  typedef mat<4, 4, double, packed_lowp> packed_lowp_dmat4;
523 
524  // -- *mat2x2 --
525 
527  typedef mat<2, 2, float, aligned_highp> aligned_highp_mat2x2;
528 
530  typedef mat<2, 2, float, aligned_mediump> aligned_mediump_mat2x2;
531 
533  typedef mat<2, 2, float, aligned_lowp> aligned_lowp_mat2x2;
534 
536  typedef mat<2, 2, double, aligned_highp> aligned_highp_dmat2x2;
537 
539  typedef mat<2, 2, double, aligned_mediump> aligned_mediump_dmat2x2;
540 
542  typedef mat<2, 2, double, aligned_lowp> aligned_lowp_dmat2x2;
543 
545  typedef mat<2, 2, float, packed_highp> packed_highp_mat2x2;
546 
548  typedef mat<2, 2, float, packed_mediump> packed_mediump_mat2x2;
549 
551  typedef mat<2, 2, float, packed_lowp> packed_lowp_mat2x2;
552 
554  typedef mat<2, 2, double, packed_highp> packed_highp_dmat2x2;
555 
557  typedef mat<2, 2, double, packed_mediump> packed_mediump_dmat2x2;
558 
560  typedef mat<2, 2, double, packed_lowp> packed_lowp_dmat2x2;
561 
562  // -- *mat2x3 --
563 
565  typedef mat<2, 3, float, aligned_highp> aligned_highp_mat2x3;
566 
568  typedef mat<2, 3, float, aligned_mediump> aligned_mediump_mat2x3;
569 
571  typedef mat<2, 3, float, aligned_lowp> aligned_lowp_mat2x3;
572 
574  typedef mat<2, 3, double, aligned_highp> aligned_highp_dmat2x3;
575 
577  typedef mat<2, 3, double, aligned_mediump> aligned_mediump_dmat2x3;
578 
580  typedef mat<2, 3, double, aligned_lowp> aligned_lowp_dmat2x3;
581 
583  typedef mat<2, 3, float, packed_highp> packed_highp_mat2x3;
584 
586  typedef mat<2, 3, float, packed_mediump> packed_mediump_mat2x3;
587 
589  typedef mat<2, 3, float, packed_lowp> packed_lowp_mat2x3;
590 
592  typedef mat<2, 3, double, packed_highp> packed_highp_dmat2x3;
593 
595  typedef mat<2, 3, double, packed_mediump> packed_mediump_dmat2x3;
596 
598  typedef mat<2, 3, double, packed_lowp> packed_lowp_dmat2x3;
599 
600  // -- *mat2x4 --
601 
603  typedef mat<2, 4, float, aligned_highp> aligned_highp_mat2x4;
604 
606  typedef mat<2, 4, float, aligned_mediump> aligned_mediump_mat2x4;
607 
609  typedef mat<2, 4, float, aligned_lowp> aligned_lowp_mat2x4;
610 
612  typedef mat<2, 4, double, aligned_highp> aligned_highp_dmat2x4;
613 
615  typedef mat<2, 4, double, aligned_mediump> aligned_mediump_dmat2x4;
616 
618  typedef mat<2, 4, double, aligned_lowp> aligned_lowp_dmat2x4;
619 
621  typedef mat<2, 4, float, packed_highp> packed_highp_mat2x4;
622 
624  typedef mat<2, 4, float, packed_mediump> packed_mediump_mat2x4;
625 
627  typedef mat<2, 4, float, packed_lowp> packed_lowp_mat2x4;
628 
630  typedef mat<2, 4, double, packed_highp> packed_highp_dmat2x4;
631 
633  typedef mat<2, 4, double, packed_mediump> packed_mediump_dmat2x4;
634 
636  typedef mat<2, 4, double, packed_lowp> packed_lowp_dmat2x4;
637 
638  // -- *mat3x2 --
639 
641  typedef mat<3, 2, float, aligned_highp> aligned_highp_mat3x2;
642 
644  typedef mat<3, 2, float, aligned_mediump> aligned_mediump_mat3x2;
645 
647  typedef mat<3, 2, float, aligned_lowp> aligned_lowp_mat3x2;
648 
650  typedef mat<3, 2, double, aligned_highp> aligned_highp_dmat3x2;
651 
653  typedef mat<3, 2, double, aligned_mediump> aligned_mediump_dmat3x2;
654 
656  typedef mat<3, 2, double, aligned_lowp> aligned_lowp_dmat3x2;
657 
659  typedef mat<3, 2, float, packed_highp> packed_highp_mat3x2;
660 
662  typedef mat<3, 2, float, packed_mediump> packed_mediump_mat3x2;
663 
665  typedef mat<3, 2, float, packed_lowp> packed_lowp_mat3x2;
666 
668  typedef mat<3, 2, double, packed_highp> packed_highp_dmat3x2;
669 
671  typedef mat<3, 2, double, packed_mediump> packed_mediump_dmat3x2;
672 
674  typedef mat<3, 2, double, packed_lowp> packed_lowp_dmat3x2;
675 
676  // -- *mat3x3 --
677 
679  typedef mat<3, 3, float, aligned_highp> aligned_highp_mat3x3;
680 
682  typedef mat<3, 3, float, aligned_mediump> aligned_mediump_mat3x3;
683 
685  typedef mat<3, 3, float, aligned_lowp> aligned_lowp_mat3x3;
686 
688  typedef mat<3, 3, double, aligned_highp> aligned_highp_dmat3x3;
689 
691  typedef mat<3, 3, double, aligned_mediump> aligned_mediump_dmat3x3;
692 
694  typedef mat<3, 3, double, aligned_lowp> aligned_lowp_dmat3x3;
695 
697  typedef mat<3, 3, float, packed_highp> packed_highp_mat3x3;
698 
700  typedef mat<3, 3, float, packed_mediump> packed_mediump_mat3x3;
701 
703  typedef mat<3, 3, float, packed_lowp> packed_lowp_mat3x3;
704 
706  typedef mat<3, 3, double, packed_highp> packed_highp_dmat3x3;
707 
709  typedef mat<3, 3, double, packed_mediump> packed_mediump_dmat3x3;
710 
712  typedef mat<3, 3, double, packed_lowp> packed_lowp_dmat3x3;
713 
714  // -- *mat3x4 --
715 
717  typedef mat<3, 4, float, aligned_highp> aligned_highp_mat3x4;
718 
720  typedef mat<3, 4, float, aligned_mediump> aligned_mediump_mat3x4;
721 
723  typedef mat<3, 4, float, aligned_lowp> aligned_lowp_mat3x4;
724 
726  typedef mat<3, 4, double, aligned_highp> aligned_highp_dmat3x4;
727 
729  typedef mat<3, 4, double, aligned_mediump> aligned_mediump_dmat3x4;
730 
732  typedef mat<3, 4, double, aligned_lowp> aligned_lowp_dmat3x4;
733 
735  typedef mat<3, 4, float, packed_highp> packed_highp_mat3x4;
736 
738  typedef mat<3, 4, float, packed_mediump> packed_mediump_mat3x4;
739 
741  typedef mat<3, 4, float, packed_lowp> packed_lowp_mat3x4;
742 
744  typedef mat<3, 4, double, packed_highp> packed_highp_dmat3x4;
745 
747  typedef mat<3, 4, double, packed_mediump> packed_mediump_dmat3x4;
748 
750  typedef mat<3, 4, double, packed_lowp> packed_lowp_dmat3x4;
751 
752  // -- *mat4x2 --
753 
755  typedef mat<4, 2, float, aligned_highp> aligned_highp_mat4x2;
756 
758  typedef mat<4, 2, float, aligned_mediump> aligned_mediump_mat4x2;
759 
761  typedef mat<4, 2, float, aligned_lowp> aligned_lowp_mat4x2;
762 
764  typedef mat<4, 2, double, aligned_highp> aligned_highp_dmat4x2;
765 
767  typedef mat<4, 2, double, aligned_mediump> aligned_mediump_dmat4x2;
768 
770  typedef mat<4, 2, double, aligned_lowp> aligned_lowp_dmat4x2;
771 
773  typedef mat<4, 2, float, packed_highp> packed_highp_mat4x2;
774 
776  typedef mat<4, 2, float, packed_mediump> packed_mediump_mat4x2;
777 
779  typedef mat<4, 2, float, packed_lowp> packed_lowp_mat4x2;
780 
782  typedef mat<4, 2, double, packed_highp> packed_highp_dmat4x2;
783 
785  typedef mat<4, 2, double, packed_mediump> packed_mediump_dmat4x2;
786 
788  typedef mat<4, 2, double, packed_lowp> packed_lowp_dmat4x2;
789 
790  // -- *mat4x3 --
791 
793  typedef mat<4, 3, float, aligned_highp> aligned_highp_mat4x3;
794 
796  typedef mat<4, 3, float, aligned_mediump> aligned_mediump_mat4x3;
797 
799  typedef mat<4, 3, float, aligned_lowp> aligned_lowp_mat4x3;
800 
802  typedef mat<4, 3, double, aligned_highp> aligned_highp_dmat4x3;
803 
805  typedef mat<4, 3, double, aligned_mediump> aligned_mediump_dmat4x3;
806 
808  typedef mat<4, 3, double, aligned_lowp> aligned_lowp_dmat4x3;
809 
811  typedef mat<4, 3, float, packed_highp> packed_highp_mat4x3;
812 
814  typedef mat<4, 3, float, packed_mediump> packed_mediump_mat4x3;
815 
817  typedef mat<4, 3, float, packed_lowp> packed_lowp_mat4x3;
818 
820  typedef mat<4, 3, double, packed_highp> packed_highp_dmat4x3;
821 
823  typedef mat<4, 3, double, packed_mediump> packed_mediump_dmat4x3;
824 
826  typedef mat<4, 3, double, packed_lowp> packed_lowp_dmat4x3;
827 
828  // -- *mat4x4 --
829 
831  typedef mat<4, 4, float, aligned_highp> aligned_highp_mat4x4;
832 
834  typedef mat<4, 4, float, aligned_mediump> aligned_mediump_mat4x4;
835 
837  typedef mat<4, 4, float, aligned_lowp> aligned_lowp_mat4x4;
838 
840  typedef mat<4, 4, double, aligned_highp> aligned_highp_dmat4x4;
841 
843  typedef mat<4, 4, double, aligned_mediump> aligned_mediump_dmat4x4;
844 
846  typedef mat<4, 4, double, aligned_lowp> aligned_lowp_dmat4x4;
847 
849  typedef mat<4, 4, float, packed_highp> packed_highp_mat4x4;
850 
852  typedef mat<4, 4, float, packed_mediump> packed_mediump_mat4x4;
853 
855  typedef mat<4, 4, float, packed_lowp> packed_lowp_mat4x4;
856 
858  typedef mat<4, 4, double, packed_highp> packed_highp_dmat4x4;
859 
861  typedef mat<4, 4, double, packed_mediump> packed_mediump_dmat4x4;
862 
864  typedef mat<4, 4, double, packed_lowp> packed_lowp_dmat4x4;
865 
866  // -- default --
867 
868 #if(defined(GLM_PRECISION_LOWP_FLOAT))
869  typedef aligned_lowp_vec1 aligned_vec1;
870  typedef aligned_lowp_vec2 aligned_vec2;
871  typedef aligned_lowp_vec3 aligned_vec3;
872  typedef aligned_lowp_vec4 aligned_vec4;
873  typedef packed_lowp_vec1 packed_vec1;
874  typedef packed_lowp_vec2 packed_vec2;
875  typedef packed_lowp_vec3 packed_vec3;
876  typedef packed_lowp_vec4 packed_vec4;
877 
878  typedef aligned_lowp_mat2 aligned_mat2;
879  typedef aligned_lowp_mat3 aligned_mat3;
880  typedef aligned_lowp_mat4 aligned_mat4;
881  typedef packed_lowp_mat2 packed_mat2;
882  typedef packed_lowp_mat3 packed_mat3;
883  typedef packed_lowp_mat4 packed_mat4;
884 
885  typedef aligned_lowp_mat2x2 aligned_mat2x2;
886  typedef aligned_lowp_mat2x3 aligned_mat2x3;
887  typedef aligned_lowp_mat2x4 aligned_mat2x4;
888  typedef aligned_lowp_mat3x2 aligned_mat3x2;
889  typedef aligned_lowp_mat3x3 aligned_mat3x3;
890  typedef aligned_lowp_mat3x4 aligned_mat3x4;
891  typedef aligned_lowp_mat4x2 aligned_mat4x2;
892  typedef aligned_lowp_mat4x3 aligned_mat4x3;
893  typedef aligned_lowp_mat4x4 aligned_mat4x4;
894  typedef packed_lowp_mat2x2 packed_mat2x2;
895  typedef packed_lowp_mat2x3 packed_mat2x3;
896  typedef packed_lowp_mat2x4 packed_mat2x4;
897  typedef packed_lowp_mat3x2 packed_mat3x2;
898  typedef packed_lowp_mat3x3 packed_mat3x3;
899  typedef packed_lowp_mat3x4 packed_mat3x4;
900  typedef packed_lowp_mat4x2 packed_mat4x2;
901  typedef packed_lowp_mat4x3 packed_mat4x3;
902  typedef packed_lowp_mat4x4 packed_mat4x4;
903 #elif(defined(GLM_PRECISION_MEDIUMP_FLOAT))
904  typedef aligned_mediump_vec1 aligned_vec1;
905  typedef aligned_mediump_vec2 aligned_vec2;
906  typedef aligned_mediump_vec3 aligned_vec3;
907  typedef aligned_mediump_vec4 aligned_vec4;
908  typedef packed_mediump_vec1 packed_vec1;
909  typedef packed_mediump_vec2 packed_vec2;
910  typedef packed_mediump_vec3 packed_vec3;
911  typedef packed_mediump_vec4 packed_vec4;
912 
913  typedef aligned_mediump_mat2 aligned_mat2;
914  typedef aligned_mediump_mat3 aligned_mat3;
915  typedef aligned_mediump_mat4 aligned_mat4;
916  typedef packed_mediump_mat2 packed_mat2;
917  typedef packed_mediump_mat3 packed_mat3;
918  typedef packed_mediump_mat4 packed_mat4;
919 
920  typedef aligned_mediump_mat2x2 aligned_mat2x2;
921  typedef aligned_mediump_mat2x3 aligned_mat2x3;
922  typedef aligned_mediump_mat2x4 aligned_mat2x4;
923  typedef aligned_mediump_mat3x2 aligned_mat3x2;
924  typedef aligned_mediump_mat3x3 aligned_mat3x3;
925  typedef aligned_mediump_mat3x4 aligned_mat3x4;
926  typedef aligned_mediump_mat4x2 aligned_mat4x2;
927  typedef aligned_mediump_mat4x3 aligned_mat4x3;
928  typedef aligned_mediump_mat4x4 aligned_mat4x4;
929  typedef packed_mediump_mat2x2 packed_mat2x2;
930  typedef packed_mediump_mat2x3 packed_mat2x3;
931  typedef packed_mediump_mat2x4 packed_mat2x4;
932  typedef packed_mediump_mat3x2 packed_mat3x2;
933  typedef packed_mediump_mat3x3 packed_mat3x3;
934  typedef packed_mediump_mat3x4 packed_mat3x4;
935  typedef packed_mediump_mat4x2 packed_mat4x2;
936  typedef packed_mediump_mat4x3 packed_mat4x3;
937  typedef packed_mediump_mat4x4 packed_mat4x4;
938 #else //defined(GLM_PRECISION_HIGHP_FLOAT)
	typedef aligned_highp_vec1 aligned_vec1;
	typedef aligned_highp_vec2 aligned_vec2;
	typedef aligned_highp_vec3 aligned_vec3;
	typedef aligned_highp_vec4 aligned_vec4;
	typedef packed_highp_vec1 packed_vec1;
	typedef packed_highp_vec2 packed_vec2;
	typedef packed_highp_vec3 packed_vec3;
	typedef packed_highp_vec4 packed_vec4;

	typedef aligned_highp_mat2 aligned_mat2;
	typedef aligned_highp_mat3 aligned_mat3;
	typedef aligned_highp_mat4 aligned_mat4;
	typedef packed_highp_mat2 packed_mat2;
	typedef packed_highp_mat3 packed_mat3;
	typedef packed_highp_mat4 packed_mat4;

	typedef aligned_highp_mat2x2 aligned_mat2x2;
	typedef aligned_highp_mat2x3 aligned_mat2x3;
	typedef aligned_highp_mat2x4 aligned_mat2x4;
	typedef aligned_highp_mat3x2 aligned_mat3x2;
	typedef aligned_highp_mat3x3 aligned_mat3x3;
	typedef aligned_highp_mat3x4 aligned_mat3x4;
	typedef aligned_highp_mat4x2 aligned_mat4x2;
	typedef aligned_highp_mat4x3 aligned_mat4x3;
	typedef aligned_highp_mat4x4 aligned_mat4x4;
	typedef packed_highp_mat2x2 packed_mat2x2;
	typedef packed_highp_mat2x3 packed_mat2x3;
	typedef packed_highp_mat2x4 packed_mat2x4;
	typedef packed_highp_mat3x2 packed_mat3x2;
	typedef packed_highp_mat3x3 packed_mat3x3;
	typedef packed_highp_mat3x4 packed_mat3x4;
	typedef packed_highp_mat4x2 packed_mat4x2;
	typedef packed_highp_mat4x3 packed_mat4x3;
	typedef packed_highp_mat4x4 packed_mat4x4;
#endif//GLM_PRECISION

#if(defined(GLM_PRECISION_LOWP_DOUBLE))
	typedef aligned_lowp_dvec1 aligned_dvec1;
	typedef aligned_lowp_dvec2 aligned_dvec2;
	typedef aligned_lowp_dvec3 aligned_dvec3;
	typedef aligned_lowp_dvec4 aligned_dvec4;
	typedef packed_lowp_dvec1 packed_dvec1;
	typedef packed_lowp_dvec2 packed_dvec2;
	typedef packed_lowp_dvec3 packed_dvec3;
	typedef packed_lowp_dvec4 packed_dvec4;

	typedef aligned_lowp_dmat2 aligned_dmat2;
	typedef aligned_lowp_dmat3 aligned_dmat3;
	typedef aligned_lowp_dmat4 aligned_dmat4;
	typedef packed_lowp_dmat2 packed_dmat2;
	typedef packed_lowp_dmat3 packed_dmat3;
	typedef packed_lowp_dmat4 packed_dmat4;

	typedef aligned_lowp_dmat2x2 aligned_dmat2x2;
	typedef aligned_lowp_dmat2x3 aligned_dmat2x3;
	typedef aligned_lowp_dmat2x4 aligned_dmat2x4;
	typedef aligned_lowp_dmat3x2 aligned_dmat3x2;
	typedef aligned_lowp_dmat3x3 aligned_dmat3x3;
	typedef aligned_lowp_dmat3x4 aligned_dmat3x4;
	typedef aligned_lowp_dmat4x2 aligned_dmat4x2;
	typedef aligned_lowp_dmat4x3 aligned_dmat4x3;
	typedef aligned_lowp_dmat4x4 aligned_dmat4x4;
	typedef packed_lowp_dmat2x2 packed_dmat2x2;
	typedef packed_lowp_dmat2x3 packed_dmat2x3;
	typedef packed_lowp_dmat2x4 packed_dmat2x4;
	typedef packed_lowp_dmat3x2 packed_dmat3x2;
	typedef packed_lowp_dmat3x3 packed_dmat3x3;
	typedef packed_lowp_dmat3x4 packed_dmat3x4;
	typedef packed_lowp_dmat4x2 packed_dmat4x2;
	typedef packed_lowp_dmat4x3 packed_dmat4x3;
	typedef packed_lowp_dmat4x4 packed_dmat4x4;
#elif(defined(GLM_PRECISION_MEDIUMP_DOUBLE))
	typedef aligned_mediump_dvec1 aligned_dvec1;
	typedef aligned_mediump_dvec2 aligned_dvec2;
	typedef aligned_mediump_dvec3 aligned_dvec3;
	typedef aligned_mediump_dvec4 aligned_dvec4;
	typedef packed_mediump_dvec1 packed_dvec1;
	typedef packed_mediump_dvec2 packed_dvec2;
	typedef packed_mediump_dvec3 packed_dvec3;
	typedef packed_mediump_dvec4 packed_dvec4;

	typedef aligned_mediump_dmat2 aligned_dmat2;
	typedef aligned_mediump_dmat3 aligned_dmat3;
	typedef aligned_mediump_dmat4 aligned_dmat4;
	typedef packed_mediump_dmat2 packed_dmat2;
	typedef packed_mediump_dmat3 packed_dmat3;
	typedef packed_mediump_dmat4 packed_dmat4;

	typedef aligned_mediump_dmat2x2 aligned_dmat2x2;
	typedef aligned_mediump_dmat2x3 aligned_dmat2x3;
	typedef aligned_mediump_dmat2x4 aligned_dmat2x4;
	typedef aligned_mediump_dmat3x2 aligned_dmat3x2;
	typedef aligned_mediump_dmat3x3 aligned_dmat3x3;
	typedef aligned_mediump_dmat3x4 aligned_dmat3x4;
	typedef aligned_mediump_dmat4x2 aligned_dmat4x2;
	typedef aligned_mediump_dmat4x3 aligned_dmat4x3;
	typedef aligned_mediump_dmat4x4 aligned_dmat4x4;
	typedef packed_mediump_dmat2x2 packed_dmat2x2;
	typedef packed_mediump_dmat2x3 packed_dmat2x3;
	typedef packed_mediump_dmat2x4 packed_dmat2x4;
	typedef packed_mediump_dmat3x2 packed_dmat3x2;
	typedef packed_mediump_dmat3x3 packed_dmat3x3;
	typedef packed_mediump_dmat3x4 packed_dmat3x4;
	typedef packed_mediump_dmat4x2 packed_dmat4x2;
	typedef packed_mediump_dmat4x3 packed_dmat4x3;
	typedef packed_mediump_dmat4x4 packed_dmat4x4;
#else //defined(GLM_PRECISION_HIGHP_DOUBLE)
	typedef aligned_highp_dvec1 aligned_dvec1;
	typedef aligned_highp_dvec2 aligned_dvec2;
	typedef aligned_highp_dvec3 aligned_dvec3;
	typedef aligned_highp_dvec4 aligned_dvec4;
	typedef packed_highp_dvec1 packed_dvec1;
	typedef packed_highp_dvec2 packed_dvec2;
	typedef packed_highp_dvec3 packed_dvec3;
	typedef packed_highp_dvec4 packed_dvec4;

	typedef aligned_highp_dmat2 aligned_dmat2;
	typedef aligned_highp_dmat3 aligned_dmat3;
	typedef aligned_highp_dmat4 aligned_dmat4;
	typedef packed_highp_dmat2 packed_dmat2;
	typedef packed_highp_dmat3 packed_dmat3;
	typedef packed_highp_dmat4 packed_dmat4;

	typedef aligned_highp_dmat2x2 aligned_dmat2x2;
	typedef aligned_highp_dmat2x3 aligned_dmat2x3;
	typedef aligned_highp_dmat2x4 aligned_dmat2x4;
	typedef aligned_highp_dmat3x2 aligned_dmat3x2;
	typedef aligned_highp_dmat3x3 aligned_dmat3x3;
	typedef aligned_highp_dmat3x4 aligned_dmat3x4;
	typedef aligned_highp_dmat4x2 aligned_dmat4x2;
	typedef aligned_highp_dmat4x3 aligned_dmat4x3;
	typedef aligned_highp_dmat4x4 aligned_dmat4x4;
	typedef packed_highp_dmat2x2 packed_dmat2x2;
	typedef packed_highp_dmat2x3 packed_dmat2x3;
	typedef packed_highp_dmat2x4 packed_dmat2x4;
	typedef packed_highp_dmat3x2 packed_dmat3x2;
	typedef packed_highp_dmat3x3 packed_dmat3x3;
	typedef packed_highp_dmat3x4 packed_dmat3x4;
	typedef packed_highp_dmat4x2 packed_dmat4x2;
	typedef packed_highp_dmat4x3 packed_dmat4x3;
	typedef packed_highp_dmat4x4 packed_dmat4x4;
#endif//GLM_PRECISION

#if(defined(GLM_PRECISION_LOWP_INT))
	typedef aligned_lowp_ivec1 aligned_ivec1;
	typedef aligned_lowp_ivec2 aligned_ivec2;
	typedef aligned_lowp_ivec3 aligned_ivec3;
	typedef aligned_lowp_ivec4 aligned_ivec4;
#elif(defined(GLM_PRECISION_MEDIUMP_INT))
	typedef aligned_mediump_ivec1 aligned_ivec1;
	typedef aligned_mediump_ivec2 aligned_ivec2;
	typedef aligned_mediump_ivec3 aligned_ivec3;
	typedef aligned_mediump_ivec4 aligned_ivec4;
#else //defined(GLM_PRECISION_HIGHP_INT)
	typedef aligned_highp_ivec1 aligned_ivec1;
	typedef aligned_highp_ivec2 aligned_ivec2;
	typedef aligned_highp_ivec3 aligned_ivec3;
	typedef aligned_highp_ivec4 aligned_ivec4;

	typedef packed_highp_ivec1 packed_ivec1;
	typedef packed_highp_ivec2 packed_ivec2;
	typedef packed_highp_ivec3 packed_ivec3;
	typedef packed_highp_ivec4 packed_ivec4;
#endif//GLM_PRECISION

	// -- Unsigned integer definition --

#if(defined(GLM_PRECISION_LOWP_UINT))
	typedef aligned_lowp_uvec1 aligned_uvec1;
	typedef aligned_lowp_uvec2 aligned_uvec2;
	typedef aligned_lowp_uvec3 aligned_uvec3;
	typedef aligned_lowp_uvec4 aligned_uvec4;
#elif(defined(GLM_PRECISION_MEDIUMP_UINT))
	typedef aligned_mediump_uvec1 aligned_uvec1;
	typedef aligned_mediump_uvec2 aligned_uvec2;
	typedef aligned_mediump_uvec3 aligned_uvec3;
	typedef aligned_mediump_uvec4 aligned_uvec4;
#else //defined(GLM_PRECISION_HIGHP_UINT)
	typedef aligned_highp_uvec1 aligned_uvec1;
	typedef aligned_highp_uvec2 aligned_uvec2;
	typedef aligned_highp_uvec3 aligned_uvec3;
	typedef aligned_highp_uvec4 aligned_uvec4;

	typedef packed_highp_uvec1 packed_uvec1;
	typedef packed_highp_uvec2 packed_uvec2;
	typedef packed_highp_uvec3 packed_uvec3;
	typedef packed_highp_uvec4 packed_uvec4;
#endif//GLM_PRECISION

#if(defined(GLM_PRECISION_LOWP_BOOL))
	typedef aligned_lowp_bvec1 aligned_bvec1;
	typedef aligned_lowp_bvec2 aligned_bvec2;
	typedef aligned_lowp_bvec3 aligned_bvec3;
	typedef aligned_lowp_bvec4 aligned_bvec4;
#elif(defined(GLM_PRECISION_MEDIUMP_BOOL))
	typedef aligned_mediump_bvec1 aligned_bvec1;
	typedef aligned_mediump_bvec2 aligned_bvec2;
	typedef aligned_mediump_bvec3 aligned_bvec3;
	typedef aligned_mediump_bvec4 aligned_bvec4;
#else //defined(GLM_PRECISION_HIGHP_BOOL)
	typedef aligned_highp_bvec1 aligned_bvec1;
	typedef aligned_highp_bvec2 aligned_bvec2;
	typedef aligned_highp_bvec3 aligned_bvec3;
	typedef aligned_highp_bvec4 aligned_bvec4;

	typedef packed_highp_bvec1 packed_bvec1;
	typedef packed_highp_bvec2 packed_bvec2;
	typedef packed_highp_bvec3 packed_bvec3;
	typedef packed_highp_bvec4 packed_bvec4;
#endif//GLM_PRECISION

}//namespace glm
mat< 4, 4, double, packed_mediump > packed_mediump_dmat4
4 by 4 matrix tightly packed in memory of double-precision floating-point numbers using medium precis...
vec< 3, float, aligned_lowp > aligned_lowp_vec3
3 components vector aligned in memory of single-precision floating-point numbers using low precision ...
mat< 2, 4, float, packed_highp > packed_highp_mat2x4
2 by 4 matrix tightly packed in memory of single-precision floating-point numbers using high precisio...
mat< 2, 3, float, aligned_highp > aligned_highp_mat2x3
2 by 3 matrix aligned in memory of single-precision floating-point numbers using high precision arith...
mat< 3, 3, float, packed_mediump > packed_mediump_mat3x3
3 by 3 matrix tightly packed in memory of single-precision floating-point numbers using medium precis...
vec< 4, float, packed_highp > packed_highp_vec4
4 components vector tightly packed in memory of single-precision floating-point numbers using high pr...
aligned_highp_uvec1 aligned_uvec1
1 component vector aligned in memory of unsigned integer numbers.
mat< 4, 4, float, aligned_highp > aligned_highp_mat4x4
4 by 4 matrix aligned in memory of single-precision floating-point numbers using high precision arith...
mat< 4, 2, float, packed_mediump > packed_mediump_mat4x2
4 by 2 matrix tightly packed in memory of single-precision floating-point numbers using medium precis...
mat< 3, 2, float, aligned_lowp > aligned_lowp_mat3x2
3 by 2 matrix aligned in memory of single-precision floating-point numbers using low precision arithm...
mat< 3, 3, float, packed_lowp > packed_lowp_mat3
3 by 3 matrix tightly packed in memory of single-precision floating-point numbers using low precision...
vec< 4, bool, packed_highp > packed_highp_bvec4
4 components vector tightly packed in memory of bool values.
aligned_highp_vec1 aligned_vec1
1 component vector aligned in memory of single-precision floating-point numbers.
packed_highp_vec3 packed_vec3
3 components vector tightly packed in memory of single-precision floating-point numbers.
packed_highp_mat2x3 packed_mat2x3
2 by 3 matrix tightly packed in memory of single-precision floating-point numbers.
vec< 3, bool, aligned_mediump > aligned_mediump_bvec3
3 components vector aligned in memory of bool values.
vec< 1, uint, aligned_mediump > aligned_mediump_uvec1
1 component vector aligned in memory of unsigned integer numbers.
aligned_highp_bvec2 aligned_bvec2
2 components vector aligned in memory of bool values.
packed_highp_dmat2x2 packed_dmat2x2
2 by 2 matrix tightly packed in memory of double-precision floating-point numbers.
mat< 4, 2, float, packed_lowp > packed_lowp_mat4x2
4 by 2 matrix tightly packed in memory of single-precision floating-point numbers using low precision...
packed_highp_dmat2x4 packed_dmat2x4
2 by 4 matrix tightly packed in memory of double-precision floating-point numbers.
vec< 3, uint, aligned_highp > aligned_highp_uvec3
3 components vector aligned in memory of unsigned integer numbers.
vec< 2, bool, packed_mediump > packed_mediump_bvec2
2 components vector tightly packed in memory of bool values.
aligned_highp_bvec1 aligned_bvec1
1 component vector aligned in memory of bool values.
aligned_highp_mat3x2 aligned_mat3x2
3 by 2 matrix tightly aligned in memory of single-precision floating-point numbers.
vec< 1, int, aligned_lowp > aligned_lowp_ivec1
1 component vector aligned in memory of signed integer numbers.
mat< 3, 3, double, aligned_mediump > aligned_mediump_dmat3x3
3 by 3 matrix aligned in memory of double-precision floating-point numbers using medium precision ari...
mat< 3, 2, float, packed_lowp > packed_lowp_mat3x2
3 by 2 matrix tightly packed in memory of single-precision floating-point numbers using low precision...
mat< 2, 3, float, packed_highp > packed_highp_mat2x3
2 by 3 matrix tightly packed in memory of single-precision floating-point numbers using high precisio...
mat< 4, 4, float, packed_lowp > packed_lowp_mat4x4
4 by 4 matrix tightly packed in memory of single-precision floating-point numbers using low precision...
aligned_highp_uvec4 aligned_uvec4
4 components vector aligned in memory of unsigned integer numbers.
packed_highp_bvec2 packed_bvec2
2 components vector tightly packed in memory of bool values.
mat< 3, 3, float, aligned_highp > aligned_highp_mat3x3
3 by 3 matrix aligned in memory of single-precision floating-point numbers using high precision arith...
packed_highp_bvec4 packed_bvec4
4 components vector tightly packed in memory of bool values.
aligned_highp_ivec4 aligned_ivec4
4 components vector aligned in memory of signed integer numbers.
mat< 3, 3, float, packed_highp > packed_highp_mat3x3
3 by 3 matrix tightly packed in memory of single-precision floating-point numbers using high precisio...
vec< 4, int, packed_highp > packed_highp_ivec4
4 components vector tightly packed in memory of signed integer numbers.
packed_highp_mat3x2 packed_mat3x2
3 by 2 matrix tightly packed in memory of single-precision floating-point numbers.
vec< 2, uint, aligned_highp > aligned_highp_uvec2
2 components vector aligned in memory of unsigned integer numbers.
aligned_highp_dmat3 aligned_dmat3
3 by 3 matrix tightly aligned in memory of double-precision floating-point numbers.
vec< 3, int, aligned_highp > aligned_highp_ivec3
3 components vector aligned in memory of signed integer numbers.
mat< 2, 2, double, aligned_lowp > aligned_lowp_dmat2x2
2 by 2 matrix aligned in memory of double-precision floating-point numbers using low precision arithm...
mat< 3, 2, float, aligned_highp > aligned_highp_mat3x2
3 by 2 matrix aligned in memory of single-precision floating-point numbers using high precision arith...
vec< 1, uint, aligned_highp > aligned_highp_uvec1
1 component vector aligned in memory of unsigned integer numbers.
aligned_highp_mat2x4 aligned_mat2x4
2 by 4 matrix tightly aligned in memory of single-precision floating-point numbers.
mat< 4, 4, float, aligned_mediump > aligned_mediump_mat4
4 by 4 matrix aligned in memory of single-precision floating-point numbers using medium precision ari...
packed_highp_dvec1 packed_dvec1
1 component vector tightly packed in memory of double-precision floating-point numbers.
aligned_highp_dmat2x4 aligned_dmat2x4
2 by 4 matrix tightly aligned in memory of double-precision floating-point numbers.
mat< 2, 2, double, packed_mediump > packed_mediump_dmat2x2
2 by 2 matrix tightly packed in memory of double-precision floating-point numbers using medium precis...
vec< 3, double, packed_lowp > packed_lowp_dvec3
3 components vector tightly packed in memory of double-precision floating-point numbers using low pre...
vec< 4, uint, aligned_lowp > aligned_lowp_uvec4
4 components vector aligned in memory of unsigned integer numbers.
vec< 4, uint, packed_highp > packed_highp_uvec4
4 components vector tightly packed in memory of unsigned integer numbers.
mat< 2, 4, float, packed_lowp > packed_lowp_mat2x4
2 by 4 matrix tightly packed in memory of single-precision floating-point numbers using low precision...
aligned_highp_vec2 aligned_vec2
2 components vector aligned in memory of single-precision floating-point numbers.
aligned_highp_mat2x2 aligned_mat2x2
2 by 2 matrix tightly aligned in memory of single-precision floating-point numbers.
mat< 3, 3, double, packed_lowp > packed_lowp_dmat3
3 by 3 matrix tightly packed in memory of double-precision floating-point numbers using low precision...
aligned_highp_dmat3x3 aligned_dmat3x3
3 by 3 matrix tightly aligned in memory of double-precision floating-point numbers.
vec< 2, double, packed_highp > packed_highp_dvec2
2 components vector tightly packed in memory of double-precision floating-point numbers using high pr...
mat< 2, 2, double, aligned_mediump > aligned_mediump_dmat2x2
2 by 2 matrix aligned in memory of double-precision floating-point numbers using medium precision ari...
vec< 1, uint, packed_lowp > packed_lowp_uvec1
1 component vector tightly packed in memory of unsigned integer numbers.
vec< 2, uint, packed_lowp > packed_lowp_uvec2
2 components vector tightly packed in memory of unsigned integer numbers.
packed_highp_dmat4x3 packed_dmat4x3
4 by 3 matrix tightly packed in memory of double-precision floating-point numbers.
mat< 3, 3, double, aligned_lowp > aligned_lowp_dmat3x3
3 by 3 matrix aligned in memory of double-precision floating-point numbers using low precision arithm...
mat< 3, 3, float, packed_highp > packed_highp_mat3
3 by 3 matrix tightly packed in memory of single-precision floating-point numbers using high precisio...
aligned_highp_dmat2x3 aligned_dmat2x3
2 by 3 matrix tightly aligned in memory of double-precision floating-point numbers.
mat< 2, 2, float, aligned_mediump > aligned_mediump_mat2x2
2 by 2 matrix aligned in memory of single-precision floating-point numbers using medium precision ari...
mat< 2, 2, float, aligned_lowp > aligned_lowp_mat2x2
2 by 2 matrix aligned in memory of single-precision floating-point numbers using low precision arithm...
vec< 1, bool, packed_mediump > packed_mediump_bvec1
1 component vector tightly packed in memory of bool values.
mat< 4, 4, double, packed_lowp > packed_lowp_dmat4
4 by 4 matrix tightly packed in memory of double-precision floating-point numbers using low precision...
packed_highp_ivec1 packed_ivec1
1 component vector tightly packed in memory of signed integer numbers.
vec< 1, bool, packed_lowp > packed_lowp_bvec1
1 component vector tightly packed in memory of bool values.
aligned_highp_dmat3x2 aligned_dmat3x2
3 by 2 matrix tightly aligned in memory of double-precision floating-point numbers.
mat< 3, 2, double, packed_highp > packed_highp_dmat3x2
3 by 2 matrix tightly packed in memory of double-precision floating-point numbers using high precisio...
aligned_highp_ivec2 aligned_ivec2
2 components vector aligned in memory of signed integer numbers.
aligned_highp_dmat4x4 aligned_dmat4x4
4 by 4 matrix tightly aligned in memory of double-precision floating-point numbers.
mat< 3, 3, double, packed_highp > packed_highp_dmat3x3
3 by 3 matrix tightly packed in memory of double-precision floating-point numbers using high precisio...
vec< 4, bool, aligned_highp > aligned_highp_bvec4
4 components vector aligned in memory of bool values.
vec< 4, bool, packed_mediump > packed_mediump_bvec4
4 components vector tightly packed in memory of bool values.
mat< 2, 2, float, packed_highp > packed_highp_mat2
2 by 2 matrix tightly packed in memory of single-precision floating-point numbers using high precisio...
packed_highp_mat2x4 packed_mat2x4
2 by 4 matrix tightly packed in memory of single-precision floating-point numbers.
mat< 4, 2, float, aligned_lowp > aligned_lowp_mat4x2
4 by 2 matrix aligned in memory of single-precision floating-point numbers using low precision arithm...
vec< 1, bool, aligned_mediump > aligned_mediump_bvec1
1 component vector aligned in memory of bool values.
mat< 2, 4, double, aligned_highp > aligned_highp_dmat2x4
2 by 4 matrix aligned in memory of double-precision floating-point numbers using high precision arith...
mat< 3, 3, float, aligned_mediump > aligned_mediump_mat3x3
3 by 3 matrix aligned in memory of single-precision floating-point numbers using medium precision ari...
mat< 4, 4, float, packed_mediump > packed_mediump_mat4
4 by 4 matrix tightly packed in memory of single-precision floating-point numbers using medium precis...
vec< 1, float, packed_mediump > packed_mediump_vec1
1 component vector tightly packed in memory of single-precision floating-point numbers using medium p...
aligned_highp_mat4x4 aligned_mat4x4
4 by 4 matrix tightly aligned in memory of single-precision floating-point numbers.
aligned_highp_mat4x2 aligned_mat4x2
4 by 2 matrix tightly aligned in memory of single-precision floating-point numbers.
vec< 3, float, packed_highp > packed_highp_vec3
3 components vector tightly packed in memory of single-precision floating-point numbers using high pr...
aligned_highp_dvec4 aligned_dvec4
4 components vector aligned in memory of double-precision floating-point numbers.
vec< 1, int, aligned_highp > aligned_highp_ivec1
1 component vector aligned in memory of signed integer numbers.
mat< 3, 3, float, aligned_lowp > aligned_lowp_mat3
3 by 3 matrix aligned in memory of single-precision floating-point numbers using low precision arithm...
mat< 2, 2, double, aligned_highp > aligned_highp_dmat2
2 by 2 matrix aligned in memory of double-precision floating-point numbers using high precision arith...
aligned_highp_dmat3x4 aligned_dmat3x4
3 by 4 matrix tightly aligned in memory of double-precision floating-point numbers.
packed_highp_bvec3 packed_bvec3
3 components vector tightly packed in memory of bool values.
mat< 4, 4, float, packed_highp > packed_highp_mat4x4
4 by 4 matrix tightly packed in memory of single-precision floating-point numbers using high precisio...
vec< 3, float, packed_mediump > packed_mediump_vec3
3 components vector tightly packed in memory of single-precision floating-point numbers using medium ...
vec< 2, uint, aligned_lowp > aligned_lowp_uvec2
2 components vector aligned in memory of unsigned integer numbers.
vec< 1, bool, aligned_highp > aligned_highp_bvec1
1 component vector aligned in memory of bool values.
vec< 2, bool, packed_highp > packed_highp_bvec2
2 components vector tightly packed in memory of bool values.
vec< 1, int, packed_highp > packed_highp_ivec1
1 component vector tightly packed in memory of signed integer numbers.
mat< 2, 4, float, aligned_highp > aligned_highp_mat2x4
2 by 4 matrix aligned in memory of single-precision floating-point numbers using high precision arith...
vec< 2, int, packed_lowp > packed_lowp_ivec2
2 components vector tightly packed in memory of signed integer numbers.
vec< 3, double, packed_highp > packed_highp_dvec3
3 components vector tightly packed in memory of double-precision floating-point numbers using high pr...
vec< 2, int, aligned_highp > aligned_highp_ivec2
2 components vector aligned in memory of signed integer numbers.
aligned_highp_dmat2 aligned_dmat2
2 by 2 matrix tightly aligned in memory of double-precision floating-point numbers.
mat< 3, 2, double, packed_mediump > packed_mediump_dmat3x2
3 by 2 matrix tightly packed in memory of double-precision floating-point numbers using medium precis...
vec< 3, uint, aligned_mediump > aligned_mediump_uvec3
3 components vector aligned in memory of unsigned integer numbers.
vec< 4, double, packed_lowp > packed_lowp_dvec4
4 components vector tightly packed in memory of double-precision floating-point numbers using low pre...
vec< 3, double, packed_mediump > packed_mediump_dvec3
3 components vector tightly packed in memory of double-precision floating-point numbers using medium ...
mat< 2, 3, double, aligned_mediump > aligned_mediump_dmat2x3
2 by 3 matrix aligned in memory of double-precision floating-point numbers using medium precision ari...
vec< 3, int, aligned_lowp > aligned_lowp_ivec3
3 components vector aligned in memory of signed integer numbers.
mat< 2, 2, float, aligned_highp > aligned_highp_mat2x2
2 by 2 matrix aligned in memory of single-precision floating-point numbers using high precision arith...
mat< 4, 3, float, aligned_highp > aligned_highp_mat4x3
4 by 3 matrix aligned in memory of single-precision floating-point numbers using high precision arith...
vec< 3, bool, aligned_highp > aligned_highp_bvec3
3 components vector aligned in memory of bool values.
vec< 3, float, packed_lowp > packed_lowp_vec3
3 components vector tightly packed in memory of single-precision floating-point numbers using low pre...
vec< 2, uint, aligned_mediump > aligned_mediump_uvec2
2 components vector aligned in memory of unsigned integer numbers.
vec< 1, int, packed_mediump > packed_mediump_ivec1
1 component vector tightly packed in memory of signed integer numbers.
mat< 3, 3, double, aligned_highp > aligned_highp_dmat3
3 by 3 matrix aligned in memory of double-precision floating-point numbers using high precision arith...
vec< 4, uint, packed_mediump > packed_mediump_uvec4
4 components vector tightly packed in memory of unsigned integer numbers.
mat< 3, 2, double, aligned_mediump > aligned_mediump_dmat3x2
3 by 2 matrix aligned in memory of double-precision floating-point numbers using medium precision ari...
mat< 3, 4, double, aligned_mediump > aligned_mediump_dmat3x4
3 by 4 matrix aligned in memory of double-precision floating-point numbers using medium precision ari...
mat< 4, 3, double, packed_mediump > packed_mediump_dmat4x3
4 by 3 matrix tightly packed in memory of double-precision floating-point numbers using medium precis...
vec< 2, uint, packed_highp > packed_highp_uvec2
2 components vector tightly packed in memory of unsigned integer numbers.
vec< 4, uint, aligned_mediump > aligned_mediump_uvec4
4 components vector aligned in memory of unsigned integer numbers.
vec< 4, double, packed_mediump > packed_mediump_dvec4
4 components vector tightly packed in memory of double-precision floating-point numbers using medium ...
aligned_highp_vec4 aligned_vec4
4 components vector aligned in memory of single-precision floating-point numbers.
vec< 4, int, aligned_highp > aligned_highp_ivec4
4 components vector aligned in memory of signed integer numbers.
vec< 2, double, aligned_lowp > aligned_lowp_dvec2
2 components vector aligned in memory of double-precision floating-point numbers using low precision ...
packed_highp_bvec1 packed_bvec1
1 component vector tightly packed in memory of bool values.
mat< 2, 3, float, packed_lowp > packed_lowp_mat2x3
2 by 3 matrix tightly packed in memory of single-precision floating-point numbers using low precision...
vec< 1, float, aligned_lowp > aligned_lowp_vec1
1 component vector aligned in memory of single-precision floating-point numbers using low precision a...
vec< 2, float, packed_lowp > packed_lowp_vec2
2 components vector tightly packed in memory of single-precision floating-point numbers using low pre...
vec< 1, double, aligned_highp > aligned_highp_dvec1
1 component vector aligned in memory of double-precision floating-point numbers using high precision ...
mat< 2, 3, float, packed_mediump > packed_mediump_mat2x3
2 by 3 matrix tightly packed in memory of single-precision floating-point numbers using medium precis...
mat< 3, 2, float, packed_mediump > packed_mediump_mat3x2
3 by 2 matrix tightly packed in memory of single-precision floating-point numbers using medium precis...
vec< 4, float, aligned_mediump > aligned_mediump_vec4
4 components vector aligned in memory of single-precision floating-point numbers using medium precisi...
packed_highp_dvec4 packed_dvec4
4 components vector tightly packed in memory of double-precision floating-point numbers.
vec< 4, double, aligned_mediump > aligned_mediump_dvec4
4 components vector aligned in memory of double-precision floating-point numbers using medium precisi...
mat< 4, 4, double, packed_mediump > packed_mediump_dmat4x4
4 by 4 matrix tightly packed in memory of double-precision floating-point numbers using medium precis...
mat< 3, 4, double, packed_mediump > packed_mediump_dmat3x4
3 by 4 matrix tightly packed in memory of double-precision floating-point numbers using medium precis...
mat< 3, 4, float, aligned_highp > aligned_highp_mat3x4
3 by 4 matrix aligned in memory of single-precision floating-point numbers using high precision arith...
mat< 4, 4, float, aligned_highp > aligned_highp_mat4
4 by 4 matrix aligned in memory of single-precision floating-point numbers using high precision arith...
mat< 2, 3, double, packed_lowp > packed_lowp_dmat2x3
2 by 3 matrix tightly packed in memory of double-precision floating-point numbers using low precision...
mat< 3, 4, float, aligned_mediump > aligned_mediump_mat3x4
3 by 4 matrix aligned in memory of single-precision floating-point numbers using medium precision ari...
packed_highp_dmat3x3 packed_dmat3x3
3 by 3 matrix tightly packed in memory of double-precision floating-point numbers.
aligned_highp_mat3x3 aligned_mat3x3
3 by 3 matrix tightly aligned in memory of single-precision floating-point numbers.
vec< 3, int, packed_lowp > packed_lowp_ivec3
3 components vector tightly packed in memory of signed integer numbers.
aligned_highp_dmat4 aligned_dmat4
4 by 4 matrix tightly aligned in memory of double-precision floating-point numbers.
mat< 4, 2, float, aligned_highp > aligned_highp_mat4x2
4 by 2 matrix aligned in memory of single-precision floating-point numbers using high precision arith...
vec< 2, float, packed_highp > packed_highp_vec2
2 components vector tightly packed in memory of single-precision floating-point numbers using high pr...
mat< 2, 2, double, aligned_lowp > aligned_lowp_dmat2
2 by 2 matrix aligned in memory of double-precision floating-point numbers using low precision arithm...
vec< 4, uint, packed_lowp > packed_lowp_uvec4
4 components vector tightly packed in memory of unsigned integer numbers.
packed_highp_ivec4 packed_ivec4
4 components vector tightly packed in memory of signed integer numbers.
packed_highp_dvec2 packed_dvec2
2 components vector tightly packed in memory of double-precision floating-point numbers.
mat< 3, 3, float, aligned_lowp > aligned_lowp_mat3x3
3 by 3 matrix aligned in memory of single-precision floating-point numbers using low precision arithm...
mat< 3, 2, double, packed_lowp > packed_lowp_dmat3x2
3 by 2 matrix tightly packed in memory of double-precision floating-point numbers using low precision...
mat< 3, 2, float, packed_highp > packed_highp_mat3x2
3 by 2 matrix tightly packed in memory of single-precision floating-point numbers using high precisio...
mat< 3, 3, double, aligned_mediump > aligned_mediump_dmat3
3 by 3 matrix aligned in memory of double-precision floating-point numbers using medium precision ari...
mat< 2, 3, float, aligned_lowp > aligned_lowp_mat2x3
2 by 3 matrix aligned in memory of single-precision floating-point numbers using low precision arithm...
vec< 1, int, aligned_mediump > aligned_mediump_ivec1
1 component vector aligned in memory of signed integer numbers.
mat< 2, 2, double, packed_highp > packed_highp_dmat2
2 by 2 matrix tightly packed in memory of double-precision floating-point numbers using high precisio...
vec< 2, float, packed_mediump > packed_mediump_vec2
2 components vector tightly packed in memory of single-precision floating-point numbers using medium ...
aligned_highp_mat3 aligned_mat3
3 by 3 matrix tightly aligned in memory of single-precision floating-point numbers.
mat< 4, 3, double, aligned_mediump > aligned_mediump_dmat4x3
4 by 3 matrix aligned in memory of double-precision floating-point numbers using medium precision ari...
mat< 2, 2, float, packed_mediump > packed_mediump_mat2x2
2 by 2 matrix tightly packed in memory of single-precision floating-point numbers using medium precis...
mat< 2, 3, double, aligned_highp > aligned_highp_dmat2x3
2 by 3 matrix aligned in memory of double-precision floating-point numbers using high precision arith...
aligned_highp_dvec2 aligned_dvec2
2 components vector aligned in memory of double-precision floating-point numbers.
mat< 3, 4, float, aligned_lowp > aligned_lowp_mat3x4
3 by 4 matrix aligned in memory of single-precision floating-point numbers using low precision arithm...
packed_highp_mat4x2 packed_mat4x2
4 by 2 matrix tightly packed in memory of single-precision floating-point numbers.
vec< 4, float, aligned_lowp > aligned_lowp_vec4
4 components vector aligned in memory of single-precision floating-point numbers using low precision ...
vec< 3, bool, packed_highp > packed_highp_bvec3
3 components vector tightly packed in memory of bool values.
vec< 2, double, packed_lowp > packed_lowp_dvec2
2 components vector tightly packed in memory of double-precision floating-point numbers using low pre...
vec< 3, double, aligned_mediump > aligned_mediump_dvec3
3 components vector aligned in memory of double-precision floating-point numbers using medium precisi...
vec< 3, float, aligned_mediump > aligned_mediump_vec3
3 components vector aligned in memory of single-precision floating-point numbers using medium precisi...
mat< 2, 4, float, aligned_mediump > aligned_mediump_mat2x4
2 by 4 matrix aligned in memory of single-precision floating-point numbers using medium precision ari...
vec< 3, double, aligned_highp > aligned_highp_dvec3
3 components vector aligned in memory of double-precision floating-point numbers using high precision...
vec< 1, float, aligned_highp > aligned_highp_vec1
1 component vector aligned in memory of single-precision floating-point numbers using high precision ...
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00162.html ================================================
0.9.9 API documentation: type_aligned.hpp File Reference
gtx/type_aligned.hpp File Reference

GLM_GTX_type_aligned More...


Functions

 GLM_ALIGNED_TYPEDEF (lowp_int8, aligned_lowp_int8, 1)
 Low qualifier 8 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_int16, aligned_lowp_int16, 2)
 Low qualifier 16 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_int32, aligned_lowp_int32, 4)
 Low qualifier 32 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_int64, aligned_lowp_int64, 8)
 Low qualifier 64 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_int8_t, aligned_lowp_int8_t, 1)
 Low qualifier 8 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_int16_t, aligned_lowp_int16_t, 2)
 Low qualifier 16 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_int32_t, aligned_lowp_int32_t, 4)
 Low qualifier 32 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_int64_t, aligned_lowp_int64_t, 8)
 Low qualifier 64 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_i8, aligned_lowp_i8, 1)
 Low qualifier 8 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_i16, aligned_lowp_i16, 2)
 Low qualifier 16 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_i32, aligned_lowp_i32, 4)
 Low qualifier 32 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_i64, aligned_lowp_i64, 8)
 Low qualifier 64 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_int8, aligned_mediump_int8, 1)
 Medium qualifier 8 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_int16, aligned_mediump_int16, 2)
 Medium qualifier 16 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_int32, aligned_mediump_int32, 4)
 Medium qualifier 32 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_int64, aligned_mediump_int64, 8)
 Medium qualifier 64 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_int8_t, aligned_mediump_int8_t, 1)
 Medium qualifier 8 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_int16_t, aligned_mediump_int16_t, 2)
 Medium qualifier 16 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_int32_t, aligned_mediump_int32_t, 4)
 Medium qualifier 32 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_int64_t, aligned_mediump_int64_t, 8)
 Medium qualifier 64 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_i8, aligned_mediump_i8, 1)
 Medium qualifier 8 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_i16, aligned_mediump_i16, 2)
 Medium qualifier 16 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_i32, aligned_mediump_i32, 4)
 Medium qualifier 32 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_i64, aligned_mediump_i64, 8)
 Medium qualifier 64 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_int8, aligned_highp_int8, 1)
 High qualifier 8 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_int16, aligned_highp_int16, 2)
 High qualifier 16 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_int32, aligned_highp_int32, 4)
 High qualifier 32 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_int64, aligned_highp_int64, 8)
 High qualifier 64 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_int8_t, aligned_highp_int8_t, 1)
 High qualifier 8 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_int16_t, aligned_highp_int16_t, 2)
 High qualifier 16 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_int32_t, aligned_highp_int32_t, 4)
 High qualifier 32 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_int64_t, aligned_highp_int64_t, 8)
 High qualifier 64 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_i8, aligned_highp_i8, 1)
 High qualifier 8 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_i16, aligned_highp_i16, 2)
 High qualifier 16 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_i32, aligned_highp_i32, 4)
 High qualifier 32 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_i64, aligned_highp_i64, 8)
 High qualifier 64 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (int8, aligned_int8, 1)
 Default qualifier 8 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (int16, aligned_int16, 2)
 Default qualifier 16 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (int32, aligned_int32, 4)
 Default qualifier 32 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (int64, aligned_int64, 8)
 Default qualifier 64 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (int8_t, aligned_int8_t, 1)
 Default qualifier 8 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (int16_t, aligned_int16_t, 2)
 Default qualifier 16 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (int32_t, aligned_int32_t, 4)
 Default qualifier 32 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (int64_t, aligned_int64_t, 8)
 Default qualifier 64 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (i8, aligned_i8, 1)
 Default qualifier 8 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (i16, aligned_i16, 2)
 Default qualifier 16 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (i32, aligned_i32, 4)
 Default qualifier 32 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (i64, aligned_i64, 8)
 Default qualifier 64 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (ivec1, aligned_ivec1, 4)
Default qualifier 32 bit signed integer aligned vector of 1 component type. More...
 
 GLM_ALIGNED_TYPEDEF (ivec2, aligned_ivec2, 8)
 Default qualifier 32 bit signed integer aligned vector of 2 components type. More...
 
 GLM_ALIGNED_TYPEDEF (ivec3, aligned_ivec3, 16)
 Default qualifier 32 bit signed integer aligned vector of 3 components type. More...
 
 GLM_ALIGNED_TYPEDEF (ivec4, aligned_ivec4, 16)
 Default qualifier 32 bit signed integer aligned vector of 4 components type. More...
 
 GLM_ALIGNED_TYPEDEF (i8vec1, aligned_i8vec1, 1)
Default qualifier 8 bit signed integer aligned vector of 1 component type. More...
 
 GLM_ALIGNED_TYPEDEF (i8vec2, aligned_i8vec2, 2)
 Default qualifier 8 bit signed integer aligned vector of 2 components type. More...
 
 GLM_ALIGNED_TYPEDEF (i8vec3, aligned_i8vec3, 4)
 Default qualifier 8 bit signed integer aligned vector of 3 components type. More...
 
 GLM_ALIGNED_TYPEDEF (i8vec4, aligned_i8vec4, 4)
 Default qualifier 8 bit signed integer aligned vector of 4 components type. More...
 
 GLM_ALIGNED_TYPEDEF (i16vec1, aligned_i16vec1, 2)
Default qualifier 16 bit signed integer aligned vector of 1 component type. More...
 
 GLM_ALIGNED_TYPEDEF (i16vec2, aligned_i16vec2, 4)
 Default qualifier 16 bit signed integer aligned vector of 2 components type. More...
 
 GLM_ALIGNED_TYPEDEF (i16vec3, aligned_i16vec3, 8)
 Default qualifier 16 bit signed integer aligned vector of 3 components type. More...
 
 GLM_ALIGNED_TYPEDEF (i16vec4, aligned_i16vec4, 8)
 Default qualifier 16 bit signed integer aligned vector of 4 components type. More...
 
 GLM_ALIGNED_TYPEDEF (i32vec1, aligned_i32vec1, 4)
Default qualifier 32 bit signed integer aligned vector of 1 component type. More...
 
 GLM_ALIGNED_TYPEDEF (i32vec2, aligned_i32vec2, 8)
 Default qualifier 32 bit signed integer aligned vector of 2 components type. More...
 
 GLM_ALIGNED_TYPEDEF (i32vec3, aligned_i32vec3, 16)
 Default qualifier 32 bit signed integer aligned vector of 3 components type. More...
 
 GLM_ALIGNED_TYPEDEF (i32vec4, aligned_i32vec4, 16)
 Default qualifier 32 bit signed integer aligned vector of 4 components type. More...
 
 GLM_ALIGNED_TYPEDEF (i64vec1, aligned_i64vec1, 8)
Default qualifier 64 bit signed integer aligned vector of 1 component type. More...
 
 GLM_ALIGNED_TYPEDEF (i64vec2, aligned_i64vec2, 16)
 Default qualifier 64 bit signed integer aligned vector of 2 components type. More...
 
 GLM_ALIGNED_TYPEDEF (i64vec3, aligned_i64vec3, 32)
 Default qualifier 64 bit signed integer aligned vector of 3 components type. More...
 
 GLM_ALIGNED_TYPEDEF (i64vec4, aligned_i64vec4, 32)
 Default qualifier 64 bit signed integer aligned vector of 4 components type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_uint8, aligned_lowp_uint8, 1)
 Low qualifier 8 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_uint16, aligned_lowp_uint16, 2)
 Low qualifier 16 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_uint32, aligned_lowp_uint32, 4)
 Low qualifier 32 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_uint64, aligned_lowp_uint64, 8)
 Low qualifier 64 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_uint8_t, aligned_lowp_uint8_t, 1)
 Low qualifier 8 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_uint16_t, aligned_lowp_uint16_t, 2)
 Low qualifier 16 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_uint32_t, aligned_lowp_uint32_t, 4)
 Low qualifier 32 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_uint64_t, aligned_lowp_uint64_t, 8)
 Low qualifier 64 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_u8, aligned_lowp_u8, 1)
 Low qualifier 8 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_u16, aligned_lowp_u16, 2)
 Low qualifier 16 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_u32, aligned_lowp_u32, 4)
 Low qualifier 32 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_u64, aligned_lowp_u64, 8)
 Low qualifier 64 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_uint8, aligned_mediump_uint8, 1)
 Medium qualifier 8 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_uint16, aligned_mediump_uint16, 2)
 Medium qualifier 16 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_uint32, aligned_mediump_uint32, 4)
 Medium qualifier 32 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_uint64, aligned_mediump_uint64, 8)
 Medium qualifier 64 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_uint8_t, aligned_mediump_uint8_t, 1)
 Medium qualifier 8 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_uint16_t, aligned_mediump_uint16_t, 2)
 Medium qualifier 16 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_uint32_t, aligned_mediump_uint32_t, 4)
 Medium qualifier 32 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_uint64_t, aligned_mediump_uint64_t, 8)
 Medium qualifier 64 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_u8, aligned_mediump_u8, 1)
 Medium qualifier 8 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_u16, aligned_mediump_u16, 2)
 Medium qualifier 16 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_u32, aligned_mediump_u32, 4)
 Medium qualifier 32 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_u64, aligned_mediump_u64, 8)
 Medium qualifier 64 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_uint8, aligned_highp_uint8, 1)
 High qualifier 8 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_uint16, aligned_highp_uint16, 2)
 High qualifier 16 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_uint32, aligned_highp_uint32, 4)
 High qualifier 32 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_uint64, aligned_highp_uint64, 8)
 High qualifier 64 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_uint8_t, aligned_highp_uint8_t, 1)
 High qualifier 8 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_uint16_t, aligned_highp_uint16_t, 2)
 High qualifier 16 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_uint32_t, aligned_highp_uint32_t, 4)
 High qualifier 32 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_uint64_t, aligned_highp_uint64_t, 8)
 High qualifier 64 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_u8, aligned_highp_u8, 1)
 High qualifier 8 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_u16, aligned_highp_u16, 2)
 High qualifier 16 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_u32, aligned_highp_u32, 4)
 High qualifier 32 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_u64, aligned_highp_u64, 8)
 High qualifier 64 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (uint8, aligned_uint8, 1)
 Default qualifier 8 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (uint16, aligned_uint16, 2)
 Default qualifier 16 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (uint32, aligned_uint32, 4)
 Default qualifier 32 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (uint64, aligned_uint64, 8)
 Default qualifier 64 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (uint8_t, aligned_uint8_t, 1)
 Default qualifier 8 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (uint16_t, aligned_uint16_t, 2)
 Default qualifier 16 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (uint32_t, aligned_uint32_t, 4)
 Default qualifier 32 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (uint64_t, aligned_uint64_t, 8)
 Default qualifier 64 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (u8, aligned_u8, 1)
 Default qualifier 8 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (u16, aligned_u16, 2)
 Default qualifier 16 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (u32, aligned_u32, 4)
 Default qualifier 32 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (u64, aligned_u64, 8)
 Default qualifier 64 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (uvec1, aligned_uvec1, 4)
Default qualifier 32 bit unsigned integer aligned vector of 1 component type. More...
 
 GLM_ALIGNED_TYPEDEF (uvec2, aligned_uvec2, 8)
 Default qualifier 32 bit unsigned integer aligned vector of 2 components type. More...
 
 GLM_ALIGNED_TYPEDEF (uvec3, aligned_uvec3, 16)
 Default qualifier 32 bit unsigned integer aligned vector of 3 components type. More...
 
 GLM_ALIGNED_TYPEDEF (uvec4, aligned_uvec4, 16)
 Default qualifier 32 bit unsigned integer aligned vector of 4 components type. More...
 
 GLM_ALIGNED_TYPEDEF (u8vec1, aligned_u8vec1, 1)
Default qualifier 8 bit unsigned integer aligned vector of 1 component type. More...
 
 GLM_ALIGNED_TYPEDEF (u8vec2, aligned_u8vec2, 2)
 Default qualifier 8 bit unsigned integer aligned vector of 2 components type. More...
 
 GLM_ALIGNED_TYPEDEF (u8vec3, aligned_u8vec3, 4)
 Default qualifier 8 bit unsigned integer aligned vector of 3 components type. More...
 
 GLM_ALIGNED_TYPEDEF (u8vec4, aligned_u8vec4, 4)
 Default qualifier 8 bit unsigned integer aligned vector of 4 components type. More...
 
 GLM_ALIGNED_TYPEDEF (u16vec1, aligned_u16vec1, 2)
Default qualifier 16 bit unsigned integer aligned vector of 1 component type. More...
 
 GLM_ALIGNED_TYPEDEF (u16vec2, aligned_u16vec2, 4)
 Default qualifier 16 bit unsigned integer aligned vector of 2 components type. More...
 
 GLM_ALIGNED_TYPEDEF (u16vec3, aligned_u16vec3, 8)
 Default qualifier 16 bit unsigned integer aligned vector of 3 components type. More...
 
 GLM_ALIGNED_TYPEDEF (u16vec4, aligned_u16vec4, 8)
 Default qualifier 16 bit unsigned integer aligned vector of 4 components type. More...
 
 GLM_ALIGNED_TYPEDEF (u32vec1, aligned_u32vec1, 4)
Default qualifier 32 bit unsigned integer aligned vector of 1 component type. More...
 
 GLM_ALIGNED_TYPEDEF (u32vec2, aligned_u32vec2, 8)
 Default qualifier 32 bit unsigned integer aligned vector of 2 components type. More...
 
 GLM_ALIGNED_TYPEDEF (u32vec3, aligned_u32vec3, 16)
 Default qualifier 32 bit unsigned integer aligned vector of 3 components type. More...
 
 GLM_ALIGNED_TYPEDEF (u32vec4, aligned_u32vec4, 16)
 Default qualifier 32 bit unsigned integer aligned vector of 4 components type. More...
 
 GLM_ALIGNED_TYPEDEF (u64vec1, aligned_u64vec1, 8)
Default qualifier 64 bit unsigned integer aligned vector of 1 component type. More...
 
 GLM_ALIGNED_TYPEDEF (u64vec2, aligned_u64vec2, 16)
 Default qualifier 64 bit unsigned integer aligned vector of 2 components type. More...
 
 GLM_ALIGNED_TYPEDEF (u64vec3, aligned_u64vec3, 32)
 Default qualifier 64 bit unsigned integer aligned vector of 3 components type. More...
 
 GLM_ALIGNED_TYPEDEF (u64vec4, aligned_u64vec4, 32)
 Default qualifier 64 bit unsigned integer aligned vector of 4 components type. More...
 
 GLM_ALIGNED_TYPEDEF (float32, aligned_float32, 4)
 32 bit single-qualifier floating-point aligned scalar. More...
 
 GLM_ALIGNED_TYPEDEF (float32_t, aligned_float32_t, 4)
 32 bit single-qualifier floating-point aligned scalar. More...
 
 GLM_ALIGNED_TYPEDEF (float32, aligned_f32, 4)
 32 bit single-qualifier floating-point aligned scalar. More...
 
 GLM_ALIGNED_TYPEDEF (float64, aligned_float64, 8)
 64 bit double-qualifier floating-point aligned scalar. More...
 
 GLM_ALIGNED_TYPEDEF (float64_t, aligned_float64_t, 8)
 64 bit double-qualifier floating-point aligned scalar. More...
 
 GLM_ALIGNED_TYPEDEF (float64, aligned_f64, 8)
 64 bit double-qualifier floating-point aligned scalar. More...
 
 GLM_ALIGNED_TYPEDEF (vec1, aligned_vec1, 4)
 Single-qualifier floating-point aligned vector of 1 component. More...
 
 GLM_ALIGNED_TYPEDEF (vec2, aligned_vec2, 8)
 Single-qualifier floating-point aligned vector of 2 components. More...
 
 GLM_ALIGNED_TYPEDEF (vec3, aligned_vec3, 16)
 Single-qualifier floating-point aligned vector of 3 components. More...
 
 GLM_ALIGNED_TYPEDEF (vec4, aligned_vec4, 16)
 Single-qualifier floating-point aligned vector of 4 components. More...
 
 GLM_ALIGNED_TYPEDEF (fvec1, aligned_fvec1, 4)
 Single-qualifier floating-point aligned vector of 1 component. More...
 
 GLM_ALIGNED_TYPEDEF (fvec2, aligned_fvec2, 8)
 Single-qualifier floating-point aligned vector of 2 components. More...
 
 GLM_ALIGNED_TYPEDEF (fvec3, aligned_fvec3, 16)
 Single-qualifier floating-point aligned vector of 3 components. More...
 
 GLM_ALIGNED_TYPEDEF (fvec4, aligned_fvec4, 16)
 Single-qualifier floating-point aligned vector of 4 components. More...
 
 GLM_ALIGNED_TYPEDEF (f32vec1, aligned_f32vec1, 4)
 Single-qualifier floating-point aligned vector of 1 component. More...
 
 GLM_ALIGNED_TYPEDEF (f32vec2, aligned_f32vec2, 8)
 Single-qualifier floating-point aligned vector of 2 components. More...
 
 GLM_ALIGNED_TYPEDEF (f32vec3, aligned_f32vec3, 16)
 Single-qualifier floating-point aligned vector of 3 components. More...
 
 GLM_ALIGNED_TYPEDEF (f32vec4, aligned_f32vec4, 16)
 Single-qualifier floating-point aligned vector of 4 components. More...
 
 GLM_ALIGNED_TYPEDEF (dvec1, aligned_dvec1, 8)
 Double-qualifier floating-point aligned vector of 1 component. More...
 
 GLM_ALIGNED_TYPEDEF (dvec2, aligned_dvec2, 16)
 Double-qualifier floating-point aligned vector of 2 components. More...
 
 GLM_ALIGNED_TYPEDEF (dvec3, aligned_dvec3, 32)
 Double-qualifier floating-point aligned vector of 3 components. More...
 
 GLM_ALIGNED_TYPEDEF (dvec4, aligned_dvec4, 32)
 Double-qualifier floating-point aligned vector of 4 components. More...
 
 GLM_ALIGNED_TYPEDEF (f64vec1, aligned_f64vec1, 8)
 Double-qualifier floating-point aligned vector of 1 component. More...
 
 GLM_ALIGNED_TYPEDEF (f64vec2, aligned_f64vec2, 16)
 Double-qualifier floating-point aligned vector of 2 components. More...
 
 GLM_ALIGNED_TYPEDEF (f64vec3, aligned_f64vec3, 32)
 Double-qualifier floating-point aligned vector of 3 components. More...
 
 GLM_ALIGNED_TYPEDEF (f64vec4, aligned_f64vec4, 32)
 Double-qualifier floating-point aligned vector of 4 components. More...
 
 GLM_ALIGNED_TYPEDEF (mat2, aligned_mat2, 16)
Single-qualifier floating-point aligned 2x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (mat3, aligned_mat3, 16)
 Single-qualifier floating-point aligned 3x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (mat4, aligned_mat4, 16)
 Single-qualifier floating-point aligned 4x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (fmat2x2, aligned_fmat2, 16)
Single-qualifier floating-point aligned 2x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (fmat3x3, aligned_fmat3, 16)
 Single-qualifier floating-point aligned 3x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (fmat4x4, aligned_fmat4, 16)
 Single-qualifier floating-point aligned 4x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (fmat2x2, aligned_fmat2x2, 16)
Single-qualifier floating-point aligned 2x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (fmat2x3, aligned_fmat2x3, 16)
 Single-qualifier floating-point aligned 2x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (fmat2x4, aligned_fmat2x4, 16)
 Single-qualifier floating-point aligned 2x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (fmat3x2, aligned_fmat3x2, 16)
 Single-qualifier floating-point aligned 3x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (fmat3x3, aligned_fmat3x3, 16)
 Single-qualifier floating-point aligned 3x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (fmat3x4, aligned_fmat3x4, 16)
 Single-qualifier floating-point aligned 3x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (fmat4x2, aligned_fmat4x2, 16)
 Single-qualifier floating-point aligned 4x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (fmat4x3, aligned_fmat4x3, 16)
 Single-qualifier floating-point aligned 4x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (fmat4x4, aligned_fmat4x4, 16)
 Single-qualifier floating-point aligned 4x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f32mat2x2, aligned_f32mat2, 16)
Single-qualifier floating-point aligned 2x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f32mat3x3, aligned_f32mat3, 16)
 Single-qualifier floating-point aligned 3x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f32mat4x4, aligned_f32mat4, 16)
 Single-qualifier floating-point aligned 4x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f32mat2x2, aligned_f32mat2x2, 16)
Single-qualifier floating-point aligned 2x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f32mat2x3, aligned_f32mat2x3, 16)
 Single-qualifier floating-point aligned 2x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f32mat2x4, aligned_f32mat2x4, 16)
 Single-qualifier floating-point aligned 2x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f32mat3x2, aligned_f32mat3x2, 16)
 Single-qualifier floating-point aligned 3x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f32mat3x3, aligned_f32mat3x3, 16)
 Single-qualifier floating-point aligned 3x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f32mat3x4, aligned_f32mat3x4, 16)
 Single-qualifier floating-point aligned 3x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f32mat4x2, aligned_f32mat4x2, 16)
 Single-qualifier floating-point aligned 4x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f32mat4x3, aligned_f32mat4x3, 16)
 Single-qualifier floating-point aligned 4x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f32mat4x4, aligned_f32mat4x4, 16)
 Single-qualifier floating-point aligned 4x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f64mat2x2, aligned_f64mat2, 32)
Double-qualifier floating-point aligned 2x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f64mat3x3, aligned_f64mat3, 32)
 Double-qualifier floating-point aligned 3x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f64mat4x4, aligned_f64mat4, 32)
 Double-qualifier floating-point aligned 4x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f64mat2x2, aligned_f64mat2x2, 32)
Double-qualifier floating-point aligned 2x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f64mat2x3, aligned_f64mat2x3, 32)
 Double-qualifier floating-point aligned 2x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f64mat2x4, aligned_f64mat2x4, 32)
 Double-qualifier floating-point aligned 2x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f64mat3x2, aligned_f64mat3x2, 32)
 Double-qualifier floating-point aligned 3x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f64mat3x3, aligned_f64mat3x3, 32)
 Double-qualifier floating-point aligned 3x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f64mat3x4, aligned_f64mat3x4, 32)
 Double-qualifier floating-point aligned 3x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f64mat4x2, aligned_f64mat4x2, 32)
 Double-qualifier floating-point aligned 4x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f64mat4x3, aligned_f64mat4x3, 32)
 Double-qualifier floating-point aligned 4x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f64mat4x4, aligned_f64mat4x4, 32)
 Double-qualifier floating-point aligned 4x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (quat, aligned_quat, 16)
 Single-qualifier floating-point aligned quaternion. More...
 
 GLM_ALIGNED_TYPEDEF (quat, aligned_fquat, 16)
 Single-qualifier floating-point aligned quaternion. More...
 
 GLM_ALIGNED_TYPEDEF (dquat, aligned_dquat, 32)
 Double-qualifier floating-point aligned quaternion. More...
 
 GLM_ALIGNED_TYPEDEF (f32quat, aligned_f32quat, 16)
 Single-qualifier floating-point aligned quaternion. More...
 
 GLM_ALIGNED_TYPEDEF (f64quat, aligned_f64quat, 32)
 Double-qualifier floating-point aligned quaternion. More...
 

Detailed Description

GLM_GTX_type_aligned

See also
Core features (dependence)
GLM_GTC_quaternion (dependence)

Definition in file gtx/type_aligned.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00162_source.html ================================================
0.9.9 API documentation: gtx/type_aligned.hpp Source File
#pragma once

// Dependency:
#include "../gtc/type_precision.hpp"
#include "../gtc/quaternion.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
# ifndef GLM_ENABLE_EXPERIMENTAL
#  pragma message("GLM: GLM_GTX_type_aligned is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
# else
#  pragma message("GLM: GLM_GTX_type_aligned extension included")
# endif
#endif

namespace glm
{
    // Signed int vector types

    GLM_ALIGNED_TYPEDEF(lowp_int8, aligned_lowp_int8, 1);
    GLM_ALIGNED_TYPEDEF(lowp_int16, aligned_lowp_int16, 2);
    GLM_ALIGNED_TYPEDEF(lowp_int32, aligned_lowp_int32, 4);
    GLM_ALIGNED_TYPEDEF(lowp_int64, aligned_lowp_int64, 8);

    GLM_ALIGNED_TYPEDEF(lowp_int8_t, aligned_lowp_int8_t, 1);
    GLM_ALIGNED_TYPEDEF(lowp_int16_t, aligned_lowp_int16_t, 2);
    GLM_ALIGNED_TYPEDEF(lowp_int32_t, aligned_lowp_int32_t, 4);
    GLM_ALIGNED_TYPEDEF(lowp_int64_t, aligned_lowp_int64_t, 8);

    GLM_ALIGNED_TYPEDEF(lowp_i8, aligned_lowp_i8, 1);
    GLM_ALIGNED_TYPEDEF(lowp_i16, aligned_lowp_i16, 2);
    GLM_ALIGNED_TYPEDEF(lowp_i32, aligned_lowp_i32, 4);
    GLM_ALIGNED_TYPEDEF(lowp_i64, aligned_lowp_i64, 8);

    GLM_ALIGNED_TYPEDEF(mediump_int8, aligned_mediump_int8, 1);
    GLM_ALIGNED_TYPEDEF(mediump_int16, aligned_mediump_int16, 2);
    GLM_ALIGNED_TYPEDEF(mediump_int32, aligned_mediump_int32, 4);
    GLM_ALIGNED_TYPEDEF(mediump_int64, aligned_mediump_int64, 8);

    GLM_ALIGNED_TYPEDEF(mediump_int8_t, aligned_mediump_int8_t, 1);
    GLM_ALIGNED_TYPEDEF(mediump_int16_t, aligned_mediump_int16_t, 2);
    GLM_ALIGNED_TYPEDEF(mediump_int32_t, aligned_mediump_int32_t, 4);
    GLM_ALIGNED_TYPEDEF(mediump_int64_t, aligned_mediump_int64_t, 8);

    GLM_ALIGNED_TYPEDEF(mediump_i8, aligned_mediump_i8, 1);
    GLM_ALIGNED_TYPEDEF(mediump_i16, aligned_mediump_i16, 2);
    GLM_ALIGNED_TYPEDEF(mediump_i32, aligned_mediump_i32, 4);
    GLM_ALIGNED_TYPEDEF(mediump_i64, aligned_mediump_i64, 8);

    GLM_ALIGNED_TYPEDEF(highp_int8, aligned_highp_int8, 1);
    GLM_ALIGNED_TYPEDEF(highp_int16, aligned_highp_int16, 2);
    GLM_ALIGNED_TYPEDEF(highp_int32, aligned_highp_int32, 4);
    GLM_ALIGNED_TYPEDEF(highp_int64, aligned_highp_int64, 8);

    GLM_ALIGNED_TYPEDEF(highp_int8_t, aligned_highp_int8_t, 1);
    GLM_ALIGNED_TYPEDEF(highp_int16_t, aligned_highp_int16_t, 2);
    GLM_ALIGNED_TYPEDEF(highp_int32_t, aligned_highp_int32_t, 4);
    GLM_ALIGNED_TYPEDEF(highp_int64_t, aligned_highp_int64_t, 8);

    GLM_ALIGNED_TYPEDEF(highp_i8, aligned_highp_i8, 1);
    GLM_ALIGNED_TYPEDEF(highp_i16, aligned_highp_i16, 2);
    GLM_ALIGNED_TYPEDEF(highp_i32, aligned_highp_i32, 4);
    GLM_ALIGNED_TYPEDEF(highp_i64, aligned_highp_i64, 8);

    GLM_ALIGNED_TYPEDEF(int8, aligned_int8, 1);
    GLM_ALIGNED_TYPEDEF(int16, aligned_int16, 2);
    GLM_ALIGNED_TYPEDEF(int32, aligned_int32, 4);
    GLM_ALIGNED_TYPEDEF(int64, aligned_int64, 8);

    GLM_ALIGNED_TYPEDEF(int8_t, aligned_int8_t, 1);
    GLM_ALIGNED_TYPEDEF(int16_t, aligned_int16_t, 2);
    GLM_ALIGNED_TYPEDEF(int32_t, aligned_int32_t, 4);
    GLM_ALIGNED_TYPEDEF(int64_t, aligned_int64_t, 8);

    GLM_ALIGNED_TYPEDEF(i8, aligned_i8, 1);
    GLM_ALIGNED_TYPEDEF(i16, aligned_i16, 2);
    GLM_ALIGNED_TYPEDEF(i32, aligned_i32, 4);
    GLM_ALIGNED_TYPEDEF(i64, aligned_i64, 8);

    GLM_ALIGNED_TYPEDEF(ivec1, aligned_ivec1, 4);
    GLM_ALIGNED_TYPEDEF(ivec2, aligned_ivec2, 8);
    GLM_ALIGNED_TYPEDEF(ivec3, aligned_ivec3, 16);
    GLM_ALIGNED_TYPEDEF(ivec4, aligned_ivec4, 16);

    GLM_ALIGNED_TYPEDEF(i8vec1, aligned_i8vec1, 1);
    GLM_ALIGNED_TYPEDEF(i8vec2, aligned_i8vec2, 2);
    GLM_ALIGNED_TYPEDEF(i8vec3, aligned_i8vec3, 4);
    GLM_ALIGNED_TYPEDEF(i8vec4, aligned_i8vec4, 4);

    GLM_ALIGNED_TYPEDEF(i16vec1, aligned_i16vec1, 2);
    GLM_ALIGNED_TYPEDEF(i16vec2, aligned_i16vec2, 4);
    GLM_ALIGNED_TYPEDEF(i16vec3, aligned_i16vec3, 8);
    GLM_ALIGNED_TYPEDEF(i16vec4, aligned_i16vec4, 8);

    GLM_ALIGNED_TYPEDEF(i32vec1, aligned_i32vec1, 4);
    GLM_ALIGNED_TYPEDEF(i32vec2, aligned_i32vec2, 8);
    GLM_ALIGNED_TYPEDEF(i32vec3, aligned_i32vec3, 16);
    GLM_ALIGNED_TYPEDEF(i32vec4, aligned_i32vec4, 16);

    GLM_ALIGNED_TYPEDEF(i64vec1, aligned_i64vec1, 8);
    GLM_ALIGNED_TYPEDEF(i64vec2, aligned_i64vec2, 16);
    GLM_ALIGNED_TYPEDEF(i64vec3, aligned_i64vec3, 32);
    GLM_ALIGNED_TYPEDEF(i64vec4, aligned_i64vec4, 32);

    // Unsigned int vector types

    GLM_ALIGNED_TYPEDEF(lowp_uint8, aligned_lowp_uint8, 1);
    GLM_ALIGNED_TYPEDEF(lowp_uint16, aligned_lowp_uint16, 2);
    GLM_ALIGNED_TYPEDEF(lowp_uint32, aligned_lowp_uint32, 4);
    GLM_ALIGNED_TYPEDEF(lowp_uint64, aligned_lowp_uint64, 8);

    GLM_ALIGNED_TYPEDEF(lowp_uint8_t, aligned_lowp_uint8_t, 1);
    GLM_ALIGNED_TYPEDEF(lowp_uint16_t, aligned_lowp_uint16_t, 2);
    GLM_ALIGNED_TYPEDEF(lowp_uint32_t, aligned_lowp_uint32_t, 4);
    GLM_ALIGNED_TYPEDEF(lowp_uint64_t, aligned_lowp_uint64_t, 8);

    GLM_ALIGNED_TYPEDEF(lowp_u8, aligned_lowp_u8, 1);
    GLM_ALIGNED_TYPEDEF(lowp_u16, aligned_lowp_u16, 2);
    GLM_ALIGNED_TYPEDEF(lowp_u32, aligned_lowp_u32, 4);
    GLM_ALIGNED_TYPEDEF(lowp_u64, aligned_lowp_u64, 8);

    GLM_ALIGNED_TYPEDEF(mediump_uint8, aligned_mediump_uint8, 1);
    GLM_ALIGNED_TYPEDEF(mediump_uint16, aligned_mediump_uint16, 2);
    GLM_ALIGNED_TYPEDEF(mediump_uint32, aligned_mediump_uint32, 4);
    GLM_ALIGNED_TYPEDEF(mediump_uint64, aligned_mediump_uint64, 8);

    GLM_ALIGNED_TYPEDEF(mediump_uint8_t, aligned_mediump_uint8_t, 1);
    GLM_ALIGNED_TYPEDEF(mediump_uint16_t, aligned_mediump_uint16_t, 2);
    GLM_ALIGNED_TYPEDEF(mediump_uint32_t, aligned_mediump_uint32_t, 4);
    GLM_ALIGNED_TYPEDEF(mediump_uint64_t, aligned_mediump_uint64_t, 8);

    GLM_ALIGNED_TYPEDEF(mediump_u8, aligned_mediump_u8, 1);
    GLM_ALIGNED_TYPEDEF(mediump_u16, aligned_mediump_u16, 2);
    GLM_ALIGNED_TYPEDEF(mediump_u32, aligned_mediump_u32, 4);
    GLM_ALIGNED_TYPEDEF(mediump_u64, aligned_mediump_u64, 8);

    GLM_ALIGNED_TYPEDEF(highp_uint8, aligned_highp_uint8, 1);
    GLM_ALIGNED_TYPEDEF(highp_uint16, aligned_highp_uint16, 2);
    GLM_ALIGNED_TYPEDEF(highp_uint32, aligned_highp_uint32, 4);
    GLM_ALIGNED_TYPEDEF(highp_uint64, aligned_highp_uint64, 8);

    GLM_ALIGNED_TYPEDEF(highp_uint8_t, aligned_highp_uint8_t, 1);
    GLM_ALIGNED_TYPEDEF(highp_uint16_t, aligned_highp_uint16_t, 2);
    GLM_ALIGNED_TYPEDEF(highp_uint32_t, aligned_highp_uint32_t, 4);
    GLM_ALIGNED_TYPEDEF(highp_uint64_t, aligned_highp_uint64_t, 8);

    GLM_ALIGNED_TYPEDEF(highp_u8, aligned_highp_u8, 1);
    GLM_ALIGNED_TYPEDEF(highp_u16, aligned_highp_u16, 2);
    GLM_ALIGNED_TYPEDEF(highp_u32, aligned_highp_u32, 4);
    GLM_ALIGNED_TYPEDEF(highp_u64, aligned_highp_u64, 8);

    GLM_ALIGNED_TYPEDEF(uint8, aligned_uint8, 1);
    GLM_ALIGNED_TYPEDEF(uint16, aligned_uint16, 2);
    GLM_ALIGNED_TYPEDEF(uint32, aligned_uint32, 4);
    GLM_ALIGNED_TYPEDEF(uint64, aligned_uint64, 8);

    GLM_ALIGNED_TYPEDEF(uint8_t, aligned_uint8_t, 1);
    GLM_ALIGNED_TYPEDEF(uint16_t, aligned_uint16_t, 2);
    GLM_ALIGNED_TYPEDEF(uint32_t, aligned_uint32_t, 4);
    GLM_ALIGNED_TYPEDEF(uint64_t, aligned_uint64_t, 8);

    GLM_ALIGNED_TYPEDEF(u8, aligned_u8, 1);
    GLM_ALIGNED_TYPEDEF(u16, aligned_u16, 2);
    GLM_ALIGNED_TYPEDEF(u32, aligned_u32, 4);
    GLM_ALIGNED_TYPEDEF(u64, aligned_u64, 8);

    GLM_ALIGNED_TYPEDEF(uvec1, aligned_uvec1, 4);
    GLM_ALIGNED_TYPEDEF(uvec2, aligned_uvec2, 8);
    GLM_ALIGNED_TYPEDEF(uvec3, aligned_uvec3, 16);
    GLM_ALIGNED_TYPEDEF(uvec4, aligned_uvec4, 16);

    GLM_ALIGNED_TYPEDEF(u8vec1, aligned_u8vec1, 1);
    GLM_ALIGNED_TYPEDEF(u8vec2, aligned_u8vec2, 2);
    GLM_ALIGNED_TYPEDEF(u8vec3, aligned_u8vec3, 4);
    GLM_ALIGNED_TYPEDEF(u8vec4, aligned_u8vec4, 4);

    GLM_ALIGNED_TYPEDEF(u16vec1, aligned_u16vec1, 2);
    GLM_ALIGNED_TYPEDEF(u16vec2, aligned_u16vec2, 4);
    GLM_ALIGNED_TYPEDEF(u16vec3, aligned_u16vec3, 8);
    GLM_ALIGNED_TYPEDEF(u16vec4, aligned_u16vec4, 8);

    GLM_ALIGNED_TYPEDEF(u32vec1, aligned_u32vec1, 4);
    GLM_ALIGNED_TYPEDEF(u32vec2, aligned_u32vec2, 8);
    GLM_ALIGNED_TYPEDEF(u32vec3, aligned_u32vec3, 16);
    GLM_ALIGNED_TYPEDEF(u32vec4, aligned_u32vec4, 16);

    GLM_ALIGNED_TYPEDEF(u64vec1, aligned_u64vec1, 8);
    GLM_ALIGNED_TYPEDEF(u64vec2, aligned_u64vec2, 16);
    GLM_ALIGNED_TYPEDEF(u64vec3, aligned_u64vec3, 32);
    GLM_ALIGNED_TYPEDEF(u64vec4, aligned_u64vec4, 32);

    // Float vector types

    GLM_ALIGNED_TYPEDEF(float32, aligned_float32, 4);
    GLM_ALIGNED_TYPEDEF(float32_t, aligned_float32_t, 4);
    GLM_ALIGNED_TYPEDEF(float32, aligned_f32, 4);

# ifndef GLM_FORCE_SINGLE_ONLY
    GLM_ALIGNED_TYPEDEF(float64, aligned_float64, 8);
    GLM_ALIGNED_TYPEDEF(float64_t, aligned_float64_t, 8);
    GLM_ALIGNED_TYPEDEF(float64, aligned_f64, 8);
# endif//GLM_FORCE_SINGLE_ONLY

    GLM_ALIGNED_TYPEDEF(vec1, aligned_vec1, 4);
    GLM_ALIGNED_TYPEDEF(vec2, aligned_vec2, 8);
    GLM_ALIGNED_TYPEDEF(vec3, aligned_vec3, 16);
    GLM_ALIGNED_TYPEDEF(vec4, aligned_vec4, 16);

    GLM_ALIGNED_TYPEDEF(fvec1, aligned_fvec1, 4);
    GLM_ALIGNED_TYPEDEF(fvec2, aligned_fvec2, 8);
    GLM_ALIGNED_TYPEDEF(fvec3, aligned_fvec3, 16);
    GLM_ALIGNED_TYPEDEF(fvec4, aligned_fvec4, 16);

    GLM_ALIGNED_TYPEDEF(f32vec1, aligned_f32vec1, 4);
    GLM_ALIGNED_TYPEDEF(f32vec2, aligned_f32vec2, 8);
    GLM_ALIGNED_TYPEDEF(f32vec3, aligned_f32vec3, 16);
    GLM_ALIGNED_TYPEDEF(f32vec4, aligned_f32vec4, 16);

    GLM_ALIGNED_TYPEDEF(dvec1, aligned_dvec1, 8);
    GLM_ALIGNED_TYPEDEF(dvec2, aligned_dvec2, 16);
    GLM_ALIGNED_TYPEDEF(dvec3, aligned_dvec3, 32);
    GLM_ALIGNED_TYPEDEF(dvec4, aligned_dvec4, 32);

# ifndef GLM_FORCE_SINGLE_ONLY
    GLM_ALIGNED_TYPEDEF(f64vec1, aligned_f64vec1, 8);
    GLM_ALIGNED_TYPEDEF(f64vec2, aligned_f64vec2, 16);
    GLM_ALIGNED_TYPEDEF(f64vec3, aligned_f64vec3, 32);
    GLM_ALIGNED_TYPEDEF(f64vec4, aligned_f64vec4, 32);
# endif//GLM_FORCE_SINGLE_ONLY

    // Float matrix types

    //typedef detail::tmat1<f32> mat1;

    GLM_ALIGNED_TYPEDEF(mat2, aligned_mat2, 16);
    GLM_ALIGNED_TYPEDEF(mat3, aligned_mat3, 16);
    GLM_ALIGNED_TYPEDEF(mat4, aligned_mat4, 16);

    //typedef detail::tmat1x1<f32> mat1;

    //typedef detail::tmat1x1<f32> fmat1;

    GLM_ALIGNED_TYPEDEF(fmat2x2, aligned_fmat2, 16);
    GLM_ALIGNED_TYPEDEF(fmat3x3, aligned_fmat3, 16);
    GLM_ALIGNED_TYPEDEF(fmat4x4, aligned_fmat4, 16);

    //typedef f32 fmat1x1;

    GLM_ALIGNED_TYPEDEF(fmat2x2, aligned_fmat2x2, 16);
    GLM_ALIGNED_TYPEDEF(fmat2x3, aligned_fmat2x3, 16);
    GLM_ALIGNED_TYPEDEF(fmat2x4, aligned_fmat2x4, 16);
    GLM_ALIGNED_TYPEDEF(fmat3x2, aligned_fmat3x2, 16);
    GLM_ALIGNED_TYPEDEF(fmat3x3, aligned_fmat3x3, 16);
    GLM_ALIGNED_TYPEDEF(fmat3x4, aligned_fmat3x4, 16);
    GLM_ALIGNED_TYPEDEF(fmat4x2, aligned_fmat4x2, 16);
    GLM_ALIGNED_TYPEDEF(fmat4x3, aligned_fmat4x3, 16);
    GLM_ALIGNED_TYPEDEF(fmat4x4, aligned_fmat4x4, 16);

    //typedef detail::tmat1x1<f32, defaultp> f32mat1;

    GLM_ALIGNED_TYPEDEF(f32mat2x2, aligned_f32mat2, 16);
    GLM_ALIGNED_TYPEDEF(f32mat3x3, aligned_f32mat3, 16);
    GLM_ALIGNED_TYPEDEF(f32mat4x4, aligned_f32mat4, 16);

    //typedef f32 f32mat1x1;

    GLM_ALIGNED_TYPEDEF(f32mat2x2, aligned_f32mat2x2, 16);
    GLM_ALIGNED_TYPEDEF(f32mat2x3, aligned_f32mat2x3, 16);
    GLM_ALIGNED_TYPEDEF(f32mat2x4, aligned_f32mat2x4, 16);
    GLM_ALIGNED_TYPEDEF(f32mat3x2, aligned_f32mat3x2, 16);
    GLM_ALIGNED_TYPEDEF(f32mat3x3, aligned_f32mat3x3, 16);
    GLM_ALIGNED_TYPEDEF(f32mat3x4, aligned_f32mat3x4, 16);
    GLM_ALIGNED_TYPEDEF(f32mat4x2, aligned_f32mat4x2, 16);
    GLM_ALIGNED_TYPEDEF(f32mat4x3, aligned_f32mat4x3, 16);
    GLM_ALIGNED_TYPEDEF(f32mat4x4, aligned_f32mat4x4, 16);

# ifndef GLM_FORCE_SINGLE_ONLY

    //typedef detail::tmat1x1<f64, defaultp> f64mat1;

    GLM_ALIGNED_TYPEDEF(f64mat2x2, aligned_f64mat2, 32);
    GLM_ALIGNED_TYPEDEF(f64mat3x3, aligned_f64mat3, 32);
    GLM_ALIGNED_TYPEDEF(f64mat4x4, aligned_f64mat4, 32);

    //typedef f64 f64mat1x1;

    GLM_ALIGNED_TYPEDEF(f64mat2x2, aligned_f64mat2x2, 32);
    GLM_ALIGNED_TYPEDEF(f64mat2x3, aligned_f64mat2x3, 32);
    GLM_ALIGNED_TYPEDEF(f64mat2x4, aligned_f64mat2x4, 32);
927  GLM_ALIGNED_TYPEDEF(f64mat3x2, aligned_f64mat3x2, 32);
928 
931  GLM_ALIGNED_TYPEDEF(f64mat3x3, aligned_f64mat3x3, 32);
932 
935  GLM_ALIGNED_TYPEDEF(f64mat3x4, aligned_f64mat3x4, 32);
936 
939  GLM_ALIGNED_TYPEDEF(f64mat4x2, aligned_f64mat4x2, 32);
940 
943  GLM_ALIGNED_TYPEDEF(f64mat4x3, aligned_f64mat4x3, 32);
944 
947  GLM_ALIGNED_TYPEDEF(f64mat4x4, aligned_f64mat4x4, 32);
948 
949 # endif//GLM_FORCE_SINGLE_ONLY
950 
951 
953  // Quaternion types
954 
957  GLM_ALIGNED_TYPEDEF(quat, aligned_quat, 16);
958 
961  GLM_ALIGNED_TYPEDEF(quat, aligned_fquat, 16);
962 
965  GLM_ALIGNED_TYPEDEF(dquat, aligned_dquat, 32);
966 
969  GLM_ALIGNED_TYPEDEF(f32quat, aligned_f32quat, 16);
970 
971 # ifndef GLM_FORCE_SINGLE_ONLY
972 
975  GLM_ALIGNED_TYPEDEF(f64quat, aligned_f64quat, 32);
976 
977 # endif//GLM_FORCE_SINGLE_ONLY
978 
980 }//namespace glm
981 
982 #include "type_aligned.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00163_source.html ================================================ 0.9.9 API documentation: type_float.hpp Source File
type_float.hpp
1 #pragma once
2 
3 #include "setup.hpp"
4 
5 #if GLM_COMPILER == GLM_COMPILER_VC12
6 # pragma warning(push)
7 # pragma warning(disable: 4512) // assignment operator could not be generated
8 #endif
9 
10 namespace glm{
11 namespace detail
12 {
13  template <typename T>
14  union float_t
15  {};
16 
17  // https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/
18  template <>
19  union float_t<float>
20  {
21  typedef int int_type;
22  typedef float float_type;
23 
24  GLM_CONSTEXPR float_t(float_type Num = 0.0f) : f(Num) {}
25 
26  GLM_CONSTEXPR float_t& operator=(float_t const& x)
27  {
28  f = x.f;
29  return *this;
30  }
31 
32  // Portable extraction of components.
33  GLM_CONSTEXPR bool negative() const { return i < 0; }
34  GLM_CONSTEXPR int_type mantissa() const { return i & ((1 << 23) - 1); }
35  GLM_CONSTEXPR int_type exponent() const { return (i >> 23) & ((1 << 8) - 1); }
36 
37  int_type i;
38  float_type f;
39  };
40 
41  template <>
42  union float_t<double>
43  {
44  typedef detail::int64 int_type;
45  typedef double float_type;
46 
47  GLM_CONSTEXPR float_t(float_type Num = static_cast<float_type>(0)) : f(Num) {}
48 
49  GLM_CONSTEXPR float_t& operator=(float_t const& x)
50  {
51  f = x.f;
52  return *this;
53  }
54 
55  // Portable extraction of components.
56  GLM_CONSTEXPR bool negative() const { return i < 0; }
57  GLM_CONSTEXPR int_type mantissa() const { return i & ((int_type(1) << 52) - 1); }
58  GLM_CONSTEXPR int_type exponent() const { return (i >> 52) & ((int_type(1) << 11) - 1); }
59 
60  int_type i;
61  float_type f;
62  };
63 }//namespace detail
64 }//namespace glm
65 
66 #if GLM_COMPILER == GLM_COMPILER_VC12
67 # pragma warning(pop)
68 #endif
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00164_source.html ================================================ 0.9.9 API documentation: type_half.hpp Source File
type_half.hpp
1 #pragma once
2 
3 #include "setup.hpp"
4 
5 namespace glm{
6 namespace detail
7 {
8  typedef short hdata;
9 
10  GLM_FUNC_DECL float toFloat32(hdata value);
11  GLM_FUNC_DECL hdata toFloat16(float const& value);
12 
13 }//namespace detail
14 }//namespace glm
15 
16 #include "type_half.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00165.html ================================================ 0.9.9 API documentation: type_mat2x2.hpp File Reference
type_mat2x2.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00165_source.html ================================================ 0.9.9 API documentation: type_mat2x2.hpp Source File
type_mat2x2.hpp
1 
4 #pragma once
5 
6 #include "type_vec2.hpp"
7 #include <limits>
8 #include <cstddef>
9 
10 namespace glm
11 {
12  template<typename T, qualifier Q>
13  struct mat<2, 2, T, Q>
14  {
15  typedef vec<2, T, Q> col_type;
16  typedef vec<2, T, Q> row_type;
17  typedef mat<2, 2, T, Q> type;
18  typedef mat<2, 2, T, Q> transpose_type;
19  typedef T value_type;
20 
21  private:
22  col_type value[2];
23 
24  public:
25  // -- Accesses --
26 
27  typedef length_t length_type;
28  GLM_FUNC_DECL static GLM_CONSTEXPR length_type length() { return 2; }
29 
30  GLM_FUNC_DECL col_type & operator[](length_type i);
31  GLM_FUNC_DECL GLM_CONSTEXPR col_type const& operator[](length_type i) const;
32 
33  // -- Constructors --
34 
35  GLM_FUNC_DECL GLM_CONSTEXPR mat() GLM_DEFAULT;
36  template<qualifier P>
37  GLM_FUNC_DECL GLM_CONSTEXPR mat(mat<2, 2, T, P> const& m);
38 
39  GLM_FUNC_DECL explicit GLM_CONSTEXPR mat(T scalar);
40  GLM_FUNC_DECL GLM_CONSTEXPR mat(
41  T const& x1, T const& y1,
42  T const& x2, T const& y2);
43  GLM_FUNC_DECL GLM_CONSTEXPR mat(
44  col_type const& v1,
45  col_type const& v2);
46 
47  // -- Conversions --
48 
49  template<typename U, typename V, typename M, typename N>
50  GLM_FUNC_DECL GLM_CONSTEXPR mat(
51  U const& x1, V const& y1,
52  M const& x2, N const& y2);
53 
54  template<typename U, typename V>
55  GLM_FUNC_DECL GLM_CONSTEXPR mat(
56  vec<2, U, Q> const& v1,
57  vec<2, V, Q> const& v2);
58 
59  // -- Matrix conversions --
60 
61  template<typename U, qualifier P>
62  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 2, U, P> const& m);
63 
64  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 3, T, Q> const& x);
65  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 4, T, Q> const& x);
66  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 3, T, Q> const& x);
67  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 2, T, Q> const& x);
68  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 4, T, Q> const& x);
69  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 2, T, Q> const& x);
70  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 4, T, Q> const& x);
71  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 3, T, Q> const& x);
72 
73  // -- Unary arithmetic operators --
74 
75  template<typename U>
76  GLM_FUNC_DECL mat<2, 2, T, Q> & operator=(mat<2, 2, U, Q> const& m);
77  template<typename U>
78  GLM_FUNC_DECL mat<2, 2, T, Q> & operator+=(U s);
79  template<typename U>
80  GLM_FUNC_DECL mat<2, 2, T, Q> & operator+=(mat<2, 2, U, Q> const& m);
81  template<typename U>
82  GLM_FUNC_DECL mat<2, 2, T, Q> & operator-=(U s);
83  template<typename U>
84  GLM_FUNC_DECL mat<2, 2, T, Q> & operator-=(mat<2, 2, U, Q> const& m);
85  template<typename U>
86  GLM_FUNC_DECL mat<2, 2, T, Q> & operator*=(U s);
87  template<typename U>
88  GLM_FUNC_DECL mat<2, 2, T, Q> & operator*=(mat<2, 2, U, Q> const& m);
89  template<typename U>
90  GLM_FUNC_DECL mat<2, 2, T, Q> & operator/=(U s);
91  template<typename U>
92  GLM_FUNC_DECL mat<2, 2, T, Q> & operator/=(mat<2, 2, U, Q> const& m);
93 
94  // -- Increment and decrement operators --
95 
96  GLM_FUNC_DECL mat<2, 2, T, Q> & operator++ ();
97  GLM_FUNC_DECL mat<2, 2, T, Q> & operator-- ();
98  GLM_FUNC_DECL mat<2, 2, T, Q> operator++(int);
99  GLM_FUNC_DECL mat<2, 2, T, Q> operator--(int);
100  };
101 
102  // -- Unary operators --
103 
104  template<typename T, qualifier Q>
105  GLM_FUNC_DECL mat<2, 2, T, Q> operator+(mat<2, 2, T, Q> const& m);
106 
107  template<typename T, qualifier Q>
108  GLM_FUNC_DECL mat<2, 2, T, Q> operator-(mat<2, 2, T, Q> const& m);
109 
110  // -- Binary operators --
111 
112  template<typename T, qualifier Q>
113  GLM_FUNC_DECL mat<2, 2, T, Q> operator+(mat<2, 2, T, Q> const& m, T scalar);
114 
115  template<typename T, qualifier Q>
116  GLM_FUNC_DECL mat<2, 2, T, Q> operator+(T scalar, mat<2, 2, T, Q> const& m);
117 
118  template<typename T, qualifier Q>
119  GLM_FUNC_DECL mat<2, 2, T, Q> operator+(mat<2, 2, T, Q> const& m1, mat<2, 2, T, Q> const& m2);
120 
121  template<typename T, qualifier Q>
122  GLM_FUNC_DECL mat<2, 2, T, Q> operator-(mat<2, 2, T, Q> const& m, T scalar);
123 
124  template<typename T, qualifier Q>
125  GLM_FUNC_DECL mat<2, 2, T, Q> operator-(T scalar, mat<2, 2, T, Q> const& m);
126 
127  template<typename T, qualifier Q>
128  GLM_FUNC_DECL mat<2, 2, T, Q> operator-(mat<2, 2, T, Q> const& m1, mat<2, 2, T, Q> const& m2);
129 
130  template<typename T, qualifier Q>
131  GLM_FUNC_DECL mat<2, 2, T, Q> operator*(mat<2, 2, T, Q> const& m, T scalar);
132 
133  template<typename T, qualifier Q>
134  GLM_FUNC_DECL mat<2, 2, T, Q> operator*(T scalar, mat<2, 2, T, Q> const& m);
135 
136  template<typename T, qualifier Q>
137  GLM_FUNC_DECL typename mat<2, 2, T, Q>::col_type operator*(mat<2, 2, T, Q> const& m, typename mat<2, 2, T, Q>::row_type const& v);
138 
139  template<typename T, qualifier Q>
140  GLM_FUNC_DECL typename mat<2, 2, T, Q>::row_type operator*(typename mat<2, 2, T, Q>::col_type const& v, mat<2, 2, T, Q> const& m);
141 
142  template<typename T, qualifier Q>
143  GLM_FUNC_DECL mat<2, 2, T, Q> operator*(mat<2, 2, T, Q> const& m1, mat<2, 2, T, Q> const& m2);
144 
145  template<typename T, qualifier Q>
146  GLM_FUNC_DECL mat<3, 2, T, Q> operator*(mat<2, 2, T, Q> const& m1, mat<3, 2, T, Q> const& m2);
147 
148  template<typename T, qualifier Q>
149  GLM_FUNC_DECL mat<4, 2, T, Q> operator*(mat<2, 2, T, Q> const& m1, mat<4, 2, T, Q> const& m2);
150 
151  template<typename T, qualifier Q>
152  GLM_FUNC_DECL mat<2, 2, T, Q> operator/(mat<2, 2, T, Q> const& m, T scalar);
153 
154  template<typename T, qualifier Q>
155  GLM_FUNC_DECL mat<2, 2, T, Q> operator/(T scalar, mat<2, 2, T, Q> const& m);
156 
157  template<typename T, qualifier Q>
158  GLM_FUNC_DECL typename mat<2, 2, T, Q>::col_type operator/(mat<2, 2, T, Q> const& m, typename mat<2, 2, T, Q>::row_type const& v);
159 
160  template<typename T, qualifier Q>
161  GLM_FUNC_DECL typename mat<2, 2, T, Q>::row_type operator/(typename mat<2, 2, T, Q>::col_type const& v, mat<2, 2, T, Q> const& m);
162 
163  template<typename T, qualifier Q>
164  GLM_FUNC_DECL mat<2, 2, T, Q> operator/(mat<2, 2, T, Q> const& m1, mat<2, 2, T, Q> const& m2);
165 
166  // -- Boolean operators --
167 
168  template<typename T, qualifier Q>
169  GLM_FUNC_DECL bool operator==(mat<2, 2, T, Q> const& m1, mat<2, 2, T, Q> const& m2);
170 
171  template<typename T, qualifier Q>
172  GLM_FUNC_DECL bool operator!=(mat<2, 2, T, Q> const& m1, mat<2, 2, T, Q> const& m2);
173 } //namespace glm
174 
175 #ifndef GLM_EXTERNAL_TEMPLATE
176 #include "type_mat2x2.inl"
177 #endif
Core features
GLM_FUNC_DECL T length(qua< T, Q > const &q)
Returns the norm of a quaternion.
Definition: common.hpp:20
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00166.html ================================================ 0.9.9 API documentation: type_mat2x3.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00166_source.html ================================================ 0.9.9 API documentation: type_mat2x3.hpp Source File
1 
4 #pragma once
5 
6 #include "type_vec2.hpp"
7 #include "type_vec3.hpp"
8 #include <limits>
9 #include <cstddef>
10 
11 namespace glm
12 {
13  template<typename T, qualifier Q>
14  struct mat<2, 3, T, Q>
15  {
16  typedef vec<3, T, Q> col_type;
17  typedef vec<2, T, Q> row_type;
18  typedef mat<2, 3, T, Q> type;
19  typedef mat<3, 2, T, Q> transpose_type;
20  typedef T value_type;
21 
22  private:
23  col_type value[2];
24 
25  public:
26  // -- Accesses --
27 
28  typedef length_t length_type;
29  GLM_FUNC_DECL static GLM_CONSTEXPR length_type length() { return 2; }
30 
31  GLM_FUNC_DECL col_type & operator[](length_type i);
32  GLM_FUNC_DECL GLM_CONSTEXPR col_type const& operator[](length_type i) const;
33 
34  // -- Constructors --
35 
36  GLM_FUNC_DECL GLM_CONSTEXPR mat() GLM_DEFAULT;
37  template<qualifier P>
38  GLM_FUNC_DECL GLM_CONSTEXPR mat(mat<2, 3, T, P> const& m);
39 
40  GLM_FUNC_DECL explicit GLM_CONSTEXPR mat(T scalar);
41  GLM_FUNC_DECL GLM_CONSTEXPR mat(
42  T x0, T y0, T z0,
43  T x1, T y1, T z1);
44  GLM_FUNC_DECL GLM_CONSTEXPR mat(
45  col_type const& v0,
46  col_type const& v1);
47 
48  // -- Conversions --
49 
50  template<typename X1, typename Y1, typename Z1, typename X2, typename Y2, typename Z2>
51  GLM_FUNC_DECL GLM_CONSTEXPR mat(
52  X1 x1, Y1 y1, Z1 z1,
53  X2 x2, Y2 y2, Z2 z2);
54 
55  template<typename U, typename V>
56  GLM_FUNC_DECL GLM_CONSTEXPR mat(
57  vec<3, U, Q> const& v1,
58  vec<3, V, Q> const& v2);
59 
60  // -- Matrix conversions --
61 
62  template<typename U, qualifier P>
63  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 3, U, P> const& m);
64 
65  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 2, T, Q> const& x);
66  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 3, T, Q> const& x);
67  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 4, T, Q> const& x);
68  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 4, T, Q> const& x);
69  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 2, T, Q> const& x);
70  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 4, T, Q> const& x);
71  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 2, T, Q> const& x);
72  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 3, T, Q> const& x);
73 
74  // -- Unary arithmetic operators --
75 
76  template<typename U>
77  GLM_FUNC_DECL mat<2, 3, T, Q> & operator=(mat<2, 3, U, Q> const& m);
78  template<typename U>
79  GLM_FUNC_DECL mat<2, 3, T, Q> & operator+=(U s);
80  template<typename U>
81  GLM_FUNC_DECL mat<2, 3, T, Q> & operator+=(mat<2, 3, U, Q> const& m);
82  template<typename U>
83  GLM_FUNC_DECL mat<2, 3, T, Q> & operator-=(U s);
84  template<typename U>
85  GLM_FUNC_DECL mat<2, 3, T, Q> & operator-=(mat<2, 3, U, Q> const& m);
86  template<typename U>
87  GLM_FUNC_DECL mat<2, 3, T, Q> & operator*=(U s);
88  template<typename U>
89  GLM_FUNC_DECL mat<2, 3, T, Q> & operator/=(U s);
90 
91  // -- Increment and decrement operators --
92 
93  GLM_FUNC_DECL mat<2, 3, T, Q> & operator++ ();
94  GLM_FUNC_DECL mat<2, 3, T, Q> & operator-- ();
95  GLM_FUNC_DECL mat<2, 3, T, Q> operator++(int);
96  GLM_FUNC_DECL mat<2, 3, T, Q> operator--(int);
97  };
98 
99  // -- Unary operators --
100 
101  template<typename T, qualifier Q>
102  GLM_FUNC_DECL mat<2, 3, T, Q> operator+(mat<2, 3, T, Q> const& m);
103 
104  template<typename T, qualifier Q>
105  GLM_FUNC_DECL mat<2, 3, T, Q> operator-(mat<2, 3, T, Q> const& m);
106 
107  // -- Binary operators --
108 
109  template<typename T, qualifier Q>
110  GLM_FUNC_DECL mat<2, 3, T, Q> operator+(mat<2, 3, T, Q> const& m, T scalar);
111 
112  template<typename T, qualifier Q>
113  GLM_FUNC_DECL mat<2, 3, T, Q> operator+(mat<2, 3, T, Q> const& m1, mat<2, 3, T, Q> const& m2);
114 
115  template<typename T, qualifier Q>
116  GLM_FUNC_DECL mat<2, 3, T, Q> operator-(mat<2, 3, T, Q> const& m, T scalar);
117 
118  template<typename T, qualifier Q>
119  GLM_FUNC_DECL mat<2, 3, T, Q> operator-(mat<2, 3, T, Q> const& m1, mat<2, 3, T, Q> const& m2);
120 
121  template<typename T, qualifier Q>
122  GLM_FUNC_DECL mat<2, 3, T, Q> operator*(mat<2, 3, T, Q> const& m, T scalar);
123 
124  template<typename T, qualifier Q>
125  GLM_FUNC_DECL mat<2, 3, T, Q> operator*(T scalar, mat<2, 3, T, Q> const& m);
126 
127  template<typename T, qualifier Q>
128  GLM_FUNC_DECL typename mat<2, 3, T, Q>::col_type operator*(mat<2, 3, T, Q> const& m, typename mat<2, 3, T, Q>::row_type const& v);
129 
130  template<typename T, qualifier Q>
131  GLM_FUNC_DECL typename mat<2, 3, T, Q>::row_type operator*(typename mat<2, 3, T, Q>::col_type const& v, mat<2, 3, T, Q> const& m);
132 
133  template<typename T, qualifier Q>
134  GLM_FUNC_DECL mat<2, 3, T, Q> operator*(mat<2, 3, T, Q> const& m1, mat<2, 2, T, Q> const& m2);
135 
136  template<typename T, qualifier Q>
137  GLM_FUNC_DECL mat<3, 3, T, Q> operator*(mat<2, 3, T, Q> const& m1, mat<3, 2, T, Q> const& m2);
138 
139  template<typename T, qualifier Q>
140  GLM_FUNC_DECL mat<4, 3, T, Q> operator*(mat<2, 3, T, Q> const& m1, mat<4, 2, T, Q> const& m2);
141 
142  template<typename T, qualifier Q>
143  GLM_FUNC_DECL mat<2, 3, T, Q> operator/(mat<2, 3, T, Q> const& m, T scalar);
144 
145  template<typename T, qualifier Q>
146  GLM_FUNC_DECL mat<2, 3, T, Q> operator/(T scalar, mat<2, 3, T, Q> const& m);
147 
148  // -- Boolean operators --
149 
150  template<typename T, qualifier Q>
151  GLM_FUNC_DECL bool operator==(mat<2, 3, T, Q> const& m1, mat<2, 3, T, Q> const& m2);
152 
153  template<typename T, qualifier Q>
154  GLM_FUNC_DECL bool operator!=(mat<2, 3, T, Q> const& m1, mat<2, 3, T, Q> const& m2);
155 }//namespace glm
156 
157 #ifndef GLM_EXTERNAL_TEMPLATE
158 #include "type_mat2x3.inl"
159 #endif
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00167.html ================================================ 0.9.9 API documentation: type_mat2x4.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00167_source.html ================================================ 0.9.9 API documentation: type_mat2x4.hpp Source File
1 
4 #pragma once
5 
6 #include "type_vec2.hpp"
7 #include "type_vec4.hpp"
8 #include <limits>
9 #include <cstddef>
10 
11 namespace glm
12 {
13  template<typename T, qualifier Q>
14  struct mat<2, 4, T, Q>
15  {
16  typedef vec<4, T, Q> col_type;
17  typedef vec<2, T, Q> row_type;
18  typedef mat<2, 4, T, Q> type;
19  typedef mat<4, 2, T, Q> transpose_type;
20  typedef T value_type;
21 
22  private:
23  col_type value[2];
24 
25  public:
26  // -- Accesses --
27 
28  typedef length_t length_type;
29  GLM_FUNC_DECL static GLM_CONSTEXPR length_type length() { return 2; }
30 
31  GLM_FUNC_DECL col_type & operator[](length_type i);
32  GLM_FUNC_DECL GLM_CONSTEXPR col_type const& operator[](length_type i) const;
33 
34  // -- Constructors --
35 
36  GLM_FUNC_DECL GLM_CONSTEXPR mat() GLM_DEFAULT;
37  template<qualifier P>
38  GLM_FUNC_DECL GLM_CONSTEXPR mat(mat<2, 4, T, P> const& m);
39 
40  GLM_FUNC_DECL explicit GLM_CONSTEXPR mat(T scalar);
41  GLM_FUNC_DECL GLM_CONSTEXPR mat(
42  T x0, T y0, T z0, T w0,
43  T x1, T y1, T z1, T w1);
44  GLM_FUNC_DECL GLM_CONSTEXPR mat(
45  col_type const& v0,
46  col_type const& v1);
47 
48  // -- Conversions --
49 
50  template<
51  typename X1, typename Y1, typename Z1, typename W1,
52  typename X2, typename Y2, typename Z2, typename W2>
53  GLM_FUNC_DECL GLM_CONSTEXPR mat(
54  X1 x1, Y1 y1, Z1 z1, W1 w1,
55  X2 x2, Y2 y2, Z2 z2, W2 w2);
56 
57  template<typename U, typename V>
58  GLM_FUNC_DECL GLM_CONSTEXPR mat(
59  vec<4, U, Q> const& v1,
60  vec<4, V, Q> const& v2);
61 
62  // -- Matrix conversions --
63 
64  template<typename U, qualifier P>
65  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 4, U, P> const& m);
66 
67  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 2, T, Q> const& x);
68  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 3, T, Q> const& x);
69  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 4, T, Q> const& x);
70  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 3, T, Q> const& x);
71  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 2, T, Q> const& x);
72  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 4, T, Q> const& x);
73  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 2, T, Q> const& x);
74  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 3, T, Q> const& x);
75 
76  // -- Unary arithmetic operators --
77 
78  template<typename U>
79  GLM_FUNC_DECL mat<2, 4, T, Q> & operator=(mat<2, 4, U, Q> const& m);
80  template<typename U>
81  GLM_FUNC_DECL mat<2, 4, T, Q> & operator+=(U s);
82  template<typename U>
83  GLM_FUNC_DECL mat<2, 4, T, Q> & operator+=(mat<2, 4, U, Q> const& m);
84  template<typename U>
85  GLM_FUNC_DECL mat<2, 4, T, Q> & operator-=(U s);
86  template<typename U>
87  GLM_FUNC_DECL mat<2, 4, T, Q> & operator-=(mat<2, 4, U, Q> const& m);
88  template<typename U>
89  GLM_FUNC_DECL mat<2, 4, T, Q> & operator*=(U s);
90  template<typename U>
91  GLM_FUNC_DECL mat<2, 4, T, Q> & operator/=(U s);
92 
93  // -- Increment and decrement operators --
94 
95  GLM_FUNC_DECL mat<2, 4, T, Q> & operator++ ();
96  GLM_FUNC_DECL mat<2, 4, T, Q> & operator-- ();
97  GLM_FUNC_DECL mat<2, 4, T, Q> operator++(int);
98  GLM_FUNC_DECL mat<2, 4, T, Q> operator--(int);
99  };
100 
101  // -- Unary operators --
102 
103  template<typename T, qualifier Q>
104  GLM_FUNC_DECL mat<2, 4, T, Q> operator+(mat<2, 4, T, Q> const& m);
105 
106  template<typename T, qualifier Q>
107  GLM_FUNC_DECL mat<2, 4, T, Q> operator-(mat<2, 4, T, Q> const& m);
108 
109  // -- Binary operators --
110 
111  template<typename T, qualifier Q>
112  GLM_FUNC_DECL mat<2, 4, T, Q> operator+(mat<2, 4, T, Q> const& m, T scalar);
113 
114  template<typename T, qualifier Q>
115  GLM_FUNC_DECL mat<2, 4, T, Q> operator+(mat<2, 4, T, Q> const& m1, mat<2, 4, T, Q> const& m2);
116 
117  template<typename T, qualifier Q>
118  GLM_FUNC_DECL mat<2, 4, T, Q> operator-(mat<2, 4, T, Q> const& m, T scalar);
119 
120  template<typename T, qualifier Q>
121  GLM_FUNC_DECL mat<2, 4, T, Q> operator-(mat<2, 4, T, Q> const& m1, mat<2, 4, T, Q> const& m2);
122 
123  template<typename T, qualifier Q>
124  GLM_FUNC_DECL mat<2, 4, T, Q> operator*(mat<2, 4, T, Q> const& m, T scalar);
125 
126  template<typename T, qualifier Q>
127  GLM_FUNC_DECL mat<2, 4, T, Q> operator*(T scalar, mat<2, 4, T, Q> const& m);
128 
129  template<typename T, qualifier Q>
130  GLM_FUNC_DECL typename mat<2, 4, T, Q>::col_type operator*(mat<2, 4, T, Q> const& m, typename mat<2, 4, T, Q>::row_type const& v);
131 
132  template<typename T, qualifier Q>
133  GLM_FUNC_DECL typename mat<2, 4, T, Q>::row_type operator*(typename mat<2, 4, T, Q>::col_type const& v, mat<2, 4, T, Q> const& m);
134 
135  template<typename T, qualifier Q>
136  GLM_FUNC_DECL mat<4, 4, T, Q> operator*(mat<2, 4, T, Q> const& m1, mat<4, 2, T, Q> const& m2);
137 
138  template<typename T, qualifier Q>
139  GLM_FUNC_DECL mat<2, 4, T, Q> operator*(mat<2, 4, T, Q> const& m1, mat<2, 2, T, Q> const& m2);
140 
141  template<typename T, qualifier Q>
142  GLM_FUNC_DECL mat<3, 4, T, Q> operator*(mat<2, 4, T, Q> const& m1, mat<3, 2, T, Q> const& m2);
143 
144  template<typename T, qualifier Q>
145  GLM_FUNC_DECL mat<2, 4, T, Q> operator/(mat<2, 4, T, Q> const& m, T scalar);
146 
147  template<typename T, qualifier Q>
148  GLM_FUNC_DECL mat<2, 4, T, Q> operator/(T scalar, mat<2, 4, T, Q> const& m);
149 
150  // -- Boolean operators --
151 
152  template<typename T, qualifier Q>
153  GLM_FUNC_DECL bool operator==(mat<2, 4, T, Q> const& m1, mat<2, 4, T, Q> const& m2);
154 
155  template<typename T, qualifier Q>
156  GLM_FUNC_DECL bool operator!=(mat<2, 4, T, Q> const& m1, mat<2, 4, T, Q> const& m2);
157 }//namespace glm
158 
159 #ifndef GLM_EXTERNAL_TEMPLATE
160 #include "type_mat2x4.inl"
161 #endif
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00168.html ================================================ 0.9.9 API documentation: type_mat3x2.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00168_source.html ================================================ 0.9.9 API documentation: type_mat3x2.hpp Source File
1 
4 #pragma once
5 
6 #include "type_vec2.hpp"
7 #include "type_vec3.hpp"
8 #include <limits>
9 #include <cstddef>
10 
11 namespace glm
12 {
13  template<typename T, qualifier Q>
14  struct mat<3, 2, T, Q>
15  {
16  typedef vec<2, T, Q> col_type;
17  typedef vec<3, T, Q> row_type;
18  typedef mat<3, 2, T, Q> type;
19  typedef mat<2, 3, T, Q> transpose_type;
20  typedef T value_type;
21 
22  private:
23  col_type value[3];
24 
25  public:
26  // -- Accesses --
27 
28  typedef length_t length_type;
29  GLM_FUNC_DECL static GLM_CONSTEXPR length_type length() { return 3; }
30 
31  GLM_FUNC_DECL col_type & operator[](length_type i);
32  GLM_FUNC_DECL GLM_CONSTEXPR col_type const& operator[](length_type i) const;
33 
34  // -- Constructors --
35 
36  GLM_FUNC_DECL GLM_CONSTEXPR mat() GLM_DEFAULT;
37  template<qualifier P>
38  GLM_FUNC_DECL GLM_CONSTEXPR mat(mat<3, 2, T, P> const& m);
39 
40  GLM_FUNC_DECL explicit GLM_CONSTEXPR mat(T scalar);
41  GLM_FUNC_DECL GLM_CONSTEXPR mat(
42  T x0, T y0,
43  T x1, T y1,
44  T x2, T y2);
45  GLM_FUNC_DECL GLM_CONSTEXPR mat(
46  col_type const& v0,
47  col_type const& v1,
48  col_type const& v2);
49 
50  // -- Conversions --
51 
52  template<
53  typename X1, typename Y1,
54  typename X2, typename Y2,
55  typename X3, typename Y3>
56  GLM_FUNC_DECL GLM_CONSTEXPR mat(
57  X1 x1, Y1 y1,
58  X2 x2, Y2 y2,
59  X3 x3, Y3 y3);
60 
61  template<typename V1, typename V2, typename V3>
62  GLM_FUNC_DECL GLM_CONSTEXPR mat(
63  vec<2, V1, Q> const& v1,
64  vec<2, V2, Q> const& v2,
65  vec<2, V3, Q> const& v3);
66 
67  // -- Matrix conversions --
68 
69  template<typename U, qualifier P>
70  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 2, U, P> const& m);
71 
72  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 2, T, Q> const& x);
73  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 3, T, Q> const& x);
74  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 4, T, Q> const& x);
75  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 3, T, Q> const& x);
76  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 4, T, Q> const& x);
77  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 4, T, Q> const& x);
78  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 2, T, Q> const& x);
79  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 3, T, Q> const& x);
80 
81  // -- Unary arithmetic operators --
82 
83  template<typename U>
84  GLM_FUNC_DECL mat<3, 2, T, Q> & operator=(mat<3, 2, U, Q> const& m);
85  template<typename U>
86  GLM_FUNC_DECL mat<3, 2, T, Q> & operator+=(U s);
87  template<typename U>
88  GLM_FUNC_DECL mat<3, 2, T, Q> & operator+=(mat<3, 2, U, Q> const& m);
89  template<typename U>
90  GLM_FUNC_DECL mat<3, 2, T, Q> & operator-=(U s);
91  template<typename U>
92  GLM_FUNC_DECL mat<3, 2, T, Q> & operator-=(mat<3, 2, U, Q> const& m);
93  template<typename U>
94  GLM_FUNC_DECL mat<3, 2, T, Q> & operator*=(U s);
95  template<typename U>
96  GLM_FUNC_DECL mat<3, 2, T, Q> & operator/=(U s);
97 
98  // -- Increment and decrement operators --
99 
100  GLM_FUNC_DECL mat<3, 2, T, Q> & operator++ ();
101  GLM_FUNC_DECL mat<3, 2, T, Q> & operator-- ();
102  GLM_FUNC_DECL mat<3, 2, T, Q> operator++(int);
103  GLM_FUNC_DECL mat<3, 2, T, Q> operator--(int);
104  };
105 
106  // -- Unary operators --
107 
108  template<typename T, qualifier Q>
109  GLM_FUNC_DECL mat<3, 2, T, Q> operator+(mat<3, 2, T, Q> const& m);
110 
111  template<typename T, qualifier Q>
112  GLM_FUNC_DECL mat<3, 2, T, Q> operator-(mat<3, 2, T, Q> const& m);
113 
114  // -- Binary operators --
115 
116  template<typename T, qualifier Q>
117  GLM_FUNC_DECL mat<3, 2, T, Q> operator+(mat<3, 2, T, Q> const& m, T scalar);
118 
119  template<typename T, qualifier Q>
120  GLM_FUNC_DECL mat<3, 2, T, Q> operator+(mat<3, 2, T, Q> const& m1, mat<3, 2, T, Q> const& m2);
121 
122  template<typename T, qualifier Q>
123  GLM_FUNC_DECL mat<3, 2, T, Q> operator-(mat<3, 2, T, Q> const& m, T scalar);
124 
125  template<typename T, qualifier Q>
126  GLM_FUNC_DECL mat<3, 2, T, Q> operator-(mat<3, 2, T, Q> const& m1, mat<3, 2, T, Q> const& m2);
127 
128  template<typename T, qualifier Q>
129  GLM_FUNC_DECL mat<3, 2, T, Q> operator*(mat<3, 2, T, Q> const& m, T scalar);
130 
131  template<typename T, qualifier Q>
132  GLM_FUNC_DECL mat<3, 2, T, Q> operator*(T scalar, mat<3, 2, T, Q> const& m);
133 
134  template<typename T, qualifier Q>
135  GLM_FUNC_DECL typename mat<3, 2, T, Q>::col_type operator*(mat<3, 2, T, Q> const& m, typename mat<3, 2, T, Q>::row_type const& v);
136 
137  template<typename T, qualifier Q>
138  GLM_FUNC_DECL typename mat<3, 2, T, Q>::row_type operator*(typename mat<3, 2, T, Q>::col_type const& v, mat<3, 2, T, Q> const& m);
139 
140  template<typename T, qualifier Q>
141  GLM_FUNC_DECL mat<2, 2, T, Q> operator*(mat<3, 2, T, Q> const& m1, mat<2, 3, T, Q> const& m2);
142 
143  template<typename T, qualifier Q>
144  GLM_FUNC_DECL mat<3, 2, T, Q> operator*(mat<3, 2, T, Q> const& m1, mat<3, 3, T, Q> const& m2);
145 
146  template<typename T, qualifier Q>
147  GLM_FUNC_DECL mat<4, 2, T, Q> operator*(mat<3, 2, T, Q> const& m1, mat<4, 3, T, Q> const& m2);
148 
149  template<typename T, qualifier Q>
150  GLM_FUNC_DECL mat<3, 2, T, Q> operator/(mat<3, 2, T, Q> const& m, T scalar);
151 
152  template<typename T, qualifier Q>
153  GLM_FUNC_DECL mat<3, 2, T, Q> operator/(T scalar, mat<3, 2, T, Q> const& m);
154 
155  // -- Boolean operators --
156 
157  template<typename T, qualifier Q>
158  GLM_FUNC_DECL bool operator==(mat<3, 2, T, Q> const& m1, mat<3, 2, T, Q> const& m2);
159 
160  template<typename T, qualifier Q>
161  GLM_FUNC_DECL bool operator!=(mat<3, 2, T, Q> const& m1, mat<3, 2, T, Q> const& m2);
162 
163 }//namespace glm
164 
165 #ifndef GLM_EXTERNAL_TEMPLATE
166 #include "type_mat3x2.inl"
167 #endif
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00169.html ================================================ 0.9.9 API documentation: type_mat3x3.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00169_source.html ================================================ 0.9.9 API documentation: type_mat3x3.hpp Source File
1 
4 #pragma once
5 
6 #include "type_vec3.hpp"
7 #include <limits>
8 #include <cstddef>
9 
10 namespace glm
11 {
12  template<typename T, qualifier Q>
13  struct mat<3, 3, T, Q>
14  {
15  typedef vec<3, T, Q> col_type;
16  typedef vec<3, T, Q> row_type;
17  typedef mat<3, 3, T, Q> type;
18  typedef mat<3, 3, T, Q> transpose_type;
19  typedef T value_type;
20 
21  private:
22  col_type value[3];
23 
24  public:
25  // -- Accesses --
26 
27  typedef length_t length_type;
28  GLM_FUNC_DECL static GLM_CONSTEXPR length_type length() { return 3; }
29 
30  GLM_FUNC_DECL col_type & operator[](length_type i);
31  GLM_FUNC_DECL GLM_CONSTEXPR col_type const& operator[](length_type i) const;
32 
33  // -- Constructors --
34 
35  GLM_FUNC_DECL GLM_CONSTEXPR mat() GLM_DEFAULT;
36  template<qualifier P>
37  GLM_FUNC_DECL GLM_CONSTEXPR mat(mat<3, 3, T, P> const& m);
38 
39  GLM_FUNC_DECL explicit GLM_CONSTEXPR mat(T scalar);
40  GLM_FUNC_DECL GLM_CONSTEXPR mat(
41  T x0, T y0, T z0,
42  T x1, T y1, T z1,
43  T x2, T y2, T z2);
44  GLM_FUNC_DECL GLM_CONSTEXPR mat(
45  col_type const& v0,
46  col_type const& v1,
47  col_type const& v2);
48 
49  // -- Conversions --
50 
51  template<
52  typename X1, typename Y1, typename Z1,
53  typename X2, typename Y2, typename Z2,
54  typename X3, typename Y3, typename Z3>
55  GLM_FUNC_DECL GLM_CONSTEXPR mat(
56  X1 x1, Y1 y1, Z1 z1,
57  X2 x2, Y2 y2, Z2 z2,
58  X3 x3, Y3 y3, Z3 z3);
59 
60  template<typename V1, typename V2, typename V3>
61  GLM_FUNC_DECL GLM_CONSTEXPR mat(
62  vec<3, V1, Q> const& v1,
63  vec<3, V2, Q> const& v2,
64  vec<3, V3, Q> const& v3);
65 
66  // -- Matrix conversions --
67 
68  template<typename U, qualifier P>
69  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 3, U, P> const& m);
70 
71  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 2, T, Q> const& x);
72  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 4, T, Q> const& x);
73  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 3, T, Q> const& x);
74  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 2, T, Q> const& x);
75  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 4, T, Q> const& x);
76  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 2, T, Q> const& x);
77  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 4, T, Q> const& x);
78  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 3, T, Q> const& x);
79 
80  // -- Unary arithmetic operators --
81 
82  template<typename U>
83  GLM_FUNC_DECL mat<3, 3, T, Q> & operator=(mat<3, 3, U, Q> const& m);
84  template<typename U>
85  GLM_FUNC_DECL mat<3, 3, T, Q> & operator+=(U s);
86  template<typename U>
87  GLM_FUNC_DECL mat<3, 3, T, Q> & operator+=(mat<3, 3, U, Q> const& m);
88  template<typename U>
89  GLM_FUNC_DECL mat<3, 3, T, Q> & operator-=(U s);
90  template<typename U>
91  GLM_FUNC_DECL mat<3, 3, T, Q> & operator-=(mat<3, 3, U, Q> const& m);
92  template<typename U>
93  GLM_FUNC_DECL mat<3, 3, T, Q> & operator*=(U s);
94  template<typename U>
95  GLM_FUNC_DECL mat<3, 3, T, Q> & operator*=(mat<3, 3, U, Q> const& m);
96  template<typename U>
97  GLM_FUNC_DECL mat<3, 3, T, Q> & operator/=(U s);
98  template<typename U>
99  GLM_FUNC_DECL mat<3, 3, T, Q> & operator/=(mat<3, 3, U, Q> const& m);
100 
101  // -- Increment and decrement operators --
102 
103  GLM_FUNC_DECL mat<3, 3, T, Q> & operator++();
104  GLM_FUNC_DECL mat<3, 3, T, Q> & operator--();
105  GLM_FUNC_DECL mat<3, 3, T, Q> operator++(int);
106  GLM_FUNC_DECL mat<3, 3, T, Q> operator--(int);
107  };
108 
109  // -- Unary operators --
110 
111  template<typename T, qualifier Q>
112  GLM_FUNC_DECL mat<3, 3, T, Q> operator+(mat<3, 3, T, Q> const& m);
113 
114  template<typename T, qualifier Q>
115  GLM_FUNC_DECL mat<3, 3, T, Q> operator-(mat<3, 3, T, Q> const& m);
116 
117  // -- Binary operators --
118 
119  template<typename T, qualifier Q>
120  GLM_FUNC_DECL mat<3, 3, T, Q> operator+(mat<3, 3, T, Q> const& m, T scalar);
121 
122  template<typename T, qualifier Q>
123  GLM_FUNC_DECL mat<3, 3, T, Q> operator+(T scalar, mat<3, 3, T, Q> const& m);
124 
125  template<typename T, qualifier Q>
126  GLM_FUNC_DECL mat<3, 3, T, Q> operator+(mat<3, 3, T, Q> const& m1, mat<3, 3, T, Q> const& m2);
127 
128  template<typename T, qualifier Q>
129  GLM_FUNC_DECL mat<3, 3, T, Q> operator-(mat<3, 3, T, Q> const& m, T scalar);
130 
131  template<typename T, qualifier Q>
132  GLM_FUNC_DECL mat<3, 3, T, Q> operator-(T scalar, mat<3, 3, T, Q> const& m);
133 
134  template<typename T, qualifier Q>
135  GLM_FUNC_DECL mat<3, 3, T, Q> operator-(mat<3, 3, T, Q> const& m1, mat<3, 3, T, Q> const& m2);
136 
137  template<typename T, qualifier Q>
138  GLM_FUNC_DECL mat<3, 3, T, Q> operator*(mat<3, 3, T, Q> const& m, T scalar);
139 
140  template<typename T, qualifier Q>
141  GLM_FUNC_DECL mat<3, 3, T, Q> operator*(T scalar, mat<3, 3, T, Q> const& m);
142 
143  template<typename T, qualifier Q>
144  GLM_FUNC_DECL typename mat<3, 3, T, Q>::col_type operator*(mat<3, 3, T, Q> const& m, typename mat<3, 3, T, Q>::row_type const& v);
145 
146  template<typename T, qualifier Q>
147  GLM_FUNC_DECL typename mat<3, 3, T, Q>::row_type operator*(typename mat<3, 3, T, Q>::col_type const& v, mat<3, 3, T, Q> const& m);
148 
149  template<typename T, qualifier Q>
150  GLM_FUNC_DECL mat<3, 3, T, Q> operator*(mat<3, 3, T, Q> const& m1, mat<3, 3, T, Q> const& m2);
151 
152  template<typename T, qualifier Q>
153  GLM_FUNC_DECL mat<2, 3, T, Q> operator*(mat<3, 3, T, Q> const& m1, mat<2, 3, T, Q> const& m2);
154 
155  template<typename T, qualifier Q>
156  GLM_FUNC_DECL mat<4, 3, T, Q> operator*(mat<3, 3, T, Q> const& m1, mat<4, 3, T, Q> const& m2);
157 
158  template<typename T, qualifier Q>
159  GLM_FUNC_DECL mat<3, 3, T, Q> operator/(mat<3, 3, T, Q> const& m, T scalar);
160 
161  template<typename T, qualifier Q>
162  GLM_FUNC_DECL mat<3, 3, T, Q> operator/(T scalar, mat<3, 3, T, Q> const& m);
163 
164  template<typename T, qualifier Q>
165  GLM_FUNC_DECL typename mat<3, 3, T, Q>::col_type operator/(mat<3, 3, T, Q> const& m, typename mat<3, 3, T, Q>::row_type const& v);
166 
167  template<typename T, qualifier Q>
168  GLM_FUNC_DECL typename mat<3, 3, T, Q>::row_type operator/(typename mat<3, 3, T, Q>::col_type const& v, mat<3, 3, T, Q> const& m);
169 
170  template<typename T, qualifier Q>
171  GLM_FUNC_DECL mat<3, 3, T, Q> operator/(mat<3, 3, T, Q> const& m1, mat<3, 3, T, Q> const& m2);
172 
173  // -- Boolean operators --
174 
175  template<typename T, qualifier Q>
176  GLM_FUNC_DECL GLM_CONSTEXPR bool operator==(mat<3, 3, T, Q> const& m1, mat<3, 3, T, Q> const& m2);
177 
178  template<typename T, qualifier Q>
179  GLM_FUNC_DECL bool operator!=(mat<3, 3, T, Q> const& m1, mat<3, 3, T, Q> const& m2);
180 }//namespace glm
181 
182 #ifndef GLM_EXTERNAL_TEMPLATE
183 #include "type_mat3x3.inl"
184 #endif
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00170.html ================================================ 0.9.9 API documentation: type_mat3x4.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00170_source.html ================================================ 0.9.9 API documentation: type_mat3x4.hpp Source File
1 
4 #pragma once
5 
6 #include "type_vec3.hpp"
7 #include "type_vec4.hpp"
8 #include <limits>
9 #include <cstddef>
10 
11 namespace glm
12 {
13  template<typename T, qualifier Q>
14  struct mat<3, 4, T, Q>
15  {
16  typedef vec<4, T, Q> col_type;
17  typedef vec<3, T, Q> row_type;
18  typedef mat<3, 4, T, Q> type;
19  typedef mat<4, 3, T, Q> transpose_type;
20  typedef T value_type;
21 
22  private:
23  col_type value[3];
24 
25  public:
26  // -- Accesses --
27 
28  typedef length_t length_type;
29  GLM_FUNC_DECL static GLM_CONSTEXPR length_type length() { return 3; }
30 
31  GLM_FUNC_DECL col_type & operator[](length_type i);
32  GLM_FUNC_DECL GLM_CONSTEXPR col_type const& operator[](length_type i) const;
33 
34  // -- Constructors --
35 
36  GLM_FUNC_DECL GLM_CONSTEXPR mat() GLM_DEFAULT;
37  template<qualifier P>
38  GLM_FUNC_DECL GLM_CONSTEXPR mat(mat<3, 4, T, P> const& m);
39 
40  GLM_FUNC_DECL explicit GLM_CONSTEXPR mat(T scalar);
41  GLM_FUNC_DECL GLM_CONSTEXPR mat(
42  T x0, T y0, T z0, T w0,
43  T x1, T y1, T z1, T w1,
44  T x2, T y2, T z2, T w2);
45  GLM_FUNC_DECL GLM_CONSTEXPR mat(
46  col_type const& v0,
47  col_type const& v1,
48  col_type const& v2);
49 
50  // -- Conversions --
51 
52  template<
53  typename X1, typename Y1, typename Z1, typename W1,
54  typename X2, typename Y2, typename Z2, typename W2,
55  typename X3, typename Y3, typename Z3, typename W3>
56  GLM_FUNC_DECL GLM_CONSTEXPR mat(
57  X1 x1, Y1 y1, Z1 z1, W1 w1,
58  X2 x2, Y2 y2, Z2 z2, W2 w2,
59  X3 x3, Y3 y3, Z3 z3, W3 w3);
60 
61  template<typename V1, typename V2, typename V3>
62  GLM_FUNC_DECL GLM_CONSTEXPR mat(
63  vec<4, V1, Q> const& v1,
64  vec<4, V2, Q> const& v2,
65  vec<4, V3, Q> const& v3);
66 
67  // -- Matrix conversions --
68 
69  template<typename U, qualifier P>
70  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 4, U, P> const& m);
71 
72  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 2, T, Q> const& x);
73  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 3, T, Q> const& x);
74  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 4, T, Q> const& x);
75  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 3, T, Q> const& x);
76  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 2, T, Q> const& x);
77  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 4, T, Q> const& x);
78  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 2, T, Q> const& x);
79  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 3, T, Q> const& x);
80 
81  // -- Unary arithmetic operators --
82 
83  template<typename U>
84  GLM_FUNC_DECL mat<3, 4, T, Q> & operator=(mat<3, 4, U, Q> const& m);
85  template<typename U>
86  GLM_FUNC_DECL mat<3, 4, T, Q> & operator+=(U s);
87  template<typename U>
88  GLM_FUNC_DECL mat<3, 4, T, Q> & operator+=(mat<3, 4, U, Q> const& m);
89  template<typename U>
90  GLM_FUNC_DECL mat<3, 4, T, Q> & operator-=(U s);
91  template<typename U>
92  GLM_FUNC_DECL mat<3, 4, T, Q> & operator-=(mat<3, 4, U, Q> const& m);
93  template<typename U>
94  GLM_FUNC_DECL mat<3, 4, T, Q> & operator*=(U s);
95  template<typename U>
96  GLM_FUNC_DECL mat<3, 4, T, Q> & operator/=(U s);
97 
98  // -- Increment and decrement operators --
99 
100  GLM_FUNC_DECL mat<3, 4, T, Q> & operator++();
101  GLM_FUNC_DECL mat<3, 4, T, Q> & operator--();
102  GLM_FUNC_DECL mat<3, 4, T, Q> operator++(int);
103  GLM_FUNC_DECL mat<3, 4, T, Q> operator--(int);
104  };
105 
106  // -- Unary operators --
107 
108  template<typename T, qualifier Q>
109  GLM_FUNC_DECL mat<3, 4, T, Q> operator+(mat<3, 4, T, Q> const& m);
110 
111  template<typename T, qualifier Q>
112  GLM_FUNC_DECL mat<3, 4, T, Q> operator-(mat<3, 4, T, Q> const& m);
113 
114  // -- Binary operators --
115 
116  template<typename T, qualifier Q>
117  GLM_FUNC_DECL mat<3, 4, T, Q> operator+(mat<3, 4, T, Q> const& m, T scalar);
118 
119  template<typename T, qualifier Q>
120  GLM_FUNC_DECL mat<3, 4, T, Q> operator+(mat<3, 4, T, Q> const& m1, mat<3, 4, T, Q> const& m2);
121 
122  template<typename T, qualifier Q>
123  GLM_FUNC_DECL mat<3, 4, T, Q> operator-(mat<3, 4, T, Q> const& m, T scalar);
124 
125  template<typename T, qualifier Q>
126  GLM_FUNC_DECL mat<3, 4, T, Q> operator-(mat<3, 4, T, Q> const& m1, mat<3, 4, T, Q> const& m2);
127 
128  template<typename T, qualifier Q>
129  GLM_FUNC_DECL mat<3, 4, T, Q> operator*(mat<3, 4, T, Q> const& m, T scalar);
130 
131  template<typename T, qualifier Q>
132  GLM_FUNC_DECL mat<3, 4, T, Q> operator*(T scalar, mat<3, 4, T, Q> const& m);
133 
134  template<typename T, qualifier Q>
135  GLM_FUNC_DECL typename mat<3, 4, T, Q>::col_type operator*(mat<3, 4, T, Q> const& m, typename mat<3, 4, T, Q>::row_type const& v);
136 
137  template<typename T, qualifier Q>
138  GLM_FUNC_DECL typename mat<3, 4, T, Q>::row_type operator*(typename mat<3, 4, T, Q>::col_type const& v, mat<3, 4, T, Q> const& m);
139 
140  template<typename T, qualifier Q>
141  GLM_FUNC_DECL mat<4, 4, T, Q> operator*(mat<3, 4, T, Q> const& m1, mat<4, 3, T, Q> const& m2);
142 
143  template<typename T, qualifier Q>
144  GLM_FUNC_DECL mat<2, 4, T, Q> operator*(mat<3, 4, T, Q> const& m1, mat<2, 3, T, Q> const& m2);
145 
146  template<typename T, qualifier Q>
147  GLM_FUNC_DECL mat<3, 4, T, Q> operator*(mat<3, 4, T, Q> const& m1, mat<3, 3, T, Q> const& m2);
148 
149  template<typename T, qualifier Q>
150  GLM_FUNC_DECL mat<3, 4, T, Q> operator/(mat<3, 4, T, Q> const& m, T scalar);
151 
152  template<typename T, qualifier Q>
153  GLM_FUNC_DECL mat<3, 4, T, Q> operator/(T scalar, mat<3, 4, T, Q> const& m);
154 
155  // -- Boolean operators --
156 
157  template<typename T, qualifier Q>
158  GLM_FUNC_DECL bool operator==(mat<3, 4, T, Q> const& m1, mat<3, 4, T, Q> const& m2);
159 
160  template<typename T, qualifier Q>
161  GLM_FUNC_DECL bool operator!=(mat<3, 4, T, Q> const& m1, mat<3, 4, T, Q> const& m2);
162 }//namespace glm
163 
164 #ifndef GLM_EXTERNAL_TEMPLATE
165 #include "type_mat3x4.inl"
166 #endif
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00171.html ================================================ 0.9.9 API documentation: type_mat4x2.hpp File Reference
0.9.9 API documentation
type_mat4x2.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00171_source.html ================================================ 0.9.9 API documentation: type_mat4x2.hpp Source File
type_mat4x2.hpp
1 
4 #pragma once
5 
6 #include "type_vec2.hpp"
7 #include "type_vec4.hpp"
8 #include <limits>
9 #include <cstddef>
10 
11 namespace glm
12 {
13  template<typename T, qualifier Q>
14  struct mat<4, 2, T, Q>
15  {
16  typedef vec<2, T, Q> col_type;
17  typedef vec<4, T, Q> row_type;
18  typedef mat<4, 2, T, Q> type;
19  typedef mat<2, 4, T, Q> transpose_type;
20  typedef T value_type;
21 
22  private:
23  col_type value[4];
24 
25  public:
26  // -- Accesses --
27 
28  typedef length_t length_type;
29  GLM_FUNC_DECL static GLM_CONSTEXPR length_type length() { return 4; }
30 
31  GLM_FUNC_DECL col_type & operator[](length_type i);
32  GLM_FUNC_DECL GLM_CONSTEXPR col_type const& operator[](length_type i) const;
33 
34  // -- Constructors --
35 
36  GLM_FUNC_DECL GLM_CONSTEXPR mat() GLM_DEFAULT;
37  template<qualifier P>
38  GLM_FUNC_DECL GLM_CONSTEXPR mat(mat<4, 2, T, P> const& m);
39 
40  GLM_FUNC_DECL explicit GLM_CONSTEXPR mat(T scalar);
41  GLM_FUNC_DECL GLM_CONSTEXPR mat(
42  T x0, T y0,
43  T x1, T y1,
44  T x2, T y2,
45  T x3, T y3);
46  GLM_FUNC_DECL GLM_CONSTEXPR mat(
47  col_type const& v0,
48  col_type const& v1,
49  col_type const& v2,
50  col_type const& v3);
51 
52  // -- Conversions --
53 
54  template<
55  typename X0, typename Y0,
56  typename X1, typename Y1,
57  typename X2, typename Y2,
58  typename X3, typename Y3>
59  GLM_FUNC_DECL GLM_CONSTEXPR mat(
60  X0 x0, Y0 y0,
61  X1 x1, Y1 y1,
62  X2 x2, Y2 y2,
63  X3 x3, Y3 y3);
64 
65  template<typename V1, typename V2, typename V3, typename V4>
66  GLM_FUNC_DECL GLM_CONSTEXPR mat(
67  vec<2, V1, Q> const& v1,
68  vec<2, V2, Q> const& v2,
69  vec<2, V3, Q> const& v3,
70  vec<2, V4, Q> const& v4);
71 
72  // -- Matrix conversions --
73 
74  template<typename U, qualifier P>
75  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 2, U, P> const& m);
76 
77  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 2, T, Q> const& x);
78  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 3, T, Q> const& x);
79  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 4, T, Q> const& x);
80  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 3, T, Q> const& x);
81  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 2, T, Q> const& x);
82  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 4, T, Q> const& x);
83  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 3, T, Q> const& x);
84  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 4, T, Q> const& x);
85 
86  // -- Unary arithmetic operators --
87 
88  template<typename U>
89  GLM_FUNC_DECL mat<4, 2, T, Q> & operator=(mat<4, 2, U, Q> const& m);
90  template<typename U>
91  GLM_FUNC_DECL mat<4, 2, T, Q> & operator+=(U s);
92  template<typename U>
93  GLM_FUNC_DECL mat<4, 2, T, Q> & operator+=(mat<4, 2, U, Q> const& m);
94  template<typename U>
95  GLM_FUNC_DECL mat<4, 2, T, Q> & operator-=(U s);
96  template<typename U>
97  GLM_FUNC_DECL mat<4, 2, T, Q> & operator-=(mat<4, 2, U, Q> const& m);
98  template<typename U>
99  GLM_FUNC_DECL mat<4, 2, T, Q> & operator*=(U s);
100  template<typename U>
101  GLM_FUNC_DECL mat<4, 2, T, Q> & operator/=(U s);
102 
103  // -- Increment and decrement operators --
104 
105  GLM_FUNC_DECL mat<4, 2, T, Q> & operator++();
106  GLM_FUNC_DECL mat<4, 2, T, Q> & operator--();
107  GLM_FUNC_DECL mat<4, 2, T, Q> operator++(int);
108  GLM_FUNC_DECL mat<4, 2, T, Q> operator--(int);
109  };
110 
111  // -- Unary operators --
112 
113  template<typename T, qualifier Q>
114  GLM_FUNC_DECL mat<4, 2, T, Q> operator+(mat<4, 2, T, Q> const& m);
115 
116  template<typename T, qualifier Q>
117  GLM_FUNC_DECL mat<4, 2, T, Q> operator-(mat<4, 2, T, Q> const& m);
118 
119  // -- Binary operators --
120 
121  template<typename T, qualifier Q>
122  GLM_FUNC_DECL mat<4, 2, T, Q> operator+(mat<4, 2, T, Q> const& m, T scalar);
123 
124  template<typename T, qualifier Q>
125  GLM_FUNC_DECL mat<4, 2, T, Q> operator+(mat<4, 2, T, Q> const& m1, mat<4, 2, T, Q> const& m2);
126 
127  template<typename T, qualifier Q>
128  GLM_FUNC_DECL mat<4, 2, T, Q> operator-(mat<4, 2, T, Q> const& m, T scalar);
129 
130  template<typename T, qualifier Q>
131  GLM_FUNC_DECL mat<4, 2, T, Q> operator-(mat<4, 2, T, Q> const& m1, mat<4, 2, T, Q> const& m2);
132 
133  template<typename T, qualifier Q>
134  GLM_FUNC_DECL mat<4, 2, T, Q> operator*(mat<4, 2, T, Q> const& m, T scalar);
135 
136  template<typename T, qualifier Q>
137  GLM_FUNC_DECL mat<4, 2, T, Q> operator*(T scalar, mat<4, 2, T, Q> const& m);
138 
139  template<typename T, qualifier Q>
140  GLM_FUNC_DECL typename mat<4, 2, T, Q>::col_type operator*(mat<4, 2, T, Q> const& m, typename mat<4, 2, T, Q>::row_type const& v);
141 
142  template<typename T, qualifier Q>
143  GLM_FUNC_DECL typename mat<4, 2, T, Q>::row_type operator*(typename mat<4, 2, T, Q>::col_type const& v, mat<4, 2, T, Q> const& m);
144 
145  template<typename T, qualifier Q>
146  GLM_FUNC_DECL mat<2, 2, T, Q> operator*(mat<4, 2, T, Q> const& m1, mat<2, 4, T, Q> const& m2);
147 
148  template<typename T, qualifier Q>
149  GLM_FUNC_DECL mat<3, 2, T, Q> operator*(mat<4, 2, T, Q> const& m1, mat<3, 4, T, Q> const& m2);
150 
151  template<typename T, qualifier Q>
152  GLM_FUNC_DECL mat<4, 2, T, Q> operator*(mat<4, 2, T, Q> const& m1, mat<4, 4, T, Q> const& m2);
153 
154  template<typename T, qualifier Q>
155  GLM_FUNC_DECL mat<4, 2, T, Q> operator/(mat<4, 2, T, Q> const& m, T scalar);
156 
157  template<typename T, qualifier Q>
158  GLM_FUNC_DECL mat<4, 2, T, Q> operator/(T scalar, mat<4, 2, T, Q> const& m);
159 
160  // -- Boolean operators --
161 
162  template<typename T, qualifier Q>
163  GLM_FUNC_DECL bool operator==(mat<4, 2, T, Q> const& m1, mat<4, 2, T, Q> const& m2);
164 
165  template<typename T, qualifier Q>
166  GLM_FUNC_DECL bool operator!=(mat<4, 2, T, Q> const& m1, mat<4, 2, T, Q> const& m2);
167 }//namespace glm
168 
169 #ifndef GLM_EXTERNAL_TEMPLATE
170 #include "type_mat4x2.inl"
171 #endif
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00172.html ================================================ 0.9.9 API documentation: type_mat4x3.hpp File Reference
type_mat4x3.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00172_source.html ================================================ 0.9.9 API documentation: type_mat4x3.hpp Source File
type_mat4x3.hpp
1 
4 #pragma once
5 
6 #include "type_vec3.hpp"
7 #include "type_vec4.hpp"
8 #include <limits>
9 #include <cstddef>
10 
11 namespace glm
12 {
13  template<typename T, qualifier Q>
14  struct mat<4, 3, T, Q>
15  {
16  typedef vec<3, T, Q> col_type;
17  typedef vec<4, T, Q> row_type;
18  typedef mat<4, 3, T, Q> type;
19  typedef mat<3, 4, T, Q> transpose_type;
20  typedef T value_type;
21 
22  private:
23  col_type value[4];
24 
25  public:
26  // -- Accesses --
27 
28  typedef length_t length_type;
29  GLM_FUNC_DECL static GLM_CONSTEXPR length_type length() { return 4; }
30 
31  GLM_FUNC_DECL col_type & operator[](length_type i);
32  GLM_FUNC_DECL GLM_CONSTEXPR col_type const& operator[](length_type i) const;
33 
34  // -- Constructors --
35 
36  GLM_FUNC_DECL GLM_CONSTEXPR mat() GLM_DEFAULT;
37  template<qualifier P>
38  GLM_FUNC_DECL GLM_CONSTEXPR mat(mat<4, 3, T, P> const& m);
39 
40  GLM_FUNC_DECL explicit GLM_CONSTEXPR mat(T const& x);
41  GLM_FUNC_DECL GLM_CONSTEXPR mat(
42  T const& x0, T const& y0, T const& z0,
43  T const& x1, T const& y1, T const& z1,
44  T const& x2, T const& y2, T const& z2,
45  T const& x3, T const& y3, T const& z3);
46  GLM_FUNC_DECL GLM_CONSTEXPR mat(
47  col_type const& v0,
48  col_type const& v1,
49  col_type const& v2,
50  col_type const& v3);
51 
52  // -- Conversions --
53 
54  template<
55  typename X1, typename Y1, typename Z1,
56  typename X2, typename Y2, typename Z2,
57  typename X3, typename Y3, typename Z3,
58  typename X4, typename Y4, typename Z4>
59  GLM_FUNC_DECL GLM_CONSTEXPR mat(
60  X1 const& x1, Y1 const& y1, Z1 const& z1,
61  X2 const& x2, Y2 const& y2, Z2 const& z2,
62  X3 const& x3, Y3 const& y3, Z3 const& z3,
63  X4 const& x4, Y4 const& y4, Z4 const& z4);
64 
65  template<typename V1, typename V2, typename V3, typename V4>
66  GLM_FUNC_DECL GLM_CONSTEXPR mat(
67  vec<3, V1, Q> const& v1,
68  vec<3, V2, Q> const& v2,
69  vec<3, V3, Q> const& v3,
70  vec<3, V4, Q> const& v4);
71 
72  // -- Matrix conversions --
73 
74  template<typename U, qualifier P>
75  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 3, U, P> const& m);
76 
77  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 2, T, Q> const& x);
78  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 3, T, Q> const& x);
79  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 4, T, Q> const& x);
80  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 3, T, Q> const& x);
81  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 2, T, Q> const& x);
82  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 4, T, Q> const& x);
83  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 2, T, Q> const& x);
84  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 4, T, Q> const& x);
85 
86  // -- Unary arithmetic operators --
87 
88  template<typename U>
89  GLM_FUNC_DECL mat<4, 3, T, Q> & operator=(mat<4, 3, U, Q> const& m);
90  template<typename U>
91  GLM_FUNC_DECL mat<4, 3, T, Q> & operator+=(U s);
92  template<typename U>
93  GLM_FUNC_DECL mat<4, 3, T, Q> & operator+=(mat<4, 3, U, Q> const& m);
94  template<typename U>
95  GLM_FUNC_DECL mat<4, 3, T, Q> & operator-=(U s);
96  template<typename U>
97  GLM_FUNC_DECL mat<4, 3, T, Q> & operator-=(mat<4, 3, U, Q> const& m);
98  template<typename U>
99  GLM_FUNC_DECL mat<4, 3, T, Q> & operator*=(U s);
100  template<typename U>
101  GLM_FUNC_DECL mat<4, 3, T, Q> & operator/=(U s);
102 
103  // -- Increment and decrement operators --
104 
105  GLM_FUNC_DECL mat<4, 3, T, Q>& operator++();
106  GLM_FUNC_DECL mat<4, 3, T, Q>& operator--();
107  GLM_FUNC_DECL mat<4, 3, T, Q> operator++(int);
108  GLM_FUNC_DECL mat<4, 3, T, Q> operator--(int);
109  };
110 
111  // -- Unary operators --
112 
113  template<typename T, qualifier Q>
114  GLM_FUNC_DECL mat<4, 3, T, Q> operator+(mat<4, 3, T, Q> const& m);
115 
116  template<typename T, qualifier Q>
117  GLM_FUNC_DECL mat<4, 3, T, Q> operator-(mat<4, 3, T, Q> const& m);
118 
119  // -- Binary operators --
120 
121  template<typename T, qualifier Q>
122  GLM_FUNC_DECL mat<4, 3, T, Q> operator+(mat<4, 3, T, Q> const& m, T const& s);
123 
124  template<typename T, qualifier Q>
125  GLM_FUNC_DECL mat<4, 3, T, Q> operator+(mat<4, 3, T, Q> const& m1, mat<4, 3, T, Q> const& m2);
126 
127  template<typename T, qualifier Q>
128  GLM_FUNC_DECL mat<4, 3, T, Q> operator-(mat<4, 3, T, Q> const& m, T const& s);
129 
130  template<typename T, qualifier Q>
131  GLM_FUNC_DECL mat<4, 3, T, Q> operator-(mat<4, 3, T, Q> const& m1, mat<4, 3, T, Q> const& m2);
132 
133  template<typename T, qualifier Q>
134  GLM_FUNC_DECL mat<4, 3, T, Q> operator*(mat<4, 3, T, Q> const& m, T const& s);
135 
136  template<typename T, qualifier Q>
137  GLM_FUNC_DECL mat<4, 3, T, Q> operator*(T const& s, mat<4, 3, T, Q> const& m);
138 
139  template<typename T, qualifier Q>
140  GLM_FUNC_DECL typename mat<4, 3, T, Q>::col_type operator*(mat<4, 3, T, Q> const& m, typename mat<4, 3, T, Q>::row_type const& v);
141 
142  template<typename T, qualifier Q>
143  GLM_FUNC_DECL typename mat<4, 3, T, Q>::row_type operator*(typename mat<4, 3, T, Q>::col_type const& v, mat<4, 3, T, Q> const& m);
144 
145  template<typename T, qualifier Q>
146  GLM_FUNC_DECL mat<2, 3, T, Q> operator*(mat<4, 3, T, Q> const& m1, mat<2, 4, T, Q> const& m2);
147 
148  template<typename T, qualifier Q>
149  GLM_FUNC_DECL mat<3, 3, T, Q> operator*(mat<4, 3, T, Q> const& m1, mat<3, 4, T, Q> const& m2);
150 
151  template<typename T, qualifier Q>
152  GLM_FUNC_DECL mat<4, 3, T, Q> operator*(mat<4, 3, T, Q> const& m1, mat<4, 4, T, Q> const& m2);
153 
154  template<typename T, qualifier Q>
155  GLM_FUNC_DECL mat<4, 3, T, Q> operator/(mat<4, 3, T, Q> const& m, T const& s);
156 
157  template<typename T, qualifier Q>
158  GLM_FUNC_DECL mat<4, 3, T, Q> operator/(T const& s, mat<4, 3, T, Q> const& m);
159 
160  // -- Boolean operators --
161 
162  template<typename T, qualifier Q>
163  GLM_FUNC_DECL bool operator==(mat<4, 3, T, Q> const& m1, mat<4, 3, T, Q> const& m2);
164 
165  template<typename T, qualifier Q>
166  GLM_FUNC_DECL bool operator!=(mat<4, 3, T, Q> const& m1, mat<4, 3, T, Q> const& m2);
167 }//namespace glm
168 
169 #ifndef GLM_EXTERNAL_TEMPLATE
170 #include "type_mat4x3.inl"
171 #endif //GLM_EXTERNAL_TEMPLATE
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00173.html ================================================ 0.9.9 API documentation: type_mat4x4.hpp File Reference
type_mat4x4.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00173_source.html ================================================ 0.9.9 API documentation: type_mat4x4.hpp Source File
type_mat4x4.hpp
1 
4 #pragma once
5 
6 #include "type_vec4.hpp"
7 #include <limits>
8 #include <cstddef>
9 
10 namespace glm
11 {
12  template<typename T, qualifier Q>
13  struct mat<4, 4, T, Q>
14  {
15  typedef vec<4, T, Q> col_type;
16  typedef vec<4, T, Q> row_type;
17  typedef mat<4, 4, T, Q> type;
18  typedef mat<4, 4, T, Q> transpose_type;
19  typedef T value_type;
20 
21  private:
22  col_type value[4];
23 
24  public:
25  // -- Accesses --
26 
27  typedef length_t length_type;
28  GLM_FUNC_DECL static GLM_CONSTEXPR length_type length() { return 4; }
29 
30  GLM_FUNC_DECL col_type & operator[](length_type i);
31  GLM_FUNC_DECL GLM_CONSTEXPR col_type const& operator[](length_type i) const;
32 
33  // -- Constructors --
34 
35  GLM_FUNC_DECL GLM_CONSTEXPR mat() GLM_DEFAULT;
36  template<qualifier P>
37  GLM_FUNC_DECL GLM_CONSTEXPR mat(mat<4, 4, T, P> const& m);
38 
39  GLM_FUNC_DECL explicit GLM_CONSTEXPR mat(T const& x);
40  GLM_FUNC_DECL GLM_CONSTEXPR mat(
41  T const& x0, T const& y0, T const& z0, T const& w0,
42  T const& x1, T const& y1, T const& z1, T const& w1,
43  T const& x2, T const& y2, T const& z2, T const& w2,
44  T const& x3, T const& y3, T const& z3, T const& w3);
45  GLM_FUNC_DECL GLM_CONSTEXPR mat(
46  col_type const& v0,
47  col_type const& v1,
48  col_type const& v2,
49  col_type const& v3);
50 
51  // -- Conversions --
52 
53  template<
54  typename X1, typename Y1, typename Z1, typename W1,
55  typename X2, typename Y2, typename Z2, typename W2,
56  typename X3, typename Y3, typename Z3, typename W3,
57  typename X4, typename Y4, typename Z4, typename W4>
58  GLM_FUNC_DECL GLM_CONSTEXPR mat(
59  X1 const& x1, Y1 const& y1, Z1 const& z1, W1 const& w1,
60  X2 const& x2, Y2 const& y2, Z2 const& z2, W2 const& w2,
61  X3 const& x3, Y3 const& y3, Z3 const& z3, W3 const& w3,
62  X4 const& x4, Y4 const& y4, Z4 const& z4, W4 const& w4);
63 
64  template<typename V1, typename V2, typename V3, typename V4>
65  GLM_FUNC_DECL GLM_CONSTEXPR mat(
66  vec<4, V1, Q> const& v1,
67  vec<4, V2, Q> const& v2,
68  vec<4, V3, Q> const& v3,
69  vec<4, V4, Q> const& v4);
70 
71  // -- Matrix conversions --
72 
73  template<typename U, qualifier P>
74  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 4, U, P> const& m);
75 
76  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 2, T, Q> const& x);
77  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 3, T, Q> const& x);
78  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 3, T, Q> const& x);
79  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 2, T, Q> const& x);
80  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<2, 4, T, Q> const& x);
81  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 2, T, Q> const& x);
82  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<3, 4, T, Q> const& x);
83  GLM_FUNC_DECL GLM_EXPLICIT GLM_CONSTEXPR mat(mat<4, 3, T, Q> const& x);
84 
85  // -- Unary arithmetic operators --
86 
87  template<typename U>
88  GLM_FUNC_DECL mat<4, 4, T, Q> & operator=(mat<4, 4, U, Q> const& m);
89  template<typename U>
90  GLM_FUNC_DECL mat<4, 4, T, Q> & operator+=(U s);
91  template<typename U>
92  GLM_FUNC_DECL mat<4, 4, T, Q> & operator+=(mat<4, 4, U, Q> const& m);
93  template<typename U>
94  GLM_FUNC_DECL mat<4, 4, T, Q> & operator-=(U s);
95  template<typename U>
96  GLM_FUNC_DECL mat<4, 4, T, Q> & operator-=(mat<4, 4, U, Q> const& m);
97  template<typename U>
98  GLM_FUNC_DECL mat<4, 4, T, Q> & operator*=(U s);
99  template<typename U>
100  GLM_FUNC_DECL mat<4, 4, T, Q> & operator*=(mat<4, 4, U, Q> const& m);
101  template<typename U>
102  GLM_FUNC_DECL mat<4, 4, T, Q> & operator/=(U s);
103  template<typename U>
104  GLM_FUNC_DECL mat<4, 4, T, Q> & operator/=(mat<4, 4, U, Q> const& m);
105 
106  // -- Increment and decrement operators --
107 
108  GLM_FUNC_DECL mat<4, 4, T, Q> & operator++();
109  GLM_FUNC_DECL mat<4, 4, T, Q> & operator--();
110  GLM_FUNC_DECL mat<4, 4, T, Q> operator++(int);
111  GLM_FUNC_DECL mat<4, 4, T, Q> operator--(int);
112  };
113 
114  // -- Unary operators --
115 
116  template<typename T, qualifier Q>
117  GLM_FUNC_DECL mat<4, 4, T, Q> operator+(mat<4, 4, T, Q> const& m);
118 
119  template<typename T, qualifier Q>
120  GLM_FUNC_DECL mat<4, 4, T, Q> operator-(mat<4, 4, T, Q> const& m);
121 
122  // -- Binary operators --
123 
124  template<typename T, qualifier Q>
125  GLM_FUNC_DECL mat<4, 4, T, Q> operator+(mat<4, 4, T, Q> const& m, T const& s);
126 
127  template<typename T, qualifier Q>
128  GLM_FUNC_DECL mat<4, 4, T, Q> operator+(T const& s, mat<4, 4, T, Q> const& m);
129 
130  template<typename T, qualifier Q>
131  GLM_FUNC_DECL mat<4, 4, T, Q> operator+(mat<4, 4, T, Q> const& m1, mat<4, 4, T, Q> const& m2);
132 
133  template<typename T, qualifier Q>
134  GLM_FUNC_DECL mat<4, 4, T, Q> operator-(mat<4, 4, T, Q> const& m, T const& s);
135 
136  template<typename T, qualifier Q>
137  GLM_FUNC_DECL mat<4, 4, T, Q> operator-(T const& s, mat<4, 4, T, Q> const& m);
138 
139  template<typename T, qualifier Q>
140  GLM_FUNC_DECL mat<4, 4, T, Q> operator-(mat<4, 4, T, Q> const& m1, mat<4, 4, T, Q> const& m2);
141 
142  template<typename T, qualifier Q>
143  GLM_FUNC_DECL mat<4, 4, T, Q> operator*(mat<4, 4, T, Q> const& m, T const& s);
144 
145  template<typename T, qualifier Q>
146  GLM_FUNC_DECL mat<4, 4, T, Q> operator*(T const& s, mat<4, 4, T, Q> const& m);
147 
148  template<typename T, qualifier Q>
149  GLM_FUNC_DECL typename mat<4, 4, T, Q>::col_type operator*(mat<4, 4, T, Q> const& m, typename mat<4, 4, T, Q>::row_type const& v);
150 
151  template<typename T, qualifier Q>
152  GLM_FUNC_DECL typename mat<4, 4, T, Q>::row_type operator*(typename mat<4, 4, T, Q>::col_type const& v, mat<4, 4, T, Q> const& m);
153 
154  template<typename T, qualifier Q>
155  GLM_FUNC_DECL mat<2, 4, T, Q> operator*(mat<4, 4, T, Q> const& m1, mat<2, 4, T, Q> const& m2);
156 
157  template<typename T, qualifier Q>
158  GLM_FUNC_DECL mat<3, 4, T, Q> operator*(mat<4, 4, T, Q> const& m1, mat<3, 4, T, Q> const& m2);
159 
160  template<typename T, qualifier Q>
161  GLM_FUNC_DECL mat<4, 4, T, Q> operator*(mat<4, 4, T, Q> const& m1, mat<4, 4, T, Q> const& m2);
162 
163  template<typename T, qualifier Q>
164  GLM_FUNC_DECL mat<4, 4, T, Q> operator/(mat<4, 4, T, Q> const& m, T const& s);
165 
166  template<typename T, qualifier Q>
167  GLM_FUNC_DECL mat<4, 4, T, Q> operator/(T const& s, mat<4, 4, T, Q> const& m);
168 
169  template<typename T, qualifier Q>
170  GLM_FUNC_DECL typename mat<4, 4, T, Q>::col_type operator/(mat<4, 4, T, Q> const& m, typename mat<4, 4, T, Q>::row_type const& v);
171 
172  template<typename T, qualifier Q>
173  GLM_FUNC_DECL typename mat<4, 4, T, Q>::row_type operator/(typename mat<4, 4, T, Q>::col_type const& v, mat<4, 4, T, Q> const& m);
174 
175  template<typename T, qualifier Q>
176  GLM_FUNC_DECL mat<4, 4, T, Q> operator/(mat<4, 4, T, Q> const& m1, mat<4, 4, T, Q> const& m2);
177 
178  // -- Boolean operators --
179 
180  template<typename T, qualifier Q>
181  GLM_FUNC_DECL bool operator==(mat<4, 4, T, Q> const& m1, mat<4, 4, T, Q> const& m2);
182 
183  template<typename T, qualifier Q>
184  GLM_FUNC_DECL bool operator!=(mat<4, 4, T, Q> const& m1, mat<4, 4, T, Q> const& m2);
185 }//namespace glm
186 
187 #ifndef GLM_EXTERNAL_TEMPLATE
188 #include "type_mat4x4.inl"
189 #endif//GLM_EXTERNAL_TEMPLATE
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00174.html ================================================ 0.9.9 API documentation: type_precision.hpp File Reference
type_precision.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00174_source.html ================================================ 0.9.9 API documentation: type_precision.hpp Source File
type_precision.hpp
1 
14 #pragma once
15 
16 // Dependency:
17 #include "../gtc/quaternion.hpp"
18 #include "../gtc/vec1.hpp"
19 #include "../ext/scalar_int_sized.hpp"
20 #include "../ext/scalar_uint_sized.hpp"
21 #include "../detail/type_vec2.hpp"
22 #include "../detail/type_vec3.hpp"
23 #include "../detail/type_vec4.hpp"
24 #include "../detail/type_mat2x2.hpp"
25 #include "../detail/type_mat2x3.hpp"
26 #include "../detail/type_mat2x4.hpp"
27 #include "../detail/type_mat3x2.hpp"
28 #include "../detail/type_mat3x3.hpp"
29 #include "../detail/type_mat3x4.hpp"
30 #include "../detail/type_mat4x2.hpp"
31 #include "../detail/type_mat4x3.hpp"
32 #include "../detail/type_mat4x4.hpp"
33 
34 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
35 # pragma message("GLM: GLM_GTC_type_precision extension included")
36 #endif
37 
38 namespace glm
39 {
41  // Signed int vector types
42 
45 
48  typedef detail::int8 lowp_int8;
49 
52  typedef detail::int16 lowp_int16;
53 
56  typedef detail::int32 lowp_int32;
57 
60  typedef detail::int64 lowp_int64;
61 
64  typedef detail::int8 lowp_int8_t;
65 
68  typedef detail::int16 lowp_int16_t;
69 
72  typedef detail::int32 lowp_int32_t;
73 
77 
80  typedef detail::int8 lowp_i8;
81 
84  typedef detail::int16 lowp_i16;
85 
88  typedef detail::int32 lowp_i32;
89 
92  typedef detail::int64 lowp_i64;
93 
96  typedef detail::int8 mediump_int8;
97 
100  typedef detail::int16 mediump_int16;
101 
104  typedef detail::int32 mediump_int32;
105 
109 
112  typedef detail::int8 mediump_int8_t;
113 
116  typedef detail::int16 mediump_int16_t;
117 
120  typedef detail::int32 mediump_int32_t;
121 
125 
128  typedef detail::int8 mediump_i8;
129 
132  typedef detail::int16 mediump_i16;
133 
136  typedef detail::int32 mediump_i32;
137 
140  typedef detail::int64 mediump_i64;
141 
144  typedef detail::int8 highp_int8;
145 
148  typedef detail::int16 highp_int16;
149 
152  typedef detail::int32 highp_int32;
153 
156  typedef detail::int64 highp_int64;
157 
160  typedef detail::int8 highp_int8_t;
161 
164  typedef detail::int16 highp_int16_t;
165 
168  typedef detail::int32 highp_int32_t;
169 
173 
176  typedef detail::int8 highp_i8;
177 
180  typedef detail::int16 highp_i16;
181 
184  typedef detail::int32 highp_i32;
185 
188  typedef detail::int64 highp_i64;
189 
190 
191 #if GLM_HAS_EXTENDED_INTEGER_TYPE
192  using std::int8_t;
193  using std::int16_t;
194  using std::int32_t;
195  using std::int64_t;
196 #else
197  typedef detail::int8 int8_t;
200 
203  typedef detail::int16 int16_t;
204 
207  typedef detail::int32 int32_t;
208 
211  typedef detail::int64 int64_t;
212 #endif
213 
216  typedef detail::int8 i8;
217 
220  typedef detail::int16 i16;
221 
224  typedef detail::int32 i32;
225 
228  typedef detail::int64 i64;
229 
230 
231 
234  typedef vec<1, i8, lowp> lowp_i8vec1;
235 
238  typedef vec<2, i8, lowp> lowp_i8vec2;
239 
242  typedef vec<3, i8, lowp> lowp_i8vec3;
243 
246  typedef vec<4, i8, lowp> lowp_i8vec4;
247 
248 
251  typedef vec<1, i8, mediump> mediump_i8vec1;
252 
255  typedef vec<2, i8, mediump> mediump_i8vec2;
256 
259  typedef vec<3, i8, mediump> mediump_i8vec3;
260 
263  typedef vec<4, i8, mediump> mediump_i8vec4;
264 
265 
268  typedef vec<1, i8, highp> highp_i8vec1;
269 
272  typedef vec<2, i8, highp> highp_i8vec2;
273 
276  typedef vec<3, i8, highp> highp_i8vec3;
277 
280  typedef vec<4, i8, highp> highp_i8vec4;
281 
282 
283 
286  typedef vec<1, i8, defaultp> i8vec1;
287 
290  typedef vec<2, i8, defaultp> i8vec2;
291 
294  typedef vec<3, i8, defaultp> i8vec3;
295 
298  typedef vec<4, i8, defaultp> i8vec4;
299 
300 
301 
302 
303 
306  typedef vec<1, i16, lowp> lowp_i16vec1;
307 
310  typedef vec<2, i16, lowp> lowp_i16vec2;
311 
314  typedef vec<3, i16, lowp> lowp_i16vec3;
315 
318  typedef vec<4, i16, lowp> lowp_i16vec4;
319 
320 
323  typedef vec<1, i16, mediump> mediump_i16vec1;
324 
327  typedef vec<2, i16, mediump> mediump_i16vec2;
328 
331  typedef vec<3, i16, mediump> mediump_i16vec3;
332 
typedef vec<4, i16, mediump> mediump_i16vec4;

typedef vec<1, i16, highp> highp_i16vec1;
typedef vec<2, i16, highp> highp_i16vec2;
typedef vec<3, i16, highp> highp_i16vec3;
typedef vec<4, i16, highp> highp_i16vec4;

typedef vec<1, i16, defaultp> i16vec1;
typedef vec<2, i16, defaultp> i16vec2;
typedef vec<3, i16, defaultp> i16vec3;
typedef vec<4, i16, defaultp> i16vec4;

typedef vec<1, i32, lowp> lowp_i32vec1;
typedef vec<2, i32, lowp> lowp_i32vec2;
typedef vec<3, i32, lowp> lowp_i32vec3;
typedef vec<4, i32, lowp> lowp_i32vec4;

typedef vec<1, i32, mediump> mediump_i32vec1;
typedef vec<2, i32, mediump> mediump_i32vec2;
typedef vec<3, i32, mediump> mediump_i32vec3;
typedef vec<4, i32, mediump> mediump_i32vec4;

typedef vec<1, i32, highp> highp_i32vec1;
typedef vec<2, i32, highp> highp_i32vec2;
typedef vec<3, i32, highp> highp_i32vec3;
typedef vec<4, i32, highp> highp_i32vec4;

typedef vec<1, i32, defaultp> i32vec1;
typedef vec<2, i32, defaultp> i32vec2;
typedef vec<3, i32, defaultp> i32vec3;
typedef vec<4, i32, defaultp> i32vec4;

typedef vec<1, i64, lowp> lowp_i64vec1;
typedef vec<2, i64, lowp> lowp_i64vec2;
typedef vec<3, i64, lowp> lowp_i64vec3;
typedef vec<4, i64, lowp> lowp_i64vec4;

typedef vec<1, i64, mediump> mediump_i64vec1;
typedef vec<2, i64, mediump> mediump_i64vec2;
typedef vec<3, i64, mediump> mediump_i64vec3;
typedef vec<4, i64, mediump> mediump_i64vec4;

typedef vec<1, i64, highp> highp_i64vec1;
typedef vec<2, i64, highp> highp_i64vec2;
typedef vec<3, i64, highp> highp_i64vec3;
typedef vec<4, i64, highp> highp_i64vec4;

typedef vec<1, i64, defaultp> i64vec1;
typedef vec<2, i64, defaultp> i64vec2;
typedef vec<3, i64, defaultp> i64vec3;
typedef vec<4, i64, defaultp> i64vec4;

// Unsigned int scalar types

typedef detail::uint8 lowp_uint8;
typedef detail::uint16 lowp_uint16;
typedef detail::uint32 lowp_uint32;
typedef detail::uint64 lowp_uint64;

typedef detail::uint8 lowp_uint8_t;
typedef detail::uint16 lowp_uint16_t;
typedef detail::uint32 lowp_uint32_t;
typedef detail::uint64 lowp_uint64_t;

typedef detail::uint8 lowp_u8;
typedef detail::uint16 lowp_u16;
typedef detail::uint32 lowp_u32;
typedef detail::uint64 lowp_u64;

typedef detail::uint8 mediump_uint8;
typedef detail::uint16 mediump_uint16;
typedef detail::uint32 mediump_uint32;
typedef detail::uint64 mediump_uint64;

typedef detail::uint8 mediump_uint8_t;
typedef detail::uint16 mediump_uint16_t;
typedef detail::uint32 mediump_uint32_t;
typedef detail::uint64 mediump_uint64_t;

typedef detail::uint8 mediump_u8;
typedef detail::uint16 mediump_u16;
typedef detail::uint32 mediump_u32;
typedef detail::uint64 mediump_u64;

typedef detail::uint8 highp_uint8;
typedef detail::uint16 highp_uint16;
typedef detail::uint32 highp_uint32;
typedef detail::uint64 highp_uint64;

typedef detail::uint8 highp_uint8_t;
typedef detail::uint16 highp_uint16_t;
typedef detail::uint32 highp_uint32_t;
typedef detail::uint64 highp_uint64_t;

typedef detail::uint8 highp_u8;
typedef detail::uint16 highp_u16;
typedef detail::uint32 highp_u32;
typedef detail::uint64 highp_u64;

#if GLM_HAS_EXTENDED_INTEGER_TYPE
	using std::uint8_t;
	using std::uint16_t;
	using std::uint32_t;
	using std::uint64_t;
#else
	typedef detail::uint8 uint8_t;
	typedef detail::uint16 uint16_t;
	typedef detail::uint32 uint32_t;
	typedef detail::uint64 uint64_t;
#endif

typedef detail::uint8 u8;
typedef detail::uint16 u16;
typedef detail::uint32 u32;
typedef detail::uint64 u64;
// Float scalar types

typedef float float32;
typedef double float64;
typedef float32 lowp_float32;
typedef float64 lowp_float64;

typedef float32 lowp_float32_t;
typedef float64 lowp_float64_t;

typedef float32 lowp_f32;
typedef float64 lowp_f64;
787 
typedef float32 mediump_float32;
typedef float64 mediump_float64;

typedef float32 mediump_float32_t;
typedef float64 mediump_float64_t;

typedef float32 mediump_f32;
typedef float64 mediump_f64;

typedef float32 highp_float32;
typedef float64 highp_float64;

typedef float32 highp_float32_t;
typedef float64 highp_float64_t;

typedef float32 highp_f32;
typedef float64 highp_f64;
#if(defined(GLM_PRECISION_LOWP_FLOAT))
	typedef lowp_float32_t float32_t;
	typedef lowp_float64_t float64_t;
	typedef lowp_f32 f32;
	typedef lowp_f64 f64;
#elif(defined(GLM_PRECISION_MEDIUMP_FLOAT))
	typedef mediump_float32 float32_t;
	typedef mediump_float64 float64_t;
	typedef mediump_float32 f32;
	typedef mediump_float64 f64;
#else//(defined(GLM_PRECISION_HIGHP_FLOAT))
	typedef highp_float32_t float32_t;
	typedef highp_float64_t float64_t;
	typedef highp_float32_t f32;
	typedef highp_float64_t f64;
#endif

typedef vec<1, float, lowp> lowp_fvec1;
typedef vec<2, float, lowp> lowp_fvec2;
typedef vec<3, float, lowp> lowp_fvec3;
typedef vec<4, float, lowp> lowp_fvec4;

typedef vec<1, float, mediump> mediump_fvec1;
typedef vec<2, float, mediump> mediump_fvec2;
typedef vec<3, float, mediump> mediump_fvec3;
typedef vec<4, float, mediump> mediump_fvec4;

typedef vec<1, float, highp> highp_fvec1;
typedef vec<2, float, highp> highp_fvec2;
typedef vec<3, float, highp> highp_fvec3;
typedef vec<4, float, highp> highp_fvec4;

typedef vec<1, f32, lowp> lowp_f32vec1;
typedef vec<2, f32, lowp> lowp_f32vec2;
typedef vec<3, f32, lowp> lowp_f32vec3;
typedef vec<4, f32, lowp> lowp_f32vec4;

typedef vec<1, f32, mediump> mediump_f32vec1;
typedef vec<2, f32, mediump> mediump_f32vec2;
typedef vec<3, f32, mediump> mediump_f32vec3;
typedef vec<4, f32, mediump> mediump_f32vec4;

typedef vec<1, f32, highp> highp_f32vec1;
typedef vec<2, f32, highp> highp_f32vec2;
typedef vec<3, f32, highp> highp_f32vec3;
typedef vec<4, f32, highp> highp_f32vec4;

typedef vec<1, f64, lowp> lowp_f64vec1;
typedef vec<2, f64, lowp> lowp_f64vec2;
typedef vec<3, f64, lowp> lowp_f64vec3;
typedef vec<4, f64, lowp> lowp_f64vec4;

typedef vec<1, f64, mediump> mediump_f64vec1;
typedef vec<2, f64, mediump> mediump_f64vec2;
typedef vec<3, f64, mediump> mediump_f64vec3;
typedef vec<4, f64, mediump> mediump_f64vec4;

typedef vec<1, f64, highp> highp_f64vec1;
typedef vec<2, f64, highp> highp_f64vec2;
typedef vec<3, f64, highp> highp_f64vec3;
typedef vec<4, f64, highp> highp_f64vec4;

// Float matrix types

//typedef lowp_f32 lowp_fmat1x1;
typedef mat<2, 2, f32, lowp> lowp_fmat2x2;
typedef mat<2, 3, f32, lowp> lowp_fmat2x3;
typedef mat<2, 4, f32, lowp> lowp_fmat2x4;
typedef mat<3, 2, f32, lowp> lowp_fmat3x2;
typedef mat<3, 3, f32, lowp> lowp_fmat3x3;
typedef mat<3, 4, f32, lowp> lowp_fmat3x4;
typedef mat<4, 2, f32, lowp> lowp_fmat4x2;
typedef mat<4, 3, f32, lowp> lowp_fmat4x3;
typedef mat<4, 4, f32, lowp> lowp_fmat4x4;

//typedef lowp_fmat1x1 lowp_fmat1;
typedef lowp_fmat2x2 lowp_fmat2;
typedef lowp_fmat3x3 lowp_fmat3;
typedef lowp_fmat4x4 lowp_fmat4;

//typedef mediump_f32 mediump_fmat1x1;
typedef mat<2, 2, f32, mediump> mediump_fmat2x2;
typedef mat<2, 3, f32, mediump> mediump_fmat2x3;
typedef mat<2, 4, f32, mediump> mediump_fmat2x4;
typedef mat<3, 2, f32, mediump> mediump_fmat3x2;
typedef mat<3, 3, f32, mediump> mediump_fmat3x3;
typedef mat<3, 4, f32, mediump> mediump_fmat3x4;
typedef mat<4, 2, f32, mediump> mediump_fmat4x2;
typedef mat<4, 3, f32, mediump> mediump_fmat4x3;
typedef mat<4, 4, f32, mediump> mediump_fmat4x4;

//typedef mediump_fmat1x1 mediump_fmat1;
typedef mediump_fmat2x2 mediump_fmat2;
typedef mediump_fmat3x3 mediump_fmat3;
typedef mediump_fmat4x4 mediump_fmat4;

//typedef highp_f32 highp_fmat1x1;
typedef mat<2, 2, f32, highp> highp_fmat2x2;
typedef mat<2, 3, f32, highp> highp_fmat2x3;
typedef mat<2, 4, f32, highp> highp_fmat2x4;
typedef mat<3, 2, f32, highp> highp_fmat3x2;
typedef mat<3, 3, f32, highp> highp_fmat3x3;
typedef mat<3, 4, f32, highp> highp_fmat3x4;
typedef mat<4, 2, f32, highp> highp_fmat4x2;
typedef mat<4, 3, f32, highp> highp_fmat4x3;
typedef mat<4, 4, f32, highp> highp_fmat4x4;

//typedef highp_fmat1x1 highp_fmat1;
typedef highp_fmat2x2 highp_fmat2;
typedef highp_fmat3x3 highp_fmat3;
typedef highp_fmat4x4 highp_fmat4;

//typedef f32 lowp_f32mat1x1;
typedef mat<2, 2, f32, lowp> lowp_f32mat2x2;
typedef mat<2, 3, f32, lowp> lowp_f32mat2x3;
typedef mat<2, 4, f32, lowp> lowp_f32mat2x4;
typedef mat<3, 2, f32, lowp> lowp_f32mat3x2;
typedef mat<3, 3, f32, lowp> lowp_f32mat3x3;
typedef mat<3, 4, f32, lowp> lowp_f32mat3x4;
typedef mat<4, 2, f32, lowp> lowp_f32mat4x2;
typedef mat<4, 3, f32, lowp> lowp_f32mat4x3;
typedef mat<4, 4, f32, lowp> lowp_f32mat4x4;

//typedef detail::tmat1x1<f32, lowp> lowp_f32mat1;
typedef lowp_f32mat2x2 lowp_f32mat2;
typedef lowp_f32mat3x3 lowp_f32mat3;
typedef lowp_f32mat4x4 lowp_f32mat4;

//typedef f32 mediump_f32mat1x1;
typedef mat<2, 2, f32, mediump> mediump_f32mat2x2;
typedef mat<2, 3, f32, mediump> mediump_f32mat2x3;
typedef mat<2, 4, f32, mediump> mediump_f32mat2x4;
typedef mat<3, 2, f32, mediump> mediump_f32mat3x2;
typedef mat<3, 3, f32, mediump> mediump_f32mat3x3;
typedef mat<3, 4, f32, mediump> mediump_f32mat3x4;
typedef mat<4, 2, f32, mediump> mediump_f32mat4x2;
typedef mat<4, 3, f32, mediump> mediump_f32mat4x3;
typedef mat<4, 4, f32, mediump> mediump_f32mat4x4;

//typedef detail::tmat1x1<f32, mediump> f32mat1;
typedef mediump_f32mat2x2 mediump_f32mat2;
typedef mediump_f32mat3x3 mediump_f32mat3;
typedef mediump_f32mat4x4 mediump_f32mat4;

//typedef f32 highp_f32mat1x1;
typedef mat<2, 2, f32, highp> highp_f32mat2x2;
typedef mat<2, 3, f32, highp> highp_f32mat2x3;
typedef mat<2, 4, f32, highp> highp_f32mat2x4;
typedef mat<3, 2, f32, highp> highp_f32mat3x2;
typedef mat<3, 3, f32, highp> highp_f32mat3x3;
typedef mat<3, 4, f32, highp> highp_f32mat3x4;
typedef mat<4, 2, f32, highp> highp_f32mat4x2;
typedef mat<4, 3, f32, highp> highp_f32mat4x3;
typedef mat<4, 4, f32, highp> highp_f32mat4x4;

//typedef detail::tmat1x1<f32, highp> f32mat1;
typedef highp_f32mat2x2 highp_f32mat2;
typedef highp_f32mat3x3 highp_f32mat3;
typedef highp_f32mat4x4 highp_f32mat4;

//typedef f64 lowp_f64mat1x1;
typedef mat<2, 2, f64, lowp> lowp_f64mat2x2;
typedef mat<2, 3, f64, lowp> lowp_f64mat2x3;
typedef mat<2, 4, f64, lowp> lowp_f64mat2x4;
typedef mat<3, 2, f64, lowp> lowp_f64mat3x2;
typedef mat<3, 3, f64, lowp> lowp_f64mat3x3;
typedef mat<3, 4, f64, lowp> lowp_f64mat3x4;
typedef mat<4, 2, f64, lowp> lowp_f64mat4x2;
typedef mat<4, 3, f64, lowp> lowp_f64mat4x3;
typedef mat<4, 4, f64, lowp> lowp_f64mat4x4;

//typedef lowp_f64mat1x1 lowp_f64mat1;
typedef lowp_f64mat2x2 lowp_f64mat2;
typedef lowp_f64mat3x3 lowp_f64mat3;
typedef lowp_f64mat4x4 lowp_f64mat4;

//typedef f64 mediump_f64mat1x1;
1447 
1450  typedef mat<2, 2, f64, mediump> mediump_f64mat2x2;
1451 
1454  typedef mat<2, 3, f64, mediump> mediump_f64mat2x3;
1455 
1458  typedef mat<2, 4, f64, mediump> mediump_f64mat2x4;
1459 
1462  typedef mat<3, 2, f64, mediump> mediump_f64mat3x2;
1463 
1466  typedef mat<3, 3, f64, mediump> mediump_f64mat3x3;
1467 
1470  typedef mat<3, 4, f64, mediump> mediump_f64mat3x4;
1471 
1474  typedef mat<4, 2, f64, mediump> mediump_f64mat4x2;
1475 
1478  typedef mat<4, 3, f64, mediump> mediump_f64mat4x3;
1479 
1482  typedef mat<4, 4, f64, mediump> mediump_f64mat4x4;
1483 
1486  //typedef mediump_f64mat1x1 mediump_f64mat1;
1487 
1490  typedef mediump_f64mat2x2 mediump_f64mat2;
1491 
1494  typedef mediump_f64mat3x3 mediump_f64mat3;
1495 
1498  typedef mediump_f64mat4x4 mediump_f64mat4;
1499 
1502  //typedef f64 highp_f64mat1x1;
1503 
1506  typedef mat<2, 2, f64, highp> highp_f64mat2x2;
1507 
1510  typedef mat<2, 3, f64, highp> highp_f64mat2x3;
1511 
1514  typedef mat<2, 4, f64, highp> highp_f64mat2x4;
1515 
1518  typedef mat<3, 2, f64, highp> highp_f64mat3x2;
1519 
1522  typedef mat<3, 3, f64, highp> highp_f64mat3x3;
1523 
1526  typedef mat<3, 4, f64, highp> highp_f64mat3x4;
1527 
1530  typedef mat<4, 2, f64, highp> highp_f64mat4x2;
1531 
1534  typedef mat<4, 3, f64, highp> highp_f64mat4x3;
1535 
1538  typedef mat<4, 4, f64, highp> highp_f64mat4x4;
1539 
1542  //typedef highp_f64mat1x1 highp_f64mat1;
1543 
1546  typedef highp_f64mat2x2 highp_f64mat2;
1547 
1550  typedef highp_f64mat3x3 highp_f64mat3;
1551 
1554  typedef highp_f64mat4x4 highp_f64mat4;
1555 
1556 
1557 
1558 
1561  typedef vec<1, u8, lowp> lowp_u8vec1;
1562 
1565  typedef vec<2, u8, lowp> lowp_u8vec2;
1566 
1569  typedef vec<3, u8, lowp> lowp_u8vec3;
1570 
1573  typedef vec<4, u8, lowp> lowp_u8vec4;
1574 
1575 
1578  typedef vec<1, u8, mediump> mediump_u8vec1;
1579 
1582  typedef vec<2, u8, mediump> mediump_u8vec2;
1583 
1586  typedef vec<3, u8, mediump> mediump_u8vec3;
1587 
1590  typedef vec<4, u8, mediump> mediump_u8vec4;
1591 
1592 
1595  typedef vec<1, u8, highp> highp_u8vec1;
1596 
1599  typedef vec<2, u8, highp> highp_u8vec2;
1600 
1603  typedef vec<3, u8, highp> highp_u8vec3;
1604 
1607  typedef vec<4, u8, highp> highp_u8vec4;
1608 
1609 
1610 
1613  typedef vec<1, u8, defaultp> u8vec1;
1614 
1617  typedef vec<2, u8, defaultp> u8vec2;
1618 
1621  typedef vec<3, u8, defaultp> u8vec3;
1622 
1625  typedef vec<4, u8, defaultp> u8vec4;
1626 
1627 
1628 
1629 
1632  typedef vec<1, u16, lowp> lowp_u16vec1;
1633 
1636  typedef vec<2, u16, lowp> lowp_u16vec2;
1637 
1640  typedef vec<3, u16, lowp> lowp_u16vec3;
1641 
1644  typedef vec<4, u16, lowp> lowp_u16vec4;
1645 
1646 
1649  typedef vec<1, u16, mediump> mediump_u16vec1;
1650 
1653  typedef vec<2, u16, mediump> mediump_u16vec2;
1654 
1657  typedef vec<3, u16, mediump> mediump_u16vec3;
1658 
1661  typedef vec<4, u16, mediump> mediump_u16vec4;
1662 
1663 
1666  typedef vec<1, u16, highp> highp_u16vec1;
1667 
1670  typedef vec<2, u16, highp> highp_u16vec2;
1671 
1674  typedef vec<3, u16, highp> highp_u16vec3;
1675 
1678  typedef vec<4, u16, highp> highp_u16vec4;
1679 
1680 
1681 
1682 
1685  typedef vec<1, u16, defaultp> u16vec1;
1686 
1689  typedef vec<2, u16, defaultp> u16vec2;
1690 
1693  typedef vec<3, u16, defaultp> u16vec3;
1694 
1697  typedef vec<4, u16, defaultp> u16vec4;
1698 
1699 
1700 
1703  typedef vec<1, u32, lowp> lowp_u32vec1;
1704 
1707  typedef vec<2, u32, lowp> lowp_u32vec2;
1708 
1711  typedef vec<3, u32, lowp> lowp_u32vec3;
1712 
1715  typedef vec<4, u32, lowp> lowp_u32vec4;
1716 
1717 
1720  typedef vec<1, u32, mediump> mediump_u32vec1;
1721 
1724  typedef vec<2, u32, mediump> mediump_u32vec2;
1725 
1728  typedef vec<3, u32, mediump> mediump_u32vec3;
1729 
1732  typedef vec<4, u32, mediump> mediump_u32vec4;
1733 
1734 
1737  typedef vec<1, u32, highp> highp_u32vec1;
1738 
1741  typedef vec<2, u32, highp> highp_u32vec2;
1742 
1745  typedef vec<3, u32, highp> highp_u32vec3;
1746 
1749  typedef vec<4, u32, highp> highp_u32vec4;
1750 
1751 
1752 
1755  typedef vec<1, u32, defaultp> u32vec1;
1756 
1759  typedef vec<2, u32, defaultp> u32vec2;
1760 
1763  typedef vec<3, u32, defaultp> u32vec3;
1764 
1767  typedef vec<4, u32, defaultp> u32vec4;
1768 
1769 
1770 
1771 
1774  typedef vec<1, u64, lowp> lowp_u64vec1;
1775 
1778  typedef vec<2, u64, lowp> lowp_u64vec2;
1779 
1782  typedef vec<3, u64, lowp> lowp_u64vec3;
1783 
1786  typedef vec<4, u64, lowp> lowp_u64vec4;
1787 
1788 
1791  typedef vec<1, u64, mediump> mediump_u64vec1;
1792 
1795  typedef vec<2, u64, mediump> mediump_u64vec2;
1796 
1799  typedef vec<3, u64, mediump> mediump_u64vec3;
1800 
1803  typedef vec<4, u64, mediump> mediump_u64vec4;
1804 
1805 
1808  typedef vec<1, u64, highp> highp_u64vec1;
1809 
1812  typedef vec<2, u64, highp> highp_u64vec2;
1813 
1816  typedef vec<3, u64, highp> highp_u64vec3;
1817 
1820  typedef vec<4, u64, highp> highp_u64vec4;
1821 
1822 
1823 
1824 
1827  typedef vec<1, u64, defaultp> u64vec1;
1828 
1831  typedef vec<2, u64, defaultp> u64vec2;
1832 
1835  typedef vec<3, u64, defaultp> u64vec3;
1836 
1839  typedef vec<4, u64, defaultp> u64vec4;
1840 
1841 
1843  // Float vector types
1844 
1847  typedef float32 float32_t;
1848 
1851  typedef float32 f32;
1852 
1853 # ifndef GLM_FORCE_SINGLE_ONLY
1854 
1857  typedef float64 float64_t;
1858 
1861  typedef float64 f64;
1862 # endif//GLM_FORCE_SINGLE_ONLY
1863 
1866  typedef vec<1, float, defaultp> fvec1;
1867 
1870  typedef vec<2, float, defaultp> fvec2;
1871 
1874  typedef vec<3, float, defaultp> fvec3;
1875 
1878  typedef vec<4, float, defaultp> fvec4;
1879 
1880 
1883  typedef vec<1, f32, defaultp> f32vec1;
1884 
1887  typedef vec<2, f32, defaultp> f32vec2;
1888 
1891  typedef vec<3, f32, defaultp> f32vec3;
1892 
1895  typedef vec<4, f32, defaultp> f32vec4;
1896 
1897 # ifndef GLM_FORCE_SINGLE_ONLY
1898  typedef vec<1, f64, defaultp> f64vec1;
1901 
1904  typedef vec<2, f64, defaultp> f64vec2;
1905 
1908  typedef vec<3, f64, defaultp> f64vec3;
1909 
1912  typedef vec<4, f64, defaultp> f64vec4;
1913 # endif//GLM_FORCE_SINGLE_ONLY
1914 
1915 
1917  // Float matrix types
1918 
1921  //typedef detail::tmat1x1<f32> fmat1;
1922 
1925  typedef mat<2, 2, f32, defaultp> fmat2;
1926 
1929  typedef mat<3, 3, f32, defaultp> fmat3;
1930 
1933  typedef mat<4, 4, f32, defaultp> fmat4;
1934 
1935 
1938  //typedef f32 fmat1x1;
1939 
1942  typedef mat<2, 2, f32, defaultp> fmat2x2;
1943 
1946  typedef mat<2, 3, f32, defaultp> fmat2x3;
1947 
1950  typedef mat<2, 4, f32, defaultp> fmat2x4;
1951 
1954  typedef mat<3, 2, f32, defaultp> fmat3x2;
1955 
1958  typedef mat<3, 3, f32, defaultp> fmat3x3;
1959 
1962  typedef mat<3, 4, f32, defaultp> fmat3x4;
1963 
1966  typedef mat<4, 2, f32, defaultp> fmat4x2;
1967 
1970  typedef mat<4, 3, f32, defaultp> fmat4x3;
1971 
1974  typedef mat<4, 4, f32, defaultp> fmat4x4;
1975 
1976 
1979  //typedef detail::tmat1x1<f32, defaultp> f32mat1;
1980 
1983  typedef mat<2, 2, f32, defaultp> f32mat2;
1984 
1987  typedef mat<3, 3, f32, defaultp> f32mat3;
1988 
1991  typedef mat<4, 4, f32, defaultp> f32mat4;
1992 
1993 
1996  //typedef f32 f32mat1x1;
1997 
2000  typedef mat<2, 2, f32, defaultp> f32mat2x2;
2001 
2004  typedef mat<2, 3, f32, defaultp> f32mat2x3;
2005 
2008  typedef mat<2, 4, f32, defaultp> f32mat2x4;
2009 
2012  typedef mat<3, 2, f32, defaultp> f32mat3x2;
2013 
2016  typedef mat<3, 3, f32, defaultp> f32mat3x3;
2017 
2020  typedef mat<3, 4, f32, defaultp> f32mat3x4;
2021 
2024  typedef mat<4, 2, f32, defaultp> f32mat4x2;
2025 
2028  typedef mat<4, 3, f32, defaultp> f32mat4x3;
2029 
2032  typedef mat<4, 4, f32, defaultp> f32mat4x4;
2033 
2034 
2035 # ifndef GLM_FORCE_SINGLE_ONLY
2036 
2039  //typedef detail::tmat1x1<f64, defaultp> f64mat1;
2040 
2043  typedef mat<2, 2, f64, defaultp> f64mat2;
2044 
2047  typedef mat<3, 3, f64, defaultp> f64mat3;
2048 
2051  typedef mat<4, 4, f64, defaultp> f64mat4;
2052 
2053 
2056  //typedef f64 f64mat1x1;
2057 
2060  typedef mat<2, 2, f64, defaultp> f64mat2x2;
2061 
2064  typedef mat<2, 3, f64, defaultp> f64mat2x3;
2065 
2068  typedef mat<2, 4, f64, defaultp> f64mat2x4;
2069 
2072  typedef mat<3, 2, f64, defaultp> f64mat3x2;
2073 
2076  typedef mat<3, 3, f64, defaultp> f64mat3x3;
2077 
2080  typedef mat<3, 4, f64, defaultp> f64mat3x4;
2081 
2084  typedef mat<4, 2, f64, defaultp> f64mat4x2;
2085 
2088  typedef mat<4, 3, f64, defaultp> f64mat4x3;
2089 
2092  typedef mat<4, 4, f64, defaultp> f64mat4x4;
2093 
2094 # endif//GLM_FORCE_SINGLE_ONLY
2095 
2097  // Quaternion types
2098 
2101  typedef qua<f32, defaultp> f32quat;
2102 
2105  typedef qua<f32, lowp> lowp_f32quat;
2106 
2109  typedef qua<f64, lowp> lowp_f64quat;
2110 
2113  typedef qua<f32, mediump> mediump_f32quat;
2114 
2115 # ifndef GLM_FORCE_SINGLE_ONLY
2116 
2119  typedef qua<f64, mediump> mediump_f64quat;
2120 
2123  typedef qua<f32, highp> highp_f32quat;
2124 
2127  typedef qua<f64, highp> highp_f64quat;
2128 
2131  typedef qua<f64, defaultp> f64quat;
2132 
2133 # endif//GLM_FORCE_SINGLE_ONLY
2134 
2136 }//namespace glm
2137 
2138 #include "type_precision.inl"
Definition: fwd.hpp:342
int8 int8_t
8 bit signed integer type.
Definition: fwd.hpp:43
int32 i32
32 bit signed integer type.
Definition: fwd.hpp:62
vec< 1, u32, mediump > mediump_u32vec1
Medium qualifier 32 bit unsigned integer scalar type.
Definition: fwd.hpp:369
mat< 2, 2, f64, defaultp > f64mat2x2
Double-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:780
mat< 2, 2, f32, lowp > lowp_f32mat2x2
Low single-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:670
vec< 4, f32, lowp > lowp_f32vec4
Low single-qualifier floating-point vector of 4 components.
Definition: fwd.hpp:449
vec< 3, float, highp > highp_fvec3
High single-qualifier floating-point vector of 3 components.
Definition: fwd.hpp:438
mat< 4, 2, f64, lowp > lowp_f64mat4x2
Low double-qualifier floating-point 4x2 matrix.
Definition: fwd.hpp:756
mat< 3, 3, f32, mediump > mediump_fmat3x3
Medium single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:644
vec< 1, i64, highp > highp_i64vec1
High qualifier 64 bit signed integer scalar type.
Definition: fwd.hpp:292
vec< 4, i8, defaultp > i8vec4
8 bit signed integer vector of 4 components type.
Definition: fwd.hpp:240
int32 highp_int32
High qualifier 32 bit signed integer type.
Definition: fwd.hpp:66
mat< 2, 3, f32, mediump > mediump_f32mat2x3
Medium single-qualifier floating-point 2x3 matrix.
Definition: fwd.hpp:681
mat< 3, 2, f64, lowp > lowp_f64mat3x2
Low double-qualifier floating-point 3x2 matrix.
Definition: fwd.hpp:753
uint32 highp_u32
High qualifier 32 bit unsigned integer type.
Definition: fwd.hpp:119
int32 highp_i32
High qualifier 32 bit signed integer type.
Definition: fwd.hpp:61
vec< 4, u64, defaultp > u64vec4
Default qualifier 64 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:402
vec< 4, f32, defaultp > f32vec4
Single-qualifier floating-point vector of 4 components.
Definition: fwd.hpp:464
mat< 2, 3, f64, defaultp > f64mat2x3
Double-qualifier floating-point 2x3 matrix.
Definition: fwd.hpp:783
mat< 4, 4, f64, mediump > mediump_f64mat4x4
Medium double-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:768
vec< 4, u16, lowp > lowp_u16vec4
Low qualifier 16 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:347
uint32 highp_uint32
High qualifier 32 bit unsigned integer type.
Definition: fwd.hpp:124
mat< 4, 4, f32, lowp > lowp_f32mat4
Low single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:542
mat< 3, 2, f64, defaultp > f64mat3x2
Double-qualifier floating-point 3x2 matrix.
Definition: fwd.hpp:781
float mediump_float32
Medium 32 bit single-qualifier floating-point scalar.
Definition: fwd.hpp:153
vec< 1, u32, defaultp > u32vec1
Default qualifier 32 bit unsigned integer scalar type.
Definition: fwd.hpp:379
vec< 4, f64, mediump > mediump_f64vec4
Medium double-qualifier floating-point vector of 4 components.
Definition: fwd.hpp:494
mat< 3, 3, f64, defaultp > f64mat3x3
Double-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:784
float highp_float32
High 32 bit single-qualifier floating-point scalar.
Definition: fwd.hpp:154
uint8 highp_uint8
High qualifier 8 bit unsigned integer type.
Definition: fwd.hpp:96
int8 highp_i8
High qualifier 8 bit signed integer type.
Definition: fwd.hpp:33
mat< 2, 4, f64, lowp > lowp_f64mat2x4
Low double-qualifier floating-point 2x4 matrix.
Definition: fwd.hpp:752
mat< 3, 4, f64, lowp > lowp_f64mat3x4
Low double-qualifier floating-point 3x4 matrix.
Definition: fwd.hpp:755
int8 mediump_i8
Medium qualifier 8 bit signed integer type.
Definition: fwd.hpp:32
int64 highp_int64_t
High qualifier 64 bit signed integer type.
Definition: fwd.hpp:84
mat< 4, 4, f32, defaultp > f32mat4x4
Single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:708
float float32_t
Default 32 bit single-qualifier floating-point scalar.
Definition: fwd.hpp:160
mat< 2, 2, f32, defaultp > f32mat2x2
Single-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:700
vec< 2, i64, lowp > lowp_i64vec2
Low qualifier 64 bit signed integer vector of 2 components type.
Definition: fwd.hpp:283
mat< 2, 4, f32, lowp > lowp_f32mat2x4
Low single-qualifier floating-point 2x4 matrix.
Definition: fwd.hpp:672
uint32 uint32_t
Default qualifier 32 bit unsigned integer type.
Definition: fwd.hpp:129
mat< 3, 3, f32, highp > highp_f32mat3
High single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:549
mat< 3, 3, f64, mediump > mediump_f64mat3x3
Medium double-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:764
uint8 u8
Default qualifier 8 bit unsigned integer type.
Definition: fwd.hpp:92
vec< 3, i32, highp > highp_i32vec3
High qualifier 32 bit signed integer vector of 3 components type.
Definition: fwd.hpp:274
float float32
Single-qualifier floating-point scalar.
Definition: fwd.hpp:155
vec< 4, f32, defaultp > fvec4
Single-qualifier floating-point vector of 4 components.
Definition: fwd.hpp:444
vec< 1, i32, highp > highp_i32vec1
High qualifier 32 bit signed integer scalar type.
Definition: fwd.hpp:272
mat< 3, 3, f32, lowp > lowp_f32mat3
Low single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:541
vec< 1, u16, defaultp > u16vec1
Default qualifier 16 bit unsigned integer scalar type.
Definition: fwd.hpp:359
vec< 1, i8, defaultp > i8vec1
8 bit signed integer scalar type.
Definition: fwd.hpp:237
vec< 3, i32, mediump > mediump_i32vec3
Medium qualifier 32 bit signed integer vector of 3 components type.
Definition: fwd.hpp:269
vec< 2, i32, defaultp > i32vec2
32 bit signed integer vector of 2 components type.
Definition: fwd.hpp:278
vec< 2, i16, lowp > lowp_i16vec2
Low qualifier 16 bit signed integer vector of 2 components type.
Definition: fwd.hpp:243
vec< 2, u64, mediump > mediump_u64vec2
Medium qualifier 64 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:390
vec< 4, u8, lowp > lowp_u8vec4
Low qualifier 8 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:327
mat< 3, 3, f32, highp > highp_f32mat3x3
High single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:694
vec< 1, u8, highp > highp_u8vec1
High qualifier 8 bit unsigned integer scalar type.
Definition: fwd.hpp:334
uint8 highp_uint8_t
High qualifier 8 bit unsigned integer type.
Definition: fwd.hpp:100
vec< 4, u32, mediump > mediump_u32vec4
Medium qualifier 32 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:372
mat< 2, 2, f32, highp > highp_f32mat2x2
High single-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:690
vec< 4, f64, highp > highp_f64vec4
High double-qualifier floating-point vector of 4 components.
Definition: fwd.hpp:499
vec< 3, u8, lowp > lowp_u8vec3
Low qualifier 8 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:326
float highp_f32
High 32 bit single-qualifier floating-point scalar.
Definition: fwd.hpp:149
uint64 mediump_uint64
Medium qualifier 64 bit unsigned integer type.
Definition: fwd.hpp:137
int32 highp_int32_t
High qualifier 32 bit signed integer type.
Definition: fwd.hpp:70
vec< 3, f64, defaultp > f64vec3
Double-qualifier floating-point vector of 3 components.
Definition: fwd.hpp:503
mat< 2, 3, f32, lowp > lowp_f32mat2x3
Low single-qualifier floating-point 2x3 matrix.
Definition: fwd.hpp:671
vec< 3, u16, mediump > mediump_u16vec3
Medium qualifier 16 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:351
mat< 2, 4, f64, defaultp > f64mat2x4
Double-qualifier floating-point 2x4 matrix.
Definition: fwd.hpp:786
mat< 3, 3, f32, defaultp > f32mat3
Single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:553
mat< 2, 2, f64, mediump > mediump_f64mat2x2
Medium double-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:760
uint64 mediump_u64
Medium qualifier 64 bit unsigned integer type.
Definition: fwd.hpp:132
vec< 4, i16, highp > highp_i16vec4
High qualifier 16 bit signed integer vector of 4 components type.
Definition: fwd.hpp:255
mat< 4, 4, f32, lowp > lowp_fmat4
Low single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:526
vec< 2, u32, mediump > mediump_u32vec2
Medium qualifier 32 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:370
vec< 3, u64, highp > highp_u64vec3
High qualifier 64 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:396
uint16 lowp_u16
Low qualifier 16 bit unsigned integer type.
Definition: fwd.hpp:103
vec< 3, i16, lowp > lowp_i16vec3
Low qualifier 16 bit signed integer vector of 3 components type.
Definition: fwd.hpp:244
vec< 3, u16, lowp > lowp_u16vec3
Low qualifier 16 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:346
vec< 3, f32, lowp > lowp_f32vec3
Low single-qualifier floating-point vector of 3 components.
Definition: fwd.hpp:448
mat< 4, 4, f32, highp > highp_fmat4
High single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:534
mat< 3, 3, f32, lowp > lowp_fmat3
Low single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:525
int16 highp_i16
High qualifier 16 bit signed integer type.
Definition: fwd.hpp:47
qua< f32, mediump > mediump_f32quat
Medium single-qualifier floating-point quaternion.
Definition: fwd.hpp:803
int8 highp_int8
High qualifier 8 bit signed integer type.
Definition: fwd.hpp:38
mat< 4, 4, f64, defaultp > f64mat4x4
Double-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:788
mat< 4, 3, f32, defaultp > fmat4x3
Single-qualifier floating-point 4x3 matrix.
Definition: fwd.hpp:665
mat< 2, 4, f32, lowp > lowp_fmat2x4
Low single-qualifier floating-point 2x4 matrix.
Definition: fwd.hpp:632
mat< 3, 3, f64, highp > highp_f64mat3
High double-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:581
vec< 3, i8, mediump > mediump_i8vec3
Medium qualifier 8 bit signed integer vector of 3 components type.
Definition: fwd.hpp:229
vec< 1, f32, highp > highp_f32vec1
High single-qualifier floating-point vector of 1 component.
Definition: fwd.hpp:456
vec< 3, i8, lowp > lowp_i8vec3
Low qualifier 8 bit signed integer vector of 3 components type.
Definition: fwd.hpp:224
mat< 4, 3, f64, lowp > lowp_f64mat4x3
Low double-qualifier floating-point 4x3 matrix.
Definition: fwd.hpp:757
vec< 4, u64, highp > highp_u64vec4
High qualifier 64 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:397
vec< 3, f32, defaultp > fvec3
Single-qualifier floating-point vector of 3 components.
Definition: fwd.hpp:443
vec< 2, i16, defaultp > i16vec2
16 bit signed integer vector of 2 components type.
Definition: fwd.hpp:258
mat< 4, 3, f32, defaultp > f32mat4x3
Single-qualifier floating-point 4x3 matrix.
Definition: fwd.hpp:705
mat< 2, 2, f32, defaultp > f32mat2
Single-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:552
vec< 2, u16, mediump > mediump_u16vec2
Medium qualifier 16 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:350
mat< 2, 4, f32, mediump > mediump_fmat2x4
Medium single-qualifier floating-point 2x4 matrix.
Definition: fwd.hpp:642
mat< 4, 4, f32, lowp > lowp_f32mat4x4
Low single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:678
vec< 2, u8, lowp > lowp_u8vec2
Low qualifier 8 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:325
mat< 3, 3, f64, mediump > mediump_f64mat3
Medium double-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:577
int16 lowp_i16
Low qualifier 16 bit signed integer type.
Definition: fwd.hpp:45
mat< 3, 4, f32, highp > highp_fmat3x4
High single-qualifier floating-point 3x4 matrix.
Definition: fwd.hpp:655
double float64_t
Default 64 bit double-qualifier floating-point scalar.
Definition: fwd.hpp:176
mat< 4, 4, f64, highp > highp_f64mat4x4
High double-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:778
mat< 4, 3, f32, mediump > mediump_f32mat4x3
Medium single-qualifier floating-point 4x3 matrix.
Definition: fwd.hpp:687
int16 lowp_int16
Low qualifier 16 bit signed integer type.
Definition: fwd.hpp:50
mat< 3, 3, f32, mediump > mediump_fmat3
Medium single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:529
mat< 4, 4, f32, highp > highp_f32mat4x4
High single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:698
int64 lowp_int64_t
Low qualifier 64 bit signed integer type.
Definition: fwd.hpp:82
uint16 uint16_t
Default qualifier 16 bit unsigned integer type.
Definition: fwd.hpp:115
vec< 2, f64, highp > highp_f64vec2
High double-qualifier floating-point vector of 2 components.
Definition: fwd.hpp:497
vec< 2, u64, lowp > lowp_u64vec2
Low qualifier 64 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:385
mat< 3, 3, f32, defaultp > fmat3
Single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:537
mat< 3, 2, f32, mediump > mediump_f32mat3x2
Medium single-qualifier floating-point 3x2 matrix.
Definition: fwd.hpp:683
mat< 4, 2, f32, lowp > lowp_f32mat4x2
Low single-qualifier floating-point 4x2 matrix.
Definition: fwd.hpp:676
int32 lowp_int32
Low qualifier 32 bit signed integer type.
Definition: fwd.hpp:64
vec< 4, i64, mediump > mediump_i64vec4
Medium qualifier 64 bit signed integer vector of 4 components type.
Definition: fwd.hpp:290
uint8 uint8_t
Default qualifier 8 bit unsigned integer type.
Definition: fwd.hpp:101
vec< 1, i8, mediump > mediump_i8vec1
Medium qualifier 8 bit signed integer scalar type.
Definition: fwd.hpp:227
int32 mediump_int32_t
Medium qualifier 32 bit signed integer type.
Definition: fwd.hpp:69
float highp_float32_t
High 32 bit single-qualifier floating-point scalar.
Definition: fwd.hpp:159
mat< 3, 3, f32, defaultp > f32mat3x3
Single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:704
uint8 highp_u8
High qualifier 8 bit unsigned integer type.
Definition: fwd.hpp:91
uint8 mediump_uint8
Medium qualifier 8 bit unsigned integer type.
Definition: fwd.hpp:95
mat< 4, 2, f32, highp > highp_fmat4x2
High single-qualifier floating-point 4x2 matrix.
Definition: fwd.hpp:656
vec< 2, f32, highp > highp_f32vec2
High single-qualifier floating-point vector of 2 components.
Definition: fwd.hpp:457
int64 mediump_int64_t
Medium qualifier 64 bit signed integer type.
Definition: fwd.hpp:83
vec< 3, u64, lowp > lowp_u64vec3
Low qualifier 64 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:386
mat< 2, 2, f64, highp > highp_f64mat2x2
High double-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:770
vec< 3, u32, highp > highp_u32vec3
High qualifier 32 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:376
int8 highp_int8_t
High qualifier 8 bit signed integer type.
Definition: fwd.hpp:42
qua< f32, lowp > lowp_f32quat
Low single-qualifier floating-point quaternion.
Definition: fwd.hpp:802
vec< 4, i32, lowp > lowp_i32vec4
Low qualifier 32 bit signed integer vector of 4 components type.
Definition: fwd.hpp:265
vec< 1, i16, highp > highp_i16vec1
High qualifier 16 bit signed integer scalar type.
Definition: fwd.hpp:252
mat< 4, 4, f32, lowp > lowp_fmat4x4
Low single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:638
mat< 3, 2, f32, defaultp > f32mat3x2
Single-qualifier floating-point 3x2 matrix.
Definition: fwd.hpp:701
mat< 3, 3, f32, lowp > lowp_f32mat3x3
Low single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:674
vec< 2, i8, lowp > lowp_i8vec2
Low qualifier 8 bit signed integer vector of 2 components type.
Definition: fwd.hpp:223
vec< 4, i32, defaultp > i32vec4
32 bit signed integer vector of 4 components type.
Definition: fwd.hpp:280
mat< 2, 2, f32, highp > highp_f32mat2
High single-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:548
float lowp_f32
Low 32 bit single-qualifier floating-point scalar.
Definition: fwd.hpp:147
vec< 4, u16, mediump > mediump_u16vec4
Medium qualifier 16 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:352
vec< 3, u32, defaultp > u32vec3
Default qualifier 32 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:381
vec< 2, u8, defaultp > u8vec2
Default qualifier 8 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:340
int16 mediump_i16
Medium qualifier 16 bit signed integer type.
Definition: fwd.hpp:46
vec< 2, u64, highp > highp_u64vec2
High qualifier 64 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:395
vec< 3, i8, defaultp > i8vec3
8 bit signed integer vector of 3 components type.
Definition: fwd.hpp:239
mat< 2, 2, f32, mediump > mediump_f32mat2x2
Medium single-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:680
uint16 mediump_uint16_t
Medium qualifier 16 bit unsigned integer type.
Definition: fwd.hpp:113
mat< 4, 3, f64, mediump > mediump_f64mat4x3
Medium double-qualifier floating-point 4x3 matrix.
Definition: fwd.hpp:767
vec< 3, u8, defaultp > u8vec3
Default qualifier 8 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:341
double highp_f64
High 64 bit double-qualifier floating-point scalar.
Definition: fwd.hpp:165
vec< 3, float, mediump > mediump_fvec3
Medium single-qualifier floating-point vector of 3 components.
Definition: fwd.hpp:433
int64 mediump_int64
Medium qualifier 64 bit signed integer type.
Definition: fwd.hpp:79
vec< 4, u64, mediump > mediump_u64vec4
Medium qualifier 64 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:392
uint64 uint64_t
Default qualifier 64 bit unsigned integer type.
Definition: fwd.hpp:143
vec< 2, u32, highp > highp_u32vec2
High qualifier 32 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:375
vec< 1, float, highp > highp_fvec1
High single-qualifier floating-point vector of 1 component.
Definition: fwd.hpp:436
vec< 4, i64, lowp > lowp_i64vec4
Low qualifier 64 bit signed integer vector of 4 components type.
Definition: fwd.hpp:285
vec< 3, i32, defaultp > i32vec3
32 bit signed integer vector of 3 components type.
Definition: fwd.hpp:279
mat< 2, 4, f32, highp > highp_f32mat2x4
High single-qualifier floating-point 2x4 matrix.
Definition: fwd.hpp:692
vec< 1, i8, lowp > lowp_i8vec1
Low qualifier 8 bit signed integer scalar type.
Definition: fwd.hpp:222
mat< 2, 2, f64, highp > highp_f64mat2
High double-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:580
uint16 lowp_uint16_t
Low qualifier 16 bit unsigned integer type.
Definition: fwd.hpp:112
mat< 3, 2, f64, highp > highp_f64mat3x2
High double-qualifier floating-point 3x2 matrix.
Definition: fwd.hpp:773
vec< 3, u32, mediump > mediump_u32vec3
Medium qualifier 32 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:371
uint16 lowp_uint16
Low qualifier 16 bit unsigned integer type.
Definition: fwd.hpp:108
vec< 3, u8, highp > highp_u8vec3
High qualifier 8 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:336
vec< 4, f64, defaultp > f64vec4
Double-qualifier floating-point vector of 4 components.
Definition: fwd.hpp:504
vec< 2, i8, highp > highp_i8vec2
High qualifier 8 bit signed integer vector of 2 components type.
Definition: fwd.hpp:233
vec< 3, i32, lowp > lowp_i32vec3
Low qualifier 32 bit signed integer vector of 3 components type.
Definition: fwd.hpp:264
int32 lowp_i32
Low qualifier 32 bit signed integer type.
Definition: fwd.hpp:59
mat< 4, 4, f32, mediump > mediump_fmat4x4
Medium single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:648
int64 mediump_i64
Medium qualifier 64 bit signed integer type.
Definition: fwd.hpp:74
vec< 4, i16, lowp > lowp_i16vec4
Low qualifier 16 bit signed integer vector of 4 components type.
Definition: fwd.hpp:245
mat< 4, 3, f64, highp > highp_f64mat4x3
High double-qualifier floating-point 4x3 matrix.
Definition: fwd.hpp:777
vec< 2, u8, highp > highp_u8vec2
High qualifier 8 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:335
vec< 3, i8, highp > highp_i8vec3
High qualifier 8 bit signed integer vector of 3 components type.
Definition: fwd.hpp:234
vec< 3, f64, highp > highp_f64vec3
High double-qualifier floating-point vector of 3 components.
Definition: fwd.hpp:498
vec< 2, f32, defaultp > fvec2
Single-qualifier floating-point vector of 2 components.
Definition: fwd.hpp:442
vec< 4, f64, lowp > lowp_f64vec4
Low double-qualifier floating-point vector of 4 components.
Definition: fwd.hpp:489
vec< 3, f32, mediump > mediump_f32vec3
Medium single-qualifier floating-point vector of 3 components.
Definition: fwd.hpp:453
double lowp_f64
Low 64 bit double-qualifier floating-point scalar.
Definition: fwd.hpp:163
mat< 4, 2, f32, lowp > lowp_fmat4x2
Low single-qualifier floating-point 4x2 matrix.
Definition: fwd.hpp:636
mat< 2, 4, f64, highp > highp_f64mat2x4
High double-qualifier floating-point 2x4 matrix.
Definition: fwd.hpp:772
mat< 4, 4, f64, highp > highp_f64mat4
High double-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:582
vec< 4, i32, mediump > mediump_i32vec4
Medium qualifier 32 bit signed integer vector of 4 components type.
Definition: fwd.hpp:270
mat< 2, 2, f32, lowp > lowp_f32mat2
Low single-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:540
int16 int16_t
16 bit signed integer type.
Definition: fwd.hpp:57
int64 highp_i64
High qualifier 64 bit signed integer type.
Definition: fwd.hpp:75
mat< 3, 4, f64, highp > highp_f64mat3x4
High double-qualifier floating-point 3x4 matrix.
Definition: fwd.hpp:775
mat< 3, 3, f32, highp > highp_fmat3
High single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:533
mat< 3, 3, f32, mediump > mediump_f32mat3x3
Medium single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:684
qua< f64, mediump > mediump_f64quat
Medium double-qualifier floating-point quaternion.
Definition: fwd.hpp:813
int32 int32_t
32 bit signed integer type.
Definition: fwd.hpp:71
vec< 2, f64, defaultp > f64vec2
Double-qualifier floating-point vector of 2 components.
Definition: fwd.hpp:502
uint64 lowp_uint64_t
Low qualifier 64 bit unsigned integer type.
Definition: fwd.hpp:140
detail::uint64 uint64
64 bit unsigned integer type.
int16 highp_int16
High qualifier 16 bit signed integer type.
Definition: fwd.hpp:52
vec< 1, i16, mediump > mediump_i16vec1
Medium qualifier 16 bit signed integer scalar type.
Definition: fwd.hpp:247
mat< 2, 4, f32, defaultp > fmat2x4
Single-qualifier floating-point 2x4 matrix.
Definition: fwd.hpp:666
mat< 2, 2, f32, highp > highp_fmat2x2
High single-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:650
vec< 4, float, highp > highp_fvec4
High single-qualifier floating-point vector of 4 components.
Definition: fwd.hpp:439
mat< 3, 3, f64, highp > highp_f64mat3x3
High double-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:774
int32 mediump_i32
Medium qualifier 32 bit signed integer type.
Definition: fwd.hpp:60
vec< 2, u16, lowp > lowp_u16vec2
Low qualifier 16 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:345
vec< 4, u32, highp > highp_u32vec4
High qualifier 32 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:377
float lowp_float32_t
Low 32 bit single-qualifier floating-point scalar.
Definition: fwd.hpp:157
uint64 highp_uint64_t
High qualifier 64 bit unsigned integer type.
Definition: fwd.hpp:142
vec< 2, f32, lowp > lowp_f32vec2
Low single-qualifier floating-point vector of 2 components.
Definition: fwd.hpp:447
vec< 4, u32, defaultp > u32vec4
Default qualifier 32 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:382
mat< 2, 2, f64, mediump > mediump_f64mat2
Medium double-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:576
mat< 4, 3, f32, highp > highp_f32mat4x3
High single-qualifier floating-point 4x3 matrix.
Definition: fwd.hpp:697
qua< f32, defaultp > f32quat
Single-qualifier floating-point quaternion.
Definition: fwd.hpp:805
detail::int64 int64
64 bit signed integer type.
vec< 1, u64, highp > highp_u64vec1
High qualifier 64 bit unsigned integer scalar type.
Definition: fwd.hpp:394
mat< 2, 3, f64, highp > highp_f64mat2x3
High double-qualifier floating-point 2x3 matrix.
Definition: fwd.hpp:771
vec< 4, i8, lowp > lowp_i8vec4
Low qualifier 8 bit signed integer vector of 4 components type.
Definition: fwd.hpp:225
mat< 4, 3, f32, lowp > lowp_fmat4x3
Low single-qualifier floating-point 4x3 matrix.
Definition: fwd.hpp:637
float f32
Default 32 bit single-qualifier floating-point scalar.
Definition: fwd.hpp:150
vec< 2, i32, highp > highp_i32vec2
High qualifier 32 bit signed integer vector of 2 components type.
Definition: fwd.hpp:273
vec< 1, u8, mediump > mediump_u8vec1
Medium qualifier 8 bit unsigned integer scalar type.
Definition: fwd.hpp:329
mat< 4, 3, f32, highp > highp_fmat4x3
High single-qualifier floating-point 4x3 matrix.
Definition: fwd.hpp:657
vec< 4, i16, mediump > mediump_i16vec4
Medium qualifier 16 bit signed integer vector of 4 components type.
Definition: fwd.hpp:250
mat< 4, 2, f64, defaultp > f64mat4x2
Double-qualifier floating-point 4x2 matrix.
Definition: fwd.hpp:782
mat< 2, 3, f32, defaultp > fmat2x3
Single-qualifier floating-point 2x3 matrix.
Definition: fwd.hpp:663
mat< 4, 4, f64, mediump > mediump_f64mat4
Medium double-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:578
vec< 4, u8, mediump > mediump_u8vec4
Medium qualifier 8 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:332
mat< 3, 4, f32, lowp > lowp_f32mat3x4
Low single-qualifier floating-point 3x4 matrix.
Definition: fwd.hpp:675
double mediump_float64_t
Medium 64 bit double-qualifier floating-point scalar.
Definition: fwd.hpp:174
vec< 2, float, highp > highp_fvec2
High single-qualifier floating-point vector of 2 components.
Definition: fwd.hpp:437
uint16 u16
Default qualifier 16 bit unsigned integer type.
Definition: fwd.hpp:106
int64 lowp_i64
Low qualifier 64 bit signed integer type.
Definition: fwd.hpp:73
mat< 4, 4, f32, defaultp > f32mat4
Single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:554
mat< 4, 2, f32, mediump > mediump_fmat4x2
Medium single-qualifier floating-point 4x2 matrix.
Definition: fwd.hpp:646
mat< 2, 2, f64, lowp > lowp_f64mat2
Low double-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:572
int8 mediump_int8_t
Medium qualifier 8 bit signed integer type.
Definition: fwd.hpp:41
mat< 3, 3, f32, lowp > lowp_fmat3x3
Low single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:634
double lowp_float64_t
Low 64 bit double-qualifier floating-point scalar.
Definition: fwd.hpp:173
int16 highp_int16_t
High qualifier 16 bit signed integer type.
Definition: fwd.hpp:56
mat< 3, 3, f32, highp > highp_fmat3x3
High single-qualifier floating-point 3x3 matrix.
Definition: fwd.hpp:654
vec< 1, i64, defaultp > i64vec1
64 bit signed integer scalar type.
Definition: fwd.hpp:297
uint32 lowp_u32
Low qualifier 32 bit unsigned integer type.
Definition: fwd.hpp:117
vec< 1, u8, lowp > lowp_u8vec1
Low qualifier 8 bit unsigned integer scalar type.
Definition: fwd.hpp:324
vec< 3, i64, mediump > mediump_i64vec3
Medium qualifier 64 bit signed integer vector of 3 components type.
Definition: fwd.hpp:289
qua< f32, highp > highp_f32quat
High single-qualifier floating-point quaternion.
Definition: fwd.hpp:804
uint16 highp_u16
High qualifier 16 bit unsigned integer type.
Definition: fwd.hpp:105
vec< 1, f32, defaultp > fvec1
Single-qualifier floating-point vector of 1 component.
Definition: fwd.hpp:441
vec< 2, u8, mediump > mediump_u8vec2
Medium qualifier 8 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:330
int32 lowp_int32_t
Low qualifier 32 bit signed integer type.
Definition: fwd.hpp:68
vec< 1, u16, lowp > lowp_u16vec1
Low qualifier 16 bit unsigned integer scalar type.
Definition: fwd.hpp:344
mat< 4, 4, f32, highp > highp_fmat4x4
High single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:658
mat< 3, 4, f32, highp > highp_f32mat3x4
High single-qualifier floating-point 3x4 matrix.
Definition: fwd.hpp:695
vec< 2, f32, defaultp > f32vec2
Single-qualifier floating-point vector of 2 components.
Definition: fwd.hpp:462
vec< 3, u16, highp > highp_u16vec3
High qualifier 16 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:356
float mediump_float32_t
Medium 32 bit single-qualifier floating-point scalar.
Definition: fwd.hpp:158
mat< 2, 2, f32, defaultp > fmat2x2
Single-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:660
float mediump_f32
Medium 32 bit single-qualifier floating-point scalar.
Definition: fwd.hpp:148
mat< 4, 4, f32, mediump > mediump_f32mat4x4
Medium single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:688
vec< 2, f32, mediump > mediump_f32vec2
Medium single-qualifier floating-point vector of 2 components.
Definition: fwd.hpp:452
int8 lowp_int8
Low qualifier 8 bit signed integer type.
Definition: fwd.hpp:36
vec< 1, f64, lowp > lowp_f64vec1
Low double-qualifier floating-point vector of 1 component.
Definition: fwd.hpp:486
mat< 3, 2, f32, highp > highp_f32mat3x2
High single-qualifier floating-point 3x2 matrix.
Definition: fwd.hpp:693
mat< 3, 2, f64, mediump > mediump_f64mat3x2
Medium double-qualifier floating-point 3x2 matrix.
Definition: fwd.hpp:763
vec< 3, u8, mediump > mediump_u8vec3
Medium qualifier 8 bit unsigned integer vector of 3 components type.
Definition: fwd.hpp:331
mat< 4, 4, f64, lowp > lowp_f64mat4x4
Low double-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:758
vec< 1, i16, lowp > lowp_i16vec1
Low qualifier 16 bit signed integer scalar type.
Definition: fwd.hpp:242
int8 lowp_int8_t
Low qualifier 8 bit signed integer type.
Definition: fwd.hpp:40
vec< 2, u32, lowp > lowp_u32vec2
Low qualifier 32 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:365
mat< 2, 4, f32, mediump > mediump_f32mat2x4
Medium single-qualifier floating-point 2x4 matrix.
Definition: fwd.hpp:682
mat< 4, 3, f64, defaultp > f64mat4x3
Double-qualifier floating-point 4x3 matrix.
Definition: fwd.hpp:785
vec< 2, i64, highp > highp_i64vec2
High qualifier 64 bit signed integer vector of 2 components type.
Definition: fwd.hpp:293
mat< 4, 4, f32, mediump > mediump_f32mat4
Medium single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:546
int64 i64
64 bit signed integer type.
Definition: fwd.hpp:76
double f64
Default 64 bit double-qualifier floating-point scalar.
Definition: fwd.hpp:166
vec< 1, f32, mediump > mediump_f32vec1
Medium single-qualifier floating-point vector of 1 component.
Definition: fwd.hpp:451
mat< 3, 4, f32, mediump > mediump_f32mat3x4
Medium single-qualifier floating-point 3x4 matrix.
Definition: fwd.hpp:685
mat< 2, 2, f32, highp > highp_fmat2
High single-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:532
vec< 3, f32, highp > highp_f32vec3
High single-qualifier floating-point vector of 3 components.
Definition: fwd.hpp:458
vec< 4, i8, mediump > mediump_i8vec4
Medium qualifier 8 bit signed integer vector of 4 components type.
Definition: fwd.hpp:230
float lowp_float32
Low 32 bit single-qualifier floating-point scalar.
Definition: fwd.hpp:152
vec< 2, u32, defaultp > u32vec2
Default qualifier 32 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:380
vec< 4, float, mediump > mediump_fvec4
Medium single-qualifier floating-point vector of 4 components.
Definition: fwd.hpp:434
int32 mediump_int32
Medium qualifier 32 bit signed integer type.
Definition: fwd.hpp:65
vec< 2, i64, defaultp > i64vec2
64 bit signed integer vector of 2 components type.
Definition: fwd.hpp:298
int16 i16
16 bit signed integer type.
Definition: fwd.hpp:48
mat< 4, 4, f32, defaultp > fmat4x4
Single-qualifier floating-point 4x4 matrix.
Definition: fwd.hpp:668
qua< f64, lowp > lowp_f64quat
Low double-qualifier floating-point quaternion.
Definition: fwd.hpp:812
mat< 3, 2, f32, defaultp > fmat3x2
Single-qualifier floating-point 3x2 matrix.
Definition: fwd.hpp:661
vec< 4, u16, defaultp > u16vec4
Default qualifier 16 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:362
vec< 2, u16, defaultp > u16vec2
Default qualifier 16 bit unsigned integer vector of 2 components type.
Definition: fwd.hpp:360
uint8 mediump_u8
Medium qualifier 8 bit unsigned integer type.
Definition: fwd.hpp:90
mat< 2, 2, f32, lowp > lowp_fmat2x2
Low single-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:630
vec< 4, i8, highp > highp_i8vec4
High qualifier 8 bit signed integer vector of 4 components type.
Definition: fwd.hpp:235
vec< 4, u64, lowp > lowp_u64vec4
Low qualifier 64 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:387
vec< 2, i64, mediump > mediump_i64vec2
Medium qualifier 64 bit signed integer vector of 2 components type.
Definition: fwd.hpp:288
mat< 4, 2, f64, highp > highp_f64mat4x2
High double-qualifier floating-point 4x2 matrix.
Definition: fwd.hpp:776
int16 mediump_int16_t
Medium qualifier 16 bit signed integer type.
Definition: fwd.hpp:55
int8 lowp_i8
Low qualifier 8 bit signed integer type.
Definition: fwd.hpp:31
vec< 3, i64, defaultp > i64vec3
64 bit signed integer vector of 3 components type.
Definition: fwd.hpp:299
vec< 2, i32, lowp > lowp_i32vec2
Low qualifier 32 bit signed integer vector of 2 components type.
Definition: fwd.hpp:263
qua< f64, highp > highp_f64quat
High double-qualifier floating-point quaternion.
Definition: fwd.hpp:814
vec< 2, f64, mediump > mediump_f64vec2
Medium double-qualifier floating-point vector of 2 components.
Definition: fwd.hpp:492
uint16 highp_uint16_t
High qualifier 16 bit unsigned integer type.
Definition: fwd.hpp:114
vec< 1, float, lowp > lowp_fvec1
Low single-qualifier floating-point vector of 1 component.
Definition: fwd.hpp:426
int8 i8
8 bit signed integer type.
Definition: fwd.hpp:34
uint64 mediump_uint64_t
Medium qualifier 64 bit unsigned integer type.
Definition: fwd.hpp:141
vec< 1, u64, mediump > mediump_u64vec1
Medium qualifier 64 bit unsigned integer vector of 1 component type.
Definition: fwd.hpp:389
mat< 2, 2, f32, mediump > mediump_f32mat2
Medium single-qualifier floating-point 2x2 matrix.
Definition: fwd.hpp:544
uint8 mediump_uint8_t
Medium qualifier 8 bit unsigned integer type.
Definition: fwd.hpp:99
double mediump_f64
Medium 64 bit double-qualifier floating-point scalar.
Definition: fwd.hpp:164
vec< 1, float, mediump > mediump_fvec1
Medium single-qualifier floating-point vector of 1 component.
Definition: fwd.hpp:431
uint16 mediump_uint16
Medium qualifier 16 bit unsigned integer type.
Definition: fwd.hpp:109
vec< 4, u8, highp > highp_u8vec4
High qualifier 8 bit unsigned integer vector of 4 components type.
Definition: fwd.hpp:337
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00175.html ================================================ 0.9.9 API documentation: type_ptr.hpp File Reference
type_ptr.hpp File Reference

GLM_GTC_type_ptr More...


Functions

template<typename T >
GLM_FUNC_DECL mat< 2, 2, T, defaultp > make_mat2 (T const *const ptr)
 Build a matrix from a pointer. More...
 
template<typename T >
GLM_FUNC_DECL mat< 2, 2, T, defaultp > make_mat2x2 (T const *const ptr)
 Build a matrix from a pointer. More...
 
template<typename T >
GLM_FUNC_DECL mat< 2, 3, T, defaultp > make_mat2x3 (T const *const ptr)
 Build a matrix from a pointer. More...
 
template<typename T >
GLM_FUNC_DECL mat< 2, 4, T, defaultp > make_mat2x4 (T const *const ptr)
 Build a matrix from a pointer. More...
 
template<typename T >
GLM_FUNC_DECL mat< 3, 3, T, defaultp > make_mat3 (T const *const ptr)
 Build a matrix from a pointer. More...
 
template<typename T >
GLM_FUNC_DECL mat< 3, 2, T, defaultp > make_mat3x2 (T const *const ptr)
 Build a matrix from a pointer. More...
 
template<typename T >
GLM_FUNC_DECL mat< 3, 3, T, defaultp > make_mat3x3 (T const *const ptr)
 Build a matrix from a pointer. More...
 
template<typename T >
GLM_FUNC_DECL mat< 3, 4, T, defaultp > make_mat3x4 (T const *const ptr)
 Build a matrix from a pointer. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > make_mat4 (T const *const ptr)
 Build a matrix from a pointer. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 2, T, defaultp > make_mat4x2 (T const *const ptr)
 Build a matrix from a pointer. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 3, T, defaultp > make_mat4x3 (T const *const ptr)
 Build a matrix from a pointer. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > make_mat4x4 (T const *const ptr)
 Build a matrix from a pointer. More...
 
template<typename T >
GLM_FUNC_DECL qua< T, defaultp > make_quat (T const *const ptr)
 Build a quaternion from a pointer. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 1, T, Q > make_vec1 (vec< 1, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 1, T, Q > make_vec1 (vec< 2, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 1, T, Q > make_vec1 (vec< 3, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 1, T, Q > make_vec1 (vec< 4, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 2, T, Q > make_vec2 (vec< 1, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 2, T, Q > make_vec2 (vec< 2, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 2, T, Q > make_vec2 (vec< 3, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 2, T, Q > make_vec2 (vec< 4, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T >
GLM_FUNC_DECL vec< 2, T, defaultp > make_vec2 (T const *const ptr)
 Build a vector from a pointer. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > make_vec3 (vec< 1, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > make_vec3 (vec< 2, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > make_vec3 (vec< 3, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > make_vec3 (vec< 4, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T >
GLM_FUNC_DECL vec< 3, T, defaultp > make_vec3 (T const *const ptr)
 Build a vector from a pointer. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, T, Q > make_vec4 (vec< 1, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, T, Q > make_vec4 (vec< 2, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, T, Q > make_vec4 (vec< 3, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, T, Q > make_vec4 (vec< 4, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T >
GLM_FUNC_DECL vec< 4, T, defaultp > make_vec4 (T const *const ptr)
 Build a vector from a pointer. More...
 
template<typename genType >
GLM_FUNC_DECL genType::value_type const * value_ptr (genType const &v)
 Return a constant pointer to the data of the input parameter. More...
 

Detailed Description

GLM_GTC_type_ptr

See also
Core features (dependence)
GLM_GTC_quaternion (dependence)

Definition in file type_ptr.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00175_source.html ================================================ 0.9.9 API documentation: type_ptr.hpp Source File
type_ptr.hpp
1 
34 #pragma once
35 
36 // Dependency:
37 #include "../gtc/quaternion.hpp"
38 #include "../gtc/vec1.hpp"
39 #include "../vec2.hpp"
40 #include "../vec3.hpp"
41 #include "../vec4.hpp"
42 #include "../mat2x2.hpp"
43 #include "../mat2x3.hpp"
44 #include "../mat2x4.hpp"
45 #include "../mat3x2.hpp"
46 #include "../mat3x3.hpp"
47 #include "../mat3x4.hpp"
48 #include "../mat4x2.hpp"
49 #include "../mat4x3.hpp"
50 #include "../mat4x4.hpp"
51 #include <cstring>
52 
53 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
54 # pragma message("GLM: GLM_GTC_type_ptr extension included")
55 #endif
56 
57 namespace glm
58 {
61 
64  template<typename genType>
65  GLM_FUNC_DECL typename genType::value_type const * value_ptr(genType const& v);
66 
69  template <typename T, qualifier Q>
70  GLM_FUNC_DECL vec<1, T, Q> make_vec1(vec<1, T, Q> const& v);
71 
74  template <typename T, qualifier Q>
75  GLM_FUNC_DECL vec<1, T, Q> make_vec1(vec<2, T, Q> const& v);
76 
79  template <typename T, qualifier Q>
80  GLM_FUNC_DECL vec<1, T, Q> make_vec1(vec<3, T, Q> const& v);
81 
84  template <typename T, qualifier Q>
85  GLM_FUNC_DECL vec<1, T, Q> make_vec1(vec<4, T, Q> const& v);
86 
89  template <typename T, qualifier Q>
90  GLM_FUNC_DECL vec<2, T, Q> make_vec2(vec<1, T, Q> const& v);
91 
94  template <typename T, qualifier Q>
95  GLM_FUNC_DECL vec<2, T, Q> make_vec2(vec<2, T, Q> const& v);
96 
99  template <typename T, qualifier Q>
100  GLM_FUNC_DECL vec<2, T, Q> make_vec2(vec<3, T, Q> const& v);
101 
104  template <typename T, qualifier Q>
105  GLM_FUNC_DECL vec<2, T, Q> make_vec2(vec<4, T, Q> const& v);
106 
109  template <typename T, qualifier Q>
110  GLM_FUNC_DECL vec<3, T, Q> make_vec3(vec<1, T, Q> const& v);
111 
114  template <typename T, qualifier Q>
115  GLM_FUNC_DECL vec<3, T, Q> make_vec3(vec<2, T, Q> const& v);
116 
119  template <typename T, qualifier Q>
120  GLM_FUNC_DECL vec<3, T, Q> make_vec3(vec<3, T, Q> const& v);
121 
124  template <typename T, qualifier Q>
125  GLM_FUNC_DECL vec<3, T, Q> make_vec3(vec<4, T, Q> const& v);
126 
129  template <typename T, qualifier Q>
130  GLM_FUNC_DECL vec<4, T, Q> make_vec4(vec<1, T, Q> const& v);
131 
134  template <typename T, qualifier Q>
135  GLM_FUNC_DECL vec<4, T, Q> make_vec4(vec<2, T, Q> const& v);
136 
139  template <typename T, qualifier Q>
140  GLM_FUNC_DECL vec<4, T, Q> make_vec4(vec<3, T, Q> const& v);
141 
144  template <typename T, qualifier Q>
145  GLM_FUNC_DECL vec<4, T, Q> make_vec4(vec<4, T, Q> const& v);
146 
149  template<typename T>
150  GLM_FUNC_DECL vec<2, T, defaultp> make_vec2(T const * const ptr);
151 
154  template<typename T>
155  GLM_FUNC_DECL vec<3, T, defaultp> make_vec3(T const * const ptr);
156 
159  template<typename T>
160  GLM_FUNC_DECL vec<4, T, defaultp> make_vec4(T const * const ptr);
161 
164  template<typename T>
165  GLM_FUNC_DECL mat<2, 2, T, defaultp> make_mat2x2(T const * const ptr);
166 
169  template<typename T>
170  GLM_FUNC_DECL mat<2, 3, T, defaultp> make_mat2x3(T const * const ptr);
171 
174  template<typename T>
175  GLM_FUNC_DECL mat<2, 4, T, defaultp> make_mat2x4(T const * const ptr);
176 
179  template<typename T>
180  GLM_FUNC_DECL mat<3, 2, T, defaultp> make_mat3x2(T const * const ptr);
181 
184  template<typename T>
185  GLM_FUNC_DECL mat<3, 3, T, defaultp> make_mat3x3(T const * const ptr);
186 
189  template<typename T>
190  GLM_FUNC_DECL mat<3, 4, T, defaultp> make_mat3x4(T const * const ptr);
191 
194  template<typename T>
195  GLM_FUNC_DECL mat<4, 2, T, defaultp> make_mat4x2(T const * const ptr);
196 
199  template<typename T>
200  GLM_FUNC_DECL mat<4, 3, T, defaultp> make_mat4x3(T const * const ptr);
201 
204  template<typename T>
205  GLM_FUNC_DECL mat<4, 4, T, defaultp> make_mat4x4(T const * const ptr);
206 
209  template<typename T>
210  GLM_FUNC_DECL mat<2, 2, T, defaultp> make_mat2(T const * const ptr);
211 
214  template<typename T>
215  GLM_FUNC_DECL mat<3, 3, T, defaultp> make_mat3(T const * const ptr);
216 
219  template<typename T>
220  GLM_FUNC_DECL mat<4, 4, T, defaultp> make_mat4(T const * const ptr);
221 
224  template<typename T>
225  GLM_FUNC_DECL qua<T, defaultp> make_quat(T const * const ptr);
226 
228 }//namespace glm
229 
230 #include "type_ptr.inl"
GLM_FUNC_DECL mat< 3, 3, T, defaultp > make_mat3(T const *const ptr)
Build a matrix from a pointer.
GLM_FUNC_DECL vec< 3, T, defaultp > make_vec3(T const *const ptr)
Build a vector from a pointer.
GLM_FUNC_DECL mat< 3, 2, T, defaultp > make_mat3x2(T const *const ptr)
Build a matrix from a pointer.
GLM_FUNC_DECL vec< 1, T, Q > make_vec1(vec< 4, T, Q > const &v)
Build a vector from another vector.
GLM_FUNC_DECL qua< T, defaultp > make_quat(T const *const ptr)
Build a quaternion from a pointer.
GLM_FUNC_DECL mat< 4, 4, T, defaultp > make_mat4(T const *const ptr)
Build a matrix from a pointer.
GLM_FUNC_DECL vec< 2, T, defaultp > make_vec2(T const *const ptr)
Build a vector from a pointer.
GLM_FUNC_DECL mat< 2, 4, T, defaultp > make_mat2x4(T const *const ptr)
Build a matrix from a pointer.
GLM_FUNC_DECL mat< 2, 2, T, defaultp > make_mat2(T const *const ptr)
Build a matrix from a pointer.
GLM_FUNC_DECL genType::value_type const * value_ptr(genType const &v)
Return a constant pointer to the data of the input parameter.
GLM_FUNC_DECL mat< 2, 2, T, defaultp > make_mat2x2(T const *const ptr)
Build a matrix from a pointer.
GLM_FUNC_DECL mat< 2, 3, T, defaultp > make_mat2x3(T const *const ptr)
Build a matrix from a pointer.
GLM_FUNC_DECL mat< 3, 4, T, defaultp > make_mat3x4(T const *const ptr)
Build a matrix from a pointer.
GLM_FUNC_DECL vec< 4, T, defaultp > make_vec4(T const *const ptr)
Build a vector from a pointer.
GLM_FUNC_DECL mat< 4, 3, T, defaultp > make_mat4x3(T const *const ptr)
Build a matrix from a pointer.
GLM_FUNC_DECL mat< 3, 3, T, defaultp > make_mat3x3(T const *const ptr)
Build a matrix from a pointer.
GLM_FUNC_DECL mat< 4, 4, T, defaultp > make_mat4x4(T const *const ptr)
Build a matrix from a pointer.
GLM_FUNC_DECL mat< 4, 2, T, defaultp > make_mat4x2(T const *const ptr)
Build a matrix from a pointer.
Definition: common.hpp:20
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00176.html ================================================ 0.9.9 API documentation: type_quat.hpp File Reference
type_quat.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00176_source.html ================================================ 0.9.9 API documentation: type_quat.hpp Source File
type_quat.hpp
1 
4 #pragma once
5 
6 // Dependency:
7 #include "../detail/type_mat3x3.hpp"
8 #include "../detail/type_mat4x4.hpp"
9 #include "../detail/type_vec3.hpp"
10 #include "../detail/type_vec4.hpp"
11 #include "../ext/vector_relational.hpp"
12 #include "../ext/quaternion_relational.hpp"
13 #include "../gtc/constants.hpp"
14 #include "../gtc/matrix_transform.hpp"
15 
16 namespace glm
17 {
18  template<typename T, qualifier Q>
19  struct qua
20  {
21  // -- Implementation detail --
22 
23  typedef qua<T, Q> type;
24  typedef T value_type;
25 
26  // -- Data --
27 
28 # if GLM_SILENT_WARNINGS == GLM_ENABLE
29 # if GLM_COMPILER & GLM_COMPILER_GCC
30 # pragma GCC diagnostic push
31 # pragma GCC diagnostic ignored "-Wpedantic"
32 # elif GLM_COMPILER & GLM_COMPILER_CLANG
33 # pragma clang diagnostic push
34 # pragma clang diagnostic ignored "-Wgnu-anonymous-struct"
35 # pragma clang diagnostic ignored "-Wnested-anon-types"
36 # elif GLM_COMPILER & GLM_COMPILER_VC
37 # pragma warning(push)
38 # pragma warning(disable: 4201) // nonstandard extension used : nameless struct/union
39 # endif
40 # endif
41 
42 # if GLM_LANG & GLM_LANG_CXXMS_FLAG
43  union
44  {
45  struct { T x, y, z, w;};
46 
47  typename detail::storage<4, T, detail::is_aligned<Q>::value>::type data;
48  };
49 # else
50  T x, y, z, w;
51 # endif
52 
53 # if GLM_SILENT_WARNINGS == GLM_ENABLE
54 # if GLM_COMPILER & GLM_COMPILER_CLANG
55 # pragma clang diagnostic pop
56 # elif GLM_COMPILER & GLM_COMPILER_GCC
57 # pragma GCC diagnostic pop
58 # elif GLM_COMPILER & GLM_COMPILER_VC
59 # pragma warning(pop)
60 # endif
61 # endif
62 
63  // -- Component accesses --
64 
65  typedef length_t length_type;
66 
68  GLM_FUNC_DECL static GLM_CONSTEXPR length_type length(){return 4;}
69 
70  GLM_FUNC_DECL GLM_CONSTEXPR T & operator[](length_type i);
71  GLM_FUNC_DECL GLM_CONSTEXPR T const& operator[](length_type i) const;
72 
73  // -- Implicit basic constructors --
74 
75  GLM_FUNC_DECL GLM_CONSTEXPR qua() GLM_DEFAULT;
76  GLM_FUNC_DECL GLM_CONSTEXPR qua(qua<T, Q> const& q) GLM_DEFAULT;
77  template<qualifier P>
78  GLM_FUNC_DECL GLM_CONSTEXPR qua(qua<T, P> const& q);
79 
80  // -- Explicit basic constructors --
81 
82  GLM_FUNC_DECL GLM_CONSTEXPR qua(T s, vec<3, T, Q> const& v);
83  GLM_FUNC_DECL GLM_CONSTEXPR qua(T w, T x, T y, T z);
84 
85  // -- Conversion constructors --
86 
87  template<typename U, qualifier P>
88  GLM_FUNC_DECL GLM_CONSTEXPR GLM_EXPLICIT qua(qua<U, P> const& q);
89 
91 # if GLM_HAS_EXPLICIT_CONVERSION_OPERATORS
92  GLM_FUNC_DECL explicit operator mat<3, 3, T, Q>() const;
93  GLM_FUNC_DECL explicit operator mat<4, 4, T, Q>() const;
94 # endif
95 
102  GLM_FUNC_DECL qua(vec<3, T, Q> const& u, vec<3, T, Q> const& v);
103 
105  GLM_FUNC_DECL GLM_EXPLICIT qua(vec<3, T, Q> const& eulerAngles);
106  GLM_FUNC_DECL GLM_EXPLICIT qua(mat<3, 3, T, Q> const& q);
107  GLM_FUNC_DECL GLM_EXPLICIT qua(mat<4, 4, T, Q> const& q);
108 
109  // -- Unary arithmetic operators --
110 
111  GLM_FUNC_DECL qua<T, Q>& operator=(qua<T, Q> const& q) GLM_DEFAULT;
112 
113  template<typename U>
114  GLM_FUNC_DECL qua<T, Q>& operator=(qua<U, Q> const& q);
115  template<typename U>
116  GLM_FUNC_DECL qua<T, Q>& operator+=(qua<U, Q> const& q);
117  template<typename U>
118  GLM_FUNC_DECL qua<T, Q>& operator-=(qua<U, Q> const& q);
119  template<typename U>
120  GLM_FUNC_DECL qua<T, Q>& operator*=(qua<U, Q> const& q);
121  template<typename U>
122  GLM_FUNC_DECL qua<T, Q>& operator*=(U s);
123  template<typename U>
124  GLM_FUNC_DECL qua<T, Q>& operator/=(U s);
125  };
126 
127  // -- Unary bit operators --
128 
129  template<typename T, qualifier Q>
130  GLM_FUNC_DECL qua<T, Q> operator+(qua<T, Q> const& q);
131 
132  template<typename T, qualifier Q>
133  GLM_FUNC_DECL qua<T, Q> operator-(qua<T, Q> const& q);
134 
135  // -- Binary operators --
136 
137  template<typename T, qualifier Q>
138  GLM_FUNC_DECL qua<T, Q> operator+(qua<T, Q> const& q, qua<T, Q> const& p);
139 
140  template<typename T, qualifier Q>
141  GLM_FUNC_DECL qua<T, Q> operator-(qua<T, Q> const& q, qua<T, Q> const& p);
142 
143  template<typename T, qualifier Q>
144  GLM_FUNC_DECL qua<T, Q> operator*(qua<T, Q> const& q, qua<T, Q> const& p);
145 
146  template<typename T, qualifier Q>
147  GLM_FUNC_DECL vec<3, T, Q> operator*(qua<T, Q> const& q, vec<3, T, Q> const& v);
148 
149  template<typename T, qualifier Q>
150  GLM_FUNC_DECL vec<3, T, Q> operator*(vec<3, T, Q> const& v, qua<T, Q> const& q);
151 
152  template<typename T, qualifier Q>
153  GLM_FUNC_DECL vec<4, T, Q> operator*(qua<T, Q> const& q, vec<4, T, Q> const& v);
154 
155  template<typename T, qualifier Q>
156  GLM_FUNC_DECL vec<4, T, Q> operator*(vec<4, T, Q> const& v, qua<T, Q> const& q);
157 
158  template<typename T, qualifier Q>
159  GLM_FUNC_DECL qua<T, Q> operator*(qua<T, Q> const& q, T const& s);
160 
161  template<typename T, qualifier Q>
162  GLM_FUNC_DECL qua<T, Q> operator*(T const& s, qua<T, Q> const& q);
163 
164  template<typename T, qualifier Q>
165  GLM_FUNC_DECL qua<T, Q> operator/(qua<T, Q> const& q, T const& s);
166 
167  // -- Boolean operators --
168 
169  template<typename T, qualifier Q>
170  GLM_FUNC_DECL GLM_CONSTEXPR bool operator==(qua<T, Q> const& q1, qua<T, Q> const& q2);
171 
172  template<typename T, qualifier Q>
173  GLM_FUNC_DECL GLM_CONSTEXPR bool operator!=(qua<T, Q> const& q1, qua<T, Q> const& q2);
174 } //namespace glm
175 
176 #ifndef GLM_EXTERNAL_TEMPLATE
177 #include "type_quat.inl"
178 #endif//GLM_EXTERNAL_TEMPLATE
GLM_FUNC_DECL vec< 3, T, Q > eulerAngles(qua< T, Q > const &x)
Returns Euler angles: pitch as x, yaw as y, roll as z.
GLM_FUNC_DECL T length(qua< T, Q > const &q)
Returns the norm of a quaternion.
Definition: common.hpp:20
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00177.html ================================================ 0.9.9 API documentation: type_trait.hpp File Reference
type_trait.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00177_source.html ================================================ 0.9.9 API documentation: type_trait.hpp Source File
type_trait.hpp
1 
13 #pragma once
14 
15 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
16 # ifndef GLM_ENABLE_EXPERIMENTAL
17 # pragma message("GLM: GLM_GTX_type_trait is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
18 # else
19 # pragma message("GLM: GLM_GTX_type_trait extension included")
20 # endif
21 #endif
22 
23 // Dependency:
24 #include "../detail/qualifier.hpp"
25 #include "../gtc/quaternion.hpp"
26 #include "../gtx/dual_quaternion.hpp"
27 
28 namespace glm
29 {
32 
33  template<typename T>
34  struct type
35  {
36  static bool const is_vec = false;
37  static bool const is_mat = false;
38  static bool const is_quat = false;
39  static length_t const components = 0;
40  static length_t const cols = 0;
41  static length_t const rows = 0;
42  };
43 
44  template<length_t L, typename T, qualifier Q>
45  struct type<vec<L, T, Q> >
46  {
47  static bool const is_vec = true;
48  static bool const is_mat = false;
49  static bool const is_quat = false;
50  static length_t const components = L;
51  };
52 
53  template<length_t C, length_t R, typename T, qualifier Q>
54  struct type<mat<C, R, T, Q> >
55  {
56  static bool const is_vec = false;
57  static bool const is_mat = true;
58  static bool const is_quat = false;
59  static length_t const components = C;
60  static length_t const cols = C;
61  static length_t const rows = R;
62  };
63 
64  template<typename T, qualifier Q>
65  struct type<qua<T, Q> >
66  {
67  static bool const is_vec = false;
68  static bool const is_mat = false;
69  static bool const is_quat = true;
70  static length_t const components = 4;
71  };
72 
73  template<typename T, qualifier Q>
74  struct type<tdualquat<T, Q> >
75  {
76  static bool const is_vec = false;
77  static bool const is_mat = false;
78  static bool const is_quat = true;
79  static length_t const components = 8;
80  };
81 
83 }//namespace glm
84 
85 #include "type_trait.inl"
Definition: common.hpp:20
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00178.html ================================================ 0.9.9 API documentation: type_vec1.hpp File Reference
type_vec1.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00178_source.html ================================================ 0.9.9 API documentation: type_vec1.hpp Source File
type_vec1.hpp
1 
4 #pragma once
5 
6 #include "qualifier.hpp"
7 #if GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_OPERATOR
8 # include "_swizzle.hpp"
9 #elif GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_FUNCTION
10 # include "_swizzle_func.hpp"
11 #endif
12 #include <cstddef>
13 
14 namespace glm
15 {
16  template<typename T, qualifier Q>
17  struct vec<1, T, Q>
18  {
19  // -- Implementation detail --
20 
21  typedef T value_type;
22  typedef vec<1, T, Q> type;
23  typedef vec<1, bool, Q> bool_type;
24 
25  // -- Data --
26 
27 # if GLM_SILENT_WARNINGS == GLM_ENABLE
28 # if GLM_COMPILER & GLM_COMPILER_GCC
29 # pragma GCC diagnostic push
30 # pragma GCC diagnostic ignored "-Wpedantic"
31 # elif GLM_COMPILER & GLM_COMPILER_CLANG
32 # pragma clang diagnostic push
33 # pragma clang diagnostic ignored "-Wgnu-anonymous-struct"
34 # pragma clang diagnostic ignored "-Wnested-anon-types"
35 # elif GLM_COMPILER & GLM_COMPILER_VC
36 # pragma warning(push)
37 # pragma warning(disable: 4201) // nonstandard extension used : nameless struct/union
38 # endif
39 # endif
40 
41 # if GLM_CONFIG_XYZW_ONLY
42  T x;
43 # elif GLM_CONFIG_ANONYMOUS_STRUCT == GLM_ENABLE
44  union
45  {
46  T x;
47  T r;
48  T s;
49 
50  typename detail::storage<1, T, detail::is_aligned<Q>::value>::type data;
51 /*
52 # if GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_OPERATOR
53  _GLM_SWIZZLE1_2_MEMBERS(T, Q, x)
54  _GLM_SWIZZLE1_2_MEMBERS(T, Q, r)
55  _GLM_SWIZZLE1_2_MEMBERS(T, Q, s)
56  _GLM_SWIZZLE1_3_MEMBERS(T, Q, x)
57  _GLM_SWIZZLE1_3_MEMBERS(T, Q, r)
58  _GLM_SWIZZLE1_3_MEMBERS(T, Q, s)
59  _GLM_SWIZZLE1_4_MEMBERS(T, Q, x)
60  _GLM_SWIZZLE1_4_MEMBERS(T, Q, r)
61  _GLM_SWIZZLE1_4_MEMBERS(T, Q, s)
62 # endif
63 */
64  };
65 # else
66  union {T x, r, s;};
67 /*
68 # if GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_FUNCTION
69  GLM_SWIZZLE_GEN_VEC_FROM_VEC1(T, Q)
70 # endif
71 */
72 # endif
73 
74 # if GLM_SILENT_WARNINGS == GLM_ENABLE
75 # if GLM_COMPILER & GLM_COMPILER_CLANG
76 # pragma clang diagnostic pop
77 # elif GLM_COMPILER & GLM_COMPILER_GCC
78 # pragma GCC diagnostic pop
79 # elif GLM_COMPILER & GLM_COMPILER_VC
80 # pragma warning(pop)
81 # endif
82 # endif
83 
84  // -- Component accesses --
85 
87  typedef length_t length_type;
88  GLM_FUNC_DECL static GLM_CONSTEXPR length_type length(){return 1;}
89 
90  GLM_FUNC_DECL GLM_CONSTEXPR T & operator[](length_type i);
91  GLM_FUNC_DECL GLM_CONSTEXPR T const& operator[](length_type i) const;
92 
93  // -- Implicit basic constructors --
94 
95  GLM_FUNC_DECL GLM_CONSTEXPR vec() GLM_DEFAULT;
96  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec const& v) GLM_DEFAULT;
97  template<qualifier P>
98  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<1, T, P> const& v);
99 
100  // -- Explicit basic constructors --
101 
102  GLM_FUNC_DECL GLM_CONSTEXPR explicit vec(T scalar);
103 
104  // -- Conversion vector constructors --
105 
107  template<typename U, qualifier P>
108  GLM_FUNC_DECL GLM_CONSTEXPR GLM_EXPLICIT vec(vec<2, U, P> const& v);
110  template<typename U, qualifier P>
111  GLM_FUNC_DECL GLM_CONSTEXPR GLM_EXPLICIT vec(vec<3, U, P> const& v);
113  template<typename U, qualifier P>
114  GLM_FUNC_DECL GLM_CONSTEXPR GLM_EXPLICIT vec(vec<4, U, P> const& v);
115 
117  template<typename U, qualifier P>
118  GLM_FUNC_DECL GLM_CONSTEXPR GLM_EXPLICIT vec(vec<1, U, P> const& v);
119 
120  // -- Swizzle constructors --
121 /*
122 # if GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_OPERATOR
123  template<int E0>
124  GLM_FUNC_DECL GLM_CONSTEXPR vec(detail::_swizzle<1, T, Q, E0, -1,-2,-3> const& that)
125  {
126  *this = that();
127  }
128 # endif//GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_OPERATOR
129 */
130  // -- Unary arithmetic operators --
131 
132  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> & operator=(vec const& v) GLM_DEFAULT;
133 
134  template<typename U>
135  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> & operator=(vec<1, U, Q> const& v);
136  template<typename U>
137  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> & operator+=(U scalar);
138  template<typename U>
139  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> & operator+=(vec<1, U, Q> const& v);
140  template<typename U>
141  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> & operator-=(U scalar);
142  template<typename U>
143  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> & operator-=(vec<1, U, Q> const& v);
144  template<typename U>
145  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> & operator*=(U scalar);
146  template<typename U>
147  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> & operator*=(vec<1, U, Q> const& v);
148  template<typename U>
149  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> & operator/=(U scalar);
150  template<typename U>
151  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> & operator/=(vec<1, U, Q> const& v);
152 
153  // -- Increment and decrement operators --
154 
155  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> & operator++();
156  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> & operator--();
157  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator++(int);
158  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator--(int);
159 
160  // -- Unary bit operators --
161 
162  template<typename U>
163  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> & operator%=(U scalar);
164  template<typename U>
165  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> & operator%=(vec<1, U, Q> const& v);
166  template<typename U>
167  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> & operator&=(U scalar);
168  template<typename U>
169  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> & operator&=(vec<1, U, Q> const& v);
170  template<typename U>
171  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> & operator|=(U scalar);
172  template<typename U>
173  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> & operator|=(vec<1, U, Q> const& v);
174  template<typename U>
175  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> & operator^=(U scalar);
176  template<typename U>
177  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> & operator^=(vec<1, U, Q> const& v);
178  template<typename U>
179  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> & operator<<=(U scalar);
180  template<typename U>
181  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> & operator<<=(vec<1, U, Q> const& v);
182  template<typename U>
183  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> & operator>>=(U scalar);
184  template<typename U>
185  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> & operator>>=(vec<1, U, Q> const& v);
186  };
187 
188  // -- Unary operators --
189 
190  template<typename T, qualifier Q>
191  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator+(vec<1, T, Q> const& v);
192 
193  template<typename T, qualifier Q>
194  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator-(vec<1, T, Q> const& v);
195 
196  // -- Binary operators --
197 
198  template<typename T, qualifier Q>
199  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator+(vec<1, T, Q> const& v, T scalar);
200 
201  template<typename T, qualifier Q>
202  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator+(T scalar, vec<1, T, Q> const& v);
203 
204  template<typename T, qualifier Q>
205  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator+(vec<1, T, Q> const& v1, vec<1, T, Q> const& v2);
206 
207  template<typename T, qualifier Q>
208  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator-(vec<1, T, Q> const& v, T scalar);
209 
210  template<typename T, qualifier Q>
211  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator-(T scalar, vec<1, T, Q> const& v);
212 
213  template<typename T, qualifier Q>
214  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator-(vec<1, T, Q> const& v1, vec<1, T, Q> const& v2);
215 
216  template<typename T, qualifier Q>
217  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator*(vec<1, T, Q> const& v, T scalar);
218 
219  template<typename T, qualifier Q>
220  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator*(T scalar, vec<1, T, Q> const& v);
221 
222  template<typename T, qualifier Q>
223  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator*(vec<1, T, Q> const& v1, vec<1, T, Q> const& v2);
224 
225  template<typename T, qualifier Q>
226  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator/(vec<1, T, Q> const& v, T scalar);
227 
228  template<typename T, qualifier Q>
229  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator/(T scalar, vec<1, T, Q> const& v);
230 
231  template<typename T, qualifier Q>
232  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator/(vec<1, T, Q> const& v1, vec<1, T, Q> const& v2);
233 
234  template<typename T, qualifier Q>
235  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator%(vec<1, T, Q> const& v, T scalar);
236 
237  template<typename T, qualifier Q>
238  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator%(T scalar, vec<1, T, Q> const& v);
239 
240  template<typename T, qualifier Q>
241  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator%(vec<1, T, Q> const& v1, vec<1, T, Q> const& v2);
242 
243  template<typename T, qualifier Q>
244  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator&(vec<1, T, Q> const& v, T scalar);
245 
246  template<typename T, qualifier Q>
247  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator&(T scalar, vec<1, T, Q> const& v);
248 
249  template<typename T, qualifier Q>
250  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator&(vec<1, T, Q> const& v1, vec<1, T, Q> const& v2);
251 
252  template<typename T, qualifier Q>
253  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator|(vec<1, T, Q> const& v, T scalar);
254 
255  template<typename T, qualifier Q>
256  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator|(T scalar, vec<1, T, Q> const& v);
257 
258  template<typename T, qualifier Q>
259  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator|(vec<1, T, Q> const& v1, vec<1, T, Q> const& v2);
260 
261  template<typename T, qualifier Q>
262  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator^(vec<1, T, Q> const& v, T scalar);
263 
264  template<typename T, qualifier Q>
265  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator^(T scalar, vec<1, T, Q> const& v);
266 
267  template<typename T, qualifier Q>
268  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator^(vec<1, T, Q> const& v1, vec<1, T, Q> const& v2);
269 
270  template<typename T, qualifier Q>
271  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator<<(vec<1, T, Q> const& v, T scalar);
272 
273  template<typename T, qualifier Q>
274  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator<<(T scalar, vec<1, T, Q> const& v);
275 
276  template<typename T, qualifier Q>
277  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator<<(vec<1, T, Q> const& v1, vec<1, T, Q> const& v2);
278 
279  template<typename T, qualifier Q>
280  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator>>(vec<1, T, Q> const& v, T scalar);
281 
282  template<typename T, qualifier Q>
283  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator>>(T scalar, vec<1, T, Q> const& v);
284 
285  template<typename T, qualifier Q>
286  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator>>(vec<1, T, Q> const& v1, vec<1, T, Q> const& v2);
287 
288  template<typename T, qualifier Q>
289  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, T, Q> operator~(vec<1, T, Q> const& v);
290 
291  // -- Boolean operators --
292 
293  template<typename T, qualifier Q>
294  GLM_FUNC_DECL GLM_CONSTEXPR bool operator==(vec<1, T, Q> const& v1, vec<1, T, Q> const& v2);
295 
296  template<typename T, qualifier Q>
297  GLM_FUNC_DECL GLM_CONSTEXPR bool operator!=(vec<1, T, Q> const& v1, vec<1, T, Q> const& v2);
298 
299  template<qualifier Q>
300  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, bool, Q> operator&&(vec<1, bool, Q> const& v1, vec<1, bool, Q> const& v2);
301 
302  template<qualifier Q>
303  GLM_FUNC_DECL GLM_CONSTEXPR vec<1, bool, Q> operator||(vec<1, bool, Q> const& v1, vec<1, bool, Q> const& v2);
304 }//namespace glm
305 
306 #ifndef GLM_EXTERNAL_TEMPLATE
307 #include "type_vec1.inl"
308 #endif//GLM_EXTERNAL_TEMPLATE
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00179.html ================================================ 0.9.9 API documentation: type_vec2.hpp File Reference
0.9.9 API documentation
type_vec2.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00179_source.html ================================================ 0.9.9 API documentation: type_vec2.hpp Source File
type_vec2.hpp
1 
4 #pragma once
5 
6 #include "qualifier.hpp"
7 #if GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_OPERATOR
8 # include "_swizzle.hpp"
9 #elif GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_FUNCTION
10 # include "_swizzle_func.hpp"
11 #endif
12 #include <cstddef>
13 
14 namespace glm
15 {
16  template<typename T, qualifier Q>
17  struct vec<2, T, Q>
18  {
19  // -- Implementation detail --
20 
21  typedef T value_type;
22  typedef vec<2, T, Q> type;
23  typedef vec<2, bool, Q> bool_type;
24 
25  // -- Data --
26 
27 # if GLM_SILENT_WARNINGS == GLM_ENABLE
28 # if GLM_COMPILER & GLM_COMPILER_GCC
29 # pragma GCC diagnostic push
30 # pragma GCC diagnostic ignored "-Wpedantic"
31 # elif GLM_COMPILER & GLM_COMPILER_CLANG
32 # pragma clang diagnostic push
33 # pragma clang diagnostic ignored "-Wgnu-anonymous-struct"
34 # pragma clang diagnostic ignored "-Wnested-anon-types"
35 # elif GLM_COMPILER & GLM_COMPILER_VC
36 # pragma warning(push)
37 # pragma warning(disable: 4201) // nonstandard extension used : nameless struct/union
38 # endif
39 # endif
40 
41 # if GLM_CONFIG_XYZW_ONLY
42  T x, y;
43 # elif GLM_CONFIG_ANONYMOUS_STRUCT == GLM_ENABLE
44  union
45  {
46  struct{ T x, y; };
47  struct{ T r, g; };
48  struct{ T s, t; };
49 
50  typename detail::storage<2, T, detail::is_aligned<Q>::value>::type data;
51 
52 # if GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_OPERATOR
53  GLM_SWIZZLE2_2_MEMBERS(T, Q, x, y)
54  GLM_SWIZZLE2_2_MEMBERS(T, Q, r, g)
55  GLM_SWIZZLE2_2_MEMBERS(T, Q, s, t)
56  GLM_SWIZZLE2_3_MEMBERS(T, Q, x, y)
57  GLM_SWIZZLE2_3_MEMBERS(T, Q, r, g)
58  GLM_SWIZZLE2_3_MEMBERS(T, Q, s, t)
59  GLM_SWIZZLE2_4_MEMBERS(T, Q, x, y)
60  GLM_SWIZZLE2_4_MEMBERS(T, Q, r, g)
61  GLM_SWIZZLE2_4_MEMBERS(T, Q, s, t)
62 # endif
63  };
64 # else
65  union {T x, r, s;};
66  union {T y, g, t;};
67 
68 # if GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_FUNCTION
69  GLM_SWIZZLE_GEN_VEC_FROM_VEC2(T, Q)
70 # endif//GLM_CONFIG_SWIZZLE
71 # endif
72 
73 # if GLM_SILENT_WARNINGS == GLM_ENABLE
74 # if GLM_COMPILER & GLM_COMPILER_CLANG
75 # pragma clang diagnostic pop
76 # elif GLM_COMPILER & GLM_COMPILER_GCC
77 # pragma GCC diagnostic pop
78 # elif GLM_COMPILER & GLM_COMPILER_VC
79 # pragma warning(pop)
80 # endif
81 # endif
82 
83  // -- Component accesses --
84 
86  typedef length_t length_type;
87  GLM_FUNC_DECL static GLM_CONSTEXPR length_type length(){return 2;}
88 
89  GLM_FUNC_DECL GLM_CONSTEXPR T& operator[](length_type i);
90  GLM_FUNC_DECL GLM_CONSTEXPR T const& operator[](length_type i) const;
91 
92  // -- Implicit basic constructors --
93 
94  GLM_FUNC_DECL GLM_CONSTEXPR vec() GLM_DEFAULT;
95  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec const& v) GLM_DEFAULT;
96  template<qualifier P>
97  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<2, T, P> const& v);
98 
99  // -- Explicit basic constructors --
100 
101  GLM_FUNC_DECL GLM_CONSTEXPR explicit vec(T scalar);
102  GLM_FUNC_DECL GLM_CONSTEXPR vec(T x, T y);
103 
104  // -- Conversion constructors --
105 
106  template<typename U, qualifier P>
107  GLM_FUNC_DECL GLM_CONSTEXPR explicit vec(vec<1, U, P> const& v);
108 
110  template<typename A, typename B>
111  GLM_FUNC_DECL GLM_CONSTEXPR vec(A x, B y);
112  template<typename A, typename B>
113  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<1, A, Q> const& x, B y);
114  template<typename A, typename B>
115  GLM_FUNC_DECL GLM_CONSTEXPR vec(A x, vec<1, B, Q> const& y);
116  template<typename A, typename B>
117  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<1, A, Q> const& x, vec<1, B, Q> const& y);
118 
119  // -- Conversion vector constructors --
120 
122  template<typename U, qualifier P>
123  GLM_FUNC_DECL GLM_CONSTEXPR GLM_EXPLICIT vec(vec<3, U, P> const& v);
125  template<typename U, qualifier P>
126  GLM_FUNC_DECL GLM_CONSTEXPR GLM_EXPLICIT vec(vec<4, U, P> const& v);
127 
129  template<typename U, qualifier P>
130  GLM_FUNC_DECL GLM_CONSTEXPR GLM_EXPLICIT vec(vec<2, U, P> const& v);
131 
132  // -- Swizzle constructors --
133 # if GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_OPERATOR
134  template<int E0, int E1>
135  GLM_FUNC_DECL GLM_CONSTEXPR vec(detail::_swizzle<2, T, Q, E0, E1,-1,-2> const& that)
136  {
137  *this = that();
138  }
139 # endif//GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_OPERATOR
140 
141  // -- Unary arithmetic operators --
142 
143  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator=(vec const& v) GLM_DEFAULT;
144 
145  template<typename U>
146  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator=(vec<2, U, Q> const& v);
147  template<typename U>
148  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator+=(U scalar);
149  template<typename U>
150  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator+=(vec<1, U, Q> const& v);
151  template<typename U>
152  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator+=(vec<2, U, Q> const& v);
153  template<typename U>
154  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator-=(U scalar);
155  template<typename U>
156  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator-=(vec<1, U, Q> const& v);
157  template<typename U>
158  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator-=(vec<2, U, Q> const& v);
159  template<typename U>
160  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator*=(U scalar);
161  template<typename U>
162  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator*=(vec<1, U, Q> const& v);
163  template<typename U>
164  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator*=(vec<2, U, Q> const& v);
165  template<typename U>
166  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator/=(U scalar);
167  template<typename U>
168  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator/=(vec<1, U, Q> const& v);
169  template<typename U>
170  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator/=(vec<2, U, Q> const& v);
171 
172  // -- Increment and decrement operators --
173 
174  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator++();
175  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator--();
176  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator++(int);
177  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator--(int);
178 
179  // -- Unary bit operators --
180 
181  template<typename U>
182  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator%=(U scalar);
183  template<typename U>
184  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator%=(vec<1, U, Q> const& v);
185  template<typename U>
186  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator%=(vec<2, U, Q> const& v);
187  template<typename U>
188  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator&=(U scalar);
189  template<typename U>
190  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator&=(vec<1, U, Q> const& v);
191  template<typename U>
192  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator&=(vec<2, U, Q> const& v);
193  template<typename U>
194  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator|=(U scalar);
195  template<typename U>
196  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator|=(vec<1, U, Q> const& v);
197  template<typename U>
198  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator|=(vec<2, U, Q> const& v);
199  template<typename U>
200  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator^=(U scalar);
201  template<typename U>
202  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator^=(vec<1, U, Q> const& v);
203  template<typename U>
204  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator^=(vec<2, U, Q> const& v);
205  template<typename U>
206  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator<<=(U scalar);
207  template<typename U>
208  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator<<=(vec<1, U, Q> const& v);
209  template<typename U>
210  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator<<=(vec<2, U, Q> const& v);
211  template<typename U>
212  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator>>=(U scalar);
213  template<typename U>
214  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator>>=(vec<1, U, Q> const& v);
215  template<typename U>
216  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> & operator>>=(vec<2, U, Q> const& v);
217  };
218 
219  // -- Unary operators --
220 
221  template<typename T, qualifier Q>
222  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator+(vec<2, T, Q> const& v);
223 
224  template<typename T, qualifier Q>
225  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator-(vec<2, T, Q> const& v);
226 
227  // -- Binary operators --
228 
229  template<typename T, qualifier Q>
230  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator+(vec<2, T, Q> const& v, T scalar);
231 
232  template<typename T, qualifier Q>
233  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator+(vec<2, T, Q> const& v1, vec<1, T, Q> const& v2);
234 
235  template<typename T, qualifier Q>
236  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator+(T scalar, vec<2, T, Q> const& v);
237 
238  template<typename T, qualifier Q>
239  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator+(vec<1, T, Q> const& v1, vec<2, T, Q> const& v2);
240 
241  template<typename T, qualifier Q>
242  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator+(vec<2, T, Q> const& v1, vec<2, T, Q> const& v2);
243 
244  template<typename T, qualifier Q>
245  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator-(vec<2, T, Q> const& v, T scalar);
246 
247  template<typename T, qualifier Q>
248  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator-(vec<2, T, Q> const& v1, vec<1, T, Q> const& v2);
249 
250  template<typename T, qualifier Q>
251  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator-(T scalar, vec<2, T, Q> const& v);
252 
253  template<typename T, qualifier Q>
254  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator-(vec<1, T, Q> const& v1, vec<2, T, Q> const& v2);
255 
256  template<typename T, qualifier Q>
257  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator-(vec<2, T, Q> const& v1, vec<2, T, Q> const& v2);
258 
259  template<typename T, qualifier Q>
260  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator*(vec<2, T, Q> const& v, T scalar);
261 
262  template<typename T, qualifier Q>
263  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator*(vec<2, T, Q> const& v1, vec<1, T, Q> const& v2);
264 
265  template<typename T, qualifier Q>
266  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator*(T scalar, vec<2, T, Q> const& v);
267 
268  template<typename T, qualifier Q>
269  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator*(vec<1, T, Q> const& v1, vec<2, T, Q> const& v2);
270 
271  template<typename T, qualifier Q>
272  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator*(vec<2, T, Q> const& v1, vec<2, T, Q> const& v2);
273 
274  template<typename T, qualifier Q>
275  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator/(vec<2, T, Q> const& v, T scalar);
276 
277  template<typename T, qualifier Q>
278  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator/(vec<2, T, Q> const& v1, vec<1, T, Q> const& v2);
279 
280  template<typename T, qualifier Q>
281  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator/(T scalar, vec<2, T, Q> const& v);
282 
283  template<typename T, qualifier Q>
284  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator/(vec<1, T, Q> const& v1, vec<2, T, Q> const& v2);
285 
286  template<typename T, qualifier Q>
287  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator/(vec<2, T, Q> const& v1, vec<2, T, Q> const& v2);
288 
289  template<typename T, qualifier Q>
290  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator%(vec<2, T, Q> const& v, T scalar);
291 
292  template<typename T, qualifier Q>
293  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator%(vec<2, T, Q> const& v1, vec<1, T, Q> const& v2);
294 
295  template<typename T, qualifier Q>
296  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator%(T scalar, vec<2, T, Q> const& v);
297 
298  template<typename T, qualifier Q>
299  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator%(vec<1, T, Q> const& v1, vec<2, T, Q> const& v2);
300 
301  template<typename T, qualifier Q>
302  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator%(vec<2, T, Q> const& v1, vec<2, T, Q> const& v2);
303 
304  template<typename T, qualifier Q>
305  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator&(vec<2, T, Q> const& v, T scalar);
306 
307  template<typename T, qualifier Q>
308  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator&(vec<2, T, Q> const& v1, vec<1, T, Q> const& v2);
309 
310  template<typename T, qualifier Q>
311  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator&(T scalar, vec<2, T, Q> const& v);
312 
313  template<typename T, qualifier Q>
314  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator&(vec<1, T, Q> const& v1, vec<2, T, Q> const& v2);
315 
316  template<typename T, qualifier Q>
317  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator&(vec<2, T, Q> const& v1, vec<2, T, Q> const& v2);
318 
319  template<typename T, qualifier Q>
320  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator|(vec<2, T, Q> const& v, T scalar);
321 
322  template<typename T, qualifier Q>
323  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator|(vec<2, T, Q> const& v1, vec<1, T, Q> const& v2);
324 
325  template<typename T, qualifier Q>
326  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator|(T scalar, vec<2, T, Q> const& v);
327 
328  template<typename T, qualifier Q>
329  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator|(vec<1, T, Q> const& v1, vec<2, T, Q> const& v2);
330 
331  template<typename T, qualifier Q>
332  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator|(vec<2, T, Q> const& v1, vec<2, T, Q> const& v2);
333 
334  template<typename T, qualifier Q>
335  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator^(vec<2, T, Q> const& v, T scalar);
336 
337  template<typename T, qualifier Q>
338  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator^(vec<2, T, Q> const& v1, vec<1, T, Q> const& v2);
339 
340  template<typename T, qualifier Q>
341  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator^(T scalar, vec<2, T, Q> const& v);
342 
343  template<typename T, qualifier Q>
344  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator^(vec<1, T, Q> const& v1, vec<2, T, Q> const& v2);
345 
346  template<typename T, qualifier Q>
347  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator^(vec<2, T, Q> const& v1, vec<2, T, Q> const& v2);
348 
349  template<typename T, qualifier Q>
350  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator<<(vec<2, T, Q> const& v, T scalar);
351 
352  template<typename T, qualifier Q>
353  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator<<(vec<2, T, Q> const& v1, vec<1, T, Q> const& v2);
354 
355  template<typename T, qualifier Q>
356  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator<<(T scalar, vec<2, T, Q> const& v);
357 
358  template<typename T, qualifier Q>
359  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator<<(vec<1, T, Q> const& v1, vec<2, T, Q> const& v2);
360 
361  template<typename T, qualifier Q>
362  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator<<(vec<2, T, Q> const& v1, vec<2, T, Q> const& v2);
363 
364  template<typename T, qualifier Q>
365  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator>>(vec<2, T, Q> const& v, T scalar);
366 
367  template<typename T, qualifier Q>
368  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator>>(vec<2, T, Q> const& v1, vec<1, T, Q> const& v2);
369 
370  template<typename T, qualifier Q>
371  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator>>(T scalar, vec<2, T, Q> const& v);
372 
373  template<typename T, qualifier Q>
374  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator>>(vec<1, T, Q> const& v1, vec<2, T, Q> const& v2);
375 
376  template<typename T, qualifier Q>
377  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator>>(vec<2, T, Q> const& v1, vec<2, T, Q> const& v2);
378 
379  template<typename T, qualifier Q>
380  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, T, Q> operator~(vec<2, T, Q> const& v);
381 
382  // -- Boolean operators --
383 
384  template<typename T, qualifier Q>
385  GLM_FUNC_DECL GLM_CONSTEXPR bool operator==(vec<2, T, Q> const& v1, vec<2, T, Q> const& v2);
386 
387  template<typename T, qualifier Q>
388  GLM_FUNC_DECL GLM_CONSTEXPR bool operator!=(vec<2, T, Q> const& v1, vec<2, T, Q> const& v2);
389 
390  template<qualifier Q>
391  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, bool, Q> operator&&(vec<2, bool, Q> const& v1, vec<2, bool, Q> const& v2);
392 
393  template<qualifier Q>
394  GLM_FUNC_DECL GLM_CONSTEXPR vec<2, bool, Q> operator||(vec<2, bool, Q> const& v1, vec<2, bool, Q> const& v2);
395 }//namespace glm
396 
397 #ifndef GLM_EXTERNAL_TEMPLATE
398 #include "type_vec2.inl"
399 #endif//GLM_EXTERNAL_TEMPLATE
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00180.html ================================================ 0.9.9 API documentation: type_vec3.hpp File Reference
type_vec3.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00180_source.html ================================================ 0.9.9 API documentation: type_vec3.hpp Source File
type_vec3.hpp
1 
4 #pragma once
5 
6 #include "qualifier.hpp"
7 #if GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_OPERATOR
8 # include "_swizzle.hpp"
9 #elif GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_FUNCTION
10 # include "_swizzle_func.hpp"
11 #endif
12 #include <cstddef>
13 
14 namespace glm
15 {
16  template<typename T, qualifier Q>
17  struct vec<3, T, Q>
18  {
19  // -- Implementation detail --
20 
21  typedef T value_type;
22  typedef vec<3, T, Q> type;
23  typedef vec<3, bool, Q> bool_type;
24 
25  // -- Data --
26 
27 # if GLM_SILENT_WARNINGS == GLM_ENABLE
28 # if GLM_COMPILER & GLM_COMPILER_GCC
29 # pragma GCC diagnostic push
30 # pragma GCC diagnostic ignored "-Wpedantic"
31 # elif GLM_COMPILER & GLM_COMPILER_CLANG
32 # pragma clang diagnostic push
33 # pragma clang diagnostic ignored "-Wgnu-anonymous-struct"
34 # pragma clang diagnostic ignored "-Wnested-anon-types"
35 # elif GLM_COMPILER & GLM_COMPILER_VC
36 # pragma warning(push)
37 # pragma warning(disable: 4201) // nonstandard extension used : nameless struct/union
38 # if GLM_CONFIG_ALIGNED_GENTYPES == GLM_ENABLE
39 # pragma warning(disable: 4324) // structure was padded due to alignment specifier
40 # endif
41 # endif
42 # endif
43 
44 # if GLM_CONFIG_XYZW_ONLY
45  T x, y, z;
46 # elif GLM_CONFIG_ANONYMOUS_STRUCT == GLM_ENABLE
47  union
48  {
49  struct{ T x, y, z; };
50  struct{ T r, g, b; };
51  struct{ T s, t, p; };
52 
53  typename detail::storage<3, T, detail::is_aligned<Q>::value>::type data;
54 
55 # if GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_OPERATOR
56  GLM_SWIZZLE3_2_MEMBERS(T, Q, x, y, z)
57  GLM_SWIZZLE3_2_MEMBERS(T, Q, r, g, b)
58  GLM_SWIZZLE3_2_MEMBERS(T, Q, s, t, p)
59  GLM_SWIZZLE3_3_MEMBERS(T, Q, x, y, z)
60  GLM_SWIZZLE3_3_MEMBERS(T, Q, r, g, b)
61  GLM_SWIZZLE3_3_MEMBERS(T, Q, s, t, p)
62  GLM_SWIZZLE3_4_MEMBERS(T, Q, x, y, z)
63  GLM_SWIZZLE3_4_MEMBERS(T, Q, r, g, b)
64  GLM_SWIZZLE3_4_MEMBERS(T, Q, s, t, p)
65 # endif
66  };
67 # else
68  union { T x, r, s; };
69  union { T y, g, t; };
70  union { T z, b, p; };
71 
72 # if GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_FUNCTION
73  GLM_SWIZZLE_GEN_VEC_FROM_VEC3(T, Q)
74 # endif//GLM_CONFIG_SWIZZLE
75 # endif//GLM_LANG
76 
77 # if GLM_SILENT_WARNINGS == GLM_ENABLE
78 # if GLM_COMPILER & GLM_COMPILER_CLANG
79 # pragma clang diagnostic pop
80 # elif GLM_COMPILER & GLM_COMPILER_GCC
81 # pragma GCC diagnostic pop
82 # elif GLM_COMPILER & GLM_COMPILER_VC
83 # pragma warning(pop)
84 # endif
85 # endif
86 
87  // -- Component accesses --
88 
90  typedef length_t length_type;
91  GLM_FUNC_DECL static GLM_CONSTEXPR length_type length(){return 3;}
92 
93  GLM_FUNC_DECL GLM_CONSTEXPR T & operator[](length_type i);
94  GLM_FUNC_DECL GLM_CONSTEXPR T const& operator[](length_type i) const;
95 
96  // -- Implicit basic constructors --
97 
98  GLM_FUNC_DECL GLM_CONSTEXPR vec() GLM_DEFAULT;
99  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec const& v) GLM_DEFAULT;
100  template<qualifier P>
101  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<3, T, P> const& v);
102 
103  // -- Explicit basic constructors --
104 
105  GLM_FUNC_DECL GLM_CONSTEXPR explicit vec(T scalar);
106  GLM_FUNC_DECL GLM_CONSTEXPR vec(T a, T b, T c);
107 
108  // -- Conversion scalar constructors --
109 
110  template<typename U, qualifier P>
111  GLM_FUNC_DECL GLM_CONSTEXPR explicit vec(vec<1, U, P> const& v);
112 
114  template<typename X, typename Y, typename Z>
115  GLM_FUNC_DECL GLM_CONSTEXPR vec(X x, Y y, Z z);
116  template<typename X, typename Y, typename Z>
117  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<1, X, Q> const& _x, Y _y, Z _z);
118  template<typename X, typename Y, typename Z>
119  GLM_FUNC_DECL GLM_CONSTEXPR vec(X _x, vec<1, Y, Q> const& _y, Z _z);
120  template<typename X, typename Y, typename Z>
121  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<1, X, Q> const& _x, vec<1, Y, Q> const& _y, Z _z);
122  template<typename X, typename Y, typename Z>
123  GLM_FUNC_DECL GLM_CONSTEXPR vec(X _x, Y _y, vec<1, Z, Q> const& _z);
124  template<typename X, typename Y, typename Z>
125  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<1, X, Q> const& _x, Y _y, vec<1, Z, Q> const& _z);
126  template<typename X, typename Y, typename Z>
127  GLM_FUNC_DECL GLM_CONSTEXPR vec(X _x, vec<1, Y, Q> const& _y, vec<1, Z, Q> const& _z);
128  template<typename X, typename Y, typename Z>
129  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<1, X, Q> const& _x, vec<1, Y, Q> const& _y, vec<1, Z, Q> const& _z);
130 
131  // -- Conversion vector constructors --
132 
134  template<typename A, typename B, qualifier P>
135  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<2, A, P> const& _xy, B _z);
137  template<typename A, typename B, qualifier P>
138  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<2, A, P> const& _xy, vec<1, B, P> const& _z);
140  template<typename A, typename B, qualifier P>
141  GLM_FUNC_DECL GLM_CONSTEXPR vec(A _x, vec<2, B, P> const& _yz);
143  template<typename A, typename B, qualifier P>
144  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<1, A, P> const& _x, vec<2, B, P> const& _yz);
146  template<typename U, qualifier P>
147  GLM_FUNC_DECL GLM_CONSTEXPR GLM_EXPLICIT vec(vec<4, U, P> const& v);
148 
150  template<typename U, qualifier P>
151  GLM_FUNC_DECL GLM_CONSTEXPR GLM_EXPLICIT vec(vec<3, U, P> const& v);
152 
153  // -- Swizzle constructors --
154 # if GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_OPERATOR
155  template<int E0, int E1, int E2>
156  GLM_FUNC_DECL GLM_CONSTEXPR vec(detail::_swizzle<3, T, Q, E0, E1, E2, -1> const& that)
157  {
158  *this = that();
159  }
160 
161  template<int E0, int E1>
162  GLM_FUNC_DECL GLM_CONSTEXPR vec(detail::_swizzle<2, T, Q, E0, E1, -1, -2> const& v, T const& scalar)
163  {
164  *this = vec(v(), scalar);
165  }
166 
167  template<int E0, int E1>
168  GLM_FUNC_DECL GLM_CONSTEXPR vec(T const& scalar, detail::_swizzle<2, T, Q, E0, E1, -1, -2> const& v)
169  {
170  *this = vec(scalar, v());
171  }
172 # endif//GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_OPERATOR
173 
174  // -- Unary arithmetic operators --
175 
176  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q>& operator=(vec<3, T, Q> const& v) GLM_DEFAULT;
177 
178  template<typename U>
179  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator=(vec<3, U, Q> const& v);
180  template<typename U>
181  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator+=(U scalar);
182  template<typename U>
183  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator+=(vec<1, U, Q> const& v);
184  template<typename U>
185  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator+=(vec<3, U, Q> const& v);
186  template<typename U>
187  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator-=(U scalar);
188  template<typename U>
189  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator-=(vec<1, U, Q> const& v);
190  template<typename U>
191  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator-=(vec<3, U, Q> const& v);
192  template<typename U>
193  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator*=(U scalar);
194  template<typename U>
195  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator*=(vec<1, U, Q> const& v);
196  template<typename U>
197  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator*=(vec<3, U, Q> const& v);
198  template<typename U>
199  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator/=(U scalar);
200  template<typename U>
201  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator/=(vec<1, U, Q> const& v);
202  template<typename U>
203  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator/=(vec<3, U, Q> const& v);
204 
205  // -- Increment and decrement operators --
206 
207  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator++();
208  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator--();
209  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator++(int);
210  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator--(int);
211 
212  // -- Unary bit operators --
213 
214  template<typename U>
215  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator%=(U scalar);
216  template<typename U>
217  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator%=(vec<1, U, Q> const& v);
218  template<typename U>
219  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator%=(vec<3, U, Q> const& v);
220  template<typename U>
221  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator&=(U scalar);
222  template<typename U>
223  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator&=(vec<1, U, Q> const& v);
224  template<typename U>
225  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator&=(vec<3, U, Q> const& v);
226  template<typename U>
227  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator|=(U scalar);
228  template<typename U>
229  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator|=(vec<1, U, Q> const& v);
230  template<typename U>
231  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator|=(vec<3, U, Q> const& v);
232  template<typename U>
233  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator^=(U scalar);
234  template<typename U>
235  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator^=(vec<1, U, Q> const& v);
236  template<typename U>
237  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator^=(vec<3, U, Q> const& v);
238  template<typename U>
239  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator<<=(U scalar);
240  template<typename U>
241  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator<<=(vec<1, U, Q> const& v);
242  template<typename U>
243  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator<<=(vec<3, U, Q> const& v);
244  template<typename U>
245  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator>>=(U scalar);
246  template<typename U>
247  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator>>=(vec<1, U, Q> const& v);
248  template<typename U>
249  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> & operator>>=(vec<3, U, Q> const& v);
250  };
251 
252  // -- Unary operators --
253 
254  template<typename T, qualifier Q>
255  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator+(vec<3, T, Q> const& v);
256 
257  template<typename T, qualifier Q>
258  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator-(vec<3, T, Q> const& v);
259 
260  // -- Binary operators --
261 
262  template<typename T, qualifier Q>
263  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator+(vec<3, T, Q> const& v, T scalar);
264 
265  template<typename T, qualifier Q>
266  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator+(vec<3, T, Q> const& v, vec<1, T, Q> const& scalar);
267 
268  template<typename T, qualifier Q>
269  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator+(T scalar, vec<3, T, Q> const& v);
270 
271  template<typename T, qualifier Q>
272  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator+(vec<1, T, Q> const& v1, vec<3, T, Q> const& v2);
273 
274  template<typename T, qualifier Q>
275  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator+(vec<3, T, Q> const& v1, vec<3, T, Q> const& v2);
276 
277  template<typename T, qualifier Q>
278  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator-(vec<3, T, Q> const& v, T scalar);
279 
280  template<typename T, qualifier Q>
281  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator-(vec<3, T, Q> const& v1, vec<1, T, Q> const& v2);
282 
283  template<typename T, qualifier Q>
284  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator-(T scalar, vec<3, T, Q> const& v);
285 
286  template<typename T, qualifier Q>
287  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator-(vec<1, T, Q> const& v1, vec<3, T, Q> const& v2);
288 
289  template<typename T, qualifier Q>
290  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator-(vec<3, T, Q> const& v1, vec<3, T, Q> const& v2);
291 
292  template<typename T, qualifier Q>
293  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator*(vec<3, T, Q> const& v, T scalar);
294 
295  template<typename T, qualifier Q>
296  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator*(vec<3, T, Q> const& v1, vec<1, T, Q> const& v2);
297 
298  template<typename T, qualifier Q>
299  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator*(T scalar, vec<3, T, Q> const& v);
300 
301  template<typename T, qualifier Q>
302  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator*(vec<1, T, Q> const& v1, vec<3, T, Q> const& v2);
303 
304  template<typename T, qualifier Q>
305  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator*(vec<3, T, Q> const& v1, vec<3, T, Q> const& v2);
306 
307  template<typename T, qualifier Q>
308  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator/(vec<3, T, Q> const& v, T scalar);
309 
310  template<typename T, qualifier Q>
311  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator/(vec<3, T, Q> const& v1, vec<1, T, Q> const& v2);
312 
313  template<typename T, qualifier Q>
314  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator/(T scalar, vec<3, T, Q> const& v);
315 
316  template<typename T, qualifier Q>
317  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator/(vec<1, T, Q> const& v1, vec<3, T, Q> const& v2);
318 
319  template<typename T, qualifier Q>
320  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator/(vec<3, T, Q> const& v1, vec<3, T, Q> const& v2);
321 
322  template<typename T, qualifier Q>
323  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator%(vec<3, T, Q> const& v, T scalar);
324 
325  template<typename T, qualifier Q>
326  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator%(vec<3, T, Q> const& v1, vec<1, T, Q> const& v2);
327 
328  template<typename T, qualifier Q>
329  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator%(T scalar, vec<3, T, Q> const& v);
330 
331  template<typename T, qualifier Q>
332  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator%(vec<1, T, Q> const& v1, vec<3, T, Q> const& v2);
333 
334  template<typename T, qualifier Q>
335  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator%(vec<3, T, Q> const& v1, vec<3, T, Q> const& v2);
336 
337  template<typename T, qualifier Q>
338  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator&(vec<3, T, Q> const& v1, T scalar);
339 
340  template<typename T, qualifier Q>
341  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator&(vec<3, T, Q> const& v1, vec<1, T, Q> const& v2);
342 
343  template<typename T, qualifier Q>
344  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator&(T scalar, vec<3, T, Q> const& v);
345 
346  template<typename T, qualifier Q>
347  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator&(vec<1, T, Q> const& v1, vec<3, T, Q> const& v2);
348 
349  template<typename T, qualifier Q>
350  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator&(vec<3, T, Q> const& v1, vec<3, T, Q> const& v2);
351 
352  template<typename T, qualifier Q>
353  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator|(vec<3, T, Q> const& v, T scalar);
354 
355  template<typename T, qualifier Q>
356  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator|(vec<3, T, Q> const& v1, vec<1, T, Q> const& v2);
357 
358  template<typename T, qualifier Q>
359  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator|(T scalar, vec<3, T, Q> const& v);
360 
361  template<typename T, qualifier Q>
362  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator|(vec<1, T, Q> const& v1, vec<3, T, Q> const& v2);
363 
364  template<typename T, qualifier Q>
365  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator|(vec<3, T, Q> const& v1, vec<3, T, Q> const& v2);
366 
367  template<typename T, qualifier Q>
368  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator^(vec<3, T, Q> const& v, T scalar);
369 
370  template<typename T, qualifier Q>
371  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator^(vec<3, T, Q> const& v1, vec<1, T, Q> const& v2);
372 
373  template<typename T, qualifier Q>
374  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator^(T scalar, vec<3, T, Q> const& v);
375 
376  template<typename T, qualifier Q>
377  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator^(vec<1, T, Q> const& v1, vec<3, T, Q> const& v2);
378 
379  template<typename T, qualifier Q>
380  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator^(vec<3, T, Q> const& v1, vec<3, T, Q> const& v2);
381 
382  template<typename T, qualifier Q>
383  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator<<(vec<3, T, Q> const& v, T scalar);
384 
385  template<typename T, qualifier Q>
386  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator<<(vec<3, T, Q> const& v1, vec<1, T, Q> const& v2);
387 
388  template<typename T, qualifier Q>
389  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator<<(T scalar, vec<3, T, Q> const& v);
390 
391  template<typename T, qualifier Q>
392  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator<<(vec<1, T, Q> const& v1, vec<3, T, Q> const& v2);
393 
394  template<typename T, qualifier Q>
395  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator<<(vec<3, T, Q> const& v1, vec<3, T, Q> const& v2);
396 
397  template<typename T, qualifier Q>
398  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator>>(vec<3, T, Q> const& v, T scalar);
399 
400  template<typename T, qualifier Q>
401  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator>>(vec<3, T, Q> const& v1, vec<1, T, Q> const& v2);
402 
403  template<typename T, qualifier Q>
404  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator>>(T scalar, vec<3, T, Q> const& v);
405 
406  template<typename T, qualifier Q>
407  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator>>(vec<1, T, Q> const& v1, vec<3, T, Q> const& v2);
408 
409  template<typename T, qualifier Q>
410  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator>>(vec<3, T, Q> const& v1, vec<3, T, Q> const& v2);
411 
412  template<typename T, qualifier Q>
413  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, T, Q> operator~(vec<3, T, Q> const& v);
414 
415  // -- Boolean operators --
416 
417  template<typename T, qualifier Q>
418  GLM_FUNC_DECL GLM_CONSTEXPR bool operator==(vec<3, T, Q> const& v1, vec<3, T, Q> const& v2);
419 
420  template<typename T, qualifier Q>
421  GLM_FUNC_DECL GLM_CONSTEXPR bool operator!=(vec<3, T, Q> const& v1, vec<3, T, Q> const& v2);
422 
423  template<qualifier Q>
424  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, bool, Q> operator&&(vec<3, bool, Q> const& v1, vec<3, bool, Q> const& v2);
425 
426  template<qualifier Q>
427  GLM_FUNC_DECL GLM_CONSTEXPR vec<3, bool, Q> operator||(vec<3, bool, Q> const& v1, vec<3, bool, Q> const& v2);
428 }//namespace glm
429 
430 #ifndef GLM_EXTERNAL_TEMPLATE
431 #include "type_vec3.inl"
432 #endif//GLM_EXTERNAL_TEMPLATE
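All of the vec<3, T, Q> operators declared above act componentwise: a scalar operand (or a vec<1> operand) is broadcast across all three components, and vec-vec forms combine matching components. A minimal self-contained sketch of those semantics in plain C++ (the `ivec3_sketch` type and function names are illustrative only, not GLM's):

```cpp
#include <array>
#include <cassert>

// Illustrative sketch (not GLM itself) of the componentwise semantics the
// vec<3, T, Q> operator declarations promise: a scalar operand is broadcast
// to every component, and vec-vec operators act per component.
struct ivec3_sketch {
    std::array<int, 3> c;
};

// Mirrors operator+(vec<3, T, Q> const& v, T scalar): broadcast-add.
ivec3_sketch add(ivec3_sketch v, int s) {
    return {{v.c[0] + s, v.c[1] + s, v.c[2] + s}};
}

// Mirrors operator&(vec<3, T, Q> const&, vec<3, T, Q> const&):
// componentwise bitwise AND (defined for integral T only).
ivec3_sketch bit_and(ivec3_sketch a, ivec3_sketch b) {
    return {{a.c[0] & b.c[0], a.c[1] & b.c[1], a.c[2] & b.c[2]}};
}
```

The same pattern applies to every binary operator in the listing, including `%`, `|`, `^`, `<<`, and `>>`.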
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00181.html ================================================ 0.9.9 API documentation: type_vec4.hpp File Reference
0.9.9 API documentation
type_vec4.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00181_source.html ================================================ 0.9.9 API documentation: type_vec4.hpp Source File
type_vec4.hpp
1 
4 #pragma once
5 
6 #include "qualifier.hpp"
7 #if GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_OPERATOR
8 # include "_swizzle.hpp"
9 #elif GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_FUNCTION
10 # include "_swizzle_func.hpp"
11 #endif
12 #include <cstddef>
13 
14 namespace glm
15 {
16  template<typename T, qualifier Q>
17  struct vec<4, T, Q>
18  {
19  // -- Implementation detail --
20 
21  typedef T value_type;
22  typedef vec<4, T, Q> type;
23  typedef vec<4, bool, Q> bool_type;
24 
25  // -- Data --
26 
27 # if GLM_SILENT_WARNINGS == GLM_ENABLE
28 # if GLM_COMPILER & GLM_COMPILER_GCC
29 # pragma GCC diagnostic push
30 # pragma GCC diagnostic ignored "-Wpedantic"
31 # elif GLM_COMPILER & GLM_COMPILER_CLANG
32 # pragma clang diagnostic push
33 # pragma clang diagnostic ignored "-Wgnu-anonymous-struct"
34 # pragma clang diagnostic ignored "-Wnested-anon-types"
35 # elif GLM_COMPILER & GLM_COMPILER_VC
36 # pragma warning(push)
37 # pragma warning(disable: 4201) // nonstandard extension used : nameless struct/union
38 # endif
39 # endif
40 
41 # if GLM_CONFIG_XYZW_ONLY
42  T x, y, z, w;
43 # elif GLM_CONFIG_ANONYMOUS_STRUCT == GLM_ENABLE
44  union
45  {
46  struct { T x, y, z, w; };
47  struct { T r, g, b, a; };
48  struct { T s, t, p, q; };
49 
50  typename detail::storage<4, T, detail::is_aligned<Q>::value>::type data;
51 
52 # if GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_OPERATOR
53  GLM_SWIZZLE4_2_MEMBERS(T, Q, x, y, z, w)
54  GLM_SWIZZLE4_2_MEMBERS(T, Q, r, g, b, a)
55  GLM_SWIZZLE4_2_MEMBERS(T, Q, s, t, p, q)
56  GLM_SWIZZLE4_3_MEMBERS(T, Q, x, y, z, w)
57  GLM_SWIZZLE4_3_MEMBERS(T, Q, r, g, b, a)
58  GLM_SWIZZLE4_3_MEMBERS(T, Q, s, t, p, q)
59  GLM_SWIZZLE4_4_MEMBERS(T, Q, x, y, z, w)
60  GLM_SWIZZLE4_4_MEMBERS(T, Q, r, g, b, a)
61  GLM_SWIZZLE4_4_MEMBERS(T, Q, s, t, p, q)
62 # endif
63  };
64 # else
65  union { T x, r, s; };
66  union { T y, g, t; };
67  union { T z, b, p; };
68  union { T w, a, q; };
69 
70 # if GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_FUNCTION
71  GLM_SWIZZLE_GEN_VEC_FROM_VEC4(T, Q)
72 # endif
73 # endif
74 
75 # if GLM_SILENT_WARNINGS == GLM_ENABLE
76 # if GLM_COMPILER & GLM_COMPILER_CLANG
77 # pragma clang diagnostic pop
78 # elif GLM_COMPILER & GLM_COMPILER_GCC
79 # pragma GCC diagnostic pop
80 # elif GLM_COMPILER & GLM_COMPILER_VC
81 # pragma warning(pop)
82 # endif
83 # endif
84 
85  // -- Component accesses --
86 
87  typedef length_t length_type;
88 
90  GLM_FUNC_DECL static GLM_CONSTEXPR length_type length(){return 4;}
91 
92  GLM_FUNC_DECL GLM_CONSTEXPR T & operator[](length_type i);
93  GLM_FUNC_DECL GLM_CONSTEXPR T const& operator[](length_type i) const;
94 
95  // -- Implicit basic constructors --
96 
97  GLM_FUNC_DECL GLM_CONSTEXPR vec() GLM_DEFAULT;
98  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<4, T, Q> const& v) GLM_DEFAULT;
99  template<qualifier P>
100  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<4, T, P> const& v);
101 
102  // -- Explicit basic constructors --
103 
104  GLM_FUNC_DECL GLM_CONSTEXPR explicit vec(T scalar);
105  GLM_FUNC_DECL GLM_CONSTEXPR vec(T x, T y, T z, T w);
106 
107  // -- Conversion scalar constructors --
108 
109  template<typename U, qualifier P>
110  GLM_FUNC_DECL GLM_CONSTEXPR explicit vec(vec<1, U, P> const& v);
111 
113  template<typename X, typename Y, typename Z, typename W>
114  GLM_FUNC_DECL GLM_CONSTEXPR vec(X _x, Y _y, Z _z, W _w);
115  template<typename X, typename Y, typename Z, typename W>
116  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<1, X, Q> const& _x, Y _y, Z _z, W _w);
117  template<typename X, typename Y, typename Z, typename W>
118  GLM_FUNC_DECL GLM_CONSTEXPR vec(X _x, vec<1, Y, Q> const& _y, Z _z, W _w);
119  template<typename X, typename Y, typename Z, typename W>
120  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<1, X, Q> const& _x, vec<1, Y, Q> const& _y, Z _z, W _w);
121  template<typename X, typename Y, typename Z, typename W>
122  GLM_FUNC_DECL GLM_CONSTEXPR vec(X _x, Y _y, vec<1, Z, Q> const& _z, W _w);
123  template<typename X, typename Y, typename Z, typename W>
124  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<1, X, Q> const& _x, Y _y, vec<1, Z, Q> const& _z, W _w);
125  template<typename X, typename Y, typename Z, typename W>
126  GLM_FUNC_DECL GLM_CONSTEXPR vec(X _x, vec<1, Y, Q> const& _y, vec<1, Z, Q> const& _z, W _w);
127  template<typename X, typename Y, typename Z, typename W>
128  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<1, X, Q> const& _x, vec<1, Y, Q> const& _y, vec<1, Z, Q> const& _z, W _w);
129  template<typename X, typename Y, typename Z, typename W>
130  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<1, X, Q> const& _x, Y _y, Z _z, vec<1, W, Q> const& _w);
131  template<typename X, typename Y, typename Z, typename W>
132  GLM_FUNC_DECL GLM_CONSTEXPR vec(X _x, vec<1, Y, Q> const& _y, Z _z, vec<1, W, Q> const& _w);
133  template<typename X, typename Y, typename Z, typename W>
134  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<1, X, Q> const& _x, vec<1, Y, Q> const& _y, Z _z, vec<1, W, Q> const& _w);
135  template<typename X, typename Y, typename Z, typename W>
136  GLM_FUNC_DECL GLM_CONSTEXPR vec(X _x, Y _y, vec<1, Z, Q> const& _z, vec<1, W, Q> const& _w);
137  template<typename X, typename Y, typename Z, typename W>
138  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<1, X, Q> const& _x, Y _y, vec<1, Z, Q> const& _z, vec<1, W, Q> const& _w);
139  template<typename X, typename Y, typename Z, typename W>
140  GLM_FUNC_DECL GLM_CONSTEXPR vec(X _x, vec<1, Y, Q> const& _y, vec<1, Z, Q> const& _z, vec<1, W, Q> const& _w);
141  template<typename X, typename Y, typename Z, typename W>
142  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<1, X, Q> const& _x, vec<1, Y, Q> const& _y, vec<1, Z, Q> const& _z, vec<1, W, Q> const& _w);
143 
144  // -- Conversion vector constructors --
145 
147  template<typename A, typename B, typename C, qualifier P>
148  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<2, A, P> const& _xy, B _z, C _w);
150  template<typename A, typename B, typename C, qualifier P>
151  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<2, A, P> const& _xy, vec<1, B, P> const& _z, C _w);
153  template<typename A, typename B, typename C, qualifier P>
154  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<2, A, P> const& _xy, B _z, vec<1, C, P> const& _w);
156  template<typename A, typename B, typename C, qualifier P>
157  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<2, A, P> const& _xy, vec<1, B, P> const& _z, vec<1, C, P> const& _w);
159  template<typename A, typename B, typename C, qualifier P>
160  GLM_FUNC_DECL GLM_CONSTEXPR vec(A _x, vec<2, B, P> const& _yz, C _w);
162  template<typename A, typename B, typename C, qualifier P>
163  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<1, A, P> const& _x, vec<2, B, P> const& _yz, C _w);
165  template<typename A, typename B, typename C, qualifier P>
166  GLM_FUNC_DECL GLM_CONSTEXPR vec(A _x, vec<2, B, P> const& _yz, vec<1, C, P> const& _w);
168  template<typename A, typename B, typename C, qualifier P>
169  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<1, A, P> const& _x, vec<2, B, P> const& _yz, vec<1, C, P> const& _w);
171  template<typename A, typename B, typename C, qualifier P>
172  GLM_FUNC_DECL GLM_CONSTEXPR vec(A _x, B _y, vec<2, C, P> const& _zw);
174  template<typename A, typename B, typename C, qualifier P>
175  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<1, A, P> const& _x, B _y, vec<2, C, P> const& _zw);
177  template<typename A, typename B, typename C, qualifier P>
178  GLM_FUNC_DECL GLM_CONSTEXPR vec(A _x, vec<1, B, P> const& _y, vec<2, C, P> const& _zw);
180  template<typename A, typename B, typename C, qualifier P>
181  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<1, A, P> const& _x, vec<1, B, P> const& _y, vec<2, C, P> const& _zw);
183  template<typename A, typename B, qualifier P>
184  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<3, A, P> const& _xyz, B _w);
186  template<typename A, typename B, qualifier P>
187  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<3, A, P> const& _xyz, vec<1, B, P> const& _w);
189  template<typename A, typename B, qualifier P>
190  GLM_FUNC_DECL GLM_CONSTEXPR vec(A _x, vec<3, B, P> const& _yzw);
192  template<typename A, typename B, qualifier P>
193  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<1, A, P> const& _x, vec<3, B, P> const& _yzw);
195  template<typename A, typename B, qualifier P>
196  GLM_FUNC_DECL GLM_CONSTEXPR vec(vec<2, A, P> const& _xy, vec<2, B, P> const& _zw);
197 
199  template<typename U, qualifier P>
200  GLM_FUNC_DECL GLM_CONSTEXPR GLM_EXPLICIT vec(vec<4, U, P> const& v);
201 
202  // -- Swizzle constructors --
203 # if GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_OPERATOR
204  template<int E0, int E1, int E2, int E3>
205  GLM_FUNC_DECL GLM_CONSTEXPR vec(detail::_swizzle<4, T, Q, E0, E1, E2, E3> const& that)
206  {
207  *this = that();
208  }
209 
210  template<int E0, int E1, int F0, int F1>
211  GLM_FUNC_DECL GLM_CONSTEXPR vec(detail::_swizzle<2, T, Q, E0, E1, -1, -2> const& v, detail::_swizzle<2, T, Q, F0, F1, -1, -2> const& u)
212  {
213  *this = vec<4, T, Q>(v(), u());
214  }
215 
216  template<int E0, int E1>
217  GLM_FUNC_DECL GLM_CONSTEXPR vec(T const& x, T const& y, detail::_swizzle<2, T, Q, E0, E1, -1, -2> const& v)
218  {
219  *this = vec<4, T, Q>(x, y, v());
220  }
221 
222  template<int E0, int E1>
223  GLM_FUNC_DECL GLM_CONSTEXPR vec(T const& x, detail::_swizzle<2, T, Q, E0, E1, -1, -2> const& v, T const& w)
224  {
225  *this = vec<4, T, Q>(x, v(), w);
226  }
227 
228  template<int E0, int E1>
229  GLM_FUNC_DECL GLM_CONSTEXPR vec(detail::_swizzle<2, T, Q, E0, E1, -1, -2> const& v, T const& z, T const& w)
230  {
231  *this = vec<4, T, Q>(v(), z, w);
232  }
233 
234  template<int E0, int E1, int E2>
235  GLM_FUNC_DECL GLM_CONSTEXPR vec(detail::_swizzle<3, T, Q, E0, E1, E2, -1> const& v, T const& w)
236  {
237  *this = vec<4, T, Q>(v(), w);
238  }
239 
240  template<int E0, int E1, int E2>
241  GLM_FUNC_DECL GLM_CONSTEXPR vec(T const& x, detail::_swizzle<3, T, Q, E0, E1, E2, -1> const& v)
242  {
243  *this = vec<4, T, Q>(x, v());
244  }
245 # endif//GLM_CONFIG_SWIZZLE == GLM_SWIZZLE_OPERATOR
246 
247  // -- Unary arithmetic operators --
248 
249  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q>& operator=(vec<4, T, Q> const& v) GLM_DEFAULT;
250 
251  template<typename U>
252  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q>& operator=(vec<4, U, Q> const& v);
253  template<typename U>
254  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q>& operator+=(U scalar);
255  template<typename U>
256  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q>& operator+=(vec<1, U, Q> const& v);
257  template<typename U>
258  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q>& operator+=(vec<4, U, Q> const& v);
259  template<typename U>
260  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q>& operator-=(U scalar);
261  template<typename U>
262  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q>& operator-=(vec<1, U, Q> const& v);
263  template<typename U>
264  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q>& operator-=(vec<4, U, Q> const& v);
265  template<typename U>
266  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q>& operator*=(U scalar);
267  template<typename U>
268  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q>& operator*=(vec<1, U, Q> const& v);
269  template<typename U>
270  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q>& operator*=(vec<4, U, Q> const& v);
271  template<typename U>
272  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q>& operator/=(U scalar);
273  template<typename U>
274  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q>& operator/=(vec<1, U, Q> const& v);
275  template<typename U>
276  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q>& operator/=(vec<4, U, Q> const& v);
277 
278  // -- Increment and decrement operators --
279 
280  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> & operator++();
281  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> & operator--();
282  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator++(int);
283  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator--(int);
284 
285  // -- Unary bit operators --
286 
287  template<typename U>
288  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> & operator%=(U scalar);
289  template<typename U>
290  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> & operator%=(vec<1, U, Q> const& v);
291  template<typename U>
292  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> & operator%=(vec<4, U, Q> const& v);
293  template<typename U>
294  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> & operator&=(U scalar);
295  template<typename U>
296  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> & operator&=(vec<1, U, Q> const& v);
297  template<typename U>
298  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> & operator&=(vec<4, U, Q> const& v);
299  template<typename U>
300  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> & operator|=(U scalar);
301  template<typename U>
302  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> & operator|=(vec<1, U, Q> const& v);
303  template<typename U>
304  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> & operator|=(vec<4, U, Q> const& v);
305  template<typename U>
306  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> & operator^=(U scalar);
307  template<typename U>
308  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> & operator^=(vec<1, U, Q> const& v);
309  template<typename U>
310  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> & operator^=(vec<4, U, Q> const& v);
311  template<typename U>
312  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> & operator<<=(U scalar);
313  template<typename U>
314  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> & operator<<=(vec<1, U, Q> const& v);
315  template<typename U>
316  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> & operator<<=(vec<4, U, Q> const& v);
317  template<typename U>
318  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> & operator>>=(U scalar);
319  template<typename U>
320  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> & operator>>=(vec<1, U, Q> const& v);
321  template<typename U>
322  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> & operator>>=(vec<4, U, Q> const& v);
323  };
324 
325  // -- Unary operators --
326 
327  template<typename T, qualifier Q>
328  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator+(vec<4, T, Q> const& v);
329 
330  template<typename T, qualifier Q>
331  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator-(vec<4, T, Q> const& v);
332 
333  // -- Binary operators --
334 
335  template<typename T, qualifier Q>
336  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator+(vec<4, T, Q> const& v, T const & scalar);
337 
338  template<typename T, qualifier Q>
339  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator+(vec<4, T, Q> const& v1, vec<1, T, Q> const& v2);
340 
341  template<typename T, qualifier Q>
342  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator+(T scalar, vec<4, T, Q> const& v);
343 
344  template<typename T, qualifier Q>
345  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator+(vec<1, T, Q> const& v1, vec<4, T, Q> const& v2);
346 
347  template<typename T, qualifier Q>
348  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator+(vec<4, T, Q> const& v1, vec<4, T, Q> const& v2);
349 
350  template<typename T, qualifier Q>
351  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator-(vec<4, T, Q> const& v, T const & scalar);
352 
353  template<typename T, qualifier Q>
354  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator-(vec<4, T, Q> const& v1, vec<1, T, Q> const& v2);
355 
356  template<typename T, qualifier Q>
357  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator-(T scalar, vec<4, T, Q> const& v);
358 
359  template<typename T, qualifier Q>
360  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator-(vec<1, T, Q> const& v1, vec<4, T, Q> const& v2);
361 
362  template<typename T, qualifier Q>
363  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator-(vec<4, T, Q> const& v1, vec<4, T, Q> const& v2);
364 
365  template<typename T, qualifier Q>
366  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator*(vec<4, T, Q> const& v, T const & scalar);
367 
368  template<typename T, qualifier Q>
369  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator*(vec<4, T, Q> const& v1, vec<1, T, Q> const& v2);
370 
371  template<typename T, qualifier Q>
372  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator*(T scalar, vec<4, T, Q> const& v);
373 
374  template<typename T, qualifier Q>
375  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator*(vec<1, T, Q> const& v1, vec<4, T, Q> const& v2);
376 
377  template<typename T, qualifier Q>
378  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator*(vec<4, T, Q> const& v1, vec<4, T, Q> const& v2);
379 
380  template<typename T, qualifier Q>
381  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator/(vec<4, T, Q> const& v, T const & scalar);
382 
383  template<typename T, qualifier Q>
384  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator/(vec<4, T, Q> const& v1, vec<1, T, Q> const& v2);
385 
386  template<typename T, qualifier Q>
387  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator/(T scalar, vec<4, T, Q> const& v);
388 
389  template<typename T, qualifier Q>
390  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator/(vec<1, T, Q> const& v1, vec<4, T, Q> const& v2);
391 
392  template<typename T, qualifier Q>
393  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator/(vec<4, T, Q> const& v1, vec<4, T, Q> const& v2);
394 
395  template<typename T, qualifier Q>
396  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator%(vec<4, T, Q> const& v, T scalar);
397 
398  template<typename T, qualifier Q>
399  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator%(vec<4, T, Q> const& v, vec<1, T, Q> const& scalar);
400 
401  template<typename T, qualifier Q>
402  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator%(T scalar, vec<4, T, Q> const& v);
403 
404  template<typename T, qualifier Q>
405  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator%(vec<1, T, Q> const& scalar, vec<4, T, Q> const& v);
406 
407  template<typename T, qualifier Q>
408  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator%(vec<4, T, Q> const& v1, vec<4, T, Q> const& v2);
409 
410  template<typename T, qualifier Q>
411  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator&(vec<4, T, Q> const& v, T scalar);
412 
413  template<typename T, qualifier Q>
414  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator&(vec<4, T, Q> const& v, vec<1, T, Q> const& scalar);
415 
416  template<typename T, qualifier Q>
417  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator&(T scalar, vec<4, T, Q> const& v);
418 
419  template<typename T, qualifier Q>
420  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator&(vec<1, T, Q> const& scalar, vec<4, T, Q> const& v);
421 
422  template<typename T, qualifier Q>
423  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator&(vec<4, T, Q> const& v1, vec<4, T, Q> const& v2);
424 
425  template<typename T, qualifier Q>
426  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator|(vec<4, T, Q> const& v, T scalar);
427 
428  template<typename T, qualifier Q>
429  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator|(vec<4, T, Q> const& v, vec<1, T, Q> const& scalar);
430 
431  template<typename T, qualifier Q>
432  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator|(T scalar, vec<4, T, Q> const& v);
433 
434  template<typename T, qualifier Q>
435  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator|(vec<1, T, Q> const& scalar, vec<4, T, Q> const& v);
436 
437  template<typename T, qualifier Q>
438  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator|(vec<4, T, Q> const& v1, vec<4, T, Q> const& v2);
439 
440  template<typename T, qualifier Q>
441  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator^(vec<4, T, Q> const& v, T scalar);
442 
443  template<typename T, qualifier Q>
444  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator^(vec<4, T, Q> const& v, vec<1, T, Q> const& scalar);
445 
446  template<typename T, qualifier Q>
447  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator^(T scalar, vec<4, T, Q> const& v);
448 
449  template<typename T, qualifier Q>
450  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator^(vec<1, T, Q> const& scalar, vec<4, T, Q> const& v);
451 
452  template<typename T, qualifier Q>
453  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator^(vec<4, T, Q> const& v1, vec<4, T, Q> const& v2);
454 
455  template<typename T, qualifier Q>
456  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator<<(vec<4, T, Q> const& v, T scalar);
457 
458  template<typename T, qualifier Q>
459  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator<<(vec<4, T, Q> const& v, vec<1, T, Q> const& scalar);
460 
461  template<typename T, qualifier Q>
462  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator<<(T scalar, vec<4, T, Q> const& v);
463 
464  template<typename T, qualifier Q>
465  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator<<(vec<1, T, Q> const& scalar, vec<4, T, Q> const& v);
466 
467  template<typename T, qualifier Q>
468  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator<<(vec<4, T, Q> const& v1, vec<4, T, Q> const& v2);
469 
470  template<typename T, qualifier Q>
471  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator>>(vec<4, T, Q> const& v, T scalar);
472 
473  template<typename T, qualifier Q>
474  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator>>(vec<4, T, Q> const& v, vec<1, T, Q> const& scalar);
475 
476  template<typename T, qualifier Q>
477  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator>>(T scalar, vec<4, T, Q> const& v);
478 
479  template<typename T, qualifier Q>
480  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator>>(vec<1, T, Q> const& scalar, vec<4, T, Q> const& v);
481 
482  template<typename T, qualifier Q>
483  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator>>(vec<4, T, Q> const& v1, vec<4, T, Q> const& v2);
484 
485  template<typename T, qualifier Q>
486  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, T, Q> operator~(vec<4, T, Q> const& v);
487 
488  // -- Boolean operators --
489 
490  template<typename T, qualifier Q>
491  GLM_FUNC_DECL GLM_CONSTEXPR bool operator==(vec<4, T, Q> const& v1, vec<4, T, Q> const& v2);
492 
493  template<typename T, qualifier Q>
494  GLM_FUNC_DECL GLM_CONSTEXPR bool operator!=(vec<4, T, Q> const& v1, vec<4, T, Q> const& v2);
495 
496  template<qualifier Q>
497  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, bool, Q> operator&&(vec<4, bool, Q> const& v1, vec<4, bool, Q> const& v2);
498 
499  template<qualifier Q>
500  GLM_FUNC_DECL GLM_CONSTEXPR vec<4, bool, Q> operator||(vec<4, bool, Q> const& v1, vec<4, bool, Q> const& v2);
501 }//namespace glm
502 
503 #ifndef GLM_EXTERNAL_TEMPLATE
504 #include "type_vec4.inl"
505 #endif//GLM_EXTERNAL_TEMPLATE
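The conversion vector constructors above assemble a 4-component vector from smaller pieces, e.g. `vec(_xy, _zw)` or `vec(_xyz, _w)`. A self-contained sketch of that packing logic using plain structs (hypothetical names, not GLM's types):

```cpp
#include <cassert>

// Illustrative sketch (not GLM itself) of how the vec<4, T, Q> conversion
// constructors pack four components from smaller vectors and scalars.
struct vec2_s { float x, y; };
struct vec3_s { float x, y, z; };
struct vec4_s { float x, y, z, w; };

// Mirrors vec(vec<2, A, P> const& _xy, vec<2, B, P> const& _zw)
vec4_s from_xy_zw(vec2_s xy, vec2_s zw) {
    return {xy.x, xy.y, zw.x, zw.y};
}

// Mirrors vec(vec<3, A, P> const& _xyz, B _w)
vec4_s from_xyz_w(vec3_s xyz, float w) {
    return {xyz.x, xyz.y, xyz.z, w};
}
```

The remaining overloads in the listing are the other ways to partition four slots among vec<1>, vec<2>, and vec<3> arguments and scalars, all following the same left-to-right packing order.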
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00182.html ================================================ 0.9.9 API documentation: ulp.hpp File Reference
ulp.hpp File Reference

GLM_GTC_ulp


Functions

GLM_FUNC_DECL int float_distance (float x, float y)
 Return the distance in ULPs between 2 single-precision floating-point scalars.

GLM_FUNC_DECL int64 float_distance (double x, double y)
 Return the distance in ULPs between 2 double-precision floating-point scalars.

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, int, Q> float_distance (vec<L, float, Q> const &x, vec<L, float, Q> const &y)
 Return the componentwise distance in ULPs between 2 single-precision floating-point vectors.

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, int64, Q> float_distance (vec<L, double, Q> const &x, vec<L, double, Q> const &y)
 Return the componentwise distance in ULPs between 2 double-precision floating-point vectors.

template<typename genType>
GLM_FUNC_DECL genType next_float (genType x)
 Return the next ULP value(s) after the input value(s).

template<typename genType>
GLM_FUNC_DECL genType next_float (genType x, int ULPs)
 Return the value(s) ULP distance after the input value(s).

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> next_float (vec<L, T, Q> const &x)
 Return the next ULP value(s) after the input value(s).

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> next_float (vec<L, T, Q> const &x, int ULPs)
 Return the value(s) ULP distance after the input value(s).

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> next_float (vec<L, T, Q> const &x, vec<L, int, Q> const &ULPs)
 Return the value(s) ULP distance after the input value(s).

template<typename genType>
GLM_FUNC_DECL genType prev_float (genType x)
 Return the previous ULP value(s) before the input value(s).

template<typename genType>
GLM_FUNC_DECL genType prev_float (genType x, int ULPs)
 Return the value(s) ULP distance before the input value(s).

template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> prev_float (vec<L, T, Q> const &x)
 Return the previous ULP value(s) before the input value(s).
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > prev_float (vec< L, T, Q > const &x, int ULPs)
 Return the value(s) ULP distance before the input value(s). More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > prev_float (vec< L, T, Q > const &x, vec< L, int, Q > const &ULPs)
 Return the value(s) ULP distance before the input value(s). More...
 

Detailed Description

GLM_GTC_ulp

See also
Core features (dependence)

Definition in file ulp.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00182_source.html ================================================ 0.9.9 API documentation: ulp.hpp Source File
ulp.hpp
1 
15 #pragma once
16 
17 // Dependencies
18 #include "../detail/setup.hpp"
19 #include "../detail/qualifier.hpp"
20 #include "../detail/_vectorize.hpp"
21 #include "../ext/scalar_int_sized.hpp"
22 
23 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
24 # pragma message("GLM: GLM_GTC_ulp extension included")
25 #endif
26 
27 namespace glm
28 {
34  template<typename genType>
35  GLM_FUNC_DECL genType next_float(genType x);
36 
42  template<typename genType>
43  GLM_FUNC_DECL genType prev_float(genType x);
44 
50  template<typename genType>
51  GLM_FUNC_DECL genType next_float(genType x, int ULPs);
52 
58  template<typename genType>
59  GLM_FUNC_DECL genType prev_float(genType x, int ULPs);
60 
64  GLM_FUNC_DECL int float_distance(float x, float y);
65 
69  GLM_FUNC_DECL int64 float_distance(double x, double y);
70 
78  template<length_t L, typename T, qualifier Q>
79  GLM_FUNC_DECL vec<L, T, Q> next_float(vec<L, T, Q> const& x);
80 
88  template<length_t L, typename T, qualifier Q>
89  GLM_FUNC_DECL vec<L, T, Q> next_float(vec<L, T, Q> const& x, int ULPs);
90 
98  template<length_t L, typename T, qualifier Q>
99  GLM_FUNC_DECL vec<L, T, Q> next_float(vec<L, T, Q> const& x, vec<L, int, Q> const& ULPs);
100 
108  template<length_t L, typename T, qualifier Q>
109  GLM_FUNC_DECL vec<L, T, Q> prev_float(vec<L, T, Q> const& x);
110 
118  template<length_t L, typename T, qualifier Q>
119  GLM_FUNC_DECL vec<L, T, Q> prev_float(vec<L, T, Q> const& x, int ULPs);
120 
128  template<length_t L, typename T, qualifier Q>
129  GLM_FUNC_DECL vec<L, T, Q> prev_float(vec<L, T, Q> const& x, vec<L, int, Q> const& ULPs);
130 
137  template<length_t L, typename T, qualifier Q>
138  GLM_FUNC_DECL vec<L, int, Q> float_distance(vec<L, float, Q> const& x, vec<L, float, Q> const& y);
139 
146  template<length_t L, typename T, qualifier Q>
147  GLM_FUNC_DECL vec<L, int64, Q> float_distance(vec<L, double, Q> const& x, vec<L, double, Q> const& y);
148 
150 }//namespace glm
151 
152 #include "ulp.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00183.html ================================================ 0.9.9 API documentation: vec1.hpp File Reference
vec1.hpp File Reference

GLM_GTC_vec1

Detailed Description

GLM_GTC_vec1

See also
Core features (dependence)

Definition in file vec1.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00183_source.html ================================================ 0.9.9 API documentation: vec1.hpp Source File
vec1.hpp
1 
13 #pragma once
14 
15 // Dependency:
16 #include "../ext/vector_bool1.hpp"
17 #include "../ext/vector_bool1_precision.hpp"
18 #include "../ext/vector_float1.hpp"
19 #include "../ext/vector_float1_precision.hpp"
20 #include "../ext/vector_double1.hpp"
21 #include "../ext/vector_double1_precision.hpp"
22 #include "../ext/vector_int1.hpp"
23 #include "../ext/vector_int1_precision.hpp"
24 #include "../ext/vector_uint1.hpp"
25 #include "../ext/vector_uint1_precision.hpp"
26 
27 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
28 # pragma message("GLM: GLM_GTC_vec1 extension included")
29 #endif
30 
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00184.html ================================================ 0.9.9 API documentation: vec2.hpp File Reference
vec2.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00184_source.html ================================================ 0.9.9 API documentation: vec2.hpp Source File
vec2.hpp
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00185.html ================================================ 0.9.9 API documentation: vec3.hpp File Reference
vec3.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00185_source.html ================================================ 0.9.9 API documentation: vec3.hpp Source File
vec3.hpp
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00186.html ================================================ 0.9.9 API documentation: vec4.hpp File Reference
vec4.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00186_source.html ================================================ 0.9.9 API documentation: vec4.hpp Source File
vec4.hpp
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00187.html ================================================ 0.9.9 API documentation: vec_swizzle.hpp File Reference
vec_swizzle.hpp File Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00187_source.html ================================================ 0.9.9 API documentation: vec_swizzle.hpp Source File
vec_swizzle.hpp
1 
13 #pragma once
14 
15 #include "../glm.hpp"
16 
17 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
18 # ifndef GLM_ENABLE_EXPERIMENTAL
19 # pragma message("GLM: GLM_GTX_vec_swizzle is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
20 # else
21 # pragma message("GLM: GLM_GTX_vec_swizzle extension included")
22 # endif
23 #endif
24 
25 namespace glm {
26  // xx
27  template<typename T, qualifier Q>
28  GLM_INLINE glm::vec<2, T, Q> xx(const glm::vec<1, T, Q> &v) {
29  return glm::vec<2, T, Q>(v.x, v.x);
30  }
31 
32  template<typename T, qualifier Q>
33  GLM_INLINE glm::vec<2, T, Q> xx(const glm::vec<2, T, Q> &v) {
34  return glm::vec<2, T, Q>(v.x, v.x);
35  }
36 
37  template<typename T, qualifier Q>
38  GLM_INLINE glm::vec<2, T, Q> xx(const glm::vec<3, T, Q> &v) {
39  return glm::vec<2, T, Q>(v.x, v.x);
40  }
41 
42  template<typename T, qualifier Q>
43  GLM_INLINE glm::vec<2, T, Q> xx(const glm::vec<4, T, Q> &v) {
44  return glm::vec<2, T, Q>(v.x, v.x);
45  }
46 
47  // xy
48  template<typename T, qualifier Q>
49  GLM_INLINE glm::vec<2, T, Q> xy(const glm::vec<2, T, Q> &v) {
50  return glm::vec<2, T, Q>(v.x, v.y);
51  }
52 
53  template<typename T, qualifier Q>
54  GLM_INLINE glm::vec<2, T, Q> xy(const glm::vec<3, T, Q> &v) {
55  return glm::vec<2, T, Q>(v.x, v.y);
56  }
57 
58  template<typename T, qualifier Q>
59  GLM_INLINE glm::vec<2, T, Q> xy(const glm::vec<4, T, Q> &v) {
60  return glm::vec<2, T, Q>(v.x, v.y);
61  }
62 
63  // xz
64  template<typename T, qualifier Q>
65  GLM_INLINE glm::vec<2, T, Q> xz(const glm::vec<3, T, Q> &v) {
66  return glm::vec<2, T, Q>(v.x, v.z);
67  }
68 
69  template<typename T, qualifier Q>
70  GLM_INLINE glm::vec<2, T, Q> xz(const glm::vec<4, T, Q> &v) {
71  return glm::vec<2, T, Q>(v.x, v.z);
72  }
73 
74  // xw
75  template<typename T, qualifier Q>
76  GLM_INLINE glm::vec<2, T, Q> xw(const glm::vec<4, T, Q> &v) {
77  return glm::vec<2, T, Q>(v.x, v.w);
78  }
79 
80  // yx
81  template<typename T, qualifier Q>
82  GLM_INLINE glm::vec<2, T, Q> yx(const glm::vec<2, T, Q> &v) {
83  return glm::vec<2, T, Q>(v.y, v.x);
84  }
85 
86  template<typename T, qualifier Q>
87  GLM_INLINE glm::vec<2, T, Q> yx(const glm::vec<3, T, Q> &v) {
88  return glm::vec<2, T, Q>(v.y, v.x);
89  }
90 
91  template<typename T, qualifier Q>
92  GLM_INLINE glm::vec<2, T, Q> yx(const glm::vec<4, T, Q> &v) {
93  return glm::vec<2, T, Q>(v.y, v.x);
94  }
95 
96  // yy
97  template<typename T, qualifier Q>
98  GLM_INLINE glm::vec<2, T, Q> yy(const glm::vec<2, T, Q> &v) {
99  return glm::vec<2, T, Q>(v.y, v.y);
100  }
101 
102  template<typename T, qualifier Q>
103  GLM_INLINE glm::vec<2, T, Q> yy(const glm::vec<3, T, Q> &v) {
104  return glm::vec<2, T, Q>(v.y, v.y);
105  }
106 
107  template<typename T, qualifier Q>
108  GLM_INLINE glm::vec<2, T, Q> yy(const glm::vec<4, T, Q> &v) {
109  return glm::vec<2, T, Q>(v.y, v.y);
110  }
111 
112  // yz
113  template<typename T, qualifier Q>
114  GLM_INLINE glm::vec<2, T, Q> yz(const glm::vec<3, T, Q> &v) {
115  return glm::vec<2, T, Q>(v.y, v.z);
116  }
117 
118  template<typename T, qualifier Q>
119  GLM_INLINE glm::vec<2, T, Q> yz(const glm::vec<4, T, Q> &v) {
120  return glm::vec<2, T, Q>(v.y, v.z);
121  }
122 
123  // yw
124  template<typename T, qualifier Q>
125  GLM_INLINE glm::vec<2, T, Q> yw(const glm::vec<4, T, Q> &v) {
126  return glm::vec<2, T, Q>(v.y, v.w);
127  }
128 
129  // zx
130  template<typename T, qualifier Q>
131  GLM_INLINE glm::vec<2, T, Q> zx(const glm::vec<3, T, Q> &v) {
132  return glm::vec<2, T, Q>(v.z, v.x);
133  }
134 
135  template<typename T, qualifier Q>
136  GLM_INLINE glm::vec<2, T, Q> zx(const glm::vec<4, T, Q> &v) {
137  return glm::vec<2, T, Q>(v.z, v.x);
138  }
139 
140  // zy
141  template<typename T, qualifier Q>
142  GLM_INLINE glm::vec<2, T, Q> zy(const glm::vec<3, T, Q> &v) {
143  return glm::vec<2, T, Q>(v.z, v.y);
144  }
145 
146  template<typename T, qualifier Q>
147  GLM_INLINE glm::vec<2, T, Q> zy(const glm::vec<4, T, Q> &v) {
148  return glm::vec<2, T, Q>(v.z, v.y);
149  }
150 
151  // zz
152  template<typename T, qualifier Q>
153  GLM_INLINE glm::vec<2, T, Q> zz(const glm::vec<3, T, Q> &v) {
154  return glm::vec<2, T, Q>(v.z, v.z);
155  }
156 
157  template<typename T, qualifier Q>
158  GLM_INLINE glm::vec<2, T, Q> zz(const glm::vec<4, T, Q> &v) {
159  return glm::vec<2, T, Q>(v.z, v.z);
160  }
161 
162  // zw
163  template<typename T, qualifier Q>
164  GLM_INLINE glm::vec<2, T, Q> zw(const glm::vec<4, T, Q> &v) {
165  return glm::vec<2, T, Q>(v.z, v.w);
166  }
167 
168  // wx
169  template<typename T, qualifier Q>
170  GLM_INLINE glm::vec<2, T, Q> wx(const glm::vec<4, T, Q> &v) {
171  return glm::vec<2, T, Q>(v.w, v.x);
172  }
173 
174  // wy
175  template<typename T, qualifier Q>
176  GLM_INLINE glm::vec<2, T, Q> wy(const glm::vec<4, T, Q> &v) {
177  return glm::vec<2, T, Q>(v.w, v.y);
178  }
179 
180  // wz
181  template<typename T, qualifier Q>
182  GLM_INLINE glm::vec<2, T, Q> wz(const glm::vec<4, T, Q> &v) {
183  return glm::vec<2, T, Q>(v.w, v.z);
184  }
185 
186  // ww
187  template<typename T, qualifier Q>
188  GLM_INLINE glm::vec<2, T, Q> ww(const glm::vec<4, T, Q> &v) {
189  return glm::vec<2, T, Q>(v.w, v.w);
190  }
191 
192  // xxx
193  template<typename T, qualifier Q>
194  GLM_INLINE glm::vec<3, T, Q> xxx(const glm::vec<1, T, Q> &v) {
195  return glm::vec<3, T, Q>(v.x, v.x, v.x);
196  }
197 
198  template<typename T, qualifier Q>
199  GLM_INLINE glm::vec<3, T, Q> xxx(const glm::vec<2, T, Q> &v) {
200  return glm::vec<3, T, Q>(v.x, v.x, v.x);
201  }
202 
203  template<typename T, qualifier Q>
204  GLM_INLINE glm::vec<3, T, Q> xxx(const glm::vec<3, T, Q> &v) {
205  return glm::vec<3, T, Q>(v.x, v.x, v.x);
206  }
207 
208  template<typename T, qualifier Q>
209  GLM_INLINE glm::vec<3, T, Q> xxx(const glm::vec<4, T, Q> &v) {
210  return glm::vec<3, T, Q>(v.x, v.x, v.x);
211  }
212 
213  // xxy
214  template<typename T, qualifier Q>
215  GLM_INLINE glm::vec<3, T, Q> xxy(const glm::vec<2, T, Q> &v) {
216  return glm::vec<3, T, Q>(v.x, v.x, v.y);
217  }
218 
219  template<typename T, qualifier Q>
220  GLM_INLINE glm::vec<3, T, Q> xxy(const glm::vec<3, T, Q> &v) {
221  return glm::vec<3, T, Q>(v.x, v.x, v.y);
222  }
223 
224  template<typename T, qualifier Q>
225  GLM_INLINE glm::vec<3, T, Q> xxy(const glm::vec<4, T, Q> &v) {
226  return glm::vec<3, T, Q>(v.x, v.x, v.y);
227  }
228 
229  // xxz
230  template<typename T, qualifier Q>
231  GLM_INLINE glm::vec<3, T, Q> xxz(const glm::vec<3, T, Q> &v) {
232  return glm::vec<3, T, Q>(v.x, v.x, v.z);
233  }
234 
235  template<typename T, qualifier Q>
236  GLM_INLINE glm::vec<3, T, Q> xxz(const glm::vec<4, T, Q> &v) {
237  return glm::vec<3, T, Q>(v.x, v.x, v.z);
238  }
239 
240  // xxw
241  template<typename T, qualifier Q>
242  GLM_INLINE glm::vec<3, T, Q> xxw(const glm::vec<4, T, Q> &v) {
243  return glm::vec<3, T, Q>(v.x, v.x, v.w);
244  }
245 
246  // xyx
247  template<typename T, qualifier Q>
248  GLM_INLINE glm::vec<3, T, Q> xyx(const glm::vec<2, T, Q> &v) {
249  return glm::vec<3, T, Q>(v.x, v.y, v.x);
250  }
251 
252  template<typename T, qualifier Q>
253  GLM_INLINE glm::vec<3, T, Q> xyx(const glm::vec<3, T, Q> &v) {
254  return glm::vec<3, T, Q>(v.x, v.y, v.x);
255  }
256 
257  template<typename T, qualifier Q>
258  GLM_INLINE glm::vec<3, T, Q> xyx(const glm::vec<4, T, Q> &v) {
259  return glm::vec<3, T, Q>(v.x, v.y, v.x);
260  }
261 
262  // xyy
263  template<typename T, qualifier Q>
264  GLM_INLINE glm::vec<3, T, Q> xyy(const glm::vec<2, T, Q> &v) {
265  return glm::vec<3, T, Q>(v.x, v.y, v.y);
266  }
267 
268  template<typename T, qualifier Q>
269  GLM_INLINE glm::vec<3, T, Q> xyy(const glm::vec<3, T, Q> &v) {
270  return glm::vec<3, T, Q>(v.x, v.y, v.y);
271  }
272 
273  template<typename T, qualifier Q>
274  GLM_INLINE glm::vec<3, T, Q> xyy(const glm::vec<4, T, Q> &v) {
275  return glm::vec<3, T, Q>(v.x, v.y, v.y);
276  }
277 
278  // xyz
279  template<typename T, qualifier Q>
280  GLM_INLINE glm::vec<3, T, Q> xyz(const glm::vec<3, T, Q> &v) {
281  return glm::vec<3, T, Q>(v.x, v.y, v.z);
282  }
283 
284  template<typename T, qualifier Q>
285  GLM_INLINE glm::vec<3, T, Q> xyz(const glm::vec<4, T, Q> &v) {
286  return glm::vec<3, T, Q>(v.x, v.y, v.z);
287  }
288 
289  // xyw
290  template<typename T, qualifier Q>
291  GLM_INLINE glm::vec<3, T, Q> xyw(const glm::vec<4, T, Q> &v) {
292  return glm::vec<3, T, Q>(v.x, v.y, v.w);
293  }
294 
295  // xzx
296  template<typename T, qualifier Q>
297  GLM_INLINE glm::vec<3, T, Q> xzx(const glm::vec<3, T, Q> &v) {
298  return glm::vec<3, T, Q>(v.x, v.z, v.x);
299  }
300 
301  template<typename T, qualifier Q>
302  GLM_INLINE glm::vec<3, T, Q> xzx(const glm::vec<4, T, Q> &v) {
303  return glm::vec<3, T, Q>(v.x, v.z, v.x);
304  }
305 
306  // xzy
307  template<typename T, qualifier Q>
308  GLM_INLINE glm::vec<3, T, Q> xzy(const glm::vec<3, T, Q> &v) {
309  return glm::vec<3, T, Q>(v.x, v.z, v.y);
310  }
311 
312  template<typename T, qualifier Q>
313  GLM_INLINE glm::vec<3, T, Q> xzy(const glm::vec<4, T, Q> &v) {
314  return glm::vec<3, T, Q>(v.x, v.z, v.y);
315  }
316 
317  // xzz
318  template<typename T, qualifier Q>
319  GLM_INLINE glm::vec<3, T, Q> xzz(const glm::vec<3, T, Q> &v) {
320  return glm::vec<3, T, Q>(v.x, v.z, v.z);
321  }
322 
323  template<typename T, qualifier Q>
324  GLM_INLINE glm::vec<3, T, Q> xzz(const glm::vec<4, T, Q> &v) {
325  return glm::vec<3, T, Q>(v.x, v.z, v.z);
326  }
327 
328  // xzw
329  template<typename T, qualifier Q>
330  GLM_INLINE glm::vec<3, T, Q> xzw(const glm::vec<4, T, Q> &v) {
331  return glm::vec<3, T, Q>(v.x, v.z, v.w);
332  }
333 
334  // xwx
335  template<typename T, qualifier Q>
336  GLM_INLINE glm::vec<3, T, Q> xwx(const glm::vec<4, T, Q> &v) {
337  return glm::vec<3, T, Q>(v.x, v.w, v.x);
338  }
339 
340  // xwy
341  template<typename T, qualifier Q>
342  GLM_INLINE glm::vec<3, T, Q> xwy(const glm::vec<4, T, Q> &v) {
343  return glm::vec<3, T, Q>(v.x, v.w, v.y);
344  }
345 
346  // xwz
347  template<typename T, qualifier Q>
348  GLM_INLINE glm::vec<3, T, Q> xwz(const glm::vec<4, T, Q> &v) {
349  return glm::vec<3, T, Q>(v.x, v.w, v.z);
350  }
351 
352  // xww
353  template<typename T, qualifier Q>
354  GLM_INLINE glm::vec<3, T, Q> xww(const glm::vec<4, T, Q> &v) {
355  return glm::vec<3, T, Q>(v.x, v.w, v.w);
356  }
357 
358  // yxx
359  template<typename T, qualifier Q>
360  GLM_INLINE glm::vec<3, T, Q> yxx(const glm::vec<2, T, Q> &v) {
361  return glm::vec<3, T, Q>(v.y, v.x, v.x);
362  }
363 
364  template<typename T, qualifier Q>
365  GLM_INLINE glm::vec<3, T, Q> yxx(const glm::vec<3, T, Q> &v) {
366  return glm::vec<3, T, Q>(v.y, v.x, v.x);
367  }
368 
369  template<typename T, qualifier Q>
370  GLM_INLINE glm::vec<3, T, Q> yxx(const glm::vec<4, T, Q> &v) {
371  return glm::vec<3, T, Q>(v.y, v.x, v.x);
372  }
373 
374  // yxy
375  template<typename T, qualifier Q>
376  GLM_INLINE glm::vec<3, T, Q> yxy(const glm::vec<2, T, Q> &v) {
377  return glm::vec<3, T, Q>(v.y, v.x, v.y);
378  }
379 
380  template<typename T, qualifier Q>
381  GLM_INLINE glm::vec<3, T, Q> yxy(const glm::vec<3, T, Q> &v) {
382  return glm::vec<3, T, Q>(v.y, v.x, v.y);
383  }
384 
385  template<typename T, qualifier Q>
386  GLM_INLINE glm::vec<3, T, Q> yxy(const glm::vec<4, T, Q> &v) {
387  return glm::vec<3, T, Q>(v.y, v.x, v.y);
388  }
389 
390  // yxz
391  template<typename T, qualifier Q>
392  GLM_INLINE glm::vec<3, T, Q> yxz(const glm::vec<3, T, Q> &v) {
393  return glm::vec<3, T, Q>(v.y, v.x, v.z);
394  }
395 
396  template<typename T, qualifier Q>
397  GLM_INLINE glm::vec<3, T, Q> yxz(const glm::vec<4, T, Q> &v) {
398  return glm::vec<3, T, Q>(v.y, v.x, v.z);
399  }
400 
401  // yxw
402  template<typename T, qualifier Q>
403  GLM_INLINE glm::vec<3, T, Q> yxw(const glm::vec<4, T, Q> &v) {
404  return glm::vec<3, T, Q>(v.y, v.x, v.w);
405  }
406 
407  // yyx
408  template<typename T, qualifier Q>
409  GLM_INLINE glm::vec<3, T, Q> yyx(const glm::vec<2, T, Q> &v) {
410  return glm::vec<3, T, Q>(v.y, v.y, v.x);
411  }
412 
413  template<typename T, qualifier Q>
414  GLM_INLINE glm::vec<3, T, Q> yyx(const glm::vec<3, T, Q> &v) {
415  return glm::vec<3, T, Q>(v.y, v.y, v.x);
416  }
417 
418  template<typename T, qualifier Q>
419  GLM_INLINE glm::vec<3, T, Q> yyx(const glm::vec<4, T, Q> &v) {
420  return glm::vec<3, T, Q>(v.y, v.y, v.x);
421  }
422 
423  // yyy
424  template<typename T, qualifier Q>
425  GLM_INLINE glm::vec<3, T, Q> yyy(const glm::vec<2, T, Q> &v) {
426  return glm::vec<3, T, Q>(v.y, v.y, v.y);
427  }
428 
429  template<typename T, qualifier Q>
430  GLM_INLINE glm::vec<3, T, Q> yyy(const glm::vec<3, T, Q> &v) {
431  return glm::vec<3, T, Q>(v.y, v.y, v.y);
432  }
433 
434  template<typename T, qualifier Q>
435  GLM_INLINE glm::vec<3, T, Q> yyy(const glm::vec<4, T, Q> &v) {
436  return glm::vec<3, T, Q>(v.y, v.y, v.y);
437  }
438 
439  // yyz
440  template<typename T, qualifier Q>
441  GLM_INLINE glm::vec<3, T, Q> yyz(const glm::vec<3, T, Q> &v) {
442  return glm::vec<3, T, Q>(v.y, v.y, v.z);
443  }
444 
445  template<typename T, qualifier Q>
446  GLM_INLINE glm::vec<3, T, Q> yyz(const glm::vec<4, T, Q> &v) {
447  return glm::vec<3, T, Q>(v.y, v.y, v.z);
448  }
449 
450  // yyw
451  template<typename T, qualifier Q>
452  GLM_INLINE glm::vec<3, T, Q> yyw(const glm::vec<4, T, Q> &v) {
453  return glm::vec<3, T, Q>(v.y, v.y, v.w);
454  }
455 
456  // yzx
457  template<typename T, qualifier Q>
458  GLM_INLINE glm::vec<3, T, Q> yzx(const glm::vec<3, T, Q> &v) {
459  return glm::vec<3, T, Q>(v.y, v.z, v.x);
460  }
461 
462  template<typename T, qualifier Q>
463  GLM_INLINE glm::vec<3, T, Q> yzx(const glm::vec<4, T, Q> &v) {
464  return glm::vec<3, T, Q>(v.y, v.z, v.x);
465  }
466 
467  // yzy
468  template<typename T, qualifier Q>
469  GLM_INLINE glm::vec<3, T, Q> yzy(const glm::vec<3, T, Q> &v) {
470  return glm::vec<3, T, Q>(v.y, v.z, v.y);
471  }
472 
473  template<typename T, qualifier Q>
474  GLM_INLINE glm::vec<3, T, Q> yzy(const glm::vec<4, T, Q> &v) {
475  return glm::vec<3, T, Q>(v.y, v.z, v.y);
476  }
477 
478  // yzz
479  template<typename T, qualifier Q>
480  GLM_INLINE glm::vec<3, T, Q> yzz(const glm::vec<3, T, Q> &v) {
481  return glm::vec<3, T, Q>(v.y, v.z, v.z);
482  }
483 
484  template<typename T, qualifier Q>
485  GLM_INLINE glm::vec<3, T, Q> yzz(const glm::vec<4, T, Q> &v) {
486  return glm::vec<3, T, Q>(v.y, v.z, v.z);
487  }
488 
489  // yzw
490  template<typename T, qualifier Q>
491  GLM_INLINE glm::vec<3, T, Q> yzw(const glm::vec<4, T, Q> &v) {
492  return glm::vec<3, T, Q>(v.y, v.z, v.w);
493  }
494 
495  // ywx
496  template<typename T, qualifier Q>
497  GLM_INLINE glm::vec<3, T, Q> ywx(const glm::vec<4, T, Q> &v) {
498  return glm::vec<3, T, Q>(v.y, v.w, v.x);
499  }
500 
501  // ywy
502  template<typename T, qualifier Q>
503  GLM_INLINE glm::vec<3, T, Q> ywy(const glm::vec<4, T, Q> &v) {
504  return glm::vec<3, T, Q>(v.y, v.w, v.y);
505  }
506 
507  // ywz
508  template<typename T, qualifier Q>
509  GLM_INLINE glm::vec<3, T, Q> ywz(const glm::vec<4, T, Q> &v) {
510  return glm::vec<3, T, Q>(v.y, v.w, v.z);
511  }
512 
513  // yww
514  template<typename T, qualifier Q>
515  GLM_INLINE glm::vec<3, T, Q> yww(const glm::vec<4, T, Q> &v) {
516  return glm::vec<3, T, Q>(v.y, v.w, v.w);
517  }
518 
519  // zxx
520  template<typename T, qualifier Q>
521  GLM_INLINE glm::vec<3, T, Q> zxx(const glm::vec<3, T, Q> &v) {
522  return glm::vec<3, T, Q>(v.z, v.x, v.x);
523  }
524 
525  template<typename T, qualifier Q>
526  GLM_INLINE glm::vec<3, T, Q> zxx(const glm::vec<4, T, Q> &v) {
527  return glm::vec<3, T, Q>(v.z, v.x, v.x);
528  }
529 
530  // zxy
531  template<typename T, qualifier Q>
532  GLM_INLINE glm::vec<3, T, Q> zxy(const glm::vec<3, T, Q> &v) {
533  return glm::vec<3, T, Q>(v.z, v.x, v.y);
534  }
535 
536  template<typename T, qualifier Q>
537  GLM_INLINE glm::vec<3, T, Q> zxy(const glm::vec<4, T, Q> &v) {
538  return glm::vec<3, T, Q>(v.z, v.x, v.y);
539  }
540 
541  // zxz
542  template<typename T, qualifier Q>
543  GLM_INLINE glm::vec<3, T, Q> zxz(const glm::vec<3, T, Q> &v) {
544  return glm::vec<3, T, Q>(v.z, v.x, v.z);
545  }
546 
547  template<typename T, qualifier Q>
548  GLM_INLINE glm::vec<3, T, Q> zxz(const glm::vec<4, T, Q> &v) {
549  return glm::vec<3, T, Q>(v.z, v.x, v.z);
550  }
551 
552  // zxw
553  template<typename T, qualifier Q>
554  GLM_INLINE glm::vec<3, T, Q> zxw(const glm::vec<4, T, Q> &v) {
555  return glm::vec<3, T, Q>(v.z, v.x, v.w);
556  }
557 
558  // zyx
559  template<typename T, qualifier Q>
560  GLM_INLINE glm::vec<3, T, Q> zyx(const glm::vec<3, T, Q> &v) {
561  return glm::vec<3, T, Q>(v.z, v.y, v.x);
562  }
563 
564  template<typename T, qualifier Q>
565  GLM_INLINE glm::vec<3, T, Q> zyx(const glm::vec<4, T, Q> &v) {
566  return glm::vec<3, T, Q>(v.z, v.y, v.x);
567  }
568 
569  // zyy
570  template<typename T, qualifier Q>
571  GLM_INLINE glm::vec<3, T, Q> zyy(const glm::vec<3, T, Q> &v) {
572  return glm::vec<3, T, Q>(v.z, v.y, v.y);
573  }
574 
575  template<typename T, qualifier Q>
576  GLM_INLINE glm::vec<3, T, Q> zyy(const glm::vec<4, T, Q> &v) {
577  return glm::vec<3, T, Q>(v.z, v.y, v.y);
578  }
579 
580  // zyz
581  template<typename T, qualifier Q>
582  GLM_INLINE glm::vec<3, T, Q> zyz(const glm::vec<3, T, Q> &v) {
583  return glm::vec<3, T, Q>(v.z, v.y, v.z);
584  }
585 
586  template<typename T, qualifier Q>
587  GLM_INLINE glm::vec<3, T, Q> zyz(const glm::vec<4, T, Q> &v) {
588  return glm::vec<3, T, Q>(v.z, v.y, v.z);
589  }
590 
591  // zyw
592  template<typename T, qualifier Q>
593  GLM_INLINE glm::vec<3, T, Q> zyw(const glm::vec<4, T, Q> &v) {
594  return glm::vec<3, T, Q>(v.z, v.y, v.w);
595  }
596 
597  // zzx
598  template<typename T, qualifier Q>
599  GLM_INLINE glm::vec<3, T, Q> zzx(const glm::vec<3, T, Q> &v) {
600  return glm::vec<3, T, Q>(v.z, v.z, v.x);
601  }
602 
603  template<typename T, qualifier Q>
604  GLM_INLINE glm::vec<3, T, Q> zzx(const glm::vec<4, T, Q> &v) {
605  return glm::vec<3, T, Q>(v.z, v.z, v.x);
606  }
607 
608  // zzy
609  template<typename T, qualifier Q>
610  GLM_INLINE glm::vec<3, T, Q> zzy(const glm::vec<3, T, Q> &v) {
611  return glm::vec<3, T, Q>(v.z, v.z, v.y);
612  }
613 
614  template<typename T, qualifier Q>
615  GLM_INLINE glm::vec<3, T, Q> zzy(const glm::vec<4, T, Q> &v) {
616  return glm::vec<3, T, Q>(v.z, v.z, v.y);
617  }
618 
619  // zzz
620  template<typename T, qualifier Q>
621  GLM_INLINE glm::vec<3, T, Q> zzz(const glm::vec<3, T, Q> &v) {
622  return glm::vec<3, T, Q>(v.z, v.z, v.z);
623  }
624 
625  template<typename T, qualifier Q>
626  GLM_INLINE glm::vec<3, T, Q> zzz(const glm::vec<4, T, Q> &v) {
627  return glm::vec<3, T, Q>(v.z, v.z, v.z);
628  }
629 
630  // zzw
631  template<typename T, qualifier Q>
632  GLM_INLINE glm::vec<3, T, Q> zzw(const glm::vec<4, T, Q> &v) {
633  return glm::vec<3, T, Q>(v.z, v.z, v.w);
634  }
635 
636  // zwx
637  template<typename T, qualifier Q>
638  GLM_INLINE glm::vec<3, T, Q> zwx(const glm::vec<4, T, Q> &v) {
639  return glm::vec<3, T, Q>(v.z, v.w, v.x);
640  }
641 
642  // zwy
643  template<typename T, qualifier Q>
644  GLM_INLINE glm::vec<3, T, Q> zwy(const glm::vec<4, T, Q> &v) {
645  return glm::vec<3, T, Q>(v.z, v.w, v.y);
646  }
647 
648  // zwz
649  template<typename T, qualifier Q>
650  GLM_INLINE glm::vec<3, T, Q> zwz(const glm::vec<4, T, Q> &v) {
651  return glm::vec<3, T, Q>(v.z, v.w, v.z);
652  }
653 
654  // zww
655  template<typename T, qualifier Q>
656  GLM_INLINE glm::vec<3, T, Q> zww(const glm::vec<4, T, Q> &v) {
657  return glm::vec<3, T, Q>(v.z, v.w, v.w);
658  }
659 
660  // wxx
661  template<typename T, qualifier Q>
662  GLM_INLINE glm::vec<3, T, Q> wxx(const glm::vec<4, T, Q> &v) {
663  return glm::vec<3, T, Q>(v.w, v.x, v.x);
664  }
665 
666  // wxy
667  template<typename T, qualifier Q>
668  GLM_INLINE glm::vec<3, T, Q> wxy(const glm::vec<4, T, Q> &v) {
669  return glm::vec<3, T, Q>(v.w, v.x, v.y);
670  }
671 
672  // wxz
673  template<typename T, qualifier Q>
674  GLM_INLINE glm::vec<3, T, Q> wxz(const glm::vec<4, T, Q> &v) {
675  return glm::vec<3, T, Q>(v.w, v.x, v.z);
676  }
677 
678  // wxw
679  template<typename T, qualifier Q>
680  GLM_INLINE glm::vec<3, T, Q> wxw(const glm::vec<4, T, Q> &v) {
681  return glm::vec<3, T, Q>(v.w, v.x, v.w);
682  }
683 
684  // wyx
685  template<typename T, qualifier Q>
686  GLM_INLINE glm::vec<3, T, Q> wyx(const glm::vec<4, T, Q> &v) {
687  return glm::vec<3, T, Q>(v.w, v.y, v.x);
688  }
689 
690  // wyy
691  template<typename T, qualifier Q>
692  GLM_INLINE glm::vec<3, T, Q> wyy(const glm::vec<4, T, Q> &v) {
693  return glm::vec<3, T, Q>(v.w, v.y, v.y);
694  }
695 
696  // wyz
697  template<typename T, qualifier Q>
698  GLM_INLINE glm::vec<3, T, Q> wyz(const glm::vec<4, T, Q> &v) {
699  return glm::vec<3, T, Q>(v.w, v.y, v.z);
700  }
701 
702  // wyw
703  template<typename T, qualifier Q>
704  GLM_INLINE glm::vec<3, T, Q> wyw(const glm::vec<4, T, Q> &v) {
705  return glm::vec<3, T, Q>(v.w, v.y, v.w);
706  }
707 
708  // wzx
709  template<typename T, qualifier Q>
710  GLM_INLINE glm::vec<3, T, Q> wzx(const glm::vec<4, T, Q> &v) {
711  return glm::vec<3, T, Q>(v.w, v.z, v.x);
712  }
713 
714  // wzy
715  template<typename T, qualifier Q>
716  GLM_INLINE glm::vec<3, T, Q> wzy(const glm::vec<4, T, Q> &v) {
717  return glm::vec<3, T, Q>(v.w, v.z, v.y);
718  }
719 
720  // wzz
721  template<typename T, qualifier Q>
722  GLM_INLINE glm::vec<3, T, Q> wzz(const glm::vec<4, T, Q> &v) {
723  return glm::vec<3, T, Q>(v.w, v.z, v.z);
724  }
725 
726  // wzw
727  template<typename T, qualifier Q>
728  GLM_INLINE glm::vec<3, T, Q> wzw(const glm::vec<4, T, Q> &v) {
729  return glm::vec<3, T, Q>(v.w, v.z, v.w);
730  }
731 
732  // wwx
733  template<typename T, qualifier Q>
734  GLM_INLINE glm::vec<3, T, Q> wwx(const glm::vec<4, T, Q> &v) {
735  return glm::vec<3, T, Q>(v.w, v.w, v.x);
736  }
737 
738  // wwy
739  template<typename T, qualifier Q>
740  GLM_INLINE glm::vec<3, T, Q> wwy(const glm::vec<4, T, Q> &v) {
741  return glm::vec<3, T, Q>(v.w, v.w, v.y);
742  }
743 
744  // wwz
745  template<typename T, qualifier Q>
746  GLM_INLINE glm::vec<3, T, Q> wwz(const glm::vec<4, T, Q> &v) {
747  return glm::vec<3, T, Q>(v.w, v.w, v.z);
748  }
749 
750  // www
751  template<typename T, qualifier Q>
752  GLM_INLINE glm::vec<3, T, Q> www(const glm::vec<4, T, Q> &v) {
753  return glm::vec<3, T, Q>(v.w, v.w, v.w);
754  }
755 
756  // xxxx
757  template<typename T, qualifier Q>
758  GLM_INLINE glm::vec<4, T, Q> xxxx(const glm::vec<1, T, Q> &v) {
759  return glm::vec<4, T, Q>(v.x, v.x, v.x, v.x);
760  }
761 
762  template<typename T, qualifier Q>
763  GLM_INLINE glm::vec<4, T, Q> xxxx(const glm::vec<2, T, Q> &v) {
764  return glm::vec<4, T, Q>(v.x, v.x, v.x, v.x);
765  }
766 
767  template<typename T, qualifier Q>
768  GLM_INLINE glm::vec<4, T, Q> xxxx(const glm::vec<3, T, Q> &v) {
769  return glm::vec<4, T, Q>(v.x, v.x, v.x, v.x);
770  }
771 
772  template<typename T, qualifier Q>
773  GLM_INLINE glm::vec<4, T, Q> xxxx(const glm::vec<4, T, Q> &v) {
774  return glm::vec<4, T, Q>(v.x, v.x, v.x, v.x);
775  }
776 
777  // xxxy
778  template<typename T, qualifier Q>
779  GLM_INLINE glm::vec<4, T, Q> xxxy(const glm::vec<2, T, Q> &v) {
780  return glm::vec<4, T, Q>(v.x, v.x, v.x, v.y);
781  }
782 
783  template<typename T, qualifier Q>
784  GLM_INLINE glm::vec<4, T, Q> xxxy(const glm::vec<3, T, Q> &v) {
785  return glm::vec<4, T, Q>(v.x, v.x, v.x, v.y);
786  }
787 
788  template<typename T, qualifier Q>
789  GLM_INLINE glm::vec<4, T, Q> xxxy(const glm::vec<4, T, Q> &v) {
790  return glm::vec<4, T, Q>(v.x, v.x, v.x, v.y);
791  }
792 
793  // xxxz
794  template<typename T, qualifier Q>
795  GLM_INLINE glm::vec<4, T, Q> xxxz(const glm::vec<3, T, Q> &v) {
796  return glm::vec<4, T, Q>(v.x, v.x, v.x, v.z);
797  }
798 
799  template<typename T, qualifier Q>
800  GLM_INLINE glm::vec<4, T, Q> xxxz(const glm::vec<4, T, Q> &v) {
801  return glm::vec<4, T, Q>(v.x, v.x, v.x, v.z);
802  }
803 
804  // xxxw
805  template<typename T, qualifier Q>
806  GLM_INLINE glm::vec<4, T, Q> xxxw(const glm::vec<4, T, Q> &v) {
807  return glm::vec<4, T, Q>(v.x, v.x, v.x, v.w);
808  }
809 
810  // xxyx
811  template<typename T, qualifier Q>
812  GLM_INLINE glm::vec<4, T, Q> xxyx(const glm::vec<2, T, Q> &v) {
813  return glm::vec<4, T, Q>(v.x, v.x, v.y, v.x);
814  }
815 
816  template<typename T, qualifier Q>
817  GLM_INLINE glm::vec<4, T, Q> xxyx(const glm::vec<3, T, Q> &v) {
818  return glm::vec<4, T, Q>(v.x, v.x, v.y, v.x);
819  }
820 
821  template<typename T, qualifier Q>
822  GLM_INLINE glm::vec<4, T, Q> xxyx(const glm::vec<4, T, Q> &v) {
823  return glm::vec<4, T, Q>(v.x, v.x, v.y, v.x);
824  }
825 
826  // xxyy
827  template<typename T, qualifier Q>
828  GLM_INLINE glm::vec<4, T, Q> xxyy(const glm::vec<2, T, Q> &v) {
829  return glm::vec<4, T, Q>(v.x, v.x, v.y, v.y);
830  }
831 
832  template<typename T, qualifier Q>
833  GLM_INLINE glm::vec<4, T, Q> xxyy(const glm::vec<3, T, Q> &v) {
834  return glm::vec<4, T, Q>(v.x, v.x, v.y, v.y);
835  }
836 
837  template<typename T, qualifier Q>
838  GLM_INLINE glm::vec<4, T, Q> xxyy(const glm::vec<4, T, Q> &v) {
839  return glm::vec<4, T, Q>(v.x, v.x, v.y, v.y);
840  }
841 
842  // xxyz
843  template<typename T, qualifier Q>
844  GLM_INLINE glm::vec<4, T, Q> xxyz(const glm::vec<3, T, Q> &v) {
845  return glm::vec<4, T, Q>(v.x, v.x, v.y, v.z);
846  }
847 
848  template<typename T, qualifier Q>
849  GLM_INLINE glm::vec<4, T, Q> xxyz(const glm::vec<4, T, Q> &v) {
850  return glm::vec<4, T, Q>(v.x, v.x, v.y, v.z);
851  }
852 
853  // xxyw
854  template<typename T, qualifier Q>
855  GLM_INLINE glm::vec<4, T, Q> xxyw(const glm::vec<4, T, Q> &v) {
856  return glm::vec<4, T, Q>(v.x, v.x, v.y, v.w);
857  }
858 
859  // xxzx
860  template<typename T, qualifier Q>
861  GLM_INLINE glm::vec<4, T, Q> xxzx(const glm::vec<3, T, Q> &v) {
862  return glm::vec<4, T, Q>(v.x, v.x, v.z, v.x);
863  }
864 
865  template<typename T, qualifier Q>
866  GLM_INLINE glm::vec<4, T, Q> xxzx(const glm::vec<4, T, Q> &v) {
867  return glm::vec<4, T, Q>(v.x, v.x, v.z, v.x);
868  }
869 
870  // xxzy
871  template<typename T, qualifier Q>
872  GLM_INLINE glm::vec<4, T, Q> xxzy(const glm::vec<3, T, Q> &v) {
873  return glm::vec<4, T, Q>(v.x, v.x, v.z, v.y);
874  }
875 
876  template<typename T, qualifier Q>
877  GLM_INLINE glm::vec<4, T, Q> xxzy(const glm::vec<4, T, Q> &v) {
878  return glm::vec<4, T, Q>(v.x, v.x, v.z, v.y);
879  }
880 
881  // xxzz
882  template<typename T, qualifier Q>
883  GLM_INLINE glm::vec<4, T, Q> xxzz(const glm::vec<3, T, Q> &v) {
884  return glm::vec<4, T, Q>(v.x, v.x, v.z, v.z);
885  }
886 
887  template<typename T, qualifier Q>
888  GLM_INLINE glm::vec<4, T, Q> xxzz(const glm::vec<4, T, Q> &v) {
889  return glm::vec<4, T, Q>(v.x, v.x, v.z, v.z);
890  }
891 
892  // xxzw
893  template<typename T, qualifier Q>
894  GLM_INLINE glm::vec<4, T, Q> xxzw(const glm::vec<4, T, Q> &v) {
895  return glm::vec<4, T, Q>(v.x, v.x, v.z, v.w);
896  }
897 
898  // xxwx
899  template<typename T, qualifier Q>
900  GLM_INLINE glm::vec<4, T, Q> xxwx(const glm::vec<4, T, Q> &v) {
901  return glm::vec<4, T, Q>(v.x, v.x, v.w, v.x);
902  }
903 
904  // xxwy
905  template<typename T, qualifier Q>
906  GLM_INLINE glm::vec<4, T, Q> xxwy(const glm::vec<4, T, Q> &v) {
907  return glm::vec<4, T, Q>(v.x, v.x, v.w, v.y);
908  }
909 
910  // xxwz
911  template<typename T, qualifier Q>
912  GLM_INLINE glm::vec<4, T, Q> xxwz(const glm::vec<4, T, Q> &v) {
913  return glm::vec<4, T, Q>(v.x, v.x, v.w, v.z);
914  }
915 
916  // xxww
917  template<typename T, qualifier Q>
918  GLM_INLINE glm::vec<4, T, Q> xxww(const glm::vec<4, T, Q> &v) {
919  return glm::vec<4, T, Q>(v.x, v.x, v.w, v.w);
920  }
921 
922  // xyxx
923  template<typename T, qualifier Q>
924  GLM_INLINE glm::vec<4, T, Q> xyxx(const glm::vec<2, T, Q> &v) {
925  return glm::vec<4, T, Q>(v.x, v.y, v.x, v.x);
926  }
927 
928  template<typename T, qualifier Q>
929  GLM_INLINE glm::vec<4, T, Q> xyxx(const glm::vec<3, T, Q> &v) {
930  return glm::vec<4, T, Q>(v.x, v.y, v.x, v.x);
931  }
932 
933  template<typename T, qualifier Q>
934  GLM_INLINE glm::vec<4, T, Q> xyxx(const glm::vec<4, T, Q> &v) {
935  return glm::vec<4, T, Q>(v.x, v.y, v.x, v.x);
936  }
937 
938  // xyxy
939  template<typename T, qualifier Q>
940  GLM_INLINE glm::vec<4, T, Q> xyxy(const glm::vec<2, T, Q> &v) {
941  return glm::vec<4, T, Q>(v.x, v.y, v.x, v.y);
942  }
943 
944  template<typename T, qualifier Q>
945  GLM_INLINE glm::vec<4, T, Q> xyxy(const glm::vec<3, T, Q> &v) {
946  return glm::vec<4, T, Q>(v.x, v.y, v.x, v.y);
947  }
948 
949  template<typename T, qualifier Q>
950  GLM_INLINE glm::vec<4, T, Q> xyxy(const glm::vec<4, T, Q> &v) {
951  return glm::vec<4, T, Q>(v.x, v.y, v.x, v.y);
952  }
953 
954  // xyxz
955  template<typename T, qualifier Q>
956  GLM_INLINE glm::vec<4, T, Q> xyxz(const glm::vec<3, T, Q> &v) {
957  return glm::vec<4, T, Q>(v.x, v.y, v.x, v.z);
958  }
959 
960  template<typename T, qualifier Q>
961  GLM_INLINE glm::vec<4, T, Q> xyxz(const glm::vec<4, T, Q> &v) {
962  return glm::vec<4, T, Q>(v.x, v.y, v.x, v.z);
963  }
964 
965  // xyxw
966  template<typename T, qualifier Q>
967  GLM_INLINE glm::vec<4, T, Q> xyxw(const glm::vec<4, T, Q> &v) {
968  return glm::vec<4, T, Q>(v.x, v.y, v.x, v.w);
969  }
970 
971  // xyyx
972  template<typename T, qualifier Q>
973  GLM_INLINE glm::vec<4, T, Q> xyyx(const glm::vec<2, T, Q> &v) {
974  return glm::vec<4, T, Q>(v.x, v.y, v.y, v.x);
975  }
976 
977  template<typename T, qualifier Q>
978  GLM_INLINE glm::vec<4, T, Q> xyyx(const glm::vec<3, T, Q> &v) {
979  return glm::vec<4, T, Q>(v.x, v.y, v.y, v.x);
980  }
981 
982  template<typename T, qualifier Q>
983  GLM_INLINE glm::vec<4, T, Q> xyyx(const glm::vec<4, T, Q> &v) {
984  return glm::vec<4, T, Q>(v.x, v.y, v.y, v.x);
985  }
986 
987  // xyyy
988  template<typename T, qualifier Q>
989  GLM_INLINE glm::vec<4, T, Q> xyyy(const glm::vec<2, T, Q> &v) {
990  return glm::vec<4, T, Q>(v.x, v.y, v.y, v.y);
991  }
992 
993  template<typename T, qualifier Q>
994  GLM_INLINE glm::vec<4, T, Q> xyyy(const glm::vec<3, T, Q> &v) {
995  return glm::vec<4, T, Q>(v.x, v.y, v.y, v.y);
996  }
997 
998  template<typename T, qualifier Q>
999  GLM_INLINE glm::vec<4, T, Q> xyyy(const glm::vec<4, T, Q> &v) {
1000  return glm::vec<4, T, Q>(v.x, v.y, v.y, v.y);
1001  }
1002 
1003  // xyyz
1004  template<typename T, qualifier Q>
1005  GLM_INLINE glm::vec<4, T, Q> xyyz(const glm::vec<3, T, Q> &v) {
1006  return glm::vec<4, T, Q>(v.x, v.y, v.y, v.z);
1007  }
1008 
1009  template<typename T, qualifier Q>
1010  GLM_INLINE glm::vec<4, T, Q> xyyz(const glm::vec<4, T, Q> &v) {
1011  return glm::vec<4, T, Q>(v.x, v.y, v.y, v.z);
1012  }
1013 
1014  // xyyw
1015  template<typename T, qualifier Q>
1016  GLM_INLINE glm::vec<4, T, Q> xyyw(const glm::vec<4, T, Q> &v) {
1017  return glm::vec<4, T, Q>(v.x, v.y, v.y, v.w);
1018  }
1019 
1020  // xyzx
1021  template<typename T, qualifier Q>
1022  GLM_INLINE glm::vec<4, T, Q> xyzx(const glm::vec<3, T, Q> &v) {
1023  return glm::vec<4, T, Q>(v.x, v.y, v.z, v.x);
1024  }
1025 
1026  template<typename T, qualifier Q>
1027  GLM_INLINE glm::vec<4, T, Q> xyzx(const glm::vec<4, T, Q> &v) {
1028  return glm::vec<4, T, Q>(v.x, v.y, v.z, v.x);
1029  }
1030 
1031  // xyzy
1032  template<typename T, qualifier Q>
1033  GLM_INLINE glm::vec<4, T, Q> xyzy(const glm::vec<3, T, Q> &v) {
1034  return glm::vec<4, T, Q>(v.x, v.y, v.z, v.y);
1035  }
1036 
1037  template<typename T, qualifier Q>
1038  GLM_INLINE glm::vec<4, T, Q> xyzy(const glm::vec<4, T, Q> &v) {
1039  return glm::vec<4, T, Q>(v.x, v.y, v.z, v.y);
1040  }
1041 
1042  // xyzz
1043  template<typename T, qualifier Q>
1044  GLM_INLINE glm::vec<4, T, Q> xyzz(const glm::vec<3, T, Q> &v) {
1045  return glm::vec<4, T, Q>(v.x, v.y, v.z, v.z);
1046  }
1047 
1048  template<typename T, qualifier Q>
1049  GLM_INLINE glm::vec<4, T, Q> xyzz(const glm::vec<4, T, Q> &v) {
1050  return glm::vec<4, T, Q>(v.x, v.y, v.z, v.z);
1051  }
1052 
1053  // xyzw
1054  template<typename T, qualifier Q>
1055  GLM_INLINE glm::vec<4, T, Q> xyzw(const glm::vec<4, T, Q> &v) {
1056  return glm::vec<4, T, Q>(v.x, v.y, v.z, v.w);
1057  }
1058 
1059  // xywx
1060  template<typename T, qualifier Q>
1061  GLM_INLINE glm::vec<4, T, Q> xywx(const glm::vec<4, T, Q> &v) {
1062  return glm::vec<4, T, Q>(v.x, v.y, v.w, v.x);
1063  }
1064 
1065  // xywy
1066  template<typename T, qualifier Q>
1067  GLM_INLINE glm::vec<4, T, Q> xywy(const glm::vec<4, T, Q> &v) {
1068  return glm::vec<4, T, Q>(v.x, v.y, v.w, v.y);
1069  }
1070 
1071  // xywz
1072  template<typename T, qualifier Q>
1073  GLM_INLINE glm::vec<4, T, Q> xywz(const glm::vec<4, T, Q> &v) {
1074  return glm::vec<4, T, Q>(v.x, v.y, v.w, v.z);
1075  }
1076 
1077  // xyww
1078  template<typename T, qualifier Q>
1079  GLM_INLINE glm::vec<4, T, Q> xyww(const glm::vec<4, T, Q> &v) {
1080  return glm::vec<4, T, Q>(v.x, v.y, v.w, v.w);
1081  }
1082 
1083  // xzxx
1084  template<typename T, qualifier Q>
1085  GLM_INLINE glm::vec<4, T, Q> xzxx(const glm::vec<3, T, Q> &v) {
1086  return glm::vec<4, T, Q>(v.x, v.z, v.x, v.x);
1087  }
1088 
1089  template<typename T, qualifier Q>
1090  GLM_INLINE glm::vec<4, T, Q> xzxx(const glm::vec<4, T, Q> &v) {
1091  return glm::vec<4, T, Q>(v.x, v.z, v.x, v.x);
1092  }
1093 
1094  // xzxy
1095  template<typename T, qualifier Q>
1096  GLM_INLINE glm::vec<4, T, Q> xzxy(const glm::vec<3, T, Q> &v) {
1097  return glm::vec<4, T, Q>(v.x, v.z, v.x, v.y);
1098  }
1099 
1100  template<typename T, qualifier Q>
1101  GLM_INLINE glm::vec<4, T, Q> xzxy(const glm::vec<4, T, Q> &v) {
1102  return glm::vec<4, T, Q>(v.x, v.z, v.x, v.y);
1103  }
1104 
1105  // xzxz
1106  template<typename T, qualifier Q>
1107  GLM_INLINE glm::vec<4, T, Q> xzxz(const glm::vec<3, T, Q> &v) {
1108  return glm::vec<4, T, Q>(v.x, v.z, v.x, v.z);
1109  }
1110 
1111  template<typename T, qualifier Q>
1112  GLM_INLINE glm::vec<4, T, Q> xzxz(const glm::vec<4, T, Q> &v) {
1113  return glm::vec<4, T, Q>(v.x, v.z, v.x, v.z);
1114  }
1115 
1116  // xzxw
1117  template<typename T, qualifier Q>
1118  GLM_INLINE glm::vec<4, T, Q> xzxw(const glm::vec<4, T, Q> &v) {
1119  return glm::vec<4, T, Q>(v.x, v.z, v.x, v.w);
1120  }
1121 
1122  // xzyx
1123  template<typename T, qualifier Q>
1124  GLM_INLINE glm::vec<4, T, Q> xzyx(const glm::vec<3, T, Q> &v) {
1125  return glm::vec<4, T, Q>(v.x, v.z, v.y, v.x);
1126  }
1127 
1128  template<typename T, qualifier Q>
1129  GLM_INLINE glm::vec<4, T, Q> xzyx(const glm::vec<4, T, Q> &v) {
1130  return glm::vec<4, T, Q>(v.x, v.z, v.y, v.x);
1131  }
1132 
1133  // xzyy
1134  template<typename T, qualifier Q>
1135  GLM_INLINE glm::vec<4, T, Q> xzyy(const glm::vec<3, T, Q> &v) {
1136  return glm::vec<4, T, Q>(v.x, v.z, v.y, v.y);
1137  }
1138 
1139  template<typename T, qualifier Q>
1140  GLM_INLINE glm::vec<4, T, Q> xzyy(const glm::vec<4, T, Q> &v) {
1141  return glm::vec<4, T, Q>(v.x, v.z, v.y, v.y);
1142  }
1143 
1144  // xzyz
1145  template<typename T, qualifier Q>
1146  GLM_INLINE glm::vec<4, T, Q> xzyz(const glm::vec<3, T, Q> &v) {
1147  return glm::vec<4, T, Q>(v.x, v.z, v.y, v.z);
1148  }
1149 
1150  template<typename T, qualifier Q>
1151  GLM_INLINE glm::vec<4, T, Q> xzyz(const glm::vec<4, T, Q> &v) {
1152  return glm::vec<4, T, Q>(v.x, v.z, v.y, v.z);
1153  }
1154 
1155  // xzyw
1156  template<typename T, qualifier Q>
1157  GLM_INLINE glm::vec<4, T, Q> xzyw(const glm::vec<4, T, Q> &v) {
1158  return glm::vec<4, T, Q>(v.x, v.z, v.y, v.w);
1159  }
1160 
1161  // xzzx
1162  template<typename T, qualifier Q>
1163  GLM_INLINE glm::vec<4, T, Q> xzzx(const glm::vec<3, T, Q> &v) {
1164  return glm::vec<4, T, Q>(v.x, v.z, v.z, v.x);
1165  }
1166 
1167  template<typename T, qualifier Q>
1168  GLM_INLINE glm::vec<4, T, Q> xzzx(const glm::vec<4, T, Q> &v) {
1169  return glm::vec<4, T, Q>(v.x, v.z, v.z, v.x);
1170  }
1171 
1172  // xzzy
1173  template<typename T, qualifier Q>
1174  GLM_INLINE glm::vec<4, T, Q> xzzy(const glm::vec<3, T, Q> &v) {
1175  return glm::vec<4, T, Q>(v.x, v.z, v.z, v.y);
1176  }
1177 
1178  template<typename T, qualifier Q>
1179  GLM_INLINE glm::vec<4, T, Q> xzzy(const glm::vec<4, T, Q> &v) {
1180  return glm::vec<4, T, Q>(v.x, v.z, v.z, v.y);
1181  }
1182 
1183  // xzzz
1184  template<typename T, qualifier Q>
1185  GLM_INLINE glm::vec<4, T, Q> xzzz(const glm::vec<3, T, Q> &v) {
1186  return glm::vec<4, T, Q>(v.x, v.z, v.z, v.z);
1187  }
1188 
1189  template<typename T, qualifier Q>
1190  GLM_INLINE glm::vec<4, T, Q> xzzz(const glm::vec<4, T, Q> &v) {
1191  return glm::vec<4, T, Q>(v.x, v.z, v.z, v.z);
1192  }
1193 
1194  // xzzw
1195  template<typename T, qualifier Q>
1196  GLM_INLINE glm::vec<4, T, Q> xzzw(const glm::vec<4, T, Q> &v) {
1197  return glm::vec<4, T, Q>(v.x, v.z, v.z, v.w);
1198  }
1199 
1200  // xzwx
1201  template<typename T, qualifier Q>
1202  GLM_INLINE glm::vec<4, T, Q> xzwx(const glm::vec<4, T, Q> &v) {
1203  return glm::vec<4, T, Q>(v.x, v.z, v.w, v.x);
1204  }
1205 
1206  // xzwy
1207  template<typename T, qualifier Q>
1208  GLM_INLINE glm::vec<4, T, Q> xzwy(const glm::vec<4, T, Q> &v) {
1209  return glm::vec<4, T, Q>(v.x, v.z, v.w, v.y);
1210  }
1211 
1212  // xzwz
1213  template<typename T, qualifier Q>
1214  GLM_INLINE glm::vec<4, T, Q> xzwz(const glm::vec<4, T, Q> &v) {
1215  return glm::vec<4, T, Q>(v.x, v.z, v.w, v.z);
1216  }
1217 
1218  // xzww
1219  template<typename T, qualifier Q>
1220  GLM_INLINE glm::vec<4, T, Q> xzww(const glm::vec<4, T, Q> &v) {
1221  return glm::vec<4, T, Q>(v.x, v.z, v.w, v.w);
1222  }
1223 
1224  // xwxx
1225  template<typename T, qualifier Q>
1226  GLM_INLINE glm::vec<4, T, Q> xwxx(const glm::vec<4, T, Q> &v) {
1227  return glm::vec<4, T, Q>(v.x, v.w, v.x, v.x);
1228  }
1229 
1230  // xwxy
1231  template<typename T, qualifier Q>
1232  GLM_INLINE glm::vec<4, T, Q> xwxy(const glm::vec<4, T, Q> &v) {
1233  return glm::vec<4, T, Q>(v.x, v.w, v.x, v.y);
1234  }
1235 
1236  // xwxz
1237  template<typename T, qualifier Q>
1238  GLM_INLINE glm::vec<4, T, Q> xwxz(const glm::vec<4, T, Q> &v) {
1239  return glm::vec<4, T, Q>(v.x, v.w, v.x, v.z);
1240  }
1241 
1242  // xwxw
1243  template<typename T, qualifier Q>
1244  GLM_INLINE glm::vec<4, T, Q> xwxw(const glm::vec<4, T, Q> &v) {
1245  return glm::vec<4, T, Q>(v.x, v.w, v.x, v.w);
1246  }
1247 
1248  // xwyx
1249  template<typename T, qualifier Q>
1250  GLM_INLINE glm::vec<4, T, Q> xwyx(const glm::vec<4, T, Q> &v) {
1251  return glm::vec<4, T, Q>(v.x, v.w, v.y, v.x);
1252  }
1253 
1254  // xwyy
1255  template<typename T, qualifier Q>
1256  GLM_INLINE glm::vec<4, T, Q> xwyy(const glm::vec<4, T, Q> &v) {
1257  return glm::vec<4, T, Q>(v.x, v.w, v.y, v.y);
1258  }
1259 
1260  // xwyz
1261  template<typename T, qualifier Q>
1262  GLM_INLINE glm::vec<4, T, Q> xwyz(const glm::vec<4, T, Q> &v) {
1263  return glm::vec<4, T, Q>(v.x, v.w, v.y, v.z);
1264  }
1265 
1266  // xwyw
1267  template<typename T, qualifier Q>
1268  GLM_INLINE glm::vec<4, T, Q> xwyw(const glm::vec<4, T, Q> &v) {
1269  return glm::vec<4, T, Q>(v.x, v.w, v.y, v.w);
1270  }
1271 
1272  // xwzx
1273  template<typename T, qualifier Q>
1274  GLM_INLINE glm::vec<4, T, Q> xwzx(const glm::vec<4, T, Q> &v) {
1275  return glm::vec<4, T, Q>(v.x, v.w, v.z, v.x);
1276  }
1277 
1278  // xwzy
1279  template<typename T, qualifier Q>
1280  GLM_INLINE glm::vec<4, T, Q> xwzy(const glm::vec<4, T, Q> &v) {
1281  return glm::vec<4, T, Q>(v.x, v.w, v.z, v.y);
1282  }
1283 
1284  // xwzz
1285  template<typename T, qualifier Q>
1286  GLM_INLINE glm::vec<4, T, Q> xwzz(const glm::vec<4, T, Q> &v) {
1287  return glm::vec<4, T, Q>(v.x, v.w, v.z, v.z);
1288  }
1289 
1290  // xwzw
1291  template<typename T, qualifier Q>
1292  GLM_INLINE glm::vec<4, T, Q> xwzw(const glm::vec<4, T, Q> &v) {
1293  return glm::vec<4, T, Q>(v.x, v.w, v.z, v.w);
1294  }
1295 
1296  // xwwx
1297  template<typename T, qualifier Q>
1298  GLM_INLINE glm::vec<4, T, Q> xwwx(const glm::vec<4, T, Q> &v) {
1299  return glm::vec<4, T, Q>(v.x, v.w, v.w, v.x);
1300  }
1301 
1302  // xwwy
1303  template<typename T, qualifier Q>
1304  GLM_INLINE glm::vec<4, T, Q> xwwy(const glm::vec<4, T, Q> &v) {
1305  return glm::vec<4, T, Q>(v.x, v.w, v.w, v.y);
1306  }
1307 
1308  // xwwz
1309  template<typename T, qualifier Q>
1310  GLM_INLINE glm::vec<4, T, Q> xwwz(const glm::vec<4, T, Q> &v) {
1311  return glm::vec<4, T, Q>(v.x, v.w, v.w, v.z);
1312  }
1313 
1314  // xwww
1315  template<typename T, qualifier Q>
1316  GLM_INLINE glm::vec<4, T, Q> xwww(const glm::vec<4, T, Q> &v) {
1317  return glm::vec<4, T, Q>(v.x, v.w, v.w, v.w);
1318  }
1319 
1320  // yxxx
1321  template<typename T, qualifier Q>
1322  GLM_INLINE glm::vec<4, T, Q> yxxx(const glm::vec<2, T, Q> &v) {
1323  return glm::vec<4, T, Q>(v.y, v.x, v.x, v.x);
1324  }
1325 
1326  template<typename T, qualifier Q>
1327  GLM_INLINE glm::vec<4, T, Q> yxxx(const glm::vec<3, T, Q> &v) {
1328  return glm::vec<4, T, Q>(v.y, v.x, v.x, v.x);
1329  }
1330 
1331  template<typename T, qualifier Q>
1332  GLM_INLINE glm::vec<4, T, Q> yxxx(const glm::vec<4, T, Q> &v) {
1333  return glm::vec<4, T, Q>(v.y, v.x, v.x, v.x);
1334  }
1335 
1336  // yxxy
1337  template<typename T, qualifier Q>
1338  GLM_INLINE glm::vec<4, T, Q> yxxy(const glm::vec<2, T, Q> &v) {
1339  return glm::vec<4, T, Q>(v.y, v.x, v.x, v.y);
1340  }
1341 
1342  template<typename T, qualifier Q>
1343  GLM_INLINE glm::vec<4, T, Q> yxxy(const glm::vec<3, T, Q> &v) {
1344  return glm::vec<4, T, Q>(v.y, v.x, v.x, v.y);
1345  }
1346 
1347  template<typename T, qualifier Q>
1348  GLM_INLINE glm::vec<4, T, Q> yxxy(const glm::vec<4, T, Q> &v) {
1349  return glm::vec<4, T, Q>(v.y, v.x, v.x, v.y);
1350  }
1351 
1352  // yxxz
1353  template<typename T, qualifier Q>
1354  GLM_INLINE glm::vec<4, T, Q> yxxz(const glm::vec<3, T, Q> &v) {
1355  return glm::vec<4, T, Q>(v.y, v.x, v.x, v.z);
1356  }
1357 
1358  template<typename T, qualifier Q>
1359  GLM_INLINE glm::vec<4, T, Q> yxxz(const glm::vec<4, T, Q> &v) {
1360  return glm::vec<4, T, Q>(v.y, v.x, v.x, v.z);
1361  }
1362 
1363  // yxxw
1364  template<typename T, qualifier Q>
1365  GLM_INLINE glm::vec<4, T, Q> yxxw(const glm::vec<4, T, Q> &v) {
1366  return glm::vec<4, T, Q>(v.y, v.x, v.x, v.w);
1367  }
1368 
1369  // yxyx
1370  template<typename T, qualifier Q>
1371  GLM_INLINE glm::vec<4, T, Q> yxyx(const glm::vec<2, T, Q> &v) {
1372  return glm::vec<4, T, Q>(v.y, v.x, v.y, v.x);
1373  }
1374 
1375  template<typename T, qualifier Q>
1376  GLM_INLINE glm::vec<4, T, Q> yxyx(const glm::vec<3, T, Q> &v) {
1377  return glm::vec<4, T, Q>(v.y, v.x, v.y, v.x);
1378  }
1379 
1380  template<typename T, qualifier Q>
1381  GLM_INLINE glm::vec<4, T, Q> yxyx(const glm::vec<4, T, Q> &v) {
1382  return glm::vec<4, T, Q>(v.y, v.x, v.y, v.x);
1383  }
1384 
1385  // yxyy
1386  template<typename T, qualifier Q>
1387  GLM_INLINE glm::vec<4, T, Q> yxyy(const glm::vec<2, T, Q> &v) {
1388  return glm::vec<4, T, Q>(v.y, v.x, v.y, v.y);
1389  }
1390 
1391  template<typename T, qualifier Q>
1392  GLM_INLINE glm::vec<4, T, Q> yxyy(const glm::vec<3, T, Q> &v) {
1393  return glm::vec<4, T, Q>(v.y, v.x, v.y, v.y);
1394  }
1395 
1396  template<typename T, qualifier Q>
1397  GLM_INLINE glm::vec<4, T, Q> yxyy(const glm::vec<4, T, Q> &v) {
1398  return glm::vec<4, T, Q>(v.y, v.x, v.y, v.y);
1399  }
1400 
1401  // yxyz
1402  template<typename T, qualifier Q>
1403  GLM_INLINE glm::vec<4, T, Q> yxyz(const glm::vec<3, T, Q> &v) {
1404  return glm::vec<4, T, Q>(v.y, v.x, v.y, v.z);
1405  }
1406 
1407  template<typename T, qualifier Q>
1408  GLM_INLINE glm::vec<4, T, Q> yxyz(const glm::vec<4, T, Q> &v) {
1409  return glm::vec<4, T, Q>(v.y, v.x, v.y, v.z);
1410  }
1411 
1412  // yxyw
1413  template<typename T, qualifier Q>
1414  GLM_INLINE glm::vec<4, T, Q> yxyw(const glm::vec<4, T, Q> &v) {
1415  return glm::vec<4, T, Q>(v.y, v.x, v.y, v.w);
1416  }
1417 
1418  // yxzx
1419  template<typename T, qualifier Q>
1420  GLM_INLINE glm::vec<4, T, Q> yxzx(const glm::vec<3, T, Q> &v) {
1421  return glm::vec<4, T, Q>(v.y, v.x, v.z, v.x);
1422  }
1423 
1424  template<typename T, qualifier Q>
1425  GLM_INLINE glm::vec<4, T, Q> yxzx(const glm::vec<4, T, Q> &v) {
1426  return glm::vec<4, T, Q>(v.y, v.x, v.z, v.x);
1427  }
1428 
1429  // yxzy
1430  template<typename T, qualifier Q>
1431  GLM_INLINE glm::vec<4, T, Q> yxzy(const glm::vec<3, T, Q> &v) {
1432  return glm::vec<4, T, Q>(v.y, v.x, v.z, v.y);
1433  }
1434 
1435  template<typename T, qualifier Q>
1436  GLM_INLINE glm::vec<4, T, Q> yxzy(const glm::vec<4, T, Q> &v) {
1437  return glm::vec<4, T, Q>(v.y, v.x, v.z, v.y);
1438  }
1439 
1440  // yxzz
1441  template<typename T, qualifier Q>
1442  GLM_INLINE glm::vec<4, T, Q> yxzz(const glm::vec<3, T, Q> &v) {
1443  return glm::vec<4, T, Q>(v.y, v.x, v.z, v.z);
1444  }
1445 
1446  template<typename T, qualifier Q>
1447  GLM_INLINE glm::vec<4, T, Q> yxzz(const glm::vec<4, T, Q> &v) {
1448  return glm::vec<4, T, Q>(v.y, v.x, v.z, v.z);
1449  }
1450 
1451  // yxzw
1452  template<typename T, qualifier Q>
1453  GLM_INLINE glm::vec<4, T, Q> yxzw(const glm::vec<4, T, Q> &v) {
1454  return glm::vec<4, T, Q>(v.y, v.x, v.z, v.w);
1455  }
1456 
1457  // yxwx
1458  template<typename T, qualifier Q>
1459  GLM_INLINE glm::vec<4, T, Q> yxwx(const glm::vec<4, T, Q> &v) {
1460  return glm::vec<4, T, Q>(v.y, v.x, v.w, v.x);
1461  }
1462 
1463  // yxwy
1464  template<typename T, qualifier Q>
1465  GLM_INLINE glm::vec<4, T, Q> yxwy(const glm::vec<4, T, Q> &v) {
1466  return glm::vec<4, T, Q>(v.y, v.x, v.w, v.y);
1467  }
1468 
1469  // yxwz
1470  template<typename T, qualifier Q>
1471  GLM_INLINE glm::vec<4, T, Q> yxwz(const glm::vec<4, T, Q> &v) {
1472  return glm::vec<4, T, Q>(v.y, v.x, v.w, v.z);
1473  }
1474 
1475  // yxww
1476  template<typename T, qualifier Q>
1477  GLM_INLINE glm::vec<4, T, Q> yxww(const glm::vec<4, T, Q> &v) {
1478  return glm::vec<4, T, Q>(v.y, v.x, v.w, v.w);
1479  }
1480 
1481  // yyxx
1482  template<typename T, qualifier Q>
1483  GLM_INLINE glm::vec<4, T, Q> yyxx(const glm::vec<2, T, Q> &v) {
1484  return glm::vec<4, T, Q>(v.y, v.y, v.x, v.x);
1485  }
1486 
1487  template<typename T, qualifier Q>
1488  GLM_INLINE glm::vec<4, T, Q> yyxx(const glm::vec<3, T, Q> &v) {
1489  return glm::vec<4, T, Q>(v.y, v.y, v.x, v.x);
1490  }
1491 
1492  template<typename T, qualifier Q>
1493  GLM_INLINE glm::vec<4, T, Q> yyxx(const glm::vec<4, T, Q> &v) {
1494  return glm::vec<4, T, Q>(v.y, v.y, v.x, v.x);
1495  }
1496 
1497  // yyxy
1498  template<typename T, qualifier Q>
1499  GLM_INLINE glm::vec<4, T, Q> yyxy(const glm::vec<2, T, Q> &v) {
1500  return glm::vec<4, T, Q>(v.y, v.y, v.x, v.y);
1501  }
1502 
1503  template<typename T, qualifier Q>
1504  GLM_INLINE glm::vec<4, T, Q> yyxy(const glm::vec<3, T, Q> &v) {
1505  return glm::vec<4, T, Q>(v.y, v.y, v.x, v.y);
1506  }
1507 
1508  template<typename T, qualifier Q>
1509  GLM_INLINE glm::vec<4, T, Q> yyxy(const glm::vec<4, T, Q> &v) {
1510  return glm::vec<4, T, Q>(v.y, v.y, v.x, v.y);
1511  }
1512 
1513  // yyxz
1514  template<typename T, qualifier Q>
1515  GLM_INLINE glm::vec<4, T, Q> yyxz(const glm::vec<3, T, Q> &v) {
1516  return glm::vec<4, T, Q>(v.y, v.y, v.x, v.z);
1517  }
1518 
1519  template<typename T, qualifier Q>
1520  GLM_INLINE glm::vec<4, T, Q> yyxz(const glm::vec<4, T, Q> &v) {
1521  return glm::vec<4, T, Q>(v.y, v.y, v.x, v.z);
1522  }
1523 
1524  // yyxw
1525  template<typename T, qualifier Q>
1526  GLM_INLINE glm::vec<4, T, Q> yyxw(const glm::vec<4, T, Q> &v) {
1527  return glm::vec<4, T, Q>(v.y, v.y, v.x, v.w);
1528  }
1529 
1530  // yyyx
1531  template<typename T, qualifier Q>
1532  GLM_INLINE glm::vec<4, T, Q> yyyx(const glm::vec<2, T, Q> &v) {
1533  return glm::vec<4, T, Q>(v.y, v.y, v.y, v.x);
1534  }
1535 
1536  template<typename T, qualifier Q>
1537  GLM_INLINE glm::vec<4, T, Q> yyyx(const glm::vec<3, T, Q> &v) {
1538  return glm::vec<4, T, Q>(v.y, v.y, v.y, v.x);
1539  }
1540 
1541  template<typename T, qualifier Q>
1542  GLM_INLINE glm::vec<4, T, Q> yyyx(const glm::vec<4, T, Q> &v) {
1543  return glm::vec<4, T, Q>(v.y, v.y, v.y, v.x);
1544  }
1545 
1546  // yyyy
1547  template<typename T, qualifier Q>
1548  GLM_INLINE glm::vec<4, T, Q> yyyy(const glm::vec<2, T, Q> &v) {
1549  return glm::vec<4, T, Q>(v.y, v.y, v.y, v.y);
1550  }
1551 
1552  template<typename T, qualifier Q>
1553  GLM_INLINE glm::vec<4, T, Q> yyyy(const glm::vec<3, T, Q> &v) {
1554  return glm::vec<4, T, Q>(v.y, v.y, v.y, v.y);
1555  }
1556 
1557  template<typename T, qualifier Q>
1558  GLM_INLINE glm::vec<4, T, Q> yyyy(const glm::vec<4, T, Q> &v) {
1559  return glm::vec<4, T, Q>(v.y, v.y, v.y, v.y);
1560  }
1561 
1562  // yyyz
1563  template<typename T, qualifier Q>
1564  GLM_INLINE glm::vec<4, T, Q> yyyz(const glm::vec<3, T, Q> &v) {
1565  return glm::vec<4, T, Q>(v.y, v.y, v.y, v.z);
1566  }
1567 
1568  template<typename T, qualifier Q>
1569  GLM_INLINE glm::vec<4, T, Q> yyyz(const glm::vec<4, T, Q> &v) {
1570  return glm::vec<4, T, Q>(v.y, v.y, v.y, v.z);
1571  }
1572 
1573  // yyyw
1574  template<typename T, qualifier Q>
1575  GLM_INLINE glm::vec<4, T, Q> yyyw(const glm::vec<4, T, Q> &v) {
1576  return glm::vec<4, T, Q>(v.y, v.y, v.y, v.w);
1577  }
1578 
1579  // yyzx
1580  template<typename T, qualifier Q>
1581  GLM_INLINE glm::vec<4, T, Q> yyzx(const glm::vec<3, T, Q> &v) {
1582  return glm::vec<4, T, Q>(v.y, v.y, v.z, v.x);
1583  }
1584 
1585  template<typename T, qualifier Q>
1586  GLM_INLINE glm::vec<4, T, Q> yyzx(const glm::vec<4, T, Q> &v) {
1587  return glm::vec<4, T, Q>(v.y, v.y, v.z, v.x);
1588  }
1589 
1590  // yyzy
1591  template<typename T, qualifier Q>
1592  GLM_INLINE glm::vec<4, T, Q> yyzy(const glm::vec<3, T, Q> &v) {
1593  return glm::vec<4, T, Q>(v.y, v.y, v.z, v.y);
1594  }
1595 
1596  template<typename T, qualifier Q>
1597  GLM_INLINE glm::vec<4, T, Q> yyzy(const glm::vec<4, T, Q> &v) {
1598  return glm::vec<4, T, Q>(v.y, v.y, v.z, v.y);
1599  }
1600 
1601  // yyzz
1602  template<typename T, qualifier Q>
1603  GLM_INLINE glm::vec<4, T, Q> yyzz(const glm::vec<3, T, Q> &v) {
1604  return glm::vec<4, T, Q>(v.y, v.y, v.z, v.z);
1605  }
1606 
1607  template<typename T, qualifier Q>
1608  GLM_INLINE glm::vec<4, T, Q> yyzz(const glm::vec<4, T, Q> &v) {
1609  return glm::vec<4, T, Q>(v.y, v.y, v.z, v.z);
1610  }
1611 
1612  // yyzw
1613  template<typename T, qualifier Q>
1614  GLM_INLINE glm::vec<4, T, Q> yyzw(const glm::vec<4, T, Q> &v) {
1615  return glm::vec<4, T, Q>(v.y, v.y, v.z, v.w);
1616  }
1617 
1618  // yywx
1619  template<typename T, qualifier Q>
1620  GLM_INLINE glm::vec<4, T, Q> yywx(const glm::vec<4, T, Q> &v) {
1621  return glm::vec<4, T, Q>(v.y, v.y, v.w, v.x);
1622  }
1623 
1624  // yywy
1625  template<typename T, qualifier Q>
1626  GLM_INLINE glm::vec<4, T, Q> yywy(const glm::vec<4, T, Q> &v) {
1627  return glm::vec<4, T, Q>(v.y, v.y, v.w, v.y);
1628  }
1629 
1630  // yywz
1631  template<typename T, qualifier Q>
1632  GLM_INLINE glm::vec<4, T, Q> yywz(const glm::vec<4, T, Q> &v) {
1633  return glm::vec<4, T, Q>(v.y, v.y, v.w, v.z);
1634  }
1635 
1636  // yyww
1637  template<typename T, qualifier Q>
1638  GLM_INLINE glm::vec<4, T, Q> yyww(const glm::vec<4, T, Q> &v) {
1639  return glm::vec<4, T, Q>(v.y, v.y, v.w, v.w);
1640  }
1641 
1642  // yzxx
1643  template<typename T, qualifier Q>
1644  GLM_INLINE glm::vec<4, T, Q> yzxx(const glm::vec<3, T, Q> &v) {
1645  return glm::vec<4, T, Q>(v.y, v.z, v.x, v.x);
1646  }
1647 
1648  template<typename T, qualifier Q>
1649  GLM_INLINE glm::vec<4, T, Q> yzxx(const glm::vec<4, T, Q> &v) {
1650  return glm::vec<4, T, Q>(v.y, v.z, v.x, v.x);
1651  }
1652 
1653  // yzxy
1654  template<typename T, qualifier Q>
1655  GLM_INLINE glm::vec<4, T, Q> yzxy(const glm::vec<3, T, Q> &v) {
1656  return glm::vec<4, T, Q>(v.y, v.z, v.x, v.y);
1657  }
1658 
1659  template<typename T, qualifier Q>
1660  GLM_INLINE glm::vec<4, T, Q> yzxy(const glm::vec<4, T, Q> &v) {
1661  return glm::vec<4, T, Q>(v.y, v.z, v.x, v.y);
1662  }
1663 
1664  // yzxz
1665  template<typename T, qualifier Q>
1666  GLM_INLINE glm::vec<4, T, Q> yzxz(const glm::vec<3, T, Q> &v) {
1667  return glm::vec<4, T, Q>(v.y, v.z, v.x, v.z);
1668  }
1669 
1670  template<typename T, qualifier Q>
1671  GLM_INLINE glm::vec<4, T, Q> yzxz(const glm::vec<4, T, Q> &v) {
1672  return glm::vec<4, T, Q>(v.y, v.z, v.x, v.z);
1673  }
1674 
1675  // yzxw
1676  template<typename T, qualifier Q>
1677  GLM_INLINE glm::vec<4, T, Q> yzxw(const glm::vec<4, T, Q> &v) {
1678  return glm::vec<4, T, Q>(v.y, v.z, v.x, v.w);
1679  }
1680 
1681  // yzyx
1682  template<typename T, qualifier Q>
1683  GLM_INLINE glm::vec<4, T, Q> yzyx(const glm::vec<3, T, Q> &v) {
1684  return glm::vec<4, T, Q>(v.y, v.z, v.y, v.x);
1685  }
1686 
1687  template<typename T, qualifier Q>
1688  GLM_INLINE glm::vec<4, T, Q> yzyx(const glm::vec<4, T, Q> &v) {
1689  return glm::vec<4, T, Q>(v.y, v.z, v.y, v.x);
1690  }
1691 
1692  // yzyy
1693  template<typename T, qualifier Q>
1694  GLM_INLINE glm::vec<4, T, Q> yzyy(const glm::vec<3, T, Q> &v) {
1695  return glm::vec<4, T, Q>(v.y, v.z, v.y, v.y);
1696  }
1697 
1698  template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> yzyy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.z, v.y, v.y);
}

// yzyz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> yzyz(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.z, v.y, v.z);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> yzyz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.z, v.y, v.z);
}

// yzyw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> yzyw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.z, v.y, v.w);
}

// yzzx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> yzzx(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.z, v.z, v.x);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> yzzx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.z, v.z, v.x);
}

// yzzy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> yzzy(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.z, v.z, v.y);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> yzzy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.z, v.z, v.y);
}

// yzzz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> yzzz(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.z, v.z, v.z);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> yzzz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.z, v.z, v.z);
}

// yzzw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> yzzw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.z, v.z, v.w);
}

// yzwx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> yzwx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.z, v.w, v.x);
}

// yzwy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> yzwy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.z, v.w, v.y);
}

// yzwz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> yzwz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.z, v.w, v.z);
}

// yzww
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> yzww(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.z, v.w, v.w);
}

// ywxx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> ywxx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.w, v.x, v.x);
}

// ywxy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> ywxy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.w, v.x, v.y);
}

// ywxz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> ywxz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.w, v.x, v.z);
}

// ywxw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> ywxw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.w, v.x, v.w);
}

// ywyx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> ywyx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.w, v.y, v.x);
}

// ywyy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> ywyy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.w, v.y, v.y);
}

// ywyz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> ywyz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.w, v.y, v.z);
}

// ywyw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> ywyw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.w, v.y, v.w);
}

// ywzx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> ywzx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.w, v.z, v.x);
}

// ywzy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> ywzy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.w, v.z, v.y);
}

// ywzz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> ywzz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.w, v.z, v.z);
}

// ywzw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> ywzw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.w, v.z, v.w);
}

// ywwx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> ywwx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.w, v.w, v.x);
}

// ywwy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> ywwy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.w, v.w, v.y);
}

// ywwz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> ywwz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.w, v.w, v.z);
}

// ywww
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> ywww(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.y, v.w, v.w, v.w);
}

// zxxx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zxxx(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.x, v.x, v.x);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zxxx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.x, v.x, v.x);
}

// zxxy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zxxy(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.x, v.x, v.y);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zxxy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.x, v.x, v.y);
}

// zxxz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zxxz(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.x, v.x, v.z);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zxxz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.x, v.x, v.z);
}

// zxxw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zxxw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.x, v.x, v.w);
}

// zxyx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zxyx(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.x, v.y, v.x);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zxyx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.x, v.y, v.x);
}

// zxyy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zxyy(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.x, v.y, v.y);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zxyy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.x, v.y, v.y);
}

// zxyz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zxyz(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.x, v.y, v.z);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zxyz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.x, v.y, v.z);
}

// zxyw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zxyw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.x, v.y, v.w);
}

// zxzx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zxzx(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.x, v.z, v.x);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zxzx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.x, v.z, v.x);
}

// zxzy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zxzy(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.x, v.z, v.y);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zxzy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.x, v.z, v.y);
}

// zxzz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zxzz(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.x, v.z, v.z);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zxzz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.x, v.z, v.z);
}

// zxzw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zxzw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.x, v.z, v.w);
}

// zxwx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zxwx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.x, v.w, v.x);
}

// zxwy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zxwy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.x, v.w, v.y);
}

// zxwz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zxwz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.x, v.w, v.z);
}

// zxww
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zxww(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.x, v.w, v.w);
}

// zyxx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zyxx(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.y, v.x, v.x);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zyxx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.y, v.x, v.x);
}

// zyxy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zyxy(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.y, v.x, v.y);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zyxy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.y, v.x, v.y);
}

// zyxz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zyxz(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.y, v.x, v.z);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zyxz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.y, v.x, v.z);
}

// zyxw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zyxw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.y, v.x, v.w);
}

// zyyx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zyyx(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.y, v.y, v.x);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zyyx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.y, v.y, v.x);
}

// zyyy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zyyy(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.y, v.y, v.y);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zyyy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.y, v.y, v.y);
}

// zyyz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zyyz(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.y, v.y, v.z);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zyyz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.y, v.y, v.z);
}

// zyyw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zyyw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.y, v.y, v.w);
}

// zyzx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zyzx(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.y, v.z, v.x);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zyzx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.y, v.z, v.x);
}

// zyzy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zyzy(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.y, v.z, v.y);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zyzy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.y, v.z, v.y);
}

// zyzz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zyzz(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.y, v.z, v.z);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zyzz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.y, v.z, v.z);
}

// zyzw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zyzw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.y, v.z, v.w);
}

// zywx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zywx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.y, v.w, v.x);
}

// zywy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zywy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.y, v.w, v.y);
}

// zywz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zywz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.y, v.w, v.z);
}

// zyww
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zyww(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.y, v.w, v.w);
}

// zzxx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zzxx(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.z, v.x, v.x);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zzxx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.z, v.x, v.x);
}

// zzxy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zzxy(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.z, v.x, v.y);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zzxy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.z, v.x, v.y);
}

// zzxz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zzxz(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.z, v.x, v.z);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zzxz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.z, v.x, v.z);
}

// zzxw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zzxw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.z, v.x, v.w);
}

// zzyx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zzyx(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.z, v.y, v.x);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zzyx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.z, v.y, v.x);
}

// zzyy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zzyy(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.z, v.y, v.y);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zzyy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.z, v.y, v.y);
}

// zzyz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zzyz(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.z, v.y, v.z);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zzyz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.z, v.y, v.z);
}

// zzyw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zzyw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.z, v.y, v.w);
}

// zzzx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zzzx(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.z, v.z, v.x);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zzzx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.z, v.z, v.x);
}

// zzzy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zzzy(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.z, v.z, v.y);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zzzy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.z, v.z, v.y);
}

// zzzz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zzzz(const glm::vec<3, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.z, v.z, v.z);
}

template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zzzz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.z, v.z, v.z);
}

// zzzw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zzzw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.z, v.z, v.w);
}

// zzwx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zzwx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.z, v.w, v.x);
}

// zzwy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zzwy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.z, v.w, v.y);
}

// zzwz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zzwz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.z, v.w, v.z);
}

// zzww
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zzww(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.z, v.w, v.w);
}

// zwxx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zwxx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.w, v.x, v.x);
}

// zwxy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zwxy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.w, v.x, v.y);
}

// zwxz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zwxz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.w, v.x, v.z);
}

// zwxw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zwxw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.w, v.x, v.w);
}

// zwyx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zwyx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.w, v.y, v.x);
}

// zwyy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zwyy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.w, v.y, v.y);
}

// zwyz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zwyz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.w, v.y, v.z);
}

// zwyw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zwyw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.w, v.y, v.w);
}

// zwzx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zwzx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.w, v.z, v.x);
}

// zwzy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zwzy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.w, v.z, v.y);
}

// zwzz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zwzz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.w, v.z, v.z);
}

// zwzw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zwzw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.w, v.z, v.w);
}

// zwwx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zwwx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.w, v.w, v.x);
}

// zwwy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zwwy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.w, v.w, v.y);
}

// zwwz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zwwz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.w, v.w, v.z);
}

// zwww
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> zwww(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.z, v.w, v.w, v.w);
}

// wxxx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wxxx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.x, v.x, v.x);
}

// wxxy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wxxy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.x, v.x, v.y);
}

// wxxz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wxxz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.x, v.x, v.z);
}

// wxxw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wxxw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.x, v.x, v.w);
}

// wxyx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wxyx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.x, v.y, v.x);
}

// wxyy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wxyy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.x, v.y, v.y);
}

// wxyz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wxyz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.x, v.y, v.z);
}

// wxyw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wxyw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.x, v.y, v.w);
}

// wxzx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wxzx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.x, v.z, v.x);
}

// wxzy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wxzy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.x, v.z, v.y);
}

// wxzz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wxzz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.x, v.z, v.z);
}

// wxzw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wxzw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.x, v.z, v.w);
}

// wxwx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wxwx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.x, v.w, v.x);
}

// wxwy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wxwy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.x, v.w, v.y);
}

// wxwz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wxwz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.x, v.w, v.z);
}

// wxww
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wxww(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.x, v.w, v.w);
}

// wyxx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wyxx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.y, v.x, v.x);
}

// wyxy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wyxy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.y, v.x, v.y);
}

// wyxz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wyxz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.y, v.x, v.z);
}

// wyxw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wyxw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.y, v.x, v.w);
}

// wyyx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wyyx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.y, v.y, v.x);
}

// wyyy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wyyy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.y, v.y, v.y);
}

// wyyz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wyyz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.y, v.y, v.z);
}

// wyyw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wyyw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.y, v.y, v.w);
}

// wyzx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wyzx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.y, v.z, v.x);
}

// wyzy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wyzy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.y, v.z, v.y);
}

// wyzz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wyzz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.y, v.z, v.z);
}

// wyzw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wyzw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.y, v.z, v.w);
}

// wywx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wywx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.y, v.w, v.x);
}

// wywy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wywy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.y, v.w, v.y);
}

// wywz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wywz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.y, v.w, v.z);
}

// wyww
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wyww(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.y, v.w, v.w);
}

// wzxx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wzxx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.z, v.x, v.x);
}

// wzxy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wzxy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.z, v.x, v.y);
}

// wzxz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wzxz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.z, v.x, v.z);
}

// wzxw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wzxw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.z, v.x, v.w);
}

// wzyx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wzyx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.z, v.y, v.x);
}

// wzyy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wzyy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.z, v.y, v.y);
}

// wzyz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wzyz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.z, v.y, v.z);
}

// wzyw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wzyw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.z, v.y, v.w);
}

// wzzx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wzzx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.z, v.z, v.x);
}

// wzzy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wzzy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.z, v.z, v.y);
}

// wzzz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wzzz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.z, v.z, v.z);
}

// wzzw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wzzw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.z, v.z, v.w);
}

// wzwx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wzwx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.z, v.w, v.x);
}

// wzwy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wzwy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.z, v.w, v.y);
}

// wzwz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wzwz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.z, v.w, v.z);
}

// wzww
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wzww(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.z, v.w, v.w);
}

// wwxx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wwxx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.w, v.x, v.x);
}

// wwxy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wwxy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.w, v.x, v.y);
}

// wwxz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wwxz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.w, v.x, v.z);
}

// wwxw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wwxw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.w, v.x, v.w);
}

// wwyx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wwyx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.w, v.y, v.x);
}

// wwyy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wwyy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.w, v.y, v.y);
}

// wwyz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wwyz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.w, v.y, v.z);
}

// wwyw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wwyw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.w, v.y, v.w);
}

// wwzx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wwzx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.w, v.z, v.x);
}

// wwzy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wwzy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.w, v.z, v.y);
}

// wwzz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wwzz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.w, v.z, v.z);
}

// wwzw
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wwzw(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.w, v.z, v.w);
}

// wwwx
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wwwx(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.w, v.w, v.x);
}

// wwwy
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wwwy(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.w, v.w, v.y);
}

// wwwz
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wwwz(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.w, v.w, v.z);
}

// wwww
template<typename T, qualifier Q>
GLM_INLINE glm::vec<4, T, Q> wwww(const glm::vec<4, T, Q> &v) {
    return glm::vec<4, T, Q>(v.w, v.w, v.w, v.w);
}

}
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00188.html ================================================ 0.9.9 API documentation: vector_angle.hpp File Reference
0.9.9 API documentation
vector_angle.hpp File Reference

GLM_GTX_vector_angle More...

Go to the source code of this file.

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T angle (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Returns the absolute angle between two vectors. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T orientedAngle (vec< 2, T, Q > const &x, vec< 2, T, Q > const &y)
 Returns the oriented angle between two 2d vectors. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T orientedAngle (vec< 3, T, Q > const &x, vec< 3, T, Q > const &y, vec< 3, T, Q > const &ref)
 Returns the oriented angle between two 3d vectors relative to a reference axis. More...
 

Detailed Description

GLM_GTX_vector_angle

See also
Core features (dependence)
GLM_GTX_quaternion (dependence)
gtx_epsilon (dependence)

Definition in file vector_angle.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00188_source.html ================================================ 0.9.9 API documentation: vector_angle.hpp Source File
0.9.9 API documentation
vector_angle.hpp
Go to the documentation of this file.
#pragma once

// Dependency:
#include "../glm.hpp"
#include "../gtc/epsilon.hpp"
#include "../gtx/quaternion.hpp"
#include "../gtx/rotate_vector.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	ifndef GLM_ENABLE_EXPERIMENTAL
#		pragma message("GLM: GLM_GTX_vector_angle is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#	else
#		pragma message("GLM: GLM_GTX_vector_angle extension included")
#	endif
#endif

namespace glm
{
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL T angle(vec<L, T, Q> const& x, vec<L, T, Q> const& y);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL T orientedAngle(vec<2, T, Q> const& x, vec<2, T, Q> const& y);

	template<typename T, qualifier Q>
	GLM_FUNC_DECL T orientedAngle(vec<3, T, Q> const& x, vec<3, T, Q> const& y, vec<3, T, Q> const& ref);
}// namespace glm

#include "vector_angle.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00189.html ================================================ 0.9.9 API documentation: vector_bool1.hpp File Reference
0.9.9 API documentation
vector_bool1.hpp File Reference

GLM_EXT_vector_bool1 More...

Go to the source code of this file.

Typedefs

typedef vec< 1, bool, defaultp > bvec1
 1-component vector of booleans.
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00189_source.html ================================================ 0.9.9 API documentation: vector_bool1.hpp Source File
0.9.9 API documentation
vector_bool1.hpp
Go to the documentation of this file.
#pragma once

#include "../detail/type_vec1.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_EXT_vector_bool1 extension included")
#endif

namespace glm
{
	typedef vec<1, bool, defaultp> bvec1;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00190.html ================================================ 0.9.9 API documentation: vector_bool1_precision.hpp File Reference
0.9.9 API documentation
vector_bool1_precision.hpp File Reference

GLM_EXT_vector_bool1_precision More...

Go to the source code of this file.

Typedefs

typedef vec< 1, bool, highp > highp_bvec1
 1 component vector of bool values.
 
typedef vec< 1, bool, lowp > lowp_bvec1
 1 component vector of bool values.
 
typedef vec< 1, bool, mediump > mediump_bvec1
 1 component vector of bool values.
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00190_source.html ================================================ 0.9.9 API documentation: vector_bool1_precision.hpp Source File
0.9.9 API documentation
vector_bool1_precision.hpp
Go to the documentation of this file.
#pragma once

#include "../detail/type_vec1.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_EXT_vector_bool1_precision extension included")
#endif

namespace glm
{
	typedef vec<1, bool, highp> highp_bvec1;

	typedef vec<1, bool, mediump> mediump_bvec1;

	typedef vec<1, bool, lowp> lowp_bvec1;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00191.html ================================================ 0.9.9 API documentation: vector_bool2.hpp File Reference
0.9.9 API documentation
vector_bool2.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef vec< 2, bool, defaultp > bvec2
 2-component vector of booleans. More...
 

Detailed Description

Core features

Definition in file vector_bool2.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00191_source.html ================================================ 0.9.9 API documentation: vector_bool2.hpp Source File
0.9.9 API documentation
vector_bool2.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_vec2.hpp"

namespace glm
{
	typedef vec<2, bool, defaultp> bvec2;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00192.html ================================================ 0.9.9 API documentation: vector_bool2_precision.hpp File Reference
0.9.9 API documentation
vector_bool2_precision.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef vec< 2, bool, highp > highp_bvec2
 2-component vector of high-qualifier booleans. More...
 
typedef vec< 2, bool, lowp > lowp_bvec2
 2-component vector of low-qualifier booleans. More...
 
typedef vec< 2, bool, mediump > mediump_bvec2
 2-component vector of medium-qualifier booleans. More...
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00192_source.html ================================================ 0.9.9 API documentation: vector_bool2_precision.hpp Source File
0.9.9 API documentation
vector_bool2_precision.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_vec2.hpp"

namespace glm
{
	typedef vec<2, bool, highp> highp_bvec2;

	typedef vec<2, bool, mediump> mediump_bvec2;

	typedef vec<2, bool, lowp> lowp_bvec2;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00193.html ================================================ 0.9.9 API documentation: vector_bool3.hpp File Reference
0.9.9 API documentation
vector_bool3.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef vec< 3, bool, defaultp > bvec3
 3-component vector of booleans. More...
 

Detailed Description

Core features

Definition in file vector_bool3.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00193_source.html ================================================ 0.9.9 API documentation: vector_bool3.hpp Source File
0.9.9 API documentation
vector_bool3.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_vec3.hpp"

namespace glm
{
	typedef vec<3, bool, defaultp> bvec3;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00194.html ================================================ 0.9.9 API documentation: vector_bool3_precision.hpp File Reference
0.9.9 API documentation
vector_bool3_precision.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef vec< 3, bool, highp > highp_bvec3
 3-component vector of high-qualifier booleans. More...
 
typedef vec< 3, bool, lowp > lowp_bvec3
 3-component vector of low-qualifier booleans. More...
 
typedef vec< 3, bool, mediump > mediump_bvec3
 3-component vector of medium-qualifier booleans. More...
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00194_source.html ================================================ 0.9.9 API documentation: vector_bool3_precision.hpp Source File
0.9.9 API documentation
vector_bool3_precision.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_vec3.hpp"

namespace glm
{
	typedef vec<3, bool, highp> highp_bvec3;

	typedef vec<3, bool, mediump> mediump_bvec3;

	typedef vec<3, bool, lowp> lowp_bvec3;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00195.html ================================================ 0.9.9 API documentation: vector_bool4.hpp File Reference
0.9.9 API documentation
vector_bool4.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef vec< 4, bool, defaultp > bvec4
 4-component vector of booleans. More...
 

Detailed Description

Core features

Definition in file vector_bool4.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00195_source.html ================================================ 0.9.9 API documentation: vector_bool4.hpp Source File
0.9.9 API documentation
vector_bool4.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_vec4.hpp"

namespace glm
{
	typedef vec<4, bool, defaultp> bvec4;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00196.html ================================================ 0.9.9 API documentation: vector_bool4_precision.hpp File Reference
0.9.9 API documentation
vector_bool4_precision.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef vec< 4, bool, highp > highp_bvec4
 4-component vector of high-qualifier booleans. More...
 
typedef vec< 4, bool, lowp > lowp_bvec4
 4-component vector of low-qualifier booleans. More...
 
typedef vec< 4, bool, mediump > mediump_bvec4
 4-component vector of medium-qualifier booleans. More...
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00196_source.html ================================================ 0.9.9 API documentation: vector_bool4_precision.hpp Source File
0.9.9 API documentation
vector_bool4_precision.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_vec4.hpp"

namespace glm
{
	typedef vec<4, bool, highp> highp_bvec4;

	typedef vec<4, bool, mediump> mediump_bvec4;

	typedef vec<4, bool, lowp> lowp_bvec4;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00197.html ================================================ 0.9.9 API documentation: vector_common.hpp File Reference
0.9.9 API documentation
vector_common.hpp File Reference

GLM_EXT_vector_common More...

Go to the source code of this file.

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fmax (vec< L, T, Q > const &a, T b)
 Returns y if x < y; otherwise, it returns x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fmax (vec< L, T, Q > const &a, vec< L, T, Q > const &b)
 Returns y if x < y; otherwise, it returns x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fmax (vec< L, T, Q > const &a, vec< L, T, Q > const &b, vec< L, T, Q > const &c)
 Returns y if x < y; otherwise, it returns x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fmax (vec< L, T, Q > const &a, vec< L, T, Q > const &b, vec< L, T, Q > const &c, vec< L, T, Q > const &d)
 Returns y if x < y; otherwise, it returns x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fmin (vec< L, T, Q > const &x, T y)
 Returns y if y < x; otherwise, it returns x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fmin (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Returns y if y < x; otherwise, it returns x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fmin (vec< L, T, Q > const &a, vec< L, T, Q > const &b, vec< L, T, Q > const &c)
 Returns y if y < x; otherwise, it returns x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fmin (vec< L, T, Q > const &a, vec< L, T, Q > const &b, vec< L, T, Q > const &c, vec< L, T, Q > const &d)
 Returns y if y < x; otherwise, it returns x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > max (vec< L, T, Q > const &x, vec< L, T, Q > const &y, vec< L, T, Q > const &z)
 Return the maximum component-wise values of 3 inputs. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > max (vec< L, T, Q > const &x, vec< L, T, Q > const &y, vec< L, T, Q > const &z, vec< L, T, Q > const &w)
 Return the maximum component-wise values of 4 inputs. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > min (vec< L, T, Q > const &a, vec< L, T, Q > const &b, vec< L, T, Q > const &c)
 Return the minimum component-wise values of 3 inputs. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > min (vec< L, T, Q > const &a, vec< L, T, Q > const &b, vec< L, T, Q > const &c, vec< L, T, Q > const &d)
 Return the minimum component-wise values of 4 inputs. More...
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00197_source.html ================================================ 0.9.9 API documentation: vector_common.hpp Source File
0.9.9 API documentation
vector_common.hpp
Go to the documentation of this file.
#pragma once

// Dependency:
#include "../ext/scalar_common.hpp"
#include "../common.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_EXT_vector_common extension included")
#endif

namespace glm
{
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> min(vec<L, T, Q> const& a, vec<L, T, Q> const& b, vec<L, T, Q> const& c);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> min(vec<L, T, Q> const& a, vec<L, T, Q> const& b, vec<L, T, Q> const& c, vec<L, T, Q> const& d);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> max(vec<L, T, Q> const& x, vec<L, T, Q> const& y, vec<L, T, Q> const& z);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> max(vec<L, T, Q> const& x, vec<L, T, Q> const& y, vec<L, T, Q> const& z, vec<L, T, Q> const& w);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> fmin(vec<L, T, Q> const& x, T y);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> fmin(vec<L, T, Q> const& x, vec<L, T, Q> const& y);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> fmin(vec<L, T, Q> const& a, vec<L, T, Q> const& b, vec<L, T, Q> const& c);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> fmin(vec<L, T, Q> const& a, vec<L, T, Q> const& b, vec<L, T, Q> const& c, vec<L, T, Q> const& d);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> fmax(vec<L, T, Q> const& a, T b);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> fmax(vec<L, T, Q> const& a, vec<L, T, Q> const& b);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> fmax(vec<L, T, Q> const& a, vec<L, T, Q> const& b, vec<L, T, Q> const& c);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> fmax(vec<L, T, Q> const& a, vec<L, T, Q> const& b, vec<L, T, Q> const& c, vec<L, T, Q> const& d);
}//namespace glm

#include "vector_common.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00198.html ================================================ 0.9.9 API documentation: vector_double1.hpp File Reference
0.9.9 API documentation
vector_double1.hpp File Reference

GLM_EXT_vector_double1 More...

Go to the source code of this file.

Typedefs

typedef vec< 1, double, defaultp > dvec1
 1-component vector of double-precision floating-point numbers.
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00198_source.html ================================================ 0.9.9 API documentation: vector_double1.hpp Source File
0.9.9 API documentation
vector_double1.hpp
Go to the documentation of this file.
#pragma once

#include "../detail/type_vec1.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_EXT_vector_double1 extension included")
#endif

namespace glm
{
	typedef vec<1, double, defaultp> dvec1;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00199.html ================================================ 0.9.9 API documentation: vector_double1_precision.hpp File Reference
0.9.9 API documentation
vector_double1_precision.hpp File Reference

GLM_EXT_vector_double1_precision More...

Go to the source code of this file.

Typedefs

typedef vec< 1, double, highp > highp_dvec1
 1-component vector of double-precision floating-point numbers using high-precision arithmetic in terms of ULPs.
 
typedef vec< 1, double, lowp > lowp_dvec1
 1-component vector of double-precision floating-point numbers using low-precision arithmetic in terms of ULPs.
 
typedef vec< 1, double, mediump > mediump_dvec1
 1-component vector of double-precision floating-point numbers using medium-precision arithmetic in terms of ULPs.
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00199_source.html ================================================ 0.9.9 API documentation: vector_double1_precision.hpp Source File
0.9.9 API documentation
vector_double1_precision.hpp
Go to the documentation of this file.
#pragma once

#include "../detail/type_vec1.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_EXT_vector_double1_precision extension included")
#endif

namespace glm
{
	typedef vec<1, double, highp> highp_dvec1;

	typedef vec<1, double, mediump> mediump_dvec1;

	typedef vec<1, double, lowp> lowp_dvec1;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00200.html ================================================ 0.9.9 API documentation: vector_double2.hpp File Reference
0.9.9 API documentation
vector_double2.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef vec< 2, double, defaultp > dvec2
 2-component vector of double-precision floating-point numbers. More...
 

Detailed Description

Core features

Definition in file vector_double2.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00200_source.html ================================================ 0.9.9 API documentation: vector_double2.hpp Source File
0.9.9 API documentation
vector_double2.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_vec2.hpp"

namespace glm
{
	typedef vec<2, double, defaultp> dvec2;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00201.html ================================================ 0.9.9 API documentation: vector_double2_precision.hpp File Reference
0.9.9 API documentation
vector_double2_precision.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef vec< 2, double, highp > highp_dvec2
 2-component vector of high-qualifier double-precision floating-point numbers. More...
 
typedef vec< 2, double, lowp > lowp_dvec2
 2-component vector of low-qualifier double-precision floating-point numbers. More...
 
typedef vec< 2, double, mediump > mediump_dvec2
 2-component vector of medium-qualifier double-precision floating-point numbers. More...
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00201_source.html ================================================ 0.9.9 API documentation: vector_double2_precision.hpp Source File
0.9.9 API documentation
vector_double2_precision.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_vec2.hpp"

namespace glm
{
	typedef vec<2, double, highp> highp_dvec2;

	typedef vec<2, double, mediump> mediump_dvec2;

	typedef vec<2, double, lowp> lowp_dvec2;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00202.html ================================================ 0.9.9 API documentation: vector_double3.hpp File Reference
0.9.9 API documentation
vector_double3.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef vec< 3, double, defaultp > dvec3
 3-component vector of double-precision floating-point numbers. More...
 

Detailed Description

Core features

Definition in file vector_double3.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00202_source.html ================================================ 0.9.9 API documentation: vector_double3.hpp Source File
0.9.9 API documentation
vector_double3.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_vec3.hpp"

namespace glm
{
	typedef vec<3, double, defaultp> dvec3;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00203.html ================================================ 0.9.9 API documentation: vector_double3_precision.hpp File Reference
0.9.9 API documentation
vector_double3_precision.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef vec< 3, double, highp > highp_dvec3
 3-component vector of high-qualifier double-precision floating-point numbers. More...
 
typedef vec< 3, double, lowp > lowp_dvec3
 3-component vector of low-qualifier double-precision floating-point numbers. More...
 
typedef vec< 3, double, mediump > mediump_dvec3
 3-component vector of medium-qualifier double-precision floating-point numbers. More...
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00203_source.html ================================================ 0.9.9 API documentation: vector_double3_precision.hpp Source File
0.9.9 API documentation
vector_double3_precision.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_vec3.hpp"

namespace glm
{
	typedef vec<3, double, highp> highp_dvec3;

	typedef vec<3, double, mediump> mediump_dvec3;

	typedef vec<3, double, lowp> lowp_dvec3;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00204.html ================================================ 0.9.9 API documentation: vector_double4.hpp File Reference
0.9.9 API documentation
vector_double4.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef vec< 4, double, defaultp > dvec4
 4-component vector of double-precision floating-point numbers. More...
 

Detailed Description

Core features

Definition in file vector_double4.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00204_source.html ================================================ 0.9.9 API documentation: vector_double4.hpp Source File
0.9.9 API documentation
vector_double4.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_vec4.hpp"

namespace glm
{
	typedef vec<4, double, defaultp> dvec4;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00205.html ================================================ 0.9.9 API documentation: vector_double4_precision.hpp File Reference
vector_double4_precision.hpp File Reference

Core features

Typedefs

typedef vec< 4, double, highp > highp_dvec4
 4-component vector of high-qualifier double-precision floating-point numbers.

typedef vec< 4, double, lowp > lowp_dvec4
 4-component vector of low-qualifier double-precision floating-point numbers.

typedef vec< 4, double, mediump > mediump_dvec4
 4-component vector of medium-qualifier double-precision floating-point numbers.
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00205_source.html ================================================ 0.9.9 API documentation: vector_double4_precision.hpp Source File
vector_double4_precision.hpp

#pragma once
#include "../detail/setup.hpp"
#include "../detail/type_vec4.hpp"

namespace glm
{
	typedef vec<4, double, highp> highp_dvec4;
	typedef vec<4, double, mediump> mediump_dvec4;
	typedef vec<4, double, lowp> lowp_dvec4;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00206.html ================================================ 0.9.9 API documentation: vector_float1.hpp File Reference
vector_float1.hpp File Reference

GLM_EXT_vector_float1

Typedefs

typedef vec< 1, float, defaultp > vec1
 1-component vector of single-precision floating-point numbers.
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00206_source.html ================================================ 0.9.9 API documentation: vector_float1.hpp Source File
vector_float1.hpp

#pragma once

#include "../detail/type_vec1.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_EXT_vector_float1 extension included")
#endif

namespace glm
{
	typedef vec<1, float, defaultp> vec1;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00207.html ================================================ 0.9.9 API documentation: vector_float1_precision.hpp File Reference
vector_float1_precision.hpp File Reference

GLM_EXT_vector_float1_precision

Typedefs

typedef vec< 1, float, highp > highp_vec1
 1-component vector of single-precision floating-point numbers using high precision arithmetic in terms of ULPs.

typedef vec< 1, float, lowp > lowp_vec1
 1-component vector of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.

typedef vec< 1, float, mediump > mediump_vec1
 1-component vector of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00207_source.html ================================================ 0.9.9 API documentation: vector_float1_precision.hpp Source File
vector_float1_precision.hpp

#pragma once

#include "../detail/type_vec1.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_EXT_vector_float1_precision extension included")
#endif

namespace glm
{
	typedef vec<1, float, highp> highp_vec1;
	typedef vec<1, float, mediump> mediump_vec1;
	typedef vec<1, float, lowp> lowp_vec1;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00208.html ================================================ 0.9.9 API documentation: vector_float2.hpp File Reference
vector_float2.hpp File Reference

Core features

Typedefs

typedef vec< 2, float, defaultp > vec2
 2-component vector of single-precision floating-point numbers.
 

Detailed Description

Core features

Definition in file vector_float2.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00208_source.html ================================================ 0.9.9 API documentation: vector_float2.hpp Source File
vector_float2.hpp

#pragma once
#include "../detail/type_vec2.hpp"

namespace glm
{
	typedef vec<2, float, defaultp> vec2;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00209.html ================================================ 0.9.9 API documentation: vector_float2_precision.hpp File Reference
vector_float2_precision.hpp File Reference

Core features

Typedefs

typedef vec< 2, float, highp > highp_vec2
 2-component vector of high-qualifier single-precision floating-point numbers.

typedef vec< 2, float, lowp > lowp_vec2
 2-component vector of low-qualifier single-precision floating-point numbers.

typedef vec< 2, float, mediump > mediump_vec2
 2-component vector of medium-qualifier single-precision floating-point numbers.
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00209_source.html ================================================ 0.9.9 API documentation: vector_float2_precision.hpp Source File
vector_float2_precision.hpp

#pragma once
#include "../detail/type_vec2.hpp"

namespace glm
{
	typedef vec<2, float, highp> highp_vec2;
	typedef vec<2, float, mediump> mediump_vec2;
	typedef vec<2, float, lowp> lowp_vec2;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00210.html ================================================ 0.9.9 API documentation: vector_float3.hpp File Reference
vector_float3.hpp File Reference

Core features

Typedefs

typedef vec< 3, float, defaultp > vec3
 3-component vector of single-precision floating-point numbers.
 

Detailed Description

Core features

Definition in file vector_float3.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00210_source.html ================================================ 0.9.9 API documentation: vector_float3.hpp Source File
vector_float3.hpp

#pragma once
#include "../detail/type_vec3.hpp"

namespace glm
{
	typedef vec<3, float, defaultp> vec3;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00211.html ================================================ 0.9.9 API documentation: vector_float3_precision.hpp File Reference
vector_float3_precision.hpp File Reference

Core features

Typedefs

typedef vec< 3, float, highp > highp_vec3
 3-component vector of high-qualifier single-precision floating-point numbers.

typedef vec< 3, float, lowp > lowp_vec3
 3-component vector of low-qualifier single-precision floating-point numbers.

typedef vec< 3, float, mediump > mediump_vec3
 3-component vector of medium-qualifier single-precision floating-point numbers.
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00211_source.html ================================================ 0.9.9 API documentation: vector_float3_precision.hpp Source File
vector_float3_precision.hpp

#pragma once
#include "../detail/type_vec3.hpp"

namespace glm
{
	typedef vec<3, float, highp> highp_vec3;
	typedef vec<3, float, mediump> mediump_vec3;
	typedef vec<3, float, lowp> lowp_vec3;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00212.html ================================================ 0.9.9 API documentation: vector_float4.hpp File Reference
vector_float4.hpp File Reference

Core features

Typedefs

typedef vec< 4, float, defaultp > vec4
 4-component vector of single-precision floating-point numbers.
 

Detailed Description

Core features

Definition in file vector_float4.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00212_source.html ================================================ 0.9.9 API documentation: vector_float4.hpp Source File
vector_float4.hpp

#pragma once
#include "../detail/type_vec4.hpp"

namespace glm
{
	typedef vec<4, float, defaultp> vec4;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00213.html ================================================ 0.9.9 API documentation: vector_float4_precision.hpp File Reference
vector_float4_precision.hpp File Reference

Core features

Typedefs

typedef vec< 4, float, highp > highp_vec4
 4-component vector of high-qualifier single-precision floating-point numbers.

typedef vec< 4, float, lowp > lowp_vec4
 4-component vector of low-qualifier single-precision floating-point numbers.

typedef vec< 4, float, mediump > mediump_vec4
 4-component vector of medium-qualifier single-precision floating-point numbers.
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00213_source.html ================================================ 0.9.9 API documentation: vector_float4_precision.hpp Source File
vector_float4_precision.hpp

#pragma once
#include "../detail/type_vec4.hpp"

namespace glm
{
	typedef vec<4, float, highp> highp_vec4;
	typedef vec<4, float, mediump> mediump_vec4;
	typedef vec<4, float, lowp> lowp_vec4;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00214.html ================================================ 0.9.9 API documentation: vector_int1.hpp File Reference
vector_int1.hpp File Reference

GLM_EXT_vector_int1

Typedefs

typedef vec< 1, int, defaultp > ivec1
 1-component vector of signed integer numbers.
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00214_source.html ================================================ 0.9.9 API documentation: vector_int1.hpp Source File
vector_int1.hpp

#pragma once

#include "../detail/type_vec1.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_EXT_vector_int1 extension included")
#endif

namespace glm
{
	typedef vec<1, int, defaultp> ivec1;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00215.html ================================================ 0.9.9 API documentation: vector_int1_precision.hpp File Reference
vector_int1_precision.hpp File Reference

GLM_EXT_vector_int1_precision

Typedefs

typedef vec< 1, int, highp > highp_ivec1
 1-component vector of signed integer values.

typedef vec< 1, int, lowp > lowp_ivec1
 1-component vector of signed integer values.

typedef vec< 1, int, mediump > mediump_ivec1
 1-component vector of signed integer values.
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00215_source.html ================================================ 0.9.9 API documentation: vector_int1_precision.hpp Source File
vector_int1_precision.hpp

#pragma once

#include "../detail/type_vec1.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_EXT_vector_int1_precision extension included")
#endif

namespace glm
{
	typedef vec<1, int, highp> highp_ivec1;
	typedef vec<1, int, mediump> mediump_ivec1;
	typedef vec<1, int, lowp> lowp_ivec1;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00216.html ================================================ 0.9.9 API documentation: vector_int2.hpp File Reference
vector_int2.hpp File Reference

Core features

Typedefs

typedef vec< 2, int, defaultp > ivec2
 2-component vector of signed integer numbers.
 

Detailed Description

Core features

Definition in file vector_int2.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00216_source.html ================================================ 0.9.9 API documentation: vector_int2.hpp Source File
vector_int2.hpp

#pragma once
#include "../detail/type_vec2.hpp"

namespace glm
{
	typedef vec<2, int, defaultp> ivec2;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00217.html ================================================ 0.9.9 API documentation: vector_int2_precision.hpp File Reference
vector_int2_precision.hpp File Reference

Core features

Typedefs

typedef vec< 2, int, highp > highp_ivec2
 2-component vector of high-qualifier signed integer numbers.

typedef vec< 2, int, lowp > lowp_ivec2
 2-component vector of low-qualifier signed integer numbers.

typedef vec< 2, int, mediump > mediump_ivec2
 2-component vector of medium-qualifier signed integer numbers.
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00217_source.html ================================================ 0.9.9 API documentation: vector_int2_precision.hpp Source File
vector_int2_precision.hpp

#pragma once
#include "../detail/type_vec2.hpp"

namespace glm
{
	typedef vec<2, int, highp> highp_ivec2;
	typedef vec<2, int, mediump> mediump_ivec2;
	typedef vec<2, int, lowp> lowp_ivec2;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00218.html ================================================ 0.9.9 API documentation: vector_int3.hpp File Reference
vector_int3.hpp File Reference

Core features

Typedefs

typedef vec< 3, int, defaultp > ivec3
 3-component vector of signed integer numbers.
 

Detailed Description

Core features

Definition in file vector_int3.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00218_source.html ================================================ 0.9.9 API documentation: vector_int3.hpp Source File
vector_int3.hpp

#pragma once
#include "../detail/type_vec3.hpp"

namespace glm
{
	typedef vec<3, int, defaultp> ivec3;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00219.html ================================================ 0.9.9 API documentation: vector_int3_precision.hpp File Reference
vector_int3_precision.hpp File Reference

Core features

Typedefs

typedef vec< 3, int, highp > highp_ivec3
 3-component vector of high-qualifier signed integer numbers.

typedef vec< 3, int, lowp > lowp_ivec3
 3-component vector of low-qualifier signed integer numbers.

typedef vec< 3, int, mediump > mediump_ivec3
 3-component vector of medium-qualifier signed integer numbers.
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00219_source.html ================================================ 0.9.9 API documentation: vector_int3_precision.hpp Source File
vector_int3_precision.hpp

#pragma once
#include "../detail/type_vec3.hpp"

namespace glm
{
	typedef vec<3, int, highp> highp_ivec3;
	typedef vec<3, int, mediump> mediump_ivec3;
	typedef vec<3, int, lowp> lowp_ivec3;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00220.html ================================================ 0.9.9 API documentation: vector_int4.hpp File Reference
vector_int4.hpp File Reference

Core features

Typedefs

typedef vec< 4, int, defaultp > ivec4
 4-component vector of signed integer numbers.
 

Detailed Description

Core features

Definition in file vector_int4.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00220_source.html ================================================ 0.9.9 API documentation: vector_int4.hpp Source File
vector_int4.hpp

#pragma once
#include "../detail/type_vec4.hpp"

namespace glm
{
	typedef vec<4, int, defaultp> ivec4;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00221.html ================================================ 0.9.9 API documentation: vector_int4_precision.hpp File Reference
vector_int4_precision.hpp File Reference

Core features

Typedefs

typedef vec< 4, int, highp > highp_ivec4
 4-component vector of high-qualifier signed integer numbers.

typedef vec< 4, int, lowp > lowp_ivec4
 4-component vector of low-qualifier signed integer numbers.

typedef vec< 4, int, mediump > mediump_ivec4
 4-component vector of medium-qualifier signed integer numbers.
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00221_source.html ================================================ 0.9.9 API documentation: vector_int4_precision.hpp Source File
vector_int4_precision.hpp

#pragma once
#include "../detail/type_vec4.hpp"

namespace glm
{
	typedef vec<4, int, highp> highp_ivec4;
	typedef vec<4, int, mediump> mediump_ivec4;
	typedef vec<4, int, lowp> lowp_ivec4;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00222.html ================================================ 0.9.9 API documentation: vector_integer.hpp File Reference
vector_integer.hpp File Reference

GLM_EXT_vector_integer

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, int, Q > findNSB (vec< L, T, Q > const &Source, vec< L, int, Q > SignificantBitCount)
 Returns the bit number of the Nth significant bit set to 1 in the binary representation of value.

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, bool, Q > isMultiple (vec< L, T, Q > const &v, T Multiple)
 Returns true if 'v' is a multiple of 'Multiple'.

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, bool, Q > isMultiple (vec< L, T, Q > const &v, vec< L, T, Q > const &Multiple)
 Returns true if 'v' is a multiple of 'Multiple'.

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, bool, Q > isPowerOfTwo (vec< L, T, Q > const &v)
 Returns true if the value is a power of two.

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > nextMultiple (vec< L, T, Q > const &v, T Multiple)
 Returns the higher multiple of 'Multiple' closest to 'v'.

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > nextMultiple (vec< L, T, Q > const &v, vec< L, T, Q > const &Multiple)
 Returns the higher multiple of 'Multiple' closest to 'v'.

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > nextPowerOfTwo (vec< L, T, Q > const &v)
 Returns the power of two whose value is just higher than the input value (rounds up to a power of two).

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > prevMultiple (vec< L, T, Q > const &v, T Multiple)
 Returns the lower multiple of 'Multiple' closest to 'v'.

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > prevMultiple (vec< L, T, Q > const &v, vec< L, T, Q > const &Multiple)
 Returns the lower multiple of 'Multiple' closest to 'v'.

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > prevPowerOfTwo (vec< L, T, Q > const &v)
 Returns the power of two whose value is just lower than the input value (rounds down to a power of two).
 

Detailed Description

GLM_EXT_vector_integer

See also
Core features (dependence)
GLM_EXT_vector_integer (dependence)

Definition in file vector_integer.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00222_source.html ================================================ 0.9.9 API documentation: vector_integer.hpp Source File
vector_integer.hpp

#pragma once

// Dependencies
#include "../detail/setup.hpp"
#include "../detail/qualifier.hpp"
#include "../detail/_vectorize.hpp"
#include "../vector_relational.hpp"
#include "../common.hpp"
#include <limits>

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_EXT_vector_integer extension included")
#endif

namespace glm
{
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, bool, Q> isPowerOfTwo(vec<L, T, Q> const& v);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> nextPowerOfTwo(vec<L, T, Q> const& v);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> prevPowerOfTwo(vec<L, T, Q> const& v);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, bool, Q> isMultiple(vec<L, T, Q> const& v, T Multiple);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, bool, Q> isMultiple(vec<L, T, Q> const& v, vec<L, T, Q> const& Multiple);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> nextMultiple(vec<L, T, Q> const& v, T Multiple);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> nextMultiple(vec<L, T, Q> const& v, vec<L, T, Q> const& Multiple);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> prevMultiple(vec<L, T, Q> const& v, T Multiple);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> prevMultiple(vec<L, T, Q> const& v, vec<L, T, Q> const& Multiple);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, int, Q> findNSB(vec<L, T, Q> const& Source, vec<L, int, Q> SignificantBitCount);
} //namespace glm

#include "vector_integer.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00223.html ================================================ 0.9.9 API documentation: vector_query.hpp File Reference
0.9.9 API documentation
vector_query.hpp File Reference

GLM_GTX_vector_query More...

Go to the source code of this file.

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL bool areCollinear (vec< L, T, Q > const &v0, vec< L, T, Q > const &v1, T const &epsilon)
 Check whether two vectors are collinears. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL bool areOrthogonal (vec< L, T, Q > const &v0, vec< L, T, Q > const &v1, T const &epsilon)
 Check whether two vectors are orthogonals. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL bool areOrthonormal (vec< L, T, Q > const &v0, vec< L, T, Q > const &v1, T const &epsilon)
 Check whether two vectors are orthonormal. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, bool, Q > isCompNull (vec< L, T, Q > const &v, T const &epsilon)
 Check whether each component of a vector is null. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL bool isNormalized (vec< L, T, Q > const &v, T const &epsilon)
 Check whether a vector is normalized. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL bool isNull (vec< L, T, Q > const &v, T const &epsilon)
 Check whether a vector is null. More...
 

Detailed Description

GLM_GTX_vector_query

See also
Core features (dependence)

Definition in file vector_query.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00223_source.html ================================================ 0.9.9 API documentation: vector_query.hpp Source File
vector_query.hpp
Go to the documentation of this file.
#pragma once

// Dependency:
#include "../glm.hpp"
#include <cfloat>
#include <limits>

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	ifndef GLM_ENABLE_EXPERIMENTAL
#		pragma message("GLM: GLM_GTX_vector_query is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
#	else
#		pragma message("GLM: GLM_GTX_vector_query extension included")
#	endif
#endif

namespace glm
{
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL bool areCollinear(vec<L, T, Q> const& v0, vec<L, T, Q> const& v1, T const& epsilon);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL bool areOrthogonal(vec<L, T, Q> const& v0, vec<L, T, Q> const& v1, T const& epsilon);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL bool isNormalized(vec<L, T, Q> const& v, T const& epsilon);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL bool isNull(vec<L, T, Q> const& v, T const& epsilon);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, bool, Q> isCompNull(vec<L, T, Q> const& v, T const& epsilon);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL bool areOrthonormal(vec<L, T, Q> const& v0, vec<L, T, Q> const& v1, T const& epsilon);
}// namespace glm

#include "vector_query.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00224.html ================================================ 0.9.9 API documentation: vector_relational.hpp File Reference
ext/vector_relational.hpp File Reference

GLM_EXT_vector_relational More...

Go to the source code of this file.

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > equal (vec< L, T, Q > const &x, vec< L, T, Q > const &y, T epsilon)
 Returns the component-wise comparison of |x - y| < epsilon. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > equal (vec< L, T, Q > const &x, vec< L, T, Q > const &y, vec< L, T, Q > const &epsilon)
 Returns the component-wise comparison of |x - y| < epsilon. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > equal (vec< L, T, Q > const &x, vec< L, T, Q > const &y, int ULPs)
 Returns the component-wise comparison between two vectors in terms of ULPs. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > equal (vec< L, T, Q > const &x, vec< L, T, Q > const &y, vec< L, int, Q > const &ULPs)
 Returns the component-wise comparison between two vectors in terms of ULPs. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > notEqual (vec< L, T, Q > const &x, vec< L, T, Q > const &y, T epsilon)
 Returns the component-wise comparison of |x - y| >= epsilon. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > notEqual (vec< L, T, Q > const &x, vec< L, T, Q > const &y, vec< L, T, Q > const &epsilon)
 Returns the component-wise comparison of |x - y| >= epsilon. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > notEqual (vec< L, T, Q > const &x, vec< L, T, Q > const &y, int ULPs)
 Returns the component-wise comparison between two vectors in terms of ULPs. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > notEqual (vec< L, T, Q > const &x, vec< L, T, Q > const &y, vec< L, int, Q > const &ULPs)
 Returns the component-wise comparison between two vectors in terms of ULPs. More...
 

Detailed Description

GLM_EXT_vector_relational

See also
Core features (dependence)
GLM_EXT_scalar_integer (dependence)

Definition in file ext/vector_relational.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00224_source.html ================================================ 0.9.9 API documentation: vector_relational.hpp Source File
ext/vector_relational.hpp
Go to the documentation of this file.
#pragma once

// Dependencies
#include "../detail/qualifier.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_EXT_vector_relational extension included")
#endif

namespace glm
{
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> equal(vec<L, T, Q> const& x, vec<L, T, Q> const& y, T epsilon);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> equal(vec<L, T, Q> const& x, vec<L, T, Q> const& y, vec<L, T, Q> const& epsilon);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> notEqual(vec<L, T, Q> const& x, vec<L, T, Q> const& y, T epsilon);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> notEqual(vec<L, T, Q> const& x, vec<L, T, Q> const& y, vec<L, T, Q> const& epsilon);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> equal(vec<L, T, Q> const& x, vec<L, T, Q> const& y, int ULPs);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> equal(vec<L, T, Q> const& x, vec<L, T, Q> const& y, vec<L, int, Q> const& ULPs);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> notEqual(vec<L, T, Q> const& x, vec<L, T, Q> const& y, int ULPs);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> notEqual(vec<L, T, Q> const& x, vec<L, T, Q> const& y, vec<L, int, Q> const& ULPs);
}//namespace glm

#include "vector_relational.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00225.html ================================================ 0.9.9 API documentation: vector_relational.hpp File Reference
vector_relational.hpp File Reference

Core features More...

Go to the source code of this file.

Functions

template<length_t L, qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR bool all (vec< L, bool, Q > const &v)
 Returns true if all components of x are true. More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR bool any (vec< L, bool, Q > const &v)
 Returns true if any component of x is true. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > equal (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Returns the component-wise comparison result of x == y. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > greaterThan (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Returns the component-wise comparison result of x > y. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > greaterThanEqual (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Returns the component-wise comparison result of x >= y. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > lessThan (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Returns the component-wise comparison result of x < y. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > lessThanEqual (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Returns the component-wise comparison result of x <= y. More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > not_ (vec< L, bool, Q > const &v)
 Returns the component-wise logical complement of x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > notEqual (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Returns the component-wise comparison result of x != y. More...
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00225_source.html ================================================ 0.9.9 API documentation: vector_relational.hpp Source File
vector_relational.hpp
Go to the documentation of this file.
#pragma once

#include "detail/qualifier.hpp"
#include "detail/setup.hpp"

namespace glm
{
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> lessThan(vec<L, T, Q> const& x, vec<L, T, Q> const& y);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> lessThanEqual(vec<L, T, Q> const& x, vec<L, T, Q> const& y);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> greaterThan(vec<L, T, Q> const& x, vec<L, T, Q> const& y);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> greaterThanEqual(vec<L, T, Q> const& x, vec<L, T, Q> const& y);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> equal(vec<L, T, Q> const& x, vec<L, T, Q> const& y);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> notEqual(vec<L, T, Q> const& x, vec<L, T, Q> const& y);

	template<length_t L, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR bool any(vec<L, bool, Q> const& v);

	template<length_t L, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR bool all(vec<L, bool, Q> const& v);

	template<length_t L, qualifier Q>
	GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> not_(vec<L, bool, Q> const& v);
}//namespace glm

#include "detail/func_vector_relational.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00226.html ================================================ 0.9.9 API documentation: vector_uint1.hpp File Reference
vector_uint1.hpp File Reference

GLM_EXT_vector_uint1 More...

Go to the source code of this file.

Typedefs

typedef vec< 1, unsigned int, defaultp > uvec1
 1 component vector of unsigned integer numbers.
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00226_source.html ================================================ 0.9.9 API documentation: vector_uint1.hpp Source File
vector_uint1.hpp
Go to the documentation of this file.
#pragma once

#include "../detail/type_vec1.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_EXT_vector_uint1 extension included")
#endif

namespace glm
{
	typedef vec<1, unsigned int, defaultp> uvec1;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00227.html ================================================ 0.9.9 API documentation: vector_uint1_precision.hpp File Reference
vector_uint1_precision.hpp File Reference

GLM_EXT_vector_uint1_precision More...

Go to the source code of this file.

Typedefs

typedef vec< 1, unsigned int, highp > highp_uvec1
 1 component vector of unsigned integer values. More...
 
typedef vec< 1, unsigned int, lowp > lowp_uvec1
 1 component vector of unsigned integer values. More...
 
typedef vec< 1, unsigned int, mediump > mediump_uvec1
 1 component vector of unsigned integer values. More...
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00227_source.html ================================================ 0.9.9 API documentation: vector_uint1_precision.hpp Source File
vector_uint1_precision.hpp
Go to the documentation of this file.
#pragma once

#include "../detail/type_vec1.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_EXT_vector_uint1_precision extension included")
#endif

namespace glm
{
	typedef vec<1, unsigned int, highp> highp_uvec1;

	typedef vec<1, unsigned int, mediump> mediump_uvec1;

	typedef vec<1, unsigned int, lowp> lowp_uvec1;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00228.html ================================================ 0.9.9 API documentation: vector_uint2.hpp File Reference
vector_uint2.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef vec< 2, unsigned int, defaultp > uvec2
 2 components vector of unsigned integer numbers. More...
 

Detailed Description

Core features

Definition in file vector_uint2.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00228_source.html ================================================ 0.9.9 API documentation: vector_uint2.hpp Source File
vector_uint2.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_vec2.hpp"

namespace glm
{
	typedef vec<2, unsigned int, defaultp> uvec2;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00229.html ================================================ 0.9.9 API documentation: vector_uint2_precision.hpp File Reference
vector_uint2_precision.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef vec< 2, unsigned int, highp > highp_uvec2
 2 components vector of high qualifier unsigned integer numbers. More...
 
typedef vec< 2, unsigned int, lowp > lowp_uvec2
 2 components vector of low qualifier unsigned integer numbers. More...
 
typedef vec< 2, unsigned int, mediump > mediump_uvec2
 2 components vector of medium qualifier unsigned integer numbers. More...
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00229_source.html ================================================ 0.9.9 API documentation: vector_uint2_precision.hpp Source File
vector_uint2_precision.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_vec2.hpp"

namespace glm
{
	typedef vec<2, unsigned int, highp> highp_uvec2;

	typedef vec<2, unsigned int, mediump> mediump_uvec2;

	typedef vec<2, unsigned int, lowp> lowp_uvec2;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00230.html ================================================ 0.9.9 API documentation: vector_uint3.hpp File Reference
vector_uint3.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef vec< 3, unsigned int, defaultp > uvec3
 3 components vector of unsigned integer numbers. More...
 

Detailed Description

Core features

Definition in file vector_uint3.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00230_source.html ================================================ 0.9.9 API documentation: vector_uint3.hpp Source File
vector_uint3.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_vec3.hpp"

namespace glm
{
	typedef vec<3, unsigned int, defaultp> uvec3;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00231.html ================================================ 0.9.9 API documentation: vector_uint3_precision.hpp File Reference
vector_uint3_precision.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef vec< 3, unsigned int, highp > highp_uvec3
 3 components vector of high qualifier unsigned integer numbers. More...
 
typedef vec< 3, unsigned int, lowp > lowp_uvec3
 3 components vector of low qualifier unsigned integer numbers. More...
 
typedef vec< 3, unsigned int, mediump > mediump_uvec3
 3 components vector of medium qualifier unsigned integer numbers. More...
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00231_source.html ================================================ 0.9.9 API documentation: vector_uint3_precision.hpp Source File
vector_uint3_precision.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_vec3.hpp"

namespace glm
{
	typedef vec<3, unsigned int, highp> highp_uvec3;

	typedef vec<3, unsigned int, mediump> mediump_uvec3;

	typedef vec<3, unsigned int, lowp> lowp_uvec3;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00232.html ================================================ 0.9.9 API documentation: vector_uint4.hpp File Reference
vector_uint4.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef vec< 4, unsigned int, defaultp > uvec4
 4 components vector of unsigned integer numbers. More...
 

Detailed Description

Core features

Definition in file vector_uint4.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00232_source.html ================================================ 0.9.9 API documentation: vector_uint4.hpp Source File
vector_uint4.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_vec4.hpp"

namespace glm
{
	typedef vec<4, unsigned int, defaultp> uvec4;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00233.html ================================================ 0.9.9 API documentation: vector_uint4_precision.hpp File Reference
vector_uint4_precision.hpp File Reference

Core features More...

Go to the source code of this file.

Typedefs

typedef vec< 4, unsigned int, highp > highp_uvec4
 4 components vector of high qualifier unsigned integer numbers. More...
 
typedef vec< 4, unsigned int, lowp > lowp_uvec4
 4 components vector of low qualifier unsigned integer numbers. More...
 
typedef vec< 4, unsigned int, mediump > mediump_uvec4
 4 components vector of medium qualifier unsigned integer numbers. More...
 

Detailed Description

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00233_source.html ================================================ 0.9.9 API documentation: vector_uint4_precision.hpp Source File
vector_uint4_precision.hpp
Go to the documentation of this file.
#pragma once
#include "../detail/type_vec4.hpp"

namespace glm
{
	typedef vec<4, unsigned int, highp> highp_uvec4;

	typedef vec<4, unsigned int, mediump> mediump_uvec4;

	typedef vec<4, unsigned int, lowp> lowp_uvec4;
}//namespace glm
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00234.html ================================================ 0.9.9 API documentation: vector_ulp.hpp File Reference
vector_ulp.hpp File Reference

GLM_EXT_vector_ulp More...

Go to the source code of this file.

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, int, Q > floatDistance (vec< L, float, Q > const &x, vec< L, float, Q > const &y)
 Return the distance in ULPs between two single-precision floating-point values, component-wise. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, int64, Q > floatDistance (vec< L, double, Q > const &x, vec< L, double, Q > const &y)
 Return the distance in ULPs between two double-precision floating-point values, component-wise. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > nextFloat (vec< L, T, Q > const &x)
 Return the next ULP value(s) after the input value(s). More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > nextFloat (vec< L, T, Q > const &x, int ULPs)
 Return the value(s) ULP distance after the input value(s). More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > nextFloat (vec< L, T, Q > const &x, vec< L, int, Q > const &ULPs)
 Return the value(s) ULP distance after the input value(s). More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > prevFloat (vec< L, T, Q > const &x)
 Return the previous ULP value(s) before the input value(s). More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > prevFloat (vec< L, T, Q > const &x, int ULPs)
 Return the value(s) ULP distance before the input value(s). More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > prevFloat (vec< L, T, Q > const &x, vec< L, int, Q > const &ULPs)
 Return the value(s) ULP distance before the input value(s). More...
 

Detailed Description

GLM_EXT_vector_ulp

Definition in file vector_ulp.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00234_source.html ================================================ 0.9.9 API documentation: vector_ulp.hpp Source File
vector_ulp.hpp
Go to the documentation of this file.
#pragma once

// Dependencies
#include "../ext/scalar_ulp.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_EXT_vector_ulp extension included")
#endif

namespace glm
{
	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> nextFloat(vec<L, T, Q> const& x);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> nextFloat(vec<L, T, Q> const& x, int ULPs);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> nextFloat(vec<L, T, Q> const& x, vec<L, int, Q> const& ULPs);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> prevFloat(vec<L, T, Q> const& x);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> prevFloat(vec<L, T, Q> const& x, int ULPs);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, T, Q> prevFloat(vec<L, T, Q> const& x, vec<L, int, Q> const& ULPs);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, int, Q> floatDistance(vec<L, float, Q> const& x, vec<L, float, Q> const& y);

	template<length_t L, typename T, qualifier Q>
	GLM_FUNC_DECL vec<L, int64, Q> floatDistance(vec<L, double, Q> const& x, vec<L, double, Q> const& y);
}//namespace glm

#include "vector_ulp.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00235.html ================================================ 0.9.9 API documentation: wrap.hpp File Reference
wrap.hpp File Reference

GLM_GTX_wrap More...

Go to the source code of this file.

Functions

template<typename genType >
GLM_FUNC_DECL genType clamp (genType const &Texcoord)
 Simulate GL_CLAMP OpenGL wrap mode. More...
 
template<typename genType >
GLM_FUNC_DECL genType mirrorClamp (genType const &Texcoord)
 Simulate GL_MIRRORED_REPEAT OpenGL wrap mode. More...
 
template<typename genType >
GLM_FUNC_DECL genType mirrorRepeat (genType const &Texcoord)
 Simulate GL_MIRROR_REPEAT OpenGL wrap mode. More...
 
template<typename genType >
GLM_FUNC_DECL genType repeat (genType const &Texcoord)
 Simulate GL_REPEAT OpenGL wrap mode. More...
 

Detailed Description

GLM_GTX_wrap

See also
Core features (dependence)

Definition in file wrap.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00235_source.html ================================================ 0.9.9 API documentation: wrap.hpp Source File
wrap.hpp
Go to the documentation of this file.
1 
13 #pragma once
14 
15 // Dependency:
16 #include "../glm.hpp"
17 #include "../gtc/vec1.hpp"
18 
19 #if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
20 # ifndef GLM_ENABLE_EXPERIMENTAL
21 # pragma message("GLM: GLM_GTX_wrap is an experimental extension and may change in the future. Use #define GLM_ENABLE_EXPERIMENTAL before including it, if you really want to use it.")
22 # else
23 # pragma message("GLM: GLM_GTX_wrap extension included")
24 # endif
25 #endif
26 
27 namespace glm
28 {
31 
34  template<typename genType>
35  GLM_FUNC_DECL genType clamp(genType const& Texcoord);
36 
39  template<typename genType>
40  GLM_FUNC_DECL genType repeat(genType const& Texcoord);
41 
44  template<typename genType>
45  GLM_FUNC_DECL genType mirrorClamp(genType const& Texcoord);
46 
49  template<typename genType>
50  GLM_FUNC_DECL genType mirrorRepeat(genType const& Texcoord);
51 
53 }// namespace glm
54 
55 #include "wrap.inl"
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00241.html ================================================ 0.9.9 API documentation: Common functions
0.9.9 API documentation
Common functions

Provides GLSL common functions. More...

Functions

template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType abs (genType x)
 Returns x if x >= 0; otherwise, it returns -x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > abs (vec< L, T, Q > const &x)
 Returns x if x >= 0; otherwise, it returns -x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > ceil (vec< L, T, Q > const &x)
 Returns a value equal to the nearest integer that is greater than or equal to x. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType clamp (genType x, genType minVal, genType maxVal)
 Returns min(max(x, minVal), maxVal) for each component in x using the floating-point values minVal and maxVal. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > clamp (vec< L, T, Q > const &x, T minVal, T maxVal)
 Returns min(max(x, minVal), maxVal) for each component in x using the floating-point values minVal and maxVal. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > clamp (vec< L, T, Q > const &x, vec< L, T, Q > const &minVal, vec< L, T, Q > const &maxVal)
 Returns min(max(x, minVal), maxVal) for each component in x using the floating-point values minVal and maxVal. More...
 
GLM_FUNC_DECL int floatBitsToInt (float const &v)
 Returns a signed integer value representing the encoding of a floating-point value. More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL vec< L, int, Q > floatBitsToInt (vec< L, float, Q > const &v)
 Returns a signed integer value representing the encoding of a floating-point value. More...
 
GLM_FUNC_DECL uint floatBitsToUint (float const &v)
 Returns an unsigned integer value representing the encoding of a floating-point value. More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL vec< L, uint, Q > floatBitsToUint (vec< L, float, Q > const &v)
 Returns an unsigned integer value representing the encoding of a floating-point value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > floor (vec< L, T, Q > const &x)
 Returns a value equal to the nearest integer that is less than or equal to x. More...
 
template<typename genType >
GLM_FUNC_DECL genType fma (genType const &a, genType const &b, genType const &c)
 Computes and returns a * b + c. More...
 
template<typename genType >
GLM_FUNC_DECL genType fract (genType x)
 Return x - floor(x). More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fract (vec< L, T, Q > const &x)
 Return x - floor(x). More...
 
template<typename genType >
GLM_FUNC_DECL genType frexp (genType x, int &exp)
 Splits x into a floating-point significand in the range [0.5, 1.0) and an integral exponent of two, such that: x = significand * exp(2, exponent) More...
 
GLM_FUNC_DECL float intBitsToFloat (int const &v)
 Returns a floating-point value corresponding to a signed integer encoding of a floating-point value. More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL vec< L, float, Q > intBitsToFloat (vec< L, int, Q > const &v)
 Returns a floating-point value corresponding to a signed integer encoding of a floating-point value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, bool, Q > isinf (vec< L, T, Q > const &x)
 Returns true if x holds a positive infinity or negative infinity representation in the underlying implementation's set of floating point representations. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, bool, Q > isnan (vec< L, T, Q > const &x)
 Returns true if x holds a NaN (not a number) representation in the underlying implementation's set of floating point representations. More...
 
template<typename genType >
GLM_FUNC_DECL genType ldexp (genType const &x, int const &exp)
 Builds a floating-point number from x and the corresponding integral exponent of two in exp, returning: significand * exp(2, exponent) More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType max (genType x, genType y)
 Returns y if x < y; otherwise, it returns x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > max (vec< L, T, Q > const &x, T y)
 Returns y if x < y; otherwise, it returns x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > max (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Returns y if x < y; otherwise, it returns x. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType min (genType x, genType y)
 Returns y if y < x; otherwise, it returns x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > min (vec< L, T, Q > const &x, T y)
 Returns y if y < x; otherwise, it returns x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > min (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Returns y if y < x; otherwise, it returns x. More...
 
template<typename genTypeT , typename genTypeU >
GLM_FUNC_DECL genTypeT mix (genTypeT x, genTypeT y, genTypeU a)
 If genTypeU is a floating scalar or vector: Returns x * (1.0 - a) + y * a, i.e., the linear blend of x and y using the floating-point value a. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > mod (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Modulus. More...
 
template<typename genType >
GLM_FUNC_DECL genType modf (genType x, genType &i)
 Returns the fractional part of x and sets i to the integer part (as a whole number floating point value). More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > round (vec< L, T, Q > const &x)
 Returns a value equal to the nearest integer to x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > roundEven (vec< L, T, Q > const &x)
 Returns a value equal to the nearest integer to x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > sign (vec< L, T, Q > const &x)
 Returns 1.0 if x > 0, 0.0 if x == 0, or -1.0 if x < 0. More...
 
template<typename genType >
GLM_FUNC_DECL genType smoothstep (genType edge0, genType edge1, genType x)
 Returns 0.0 if x <= edge0 and 1.0 if x >= edge1 and performs smooth Hermite interpolation between 0 and 1 when edge0 < x < edge1. More...
 
template<typename genType >
GLM_FUNC_DECL genType step (genType edge, genType x)
 Returns 0.0 if x < edge, otherwise it returns 1.0 for each component of a genType. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > step (T edge, vec< L, T, Q > const &x)
 Returns 0.0 if x < edge, otherwise it returns 1.0. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > step (vec< L, T, Q > const &edge, vec< L, T, Q > const &x)
 Returns 0.0 if x < edge, otherwise it returns 1.0. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > trunc (vec< L, T, Q > const &x)
 Returns a value equal to the nearest integer to x whose absolute value is not larger than the absolute value of x. More...
 
GLM_FUNC_DECL float uintBitsToFloat (uint const &v)
 Returns a floating-point value corresponding to an unsigned integer encoding of a floating-point value. More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL vec< L, float, Q > uintBitsToFloat (vec< L, uint, Q > const &v)
 Returns a floating-point value corresponding to an unsigned integer encoding of a floating-point value. More...
 

Detailed Description

Provides GLSL common functions.

These all operate component-wise. The description is per component.

Include <glm/common.hpp> to use these core features.

Function Documentation

GLM_FUNC_DECL GLM_CONSTEXPR genType glm::abs ( genType  x)

Returns x if x >= 0; otherwise, it returns -x.

Template Parameters
genType: floating-point or signed integer; scalar or vector types.
See also
GLSL abs man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> glm::abs ( vec< L, T, Q > const &  x)

Returns x if x >= 0; otherwise, it returns -x.

Template Parameters
L: Integer between 1 and 4, inclusive, giving the dimension of the vector
T: Floating-point or signed integer scalar types
Q: Value from qualifier enum
See also
GLSL abs man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL vec<L, T, Q> glm::ceil ( vec< L, T, Q > const &  x)

Returns a value equal to the nearest integer that is greater than or equal to x.

Template Parameters
L: Integer between 1 and 4, inclusive, giving the dimension of the vector
T: Floating-point scalar types
Q: Value from qualifier enum
See also
GLSL ceil man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::clamp ( genType  x,
genType  minVal,
genType  maxVal 
)

Returns min(max(x, minVal), maxVal) for each component in x using the floating-point values minVal and maxVal.

Template Parameters
genType: Floating-point or integer; scalar or vector types.
See also
GLSL clamp man page
GLSL 4.20.8 specification, section 8.3 Common Functions

Referenced by glm::saturate().

GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> glm::clamp ( vec< L, T, Q > const &  x,
T minVal,
T maxVal 
)

Returns min(max(x, minVal), maxVal) for each component in x using the floating-point values minVal and maxVal.

Template Parameters
L: Integer between 1 and 4, inclusive, giving the dimension of the vector
T: Floating-point or integer scalar types
Q: Value from qualifier enum
See also
GLSL clamp man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> glm::clamp ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  minVal,
vec< L, T, Q > const &  maxVal 
)

Returns min(max(x, minVal), maxVal) for each component in x using the floating-point values minVal and maxVal.

Template Parameters
L: Integer between 1 and 4, inclusive, giving the dimension of the vector
T: Floating-point or integer scalar types
Q: Value from qualifier enum
See also
GLSL clamp man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL int glm::floatBitsToInt ( float const &  v)

Returns a signed integer value representing the encoding of a floating-point value.

The floating-point value's bit-level representation is preserved.

See also
GLSL floatBitsToInt man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL vec<L, int, Q> glm::floatBitsToInt ( vec< L, float, Q > const &  v)

Returns a signed integer value representing the encoding of a floating-point value.

The floating-point value's bit-level representation is preserved.

Template Parameters
L: Integer between 1 and 4, inclusive, giving the dimension of the vector
Q: Value from qualifier enum
See also
GLSL floatBitsToInt man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL uint glm::floatBitsToUint ( float const &  v)

Returns an unsigned integer value representing the encoding of a floating-point value.

The floating-point value's bit-level representation is preserved.

See also
GLSL floatBitsToUint man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL vec<L, uint, Q> glm::floatBitsToUint ( vec< L, float, Q > const &  v)

Returns an unsigned integer value representing the encoding of a floating-point value.

The floating-point value's bit-level representation is preserved.

Template Parameters
L: Integer between 1 and 4, inclusive, giving the dimension of the vector
Q: Value from qualifier enum
See also
GLSL floatBitsToUint man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL vec<L, T, Q> glm::floor ( vec< L, T, Q > const &  x)

Returns a value equal to the nearest integer that is less than or equal to x.

Template Parameters
L: Integer between 1 and 4, inclusive, giving the dimension of the vector
T: Floating-point scalar types
Q: Value from qualifier enum
See also
GLSL floor man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL genType glm::fma ( genType const &  a,
genType const &  b,
genType const &  c 
)

Computes and returns a * b + c.

Template Parameters
genType: Floating-point scalar or vector types.
See also
GLSL fma man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL genType glm::fract ( genType  x)

Return x - floor(x).

Template Parameters
genType: Floating-point scalar or vector types.
See also
GLSL fract man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL vec<L, T, Q> glm::fract ( vec< L, T, Q > const &  x)

Return x - floor(x).

Template Parameters
L: Integer between 1 and 4, inclusive, giving the dimension of the vector
T: Floating-point scalar types
Q: Value from qualifier enum
See also
GLSL fract man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL genType glm::frexp ( genType  x,
int &  exp 
)

Splits x into a floating-point significand in the range [0.5, 1.0) and an integral exponent of two, such that: x = significand * exp(2, exponent)

The significand is returned by the function and the exponent is returned in the parameter exp. For a floating-point value of zero, the significand and exponent are both zero. For a floating-point value that is an infinity or is not a number, the results are undefined.

Template Parameters
genType: Floating-point scalar or vector types.
See also
GLSL frexp man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL float glm::intBitsToFloat ( int const &  v)

Returns a floating-point value corresponding to a signed integer encoding of a floating-point value.

If an inf or NaN is passed in, it will not signal, and the resulting floating point value is unspecified. Otherwise, the bit-level representation is preserved.

See also
GLSL intBitsToFloat man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL vec<L, float, Q> glm::intBitsToFloat ( vec< L, int, Q > const &  v)

Returns a floating-point value corresponding to a signed integer encoding of a floating-point value.

If an inf or NaN is passed in, it will not signal, and the resulting floating point value is unspecified. Otherwise, the bit-level representation is preserved.

Template Parameters
L: Integer between 1 and 4, inclusive, giving the dimension of the vector
Q: Value from qualifier enum
See also
GLSL intBitsToFloat man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL vec<L, bool, Q> glm::isinf ( vec< L, T, Q > const &  x)

Returns true if x holds a positive infinity or negative infinity representation in the underlying implementation's set of floating point representations.

Returns false otherwise, including for implementations with no infinity representations.

Template Parameters
L: Integer between 1 and 4, inclusive, giving the dimension of the vector
T: Floating-point scalar types
Q: Value from qualifier enum
See also
GLSL isinf man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL vec<L, bool, Q> glm::isnan ( vec< L, T, Q > const &  x)

Returns true if x holds a NaN (not a number) representation in the underlying implementation's set of floating point representations.

Returns false otherwise, including for implementations with no NaN representations.

Warning: when compiler fast-math is enabled, this function may fail.

Template Parameters
L: Integer between 1 and 4, inclusive, giving the dimension of the vector
T: Floating-point scalar types
Q: Value from qualifier enum
See also
GLSL isnan man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL genType glm::ldexp ( genType const &  x,
int const &  exp 
)

Builds a floating-point number from x and the corresponding integral exponent of two in exp, returning: significand * exp(2, exponent)

If this product is too large to be represented in the floating-point type, the result is undefined.

Template Parameters
genType: Floating-point scalar or vector types.
See also
GLSL ldexp man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::max ( genType  x,
genType  y 
)

Returns y if x < y; otherwise, it returns x.

Template Parameters
genType: Floating-point or integer; scalar or vector types.
See also
GLSL max man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> glm::max ( vec< L, T, Q > const &  x,
T y 
)

Returns y if x < y; otherwise, it returns x.

Template Parameters
L: Integer between 1 and 4, inclusive, giving the dimension of the vector
T: Floating-point or integer scalar types
Q: Value from qualifier enum
See also
GLSL max man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> glm::max ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  y 
)

Returns y if x < y; otherwise, it returns x.

Template Parameters
L: Integer between 1 and 4, inclusive, giving the dimension of the vector
T: Floating-point or integer scalar types
Q: Value from qualifier enum
See also
GLSL max man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::min ( genType  x,
genType  y 
)

Returns y if y < x; otherwise, it returns x.

Template Parameters
genType: Floating-point or integer; scalar or vector types.
See also
GLSL min man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> glm::min ( vec< L, T, Q > const &  x,
T y 
)

Returns y if y < x; otherwise, it returns x.

Template Parameters
L: Integer between 1 and 4, inclusive, giving the dimension of the vector
T: Floating-point or integer scalar types
Q: Value from qualifier enum
See also
GLSL min man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> glm::min ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  y 
)

Returns y if y < x; otherwise, it returns x.

Template Parameters
L: Integer between 1 and 4, inclusive, giving the dimension of the vector
T: Floating-point or integer scalar types
Q: Value from qualifier enum
See also
GLSL min man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL genTypeT glm::mix ( genTypeT  x,
genTypeT  y,
genTypeU  a 
)

If genTypeU is a floating scalar or vector: Returns x * (1.0 - a) + y * a, i.e., the linear blend of x and y using the floating-point value a.

The value for a is not restricted to the range [0, 1].

If genTypeU is a boolean scalar or vector: Selects which vector each returned component comes from. For a component of 'a' that is false, the corresponding component of 'x' is returned. For a component of 'a' that is true, the corresponding component of 'y' is returned. Components of 'x' and 'y' that are not selected are allowed to be invalid floating point values and will have no effect on the results. Thus, this provides different functionality than genType mix(genType x, genType y, genType(a)) where a is a Boolean vector.

See also
GLSL mix man page
GLSL 4.20.8 specification, section 8.3 Common Functions
Parameters
[in] x: Value to interpolate.
[in] y: Value to interpolate.
[in] a: Interpolant.
Template Parameters
genTypeT: Floating-point scalar or vector.
genTypeU: Floating-point or boolean scalar or vector. It can only be a vector if its length matches genTypeT.
#include <glm/glm.hpp>
...
float a;
bool b;
...
glm::vec4 r = glm::mix(g, h, a); // Interpolate two vectors with a floating-point scalar.
glm::vec4 s = glm::mix(g, h, b); // Returns g or h.
glm::dvec3 t = glm::mix(e, f, a); // The type of the third parameter need not match the first two.
glm::vec4 u = glm::mix(g, h, r); // Interpolation can be performed per component with a vector as the last parameter.

Referenced by glm::lerp().

GLM_FUNC_DECL vec<L, T, Q> glm::mod ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  y 
)

Modulus.

Returns x - y * floor(x / y) for each component in x using the floating point value y.

Template Parameters
L: Integer between 1 and 4, inclusive, giving the dimension of the vector
T: Floating-point scalar types; include glm/gtc/integer for integer scalar type support
Q: Value from qualifier enum
See also
GLSL mod man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL genType glm::modf ( genType  x,
genType &  i 
)

Returns the fractional part of x and sets i to the integer part (as a whole number floating point value).

Both the return value and the output parameter will have the same sign as x.

Template Parameters
genType: Floating-point scalar or vector types.
See also
GLSL modf man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL vec<L, T, Q> glm::round ( vec< L, T, Q > const &  x)

Returns a value equal to the nearest integer to x.

The fraction 0.5 will round in a direction chosen by the implementation, presumably the direction that is fastest. This includes the possibility that round(x) returns the same value as roundEven(x) for all values of x.

Template Parameters
L: Integer between 1 and 4, inclusive, giving the dimension of the vector
T: Floating-point scalar types
Q: Value from qualifier enum
See also
GLSL round man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL vec<L, T, Q> glm::roundEven ( vec< L, T, Q > const &  x)

Returns a value equal to the nearest integer to x.

A fractional part of 0.5 will round toward the nearest even integer. (Both 3.5 and 4.5 for x will return 4.0.)

Template Parameters
L: Integer between 1 and 4, inclusive, giving the dimension of the vector
T: Floating-point scalar types
Q: Value from qualifier enum
See also
GLSL roundEven man page
GLSL 4.20.8 specification, section 8.3 Common Functions
New round to even technique
GLM_FUNC_DECL vec<L, T, Q> glm::sign ( vec< L, T, Q > const &  x)

Returns 1.0 if x > 0, 0.0 if x == 0, or -1.0 if x < 0.

Template Parameters
L: Integer between 1 and 4, inclusive, giving the dimension of the vector
T: Floating-point scalar types
Q: Value from qualifier enum
See also
GLSL sign man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL genType glm::smoothstep ( genType  edge0,
genType  edge1,
genType  x 
)

Returns 0.0 if x <= edge0 and 1.0 if x >= edge1 and performs smooth Hermite interpolation between 0 and 1 when edge0 < x < edge1.

This is useful in cases where you would want a threshold function with a smooth transition. This is equivalent to: genType t; t = clamp ((x - edge0) / (edge1 - edge0), 0, 1); return t * t * (3 - 2 * t); Results are undefined if edge0 >= edge1.

Template Parameters
genType: Floating-point scalar or vector types.
See also
GLSL smoothstep man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL genType glm::step ( genType  edge,
genType  x 
)

Returns 0.0 if x < edge, otherwise it returns 1.0 for each component of a genType.

See also
GLSL step man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL vec<L, T, Q> glm::step ( T  edge,
vec< L, T, Q > const &  x 
)

Returns 0.0 if x < edge, otherwise it returns 1.0.

Template Parameters
L: Integer between 1 and 4, inclusive, giving the dimension of the vector
T: Floating-point scalar types
Q: Value from qualifier enum
See also
GLSL step man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL vec<L, T, Q> glm::step ( vec< L, T, Q > const &  edge,
vec< L, T, Q > const &  x 
)

Returns 0.0 if x < edge, otherwise it returns 1.0.

Template Parameters
L: Integer between 1 and 4, inclusive, giving the dimension of the vector
T: Floating-point scalar types
Q: Value from qualifier enum
See also
GLSL step man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL vec<L, T, Q> glm::trunc ( vec< L, T, Q > const &  x)

Returns a value equal to the nearest integer to x whose absolute value is not larger than the absolute value of x.

Template Parameters
L: Integer between 1 and 4, inclusive, giving the dimension of the vector
T: Floating-point scalar types
Q: Value from qualifier enum
See also
GLSL trunc man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL float glm::uintBitsToFloat ( uint const &  v)

Returns a floating-point value corresponding to an unsigned integer encoding of a floating-point value.

If an inf or NaN is passed in, it will not signal, and the resulting floating point value is unspecified. Otherwise, the bit-level representation is preserved.

See also
GLSL uintBitsToFloat man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL vec<L, float, Q> glm::uintBitsToFloat ( vec< L, uint, Q > const &  v)

Returns a floating-point value corresponding to an unsigned integer encoding of a floating-point value.

If an inf or NaN is passed in, it will not signal, and the resulting floating point value is unspecified. Otherwise, the bit-level representation is preserved.

Template Parameters
L: Integer between 1 and 4, inclusive, giving the dimension of the vector
Q: Value from qualifier enum
See also
GLSL uintBitsToFloat man page
GLSL 4.20.8 specification, section 8.3 Common Functions
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00242.html ================================================ 0.9.9 API documentation: Exponential functions
0.9.9 API documentation
Exponential functions

Provides GLSL exponential functions. More...

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > exp (vec< L, T, Q > const &v)
 Returns the natural exponentiation of v, i.e., e^v. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > exp2 (vec< L, T, Q > const &v)
 Returns 2 raised to the v power. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > inversesqrt (vec< L, T, Q > const &v)
 Returns the reciprocal of the positive square root of v. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > log (vec< L, T, Q > const &v)
 Returns the natural logarithm of v, i.e., returns the value y which satisfies the equation v = e^y. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > log2 (vec< L, T, Q > const &v)
 Returns the base 2 log of v, i.e., returns the value y which satisfies the equation v = 2^y. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > pow (vec< L, T, Q > const &base, vec< L, T, Q > const &exponent)
 Returns 'base' raised to the power 'exponent'. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > sqrt (vec< L, T, Q > const &v)
 Returns the positive square root of v. More...
 

Detailed Description

Provides GLSL exponential functions.

These all operate component-wise. The description is per component.

Include <glm/exponential.hpp> to use these core features.

Function Documentation

GLM_FUNC_DECL vec<L, T, Q> glm::exp ( vec< L, T, Q > const &  v)

Returns the natural exponentiation of v, i.e., e^v.

Parameters
v: The exp function is defined for input values of v in the range (-inf, +inf), within the limits of the type qualifier.
Template Parameters
L: An integer between 1 and 4, inclusive, giving the dimension of the vector.
T: Floating-point scalar types.
See also
GLSL exp man page
GLSL 4.20.8 specification, section 8.2 Exponential Functions
GLM_FUNC_DECL vec<L, T, Q> glm::exp2 ( vec< L, T, Q > const &  v)

Returns 2 raised to the v power.

Parameters
v: The exp2 function is defined for input values of v in the range (-inf, +inf), within the limits of the type qualifier.
Template Parameters
L: An integer between 1 and 4, inclusive, giving the dimension of the vector.
T: Floating-point scalar types.
See also
GLSL exp2 man page
GLSL 4.20.8 specification, section 8.2 Exponential Functions
GLM_FUNC_DECL vec<L, T, Q> glm::inversesqrt ( vec< L, T, Q > const &  v)

Returns the reciprocal of the positive square root of v.

Parameters
v: The inversesqrt function is defined for input values of v in the range [0, +inf), within the limits of the type qualifier.
Template Parameters
L: An integer between 1 and 4, inclusive, giving the dimension of the vector.
T: Floating-point scalar types.
See also
GLSL inversesqrt man page
GLSL 4.20.8 specification, section 8.2 Exponential Functions
GLM_FUNC_DECL vec<L, T, Q> glm::log ( vec< L, T, Q > const &  v)

Returns the natural logarithm of v, i.e., returns the value y which satisfies the equation v = e^y.

Results are undefined if v <= 0.

Parameters
v: The log function is defined for input values of v in the range (0, +inf), within the limits of the type qualifier.
Template Parameters
L: An integer between 1 and 4, inclusive, giving the dimension of the vector.
T: Floating-point scalar types.
See also
GLSL log man page
GLSL 4.20.8 specification, section 8.2 Exponential Functions
GLM_FUNC_DECL vec<L, T, Q> glm::log2 ( vec< L, T, Q > const &  v)

Returns the base 2 log of v, i.e., returns the value y which satisfies the equation v = 2^y.

Parameters
v: The log2 function is defined for input values of v in the range (0, +inf), within the limits of the type qualifier.
Template Parameters
L: An integer between 1 and 4, inclusive, giving the dimension of the vector.
T: Floating-point scalar types.
See also
GLSL log2 man page
GLSL 4.20.8 specification, section 8.2 Exponential Functions
GLM_FUNC_DECL vec<L, T, Q> glm::pow ( vec< L, T, Q > const &  base,
vec< L, T, Q > const &  exponent 
)

Returns 'base' raised to the power 'exponent'.

Parameters
base: Floating-point value. The pow function is defined for input values of 'base' in the range (-inf, +inf), within the limits of the type qualifier.
exponent: Floating-point value representing the 'exponent'.
See also
GLSL pow man page
GLSL 4.20.8 specification, section 8.2 Exponential Functions
GLM_FUNC_DECL vec<L, T, Q> glm::sqrt ( vec< L, T, Q > const &  v)

Returns the positive square root of v.

Parameters
v: The sqrt function is defined for input values of v in the range [0, +inf), within the limits of the type qualifier.
Template Parameters
L: An integer between 1 and 4, inclusive, giving the dimension of the vector.
T: Floating-point scalar types.
See also
GLSL sqrt man page
GLSL 4.20.8 specification, section 8.2 Exponential Functions
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00243.html ================================================ 0.9.9 API documentation: GLM_EXT_matrix_clip_space
0.9.9 API documentation
GLM_EXT_matrix_clip_space

Defines functions that generate clip space transformation matrices. More...

Functions

template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > frustum (T left, T right, T bottom, T top, T near, T far)
 Creates a frustum matrix using the default handedness and the default near and far clip plane definitions. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > frustumLH (T left, T right, T bottom, T top, T near, T far)
 Creates a left handed frustum matrix. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > frustumLH_NO (T left, T right, T bottom, T top, T near, T far)
 Creates a left handed frustum matrix. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > frustumLH_ZO (T left, T right, T bottom, T top, T near, T far)
 Creates a left handed frustum matrix. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > frustumNO (T left, T right, T bottom, T top, T near, T far)
 Creates a frustum matrix using left-handed coordinates if GLM_FORCE_LEFT_HANDED is defined or right-handed coordinates otherwise. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > frustumRH (T left, T right, T bottom, T top, T near, T far)
 Creates a right handed frustum matrix. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > frustumRH_NO (T left, T right, T bottom, T top, T near, T far)
 Creates a right handed frustum matrix. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > frustumRH_ZO (T left, T right, T bottom, T top, T near, T far)
 Creates a right handed frustum matrix. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > frustumZO (T left, T right, T bottom, T top, T near, T far)
 Creates a frustum matrix using left-handed coordinates if GLM_FORCE_LEFT_HANDED is defined or right-handed coordinates otherwise. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > infinitePerspective (T fovy, T aspect, T near)
 Creates a matrix for a symmetric perspective-view frustum with the far plane at infinity, using the default handedness. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > infinitePerspectiveLH (T fovy, T aspect, T near)
 Creates a matrix for a left handed, symmetric perspective-view frustum with the far plane at infinity. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > infinitePerspectiveRH (T fovy, T aspect, T near)
 Creates a matrix for a right handed, symmetric perspective-view frustum with the far plane at infinity. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > ortho (T left, T right, T bottom, T top)
 Creates a matrix for projecting two-dimensional coordinates onto the screen. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > ortho (T left, T right, T bottom, T top, T zNear, T zFar)
 Creates a matrix for an orthographic parallel viewing volume, using the default handedness and default near and far clip planes definition. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > orthoLH (T left, T right, T bottom, T top, T zNear, T zFar)
 Creates a matrix for an orthographic parallel viewing volume, using left-handed coordinates. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > orthoLH_NO (T left, T right, T bottom, T top, T zNear, T zFar)
 Creates a matrix for an orthographic parallel viewing volume, using left-handed coordinates. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > orthoLH_ZO (T left, T right, T bottom, T top, T zNear, T zFar)
 Creates a matrix for an orthographic parallel viewing volume, using left-handed coordinates. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > orthoNO (T left, T right, T bottom, T top, T zNear, T zFar)
 Creates a matrix for an orthographic parallel viewing volume, using left-handed coordinates if GLM_FORCE_LEFT_HANDED is defined or right-handed coordinates otherwise. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > orthoRH (T left, T right, T bottom, T top, T zNear, T zFar)
 Creates a matrix for an orthographic parallel viewing volume, using right-handed coordinates. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > orthoRH_NO (T left, T right, T bottom, T top, T zNear, T zFar)
 Creates a matrix for an orthographic parallel viewing volume, using right-handed coordinates. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > orthoRH_ZO (T left, T right, T bottom, T top, T zNear, T zFar)
 Creates a matrix for an orthographic parallel viewing volume, using right-handed coordinates. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > orthoZO (T left, T right, T bottom, T top, T zNear, T zFar)
 Creates a matrix for an orthographic parallel viewing volume, using left-handed coordinates if GLM_FORCE_LEFT_HANDED is defined or right-handed coordinates otherwise. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspective (T fovy, T aspect, T near, T far)
 Creates a matrix for a symmetric perspective-view frustum based on the default handedness and default near and far clip planes definition. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveFov (T fov, T width, T height, T near, T far)
 Builds a perspective projection matrix based on a field of view and the default handedness and default near and far clip planes definition. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveFovLH (T fov, T width, T height, T near, T far)
 Builds a left handed perspective projection matrix based on a field of view. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveFovLH_NO (T fov, T width, T height, T near, T far)
 Builds a perspective projection matrix based on a field of view using left-handed coordinates. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveFovLH_ZO (T fov, T width, T height, T near, T far)
 Builds a perspective projection matrix based on a field of view using left-handed coordinates. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveFovNO (T fov, T width, T height, T near, T far)
 Builds a perspective projection matrix based on a field of view, using left-handed coordinates if GLM_FORCE_LEFT_HANDED is defined or right-handed coordinates otherwise. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveFovRH (T fov, T width, T height, T near, T far)
 Builds a right handed perspective projection matrix based on a field of view. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveFovRH_NO (T fov, T width, T height, T near, T far)
 Builds a perspective projection matrix based on a field of view using right-handed coordinates. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveFovRH_ZO (T fov, T width, T height, T near, T far)
 Builds a perspective projection matrix based on a field of view using right-handed coordinates. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveFovZO (T fov, T width, T height, T near, T far)
 Builds a perspective projection matrix based on a field of view, using left-handed coordinates if GLM_FORCE_LEFT_HANDED is defined or right-handed coordinates otherwise. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveLH (T fovy, T aspect, T near, T far)
 Creates a matrix for a left handed, symmetric perspective-view frustum. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveLH_NO (T fovy, T aspect, T near, T far)
 Creates a matrix for a left handed, symmetric perspective-view frustum. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveLH_ZO (T fovy, T aspect, T near, T far)
 Creates a matrix for a left handed, symmetric perspective-view frustum. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveNO (T fovy, T aspect, T near, T far)
 Creates a matrix for a symmetric perspective-view frustum using left-handed coordinates if GLM_FORCE_LEFT_HANDED is defined or right-handed coordinates otherwise. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveRH (T fovy, T aspect, T near, T far)
 Creates a matrix for a right handed, symmetric perspective-view frustum. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveRH_NO (T fovy, T aspect, T near, T far)
 Creates a matrix for a right handed, symmetric perspective-view frustum. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveRH_ZO (T fovy, T aspect, T near, T far)
 Creates a matrix for a right handed, symmetric perspective-view frustum. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > perspectiveZO (T fovy, T aspect, T near, T far)
 Creates a matrix for a symmetric perspective-view frustum using left-handed coordinates if GLM_FORCE_LEFT_HANDED is defined or right-handed coordinates otherwise. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > tweakedInfinitePerspective (T fovy, T aspect, T near)
 Creates a matrix for a symmetric perspective-view frustum with the far plane at infinity, for graphics hardware that doesn't support depth clamping. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > tweakedInfinitePerspective (T fovy, T aspect, T near, T ep)
 Creates a matrix for a symmetric perspective-view frustum with the far plane at infinity, for graphics hardware that doesn't support depth clamping. More...
 

Detailed Description

Defines functions that generate clip space transformation matrices.

The matrices generated by this extension use standard OpenGL fixed-function conventions. For example, the lookAt function generates a transform from world space into the specific eye space that the projection matrix functions (perspective, ortho, etc.) are designed to expect. The OpenGL compatibility specification defines the particular layout of this eye space.

Include <glm/ext/matrix_clip_space.hpp> to use the features of this extension.
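A minimal usage sketch of the two configuration macros referenced throughout this page; as with all GLM force macros, they take effect only if defined before the first GLM include:

```cpp
// Both macros change the matrices produced by the default-handedness,
// default-depth functions (perspective, ortho, frustum, ...).
// Define them before any GLM header is included.
#define GLM_FORCE_DEPTH_ZERO_TO_ONE   // clip-space z in [0, 1] (Direct3D-style)
#define GLM_FORCE_LEFT_HANDED         // left-handed view space
#include <glm/ext/matrix_clip_space.hpp>
```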

See also
GLM_EXT_matrix_transform
GLM_EXT_matrix_projection

Function Documentation

GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::frustum (T left, T right, T bottom, T top, T near, T far)

Creates a frustum matrix using the default handedness and default near and far clip planes definition.

To change default handedness use GLM_FORCE_LEFT_HANDED. To change default near and far clip planes definition use GLM_FORCE_DEPTH_ZERO_TO_ONE.

Template Parameters
T: A floating-point scalar type
See also
glFrustum man page
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::frustumLH (T left, T right, T bottom, T top, T near, T far)

Creates a left handed frustum matrix.

If GLM_FORCE_DEPTH_ZERO_TO_ONE is defined, the near and far clip planes correspond to z normalized device coordinates of 0 and +1 respectively. (Direct3D clip volume definition) Otherwise, the near and far clip planes correspond to z normalized device coordinates of -1 and +1 respectively. (OpenGL clip volume definition)

Template Parameters
T: A floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::frustumLH_NO (T left, T right, T bottom, T top, T near, T far)

Creates a left handed frustum matrix.

The near and far clip planes correspond to z normalized device coordinates of -1 and +1 respectively. (OpenGL clip volume definition)

Template Parameters
T: A floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::frustumLH_ZO (T left, T right, T bottom, T top, T near, T far)

Creates a left handed frustum matrix.

The near and far clip planes correspond to z normalized device coordinates of 0 and +1 respectively. (Direct3D clip volume definition)

Template Parameters
T: A floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::frustumNO (T left, T right, T bottom, T top, T near, T far)

Creates a frustum matrix using left-handed coordinates if GLM_FORCE_LEFT_HANDED is defined or right-handed coordinates otherwise.

The near and far clip planes correspond to z normalized device coordinates of -1 and +1 respectively. (OpenGL clip volume definition)

Template Parameters
T: A floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::frustumRH (T left, T right, T bottom, T top, T near, T far)

Creates a right handed frustum matrix.

If GLM_FORCE_DEPTH_ZERO_TO_ONE is defined, the near and far clip planes correspond to z normalized device coordinates of 0 and +1 respectively. (Direct3D clip volume definition) Otherwise, the near and far clip planes correspond to z normalized device coordinates of -1 and +1 respectively. (OpenGL clip volume definition)

Template Parameters
T: A floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::frustumRH_NO (T left, T right, T bottom, T top, T near, T far)

Creates a right handed frustum matrix.

The near and far clip planes correspond to z normalized device coordinates of -1 and +1 respectively. (OpenGL clip volume definition)

Template Parameters
T: A floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::frustumRH_ZO (T left, T right, T bottom, T top, T near, T far)

Creates a right handed frustum matrix.

The near and far clip planes correspond to z normalized device coordinates of 0 and +1 respectively. (Direct3D clip volume definition)

Template Parameters
T: A floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::frustumZO (T left, T right, T bottom, T top, T near, T far)

Creates a frustum matrix using left-handed coordinates if GLM_FORCE_LEFT_HANDED is defined or right-handed coordinates otherwise.

The near and far clip planes correspond to z normalized device coordinates of 0 and +1 respectively. (Direct3D clip volume definition)

Template Parameters
T: A floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::infinitePerspective (T fovy, T aspect, T near)

Creates a matrix for a symmetric perspective-view frustum with the far plane at infinity, using the default handedness.

Parameters
fovy: Specifies the field of view angle in the y direction. Expressed in radians.
aspect: Specifies the aspect ratio that determines the field of view in the x direction. The aspect ratio is the ratio of x (width) to y (height).
near: Specifies the distance from the viewer to the near clipping plane (always positive).
Template Parameters
T: A floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::infinitePerspectiveLH (T fovy, T aspect, T near)

Creates a matrix for a left handed, symmetric perspective-view frustum with the far plane at infinity.

Parameters
fovy: Specifies the field of view angle in the y direction. Expressed in radians.
aspect: Specifies the aspect ratio that determines the field of view in the x direction. The aspect ratio is the ratio of x (width) to y (height).
near: Specifies the distance from the viewer to the near clipping plane (always positive).
Template Parameters
T: A floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::infinitePerspectiveRH (T fovy, T aspect, T near)

Creates a matrix for a right handed, symmetric perspective-view frustum with the far plane at infinity.

Parameters
fovy: Specifies the field of view angle in the y direction. Expressed in radians.
aspect: Specifies the aspect ratio that determines the field of view in the x direction. The aspect ratio is the ratio of x (width) to y (height).
near: Specifies the distance from the viewer to the near clipping plane (always positive).
Template Parameters
T: A floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::ortho (T left, T right, T bottom, T top)

Creates a matrix for projecting two-dimensional coordinates onto the screen.

Template Parameters
T: A floating-point scalar type
See also
- glm::ortho(T const& left, T const& right, T const& bottom, T const& top, T const& zNear, T const& zFar)
- gluOrtho2D man page
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::ortho (T left, T right, T bottom, T top, T zNear, T zFar)

Creates a matrix for an orthographic parallel viewing volume, using the default handedness and default near and far clip planes definition.

To change default handedness use GLM_FORCE_LEFT_HANDED. To change default near and far clip planes definition use GLM_FORCE_DEPTH_ZERO_TO_ONE.

Template Parameters
T: A floating-point scalar type
See also
- glm::ortho(T const& left, T const& right, T const& bottom, T const& top)
- glOrtho man page
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::orthoLH (T left, T right, T bottom, T top, T zNear, T zFar)

Creates a matrix for an orthographic parallel viewing volume, using left-handed coordinates.

If GLM_FORCE_DEPTH_ZERO_TO_ONE is defined, the near and far clip planes correspond to z normalized device coordinates of 0 and +1 respectively. (Direct3D clip volume definition) Otherwise, the near and far clip planes correspond to z normalized device coordinates of -1 and +1 respectively. (OpenGL clip volume definition)

Template Parameters
T: A floating-point scalar type
See also
- glm::ortho(T const& left, T const& right, T const& bottom, T const& top)
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::orthoLH_NO (T left, T right, T bottom, T top, T zNear, T zFar)

Creates a matrix for an orthographic parallel viewing volume, using left-handed coordinates.

The near and far clip planes correspond to z normalized device coordinates of -1 and +1 respectively. (OpenGL clip volume definition)

Template Parameters
T: A floating-point scalar type
See also
- glm::ortho(T const& left, T const& right, T const& bottom, T const& top)
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::orthoLH_ZO (T left, T right, T bottom, T top, T zNear, T zFar)

Creates a matrix for an orthographic parallel viewing volume, using left-handed coordinates.

The near and far clip planes correspond to z normalized device coordinates of 0 and +1 respectively. (Direct3D clip volume definition)

Template Parameters
T: A floating-point scalar type
See also
- glm::ortho(T const& left, T const& right, T const& bottom, T const& top)
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::orthoNO (T left, T right, T bottom, T top, T zNear, T zFar)

Creates a matrix for an orthographic parallel viewing volume, using left-handed coordinates if GLM_FORCE_LEFT_HANDED is defined or right-handed coordinates otherwise.

The near and far clip planes correspond to z normalized device coordinates of -1 and +1 respectively. (OpenGL clip volume definition)

Template Parameters
T: A floating-point scalar type
See also
- glm::ortho(T const& left, T const& right, T const& bottom, T const& top)
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::orthoRH (T left, T right, T bottom, T top, T zNear, T zFar)

Creates a matrix for an orthographic parallel viewing volume, using right-handed coordinates.

If GLM_FORCE_DEPTH_ZERO_TO_ONE is defined, the near and far clip planes correspond to z normalized device coordinates of 0 and +1 respectively. (Direct3D clip volume definition) Otherwise, the near and far clip planes correspond to z normalized device coordinates of -1 and +1 respectively. (OpenGL clip volume definition)

Template Parameters
T: A floating-point scalar type
See also
- glm::ortho(T const& left, T const& right, T const& bottom, T const& top)
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::orthoRH_NO (T left, T right, T bottom, T top, T zNear, T zFar)

Creates a matrix for an orthographic parallel viewing volume, using right-handed coordinates.

The near and far clip planes correspond to z normalized device coordinates of -1 and +1 respectively. (OpenGL clip volume definition)

Template Parameters
T: A floating-point scalar type
See also
- glm::ortho(T const& left, T const& right, T const& bottom, T const& top)
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::orthoRH_ZO (T left, T right, T bottom, T top, T zNear, T zFar)

Creates a matrix for an orthographic parallel viewing volume, using right-handed coordinates.

The near and far clip planes correspond to z normalized device coordinates of 0 and +1 respectively. (Direct3D clip volume definition)

Template Parameters
T: A floating-point scalar type
See also
- glm::ortho(T const& left, T const& right, T const& bottom, T const& top)
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::orthoZO (T left, T right, T bottom, T top, T zNear, T zFar)

Creates a matrix for an orthographic parallel viewing volume, using left-handed coordinates if GLM_FORCE_LEFT_HANDED is defined or right-handed coordinates otherwise.

The near and far clip planes correspond to z normalized device coordinates of 0 and +1 respectively. (Direct3D clip volume definition)

Template Parameters
T: A floating-point scalar type
See also
- glm::ortho(T const& left, T const& right, T const& bottom, T const& top)
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::perspective (T fovy, T aspect, T near, T far)

Creates a matrix for a symmetric perspective-view frustum based on the default handedness and default near and far clip planes definition.

To change default handedness use GLM_FORCE_LEFT_HANDED. To change default near and far clip planes definition use GLM_FORCE_DEPTH_ZERO_TO_ONE.

Parameters
fovy: Specifies the field of view angle in the y direction. Expressed in radians.
aspect: Specifies the aspect ratio that determines the field of view in the x direction. The aspect ratio is the ratio of x (width) to y (height).
near: Specifies the distance from the viewer to the near clipping plane (always positive).
far: Specifies the distance from the viewer to the far clipping plane (always positive).
Template Parameters
T: A floating-point scalar type
See also
gluPerspective man page
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::perspectiveFov (T fov, T width, T height, T near, T far)

Builds a perspective projection matrix based on a field of view, using the default handedness and default near and far clip planes definition.

To change default handedness use GLM_FORCE_LEFT_HANDED. To change default near and far clip planes definition use GLM_FORCE_DEPTH_ZERO_TO_ONE.

Parameters
fov: The field of view angle. Expressed in radians.
width: Width of the viewport.
height: Height of the viewport.
near: Specifies the distance from the viewer to the near clipping plane (always positive).
far: Specifies the distance from the viewer to the far clipping plane (always positive).
Template Parameters
T: A floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::perspectiveFovLH (T fov, T width, T height, T near, T far)

Builds a left handed perspective projection matrix based on a field of view.

If GLM_FORCE_DEPTH_ZERO_TO_ONE is defined, the near and far clip planes correspond to z normalized device coordinates of 0 and +1 respectively. (Direct3D clip volume definition) Otherwise, the near and far clip planes correspond to z normalized device coordinates of -1 and +1 respectively. (OpenGL clip volume definition)

Parameters
fov: The field of view angle. Expressed in radians.
width: Width of the viewport.
height: Height of the viewport.
near: Specifies the distance from the viewer to the near clipping plane (always positive).
far: Specifies the distance from the viewer to the far clipping plane (always positive).
Template Parameters
T: A floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::perspectiveFovLH_NO (T fov, T width, T height, T near, T far)

Builds a perspective projection matrix based on a field of view, using left-handed coordinates.

The near and far clip planes correspond to z normalized device coordinates of -1 and +1 respectively. (OpenGL clip volume definition)

Parameters
fov: The field of view angle. Expressed in radians.
width: Width of the viewport.
height: Height of the viewport.
near: Specifies the distance from the viewer to the near clipping plane (always positive).
far: Specifies the distance from the viewer to the far clipping plane (always positive).
Template Parameters
T: A floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::perspectiveFovLH_ZO (T fov, T width, T height, T near, T far)

Builds a perspective projection matrix based on a field of view, using left-handed coordinates.

The near and far clip planes correspond to z normalized device coordinates of 0 and +1 respectively. (Direct3D clip volume definition)

Parameters
fov: The field of view angle. Expressed in radians.
width: Width of the viewport.
height: Height of the viewport.
near: Specifies the distance from the viewer to the near clipping plane (always positive).
far: Specifies the distance from the viewer to the far clipping plane (always positive).
Template Parameters
T: A floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::perspectiveFovNO (T fov, T width, T height, T near, T far)

Builds a perspective projection matrix based on a field of view, using left-handed coordinates if GLM_FORCE_LEFT_HANDED is defined or right-handed coordinates otherwise.

The near and far clip planes correspond to z normalized device coordinates of -1 and +1 respectively. (OpenGL clip volume definition)

Parameters
fov: The field of view angle. Expressed in radians.
width: Width of the viewport.
height: Height of the viewport.
near: Specifies the distance from the viewer to the near clipping plane (always positive).
far: Specifies the distance from the viewer to the far clipping plane (always positive).
Template Parameters
T: A floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::perspectiveFovRH (T fov, T width, T height, T near, T far)

Builds a right handed perspective projection matrix based on a field of view.

If GLM_FORCE_DEPTH_ZERO_TO_ONE is defined, the near and far clip planes correspond to z normalized device coordinates of 0 and +1 respectively. (Direct3D clip volume definition) Otherwise, the near and far clip planes correspond to z normalized device coordinates of -1 and +1 respectively. (OpenGL clip volume definition)

Parameters
fov: The field of view angle. Expressed in radians.
width: Width of the viewport.
height: Height of the viewport.
near: Specifies the distance from the viewer to the near clipping plane (always positive).
far: Specifies the distance from the viewer to the far clipping plane (always positive).
Template Parameters
T: A floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::perspectiveFovRH_NO (T fov, T width, T height, T near, T far)

Builds a perspective projection matrix based on a field of view, using right-handed coordinates.

The near and far clip planes correspond to z normalized device coordinates of -1 and +1 respectively. (OpenGL clip volume definition)

Parameters
fov: The field of view angle. Expressed in radians.
width: Width of the viewport.
height: Height of the viewport.
near: Specifies the distance from the viewer to the near clipping plane (always positive).
far: Specifies the distance from the viewer to the far clipping plane (always positive).
Template Parameters
T: A floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::perspectiveFovRH_ZO (T fov, T width, T height, T near, T far)

Builds a perspective projection matrix based on a field of view, using right-handed coordinates.

The near and far clip planes correspond to z normalized device coordinates of 0 and +1 respectively. (Direct3D clip volume definition)

Parameters
fov: The field of view angle. Expressed in radians.
width: Width of the viewport.
height: Height of the viewport.
near: Specifies the distance from the viewer to the near clipping plane (always positive).
far: Specifies the distance from the viewer to the far clipping plane (always positive).
Template Parameters
T: A floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::perspectiveFovZO (T fov, T width, T height, T near, T far)

Builds a perspective projection matrix based on a field of view, using left-handed coordinates if GLM_FORCE_LEFT_HANDED is defined or right-handed coordinates otherwise.

The near and far clip planes correspond to z normalized device coordinates of 0 and +1 respectively. (Direct3D clip volume definition)

Parameters
fov: The field of view angle. Expressed in radians.
width: Width of the viewport.
height: Height of the viewport.
near: Specifies the distance from the viewer to the near clipping plane (always positive).
far: Specifies the distance from the viewer to the far clipping plane (always positive).
Template Parameters
T: A floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::perspectiveLH ( fovy,
aspect,
near,
far 
)

Creates a matrix for a left handed, symetric perspective-view frustum.

If GLM_FORCE_DEPTH_ZERO_TO_ONE is defined, the near and far clip planes correspond to z normalized device coordinates of 0 and +1 respectively. (Direct3D clip volume definition) Otherwise, the near and far clip planes correspond to z normalized device coordinates of -1 and +1 respectively. (OpenGL clip volume definition)

Parameters
fovySpecifies the field of view angle, in degrees, in the y direction. Expressed in radians.
aspectSpecifies the aspect ratio that determines the field of view in the x direction. The aspect ratio is the ratio of x (width) to y (height).
nearSpecifies the distance from the viewer to the near clipping plane (always positive).
farSpecifies the distance from the viewer to the far clipping plane (always positive).
Template Parameters
TA floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::perspectiveLH_NO ( fovy,
aspect,
near,
far 
)

Creates a matrix for a left handed, symetric perspective-view frustum.

The near and far clip planes correspond to z normalized device coordinates of -1 and +1 respectively. (OpenGL clip volume definition)

Parameters
fovySpecifies the field of view angle, in degrees, in the y direction. Expressed in radians.
aspectSpecifies the aspect ratio that determines the field of view in the x direction. The aspect ratio is the ratio of x (width) to y (height).
nearSpecifies the distance from the viewer to the near clipping plane (always positive).
farSpecifies the distance from the viewer to the far clipping plane (always positive).
Template Parameters
TA floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::perspectiveLH_ZO ( fovy,
aspect,
near,
far 
)

Creates a matrix for a left handed, symetric perspective-view frustum.

The near and far clip planes correspond to z normalized device coordinates of 0 and +1 respectively. (Direct3D clip volume definition)

Parameters
fovySpecifies the field of view angle, in degrees, in the y direction. Expressed in radians.
aspectSpecifies the aspect ratio that determines the field of view in the x direction. The aspect ratio is the ratio of x (width) to y (height).
nearSpecifies the distance from the viewer to the near clipping plane (always positive).
farSpecifies the distance from the viewer to the far clipping plane (always positive).
Template Parameters
TA floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::perspectiveNO ( fovy,
aspect,
near,
far 
)

Creates a matrix for a symetric perspective-view frustum using left-handed coordinates if GLM_FORCE_LEFT_HANDED if defined or right-handed coordinates otherwise.

The near and far clip planes correspond to z normalized device coordinates of -1 and +1 respectively. (OpenGL clip volume definition)

Parameters
fovySpecifies the field of view angle, in degrees, in the y direction. Expressed in radians.
aspectSpecifies the aspect ratio that determines the field of view in the x direction. The aspect ratio is the ratio of x (width) to y (height).
nearSpecifies the distance from the viewer to the near clipping plane (always positive).
farSpecifies the distance from the viewer to the far clipping plane (always positive).
Template Parameters
TA floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::perspectiveRH ( fovy,
aspect,
near,
far 
)

Creates a matrix for a right handed, symetric perspective-view frustum.

If GLM_FORCE_DEPTH_ZERO_TO_ONE is defined, the near and far clip planes correspond to z normalized device coordinates of 0 and +1 respectively. (Direct3D clip volume definition) Otherwise, the near and far clip planes correspond to z normalized device coordinates of -1 and +1 respectively. (OpenGL clip volume definition)

Parameters
fovySpecifies the field of view angle, in degrees, in the y direction. Expressed in radians.
aspectSpecifies the aspect ratio that determines the field of view in the x direction. The aspect ratio is the ratio of x (width) to y (height).
nearSpecifies the distance from the viewer to the near clipping plane (always positive).
farSpecifies the distance from the viewer to the far clipping plane (always positive).
Template Parameters
TA floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::perspectiveRH_NO ( T fovy, T aspect, T near, T far )

Creates a matrix for a right-handed, symmetric perspective-view frustum.

The near and far clip planes correspond to z normalized device coordinates of -1 and +1 respectively. (OpenGL clip volume definition)

Parameters
fovy  Specifies the field of view angle in the y direction, expressed in radians.
aspect  Specifies the aspect ratio that determines the field of view in the x direction. The aspect ratio is the ratio of x (width) to y (height).
near  Specifies the distance from the viewer to the near clipping plane (always positive).
far  Specifies the distance from the viewer to the far clipping plane (always positive).
Template Parameters
T  A floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::perspectiveRH_ZO ( T fovy, T aspect, T near, T far )

Creates a matrix for a right-handed, symmetric perspective-view frustum.

The near and far clip planes correspond to z normalized device coordinates of 0 and +1 respectively. (Direct3D clip volume definition)

Parameters
fovy  Specifies the field of view angle in the y direction, expressed in radians.
aspect  Specifies the aspect ratio that determines the field of view in the x direction. The aspect ratio is the ratio of x (width) to y (height).
near  Specifies the distance from the viewer to the near clipping plane (always positive).
far  Specifies the distance from the viewer to the far clipping plane (always positive).
Template Parameters
T  A floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::perspectiveZO ( T fovy, T aspect, T near, T far )

Creates a matrix for a symmetric perspective-view frustum using left-handed coordinates if GLM_FORCE_LEFT_HANDED is defined, or right-handed coordinates otherwise.

The near and far clip planes correspond to z normalized device coordinates of 0 and +1 respectively. (Direct3D clip volume definition)

Parameters
fovy  Specifies the field of view angle in the y direction, expressed in radians.
aspect  Specifies the aspect ratio that determines the field of view in the x direction. The aspect ratio is the ratio of x (width) to y (height).
near  Specifies the distance from the viewer to the near clipping plane (always positive).
far  Specifies the distance from the viewer to the far clipping plane (always positive).
Template Parameters
T  A floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::tweakedInfinitePerspective ( T fovy, T aspect, T near )

Creates a matrix for a symmetric perspective-view frustum with the far plane at infinity, for graphics hardware that doesn't support depth clamping.

Parameters
fovy  Specifies the field of view angle in the y direction, expressed in radians.
aspect  Specifies the aspect ratio that determines the field of view in the x direction. The aspect ratio is the ratio of x (width) to y (height).
near  Specifies the distance from the viewer to the near clipping plane (always positive).
Template Parameters
T  A floating-point scalar type
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::tweakedInfinitePerspective ( T fovy, T aspect, T near, T ep )

Creates a matrix for a symmetric perspective-view frustum with the far plane at infinity, for graphics hardware that doesn't support depth clamping.

Parameters
fovy  Specifies the field of view angle in the y direction, expressed in radians.
aspect  Specifies the aspect ratio that determines the field of view in the x direction. The aspect ratio is the ratio of x (width) to y (height).
near  Specifies the distance from the viewer to the near clipping plane (always positive).
ep  Epsilon
Template Parameters
T  A floating-point scalar type
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00244.html ================================================ 0.9.9 API documentation: GLM_EXT_matrix_common
0.9.9 API documentation
GLM_EXT_matrix_common

Defines functions for common matrix operations. More...

Detailed Description

Defines functions for common matrix operations.

Include <glm/ext/matrix_common.hpp> to use the features of this extension.

See also
GLM_EXT_matrix_common
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00245.html ================================================ 0.9.9 API documentation: GLM_EXT_matrix_projection
GLM_EXT_matrix_projection

Functions that generate common projection transformation matrices. More...

Functions

template<typename T , qualifier Q, typename U >
GLM_FUNC_DECL mat< 4, 4, T, Q > pickMatrix (vec< 2, T, Q > const &center, vec< 2, T, Q > const &delta, vec< 4, U, Q > const &viewport)
 Define a picking region. More...
 
template<typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > project (vec< 3, T, Q > const &obj, mat< 4, 4, T, Q > const &model, mat< 4, 4, T, Q > const &proj, vec< 4, U, Q > const &viewport)
 Map the specified object coordinates (obj.x, obj.y, obj.z) into window coordinates using default near and far clip planes definition. More...
 
template<typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > projectNO (vec< 3, T, Q > const &obj, mat< 4, 4, T, Q > const &model, mat< 4, 4, T, Q > const &proj, vec< 4, U, Q > const &viewport)
 Map the specified object coordinates (obj.x, obj.y, obj.z) into window coordinates. More...
 
template<typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > projectZO (vec< 3, T, Q > const &obj, mat< 4, 4, T, Q > const &model, mat< 4, 4, T, Q > const &proj, vec< 4, U, Q > const &viewport)
 Map the specified object coordinates (obj.x, obj.y, obj.z) into window coordinates. More...
 
template<typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > unProject (vec< 3, T, Q > const &win, mat< 4, 4, T, Q > const &model, mat< 4, 4, T, Q > const &proj, vec< 4, U, Q > const &viewport)
 Map the specified window coordinates (win.x, win.y, win.z) into object coordinates using default near and far clip planes definition. More...
 
template<typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > unProjectNO (vec< 3, T, Q > const &win, mat< 4, 4, T, Q > const &model, mat< 4, 4, T, Q > const &proj, vec< 4, U, Q > const &viewport)
 Map the specified window coordinates (win.x, win.y, win.z) into object coordinates. More...
 
template<typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > unProjectZO (vec< 3, T, Q > const &win, mat< 4, 4, T, Q > const &model, mat< 4, 4, T, Q > const &proj, vec< 4, U, Q > const &viewport)
 Map the specified window coordinates (win.x, win.y, win.z) into object coordinates. More...
 

Detailed Description

Functions that generate common projection transformation matrices.

The matrices generated by this extension use standard OpenGL fixed-function conventions. For example, the lookAt function generates a transform from world space into the specific eye space that the projective matrix functions (perspective, ortho, etc) are designed to expect. The OpenGL compatibility specification defines the particular layout of this eye space.

Include <glm/ext/matrix_projection.hpp> to use the features of this extension.

See also
GLM_EXT_matrix_transform
GLM_EXT_matrix_clip_space

Function Documentation

GLM_FUNC_DECL mat<4, 4, T, Q> glm::pickMatrix ( vec< 2, T, Q > const &  center,
vec< 2, T, Q > const &  delta,
vec< 4, U, Q > const &  viewport 
)

Define a picking region.

Parameters
center  Specifies the center of a picking region in window coordinates.
delta  Specifies the width and height, respectively, of the picking region in window coordinates.
viewport  Rendering viewport
Template Parameters
T  Native type used for the computation. Currently supported: half (not recommended), float or double.
U  Currently supported: floating-point types and integer types.
See also
gluPickMatrix man page
GLM_FUNC_DECL vec<3, T, Q> glm::project ( vec< 3, T, Q > const &  obj,
mat< 4, 4, T, Q > const &  model,
mat< 4, 4, T, Q > const &  proj,
vec< 4, U, Q > const &  viewport 
)

Map the specified object coordinates (obj.x, obj.y, obj.z) into window coordinates using default near and far clip planes definition.

To change default near and far clip planes definition use GLM_FORCE_DEPTH_ZERO_TO_ONE.

Parameters
obj  Specifies the object coordinates.
model  Specifies the current modelview matrix
proj  Specifies the current projection matrix
viewport  Specifies the current viewport
Returns
The computed window coordinates.
Template Parameters
T  Native type used for the computation. Currently supported: half (not recommended), float or double.
U  Currently supported: floating-point types and integer types.
See also
gluProject man page
GLM_FUNC_DECL vec<3, T, Q> glm::projectNO ( vec< 3, T, Q > const &  obj,
mat< 4, 4, T, Q > const &  model,
mat< 4, 4, T, Q > const &  proj,
vec< 4, U, Q > const &  viewport 
)

Map the specified object coordinates (obj.x, obj.y, obj.z) into window coordinates.

The near and far clip planes correspond to z normalized device coordinates of -1 and +1 respectively. (OpenGL clip volume definition)

Parameters
obj  Specifies the object coordinates.
model  Specifies the current modelview matrix
proj  Specifies the current projection matrix
viewport  Specifies the current viewport
Returns
The computed window coordinates.
Template Parameters
T  Native type used for the computation. Currently supported: half (not recommended), float or double.
U  Currently supported: floating-point types and integer types.
See also
gluProject man page
GLM_FUNC_DECL vec<3, T, Q> glm::projectZO ( vec< 3, T, Q > const &  obj,
mat< 4, 4, T, Q > const &  model,
mat< 4, 4, T, Q > const &  proj,
vec< 4, U, Q > const &  viewport 
)

Map the specified object coordinates (obj.x, obj.y, obj.z) into window coordinates.

The near and far clip planes correspond to z normalized device coordinates of 0 and +1 respectively. (Direct3D clip volume definition)

Parameters
obj  Specifies the object coordinates.
model  Specifies the current modelview matrix
proj  Specifies the current projection matrix
viewport  Specifies the current viewport
Returns
The computed window coordinates.
Template Parameters
T  Native type used for the computation. Currently supported: half (not recommended), float or double.
U  Currently supported: floating-point types and integer types.
See also
gluProject man page
GLM_FUNC_DECL vec<3, T, Q> glm::unProject ( vec< 3, T, Q > const &  win,
mat< 4, 4, T, Q > const &  model,
mat< 4, 4, T, Q > const &  proj,
vec< 4, U, Q > const &  viewport 
)

Map the specified window coordinates (win.x, win.y, win.z) into object coordinates using default near and far clip planes definition.

To change default near and far clip planes definition use GLM_FORCE_DEPTH_ZERO_TO_ONE.

Parameters
win  Specifies the window coordinates to be mapped.
model  Specifies the modelview matrix
proj  Specifies the projection matrix
viewport  Specifies the viewport
Returns
The computed object coordinates.
Template Parameters
T  Native type used for the computation. Currently supported: half (not recommended), float or double.
U  Currently supported: floating-point types and integer types.
See also
gluUnProject man page
GLM_FUNC_DECL vec<3, T, Q> glm::unProjectNO ( vec< 3, T, Q > const &  win,
mat< 4, 4, T, Q > const &  model,
mat< 4, 4, T, Q > const &  proj,
vec< 4, U, Q > const &  viewport 
)

Map the specified window coordinates (win.x, win.y, win.z) into object coordinates.

The near and far clip planes correspond to z normalized device coordinates of -1 and +1 respectively. (OpenGL clip volume definition)

Parameters
win  Specifies the window coordinates to be mapped.
model  Specifies the modelview matrix
proj  Specifies the projection matrix
viewport  Specifies the viewport
Returns
The computed object coordinates.
Template Parameters
T  Native type used for the computation. Currently supported: half (not recommended), float or double.
U  Currently supported: floating-point types and integer types.
See also
gluUnProject man page
GLM_FUNC_DECL vec<3, T, Q> glm::unProjectZO ( vec< 3, T, Q > const &  win,
mat< 4, 4, T, Q > const &  model,
mat< 4, 4, T, Q > const &  proj,
vec< 4, U, Q > const &  viewport 
)

Map the specified window coordinates (win.x, win.y, win.z) into object coordinates.

The near and far clip planes correspond to z normalized device coordinates of 0 and +1 respectively. (Direct3D clip volume definition)

Parameters
win  Specifies the window coordinates to be mapped.
model  Specifies the modelview matrix
proj  Specifies the projection matrix
viewport  Specifies the viewport
Returns
The computed object coordinates.
Template Parameters
T  Native type used for the computation. Currently supported: half (not recommended), float or double.
U  Currently supported: floating-point types and integer types.
See also
gluUnProject man page
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00246.html ================================================ 0.9.9 API documentation: GLM_EXT_matrix_relational
GLM_EXT_matrix_relational

Exposes comparison functions for matrix types that take a user-defined epsilon value. More...

Functions

template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< C, bool, Q > equal (mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y)
 Perform a component-wise equal-to comparison of two matrices. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< C, bool, Q > equal (mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, T epsilon)
 Returns the component-wise comparison of |x - y| < epsilon. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< C, bool, Q > equal (mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, vec< C, T, Q > const &epsilon)
 Returns the component-wise comparison of |x - y| < epsilon. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< C, bool, Q > equal (mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, int ULPs)
 Returns the component-wise comparison between two matrices in terms of ULPs. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< C, bool, Q > equal (mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, vec< C, int, Q > const &ULPs)
 Returns the component-wise comparison between two matrices in terms of ULPs. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< C, bool, Q > notEqual (mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y)
 Perform a component-wise not-equal-to comparison of two matrices. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< C, bool, Q > notEqual (mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, T epsilon)
 Returns the component-wise comparison of |x - y| >= epsilon. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< C, bool, Q > notEqual (mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, vec< C, T, Q > const &epsilon)
 Returns the component-wise comparison of |x - y| >= epsilon. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< C, bool, Q > notEqual (mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, int ULPs)
 Returns the component-wise comparison between two matrices in terms of ULPs. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< C, bool, Q > notEqual (mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, vec< C, int, Q > const &ULPs)
 Returns the component-wise comparison between two matrices in terms of ULPs. More...
 

Detailed Description

Exposes comparison functions for matrix types that take a user-defined epsilon value.

Include <glm/ext/matrix_relational.hpp> to use the features of this extension.

See also
GLM_EXT_vector_relational
GLM_EXT_scalar_relational
GLM_EXT_quaternion_relational

Function Documentation

GLM_FUNC_DECL GLM_CONSTEXPR vec<C, bool, Q> glm::equal ( mat< C, R, T, Q > const &  x,
mat< C, R, T, Q > const &  y 
)

Perform a component-wise equal-to comparison of two matrices.

Returns a boolean vector in which each component is true if the comparison is satisfied for the corresponding column of the matrices.

Template Parameters
C  Integer between 1 and 4 that specifies the number of columns of the matrix
R  Integer between 1 and 4 that specifies the number of rows of the matrix
T  Floating-point or integer scalar types
Q  Value from qualifier enum
GLM_FUNC_DECL GLM_CONSTEXPR vec<C, bool, Q> glm::equal ( mat< C, R, T, Q > const &  x,
mat< C, R, T, Q > const &  y,
T  epsilon 
)

Returns the component-wise comparison of |x - y| < epsilon.

True if this expression is satisfied.

Template Parameters
C  Integer between 1 and 4 that specifies the number of columns of the matrix
R  Integer between 1 and 4 that specifies the number of rows of the matrix
T  Floating-point or integer scalar types
Q  Value from qualifier enum
GLM_FUNC_DECL GLM_CONSTEXPR vec<C, bool, Q> glm::equal ( mat< C, R, T, Q > const &  x,
mat< C, R, T, Q > const &  y,
vec< C, T, Q > const &  epsilon 
)

Returns the component-wise comparison of |x - y| < epsilon.

True if this expression is satisfied.

Template Parameters
C  Integer between 1 and 4 that specifies the number of columns of the matrix
R  Integer between 1 and 4 that specifies the number of rows of the matrix
T  Floating-point or integer scalar types
Q  Value from qualifier enum
GLM_FUNC_DECL GLM_CONSTEXPR vec<C, bool, Q> glm::equal ( mat< C, R, T, Q > const &  x,
mat< C, R, T, Q > const &  y,
int  ULPs 
)

Returns the component-wise comparison between two matrices in terms of ULPs.

True if the components are equal within the given number of ULPs.

Template Parameters
C  Integer between 1 and 4 that specifies the number of columns of the matrix
R  Integer between 1 and 4 that specifies the number of rows of the matrix
T  Floating-point
Q  Value from qualifier enum
GLM_FUNC_DECL GLM_CONSTEXPR vec<C, bool, Q> glm::equal ( mat< C, R, T, Q > const &  x,
mat< C, R, T, Q > const &  y,
vec< C, int, Q > const &  ULPs 
)

Returns the component-wise comparison between two matrices in terms of ULPs.

True if the components are equal within the given number of ULPs.

Template Parameters
C  Integer between 1 and 4 that specifies the number of columns of the matrix
R  Integer between 1 and 4 that specifies the number of rows of the matrix
T  Floating-point
Q  Value from qualifier enum
GLM_FUNC_DECL GLM_CONSTEXPR vec<C, bool, Q> glm::notEqual ( mat< C, R, T, Q > const &  x,
mat< C, R, T, Q > const &  y 
)

Perform a component-wise not-equal-to comparison of two matrices.

Returns a boolean vector in which each component is true if the comparison is satisfied for the corresponding column of the matrices.

Template Parameters
C  Integer between 1 and 4 that specifies the number of columns of the matrix
R  Integer between 1 and 4 that specifies the number of rows of the matrix
T  Floating-point or integer scalar types
Q  Value from qualifier enum
GLM_FUNC_DECL GLM_CONSTEXPR vec<C, bool, Q> glm::notEqual ( mat< C, R, T, Q > const &  x,
mat< C, R, T, Q > const &  y,
T  epsilon 
)

Returns the component-wise comparison of |x - y| >= epsilon.

True if this expression is satisfied.

Template Parameters
C  Integer between 1 and 4 that specifies the number of columns of the matrix
R  Integer between 1 and 4 that specifies the number of rows of the matrix
T  Floating-point or integer scalar types
Q  Value from qualifier enum
GLM_FUNC_DECL GLM_CONSTEXPR vec<C, bool, Q> glm::notEqual ( mat< C, R, T, Q > const &  x,
mat< C, R, T, Q > const &  y,
vec< C, T, Q > const &  epsilon 
)

Returns the component-wise comparison of |x - y| >= epsilon.

True if this expression is satisfied.

Template Parameters
C  Integer between 1 and 4 that specifies the number of columns of the matrix
R  Integer between 1 and 4 that specifies the number of rows of the matrix
T  Floating-point or integer scalar types
Q  Value from qualifier enum
GLM_FUNC_DECL GLM_CONSTEXPR vec<C, bool, Q> glm::notEqual ( mat< C, R, T, Q > const &  x,
mat< C, R, T, Q > const &  y,
int  ULPs 
)

Returns the component-wise comparison between two matrices in terms of ULPs.

True if the components are not equal within the given number of ULPs.

Template Parameters
C  Integer between 1 and 4 that specifies the number of columns of the matrix
R  Integer between 1 and 4 that specifies the number of rows of the matrix
T  Floating-point
Q  Value from qualifier enum
GLM_FUNC_DECL GLM_CONSTEXPR vec<C, bool, Q> glm::notEqual ( mat< C, R, T, Q > const &  x,
mat< C, R, T, Q > const &  y,
vec< C, int, Q > const &  ULPs 
)

Returns the component-wise comparison between two matrices in terms of ULPs.

True if the components are not equal within the given number of ULPs.

Template Parameters
C  Integer between 1 and 4 that specifies the number of columns of the matrix
R  Integer between 1 and 4 that specifies the number of rows of the matrix
T  Floating-point
Q  Value from qualifier enum
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00247.html ================================================ 0.9.9 API documentation: GLM_EXT_matrix_transform
GLM_EXT_matrix_transform

Defines functions that generate common transformation matrices. More...

Functions

template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType identity ()
 Builds an identity matrix.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > lookAt (vec< 3, T, Q > const &eye, vec< 3, T, Q > const &center, vec< 3, T, Q > const &up)
 Build a look at view matrix based on the default handedness. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > lookAtLH (vec< 3, T, Q > const &eye, vec< 3, T, Q > const &center, vec< 3, T, Q > const &up)
 Build a left handed look at view matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > lookAtRH (vec< 3, T, Q > const &eye, vec< 3, T, Q > const &center, vec< 3, T, Q > const &up)
 Build a right handed look at view matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > rotate (mat< 4, 4, T, Q > const &m, T angle, vec< 3, T, Q > const &axis)
 Builds a rotation 4 * 4 matrix created from an axis vector and an angle. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > scale (mat< 4, 4, T, Q > const &m, vec< 3, T, Q > const &v)
 Builds a scale 4 * 4 matrix created from 3 scalars. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > translate (mat< 4, 4, T, Q > const &m, vec< 3, T, Q > const &v)
 Builds a translation 4 * 4 matrix created from a vector of 3 components. More...
 

Detailed Description

Defines functions that generate common transformation matrices.

The matrices generated by this extension use standard OpenGL fixed-function conventions. For example, the lookAt function generates a transform from world space into the specific eye space that the projective matrix functions (perspective, ortho, etc) are designed to expect. The OpenGL compatibility specification defines the particular layout of this eye space.

Include <glm/ext/matrix_transform.hpp> to use the features of this extension.

See also
GLM_EXT_matrix_projection
GLM_EXT_matrix_clip_space

Function Documentation

GLM_FUNC_DECL mat<4, 4, T, Q> glm::lookAt ( vec< 3, T, Q > const &  eye,
vec< 3, T, Q > const &  center,
vec< 3, T, Q > const &  up 
)

Build a look at view matrix based on the default handedness.

Parameters
eye  Position of the camera
center  Position where the camera is looking at
up  Normalized up vector defining how the camera is oriented. Typically (0, 0, 1)
Template Parameters
T  A floating-point scalar type
Q  A value from qualifier enum
See also
- frustum(T const& left, T const& right, T const& bottom, T const& top, T const& nearVal, T const& farVal)
gluLookAt man page
GLM_FUNC_DECL mat<4, 4, T, Q> glm::lookAtLH ( vec< 3, T, Q > const &  eye,
vec< 3, T, Q > const &  center,
vec< 3, T, Q > const &  up 
)

Build a left handed look at view matrix.

Parameters
eye  Position of the camera
center  Position where the camera is looking at
up  Normalized up vector defining how the camera is oriented. Typically (0, 0, 1)
Template Parameters
T  A floating-point scalar type
Q  A value from qualifier enum
See also
- frustum(T const& left, T const& right, T const& bottom, T const& top, T const& nearVal, T const& farVal)
GLM_FUNC_DECL mat<4, 4, T, Q> glm::lookAtRH ( vec< 3, T, Q > const &  eye,
vec< 3, T, Q > const &  center,
vec< 3, T, Q > const &  up 
)

Build a right handed look at view matrix.

Parameters
eye  Position of the camera
center  Position where the camera is looking at
up  Normalized up vector defining how the camera is oriented. Typically (0, 0, 1)
Template Parameters
T  A floating-point scalar type
Q  A value from qualifier enum
See also
- frustum(T const& left, T const& right, T const& bottom, T const& top, T const& nearVal, T const& farVal)
GLM_FUNC_DECL mat<4, 4, T, Q> glm::rotate ( mat< 4, 4, T, Q > const &  m,
T  angle,
vec< 3, T, Q > const &  axis 
)

Builds a rotation 4 * 4 matrix created from an axis vector and an angle.

Parameters
m  Input matrix multiplied by this rotation matrix.
angle  Rotation angle expressed in radians.
axis  Rotation axis; recommended to be normalized.
Template Parameters
T  A floating-point scalar type
Q  A value from qualifier enum
See also
- rotate(mat<4, 4, T, Q> const& m, T angle, T x, T y, T z)
- rotate(T angle, vec<3, T, Q> const& v)
glRotate man page
GLM_FUNC_DECL mat<4, 4, T, Q> glm::scale ( mat< 4, 4, T, Q > const &  m,
vec< 3, T, Q > const &  v 
)

Builds a scale 4 * 4 matrix created from 3 scalars.

Parameters
m  Input matrix multiplied by this scale matrix.
v  Ratio of scaling for each axis.
Template Parameters
T  A floating-point scalar type
Q  A value from qualifier enum
See also
- scale(mat<4, 4, T, Q> const& m, T x, T y, T z)
- scale(vec<3, T, Q> const& v)
glScale man page
GLM_FUNC_DECL mat<4, 4, T, Q> glm::translate ( mat< 4, 4, T, Q > const &  m,
vec< 3, T, Q > const &  v 
)

Builds a translation 4 * 4 matrix created from a vector of 3 components.

Parameters
m  Input matrix multiplied by this translation matrix.
v  Coordinates of a translation vector.
Template Parameters
T  A floating-point scalar type
Q  A value from qualifier enum
#include <glm/glm.hpp>
...
glm::mat4 m = glm::translate(glm::mat4(1.0f), glm::vec3(1.0f));
// m[0][0] == 1.0f, m[0][1] == 0.0f, m[0][2] == 0.0f, m[0][3] == 0.0f
// m[1][0] == 0.0f, m[1][1] == 1.0f, m[1][2] == 0.0f, m[1][3] == 0.0f
// m[2][0] == 0.0f, m[2][1] == 0.0f, m[2][2] == 1.0f, m[2][3] == 0.0f
// m[3][0] == 1.0f, m[3][1] == 1.0f, m[3][2] == 1.0f, m[3][3] == 1.0f
See also
- translate(mat<4, 4, T, Q> const& m, T x, T y, T z)
- translate(vec<3, T, Q> const& v)
glTranslate man page
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00248.html ================================================ 0.9.9 API documentation: GLM_EXT_quaternion_common
GLM_EXT_quaternion_common

Provides common functions for quaternion types. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > conjugate (qua< T, Q > const &q)
 Returns the q conjugate. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > inverse (qua< T, Q > const &q)
 Returns the q inverse. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, bool, Q > isinf (qua< T, Q > const &x)
 Returns true if x holds a positive infinity or negative infinity representation in the underlying implementation's set of floating point representations. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, bool, Q > isnan (qua< T, Q > const &x)
 Returns true if x holds a NaN (not a number) representation in the underlying implementation's set of floating point representations. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > lerp (qua< T, Q > const &x, qua< T, Q > const &y, T a)
 Linear interpolation of two quaternions. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > mix (qua< T, Q > const &x, qua< T, Q > const &y, T a)
 Spherical linear interpolation of two quaternions. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > slerp (qua< T, Q > const &x, qua< T, Q > const &y, T a)
 Spherical linear interpolation of two quaternions. More...
 

Detailed Description

Provides common functions for quaternion types.

Include <glm/ext/quaternion_common.hpp> to use the features of this extension.

See also
GLM_EXT_scalar_common
GLM_EXT_vector_common
GLM_EXT_quaternion_float
GLM_EXT_quaternion_double
GLM_EXT_quaternion_exponential
GLM_EXT_quaternion_geometric
GLM_EXT_quaternion_relational
GLM_EXT_quaternion_trigonometric
GLM_EXT_quaternion_transform

Function Documentation

GLM_FUNC_DECL qua<T, Q> glm::conjugate ( qua< T, Q > const &  q)

Returns the conjugate of q.

Template Parameters
T  A floating-point scalar type
Q  A value from qualifier enum
GLM_FUNC_DECL qua<T, Q> glm::inverse ( qua< T, Q > const &  q)

Returns the inverse of q.

Template Parameters
T  A floating-point scalar type
Q  A value from qualifier enum
GLM_FUNC_DECL vec<4, bool, Q> glm::isinf ( qua< T, Q > const &  x)

Returns true if x holds a positive infinity or negative infinity representation in the underlying implementation's set of floating point representations.

Returns false otherwise, including for implementations with no infinity representations.

Template Parameters
T  A floating-point scalar type
Q  A value from qualifier enum
GLM_FUNC_DECL vec<4, bool, Q> glm::isnan ( qua< T, Q > const &  x)

Returns true if x holds a NaN (not a number) representation in the underlying implementation's set of floating point representations.

Returns false otherwise, including for implementations with no NaN representations.

Note: when compiler fast-math optimizations are enabled, this function may fail.

Template Parameters
T  A floating-point scalar type
Q  A value from qualifier enum
GLM_FUNC_DECL qua<T, Q> glm::lerp ( qua< T, Q > const &  x,
qua< T, Q > const &  y,
T  a 
)

Linear interpolation of two quaternions.

The interpolation is oriented.

Parameters
x  A quaternion
y  A quaternion
a  Interpolation factor. The interpolation is defined in the range [0, 1].
Template Parameters
T  A floating-point scalar type
Q  A value from qualifier enum
GLM_FUNC_DECL qua<T, Q> glm::mix ( qua< T, Q > const &  x,
qua< T, Q > const &  y,
T  a 
)

Spherical linear interpolation of two quaternions.

The interpolation is oriented and the rotation is performed at constant speed. For short path spherical linear interpolation, use the slerp function.

Parameters
x  A quaternion
y  A quaternion
a  Interpolation factor. The interpolation is defined beyond the range [0, 1].
Template Parameters
T  A floating-point scalar type
Q  A value from qualifier enum
See also
- slerp(qua<T, Q> const& x, qua<T, Q> const& y, T const& a)
GLM_FUNC_DECL qua<T, Q> glm::slerp ( qua< T, Q > const &  x,
qua< T, Q > const &  y,
T  a 
)

Spherical linear interpolation of two quaternions.

The interpolation always takes the short path and the rotation is performed at constant speed.

Parameters
x  A quaternion
y  A quaternion
a  Interpolation factor. The interpolation is defined beyond the range [0, 1].
Template Parameters
T  A floating-point scalar type
Q  A value from qualifier enum
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00249.html ================================================ 0.9.9 API documentation: GLM_EXT_quaternion_double
GLM_EXT_quaternion_double

Exposes double-precision floating point quaternion type. More...

Typedefs

typedef qua< double, defaultp > dquat
 Quaternion of double-precision floating-point numbers.
 

Detailed Description

Exposes double-precision floating point quaternion type.

Include <glm/ext/quaternion_double.hpp> to use the features of this extension.

See also
GLM_EXT_quaternion_float
GLM_EXT_quaternion_double_precision
GLM_EXT_quaternion_common
GLM_EXT_quaternion_exponential
GLM_EXT_quaternion_geometric
GLM_EXT_quaternion_relational
GLM_EXT_quaternion_transform
GLM_EXT_quaternion_trigonometric
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00250.html ================================================ 0.9.9 API documentation: GLM_EXT_quaternion_double_precision
GLM_EXT_quaternion_double_precision

Exposes double-precision floating point quaternion types with various precisions in terms of ULPs. More...

Typedefs

typedef qua< double, highp > highp_dquat
 Quaternion of double-precision floating-point numbers using high precision arithmetic in terms of ULPs. More...
 
typedef qua< double, lowp > lowp_dquat
 Quaternion of double-precision floating-point numbers using low precision arithmetic in terms of ULPs. More...
 
typedef qua< double, mediump > mediump_dquat
 Quaternion of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs. More...
 

Detailed Description

Exposes double-precision floating point quaternion types with various precisions in terms of ULPs.

Include <glm/ext/quaternion_double_precision.hpp> to use the features of this extension.

Typedef Documentation

typedef qua< double, highp > highp_dquat

Quaternion of double-precision floating-point numbers using high precision arithmetic in terms of ULPs.

See also
GLM_EXT_quaternion_double_precision

Definition at line 38 of file quaternion_double_precision.hpp.

typedef qua< double, lowp > lowp_dquat

Quaternion of double-precision floating-point numbers using low precision arithmetic in terms of ULPs.

See also
GLM_EXT_quaternion_double_precision

Definition at line 28 of file quaternion_double_precision.hpp.

typedef qua< double, mediump > mediump_dquat

Quaternion of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.

See also
GLM_EXT_quaternion_double_precision

Definition at line 33 of file quaternion_double_precision.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00251.html ================================================ 0.9.9 API documentation: GLM_EXT_quaternion_exponential
GLM_EXT_quaternion_exponential

Provides exponential functions for quaternion types. More...

Provides exponential functions for quaternion types.

Include <glm/ext/quaternion_exponential.hpp> to use the features of this extension.

See also
core_exponential
GLM_EXT_quaternion_float
GLM_EXT_quaternion_double
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00252.html ================================================ 0.9.9 API documentation: GLM_EXT_quaternion_float
GLM_EXT_quaternion_float

Exposes single-precision floating point quaternion type. More...

Typedefs

typedef qua< float, defaultp > quat
 Quaternion of single-precision floating-point numbers.
 

Detailed Description

Exposes single-precision floating point quaternion type.

Include <glm/ext/quaternion_float.hpp> to use the features of this extension.

See also
GLM_EXT_quaternion_double
GLM_EXT_quaternion_float_precision
GLM_EXT_quaternion_common
GLM_EXT_quaternion_exponential
GLM_EXT_quaternion_geometric
GLM_EXT_quaternion_relational
GLM_EXT_quaternion_transform
GLM_EXT_quaternion_trigonometric
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00253.html ================================================ 0.9.9 API documentation: GLM_EXT_quaternion_float_precision
GLM_EXT_quaternion_float_precision

Exposes single-precision floating point quaternion types with various precisions in terms of ULPs. More...

Typedefs

typedef qua< float, highp > highp_quat
 Quaternion of single-precision floating-point numbers using high precision arithmetic in terms of ULPs.
 
typedef qua< float, lowp > lowp_quat
 Quaternion of single-precision floating-point numbers using low precision arithmetic in terms of ULPs.
 
typedef qua< float, mediump > mediump_quat
 Quaternion of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 

Detailed Description

Exposes single-precision floating point quaternion types with various precisions in terms of ULPs.

Include <glm/ext/quaternion_float_precision.hpp> to use the features of this extension.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00254.html ================================================ 0.9.9 API documentation: GLM_EXT_quaternion_geometric
GLM_EXT_quaternion_geometric

Provides geometric functions for quaternion types. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER qua< T, Q > cross (qua< T, Q > const &q1, qua< T, Q > const &q2)
 Compute a cross product. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T dot (qua< T, Q > const &x, qua< T, Q > const &y)
 Returns the dot product of x and y, i.e., x[0] * y[0] + x[1] * y[1] + ... More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T length (qua< T, Q > const &q)
 Returns the norm of a quaternion. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > normalize (qua< T, Q > const &q)
 Returns the normalized quaternion. More...
 

Detailed Description

Provides geometric functions for quaternion types.

Include <glm/ext/quaternion_geometric.hpp> to use the features of this extension.

See also
core_geometric
GLM_EXT_quaternion_float
GLM_EXT_quaternion_double

Function Documentation

GLM_FUNC_QUALIFIER qua<T, Q> glm::cross ( qua< T, Q > const &  q1,
qua< T, Q > const &  q2 
)

Compute a cross product.

Template Parameters
T: Floating-point scalar types
Q: Value from qualifier enum
See also
GLM_EXT_quaternion_geometric
GLM_FUNC_DECL T glm::dot ( qua< T, Q > const &  x,
qua< T, Q > const &  y 
)

Returns the dot product of x and y, i.e., x[0] * y[0] + x[1] * y[1] + ...

Template Parameters
T: Floating-point scalar types.
Q: Value from qualifier enum
See also
GLM_EXT_quaternion_geometric
GLM_FUNC_DECL T glm::length ( qua< T, Q > const &  q)

Returns the norm of a quaternion.

Template Parameters
T: Floating-point scalar types
Q: Value from qualifier enum
See also
GLM_EXT_quaternion_geometric
GLM_FUNC_DECL qua<T, Q> glm::normalize ( qua< T, Q > const &  q)

Returns the normalized quaternion.

Template Parameters
T: Floating-point scalar types
Q: Value from qualifier enum
See also
GLM_EXT_quaternion_geometric
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00255.html ================================================ 0.9.9 API documentation: GLM_EXT_quaternion_relational
GLM_EXT_quaternion_relational

Exposes comparison functions for quaternion types that take a user-defined epsilon value. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, bool, Q > equal (qua< T, Q > const &x, qua< T, Q > const &y)
 Returns the component-wise comparison result of x == y. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, bool, Q > equal (qua< T, Q > const &x, qua< T, Q > const &y, T epsilon)
 Returns the component-wise comparison of |x - y| < epsilon. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, bool, Q > notEqual (qua< T, Q > const &x, qua< T, Q > const &y)
 Returns the component-wise comparison result of x != y. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, bool, Q > notEqual (qua< T, Q > const &x, qua< T, Q > const &y, T epsilon)
 Returns the component-wise comparison of |x - y| >= epsilon. More...
 

Detailed Description

Exposes comparison functions for quaternion types that take a user-defined epsilon value.

Include <glm/ext/quaternion_relational.hpp> to use the features of this extension.

See also
core_vector_relational
GLM_EXT_vector_relational
GLM_EXT_matrix_relational
GLM_EXT_quaternion_float
GLM_EXT_quaternion_double

Function Documentation

GLM_FUNC_DECL vec<4, bool, Q> glm::equal ( qua< T, Q > const &  x,
qua< T, Q > const &  y 
)

Returns the component-wise comparison result of x == y.

Template Parameters
T: Floating-point scalar types
Q: Value from qualifier enum
GLM_FUNC_DECL vec<4, bool, Q> glm::equal ( qua< T, Q > const &  x,
qua< T, Q > const &  y,
T  epsilon 
)

Returns the component-wise comparison of |x - y| < epsilon.

Template Parameters
T: Floating-point scalar types
Q: Value from qualifier enum
GLM_FUNC_DECL vec<4, bool, Q> glm::notEqual ( qua< T, Q > const &  x,
qua< T, Q > const &  y 
)

Returns the component-wise comparison result of x != y.

Template Parameters
T: Floating-point scalar types
Q: Value from qualifier enum
GLM_FUNC_DECL vec<4, bool, Q> glm::notEqual ( qua< T, Q > const &  x,
qua< T, Q > const &  y,
T  epsilon 
)

Returns the component-wise comparison of |x - y| >= epsilon.

Template Parameters
T: Floating-point scalar types
Q: Value from qualifier enum
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00256.html ================================================ 0.9.9 API documentation: GLM_EXT_quaternion_transform
GLM_EXT_quaternion_transform

Provides transformation functions for quaternion types. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > exp (qua< T, Q > const &q)
 Returns the exponential of a quaternion. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > log (qua< T, Q > const &q)
 Returns the logarithm of a quaternion. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > pow (qua< T, Q > const &q, T y)
 Returns a quaternion raised to a power. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > rotate (qua< T, Q > const &q, T const &angle, vec< 3, T, Q > const &axis)
 Rotates a quaternion by an angle around a 3-component axis vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > sqrt (qua< T, Q > const &q)
 Returns the square root of a quaternion. More...
 

Detailed Description

Provides transformation functions for quaternion types.

Include <glm/ext/quaternion_transform.hpp> to use the features of this extension.

See also
GLM_EXT_quaternion_float
GLM_EXT_quaternion_double
GLM_EXT_quaternion_exponential
GLM_EXT_quaternion_geometric
GLM_EXT_quaternion_relational
GLM_EXT_quaternion_trigonometric

Function Documentation

GLM_FUNC_DECL qua<T, Q> glm::exp ( qua< T, Q > const &  q)

Returns the exponential of a quaternion.

Template Parameters
T: A floating-point scalar type
Q: A value from qualifier enum
GLM_FUNC_DECL qua<T, Q> glm::log ( qua< T, Q > const &  q)

Returns the logarithm of a quaternion.

Template Parameters
T: A floating-point scalar type
Q: A value from qualifier enum
GLM_FUNC_DECL qua<T, Q> glm::pow ( qua< T, Q > const &  q,
T  y 
)

Returns a quaternion raised to a power.

Template Parameters
T: A floating-point scalar type
Q: A value from qualifier enum
GLM_FUNC_DECL qua<T, Q> glm::rotate ( qua< T, Q > const &  q,
T const &  angle,
vec< 3, T, Q > const &  axis 
)

Rotates a quaternion by an angle around a 3-component axis vector.

Parameters
q: Source orientation
angle: Angle expressed in radians.
axis: Axis of the rotation
Template Parameters
T: Floating-point scalar types
Q: Value from qualifier enum
GLM_FUNC_DECL qua<T, Q> glm::sqrt ( qua< T, Q > const &  q)

Returns the square root of a quaternion.

Template Parameters
T: A floating-point scalar type
Q: A value from qualifier enum
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00257.html ================================================ 0.9.9 API documentation: GLM_EXT_quaternion_trigonometric
GLM_EXT_quaternion_trigonometric

Provides trigonometric functions for quaternion types. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL T angle (qua< T, Q > const &x)
 Returns the quaternion rotation angle. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > angleAxis (T const &angle, vec< 3, T, Q > const &axis)
 Build a quaternion from an angle and a normalized axis. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > axis (qua< T, Q > const &x)
 Returns the rotation axis of q. More...
 

Detailed Description

Provides trigonometric functions for quaternion types.

Include <glm/ext/quaternion_trigonometric.hpp> to use the features of this extension.

See also
GLM_EXT_quaternion_float
GLM_EXT_quaternion_double
GLM_EXT_quaternion_exponential
GLM_EXT_quaternion_geometric
GLM_EXT_quaternion_relational
GLM_EXT_quaternion_transform

Function Documentation

GLM_FUNC_DECL T glm::angle ( qua< T, Q > const &  x)

Returns the quaternion rotation angle.

Template Parameters
T: A floating-point scalar type
Q: A value from qualifier enum
GLM_FUNC_DECL qua<T, Q> glm::angleAxis ( T const &  angle,
vec< 3, T, Q > const &  axis 
)

Build a quaternion from an angle and a normalized axis.

Parameters
angle: Angle expressed in radians.
axis: Axis of the quaternion, must be normalized.
Template Parameters
T: A floating-point scalar type
Q: A value from qualifier enum
GLM_FUNC_DECL vec<3, T, Q> glm::axis ( qua< T, Q > const &  x)

Returns the rotation axis of q.

Template Parameters
T: A floating-point scalar type
Q: A value from qualifier enum
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00258.html ================================================ 0.9.9 API documentation: GLM_EXT_scalar_common
GLM_EXT_scalar_common

Exposes min and max functions for 3 to 4 scalar parameters. More...

Functions

template<typename T >
GLM_FUNC_DECL T fmax (T a, T b)
 Returns the maximum component-wise values of 2 inputs. More...
 
template<typename T >
GLM_FUNC_DECL T fmax (T a, T b, T C)
 Returns the maximum component-wise values of 3 inputs. More...
 
template<typename T >
GLM_FUNC_DECL T fmax (T a, T b, T C, T D)
 Returns the maximum component-wise values of 4 inputs. More...
 
template<typename T >
GLM_FUNC_DECL T fmin (T a, T b)
 Returns the minimum component-wise values of 2 inputs. More...
 
template<typename T >
GLM_FUNC_DECL T fmin (T a, T b, T c)
 Returns the minimum component-wise values of 3 inputs. More...
 
template<typename T >
GLM_FUNC_DECL T fmin (T a, T b, T c, T d)
 Returns the minimum component-wise values of 4 inputs. More...
 
template<typename T >
GLM_FUNC_DECL T max (T a, T b, T c)
 Returns the maximum component-wise values of 3 inputs. More...
 
template<typename T >
GLM_FUNC_DECL T max (T a, T b, T c, T d)
 Returns the maximum component-wise values of 4 inputs. More...
 
template<typename T >
GLM_FUNC_DECL T min (T a, T b, T c)
 Returns the minimum component-wise values of 3 inputs. More...
 
template<typename T >
GLM_FUNC_DECL T min (T a, T b, T c, T d)
 Returns the minimum component-wise values of 4 inputs. More...
 

Detailed Description

Exposes min and max functions for 3 to 4 scalar parameters.

Include <glm/ext/scalar_common.hpp> to use the features of this extension.

See also
Common functions
GLM_EXT_vector_common

Function Documentation

GLM_FUNC_DECL T glm::fmax ( T  a,
T  b 
)

Returns the maximum component-wise values of 2 inputs.

If one of the two arguments is NaN, the value of the other argument is returned.

Template Parameters
T: A floating-point scalar type.
See also
std::fmax documentation
GLM_FUNC_DECL T glm::fmax ( T  a,
T  b,
T  C 
)

Returns the maximum component-wise values of 3 inputs.

If an argument is NaN, it is ignored and the result is taken from the remaining arguments.

Template Parameters
T: A floating-point scalar type.
See also
std::fmax documentation
GLM_FUNC_DECL T glm::fmax ( T  a,
T  b,
T  C,
T  D 
)

Returns the maximum component-wise values of 4 inputs.

If an argument is NaN, it is ignored and the result is taken from the remaining arguments.

Template Parameters
T: A floating-point scalar type.
See also
std::fmax documentation
GLM_FUNC_DECL T glm::fmin ( T  a,
T  b 
)

Returns the minimum component-wise values of 2 inputs.

If one of the two arguments is NaN, the value of the other argument is returned.

Template Parameters
T: A floating-point scalar type.
See also
std::fmin documentation
GLM_FUNC_DECL T glm::fmin ( T  a,
T  b,
T  c 
)

Returns the minimum component-wise values of 3 inputs.

If an argument is NaN, it is ignored and the result is taken from the remaining arguments.

Template Parameters
T: A floating-point scalar type.
See also
std::fmin documentation
GLM_FUNC_DECL T glm::fmin ( T  a,
T  b,
T  c,
T  d 
)

Returns the minimum component-wise values of 4 inputs.

If an argument is NaN, it is ignored and the result is taken from the remaining arguments.

Template Parameters
T: A floating-point scalar type.
See also
std::fmin documentation
GLM_FUNC_DECL T glm::max ( T  a,
T  b,
T  c 
)

Returns the maximum component-wise values of 3 inputs.

Template Parameters
T: A floating-point scalar type.
GLM_FUNC_DECL T glm::max ( T  a,
T  b,
T  c,
T  d 
)

Returns the maximum component-wise values of 4 inputs.

Template Parameters
T: A floating-point scalar type.
GLM_FUNC_DECL T glm::min ( T  a,
T  b,
T  c 
)

Returns the minimum component-wise values of 3 inputs.

Template Parameters
T: A floating-point scalar type.
GLM_FUNC_DECL T glm::min ( T  a,
T  b,
T  c,
T  d 
)

Returns the minimum component-wise values of 4 inputs.

Template Parameters
T: A floating-point scalar type.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00259.html ================================================ 0.9.9 API documentation: GLM_EXT_scalar_constants
GLM_EXT_scalar_constants

Provides a list of constants and precomputed useful values. More...

Functions

template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType epsilon ()
 Return the epsilon constant for floating point types.
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType pi ()
 Return the pi constant for floating point types.
 

Detailed Description

Provides a list of constants and precomputed useful values.

Include <glm/ext/scalar_constants.hpp> to use the features of this extension.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00260.html ================================================ 0.9.9 API documentation: GLM_EXT_scalar_int_sized
GLM_EXT_scalar_int_sized

Exposes sized signed integer scalar types. More...

Typedefs

typedef detail::int16 int16
 16 bit signed integer type.
 
typedef detail::int32 int32
 32 bit signed integer type.
 
typedef detail::int64 int64
 64 bit signed integer type.
 
typedef detail::int8 int8
 8 bit signed integer type.
 

Detailed Description

Exposes sized signed integer scalar types.

Include <glm/ext/scalar_int_sized.hpp> to use the features of this extension.

See also
GLM_EXT_scalar_uint_sized
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00261.html ================================================ 0.9.9 API documentation: GLM_EXT_scalar_integer
GLM_EXT_scalar_integer

Include <glm/ext/scalar_integer.hpp> to use the features of this extension. More...

Functions

template<typename genIUType >
GLM_FUNC_DECL int findNSB (genIUType x, int significantBitCount)
 Returns the bit number of the Nth significant bit set to 1 in the binary representation of value. More...
 
template<typename genIUType >
GLM_FUNC_DECL bool isMultiple (genIUType v, genIUType Multiple)
 Return true if 'v' is a multiple of 'Multiple'. More...
 
template<typename genIUType >
GLM_FUNC_DECL bool isPowerOfTwo (genIUType v)
 Return true if the value is a power of two number. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType nextMultiple (genIUType v, genIUType Multiple)
 Return the next multiple of 'Multiple' greater than or equal to v. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType nextPowerOfTwo (genIUType v)
 Return the power of two number whose value is just higher than the input value (round up to a power of two). More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType prevMultiple (genIUType v, genIUType Multiple)
 Return the previous multiple of 'Multiple' less than or equal to v. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType prevPowerOfTwo (genIUType v)
 Return the power of two number whose value is just lower than the input value (round down to a power of two). More...
 

Detailed Description

Include <glm/ext/scalar_integer.hpp> to use the features of this extension.

Function Documentation

GLM_FUNC_DECL int glm::findNSB ( genIUType  x,
int  significantBitCount 
)

Returns the bit number of the Nth significant bit set to 1 in the binary representation of value.

If the bit count of value is less than the Nth significant bit, -1 is returned.

Template Parameters
genIUType: Signed or unsigned integer scalar types.
See also
GLM_EXT_scalar_integer
GLM_FUNC_DECL bool glm::isMultiple ( genIUType  v,
genIUType  Multiple 
)

Return true if 'v' is a multiple of 'Multiple'.

See also
GLM_EXT_scalar_integer
GLM_FUNC_DECL bool glm::isPowerOfTwo ( genIUType  v)

Return true if the value is a power of two number.

See also
GLM_EXT_scalar_integer
GLM_FUNC_DECL genIUType glm::nextMultiple ( genIUType  v,
genIUType  Multiple 
)

Return the next multiple of 'Multiple' greater than or equal to v.

Template Parameters
genIUType: Integer scalar or vector types.
Parameters
v: Source value to which the function is applied
Multiple: Must be zero or a positive value
See also
GLM_EXT_scalar_integer
GLM_FUNC_DECL genIUType glm::nextPowerOfTwo ( genIUType  v)

Return the power of two number whose value is just higher than the input value (round up to a power of two).

See also
GLM_EXT_scalar_integer
GLM_FUNC_DECL genIUType glm::prevMultiple ( genIUType  v,
genIUType  Multiple 
)

Return the previous multiple of 'Multiple' less than or equal to v.

Template Parameters
genIUType: Integer scalar or vector types.
Parameters
v: Source value to which the function is applied
Multiple: Must be zero or a positive value
See also
GLM_EXT_scalar_integer
GLM_FUNC_DECL genIUType glm::prevPowerOfTwo ( genIUType  v)

Return the power of two number whose value is just lower than the input value (round down to a power of two).

See also
GLM_EXT_scalar_integer
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00262.html ================================================ 0.9.9 API documentation: GLM_EXT_scalar_relational
GLM_EXT_scalar_relational

Exposes comparison functions for scalar types that take a user-defined epsilon value. More...

Exposes comparison functions for scalar types that take a user-defined epsilon value.

Include <glm/ext/scalar_relational.hpp> to use the features of this extension.

See also
core_vector_relational
GLM_EXT_vector_relational
GLM_EXT_matrix_relational
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00263.html ================================================ 0.9.9 API documentation: GLM_EXT_scalar_uint_sized
GLM_EXT_scalar_uint_sized

Exposes sized unsigned integer scalar types. More...

Typedefs

typedef detail::uint16 uint16
 16 bit unsigned integer type.
 
typedef detail::uint32 uint32
 32 bit unsigned integer type.
 
typedef detail::uint64 uint64
 64 bit unsigned integer type.
 
typedef detail::uint8 uint8
 8 bit unsigned integer type.
 

Detailed Description

Exposes sized unsigned integer scalar types.

Include <glm/ext/scalar_uint_sized.hpp> to use the features of this extension.

See also
GLM_EXT_scalar_int_sized
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00264.html ================================================ 0.9.9 API documentation: GLM_EXT_scalar_ulp
GLM_EXT_scalar_ulp

Allow the measurement of the accuracy of a function against a reference implementation. More...

Allow the measurement of the accuracy of a function against a reference implementation.

This extension works on floating-point data and provides results in ULPs.

Include <glm/ext/scalar_ulp.hpp> to use the features of this extension.

See also
GLM_EXT_vector_ulp
GLM_EXT_scalar_relational
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00265.html ================================================ 0.9.9 API documentation: GLM_EXT_vector_bool1
GLM_EXT_vector_bool1

Exposes bvec1 vector type. More...

Typedefs

typedef vec< 1, bool, defaultp > bvec1
 1 component vector of boolean values.
 

Detailed Description

Exposes bvec1 vector type.

Include <glm/ext/vector_bool1.hpp> to use the features of this extension.

See also
GLM_EXT_vector_bool1_precision extension.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00266.html ================================================ 0.9.9 API documentation: GLM_EXT_vector_bool1_precision
GLM_EXT_vector_bool1_precision

Exposes highp_bvec1, mediump_bvec1 and lowp_bvec1 types. More...

Typedefs

typedef vec< 1, bool, highp > highp_bvec1
 1 component vector of bool values.
 
typedef vec< 1, bool, lowp > lowp_bvec1
 1 component vector of bool values.
 
typedef vec< 1, bool, mediump > mediump_bvec1
 1 component vector of bool values.
 

Detailed Description

Exposes highp_bvec1, mediump_bvec1 and lowp_bvec1 types.

Include <glm/ext/vector_bool1_precision.hpp> to use the features of this extension.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00267.html ================================================ 0.9.9 API documentation: GLM_EXT_vector_common
GLM_EXT_vector_common

Exposes min and max functions for 3 to 4 vector parameters. More...

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fmax (vec< L, T, Q > const &a, T b)
 Returns y if x < y; otherwise, it returns x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fmax (vec< L, T, Q > const &a, vec< L, T, Q > const &b)
 Returns y if x < y; otherwise, it returns x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fmax (vec< L, T, Q > const &a, vec< L, T, Q > const &b, vec< L, T, Q > const &c)
 Returns y if x < y; otherwise, it returns x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fmax (vec< L, T, Q > const &a, vec< L, T, Q > const &b, vec< L, T, Q > const &c, vec< L, T, Q > const &d)
 Returns y if x < y; otherwise, it returns x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fmin (vec< L, T, Q > const &x, T y)
 Returns y if y < x; otherwise, it returns x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fmin (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Returns y if y < x; otherwise, it returns x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fmin (vec< L, T, Q > const &a, vec< L, T, Q > const &b, vec< L, T, Q > const &c)
 Returns y if y < x; otherwise, it returns x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fmin (vec< L, T, Q > const &a, vec< L, T, Q > const &b, vec< L, T, Q > const &c, vec< L, T, Q > const &d)
 Returns y if y < x; otherwise, it returns x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > max (vec< L, T, Q > const &x, vec< L, T, Q > const &y, vec< L, T, Q > const &z)
 Return the maximum component-wise values of 3 inputs. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > max (vec< L, T, Q > const &x, vec< L, T, Q > const &y, vec< L, T, Q > const &z, vec< L, T, Q > const &w)
 Return the maximum component-wise values of 4 inputs. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > min (vec< L, T, Q > const &a, vec< L, T, Q > const &b, vec< L, T, Q > const &c)
 Return the minimum component-wise values of 3 inputs. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > min (vec< L, T, Q > const &a, vec< L, T, Q > const &b, vec< L, T, Q > const &c, vec< L, T, Q > const &d)
 Return the minimum component-wise values of 4 inputs. More...
 

Detailed Description

Exposes min and max functions for 3 to 4 vector parameters.

Include <glm/ext/vector_common.hpp> to use the features of this extension.

See also
core_common
GLM_EXT_scalar_common

Function Documentation

GLM_FUNC_DECL vec<L, T, Q> glm::fmax ( vec< L, T, Q > const &  a,
T  b 
)

Returns y if x < y; otherwise, it returns x.

If one of the two arguments is NaN, the value of the other argument is returned.

Template Parameters
L: Integer between 1 and 4 inclusive that qualifies the dimension of the vector
T: Floating-point scalar types
Q: Value from qualifier enum
See also
std::fmax documentation
GLM_FUNC_DECL vec<L, T, Q> glm::fmax ( vec< L, T, Q > const &  a,
vec< L, T, Q > const &  b 
)

Returns y if x < y; otherwise, it returns x.

If one of the two arguments is NaN, the value of the other argument is returned.

Template Parameters
L: Integer between 1 and 4 inclusive that qualifies the dimension of the vector
T: Floating-point scalar types
Q: Value from qualifier enum
See also
std::fmax documentation
GLM_FUNC_DECL vec<L, T, Q> glm::fmax ( vec< L, T, Q > const &  a,
vec< L, T, Q > const &  b,
vec< L, T, Q > const &  c 
)

Returns y if x < y; otherwise, it returns x.

If an argument is NaN, it is ignored and the result is taken from the remaining arguments.

Template Parameters
L: Integer between 1 and 4 inclusive that qualifies the dimension of the vector
T: Floating-point scalar types
Q: Value from qualifier enum
See also
std::fmax documentation
GLM_FUNC_DECL vec<L, T, Q> glm::fmax ( vec< L, T, Q > const &  a,
vec< L, T, Q > const &  b,
vec< L, T, Q > const &  c,
vec< L, T, Q > const &  d 
)

Returns y if x < y; otherwise, it returns x.

If an argument is NaN, it is ignored and the result is taken from the remaining arguments.

Template Parameters
L: Integer between 1 and 4 inclusive that qualifies the dimension of the vector
T: Floating-point scalar types
Q: Value from qualifier enum
See also
std::fmax documentation
GLM_FUNC_DECL vec<L, T, Q> glm::fmin ( vec< L, T, Q > const &  x,
T  y 
)

Returns y if y < x; otherwise, it returns x.

If one of the two arguments is NaN, the value of the other argument is returned.

Template Parameters
L: Integer between 1 and 4 inclusive that qualifies the dimension of the vector
T: Floating-point scalar types
Q: Value from qualifier enum
See also
std::fmin documentation
GLM_FUNC_DECL vec<L, T, Q> glm::fmin ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  y 
)

Returns y if y < x; otherwise, it returns x.

If one of the two arguments is NaN, the value of the other argument is returned.

Template Parameters
LInteger between 1 and 4 included that qualify the dimension of the vector
TFloating-point scalar types
QValue from qualifier enum
See also
std::fmin documentation
GLM_FUNC_DECL vec<L, T, Q> glm::fmin ( vec< L, T, Q > const &  a,
vec< L, T, Q > const &  b,
vec< L, T, Q > const &  c 
)

Returns the component-wise minimum of a, b and c.

If an argument is NaN, it is ignored and the minimum of the remaining arguments is returned.

Template Parameters
LInteger between 1 and 4 included that qualify the dimension of the vector
TFloating-point scalar types
QValue from qualifier enum
See also
std::fmin documentation
GLM_FUNC_DECL vec<L, T, Q> glm::fmin ( vec< L, T, Q > const &  a,
vec< L, T, Q > const &  b,
vec< L, T, Q > const &  c,
vec< L, T, Q > const &  d 
)

Returns the component-wise minimum of a, b, c and d.

If an argument is NaN, it is ignored and the minimum of the remaining arguments is returned.

Template Parameters
LInteger between 1 and 4 included that qualify the dimension of the vector
TFloating-point scalar types
QValue from qualifier enum
See also
std::fmin documentation
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> glm::max ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  y,
vec< L, T, Q > const &  z 
)

Return the maximum component-wise values of 3 inputs.

Template Parameters
LInteger between 1 and 4 included that qualify the dimension of the vector
TFloating-point or integer scalar types
QValue from qualifier enum
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> glm::max ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  y,
vec< L, T, Q > const &  z,
vec< L, T, Q > const &  w 
)

Return the maximum component-wise values of 4 inputs.

Template Parameters
LInteger between 1 and 4 included that qualify the dimension of the vector
TFloating-point or integer scalar types
QValue from qualifier enum
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> glm::min ( vec< L, T, Q > const &  a,
vec< L, T, Q > const &  b,
vec< L, T, Q > const &  c 
)

Return the minimum component-wise values of 3 inputs.

Template Parameters
LInteger between 1 and 4 included that qualify the dimension of the vector
TFloating-point or integer scalar types
QValue from qualifier enum
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> glm::min ( vec< L, T, Q > const &  a,
vec< L, T, Q > const &  b,
vec< L, T, Q > const &  c,
vec< L, T, Q > const &  d 
)

Return the minimum component-wise values of 4 inputs.

Template Parameters
LInteger between 1 and 4 included that qualify the dimension of the vector
TFloating-point or integer scalar types
QValue from qualifier enum
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00268.html ================================================ 0.9.9 API documentation: GLM_EXT_vector_double1
0.9.9 API documentation
GLM_EXT_vector_double1

Exposes double-precision floating point vector type with one component. More...

Typedefs

typedef vec< 1, double, defaultp > dvec1
 1 component vector of double-precision floating-point numbers.
 

Detailed Description

Exposes double-precision floating point vector type with one component.

Include <glm/ext/vector_double1.hpp> to use the features of this extension.

See also
GLM_EXT_vector_double1_precision extension.
GLM_EXT_vector_float1 extension.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00269.html ================================================ 0.9.9 API documentation: GLM_EXT_vector_double1_precision
0.9.9 API documentation
GLM_EXT_vector_double1_precision

Exposes highp_dvec1, mediump_dvec1 and lowp_dvec1 types. More...

Typedefs

typedef vec< 1, double, highp > highp_dvec1
 1 component vector of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef vec< 1, double, lowp > lowp_dvec1
 1 component vector of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef vec< 1, double, mediump > mediump_dvec1
 1 component vector of double-precision floating-point numbers using medium precision arithmetic in term of ULPs.
 

Detailed Description

Exposes highp_dvec1, mediump_dvec1 and lowp_dvec1 types.

Include <glm/ext/vector_double1_precision.hpp> to use the features of this extension.

See also
GLM_EXT_vector_double1
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00270.html ================================================ 0.9.9 API documentation: GLM_EXT_vector_float1
0.9.9 API documentation
GLM_EXT_vector_float1

Exposes single-precision floating point vector type with one component. More...

Typedefs

typedef vec< 1, float, defaultp > vec1
 1 component vector of single-precision floating-point numbers.
 

Detailed Description

Exposes single-precision floating point vector type with one component.

Include <glm/ext/vector_float1.hpp> to use the features of this extension.

See also
GLM_EXT_vector_float1_precision extension.
GLM_EXT_vector_double1 extension.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00271.html ================================================ 0.9.9 API documentation: GLM_EXT_vector_float1_precision
0.9.9 API documentation
GLM_EXT_vector_float1_precision

Exposes highp_vec1, mediump_vec1 and lowp_vec1 types. More...

Typedefs

typedef vec< 1, float, highp > highp_vec1
 1 component vector of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef vec< 1, float, lowp > lowp_vec1
 1 component vector of single-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef vec< 1, float, mediump > mediump_vec1
 1 component vector of single-precision floating-point numbers using medium precision arithmetic in term of ULPs.
 

Detailed Description

Exposes highp_vec1, mediump_vec1 and lowp_vec1 types.

Include <glm/ext/vector_float1_precision.hpp> to use the features of this extension.

See also
GLM_EXT_vector_float1 extension.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00272.html ================================================ 0.9.9 API documentation: GLM_EXT_vector_int1
0.9.9 API documentation
GLM_EXT_vector_int1

Exposes ivec1 vector type. More...

Typedefs

typedef vec< 1, int, defaultp > ivec1
 1 component vector of signed integer numbers.
 

Detailed Description

Exposes ivec1 vector type.

Include <glm/ext/vector_int1.hpp> to use the features of this extension.

See also
GLM_EXT_vector_uint1 extension.
GLM_EXT_vector_int1_precision extension.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00273.html ================================================ 0.9.9 API documentation: GLM_EXT_vector_int1_precision
0.9.9 API documentation
GLM_EXT_vector_int1_precision

Exposes highp_ivec1, mediump_ivec1 and lowp_ivec1 types. More...

Typedefs

typedef vec< 1, int, highp > highp_ivec1
 1 component vector of signed integer values.
 
typedef vec< 1, int, lowp > lowp_ivec1
 1 component vector of signed integer values.
 
typedef vec< 1, int, mediump > mediump_ivec1
 1 component vector of signed integer values.
 

Detailed Description

Exposes highp_ivec1, mediump_ivec1 and lowp_ivec1 types.

Include <glm/ext/vector_int1_precision.hpp> to use the features of this extension.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00274.html ================================================ 0.9.9 API documentation: GLM_EXT_vector_integer
0.9.9 API documentation
GLM_EXT_vector_integer

Include <glm/ext/vector_integer.hpp> to use the features of this extension. More...

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, int, Q > findNSB (vec< L, T, Q > const &Source, vec< L, int, Q > SignificantBitCount)
 Returns the bit number of the Nth significant bit set to 1 in the binary representation of value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, bool, Q > isMultiple (vec< L, T, Q > const &v, T Multiple)
 Return true if the 'Value' is a multiple of 'Multiple'. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, bool, Q > isMultiple (vec< L, T, Q > const &v, vec< L, T, Q > const &Multiple)
 Return true if the 'Value' is a multiple of 'Multiple'. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, bool, Q > isPowerOfTwo (vec< L, T, Q > const &v)
 Return true if the value is a power of two number. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > nextMultiple (vec< L, T, Q > const &v, T Multiple)
 Higher multiple number of Source. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > nextMultiple (vec< L, T, Q > const &v, vec< L, T, Q > const &Multiple)
 Higher multiple number of Source. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > nextPowerOfTwo (vec< L, T, Q > const &v)
 Return the power of two whose value is just higher than the input value (round up to a power of two). More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > prevMultiple (vec< L, T, Q > const &v, T Multiple)
 Lower multiple number of Source. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > prevMultiple (vec< L, T, Q > const &v, vec< L, T, Q > const &Multiple)
 Lower multiple number of Source. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > prevPowerOfTwo (vec< L, T, Q > const &v)
 Return the power of two whose value is just lower than the input value (round down to a power of two). More...
 

Detailed Description

Include <glm/ext/vector_integer.hpp> to use the features of this extension.

Function Documentation

GLM_FUNC_DECL vec<L, int, Q> glm::findNSB ( vec< L, T, Q > const &  Source,
vec< L, int, Q >  SignificantBitCount 
)

Returns the bit number of the Nth significant bit set to 1 in the binary representation of value.

If the bit count of value is less than the Nth significant bit, -1 is returned.

Template Parameters
LAn integer between 1 and 4 included that qualify the dimension of the vector.
TSigned or unsigned integer scalar types.
See also
GLM_EXT_vector_integer
GLM_FUNC_DECL vec<L, bool, Q> glm::isMultiple ( vec< L, T, Q > const &  v,
T  Multiple 
)

Return true if the 'Value' is a multiple of 'Multiple'.

Template Parameters
LInteger between 1 and 4 included that qualify the dimension of the vector
TSigned or unsigned integer scalar types.
QValue from qualifier enum
See also
GLM_EXT_vector_integer
GLM_FUNC_DECL vec<L, bool, Q> glm::isMultiple ( vec< L, T, Q > const &  v,
vec< L, T, Q > const &  Multiple 
)

Return true if the 'Value' is a multiple of 'Multiple'.

Template Parameters
LInteger between 1 and 4 included that qualify the dimension of the vector
TSigned or unsigned integer scalar types.
QValue from qualifier enum
See also
GLM_EXT_vector_integer
GLM_FUNC_DECL vec<L, bool, Q> glm::isPowerOfTwo ( vec< L, T, Q > const &  v)

Return true if the value is a power of two number.

Template Parameters
LInteger between 1 and 4 included that qualify the dimension of the vector
TSigned or unsigned integer scalar types.
QValue from qualifier enum
See also
GLM_EXT_vector_integer
GLM_FUNC_DECL vec<L, T, Q> glm::nextMultiple ( vec< L, T, Q > const &  v,
T  Multiple 
)

Higher multiple number of Source.

Template Parameters
LInteger between 1 and 4 included that qualify the dimension of the vector
TSigned or unsigned integer scalar types.
QValue from qualifier enum
Parameters
vSource values to which is applied the function
MultipleMust be a null or positive value
See also
GLM_EXT_vector_integer
GLM_FUNC_DECL vec<L, T, Q> glm::nextMultiple ( vec< L, T, Q > const &  v,
vec< L, T, Q > const &  Multiple 
)

Higher multiple number of Source.

Template Parameters
LInteger between 1 and 4 included that qualify the dimension of the vector
TSigned or unsigned integer scalar types.
QValue from qualifier enum
Parameters
vSource values to which is applied the function
MultipleMust be a null or positive value
See also
GLM_EXT_vector_integer
GLM_FUNC_DECL vec<L, T, Q> glm::nextPowerOfTwo ( vec< L, T, Q > const &  v)

Return the power of two whose value is just higher than the input value (round up to a power of two).

Template Parameters
LInteger between 1 and 4 included that qualify the dimension of the vector
TSigned or unsigned integer scalar types.
QValue from qualifier enum
See also
GLM_EXT_vector_integer
GLM_FUNC_DECL vec<L, T, Q> glm::prevMultiple ( vec< L, T, Q > const &  v,
T  Multiple 
)

Lower multiple number of Source.

Template Parameters
LInteger between 1 and 4 included that qualify the dimension of the vector
TSigned or unsigned integer scalar types.
QValue from qualifier enum
Parameters
vSource values to which is applied the function
MultipleMust be a null or positive value
See also
GLM_EXT_vector_integer
GLM_FUNC_DECL vec<L, T, Q> glm::prevMultiple ( vec< L, T, Q > const &  v,
vec< L, T, Q > const &  Multiple 
)

Lower multiple number of Source.

Template Parameters
LInteger between 1 and 4 included that qualify the dimension of the vector
TSigned or unsigned integer scalar types.
QValue from qualifier enum
Parameters
vSource values to which is applied the function
MultipleMust be a null or positive value
See also
GLM_EXT_vector_integer
GLM_FUNC_DECL vec<L, T, Q> glm::prevPowerOfTwo ( vec< L, T, Q > const &  v)

Return the power of two whose value is just lower than the input value (round down to a power of two).

Template Parameters
LInteger between 1 and 4 included that qualify the dimension of the vector
TSigned or unsigned integer scalar types.
QValue from qualifier enum
See also
GLM_EXT_vector_integer
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00275.html ================================================ 0.9.9 API documentation: GLM_EXT_vector_relational
0.9.9 API documentation
GLM_EXT_vector_relational

Exposes comparison functions for vector types that take user defined epsilon values. More...

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > equal (vec< L, T, Q > const &x, vec< L, T, Q > const &y, T epsilon)
 Returns the component-wise comparison of |x - y| < epsilon. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > equal (vec< L, T, Q > const &x, vec< L, T, Q > const &y, vec< L, T, Q > const &epsilon)
 Returns the component-wise comparison of |x - y| < epsilon. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > equal (vec< L, T, Q > const &x, vec< L, T, Q > const &y, int ULPs)
 Returns the component-wise comparison between two vectors in term of ULPs. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > equal (vec< L, T, Q > const &x, vec< L, T, Q > const &y, vec< L, int, Q > const &ULPs)
 Returns the component-wise comparison between two vectors in term of ULPs. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > notEqual (vec< L, T, Q > const &x, vec< L, T, Q > const &y, T epsilon)
 Returns the component-wise comparison of |x - y| >= epsilon. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > notEqual (vec< L, T, Q > const &x, vec< L, T, Q > const &y, vec< L, T, Q > const &epsilon)
 Returns the component-wise comparison of |x - y| >= epsilon. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > notEqual (vec< L, T, Q > const &x, vec< L, T, Q > const &y, int ULPs)
 Returns the component-wise comparison between two vectors in term of ULPs. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > notEqual (vec< L, T, Q > const &x, vec< L, T, Q > const &y, vec< L, int, Q > const &ULPs)
 Returns the component-wise comparison between two vectors in term of ULPs. More...
 

Detailed Description

Exposes comparison functions for vector types that take user defined epsilon values.

Include <glm/ext/vector_relational.hpp> to use the features of this extension.

See also
core_vector_relational
GLM_EXT_scalar_relational
GLM_EXT_matrix_relational

Function Documentation

GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> glm::equal ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  y,
T  epsilon 
)

Returns the component-wise comparison of |x - y| < epsilon.

True if this expression is satisfied.

Template Parameters
LInteger between 1 and 4 included that qualify the dimension of the vector
TFloating-point or integer scalar types
QValue from qualifier enum
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> glm::equal ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  y,
vec< L, T, Q > const &  epsilon 
)

Returns the component-wise comparison of |x - y| < epsilon.

True if this expression is satisfied.

Template Parameters
LInteger between 1 and 4 included that qualify the dimension of the vector
TFloating-point or integer scalar types
QValue from qualifier enum
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> glm::equal ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  y,
int  ULPs 
)

Returns the component-wise comparison between two vectors in term of ULPs.

True if this expression is satisfied.

Template Parameters
LInteger between 1 and 4 included that qualify the dimension of the vector
TFloating-point
QValue from qualifier enum
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> glm::equal ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  y,
vec< L, int, Q > const &  ULPs 
)

Returns the component-wise comparison between two vectors in term of ULPs.

True if this expression is satisfied.

Template Parameters
LInteger between 1 and 4 included that qualify the dimension of the vector
TFloating-point
QValue from qualifier enum
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> glm::notEqual ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  y,
T  epsilon 
)

Returns the component-wise comparison of |x - y| >= epsilon.

True if this expression is not satisfied.

Template Parameters
LInteger between 1 and 4 included that qualify the dimension of the vector
TFloating-point or integer scalar types
QValue from qualifier enum
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> glm::notEqual ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  y,
vec< L, T, Q > const &  epsilon 
)

Returns the component-wise comparison of |x - y| >= epsilon.

True if this expression is not satisfied.

Template Parameters
LInteger between 1 and 4 included that qualify the dimension of the vector
TFloating-point or integer scalar types
QValue from qualifier enum
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> glm::notEqual ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  y,
int  ULPs 
)

Returns the component-wise comparison between two vectors in term of ULPs.

True if this expression is not satisfied.

Template Parameters
LInteger between 1 and 4 included that qualify the dimension of the vector
TFloating-point
QValue from qualifier enum
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> glm::notEqual ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  y,
vec< L, int, Q > const &  ULPs 
)

Returns the component-wise comparison between two vectors in term of ULPs.

True if this expression is not satisfied.

Template Parameters
LInteger between 1 and 4 included that qualify the dimension of the vector
TFloating-point
QValue from qualifier enum
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00276.html ================================================ 0.9.9 API documentation: GLM_EXT_vector_uint1
0.9.9 API documentation
GLM_EXT_vector_uint1

Exposes uvec1 vector type. More...

Typedefs

typedef vec< 1, unsigned int, defaultp > uvec1
 1 component vector of unsigned integer numbers.
 

Detailed Description

Exposes uvec1 vector type.

Include <glm/ext/vector_uint1.hpp> to use the features of this extension.

See also
GLM_EXT_vector_int1 extension.
GLM_EXT_vector_uint1_precision extension.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00277.html ================================================ 0.9.9 API documentation: GLM_EXT_vector_uint1_precision
0.9.9 API documentation
GLM_EXT_vector_uint1_precision

Exposes highp_uvec1, mediump_uvec1 and lowp_uvec1 types. More...

Typedefs

typedef vec< 1, unsigned int, highp > highp_uvec1
 1 component vector of unsigned integer values. More...
 
typedef vec< 1, unsigned int, lowp > lowp_uvec1
 1 component vector of unsigned integer values. More...
 
typedef vec< 1, unsigned int, mediump > mediump_uvec1
 1 component vector of unsigned integer values. More...
 

Detailed Description

Exposes highp_uvec1, mediump_uvec1 and lowp_uvec1 types.

Include <glm/ext/vector_uint1_precision.hpp> to use the features of this extension.

Typedef Documentation

typedef vec< 1, u32, highp > highp_uvec1

1 component vector of unsigned integer values.

See also
GLM_EXT_vector_uint1_precision

Definition at line 27 of file vector_uint1_precision.hpp.

typedef vec< 1, u32, lowp > lowp_uvec1

1 component vector of unsigned integer values.

See also
GLM_EXT_vector_uint1_precision

Definition at line 37 of file vector_uint1_precision.hpp.

typedef vec< 1, u32, mediump > mediump_uvec1

1 component vector of unsigned integer values.

See also
GLM_EXT_vector_uint1_precision

Definition at line 32 of file vector_uint1_precision.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00278.html ================================================ 0.9.9 API documentation: GLM_EXT_vector_ulp
0.9.9 API documentation
GLM_EXT_vector_ulp

Allow the measurement of the accuracy of a function against a reference implementation. More...

Allow the measurement of the accuracy of a function against a reference implementation.

This extension works on floating-point data and provides results in ULP.

Include <glm/ext/vector_ulp.hpp> to use the features of this extension.

See also
GLM_EXT_scalar_ulp
GLM_EXT_scalar_relational
GLM_EXT_vector_relational
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00279.html ================================================ 0.9.9 API documentation: Geometric functions
0.9.9 API documentation
Geometric functions

These operate on vectors as vectors, not component-wise. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > cross (vec< 3, T, Q > const &x, vec< 3, T, Q > const &y)
 Returns the cross product of x and y. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T distance (vec< L, T, Q > const &p0, vec< L, T, Q > const &p1)
 Returns the distance between p0 and p1, i.e., length(p0 - p1). More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T dot (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Returns the dot product of x and y, i.e., result = x * y. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > faceforward (vec< L, T, Q > const &N, vec< L, T, Q > const &I, vec< L, T, Q > const &Nref)
 If dot(Nref, I) < 0.0, return N, otherwise, return -N. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T length (vec< L, T, Q > const &x)
 Returns the length of x, i.e., sqrt(x * x). More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > normalize (vec< L, T, Q > const &x)
 Returns a vector in the same direction as x but with length of 1. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > reflect (vec< L, T, Q > const &I, vec< L, T, Q > const &N)
 For the incident vector I and surface orientation N, returns the reflection direction : result = I - 2.0 * dot(N, I) * N. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > refract (vec< L, T, Q > const &I, vec< L, T, Q > const &N, T eta)
 For the incident vector I and surface normal N, and the ratio of indices of refraction eta, return the refraction vector. More...
 

Detailed Description

These operate on vectors as vectors, not component-wise.

Include <glm/geometric.hpp> to use these core features.

Function Documentation

GLM_FUNC_DECL vec<3, T, Q> glm::cross ( vec< 3, T, Q > const &  x,
vec< 3, T, Q > const &  y 
)

Returns the cross product of x and y.

Template Parameters
TFloating-point scalar types.
See also
GLSL cross man page
GLSL 4.20.8 specification, section 8.5 Geometric Functions
GLM_FUNC_DECL T glm::distance ( vec< L, T, Q > const &  p0,
vec< L, T, Q > const &  p1 
)

Returns the distance between p0 and p1, i.e., length(p0 - p1).

Template Parameters
LAn integer between 1 and 4 included that qualify the dimension of the vector.
TFloating-point scalar types.
See also
GLSL distance man page
GLSL 4.20.8 specification, section 8.5 Geometric Functions
GLM_FUNC_DECL T glm::dot ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  y 
)

Returns the dot product of x and y, i.e., result = x * y.

Template Parameters
LAn integer between 1 and 4 included that qualify the dimension of the vector.
TFloating-point scalar types.
See also
GLSL dot man page
GLSL 4.20.8 specification, section 8.5 Geometric Functions
GLM_FUNC_DECL vec<L, T, Q> glm::faceforward ( vec< L, T, Q > const &  N,
vec< L, T, Q > const &  I,
vec< L, T, Q > const &  Nref 
)

If dot(Nref, I) < 0.0, return N, otherwise, return -N.

Template Parameters
LAn integer between 1 and 4 included that qualify the dimension of the vector.
TFloating-point scalar types.
See also
GLSL faceforward man page
GLSL 4.20.8 specification, section 8.5 Geometric Functions
GLM_FUNC_DECL T glm::length ( vec< L, T, Q > const &  x)

Returns the length of x, i.e., sqrt(x * x).

Template Parameters
LAn integer between 1 and 4 included that qualify the dimension of the vector.
TFloating-point scalar types.
See also
GLSL length man page
GLSL 4.20.8 specification, section 8.5 Geometric Functions
GLM_FUNC_DECL vec<L, T, Q> glm::normalize ( vec< L, T, Q > const &  x)

Returns a vector in the same direction as x but with length of 1.

According to issue 10 of the GLSL 1.10 specification, if length(x) == 0 then the result is undefined and may generate an error.

Template Parameters
LAn integer between 1 and 4 included that qualify the dimension of the vector.
TFloating-point scalar types.
See also
GLSL normalize man page
GLSL 4.20.8 specification, section 8.5 Geometric Functions
GLM_FUNC_DECL vec<L, T, Q> glm::reflect ( vec< L, T, Q > const &  I,
vec< L, T, Q > const &  N 
)

For the incident vector I and surface orientation N, returns the reflection direction : result = I - 2.0 * dot(N, I) * N.

Template Parameters
LAn integer between 1 and 4 included that qualify the dimension of the vector.
TFloating-point scalar types.
See also
GLSL reflect man page
GLSL 4.20.8 specification, section 8.5 Geometric Functions
GLM_FUNC_DECL vec<L, T, Q> glm::refract ( vec< L, T, Q > const &  I,
vec< L, T, Q > const &  N,
T  eta 
)

For the incident vector I and surface normal N, and the ratio of indices of refraction eta, return the refraction vector.

Template Parameters
LAn integer between 1 and 4 included that qualify the dimension of the vector.
TFloating-point scalar types.
See also
GLSL refract man page
GLSL 4.20.8 specification, section 8.5 Geometric Functions
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00280.html ================================================ 0.9.9 API documentation: Core features
0.9.9 API documentation
Core features

Features that implement in C++ the GLSL specification as closely as possible. More...

Modules

 Common functions
 Provides GLSL common functions.
 
 Exponential functions
 Provides GLSL exponential functions.
 
 Geometric functions
 These operate on vectors as vectors, not component-wise.
 
 Vector types
 Vector types of two to four components with an exhaustive set of operators.
 
 Vector types with precision qualifiers
 Vector types with precision qualifiers which may result in various precision in term of ULPs.
 
 Matrix types
 Matrix types with C columns and R rows, where C and R are values between 2 and 4 included.
 
 Matrix types with precision qualifiers
 Matrix types with precision qualifiers which may result in various precision in term of ULPs.
 
 Integer functions
 Provides GLSL functions on integer types.
 
 Matrix functions
 Provides GLSL matrix functions.
 
 Floating-Point Pack and Unpack Functions
 Provides GLSL functions to pack and unpack half, single and double-precision floating point values into more compact integer types.
 
 Angle and Trigonometry Functions
 Function parameters specified as angle are assumed to be in units of radians.
 
 Vector Relational Functions
 Relational and equality operators (<, <=, >, >=, ==, !=) are defined to operate on scalars and produce scalar Boolean results.
 

Typedefs

typedef mat< 3, 2, float, defaultp > mat3x2
 3 columns of 2 components matrix of single-precision floating-point numbers. More...
 

Detailed Description

Features that implement in C++ the GLSL specification as closely as possible.

The GLM core consists of C++ types that mirror GLSL types and C++ functions that mirror the GLSL functions.

The best documentation for GLM Core is the current GLSL specification, version 4.2 (pdf file).

GLM core functionalities require <glm/glm.hpp> to be included to be used.

Typedef Documentation

typedef mat< 3, 2, f32, defaultp > mat3x2

3 columns of 2 components matrix of single-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices

Definition at line 15 of file matrix_float3x2.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00281.html ================================================ 0.9.9 API documentation: Vector types
0.9.9 API documentation
Vector types

Vector types of two to four components with an exhaustive set of operators. More...

Typedefs

typedef vec< 2, bool, defaultp > bvec2
 2 components vector of boolean. More...
 
typedef vec< 3, bool, defaultp > bvec3
 3 components vector of boolean. More...
 
typedef vec< 4, bool, defaultp > bvec4
 4 components vector of boolean. More...
 
typedef vec< 2, double, defaultp > dvec2
 2 components vector of double-precision floating-point numbers. More...
 
typedef vec< 3, double, defaultp > dvec3
 3 components vector of double-precision floating-point numbers. More...
 
typedef vec< 4, double, defaultp > dvec4
 4 components vector of double-precision floating-point numbers. More...
 
typedef vec< 2, int, defaultp > ivec2
 2 components vector of signed integer numbers. More...
 
typedef vec< 3, int, defaultp > ivec3
 3 components vector of signed integer numbers. More...
 
typedef vec< 4, int, defaultp > ivec4
 4 components vector of signed integer numbers. More...
 
typedef vec< 2, unsigned int, defaultp > uvec2
 2 components vector of unsigned integer numbers. More...
 
typedef vec< 3, unsigned int, defaultp > uvec3
 3 components vector of unsigned integer numbers. More...
 
typedef vec< 4, unsigned int, defaultp > uvec4
 4 components vector of unsigned integer numbers. More...
 
typedef vec< 2, float, defaultp > vec2
 2 components vector of single-precision floating-point numbers. More...
 
typedef vec< 3, float, defaultp > vec3
 3 components vector of single-precision floating-point numbers. More...
 
typedef vec< 4, float, defaultp > vec4
 4 components vector of single-precision floating-point numbers. More...
 

Detailed Description

Vector types of two to four components with an exhaustive set of operators.

Typedef Documentation

typedef vec< 2, bool, defaultp > bvec2

2 components vector of boolean.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors

Definition at line 15 of file vector_bool2.hpp.

typedef vec< 3, bool, defaultp > bvec3

3 components vector of boolean.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors

Definition at line 15 of file vector_bool3.hpp.

typedef vec< 4, bool, defaultp > bvec4

4 components vector of boolean.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors

Definition at line 15 of file vector_bool4.hpp.

typedef vec< 2, f64, defaultp > dvec2

2 components vector of double-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors

Definition at line 15 of file vector_double2.hpp.

typedef vec< 3, f64, defaultp > dvec3

3 components vector of double-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors

Definition at line 15 of file vector_double3.hpp.

typedef vec< 4, f64, defaultp > dvec4

4 components vector of double-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors

Definition at line 15 of file vector_double4.hpp.

typedef vec< 2, i32, defaultp > ivec2

2 components vector of signed integer numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors

Definition at line 15 of file vector_int2.hpp.

typedef vec< 3, i32, defaultp > ivec3

3 components vector of signed integer numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors

Definition at line 15 of file vector_int3.hpp.

typedef vec< 4, i32, defaultp > ivec4

4 components vector of signed integer numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors

Definition at line 15 of file vector_int4.hpp.

typedef vec< 2, u32, defaultp > uvec2

2 components vector of unsigned integer numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors

Definition at line 15 of file vector_uint2.hpp.

typedef vec< 3, u32, defaultp > uvec3

3 components vector of unsigned integer numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors

Definition at line 15 of file vector_uint3.hpp.

typedef vec< 4, u32, defaultp > uvec4

4 components vector of unsigned integer numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors

Definition at line 15 of file vector_uint4.hpp.

typedef vec< 2, float, defaultp > vec2

2 components vector of single-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors

Definition at line 15 of file vector_float2.hpp.

typedef vec< 3, float, defaultp > vec3

3 components vector of single-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors

Definition at line 15 of file vector_float3.hpp.

typedef vec< 4, float, defaultp > vec4

4 components vector of single-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors

Definition at line 15 of file vector_float4.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00282.html ================================================ 0.9.9 API documentation: Vector types with precision qualifiers
Vector types with precision qualifiers

Vector types with precision qualifiers, which may result in varying precision in terms of ULPs. More...

Typedefs

typedef vec< 2, bool, highp > highp_bvec2
 2 components vector of high qualifier bool numbers. More...
 
typedef vec< 3, bool, highp > highp_bvec3
 3 components vector of high qualifier bool numbers. More...
 
typedef vec< 4, bool, highp > highp_bvec4
 4 components vector of high qualifier bool numbers. More...
 
typedef vec< 2, double, highp > highp_dvec2
 2 components vector of high double-qualifier floating-point numbers. More...
 
typedef vec< 3, double, highp > highp_dvec3
 3 components vector of high double-qualifier floating-point numbers. More...
 
typedef vec< 4, double, highp > highp_dvec4
 4 components vector of high double-qualifier floating-point numbers. More...
 
typedef vec< 2, int, highp > highp_ivec2
 2 components vector of high qualifier signed integer numbers. More...
 
typedef vec< 3, int, highp > highp_ivec3
 3 components vector of high qualifier signed integer numbers. More...
 
typedef vec< 4, int, highp > highp_ivec4
 4 components vector of high qualifier signed integer numbers. More...
 
typedef vec< 2, unsigned int, highp > highp_uvec2
 2 components vector of high qualifier unsigned integer numbers. More...
 
typedef vec< 3, unsigned int, highp > highp_uvec3
 3 components vector of high qualifier unsigned integer numbers. More...
 
typedef vec< 4, unsigned int, highp > highp_uvec4
 4 components vector of high qualifier unsigned integer numbers. More...
 
typedef vec< 2, float, highp > highp_vec2
 2 components vector of high single-qualifier floating-point numbers. More...
 
typedef vec< 3, float, highp > highp_vec3
 3 components vector of high single-qualifier floating-point numbers. More...
 
typedef vec< 4, float, highp > highp_vec4
 4 components vector of high single-qualifier floating-point numbers. More...
 
typedef vec< 2, bool, lowp > lowp_bvec2
 2 components vector of low qualifier bool numbers. More...
 
typedef vec< 3, bool, lowp > lowp_bvec3
 3 components vector of low qualifier bool numbers. More...
 
typedef vec< 4, bool, lowp > lowp_bvec4
 4 components vector of low qualifier bool numbers. More...
 
typedef vec< 2, double, lowp > lowp_dvec2
 2 components vector of low double-qualifier floating-point numbers. More...
 
typedef vec< 3, double, lowp > lowp_dvec3
 3 components vector of low double-qualifier floating-point numbers. More...
 
typedef vec< 4, double, lowp > lowp_dvec4
 4 components vector of low double-qualifier floating-point numbers. More...
 
typedef vec< 2, int, lowp > lowp_ivec2
 2 components vector of low qualifier signed integer numbers. More...
 
typedef vec< 3, int, lowp > lowp_ivec3
 3 components vector of low qualifier signed integer numbers. More...
 
typedef vec< 4, int, lowp > lowp_ivec4
 4 components vector of low qualifier signed integer numbers. More...
 
typedef vec< 2, unsigned int, lowp > lowp_uvec2
 2 components vector of low qualifier unsigned integer numbers. More...
 
typedef vec< 3, unsigned int, lowp > lowp_uvec3
 3 components vector of low qualifier unsigned integer numbers. More...
 
typedef vec< 4, unsigned int, lowp > lowp_uvec4
 4 components vector of low qualifier unsigned integer numbers. More...
 
typedef vec< 2, float, lowp > lowp_vec2
 2 components vector of low single-qualifier floating-point numbers. More...
 
typedef vec< 3, float, lowp > lowp_vec3
 3 components vector of low single-qualifier floating-point numbers. More...
 
typedef vec< 4, float, lowp > lowp_vec4
 4 components vector of low single-qualifier floating-point numbers. More...
 
typedef vec< 2, bool, mediump > mediump_bvec2
 2 components vector of medium qualifier bool numbers. More...
 
typedef vec< 3, bool, mediump > mediump_bvec3
 3 components vector of medium qualifier bool numbers. More...
 
typedef vec< 4, bool, mediump > mediump_bvec4
 4 components vector of medium qualifier bool numbers. More...
 
typedef vec< 2, double, mediump > mediump_dvec2
 2 components vector of medium double-qualifier floating-point numbers. More...
 
typedef vec< 3, double, mediump > mediump_dvec3
 3 components vector of medium double-qualifier floating-point numbers. More...
 
typedef vec< 4, double, mediump > mediump_dvec4
 4 components vector of medium double-qualifier floating-point numbers. More...
 
typedef vec< 2, int, mediump > mediump_ivec2
 2 components vector of medium qualifier signed integer numbers. More...
 
typedef vec< 3, int, mediump > mediump_ivec3
 3 components vector of medium qualifier signed integer numbers. More...
 
typedef vec< 4, int, mediump > mediump_ivec4
 4 components vector of medium qualifier signed integer numbers. More...
 
typedef vec< 2, unsigned int, mediump > mediump_uvec2
 2 components vector of medium qualifier unsigned integer numbers. More...
 
typedef vec< 3, unsigned int, mediump > mediump_uvec3
 3 components vector of medium qualifier unsigned integer numbers. More...
 
typedef vec< 4, unsigned int, mediump > mediump_uvec4
 4 components vector of medium qualifier unsigned integer numbers. More...
 
typedef vec< 2, float, mediump > mediump_vec2
 2 components vector of medium single-qualifier floating-point numbers. More...
 
typedef vec< 3, float, mediump > mediump_vec3
 3 components vector of medium single-qualifier floating-point numbers. More...
 
typedef vec< 4, float, mediump > mediump_vec4
 4 components vector of medium single-qualifier floating-point numbers. More...
 

Detailed Description

Vector types with precision qualifiers, which may result in varying precision in terms of ULPs.

GLSL allows defining precision qualifiers for particular variables. With OpenGL's GLSL, these qualifiers have no effect; they are there for compatibility. With OpenGL ES's GLSL, these qualifiers do have an effect.

C++ has no language equivalent to precision qualifiers, so GLM provides the next-best thing: a number of typedefs that use a particular qualifier.

None of these types make any guarantees about the actual qualifier used.

Typedef Documentation

typedef vec< 2, bool, highp > highp_bvec2

2 components vector of high qualifier bool numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file vector_bool2_precision.hpp.

typedef vec< 3, bool, highp > highp_bvec3

3 components vector of high qualifier bool numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file vector_bool3_precision.hpp.

typedef vec< 4, bool, highp > highp_bvec4

4 components vector of high qualifier bool numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file vector_bool4_precision.hpp.

typedef vec< 2, f64, highp > highp_dvec2

2 components vector of high double-qualifier floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file vector_double2_precision.hpp.

typedef vec< 3, f64, highp > highp_dvec3

3 components vector of high double-qualifier floating-point numbers.

There is no guarantee on the actual qualifier.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 17 of file vector_double3_precision.hpp.

typedef vec< 4, f64, highp > highp_dvec4

4 components vector of high double-qualifier floating-point numbers.

There is no guarantee on the actual qualifier.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 18 of file vector_double4_precision.hpp.

typedef vec< 2, i32, highp > highp_ivec2

2 components vector of high qualifier signed integer numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file vector_int2_precision.hpp.

typedef vec< 3, i32, highp > highp_ivec3

3 components vector of high qualifier signed integer numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file vector_int3_precision.hpp.

typedef vec< 4, i32, highp > highp_ivec4

4 components vector of high qualifier signed integer numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file vector_int4_precision.hpp.

typedef vec< 2, u32, highp > highp_uvec2

2 components vector of high qualifier unsigned integer numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file vector_uint2_precision.hpp.

typedef vec< 3, u32, highp > highp_uvec3

3 components vector of high qualifier unsigned integer numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file vector_uint3_precision.hpp.

typedef vec< 4, u32, highp > highp_uvec4

4 components vector of high qualifier unsigned integer numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file vector_uint4_precision.hpp.

typedef vec< 2, float, highp > highp_vec2

2 components vector of high single-qualifier floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file vector_float2_precision.hpp.

typedef vec< 3, float, highp > highp_vec3

3 components vector of high single-qualifier floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file vector_float3_precision.hpp.

typedef vec< 4, float, highp > highp_vec4

4 components vector of high single-qualifier floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file vector_float4_precision.hpp.

typedef vec< 2, bool, lowp > lowp_bvec2

2 components vector of low qualifier bool numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file vector_bool2_precision.hpp.

typedef vec< 3, bool, lowp > lowp_bvec3

3 components vector of low qualifier bool numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file vector_bool3_precision.hpp.

typedef vec< 4, bool, lowp > lowp_bvec4

4 components vector of low qualifier bool numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file vector_bool4_precision.hpp.

typedef vec< 2, f64, lowp > lowp_dvec2

2 components vector of low double-qualifier floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file vector_double2_precision.hpp.

typedef vec< 3, f64, lowp > lowp_dvec3

3 components vector of low double-qualifier floating-point numbers.

There is no guarantee on the actual qualifier.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 31 of file vector_double3_precision.hpp.

typedef vec< 4, f64, lowp > lowp_dvec4

4 components vector of low double-qualifier floating-point numbers.

There is no guarantee on the actual qualifier.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 32 of file vector_double4_precision.hpp.

typedef vec< 2, i32, lowp > lowp_ivec2

2 components vector of low qualifier signed integer numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file vector_int2_precision.hpp.

typedef vec< 3, i32, lowp > lowp_ivec3

3 components vector of low qualifier signed integer numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file vector_int3_precision.hpp.

typedef vec< 4, i32, lowp > lowp_ivec4

4 components vector of low qualifier signed integer numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file vector_int4_precision.hpp.

typedef vec< 2, u32, lowp > lowp_uvec2

2 components vector of low qualifier unsigned integer numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file vector_uint2_precision.hpp.

typedef vec< 3, u32, lowp > lowp_uvec3

3 components vector of low qualifier unsigned integer numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file vector_uint3_precision.hpp.

typedef vec< 4, u32, lowp > lowp_uvec4

4 components vector of low qualifier unsigned integer numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file vector_uint4_precision.hpp.

typedef vec< 2, float, lowp > lowp_vec2

2 components vector of low single-qualifier floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file vector_float2_precision.hpp.

typedef vec< 3, float, lowp > lowp_vec3

3 components vector of low single-qualifier floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file vector_float3_precision.hpp.

typedef vec< 4, float, lowp > lowp_vec4

4 components vector of low single-qualifier floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file vector_float4_precision.hpp.

typedef vec< 2, bool, mediump > mediump_bvec2

2 components vector of medium qualifier bool numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file vector_bool2_precision.hpp.

typedef vec< 3, bool, mediump > mediump_bvec3

3 components vector of medium qualifier bool numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file vector_bool3_precision.hpp.

typedef vec< 4, bool, mediump > mediump_bvec4

4 components vector of medium qualifier bool numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file vector_bool4_precision.hpp.

typedef vec< 2, f64, mediump > mediump_dvec2

2 components vector of medium double-qualifier floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file vector_double2_precision.hpp.

typedef vec< 3, f64, mediump > mediump_dvec3

3 components vector of medium double-qualifier floating-point numbers.

There is no guarantee on the actual qualifier.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 24 of file vector_double3_precision.hpp.

typedef vec< 4, f64, mediump > mediump_dvec4

4 components vector of medium double-qualifier floating-point numbers.

There is no guarantee on the actual qualifier.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 25 of file vector_double4_precision.hpp.

typedef vec< 2, i32, mediump > mediump_ivec2

2 components vector of medium qualifier signed integer numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file vector_int2_precision.hpp.

typedef vec< 3, i32, mediump > mediump_ivec3

3 components vector of medium qualifier signed integer numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file vector_int3_precision.hpp.

typedef vec< 4, i32, mediump > mediump_ivec4

4 components vector of medium qualifier signed integer numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file vector_int4_precision.hpp.

typedef vec< 2, u32, mediump > mediump_uvec2

2 components vector of medium qualifier unsigned integer numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file vector_uint2_precision.hpp.

typedef vec< 3, u32, mediump > mediump_uvec3

3 components vector of medium qualifier unsigned integer numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file vector_uint3_precision.hpp.

typedef vec< 4, u32, mediump > mediump_uvec4

4 components vector of medium qualifier unsigned integer numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file vector_uint4_precision.hpp.

typedef vec< 2, float, mediump > mediump_vec2

2 components vector of medium single-qualifier floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file vector_float2_precision.hpp.

typedef vec< 3, float, mediump > mediump_vec3

3 components vector of medium single-qualifier floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file vector_float3_precision.hpp.

typedef vec< 4, float, mediump > mediump_vec4

4 components vector of medium single-qualifier floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.5 Vectors
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file vector_float4_precision.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00283.html ================================================ 0.9.9 API documentation: Matrix types
Matrix types

Matrix types with C columns and R rows, where C and R are between 2 and 4 inclusive. More...

Typedefs

typedef mat< 2, 2, double, defaultp > dmat2
 2 columns of 2 components matrix of double-precision floating-point numbers. More...
 
typedef mat< 2, 2, double, defaultp > dmat2x2
 2 columns of 2 components matrix of double-precision floating-point numbers. More...
 
typedef mat< 2, 3, double, defaultp > dmat2x3
 2 columns of 3 components matrix of double-precision floating-point numbers. More...
 
typedef mat< 2, 4, double, defaultp > dmat2x4
 2 columns of 4 components matrix of double-precision floating-point numbers. More...
 
typedef mat< 3, 3, double, defaultp > dmat3
 3 columns of 3 components matrix of double-precision floating-point numbers. More...
 
typedef mat< 3, 2, double, defaultp > dmat3x2
 3 columns of 2 components matrix of double-precision floating-point numbers. More...
 
typedef mat< 3, 3, double, defaultp > dmat3x3
 3 columns of 3 components matrix of double-precision floating-point numbers. More...
 
typedef mat< 3, 4, double, defaultp > dmat3x4
 3 columns of 4 components matrix of double-precision floating-point numbers. More...
 
typedef mat< 4, 4, double, defaultp > dmat4
 4 columns of 4 components matrix of double-precision floating-point numbers. More...
 
typedef mat< 4, 2, double, defaultp > dmat4x2
 4 columns of 2 components matrix of double-precision floating-point numbers. More...
 
typedef mat< 4, 3, double, defaultp > dmat4x3
 4 columns of 3 components matrix of double-precision floating-point numbers. More...
 
typedef mat< 4, 4, double, defaultp > dmat4x4
 4 columns of 4 components matrix of double-precision floating-point numbers. More...
 
typedef mat< 2, 2, float, defaultp > mat2
 2 columns of 2 components matrix of single-precision floating-point numbers. More...
 
typedef mat< 2, 2, float, defaultp > mat2x2
 2 columns of 2 components matrix of single-precision floating-point numbers. More...
 
typedef mat< 2, 3, float, defaultp > mat2x3
 2 columns of 3 components matrix of single-precision floating-point numbers. More...
 
typedef mat< 2, 4, float, defaultp > mat2x4
 2 columns of 4 components matrix of single-precision floating-point numbers. More...
 
typedef mat< 3, 3, float, defaultp > mat3
 3 columns of 3 components matrix of single-precision floating-point numbers. More...
 
typedef mat< 3, 3, float, defaultp > mat3x3
 3 columns of 3 components matrix of single-precision floating-point numbers. More...
 
typedef mat< 3, 4, float, defaultp > mat3x4
 3 columns of 4 components matrix of single-precision floating-point numbers. More...
 
typedef mat< 4, 2, float, defaultp > mat4x2
 4 columns of 2 components matrix of single-precision floating-point numbers. More...
 
typedef mat< 4, 3, float, defaultp > mat4x3
 4 columns of 3 components matrix of single-precision floating-point numbers. More...
 
typedef mat< 4, 4, float, defaultp > mat4x4
 4 columns of 4 components matrix of single-precision floating-point numbers. More...
 
typedef mat< 4, 4, float, defaultp > mat4
 4 columns of 4 components matrix of single-precision floating-point numbers. More...
 

Detailed Description

Matrix types with C columns and R rows, where C and R are between 2 and 4 inclusive.

These types have exhaustive sets of operators.

Typedef Documentation

typedef mat< 2, 2, f64, defaultp > dmat2

2 columns of 2 components matrix of double-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices

Definition at line 20 of file matrix_double2x2.hpp.

typedef mat< 2, 2, double, defaultp > dmat2x2

2 columns of 2 components matrix of double-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices

Definition at line 15 of file matrix_double2x2.hpp.

typedef mat< 2, 3, double, defaultp > dmat2x3

2 columns of 3 components matrix of double-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices

Definition at line 15 of file matrix_double2x3.hpp.

typedef mat< 2, 4, double, defaultp > dmat2x4

2 columns of 4 components matrix of double-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices

Definition at line 15 of file matrix_double2x4.hpp.

typedef mat< 3, 3, f64, defaultp > dmat3

3 columns of 3 components matrix of double-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices

Definition at line 20 of file matrix_double3x3.hpp.

typedef mat< 3, 2, double, defaultp > dmat3x2

3 columns of 2 components matrix of double-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices

Definition at line 15 of file matrix_double3x2.hpp.

typedef mat< 3, 3, double, defaultp > dmat3x3

3 columns of 3 components matrix of double-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices

Definition at line 15 of file matrix_double3x3.hpp.

typedef mat< 3, 4, double, defaultp > dmat3x4

3 columns of 4 components matrix of double-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices

Definition at line 15 of file matrix_double3x4.hpp.

typedef mat< 4, 4, f64, defaultp > dmat4

4 columns of 4 components matrix of double-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices

Definition at line 20 of file matrix_double4x4.hpp.

typedef mat< 4, 2, double, defaultp > dmat4x2

4 columns of 2 components matrix of double-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices

Definition at line 15 of file matrix_double4x2.hpp.

typedef mat< 4, 3, double, defaultp > dmat4x3

4 columns of 3 components matrix of double-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices

Definition at line 15 of file matrix_double4x3.hpp.

typedef mat< 4, 4, double, defaultp > dmat4x4

4 columns of 4 components matrix of double-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices

Definition at line 15 of file matrix_double4x4.hpp.

typedef mat< 2, 2, f32, defaultp > mat2

2 columns of 2 components matrix of single-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices

Definition at line 20 of file matrix_float2x2.hpp.

typedef mat< 2, 2, f32, defaultp > mat2x2

2 columns of 2 components matrix of single-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices

Definition at line 15 of file matrix_float2x2.hpp.

typedef mat< 2, 3, f32, defaultp > mat2x3

2 columns of 3 components matrix of single-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices

Definition at line 15 of file matrix_float2x3.hpp.

typedef mat< 2, 4, f32, defaultp > mat2x4

2 columns of 4 components matrix of single-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices

Definition at line 15 of file matrix_float2x4.hpp.

typedef mat< 3, 3, f32, defaultp > mat3

3 columns of 3 components matrix of single-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices

Definition at line 20 of file matrix_float3x3.hpp.

typedef mat< 3, 3, f32, defaultp > mat3x3

3 columns of 3 components matrix of single-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices

Definition at line 15 of file matrix_float3x3.hpp.

typedef mat< 3, 4, f32, defaultp > mat3x4

3 columns of 4 components matrix of single-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices

Definition at line 15 of file matrix_float3x4.hpp.

typedef mat< 4, 4, f32, defaultp > mat4

4 columns of 4 components matrix of single-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices

Definition at line 20 of file matrix_float4x4.hpp.

typedef mat< 4, 2, f32, defaultp > mat4x2

4 columns of 2 components matrix of single-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices

Definition at line 15 of file matrix_float4x2.hpp.

typedef mat< 4, 3, f32, defaultp > mat4x3

4 columns of 3 components matrix of single-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices

Definition at line 15 of file matrix_float4x3.hpp.

typedef mat< 4, 4, f32, defaultp > mat4x4

4 columns of 4 components matrix of single-precision floating-point numbers.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices

Definition at line 15 of file matrix_float4x4.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00284.html ================================================
0.9.9 API documentation: Matrix types with precision qualifiers

Matrix types with precision qualifiers, which may result in varying precision in terms of ULPs. More...

Typedefs

typedef mat< 2, 2, double, highp > highp_dmat2
 2 columns of 2 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 2, double, highp > highp_dmat2x2
 2 columns of 2 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 3, double, highp > highp_dmat2x3
 2 columns of 3 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 4, double, highp > highp_dmat2x4
 2 columns of 4 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 3, double, highp > highp_dmat3
 3 columns of 3 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 2, double, highp > highp_dmat3x2
 3 columns of 2 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 3, double, highp > highp_dmat3x3
 3 columns of 3 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 4, double, highp > highp_dmat3x4
 3 columns of 4 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 4, double, highp > highp_dmat4
 4 columns of 4 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 2, double, highp > highp_dmat4x2
 4 columns of 2 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 3, double, highp > highp_dmat4x3
 4 columns of 3 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 4, double, highp > highp_dmat4x4
 4 columns of 4 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 2, float, highp > highp_mat2
 2 columns of 2 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 2, float, highp > highp_mat2x2
 2 columns of 2 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 3, float, highp > highp_mat2x3
 2 columns of 3 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 4, float, highp > highp_mat2x4
 2 columns of 4 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 3, float, highp > highp_mat3
 3 columns of 3 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 2, float, highp > highp_mat3x2
 3 columns of 2 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 3, float, highp > highp_mat3x3
 3 columns of 3 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 4, float, highp > highp_mat3x4
 3 columns of 4 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 4, float, highp > highp_mat4
 4 columns of 4 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 2, float, highp > highp_mat4x2
 4 columns of 2 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 3, float, highp > highp_mat4x3
 4 columns of 3 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 4, float, highp > highp_mat4x4
 4 columns of 4 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 2, double, lowp > lowp_dmat2
 2 columns of 2 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 2, double, lowp > lowp_dmat2x2
 2 columns of 2 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 3, double, lowp > lowp_dmat2x3
 2 columns of 3 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 4, double, lowp > lowp_dmat2x4
 2 columns of 4 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 3, double, lowp > lowp_dmat3
 3 columns of 3 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 2, double, lowp > lowp_dmat3x2
 3 columns of 2 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 3, double, lowp > lowp_dmat3x3
 3 columns of 3 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 4, double, lowp > lowp_dmat3x4
 3 columns of 4 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 4, double, lowp > lowp_dmat4
 4 columns of 4 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 2, double, lowp > lowp_dmat4x2
 4 columns of 2 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 3, double, lowp > lowp_dmat4x3
 4 columns of 3 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 4, double, lowp > lowp_dmat4x4
 4 columns of 4 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 2, float, lowp > lowp_mat2
 2 columns of 2 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 2, float, lowp > lowp_mat2x2
 2 columns of 2 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 3, float, lowp > lowp_mat2x3
 2 columns of 3 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 4, float, lowp > lowp_mat2x4
 2 columns of 4 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 3, float, lowp > lowp_mat3
 3 columns of 3 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 2, float, lowp > lowp_mat3x2
 3 columns of 2 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 3, float, lowp > lowp_mat3x3
 3 columns of 3 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 4, float, lowp > lowp_mat3x4
 3 columns of 4 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 4, float, lowp > lowp_mat4
 4 columns of 4 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 2, float, lowp > lowp_mat4x2
 4 columns of 2 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 3, float, lowp > lowp_mat4x3
 4 columns of 3 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 4, float, lowp > lowp_mat4x4
 4 columns of 4 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 2, double, mediump > mediump_dmat2
 2 columns of 2 components matrix of double-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 2, double, mediump > mediump_dmat2x2
 2 columns of 2 components matrix of double-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 3, double, mediump > mediump_dmat2x3
 2 columns of 3 components matrix of double-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 4, double, mediump > mediump_dmat2x4
 2 columns of 4 components matrix of double-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 3, double, mediump > mediump_dmat3
 3 columns of 3 components matrix of double-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 2, double, mediump > mediump_dmat3x2
 3 columns of 2 components matrix of double-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 3, double, mediump > mediump_dmat3x3
 3 columns of 3 components matrix of double-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 4, double, mediump > mediump_dmat3x4
 3 columns of 4 components matrix of double-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 4, double, mediump > mediump_dmat4
 4 columns of 4 components matrix of double-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 2, double, mediump > mediump_dmat4x2
 4 columns of 2 components matrix of double-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 3, double, mediump > mediump_dmat4x3
 4 columns of 3 components matrix of double-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 4, double, mediump > mediump_dmat4x4
 4 columns of 4 components matrix of double-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 2, float, mediump > mediump_mat2
 2 columns of 2 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 2, float, mediump > mediump_mat2x2
 2 columns of 2 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 3, float, mediump > mediump_mat2x3
 2 columns of 3 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 
typedef mat< 2, 4, float, mediump > mediump_mat2x4
 2 columns of 4 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 3, float, mediump > mediump_mat3
 3 columns of 3 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 2, float, mediump > mediump_mat3x2
 3 columns of 2 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 3, float, mediump > mediump_mat3x3
 3 columns of 3 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 
typedef mat< 3, 4, float, mediump > mediump_mat3x4
 3 columns of 4 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 4, float, mediump > mediump_mat4
 4 columns of 4 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 2, float, mediump > mediump_mat4x2
 4 columns of 2 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 3, float, mediump > mediump_mat4x3
 4 columns of 3 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 
typedef mat< 4, 4, float, mediump > mediump_mat4x4
 4 columns of 4 components matrix of single-precision floating-point numbers using medium precision arithmetic in term of ULPs. More...
 

Detailed Description

Matrix types with precision qualifiers, which may result in varying precision in terms of ULPs.

GLSL allows precision qualifiers to be specified on individual variables. In desktop OpenGL's GLSL these qualifiers have no effect and exist only for compatibility; in OpenGL ES's GLSL they do affect the precision of computation.

C++ has no language equivalent to precision qualifiers. So GLM provides the next-best thing: a set of typedefs that request a particular qualifier.

None of these types make any guarantees about the actual precision used.

Typedef Documentation

typedef mat< 2, 2, f64, highp > highp_dmat2

2 columns of 2 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file matrix_double2x2_precision.hpp.

typedef mat< 2, 2, double, highp > highp_dmat2x2

2 columns of 2 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 46 of file matrix_double2x2_precision.hpp.

typedef mat< 2, 3, double, highp > highp_dmat2x3

2 columns of 3 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file matrix_double2x3_precision.hpp.

typedef mat< 2, 4, double, highp > highp_dmat2x4

2 columns of 4 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file matrix_double2x4_precision.hpp.

typedef mat< 3, 3, f64, highp > highp_dmat3

3 columns of 3 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file matrix_double3x3_precision.hpp.

typedef mat< 3, 2, double, highp > highp_dmat3x2

3 columns of 2 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file matrix_double3x2_precision.hpp.

typedef mat< 3, 3, double, highp > highp_dmat3x3

3 columns of 3 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 46 of file matrix_double3x3_precision.hpp.

typedef mat< 3, 4, double, highp > highp_dmat3x4

3 columns of 4 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file matrix_double3x4_precision.hpp.

typedef mat< 4, 4, f64, highp > highp_dmat4

4 columns of 4 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file matrix_double4x4_precision.hpp.

typedef mat< 4, 2, double, highp > highp_dmat4x2

4 columns of 2 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file matrix_double4x2_precision.hpp.

typedef mat< 4, 3, double, highp > highp_dmat4x3

4 columns of 3 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file matrix_double4x3_precision.hpp.

typedef mat< 4, 4, double, highp > highp_dmat4x4

4 columns of 4 components matrix of double-precision floating-point numbers using high precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 46 of file matrix_double4x4_precision.hpp.

typedef mat< 2, 2, f32, highp > highp_mat2

2 columns of 2 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file matrix_float2x2_precision.hpp.

typedef mat< 2, 2, f32, highp > highp_mat2x2

2 columns of 2 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 46 of file matrix_float2x2_precision.hpp.

typedef mat< 2, 3, f32, highp > highp_mat2x3

2 columns of 3 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file matrix_float2x3_precision.hpp.

typedef mat< 2, 4, f32, highp > highp_mat2x4

2 columns of 4 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file matrix_float2x4_precision.hpp.

typedef mat< 3, 3, f32, highp > highp_mat3

3 columns of 3 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file matrix_float3x3_precision.hpp.

typedef mat< 3, 2, f32, highp > highp_mat3x2

3 columns of 2 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file matrix_float3x2_precision.hpp.

typedef mat< 3, 3, f32, highp > highp_mat3x3

3 columns of 3 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 46 of file matrix_float3x3_precision.hpp.

typedef mat< 3, 4, f32, highp > highp_mat3x4

3 columns of 4 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file matrix_float3x4_precision.hpp.

typedef mat< 4, 4, f32, highp > highp_mat4

4 columns of 4 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file matrix_float4x4_precision.hpp.

typedef mat< 4, 2, f32, highp > highp_mat4x2

4 columns of 2 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file matrix_float4x2_precision.hpp.

typedef mat< 4, 3, f32, highp > highp_mat4x3

4 columns of 3 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 28 of file matrix_float4x3_precision.hpp.

typedef mat< 4, 4, f32, highp > highp_mat4x4

4 columns of 4 components matrix of single-precision floating-point numbers using high precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 46 of file matrix_float4x4_precision.hpp.

typedef mat< 2, 2, f64, lowp > lowp_dmat2

2 columns of 2 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file matrix_double2x2_precision.hpp.

typedef mat< 2, 2, double, lowp > lowp_dmat2x2

2 columns of 2 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 34 of file matrix_double2x2_precision.hpp.

typedef mat< 2, 3, double, lowp > lowp_dmat2x3

2 columns of 3 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file matrix_double2x3_precision.hpp.

typedef mat< 2, 4, double, lowp > lowp_dmat2x4

2 columns of 4 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file matrix_double2x4_precision.hpp.

typedef mat< 3, 3, f64, lowp > lowp_dmat3

3 columns of 3 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file matrix_double3x3_precision.hpp.

typedef mat< 3, 2, double, lowp > lowp_dmat3x2

3 columns of 2 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file matrix_double3x2_precision.hpp.

typedef mat< 3, 3, double, lowp > lowp_dmat3x3

3 columns of 3 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 34 of file matrix_double3x3_precision.hpp.

typedef mat< 3, 4, double, lowp > lowp_dmat3x4

3 columns of 4 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file matrix_double3x4_precision.hpp.

typedef mat< 4, 4, f64, lowp > lowp_dmat4

4 columns of 4 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file matrix_double4x4_precision.hpp.

typedef mat< 4, 2, double, lowp > lowp_dmat4x2

4 columns of 2 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file matrix_double4x2_precision.hpp.

typedef mat< 4, 3, double, lowp > lowp_dmat4x3

4 columns of 3 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file matrix_double4x3_precision.hpp.

typedef mat< 4, 4, double, lowp > lowp_dmat4x4

4 columns of 4 components matrix of double-precision floating-point numbers using low precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 34 of file matrix_double4x4_precision.hpp.

typedef mat< 2, 2, f32, lowp > lowp_mat2

2 columns of 2 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file matrix_float2x2_precision.hpp.

typedef mat< 2, 2, f32, lowp > lowp_mat2x2

2 columns of 2 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 34 of file matrix_float2x2_precision.hpp.

typedef mat< 2, 3, f32, lowp > lowp_mat2x3

2 columns of 3 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file matrix_float2x3_precision.hpp.

typedef mat< 2, 4, f32, lowp > lowp_mat2x4

2 columns of 4 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file matrix_float2x4_precision.hpp.

typedef mat< 3, 3, f32, lowp > lowp_mat3

3 columns of 3 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file matrix_float3x3_precision.hpp.

typedef mat< 3, 2, f32, lowp > lowp_mat3x2

3 columns of 2 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file matrix_float3x2_precision.hpp.

typedef mat< 3, 3, f32, lowp > lowp_mat3x3

3 columns of 3 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 34 of file matrix_float3x3_precision.hpp.

typedef mat< 3, 4, f32, lowp > lowp_mat3x4

3 columns of 4 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file matrix_float3x4_precision.hpp.

typedef mat< 4, 4, f32, lowp > lowp_mat4

4 columns of 4 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file matrix_float4x4_precision.hpp.

typedef mat< 4, 2, f32, lowp > lowp_mat4x2

4 columns of 2 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file matrix_float4x2_precision.hpp.

typedef mat< 4, 3, f32, lowp > lowp_mat4x3

4 columns of 3 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 16 of file matrix_float4x3_precision.hpp.

typedef mat< 4, 4, f32, lowp > lowp_mat4x4

4 columns of 4 components matrix of single-precision floating-point numbers using low precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 34 of file matrix_float4x4_precision.hpp.

typedef mat< 2, 2, f64, mediump > mediump_dmat2

2 columns of 2 components matrix of double-precision floating-point numbers using medium precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file matrix_double2x2_precision.hpp.

typedef mat< 2, 2, double, mediump > mediump_dmat2x2

2 columns of 2 components matrix of double-precision floating-point numbers using medium precision arithmetic in term of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 40 of file matrix_double2x2_precision.hpp.

typedef mat< 2, 3, double, mediump > mediump_dmat2x3

2 columns of 3 components matrix of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file matrix_double2x3_precision.hpp.

typedef mat< 2, 4, double, mediump > mediump_dmat2x4

2 columns of 4 components matrix of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file matrix_double2x4_precision.hpp.

typedef mat< 3, 3, f64, mediump > mediump_dmat3

3 columns of 3 components matrix of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file matrix_double3x3_precision.hpp.

typedef mat< 3, 2, double, mediump > mediump_dmat3x2

3 columns of 2 components matrix of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file matrix_double3x2_precision.hpp.

typedef mat< 3, 3, double, mediump > mediump_dmat3x3

3 columns of 3 components matrix of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 40 of file matrix_double3x3_precision.hpp.

typedef mat< 3, 4, double, mediump > mediump_dmat3x4

3 columns of 4 components matrix of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file matrix_double3x4_precision.hpp.

typedef mat< 4, 4, f64, mediump > mediump_dmat4

4 columns of 4 components matrix of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file matrix_double4x4_precision.hpp.

typedef mat< 4, 2, double, mediump > mediump_dmat4x2

4 columns of 2 components matrix of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file matrix_double4x2_precision.hpp.

typedef mat< 4, 3, double, mediump > mediump_dmat4x3

4 columns of 3 components matrix of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file matrix_double4x3_precision.hpp.

typedef mat< 4, 4, double, mediump > mediump_dmat4x4

4 columns of 4 components matrix of double-precision floating-point numbers using medium precision arithmetic in terms of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 40 of file matrix_double4x4_precision.hpp.

typedef mat< 2, 2, f32, mediump > mediump_mat2

2 columns of 2 components matrix of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file matrix_float2x2_precision.hpp.

typedef mat< 2, 2, f32, mediump > mediump_mat2x2

2 columns of 2 components matrix of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 40 of file matrix_float2x2_precision.hpp.

typedef mat< 2, 3, f32, mediump > mediump_mat2x3

2 columns of 3 components matrix of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file matrix_float2x3_precision.hpp.

typedef mat< 2, 4, f32, mediump > mediump_mat2x4

2 columns of 4 components matrix of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file matrix_float2x4_precision.hpp.

typedef mat< 3, 3, f32, mediump > mediump_mat3

3 columns of 3 components matrix of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file matrix_float3x3_precision.hpp.

typedef mat< 3, 2, f32, mediump > mediump_mat3x2

3 columns of 2 components matrix of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file matrix_float3x2_precision.hpp.

typedef mat< 3, 3, f32, mediump > mediump_mat3x3

3 columns of 3 components matrix of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 40 of file matrix_float3x3_precision.hpp.

typedef mat< 3, 4, f32, mediump > mediump_mat3x4

3 columns of 4 components matrix of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file matrix_float3x4_precision.hpp.

typedef mat< 4, 4, f32, mediump > mediump_mat4

4 columns of 4 components matrix of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file matrix_float4x4_precision.hpp.

typedef mat< 4, 2, f32, mediump > mediump_mat4x2

4 columns of 2 components matrix of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file matrix_float4x2_precision.hpp.

typedef mat< 4, 3, f32, mediump > mediump_mat4x3

4 columns of 3 components matrix of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 22 of file matrix_float4x3_precision.hpp.

typedef mat< 4, 4, f32, mediump > mediump_mat4x4

4 columns of 4 components matrix of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.

See also
GLSL 4.20.8 specification, section 4.1.6 Matrices
GLSL 4.20.8 specification, section 4.7.2 Precision Qualifier

Definition at line 40 of file matrix_float4x4_precision.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00285.html ================================================ 0.9.9 API documentation: Stable extensions
Stable extensions

Additional features not specified by GLSL specification. More...

Modules

 GLM_EXT_matrix_clip_space
 Defines functions that generate clip space transformation matrices.
 
 GLM_EXT_matrix_common
 Defines functions for common matrix operations.
 
 GLM_EXT_matrix_projection
 Functions that generate common projection transformation matrices.
 
 GLM_EXT_matrix_relational
 Exposes comparison functions for matrix types that take a user-defined epsilon value.
 
 GLM_EXT_matrix_transform
 Defines functions that generate common transformation matrices.
 
 GLM_EXT_quaternion_common
 Provides common functions for quaternion types.
 
 GLM_EXT_quaternion_double
 Exposes double-precision floating point quaternion type.
 
 GLM_EXT_quaternion_double_precision
 Exposes double-precision floating point quaternion types with various precisions in terms of ULPs.
 
 GLM_EXT_quaternion_exponential
 Provides exponential functions for quaternion types.
 
 GLM_EXT_quaternion_float
 Exposes single-precision floating point quaternion type.
 
 GLM_EXT_quaternion_float_precision
 Exposes single-precision floating point quaternion types with various precisions in terms of ULPs.
 
 GLM_EXT_quaternion_geometric
 Provides geometric functions for quaternion types.
 
 GLM_EXT_quaternion_relational
 Exposes comparison functions for quaternion types that take a user-defined epsilon value.
 
 GLM_EXT_quaternion_transform
 Provides transformation functions for quaternion types.
 
 GLM_EXT_quaternion_trigonometric
 Provides trigonometric functions for quaternion types.
 
 GLM_EXT_scalar_common
 Exposes min and max functions for 3 to 4 scalar parameters.
 
 GLM_EXT_scalar_constants
 Provides a list of constants and precomputed useful values.
 
 GLM_EXT_scalar_int_sized
 Exposes sized signed integer scalar types.
 
 GLM_EXT_scalar_integer
 Include <glm/ext/scalar_integer.hpp> to use the features of this extension.
 
 GLM_EXT_scalar_relational
 Exposes comparison functions for scalar types that take a user-defined epsilon value.
 
 GLM_EXT_scalar_uint_sized
 Exposes sized unsigned integer scalar types.
 
 GLM_EXT_scalar_ulp
 Allows measuring the accuracy of a function against a reference implementation.
 
 GLM_EXT_vector_bool1
 Exposes bvec1 vector type.
 
 GLM_EXT_vector_bool1_precision
 Exposes highp_bvec1, mediump_bvec1 and lowp_bvec1 types.
 
 GLM_EXT_vector_common
 Exposes min and max functions for 3 to 4 vector parameters.
 
 GLM_EXT_vector_double1
 Exposes double-precision floating point vector type with one component.
 
 GLM_EXT_vector_double1_precision
 Exposes highp_dvec1, mediump_dvec1 and lowp_dvec1 types.
 
 GLM_EXT_vector_float1
 Exposes single-precision floating point vector type with one component.
 
 GLM_EXT_vector_float1_precision
 Exposes highp_vec1, mediump_vec1 and lowp_vec1 types.
 
 GLM_EXT_vector_int1
 Exposes ivec1 vector type.
 
 GLM_EXT_vector_int1_precision
 Exposes highp_ivec1, mediump_ivec1 and lowp_ivec1 types.
 
 GLM_EXT_vector_integer
 Include <glm/ext/vector_integer.hpp> to use the features of this extension.
 
 GLM_EXT_vector_relational
 Exposes comparison functions for vector types that take a user-defined epsilon value.
 
 GLM_EXT_vector_uint1
 Exposes uvec1 vector type.
 
 GLM_EXT_vector_uint1_precision
 Exposes highp_uvec1, mediump_uvec1 and lowp_uvec1 types.
 
 GLM_EXT_vector_ulp
 Allows measuring the accuracy of a function against a reference implementation.
 

Detailed Description

Additional features not specified by GLSL specification.

EXT extensions are fully tested and documented.

Although it is highly discouraged, it is possible to include all the extensions at once by including <glm/ext.hpp>. Otherwise, each extension needs to be included via its specific file.
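The per-extension include pattern looks like the fragment below (the chosen headers are just examples; any extension header from the tables above works the same way):

```cpp
// Preferred: include only the extensions you actually use; each header
// is self-contained and pulls in what it needs.
#include <glm/ext/matrix_transform.hpp>   // GLM_EXT_matrix_transform
#include <glm/ext/quaternion_common.hpp>  // GLM_EXT_quaternion_common

// Discouraged: pulls every stable extension into the translation unit.
// #include <glm/ext.hpp>
```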

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00286.html ================================================ 0.9.9 API documentation: Recommended extensions
Recommended extensions

Additional features not specified by GLSL specification. More...

Modules

 GLM_GTC_bitfield
 Include <glm/gtc/bitfield.hpp> to use the features of this extension.
 
 GLM_GTC_color_space
 Include <glm/gtc/color_space.hpp> to use the features of this extension.
 
 GLM_GTC_constants
 Include <glm/gtc/constants.hpp> to use the features of this extension.
 
 GLM_GTC_epsilon
 Include <glm/gtc/epsilon.hpp> to use the features of this extension.
 
 GLM_GTC_integer
 Include <glm/gtc/integer.hpp> to use the features of this extension.
 
 GLM_GTC_matrix_access
 Include <glm/gtc/matrix_access.hpp> to use the features of this extension.
 
 GLM_GTC_matrix_integer
 Include <glm/gtc/matrix_integer.hpp> to use the features of this extension.
 
 GLM_GTC_matrix_inverse
 Include <glm/gtc/matrix_inverse.hpp> to use the features of this extension.
 
 GLM_GTC_matrix_transform
 Include <glm/gtc/matrix_transform.hpp> to use the features of this extension.
 
 GLM_GTC_noise
 Include <glm/gtc/noise.hpp> to use the features of this extension.
 
 GLM_GTC_packing
 Include <glm/gtc/packing.hpp> to use the features of this extension.
 
 GLM_GTC_quaternion
 Include <glm/gtc/quaternion.hpp> to use the features of this extension.
 
 GLM_GTC_random
 Include <glm/gtc/random.hpp> to use the features of this extension.
 
 GLM_GTC_reciprocal
 Include <glm/gtc/reciprocal.hpp> to use the features of this extension.
 
 GLM_GTC_round
 Include <glm/gtc/round.hpp> to use the features of this extension.
 
 GLM_GTC_type_aligned
 Include <glm/gtc/type_aligned.hpp> to use the features of this extension.
 
 GLM_GTC_type_precision
 Include <glm/gtc/type_precision.hpp> to use the features of this extension.
 
 GLM_GTC_type_ptr
 Include <glm/gtc/type_ptr.hpp> to use the features of this extension.
 
 GLM_GTC_ulp
 Include <glm/gtc/ulp.hpp> to use the features of this extension.
 
 GLM_GTC_vec1
 Include <glm/gtc/vec1.hpp> to use the features of this extension.
 

Detailed Description

Additional features not specified by GLSL specification.

GTC extensions aim to be stable with tests and documentation.

Although it is highly discouraged, it is possible to include all the extensions at once by including <glm/ext.hpp>. Otherwise, each extension needs to be included via its specific file.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00287.html ================================================ 0.9.9 API documentation: Experimental extensions
Experimental extensions

Experimental features not specified by GLSL specification. More...

Modules

 GLM_GTX_associated_min_max
 Include <glm/gtx/associated_min_max.hpp> to use the features of this extension.
 
 GLM_GTX_bit
 Include <glm/gtx/bit.hpp> to use the features of this extension.
 
 GLM_GTX_closest_point
 Include <glm/gtx/closest_point.hpp> to use the features of this extension.
 
 GLM_GTX_color_encoding
 Include <glm/gtx/color_encoding.hpp> to use the features of this extension.
 
 GLM_GTX_color_space
 Include <glm/gtx/color_space.hpp> to use the features of this extension.
 
 GLM_GTX_color_space_YCoCg
 Include <glm/gtx/color_space_YCoCg.hpp> to use the features of this extension.
 
 GLM_GTX_common
 Include <glm/gtx/common.hpp> to use the features of this extension.
 
 GLM_GTX_compatibility
 Include <glm/gtx/compatibility.hpp> to use the features of this extension.
 
 GLM_GTX_component_wise
 Include <glm/gtx/component_wise.hpp> to use the features of this extension.
 
 GLM_GTX_dual_quaternion
 Include <glm/gtx/dual_quaternion.hpp> to use the features of this extension.
 
 GLM_GTX_easing
 Include <glm/gtx/easing.hpp> to use the features of this extension.
 
 GLM_GTX_euler_angles
 Include <glm/gtx/euler_angles.hpp> to use the features of this extension.
 
 GLM_GTX_extend
 Include <glm/gtx/extend.hpp> to use the features of this extension.
 
 GLM_GTX_extented_min_max
 Include <glm/gtx/extented_min_max.hpp> to use the features of this extension.
 
 GLM_GTX_exterior_product
 Include <glm/gtx/exterior_product.hpp> to use the features of this extension.
 
 GLM_GTX_fast_exponential
 Include <glm/gtx/fast_exponential.hpp> to use the features of this extension.
 
 GLM_GTX_fast_square_root
 Include <glm/gtx/fast_square_root.hpp> to use the features of this extension.
 
 GLM_GTX_fast_trigonometry
 Include <glm/gtx/fast_trigonometry.hpp> to use the features of this extension.
 
 GLM_GTX_functions
 Include <glm/gtx/functions.hpp> to use the features of this extension.
 
 GLM_GTX_gradient_paint
 Include <glm/gtx/gradient_paint.hpp> to use the features of this extension.
 
 GLM_GTX_handed_coordinate_space
 Include <glm/gtx/handed_coordinate_space.hpp> to use the features of this extension.
 
 GLM_GTX_hash
 Include <glm/gtx/hash.hpp> to use the features of this extension.
 
 GLM_GTX_integer
 Include <glm/gtx/integer.hpp> to use the features of this extension.
 
 GLM_GTX_intersect
 Include <glm/gtx/intersect.hpp> to use the features of this extension.
 
 GLM_GTX_io
 Include <glm/gtx/io.hpp> to use the features of this extension.
 
 GLM_GTX_log_base
 Include <glm/gtx/log_base.hpp> to use the features of this extension.
 
 GLM_GTX_matrix_cross_product
 Include <glm/gtx/matrix_cross_product.hpp> to use the features of this extension.
 
 GLM_GTX_matrix_decompose
 Include <glm/gtx/matrix_decompose.hpp> to use the features of this extension.
 
 GLM_GTX_matrix_factorisation
 Include <glm/gtx/matrix_factorisation.hpp> to use the features of this extension.
 
 GLM_GTX_matrix_interpolation
 Include <glm/gtx/matrix_interpolation.hpp> to use the features of this extension.
 
 GLM_GTX_matrix_major_storage
 Include <glm/gtx/matrix_major_storage.hpp> to use the features of this extension.
 
 GLM_GTX_matrix_operation
 Include <glm/gtx/matrix_operation.hpp> to use the features of this extension.
 
 GLM_GTX_matrix_query
 Include <glm/gtx/matrix_query.hpp> to use the features of this extension.
 
 GLM_GTX_matrix_transform_2d
 Include <glm/gtx/matrix_transform_2d.hpp> to use the features of this extension.
 
 GLM_GTX_mixed_producte
 Include <glm/gtx/mixed_product.hpp> to use the features of this extension.
 
 GLM_GTX_norm
 Include <glm/gtx/norm.hpp> to use the features of this extension.
 
 GLM_GTX_normal
 Include <glm/gtx/normal.hpp> to use the features of this extension.
 
 GLM_GTX_normalize_dot
 Include <glm/gtx/normalize_dot.hpp> to use the features of this extension.
 
 GLM_GTX_number_precision
 Include <glm/gtx/number_precision.hpp> to use the features of this extension.
 
 GLM_GTX_optimum_pow
 Include <glm/gtx/optimum_pow.hpp> to use the features of this extension.
 
 GLM_GTX_orthonormalize
 Include <glm/gtx/orthonormalize.hpp> to use the features of this extension.
 
 GLM_GTX_perpendicular
 Include <glm/gtx/perpendicular.hpp> to use the features of this extension.
 
 GLM_GTX_polar_coordinates
 Include <glm/gtx/polar_coordinates.hpp> to use the features of this extension.
 
 GLM_GTX_projection
 Include <glm/gtx/projection.hpp> to use the features of this extension.
 
 GLM_GTX_quaternion
 Include <glm/gtx/quaternion.hpp> to use the features of this extension.
 
 GLM_GTX_range
 Include <glm/gtx/range.hpp> to use the features of this extension.
 
 GLM_GTX_raw_data
 Include <glm/gtx/raw_data.hpp> to use the features of this extension.
 
 GLM_GTX_rotate_normalized_axis
 Include <glm/gtx/rotate_normalized_axis.hpp> to use the features of this extension.
 
 GLM_GTX_rotate_vector
 Include <glm/gtx/rotate_vector.hpp> to use the features of this extension.
 
 GLM_GTX_scalar_relational
 Include <glm/gtx/scalar_relational.hpp> to use the features of this extension.
 
 GLM_GTX_spline
 Include <glm/gtx/spline.hpp> to use the features of this extension.
 
 GLM_GTX_std_based_type
 Include <glm/gtx/std_based_type.hpp> to use the features of this extension.
 
 GLM_GTX_string_cast
 Include <glm/gtx/string_cast.hpp> to use the features of this extension.
 
 GLM_GTX_texture
 Include <glm/gtx/texture.hpp> to use the features of this extension.
 
 GLM_GTX_transform
 Include <glm/gtx/transform.hpp> to use the features of this extension.
 
 GLM_GTX_transform2
 Include <glm/gtx/transform2.hpp> to use the features of this extension.
 
 GLM_GTX_type_aligned
 Include <glm/gtx/type_aligned.hpp> to use the features of this extension.
 
 GLM_GTX_type_trait
 Include <glm/gtx/type_trait.hpp> to use the features of this extension.
 
 GLM_GTX_vec_swizzle
 Include <glm/gtx/vec_swizzle.hpp> to use the features of this extension.
 
 GLM_GTX_vector_angle
 Include <glm/gtx/vector_angle.hpp> to use the features of this extension.
 
 GLM_GTX_vector_query
 Include <glm/gtx/vector_query.hpp> to use the features of this extension.
 
 GLM_GTX_wrap
 Include <glm/gtx/wrap.hpp> to use the features of this extension.
 

Detailed Description

Experimental features not specified by GLSL specification.

Experimental extensions are useful functions and types, but the development of their API and functionality is not necessarily stable. They can change substantially between versions. Backwards compatibility is not much of an issue for them.

Although it is highly discouraged, it is possible to include all the extensions at once by including <glm/ext.hpp>. Otherwise, each extension needs to be included via its specific file.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00288.html ================================================ 0.9.9 API documentation: GLM_GTC_bitfield

Include <glm/gtc/bitfield.hpp> to use the features of this extension. More...

Functions

GLM_FUNC_DECL glm::u8vec2 bitfieldDeinterleave (glm::uint16 x)
 Deinterleaves the bits of x. More...
 
GLM_FUNC_DECL glm::u16vec2 bitfieldDeinterleave (glm::uint32 x)
 Deinterleaves the bits of x. More...
 
GLM_FUNC_DECL glm::u32vec2 bitfieldDeinterleave (glm::uint64 x)
 Deinterleaves the bits of x. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType bitfieldFillOne (genIUType Value, int FirstBit, int BitCount)
 Set to 1 a range of bits. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > bitfieldFillOne (vec< L, T, Q > const &Value, int FirstBit, int BitCount)
 Set to 1 a range of bits. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType bitfieldFillZero (genIUType Value, int FirstBit, int BitCount)
 Set to 0 a range of bits. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > bitfieldFillZero (vec< L, T, Q > const &Value, int FirstBit, int BitCount)
 Set to 0 a range of bits. More...
 
GLM_FUNC_DECL int16 bitfieldInterleave (int8 x, int8 y)
 Interleaves the bits of x and y. More...
 
GLM_FUNC_DECL uint16 bitfieldInterleave (uint8 x, uint8 y)
 Interleaves the bits of x and y. More...
 
GLM_FUNC_DECL uint16 bitfieldInterleave (u8vec2 const &v)
 Interleaves the bits of x and y. More...
 
GLM_FUNC_DECL int32 bitfieldInterleave (int16 x, int16 y)
 Interleaves the bits of x and y. More...
 
GLM_FUNC_DECL uint32 bitfieldInterleave (uint16 x, uint16 y)
 Interleaves the bits of x and y. More...
 
GLM_FUNC_DECL uint32 bitfieldInterleave (u16vec2 const &v)
 Interleaves the bits of x and y. More...
 
GLM_FUNC_DECL int64 bitfieldInterleave (int32 x, int32 y)
 Interleaves the bits of x and y. More...
 
GLM_FUNC_DECL uint64 bitfieldInterleave (uint32 x, uint32 y)
 Interleaves the bits of x and y. More...
 
GLM_FUNC_DECL uint64 bitfieldInterleave (u32vec2 const &v)
 Interleaves the bits of x and y. More...
 
GLM_FUNC_DECL int32 bitfieldInterleave (int8 x, int8 y, int8 z)
 Interleaves the bits of x, y and z. More...
 
GLM_FUNC_DECL uint32 bitfieldInterleave (uint8 x, uint8 y, uint8 z)
 Interleaves the bits of x, y and z. More...
 
GLM_FUNC_DECL int64 bitfieldInterleave (int16 x, int16 y, int16 z)
 Interleaves the bits of x, y and z. More...
 
GLM_FUNC_DECL uint64 bitfieldInterleave (uint16 x, uint16 y, uint16 z)
 Interleaves the bits of x, y and z. More...
 
GLM_FUNC_DECL int64 bitfieldInterleave (int32 x, int32 y, int32 z)
 Interleaves the bits of x, y and z. More...
 
GLM_FUNC_DECL uint64 bitfieldInterleave (uint32 x, uint32 y, uint32 z)
 Interleaves the bits of x, y and z. More...
 
GLM_FUNC_DECL int32 bitfieldInterleave (int8 x, int8 y, int8 z, int8 w)
 Interleaves the bits of x, y, z and w. More...
 
GLM_FUNC_DECL uint32 bitfieldInterleave (uint8 x, uint8 y, uint8 z, uint8 w)
 Interleaves the bits of x, y, z and w. More...
 
GLM_FUNC_DECL int64 bitfieldInterleave (int16 x, int16 y, int16 z, int16 w)
 Interleaves the bits of x, y, z and w. More...
 
GLM_FUNC_DECL uint64 bitfieldInterleave (uint16 x, uint16 y, uint16 z, uint16 w)
 Interleaves the bits of x, y, z and w. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType bitfieldRotateLeft (genIUType In, int Shift)
 Rotate all bits to the left. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > bitfieldRotateLeft (vec< L, T, Q > const &In, int Shift)
 Rotate all bits to the left. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType bitfieldRotateRight (genIUType In, int Shift)
 Rotate all bits to the right. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > bitfieldRotateRight (vec< L, T, Q > const &In, int Shift)
 Rotate all bits to the right. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType mask (genIUType Bits)
 Build a mask of 'Bits' bits. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > mask (vec< L, T, Q > const &v)
 Build a per-component mask of bits. More...
 

Detailed Description

Include <glm/gtc/bitfield.hpp> to use the features of this extension.

Allows performing bit operations on integer values.

Function Documentation

GLM_FUNC_DECL glm::u8vec2 glm::bitfieldDeinterleave ( glm::uint16  x)

Deinterleaves the bits of x.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL glm::u16vec2 glm::bitfieldDeinterleave ( glm::uint32  x)

Deinterleaves the bits of x.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL glm::u32vec2 glm::bitfieldDeinterleave ( glm::uint64  x)

Deinterleaves the bits of x.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL genIUType glm::bitfieldFillOne ( genIUType  Value,
int  FirstBit,
int  BitCount 
)

Set to 1 a range of bits.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL vec<L, T, Q> glm::bitfieldFillOne ( vec< L, T, Q > const &  Value,
int  FirstBit,
int  BitCount 
)

Set to 1 a range of bits.

Template Parameters
L — Integer between 1 and 4 included that qualifies the dimension of the vector
T — Signed and unsigned integer scalar types
Q — Value from qualifier enum
See also
GLM_GTC_bitfield
GLM_FUNC_DECL genIUType glm::bitfieldFillZero ( genIUType  Value,
int  FirstBit,
int  BitCount 
)

Set to 0 a range of bits.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL vec<L, T, Q> glm::bitfieldFillZero ( vec< L, T, Q > const &  Value,
int  FirstBit,
int  BitCount 
)

Set to 0 a range of bits.

Template Parameters
L — Integer between 1 and 4 included that qualifies the dimension of the vector
T — Signed and unsigned integer scalar types
Q — Value from qualifier enum
See also
GLM_GTC_bitfield
GLM_FUNC_DECL int16 glm::bitfieldInterleave ( int8  x,
int8  y 
)

Interleaves the bits of x and y.

The first bit is the first bit of x followed by the first bit of y. The other bits are interleaved following the previous sequence.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL uint16 glm::bitfieldInterleave ( uint8  x,
uint8  y 
)

Interleaves the bits of x and y.

The first bit is the first bit of x followed by the first bit of y. The other bits are interleaved following the previous sequence.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL uint16 glm::bitfieldInterleave ( u8vec2 const &  v)

Interleaves the bits of x and y.

The first bit is the first bit of v.x followed by the first bit of v.y. The other bits are interleaved following the previous sequence.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL int32 glm::bitfieldInterleave ( int16  x,
int16  y 
)

Interleaves the bits of x and y.

The first bit is the first bit of x followed by the first bit of y. The other bits are interleaved following the previous sequence.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL uint32 glm::bitfieldInterleave ( uint16  x,
uint16  y 
)

Interleaves the bits of x and y.

The first bit is the first bit of x followed by the first bit of y. The other bits are interleaved following the previous sequence.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL uint32 glm::bitfieldInterleave ( u16vec2 const &  v)

Interleaves the bits of x and y.

The first bit is the first bit of v.x followed by the first bit of v.y. The other bits are interleaved following the previous sequence.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL int64 glm::bitfieldInterleave ( int32  x,
int32  y 
)

Interleaves the bits of x and y.

The first bit is the first bit of x followed by the first bit of y. The other bits are interleaved following the previous sequence.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL uint64 glm::bitfieldInterleave ( uint32  x,
uint32  y 
)

Interleaves the bits of x and y.

The first bit is the first bit of x followed by the first bit of y. The other bits are interleaved following the previous sequence.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL uint64 glm::bitfieldInterleave ( u32vec2 const &  v)

Interleaves the bits of x and y.

The first bit is the first bit of v.x followed by the first bit of v.y. The other bits are interleaved following the previous sequence.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL int32 glm::bitfieldInterleave ( int8  x,
int8  y,
int8  z 
)

Interleaves the bits of x, y and z.

The first bit is the first bit of x followed by the first bit of y and the first bit of z. The other bits are interleaved following the previous sequence.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL uint32 glm::bitfieldInterleave ( uint8  x,
uint8  y,
uint8  z 
)

Interleaves the bits of x, y and z.

The first bit is the first bit of x followed by the first bit of y and the first bit of z. The other bits are interleaved following the previous sequence.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL int64 glm::bitfieldInterleave ( int16  x,
int16  y,
int16  z 
)

Interleaves the bits of x, y and z.

The first bit is the first bit of x followed by the first bit of y and the first bit of z. The other bits are interleaved following the previous sequence.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL uint64 glm::bitfieldInterleave ( uint16  x,
uint16  y,
uint16  z 
)

Interleaves the bits of x, y and z.

The first bit is the first bit of x followed by the first bit of y and the first bit of z. The other bits are interleaved following the previous sequence.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL int64 glm::bitfieldInterleave ( int32  x,
int32  y,
int32  z 
)

Interleaves the bits of x, y and z.

The first bit is the first bit of x followed by the first bit of y and the first bit of z. The other bits are interleaved following the previous sequence.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL uint64 glm::bitfieldInterleave ( uint32  x,
uint32  y,
uint32  z 
)

Interleaves the bits of x, y and z.

The first bit is the first bit of x followed by the first bit of y and the first bit of z. The other bits are interleaved following the previous sequence.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL int32 glm::bitfieldInterleave ( int8  x,
int8  y,
int8  z,
int8  w 
)

Interleaves the bits of x, y, z and w.

The first bit is the first bit of x followed by the first bit of y, the first bit of z and finally the first bit of w. The other bits are interleaved following the previous sequence.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL uint32 glm::bitfieldInterleave(uint8 x, uint8 y, uint8 z, uint8 w)

Interleaves the bits of x, y, z and w.

The first bit is the first bit of x followed by the first bit of y, the first bit of z and finally the first bit of w. The other bits are interleaved following the previous sequence.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL int64 glm::bitfieldInterleave(int16 x, int16 y, int16 z, int16 w)

Interleaves the bits of x, y, z and w.

The first bit is the first bit of x followed by the first bit of y, the first bit of z and finally the first bit of w. The other bits are interleaved following the previous sequence.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL uint64 glm::bitfieldInterleave(uint16 x, uint16 y, uint16 z, uint16 w)

Interleaves the bits of x, y, z and w.

The first bit is the first bit of x followed by the first bit of y, the first bit of z and finally the first bit of w. The other bits are interleaved following the previous sequence.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL genIUType glm::bitfieldRotateLeft(genIUType In, int Shift)

Rotate all bits to the left.

All the bits dropped in the left side are inserted back on the right side.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL vec<L, T, Q> glm::bitfieldRotateLeft(vec<L, T, Q> const& In, int Shift)

Rotate all bits to the left.

All the bits dropped in the left side are inserted back on the right side.

Template Parameters
L  Integer between 1 and 4, inclusive, that qualifies the dimension of the vector
T  Signed and unsigned integer scalar types
Q  Value from qualifier enum
See also
GLM_GTC_bitfield
GLM_FUNC_DECL genIUType glm::bitfieldRotateRight(genIUType In, int Shift)

Rotate all bits to the right.

All the bits dropped in the right side are inserted back on the left side.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL vec<L, T, Q> glm::bitfieldRotateRight(vec<L, T, Q> const& In, int Shift)

Rotate all bits to the right.

All the bits dropped in the right side are inserted back on the left side.

Template Parameters
L  Integer between 1 and 4, inclusive, that qualifies the dimension of the vector
T  Signed and unsigned integer scalar types
Q  Value from qualifier enum
See also
GLM_GTC_bitfield
GLM_FUNC_DECL genIUType glm::mask(genIUType Bits)

Build a mask with the first 'Bits' bits set.

See also
GLM_GTC_bitfield
GLM_FUNC_DECL vec<L, T, Q> glm::mask(vec<L, T, Q> const& v)

Build a per-component mask with the number of bits given by 'v'.

Template Parameters
L  Integer between 1 and 4, inclusive, that qualifies the dimension of the vector
T  Signed and unsigned integer scalar types
Q  Value from qualifier enum
See also
GLM_GTC_bitfield
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00289.html ================================================ 0.9.9 API documentation: GLM_GTC_color_space
0.9.9 API documentation
GLM_GTC_color_space

Include <glm/gtc/color_space.hpp> to use the features of this extension. More...

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > convertLinearToSRGB (vec< L, T, Q > const &ColorLinear)
 Convert a linear color to sRGB color using a standard gamma correction. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > convertLinearToSRGB (vec< L, T, Q > const &ColorLinear, T Gamma)
 Convert a linear color to sRGB color using a custom gamma correction. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > convertSRGBToLinear (vec< L, T, Q > const &ColorSRGB)
 Convert a sRGB color to linear color using a standard gamma correction. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > convertSRGBToLinear (vec< L, T, Q > const &ColorSRGB, T Gamma)
 Convert a sRGB color to linear color using a custom gamma correction.
 

Detailed Description

Include <glm/gtc/color_space.hpp> to use the features of this extension.

Allows conversions between linear and sRGB color spaces.

Function Documentation

GLM_FUNC_DECL vec<L, T, Q> glm::convertLinearToSRGB(vec<L, T, Q> const& ColorLinear)

Convert a linear color to sRGB color using a standard gamma correction.

IEC 61966-2-1:1999 / Rec. 709 specification https://www.w3.org/Graphics/Color/srgb

GLM_FUNC_DECL vec<L, T, Q> glm::convertLinearToSRGB(vec<L, T, Q> const& ColorLinear, T Gamma)

Convert a linear color to sRGB color using a custom gamma correction.

IEC 61966-2-1:1999 / Rec. 709 specification https://www.w3.org/Graphics/Color/srgb

GLM_FUNC_DECL vec<L, T, Q> glm::convertSRGBToLinear(vec<L, T, Q> const& ColorSRGB)

Convert a sRGB color to linear color using a standard gamma correction.

IEC 61966-2-1:1999 / Rec. 709 specification https://www.w3.org/Graphics/Color/srgb

GLM_FUNC_DECL vec<L, T, Q> glm::convertSRGBToLinear(vec<L, T, Q> const& ColorSRGB, T Gamma)

Convert a sRGB color to linear color using a custom gamma correction.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00290.html ================================================ 0.9.9 API documentation: GLM_GTC_constants
GLM_GTC_constants

Include <glm/gtc/constants.hpp> to use the features of this extension. More...

Functions

template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType e ()
 Return e constant. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType euler ()
 Return Euler's constant. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType four_over_pi ()
 Return 4 / pi. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType golden_ratio ()
 Return the golden ratio constant. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType half_pi ()
 Return pi / 2. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType ln_ln_two ()
 Return ln(ln(2)). More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType ln_ten ()
 Return ln(10). More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType ln_two ()
 Return ln(2). More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType one ()
 Return 1. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType one_over_pi ()
 Return 1 / pi. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType one_over_root_two ()
 Return 1 / sqrt(2). More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType one_over_two_pi ()
 Return 1 / (pi * 2). More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType quarter_pi ()
 Return pi / 4. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType root_five ()
 Return sqrt(5). More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType root_half_pi ()
 Return sqrt(pi / 2). More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType root_ln_four ()
 Return sqrt(ln(4)). More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType root_pi ()
 Return square root of pi. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType root_three ()
 Return sqrt(3). More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType root_two ()
 Return sqrt(2). More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType root_two_pi ()
 Return sqrt(2 * pi). More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType third ()
 Return 1 / 3. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType three_over_two_pi ()
 Return 3 * pi / 2. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType two_over_pi ()
 Return 2 / pi. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType two_over_root_pi ()
 Return 2 / sqrt(pi). More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType two_pi ()
 Return pi * 2. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType two_thirds ()
 Return 2 / 3. More...
 
template<typename genType >
GLM_FUNC_DECL GLM_CONSTEXPR genType zero ()
 Return 0. More...
 

Detailed Description

Include <glm/gtc/constants.hpp> to use the features of this extension.

Provide a list of constants and precomputed useful values.

Function Documentation

GLM_FUNC_DECL GLM_CONSTEXPR genType glm::e ( )

Return e constant.

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::euler ( )

Return Euler's constant.

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::four_over_pi ( )

Return 4 / pi.

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::golden_ratio ( )

Return the golden ratio constant.

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::half_pi ( )

Return pi / 2.

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::ln_ln_two ( )

Return ln(ln(2)).

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::ln_ten ( )

Return ln(10).

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::ln_two ( )

Return ln(2).

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::one ( )

Return 1.

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::one_over_pi ( )

Return 1 / pi.

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::one_over_root_two ( )

Return 1 / sqrt(2).

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::one_over_two_pi ( )

Return 1 / (pi * 2).

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::quarter_pi ( )

Return pi / 4.

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::root_five ( )

Return sqrt(5).

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::root_half_pi ( )

Return sqrt(pi / 2).

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::root_ln_four ( )

Return sqrt(ln(4)).

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::root_pi ( )

Return square root of pi.

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::root_three ( )

Return sqrt(3).

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::root_two ( )

Return sqrt(2).

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::root_two_pi ( )

Return sqrt(2 * pi).

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::third ( )

Return 1 / 3.

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::three_over_two_pi ( )

Return 3 * pi / 2.

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::two_over_pi ( )

Return 2 / pi.

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::two_over_root_pi ( )

Return 2 / sqrt(pi).

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::two_pi ( )

Return pi * 2.

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::two_thirds ( )

Return 2 / 3.

See also
GLM_GTC_constants
GLM_FUNC_DECL GLM_CONSTEXPR genType glm::zero ( )

Return 0.

See also
GLM_GTC_constants
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00291.html ================================================ 0.9.9 API documentation: GLM_GTC_epsilon
GLM_GTC_epsilon

Include <glm/gtc/epsilon.hpp> to use the features of this extension. More...

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, bool, Q > epsilonEqual (vec< L, T, Q > const &x, vec< L, T, Q > const &y, T const &epsilon)
 Returns the component-wise comparison of |x - y| < epsilon. More...
 
template<typename genType >
GLM_FUNC_DECL bool epsilonEqual (genType const &x, genType const &y, genType const &epsilon)
 Returns the component-wise comparison of |x - y| < epsilon. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, bool, Q > epsilonNotEqual (vec< L, T, Q > const &x, vec< L, T, Q > const &y, T const &epsilon)
 Returns the component-wise comparison of |x - y| >= epsilon. More...
 
template<typename genType >
GLM_FUNC_DECL bool epsilonNotEqual (genType const &x, genType const &y, genType const &epsilon)
 Returns the component-wise comparison of |x - y| >= epsilon. More...
 

Detailed Description

Include <glm/gtc/epsilon.hpp> to use the features of this extension.

Comparison functions with a user-defined epsilon value.

Function Documentation

GLM_FUNC_DECL vec<L, bool, Q> glm::epsilonEqual(vec<L, T, Q> const& x, vec<L, T, Q> const& y, T const& epsilon)

Returns the component-wise comparison of |x - y| < epsilon.

True if this expression is satisfied.

See also
GLM_GTC_epsilon
GLM_FUNC_DECL bool glm::epsilonEqual(genType const& x, genType const& y, genType const& epsilon)

Returns the component-wise comparison of |x - y| < epsilon.

True if this expression is satisfied.

See also
GLM_GTC_epsilon
GLM_FUNC_DECL vec<L, bool, Q> glm::epsilonNotEqual(vec<L, T, Q> const& x, vec<L, T, Q> const& y, T const& epsilon)

Returns the component-wise comparison of |x - y| >= epsilon.

True if this expression is satisfied.

See also
GLM_GTC_epsilon
GLM_FUNC_DECL bool glm::epsilonNotEqual(genType const& x, genType const& y, genType const& epsilon)

Returns the component-wise comparison of |x - y| >= epsilon.

True if this expression is satisfied.

See also
GLM_GTC_epsilon
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00292.html ================================================ 0.9.9 API documentation: GLM_GTC_integer
GLM_GTC_integer

Include <glm/gtc/integer.hpp> to use the features of this extension. More...

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, int, Q > iround (vec< L, T, Q > const &x)
 Returns a value equal to the nearest integer to x. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType log2 (genIUType x)
 Returns the log2 of x for integer values. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, uint, Q > uround (vec< L, T, Q > const &x)
 Returns a value equal to the nearest integer to x. More...
 

Detailed Description

Include <glm/gtc/integer.hpp> to use the features of this extension.

Provides additional integer functions, such as integer rounding and integer log2.

Function Documentation

GLM_FUNC_DECL vec<L, int, Q> glm::iround(vec<L, T, Q> const& x)

Returns a value equal to the nearest integer to x.

The fraction 0.5 will round in a direction chosen by the implementation, presumably the direction that is fastest.

Parameters
x  The values of the argument must be greater than or equal to zero.
Template Parameters
T  Floating-point scalar types.
See also
GLSL round man page
GLM_GTC_integer
GLM_FUNC_DECL genIUType glm::log2(genIUType x)

Returns the log2 of x for integer values.

Useful for computing the mipmap count from a texture size.

See also
GLM_GTC_integer
GLM_FUNC_DECL vec<L, uint, Q> glm::uround(vec<L, T, Q> const& x)

Returns a value equal to the nearest integer to x.

The fraction 0.5 will round in a direction chosen by the implementation, presumably the direction that is fastest.

Parameters
x  The values of the argument must be greater than or equal to zero.
Template Parameters
T  Floating-point scalar types.
See also
GLSL round man page
GLM_GTC_integer
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00293.html ================================================ 0.9.9 API documentation: GLM_GTC_matrix_access
GLM_GTC_matrix_access

Include <glm/gtc/matrix_access.hpp> to use the features of this extension. More...

Functions

template<typename genType >
GLM_FUNC_DECL genType::col_type column (genType const &m, length_t index)
 Get a specific column of a matrix. More...
 
template<typename genType >
GLM_FUNC_DECL genType column (genType const &m, length_t index, typename genType::col_type const &x)
 Set a specific column of a matrix. More...
 
template<typename genType >
GLM_FUNC_DECL genType::row_type row (genType const &m, length_t index)
 Get a specific row of a matrix. More...
 
template<typename genType >
GLM_FUNC_DECL genType row (genType const &m, length_t index, typename genType::row_type const &x)
 Set a specific row of a matrix. More...
 

Detailed Description

Include <glm/gtc/matrix_access.hpp> to use the features of this extension.

Defines functions to access rows or columns of a matrix easily.

Function Documentation

GLM_FUNC_DECL genType::col_type glm::column(genType const& m, length_t index)

Get a specific column of a matrix.

See also
GLM_GTC_matrix_access
GLM_FUNC_DECL genType glm::column(genType const& m, length_t index, typename genType::col_type const& x)

Set a specific column of a matrix.

See also
GLM_GTC_matrix_access
GLM_FUNC_DECL genType::row_type glm::row(genType const& m, length_t index)

Get a specific row of a matrix.

See also
GLM_GTC_matrix_access
GLM_FUNC_DECL genType glm::row(genType const& m, length_t index, typename genType::row_type const& x)

Set a specific row of a matrix.

See also
GLM_GTC_matrix_access
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00294.html ================================================ 0.9.9 API documentation: GLM_GTC_matrix_integer
GLM_GTC_matrix_integer

Include <glm/gtc/matrix_integer.hpp> to use the features of this extension. More...

Typedefs

typedef mat< 2, 2, int, highp > highp_imat2
 High-qualifier signed integer 2x2 matrix. More...
 
typedef mat< 2, 2, int, highp > highp_imat2x2
 High-qualifier signed integer 2x2 matrix. More...
 
typedef mat< 2, 3, int, highp > highp_imat2x3
 High-qualifier signed integer 2x3 matrix. More...
 
typedef mat< 2, 4, int, highp > highp_imat2x4
 High-qualifier signed integer 2x4 matrix. More...
 
typedef mat< 3, 3, int, highp > highp_imat3
 High-qualifier signed integer 3x3 matrix. More...
 
typedef mat< 3, 2, int, highp > highp_imat3x2
 High-qualifier signed integer 3x2 matrix. More...
 
typedef mat< 3, 3, int, highp > highp_imat3x3
 High-qualifier signed integer 3x3 matrix. More...
 
typedef mat< 3, 4, int, highp > highp_imat3x4
 High-qualifier signed integer 3x4 matrix. More...
 
typedef mat< 4, 4, int, highp > highp_imat4
 High-qualifier signed integer 4x4 matrix. More...
 
typedef mat< 4, 2, int, highp > highp_imat4x2
 High-qualifier signed integer 4x2 matrix. More...
 
typedef mat< 4, 3, int, highp > highp_imat4x3
 High-qualifier signed integer 4x3 matrix. More...
 
typedef mat< 4, 4, int, highp > highp_imat4x4
 High-qualifier signed integer 4x4 matrix. More...
 
typedef mat< 2, 2, uint, highp > highp_umat2
 High-qualifier unsigned integer 2x2 matrix. More...
 
typedef mat< 2, 2, uint, highp > highp_umat2x2
 High-qualifier unsigned integer 2x2 matrix. More...
 
typedef mat< 2, 3, uint, highp > highp_umat2x3
 High-qualifier unsigned integer 2x3 matrix. More...
 
typedef mat< 2, 4, uint, highp > highp_umat2x4
 High-qualifier unsigned integer 2x4 matrix. More...
 
typedef mat< 3, 3, uint, highp > highp_umat3
 High-qualifier unsigned integer 3x3 matrix. More...
 
typedef mat< 3, 2, uint, highp > highp_umat3x2
 High-qualifier unsigned integer 3x2 matrix. More...
 
typedef mat< 3, 3, uint, highp > highp_umat3x3
 High-qualifier unsigned integer 3x3 matrix. More...
 
typedef mat< 3, 4, uint, highp > highp_umat3x4
 High-qualifier unsigned integer 3x4 matrix. More...
 
typedef mat< 4, 4, uint, highp > highp_umat4
 High-qualifier unsigned integer 4x4 matrix. More...
 
typedef mat< 4, 2, uint, highp > highp_umat4x2
 High-qualifier unsigned integer 4x2 matrix. More...
 
typedef mat< 4, 3, uint, highp > highp_umat4x3
 High-qualifier unsigned integer 4x3 matrix. More...
 
typedef mat< 4, 4, uint, highp > highp_umat4x4
 High-qualifier unsigned integer 4x4 matrix. More...
 
typedef mediump_imat2 imat2
 Signed integer 2x2 matrix. More...
 
typedef mediump_imat2x2 imat2x2
 Signed integer 2x2 matrix. More...
 
typedef mediump_imat2x3 imat2x3
 Signed integer 2x3 matrix. More...
 
typedef mediump_imat2x4 imat2x4
 Signed integer 2x4 matrix. More...
 
typedef mediump_imat3 imat3
 Signed integer 3x3 matrix. More...
 
typedef mediump_imat3x2 imat3x2
 Signed integer 3x2 matrix. More...
 
typedef mediump_imat3x3 imat3x3
 Signed integer 3x3 matrix. More...
 
typedef mediump_imat3x4 imat3x4
 Signed integer 3x4 matrix. More...
 
typedef mediump_imat4 imat4
 Signed integer 4x4 matrix. More...
 
typedef mediump_imat4x2 imat4x2
 Signed integer 4x2 matrix. More...
 
typedef mediump_imat4x3 imat4x3
 Signed integer 4x3 matrix. More...
 
typedef mediump_imat4x4 imat4x4
 Signed integer 4x4 matrix. More...
 
typedef mat< 2, 2, int, lowp > lowp_imat2
 Low-qualifier signed integer 2x2 matrix. More...
 
typedef mat< 2, 2, int, lowp > lowp_imat2x2
 Low-qualifier signed integer 2x2 matrix. More...
 
typedef mat< 2, 3, int, lowp > lowp_imat2x3
 Low-qualifier signed integer 2x3 matrix. More...
 
typedef mat< 2, 4, int, lowp > lowp_imat2x4
 Low-qualifier signed integer 2x4 matrix. More...
 
typedef mat< 3, 3, int, lowp > lowp_imat3
 Low-qualifier signed integer 3x3 matrix. More...
 
typedef mat< 3, 2, int, lowp > lowp_imat3x2
 Low-qualifier signed integer 3x2 matrix. More...
 
typedef mat< 3, 3, int, lowp > lowp_imat3x3
 Low-qualifier signed integer 3x3 matrix. More...
 
typedef mat< 3, 4, int, lowp > lowp_imat3x4
 Low-qualifier signed integer 3x4 matrix. More...
 
typedef mat< 4, 4, int, lowp > lowp_imat4
 Low-qualifier signed integer 4x4 matrix. More...
 
typedef mat< 4, 2, int, lowp > lowp_imat4x2
 Low-qualifier signed integer 4x2 matrix. More...
 
typedef mat< 4, 3, int, lowp > lowp_imat4x3
 Low-qualifier signed integer 4x3 matrix. More...
 
typedef mat< 4, 4, int, lowp > lowp_imat4x4
 Low-qualifier signed integer 4x4 matrix. More...
 
typedef mat< 2, 2, uint, lowp > lowp_umat2
 Low-qualifier unsigned integer 2x2 matrix. More...
 
typedef mat< 2, 2, uint, lowp > lowp_umat2x2
 Low-qualifier unsigned integer 2x2 matrix. More...
 
typedef mat< 2, 3, uint, lowp > lowp_umat2x3
 Low-qualifier unsigned integer 2x3 matrix. More...
 
typedef mat< 2, 4, uint, lowp > lowp_umat2x4
 Low-qualifier unsigned integer 2x4 matrix. More...
 
typedef mat< 3, 3, uint, lowp > lowp_umat3
 Low-qualifier unsigned integer 3x3 matrix. More...
 
typedef mat< 3, 2, uint, lowp > lowp_umat3x2
 Low-qualifier unsigned integer 3x2 matrix. More...
 
typedef mat< 3, 3, uint, lowp > lowp_umat3x3
 Low-qualifier unsigned integer 3x3 matrix. More...
 
typedef mat< 3, 4, uint, lowp > lowp_umat3x4
 Low-qualifier unsigned integer 3x4 matrix. More...
 
typedef mat< 4, 4, uint, lowp > lowp_umat4
 Low-qualifier unsigned integer 4x4 matrix. More...
 
typedef mat< 4, 2, uint, lowp > lowp_umat4x2
 Low-qualifier unsigned integer 4x2 matrix. More...
 
typedef mat< 4, 3, uint, lowp > lowp_umat4x3
 Low-qualifier unsigned integer 4x3 matrix. More...
 
typedef mat< 4, 4, uint, lowp > lowp_umat4x4
 Low-qualifier unsigned integer 4x4 matrix. More...
 
typedef mat< 2, 2, int, mediump > mediump_imat2
 Medium-qualifier signed integer 2x2 matrix. More...
 
typedef mat< 2, 2, int, mediump > mediump_imat2x2
 Medium-qualifier signed integer 2x2 matrix. More...
 
typedef mat< 2, 3, int, mediump > mediump_imat2x3
 Medium-qualifier signed integer 2x3 matrix. More...
 
typedef mat< 2, 4, int, mediump > mediump_imat2x4
 Medium-qualifier signed integer 2x4 matrix. More...
 
typedef mat< 3, 3, int, mediump > mediump_imat3
 Medium-qualifier signed integer 3x3 matrix. More...
 
typedef mat< 3, 2, int, mediump > mediump_imat3x2
 Medium-qualifier signed integer 3x2 matrix. More...
 
typedef mat< 3, 3, int, mediump > mediump_imat3x3
 Medium-qualifier signed integer 3x3 matrix. More...
 
typedef mat< 3, 4, int, mediump > mediump_imat3x4
 Medium-qualifier signed integer 3x4 matrix. More...
 
typedef mat< 4, 4, int, mediump > mediump_imat4
 Medium-qualifier signed integer 4x4 matrix. More...
 
typedef mat< 4, 2, int, mediump > mediump_imat4x2
 Medium-qualifier signed integer 4x2 matrix. More...
 
typedef mat< 4, 3, int, mediump > mediump_imat4x3
 Medium-qualifier signed integer 4x3 matrix. More...
 
typedef mat< 4, 4, int, mediump > mediump_imat4x4
 Medium-qualifier signed integer 4x4 matrix. More...
 
typedef mat< 2, 2, uint, mediump > mediump_umat2
 Medium-qualifier unsigned integer 2x2 matrix. More...
 
typedef mat< 2, 2, uint, mediump > mediump_umat2x2
 Medium-qualifier unsigned integer 2x2 matrix. More...
 
typedef mat< 2, 3, uint, mediump > mediump_umat2x3
 Medium-qualifier unsigned integer 2x3 matrix. More...
 
typedef mat< 2, 4, uint, mediump > mediump_umat2x4
 Medium-qualifier unsigned integer 2x4 matrix. More...
 
typedef mat< 3, 3, uint, mediump > mediump_umat3
 Medium-qualifier unsigned integer 3x3 matrix. More...
 
typedef mat< 3, 2, uint, mediump > mediump_umat3x2
 Medium-qualifier unsigned integer 3x2 matrix. More...
 
typedef mat< 3, 3, uint, mediump > mediump_umat3x3
 Medium-qualifier unsigned integer 3x3 matrix. More...
 
typedef mat< 3, 4, uint, mediump > mediump_umat3x4
 Medium-qualifier unsigned integer 3x4 matrix. More...
 
typedef mat< 4, 4, uint, mediump > mediump_umat4
 Medium-qualifier unsigned integer 4x4 matrix. More...
 
typedef mat< 4, 2, uint, mediump > mediump_umat4x2
 Medium-qualifier unsigned integer 4x2 matrix. More...
 
typedef mat< 4, 3, uint, mediump > mediump_umat4x3
 Medium-qualifier unsigned integer 4x3 matrix. More...
 
typedef mat< 4, 4, uint, mediump > mediump_umat4x4
 Medium-qualifier unsigned integer 4x4 matrix. More...
 
typedef mediump_umat2 umat2
 Unsigned integer 2x2 matrix. More...
 
typedef mediump_umat2x2 umat2x2
 Unsigned integer 2x2 matrix. More...
 
typedef mediump_umat2x3 umat2x3
 Unsigned integer 2x3 matrix. More...
 
typedef mediump_umat2x4 umat2x4
 Unsigned integer 2x4 matrix. More...
 
typedef mediump_umat3 umat3
 Unsigned integer 3x3 matrix. More...
 
typedef mediump_umat3x2 umat3x2
 Unsigned integer 3x2 matrix. More...
 
typedef mediump_umat3x3 umat3x3
 Unsigned integer 3x3 matrix. More...
 
typedef mediump_umat3x4 umat3x4
 Unsigned integer 3x4 matrix. More...
 
typedef mediump_umat4 umat4
 Unsigned integer 4x4 matrix. More...
 
typedef mediump_umat4x2 umat4x2
 Unsigned integer 4x2 matrix. More...
 
typedef mediump_umat4x3 umat4x3
 Unsigned integer 4x3 matrix. More...
 
typedef mediump_umat4x4 umat4x4
 Unsigned integer 4x4 matrix. More...
 

Detailed Description

Include <glm/gtc/matrix_integer.hpp> to use the features of this extension.

Defines a number of matrices with integer types.

Typedef Documentation

typedef mat<2, 2, int, highp> highp_imat2

High-qualifier signed integer 2x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 37 of file matrix_integer.hpp.

typedef mat<2, 2, int, highp> highp_imat2x2

High-qualifier signed integer 2x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 49 of file matrix_integer.hpp.

typedef mat<2, 3, int, highp> highp_imat2x3

High-qualifier signed integer 2x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 53 of file matrix_integer.hpp.

typedef mat<2, 4, int, highp> highp_imat2x4

High-qualifier signed integer 2x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 57 of file matrix_integer.hpp.

typedef mat<3, 3, int, highp> highp_imat3

High-qualifier signed integer 3x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 41 of file matrix_integer.hpp.

typedef mat<3, 2, int, highp> highp_imat3x2

High-qualifier signed integer 3x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 61 of file matrix_integer.hpp.

typedef mat<3, 3, int, highp> highp_imat3x3

High-qualifier signed integer 3x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 65 of file matrix_integer.hpp.

typedef mat<3, 4, int, highp> highp_imat3x4

High-qualifier signed integer 3x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 69 of file matrix_integer.hpp.

typedef mat<4, 4, int, highp> highp_imat4

High-qualifier signed integer 4x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 45 of file matrix_integer.hpp.

typedef mat<4, 2, int, highp> highp_imat4x2

High-qualifier signed integer 4x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 73 of file matrix_integer.hpp.

typedef mat<4, 3, int, highp> highp_imat4x3

High-qualifier signed integer 4x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 77 of file matrix_integer.hpp.

typedef mat<4, 4, int, highp> highp_imat4x4

High-qualifier signed integer 4x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 81 of file matrix_integer.hpp.

typedef mat<2, 2, uint, highp> highp_umat2

High-qualifier unsigned integer 2x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 186 of file matrix_integer.hpp.

typedef mat<2, 2, uint, highp> highp_umat2x2

High-qualifier unsigned integer 2x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 198 of file matrix_integer.hpp.

typedef mat<2, 3, uint, highp> highp_umat2x3

High-qualifier unsigned integer 2x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 202 of file matrix_integer.hpp.

typedef mat<2, 4, uint, highp> highp_umat2x4

High-qualifier unsigned integer 2x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 206 of file matrix_integer.hpp.

typedef mat<3, 3, uint, highp> highp_umat3

High-qualifier unsigned integer 3x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 190 of file matrix_integer.hpp.

typedef mat<3, 2, uint, highp> highp_umat3x2

High-qualifier unsigned integer 3x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 210 of file matrix_integer.hpp.

typedef mat<3, 3, uint, highp> highp_umat3x3

High-qualifier unsigned integer 3x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 214 of file matrix_integer.hpp.

typedef mat<3, 4, uint, highp> highp_umat3x4

High-qualifier unsigned integer 3x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 218 of file matrix_integer.hpp.

typedef mat<4, 4, uint, highp> highp_umat4

High-qualifier unsigned integer 4x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 194 of file matrix_integer.hpp.

typedef mat<4, 2, uint, highp> highp_umat4x2

High-qualifier unsigned integer 4x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 222 of file matrix_integer.hpp.

typedef mat<4, 3, uint, highp> highp_umat4x3

High-qualifier unsigned integer 4x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 226 of file matrix_integer.hpp.

typedef mat<4, 4, uint, highp> highp_umat4x4

High-qualifier unsigned integer 4x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 230 of file matrix_integer.hpp.

typedef mediump_imat2 imat2

Signed integer 2x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 362 of file matrix_integer.hpp.

typedef mediump_imat2x2 imat2x2

Signed integer 2x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 374 of file matrix_integer.hpp.

typedef mediump_imat2x3 imat2x3

Signed integer 2x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 378 of file matrix_integer.hpp.

typedef mediump_imat2x4 imat2x4

Signed integer 2x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 382 of file matrix_integer.hpp.

typedef mediump_imat3 imat3

Signed integer 3x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 366 of file matrix_integer.hpp.

typedef mediump_imat3x2 imat3x2

Signed integer 3x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 386 of file matrix_integer.hpp.

typedef mediump_imat3x3 imat3x3

Signed integer 3x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 390 of file matrix_integer.hpp.

typedef mediump_imat3x4 imat3x4

Signed integer 3x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 394 of file matrix_integer.hpp.

typedef mediump_imat4 imat4

Signed integer 4x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 370 of file matrix_integer.hpp.

typedef mediump_imat4x2 imat4x2

Signed integer 4x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 398 of file matrix_integer.hpp.

typedef mediump_imat4x3 imat4x3

Signed integer 4x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 402 of file matrix_integer.hpp.

typedef mediump_imat4x4 imat4x4

Signed integer 4x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 406 of file matrix_integer.hpp.

typedef mat<2, 2, int, lowp> lowp_imat2

Low-qualifier signed integer 2x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 136 of file matrix_integer.hpp.

typedef mat<2, 2, int, lowp> lowp_imat2x2

Low-qualifier signed integer 2x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 149 of file matrix_integer.hpp.

typedef mat<2, 3, int, lowp> lowp_imat2x3

Low-qualifier signed integer 2x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 153 of file matrix_integer.hpp.

typedef mat<2, 4, int, lowp> lowp_imat2x4

Low-qualifier signed integer 2x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 157 of file matrix_integer.hpp.

typedef mat<3, 3, int, lowp> lowp_imat3

Low-qualifier signed integer 3x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 140 of file matrix_integer.hpp.

typedef mat<3, 2, int, lowp> lowp_imat3x2

Low-qualifier signed integer 3x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 161 of file matrix_integer.hpp.

typedef mat<3, 3, int, lowp> lowp_imat3x3

Low-qualifier signed integer 3x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 165 of file matrix_integer.hpp.

typedef mat<3, 4, int, lowp> lowp_imat3x4

Low-qualifier signed integer 3x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 169 of file matrix_integer.hpp.

typedef mat<4, 4, int, lowp> lowp_imat4

Low-qualifier signed integer 4x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 144 of file matrix_integer.hpp.

typedef mat<4, 2, int, lowp> lowp_imat4x2

Low-qualifier signed integer 4x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 173 of file matrix_integer.hpp.

typedef mat<4, 3, int, lowp> lowp_imat4x3

Low-qualifier signed integer 4x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 177 of file matrix_integer.hpp.

typedef mat<4, 4, int, lowp> lowp_imat4x4

Low-qualifier signed integer 4x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 181 of file matrix_integer.hpp.

typedef mat<2, 2, uint, lowp> lowp_umat2

Low-qualifier unsigned integer 2x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 285 of file matrix_integer.hpp.

typedef mat<2, 2, uint, lowp> lowp_umat2x2

Low-qualifier unsigned integer 2x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 298 of file matrix_integer.hpp.

typedef mat<2, 3, uint, lowp> lowp_umat2x3

Low-qualifier unsigned integer 2x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 302 of file matrix_integer.hpp.

typedef mat<2, 4, uint, lowp> lowp_umat2x4

Low-qualifier unsigned integer 2x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 306 of file matrix_integer.hpp.

typedef mat<3, 3, uint, lowp> lowp_umat3

Low-qualifier unsigned integer 3x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 289 of file matrix_integer.hpp.

typedef mat<3, 2, uint, lowp> lowp_umat3x2

Low-qualifier unsigned integer 3x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 310 of file matrix_integer.hpp.

typedef mat<3, 3, uint, lowp> lowp_umat3x3

Low-qualifier unsigned integer 3x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 314 of file matrix_integer.hpp.

typedef mat<3, 4, uint, lowp> lowp_umat3x4

Low-qualifier unsigned integer 3x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 318 of file matrix_integer.hpp.

typedef mat<4, 4, uint, lowp> lowp_umat4

Low-qualifier unsigned integer 4x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 293 of file matrix_integer.hpp.

typedef mat<4, 2, uint, lowp> lowp_umat4x2

Low-qualifier unsigned integer 4x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 322 of file matrix_integer.hpp.

typedef mat<4, 3, uint, lowp> lowp_umat4x3

Low-qualifier unsigned integer 4x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 326 of file matrix_integer.hpp.

typedef mat<4, 4, uint, lowp> lowp_umat4x4

Low-qualifier unsigned integer 4x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 330 of file matrix_integer.hpp.

typedef mat<2, 2, int, mediump> mediump_imat2

Medium-qualifier signed integer 2x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 86 of file matrix_integer.hpp.

typedef mat<2, 2, int, mediump> mediump_imat2x2

Medium-qualifier signed integer 2x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 99 of file matrix_integer.hpp.

typedef mat<2, 3, int, mediump> mediump_imat2x3

Medium-qualifier signed integer 2x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 103 of file matrix_integer.hpp.

typedef mat<2, 4, int, mediump> mediump_imat2x4

Medium-qualifier signed integer 2x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 107 of file matrix_integer.hpp.

typedef mat<3, 3, int, mediump> mediump_imat3

Medium-qualifier signed integer 3x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 90 of file matrix_integer.hpp.

typedef mat<3, 2, int, mediump> mediump_imat3x2

Medium-qualifier signed integer 3x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 111 of file matrix_integer.hpp.

typedef mat<3, 3, int, mediump> mediump_imat3x3

Medium-qualifier signed integer 3x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 115 of file matrix_integer.hpp.

typedef mat<3, 4, int, mediump> mediump_imat3x4

Medium-qualifier signed integer 3x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 119 of file matrix_integer.hpp.

typedef mat<4, 4, int, mediump> mediump_imat4

Medium-qualifier signed integer 4x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 94 of file matrix_integer.hpp.

typedef mat<4, 2, int, mediump> mediump_imat4x2

Medium-qualifier signed integer 4x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 123 of file matrix_integer.hpp.

typedef mat<4, 3, int, mediump> mediump_imat4x3

Medium-qualifier signed integer 4x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 127 of file matrix_integer.hpp.

typedef mat<4, 4, int, mediump> mediump_imat4x4

Medium-qualifier signed integer 4x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 131 of file matrix_integer.hpp.

typedef mat<2, 2, uint, mediump> mediump_umat2

Medium-qualifier unsigned integer 2x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 235 of file matrix_integer.hpp.

typedef mat<2, 2, uint, mediump> mediump_umat2x2

Medium-qualifier unsigned integer 2x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 248 of file matrix_integer.hpp.

typedef mat<2, 3, uint, mediump> mediump_umat2x3

Medium-qualifier unsigned integer 2x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 252 of file matrix_integer.hpp.

typedef mat<2, 4, uint, mediump> mediump_umat2x4

Medium-qualifier unsigned integer 2x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 256 of file matrix_integer.hpp.

typedef mat<3, 3, uint, mediump> mediump_umat3

Medium-qualifier unsigned integer 3x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 239 of file matrix_integer.hpp.

typedef mat<3, 2, uint, mediump> mediump_umat3x2

Medium-qualifier unsigned integer 3x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 260 of file matrix_integer.hpp.

typedef mat<3, 3, uint, mediump> mediump_umat3x3

Medium-qualifier unsigned integer 3x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 264 of file matrix_integer.hpp.

typedef mat<3, 4, uint, mediump> mediump_umat3x4

Medium-qualifier unsigned integer 3x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 268 of file matrix_integer.hpp.

typedef mat<4, 4, uint, mediump> mediump_umat4

Medium-qualifier unsigned integer 4x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 243 of file matrix_integer.hpp.

typedef mat<4, 2, uint, mediump> mediump_umat4x2

Medium-qualifier unsigned integer 4x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 272 of file matrix_integer.hpp.

typedef mat<4, 3, uint, mediump> mediump_umat4x3

Medium-qualifier unsigned integer 4x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 276 of file matrix_integer.hpp.

typedef mat<4, 4, uint, mediump> mediump_umat4x4

Medium-qualifier unsigned integer 4x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 280 of file matrix_integer.hpp.

typedef mediump_umat2 umat2

Unsigned integer 2x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 439 of file matrix_integer.hpp.

typedef mediump_umat2x2 umat2x2

Unsigned integer 2x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 451 of file matrix_integer.hpp.

typedef mediump_umat2x3 umat2x3

Unsigned integer 2x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 455 of file matrix_integer.hpp.

typedef mediump_umat2x4 umat2x4

Unsigned integer 2x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 459 of file matrix_integer.hpp.

typedef mediump_umat3 umat3

Unsigned integer 3x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 443 of file matrix_integer.hpp.

typedef mediump_umat3x2 umat3x2

Unsigned integer 3x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 463 of file matrix_integer.hpp.

typedef mediump_umat3x3 umat3x3

Unsigned integer 3x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 467 of file matrix_integer.hpp.

typedef mediump_umat3x4 umat3x4

Unsigned integer 3x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 471 of file matrix_integer.hpp.

typedef mediump_umat4 umat4

Unsigned integer 4x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 447 of file matrix_integer.hpp.

typedef mediump_umat4x2 umat4x2

Unsigned integer 4x2 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 475 of file matrix_integer.hpp.

typedef mediump_umat4x3 umat4x3

Unsigned integer 4x3 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 479 of file matrix_integer.hpp.

typedef mediump_umat4x4 umat4x4

Unsigned integer 4x4 matrix.

See also
GLM_GTC_matrix_integer

Definition at line 483 of file matrix_integer.hpp.
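The typedefs above follow one pattern: a qualifier-tagged `mat` template, per-qualifier aliases, and a default-precision alias resolving to the mediump variant. A minimal sketch of that layering (an illustration, not GLM's actual headers):

```cpp
#include <type_traits>

// Minimal sketch (not GLM itself) of how matrix_integer.hpp layers its
// typedefs: a qualifier-tagged template, per-qualifier aliases, and a
// default-precision alias that resolves to the mediump variant.
enum qualifier { lowp, mediump, highp };

template<int C, int R, typename T, qualifier Q>
struct mat { T data[C][R]; };

typedef mat<3, 3, int, lowp>    lowp_imat3;
typedef mat<3, 3, int, mediump> mediump_imat3;
typedef mat<3, 3, int, highp>   highp_imat3;
typedef mediump_imat3           imat3;   // default precision, as documented above
```

This mirrors why `imat3` and `mediump_imat3` are interchangeable in the listing above: the unqualified name is just an alias.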

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00295.html ================================================ 0.9.9 API documentation: GLM_GTC_matrix_inverse
0.9.9 API documentation
GLM_GTC_matrix_inverse

Include <glm/gtc/matrix_inverse.hpp> to use the features of this extension. More...

Functions

template<typename genType >
GLM_FUNC_DECL genType affineInverse (genType const &m)
 Fast matrix inverse for affine matrix. More...
 
template<typename genType >
GLM_FUNC_DECL genType inverseTranspose (genType const &m)
 Compute the inverse transpose of a matrix. More...
 

Detailed Description

Include <glm/gtc/matrix_inverse.hpp> to use the features of this extension.

Defines additional matrix inverting functions.

Function Documentation

GLM_FUNC_DECL genType glm::affineInverse ( genType const &  m)

Fast matrix inverse for affine matrix.

Parameters
m: Input matrix to invert.
Template Parameters
genType: Squared floating-point matrix: half, float or double. The inverse of a matrix based on half-qualifier floating-point values is highly inaccurate.
See also
GLM_GTC_matrix_inverse
GLM_FUNC_DECL genType glm::inverseTranspose ( genType const &  m)

Compute the inverse transpose of a matrix.

Parameters
m: Input matrix whose inverse transpose to compute.
Template Parameters
genType: Squared floating-point matrix: half, float or double. The inverse of a matrix based on half-qualifier floating-point values is highly inaccurate.
See also
GLM_GTC_matrix_inverse
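The reason affineInverse is "fast" is that an affine matrix [L t; 0 1] inverts analytically to [L⁻¹ -L⁻¹t; 0 1], so no general Gaussian elimination is needed. A hedged plain-C++ sketch of that idea for the 2D case (column-major storage, last row assumed 0 0 1; an illustration, not GLM's implementation):

```cpp
#include <array>
#include <cmath>

// Sketch of the idea behind glm::affineInverse for a 2D affine matrix
// (column-major, m[col][row], last row assumed to be 0 0 1): invert the
// 2x2 linear block analytically, then negate the transformed translation.
using Mat3 = std::array<std::array<double, 3>, 3>;

Mat3 affineInverse2D(const Mat3& m) {
    const double a = m[0][0], c = m[0][1];    // first column of linear part
    const double b = m[1][0], d = m[1][1];    // second column
    const double det = a * d - b * c;
    const double ia =  d / det, ib = -b / det;
    const double ic = -c / det, id =  a / det;
    const double tx = m[2][0], ty = m[2][1];  // translation column
    Mat3 r{};
    r[0][0] = ia; r[1][0] = ib; r[2][0] = -(ia * tx + ib * ty);
    r[0][1] = ic; r[1][1] = id; r[2][1] = -(ic * tx + id * ty);
    r[2][2] = 1.0;
    return r;
}
```

For a pure translation by (3, 4), the inverse is a translation by (-3, -4), which the closed form produces directly.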
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00296.html ================================================ 0.9.9 API documentation: GLM_GTC_matrix_transform
GLM_GTC_matrix_transform

Include <glm/gtc/matrix_transform.hpp> to use the features of this extension. More...

Include <glm/gtc/matrix_transform.hpp> to use the features of this extension.

Defines functions that generate common transformation matrices.

The matrices generated by this extension use standard OpenGL fixed-function conventions. For example, the lookAt function generates a transform from world space into the specific eye space that the projective matrix functions (perspective, ortho, etc) are designed to expect. The OpenGL compatibility specifications defines the particular layout of this eye space.
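As a concrete example of the fixed-function convention described above, an orthographic projection maps [left, right] x [bottom, top] onto the [-1, 1] NDC square. The sketch below builds that matrix for the 2D case in plain C++ (glm::ortho additionally handles the depth range; this is an illustration of the convention, not GLM's code):

```cpp
#include <array>

// OpenGL-style 2D orthographic projection: scale each axis to a width of 2
// and translate its center to the origin, so [left,right] x [bottom,top]
// lands on [-1,1] x [-1,1]. Column-major storage, m[col][row].
using Mat3 = std::array<std::array<double, 3>, 3>;

Mat3 ortho2D(double l, double r, double b, double t) {
    Mat3 m{};
    m[0][0] = 2.0 / (r - l);
    m[1][1] = 2.0 / (t - b);
    m[2][0] = -(r + l) / (r - l);   // translation column
    m[2][1] = -(t + b) / (t - b);
    m[2][2] = 1.0;
    return m;
}
```

With ortho2D(0, 4, 0, 2), the point (0, 0) maps to (-1, -1) and (4, 2) maps to (1, 1), matching the OpenGL clip-space convention.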

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00297.html ================================================ 0.9.9 API documentation: GLM_GTC_noise

Include <glm/gtc/noise.hpp> to use the features of this extension. More...

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T perlin (vec< L, T, Q > const &p)
 Classic perlin noise. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T perlin (vec< L, T, Q > const &p, vec< L, T, Q > const &rep)
 Periodic perlin noise. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T simplex (vec< L, T, Q > const &p)
 Simplex noise. More...
 

Detailed Description

Include <glm/gtc/noise.hpp> to use the features of this extension.

Defines 2D, 3D and 4D procedural noise functions. Based on the work of Stefan Gustavson and Ashima Arts on "webgl-noise" (https://github.com/ashima/webgl-noise), following Stefan Gustavson's paper "Simplex noise demystified" (http://www.itn.liu.se/~stegu/simplexnoise/simplexnoise.pdf).

Function Documentation

GLM_FUNC_DECL T glm::perlin ( vec< L, T, Q > const &  p)

Classic perlin noise.

See also
GLM_GTC_noise
GLM_FUNC_DECL T glm::perlin ( vec< L, T, Q > const &  p,
vec< L, T, Q > const &  rep 
)

Periodic perlin noise.

See also
GLM_GTC_noise
GLM_FUNC_DECL T glm::simplex ( vec< L, T, Q > const &  p)

Simplex noise.

See also
GLM_GTC_noise
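The distinction between perlin(p) and the periodic overload perlin(p, rep) is that the latter wraps its lattice indices modulo rep, so the noise tiles with that period. A tiny 1D value-noise sketch illustrating the lattice-plus-fade structure and the periodic wrapping (this is NOT GLM's gradient-noise implementation, only an illustration of the interface pattern):

```cpp
#include <cmath>
#include <cstdint>

// Deterministic integer hash mapped to [0,1): the "random value per
// lattice point" that any lattice noise is built on.
double lattice(std::uint32_t i) {
    i = (i ^ 61u) ^ (i >> 16);
    i *= 9u;  i ^= i >> 4;  i *= 0x27d4eb2du;  i ^= i >> 15;
    return i / 4294967296.0;
}

double valueNoise(double x) {
    const int i = (int)std::floor(x);
    const double f = x - i;
    const double u = f * f * (3.0 - 2.0 * f);  // smoothstep fade, as in Perlin noise
    return lattice((std::uint32_t)i) * (1.0 - u)
         + lattice((std::uint32_t)(i + 1)) * u;
}

// Periodic variant, analogous to perlin(p, rep): wrap lattice indices mod rep.
double valueNoisePeriodic(double x, int rep) {
    const int i = (int)std::floor(x);
    const double f = x - i;
    const double u = f * f * (3.0 - 2.0 * f);
    const auto wrap = [rep](int k) { return (std::uint32_t)(((k % rep) + rep) % rep); };
    return lattice(wrap(i)) * (1.0 - u) + lattice(wrap(i + 1)) * u;
}
```

Because the lattice indices wrap, valueNoisePeriodic(x, rep) and valueNoisePeriodic(x + rep, rep) agree, which is exactly the tiling property the periodic perlin overload provides.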
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00298.html ================================================ 0.9.9 API documentation: GLM_GTC_packing

Include <glm/gtc/packing.hpp> to use the features of this extension. More...

Functions

GLM_FUNC_DECL uint32 packF2x11_1x10 (vec3 const &v)
 First, converts the first two components of the normalized floating-point value v into 11-bit signless floating-point values. More...
 
GLM_FUNC_DECL uint32 packF3x9_E1x5 (vec3 const &v)
 Packs the three components of the floating-point value v into a shared-exponent representation: three 9-bit mantissas and one shared 5-bit exponent (RGB9E5). More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL vec< L, uint16, Q > packHalf (vec< L, float, Q > const &v)
 Returns an unsigned integer vector obtained by converting the components of a floating-point vector to the 16-bit floating-point representation found in the OpenGL Specification. More...
 
GLM_FUNC_DECL uint16 packHalf1x16 (float v)
 Returns an unsigned integer obtained by converting the components of a floating-point scalar to the 16-bit floating-point representation found in the OpenGL Specification, and then packing this 16-bit value into a 16-bit unsigned integer. More...
 
GLM_FUNC_DECL uint64 packHalf4x16 (vec4 const &v)
 Returns an unsigned integer obtained by converting the components of a four-component floating-point vector to the 16-bit floating-point representation found in the OpenGL Specification, and then packing these four 16-bit values into a 64-bit unsigned integer. More...
 
GLM_FUNC_DECL uint32 packI3x10_1x2 (ivec4 const &v)
 Returns an unsigned integer obtained by converting the components of a four-component signed integer vector to the 10-10-10-2-bit signed integer representation found in the OpenGL Specification, and then packing these four values into a 32-bit unsigned integer. More...
 
GLM_FUNC_DECL int packInt2x16 (i16vec2 const &v)
 Convert each component from an integer vector into a packed integer. More...
 
GLM_FUNC_DECL int64 packInt2x32 (i32vec2 const &v)
 Convert each component from an integer vector into a packed integer. More...
 
GLM_FUNC_DECL int16 packInt2x8 (i8vec2 const &v)
 Convert each component from an integer vector into a packed integer. More...
 
GLM_FUNC_DECL int64 packInt4x16 (i16vec4 const &v)
 Convert each component from an integer vector into a packed integer. More...
 
GLM_FUNC_DECL int32 packInt4x8 (i8vec4 const &v)
 Convert each component from an integer vector into a packed integer. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, T, Q > packRGBM (vec< 3, T, Q > const &rgb)
 Returns a four-component vector containing an RGBM encoding of the input color: the rgb components are scaled by a shared multiplier stored in the fourth component. More...
 
template<typename intType , length_t L, typename floatType , qualifier Q>
GLM_FUNC_DECL vec< L, intType, Q > packSnorm (vec< L, floatType, Q > const &v)
 Convert each component of the normalized floating-point vector into signed integer values. More...
 
GLM_FUNC_DECL uint16 packSnorm1x16 (float v)
 First, converts the normalized floating-point value v into a 16-bit integer value. More...
 
GLM_FUNC_DECL uint8 packSnorm1x8 (float s)
 First, converts the normalized floating-point value s into an 8-bit integer value. More...
 
GLM_FUNC_DECL uint16 packSnorm2x8 (vec2 const &v)
 First, converts each component of the normalized floating-point value v into 8-bit integer values. More...
 
GLM_FUNC_DECL uint32 packSnorm3x10_1x2 (vec4 const &v)
 First, converts the first three components of the normalized floating-point value v into 10-bit signed integer values. More...
 
GLM_FUNC_DECL uint64 packSnorm4x16 (vec4 const &v)
 First, converts each component of the normalized floating-point value v into 16-bit integer values. More...
 
GLM_FUNC_DECL uint32 packU3x10_1x2 (uvec4 const &v)
 Returns an unsigned integer obtained by converting the components of a four-component unsigned integer vector to the 10-10-10-2-bit unsigned integer representation found in the OpenGL Specification, and then packing these four values into a 32-bit unsigned integer. More...
 
GLM_FUNC_DECL uint packUint2x16 (u16vec2 const &v)
 Convert each component from an integer vector into a packed unsigned integer. More...
 
GLM_FUNC_DECL uint64 packUint2x32 (u32vec2 const &v)
 Convert each component from an integer vector into a packed unsigned integer. More...
 
GLM_FUNC_DECL uint16 packUint2x8 (u8vec2 const &v)
 Convert each component from an integer vector into a packed unsigned integer. More...
 
GLM_FUNC_DECL uint64 packUint4x16 (u16vec4 const &v)
 Convert each component from an integer vector into a packed unsigned integer. More...
 
GLM_FUNC_DECL uint32 packUint4x8 (u8vec4 const &v)
 Convert each component from an integer vector into a packed unsigned integer. More...
 
template<typename uintType , length_t L, typename floatType , qualifier Q>
GLM_FUNC_DECL vec< L, uintType, Q > packUnorm (vec< L, floatType, Q > const &v)
 Convert each component of the normalized floating-point vector into unsigned integer values. More...
 
GLM_FUNC_DECL uint16 packUnorm1x16 (float v)
 First, converts the normalized floating-point value v into a 16-bit integer value. More...
 
GLM_FUNC_DECL uint16 packUnorm1x5_1x6_1x5 (vec3 const &v)
 Convert each component of the normalized floating-point vector into unsigned integer values. More...
 
GLM_FUNC_DECL uint8 packUnorm1x8 (float v)
 First, converts the normalized floating-point value v into an 8-bit integer value. More...
 
GLM_FUNC_DECL uint8 packUnorm2x3_1x2 (vec3 const &v)
 Convert each component of the normalized floating-point vector into unsigned integer values. More...
 
GLM_FUNC_DECL uint8 packUnorm2x4 (vec2 const &v)
 Convert each component of the normalized floating-point vector into unsigned integer values. More...
 
GLM_FUNC_DECL uint16 packUnorm2x8 (vec2 const &v)
 First, converts each component of the normalized floating-point value v into 8-bit integer values. More...
 
GLM_FUNC_DECL uint32 packUnorm3x10_1x2 (vec4 const &v)
 First, converts the first three components of the normalized floating-point value v into 10-bit unsigned integer values. More...
 
GLM_FUNC_DECL uint16 packUnorm3x5_1x1 (vec4 const &v)
 Convert each component of the normalized floating-point vector into unsigned integer values. More...
 
GLM_FUNC_DECL uint64 packUnorm4x16 (vec4 const &v)
 First, converts each component of the normalized floating-point value v into 16-bit integer values. More...
 
GLM_FUNC_DECL uint16 packUnorm4x4 (vec4 const &v)
 Convert each component of the normalized floating-point vector into unsigned integer values. More...
 
GLM_FUNC_DECL vec3 unpackF2x11_1x10 (uint32 p)
 First, unpacks a single 32-bit unsigned integer p into two 11-bit signless floating-point values and one 10-bit signless floating-point value. More...
 
GLM_FUNC_DECL vec3 unpackF3x9_E1x5 (uint32 p)
 Unpacks a single 32-bit unsigned integer p containing a shared-exponent (RGB9E5) encoding into a three-component floating-point vector. More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL vec< L, float, Q > unpackHalf (vec< L, uint16, Q > const &p)
 Returns a floating-point vector with components obtained by reinterpreting an integer vector as 16-bit floating-point numbers and converting them to 32-bit floating-point values. More...
 
GLM_FUNC_DECL float unpackHalf1x16 (uint16 v)
 Returns a floating-point scalar with components obtained by unpacking a 16-bit unsigned integer into a 16-bit value, interpreted as a 16-bit floating-point number according to the OpenGL Specification, and converting it to 32-bit floating-point values. More...
 
GLM_FUNC_DECL vec4 unpackHalf4x16 (uint64 p)
 Returns a four-component floating-point vector with components obtained by unpacking a 64-bit unsigned integer into four 16-bit values, interpreting those values as 16-bit floating-point numbers according to the OpenGL Specification, and converting them to 32-bit floating-point values. More...
 
GLM_FUNC_DECL ivec4 unpackI3x10_1x2 (uint32 p)
 Unpacks a single 32-bit unsigned integer p into three 10-bit and one 2-bit signed integers. More...
 
GLM_FUNC_DECL i16vec2 unpackInt2x16 (int p)
 Convert a packed integer into an integer vector. More...
 
GLM_FUNC_DECL i32vec2 unpackInt2x32 (int64 p)
 Convert a packed integer into an integer vector. More...
 
GLM_FUNC_DECL i8vec2 unpackInt2x8 (int16 p)
 Convert a packed integer into an integer vector. More...
 
GLM_FUNC_DECL i16vec4 unpackInt4x16 (int64 p)
 Convert a packed integer into an integer vector. More...
 
GLM_FUNC_DECL i8vec4 unpackInt4x8 (int32 p)
 Convert a packed integer into an integer vector. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > unpackRGBM (vec< 4, T, Q > const &rgbm)
 Returns a floating-point vector with components obtained by reinterpreting an integer vector as 16-bit floating-point numbers and converting them to 32-bit floating-point values. More...
 
template<typename floatType , length_t L, typename intType , qualifier Q>
GLM_FUNC_DECL vec< L, floatType, Q > unpackSnorm (vec< L, intType, Q > const &v)
 Convert a packed integer to a normalized floating-point vector. More...
 
GLM_FUNC_DECL float unpackSnorm1x16 (uint16 p)
 First, unpacks a single 16-bit unsigned integer p into a single 16-bit signed integer. More...
 
GLM_FUNC_DECL float unpackSnorm1x8 (uint8 p)
 First, unpacks a single 8-bit unsigned integer p into a single 8-bit signed integer. More...
 
GLM_FUNC_DECL vec2 unpackSnorm2x8 (uint16 p)
 First, unpacks a single 16-bit unsigned integer p into a pair of 8-bit signed integers. More...
 
GLM_FUNC_DECL vec4 unpackSnorm3x10_1x2 (uint32 p)
 First, unpacks a single 32-bit unsigned integer p into three 10-bit and one 2-bit signed integers. More...
 
GLM_FUNC_DECL vec4 unpackSnorm4x16 (uint64 p)
 First, unpacks a single 64-bit unsigned integer p into four 16-bit signed integers. More...
 
GLM_FUNC_DECL uvec4 unpackU3x10_1x2 (uint32 p)
 Unpacks a single 32-bit unsigned integer p into three 10-bit and one 2-bit unsigned integers. More...
 
GLM_FUNC_DECL u16vec2 unpackUint2x16 (uint p)
 Convert a packed integer into an integer vector. More...
 
GLM_FUNC_DECL u32vec2 unpackUint2x32 (uint64 p)
 Convert a packed integer into an integer vector. More...
 
GLM_FUNC_DECL u8vec2 unpackUint2x8 (uint16 p)
 Convert a packed integer into an integer vector. More...
 
GLM_FUNC_DECL u16vec4 unpackUint4x16 (uint64 p)
 Convert a packed integer into an integer vector. More...
 
GLM_FUNC_DECL u8vec4 unpackUint4x8 (uint32 p)
 Convert a packed integer into an integer vector. More...
 
template<typename floatType , length_t L, typename uintType , qualifier Q>
GLM_FUNC_DECL vec< L, floatType, Q > unpackUnorm (vec< L, uintType, Q > const &v)
 Convert a packed integer to a normalized floating-point vector. More...
 
GLM_FUNC_DECL float unpackUnorm1x16 (uint16 p)
 First, unpacks a single 16-bit unsigned integer p into a single 16-bit unsigned integer. More...
 
GLM_FUNC_DECL vec3 unpackUnorm1x5_1x6_1x5 (uint16 p)
 Convert a packed integer to a normalized floating-point vector. More...
 
GLM_FUNC_DECL float unpackUnorm1x8 (uint8 p)
 Convert a single 8-bit integer to a normalized floating-point value. More...
 
GLM_FUNC_DECL vec3 unpackUnorm2x3_1x2 (uint8 p)
 Convert a packed integer to a normalized floating-point vector. More...
 
GLM_FUNC_DECL vec2 unpackUnorm2x4 (uint8 p)
 Convert a packed integer to a normalized floating-point vector. More...
 
GLM_FUNC_DECL vec2 unpackUnorm2x8 (uint16 p)
 First, unpacks a single 16-bit unsigned integer p into a pair of 8-bit unsigned integers. More...
 
GLM_FUNC_DECL vec4 unpackUnorm3x10_1x2 (uint32 p)
 First, unpacks a single 32-bit unsigned integer p into three 10-bit and one 2-bit unsigned integers. More...
 
GLM_FUNC_DECL vec4 unpackUnorm3x5_1x1 (uint16 p)
 Convert a packed integer to a normalized floating-point vector. More...
 
GLM_FUNC_DECL vec4 unpackUnorm4x16 (uint64 p)
 First, unpacks a single 64-bit unsigned integer p into four 16-bit unsigned integers. More...
 
GLM_FUNC_DECL vec4 unpackUnorm4x4 (uint16 p)
 Convert a packed integer to a normalized floating-point vector. More...
 

Detailed Description

Include <glm/gtc/packing.hpp> to use the features of this extension.

This extension provides a set of functions to convert vectors to packed formats.
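The packed formats share one bit-layout convention, stated throughout this page: the first vector component occupies the least-significant bits of the result. A plain-C++ sketch of that convention for the simplest case, two 16-bit integers in one 32-bit word (illustrative stand-ins for glm::packInt2x16 / glm::unpackInt2x16, not GLM's source):

```cpp
#include <cstdint>
#include <utility>

// First component in the least-significant bits, second shifted above it.
std::uint32_t packInt2x16(std::int16_t x, std::int16_t y) {
    return (std::uint32_t)(std::uint16_t)x | ((std::uint32_t)(std::uint16_t)y << 16);
}

// Mask out each 16-bit field and reinterpret it as signed again.
std::pair<std::int16_t, std::int16_t> unpackInt2x16(std::uint32_t p) {
    return { (std::int16_t)(p & 0xffffu), (std::int16_t)(p >> 16) };
}
```

Round-tripping negative values works because the cast through uint16_t preserves the two's-complement bit pattern; the same masking-and-shifting idea extends to the 10-10-10-2 and half-float packers below.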

Function Documentation

GLM_FUNC_DECL uint32 glm::packF2x11_1x10 ( vec3 const &  v)

First, converts the first two components of the normalized floating-point value v into 11-bit signless floating-point values.

Then, converts the third component of the normalized floating-point value v into a 10-bit signless floating-point value. Then, the results are packed into the returned 32-bit unsigned integer.

The first vector component specifies the 11 least-significant bits of the result; the last component specifies the 10 most-significant bits.

See also
GLM_GTC_packing
vec3 unpackF2x11_1x10(uint32 const& p)
GLM_FUNC_DECL uint32 glm::packF3x9_E1x5 ( vec3 const &  v)

Converts the three components of the floating-point value v into a shared-exponent representation: three 9-bit mantissas and a single shared 5-bit exponent.

The mantissas and the exponent are then packed into the returned 32-bit unsigned integer.

packF3x9_E1x5 allows encoding into RGBE / RGB9E5 format

See also
GLM_GTC_packing
vec3 unpackF3x9_E1x5(uint32 const& p)
GLM_FUNC_DECL vec<L, uint16, Q> glm::packHalf ( vec< L, float, Q > const &  v)

Returns an unsigned integer vector obtained by converting the components of a floating-point vector to the 16-bit floating-point representation found in the OpenGL Specification.

The first vector component specifies the 16 least-significant bits of the result; the fourth component specifies the 16 most-significant bits.

See also
GLM_GTC_packing
vec<L, float, Q> unpackHalf(vec<L, uint16, Q> const& p)
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL uint16 glm::packHalf1x16 ( float  v)

Returns an unsigned integer obtained by converting the components of a floating-point scalar to the 16-bit floating-point representation found in the OpenGL Specification, and then packing this 16-bit value into a 16-bit unsigned integer.

See also
GLM_GTC_packing
uint32 packHalf2x16(vec2 const& v)
uint64 packHalf4x16(vec4 const& v)
GLSL packHalf2x16 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL uint64 glm::packHalf4x16 ( vec4 const &  v)

Returns an unsigned integer obtained by converting the components of a four-component floating-point vector to the 16-bit floating-point representation found in the OpenGL Specification, and then packing these four 16-bit values into a 64-bit unsigned integer.

The first vector component specifies the 16 least-significant bits of the result; the fourth component specifies the 16 most-significant bits.

See also
GLM_GTC_packing
uint16 packHalf1x16(float const& v)
uint32 packHalf2x16(vec2 const& v)
GLSL packHalf2x16 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL uint32 glm::packI3x10_1x2 ( ivec4 const &  v)

Returns an unsigned integer obtained by converting the components of a four-component signed integer vector to the 10-10-10-2-bit signed integer representation found in the OpenGL Specification, and then packing these four values into a 32-bit unsigned integer.

The first vector component specifies the 10 least-significant bits of the result; the fourth component specifies the 2 most-significant bits.

See also
GLM_GTC_packing
uint32 packI3x10_1x2(uvec4 const& v)
uint32 packSnorm3x10_1x2(vec4 const& v)
uint32 packUnorm3x10_1x2(vec4 const& v)
ivec4 unpackI3x10_1x2(uint32 const& p)
GLM_FUNC_DECL int glm::packInt2x16 ( i16vec2 const &  v)

Convert each component from an integer vector into a packed integer.

See also
GLM_GTC_packing
i16vec2 unpackInt2x16(int p)
GLM_FUNC_DECL int64 glm::packInt2x32 ( i32vec2 const &  v)

Convert each component from an integer vector into a packed integer.

See also
GLM_GTC_packing
i32vec2 unpackInt2x32(int64 p)
GLM_FUNC_DECL int16 glm::packInt2x8 ( i8vec2 const &  v)

Convert each component from an integer vector into a packed integer.

See also
GLM_GTC_packing
i8vec2 unpackInt2x8(int16 p)
GLM_FUNC_DECL int64 glm::packInt4x16 ( i16vec4 const &  v)

Convert each component from an integer vector into a packed integer.

See also
GLM_GTC_packing
i16vec4 unpackInt4x16(int64 p)
GLM_FUNC_DECL int32 glm::packInt4x8 ( i8vec4 const &  v)

Convert each component from an integer vector into a packed integer.

See also
GLM_GTC_packing
i8vec4 unpackInt4x8(int32 p)
GLM_FUNC_DECL vec<4, T, Q> glm::packRGBM ( vec< 3, T, Q > const &  rgb)

Returns a four-component vector containing an RGBM encoding of the input color.

The rgb components are scaled by a shared multiplier stored in the fourth component.

See also
GLM_GTC_packing
vec<3, T, Q> unpackRGBM(vec<4, T, Q> const& p)
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
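The RGBM idea above can be sketched in plain C++. This is a minimal illustrative encoder/decoder, not GLM's exact implementation; the `kRGBMRange` constant, the `Vec3`/`Vec4` structs, and the 8-bit quantization of the multiplier are assumptions chosen for the sketch:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };

// Assumed shared-multiplier range; RGBM encoders commonly use a fixed range like this.
constexpr float kRGBMRange = 6.0f;

Vec4 packRGBM(Vec3 rgb) {
    float r = rgb.x / kRGBMRange, g = rgb.y / kRGBMRange, b = rgb.z / kRGBMRange;
    float m = std::max({r, g, b, 1e-6f});    // shared multiplier: the largest channel
    m = std::ceil(m * 255.0f) / 255.0f;      // quantize the multiplier to 8 bits
    return { r / m, g / m, b / m, m };
}

Vec3 unpackRGBM(Vec4 rgbm) {
    float s = rgbm.w * kRGBMRange;           // reconstruct the shared scale
    return { rgbm.x * s, rgbm.y * s, rgbm.z * s };
}
```

Because the decode multiplies by exactly the scale the encode divided by, the round trip reproduces the input up to the 8-bit quantization of the multiplier.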
GLM_FUNC_DECL vec<L, intType, Q> glm::packSnorm ( vec< L, floatType, Q > const &  v)

Convert each component of the normalized floating-point vector into signed integer values.

See also
GLM_GTC_packing
vec<L, floatType, Q> unpackSnorm(vec<L, intType, Q> const& p);
GLM_FUNC_DECL uint16 glm::packSnorm1x16 ( float  v)

First, converts the normalized floating-point value v into a 16-bit integer value.

Then, the results are packed into the returned 16-bit unsigned integer.

The conversion to fixed point is done as follows: packSnorm1x16: round(clamp(s, -1, +1) * 32767.0)

See also
GLM_GTC_packing
uint32 packSnorm2x16(vec2 const& v)
uint64 packSnorm4x16(vec4 const& v)
GLSL packSnorm4x8 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL uint8 glm::packSnorm1x8 ( float  s)

First, converts the normalized floating-point value s into an 8-bit integer value.

Then, the results are packed into the returned 8-bit unsigned integer.

The conversion to fixed point is done as follows: packSnorm1x8: round(clamp(s, -1, +1) * 127.0)

See also
GLM_GTC_packing
uint16 packSnorm2x8(vec2 const& v)
uint32 packSnorm4x8(vec4 const& v)
GLSL packSnorm4x8 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL uint16 glm::packSnorm2x8 ( vec2 const &  v)

First, converts each component of the normalized floating-point value v into 8-bit integer values.

Then, the results are packed into the returned 16-bit unsigned integer.

The conversion for component c of v to fixed point is done as follows: packSnorm2x8: round(clamp(c, -1, +1) * 127.0)

The first component of the vector will be written to the least significant bits of the output; the last component will be written to the most significant bits.

See also
GLM_GTC_packing
uint8 packSnorm1x8(float const& v)
uint32 packSnorm4x8(vec4 const& v)
GLSL packSnorm4x8 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
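The packSnorm2x8 formula above can be transcribed directly into standalone C++. This is an illustrative sketch of the documented conversion (round(clamp(c, -1, +1) * 127.0), first component in the least-significant bits), not GLM's code:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Pack two snorm floats into a uint16, per the documented formula.
uint16_t packSnorm2x8(float x, float y) {
    auto toSnorm8 = [](float c) {
        c = std::fmin(std::fmax(c, -1.0f), 1.0f);                       // clamp(c, -1, +1)
        return static_cast<uint8_t>(static_cast<int8_t>(std::round(c * 127.0f)));
    };
    return static_cast<uint16_t>(toSnorm8(x) | (toSnorm8(y) << 8));     // x in the low byte
}

// Inverse mapping: clamp(f / 127.0, -1, +1) on each 8-bit field.
void unpackSnorm2x8(uint16_t p, float& x, float& y) {
    auto fromSnorm8 = [](uint8_t b) {
        float f = static_cast<int8_t>(b) / 127.0f;
        return std::fmin(std::fmax(f, -1.0f), 1.0f);
    };
    x = fromSnorm8(static_cast<uint8_t>(p & 0xFF));
    y = fromSnorm8(static_cast<uint8_t>(p >> 8));
}
```

Note that ±1.0 round-trips exactly (127/127), while intermediate values pick up at most 1/254 of quantization error.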
GLM_FUNC_DECL uint32 glm::packSnorm3x10_1x2 ( vec4 const &  v)

First, converts the first three components of the normalized floating-point value v into 10-bit signed integer values.

Then, converts the fourth component of the normalized floating-point value v into a 2-bit signed integer value. Then, the results are packed into the returned 32-bit unsigned integer.

The conversion for component c of v to fixed point is done as follows: packSnorm3x10_1x2(xyz): round(clamp(c, -1, +1) * 511.0) packSnorm3x10_1x2(w): round(clamp(c, -1, +1) * 1.0)

The first vector component specifies the 10 least-significant bits of the result; the fourth component specifies the 2 most-significant bits.

See also
GLM_GTC_packing
vec4 unpackSnorm3x10_1x2(uint32 const& p)
uint32 packUnorm3x10_1x2(vec4 const& v)
uint32 packU3x10_1x2(uvec4 const& v)
uint32 packI3x10_1x2(ivec4 const& v)
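The 10-10-10-2 snorm layout described above can be sketched as a standalone helper. This is an illustrative bit-packing sketch following the documented field widths and scales (xyz by 511, w by 1, x in the least-significant bits), not GLM's implementation:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Pack four snorm floats into a 10-10-10-2 layout: the signed result of each
// rounding is truncated to its field width (two's complement) and shifted into place.
uint32_t packSnorm3x10_1x2(float x, float y, float z, float w) {
    auto field = [](float c, float scale, int bits) -> uint32_t {
        c = std::fmin(std::fmax(c, -1.0f), 1.0f);            // clamp(c, -1, +1)
        int v = static_cast<int>(std::round(c * scale));
        return static_cast<uint32_t>(v) & ((1u << bits) - 1u);
    };
    return field(x, 511.0f, 10)
         | (field(y, 511.0f, 10) << 10)
         | (field(z, 511.0f, 10) << 20)
         | (field(w, 1.0f, 2) << 30);
}
```

For example, x = 1.0 fills the low ten bits with 511 (0x1FF), and w = -1.0 puts the 2-bit pattern 0b11 in the top two bits.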
GLM_FUNC_DECL uint64 glm::packSnorm4x16 ( vec4 const &  v)

First, converts each component of the normalized floating-point value v into 16-bit integer values.

Then, the results are packed into the returned 64-bit unsigned integer.

The conversion for component c of v to fixed point is done as follows: packSnorm4x16: round(clamp(c, -1, +1) * 32767.0)

The first component of the vector will be written to the least significant bits of the output; the last component will be written to the most significant bits.

See also
GLM_GTC_packing
uint16 packSnorm1x16(float const& v)
uint32 packSnorm2x16(vec2 const& v)
GLSL packSnorm4x8 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL uint32 glm::packU3x10_1x2 ( uvec4 const &  v)

Returns an unsigned integer obtained by converting the components of a four-component unsigned integer vector to the 10-10-10-2-bit unsigned integer representation found in the OpenGL Specification, and then packing these four values into a 32-bit unsigned integer.

The first vector component specifies the 10 least-significant bits of the result; the fourth component specifies the 2 most-significant bits.

See also
GLM_GTC_packing
uint32 packI3x10_1x2(ivec4 const& v)
uint32 packSnorm3x10_1x2(vec4 const& v)
uint32 packUnorm3x10_1x2(vec4 const& v)
ivec4 unpackU3x10_1x2(uint32 const& p)
GLM_FUNC_DECL uint glm::packUint2x16 ( u16vec2 const &  v)

Convert each component from an integer vector into a packed unsigned integer.

See also
GLM_GTC_packing
u16vec2 unpackUint2x16(uint p)
GLM_FUNC_DECL uint64 glm::packUint2x32 ( u32vec2 const &  v)

Convert each component from an integer vector into a packed unsigned integer.

See also
GLM_GTC_packing
u32vec2 unpackUint2x32(uint64 p)
GLM_FUNC_DECL uint16 glm::packUint2x8 ( u8vec2 const &  v)

Convert each component from an integer vector into a packed unsigned integer.

See also
GLM_GTC_packing
u8vec2 unpackUint2x8(uint16 p)
GLM_FUNC_DECL uint64 glm::packUint4x16 ( u16vec4 const &  v)

Convert each component from an integer vector into a packed unsigned integer.

See also
GLM_GTC_packing
u16vec4 unpackUint4x16(uint64 p)
GLM_FUNC_DECL uint32 glm::packUint4x8 ( u8vec4 const &  v)

Convert each component from an integer vector into a packed unsigned integer.

See also
GLM_GTC_packing
u8vec4 unpackUint4x8(uint32 p)
GLM_FUNC_DECL vec<L, uintType, Q> glm::packUnorm ( vec< L, floatType, Q > const &  v)

Convert each component of the normalized floating-point vector into unsigned integer values.

See also
GLM_GTC_packing
vec<L, floatType, Q> unpackUnorm(vec<L, uintType, Q> const& p);
GLM_FUNC_DECL uint16 glm::packUnorm1x16 ( float  v)

First, converts the normalized floating-point value v into a 16-bit integer value.

Then, the results are packed into the returned 16-bit unsigned integer.

The conversion for component c of v to fixed point is done as follows: packUnorm1x16: round(clamp(c, 0, +1) * 65535.0)

See also
GLM_GTC_packing
uint16 packSnorm1x16(float const& v)
uint64 packSnorm4x16(vec4 const& v)
GLSL packUnorm4x8 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
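The scalar unorm conversion above is simple enough to show end to end. This sketch transcribes the documented formula (round(clamp(c, 0, +1) * 65535.0)) and its inverse; it is illustrative, not GLM's code:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// packUnorm1x16 as documented: clamp to [0, 1], scale by 65535, round.
uint16_t packUnorm1x16(float c) {
    c = std::fmin(std::fmax(c, 0.0f), 1.0f);
    return static_cast<uint16_t>(std::round(c * 65535.0f));
}

// Inverse mapping, as documented for unpackUnorm1x16: f / 65535.0.
float unpackUnorm1x16(uint16_t p) {
    return p / 65535.0f;
}
```

Out-of-range inputs clamp rather than wrap, so -2.0 packs to 0 and 1.0 packs to the full 65535.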
GLM_FUNC_DECL uint16 glm::packUnorm1x5_1x6_1x5 ( vec3 const &  v)

Convert each component of the normalized floating-point vector into unsigned integer values.

See also
GLM_GTC_packing
vec3 unpackUnorm1x5_1x6_1x5(uint16 p)
GLM_FUNC_DECL uint8 glm::packUnorm1x8 ( float  v)

First, converts the normalized floating-point value v into an 8-bit integer value.

Then, the results are packed into the returned 8-bit unsigned integer.

The conversion for component c of v to fixed point is done as follows: packUnorm1x8: round(clamp(c, 0, +1) * 255.0)

See also
GLM_GTC_packing
uint16 packUnorm2x8(vec2 const& v)
uint32 packUnorm4x8(vec4 const& v)
GLSL packUnorm4x8 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL uint8 glm::packUnorm2x3_1x2 ( vec3 const &  v)

Convert each component of the normalized floating-point vector into unsigned integer values.

See also
GLM_GTC_packing
vec3 unpackUnorm2x3_1x2(uint8 p)
GLM_FUNC_DECL uint8 glm::packUnorm2x4 ( vec2 const &  v)

Convert each component of the normalized floating-point vector into unsigned integer values.

See also
GLM_GTC_packing
vec2 unpackUnorm2x4(uint8 p)
GLM_FUNC_DECL uint16 glm::packUnorm2x8 ( vec2 const &  v)

First, converts each component of the normalized floating-point value v into 8-bit integer values.

Then, the results are packed into the returned 16-bit unsigned integer.

The conversion for component c of v to fixed point is done as follows: packUnorm2x8: round(clamp(c, 0, +1) * 255.0)

The first component of the vector will be written to the least significant bits of the output; the last component will be written to the most significant bits.

See also
GLM_GTC_packing
uint8 packUnorm1x8(float const& v)
uint32 packUnorm4x8(vec4 const& v)
GLSL packUnorm4x8 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL uint32 glm::packUnorm3x10_1x2 ( vec4 const &  v)

First, converts the first three components of the normalized floating-point value v into 10-bit unsigned integer values.

Then, converts the fourth component of the normalized floating-point value v into a 2-bit unsigned integer value. Then, the results are packed into the returned 32-bit unsigned integer.

The conversion for component c of v to fixed point is done as follows: packUnorm3x10_1x2(xyz): round(clamp(c, 0, +1) * 1023.0) packUnorm3x10_1x2(w): round(clamp(c, 0, +1) * 3.0)

The first vector component specifies the 10 least-significant bits of the result; the fourth component specifies the 2 most-significant bits.

See also
GLM_GTC_packing
vec4 unpackUnorm3x10_1x2(uint32 const& p)
uint32 packSnorm3x10_1x2(vec4 const& v)
uint32 packU3x10_1x2(uvec4 const& v)
uint32 packI3x10_1x2(ivec4 const& v)
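The unorm 10-10-10-2 packing described above (the layout commonly used for RGB10_A2 vertex data) can be sketched the same way as its snorm counterpart. An illustrative transcription of the documented scales (xyz by 1023, w by 3), not GLM's implementation:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Pack four unorm floats into a 10-10-10-2 layout, x in the least-significant bits.
uint32_t packUnorm3x10_1x2(float x, float y, float z, float w) {
    auto field = [](float c, float scale) -> uint32_t {
        c = std::fmin(std::fmax(c, 0.0f), 1.0f);          // clamp(c, 0, +1)
        return static_cast<uint32_t>(std::round(c * scale));
    };
    return field(x, 1023.0f)
         | (field(y, 1023.0f) << 10)
         | (field(z, 1023.0f) << 20)
         | (field(w, 3.0f) << 30);
}
```

Here x = 1.0 fills the low ten bits with 1023 (0x3FF) and w = 1.0 sets both of the top two bits.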
GLM_FUNC_DECL uint16 glm::packUnorm3x5_1x1 ( vec4 const &  v)

Convert each component of the normalized floating-point vector into unsigned integer values.

See also
GLM_GTC_packing
vec4 unpackUnorm3x5_1x1(uint16 p)
GLM_FUNC_DECL uint64 glm::packUnorm4x16 ( vec4 const &  v)

First, converts each component of the normalized floating-point value v into 16-bit integer values.

Then, the results are packed into the returned 64-bit unsigned integer.

The conversion for component c of v to fixed point is done as follows: packUnorm4x16: round(clamp(c, 0, +1) * 65535.0)

The first component of the vector will be written to the least significant bits of the output; the last component will be written to the most significant bits.

See also
GLM_GTC_packing
uint16 packUnorm1x16(float const& v)
uint32 packUnorm2x16(vec2 const& v)
GLSL packUnorm4x8 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL uint16 glm::packUnorm4x4 ( vec4 const &  v)

Convert each component of the normalized floating-point vector into unsigned integer values.

See also
GLM_GTC_packing
vec4 unpackUnorm4x4(uint16 p)
GLM_FUNC_DECL vec3 glm::unpackF2x11_1x10 ( uint32  p)

First, unpacks a single 32-bit unsigned integer p into two 11-bit unsigned floating-point values and one 10-bit unsigned floating-point value.

Then, each component is converted to a normalized floating-point value to generate the returned three-component vector.

The first component of the returned vector will be extracted from the least significant bits of the input; the last component will be extracted from the most significant bits.

See also
GLM_GTC_packing
uint32 packF2x11_1x10(vec3 const& v)
GLM_FUNC_DECL vec3 glm::unpackF3x9_E1x5 ( uint32  p)

First, unpacks a single 32-bit unsigned integer p into three 9-bit mantissas and a shared 5-bit exponent.

Then, each component is converted to a normalized floating-point value to generate the returned three-component vector.

The first component of the returned vector will be extracted from the least significant bits of the input; the last component will be extracted from the most significant bits.

unpackF3x9_E1x5 allows decoding RGBE / RGB9E5 data

See also
GLM_GTC_packing
uint32 packF3x9_E1x5(vec3 const& v)
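The shared-exponent decode above can be sketched in a few lines. This follows the common RGB9E5 convention (9-bit mantissas in the low 27 bits, a 5-bit exponent in the top bits, exponent bias 15); the bit layout and bias here are assumptions drawn from that convention, not a transcription of GLM's code:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Decode an RGB9E5 word: each 9-bit mantissa is scaled by 2^(e - bias - 9).
void unpackF3x9_E1x5(uint32_t p, float& r, float& g, float& b) {
    const int bias = 15, mantissaBits = 9;
    int e = static_cast<int>(p >> 27);                         // shared 5-bit exponent
    float scale = std::ldexp(1.0f, e - bias - mantissaBits);   // 2^(e - 15 - 9)
    r = static_cast<float>(p & 0x1FF) * scale;
    g = static_cast<float>((p >> 9) & 0x1FF) * scale;
    b = static_cast<float>((p >> 18) & 0x1FF) * scale;
}
```

With this convention, a red mantissa of 256 under exponent 16 decodes to 256 * 2^(16-15-9) = 1.0.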
GLM_FUNC_DECL vec<L, float, Q> glm::unpackHalf ( vec< L, uint16, Q > const &  p)

Returns a floating-point vector with components obtained by reinterpreting an integer vector as 16-bit floating-point numbers and converting them to 32-bit floating-point values.

The first component of the vector is obtained from the 16 least-significant bits of p; the last component is obtained from the 16 most-significant bits of p.

See also
GLM_GTC_packing
vec<L, uint16, Q> packHalf(vec<L, float, Q> const& v)
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL float glm::unpackHalf1x16 ( uint16  v)

Returns a floating-point scalar obtained by unpacking a 16-bit unsigned integer into a 16-bit value, interpreting it as a 16-bit floating-point number according to the OpenGL Specification, and converting it to a 32-bit floating-point value.

See also
GLM_GTC_packing
vec2 unpackHalf2x16(uint32 const& v)
vec4 unpackHalf4x16(uint64 const& v)
GLSL unpackHalf2x16 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL vec4 glm::unpackHalf4x16 ( uint64  p)

Returns a four-component floating-point vector with components obtained by unpacking a 64-bit unsigned integer into four 16-bit values, interpreting those values as 16-bit floating-point numbers according to the OpenGL Specification, and converting them to 32-bit floating-point values.

The first component of the vector is obtained from the 16 least-significant bits of p; the fourth component is obtained from the 16 most-significant bits of p.

See also
GLM_GTC_packing
float unpackHalf1x16(uint16 const& v)
vec2 unpackHalf2x16(uint32 const& v)
GLSL unpackHalf2x16 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL ivec4 glm::unpackI3x10_1x2 ( uint32  p)

Unpacks a single 32-bit unsigned integer p into three 10-bit and one 2-bit signed integers.

The first component of the returned vector will be extracted from the least significant bits of the input; the last component will be extracted from the most significant bits.

See also
GLM_GTC_packing
uint32 packI3x10_1x2(ivec4 const& v)
vec4 unpackSnorm3x10_1x2(uint32 const& p)
uvec4 unpackU3x10_1x2(uint32 const& p)
GLM_FUNC_DECL i16vec2 glm::unpackInt2x16 ( int  p)

Convert a packed integer into an integer vector.

See also
GLM_GTC_packing
int packInt2x16(i16vec2 const& v)
GLM_FUNC_DECL i32vec2 glm::unpackInt2x32 ( int64  p)

Convert a packed integer into an integer vector.

See also
GLM_GTC_packing
int64 packInt2x32(i32vec2 const& v)
GLM_FUNC_DECL i8vec2 glm::unpackInt2x8 ( int16  p)

Convert a packed integer into an integer vector.

See also
GLM_GTC_packing
int16 packInt2x8(i8vec2 const& v)
GLM_FUNC_DECL i16vec4 glm::unpackInt4x16 ( int64  p)

Convert a packed integer into an integer vector.

See also
GLM_GTC_packing
int64 packInt4x16(i16vec4 const& v)
GLM_FUNC_DECL i8vec4 glm::unpackInt4x8 ( int32  p)

Convert a packed integer into an integer vector.

See also
GLM_GTC_packing
int32 packInt4x8(i8vec4 const& v)
GLM_FUNC_DECL vec<3, T, Q> glm::unpackRGBM ( vec< 4, T, Q > const &  rgbm)

Returns a three-component floating-point color obtained by decoding an RGBM-encoded four-component vector.

The first three components of the input are multiplied by the shared multiplier stored in the fourth component to reconstruct the color.

See also
GLM_GTC_packing
vec<4, T, Q> packRGBM(vec<3, float, Q> const& v)
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL vec<L, floatType, Q> glm::unpackSnorm ( vec< L, intType, Q > const &  v)

Convert a packed integer to a normalized floating-point vector.

See also
GLM_GTC_packing
vec<L, intType, Q> packSnorm(vec<L, floatType, Q> const& v)
GLM_FUNC_DECL float glm::unpackSnorm1x16 ( uint16  p)

First, unpacks a single 16-bit unsigned integer p into a single 16-bit signed integer.

Then, each component is converted to a normalized floating-point value to generate the returned scalar.

The conversion for unpacked fixed-point value f to floating point is done as follows: unpackSnorm1x16: clamp(f / 32767.0, -1, +1)

See also
GLM_GTC_packing
vec2 unpackSnorm2x16(uint32 p)
vec4 unpackSnorm4x16(uint64 p)
GLSL unpackSnorm4x8 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL float glm::unpackSnorm1x8 ( uint8  p)

First, unpacks a single 8-bit unsigned integer p into a single 8-bit signed integer.

Then, the value is converted to a normalized floating-point value to generate the returned scalar.

The conversion for unpacked fixed-point value f to floating point is done as follows: unpackSnorm1x8: clamp(f / 127.0, -1, +1)

See also
GLM_GTC_packing
vec2 unpackSnorm2x8(uint16 p)
vec4 unpackSnorm4x8(uint32 p)
GLSL unpackSnorm4x8 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL vec2 glm::unpackSnorm2x8 ( uint16  p)

First, unpacks a single 16-bit unsigned integer p into a pair of 8-bit signed integers.

Then, each component is converted to a normalized floating-point value to generate the returned two-component vector.

The conversion for unpacked fixed-point value f to floating point is done as follows: unpackSnorm2x8: clamp(f / 127.0, -1, +1)

The first component of the returned vector will be extracted from the least significant bits of the input; the last component will be extracted from the most significant bits.

See also
GLM_GTC_packing
float unpackSnorm1x8(uint8 p)
vec4 unpackSnorm4x8(uint32 p)
GLSL unpackSnorm4x8 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL vec4 glm::unpackSnorm3x10_1x2 ( uint32  p)

First, unpacks a single 32-bit unsigned integer p into three 10-bit and one 2-bit signed integers.

Then, each component is converted to a normalized floating-point value to generate the returned four-component vector.

The conversion for unpacked fixed-point value f to floating point is done as follows: unpackSnorm3x10_1x2(xyz): clamp(f / 511.0, -1, +1) unpackSnorm3x10_1x2(w): clamp(f / 1.0, -1, +1)

The first component of the returned vector will be extracted from the least significant bits of the input; the last component will be extracted from the most significant bits.

See also
GLM_GTC_packing
uint32 packSnorm3x10_1x2(vec4 const& v)
vec4 unpackUnorm3x10_1x2(uint32 const& p)
ivec4 unpackI3x10_1x2(uint32 const& p)
uvec4 unpackU3x10_1x2(uint32 const& p)
GLM_FUNC_DECL vec4 glm::unpackSnorm4x16 ( uint64  p)

First, unpacks a single 64-bit unsigned integer p into four 16-bit signed integers.

Then, each component is converted to a normalized floating-point value to generate the returned four-component vector.

The conversion for unpacked fixed-point value f to floating point is done as follows: unpackSnorm4x16: clamp(f / 32767.0, -1, +1)

The first component of the returned vector will be extracted from the least significant bits of the input; the last component will be extracted from the most significant bits.

See also
GLM_GTC_packing
float unpackSnorm1x16(uint16 p)
vec2 unpackSnorm2x16(uint32 p)
GLSL unpackSnorm4x8 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL uvec4 glm::unpackU3x10_1x2 ( uint32  p)

Unpacks a single 32-bit unsigned integer p into three 10-bit and one 2-bit unsigned integers.

The first component of the returned vector will be extracted from the least significant bits of the input; the last component will be extracted from the most significant bits.

See also
GLM_GTC_packing
uint32 packU3x10_1x2(uvec4 const& v)
vec4 unpackSnorm3x10_1x2(uint32 const& p)
ivec4 unpackI3x10_1x2(uint32 const& p)
GLM_FUNC_DECL u16vec2 glm::unpackUint2x16 ( uint  p)

Convert a packed integer into an integer vector.

See also
GLM_GTC_packing
uint packUint2x16(u16vec2 const& v)
GLM_FUNC_DECL u32vec2 glm::unpackUint2x32 ( uint64  p)

Convert a packed integer into an integer vector.

See also
GLM_GTC_packing
uint64 packUint2x32(u32vec2 const& v)
GLM_FUNC_DECL u8vec2 glm::unpackUint2x8 ( uint16  p)

Convert a packed integer into an integer vector.

See also
GLM_GTC_packing
uint16 packUint2x8(u8vec2 const& v)
GLM_FUNC_DECL u16vec4 glm::unpackUint4x16 ( uint64  p)

Convert a packed integer into an integer vector.

See also
GLM_GTC_packing
uint64 packUint4x16(u16vec4 const& v)
GLM_FUNC_DECL u8vec4 glm::unpackUint4x8 ( uint32  p)

Convert a packed integer into an integer vector.

See also
GLM_GTC_packing
uint32 packUint4x8(u8vec4 const& v)
GLM_FUNC_DECL vec<L, floatType, Q> glm::unpackUnorm ( vec< L, uintType, Q > const &  v)

Convert a packed integer to a normalized floating-point vector.

See also
GLM_GTC_packing
vec<L, uintType, Q> packUnorm(vec<L, floatType, Q> const& v)
GLM_FUNC_DECL float glm::unpackUnorm1x16 ( uint16  p)

First, unpacks a single 16-bit unsigned integer p into a 16-bit unsigned integer value.

Then, the value is converted to a normalized floating-point value to generate the returned scalar.

The conversion for unpacked fixed-point value f to floating point is done as follows: unpackUnorm1x16: f / 65535.0

See also
GLM_GTC_packing
vec2 unpackUnorm2x16(uint32 p)
vec4 unpackUnorm4x16(uint64 p)
GLSL unpackUnorm2x16 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL vec3 glm::unpackUnorm1x5_1x6_1x5 ( uint16  p)

Convert a packed integer to a normalized floating-point vector.

See also
GLM_GTC_packing
uint16 packUnorm1x5_1x6_1x5(vec3 const& v)
GLM_FUNC_DECL float glm::unpackUnorm1x8 ( uint8  p)

Convert a single 8-bit integer to a normalized floating-point value.

The conversion for unpacked fixed-point value f to floating point is done as follows: unpackUnorm1x8: f / 255.0

See also
GLM_GTC_packing
vec2 unpackUnorm2x8(uint16 p)
vec4 unpackUnorm4x8(uint32 p)
GLSL unpackUnorm4x8 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL vec3 glm::unpackUnorm2x3_1x2 ( uint8  p)

Convert a packed integer to a normalized floating-point vector.

See also
GLM_GTC_packing
uint8 packUnorm2x3_1x2(vec3 const& v)
GLM_FUNC_DECL vec2 glm::unpackUnorm2x4 ( uint8  p)

Convert a packed integer to a normalized floating-point vector.

See also
GLM_GTC_packing
uint8 packUnorm2x4(vec2 const& v)
GLM_FUNC_DECL vec2 glm::unpackUnorm2x8 ( uint16  p)

First, unpacks a single 16-bit unsigned integer p into a pair of 8-bit unsigned integers.

Then, each component is converted to a normalized floating-point value to generate the returned two-component vector.

The conversion for unpacked fixed-point value f to floating point is done as follows: unpackUnorm2x8: f / 255.0

The first component of the returned vector will be extracted from the least significant bits of the input; the last component will be extracted from the most significant bits.

See also
GLM_GTC_packing
float unpackUnorm1x8(uint8 v)
vec4 unpackUnorm4x8(uint32 p)
GLSL unpackUnorm4x8 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL vec4 glm::unpackUnorm3x10_1x2 ( uint32  p)

First, unpacks a single 32-bit unsigned integer p into three 10-bit and one 2-bit unsigned integers.

Then, each component is converted to a normalized floating-point value to generate the returned four-component vector.

The conversion for unpacked fixed-point value f to floating point is done as follows: unpackUnorm3x10_1x2(xyz): clamp(f / 1023.0, 0, +1) unpackUnorm3x10_1x2(w): clamp(f / 3.0, 0, +1)

The first component of the returned vector will be extracted from the least significant bits of the input; the last component will be extracted from the most significant bits.

See also
GLM_GTC_packing
uint32 packUnorm3x10_1x2(vec4 const& v)
vec4 unpackSnorm3x10_1x2(uint32 const& p)
ivec4 unpackI3x10_1x2(uint32 const& p)
uvec4 unpackU3x10_1x2(uint32 const& p)
GLM_FUNC_DECL vec4 glm::unpackUnorm3x5_1x1 ( uint16  p)

Convert a packed integer to a normalized floating-point vector.

See also
GLM_GTC_packing
uint16 packUnorm3x5_1x1(vec4 const& v)
GLM_FUNC_DECL vec4 glm::unpackUnorm4x16 ( uint64  p)

First, unpacks a single 64-bit unsigned integer p into four 16-bit unsigned integers.

Then, each component is converted to a normalized floating-point value to generate the returned four-component vector.

The conversion for unpacked fixed-point value f to floating point is done as follows: unpackUnorm4x16: f / 65535.0

The first component of the returned vector will be extracted from the least significant bits of the input; the last component will be extracted from the most significant bits.

See also
GLM_GTC_packing
float unpackUnorm1x16(uint16 p)
vec2 unpackUnorm2x16(uint32 p)
GLSL unpackUnorm2x16 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL vec4 glm::unpackUnorm4x4 ( uint16  p)

Convert a packed integer to a normalized floating-point vector.

See also
GLM_GTC_packing
uint16 packUnorm4x4(vec4 const& v)
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00299.html ================================================ 0.9.9 API documentation: GLM_GTC_quaternion
GLM_GTC_quaternion

Include <glm/gtc/quaternion.hpp> to use the features of this extension. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > eulerAngles (qua< T, Q > const &x)
 Returns Euler angles, pitch as x, yaw as y, roll as z. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, bool, Q > greaterThan (qua< T, Q > const &x, qua< T, Q > const &y)
 Returns the component-wise comparison of result x > y. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, bool, Q > greaterThanEqual (qua< T, Q > const &x, qua< T, Q > const &y)
 Returns the component-wise comparison of result x >= y. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, bool, Q > lessThan (qua< T, Q > const &x, qua< T, Q > const &y)
 Returns the component-wise comparison result of x < y. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, bool, Q > lessThanEqual (qua< T, Q > const &x, qua< T, Q > const &y)
 Returns the component-wise comparison of result x <= y. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 3, T, Q > mat3_cast (qua< T, Q > const &x)
 Converts a quaternion to a 3 * 3 matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > mat4_cast (qua< T, Q > const &x)
 Converts a quaternion to a 4 * 4 matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T pitch (qua< T, Q > const &x)
 Returns pitch value of euler angles expressed in radians. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > quat_cast (mat< 3, 3, T, Q > const &x)
 Converts a pure rotation 3 * 3 matrix to a quaternion. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > quat_cast (mat< 4, 4, T, Q > const &x)
 Converts a pure rotation 4 * 4 matrix to a quaternion. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > quatLookAt (vec< 3, T, Q > const &direction, vec< 3, T, Q > const &up)
 Build a look at quaternion based on the default handedness. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > quatLookAtLH (vec< 3, T, Q > const &direction, vec< 3, T, Q > const &up)
 Build a left-handed look at quaternion. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > quatLookAtRH (vec< 3, T, Q > const &direction, vec< 3, T, Q > const &up)
 Build a right-handed look at quaternion. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T roll (qua< T, Q > const &x)
 Returns roll value of euler angles expressed in radians. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T yaw (qua< T, Q > const &x)
 Returns yaw value of euler angles expressed in radians. More...
 

Detailed Description

Include <glm/gtc/quaternion.hpp> to use the features of this extension.

Defines a templated quaternion type and several quaternion operations.

Function Documentation

GLM_FUNC_DECL vec<3, T, Q> glm::eulerAngles ( qua< T, Q > const &  x)

Returns Euler angles, pitch as x, yaw as y, roll as z.

The result is expressed in radians.

Template Parameters
T: Floating-point scalar types.
See also
GLM_GTC_quaternion
GLM_FUNC_DECL vec<4, bool, Q> glm::greaterThan ( qua< T, Q > const &  x,
qua< T, Q > const &  y 
)

Returns the component-wise comparison of result x > y.

Template Parameters
T: Floating-point scalar types
Q: Value from qualifier enum
See also
GLM_EXT_quaternion_relational
GLM_FUNC_DECL vec<4, bool, Q> glm::greaterThanEqual ( qua< T, Q > const &  x,
qua< T, Q > const &  y 
)

Returns the component-wise comparison of result x >= y.

Template Parameters
T: Floating-point scalar types
Q: Value from qualifier enum
See also
GLM_EXT_quaternion_relational
GLM_FUNC_DECL vec<4, bool, Q> glm::lessThan ( qua< T, Q > const &  x,
qua< T, Q > const &  y 
)

Returns the component-wise comparison result of x < y.

Template Parameters
T: Floating-point scalar types
Q: Value from qualifier enum
See also
GLM_EXT_quaternion_relational
GLM_FUNC_DECL vec<4, bool, Q> glm::lessThanEqual ( qua< T, Q > const &  x,
qua< T, Q > const &  y 
)

Returns the component-wise comparison of result x <= y.

Template Parameters
T: Floating-point scalar types
Q: Value from qualifier enum
See also
GLM_EXT_quaternion_relational
GLM_FUNC_DECL mat<3, 3, T, Q> glm::mat3_cast ( qua< T, Q > const &  x)

Converts a quaternion to a 3 * 3 matrix.

Template Parameters
T: Floating-point scalar types.
See also
GLM_GTC_quaternion

Referenced by glm::toMat3().
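The conversion mat3_cast performs is the standard unit-quaternion-to-rotation-matrix formula, which can be sketched without GLM. The `Quat` struct and row-major output here are assumptions for the sketch (GLM itself stores matrices column-major); the arithmetic is the textbook formula:

```cpp
#include <cassert>
#include <cmath>

struct Quat { float w, x, y, z; };   // assumed field order for this sketch

// Build the 3x3 rotation matrix of a unit quaternion (row-major here).
void quatToMat3(const Quat& q, float m[3][3]) {
    float xx = q.x * q.x, yy = q.y * q.y, zz = q.z * q.z;
    float xy = q.x * q.y, xz = q.x * q.z, yz = q.y * q.z;
    float wx = q.w * q.x, wy = q.w * q.y, wz = q.w * q.z;
    m[0][0] = 1 - 2 * (yy + zz); m[0][1] = 2 * (xy - wz);     m[0][2] = 2 * (xz + wy);
    m[1][0] = 2 * (xy + wz);     m[1][1] = 1 - 2 * (xx + zz); m[1][2] = 2 * (yz - wx);
    m[2][0] = 2 * (xz - wy);     m[2][1] = 2 * (yz + wx);     m[2][2] = 1 - 2 * (xx + yy);
}
```

A 90-degree rotation about +z, q = (cos 45°, 0, 0, sin 45°), maps the x-axis column onto the y-axis, as expected.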

GLM_FUNC_DECL mat<4, 4, T, Q> glm::mat4_cast ( qua< T, Q > const &  x)

Converts a quaternion to a 4 * 4 matrix.

Template Parameters
T: Floating-point scalar types.
See also
GLM_GTC_quaternion

Referenced by glm::toMat4().

GLM_FUNC_DECL T glm::pitch ( qua< T, Q > const &  x)

Returns pitch value of euler angles expressed in radians.

Template Parameters
T: Floating-point scalar types.
See also
GLM_GTC_quaternion
GLM_FUNC_DECL qua<T, Q> glm::quat_cast ( mat< 3, 3, T, Q > const &  x)

Converts a pure rotation 3 * 3 matrix to a quaternion.

Template Parameters
T: Floating-point scalar types.
See also
GLM_GTC_quaternion

Referenced by glm::toQuat().

GLM_FUNC_DECL qua<T, Q> glm::quat_cast ( mat< 4, 4, T, Q > const &  x)

Converts a pure rotation 4 * 4 matrix to a quaternion.

Template Parameters
T: Floating-point scalar types.
See also
GLM_GTC_quaternion
GLM_FUNC_DECL qua<T, Q> glm::quatLookAt ( vec< 3, T, Q > const &  direction,
vec< 3, T, Q > const &  up 
)

Build a look at quaternion based on the default handedness.

Parameters
direction: Desired forward direction. Needs to be normalized.
up: Up vector, how the camera is oriented. Typically (0, 1, 0).
GLM_FUNC_DECL qua<T, Q> glm::quatLookAtLH ( vec< 3, T, Q > const &  direction,
vec< 3, T, Q > const &  up 
)

Build a left-handed look at quaternion.

Parameters
direction: Desired forward direction onto which the +z-axis gets mapped. Needs to be normalized.
up: Up vector, how the camera is oriented. Typically (0, 1, 0).
GLM_FUNC_DECL qua<T, Q> glm::quatLookAtRH ( vec< 3, T, Q > const &  direction,
vec< 3, T, Q > const &  up 
)

Build a right-handed look at quaternion.

Parameters
direction: Desired forward direction onto which the -z-axis gets mapped. Needs to be normalized.
up: Up vector, how the camera is oriented. Typically (0, 1, 0).
GLM_FUNC_DECL T glm::roll ( qua< T, Q > const &  x)

Returns roll value of euler angles expressed in radians.

Template Parameters
T: Floating-point scalar types.
See also
GLM_GTC_quaternion
GLM_FUNC_DECL T glm::yaw ( qua< T, Q > const &  x)

Returns yaw value of euler angles expressed in radians.

Template Parameters
T: Floating-point scalar types.
See also
GLM_GTC_quaternion
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00300.html ================================================ 0.9.9 API documentation: GLM_GTC_random
GLM_GTC_random

Include <glm/gtc/random.hpp> to use the features of this extension. More...

Functions

template<typename T >
GLM_FUNC_DECL vec< 3, T, defaultp > ballRand (T Radius)
 Generate a random 3D vector whose coordinates are regularly distributed within the volume of a ball of a given radius. More...
 
template<typename T >
GLM_FUNC_DECL vec< 2, T, defaultp > circularRand (T Radius)
 Generate a random 2D vector whose coordinates are regularly distributed on a circle of a given radius. More...
 
template<typename T >
GLM_FUNC_DECL vec< 2, T, defaultp > diskRand (T Radius)
 Generate a random 2D vector whose coordinates are regularly distributed within the area of a disk of a given radius. More...
 
template<typename genType >
GLM_FUNC_DECL genType gaussRand (genType Mean, genType Deviation)
 Generate random numbers according to a gaussian distribution with the given Mean and Deviation. More...
 
template<typename genType >
GLM_FUNC_DECL genType linearRand (genType Min, genType Max)
 Generate random numbers in the interval [Min, Max], according to a linear distribution. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > linearRand (vec< L, T, Q > const &Min, vec< L, T, Q > const &Max)
 Generate random numbers in the interval [Min, Max], according to a linear distribution. More...
 
template<typename T >
GLM_FUNC_DECL vec< 3, T, defaultp > sphericalRand (T Radius)
 Generate a random 3D vector whose coordinates are regularly distributed on a sphere of a given radius. More...
 

Detailed Description

Include <glm/gtc/random.hpp> to use the features of this extension.

Generate random numbers from various distribution methods.
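These distributions are straightforward to emulate in plain C++ with &lt;random&gt;. The sketch below mirrors the documented semantics of linearRand and sphericalRand (uniform on the sphere surface via a normalized gaussian sample, one common construction); it is illustrative only, not GLM's implementation, and the names linear_rand / spherical_rand are invented here:

```cpp
#include <cmath>
#include <random>

static std::mt19937 rng{42};  // fixed seed for reproducibility

// linearRand-like: uniform value in the closed interval [Min, Max].
double linear_rand(double Min, double Max) {
    std::uniform_real_distribution<double> d(Min, Max);
    return d(rng);
}

// sphericalRand-like: point uniformly distributed on the surface of a
// sphere of the given radius, built by normalizing a 3D gaussian sample
// (rejection-free, direction-unbiased construction).
struct P3 { double x, y, z; };
P3 spherical_rand(double radius) {
    std::normal_distribution<double> g(0.0, 1.0);
    double x = g(rng), y = g(rng), z = g(rng);
    double n = std::sqrt(x * x + y * y + z * z);
    return {radius * x / n, radius * y / n, radius * z / n};
}
```

diskRand and ballRand differ only in also drawing a radius so that area (respectively volume) is covered uniformly.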

Function Documentation

GLM_FUNC_DECL vec<3, T, defaultp> glm::ballRand ( T  Radius)

Generate a random 3D vector whose coordinates are regularly distributed within the volume of a ball of a given radius.

See also
GLM_GTC_random
GLM_FUNC_DECL vec<2, T, defaultp> glm::circularRand ( T  Radius)

Generate a random 2D vector whose coordinates are regularly distributed on a circle of a given radius.

See also
GLM_GTC_random
GLM_FUNC_DECL vec<2, T, defaultp> glm::diskRand ( T  Radius)

Generate a random 2D vector whose coordinates are regularly distributed within the area of a disk of a given radius.

See also
GLM_GTC_random
GLM_FUNC_DECL genType glm::gaussRand ( genType  Mean,
genType  Deviation 
)

Generate random numbers according to a gaussian distribution with the given Mean and Deviation.

See also
GLM_GTC_random
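gaussRand's behavior, parameterized by Mean and Deviation rather than an interval, can be sketched with the classic Box-Muller transform. This is an illustrative stand-in (the name gauss_rand is invented here), not GLM's actual implementation:

```cpp
#include <cmath>
#include <random>

// Box-Muller: turn two independent uniforms in (0, 1] into one sample
// from a normal distribution with the requested Mean and Deviation.
double gauss_rand(double Mean, double Deviation, std::mt19937& rng) {
    const double two_pi = 2.0 * std::acos(-1.0);
    std::uniform_real_distribution<double> u(1e-12, 1.0);  // avoid log(0)
    double u1 = u(rng), u2 = u(rng);
    double z = std::sqrt(-2.0 * std::log(u1)) * std::cos(two_pi * u2);
    return Mean + Deviation * z;
}
```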
GLM_FUNC_DECL genType glm::linearRand ( genType  Min,
genType  Max 
)

Generate random numbers in the interval [Min, Max], according to a linear distribution.

Parameters
Min  Minimum value included in the sampling
Max  Maximum value included in the sampling
Template Parameters
genType  Value type. Currently supported: float or double scalars.
See also
GLM_GTC_random
GLM_FUNC_DECL vec<L, T, Q> glm::linearRand ( vec< L, T, Q > const &  Min,
vec< L, T, Q > const &  Max 
)

Generate random numbers in the interval [Min, Max], according to a linear distribution.

Parameters
Min  Minimum value included in the sampling
Max  Maximum value included in the sampling
Template Parameters
T  Value type. Currently supported: float or double.
See also
GLM_GTC_random
GLM_FUNC_DECL vec<3, T, defaultp> glm::sphericalRand ( T  Radius)

Generate a random 3D vector whose coordinates are regularly distributed on a sphere of a given radius.

See also
GLM_GTC_random
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00301.html ================================================ 0.9.9 API documentation: GLM_GTC_reciprocal
GLM_GTC_reciprocal

Include <glm/gtc/reciprocal.hpp> to use the features of this extension. More...

Functions

template<typename genType >
GLM_FUNC_DECL genType acot (genType x)
 Inverse cotangent function. More...
 
template<typename genType >
GLM_FUNC_DECL genType acoth (genType x)
 Inverse cotangent hyperbolic function. More...
 
template<typename genType >
GLM_FUNC_DECL genType acsc (genType x)
 Inverse cosecant function. More...
 
template<typename genType >
GLM_FUNC_DECL genType acsch (genType x)
 Inverse cosecant hyperbolic function. More...
 
template<typename genType >
GLM_FUNC_DECL genType asec (genType x)
 Inverse secant function. More...
 
template<typename genType >
GLM_FUNC_DECL genType asech (genType x)
 Inverse secant hyperbolic function. More...
 
template<typename genType >
GLM_FUNC_DECL genType cot (genType angle)
 Cotangent function. More...
 
template<typename genType >
GLM_FUNC_DECL genType coth (genType angle)
 Cotangent hyperbolic function. More...
 
template<typename genType >
GLM_FUNC_DECL genType csc (genType angle)
 Cosecant function. More...
 
template<typename genType >
GLM_FUNC_DECL genType csch (genType angle)
 Cosecant hyperbolic function. More...
 
template<typename genType >
GLM_FUNC_DECL genType sec (genType angle)
 Secant function. More...
 
template<typename genType >
GLM_FUNC_DECL genType sech (genType angle)
 Secant hyperbolic function. More...
 

Detailed Description

Include <glm/gtc/reciprocal.hpp> to use the features of this extension.

Define secant, cosecant and cotangent functions.
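The reciprocal functions are thin wrappers over the defining trig identities listed in the documentation below. A plain-C++ sketch of the scalar cases (illustrative, not GLM's code; GLM additionally supports vector arguments):

```cpp
#include <cmath>

// Reciprocal trigonometric functions via their defining identities.
double sec(double angle) { return 1.0 / std::cos(angle); }  // hypotenuse / adjacent
double csc(double angle) { return 1.0 / std::sin(angle); }  // hypotenuse / opposite
double cot(double angle) { return 1.0 / std::tan(angle); }  // adjacent / opposite

// Inverse cotangent, returning an angle in radians (valid for x > 0;
// one common branch choice, shown here as an assumption).
double acot(double x) { return std::atan(1.0 / x); }
```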

Function Documentation

GLM_FUNC_DECL genType glm::acot ( genType  x)

Inverse cotangent function.

Returns
Return an angle expressed in radians.
Template Parameters
genType  Floating-point scalar or vector types.
See also
GLM_GTC_reciprocal
GLM_FUNC_DECL genType glm::acoth ( genType  x)

Inverse cotangent hyperbolic function.

Returns
Return an angle expressed in radians.
Template Parameters
genType  Floating-point scalar or vector types.
See also
GLM_GTC_reciprocal
GLM_FUNC_DECL genType glm::acsc ( genType  x)

Inverse cosecant function.

Returns
Return an angle expressed in radians.
Template Parameters
genType  Floating-point scalar or vector types.
See also
GLM_GTC_reciprocal
GLM_FUNC_DECL genType glm::acsch ( genType  x)

Inverse cosecant hyperbolic function.

Returns
Return an angle expressed in radians.
Template Parameters
genType  Floating-point scalar or vector types.
See also
GLM_GTC_reciprocal
GLM_FUNC_DECL genType glm::asec ( genType  x)

Inverse secant function.

Returns
Return an angle expressed in radians.
Template Parameters
genType  Floating-point scalar or vector types.
See also
GLM_GTC_reciprocal
GLM_FUNC_DECL genType glm::asech ( genType  x)

Inverse secant hyperbolic function.

Returns
Return an angle expressed in radians.
Template Parameters
genType  Floating-point scalar or vector types.
See also
GLM_GTC_reciprocal
GLM_FUNC_DECL genType glm::cot ( genType  angle)

Cotangent function.

adjacent / opposite or 1 / tan(x)

Template Parameters
genType  Floating-point scalar or vector types.
See also
GLM_GTC_reciprocal
GLM_FUNC_DECL genType glm::coth ( genType  angle)

Cotangent hyperbolic function.

Template Parameters
genType  Floating-point scalar or vector types.
See also
GLM_GTC_reciprocal
GLM_FUNC_DECL genType glm::csc ( genType  angle)

Cosecant function.

hypotenuse / opposite or 1 / sin(x)

Template Parameters
genType  Floating-point scalar or vector types.
See also
GLM_GTC_reciprocal
GLM_FUNC_DECL genType glm::csch ( genType  angle)

Cosecant hyperbolic function.

Template Parameters
genType  Floating-point scalar or vector types.
See also
GLM_GTC_reciprocal
GLM_FUNC_DECL genType glm::sec ( genType  angle)

Secant function.

hypotenuse / adjacent or 1 / cos(x)

Template Parameters
genType  Floating-point scalar or vector types.
See also
GLM_GTC_reciprocal
GLM_FUNC_DECL genType glm::sech ( genType  angle)

Secant hyperbolic function.

Template Parameters
genType  Floating-point scalar or vector types.
See also
GLM_GTC_reciprocal
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00302.html ================================================ 0.9.9 API documentation: GLM_GTC_round

Include <glm/gtc/round.hpp> to use the features of this extension. More...

Functions

template<typename genType >
GLM_FUNC_DECL genType ceilMultiple (genType v, genType Multiple)
 Round v up to the next multiple of Multiple. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > ceilMultiple (vec< L, T, Q > const &v, vec< L, T, Q > const &Multiple)
 Round v up to the next multiple of Multiple. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType ceilPowerOfTwo (genIUType v)
 Return the smallest power of two greater than or equal to the input value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > ceilPowerOfTwo (vec< L, T, Q > const &v)
 Return the smallest power of two greater than or equal to the input value. More...
 
template<typename genType >
GLM_FUNC_DECL genType floorMultiple (genType v, genType Multiple)
 Round v down to the previous multiple of Multiple. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > floorMultiple (vec< L, T, Q > const &v, vec< L, T, Q > const &Multiple)
 Round v down to the previous multiple of Multiple. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType floorPowerOfTwo (genIUType v)
 Return the largest power of two less than or equal to the input value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > floorPowerOfTwo (vec< L, T, Q > const &v)
 Return the largest power of two less than or equal to the input value. More...
 
template<typename genType >
GLM_FUNC_DECL genType roundMultiple (genType v, genType Multiple)
 Round v to the nearest multiple of Multiple. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > roundMultiple (vec< L, T, Q > const &v, vec< L, T, Q > const &Multiple)
 Round v to the nearest multiple of Multiple. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType roundPowerOfTwo (genIUType v)
 Return the power of two closest to the input value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > roundPowerOfTwo (vec< L, T, Q > const &v)
 Return the power of two closest to the input value. More...
 

Detailed Description

Include <glm/gtc/round.hpp> to use the features of this extension.

Rounding values to specific boundaries.
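For the integer case, the behavior of these helpers can be sketched in a few lines of plain C++ (illustrative semantics with invented snake_case names; GLM's versions also handle floating-point and vector types):

```cpp
// Round v up / down to a multiple of multiple (assumes multiple > 0, v >= 0).
int ceil_multiple(int v, int multiple)  { return ((v + multiple - 1) / multiple) * multiple; }
int floor_multiple(int v, int multiple) { return (v / multiple) * multiple; }

// Smallest power of two >= v (assumes v > 0), by repeated doubling.
unsigned ceil_power_of_two(unsigned v) {
    unsigned p = 1;
    while (p < v) p <<= 1;
    return p;
}
```

ceil_power_of_two(17) gives 32 while ceil_power_of_two(16) stays 16, matching the "greater than or equal" reading of ceilPowerOfTwo.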

Function Documentation

GLM_FUNC_DECL genType glm::ceilMultiple ( genType  v,
genType  Multiple 
)

Round v up to the next multiple of Multiple.

Template Parameters
genType  Floating-point or integer scalar or vector types.
Parameters
v  Source value to which the function is applied
Multiple  Must be zero or a positive value
See also
GLM_GTC_round
GLM_FUNC_DECL vec<L, T, Q> glm::ceilMultiple ( vec< L, T, Q > const &  v,
vec< L, T, Q > const &  Multiple 
)

Round v up to the next multiple of Multiple.

Template Parameters
L  Integer between 1 and 4 included that qualifies the dimension of the vector
T  Floating-point or integer scalar types
Q  Value from qualifier enum
Parameters
v  Source values to which the function is applied
Multiple  Must be zero or a positive value
See also
GLM_GTC_round
GLM_FUNC_DECL genIUType glm::ceilPowerOfTwo ( genIUType  v)

Return the smallest power of two greater than or equal to the input value.

See also
GLM_GTC_round
GLM_FUNC_DECL vec<L, T, Q> glm::ceilPowerOfTwo ( vec< L, T, Q > const &  v)

Return the smallest power of two greater than or equal to the input value.

Template Parameters
L  Integer between 1 and 4 included that qualifies the dimension of the vector
T  Floating-point or integer scalar types
Q  Value from qualifier enum
See also
GLM_GTC_round
GLM_FUNC_DECL genType glm::floorMultiple ( genType  v,
genType  Multiple 
)

Round v down to the previous multiple of Multiple.

Template Parameters
genType  Floating-point or integer scalar or vector types.
Parameters
v  Source value to which the function is applied
Multiple  Must be zero or a positive value
See also
GLM_GTC_round
GLM_FUNC_DECL vec<L, T, Q> glm::floorMultiple ( vec< L, T, Q > const &  v,
vec< L, T, Q > const &  Multiple 
)

Round v down to the previous multiple of Multiple.

Template Parameters
L  Integer between 1 and 4 included that qualifies the dimension of the vector
T  Floating-point or integer scalar types
Q  Value from qualifier enum
Parameters
v  Source values to which the function is applied
Multiple  Must be zero or a positive value
See also
GLM_GTC_round
GLM_FUNC_DECL genIUType glm::floorPowerOfTwo ( genIUType  v)

Return the largest power of two less than or equal to the input value.

See also
GLM_GTC_round
GLM_FUNC_DECL vec<L, T, Q> glm::floorPowerOfTwo ( vec< L, T, Q > const &  v)

Return the largest power of two less than or equal to the input value.

Template Parameters
L  Integer between 1 and 4 included that qualifies the dimension of the vector
T  Floating-point or integer scalar types
Q  Value from qualifier enum
See also
GLM_GTC_round
GLM_FUNC_DECL genType glm::roundMultiple ( genType  v,
genType  Multiple 
)

Round v to the nearest multiple of Multiple.

Template Parameters
genType  Floating-point or integer scalar or vector types.
Parameters
v  Source value to which the function is applied
Multiple  Must be zero or a positive value
See also
GLM_GTC_round
GLM_FUNC_DECL vec<L, T, Q> glm::roundMultiple ( vec< L, T, Q > const &  v,
vec< L, T, Q > const &  Multiple 
)

Round v to the nearest multiple of Multiple.

Template Parameters
L  Integer between 1 and 4 included that qualifies the dimension of the vector
T  Floating-point or integer scalar types
Q  Value from qualifier enum
Parameters
v  Source values to which the function is applied
Multiple  Must be zero or a positive value
See also
GLM_GTC_round
GLM_FUNC_DECL genIUType glm::roundPowerOfTwo ( genIUType  v)

Return the power of two closest to the input value.

See also
GLM_GTC_round
GLM_FUNC_DECL vec<L, T, Q> glm::roundPowerOfTwo ( vec< L, T, Q > const &  v)

Return the power of two closest to the input value.

Template Parameters
L  Integer between 1 and 4 included that qualifies the dimension of the vector
T  Floating-point or integer scalar types
Q  Value from qualifier enum
See also
GLM_GTC_round
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00303.html ================================================ 0.9.9 API documentation: GLM_GTC_type_aligned
GLM_GTC_type_aligned

Include <glm/gtc/type_aligned.hpp> to use the features of this extension. More...
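The aligned typedefs below matter when SIMD loads and stores require, e.g., 16-byte alignment. The effect can be illustrated with plain C++ alignas; AlignedVec4 here is a hypothetical stand-in for what an aligned_vec4-style typedef guarantees, not a GLM type:

```cpp
#include <cstdint>

// A plain struct of 4 floats only guarantees float alignment (typically 4
// bytes); the aligned variant pads/aligns the whole object to 16 bytes,
// which is what "aligned in memory" means for these typedefs.
struct PlainVec4 { float x, y, z, w; };
struct alignas(16) AlignedVec4 { float x, y, z, w; };

static_assert(alignof(AlignedVec4) == 16, "suitable for 16-byte SIMD loads");
static_assert(sizeof(AlignedVec4) == 16, "no extra padding needed here");
```

Every AlignedVec4 object, including stack locals and array elements, then lives at a 16-byte boundary.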

Typedefs

typedef aligned_highp_bvec1 aligned_bvec1
 1 component vector aligned in memory of bool values.
 
typedef aligned_highp_bvec2 aligned_bvec2
 2 components vector aligned in memory of bool values.
 
typedef aligned_highp_bvec3 aligned_bvec3
 3 components vector aligned in memory of bool values.
 
typedef aligned_highp_bvec4 aligned_bvec4
 4 components vector aligned in memory of bool values.
 
typedef aligned_highp_dmat2 aligned_dmat2
 2 by 2 matrix tightly aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dmat2x2 aligned_dmat2x2
 2 by 2 matrix tightly aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dmat2x3 aligned_dmat2x3
 2 by 3 matrix tightly aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dmat2x4 aligned_dmat2x4
 2 by 4 matrix tightly aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dmat3 aligned_dmat3
 3 by 3 matrix tightly aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dmat3x2 aligned_dmat3x2
 3 by 2 matrix tightly aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dmat3x3 aligned_dmat3x3
 3 by 3 matrix tightly aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dmat3x4 aligned_dmat3x4
 3 by 4 matrix tightly aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dmat4 aligned_dmat4
 4 by 4 matrix tightly aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dmat4x2 aligned_dmat4x2
 4 by 2 matrix tightly aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dmat4x3 aligned_dmat4x3
 4 by 3 matrix tightly aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dmat4x4 aligned_dmat4x4
 4 by 4 matrix tightly aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dvec1 aligned_dvec1
 1 component vector aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dvec2 aligned_dvec2
 2 components vector aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dvec3 aligned_dvec3
 3 components vector aligned in memory of double-precision floating-point numbers.
 
typedef aligned_highp_dvec4 aligned_dvec4
 4 components vector aligned in memory of double-precision floating-point numbers.
 
typedef vec< 1, bool, aligned_highp > aligned_highp_bvec1
 1 component vector aligned in memory of bool values.
 
typedef vec< 2, bool, aligned_highp > aligned_highp_bvec2
 2 components vector aligned in memory of bool values.
 
typedef vec< 3, bool, aligned_highp > aligned_highp_bvec3
 3 components vector aligned in memory of bool values.
 
typedef vec< 4, bool, aligned_highp > aligned_highp_bvec4
 4 components vector aligned in memory of bool values.
 
typedef mat< 2, 2, double, aligned_highp > aligned_highp_dmat2
 2 by 2 matrix aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 2, 2, double, aligned_highp > aligned_highp_dmat2x2
 2 by 2 matrix aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 2, 3, double, aligned_highp > aligned_highp_dmat2x3
 2 by 3 matrix aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 2, 4, double, aligned_highp > aligned_highp_dmat2x4
 2 by 4 matrix aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 3, 3, double, aligned_highp > aligned_highp_dmat3
 3 by 3 matrix aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 3, 2, double, aligned_highp > aligned_highp_dmat3x2
 3 by 2 matrix aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 3, 3, double, aligned_highp > aligned_highp_dmat3x3
 3 by 3 matrix aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 3, 4, double, aligned_highp > aligned_highp_dmat3x4
 3 by 4 matrix aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 4, 4, double, aligned_highp > aligned_highp_dmat4
 4 by 4 matrix aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 4, 2, double, aligned_highp > aligned_highp_dmat4x2
 4 by 2 matrix aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 4, 3, double, aligned_highp > aligned_highp_dmat4x3
 4 by 3 matrix aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 4, 4, double, aligned_highp > aligned_highp_dmat4x4
 4 by 4 matrix aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef vec< 1, double, aligned_highp > aligned_highp_dvec1
 1 component vector aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef vec< 2, double, aligned_highp > aligned_highp_dvec2
 2 components vector aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef vec< 3, double, aligned_highp > aligned_highp_dvec3
 3 components vector aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef vec< 4, double, aligned_highp > aligned_highp_dvec4
 4 components vector aligned in memory of double-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef vec< 1, int, aligned_highp > aligned_highp_ivec1
 1 component vector aligned in memory of signed integer numbers.
 
typedef vec< 2, int, aligned_highp > aligned_highp_ivec2
 2 components vector aligned in memory of signed integer numbers.
 
typedef vec< 3, int, aligned_highp > aligned_highp_ivec3
 3 components vector aligned in memory of signed integer numbers.
 
typedef vec< 4, int, aligned_highp > aligned_highp_ivec4
 4 components vector aligned in memory of signed integer numbers.
 
typedef mat< 2, 2, float, aligned_highp > aligned_highp_mat2
 2 by 2 matrix aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 2, 2, float, aligned_highp > aligned_highp_mat2x2
 2 by 2 matrix aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 2, 3, float, aligned_highp > aligned_highp_mat2x3
 2 by 3 matrix aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 2, 4, float, aligned_highp > aligned_highp_mat2x4
 2 by 4 matrix aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 3, 3, float, aligned_highp > aligned_highp_mat3
 3 by 3 matrix aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 3, 2, float, aligned_highp > aligned_highp_mat3x2
 3 by 2 matrix aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 3, 3, float, aligned_highp > aligned_highp_mat3x3
 3 by 3 matrix aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 3, 4, float, aligned_highp > aligned_highp_mat3x4
 3 by 4 matrix aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 4, 4, float, aligned_highp > aligned_highp_mat4
 4 by 4 matrix aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 4, 2, float, aligned_highp > aligned_highp_mat4x2
 4 by 2 matrix aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 4, 3, float, aligned_highp > aligned_highp_mat4x3
 4 by 3 matrix aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef mat< 4, 4, float, aligned_highp > aligned_highp_mat4x4
 4 by 4 matrix aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef vec< 1, uint, aligned_highp > aligned_highp_uvec1
 1 component vector aligned in memory of unsigned integer numbers.
 
typedef vec< 2, uint, aligned_highp > aligned_highp_uvec2
 2 components vector aligned in memory of unsigned integer numbers.
 
typedef vec< 3, uint, aligned_highp > aligned_highp_uvec3
 3 components vector aligned in memory of unsigned integer numbers.
 
typedef vec< 4, uint, aligned_highp > aligned_highp_uvec4
 4 components vector aligned in memory of unsigned integer numbers.
 
typedef vec< 1, float, aligned_highp > aligned_highp_vec1
 1 component vector aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef vec< 2, float, aligned_highp > aligned_highp_vec2
 2 components vector aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef vec< 3, float, aligned_highp > aligned_highp_vec3
 3 components vector aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef vec< 4, float, aligned_highp > aligned_highp_vec4
 4 components vector aligned in memory of single-precision floating-point numbers using high precision arithmetic in term of ULPs.
 
typedef aligned_highp_ivec1 aligned_ivec1
 1 component vector aligned in memory of signed integer numbers.
 
typedef aligned_highp_ivec2 aligned_ivec2
 2 components vector aligned in memory of signed integer numbers.
 
typedef aligned_highp_ivec3 aligned_ivec3
 3 components vector aligned in memory of signed integer numbers.
 
typedef aligned_highp_ivec4 aligned_ivec4
 4 components vector aligned in memory of signed integer numbers.
 
typedef vec< 1, bool, aligned_lowp > aligned_lowp_bvec1
 1 component vector aligned in memory of bool values.
 
typedef vec< 2, bool, aligned_lowp > aligned_lowp_bvec2
 2 components vector aligned in memory of bool values.
 
typedef vec< 3, bool, aligned_lowp > aligned_lowp_bvec3
 3 components vector aligned in memory of bool values.
 
typedef vec< 4, bool, aligned_lowp > aligned_lowp_bvec4
 4 components vector aligned in memory of bool values.
 
typedef mat< 2, 2, double, aligned_lowp > aligned_lowp_dmat2
 2 by 2 matrix aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 2, 2, double, aligned_lowp > aligned_lowp_dmat2x2
 2 by 2 matrix aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 2, 3, double, aligned_lowp > aligned_lowp_dmat2x3
 2 by 3 matrix aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 2, 4, double, aligned_lowp > aligned_lowp_dmat2x4
 2 by 4 matrix aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 3, 3, double, aligned_lowp > aligned_lowp_dmat3
 3 by 3 matrix aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 3, 2, double, aligned_lowp > aligned_lowp_dmat3x2
 3 by 2 matrix aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 3, 3, double, aligned_lowp > aligned_lowp_dmat3x3
 3 by 3 matrix aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 3, 4, double, aligned_lowp > aligned_lowp_dmat3x4
 3 by 4 matrix aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 4, 4, double, aligned_lowp > aligned_lowp_dmat4
 4 by 4 matrix aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 4, 2, double, aligned_lowp > aligned_lowp_dmat4x2
 4 by 2 matrix aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 4, 3, double, aligned_lowp > aligned_lowp_dmat4x3
 4 by 3 matrix aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 4, 4, double, aligned_lowp > aligned_lowp_dmat4x4
 4 by 4 matrix aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef vec< 1, double, aligned_lowp > aligned_lowp_dvec1
 1 component vector aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef vec< 2, double, aligned_lowp > aligned_lowp_dvec2
 2 components vector aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef vec< 3, double, aligned_lowp > aligned_lowp_dvec3
 3 components vector aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef vec< 4, double, aligned_lowp > aligned_lowp_dvec4
 4 components vector aligned in memory of double-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef vec< 1, int, aligned_lowp > aligned_lowp_ivec1
 1 component vector aligned in memory of signed integer numbers.
 
typedef vec< 2, int, aligned_lowp > aligned_lowp_ivec2
 2 components vector aligned in memory of signed integer numbers.
 
typedef vec< 3, int, aligned_lowp > aligned_lowp_ivec3
 3 components vector aligned in memory of signed integer numbers.
 
typedef vec< 4, int, aligned_lowp > aligned_lowp_ivec4
 4 components vector aligned in memory of signed integer numbers.
 
typedef mat< 2, 2, float, aligned_lowp > aligned_lowp_mat2
 2 by 2 matrix aligned in memory of single-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 2, 2, float, aligned_lowp > aligned_lowp_mat2x2
 2 by 2 matrix aligned in memory of single-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 2, 3, float, aligned_lowp > aligned_lowp_mat2x3
 2 by 3 matrix aligned in memory of single-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 2, 4, float, aligned_lowp > aligned_lowp_mat2x4
 2 by 4 matrix aligned in memory of single-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 3, 3, float, aligned_lowp > aligned_lowp_mat3
 3 by 3 matrix aligned in memory of single-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 3, 2, float, aligned_lowp > aligned_lowp_mat3x2
 3 by 2 matrix aligned in memory of single-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 3, 3, float, aligned_lowp > aligned_lowp_mat3x3
 3 by 3 matrix aligned in memory of single-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 3, 4, float, aligned_lowp > aligned_lowp_mat3x4
 3 by 4 matrix aligned in memory of single-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 4, 4, float, aligned_lowp > aligned_lowp_mat4
 4 by 4 matrix aligned in memory of single-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 4, 2, float, aligned_lowp > aligned_lowp_mat4x2
 4 by 2 matrix aligned in memory of single-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 4, 3, float, aligned_lowp > aligned_lowp_mat4x3
 4 by 3 matrix aligned in memory of single-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef mat< 4, 4, float, aligned_lowp > aligned_lowp_mat4x4
 4 by 4 matrix aligned in memory of single-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef vec< 1, uint, aligned_lowp > aligned_lowp_uvec1
 1 component vector aligned in memory of unsigned integer numbers.
 
typedef vec< 2, uint, aligned_lowp > aligned_lowp_uvec2
 2 components vector aligned in memory of unsigned integer numbers.
 
typedef vec< 3, uint, aligned_lowp > aligned_lowp_uvec3
 3 components vector aligned in memory of unsigned integer numbers.
 
typedef vec< 4, uint, aligned_lowp > aligned_lowp_uvec4
 4 components vector aligned in memory of unsigned integer numbers.
 
typedef vec< 1, float, aligned_lowp > aligned_lowp_vec1
 1 component vector aligned in memory of single-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef vec< 2, float, aligned_lowp > aligned_lowp_vec2
 2 components vector aligned in memory of single-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef vec< 3, float, aligned_lowp > aligned_lowp_vec3
 3 components vector aligned in memory of single-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef vec< 4, float, aligned_lowp > aligned_lowp_vec4
 4 components vector aligned in memory of single-precision floating-point numbers using low precision arithmetic in term of ULPs.
 
typedef aligned_highp_mat2 aligned_mat2
 2 by 2 matrix of single-precision floating-point numbers, aligned in memory.

typedef aligned_highp_mat2x2 aligned_mat2x2
 2 by 2 matrix of single-precision floating-point numbers, aligned in memory.

typedef aligned_highp_mat2x3 aligned_mat2x3
 2 by 3 matrix of single-precision floating-point numbers, aligned in memory.

typedef aligned_highp_mat2x4 aligned_mat2x4
 2 by 4 matrix of single-precision floating-point numbers, aligned in memory.

typedef aligned_highp_mat3 aligned_mat3
 3 by 3 matrix of single-precision floating-point numbers, aligned in memory.

typedef aligned_highp_mat3x2 aligned_mat3x2
 3 by 2 matrix of single-precision floating-point numbers, aligned in memory.

typedef aligned_highp_mat3x3 aligned_mat3x3
 3 by 3 matrix of single-precision floating-point numbers, aligned in memory.

typedef aligned_highp_mat3x4 aligned_mat3x4
 3 by 4 matrix of single-precision floating-point numbers, aligned in memory.

typedef aligned_highp_mat4 aligned_mat4
 4 by 4 matrix of single-precision floating-point numbers, aligned in memory.

typedef aligned_highp_mat4x2 aligned_mat4x2
 4 by 2 matrix of single-precision floating-point numbers, aligned in memory.

typedef aligned_highp_mat4x3 aligned_mat4x3
 4 by 3 matrix of single-precision floating-point numbers, aligned in memory.

typedef aligned_highp_mat4x4 aligned_mat4x4
 4 by 4 matrix of single-precision floating-point numbers, aligned in memory.
 
typedef vec< 1, bool, aligned_mediump > aligned_mediump_bvec1
 1-component vector of bool values, aligned in memory.

typedef vec< 2, bool, aligned_mediump > aligned_mediump_bvec2
 2-component vector of bool values, aligned in memory.

typedef vec< 3, bool, aligned_mediump > aligned_mediump_bvec3
 3-component vector of bool values, aligned in memory.

typedef vec< 4, bool, aligned_mediump > aligned_mediump_bvec4
 4-component vector of bool values, aligned in memory.

typedef mat< 2, 2, double, aligned_mediump > aligned_mediump_dmat2
 2 by 2 matrix of double-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 2, 2, double, aligned_mediump > aligned_mediump_dmat2x2
 2 by 2 matrix of double-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 2, 3, double, aligned_mediump > aligned_mediump_dmat2x3
 2 by 3 matrix of double-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 2, 4, double, aligned_mediump > aligned_mediump_dmat2x4
 2 by 4 matrix of double-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 3, 3, double, aligned_mediump > aligned_mediump_dmat3
 3 by 3 matrix of double-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 3, 2, double, aligned_mediump > aligned_mediump_dmat3x2
 3 by 2 matrix of double-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 3, 3, double, aligned_mediump > aligned_mediump_dmat3x3
 3 by 3 matrix of double-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 3, 4, double, aligned_mediump > aligned_mediump_dmat3x4
 3 by 4 matrix of double-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 4, 4, double, aligned_mediump > aligned_mediump_dmat4
 4 by 4 matrix of double-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 4, 2, double, aligned_mediump > aligned_mediump_dmat4x2
 4 by 2 matrix of double-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 4, 3, double, aligned_mediump > aligned_mediump_dmat4x3
 4 by 3 matrix of double-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 4, 4, double, aligned_mediump > aligned_mediump_dmat4x4
 4 by 4 matrix of double-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef vec< 1, double, aligned_mediump > aligned_mediump_dvec1
 1-component vector of double-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef vec< 2, double, aligned_mediump > aligned_mediump_dvec2
 2-component vector of double-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef vec< 3, double, aligned_mediump > aligned_mediump_dvec3
 3-component vector of double-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef vec< 4, double, aligned_mediump > aligned_mediump_dvec4
 4-component vector of double-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef vec< 1, int, aligned_mediump > aligned_mediump_ivec1
 1-component vector of signed integers, aligned in memory.

typedef vec< 2, int, aligned_mediump > aligned_mediump_ivec2
 2-component vector of signed integers, aligned in memory.

typedef vec< 3, int, aligned_mediump > aligned_mediump_ivec3
 3-component vector of signed integers, aligned in memory.

typedef vec< 4, int, aligned_mediump > aligned_mediump_ivec4
 4-component vector of signed integers, aligned in memory.

typedef mat< 2, 2, float, aligned_mediump > aligned_mediump_mat2
 2 by 2 matrix of single-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 2, 2, float, aligned_mediump > aligned_mediump_mat2x2
 2 by 2 matrix of single-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 2, 3, float, aligned_mediump > aligned_mediump_mat2x3
 2 by 3 matrix of single-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 2, 4, float, aligned_mediump > aligned_mediump_mat2x4
 2 by 4 matrix of single-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 3, 3, float, aligned_mediump > aligned_mediump_mat3
 3 by 3 matrix of single-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 3, 2, float, aligned_mediump > aligned_mediump_mat3x2
 3 by 2 matrix of single-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 3, 3, float, aligned_mediump > aligned_mediump_mat3x3
 3 by 3 matrix of single-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 3, 4, float, aligned_mediump > aligned_mediump_mat3x4
 3 by 4 matrix of single-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 4, 4, float, aligned_mediump > aligned_mediump_mat4
 4 by 4 matrix of single-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 4, 2, float, aligned_mediump > aligned_mediump_mat4x2
 4 by 2 matrix of single-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 4, 3, float, aligned_mediump > aligned_mediump_mat4x3
 4 by 3 matrix of single-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 4, 4, float, aligned_mediump > aligned_mediump_mat4x4
 4 by 4 matrix of single-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef vec< 1, uint, aligned_mediump > aligned_mediump_uvec1
 1-component vector of unsigned integers, aligned in memory.

typedef vec< 2, uint, aligned_mediump > aligned_mediump_uvec2
 2-component vector of unsigned integers, aligned in memory.

typedef vec< 3, uint, aligned_mediump > aligned_mediump_uvec3
 3-component vector of unsigned integers, aligned in memory.

typedef vec< 4, uint, aligned_mediump > aligned_mediump_uvec4
 4-component vector of unsigned integers, aligned in memory.

typedef vec< 1, float, aligned_mediump > aligned_mediump_vec1
 1-component vector of single-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef vec< 2, float, aligned_mediump > aligned_mediump_vec2
 2-component vector of single-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef vec< 3, float, aligned_mediump > aligned_mediump_vec3
 3-component vector of single-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.

typedef vec< 4, float, aligned_mediump > aligned_mediump_vec4
 4-component vector of single-precision floating-point numbers, aligned in memory, using medium-precision arithmetic in terms of ULPs.
 
typedef aligned_highp_uvec1 aligned_uvec1
 1-component vector of unsigned integers, aligned in memory.

typedef aligned_highp_uvec2 aligned_uvec2
 2-component vector of unsigned integers, aligned in memory.

typedef aligned_highp_uvec3 aligned_uvec3
 3-component vector of unsigned integers, aligned in memory.

typedef aligned_highp_uvec4 aligned_uvec4
 4-component vector of unsigned integers, aligned in memory.

typedef aligned_highp_vec1 aligned_vec1
 1-component vector of single-precision floating-point numbers, aligned in memory.

typedef aligned_highp_vec2 aligned_vec2
 2-component vector of single-precision floating-point numbers, aligned in memory.

typedef aligned_highp_vec3 aligned_vec3
 3-component vector of single-precision floating-point numbers, aligned in memory.

typedef aligned_highp_vec4 aligned_vec4
 4-component vector of single-precision floating-point numbers, aligned in memory.
 
typedef packed_highp_bvec1 packed_bvec1
 1-component vector of bool values, tightly packed in memory.

typedef packed_highp_bvec2 packed_bvec2
 2-component vector of bool values, tightly packed in memory.

typedef packed_highp_bvec3 packed_bvec3
 3-component vector of bool values, tightly packed in memory.

typedef packed_highp_bvec4 packed_bvec4
 4-component vector of bool values, tightly packed in memory.

typedef packed_highp_dmat2 packed_dmat2
 2 by 2 matrix of double-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_dmat2x2 packed_dmat2x2
 2 by 2 matrix of double-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_dmat2x3 packed_dmat2x3
 2 by 3 matrix of double-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_dmat2x4 packed_dmat2x4
 2 by 4 matrix of double-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_dmat3 packed_dmat3
 3 by 3 matrix of double-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_dmat3x2 packed_dmat3x2
 3 by 2 matrix of double-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_dmat3x3 packed_dmat3x3
 3 by 3 matrix of double-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_dmat3x4 packed_dmat3x4
 3 by 4 matrix of double-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_dmat4 packed_dmat4
 4 by 4 matrix of double-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_dmat4x2 packed_dmat4x2
 4 by 2 matrix of double-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_dmat4x3 packed_dmat4x3
 4 by 3 matrix of double-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_dmat4x4 packed_dmat4x4
 4 by 4 matrix of double-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_dvec1 packed_dvec1
 1-component vector of double-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_dvec2 packed_dvec2
 2-component vector of double-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_dvec3 packed_dvec3
 3-component vector of double-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_dvec4 packed_dvec4
 4-component vector of double-precision floating-point numbers, tightly packed in memory.
 
typedef vec< 1, bool, packed_highp > packed_highp_bvec1
 1-component vector of bool values, tightly packed in memory.

typedef vec< 2, bool, packed_highp > packed_highp_bvec2
 2-component vector of bool values, tightly packed in memory.

typedef vec< 3, bool, packed_highp > packed_highp_bvec3
 3-component vector of bool values, tightly packed in memory.

typedef vec< 4, bool, packed_highp > packed_highp_bvec4
 4-component vector of bool values, tightly packed in memory.

typedef mat< 2, 2, double, packed_highp > packed_highp_dmat2
 2 by 2 matrix of double-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef mat< 2, 2, double, packed_highp > packed_highp_dmat2x2
 2 by 2 matrix of double-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef mat< 2, 3, double, packed_highp > packed_highp_dmat2x3
 2 by 3 matrix of double-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef mat< 2, 4, double, packed_highp > packed_highp_dmat2x4
 2 by 4 matrix of double-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef mat< 3, 3, double, packed_highp > packed_highp_dmat3
 3 by 3 matrix of double-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef mat< 3, 2, double, packed_highp > packed_highp_dmat3x2
 3 by 2 matrix of double-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef mat< 3, 3, double, packed_highp > packed_highp_dmat3x3
 3 by 3 matrix of double-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef mat< 3, 4, double, packed_highp > packed_highp_dmat3x4
 3 by 4 matrix of double-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef mat< 4, 4, double, packed_highp > packed_highp_dmat4
 4 by 4 matrix of double-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef mat< 4, 2, double, packed_highp > packed_highp_dmat4x2
 4 by 2 matrix of double-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef mat< 4, 3, double, packed_highp > packed_highp_dmat4x3
 4 by 3 matrix of double-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef mat< 4, 4, double, packed_highp > packed_highp_dmat4x4
 4 by 4 matrix of double-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef vec< 1, double, packed_highp > packed_highp_dvec1
 1-component vector of double-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef vec< 2, double, packed_highp > packed_highp_dvec2
 2-component vector of double-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef vec< 3, double, packed_highp > packed_highp_dvec3
 3-component vector of double-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef vec< 4, double, packed_highp > packed_highp_dvec4
 4-component vector of double-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef vec< 1, int, packed_highp > packed_highp_ivec1
 1-component vector of signed integers, tightly packed in memory.

typedef vec< 2, int, packed_highp > packed_highp_ivec2
 2-component vector of signed integers, tightly packed in memory.

typedef vec< 3, int, packed_highp > packed_highp_ivec3
 3-component vector of signed integers, tightly packed in memory.

typedef vec< 4, int, packed_highp > packed_highp_ivec4
 4-component vector of signed integers, tightly packed in memory.

typedef mat< 2, 2, float, packed_highp > packed_highp_mat2
 2 by 2 matrix of single-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef mat< 2, 2, float, packed_highp > packed_highp_mat2x2
 2 by 2 matrix of single-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef mat< 2, 3, float, packed_highp > packed_highp_mat2x3
 2 by 3 matrix of single-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef mat< 2, 4, float, packed_highp > packed_highp_mat2x4
 2 by 4 matrix of single-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef mat< 3, 3, float, packed_highp > packed_highp_mat3
 3 by 3 matrix of single-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef mat< 3, 2, float, packed_highp > packed_highp_mat3x2
 3 by 2 matrix of single-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef mat< 3, 3, float, packed_highp > packed_highp_mat3x3
 3 by 3 matrix of single-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef mat< 3, 4, float, packed_highp > packed_highp_mat3x4
 3 by 4 matrix of single-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef mat< 4, 4, float, packed_highp > packed_highp_mat4
 4 by 4 matrix of single-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef mat< 4, 2, float, packed_highp > packed_highp_mat4x2
 4 by 2 matrix of single-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef mat< 4, 3, float, packed_highp > packed_highp_mat4x3
 4 by 3 matrix of single-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef mat< 4, 4, float, packed_highp > packed_highp_mat4x4
 4 by 4 matrix of single-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef vec< 1, uint, packed_highp > packed_highp_uvec1
 1-component vector of unsigned integers, tightly packed in memory.

typedef vec< 2, uint, packed_highp > packed_highp_uvec2
 2-component vector of unsigned integers, tightly packed in memory.

typedef vec< 3, uint, packed_highp > packed_highp_uvec3
 3-component vector of unsigned integers, tightly packed in memory.

typedef vec< 4, uint, packed_highp > packed_highp_uvec4
 4-component vector of unsigned integers, tightly packed in memory.

typedef vec< 1, float, packed_highp > packed_highp_vec1
 1-component vector of single-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef vec< 2, float, packed_highp > packed_highp_vec2
 2-component vector of single-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef vec< 3, float, packed_highp > packed_highp_vec3
 3-component vector of single-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.

typedef vec< 4, float, packed_highp > packed_highp_vec4
 4-component vector of single-precision floating-point numbers, tightly packed in memory, using high-precision arithmetic in terms of ULPs.
 
typedef packed_highp_ivec1 packed_ivec1
 1-component vector of signed integers, tightly packed in memory.

typedef packed_highp_ivec2 packed_ivec2
 2-component vector of signed integers, tightly packed in memory.

typedef packed_highp_ivec3 packed_ivec3
 3-component vector of signed integers, tightly packed in memory.

typedef packed_highp_ivec4 packed_ivec4
 4-component vector of signed integers, tightly packed in memory.
 
typedef vec< 1, bool, packed_lowp > packed_lowp_bvec1
 1-component vector of bool values, tightly packed in memory.

typedef vec< 2, bool, packed_lowp > packed_lowp_bvec2
 2-component vector of bool values, tightly packed in memory.

typedef vec< 3, bool, packed_lowp > packed_lowp_bvec3
 3-component vector of bool values, tightly packed in memory.

typedef vec< 4, bool, packed_lowp > packed_lowp_bvec4
 4-component vector of bool values, tightly packed in memory.

typedef mat< 2, 2, double, packed_lowp > packed_lowp_dmat2
 2 by 2 matrix of double-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef mat< 2, 2, double, packed_lowp > packed_lowp_dmat2x2
 2 by 2 matrix of double-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef mat< 2, 3, double, packed_lowp > packed_lowp_dmat2x3
 2 by 3 matrix of double-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef mat< 2, 4, double, packed_lowp > packed_lowp_dmat2x4
 2 by 4 matrix of double-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef mat< 3, 3, double, packed_lowp > packed_lowp_dmat3
 3 by 3 matrix of double-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef mat< 3, 2, double, packed_lowp > packed_lowp_dmat3x2
 3 by 2 matrix of double-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef mat< 3, 3, double, packed_lowp > packed_lowp_dmat3x3
 3 by 3 matrix of double-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef mat< 3, 4, double, packed_lowp > packed_lowp_dmat3x4
 3 by 4 matrix of double-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef mat< 4, 4, double, packed_lowp > packed_lowp_dmat4
 4 by 4 matrix of double-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef mat< 4, 2, double, packed_lowp > packed_lowp_dmat4x2
 4 by 2 matrix of double-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef mat< 4, 3, double, packed_lowp > packed_lowp_dmat4x3
 4 by 3 matrix of double-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef mat< 4, 4, double, packed_lowp > packed_lowp_dmat4x4
 4 by 4 matrix of double-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef vec< 1, double, packed_lowp > packed_lowp_dvec1
 1-component vector of double-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef vec< 2, double, packed_lowp > packed_lowp_dvec2
 2-component vector of double-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef vec< 3, double, packed_lowp > packed_lowp_dvec3
 3-component vector of double-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef vec< 4, double, packed_lowp > packed_lowp_dvec4
 4-component vector of double-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef vec< 1, int, packed_lowp > packed_lowp_ivec1
 1-component vector of signed integers, tightly packed in memory.

typedef vec< 2, int, packed_lowp > packed_lowp_ivec2
 2-component vector of signed integers, tightly packed in memory.

typedef vec< 3, int, packed_lowp > packed_lowp_ivec3
 3-component vector of signed integers, tightly packed in memory.

typedef vec< 4, int, packed_lowp > packed_lowp_ivec4
 4-component vector of signed integers, tightly packed in memory.

typedef mat< 2, 2, float, packed_lowp > packed_lowp_mat2
 2 by 2 matrix of single-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef mat< 2, 2, float, packed_lowp > packed_lowp_mat2x2
 2 by 2 matrix of single-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef mat< 2, 3, float, packed_lowp > packed_lowp_mat2x3
 2 by 3 matrix of single-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef mat< 2, 4, float, packed_lowp > packed_lowp_mat2x4
 2 by 4 matrix of single-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef mat< 3, 3, float, packed_lowp > packed_lowp_mat3
 3 by 3 matrix of single-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef mat< 3, 2, float, packed_lowp > packed_lowp_mat3x2
 3 by 2 matrix of single-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef mat< 3, 3, float, packed_lowp > packed_lowp_mat3x3
 3 by 3 matrix of single-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef mat< 3, 4, float, packed_lowp > packed_lowp_mat3x4
 3 by 4 matrix of single-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef mat< 4, 4, float, packed_lowp > packed_lowp_mat4
 4 by 4 matrix of single-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef mat< 4, 2, float, packed_lowp > packed_lowp_mat4x2
 4 by 2 matrix of single-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef mat< 4, 3, float, packed_lowp > packed_lowp_mat4x3
 4 by 3 matrix of single-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef mat< 4, 4, float, packed_lowp > packed_lowp_mat4x4
 4 by 4 matrix of single-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef vec< 1, uint, packed_lowp > packed_lowp_uvec1
 1-component vector of unsigned integers, tightly packed in memory.

typedef vec< 2, uint, packed_lowp > packed_lowp_uvec2
 2-component vector of unsigned integers, tightly packed in memory.

typedef vec< 3, uint, packed_lowp > packed_lowp_uvec3
 3-component vector of unsigned integers, tightly packed in memory.

typedef vec< 4, uint, packed_lowp > packed_lowp_uvec4
 4-component vector of unsigned integers, tightly packed in memory.

typedef vec< 1, float, packed_lowp > packed_lowp_vec1
 1-component vector of single-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef vec< 2, float, packed_lowp > packed_lowp_vec2
 2-component vector of single-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef vec< 3, float, packed_lowp > packed_lowp_vec3
 3-component vector of single-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.

typedef vec< 4, float, packed_lowp > packed_lowp_vec4
 4-component vector of single-precision floating-point numbers, tightly packed in memory, using low-precision arithmetic in terms of ULPs.
 
typedef packed_highp_mat2 packed_mat2
 2 by 2 matrix of single-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_mat2x2 packed_mat2x2
 2 by 2 matrix of single-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_mat2x3 packed_mat2x3
 2 by 3 matrix of single-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_mat2x4 packed_mat2x4
 2 by 4 matrix of single-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_mat3 packed_mat3
 3 by 3 matrix of single-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_mat3x2 packed_mat3x2
 3 by 2 matrix of single-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_mat3x3 packed_mat3x3
 3 by 3 matrix of single-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_mat3x4 packed_mat3x4
 3 by 4 matrix of single-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_mat4 packed_mat4
 4 by 4 matrix of single-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_mat4x2 packed_mat4x2
 4 by 2 matrix of single-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_mat4x3 packed_mat4x3
 4 by 3 matrix of single-precision floating-point numbers, tightly packed in memory.

typedef packed_highp_mat4x4 packed_mat4x4
 4 by 4 matrix of single-precision floating-point numbers, tightly packed in memory.
 
typedef vec< 1, bool, packed_mediump > packed_mediump_bvec1
 1-component vector of bool values, tightly packed in memory.

typedef vec< 2, bool, packed_mediump > packed_mediump_bvec2
 2-component vector of bool values, tightly packed in memory.

typedef vec< 3, bool, packed_mediump > packed_mediump_bvec3
 3-component vector of bool values, tightly packed in memory.

typedef vec< 4, bool, packed_mediump > packed_mediump_bvec4
 4-component vector of bool values, tightly packed in memory.

typedef mat< 2, 2, double, packed_mediump > packed_mediump_dmat2
 2 by 2 matrix of double-precision floating-point numbers, tightly packed in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 2, 2, double, packed_mediump > packed_mediump_dmat2x2
 2 by 2 matrix of double-precision floating-point numbers, tightly packed in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 2, 3, double, packed_mediump > packed_mediump_dmat2x3
 2 by 3 matrix of double-precision floating-point numbers, tightly packed in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 2, 4, double, packed_mediump > packed_mediump_dmat2x4
 2 by 4 matrix of double-precision floating-point numbers, tightly packed in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 3, 3, double, packed_mediump > packed_mediump_dmat3
 3 by 3 matrix of double-precision floating-point numbers, tightly packed in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 3, 2, double, packed_mediump > packed_mediump_dmat3x2
 3 by 2 matrix of double-precision floating-point numbers, tightly packed in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 3, 3, double, packed_mediump > packed_mediump_dmat3x3
 3 by 3 matrix of double-precision floating-point numbers, tightly packed in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 3, 4, double, packed_mediump > packed_mediump_dmat3x4
 3 by 4 matrix of double-precision floating-point numbers, tightly packed in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 4, 4, double, packed_mediump > packed_mediump_dmat4
 4 by 4 matrix of double-precision floating-point numbers, tightly packed in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 4, 2, double, packed_mediump > packed_mediump_dmat4x2
 4 by 2 matrix of double-precision floating-point numbers, tightly packed in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 4, 3, double, packed_mediump > packed_mediump_dmat4x3
 4 by 3 matrix of double-precision floating-point numbers, tightly packed in memory, using medium-precision arithmetic in terms of ULPs.

typedef mat< 4, 4, double, packed_mediump > packed_mediump_dmat4x4
 4 by 4 matrix of double-precision floating-point numbers, tightly packed in memory, using medium-precision arithmetic in terms of ULPs.

typedef vec< 1, double, packed_mediump > packed_mediump_dvec1
 1-component vector of double-precision floating-point numbers, tightly packed in memory, using medium-precision arithmetic in terms of ULPs.

typedef vec< 2, double, packed_mediump > packed_mediump_dvec2
 2-component vector of double-precision floating-point numbers, tightly packed in memory, using medium-precision arithmetic in terms of ULPs.

typedef vec< 3, double, packed_mediump > packed_mediump_dvec3
 3-component vector of double-precision floating-point numbers, tightly packed in memory, using medium-precision arithmetic in terms of ULPs.

typedef vec< 4, double, packed_mediump > packed_mediump_dvec4
 4-component vector of double-precision floating-point numbers, tightly packed in memory, using medium-precision arithmetic in terms of ULPs.
 
typedef vec< 1, int, packed_mediump > packed_mediump_ivec1
 1 component vector tightly packed in memory of signed integer numbers.
 
typedef vec< 2, int, packed_mediump > packed_mediump_ivec2
 2 components vector tightly packed in memory of signed integer numbers.
 
typedef vec< 3, int, packed_mediump > packed_mediump_ivec3
 3 components vector tightly packed in memory of signed integer numbers.
 
typedef vec< 4, int, packed_mediump > packed_mediump_ivec4
 4 components vector tightly packed in memory of signed integer numbers.
 
typedef mat< 2, 2, float, packed_mediump > packed_mediump_mat2
 2 by 2 matrix tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 2, 2, float, packed_mediump > packed_mediump_mat2x2
 2 by 2 matrix tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 2, 3, float, packed_mediump > packed_mediump_mat2x3
 2 by 3 matrix tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 2, 4, float, packed_mediump > packed_mediump_mat2x4
 2 by 4 matrix tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 3, 3, float, packed_mediump > packed_mediump_mat3
 3 by 3 matrix tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 3, 2, float, packed_mediump > packed_mediump_mat3x2
 3 by 2 matrix tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 3, 3, float, packed_mediump > packed_mediump_mat3x3
 3 by 3 matrix tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 3, 4, float, packed_mediump > packed_mediump_mat3x4
 3 by 4 matrix tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 4, 4, float, packed_mediump > packed_mediump_mat4
 4 by 4 matrix tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 4, 2, float, packed_mediump > packed_mediump_mat4x2
 4 by 2 matrix tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 4, 3, float, packed_mediump > packed_mediump_mat4x3
 4 by 3 matrix tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef mat< 4, 4, float, packed_mediump > packed_mediump_mat4x4
 4 by 4 matrix tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef vec< 1, uint, packed_mediump > packed_mediump_uvec1
 1 component vector tightly packed in memory of unsigned integer numbers.
 
typedef vec< 2, uint, packed_mediump > packed_mediump_uvec2
 2 components vector tightly packed in memory of unsigned integer numbers.
 
typedef vec< 3, uint, packed_mediump > packed_mediump_uvec3
 3 components vector tightly packed in memory of unsigned integer numbers.
 
typedef vec< 4, uint, packed_mediump > packed_mediump_uvec4
 4 components vector tightly packed in memory of unsigned integer numbers.
 
typedef vec< 1, float, packed_mediump > packed_mediump_vec1
 1 component vector tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef vec< 2, float, packed_mediump > packed_mediump_vec2
 2 components vector tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef vec< 3, float, packed_mediump > packed_mediump_vec3
 3 components vector tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef vec< 4, float, packed_mediump > packed_mediump_vec4
 4 components vector tightly packed in memory of single-precision floating-point numbers using medium precision arithmetic in terms of ULPs.
 
typedef packed_highp_uvec1 packed_uvec1
 1 component vector tightly packed in memory of unsigned integer numbers.
 
typedef packed_highp_uvec2 packed_uvec2
 2 components vector tightly packed in memory of unsigned integer numbers.
 
typedef packed_highp_uvec3 packed_uvec3
 3 components vector tightly packed in memory of unsigned integer numbers.
 
typedef packed_highp_uvec4 packed_uvec4
 4 components vector tightly packed in memory of unsigned integer numbers.
 
typedef packed_highp_vec1 packed_vec1
 1 component vector tightly packed in memory of single-precision floating-point numbers.
 
typedef packed_highp_vec2 packed_vec2
 2 components vector tightly packed in memory of single-precision floating-point numbers.
 
typedef packed_highp_vec3 packed_vec3
 3 components vector tightly packed in memory of single-precision floating-point numbers.
 
typedef packed_highp_vec4 packed_vec4
 4 components vector tightly packed in memory of single-precision floating-point numbers.
 

Detailed Description

Include <glm/gtc/type_aligned.hpp> to use the features of this extension.

Aligned types allowing SIMD optimizations of vector and matrix types

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00304.html ================================================

0.9.9 API documentation: GLM_GTC_type_precision

Include <glm/gtc/type_precision.hpp> to use the features of this extension. More...

Typedefs

typedef float f32
 Default 32 bit single-qualifier floating-point scalar. More...
 
typedef mat< 2, 2, f32, defaultp > f32mat2
 Single-qualifier floating-point 2x2 matrix. More...
 
typedef mat< 2, 2, f32, defaultp > f32mat2x2
 Single-qualifier floating-point 2x2 matrix. More...
 
typedef mat< 2, 3, f32, defaultp > f32mat2x3
 Single-qualifier floating-point 2x3 matrix. More...
 
typedef mat< 2, 4, f32, defaultp > f32mat2x4
 Single-qualifier floating-point 2x4 matrix. More...
 
typedef mat< 3, 3, f32, defaultp > f32mat3
 Single-qualifier floating-point 3x3 matrix. More...
 
typedef mat< 3, 2, f32, defaultp > f32mat3x2
 Single-qualifier floating-point 3x2 matrix. More...
 
typedef mat< 3, 3, f32, defaultp > f32mat3x3
 Single-qualifier floating-point 3x3 matrix. More...
 
typedef mat< 3, 4, f32, defaultp > f32mat3x4
 Single-qualifier floating-point 3x4 matrix. More...
 
typedef mat< 4, 4, f32, defaultp > f32mat4
 Single-qualifier floating-point 4x4 matrix. More...
 
typedef mat< 4, 2, f32, defaultp > f32mat4x2
 Single-qualifier floating-point 4x2 matrix. More...
 
typedef mat< 4, 3, f32, defaultp > f32mat4x3
 Single-qualifier floating-point 4x3 matrix. More...
 
typedef mat< 4, 4, f32, defaultp > f32mat4x4
 Single-qualifier floating-point 4x4 matrix. More...
 
typedef qua< f32, defaultp > f32quat
 Single-qualifier floating-point quaternion. More...
 
typedef vec< 1, f32, defaultp > f32vec1
 Single-qualifier floating-point vector of 1 component. More...
 
typedef vec< 2, f32, defaultp > f32vec2
 Single-qualifier floating-point vector of 2 components. More...
 
typedef vec< 3, f32, defaultp > f32vec3
 Single-qualifier floating-point vector of 3 components. More...
 
typedef vec< 4, f32, defaultp > f32vec4
 Single-qualifier floating-point vector of 4 components. More...
 
typedef double f64
 Default 64 bit double-qualifier floating-point scalar. More...
 
typedef mat< 2, 2, f64, defaultp > f64mat2
 Double-qualifier floating-point 2x2 matrix. More...
 
typedef mat< 2, 2, f64, defaultp > f64mat2x2
 Double-qualifier floating-point 2x2 matrix. More...
 
typedef mat< 2, 3, f64, defaultp > f64mat2x3
 Double-qualifier floating-point 2x3 matrix. More...
 
typedef mat< 2, 4, f64, defaultp > f64mat2x4
 Double-qualifier floating-point 2x4 matrix. More...
 
typedef mat< 3, 3, f64, defaultp > f64mat3
 Double-qualifier floating-point 3x3 matrix. More...
 
typedef mat< 3, 2, f64, defaultp > f64mat3x2
 Double-qualifier floating-point 3x2 matrix. More...
 
typedef mat< 3, 3, f64, defaultp > f64mat3x3
 Double-qualifier floating-point 3x3 matrix. More...
 
typedef mat< 3, 4, f64, defaultp > f64mat3x4
 Double-qualifier floating-point 3x4 matrix. More...
 
typedef mat< 4, 4, f64, defaultp > f64mat4
 Double-qualifier floating-point 4x4 matrix. More...
 
typedef mat< 4, 2, f64, defaultp > f64mat4x2
 Double-qualifier floating-point 4x2 matrix. More...
 
typedef mat< 4, 3, f64, defaultp > f64mat4x3
 Double-qualifier floating-point 4x3 matrix. More...
 
typedef mat< 4, 4, f64, defaultp > f64mat4x4
 Double-qualifier floating-point 4x4 matrix. More...
 
typedef qua< f64, defaultp > f64quat
 Double-qualifier floating-point quaternion. More...
 
typedef vec< 1, f64, defaultp > f64vec1
 Double-qualifier floating-point vector of 1 component. More...
 
typedef vec< 2, f64, defaultp > f64vec2
 Double-qualifier floating-point vector of 2 components. More...
 
typedef vec< 3, f64, defaultp > f64vec3
 Double-qualifier floating-point vector of 3 components. More...
 
typedef vec< 4, f64, defaultp > f64vec4
 Double-qualifier floating-point vector of 4 components. More...
 
typedef float float32
 Single-qualifier floating-point scalar. More...
 
typedef float float32_t
 Default 32 bit single-qualifier floating-point scalar. More...
 
typedef double float64
 Double-qualifier floating-point scalar. More...
 
typedef double float64_t
 Default 64 bit double-qualifier floating-point scalar. More...
 
typedef mat< 2, 2, f32, defaultp > fmat2
 Single-qualifier floating-point 2x2 matrix. More...
 
typedef mat< 2, 2, f32, defaultp > fmat2x2
 Single-qualifier floating-point 2x2 matrix. More...
 
typedef mat< 2, 3, f32, defaultp > fmat2x3
 Single-qualifier floating-point 2x3 matrix. More...
 
typedef mat< 2, 4, f32, defaultp > fmat2x4
 Single-qualifier floating-point 2x4 matrix. More...
 
typedef mat< 3, 3, f32, defaultp > fmat3
 Single-qualifier floating-point 3x3 matrix. More...
 
typedef mat< 3, 2, f32, defaultp > fmat3x2
 Single-qualifier floating-point 3x2 matrix. More...
 
typedef mat< 3, 3, f32, defaultp > fmat3x3
 Single-qualifier floating-point 3x3 matrix. More...
 
typedef mat< 3, 4, f32, defaultp > fmat3x4
 Single-qualifier floating-point 3x4 matrix. More...
 
typedef mat< 4, 4, f32, defaultp > fmat4
 Single-qualifier floating-point 4x4 matrix. More...
 
typedef mat< 4, 2, f32, defaultp > fmat4x2
 Single-qualifier floating-point 4x2 matrix. More...
 
typedef mat< 4, 3, f32, defaultp > fmat4x3
 Single-qualifier floating-point 4x3 matrix. More...
 
typedef mat< 4, 4, f32, defaultp > fmat4x4
 Single-qualifier floating-point 4x4 matrix. More...
 
typedef vec< 1, f32, defaultp > fvec1
 Single-qualifier floating-point vector of 1 component. More...
 
typedef vec< 2, f32, defaultp > fvec2
 Single-qualifier floating-point vector of 2 components. More...
 
typedef vec< 3, f32, defaultp > fvec3
 Single-qualifier floating-point vector of 3 components. More...
 
typedef vec< 4, f32, defaultp > fvec4
 Single-qualifier floating-point vector of 4 components. More...
 
typedef float highp_f32
 High 32 bit single-qualifier floating-point scalar. More...
 
typedef mat< 2, 2, f32, highp > highp_f32mat2
 High single-qualifier floating-point 2x2 matrix. More...
 
typedef mat< 2, 2, f32, highp > highp_f32mat2x2
 High single-qualifier floating-point 2x2 matrix. More...
 
typedef mat< 2, 3, f32, highp > highp_f32mat2x3
 High single-qualifier floating-point 2x3 matrix. More...
 
typedef mat< 2, 4, f32, highp > highp_f32mat2x4
 High single-qualifier floating-point 2x4 matrix. More...
 
typedef mat< 3, 3, f32, highp > highp_f32mat3
 High single-qualifier floating-point 3x3 matrix. More...
 
typedef mat< 3, 2, f32, highp > highp_f32mat3x2
 High single-qualifier floating-point 3x2 matrix. More...
 
typedef mat< 3, 3, f32, highp > highp_f32mat3x3
 High single-qualifier floating-point 3x3 matrix. More...
 
typedef mat< 3, 4, f32, highp > highp_f32mat3x4
 High single-qualifier floating-point 3x4 matrix. More...
 
typedef mat< 4, 4, f32, highp > highp_f32mat4
 High single-qualifier floating-point 4x4 matrix. More...
 
typedef mat< 4, 2, f32, highp > highp_f32mat4x2
 High single-qualifier floating-point 4x2 matrix. More...
 
typedef mat< 4, 3, f32, highp > highp_f32mat4x3
 High single-qualifier floating-point 4x3 matrix. More...
 
typedef mat< 4, 4, f32, highp > highp_f32mat4x4
 High single-qualifier floating-point 4x4 matrix. More...
 
typedef qua< f32, highp > highp_f32quat
 High single-qualifier floating-point quaternion. More...
 
typedef vec< 1, f32, highp > highp_f32vec1
 High single-qualifier floating-point vector of 1 component. More...
 
typedef vec< 2, f32, highp > highp_f32vec2
 High single-qualifier floating-point vector of 2 components. More...
 
typedef vec< 3, f32, highp > highp_f32vec3
 High single-qualifier floating-point vector of 3 components. More...
 
typedef vec< 4, f32, highp > highp_f32vec4
 High single-qualifier floating-point vector of 4 components. More...
 
typedef double highp_f64
 High 64 bit double-qualifier floating-point scalar. More...
 
typedef mat< 2, 2, f64, highp > highp_f64mat2
 High double-qualifier floating-point 2x2 matrix. More...
 
typedef mat< 2, 2, f64, highp > highp_f64mat2x2
 High double-qualifier floating-point 2x2 matrix. More...
 
typedef mat< 2, 3, f64, highp > highp_f64mat2x3
 High double-qualifier floating-point 2x3 matrix. More...
 
typedef mat< 2, 4, f64, highp > highp_f64mat2x4
 High double-qualifier floating-point 2x4 matrix. More...
 
typedef mat< 3, 3, f64, highp > highp_f64mat3
 High double-qualifier floating-point 3x3 matrix. More...
 
typedef mat< 3, 2, f64, highp > highp_f64mat3x2
 High double-qualifier floating-point 3x2 matrix. More...
 
typedef mat< 3, 3, f64, highp > highp_f64mat3x3
 High double-qualifier floating-point 3x3 matrix. More...
 
typedef mat< 3, 4, f64, highp > highp_f64mat3x4
 High double-qualifier floating-point 3x4 matrix. More...
 
typedef mat< 4, 4, f64, highp > highp_f64mat4
 High double-qualifier floating-point 4x4 matrix. More...
 
typedef mat< 4, 2, f64, highp > highp_f64mat4x2
 High double-qualifier floating-point 4x2 matrix. More...
 
typedef mat< 4, 3, f64, highp > highp_f64mat4x3
 High double-qualifier floating-point 4x3 matrix. More...
 
typedef mat< 4, 4, f64, highp > highp_f64mat4x4
 High double-qualifier floating-point 4x4 matrix. More...
 
typedef qua< f64, highp > highp_f64quat
 High double-qualifier floating-point quaternion. More...
 
typedef vec< 1, f64, highp > highp_f64vec1
 High double-qualifier floating-point vector of 1 component. More...
 
typedef vec< 2, f64, highp > highp_f64vec2
 High double-qualifier floating-point vector of 2 components. More...
 
typedef vec< 3, f64, highp > highp_f64vec3
 High double-qualifier floating-point vector of 3 components. More...
 
typedef vec< 4, f64, highp > highp_f64vec4
 High double-qualifier floating-point vector of 4 components. More...
 
typedef float highp_float32
 High 32 bit single-qualifier floating-point scalar. More...
 
typedef float highp_float32_t
 High 32 bit single-qualifier floating-point scalar. More...
 
typedef double highp_float64
 High 64 bit double-qualifier floating-point scalar. More...
 
typedef double highp_float64_t
 High 64 bit double-qualifier floating-point scalar. More...
 
typedef mat< 2, 2, f32, highp > highp_fmat2
 High single-qualifier floating-point 2x2 matrix. More...
 
typedef mat< 2, 2, f32, highp > highp_fmat2x2
 High single-qualifier floating-point 2x2 matrix. More...
 
typedef mat< 2, 3, f32, highp > highp_fmat2x3
 High single-qualifier floating-point 2x3 matrix. More...
 
typedef mat< 2, 4, f32, highp > highp_fmat2x4
 High single-qualifier floating-point 2x4 matrix. More...
 
typedef mat< 3, 3, f32, highp > highp_fmat3
 High single-qualifier floating-point 3x3 matrix. More...
 
typedef mat< 3, 2, f32, highp > highp_fmat3x2
 High single-qualifier floating-point 3x2 matrix. More...
 
typedef mat< 3, 3, f32, highp > highp_fmat3x3
 High single-qualifier floating-point 3x3 matrix. More...
 
typedef mat< 3, 4, f32, highp > highp_fmat3x4
 High single-qualifier floating-point 3x4 matrix. More...
 
typedef mat< 4, 4, f32, highp > highp_fmat4
 High single-qualifier floating-point 4x4 matrix. More...
 
typedef mat< 4, 2, f32, highp > highp_fmat4x2
 High single-qualifier floating-point 4x2 matrix. More...
 
typedef mat< 4, 3, f32, highp > highp_fmat4x3
 High single-qualifier floating-point 4x3 matrix. More...
 
typedef mat< 4, 4, f32, highp > highp_fmat4x4
 High single-qualifier floating-point 4x4 matrix. More...
 
typedef vec< 1, float, highp > highp_fvec1
 High single-qualifier floating-point vector of 1 component. More...
 
typedef vec< 2, float, highp > highp_fvec2
 High single-qualifier floating-point vector of 2 components. More...
 
typedef vec< 3, float, highp > highp_fvec3
 High single-qualifier floating-point vector of 3 components. More...
 
typedef vec< 4, float, highp > highp_fvec4
 High single-qualifier floating-point vector of 4 components. More...
 
typedef int16 highp_i16
 High qualifier 16 bit signed integer type. More...
 
typedef vec< 1, i16, highp > highp_i16vec1
 High qualifier 16 bit signed integer scalar type. More...
 
typedef vec< 2, i16, highp > highp_i16vec2
 High qualifier 16 bit signed integer vector of 2 components type. More...
 
typedef vec< 3, i16, highp > highp_i16vec3
 High qualifier 16 bit signed integer vector of 3 components type. More...
 
typedef vec< 4, i16, highp > highp_i16vec4
 High qualifier 16 bit signed integer vector of 4 components type. More...
 
typedef int32 highp_i32
 High qualifier 32 bit signed integer type. More...
 
typedef vec< 1, i32, highp > highp_i32vec1
 High qualifier 32 bit signed integer scalar type. More...
 
typedef vec< 2, i32, highp > highp_i32vec2
 High qualifier 32 bit signed integer vector of 2 components type. More...
 
typedef vec< 3, i32, highp > highp_i32vec3
 High qualifier 32 bit signed integer vector of 3 components type. More...
 
typedef vec< 4, i32, highp > highp_i32vec4
 High qualifier 32 bit signed integer vector of 4 components type. More...
 
typedef int64 highp_i64
 High qualifier 64 bit signed integer type. More...
 
typedef vec< 1, i64, highp > highp_i64vec1
 High qualifier 64 bit signed integer scalar type. More...
 
typedef vec< 2, i64, highp > highp_i64vec2
 High qualifier 64 bit signed integer vector of 2 components type. More...
 
typedef vec< 3, i64, highp > highp_i64vec3
 High qualifier 64 bit signed integer vector of 3 components type. More...
 
typedef vec< 4, i64, highp > highp_i64vec4
 High qualifier 64 bit signed integer vector of 4 components type. More...
 
typedef int8 highp_i8
 High qualifier 8 bit signed integer type. More...
 
typedef vec< 1, i8, highp > highp_i8vec1
 High qualifier 8 bit signed integer scalar type. More...
 
typedef vec< 2, i8, highp > highp_i8vec2
 High qualifier 8 bit signed integer vector of 2 components type. More...
 
typedef vec< 3, i8, highp > highp_i8vec3
 High qualifier 8 bit signed integer vector of 3 components type. More...
 
typedef vec< 4, i8, highp > highp_i8vec4
 High qualifier 8 bit signed integer vector of 4 components type. More...
 
typedef int16 highp_int16
 High qualifier 16 bit signed integer type. More...
 
typedef int16 highp_int16_t
 High qualifier 16 bit signed integer type. More...
 
typedef int32 highp_int32
 High qualifier 32 bit signed integer type. More...
 
typedef int32 highp_int32_t
 High qualifier 32 bit signed integer type. More...
 
typedef int64 highp_int64
 High qualifier 64 bit signed integer type. More...
 
typedef int64 highp_int64_t
 High qualifier 64 bit signed integer type. More...
 
typedef int8 highp_int8
 High qualifier 8 bit signed integer type. More...
 
typedef int8 highp_int8_t
 High qualifier 8 bit signed integer type. More...
 
typedef uint16 highp_u16
 High qualifier 16 bit unsigned integer type. More...
 
typedef vec< 1, u16, highp > highp_u16vec1
 High qualifier 16 bit unsigned integer scalar type. More...
 
typedef vec< 2, u16, highp > highp_u16vec2
 High qualifier 16 bit unsigned integer vector of 2 components type. More...
 
typedef vec< 3, u16, highp > highp_u16vec3
 High qualifier 16 bit unsigned integer vector of 3 components type. More...
 
typedef vec< 4, u16, highp > highp_u16vec4
 High qualifier 16 bit unsigned integer vector of 4 components type. More...
 
typedef uint32 highp_u32
 High qualifier 32 bit unsigned integer type. More...
 
typedef vec< 1, u32, highp > highp_u32vec1
 High qualifier 32 bit unsigned integer scalar type. More...
 
typedef vec< 2, u32, highp > highp_u32vec2
 High qualifier 32 bit unsigned integer vector of 2 components type. More...
 
typedef vec< 3, u32, highp > highp_u32vec3
 High qualifier 32 bit unsigned integer vector of 3 components type. More...
 
typedef vec< 4, u32, highp > highp_u32vec4
 High qualifier 32 bit unsigned integer vector of 4 components type. More...
 
typedef uint64 highp_u64
 High qualifier 64 bit unsigned integer type. More...
 
typedef vec< 1, u64, highp > highp_u64vec1
 High qualifier 64 bit unsigned integer scalar type. More...
 
typedef vec< 2, u64, highp > highp_u64vec2
 High qualifier 64 bit unsigned integer vector of 2 components type. More...
 
typedef vec< 3, u64, highp > highp_u64vec3
 High qualifier 64 bit unsigned integer vector of 3 components type. More...
 
typedef vec< 4, u64, highp > highp_u64vec4
 High qualifier 64 bit unsigned integer vector of 4 components type. More...
 
typedef uint8 highp_u8
 High qualifier 8 bit unsigned integer type. More...
 
typedef vec< 1, u8, highp > highp_u8vec1
 High qualifier 8 bit unsigned integer scalar type. More...
 
typedef vec< 2, u8, highp > highp_u8vec2
 High qualifier 8 bit unsigned integer vector of 2 components type. More...
 
typedef vec< 3, u8, highp > highp_u8vec3
 High qualifier 8 bit unsigned integer vector of 3 components type. More...
 
typedef vec< 4, u8, highp > highp_u8vec4
 High qualifier 8 bit unsigned integer vector of 4 components type. More...
 
typedef uint16 highp_uint16
 High qualifier 16 bit unsigned integer type. More...
 
typedef uint16 highp_uint16_t
 High qualifier 16 bit unsigned integer type. More...
 
typedef uint32 highp_uint32
 High qualifier 32 bit unsigned integer type. More...
 
typedef uint32 highp_uint32_t
 High qualifier 32 bit unsigned integer type. More...
 
typedef uint64 highp_uint64
 High qualifier 64 bit unsigned integer type. More...
 
typedef uint64 highp_uint64_t
 High qualifier 64 bit unsigned integer type. More...
 
typedef uint8 highp_uint8
 High qualifier 8 bit unsigned integer type. More...
 
typedef uint8 highp_uint8_t
 High qualifier 8 bit unsigned integer type. More...
 
typedef int16 i16
 16 bit signed integer type. More...
 
typedef vec< 1, i16, defaultp > i16vec1
 16 bit signed integer scalar type. More...
 
typedef vec< 2, i16, defaultp > i16vec2
 16 bit signed integer vector of 2 components type. More...
 
typedef vec< 3, i16, defaultp > i16vec3
 16 bit signed integer vector of 3 components type. More...
 
typedef vec< 4, i16, defaultp > i16vec4
 16 bit signed integer vector of 4 components type. More...
 
typedef int32 i32
 32 bit signed integer type. More...
 
typedef vec< 1, i32, defaultp > i32vec1
 32 bit signed integer scalar type. More...
 
typedef vec< 2, i32, defaultp > i32vec2
 32 bit signed integer vector of 2 components type. More...
 
typedef vec< 3, i32, defaultp > i32vec3
 32 bit signed integer vector of 3 components type. More...
 
typedef vec< 4, i32, defaultp > i32vec4
 32 bit signed integer vector of 4 components type. More...
 
typedef int64 i64
 64 bit signed integer type. More...
 
typedef vec< 1, i64, defaultp > i64vec1
 64 bit signed integer scalar type. More...
 
typedef vec< 2, i64, defaultp > i64vec2
 64 bit signed integer vector of 2 components type. More...
 
typedef vec< 3, i64, defaultp > i64vec3
 64 bit signed integer vector of 3 components type. More...
 
typedef vec< 4, i64, defaultp > i64vec4
 64 bit signed integer vector of 4 components type. More...
 
typedef int8 i8
 8 bit signed integer type. More...
 
typedef vec< 1, i8, defaultp > i8vec1
 8 bit signed integer scalar type. More...
 
typedef vec< 2, i8, defaultp > i8vec2
 8 bit signed integer vector of 2 components type. More...
 
typedef vec< 3, i8, defaultp > i8vec3
 8 bit signed integer vector of 3 components type. More...
 
typedef vec< 4, i8, defaultp > i8vec4
 8 bit signed integer vector of 4 components type. More...
 
typedef int16 int16_t
 16 bit signed integer type. More...
 
typedef int32 int32_t
 32 bit signed integer type. More...
 
typedef int64 int64_t
 64 bit signed integer type. More...
 
typedef int8 int8_t
 8 bit signed integer type. More...
 
typedef float lowp_f32
 Low 32 bit single-qualifier floating-point scalar. More...
 
typedef mat< 2, 2, f32, lowp > lowp_f32mat2
 Low single-qualifier floating-point 2x2 matrix. More...
 
typedef mat< 2, 2, f32, lowp > lowp_f32mat2x2
 Low single-qualifier floating-point 2x2 matrix. More...
 
typedef mat< 2, 3, f32, lowp > lowp_f32mat2x3
 Low single-qualifier floating-point 2x3 matrix. More...
 
typedef mat< 2, 4, f32, lowp > lowp_f32mat2x4
 Low single-qualifier floating-point 2x4 matrix. More...
 
typedef mat< 3, 3, f32, lowp > lowp_f32mat3
 Low single-qualifier floating-point 3x3 matrix. More...
 
typedef mat< 3, 2, f32, lowp > lowp_f32mat3x2
 Low single-qualifier floating-point 3x2 matrix. More...
 
typedef mat< 3, 3, f32, lowp > lowp_f32mat3x3
 Low single-qualifier floating-point 3x3 matrix. More...
 
typedef mat< 3, 4, f32, lowp > lowp_f32mat3x4
 Low single-qualifier floating-point 3x4 matrix. More...
 
typedef mat< 4, 4, f32, lowp > lowp_f32mat4
 Low single-qualifier floating-point 4x4 matrix. More...
 
typedef mat< 4, 2, f32, lowp > lowp_f32mat4x2
 Low single-qualifier floating-point 4x2 matrix. More...
 
typedef mat< 4, 3, f32, lowp > lowp_f32mat4x3
 Low single-qualifier floating-point 4x3 matrix. More...
 
typedef mat< 4, 4, f32, lowp > lowp_f32mat4x4
 Low single-qualifier floating-point 4x4 matrix. More...
 
typedef qua< f32, lowp > lowp_f32quat
 Low single-qualifier floating-point quaternion. More...
 
typedef vec< 1, f32, lowp > lowp_f32vec1
 Low single-qualifier floating-point vector of 1 component. More...
 
typedef vec< 2, f32, lowp > lowp_f32vec2
 Low single-qualifier floating-point vector of 2 components. More...
 
typedef vec< 3, f32, lowp > lowp_f32vec3
 Low single-qualifier floating-point vector of 3 components. More...
 
typedef vec< 4, f32, lowp > lowp_f32vec4
 Low single-qualifier floating-point vector of 4 components. More...
 
typedef double lowp_f64
 Low 64 bit double-qualifier floating-point scalar. More...
 
typedef mat< 2, 2, f64, lowp > lowp_f64mat2
 Low double-qualifier floating-point 2x2 matrix. More...
 
typedef mat< 2, 2, f64, lowp > lowp_f64mat2x2
 Low double-qualifier floating-point 2x2 matrix. More...
 
typedef mat< 2, 3, f64, lowp > lowp_f64mat2x3
 Low double-qualifier floating-point 2x3 matrix. More...
 
typedef mat< 2, 4, f64, lowp > lowp_f64mat2x4
 Low double-qualifier floating-point 2x4 matrix. More...
 
typedef mat< 3, 3, f64, lowp > lowp_f64mat3
 Low double-qualifier floating-point 3x3 matrix. More...
 
typedef mat< 3, 2, f64, lowp > lowp_f64mat3x2
 Low double-qualifier floating-point 3x2 matrix. More...
 
typedef mat< 3, 3, f64, lowp > lowp_f64mat3x3
 Low double-qualifier floating-point 3x3 matrix. More...
 
typedef mat< 3, 4, f64, lowp > lowp_f64mat3x4
 Low double-qualifier floating-point 3x4 matrix. More...
 
typedef mat< 4, 4, f64, lowp > lowp_f64mat4
 Low double-qualifier floating-point 4x4 matrix. More...
 
typedef mat< 4, 2, f64, lowp > lowp_f64mat4x2
 Low double-qualifier floating-point 4x2 matrix. More...
 
typedef mat< 4, 3, f64, lowp > lowp_f64mat4x3
 Low double-qualifier floating-point 4x3 matrix. More...
 
typedef mat< 4, 4, f64, lowp > lowp_f64mat4x4
 Low double-qualifier floating-point 4x4 matrix. More...
 
typedef qua< f64, lowp > lowp_f64quat
 Low double-qualifier floating-point quaternion. More...
 
typedef vec< 1, f64, lowp > lowp_f64vec1
 Low double-qualifier floating-point vector of 1 component. More...
 
typedef vec< 2, f64, lowp > lowp_f64vec2
 Low double-qualifier floating-point vector of 2 components. More...
 
typedef vec< 3, f64, lowp > lowp_f64vec3
 Low double-qualifier floating-point vector of 3 components. More...
 
typedef vec< 4, f64, lowp > lowp_f64vec4
 Low double-qualifier floating-point vector of 4 components. More...
 
typedef float lowp_float32
 Low 32 bit single-qualifier floating-point scalar. More...
 
typedef float lowp_float32_t
 Low 32 bit single-qualifier floating-point scalar. More...
 
typedef double lowp_float64
 Low 64 bit double-qualifier floating-point scalar. More...
 
typedef double lowp_float64_t
 Low 64 bit double-qualifier floating-point scalar. More...
 
typedef mat< 2, 2, f32, lowp > lowp_fmat2
 Low single-qualifier floating-point 2x2 matrix. More...
 
typedef mat< 2, 2, f32, lowp > lowp_fmat2x2
 Low single-qualifier floating-point 2x2 matrix. More...
 
typedef mat< 2, 3, f32, lowp > lowp_fmat2x3
 Low single-qualifier floating-point 2x3 matrix. More...
 
typedef mat< 2, 4, f32, lowp > lowp_fmat2x4
 Low single-qualifier floating-point 2x4 matrix. More...
 
typedef mat< 3, 3, f32, lowp > lowp_fmat3
 Low single-qualifier floating-point 3x3 matrix. More...
 
typedef mat< 3, 2, f32, lowp > lowp_fmat3x2
 Low single-qualifier floating-point 3x2 matrix. More...
 
typedef mat< 3, 3, f32, lowp > lowp_fmat3x3
 Low single-qualifier floating-point 3x3 matrix. More...
 
typedef mat< 3, 4, f32, lowp > lowp_fmat3x4
 Low single-qualifier floating-point 3x4 matrix. More...
 
typedef mat< 4, 4, f32, lowp > lowp_fmat4
 Low single-qualifier floating-point 4x4 matrix. More...
 
typedef mat< 4, 2, f32, lowp > lowp_fmat4x2
 Low single-qualifier floating-point 4x2 matrix. More...
 
typedef mat< 4, 3, f32, lowp > lowp_fmat4x3
 Low single-qualifier floating-point 4x3 matrix. More...
 
typedef mat< 4, 4, f32, lowp > lowp_fmat4x4
 Low single-qualifier floating-point 4x4 matrix. More...
 
typedef vec< 1, float, lowp > lowp_fvec1
 Low single-qualifier floating-point vector of 1 component. More...
 
typedef vec< 2, float, lowp > lowp_fvec2
 Low single-qualifier floating-point vector of 2 components. More...
 
typedef vec< 3, float, lowp > lowp_fvec3
 Low single-qualifier floating-point vector of 3 components. More...
 
typedef vec< 4, float, lowp > lowp_fvec4
 Low single-qualifier floating-point vector of 4 components. More...
 
typedef int16 lowp_i16
 Low qualifier 16 bit signed integer type. More...
 
typedef vec< 1, i16, lowp > lowp_i16vec1
 Low qualifier 16 bit signed integer scalar type. More...
 
typedef vec< 2, i16, lowp > lowp_i16vec2
 Low qualifier 16 bit signed integer vector of 2 components type. More...
 
typedef vec< 3, i16, lowp > lowp_i16vec3
 Low qualifier 16 bit signed integer vector of 3 components type. More...
 
typedef vec< 4, i16, lowp > lowp_i16vec4
 Low qualifier 16 bit signed integer vector of 4 components type. More...
 
typedef int32 lowp_i32
 Low qualifier 32 bit signed integer type. More...
 
typedef vec< 1, i32, lowp > lowp_i32vec1
 Low qualifier 32 bit signed integer scalar type. More...
 
typedef vec< 2, i32, lowp > lowp_i32vec2
 Low qualifier 32 bit signed integer vector of 2 components type. More...
 
typedef vec< 3, i32, lowp > lowp_i32vec3
 Low qualifier 32 bit signed integer vector of 3 components type. More...
 
typedef vec< 4, i32, lowp > lowp_i32vec4
 Low qualifier 32 bit signed integer vector of 4 components type. More...
 
typedef int64 lowp_i64
 Low qualifier 64 bit signed integer type. More...
 
typedef vec< 1, i64, lowp > lowp_i64vec1
 Low qualifier 64 bit signed integer scalar type. More...
 
typedef vec< 2, i64, lowp > lowp_i64vec2
 Low qualifier 64 bit signed integer vector of 2 components type. More...
 
typedef vec< 3, i64, lowp > lowp_i64vec3
 Low qualifier 64 bit signed integer vector of 3 components type. More...
 
typedef vec< 4, i64, lowp > lowp_i64vec4
 Low qualifier 64 bit signed integer vector of 4 components type. More...
 
typedef int8 lowp_i8
 Low qualifier 8 bit signed integer type. More...
 
typedef vec< 1, i8, lowp > lowp_i8vec1
 Low qualifier 8 bit signed integer scalar type. More...
 
typedef vec< 2, i8, lowp > lowp_i8vec2
 Low qualifier 8 bit signed integer vector of 2 components type. More...
 
typedef vec< 3, i8, lowp > lowp_i8vec3
 Low qualifier 8 bit signed integer vector of 3 components type. More...
 
typedef vec< 4, i8, lowp > lowp_i8vec4
 Low qualifier 8 bit signed integer vector of 4 components type. More...
 
typedef int16 lowp_int16
 Low qualifier 16 bit signed integer type. More...
 
typedef int16 lowp_int16_t
 Low qualifier 16 bit signed integer type. More...
 
typedef int32 lowp_int32
 Low qualifier 32 bit signed integer type. More...
 
typedef int32 lowp_int32_t
 Low qualifier 32 bit signed integer type. More...
 
typedef int64 lowp_int64
 Low qualifier 64 bit signed integer type. More...
 
typedef int64 lowp_int64_t
 Low qualifier 64 bit signed integer type. More...
 
typedef int8 lowp_int8
 Low qualifier 8 bit signed integer type. More...
 
typedef int8 lowp_int8_t
 Low qualifier 8 bit signed integer type. More...
 
typedef uint16 lowp_u16
 Low qualifier 16 bit unsigned integer type. More...
 
typedef vec< 1, u16, lowp > lowp_u16vec1
 Low qualifier 16 bit unsigned integer scalar type. More...
 
typedef vec< 2, u16, lowp > lowp_u16vec2
 Low qualifier 16 bit unsigned integer vector of 2 components type. More...
 
typedef vec< 3, u16, lowp > lowp_u16vec3
 Low qualifier 16 bit unsigned integer vector of 3 components type. More...
 
typedef vec< 4, u16, lowp > lowp_u16vec4
 Low qualifier 16 bit unsigned integer vector of 4 components type. More...
 
typedef uint32 lowp_u32
 Low qualifier 32 bit unsigned integer type. More...
 
typedef vec< 1, u32, lowp > lowp_u32vec1
 Low qualifier 32 bit unsigned integer scalar type. More...
 
typedef vec< 2, u32, lowp > lowp_u32vec2
 Low qualifier 32 bit unsigned integer vector of 2 components type. More...
 
typedef vec< 3, u32, lowp > lowp_u32vec3
 Low qualifier 32 bit unsigned integer vector of 3 components type. More...
 
typedef vec< 4, u32, lowp > lowp_u32vec4
 Low qualifier 32 bit unsigned integer vector of 4 components type. More...
 
typedef uint64 lowp_u64
 Low qualifier 64 bit unsigned integer type. More...
 
typedef vec< 1, u64, lowp > lowp_u64vec1
 Low qualifier 64 bit unsigned integer scalar type. More...
 
typedef vec< 2, u64, lowp > lowp_u64vec2
 Low qualifier 64 bit unsigned integer vector of 2 components type. More...
 
typedef vec< 3, u64, lowp > lowp_u64vec3
 Low qualifier 64 bit unsigned integer vector of 3 components type. More...
 
typedef vec< 4, u64, lowp > lowp_u64vec4
 Low qualifier 64 bit unsigned integer vector of 4 components type. More...
 
typedef uint8 lowp_u8
 Low qualifier 8 bit unsigned integer type. More...
 
typedef vec< 1, u8, lowp > lowp_u8vec1
 Low qualifier 8 bit unsigned integer scalar type. More...
 
typedef vec< 2, u8, lowp > lowp_u8vec2
 Low qualifier 8 bit unsigned integer vector of 2 components type. More...
 
typedef vec< 3, u8, lowp > lowp_u8vec3
 Low qualifier 8 bit unsigned integer vector of 3 components type. More...
 
typedef vec< 4, u8, lowp > lowp_u8vec4
 Low qualifier 8 bit unsigned integer vector of 4 components type. More...
 
typedef uint16 lowp_uint16
 Low qualifier 16 bit unsigned integer type. More...
 
typedef uint16 lowp_uint16_t
 Low qualifier 16 bit unsigned integer type. More...
 
typedef uint32 lowp_uint32
 Low qualifier 32 bit unsigned integer type. More...
 
typedef uint32 lowp_uint32_t
 Low qualifier 32 bit unsigned integer type. More...
 
typedef uint64 lowp_uint64
 Low qualifier 64 bit unsigned integer type. More...
 
typedef uint64 lowp_uint64_t
 Low qualifier 64 bit unsigned integer type. More...
 
typedef uint8 lowp_uint8
 Low qualifier 8 bit unsigned integer type. More...
 
typedef uint8 lowp_uint8_t
 Low qualifier 8 bit unsigned integer type. More...
 
typedef float mediump_f32
 Medium 32 bit single-qualifier floating-point scalar. More...
 
typedef mat< 2, 2, f32, mediump > mediump_f32mat2
 Medium single-qualifier floating-point 2x2 matrix. More...
 
typedef mat< 2, 2, f32, mediump > mediump_f32mat2x2
 Medium single-qualifier floating-point 2x2 matrix. More...
 
typedef mat< 2, 3, f32, mediump > mediump_f32mat2x3
 Medium single-qualifier floating-point 2x3 matrix. More...
 
typedef mat< 2, 4, f32, mediump > mediump_f32mat2x4
 Medium single-qualifier floating-point 2x4 matrix. More...
 
typedef mat< 3, 3, f32, mediump > mediump_f32mat3
 Medium single-qualifier floating-point 3x3 matrix. More...
 
typedef mat< 3, 2, f32, mediump > mediump_f32mat3x2
 Medium single-qualifier floating-point 3x2 matrix. More...
 
typedef mat< 3, 3, f32, mediump > mediump_f32mat3x3
 Medium single-qualifier floating-point 3x3 matrix. More...
 
typedef mat< 3, 4, f32, mediump > mediump_f32mat3x4
 Medium single-qualifier floating-point 3x4 matrix. More...
 
typedef mat< 4, 4, f32, mediump > mediump_f32mat4
 Medium single-qualifier floating-point 4x4 matrix. More...
 
typedef mat< 4, 2, f32, mediump > mediump_f32mat4x2
 Medium single-qualifier floating-point 4x2 matrix. More...
 
typedef mat< 4, 3, f32, mediump > mediump_f32mat4x3
 Medium single-qualifier floating-point 4x3 matrix. More...
 
typedef mat< 4, 4, f32, mediump > mediump_f32mat4x4
 Medium single-qualifier floating-point 4x4 matrix. More...
 
typedef qua< f32, mediump > mediump_f32quat
 Medium single-qualifier floating-point quaternion. More...
 
typedef vec< 1, f32, mediump > mediump_f32vec1
 Medium single-qualifier floating-point vector of 1 component. More...
 
typedef vec< 2, f32, mediump > mediump_f32vec2
 Medium single-qualifier floating-point vector of 2 components. More...
 
typedef vec< 3, f32, mediump > mediump_f32vec3
 Medium single-qualifier floating-point vector of 3 components. More...
 
typedef vec< 4, f32, mediump > mediump_f32vec4
 Medium single-qualifier floating-point vector of 4 components. More...
 
typedef double mediump_f64
 Medium 64 bit double-qualifier floating-point scalar. More...
 
typedef mat< 2, 2, f64, mediump > mediump_f64mat2
 Medium double-qualifier floating-point 2x2 matrix. More...
 
typedef mat< 2, 2, f64, mediump > mediump_f64mat2x2
 Medium double-qualifier floating-point 2x2 matrix. More...
 
typedef mat< 2, 3, f64, mediump > mediump_f64mat2x3
 Medium double-qualifier floating-point 2x3 matrix. More...
 
typedef mat< 2, 4, f64, mediump > mediump_f64mat2x4
 Medium double-qualifier floating-point 2x4 matrix. More...
 
typedef mat< 3, 3, f64, mediump > mediump_f64mat3
 Medium double-qualifier floating-point 3x3 matrix. More...
 
typedef mat< 3, 2, f64, mediump > mediump_f64mat3x2
 Medium double-qualifier floating-point 3x2 matrix. More...
 
typedef mat< 3, 3, f64, mediump > mediump_f64mat3x3
 Medium double-qualifier floating-point 3x3 matrix. More...
 
typedef mat< 3, 4, f64, mediump > mediump_f64mat3x4
 Medium double-qualifier floating-point 3x4 matrix. More...
 
typedef mat< 4, 4, f64, mediump > mediump_f64mat4
 Medium double-qualifier floating-point 4x4 matrix. More...
 
typedef mat< 4, 2, f64, mediump > mediump_f64mat4x2
 Medium double-qualifier floating-point 4x2 matrix. More...
 
typedef mat< 4, 3, f64, mediump > mediump_f64mat4x3
 Medium double-qualifier floating-point 4x3 matrix. More...
 
typedef mat< 4, 4, f64, mediump > mediump_f64mat4x4
 Medium double-qualifier floating-point 4x4 matrix. More...
 
typedef qua< f64, mediump > mediump_f64quat
 Medium double-qualifier floating-point quaternion. More...
 
typedef vec< 1, f64, mediump > mediump_f64vec1
 Medium double-qualifier floating-point vector of 1 component. More...
 
typedef vec< 2, f64, mediump > mediump_f64vec2
 Medium double-qualifier floating-point vector of 2 components. More...
 
typedef vec< 3, f64, mediump > mediump_f64vec3
 Medium double-qualifier floating-point vector of 3 components. More...
 
typedef vec< 4, f64, mediump > mediump_f64vec4
 Medium double-qualifier floating-point vector of 4 components. More...
 
typedef float mediump_float32
 Medium 32 bit single-qualifier floating-point scalar. More...
 
typedef float mediump_float32_t
 Medium 32 bit single-qualifier floating-point scalar. More...
 
typedef double mediump_float64
 Medium 64 bit double-qualifier floating-point scalar. More...
 
typedef double mediump_float64_t
 Medium 64 bit double-qualifier floating-point scalar. More...
 
typedef mat< 2, 2, f32, mediump > mediump_fmat2
 Medium single-qualifier floating-point 2x2 matrix. More...
 
typedef mat< 2, 2, f32, mediump > mediump_fmat2x2
 Medium single-qualifier floating-point 2x2 matrix. More...
 
typedef mat< 2, 3, f32, mediump > mediump_fmat2x3
 Medium single-qualifier floating-point 2x3 matrix. More...
 
typedef mat< 2, 4, f32, mediump > mediump_fmat2x4
 Medium single-qualifier floating-point 2x4 matrix. More...
 
typedef mat< 3, 3, f32, mediump > mediump_fmat3
 Medium single-qualifier floating-point 3x3 matrix. More...
 
typedef mat< 3, 2, f32, mediump > mediump_fmat3x2
 Medium single-qualifier floating-point 3x2 matrix. More...
 
typedef mat< 3, 3, f32, mediump > mediump_fmat3x3
 Medium single-qualifier floating-point 3x3 matrix. More...
 
typedef mat< 3, 4, f32, mediump > mediump_fmat3x4
 Medium single-qualifier floating-point 3x4 matrix. More...
 
typedef mat< 4, 4, f32, mediump > mediump_fmat4
 Medium single-qualifier floating-point 4x4 matrix. More...
 
typedef mat< 4, 2, f32, mediump > mediump_fmat4x2
 Medium single-qualifier floating-point 4x2 matrix. More...
 
typedef mat< 4, 3, f32, mediump > mediump_fmat4x3
 Medium single-qualifier floating-point 4x3 matrix. More...
 
typedef mat< 4, 4, f32, mediump > mediump_fmat4x4
 Medium single-qualifier floating-point 4x4 matrix. More...
 
typedef vec< 1, float, mediump > mediump_fvec1
 Medium single-qualifier floating-point vector of 1 component. More...
 
typedef vec< 2, float, mediump > mediump_fvec2
 Medium single-qualifier floating-point vector of 2 components. More...
 
typedef vec< 3, float, mediump > mediump_fvec3
 Medium single-qualifier floating-point vector of 3 components. More...
 
typedef vec< 4, float, mediump > mediump_fvec4
 Medium single-qualifier floating-point vector of 4 components. More...
 
typedef int16 mediump_i16
 Medium qualifier 16 bit signed integer type. More...
 
typedef vec< 1, i16, mediump > mediump_i16vec1
 Medium qualifier 16 bit signed integer scalar type. More...
 
typedef vec< 2, i16, mediump > mediump_i16vec2
 Medium qualifier 16 bit signed integer vector of 2 components type. More...
 
typedef vec< 3, i16, mediump > mediump_i16vec3
 Medium qualifier 16 bit signed integer vector of 3 components type. More...
 
typedef vec< 4, i16, mediump > mediump_i16vec4
 Medium qualifier 16 bit signed integer vector of 4 components type. More...
 
typedef int32 mediump_i32
 Medium qualifier 32 bit signed integer type. More...
 
typedef vec< 1, i32, mediump > mediump_i32vec1
 Medium qualifier 32 bit signed integer scalar type. More...
 
typedef vec< 2, i32, mediump > mediump_i32vec2
 Medium qualifier 32 bit signed integer vector of 2 components type. More...
 
typedef vec< 3, i32, mediump > mediump_i32vec3
 Medium qualifier 32 bit signed integer vector of 3 components type. More...
 
typedef vec< 4, i32, mediump > mediump_i32vec4
 Medium qualifier 32 bit signed integer vector of 4 components type. More...
 
typedef int64 mediump_i64
 Medium qualifier 64 bit signed integer type. More...
 
typedef vec< 1, i64, mediump > mediump_i64vec1
 Medium qualifier 64 bit signed integer scalar type. More...
 
typedef vec< 2, i64, mediump > mediump_i64vec2
 Medium qualifier 64 bit signed integer vector of 2 components type. More...
 
typedef vec< 3, i64, mediump > mediump_i64vec3
 Medium qualifier 64 bit signed integer vector of 3 components type. More...
 
typedef vec< 4, i64, mediump > mediump_i64vec4
 Medium qualifier 64 bit signed integer vector of 4 components type. More...
 
typedef int8 mediump_i8
 Medium qualifier 8 bit signed integer type. More...
 
typedef vec< 1, i8, mediump > mediump_i8vec1
 Medium qualifier 8 bit signed integer scalar type. More...
 
typedef vec< 2, i8, mediump > mediump_i8vec2
 Medium qualifier 8 bit signed integer vector of 2 components type. More...
 
typedef vec< 3, i8, mediump > mediump_i8vec3
 Medium qualifier 8 bit signed integer vector of 3 components type. More...
 
typedef vec< 4, i8, mediump > mediump_i8vec4
 Medium qualifier 8 bit signed integer vector of 4 components type. More...
 
typedef int16 mediump_int16
 Medium qualifier 16 bit signed integer type. More...
 
typedef int16 mediump_int16_t
 Medium qualifier 16 bit signed integer type. More...
 
typedef int32 mediump_int32
 Medium qualifier 32 bit signed integer type. More...
 
typedef int32 mediump_int32_t
 Medium qualifier 32 bit signed integer type. More...
 
typedef int64 mediump_int64
 Medium qualifier 64 bit signed integer type. More...
 
typedef int64 mediump_int64_t
 Medium qualifier 64 bit signed integer type. More...
 
typedef int8 mediump_int8
 Medium qualifier 8 bit signed integer type. More...
 
typedef int8 mediump_int8_t
 Medium qualifier 8 bit signed integer type. More...
 
typedef uint16 mediump_u16
 Medium qualifier 16 bit unsigned integer type. More...
 
typedef vec< 1, u16, mediump > mediump_u16vec1
 Medium qualifier 16 bit unsigned integer scalar type. More...
 
typedef vec< 2, u16, mediump > mediump_u16vec2
 Medium qualifier 16 bit unsigned integer vector of 2 components type. More...
 
typedef vec< 3, u16, mediump > mediump_u16vec3
 Medium qualifier 16 bit unsigned integer vector of 3 components type. More...
 
typedef vec< 4, u16, mediump > mediump_u16vec4
 Medium qualifier 16 bit unsigned integer vector of 4 components type. More...
 
typedef uint32 mediump_u32
 Medium qualifier 32 bit unsigned integer type. More...
 
typedef vec< 1, u32, mediump > mediump_u32vec1
 Medium qualifier 32 bit unsigned integer scalar type. More...
 
typedef vec< 2, u32, mediump > mediump_u32vec2
 Medium qualifier 32 bit unsigned integer vector of 2 components type. More...
 
typedef vec< 3, u32, mediump > mediump_u32vec3
 Medium qualifier 32 bit unsigned integer vector of 3 components type. More...
 
typedef vec< 4, u32, mediump > mediump_u32vec4
 Medium qualifier 32 bit unsigned integer vector of 4 components type. More...
 
typedef uint64 mediump_u64
 Medium qualifier 64 bit unsigned integer type. More...
 
typedef vec< 1, u64, mediump > mediump_u64vec1
 Medium qualifier 64 bit unsigned integer scalar type. More...
 
typedef vec< 2, u64, mediump > mediump_u64vec2
 Medium qualifier 64 bit unsigned integer vector of 2 components type. More...
 
typedef vec< 3, u64, mediump > mediump_u64vec3
 Medium qualifier 64 bit unsigned integer vector of 3 components type. More...
 
typedef vec< 4, u64, mediump > mediump_u64vec4
 Medium qualifier 64 bit unsigned integer vector of 4 components type. More...
 
typedef uint8 mediump_u8
 Medium qualifier 8 bit unsigned integer type. More...
 
typedef vec< 1, u8, mediump > mediump_u8vec1
 Medium qualifier 8 bit unsigned integer scalar type. More...
 
typedef vec< 2, u8, mediump > mediump_u8vec2
 Medium qualifier 8 bit unsigned integer vector of 2 components type. More...
 
typedef vec< 3, u8, mediump > mediump_u8vec3
 Medium qualifier 8 bit unsigned integer vector of 3 components type. More...
 
typedef vec< 4, u8, mediump > mediump_u8vec4
 Medium qualifier 8 bit unsigned integer vector of 4 components type. More...
 
typedef uint16 mediump_uint16
 Medium qualifier 16 bit unsigned integer type. More...
 
typedef uint16 mediump_uint16_t
 Medium qualifier 16 bit unsigned integer type. More...
 
typedef uint32 mediump_uint32
 Medium qualifier 32 bit unsigned integer type. More...
 
typedef uint32 mediump_uint32_t
 Medium qualifier 32 bit unsigned integer type. More...
 
typedef uint64 mediump_uint64
 Medium qualifier 64 bit unsigned integer type. More...
 
typedef uint64 mediump_uint64_t
 Medium qualifier 64 bit unsigned integer type. More...
 
typedef uint8 mediump_uint8
 Medium qualifier 8 bit unsigned integer type. More...
 
typedef uint8 mediump_uint8_t
 Medium qualifier 8 bit unsigned integer type. More...
 
typedef uint16 u16
 Default qualifier 16 bit unsigned integer type. More...
 
typedef vec< 1, u16, defaultp > u16vec1
 Default qualifier 16 bit unsigned integer scalar type. More...
 
typedef vec< 2, u16, defaultp > u16vec2
 Default qualifier 16 bit unsigned integer vector of 2 components type. More...
 
typedef vec< 3, u16, defaultp > u16vec3
 Default qualifier 16 bit unsigned integer vector of 3 components type. More...
 
typedef vec< 4, u16, defaultp > u16vec4
 Default qualifier 16 bit unsigned integer vector of 4 components type. More...
 
typedef uint32 u32
 Default qualifier 32 bit unsigned integer type. More...
 
typedef vec< 1, u32, defaultp > u32vec1
 Default qualifier 32 bit unsigned integer scalar type. More...
 
typedef vec< 2, u32, defaultp > u32vec2
 Default qualifier 32 bit unsigned integer vector of 2 components type. More...
 
typedef vec< 3, u32, defaultp > u32vec3
 Default qualifier 32 bit unsigned integer vector of 3 components type. More...
 
typedef vec< 4, u32, defaultp > u32vec4
 Default qualifier 32 bit unsigned integer vector of 4 components type. More...
 
typedef uint64 u64
 Default qualifier 64 bit unsigned integer type. More...
 
typedef vec< 1, u64, defaultp > u64vec1
 Default qualifier 64 bit unsigned integer scalar type. More...
 
typedef vec< 2, u64, defaultp > u64vec2
 Default qualifier 64 bit unsigned integer vector of 2 components type. More...
 
typedef vec< 3, u64, defaultp > u64vec3
 Default qualifier 64 bit unsigned integer vector of 3 components type. More...
 
typedef vec< 4, u64, defaultp > u64vec4
 Default qualifier 64 bit unsigned integer vector of 4 components type. More...
 
typedef uint8 u8
 Default qualifier 8 bit unsigned integer type. More...
 
typedef vec< 1, u8, defaultp > u8vec1
 Default qualifier 8 bit unsigned integer scalar type. More...
 
typedef vec< 2, u8, defaultp > u8vec2
 Default qualifier 8 bit unsigned integer vector of 2 components type. More...
 
typedef vec< 3, u8, defaultp > u8vec3
 Default qualifier 8 bit unsigned integer vector of 3 components type. More...
 
typedef vec< 4, u8, defaultp > u8vec4
 Default qualifier 8 bit unsigned integer vector of 4 components type. More...
 
typedef uint16 uint16_t
 Default qualifier 16 bit unsigned integer type. More...
 
typedef uint32 uint32_t
 Default qualifier 32 bit unsigned integer type. More...
 
typedef uint64 uint64_t
 Default qualifier 64 bit unsigned integer type. More...
 
typedef uint8 uint8_t
 Default qualifier 8 bit unsigned integer type. More...
 

Detailed Description

Include <glm/gtc/type_precision.hpp> to use the features of this extension.

Defines specific C++-based qualifier types.

Typedef Documentation

typedef float32 f32

Default 32 bit single-qualifier floating-point scalar.

See also
GLM_GTC_type_precision

Definition at line 150 of file fwd.hpp.

typedef mat< 2, 2, f32, defaultp > f32mat2

Single-qualifier floating-point 2x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 552 of file fwd.hpp.

typedef mat< 2, 2, f32, defaultp > f32mat2x2

Single-qualifier floating-point 2x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 700 of file fwd.hpp.

typedef mat< 2, 3, f32, defaultp > f32mat2x3

Single-qualifier floating-point 2x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 703 of file fwd.hpp.

typedef mat< 2, 4, f32, defaultp > f32mat2x4

Single-qualifier floating-point 2x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 706 of file fwd.hpp.

typedef mat< 3, 3, f32, defaultp > f32mat3

Single-qualifier floating-point 3x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 553 of file fwd.hpp.

typedef mat< 3, 2, f32, defaultp > f32mat3x2

Single-qualifier floating-point 3x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 701 of file fwd.hpp.

typedef mat< 3, 3, f32, defaultp > f32mat3x3

Single-qualifier floating-point 3x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 704 of file fwd.hpp.

typedef mat< 3, 4, f32, defaultp > f32mat3x4

Single-qualifier floating-point 3x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 707 of file fwd.hpp.

typedef mat< 4, 4, f32, defaultp > f32mat4

Single-qualifier floating-point 4x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 554 of file fwd.hpp.

typedef mat< 4, 2, f32, defaultp > f32mat4x2

Single-qualifier floating-point 4x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 702 of file fwd.hpp.

typedef mat< 4, 3, f32, defaultp > f32mat4x3

Single-qualifier floating-point 4x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 705 of file fwd.hpp.

typedef mat< 4, 4, f32, defaultp > f32mat4x4

Single-qualifier floating-point 4x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 708 of file fwd.hpp.

typedef qua< f32, defaultp > f32quat

Single-qualifier floating-point quaternion.

See also
GLM_GTC_type_precision

Definition at line 805 of file fwd.hpp.

typedef vec< 1, f32, defaultp > f32vec1

Single-qualifier floating-point vector of 1 component.

See also
GLM_GTC_type_precision

Definition at line 461 of file fwd.hpp.

typedef vec< 2, f32, defaultp > f32vec2

Single-qualifier floating-point vector of 2 components.

See also
GLM_GTC_type_precision

Definition at line 462 of file fwd.hpp.

typedef vec< 3, f32, defaultp > f32vec3

Single-qualifier floating-point vector of 3 components.

See also
GLM_GTC_type_precision

Definition at line 463 of file fwd.hpp.

typedef vec< 4, f32, defaultp > f32vec4

Single-qualifier floating-point vector of 4 components.

See also
GLM_GTC_type_precision

Definition at line 464 of file fwd.hpp.

typedef float64 f64

Default 64 bit double-qualifier floating-point scalar.

See also
GLM_GTC_type_precision

Definition at line 166 of file fwd.hpp.

typedef mat< 2, 2, f64, defaultp > f64mat2

Double-qualifier floating-point 2x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 584 of file fwd.hpp.

typedef mat< 2, 2, f64, defaultp > f64mat2x2

Double-qualifier floating-point 2x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 780 of file fwd.hpp.

typedef mat< 2, 3, f64, defaultp > f64mat2x3

Double-qualifier floating-point 2x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 783 of file fwd.hpp.

typedef mat< 2, 4, f64, defaultp > f64mat2x4

Double-qualifier floating-point 2x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 786 of file fwd.hpp.

typedef mat< 3, 3, f64, defaultp > f64mat3

Double-qualifier floating-point 3x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 585 of file fwd.hpp.

typedef mat< 3, 2, f64, defaultp > f64mat3x2

Double-qualifier floating-point 3x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 781 of file fwd.hpp.

typedef mat< 3, 3, f64, defaultp > f64mat3x3

Double-qualifier floating-point 3x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 784 of file fwd.hpp.

typedef mat< 3, 4, f64, defaultp > f64mat3x4

Double-qualifier floating-point 3x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 787 of file fwd.hpp.

typedef mat< 4, 4, f64, defaultp > f64mat4

Double-qualifier floating-point 4x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 586 of file fwd.hpp.

typedef mat< 4, 2, f64, defaultp > f64mat4x2

Double-qualifier floating-point 4x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 782 of file fwd.hpp.

typedef mat< 4, 3, f64, defaultp > f64mat4x3

Double-qualifier floating-point 4x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 785 of file fwd.hpp.

typedef mat< 4, 4, f64, defaultp > f64mat4x4

Double-qualifier floating-point 4x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 788 of file fwd.hpp.

typedef qua< f64, defaultp > f64quat

Double-qualifier floating-point quaternion.

See also
GLM_GTC_type_precision

Definition at line 815 of file fwd.hpp.

typedef vec< 1, f64, defaultp > f64vec1

Double-qualifier floating-point vector of 1 component.

See also
GLM_GTC_type_precision

Definition at line 501 of file fwd.hpp.

typedef vec< 2, f64, defaultp > f64vec2

Double-qualifier floating-point vector of 2 components.

See also
GLM_GTC_type_precision

Definition at line 502 of file fwd.hpp.

typedef vec< 3, f64, defaultp > f64vec3

Double-qualifier floating-point vector of 3 components.

See also
GLM_GTC_type_precision

Definition at line 503 of file fwd.hpp.

typedef vec< 4, f64, defaultp > f64vec4

Double-qualifier floating-point vector of 4 components.

See also
GLM_GTC_type_precision

Definition at line 504 of file fwd.hpp.

typedef float float32

Single-qualifier floating-point scalar.

See also
GLM_GTC_type_precision

Definition at line 155 of file fwd.hpp.

typedef float32 float32_t

Default 32 bit single-qualifier floating-point scalar.

See also
GLM_GTC_type_precision

Definition at line 160 of file fwd.hpp.

typedef double float64

Double-qualifier floating-point scalar.

See also
GLM_GTC_type_precision

Definition at line 171 of file fwd.hpp.

typedef float64 float64_t

Default 64 bit double-qualifier floating-point scalar.

See also
GLM_GTC_type_precision

Definition at line 176 of file fwd.hpp.

typedef mat< 2, 2, f32, defaultp > fmat2

Single-qualifier floating-point 2x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 536 of file fwd.hpp.

typedef mat< 2, 2, f32, defaultp > fmat2x2

Single-qualifier floating-point 2x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 660 of file fwd.hpp.

typedef mat< 2, 3, f32, defaultp > fmat2x3

Single-qualifier floating-point 2x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 663 of file fwd.hpp.

typedef mat< 2, 4, f32, defaultp > fmat2x4

Single-qualifier floating-point 2x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 666 of file fwd.hpp.

typedef mat< 3, 3, f32, defaultp > fmat3

Single-qualifier floating-point 3x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 537 of file fwd.hpp.

typedef mat< 3, 2, f32, defaultp > fmat3x2

Single-qualifier floating-point 3x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 661 of file fwd.hpp.

typedef mat< 3, 3, f32, defaultp > fmat3x3

Single-qualifier floating-point 3x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 664 of file fwd.hpp.

typedef mat< 3, 4, f32, defaultp > fmat3x4

Single-qualifier floating-point 3x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 667 of file fwd.hpp.

typedef mat< 4, 4, f32, defaultp > fmat4

Single-qualifier floating-point 4x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 538 of file fwd.hpp.

typedef mat< 4, 2, f32, defaultp > fmat4x2

Single-qualifier floating-point 4x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 662 of file fwd.hpp.

typedef mat< 4, 3, f32, defaultp > fmat4x3

Single-qualifier floating-point 4x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 665 of file fwd.hpp.

typedef mat< 4, 4, f32, defaultp > fmat4x4

Single-qualifier floating-point 4x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 668 of file fwd.hpp.

typedef vec< 1, float, defaultp > fvec1

Single-qualifier floating-point vector of 1 component.

See also
GLM_GTC_type_precision

Definition at line 441 of file fwd.hpp.

typedef vec< 2, float, defaultp > fvec2

Single-qualifier floating-point vector of 2 components.

See also
GLM_GTC_type_precision

Definition at line 442 of file fwd.hpp.

typedef vec< 3, float, defaultp > fvec3

Single-qualifier floating-point vector of 3 components.

See also
GLM_GTC_type_precision

Definition at line 443 of file fwd.hpp.

typedef vec< 4, float, defaultp > fvec4

Single-qualifier floating-point vector of 4 components.

See also
GLM_GTC_type_precision

Definition at line 444 of file fwd.hpp.

typedef float32 highp_f32

High 32 bit single-qualifier floating-point scalar.

See also
GLM_GTC_type_precision

Definition at line 149 of file fwd.hpp.

typedef highp_f32mat2x2 highp_f32mat2

High single-qualifier floating-point 2x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 548 of file fwd.hpp.

typedef mat< 2, 2, f32, highp > highp_f32mat2x2

High single-qualifier floating-point 2x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 690 of file fwd.hpp.

typedef mat< 2, 3, f32, highp > highp_f32mat2x3

High single-qualifier floating-point 2x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 691 of file fwd.hpp.

typedef mat< 2, 4, f32, highp > highp_f32mat2x4

High single-qualifier floating-point 2x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 692 of file fwd.hpp.

typedef highp_f32mat3x3 highp_f32mat3

High single-qualifier floating-point 3x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 549 of file fwd.hpp.

typedef mat< 3, 2, f32, highp > highp_f32mat3x2

High single-qualifier floating-point 3x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 693 of file fwd.hpp.

typedef mat< 3, 3, f32, highp > highp_f32mat3x3

High single-qualifier floating-point 3x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 694 of file fwd.hpp.

typedef mat< 3, 4, f32, highp > highp_f32mat3x4

High single-qualifier floating-point 3x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 695 of file fwd.hpp.

typedef highp_f32mat4x4 highp_f32mat4

High single-qualifier floating-point 4x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 550 of file fwd.hpp.

typedef mat< 4, 2, f32, highp > highp_f32mat4x2

High single-qualifier floating-point 4x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 696 of file fwd.hpp.

typedef mat< 4, 3, f32, highp > highp_f32mat4x3

High single-qualifier floating-point 4x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 697 of file fwd.hpp.

typedef mat< 4, 4, f32, highp > highp_f32mat4x4

High single-qualifier floating-point 4x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 698 of file fwd.hpp.

typedef qua< f32, highp > highp_f32quat

High single-qualifier floating-point quaternion.

See also
GLM_GTC_type_precision

Definition at line 804 of file fwd.hpp.

typedef vec< 1, f32, highp > highp_f32vec1

High single-qualifier floating-point vector of 1 component.

See also
GLM_GTC_type_precision

Definition at line 456 of file fwd.hpp.

typedef vec< 2, f32, highp > highp_f32vec2

High single-qualifier floating-point vector of 2 components.

See also
GLM_GTC_type_precision

Definition at line 457 of file fwd.hpp.

typedef vec< 3, f32, highp > highp_f32vec3

High single-qualifier floating-point vector of 3 components.

See also
GLM_GTC_type_precision

Definition at line 458 of file fwd.hpp.

typedef vec< 4, f32, highp > highp_f32vec4

High single-qualifier floating-point vector of 4 components.

See also
GLM_GTC_type_precision

Definition at line 459 of file fwd.hpp.

typedef float64 highp_f64

High 64 bit double-qualifier floating-point scalar.

See also
GLM_GTC_type_precision

Definition at line 165 of file fwd.hpp.

typedef highp_f64mat2x2 highp_f64mat2

High double-qualifier floating-point 2x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 580 of file fwd.hpp.

typedef mat< 2, 2, f64, highp > highp_f64mat2x2

High double-qualifier floating-point 2x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 770 of file fwd.hpp.

typedef mat< 2, 3, f64, highp > highp_f64mat2x3

High double-qualifier floating-point 2x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 771 of file fwd.hpp.

typedef mat< 2, 4, f64, highp > highp_f64mat2x4

High double-qualifier floating-point 2x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 772 of file fwd.hpp.

typedef highp_f64mat3x3 highp_f64mat3

High double-qualifier floating-point 3x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 581 of file fwd.hpp.

typedef mat< 3, 2, f64, highp > highp_f64mat3x2

High double-qualifier floating-point 3x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 773 of file fwd.hpp.

typedef mat< 3, 3, f64, highp > highp_f64mat3x3

High double-qualifier floating-point 3x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 774 of file fwd.hpp.

typedef mat< 3, 4, f64, highp > highp_f64mat3x4

High double-qualifier floating-point 3x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 775 of file fwd.hpp.

typedef highp_f64mat4x4 highp_f64mat4

High double-qualifier floating-point 4x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 582 of file fwd.hpp.

typedef mat< 4, 2, f64, highp > highp_f64mat4x2

High double-qualifier floating-point 4x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 776 of file fwd.hpp.

typedef mat< 4, 3, f64, highp > highp_f64mat4x3

High double-qualifier floating-point 4x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 777 of file fwd.hpp.

typedef mat< 4, 4, f64, highp > highp_f64mat4x4

High double-qualifier floating-point 4x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 778 of file fwd.hpp.

typedef qua< f64, highp > highp_f64quat

High double-qualifier floating-point quaternion.

See also
GLM_GTC_type_precision

Definition at line 814 of file fwd.hpp.

typedef vec< 1, f64, highp > highp_f64vec1

High double-qualifier floating-point vector of 1 component.

See also
GLM_GTC_type_precision

Definition at line 496 of file fwd.hpp.

typedef vec< 2, f64, highp > highp_f64vec2

High double-qualifier floating-point vector of 2 components.

See also
GLM_GTC_type_precision

Definition at line 497 of file fwd.hpp.

typedef vec< 3, f64, highp > highp_f64vec3

High double-qualifier floating-point vector of 3 components.

See also
GLM_GTC_type_precision

Definition at line 498 of file fwd.hpp.

typedef vec< 4, f64, highp > highp_f64vec4

High double-qualifier floating-point vector of 4 components.

See also
GLM_GTC_type_precision

Definition at line 499 of file fwd.hpp.

typedef float32 highp_float32

High 32 bit single-qualifier floating-point scalar.

See also
GLM_GTC_type_precision

Definition at line 154 of file fwd.hpp.

typedef float32 highp_float32_t

High 32 bit single-qualifier floating-point scalar.

See also
GLM_GTC_type_precision

Definition at line 159 of file fwd.hpp.

typedef float64 highp_float64

High 64 bit double-qualifier floating-point scalar.

See also
GLM_GTC_type_precision

Definition at line 170 of file fwd.hpp.

typedef float64 highp_float64_t

High 64 bit double-qualifier floating-point scalar.

See also
GLM_GTC_type_precision

Definition at line 175 of file fwd.hpp.

typedef highp_fmat2x2 highp_fmat2

High single-qualifier floating-point 2x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 532 of file fwd.hpp.

typedef mat< 2, 2, f32, highp > highp_fmat2x2

High single-qualifier floating-point 2x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 650 of file fwd.hpp.

typedef mat< 2, 3, f32, highp > highp_fmat2x3

High single-qualifier floating-point 2x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 651 of file fwd.hpp.

typedef mat< 2, 4, f32, highp > highp_fmat2x4

High single-qualifier floating-point 2x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 652 of file fwd.hpp.

typedef highp_fmat3x3 highp_fmat3

High single-qualifier floating-point 3x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 533 of file fwd.hpp.

typedef mat< 3, 2, f32, highp > highp_fmat3x2

High single-qualifier floating-point 3x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 653 of file fwd.hpp.

typedef mat< 3, 3, f32, highp > highp_fmat3x3

High single-qualifier floating-point 3x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 654 of file fwd.hpp.

typedef mat< 3, 4, f32, highp > highp_fmat3x4

High single-qualifier floating-point 3x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 655 of file fwd.hpp.

typedef highp_fmat4x4 highp_fmat4

High single-qualifier floating-point 4x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 534 of file fwd.hpp.

typedef mat< 4, 2, f32, highp > highp_fmat4x2

High single-qualifier floating-point 4x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 656 of file fwd.hpp.

typedef mat< 4, 3, f32, highp > highp_fmat4x3

High single-qualifier floating-point 4x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 657 of file fwd.hpp.

typedef mat< 4, 4, f32, highp > highp_fmat4x4

High single-qualifier floating-point 4x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 658 of file fwd.hpp.

typedef vec< 1, float, highp > highp_fvec1

High single-qualifier floating-point vector of 1 component.

See also
GLM_GTC_type_precision

Definition at line 436 of file fwd.hpp.

typedef vec< 2, float, highp > highp_fvec2

High single-qualifier floating-point vector of 2 components.

See also
core_precision

Definition at line 437 of file fwd.hpp.

typedef vec< 3, float, highp > highp_fvec3

High single-qualifier floating-point vector of 3 components.

See also
core_precision

Definition at line 438 of file fwd.hpp.

typedef vec< 4, float, highp > highp_fvec4

High single-qualifier floating-point vector of 4 components.

See also
core_precision

Definition at line 439 of file fwd.hpp.

typedef detail::int16 highp_i16

High qualifier 16 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 47 of file fwd.hpp.

typedef vec< 1, i16, highp > highp_i16vec1

High qualifier 16 bit signed integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 252 of file fwd.hpp.

typedef vec< 2, i16, highp > highp_i16vec2

High qualifier 16 bit signed integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 253 of file fwd.hpp.

typedef vec< 3, i16, highp > highp_i16vec3

High qualifier 16 bit signed integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 254 of file fwd.hpp.

typedef vec< 4, i16, highp > highp_i16vec4

High qualifier 16 bit signed integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 255 of file fwd.hpp.

typedef detail::int32 highp_i32

High qualifier 32 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 61 of file fwd.hpp.

typedef vec< 1, i32, highp > highp_i32vec1

High qualifier 32 bit signed integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 272 of file fwd.hpp.

typedef vec< 2, i32, highp > highp_i32vec2

High qualifier 32 bit signed integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 273 of file fwd.hpp.

typedef vec< 3, i32, highp > highp_i32vec3

High qualifier 32 bit signed integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 274 of file fwd.hpp.

typedef vec< 4, i32, highp > highp_i32vec4

High qualifier 32 bit signed integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 275 of file fwd.hpp.

typedef detail::int64 highp_i64

High qualifier 64 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 75 of file fwd.hpp.

typedef vec< 1, i64, highp > highp_i64vec1

High qualifier 64 bit signed integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 292 of file fwd.hpp.

typedef vec< 2, i64, highp > highp_i64vec2

High qualifier 64 bit signed integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 293 of file fwd.hpp.

typedef vec< 3, i64, highp > highp_i64vec3

High qualifier 64 bit signed integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 294 of file fwd.hpp.

typedef vec< 4, i64, highp > highp_i64vec4

High qualifier 64 bit signed integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 295 of file fwd.hpp.

typedef detail::int8 highp_i8

High qualifier 8 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 33 of file fwd.hpp.

typedef vec< 1, i8, highp > highp_i8vec1

High qualifier 8 bit signed integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 232 of file fwd.hpp.

typedef vec< 2, i8, highp > highp_i8vec2

High qualifier 8 bit signed integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 233 of file fwd.hpp.

typedef vec< 3, i8, highp > highp_i8vec3

High qualifier 8 bit signed integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 234 of file fwd.hpp.

typedef vec< 4, i8, highp > highp_i8vec4

High qualifier 8 bit signed integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 235 of file fwd.hpp.

typedef detail::int16 highp_int16

High qualifier 16 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 52 of file fwd.hpp.

typedef detail::int16 highp_int16_t

High qualifier 16 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 56 of file fwd.hpp.

typedef detail::int32 highp_int32

High qualifier 32 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 66 of file fwd.hpp.

typedef detail::int32 highp_int32_t

High qualifier 32 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 70 of file fwd.hpp.

typedef detail::int64 highp_int64

High qualifier 64 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 80 of file fwd.hpp.

typedef detail::int64 highp_int64_t

High qualifier 64 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 84 of file fwd.hpp.

typedef detail::int8 highp_int8

High qualifier 8 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 38 of file fwd.hpp.

typedef detail::int8 highp_int8_t

High qualifier 8 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 42 of file fwd.hpp.

typedef detail::uint16 highp_u16

High qualifier 16 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 105 of file fwd.hpp.

typedef vec< 1, u16, highp > highp_u16vec1

High qualifier 16 bit unsigned integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 354 of file fwd.hpp.

typedef vec< 2, u16, highp > highp_u16vec2

High qualifier 16 bit unsigned integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 355 of file fwd.hpp.

typedef vec< 3, u16, highp > highp_u16vec3

High qualifier 16 bit unsigned integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 356 of file fwd.hpp.

typedef vec< 4, u16, highp > highp_u16vec4

High qualifier 16 bit unsigned integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 357 of file fwd.hpp.

typedef detail::uint32 highp_u32

High qualifier 32 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 119 of file fwd.hpp.

typedef vec< 1, u32, highp > highp_u32vec1

High qualifier 32 bit unsigned integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 374 of file fwd.hpp.

typedef vec< 2, u32, highp > highp_u32vec2

High qualifier 32 bit unsigned integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 375 of file fwd.hpp.

typedef vec< 3, u32, highp > highp_u32vec3

High qualifier 32 bit unsigned integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 376 of file fwd.hpp.

typedef vec< 4, u32, highp > highp_u32vec4

High qualifier 32 bit unsigned integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 377 of file fwd.hpp.

typedef detail::uint64 highp_u64

High qualifier 64 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 133 of file fwd.hpp.

typedef vec< 1, u64, highp > highp_u64vec1

High qualifier 64 bit unsigned integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 394 of file fwd.hpp.

typedef vec< 2, u64, highp > highp_u64vec2

High qualifier 64 bit unsigned integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 395 of file fwd.hpp.

typedef vec< 3, u64, highp > highp_u64vec3

High qualifier 64 bit unsigned integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 396 of file fwd.hpp.

typedef vec< 4, u64, highp > highp_u64vec4

High qualifier 64 bit unsigned integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 397 of file fwd.hpp.

typedef detail::uint8 highp_u8

High qualifier 8 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 91 of file fwd.hpp.

typedef vec< 1, u8, highp > highp_u8vec1

High qualifier 8 bit unsigned integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 334 of file fwd.hpp.

typedef vec< 2, u8, highp > highp_u8vec2

High qualifier 8 bit unsigned integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 335 of file fwd.hpp.

typedef vec< 3, u8, highp > highp_u8vec3

High qualifier 8 bit unsigned integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 336 of file fwd.hpp.

typedef vec< 4, u8, highp > highp_u8vec4

High qualifier 8 bit unsigned integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 337 of file fwd.hpp.

typedef detail::uint16 highp_uint16

High qualifier 16 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 110 of file fwd.hpp.

typedef detail::uint16 highp_uint16_t

High qualifier 16 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 114 of file fwd.hpp.

typedef detail::uint32 highp_uint32

High qualifier 32 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 124 of file fwd.hpp.

typedef detail::uint32 highp_uint32_t

High qualifier 32 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 128 of file fwd.hpp.

typedef detail::uint64 highp_uint64

High qualifier 64 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 138 of file fwd.hpp.

typedef detail::uint64 highp_uint64_t

High qualifier 64 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 142 of file fwd.hpp.

typedef detail::uint8 highp_uint8

High qualifier 8 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 96 of file fwd.hpp.

typedef detail::uint8 highp_uint8_t

High qualifier 8 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 100 of file fwd.hpp.

typedef detail::int16 i16

16 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 48 of file fwd.hpp.

typedef vec< 1, i16, defaultp > i16vec1

16 bit signed integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 257 of file fwd.hpp.

typedef vec< 2, i16, defaultp > i16vec2

16 bit signed integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 258 of file fwd.hpp.

typedef vec< 3, i16, defaultp > i16vec3

16 bit signed integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 259 of file fwd.hpp.

typedef vec< 4, i16, defaultp > i16vec4

16 bit signed integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 260 of file fwd.hpp.

typedef detail::int32 i32

32 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 62 of file fwd.hpp.

typedef vec< 1, i32, defaultp > i32vec1

32 bit signed integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 277 of file fwd.hpp.

typedef vec< 2, i32, defaultp > i32vec2

32 bit signed integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 278 of file fwd.hpp.

typedef vec< 3, i32, defaultp > i32vec3

32 bit signed integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 279 of file fwd.hpp.

typedef vec< 4, i32, defaultp > i32vec4

32 bit signed integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 280 of file fwd.hpp.

typedef detail::int64 i64

64 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 76 of file fwd.hpp.

typedef vec< 1, i64, defaultp > i64vec1

64 bit signed integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 297 of file fwd.hpp.

typedef vec< 2, i64, defaultp > i64vec2

64 bit signed integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 298 of file fwd.hpp.

typedef vec< 3, i64, defaultp > i64vec3

64 bit signed integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 299 of file fwd.hpp.

typedef vec< 4, i64, defaultp > i64vec4

64 bit signed integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 300 of file fwd.hpp.

typedef detail::int8 i8

8 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 34 of file fwd.hpp.

typedef vec< 1, i8, defaultp > i8vec1

8 bit signed integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 237 of file fwd.hpp.

typedef vec< 2, i8, defaultp > i8vec2

8 bit signed integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 238 of file fwd.hpp.

typedef vec< 3, i8, defaultp > i8vec3

8 bit signed integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 239 of file fwd.hpp.

typedef vec< 4, i8, defaultp > i8vec4

8 bit signed integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 240 of file fwd.hpp.

typedef detail::int16 int16_t

16 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 57 of file fwd.hpp.

typedef detail::int32 int32_t

32 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 71 of file fwd.hpp.

typedef detail::int64 int64_t

64 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 85 of file fwd.hpp.

typedef detail::int8 int8_t

8 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 43 of file fwd.hpp.

typedef float32 lowp_f32

Low 32 bit single-qualifier floating-point scalar.

See also
GLM_GTC_type_precision

Definition at line 147 of file fwd.hpp.

typedef lowp_f32mat2x2 lowp_f32mat2

Low single-qualifier floating-point 2x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 540 of file fwd.hpp.

typedef mat< 2, 2, f32, lowp > lowp_f32mat2x2

Low single-qualifier floating-point 2x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 670 of file fwd.hpp.

typedef mat< 2, 3, f32, lowp > lowp_f32mat2x3

Low single-qualifier floating-point 2x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 671 of file fwd.hpp.

typedef mat< 2, 4, f32, lowp > lowp_f32mat2x4

Low single-qualifier floating-point 2x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 672 of file fwd.hpp.

typedef lowp_f32mat3x3 lowp_f32mat3

Low single-qualifier floating-point 3x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 541 of file fwd.hpp.

typedef mat< 3, 2, f32, lowp > lowp_f32mat3x2

Low single-qualifier floating-point 3x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 673 of file fwd.hpp.

typedef mat< 3, 3, f32, lowp > lowp_f32mat3x3

Low single-qualifier floating-point 3x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 674 of file fwd.hpp.

typedef mat< 3, 4, f32, lowp > lowp_f32mat3x4

Low single-qualifier floating-point 3x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 675 of file fwd.hpp.

typedef lowp_f32mat4x4 lowp_f32mat4

Low single-qualifier floating-point 4x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 542 of file fwd.hpp.

typedef mat< 4, 2, f32, lowp > lowp_f32mat4x2

Low single-qualifier floating-point 4x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 676 of file fwd.hpp.

typedef mat< 4, 3, f32, lowp > lowp_f32mat4x3

Low single-qualifier floating-point 4x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 677 of file fwd.hpp.

typedef mat< 4, 4, f32, lowp > lowp_f32mat4x4

Low single-qualifier floating-point 4x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 678 of file fwd.hpp.

typedef qua< f32, lowp > lowp_f32quat

Low single-qualifier floating-point quaternion.

See also
GLM_GTC_type_precision

Definition at line 802 of file fwd.hpp.

typedef vec< 1, f32, lowp > lowp_f32vec1

Low single-qualifier floating-point vector of 1 component.

See also
GLM_GTC_type_precision

Definition at line 446 of file fwd.hpp.

typedef vec< 2, f32, lowp > lowp_f32vec2

Low single-qualifier floating-point vector of 2 components.

See also
core_precision

Definition at line 447 of file fwd.hpp.

typedef vec< 3, f32, lowp > lowp_f32vec3

Low single-qualifier floating-point vector of 3 components.

See also
core_precision

Definition at line 448 of file fwd.hpp.

typedef vec< 4, f32, lowp > lowp_f32vec4

Low single-qualifier floating-point vector of 4 components.

See also
core_precision

Definition at line 449 of file fwd.hpp.

typedef float64 lowp_f64

Low 64 bit double-qualifier floating-point scalar.

See also
GLM_GTC_type_precision

Definition at line 163 of file fwd.hpp.

typedef lowp_f64mat2x2 lowp_f64mat2

Low double-qualifier floating-point 2x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 572 of file fwd.hpp.

typedef mat< 2, 2, f64, lowp > lowp_f64mat2x2

Low double-qualifier floating-point 2x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 750 of file fwd.hpp.

typedef mat< 2, 3, f64, lowp > lowp_f64mat2x3

Low double-qualifier floating-point 2x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 751 of file fwd.hpp.

typedef mat< 2, 4, f64, lowp > lowp_f64mat2x4

Low double-qualifier floating-point 2x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 752 of file fwd.hpp.

typedef lowp_f64mat3x3 lowp_f64mat3

Low double-qualifier floating-point 3x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 573 of file fwd.hpp.

typedef mat< 3, 2, f64, lowp > lowp_f64mat3x2

Low double-qualifier floating-point 3x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 753 of file fwd.hpp.

typedef mat< 3, 3, f64, lowp > lowp_f64mat3x3

Low double-qualifier floating-point 3x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 754 of file fwd.hpp.

typedef mat< 3, 4, f64, lowp > lowp_f64mat3x4

Low double-qualifier floating-point 3x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 755 of file fwd.hpp.

typedef lowp_f64mat4x4 lowp_f64mat4

Low double-qualifier floating-point 4x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 574 of file fwd.hpp.

typedef mat< 4, 2, f64, lowp > lowp_f64mat4x2

Low double-qualifier floating-point 4x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 756 of file fwd.hpp.

typedef mat< 4, 3, f64, lowp > lowp_f64mat4x3

Low double-qualifier floating-point 4x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 757 of file fwd.hpp.

typedef mat< 4, 4, f64, lowp > lowp_f64mat4x4

Low double-qualifier floating-point 4x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 758 of file fwd.hpp.

typedef qua< f64, lowp > lowp_f64quat

Low double-qualifier floating-point quaternion.

See also
GLM_GTC_type_precision

Definition at line 812 of file fwd.hpp.

typedef vec< 1, f64, lowp > lowp_f64vec1

Low double-qualifier floating-point vector of 1 component.

See also
GLM_GTC_type_precision

Definition at line 486 of file fwd.hpp.

typedef vec< 2, f64, lowp > lowp_f64vec2

Low double-qualifier floating-point vector of 2 components.

See also
GLM_GTC_type_precision

Definition at line 487 of file fwd.hpp.

typedef vec< 3, f64, lowp > lowp_f64vec3

Low double-qualifier floating-point vector of 3 components.

See also
GLM_GTC_type_precision

Definition at line 488 of file fwd.hpp.

typedef vec< 4, f64, lowp > lowp_f64vec4

Low double-qualifier floating-point vector of 4 components.

See also
GLM_GTC_type_precision

Definition at line 489 of file fwd.hpp.

typedef float32 lowp_float32

Low 32 bit single-qualifier floating-point scalar.

See also
GLM_GTC_type_precision

Definition at line 152 of file fwd.hpp.

typedef float32 lowp_float32_t

Low 32 bit single-qualifier floating-point scalar.

See also
GLM_GTC_type_precision

Definition at line 157 of file fwd.hpp.

typedef float64 lowp_float64

Low 64 bit double-qualifier floating-point scalar.

See also
GLM_GTC_type_precision

Definition at line 168 of file fwd.hpp.

typedef float64 lowp_float64_t

Low 64 bit double-qualifier floating-point scalar.

See also
GLM_GTC_type_precision

Definition at line 173 of file fwd.hpp.

typedef lowp_fmat2x2 lowp_fmat2

Low single-qualifier floating-point 2x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 524 of file fwd.hpp.

typedef mat< 2, 2, f32, lowp > lowp_fmat2x2

Low single-qualifier floating-point 2x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 630 of file fwd.hpp.

typedef mat< 2, 3, f32, lowp > lowp_fmat2x3

Low single-qualifier floating-point 2x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 631 of file fwd.hpp.

typedef mat< 2, 4, f32, lowp > lowp_fmat2x4

Low single-qualifier floating-point 2x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 632 of file fwd.hpp.

typedef lowp_fmat3x3 lowp_fmat3

Low single-qualifier floating-point 3x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 525 of file fwd.hpp.

typedef mat< 3, 2, f32, lowp > lowp_fmat3x2

Low single-qualifier floating-point 3x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 633 of file fwd.hpp.

typedef mat< 3, 3, f32, lowp > lowp_fmat3x3

Low single-qualifier floating-point 3x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 634 of file fwd.hpp.

typedef mat< 3, 4, f32, lowp > lowp_fmat3x4

Low single-qualifier floating-point 3x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 635 of file fwd.hpp.

typedef lowp_fmat4x4 lowp_fmat4

Low single-qualifier floating-point 4x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 526 of file fwd.hpp.

typedef mat< 4, 2, f32, lowp > lowp_fmat4x2

Low single-qualifier floating-point 4x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 636 of file fwd.hpp.

typedef mat< 4, 3, f32, lowp > lowp_fmat4x3

Low single-qualifier floating-point 4x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 637 of file fwd.hpp.

typedef mat< 4, 4, f32, lowp > lowp_fmat4x4

Low single-qualifier floating-point 4x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 638 of file fwd.hpp.

typedef vec< 1, float, lowp > lowp_fvec1

Low single-qualifier floating-point vector of 1 component.

See also
GLM_GTC_type_precision

Definition at line 426 of file fwd.hpp.

typedef vec< 2, float, lowp > lowp_fvec2

Low single-qualifier floating-point vector of 2 components.

See also
GLM_GTC_type_precision

Definition at line 427 of file fwd.hpp.

typedef vec< 3, float, lowp > lowp_fvec3

Low single-qualifier floating-point vector of 3 components.

See also
GLM_GTC_type_precision

Definition at line 428 of file fwd.hpp.

typedef vec< 4, float, lowp > lowp_fvec4

Low single-qualifier floating-point vector of 4 components.

See also
GLM_GTC_type_precision

Definition at line 429 of file fwd.hpp.

typedef detail::int16 lowp_i16

Low qualifier 16 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 45 of file fwd.hpp.

typedef vec< 1, i16, lowp > lowp_i16vec1

Low qualifier 16 bit signed integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 242 of file fwd.hpp.

typedef vec< 2, i16, lowp > lowp_i16vec2

Low qualifier 16 bit signed integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 243 of file fwd.hpp.

typedef vec< 3, i16, lowp > lowp_i16vec3

Low qualifier 16 bit signed integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 244 of file fwd.hpp.

typedef vec< 4, i16, lowp > lowp_i16vec4

Low qualifier 16 bit signed integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 245 of file fwd.hpp.

typedef detail::int32 lowp_i32

Low qualifier 32 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 59 of file fwd.hpp.

typedef vec< 1, i32, lowp > lowp_i32vec1

Low qualifier 32 bit signed integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 262 of file fwd.hpp.

typedef vec< 2, i32, lowp > lowp_i32vec2

Low qualifier 32 bit signed integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 263 of file fwd.hpp.

typedef vec< 3, i32, lowp > lowp_i32vec3

Low qualifier 32 bit signed integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 264 of file fwd.hpp.

typedef vec< 4, i32, lowp > lowp_i32vec4

Low qualifier 32 bit signed integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 265 of file fwd.hpp.

typedef detail::int64 lowp_i64

Low qualifier 64 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 73 of file fwd.hpp.

typedef vec< 1, i64, lowp > lowp_i64vec1

Low qualifier 64 bit signed integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 282 of file fwd.hpp.

typedef vec< 2, i64, lowp > lowp_i64vec2

Low qualifier 64 bit signed integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 283 of file fwd.hpp.

typedef vec< 3, i64, lowp > lowp_i64vec3

Low qualifier 64 bit signed integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 284 of file fwd.hpp.

typedef vec< 4, i64, lowp > lowp_i64vec4

Low qualifier 64 bit signed integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 285 of file fwd.hpp.

typedef detail::int8 lowp_i8

Low qualifier 8 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 31 of file fwd.hpp.

typedef vec< 1, i8, lowp > lowp_i8vec1

Low qualifier 8 bit signed integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 222 of file fwd.hpp.

typedef vec< 2, i8, lowp > lowp_i8vec2

Low qualifier 8 bit signed integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 223 of file fwd.hpp.

typedef vec< 3, i8, lowp > lowp_i8vec3

Low qualifier 8 bit signed integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 224 of file fwd.hpp.

typedef vec< 4, i8, lowp > lowp_i8vec4

Low qualifier 8 bit signed integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 225 of file fwd.hpp.

typedef detail::int16 lowp_int16

Low qualifier 16 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 50 of file fwd.hpp.

typedef detail::int16 lowp_int16_t

Low qualifier 16 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 54 of file fwd.hpp.

typedef detail::int32 lowp_int32

Low qualifier 32 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 64 of file fwd.hpp.

typedef detail::int32 lowp_int32_t

Low qualifier 32 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 68 of file fwd.hpp.

typedef detail::int64 lowp_int64

Low qualifier 64 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 78 of file fwd.hpp.

typedef detail::int64 lowp_int64_t

Low qualifier 64 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 82 of file fwd.hpp.

typedef detail::int8 lowp_int8

Low qualifier 8 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 36 of file fwd.hpp.

typedef detail::int8 lowp_int8_t

Low qualifier 8 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 40 of file fwd.hpp.

typedef detail::uint16 lowp_u16

Low qualifier 16 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 103 of file fwd.hpp.

typedef vec< 1, u16, lowp > lowp_u16vec1

Low qualifier 16 bit unsigned integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 344 of file fwd.hpp.

typedef vec< 2, u16, lowp > lowp_u16vec2

Low qualifier 16 bit unsigned integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 345 of file fwd.hpp.

typedef vec< 3, u16, lowp > lowp_u16vec3

Low qualifier 16 bit unsigned integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 346 of file fwd.hpp.

typedef vec< 4, u16, lowp > lowp_u16vec4

Low qualifier 16 bit unsigned integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 347 of file fwd.hpp.

typedef detail::uint32 lowp_u32

Low qualifier 32 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 117 of file fwd.hpp.

typedef vec< 1, u32, lowp > lowp_u32vec1

Low qualifier 32 bit unsigned integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 364 of file fwd.hpp.

typedef vec< 2, u32, lowp > lowp_u32vec2

Low qualifier 32 bit unsigned integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 365 of file fwd.hpp.

typedef vec< 3, u32, lowp > lowp_u32vec3

Low qualifier 32 bit unsigned integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 366 of file fwd.hpp.

typedef vec< 4, u32, lowp > lowp_u32vec4

Low qualifier 32 bit unsigned integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 367 of file fwd.hpp.

typedef detail::uint64 lowp_u64

Low qualifier 64 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 131 of file fwd.hpp.

typedef vec< 1, u64, lowp > lowp_u64vec1

Low qualifier 64 bit unsigned integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 384 of file fwd.hpp.

typedef vec< 2, u64, lowp > lowp_u64vec2

Low qualifier 64 bit unsigned integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 385 of file fwd.hpp.

typedef vec< 3, u64, lowp > lowp_u64vec3

Low qualifier 64 bit unsigned integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 386 of file fwd.hpp.

typedef vec< 4, u64, lowp > lowp_u64vec4

Low qualifier 64 bit unsigned integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 387 of file fwd.hpp.

typedef detail::uint8 lowp_u8

Low qualifier 8 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 89 of file fwd.hpp.

typedef vec< 1, u8, lowp > lowp_u8vec1

Low qualifier 8 bit unsigned integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 324 of file fwd.hpp.

typedef vec< 2, u8, lowp > lowp_u8vec2

Low qualifier 8 bit unsigned integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 325 of file fwd.hpp.

typedef vec< 3, u8, lowp > lowp_u8vec3

Low qualifier 8 bit unsigned integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 326 of file fwd.hpp.

typedef vec< 4, u8, lowp > lowp_u8vec4

Low qualifier 8 bit unsigned integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 327 of file fwd.hpp.

typedef detail::uint16 lowp_uint16

Low qualifier 16 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 108 of file fwd.hpp.

typedef detail::uint16 lowp_uint16_t

Low qualifier 16 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 112 of file fwd.hpp.

typedef detail::uint32 lowp_uint32

Low qualifier 32 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 122 of file fwd.hpp.

typedef detail::uint32 lowp_uint32_t

Low qualifier 32 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 126 of file fwd.hpp.

typedef detail::uint64 lowp_uint64

Low qualifier 64 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 136 of file fwd.hpp.

typedef detail::uint64 lowp_uint64_t

Low qualifier 64 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 140 of file fwd.hpp.

typedef detail::uint8 lowp_uint8

Low qualifier 8 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 94 of file fwd.hpp.

typedef detail::uint8 lowp_uint8_t

Low qualifier 8 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 98 of file fwd.hpp.

typedef float32 mediump_f32

Medium 32 bit single-qualifier floating-point scalar.

See also
GLM_GTC_type_precision

Definition at line 148 of file fwd.hpp.

typedef mediump_f32mat2x2 mediump_f32mat2

Medium single-qualifier floating-point 2x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 544 of file fwd.hpp.

typedef mat< 2, 2, f32, mediump > mediump_f32mat2x2

Medium single-qualifier floating-point 2x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 680 of file fwd.hpp.

typedef mat< 2, 3, f32, mediump > mediump_f32mat2x3

Medium single-qualifier floating-point 2x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 681 of file fwd.hpp.

typedef mat< 2, 4, f32, mediump > mediump_f32mat2x4

Medium single-qualifier floating-point 2x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 682 of file fwd.hpp.

typedef mediump_f32mat3x3 mediump_f32mat3

Medium single-qualifier floating-point 3x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 545 of file fwd.hpp.

typedef mat< 3, 2, f32, mediump > mediump_f32mat3x2

Medium single-qualifier floating-point 3x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 683 of file fwd.hpp.

typedef mat< 3, 3, f32, mediump > mediump_f32mat3x3

Medium single-qualifier floating-point 3x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 684 of file fwd.hpp.

typedef mat< 3, 4, f32, mediump > mediump_f32mat3x4

Medium single-qualifier floating-point 3x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 685 of file fwd.hpp.

typedef mediump_f32mat4x4 mediump_f32mat4

Medium single-qualifier floating-point 4x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 546 of file fwd.hpp.

typedef mat< 4, 2, f32, mediump > mediump_f32mat4x2

Medium single-qualifier floating-point 4x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 686 of file fwd.hpp.

typedef mat< 4, 3, f32, mediump > mediump_f32mat4x3

Medium single-qualifier floating-point 4x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 687 of file fwd.hpp.

typedef mat< 4, 4, f32, mediump > mediump_f32mat4x4

Medium single-qualifier floating-point 4x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 688 of file fwd.hpp.

typedef qua< f32, mediump > mediump_f32quat

Medium single-qualifier floating-point quaternion.

See also
GLM_GTC_type_precision

Definition at line 803 of file fwd.hpp.

typedef vec< 1, f32, mediump > mediump_f32vec1

Medium single-qualifier floating-point vector of 1 component.

See also
GLM_GTC_type_precision

Definition at line 451 of file fwd.hpp.

typedef vec< 2, f32, mediump > mediump_f32vec2

Medium single-qualifier floating-point vector of 2 components.

See also
GLM_GTC_type_precision

Definition at line 452 of file fwd.hpp.

typedef vec< 3, f32, mediump > mediump_f32vec3

Medium single-qualifier floating-point vector of 3 components.

See also
GLM_GTC_type_precision

Definition at line 453 of file fwd.hpp.

typedef vec< 4, f32, mediump > mediump_f32vec4

Medium single-qualifier floating-point vector of 4 components.

See also
GLM_GTC_type_precision

Definition at line 454 of file fwd.hpp.

typedef float64 mediump_f64

Medium 64 bit double-qualifier floating-point scalar.

See also
GLM_GTC_type_precision

Definition at line 164 of file fwd.hpp.

typedef mediump_f64mat2x2 mediump_f64mat2

Medium double-qualifier floating-point 2x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 576 of file fwd.hpp.

typedef mat< 2, 2, f64, mediump > mediump_f64mat2x2

Medium double-qualifier floating-point 2x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 760 of file fwd.hpp.

typedef mat< 2, 3, f64, mediump > mediump_f64mat2x3

Medium double-qualifier floating-point 2x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 761 of file fwd.hpp.

typedef mat< 2, 4, f64, mediump > mediump_f64mat2x4

Medium double-qualifier floating-point 2x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 762 of file fwd.hpp.

typedef mediump_f64mat3x3 mediump_f64mat3

Medium double-qualifier floating-point 3x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 577 of file fwd.hpp.

typedef mat< 3, 2, f64, mediump > mediump_f64mat3x2

Medium double-qualifier floating-point 3x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 763 of file fwd.hpp.

typedef mat< 3, 3, f64, mediump > mediump_f64mat3x3

Medium double-qualifier floating-point 3x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 764 of file fwd.hpp.

typedef mat< 3, 4, f64, mediump > mediump_f64mat3x4

Medium double-qualifier floating-point 3x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 765 of file fwd.hpp.

typedef mediump_f64mat4x4 mediump_f64mat4

Medium double-qualifier floating-point 4x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 578 of file fwd.hpp.

typedef mat< 4, 2, f64, mediump > mediump_f64mat4x2

Medium double-qualifier floating-point 4x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 766 of file fwd.hpp.

typedef mat< 4, 3, f64, mediump > mediump_f64mat4x3

Medium double-qualifier floating-point 4x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 767 of file fwd.hpp.

typedef mat< 4, 4, f64, mediump > mediump_f64mat4x4

Medium double-qualifier floating-point 4x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 768 of file fwd.hpp.

typedef qua< f64, mediump > mediump_f64quat

Medium double-qualifier floating-point quaternion.

See also
GLM_GTC_type_precision

Definition at line 813 of file fwd.hpp.

typedef vec< 1, f64, mediump > mediump_f64vec1

Medium double-qualifier floating-point vector of 1 component.

See also
GLM_GTC_type_precision

Definition at line 491 of file fwd.hpp.

typedef vec< 2, f64, mediump > mediump_f64vec2

Medium double-qualifier floating-point vector of 2 components.

See also
GLM_GTC_type_precision

Definition at line 492 of file fwd.hpp.

typedef vec< 3, f64, mediump > mediump_f64vec3

Medium double-qualifier floating-point vector of 3 components.

See also
GLM_GTC_type_precision

Definition at line 493 of file fwd.hpp.

typedef vec< 4, f64, mediump > mediump_f64vec4

Medium double-qualifier floating-point vector of 4 components.

See also
GLM_GTC_type_precision

Definition at line 494 of file fwd.hpp.

typedef float32 mediump_float32

Medium 32 bit single-qualifier floating-point scalar.

See also
GLM_GTC_type_precision

Definition at line 153 of file fwd.hpp.

typedef float32 mediump_float32_t

Medium 32 bit single-qualifier floating-point scalar.

See also
GLM_GTC_type_precision

Definition at line 158 of file fwd.hpp.

typedef float64 mediump_float64

Medium 64 bit double-qualifier floating-point scalar.

See also
GLM_GTC_type_precision

Definition at line 169 of file fwd.hpp.

typedef float64 mediump_float64_t

Medium 64 bit double-qualifier floating-point scalar.

See also
GLM_GTC_type_precision

Definition at line 174 of file fwd.hpp.

typedef mediump_fmat2x2 mediump_fmat2

Medium single-qualifier floating-point 2x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 528 of file fwd.hpp.

typedef mat< 2, 2, f32, mediump > mediump_fmat2x2

Medium single-qualifier floating-point 2x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 640 of file fwd.hpp.

typedef mat< 2, 3, f32, mediump > mediump_fmat2x3

Medium single-qualifier floating-point 2x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 641 of file fwd.hpp.

typedef mat< 2, 4, f32, mediump > mediump_fmat2x4

Medium single-qualifier floating-point 2x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 642 of file fwd.hpp.

typedef mediump_fmat3x3 mediump_fmat3

Medium single-qualifier floating-point 3x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 529 of file fwd.hpp.

typedef mat< 3, 2, f32, mediump > mediump_fmat3x2

Medium single-qualifier floating-point 3x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 643 of file fwd.hpp.

typedef mat< 3, 3, f32, mediump > mediump_fmat3x3

Medium single-qualifier floating-point 3x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 644 of file fwd.hpp.

typedef mat< 3, 4, f32, mediump > mediump_fmat3x4

Medium single-qualifier floating-point 3x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 645 of file fwd.hpp.

typedef mediump_fmat4x4 mediump_fmat4

Medium single-qualifier floating-point 4x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 530 of file fwd.hpp.

typedef mat< 4, 2, f32, mediump > mediump_fmat4x2

Medium single-qualifier floating-point 4x2 matrix.

See also
GLM_GTC_type_precision

Definition at line 646 of file fwd.hpp.

typedef mat< 4, 3, f32, mediump > mediump_fmat4x3

Medium single-qualifier floating-point 4x3 matrix.

See also
GLM_GTC_type_precision

Definition at line 647 of file fwd.hpp.

typedef mat< 4, 4, f32, mediump > mediump_fmat4x4

Medium single-qualifier floating-point 4x4 matrix.

See also
GLM_GTC_type_precision

Definition at line 648 of file fwd.hpp.

typedef vec< 1, float, mediump > mediump_fvec1

Medium single-qualifier floating-point vector of 1 component.

See also
GLM_GTC_type_precision

Definition at line 431 of file fwd.hpp.

typedef vec< 2, float, mediump > mediump_fvec2

Medium single-qualifier floating-point vector of 2 components.

See also
GLM_GTC_type_precision

Definition at line 432 of file fwd.hpp.

typedef vec< 3, float, mediump > mediump_fvec3

Medium single-qualifier floating-point vector of 3 components.

See also
GLM_GTC_type_precision

Definition at line 433 of file fwd.hpp.

typedef vec< 4, float, mediump > mediump_fvec4

Medium single-qualifier floating-point vector of 4 components.

See also
GLM_GTC_type_precision

Definition at line 434 of file fwd.hpp.

typedef detail::int16 mediump_i16

Medium qualifier 16 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 46 of file fwd.hpp.

typedef vec< 1, i16, mediump > mediump_i16vec1

Medium qualifier 16 bit signed integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 247 of file fwd.hpp.

typedef vec< 2, i16, mediump > mediump_i16vec2

Medium qualifier 16 bit signed integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 248 of file fwd.hpp.

typedef vec< 3, i16, mediump > mediump_i16vec3

Medium qualifier 16 bit signed integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 249 of file fwd.hpp.

typedef vec< 4, i16, mediump > mediump_i16vec4

Medium qualifier 16 bit signed integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 250 of file fwd.hpp.

typedef detail::int32 mediump_i32

Medium qualifier 32 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 60 of file fwd.hpp.

typedef vec< 1, i32, mediump > mediump_i32vec1

Medium qualifier 32 bit signed integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 267 of file fwd.hpp.

typedef vec< 2, i32, mediump > mediump_i32vec2

Medium qualifier 32 bit signed integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 268 of file fwd.hpp.

typedef vec< 3, i32, mediump > mediump_i32vec3

Medium qualifier 32 bit signed integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 269 of file fwd.hpp.

typedef vec< 4, i32, mediump > mediump_i32vec4

Medium qualifier 32 bit signed integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 270 of file fwd.hpp.

typedef detail::int64 mediump_i64

Medium qualifier 64 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 74 of file fwd.hpp.

typedef vec< 1, i64, mediump > mediump_i64vec1

Medium qualifier 64 bit signed integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 287 of file fwd.hpp.

typedef vec< 2, i64, mediump > mediump_i64vec2

Medium qualifier 64 bit signed integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 288 of file fwd.hpp.

typedef vec< 3, i64, mediump > mediump_i64vec3

Medium qualifier 64 bit signed integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 289 of file fwd.hpp.

typedef vec< 4, i64, mediump > mediump_i64vec4

Medium qualifier 64 bit signed integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 290 of file fwd.hpp.

typedef detail::int8 mediump_i8

Medium qualifier 8 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 32 of file fwd.hpp.

typedef vec< 1, i8, mediump > mediump_i8vec1

Medium qualifier 8 bit signed integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 227 of file fwd.hpp.

typedef vec< 2, i8, mediump > mediump_i8vec2

Medium qualifier 8 bit signed integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 228 of file fwd.hpp.

typedef vec< 3, i8, mediump > mediump_i8vec3

Medium qualifier 8 bit signed integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 229 of file fwd.hpp.

typedef vec< 4, i8, mediump > mediump_i8vec4

Medium qualifier 8 bit signed integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 230 of file fwd.hpp.

typedef detail::int16 mediump_int16

Medium qualifier 16 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 51 of file fwd.hpp.

typedef detail::int16 mediump_int16_t

Medium qualifier 16 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 55 of file fwd.hpp.

typedef detail::int32 mediump_int32

Medium qualifier 32 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 65 of file fwd.hpp.

typedef detail::int32 mediump_int32_t

Medium qualifier 32 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 69 of file fwd.hpp.

typedef detail::int64 mediump_int64

Medium qualifier 64 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 79 of file fwd.hpp.

typedef detail::int64 mediump_int64_t

Medium qualifier 64 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 83 of file fwd.hpp.

typedef detail::int8 mediump_int8

Medium qualifier 8 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 37 of file fwd.hpp.

typedef detail::int8 mediump_int8_t

Medium qualifier 8 bit signed integer type.

See also
GLM_GTC_type_precision

Definition at line 41 of file fwd.hpp.

typedef detail::uint16 mediump_u16

Medium qualifier 16 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 104 of file fwd.hpp.

typedef vec< 1, u16, mediump > mediump_u16vec1

Medium qualifier 16 bit unsigned integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 349 of file fwd.hpp.

typedef vec< 2, u16, mediump > mediump_u16vec2

Medium qualifier 16 bit unsigned integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 350 of file fwd.hpp.

typedef vec< 3, u16, mediump > mediump_u16vec3

Medium qualifier 16 bit unsigned integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 351 of file fwd.hpp.

typedef vec< 4, u16, mediump > mediump_u16vec4

Medium qualifier 16 bit unsigned integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 352 of file fwd.hpp.

typedef detail::uint32 mediump_u32

Medium qualifier 32 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 118 of file fwd.hpp.

typedef vec< 1, u32, mediump > mediump_u32vec1

Medium qualifier 32 bit unsigned integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 369 of file fwd.hpp.

typedef vec< 2, u32, mediump > mediump_u32vec2

Medium qualifier 32 bit unsigned integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 370 of file fwd.hpp.

typedef vec< 3, u32, mediump > mediump_u32vec3

Medium qualifier 32 bit unsigned integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 371 of file fwd.hpp.

typedef vec< 4, u32, mediump > mediump_u32vec4

Medium qualifier 32 bit unsigned integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 372 of file fwd.hpp.

typedef detail::uint64 mediump_u64

Medium qualifier 64 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 132 of file fwd.hpp.

typedef vec< 1, u64, mediump > mediump_u64vec1

Medium qualifier 64 bit unsigned integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 389 of file fwd.hpp.

typedef vec< 2, u64, mediump > mediump_u64vec2

Medium qualifier 64 bit unsigned integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 390 of file fwd.hpp.

typedef vec< 3, u64, mediump > mediump_u64vec3

Medium qualifier 64 bit unsigned integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 391 of file fwd.hpp.

typedef vec< 4, u64, mediump > mediump_u64vec4

Medium qualifier 64 bit unsigned integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 392 of file fwd.hpp.

typedef detail::uint8 mediump_u8

Medium qualifier 8 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 90 of file fwd.hpp.

typedef vec< 1, u8, mediump > mediump_u8vec1

Medium qualifier 8 bit unsigned integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 329 of file fwd.hpp.

typedef vec< 2, u8, mediump > mediump_u8vec2

Medium qualifier 8 bit unsigned integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 330 of file fwd.hpp.

typedef vec< 3, u8, mediump > mediump_u8vec3

Medium qualifier 8 bit unsigned integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 331 of file fwd.hpp.

typedef vec< 4, u8, mediump > mediump_u8vec4

Medium qualifier 8 bit unsigned integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 332 of file fwd.hpp.

typedef detail::uint16 mediump_uint16

Medium qualifier 16 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 109 of file fwd.hpp.

typedef detail::uint16 mediump_uint16_t

Medium qualifier 16 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 113 of file fwd.hpp.

typedef detail::uint32 mediump_uint32

Medium qualifier 32 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 123 of file fwd.hpp.

typedef detail::uint32 mediump_uint32_t

Medium qualifier 32 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 127 of file fwd.hpp.

typedef detail::uint64 mediump_uint64

Medium qualifier 64 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 137 of file fwd.hpp.

typedef detail::uint64 mediump_uint64_t

Medium qualifier 64 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 141 of file fwd.hpp.

typedef detail::uint8 mediump_uint8

Medium qualifier 8 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 95 of file fwd.hpp.

typedef detail::uint8 mediump_uint8_t

Medium qualifier 8 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 99 of file fwd.hpp.

typedef detail::uint16 u16

Default qualifier 16 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 106 of file fwd.hpp.

typedef vec< 1, u16, defaultp > u16vec1

Default qualifier 16 bit unsigned integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 359 of file fwd.hpp.

typedef vec< 2, u16, defaultp > u16vec2

Default qualifier 16 bit unsigned integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 360 of file fwd.hpp.

typedef vec< 3, u16, defaultp > u16vec3

Default qualifier 16 bit unsigned integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 361 of file fwd.hpp.

typedef vec< 4, u16, defaultp > u16vec4

Default qualifier 16 bit unsigned integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 362 of file fwd.hpp.

typedef detail::uint32 u32

Default qualifier 32 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 120 of file fwd.hpp.

typedef vec< 1, u32, defaultp > u32vec1

Default qualifier 32 bit unsigned integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 379 of file fwd.hpp.

typedef vec< 2, u32, defaultp > u32vec2

Default qualifier 32 bit unsigned integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 380 of file fwd.hpp.

typedef vec< 3, u32, defaultp > u32vec3

Default qualifier 32 bit unsigned integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 381 of file fwd.hpp.

typedef vec< 4, u32, defaultp > u32vec4

Default qualifier 32 bit unsigned integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 382 of file fwd.hpp.

typedef detail::uint64 u64

Default qualifier 64 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 134 of file fwd.hpp.

typedef vec< 1, u64, defaultp > u64vec1

Default qualifier 64 bit unsigned integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 399 of file fwd.hpp.

typedef vec< 2, u64, defaultp > u64vec2

Default qualifier 64 bit unsigned integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 400 of file fwd.hpp.

typedef vec< 3, u64, defaultp > u64vec3

Default qualifier 64 bit unsigned integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 401 of file fwd.hpp.

typedef vec< 4, u64, defaultp > u64vec4

Default qualifier 64 bit unsigned integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 402 of file fwd.hpp.

typedef detail::uint8 u8

Default qualifier 8 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 92 of file fwd.hpp.

typedef vec< 1, u8, defaultp > u8vec1

Default qualifier 8 bit unsigned integer scalar type.

See also
GLM_GTC_type_precision

Definition at line 339 of file fwd.hpp.

typedef vec< 2, u8, defaultp > u8vec2

Default qualifier 8 bit unsigned integer vector of 2 components type.

See also
GLM_GTC_type_precision

Definition at line 340 of file fwd.hpp.

typedef vec< 3, u8, defaultp > u8vec3

Default qualifier 8 bit unsigned integer vector of 3 components type.

See also
GLM_GTC_type_precision

Definition at line 341 of file fwd.hpp.

typedef vec< 4, u8, defaultp > u8vec4

Default qualifier 8 bit unsigned integer vector of 4 components type.

See also
GLM_GTC_type_precision

Definition at line 342 of file fwd.hpp.

typedef detail::uint16 uint16_t

Default qualifier 16 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 115 of file fwd.hpp.

typedef detail::uint32 uint32_t

Default qualifier 32 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 129 of file fwd.hpp.

typedef detail::uint64 uint64_t

Default qualifier 64 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 143 of file fwd.hpp.

typedef detail::uint8 uint8_t

Default qualifier 8 bit unsigned integer type.

See also
GLM_GTC_type_precision

Definition at line 101 of file fwd.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00305.html ================================================ 0.9.9 API documentation: GLM_GTC_type_ptr

Include <glm/gtc/type_ptr.hpp> to use the features of this extension. More...

Functions

template<typename T >
GLM_FUNC_DECL mat< 2, 2, T, defaultp > make_mat2 (T const *const ptr)
 Build a matrix from a pointer. More...
 
template<typename T >
GLM_FUNC_DECL mat< 2, 2, T, defaultp > make_mat2x2 (T const *const ptr)
 Build a matrix from a pointer. More...
 
template<typename T >
GLM_FUNC_DECL mat< 2, 3, T, defaultp > make_mat2x3 (T const *const ptr)
 Build a matrix from a pointer. More...
 
template<typename T >
GLM_FUNC_DECL mat< 2, 4, T, defaultp > make_mat2x4 (T const *const ptr)
 Build a matrix from a pointer. More...
 
template<typename T >
GLM_FUNC_DECL mat< 3, 3, T, defaultp > make_mat3 (T const *const ptr)
 Build a matrix from a pointer. More...
 
template<typename T >
GLM_FUNC_DECL mat< 3, 2, T, defaultp > make_mat3x2 (T const *const ptr)
 Build a matrix from a pointer. More...
 
template<typename T >
GLM_FUNC_DECL mat< 3, 3, T, defaultp > make_mat3x3 (T const *const ptr)
 Build a matrix from a pointer. More...
 
template<typename T >
GLM_FUNC_DECL mat< 3, 4, T, defaultp > make_mat3x4 (T const *const ptr)
 Build a matrix from a pointer. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > make_mat4 (T const *const ptr)
 Build a matrix from a pointer. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 2, T, defaultp > make_mat4x2 (T const *const ptr)
 Build a matrix from a pointer. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 3, T, defaultp > make_mat4x3 (T const *const ptr)
 Build a matrix from a pointer. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > make_mat4x4 (T const *const ptr)
 Build a matrix from a pointer. More...
 
template<typename T >
GLM_FUNC_DECL qua< T, defaultp > make_quat (T const *const ptr)
 Build a quaternion from a pointer. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 1, T, Q > make_vec1 (vec< 1, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 1, T, Q > make_vec1 (vec< 2, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 1, T, Q > make_vec1 (vec< 3, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 1, T, Q > make_vec1 (vec< 4, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 2, T, Q > make_vec2 (vec< 1, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 2, T, Q > make_vec2 (vec< 2, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 2, T, Q > make_vec2 (vec< 3, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 2, T, Q > make_vec2 (vec< 4, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T >
GLM_FUNC_DECL vec< 2, T, defaultp > make_vec2 (T const *const ptr)
 Build a vector from a pointer. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > make_vec3 (vec< 1, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > make_vec3 (vec< 2, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > make_vec3 (vec< 3, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > make_vec3 (vec< 4, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T >
GLM_FUNC_DECL vec< 3, T, defaultp > make_vec3 (T const *const ptr)
 Build a vector from a pointer. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, T, Q > make_vec4 (vec< 1, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, T, Q > make_vec4 (vec< 2, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, T, Q > make_vec4 (vec< 3, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, T, Q > make_vec4 (vec< 4, T, Q > const &v)
 Build a vector from another vector. More...
 
template<typename T >
GLM_FUNC_DECL vec< 4, T, defaultp > make_vec4 (T const *const ptr)
 Build a vector from a pointer. More...
 
template<typename genType >
GLM_FUNC_DECL genType::value_type const * value_ptr (genType const &v)
 Return the constant address to the data of the input parameter. More...
 

Detailed Description

Include <glm/gtc/type_ptr.hpp> to use the features of this extension.

Handles the interaction between pointers and the vector and matrix types.

This extension defines an overloaded function, glm::value_ptr. It returns a pointer to the memory layout of the object. Matrix types store their values in column-major order.

This is useful for uploading data to matrices or copying data to buffer objects.

Example:

#include <glm/glm.hpp>
glm::vec3 aVector(3);
glm::mat4 someMatrix(1.0);
glUniform3fv(uniformLoc, 1, glm::value_ptr(aVector));
glUniformMatrix4fv(uniformMatrixLoc, 1, GL_FALSE, glm::value_ptr(someMatrix));

<glm/gtc/type_ptr.hpp> needs to be included to use the features of this extension.

Function Documentation

GLM_FUNC_DECL mat<2, 2, T, defaultp> glm::make_mat2 ( T const *const  ptr)

Build a matrix from a pointer.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL mat<2, 2, T, defaultp> glm::make_mat2x2 ( T const *const  ptr)

Build a matrix from a pointer.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL mat<2, 3, T, defaultp> glm::make_mat2x3 ( T const *const  ptr)

Build a matrix from a pointer.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL mat<2, 4, T, defaultp> glm::make_mat2x4 ( T const *const  ptr)

Build a matrix from a pointer.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL mat<3, 3, T, defaultp> glm::make_mat3 ( T const *const  ptr)

Build a matrix from a pointer.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL mat<3, 2, T, defaultp> glm::make_mat3x2 ( T const *const  ptr)

Build a matrix from a pointer.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL mat<3, 3, T, defaultp> glm::make_mat3x3 ( T const *const  ptr)

Build a matrix from a pointer.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL mat<3, 4, T, defaultp> glm::make_mat3x4 ( T const *const  ptr)

Build a matrix from a pointer.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::make_mat4 ( T const *const  ptr)

Build a matrix from a pointer.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL mat<4, 2, T, defaultp> glm::make_mat4x2 ( T const *const  ptr)

Build a matrix from a pointer.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL mat<4, 3, T, defaultp> glm::make_mat4x3 ( T const *const  ptr)

Build a matrix from a pointer.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::make_mat4x4 ( T const *const  ptr)

Build a matrix from a pointer.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL qua<T, defaultp> glm::make_quat ( T const *const  ptr)

Build a quaternion from a pointer.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL vec<1, T, Q> glm::make_vec1 ( vec< 1, T, Q > const &  v)

Build a vector from another vector.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL vec<1, T, Q> glm::make_vec1 ( vec< 2, T, Q > const &  v)

Build a vector from another vector.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL vec<1, T, Q> glm::make_vec1 ( vec< 3, T, Q > const &  v)

Build a vector from another vector.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL vec<1, T, Q> glm::make_vec1 ( vec< 4, T, Q > const &  v)

Build a vector from another vector.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL vec<2, T, Q> glm::make_vec2 ( vec< 1, T, Q > const &  v)

Build a vector from another vector.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL vec<2, T, Q> glm::make_vec2 ( vec< 2, T, Q > const &  v)

Build a vector from another vector.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL vec<2, T, Q> glm::make_vec2 ( vec< 3, T, Q > const &  v)

Build a vector from another vector.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL vec<2, T, Q> glm::make_vec2 ( vec< 4, T, Q > const &  v)

Build a vector from another vector.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL vec<2, T, defaultp> glm::make_vec2 ( T const *const  ptr)

Build a vector from a pointer.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL vec<3, T, Q> glm::make_vec3 ( vec< 1, T, Q > const &  v)

Build a vector from another vector.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL vec<3, T, Q> glm::make_vec3 ( vec< 2, T, Q > const &  v)

Build a vector from another vector.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL vec<3, T, Q> glm::make_vec3 ( vec< 3, T, Q > const &  v)

Build a vector from another vector.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL vec<3, T, Q> glm::make_vec3 ( vec< 4, T, Q > const &  v)

Build a vector from another vector.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL vec<3, T, defaultp> glm::make_vec3 ( T const *const  ptr)

Build a vector from a pointer.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL vec<4, T, Q> glm::make_vec4 ( vec< 1, T, Q > const &  v)

Build a vector from another vector.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL vec<4, T, Q> glm::make_vec4 ( vec< 2, T, Q > const &  v)

Build a vector from another vector.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL vec<4, T, Q> glm::make_vec4 ( vec< 3, T, Q > const &  v)

Build a vector from another vector.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL vec<4, T, Q> glm::make_vec4 ( vec< 4, T, Q > const &  v)

Build a vector from another vector.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL vec<4, T, defaultp> glm::make_vec4 ( T const *const  ptr)

Build a vector from a pointer.

See also
GLM_GTC_type_ptr
GLM_FUNC_DECL genType::value_type const* glm::value_ptr ( genType const &  v)

Return the constant address to the data of the input parameter.

See also
GLM_GTC_type_ptr
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00306.html ================================================ 0.9.9 API documentation: GLM_GTC_ulp

Include <glm/gtc/ulp.hpp> to use the features of this extension. More...

Include <glm/gtc/ulp.hpp> to use the features of this extension.

Allows measuring the accuracy of a function against a reference implementation. This extension works on floating-point data and provides results in ULPs.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00307.html ================================================ 0.9.9 API documentation: GLM_GTC_vec1

Include <glm/gtc/vec1.hpp> to use the features of this extension. More...

Include <glm/gtc/vec1.hpp> to use the features of this extension.

Add vec1, ivec1, uvec1 and bvec1 types.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00308.html ================================================ 0.9.9 API documentation: GLM_GTX_associated_min_max
GLM_GTX_associated_min_max

Include <glm/gtx/associated_min_max.hpp> to use the features of this extension. More...

Functions

template<typename T , typename U >
GLM_FUNC_DECL U associatedMax (T x, U a, T y, U b)
 Maximum comparison between 2 variables and returns 2 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< 2, U, Q > associatedMax (vec< L, T, Q > const &x, vec< L, U, Q > const &a, vec< L, T, Q > const &y, vec< L, U, Q > const &b)
 Maximum comparison between 2 variables and returns 2 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > associatedMax (T x, vec< L, U, Q > const &a, T y, vec< L, U, Q > const &b)
 Maximum comparison between 2 variables and returns 2 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, U, Q > associatedMax (vec< L, T, Q > const &x, U a, vec< L, T, Q > const &y, U b)
 Maximum comparison between 2 variables and returns 2 associated variable values. More...
 
template<typename T , typename U >
GLM_FUNC_DECL U associatedMax (T x, U a, T y, U b, T z, U c)
 Maximum comparison between 3 variables and returns 3 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, U, Q > associatedMax (vec< L, T, Q > const &x, vec< L, U, Q > const &a, vec< L, T, Q > const &y, vec< L, U, Q > const &b, vec< L, T, Q > const &z, vec< L, U, Q > const &c)
 Maximum comparison between 3 variables and returns 3 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > associatedMax (T x, vec< L, U, Q > const &a, T y, vec< L, U, Q > const &b, T z, vec< L, U, Q > const &c)
 Maximum comparison between 3 variables and returns 3 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, U, Q > associatedMax (vec< L, T, Q > const &x, U a, vec< L, T, Q > const &y, U b, vec< L, T, Q > const &z, U c)
 Maximum comparison between 3 variables and returns 3 associated variable values. More...
 
template<typename T , typename U >
GLM_FUNC_DECL U associatedMax (T x, U a, T y, U b, T z, U c, T w, U d)
 Maximum comparison between 4 variables and returns 4 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, U, Q > associatedMax (vec< L, T, Q > const &x, vec< L, U, Q > const &a, vec< L, T, Q > const &y, vec< L, U, Q > const &b, vec< L, T, Q > const &z, vec< L, U, Q > const &c, vec< L, T, Q > const &w, vec< L, U, Q > const &d)
 Maximum comparison between 4 variables and returns 4 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, U, Q > associatedMax (T x, vec< L, U, Q > const &a, T y, vec< L, U, Q > const &b, T z, vec< L, U, Q > const &c, T w, vec< L, U, Q > const &d)
 Maximum comparison between 4 variables and returns 4 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, U, Q > associatedMax (vec< L, T, Q > const &x, U a, vec< L, T, Q > const &y, U b, vec< L, T, Q > const &z, U c, vec< L, T, Q > const &w, U d)
 Maximum comparison between 4 variables and returns 4 associated variable values. More...
 
template<typename T , typename U , qualifier Q>
GLM_FUNC_DECL U associatedMin (T x, U a, T y, U b)
 Minimum comparison between 2 variables and returns 2 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< 2, U, Q > associatedMin (vec< L, T, Q > const &x, vec< L, U, Q > const &a, vec< L, T, Q > const &y, vec< L, U, Q > const &b)
 Minimum comparison between 2 variables and returns 2 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, U, Q > associatedMin (T x, const vec< L, U, Q > &a, T y, const vec< L, U, Q > &b)
 Minimum comparison between 2 variables and returns 2 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, U, Q > associatedMin (vec< L, T, Q > const &x, U a, vec< L, T, Q > const &y, U b)
 Minimum comparison between 2 variables and returns 2 associated variable values. More...
 
template<typename T , typename U >
GLM_FUNC_DECL U associatedMin (T x, U a, T y, U b, T z, U c)
 Minimum comparison between 3 variables and returns 3 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, U, Q > associatedMin (vec< L, T, Q > const &x, vec< L, U, Q > const &a, vec< L, T, Q > const &y, vec< L, U, Q > const &b, vec< L, T, Q > const &z, vec< L, U, Q > const &c)
 Minimum comparison between 3 variables and returns 3 associated variable values. More...
 
template<typename T , typename U >
GLM_FUNC_DECL U associatedMin (T x, U a, T y, U b, T z, U c, T w, U d)
 Minimum comparison between 4 variables and returns 4 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, U, Q > associatedMin (vec< L, T, Q > const &x, vec< L, U, Q > const &a, vec< L, T, Q > const &y, vec< L, U, Q > const &b, vec< L, T, Q > const &z, vec< L, U, Q > const &c, vec< L, T, Q > const &w, vec< L, U, Q > const &d)
 Minimum comparison between 4 variables and returns 4 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, U, Q > associatedMin (T x, vec< L, U, Q > const &a, T y, vec< L, U, Q > const &b, T z, vec< L, U, Q > const &c, T w, vec< L, U, Q > const &d)
 Minimum comparison between 4 variables and returns 4 associated variable values. More...
 
template<length_t L, typename T , typename U , qualifier Q>
GLM_FUNC_DECL vec< L, U, Q > associatedMin (vec< L, T, Q > const &x, U a, vec< L, T, Q > const &y, U b, vec< L, T, Q > const &z, U c, vec< L, T, Q > const &w, U d)
 Minimum comparison between 4 variables and returns 4 associated variable values. More...
 

Detailed Description

Include <glm/gtx/associated_min_max.hpp> to use the features of this extension.

Min and max functions that return the associated values, not the compared ones.

Function Documentation

GLM_FUNC_DECL U glm::associatedMax ( T  x,
U  a,
T  y,
U  b 
)

Maximum comparison between 2 variables and returns 2 associated variable values.

See also
GLM_GTX_associated_min_max
GLM_FUNC_DECL vec<2, U, Q> glm::associatedMax ( vec< L, T, Q > const &  x,
vec< L, U, Q > const &  a,
vec< L, T, Q > const &  y,
vec< L, U, Q > const &  b 
)

Maximum comparison between 2 variables and returns 2 associated variable values.

See also
GLM_GTX_associated_min_max
GLM_FUNC_DECL vec<L, T, Q> glm::associatedMax ( T  x,
vec< L, U, Q > const &  a,
T  y,
vec< L, U, Q > const &  b 
)

Maximum comparison between 2 variables and returns 2 associated variable values.

See also
GLM_GTX_associated_min_max
GLM_FUNC_DECL vec<L, U, Q> glm::associatedMax ( vec< L, T, Q > const &  x,
U  a,
vec< L, T, Q > const &  y,
U  b 
)

Maximum comparison between 2 variables and returns 2 associated variable values.

See also
GLM_GTX_associated_min_max
GLM_FUNC_DECL U glm::associatedMax ( T  x,
U  a,
T  y,
U  b,
T  z,
U  c 
)

Maximum comparison between 3 variables and returns 3 associated variable values.

See also
GLM_GTX_associated_min_max
GLM_FUNC_DECL vec<L, U, Q> glm::associatedMax ( vec< L, T, Q > const &  x,
vec< L, U, Q > const &  a,
vec< L, T, Q > const &  y,
vec< L, U, Q > const &  b,
vec< L, T, Q > const &  z,
vec< L, U, Q > const &  c 
)

Maximum comparison between 3 variables and returns 3 associated variable values.

See also
GLM_GTX_associated_min_max
GLM_FUNC_DECL vec<L, T, Q> glm::associatedMax ( T  x,
vec< L, U, Q > const &  a,
T  y,
vec< L, U, Q > const &  b,
T  z,
vec< L, U, Q > const &  c 
)

Maximum comparison between 3 variables and returns 3 associated variable values.

See also
GLM_GTX_associated_min_max
GLM_FUNC_DECL vec<L, U, Q> glm::associatedMax ( vec< L, T, Q > const &  x,
U  a,
vec< L, T, Q > const &  y,
U  b,
vec< L, T, Q > const &  z,
U  c 
)

Maximum comparison between 3 variables and returns 3 associated variable values.

See also
GLM_GTX_associated_min_max
GLM_FUNC_DECL U glm::associatedMax ( T  x,
U  a,
T  y,
U  b,
T  z,
U  c,
T  w,
U  d 
)

Maximum comparison between 4 variables and returns 4 associated variable values.

See also
GLM_GTX_associated_min_max
GLM_FUNC_DECL vec<L, U, Q> glm::associatedMax ( vec< L, T, Q > const &  x,
vec< L, U, Q > const &  a,
vec< L, T, Q > const &  y,
vec< L, U, Q > const &  b,
vec< L, T, Q > const &  z,
vec< L, U, Q > const &  c,
vec< L, T, Q > const &  w,
vec< L, U, Q > const &  d 
)

Maximum comparison between 4 variables and returns 4 associated variable values.

See also
GLM_GTX_associated_min_max
GLM_FUNC_DECL vec<L, U, Q> glm::associatedMax ( T  x,
vec< L, U, Q > const &  a,
T  y,
vec< L, U, Q > const &  b,
T  z,
vec< L, U, Q > const &  c,
T  w,
vec< L, U, Q > const &  d 
)

Maximum comparison between 4 variables and returns 4 associated variable values.

See also
GLM_GTX_associated_min_max
GLM_FUNC_DECL vec<L, U, Q> glm::associatedMax ( vec< L, T, Q > const &  x,
U  a,
vec< L, T, Q > const &  y,
U  b,
vec< L, T, Q > const &  z,
U  c,
vec< L, T, Q > const &  w,
U  d 
)

Maximum comparison between 4 variables and returns 4 associated variable values.

See also
GLM_GTX_associated_min_max
GLM_FUNC_DECL U glm::associatedMin ( T  x,
U  a,
T  y,
U  b 
)

Minimum comparison between 2 variables and returns 2 associated variable values.

See also
GLM_GTX_associated_min_max
GLM_FUNC_DECL vec<2, U, Q> glm::associatedMin ( vec< L, T, Q > const &  x,
vec< L, U, Q > const &  a,
vec< L, T, Q > const &  y,
vec< L, U, Q > const &  b 
)

Minimum comparison between 2 variables and returns 2 associated variable values.

See also
GLM_GTX_associated_min_max
GLM_FUNC_DECL vec<L, U, Q> glm::associatedMin ( T  x,
const vec< L, U, Q > &  a,
T  y,
const vec< L, U, Q > &  b 
)

Minimum comparison between 2 variables and returns 2 associated variable values.

See also
GLM_GTX_associated_min_max
GLM_FUNC_DECL vec<L, U, Q> glm::associatedMin ( vec< L, T, Q > const &  x,
U  a,
vec< L, T, Q > const &  y,
U  b 
)

Minimum comparison between 2 variables and returns 2 associated variable values.

See also
GLM_GTX_associated_min_max
GLM_FUNC_DECL U glm::associatedMin ( T  x,
U  a,
T  y,
U  b,
T  z,
U  c 
)

Minimum comparison between 3 variables and returns 3 associated variable values.

See also
GLM_GTX_associated_min_max
GLM_FUNC_DECL vec<L, U, Q> glm::associatedMin ( vec< L, T, Q > const &  x,
vec< L, U, Q > const &  a,
vec< L, T, Q > const &  y,
vec< L, U, Q > const &  b,
vec< L, T, Q > const &  z,
vec< L, U, Q > const &  c 
)

Minimum comparison between 3 variables and returns 3 associated variable values.

See also
GLM_GTX_associated_min_max
GLM_FUNC_DECL U glm::associatedMin ( T  x,
U  a,
T  y,
U  b,
T  z,
U  c,
T  w,
U  d 
)

Minimum comparison between 4 variables and returns 4 associated variable values.

See also
GLM_GTX_associated_min_max
GLM_FUNC_DECL vec<L, U, Q> glm::associatedMin ( vec< L, T, Q > const &  x,
vec< L, U, Q > const &  a,
vec< L, T, Q > const &  y,
vec< L, U, Q > const &  b,
vec< L, T, Q > const &  z,
vec< L, U, Q > const &  c,
vec< L, T, Q > const &  w,
vec< L, U, Q > const &  d 
)

Minimum comparison between 4 variables and returns 4 associated variable values.

See also
GLM_GTX_associated_min_max
GLM_FUNC_DECL vec<L, U, Q> glm::associatedMin ( T  x,
vec< L, U, Q > const &  a,
T  y,
vec< L, U, Q > const &  b,
T  z,
vec< L, U, Q > const &  c,
T  w,
vec< L, U, Q > const &  d 
)

Minimum comparison between 4 variables and returns 4 associated variable values.

See also
GLM_GTX_associated_min_max
GLM_FUNC_DECL vec<L, U, Q> glm::associatedMin ( vec< L, T, Q > const &  x,
U  a,
vec< L, T, Q > const &  y,
U  b,
vec< L, T, Q > const &  z,
U  c,
vec< L, T, Q > const &  w,
U  d 
)

Minimum comparison between 4 variables and returns 4 associated variable values.

See also
GLM_GTX_associated_min_max
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00309.html ================================================ 0.9.9 API documentation: GLM_GTX_bit

Include <glm/gtx/bit.hpp> to use the features of this extension. More...

Functions

template<typename genIUType >
GLM_FUNC_DECL genIUType highestBitValue (genIUType Value)
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > highestBitValue (vec< L, T, Q > const &value)
 Find the highest bit set to 1 in an integer variable and return its value. More...
 
template<typename genIUType >
GLM_FUNC_DECL genIUType lowestBitValue (genIUType Value)
 
template<typename genIUType >
GLM_DEPRECATED GLM_FUNC_DECL genIUType powerOfTwoAbove (genIUType Value)
 Return the power of two number whose value is just higher than the input value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_DEPRECATED GLM_FUNC_DECL vec< L, T, Q > powerOfTwoAbove (vec< L, T, Q > const &value)
 Return the power of two number whose value is just higher than the input value. More...
 
template<typename genIUType >
GLM_DEPRECATED GLM_FUNC_DECL genIUType powerOfTwoBelow (genIUType Value)
 Return the power of two number whose value is just lower than the input value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_DEPRECATED GLM_FUNC_DECL vec< L, T, Q > powerOfTwoBelow (vec< L, T, Q > const &value)
 Return the power of two number whose value is just lower than the input value. More...
 
template<typename genIUType >
GLM_DEPRECATED GLM_FUNC_DECL genIUType powerOfTwoNearest (genIUType Value)
 Return the power of two number whose value is the closest to the input value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_DEPRECATED GLM_FUNC_DECL vec< L, T, Q > powerOfTwoNearest (vec< L, T, Q > const &value)
 Return the power of two number whose value is the closest to the input value. More...
 

Detailed Description

Include <glm/gtx/bit.hpp> to use the features of this extension.

Allows performing bit operations on integer values.

Function Documentation

GLM_FUNC_DECL genIUType glm::highestBitValue ( genIUType  Value)
See also
GLM_GTX_bit
GLM_FUNC_DECL vec<L, T, Q> glm::highestBitValue ( vec< L, T, Q > const &  value)

Find the highest bit set to 1 in an integer variable and return its value.

See also
GLM_GTX_bit
GLM_FUNC_DECL genIUType glm::lowestBitValue ( genIUType  Value)
See also
GLM_GTX_bit
GLM_DEPRECATED GLM_FUNC_DECL genIUType glm::powerOfTwoAbove ( genIUType  Value)

Return the power of two number whose value is just higher than the input value.

Deprecated, use ceilPowerOfTwo from GTC_round instead

See also
GLM_GTC_round
GLM_GTX_bit
GLM_DEPRECATED GLM_FUNC_DECL vec<L, T, Q> glm::powerOfTwoAbove ( vec< L, T, Q > const &  value)

Return the power of two number whose value is just higher than the input value.

Deprecated, use ceilPowerOfTwo from GTC_round instead

See also
GLM_GTC_round
GLM_GTX_bit
GLM_DEPRECATED GLM_FUNC_DECL genIUType glm::powerOfTwoBelow ( genIUType  Value)

Return the power of two number whose value is just lower than the input value.

Deprecated, use floorPowerOfTwo from GTC_round instead

See also
GLM_GTC_round
GLM_GTX_bit
GLM_DEPRECATED GLM_FUNC_DECL vec<L, T, Q> glm::powerOfTwoBelow ( vec< L, T, Q > const &  value)

Return the power of two number whose value is just lower than the input value.

Deprecated, use floorPowerOfTwo from GTC_round instead

See also
GLM_GTC_round
GLM_GTX_bit
GLM_DEPRECATED GLM_FUNC_DECL genIUType glm::powerOfTwoNearest ( genIUType  Value)

Return the power of two number whose value is the closest to the input value.

Deprecated, use roundPowerOfTwo from GTC_round instead

See also
GLM_GTC_round
GLM_GTX_bit
GLM_DEPRECATED GLM_FUNC_DECL vec<L, T, Q> glm::powerOfTwoNearest ( vec< L, T, Q > const &  value)

Return the power of two number whose value is the closest to the input value.

Deprecated, use roundPowerOfTwo from GTC_round instead

See also
GLM_GTC_round
GLM_GTX_bit
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00310.html ================================================ 0.9.9 API documentation: GLM_GTX_closest_point
GLM_GTX_closest_point

Include <glm/gtx/closest_point.hpp> to use the features of this extension. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > closestPointOnLine (vec< 3, T, Q > const &point, vec< 3, T, Q > const &a, vec< 3, T, Q > const &b)
 Find the point on a straight line which is closest to a given point. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 2, T, Q > closestPointOnLine (vec< 2, T, Q > const &point, vec< 2, T, Q > const &a, vec< 2, T, Q > const &b)
 2D lines work as well.
 

Detailed Description

Include <glm/gtx/closest_point.hpp> to use the features of this extension.

Find the point on a straight line which is closest to a given point.

Function Documentation

GLM_FUNC_DECL vec<3, T, Q> glm::closestPointOnLine ( vec< 3, T, Q > const &  point,
vec< 3, T, Q > const &  a,
vec< 3, T, Q > const &  b 
)

Find the point on a straight line which is closest to a given point.

See also
GLM_GTX_closest_point
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00311.html ================================================ 0.9.9 API documentation: GLM_GTX_color_encoding
GLM_GTX_color_encoding

Include <glm/gtx/color_encoding.hpp> to use the features of this extension. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > convertD65XYZToD50XYZ (vec< 3, T, Q > const &ColorD65XYZ)
 Convert a D65 XYZ color to D50 XYZ.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > convertD65XYZToLinearSRGB (vec< 3, T, Q > const &ColorD65XYZ)
 Convert a D65 XYZ color to linear sRGB.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > convertLinearSRGBToD50XYZ (vec< 3, T, Q > const &ColorLinearSRGB)
 Convert a linear sRGB color to D50 XYZ.
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > convertLinearSRGBToD65XYZ (vec< 3, T, Q > const &ColorLinearSRGB)
 Convert a linear sRGB color to D65 XYZ.
 

Detailed Description

Include <glm/gtx/color_encoding.hpp> to use the features of this extension.

Conversions between linear sRGB and D50/D65 XYZ color encodings.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00312.html ================================================ 0.9.9 API documentation: GLM_GTX_color_space
GLM_GTX_color_space

Include <glm/gtx/color_space.hpp> to use the features of this extension. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > hsvColor (vec< 3, T, Q > const &rgbValue)
 Converts a color from RGB color space to its color in HSV color space. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T luminosity (vec< 3, T, Q > const &color)
 Compute color luminosity associating ratios (0.33, 0.59, 0.11) to the RGB channels. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > rgbColor (vec< 3, T, Q > const &hsvValue)
 Converts a color from HSV color space to its color in RGB color space. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > saturation (T const s)
 Build a saturation matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > saturation (T const s, vec< 3, T, Q > const &color)
 Modify the saturation of a color. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, T, Q > saturation (T const s, vec< 4, T, Q > const &color)
 Modify the saturation of a color. More...
 

Detailed Description

Include <glm/gtx/color_space.hpp> to use the features of this extension.

Related to RGB to HSV conversions and operations.

Function Documentation

GLM_FUNC_DECL vec<3, T, Q> glm::hsvColor ( vec< 3, T, Q > const &  rgbValue)

Converts a color from RGB color space to its color in HSV color space.

See also
GLM_GTX_color_space
GLM_FUNC_DECL T glm::luminosity ( vec< 3, T, Q > const &  color)

Compute color luminosity by weighting the RGB channels with the ratios (0.33, 0.59, 0.11).

See also
GLM_GTX_color_space
GLM_FUNC_DECL vec<3, T, Q> glm::rgbColor ( vec< 3, T, Q > const &  hsvValue)

Converts a color from HSV color space to its color in RGB color space.

See also
GLM_GTX_color_space
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::saturation ( T const  s)

Build a saturation matrix.

See also
GLM_GTX_color_space
GLM_FUNC_DECL vec<3, T, Q> glm::saturation ( T const  s,
vec< 3, T, Q > const &  color 
)

Modify the saturation of a color.

See also
GLM_GTX_color_space
GLM_FUNC_DECL vec<4, T, Q> glm::saturation ( T const  s,
vec< 4, T, Q > const &  color 
)

Modify the saturation of a color.

See also
GLM_GTX_color_space
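
To make hsvColor and luminosity concrete, here is a scalar RGB-to-HSV sketch plus the documented weighted-sum luminosity. The hue range [0, 360) and the branch structure are assumptions for illustration, not a copy of GLM's code:

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cmath>

// RGB -> HSV in the spirit of glm::hsvColor; hue in degrees [0, 360).
std::array<double, 3> rgbToHSV(const std::array<double, 3>& rgb) {
    const double mx = std::max({rgb[0], rgb[1], rgb[2]});
    const double mn = std::min({rgb[0], rgb[1], rgb[2]});
    const double d = mx - mn;
    double h = 0.0;
    if (d > 0.0) {
        if (mx == rgb[0])      h = 60.0 * std::fmod((rgb[1] - rgb[2]) / d, 6.0);
        else if (mx == rgb[1]) h = 60.0 * ((rgb[2] - rgb[0]) / d + 2.0);
        else                   h = 60.0 * ((rgb[0] - rgb[1]) / d + 4.0);
        if (h < 0.0) h += 360.0;
    }
    return {h, mx > 0.0 ? d / mx : 0.0, mx};  // (hue, saturation, value)
}

// Luminosity with the (0.33, 0.59, 0.11) channel weights, as documented.
double luminosity(const std::array<double, 3>& c) {
    return 0.33 * c[0] + 0.59 * c[1] + 0.11 * c[2];
}
```

Pure green (0, 1, 0) yields hue 120, saturation 1, value 1.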
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00313.html ================================================ 0.9.9 API documentation: GLM_GTX_color_space_YCoCg
0.9.9 API documentation
GLM_GTX_color_space_YCoCg

Include <glm/gtx/color_space_YCoCg.hpp> to use the features of this extension. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > rgb2YCoCg (vec< 3, T, Q > const &rgbColor)
 Convert a color from RGB color space to YCoCg color space. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > rgb2YCoCgR (vec< 3, T, Q > const &rgbColor)
 Convert a color from RGB color space to YCoCgR color space. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > YCoCg2rgb (vec< 3, T, Q > const &YCoCgColor)
 Convert a color from YCoCg color space to RGB color space. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > YCoCgR2rgb (vec< 3, T, Q > const &YCoCgColor)
 Convert a color from YCoCgR color space to RGB color space. More...
 

Detailed Description

Include <glm/gtx/color_space_YCoCg.hpp> to use the features of this extension.

RGB to YCoCg conversions and operations

Function Documentation

GLM_FUNC_DECL vec<3, T, Q> glm::rgb2YCoCg ( vec< 3, T, Q > const &  rgbColor)

Convert a color from RGB color space to YCoCg color space.

See also
GLM_GTX_color_space_YCoCg
GLM_FUNC_DECL vec<3, T, Q> glm::rgb2YCoCgR ( vec< 3, T, Q > const &  rgbColor)

Convert a color from RGB color space to YCoCgR color space.

See also
"YCoCg-R: A Color Space with RGB Reversibility and Low Dynamic Range"
GLM_GTX_color_space_YCoCg
GLM_FUNC_DECL vec<3, T, Q> glm::YCoCg2rgb ( vec< 3, T, Q > const &  YCoCgColor)

Convert a color from YCoCg color space to RGB color space.

See also
GLM_GTX_color_space_YCoCg
GLM_FUNC_DECL vec<3, T, Q> glm::YCoCgR2rgb ( vec< 3, T, Q > const &  YCoCgColor)

Convert a color from YCoCgR color space to RGB color space.

See also
"YCoCg-R: A Color Space with RGB Reversibility and Low Dynamic Range"
GLM_GTX_color_space_YCoCg
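
The YCoCg transform and its inverse are simple linear combinations. A scalar sketch using the standard YCoCg matrix (the same transform rgb2YCoCg/YCoCg2rgb implement; function names here are illustrative):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// RGB -> YCoCg: Y is a weighted average, Co/Cg are color differences.
std::array<double, 3> rgbToYCoCg(const std::array<double, 3>& rgb) {
    const double r = rgb[0], g = rgb[1], b = rgb[2];
    return { r / 4 + g / 2 + b / 4,    // Y
             r / 2         - b / 2,    // Co
            -r / 4 + g / 2 - b / 4 };  // Cg
}

// Exact inverse of the transform above.
std::array<double, 3> yCoCgToRGB(const std::array<double, 3>& ycc) {
    const double y = ycc[0], co = ycc[1], cg = ycc[2];
    return { y + co - cg, y + cg, y - co - cg };
}
```

Round-tripping any RGB triple through the two functions returns the original values.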
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00314.html ================================================ 0.9.9 API documentation: GLM_GTX_common
0.9.9 API documentation
GLM_GTX_common

Include <glm/gtx/common.hpp> to use the features of this extension. More...

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, bool, Q > closeBounded (vec< L, T, Q > const &Value, vec< L, T, Q > const &Min, vec< L, T, Q > const &Max)
 Returns whether vector component values are within an interval. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fmod (vec< L, T, Q > const &v)
 Similar to 'mod' but with a different rounding and integer support. More...
 
template<typename genType >
GLM_FUNC_DECL genType::bool_type isdenormal (genType const &x)
 Returns true if x is a denormalized number. Numbers whose absolute value is too small to be represented in the normal format are represented in an alternate, denormalized format. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, bool, Q > openBounded (vec< L, T, Q > const &Value, vec< L, T, Q > const &Min, vec< L, T, Q > const &Max)
 Returns whether vector component values are within an interval. More...
 

Detailed Description

Include <glm/gtx/common.hpp> to use the features of this extension.

Provide functions to increase the compatibility with Cg and HLSL languages

Function Documentation

GLM_FUNC_DECL vec<L, bool, Q> glm::closeBounded ( vec< L, T, Q > const &  Value,
vec< L, T, Q > const &  Min,
vec< L, T, Q > const &  Max 
)

Returns whether vector component values are within an interval.

A closed interval includes its endpoints, and is denoted with square brackets.

Template Parameters
L: Integer between 1 and 4 included that qualifies the dimension of the vector
T: Floating-point or integer scalar types
Q: Value from qualifier enum
See also
GLM_EXT_vector_relational
GLM_FUNC_DECL vec<L, T, Q> glm::fmod ( vec< L, T, Q > const &  v)

Similar to 'mod' but with a different rounding and integer support.

Returns 'x - y * trunc(x/y)' instead of 'x - y * floor(x/y)'

See also
GLSL mod vs HLSL fmod
GLSL mod man page
GLM_FUNC_DECL genType::bool_type glm::isdenormal ( genType const &  x)

Returns true if x is a denormalized number. Numbers whose absolute value is too small to be represented in the normal format are represented in an alternate, denormalized format.

This format is less precise but can represent values closer to zero.

Template Parameters
genType: Floating-point scalar or vector types.
See also
GLSL isnan man page
GLSL 4.20.8 specification, section 8.3 Common Functions
GLM_FUNC_DECL vec<L, bool, Q> glm::openBounded ( vec< L, T, Q > const &  Value,
vec< L, T, Q > const &  Min,
vec< L, T, Q > const &  Max 
)

Returns whether vector component values are within an interval.

An open interval excludes its endpoints, and is denoted with parentheses.

Template Parameters
L: Integer between 1 and 4 included that qualifies the dimension of the vector
T: Floating-point or integer scalar types
Q: Value from qualifier enum
See also
GLM_EXT_vector_relational
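
A scalar sketch of the two interval tests, assuming the obvious semantics (closeBounded includes the endpoints, openBounded excludes them); GLM applies the same test component-wise and returns a bool vector. Function names here are illustrative:

```cpp
#include <cassert>

// Closed interval [min, max]: endpoints are inside.
bool closeBoundedScalar(double v, double mn, double mx) {
    return v >= mn && v <= mx;
}

// Open interval (min, max): endpoints are outside.
bool openBoundedScalar(double v, double mn, double mx) {
    return v > mn && v < mx;
}
```

The two functions differ only at the endpoints: 1.0 is inside [0, 1] but outside (0, 1).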
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00315.html ================================================ 0.9.9 API documentation: GLM_GTX_compatibility
0.9.9 API documentation
GLM_GTX_compatibility

Include <glm/gtx/compatibility.hpp> to use the features of this extension. More...

Typedefs

typedef bool bool1
 boolean type with 1 component. (From GLM_GTX_compatibility extension)
 
typedef bool bool1x1
 boolean matrix with 1 x 1 component. (From GLM_GTX_compatibility extension)
 
typedef vec< 2, bool, highp > bool2
 boolean type with 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 2, 2, bool, highp > bool2x2
 boolean matrix with 2 x 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 2, 3, bool, highp > bool2x3
 boolean matrix with 2 x 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 2, 4, bool, highp > bool2x4
 boolean matrix with 2 x 4 components. (From GLM_GTX_compatibility extension)
 
typedef vec< 3, bool, highp > bool3
 boolean type with 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 3, 2, bool, highp > bool3x2
 boolean matrix with 3 x 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 3, 3, bool, highp > bool3x3
 boolean matrix with 3 x 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 3, 4, bool, highp > bool3x4
 boolean matrix with 3 x 4 components. (From GLM_GTX_compatibility extension)
 
typedef vec< 4, bool, highp > bool4
 boolean type with 4 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 4, 2, bool, highp > bool4x2
 boolean matrix with 4 x 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 4, 3, bool, highp > bool4x3
 boolean matrix with 4 x 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 4, 4, bool, highp > bool4x4
 boolean matrix with 4 x 4 components. (From GLM_GTX_compatibility extension)
 
typedef double double1
 double-qualifier floating-point vector with 1 component. (From GLM_GTX_compatibility extension)
 
typedef double double1x1
 double-qualifier floating-point matrix with 1 component. (From GLM_GTX_compatibility extension)
 
typedef vec< 2, double, highp > double2
 double-qualifier floating-point vector with 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 2, 2, double, highp > double2x2
 double-qualifier floating-point matrix with 2 x 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 2, 3, double, highp > double2x3
 double-qualifier floating-point matrix with 2 x 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 2, 4, double, highp > double2x4
 double-qualifier floating-point matrix with 2 x 4 components. (From GLM_GTX_compatibility extension)
 
typedef vec< 3, double, highp > double3
 double-qualifier floating-point vector with 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 3, 2, double, highp > double3x2
 double-qualifier floating-point matrix with 3 x 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 3, 3, double, highp > double3x3
 double-qualifier floating-point matrix with 3 x 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 3, 4, double, highp > double3x4
 double-qualifier floating-point matrix with 3 x 4 components. (From GLM_GTX_compatibility extension)
 
typedef vec< 4, double, highp > double4
 double-qualifier floating-point vector with 4 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 4, 2, double, highp > double4x2
 double-qualifier floating-point matrix with 4 x 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 4, 3, double, highp > double4x3
 double-qualifier floating-point matrix with 4 x 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 4, 4, double, highp > double4x4
 double-qualifier floating-point matrix with 4 x 4 components. (From GLM_GTX_compatibility extension)
 
typedef float float1
 single-qualifier floating-point vector with 1 component. (From GLM_GTX_compatibility extension)
 
typedef float float1x1
 single-qualifier floating-point matrix with 1 component. (From GLM_GTX_compatibility extension)
 
typedef vec< 2, float, highp > float2
 single-qualifier floating-point vector with 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 2, 2, float, highp > float2x2
 single-qualifier floating-point matrix with 2 x 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 2, 3, float, highp > float2x3
 single-qualifier floating-point matrix with 2 x 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 2, 4, float, highp > float2x4
 single-qualifier floating-point matrix with 2 x 4 components. (From GLM_GTX_compatibility extension)
 
typedef vec< 3, float, highp > float3
 single-qualifier floating-point vector with 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 3, 2, float, highp > float3x2
 single-qualifier floating-point matrix with 3 x 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 3, 3, float, highp > float3x3
 single-qualifier floating-point matrix with 3 x 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 3, 4, float, highp > float3x4
 single-qualifier floating-point matrix with 3 x 4 components. (From GLM_GTX_compatibility extension)
 
typedef vec< 4, float, highp > float4
 single-qualifier floating-point vector with 4 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 4, 2, float, highp > float4x2
 single-qualifier floating-point matrix with 4 x 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 4, 3, float, highp > float4x3
 single-qualifier floating-point matrix with 4 x 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 4, 4, float, highp > float4x4
 single-qualifier floating-point matrix with 4 x 4 components. (From GLM_GTX_compatibility extension)
 
typedef int int1
 integer vector with 1 component. (From GLM_GTX_compatibility extension)
 
typedef int int1x1
 integer matrix with 1 component. (From GLM_GTX_compatibility extension)
 
typedef vec< 2, int, highp > int2
 integer vector with 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 2, 2, int, highp > int2x2
 integer matrix with 2 x 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 2, 3, int, highp > int2x3
 integer matrix with 2 x 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 2, 4, int, highp > int2x4
 integer matrix with 2 x 4 components. (From GLM_GTX_compatibility extension)
 
typedef vec< 3, int, highp > int3
 integer vector with 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 3, 2, int, highp > int3x2
 integer matrix with 3 x 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 3, 3, int, highp > int3x3
 integer matrix with 3 x 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 3, 4, int, highp > int3x4
 integer matrix with 3 x 4 components. (From GLM_GTX_compatibility extension)
 
typedef vec< 4, int, highp > int4
 integer vector with 4 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 4, 2, int, highp > int4x2
 integer matrix with 4 x 2 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 4, 3, int, highp > int4x3
 integer matrix with 4 x 3 components. (From GLM_GTX_compatibility extension)
 
typedef mat< 4, 4, int, highp > int4x4
 integer matrix with 4 x 4 components. (From GLM_GTX_compatibility extension)
 

Functions

template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER T atan2 (T x, T y)
 Arc tangent. Returns an angle whose tangent is y/x. The signs of x and y are used to determine what quadrant the angle is in. The range of values returned by this function is [-PI, PI]. Results are undefined if x and y are both 0. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER vec< 2, T, Q > atan2 (const vec< 2, T, Q > &x, const vec< 2, T, Q > &y)
 Arc tangent. Returns an angle whose tangent is y/x. The signs of x and y are used to determine what quadrant the angle is in. The range of values returned by this function is [-PI, PI]. Results are undefined if x and y are both 0. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER vec< 3, T, Q > atan2 (const vec< 3, T, Q > &x, const vec< 3, T, Q > &y)
 Arc tangent. Returns an angle whose tangent is y/x. The signs of x and y are used to determine what quadrant the angle is in. The range of values returned by this function is [-PI, PI]. Results are undefined if x and y are both 0. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER vec< 4, T, Q > atan2 (const vec< 4, T, Q > &x, const vec< 4, T, Q > &y)
 Arc tangent. Returns an angle whose tangent is y/x. The signs of x and y are used to determine what quadrant the angle is in. The range of values returned by this function is [-PI, PI]. Results are undefined if x and y are both 0. (From GLM_GTX_compatibility)
 
template<typename genType >
GLM_FUNC_DECL bool isfinite (genType const &x)
 Test whether or not a scalar or each vector component is a finite value. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 1, bool, Q > isfinite (const vec< 1, T, Q > &x)
 Test whether or not a scalar or each vector component is a finite value. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 2, bool, Q > isfinite (const vec< 2, T, Q > &x)
 Test whether or not a scalar or each vector component is a finite value. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, bool, Q > isfinite (const vec< 3, T, Q > &x)
 Test whether or not a scalar or each vector component is a finite value. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, bool, Q > isfinite (const vec< 4, T, Q > &x)
 Test whether or not a scalar or each vector component is a finite value. (From GLM_GTX_compatibility)
 
template<typename T >
GLM_FUNC_QUALIFIER T lerp (T x, T y, T a)
 Returns x * (1.0 - a) + y * a, i.e., the linear blend of x and y using the floating-point value a. The value for a is not restricted to the range [0, 1]. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER vec< 2, T, Q > lerp (const vec< 2, T, Q > &x, const vec< 2, T, Q > &y, T a)
 Returns x * (1.0 - a) + y * a, i.e., the linear blend of x and y using the floating-point value a. The value for a is not restricted to the range [0, 1]. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER vec< 3, T, Q > lerp (const vec< 3, T, Q > &x, const vec< 3, T, Q > &y, T a)
 Returns x * (1.0 - a) + y * a, i.e., the linear blend of x and y using the floating-point value a. The value for a is not restricted to the range [0, 1]. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER vec< 4, T, Q > lerp (const vec< 4, T, Q > &x, const vec< 4, T, Q > &y, T a)
 Returns x * (1.0 - a) + y * a, i.e., the linear blend of x and y using the floating-point value a. The value for a is not restricted to the range [0, 1]. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER vec< 2, T, Q > lerp (const vec< 2, T, Q > &x, const vec< 2, T, Q > &y, const vec< 2, T, Q > &a)
 Returns the component-wise result of x * (1.0 - a) + y * a, i.e., the linear blend of x and y using vector a. The value for a is not restricted to the range [0, 1]. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER vec< 3, T, Q > lerp (const vec< 3, T, Q > &x, const vec< 3, T, Q > &y, const vec< 3, T, Q > &a)
 Returns the component-wise result of x * (1.0 - a) + y * a, i.e., the linear blend of x and y using vector a. The value for a is not restricted to the range [0, 1]. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER vec< 4, T, Q > lerp (const vec< 4, T, Q > &x, const vec< 4, T, Q > &y, const vec< 4, T, Q > &a)
 Returns the component-wise result of x * (1.0 - a) + y * a, i.e., the linear blend of x and y using vector a. The value for a is not restricted to the range [0, 1]. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER T saturate (T x)
 Returns clamp(x, 0, 1) for each component in x. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER vec< 2, T, Q > saturate (const vec< 2, T, Q > &x)
 Returns clamp(x, 0, 1) for each component in x. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER vec< 3, T, Q > saturate (const vec< 3, T, Q > &x)
 Returns clamp(x, 0, 1) for each component in x. (From GLM_GTX_compatibility)
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER vec< 4, T, Q > saturate (const vec< 4, T, Q > &x)
 Returns clamp(x, 0, 1) for each component in x. (From GLM_GTX_compatibility)
 

Detailed Description

Include <glm/gtx/compatibility.hpp> to use the features of this extension.

Provide functions to increase the compatibility with Cg and HLSL languages
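
The two most commonly used helpers, lerp and saturate, reduce to one-liners; a scalar sketch matching the documented formulas (names are illustrative; note that lerp's blend factor is deliberately not clamped, so values outside [0, 1] extrapolate):

```cpp
#include <algorithm>
#include <cassert>

// x * (1 - a) + y * a; a may lie outside [0, 1] (extrapolation).
double lerpScalar(double x, double y, double a) {
    return x * (1.0 - a) + y * a;
}

// clamp(x, 0, 1), the HLSL-style saturate.
double saturateScalar(double x) {
    return std::min(std::max(x, 0.0), 1.0);
}
```

For example, lerpScalar(0, 10, 1.5) extrapolates to 15, while saturateScalar clamps 1.5 down to 1.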

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00316.html ================================================ 0.9.9 API documentation: GLM_GTX_component_wise
0.9.9 API documentation
GLM_GTX_component_wise

Include <glm/gtx/component_wise.hpp> to use the features of this extension. More...

Functions

template<typename genType >
GLM_FUNC_DECL genType::value_type compAdd (genType const &v)
 Add all vector components together. More...
 
template<typename genType >
GLM_FUNC_DECL genType::value_type compMax (genType const &v)
 Find the maximum value among the vector's components. More...
 
template<typename genType >
GLM_FUNC_DECL genType::value_type compMin (genType const &v)
 Find the minimum value among the vector's components. More...
 
template<typename genType >
GLM_FUNC_DECL genType::value_type compMul (genType const &v)
 Multiply all vector components together. More...
 
template<typename floatType , length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, floatType, Q > compNormalize (vec< L, T, Q > const &v)
 Convert an integer vector to a normalized float vector. More...
 
template<length_t L, typename T , typename floatType , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > compScale (vec< L, floatType, Q > const &v)
 Convert a normalized float vector to an integer vector. More...
 

Detailed Description

Include <glm/gtx/component_wise.hpp> to use the features of this extension.

Operations between components of a type

Function Documentation

GLM_FUNC_DECL genType::value_type glm::compAdd ( genType const &  v)

Add all vector components together.

See also
GLM_GTX_component_wise
GLM_FUNC_DECL genType::value_type glm::compMax ( genType const &  v)

Find the maximum value among the vector's components.

See also
GLM_GTX_component_wise
GLM_FUNC_DECL genType::value_type glm::compMin ( genType const &  v)

Find the minimum value among the vector's components.

See also
GLM_GTX_component_wise
GLM_FUNC_DECL genType::value_type glm::compMul ( genType const &  v)

Multiply all vector components together.

See also
GLM_GTX_component_wise
GLM_FUNC_DECL vec<L, floatType, Q> glm::compNormalize ( vec< L, T, Q > const &  v)

Convert an integer vector to a normalized float vector.

If the parameter value type is already a floating qualifier type, the value is passed through.

See also
GLM_GTX_component_wise
GLM_FUNC_DECL vec<L, T, Q> glm::compScale ( vec< L, floatType, Q > const &  v)

Convert a normalized float vector to an integer vector.

If the parameter value type is already a floating qualifier type, the value is passed through.

See also
GLM_GTX_component_wise
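
The four horizontal reductions are straightforward folds over the components; a sketch over a 3-component vector (function names are illustrative, not GLM's):

```cpp
#include <algorithm>
#include <array>
#include <cassert>

// Horizontal reductions corresponding to compAdd/compMul/compMin/compMax.
double compAdd3(const std::array<double, 3>& v) { return v[0] + v[1] + v[2]; }
double compMul3(const std::array<double, 3>& v) { return v[0] * v[1] * v[2]; }
double compMin3(const std::array<double, 3>& v) {
    return std::min({v[0], v[1], v[2]});
}
double compMax3(const std::array<double, 3>& v) {
    return std::max({v[0], v[1], v[2]});
}
```

For the vector (1, 2, 3): sum 6, product 6, minimum 1, maximum 3.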
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00317.html ================================================ 0.9.9 API documentation: GLM_GTX_dual_quaternion
0.9.9 API documentation
GLM_GTX_dual_quaternion

Include <glm/gtx/dual_quaternion.hpp> to use the features of this extension. More...

Typedefs

typedef highp_ddualquat ddualquat
 Dual-quaternion of default double-qualifier floating-point numbers. More...
 
typedef highp_fdualquat dualquat
 Dual-quaternion of floating-point numbers. More...
 
typedef highp_fdualquat fdualquat
 Dual-quaternion of single-qualifier floating-point numbers. More...
 
typedef tdualquat< double, highp > highp_ddualquat
 Dual-quaternion of high double-qualifier floating-point numbers. More...
 
typedef tdualquat< float, highp > highp_dualquat
 Dual-quaternion of high single-qualifier floating-point numbers. More...
 
typedef tdualquat< float, highp > highp_fdualquat
 Dual-quaternion of high single-qualifier floating-point numbers. More...
 
typedef tdualquat< double, lowp > lowp_ddualquat
 Dual-quaternion of low double-qualifier floating-point numbers. More...
 
typedef tdualquat< float, lowp > lowp_dualquat
 Dual-quaternion of low single-qualifier floating-point numbers. More...
 
typedef tdualquat< float, lowp > lowp_fdualquat
 Dual-quaternion of low single-qualifier floating-point numbers. More...
 
typedef tdualquat< double, mediump > mediump_ddualquat
 Dual-quaternion of medium double-qualifier floating-point numbers. More...
 
typedef tdualquat< float, mediump > mediump_dualquat
 Dual-quaternion of medium single-qualifier floating-point numbers. More...
 
typedef tdualquat< float, mediump > mediump_fdualquat
 Dual-quaternion of medium single-qualifier floating-point numbers. More...
 

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL tdualquat< T, Q > dual_quat_identity ()
 Creates an identity dual quaternion. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL tdualquat< T, Q > dualquat_cast (mat< 2, 4, T, Q > const &x)
 Converts a 2 * 4 matrix (matrix which holds real and dual parts) to a dual quaternion. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL tdualquat< T, Q > dualquat_cast (mat< 3, 4, T, Q > const &x)
 Converts a 3 * 4 matrix (augmented matrix rotation + translation) to a dual quaternion. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL tdualquat< T, Q > inverse (tdualquat< T, Q > const &q)
 Returns the inverse of q. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL tdualquat< T, Q > lerp (tdualquat< T, Q > const &x, tdualquat< T, Q > const &y, T const &a)
 Returns the linear interpolation of two dual quaternions. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 2, 4, T, Q > mat2x4_cast (tdualquat< T, Q > const &x)
 Converts a quaternion to a 2 * 4 matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 4, T, Q > mat3x4_cast (tdualquat< T, Q > const &x)
 Converts a quaternion to a 3 * 4 matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL tdualquat< T, Q > normalize (tdualquat< T, Q > const &q)
 Returns the normalized dual quaternion. More...
 

Detailed Description

Include <glm/gtx/dual_quaternion.hpp> to use the features of this extension.

Defines a templated dual-quaternion type and several dual-quaternion operations.

Typedef Documentation

typedef highp_ddualquat ddualquat

Dual-quaternion of default double-qualifier floating-point numbers.

See also
GLM_GTX_dual_quaternion

Definition at line 260 of file dual_quaternion.hpp.

typedef highp_fdualquat dualquat

Dual-quaternion of floating-point numbers.

See also
GLM_GTX_dual_quaternion

Definition at line 236 of file dual_quaternion.hpp.

typedef highp_fdualquat fdualquat

Dual-quaternion of single-qualifier floating-point numbers.

See also
GLM_GTX_dual_quaternion

Definition at line 241 of file dual_quaternion.hpp.

typedef tdualquat<double, highp> highp_ddualquat

Dual-quaternion of high double-qualifier floating-point numbers.

See also
GLM_GTX_dual_quaternion

Definition at line 229 of file dual_quaternion.hpp.

typedef tdualquat<float, highp> highp_dualquat

Dual-quaternion of high single-qualifier floating-point numbers.

See also
GLM_GTX_dual_quaternion

Definition at line 197 of file dual_quaternion.hpp.

typedef tdualquat<float, highp> highp_fdualquat

Dual-quaternion of high single-qualifier floating-point numbers.

See also
GLM_GTX_dual_quaternion

Definition at line 213 of file dual_quaternion.hpp.

typedef tdualquat<double, lowp> lowp_ddualquat

Dual-quaternion of low double-qualifier floating-point numbers.

See also
GLM_GTX_dual_quaternion

Definition at line 219 of file dual_quaternion.hpp.

typedef tdualquat<float, lowp> lowp_dualquat

Dual-quaternion of low single-qualifier floating-point numbers.

See also
GLM_GTX_dual_quaternion

Definition at line 187 of file dual_quaternion.hpp.

typedef tdualquat<float, lowp> lowp_fdualquat

Dual-quaternion of low single-qualifier floating-point numbers.

See also
GLM_GTX_dual_quaternion

Definition at line 203 of file dual_quaternion.hpp.

typedef tdualquat<double, mediump> mediump_ddualquat

Dual-quaternion of medium double-qualifier floating-point numbers.

See also
GLM_GTX_dual_quaternion

Definition at line 224 of file dual_quaternion.hpp.

typedef tdualquat<float, mediump> mediump_dualquat

Dual-quaternion of medium single-qualifier floating-point numbers.

See also
GLM_GTX_dual_quaternion

Definition at line 192 of file dual_quaternion.hpp.

typedef tdualquat<float, mediump> mediump_fdualquat

Dual-quaternion of medium single-qualifier floating-point numbers.

See also
GLM_GTX_dual_quaternion

Definition at line 208 of file dual_quaternion.hpp.

Function Documentation

GLM_FUNC_DECL tdualquat<T, Q> glm::dual_quat_identity ( )

Creates an identity dual quaternion.

See also
GLM_GTX_dual_quaternion
GLM_FUNC_DECL tdualquat<T, Q> glm::dualquat_cast ( mat< 2, 4, T, Q > const &  x)

Converts a 2 * 4 matrix (matrix which holds real and dual parts) to a dual quaternion.

See also
GLM_GTX_dual_quaternion
GLM_FUNC_DECL tdualquat<T, Q> glm::dualquat_cast ( mat< 3, 4, T, Q > const &  x)

Converts a 3 * 4 matrix (augmented matrix rotation + translation) to a dual quaternion.

See also
GLM_GTX_dual_quaternion
GLM_FUNC_DECL tdualquat<T, Q> glm::inverse ( tdualquat< T, Q > const &  q)

Returns the inverse of q.

See also
GLM_GTX_dual_quaternion
GLM_FUNC_DECL tdualquat<T, Q> glm::lerp ( tdualquat< T, Q > const &  x,
tdualquat< T, Q > const &  y,
T const &  a 
)

Returns the linear interpolation of two dual quaternions.

See also
GLM_GTX_dual_quaternion
GLM_FUNC_DECL mat<2, 4, T, Q> glm::mat2x4_cast ( tdualquat< T, Q > const &  x)

Converts a quaternion to a 2 * 4 matrix.

See also
GLM_GTX_dual_quaternion
GLM_FUNC_DECL mat<3, 4, T, Q> glm::mat3x4_cast ( tdualquat< T, Q > const &  x)

Converts a quaternion to a 3 * 4 matrix.

See also
GLM_GTX_dual_quaternion
GLM_FUNC_DECL tdualquat<T, Q> glm::normalize ( tdualquat< T, Q > const &  q)

Returns the normalized dual quaternion.

See also
GLM_GTX_dual_quaternion
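
To make the lerp/normalize pair concrete: a dual quaternion stores a real quaternion (the rotation) and a dual part (encoding the translation); lerp blends both parts component-wise and normalize divides both by the real part's magnitude. A minimal sketch under those assumptions (the struct layout and helper names are hypothetical, not GLM's tdualquat internals):

```cpp
#include <array>
#include <cassert>
#include <cmath>

struct DualQuat {
    std::array<double, 4> real;  // rotation quaternion (w, x, y, z)
    std::array<double, 4> dual;  // encodes the translation
};

DualQuat dqIdentity() { return {{1, 0, 0, 0}, {0, 0, 0, 0}}; }

// Component-wise linear blend; re-normalize afterwards to keep a valid
// rigid transform (the usual dual-quaternion blending recipe).
DualQuat dqLerp(const DualQuat& x, const DualQuat& y, double a) {
    DualQuat r;
    for (int i = 0; i < 4; ++i) {
        r.real[i] = x.real[i] * (1.0 - a) + y.real[i] * a;
        r.dual[i] = x.dual[i] * (1.0 - a) + y.dual[i] * a;
    }
    return r;
}

// Divide both parts by the real part's norm so the rotation is unit length.
DualQuat dqNormalize(const DualQuat& q) {
    const double n = std::sqrt(q.real[0] * q.real[0] + q.real[1] * q.real[1] +
                               q.real[2] * q.real[2] + q.real[3] * q.real[3]);
    DualQuat r;
    for (int i = 0; i < 4; ++i) {
        r.real[i] = q.real[i] / n;
        r.dual[i] = q.dual[i] / n;
    }
    return r;
}
```

Blending a dual quaternion with itself and normalizing returns it unchanged, which is a convenient sanity check for any implementation.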
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00318.html ================================================ 0.9.9 API documentation: GLM_GTX_easing
0.9.9 API documentation
GLM_GTX_easing

Include <glm/gtx/easing.hpp> to use the features of this extension. More...

Functions

template<typename genType >
GLM_FUNC_DECL genType backEaseIn (genType const &a)
 
template<typename genType >
GLM_FUNC_DECL genType backEaseIn (genType const &a, genType const &o)
 
template<typename genType >
GLM_FUNC_DECL genType backEaseInOut (genType const &a)
 
template<typename genType >
GLM_FUNC_DECL genType backEaseInOut (genType const &a, genType const &o)
 
template<typename genType >
GLM_FUNC_DECL genType backEaseOut (genType const &a)
 
template<typename genType >
GLM_FUNC_DECL genType backEaseOut (genType const &a, genType const &o)
 
template<typename genType >
GLM_FUNC_DECL genType bounceEaseIn (genType const &a)
 
template<typename genType >
GLM_FUNC_DECL genType bounceEaseInOut (genType const &a)
 
template<typename genType >
GLM_FUNC_DECL genType bounceEaseOut (genType const &a)
 
template<typename genType >
GLM_FUNC_DECL genType circularEaseIn (genType const &a)
 Modelled after shifted quadrant IV of unit circle. More...
 
template<typename genType >
GLM_FUNC_DECL genType circularEaseInOut (genType const &a)
 Modelled after the piecewise circular function y = (1/2)(1 - sqrt(1 - 4x^2)) ; [0, 0.5) y = (1/2)(sqrt(-(2x - 3)*(2x - 1)) + 1) ; [0.5, 1]. More...
 
template<typename genType >
GLM_FUNC_DECL genType circularEaseOut (genType const &a)
 Modelled after shifted quadrant II of unit circle. More...
 
template<typename genType >
GLM_FUNC_DECL genType cubicEaseIn (genType const &a)
 Modelled after the cubic y = x^3.
 
template<typename genType >
GLM_FUNC_DECL genType cubicEaseInOut (genType const &a)
 Modelled after the piecewise cubic y = (1/2)((2x)^3) ; [0, 0.5) y = (1/2)((2x-2)^3 + 2) ; [0.5, 1]. More...
 
template<typename genType >
GLM_FUNC_DECL genType cubicEaseOut (genType const &a)
 Modelled after the cubic y = (x - 1)^3 + 1. More...
 
template<typename genType >
GLM_FUNC_DECL genType elasticEaseIn (genType const &a)
 Modelled after the damped sine wave y = sin(13pi/2*x)*pow(2, 10 * (x - 1)) More...
 
template<typename genType >
GLM_FUNC_DECL genType elasticEaseInOut (genType const &a)
 Modelled after the piecewise exponentially-damped sine wave: y = (1/2)*sin(13pi/2*(2*x))*pow(2, 10 * ((2*x) - 1)) ; [0,0.5) y = (1/2)*(sin(-13pi/2*((2x-1)+1))*pow(2,-10(2*x-1)) + 2) ; [0.5, 1]. More...
 
template<typename genType >
GLM_FUNC_DECL genType elasticEaseOut (genType const &a)
 Modelled after the damped sine wave y = sin(-13pi/2*(x + 1))*pow(2, -10x) + 1. More...
 
template<typename genType >
GLM_FUNC_DECL genType exponentialEaseIn (genType const &a)
 Modelled after the exponential function y = 2^(10(x - 1)) More...
 
template<typename genType >
GLM_FUNC_DECL genType exponentialEaseInOut (genType const &a)
 Modelled after the piecewise exponential y = (1/2)2^(10(2x - 1)) ; [0,0.5) y = -(1/2)*2^(-10(2x - 1)) + 1 ; [0.5,1]. More...
 
template<typename genType >
GLM_FUNC_DECL genType exponentialEaseOut (genType const &a)
 Modelled after the exponential function y = -2^(-10x) + 1. More...
 
template<typename genType >
GLM_FUNC_DECL genType linearInterpolation (genType const &a)
 Modelled after the line y = x. More...
 
template<typename genType >
GLM_FUNC_DECL genType quadraticEaseIn (genType const &a)
 Modelled after the parabola y = x^2. More...
 
template<typename genType >
GLM_FUNC_DECL genType quadraticEaseInOut (genType const &a)
 Modelled after the piecewise quadratic y = (1/2)((2x)^2) ; [0, 0.5) y = -(1/2)((2x-1)*(2x-3) - 1) ; [0.5, 1]. More...
 
template<typename genType >
GLM_FUNC_DECL genType quadraticEaseOut (genType const &a)
 Modelled after the parabola y = -x^2 + 2x. More...
 
template<typename genType >
GLM_FUNC_DECL genType quarticEaseIn (genType const &a)
 Modelled after the quartic x^4. More...
 
template<typename genType >
GLM_FUNC_DECL genType quarticEaseInOut (genType const &a)
 Modelled after the piecewise quartic y = (1/2)((2x)^4) ; [0, 0.5) y = -(1/2)((2x-2)^4 - 2) ; [0.5, 1]. More...
 
template<typename genType >
GLM_FUNC_DECL genType quarticEaseOut (genType const &a)
 Modelled after the quartic y = 1 - (x - 1)^4. More...
 
template<typename genType >
GLM_FUNC_DECL genType quinticEaseIn (genType const &a)
 Modelled after the quintic y = x^5. More...
 
template<typename genType >
GLM_FUNC_DECL genType quinticEaseInOut (genType const &a)
 Modelled after the piecewise quintic y = (1/2)((2x)^5) ; [0, 0.5) y = (1/2)((2x-2)^5 + 2) ; [0.5, 1]. More...
 
template<typename genType >
GLM_FUNC_DECL genType quinticEaseOut (genType const &a)
 Modelled after the quintic y = (x - 1)^5 + 1. More...
 
template<typename genType >
GLM_FUNC_DECL genType sineEaseIn (genType const &a)
 Modelled after quarter-cycle of sine wave. More...
 
template<typename genType >
GLM_FUNC_DECL genType sineEaseInOut (genType const &a)
 Modelled after half sine wave. More...
 
template<typename genType >
GLM_FUNC_DECL genType sineEaseOut (genType const &a)
 Modelled after quarter-cycle of sine wave (different phase) More...
 

Detailed Description

Include <glm/gtx/easing.hpp> to use the features of this extension.

Easing functions for animations and transitions. All functions take a parameter x in the range [0.0, 1.0].

Based on the AHEasing project of Warren Moore (https://github.com/warrenm/AHEasing)

Function Documentation

GLM_FUNC_DECL genType glm::backEaseIn ( genType const &  a)
See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::backEaseIn ( genType const &  a,
genType const &  o 
)
Parameters
a: parameter
o: Optional overshoot modifier
See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::backEaseInOut ( genType const &  a)
See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::backEaseInOut ( genType const &  a,
genType const &  o 
)
Parameters
a: parameter
o: Optional overshoot modifier
See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::backEaseOut ( genType const &  a)
See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::backEaseOut ( genType const &  a,
genType const &  o 
)
Parameters
a: parameter
o: Optional overshoot modifier
See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::bounceEaseIn ( genType const &  a)
See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::bounceEaseInOut ( genType const &  a)
See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::bounceEaseOut ( genType const &  a)
See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::circularEaseIn ( genType const &  a)

Modelled after shifted quadrant IV of unit circle.

See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::circularEaseInOut ( genType const &  a)

Modelled after the piecewise circular function y = (1/2)(1 - sqrt(1 - 4x^2)) ; [0, 0.5) y = (1/2)(sqrt(-(2x - 3)*(2x - 1)) + 1) ; [0.5, 1].

See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::circularEaseOut ( genType const &  a)

Modelled after shifted quadrant II of unit circle.

See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::cubicEaseInOut ( genType const &  a)

Modelled after the piecewise cubic y = (1/2)((2x)^3) ; [0, 0.5) y = (1/2)((2x-2)^3 + 2) ; [0.5, 1].

See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::cubicEaseOut ( genType const &  a)

Modelled after the cubic y = (x - 1)^3 + 1.

See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::elasticEaseIn ( genType const &  a)

Modelled after the damped sine wave y = sin(13pi/2*x)*pow(2, 10 * (x - 1))

See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::elasticEaseInOut ( genType const &  a)

Modelled after the piecewise exponentially-damped sine wave: y = (1/2)*sin(13pi/2*(2*x))*pow(2, 10 * ((2*x) - 1)) ; [0,0.5) y = (1/2)*(sin(-13pi/2*((2x-1)+1))*pow(2,-10(2*x-1)) + 2) ; [0.5, 1].

See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::elasticEaseOut ( genType const &  a)

Modelled after the damped sine wave y = sin(-13pi/2*(x + 1))*pow(2, -10x) + 1.

See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::exponentialEaseIn ( genType const &  a)

Modelled after the exponential function y = 2^(10(x - 1))

See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::exponentialEaseInOut ( genType const &  a)

Modelled after the piecewise exponential y = (1/2)2^(10(2x - 1)) ; [0,0.5) y = -(1/2)*2^(-10(2x - 1)) + 1 ; [0.5,1].

See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::exponentialEaseOut ( genType const &  a)

Modelled after the exponential function y = -2^(-10x) + 1.

See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::linearInterpolation ( genType const &  a)

Modelled after the line y = x.

See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::quadraticEaseIn ( genType const &  a)

Modelled after the parabola y = x^2.

See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::quadraticEaseInOut ( genType const &  a)

Modelled after the piecewise quadratic y = (1/2)((2x)^2) ; [0, 0.5) y = -(1/2)((2x-1)*(2x-3) - 1) ; [0.5, 1].

See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::quadraticEaseOut ( genType const &  a)

Modelled after the parabola y = -x^2 + 2x.

See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::quarticEaseIn ( genType const &  a)

Modelled after the quartic x^4.

See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::quarticEaseInOut ( genType const &  a)

Modelled after the piecewise quartic y = (1/2)((2x)^4) ; [0, 0.5) y = -(1/2)((2x-2)^4 - 2) ; [0.5, 1].

See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::quarticEaseOut ( genType const &  a)

Modelled after the quartic y = 1 - (x - 1)^4.

See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::quinticEaseIn ( genType const &  a)

Modelled after the quintic y = x^5.

See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::quinticEaseInOut ( genType const &  a)

Modelled after the piecewise quintic y = (1/2)((2x)^5) ; [0, 0.5) y = (1/2)((2x-2)^5 + 2) ; [0.5, 1].

See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::quinticEaseOut ( genType const &  a)

Modelled after the quintic y = (x - 1)^5 + 1.

See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::sineEaseIn ( genType const &  a)

Modelled after quarter-cycle of sine wave.

See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::sineEaseInOut ( genType const &  a)

Modelled after half sine wave.

See also
GLM_GTX_easing
GLM_FUNC_DECL genType glm::sineEaseOut ( genType const &  a)

Modelled after quarter-cycle of sine wave (different phase)

See also
GLM_GTX_easing
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00319.html ================================================ 0.9.9 API documentation: GLM_GTX_euler_angles
GLM_GTX_euler_angles

Include <glm/gtx/euler_angles.hpp> to use the features of this extension. More...

Functions

template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > derivedEulerAngleX (T const &angleX, T const &angularVelocityX)
 Creates a 3D 4 * 4 homogeneous derived matrix from the rotation matrix about X-axis. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > derivedEulerAngleY (T const &angleY, T const &angularVelocityY)
 Creates a 3D 4 * 4 homogeneous derived matrix from the rotation matrix about Y-axis. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > derivedEulerAngleZ (T const &angleZ, T const &angularVelocityZ)
 Creates a 3D 4 * 4 homogeneous derived matrix from the rotation matrix about Z-axis. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleX (T const &angleX)
 Creates a 3D 4 * 4 homogeneous rotation matrix from an euler angle X. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleXY (T const &angleX, T const &angleY)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (X * Y). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleXYX (T const &t1, T const &t2, T const &t3)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (X * Y * X). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleXYZ (T const &t1, T const &t2, T const &t3)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (X * Y * Z). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleXZ (T const &angleX, T const &angleZ)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (X * Z). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleXZX (T const &t1, T const &t2, T const &t3)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (X * Z * X). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleXZY (T const &t1, T const &t2, T const &t3)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (X * Z * Y). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleY (T const &angleY)
 Creates a 3D 4 * 4 homogeneous rotation matrix from an euler angle Y. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleYX (T const &angleY, T const &angleX)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * X). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleYXY (T const &t1, T const &t2, T const &t3)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * X * Y). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleYXZ (T const &yaw, T const &pitch, T const &roll)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * X * Z). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleYZ (T const &angleY, T const &angleZ)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * Z). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleYZX (T const &t1, T const &t2, T const &t3)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * Z * X). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleYZY (T const &t1, T const &t2, T const &t3)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * Z * Y). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleZ (T const &angleZ)
 Creates a 3D 4 * 4 homogeneous rotation matrix from an euler angle Z. More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleZX (T const &angle, T const &angleX)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Z * X). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleZXY (T const &t1, T const &t2, T const &t3)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Z * X * Y). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleZXZ (T const &t1, T const &t2, T const &t3)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Z * X * Z). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleZY (T const &angleZ, T const &angleY)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Z * Y). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleZYX (T const &t1, T const &t2, T const &t3)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Z * Y * X). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > eulerAngleZYZ (T const &t1, T const &t2, T const &t3)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Z * Y * Z). More...
 
template<typename T >
GLM_FUNC_DECL void extractEulerAngleXYX (mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
 Extracts the (X * Y * X) Euler angles from the rotation matrix M. More...
 
template<typename T >
GLM_FUNC_DECL void extractEulerAngleXYZ (mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
 Extracts the (X * Y * Z) Euler angles from the rotation matrix M. More...
 
template<typename T >
GLM_FUNC_DECL void extractEulerAngleXZX (mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
 Extracts the (X * Z * X) Euler angles from the rotation matrix M. More...
 
template<typename T >
GLM_FUNC_DECL void extractEulerAngleXZY (mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
 Extracts the (X * Z * Y) Euler angles from the rotation matrix M. More...
 
template<typename T >
GLM_FUNC_DECL void extractEulerAngleYXY (mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
 Extracts the (Y * X * Y) Euler angles from the rotation matrix M. More...
 
template<typename T >
GLM_FUNC_DECL void extractEulerAngleYXZ (mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
 Extracts the (Y * X * Z) Euler angles from the rotation matrix M. More...
 
template<typename T >
GLM_FUNC_DECL void extractEulerAngleYZX (mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
 Extracts the (Y * Z * X) Euler angles from the rotation matrix M. More...
 
template<typename T >
GLM_FUNC_DECL void extractEulerAngleYZY (mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
 Extracts the (Y * Z * Y) Euler angles from the rotation matrix M. More...
 
template<typename T >
GLM_FUNC_DECL void extractEulerAngleZXY (mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
 Extracts the (Z * X * Y) Euler angles from the rotation matrix M. More...
 
template<typename T >
GLM_FUNC_DECL void extractEulerAngleZXZ (mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
 Extracts the (Z * X * Z) Euler angles from the rotation matrix M. More...
 
template<typename T >
GLM_FUNC_DECL void extractEulerAngleZYX (mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
 Extracts the (Z * Y * X) Euler angles from the rotation matrix M. More...
 
template<typename T >
GLM_FUNC_DECL void extractEulerAngleZYZ (mat< 4, 4, T, defaultp > const &M, T &t1, T &t2, T &t3)
 Extracts the (Z * Y * Z) Euler angles from the rotation matrix M. More...
 
template<typename T >
GLM_FUNC_DECL mat< 2, 2, T, defaultp > orientate2 (T const &angle)
 Creates a 2D 2 * 2 rotation matrix from an euler angle. More...
 
template<typename T >
GLM_FUNC_DECL mat< 3, 3, T, defaultp > orientate3 (T const &angle)
 Creates a 2D 3 * 3 homogeneous rotation matrix from an euler angle. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 3, T, Q > orientate3 (vec< 3, T, Q > const &angles)
 Creates a 3D 3 * 3 rotation matrix from euler angles (Y * X * Z). More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > orientate4 (vec< 3, T, Q > const &angles)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * X * Z). More...
 
template<typename T >
GLM_FUNC_DECL mat< 4, 4, T, defaultp > yawPitchRoll (T const &yaw, T const &pitch, T const &roll)
 Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * X * Z). More...
 

Detailed Description

Include <glm/gtx/euler_angles.hpp> to use the features of this extension.

Build matrices from Euler angles.

Extraction of Euler angles from rotation matrix. Based on the original paper 2014 Mike Day - Extracting Euler Angles from a Rotation Matrix.

Function Documentation

GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::derivedEulerAngleX ( T const &  angleX,
T const &  angularVelocityX 
)

Creates a 3D 4 * 4 homogeneous derived matrix from the rotation matrix about X-axis.

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::derivedEulerAngleY ( T const &  angleY,
T const &  angularVelocityY 
)

Creates a 3D 4 * 4 homogeneous derived matrix from the rotation matrix about Y-axis.

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::derivedEulerAngleZ ( T const &  angleZ,
T const &  angularVelocityZ 
)

Creates a 3D 4 * 4 homogeneous derived matrix from the rotation matrix about Z-axis.

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::eulerAngleX ( T const &  angleX)

Creates a 3D 4 * 4 homogeneous rotation matrix from an euler angle X.

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::eulerAngleXY ( T const &  angleX,
T const &  angleY 
)

Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (X * Y).

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::eulerAngleXYX ( T const &  t1,
T const &  t2,
T const &  t3 
)

Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (X * Y * X).

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::eulerAngleXYZ ( T const &  t1,
T const &  t2,
T const &  t3 
)

Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (X * Y * Z).

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::eulerAngleXZ ( T const &  angleX,
T const &  angleZ 
)

Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (X * Z).

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::eulerAngleXZX ( T const &  t1,
T const &  t2,
T const &  t3 
)

Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (X * Z * X).

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::eulerAngleXZY ( T const &  t1,
T const &  t2,
T const &  t3 
)

Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (X * Z * Y).

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::eulerAngleY ( T const &  angleY)

Creates a 3D 4 * 4 homogeneous rotation matrix from an euler angle Y.

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::eulerAngleYX ( T const &  angleY,
T const &  angleX 
)

Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * X).

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::eulerAngleYXY ( T const &  t1,
T const &  t2,
T const &  t3 
)

Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * X * Y).

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::eulerAngleYXZ ( T const &  yaw,
T const &  pitch,
T const &  roll 
)

Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * X * Z).

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::eulerAngleYZ ( T const &  angleY,
T const &  angleZ 
)

Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * Z).

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::eulerAngleYZX ( T const &  t1,
T const &  t2,
T const &  t3 
)

Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * Z * X).

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::eulerAngleYZY ( T const &  t1,
T const &  t2,
T const &  t3 
)

Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * Z * Y).

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::eulerAngleZ ( T const &  angleZ)

Creates a 3D 4 * 4 homogeneous rotation matrix from an euler angle Z.

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::eulerAngleZX ( T const &  angle,
T const &  angleX 
)

Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Z * X).

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::eulerAngleZXY ( T const &  t1,
T const &  t2,
T const &  t3 
)

Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Z * X * Y).

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::eulerAngleZXZ ( T const &  t1,
T const &  t2,
T const &  t3 
)

Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Z * X * Z).

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::eulerAngleZY ( T const &  angleZ,
T const &  angleY 
)

Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Z * Y).

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::eulerAngleZYX ( T const &  t1,
T const &  t2,
T const &  t3 
)

Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Z * Y * X).

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::eulerAngleZYZ ( T const &  t1,
T const &  t2,
T const &  t3 
)

Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Z * Y * Z).

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL void glm::extractEulerAngleXYX ( mat< 4, 4, T, defaultp > const &  M,
T &  t1,
T &  t2,
T &  t3 
)

Extracts the (X * Y * X) Euler angles from the rotation matrix M.

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL void glm::extractEulerAngleXYZ ( mat< 4, 4, T, defaultp > const &  M,
T &  t1,
T &  t2,
T &  t3 
)

Extracts the (X * Y * Z) Euler angles from the rotation matrix M.

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL void glm::extractEulerAngleXZX ( mat< 4, 4, T, defaultp > const &  M,
T &  t1,
T &  t2,
T &  t3 
)

Extracts the (X * Z * X) Euler angles from the rotation matrix M.

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL void glm::extractEulerAngleXZY ( mat< 4, 4, T, defaultp > const &  M,
T &  t1,
T &  t2,
T &  t3 
)

Extracts the (X * Z * Y) Euler angles from the rotation matrix M.

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL void glm::extractEulerAngleYXY ( mat< 4, 4, T, defaultp > const &  M,
T &  t1,
T &  t2,
T &  t3 
)

Extracts the (Y * X * Y) Euler angles from the rotation matrix M.

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL void glm::extractEulerAngleYXZ ( mat< 4, 4, T, defaultp > const &  M,
T &  t1,
T &  t2,
T &  t3 
)

Extracts the (Y * X * Z) Euler angles from the rotation matrix M.

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL void glm::extractEulerAngleYZX ( mat< 4, 4, T, defaultp > const &  M,
T &  t1,
T &  t2,
T &  t3 
)

Extracts the (Y * Z * X) Euler angles from the rotation matrix M.

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL void glm::extractEulerAngleYZY ( mat< 4, 4, T, defaultp > const &  M,
T &  t1,
T &  t2,
T &  t3 
)

Extracts the (Y * Z * Y) Euler angles from the rotation matrix M.

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL void glm::extractEulerAngleZXY ( mat< 4, 4, T, defaultp > const &  M,
T &  t1,
T &  t2,
T &  t3 
)

Extracts the (Z * X * Y) Euler angles from the rotation matrix M.

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL void glm::extractEulerAngleZXZ ( mat< 4, 4, T, defaultp > const &  M,
T &  t1,
T &  t2,
T &  t3 
)

Extracts the (Z * X * Z) Euler angles from the rotation matrix M.

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL void glm::extractEulerAngleZYX ( mat< 4, 4, T, defaultp > const &  M,
T &  t1,
T &  t2,
T &  t3 
)

Extracts the (Z * Y * X) Euler angles from the rotation matrix M.

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL void glm::extractEulerAngleZYZ ( mat< 4, 4, T, defaultp > const &  M,
T &  t1,
T &  t2,
T &  t3 
)

Extracts the (Z * Y * Z) Euler angles from the rotation matrix M.

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<2, 2, T, defaultp> glm::orientate2 ( T const &  angle)

Creates a 2D 2 * 2 rotation matrix from an euler angle.

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<3, 3, T, defaultp> glm::orientate3 ( T const &  angle)

Creates a 2D 3 * 3 homogeneous rotation matrix from an euler angle.

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<3, 3, T, Q> glm::orientate3 ( vec< 3, T, Q > const &  angles)

Creates a 3D 3 * 3 rotation matrix from euler angles (Y * X * Z).

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<4, 4, T, Q> glm::orientate4 ( vec< 3, T, Q > const &  angles)

Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * X * Z).

See also
GLM_GTX_euler_angles
GLM_FUNC_DECL mat<4, 4, T, defaultp> glm::yawPitchRoll ( T const &  yaw,
T const &  pitch,
T const &  roll 
)

Creates a 3D 4 * 4 homogeneous rotation matrix from euler angles (Y * X * Z).

See also
GLM_GTX_euler_angles
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00320.html ================================================ 0.9.9 API documentation: GLM_GTX_extend

Include <glm/gtx/extend.hpp> to use the features of this extension. More...

Functions

template<typename genType >
GLM_FUNC_DECL genType extend (genType const &Origin, genType const &Source, typename genType::value_type const Length)
 Extends the Origin position by Length along the (Source - Origin) direction. More...
 

Detailed Description

Include <glm/gtx/extend.hpp> to use the features of this extension.

Extend a position from a source to a position at a defined length.

Function Documentation

GLM_FUNC_DECL genType glm::extend ( genType const &  Origin,
genType const &  Source,
typename genType::value_type const  Length 
)

Extends the Origin position by Length along the (Source - Origin) direction.

See also
GLM_GTX_extend
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00321.html ================================================ 0.9.9 API documentation: GLM_GTX_extented_min_max
GLM_GTX_extented_min_max

Include <glm/gtx/extented_min_max.hpp> to use the features of this extension. More...

Functions

template<typename genType >
GLM_FUNC_DECL genType fclamp (genType x, genType minVal, genType maxVal)
 Returns min(max(x, minVal), maxVal) for each component in x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fclamp (vec< L, T, Q > const &x, T minVal, T maxVal)
 Returns min(max(x, minVal), maxVal) for each component in x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fclamp (vec< L, T, Q > const &x, vec< L, T, Q > const &minVal, vec< L, T, Q > const &maxVal)
 Returns min(max(x, minVal), maxVal) for each component in x. More...
 
template<typename genType >
GLM_FUNC_DECL genType fmax (genType x, genType y)
 Returns y if x < y; otherwise, it returns x. More...
 
template<typename genType >
GLM_FUNC_DECL genType fmin (genType x, genType y)
 Returns y if y < x; otherwise, it returns x. More...
 
template<typename T >
GLM_FUNC_DECL T max (T const &x, T const &y, T const &z)
 Return the maximum component-wise values of 3 inputs. More...
 
template<typename T , template< typename > class C>
GLM_FUNC_DECL C< T > max (C< T > const &x, typename C< T >::T const &y, typename C< T >::T const &z)
 Return the maximum component-wise values of 3 inputs. More...
 
template<typename T , template< typename > class C>
GLM_FUNC_DECL C< T > max (C< T > const &x, C< T > const &y, C< T > const &z)
 Return the maximum component-wise values of 3 inputs. More...
 
template<typename T >
GLM_FUNC_DECL T max (T const &x, T const &y, T const &z, T const &w)
 Return the maximum component-wise values of 4 inputs. More...
 
template<typename T , template< typename > class C>
GLM_FUNC_DECL C< T > max (C< T > const &x, typename C< T >::T const &y, typename C< T >::T const &z, typename C< T >::T const &w)
 Return the maximum component-wise values of 4 inputs. More...
 
template<typename T , template< typename > class C>
GLM_FUNC_DECL C< T > max (C< T > const &x, C< T > const &y, C< T > const &z, C< T > const &w)
 Return the maximum component-wise values of 4 inputs. More...
 
template<typename T >
GLM_FUNC_DECL T min (T const &x, T const &y, T const &z)
 Return the minimum component-wise values of 3 inputs. More...
 
template<typename T , template< typename > class C>
GLM_FUNC_DECL C< T > min (C< T > const &x, typename C< T >::T const &y, typename C< T >::T const &z)
 Return the minimum component-wise values of 3 inputs. More...
 
template<typename T , template< typename > class C>
GLM_FUNC_DECL C< T > min (C< T > const &x, C< T > const &y, C< T > const &z)
 Return the minimum component-wise values of 3 inputs. More...
 
template<typename T >
GLM_FUNC_DECL T min (T const &x, T const &y, T const &z, T const &w)
 Return the minimum component-wise values of 4 inputs. More...
 
template<typename T , template< typename > class C>
GLM_FUNC_DECL C< T > min (C< T > const &x, typename C< T >::T const &y, typename C< T >::T const &z, typename C< T >::T const &w)
 Return the minimum component-wise values of 4 inputs. More...
 
template<typename T , template< typename > class C>
GLM_FUNC_DECL C< T > min (C< T > const &x, C< T > const &y, C< T > const &z, C< T > const &w)
 Return the minimum component-wise values of 4 inputs. More...
 

Detailed Description

Include <glm/gtx/extented_min_max.hpp> to use the features of this extension.

Min and max functions for 3 to 4 parameters.

Function Documentation

GLM_FUNC_DECL genType glm::fclamp ( genType  x,
genType  minVal,
genType  maxVal 
)

Returns min(max(x, minVal), maxVal) for each component in x.

If one of the two arguments is NaN, the value of the other argument is returned.

Template Parameters
genType: Floating-point scalar or vector types.
See also
gtx_extented_min_max
GLM_FUNC_DECL vec<L, T, Q> glm::fclamp ( vec< L, T, Q > const &  x,
T  minVal,
T  maxVal 
)

Returns min(max(x, minVal), maxVal) for each component in x.

If one of the two arguments is NaN, the value of the other argument is returned.

Template Parameters
L: Integer between 1 and 4 (inclusive) that qualifies the dimension of the vector
T: Floating-point scalar types
Q: Value from qualifier enum
See also
gtx_extented_min_max
GLM_FUNC_DECL vec<L, T, Q> glm::fclamp ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  minVal,
vec< L, T, Q > const &  maxVal 
)

Returns min(max(x, minVal), maxVal) for each component in x.

If one of the two arguments is NaN, the value of the other argument is returned.

Template Parameters
L: Integer between 1 and 4 (inclusive) that qualifies the dimension of the vector
T: Floating-point scalar types
Q: Value from qualifier enum
See also
gtx_extented_min_max
GLM_FUNC_DECL genType glm::fmax ( genType  x,
genType  y 
)

Returns y if x < y; otherwise, it returns x.

If one of the two arguments is NaN, the value of the other argument is returned.

Template Parameters
genType: Floating-point; scalar or vector types.
See also
gtx_extented_min_max
std::fmax documentation
GLM_FUNC_DECL genType glm::fmin ( genType  x,
genType  y 
)

Returns y if y < x; otherwise, it returns x.

If one of the two arguments is NaN, the value of the other argument is returned.

Template Parameters
genType: Floating-point or integer scalar or vector types.
See also
gtx_extented_min_max
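The NaN behaviour described for fclamp, fmin and fmax above matches that of std::fmin/std::fmax, which return the non-NaN operand when exactly one input is NaN. A standalone sketch of the scalar fclamp semantics (plain C++, not GLM itself):

```cpp
#include <cmath>

// Sketch of the documented scalar semantics:
//   fclamp(x, minVal, maxVal) == fmin(fmax(x, minVal), maxVal)
// std::fmax/std::fmin already return the non-NaN argument when exactly
// one input is NaN, which is the behaviour the documentation describes.
inline float fclamp_sketch(float x, float minVal, float maxVal) {
    return std::fmin(std::fmax(x, minVal), maxVal);
}
```

With x = NaN, std::fmax(NaN, minVal) yields minVal, so the sketch returns the clamp of minVal; whether GLM's full fclamp propagates NaN identically is not stated here, only the per-step behaviour the documentation describes.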
GLM_FUNC_DECL T glm::max ( T const &  x,
T const &  y,
T const &  z 
)

Return the maximum component-wise values of 3 inputs.

See also
gtx_extented_min_max
GLM_FUNC_DECL C<T> glm::max ( C< T > const &  x,
typename C< T >::T const &  y,
typename C< T >::T const &  z 
)

Return the maximum component-wise values of 3 inputs.

See also
gtx_extented_min_max
GLM_FUNC_DECL C<T> glm::max ( C< T > const &  x,
C< T > const &  y,
C< T > const &  z 
)

Return the maximum component-wise values of 3 inputs.

See also
gtx_extented_min_max
GLM_FUNC_DECL T glm::max ( T const &  x,
T const &  y,
T const &  z,
T const &  w 
)

Return the maximum component-wise values of 4 inputs.

See also
gtx_extented_min_max
GLM_FUNC_DECL C<T> glm::max ( C< T > const &  x,
typename C< T >::T const &  y,
typename C< T >::T const &  z,
typename C< T >::T const &  w 
)

Return the maximum component-wise values of 4 inputs.

See also
gtx_extented_min_max
GLM_FUNC_DECL C<T> glm::max ( C< T > const &  x,
C< T > const &  y,
C< T > const &  z,
C< T > const &  w 
)

Return the maximum component-wise values of 4 inputs.

See also
gtx_extented_min_max
GLM_FUNC_DECL T glm::min ( T const &  x,
T const &  y,
T const &  z 
)

Return the minimum component-wise values of 3 inputs.

See also
gtx_extented_min_max
GLM_FUNC_DECL C<T> glm::min ( C< T > const &  x,
typename C< T >::T const &  y,
typename C< T >::T const &  z 
)

Return the minimum component-wise values of 3 inputs.

See also
gtx_extented_min_max
GLM_FUNC_DECL C<T> glm::min ( C< T > const &  x,
C< T > const &  y,
C< T > const &  z 
)

Return the minimum component-wise values of 3 inputs.

See also
gtx_extented_min_max
GLM_FUNC_DECL T glm::min ( T const &  x,
T const &  y,
T const &  z,
T const &  w 
)

Return the minimum component-wise values of 4 inputs.

See also
gtx_extented_min_max
GLM_FUNC_DECL C<T> glm::min ( C< T > const &  x,
typename C< T >::T const &  y,
typename C< T >::T const &  z,
typename C< T >::T const &  w 
)

Return the minimum component-wise values of 4 inputs.

See also
gtx_extented_min_max
GLM_FUNC_DECL C<T> glm::min ( C< T > const &  x,
C< T > const &  y,
C< T > const &  z,
C< T > const &  w 
)

Return the minimum component-wise values of 4 inputs.

See also
gtx_extented_min_max
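The 3- and 4-argument min/max overloads above are simple compositions of the 2-argument forms. A minimal sketch of the scalar case using std::min (the vector variants apply the same composition component-wise):

```cpp
#include <algorithm>

// Sketch of the 3-argument scalar min added by this extension.
template <typename T>
T min3(T const& x, T const& y, T const& z) {
    return std::min(std::min(x, y), z);
}

// Sketch of the 4-argument scalar min.
template <typename T>
T min4(T const& x, T const& y, T const& z, T const& w) {
    return std::min(std::min(x, y), std::min(z, w));
}
```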
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00322.html ================================================ 0.9.9 API documentation: GLM_GTX_exterior_product
0.9.9 API documentation
GLM_GTX_exterior_product

Include <glm/gtx/exterior_product.hpp> to use the features of this extension. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL T cross (vec< 2, T, Q > const &v, vec< 2, T, Q > const &u)
 Returns the cross product of v and u. More...
 

Detailed Description

Include <glm/gtx/exterior_product.hpp> to use the features of this extension.

Computes the exterior product (2D cross product) of two vectors.

Function Documentation

GLM_FUNC_DECL T glm::cross ( vec< 2, T, Q > const &  v,
vec< 2, T, Q > const &  u 
)

Returns the cross product of v and u.

Template Parameters
T: Floating-point scalar types
Q: Value from qualifier enum
See also
Exterior product
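The 2D exterior product is a scalar: the signed area of the parallelogram spanned by the two vectors. A standalone sketch of the formula (the `Vec2` struct is a stand-in for glm::vec2, not GLM's type):

```cpp
// Stand-in for glm::vec2, used only for this sketch.
struct Vec2 { float x, y; };

// 2D exterior product: v.x*u.y - v.y*u.x, i.e. the signed area of the
// parallelogram spanned by v and u (positive when u is counter-clockwise
// from v).
inline float cross2(Vec2 const& v, Vec2 const& u) {
    return v.x * u.y - v.y * u.x;
}
```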
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00323.html ================================================ 0.9.9 API documentation: GLM_GTX_fast_exponential
GLM_GTX_fast_exponential

Include <glm/gtx/fast_exponential.hpp> to use the features of this extension. More...

Functions

template<typename T >
GLM_FUNC_DECL T fastExp (T x)
 Faster than the common exp function but less accurate. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fastExp (vec< L, T, Q > const &x)
 Faster than the common exp function but less accurate. More...
 
template<typename T >
GLM_FUNC_DECL T fastExp2 (T x)
 Faster than the common exp2 function but less accurate. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fastExp2 (vec< L, T, Q > const &x)
 Faster than the common exp2 function but less accurate. More...
 
template<typename T >
GLM_FUNC_DECL T fastLog (T x)
 Faster than the common log function but less accurate. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fastLog (vec< L, T, Q > const &x)
 Faster than the common log function but less accurate. More...
 
template<typename T >
GLM_FUNC_DECL T fastLog2 (T x)
 Faster than the common log2 function but less accurate. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fastLog2 (vec< L, T, Q > const &x)
 Faster than the common log2 function but less accurate. More...
 
template<typename genType >
GLM_FUNC_DECL genType fastPow (genType x, genType y)
 Faster than the common pow function but less accurate. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fastPow (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Faster than the common pow function but less accurate. More...
 
template<typename genTypeT , typename genTypeU >
GLM_FUNC_DECL genTypeT fastPow (genTypeT x, genTypeU y)
 Faster than the common pow function but less accurate. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fastPow (vec< L, T, Q > const &x)
 Faster than the common pow function but less accurate. More...
 

Detailed Description

Include <glm/gtx/fast_exponential.hpp> to use the features of this extension.

Fast but less accurate implementations of exponential based functions.

Function Documentation

GLM_FUNC_DECL T glm::fastExp ( T  x)

Faster than the common exp function but less accurate.

See also
GLM_GTX_fast_exponential
GLM_FUNC_DECL vec<L, T, Q> glm::fastExp ( vec< L, T, Q > const &  x)

Faster than the common exp function but less accurate.

See also
GLM_GTX_fast_exponential
GLM_FUNC_DECL T glm::fastExp2 ( T  x)

Faster than the common exp2 function but less accurate.

See also
GLM_GTX_fast_exponential
GLM_FUNC_DECL vec<L, T, Q> glm::fastExp2 ( vec< L, T, Q > const &  x)

Faster than the common exp2 function but less accurate.

See also
GLM_GTX_fast_exponential
GLM_FUNC_DECL T glm::fastLog ( T  x)

Faster than the common log function but less accurate.

See also
GLM_GTX_fast_exponential
GLM_FUNC_DECL vec<L, T, Q> glm::fastLog ( vec< L, T, Q > const &  x)

Faster than the common log function but less accurate.

See also
GLM_GTX_fast_exponential
GLM_FUNC_DECL T glm::fastLog2 ( T  x)

Faster than the common log2 function but less accurate.

See also
GLM_GTX_fast_exponential
GLM_FUNC_DECL vec<L, T, Q> glm::fastLog2 ( vec< L, T, Q > const &  x)

Faster than the common log2 function but less accurate.

See also
GLM_GTX_fast_exponential
GLM_FUNC_DECL genType glm::fastPow ( genType  x,
genType  y 
)

Faster than the common pow function but less accurate.

See also
GLM_GTX_fast_exponential
GLM_FUNC_DECL vec<L, T, Q> glm::fastPow ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  y 
)

Faster than the common pow function but less accurate.

See also
GLM_GTX_fast_exponential
GLM_FUNC_DECL genTypeT glm::fastPow ( genTypeT  x,
genTypeU  y 
)

Faster than the common pow function but less accurate.

See also
GLM_GTX_fast_exponential
GLM_FUNC_DECL vec<L, T, Q> glm::fastPow ( vec< L, T, Q > const &  x)

Faster than the common pow function but less accurate.

See also
GLM_GTX_fast_exponential
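GLM builds fastPow from its fast exp/log pair. A standalone sketch of the identity involved, with the standard library functions standing in for GLM's internal polynomial approximations (which are not reproduced here):

```cpp
#include <cmath>

// fastPow-style identity: x^y = exp(y * log(x)) for x > 0. GLM's fast
// variants replace std::exp/std::log with cheap approximations; the
// standard functions stand in for them in this sketch.
inline float pow_via_exp_log(float x, float y) {
    return std::exp(y * std::log(x));
}
```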
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00324.html ================================================ 0.9.9 API documentation: GLM_GTX_fast_square_root
GLM_GTX_fast_square_root

Include <glm/gtx/fast_square_root.hpp> to use the features of this extension. More...

Functions

template<typename genType >
GLM_FUNC_DECL genType fastDistance (genType x, genType y)
 Faster than the common distance function but less accurate. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T fastDistance (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Faster than the common distance function but less accurate. More...
 
template<typename genType >
GLM_FUNC_DECL genType fastInverseSqrt (genType x)
 Faster than the common inversesqrt function but less accurate. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fastInverseSqrt (vec< L, T, Q > const &x)
 Faster than the common inversesqrt function but less accurate. More...
 
template<typename genType >
GLM_FUNC_DECL genType fastLength (genType x)
 Faster than the common length function but less accurate. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T fastLength (vec< L, T, Q > const &x)
 Faster than the common length function but less accurate. More...
 
template<typename genType >
GLM_FUNC_DECL genType fastNormalize (genType const &x)
 Faster than the common normalize function but less accurate. More...
 
template<typename genType >
GLM_FUNC_DECL genType fastSqrt (genType x)
 Faster than the common sqrt function but less accurate. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > fastSqrt (vec< L, T, Q > const &x)
 Faster than the common sqrt function but less accurate. More...
 

Detailed Description

Include <glm/gtx/fast_square_root.hpp> to use the features of this extension.

Fast but less accurate implementations of square root based functions.

  • Sqrt optimisation based on Newton's method, www.gamedev.net/community/forums/topic.asp?topic_id=139956

Function Documentation

GLM_FUNC_DECL genType glm::fastDistance ( genType  x,
genType  y 
)

Faster than the common distance function but less accurate.

See also
GLM_GTX_fast_square_root extension.
GLM_FUNC_DECL T glm::fastDistance ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  y 
)

Faster than the common distance function but less accurate.

See also
GLM_GTX_fast_square_root extension.
GLM_FUNC_DECL genType glm::fastInverseSqrt ( genType  x)

Faster than the common inversesqrt function but less accurate.

See also
GLM_GTX_fast_square_root extension.
GLM_FUNC_DECL vec<L, T, Q> glm::fastInverseSqrt ( vec< L, T, Q > const &  x)

Faster than the common inversesqrt function but less accurate.

See also
GLM_GTX_fast_square_root extension.
GLM_FUNC_DECL genType glm::fastLength ( genType  x)

Faster than the common length function but less accurate.

See also
GLM_GTX_fast_square_root extension.
GLM_FUNC_DECL T glm::fastLength ( vec< L, T, Q > const &  x)

Faster than the common length function but less accurate.

See also
GLM_GTX_fast_square_root extension.
GLM_FUNC_DECL genType glm::fastNormalize ( genType const &  x)

Faster than the common normalize function but less accurate.

See also
GLM_GTX_fast_square_root extension.
GLM_FUNC_DECL genType glm::fastSqrt ( genType  x)

Faster than the common sqrt function but less accurate.

See also
GLM_GTX_fast_square_root extension.
GLM_FUNC_DECL vec<L, T, Q> glm::fastSqrt ( vec< L, T, Q > const &  x)

Faster than the common sqrt function but less accurate.

See also
GLM_GTX_fast_square_root extension.
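One well-known technique in this family is the bit-level inverse square root approximation with a single Newton refinement step. The sketch below illustrates the speed/accuracy trade the extension describes; it is not claimed to be GLM's actual fastInverseSqrt implementation, which may differ:

```cpp
#include <cstdint>
#include <cstring>

// Classic bit-trick approximation of 1/sqrt(x) with one Newton
// iteration. Relative error after one iteration is roughly 0.2%.
inline float fast_inverse_sqrt(float x) {
    float half = 0.5f * x;
    std::uint32_t i;
    std::memcpy(&i, &x, sizeof(i));    // reinterpret the float's bits safely
    i = 0x5f3759df - (i >> 1);         // magic-constant initial guess
    float y;
    std::memcpy(&y, &i, sizeof(y));
    y = y * (1.5f - half * y * y);     // one Newton refinement step
    return y;
}
```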
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00325.html ================================================ 0.9.9 API documentation: GLM_GTX_fast_trigonometry
GLM_GTX_fast_trigonometry

Include <glm/gtx/fast_trigonometry.hpp> to use the features of this extension. More...

Functions

template<typename T >
GLM_FUNC_DECL T fastAcos (T angle)
 Faster than the common acos function but less accurate. More...
 
template<typename T >
GLM_FUNC_DECL T fastAsin (T angle)
 Faster than the common asin function but less accurate. More...
 
template<typename T >
GLM_FUNC_DECL T fastAtan (T y, T x)
 Faster than the common atan function but less accurate. More...
 
template<typename T >
GLM_FUNC_DECL T fastAtan (T angle)
 Faster than the common atan function but less accurate. More...
 
template<typename T >
GLM_FUNC_DECL T fastCos (T angle)
 Faster than the common cos function but less accurate. More...
 
template<typename T >
GLM_FUNC_DECL T fastSin (T angle)
 Faster than the common sin function but less accurate. More...
 
template<typename T >
GLM_FUNC_DECL T fastTan (T angle)
 Faster than the common tan function but less accurate. More...
 
template<typename T >
GLM_FUNC_DECL T wrapAngle (T angle)
 Wrap an angle to [0, 2pi[. From GLM_GTX_fast_trigonometry extension. More...
 

Detailed Description

Include <glm/gtx/fast_trigonometry.hpp> to use the features of this extension.

Fast but less accurate implementations of trigonometric functions.

Function Documentation

GLM_FUNC_DECL T glm::fastAcos ( T  angle)

Faster than the common acos function but less accurate.

Defined between -2pi and 2pi. From GLM_GTX_fast_trigonometry extension.

GLM_FUNC_DECL T glm::fastAsin ( T  angle)

Faster than the common asin function but less accurate.

Defined between -2pi and 2pi. From GLM_GTX_fast_trigonometry extension.

GLM_FUNC_DECL T glm::fastAtan ( T  y,
T  x 
)

Faster than the common atan function but less accurate.

Defined between -2pi and 2pi. From GLM_GTX_fast_trigonometry extension.

GLM_FUNC_DECL T glm::fastAtan ( T  angle)

Faster than the common atan function but less accurate.

Defined between -2pi and 2pi. From GLM_GTX_fast_trigonometry extension.

GLM_FUNC_DECL T glm::fastCos ( T  angle)

Faster than the common cos function but less accurate.

From GLM_GTX_fast_trigonometry extension.

GLM_FUNC_DECL T glm::fastSin ( T  angle)

Faster than the common sin function but less accurate.

From GLM_GTX_fast_trigonometry extension.

GLM_FUNC_DECL T glm::fastTan ( T  angle)

Faster than the common tan function but less accurate.

Defined between -2pi and 2pi. From GLM_GTX_fast_trigonometry extension.

GLM_FUNC_DECL T glm::wrapAngle ( T  angle)

Wrap an angle to [0, 2pi[. From GLM_GTX_fast_trigonometry extension.
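wrapAngle's contract can be sketched with std::fmod. Note that std::fmod keeps the sign of its first argument, so negative inputs need one additive correction to land in [0, 2pi[ (a standalone sketch of the documented contract, not GLM's implementation):

```cpp
#include <cmath>

// Map any angle into the half-open range [0, 2*pi[.
inline float wrap_angle(float angle) {
    const float two_pi = 6.28318530717958647692f;
    float a = std::fmod(angle, two_pi);       // result has angle's sign
    return a < 0.0f ? a + two_pi : a;         // shift negatives into range
}
```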

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00326.html ================================================ 0.9.9 API documentation: GLM_GTX_functions
GLM_GTX_functions

Include <glm/gtx/functions.hpp> to use the features of this extension. More...

Functions

template<typename T >
GLM_FUNC_DECL T gauss (T x, T ExpectedValue, T StandardDeviation)
 1D gauss function More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T gauss (vec< 2, T, Q > const &Coord, vec< 2, T, Q > const &ExpectedValue, vec< 2, T, Q > const &StandardDeviation)
 2D gauss function More...
 

Detailed Description

Include <glm/gtx/functions.hpp> to use the features of this extension.

List of useful common functions.

Function Documentation

GLM_FUNC_DECL T glm::gauss ( T  x,
T  ExpectedValue,
T  StandardDeviation 
)

1D gauss function

See also
GLM_GTC_epsilon
GLM_FUNC_DECL T glm::gauss ( vec< 2, T, Q > const &  Coord,
vec< 2, T, Q > const &  ExpectedValue,
vec< 2, T, Q > const &  StandardDeviation 
)

2D gauss function

See also
GLM_GTC_epsilon
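The 1D Gaussian above evaluates the familiar bell curve. The sketch below assumes the unnormalised form with peak value 1 at x == ExpectedValue; whether GLM additionally divides by the normalising constant is not stated here and should be checked against the implementation:

```cpp
#include <cmath>

// 1D Gaussian sketch (unnormalised form, an assumption):
//   gauss(x, mu, sigma) = exp(-(x - mu)^2 / (2 * sigma^2))
inline float gauss1(float x, float expectedValue, float standardDeviation) {
    float d = x - expectedValue;
    return std::exp(-(d * d) / (2.0f * standardDeviation * standardDeviation));
}
```

The 2D overload applies the same formula per component and multiplies (or sums the exponents), with vec2 arguments for coordinate, mean and deviation.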
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00327.html ================================================ 0.9.9 API documentation: GLM_GTX_gradient_paint
GLM_GTX_gradient_paint

Include <glm/gtx/gradient_paint.hpp> to use the features of this extension. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL T linearGradient (vec< 2, T, Q > const &Point0, vec< 2, T, Q > const &Point1, vec< 2, T, Q > const &Position)
 Return a color from a linear gradient. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T radialGradient (vec< 2, T, Q > const &Center, T const &Radius, vec< 2, T, Q > const &Focal, vec< 2, T, Q > const &Position)
 Return a color from a radial gradient. More...
 

Detailed Description

Include <glm/gtx/gradient_paint.hpp> to use the features of this extension.

Functions that return the color of procedural gradient for specific coordinates.

Function Documentation

GLM_FUNC_DECL T glm::linearGradient ( vec< 2, T, Q > const &  Point0,
vec< 2, T, Q > const &  Point1,
vec< 2, T, Q > const &  Position 
)

Return a color from a linear gradient.

See also
- GLM_GTX_gradient_paint
GLM_FUNC_DECL T glm::radialGradient ( vec< 2, T, Q > const &  Center,
T const &  Radius,
vec< 2, T, Q > const &  Focal,
vec< 2, T, Q > const &  Position 
)

Return a color from a radial gradient.

See also
- GLM_GTX_gradient_paint
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00328.html ================================================ 0.9.9 API documentation: GLM_GTX_handed_coordinate_space
GLM_GTX_handed_coordinate_space

Include <glm/gtx/handed_coordinate_system.hpp> to use the features of this extension. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL bool leftHanded (vec< 3, T, Q > const &tangent, vec< 3, T, Q > const &binormal, vec< 3, T, Q > const &normal)
 Return whether a trihedron is left-handed. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL bool rightHanded (vec< 3, T, Q > const &tangent, vec< 3, T, Q > const &binormal, vec< 3, T, Q > const &normal)
 Return whether a trihedron is right-handed. More...
 

Detailed Description

Include <glm/gtx/handed_coordinate_system.hpp> to use the features of this extension.

Determine whether a set of three basis vectors defines a right- or left-handed coordinate system.

Function Documentation

GLM_FUNC_DECL bool glm::leftHanded ( vec< 3, T, Q > const &  tangent,
vec< 3, T, Q > const &  binormal,
vec< 3, T, Q > const &  normal 
)

Return whether a trihedron is left-handed.

From GLM_GTX_handed_coordinate_space extension.

GLM_FUNC_DECL bool glm::rightHanded ( vec< 3, T, Q > const &  tangent,
vec< 3, T, Q > const &  binormal,
vec< 3, T, Q > const &  normal 
)

Return whether a trihedron is right-handed.

From GLM_GTX_handed_coordinate_space extension.
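The handedness test reduces to a triple product: the trihedron (tangent, binormal, normal) is right-handed when the normal lies on the same side as tangent × binormal. A standalone sketch (the `Vec3` struct is a stand-in for glm::vec3):

```cpp
// Stand-in for glm::vec3, used only for this sketch.
struct Vec3 { float x, y, z; };

// Standard 3D cross product.
inline Vec3 cross3(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

inline float dot3(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Right-handed when dot(cross(tangent, binormal), normal) > 0.
inline bool right_handed(Vec3 tangent, Vec3 binormal, Vec3 normal) {
    return dot3(cross3(tangent, binormal), normal) > 0.0f;
}
```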

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00329.html ================================================ 0.9.9 API documentation: GLM_GTX_hash
GLM_GTX_hash

Include <glm/gtx/hash.hpp> to use the features of this extension. More...

Include <glm/gtx/hash.hpp> to use the features of this extension.

Add std::hash support for glm types
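Specialising std::hash for a vector type boils down to hashing each component and folding the results together. A standalone sketch of that hash-combine idiom (the mixing constant below is the common Boost-style one; GLM's exact mixing is an assumption, not reproduced from its source):

```cpp
#include <cstddef>
#include <functional>

// Boost-style hash combine: fold one component hash into the seed.
inline void hash_combine(std::size_t& seed, std::size_t value) {
    seed ^= value + 0x9e3779b9 + (seed << 6) + (seed >> 2);
}

// Hash a hypothetical 3-component vector by combining per-component
// std::hash<float> results, as an std::hash<glm::vec3> would.
inline std::size_t hash_vec3(float x, float y, float z) {
    std::size_t seed = 0;
    std::hash<float> h;
    hash_combine(seed, h(x));
    hash_combine(seed, h(y));
    hash_combine(seed, h(z));
    return seed;
}
```

With the real extension included, glm vectors can be used directly as keys in std::unordered_map / std::unordered_set.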

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00330.html ================================================ 0.9.9 API documentation: GLM_GTX_integer
GLM_GTX_integer

Include <glm/gtx/integer.hpp> to use the features of this extension. More...

Typedefs

typedef signed int sint
 32bit signed integer. More...
 

Functions

template<typename genType >
GLM_FUNC_DECL genType factorial (genType const &x)
 Return the factorial value of a number (12! max, integer only). From GLM_GTX_integer extension. More...
 
GLM_FUNC_DECL unsigned int floor_log2 (unsigned int x)
 Returns the floor log2 of x. More...
 
GLM_FUNC_DECL int mod (int x, int y)
 Modulus. More...
 
GLM_FUNC_DECL uint mod (uint x, uint y)
 Modulus. More...
 
GLM_FUNC_DECL uint nlz (uint x)
 Returns the number of leading zeros. More...
 
GLM_FUNC_DECL int pow (int x, uint y)
 Returns x raised to the y power. More...
 
GLM_FUNC_DECL uint pow (uint x, uint y)
 Returns x raised to the y power. More...
 
GLM_FUNC_DECL int sqrt (int x)
 Returns the positive square root of x. More...
 
GLM_FUNC_DECL uint sqrt (uint x)
 Returns the positive square root of x. More...
 

Detailed Description

Include <glm/gtx/integer.hpp> to use the features of this extension.

Adds integer support for core functions

Typedef Documentation

typedef signed int sint

32bit signed integer.

From GLM_GTX_integer extension.

Definition at line 55 of file gtx/integer.hpp.

Function Documentation

GLM_FUNC_DECL genType glm::factorial ( genType const &  x)

Return the factorial value of a number (12! max, integer only). From GLM_GTX_integer extension.

GLM_FUNC_DECL unsigned int glm::floor_log2 ( unsigned int  x)

Returns the floor log2 of x.

From GLM_GTX_integer extension.

GLM_FUNC_DECL int glm::mod ( int  x,
int  y 
)

Modulus.

Returns x - y * floor(x / y) for the integer values x and y. From GLM_GTX_integer extension.

GLM_FUNC_DECL uint glm::mod ( uint  x,
uint  y 
)

Modulus.

Returns x - y * floor(x / y) for the unsigned integer values x and y. From GLM_GTX_integer extension.

GLM_FUNC_DECL uint glm::nlz ( uint  x)

Returns the number of leading zeros.

From GLM_GTX_integer extension.

GLM_FUNC_DECL int glm::pow ( int  x,
uint  y 
)

Returns x raised to the y power.

From GLM_GTX_integer extension.

GLM_FUNC_DECL uint glm::pow ( uint  x,
uint  y 
)

Returns x raised to the y power.

From GLM_GTX_integer extension.

GLM_FUNC_DECL int glm::sqrt ( int  x)

Returns the positive square root of x.

From GLM_GTX_integer extension.

GLM_FUNC_DECL uint glm::sqrt ( uint  x)

Returns the positive square root of x.

From GLM_GTX_integer extension.
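Two of the integer helpers above are easy to sketch in plain C++ (standalone sketches of the documented behaviour, not GLM's code). Note that 13! already overflows a signed 32-bit int, which is why the factorial documentation caps the input at 12:

```cpp
// factorial: product 1*2*...*x; valid for 0 <= x <= 12 in 32-bit int.
inline int factorial_sketch(int x) {
    int r = 1;
    for (int i = 2; i <= x; ++i)
        r *= i;
    return r;
}

// floor_log2: index of the highest set bit, i.e. floor(log2(x)) for x > 0.
inline unsigned int floor_log2_sketch(unsigned int x) {
    unsigned int r = 0;
    while (x >>= 1)
        ++r;
    return r;
}
```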

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00331.html ================================================ 0.9.9 API documentation: GLM_GTX_intersect
GLM_GTX_intersect

Include <glm/gtx/intersect.hpp> to use the features of this extension. More...

Functions

template<typename genType >
GLM_FUNC_DECL bool intersectLineSphere (genType const &point0, genType const &point1, genType const &sphereCenter, typename genType::value_type sphereRadius, genType &intersectionPosition1, genType &intersectionNormal1, genType &intersectionPosition2=genType(), genType &intersectionNormal2=genType())
 Compute the intersection of a line and a sphere. More...
 
template<typename genType >
GLM_FUNC_DECL bool intersectLineTriangle (genType const &orig, genType const &dir, genType const &vert0, genType const &vert1, genType const &vert2, genType &position)
 Compute the intersection of a line and a triangle. More...
 
template<typename genType >
GLM_FUNC_DECL bool intersectRayPlane (genType const &orig, genType const &dir, genType const &planeOrig, genType const &planeNormal, typename genType::value_type &intersectionDistance)
 Compute the intersection of a ray and a plane. More...
 
template<typename genType >
GLM_FUNC_DECL bool intersectRaySphere (genType const &rayStarting, genType const &rayNormalizedDirection, genType const &sphereCenter, typename genType::value_type const sphereRadiusSquared, typename genType::value_type &intersectionDistance)
 Compute the intersection distance of a ray and a sphere. More...
 
template<typename genType >
GLM_FUNC_DECL bool intersectRaySphere (genType const &rayStarting, genType const &rayNormalizedDirection, genType const &sphereCenter, const typename genType::value_type sphereRadius, genType &intersectionPosition, genType &intersectionNormal)
 Compute the intersection of a ray and a sphere. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL bool intersectRayTriangle (vec< 3, T, Q > const &orig, vec< 3, T, Q > const &dir, vec< 3, T, Q > const &v0, vec< 3, T, Q > const &v1, vec< 3, T, Q > const &v2, vec< 2, T, Q > &baryPosition, T &distance)
 Compute the intersection of a ray and a triangle. More...
 

Detailed Description

Include <glm/gtx/intersect.hpp> to use the features of this extension.

Add intersection functions

Function Documentation

GLM_FUNC_DECL bool glm::intersectLineSphere ( genType const &  point0,
genType const &  point1,
genType const &  sphereCenter,
typename genType::value_type  sphereRadius,
genType &  intersectionPosition1,
genType &  intersectionNormal1,
genType &  intersectionPosition2 = genType(),
genType &  intersectionNormal2 = genType() 
)

Compute the intersection of a line and a sphere.

From GLM_GTX_intersect extension

GLM_FUNC_DECL bool glm::intersectLineTriangle ( genType const &  orig,
genType const &  dir,
genType const &  vert0,
genType const &  vert1,
genType const &  vert2,
genType &  position 
)

Compute the intersection of a line and a triangle.

From GLM_GTX_intersect extension.

GLM_FUNC_DECL bool glm::intersectRayPlane ( genType const &  orig,
genType const &  dir,
genType const &  planeOrig,
genType const &  planeNormal,
typename genType::value_type &  intersectionDistance 
)

Compute the intersection of a ray and a plane.

Ray direction and plane normal must be unit length. From GLM_GTX_intersect extension.

GLM_FUNC_DECL bool glm::intersectRaySphere ( genType const &  rayStarting,
genType const &  rayNormalizedDirection,
genType const &  sphereCenter,
typename genType::value_type const  sphereRadiusSquared,
typename genType::value_type &  intersectionDistance 
)

Compute the intersection distance of a ray and a sphere.

The ray direction vector is unit length. From GLM_GTX_intersect extension.

GLM_FUNC_DECL bool glm::intersectRaySphere ( genType const &  rayStarting,
genType const &  rayNormalizedDirection,
genType const &  sphereCenter,
const typename genType::value_type  sphereRadius,
genType &  intersectionPosition,
genType &  intersectionNormal 
)

Compute the intersection of a ray and a sphere.

From GLM_GTX_intersect extension.

GLM_FUNC_DECL bool glm::intersectRayTriangle ( vec< 3, T, Q > const &  orig,
vec< 3, T, Q > const &  dir,
vec< 3, T, Q > const &  v0,
vec< 3, T, Q > const &  v1,
vec< 3, T, Q > const &  v2,
vec< 2, T, Q > &  baryPosition,
T &  distance 
)

Compute the intersection of a ray and a triangle.

Based on Tomas Möller's implementation http://fileadmin.cs.lth.se/cs/Personal/Tomas_Akenine-Moller/raytri/ From GLM_GTX_intersect extension.
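The simplest of the tests above is intersectRayPlane. A standalone sketch of its geometry, assuming unit-length dir and planeNormal as the documentation requires (`V3` is a stand-in vector type; GLM's exact epsilon and facing handling may differ):

```cpp
// Stand-in for a glm vector type, used only for this sketch.
struct V3 { float x, y, z; };

inline float dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// t solves dot(orig + t*dir - planeOrig, planeNormal) == 0:
//   t = dot(planeOrig - orig, planeNormal) / dot(dir, planeNormal)
inline bool intersect_ray_plane(V3 orig, V3 dir, V3 planeOrig, V3 planeNormal,
                                float& intersectionDistance) {
    float denom = dot(dir, planeNormal);
    if (denom == 0.0f)               // ray parallel to the plane
        return false;
    V3 diff{planeOrig.x - orig.x, planeOrig.y - orig.y, planeOrig.z - orig.z};
    float t = dot(diff, planeNormal) / denom;
    if (t < 0.0f)                    // plane lies behind the ray origin
        return false;
    intersectionDistance = t;
    return true;
}

// Convenience wrapper used below: hit distance, or -1 on a miss.
inline float hit_distance(V3 orig, V3 dir, V3 planeOrig, V3 planeNormal) {
    float t = 0.0f;
    return intersect_ray_plane(orig, dir, planeOrig, planeNormal, t) ? t : -1.0f;
}
```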

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00332.html ================================================ 0.9.9 API documentation: GLM_GTX_io
GLM_GTX_io

Include <glm/gtx/io.hpp> to use the features of this extension. More...

Detailed Description

Include <glm/gtx/io.hpp> to use the features of this extension.

std::[w]ostream support for glm types

std::[w]ostream support for glm types + qualifier/width/etc. manipulators based on Howard Hinnant's std::chrono io proposal [http://home.roadrunner.com/~hinnant/bloomington/chrono_io.html]

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00333.html ================================================ 0.9.9 API documentation: GLM_GTX_log_base
GLM_GTX_log_base

Include <glm/gtx/log_base.hpp> to use the features of this extension. More...

Functions

template<typename genType >
GLM_FUNC_DECL genType log (genType const &x, genType const &base)
 Logarithm for any base. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > sign (vec< L, T, Q > const &x, vec< L, T, Q > const &base)
 Logarithm for any base. More...
 

Detailed Description

Include <glm/gtx/log_base.hpp> to use the features of this extension.

Logarithm for any base. base can be a vector or a scalar.

Function Documentation

GLM_FUNC_DECL genType glm::log ( genType const &  x,
genType const &  base 
)

Logarithm for any base.

From GLM_GTX_log_base.

GLM_FUNC_DECL vec<L, T, Q> glm::sign ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  base 
)

Logarithm for any base.

From GLM_GTX_log_base.
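The arbitrary-base logarithm above is the change-of-base identity; a standalone sketch of the scalar case (the vector overload applies it component-wise):

```cpp
#include <cmath>

// log(x, base) = log(x) / log(base), valid for x > 0 and base > 0, base != 1.
inline float log_any_base(float x, float base) {
    return std::log(x) / std::log(base);
}
```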

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00334.html ================================================ 0.9.9 API documentation: GLM_GTX_matrix_cross_product
GLM_GTX_matrix_cross_product

Include <glm/gtx/matrix_cross_product.hpp> to use the features of this extension. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 3, T, Q > matrixCross3 (vec< 3, T, Q > const &x)
 Build a cross product matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > matrixCross4 (vec< 3, T, Q > const &x)
 Build a cross product matrix. More...
 

Detailed Description

Include <glm/gtx/matrix_cross_product.hpp> to use the features of this extension.

Build cross product matrices

Function Documentation

GLM_FUNC_DECL mat<3, 3, T, Q> glm::matrixCross3 ( vec< 3, T, Q > const &  x)

Build a cross product matrix.

From GLM_GTX_matrix_cross_product extension.

GLM_FUNC_DECL mat<4, 4, T, Q> glm::matrixCross4 ( vec< 3, T, Q > const &  x)

Build a cross product matrix.

From GLM_GTX_matrix_cross_product extension.
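A cross-product matrix is the skew-symmetric matrix M built from a vector x such that M * v equals the cross product with v. A standalone sketch of the 3x3 case (row-major stand-in types; the operand-order convention, cross(x, v) rather than cross(v, x), is an assumption here):

```cpp
// Row-major stand-in for glm::mat3, used only for this sketch.
struct M3 { float m[3][3]; };

// Skew-symmetric matrix of (x, y, z): M * v == cross((x,y,z), v).
inline M3 matrix_cross3(float x, float y, float z) {
    return {{{ 0.0f, -z,    y   },
             { z,     0.0f, -x  },
             { -y,    x,    0.0f }}};
}

// Helper used below: one component of cross((x,y,z), v) computed via the
// matrix, to check the defining property.
inline float cross_via_matrix(float x, float y, float z,
                              float vx, float vy, float vz, int row) {
    M3 m = matrix_cross3(x, y, z);
    const float v[3] = {vx, vy, vz};
    return m.m[row][0] * v[0] + m.m[row][1] * v[1] + m.m[row][2] * v[2];
}
```

matrixCross4 embeds the same 3x3 block in the upper-left of a 4x4 matrix.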

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00335.html ================================================ 0.9.9 API documentation: GLM_GTX_matrix_decompose
GLM_GTX_matrix_decompose

Include <glm/gtx/matrix_decompose.hpp> to use the features of this extension. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL bool decompose (mat< 4, 4, T, Q > const &modelMatrix, vec< 3, T, Q > &scale, qua< T, Q > &orientation, vec< 3, T, Q > &translation, vec< 3, T, Q > &skew, vec< 4, T, Q > &perspective)
 Decomposes a model matrix into translation, rotation and scale components. More...
 

Detailed Description

Include <glm/gtx/matrix_decompose.hpp> to use the features of this extension.

Decomposes a model matrix into translation, rotation and scale components

Function Documentation

GLM_FUNC_DECL bool glm::decompose ( mat< 4, 4, T, Q > const &  modelMatrix,
vec< 3, T, Q > &  scale,
qua< T, Q > &  orientation,
vec< 3, T, Q > &  translation,
vec< 3, T, Q > &  skew,
vec< 4, T, Q > &  perspective 
)

Decomposes a model matrix into translation, rotation and scale components.

See also
GLM_GTX_matrix_decompose
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00336.html ================================================ 0.9.9 API documentation: GLM_GTX_matrix_factorisation
GLM_GTX_matrix_factorisation

Include <glm/gtx/matrix_factorisation.hpp> to use the features of this extension. More...

Functions

template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL mat< C, R, T, Q > fliplr (mat< C, R, T, Q > const &in)
 Flips the matrix columns right and left. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL mat< C, R, T, Q > flipud (mat< C, R, T, Q > const &in)
 Flips the matrix rows up and down. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL void qr_decompose (mat< C, R, T, Q > const &in, mat<(C< R?C:R), R, T, Q > &q, mat< C,(C< R?C:R), T, Q > &r)
 Performs QR factorisation of a matrix. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL void rq_decompose (mat< C, R, T, Q > const &in, mat<(C< R?C:R), R, T, Q > &r, mat< C,(C< R?C:R), T, Q > &q)
 Performs RQ factorisation of a matrix. More...
 

Detailed Description

Include <glm/gtx/matrix_factorisation.hpp> to use the features of this extension.

Functions to factor matrices in various forms

Function Documentation

GLM_FUNC_DECL mat<C, R, T, Q> glm::fliplr ( mat< C, R, T, Q > const &  in)

Flips the matrix columns right and left.

From GLM_GTX_matrix_factorisation extension.

GLM_FUNC_DECL mat<C, R, T, Q> glm::flipud ( mat< C, R, T, Q > const &  in)

Flips the matrix rows up and down.

From GLM_GTX_matrix_factorisation extension.

GLM_FUNC_DECL void glm::qr_decompose ( mat< C, R, T, Q > const &  in,
mat<(C < R ? C : R), R, T, Q > &  q,
mat< C, (C < R ? C : R), T, Q > &  r 
)

Performs QR factorisation of a matrix.

Returns two matrices, q and r, such that the columns of q are orthonormal and span the same subspace as those of the input matrix, r is an upper triangular matrix, and q*r = in. Given an n-by-m input matrix, q has dimensions min(n,m)-by-m, and r has dimensions n-by-min(n,m).

From GLM_GTX_matrix_factorisation extension.

GLM_FUNC_DECL void glm::rq_decompose ( mat< C, R, T, Q > const &  in, mat<(C< R?C:R), R, T, Q > &  r, mat< C,(C< R?C:R), T, Q > &  q )

Performs RQ factorisation of a matrix.

Returns two matrices, r and q, such that r is an upper triangular matrix, the rows of q are orthonormal and span the same subspace as those of the input matrix, and r*q = in. Note that in the context of RQ factorisation, the diagonal is seen as starting in the lower-right corner of the matrix, instead of the usual upper-left. Given an n-by-m input matrix, r has dimensions min(n,m)-by-m, and q has dimensions n-by-min(n,m).

From GLM_GTX_matrix_factorisation extension.
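The q*r = in contract above can be sketched with classical Gram-Schmidt. This standalone C++ fragment uses plain std::array types and a fixed 3x3 size instead of GLM's templates; the names Vec3, Mat3 and qr_decompose are this sketch's own, and matrices are stored column-major as in GLM:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<Vec3, 3>; // column-major: m[c] is column c, as in GLM

double dot(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Classical Gram-Schmidt QR: in = q * r, where q has orthonormal columns
// spanning the same subspace as the input columns and r is upper triangular.
void qr_decompose(const Mat3& in, Mat3& q, Mat3& r) {
    for (int c = 0; c < 3; ++c) {
        Vec3 v = in[c];
        for (int k = 0; k < c; ++k) {
            double p = dot(q[k], in[c]);   // component along an earlier q column
            r[c][k] = p;                   // entry r(k, c), stored column-major
            for (int i = 0; i < 3; ++i) v[i] -= p * q[k][i];
        }
        double len = std::sqrt(dot(v, v)); // assumes a full-rank input (len > 0)
        r[c][c] = len;
        for (int i = 0; i < 3; ++i) q[c][i] = v[i] / len;
        for (int k = c + 1; k < 3; ++k) r[c][k] = 0.0; // below-diagonal zeros
    }
}
```

The orthonormality of q and the reconstruction q*r = in are the two properties worth asserting after a call.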

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00337.html ================================================ 0.9.9 API documentation: GLM_GTX_matrix_interpolation

Include <glm/gtx/matrix_interpolation.hpp> to use the features of this extension. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL void axisAngle (mat< 4, 4, T, Q > const &Mat, vec< 3, T, Q > &Axis, T &Angle)
 Get the axis and angle of the rotation from a matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > axisAngleMatrix (vec< 3, T, Q > const &Axis, T const Angle)
 Build a matrix from axis and angle. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > extractMatrixRotation (mat< 4, 4, T, Q > const &Mat)
 Extracts the rotation part of a matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > interpolate (mat< 4, 4, T, Q > const &m1, mat< 4, 4, T, Q > const &m2, T const Delta)
 Build an interpolation of two 4 * 4 matrices. More...
 

Detailed Description

Include <glm/gtx/matrix_interpolation.hpp> to use the features of this extension.

Allows two matrices to be interpolated directly.

Function Documentation

GLM_FUNC_DECL void glm::axisAngle ( mat< 4, 4, T, Q > const &  Mat,
vec< 3, T, Q > &  Axis,
T &  Angle 
)

Get the axis and angle of the rotation from a matrix.

From GLM_GTX_matrix_interpolation extension.

GLM_FUNC_DECL mat<4, 4, T, Q> glm::axisAngleMatrix ( vec< 3, T, Q > const &  Axis,
T const  Angle 
)

Build a matrix from axis and angle.

From GLM_GTX_matrix_interpolation extension.

GLM_FUNC_DECL mat<4, 4, T, Q> glm::extractMatrixRotation ( mat< 4, 4, T, Q > const &  Mat)

Extracts the rotation part of a matrix.

From GLM_GTX_matrix_interpolation extension.

GLM_FUNC_DECL mat<4, 4, T, Q> glm::interpolate ( mat< 4, 4, T, Q > const &  m1,
mat< 4, 4, T, Q > const &  m2,
T const  Delta 
)

Build an interpolation of two 4 * 4 matrices.

From GLM_GTX_matrix_interpolation extension. Warning: this works only with rotation and/or translation matrices; a scale component will generate unexpected results.
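The axisAngle extraction can be illustrated without GLM: for a proper rotation matrix, the angle follows from the trace and the axis from the antisymmetric part. A minimal standalone sketch (row-major std::array storage, hypothetical names; it assumes the angle is away from 0 and pi so sin(angle) is nonzero):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>; // row-major here: m[row][col]

// Recover axis and angle from a pure rotation matrix: the trace gives the
// angle, and the antisymmetric part m - m^T encodes the axis direction.
void axis_angle(const Mat3& m, Vec3& axis, double& angle) {
    double trace = m[0][0] + m[1][1] + m[2][2];
    angle = std::acos((trace - 1.0) / 2.0);
    double s = 2.0 * std::sin(angle);      // valid while angle is not 0 or pi
    axis = {(m[2][1] - m[1][2]) / s,
            (m[0][2] - m[2][0]) / s,
            (m[1][0] - m[0][1]) / s};
}
```

For a 90-degree rotation about z this recovers angle pi/2 and axis (0, 0, 1).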

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00338.html ================================================ 0.9.9 API documentation: GLM_GTX_matrix_major_storage

Include <glm/gtx/matrix_major_storage.hpp> to use the features of this extension. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 2, 2, T, Q > colMajor2 (vec< 2, T, Q > const &v1, vec< 2, T, Q > const &v2)
 Build a column major matrix from column vectors. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 2, 2, T, Q > colMajor2 (mat< 2, 2, T, Q > const &m)
 Build a column major matrix from other matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 3, T, Q > colMajor3 (vec< 3, T, Q > const &v1, vec< 3, T, Q > const &v2, vec< 3, T, Q > const &v3)
 Build a column major matrix from column vectors. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 3, T, Q > colMajor3 (mat< 3, 3, T, Q > const &m)
 Build a column major matrix from other matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > colMajor4 (vec< 4, T, Q > const &v1, vec< 4, T, Q > const &v2, vec< 4, T, Q > const &v3, vec< 4, T, Q > const &v4)
 Build a column major matrix from column vectors. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > colMajor4 (mat< 4, 4, T, Q > const &m)
 Build a column major matrix from other matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 2, 2, T, Q > rowMajor2 (vec< 2, T, Q > const &v1, vec< 2, T, Q > const &v2)
 Build a row major matrix from row vectors. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 2, 2, T, Q > rowMajor2 (mat< 2, 2, T, Q > const &m)
 Build a row major matrix from other matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 3, T, Q > rowMajor3 (vec< 3, T, Q > const &v1, vec< 3, T, Q > const &v2, vec< 3, T, Q > const &v3)
 Build a row major matrix from row vectors. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 3, T, Q > rowMajor3 (mat< 3, 3, T, Q > const &m)
 Build a row major matrix from other matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > rowMajor4 (vec< 4, T, Q > const &v1, vec< 4, T, Q > const &v2, vec< 4, T, Q > const &v3, vec< 4, T, Q > const &v4)
 Build a row major matrix from row vectors. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > rowMajor4 (mat< 4, 4, T, Q > const &m)
 Build a row major matrix from other matrix. More...
 

Detailed Description

Include <glm/gtx/matrix_major_storage.hpp> to use the features of this extension.

Build matrices with a specific storage order, row-major or column-major.

Function Documentation

GLM_FUNC_DECL mat<2, 2, T, Q> glm::colMajor2 ( vec< 2, T, Q > const &  v1,
vec< 2, T, Q > const &  v2 
)

Build a column major matrix from column vectors.

From GLM_GTX_matrix_major_storage extension.

GLM_FUNC_DECL mat<2, 2, T, Q> glm::colMajor2 ( mat< 2, 2, T, Q > const &  m)

Build a column major matrix from other matrix.

From GLM_GTX_matrix_major_storage extension.

GLM_FUNC_DECL mat<3, 3, T, Q> glm::colMajor3 ( vec< 3, T, Q > const &  v1,
vec< 3, T, Q > const &  v2,
vec< 3, T, Q > const &  v3 
)

Build a column major matrix from column vectors.

From GLM_GTX_matrix_major_storage extension.

GLM_FUNC_DECL mat<3, 3, T, Q> glm::colMajor3 ( mat< 3, 3, T, Q > const &  m)

Build a column major matrix from other matrix.

From GLM_GTX_matrix_major_storage extension.

GLM_FUNC_DECL mat<4, 4, T, Q> glm::colMajor4 ( vec< 4, T, Q > const &  v1,
vec< 4, T, Q > const &  v2,
vec< 4, T, Q > const &  v3,
vec< 4, T, Q > const &  v4 
)

Build a column major matrix from column vectors.

From GLM_GTX_matrix_major_storage extension.

GLM_FUNC_DECL mat<4, 4, T, Q> glm::colMajor4 ( mat< 4, 4, T, Q > const &  m)

Build a column major matrix from other matrix.

From GLM_GTX_matrix_major_storage extension.

GLM_FUNC_DECL mat<2, 2, T, Q> glm::rowMajor2 ( vec< 2, T, Q > const &  v1,
vec< 2, T, Q > const &  v2 
)

Build a row major matrix from row vectors.

From GLM_GTX_matrix_major_storage extension.

GLM_FUNC_DECL mat<2, 2, T, Q> glm::rowMajor2 ( mat< 2, 2, T, Q > const &  m)

Build a row major matrix from other matrix.

From GLM_GTX_matrix_major_storage extension.

GLM_FUNC_DECL mat<3, 3, T, Q> glm::rowMajor3 ( vec< 3, T, Q > const &  v1,
vec< 3, T, Q > const &  v2,
vec< 3, T, Q > const &  v3 
)

Build a row major matrix from row vectors.

From GLM_GTX_matrix_major_storage extension.

GLM_FUNC_DECL mat<3, 3, T, Q> glm::rowMajor3 ( mat< 3, 3, T, Q > const &  m)

Build a row major matrix from other matrix.

From GLM_GTX_matrix_major_storage extension.

GLM_FUNC_DECL mat<4, 4, T, Q> glm::rowMajor4 ( vec< 4, T, Q > const &  v1,
vec< 4, T, Q > const &  v2,
vec< 4, T, Q > const &  v3,
vec< 4, T, Q > const &  v4 
)

Build a row major matrix from row vectors.

From GLM_GTX_matrix_major_storage extension.

GLM_FUNC_DECL mat<4, 4, T, Q> glm::rowMajor4 ( mat< 4, 4, T, Q > const &  m)

Build a row major matrix from other matrix.

From GLM_GTX_matrix_major_storage extension.
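The difference between the colMajor and rowMajor builders is only where the input vectors land in storage. A small illustration with plain std::array 2x2 matrices stored column-major (GLM's convention); the snake_case function names are this sketch's own:

```cpp
#include <array>
#include <cassert>

using Vec2 = std::array<double, 2>;
using Mat2 = std::array<Vec2, 2>; // stored as m[column][row], GLM's convention

// Build a 2x2 matrix whose *columns* are v1 and v2 (the idea behind
// glm::colMajor2): with column-major storage this is just {v1, v2}.
Mat2 col_major2(const Vec2& v1, const Vec2& v2) { return {v1, v2}; }

// Build a 2x2 matrix whose *rows* are v1 and v2 (the idea behind
// glm::rowMajor2): the inputs must be scattered across the stored columns.
Mat2 row_major2(const Vec2& v1, const Vec2& v2) {
    return {Vec2{v1[0], v2[0]}, Vec2{v1[1], v2[1]}};
}
```

The same vectors therefore produce transposed results depending on which builder is used.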

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00339.html ================================================ 0.9.9 API documentation: GLM_GTX_matrix_operation

Include <glm/gtx/matrix_operation.hpp> to use the features of this extension. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 2, 2, T, Q > adjugate (mat< 2, 2, T, Q > const &m)
 Build an adjugate matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 3, T, Q > adjugate (mat< 3, 3, T, Q > const &m)
 Build an adjugate matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > adjugate (mat< 4, 4, T, Q > const &m)
 Build an adjugate matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 2, 2, T, Q > diagonal2x2 (vec< 2, T, Q > const &v)
 Build a diagonal matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 2, 3, T, Q > diagonal2x3 (vec< 2, T, Q > const &v)
 Build a diagonal matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 2, 4, T, Q > diagonal2x4 (vec< 2, T, Q > const &v)
 Build a diagonal matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 2, T, Q > diagonal3x2 (vec< 2, T, Q > const &v)
 Build a diagonal matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 3, T, Q > diagonal3x3 (vec< 3, T, Q > const &v)
 Build a diagonal matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 4, T, Q > diagonal3x4 (vec< 3, T, Q > const &v)
 Build a diagonal matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 2, T, Q > diagonal4x2 (vec< 2, T, Q > const &v)
 Build a diagonal matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 3, T, Q > diagonal4x3 (vec< 3, T, Q > const &v)
 Build a diagonal matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > diagonal4x4 (vec< 4, T, Q > const &v)
 Build a diagonal matrix. More...
 

Detailed Description

Include <glm/gtx/matrix_operation.hpp> to use the features of this extension.

Build diagonal matrices from vectors.

Function Documentation

GLM_FUNC_DECL mat<2, 2, T, Q> glm::adjugate ( mat< 2, 2, T, Q > const &  m)

Build an adjugate matrix.

From GLM_GTX_matrix_operation extension.

GLM_FUNC_DECL mat<3, 3, T, Q> glm::adjugate ( mat< 3, 3, T, Q > const &  m)

Build an adjugate matrix.

From GLM_GTX_matrix_operation extension.

GLM_FUNC_DECL mat<4, 4, T, Q> glm::adjugate ( mat< 4, 4, T, Q > const &  m)

Build an adjugate matrix.

From GLM_GTX_matrix_operation extension.

GLM_FUNC_DECL mat<2, 2, T, Q> glm::diagonal2x2 ( vec< 2, T, Q > const &  v)

Build a diagonal matrix.

From GLM_GTX_matrix_operation extension.

GLM_FUNC_DECL mat<2, 3, T, Q> glm::diagonal2x3 ( vec< 2, T, Q > const &  v)

Build a diagonal matrix.

From GLM_GTX_matrix_operation extension.

GLM_FUNC_DECL mat<2, 4, T, Q> glm::diagonal2x4 ( vec< 2, T, Q > const &  v)

Build a diagonal matrix.

From GLM_GTX_matrix_operation extension.

GLM_FUNC_DECL mat<3, 2, T, Q> glm::diagonal3x2 ( vec< 2, T, Q > const &  v)

Build a diagonal matrix.

From GLM_GTX_matrix_operation extension.

GLM_FUNC_DECL mat<3, 3, T, Q> glm::diagonal3x3 ( vec< 3, T, Q > const &  v)

Build a diagonal matrix.

From GLM_GTX_matrix_operation extension.

GLM_FUNC_DECL mat<3, 4, T, Q> glm::diagonal3x4 ( vec< 3, T, Q > const &  v)

Build a diagonal matrix.

From GLM_GTX_matrix_operation extension.

GLM_FUNC_DECL mat<4, 2, T, Q> glm::diagonal4x2 ( vec< 2, T, Q > const &  v)

Build a diagonal matrix.

From GLM_GTX_matrix_operation extension.

GLM_FUNC_DECL mat<4, 3, T, Q> glm::diagonal4x3 ( vec< 3, T, Q > const &  v)

Build a diagonal matrix.

From GLM_GTX_matrix_operation extension.

GLM_FUNC_DECL mat<4, 4, T, Q> glm::diagonal4x4 ( vec< 4, T, Q > const &  v)

Build a diagonal matrix.

From GLM_GTX_matrix_operation extension.
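For the 2x2 case, the diagonal builder and the adjugate are short enough to write out by hand; the adjugate satisfies m * adjugate(m) = det(m) * I, which makes it easy to test. A standalone sketch with column-major std::array storage (names are this sketch's own):

```cpp
#include <array>
#include <cassert>

using Vec2 = std::array<double, 2>;
using Mat2 = std::array<Vec2, 2>; // column-major: m[col][row]

// Diagonal matrix from a vector (the idea behind glm::diagonal2x2).
Mat2 diagonal2x2(const Vec2& v) {
    return {Vec2{v[0], 0.0}, Vec2{0.0, v[1]}};
}

// Adjugate of a 2x2 matrix [[a, b], [c, d]] -> [[d, -b], [-c, a]];
// it satisfies m * adjugate(m) = det(m) * identity.
Mat2 adjugate(const Mat2& m) {
    return {Vec2{m[1][1], -m[0][1]}, Vec2{-m[1][0], m[0][0]}};
}
```

The higher-dimensional overloads follow the same cofactor pattern with more terms.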

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00340.html ================================================ 0.9.9 API documentation: GLM_GTX_matrix_query

Include <glm/gtx/matrix_query.hpp> to use the features of this extension. More...

Functions

template<length_t C, length_t R, typename T , qualifier Q, template< length_t, length_t, typename, qualifier > class matType>
GLM_FUNC_DECL bool isIdentity (matType< C, R, T, Q > const &m, T const &epsilon)
 Return whether a matrix is an identity matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL bool isNormalized (mat< 2, 2, T, Q > const &m, T const &epsilon)
 Return whether a matrix is a normalized matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL bool isNormalized (mat< 3, 3, T, Q > const &m, T const &epsilon)
 Return whether a matrix is a normalized matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL bool isNormalized (mat< 4, 4, T, Q > const &m, T const &epsilon)
 Return whether a matrix is a normalized matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL bool isNull (mat< 2, 2, T, Q > const &m, T const &epsilon)
 Return whether a matrix is a null matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL bool isNull (mat< 3, 3, T, Q > const &m, T const &epsilon)
 Return whether a matrix is a null matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL bool isNull (mat< 4, 4, T, Q > const &m, T const &epsilon)
 Return whether a matrix is a null matrix. More...
 
template<length_t C, length_t R, typename T , qualifier Q, template< length_t, length_t, typename, qualifier > class matType>
GLM_FUNC_DECL bool isOrthogonal (matType< C, R, T, Q > const &m, T const &epsilon)
 Return whether a matrix is an orthonormalized matrix. More...
 

Detailed Description

Include <glm/gtx/matrix_query.hpp> to use the features of this extension.

Queries to evaluate matrix properties.

Function Documentation

GLM_FUNC_DECL bool glm::isIdentity ( matType< C, R, T, Q > const &  m,
T const &  epsilon 
)

Return whether a matrix is an identity matrix.

From GLM_GTX_matrix_query extension.

GLM_FUNC_DECL bool glm::isNormalized ( mat< 2, 2, T, Q > const &  m,
T const &  epsilon 
)

Return whether a matrix is a normalized matrix.

From GLM_GTX_matrix_query extension.

GLM_FUNC_DECL bool glm::isNormalized ( mat< 3, 3, T, Q > const &  m,
T const &  epsilon 
)

Return whether a matrix is a normalized matrix.

From GLM_GTX_matrix_query extension.

GLM_FUNC_DECL bool glm::isNormalized ( mat< 4, 4, T, Q > const &  m,
T const &  epsilon 
)

Return whether a matrix is a normalized matrix.

From GLM_GTX_matrix_query extension.

GLM_FUNC_DECL bool glm::isNull ( mat< 2, 2, T, Q > const &  m,
T const &  epsilon 
)

Return whether a matrix is a null matrix.

From GLM_GTX_matrix_query extension.

GLM_FUNC_DECL bool glm::isNull ( mat< 3, 3, T, Q > const &  m,
T const &  epsilon 
)

Return whether a matrix is a null matrix.

From GLM_GTX_matrix_query extension.

GLM_FUNC_DECL bool glm::isNull ( mat< 4, 4, T, Q > const &  m,
T const &  epsilon 
)

Return whether a matrix is a null matrix.

From GLM_GTX_matrix_query extension.

GLM_FUNC_DECL bool glm::isOrthogonal ( matType< C, R, T, Q > const &  m,
T const &  epsilon 
)

Return whether a matrix is an orthonormalized matrix.

From GLM_GTX_matrix_query extension.
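All of these queries compare elements against an epsilon tolerance rather than testing exact equality, which is what makes them usable on floating-point matrices. A minimal 2x2 sketch of the idea (plain std::array, names of this sketch's own choosing):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat2 = std::array<std::array<double, 2>, 2>; // m[col][row]

// Every element within eps of zero: the idea behind glm::isNull.
bool is_null(const Mat2& m, double eps) {
    for (const auto& col : m)
        for (double x : col)
            if (std::fabs(x) > eps) return false;
    return true;
}

// Every element within eps of the identity pattern: the idea behind
// glm::isIdentity.
bool is_identity(const Mat2& m, double eps) {
    for (int c = 0; c < 2; ++c)
        for (int r = 0; r < 2; ++r)
            if (std::fabs(m[c][r] - (c == r ? 1.0 : 0.0)) > eps) return false;
    return true;
}
```

A matrix that is identity up to rounding noise still passes with a sensible epsilon.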

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00341.html ================================================ 0.9.9 API documentation: GLM_GTX_matrix_transform_2d

Include <glm/gtx/matrix_transform_2d.hpp> to use the features of this extension. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER mat< 3, 3, T, Q > rotate (mat< 3, 3, T, Q > const &m, T angle)
 Builds a rotation 3 * 3 matrix created from an angle. More...
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER mat< 3, 3, T, Q > scale (mat< 3, 3, T, Q > const &m, vec< 2, T, Q > const &v)
 Builds a scale 3 * 3 matrix created from a vector of 2 components. More...
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER mat< 3, 3, T, Q > shearX (mat< 3, 3, T, Q > const &m, T y)
 Builds a horizontal (parallel to the x axis) shear 3 * 3 matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER mat< 3, 3, T, Q > shearY (mat< 3, 3, T, Q > const &m, T x)
 Builds a vertical (parallel to the y axis) shear 3 * 3 matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_QUALIFIER mat< 3, 3, T, Q > translate (mat< 3, 3, T, Q > const &m, vec< 2, T, Q > const &v)
 Builds a translation 3 * 3 matrix created from a vector of 2 components. More...
 

Detailed Description

Include <glm/gtx/matrix_transform_2d.hpp> to use the features of this extension.

Defines functions that generate common 2d transformation matrices.

Function Documentation

GLM_FUNC_QUALIFIER mat<3, 3, T, Q> glm::rotate ( mat< 3, 3, T, Q > const &  m,
T  angle 
)

Builds a rotation 3 * 3 matrix created from an angle.

Parameters
m: Input matrix multiplied by this rotation matrix.
angle: Rotation angle expressed in radians.
GLM_FUNC_QUALIFIER mat<3, 3, T, Q> glm::scale ( mat< 3, 3, T, Q > const &  m,
vec< 2, T, Q > const &  v 
)

Builds a scale 3 * 3 matrix created from a vector of 2 components.

Parameters
m: Input matrix multiplied by this scale matrix.
v: Coordinates of a scale vector.
GLM_FUNC_QUALIFIER mat<3, 3, T, Q> glm::shearX ( mat< 3, 3, T, Q > const &  m,
T  y 
)

Builds a horizontal (parallel to the x axis) shear 3 * 3 matrix.

Parameters
m: Input matrix multiplied by this shear matrix.
y: Shear factor.
GLM_FUNC_QUALIFIER mat<3, 3, T, Q> glm::shearY ( mat< 3, 3, T, Q > const &  m,
T  x 
)

Builds a vertical (parallel to the y axis) shear 3 * 3 matrix.

Parameters
m: Input matrix multiplied by this shear matrix.
x: Shear factor.
GLM_FUNC_QUALIFIER mat<3, 3, T, Q> glm::translate ( mat< 3, 3, T, Q > const &  m,
vec< 2, T, Q > const &  v 
)

Builds a translation 3 * 3 matrix created from a vector of 2 components.

Parameters
m: Input matrix multiplied by this translation matrix.
v: Coordinates of a translation vector.
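The 2D transforms operate on 3x3 homogeneous matrices and post-multiply the input m by the new transform. A standalone sketch of translate and rotate with column-major std::array storage (function names are this sketch's own, not GLM's implementation):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<Vec3, 3>; // column-major homogeneous 2D transform

// result = m * T(tx, ty): append a 2D translation to an existing transform.
Mat3 translate2d(const Mat3& m, double tx, double ty) {
    Mat3 r = m;
    for (int i = 0; i < 3; ++i)
        r[2][i] = m[0][i] * tx + m[1][i] * ty + m[2][i];
    return r;
}

// result = m * R(a): append a 2D rotation by angle a (radians).
Mat3 rotate2d(const Mat3& m, double a) {
    double c = std::cos(a), s = std::sin(a);
    Mat3 r = m;
    for (int i = 0; i < 3; ++i) {
        r[0][i] = c * m[0][i] + s * m[1][i];
        r[1][i] = -s * m[0][i] + c * m[1][i];
    }
    return r;
}
```

Starting from the identity, translate2d places (tx, ty, 1) in the last column, and a quarter-turn rotation maps the x basis column onto y.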
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00342.html ================================================ 0.9.9 API documentation: GLM_GTX_mixed_producte

Include <glm/gtx/mixed_product.hpp> to use the features of this extension. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL T mixedProduct (vec< 3, T, Q > const &v1, vec< 3, T, Q > const &v2, vec< 3, T, Q > const &v3)
 Mixed product of 3 vectors (from GLM_GTX_mixed_product extension)
 

Detailed Description

Include <glm/gtx/mixed_product.hpp> to use the features of this extension.

Mixed product of 3 vectors.
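The mixed (scalar triple) product is dot(cross(v1, v2), v3): the signed volume of the parallelepiped spanned by the three vectors, so it vanishes whenever they are coplanar. A standalone sketch without GLM types:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Scalar triple product dot(cross(v1, v2), v3): signed volume of the
// parallelepiped spanned by v1, v2, v3 (zero when they are coplanar).
double mixed_product(const Vec3& v1, const Vec3& v2, const Vec3& v3) {
    Vec3 c = {v1[1] * v2[2] - v1[2] * v2[1],
              v1[2] * v2[0] - v1[0] * v2[2],
              v1[0] * v2[1] - v1[1] * v2[0]};
    return c[0] * v3[0] + c[1] * v3[1] + c[2] * v3[2];
}
```

The unit axes give +1; swapping any two arguments flips the sign.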

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00343.html ================================================ 0.9.9 API documentation: GLM_GTX_norm

Include <glm/gtx/norm.hpp> to use the features of this extension. More...

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T distance2 (vec< L, T, Q > const &p0, vec< L, T, Q > const &p1)
 Returns the squared distance between p0 and p1, i.e., length2(p0 - p1). More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T l1Norm (vec< 3, T, Q > const &x, vec< 3, T, Q > const &y)
 Returns the L1 norm between x and y. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T l1Norm (vec< 3, T, Q > const &v)
 Returns the L1 norm of v. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T l2Norm (vec< 3, T, Q > const &x, vec< 3, T, Q > const &y)
 Returns the L2 norm between x and y. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T l2Norm (vec< 3, T, Q > const &x)
 Returns the L2 norm of x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T length2 (vec< L, T, Q > const &x)
 Returns the squared length of x. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T lMaxNorm (vec< 3, T, Q > const &x, vec< 3, T, Q > const &y)
 Returns the LMax norm between x and y. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T lMaxNorm (vec< 3, T, Q > const &x)
 Returns the LMax norm of x. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T lxNorm (vec< 3, T, Q > const &x, vec< 3, T, Q > const &y, unsigned int Depth)
 Returns the Lx norm of order Depth between x and y. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T lxNorm (vec< 3, T, Q > const &x, unsigned int Depth)
 Returns the Lx norm of x, with the order given by Depth. More...
 

Detailed Description

Include <glm/gtx/norm.hpp> to use the features of this extension.

Various ways to compute vector norms.

Function Documentation

GLM_FUNC_DECL T glm::distance2 ( vec< L, T, Q > const &  p0,
vec< L, T, Q > const &  p1 
)

Returns the squared distance between p0 and p1, i.e., length2(p0 - p1).

From GLM_GTX_norm extension.

GLM_FUNC_DECL T glm::l1Norm ( vec< 3, T, Q > const &  x,
vec< 3, T, Q > const &  y 
)

Returns the L1 norm between x and y.

From GLM_GTX_norm extension.

GLM_FUNC_DECL T glm::l1Norm ( vec< 3, T, Q > const &  v)

Returns the L1 norm of v.

From GLM_GTX_norm extension.

GLM_FUNC_DECL T glm::l2Norm ( vec< 3, T, Q > const &  x,
vec< 3, T, Q > const &  y 
)

Returns the L2 norm between x and y.

From GLM_GTX_norm extension.

GLM_FUNC_DECL T glm::l2Norm ( vec< 3, T, Q > const &  x)

Returns the L2 norm of x.

From GLM_GTX_norm extension.

GLM_FUNC_DECL T glm::length2 ( vec< L, T, Q > const &  x)

Returns the squared length of x.

From GLM_GTX_norm extension.

GLM_FUNC_DECL T glm::lMaxNorm ( vec< 3, T, Q > const &  x,
vec< 3, T, Q > const &  y 
)

Returns the LMax norm between x and y.

From GLM_GTX_norm extension.

GLM_FUNC_DECL T glm::lMaxNorm ( vec< 3, T, Q > const &  x)

Returns the LMax norm of x.

From GLM_GTX_norm extension.

GLM_FUNC_DECL T glm::lxNorm ( vec< 3, T, Q > const &  x,
vec< 3, T, Q > const &  y,
unsigned int  Depth 
)

Returns the Lx norm of order Depth between x and y.

From GLM_GTX_norm extension.

GLM_FUNC_DECL T glm::lxNorm ( vec< 3, T, Q > const &  x,
unsigned int  Depth 
)

Returns the Lx norm of x, with the order given by Depth.

From GLM_GTX_norm extension.
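The relationships between these norms are easy to check by hand: l1Norm sums absolute components, l2Norm is the Euclidean length, and lxNorm generalizes both, with Depth acting as the norm order. A standalone sketch (std::array instead of GLM vectors, snake_case names of this sketch's own):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Sum of absolute components (L1 / taxicab norm).
double l1_norm(const Vec3& v) {
    return std::fabs(v[0]) + std::fabs(v[1]) + std::fabs(v[2]);
}

// Euclidean length (L2 norm).
double l2_norm(const Vec3& v) {
    return std::sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
}

// Generalized Lp norm: (sum |v_i|^p)^(1/p); p plays the role of the
// Depth parameter documented for glm::lxNorm.
double lx_norm(const Vec3& v, unsigned p) {
    double s = 0.0;
    for (double x : v) s += std::pow(std::fabs(x), double(p));
    return std::pow(s, 1.0 / double(p));
}
```

With p = 1 and p = 2 the generalized form reproduces the two special cases.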

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00344.html ================================================ 0.9.9 API documentation: GLM_GTX_normal

Include <glm/gtx/normal.hpp> to use the features of this extension. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > triangleNormal (vec< 3, T, Q > const &p1, vec< 3, T, Q > const &p2, vec< 3, T, Q > const &p3)
 Computes triangle normal from triangle points. More...
 

Detailed Description

Include <glm/gtx/normal.hpp> to use the features of this extension.

Compute the normal of a triangle.

Function Documentation

GLM_FUNC_DECL vec<3, T, Q> glm::triangleNormal ( vec< 3, T, Q > const &  p1,
vec< 3, T, Q > const &  p2,
vec< 3, T, Q > const &  p3 
)

Computes triangle normal from triangle points.

See also
GLM_GTX_normal
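triangleNormal is the standard construction: normalize the cross product of two triangle edges, with the winding order of p1, p2, p3 deciding which way the normal points. A standalone sketch (the function and type names here are this sketch's own):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Unit normal of the triangle (p1, p2, p3): normalized cross product of the
// edges p2 - p1 and p3 - p1. Assumes a non-degenerate triangle.
Vec3 triangle_normal(const Vec3& p1, const Vec3& p2, const Vec3& p3) {
    Vec3 e1 = {p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2]};
    Vec3 e2 = {p3[0] - p1[0], p3[1] - p1[1], p3[2] - p1[2]};
    Vec3 n = {e1[1] * e2[2] - e1[2] * e2[1],
              e1[2] * e2[0] - e1[0] * e2[2],
              e1[0] * e2[1] - e1[1] * e2[0]};
    double len = std::sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
    return {n[0] / len, n[1] / len, n[2] / len};
}
```

A counter-clockwise triangle in the xy plane yields the +z normal.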
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00345.html ================================================ 0.9.9 API documentation: GLM_GTX_normalize_dot

Include <glm/gtx/normalize_dot.hpp> to use the features of this extension. More...

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T fastNormalizeDot (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Normalizes the parameters and returns the dot product of x and y. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T normalizeDot (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Normalizes the parameters and returns the dot product of x and y. More...
 

Detailed Description

Include <glm/gtx/normalize_dot.hpp> to use the features of this extension.

Dot product of vectors that need to be normalized, computed with a single square root.

Function Documentation

GLM_FUNC_DECL T glm::fastNormalizeDot ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  y 
)

Normalizes the parameters and returns the dot product of x and y.

Faster than dot(fastNormalize(x), fastNormalize(y)).

See also
GLM_GTX_normalize_dot extension.
GLM_FUNC_DECL T glm::normalizeDot ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  y 
)

Normalizes the parameters and returns the dot product of x and y.

It is faster than dot(normalize(x), normalize(y)).

See also
GLM_GTX_normalize_dot extension.
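The single square root the description refers to comes from the identity dot(normalize(x), normalize(y)) = dot(x, y) / sqrt(dot(x, x) * dot(y, y)). A standalone sketch of that identity, which yields the cosine of the angle between x and y (names are this sketch's own):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

double dot(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Cosine of the angle between x and y using one square root instead of
// normalizing both inputs: the algebraic identity behind glm::normalizeDot.
double normalize_dot(const Vec3& x, const Vec3& y) {
    return dot(x, y) / std::sqrt(dot(x, x) * dot(y, y));
}
```

Orthogonal inputs give 0 and parallel inputs give 1, regardless of their lengths.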
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00346.html ================================================ 0.9.9 API documentation: GLM_GTX_number_precision

Include <glm/gtx/number_precision.hpp> to use the features of this extension. More...

Typedefs

typedef f32 f32mat1
 Single-qualifier floating-point scalar. (from GLM_GTX_number_precision extension)
 
typedef f32 f32mat1x1
 Single-qualifier floating-point scalar. (from GLM_GTX_number_precision extension)
 
typedef f32 f32vec1
 Single-qualifier floating-point scalar. (from GLM_GTX_number_precision extension)
 
typedef f64 f64mat1
 Double-qualifier floating-point scalar. (from GLM_GTX_number_precision extension)
 
typedef f64 f64mat1x1
 Double-qualifier floating-point scalar. (from GLM_GTX_number_precision extension)
 
typedef f64 f64vec1
 Double-qualifier floating-point scalar. (from GLM_GTX_number_precision extension)
 
typedef u16 u16vec1
 16bit unsigned integer scalar. (from GLM_GTX_number_precision extension)
 
typedef u32 u32vec1
 32bit unsigned integer scalar. (from GLM_GTX_number_precision extension)
 
typedef u64 u64vec1
 64bit unsigned integer scalar. (from GLM_GTX_number_precision extension)
 
typedef u8 u8vec1
 8bit unsigned integer scalar. (from GLM_GTX_number_precision extension)
 

Detailed Description

Include <glm/gtx/number_precision.hpp> to use the features of this extension.

Defined size types.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00347.html ================================================ 0.9.9 API documentation: GLM_GTX_optimum_pow

Include <glm/gtx/optimum_pow.hpp> to use the features of this extension. More...

Functions

template<typename genType >
GLM_FUNC_DECL genType pow2 (genType const &x)
 Returns x raised to the power of 2. More...
 
template<typename genType >
GLM_FUNC_DECL genType pow3 (genType const &x)
 Returns x raised to the power of 3. More...
 
template<typename genType >
GLM_FUNC_DECL genType pow4 (genType const &x)
 Returns x raised to the power of 4. More...
 

Detailed Description

Include <glm/gtx/optimum_pow.hpp> to use the features of this extension.

Integer exponentiation of power functions.

Function Documentation

GLM_FUNC_DECL genType glm::gtx::pow2 ( genType const &  x)

Returns x raised to the power of 2.

See also
GLM_GTX_optimum_pow
GLM_FUNC_DECL genType glm::gtx::pow3 ( genType const &  x)

Returns x raised to the power of 3.

See also
GLM_GTX_optimum_pow
GLM_FUNC_DECL genType glm::gtx::pow4 ( genType const &  x)

Returns x raised to the power of 4.

See also
GLM_GTX_optimum_pow
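The point of these helpers is simply to replace a generic pow call with explicit multiplications for small integer exponents. A standalone sketch (the template names mirror the documented ones, but this is not GLM's implementation):

```cpp
#include <cassert>

// Small integer powers as explicit multiplications, avoiding the generic
// floating-point pow routine: the idea behind GLM_GTX_optimum_pow.
template <typename T> T pow2(const T& x) { return x * x; }
template <typename T> T pow3(const T& x) { return x * x * x; }
template <typename T> T pow4(const T& x) {
    T x2 = x * x; // reuse the square: two multiplications instead of three
    return x2 * x2;
}
```

These work for any type with a multiplication operator, including GLM's vector types, which multiply component-wise.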
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00348.html ================================================ 0.9.9 API documentation: GLM_GTX_orthonormalize

Include <glm/gtx/orthonormalize.hpp> to use the features of this extension. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 3, T, Q > orthonormalize (mat< 3, 3, T, Q > const &m)
 Returns the orthonormalized matrix of m. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > orthonormalize (vec< 3, T, Q > const &x, vec< 3, T, Q > const &y)
 Orthonormalizes x with respect to y. More...
 

Detailed Description

Include <glm/gtx/orthonormalize.hpp> to use the features of this extension.

Orthonormalize matrices.

Function Documentation

GLM_FUNC_DECL mat<3, 3, T, Q> glm::orthonormalize ( mat< 3, 3, T, Q > const &  m)

Returns the orthonormalized matrix of m.

See also
GLM_GTX_orthonormalize
GLM_FUNC_DECL vec<3, T, Q> glm::orthonormalize ( vec< 3, T, Q > const &  x,
vec< 3, T, Q > const &  y 
)

Orthonormalizes x with respect to y.

See also
GLM_GTX_orthonormalize
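The two-vector overload is one Gram-Schmidt step: subtract from x its projection onto y, then normalize. A standalone sketch that also divides by dot(y, y) so y need not be unit length (GLM's own overload may assume a normalized y; check the header):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

double dot(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// One Gram-Schmidt step: remove from x its component along y, then
// normalize the remainder. Assumes x is not parallel to y.
Vec3 orthonormalize(const Vec3& x, const Vec3& y) {
    double k = dot(x, y) / dot(y, y); // projection coefficient onto y
    Vec3 v = {x[0] - k * y[0], x[1] - k * y[1], x[2] - k * y[2]};
    double len = std::sqrt(dot(v, v));
    return {v[0] / len, v[1] / len, v[2] / len};
}
```

The result is a unit vector orthogonal to y lying in the plane spanned by x and y.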
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00349.html ================================================ 0.9.9 API documentation: GLM_GTX_perpendicular

Include <glm/gtx/perpendicular.hpp> to use the features of this extension. More...

Functions

template<typename genType >
GLM_FUNC_DECL genType perp (genType const &x, genType const &Normal)
 Projects x onto a perpendicular axis of Normal. More...
 

Detailed Description

Include <glm/gtx/perpendicular.hpp> to use the features of this extension.

Perpendicular of a vector with respect to another one.

Function Documentation

GLM_FUNC_DECL genType glm::perp ( genType const &  x,
genType const &  Normal 
)

Projects x onto a perpendicular axis of Normal.

From GLM_GTX_perpendicular extension.
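perp is the complement of proj from GLM_GTX_projection: subtracting the projection of x onto Normal leaves the component of x perpendicular to it. A standalone sketch (std::array types and names of this sketch's own):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

double dot(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// perp(x, n) = x - proj(x, n): the component of x perpendicular to n.
// Dividing by dot(n, n) means n need not be unit length.
Vec3 perp(const Vec3& x, const Vec3& n) {
    double k = dot(x, n) / dot(n, n);
    return {x[0] - k * n[0], x[1] - k * n[1], x[2] - k * n[2]};
}
```

By construction the result is orthogonal to n.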

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00350.html ================================================ 0.9.9 API documentation: GLM_GTX_polar_coordinates

Include <glm/gtx/polar_coordinates.hpp> to use the features of this extension. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > euclidean (vec< 2, T, Q > const &polar)
 Convert Polar to Euclidean coordinates. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > polar (vec< 3, T, Q > const &euclidean)
 Convert Euclidean to polar coordinates; x is the xz distance, y the latitude, and z the longitude. More...
 

Detailed Description

Include <glm/gtx/polar_coordinates.hpp> to use the features of this extension.

Conversion from Euclidean space to polar space and back.

Function Documentation

GLM_FUNC_DECL vec<3, T, Q> glm::euclidean ( vec< 2, T, Q > const &  polar)

Convert Polar to Euclidean coordinates.

See also
GLM_GTX_polar_coordinates
GLM_FUNC_DECL vec<3, T, Q> glm::polar ( vec< 3, T, Q > const &  euclidean)

Convert Euclidean to polar coordinates; x is the xz distance, y the latitude, and z the longitude.

See also
GLM_GTX_polar_coordinates
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00351.html ================================================ 0.9.9 API documentation: GLM_GTX_projection

Include <glm/gtx/projection.hpp> to use the features of this extension. More...

Functions

template<typename genType >
GLM_FUNC_DECL genType proj (genType const &x, genType const &Normal)
 Projects x on Normal. More...
 

Detailed Description

Include <glm/gtx/projection.hpp> to use the features of this extension.

Projection of a vector onto another one

Function Documentation

GLM_FUNC_DECL genType glm::proj ( genType const &  x,
genType const &  Normal 
)

Projects x on Normal.

Parameters
[in]	x	A vector to project
[in]	Normal	A normal that doesn't need to be of unit length.
See also
GLM_GTX_projection
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00352.html ================================================ 0.9.9 API documentation: GLM_GTX_quaternion
0.9.9 API documentation

Include <glm/gtx/quaternion.hpp> to use the features of this extension. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > cross (qua< T, Q > const &q, vec< 3, T, Q > const &v)
 Compute a cross product between a quaternion and a vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > cross (vec< 3, T, Q > const &v, qua< T, Q > const &q)
 Compute a cross product between a vector and a quaternion. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T extractRealComponent (qua< T, Q > const &q)
 Extract the real component of a quaternion. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > fastMix (qua< T, Q > const &x, qua< T, Q > const &y, T const &a)
 Quaternion normalized linear interpolation. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > intermediate (qua< T, Q > const &prev, qua< T, Q > const &curr, qua< T, Q > const &next)
 Returns an intermediate control point for squad interpolation. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T length2 (qua< T, Q > const &q)
 Returns the squared length of q. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > quat_identity ()
 Create an identity quaternion. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > rotate (qua< T, Q > const &q, vec< 3, T, Q > const &v)
 Rotates a 3 components vector by a quaternion. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, T, Q > rotate (qua< T, Q > const &q, vec< 4, T, Q > const &v)
 Rotates a 4 components vector by a quaternion. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > rotation (vec< 3, T, Q > const &orig, vec< 3, T, Q > const &dest)
 Compute the rotation between two vectors. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > shortMix (qua< T, Q > const &x, qua< T, Q > const &y, T const &a)
 Quaternion interpolation using the rotation short path. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > squad (qua< T, Q > const &q1, qua< T, Q > const &q2, qua< T, Q > const &s1, qua< T, Q > const &s2, T const &h)
 Compute a point on a path according to the squad equation. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 3, T, Q > toMat3 (qua< T, Q > const &x)
 Converts a quaternion to a 3 * 3 matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > toMat4 (qua< T, Q > const &x)
 Converts a quaternion to a 4 * 4 matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > toQuat (mat< 3, 3, T, Q > const &x)
 Converts a 3 * 3 matrix to a quaternion. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > toQuat (mat< 4, 4, T, Q > const &x)
 Converts a 4 * 4 matrix to a quaternion. More...
 

Detailed Description

Include <glm/gtx/quaternion.hpp> to use the features of this extension.

Extended quaternion types and functions

Function Documentation

GLM_FUNC_DECL vec<3, T, Q> glm::cross ( qua< T, Q > const &  q,
vec< 3, T, Q > const &  v 
)

Compute a cross product between a quaternion and a vector.

See also
GLM_GTX_quaternion
GLM_FUNC_DECL vec<3, T, Q> glm::cross ( vec< 3, T, Q > const &  v,
qua< T, Q > const &  q 
)

Compute a cross product between a vector and a quaternion.

See also
GLM_GTX_quaternion
GLM_FUNC_DECL T glm::extractRealComponent ( qua< T, Q > const &  q)

Extract the real component of a quaternion.

See also
GLM_GTX_quaternion
GLM_FUNC_DECL qua<T, Q> glm::fastMix ( qua< T, Q > const &  x,
qua< T, Q > const &  y,
T const &  a 
)

Quaternion normalized linear interpolation.

See also
GLM_GTX_quaternion
GLM_FUNC_DECL qua<T, Q> glm::intermediate ( qua< T, Q > const &  prev,
qua< T, Q > const &  curr,
qua< T, Q > const &  next 
)

Returns an intermediate control point for squad interpolation.

See also
GLM_GTX_quaternion
GLM_FUNC_DECL T glm::length2 ( qua< T, Q > const &  q)

Returns the squared length of q.

See also
GLM_GTX_quaternion
GLM_FUNC_DECL qua<T, Q> glm::quat_identity ( )

Create an identity quaternion.

See also
GLM_GTX_quaternion
GLM_FUNC_DECL vec<3, T, Q> glm::rotate ( qua< T, Q > const &  q,
vec< 3, T, Q > const &  v 
)

Rotates a 3 components vector by a quaternion.

See also
GLM_GTX_quaternion
GLM_FUNC_DECL vec<4, T, Q> glm::rotate ( qua< T, Q > const &  q,
vec< 4, T, Q > const &  v 
)

Rotates a 4 components vector by a quaternion.

See also
GLM_GTX_quaternion
GLM_FUNC_DECL qua<T, Q> glm::rotation ( vec< 3, T, Q > const &  orig,
vec< 3, T, Q > const &  dest 
)

Compute the rotation between two vectors.

Parameters
orig	vector, needs to be normalized
dest	vector, needs to be normalized
See also
GLM_GTX_quaternion
GLM_FUNC_DECL qua<T, Q> glm::shortMix ( qua< T, Q > const &  x,
qua< T, Q > const &  y,
T const &  a 
)

Quaternion interpolation using the rotation short path.

See also
GLM_GTX_quaternion
GLM_FUNC_DECL qua<T, Q> glm::squad ( qua< T, Q > const &  q1,
qua< T, Q > const &  q2,
qua< T, Q > const &  s1,
qua< T, Q > const &  s2,
T const &  h 
)

Compute a point on a path according to the squad equation.

q1 and q2 are control points; s1 and s2 are intermediate control points.

See also
GLM_GTX_quaternion
GLM_FUNC_DECL mat<3, 3, T, Q> glm::toMat3 ( qua< T, Q > const &  x)

Converts a quaternion to a 3 * 3 matrix.

See also
GLM_GTX_quaternion

Definition at line 113 of file gtx/quaternion.hpp.

References glm::mat3_cast().

GLM_FUNC_DECL mat<4, 4, T, Q> glm::toMat4 ( qua< T, Q > const &  x)

Converts a quaternion to a 4 * 4 matrix.

See also
GLM_GTX_quaternion

Definition at line 120 of file gtx/quaternion.hpp.

References glm::mat4_cast().

GLM_FUNC_DECL qua<T, Q> glm::toQuat ( mat< 3, 3, T, Q > const &  x)

Converts a 3 * 3 matrix to a quaternion.

See also
GLM_GTX_quaternion

Definition at line 127 of file gtx/quaternion.hpp.

References glm::quat_cast().

GLM_FUNC_DECL qua<T, Q> glm::toQuat ( mat< 4, 4, T, Q > const &  x)

Converts a 4 * 4 matrix to a quaternion.

See also
GLM_GTX_quaternion

Definition at line 134 of file gtx/quaternion.hpp.

References glm::quat_cast().

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00353.html ================================================ 0.9.9 API documentation: GLM_GTX_range
0.9.9 API documentation

Include <glm/gtx/range.hpp> to use the features of this extension. More...

Detailed Description

Include <glm/gtx/range.hpp> to use the features of this extension.

Defines begin and end for vectors and matrices. Useful for range-based for loop. The range is defined over the elements, not over columns or rows (e.g. mat4 has 16 elements).

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00354.html ================================================ 0.9.9 API documentation: GLM_GTX_raw_data
0.9.9 API documentation

Include <glm/gtx/raw_data.hpp> to use the features of this extension. More...

Typedefs

typedef detail::uint8 byte
 Type for byte numbers. More...
 
typedef detail::uint32 dword
 Type for dword numbers. More...
 
typedef detail::uint64 qword
 Type for qword numbers. More...
 
typedef detail::uint16 word
 Type for word numbers. More...
 

Detailed Description

Include <glm/gtx/raw_data.hpp> to use the features of this extension.

Types for raw data: byte, word, dword and qword.

Typedef Documentation

typedef detail::uint8 byte

Type for byte numbers.

From GLM_GTX_raw_data extension.

Definition at line 34 of file raw_data.hpp.

typedef detail::uint32 dword

Type for dword numbers.

From GLM_GTX_raw_data extension.

Definition at line 42 of file raw_data.hpp.

typedef detail::uint64 qword

Type for qword numbers.

From GLM_GTX_raw_data extension.

Definition at line 46 of file raw_data.hpp.

typedef detail::uint16 word

Type for word numbers.

From GLM_GTX_raw_data extension.

Definition at line 38 of file raw_data.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00355.html ================================================ 0.9.9 API documentation: GLM_GTX_rotate_normalized_axis
0.9.9 API documentation
GLM_GTX_rotate_normalized_axis

Include <glm/gtx/rotate_normalized_axis.hpp> to use the features of this extension. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > rotateNormalizedAxis (mat< 4, 4, T, Q > const &m, T const &angle, vec< 3, T, Q > const &axis)
 Builds a rotation 4 * 4 matrix created from a normalized axis and an angle. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL qua< T, Q > rotateNormalizedAxis (qua< T, Q > const &q, T const &angle, vec< 3, T, Q > const &axis)
 Rotates a quaternion around a normalized 3-component axis by an angle. More...
 

Detailed Description

Include <glm/gtx/rotate_normalized_axis.hpp> to use the features of this extension.

Quaternions and matrices rotations around normalized axis.

Function Documentation

GLM_FUNC_DECL mat<4, 4, T, Q> glm::rotateNormalizedAxis ( mat< 4, 4, T, Q > const &  m,
T const &  angle,
vec< 3, T, Q > const &  axis 
)

Builds a rotation 4 * 4 matrix created from a normalized axis and an angle.

Parameters
m	Input matrix multiplied by this rotation matrix.
angle	Rotation angle expressed in radians.
axis	Rotation axis, must be normalized.
Template Parameters
TValue type used to build the matrix. Currently supported: half (not recommended), float or double.
See also
GLM_GTX_rotate_normalized_axis
- rotate(T angle, T x, T y, T z)
- rotate(mat<4, 4, T, Q> const& m, T angle, T x, T y, T z)
- rotate(T angle, vec<3, T, Q> const& v)
GLM_FUNC_DECL qua<T, Q> glm::rotateNormalizedAxis ( qua< T, Q > const &  q,
T const &  angle,
vec< 3, T, Q > const &  axis 
)

Rotates a quaternion around a normalized 3-component axis by an angle.

Parameters
q	Source orientation
angle	Angle expressed in radians.
axis	Axis of the rotation, must be normalized.
See also
GLM_GTX_rotate_normalized_axis
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00356.html ================================================ 0.9.9 API documentation: GLM_GTX_rotate_vector
0.9.9 API documentation
GLM_GTX_rotate_vector

Include <glm/gtx/rotate_vector.hpp> to use the features of this extension. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > orientation (vec< 3, T, Q > const &Normal, vec< 3, T, Q > const &Up)
 Build a rotation matrix from a normal and a up vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 2, T, Q > rotate (vec< 2, T, Q > const &v, T const &angle)
 Rotate a two dimensional vector. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > rotate (vec< 3, T, Q > const &v, T const &angle, vec< 3, T, Q > const &normal)
 Rotate a three dimensional vector around an axis. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, T, Q > rotate (vec< 4, T, Q > const &v, T const &angle, vec< 3, T, Q > const &normal)
 Rotate a four dimensional vector around an axis. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > rotateX (vec< 3, T, Q > const &v, T const &angle)
 Rotate a three dimensional vector around the X axis. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, T, Q > rotateX (vec< 4, T, Q > const &v, T const &angle)
 Rotate a four dimensional vector around the X axis. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > rotateY (vec< 3, T, Q > const &v, T const &angle)
 Rotate a three dimensional vector around the Y axis. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, T, Q > rotateY (vec< 4, T, Q > const &v, T const &angle)
 Rotate a four dimensional vector around the Y axis. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > rotateZ (vec< 3, T, Q > const &v, T const &angle)
 Rotate a three dimensional vector around the Z axis. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 4, T, Q > rotateZ (vec< 4, T, Q > const &v, T const &angle)
 Rotate a four dimensional vector around the Z axis. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL vec< 3, T, Q > slerp (vec< 3, T, Q > const &x, vec< 3, T, Q > const &y, T const &a)
 Returns spherical interpolation between two vectors. More...
 

Detailed Description

Include <glm/gtx/rotate_vector.hpp> to use the features of this extension.

Function to directly rotate a vector

Function Documentation

GLM_FUNC_DECL mat<4, 4, T, Q> glm::orientation ( vec< 3, T, Q > const &  Normal,
vec< 3, T, Q > const &  Up 
)

Build a rotation matrix from a normal and a up vector.

From GLM_GTX_rotate_vector extension.

GLM_FUNC_DECL vec<2, T, Q> glm::rotate ( vec< 2, T, Q > const &  v,
T const &  angle 
)

Rotate a two dimensional vector.

From GLM_GTX_rotate_vector extension.

GLM_FUNC_DECL vec<3, T, Q> glm::rotate ( vec< 3, T, Q > const &  v,
T const &  angle,
vec< 3, T, Q > const &  normal 
)

Rotate a three dimensional vector around an axis.

From GLM_GTX_rotate_vector extension.

GLM_FUNC_DECL vec<4, T, Q> glm::rotate ( vec< 4, T, Q > const &  v,
T const &  angle,
vec< 3, T, Q > const &  normal 
)

Rotate a four dimensional vector around an axis.

From GLM_GTX_rotate_vector extension.

GLM_FUNC_DECL vec<3, T, Q> glm::rotateX ( vec< 3, T, Q > const &  v,
T const &  angle 
)

Rotate a three dimensional vector around the X axis.

From GLM_GTX_rotate_vector extension.

GLM_FUNC_DECL vec<4, T, Q> glm::rotateX ( vec< 4, T, Q > const &  v,
T const &  angle 
)

Rotate a four dimensional vector around the X axis.

From GLM_GTX_rotate_vector extension.

GLM_FUNC_DECL vec<3, T, Q> glm::rotateY ( vec< 3, T, Q > const &  v,
T const &  angle 
)

Rotate a three dimensional vector around the Y axis.

From GLM_GTX_rotate_vector extension.

GLM_FUNC_DECL vec<4, T, Q> glm::rotateY ( vec< 4, T, Q > const &  v,
T const &  angle 
)

Rotate a four dimensional vector around the Y axis.

From GLM_GTX_rotate_vector extension.

GLM_FUNC_DECL vec<3, T, Q> glm::rotateZ ( vec< 3, T, Q > const &  v,
T const &  angle 
)

Rotate a three dimensional vector around the Z axis.

From GLM_GTX_rotate_vector extension.

GLM_FUNC_DECL vec<4, T, Q> glm::rotateZ ( vec< 4, T, Q > const &  v,
T const &  angle 
)

Rotate a four dimensional vector around the Z axis.

From GLM_GTX_rotate_vector extension.

GLM_FUNC_DECL vec<3, T, Q> glm::slerp ( vec< 3, T, Q > const &  x,
vec< 3, T, Q > const &  y,
T const &  a 
)

Returns spherical interpolation between two vectors.

Parameters
x	A first vector
y	A second vector
a	Interpolation factor. The interpolation is defined beyond the range [0, 1].
See also
GLM_GTX_rotate_vector
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00357.html ================================================ 0.9.9 API documentation: GLM_GTX_scalar_relational
0.9.9 API documentation
GLM_GTX_scalar_relational

Include <glm/gtx/scalar_relational.hpp> to use the features of this extension. More...

Include <glm/gtx/scalar_relational.hpp> to use the features of this extension.

Extend relational and equality functions to scalar types.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00358.html ================================================ 0.9.9 API documentation: GLM_GTX_spline
0.9.9 API documentation

Include <glm/gtx/spline.hpp> to use the features of this extension. More...

Functions

template<typename genType >
GLM_FUNC_DECL genType catmullRom (genType const &v1, genType const &v2, genType const &v3, genType const &v4, typename genType::value_type const &s)
 Return a point from a catmull rom curve. More...
 
template<typename genType >
GLM_FUNC_DECL genType cubic (genType const &v1, genType const &v2, genType const &v3, genType const &v4, typename genType::value_type const &s)
 Return a point from a cubic curve. More...
 
template<typename genType >
GLM_FUNC_DECL genType hermite (genType const &v1, genType const &t1, genType const &v2, genType const &t2, typename genType::value_type const &s)
 Return a point from a hermite curve. More...
 

Detailed Description

Include <glm/gtx/spline.hpp> to use the features of this extension.

Spline functions

Function Documentation

GLM_FUNC_DECL genType glm::catmullRom ( genType const &  v1,
genType const &  v2,
genType const &  v3,
genType const &  v4,
typename genType::value_type const &  s 
)

Return a point from a catmull rom curve.

See also
GLM_GTX_spline extension.
GLM_FUNC_DECL genType glm::cubic ( genType const &  v1,
genType const &  v2,
genType const &  v3,
genType const &  v4,
typename genType::value_type const &  s 
)

Return a point from a cubic curve.

See also
GLM_GTX_spline extension.
GLM_FUNC_DECL genType glm::hermite ( genType const &  v1,
genType const &  t1,
genType const &  v2,
genType const &  t2,
typename genType::value_type const &  s 
)

Return a point from a hermite curve.

See also
GLM_GTX_spline extension.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00359.html ================================================ 0.9.9 API documentation: GLM_GTX_std_based_type
0.9.9 API documentation
GLM_GTX_std_based_type

Include <glm/gtx/std_based_type.hpp> to use the features of this extension. More...

Typedefs

typedef vec< 1, std::size_t, defaultp > size1
 Vector type based on one std::size_t component. More...
 
typedef vec< 1, std::size_t, defaultp > size1_t
 Vector type based on one std::size_t component. More...
 
typedef vec< 2, std::size_t, defaultp > size2
 Vector type based on two std::size_t components. More...
 
typedef vec< 2, std::size_t, defaultp > size2_t
 Vector type based on two std::size_t components. More...
 
typedef vec< 3, std::size_t, defaultp > size3
 Vector type based on three std::size_t components. More...
 
typedef vec< 3, std::size_t, defaultp > size3_t
 Vector type based on three std::size_t components. More...
 
typedef vec< 4, std::size_t, defaultp > size4
 Vector type based on four std::size_t components. More...
 
typedef vec< 4, std::size_t, defaultp > size4_t
 Vector type based on four std::size_t components. More...
 

Detailed Description

Include <glm/gtx/std_based_type.hpp> to use the features of this extension.

Adds vector types based on STL value types.

Typedef Documentation

typedef vec<1, std::size_t, defaultp> size1

Vector type based on one std::size_t component.

See also
GLM_GTX_std_based_type

Definition at line 35 of file std_based_type.hpp.

typedef vec<1, std::size_t, defaultp> size1_t

Vector type based on one std::size_t component.

See also
GLM_GTX_std_based_type

Definition at line 51 of file std_based_type.hpp.

typedef vec<2, std::size_t, defaultp> size2

Vector type based on two std::size_t components.

See also
GLM_GTX_std_based_type

Definition at line 39 of file std_based_type.hpp.

typedef vec<2, std::size_t, defaultp> size2_t

Vector type based on two std::size_t components.

See also
GLM_GTX_std_based_type

Definition at line 55 of file std_based_type.hpp.

typedef vec<3, std::size_t, defaultp> size3

Vector type based on three std::size_t components.

See also
GLM_GTX_std_based_type

Definition at line 43 of file std_based_type.hpp.

typedef vec<3, std::size_t, defaultp> size3_t

Vector type based on three std::size_t components.

See also
GLM_GTX_std_based_type

Definition at line 59 of file std_based_type.hpp.

typedef vec<4, std::size_t, defaultp> size4

Vector type based on four std::size_t components.

See also
GLM_GTX_std_based_type

Definition at line 47 of file std_based_type.hpp.

typedef vec<4, std::size_t, defaultp> size4_t

Vector type based on four std::size_t components.

See also
GLM_GTX_std_based_type

Definition at line 63 of file std_based_type.hpp.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00360.html ================================================ 0.9.9 API documentation: GLM_GTX_string_cast
0.9.9 API documentation
GLM_GTX_string_cast

Include <glm/gtx/string_cast.hpp> to use the features of this extension. More...

Functions

template<typename genType >
GLM_FUNC_DECL std::string to_string (genType const &x)
 Create a string from a GLM vector or matrix typed variable. More...
 

Detailed Description

Include <glm/gtx/string_cast.hpp> to use the features of this extension.

Setup strings for GLM type values

This extension is not supported with CUDA

Function Documentation

GLM_FUNC_DECL std::string glm::to_string ( genType const &  x)

Create a string from a GLM vector or matrix typed variable.

See also
GLM_GTX_string_cast extension.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00361.html ================================================ 0.9.9 API documentation: GLM_GTX_texture
0.9.9 API documentation

Include <glm/gtx/texture.hpp> to use the features of this extension. More...

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T levels (vec< L, T, Q > const &Extent)
 Compute the number of mipmaps levels necessary to create a mipmap complete texture. More...
 

Detailed Description

Include <glm/gtx/texture.hpp> to use the features of this extension.

Mipmap level computations for textures.

Function Documentation

T glm::levels ( vec< L, T, Q > const &  Extent)

Compute the number of mipmaps levels necessary to create a mipmap complete texture.

Parameters
Extent	Extent of the texture base level mipmap
Template Parameters
L	Integer between 1 and 4 included that qualifies the dimension of the vector
T	Floating-point or signed integer scalar types
Q	Value from qualifier enum
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00362.html ================================================ 0.9.9 API documentation: GLM_GTX_transform
0.9.9 API documentation

Include <glm/gtx/transform.hpp> to use the features of this extension. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > rotate (T angle, vec< 3, T, Q > const &v)
 Builds a rotation 4 * 4 matrix created from an axis of 3 scalars and an angle expressed in radians. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > scale (vec< 3, T, Q > const &v)
 Transforms a matrix with a scale 4 * 4 matrix created from a vector of 3 components. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > translate (vec< 3, T, Q > const &v)
 Transforms a matrix with a translation 4 * 4 matrix created from 3 scalars. More...
 

Detailed Description

Include <glm/gtx/transform.hpp> to use the features of this extension.

Add transformation matrices

Function Documentation

GLM_FUNC_DECL mat<4, 4, T, Q> glm::rotate ( T  angle,
vec< 3, T, Q > const &  v 
)

Builds a rotation 4 * 4 matrix created from an axis of 3 scalars and an angle expressed in radians.

See also
GLM_GTC_matrix_transform
GLM_GTX_transform
GLM_FUNC_DECL mat<4, 4, T, Q> glm::scale ( vec< 3, T, Q > const &  v)

Transforms a matrix with a scale 4 * 4 matrix created from a vector of 3 components.

See also
GLM_GTC_matrix_transform
GLM_GTX_transform
GLM_FUNC_DECL mat<4, 4, T, Q> glm::translate ( vec< 3, T, Q > const &  v)

Transforms a matrix with a translation 4 * 4 matrix created from 3 scalars.

See also
GLM_GTC_matrix_transform
GLM_GTX_transform
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00363.html ================================================ 0.9.9 API documentation: GLM_GTX_transform2
0.9.9 API documentation

Include <glm/gtx/transform2.hpp> to use the features of this extension. More...

Functions

template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 3, T, Q > proj2D (mat< 3, 3, T, Q > const &m, vec< 3, T, Q > const &normal)
 Build planar projection matrix along normal axis. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > proj3D (mat< 4, 4, T, Q > const &m, vec< 3, T, Q > const &normal)
 Build planar projection matrix along normal axis. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > scaleBias (T scale, T bias)
 Build a scale bias matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > scaleBias (mat< 4, 4, T, Q > const &m, T scale, T bias)
 Build a scale bias matrix. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 3, T, Q > shearX2D (mat< 3, 3, T, Q > const &m, T y)
 Transforms a matrix with a shearing on X axis. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > shearX3D (mat< 4, 4, T, Q > const &m, T y, T z)
 Transforms a matrix with a shearing on X axis. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 3, 3, T, Q > shearY2D (mat< 3, 3, T, Q > const &m, T x)
 Transforms a matrix with a shearing on Y axis. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > shearY3D (mat< 4, 4, T, Q > const &m, T x, T z)
 Transforms a matrix with a shearing on Y axis. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL mat< 4, 4, T, Q > shearZ3D (mat< 4, 4, T, Q > const &m, T x, T y)
 Transforms a matrix with a shearing on Z axis. More...
 

Detailed Description

Include <glm/gtx/transform2.hpp> to use the features of this extension.

Add extra transformation matrices

Function Documentation

GLM_FUNC_DECL mat<3, 3, T, Q> glm::proj2D ( mat< 3, 3, T, Q > const &  m,
vec< 3, T, Q > const &  normal 
)

Build planar projection matrix along normal axis.

From GLM_GTX_transform2 extension.

GLM_FUNC_DECL mat<4, 4, T, Q> glm::proj3D ( mat< 4, 4, T, Q > const &  m,
vec< 3, T, Q > const &  normal 
)

Build planar projection matrix along normal axis.

From GLM_GTX_transform2 extension.

GLM_FUNC_DECL mat<4, 4, T, Q> glm::scaleBias ( T  scale,
T  bias 
)

Build a scale bias matrix.

From GLM_GTX_transform2 extension.

GLM_FUNC_DECL mat<4, 4, T, Q> glm::scaleBias ( mat< 4, 4, T, Q > const &  m,
T  scale,
T  bias 
)

Build a scale bias matrix.

From GLM_GTX_transform2 extension.

GLM_FUNC_DECL mat<3, 3, T, Q> glm::shearX2D ( mat< 3, 3, T, Q > const &  m,
T  y 
)

Transforms a matrix with a shearing on X axis.

From GLM_GTX_transform2 extension.

GLM_FUNC_DECL mat<4, 4, T, Q> glm::shearX3D ( mat< 4, 4, T, Q > const &  m,
T  y,
T  z 
)

Transforms a matrix with a shearing on X axis.

From GLM_GTX_transform2 extension.

GLM_FUNC_DECL mat<3, 3, T, Q> glm::shearY2D ( mat< 3, 3, T, Q > const &  m,
T  x 
)

Transforms a matrix with a shearing on Y axis.

From GLM_GTX_transform2 extension.

GLM_FUNC_DECL mat<4, 4, T, Q> glm::shearY3D ( mat< 4, 4, T, Q > const &  m,
T  x,
T  z 
)

Transforms a matrix with a shearing on Y axis.

From GLM_GTX_transform2 extension.

GLM_FUNC_DECL mat<4, 4, T, Q> glm::shearZ3D ( mat< 4, 4, T, Q > const &  m,
T  x,
T  y 
)

Transforms a matrix with a shearing on Z axis.

From GLM_GTX_transform2 extension.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00364.html ================================================ 0.9.9 API documentation: GLM_GTX_type_aligned
0.9.9 API documentation
GLM_GTX_type_aligned

Include <glm/gtx/type_aligned.hpp> to use the features of this extension. More...

Functions

 GLM_ALIGNED_TYPEDEF (lowp_int8, aligned_lowp_int8, 1)
 Low qualifier 8 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_int16, aligned_lowp_int16, 2)
 Low qualifier 16 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_int32, aligned_lowp_int32, 4)
 Low qualifier 32 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_int64, aligned_lowp_int64, 8)
 Low qualifier 64 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_int8_t, aligned_lowp_int8_t, 1)
 Low qualifier 8 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_int16_t, aligned_lowp_int16_t, 2)
 Low qualifier 16 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_int32_t, aligned_lowp_int32_t, 4)
 Low qualifier 32 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_int64_t, aligned_lowp_int64_t, 8)
 Low qualifier 64 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_i8, aligned_lowp_i8, 1)
 Low qualifier 8 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_i16, aligned_lowp_i16, 2)
 Low qualifier 16 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_i32, aligned_lowp_i32, 4)
 Low qualifier 32 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_i64, aligned_lowp_i64, 8)
 Low qualifier 64 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_int8, aligned_mediump_int8, 1)
 Medium qualifier 8 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_int16, aligned_mediump_int16, 2)
 Medium qualifier 16 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_int32, aligned_mediump_int32, 4)
 Medium qualifier 32 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_int64, aligned_mediump_int64, 8)
 Medium qualifier 64 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_int8_t, aligned_mediump_int8_t, 1)
 Medium qualifier 8 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_int16_t, aligned_mediump_int16_t, 2)
 Medium qualifier 16 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_int32_t, aligned_mediump_int32_t, 4)
 Medium qualifier 32 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_int64_t, aligned_mediump_int64_t, 8)
 Medium qualifier 64 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_i8, aligned_mediump_i8, 1)
 Medium qualifier 8 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_i16, aligned_mediump_i16, 2)
 Medium qualifier 16 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_i32, aligned_mediump_i32, 4)
 Medium qualifier 32 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_i64, aligned_mediump_i64, 8)
 Medium qualifier 64 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_int8, aligned_highp_int8, 1)
 High qualifier 8 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_int16, aligned_highp_int16, 2)
 High qualifier 16 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_int32, aligned_highp_int32, 4)
 High qualifier 32 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_int64, aligned_highp_int64, 8)
 High qualifier 64 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_int8_t, aligned_highp_int8_t, 1)
 High qualifier 8 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_int16_t, aligned_highp_int16_t, 2)
 High qualifier 16 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_int32_t, aligned_highp_int32_t, 4)
 High qualifier 32 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_int64_t, aligned_highp_int64_t, 8)
 High qualifier 64 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_i8, aligned_highp_i8, 1)
 High qualifier 8 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_i16, aligned_highp_i16, 2)
 High qualifier 16 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_i32, aligned_highp_i32, 4)
 High qualifier 32 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_i64, aligned_highp_i64, 8)
 High qualifier 64 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (int8, aligned_int8, 1)
 Default qualifier 8 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (int16, aligned_int16, 2)
 Default qualifier 16 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (int32, aligned_int32, 4)
 Default qualifier 32 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (int64, aligned_int64, 8)
 Default qualifier 64 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (int8_t, aligned_int8_t, 1)
 Default qualifier 8 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (int16_t, aligned_int16_t, 2)
 Default qualifier 16 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (int32_t, aligned_int32_t, 4)
 Default qualifier 32 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (int64_t, aligned_int64_t, 8)
 Default qualifier 64 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (i8, aligned_i8, 1)
 Default qualifier 8 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (i16, aligned_i16, 2)
 Default qualifier 16 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (i32, aligned_i32, 4)
 Default qualifier 32 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (i64, aligned_i64, 8)
 Default qualifier 64 bit signed integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (ivec1, aligned_ivec1, 4)
 Default qualifier 32 bit signed integer aligned vector of 1 component type. More...
 
 GLM_ALIGNED_TYPEDEF (ivec2, aligned_ivec2, 8)
 Default qualifier 32 bit signed integer aligned vector of 2 components type. More...
 
 GLM_ALIGNED_TYPEDEF (ivec3, aligned_ivec3, 16)
 Default qualifier 32 bit signed integer aligned vector of 3 components type. More...
 
 GLM_ALIGNED_TYPEDEF (ivec4, aligned_ivec4, 16)
 Default qualifier 32 bit signed integer aligned vector of 4 components type. More...
 
 GLM_ALIGNED_TYPEDEF (i8vec1, aligned_i8vec1, 1)
 Default qualifier 8 bit signed integer aligned vector of 1 component type. More...
 
 GLM_ALIGNED_TYPEDEF (i8vec2, aligned_i8vec2, 2)
 Default qualifier 8 bit signed integer aligned vector of 2 components type. More...
 
 GLM_ALIGNED_TYPEDEF (i8vec3, aligned_i8vec3, 4)
 Default qualifier 8 bit signed integer aligned vector of 3 components type. More...
 
 GLM_ALIGNED_TYPEDEF (i8vec4, aligned_i8vec4, 4)
 Default qualifier 8 bit signed integer aligned vector of 4 components type. More...
 
 GLM_ALIGNED_TYPEDEF (i16vec1, aligned_i16vec1, 2)
 Default qualifier 16 bit signed integer aligned vector of 1 component type. More...
 
 GLM_ALIGNED_TYPEDEF (i16vec2, aligned_i16vec2, 4)
 Default qualifier 16 bit signed integer aligned vector of 2 components type. More...
 
 GLM_ALIGNED_TYPEDEF (i16vec3, aligned_i16vec3, 8)
 Default qualifier 16 bit signed integer aligned vector of 3 components type. More...
 
 GLM_ALIGNED_TYPEDEF (i16vec4, aligned_i16vec4, 8)
 Default qualifier 16 bit signed integer aligned vector of 4 components type. More...
 
 GLM_ALIGNED_TYPEDEF (i32vec1, aligned_i32vec1, 4)
 Default qualifier 32 bit signed integer aligned vector of 1 component type. More...
 
 GLM_ALIGNED_TYPEDEF (i32vec2, aligned_i32vec2, 8)
 Default qualifier 32 bit signed integer aligned vector of 2 components type. More...
 
 GLM_ALIGNED_TYPEDEF (i32vec3, aligned_i32vec3, 16)
 Default qualifier 32 bit signed integer aligned vector of 3 components type. More...
 
 GLM_ALIGNED_TYPEDEF (i32vec4, aligned_i32vec4, 16)
 Default qualifier 32 bit signed integer aligned vector of 4 components type. More...
 
 GLM_ALIGNED_TYPEDEF (i64vec1, aligned_i64vec1, 8)
 Default qualifier 64 bit signed integer aligned vector of 1 component type. More...
 
 GLM_ALIGNED_TYPEDEF (i64vec2, aligned_i64vec2, 16)
 Default qualifier 64 bit signed integer aligned vector of 2 components type. More...
 
 GLM_ALIGNED_TYPEDEF (i64vec3, aligned_i64vec3, 32)
 Default qualifier 64 bit signed integer aligned vector of 3 components type. More...
 
 GLM_ALIGNED_TYPEDEF (i64vec4, aligned_i64vec4, 32)
 Default qualifier 64 bit signed integer aligned vector of 4 components type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_uint8, aligned_lowp_uint8, 1)
 Low qualifier 8 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_uint16, aligned_lowp_uint16, 2)
 Low qualifier 16 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_uint32, aligned_lowp_uint32, 4)
 Low qualifier 32 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_uint64, aligned_lowp_uint64, 8)
 Low qualifier 64 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_uint8_t, aligned_lowp_uint8_t, 1)
 Low qualifier 8 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_uint16_t, aligned_lowp_uint16_t, 2)
 Low qualifier 16 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_uint32_t, aligned_lowp_uint32_t, 4)
 Low qualifier 32 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_uint64_t, aligned_lowp_uint64_t, 8)
 Low qualifier 64 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_u8, aligned_lowp_u8, 1)
 Low qualifier 8 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_u16, aligned_lowp_u16, 2)
 Low qualifier 16 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_u32, aligned_lowp_u32, 4)
 Low qualifier 32 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (lowp_u64, aligned_lowp_u64, 8)
 Low qualifier 64 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_uint8, aligned_mediump_uint8, 1)
 Medium qualifier 8 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_uint16, aligned_mediump_uint16, 2)
 Medium qualifier 16 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_uint32, aligned_mediump_uint32, 4)
 Medium qualifier 32 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_uint64, aligned_mediump_uint64, 8)
 Medium qualifier 64 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_uint8_t, aligned_mediump_uint8_t, 1)
 Medium qualifier 8 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_uint16_t, aligned_mediump_uint16_t, 2)
 Medium qualifier 16 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_uint32_t, aligned_mediump_uint32_t, 4)
 Medium qualifier 32 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_uint64_t, aligned_mediump_uint64_t, 8)
 Medium qualifier 64 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_u8, aligned_mediump_u8, 1)
 Medium qualifier 8 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_u16, aligned_mediump_u16, 2)
 Medium qualifier 16 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_u32, aligned_mediump_u32, 4)
 Medium qualifier 32 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (mediump_u64, aligned_mediump_u64, 8)
 Medium qualifier 64 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_uint8, aligned_highp_uint8, 1)
 High qualifier 8 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_uint16, aligned_highp_uint16, 2)
 High qualifier 16 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_uint32, aligned_highp_uint32, 4)
 High qualifier 32 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_uint64, aligned_highp_uint64, 8)
 High qualifier 64 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_uint8_t, aligned_highp_uint8_t, 1)
 High qualifier 8 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_uint16_t, aligned_highp_uint16_t, 2)
 High qualifier 16 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_uint32_t, aligned_highp_uint32_t, 4)
 High qualifier 32 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_uint64_t, aligned_highp_uint64_t, 8)
 High qualifier 64 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_u8, aligned_highp_u8, 1)
 High qualifier 8 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_u16, aligned_highp_u16, 2)
 High qualifier 16 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_u32, aligned_highp_u32, 4)
 High qualifier 32 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (highp_u64, aligned_highp_u64, 8)
 High qualifier 64 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (uint8, aligned_uint8, 1)
 Default qualifier 8 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (uint16, aligned_uint16, 2)
 Default qualifier 16 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (uint32, aligned_uint32, 4)
 Default qualifier 32 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (uint64, aligned_uint64, 8)
 Default qualifier 64 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (uint8_t, aligned_uint8_t, 1)
 Default qualifier 8 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (uint16_t, aligned_uint16_t, 2)
 Default qualifier 16 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (uint32_t, aligned_uint32_t, 4)
 Default qualifier 32 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (uint64_t, aligned_uint64_t, 8)
 Default qualifier 64 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (u8, aligned_u8, 1)
 Default qualifier 8 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (u16, aligned_u16, 2)
 Default qualifier 16 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (u32, aligned_u32, 4)
 Default qualifier 32 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (u64, aligned_u64, 8)
 Default qualifier 64 bit unsigned integer aligned scalar type. More...
 
 GLM_ALIGNED_TYPEDEF (uvec1, aligned_uvec1, 4)
 Default qualifier 32 bit unsigned integer aligned vector of 1 component type. More...
 
 GLM_ALIGNED_TYPEDEF (uvec2, aligned_uvec2, 8)
 Default qualifier 32 bit unsigned integer aligned vector of 2 components type. More...
 
 GLM_ALIGNED_TYPEDEF (uvec3, aligned_uvec3, 16)
 Default qualifier 32 bit unsigned integer aligned vector of 3 components type. More...
 
 GLM_ALIGNED_TYPEDEF (uvec4, aligned_uvec4, 16)
 Default qualifier 32 bit unsigned integer aligned vector of 4 components type. More...
 
 GLM_ALIGNED_TYPEDEF (u8vec1, aligned_u8vec1, 1)
 Default qualifier 8 bit unsigned integer aligned vector of 1 component type. More...
 
 GLM_ALIGNED_TYPEDEF (u8vec2, aligned_u8vec2, 2)
 Default qualifier 8 bit unsigned integer aligned vector of 2 components type. More...
 
 GLM_ALIGNED_TYPEDEF (u8vec3, aligned_u8vec3, 4)
 Default qualifier 8 bit unsigned integer aligned vector of 3 components type. More...
 
 GLM_ALIGNED_TYPEDEF (u8vec4, aligned_u8vec4, 4)
 Default qualifier 8 bit unsigned integer aligned vector of 4 components type. More...
 
 GLM_ALIGNED_TYPEDEF (u16vec1, aligned_u16vec1, 2)
 Default qualifier 16 bit unsigned integer aligned vector of 1 component type. More...
 
 GLM_ALIGNED_TYPEDEF (u16vec2, aligned_u16vec2, 4)
 Default qualifier 16 bit unsigned integer aligned vector of 2 components type. More...
 
 GLM_ALIGNED_TYPEDEF (u16vec3, aligned_u16vec3, 8)
 Default qualifier 16 bit unsigned integer aligned vector of 3 components type. More...
 
 GLM_ALIGNED_TYPEDEF (u16vec4, aligned_u16vec4, 8)
 Default qualifier 16 bit unsigned integer aligned vector of 4 components type. More...
 
 GLM_ALIGNED_TYPEDEF (u32vec1, aligned_u32vec1, 4)
 Default qualifier 32 bit unsigned integer aligned vector of 1 component type. More...
 
 GLM_ALIGNED_TYPEDEF (u32vec2, aligned_u32vec2, 8)
 Default qualifier 32 bit unsigned integer aligned vector of 2 components type. More...
 
 GLM_ALIGNED_TYPEDEF (u32vec3, aligned_u32vec3, 16)
 Default qualifier 32 bit unsigned integer aligned vector of 3 components type. More...
 
 GLM_ALIGNED_TYPEDEF (u32vec4, aligned_u32vec4, 16)
 Default qualifier 32 bit unsigned integer aligned vector of 4 components type. More...
 
 GLM_ALIGNED_TYPEDEF (u64vec1, aligned_u64vec1, 8)
 Default qualifier 64 bit unsigned integer aligned vector of 1 component type. More...
 
 GLM_ALIGNED_TYPEDEF (u64vec2, aligned_u64vec2, 16)
 Default qualifier 64 bit unsigned integer aligned vector of 2 components type. More...
 
 GLM_ALIGNED_TYPEDEF (u64vec3, aligned_u64vec3, 32)
 Default qualifier 64 bit unsigned integer aligned vector of 3 components type. More...
 
 GLM_ALIGNED_TYPEDEF (u64vec4, aligned_u64vec4, 32)
 Default qualifier 64 bit unsigned integer aligned vector of 4 components type. More...
 
 GLM_ALIGNED_TYPEDEF (float32, aligned_float32, 4)
 32 bit single-qualifier floating-point aligned scalar. More...
 
 GLM_ALIGNED_TYPEDEF (float32_t, aligned_float32_t, 4)
 32 bit single-qualifier floating-point aligned scalar. More...
 
 GLM_ALIGNED_TYPEDEF (float32, aligned_f32, 4)
 32 bit single-qualifier floating-point aligned scalar. More...
 
 GLM_ALIGNED_TYPEDEF (float64, aligned_float64, 8)
 64 bit double-qualifier floating-point aligned scalar. More...
 
 GLM_ALIGNED_TYPEDEF (float64_t, aligned_float64_t, 8)
 64 bit double-qualifier floating-point aligned scalar. More...
 
 GLM_ALIGNED_TYPEDEF (float64, aligned_f64, 8)
 64 bit double-qualifier floating-point aligned scalar. More...
 
 GLM_ALIGNED_TYPEDEF (vec1, aligned_vec1, 4)
 Single-qualifier floating-point aligned vector of 1 component. More...
 
 GLM_ALIGNED_TYPEDEF (vec2, aligned_vec2, 8)
 Single-qualifier floating-point aligned vector of 2 components. More...
 
 GLM_ALIGNED_TYPEDEF (vec3, aligned_vec3, 16)
 Single-qualifier floating-point aligned vector of 3 components. More...
 
 GLM_ALIGNED_TYPEDEF (vec4, aligned_vec4, 16)
 Single-qualifier floating-point aligned vector of 4 components. More...
 
 GLM_ALIGNED_TYPEDEF (fvec1, aligned_fvec1, 4)
 Single-qualifier floating-point aligned vector of 1 component. More...
 
 GLM_ALIGNED_TYPEDEF (fvec2, aligned_fvec2, 8)
 Single-qualifier floating-point aligned vector of 2 components. More...
 
 GLM_ALIGNED_TYPEDEF (fvec3, aligned_fvec3, 16)
 Single-qualifier floating-point aligned vector of 3 components. More...
 
 GLM_ALIGNED_TYPEDEF (fvec4, aligned_fvec4, 16)
 Single-qualifier floating-point aligned vector of 4 components. More...
 
 GLM_ALIGNED_TYPEDEF (f32vec1, aligned_f32vec1, 4)
 Single-qualifier floating-point aligned vector of 1 component. More...
 
 GLM_ALIGNED_TYPEDEF (f32vec2, aligned_f32vec2, 8)
 Single-qualifier floating-point aligned vector of 2 components. More...
 
 GLM_ALIGNED_TYPEDEF (f32vec3, aligned_f32vec3, 16)
 Single-qualifier floating-point aligned vector of 3 components. More...
 
 GLM_ALIGNED_TYPEDEF (f32vec4, aligned_f32vec4, 16)
 Single-qualifier floating-point aligned vector of 4 components. More...
 
 GLM_ALIGNED_TYPEDEF (dvec1, aligned_dvec1, 8)
 Double-qualifier floating-point aligned vector of 1 component. More...
 
 GLM_ALIGNED_TYPEDEF (dvec2, aligned_dvec2, 16)
 Double-qualifier floating-point aligned vector of 2 components. More...
 
 GLM_ALIGNED_TYPEDEF (dvec3, aligned_dvec3, 32)
 Double-qualifier floating-point aligned vector of 3 components. More...
 
 GLM_ALIGNED_TYPEDEF (dvec4, aligned_dvec4, 32)
 Double-qualifier floating-point aligned vector of 4 components. More...
 
 GLM_ALIGNED_TYPEDEF (f64vec1, aligned_f64vec1, 8)
 Double-qualifier floating-point aligned vector of 1 component. More...
 
 GLM_ALIGNED_TYPEDEF (f64vec2, aligned_f64vec2, 16)
 Double-qualifier floating-point aligned vector of 2 components. More...
 
 GLM_ALIGNED_TYPEDEF (f64vec3, aligned_f64vec3, 32)
 Double-qualifier floating-point aligned vector of 3 components. More...
 
 GLM_ALIGNED_TYPEDEF (f64vec4, aligned_f64vec4, 32)
 Double-qualifier floating-point aligned vector of 4 components. More...
 
 GLM_ALIGNED_TYPEDEF (mat2, aligned_mat2, 16)
 Single-qualifier floating-point aligned 2x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (mat3, aligned_mat3, 16)
 Single-qualifier floating-point aligned 3x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (mat4, aligned_mat4, 16)
 Single-qualifier floating-point aligned 4x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (fmat2x2, aligned_fmat2, 16)
 Single-qualifier floating-point aligned 2x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (fmat3x3, aligned_fmat3, 16)
 Single-qualifier floating-point aligned 3x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (fmat4x4, aligned_fmat4, 16)
 Single-qualifier floating-point aligned 4x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (fmat2x2, aligned_fmat2x2, 16)
 Single-qualifier floating-point aligned 2x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (fmat2x3, aligned_fmat2x3, 16)
 Single-qualifier floating-point aligned 2x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (fmat2x4, aligned_fmat2x4, 16)
 Single-qualifier floating-point aligned 2x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (fmat3x2, aligned_fmat3x2, 16)
 Single-qualifier floating-point aligned 3x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (fmat3x3, aligned_fmat3x3, 16)
 Single-qualifier floating-point aligned 3x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (fmat3x4, aligned_fmat3x4, 16)
 Single-qualifier floating-point aligned 3x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (fmat4x2, aligned_fmat4x2, 16)
 Single-qualifier floating-point aligned 4x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (fmat4x3, aligned_fmat4x3, 16)
 Single-qualifier floating-point aligned 4x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (fmat4x4, aligned_fmat4x4, 16)
 Single-qualifier floating-point aligned 4x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f32mat2x2, aligned_f32mat2, 16)
 Single-qualifier floating-point aligned 2x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f32mat3x3, aligned_f32mat3, 16)
 Single-qualifier floating-point aligned 3x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f32mat4x4, aligned_f32mat4, 16)
 Single-qualifier floating-point aligned 4x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f32mat2x2, aligned_f32mat2x2, 16)
 Single-qualifier floating-point aligned 2x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f32mat2x3, aligned_f32mat2x3, 16)
 Single-qualifier floating-point aligned 2x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f32mat2x4, aligned_f32mat2x4, 16)
 Single-qualifier floating-point aligned 2x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f32mat3x2, aligned_f32mat3x2, 16)
 Single-qualifier floating-point aligned 3x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f32mat3x3, aligned_f32mat3x3, 16)
 Single-qualifier floating-point aligned 3x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f32mat3x4, aligned_f32mat3x4, 16)
 Single-qualifier floating-point aligned 3x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f32mat4x2, aligned_f32mat4x2, 16)
 Single-qualifier floating-point aligned 4x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f32mat4x3, aligned_f32mat4x3, 16)
 Single-qualifier floating-point aligned 4x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f32mat4x4, aligned_f32mat4x4, 16)
 Single-qualifier floating-point aligned 4x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f64mat2x2, aligned_f64mat2, 32)
 Double-qualifier floating-point aligned 2x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f64mat3x3, aligned_f64mat3, 32)
 Double-qualifier floating-point aligned 3x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f64mat4x4, aligned_f64mat4, 32)
 Double-qualifier floating-point aligned 4x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f64mat2x2, aligned_f64mat2x2, 32)
 Double-qualifier floating-point aligned 2x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f64mat2x3, aligned_f64mat2x3, 32)
 Double-qualifier floating-point aligned 2x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f64mat2x4, aligned_f64mat2x4, 32)
 Double-qualifier floating-point aligned 2x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f64mat3x2, aligned_f64mat3x2, 32)
 Double-qualifier floating-point aligned 3x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f64mat3x3, aligned_f64mat3x3, 32)
 Double-qualifier floating-point aligned 3x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f64mat3x4, aligned_f64mat3x4, 32)
 Double-qualifier floating-point aligned 3x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f64mat4x2, aligned_f64mat4x2, 32)
 Double-qualifier floating-point aligned 4x2 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f64mat4x3, aligned_f64mat4x3, 32)
 Double-qualifier floating-point aligned 4x3 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (f64mat4x4, aligned_f64mat4x4, 32)
 Double-qualifier floating-point aligned 4x4 matrix. More...
 
 GLM_ALIGNED_TYPEDEF (quat, aligned_quat, 16)
 Single-qualifier floating-point aligned quaternion. More...
 
 GLM_ALIGNED_TYPEDEF (quat, aligned_fquat, 16)
 Single-qualifier floating-point aligned quaternion. More...
 
 GLM_ALIGNED_TYPEDEF (dquat, aligned_dquat, 32)
 Double-qualifier floating-point aligned quaternion. More...
 
 GLM_ALIGNED_TYPEDEF (f32quat, aligned_f32quat, 16)
 Single-qualifier floating-point aligned quaternion. More...
 
 GLM_ALIGNED_TYPEDEF (f64quat, aligned_f64quat, 32)
 Double-qualifier floating-point aligned quaternion. More...
 

Detailed Description

Include <glm/gtx/type_aligned.hpp> to use the features of this extension.

Defines aligned types.

Function Documentation

glm::GLM_ALIGNED_TYPEDEF (lowp_int8, aligned_lowp_int8, 1)

Low qualifier 8 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (lowp_int16, aligned_lowp_int16, 2)

Low qualifier 16 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (lowp_int32, aligned_lowp_int32, 4)

Low qualifier 32 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (lowp_int64, aligned_lowp_int64, 8)

Low qualifier 64 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (lowp_int8_t, aligned_lowp_int8_t, 1)

Low qualifier 8 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (lowp_int16_t, aligned_lowp_int16_t, 2)

Low qualifier 16 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (lowp_int32_t, aligned_lowp_int32_t, 4)

Low qualifier 32 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (lowp_int64_t, aligned_lowp_int64_t, 8)

Low qualifier 64 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (lowp_i8, aligned_lowp_i8, 1)

Low qualifier 8 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (lowp_i16, aligned_lowp_i16, 2)

Low qualifier 16 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (lowp_i32, aligned_lowp_i32, 4)

Low qualifier 32 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (lowp_i64, aligned_lowp_i64, 8)

Low qualifier 64 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF (mediump_int8, aligned_mediump_int8, 1)

Medium qualifier 8 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (mediump_int16, aligned_mediump_int16, 2)

Medium qualifier 16 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (mediump_int32, aligned_mediump_int32, 4)

Medium qualifier 32 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (mediump_int64, aligned_mediump_int64, 8)

Medium qualifier 64 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (mediump_int8_t, aligned_mediump_int8_t, 1)

Medium qualifier 8 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (mediump_int16_t, aligned_mediump_int16_t, 2)

Medium qualifier 16 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (mediump_int32_t, aligned_mediump_int32_t, 4)

Medium qualifier 32 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (mediump_int64_t, aligned_mediump_int64_t, 8)

Medium qualifier 64 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (mediump_i8, aligned_mediump_i8, 1)

Medium qualifier 8 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (mediump_i16, aligned_mediump_i16, 2)

Medium qualifier 16 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (mediump_i32, aligned_mediump_i32, 4)

Medium qualifier 32 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (mediump_i64, aligned_mediump_i64, 8)

Medium qualifier 64 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF (highp_int8, aligned_highp_int8, 1)

High qualifier 8 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (highp_int16, aligned_highp_int16, 2)

High qualifier 16 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (highp_int32, aligned_highp_int32, 4)

High qualifier 32 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (highp_int64, aligned_highp_int64, 8)

High qualifier 64 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (highp_int8_t, aligned_highp_int8_t, 1)

High qualifier 8 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (highp_int16_t, aligned_highp_int16_t, 2)

High qualifier 16 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (highp_int32_t, aligned_highp_int32_t, 4)

High qualifier 32 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (highp_int64_t, aligned_highp_int64_t, 8)

High qualifier 64 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (highp_i8, aligned_highp_i8, 1)

High qualifier 8 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (highp_i16, aligned_highp_i16, 2)

High qualifier 16 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (highp_i32, aligned_highp_i32, 4)

High qualifier 32 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (highp_i64, aligned_highp_i64, 8)

High qualifier 64 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF (int8, aligned_int8, 1)

Default qualifier 8 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (int16, aligned_int16, 2)

Default qualifier 16 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (int32, aligned_int32, 4)

Default qualifier 32 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (int64, aligned_int64, 8)

Default qualifier 64 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (int8_t, aligned_int8_t, 1)

Default qualifier 8 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (int16_t, aligned_int16_t, 2)

Default qualifier 16 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (int32_t, aligned_int32_t, 4)

Default qualifier 32 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (int64_t, aligned_int64_t, 8)

Default qualifier 64 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (i8, aligned_i8, 1)

Default qualifier 8 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (i16, aligned_i16, 2)

Default qualifier 16 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (i32, aligned_i32, 4)

Default qualifier 32 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (i64, aligned_i64, 8)

Default qualifier 64 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF (ivec1, aligned_ivec1, 4)

Default qualifier 32 bit signed integer aligned vector of 1 component type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (ivec2, aligned_ivec2, 8)

Default qualifier 32 bit signed integer aligned vector of 2 components type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (ivec3, aligned_ivec3, 16)

Default qualifier 32 bit signed integer aligned vector of 3 components type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (ivec4, aligned_ivec4, 16)

Default qualifier 32 bit signed integer aligned vector of 4 components type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF (i8vec1, aligned_i8vec1, 1)

Default qualifier 8 bit signed integer aligned vector of 1 component type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (i8vec2, aligned_i8vec2, 2)

Default qualifier 8 bit signed integer aligned vector of 2 components type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (i8vec3, aligned_i8vec3, 4)

Default qualifier 8 bit signed integer aligned vector of 3 components type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (i8vec4, aligned_i8vec4, 4)

Default qualifier 8 bit signed integer aligned vector of 4 components type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF (i16vec1, aligned_i16vec1, 2)

Default qualifier 16 bit signed integer aligned vector of 1 component type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (i16vec2, aligned_i16vec2, 4)

Default qualifier 16 bit signed integer aligned vector of 2 components type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (i16vec3, aligned_i16vec3, 8)

Default qualifier 16 bit signed integer aligned vector of 3 components type.

See also
GLM_GTX_type_aligned

glm::GLM_ALIGNED_TYPEDEF (i16vec4, aligned_i16vec4, 8)

Default qualifier 16 bit signed integer aligned vector of 4 components type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( i32vec1  ,
aligned_i32vec1  ,
 
)

Default qualifier 32 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( i32vec2  ,
aligned_i32vec2  ,
 
)

Default qualifier 32 bit signed integer aligned vector of 2 components type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( i32vec3  ,
aligned_i32vec3  ,
16   
)

Default qualifier 32 bit signed integer aligned vector of 3 components type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( i32vec4  ,
aligned_i32vec4  ,
16   
)

Default qualifier 32 bit signed integer aligned vector of 4 components type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( i64vec1  ,
aligned_i64vec1  ,
 
)

Default qualifier 64 bit signed integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( i64vec2  ,
aligned_i64vec2  ,
16   
)

Default qualifier 64 bit signed integer aligned vector of 2 components type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( i64vec3  ,
aligned_i64vec3  ,
32   
)

Default qualifier 64 bit signed integer aligned vector of 3 components type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( i64vec4  ,
aligned_i64vec4  ,
32   
)

Default qualifier 64 bit signed integer aligned vector of 4 components type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( lowp_uint8  ,
aligned_lowp_uint8  ,
 
)

Low qualifier 8 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( lowp_uint16  ,
aligned_lowp_uint16  ,
 
)

Low qualifier 16 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( lowp_uint32  ,
aligned_lowp_uint32  ,
 
)

Low qualifier 32 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( lowp_uint64  ,
aligned_lowp_uint64  ,
 
)

Low qualifier 64 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( lowp_uint8_t  ,
aligned_lowp_uint8_t  ,
 
)

Low qualifier 8 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( lowp_uint16_t  ,
aligned_lowp_uint16_t  ,
 
)

Low qualifier 16 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( lowp_uint32_t  ,
aligned_lowp_uint32_t  ,
 
)

Low qualifier 32 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( lowp_uint64_t  ,
aligned_lowp_uint64_t  ,
 
)

Low qualifier 64 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( lowp_u8  ,
aligned_lowp_u8  ,
 
)

Low qualifier 8 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( lowp_u16  ,
aligned_lowp_u16  ,
 
)

Low qualifier 16 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( lowp_u32  ,
aligned_lowp_u32  ,
 
)

Low qualifier 32 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( lowp_u64  ,
aligned_lowp_u64  ,
 
)

Low qualifier 64 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( mediump_uint8  ,
aligned_mediump_uint8  ,
 
)

Medium qualifier 8 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( mediump_uint16  ,
aligned_mediump_uint16  ,
 
)

Medium qualifier 16 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( mediump_uint32  ,
aligned_mediump_uint32  ,
 
)

Medium qualifier 32 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( mediump_uint64  ,
aligned_mediump_uint64  ,
 
)

Medium qualifier 64 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( mediump_uint8_t  ,
aligned_mediump_uint8_t  ,
 
)

Medium qualifier 8 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( mediump_uint16_t  ,
aligned_mediump_uint16_t  ,
 
)

Medium qualifier 16 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( mediump_uint32_t  ,
aligned_mediump_uint32_t  ,
 
)

Medium qualifier 32 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( mediump_uint64_t  ,
aligned_mediump_uint64_t  ,
 
)

Medium qualifier 64 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( mediump_u8  ,
aligned_mediump_u8  ,
 
)

Medium qualifier 8 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( mediump_u16  ,
aligned_mediump_u16  ,
 
)

Medium qualifier 16 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( mediump_u32  ,
aligned_mediump_u32  ,
 
)

Medium qualifier 32 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( mediump_u64  ,
aligned_mediump_u64  ,
 
)

Medium qualifier 64 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( highp_uint8  ,
aligned_highp_uint8  ,
 
)

High qualifier 8 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( highp_uint16  ,
aligned_highp_uint16  ,
 
)

High qualifier 16 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( highp_uint32  ,
aligned_highp_uint32  ,
 
)

High qualifier 32 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( highp_uint64  ,
aligned_highp_uint64  ,
 
)

High qualifier 64 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( highp_uint8_t  ,
aligned_highp_uint8_t  ,
 
)

High qualifier 8 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( highp_uint16_t  ,
aligned_highp_uint16_t  ,
 
)

High qualifier 16 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( highp_uint32_t  ,
aligned_highp_uint32_t  ,
 
)

High qualifier 32 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( highp_uint64_t  ,
aligned_highp_uint64_t  ,
 
)

High qualifier 64 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( highp_u8  ,
aligned_highp_u8  ,
 
)

High qualifier 8 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( highp_u16  ,
aligned_highp_u16  ,
 
)

High qualifier 16 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( highp_u32  ,
aligned_highp_u32  ,
 
)

High qualifier 32 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( highp_u64  ,
aligned_highp_u64  ,
 
)

High qualifier 64 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( uint8  ,
aligned_uint8  ,
 
)

Default qualifier 8 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( uint16  ,
aligned_uint16  ,
 
)

Default qualifier 16 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( uint32  ,
aligned_uint32  ,
 
)

Default qualifier 32 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( uint64  ,
aligned_uint64  ,
 
)

Default qualifier 64 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( uint8_t  ,
aligned_uint8_t  ,
 
)

Default qualifier 8 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( uint16_t  ,
aligned_uint16_t  ,
 
)

Default qualifier 16 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( uint32_t  ,
aligned_uint32_t  ,
 
)

Default qualifier 32 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( uint64_t  ,
aligned_uint64_t  ,
 
)

Default qualifier 64 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( u8  ,
aligned_u8  ,
 
)

Default qualifier 8 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( u16  ,
aligned_u16  ,
 
)

Default qualifier 16 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( u32  ,
aligned_u32  ,
 
)

Default qualifier 32 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( u64  ,
aligned_u64  ,
 
)

Default qualifier 64 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( uvec1  ,
aligned_uvec1  ,
 
)

Default qualifier 32 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( uvec2  ,
aligned_uvec2  ,
 
)

Default qualifier 32 bit unsigned integer aligned vector of 2 components type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( uvec3  ,
aligned_uvec3  ,
16   
)

Default qualifier 32 bit unsigned integer aligned vector of 3 components type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( uvec4  ,
aligned_uvec4  ,
16   
)

Default qualifier 32 bit unsigned integer aligned vector of 4 components type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( u8vec1  ,
aligned_u8vec1  ,
 
)

Default qualifier 8 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( u8vec2  ,
aligned_u8vec2  ,
 
)

Default qualifier 8 bit unsigned integer aligned vector of 2 components type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( u8vec3  ,
aligned_u8vec3  ,
 
)

Default qualifier 8 bit unsigned integer aligned vector of 3 components type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( u8vec4  ,
aligned_u8vec4  ,
 
)

Default qualifier 8 bit unsigned integer aligned vector of 4 components type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( u16vec1  ,
aligned_u16vec1  ,
 
)

Default qualifier 16 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( u16vec2  ,
aligned_u16vec2  ,
 
)

Default qualifier 16 bit unsigned integer aligned vector of 2 components type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( u16vec3  ,
aligned_u16vec3  ,
 
)

Default qualifier 16 bit unsigned integer aligned vector of 3 components type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( u16vec4  ,
aligned_u16vec4  ,
 
)

Default qualifier 16 bit unsigned integer aligned vector of 4 components type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( u32vec1  ,
aligned_u32vec1  ,
 
)

Default qualifier 32 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( u32vec2  ,
aligned_u32vec2  ,
 
)

Default qualifier 32 bit unsigned integer aligned vector of 2 components type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( u32vec3  ,
aligned_u32vec3  ,
16   
)

Default qualifier 32 bit unsigned integer aligned vector of 3 components type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( u32vec4  ,
aligned_u32vec4  ,
16   
)

Default qualifier 32 bit unsigned integer aligned vector of 4 components type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( u64vec1  ,
aligned_u64vec1  ,
 
)

Default qualifier 64 bit unsigned integer aligned scalar type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( u64vec2  ,
aligned_u64vec2  ,
16   
)

Default qualifier 64 bit unsigned integer aligned vector of 2 components type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( u64vec3  ,
aligned_u64vec3  ,
32   
)

Default qualifier 64 bit unsigned integer aligned vector of 3 components type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( u64vec4  ,
aligned_u64vec4  ,
32   
)

Default qualifier 64 bit unsigned integer aligned vector of 4 components type.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( float32  ,
aligned_float32  ,
 
)

32 bit single-qualifier floating-point aligned scalar.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( float32_t  ,
aligned_float32_t  ,
 
)

32 bit single-qualifier floating-point aligned scalar.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( float32  ,
aligned_f32  ,
 
)

32 bit single-qualifier floating-point aligned scalar.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( float64  ,
aligned_float64  ,
 
)

64 bit double-qualifier floating-point aligned scalar.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( float64_t  ,
aligned_float64_t  ,
 
)

64 bit double-qualifier floating-point aligned scalar.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( float64  ,
aligned_f64  ,
 
)

64 bit double-qualifier floating-point aligned scalar.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( vec1  ,
aligned_vec1  ,
 
)

Single-qualifier floating-point aligned vector of 1 component.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( vec2  ,
aligned_vec2  ,
 
)

Single-qualifier floating-point aligned vector of 2 components.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( vec3  ,
aligned_vec3  ,
16   
)

Single-qualifier floating-point aligned vector of 3 components.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( vec4  ,
aligned_vec4  ,
16   
)

Single-qualifier floating-point aligned vector of 4 components.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( fvec1  ,
aligned_fvec1  ,
 
)

Single-qualifier floating-point aligned vector of 1 component.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( fvec2  ,
aligned_fvec2  ,
 
)

Single-qualifier floating-point aligned vector of 2 components.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( fvec3  ,
aligned_fvec3  ,
16   
)

Single-qualifier floating-point aligned vector of 3 components.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( fvec4  ,
aligned_fvec4  ,
16   
)

Single-qualifier floating-point aligned vector of 4 components.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f32vec1  ,
aligned_f32vec1  ,
 
)

Single-qualifier floating-point aligned vector of 1 component.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f32vec2  ,
aligned_f32vec2  ,
 
)

Single-qualifier floating-point aligned vector of 2 components.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f32vec3  ,
aligned_f32vec3  ,
16   
)

Single-qualifier floating-point aligned vector of 3 components.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f32vec4  ,
aligned_f32vec4  ,
16   
)

Single-qualifier floating-point aligned vector of 4 components.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( dvec1  ,
aligned_dvec1  ,
 
)

Double-qualifier floating-point aligned vector of 1 component.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( dvec2  ,
aligned_dvec2  ,
16   
)

Double-qualifier floating-point aligned vector of 2 components.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( dvec3  ,
aligned_dvec3  ,
32   
)

Double-qualifier floating-point aligned vector of 3 components.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( dvec4  ,
aligned_dvec4  ,
32   
)

Double-qualifier floating-point aligned vector of 4 components.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f64vec1  ,
aligned_f64vec1  ,
 
)

Double-qualifier floating-point aligned vector of 1 component.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f64vec2  ,
aligned_f64vec2  ,
16   
)

Double-qualifier floating-point aligned vector of 2 components.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f64vec3  ,
aligned_f64vec3  ,
32   
)

Double-qualifier floating-point aligned vector of 3 components.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f64vec4  ,
aligned_f64vec4  ,
32   
)

Double-qualifier floating-point aligned vector of 4 components.

See also
GLM_GTX_type_aligned
GLM_ALIGNED_TYPEDEF ( mat2  ,
aligned_mat2  ,
16   
)

Single-qualifier floating-point aligned 1x1 matrix.

See also
GLM_GTX_type_aligned Single-qualifier floating-point aligned 2x2 matrix.
GLM_GTX_type_aligned
GLM_ALIGNED_TYPEDEF ( mat3  ,
aligned_mat3  ,
16   
)

Single-qualifier floating-point aligned 3x3 matrix.

See also
GLM_GTX_type_aligned
GLM_ALIGNED_TYPEDEF ( mat4  ,
aligned_mat4  ,
16   
)

Single-qualifier floating-point aligned 4x4 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( fmat2x2  ,
aligned_fmat2  ,
16   
)

Single-qualifier floating-point aligned 1x1 matrix.

See also
GLM_GTX_type_aligned Single-qualifier floating-point aligned 2x2 matrix.
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( fmat3x3  ,
aligned_fmat3  ,
16   
)

Single-qualifier floating-point aligned 3x3 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( fmat4x4  ,
aligned_fmat4  ,
16   
)

Single-qualifier floating-point aligned 4x4 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( fmat2x2  ,
aligned_fmat2x2  ,
16   
)

Single-qualifier floating-point aligned 1x1 matrix.

See also
GLM_GTX_type_aligned Single-qualifier floating-point aligned 2x2 matrix.
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( fmat2x3  ,
aligned_fmat2x3  ,
16   
)

Single-qualifier floating-point aligned 2x3 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( fmat2x4  ,
aligned_fmat2x4  ,
16   
)

Single-qualifier floating-point aligned 2x4 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( fmat3x2  ,
aligned_fmat3x2  ,
16   
)

Single-qualifier floating-point aligned 3x2 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( fmat3x3  ,
aligned_fmat3x3  ,
16   
)

Single-qualifier floating-point aligned 3x3 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( fmat3x4  ,
aligned_fmat3x4  ,
16   
)

Single-qualifier floating-point aligned 3x4 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( fmat4x2  ,
aligned_fmat4x2  ,
16   
)

Single-qualifier floating-point aligned 4x2 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( fmat4x3  ,
aligned_fmat4x3  ,
16   
)

Single-qualifier floating-point aligned 4x3 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( fmat4x4  ,
aligned_fmat4x4  ,
16   
)

Single-qualifier floating-point aligned 4x4 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f32mat2x2  ,
aligned_f32mat2  ,
16   
)

Single-qualifier floating-point aligned 1x1 matrix.

See also
GLM_GTX_type_aligned Single-qualifier floating-point aligned 2x2 matrix.
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f32mat3x3  ,
aligned_f32mat3  ,
16   
)

Single-qualifier floating-point aligned 3x3 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f32mat4x4  ,
aligned_f32mat4  ,
16   
)

Single-qualifier floating-point aligned 4x4 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f32mat2x2  ,
aligned_f32mat2x2  ,
16   
)

Single-qualifier floating-point aligned 1x1 matrix.

See also
GLM_GTX_type_aligned Single-qualifier floating-point aligned 2x2 matrix.
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f32mat2x3  ,
aligned_f32mat2x3  ,
16   
)

Single-qualifier floating-point aligned 2x3 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f32mat2x4  ,
aligned_f32mat2x4  ,
16   
)

Single-qualifier floating-point aligned 2x4 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f32mat3x2  ,
aligned_f32mat3x2  ,
16   
)

Single-qualifier floating-point aligned 3x2 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f32mat3x3  ,
aligned_f32mat3x3  ,
16   
)

Single-qualifier floating-point aligned 3x3 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f32mat3x4  ,
aligned_f32mat3x4  ,
16   
)

Single-qualifier floating-point aligned 3x4 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f32mat4x2  ,
aligned_f32mat4x2  ,
16   
)

Single-qualifier floating-point aligned 4x2 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f32mat4x3  ,
aligned_f32mat4x3  ,
16   
)

Single-qualifier floating-point aligned 4x3 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f32mat4x4  ,
aligned_f32mat4x4  ,
16   
)

Single-qualifier floating-point aligned 4x4 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f64mat2x2  ,
aligned_f64mat2  ,
32   
)

Double-qualifier floating-point aligned 1x1 matrix.

See also
GLM_GTX_type_aligned Double-qualifier floating-point aligned 2x2 matrix.
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f64mat3x3  ,
aligned_f64mat3  ,
32   
)

Double-qualifier floating-point aligned 3x3 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f64mat4x4  ,
aligned_f64mat4  ,
32   
)

Double-qualifier floating-point aligned 4x4 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f64mat2x2  ,
aligned_f64mat2x2  ,
32   
)

Double-qualifier floating-point aligned 1x1 matrix.

See also
GLM_GTX_type_aligned Double-qualifier floating-point aligned 2x2 matrix.
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f64mat2x3  ,
aligned_f64mat2x3  ,
32   
)

Double-qualifier floating-point aligned 2x3 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f64mat2x4  ,
aligned_f64mat2x4  ,
32   
)

Double-qualifier floating-point aligned 2x4 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f64mat3x2  ,
aligned_f64mat3x2  ,
32   
)

Double-qualifier floating-point aligned 3x2 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f64mat3x3  ,
aligned_f64mat3x3  ,
32   
)

Double-qualifier floating-point aligned 3x3 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f64mat3x4  ,
aligned_f64mat3x4  ,
32   
)

Double-qualifier floating-point aligned 3x4 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f64mat4x2  ,
aligned_f64mat4x2  ,
32   
)

Double-qualifier floating-point aligned 4x2 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f64mat4x3  ,
aligned_f64mat4x3  ,
32   
)

Double-qualifier floating-point aligned 4x3 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f64mat4x4  ,
aligned_f64mat4x4  ,
32   
)

Double-qualifier floating-point aligned 4x4 matrix.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( quat  ,
aligned_quat  ,
16   
)

Single-qualifier floating-point aligned quaternion.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( quat  ,
aligned_fquat  ,
16   
)

Single-qualifier floating-point aligned quaternion.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( dquat  ,
aligned_dquat  ,
32   
)

Double-qualifier floating-point aligned quaternion.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f32quat  ,
aligned_f32quat  ,
16   
)

Single-qualifier floating-point aligned quaternion.

See also
GLM_GTX_type_aligned
glm::GLM_ALIGNED_TYPEDEF ( f64quat  ,
aligned_f64quat  ,
32   
)

Double-qualifier floating-point aligned quaternion.

See also
GLM_GTX_type_aligned
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00365.html ================================================ 0.9.9 API documentation: GLM_GTX_type_trait
0.9.9 API documentation
GLM_GTX_type_trait

Include <glm/gtx/type_trait.hpp> to use the features of this extension. More...

Detailed Description

Include <glm/gtx/type_trait.hpp> to use the features of this extension.

Defines traits for each type.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00366.html ================================================ 0.9.9 API documentation: GLM_GTX_vec_swizzle
GLM_GTX_vec_swizzle

Include <glm/gtx/vec_swizzle.hpp> to use the features of this extension. More...

Include <glm/gtx/vec_swizzle.hpp> to use the features of this extension.

Functions to perform swizzle operation.

================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00367.html ================================================ 0.9.9 API documentation: GLM_GTX_vector_angle
GLM_GTX_vector_angle

Include <glm/gtx/vector_angle.hpp> to use the features of this extension. More...

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL T angle (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Returns the absolute angle between two vectors. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T orientedAngle (vec< 2, T, Q > const &x, vec< 2, T, Q > const &y)
 Returns the oriented angle between two 2d vectors. More...
 
template<typename T , qualifier Q>
GLM_FUNC_DECL T orientedAngle (vec< 3, T, Q > const &x, vec< 3, T, Q > const &y, vec< 3, T, Q > const &ref)
 Returns the oriented angle between two 3d vectors based on a reference axis. More...
 

Detailed Description

Include <glm/gtx/vector_angle.hpp> to use the features of this extension.

Compute the angle between vectors.

Function Documentation

GLM_FUNC_DECL T glm::angle(vec<L, T, Q> const& x, vec<L, T, Q> const& y)

Returns the absolute angle between two vectors.

Parameters need to be normalized.

See also
GLM_GTX_vector_angle extension.
GLM_FUNC_DECL T glm::orientedAngle(vec<2, T, Q> const& x, vec<2, T, Q> const& y)

Returns the oriented angle between two 2d vectors.

Parameters need to be normalized.

See also
GLM_GTX_vector_angle extension.
GLM_FUNC_DECL T glm::orientedAngle(vec<3, T, Q> const& x, vec<3, T, Q> const& y, vec<3, T, Q> const& ref)

Returns the oriented angle between two 3d vectors based on a reference axis.

Parameters need to be normalized.

See also
GLM_GTX_vector_angle extension.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00368.html ================================================ 0.9.9 API documentation: GLM_GTX_vector_query
GLM_GTX_vector_query

Include <glm/gtx/vector_query.hpp> to use the features of this extension. More...

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL bool areCollinear (vec< L, T, Q > const &v0, vec< L, T, Q > const &v1, T const &epsilon)
 Check whether two vectors are collinear. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL bool areOrthogonal (vec< L, T, Q > const &v0, vec< L, T, Q > const &v1, T const &epsilon)
 Check whether two vectors are orthogonal. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL bool areOrthonormal (vec< L, T, Q > const &v0, vec< L, T, Q > const &v1, T const &epsilon)
 Check whether two vectors are orthonormal. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, bool, Q > isCompNull (vec< L, T, Q > const &v, T const &epsilon)
 Check whether each component of a vector is null. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL bool isNormalized (vec< L, T, Q > const &v, T const &epsilon)
 Check whether a vector is normalized. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL bool isNull (vec< L, T, Q > const &v, T const &epsilon)
 Check whether a vector is null. More...
 

Detailed Description

Include <glm/gtx/vector_query.hpp> to use the features of this extension.

Query information about vector types.

Function Documentation

GLM_FUNC_DECL bool glm::areCollinear(vec<L, T, Q> const& v0, vec<L, T, Q> const& v1, T const& epsilon)

Check whether two vectors are collinear.

See also
GLM_GTX_vector_query extensions.
GLM_FUNC_DECL bool glm::areOrthogonal(vec<L, T, Q> const& v0, vec<L, T, Q> const& v1, T const& epsilon)

Check whether two vectors are orthogonal.

See also
GLM_GTX_vector_query extensions.
GLM_FUNC_DECL bool glm::areOrthonormal(vec<L, T, Q> const& v0, vec<L, T, Q> const& v1, T const& epsilon)

Check whether two vectors are orthonormal.

See also
GLM_GTX_vector_query extensions.
GLM_FUNC_DECL vec<L, bool, Q> glm::isCompNull(vec<L, T, Q> const& v, T const& epsilon)

Check whether each component of a vector is null.

See also
GLM_GTX_vector_query extensions.
GLM_FUNC_DECL bool glm::isNormalized(vec<L, T, Q> const& v, T const& epsilon)

Check whether a vector is normalized.

See also
GLM_GTX_vector_query extensions.
GLM_FUNC_DECL bool glm::isNull(vec<L, T, Q> const& v, T const& epsilon)

Check whether a vector is null.

See also
GLM_GTX_vector_query extensions.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00369.html ================================================ 0.9.9 API documentation: GLM_GTX_wrap
GLM_GTX_wrap

Include <glm/gtx/wrap.hpp> to use the features of this extension. More...

Functions

template<typename genType >
GLM_FUNC_DECL genType clamp (genType const &Texcoord)
 Simulate GL_CLAMP OpenGL wrap mode. More...
 
template<typename genType >
GLM_FUNC_DECL genType mirrorClamp (genType const &Texcoord)
 Simulate GL_MIRROR_CLAMP_TO_EDGE OpenGL wrap mode. More...
 
template<typename genType >
GLM_FUNC_DECL genType mirrorRepeat (genType const &Texcoord)
 Simulate GL_MIRRORED_REPEAT OpenGL wrap mode. More...
 
template<typename genType >
GLM_FUNC_DECL genType repeat (genType const &Texcoord)
 Simulate GL_REPEAT OpenGL wrap mode. More...
 

Detailed Description

Include <glm/gtx/wrap.hpp> to use the features of this extension.

Wrapping mode of texture coordinates.

Function Documentation

GLM_FUNC_DECL genType glm::clamp(genType const& Texcoord)

Simulate GL_CLAMP OpenGL wrap mode.

See also
GLM_GTX_wrap extension.
GLM_FUNC_DECL genType glm::mirrorClamp(genType const& Texcoord)

Simulate GL_MIRROR_CLAMP_TO_EDGE OpenGL wrap mode.

See also
GLM_GTX_wrap extension.
GLM_FUNC_DECL genType glm::mirrorRepeat(genType const& Texcoord)

Simulate GL_MIRRORED_REPEAT OpenGL wrap mode.

See also
GLM_GTX_wrap extension.
GLM_FUNC_DECL genType glm::repeat(genType const& Texcoord)

Simulate GL_REPEAT OpenGL wrap mode.

See also
GLM_GTX_wrap extension.
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00370.html ================================================ 0.9.9 API documentation: Integer functions
Integer functions

Provides GLSL functions on integer types. More...

Functions

template<typename genType >
GLM_FUNC_DECL int bitCount (genType v)
 Returns the number of bits set to 1 in the binary representation of value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, int, Q > bitCount (vec< L, T, Q > const &v)
 Returns the number of bits set to 1 in the binary representation of value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > bitfieldExtract (vec< L, T, Q > const &Value, int Offset, int Bits)
 Extracts bits [offset, offset + bits - 1] from value, returning them in the least significant bits of the result. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > bitfieldInsert (vec< L, T, Q > const &Base, vec< L, T, Q > const &Insert, int Offset, int Bits)
 Returns the insertion of the bits least-significant bits of insert into base. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > bitfieldReverse (vec< L, T, Q > const &v)
 Returns the reversal of the bits of value. More...
 
template<typename genIUType >
GLM_FUNC_DECL int findLSB (genIUType x)
 Returns the bit number of the least significant bit set to 1 in the binary representation of value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, int, Q > findLSB (vec< L, T, Q > const &v)
 Returns the bit number of the least significant bit set to 1 in the binary representation of value. More...
 
template<typename genIUType >
GLM_FUNC_DECL int findMSB (genIUType x)
 Returns the bit number of the most significant bit in the binary representation of value. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, int, Q > findMSB (vec< L, T, Q > const &v)
 Returns the bit number of the most significant bit in the binary representation of value. More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL void imulExtended (vec< L, int, Q > const &x, vec< L, int, Q > const &y, vec< L, int, Q > &msb, vec< L, int, Q > &lsb)
 Multiplies 32-bit integers x and y, producing a 64-bit result. More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL vec< L, uint, Q > uaddCarry (vec< L, uint, Q > const &x, vec< L, uint, Q > const &y, vec< L, uint, Q > &carry)
 Adds 32-bit unsigned integer x and y, returning the sum modulo pow(2, 32). More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL void umulExtended (vec< L, uint, Q > const &x, vec< L, uint, Q > const &y, vec< L, uint, Q > &msb, vec< L, uint, Q > &lsb)
 Multiplies 32-bit integers x and y, producing a 64-bit result. More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL vec< L, uint, Q > usubBorrow (vec< L, uint, Q > const &x, vec< L, uint, Q > const &y, vec< L, uint, Q > &borrow)
 Subtracts the 32-bit unsigned integer y from x, returning the difference if non-negative, or pow(2, 32) plus the difference otherwise. More...
 

Detailed Description

Provides GLSL functions on integer types.

These all operate component-wise. The description is per component. The notation [a, b] means the set of bits from bit-number a through bit-number b, inclusive. The lowest-order bit is bit 0.

Include <glm/integer.hpp> to use these core features.

Function Documentation

GLM_FUNC_DECL int glm::bitCount ( genType  v)

Returns the number of bits set to 1 in the binary representation of value.

Template Parameters
genType: Signed or unsigned integer scalar or vector types.
See also
GLSL bitCount man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
GLM_FUNC_DECL vec<L, int, Q> glm::bitCount ( vec< L, T, Q > const &  v)

Returns the number of bits set to 1 in the binary representation of value.

Template Parameters
L: Integer between 1 and 4, inclusive, that qualifies the dimension of the vector.
T: Signed or unsigned integer scalar or vector types.
See also
GLSL bitCount man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
GLM_FUNC_DECL vec<L, T, Q> glm::bitfieldExtract ( vec< L, T, Q > const &  Value,
int  Offset,
int  Bits 
)

Extracts bits [offset, offset + bits - 1] from value, returning them in the least significant bits of the result.

For unsigned data types, the most significant bits of the result will be set to zero. For signed data types, the most significant bits will be set to the value of bit offset + bits - 1.

If bits is zero, the result will be zero. The result will be undefined if offset or bits is negative, or if the sum of offset and bits is greater than the number of bits used to store the operand.

Template Parameters
L: Integer between 1 and 4, inclusive, that qualifies the dimension of the vector.
T: Signed or unsigned integer scalar types.
See also
GLSL bitfieldExtract man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
GLM_FUNC_DECL vec<L, T, Q> glm::bitfieldInsert ( vec< L, T, Q > const &  Base,
vec< L, T, Q > const &  Insert,
int  Offset,
int  Bits 
)

Returns the insertion of the bits least-significant bits of insert into base.

The result will have bits [offset, offset + bits - 1] taken from bits [0, bits - 1] of insert, and all other bits taken directly from the corresponding bits of base. If bits is zero, the result will simply be base. The result will be undefined if offset or bits is negative, or if the sum of offset and bits is greater than the number of bits used to store the operand.

Template Parameters
L: Integer between 1 and 4, inclusive, that qualifies the dimension of the vector.
T: Signed or unsigned integer scalar or vector types.
See also
GLSL bitfieldInsert man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
GLM_FUNC_DECL vec<L, T, Q> glm::bitfieldReverse ( vec< L, T, Q > const &  v)

Returns the reversal of the bits of value.

The bit numbered n of the result will be taken from bit (bits - 1) - n of value, where bits is the total number of bits used to represent value.

Template Parameters
L: Integer between 1 and 4, inclusive, that qualifies the dimension of the vector.
T: Signed or unsigned integer scalar or vector types.
See also
GLSL bitfieldReverse man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
GLM_FUNC_DECL int glm::findLSB ( genIUType  x)

Returns the bit number of the least significant bit set to 1 in the binary representation of value.

If value is zero, -1 will be returned.

Template Parameters
genIUType: Signed or unsigned integer scalar types.
See also
GLSL findLSB man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
GLM_FUNC_DECL vec<L, int, Q> glm::findLSB ( vec< L, T, Q > const &  v)

Returns the bit number of the least significant bit set to 1 in the binary representation of value.

If value is zero, -1 will be returned.

Template Parameters
L: Integer between 1 and 4, inclusive, that qualifies the dimension of the vector.
T: Signed or unsigned integer scalar types.
See also
GLSL findLSB man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
GLM_FUNC_DECL int glm::findMSB ( genIUType  x)

Returns the bit number of the most significant bit in the binary representation of value.

For positive integers, the result will be the bit number of the most significant bit set to 1. For negative integers, the result will be the bit number of the most significant bit set to 0. For a value of zero or negative one, -1 will be returned.

Template Parameters
genIUType: Signed or unsigned integer scalar types.
See also
GLSL findMSB man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
GLM_FUNC_DECL vec<L, int, Q> glm::findMSB ( vec< L, T, Q > const &  v)

Returns the bit number of the most significant bit in the binary representation of value.

For positive integers, the result will be the bit number of the most significant bit set to 1. For negative integers, the result will be the bit number of the most significant bit set to 0. For a value of zero or negative one, -1 will be returned.

Template Parameters
L: Integer between 1 and 4, inclusive, that qualifies the dimension of the vector.
T: Signed or unsigned integer scalar types.
See also
GLSL findMSB man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
GLM_FUNC_DECL void glm::imulExtended ( vec< L, int, Q > const &  x,
vec< L, int, Q > const &  y,
vec< L, int, Q > &  msb,
vec< L, int, Q > &  lsb 
)

Multiplies 32-bit integers x and y, producing a 64-bit result.

The 32 least-significant bits are returned in lsb. The 32 most-significant bits are returned in msb.

Template Parameters
L: Integer between 1 and 4, inclusive, that qualifies the dimension of the vector.
See also
GLSL imulExtended man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
GLM_FUNC_DECL vec<L, uint, Q> glm::uaddCarry ( vec< L, uint, Q > const &  x,
vec< L, uint, Q > const &  y,
vec< L, uint, Q > &  carry 
)

Adds 32-bit unsigned integer x and y, returning the sum modulo pow(2, 32).

The value carry is set to 0 if the sum was less than pow(2, 32), or to 1 otherwise.

Template Parameters
L: Integer between 1 and 4, inclusive, that qualifies the dimension of the vector.
See also
GLSL uaddCarry man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
GLM_FUNC_DECL void glm::umulExtended ( vec< L, uint, Q > const &  x,
vec< L, uint, Q > const &  y,
vec< L, uint, Q > &  msb,
vec< L, uint, Q > &  lsb 
)

Multiplies 32-bit integers x and y, producing a 64-bit result.

The 32 least-significant bits are returned in lsb. The 32 most-significant bits are returned in msb.

Template Parameters
L: Integer between 1 and 4, inclusive, that qualifies the dimension of the vector.
See also
GLSL umulExtended man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
GLM_FUNC_DECL vec<L, uint, Q> glm::usubBorrow ( vec< L, uint, Q > const &  x,
vec< L, uint, Q > const &  y,
vec< L, uint, Q > &  borrow 
)

Subtracts the 32-bit unsigned integer y from x, returning the difference if non-negative, or pow(2, 32) plus the difference otherwise.

The value borrow is set to 0 if x >= y, or to 1 otherwise.

Template Parameters
L: Integer between 1 and 4, inclusive, that qualifies the dimension of the vector.
See also
GLSL usubBorrow man page
GLSL 4.20.8 specification, section 8.8 Integer Functions
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00371.html ================================================
Matrix functions

Provides GLSL matrix functions. More...

Functions

template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL T determinant (mat< C, R, T, Q > const &m)
 Return the determinant of a square matrix. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL mat< C, R, T, Q > inverse (mat< C, R, T, Q > const &m)
 Return the inverse of a square matrix. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL mat< C, R, T, Q > matrixCompMult (mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y)
 Multiply matrix x by matrix y component-wise, i.e., result[i][j] is the scalar product of x[i][j] and y[i][j]. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL detail::outerProduct_trait< C, R, T, Q >::type outerProduct (vec< C, T, Q > const &c, vec< R, T, Q > const &r)
 Treats the first parameter c as a column vector and the second parameter r as a row vector and does a linear algebraic matrix multiply c * r. More...
 
template<length_t C, length_t R, typename T , qualifier Q>
GLM_FUNC_DECL mat< C, R, T, Q >::transpose_type transpose (mat< C, R, T, Q > const &x)
 Returns the transposed matrix of x. More...
 

Detailed Description

Provides GLSL matrix functions.

Include <glm/matrix.hpp> to use these core features.

Function Documentation

GLM_FUNC_DECL T glm::determinant ( mat< C, R, T, Q > const &  m)

Return the determinant of a square matrix.

Template Parameters
C: Integer between 1 and 4, inclusive, that qualifies the number of columns.
R: Integer between 1 and 4, inclusive, that qualifies the number of rows.
T: Floating-point or signed integer scalar types.
Q: Value from qualifier enum.
See also
GLSL determinant man page
GLSL 4.20.8 specification, section 8.6 Matrix Functions
GLM_FUNC_DECL mat<C, R, T, Q> glm::inverse ( mat< C, R, T, Q > const &  m)

Return the inverse of a square matrix.

Template Parameters
C: Integer between 1 and 4, inclusive, that qualifies the number of columns.
R: Integer between 1 and 4, inclusive, that qualifies the number of rows.
T: Floating-point or signed integer scalar types.
Q: Value from qualifier enum.
See also
GLSL inverse man page
GLSL 4.20.8 specification, section 8.6 Matrix Functions
GLM_FUNC_DECL mat<C, R, T, Q> glm::matrixCompMult ( mat< C, R, T, Q > const &  x,
mat< C, R, T, Q > const &  y 
)

Multiply matrix x by matrix y component-wise, i.e., result[i][j] is the scalar product of x[i][j] and y[i][j].

Template Parameters
C: Integer between 1 and 4, inclusive, that qualifies the number of columns.
R: Integer between 1 and 4, inclusive, that qualifies the number of rows.
T: Floating-point or signed integer scalar types.
Q: Value from qualifier enum.
See also
GLSL matrixCompMult man page
GLSL 4.20.8 specification, section 8.6 Matrix Functions
GLM_FUNC_DECL detail::outerProduct_trait<C, R, T, Q>::type glm::outerProduct ( vec< C, T, Q > const &  c,
vec< R, T, Q > const &  r 
)

Treats the first parameter c as a column vector and the second parameter r as a row vector and does a linear algebraic matrix multiply c * r.

Template Parameters
C: Integer between 1 and 4, inclusive, that qualifies the number of columns.
R: Integer between 1 and 4, inclusive, that qualifies the number of rows.
T: Floating-point or signed integer scalar types.
Q: Value from qualifier enum.
See also
GLSL outerProduct man page
GLSL 4.20.8 specification, section 8.6 Matrix Functions
GLM_FUNC_DECL mat<C, R, T, Q>::transpose_type glm::transpose ( mat< C, R, T, Q > const &  x)

Returns the transposed matrix of x.

Template Parameters
C: Integer between 1 and 4, inclusive, that qualifies the number of columns.
R: Integer between 1 and 4, inclusive, that qualifies the number of rows.
T: Floating-point or signed integer scalar types.
Q: Value from qualifier enum.
See also
GLSL transpose man page
GLSL 4.20.8 specification, section 8.6 Matrix Functions
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00372.html ================================================
Floating-Point Pack and Unpack Functions

Provides GLSL functions to pack and unpack half, single and double-precision floating point values into more compact integer types. More...

Functions

GLM_FUNC_DECL double packDouble2x32 (uvec2 const &v)
 Returns a double-qualifier value obtained by packing the components of v into a 64-bit value. More...
 
GLM_FUNC_DECL uint packHalf2x16 (vec2 const &v)
 Returns an unsigned integer obtained by converting the components of a two-component floating-point vector to the 16-bit floating-point representation found in the OpenGL Specification, and then packing these two 16-bit integers into a 32-bit unsigned integer. More...
 
GLM_FUNC_DECL uint packSnorm2x16 (vec2 const &v)
 First, converts each component of the normalized floating-point value v into 8- or 16-bit integer values. More...
 
GLM_FUNC_DECL uint packSnorm4x8 (vec4 const &v)
 First, converts each component of the normalized floating-point value v into 8- or 16-bit integer values. More...
 
GLM_FUNC_DECL uint packUnorm2x16 (vec2 const &v)
 First, converts each component of the normalized floating-point value v into 8- or 16-bit integer values. More...
 
GLM_FUNC_DECL uint packUnorm4x8 (vec4 const &v)
 First, converts each component of the normalized floating-point value v into 8- or 16-bit integer values. More...
 
GLM_FUNC_DECL uvec2 unpackDouble2x32 (double v)
 Returns a two-component unsigned integer vector representation of v. More...
 
GLM_FUNC_DECL vec2 unpackHalf2x16 (uint v)
 Returns a two-component floating-point vector with components obtained by unpacking a 32-bit unsigned integer into a pair of 16-bit values, interpreting those values as 16-bit floating-point numbers according to the OpenGL Specification, and converting them to 32-bit floating-point values. More...
 
GLM_FUNC_DECL vec2 unpackSnorm2x16 (uint p)
 First, unpacks a single 32-bit unsigned integer p into a pair of 16-bit unsigned integers, four 8-bit unsigned integers, or four 8-bit signed integers. More...
 
GLM_FUNC_DECL vec4 unpackSnorm4x8 (uint p)
 First, unpacks a single 32-bit unsigned integer p into a pair of 16-bit unsigned integers, four 8-bit unsigned integers, or four 8-bit signed integers. More...
 
GLM_FUNC_DECL vec2 unpackUnorm2x16 (uint p)
 First, unpacks a single 32-bit unsigned integer p into a pair of 16-bit unsigned integers, four 8-bit unsigned integers, or four 8-bit signed integers. More...
 
GLM_FUNC_DECL vec4 unpackUnorm4x8 (uint p)
 First, unpacks a single 32-bit unsigned integer p into a pair of 16-bit unsigned integers, four 8-bit unsigned integers, or four 8-bit signed integers. More...
 

Detailed Description

Provides GLSL functions to pack and unpack half, single and double-precision floating point values into more compact integer types.

These functions do not operate component-wise, rather as described in each case.

Include <glm/packing.hpp> to use these core features.

Function Documentation

GLM_FUNC_DECL double glm::packDouble2x32 ( uvec2 const &  v)

Returns a double-qualifier value obtained by packing the components of v into a 64-bit value.

If an IEEE 754 Inf or NaN is created, it will not signal, and the resulting floating point value is unspecified. Otherwise, the bit-level representation of v is preserved. The first vector component specifies the 32 least significant bits; the second component specifies the 32 most significant bits.

See also
GLSL packDouble2x32 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL uint glm::packHalf2x16 ( vec2 const &  v)

Returns an unsigned integer obtained by converting the components of a two-component floating-point vector to the 16-bit floating-point representation found in the OpenGL Specification, and then packing these two 16-bit integers into a 32-bit unsigned integer.

The first vector component specifies the 16 least-significant bits of the result; the second component specifies the 16 most-significant bits.

See also
GLSL packHalf2x16 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL uint glm::packSnorm2x16 ( vec2 const &  v)

First, converts each component of the normalized floating-point value v into 8- or 16-bit integer values.

Then, the results are packed into the returned 32-bit unsigned integer.

The conversion for component c of v to fixed point is done as follows: packSnorm2x16: round(clamp(c, -1, +1) * 32767.0)

The first component of the vector will be written to the least significant bits of the output; the last component will be written to the most significant bits.

See also
GLSL packSnorm2x16 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL uint glm::packSnorm4x8 ( vec4 const &  v)

First, converts each component of the normalized floating-point value v into 8- or 16-bit integer values.

Then, the results are packed into the returned 32-bit unsigned integer.

The conversion for component c of v to fixed point is done as follows: packSnorm4x8: round(clamp(c, -1, +1) * 127.0)

The first component of the vector will be written to the least significant bits of the output; the last component will be written to the most significant bits.

See also
GLSL packSnorm4x8 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL uint glm::packUnorm2x16 ( vec2 const &  v)

First, converts each component of the normalized floating-point value v into 8- or 16-bit integer values.

Then, the results are packed into the returned 32-bit unsigned integer.

The conversion for component c of v to fixed point is done as follows: packUnorm2x16: round(clamp(c, 0, +1) * 65535.0)

The first component of the vector will be written to the least significant bits of the output; the last component will be written to the most significant bits.

See also
GLSL packUnorm2x16 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL uint glm::packUnorm4x8 ( vec4 const &  v)

First, converts each component of the normalized floating-point value v into 8- or 16-bit integer values.

Then, the results are packed into the returned 32-bit unsigned integer.

The conversion for component c of v to fixed point is done as follows: packUnorm4x8: round(clamp(c, 0, +1) * 255.0)

The first component of the vector will be written to the least significant bits of the output; the last component will be written to the most significant bits.

See also
GLSL packUnorm4x8 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL uvec2 glm::unpackDouble2x32 ( double  v)

Returns a two-component unsigned integer vector representation of v.

The bit-level representation of v is preserved. The first component of the vector contains the 32 least significant bits of the double; the second component contains the 32 most significant bits.

See also
GLSL unpackDouble2x32 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL vec2 glm::unpackHalf2x16 ( uint  v)

Returns a two-component floating-point vector with components obtained by unpacking a 32-bit unsigned integer into a pair of 16-bit values, interpreting those values as 16-bit floating-point numbers according to the OpenGL Specification, and converting them to 32-bit floating-point values.

The first component of the vector is obtained from the 16 least-significant bits of v; the second component is obtained from the 16 most-significant bits of v.

See also
GLSL unpackHalf2x16 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL vec2 glm::unpackSnorm2x16 ( uint  p)

First, unpacks a single 32-bit unsigned integer p into a pair of 16-bit unsigned integers, four 8-bit unsigned integers, or four 8-bit signed integers.

Then, each component is converted to a normalized floating-point value to generate the returned two- or four-component vector.

The conversion for unpacked fixed-point value f to floating point is done as follows: unpackSnorm2x16: clamp(f / 32767.0, -1, +1)

The first component of the returned vector will be extracted from the least significant bits of the input; the last component will be extracted from the most significant bits.

See also
GLSL unpackSnorm2x16 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL vec4 glm::unpackSnorm4x8 ( uint  p)

First, unpacks a single 32-bit unsigned integer p into a pair of 16-bit unsigned integers, four 8-bit unsigned integers, or four 8-bit signed integers.

Then, each component is converted to a normalized floating-point value to generate the returned two- or four-component vector.

The conversion for unpacked fixed-point value f to floating point is done as follows: unpackSnorm4x8: clamp(f / 127.0, -1, +1)

The first component of the returned vector will be extracted from the least significant bits of the input; the last component will be extracted from the most significant bits.

See also
GLSL unpackSnorm4x8 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL vec2 glm::unpackUnorm2x16 ( uint  p)

First, unpacks a single 32-bit unsigned integer p into a pair of 16-bit unsigned integers, four 8-bit unsigned integers, or four 8-bit signed integers.

Then, each component is converted to a normalized floating-point value to generate the returned two- or four-component vector.

The conversion for unpacked fixed-point value f to floating point is done as follows: unpackUnorm2x16: f / 65535.0

The first component of the returned vector will be extracted from the least significant bits of the input; the last component will be extracted from the most significant bits.

See also
GLSL unpackUnorm2x16 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
GLM_FUNC_DECL vec4 glm::unpackUnorm4x8 ( uint  p)

First, unpacks a single 32-bit unsigned integer p into a pair of 16-bit unsigned integers, four 8-bit unsigned integers, or four 8-bit signed integers.

Then, each component is converted to a normalized floating-point value to generate the returned two- or four-component vector.

The conversion for unpacked fixed-point value f to floating point is done as follows: unpackUnorm4x8: f / 255.0

The first component of the returned vector will be extracted from the least significant bits of the input; the last component will be extracted from the most significant bits.

See also
GLSL unpackUnorm4x8 man page
GLSL 4.20.8 specification, section 8.4 Floating-Point Pack and Unpack Functions
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00373.html ================================================
Angle and Trigonometry Functions

Function parameters specified as angle are assumed to be in units of radians. More...

Functions

template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > acos (vec< L, T, Q > const &x)
 Arc cosine. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > acosh (vec< L, T, Q > const &x)
 Arc hyperbolic cosine; returns the non-negative inverse of cosh. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > asin (vec< L, T, Q > const &x)
 Arc sine. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > asinh (vec< L, T, Q > const &x)
 Arc hyperbolic sine; returns the inverse of sinh. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > atan (vec< L, T, Q > const &y, vec< L, T, Q > const &x)
 Arc tangent. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > atan (vec< L, T, Q > const &y_over_x)
 Arc tangent. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > atanh (vec< L, T, Q > const &x)
 Arc hyperbolic tangent; returns the inverse of tanh. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > cos (vec< L, T, Q > const &angle)
 The standard trigonometric cosine function. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > cosh (vec< L, T, Q > const &angle)
 Returns the hyperbolic cosine function, (exp(x) + exp(-x)) / 2. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > degrees (vec< L, T, Q > const &radians)
 Converts radians to degrees and returns the result. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, T, Q > radians (vec< L, T, Q > const &degrees)
 Converts degrees to radians and returns the result. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > sin (vec< L, T, Q > const &angle)
 The standard trigonometric sine function. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > sinh (vec< L, T, Q > const &angle)
 Returns the hyperbolic sine function, (exp(x) - exp(-x)) / 2. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > tan (vec< L, T, Q > const &angle)
 The standard trigonometric tangent function. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL vec< L, T, Q > tanh (vec< L, T, Q > const &angle)
 Returns the hyperbolic tangent function, sinh(angle) / cosh(angle) More...
 

Detailed Description

Function parameters specified as angle are assumed to be in units of radians.

In no case will any of these functions result in a divide by zero error. If the divisor of a ratio is 0, then results will be undefined.

These all operate component-wise. The description is per component.

Include <glm/trigonometric.hpp> to use these core features.

See also
ext_vector_trigonometric

Function Documentation

GLM_FUNC_DECL vec<L, T, Q> glm::acos ( vec< L, T, Q > const &  x)

Arc cosine.

Returns an angle whose cosine is x. The range of values returned by this function is [0, PI]. Results are undefined if |x| > 1.

Template Parameters
L: Integer between 1 and 4, inclusive, that qualifies the dimension of the vector.
T: Floating-point scalar types.
Q: Value from qualifier enum.
See also
GLSL acos man page
GLSL 4.20.8 specification, section 8.1 Angle and Trigonometry Functions
GLM_FUNC_DECL vec<L, T, Q> glm::acosh ( vec< L, T, Q > const &  x)

Arc hyperbolic cosine; returns the non-negative inverse of cosh.

Results are undefined if x < 1.

Template Parameters
L: Integer between 1 and 4, inclusive, that qualifies the dimension of the vector.
T: Floating-point scalar types.
Q: Value from qualifier enum.
See also
GLSL acosh man page
GLSL 4.20.8 specification, section 8.1 Angle and Trigonometry Functions
GLM_FUNC_DECL vec<L, T, Q> glm::asin ( vec< L, T, Q > const &  x)

Arc sine.

Returns an angle whose sine is x. The range of values returned by this function is [-PI/2, PI/2]. Results are undefined if |x| > 1.

Template Parameters
L: Integer between 1 and 4, inclusive, that qualifies the dimension of the vector.
T: Floating-point scalar types.
Q: Value from qualifier enum.
See also
GLSL asin man page
GLSL 4.20.8 specification, section 8.1 Angle and Trigonometry Functions
GLM_FUNC_DECL vec<L, T, Q> glm::asinh ( vec< L, T, Q > const &  x)

Arc hyperbolic sine; returns the inverse of sinh.

Template Parameters
L: Integer between 1 and 4, inclusive, that qualifies the dimension of the vector.
T: Floating-point scalar types.
Q: Value from qualifier enum.
See also
GLSL asinh man page
GLSL 4.20.8 specification, section 8.1 Angle and Trigonometry Functions
GLM_FUNC_DECL vec<L, T, Q> glm::atan ( vec< L, T, Q > const &  y,
vec< L, T, Q > const &  x 
)

Arc tangent.

Returns an angle whose tangent is y/x. The signs of x and y are used to determine what quadrant the angle is in. The range of values returned by this function is [-PI, PI]. Results are undefined if x and y are both 0.

Template Parameters
L: Integer between 1 and 4, inclusive, that qualifies the dimension of the vector.
T: Floating-point scalar types.
Q: Value from qualifier enum.
See also
GLSL atan man page
GLSL 4.20.8 specification, section 8.1 Angle and Trigonometry Functions

Referenced by glm::atan2().

GLM_FUNC_DECL vec<L, T, Q> glm::atan ( vec< L, T, Q > const &  y_over_x)

Arc tangent.

Returns an angle whose tangent is y_over_x. The range of values returned by this function is [-PI/2, PI/2].

Template Parameters
L: Integer between 1 and 4, inclusive, that qualifies the dimension of the vector.
T: Floating-point scalar types.
Q: Value from qualifier enum.
See also
GLSL atan man page
GLSL 4.20.8 specification, section 8.1 Angle and Trigonometry Functions
GLM_FUNC_DECL vec<L, T, Q> glm::atanh ( vec< L, T, Q > const &  x)

Arc hyperbolic tangent; returns the inverse of tanh.

Results are undefined if abs(x) >= 1.

Template Parameters
L: Integer between 1 and 4, inclusive, that qualifies the dimension of the vector.
T: Floating-point scalar types.
Q: Value from qualifier enum.
See also
GLSL atanh man page
GLSL 4.20.8 specification, section 8.1 Angle and Trigonometry Functions
GLM_FUNC_DECL vec<L, T, Q> glm::cos ( vec< L, T, Q > const &  angle)

The standard trigonometric cosine function.

The values returned by this function will range from [-1, 1].

Template Parameters
L: Integer between 1 and 4, inclusive, that qualifies the dimension of the vector.
T: Floating-point scalar types.
Q: Value from qualifier enum.
See also
GLSL cos man page
GLSL 4.20.8 specification, section 8.1 Angle and Trigonometry Functions
GLM_FUNC_DECL vec<L, T, Q> glm::cosh ( vec< L, T, Q > const &  angle)

Returns the hyperbolic cosine function, (exp(x) + exp(-x)) / 2.

Template Parameters
L: Integer between 1 and 4, inclusive, that qualifies the dimension of the vector.
T: Floating-point scalar types.
Q: Value from qualifier enum.
See also
GLSL cosh man page
GLSL 4.20.8 specification, section 8.1 Angle and Trigonometry Functions
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> glm::degrees ( vec< L, T, Q > const &  radians)

Converts radians to degrees and returns the result.

Template Parameters
L: Integer between 1 and 4, inclusive, that qualifies the dimension of the vector.
T: Floating-point scalar types.
Q: Value from qualifier enum.
See also
GLSL degrees man page
GLSL 4.20.8 specification, section 8.1 Angle and Trigonometry Functions
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, T, Q> glm::radians ( vec< L, T, Q > const &  degrees)

Converts degrees to radians and returns the result.

Template Parameters
L  An integer between 1 and 4, inclusive, specifying the dimension of the vector
T  Floating-point scalar types
Q  Value from qualifier enum
See also
GLSL radians man page
GLSL 4.20.8 specification, section 8.1 Angle and Trigonometry Functions
GLM_FUNC_DECL vec<L, T, Q> glm::sin ( vec< L, T, Q > const &  angle)

The standard trigonometric sine function.

The values returned by this function will range from [-1, 1].

Template Parameters
L  An integer between 1 and 4, inclusive, specifying the dimension of the vector
T  Floating-point scalar types
Q  Value from qualifier enum
See also
GLSL sin man page
GLSL 4.20.8 specification, section 8.1 Angle and Trigonometry Functions
GLM_FUNC_DECL vec<L, T, Q> glm::sinh ( vec< L, T, Q > const &  angle)

Returns the hyperbolic sine function, (exp(x) - exp(-x)) / 2.

Template Parameters
L  An integer between 1 and 4, inclusive, specifying the dimension of the vector
T  Floating-point scalar types
Q  Value from qualifier enum
See also
GLSL sinh man page
GLSL 4.20.8 specification, section 8.1 Angle and Trigonometry Functions
GLM_FUNC_DECL vec<L, T, Q> glm::tan ( vec< L, T, Q > const &  angle)

The standard trigonometric tangent function.

Template Parameters
L  An integer between 1 and 4, inclusive, specifying the dimension of the vector
T  Floating-point scalar types
Q  Value from qualifier enum
See also
GLSL tan man page
GLSL 4.20.8 specification, section 8.1 Angle and Trigonometry Functions
GLM_FUNC_DECL vec<L, T, Q> glm::tanh ( vec< L, T, Q > const &  angle)

Returns the hyperbolic tangent function, sinh(angle) / cosh(angle).

Template Parameters
L  An integer between 1 and 4, inclusive, specifying the dimension of the vector
T  Floating-point scalar types
Q  Value from qualifier enum
See also
GLSL tanh man page
GLSL 4.20.8 specification, section 8.1 Angle and Trigonometry Functions
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/a00374.html ================================================ 0.9.9 API documentation: Vector Relational Functions
0.9.9 API documentation
Vector Relational Functions

Relational and equality operators (<, <=, >, >=, ==, !=) are defined to operate on scalars and produce scalar Boolean results. More...

Functions

template<length_t L, qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR bool all (vec< L, bool, Q > const &v)
 Returns true if all components of x are true. More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR bool any (vec< L, bool, Q > const &v)
 Returns true if any component of x is true. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > equal (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Returns the component-wise comparison result of x == y. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > greaterThan (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Returns the component-wise comparison result of x > y. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > greaterThanEqual (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Returns the component-wise comparison result of x >= y. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > lessThan (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Returns the component-wise comparison result of x < y. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > lessThanEqual (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Returns the component-wise comparison result of x <= y. More...
 
template<length_t L, qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > not_ (vec< L, bool, Q > const &v)
 Returns the component-wise logical complement of x. More...
 
template<length_t L, typename T , qualifier Q>
GLM_FUNC_DECL GLM_CONSTEXPR vec< L, bool, Q > notEqual (vec< L, T, Q > const &x, vec< L, T, Q > const &y)
 Returns the component-wise comparison result of x != y. More...
 

Detailed Description

Relational and equality operators (<, <=, >, >=, ==, !=) are defined to operate on scalars and produce scalar Boolean results.

For vector results, use the following built-in functions.

In all cases, the sizes of all the input and return vectors for any particular call must match.

Include <glm/vector_relational.hpp> to use these core features.

See also
GLM_EXT_vector_relational

Function Documentation

GLM_FUNC_DECL GLM_CONSTEXPR bool glm::all ( vec< L, bool, Q > const &  v)

Returns true if all components of x are true.

Template Parameters
L  An integer between 1 and 4, inclusive, specifying the dimension of the vector.
See also
GLSL all man page
GLSL 4.20.8 specification, section 8.7 Vector Relational Functions
GLM_FUNC_DECL GLM_CONSTEXPR bool glm::any ( vec< L, bool, Q > const &  v)

Returns true if any component of x is true.

Template Parameters
L  An integer between 1 and 4, inclusive, specifying the dimension of the vector.
See also
GLSL any man page
GLSL 4.20.8 specification, section 8.7 Vector Relational Functions
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> glm::equal ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  y 
)

Returns the component-wise comparison result of x == y.

Template Parameters
L  An integer between 1 and 4, inclusive, specifying the dimension of the vector.
T  A floating-point, integer or bool scalar type.
See also
GLSL equal man page
GLSL 4.20.8 specification, section 8.7 Vector Relational Functions
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> glm::greaterThan ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  y 
)

Returns the component-wise comparison result of x > y.

Template Parameters
L  An integer between 1 and 4, inclusive, specifying the dimension of the vector.
T  A floating-point or integer scalar type.
See also
GLSL greaterThan man page
GLSL 4.20.8 specification, section 8.7 Vector Relational Functions
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> glm::greaterThanEqual ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  y 
)

Returns the component-wise comparison result of x >= y.

Template Parameters
L  An integer between 1 and 4, inclusive, specifying the dimension of the vector.
T  A floating-point or integer scalar type.
See also
GLSL greaterThanEqual man page
GLSL 4.20.8 specification, section 8.7 Vector Relational Functions
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> glm::lessThan ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  y 
)

Returns the component-wise comparison result of x < y.

Template Parameters
L  An integer between 1 and 4, inclusive, specifying the dimension of the vector.
T  A floating-point or integer scalar type.
See also
GLSL lessThan man page
GLSL 4.20.8 specification, section 8.7 Vector Relational Functions
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> glm::lessThanEqual ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  y 
)

Returns the component-wise comparison result of x <= y.

Template Parameters
L  An integer between 1 and 4, inclusive, specifying the dimension of the vector.
T  A floating-point or integer scalar type.
See also
GLSL lessThanEqual man page
GLSL 4.20.8 specification, section 8.7 Vector Relational Functions
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> glm::not_ ( vec< L, bool, Q > const &  v)

Returns the component-wise logical complement of x.

Note: because of language incompatibilities between C++ and GLSL (not is a C++ keyword), GLM names this function not_ rather than not.

Template Parameters
L  An integer between 1 and 4, inclusive, specifying the dimension of the vector.
See also
GLSL not man page
GLSL 4.20.8 specification, section 8.7 Vector Relational Functions
GLM_FUNC_DECL GLM_CONSTEXPR vec<L, bool, Q> glm::notEqual ( vec< L, T, Q > const &  x,
vec< L, T, Q > const &  y 
)

Returns the component-wise comparison result of x != y.

Template Parameters
L  An integer between 1 and 4, inclusive, specifying the dimension of the vector.
T  A floating-point, integer or bool scalar type.
See also
GLSL notEqual man page
GLSL 4.20.8 specification, section 8.7 Vector Relational Functions
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/dir_033f5edb0915b828d2c46ed4804e5503.html ================================================ 0.9.9 API documentation: detail Directory Reference
0.9.9 API documentation
detail Directory Reference

Files

file  _features.hpp [code]
 
file  _fixes.hpp [code]
 
file  _noise.hpp [code]
 
file  _swizzle.hpp [code]
 
file  _swizzle_func.hpp [code]
 
file  _vectorize.hpp [code]
 
file  compute_common.hpp [code]
 
file  compute_vector_relational.hpp [code]
 
file  qualifier.hpp [code]
 
file  setup.hpp [code]
 
file  type_float.hpp [code]
 
file  type_half.hpp [code]
 
file  type_mat2x2.hpp [code]
 Core features
 
file  type_mat2x3.hpp [code]
 Core features
 
file  type_mat2x4.hpp [code]
 Core features
 
file  type_mat3x2.hpp [code]
 Core features
 
file  type_mat3x3.hpp [code]
 Core features
 
file  type_mat3x4.hpp [code]
 Core features
 
file  type_mat4x2.hpp [code]
 Core features
 
file  type_mat4x3.hpp [code]
 Core features
 
file  type_mat4x4.hpp [code]
 Core features
 
file  type_quat.hpp [code]
 Core features
 
file  type_vec1.hpp [code]
 Core features
 
file  type_vec2.hpp [code]
 Core features
 
file  type_vec3.hpp [code]
 Core features
 
file  type_vec4.hpp [code]
 Core features
 
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/dir_3a581ba30d25676e4b797b1f96d53b45.html ================================================ 0.9.9 API documentation: F: Directory Reference
0.9.9 API documentation
F: Directory Reference

Directories

directory  G-Truc
 
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/dir_44e5e654415abd9ca6fdeaddaff8565e.html ================================================ 0.9.9 API documentation: glm Directory Reference
0.9.9 API documentation
glm Directory Reference

Directories

directory  doc
 
directory  glm
 
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/dir_4c6bd29c73fa4e5a2509e1c15f846751.html ================================================ 0.9.9 API documentation: gtc Directory Reference
0.9.9 API documentation
gtc Directory Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/dir_5189610d3ba09ec39b766fb99b34cd93.html ================================================ 0.9.9 API documentation: doc Directory Reference
0.9.9 API documentation
doc Directory Reference

Files

file  man.doxy [code]
 
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/dir_6b66465792d005310484819a0eb0b0d3.html ================================================ 0.9.9 API documentation: ext Directory Reference
0.9.9 API documentation
ext Directory Reference

Files

file  matrix_clip_space.hpp [code]
 GLM_EXT_matrix_clip_space
 
file  matrix_common.hpp [code]
 GLM_EXT_matrix_common
 
file  matrix_double2x2.hpp [code]
 Core features
 
file  matrix_double2x2_precision.hpp [code]
 Core features
 
file  matrix_double2x3.hpp [code]
 Core features
 
file  matrix_double2x3_precision.hpp [code]
 Core features
 
file  matrix_double2x4.hpp [code]
 Core features
 
file  matrix_double2x4_precision.hpp [code]
 Core features
 
file  matrix_double3x2.hpp [code]
 Core features
 
file  matrix_double3x2_precision.hpp [code]
 Core features
 
file  matrix_double3x3.hpp [code]
 Core features
 
file  matrix_double3x3_precision.hpp [code]
 Core features
 
file  matrix_double3x4.hpp [code]
 Core features
 
file  matrix_double3x4_precision.hpp [code]
 Core features
 
file  matrix_double4x2.hpp [code]
 Core features
 
file  matrix_double4x2_precision.hpp [code]
 Core features
 
file  matrix_double4x3.hpp [code]
 Core features
 
file  matrix_double4x3_precision.hpp [code]
 Core features
 
file  matrix_double4x4.hpp [code]
 Core features
 
file  matrix_double4x4_precision.hpp [code]
 Core features
 
file  matrix_float2x2.hpp [code]
 Core features
 
file  matrix_float2x2_precision.hpp [code]
 Core features
 
file  matrix_float2x3.hpp [code]
 Core features
 
file  matrix_float2x3_precision.hpp [code]
 Core features
 
file  matrix_float2x4.hpp [code]
 Core features
 
file  matrix_float2x4_precision.hpp [code]
 Core features
 
file  matrix_float3x2.hpp [code]
 Core features
 
file  matrix_float3x2_precision.hpp [code]
 Core features
 
file  matrix_float3x3.hpp [code]
 Core features
 
file  matrix_float3x3_precision.hpp [code]
 Core features
 
file  matrix_float3x4.hpp [code]
 Core features
 
file  matrix_float3x4_precision.hpp [code]
 Core features
 
file  matrix_float4x2.hpp [code]
 Core features
 
file  matrix_float4x2_precision.hpp [code]
 Core features
file  matrix_float4x3.hpp [code]
 Core features
 
file  matrix_float4x3_precision.hpp [code]
 Core features
 
file  matrix_float4x4.hpp [code]
 Core features
 
file  matrix_float4x4_precision.hpp [code]
 Core features
 
file  matrix_projection.hpp [code]
 GLM_EXT_matrix_projection
 
file  matrix_relational.hpp [code]
 GLM_EXT_matrix_relational
 
file  ext/matrix_transform.hpp [code]
 GLM_EXT_matrix_transform
 
file  quaternion_common.hpp [code]
 GLM_EXT_quaternion_common
 
file  quaternion_double.hpp [code]
 GLM_EXT_quaternion_double
 
file  quaternion_double_precision.hpp [code]
 GLM_EXT_quaternion_double_precision
 
file  quaternion_exponential.hpp [code]
 GLM_EXT_quaternion_exponential
 
file  quaternion_float.hpp [code]
 GLM_EXT_quaternion_float
 
file  quaternion_float_precision.hpp [code]
 GLM_EXT_quaternion_float_precision
 
file  quaternion_geometric.hpp [code]
 GLM_EXT_quaternion_geometric
 
file  quaternion_relational.hpp [code]
 GLM_EXT_quaternion_relational
 
file  quaternion_transform.hpp [code]
 GLM_EXT_quaternion_transform
 
file  quaternion_trigonometric.hpp [code]
 GLM_EXT_quaternion_trigonometric
 
file  scalar_common.hpp [code]
 GLM_EXT_scalar_common
 
file  scalar_constants.hpp [code]
 GLM_EXT_scalar_constants
 
file  scalar_int_sized.hpp [code]
 GLM_EXT_scalar_int_sized
 
file  scalar_integer.hpp [code]
 GLM_EXT_scalar_integer
 
file  ext/scalar_relational.hpp [code]
 GLM_EXT_scalar_relational
 
file  scalar_uint_sized.hpp [code]
 GLM_EXT_scalar_uint_sized
 
file  scalar_ulp.hpp [code]
 GLM_EXT_scalar_ulp
 
file  vector_bool1.hpp [code]
 GLM_EXT_vector_bool1
 
file  vector_bool1_precision.hpp [code]
 GLM_EXT_vector_bool1_precision
 
file  vector_bool2.hpp [code]
 Core features
 
file  vector_bool2_precision.hpp [code]
 Core features
 
file  vector_bool3.hpp [code]
 Core features
 
file  vector_bool3_precision.hpp [code]
 Core features
 
file  vector_bool4.hpp [code]
 Core features
 
file  vector_bool4_precision.hpp [code]
 Core features
 
file  vector_common.hpp [code]
 GLM_EXT_vector_common
 
file  vector_double1.hpp [code]
 GLM_EXT_vector_double1
 
file  vector_double1_precision.hpp [code]
 GLM_EXT_vector_double1_precision
 
file  vector_double2.hpp [code]
 Core features
 
file  vector_double2_precision.hpp [code]
 Core features
 
file  vector_double3.hpp [code]
 Core features
 
file  vector_double3_precision.hpp [code]
 Core features
 
file  vector_double4.hpp [code]
 Core features
 
file  vector_double4_precision.hpp [code]
 Core features
 
file  vector_float1.hpp [code]
 GLM_EXT_vector_float1
 
file  vector_float1_precision.hpp [code]
 GLM_EXT_vector_float1_precision
 
file  vector_float2.hpp [code]
 Core features
 
file  vector_float2_precision.hpp [code]
 Core features
 
file  vector_float3.hpp [code]
 Core features
 
file  vector_float3_precision.hpp [code]
 Core features
 
file  vector_float4.hpp [code]
 Core features
 
file  vector_float4_precision.hpp [code]
 Core features
 
file  vector_int1.hpp [code]
 GLM_EXT_vector_int1
 
file  vector_int1_precision.hpp [code]
 GLM_EXT_vector_int1_precision
 
file  vector_int2.hpp [code]
 Core features
 
file  vector_int2_precision.hpp [code]
 Core features
 
file  vector_int3.hpp [code]
 Core features
 
file  vector_int3_precision.hpp [code]
 Core features
 
file  vector_int4.hpp [code]
 Core features
 
file  vector_int4_precision.hpp [code]
 Core features
 
file  vector_integer.hpp [code]
 GLM_EXT_vector_integer
 
file  ext/vector_relational.hpp [code]
 GLM_EXT_vector_relational
 
file  vector_uint1.hpp [code]
 GLM_EXT_vector_uint1
 
file  vector_uint1_precision.hpp [code]
 GLM_EXT_vector_uint1_precision
 
file  vector_uint2.hpp [code]
 Core features
 
file  vector_uint2_precision.hpp [code]
 Core features
 
file  vector_uint3.hpp [code]
 Core features
 
file  vector_uint3_precision.hpp [code]
 Core features
 
file  vector_uint4.hpp [code]
 Core features
 
file  vector_uint4_precision.hpp [code]
 Core features
 
file  vector_ulp.hpp [code]
 GLM_EXT_vector_ulp
 
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/dir_9e5fe034a00e89334fd5186c3e7db156.html ================================================ 0.9.9 API documentation: G-Truc Directory Reference
0.9.9 API documentation
G-Truc Directory Reference

Directories

directory  Source
 
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/dir_a8bee7be44182a33f3820393ae0b105d.html ================================================ 0.9.9 API documentation: G-Truc Directory Reference
0.9.9 API documentation
G-Truc Directory Reference

Directories

directory  glm
 
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/dir_cef2d71d502cb69a9252bca2297d9549.html ================================================ 0.9.9 API documentation: glm Directory Reference
0.9.9 API documentation
glm Directory Reference
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/dir_d9496f0844b48bc7e53b5af8c99b9ab2.html ================================================ 0.9.9 API documentation: Source Directory Reference
0.9.9 API documentation
Source Directory Reference

Directories

directory  G-Truc
 
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/dir_f35778ec600a1b9bbc4524e62e226aa2.html ================================================ 0.9.9 API documentation: gtx Directory Reference
0.9.9 API documentation
gtx Directory Reference

Files

file  associated_min_max.hpp [code]
 GLM_GTX_associated_min_max
 
file  bit.hpp [code]
 GLM_GTX_bit
 
file  closest_point.hpp [code]
 GLM_GTX_closest_point
 
file  color_encoding.hpp [code]
 GLM_GTX_color_encoding
 
file  gtx/color_space.hpp [code]
 GLM_GTX_color_space
 
file  color_space_YCoCg.hpp [code]
 GLM_GTX_color_space_YCoCg
 
file  gtx/common.hpp [code]
 GLM_GTX_common
 
file  compatibility.hpp [code]
 GLM_GTX_compatibility
 
file  component_wise.hpp [code]
 GLM_GTX_component_wise
 
file  dual_quaternion.hpp [code]
 GLM_GTX_dual_quaternion
 
file  easing.hpp [code]
 GLM_GTX_easing
 
file  euler_angles.hpp [code]
 GLM_GTX_euler_angles
 
file  extend.hpp [code]
 GLM_GTX_extend
 
file  extended_min_max.hpp [code]
 GLM_GTX_extended_min_max
 
file  exterior_product.hpp [code]
 GLM_GTX_exterior_product
 
file  fast_exponential.hpp [code]
 GLM_GTX_fast_exponential
 
file  fast_square_root.hpp [code]
 GLM_GTX_fast_square_root
 
file  fast_trigonometry.hpp [code]
 GLM_GTX_fast_trigonometry
 
file  functions.hpp [code]
 GLM_GTX_functions
 
file  gradient_paint.hpp [code]
 GLM_GTX_gradient_paint
 
file  handed_coordinate_space.hpp [code]
 GLM_GTX_handed_coordinate_space
 
file  hash.hpp [code]
 GLM_GTX_hash
 
file  gtx/integer.hpp [code]
 GLM_GTX_integer
 
file  intersect.hpp [code]
 GLM_GTX_intersect
 
file  io.hpp [code]
 GLM_GTX_io
 
file  log_base.hpp [code]
 GLM_GTX_log_base
 
file  matrix_cross_product.hpp [code]
 GLM_GTX_matrix_cross_product
 
file  matrix_decompose.hpp [code]
 GLM_GTX_matrix_decompose
 
file  matrix_factorisation.hpp [code]
 GLM_GTX_matrix_factorisation
 
file  matrix_interpolation.hpp [code]
 GLM_GTX_matrix_interpolation
 
file  matrix_major_storage.hpp [code]
 GLM_GTX_matrix_major_storage
 
file  matrix_operation.hpp [code]
 GLM_GTX_matrix_operation
 
file  matrix_query.hpp [code]
 GLM_GTX_matrix_query
 
file  matrix_transform_2d.hpp [code]
 GLM_GTX_matrix_transform_2d
 
file  mixed_product.hpp [code]
 GLM_GTX_mixed_product
 
file  norm.hpp [code]
 GLM_GTX_norm
 
file  normal.hpp [code]
 GLM_GTX_normal
 
file  normalize_dot.hpp [code]
 GLM_GTX_normalize_dot
 
file  number_precision.hpp [code]
 GLM_GTX_number_precision
 
file  optimum_pow.hpp [code]
 GLM_GTX_optimum_pow
 
file  orthonormalize.hpp [code]
 GLM_GTX_orthonormalize
 
file  perpendicular.hpp [code]
 GLM_GTX_perpendicular
 
file  polar_coordinates.hpp [code]
 GLM_GTX_polar_coordinates
 
file  projection.hpp [code]
 GLM_GTX_projection
 
file  gtx/quaternion.hpp [code]
 GLM_GTX_quaternion
 
file  range.hpp [code]
 GLM_GTX_range
 
file  raw_data.hpp [code]
 GLM_GTX_raw_data
 
file  rotate_normalized_axis.hpp [code]
 GLM_GTX_rotate_normalized_axis
 
file  rotate_vector.hpp [code]
 GLM_GTX_rotate_vector
 
file  scalar_multiplication.hpp [code]
 Experimental extensions
 
file  gtx/scalar_relational.hpp [code]
 GLM_GTX_scalar_relational
 
file  spline.hpp [code]
 GLM_GTX_spline
 
file  std_based_type.hpp [code]
 GLM_GTX_std_based_type
 
file  string_cast.hpp [code]
 GLM_GTX_string_cast
 
file  texture.hpp [code]
 GLM_GTX_texture
 
file  transform.hpp [code]
 GLM_GTX_transform
 
file  transform2.hpp [code]
 GLM_GTX_transform2
 
file  gtx/type_aligned.hpp [code]
 GLM_GTX_type_aligned
 
file  type_trait.hpp [code]
 GLM_GTX_type_trait
 
file  vec_swizzle.hpp [code]
 GLM_GTX_vec_swizzle
 
file  vector_angle.hpp [code]
 GLM_GTX_vector_angle
 
file  vector_query.hpp [code]
 GLM_GTX_vector_query
 
file  wrap.hpp [code]
 GLM_GTX_wrap
 
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/doxygen.css ================================================ /* The standard CSS for doxygen 1.8.10 */ body, table, div, p, dl { font: 400 14px/22px Roboto,sans-serif; } body { margin:0px; padding:0px; background-color:#bf6000; background-repeat:no-repeat; background-position:center center; background-attachment:fixed; min-height:1200px; overflow:auto; } /* @group Heading Levels */ h1.groupheader { color:#bf6000; font-size: 150%; } .title { color:#bf6000; font: 400 14px/28px Roboto,sans-serif; font-size: 150%; font-weight: bold; margin: 10px 2px; } h2.groupheader { border-bottom: 1px solid #bf6000; color:#bf6000; font-size: 150%; font-weight: normal; margin-top: 1.75em; padding-top: 8px; padding-bottom: 4px; width: 100%; } h3.groupheader { font-size: 100%; } h1, h2, h3, h4, h5, h6 { -webkit-transition: text-shadow 0.5s linear; -moz-transition: text-shadow 0.5s linear; -ms-transition: text-shadow 0.5s linear; -o-transition: text-shadow 0.5s linear; transition: text-shadow 0.5s linear; margin-right: 15px; } h1.glow, h2.glow, h3.glow, h4.glow, h5.glow, h6.glow { text-shadow: 0 0 15px cyan; } dt { font-weight: bold; } div.multicol { -moz-column-gap: 1em; -webkit-column-gap: 1em; -moz-column-count: 3; -webkit-column-count: 3; } p.startli, p.startdd { margin-top: 2px; } p.starttd { margin-top: 0px; } p.endli { margin-bottom: 0px; } p.enddd { margin-bottom: 4px; } p.endtd { margin-bottom: 2px; } /* @end */ caption { font-weight: bold; } span.legend { font-size: 70%; text-align: center; } h3.version { font-size: 90%; text-align: center; } div.qindex, div.navtab{ background-color: #FFF8F0; border: 1px solid #FF8000; text-align: center; } div.qindex, div.navpath { width: 100%; line-height: 140%; } div.navtab { margin-right: 15px; } /* @group Link Styling */ a { color: #000000; font-weight: normal; text-decoration: none; } .contents a:visited { 
color: #606060; } .contents{ background-color: #FFFFFF; padding-top:8px; padding-bottom:8px; padding-left:32px; padding-right:32px; margin:0px; margin-left:auto; margin-right:auto; width:1216px; border-bottom-left-radius: 8px; border-bottom-right-radius: 8px; } a:hover { text-decoration: underline; } a.qindex { font-weight: bold; } a.qindexHL { font-weight: bold; background-color: #9CAFD4; color: #ffffff; border: 1px double #869DCA; } .contents a.qindexHL:visited { color: #ffffff; } a.el { font-weight: bold; } a.elRef { } a.code, a.code:visited, a.line, a.line:visited { color: #4665A2; } a.codeRef, a.codeRef:visited, a.lineRef, a.lineRef:visited { color: #4665A2; } /* @end */ dl.el { margin-left: -1cm; } pre.fragment { border: 1px solid #FF8000; background-color: #FFF8F0; padding: 4px 6px; margin: 4px 8px 4px 2px; overflow: auto; word-wrap: break-word; font-size: 9pt; line-height: 125%; font-family: monospace, fixed; font-size: 105%; } div.fragment { padding: 4px 6px; margin: 4px 8px 4px 2px; background-color: #FFF8F0; border: 1px solid #FF8000; } div.line { font-family: monospace, fixed; font-size: 13px; min-height: 13px; line-height: 1.0; text-wrap: unrestricted; white-space: -moz-pre-wrap; /* Moz */ white-space: -pre-wrap; /* Opera 4-6 */ white-space: -o-pre-wrap; /* Opera 7 */ white-space: pre-wrap; /* CSS3 */ word-wrap: break-word; /* IE 5.5+ */ text-indent: -53px; padding-left: 53px; padding-bottom: 0px; margin: 0px; -webkit-transition-property: background-color, box-shadow; -webkit-transition-duration: 0.5s; -moz-transition-property: background-color, box-shadow; -moz-transition-duration: 0.5s; -ms-transition-property: background-color, box-shadow; -ms-transition-duration: 0.5s; -o-transition-property: background-color, box-shadow; -o-transition-duration: 0.5s; transition-property: background-color, box-shadow; transition-duration: 0.5s; } div.line.glow { background-color: cyan; box-shadow: 0 0 10px cyan; } span.lineno { padding-right: 4px; text-align: 
right; border-right: 2px solid #0F0; background-color: #E8E8E8; white-space: pre; } span.lineno a { background-color: #D8D8D8; } span.lineno a:hover { background-color: #C8C8C8; } div.ah, span.ah { background-color: black; font-weight: bold; color: #ffffff; margin-bottom: 3px; margin-top: 3px; padding: 0.2em; border: solid thin #333; border-radius: 0.5em; -webkit-border-radius: .5em; -moz-border-radius: .5em; box-shadow: 2px 2px 3px #999; -webkit-box-shadow: 2px 2px 3px #999; -moz-box-shadow: rgba(0, 0, 0, 0.15) 2px 2px 2px; background-image: -webkit-gradient(linear, left top, left bottom, from(#eee), to(#000),color-stop(0.3, #444)); background-image: -moz-linear-gradient(center top, #eee 0%, #444 40%, #000); } div.classindex ul { list-style: none; padding-left: 0; } div.classindex span.ai { display: inline-block; } div.groupHeader { margin-left: 16px; margin-top: 12px; font-weight: bold; } div.groupText { margin-left: 16px; font-style: italic; } body { color: black; margin: 0; } td.indexkey { background-color: #FFF8F0; font-weight: bold; border: 1px solid #C4CFE5; margin: 2px 0px 2px 0; padding: 2px 10px; white-space: nowrap; vertical-align: top; } td.indexvalue { background-color: #FFF8F0; border: 1px solid #C4CFE5; padding: 2px 10px; margin: 2px 0px; } tr.memlist { background-color: #FFF8F0; } p.formulaDsp { text-align: center; } img.formulaDsp { } img.formulaInl { vertical-align: middle; } div.center { text-align: center; margin-top: 0px; margin-bottom: 0px; padding: 0px; } div.center img { border: 0px; } address.footer { display: none; } img.footer { border: 0px; vertical-align: middle; } /* @group Code Colorization */ span.keyword { color: #008000 } span.keywordtype { color: #604020 } span.keywordflow { color: #e08000 } span.comment { color: #800000 } span.preprocessor { color: #806020 } span.stringliteral { color: #002080 } span.charliteral { color: #008080 } span.vhdldigit { color: #ff00ff } span.vhdlchar { color: #000000 } span.vhdlkeyword { color: #700070 
} span.vhdllogic { color: #ff0000 } blockquote { background-color: #F7F8FB; border-left: 2px solid #9CAFD4; margin: 0 24px 0 4px; padding: 0 12px 0 16px; } /* @end */ /* .search { color: #003399; font-weight: bold; } form.search { margin-bottom: 0px; margin-top: 0px; } input.search { font-size: 75%; color: #000080; font-weight: normal; background-color: #e8eef2; } */ td.tiny { font-size: 75%; } .dirtab { padding: 4px; border-collapse: collapse; border: 1px solid #FF8000; } th.dirtab { background: #EBEFF6; font-weight: bold; } hr { height: 0px; border: none; border-top: 1px solid #4A6AAA; } hr.footer { display: none; } /* @group Member Descriptions */ table.memberdecls { border-spacing: 0px; padding: 0px; } .memberdecls td, .fieldtable tr { -webkit-transition-property: background-color, box-shadow; -webkit-transition-duration: 0.5s; -moz-transition-property: background-color, box-shadow; -moz-transition-duration: 0.5s; -ms-transition-property: background-color, box-shadow; -ms-transition-duration: 0.5s; -o-transition-property: background-color, box-shadow; -o-transition-duration: 0.5s; transition-property: background-color, box-shadow; transition-duration: 0.5s; } .memberdecls td.glow, .fieldtable tr.glow { background-color: cyan; box-shadow: 0 0 15px cyan; } .mdescLeft, .mdescRight, .memItemLeft, .memItemRight, .memTemplItemLeft, .memTemplItemRight, .memTemplParams { background-color: #FFFCF8; border: none; margin: 4px; padding: 1px 0 0 8px; } .mdescLeft, .mdescRight { padding: 0px 8px 4px 8px; color: #555; } .memSeparator { border-bottom: 1px solid #FFF8F0; line-height: 1px; margin: 0px; padding: 0px; } .memItemLeft, .memTemplItemLeft { white-space: nowrap; } .memItemRight { width: 100%; } .memTemplParams { color: #bf6000; white-space: nowrap; font-size: 80%; } /* @end */ /* @group Member Details */ /* Styles for detailed member documentation */ .memtemplate { font-size: 80%; color: #4665A2; font-weight: normal; margin-left: 9px; } .memnav { background-color: 
#FFF8F0; border: 1px solid #FF8000; text-align: center; margin: 2px; margin-right: 15px; padding: 2px; } .mempage { width: 100%; } .memitem { padding: 0; margin-bottom: 10px; margin-right: 5px; -webkit-transition: box-shadow 0.5s linear; -moz-transition: box-shadow 0.5s linear; -ms-transition: box-shadow 0.5s linear; -o-transition: box-shadow 0.5s linear; transition: box-shadow 0.5s linear; display: table !important; width: 100%; } .memitem.glow { box-shadow: 0 0 15px cyan; } .memname { font-weight: bold; margin-left: 6px; } .memname td { vertical-align: bottom; } .memproto, dl.reflist dt { border-top: 1px solid #bf6000; border-left: 1px solid #bf6000; border-right: 1px solid #bf6000; padding: 6px 0px 6px 0px; /*color: #253555;*/ font-weight: bold; /*text-shadow: 0px 1px 1px rgba(255, 255, 255, 0.9);*/ /*background-image:url('nav_f.png');*/ background-repeat:repeat-x; background-color: #FFF8F0; /* opera specific markup */ box-shadow: 5px 5px 5px rgba(0, 0, 0, 0.15); border-top-right-radius: 4px; border-top-left-radius: 4px; /* firefox specific markup */ -moz-box-shadow: rgba(0, 0, 0, 0.15) 5px 5px 5px; -moz-border-radius-topright: 4px; -moz-border-radius-topleft: 4px; /* webkit specific markup */ -webkit-box-shadow: 5px 5px 5px rgba(0, 0, 0, 0.15); -webkit-border-top-right-radius: 4px; -webkit-border-top-left-radius: 4px; } .memdoc, dl.reflist dd { border-bottom: 1px solid #bf6000; border-left: 1px solid #bf6000; border-right: 1px solid #bf6000; padding: 6px 10px 2px 10px; border-top-width: 0; background-image:url('nav_g.png'); background-repeat:repeat-x; background-color: #FFFDFB; /* opera specific markup */ border-bottom-left-radius: 4px; border-bottom-right-radius: 4px; box-shadow: 5px 5px 5px rgba(0, 0, 0, 0.15); /* firefox specific markup */ -moz-border-radius-bottomleft: 4px; -moz-border-radius-bottomright: 4px; -moz-box-shadow: rgba(0, 0, 0, 0.15) 5px 5px 5px; /* webkit specific markup */ -webkit-border-bottom-left-radius: 4px; 
-webkit-border-bottom-right-radius: 4px; -webkit-box-shadow: 5px 5px 5px rgba(0, 0, 0, 0.15); } dl.reflist dt { padding: 5px; } dl.reflist dd { margin: 0px 0px 10px 0px; padding: 5px; } .paramkey { text-align: right; } .paramtype { white-space: nowrap; } .paramname { color: #602020; white-space: nowrap; } .paramname em { font-style: normal; } .paramname code { line-height: 14px; } .params, .retval, .exception, .tparams { margin-left: 0px; padding-left: 0px; } .params .paramname, .retval .paramname { font-weight: bold; vertical-align: top; } .params .paramtype { font-style: italic; vertical-align: top; } .params .paramdir { font-family: "courier new",courier,monospace; vertical-align: top; } table.mlabels { border-spacing: 0px; } td.mlabels-left { width: 100%; padding: 0px; } td.mlabels-right { vertical-align: bottom; padding: 0px; white-space: nowrap; } span.mlabels { margin-left: 8px; } span.mlabel { background-color: #728DC1; border-top:1px solid #5373B4; border-left:1px solid #5373B4; border-right:1px solid #C4CFE5; border-bottom:1px solid #C4CFE5; text-shadow: none; color: white; margin-right: 4px; padding: 2px 3px; border-radius: 3px; font-size: 7pt; white-space: nowrap; vertical-align: middle; } /* @end */ /* these are for tree view inside a (index) page */ div.directory { margin: 10px 0px; border-top: 1px solid #bf6000; border-bottom: 1px solid #bf6000; width: 100%; } .directory table { border-collapse:collapse; } .directory td { margin: 0px; padding: 0px; vertical-align: top; } .directory td.entry { white-space: nowrap; padding-right: 6px; padding-top: 3px; } .directory td.entry a { outline:none; } .directory td.entry a img { border: none; } .directory td.desc { width: 100%; padding-left: 6px; padding-right: 6px; padding-top: 3px; border-left: 1px solid rgba(0,0,0,0.05); } .directory tr.even { padding-left: 6px; background-color: #FFFDFB; } .directory img { vertical-align: -30%; } .directory .levels { white-space: nowrap; width: 100%; text-align: right; 
font-size: 9pt; } .directory .levels span { cursor: pointer; padding-left: 2px; padding-right: 2px; color: #bf6000; } .arrow { color: #bf6000; -webkit-user-select: none; -khtml-user-select: none; -moz-user-select: none; -ms-user-select: none; user-select: none; cursor: pointer; font-size: 80%; display: inline-block; width: 16px; height: 22px; } .icon { font-family: Arial, Helvetica; font-weight: bold; font-size: 12px; height: 14px; width: 16px; display: inline-block; background-color: #bf6000; color: white; text-align: center; border-radius: 4px; margin-left: 2px; margin-right: 2px; } .icona { width: 24px; height: 22px; display: inline-block; } .iconfopen { width: 24px; height: 18px; margin-bottom: 4px; background-image:url('folderopen.png'); background-position: 0px -4px; background-repeat: repeat-y; vertical-align:top; display: inline-block; } .iconfclosed { width: 24px; height: 18px; margin-bottom: 4px; background-image:url('folderclosed.png'); background-position: 0px -4px; background-repeat: repeat-y; vertical-align:top; display: inline-block; } .icondoc { width: 24px; height: 18px; margin-bottom: 4px; background-image:url('doc.png'); background-position: 0px -4px; background-repeat: repeat-y; vertical-align:top; display: inline-block; } table.directory { font: 400 14px Roboto,sans-serif; } /* @end */ div.dynheader { margin-top: 8px; -webkit-touch-callout: none; -webkit-user-select: none; -khtml-user-select: none; -moz-user-select: none; -ms-user-select: none; user-select: none; } address { font-style: normal; color: #2A3D61; } table.doxtable { border-collapse:collapse; margin-top: 4px; margin-bottom: 4px; } table.doxtable td, table.doxtable th { border: 1px solid #2D4068; padding: 3px 7px 2px; } table.doxtable th { background-color: #374F7F; color: #FFFFFF; font-size: 110%; padding-bottom: 4px; padding-top: 5px; } table.fieldtable { /*width: 100%;*/ margin-bottom: 10px; border: 1px solid #A8B8D9; border-spacing: 0px; -moz-border-radius: 4px; 
-webkit-border-radius: 4px; border-radius: 4px; -moz-box-shadow: rgba(0, 0, 0, 0.15) 2px 2px 2px; -webkit-box-shadow: 2px 2px 2px rgba(0, 0, 0, 0.15); box-shadow: 2px 2px 2px rgba(0, 0, 0, 0.15); } .fieldtable td, .fieldtable th { padding: 3px 7px 2px; } .fieldtable td.fieldtype, .fieldtable td.fieldname { white-space: nowrap; border-right: 1px solid #A8B8D9; border-bottom: 1px solid #A8B8D9; vertical-align: top; } .fieldtable td.fieldname { padding-top: 3px; } .fieldtable td.fielddoc { border-bottom: 1px solid #A8B8D9; /*width: 100%;*/ } .fieldtable td.fielddoc p:first-child { margin-top: 0px; } .fieldtable td.fielddoc p:last-child { margin-bottom: 2px; } .fieldtable tr:last-child td { border-bottom: none; } .fieldtable th { background-image:url('nav_f.png'); background-repeat:repeat-x; background-color: #E2E8F2; font-size: 90%; color: #253555; padding-bottom: 4px; padding-top: 5px; text-align:left; -moz-border-radius-topleft: 4px; -moz-border-radius-topright: 4px; -webkit-border-top-left-radius: 4px; -webkit-border-top-right-radius: 4px; border-top-left-radius: 4px; border-top-right-radius: 4px; border-bottom: 1px solid #A8B8D9; } .tabsearch { top: 0px; left: 10px; height: 36px; background-image: url('tab_b.png'); z-index: 101; overflow: hidden; font-size: 13px; } .navpath ul { font-size: 11px; /*background-image:url('tab_b.png');*/ background-color: #FFF8F0; background-repeat:repeat-x; background-position: 0 -5px; height:30px; line-height:30px; color:#bf6000; border:solid 0px #C2CDE4; overflow:hidden; margin:0px; padding:0px; } .navpath li { list-style-type:none; float:left; padding-left:10px; padding-right:15px; background-image:url('bc_s.png'); background-repeat:no-repeat; background-position:right; color:#bf6000; } .navpath li.navelem a { height:32px; display:block; text-decoration: none; outline: none; color: #bf6000; font-family: 'Lucida Grande',Geneva,Helvetica,Arial,sans-serif; text-decoration: none; } .navpath li.navelem a:hover { color:#6884BD; } 
.navpath li.footer { list-style-type:none; float:right; padding-left:10px; padding-right:15px; background-image:none; background-repeat:no-repeat; background-position:right; color:#bf6000; font-size: 8pt; } div.summary { float: right; font-size: 8pt; padding-right: 5px; width: 50%; text-align: right; } div.summary a { white-space: nowrap; } div.ingroups { font-size: 8pt; width: 50%; text-align: left; } div.ingroups a { white-space: nowrap; } div.header { background-repeat:repeat-x; background-color: #FFFCF8; padding:0px; margin:0px; margin-left:auto; margin-right:auto; width:1280px; } div.headertitle { padding: 5px 5px 5px 10px; } dl { padding: 0 0 0 10px; } /* dl.note, dl.warning, dl.attention, dl.pre, dl.post, dl.invariant, dl.deprecated, dl.todo, dl.test, dl.bug */ dl.section { margin-left: 0px; padding-left: 0px; } dl.note { margin-left:-7px; padding-left: 3px; border-left:4px solid; border-color: #D0C000; } dl.warning, dl.attention { margin-left:-7px; padding-left: 3px; border-left:4px solid; border-color: #FF0000; } dl.pre, dl.post, dl.invariant { margin-left:-7px; padding-left: 3px; border-left:4px solid; border-color: #00D000; } dl.deprecated { margin-left:-7px; padding-left: 3px; border-left:4px solid; border-color: #505050; } dl.todo { margin-left:-7px; padding-left: 3px; border-left:4px solid; border-color: #E0C000; } dl.test { margin-left:-7px; padding-left: 3px; border-left:4px solid; border-color: #3030E0; } dl.bug { margin-left:-7px; padding-left: 3px; border-left:4px solid; border-color: #C08050; } dl.section dd { margin-bottom: 6px; } #projectlogo { text-align: center; vertical-align: bottom; border-collapse: separate; } #projectlogo img { border: 0px none; } #projectalign { vertical-align: middle; } #projectname { font: 300% Tahoma, Arial,sans-serif; margin: 0px; padding: 2px 0px; color: #FF8000; } #projectbrief { font: 120% Tahoma, Arial,sans-serif; margin: 0px; padding: 0px; } #projectnumber { font: 50% Tahoma, Arial,sans-serif; margin: 0px; 
padding: 0px; } #titlearea { padding: 0px; margin: 0px; width: 100%; border-bottom: 1px solid #5373B4; } .image { text-align: center; } .dotgraph { text-align: center; } .mscgraph { text-align: center; } .diagraph { text-align: center; } .caption { font-weight: bold; } div.zoom { border: 1px solid #90A5CE; } dl.citelist { margin-bottom:50px; } dl.citelist dt { color:#334975; float:left; font-weight:bold; margin-right:10px; padding:5px; } dl.citelist dd { margin:2px 0; padding:5px 0; } div.toc { padding: 14px 25px; background-color: #F4F6FA; border: 1px solid #D8DFEE; border-radius: 7px 7px 7px 7px; float: right; height: auto; margin: 0 20px 10px 10px; width: 200px; } div.toc li { background: url("bdwn.png") no-repeat scroll 0 5px transparent; font: 10px/1.2 Verdana,DejaVu Sans,Geneva,sans-serif; margin-top: 5px; padding-left: 10px; padding-top: 2px; } div.toc h3 { font: bold 12px/1.2 Arial,FreeSans,sans-serif; color: #4665A2; border-bottom: 0 none; margin: 0; } div.toc ul { list-style: none outside none; border: medium none; padding: 0px; } div.toc li.level1 { margin-left: 0px; } div.toc li.level2 { margin-left: 15px; } div.toc li.level3 { margin-left: 30px; } div.toc li.level4 { margin-left: 45px; } .inherit_header { font-weight: bold; color: gray; cursor: pointer; -webkit-touch-callout: none; -webkit-user-select: none; -khtml-user-select: none; -moz-user-select: none; -ms-user-select: none; user-select: none; } .inherit_header td { padding: 6px 0px 2px 5px; } .inherit { display: none; } tr.heading h2 { margin-top: 12px; margin-bottom: 4px; } /* tooltip related style info */ .ttc { position: absolute; display: none; } #powerTip { cursor: default; white-space: nowrap; background-color: white; border: 1px solid gray; border-radius: 4px 4px 4px 4px; box-shadow: 1px 1px 7px gray; display: none; font-size: smaller; max-width: 80%; opacity: 0.9; padding: 1ex 1em 1em; position: absolute; z-index: 2147483647; } #powerTip div.ttdoc { color: grey; font-style: italic; } 
#powerTip div.ttname a { font-weight: bold; } #powerTip div.ttname { font-weight: bold; } #powerTip div.ttdeci { color: #006318; } #powerTip div { margin: 0px; padding: 0px; font: 12px/16px Roboto,sans-serif; } #powerTip:before, #powerTip:after { content: ""; position: absolute; margin: 0px; } #powerTip.n:after, #powerTip.n:before, #powerTip.s:after, #powerTip.s:before, #powerTip.w:after, #powerTip.w:before, #powerTip.e:after, #powerTip.e:before, #powerTip.ne:after, #powerTip.ne:before, #powerTip.se:after, #powerTip.se:before, #powerTip.nw:after, #powerTip.nw:before, #powerTip.sw:after, #powerTip.sw:before { border: solid transparent; content: " "; height: 0; width: 0; position: absolute; } #powerTip.n:after, #powerTip.s:after, #powerTip.w:after, #powerTip.e:after, #powerTip.nw:after, #powerTip.ne:after, #powerTip.sw:after, #powerTip.se:after { border-color: rgba(255, 255, 255, 0); } #powerTip.n:before, #powerTip.s:before, #powerTip.w:before, #powerTip.e:before, #powerTip.nw:before, #powerTip.ne:before, #powerTip.sw:before, #powerTip.se:before { border-color: rgba(128, 128, 128, 0); } #powerTip.n:after, #powerTip.n:before, #powerTip.ne:after, #powerTip.ne:before, #powerTip.nw:after, #powerTip.nw:before { top: 100%; } #powerTip.n:after, #powerTip.ne:after, #powerTip.nw:after { border-top-color: #ffffff; border-width: 10px; margin: 0px -10px; } #powerTip.n:before { border-top-color: #808080; border-width: 11px; margin: 0px -11px; } #powerTip.n:after, #powerTip.n:before { left: 50%; } #powerTip.nw:after, #powerTip.nw:before { right: 14px; } #powerTip.ne:after, #powerTip.ne:before { left: 14px; } #powerTip.s:after, #powerTip.s:before, #powerTip.se:after, #powerTip.se:before, #powerTip.sw:after, #powerTip.sw:before { bottom: 100%; } #powerTip.s:after, #powerTip.se:after, #powerTip.sw:after { border-bottom-color: #ffffff; border-width: 10px; margin: 0px -10px; } #powerTip.s:before, #powerTip.se:before, #powerTip.sw:before { border-bottom-color: #808080; border-width: 
11px; margin: 0px -11px; } #powerTip.s:after, #powerTip.s:before { left: 50%; } #powerTip.sw:after, #powerTip.sw:before { right: 14px; } #powerTip.se:after, #powerTip.se:before { left: 14px; } #powerTip.e:after, #powerTip.e:before { left: 100%; } #powerTip.e:after { border-left-color: #ffffff; border-width: 10px; top: 50%; margin-top: -10px; } #powerTip.e:before { border-left-color: #808080; border-width: 11px; top: 50%; margin-top: -11px; } #powerTip.w:after, #powerTip.w:before { right: 100%; } #powerTip.w:after { border-right-color: #ffffff; border-width: 10px; top: 50%; margin-top: -10px; } #powerTip.w:before { border-right-color: #808080; border-width: 11px; top: 50%; margin-top: -11px; } #titlearea { margin: 0px; padding-top: 8px; padding-bottom: 8px; margin-top: 32px; width: 100%; border-bottom: 0px solid #FF8000; border-top-left-radius: 8px; border-top-right-radius: 8px; background-color:#FFFFFF; } #top { margin-left:auto; margin-right:auto; width:1280px; } @media print { #top { display: none; } #side-nav { display: none; } #nav-path { display: none; } body { overflow:visible; } h1, h2, h3, h4, h5, h6 { page-break-after: avoid; } .summary { display: none; } .memitem { page-break-inside: avoid; } #doc-content { margin-left:0 !important; height:auto !important; width:auto !important; overflow:inherit; display:inline; } } ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/dynsections.js ================================================ function toggleVisibility(linkObj) { var base = $(linkObj).attr('id'); var summary = $('#'+base+'-summary'); var content = $('#'+base+'-content'); var trigger = $('#'+base+'-trigger'); var src=$(trigger).attr('src'); if (content.is(':visible')===true) { content.hide(); summary.show(); $(linkObj).addClass('closed').removeClass('opened'); $(trigger).attr('src',src.substring(0,src.length-8)+'closed.png'); } else { content.show(); summary.hide(); 
$(linkObj).removeClass('closed').addClass('opened'); $(trigger).attr('src',src.substring(0,src.length-10)+'open.png'); } return false; } function updateStripes() { $('table.directory tr'). removeClass('even').filter(':visible:even').addClass('even'); } function toggleLevel(level) { $('table.directory tr').each(function() { var l = this.id.split('_').length-1; var i = $('#img'+this.id.substring(3)); var a = $('#arr'+this.id.substring(3)); if (l < level+1) { i.removeClass('iconfopen iconfclosed').addClass('iconfopen'); a.html('&#9660;'); $(this).show(); } else if (l == level+1) { i.removeClass('iconfclosed iconfopen').addClass('iconfclosed'); a.html('&#9658;'); $(this).show(); } else { $(this).hide(); } }); updateStripes(); } ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/files.html ================================================ 0.9.9 API documentation: File List
0.9.9 API documentation
File List
Here is a list of all documented files with brief descriptions:
_features.hpp
_fixes.hpp
_noise.hpp
_swizzle.hpp
_swizzle_func.hpp
_vectorize.hpp
associated_min_max.hpp - GLM_GTX_associated_min_max
bit.hpp - GLM_GTX_bit
bitfield.hpp - GLM_GTC_bitfield
closest_point.hpp - GLM_GTX_closest_point
color_encoding.hpp - GLM_GTX_color_encoding
gtc/color_space.hpp - GLM_GTC_color_space
gtx/color_space.hpp - GLM_GTX_color_space
color_space_YCoCg.hpp - GLM_GTX_color_space_YCoCg
common.hpp - Core features
gtx/common.hpp - GLM_GTX_common
compatibility.hpp - GLM_GTX_compatibility
component_wise.hpp - GLM_GTX_component_wise
compute_common.hpp
compute_vector_relational.hpp
constants.hpp - GLM_GTC_constants
dual_quaternion.hpp - GLM_GTX_dual_quaternion
easing.hpp - GLM_GTX_easing
epsilon.hpp - GLM_GTC_epsilon
euler_angles.hpp - GLM_GTX_euler_angles
exponential.hpp - Core features
ext.hpp - Core features (Dependence)
extend.hpp - GLM_GTX_extend
extended_min_max.hpp - GLM_GTX_extented_min_max
exterior_product.hpp - GLM_GTX_exterior_product
fast_exponential.hpp - GLM_GTX_fast_exponential
fast_square_root.hpp - GLM_GTX_fast_square_root
fast_trigonometry.hpp - GLM_GTX_fast_trigonometry
functions.hpp - GLM_GTX_functions
fwd.hpp
geometric.hpp - Core features
glm.hpp - Core features
gradient_paint.hpp - GLM_GTX_gradient_paint
handed_coordinate_space.hpp - GLM_GTX_handed_coordinate_space
hash.hpp - GLM_GTX_hash
gtc/integer.hpp - GLM_GTC_integer
gtx/integer.hpp - GLM_GTX_integer
integer.hpp - Core features
intersect.hpp - GLM_GTX_intersect
io.hpp - GLM_GTX_io
log_base.hpp - GLM_GTX_log_base
man.doxy
mat2x2.hpp - Core features
mat2x3.hpp - Core features
mat2x4.hpp - Core features
mat3x2.hpp - Core features
mat3x3.hpp - Core features
mat3x4.hpp - Core features
mat4x2.hpp - Core features
mat4x3.hpp - Core features
mat4x4.hpp - Core features
matrix.hpp - Core features
matrix_access.hpp - GLM_GTC_matrix_access
matrix_clip_space.hpp - GLM_EXT_matrix_clip_space
matrix_common.hpp - GLM_EXT_matrix_common
matrix_cross_product.hpp - GLM_GTX_matrix_cross_product
matrix_decompose.hpp - GLM_GTX_matrix_decompose
matrix_double2x2.hpp - Core features
matrix_double2x2_precision.hpp - Core features
matrix_double2x3.hpp - Core features
matrix_double2x3_precision.hpp - Core features
matrix_double2x4.hpp - Core features
matrix_double2x4_precision.hpp - Core features
matrix_double3x2.hpp - Core features
matrix_double3x2_precision.hpp - Core features
matrix_double3x3.hpp - Core features
matrix_double3x3_precision.hpp - Core features
matrix_double3x4.hpp - Core features
matrix_double3x4_precision.hpp - Core features
matrix_double4x2.hpp - Core features
matrix_double4x2_precision.hpp - Core features
matrix_double4x3.hpp - Core features
matrix_double4x3_precision.hpp - Core features
matrix_double4x4.hpp - Core features
matrix_double4x4_precision.hpp - Core features
matrix_factorisation.hpp - GLM_GTX_matrix_factorisation
matrix_float2x2.hpp - Core features
matrix_float2x2_precision.hpp - Core features
matrix_float2x3.hpp - Core features
matrix_float2x3_precision.hpp - Core features
matrix_float2x4.hpp - Core features
matrix_float2x4_precision.hpp - Core features
matrix_float3x2.hpp - Core features
matrix_float3x2_precision.hpp - Core features
matrix_float3x3.hpp - Core features
matrix_float3x3_precision.hpp - Core features
matrix_float3x4.hpp - Core features
matrix_float3x4_precision.hpp - Core features
matrix_float4x2.hpp - Core features
matrix_float4x2_precision.hpp
matrix_float4x3.hpp - Core features
matrix_float4x3_precision.hpp - Core features
matrix_float4x4.hpp - Core features
matrix_float4x4_precision.hpp - Core features
matrix_integer.hpp - GLM_GTC_matrix_integer
matrix_interpolation.hpp - GLM_GTX_matrix_interpolation
matrix_inverse.hpp - GLM_GTC_matrix_inverse
matrix_major_storage.hpp - GLM_GTX_matrix_major_storage
matrix_operation.hpp - GLM_GTX_matrix_operation
matrix_projection.hpp - GLM_EXT_matrix_projection
matrix_query.hpp - GLM_GTX_matrix_query
matrix_relational.hpp - GLM_EXT_matrix_relational
ext/matrix_transform.hpp - GLM_EXT_matrix_transform
gtc/matrix_transform.hpp - GLM_GTC_matrix_transform
matrix_transform_2d.hpp - GLM_GTX_matrix_transform_2d
mixed_product.hpp - GLM_GTX_mixed_producte
noise.hpp - GLM_GTC_noise
norm.hpp - GLM_GTX_norm
normal.hpp - GLM_GTX_normal
normalize_dot.hpp - GLM_GTX_normalize_dot
number_precision.hpp - GLM_GTX_number_precision
optimum_pow.hpp - GLM_GTX_optimum_pow
orthonormalize.hpp - GLM_GTX_orthonormalize
gtc/packing.hpp - GLM_GTC_packing
packing.hpp - Core features
perpendicular.hpp - GLM_GTX_perpendicular
polar_coordinates.hpp - GLM_GTX_polar_coordinates
projection.hpp - GLM_GTX_projection
qualifier.hpp
gtc/quaternion.hpp - GLM_GTC_quaternion
gtx/quaternion.hpp - GLM_GTX_quaternion
quaternion_common.hpp - GLM_EXT_quaternion_common
quaternion_double.hpp - GLM_EXT_quaternion_double
quaternion_double_precision.hpp - GLM_EXT_quaternion_double_precision
quaternion_exponential.hpp - GLM_EXT_quaternion_exponential
quaternion_float.hpp - GLM_EXT_quaternion_float
quaternion_float_precision.hpp - GLM_EXT_quaternion_float_precision
quaternion_geometric.hpp - GLM_EXT_quaternion_geometric
quaternion_relational.hpp - GLM_EXT_quaternion_relational
quaternion_transform.hpp - GLM_EXT_quaternion_transform
quaternion_trigonometric.hpp - GLM_EXT_quaternion_trigonometric
random.hpp - GLM_GTC_random
range.hpp - GLM_GTX_range
raw_data.hpp - GLM_GTX_raw_data
reciprocal.hpp - GLM_GTC_reciprocal
rotate_normalized_axis.hpp - GLM_GTX_rotate_normalized_axis
rotate_vector.hpp - GLM_GTX_rotate_vector
round.hpp - GLM_GTC_round
scalar_common.hpp - GLM_EXT_scalar_common
scalar_constants.hpp - GLM_EXT_scalar_constants
scalar_int_sized.hpp - GLM_EXT_scalar_int_sized
scalar_integer.hpp - GLM_EXT_scalar_integer
scalar_multiplication.hpp - Experimental extensions
ext/scalar_relational.hpp - GLM_EXT_scalar_relational
gtx/scalar_relational.hpp - GLM_GTX_scalar_relational
scalar_uint_sized.hpp - GLM_EXT_scalar_uint_sized
scalar_ulp.hpp - GLM_EXT_scalar_ulp
setup.hpp
spline.hpp - GLM_GTX_spline
std_based_type.hpp - GLM_GTX_std_based_type
string_cast.hpp - GLM_GTX_string_cast
texture.hpp - GLM_GTX_texture
transform.hpp - GLM_GTX_transform
transform2.hpp - GLM_GTX_transform2
trigonometric.hpp - Core features
gtc/type_aligned.hpp - GLM_GTC_type_aligned
gtx/type_aligned.hpp - GLM_GTX_type_aligned
type_float.hpp
type_half.hpp
type_mat2x2.hpp - Core features
type_mat2x3.hpp - Core features
type_mat2x4.hpp - Core features
type_mat3x2.hpp - Core features
type_mat3x3.hpp - Core features
type_mat3x4.hpp - Core features
type_mat4x2.hpp - Core features
type_mat4x3.hpp - Core features
type_mat4x4.hpp - Core features
type_precision.hpp - GLM_GTC_type_precision
type_ptr.hpp - GLM_GTC_type_ptr
type_quat.hpp - Core features
type_trait.hpp - GLM_GTX_type_trait
type_vec1.hpp - Core features
type_vec2.hpp - Core features
type_vec3.hpp - Core features
type_vec4.hpp - Core features
ulp.hpp - GLM_GTC_ulp
vec1.hpp - GLM_GTC_vec1
vec2.hpp - Core features
vec3.hpp - Core features
vec4.hpp - Core features
vec_swizzle.hpp - GLM_GTX_vec_swizzle
vector_angle.hpp - GLM_GTX_vector_angle
vector_bool1.hpp - GLM_EXT_vector_bool1
vector_bool1_precision.hpp - GLM_EXT_vector_bool1_precision
vector_bool2.hpp - Core features
vector_bool2_precision.hpp - Core features
vector_bool3.hpp - Core features
vector_bool3_precision.hpp - Core features
vector_bool4.hpp - Core features
vector_bool4_precision.hpp - Core features
vector_common.hpp - GLM_EXT_vector_common
vector_double1.hpp - GLM_EXT_vector_double1
vector_double1_precision.hpp - GLM_EXT_vector_double1_precision
vector_double2.hpp - Core features
vector_double2_precision.hpp - Core features
vector_double3.hpp - Core features
vector_double3_precision.hpp - Core features
vector_double4.hpp - Core features
vector_double4_precision.hpp - Core features
vector_float1.hpp - GLM_EXT_vector_float1
vector_float1_precision.hpp - GLM_EXT_vector_float1_precision
vector_float2.hpp - Core features
vector_float2_precision.hpp - Core features
vector_float3.hpp - Core features
vector_float3_precision.hpp - Core features
vector_float4.hpp - Core features
vector_float4_precision.hpp - Core features
vector_int1.hpp - GLM_EXT_vector_int1
vector_int1_precision.hpp - GLM_EXT_vector_int1_precision
vector_int2.hpp - Core features
vector_int2_precision.hpp - Core features
vector_int3.hpp - Core features
vector_int3_precision.hpp - Core features
vector_int4.hpp - Core features
vector_int4_precision.hpp - Core features
vector_integer.hpp - GLM_EXT_vector_integer
vector_query.hpp - GLM_GTX_vector_query
ext/vector_relational.hpp - GLM_EXT_vector_relational
vector_relational.hpp - Core features
vector_uint1.hpp - GLM_EXT_vector_uint1
vector_uint1_precision.hpp - GLM_EXT_vector_uint1_precision
vector_uint2.hpp - Core features
vector_uint2_precision.hpp - Core features
vector_uint3.hpp - Core features
vector_uint3_precision.hpp - Core features
vector_uint4.hpp - Core features
vector_uint4_precision.hpp - Core features
vector_ulp.hpp - GLM_EXT_vector_ulp
wrap.hpp - GLM_GTX_wrap
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/index.html ================================================ 0.9.9 API documentation: OpenGL Mathematics (GLM)
0.9.9 API documentation
OpenGL Mathematics (GLM)
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/jquery.js ================================================ /*! * jQuery JavaScript Library v1.7.1 * http://jquery.com/ * * Copyright 2011, John Resig * Dual licensed under the MIT or GPL Version 2 licenses. * http://jquery.org/license * * Includes Sizzle.js * http://sizzlejs.com/ * Copyright 2011, The Dojo Foundation * Released under the MIT, BSD, and GPL Licenses. * * Date: Mon Nov 21 21:11:03 2011 -0500 */ (function(bb,L){var av=bb.document,bu=bb.navigator,bl=bb.location;var b=(function(){var bF=function(b0,b1){return new bF.fn.init(b0,b1,bD)},bU=bb.jQuery,bH=bb.$,bD,bY=/^(?:[^#<]*(<[\w\W]+>)[^>]*$|#([\w\-]*)$)/,bM=/\S/,bI=/^\s+/,bE=/\s+$/,bA=/^<(\w+)\s*\/?>(?:<\/\1>)?$/,bN=/^[\],:{}\s]*$/,bW=/\\(?:["\\\/bfnrt]|u[0-9a-fA-F]{4})/g,bP=/"[^"\\\n\r]*"|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g,bJ=/(?:^|:|,)(?:\s*\[)+/g,by=/(webkit)[ \/]([\w.]+)/,bR=/(opera)(?:.*version)?[ \/]([\w.]+)/,bQ=/(msie) ([\w.]+)/,bS=/(mozilla)(?:.*? 
rv:([\w.]+))?/,bB=/-([a-z]|[0-9])/ig,bZ=/^-ms-/,bT=function(b0,b1){return(b1+"").toUpperCase()},bX=bu.userAgent,bV,bC,e,bL=Object.prototype.toString,bG=Object.prototype.hasOwnProperty,bz=Array.prototype.push,bK=Array.prototype.slice,bO=String.prototype.trim,bv=Array.prototype.indexOf,bx={};bF.fn=bF.prototype={constructor:bF,init:function(b0,b4,b3){var b2,b5,b1,b6;if(!b0){return this}if(b0.nodeType){this.context=this[0]=b0;this.length=1;return this}if(b0==="body"&&!b4&&av.body){this.context=av;this[0]=av.body;this.selector=b0;this.length=1;return this}if(typeof b0==="string"){if(b0.charAt(0)==="<"&&b0.charAt(b0.length-1)===">"&&b0.length>=3){b2=[null,b0,null]}else{b2=bY.exec(b0)}if(b2&&(b2[1]||!b4)){if(b2[1]){b4=b4 instanceof bF?b4[0]:b4;b6=(b4?b4.ownerDocument||b4:av);b1=bA.exec(b0);if(b1){if(bF.isPlainObject(b4)){b0=[av.createElement(b1[1])];bF.fn.attr.call(b0,b4,true)}else{b0=[b6.createElement(b1[1])]}}else{b1=bF.buildFragment([b2[1]],[b6]);b0=(b1.cacheable?bF.clone(b1.fragment):b1.fragment).childNodes}return bF.merge(this,b0)}else{b5=av.getElementById(b2[2]);if(b5&&b5.parentNode){if(b5.id!==b2[2]){return b3.find(b0)}this.length=1;this[0]=b5}this.context=av;this.selector=b0;return this}}else{if(!b4||b4.jquery){return(b4||b3).find(b0)}else{return this.constructor(b4).find(b0)}}}else{if(bF.isFunction(b0)){return b3.ready(b0)}}if(b0.selector!==L){this.selector=b0.selector;this.context=b0.context}return bF.makeArray(b0,this)},selector:"",jquery:"1.7.1",length:0,size:function(){return this.length},toArray:function(){return bK.call(this,0)},get:function(b0){return b0==null?this.toArray():(b0<0?this[this.length+b0]:this[b0])},pushStack:function(b1,b3,b0){var b2=this.constructor();if(bF.isArray(b1)){bz.apply(b2,b1)}else{bF.merge(b2,b1)}b2.prevObject=this;b2.context=this.context;if(b3==="find"){b2.selector=this.selector+(this.selector?" 
":"")+b0}else{if(b3){b2.selector=this.selector+"."+b3+"("+b0+")"}}return b2},each:function(b1,b0){return bF.each(this,b1,b0)},ready:function(b0){bF.bindReady();bC.add(b0);return this},eq:function(b0){b0=+b0;return b0===-1?this.slice(b0):this.slice(b0,b0+1)},first:function(){return this.eq(0)},last:function(){return this.eq(-1)},slice:function(){return this.pushStack(bK.apply(this,arguments),"slice",bK.call(arguments).join(","))},map:function(b0){return this.pushStack(bF.map(this,function(b2,b1){return b0.call(b2,b1,b2)}))},end:function(){return this.prevObject||this.constructor(null)},push:bz,sort:[].sort,splice:[].splice};bF.fn.init.prototype=bF.fn;bF.extend=bF.fn.extend=function(){var b9,b2,b0,b1,b6,b7,b5=arguments[0]||{},b4=1,b3=arguments.length,b8=false;if(typeof b5==="boolean"){b8=b5;b5=arguments[1]||{};b4=2}if(typeof b5!=="object"&&!bF.isFunction(b5)){b5={}}if(b3===b4){b5=this;--b4}for(;b40){return}bC.fireWith(av,[bF]);if(bF.fn.trigger){bF(av).trigger("ready").off("ready")}}},bindReady:function(){if(bC){return}bC=bF.Callbacks("once memory");if(av.readyState==="complete"){return setTimeout(bF.ready,1)}if(av.addEventListener){av.addEventListener("DOMContentLoaded",e,false);bb.addEventListener("load",bF.ready,false)}else{if(av.attachEvent){av.attachEvent("onreadystatechange",e);bb.attachEvent("onload",bF.ready);var b0=false;try{b0=bb.frameElement==null}catch(b1){}if(av.documentElement.doScroll&&b0){bw()}}}},isFunction:function(b0){return bF.type(b0)==="function"},isArray:Array.isArray||function(b0){return bF.type(b0)==="array"},isWindow:function(b0){return b0&&typeof b0==="object"&&"setInterval" in b0},isNumeric:function(b0){return !isNaN(parseFloat(b0))&&isFinite(b0)},type:function(b0){return b0==null?String(b0):bx[bL.call(b0)]||"object"},isPlainObject:function(b2){if(!b2||bF.type(b2)!=="object"||b2.nodeType||bF.isWindow(b2)){return false}try{if(b2.constructor&&!bG.call(b2,"constructor")&&!bG.call(b2.constructor.prototype,"isPrototypeOf")){return 
false}}catch(b1){return false}var b0;for(b0 in b2){}return b0===L||bG.call(b2,b0)},isEmptyObject:function(b1){for(var b0 in b1){return false}return true},error:function(b0){throw new Error(b0)},parseJSON:function(b0){if(typeof b0!=="string"||!b0){return null}b0=bF.trim(b0);if(bb.JSON&&bb.JSON.parse){return bb.JSON.parse(b0)}if(bN.test(b0.replace(bW,"@").replace(bP,"]").replace(bJ,""))){return(new Function("return "+b0))()}bF.error("Invalid JSON: "+b0)},parseXML:function(b2){var b0,b1;try{if(bb.DOMParser){b1=new DOMParser();b0=b1.parseFromString(b2,"text/xml")}else{b0=new ActiveXObject("Microsoft.XMLDOM");b0.async="false";b0.loadXML(b2)}}catch(b3){b0=L}if(!b0||!b0.documentElement||b0.getElementsByTagName("parsererror").length){bF.error("Invalid XML: "+b2)}return b0},noop:function(){},globalEval:function(b0){if(b0&&bM.test(b0)){(bb.execScript||function(b1){bb["eval"].call(bb,b1)})(b0)}},camelCase:function(b0){return b0.replace(bZ,"ms-").replace(bB,bT)},nodeName:function(b1,b0){return b1.nodeName&&b1.nodeName.toUpperCase()===b0.toUpperCase()},each:function(b3,b6,b2){var b1,b4=0,b5=b3.length,b0=b5===L||bF.isFunction(b3);if(b2){if(b0){for(b1 in b3){if(b6.apply(b3[b1],b2)===false){break}}}else{for(;b40&&b0[0]&&b0[b1-1])||b1===0||bF.isArray(b0));if(b3){for(;b21?aJ.call(arguments,0):bG;if(!(--bw)){bC.resolveWith(bC,bx)}}}function bz(bF){return function(bG){bB[bF]=arguments.length>1?aJ.call(arguments,0):bG;bC.notifyWith(bE,bB)}}if(e>1){for(;bv
a";bI=bv.getElementsByTagName("*");bF=bv.getElementsByTagName("a")[0];if(!bI||!bI.length||!bF){return{}}bG=av.createElement("select");bx=bG.appendChild(av.createElement("option"));bE=bv.getElementsByTagName("input")[0];bJ={leadingWhitespace:(bv.firstChild.nodeType===3),tbody:!bv.getElementsByTagName("tbody").length,htmlSerialize:!!bv.getElementsByTagName("link").length,style:/top/.test(bF.getAttribute("style")),hrefNormalized:(bF.getAttribute("href")==="/a"),opacity:/^0.55/.test(bF.style.opacity),cssFloat:!!bF.style.cssFloat,checkOn:(bE.value==="on"),optSelected:bx.selected,getSetAttribute:bv.className!=="t",enctype:!!av.createElement("form").enctype,html5Clone:av.createElement("nav").cloneNode(true).outerHTML!=="<:nav>",submitBubbles:true,changeBubbles:true,focusinBubbles:false,deleteExpando:true,noCloneEvent:true,inlineBlockNeedsLayout:false,shrinkWrapBlocks:false,reliableMarginRight:true};bE.checked=true;bJ.noCloneChecked=bE.cloneNode(true).checked;bG.disabled=true;bJ.optDisabled=!bx.disabled;try{delete bv.test}catch(bC){bJ.deleteExpando=false}if(!bv.addEventListener&&bv.attachEvent&&bv.fireEvent){bv.attachEvent("onclick",function(){bJ.noCloneEvent=false});bv.cloneNode(true).fireEvent("onclick")}bE=av.createElement("input");bE.value="t";bE.setAttribute("type","radio");bJ.radioValue=bE.value==="t";bE.setAttribute("checked","checked");bv.appendChild(bE);bD=av.createDocumentFragment();bD.appendChild(bv.lastChild);bJ.checkClone=bD.cloneNode(true).cloneNode(true).lastChild.checked;bJ.appendChecked=bE.checked;bD.removeChild(bE);bD.appendChild(bv);bv.innerHTML="";if(bb.getComputedStyle){bA=av.createElement("div");bA.style.width="0";bA.style.marginRight="0";bv.style.width="2px";bv.appendChild(bA);bJ.reliableMarginRight=(parseInt((bb.getComputedStyle(bA,null)||{marginRight:0}).marginRight,10)||0)===0}if(bv.attachEvent){for(by in {submit:1,change:1,focusin:1}){bB="on"+by;bw=(bB in bv);if(!bw){bv.setAttribute(bB,"return;");bw=(typeof 
bv[bB]==="function")}bJ[by+"Bubbles"]=bw}}bD.removeChild(bv);bD=bG=bx=bA=bv=bE=null;b(function(){var bM,bU,bV,bT,bN,bO,bL,bS,bR,e,bP,bQ=av.getElementsByTagName("body")[0];if(!bQ){return}bL=1;bS="position:absolute;top:0;left:0;width:1px;height:1px;margin:0;";bR="visibility:hidden;border:0;";e="style='"+bS+"border:5px solid #000;padding:0;'";bP="
";bM=av.createElement("div");bM.style.cssText=bR+"width:0;height:0;position:static;top:0;margin-top:"+bL+"px";bQ.insertBefore(bM,bQ.firstChild);bv=av.createElement("div");bM.appendChild(bv);bv.innerHTML="
t
";bz=bv.getElementsByTagName("td");bw=(bz[0].offsetHeight===0);bz[0].style.display="";bz[1].style.display="none";bJ.reliableHiddenOffsets=bw&&(bz[0].offsetHeight===0);bv.innerHTML="";bv.style.width=bv.style.paddingLeft="1px";b.boxModel=bJ.boxModel=bv.offsetWidth===2;if(typeof bv.style.zoom!=="undefined"){bv.style.display="inline";bv.style.zoom=1;bJ.inlineBlockNeedsLayout=(bv.offsetWidth===2);bv.style.display="";bv.innerHTML="
";bJ.shrinkWrapBlocks=(bv.offsetWidth!==2)}bv.style.cssText=bS+bR;bv.innerHTML=bP;bU=bv.firstChild;bV=bU.firstChild;bN=bU.nextSibling.firstChild.firstChild;bO={doesNotAddBorder:(bV.offsetTop!==5),doesAddBorderForTableAndCells:(bN.offsetTop===5)};bV.style.position="fixed";bV.style.top="20px";bO.fixedPosition=(bV.offsetTop===20||bV.offsetTop===15);bV.style.position=bV.style.top="";bU.style.overflow="hidden";bU.style.position="relative";bO.subtractsBorderForOverflowNotVisible=(bV.offsetTop===-5);bO.doesNotIncludeMarginInBodyOffset=(bQ.offsetTop!==bL);bQ.removeChild(bM);bv=bM=null;b.extend(bJ,bO)});return bJ})();var aS=/^(?:\{.*\}|\[.*\])$/,aA=/([A-Z])/g;b.extend({cache:{},uuid:0,expando:"jQuery"+(b.fn.jquery+Math.random()).replace(/\D/g,""),noData:{embed:true,object:"clsid:D27CDB6E-AE6D-11cf-96B8-444553540000",applet:true},hasData:function(e){e=e.nodeType?b.cache[e[b.expando]]:e[b.expando];return !!e&&!S(e)},data:function(bx,bv,bz,by){if(!b.acceptData(bx)){return}var bG,bA,bD,bE=b.expando,bC=typeof bv==="string",bF=bx.nodeType,e=bF?b.cache:bx,bw=bF?bx[bE]:bx[bE]&&bE,bB=bv==="events";if((!bw||!e[bw]||(!bB&&!by&&!e[bw].data))&&bC&&bz===L){return}if(!bw){if(bF){bx[bE]=bw=++b.uuid}else{bw=bE}}if(!e[bw]){e[bw]={};if(!bF){e[bw].toJSON=b.noop}}if(typeof bv==="object"||typeof bv==="function"){if(by){e[bw]=b.extend(e[bw],bv)}else{e[bw].data=b.extend(e[bw].data,bv)}}bG=bA=e[bw];if(!by){if(!bA.data){bA.data={}}bA=bA.data}if(bz!==L){bA[b.camelCase(bv)]=bz}if(bB&&!bA[bv]){return bG.events}if(bC){bD=bA[bv];if(bD==null){bD=bA[b.camelCase(bv)]}}else{bD=bA}return bD},removeData:function(bx,bv,by){if(!b.acceptData(bx)){return}var bB,bA,bz,bC=b.expando,bD=bx.nodeType,e=bD?b.cache:bx,bw=bD?bx[bC]:bC;if(!e[bw]){return}if(bv){bB=by?e[bw]:e[bw].data;if(bB){if(!b.isArray(bv)){if(bv in bB){bv=[bv]}else{bv=b.camelCase(bv);if(bv in bB){bv=[bv]}else{bv=bv.split(" ")}}}for(bA=0,bz=bv.length;bA-1){return true}}return false},val:function(bx){var 
e,bv,by,bw=this[0];if(!arguments.length){if(bw){e=b.valHooks[bw.nodeName.toLowerCase()]||b.valHooks[bw.type];if(e&&"get" in e&&(bv=e.get(bw,"value"))!==L){return bv}bv=bw.value;return typeof bv==="string"?bv.replace(aU,""):bv==null?"":bv}return}by=b.isFunction(bx);return this.each(function(bA){var bz=b(this),bB;if(this.nodeType!==1){return}if(by){bB=bx.call(this,bA,bz.val())}else{bB=bx}if(bB==null){bB=""}else{if(typeof bB==="number"){bB+=""}else{if(b.isArray(bB)){bB=b.map(bB,function(bC){return bC==null?"":bC+""})}}}e=b.valHooks[this.nodeName.toLowerCase()]||b.valHooks[this.type];if(!e||!("set" in e)||e.set(this,bB,"value")===L){this.value=bB}})}});b.extend({valHooks:{option:{get:function(e){var bv=e.attributes.value;return !bv||bv.specified?e.value:e.text}},select:{get:function(e){var bA,bv,bz,bx,by=e.selectedIndex,bB=[],bC=e.options,bw=e.type==="select-one";if(by<0){return null}bv=bw?by:0;bz=bw?by+1:bC.length;for(;bv=0});if(!e.length){bv.selectedIndex=-1}return e}}},attrFn:{val:true,css:true,html:true,text:true,data:true,width:true,height:true,offset:true},attr:function(bA,bx,bB,bz){var bw,e,by,bv=bA.nodeType;if(!bA||bv===3||bv===8||bv===2){return}if(bz&&bx in b.attrFn){return b(bA)[bx](bB)}if(typeof bA.getAttribute==="undefined"){return b.prop(bA,bx,bB)}by=bv!==1||!b.isXMLDoc(bA);if(by){bx=bx.toLowerCase();e=b.attrHooks[bx]||(ao.test(bx)?aY:be)}if(bB!==L){if(bB===null){b.removeAttr(bA,bx);return}else{if(e&&"set" in e&&by&&(bw=e.set(bA,bB,bx))!==L){return bw}else{bA.setAttribute(bx,""+bB);return bB}}}else{if(e&&"get" in e&&by&&(bw=e.get(bA,bx))!==null){return bw}else{bw=bA.getAttribute(bx);return bw===null?L:bw}}},removeAttr:function(bx,bz){var by,bA,bv,e,bw=0;if(bz&&bx.nodeType===1){bA=bz.toLowerCase().split(af);e=bA.length;for(;bw=0)}}})});var 
bd=/^(?:textarea|input|select)$/i,n=/^([^\.]*)?(?:\.(.+))?$/,J=/\bhover(\.\S+)?\b/,aO=/^key/,bf=/^(?:mouse|contextmenu)|click/,T=/^(?:focusinfocus|focusoutblur)$/,U=/^(\w*)(?:#([\w\-]+))?(?:\.([\w\-]+))?$/,Y=function(e){var bv=U.exec(e);if(bv){bv[1]=(bv[1]||"").toLowerCase();bv[3]=bv[3]&&new RegExp("(?:^|\\s)"+bv[3]+"(?:\\s|$)")}return bv},j=function(bw,e){var bv=bw.attributes||{};return((!e[1]||bw.nodeName.toLowerCase()===e[1])&&(!e[2]||(bv.id||{}).value===e[2])&&(!e[3]||e[3].test((bv["class"]||{}).value)))},bt=function(e){return b.event.special.hover?e:e.replace(J,"mouseenter$1 mouseleave$1")};b.event={add:function(bx,bC,bJ,bA,by){var bD,bB,bK,bI,bH,bF,e,bG,bv,bz,bw,bE;if(bx.nodeType===3||bx.nodeType===8||!bC||!bJ||!(bD=b._data(bx))){return}if(bJ.handler){bv=bJ;bJ=bv.handler}if(!bJ.guid){bJ.guid=b.guid++}bK=bD.events;if(!bK){bD.events=bK={}}bB=bD.handle;if(!bB){bD.handle=bB=function(bL){return typeof b!=="undefined"&&(!bL||b.event.triggered!==bL.type)?b.event.dispatch.apply(bB.elem,arguments):L};bB.elem=bx}bC=b.trim(bt(bC)).split(" ");for(bI=0;bI=0){bG=bG.slice(0,-1);bw=true}if(bG.indexOf(".")>=0){bx=bG.split(".");bG=bx.shift();bx.sort()}if((!bA||b.event.customEvent[bG])&&!b.event.global[bG]){return}bv=typeof bv==="object"?bv[b.expando]?bv:new b.Event(bG,bv):new b.Event(bG);bv.type=bG;bv.isTrigger=true;bv.exclusive=bw;bv.namespace=bx.join(".");bv.namespace_re=bv.namespace?new RegExp("(^|\\.)"+bx.join("\\.(?:.*\\.)?")+"(\\.|$)"):null;by=bG.indexOf(":")<0?"on"+bG:"";if(!bA){e=b.cache;for(bC in 
e){if(e[bC].events&&e[bC].events[bG]){b.event.trigger(bv,bD,e[bC].handle.elem,true)}}return}bv.result=L;if(!bv.target){bv.target=bA}bD=bD!=null?b.makeArray(bD):[];bD.unshift(bv);bF=b.event.special[bG]||{};if(bF.trigger&&bF.trigger.apply(bA,bD)===false){return}bB=[[bA,bF.bindType||bG]];if(!bJ&&!bF.noBubble&&!b.isWindow(bA)){bI=bF.delegateType||bG;bH=T.test(bI+bG)?bA:bA.parentNode;bz=null;for(;bH;bH=bH.parentNode){bB.push([bH,bI]);bz=bH}if(bz&&bz===bA.ownerDocument){bB.push([bz.defaultView||bz.parentWindow||bb,bI])}}for(bC=0;bCbA){bH.push({elem:this,matches:bz.slice(bA)})}for(bC=0;bC0?this.on(e,null,bx,bw):this.trigger(e)};if(b.attrFn){b.attrFn[e]=true}if(aO.test(e)){b.event.fixHooks[e]=b.event.keyHooks}if(bf.test(e)){b.event.fixHooks[e]=b.event.mouseHooks}}); /*! * Sizzle CSS Selector Engine * Copyright 2011, The Dojo Foundation * Released under the MIT, BSD, and GPL Licenses. * More information: http://sizzlejs.com/ */ (function(){var bH=/((?:\((?:\([^()]+\)|[^()]+)+\)|\[(?:\[[^\[\]]*\]|['"][^'"]*['"]|[^\[\]'"]+)+\]|\\.|[^ >+~,(\[\\]+)+|[>+~])(\s*,\s*)?((?:.|\r|\n)*)/g,bC="sizcache"+(Math.random()+"").replace(".",""),bI=0,bL=Object.prototype.toString,bB=false,bA=true,bK=/\\/g,bO=/\r\n/g,bQ=/\W/;[0,0].sort(function(){bA=false;return 0});var by=function(bV,e,bY,bZ){bY=bY||[];e=e||av;var b1=e;if(e.nodeType!==1&&e.nodeType!==9){return[]}if(!bV||typeof bV!=="string"){return bY}var 
bS,b3,b6,bR,b2,b5,b4,bX,bU=true,bT=by.isXML(e),bW=[],b0=bV;do{bH.exec("");bS=bH.exec(b0);if(bS){b0=bS[3];bW.push(bS[1]);if(bS[2]){bR=bS[3];break}}}while(bS);if(bW.length>1&&bD.exec(bV)){if(bW.length===2&&bE.relative[bW[0]]){b3=bM(bW[0]+bW[1],e,bZ)}else{b3=bE.relative[bW[0]]?[e]:by(bW.shift(),e);while(bW.length){bV=bW.shift();if(bE.relative[bV]){bV+=bW.shift()}b3=bM(bV,b3,bZ)}}}else{if(!bZ&&bW.length>1&&e.nodeType===9&&!bT&&bE.match.ID.test(bW[0])&&!bE.match.ID.test(bW[bW.length-1])){b2=by.find(bW.shift(),e,bT);e=b2.expr?by.filter(b2.expr,b2.set)[0]:b2.set[0]}if(e){b2=bZ?{expr:bW.pop(),set:bF(bZ)}:by.find(bW.pop(),bW.length===1&&(bW[0]==="~"||bW[0]==="+")&&e.parentNode?e.parentNode:e,bT);b3=b2.expr?by.filter(b2.expr,b2.set):b2.set;if(bW.length>0){b6=bF(b3)}else{bU=false}while(bW.length){b5=bW.pop();b4=b5;if(!bE.relative[b5]){b5=""}else{b4=bW.pop()}if(b4==null){b4=e}bE.relative[b5](b6,b4,bT)}}else{b6=bW=[]}}if(!b6){b6=b3}if(!b6){by.error(b5||bV)}if(bL.call(b6)==="[object Array]"){if(!bU){bY.push.apply(bY,b6)}else{if(e&&e.nodeType===1){for(bX=0;b6[bX]!=null;bX++){if(b6[bX]&&(b6[bX]===true||b6[bX].nodeType===1&&by.contains(e,b6[bX]))){bY.push(b3[bX])}}}else{for(bX=0;b6[bX]!=null;bX++){if(b6[bX]&&b6[bX].nodeType===1){bY.push(b3[bX])}}}}}else{bF(b6,bY)}if(bR){by(bR,b1,bY,bZ);by.uniqueSort(bY)}return bY};by.uniqueSort=function(bR){if(bJ){bB=bA;bR.sort(bJ);if(bB){for(var e=1;e0};by.find=function(bX,e,bY){var bW,bS,bU,bT,bV,bR;if(!bX){return[]}for(bS=0,bU=bE.order.length;bS":function(bW,bR){var bV,bU=typeof bR==="string",bS=0,e=bW.length;if(bU&&!bQ.test(bR)){bR=bR.toLowerCase();for(;bS=0)){if(!bS){e.push(bV)}}else{if(bS){bR[bU]=false}}}}return false},ID:function(e){return e[1].replace(bK,"")},TAG:function(bR,e){return bR[1].replace(bK,"").toLowerCase()},CHILD:function(e){if(e[1]==="nth"){if(!e[2]){by.error(e[0])}e[2]=e[2].replace(/^\+|\s*/g,"");var 
bR=/(-?)(\d*)(?:n([+\-]?\d*))?/.exec(e[2]==="even"&&"2n"||e[2]==="odd"&&"2n+1"||!/\D/.test(e[2])&&"0n+"+e[2]||e[2]);e[2]=(bR[1]+(bR[2]||1))-0;e[3]=bR[3]-0}else{if(e[2]){by.error(e[0])}}e[0]=bI++;return e},ATTR:function(bU,bR,bS,e,bV,bW){var bT=bU[1]=bU[1].replace(bK,"");if(!bW&&bE.attrMap[bT]){bU[1]=bE.attrMap[bT]}bU[4]=(bU[4]||bU[5]||"").replace(bK,"");if(bU[2]==="~="){bU[4]=" "+bU[4]+" "}return bU},PSEUDO:function(bU,bR,bS,e,bV){if(bU[1]==="not"){if((bH.exec(bU[3])||"").length>1||/^\w/.test(bU[3])){bU[3]=by(bU[3],null,null,bR)}else{var bT=by.filter(bU[3],bR,bS,true^bV);if(!bS){e.push.apply(e,bT)}return false}}else{if(bE.match.POS.test(bU[0])||bE.match.CHILD.test(bU[0])){return true}}return bU},POS:function(e){e.unshift(true);return e}},filters:{enabled:function(e){return e.disabled===false&&e.type!=="hidden"},disabled:function(e){return e.disabled===true},checked:function(e){return e.checked===true},selected:function(e){if(e.parentNode){e.parentNode.selectedIndex}return e.selected===true},parent:function(e){return !!e.firstChild},empty:function(e){return !e.firstChild},has:function(bS,bR,e){return !!by(e[3],bS).length},header:function(e){return(/h\d/i).test(e.nodeName)},text:function(bS){var e=bS.getAttribute("type"),bR=bS.type;return bS.nodeName.toLowerCase()==="input"&&"text"===bR&&(e===bR||e===null)},radio:function(e){return e.nodeName.toLowerCase()==="input"&&"radio"===e.type},checkbox:function(e){return e.nodeName.toLowerCase()==="input"&&"checkbox"===e.type},file:function(e){return e.nodeName.toLowerCase()==="input"&&"file"===e.type},password:function(e){return e.nodeName.toLowerCase()==="input"&&"password"===e.type},submit:function(bR){var e=bR.nodeName.toLowerCase();return(e==="input"||e==="button")&&"submit"===bR.type},image:function(e){return e.nodeName.toLowerCase()==="input"&&"image"===e.type},reset:function(bR){var e=bR.nodeName.toLowerCase();return(e==="input"||e==="button")&&"reset"===bR.type},button:function(bR){var 
e=bR.nodeName.toLowerCase();return e==="input"&&"button"===bR.type||e==="button"},input:function(e){return(/input|select|textarea|button/i).test(e.nodeName)},focus:function(e){return e===e.ownerDocument.activeElement}},setFilters:{first:function(bR,e){return e===0},last:function(bS,bR,e,bT){return bR===bT.length-1},even:function(bR,e){return e%2===0},odd:function(bR,e){return e%2===1},lt:function(bS,bR,e){return bRe[3]-0},nth:function(bS,bR,e){return e[3]-0===bR},eq:function(bS,bR,e){return e[3]-0===bR}},filter:{PSEUDO:function(bS,bX,bW,bY){var e=bX[1],bR=bE.filters[e];if(bR){return bR(bS,bW,bX,bY)}else{if(e==="contains"){return(bS.textContent||bS.innerText||bw([bS])||"").indexOf(bX[3])>=0}else{if(e==="not"){var bT=bX[3];for(var bV=0,bU=bT.length;bV=0)}}},ID:function(bR,e){return bR.nodeType===1&&bR.getAttribute("id")===e},TAG:function(bR,e){return(e==="*"&&bR.nodeType===1)||!!bR.nodeName&&bR.nodeName.toLowerCase()===e},CLASS:function(bR,e){return(" "+(bR.className||bR.getAttribute("class"))+" ").indexOf(e)>-1},ATTR:function(bV,bT){var bS=bT[1],e=by.attr?by.attr(bV,bS):bE.attrHandle[bS]?bE.attrHandle[bS](bV):bV[bS]!=null?bV[bS]:bV.getAttribute(bS),bW=e+"",bU=bT[2],bR=bT[4];return e==null?bU==="!=":!bU&&by.attr?e!=null:bU==="="?bW===bR:bU==="*="?bW.indexOf(bR)>=0:bU==="~="?(" "+bW+" ").indexOf(bR)>=0:!bR?bW&&e!==false:bU==="!="?bW!==bR:bU==="^="?bW.indexOf(bR)===0:bU==="$="?bW.substr(bW.length-bR.length)===bR:bU==="|="?bW===bR||bW.substr(0,bR.length+1)===bR+"-":false},POS:function(bU,bR,bS,bV){var e=bR[2],bT=bE.setFilters[e];if(bT){return bT(bU,bS,bR,bV)}}}};var bD=bE.match.POS,bx=function(bR,e){return"\\"+(e-0+1)};for(var bz in bE.match){bE.match[bz]=new RegExp(bE.match[bz].source+(/(?![^\[]*\])(?![^\(]*\))/.source));bE.leftMatch[bz]=new RegExp(/(^(?:.|\r|\n)*?)/.source+bE.match[bz].source.replace(/\\(\d+)/g,bx))}var bF=function(bR,e){bR=Array.prototype.slice.call(bR,0);if(e){e.push.apply(e,bR);return e}return 
bR};try{Array.prototype.slice.call(av.documentElement.childNodes,0)[0].nodeType}catch(bP){bF=function(bU,bT){var bS=0,bR=bT||[];if(bL.call(bU)==="[object Array]"){Array.prototype.push.apply(bR,bU)}else{if(typeof bU.length==="number"){for(var e=bU.length;bS";e.insertBefore(bR,e.firstChild);if(av.getElementById(bS)){bE.find.ID=function(bU,bV,bW){if(typeof bV.getElementById!=="undefined"&&!bW){var bT=bV.getElementById(bU[1]);return bT?bT.id===bU[1]||typeof bT.getAttributeNode!=="undefined"&&bT.getAttributeNode("id").nodeValue===bU[1]?[bT]:L:[]}};bE.filter.ID=function(bV,bT){var bU=typeof bV.getAttributeNode!=="undefined"&&bV.getAttributeNode("id");return bV.nodeType===1&&bU&&bU.nodeValue===bT}}e.removeChild(bR);e=bR=null})();(function(){var e=av.createElement("div");e.appendChild(av.createComment(""));if(e.getElementsByTagName("*").length>0){bE.find.TAG=function(bR,bV){var bU=bV.getElementsByTagName(bR[1]);if(bR[1]==="*"){var bT=[];for(var bS=0;bU[bS];bS++){if(bU[bS].nodeType===1){bT.push(bU[bS])}}bU=bT}return bU}}e.innerHTML="";if(e.firstChild&&typeof e.firstChild.getAttribute!=="undefined"&&e.firstChild.getAttribute("href")!=="#"){bE.attrHandle.href=function(bR){return bR.getAttribute("href",2)}}e=null})();if(av.querySelectorAll){(function(){var e=by,bT=av.createElement("div"),bS="__sizzle__";bT.innerHTML="

";if(bT.querySelectorAll&&bT.querySelectorAll(".TEST").length===0){return}by=function(b4,bV,bZ,b3){bV=bV||av;if(!b3&&!by.isXML(bV)){var b2=/^(\w+$)|^\.([\w\-]+$)|^#([\w\-]+$)/.exec(b4);if(b2&&(bV.nodeType===1||bV.nodeType===9)){if(b2[1]){return bF(bV.getElementsByTagName(b4),bZ)}else{if(b2[2]&&bE.find.CLASS&&bV.getElementsByClassName){return bF(bV.getElementsByClassName(b2[2]),bZ)}}}if(bV.nodeType===9){if(b4==="body"&&bV.body){return bF([bV.body],bZ)}else{if(b2&&b2[3]){var bY=bV.getElementById(b2[3]);if(bY&&bY.parentNode){if(bY.id===b2[3]){return bF([bY],bZ)}}else{return bF([],bZ)}}}try{return bF(bV.querySelectorAll(b4),bZ)}catch(b0){}}else{if(bV.nodeType===1&&bV.nodeName.toLowerCase()!=="object"){var bW=bV,bX=bV.getAttribute("id"),bU=bX||bS,b6=bV.parentNode,b5=/^\s*[+~]/.test(b4);if(!bX){bV.setAttribute("id",bU)}else{bU=bU.replace(/'/g,"\\$&")}if(b5&&b6){bV=bV.parentNode}try{if(!b5||b6){return bF(bV.querySelectorAll("[id='"+bU+"'] "+b4),bZ)}}catch(b1){}finally{if(!bX){bW.removeAttribute("id")}}}}}return e(b4,bV,bZ,b3)};for(var bR in e){by[bR]=e[bR]}bT=null})()}(function(){var e=av.documentElement,bS=e.matchesSelector||e.mozMatchesSelector||e.webkitMatchesSelector||e.msMatchesSelector;if(bS){var bU=!bS.call(av.createElement("div"),"div"),bR=false;try{bS.call(av.documentElement,"[test!='']:sizzle")}catch(bT){bR=true}by.matchesSelector=function(bW,bY){bY=bY.replace(/\=\s*([^'"\]]*)\s*\]/g,"='$1']");if(!by.isXML(bW)){try{if(bR||!bE.match.PSEUDO.test(bY)&&!/!=/.test(bY)){var bV=bS.call(bW,bY);if(bV||!bU||bW.document&&bW.document.nodeType!==11){return bV}}}catch(bX){}}return by(bY,null,null,[bW]).length>0}}})();(function(){var e=av.createElement("div");e.innerHTML="
";if(!e.getElementsByClassName||e.getElementsByClassName("e").length===0){return}e.lastChild.className="e";if(e.getElementsByClassName("e").length===1){return}bE.order.splice(1,0,"CLASS");bE.find.CLASS=function(bR,bS,bT){if(typeof bS.getElementsByClassName!=="undefined"&&!bT){return bS.getElementsByClassName(bR[1])}};e=null})();function bv(bR,bW,bV,bZ,bX,bY){for(var bT=0,bS=bZ.length;bT0){bU=e;break}}}e=e[bR]}bZ[bT]=bU}}}if(av.documentElement.contains){by.contains=function(bR,e){return bR!==e&&(bR.contains?bR.contains(e):true)}}else{if(av.documentElement.compareDocumentPosition){by.contains=function(bR,e){return !!(bR.compareDocumentPosition(e)&16)}}else{by.contains=function(){return false}}}by.isXML=function(e){var bR=(e?e.ownerDocument||e:0).documentElement;return bR?bR.nodeName!=="HTML":false};var bM=function(bS,e,bW){var bV,bX=[],bU="",bY=e.nodeType?[e]:e;while((bV=bE.match.PSEUDO.exec(bS))){bU+=bV[0];bS=bS.replace(bE.match.PSEUDO,"")}bS=bE.relative[bS]?bS+"*":bS;for(var bT=0,bR=bY.length;bT0){for(bB=bA;bB=0:b.filter(e,this).length>0:this.filter(e).length>0)},closest:function(by,bx){var bv=[],bw,e,bz=this[0];if(b.isArray(by)){var bB=1;while(bz&&bz.ownerDocument&&bz!==bx){for(bw=0;bw-1:b.find.matchesSelector(bz,by)){bv.push(bz);break}else{bz=bz.parentNode;if(!bz||!bz.ownerDocument||bz===bx||bz.nodeType===11){break}}}}bv=bv.length>1?b.unique(bv):bv;return this.pushStack(bv,"closest",by)},index:function(e){if(!e){return(this[0]&&this[0].parentNode)?this.prevAll().length:-1}if(typeof e==="string"){return b.inArray(this[0],b(e))}return b.inArray(e.jquery?e[0]:e,this)},add:function(e,bv){var bx=typeof e==="string"?b(e,bv):b.makeArray(e&&e.nodeType?[e]:e),bw=b.merge(this.get(),bx);return this.pushStack(C(bx[0])||C(bw[0])?bw:b.unique(bw))},andSelf:function(){return this.add(this.prevObject)}});function C(e){return !e||!e.parentNode||e.parentNode.nodeType===11}b.each({parent:function(bv){var e=bv.parentNode;return e&&e.nodeType!==11?e:null},parents:function(e){return 
b.dir(e,"parentNode")},parentsUntil:function(bv,e,bw){return b.dir(bv,"parentNode",bw)},next:function(e){return b.nth(e,2,"nextSibling")},prev:function(e){return b.nth(e,2,"previousSibling")},nextAll:function(e){return b.dir(e,"nextSibling")},prevAll:function(e){return b.dir(e,"previousSibling")},nextUntil:function(bv,e,bw){return b.dir(bv,"nextSibling",bw)},prevUntil:function(bv,e,bw){return b.dir(bv,"previousSibling",bw)},siblings:function(e){return b.sibling(e.parentNode.firstChild,e)},children:function(e){return b.sibling(e.firstChild)},contents:function(e){return b.nodeName(e,"iframe")?e.contentDocument||e.contentWindow.document:b.makeArray(e.childNodes)}},function(e,bv){b.fn[e]=function(by,bw){var bx=b.map(this,bv,by);if(!ab.test(e)){bw=by}if(bw&&typeof bw==="string"){bx=b.filter(bw,bx)}bx=this.length>1&&!ay[e]?b.unique(bx):bx;if((this.length>1||a9.test(bw))&&aq.test(e)){bx=bx.reverse()}return this.pushStack(bx,e,P.call(arguments).join(","))}});b.extend({filter:function(bw,e,bv){if(bv){bw=":not("+bw+")"}return e.length===1?b.find.matchesSelector(e[0],bw)?[e[0]]:[]:b.find.matches(bw,e)},dir:function(bw,bv,by){var e=[],bx=bw[bv];while(bx&&bx.nodeType!==9&&(by===L||bx.nodeType!==1||!b(bx).is(by))){if(bx.nodeType===1){e.push(bx)}bx=bx[bv]}return e},nth:function(by,e,bw,bx){e=e||1;var bv=0;for(;by;by=by[bw]){if(by.nodeType===1&&++bv===e){break}}return by},sibling:function(bw,bv){var e=[];for(;bw;bw=bw.nextSibling){if(bw.nodeType===1&&bw!==bv){e.push(bw)}}return e}});function aG(bx,bw,e){bw=bw||0;if(b.isFunction(bw)){return b.grep(bx,function(bz,by){var bA=!!bw.call(bz,by,bz);return bA===e})}else{if(bw.nodeType){return b.grep(bx,function(bz,by){return(bz===bw)===e})}else{if(typeof bw==="string"){var bv=b.grep(bx,function(by){return by.nodeType===1});if(bp.test(bw)){return b.filter(bw,bv,!e)}else{bw=b.filter(bw,bv)}}}}return b.grep(bx,function(bz,by){return(b.inArray(bz,bw)>=0)===e})}function a(e){var 
bw=aR.split("|"),bv=e.createDocumentFragment();if(bv.createElement){while(bw.length){bv.createElement(bw.pop())}}return bv}var aR="abbr|article|aside|audio|canvas|datalist|details|figcaption|figure|footer|header|hgroup|mark|meter|nav|output|progress|section|summary|time|video",ag=/ jQuery\d+="(?:\d+|null)"/g,ar=/^\s+/,R=/<(?!area|br|col|embed|hr|img|input|link|meta|param)(([\w:]+)[^>]*)\/>/ig,d=/<([\w:]+)/,w=/",""],legend:[1,"
","
"],thead:[1,"","
"],tr:[2,"","
"],td:[3,"","
"],col:[2,"","
"],area:[1,"",""],_default:[0,"",""]},ac=a(av);ax.optgroup=ax.option;ax.tbody=ax.tfoot=ax.colgroup=ax.caption=ax.thead;ax.th=ax.td;if(!b.support.htmlSerialize){ax._default=[1,"div
","
"]}b.fn.extend({text:function(e){if(b.isFunction(e)){return this.each(function(bw){var bv=b(this);bv.text(e.call(this,bw,bv.text()))})}if(typeof e!=="object"&&e!==L){return this.empty().append((this[0]&&this[0].ownerDocument||av).createTextNode(e))}return b.text(this)},wrapAll:function(e){if(b.isFunction(e)){return this.each(function(bw){b(this).wrapAll(e.call(this,bw))})}if(this[0]){var bv=b(e,this[0].ownerDocument).eq(0).clone(true);if(this[0].parentNode){bv.insertBefore(this[0])}bv.map(function(){var bw=this;while(bw.firstChild&&bw.firstChild.nodeType===1){bw=bw.firstChild}return bw}).append(this)}return this},wrapInner:function(e){if(b.isFunction(e)){return this.each(function(bv){b(this).wrapInner(e.call(this,bv))})}return this.each(function(){var bv=b(this),bw=bv.contents();if(bw.length){bw.wrapAll(e)}else{bv.append(e)}})},wrap:function(e){var bv=b.isFunction(e);return this.each(function(bw){b(this).wrapAll(bv?e.call(this,bw):e)})},unwrap:function(){return this.parent().each(function(){if(!b.nodeName(this,"body")){b(this).replaceWith(this.childNodes)}}).end()},append:function(){return this.domManip(arguments,true,function(e){if(this.nodeType===1){this.appendChild(e)}})},prepend:function(){return this.domManip(arguments,true,function(e){if(this.nodeType===1){this.insertBefore(e,this.firstChild)}})},before:function(){if(this[0]&&this[0].parentNode){return this.domManip(arguments,false,function(bv){this.parentNode.insertBefore(bv,this)})}else{if(arguments.length){var e=b.clean(arguments);e.push.apply(e,this.toArray());return this.pushStack(e,"before",arguments)}}},after:function(){if(this[0]&&this[0].parentNode){return this.domManip(arguments,false,function(bv){this.parentNode.insertBefore(bv,this.nextSibling)})}else{if(arguments.length){var e=this.pushStack(this,"after",arguments);e.push.apply(e,b.clean(arguments));return e}}},remove:function(e,bx){for(var 
bv=0,bw;(bw=this[bv])!=null;bv++){if(!e||b.filter(e,[bw]).length){if(!bx&&bw.nodeType===1){b.cleanData(bw.getElementsByTagName("*"));b.cleanData([bw])}if(bw.parentNode){bw.parentNode.removeChild(bw)}}}return this},empty:function(){for(var e=0,bv;(bv=this[e])!=null;e++){if(bv.nodeType===1){b.cleanData(bv.getElementsByTagName("*"))}while(bv.firstChild){bv.removeChild(bv.firstChild)}}return this},clone:function(bv,e){bv=bv==null?false:bv;e=e==null?bv:e;return this.map(function(){return b.clone(this,bv,e)})},html:function(bx){if(bx===L){return this[0]&&this[0].nodeType===1?this[0].innerHTML.replace(ag,""):null}else{if(typeof bx==="string"&&!ae.test(bx)&&(b.support.leadingWhitespace||!ar.test(bx))&&!ax[(d.exec(bx)||["",""])[1].toLowerCase()]){bx=bx.replace(R,"<$1>");try{for(var bw=0,bv=this.length;bw1&&bw0?this.clone(true):this).get();b(bC[bA])[bv](by);bz=bz.concat(by)}return this.pushStack(bz,e,bC.selector)}}});function bg(e){if(typeof e.getElementsByTagName!=="undefined"){return e.getElementsByTagName("*")}else{if(typeof e.querySelectorAll!=="undefined"){return e.querySelectorAll("*")}else{return[]}}}function az(e){if(e.type==="checkbox"||e.type==="radio"){e.defaultChecked=e.checked}}function E(e){var bv=(e.nodeName||"").toLowerCase();if(bv==="input"){az(e)}else{if(bv!=="script"&&typeof e.getElementsByTagName!=="undefined"){b.grep(e.getElementsByTagName("input"),az)}}}function al(e){var bv=av.createElement("div");ac.appendChild(bv);bv.innerHTML=e.outerHTML;return bv.firstChild}b.extend({clone:function(by,bA,bw){var e,bv,bx,bz=b.support.html5Clone||!ah.test("<"+by.nodeName)?by.cloneNode(true):al(by);if((!b.support.noCloneEvent||!b.support.noCloneChecked)&&(by.nodeType===1||by.nodeType===11)&&!b.isXMLDoc(by)){ai(by,bz);e=bg(by);bv=bg(bz);for(bx=0;e[bx];++bx){if(bv[bx]){ai(e[bx],bv[bx])}}}if(bA){t(by,bz);if(bw){e=bg(by);bv=bg(bz);for(bx=0;e[bx];++bx){t(e[bx],bv[bx])}}}e=bv=null;return bz},clean:function(bw,by,bH,bA){var bF;by=by||av;if(typeof 
by.createElement==="undefined"){by=by.ownerDocument||by[0]&&by[0].ownerDocument||av}var bI=[],bB;for(var bE=0,bz;(bz=bw[bE])!=null;bE++){if(typeof bz==="number"){bz+=""}if(!bz){continue}if(typeof bz==="string"){if(!W.test(bz)){bz=by.createTextNode(bz)}else{bz=bz.replace(R,"<$1>");var bK=(d.exec(bz)||["",""])[1].toLowerCase(),bx=ax[bK]||ax._default,bD=bx[0],bv=by.createElement("div");if(by===av){ac.appendChild(bv)}else{a(by).appendChild(bv)}bv.innerHTML=bx[1]+bz+bx[2];while(bD--){bv=bv.lastChild}if(!b.support.tbody){var e=w.test(bz),bC=bK==="table"&&!e?bv.firstChild&&bv.firstChild.childNodes:bx[1]===""&&!e?bv.childNodes:[];for(bB=bC.length-1;bB>=0;--bB){if(b.nodeName(bC[bB],"tbody")&&!bC[bB].childNodes.length){bC[bB].parentNode.removeChild(bC[bB])}}}if(!b.support.leadingWhitespace&&ar.test(bz)){bv.insertBefore(by.createTextNode(ar.exec(bz)[0]),bv.firstChild)}bz=bv.childNodes}}var bG;if(!b.support.appendChecked){if(bz[0]&&typeof(bG=bz.length)==="number"){for(bB=0;bB=0){return bx+"px"}}else{return bx}}}});if(!b.support.opacity){b.cssHooks.opacity={get:function(bv,e){return au.test((e&&bv.currentStyle?bv.currentStyle.filter:bv.style.filter)||"")?(parseFloat(RegExp.$1)/100)+"":e?"1":""},set:function(by,bz){var bx=by.style,bv=by.currentStyle,e=b.isNumeric(bz)?"alpha(opacity="+bz*100+")":"",bw=bv&&bv.filter||bx.filter||"";bx.zoom=1;if(bz>=1&&b.trim(bw.replace(ak,""))===""){bx.removeAttribute("filter");if(bv&&!bv.filter){return}}bx.filter=ak.test(bw)?bw.replace(ak,e):bw+" "+e}}}b(function(){if(!b.support.reliableMarginRight){b.cssHooks.marginRight={get:function(bw,bv){var e;b.swap(bw,{display:"inline-block"},function(){if(bv){e=Z(bw,"margin-right","marginRight")}else{e=bw.style.marginRight}});return e}}}});if(av.defaultView&&av.defaultView.getComputedStyle){aI=function(by,bw){var 
bv,bx,e;bw=bw.replace(z,"-$1").toLowerCase();if((bx=by.ownerDocument.defaultView)&&(e=bx.getComputedStyle(by,null))){bv=e.getPropertyValue(bw);if(bv===""&&!b.contains(by.ownerDocument.documentElement,by)){bv=b.style(by,bw)}}return bv}}if(av.documentElement.currentStyle){aX=function(bz,bw){var bA,e,by,bv=bz.currentStyle&&bz.currentStyle[bw],bx=bz.style;if(bv===null&&bx&&(by=bx[bw])){bv=by}if(!bc.test(bv)&&bn.test(bv)){bA=bx.left;e=bz.runtimeStyle&&bz.runtimeStyle.left;if(e){bz.runtimeStyle.left=bz.currentStyle.left}bx.left=bw==="fontSize"?"1em":(bv||0);bv=bx.pixelLeft+"px";bx.left=bA;if(e){bz.runtimeStyle.left=e}}return bv===""?"auto":bv}}Z=aI||aX;function p(by,bw,bv){var bA=bw==="width"?by.offsetWidth:by.offsetHeight,bz=bw==="width"?an:a1,bx=0,e=bz.length;if(bA>0){if(bv!=="border"){for(;bx)<[^<]*)*<\/script>/gi,q=/^(?:select|textarea)/i,h=/\s+/,br=/([?&])_=[^&]*/,K=/^([\w\+\.\-]+:)(?:\/\/([^\/?#:]*)(?::(\d+))?)?/,A=b.fn.load,aa={},r={},aE,s,aV=["*/"]+["*"];try{aE=bl.href}catch(aw){aE=av.createElement("a");aE.href="";aE=aE.href}s=K.exec(aE.toLowerCase())||[];function f(e){return function(by,bA){if(typeof by!=="string"){bA=by;by="*"}if(b.isFunction(bA)){var bx=by.toLowerCase().split(h),bw=0,bz=bx.length,bv,bB,bC;for(;bw=0){var e=bw.slice(by,bw.length);bw=bw.slice(0,by)}var bx="GET";if(bz){if(b.isFunction(bz)){bA=bz;bz=L}else{if(typeof bz==="object"){bz=b.param(bz,b.ajaxSettings.traditional);bx="POST"}}}var bv=this;b.ajax({url:bw,type:bx,dataType:"html",data:bz,complete:function(bC,bB,bD){bD=bC.responseText;if(bC.isResolved()){bC.done(function(bE){bD=bE});bv.html(e?b("
").append(bD.replace(a6,"")).find(e):bD)}if(bA){bv.each(bA,[bD,bB,bC])}}});return this},serialize:function(){return b.param(this.serializeArray())},serializeArray:function(){return this.map(function(){return this.elements?b.makeArray(this.elements):this}).filter(function(){return this.name&&!this.disabled&&(this.checked||q.test(this.nodeName)||aZ.test(this.type))}).map(function(e,bv){var bw=b(this).val();return bw==null?null:b.isArray(bw)?b.map(bw,function(by,bx){return{name:bv.name,value:by.replace(bs,"\r\n")}}):{name:bv.name,value:bw.replace(bs,"\r\n")}}).get()}});b.each("ajaxStart ajaxStop ajaxComplete ajaxError ajaxSuccess ajaxSend".split(" "),function(e,bv){b.fn[bv]=function(bw){return this.on(bv,bw)}});b.each(["get","post"],function(e,bv){b[bv]=function(bw,by,bz,bx){if(b.isFunction(by)){bx=bx||bz;bz=by;by=L}return b.ajax({type:bv,url:bw,data:by,success:bz,dataType:bx})}});b.extend({getScript:function(e,bv){return b.get(e,L,bv,"script")},getJSON:function(e,bv,bw){return b.get(e,bv,bw,"json")},ajaxSetup:function(bv,e){if(e){am(bv,b.ajaxSettings)}else{e=bv;bv=b.ajaxSettings}am(bv,e);return bv},ajaxSettings:{url:aE,isLocal:aM.test(s[1]),global:true,type:"GET",contentType:"application/x-www-form-urlencoded",processData:true,async:true,accepts:{xml:"application/xml, text/xml",html:"text/html",text:"text/plain",json:"application/json, text/javascript","*":aV},contents:{xml:/xml/,html:/html/,json:/json/},responseFields:{xml:"responseXML",text:"responseText"},converters:{"* text":bb.String,"text html":true,"text json":b.parseJSON,"text xml":b.parseXML},flatOptions:{context:true,url:true}},ajaxPrefilter:f(aa),ajaxTransport:f(r),ajax:function(bz,bx){if(typeof bz==="object"){bx=bz;bz=L}bx=bx||{};var bD=b.ajaxSetup({},bx),bS=bD.context||bD,bG=bS!==bD&&(bS.nodeType||bS instanceof b)?b(bS):b.event,bR=b.Deferred(),bN=b.Callbacks("once memory"),bB=bD.statusCode||{},bC,bH={},bO={},bQ,by,bL,bE,bI,bA=0,bw,bK,bJ={readyState:0,setRequestHeader:function(bT,bU){if(!bA){var 
Modules
Here is a list of all modules:
 Core features: Features that implement in C++ the GLSL specification as closely as possible
 Stable extensions: Additional features not specified by GLSL specification
 Recommended extensions: Additional features not specified by GLSL specification
 Experimental extensions: Experimental features not specified by GLSL specification
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_0.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_0.js ================================================ var searchData= [ ['abs',['abs',['../a00241.html#ga439e60a72eadecfeda2df5449c613a64',1,'glm::abs(genType x)'],['../a00241.html#ga81d3abddd0ef0c8de579bc541ecadab6',1,'glm::abs(vec< L, T, Q > const &x)']]], ['acos',['acos',['../a00373.html#gacc9b092df8257c68f19c9053703e2563',1,'glm']]], ['acosh',['acosh',['../a00373.html#ga858f35dc66fd2688f20c52b5f25be76a',1,'glm']]], ['acot',['acot',['../a00301.html#gaeadfb9c9d71093f7865b2ba2ca8d104d',1,'glm']]], ['acoth',['acoth',['../a00301.html#gafaca98a7100170db8841f446282debfa',1,'glm']]], ['acsc',['acsc',['../a00301.html#ga1b4bed91476b9b915e76b4a30236d330',1,'glm']]], ['acsch',['acsch',['../a00301.html#ga4b50aa5e5afc7e19ec113ab91596c576',1,'glm']]], ['adjugate',['adjugate',['../a00339.html#ga40a38402a30860af6e508fe76211e659',1,'glm::adjugate(mat< 2, 2, T, Q > const &m)'],['../a00339.html#gaddb09f7abc1a9c56a243d32ff3538be6',1,'glm::adjugate(mat< 3, 3, T, Q > const &m)'],['../a00339.html#ga9aaa7d1f40391b0b5cacccb60e104ba8',1,'glm::adjugate(mat< 4, 4, T, Q > const &m)']]], ['affineinverse',['affineInverse',['../a00295.html#gae0fcc5fc8783291f9702272de428fa0e',1,'glm']]], ['aligned_5fbvec1',['aligned_bvec1',['../a00303.html#ga780a35f764020f553a9601a3fcdcd059',1,'glm']]], ['aligned_5fbvec2',['aligned_bvec2',['../a00303.html#gae766b317c5afec852bfb3d74a3c54bc8',1,'glm']]], ['aligned_5fbvec3',['aligned_bvec3',['../a00303.html#gae1964ba70d15915e5b710926decbb3cb',1,'glm']]], ['aligned_5fbvec4',['aligned_bvec4',['../a00303.html#gae164a1f7879f828bc35e50b79d786b05',1,'glm']]], ['aligned_5fdmat2',['aligned_dmat2',['../a00303.html#ga6783859382677d35fcd5dac7dcbefdbd',1,'glm']]], ['aligned_5fdmat2x2',['aligned_dmat2x2',['../a00303.html#ga449a3ec2dde6b6bb4bb94c49a6aad388',1,'glm']]], 
['aligned_5fdmat2x3',['aligned_dmat2x3',['../a00303.html#ga53d519a7b1bfb69076b3ec206a6b3bd1',1,'glm']]], ['aligned_5fdmat2x4',['aligned_dmat2x4',['../a00303.html#ga5ccb2baeb0ab57b818c24e0d486c59d0',1,'glm']]], ['aligned_5fdmat3',['aligned_dmat3',['../a00303.html#ga19aa695ffdb45ce29f7ea0b5029627de',1,'glm']]], ['aligned_5fdmat3x2',['aligned_dmat3x2',['../a00303.html#ga5f5123d834bd1170edf8c386834e112c',1,'glm']]], ['aligned_5fdmat3x3',['aligned_dmat3x3',['../a00303.html#ga635bf3732281a2c2ca54d8f9d33d178f',1,'glm']]], ['aligned_5fdmat3x4',['aligned_dmat3x4',['../a00303.html#gaf488c6ad88c185054595d4d5c7ba5b9d',1,'glm']]], ['aligned_5fdmat4',['aligned_dmat4',['../a00303.html#ga001bb387ae8192fa94dbd8b23b600439',1,'glm']]], ['aligned_5fdmat4x2',['aligned_dmat4x2',['../a00303.html#gaa409cfb737bd59b68dc683e9b03930cc',1,'glm']]], ['aligned_5fdmat4x3',['aligned_dmat4x3',['../a00303.html#ga621e89ca1dbdcb7b5a3e7de237c44121',1,'glm']]], ['aligned_5fdmat4x4',['aligned_dmat4x4',['../a00303.html#gac9bda778d0b7ad82f656dab99b71857a',1,'glm']]], ['aligned_5fdvec1',['aligned_dvec1',['../a00303.html#ga4974f46ae5a19415d91316960a53617a',1,'glm']]], ['aligned_5fdvec2',['aligned_dvec2',['../a00303.html#ga18d859f87122b2b3b2992ffe86dbebc0',1,'glm']]], ['aligned_5fdvec3',['aligned_dvec3',['../a00303.html#gaa37869eea77d28419b2fb0ff70b69bf0',1,'glm']]], ['aligned_5fdvec4',['aligned_dvec4',['../a00303.html#ga8a9f0a4795ccc442fa9901845026f9f5',1,'glm']]], ['aligned_5fhighp_5fbvec1',['aligned_highp_bvec1',['../a00303.html#ga862843a45b01c35ffe4d44c47ea774ad',1,'glm']]], ['aligned_5fhighp_5fbvec2',['aligned_highp_bvec2',['../a00303.html#ga0731b593c5e33559954c80f8687e76c6',1,'glm']]], ['aligned_5fhighp_5fbvec3',['aligned_highp_bvec3',['../a00303.html#ga0913bdf048d0cb74af1d2512aec675bc',1,'glm']]], ['aligned_5fhighp_5fbvec4',['aligned_highp_bvec4',['../a00303.html#ga9df1d0c425852cf63a57e533b7a83f4f',1,'glm']]], 
['aligned_5fhighp_5fdmat2',['aligned_highp_dmat2',['../a00303.html#ga3a7eeae43cb7673e14cc89bf02f7dd45',1,'glm']]], ['aligned_5fhighp_5fdmat2x2',['aligned_highp_dmat2x2',['../a00303.html#gaef26dfe3855a91644665b55c9096a8c8',1,'glm']]], ['aligned_5fhighp_5fdmat2x3',['aligned_highp_dmat2x3',['../a00303.html#gaa7c9d4ab7ab651cdf8001fe7843e238b',1,'glm']]], ['aligned_5fhighp_5fdmat2x4',['aligned_highp_dmat2x4',['../a00303.html#gaa0d2b8a75f1908dcf32c27f8524bdced',1,'glm']]], ['aligned_5fhighp_5fdmat3',['aligned_highp_dmat3',['../a00303.html#gad8f6abb2c9994850b5d5c04a5f979ed8',1,'glm']]], ['aligned_5fhighp_5fdmat3x2',['aligned_highp_dmat3x2',['../a00303.html#gab069b2fc2ec785fc4e193cf26c022679',1,'glm']]], ['aligned_5fhighp_5fdmat3x3',['aligned_highp_dmat3x3',['../a00303.html#ga66073b1ddef34b681741f572338ddb8e',1,'glm']]], ['aligned_5fhighp_5fdmat3x4',['aligned_highp_dmat3x4',['../a00303.html#ga683c8ca66de323ea533a760abedd0efc',1,'glm']]], ['aligned_5fhighp_5fdmat4',['aligned_highp_dmat4',['../a00303.html#gacaa7407ea00ffdd322ce86a57adb547e',1,'glm']]], ['aligned_5fhighp_5fdmat4x2',['aligned_highp_dmat4x2',['../a00303.html#ga93a23ca3d42818d56e0702213c66354b',1,'glm']]], ['aligned_5fhighp_5fdmat4x3',['aligned_highp_dmat4x3',['../a00303.html#gacab7374b560745cb1d0a306a90353f58',1,'glm']]], ['aligned_5fhighp_5fdmat4x4',['aligned_highp_dmat4x4',['../a00303.html#ga1fbfba14368b742972d3b58a0a303682',1,'glm']]], ['aligned_5fhighp_5fdvec1',['aligned_highp_dvec1',['../a00303.html#gaf0448b0f7ceb8273f7eda3a92205eefc',1,'glm']]], ['aligned_5fhighp_5fdvec2',['aligned_highp_dvec2',['../a00303.html#gab173a333e6b7ce153ceba66ac4a321cf',1,'glm']]], ['aligned_5fhighp_5fdvec3',['aligned_highp_dvec3',['../a00303.html#gae94ef61edfa047d05bc69b6065fc42ba',1,'glm']]], ['aligned_5fhighp_5fdvec4',['aligned_highp_dvec4',['../a00303.html#ga8fad35c5677f228e261fe541f15363a4',1,'glm']]], ['aligned_5fhighp_5fivec1',['aligned_highp_ivec1',['../a00303.html#gad63b8c5b4dc0500d54d7414ef555178f',1,'glm']]], 
['aligned_5fhighp_5fivec2',['aligned_highp_ivec2',['../a00303.html#ga41563650f36cb7f479e080de21e08418',1,'glm']]], ['aligned_5fhighp_5fivec3',['aligned_highp_ivec3',['../a00303.html#ga6eca5170bb35eac90b4972590fd31a06',1,'glm']]], ['aligned_5fhighp_5fivec4',['aligned_highp_ivec4',['../a00303.html#ga31bfa801e1579fdba752ec3f7a45ec91',1,'glm']]], ['aligned_5fhighp_5fmat2',['aligned_highp_mat2',['../a00303.html#gaf9db5e8a929c317da5aa12cc53741b63',1,'glm']]], ['aligned_5fhighp_5fmat2x2',['aligned_highp_mat2x2',['../a00303.html#gab559d943abf92bc588bcd3f4c0e4664b',1,'glm']]], ['aligned_5fhighp_5fmat2x3',['aligned_highp_mat2x3',['../a00303.html#ga50c9af5aa3a848956d625fc64dc8488e',1,'glm']]], ['aligned_5fhighp_5fmat2x4',['aligned_highp_mat2x4',['../a00303.html#ga0edcfdd179f8a158342eead48a4d0c2a',1,'glm']]], ['aligned_5fhighp_5fmat3',['aligned_highp_mat3',['../a00303.html#gabab3afcc04459c7b123604ae5dc663f6',1,'glm']]], ['aligned_5fhighp_5fmat3x2',['aligned_highp_mat3x2',['../a00303.html#ga9fc2167b47c9be9295f2d8eea7f0ca75',1,'glm']]], ['aligned_5fhighp_5fmat3x3',['aligned_highp_mat3x3',['../a00303.html#ga2f7b8c99ba6f2d07c73a195a8143c259',1,'glm']]], ['aligned_5fhighp_5fmat3x4',['aligned_highp_mat3x4',['../a00303.html#ga52e00afd0eb181e6738f40cf41787049',1,'glm']]], ['aligned_5fhighp_5fmat4',['aligned_highp_mat4',['../a00303.html#ga058ae939bfdbcbb80521dd4a3b01afba',1,'glm']]], ['aligned_5fhighp_5fmat4x2',['aligned_highp_mat4x2',['../a00303.html#ga84e1f5e0718952a079b748825c03f956',1,'glm']]], ['aligned_5fhighp_5fmat4x3',['aligned_highp_mat4x3',['../a00303.html#gafff1684c4ff19b4a818138ccacc1e78d',1,'glm']]], ['aligned_5fhighp_5fmat4x4',['aligned_highp_mat4x4',['../a00303.html#ga40d49648083a0498a12a4bb41ae6ece8',1,'glm']]], ['aligned_5fhighp_5fuvec1',['aligned_highp_uvec1',['../a00303.html#ga5b80e28396c6ef7d32c6fd18df498451',1,'glm']]], ['aligned_5fhighp_5fuvec2',['aligned_highp_uvec2',['../a00303.html#ga04db692662a4908beeaf5a5ba6e19483',1,'glm']]], 
['aligned_5fhighp_5fuvec3',['aligned_highp_uvec3',['../a00303.html#ga073fd6e8b241afade6d8afbd676b2667',1,'glm']]], ['aligned_5fhighp_5fuvec4',['aligned_highp_uvec4',['../a00303.html#gabdd60462042859f876c17c7346c732a5',1,'glm']]], ['aligned_5fhighp_5fvec1',['aligned_highp_vec1',['../a00303.html#ga4d0bd70d5fac49b800546d608b707513',1,'glm']]], ['aligned_5fhighp_5fvec2',['aligned_highp_vec2',['../a00303.html#gac9f8482dde741fb6bab7248b81a45465',1,'glm']]], ['aligned_5fhighp_5fvec3',['aligned_highp_vec3',['../a00303.html#ga65415d2d68c9cc0ca554524a8f5510b2',1,'glm']]], ['aligned_5fhighp_5fvec4',['aligned_highp_vec4',['../a00303.html#ga7cb26d354dd69d23849c34c4fba88da9',1,'glm']]], ['aligned_5fivec1',['aligned_ivec1',['../a00303.html#ga76298aed82a439063c3d55980c84aa0b',1,'glm']]], ['aligned_5fivec2',['aligned_ivec2',['../a00303.html#gae4f38fd2c86cee6940986197777b3ca4',1,'glm']]], ['aligned_5fivec3',['aligned_ivec3',['../a00303.html#ga32794322d294e5ace7fed4a61896f270',1,'glm']]], ['aligned_5fivec4',['aligned_ivec4',['../a00303.html#ga7f79eae5927c9033d84617e49f6f34e4',1,'glm']]], ['aligned_5flowp_5fbvec1',['aligned_lowp_bvec1',['../a00303.html#gac6036449ab1c4abf8efe1ea00fcdd1c9',1,'glm']]], ['aligned_5flowp_5fbvec2',['aligned_lowp_bvec2',['../a00303.html#ga59fadcd3835646e419372ae8b43c5d37',1,'glm']]], ['aligned_5flowp_5fbvec3',['aligned_lowp_bvec3',['../a00303.html#ga83aab4d191053f169c93a3e364f2e118',1,'glm']]], ['aligned_5flowp_5fbvec4',['aligned_lowp_bvec4',['../a00303.html#gaa7a76555ee4853614e5755181a8dd54e',1,'glm']]], ['aligned_5flowp_5fdmat2',['aligned_lowp_dmat2',['../a00303.html#ga79a90173d8faa9816dc852ce447d66ca',1,'glm']]], ['aligned_5flowp_5fdmat2x2',['aligned_lowp_dmat2x2',['../a00303.html#ga07cb8e846666cbf56045b064fb553d2e',1,'glm']]], ['aligned_5flowp_5fdmat2x3',['aligned_lowp_dmat2x3',['../a00303.html#ga7a4536b6e1f2ebb690f63816b5d7e48b',1,'glm']]], 
['aligned_5flowp_5fdmat2x4',['aligned_lowp_dmat2x4',['../a00303.html#gab0cf4f7c9a264941519acad286e055ea',1,'glm']]], ['aligned_5flowp_5fdmat3',['aligned_lowp_dmat3',['../a00303.html#gac00e15efded8a57c9dec3aed0fb547e7',1,'glm']]], ['aligned_5flowp_5fdmat3x2',['aligned_lowp_dmat3x2',['../a00303.html#gaa281a47d5d627313984d0f8df993b648',1,'glm']]], ['aligned_5flowp_5fdmat3x3',['aligned_lowp_dmat3x3',['../a00303.html#ga7f3148a72355e39932d6855baca42ebc',1,'glm']]], ['aligned_5flowp_5fdmat3x4',['aligned_lowp_dmat3x4',['../a00303.html#gaea3ccc5ef5b178e6e49b4fa1427605d3',1,'glm']]], ['aligned_5flowp_5fdmat4',['aligned_lowp_dmat4',['../a00303.html#gab92c6d7d58d43dfb8147e9aedfe8351b',1,'glm']]], ['aligned_5flowp_5fdmat4x2',['aligned_lowp_dmat4x2',['../a00303.html#gaf806dfdaffb2e9f7681b1cd2825898ce',1,'glm']]], ['aligned_5flowp_5fdmat4x3',['aligned_lowp_dmat4x3',['../a00303.html#gab0931ac7807fa1428c7bbf249efcdf0d',1,'glm']]], ['aligned_5flowp_5fdmat4x4',['aligned_lowp_dmat4x4',['../a00303.html#gad8220a93d2fca2dd707821b4ab6f809e',1,'glm']]], ['aligned_5flowp_5fdvec1',['aligned_lowp_dvec1',['../a00303.html#ga7f8a2cc5a686e52b1615761f4978ca62',1,'glm']]], ['aligned_5flowp_5fdvec2',['aligned_lowp_dvec2',['../a00303.html#ga0e37cff4a43cca866101f0a35f01db6d',1,'glm']]], ['aligned_5flowp_5fdvec3',['aligned_lowp_dvec3',['../a00303.html#gab9e669c4efd52d3347fc6d5f6b20fd59',1,'glm']]], ['aligned_5flowp_5fdvec4',['aligned_lowp_dvec4',['../a00303.html#ga226f5ec7a953cea559c16fe3aff9924f',1,'glm']]], ['aligned_5flowp_5fivec1',['aligned_lowp_ivec1',['../a00303.html#ga1101d3a82b2e3f5f8828bd8f3adab3e1',1,'glm']]], ['aligned_5flowp_5fivec2',['aligned_lowp_ivec2',['../a00303.html#ga44c4accad582cfbd7226a19b83b0cadc',1,'glm']]], ['aligned_5flowp_5fivec3',['aligned_lowp_ivec3',['../a00303.html#ga65663f10a02e52cedcddbcfe36ddf38d',1,'glm']]], ['aligned_5flowp_5fivec4',['aligned_lowp_ivec4',['../a00303.html#gaae92fcec8b2e0328ffbeac31cc4fc419',1,'glm']]], 
['aligned_5flowp_5fmat2',['aligned_lowp_mat2',['../a00303.html#ga17c424412207b00dba1cf587b099eea3',1,'glm']]], ['aligned_5flowp_5fmat2x2',['aligned_lowp_mat2x2',['../a00303.html#ga0e44aeb930a47f9cbf2db15b56433b0f',1,'glm']]], ['aligned_5flowp_5fmat2x3',['aligned_lowp_mat2x3',['../a00303.html#ga7dec6d96bc61312b1e56d137c9c74030',1,'glm']]], ['aligned_5flowp_5fmat2x4',['aligned_lowp_mat2x4',['../a00303.html#gaa694fab1f8df5f658846573ba8ffc563',1,'glm']]], ['aligned_5flowp_5fmat3',['aligned_lowp_mat3',['../a00303.html#ga1eb9076cc28ead5020fd3029fd0472c5',1,'glm']]], ['aligned_5flowp_5fmat3x2',['aligned_lowp_mat3x2',['../a00303.html#ga2d6639f0bd777bae1ee0eba71cd7bfdc',1,'glm']]], ['aligned_5flowp_5fmat3x3',['aligned_lowp_mat3x3',['../a00303.html#gaeaab04e378a90956eec8d68a99d777ed',1,'glm']]], ['aligned_5flowp_5fmat3x4',['aligned_lowp_mat3x4',['../a00303.html#ga1f03696ab066572c6c044e63edf635a2',1,'glm']]], ['aligned_5flowp_5fmat4',['aligned_lowp_mat4',['../a00303.html#ga25ea2f684e36aa5e978b4f2f86593824',1,'glm']]], ['aligned_5flowp_5fmat4x2',['aligned_lowp_mat4x2',['../a00303.html#ga2cb16c3fdfb15e0719d942ee3b548bc4',1,'glm']]], ['aligned_5flowp_5fmat4x3',['aligned_lowp_mat4x3',['../a00303.html#ga7e96981e872f17a780d9f1c22dc1f512',1,'glm']]], ['aligned_5flowp_5fmat4x4',['aligned_lowp_mat4x4',['../a00303.html#gadae3dcfc22d28c64d0548cbfd9d08719',1,'glm']]], ['aligned_5flowp_5fuvec1',['aligned_lowp_uvec1',['../a00303.html#gad09b93acc43c43423408d17a64f6d7ca',1,'glm']]], ['aligned_5flowp_5fuvec2',['aligned_lowp_uvec2',['../a00303.html#ga6f94fcd28dde906fc6cad5f742b55c1a',1,'glm']]], ['aligned_5flowp_5fuvec3',['aligned_lowp_uvec3',['../a00303.html#ga9e9f006970b1a00862e3e6e599eedd4c',1,'glm']]], ['aligned_5flowp_5fuvec4',['aligned_lowp_uvec4',['../a00303.html#ga46b1b0b9eb8625a5d69137bd66cd13dc',1,'glm']]], ['aligned_5flowp_5fvec1',['aligned_lowp_vec1',['../a00303.html#gab34aee3d5e121c543fea11d2c50ecc43',1,'glm']]], 
['aligned_5flowp_5fvec2',['aligned_lowp_vec2',['../a00303.html#ga53ac5d252317f1fa43c2ef921857bf13',1,'glm']]], ['aligned_5flowp_5fvec3',['aligned_lowp_vec3',['../a00303.html#ga98f0b5cd65fce164ff1367c2a3b3aa1e',1,'glm']]], ['aligned_5flowp_5fvec4',['aligned_lowp_vec4',['../a00303.html#ga82f7275d6102593a69ce38cdad680409',1,'glm']]], ['aligned_5fmat2',['aligned_mat2',['../a00303.html#ga5a8a5f8c47cd7d5502dd9932f83472b9',1,'glm']]], ['aligned_5fmat2x2',['aligned_mat2x2',['../a00303.html#gabb04f459d81d753d278b2072e2375e8e',1,'glm']]], ['aligned_5fmat2x3',['aligned_mat2x3',['../a00303.html#ga832476bb1c59ef673db37433ff34e399',1,'glm']]], ['aligned_5fmat2x4',['aligned_mat2x4',['../a00303.html#gadab11a7504430825b648ff7c7e36b725',1,'glm']]], ['aligned_5fmat3',['aligned_mat3',['../a00303.html#ga43a92a24ca863e0e0f3b65834b3cf714',1,'glm']]], ['aligned_5fmat3x2',['aligned_mat3x2',['../a00303.html#ga5c0df24ba85eafafc0eb0c90690510ed',1,'glm']]], ['aligned_5fmat3x3',['aligned_mat3x3',['../a00303.html#gadb065dbe5c11271fef8cf2ea8608f187',1,'glm']]], ['aligned_5fmat3x4',['aligned_mat3x4',['../a00303.html#ga88061c72c997b94c420f2b0a60d9df26',1,'glm']]], ['aligned_5fmat4',['aligned_mat4',['../a00303.html#gab0fddcf95dd51cbcbf624ea7c40dfeb8',1,'glm']]], ['aligned_5fmat4x2',['aligned_mat4x2',['../a00303.html#gac9a2d0fb815fd5c2bd58b869c55e32d3',1,'glm']]], ['aligned_5fmat4x3',['aligned_mat4x3',['../a00303.html#ga452bbbfd26e244de216e4d004d50bb74',1,'glm']]], ['aligned_5fmat4x4',['aligned_mat4x4',['../a00303.html#ga8b8fb86973a0b768c5bd802c92fac1a1',1,'glm']]], ['aligned_5fmediump_5fbvec1',['aligned_mediump_bvec1',['../a00303.html#gadd3b8bd71a758f7fb0da8e525156f34e',1,'glm']]], ['aligned_5fmediump_5fbvec2',['aligned_mediump_bvec2',['../a00303.html#gacb183eb5e67ec0d0ea5a016cba962810',1,'glm']]], ['aligned_5fmediump_5fbvec3',['aligned_mediump_bvec3',['../a00303.html#gacfa4a542f1b20a5b63ad702dfb6fd587',1,'glm']]], 
['aligned_5fmediump_5fbvec4',['aligned_mediump_bvec4',['../a00303.html#ga91bc1f513bb9b0fd60281d57ded9a48c',1,'glm']]], ['aligned_5fmediump_5fdmat2',['aligned_mediump_dmat2',['../a00303.html#ga62a2dfd668c91072b72c3109fc6cda28',1,'glm']]], ['aligned_5fmediump_5fdmat2x2',['aligned_mediump_dmat2x2',['../a00303.html#ga9b7feec247d378dd407ba81f56ea96c8',1,'glm']]], ['aligned_5fmediump_5fdmat2x3',['aligned_mediump_dmat2x3',['../a00303.html#gafcb189f4f93648fe7ca802ca4aca2eb8',1,'glm']]], ['aligned_5fmediump_5fdmat2x4',['aligned_mediump_dmat2x4',['../a00303.html#ga92f8873e3bbd5ca1323c8bbe5725cc5e',1,'glm']]], ['aligned_5fmediump_5fdmat3',['aligned_mediump_dmat3',['../a00303.html#ga6dc2832b747c00e0a0df621aba196960',1,'glm']]], ['aligned_5fmediump_5fdmat3x2',['aligned_mediump_dmat3x2',['../a00303.html#ga5a97f0355d801de3444d42c1d5b40438',1,'glm']]], ['aligned_5fmediump_5fdmat3x3',['aligned_mediump_dmat3x3',['../a00303.html#ga649d0acf01054b17e679cf00e150e025',1,'glm']]], ['aligned_5fmediump_5fdmat3x4',['aligned_mediump_dmat3x4',['../a00303.html#ga45e155a4840f69b2fa4ed8047a676860',1,'glm']]], ['aligned_5fmediump_5fdmat4',['aligned_mediump_dmat4',['../a00303.html#ga8a9376d82f0e946e25137eb55543e6ce',1,'glm']]], ['aligned_5fmediump_5fdmat4x2',['aligned_mediump_dmat4x2',['../a00303.html#gabc25e547f4de4af62403492532cd1b6d',1,'glm']]], ['aligned_5fmediump_5fdmat4x3',['aligned_mediump_dmat4x3',['../a00303.html#gae84f4763ecdc7457ecb7930bad12057c',1,'glm']]], ['aligned_5fmediump_5fdmat4x4',['aligned_mediump_dmat4x4',['../a00303.html#gaa292ebaa907afdecb2d5967fb4fb1247',1,'glm']]], ['aligned_5fmediump_5fdvec1',['aligned_mediump_dvec1',['../a00303.html#ga7180b685c581adb224406a7f831608e3',1,'glm']]], ['aligned_5fmediump_5fdvec2',['aligned_mediump_dvec2',['../a00303.html#ga9af1eabe22f569e70d9893be72eda0f5',1,'glm']]], ['aligned_5fmediump_5fdvec3',['aligned_mediump_dvec3',['../a00303.html#ga058e7ddab1428e47f2197bdd3a5a6953',1,'glm']]], 
['aligned_5fmediump_5fdvec4',['aligned_mediump_dvec4',['../a00303.html#gaffd747ea2aea1e69c2ecb04e68521b21',1,'glm']]], ['aligned_5fmediump_5fivec1',['aligned_mediump_ivec1',['../a00303.html#ga20e63dd980b81af10cadbbe219316650',1,'glm']]], ['aligned_5fmediump_5fivec2',['aligned_mediump_ivec2',['../a00303.html#gaea13d89d49daca2c796aeaa82fc2c2f2',1,'glm']]], ['aligned_5fmediump_5fivec3',['aligned_mediump_ivec3',['../a00303.html#gabbf0f15e9c3d9868e43241ad018f82bd',1,'glm']]], ['aligned_5fmediump_5fivec4',['aligned_mediump_ivec4',['../a00303.html#ga6099dd7878d0a78101a4250d8cd2d736',1,'glm']]], ['aligned_5fmediump_5fmat2',['aligned_mediump_mat2',['../a00303.html#gaf6f041b212c57664d88bc6aefb7e36f3',1,'glm']]], ['aligned_5fmediump_5fmat2x2',['aligned_mediump_mat2x2',['../a00303.html#ga04bf49316ee777d42fcfe681ee37d7be',1,'glm']]], ['aligned_5fmediump_5fmat2x3',['aligned_mediump_mat2x3',['../a00303.html#ga26a0b61e444a51a37b9737cf4d84291b',1,'glm']]], ['aligned_5fmediump_5fmat2x4',['aligned_mediump_mat2x4',['../a00303.html#ga163facc9ed2692ea1300ed57c5d12b17',1,'glm']]], ['aligned_5fmediump_5fmat3',['aligned_mediump_mat3',['../a00303.html#ga3b76ba17ae5d53debeb6f7e55919a57c',1,'glm']]], ['aligned_5fmediump_5fmat3x2',['aligned_mediump_mat3x2',['../a00303.html#ga80dee705d714300378e0847f45059097',1,'glm']]], ['aligned_5fmediump_5fmat3x3',['aligned_mediump_mat3x3',['../a00303.html#ga721f5404caf40d68962dcc0529de71d9',1,'glm']]], ['aligned_5fmediump_5fmat3x4',['aligned_mediump_mat3x4',['../a00303.html#ga98f4dc6722a2541a990918c074075359',1,'glm']]], ['aligned_5fmediump_5fmat4',['aligned_mediump_mat4',['../a00303.html#gaeefee8317192174596852ce19b602720',1,'glm']]], ['aligned_5fmediump_5fmat4x2',['aligned_mediump_mat4x2',['../a00303.html#ga46f372a006345c252a41267657cc22c0',1,'glm']]], ['aligned_5fmediump_5fmat4x3',['aligned_mediump_mat4x3',['../a00303.html#ga0effece4545acdebdc2a5512a303110e',1,'glm']]], 
['aligned_5fmediump_5fmat4x4',['aligned_mediump_mat4x4',['../a00303.html#ga312864244cae4e8f10f478cffd0f76de',1,'glm']]], ['aligned_5fmediump_5fuvec1',['aligned_mediump_uvec1',['../a00303.html#gacb78126ea2eb779b41c7511128ff1283',1,'glm']]], ['aligned_5fmediump_5fuvec2',['aligned_mediump_uvec2',['../a00303.html#ga081d53e0a71443d0b68ea61c870f9adc',1,'glm']]], ['aligned_5fmediump_5fuvec3',['aligned_mediump_uvec3',['../a00303.html#gad6fc921bdde2bdbc7e09b028e1e9b379',1,'glm']]], ['aligned_5fmediump_5fuvec4',['aligned_mediump_uvec4',['../a00303.html#ga73ea0c1ba31580e107d21270883f51fc',1,'glm']]], ['aligned_5fmediump_5fvec1',['aligned_mediump_vec1',['../a00303.html#ga6b797eec76fa471e300158f3453b3b2e',1,'glm']]], ['aligned_5fmediump_5fvec2',['aligned_mediump_vec2',['../a00303.html#ga026a55ddbf2bafb1432f1157a2708616',1,'glm']]], ['aligned_5fmediump_5fvec3',['aligned_mediump_vec3',['../a00303.html#ga3a25e494173f6a64637b08a1b50a2132',1,'glm']]], ['aligned_5fmediump_5fvec4',['aligned_mediump_vec4',['../a00303.html#ga320d1c661cff2ef214eb50241f2928b2',1,'glm']]], ['aligned_5fuvec1',['aligned_uvec1',['../a00303.html#ga1ff8ed402c93d280ff0597c1c5e7c548',1,'glm']]], ['aligned_5fuvec2',['aligned_uvec2',['../a00303.html#ga074137e3be58528d67041c223d49f398',1,'glm']]], ['aligned_5fuvec3',['aligned_uvec3',['../a00303.html#ga2a8d9c3046f89d854eb758adfa0811c0',1,'glm']]], ['aligned_5fuvec4',['aligned_uvec4',['../a00303.html#gabf842c45eea186170c267a328e3f3b7d',1,'glm']]], ['aligned_5fvec1',['aligned_vec1',['../a00303.html#ga05e6d4c908965d04191c2070a8d0a65e',1,'glm']]], ['aligned_5fvec2',['aligned_vec2',['../a00303.html#ga0682462f8096a226773e20fac993cde5',1,'glm']]], ['aligned_5fvec3',['aligned_vec3',['../a00303.html#ga7cf643b66664e0cd3c48759ae66c2bd0',1,'glm']]], ['aligned_5fvec4',['aligned_vec4',['../a00303.html#ga85d89e83cb8137e1be1446de8c3b643a',1,'glm']]], ['all',['all',['../a00374.html#ga87e53f50b679f5f95c5cb4780311b3dd',1,'glm']]], 
['angle',['angle',['../a00257.html#ga8aa248b31d5ade470c87304df5eb7bd8',1,'glm::angle(qua< T, Q > const &x)'],['../a00367.html#ga2e2917b4cb75ca3d043ac15ff88f14e1',1,'glm::angle(vec< L, T, Q > const &x, vec< L, T, Q > const &y)']]], ['angleaxis',['angleAxis',['../a00257.html#ga5c0095cfcb218c75a4b79d7687950036',1,'glm']]], ['any',['any',['../a00374.html#ga911b3f8e41459dd551ccb6d385d91061',1,'glm']]], ['arecollinear',['areCollinear',['../a00368.html#ga13da4a787a2ff70e95d561fb19ff91b4',1,'glm']]], ['areorthogonal',['areOrthogonal',['../a00368.html#gac7b95b3f798e3c293262b2bdaad47c57',1,'glm']]], ['areorthonormal',['areOrthonormal',['../a00368.html#ga1b091c3d7f9ee3b0708311c001c293e3',1,'glm']]], ['asec',['asec',['../a00301.html#ga2c5b7f962c2c9ff684e6d2de48db1f10',1,'glm']]], ['asech',['asech',['../a00301.html#gaec7586dccfe431f850d006f3824b8ca6',1,'glm']]], ['asin',['asin',['../a00373.html#ga0552d2df4865fa8c3d7cfc3ec2caac73',1,'glm']]], ['asinh',['asinh',['../a00373.html#ga3ef16b501ee859fddde88e22192a5950',1,'glm']]], ['associated_5fmin_5fmax_2ehpp',['associated_min_max.hpp',['../a00007.html',1,'']]], ['associatedmax',['associatedMax',['../a00308.html#ga7d9c8785230c8db60f72ec8975f1ba45',1,'glm::associatedMax(T x, U a, T y, U b)'],['../a00308.html#ga5c6758bc50aa7fbe700f87123a045aad',1,'glm::associatedMax(vec< L, T, Q > const &x, vec< L, U, Q > const &a, vec< L, T, Q > const &y, vec< L, U, Q > const &b)'],['../a00308.html#ga0d169d6ce26b03248df175f39005d77f',1,'glm::associatedMax(T x, vec< L, U, Q > const &a, T y, vec< L, U, Q > const &b)'],['../a00308.html#ga4086269afabcb81dd7ded33cb3448653',1,'glm::associatedMax(vec< L, T, Q > const &x, U a, vec< L, T, Q > const &y, U b)'],['../a00308.html#gaec891e363d91abbf3a4443cf2f652209',1,'glm::associatedMax(T x, U a, T y, U b, T z, U c)'],['../a00308.html#gab84fdc35016a31e8cd0cbb8296bddf7c',1,'glm::associatedMax(vec< L, T, Q > const &x, vec< L, U, Q > const &a, vec< L, T, Q > const &y, vec< L, U, Q > const &b, vec< L, T, Q > const &z, 
vec< L, U, Q > const &c)'],['../a00308.html#gadd2a2002f4f2144bbc39eb2336dd2fba',1,'glm::associatedMax(T x, vec< L, U, Q > const &a, T y, vec< L, U, Q > const &b, T z, vec< L, U, Q > const &c)'],['../a00308.html#ga19f59d1141a51a3b2108a9807af78f7f',1,'glm::associatedMax(vec< L, T, Q > const &x, U a, vec< L, T, Q > const &y, U b, vec< L, T, Q > const &z, U c)'],['../a00308.html#ga3038ffcb43eaa6af75897a99a5047ccc',1,'glm::associatedMax(T x, U a, T y, U b, T z, U c, T w, U d)'],['../a00308.html#gaf5ab0c428f8d1cd9e3b45fcfbf6423a6',1,'glm::associatedMax(vec< L, T, Q > const &x, vec< L, U, Q > const &a, vec< L, T, Q > const &y, vec< L, U, Q > const &b, vec< L, T, Q > const &z, vec< L, U, Q > const &c, vec< L, T, Q > const &w, vec< L, U, Q > const &d)'],['../a00308.html#ga11477c2c4b5b0bfd1b72b29df3725a9d',1,'glm::associatedMax(T x, vec< L, U, Q > const &a, T y, vec< L, U, Q > const &b, T z, vec< L, U, Q > const &c, T w, vec< L, U, Q > const &d)'],['../a00308.html#gab9c3dd74cac899d2c625b5767ea3b3fb',1,'glm::associatedMax(vec< L, T, Q > const &x, U a, vec< L, T, Q > const &y, U b, vec< L, T, Q > const &z, U c, vec< L, T, Q > const &w, U d)']]], ['associatedmin',['associatedMin',['../a00308.html#gacc01bd272359572fc28437ae214a02df',1,'glm::associatedMin(T x, U a, T y, U b)'],['../a00308.html#gac2f0dff90948f2e44386a5eafd941d1c',1,'glm::associatedMin(vec< L, T, Q > const &x, vec< L, U, Q > const &a, vec< L, T, Q > const &y, vec< L, U, Q > const &b)'],['../a00308.html#gacfec519c820331d023ef53a511749319',1,'glm::associatedMin(T x, const vec< L, U, Q > &a, T y, const vec< L, U, Q > &b)'],['../a00308.html#ga4757c7cab2d809124a8525d0a9deeb37',1,'glm::associatedMin(vec< L, T, Q > const &x, U a, vec< L, T, Q > const &y, U b)'],['../a00308.html#gad0aa8f86259a26d839d34a3577a923fc',1,'glm::associatedMin(T x, U a, T y, U b, T z, U c)'],['../a00308.html#ga723e5411cebc7ffbd5c81ffeec61127d',1,'glm::associatedMin(vec< L, T, Q > const &x, vec< L, U, Q > const &a, vec< L, T, Q > const &y, vec< L, 
U, Q > const &b, vec< L, T, Q > const &z, vec< L, U, Q > const &c)'],['../a00308.html#ga432224ebe2085eaa2b63a077ecbbbff6',1,'glm::associatedMin(T x, U a, T y, U b, T z, U c, T w, U d)'],['../a00308.html#ga66b08118bc88f0494bcacb7cdb940556',1,'glm::associatedMin(vec< L, T, Q > const &x, vec< L, U, Q > const &a, vec< L, T, Q > const &y, vec< L, U, Q > const &b, vec< L, T, Q > const &z, vec< L, U, Q > const &c, vec< L, T, Q > const &w, vec< L, U, Q > const &d)'],['../a00308.html#ga78c28fde1a7080fb7420bd88e68c6c68',1,'glm::associatedMin(T x, vec< L, U, Q > const &a, T y, vec< L, U, Q > const &b, T z, vec< L, U, Q > const &c, T w, vec< L, U, Q > const &d)'],['../a00308.html#ga2db7e351994baee78540a562d4bb6d3b',1,'glm::associatedMin(vec< L, T, Q > const &x, U a, vec< L, T, Q > const &y, U b, vec< L, T, Q > const &z, U c, vec< L, T, Q > const &w, U d)']]], ['atan',['atan',['../a00373.html#gac61629f3a4aa14057e7a8cae002291db',1,'glm::atan(vec< L, T, Q > const &y, vec< L, T, Q > const &x)'],['../a00373.html#ga5229f087eaccbc466f1c609ce3107b95',1,'glm::atan(vec< L, T, Q > const &y_over_x)']]], ['atan2',['atan2',['../a00315.html#gac63011205bf6d0be82589dc56dd26708',1,'glm::atan2(T x, T y)'],['../a00315.html#ga83bc41bd6f89113ee8006576b12bfc50',1,'glm::atan2(const vec< 2, T, Q > &x, const vec< 2, T, Q > &y)'],['../a00315.html#gac39314f5087e7e51e592897cabbc1927',1,'glm::atan2(const vec< 3, T, Q > &x, const vec< 3, T, Q > &y)'],['../a00315.html#gaba86c28da7bf5bdac64fecf7d56e8ff3',1,'glm::atan2(const vec< 4, T, Q > &x, const vec< 4, T, Q > &y)']]], ['atanh',['atanh',['../a00373.html#gabc925650e618357d07da255531658b87',1,'glm']]], ['axis',['axis',['../a00257.html#ga764254f10248b505e936e5309a88c23d',1,'glm']]], ['axisangle',['axisAngle',['../a00337.html#gafefe32ce5a90a135287ba34fac3623bc',1,'glm']]], ['axisanglematrix',['axisAngleMatrix',['../a00337.html#ga3a788e2f5223397df5c426413ecc2f6b',1,'glm']]], ['angle_20and_20trigonometry_20functions',['Angle and Trigonometry 
Functions',['../a00373.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_1.html ================================================
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_1.js ================================================ var searchData= [ ['backeasein',['backEaseIn',['../a00318.html#ga93cddcdb6347a44d5927cc2bf2570816',1,'glm::backEaseIn(genType const &a)'],['../a00318.html#ga33777c9dd98f61d9472f96aafdf2bd36',1,'glm::backEaseIn(genType const &a, genType const &o)']]], ['backeaseinout',['backEaseInOut',['../a00318.html#gace6d24722a2f6722b56398206eb810bb',1,'glm::backEaseInOut(genType const &a)'],['../a00318.html#ga68a7b760f2afdfab298d5cd6d7611fb1',1,'glm::backEaseInOut(genType const &a, genType const &o)']]], ['backeaseout',['backEaseOut',['../a00318.html#gabf25069fa906413c858fd46903d520b9',1,'glm::backEaseOut(genType const &a)'],['../a00318.html#ga640c1ac6fe9d277a197da69daf60ee4f',1,'glm::backEaseOut(genType const &a, genType const &o)']]], ['ballrand',['ballRand',['../a00300.html#ga7c53b7797f3147af68a11c767679fa3f',1,'glm']]], ['bit_2ehpp',['bit.hpp',['../a00008.html',1,'']]], ['bitcount',['bitCount',['../a00370.html#ga44abfe3379e11cbd29425a843420d0d6',1,'glm::bitCount(genType v)'],['../a00370.html#gaac7b15e40bdea8d9aa4c4cb34049f7b5',1,'glm::bitCount(vec< L, T, Q > const &v)']]], ['bitfield_2ehpp',['bitfield.hpp',['../a00009.html',1,'']]], ['bitfielddeinterleave',['bitfieldDeinterleave',['../a00288.html#ga091d934233a2e121df91b8c7230357c8',1,'glm::bitfieldDeinterleave(glm::uint16 x)'],['../a00288.html#ga7d1cc24dfbcdd932c3a2abbb76235f98',1,'glm::bitfieldDeinterleave(glm::uint32 x)'],['../a00288.html#ga8dbb8c87092f33bd815dd8a840be5d60',1,'glm::bitfieldDeinterleave(glm::uint64 x)']]], ['bitfieldextract',['bitfieldExtract',['../a00370.html#ga346b25ab11e793e91a4a69c8aa6819f2',1,'glm']]], ['bitfieldfillone',['bitfieldFillOne',['../a00288.html#ga46f9295abe3b5c7658f5b13c7f819f0a',1,'glm::bitfieldFillOne(genIUType Value, int FirstBit, int 
BitCount)'],['../a00288.html#ga3e96dd1f0a4bc892f063251ed118c0c1',1,'glm::bitfieldFillOne(vec< L, T, Q > const &Value, int FirstBit, int BitCount)']]], ['bitfieldfillzero',['bitfieldFillZero',['../a00288.html#ga697b86998b7d74ee0a69d8e9f8819fee',1,'glm::bitfieldFillZero(genIUType Value, int FirstBit, int BitCount)'],['../a00288.html#ga0d16c9acef4be79ea9b47c082a0cf7c2',1,'glm::bitfieldFillZero(vec< L, T, Q > const &Value, int FirstBit, int BitCount)']]], ['bitfieldinsert',['bitfieldInsert',['../a00370.html#ga2e82992340d421fadb61a473df699b20',1,'glm']]], ['bitfieldinterleave',['bitfieldInterleave',['../a00288.html#ga24cad0069f9a0450abd80b3e89501adf',1,'glm::bitfieldInterleave(int8 x, int8 y)'],['../a00288.html#ga9a4976a529aec2cee56525e1165da484',1,'glm::bitfieldInterleave(uint8 x, uint8 y)'],['../a00288.html#ga4a76bbca39c40153f3203d0a1926e142',1,'glm::bitfieldInterleave(u8vec2 const &v)'],['../a00288.html#gac51c33a394593f0631fa3aa5bb778809',1,'glm::bitfieldInterleave(int16 x, int16 y)'],['../a00288.html#ga94f3646a5667f4be56f8dcf3310e963f',1,'glm::bitfieldInterleave(uint16 x, uint16 y)'],['../a00288.html#ga406c4ee56af4ca37a73f449f154eca3e',1,'glm::bitfieldInterleave(u16vec2 const &v)'],['../a00288.html#gaebb756a24a0784e3d6fba8bd011ab77a',1,'glm::bitfieldInterleave(int32 x, int32 y)'],['../a00288.html#ga2f1e2b3fe699e7d897ae38b2115ddcbd',1,'glm::bitfieldInterleave(uint32 x, uint32 y)'],['../a00288.html#ga8cb17574d60abd6ade84bc57c10e8f78',1,'glm::bitfieldInterleave(u32vec2 const &v)'],['../a00288.html#ga8fdb724dccd4a07d57efc01147102137',1,'glm::bitfieldInterleave(int8 x, int8 y, int8 z)'],['../a00288.html#ga9fc2a0dd5dcf8b00e113f272a5feca93',1,'glm::bitfieldInterleave(uint8 x, uint8 y, uint8 z)'],['../a00288.html#gaa901c36a842fa5d126ea650549f17b24',1,'glm::bitfieldInterleave(int16 x, int16 y, int16 z)'],['../a00288.html#ga3afd6d38881fe3948c53d4214d2197fd',1,'glm::bitfieldInterleave(uint16 x, uint16 y, uint16 
z)'],['../a00288.html#gad2075d96a6640121edaa98ea534102ca',1,'glm::bitfieldInterleave(int32 x, int32 y, int32 z)'],['../a00288.html#gab19fbc739fc0cf7247978602c36f7da8',1,'glm::bitfieldInterleave(uint32 x, uint32 y, uint32 z)'],['../a00288.html#ga8a44ae22f5c953b296c42d067dccbe6d',1,'glm::bitfieldInterleave(int8 x, int8 y, int8 z, int8 w)'],['../a00288.html#ga14bb274d54a3c26f4919dd7ed0dd0c36',1,'glm::bitfieldInterleave(uint8 x, uint8 y, uint8 z, uint8 w)'],['../a00288.html#ga180a63161e1319fbd5a53c84d0429c7a',1,'glm::bitfieldInterleave(int16 x, int16 y, int16 z, int16 w)'],['../a00288.html#gafca8768671a14c8016facccb66a89f26',1,'glm::bitfieldInterleave(uint16 x, uint16 y, uint16 z, uint16 w)']]], ['bitfieldreverse',['bitfieldReverse',['../a00370.html#ga750a1d92464489b7711dee67aa3441b6',1,'glm']]], ['bitfieldrotateleft',['bitfieldRotateLeft',['../a00288.html#ga2eb49678a344ce1495bdb5586d9896b9',1,'glm::bitfieldRotateLeft(genIUType In, int Shift)'],['../a00288.html#gae186317091b1a39214ebf79008d44a1e',1,'glm::bitfieldRotateLeft(vec< L, T, Q > const &In, int Shift)']]], ['bitfieldrotateright',['bitfieldRotateRight',['../a00288.html#ga1c33d075c5fb8bd8dbfd5092bfc851ca',1,'glm::bitfieldRotateRight(genIUType In, int Shift)'],['../a00288.html#ga590488e1fc00a6cfe5d3bcaf93fbfe88',1,'glm::bitfieldRotateRight(vec< L, T, Q > const &In, int Shift)']]], ['bool1',['bool1',['../a00315.html#gaddcd7aa2e30e61af5b38660613d3979e',1,'glm']]], ['bool1x1',['bool1x1',['../a00315.html#ga7f895c936f0c29c8729afbbf22806090',1,'glm']]], ['bool2',['bool2',['../a00315.html#gaa09ab65ec9c3c54305ff502e2b1fe6d9',1,'glm']]], ['bool2x2',['bool2x2',['../a00315.html#gadb3703955e513632f98ba12fe051ba3e',1,'glm']]], ['bool2x3',['bool2x3',['../a00315.html#ga9ae6ee155d0f90cb1ae5b6c4546738a0',1,'glm']]], ['bool2x4',['bool2x4',['../a00315.html#ga4d7fa65be8e8e4ad6d920b45c44e471f',1,'glm']]], ['bool3',['bool3',['../a00315.html#ga99629f818737f342204071ef8296b2ed',1,'glm']]], 
['bool3x2',['bool3x2',['../a00315.html#gac7d7311f7e0fa8b6163d96dab033a755',1,'glm']]], ['bool3x3',['bool3x3',['../a00315.html#ga6c97b99aac3e302053ffb58aace9033c',1,'glm']]], ['bool3x4',['bool3x4',['../a00315.html#gae7d6b679463d37d6c527d478fb470fdf',1,'glm']]], ['bool4',['bool4',['../a00315.html#ga13c3200b82708f73faac6d7f09ec91a3',1,'glm']]], ['bool4x2',['bool4x2',['../a00315.html#ga9ed830f52408b2f83c085063a3eaf1d0',1,'glm']]], ['bool4x3',['bool4x3',['../a00315.html#gad0f5dc7f22c2065b1b06d57f1c0658fe',1,'glm']]], ['bool4x4',['bool4x4',['../a00315.html#ga7d2a7d13986602ae2896bfaa394235d4',1,'glm']]], ['bounceeasein',['bounceEaseIn',['../a00318.html#gaac30767f2e430b0c3fc859a4d59c7b5b',1,'glm']]], ['bounceeaseinout',['bounceEaseInOut',['../a00318.html#gadf9f38eff1e5f4c2fa5b629a25ae413e',1,'glm']]], ['bounceeaseout',['bounceEaseOut',['../a00318.html#ga94007005ff0dcfa0749ebfa2aec540b2',1,'glm']]], ['bvec1',['bvec1',['../a00265.html#ga067af382616d93f8e850baae5154cdcc',1,'glm']]], ['bvec2',['bvec2',['../a00281.html#ga0b6123e03653cc1bbe366fc55238a934',1,'glm']]], ['bvec3',['bvec3',['../a00281.html#ga197151b72dfaf289daf98b361760ffe7',1,'glm']]], ['bvec4',['bvec4',['../a00281.html#ga9f7b9712373ff4342d9114619b55f5e3',1,'glm']]], ['byte',['byte',['../a00354.html#ga3005cb0d839d546c616becfa6602c607',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_10.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_10.js ================================================ var searchData= [ ['stable_20extensions',['Stable extensions',['../a00285.html',1,'']]], ['saturate',['saturate',['../a00315.html#ga0fd09e616d122bc2ed9726682ffd44b7',1,'glm::saturate(T x)'],['../a00315.html#gaee97b8001c794a78a44f5d59f62a8aba',1,'glm::saturate(const vec< 2, T, Q > &x)'],['../a00315.html#ga39bfe3a421286ee31680d45c31ccc161',1,'glm::saturate(const vec< 3, T, Q > &x)'],['../a00315.html#ga356f8c3a7e7d6376d3d4b0a026407183',1,'glm::saturate(const vec< 4, T, Q > &x)']]], ['saturation',['saturation',['../a00312.html#ga01a97152b44e1550edcac60bd849e884',1,'glm::saturation(T const s)'],['../a00312.html#ga2156cea600e90148ece5bc96fd6db43a',1,'glm::saturation(T const s, vec< 3, T, Q > const &color)'],['../a00312.html#gaba0eacee0736dae860e9371cc1ae4785',1,'glm::saturation(T const s, vec< 4, T, Q > const &color)']]], ['scalar_5fcommon_2ehpp',['scalar_common.hpp',['../a00144.html',1,'']]], ['scalar_5fconstants_2ehpp',['scalar_constants.hpp',['../a00145.html',1,'']]], ['scalar_5fint_5fsized_2ehpp',['scalar_int_sized.hpp',['../a00146.html',1,'']]], ['scalar_5finteger_2ehpp',['scalar_integer.hpp',['../a00147.html',1,'']]], ['scalar_5fmultiplication_2ehpp',['scalar_multiplication.hpp',['../a00148.html',1,'']]], ['scalar_5fuint_5fsized_2ehpp',['scalar_uint_sized.hpp',['../a00151.html',1,'']]], ['scalar_5fulp_2ehpp',['scalar_ulp.hpp',['../a00152.html',1,'']]], ['scale',['scale',['../a00247.html#ga05051adbee603fb3c5095d8cf5cc229b',1,'glm::scale(mat< 4, 4, T, Q > const &m, vec< 3, T, Q > const &v)'],['../a00341.html#gadb47d2ad2bd984b213e8ff7d9cd8154e',1,'glm::scale(mat< 3, 3, T, Q > const &m, vec< 2, T, Q > const &v)'],['../a00362.html#gafbeefee8fec884d566e4ada0049174d7',1,'glm::scale(vec< 3, T, Q > const &v)']]], 
['scalebias',['scaleBias',['../a00363.html#gabf249498b236e62c983d90d30d63c99c',1,'glm::scaleBias(T scale, T bias)'],['../a00363.html#gae2bdd91a76759fecfbaef97e3020aa8e',1,'glm::scaleBias(mat< 4, 4, T, Q > const &m, T scale, T bias)']]], ['sec',['sec',['../a00301.html#gae4bcbebee670c5ea155f0777b3acbd84',1,'glm']]], ['sech',['sech',['../a00301.html#ga9a5cfd1e7170104a7b33863b1b75e5ae',1,'glm']]], ['shearx',['shearX',['../a00341.html#ga2a118ece5db1e2022112b954846012af',1,'glm']]], ['shearx2d',['shearX2D',['../a00363.html#gabf714b8a358181572b32a45555f71948',1,'glm']]], ['shearx3d',['shearX3D',['../a00363.html#ga73e867c6cd4d700fe2054437e56106c4',1,'glm']]], ['sheary',['shearY',['../a00341.html#ga717f1833369c1ac4a40e4ac015af885e',1,'glm']]], ['sheary2d',['shearY2D',['../a00363.html#gac7998d0763d9181550c77e8af09a182c',1,'glm']]], ['sheary3d',['shearY3D',['../a00363.html#gade5bb65ffcb513973db1a1314fb5cfac',1,'glm']]], ['shearz3d',['shearZ3D',['../a00363.html#ga6591e0a3a9d2c9c0b6577bb4dace0255',1,'glm']]], ['shortmix',['shortMix',['../a00352.html#gadc576cc957adc2a568cdcbc3799175bc',1,'glm']]], ['sign',['sign',['../a00241.html#ga1e2e5cfff800056540e32f6c9b604b28',1,'glm::sign(vec< L, T, Q > const &x)'],['../a00333.html#ga04ef803a24f3d4f8c67dbccb33b0fce0',1,'glm::sign(vec< L, T, Q > const &x, vec< L, T, Q > const &base)']]], ['simplex',['simplex',['../a00297.html#ga8122468c69015ff397349a7dcc638b27',1,'glm']]], ['sin',['sin',['../a00373.html#ga29747fd108cb7292ae5a284f69691a69',1,'glm']]], ['sineeasein',['sineEaseIn',['../a00318.html#gafb338ac6f6b2bcafee50e3dca5201dbf',1,'glm']]], ['sineeaseinout',['sineEaseInOut',['../a00318.html#gaa46e3d5fbf7a15caa28eff9ef192d7c7',1,'glm']]], ['sineeaseout',['sineEaseOut',['../a00318.html#gab3e454f883afc1606ef91363881bf5a3',1,'glm']]], ['sinh',['sinh',['../a00373.html#gac7c39ff21809e281552b4dbe46f4a39d',1,'glm']]], ['sint',['sint',['../a00330.html#gada7e83fdfe943aba4f1d5bf80cb66f40',1,'glm']]], 
['size1',['size1',['../a00359.html#gaeb877ac8f9a3703961736c1c5072cf68',1,'glm']]], ['size1_5ft',['size1_t',['../a00359.html#gaaf6accc57f5aa50447ba7310ce3f0d6f',1,'glm']]], ['size2',['size2',['../a00359.html#ga1bfe8c4975ff282bce41be2bacd524fe',1,'glm']]], ['size2_5ft',['size2_t',['../a00359.html#ga5976c25657d4e2b5f73f39364c3845d6',1,'glm']]], ['size3',['size3',['../a00359.html#gae1c72956d0359b0db332c6c8774d3b04',1,'glm']]], ['size3_5ft',['size3_t',['../a00359.html#gaf2654983c60d641fd3808e65a8dfad8d',1,'glm']]], ['size4',['size4',['../a00359.html#ga3a19dde617beaf8ce3cfc2ac5064e9aa',1,'glm']]], ['size4_5ft',['size4_t',['../a00359.html#gaa423efcea63675a2df26990dbcb58656',1,'glm']]], ['slerp',['slerp',['../a00248.html#gae7fc3c945be366b9942b842f55da428a',1,'glm::slerp(qua< T, Q > const &x, qua< T, Q > const &y, T a)'],['../a00356.html#ga8b11b18ce824174ea1a5a69ea14e2cee',1,'glm::slerp(vec< 3, T, Q > const &x, vec< 3, T, Q > const &y, T const &a)']]], ['smoothstep',['smoothstep',['../a00241.html#ga562edf7eca082cc5b7a0aaf180436daf',1,'glm']]], ['sphericalrand',['sphericalRand',['../a00300.html#ga22f90fcaccdf001c516ca90f6428e138',1,'glm']]], ['spline_2ehpp',['spline.hpp',['../a00154.html',1,'']]], ['sqrt',['sqrt',['../a00242.html#gaa83e5f1648b7ccdf33b87c07c76cb77c',1,'glm::sqrt(vec< L, T, Q > const &v)'],['../a00256.html#ga64b7b255ed7bcba616fe6b44470b022e',1,'glm::sqrt(qua< T, Q > const &q)'],['../a00330.html#ga7ce36693a75879ccd9bb10167cfa722d',1,'glm::sqrt(int x)'],['../a00330.html#ga1975d318978d6dacf78b6444fa5ed7bc',1,'glm::sqrt(uint x)']]], ['squad',['squad',['../a00352.html#ga0b9bf3459e132ad8a18fe970669e3e35',1,'glm']]], ['std_5fbased_5ftype_2ehpp',['std_based_type.hpp',['../a00155.html',1,'']]], ['step',['step',['../a00241.html#ga015a1261ff23e12650211aa872863cce',1,'glm::step(genType edge, genType x)'],['../a00241.html#ga8f9a911a48ef244b51654eaefc81c551',1,'glm::step(T edge, vec< L, T, Q > const 
&x)'],['../a00241.html#gaf4a5fc81619c7d3e8b22f53d4a098c7f',1,'glm::step(vec< L, T, Q > const &edge, vec< L, T, Q > const &x)']]], ['string_5fcast_2ehpp',['string_cast.hpp',['../a00156.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_11.html ================================================
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_11.js ================================================ var searchData= [ ['tan',['tan',['../a00373.html#ga293a34cfb9f0115cc606b4a97c84f11f',1,'glm']]], ['tanh',['tanh',['../a00373.html#gaa1bccbfdcbe40ed2ffcddc2aa8bfd0f1',1,'glm']]], ['texture_2ehpp',['texture.hpp',['../a00157.html',1,'']]], ['third',['third',['../a00290.html#ga3077c6311010a214b69ddc8214ec13b5',1,'glm']]], ['three_5fover_5ftwo_5fpi',['three_over_two_pi',['../a00290.html#gae94950df74b0ce382b1fc1d978ef7394',1,'glm']]], ['to_5fstring',['to_string',['../a00360.html#ga8f0dced1fd45e67e2d77e80ab93c7af5',1,'glm']]], ['tomat3',['toMat3',['../a00352.html#gaab0afabb894b28a983fb8ec610409d56',1,'glm']]], ['tomat4',['toMat4',['../a00352.html#gadfa2c77094e8cc9adad321d938855ffb',1,'glm']]], ['toquat',['toQuat',['../a00352.html#ga798de5d186499c9a9231cd92c8afaef1',1,'glm::toQuat(mat< 3, 3, T, Q > const &x)'],['../a00352.html#ga5eb36f51e1638e710451eba194dbc011',1,'glm::toQuat(mat< 4, 4, T, Q > const &x)']]], ['transform_2ehpp',['transform.hpp',['../a00158.html',1,'']]], ['transform2_2ehpp',['transform2.hpp',['../a00159.html',1,'']]], ['translate',['translate',['../a00247.html#ga1a4ecc4ad82652b8fb14dcb087879284',1,'glm::translate(mat< 4, 4, T, Q > const &m, vec< 3, T, Q > const &v)'],['../a00341.html#gaf4573ae47c80938aa9053ef6a33755ab',1,'glm::translate(mat< 3, 3, T, Q > const &m, vec< 2, T, Q > const &v)'],['../a00362.html#ga309a30e652e58c396e2c3d4db3ee7658',1,'glm::translate(vec< 3, T, Q > const &v)']]], ['transpose',['transpose',['../a00371.html#gae679d841da8ce9dbcc6c2d454f15bc35',1,'glm']]], ['trianglenormal',['triangleNormal',['../a00344.html#gaff1cb5496925dfa7962df457772a7f35',1,'glm']]], ['trigonometric_2ehpp',['trigonometric.hpp',['../a00160.html',1,'']]], ['trunc',['trunc',['../a00241.html#gaf9375e3e06173271d49e6ffa3a334259',1,'glm']]], 
['tweakedinfiniteperspective',['tweakedInfinitePerspective',['../a00243.html#gaaeacc04a2a6f4b18c5899d37e7bb3ef9',1,'glm::tweakedInfinitePerspective(T fovy, T aspect, T near)'],['../a00243.html#gaf5b3c85ff6737030a1d2214474ffa7a8',1,'glm::tweakedInfinitePerspective(T fovy, T aspect, T near, T ep)']]], ['two_5fover_5fpi',['two_over_pi',['../a00290.html#ga74eadc8a211253079683219a3ea0462a',1,'glm']]], ['two_5fover_5froot_5fpi',['two_over_root_pi',['../a00290.html#ga5827301817640843cf02026a8d493894',1,'glm']]], ['two_5fpi',['two_pi',['../a00290.html#gaa5276a4617566abcfe49286f40e3a256',1,'glm']]], ['two_5fthirds',['two_thirds',['../a00290.html#ga9b4d2f4322edcf63a6737b92a29dd1f5',1,'glm']]], ['type_5fmat2x2_2ehpp',['type_mat2x2.hpp',['../a00165.html',1,'']]], ['type_5fmat2x3_2ehpp',['type_mat2x3.hpp',['../a00166.html',1,'']]], ['type_5fmat2x4_2ehpp',['type_mat2x4.hpp',['../a00167.html',1,'']]], ['type_5fmat3x2_2ehpp',['type_mat3x2.hpp',['../a00168.html',1,'']]], ['type_5fmat3x3_2ehpp',['type_mat3x3.hpp',['../a00169.html',1,'']]], ['type_5fmat3x4_2ehpp',['type_mat3x4.hpp',['../a00170.html',1,'']]], ['type_5fmat4x2_2ehpp',['type_mat4x2.hpp',['../a00171.html',1,'']]], ['type_5fmat4x3_2ehpp',['type_mat4x3.hpp',['../a00172.html',1,'']]], ['type_5fmat4x4_2ehpp',['type_mat4x4.hpp',['../a00173.html',1,'']]], ['type_5fprecision_2ehpp',['type_precision.hpp',['../a00174.html',1,'']]], ['type_5fptr_2ehpp',['type_ptr.hpp',['../a00175.html',1,'']]], ['type_5fquat_2ehpp',['type_quat.hpp',['../a00176.html',1,'']]], ['type_5ftrait_2ehpp',['type_trait.hpp',['../a00177.html',1,'']]], ['type_5fvec1_2ehpp',['type_vec1.hpp',['../a00178.html',1,'']]], ['type_5fvec2_2ehpp',['type_vec2.hpp',['../a00179.html',1,'']]], ['type_5fvec3_2ehpp',['type_vec3.hpp',['../a00180.html',1,'']]], ['type_5fvec4_2ehpp',['type_vec4.hpp',['../a00181.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_12.html 
================================================
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_12.js ================================================ var searchData= [ ['u16',['u16',['../a00304.html#gaa2d7acc0adb536fab71fe261232a40ff',1,'glm']]], ['u16vec1',['u16vec1',['../a00304.html#ga08c05ba8ffb19f5d14ab584e1e9e9ee5',1,'glm::u16vec1()'],['../a00346.html#ga52cc069a92e126c3a8dcde93424d2ef0',1,'glm::gtx::u16vec1()']]], ['u16vec2',['u16vec2',['../a00304.html#ga2a78447eb9d66a114b193f4a25899c16',1,'glm']]], ['u16vec3',['u16vec3',['../a00304.html#ga1c522ca821c27b862fe51cf4024b064b',1,'glm']]], ['u16vec4',['u16vec4',['../a00304.html#ga529496d75775fb656a07993ea9af2450',1,'glm']]], ['u32',['u32',['../a00304.html#ga8165913e068444f7842302d40ba897b9',1,'glm']]], ['u32vec1',['u32vec1',['../a00304.html#gae627372cfd5f20dd87db490387b71195',1,'glm::u32vec1()'],['../a00346.html#ga9bbc1e14aea65cba5e2dcfef6a67d9f3',1,'glm::gtx::u32vec1()']]], ['u32vec2',['u32vec2',['../a00304.html#ga2a266e46ee218d0c680f12b35c500cc0',1,'glm']]], ['u32vec3',['u32vec3',['../a00304.html#gae267358ff2a41d156d97f5762630235a',1,'glm']]], ['u32vec4',['u32vec4',['../a00304.html#ga31cef34e4cd04840c54741ff2f7005f0',1,'glm']]], ['u64',['u64',['../a00304.html#gaf3f312156984c365e9f65620354da70b',1,'glm']]], ['u64vec1',['u64vec1',['../a00304.html#gaf09f3ca4b671a4a4f84505eb4cc865fd',1,'glm::u64vec1()'],['../a00346.html#ga818de170e2584ab037130f2881925974',1,'glm::gtx::u64vec1()']]], ['u64vec2',['u64vec2',['../a00304.html#gaef3824ed4fe435a019c5b9dddf53fec5',1,'glm']]], ['u64vec3',['u64vec3',['../a00304.html#ga489b89ba93d4f7b3934df78debc52276',1,'glm']]], ['u64vec4',['u64vec4',['../a00304.html#ga3945dd6515d4498cb603e65ff867ab03',1,'glm']]], ['u8',['u8',['../a00304.html#gaecc7082561fc9028b844b6cf3d305d36',1,'glm']]], 
['u8vec1',['u8vec1',['../a00304.html#ga29b349e037f0b24320b4548a143daee2',1,'glm::u8vec1()'],['../a00346.html#ga5853fe457f4c8a6bc09343d0e9833980',1,'glm::gtx::u8vec1()']]], ['u8vec2',['u8vec2',['../a00304.html#ga518b8d948a6b4ddb72f84d5c3b7b6611',1,'glm']]], ['u8vec3',['u8vec3',['../a00304.html#ga7c5706f6bbe5282e5598acf7e7b377e2',1,'glm']]], ['u8vec4',['u8vec4',['../a00304.html#ga20779a61de2fd526a17f12fe53ec46b1',1,'glm']]], ['uaddcarry',['uaddCarry',['../a00370.html#gaedcec48743632dff6786bcc492074b1b',1,'glm']]], ['uint16',['uint16',['../a00263.html#ga05f6b0ae8f6a6e135b0e290c25fe0e4e',1,'glm']]], ['uint16_5ft',['uint16_t',['../a00304.html#ga91f91f411080c37730856ff5887f5bcf',1,'glm']]], ['uint32',['uint32',['../a00263.html#ga1134b580f8da4de94ca6b1de4d37975e',1,'glm']]], ['uint32_5ft',['uint32_t',['../a00304.html#ga2171d9dc1fefb1c82e2817f45b622eac',1,'glm']]], ['uint64',['uint64',['../a00263.html#gab630f76c26b50298187f7889104d4b9c',1,'glm']]], ['uint64_5ft',['uint64_t',['../a00304.html#ga3999d3e7ff22025c16ddb601e14dfdee',1,'glm']]], ['uint8',['uint8',['../a00263.html#gadde6aaee8457bee49c2a92621fe22b79',1,'glm']]], ['uint8_5ft',['uint8_t',['../a00304.html#ga28d97808322d3c92186e4a0c067d7e8e',1,'glm']]], ['uintbitstofloat',['uintBitsToFloat',['../a00241.html#gab2bae0d15dcdca6093f88f76b3975d97',1,'glm::uintBitsToFloat(uint const &v)'],['../a00241.html#ga97f46b5f7b42fe44482e13356eb394ae',1,'glm::uintBitsToFloat(vec< L, uint, Q > const &v)']]], ['ulp_2ehpp',['ulp.hpp',['../a00182.html',1,'']]], ['umat2',['umat2',['../a00294.html#ga4cae85566f900debf930c41944b64691',1,'glm']]], ['umat2x2',['umat2x2',['../a00294.html#gabf8acdd33ce8951051edbca5200898aa',1,'glm']]], ['umat2x3',['umat2x3',['../a00294.html#ga1870da7578d5022b973a83155d386ab3',1,'glm']]], ['umat2x4',['umat2x4',['../a00294.html#ga57936a3998e992370e59a223e0ee4fd4',1,'glm']]], ['umat3',['umat3',['../a00294.html#ga5085e3ff02abbac5e537eb7b89ab63b6',1,'glm']]], 
['umat3x2',['umat3x2',['../a00294.html#ga9cd7fa637a4a6788337f45231fad9e1a',1,'glm']]], ['umat3x3',['umat3x3',['../a00294.html#ga1f2cfcf3357db0cdf31fcb15e3c6bafb',1,'glm']]], ['umat3x4',['umat3x4',['../a00294.html#gae7c78ff3fc4309605ab0fa186c8d48ba',1,'glm']]], ['umat4',['umat4',['../a00294.html#ga38bc7bb6494e344185df596deeb4544c',1,'glm']]], ['umat4x2',['umat4x2',['../a00294.html#ga70fa2d05896aa83cbc8c07672a429b53',1,'glm']]], ['umat4x3',['umat4x3',['../a00294.html#ga87581417945411f75cb31dd6ca1dba98',1,'glm']]], ['umat4x4',['umat4x4',['../a00294.html#gaf72e6d399c42985db6872c50f53d7eb8',1,'glm']]], ['umulextended',['umulExtended',['../a00370.html#ga732e2fb56db57ea541c7e5c92b7121be',1,'glm']]], ['unpackdouble2x32',['unpackDouble2x32',['../a00372.html#ga5f4296dc5f12f0aa67ac05b8bb322483',1,'glm']]], ['unpackf2x11_5f1x10',['unpackF2x11_1x10',['../a00298.html#ga2b1fd1e854705b1345e98409e0a25e50',1,'glm']]], ['unpackf3x9_5fe1x5',['unpackF3x9_E1x5',['../a00298.html#gab9e60ebe3ad3eeced6a9ec6eb876d74e',1,'glm']]], ['unpackhalf',['unpackHalf',['../a00298.html#ga30d6b2f1806315bcd6047131f547d33b',1,'glm']]], ['unpackhalf1x16',['unpackHalf1x16',['../a00298.html#gac37dedaba24b00adb4ec6e8f92c19dbf',1,'glm']]], ['unpackhalf2x16',['unpackHalf2x16',['../a00372.html#gaf59b52e6b28da9335322c4ae19b5d745',1,'glm']]], ['unpackhalf4x16',['unpackHalf4x16',['../a00298.html#ga57dfc41b2eb20b0ac00efae7d9c49dcd',1,'glm']]], ['unpacki3x10_5f1x2',['unpackI3x10_1x2',['../a00298.html#ga9a05330e5490be0908d3b117d82aff56',1,'glm']]], ['unpackint2x16',['unpackInt2x16',['../a00298.html#gaccde055882918a3175de82f4ca8b7d8e',1,'glm']]], ['unpackint2x32',['unpackInt2x32',['../a00298.html#gab297c0bfd38433524791eb0584d8f08d',1,'glm']]], ['unpackint2x8',['unpackInt2x8',['../a00298.html#gab0c59f1e259fca9e68adb2207a6b665e',1,'glm']]], ['unpackint4x16',['unpackInt4x16',['../a00298.html#ga52c154a9b232b62c22517a700cc0c78c',1,'glm']]], 
['unpackint4x8',['unpackInt4x8',['../a00298.html#ga1cd8d2038cdd33a860801aa155a26221',1,'glm']]], ['unpackrgbm',['unpackRGBM',['../a00298.html#ga5c1ec97894b05ea21a05aea4f0204a02',1,'glm']]], ['unpacksnorm',['unpackSnorm',['../a00298.html#ga6d49b31e5c3f9df8e1f99ab62b999482',1,'glm']]], ['unpacksnorm1x16',['unpackSnorm1x16',['../a00298.html#ga96dd15002370627a443c835ab03a766c',1,'glm']]], ['unpacksnorm1x8',['unpackSnorm1x8',['../a00298.html#ga4851ff86678aa1c7ace9d67846894285',1,'glm']]], ['unpacksnorm2x16',['unpackSnorm2x16',['../a00372.html#gacd8f8971a3fe28418be0d0fa1f786b38',1,'glm']]], ['unpacksnorm2x8',['unpackSnorm2x8',['../a00298.html#ga8b128e89be449fc71336968a66bf6e1a',1,'glm']]], ['unpacksnorm3x10_5f1x2',['unpackSnorm3x10_1x2',['../a00298.html#ga7a4fbf79be9740e3c57737bc2af05e5b',1,'glm']]], ['unpacksnorm4x16',['unpackSnorm4x16',['../a00298.html#gaaddf9c353528fe896106f7181219c7f4',1,'glm']]], ['unpacksnorm4x8',['unpackSnorm4x8',['../a00372.html#ga2db488646d48b7c43d3218954523fe82',1,'glm']]], ['unpacku3x10_5f1x2',['unpackU3x10_1x2',['../a00298.html#ga48df3042a7d079767f5891a1bfd8a60a',1,'glm']]], ['unpackuint2x16',['unpackUint2x16',['../a00298.html#ga035bbbeab7ec2b28c0529757395b645b',1,'glm']]], ['unpackuint2x32',['unpackUint2x32',['../a00298.html#gaf942ff11b65e83eb5f77e68329ebc6ab',1,'glm']]], ['unpackuint2x8',['unpackUint2x8',['../a00298.html#gaa7600a6c71784b637a410869d2a5adcd',1,'glm']]], ['unpackuint4x16',['unpackUint4x16',['../a00298.html#gab173834ef14cfc23a96a959f3ff4b8dc',1,'glm']]], ['unpackuint4x8',['unpackUint4x8',['../a00298.html#gaf6dc0e4341810a641c7ed08f10e335d1',1,'glm']]], ['unpackunorm',['unpackUnorm',['../a00298.html#ga3e6ac9178b59f0b1b2f7599f2183eb7f',1,'glm']]], ['unpackunorm1x16',['unpackUnorm1x16',['../a00298.html#ga83d34160a5cb7bcb5339823210fc7501',1,'glm']]], ['unpackunorm1x5_5f1x6_5f1x5',['unpackUnorm1x5_1x6_1x5',['../a00298.html#gab3bc08ecfc0f3339be93fb2b3b56d88a',1,'glm']]], 
['unpackunorm1x8',['unpackUnorm1x8',['../a00298.html#ga1319207e30874fb4931a9ee913983ee1',1,'glm']]], ['unpackunorm2x16',['unpackUnorm2x16',['../a00372.html#ga1f66188e5d65afeb9ffba1ad971e4007',1,'glm']]], ['unpackunorm2x3_5f1x2',['unpackUnorm2x3_1x2',['../a00298.html#ga6abd5a9014df3b5ce4059008d2491260',1,'glm']]], ['unpackunorm2x4',['unpackUnorm2x4',['../a00298.html#ga2e50476132fe5f27f08e273d9c70d85b',1,'glm']]], ['unpackunorm2x8',['unpackUnorm2x8',['../a00298.html#ga637cbe3913dd95c6e7b4c99c61bd611f',1,'glm']]], ['unpackunorm3x10_5f1x2',['unpackUnorm3x10_1x2',['../a00298.html#ga5156d3060355fe332865da2c7f78815f',1,'glm']]], ['unpackunorm3x5_5f1x1',['unpackUnorm3x5_1x1',['../a00298.html#ga5ff95ff5bc16f396432ab67243dbae4d',1,'glm']]], ['unpackunorm4x16',['unpackUnorm4x16',['../a00298.html#ga2ae149c5d2473ac1e5f347bb654a242d',1,'glm']]], ['unpackunorm4x4',['unpackUnorm4x4',['../a00298.html#gac58ee89d0e224bb6df5e8bbb18843a2d',1,'glm']]], ['unpackunorm4x8',['unpackUnorm4x8',['../a00372.html#ga7f903259150b67e9466f5f8edffcd197',1,'glm']]], ['unproject',['unProject',['../a00245.html#ga36641e5d60f994e01c3d8f56b10263d2',1,'glm']]], ['unprojectno',['unProjectNO',['../a00245.html#gae089ba9fc150ff69c252a20e508857b5',1,'glm']]], ['unprojectzo',['unProjectZO',['../a00245.html#gade5136413ce530f8e606124d570fba32',1,'glm']]], ['uround',['uround',['../a00292.html#ga6715b9d573972a0f7763d30d45bcaec4',1,'glm']]], ['usubborrow',['usubBorrow',['../a00370.html#gae3316ba1229ad9b9f09480833321b053',1,'glm']]], ['uvec1',['uvec1',['../a00276.html#gac3bdd96183d23876c58a1424585fefe7',1,'glm']]], ['uvec2',['uvec2',['../a00281.html#ga2f6d9ec3ae14813ade37d6aee3715fdb',1,'glm']]], ['uvec3',['uvec3',['../a00281.html#ga3d3e55874babd4bf93baa7bbc83ae418',1,'glm']]], ['uvec4',['uvec4',['../a00281.html#gaa57e96bb337867329d5f43bcc27c1095',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_13.html 
================================================
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_13.js ================================================ var searchData= [ ['vector_20relational_20functions',['Vector Relational Functions',['../a00374.html',1,'']]], ['vector_20types',['Vector types',['../a00281.html',1,'']]], ['vector_20types_20with_20precision_20qualifiers',['Vector types with precision qualifiers',['../a00282.html',1,'']]], ['value_5fptr',['value_ptr',['../a00305.html#ga1c64669e1ba1160ad9386e43dc57569a',1,'glm']]], ['vec1',['vec1',['../a00270.html#gadfc071d934d8dae7955a1d530a3cf656',1,'glm']]], ['vec1_2ehpp',['vec1.hpp',['../a00183.html',1,'']]], ['vec2',['vec2',['../a00281.html#gabe65c061834f61b4f7cb6037b19006a4',1,'glm']]], ['vec2_2ehpp',['vec2.hpp',['../a00184.html',1,'']]], ['vec3',['vec3',['../a00281.html#ga9c3019b13faf179e4ad3626ea66df334',1,'glm']]], ['vec3_2ehpp',['vec3.hpp',['../a00185.html',1,'']]], ['vec4',['vec4',['../a00281.html#gac215a35481a6597d1bf622a382e9d6e2',1,'glm']]], ['vec4_2ehpp',['vec4.hpp',['../a00186.html',1,'']]], ['vec_5fswizzle_2ehpp',['vec_swizzle.hpp',['../a00187.html',1,'']]], ['vector_5fangle_2ehpp',['vector_angle.hpp',['../a00188.html',1,'']]], ['vector_5fbool1_2ehpp',['vector_bool1.hpp',['../a00189.html',1,'']]], ['vector_5fbool1_5fprecision_2ehpp',['vector_bool1_precision.hpp',['../a00190.html',1,'']]], ['vector_5fbool2_2ehpp',['vector_bool2.hpp',['../a00191.html',1,'']]], ['vector_5fbool2_5fprecision_2ehpp',['vector_bool2_precision.hpp',['../a00192.html',1,'']]], ['vector_5fbool3_2ehpp',['vector_bool3.hpp',['../a00193.html',1,'']]], ['vector_5fbool3_5fprecision_2ehpp',['vector_bool3_precision.hpp',['../a00194.html',1,'']]], ['vector_5fbool4_2ehpp',['vector_bool4.hpp',['../a00195.html',1,'']]], ['vector_5fbool4_5fprecision_2ehpp',['vector_bool4_precision.hpp',['../a00196.html',1,'']]], ['vector_5fcommon_2ehpp',['vector_common.hpp',['../a00197.html',1,'']]], 
['vector_5fdouble1_2ehpp',['vector_double1.hpp',['../a00198.html',1,'']]], ['vector_5fdouble1_5fprecision_2ehpp',['vector_double1_precision.hpp',['../a00199.html',1,'']]], ['vector_5fdouble2_2ehpp',['vector_double2.hpp',['../a00200.html',1,'']]], ['vector_5fdouble2_5fprecision_2ehpp',['vector_double2_precision.hpp',['../a00201.html',1,'']]], ['vector_5fdouble3_2ehpp',['vector_double3.hpp',['../a00202.html',1,'']]], ['vector_5fdouble3_5fprecision_2ehpp',['vector_double3_precision.hpp',['../a00203.html',1,'']]], ['vector_5fdouble4_2ehpp',['vector_double4.hpp',['../a00204.html',1,'']]], ['vector_5fdouble4_5fprecision_2ehpp',['vector_double4_precision.hpp',['../a00205.html',1,'']]], ['vector_5ffloat1_2ehpp',['vector_float1.hpp',['../a00206.html',1,'']]], ['vector_5ffloat1_5fprecision_2ehpp',['vector_float1_precision.hpp',['../a00207.html',1,'']]], ['vector_5ffloat2_2ehpp',['vector_float2.hpp',['../a00208.html',1,'']]], ['vector_5ffloat2_5fprecision_2ehpp',['vector_float2_precision.hpp',['../a00209.html',1,'']]], ['vector_5ffloat3_2ehpp',['vector_float3.hpp',['../a00210.html',1,'']]], ['vector_5ffloat3_5fprecision_2ehpp',['vector_float3_precision.hpp',['../a00211.html',1,'']]], ['vector_5ffloat4_2ehpp',['vector_float4.hpp',['../a00212.html',1,'']]], ['vector_5ffloat4_5fprecision_2ehpp',['vector_float4_precision.hpp',['../a00213.html',1,'']]], ['vector_5fint1_2ehpp',['vector_int1.hpp',['../a00214.html',1,'']]], ['vector_5fint1_5fprecision_2ehpp',['vector_int1_precision.hpp',['../a00215.html',1,'']]], ['vector_5fint2_2ehpp',['vector_int2.hpp',['../a00216.html',1,'']]], ['vector_5fint2_5fprecision_2ehpp',['vector_int2_precision.hpp',['../a00217.html',1,'']]], ['vector_5fint3_2ehpp',['vector_int3.hpp',['../a00218.html',1,'']]], ['vector_5fint3_5fprecision_2ehpp',['vector_int3_precision.hpp',['../a00219.html',1,'']]], ['vector_5fint4_2ehpp',['vector_int4.hpp',['../a00220.html',1,'']]], ['vector_5fint4_5fprecision_2ehpp',['vector_int4_precision.hpp',['../a00221.html',1,'']]], 
['vector_5finteger_2ehpp',['vector_integer.hpp',['../a00222.html',1,'']]], ['vector_5fquery_2ehpp',['vector_query.hpp',['../a00223.html',1,'']]], ['vector_5frelational_2ehpp',['vector_relational.hpp',['../a00225.html',1,'']]], ['vector_5fuint1_2ehpp',['vector_uint1.hpp',['../a00226.html',1,'']]], ['vector_5fuint1_5fprecision_2ehpp',['vector_uint1_precision.hpp',['../a00227.html',1,'']]], ['vector_5fuint2_2ehpp',['vector_uint2.hpp',['../a00228.html',1,'']]], ['vector_5fuint2_5fprecision_2ehpp',['vector_uint2_precision.hpp',['../a00229.html',1,'']]], ['vector_5fuint3_2ehpp',['vector_uint3.hpp',['../a00230.html',1,'']]], ['vector_5fuint3_5fprecision_2ehpp',['vector_uint3_precision.hpp',['../a00231.html',1,'']]], ['vector_5fuint4_2ehpp',['vector_uint4.hpp',['../a00232.html',1,'']]], ['vector_5fuint4_5fprecision_2ehpp',['vector_uint4_precision.hpp',['../a00233.html',1,'']]], ['vector_5fulp_2ehpp',['vector_ulp.hpp',['../a00234.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_14.html ================================================
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_14.js ================================================ var searchData= [ ['word',['word',['../a00354.html#ga16e9fea0ef1e6c4ef472d3d1731c49a5',1,'glm']]], ['wrap_2ehpp',['wrap.hpp',['../a00235.html',1,'']]], ['wrapangle',['wrapAngle',['../a00325.html#ga069527c6dbd64f53435b8ebc4878b473',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_15.html ================================================
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_15.js ================================================ var searchData= [ ['yaw',['yaw',['../a00299.html#ga8da38cdfdc452dafa660c2f46506bad5',1,'glm']]], ['yawpitchroll',['yawPitchRoll',['../a00319.html#gae6aa26ccb020d281b449619e419a609e',1,'glm']]], ['ycocg2rgb',['YCoCg2rgb',['../a00313.html#ga163596b804c7241810b2534a99eb1343',1,'glm']]], ['ycocgr2rgb',['YCoCgR2rgb',['../a00313.html#gaf8d30574c8576838097d8e20c295384a',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_16.html ================================================
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_16.js ================================================ var searchData= [ ['zero',['zero',['../a00290.html#ga788f5a421fc0f40a1296ebc094cbaa8a',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_2.html ================================================
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_2.js ================================================ var searchData= [ ['catmullrom',['catmullRom',['../a00358.html#ga8119c04f8210fd0d292757565cd6918d',1,'glm']]], ['ceil',['ceil',['../a00241.html#gafb9d2a645a23aca12d4d6de0104b7657',1,'glm']]], ['ceilmultiple',['ceilMultiple',['../a00302.html#ga1d89ac88582aaf4d5dfa5feb4a376fd4',1,'glm::ceilMultiple(genType v, genType Multiple)'],['../a00302.html#gab77fdcc13f8e92d2e0b1b7d7aeab8e9d',1,'glm::ceilMultiple(vec< L, T, Q > const &v, vec< L, T, Q > const &Multiple)']]], ['ceilpoweroftwo',['ceilPowerOfTwo',['../a00302.html#ga5c3ef36ae32aa4271f1544f92bd578b6',1,'glm::ceilPowerOfTwo(genIUType v)'],['../a00302.html#gab53d4a97c0d3e297be5f693cdfdfe5d2',1,'glm::ceilPowerOfTwo(vec< L, T, Q > const &v)']]], ['circulareasein',['circularEaseIn',['../a00318.html#ga34508d4b204a321ec26d6086aa047997',1,'glm']]], ['circulareaseinout',['circularEaseInOut',['../a00318.html#ga0c1027637a5b02d4bb3612aa12599d69',1,'glm']]], ['circulareaseout',['circularEaseOut',['../a00318.html#ga26fefde9ced9b72745fe21f1a3fe8da7',1,'glm']]], ['circularrand',['circularRand',['../a00300.html#ga9dd05c36025088fae25b97c869e88517',1,'glm']]], ['clamp',['clamp',['../a00241.html#ga7cd77683da6361e297c56443fc70806d',1,'glm::clamp(genType x, genType minVal, genType maxVal)'],['../a00241.html#gafba2e0674deb5953878d89483cd6323d',1,'glm::clamp(vec< L, T, Q > const &x, T minVal, T maxVal)'],['../a00241.html#gaa0f2f12e9108b09e22a3f0b2008a0b5d',1,'glm::clamp(vec< L, T, Q > const &x, vec< L, T, Q > const &minVal, vec< L, T, Q > const &maxVal)'],['../a00369.html#ga6c0cc6bd1d67ea1008d2592e998bad33',1,'glm::clamp(genType const &Texcoord)']]], ['closebounded',['closeBounded',['../a00314.html#gab7d89c14c48ad01f720fb5daf8813161',1,'glm']]], ['closest_5fpoint_2ehpp',['closest_point.hpp',['../a00010.html',1,'']]], 
['closestpointonline',['closestPointOnLine',['../a00310.html#ga36529c278ef716986151d58d151d697d',1,'glm::closestPointOnLine(vec< 3, T, Q > const &point, vec< 3, T, Q > const &a, vec< 3, T, Q > const &b)'],['../a00310.html#ga55bcbcc5fc06cb7ff7bc7a6e0e155eb0',1,'glm::closestPointOnLine(vec< 2, T, Q > const &point, vec< 2, T, Q > const &a, vec< 2, T, Q > const &b)']]], ['colmajor2',['colMajor2',['../a00338.html#gaaff72f11286e59a4a88ed21a347f284c',1,'glm::colMajor2(vec< 2, T, Q > const &v1, vec< 2, T, Q > const &v2)'],['../a00338.html#gafc25fd44196c92b1397b127aec1281ab',1,'glm::colMajor2(mat< 2, 2, T, Q > const &m)']]], ['colmajor3',['colMajor3',['../a00338.html#ga1e25b72b085087740c92f5c70f3b051f',1,'glm::colMajor3(vec< 3, T, Q > const &v1, vec< 3, T, Q > const &v2, vec< 3, T, Q > const &v3)'],['../a00338.html#ga86bd0656e787bb7f217607572590af27',1,'glm::colMajor3(mat< 3, 3, T, Q > const &m)']]], ['colmajor4',['colMajor4',['../a00338.html#gaf4aa6c7e17bfce41a6c13bf6469fab05',1,'glm::colMajor4(vec< 4, T, Q > const &v1, vec< 4, T, Q > const &v2, vec< 4, T, Q > const &v3, vec< 4, T, Q > const &v4)'],['../a00338.html#gaf3f9511c366c20ba2e4a64c9e4cec2b3',1,'glm::colMajor4(mat< 4, 4, T, Q > const &m)']]], ['color_5fencoding_2ehpp',['color_encoding.hpp',['../a00011.html',1,'']]], ['color_5fspace_5fycocg_2ehpp',['color_space_YCoCg.hpp',['../a00014.html',1,'']]], ['column',['column',['../a00293.html#ga96022eb0d3fae39d89fc7a954e59b374',1,'glm::column(genType const &m, length_t index)'],['../a00293.html#ga9e757377523890e8b80c5843dbe4dd15',1,'glm::column(genType const &m, length_t index, typename genType::col_type const &x)']]], ['common_2ehpp',['common.hpp',['../a00015.html',1,'']]], ['compadd',['compAdd',['../a00316.html#gaf71833350e15e74d31cbf8a3e7f27051',1,'glm']]], ['compatibility_2ehpp',['compatibility.hpp',['../a00017.html',1,'']]], ['compmax',['compMax',['../a00316.html#gabfa4bb19298c8c73d4217ba759c496b6',1,'glm']]], 
['compmin',['compMin',['../a00316.html#gab5d0832b5c7bb01b8d7395973bfb1425',1,'glm']]], ['compmul',['compMul',['../a00316.html#gae8ab88024197202c9479d33bdc5a8a5d',1,'glm']]], ['compnormalize',['compNormalize',['../a00316.html#ga8f2b81ada8515875e58cb1667b6b9908',1,'glm']]], ['component_5fwise_2ehpp',['component_wise.hpp',['../a00018.html',1,'']]], ['compscale',['compScale',['../a00316.html#ga80abc2980d65d675f435d178c36880eb',1,'glm']]], ['conjugate',['conjugate',['../a00248.html#ga10d7bda73201788ac2ab28cd8d0d409b',1,'glm']]], ['constants_2ehpp',['constants.hpp',['../a00021.html',1,'']]], ['convertd65xyztod50xyz',['convertD65XYZToD50XYZ',['../a00311.html#gad12f4f65022b2c80e33fcba2ced0dc48',1,'glm']]], ['convertd65xyztolinearsrgb',['convertD65XYZToLinearSRGB',['../a00311.html#ga5265386fc3ac29e4c580d37ed470859c',1,'glm']]], ['convertlinearsrgbtod50xyz',['convertLinearSRGBToD50XYZ',['../a00311.html#ga1522ba180e3d83d554a734056da031f9',1,'glm']]], ['convertlinearsrgbtod65xyz',['convertLinearSRGBToD65XYZ',['../a00311.html#gaf9e130d9d4ccf51cc99317de7449f369',1,'glm']]], ['convertlineartosrgb',['convertLinearToSRGB',['../a00289.html#ga42239e7b3da900f7ef37cec7e2476579',1,'glm::convertLinearToSRGB(vec< L, T, Q > const &ColorLinear)'],['../a00289.html#gaace0a21167d13d26116c283009af57f6',1,'glm::convertLinearToSRGB(vec< L, T, Q > const &ColorLinear, T Gamma)']]], ['convertsrgbtolinear',['convertSRGBToLinear',['../a00289.html#ga16c798b7a226b2c3079dedc55083d187',1,'glm::convertSRGBToLinear(vec< L, T, Q > const &ColorSRGB)'],['../a00289.html#gad1b91f27a9726c9cb403f9fee6e2e200',1,'glm::convertSRGBToLinear(vec< L, T, Q > const &ColorSRGB, T Gamma)']]], ['core_20features',['Core features',['../a00280.html',1,'']]], ['common_20functions',['Common functions',['../a00241.html',1,'']]], ['cos',['cos',['../a00373.html#ga6a41efc740e3b3c937447d3a6284130e',1,'glm']]], ['cosh',['cosh',['../a00373.html#ga4e260e372742c5f517aca196cf1e62b3',1,'glm']]], 
['cot',['cot',['../a00301.html#ga3a7b517a95bbd3ad74da3aea87a66314',1,'glm']]], ['coth',['coth',['../a00301.html#ga6b8b770eb7198e4dea59d52e6db81442',1,'glm']]], ['cross',['cross',['../a00254.html#ga755beaa929c75751dee646cccba37e4c',1,'glm::cross(qua< T, Q > const &q1, qua< T, Q > const &q2)'],['../a00279.html#gaeeec0794212fe84fc9d261de067c9587',1,'glm::cross(vec< 3, T, Q > const &x, vec< 3, T, Q > const &y)'],['../a00322.html#gac36e72b934ea6a9dd313772d7e78fa93',1,'glm::cross(vec< 2, T, Q > const &v, vec< 2, T, Q > const &u)'],['../a00352.html#ga2f32f970411c44cdd38bb98960198385',1,'glm::cross(qua< T, Q > const &q, vec< 3, T, Q > const &v)'],['../a00352.html#ga9f5f77255756e5668dfee7f0d07ed021',1,'glm::cross(vec< 3, T, Q > const &v, qua< T, Q > const &q)']]], ['csc',['csc',['../a00301.html#ga59dd0005b6474eea48af743b4f14ebbb',1,'glm']]], ['csch',['csch',['../a00301.html#ga6d95843ff3ca6472ab399ba171d290a0',1,'glm']]], ['cubic',['cubic',['../a00358.html#ga6b867eb52e2fc933d2e0bf26aabc9a70',1,'glm']]], ['cubiceasein',['cubicEaseIn',['../a00318.html#gaff52f746102b94864d105563ba8895ae',1,'glm']]], ['cubiceaseinout',['cubicEaseInOut',['../a00318.html#ga55134072b42d75452189321d4a2ad91c',1,'glm']]], ['cubiceaseout',['cubicEaseOut',['../a00318.html#ga40d746385d8bcc5973f5bc6a2340ca91',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_3.html ================================================
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_3.js ================================================ var searchData= [ ['ddualquat',['ddualquat',['../a00317.html#ga3d71f98d84ba59dfe4e369fde4714cd6',1,'glm']]], ['decompose',['decompose',['../a00335.html#gac0e342656ba09a9bc97c57182ba73124',1,'glm']]], ['degrees',['degrees',['../a00373.html#ga8faec9e303538065911ba8b3caf7326b',1,'glm']]], ['derivedeuleranglex',['derivedEulerAngleX',['../a00319.html#ga994b8186b3b80d91cf90bc403164692f',1,'glm']]], ['derivedeulerangley',['derivedEulerAngleY',['../a00319.html#ga0a4c56ecce7abcb69508ebe6313e9d10',1,'glm']]], ['derivedeuleranglez',['derivedEulerAngleZ',['../a00319.html#gae8b397348201c42667be983ba3f344df',1,'glm']]], ['determinant',['determinant',['../a00371.html#gad7928795124768e058f99dce270f5c8d',1,'glm']]], ['diagonal2x2',['diagonal2x2',['../a00339.html#ga58a32a2beeb2478dae2a721368cdd4ac',1,'glm']]], ['diagonal2x3',['diagonal2x3',['../a00339.html#gab69f900206a430e2875a5a073851e175',1,'glm']]], ['diagonal2x4',['diagonal2x4',['../a00339.html#ga30b4dbfed60a919d66acc8a63bcdc549',1,'glm']]], ['diagonal3x2',['diagonal3x2',['../a00339.html#ga832c805d5130d28ad76236958d15b47d',1,'glm']]], ['diagonal3x3',['diagonal3x3',['../a00339.html#ga5487ff9cdbc8e04d594adef1bcb16ee0',1,'glm']]], ['diagonal3x4',['diagonal3x4',['../a00339.html#gad7551139cff0c4208d27f0ad3437833e',1,'glm']]], ['diagonal4x2',['diagonal4x2',['../a00339.html#gacb8969e6543ba775c6638161a37ac330',1,'glm']]], ['diagonal4x3',['diagonal4x3',['../a00339.html#gae235def5049d6740f0028433f5e13f90',1,'glm']]], ['diagonal4x4',['diagonal4x4',['../a00339.html#ga0b4cd8dea436791b072356231ee8578f',1,'glm']]], ['diskrand',['diskRand',['../a00300.html#gaa0b18071f3f97dbf8bcf6f53c6fe5f73',1,'glm']]], ['distance',['distance',['../a00279.html#gaa68de6c53e20dfb2dac2d20197562e3f',1,'glm']]], 
['distance2',['distance2',['../a00343.html#ga85660f1b79f66c09c7b5a6f80e68c89f',1,'glm']]], ['dmat2',['dmat2',['../a00283.html#ga21dbd1f987775d7cc7607c139531c7e6',1,'glm']]], ['dmat2x2',['dmat2x2',['../a00283.html#ga66b6a9af787e468a46dfe24189e87f9b',1,'glm']]], ['dmat2x3',['dmat2x3',['../a00283.html#ga92cd388753d48e20de69ea2dbedf826a',1,'glm']]], ['dmat2x4',['dmat2x4',['../a00283.html#gaef2198807e937072803ae0ae45e1965e',1,'glm']]], ['dmat3',['dmat3',['../a00283.html#ga6f40aa56265b4b0ccad41b86802efe33',1,'glm']]], ['dmat3x2',['dmat3x2',['../a00283.html#ga001e3e0638fbf8719788fc64c5b8cf39',1,'glm']]], ['dmat3x3',['dmat3x3',['../a00283.html#ga970cb3306be25a5ca5db5a9456831228',1,'glm']]], ['dmat3x4',['dmat3x4',['../a00283.html#ga0412a634d183587e6188e9b11869f8f4',1,'glm']]], ['dmat4',['dmat4',['../a00283.html#ga0f34486bb7fec8e5a5b3830b6a6cbeca',1,'glm']]], ['dmat4x2',['dmat4x2',['../a00283.html#ga9bc0b3ab8b6ba2cb6782e179ad7ad156',1,'glm']]], ['dmat4x3',['dmat4x3',['../a00283.html#gacd18864049f8c83799babe7e596ca05b',1,'glm']]], ['dmat4x4',['dmat4x4',['../a00283.html#gad5a6484b983b74f9d801cab8bc4e6a10',1,'glm']]], ['dot',['dot',['../a00254.html#ga84865a56acb8fbd7bc4f5c0b928e3cfc',1,'glm::dot(qua< T, Q > const &x, qua< T, Q > const &y)'],['../a00279.html#gaad6c5d9d39bdc0bf43baf1b22e147a0a',1,'glm::dot(vec< L, T, Q > const &x, vec< L, T, Q > const &y)']]], ['double1',['double1',['../a00315.html#ga20b861a9b6e2a300323671c57a02525b',1,'glm']]], ['double1x1',['double1x1',['../a00315.html#ga45f16a4dd0db1f199afaed9fd12fe9a8',1,'glm']]], ['double2',['double2',['../a00315.html#ga31b729b04facccda73f07ed26958b3c2',1,'glm']]], ['double2x2',['double2x2',['../a00315.html#gae57d0201096834d25f2b91b319e7cdbd',1,'glm']]], ['double2x3',['double2x3',['../a00315.html#ga3655bc324008553ca61f39952d0b2d08',1,'glm']]], ['double2x4',['double2x4',['../a00315.html#gacd33061fc64a7b2dcfd7322c49d9557a',1,'glm']]], ['double3',['double3',['../a00315.html#ga3d8b9028a1053a44a98902cd1c389472',1,'glm']]], 
['double3x2',['double3x2',['../a00315.html#ga5ec08fc39c9d783dfcc488be240fe975',1,'glm']]], ['double3x3',['double3x3',['../a00315.html#ga4bad5bb20c6ddaecfe4006c93841d180',1,'glm']]], ['double3x4',['double3x4',['../a00315.html#ga2ef022e453d663d70aec414b2a80f756',1,'glm']]], ['double4',['double4',['../a00315.html#gaf92f58af24f35617518aeb3d4f63fda6',1,'glm']]], ['double4x2',['double4x2',['../a00315.html#gabca29ccceea53669618b751aae0ba83d',1,'glm']]], ['double4x3',['double4x3',['../a00315.html#gafad66a02ccd360c86d6ab9ff9cfbc19c',1,'glm']]], ['double4x4',['double4x4',['../a00315.html#gaab541bed2e788e4537852a2492860806',1,'glm']]], ['dquat',['dquat',['../a00249.html#ga1181459aa5d640a3ea43861b118f3f0b',1,'glm']]], ['dual_5fquat_5fidentity',['dual_quat_identity',['../a00317.html#ga0b35c0e30df8a875dbaa751e0bd800e0',1,'glm']]], ['dual_5fquaternion_2ehpp',['dual_quaternion.hpp',['../a00022.html',1,'']]], ['dualquat',['dualquat',['../a00317.html#gae93abee0c979902fbec6a7bee0f6fae1',1,'glm']]], ['dualquat_5fcast',['dualquat_cast',['../a00317.html#gac4064ff813759740201765350eac4236',1,'glm::dualquat_cast(mat< 2, 4, T, Q > const &x)'],['../a00317.html#ga91025ebdca0f4ea54da08497b00e8c84',1,'glm::dualquat_cast(mat< 3, 4, T, Q > const &x)']]], ['dvec1',['dvec1',['../a00268.html#ga6221af17edc2d4477a4583d2cd53e569',1,'glm']]], ['dvec2',['dvec2',['../a00281.html#ga8b09c71aaac7da7867ae58377fe219a8',1,'glm']]], ['dvec3',['dvec3',['../a00281.html#ga5b83ae3d0fdec519c038e4d2cf967cf0',1,'glm']]], ['dvec4',['dvec4',['../a00281.html#ga57debab5d98ce618f7b2a97fe26eb3ac',1,'glm']]], ['dword',['dword',['../a00354.html#ga86e46fff9f80ae33893d8d697f2ca98a',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_4.html ================================================
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_4.js ================================================ var searchData= [ ['exponential_20functions',['Exponential functions',['../a00242.html',1,'']]], ['e',['e',['../a00290.html#ga4b7956eb6e2fbedfc7cf2e46e85c5139',1,'glm']]], ['easing_2ehpp',['easing.hpp',['../a00023.html',1,'']]], ['elasticeasein',['elasticEaseIn',['../a00318.html#ga230918eccee4e113d10ec5b8cdc58695',1,'glm']]], ['elasticeaseinout',['elasticEaseInOut',['../a00318.html#ga2db4ac8959559b11b4029e54812908d6',1,'glm']]], ['elasticeaseout',['elasticEaseOut',['../a00318.html#gace9c9d1bdf88bf2ab1e7cdefa54c7365',1,'glm']]], ['epsilon',['epsilon',['../a00259.html#ga2a1e57fc5592b69cfae84174cbfc9429',1,'glm']]], ['epsilon_2ehpp',['epsilon.hpp',['../a00024.html',1,'']]], ['epsilonequal',['epsilonEqual',['../a00291.html#ga91b417866cafadd076004778217a1844',1,'glm::epsilonEqual(vec< L, T, Q > const &x, vec< L, T, Q > const &y, T const &epsilon)'],['../a00291.html#gaa7f227999ca09e7ca994e8b35aba47bb',1,'glm::epsilonEqual(genType const &x, genType const &y, genType const &epsilon)']]], ['epsilonnotequal',['epsilonNotEqual',['../a00291.html#gaf840d33b9a5261ec78dcd5125743b025',1,'glm::epsilonNotEqual(vec< L, T, Q > const &x, vec< L, T, Q > const &y, T const &epsilon)'],['../a00291.html#ga50a92103fb0cbd796908e1bf20c79aaf',1,'glm::epsilonNotEqual(genType const &x, genType const &y, genType const &epsilon)']]], ['equal',['equal',['../a00246.html#ga27e90dcb7941c9b70e295dc3f6f6369f',1,'glm::equal(mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y)'],['../a00246.html#gaf5d687d70d11708b68c36c6db5777040',1,'glm::equal(mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, T epsilon)'],['../a00246.html#gafa6a053e81179fa4292b35651c83c3fb',1,'glm::equal(mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, vec< C, T, Q > const 
&epsilon)'],['../a00246.html#gab3a93f19e72e9141f50527c9de21d0c0',1,'glm::equal(mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, int ULPs)'],['../a00246.html#ga5305af376173f1902719fa309bbae671',1,'glm::equal(mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, vec< C, int, Q > const &ULPs)'],['../a00255.html#gad7827af0549504ff1cd6a359786acc7a',1,'glm::equal(qua< T, Q > const &x, qua< T, Q > const &y)'],['../a00255.html#gaa001eecb91106463169a8e5ef1577b39',1,'glm::equal(qua< T, Q > const &x, qua< T, Q > const &y, T epsilon)'],['../a00275.html#ga2ac7651a2fa7354f2da610dbd50d28e2',1,'glm::equal(vec< L, T, Q > const &x, vec< L, T, Q > const &y, T epsilon)'],['../a00275.html#ga37d261a65f69babc82cec2ae1af7145f',1,'glm::equal(vec< L, T, Q > const &x, vec< L, T, Q > const &y, vec< L, T, Q > const &epsilon)'],['../a00275.html#ga2b46cb50911e97b32f4cd743c2c69771',1,'glm::equal(vec< L, T, Q > const &x, vec< L, T, Q > const &y, int ULPs)'],['../a00275.html#ga7da2b8605be7f245b39cb6fbf6d9d581',1,'glm::equal(vec< L, T, Q > const &x, vec< L, T, Q > const &y, vec< L, int, Q > const &ULPs)'],['../a00374.html#gab4c5cfdaa70834421397a85aa83ad946',1,'glm::equal(vec< L, T, Q > const &x, vec< L, T, Q > const &y)']]], ['euclidean',['euclidean',['../a00350.html#ga1821d5b3324201e60a9e2823d0b5d0c8',1,'glm']]], ['euler',['euler',['../a00290.html#gad8fe2e6f90bce9d829e9723b649fbd42',1,'glm']]], ['euler_5fangles_2ehpp',['euler_angles.hpp',['../a00025.html',1,'']]], ['eulerangles',['eulerAngles',['../a00299.html#gaf4dd967dead22dd932fc7460ceecb03f',1,'glm']]], ['euleranglex',['eulerAngleX',['../a00319.html#gafba6282e4ed3ff8b5c75331abfba3489',1,'glm']]], ['euleranglexy',['eulerAngleXY',['../a00319.html#ga64036577ee17a2d24be0dbc05881d4e2',1,'glm']]], ['euleranglexyx',['eulerAngleXYX',['../a00319.html#ga29bd0787a28a6648159c0d6e69706066',1,'glm']]], ['euleranglexyz',['eulerAngleXYZ',['../a00319.html#ga1975e0f0e9bed7f716dc9946da2ab645',1,'glm']]], 
['euleranglexz',['eulerAngleXZ',['../a00319.html#gaa39bd323c65c2fc0a1508be33a237ce9',1,'glm']]], ['euleranglexzx',['eulerAngleXZX',['../a00319.html#ga60171c79a17aec85d7891ae1d1533ec9',1,'glm']]], ['euleranglexzy',['eulerAngleXZY',['../a00319.html#ga996dce12a60d8a674ba6737a535fa910',1,'glm']]], ['eulerangley',['eulerAngleY',['../a00319.html#gab84bf4746805fd69b8ecbb230e3974c5',1,'glm']]], ['eulerangleyx',['eulerAngleYX',['../a00319.html#ga4f57e6dd25c3cffbbd4daa6ef3f4486d',1,'glm']]], ['eulerangleyxy',['eulerAngleYXY',['../a00319.html#ga750fba9894117f87bcc529d7349d11de',1,'glm']]], ['eulerangleyxz',['eulerAngleYXZ',['../a00319.html#gab8ba99a9814f6d9edf417b6c6d5b0c10',1,'glm']]], ['eulerangleyz',['eulerAngleYZ',['../a00319.html#ga220379e10ac8cca55e275f0c9018fed9',1,'glm']]], ['eulerangleyzx',['eulerAngleYZX',['../a00319.html#ga08bef16357b8f9b3051b3dcaec4b7848',1,'glm']]], ['eulerangleyzy',['eulerAngleYZY',['../a00319.html#ga5e5e40abc27630749b42b3327c76d6e4',1,'glm']]], ['euleranglez',['eulerAngleZ',['../a00319.html#ga5b3935248bb6c3ec6b0d9297d406e251',1,'glm']]], ['euleranglezx',['eulerAngleZX',['../a00319.html#ga483903115cd4059228961046a28d69b5',1,'glm']]], ['euleranglezxy',['eulerAngleZXY',['../a00319.html#gab4505c54d2dd654df4569fd1f04c43aa',1,'glm']]], ['euleranglezxz',['eulerAngleZXZ',['../a00319.html#ga178f966c52b01e4d65e31ebd007e3247',1,'glm']]], ['euleranglezy',['eulerAngleZY',['../a00319.html#ga400b2bd5984999efab663f3a68e1d020',1,'glm']]], ['euleranglezyx',['eulerAngleZYX',['../a00319.html#ga2e61f1e39069c47530acab9167852dd6',1,'glm']]], ['euleranglezyz',['eulerAngleZYZ',['../a00319.html#gacd795f1dbecaf74974f9c76bbcca6830',1,'glm']]], ['exp',['exp',['../a00242.html#ga071566cadc7505455e611f2a0353f4d4',1,'glm::exp(vec< L, T, Q > const &v)'],['../a00256.html#gaab2d37ef7265819f1d2939b9dc2c52ac',1,'glm::exp(qua< T, Q > const &q)']]], ['exp2',['exp2',['../a00242.html#gaff17ace6b579a03bf223ed4d1ed2cd16',1,'glm']]], 
['exponential_2ehpp',['exponential.hpp',['../a00026.html',1,'']]], ['exponentialeasein',['exponentialEaseIn',['../a00318.html#ga7f24ee9219ab4c84dc8de24be84c1e3c',1,'glm']]], ['exponentialeaseinout',['exponentialEaseInOut',['../a00318.html#ga232fb6dc093c5ce94bee105ff2947501',1,'glm']]], ['exponentialeaseout',['exponentialEaseOut',['../a00318.html#ga517f2bcfd15bc2c25c466ae50808efc3',1,'glm']]], ['ext_2ehpp',['ext.hpp',['../a00027.html',1,'']]], ['extend',['extend',['../a00320.html#ga8140caae613b0f847ab0d7175dc03a37',1,'glm']]], ['extend_2ehpp',['extend.hpp',['../a00028.html',1,'']]], ['extended_5fmin_5fmax_2ehpp',['extended_min_max.hpp',['../a00029.html',1,'']]], ['exterior_5fproduct_2ehpp',['exterior_product.hpp',['../a00030.html',1,'']]], ['extracteuleranglexyx',['extractEulerAngleXYX',['../a00319.html#gaf1077a72171d0f3b08f022ab5ff88af7',1,'glm']]], ['extracteuleranglexyz',['extractEulerAngleXYZ',['../a00319.html#gacea701562f778c1da4d3a0a1cf091000',1,'glm']]], ['extracteuleranglexzx',['extractEulerAngleXZX',['../a00319.html#gacf0bc6c031f25fa3ee0055b62c8260d0',1,'glm']]], ['extracteuleranglexzy',['extractEulerAngleXZY',['../a00319.html#gabe5a65d8eb1cd873c8de121cce1a15ed',1,'glm']]], ['extracteulerangleyxy',['extractEulerAngleYXY',['../a00319.html#gaab8868556361a190db94374e9983ed39',1,'glm']]], ['extracteulerangleyxz',['extractEulerAngleYXZ',['../a00319.html#gaf0937518e63037335a0e8358b6f053c5',1,'glm']]], ['extracteulerangleyzx',['extractEulerAngleYZX',['../a00319.html#ga9049b78466796c0de2971756e25b93d3',1,'glm']]], ['extracteulerangleyzy',['extractEulerAngleYZY',['../a00319.html#ga11dad972c109e4bf8694c915017c44a6',1,'glm']]], ['extracteuleranglezxy',['extractEulerAngleZXY',['../a00319.html#ga81fbbca2ba0c778b9662d5355b4e2363',1,'glm']]], ['extracteuleranglezxz',['extractEulerAngleZXZ',['../a00319.html#ga59359fef9bad92afaca55e193f91e702',1,'glm']]], ['extracteuleranglezyx',['extractEulerAngleZYX',['../a00319.html#ga2d6c11a4abfa60c565483cee2d3f7665',1,'glm']]], 
['extracteuleranglezyz',['extractEulerAngleZYZ',['../a00319.html#gafdfa880a64b565223550c2d3938b1aeb',1,'glm']]], ['extractmatrixrotation',['extractMatrixRotation',['../a00337.html#gabbc1c7385a145f04b5c54228965df145',1,'glm']]], ['extractrealcomponent',['extractRealComponent',['../a00352.html#ga321953c1b2e7befe6f5dcfddbfc6b76b',1,'glm']]], ['experimental_20extensions',['Experimental extensions',['../a00287.html',1,'']]], ['matrix_5ftransform_2ehpp',['matrix_transform.hpp',['../a00108.html',1,'']]], ['scalar_5frelational_2ehpp',['scalar_relational.hpp',['../a00149.html',1,'']]], ['vector_5frelational_2ehpp',['vector_relational.hpp',['../a00224.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_5.html ================================================
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_5.js ================================================ var searchData= [ ['floating_2dpoint_20pack_20and_20unpack_20functions',['Floating-Point Pack and Unpack Functions',['../a00372.html',1,'']]], ['f32',['f32',['../a00304.html#gabe6a542dd6c1d5ffd847f1b9b4c9c9b7',1,'glm']]], ['f32mat1',['f32mat1',['../a00346.html#ga145ad477a2a3e152855511c3b52469a6',1,'glm::gtx']]], ['f32mat1x1',['f32mat1x1',['../a00346.html#gac88c6a4dbfc380aa26e3adbbade36348',1,'glm::gtx']]], ['f32mat2',['f32mat2',['../a00304.html#gab12383ed6ac7595ed6fde4d266c58425',1,'glm']]], ['f32mat2x2',['f32mat2x2',['../a00304.html#ga04100c76f7d55a0dd0983ccf05142bff',1,'glm']]], ['f32mat2x3',['f32mat2x3',['../a00304.html#gab256cdab5eb582e426d749ae77b5b566',1,'glm']]], ['f32mat2x4',['f32mat2x4',['../a00304.html#gaf512b74c4400b68f9fdf9388b3d6aac8',1,'glm']]], ['f32mat3',['f32mat3',['../a00304.html#ga856f3905ee7cc2e4890a8a1d56c150be',1,'glm']]], ['f32mat3x2',['f32mat3x2',['../a00304.html#ga1320a08e14fdff3821241eefab6947e9',1,'glm']]], ['f32mat3x3',['f32mat3x3',['../a00304.html#ga65261fa8a21045c8646ddff114a56174',1,'glm']]], ['f32mat3x4',['f32mat3x4',['../a00304.html#gab90ade28222f8b861d5ceaf81a3a7f5d',1,'glm']]], ['f32mat4',['f32mat4',['../a00304.html#ga99d1b85ff99956b33da7e9992aad129a',1,'glm']]], ['f32mat4x2',['f32mat4x2',['../a00304.html#ga3b32ca1e57a4ef91babbc3d35a34ea20',1,'glm']]], ['f32mat4x3',['f32mat4x3',['../a00304.html#ga239b96198771b7add8eea7e6b59840c0',1,'glm']]], ['f32mat4x4',['f32mat4x4',['../a00304.html#gaee4da0e9fbd8cfa2f89cb80889719dc3',1,'glm']]], ['f32quat',['f32quat',['../a00304.html#ga38e674196ba411d642be40c47bf33939',1,'glm']]], ['f32vec1',['f32vec1',['../a00304.html#ga701f32ab5b3fb06996b41f5c0d643805',1,'glm::f32vec1()'],['../a00346.html#ga07f8d7348eb7ae059a84c118fdfeb943',1,'glm::gtx::f32vec1()']]], 
['f32vec2',['f32vec2',['../a00304.html#ga5d6c70e080409a76a257dc55bd8ea2c8',1,'glm']]], ['f32vec3',['f32vec3',['../a00304.html#gaea5c4518e175162e306d2c2b5ef5ac79',1,'glm']]], ['f32vec4',['f32vec4',['../a00304.html#ga31c6ca0e074a44007f49a9a3720b18c8',1,'glm']]], ['f64',['f64',['../a00304.html#ga1d794d240091678f602e8de225b8d8c9',1,'glm']]], ['f64mat1',['f64mat1',['../a00346.html#ga59bfa589419b5265d01314fcecd33435',1,'glm::gtx']]], ['f64mat1x1',['f64mat1x1',['../a00346.html#ga448eeb08d0b7d8c43a8b292c981955fd',1,'glm::gtx']]], ['f64mat2',['f64mat2',['../a00304.html#gad9771450a54785d13080cdde0fe20c1d',1,'glm']]], ['f64mat2x2',['f64mat2x2',['../a00304.html#ga9ec7c4c79e303c053e30729a95fb2c37',1,'glm']]], ['f64mat2x3',['f64mat2x3',['../a00304.html#gae3ab5719fc4c1e966631dbbcba8d412a',1,'glm']]], ['f64mat2x4',['f64mat2x4',['../a00304.html#gac87278e0c702ba8afff76316d4eeb769',1,'glm']]], ['f64mat3',['f64mat3',['../a00304.html#ga9b69181efbf8f37ae934f135137b29c0',1,'glm']]], ['f64mat3x2',['f64mat3x2',['../a00304.html#ga2473d8bf3f4abf967c4d0e18175be6f7',1,'glm']]], ['f64mat3x3',['f64mat3x3',['../a00304.html#ga916c1aed91cf91f7b41399ebe7c6e185',1,'glm']]], ['f64mat3x4',['f64mat3x4',['../a00304.html#gaab239fa9e35b65a67cbaa6ac082f3675',1,'glm']]], ['f64mat4',['f64mat4',['../a00304.html#ga0ecd3f4952536e5ef12702b44d2626fc',1,'glm']]], ['f64mat4x2',['f64mat4x2',['../a00304.html#gab7daf79d6bc06a68bea1c6f5e11b5512',1,'glm']]], ['f64mat4x3',['f64mat4x3',['../a00304.html#ga3e2e66ffbe341a80bc005ba2b9552110',1,'glm']]], ['f64mat4x4',['f64mat4x4',['../a00304.html#gae52e2b7077a9ff928a06ab5ce600b81e',1,'glm']]], ['f64quat',['f64quat',['../a00304.html#ga2b114a2f2af0fe1dfeb569c767822940',1,'glm']]], ['f64vec1',['f64vec1',['../a00304.html#gade502df1ce14f837fae7f60a03ddb9b0',1,'glm::f64vec1()'],['../a00346.html#gae5987a61b8c03d5c432a9e62f0b3efe1',1,'glm::gtx::f64vec1()']]], ['f64vec2',['f64vec2',['../a00304.html#gadc4e1594f9555d919131ee02b17822a2',1,'glm']]], 
['f64vec3',['f64vec3',['../a00304.html#gaa7a1ddca75c5f629173bf4772db7a635',1,'glm']]], ['f64vec4',['f64vec4',['../a00304.html#ga66e92e57260bdb910609b9a56bf83e97',1,'glm']]], ['faceforward',['faceforward',['../a00279.html#ga7aed0a36c738169402404a3a5d54e43b',1,'glm']]], ['factorial',['factorial',['../a00330.html#ga8cbd3120905f398ec321b5d1836e08fb',1,'glm']]], ['fast_5fexponential_2ehpp',['fast_exponential.hpp',['../a00031.html',1,'']]], ['fast_5fsquare_5froot_2ehpp',['fast_square_root.hpp',['../a00032.html',1,'']]], ['fast_5ftrigonometry_2ehpp',['fast_trigonometry.hpp',['../a00033.html',1,'']]], ['fastacos',['fastAcos',['../a00325.html#ga9721d63356e5d94fdc4b393a426ab26b',1,'glm']]], ['fastasin',['fastAsin',['../a00325.html#ga562cb62c51fbfe7fac7db0bce706b81f',1,'glm']]], ['fastatan',['fastAtan',['../a00325.html#ga8d197c6ef564f5e5d59af3b3f8adcc2c',1,'glm::fastAtan(T y, T x)'],['../a00325.html#gae25de86a968490ff56856fa425ec9d30',1,'glm::fastAtan(T angle)']]], ['fastcos',['fastCos',['../a00325.html#gab34c8b45c23c0165a64dcecfcc3b302a',1,'glm']]], ['fastdistance',['fastDistance',['../a00324.html#gaac333418d0c4e0cc6d3d219ed606c238',1,'glm::fastDistance(genType x, genType y)'],['../a00324.html#ga42d3e771fa7cb3c60d828e315829df19',1,'glm::fastDistance(vec< L, T, Q > const &x, vec< L, T, Q > const &y)']]], ['fastexp',['fastExp',['../a00323.html#gaa3180ac8f96ab37ab96e0cacaf608e10',1,'glm::fastExp(T x)'],['../a00323.html#ga3ba6153aec6bd74628f8b00530aa8d58',1,'glm::fastExp(vec< L, T, Q > const &x)']]], ['fastexp2',['fastExp2',['../a00323.html#ga0af50585955eb14c60bb286297fabab2',1,'glm::fastExp2(T x)'],['../a00323.html#gacaaed8b67d20d244b7de217e7816c1b6',1,'glm::fastExp2(vec< L, T, Q > const &x)']]], ['fastinversesqrt',['fastInverseSqrt',['../a00324.html#ga7f081b14d9c7035c8714eba5f7f75a8f',1,'glm::fastInverseSqrt(genType x)'],['../a00324.html#gadcd7be12b1e5ee182141359d4c45dd24',1,'glm::fastInverseSqrt(vec< L, T, Q > const &x)']]], 
['fastlength',['fastLength',['../a00324.html#gafe697d6287719538346bbdf8b1367c59',1,'glm::fastLength(genType x)'],['../a00324.html#ga90f66be92ef61e705c005e7b3209edb8',1,'glm::fastLength(vec< L, T, Q > const &x)']]], ['fastlog',['fastLog',['../a00323.html#gae1bdc97b7f96a600e29c753f1cd4388a',1,'glm::fastLog(T x)'],['../a00323.html#ga937256993a7219e73f186bb348fe6be8',1,'glm::fastLog(vec< L, T, Q > const &x)']]], ['fastlog2',['fastLog2',['../a00323.html#ga6e98118685f6dc9e05fbb13dd5e5234e',1,'glm::fastLog2(T x)'],['../a00323.html#ga7562043539194ccc24649f8475bc5584',1,'glm::fastLog2(vec< L, T, Q > const &x)']]], ['fastmix',['fastMix',['../a00352.html#ga264e10708d58dd0ff53b7902a2bd2561',1,'glm']]], ['fastnormalize',['fastNormalize',['../a00324.html#ga3b02c1d6e0c754144e2f1e110bf9f16c',1,'glm']]], ['fastnormalizedot',['fastNormalizeDot',['../a00345.html#ga2746fb9b5bd22b06b2f7c8babba5de9e',1,'glm']]], ['fastpow',['fastPow',['../a00323.html#ga5340e98a11fcbbd936ba6e983a154d50',1,'glm::fastPow(genType x, genType y)'],['../a00323.html#ga15325a8ed2d1c4ed2412c4b3b3927aa2',1,'glm::fastPow(vec< L, T, Q > const &x, vec< L, T, Q > const &y)'],['../a00323.html#ga7f2562db9c3e02ae76169c36b086c3f6',1,'glm::fastPow(genTypeT x, genTypeU y)'],['../a00323.html#ga1abe488c0829da5b9de70ac64aeaa7e5',1,'glm::fastPow(vec< L, T, Q > const &x)']]], ['fastsin',['fastSin',['../a00325.html#ga0aab3257bb3b628d10a1e0483e2c6915',1,'glm']]], ['fastsqrt',['fastSqrt',['../a00324.html#ga6c460e9414a50b2fc455c8f64c86cdc9',1,'glm::fastSqrt(genType x)'],['../a00324.html#gae83f0c03614f73eae5478c5b6274ee6d',1,'glm::fastSqrt(vec< L, T, Q > const &x)']]], ['fasttan',['fastTan',['../a00325.html#gaf29b9c1101a10007b4f79ee89df27ba2',1,'glm']]], ['fclamp',['fclamp',['../a00321.html#ga1e28539d3a46965ed9ef92ec7cb3b18a',1,'glm::fclamp(genType x, genType minVal, genType maxVal)'],['../a00321.html#ga60796d08903489ee185373593bc16b9d',1,'glm::fclamp(vec< L, T, Q > const &x, T minVal, T 
maxVal)'],['../a00321.html#ga5c15fa4709763c269c86c0b8b3aa2297',1,'glm::fclamp(vec< L, T, Q > const &x, vec< L, T, Q > const &minVal, vec< L, T, Q > const &maxVal)']]], ['fdualquat',['fdualquat',['../a00317.html#ga237c2b9b42c9a930e49de5840ae0f930',1,'glm']]], ['findlsb',['findLSB',['../a00370.html#gaf74c4d969fa34ab8acb9d390f5ca5274',1,'glm::findLSB(genIUType x)'],['../a00370.html#ga4454c0331d6369888c28ab677f4810c7',1,'glm::findLSB(vec< L, T, Q > const &v)']]], ['findmsb',['findMSB',['../a00370.html#ga7e4a794d766861c70bc961630f8ef621',1,'glm::findMSB(genIUType x)'],['../a00370.html#ga39ac4d52028bb6ab08db5ad6562c2872',1,'glm::findMSB(vec< L, T, Q > const &v)']]], ['findnsb',['findNSB',['../a00261.html#ga2777901e41ad6e1e9d0ad6cc855d1075',1,'glm::findNSB(genIUType x, int significantBitCount)'],['../a00274.html#gaff61eca266da315002a3db92ff0dd604',1,'glm::findNSB(vec< L, T, Q > const &Source, vec< L, int, Q > SignificantBitCount)']]], ['fliplr',['fliplr',['../a00336.html#gaf39f4e5f78eb29c1a90277d45b9b3feb',1,'glm']]], ['flipud',['flipud',['../a00336.html#ga85003371f0ba97380dd25e8905de1870',1,'glm']]], ['float1',['float1',['../a00315.html#gaf5208d01f6c6fbcb7bb55d610b9c0ead',1,'glm']]], ['float1x1',['float1x1',['../a00315.html#ga73720b8dc4620835b17f74d428f98c0c',1,'glm']]], ['float2',['float2',['../a00315.html#ga02d3c013982c183906c61d74aa3166ce',1,'glm']]], ['float2x2',['float2x2',['../a00315.html#ga33d43ecbb60a85a1366ff83f8a0ec85f',1,'glm']]], ['float2x3',['float2x3',['../a00315.html#ga939b0cff15cee3030f75c1b2e36f89fe',1,'glm']]], ['float2x4',['float2x4',['../a00315.html#gafec3cfd901ab334a92e0242b8f2269b4',1,'glm']]], ['float3',['float3',['../a00315.html#ga821ff110fc8533a053cbfcc93e078cc0',1,'glm']]], ['float32',['float32',['../a00304.html#gaacdc525d6f7bddb3ae95d5c311bd06a1',1,'glm']]], ['float32_5ft',['float32_t',['../a00304.html#gaa4947bc8b47c72fceea9bda730ecf603',1,'glm']]], ['float3x2',['float3x2',['../a00315.html#gaa6c69f04ba95f3faedf95dae874de576',1,'glm']]], 
['float3x3',['float3x3',['../a00315.html#ga6ceb5d38a58becdf420026e12a6562f3',1,'glm']]], ['float3x4',['float3x4',['../a00315.html#ga4d2679c321b793ca3784fe0315bb5332',1,'glm']]], ['float4',['float4',['../a00315.html#gae2da7345087db3815a25d8837a727ef1',1,'glm']]], ['float4x2',['float4x2',['../a00315.html#ga308b9af0c221145bcfe9bfc129d9098e',1,'glm']]], ['float4x3',['float4x3',['../a00315.html#gac0a51b4812038aa81d73ffcc37f741ac',1,'glm']]], ['float4x4',['float4x4',['../a00315.html#gad3051649b3715d828a4ab92cdae7c3bf',1,'glm']]], ['float64',['float64',['../a00304.html#ga232fad1b0d6dcc7c16aabde98b2e2a80',1,'glm']]], ['float64_5ft',['float64_t',['../a00304.html#ga728366fef72cd96f0a5fa6429f05469e',1,'glm']]], ['floatbitstoint',['floatBitsToInt',['../a00241.html#ga1425c1c3160ec51214b03a0469a3013d',1,'glm::floatBitsToInt(float const &v)'],['../a00241.html#ga99f7d62f78ac5ea3b49bae715c9488ed',1,'glm::floatBitsToInt(vec< L, float, Q > const &v)']]], ['floatbitstouint',['floatBitsToUint',['../a00241.html#ga70e0271c34af52f3100c7960e18c3f2b',1,'glm::floatBitsToUint(float const &v)'],['../a00241.html#ga49418ba4c8a60fbbb5d57b705f3e26db',1,'glm::floatBitsToUint(vec< L, float, Q > const &v)']]], ['floor',['floor',['../a00241.html#gaa9d0742639e85b29c7c5de11cfd6840d',1,'glm']]], ['floor_5flog2',['floor_log2',['../a00330.html#ga7011b4e1c1e1ed492149b028feacc00e',1,'glm']]], ['floormultiple',['floorMultiple',['../a00302.html#ga2ffa3cd5f2ea746ee1bf57c46da6315e',1,'glm::floorMultiple(genType v, genType Multiple)'],['../a00302.html#gacdd8901448f51f0b192380e422fae3e4',1,'glm::floorMultiple(vec< L, T, Q > const &v, vec< L, T, Q > const &Multiple)']]], ['floorpoweroftwo',['floorPowerOfTwo',['../a00302.html#gafe273a57935d04c9db677bf67f9a71f4',1,'glm::floorPowerOfTwo(genIUType v)'],['../a00302.html#gaf0d591a8fca8ddb9289cdeb44b989c2d',1,'glm::floorPowerOfTwo(vec< L, T, Q > const &v)']]], ['fma',['fma',['../a00241.html#gad0f444d4b81cc53c3b6edf5aa25078c2',1,'glm']]], 
['fmat2',['fmat2',['../a00304.html#ga4541dc2feb2a31d6ecb5a303f3dd3280',1,'glm']]], ['fmat2x2',['fmat2x2',['../a00304.html#ga3350c93c3275298f940a42875388e4b4',1,'glm']]], ['fmat2x3',['fmat2x3',['../a00304.html#ga55a2d2a8eb09b5633668257eb3cad453',1,'glm']]], ['fmat2x4',['fmat2x4',['../a00304.html#ga681381f19f11c9e5ee45cda2c56937ff',1,'glm']]], ['fmat3',['fmat3',['../a00304.html#ga253d453c20e037730023fea0215cb6f6',1,'glm']]], ['fmat3x2',['fmat3x2',['../a00304.html#ga6af54d70d9beb0a7ef992a879e86b04f',1,'glm']]], ['fmat3x3',['fmat3x3',['../a00304.html#gaa07c86650253672a19dbfb898f3265b8',1,'glm']]], ['fmat3x4',['fmat3x4',['../a00304.html#ga44e158af77a670ee1b58c03cda9e1619',1,'glm']]], ['fmat4',['fmat4',['../a00304.html#ga8cb400c0f4438f2640035d7b9824a0ca',1,'glm']]], ['fmat4x2',['fmat4x2',['../a00304.html#ga8c8aa45aafcc23238edb1d5aeb801774',1,'glm']]], ['fmat4x3',['fmat4x3',['../a00304.html#ga4295048a78bdf46b8a7de77ec665b497',1,'glm']]], ['fmat4x4',['fmat4x4',['../a00304.html#gad01cc6479bde1fd1870f13d3ed9530b3',1,'glm']]], ['fmax',['fmax',['../a00258.html#ga36920478565cf608e93064283ce06421',1,'glm::fmax(T a, T b)'],['../a00258.html#ga0007bba71ca451ac70e99d28dfbeaab9',1,'glm::fmax(T a, T b, T C)'],['../a00258.html#ga27e260b1ff4d04c3ad4b864d26cbaf08',1,'glm::fmax(T a, T b, T C, T D)'],['../a00267.html#gad66b6441f7200db16c9f341711733c56',1,'glm::fmax(vec< L, T, Q > const &a, T b)'],['../a00267.html#ga8df4be3f48d6717c40ea788fd30deebf',1,'glm::fmax(vec< L, T, Q > const &a, vec< L, T, Q > const &b)'],['../a00267.html#ga0f04ba924294dae4234ca93ede23229a',1,'glm::fmax(vec< L, T, Q > const &a, vec< L, T, Q > const &b, vec< L, T, Q > const &c)'],['../a00267.html#ga4ed3eb250ccbe17bfe8ded8a6b72d230',1,'glm::fmax(vec< L, T, Q > const &a, vec< L, T, Q > const &b, vec< L, T, Q > const &c, vec< L, T, Q > const &d)'],['../a00321.html#gae5792cb2b51190057e4aea027eb56f81',1,'glm::fmax(genType x, genType y)']]], 
['fmin',['fmin',['../a00258.html#ga7b2b438a765e2a62098c79eb212f28f0',1,'glm::fmin(T a, T b)'],['../a00258.html#ga1a95fe4cf5437e8133f1093fe9726a64',1,'glm::fmin(T a, T b, T c)'],['../a00258.html#ga3d6f9c6c16bfd6f38f2c4f8076e8b661',1,'glm::fmin(T a, T b, T c, T d)'],['../a00267.html#gae989203363cff9eab5093630df4fe071',1,'glm::fmin(vec< L, T, Q > const &x, T y)'],['../a00267.html#ga7c42e93cd778c9181d1cdeea4d3e43bd',1,'glm::fmin(vec< L, T, Q > const &x, vec< L, T, Q > const &y)'],['../a00267.html#ga7e62739055b49189d9355471f78fe000',1,'glm::fmin(vec< L, T, Q > const &a, vec< L, T, Q > const &b, vec< L, T, Q > const &c)'],['../a00267.html#ga4a543dd7d22ad1f3b8b839f808a9d93c',1,'glm::fmin(vec< L, T, Q > const &a, vec< L, T, Q > const &b, vec< L, T, Q > const &c, vec< L, T, Q > const &d)'],['../a00321.html#gaa3200559611ac5b9b9ae7283547916a7',1,'glm::fmin(genType x, genType y)']]], ['fmod',['fmod',['../a00314.html#gae5e80425df9833164ad469e83b475fb4',1,'glm']]], ['four_5fover_5fpi',['four_over_pi',['../a00290.html#ga753950e5140e4ea6a88e4a18ba61dc09',1,'glm']]], ['fract',['fract',['../a00241.html#ga8ba89e40e55ae5cdf228548f9b7639c7',1,'glm::fract(genType x)'],['../a00241.html#ga2df623004f634b440d61e018d62c751b',1,'glm::fract(vec< L, T, Q > const &x)']]], ['frexp',['frexp',['../a00241.html#gaddf5ef73283c171730e0bcc11833fa81',1,'glm']]], ['frustum',['frustum',['../a00243.html#ga0bcd4542e0affc63a0b8c08fcb839ea9',1,'glm']]], ['frustumlh',['frustumLH',['../a00243.html#gae4277c37f61d81da01bc9db14ea90296',1,'glm']]], ['frustumlh_5fno',['frustumLH_NO',['../a00243.html#ga259520cad03b3f8bca9417920035ed01',1,'glm']]], ['frustumlh_5fzo',['frustumLH_ZO',['../a00243.html#ga94218b094862d17798370242680b9030',1,'glm']]], ['frustumno',['frustumNO',['../a00243.html#gae34ec664ad44860bf4b5ba631f0e0e90',1,'glm']]], ['frustumrh',['frustumRH',['../a00243.html#ga4366ab45880c6c5f8b3e8c371ca4b136',1,'glm']]], 
['frustumrh_5fno',['frustumRH_NO',['../a00243.html#ga9236c8439f21be186b79c97b588836b9',1,'glm']]], ['frustumrh_5fzo',['frustumRH_ZO',['../a00243.html#ga7654a9227f14d5382786b9fc0eb5692d',1,'glm']]], ['frustumzo',['frustumZO',['../a00243.html#gaa73322e152edf50cf30a6edac342a757',1,'glm']]], ['functions_2ehpp',['functions.hpp',['../a00034.html',1,'']]], ['fvec1',['fvec1',['../a00304.html#ga98b9ed43cf8c5cf1d354b23c7df9119f',1,'glm']]], ['fvec2',['fvec2',['../a00304.html#ga24273aa02abaecaab7f160bac437a339',1,'glm']]], ['fvec3',['fvec3',['../a00304.html#ga89930533646b30d021759298aa6bf04a',1,'glm']]], ['fvec4',['fvec4',['../a00304.html#ga713c796c54875cf4092d42ff9d9096b0',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_6.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_6.js ================================================ var searchData= [ ['color_5fspace_2ehpp',['color_space.hpp',['../a00012.html',1,'']]], ['color_5fspace_2ehpp',['color_space.hpp',['../a00013.html',1,'']]], ['common_2ehpp',['common.hpp',['../a00016.html',1,'']]], ['geometric_20functions',['Geometric functions',['../a00279.html',1,'']]], ['glm_5fext_5fmatrix_5fclip_5fspace',['GLM_EXT_matrix_clip_space',['../a00243.html',1,'']]], ['glm_5fext_5fmatrix_5fcommon',['GLM_EXT_matrix_common',['../a00244.html',1,'']]], ['glm_5fext_5fmatrix_5fprojection',['GLM_EXT_matrix_projection',['../a00245.html',1,'']]], ['glm_5fext_5fmatrix_5frelational',['GLM_EXT_matrix_relational',['../a00246.html',1,'']]], ['glm_5fext_5fmatrix_5ftransform',['GLM_EXT_matrix_transform',['../a00247.html',1,'']]], ['glm_5fext_5fquaternion_5fcommon',['GLM_EXT_quaternion_common',['../a00248.html',1,'']]], ['glm_5fext_5fquaternion_5fdouble',['GLM_EXT_quaternion_double',['../a00249.html',1,'']]], ['glm_5fext_5fquaternion_5fdouble_5fprecision',['GLM_EXT_quaternion_double_precision',['../a00250.html',1,'']]], ['glm_5fext_5fquaternion_5fexponential',['GLM_EXT_quaternion_exponential',['../a00251.html',1,'']]], ['glm_5fext_5fquaternion_5ffloat',['GLM_EXT_quaternion_float',['../a00252.html',1,'']]], ['glm_5fext_5fquaternion_5ffloat_5fprecision',['GLM_EXT_quaternion_float_precision',['../a00253.html',1,'']]], ['glm_5fext_5fquaternion_5fgeometric',['GLM_EXT_quaternion_geometric',['../a00254.html',1,'']]], ['glm_5fext_5fquaternion_5frelational',['GLM_EXT_quaternion_relational',['../a00255.html',1,'']]], ['glm_5fext_5fquaternion_5ftransform',['GLM_EXT_quaternion_transform',['../a00256.html',1,'']]], ['glm_5fext_5fquaternion_5ftrigonometric',['GLM_EXT_quaternion_trigonometric',['../a00257.html',1,'']]], 
['glm_5fext_5fscalar_5fcommon',['GLM_EXT_scalar_common',['../a00258.html',1,'']]], ['glm_5fext_5fscalar_5fconstants',['GLM_EXT_scalar_constants',['../a00259.html',1,'']]], ['glm_5fext_5fscalar_5fint_5fsized',['GLM_EXT_scalar_int_sized',['../a00260.html',1,'']]], ['glm_5fext_5fscalar_5finteger',['GLM_EXT_scalar_integer',['../a00261.html',1,'']]], ['glm_5fext_5fscalar_5frelational',['GLM_EXT_scalar_relational',['../a00262.html',1,'']]], ['glm_5fext_5fscalar_5fuint_5fsized',['GLM_EXT_scalar_uint_sized',['../a00263.html',1,'']]], ['glm_5fext_5fscalar_5fulp',['GLM_EXT_scalar_ulp',['../a00264.html',1,'']]], ['glm_5fext_5fvector_5fbool1',['GLM_EXT_vector_bool1',['../a00265.html',1,'']]], ['glm_5fext_5fvector_5fbool1_5fprecision',['GLM_EXT_vector_bool1_precision',['../a00266.html',1,'']]], ['glm_5fext_5fvector_5fcommon',['GLM_EXT_vector_common',['../a00267.html',1,'']]], ['glm_5fext_5fvector_5fdouble1',['GLM_EXT_vector_double1',['../a00268.html',1,'']]], ['glm_5fext_5fvector_5fdouble1_5fprecision',['GLM_EXT_vector_double1_precision',['../a00269.html',1,'']]], ['glm_5fext_5fvector_5ffloat1',['GLM_EXT_vector_float1',['../a00270.html',1,'']]], ['glm_5fext_5fvector_5ffloat1_5fprecision',['GLM_EXT_vector_float1_precision',['../a00271.html',1,'']]], ['glm_5fext_5fvector_5fint1',['GLM_EXT_vector_int1',['../a00272.html',1,'']]], ['glm_5fext_5fvector_5fint1_5fprecision',['GLM_EXT_vector_int1_precision',['../a00273.html',1,'']]], ['glm_5fext_5fvector_5finteger',['GLM_EXT_vector_integer',['../a00274.html',1,'']]], ['glm_5fext_5fvector_5frelational',['GLM_EXT_vector_relational',['../a00275.html',1,'']]], ['glm_5fext_5fvector_5fuint1',['GLM_EXT_vector_uint1',['../a00276.html',1,'']]], ['glm_5fext_5fvector_5fuint1_5fprecision',['GLM_EXT_vector_uint1_precision',['../a00277.html',1,'']]], ['glm_5fext_5fvector_5fulp',['GLM_EXT_vector_ulp',['../a00278.html',1,'']]], ['gauss',['gauss',['../a00326.html#ga0b50b197ff74261a0fad90f4b8d24702',1,'glm::gauss(T x, T ExpectedValue, T 
StandardDeviation)'],['../a00326.html#gad19ec8754a83c0b9a8dc16b7e60705ab',1,'glm::gauss(vec< 2, T, Q > const &Coord, vec< 2, T, Q > const &ExpectedValue, vec< 2, T, Q > const &StandardDeviation)']]], ['gaussrand',['gaussRand',['../a00300.html#ga5193a83e49e4fdc5652c084711083574',1,'glm']]], ['geometric_2ehpp',['geometric.hpp',['../a00036.html',1,'']]], ['glm_2ehpp',['glm.hpp',['../a00037.html',1,'']]], ['glm_5faligned_5ftypedef',['GLM_ALIGNED_TYPEDEF',['../a00364.html#gab5cd5c5fad228b25c782084f1cc30114',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_int8, aligned_lowp_int8, 1)'],['../a00364.html#ga5bb5dd895ef625c1b113f2cf400186b0',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_int16, aligned_lowp_int16, 2)'],['../a00364.html#gac6efa54cf7c6c86f7158922abdb1a430',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_int32, aligned_lowp_int32, 4)'],['../a00364.html#ga6612eb77c8607048e7552279a11eeb5f',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_int64, aligned_lowp_int64, 8)'],['../a00364.html#ga7ddc1848ff2223026db8968ce0c97497',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_int8_t, aligned_lowp_int8_t, 1)'],['../a00364.html#ga22240dd9458b0f8c11fbcc4f48714f68',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_int16_t, aligned_lowp_int16_t, 2)'],['../a00364.html#ga8130ea381d76a2cc34a93ccbb6cf487d',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_int32_t, aligned_lowp_int32_t, 4)'],['../a00364.html#ga7ccb60f3215d293fd62b33b31ed0e7be',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_int64_t, aligned_lowp_int64_t, 8)'],['../a00364.html#gac20d508d2ef5cc95ad3daf083c57ec2a',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_i8, aligned_lowp_i8, 1)'],['../a00364.html#ga50257b48069a31d0c8d9c1f644d267de',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_i16, aligned_lowp_i16, 2)'],['../a00364.html#gaa07e98e67b7a3435c0746018c7a2a839',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_i32, aligned_lowp_i32, 4)'],['../a00364.html#ga62601fc6f8ca298b77285bedf03faffd',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_i64, aligned_lowp_i64, 8)'],['../a00364.html#gac8cff825951aeb54dd846037113c72db',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_int8, aligned_mediump_int8, 
1)'],['../a00364.html#ga78f443d88f438575a62b5df497cdf66b',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_int16, aligned_mediump_int16, 2)'],['../a00364.html#ga0680cd3b5d4e8006985fb41a4f9b57af',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_int32, aligned_mediump_int32, 4)'],['../a00364.html#gad9e5babb1dd3e3531b42c37bf25dd951',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_int64, aligned_mediump_int64, 8)'],['../a00364.html#ga353fd9fa8a9ad952fcabd0d53ad9a6dd',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_int8_t, aligned_mediump_int8_t, 1)'],['../a00364.html#ga2196442c0e5c5e8c77842de388c42521',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_int16_t, aligned_mediump_int16_t, 2)'],['../a00364.html#ga1284488189daf897cf095c5eefad9744',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_int32_t, aligned_mediump_int32_t, 4)'],['../a00364.html#ga73fdc86a539808af58808b7c60a1c4d8',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_int64_t, aligned_mediump_int64_t, 8)'],['../a00364.html#gafafeea923e1983262c972e2b83922d3b',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_i8, aligned_mediump_i8, 1)'],['../a00364.html#ga4b35ca5fe8f55c9d2fe54fdb8d8896f4',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_i16, aligned_mediump_i16, 2)'],['../a00364.html#ga63b882e29170d428463d99c3d630acc6',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_i32, aligned_mediump_i32, 4)'],['../a00364.html#ga8b20507bb048c1edea2d441cc953e6f0',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_i64, aligned_mediump_i64, 8)'],['../a00364.html#ga56c5ca60813027b603c7b61425a0479d',1,'glm::GLM_ALIGNED_TYPEDEF(highp_int8, aligned_highp_int8, 1)'],['../a00364.html#ga7a751b3aff24c0259f4a7357c2969089',1,'glm::GLM_ALIGNED_TYPEDEF(highp_int16, aligned_highp_int16, 2)'],['../a00364.html#ga70cd2144351c556469ee6119e59971fc',1,'glm::GLM_ALIGNED_TYPEDEF(highp_int32, aligned_highp_int32, 4)'],['../a00364.html#ga46bbf08dc004d8c433041e0b5018a5d3',1,'glm::GLM_ALIGNED_TYPEDEF(highp_int64, aligned_highp_int64, 8)'],['../a00364.html#gab3e10c77a20d1abad2de1c561c7a5c18',1,'glm::GLM_ALIGNED_TYPEDEF(highp_int8_t, aligned_highp_int8_t, 
1)'],['../a00364.html#ga968f30319ebeaca9ebcd3a25a8e139fb',1,'glm::GLM_ALIGNED_TYPEDEF(highp_int16_t, aligned_highp_int16_t, 2)'],['../a00364.html#gaae773c28e6390c6aa76f5b678b7098a3',1,'glm::GLM_ALIGNED_TYPEDEF(highp_int32_t, aligned_highp_int32_t, 4)'],['../a00364.html#ga790cfff1ca39d0ed696ffed980809311',1,'glm::GLM_ALIGNED_TYPEDEF(highp_int64_t, aligned_highp_int64_t, 8)'],['../a00364.html#ga8265b91eb23c120a9b0c3e381bc37b96',1,'glm::GLM_ALIGNED_TYPEDEF(highp_i8, aligned_highp_i8, 1)'],['../a00364.html#gae6d384de17588d8edb894fbe06e0d410',1,'glm::GLM_ALIGNED_TYPEDEF(highp_i16, aligned_highp_i16, 2)'],['../a00364.html#ga9c8172b745ee03fc5b2b91c350c2922f',1,'glm::GLM_ALIGNED_TYPEDEF(highp_i32, aligned_highp_i32, 4)'],['../a00364.html#ga77e0dff12aa4020ddc3f8cabbea7b2e6',1,'glm::GLM_ALIGNED_TYPEDEF(highp_i64, aligned_highp_i64, 8)'],['../a00364.html#gabd82b9faa9d4d618dbbe0fc8a1efee63',1,'glm::GLM_ALIGNED_TYPEDEF(int8, aligned_int8, 1)'],['../a00364.html#ga285649744560be21000cfd81bbb5d507',1,'glm::GLM_ALIGNED_TYPEDEF(int16, aligned_int16, 2)'],['../a00364.html#ga07732da630b2deda428ce95c0ecaf3ff',1,'glm::GLM_ALIGNED_TYPEDEF(int32, aligned_int32, 4)'],['../a00364.html#ga1a8da2a8c51f69c07a2e7f473aa420f4',1,'glm::GLM_ALIGNED_TYPEDEF(int64, aligned_int64, 8)'],['../a00364.html#ga848aedf13e2d9738acf0bb482c590174',1,'glm::GLM_ALIGNED_TYPEDEF(int8_t, aligned_int8_t, 1)'],['../a00364.html#gafd2803d39049dd45a37a63931e25d943',1,'glm::GLM_ALIGNED_TYPEDEF(int16_t, aligned_int16_t, 2)'],['../a00364.html#gae553b33349d6da832cf0724f1e024094',1,'glm::GLM_ALIGNED_TYPEDEF(int32_t, aligned_int32_t, 4)'],['../a00364.html#ga16d223a2b3409e812e1d3bd87f0e9e5c',1,'glm::GLM_ALIGNED_TYPEDEF(int64_t, aligned_int64_t, 8)'],['../a00364.html#ga2de065d2ddfdb366bcd0febca79ae2ad',1,'glm::GLM_ALIGNED_TYPEDEF(i8, aligned_i8, 1)'],['../a00364.html#gabd786bdc20a11c8cb05c92c8212e28d3',1,'glm::GLM_ALIGNED_TYPEDEF(i16, aligned_i16, 
2)'],['../a00364.html#gad4aefe56691cdb640c72f0d46d3fb532',1,'glm::GLM_ALIGNED_TYPEDEF(i32, aligned_i32, 4)'],['../a00364.html#ga8fe9745f7de24a8394518152ff9fccdc',1,'glm::GLM_ALIGNED_TYPEDEF(i64, aligned_i64, 8)'],['../a00364.html#gaaad735483450099f7f882d4e3a3569bd',1,'glm::GLM_ALIGNED_TYPEDEF(ivec1, aligned_ivec1, 4)'],['../a00364.html#gac7b6f823802edbd6edbaf70ea25bf068',1,'glm::GLM_ALIGNED_TYPEDEF(ivec2, aligned_ivec2, 8)'],['../a00364.html#ga3e235bcd2b8029613f25b8d40a2d3ef7',1,'glm::GLM_ALIGNED_TYPEDEF(ivec3, aligned_ivec3, 16)'],['../a00364.html#ga50d8a9523968c77f8325b4c9bfbff41e',1,'glm::GLM_ALIGNED_TYPEDEF(ivec4, aligned_ivec4, 16)'],['../a00364.html#ga9ec20fdfb729c702032da9378c79679f',1,'glm::GLM_ALIGNED_TYPEDEF(i8vec1, aligned_i8vec1, 1)'],['../a00364.html#ga25b3fe1d9e8d0a5e86c1949c1acd8131',1,'glm::GLM_ALIGNED_TYPEDEF(i8vec2, aligned_i8vec2, 2)'],['../a00364.html#ga2958f907719d94d8109b562540c910e2',1,'glm::GLM_ALIGNED_TYPEDEF(i8vec3, aligned_i8vec3, 4)'],['../a00364.html#ga1fe6fc032a978f1c845fac9aa0668714',1,'glm::GLM_ALIGNED_TYPEDEF(i8vec4, aligned_i8vec4, 4)'],['../a00364.html#gaa4161e7a496dc96972254143fe873e55',1,'glm::GLM_ALIGNED_TYPEDEF(i16vec1, aligned_i16vec1, 2)'],['../a00364.html#ga9d7cb211ccda69b1c22ddeeb0f3e7aba',1,'glm::GLM_ALIGNED_TYPEDEF(i16vec2, aligned_i16vec2, 4)'],['../a00364.html#gaaee91dd2ab34423bcc11072ef6bd0f02',1,'glm::GLM_ALIGNED_TYPEDEF(i16vec3, aligned_i16vec3, 8)'],['../a00364.html#ga49f047ccaa8b31fad9f26c67bf9b3510',1,'glm::GLM_ALIGNED_TYPEDEF(i16vec4, aligned_i16vec4, 8)'],['../a00364.html#ga904e9c2436bb099397c0823506a0771f',1,'glm::GLM_ALIGNED_TYPEDEF(i32vec1, aligned_i32vec1, 4)'],['../a00364.html#gaf90651cf2f5e7ee2b11cfdc5a6749534',1,'glm::GLM_ALIGNED_TYPEDEF(i32vec2, aligned_i32vec2, 8)'],['../a00364.html#ga7354a4ead8cb17868aec36b9c30d6010',1,'glm::GLM_ALIGNED_TYPEDEF(i32vec3, aligned_i32vec3, 16)'],['../a00364.html#gad2ecbdea18732163e2636e27b37981ee',1,'glm::GLM_ALIGNED_TYPEDEF(i32vec4, aligned_i32vec4, 
16)'],['../a00364.html#ga965b1c9aa1800e93d4abc2eb2b5afcbf',1,'glm::GLM_ALIGNED_TYPEDEF(i64vec1, aligned_i64vec1, 8)'],['../a00364.html#ga1f9e9c2ea2768675dff9bae5cde2d829',1,'glm::GLM_ALIGNED_TYPEDEF(i64vec2, aligned_i64vec2, 16)'],['../a00364.html#gad77c317b7d942322cd5be4c8127b3187',1,'glm::GLM_ALIGNED_TYPEDEF(i64vec3, aligned_i64vec3, 32)'],['../a00364.html#ga716f8ea809bdb11b5b542d8b71aeb04f',1,'glm::GLM_ALIGNED_TYPEDEF(i64vec4, aligned_i64vec4, 32)'],['../a00364.html#gad46f8e9082d5878b1bc04f9c1471cdaa',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_uint8, aligned_lowp_uint8, 1)'],['../a00364.html#ga1246094581af624aca6c7499aaabf801',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_uint16, aligned_lowp_uint16, 2)'],['../a00364.html#ga7a5009a1d0196bbf21dd7518f61f0249',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_uint32, aligned_lowp_uint32, 4)'],['../a00364.html#ga45213fd18b3bb1df391671afefe4d1e7',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_uint64, aligned_lowp_uint64, 8)'],['../a00364.html#ga0ba26b4e3fd9ecbc25358efd68d8a4ca',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_uint8_t, aligned_lowp_uint8_t, 1)'],['../a00364.html#gaf2b58f5fb6d4ec8ce7b76221d3af43e1',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_uint16_t, aligned_lowp_uint16_t, 2)'],['../a00364.html#gadc246401847dcba155f0699425e49dcd',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_uint32_t, aligned_lowp_uint32_t, 4)'],['../a00364.html#gaace64bddf51a9def01498da9a94fb01c',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_uint64_t, aligned_lowp_uint64_t, 8)'],['../a00364.html#gad7bb97c29d664bd86ffb1bed4abc5534',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_u8, aligned_lowp_u8, 1)'],['../a00364.html#ga404bba7785130e0b1384d695a9450b28',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_u16, aligned_lowp_u16, 2)'],['../a00364.html#ga31ba41fd896257536958ec6080203d2a',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_u32, aligned_lowp_u32, 4)'],['../a00364.html#gacca5f13627f57b3505676e40a6e43e5e',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_u64, aligned_lowp_u64, 8)'],['../a00364.html#ga5faf1d3e70bf33174dd7f3d01d5b883b',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_uint8, 
aligned_mediump_uint8, 1)'],['../a00364.html#ga727e2bf2c433bb3b0182605860a48363',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_uint16, aligned_mediump_uint16, 2)'],['../a00364.html#ga12566ca66d5962dadb4a5eb4c74e891e',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_uint32, aligned_mediump_uint32, 4)'],['../a00364.html#ga7b66a97a8acaa35c5a377b947318c6bc',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_uint64, aligned_mediump_uint64, 8)'],['../a00364.html#gaa9cde002439b74fa66120a16a9f55fcc',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_uint8_t, aligned_mediump_uint8_t, 1)'],['../a00364.html#ga1ca98c67f7d1e975f7c5202f1da1df1f',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_uint16_t, aligned_mediump_uint16_t, 2)'],['../a00364.html#ga1dc8bc6199d785f235576948d80a597c',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_uint32_t, aligned_mediump_uint32_t, 4)'],['../a00364.html#gad14a0f2ec93519682b73d70b8e401d81',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_uint64_t, aligned_mediump_uint64_t, 8)'],['../a00364.html#gada8b996eb6526dc1ead813bd49539d1b',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_u8, aligned_mediump_u8, 1)'],['../a00364.html#ga28948f6bfb52b42deb9d73ae1ea8d8b0',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_u16, aligned_mediump_u16, 2)'],['../a00364.html#gad6a7c0b5630f89d3f1c5b4ef2919bb4c',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_u32, aligned_mediump_u32, 4)'],['../a00364.html#gaa0fc531cbaa972ac3a0b86d21ef4a7fa',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_u64, aligned_mediump_u64, 8)'],['../a00364.html#ga0ee829f7b754b262bbfe6317c0d678ac',1,'glm::GLM_ALIGNED_TYPEDEF(highp_uint8, aligned_highp_uint8, 1)'],['../a00364.html#ga447848a817a626cae08cedc9778b331c',1,'glm::GLM_ALIGNED_TYPEDEF(highp_uint16, aligned_highp_uint16, 2)'],['../a00364.html#ga6027ae13b2734f542a6e7beee11b8820',1,'glm::GLM_ALIGNED_TYPEDEF(highp_uint32, aligned_highp_uint32, 4)'],['../a00364.html#ga2aca46c8608c95ef991ee4c332acde5f',1,'glm::GLM_ALIGNED_TYPEDEF(highp_uint64, aligned_highp_uint64, 8)'],['../a00364.html#gaff50b10dd1c48be324fdaffd18e2c7ea',1,'glm::GLM_ALIGNED_TYPEDEF(highp_uint8_t, 
aligned_highp_uint8_t, 1)'],['../a00364.html#ga9fc4421dbb833d5461e6d4e59dcfde55',1,'glm::GLM_ALIGNED_TYPEDEF(highp_uint16_t, aligned_highp_uint16_t, 2)'],['../a00364.html#ga329f1e2b94b33ba5e3918197030bcf03',1,'glm::GLM_ALIGNED_TYPEDEF(highp_uint32_t, aligned_highp_uint32_t, 4)'],['../a00364.html#ga71e646f7e301aa422328194162c9c998',1,'glm::GLM_ALIGNED_TYPEDEF(highp_uint64_t, aligned_highp_uint64_t, 8)'],['../a00364.html#ga8942e09f479489441a7a5004c6d8cb66',1,'glm::GLM_ALIGNED_TYPEDEF(highp_u8, aligned_highp_u8, 1)'],['../a00364.html#gaab32497d6e4db16ee439dbedd64c5865',1,'glm::GLM_ALIGNED_TYPEDEF(highp_u16, aligned_highp_u16, 2)'],['../a00364.html#gaaadbb34952eca8e3d7fe122c3e167742',1,'glm::GLM_ALIGNED_TYPEDEF(highp_u32, aligned_highp_u32, 4)'],['../a00364.html#ga92024d27c74a3650afb55ec8e024ed25',1,'glm::GLM_ALIGNED_TYPEDEF(highp_u64, aligned_highp_u64, 8)'],['../a00364.html#gabde1d0b4072df35453db76075ab896a6',1,'glm::GLM_ALIGNED_TYPEDEF(uint8, aligned_uint8, 1)'],['../a00364.html#ga06c296c9e398b294c8c9dd2a7693dcbb',1,'glm::GLM_ALIGNED_TYPEDEF(uint16, aligned_uint16, 2)'],['../a00364.html#gacf1744488c96ebd33c9f36ad33b2010a',1,'glm::GLM_ALIGNED_TYPEDEF(uint32, aligned_uint32, 4)'],['../a00364.html#ga3328061a64c20ba59d5f9da24c2cd059',1,'glm::GLM_ALIGNED_TYPEDEF(uint64, aligned_uint64, 8)'],['../a00364.html#gaf6ced36f13bae57f377bafa6f5fcc299',1,'glm::GLM_ALIGNED_TYPEDEF(uint8_t, aligned_uint8_t, 1)'],['../a00364.html#gafbc7fb7847bfc78a339d1d371c915c73',1,'glm::GLM_ALIGNED_TYPEDEF(uint16_t, aligned_uint16_t, 2)'],['../a00364.html#gaa86bc56a73fd8120b1121b5f5e6245ae',1,'glm::GLM_ALIGNED_TYPEDEF(uint32_t, aligned_uint32_t, 4)'],['../a00364.html#ga68c0b9e669060d0eb5ab8c3ddeb483d8',1,'glm::GLM_ALIGNED_TYPEDEF(uint64_t, aligned_uint64_t, 8)'],['../a00364.html#ga4f3bab577daf3343e99cc005134bce86',1,'glm::GLM_ALIGNED_TYPEDEF(u8, aligned_u8, 1)'],['../a00364.html#ga13a2391339d0790d43b76d00a7611c4f',1,'glm::GLM_ALIGNED_TYPEDEF(u16, aligned_u16, 
2)'],['../a00364.html#ga197570e03acbc3d18ab698e342971e8f',1,'glm::GLM_ALIGNED_TYPEDEF(u32, aligned_u32, 4)'],['../a00364.html#ga0f033b21e145a1faa32c62ede5878993',1,'glm::GLM_ALIGNED_TYPEDEF(u64, aligned_u64, 8)'],['../a00364.html#ga509af83527f5cd512e9a7873590663aa',1,'glm::GLM_ALIGNED_TYPEDEF(uvec1, aligned_uvec1, 4)'],['../a00364.html#ga94e86186978c502c6dc0c0d9c4a30679',1,'glm::GLM_ALIGNED_TYPEDEF(uvec2, aligned_uvec2, 8)'],['../a00364.html#ga5cec574686a7f3c8ed24bb195c5e2d0a',1,'glm::GLM_ALIGNED_TYPEDEF(uvec3, aligned_uvec3, 16)'],['../a00364.html#ga47edfdcee9c89b1ebdaf20450323b1d4',1,'glm::GLM_ALIGNED_TYPEDEF(uvec4, aligned_uvec4, 16)'],['../a00364.html#ga5611d6718e3a00096918a64192e73a45',1,'glm::GLM_ALIGNED_TYPEDEF(u8vec1, aligned_u8vec1, 1)'],['../a00364.html#ga19837e6f72b60d994a805ef564c6c326',1,'glm::GLM_ALIGNED_TYPEDEF(u8vec2, aligned_u8vec2, 2)'],['../a00364.html#ga9740cf8e34f068049b42a2753f9601c2',1,'glm::GLM_ALIGNED_TYPEDEF(u8vec3, aligned_u8vec3, 4)'],['../a00364.html#ga8b8588bb221448f5541a858903822a57',1,'glm::GLM_ALIGNED_TYPEDEF(u8vec4, aligned_u8vec4, 4)'],['../a00364.html#ga991abe990c16de26b2129d6bc2f4c051',1,'glm::GLM_ALIGNED_TYPEDEF(u16vec1, aligned_u16vec1, 2)'],['../a00364.html#gac01bb9fc32a1cd76c2b80d030f71df4c',1,'glm::GLM_ALIGNED_TYPEDEF(u16vec2, aligned_u16vec2, 4)'],['../a00364.html#ga09540dbca093793a36a8997e0d4bee77',1,'glm::GLM_ALIGNED_TYPEDEF(u16vec3, aligned_u16vec3, 8)'],['../a00364.html#gaecafb5996f5a44f57e34d29c8670741e',1,'glm::GLM_ALIGNED_TYPEDEF(u16vec4, aligned_u16vec4, 8)'],['../a00364.html#gac6b161a04d2f8408fe1c9d857e8daac0',1,'glm::GLM_ALIGNED_TYPEDEF(u32vec1, aligned_u32vec1, 4)'],['../a00364.html#ga1fa0dfc8feb0fa17dab2acd43e05342b',1,'glm::GLM_ALIGNED_TYPEDEF(u32vec2, aligned_u32vec2, 8)'],['../a00364.html#ga0019500abbfa9c66eff61ca75eaaed94',1,'glm::GLM_ALIGNED_TYPEDEF(u32vec3, aligned_u32vec3, 16)'],['../a00364.html#ga14fd29d01dae7b08a04e9facbcc18824',1,'glm::GLM_ALIGNED_TYPEDEF(u32vec4, aligned_u32vec4, 
16)'],['../a00364.html#gab253845f534a67136f9619843cade903',1,'glm::GLM_ALIGNED_TYPEDEF(u64vec1, aligned_u64vec1, 8)'],['../a00364.html#ga929427a7627940cdf3304f9c050b677d',1,'glm::GLM_ALIGNED_TYPEDEF(u64vec2, aligned_u64vec2, 16)'],['../a00364.html#gae373b6c04fdf9879f33d63e6949c037e',1,'glm::GLM_ALIGNED_TYPEDEF(u64vec3, aligned_u64vec3, 32)'],['../a00364.html#ga53a8a03dca2015baec4584f45b8e9cdc',1,'glm::GLM_ALIGNED_TYPEDEF(u64vec4, aligned_u64vec4, 32)'],['../a00364.html#gab3301bae94ef5bf59fbdd9a24e7d2a01',1,'glm::GLM_ALIGNED_TYPEDEF(float32, aligned_float32, 4)'],['../a00364.html#gada9b0bea273d3ae0286f891533b9568f',1,'glm::GLM_ALIGNED_TYPEDEF(float32_t, aligned_float32_t, 4)'],['../a00364.html#gadbce23b9f23d77bb3884e289a574ebd5',1,'glm::GLM_ALIGNED_TYPEDEF(float32, aligned_f32, 4)'],['../a00364.html#ga75930684ff2233171c573e603f216162',1,'glm::GLM_ALIGNED_TYPEDEF(float64, aligned_float64, 8)'],['../a00364.html#ga6e3a2d83b131336219a0f4c7cbba2a48',1,'glm::GLM_ALIGNED_TYPEDEF(float64_t, aligned_float64_t, 8)'],['../a00364.html#gaa4deaa0dea930c393d55e7a4352b0a20',1,'glm::GLM_ALIGNED_TYPEDEF(float64, aligned_f64, 8)'],['../a00364.html#ga81bc497b2bfc6f80bab690c6ee28f0f9',1,'glm::GLM_ALIGNED_TYPEDEF(vec1, aligned_vec1, 4)'],['../a00364.html#gada3e8f783e9d4b90006695a16c39d4d4',1,'glm::GLM_ALIGNED_TYPEDEF(vec2, aligned_vec2, 8)'],['../a00364.html#gab8d081fac3a38d6f55fa552f32168d32',1,'glm::GLM_ALIGNED_TYPEDEF(vec3, aligned_vec3, 16)'],['../a00364.html#ga12fe7b9769c964c5b48dcfd8b7f40198',1,'glm::GLM_ALIGNED_TYPEDEF(vec4, aligned_vec4, 16)'],['../a00364.html#gaefab04611c7f8fe1fd9be3071efea6cc',1,'glm::GLM_ALIGNED_TYPEDEF(fvec1, aligned_fvec1, 4)'],['../a00364.html#ga2543c05ba19b3bd19d45b1227390c5b4',1,'glm::GLM_ALIGNED_TYPEDEF(fvec2, aligned_fvec2, 8)'],['../a00364.html#ga009afd727fd657ef33a18754d6d28f60',1,'glm::GLM_ALIGNED_TYPEDEF(fvec3, aligned_fvec3, 16)'],['../a00364.html#ga2f26177e74bfb301a3d0e02ec3c3ef53',1,'glm::GLM_ALIGNED_TYPEDEF(fvec4, aligned_fvec4, 
16)'],['../a00364.html#ga309f495a1d6b75ddf195b674b65cb1e4',1,'glm::GLM_ALIGNED_TYPEDEF(f32vec1, aligned_f32vec1, 4)'],['../a00364.html#ga5e185865a2217d0cd47187644683a8c3',1,'glm::GLM_ALIGNED_TYPEDEF(f32vec2, aligned_f32vec2, 8)'],['../a00364.html#gade4458b27b039b9ca34f8ec049f3115a',1,'glm::GLM_ALIGNED_TYPEDEF(f32vec3, aligned_f32vec3, 16)'],['../a00364.html#ga2e8a12c5e6a9c4ae4ddaeda1d1cffe3b',1,'glm::GLM_ALIGNED_TYPEDEF(f32vec4, aligned_f32vec4, 16)'],['../a00364.html#ga3e0f35fa0c626285a8bad41707e7316c',1,'glm::GLM_ALIGNED_TYPEDEF(dvec1, aligned_dvec1, 8)'],['../a00364.html#ga78bfec2f185d1d365ea0a9ef1e3d45b8',1,'glm::GLM_ALIGNED_TYPEDEF(dvec2, aligned_dvec2, 16)'],['../a00364.html#ga01fe6fee6db5df580b6724a7e681f069',1,'glm::GLM_ALIGNED_TYPEDEF(dvec3, aligned_dvec3, 32)'],['../a00364.html#ga687d5b8f551d5af32425c0b2fba15e99',1,'glm::GLM_ALIGNED_TYPEDEF(dvec4, aligned_dvec4, 32)'],['../a00364.html#ga8e842371d46842ff8f1813419ba49d0f',1,'glm::GLM_ALIGNED_TYPEDEF(f64vec1, aligned_f64vec1, 8)'],['../a00364.html#ga32814aa0f19316b43134fc25f2aad2b9',1,'glm::GLM_ALIGNED_TYPEDEF(f64vec2, aligned_f64vec2, 16)'],['../a00364.html#gaf3d3bbc1e93909b689123b085e177a14',1,'glm::GLM_ALIGNED_TYPEDEF(f64vec3, aligned_f64vec3, 32)'],['../a00364.html#ga804c654cead1139bd250f90f9bb01fad',1,'glm::GLM_ALIGNED_TYPEDEF(f64vec4, aligned_f64vec4, 32)'],['../a00364.html#gacce4ac532880b8c7469d3c31974420a1',1,'glm::GLM_ALIGNED_TYPEDEF(mat2, aligned_mat2, 16)'],['../a00364.html#ga0498e0e249a6faddaf96aa55d7f81c3b',1,'glm::GLM_ALIGNED_TYPEDEF(mat3, aligned_mat3, 16)'],['../a00364.html#ga7435d87de82a0d652b35dc5b9cc718d5',1,'glm::GLM_ALIGNED_TYPEDEF(mat4, aligned_mat4, 16)'],['../a00364.html#ga719da577361541a4c43a2dd1d0e361e1',1,'glm::GLM_ALIGNED_TYPEDEF(fmat2x2, aligned_fmat2, 16)'],['../a00364.html#ga6e7ee4f541e1d7db66cd1a224caacafb',1,'glm::GLM_ALIGNED_TYPEDEF(fmat3x3, aligned_fmat3, 16)'],['../a00364.html#gae5d672d359f2a39f63f98c7975057486',1,'glm::GLM_ALIGNED_TYPEDEF(fmat4x4, aligned_fmat4, 
16)'],['../a00364.html#ga6fa2df037dbfc5fe8c8e0b4db8a34953',1,'glm::GLM_ALIGNED_TYPEDEF(fmat2x2, aligned_fmat2x2, 16)'],['../a00364.html#ga0743b4f4f69a3227b82ff58f6abbad62',1,'glm::GLM_ALIGNED_TYPEDEF(fmat2x3, aligned_fmat2x3, 16)'],['../a00364.html#ga1a76b325fdf70f961d835edd182c63dd',1,'glm::GLM_ALIGNED_TYPEDEF(fmat2x4, aligned_fmat2x4, 16)'],['../a00364.html#ga4b4e181cd041ba28c3163e7b8074aef0',1,'glm::GLM_ALIGNED_TYPEDEF(fmat3x2, aligned_fmat3x2, 16)'],['../a00364.html#ga27b13f465abc8a40705698145e222c3f',1,'glm::GLM_ALIGNED_TYPEDEF(fmat3x3, aligned_fmat3x3, 16)'],['../a00364.html#ga2608d19cc275830a6f8c0b6405625a4f',1,'glm::GLM_ALIGNED_TYPEDEF(fmat3x4, aligned_fmat3x4, 16)'],['../a00364.html#ga93f09768241358a287c4cca538f1f7e7',1,'glm::GLM_ALIGNED_TYPEDEF(fmat4x2, aligned_fmat4x2, 16)'],['../a00364.html#ga7c117e3ecca089e10247b1d41d88aff9',1,'glm::GLM_ALIGNED_TYPEDEF(fmat4x3, aligned_fmat4x3, 16)'],['../a00364.html#ga07c75cd04ba42dc37fa3e105f89455c5',1,'glm::GLM_ALIGNED_TYPEDEF(fmat4x4, aligned_fmat4x4, 16)'],['../a00364.html#ga65ff0d690a34a4d7f46f9b2eb51525ee',1,'glm::GLM_ALIGNED_TYPEDEF(f32mat2x2, aligned_f32mat2, 16)'],['../a00364.html#gadd8ddbe2bf65ccede865ba2f510176dc',1,'glm::GLM_ALIGNED_TYPEDEF(f32mat3x3, aligned_f32mat3, 16)'],['../a00364.html#gaf18dbff14bf13d3ff540c517659ec045',1,'glm::GLM_ALIGNED_TYPEDEF(f32mat4x4, aligned_f32mat4, 16)'],['../a00364.html#ga66339f6139bf7ff19e245beb33f61cc8',1,'glm::GLM_ALIGNED_TYPEDEF(f32mat2x2, aligned_f32mat2x2, 16)'],['../a00364.html#ga1558a48b3934011b52612809f443e46d',1,'glm::GLM_ALIGNED_TYPEDEF(f32mat2x3, aligned_f32mat2x3, 16)'],['../a00364.html#gaa52e5732daa62851627021ad551c7680',1,'glm::GLM_ALIGNED_TYPEDEF(f32mat2x4, aligned_f32mat2x4, 16)'],['../a00364.html#gac09663c42566bcb58d23c6781ac4e85a',1,'glm::GLM_ALIGNED_TYPEDEF(f32mat3x2, aligned_f32mat3x2, 16)'],['../a00364.html#ga3f510999e59e1b309113e1d561162b29',1,'glm::GLM_ALIGNED_TYPEDEF(f32mat3x3, aligned_f32mat3x3, 
16)'],['../a00364.html#ga2c9c94f0c89cd71ce56551db6cf4aaec',1,'glm::GLM_ALIGNED_TYPEDEF(f32mat3x4, aligned_f32mat3x4, 16)'],['../a00364.html#ga99ce8274c750fbfdf0e70c95946a2875',1,'glm::GLM_ALIGNED_TYPEDEF(f32mat4x2, aligned_f32mat4x2, 16)'],['../a00364.html#ga9476ef66790239df53dbe66f3989c3b5',1,'glm::GLM_ALIGNED_TYPEDEF(f32mat4x3, aligned_f32mat4x3, 16)'],['../a00364.html#gacc429b3b0b49921e12713b6d31e14e1d',1,'glm::GLM_ALIGNED_TYPEDEF(f32mat4x4, aligned_f32mat4x4, 16)'],['../a00364.html#ga88f6c6fa06e6e64479763e69444669cf',1,'glm::GLM_ALIGNED_TYPEDEF(f64mat2x2, aligned_f64mat2, 32)'],['../a00364.html#gaae8e4639c991e64754145ab8e4c32083',1,'glm::GLM_ALIGNED_TYPEDEF(f64mat3x3, aligned_f64mat3, 32)'],['../a00364.html#ga6e9094f3feb3b5b49d0f83683a101fde',1,'glm::GLM_ALIGNED_TYPEDEF(f64mat4x4, aligned_f64mat4, 32)'],['../a00364.html#gadbd2c639c03de1c3e9591b5a39f65559',1,'glm::GLM_ALIGNED_TYPEDEF(f64mat2x2, aligned_f64mat2x2, 32)'],['../a00364.html#gab059d7b9fe2094acc563b7223987499f',1,'glm::GLM_ALIGNED_TYPEDEF(f64mat2x3, aligned_f64mat2x3, 32)'],['../a00364.html#gabbc811d1c52ed2b8cfcaff1378f75c69',1,'glm::GLM_ALIGNED_TYPEDEF(f64mat2x4, aligned_f64mat2x4, 32)'],['../a00364.html#ga9ddf5212777734d2fd841a84439f3bdf',1,'glm::GLM_ALIGNED_TYPEDEF(f64mat3x2, aligned_f64mat3x2, 32)'],['../a00364.html#gad1dda32ed09f94bfcf0a7d8edfb6cf13',1,'glm::GLM_ALIGNED_TYPEDEF(f64mat3x3, aligned_f64mat3x3, 32)'],['../a00364.html#ga5875e0fa72f07e271e7931811cbbf31a',1,'glm::GLM_ALIGNED_TYPEDEF(f64mat3x4, aligned_f64mat3x4, 32)'],['../a00364.html#ga41e82cd6ac07f912ba2a2d45799dcf0d',1,'glm::GLM_ALIGNED_TYPEDEF(f64mat4x2, aligned_f64mat4x2, 32)'],['../a00364.html#ga0892638d6ba773043b3d63d1d092622e',1,'glm::GLM_ALIGNED_TYPEDEF(f64mat4x3, aligned_f64mat4x3, 32)'],['../a00364.html#ga912a16432608b822f1e13607529934c1',1,'glm::GLM_ALIGNED_TYPEDEF(f64mat4x4, aligned_f64mat4x4, 32)'],['../a00364.html#gafd945a8ea86b042aba410e0560df9a3d',1,'glm::GLM_ALIGNED_TYPEDEF(quat, aligned_quat, 
16)'],['../a00364.html#ga19c2ba545d1f2f36bcb7b60c9a228622',1,'glm::GLM_ALIGNED_TYPEDEF(quat, aligned_fquat, 16)'],['../a00364.html#gaabc28c84a3288b697605d4688686f9a9',1,'glm::GLM_ALIGNED_TYPEDEF(dquat, aligned_dquat, 32)'],['../a00364.html#ga1ed8aeb5ca67fade269a46105f1bf273',1,'glm::GLM_ALIGNED_TYPEDEF(f32quat, aligned_f32quat, 16)'],['../a00364.html#ga95cc03b8b475993fa50e05e38e203303',1,'glm::GLM_ALIGNED_TYPEDEF(f64quat, aligned_f64quat, 32)']]], ['golden_5fratio',['golden_ratio',['../a00290.html#ga748cf8642830657c5b7eae04d0a80899',1,'glm']]], ['gradient_5fpaint_2ehpp',['gradient_paint.hpp',['../a00038.html',1,'']]], ['greaterthan',['greaterThan',['../a00299.html#ga8f7fa76e06c417b757ddfd438f3f677b',1,'glm::greaterThan(qua< T, Q > const &x, qua< T, Q > const &y)'],['../a00374.html#gadfdb8ea82deca869ddc7e63ea5a63ae4',1,'glm::greaterThan(vec< L, T, Q > const &x, vec< L, T, Q > const &y)']]], ['greaterthanequal',['greaterThanEqual',['../a00299.html#ga388cbeba987dae7b5937f742efa49a5a',1,'glm::greaterThanEqual(qua< T, Q > const &x, qua< T, Q > const &y)'],['../a00374.html#ga859975f538940f8d18fe62f916b9abd7',1,'glm::greaterThanEqual(vec< L, T, Q > const &x, vec< L, T, Q > const &y)']]], ['glm_5fgtc_5fbitfield',['GLM_GTC_bitfield',['../a00288.html',1,'']]], ['glm_5fgtc_5fcolor_5fspace',['GLM_GTC_color_space',['../a00289.html',1,'']]], ['glm_5fgtc_5fconstants',['GLM_GTC_constants',['../a00290.html',1,'']]], ['glm_5fgtc_5fepsilon',['GLM_GTC_epsilon',['../a00291.html',1,'']]], ['glm_5fgtc_5finteger',['GLM_GTC_integer',['../a00292.html',1,'']]], ['glm_5fgtc_5fmatrix_5faccess',['GLM_GTC_matrix_access',['../a00293.html',1,'']]], ['glm_5fgtc_5fmatrix_5finteger',['GLM_GTC_matrix_integer',['../a00294.html',1,'']]], ['glm_5fgtc_5fmatrix_5finverse',['GLM_GTC_matrix_inverse',['../a00295.html',1,'']]], ['glm_5fgtc_5fmatrix_5ftransform',['GLM_GTC_matrix_transform',['../a00296.html',1,'']]], ['glm_5fgtc_5fnoise',['GLM_GTC_noise',['../a00297.html',1,'']]], 
['glm_5fgtc_5fpacking',['GLM_GTC_packing',['../a00298.html',1,'']]], ['glm_5fgtc_5fquaternion',['GLM_GTC_quaternion',['../a00299.html',1,'']]], ['glm_5fgtc_5frandom',['GLM_GTC_random',['../a00300.html',1,'']]], ['glm_5fgtc_5freciprocal',['GLM_GTC_reciprocal',['../a00301.html',1,'']]], ['glm_5fgtc_5fround',['GLM_GTC_round',['../a00302.html',1,'']]], ['glm_5fgtc_5ftype_5faligned',['GLM_GTC_type_aligned',['../a00303.html',1,'']]], ['glm_5fgtc_5ftype_5fprecision',['GLM_GTC_type_precision',['../a00304.html',1,'']]], ['glm_5fgtc_5ftype_5fptr',['GLM_GTC_type_ptr',['../a00305.html',1,'']]], ['glm_5fgtc_5fulp',['GLM_GTC_ulp',['../a00306.html',1,'']]], ['glm_5fgtc_5fvec1',['GLM_GTC_vec1',['../a00307.html',1,'']]], ['glm_5fgtx_5fassociated_5fmin_5fmax',['GLM_GTX_associated_min_max',['../a00308.html',1,'']]], ['glm_5fgtx_5fbit',['GLM_GTX_bit',['../a00309.html',1,'']]], ['glm_5fgtx_5fclosest_5fpoint',['GLM_GTX_closest_point',['../a00310.html',1,'']]], ['glm_5fgtx_5fcolor_5fencoding',['GLM_GTX_color_encoding',['../a00311.html',1,'']]], ['glm_5fgtx_5fcolor_5fspace',['GLM_GTX_color_space',['../a00312.html',1,'']]], ['glm_5fgtx_5fcolor_5fspace_5fycocg',['GLM_GTX_color_space_YCoCg',['../a00313.html',1,'']]], ['glm_5fgtx_5fcommon',['GLM_GTX_common',['../a00314.html',1,'']]], ['glm_5fgtx_5fcompatibility',['GLM_GTX_compatibility',['../a00315.html',1,'']]], ['glm_5fgtx_5fcomponent_5fwise',['GLM_GTX_component_wise',['../a00316.html',1,'']]], ['glm_5fgtx_5fdual_5fquaternion',['GLM_GTX_dual_quaternion',['../a00317.html',1,'']]], ['glm_5fgtx_5feasing',['GLM_GTX_easing',['../a00318.html',1,'']]], ['glm_5fgtx_5feuler_5fangles',['GLM_GTX_euler_angles',['../a00319.html',1,'']]], ['glm_5fgtx_5fextend',['GLM_GTX_extend',['../a00320.html',1,'']]], ['glm_5fgtx_5fextented_5fmin_5fmax',['GLM_GTX_extented_min_max',['../a00321.html',1,'']]], ['glm_5fgtx_5fexterior_5fproduct',['GLM_GTX_exterior_product',['../a00322.html',1,'']]], 
['glm_5fgtx_5ffast_5fexponential',['GLM_GTX_fast_exponential',['../a00323.html',1,'']]], ['glm_5fgtx_5ffast_5fsquare_5froot',['GLM_GTX_fast_square_root',['../a00324.html',1,'']]], ['glm_5fgtx_5ffast_5ftrigonometry',['GLM_GTX_fast_trigonometry',['../a00325.html',1,'']]], ['glm_5fgtx_5ffunctions',['GLM_GTX_functions',['../a00326.html',1,'']]], ['glm_5fgtx_5fgradient_5fpaint',['GLM_GTX_gradient_paint',['../a00327.html',1,'']]], ['glm_5fgtx_5fhanded_5fcoordinate_5fspace',['GLM_GTX_handed_coordinate_space',['../a00328.html',1,'']]], ['glm_5fgtx_5fhash',['GLM_GTX_hash',['../a00329.html',1,'']]], ['glm_5fgtx_5finteger',['GLM_GTX_integer',['../a00330.html',1,'']]], ['glm_5fgtx_5fintersect',['GLM_GTX_intersect',['../a00331.html',1,'']]], ['glm_5fgtx_5fio',['GLM_GTX_io',['../a00332.html',1,'']]], ['glm_5fgtx_5flog_5fbase',['GLM_GTX_log_base',['../a00333.html',1,'']]], ['glm_5fgtx_5fmatrix_5fcross_5fproduct',['GLM_GTX_matrix_cross_product',['../a00334.html',1,'']]], ['glm_5fgtx_5fmatrix_5fdecompose',['GLM_GTX_matrix_decompose',['../a00335.html',1,'']]], ['glm_5fgtx_5fmatrix_5ffactorisation',['GLM_GTX_matrix_factorisation',['../a00336.html',1,'']]], ['glm_5fgtx_5fmatrix_5finterpolation',['GLM_GTX_matrix_interpolation',['../a00337.html',1,'']]], ['glm_5fgtx_5fmatrix_5fmajor_5fstorage',['GLM_GTX_matrix_major_storage',['../a00338.html',1,'']]], ['glm_5fgtx_5fmatrix_5foperation',['GLM_GTX_matrix_operation',['../a00339.html',1,'']]], ['glm_5fgtx_5fmatrix_5fquery',['GLM_GTX_matrix_query',['../a00340.html',1,'']]], ['glm_5fgtx_5fmatrix_5ftransform_5f2d',['GLM_GTX_matrix_transform_2d',['../a00341.html',1,'']]], ['glm_5fgtx_5fmixed_5fproducte',['GLM_GTX_mixed_producte',['../a00342.html',1,'']]], ['glm_5fgtx_5fnorm',['GLM_GTX_norm',['../a00343.html',1,'']]], ['glm_5fgtx_5fnormal',['GLM_GTX_normal',['../a00344.html',1,'']]], ['glm_5fgtx_5fnormalize_5fdot',['GLM_GTX_normalize_dot',['../a00345.html',1,'']]], 
['glm_5fgtx_5fnumber_5fprecision',['GLM_GTX_number_precision',['../a00346.html',1,'']]], ['glm_5fgtx_5foptimum_5fpow',['GLM_GTX_optimum_pow',['../a00347.html',1,'']]], ['glm_5fgtx_5forthonormalize',['GLM_GTX_orthonormalize',['../a00348.html',1,'']]], ['glm_5fgtx_5fperpendicular',['GLM_GTX_perpendicular',['../a00349.html',1,'']]], ['glm_5fgtx_5fpolar_5fcoordinates',['GLM_GTX_polar_coordinates',['../a00350.html',1,'']]], ['glm_5fgtx_5fprojection',['GLM_GTX_projection',['../a00351.html',1,'']]], ['glm_5fgtx_5fquaternion',['GLM_GTX_quaternion',['../a00352.html',1,'']]], ['glm_5fgtx_5frange',['GLM_GTX_range',['../a00353.html',1,'']]], ['glm_5fgtx_5fraw_5fdata',['GLM_GTX_raw_data',['../a00354.html',1,'']]], ['glm_5fgtx_5frotate_5fnormalized_5faxis',['GLM_GTX_rotate_normalized_axis',['../a00355.html',1,'']]], ['glm_5fgtx_5frotate_5fvector',['GLM_GTX_rotate_vector',['../a00356.html',1,'']]], ['glm_5fgtx_5fscalar_5frelational',['GLM_GTX_scalar_relational',['../a00357.html',1,'']]], ['glm_5fgtx_5fspline',['GLM_GTX_spline',['../a00358.html',1,'']]], ['glm_5fgtx_5fstd_5fbased_5ftype',['GLM_GTX_std_based_type',['../a00359.html',1,'']]], ['glm_5fgtx_5fstring_5fcast',['GLM_GTX_string_cast',['../a00360.html',1,'']]], ['glm_5fgtx_5ftexture',['GLM_GTX_texture',['../a00361.html',1,'']]], ['glm_5fgtx_5ftransform',['GLM_GTX_transform',['../a00362.html',1,'']]], ['glm_5fgtx_5ftransform2',['GLM_GTX_transform2',['../a00363.html',1,'']]], ['glm_5fgtx_5ftype_5faligned',['GLM_GTX_type_aligned',['../a00364.html',1,'']]], ['glm_5fgtx_5ftype_5ftrait',['GLM_GTX_type_trait',['../a00365.html',1,'']]], ['glm_5fgtx_5fvec_5fswizzle',['GLM_GTX_vec_swizzle',['../a00366.html',1,'']]], ['glm_5fgtx_5fvector_5fangle',['GLM_GTX_vector_angle',['../a00367.html',1,'']]], ['glm_5fgtx_5fvector_5fquery',['GLM_GTX_vector_query',['../a00368.html',1,'']]], ['glm_5fgtx_5fwrap',['GLM_GTX_wrap',['../a00369.html',1,'']]], ['integer_2ehpp',['integer.hpp',['../a00042.html',1,'']]], 
['integer_2ehpp',['integer.hpp',['../a00041.html',1,'']]], ['matrix_5ftransform_2ehpp',['matrix_transform.hpp',['../a00109.html',1,'']]], ['packing_2ehpp',['packing.hpp',['../a00119.html',1,'']]], ['quaternion_2ehpp',['quaternion.hpp',['../a00126.html',1,'']]], ['quaternion_2ehpp',['quaternion.hpp',['../a00125.html',1,'']]], ['scalar_5frelational_2ehpp',['scalar_relational.hpp',['../a00150.html',1,'']]], ['type_5faligned_2ehpp',['type_aligned.hpp',['../a00161.html',1,'']]], ['type_5faligned_2ehpp',['type_aligned.hpp',['../a00162.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_7.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_7.js ================================================ var searchData= [ ['half_5fpi',['half_pi',['../a00290.html#ga0c36b41d462e45641faf7d7938948bac',1,'glm']]], ['handed_5fcoordinate_5fspace_2ehpp',['handed_coordinate_space.hpp',['../a00039.html',1,'']]], ['hash_2ehpp',['hash.hpp',['../a00040.html',1,'']]], ['hermite',['hermite',['../a00358.html#gaa69e143f6374d32f934a8edeaa50bac9',1,'glm']]], ['highestbitvalue',['highestBitValue',['../a00309.html#ga0dcc8fe7c3d3ad60dea409281efa3d05',1,'glm::highestBitValue(genIUType Value)'],['../a00309.html#ga898ef075ccf809a1e480faab48fe96bf',1,'glm::highestBitValue(vec< L, T, Q > const &value)']]], ['highp_5fbvec1',['highp_bvec1',['../a00266.html#gae8a1e14abae1387274f57741750c06a2',1,'glm']]], ['highp_5fbvec2',['highp_bvec2',['../a00282.html#gac6c781a85f012d77a75310a3058702c2',1,'glm']]], ['highp_5fbvec3',['highp_bvec3',['../a00282.html#gaedb70027d89a0a405046aefda4eabaa6',1,'glm']]], ['highp_5fbvec4',['highp_bvec4',['../a00282.html#gaee663ff64429443ab07a5327074192f6',1,'glm']]], ['highp_5fddualquat',['highp_ddualquat',['../a00317.html#ga8f67eafa7197d7a668dad5105a463d2a',1,'glm']]], ['highp_5fdmat2',['highp_dmat2',['../a00284.html#ga369b447bb1b312449b679ea1f90f3cea',1,'glm']]], ['highp_5fdmat2x2',['highp_dmat2x2',['../a00284.html#gae27ac20302c2e39b6c78e7fe18e62ef7',1,'glm']]], ['highp_5fdmat2x3',['highp_dmat2x3',['../a00284.html#gad4689ec33bc2c26e10132b174b49001a',1,'glm']]], ['highp_5fdmat2x4',['highp_dmat2x4',['../a00284.html#ga5ceeb46670fdc000a0701910cc5061c9',1,'glm']]], ['highp_5fdmat3',['highp_dmat3',['../a00284.html#ga86d6d4dbad92ffdcc759773340e15a97',1,'glm']]], ['highp_5fdmat3x2',['highp_dmat3x2',['../a00284.html#ga3647309010a2160e9ec89bc6f7c95c35',1,'glm']]], ['highp_5fdmat3x3',['highp_dmat3x3',['../a00284.html#gae367ea93c4ad8a7c101dd27b8b2b04ce',1,'glm']]], 
['highp_5fdmat3x4',['highp_dmat3x4',['../a00284.html#ga6543eeeb64f48d79a0b96484308c50f0',1,'glm']]], ['highp_5fdmat4',['highp_dmat4',['../a00284.html#ga945254f459860741138bceb74da496b9',1,'glm']]], ['highp_5fdmat4x2',['highp_dmat4x2',['../a00284.html#gaeda1f474c668eaecc443bea85a4a4eca',1,'glm']]], ['highp_5fdmat4x3',['highp_dmat4x3',['../a00284.html#gacf237c2d8832fe8db2d7e187585d34bd',1,'glm']]], ['highp_5fdmat4x4',['highp_dmat4x4',['../a00284.html#ga118d24a3d12c034e7cccef7bf2f01b8a',1,'glm']]], ['highp_5fdquat',['highp_dquat',['../a00250.html#gaf13a25f41afc03480b40fc71bd249cec',1,'glm']]], ['highp_5fdualquat',['highp_dualquat',['../a00317.html#ga9ef5bf1da52a9d4932335a517086ceaf',1,'glm']]], ['highp_5fdvec1',['highp_dvec1',['../a00269.html#ga77c22c4426da3a6865c88d3fc907e3fe',1,'glm']]], ['highp_5fdvec2',['highp_dvec2',['../a00282.html#gab98d77cca255914f5e29697fcbc2d975',1,'glm']]], ['highp_5fdvec3',['highp_dvec3',['../a00282.html#gab24dc20dcdc5b71282634bdbf6b70105',1,'glm']]], ['highp_5fdvec4',['highp_dvec4',['../a00282.html#gab654f4ed4a99d64a6cfc65320c2a7590',1,'glm']]], ['highp_5ff32',['highp_f32',['../a00304.html#ga6906e1ef0b34064b4b675489c5c38725',1,'glm']]], ['highp_5ff32mat2',['highp_f32mat2',['../a00304.html#ga298f7d4d273678d0282812368da27fda',1,'glm']]], ['highp_5ff32mat2x2',['highp_f32mat2x2',['../a00304.html#gae5eb02d92b7d4605a4b7f37ae5cb2968',1,'glm']]], ['highp_5ff32mat2x3',['highp_f32mat2x3',['../a00304.html#ga0aeb5cb001473b08c88175012708a379',1,'glm']]], ['highp_5ff32mat2x4',['highp_f32mat2x4',['../a00304.html#ga88938ee1e7981fa3402e88da6ad74531',1,'glm']]], ['highp_5ff32mat3',['highp_f32mat3',['../a00304.html#ga24f9ef3263b1638564713892cc37981f',1,'glm']]], ['highp_5ff32mat3x2',['highp_f32mat3x2',['../a00304.html#ga36537e701456f12c20e73f469cac4967',1,'glm']]], ['highp_5ff32mat3x3',['highp_f32mat3x3',['../a00304.html#gaab691ae40c37976d268d8cac0096e0e1',1,'glm']]], 
['highp_5ff32mat3x4',['highp_f32mat3x4',['../a00304.html#gaa5086dbd6efb272d13fc88829330861d',1,'glm']]], ['highp_5ff32mat4',['highp_f32mat4',['../a00304.html#ga14c90ca49885723f51d06e295587236f',1,'glm']]], ['highp_5ff32mat4x2',['highp_f32mat4x2',['../a00304.html#ga602e119c6b246b4f6edcf66845f2aa0f',1,'glm']]], ['highp_5ff32mat4x3',['highp_f32mat4x3',['../a00304.html#ga66bffdd8e5c0d3ef9958bbab9ca1ba59',1,'glm']]], ['highp_5ff32mat4x4',['highp_f32mat4x4',['../a00304.html#gaf1b712b97b2322685fbbed28febe5f84',1,'glm']]], ['highp_5ff32quat',['highp_f32quat',['../a00304.html#ga4252cf7f5b0e3cd47c3d3badf0ef43b3',1,'glm']]], ['highp_5ff32vec1',['highp_f32vec1',['../a00304.html#gab1b1c9e8667902b78b2c330e4d383a61',1,'glm']]], ['highp_5ff32vec2',['highp_f32vec2',['../a00304.html#ga0b8ebd4262331e139ff257d7cf2a4b77',1,'glm']]], ['highp_5ff32vec3',['highp_f32vec3',['../a00304.html#ga522775dbcc6d96246a1c5cf02344fd8c',1,'glm']]], ['highp_5ff32vec4',['highp_f32vec4',['../a00304.html#ga0f038d4e09862a74f03d102c59eda73e',1,'glm']]], ['highp_5ff64',['highp_f64',['../a00304.html#ga51d5266017d88f62737c1973923a7cf4',1,'glm']]], ['highp_5ff64mat2',['highp_f64mat2',['../a00304.html#gaf7adb92ce8de0afaff01436b039fd924',1,'glm']]], ['highp_5ff64mat2x2',['highp_f64mat2x2',['../a00304.html#ga773ea237a051827cfc20de960bc73ff0',1,'glm']]], ['highp_5ff64mat2x3',['highp_f64mat2x3',['../a00304.html#ga8342c7469384c6d769cacc9e309278d9',1,'glm']]], ['highp_5ff64mat2x4',['highp_f64mat2x4',['../a00304.html#ga5a67a7440b9c0d1538533540f99036a5',1,'glm']]], ['highp_5ff64mat3',['highp_f64mat3',['../a00304.html#ga609bf0ace941d6ab1bb2f9522a04e546',1,'glm']]], ['highp_5ff64mat3x2',['highp_f64mat3x2',['../a00304.html#ga5bdbfb4ce7d05ce1e1b663f50be17e8a',1,'glm']]], ['highp_5ff64mat3x3',['highp_f64mat3x3',['../a00304.html#ga7c2cadb9b85cc7e0d125db21ca19dea4',1,'glm']]], ['highp_5ff64mat3x4',['highp_f64mat3x4',['../a00304.html#gad310b1dddeec9ec837a104e7db8de580',1,'glm']]], 
['highp_5ff64mat4',['highp_f64mat4',['../a00304.html#gad308e0ed27d64daa4213fb257fcbd5a5',1,'glm']]], ['highp_5ff64mat4x2',['highp_f64mat4x2',['../a00304.html#ga58c4631421e323e252fc716b6103e38c',1,'glm']]], ['highp_5ff64mat4x3',['highp_f64mat4x3',['../a00304.html#gae94823d65648e44d972863c6caa13103',1,'glm']]], ['highp_5ff64mat4x4',['highp_f64mat4x4',['../a00304.html#ga09a2374b725c4246d263ee36fb66434c',1,'glm']]], ['highp_5ff64quat',['highp_f64quat',['../a00304.html#gafcfdd74a115163af2ce1093551747352',1,'glm']]], ['highp_5ff64vec1',['highp_f64vec1',['../a00304.html#ga62c31b133ceee9984fbee05ac4c434a9',1,'glm']]], ['highp_5ff64vec2',['highp_f64vec2',['../a00304.html#ga670ea1b0a1172bc73b1d7c1e0c26cce2',1,'glm']]], ['highp_5ff64vec3',['highp_f64vec3',['../a00304.html#gacd1196090ece7a69fb5c3e43a7d4d851',1,'glm']]], ['highp_5ff64vec4',['highp_f64vec4',['../a00304.html#ga61185c44c8cc0b25d9a0f67d8a267444',1,'glm']]], ['highp_5ffdualquat',['highp_fdualquat',['../a00317.html#ga4c4e55e9c99dc57b299ed590968da564',1,'glm']]], ['highp_5ffloat32',['highp_float32',['../a00304.html#gac5a7f21136e0a78d0a1b9f60ef2f8aea',1,'glm']]], ['highp_5ffloat32_5ft',['highp_float32_t',['../a00304.html#ga5376ef18dca9d248897c3363ef5a06b2',1,'glm']]], ['highp_5ffloat64',['highp_float64',['../a00304.html#gadbb198a4d7aad82a0f4dc466ef6f6215',1,'glm']]], ['highp_5ffloat64_5ft',['highp_float64_t',['../a00304.html#gaaeeb0077198cff40e3f48b1108ece139',1,'glm']]], ['highp_5ffmat2',['highp_fmat2',['../a00304.html#gae98c88d9a7befa9b5877f49176225535',1,'glm']]], ['highp_5ffmat2x2',['highp_fmat2x2',['../a00304.html#ga28635abcddb2f3e92c33c3f0fcc682ad',1,'glm']]], ['highp_5ffmat2x3',['highp_fmat2x3',['../a00304.html#gacf111095594996fef29067b2454fccad',1,'glm']]], ['highp_5ffmat2x4',['highp_fmat2x4',['../a00304.html#ga4920a1536f161f7ded1d6909b7fef0d2',1,'glm']]], ['highp_5ffmat3',['highp_fmat3',['../a00304.html#gaed2dc69e0d507d4191092dbd44b3eb75',1,'glm']]], 
['highp_5ffmat3x2',['highp_fmat3x2',['../a00304.html#gae54e4d1aeb5a0f0c64822e6f1b299e19',1,'glm']]], ['highp_5ffmat3x3',['highp_fmat3x3',['../a00304.html#gaa5b44d3ef6efcf33f44876673a7a936e',1,'glm']]], ['highp_5ffmat3x4',['highp_fmat3x4',['../a00304.html#ga961fac2a885907ffcf4d40daac6615c5',1,'glm']]], ['highp_5ffmat4',['highp_fmat4',['../a00304.html#gabf28443ce0cc0959077ec39b21f32c39',1,'glm']]], ['highp_5ffmat4x2',['highp_fmat4x2',['../a00304.html#ga076961cf2d120c7168b957cb2ed107b3',1,'glm']]], ['highp_5ffmat4x3',['highp_fmat4x3',['../a00304.html#gae406ec670f64170a7437b5e302eeb2cb',1,'glm']]], ['highp_5ffmat4x4',['highp_fmat4x4',['../a00304.html#gaee80c7cd3caa0f2635058656755f6f69',1,'glm']]], ['highp_5ffvec1',['highp_fvec1',['../a00304.html#gaa1040342c4efdedc8f90e6267db8d41c',1,'glm']]], ['highp_5ffvec2',['highp_fvec2',['../a00304.html#ga7c0d196f5fa79f7e892a2f323a0be1ae',1,'glm']]], ['highp_5ffvec3',['highp_fvec3',['../a00304.html#ga6ef77413883f48d6b53b4169b25edbd0',1,'glm']]], ['highp_5ffvec4',['highp_fvec4',['../a00304.html#ga8b839abbb44f5102609eed89f6ed61f7',1,'glm']]], ['highp_5fi16',['highp_i16',['../a00304.html#ga0336abc2604dd2c20c30e036454b64f8',1,'glm']]], ['highp_5fi16vec1',['highp_i16vec1',['../a00304.html#ga70fdfcc1fd38084bde83c3f06a8b9f19',1,'glm']]], ['highp_5fi16vec2',['highp_i16vec2',['../a00304.html#gaa7db3ad10947cf70cae6474d05ebd227',1,'glm']]], ['highp_5fi16vec3',['highp_i16vec3',['../a00304.html#ga5609c8fa2b7eac3dec337d321cb0ca96',1,'glm']]], ['highp_5fi16vec4',['highp_i16vec4',['../a00304.html#ga7a18659438828f91ccca28f1a1e067b4',1,'glm']]], ['highp_5fi32',['highp_i32',['../a00304.html#ga727675ac6b5d2fc699520e0059735e25',1,'glm']]], ['highp_5fi32vec1',['highp_i32vec1',['../a00304.html#ga6a9d71cc62745302f70422b7dc98755c',1,'glm']]], ['highp_5fi32vec2',['highp_i32vec2',['../a00304.html#gaa9b4579f8e6f3d9b649a965bcb785530',1,'glm']]], ['highp_5fi32vec3',['highp_i32vec3',['../a00304.html#ga31e070ea3bdee623e6e18a61ba5718b1',1,'glm']]], 
['highp_5fi32vec4',['highp_i32vec4',['../a00304.html#gadf70eaaa230aeed5a4c9f4c9c5c55902',1,'glm']]], ['highp_5fi64',['highp_i64',['../a00304.html#gac25db6d2b1e2a0f351b77ba3409ac4cd',1,'glm']]], ['highp_5fi64vec1',['highp_i64vec1',['../a00304.html#gabd2fda3cd208acf5a370ec9b5b3c58d4',1,'glm']]], ['highp_5fi64vec2',['highp_i64vec2',['../a00304.html#gad9d1903cb20899966e8ebe0670889a5f',1,'glm']]], ['highp_5fi64vec3',['highp_i64vec3',['../a00304.html#ga62324224b9c6cce9c6b4db96bb704a8a',1,'glm']]], ['highp_5fi64vec4',['highp_i64vec4',['../a00304.html#gad23b1be9b3bf20352089a6b738f0ebba',1,'glm']]], ['highp_5fi8',['highp_i8',['../a00304.html#gacb88796f2d08ef253d0345aff20c3aee',1,'glm']]], ['highp_5fi8vec1',['highp_i8vec1',['../a00304.html#ga1d8c10949691b0fd990253476f47beb3',1,'glm']]], ['highp_5fi8vec2',['highp_i8vec2',['../a00304.html#ga50542e4cb9b2f9bec213b66e06145d07',1,'glm']]], ['highp_5fi8vec3',['highp_i8vec3',['../a00304.html#ga8396bfdc081d9113190d0c39c9f67084',1,'glm']]], ['highp_5fi8vec4',['highp_i8vec4',['../a00304.html#ga4824e3ddf6e608117dfe4809430737b4',1,'glm']]], ['highp_5fimat2',['highp_imat2',['../a00294.html#ga8499cc3b016003f835314c1c756e9db9',1,'glm']]], ['highp_5fimat2x2',['highp_imat2x2',['../a00294.html#gaa389e2d1c3b10941cae870bc0aeba5b3',1,'glm']]], ['highp_5fimat2x3',['highp_imat2x3',['../a00294.html#gaba49d890e06c9444795f5a133fbf1336',1,'glm']]], ['highp_5fimat2x4',['highp_imat2x4',['../a00294.html#ga05a970fd4366dad6c8a0be676b1eae5b',1,'glm']]], ['highp_5fimat3',['highp_imat3',['../a00294.html#gaca4506a3efa679eff7c006d9826291fd',1,'glm']]], ['highp_5fimat3x2',['highp_imat3x2',['../a00294.html#ga91c671c3ff9706c2393e78b22fd84bcb',1,'glm']]], ['highp_5fimat3x3',['highp_imat3x3',['../a00294.html#ga07d7b7173e2a6f843ff5f1c615a95b41',1,'glm']]], ['highp_5fimat3x4',['highp_imat3x4',['../a00294.html#ga53008f580be99018a17b357b5a4ffc0d',1,'glm']]], ['highp_5fimat4',['highp_imat4',['../a00294.html#ga7cfb09b34e0fcf73eaf6512d6483ef56',1,'glm']]], 
['highp_5fimat4x2',['highp_imat4x2',['../a00294.html#ga1858820fb292cae396408b2034407f72',1,'glm']]], ['highp_5fimat4x3',['highp_imat4x3',['../a00294.html#ga6be0b80ae74bb309bc5b964d93d68fc5',1,'glm']]], ['highp_5fimat4x4',['highp_imat4x4',['../a00294.html#ga2c783ee6f8f040ab37df2f70392c8b44',1,'glm']]], ['highp_5fint16',['highp_int16',['../a00304.html#ga5fde0fa4a3852a9dd5d637a92ee74718',1,'glm']]], ['highp_5fint16_5ft',['highp_int16_t',['../a00304.html#gacaea06d0a79ef3172e887a7a6ba434ff',1,'glm']]], ['highp_5fint32',['highp_int32',['../a00304.html#ga84ed04b4e0de18c977e932d617e7c223',1,'glm']]], ['highp_5fint32_5ft',['highp_int32_t',['../a00304.html#ga2c71c8bd9e2fe7d2e93ca250d8b6157f',1,'glm']]], ['highp_5fint64',['highp_int64',['../a00304.html#ga226a8d52b4e3f77aaa6231135e886aac',1,'glm']]], ['highp_5fint64_5ft',['highp_int64_t',['../a00304.html#ga73c6abb280a45feeff60f9accaee91f3',1,'glm']]], ['highp_5fint8',['highp_int8',['../a00304.html#gad0549c902a96a7164e4ac858d5f39dbf',1,'glm']]], ['highp_5fint8_5ft',['highp_int8_t',['../a00304.html#ga1085c50dd8fbeb5e7e609b1c127492a5',1,'glm']]], ['highp_5fivec1',['highp_ivec1',['../a00273.html#ga7e02566f2bd2caa68e61be45a477c77e',1,'glm']]], ['highp_5fivec2',['highp_ivec2',['../a00282.html#gaa18f6b80b41c214f10666948539c1f93',1,'glm']]], ['highp_5fivec3',['highp_ivec3',['../a00282.html#ga7dd782c3ef5719bc6d5c3ca826b8ad18',1,'glm']]], ['highp_5fivec4',['highp_ivec4',['../a00282.html#gafb84dccdf5d82443df3ffc8428dcaf3e',1,'glm']]], ['highp_5fmat2',['highp_mat2',['../a00284.html#ga4d5a0055544a516237dcdace049b143d',1,'glm']]], ['highp_5fmat2x2',['highp_mat2x2',['../a00284.html#ga2352ae43b284c9f71446674c0208c05d',1,'glm']]], ['highp_5fmat2x3',['highp_mat2x3',['../a00284.html#ga7a0e3fe41512b0494e598f5c58722f19',1,'glm']]], ['highp_5fmat2x4',['highp_mat2x4',['../a00284.html#ga61f36a81f2ed1b5f9fc8bc3b26faec8f',1,'glm']]], ['highp_5fmat3',['highp_mat3',['../a00284.html#ga3fd9849f3da5ed6e3decc3fb10a20b3e',1,'glm']]], 
['highp_5fmat3x2',['highp_mat3x2',['../a00284.html#ga1eda47a00027ec440eac05d63739c71b',1,'glm']]], ['highp_5fmat3x3',['highp_mat3x3',['../a00284.html#ga2ea82e12f4d7afcfce8f59894d400230',1,'glm']]], ['highp_5fmat3x4',['highp_mat3x4',['../a00284.html#ga6454b3a26ea30f69de8e44c08a63d1b7',1,'glm']]], ['highp_5fmat4',['highp_mat4',['../a00284.html#gad72e13d669d039f12ae5afa23148adc1',1,'glm']]], ['highp_5fmat4x2',['highp_mat4x2',['../a00284.html#gab68b66e6d2c37b804d0baf970fa4f0e5',1,'glm']]], ['highp_5fmat4x3',['highp_mat4x3',['../a00284.html#ga8d5a4e65fb976e4553b84995b95ecb38',1,'glm']]], ['highp_5fmat4x4',['highp_mat4x4',['../a00284.html#ga58cc504be0e3b61c48bc91554a767b9f',1,'glm']]], ['highp_5fquat',['highp_quat',['../a00253.html#gaa2fd8085774376310aeb80588e0eab6e',1,'glm']]], ['highp_5fu16',['highp_u16',['../a00304.html#ga8e62c883d13f47015f3b70ed88751369',1,'glm']]], ['highp_5fu16vec1',['highp_u16vec1',['../a00304.html#gad064202b4cf9a2972475c03de657cb39',1,'glm']]], ['highp_5fu16vec2',['highp_u16vec2',['../a00304.html#ga791b15ceb3f1e09d1a0ec6f3057ca159',1,'glm']]], ['highp_5fu16vec3',['highp_u16vec3',['../a00304.html#gacfd806749008f0ade6ac4bb9dd91082f',1,'glm']]], ['highp_5fu16vec4',['highp_u16vec4',['../a00304.html#ga8a85a3d54a8a9e14fe7a1f96196c4f61',1,'glm']]], ['highp_5fu32',['highp_u32',['../a00304.html#ga7a6f1929464dcc680b16381a4ee5f2cf',1,'glm']]], ['highp_5fu32vec1',['highp_u32vec1',['../a00304.html#ga0e35a565b9036bfc3989f5e23a0792e3',1,'glm']]], ['highp_5fu32vec2',['highp_u32vec2',['../a00304.html#ga2f256334f83fba4c2d219e414b51df6c',1,'glm']]], ['highp_5fu32vec3',['highp_u32vec3',['../a00304.html#gaf14d7a50502464e7cbfa074f24684cb1',1,'glm']]], ['highp_5fu32vec4',['highp_u32vec4',['../a00304.html#ga22166f0da65038b447f3c5e534fff1c2',1,'glm']]], ['highp_5fu64',['highp_u64',['../a00304.html#ga0c181fdf06a309691999926b6690c969',1,'glm']]], ['highp_5fu64vec1',['highp_u64vec1',['../a00304.html#gae4fe774744852c4d7d069be2e05257ab',1,'glm']]], 
['highp_5fu64vec2',['highp_u64vec2',['../a00304.html#ga78f77b8b2d17b431ac5a68c0b5d7050d',1,'glm']]], ['highp_5fu64vec3',['highp_u64vec3',['../a00304.html#ga41bdabea6e589029659331ba47eb78c1',1,'glm']]], ['highp_5fu64vec4',['highp_u64vec4',['../a00304.html#ga4f15b41aa24b11cc42ad5798c04a2325',1,'glm']]], ['highp_5fu8',['highp_u8',['../a00304.html#gacd1259f3a9e8d2a9df5be2d74322ef9c',1,'glm']]], ['highp_5fu8vec1',['highp_u8vec1',['../a00304.html#ga8408cb76b6550ff01fa0a3024e7b68d2',1,'glm']]], ['highp_5fu8vec2',['highp_u8vec2',['../a00304.html#ga27585b7c3ab300059f11fcba465f6fd2',1,'glm']]], ['highp_5fu8vec3',['highp_u8vec3',['../a00304.html#ga45721c13b956eb691cbd6c6c1429167a',1,'glm']]], ['highp_5fu8vec4',['highp_u8vec4',['../a00304.html#gae0b75ad0fed8c00ddc0b5ce335d31060',1,'glm']]], ['highp_5fuint16',['highp_uint16',['../a00304.html#ga746dc6da204f5622e395f492997dbf57',1,'glm']]], ['highp_5fuint16_5ft',['highp_uint16_t',['../a00304.html#gacf54c3330ef60aa3d16cb676c7bcb8c7',1,'glm']]], ['highp_5fuint32',['highp_uint32',['../a00304.html#ga256b12b650c3f2fb86878fd1c5db8bc3',1,'glm']]], ['highp_5fuint32_5ft',['highp_uint32_t',['../a00304.html#gae978599c9711ac263ba732d4ac225b0e',1,'glm']]], ['highp_5fuint64',['highp_uint64',['../a00304.html#gaa38d732f5d4a7bc42a1b43b9d3c141ce',1,'glm']]], ['highp_5fuint64_5ft',['highp_uint64_t',['../a00304.html#gaa46172d7dc1c7ffe3e78107ff88adf08',1,'glm']]], ['highp_5fuint8',['highp_uint8',['../a00304.html#ga97432f9979e73e66567361fd01e4cffb',1,'glm']]], ['highp_5fuint8_5ft',['highp_uint8_t',['../a00304.html#gac4e00a26a2adb5f2c0a7096810df29e5',1,'glm']]], ['highp_5fumat2',['highp_umat2',['../a00294.html#ga42cbce64c4c1cd121b8437daa6e110de',1,'glm']]], ['highp_5fumat2x2',['highp_umat2x2',['../a00294.html#ga5337b7bc95f9cbac08a0c00b3f936b28',1,'glm']]], ['highp_5fumat2x3',['highp_umat2x3',['../a00294.html#ga90718c7128320b24b52f9ea70e643ad4',1,'glm']]], 
['highp_5fumat2x4',['highp_umat2x4',['../a00294.html#gadca0a4724b4a6f56a2355b6f6e19248b',1,'glm']]], ['highp_5fumat3',['highp_umat3',['../a00294.html#gaa1143120339b7d2d469d327662e8a172',1,'glm']]], ['highp_5fumat3x2',['highp_umat3x2',['../a00294.html#ga844a5da2e7fc03fc7cccc7f1b70809c4',1,'glm']]], ['highp_5fumat3x3',['highp_umat3x3',['../a00294.html#ga1f7d41c36b980774a4d2e7c1647fb4b2',1,'glm']]], ['highp_5fumat3x4',['highp_umat3x4',['../a00294.html#ga25ee15c323924f2d0fe9896d329e5086',1,'glm']]], ['highp_5fumat4',['highp_umat4',['../a00294.html#gaf665e4e78c2cc32a54ab40325738f9c9',1,'glm']]], ['highp_5fumat4x2',['highp_umat4x2',['../a00294.html#gae69eb82ec08b0dc9bf2ead2a339ff801',1,'glm']]], ['highp_5fumat4x3',['highp_umat4x3',['../a00294.html#ga45a8163d02c43216252056b0c120f3a5',1,'glm']]], ['highp_5fumat4x4',['highp_umat4x4',['../a00294.html#ga6a56cbb769aed334c95241664415f9ba',1,'glm']]], ['highp_5fuvec1',['highp_uvec1',['../a00277.html#gacda57dd8c2bff4934c7f09ddd87c0f39',1,'glm']]], ['highp_5fuvec2',['highp_uvec2',['../a00282.html#gad5dd50da9e37387ca6b4e6f9c80fe6f8',1,'glm']]], ['highp_5fuvec3',['highp_uvec3',['../a00282.html#gaef61508dd40ec523416697982f9ceaae',1,'glm']]], ['highp_5fuvec4',['highp_uvec4',['../a00282.html#gaeebd7dd9f3e678691f8620241e5f9221',1,'glm']]], ['highp_5fvec1',['highp_vec1',['../a00271.html#ga9e8ed21862a897c156c0b2abca70b1e9',1,'glm']]], ['highp_5fvec2',['highp_vec2',['../a00282.html#gaa92c1954d71b1e7914874bd787b43d1c',1,'glm']]], ['highp_5fvec3',['highp_vec3',['../a00282.html#gaca61dfaccbf2f58f2d8063a4e76b44a9',1,'glm']]], ['highp_5fvec4',['highp_vec4',['../a00282.html#gad281decae52948b82feb3a9db8f63a7b',1,'glm']]], ['hsvcolor',['hsvColor',['../a00312.html#ga789802bec2d4fe0f9741c731b4a8a7d8',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_8.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_8.js ================================================ var searchData= [ ['integer_20functions',['Integer functions',['../a00370.html',1,'']]], ['i16',['i16',['../a00304.html#ga3ab5fe184343d394fb6c2723c3ee3699',1,'glm']]], ['i16vec1',['i16vec1',['../a00304.html#gafe730798732aa7b0647096a004db1b1c',1,'glm']]], ['i16vec2',['i16vec2',['../a00304.html#ga2996630ba7b10535af8e065cf326f761',1,'glm']]], ['i16vec3',['i16vec3',['../a00304.html#gae9c90a867a6026b1f6eab00456f3fb8b',1,'glm']]], ['i16vec4',['i16vec4',['../a00304.html#ga550831bfc26d1e0101c1cb3d79938c06',1,'glm']]], ['i32',['i32',['../a00304.html#ga96faea43ac5f875d2d3ffbf8d213e3eb',1,'glm']]], ['i32vec1',['i32vec1',['../a00304.html#ga54b8a4e0f5a7203a821bf8e9c1265bcf',1,'glm']]], ['i32vec2',['i32vec2',['../a00304.html#ga8b44026374982dcd1e52d22bac99247e',1,'glm']]], ['i32vec3',['i32vec3',['../a00304.html#ga7f526b5cccef126a2ebcf9bdd890394e',1,'glm']]], ['i32vec4',['i32vec4',['../a00304.html#ga866a05905c49912309ed1fa5f5980e61',1,'glm']]], ['i64',['i64',['../a00304.html#gadb997e409103d4da18abd837e636a496',1,'glm']]], ['i64vec1',['i64vec1',['../a00304.html#ga2b65767f8b5aed1bd1cf86c541662b50',1,'glm']]], ['i64vec2',['i64vec2',['../a00304.html#ga48310188e1d0c616bf8d78c92447523b',1,'glm']]], ['i64vec3',['i64vec3',['../a00304.html#ga667948cfe6fb3d6606c750729ec49f77',1,'glm']]], ['i64vec4',['i64vec4',['../a00304.html#gaa4e31c3d9de067029efeb161a44b0232',1,'glm']]], ['i8',['i8',['../a00304.html#ga302ec977b0c0c3ea245b6c9275495355',1,'glm']]], ['i8vec1',['i8vec1',['../a00304.html#ga7e80d927ff0a3861ced68dfff8a4020b',1,'glm']]], ['i8vec2',['i8vec2',['../a00304.html#gad06935764d78f43f9d542c784c2212ec',1,'glm']]], ['i8vec3',['i8vec3',['../a00304.html#ga5a08d36cf7917cd19d081a603d0eae3e',1,'glm']]], ['i8vec4',['i8vec4',['../a00304.html#ga4177a44206121dabc8c4ff1c0f544574',1,'glm']]], 
['identity',['identity',['../a00247.html#ga81696f2b8d1db02ea1aff8da8f269314',1,'glm']]], ['imat2',['imat2',['../a00294.html#gaabe04f9948d4a213bb1c20137de03e01',1,'glm']]], ['imat2x2',['imat2x2',['../a00294.html#gaa4732a240522ad9bc28144fda2fc14ec',1,'glm']]], ['imat2x3',['imat2x3',['../a00294.html#ga3f42dd3d5d94a0fd5706f7ec8dd0c605',1,'glm']]], ['imat2x4',['imat2x4',['../a00294.html#ga9d8faafdca42583d67e792dd038fc668',1,'glm']]], ['imat3',['imat3',['../a00294.html#ga038f68437155ffa3c2583a15264a8195',1,'glm']]], ['imat3x2',['imat3x2',['../a00294.html#ga7b33bbe4f12c060892bd3cc8d4cd737f',1,'glm']]], ['imat3x3',['imat3x3',['../a00294.html#ga6aacc960f62e8f7d2fe9d32d5050e7a4',1,'glm']]], ['imat3x4',['imat3x4',['../a00294.html#ga6e9ce23496d8b08dfc302d4039694b58',1,'glm']]], ['imat4',['imat4',['../a00294.html#ga96b0d26a33b81bb6a60ca0f39682f7eb',1,'glm']]], ['imat4x2',['imat4x2',['../a00294.html#ga8ce7ef51d8b2c1901fa5414deccbc3fa',1,'glm']]], ['imat4x3',['imat4x3',['../a00294.html#ga705ee0bf49d6c3de4404ce2481bf0df5',1,'glm']]], ['imat4x4',['imat4x4',['../a00294.html#ga43ed5e4f475b6f4cad7cba78f29c405b',1,'glm']]], ['imulextended',['imulExtended',['../a00370.html#gac0c510a70e852f57594a9141848642e3',1,'glm']]], ['infiniteperspective',['infinitePerspective',['../a00243.html#ga44fa38a18349450325cae2661bb115ca',1,'glm']]], ['infiniteperspectivelh',['infinitePerspectiveLH',['../a00243.html#ga3201b30f5b3ea0f933246d87bfb992a9',1,'glm']]], ['infiniteperspectiverh',['infinitePerspectiveRH',['../a00243.html#ga99672ffe5714ef478dab2437255fe7e1',1,'glm']]], ['int1',['int1',['../a00315.html#ga0670a2111b5e4a6410bd027fa0232fc3',1,'glm']]], ['int16',['int16',['../a00260.html#ga259fa4834387bd68627ddf37bb3ebdb9',1,'glm']]], ['int16_5ft',['int16_t',['../a00304.html#gae8f5e3e964ca2ae240adc2c0d74adede',1,'glm']]], ['int1x1',['int1x1',['../a00315.html#ga056ffe02d3a45af626f8e62221881c7a',1,'glm']]], ['int2',['int2',['../a00315.html#gafe3a8fd56354caafe24bfe1b1e3ad22a',1,'glm']]], 
['int2x2',['int2x2',['../a00315.html#ga4e5ce477c15836b21e3c42daac68554d',1,'glm']]], ['int2x3',['int2x3',['../a00315.html#ga197ded5ad8354f6b6fb91189d7a269b3',1,'glm']]], ['int2x4',['int2x4',['../a00315.html#ga2749d59a7fddbac44f34ba78e57ef807',1,'glm']]], ['int3',['int3',['../a00315.html#ga909c38a425f215a50c847145d7da09f0',1,'glm']]], ['int32',['int32',['../a00260.html#ga43d43196463bde49cb067f5c20ab8481',1,'glm']]], ['int32_5ft',['int32_t',['../a00304.html#ga042ef09ff2f0cb24a36f541bcb3a3710',1,'glm']]], ['int3x2',['int3x2',['../a00315.html#gaa4cbe16a92cf3664376c7a2fc5126aa8',1,'glm']]], ['int3x3',['int3x3',['../a00315.html#ga15c9649286f0bf431bdf9b3509580048',1,'glm']]], ['int3x4',['int3x4',['../a00315.html#gaacac46ddc7d15d0f9529d05c92946a0f',1,'glm']]], ['int4',['int4',['../a00315.html#gaecdef18c819c205aeee9f94dc93de56a',1,'glm']]], ['int4x2',['int4x2',['../a00315.html#ga97a39dd9bc7d572810d80b8467cbffa1',1,'glm']]], ['int4x3',['int4x3',['../a00315.html#gae4a2c53f14aeec9a17c2b81142b7e82d',1,'glm']]], ['int4x4',['int4x4',['../a00315.html#ga04dee1552424198b8f58b377c2ee00d8',1,'glm']]], ['int64',['int64',['../a00260.html#gaff5189f97f9e842d9636a0f240001b2e',1,'glm']]], ['int64_5ft',['int64_t',['../a00304.html#ga322a7d7d2c2c68994dc872a33de63c61',1,'glm']]], ['int8',['int8',['../a00260.html#ga1b956fe1df85f3c132b21edb4e116458',1,'glm']]], ['int8_5ft',['int8_t',['../a00304.html#ga4bf09d8838a86866b39ee6e109341645',1,'glm']]], ['intbitstofloat',['intBitsToFloat',['../a00241.html#ga4fb7c21c2dce064b26fd9ccdaf9adcd4',1,'glm::intBitsToFloat(int const &v)'],['../a00241.html#ga7a0a8291a1cf3e1c2aee33030a1bd7b0',1,'glm::intBitsToFloat(vec< L, int, Q > const &v)']]], ['integer_2ehpp',['integer.hpp',['../a00043.html',1,'']]], ['intermediate',['intermediate',['../a00352.html#gacc5cd5f3e78de61d141c2355417424de',1,'glm']]], ['interpolate',['interpolate',['../a00337.html#ga4e67863d150724b10c1ac00972dc958c',1,'glm']]], ['intersect_2ehpp',['intersect.hpp',['../a00044.html',1,'']]], 
['intersectlinesphere',['intersectLineSphere',['../a00331.html#ga9c68139f3d8a4f3d7fe45f9dbc0de5b7',1,'glm']]], ['intersectlinetriangle',['intersectLineTriangle',['../a00331.html#ga9d29b9b3acb504d43986502f42740df4',1,'glm']]], ['intersectrayplane',['intersectRayPlane',['../a00331.html#gad3697a9700ea379739a667ea02573488',1,'glm']]], ['intersectraysphere',['intersectRaySphere',['../a00331.html#gac88f8cd84c4bcb5b947d56acbbcfa56e',1,'glm::intersectRaySphere(genType const &rayStarting, genType const &rayNormalizedDirection, genType const &sphereCenter, typename genType::value_type const sphereRadiusSquared, typename genType::value_type &intersectionDistance)'],['../a00331.html#gad28c00515b823b579c608aafa1100c1d',1,'glm::intersectRaySphere(genType const &rayStarting, genType const &rayNormalizedDirection, genType const &sphereCenter, const typename genType::value_type sphereRadius, genType &intersectionPosition, genType &intersectionNormal)']]], ['intersectraytriangle',['intersectRayTriangle',['../a00331.html#ga65bf2c594482f04881c36bc761f9e946',1,'glm']]], ['inverse',['inverse',['../a00248.html#gab41da854ae678e23e114b598cbca4065',1,'glm::inverse(qua< T, Q > const &q)'],['../a00317.html#ga070f521a953f6461af4ab4cf8ccbf27e',1,'glm::inverse(tdualquat< T, Q > const &q)'],['../a00371.html#gaed509fe8129b01e4f20a6d0de5690091',1,'glm::inverse(mat< C, R, T, Q > const &m)']]], ['inversesqrt',['inversesqrt',['../a00242.html#ga523dd6bd0ad9f75ae2d24c8e4b017b7a',1,'glm']]], ['inversetranspose',['inverseTranspose',['../a00295.html#gab213cd0e3ead5f316d583f99d6312008',1,'glm']]], ['io_2ehpp',['io.hpp',['../a00045.html',1,'']]], ['iround',['iround',['../a00292.html#ga57824268ebe13a922f1d69a5d37f637f',1,'glm']]], ['iscompnull',['isCompNull',['../a00368.html#gaf6ec1688eab7442fe96fe4941d5d4e76',1,'glm']]], ['isdenormal',['isdenormal',['../a00314.html#ga74aa7c7462245d83bd5a9edf9c6c2d91',1,'glm']]], 
['isfinite',['isfinite',['../a00315.html#gaf4b04dcd3526996d68c1bfe17bfc8657',1,'glm::isfinite(genType const &x)'],['../a00315.html#gac3b12b8ac3014418fe53c299478b6603',1,'glm::isfinite(const vec< 1, T, Q > &x)'],['../a00315.html#ga8e76dc3e406ce6a4155c2b12a2e4b084',1,'glm::isfinite(const vec< 2, T, Q > &x)'],['../a00315.html#ga929ef27f896d902c1771a2e5e150fc97',1,'glm::isfinite(const vec< 3, T, Q > &x)'],['../a00315.html#ga19925badbe10ce61df1d0de00be0b5ad',1,'glm::isfinite(const vec< 4, T, Q > &x)']]], ['isidentity',['isIdentity',['../a00340.html#gaee935d145581c82e82b154ccfd78ad91',1,'glm']]], ['isinf',['isinf',['../a00241.html#ga2885587c23a106301f20443896365b62',1,'glm::isinf(vec< L, T, Q > const &x)'],['../a00248.html#ga45722741ea266b4e861938b365c5f362',1,'glm::isinf(qua< T, Q > const &x)']]], ['ismultiple',['isMultiple',['../a00261.html#gaec593d33956a8fe43f78fccc63ddde9a',1,'glm::isMultiple(genIUType v, genIUType Multiple)'],['../a00274.html#ga354caf634ef333d9cb4844407416256a',1,'glm::isMultiple(vec< L, T, Q > const &v, T Multiple)'],['../a00274.html#gabb4360e38c0943d8981ba965dead519d',1,'glm::isMultiple(vec< L, T, Q > const &v, vec< L, T, Q > const &Multiple)']]], ['isnan',['isnan',['../a00241.html#ga29ef934c00306490de837b4746b4e14d',1,'glm::isnan(vec< L, T, Q > const &x)'],['../a00248.html#ga1bb55f8963616502e96dc564384d8a03',1,'glm::isnan(qua< T, Q > const &x)']]], ['isnormalized',['isNormalized',['../a00340.html#gae785af56f47ce220a1609f7f84aa077a',1,'glm::isNormalized(mat< 2, 2, T, Q > const &m, T const &epsilon)'],['../a00340.html#gaa068311695f28f5f555f5f746a6a66fb',1,'glm::isNormalized(mat< 3, 3, T, Q > const &m, T const &epsilon)'],['../a00340.html#ga4d9bb4d0465df49fedfad79adc6ce4ad',1,'glm::isNormalized(mat< 4, 4, T, Q > const &m, T const &epsilon)'],['../a00368.html#gac3c974f459fd75453134fad7ae89a39e',1,'glm::isNormalized(vec< L, T, Q > const &v, T const &epsilon)']]], 
['isnull',['isNull',['../a00340.html#ga9790ec222ce948c0ff0d8ce927340dba',1,'glm::isNull(mat< 2, 2, T, Q > const &m, T const &epsilon)'],['../a00340.html#gae14501c6b14ccda6014cc5350080103d',1,'glm::isNull(mat< 3, 3, T, Q > const &m, T const &epsilon)'],['../a00340.html#ga2b98bb30a9fefa7cdea5f1dcddba677b',1,'glm::isNull(mat< 4, 4, T, Q > const &m, T const &epsilon)'],['../a00368.html#gab4a3637dbcb4bb42dc55caea7a1e0495',1,'glm::isNull(vec< L, T, Q > const &v, T const &epsilon)']]], ['isorthogonal',['isOrthogonal',['../a00340.html#ga58f3289f74dcab653387dd78ad93ca40',1,'glm']]], ['ispoweroftwo',['isPowerOfTwo',['../a00261.html#gadf491730354aa7da67fbe23d4d688763',1,'glm::isPowerOfTwo(genIUType v)'],['../a00274.html#gabf2b61ded7049bcb13e25164f832a290',1,'glm::isPowerOfTwo(vec< L, T, Q > const &v)']]], ['ivec1',['ivec1',['../a00272.html#gaedd0562c2e77714929d7723a7e2e0dba',1,'glm']]], ['ivec2',['ivec2',['../a00281.html#ga6f9269106d91b2d2b91bcf27cd5f5560',1,'glm']]], ['ivec3',['ivec3',['../a00281.html#gad0d784d8eee201aca362484d2daee46c',1,'glm']]], ['ivec4',['ivec4',['../a00281.html#ga5abb4603dae0ce58c595e66d9123d812',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_9.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_9.js ================================================ var searchData= [ ['l1norm',['l1Norm',['../a00343.html#gae2fc0b2aa967bebfd6a244700bff6997',1,'glm::l1Norm(vec< 3, T, Q > const &x, vec< 3, T, Q > const &y)'],['../a00343.html#ga1a7491e2037ceeb37f83ce41addfc0be',1,'glm::l1Norm(vec< 3, T, Q > const &v)']]], ['l2norm',['l2Norm',['../a00343.html#ga41340b2ef40a9307ab0f137181565168',1,'glm::l2Norm(vec< 3, T, Q > const &x, vec< 3, T, Q > const &y)'],['../a00343.html#gae288bde8f0e41fb4ed62e65137b18cba',1,'glm::l2Norm(vec< 3, T, Q > const &x)']]], ['ldexp',['ldexp',['../a00241.html#gac3010e0a0c35a1b514540f2fb579c58c',1,'glm']]], ['lefthanded',['leftHanded',['../a00328.html#ga6f1bad193b9a3b048543d1935cf04dd3',1,'glm']]], ['length',['length',['../a00254.html#gab703732449be6c7199369b3f9a91ed38',1,'glm::length(qua< T, Q > const &q)'],['../a00279.html#ga0cdabbb000834d994a1d6dc56f8f5263',1,'glm::length(vec< L, T, Q > const &x)']]], ['length2',['length2',['../a00343.html#ga8d1789651050adb7024917984b41c3de',1,'glm::length2(vec< L, T, Q > const &x)'],['../a00352.html#ga58a609b1b8ab965f5df2702e8ca4e75b',1,'glm::length2(qua< T, Q > const &q)']]], ['lerp',['lerp',['../a00248.html#ga6033dc0741051fa463a0a147ba29f293',1,'glm::lerp(qua< T, Q > const &x, qua< T, Q > const &y, T a)'],['../a00315.html#ga5494ba3a95ea6594c86fc75236886864',1,'glm::lerp(T x, T y, T a)'],['../a00315.html#gaa551c0a0e16d2d4608e49f7696df897f',1,'glm::lerp(const vec< 2, T, Q > &x, const vec< 2, T, Q > &y, T a)'],['../a00315.html#ga44a8b5fd776320f1713413dec959b32a',1,'glm::lerp(const vec< 3, T, Q > &x, const vec< 3, T, Q > &y, T a)'],['../a00315.html#ga89ac8e000199292ec7875519d27e214b',1,'glm::lerp(const vec< 4, T, Q > &x, const vec< 4, T, Q > &y, T a)'],['../a00315.html#gaf68de5baf72d16135368b8ef4f841604',1,'glm::lerp(const vec< 2, T, Q > &x, const vec< 2, T, Q > &y, const 
vec< 2, T, Q > &a)'],['../a00315.html#ga4ae1a616c8540a2649eab8e0cd051bb3',1,'glm::lerp(const vec< 3, T, Q > &x, const vec< 3, T, Q > &y, const vec< 3, T, Q > &a)'],['../a00315.html#gab5477ab69c40de4db5d58d3359529724',1,'glm::lerp(const vec< 4, T, Q > &x, const vec< 4, T, Q > &y, const vec< 4, T, Q > &a)'],['../a00317.html#gace8380112d16d33f520839cb35a4d173',1,'glm::lerp(tdualquat< T, Q > const &x, tdualquat< T, Q > const &y, T const &a)']]], ['lessthan',['lessThan',['../a00299.html#gad091a2d22c8acfebfa92bcfca1dfe9c4',1,'glm::lessThan(qua< T, Q > const &x, qua< T, Q > const &y)'],['../a00374.html#gae90ed1592c395f93e3f3dfce6b2f39c6',1,'glm::lessThan(vec< L, T, Q > const &x, vec< L, T, Q > const &y)']]], ['lessthanequal',['lessThanEqual',['../a00299.html#gac00012eea281800d2403f4ea8443134d',1,'glm::lessThanEqual(qua< T, Q > const &x, qua< T, Q > const &y)'],['../a00374.html#gab0bdafc019d227257ff73fb5bcca1718',1,'glm::lessThanEqual(vec< L, T, Q > const &x, vec< L, T, Q > const &y)']]], ['levels',['levels',['../a00361.html#gaa8c377f4e63486db4fa872d77880da73',1,'glm']]], ['lineargradient',['linearGradient',['../a00327.html#ga849241df1e55129b8ce9476200307419',1,'glm']]], ['linearinterpolation',['linearInterpolation',['../a00318.html#ga290c3e47cb0a49f2e8abe90b1872b649',1,'glm']]], ['linearrand',['linearRand',['../a00300.html#ga04e241ab88374a477a2c2ceadd2fa03d',1,'glm::linearRand(genType Min, genType Max)'],['../a00300.html#ga94731130c298a9ff5e5025fdee6d97a0',1,'glm::linearRand(vec< L, T, Q > const &Min, vec< L, T, Q > const &Max)']]], ['lmaxnorm',['lMaxNorm',['../a00343.html#gad58a8231fc32e38104a9e1c4d3c0cb64',1,'glm::lMaxNorm(vec< 3, T, Q > const &x, vec< 3, T, Q > const &y)'],['../a00343.html#ga6968a324837a8e899396d44de23d5aae',1,'glm::lMaxNorm(vec< 3, T, Q > const &x)']]], ['ln_5fln_5ftwo',['ln_ln_two',['../a00290.html#gaca94292c839ed31a405ab7a81ae7e850',1,'glm']]], ['ln_5ften',['ln_ten',['../a00290.html#gaf97ebc6c059ffd788e6c4946f71ef66c',1,'glm']]], 
['ln_5ftwo',['ln_two',['../a00290.html#ga24f4d27765678116f41a2f336ab7975c',1,'glm']]], ['log',['log',['../a00242.html#ga918c9f3fd086ce20e6760c903bd30fa9',1,'glm::log(vec< L, T, Q > const &v)'],['../a00256.html#gaa5f7b20e296671b16ce25a2ab7ad5473',1,'glm::log(qua< T, Q > const &q)'],['../a00333.html#ga60a7b0a401da660869946b2b77c710c9',1,'glm::log(genType const &x, genType const &base)']]], ['log2',['log2',['../a00242.html#ga82831c7d9cca777cebedfe03a19c8d75',1,'glm::log2(vec< L, T, Q > const &v)'],['../a00292.html#ga9bd682e74bfacb005c735305207ec417',1,'glm::log2(genIUType x)']]], ['log_5fbase_2ehpp',['log_base.hpp',['../a00046.html',1,'']]], ['lookat',['lookAt',['../a00247.html#gaa64aa951a0e99136bba9008d2b59c78e',1,'glm']]], ['lookatlh',['lookAtLH',['../a00247.html#gab2c09e25b0a16d3a9d89cc85bbae41b0',1,'glm']]], ['lookatrh',['lookAtRH',['../a00247.html#gacfa12c8889c754846bc20c65d9b5c701',1,'glm']]], ['lowestbitvalue',['lowestBitValue',['../a00309.html#ga2ff6568089f3a9b67f5c30918855fc6f',1,'glm']]], ['lowp_5fbvec1',['lowp_bvec1',['../a00266.html#ga24a3d364e2ddd444f5b9e7975bbef8f9',1,'glm']]], ['lowp_5fbvec2',['lowp_bvec2',['../a00282.html#ga5a5452140650988b94d5716e4d872465',1,'glm']]], ['lowp_5fbvec3',['lowp_bvec3',['../a00282.html#ga79e0922a977662a8fd39d7829be3908b',1,'glm']]], ['lowp_5fbvec4',['lowp_bvec4',['../a00282.html#ga15ac87724048ab7169bb5d3572939dd3',1,'glm']]], ['lowp_5fddualquat',['lowp_ddualquat',['../a00317.html#gab4c5103338af3dac7e0fbc86895a3f1a',1,'glm']]], ['lowp_5fdmat2',['lowp_dmat2',['../a00284.html#gad8e2727a6e7aa68280245bb0022118e1',1,'glm']]], ['lowp_5fdmat2x2',['lowp_dmat2x2',['../a00284.html#gac61b94f5d9775f83f321bac899322fe2',1,'glm']]], ['lowp_5fdmat2x3',['lowp_dmat2x3',['../a00284.html#gaf6bf2f5bde7ad5b9c289f777b93094af',1,'glm']]], ['lowp_5fdmat2x4',['lowp_dmat2x4',['../a00284.html#ga97507a31ecee8609887d0f23bbde92c7',1,'glm']]], ['lowp_5fdmat3',['lowp_dmat3',['../a00284.html#ga0cab80beee64a5f8d2ae4e823983063a',1,'glm']]], 
['lowp_5fdmat3x2',['lowp_dmat3x2',['../a00284.html#ga1e0ea3fba496bc7c6f620d2590acb66b',1,'glm']]], ['lowp_5fdmat3x3',['lowp_dmat3x3',['../a00284.html#gac017848a9df570f60916a21a297b1e8e',1,'glm']]], ['lowp_5fdmat3x4',['lowp_dmat3x4',['../a00284.html#ga93add35d2a44c5830978b827e8c295e8',1,'glm']]], ['lowp_5fdmat4',['lowp_dmat4',['../a00284.html#ga708bc5b91bbfedd21debac8dcf2a64cd',1,'glm']]], ['lowp_5fdmat4x2',['lowp_dmat4x2',['../a00284.html#ga382dc5295cead78766239a8457abfa98',1,'glm']]], ['lowp_5fdmat4x3',['lowp_dmat4x3',['../a00284.html#ga3d7ea07da7c6e5c81a3f4c8b3d44056e',1,'glm']]], ['lowp_5fdmat4x4',['lowp_dmat4x4',['../a00284.html#ga5b0413198b7e9f061f7534a221c9dac9',1,'glm']]], ['lowp_5fdquat',['lowp_dquat',['../a00250.html#ga9e6e5f42e67dd5877350ba485c191f1c',1,'glm']]], ['lowp_5fdualquat',['lowp_dualquat',['../a00317.html#gade05d29ebd4deea0f883d0e1bb4169aa',1,'glm']]], ['lowp_5fdvec1',['lowp_dvec1',['../a00269.html#gaf906eb86b6e96c35138d0e4928e1435a',1,'glm']]], ['lowp_5fdvec2',['lowp_dvec2',['../a00282.html#ga108086730d086b7f6f7a033955dfb9c3',1,'glm']]], ['lowp_5fdvec3',['lowp_dvec3',['../a00282.html#ga42c518b2917e19ce6946a84c64a3a4b2',1,'glm']]], ['lowp_5fdvec4',['lowp_dvec4',['../a00282.html#ga0b4432cb8d910e406576d10d802e190d',1,'glm']]], ['lowp_5ff32',['lowp_f32',['../a00304.html#gaeea53879fc327293cf3352a409b7867b',1,'glm']]], ['lowp_5ff32mat2',['lowp_f32mat2',['../a00304.html#ga52409bc6d4a2ce3421526c069220d685',1,'glm']]], ['lowp_5ff32mat2x2',['lowp_f32mat2x2',['../a00304.html#ga1d091b6abfba1772450e1745a06525bc',1,'glm']]], ['lowp_5ff32mat2x3',['lowp_f32mat2x3',['../a00304.html#ga961ccb34cd1a5654c772c8709e001dc5',1,'glm']]], ['lowp_5ff32mat2x4',['lowp_f32mat2x4',['../a00304.html#gacc6bf0209dda0c7c14851a646071c974',1,'glm']]], ['lowp_5ff32mat3',['lowp_f32mat3',['../a00304.html#ga4187f89f196505b40e63f516139511e5',1,'glm']]], ['lowp_5ff32mat3x2',['lowp_f32mat3x2',['../a00304.html#gac53f9d7ab04eace67adad026092fb1e8',1,'glm']]], 
['lowp_5ff32mat3x3',['lowp_f32mat3x3',['../a00304.html#ga841211b641cff1fcf861bdb14e5e4abc',1,'glm']]], ['lowp_5ff32mat3x4',['lowp_f32mat3x4',['../a00304.html#ga21b1b22dec013a72656e3644baf8a1e1',1,'glm']]], ['lowp_5ff32mat4',['lowp_f32mat4',['../a00304.html#ga766aed2871e6173a81011a877f398f04',1,'glm']]], ['lowp_5ff32mat4x2',['lowp_f32mat4x2',['../a00304.html#gae6f3fcb702a666de07650c149cfa845a',1,'glm']]], ['lowp_5ff32mat4x3',['lowp_f32mat4x3',['../a00304.html#gac21eda58a1475449a5709b412ebd776c',1,'glm']]], ['lowp_5ff32mat4x4',['lowp_f32mat4x4',['../a00304.html#ga4143d129898f91545948c46859adce44',1,'glm']]], ['lowp_5ff32quat',['lowp_f32quat',['../a00304.html#gaa3ba60ef8f69c6aeb1629594eaa95347',1,'glm']]], ['lowp_5ff32vec1',['lowp_f32vec1',['../a00304.html#ga43e5b41c834fcaf4db5a831c0e28128e',1,'glm']]], ['lowp_5ff32vec2',['lowp_f32vec2',['../a00304.html#gaf3b694b2b8ded7e0b9f07b061917e1a0',1,'glm']]], ['lowp_5ff32vec3',['lowp_f32vec3',['../a00304.html#gaf739a2cd7b81783a43148b53e40d983b',1,'glm']]], ['lowp_5ff32vec4',['lowp_f32vec4',['../a00304.html#ga4e2e1debe022074ab224c9faf856d374',1,'glm']]], ['lowp_5ff64',['lowp_f64',['../a00304.html#gabc7a97c07cbfac8e35eb5e63beb4b679',1,'glm']]], ['lowp_5ff64mat2',['lowp_f64mat2',['../a00304.html#gafc730f6b4242763b0eda0ffa25150292',1,'glm']]], ['lowp_5ff64mat2x2',['lowp_f64mat2x2',['../a00304.html#ga771fda9109933db34f808d92b9b84d7e',1,'glm']]], ['lowp_5ff64mat2x3',['lowp_f64mat2x3',['../a00304.html#ga39e90adcffe33264bd608fa9c6bd184b',1,'glm']]], ['lowp_5ff64mat2x4',['lowp_f64mat2x4',['../a00304.html#ga50265a202fbfe0a25fc70066c31d9336',1,'glm']]], ['lowp_5ff64mat3',['lowp_f64mat3',['../a00304.html#ga58119a41d143ebaea0df70fe882e8a40',1,'glm']]], ['lowp_5ff64mat3x2',['lowp_f64mat3x2',['../a00304.html#gab0eb2d65514ee3e49905aa2caad8c0ad',1,'glm']]], ['lowp_5ff64mat3x3',['lowp_f64mat3x3',['../a00304.html#gac8f8a12ee03105ef8861dc652434e3b7',1,'glm']]], 
['lowp_5ff64mat3x4',['lowp_f64mat3x4',['../a00304.html#gade8d1edfb23996ab6c622e65e3893271',1,'glm']]], ['lowp_5ff64mat4',['lowp_f64mat4',['../a00304.html#ga7451266e67794bd1125163502bc4a570',1,'glm']]], ['lowp_5ff64mat4x2',['lowp_f64mat4x2',['../a00304.html#gab0cecb80fd106bc369b9e46a165815ce',1,'glm']]], ['lowp_5ff64mat4x3',['lowp_f64mat4x3',['../a00304.html#gae731613b25db3a5ef5a05d21e57a57d3',1,'glm']]], ['lowp_5ff64mat4x4',['lowp_f64mat4x4',['../a00304.html#ga8c9cd734e03cd49674f3e287aa4a6f95',1,'glm']]], ['lowp_5ff64quat',['lowp_f64quat',['../a00304.html#gaa3ee2bc4af03cc06578b66b3e3f878ae',1,'glm']]], ['lowp_5ff64vec1',['lowp_f64vec1',['../a00304.html#gaf2d02c5f4d59135b9bc524fe317fd26b',1,'glm']]], ['lowp_5ff64vec2',['lowp_f64vec2',['../a00304.html#ga4e641a54d70c81eabf56c25c966d04bd',1,'glm']]], ['lowp_5ff64vec3',['lowp_f64vec3',['../a00304.html#gae7a4711107b7d078fc5f03ce2227b90b',1,'glm']]], ['lowp_5ff64vec4',['lowp_f64vec4',['../a00304.html#gaa666bb9e6d204d3bea0b3a39a3a335f4',1,'glm']]], ['lowp_5ffdualquat',['lowp_fdualquat',['../a00317.html#gaa38f671be25a7f3b136a452a8bb42860',1,'glm']]], ['lowp_5ffloat32',['lowp_float32',['../a00304.html#ga41b0d390bd8cc827323b1b3816ff4bf8',1,'glm']]], ['lowp_5ffloat32_5ft',['lowp_float32_t',['../a00304.html#gaea881cae4ddc6c0fbf7cc5b08177ca5b',1,'glm']]], ['lowp_5ffloat64',['lowp_float64',['../a00304.html#ga3714dab2c16a6545a405cb0c3b3aaa6f',1,'glm']]], ['lowp_5ffloat64_5ft',['lowp_float64_t',['../a00304.html#ga7286a37076a09da140df18bfa75d4e38',1,'glm']]], ['lowp_5ffmat2',['lowp_fmat2',['../a00304.html#ga5bba0ce31210e274f73efacd3364c03f',1,'glm']]], ['lowp_5ffmat2x2',['lowp_fmat2x2',['../a00304.html#gab0feb11edd0d3ab3e8ed996d349a5066',1,'glm']]], ['lowp_5ffmat2x3',['lowp_fmat2x3',['../a00304.html#ga71cdb53801ed4c3aadb3603c04723210',1,'glm']]], ['lowp_5ffmat2x4',['lowp_fmat2x4',['../a00304.html#gaab217601c74974a84acbca428123ecf7',1,'glm']]], 
['lowp_5ffmat3',['lowp_fmat3',['../a00304.html#ga83079315e230e8f39728f4bf0d2f9a9b',1,'glm']]], ['lowp_5ffmat3x2',['lowp_fmat3x2',['../a00304.html#ga49b98e7d71804af45d86886a489e633c',1,'glm']]], ['lowp_5ffmat3x3',['lowp_fmat3x3',['../a00304.html#gaba56275dd04a7a61560b0e8fa5d365b4',1,'glm']]], ['lowp_5ffmat3x4',['lowp_fmat3x4',['../a00304.html#ga28733aec7288191b314d42154fd0b690',1,'glm']]], ['lowp_5ffmat4',['lowp_fmat4',['../a00304.html#ga5803cb9ae26399762d8bba9e0b2fc09f',1,'glm']]], ['lowp_5ffmat4x2',['lowp_fmat4x2',['../a00304.html#ga5868c2dcce41cc3ea5edcaeae239f62c',1,'glm']]], ['lowp_5ffmat4x3',['lowp_fmat4x3',['../a00304.html#ga5e649bbdb135fbcb4bfe950f4c73a444',1,'glm']]], ['lowp_5ffmat4x4',['lowp_fmat4x4',['../a00304.html#gac2f5263708ac847b361a9841e74ddf9f',1,'glm']]], ['lowp_5ffvec1',['lowp_fvec1',['../a00304.html#ga346b2336fff168a7e0df1583aae3e5a5',1,'glm']]], ['lowp_5ffvec2',['lowp_fvec2',['../a00304.html#ga62a32c31f4e2e8ca859663b6e3289a2d',1,'glm']]], ['lowp_5ffvec3',['lowp_fvec3',['../a00304.html#ga40b5c557efebb5bb99d6b9aa81095afa',1,'glm']]], ['lowp_5ffvec4',['lowp_fvec4',['../a00304.html#ga755484ffbe39ae3db2875953ed04e7b7',1,'glm']]], ['lowp_5fi16',['lowp_i16',['../a00304.html#ga392b673fd10847bfb78fb808c6cf8ff7',1,'glm']]], ['lowp_5fi16vec1',['lowp_i16vec1',['../a00304.html#ga501a2f313f1c220eef4ab02bdabdc3c6',1,'glm']]], ['lowp_5fi16vec2',['lowp_i16vec2',['../a00304.html#ga7cac84b520a6b57f2fbd880d3d63c51b',1,'glm']]], ['lowp_5fi16vec3',['lowp_i16vec3',['../a00304.html#gab69ef9cbc2a9214bf5596c528c801b72',1,'glm']]], ['lowp_5fi16vec4',['lowp_i16vec4',['../a00304.html#ga1d47d94d17c2406abdd1f087a816e387',1,'glm']]], ['lowp_5fi32',['lowp_i32',['../a00304.html#ga7ff73a45cea9613ebf1a9fad0b9f82ac',1,'glm']]], ['lowp_5fi32vec1',['lowp_i32vec1',['../a00304.html#gae31ac3608cf643ceffd6554874bec4a0',1,'glm']]], ['lowp_5fi32vec2',['lowp_i32vec2',['../a00304.html#ga867a3c2d99ab369a454167d2c0a24dbd',1,'glm']]], 
['lowp_5fi32vec3',['lowp_i32vec3',['../a00304.html#ga5fe17c87ede1b1b4d92454cff4da076d',1,'glm']]], ['lowp_5fi32vec4',['lowp_i32vec4',['../a00304.html#gac9b2eb4296ffe50a32eacca9ed932c08',1,'glm']]], ['lowp_5fi64',['lowp_i64',['../a00304.html#ga354736e0c645099cd44c42fb2f87c2b8',1,'glm']]], ['lowp_5fi64vec1',['lowp_i64vec1',['../a00304.html#gab0f7d875db5f3cc9f3168c5a0ed56437',1,'glm']]], ['lowp_5fi64vec2',['lowp_i64vec2',['../a00304.html#gab485c48f06a4fdd6b8d58d343bb49f3c',1,'glm']]], ['lowp_5fi64vec3',['lowp_i64vec3',['../a00304.html#ga5cb1dc9e8d300c2cdb0d7ff2308fa36c',1,'glm']]], ['lowp_5fi64vec4',['lowp_i64vec4',['../a00304.html#gabb4229a4c1488bf063eed0c45355bb9c',1,'glm']]], ['lowp_5fi8',['lowp_i8',['../a00304.html#ga552a6bde5e75984efb0f863278da2e54',1,'glm']]], ['lowp_5fi8vec1',['lowp_i8vec1',['../a00304.html#ga036d6c7ca9fbbdc5f3871bfcb937c85c',1,'glm']]], ['lowp_5fi8vec2',['lowp_i8vec2',['../a00304.html#gac03e5099d27eeaa74b6016ea435a1df2',1,'glm']]], ['lowp_5fi8vec3',['lowp_i8vec3',['../a00304.html#gae2f43ace6b5b33ab49516d9e40af1845',1,'glm']]], ['lowp_5fi8vec4',['lowp_i8vec4',['../a00304.html#ga6d388e9b9aa1b389f0672d9c7dfc61c5',1,'glm']]], ['lowp_5fimat2',['lowp_imat2',['../a00294.html#gaa0bff0be804142bb16d441aec0a7962e',1,'glm']]], ['lowp_5fimat2x2',['lowp_imat2x2',['../a00294.html#ga92b95b679975d408645547ab45a8dcd8',1,'glm']]], ['lowp_5fimat2x3',['lowp_imat2x3',['../a00294.html#ga8c9e7a388f8e7c52f1e6857dee8afb65',1,'glm']]], ['lowp_5fimat2x4',['lowp_imat2x4',['../a00294.html#ga9cc13bd1f8dd2933e9fa31fe3f70e16e',1,'glm']]], ['lowp_5fimat3',['lowp_imat3',['../a00294.html#ga69bfe668f4170379fc1f35d82b060c43',1,'glm']]], ['lowp_5fimat3x2',['lowp_imat3x2',['../a00294.html#ga33db8f27491d30906cd37c0d86b3f432',1,'glm']]], ['lowp_5fimat3x3',['lowp_imat3x3',['../a00294.html#ga664f061df00020048c3f8530329ace45',1,'glm']]], ['lowp_5fimat3x4',['lowp_imat3x4',['../a00294.html#ga9273faab33623d944af4080befbb2c80',1,'glm']]], 
['lowp_5fimat4',['lowp_imat4',['../a00294.html#gad1e77f7270cad461ca4fcb4c3ec2e98c',1,'glm']]], ['lowp_5fimat4x2',['lowp_imat4x2',['../a00294.html#ga26ec1a2ba08a1488f5f05336858a0f09',1,'glm']]], ['lowp_5fimat4x3',['lowp_imat4x3',['../a00294.html#ga8f40483a3ae634ead8ad22272c543a33',1,'glm']]], ['lowp_5fimat4x4',['lowp_imat4x4',['../a00294.html#gaf65677e53ac8e31a107399340d5e2451',1,'glm']]], ['lowp_5fint16',['lowp_int16',['../a00304.html#ga698e36b01167fc0f037889334dce8def',1,'glm']]], ['lowp_5fint16_5ft',['lowp_int16_t',['../a00304.html#ga8b2cd8d31eb345b2d641d9261c38db1a',1,'glm']]], ['lowp_5fint32',['lowp_int32',['../a00304.html#ga864aabca5f3296e176e0c3ed9cc16b02',1,'glm']]], ['lowp_5fint32_5ft',['lowp_int32_t',['../a00304.html#ga0350631d35ff800e6133ac6243b13cbc',1,'glm']]], ['lowp_5fint64',['lowp_int64',['../a00304.html#gaf645b1a60203b39c0207baff5e3d8c3c',1,'glm']]], ['lowp_5fint64_5ft',['lowp_int64_t',['../a00304.html#gaebf341fc4a5be233f7dde962c2e33847',1,'glm']]], ['lowp_5fint8',['lowp_int8',['../a00304.html#ga760bcf26fdb23a2c3ecad3c928a19ae6',1,'glm']]], ['lowp_5fint8_5ft',['lowp_int8_t',['../a00304.html#ga119c41d73fe9977358174eb3ac1035a3',1,'glm']]], ['lowp_5fivec1',['lowp_ivec1',['../a00273.html#ga836dbb1dc516c233b7f5fe9763bc15dc',1,'glm']]], ['lowp_5fivec2',['lowp_ivec2',['../a00282.html#ga8433c6c1fdd80c0a83941d94aff73fa0',1,'glm']]], ['lowp_5fivec3',['lowp_ivec3',['../a00282.html#gac1a86a75b3c68ebb704d7094043669d6',1,'glm']]], ['lowp_5fivec4',['lowp_ivec4',['../a00282.html#ga27fc23da61859cd6356326c5f1c796de',1,'glm']]], ['lowp_5fmat2',['lowp_mat2',['../a00284.html#gae400c4ce1f5f3e1fa12861b2baed331a',1,'glm']]], ['lowp_5fmat2x2',['lowp_mat2x2',['../a00284.html#ga2df7cdaf9a571ce7a1b09435f502c694',1,'glm']]], ['lowp_5fmat2x3',['lowp_mat2x3',['../a00284.html#ga3eee3a74d0f1de8635d846dfb29ec4bb',1,'glm']]], ['lowp_5fmat2x4',['lowp_mat2x4',['../a00284.html#gade27f8324a16626cbce5d3e7da66b070',1,'glm']]], 
['lowp_5fmat3',['lowp_mat3',['../a00284.html#ga6271ebc85ed778ccc15458c3d86fc854',1,'glm']]], ['lowp_5fmat3x2',['lowp_mat3x2',['../a00284.html#gaabf6cf90fd31efe25c94965507e98390',1,'glm']]], ['lowp_5fmat3x3',['lowp_mat3x3',['../a00284.html#ga63362cb4a63fc1be7d2e49cd5d574c84',1,'glm']]], ['lowp_5fmat3x4',['lowp_mat3x4',['../a00284.html#gac5fc6786688eff02904ca5e7d6960092',1,'glm']]], ['lowp_5fmat4',['lowp_mat4',['../a00284.html#ga2dedee030500865267cd5851c00c139d',1,'glm']]], ['lowp_5fmat4x2',['lowp_mat4x2',['../a00284.html#gafa3cdb8f24d09d761ec9ae2a4c7e5e21',1,'glm']]], ['lowp_5fmat4x3',['lowp_mat4x3',['../a00284.html#ga534c3ef5c3b8fdd8656b6afc205b4b77',1,'glm']]], ['lowp_5fmat4x4',['lowp_mat4x4',['../a00284.html#ga686468a9a815bd4db8cddae42a6d6b87',1,'glm']]], ['lowp_5fquat',['lowp_quat',['../a00253.html#gade62c5316c1c11a79c34c00c189558eb',1,'glm']]], ['lowp_5fu16',['lowp_u16',['../a00304.html#ga504ce1631cb2ac02fcf1d44d8c2aa126',1,'glm']]], ['lowp_5fu16vec1',['lowp_u16vec1',['../a00304.html#gaa6aab4ee7189b86716f5d7015d43021d',1,'glm']]], ['lowp_5fu16vec2',['lowp_u16vec2',['../a00304.html#ga2a7d997da9ac29cb931e35bd399f58df',1,'glm']]], ['lowp_5fu16vec3',['lowp_u16vec3',['../a00304.html#gac0253db6c3d3bae1f591676307a9dd8c',1,'glm']]], ['lowp_5fu16vec4',['lowp_u16vec4',['../a00304.html#gaa7f00459b9a2e5b2757e70afc0c189e1',1,'glm']]], ['lowp_5fu32',['lowp_u32',['../a00304.html#ga4f072ada9552e1e480bbb3b1acde5250',1,'glm']]], ['lowp_5fu32vec1',['lowp_u32vec1',['../a00304.html#gabed3be8dfdc4a0df4bf3271dbd7344c4',1,'glm']]], ['lowp_5fu32vec2',['lowp_u32vec2',['../a00304.html#gaf7e286e81347011e257ee779524e73b9',1,'glm']]], ['lowp_5fu32vec3',['lowp_u32vec3',['../a00304.html#gad3ad390560a671b1f676fbf03cd3aa15',1,'glm']]], ['lowp_5fu32vec4',['lowp_u32vec4',['../a00304.html#ga4502885718742aa238c36a312c3f3f20',1,'glm']]], ['lowp_5fu64',['lowp_u64',['../a00304.html#ga30069d1f02b19599cbfadf98c23ac6ed',1,'glm']]], 
['lowp_5fu64vec1',['lowp_u64vec1',['../a00304.html#ga859be7b9d3a3765c1cafc14dbcf249a6',1,'glm']]], ['lowp_5fu64vec2',['lowp_u64vec2',['../a00304.html#ga581485db4ba6ddb501505ee711fd8e42',1,'glm']]], ['lowp_5fu64vec3',['lowp_u64vec3',['../a00304.html#gaa4a8682bec7ec8af666ef87fae38d5d1',1,'glm']]], ['lowp_5fu64vec4',['lowp_u64vec4',['../a00304.html#ga6fccc89c34045c86339f6fa781ce96de',1,'glm']]], ['lowp_5fu8',['lowp_u8',['../a00304.html#ga1b09f03da7ac43055c68a349d5445083',1,'glm']]], ['lowp_5fu8vec1',['lowp_u8vec1',['../a00304.html#ga4b2e0e10d8d154fec9cab50e216588ec',1,'glm']]], ['lowp_5fu8vec2',['lowp_u8vec2',['../a00304.html#gae6f63fa38635431e51a8f2602f15c566',1,'glm']]], ['lowp_5fu8vec3',['lowp_u8vec3',['../a00304.html#ga150dc47e31c6b8cf8461803c8d56f7bd',1,'glm']]], ['lowp_5fu8vec4',['lowp_u8vec4',['../a00304.html#ga9910927f3a4d1addb3da6a82542a8287',1,'glm']]], ['lowp_5fuint16',['lowp_uint16',['../a00304.html#gad68bfd9f881856fc863a6ebca0b67f78',1,'glm']]], ['lowp_5fuint16_5ft',['lowp_uint16_t',['../a00304.html#ga91c4815f93177eb423362fd296a87e9f',1,'glm']]], ['lowp_5fuint32',['lowp_uint32',['../a00304.html#gaa6a5b461bbf5fe20982472aa51896d4b',1,'glm']]], ['lowp_5fuint32_5ft',['lowp_uint32_t',['../a00304.html#gaf1b735b4b1145174f4e4167d13778f9b',1,'glm']]], ['lowp_5fuint64',['lowp_uint64',['../a00304.html#gaa212b805736a759998e312cbdd550fae',1,'glm']]], ['lowp_5fuint64_5ft',['lowp_uint64_t',['../a00304.html#ga8dd3a3281ae5c970ffe0c41d538aa153',1,'glm']]], ['lowp_5fuint8',['lowp_uint8',['../a00304.html#gaf49470869e9be2c059629b250619804e',1,'glm']]], ['lowp_5fuint8_5ft',['lowp_uint8_t',['../a00304.html#ga667b2ece2b258be898812dc2177995d1',1,'glm']]], ['lowp_5fumat2',['lowp_umat2',['../a00294.html#gaf2fba702d990437fc88ff3f3a76846ee',1,'glm']]], ['lowp_5fumat2x2',['lowp_umat2x2',['../a00294.html#ga7b2e9d89745f7175051284e54c81d81c',1,'glm']]], ['lowp_5fumat2x3',['lowp_umat2x3',['../a00294.html#ga3072f90fd86f17a862e21589fbb14c0f',1,'glm']]], 
['lowp_5fumat2x4',['lowp_umat2x4',['../a00294.html#ga8bb45fec4bd77bd81b4ae7eb961a270d',1,'glm']]], ['lowp_5fumat3',['lowp_umat3',['../a00294.html#gaf1145f72bcdd590f5808c4bc170c2924',1,'glm']]], ['lowp_5fumat3x2',['lowp_umat3x2',['../a00294.html#ga56ea68c6a6cba8d8c21d17bb14e69c6b',1,'glm']]], ['lowp_5fumat3x3',['lowp_umat3x3',['../a00294.html#ga4f660a39a395cc14f018f985e7dfbeb5',1,'glm']]], ['lowp_5fumat3x4',['lowp_umat3x4',['../a00294.html#gaec3d624306bd59649f021864709d56b5',1,'glm']]], ['lowp_5fumat4',['lowp_umat4',['../a00294.html#gac092c6105827bf9ea080db38074b78eb',1,'glm']]], ['lowp_5fumat4x2',['lowp_umat4x2',['../a00294.html#ga7716c2b210d141846f1ac4e774adef5e',1,'glm']]], ['lowp_5fumat4x3',['lowp_umat4x3',['../a00294.html#ga09ab33a2636f5f43f7fae29cfbc20fff',1,'glm']]], ['lowp_5fumat4x4',['lowp_umat4x4',['../a00294.html#ga10aafc66cf1a0ece336b1c5ae13d0cc0',1,'glm']]], ['lowp_5fuvec1',['lowp_uvec1',['../a00277.html#ga8bf3fc8a7863d140f48b29341c750402',1,'glm']]], ['lowp_5fuvec2',['lowp_uvec2',['../a00282.html#ga752ee45136011301b64afd8c310c47a4',1,'glm']]], ['lowp_5fuvec3',['lowp_uvec3',['../a00282.html#ga7b2efbdd6bdc2f8250c57f3e5dc9a292',1,'glm']]], ['lowp_5fuvec4',['lowp_uvec4',['../a00282.html#ga5e6a632ec1165cf9f54ceeaa5e9b2b1e',1,'glm']]], ['lowp_5fvec1',['lowp_vec1',['../a00271.html#ga0a57630f03031706b1d26a7d70d9184c',1,'glm']]], ['lowp_5fvec2',['lowp_vec2',['../a00282.html#ga30e8baef5d56d5c166872a2bc00f36e9',1,'glm']]], ['lowp_5fvec3',['lowp_vec3',['../a00282.html#ga868e8e4470a3ef97c7ee3032bf90dc79',1,'glm']]], ['lowp_5fvec4',['lowp_vec4',['../a00282.html#gace3acb313c800552a9411953eb8b2ed7',1,'glm']]], ['luminosity',['luminosity',['../a00312.html#gad028e0a4f1a9c812b39439b746295b34',1,'glm']]], ['lxnorm',['lxNorm',['../a00343.html#gacad23d30497eb16f67709f2375d1f66a',1,'glm::lxNorm(vec< 3, T, Q > const &x, vec< 3, T, Q > const &y, unsigned int Depth)'],['../a00343.html#gac61b6d81d796d6eb4d4183396a19ab91',1,'glm::lxNorm(vec< 3, T, Q > const &x, unsigned int 
Depth)']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_a.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_a.js ================================================ var searchData= [ ['matrix_20functions',['Matrix functions',['../a00371.html',1,'']]], ['matrix_20types',['Matrix types',['../a00283.html',1,'']]], ['matrix_20types_20with_20precision_20qualifiers',['Matrix types with precision qualifiers',['../a00284.html',1,'']]], ['make_5fmat2',['make_mat2',['../a00305.html#ga04409e74dc3da251d2501acf5b4b546c',1,'glm']]], ['make_5fmat2x2',['make_mat2x2',['../a00305.html#gae49e1c7bcd5abec74d1c34155031f663',1,'glm']]], ['make_5fmat2x3',['make_mat2x3',['../a00305.html#ga21982104164789cf8985483aaefc25e8',1,'glm']]], ['make_5fmat2x4',['make_mat2x4',['../a00305.html#ga078b862c90b0e9a79ed43a58997d8388',1,'glm']]], ['make_5fmat3',['make_mat3',['../a00305.html#ga611ee7c4d4cadfc83a8fa8e1d10a170f',1,'glm']]], ['make_5fmat3x2',['make_mat3x2',['../a00305.html#ga27a24e121dc39e6857620e0f85b6e1a8',1,'glm']]], ['make_5fmat3x3',['make_mat3x3',['../a00305.html#gaf2e8337b15c3362aaeb6e5849e1c0536',1,'glm']]], ['make_5fmat3x4',['make_mat3x4',['../a00305.html#ga05dd66232aedb993e3b8e7b35eaf932b',1,'glm']]], ['make_5fmat4',['make_mat4',['../a00305.html#gae7bcedb710d1446c87fd1fc93ed8ee9a',1,'glm']]], ['make_5fmat4x2',['make_mat4x2',['../a00305.html#ga8b34c9b25bf3310d8ff9c828c7e2d97c',1,'glm']]], ['make_5fmat4x3',['make_mat4x3',['../a00305.html#ga0330bf6640092d7985fac92927bbd42b',1,'glm']]], ['make_5fmat4x4',['make_mat4x4',['../a00305.html#ga8f084be30e404844bfbb4a551ac2728c',1,'glm']]], ['make_5fquat',['make_quat',['../a00305.html#ga58110d7d81cf7d029e2bab7f8cd9b246',1,'glm']]], ['make_5fvec1',['make_vec1',['../a00305.html#ga4135f03f3049f0a4eb76545c4967957c',1,'glm::make_vec1(vec< 1, T, Q > const &v)'],['../a00305.html#ga13c92b81e55f201b052a6404d57da220',1,'glm::make_vec1(vec< 2, T, Q > const 
&v)'],['../a00305.html#ga3c23cc74086d361e22bbd5e91a334e03',1,'glm::make_vec1(vec< 3, T, Q > const &v)'],['../a00305.html#ga6af06bb60d64ca8bcd169e3c93bc2419',1,'glm::make_vec1(vec< 4, T, Q > const &v)']]], ['make_5fvec2',['make_vec2',['../a00305.html#ga8476d0e6f1b9b4a6193cc25f59d8a896',1,'glm::make_vec2(vec< 1, T, Q > const &v)'],['../a00305.html#gae54bd325a08ad26edf63929201adebc7',1,'glm::make_vec2(vec< 2, T, Q > const &v)'],['../a00305.html#ga0084fea4694cf47276e9cccbe7b1015a',1,'glm::make_vec2(vec< 3, T, Q > const &v)'],['../a00305.html#ga2b81f71f3a222fe5bba81e3983751249',1,'glm::make_vec2(vec< 4, T, Q > const &v)'],['../a00305.html#ga81253cf7b0ebfbb1e70540c5774e6824',1,'glm::make_vec2(T const *const ptr)']]], ['make_5fvec3',['make_vec3',['../a00305.html#ga9147e4b3a5d0f4772edfbfd179d7ea0b',1,'glm::make_vec3(vec< 1, T, Q > const &v)'],['../a00305.html#ga482b60a842a5b154d3eed392417a9511',1,'glm::make_vec3(vec< 2, T, Q > const &v)'],['../a00305.html#gacd57046034df557b8b1c457f58613623',1,'glm::make_vec3(vec< 3, T, Q > const &v)'],['../a00305.html#ga8b589ed7d41a298b516d2a69169248f1',1,'glm::make_vec3(vec< 4, T, Q > const &v)'],['../a00305.html#gad9e0d36ff489cb30c65ad1fa40351651',1,'glm::make_vec3(T const *const ptr)']]], ['make_5fvec4',['make_vec4',['../a00305.html#ga600cb97f70c5d50d3a4a145e1cafbf37',1,'glm::make_vec4(vec< 1, T, Q > const &v)'],['../a00305.html#gaa9bd116caf28196fd1cf00b278286fa7',1,'glm::make_vec4(vec< 2, T, Q > const &v)'],['../a00305.html#ga4036328ba4702c74cbdfad1fc03d1b8f',1,'glm::make_vec4(vec< 3, T, Q > const &v)'],['../a00305.html#gaa95cb15732f708f613e65a0578895ae5',1,'glm::make_vec4(vec< 4, T, Q > const &v)'],['../a00305.html#ga63f576518993efc22a969f18f80e29bb',1,'glm::make_vec4(T const *const ptr)']]], ['mask',['mask',['../a00288.html#gad7eba518a0b71662114571ee76939f8a',1,'glm::mask(genIUType Bits)'],['../a00288.html#ga2e64e3b922a296033b825311e7f5fff1',1,'glm::mask(vec< L, T, Q > const &v)']]], 
['mat2',['mat2',['../a00283.html#ga8dd59e7fc6913ac5d61b86553e9148ba',1,'glm']]], ['mat2x2',['mat2x2',['../a00283.html#gaaa17ef6bfa4e4f2692348b1460c8efcb',1,'glm']]], ['mat2x2_2ehpp',['mat2x2.hpp',['../a00048.html',1,'']]], ['mat2x3',['mat2x3',['../a00283.html#ga493ab21243abe564b3f7d381e677d29a',1,'glm']]], ['mat2x3_2ehpp',['mat2x3.hpp',['../a00049.html',1,'']]], ['mat2x4',['mat2x4',['../a00283.html#ga8e879b57ddd81e5bf5a88929844e8b40',1,'glm']]], ['mat2x4_2ehpp',['mat2x4.hpp',['../a00050.html',1,'']]], ['mat2x4_5fcast',['mat2x4_cast',['../a00317.html#gae99d143b37f9cad4cd9285571aab685a',1,'glm']]], ['mat3',['mat3',['../a00283.html#gaefb0fc7a4960b782c18708bb6b655262',1,'glm']]], ['mat3_5fcast',['mat3_cast',['../a00299.html#ga333ab70047fbe4132406100c292dbc89',1,'glm']]], ['mat3x2',['mat3x2',['../a00280.html#ga2c27aea32de57d58aec8e92d5d2181e2',1,'glm']]], ['mat3x2_2ehpp',['mat3x2.hpp',['../a00051.html',1,'']]], ['mat3x3',['mat3x3',['../a00283.html#gab91887d7565059dac640e3a1921c914a',1,'glm']]], ['mat3x3_2ehpp',['mat3x3.hpp',['../a00052.html',1,'']]], ['mat3x4',['mat3x4',['../a00283.html#gaf991cad0b34f64e33af186326dbc4d66',1,'glm']]], ['mat3x4_2ehpp',['mat3x4.hpp',['../a00053.html',1,'']]], ['mat3x4_5fcast',['mat3x4_cast',['../a00317.html#gaf59f5bb69620d2891c3795c6f2639179',1,'glm']]], ['mat4',['mat4',['../a00283.html#ga0db98d836c5549d31cf64ecd043b7af7',1,'glm']]], ['mat4_5fcast',['mat4_cast',['../a00299.html#ga1113212d9bdefc2e31ad40e5bbb506f3',1,'glm']]], ['mat4x2',['mat4x2',['../a00283.html#gad941c947ad6cdd117a0e8554a4754983',1,'glm']]], ['mat4x2_2ehpp',['mat4x2.hpp',['../a00054.html',1,'']]], ['mat4x3',['mat4x3',['../a00283.html#gac7574544bb94777bdbd2eb224eb72fd0',1,'glm']]], ['mat4x3_2ehpp',['mat4x3.hpp',['../a00055.html',1,'']]], ['mat4x4',['mat4x4',['../a00283.html#gab2d35cc2655f44d60958d60a1de34e81',1,'glm']]], ['mat4x4_2ehpp',['mat4x4.hpp',['../a00056.html',1,'']]], ['matrix_2ehpp',['matrix.hpp',['../a00057.html',1,'']]], 
['matrix_5faccess_2ehpp',['matrix_access.hpp',['../a00058.html',1,'']]], ['matrix_5fclip_5fspace_2ehpp',['matrix_clip_space.hpp',['../a00059.html',1,'']]], ['matrix_5fcommon_2ehpp',['matrix_common.hpp',['../a00060.html',1,'']]], ['matrix_5fcross_5fproduct_2ehpp',['matrix_cross_product.hpp',['../a00061.html',1,'']]], ['matrix_5fdecompose_2ehpp',['matrix_decompose.hpp',['../a00062.html',1,'']]], ['matrix_5fdouble2x2_2ehpp',['matrix_double2x2.hpp',['../a00063.html',1,'']]], ['matrix_5fdouble2x2_5fprecision_2ehpp',['matrix_double2x2_precision.hpp',['../a00064.html',1,'']]], ['matrix_5fdouble2x3_2ehpp',['matrix_double2x3.hpp',['../a00065.html',1,'']]], ['matrix_5fdouble2x3_5fprecision_2ehpp',['matrix_double2x3_precision.hpp',['../a00066.html',1,'']]], ['matrix_5fdouble2x4_2ehpp',['matrix_double2x4.hpp',['../a00067.html',1,'']]], ['matrix_5fdouble2x4_5fprecision_2ehpp',['matrix_double2x4_precision.hpp',['../a00068.html',1,'']]], ['matrix_5fdouble3x2_2ehpp',['matrix_double3x2.hpp',['../a00069.html',1,'']]], ['matrix_5fdouble3x2_5fprecision_2ehpp',['matrix_double3x2_precision.hpp',['../a00070.html',1,'']]], ['matrix_5fdouble3x3_2ehpp',['matrix_double3x3.hpp',['../a00071.html',1,'']]], ['matrix_5fdouble3x3_5fprecision_2ehpp',['matrix_double3x3_precision.hpp',['../a00072.html',1,'']]], ['matrix_5fdouble3x4_2ehpp',['matrix_double3x4.hpp',['../a00073.html',1,'']]], ['matrix_5fdouble3x4_5fprecision_2ehpp',['matrix_double3x4_precision.hpp',['../a00074.html',1,'']]], ['matrix_5fdouble4x2_2ehpp',['matrix_double4x2.hpp',['../a00075.html',1,'']]], ['matrix_5fdouble4x2_5fprecision_2ehpp',['matrix_double4x2_precision.hpp',['../a00076.html',1,'']]], ['matrix_5fdouble4x3_2ehpp',['matrix_double4x3.hpp',['../a00077.html',1,'']]], ['matrix_5fdouble4x3_5fprecision_2ehpp',['matrix_double4x3_precision.hpp',['../a00078.html',1,'']]], ['matrix_5fdouble4x4_2ehpp',['matrix_double4x4.hpp',['../a00079.html',1,'']]], 
['matrix_5fdouble4x4_5fprecision_2ehpp',['matrix_double4x4_precision.hpp',['../a00080.html',1,'']]], ['matrix_5ffactorisation_2ehpp',['matrix_factorisation.hpp',['../a00081.html',1,'']]], ['matrix_5ffloat2x2_2ehpp',['matrix_float2x2.hpp',['../a00082.html',1,'']]], ['matrix_5ffloat2x2_5fprecision_2ehpp',['matrix_float2x2_precision.hpp',['../a00083.html',1,'']]], ['matrix_5ffloat2x3_2ehpp',['matrix_float2x3.hpp',['../a00084.html',1,'']]], ['matrix_5ffloat2x3_5fprecision_2ehpp',['matrix_float2x3_precision.hpp',['../a00085.html',1,'']]], ['matrix_5ffloat2x4_2ehpp',['matrix_float2x4.hpp',['../a00086.html',1,'']]], ['matrix_5ffloat2x4_5fprecision_2ehpp',['matrix_float2x4_precision.hpp',['../a00087.html',1,'']]], ['matrix_5ffloat3x2_2ehpp',['matrix_float3x2.hpp',['../a00088.html',1,'']]], ['matrix_5ffloat3x2_5fprecision_2ehpp',['matrix_float3x2_precision.hpp',['../a00089.html',1,'']]], ['matrix_5ffloat3x3_2ehpp',['matrix_float3x3.hpp',['../a00090.html',1,'']]], ['matrix_5ffloat3x3_5fprecision_2ehpp',['matrix_float3x3_precision.hpp',['../a00091.html',1,'']]], ['matrix_5ffloat3x4_2ehpp',['matrix_float3x4.hpp',['../a00092.html',1,'']]], ['matrix_5ffloat3x4_5fprecision_2ehpp',['matrix_float3x4_precision.hpp',['../a00093.html',1,'']]], ['matrix_5ffloat4x2_2ehpp',['matrix_float4x2.hpp',['../a00094.html',1,'']]], ['matrix_5ffloat4x3_2ehpp',['matrix_float4x3.hpp',['../a00096.html',1,'']]], ['matrix_5ffloat4x3_5fprecision_2ehpp',['matrix_float4x3_precision.hpp',['../a00097.html',1,'']]], ['matrix_5ffloat4x4_2ehpp',['matrix_float4x4.hpp',['../a00098.html',1,'']]], ['matrix_5ffloat4x4_5fprecision_2ehpp',['matrix_float4x4_precision.hpp',['../a00099.html',1,'']]], ['matrix_5finteger_2ehpp',['matrix_integer.hpp',['../a00100.html',1,'']]], ['matrix_5finterpolation_2ehpp',['matrix_interpolation.hpp',['../a00101.html',1,'']]], ['matrix_5finverse_2ehpp',['matrix_inverse.hpp',['../a00102.html',1,'']]], ['matrix_5fmajor_5fstorage_2ehpp',['matrix_major_storage.hpp',['../a00103.html',1,'']]], 
['matrix_5foperation_2ehpp',['matrix_operation.hpp',['../a00104.html',1,'']]], ['matrix_5fprojection_2ehpp',['matrix_projection.hpp',['../a00105.html',1,'']]], ['matrix_5fquery_2ehpp',['matrix_query.hpp',['../a00106.html',1,'']]], ['matrix_5frelational_2ehpp',['matrix_relational.hpp',['../a00107.html',1,'']]], ['matrix_5ftransform_5f2d_2ehpp',['matrix_transform_2d.hpp',['../a00110.html',1,'']]], ['matrixcompmult',['matrixCompMult',['../a00371.html#gaf14569404c779fedca98d0b9b8e58c1f',1,'glm']]], ['matrixcross3',['matrixCross3',['../a00334.html#ga5802386bb4c37b3332a3b6fd8b6960ff',1,'glm']]], ['matrixcross4',['matrixCross4',['../a00334.html#ga20057fff91ddafa102934adb25458cde',1,'glm']]], ['max',['max',['../a00241.html#gae02d42887fc5570451f880e3c624b9ac',1,'glm::max(genType x, genType y)'],['../a00241.html#ga03e45d6e60d1c36edb00c52edeea0f31',1,'glm::max(vec< L, T, Q > const &x, T y)'],['../a00241.html#gac1fec0c3303b572a6d4697a637213870',1,'glm::max(vec< L, T, Q > const &x, vec< L, T, Q > const &y)'],['../a00258.html#gaa20839d9ab14514f8966f69877ea0de8',1,'glm::max(T a, T b, T c)'],['../a00258.html#ga2274b5e75ed84b0b1e50d8d22f1f2f67',1,'glm::max(T a, T b, T c, T d)'],['../a00267.html#gaa45d34f6a2906f8bf58ab2ba5429234d',1,'glm::max(vec< L, T, Q > const &x, vec< L, T, Q > const &y, vec< L, T, Q > const &z)'],['../a00267.html#ga94d42b8da2b4ded5ddf7504fbdc6bf10',1,'glm::max(vec< L, T, Q > const &x, vec< L, T, Q > const &y, vec< L, T, Q > const &z, vec< L, T, Q > const &w)'],['../a00321.html#ga04991ccb9865c4c4e58488cfb209ce69',1,'glm::max(T const &x, T const &y, T const &z)'],['../a00321.html#gae1b7bbe5c91de4924835ea3e14530744',1,'glm::max(C< T > const &x, typename C< T >::T const &y, typename C< T >::T const &z)'],['../a00321.html#gaf832e9d4ab4826b2dda2fda25935a3a4',1,'glm::max(C< T > const &x, C< T > const &y, C< T > const &z)'],['../a00321.html#ga78e04a0cef1c4863fcae1a2130500d87',1,'glm::max(T const &x, T const &y, T const &z, T const 
&w)'],['../a00321.html#ga7cca8b53cfda402040494cdf40fbdf4a',1,'glm::max(C< T > const &x, typename C< T >::T const &y, typename C< T >::T const &z, typename C< T >::T const &w)'],['../a00321.html#gaacffbc466c2d08c140b181e7fd8a4858',1,'glm::max(C< T > const &x, C< T > const &y, C< T > const &z, C< T > const &w)']]], ['mediump_5fbvec1',['mediump_bvec1',['../a00266.html#ga7b4ccb989ba179fa44f7b0879c782621',1,'glm']]], ['mediump_5fbvec2',['mediump_bvec2',['../a00282.html#ga1e743764869efa9223c2bcefccedaddc',1,'glm']]], ['mediump_5fbvec3',['mediump_bvec3',['../a00282.html#ga50c783c25082882ef00fe2e5cddba4aa',1,'glm']]], ['mediump_5fbvec4',['mediump_bvec4',['../a00282.html#ga0be2c682258604a35004f088782a9645',1,'glm']]], ['mediump_5fddualquat',['mediump_ddualquat',['../a00317.html#ga0fb11e48e2d16348ccb06a25213641b4',1,'glm']]], ['mediump_5fdmat2',['mediump_dmat2',['../a00284.html#ga6205fd19be355600334edef6af0b27cb',1,'glm']]], ['mediump_5fdmat2x2',['mediump_dmat2x2',['../a00284.html#ga51dc36a7719cb458fa5114831c20d64f',1,'glm']]], ['mediump_5fdmat2x3',['mediump_dmat2x3',['../a00284.html#ga741e05adf1f12d5d913f67088db1009a',1,'glm']]], ['mediump_5fdmat2x4',['mediump_dmat2x4',['../a00284.html#ga685bda24922d112786af385deb4deb43',1,'glm']]], ['mediump_5fdmat3',['mediump_dmat3',['../a00284.html#ga939fbf9c53008a8e84c7dd7cf8de29e2',1,'glm']]], ['mediump_5fdmat3x2',['mediump_dmat3x2',['../a00284.html#ga2076157df85e49b8c021e03e46a376c1',1,'glm']]], ['mediump_5fdmat3x3',['mediump_dmat3x3',['../a00284.html#ga47bd2aae4701ee2fc865674a9df3d7a6',1,'glm']]], ['mediump_5fdmat3x4',['mediump_dmat3x4',['../a00284.html#ga3a132bd05675c2e46556f67cf738600b',1,'glm']]], ['mediump_5fdmat4',['mediump_dmat4',['../a00284.html#gaf650bc667bf2a0e496b5a9182bc8d378',1,'glm']]], ['mediump_5fdmat4x2',['mediump_dmat4x2',['../a00284.html#gae220fa4c5a7b13ef2ab0420340de645c',1,'glm']]], ['mediump_5fdmat4x3',['mediump_dmat4x3',['../a00284.html#ga43ef60e4d996db15c9c8f069a96ff763',1,'glm']]], 
['mediump_5fdmat4x4',['mediump_dmat4x4',['../a00284.html#ga5389b3ab32dc0d72bea00057ab6d1dd3',1,'glm']]], ['mediump_5fdquat',['mediump_dquat',['../a00250.html#gacdf73b1f7fd8f5a0c79a3934e99c1a14',1,'glm']]], ['mediump_5fdualquat',['mediump_dualquat',['../a00317.html#gaa7aeb54c167712b38f2178a1be2360ad',1,'glm']]], ['mediump_5fdvec1',['mediump_dvec1',['../a00269.html#ga79a789ebb176b37a45848f7ccdd3b3dd',1,'glm']]], ['mediump_5fdvec2',['mediump_dvec2',['../a00282.html#ga2f4f6e9a69a0281d06940fd0990cafc3',1,'glm']]], ['mediump_5fdvec3',['mediump_dvec3',['../a00282.html#ga61c3b1dff4ec7c878af80503141b9f37',1,'glm']]], ['mediump_5fdvec4',['mediump_dvec4',['../a00282.html#ga23a8bca00914a51542bfea13a4778186',1,'glm']]], ['mediump_5ff32',['mediump_f32',['../a00304.html#ga3b27fcd9eaa2757f0aaf6b0ce0d85c80',1,'glm']]], ['mediump_5ff32mat2',['mediump_f32mat2',['../a00304.html#gaf9020c6176a75bc84828ab01ea7dac25',1,'glm']]], ['mediump_5ff32mat2x2',['mediump_f32mat2x2',['../a00304.html#gaa3ca74a44102035b3ffb5c9c52dfdd3f',1,'glm']]], ['mediump_5ff32mat2x3',['mediump_f32mat2x3',['../a00304.html#gad4cc829ab1ad3e05ac0a24828a3c95cf',1,'glm']]], ['mediump_5ff32mat2x4',['mediump_f32mat2x4',['../a00304.html#gae71445ac6cd0b9fba3e5c905cd030fb1',1,'glm']]], ['mediump_5ff32mat3',['mediump_f32mat3',['../a00304.html#gaaaf878d0d7bfc0aac054fe269a886ca8',1,'glm']]], ['mediump_5ff32mat3x2',['mediump_f32mat3x2',['../a00304.html#gaaab39454f56cf9fc6d940358ce5e6a0f',1,'glm']]], ['mediump_5ff32mat3x3',['mediump_f32mat3x3',['../a00304.html#gacd80ad7640e9e32f2edcb8330b1ffe4f',1,'glm']]], ['mediump_5ff32mat3x4',['mediump_f32mat3x4',['../a00304.html#ga8df705d775b776f5ae6b39e2ab892899',1,'glm']]], ['mediump_5ff32mat4',['mediump_f32mat4',['../a00304.html#ga4491baaebbc46a20f1cb5da985576bf4',1,'glm']]], ['mediump_5ff32mat4x2',['mediump_f32mat4x2',['../a00304.html#gab005efe0fa4de1a928e8ddec4bc2c43f',1,'glm']]], 
['mediump_5ff32mat4x3',['mediump_f32mat4x3',['../a00304.html#gade108f16633cf95fa500b5b8c36c8b00',1,'glm']]], ['mediump_5ff32mat4x4',['mediump_f32mat4x4',['../a00304.html#ga936e95b881ecd2d109459ca41913fa99',1,'glm']]], ['mediump_5ff32quat',['mediump_f32quat',['../a00304.html#gaa40c03d52dbfbfaf03e75773b9606ff3',1,'glm']]], ['mediump_5ff32vec1',['mediump_f32vec1',['../a00304.html#gabb33cab7d7c74cc14aa95455d0690865',1,'glm']]], ['mediump_5ff32vec2',['mediump_f32vec2',['../a00304.html#gad6eb11412a3161ca8dc1d63b2a307c4b',1,'glm']]], ['mediump_5ff32vec3',['mediump_f32vec3',['../a00304.html#ga062ffef2973bd8241df993c3b30b327c',1,'glm']]], ['mediump_5ff32vec4',['mediump_f32vec4',['../a00304.html#gad80c84bcd5f585840faa6179f6fd446c',1,'glm']]], ['mediump_5ff64',['mediump_f64',['../a00304.html#ga6d40381d78472553f878f66e443feeef',1,'glm']]], ['mediump_5ff64mat2',['mediump_f64mat2',['../a00304.html#gac1281da5ded55047e8892b0e1f1ae965',1,'glm']]], ['mediump_5ff64mat2x2',['mediump_f64mat2x2',['../a00304.html#ga4fd527644cccbca4cb205320eab026f3',1,'glm']]], ['mediump_5ff64mat2x3',['mediump_f64mat2x3',['../a00304.html#gafd9a6ebc0c7b95f5c581d00d16a17c54',1,'glm']]], ['mediump_5ff64mat2x4',['mediump_f64mat2x4',['../a00304.html#gaf306dd69e53633636aee38cea79d4cb7',1,'glm']]], ['mediump_5ff64mat3',['mediump_f64mat3',['../a00304.html#gad35fb67eb1d03c5a514f0bd7aed1c776',1,'glm']]], ['mediump_5ff64mat3x2',['mediump_f64mat3x2',['../a00304.html#gacd926d36a72433f6cac51dd60fa13107',1,'glm']]], ['mediump_5ff64mat3x3',['mediump_f64mat3x3',['../a00304.html#ga84d88a6e3a54ccd2b67e195af4a4c23e',1,'glm']]], ['mediump_5ff64mat3x4',['mediump_f64mat3x4',['../a00304.html#gad38c544d332b8c4bd0b70b1bd9feccc2',1,'glm']]], ['mediump_5ff64mat4',['mediump_f64mat4',['../a00304.html#gaa805ef691c711dc41e2776cfb67f5cf5',1,'glm']]], ['mediump_5ff64mat4x2',['mediump_f64mat4x2',['../a00304.html#ga17d36f0ea22314117e1cec9594b33945',1,'glm']]], 
['mediump_5ff64mat4x3',['mediump_f64mat4x3',['../a00304.html#ga54697a78f9a4643af6a57fc2e626ec0d',1,'glm']]], ['mediump_5ff64mat4x4',['mediump_f64mat4x4',['../a00304.html#ga66edb8de17b9235029472f043ae107e9',1,'glm']]], ['mediump_5ff64quat',['mediump_f64quat',['../a00304.html#ga5e52f485059ce6e3010c590b882602c9',1,'glm']]], ['mediump_5ff64vec1',['mediump_f64vec1',['../a00304.html#gac30fdf8afa489400053275b6a3350127',1,'glm']]], ['mediump_5ff64vec2',['mediump_f64vec2',['../a00304.html#ga8ebc04ecf6440c4ee24718a16600ce6b',1,'glm']]], ['mediump_5ff64vec3',['mediump_f64vec3',['../a00304.html#ga461c4c7d0757404dd0dba931760b25cf',1,'glm']]], ['mediump_5ff64vec4',['mediump_f64vec4',['../a00304.html#gacfea053bd6bb3eddb996a4f94de22a3e',1,'glm']]], ['mediump_5ffdualquat',['mediump_fdualquat',['../a00317.html#ga4a6b594ff7e81150d8143001367a9431',1,'glm']]], ['mediump_5ffloat32',['mediump_float32',['../a00304.html#ga7812bf00676fb1a86dcd62cca354d2c7',1,'glm']]], ['mediump_5ffloat32_5ft',['mediump_float32_t',['../a00304.html#gae4dee61f8fe1caccec309fbed02faf12',1,'glm']]], ['mediump_5ffloat64',['mediump_float64',['../a00304.html#gab83d8aae6e4f115e97a785e8574a115f',1,'glm']]], ['mediump_5ffloat64_5ft',['mediump_float64_t',['../a00304.html#gac61843e4fa96c1f4e9d8316454f32a8e',1,'glm']]], ['mediump_5ffmat2',['mediump_fmat2',['../a00304.html#ga74e9133378fd0b4da8ac0bc0876702ff',1,'glm']]], ['mediump_5ffmat2x2',['mediump_fmat2x2',['../a00304.html#ga98a687c17b174ea316b5f397b64f44bc',1,'glm']]], ['mediump_5ffmat2x3',['mediump_fmat2x3',['../a00304.html#gaa03f939d90d5ef157df957d93f0b9a64',1,'glm']]], ['mediump_5ffmat2x4',['mediump_fmat2x4',['../a00304.html#ga35223623e9ccebd8a281873b71b7d213',1,'glm']]], ['mediump_5ffmat3',['mediump_fmat3',['../a00304.html#ga80823dfad5dba98512c76af498343847',1,'glm']]], ['mediump_5ffmat3x2',['mediump_fmat3x2',['../a00304.html#ga42569e5b92f8635cedeadb1457ee1467',1,'glm']]], 
['mediump_5ffmat3x3',['mediump_fmat3x3',['../a00304.html#gaa6f526388c74a66b3d52315a14d434ae',1,'glm']]], ['mediump_5ffmat3x4',['mediump_fmat3x4',['../a00304.html#gaefe8ef520c6cb78590ebbefe648da4d4',1,'glm']]], ['mediump_5ffmat4',['mediump_fmat4',['../a00304.html#gac1c38778c0b5a1263f07753c05a4f7b9',1,'glm']]], ['mediump_5ffmat4x2',['mediump_fmat4x2',['../a00304.html#gacea38a85893e17e6834b6cb09a9ad0cf',1,'glm']]], ['mediump_5ffmat4x3',['mediump_fmat4x3',['../a00304.html#ga41ad497f7eae211556aefd783cb02b90',1,'glm']]], ['mediump_5ffmat4x4',['mediump_fmat4x4',['../a00304.html#ga22e27beead07bff4d5ce9d6065a57279',1,'glm']]], ['mediump_5ffvec1',['mediump_fvec1',['../a00304.html#ga367964fc2133d3f1b5b3755ff9cf6c9b',1,'glm']]], ['mediump_5ffvec2',['mediump_fvec2',['../a00304.html#ga44bfa55cda5dbf53f24a1fb7610393d6',1,'glm']]], ['mediump_5ffvec3',['mediump_fvec3',['../a00304.html#ga999dc6703ad16e3d3c26b74ea8083f07',1,'glm']]], ['mediump_5ffvec4',['mediump_fvec4',['../a00304.html#ga1bed890513c0f50b7e7ba4f7f359dbfb',1,'glm']]], ['mediump_5fi16',['mediump_i16',['../a00304.html#ga62a17cddeb4dffb4e18fe3aea23f051a',1,'glm']]], ['mediump_5fi16vec1',['mediump_i16vec1',['../a00304.html#gacc44265ed440bf5e6e566782570de842',1,'glm']]], ['mediump_5fi16vec2',['mediump_i16vec2',['../a00304.html#ga4b5e2c9aaa5d7717bf71179aefa12e88',1,'glm']]], ['mediump_5fi16vec3',['mediump_i16vec3',['../a00304.html#ga3be6c7fc5fe08fa2274bdb001d5f2633',1,'glm']]], ['mediump_5fi16vec4',['mediump_i16vec4',['../a00304.html#gaf52982bb23e3a3772649b2c5bb84b107',1,'glm']]], ['mediump_5fi32',['mediump_i32',['../a00304.html#gaf5e94bf2a20af7601787c154751dc2e1',1,'glm']]], ['mediump_5fi32vec1',['mediump_i32vec1',['../a00304.html#ga46a57f71e430637559097a732b550a7e',1,'glm']]], ['mediump_5fi32vec2',['mediump_i32vec2',['../a00304.html#ga20bf224bd4f8a24ecc4ed2004a40c219',1,'glm']]], ['mediump_5fi32vec3',['mediump_i32vec3',['../a00304.html#ga13a221b910aa9eb1b04ca1c86e81015a',1,'glm']]], 
['mediump_5fi32vec4',['mediump_i32vec4',['../a00304.html#ga6addd4dfee87fc09ab9525e3d07db4c8',1,'glm']]], ['mediump_5fi64',['mediump_i64',['../a00304.html#ga3ebcb1f6d8d8387253de8bccb058d77f',1,'glm']]], ['mediump_5fi64vec1',['mediump_i64vec1',['../a00304.html#ga8343e9d244fb17a5bbf0d94d36b3695e',1,'glm']]], ['mediump_5fi64vec2',['mediump_i64vec2',['../a00304.html#ga2c94aeae3457325944ca1059b0b68330',1,'glm']]], ['mediump_5fi64vec3',['mediump_i64vec3',['../a00304.html#ga8089722ffdf868cdfe721dea1fb6a90e',1,'glm']]], ['mediump_5fi64vec4',['mediump_i64vec4',['../a00304.html#gabf1f16c5ab8cb0484bd1e846ae4368f1',1,'glm']]], ['mediump_5fi8',['mediump_i8',['../a00304.html#gacf1ded173e1e2d049c511d095b259e21',1,'glm']]], ['mediump_5fi8vec1',['mediump_i8vec1',['../a00304.html#ga85e8893f4ae3630065690a9000c0c483',1,'glm']]], ['mediump_5fi8vec2',['mediump_i8vec2',['../a00304.html#ga2a8bdc32184ea0a522ef7bd90640cf67',1,'glm']]], ['mediump_5fi8vec3',['mediump_i8vec3',['../a00304.html#ga6dd1c1618378c6f94d522a61c28773c9',1,'glm']]], ['mediump_5fi8vec4',['mediump_i8vec4',['../a00304.html#gac7bb04fb857ef7b520e49f6c381432be',1,'glm']]], ['mediump_5fimat2',['mediump_imat2',['../a00294.html#ga20f4cc7ab23e2aa1f4db9fdb5496d378',1,'glm']]], ['mediump_5fimat2x2',['mediump_imat2x2',['../a00294.html#ga4b2aeb11a329940721dda9583e71f856',1,'glm']]], ['mediump_5fimat2x3',['mediump_imat2x3',['../a00294.html#ga74362470ba99843ac70aee5ac38cc674',1,'glm']]], ['mediump_5fimat2x4',['mediump_imat2x4',['../a00294.html#ga8da25cd380ba30fc5b68a4687deb3e09',1,'glm']]], ['mediump_5fimat3',['mediump_imat3',['../a00294.html#ga6c63bdc736efd3466e0730de0251cb71',1,'glm']]], ['mediump_5fimat3x2',['mediump_imat3x2',['../a00294.html#gac0b4e42d648fb3eaf4bb88da82ecc809',1,'glm']]], ['mediump_5fimat3x3',['mediump_imat3x3',['../a00294.html#gad99cc2aad8fc57f068cfa7719dbbea12',1,'glm']]], ['mediump_5fimat3x4',['mediump_imat3x4',['../a00294.html#ga67689a518b181a26540bc44a163525cd',1,'glm']]], 
['mediump_5fimat4',['mediump_imat4',['../a00294.html#gaf348552978553630d2a00b78eb887ced',1,'glm']]], ['mediump_5fimat4x2',['mediump_imat4x2',['../a00294.html#ga8b2d35816f7103f0f4c82dd2f27571fc',1,'glm']]], ['mediump_5fimat4x3',['mediump_imat4x3',['../a00294.html#ga5b10acc696759e03f6ab918f4467e94c',1,'glm']]], ['mediump_5fimat4x4',['mediump_imat4x4',['../a00294.html#ga2596869d154dec1180beadbb9df80501',1,'glm']]], ['mediump_5fint16',['mediump_int16',['../a00304.html#gadff3608baa4b5bd3ed28f95c1c2c345d',1,'glm']]], ['mediump_5fint16_5ft',['mediump_int16_t',['../a00304.html#ga80e72fe94c88498537e8158ba7591c54',1,'glm']]], ['mediump_5fint32',['mediump_int32',['../a00304.html#ga5244cef85d6e870e240c76428a262ae8',1,'glm']]], ['mediump_5fint32_5ft',['mediump_int32_t',['../a00304.html#ga26fc7ced1ad7ca5024f1c973c8dc9180',1,'glm']]], ['mediump_5fint64',['mediump_int64',['../a00304.html#ga7b968f2b86a0442a89c7359171e1d866',1,'glm']]], ['mediump_5fint64_5ft',['mediump_int64_t',['../a00304.html#gac3bc41bcac61d1ba8f02a6f68ce23f64',1,'glm']]], ['mediump_5fint8',['mediump_int8',['../a00304.html#ga6fbd69cbdaa44345bff923a2cf63de7e',1,'glm']]], ['mediump_5fint8_5ft',['mediump_int8_t',['../a00304.html#ga6d7b3789ecb932c26430009478cac7ae',1,'glm']]], ['mediump_5fivec1',['mediump_ivec1',['../a00273.html#gad628c608970b3d0aa6cfb63ce6e53e56',1,'glm']]], ['mediump_5fivec2',['mediump_ivec2',['../a00282.html#gac57496299d276ed97044074097bd5e2c',1,'glm']]], ['mediump_5fivec3',['mediump_ivec3',['../a00282.html#ga27cfb51e0dbe15bba27a14a8590e8466',1,'glm']]], ['mediump_5fivec4',['mediump_ivec4',['../a00282.html#ga92a204c37e66ac6c1dc7ae91142f2ea5',1,'glm']]], ['mediump_5fmat2',['mediump_mat2',['../a00284.html#ga745452bd9c89f5ad948203e4fb4b4ea3',1,'glm']]], ['mediump_5fmat2x2',['mediump_mat2x2',['../a00284.html#ga0cdf57d29f9448864237b2fb3e39aa1d',1,'glm']]], ['mediump_5fmat2x3',['mediump_mat2x3',['../a00284.html#ga497d513d552d927537d61fa11e3701ab',1,'glm']]], 
['mediump_5fmat2x4',['mediump_mat2x4',['../a00284.html#gae7b75ea2e09fa686a79bbe9b6ca68ee5',1,'glm']]], ['mediump_5fmat3',['mediump_mat3',['../a00284.html#ga5aae49834d02732942f44e61d7bce136',1,'glm']]], ['mediump_5fmat3x2',['mediump_mat3x2',['../a00284.html#ga9e1c9ee65fef547bde793e69723e24eb',1,'glm']]], ['mediump_5fmat3x3',['mediump_mat3x3',['../a00284.html#gabc0f2f4ad21c90b341881cf056f8650e',1,'glm']]], ['mediump_5fmat3x4',['mediump_mat3x4',['../a00284.html#gaa669c6675c3405f76c0b14020d1c0d61',1,'glm']]], ['mediump_5fmat4',['mediump_mat4',['../a00284.html#gab8531bc3f269aa45835cd6e1972b7fc7',1,'glm']]], ['mediump_5fmat4x2',['mediump_mat4x2',['../a00284.html#gad75706b70545412ba9ac27d5ee210f66',1,'glm']]], ['mediump_5fmat4x3',['mediump_mat4x3',['../a00284.html#ga4a1440b5ea3cf84d5b06c79b534bd770',1,'glm']]], ['mediump_5fmat4x4',['mediump_mat4x4',['../a00284.html#ga15bca2b70917d9752231160d9da74b01',1,'glm']]], ['mediump_5fquat',['mediump_quat',['../a00253.html#gad2a59409de1bb12ccb6eb692ee7e9d8d',1,'glm']]], ['mediump_5fu16',['mediump_u16',['../a00304.html#ga9df98857be695d5a30cb30f5bfa38a80',1,'glm']]], ['mediump_5fu16vec1',['mediump_u16vec1',['../a00304.html#ga400ce8cc566de093a9b28e59e220d6e4',1,'glm']]], ['mediump_5fu16vec2',['mediump_u16vec2',['../a00304.html#ga429c201b3e92c90b4ef4356f2be52ee1',1,'glm']]], ['mediump_5fu16vec3',['mediump_u16vec3',['../a00304.html#gac9ba20234b0c3751d45ce575fc71e551',1,'glm']]], ['mediump_5fu16vec4',['mediump_u16vec4',['../a00304.html#ga5793393686ce5bd2d5968ff9144762b8',1,'glm']]], ['mediump_5fu32',['mediump_u32',['../a00304.html#ga1bd0e914158bf03135f8a317de6debe9',1,'glm']]], ['mediump_5fu32vec1',['mediump_u32vec1',['../a00304.html#ga8a11ccd2e38f674bbf3c2d1afc232aee',1,'glm']]], ['mediump_5fu32vec2',['mediump_u32vec2',['../a00304.html#ga94f74851fce338549c705b5f0d601c4f',1,'glm']]], ['mediump_5fu32vec3',['mediump_u32vec3',['../a00304.html#ga012c24c8fc69707b90260474c70275a2',1,'glm']]], 
['mediump_5fu32vec4',['mediump_u32vec4',['../a00304.html#ga5d43ee8b5dbaa06c327b03b83682598a',1,'glm']]], ['mediump_5fu64',['mediump_u64',['../a00304.html#ga2af9490085ae3bdf36a544e9dd073610',1,'glm']]], ['mediump_5fu64vec1',['mediump_u64vec1',['../a00304.html#ga659f372ccb8307d5db5beca942cde5e8',1,'glm']]], ['mediump_5fu64vec2',['mediump_u64vec2',['../a00304.html#ga73a08ef5a74798f3a1a99250b5f86a7d',1,'glm']]], ['mediump_5fu64vec3',['mediump_u64vec3',['../a00304.html#ga1900c6ab74acd392809425953359ef52',1,'glm']]], ['mediump_5fu64vec4',['mediump_u64vec4',['../a00304.html#gaec7ee455cb379ec2993e81482123e1cc',1,'glm']]], ['mediump_5fu8',['mediump_u8',['../a00304.html#gad1213a22bbb9e4107f07eaa4956f8281',1,'glm']]], ['mediump_5fu8vec1',['mediump_u8vec1',['../a00304.html#ga4a43050843b141bdc7e85437faef6f55',1,'glm']]], ['mediump_5fu8vec2',['mediump_u8vec2',['../a00304.html#ga907f85d4a0eac3d8aaf571e5c2647194',1,'glm']]], ['mediump_5fu8vec3',['mediump_u8vec3',['../a00304.html#gaddc6f7748b699254942c5216b68f8f7f',1,'glm']]], ['mediump_5fu8vec4',['mediump_u8vec4',['../a00304.html#gaaf4ee3b76d43d98da02ec399b99bda4b',1,'glm']]], ['mediump_5fuint16',['mediump_uint16',['../a00304.html#ga2885a6c89916911e418c06bb76b9bdbb',1,'glm']]], ['mediump_5fuint16_5ft',['mediump_uint16_t',['../a00304.html#ga3963b1050fc65a383ee28e3f827b6e3e',1,'glm']]], ['mediump_5fuint32',['mediump_uint32',['../a00304.html#ga34dd5ec1988c443bae80f1b20a8ade5f',1,'glm']]], ['mediump_5fuint32_5ft',['mediump_uint32_t',['../a00304.html#gaf4dae276fd29623950de14a6ca2586b5',1,'glm']]], ['mediump_5fuint64',['mediump_uint64',['../a00304.html#ga30652709815ad9404272a31957daa59e',1,'glm']]], ['mediump_5fuint64_5ft',['mediump_uint64_t',['../a00304.html#ga9b170dd4a8f38448a2dc93987c7875e9',1,'glm']]], ['mediump_5fuint8',['mediump_uint8',['../a00304.html#ga1fa92a233b9110861cdbc8c2ccf0b5a3',1,'glm']]], ['mediump_5fuint8_5ft',['mediump_uint8_t',['../a00304.html#gadfe65c78231039e90507770db50c98c7',1,'glm']]], 
['mediump_5fumat2',['mediump_umat2',['../a00294.html#ga43041378b3410ea951b7de0dfd2bc7ee',1,'glm']]], ['mediump_5fumat2x2',['mediump_umat2x2',['../a00294.html#ga3b209b1b751f041422137e3c065dfa98',1,'glm']]], ['mediump_5fumat2x3',['mediump_umat2x3',['../a00294.html#gaee2c1f13b41f4c92ea5b3efe367a1306',1,'glm']]], ['mediump_5fumat2x4',['mediump_umat2x4',['../a00294.html#gae1317ddca16d01e119a40b7f0ee85f95',1,'glm']]], ['mediump_5fumat3',['mediump_umat3',['../a00294.html#ga1730dbe3c67801f53520b06d1aa0a34a',1,'glm']]], ['mediump_5fumat3x2',['mediump_umat3x2',['../a00294.html#gaadc28bfdc8ebca81ae85121b11994970',1,'glm']]], ['mediump_5fumat3x3',['mediump_umat3x3',['../a00294.html#ga48f2fc38d3f7fab3cfbc961278ced53d',1,'glm']]], ['mediump_5fumat3x4',['mediump_umat3x4',['../a00294.html#ga78009a1e4ca64217e46b418535e52546',1,'glm']]], ['mediump_5fumat4',['mediump_umat4',['../a00294.html#ga5087c2beb26a11d9af87432e554cf9d1',1,'glm']]], ['mediump_5fumat4x2',['mediump_umat4x2',['../a00294.html#gaf35aefd81cc13718f6b059623f7425fa',1,'glm']]], ['mediump_5fumat4x3',['mediump_umat4x3',['../a00294.html#ga4e1bed14fbc7f4b376aaed064f89f0fb',1,'glm']]], ['mediump_5fumat4x4',['mediump_umat4x4',['../a00294.html#gaa9428fc8430dc552aad920653f822ef3',1,'glm']]], ['mediump_5fuvec1',['mediump_uvec1',['../a00277.html#ga38fde73aaf1420175ece8d4882558a3f',1,'glm']]], ['mediump_5fuvec2',['mediump_uvec2',['../a00282.html#gaa3b4f7806dad03d83bb3da0baa1e3b9b',1,'glm']]], ['mediump_5fuvec3',['mediump_uvec3',['../a00282.html#ga83b7df38feefbb357f3673d950fafef7',1,'glm']]], ['mediump_5fuvec4',['mediump_uvec4',['../a00282.html#ga64ed0deb6573375b7016daf82ffd53a7',1,'glm']]], ['mediump_5fvec1',['mediump_vec1',['../a00271.html#ga645f53e6b8056609023a894b4e2beef4',1,'glm']]], ['mediump_5fvec2',['mediump_vec2',['../a00282.html#gabc61976261c406520c7a8e4d946dc3f0',1,'glm']]], ['mediump_5fvec3',['mediump_vec3',['../a00282.html#ga2384e263df19f1404b733016eff78fca',1,'glm']]], 
['mediump_5fvec4',['mediump_vec4',['../a00282.html#ga5c6978d3ffba06738416a33083853fc0',1,'glm']]], ['min',['min',['../a00241.html#ga6cf8098827054a270ee36b18e30d471d',1,'glm::min(genType x, genType y)'],['../a00241.html#gaa7d015eba1f9f48519251f4abe69b14d',1,'glm::min(vec< L, T, Q > const &x, T y)'],['../a00241.html#ga31f49ef9e7d1beb003160c5e009b0c48',1,'glm::min(vec< L, T, Q > const &x, vec< L, T, Q > const &y)'],['../a00258.html#ga420b37cbd98c395b93dab0278305cd46',1,'glm::min(T a, T b, T c)'],['../a00258.html#ga0d24a9acb8178df77e4aff90cbb2010d',1,'glm::min(T a, T b, T c, T d)'],['../a00267.html#ga3cd83d80fd4f433d8e333593ec56dddf',1,'glm::min(vec< L, T, Q > const &a, vec< L, T, Q > const &b, vec< L, T, Q > const &c)'],['../a00267.html#gab66920ed064ab518d6859c5a889c4be4',1,'glm::min(vec< L, T, Q > const &a, vec< L, T, Q > const &b, vec< L, T, Q > const &c, vec< L, T, Q > const &d)'],['../a00321.html#ga713d3f9b3e76312c0d314e0c8611a6a6',1,'glm::min(T const &x, T const &y, T const &z)'],['../a00321.html#ga74d1a96e7cdbac40f6d35142d3bcbbd4',1,'glm::min(C< T > const &x, typename C< T >::T const &y, typename C< T >::T const &z)'],['../a00321.html#ga42b5c3fc027fd3d9a50d2ccc9126d9f0',1,'glm::min(C< T > const &x, C< T > const &y, C< T > const &z)'],['../a00321.html#ga95466987024d03039607f09e69813d69',1,'glm::min(T const &x, T const &y, T const &z, T const &w)'],['../a00321.html#ga4fe35dd31dd0c45693c9b60b830b8d47',1,'glm::min(C< T > const &x, typename C< T >::T const &y, typename C< T >::T const &z, typename C< T >::T const &w)'],['../a00321.html#ga7471ea4159eed8dd9ea4ac5d46c2fead',1,'glm::min(C< T > const &x, C< T > const &y, C< T > const &z, C< T > const &w)']]], ['mirrorclamp',['mirrorClamp',['../a00369.html#gaa6856a0a048d2749252848da35e10c8b',1,'glm']]], ['mirrorrepeat',['mirrorRepeat',['../a00369.html#ga16a89b0661b60d5bea85137bbae74d73',1,'glm']]], ['mix',['mix',['../a00241.html#ga8e93f374aae27d1a88b921860351f8d4',1,'glm::mix(genTypeT x, genTypeT y, genTypeU 
a)'],['../a00248.html#gafbfe587b8da11fb89a30c3d67dd5ccc2',1,'glm::mix(qua< T, Q > const &x, qua< T, Q > const &y, T a)']]], ['mixed_5fproduct_2ehpp',['mixed_product.hpp',['../a00111.html',1,'']]], ['mixedproduct',['mixedProduct',['../a00342.html#gab3c6048fbb67f7243b088a4fee48d020',1,'glm']]], ['mod',['mod',['../a00241.html#ga9b197a452cd52db3c5c18bac72bd7798',1,'glm::mod(vec< L, T, Q > const &x, vec< L, T, Q > const &y)'],['../a00330.html#gaabfbb41531ab7ad8d06fc176edfba785',1,'glm::mod(int x, int y)'],['../a00330.html#ga63fc8d63e7da1706439233b386ba8b6f',1,'glm::mod(uint x, uint y)']]], ['modf',['modf',['../a00241.html#ga85e33f139b8db1b39b590a5713b9e679',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_b.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_b.js ================================================ var searchData= [ ['nextmultiple',['nextMultiple',['../a00261.html#gab770a3835c44c8a6fd225be4f4e6b317',1,'glm::nextMultiple(genIUType v, genIUType Multiple)'],['../a00274.html#gace38d00601cbf49cd4dc03f003ab42b7',1,'glm::nextMultiple(vec< L, T, Q > const &v, T Multiple)'],['../a00274.html#gacda365edad320c7aff19cc283a3b8ca2',1,'glm::nextMultiple(vec< L, T, Q > const &v, vec< L, T, Q > const &Multiple)']]], ['nextpoweroftwo',['nextPowerOfTwo',['../a00261.html#ga3a37c2f2fd347886c9af6a3ca3db04dc',1,'glm::nextPowerOfTwo(genIUType v)'],['../a00274.html#gabba67f8aac9915e10fca727277274502',1,'glm::nextPowerOfTwo(vec< L, T, Q > const &v)']]], ['nlz',['nlz',['../a00330.html#ga78dff8bdb361bf0061194c93e003d189',1,'glm']]], ['noise_2ehpp',['noise.hpp',['../a00112.html',1,'']]], ['norm_2ehpp',['norm.hpp',['../a00113.html',1,'']]], ['normal_2ehpp',['normal.hpp',['../a00114.html',1,'']]], ['normalize',['normalize',['../a00254.html#gabf30e3263fffe8dcc6659aea76ae8927',1,'glm::normalize(qua< T, Q > const &q)'],['../a00279.html#ga3b8d3dcae77870781392ed2902cce597',1,'glm::normalize(vec< L, T, Q > const &x)'],['../a00317.html#ga299b8641509606b1958ffa104a162cfe',1,'glm::normalize(tdualquat< T, Q > const &q)']]], ['normalize_5fdot_2ehpp',['normalize_dot.hpp',['../a00115.html',1,'']]], ['normalizedot',['normalizeDot',['../a00345.html#gacb140a2b903115d318c8b0a2fb5a5daa',1,'glm']]], ['not_5f',['not_',['../a00374.html#ga610fcd175791fd246e328ffee10dbf1e',1,'glm']]], ['notequal',['notEqual',['../a00246.html#ga8504f18a7e2bf315393032c2137dad83',1,'glm::notEqual(mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y)'],['../a00246.html#ga29071147d118569344d10944b7d5c378',1,'glm::notEqual(mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, T 
epsilon)'],['../a00246.html#gad7959e14fbc35b4ed2617daf4d67f6cd',1,'glm::notEqual(mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, vec< C, T, Q > const &epsilon)'],['../a00246.html#gaa1cd7fc228ef6e26c73583fd0d9c6552',1,'glm::notEqual(mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, int ULPs)'],['../a00246.html#gaa5517341754149ffba742d230afd1f32',1,'glm::notEqual(mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, vec< C, int, Q > const &ULPs)'],['../a00255.html#gab441cee0de5867a868f3a586ee68cfe1',1,'glm::notEqual(qua< T, Q > const &x, qua< T, Q > const &y)'],['../a00255.html#ga5117a44c1bf21af857cd23e44a96d313',1,'glm::notEqual(qua< T, Q > const &x, qua< T, Q > const &y, T epsilon)'],['../a00275.html#ga4a99cc41341567567a608719449c1fac',1,'glm::notEqual(vec< L, T, Q > const &x, vec< L, T, Q > const &y, T epsilon)'],['../a00275.html#ga417cf51304359db18e819dda9bce5767',1,'glm::notEqual(vec< L, T, Q > const &x, vec< L, T, Q > const &y, vec< L, T, Q > const &epsilon)'],['../a00275.html#ga8b5c2c3f83422ae5b71fa960d03b0339',1,'glm::notEqual(vec< L, T, Q > const &x, vec< L, T, Q > const &y, int ULPs)'],['../a00275.html#ga0b15ffe32987a6029b14398eb0def01a',1,'glm::notEqual(vec< L, T, Q > const &x, vec< L, T, Q > const &y, vec< L, int, Q > const &ULPs)'],['../a00374.html#ga17c19dc1b76cd5aef63e9e7ff3aa3c27',1,'glm::notEqual(vec< L, T, Q > const &x, vec< L, T, Q > const &y)']]], ['number_5fprecision_2ehpp',['number_precision.hpp',['../a00116.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_c.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_c.js ================================================ var searchData= [ ['opengl_20mathematics_20_28glm_29',['OpenGL Mathematics (GLM)',['../index.html',1,'']]], ['one',['one',['../a00290.html#ga39c2fb227631ca25894326529bdd1ee5',1,'glm']]], ['one_5fover_5fpi',['one_over_pi',['../a00290.html#ga555150da2b06d23c8738981d5013e0eb',1,'glm']]], ['one_5fover_5froot_5ftwo',['one_over_root_two',['../a00290.html#ga788fa23a0939bac4d1d0205fb4f35818',1,'glm']]], ['one_5fover_5ftwo_5fpi',['one_over_two_pi',['../a00290.html#ga7c922b427986cbb2e4c6ac69874eefbc',1,'glm']]], ['openbounded',['openBounded',['../a00314.html#gafd303042ba2ba695bf53b2315f53f93f',1,'glm']]], ['optimum_5fpow_2ehpp',['optimum_pow.hpp',['../a00117.html',1,'']]], ['orientate2',['orientate2',['../a00319.html#gae16738a9f1887cf4e4db6a124637608d',1,'glm']]], ['orientate3',['orientate3',['../a00319.html#ga7ca98668a5786f19c7b38299ebbc9b4c',1,'glm::orientate3(T const &angle)'],['../a00319.html#ga7238c8e15c7720e3ca6a45ab151eeabb',1,'glm::orientate3(vec< 3, T, Q > const &angles)']]], ['orientate4',['orientate4',['../a00319.html#ga4a044653f71a4ecec68e0b623382b48a',1,'glm']]], ['orientation',['orientation',['../a00356.html#ga1a32fceb71962e6160e8af295c91930a',1,'glm']]], ['orientedangle',['orientedAngle',['../a00367.html#ga9556a803dce87fe0f42fdabe4ebba1d5',1,'glm::orientedAngle(vec< 2, T, Q > const &x, vec< 2, T, Q > const &y)'],['../a00367.html#ga706fce3d111f485839756a64f5a48553',1,'glm::orientedAngle(vec< 3, T, Q > const &x, vec< 3, T, Q > const &y, vec< 3, T, Q > const &ref)']]], ['ortho',['ortho',['../a00243.html#gae5b6b40ed882cd56cd7cb97701909c06',1,'glm::ortho(T left, T right, T bottom, T top)'],['../a00243.html#ga6615d8a9d39432e279c4575313ecb456',1,'glm::ortho(T left, T right, T bottom, T top, T zNear, T zFar)']]], 
['ortholh',['orthoLH',['../a00243.html#gad122a79aadaa5529cec4ac197203db7f',1,'glm']]], ['ortholh_5fno',['orthoLH_NO',['../a00243.html#ga526416735ea7c5c5cd255bf99d051bd8',1,'glm']]], ['ortholh_5fzo',['orthoLH_ZO',['../a00243.html#gab37ac3eec8d61f22fceda7775e836afa',1,'glm']]], ['orthono',['orthoNO',['../a00243.html#gab219d28a8f178d4517448fcd6395a073',1,'glm']]], ['orthonormalize',['orthonormalize',['../a00348.html#ga4cab5d698e6e2eccea30c8e81c74371f',1,'glm::orthonormalize(mat< 3, 3, T, Q > const &m)'],['../a00348.html#gac3bc7ef498815026bc3d361ae0b7138e',1,'glm::orthonormalize(vec< 3, T, Q > const &x, vec< 3, T, Q > const &y)']]], ['orthonormalize_2ehpp',['orthonormalize.hpp',['../a00118.html',1,'']]], ['orthorh',['orthoRH',['../a00243.html#ga16264c9b838edeb9dd1de7a1010a13a4',1,'glm']]], ['orthorh_5fno',['orthoRH_NO',['../a00243.html#gaa2f7a1373170bf0a4a2ddef9b0706780',1,'glm']]], ['orthorh_5fzo',['orthoRH_ZO',['../a00243.html#ga9aea2e515b08fd7dce47b7b6ec34d588',1,'glm']]], ['orthozo',['orthoZO',['../a00243.html#gaea11a70817af2c0801c869dea0b7a5bc',1,'glm']]], ['outerproduct',['outerProduct',['../a00371.html#gac29fb7bae75a8e4c1b74cbbf85520e50',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_d.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_d.js ================================================ var searchData= [ ['packdouble2x32',['packDouble2x32',['../a00372.html#gaa916ca426b2bb0343ba17e3753e245c2',1,'glm']]], ['packed_5fbvec1',['packed_bvec1',['../a00303.html#ga88632cea9008ac0ac1388e94e804a53c',1,'glm']]], ['packed_5fbvec2',['packed_bvec2',['../a00303.html#gab85245913eaa40ab82adabcae37086cb',1,'glm']]], ['packed_5fbvec3',['packed_bvec3',['../a00303.html#ga0c48f9417f649e27f3fb0c9f733a18bd',1,'glm']]], ['packed_5fbvec4',['packed_bvec4',['../a00303.html#ga3180d7db84a74c402157df3bbc0ae3ed',1,'glm']]], ['packed_5fdmat2',['packed_dmat2',['../a00303.html#gad87408a8350918711f845f071bbe43fb',1,'glm']]], ['packed_5fdmat2x2',['packed_dmat2x2',['../a00303.html#gaaa33d8e06657a777efb0c72c44ce87a9',1,'glm']]], ['packed_5fdmat2x3',['packed_dmat2x3',['../a00303.html#gac3a5315f588ba04ad255188071ec4e22',1,'glm']]], ['packed_5fdmat2x4',['packed_dmat2x4',['../a00303.html#gae398fc3156f51d3684b08f62c1a5a6d4',1,'glm']]], ['packed_5fdmat3',['packed_dmat3',['../a00303.html#ga03dfc90d539cc87ea3a15a9caa5d2245',1,'glm']]], ['packed_5fdmat3x2',['packed_dmat3x2',['../a00303.html#gae36de20a4c0e0b1444b7903ae811d94e',1,'glm']]], ['packed_5fdmat3x3',['packed_dmat3x3',['../a00303.html#gab9b909f1392d86854334350efcae85f5',1,'glm']]], ['packed_5fdmat3x4',['packed_dmat3x4',['../a00303.html#ga199131fd279c92c2ac12df6d978f1dd6',1,'glm']]], ['packed_5fdmat4',['packed_dmat4',['../a00303.html#gada980a3485640aa8151f368f17ad3086',1,'glm']]], ['packed_5fdmat4x2',['packed_dmat4x2',['../a00303.html#ga6dc65249730698d3cc9ac5d7e1bc4d72',1,'glm']]], ['packed_5fdmat4x3',['packed_dmat4x3',['../a00303.html#gadf202aaa9ed71c09f9bbe347e43f8764',1,'glm']]], ['packed_5fdmat4x4',['packed_dmat4x4',['../a00303.html#gae20617435a6d042d7c38da2badd64a09',1,'glm']]], 
['packed_5fdvec1',['packed_dvec1',['../a00303.html#ga532f0c940649b1ee303acd572fc35531',1,'glm']]], ['packed_5fdvec2',['packed_dvec2',['../a00303.html#ga5c194b11fbda636f2ab20c3bd0079196',1,'glm']]], ['packed_5fdvec3',['packed_dvec3',['../a00303.html#ga0581ea552d86b2b5de7a2804bed80e72',1,'glm']]], ['packed_5fdvec4',['packed_dvec4',['../a00303.html#gae8a9b181f9dc813ad6e125a52b14b935',1,'glm']]], ['packed_5fhighp_5fbvec1',['packed_highp_bvec1',['../a00303.html#ga439e97795314b81cd15abd4e5c2e6e7a',1,'glm']]], ['packed_5fhighp_5fbvec2',['packed_highp_bvec2',['../a00303.html#gad791d671f4fcf1ed1ea41f752916b70a',1,'glm']]], ['packed_5fhighp_5fbvec3',['packed_highp_bvec3',['../a00303.html#ga6a5a3250b57dfadc66735bc72911437f',1,'glm']]], ['packed_5fhighp_5fbvec4',['packed_highp_bvec4',['../a00303.html#ga09f517d88b996ef1b2f42fd54222b82d',1,'glm']]], ['packed_5fhighp_5fdmat2',['packed_highp_dmat2',['../a00303.html#gae29686632fd05efac0675d9a6370d77b',1,'glm']]], ['packed_5fhighp_5fdmat2x2',['packed_highp_dmat2x2',['../a00303.html#ga22bd6382b16052e301edbfc031b9f37a',1,'glm']]], ['packed_5fhighp_5fdmat2x3',['packed_highp_dmat2x3',['../a00303.html#ga999d82719696d4c59f4d236dd08f273d',1,'glm']]], ['packed_5fhighp_5fdmat2x4',['packed_highp_dmat2x4',['../a00303.html#ga6998ac2a8d7fe456b651a6336ed26bb0',1,'glm']]], ['packed_5fhighp_5fdmat3',['packed_highp_dmat3',['../a00303.html#gadac7c040c4810dd52b36fcd09d097400',1,'glm']]], ['packed_5fhighp_5fdmat3x2',['packed_highp_dmat3x2',['../a00303.html#gab462744977beb85fb5c782bc2eea7b15',1,'glm']]], ['packed_5fhighp_5fdmat3x3',['packed_highp_dmat3x3',['../a00303.html#ga49e5a709d098523823b2f824e48672a6',1,'glm']]], ['packed_5fhighp_5fdmat3x4',['packed_highp_dmat3x4',['../a00303.html#ga2c67b3b0adab71c8680c3d819f1fa9b7',1,'glm']]], ['packed_5fhighp_5fdmat4',['packed_highp_dmat4',['../a00303.html#ga6718822cd7af005a9b5bd6ee282f6ba6',1,'glm']]], 
['packed_5fhighp_5fdmat4x2',['packed_highp_dmat4x2',['../a00303.html#ga12e39e797fb724a5b51fcbea2513a7da',1,'glm']]], ['packed_5fhighp_5fdmat4x3',['packed_highp_dmat4x3',['../a00303.html#ga79c2e9f82e67963c1ecad0ad6d0ec72e',1,'glm']]], ['packed_5fhighp_5fdmat4x4',['packed_highp_dmat4x4',['../a00303.html#ga2df58e03e5afded28707b4f7d077afb4',1,'glm']]], ['packed_5fhighp_5fdvec1',['packed_highp_dvec1',['../a00303.html#gab472b2d917b5e6efd76e8c7dbfbbf9f1',1,'glm']]], ['packed_5fhighp_5fdvec2',['packed_highp_dvec2',['../a00303.html#ga5b2dc48fa19b684d207d69c6b145eb63',1,'glm']]], ['packed_5fhighp_5fdvec3',['packed_highp_dvec3',['../a00303.html#gaaac6b356ef00154da41aaae7d1549193',1,'glm']]], ['packed_5fhighp_5fdvec4',['packed_highp_dvec4',['../a00303.html#ga81b5368fe485e2630aa9b44832d592e7',1,'glm']]], ['packed_5fhighp_5fivec1',['packed_highp_ivec1',['../a00303.html#ga7245acc887a5438f46fd85fdf076bb3b',1,'glm']]], ['packed_5fhighp_5fivec2',['packed_highp_ivec2',['../a00303.html#ga54f368ec6b514a5aa4f28d40e6f93ef7',1,'glm']]], ['packed_5fhighp_5fivec3',['packed_highp_ivec3',['../a00303.html#ga865a9c7bb22434b1b8c5ac31e164b628',1,'glm']]], ['packed_5fhighp_5fivec4',['packed_highp_ivec4',['../a00303.html#gad6f1b4e3a51c2c051814b60d5d1b8895',1,'glm']]], ['packed_5fhighp_5fmat2',['packed_highp_mat2',['../a00303.html#ga2f2d913d8cca2f935b2522964408c0b2',1,'glm']]], ['packed_5fhighp_5fmat2x2',['packed_highp_mat2x2',['../a00303.html#ga245c12d2daf67feecaa2d3277c8f6661',1,'glm']]], ['packed_5fhighp_5fmat2x3',['packed_highp_mat2x3',['../a00303.html#ga069cc8892aadae144c00f35297617d44',1,'glm']]], ['packed_5fhighp_5fmat2x4',['packed_highp_mat2x4',['../a00303.html#ga6904d09b62141d09712b76983892f95b',1,'glm']]], ['packed_5fhighp_5fmat3',['packed_highp_mat3',['../a00303.html#gabdd5fbffe8b8b8a7b33523f25b120dbe',1,'glm']]], ['packed_5fhighp_5fmat3x2',['packed_highp_mat3x2',['../a00303.html#ga2624719cb251d8de8cad1beaefc3a3f9',1,'glm']]], 
['packed_5fhighp_5fmat3x3',['packed_highp_mat3x3',['../a00303.html#gaf2e07527d678440bf0c20adbeb9177c5',1,'glm']]], ['packed_5fhighp_5fmat3x4',['packed_highp_mat3x4',['../a00303.html#ga72102fa6ac2445aa3bb203128ad52449',1,'glm']]], ['packed_5fhighp_5fmat4',['packed_highp_mat4',['../a00303.html#ga253e8379b08d2dc6fe2800b2fb913203',1,'glm']]], ['packed_5fhighp_5fmat4x2',['packed_highp_mat4x2',['../a00303.html#gae389c2071cf3cdb33e7812c6fd156710',1,'glm']]], ['packed_5fhighp_5fmat4x3',['packed_highp_mat4x3',['../a00303.html#ga4584f64394bd7123b7a8534741e4916c',1,'glm']]], ['packed_5fhighp_5fmat4x4',['packed_highp_mat4x4',['../a00303.html#ga0149fe15668925147e07c94fd2c2d6ae',1,'glm']]], ['packed_5fhighp_5fuvec1',['packed_highp_uvec1',['../a00303.html#ga8c32b53f628a3616aa5061e58d66fe74',1,'glm']]], ['packed_5fhighp_5fuvec2',['packed_highp_uvec2',['../a00303.html#gab704d4fb15f6f96d70e363d5db7060cd',1,'glm']]], ['packed_5fhighp_5fuvec3',['packed_highp_uvec3',['../a00303.html#ga0b570da473fec4619db5aa0dce5133b0',1,'glm']]], ['packed_5fhighp_5fuvec4',['packed_highp_uvec4',['../a00303.html#gaa582f38c82aef61dea7aaedf15bb06a6',1,'glm']]], ['packed_5fhighp_5fvec1',['packed_highp_vec1',['../a00303.html#ga56473759d2702ee19ab7f91d0017fa70',1,'glm']]], ['packed_5fhighp_5fvec2',['packed_highp_vec2',['../a00303.html#ga6b8b9475e7c3b16aed13edbc460bbc4d',1,'glm']]], ['packed_5fhighp_5fvec3',['packed_highp_vec3',['../a00303.html#ga3815661df0e2de79beff8168c09adf1e',1,'glm']]], ['packed_5fhighp_5fvec4',['packed_highp_vec4',['../a00303.html#ga4015f36bf5a5adb6ac5d45beed959867',1,'glm']]], ['packed_5fivec1',['packed_ivec1',['../a00303.html#ga11581a06fc7bf941fa4d4b6aca29812c',1,'glm']]], ['packed_5fivec2',['packed_ivec2',['../a00303.html#ga1fe4c5f56b8087d773aa90dc88a257a7',1,'glm']]], ['packed_5fivec3',['packed_ivec3',['../a00303.html#gae157682a7847161787951ba1db4cf325',1,'glm']]], ['packed_5fivec4',['packed_ivec4',['../a00303.html#gac228b70372abd561340d5f926a7c1778',1,'glm']]], 
['packed_5flowp_5fbvec1',['packed_lowp_bvec1',['../a00303.html#gae3c8750f53259ece334d3aa3b3649a40',1,'glm']]], ['packed_5flowp_5fbvec2',['packed_lowp_bvec2',['../a00303.html#gac969befedbda69eb78d4e23f751fdbee',1,'glm']]], ['packed_5flowp_5fbvec3',['packed_lowp_bvec3',['../a00303.html#ga7c20adbe1409e3fe4544677a7f6fe954',1,'glm']]], ['packed_5flowp_5fbvec4',['packed_lowp_bvec4',['../a00303.html#gae473587cff3092edc0877fc691c26a0b',1,'glm']]], ['packed_5flowp_5fdmat2',['packed_lowp_dmat2',['../a00303.html#gac93f9b1a35b9de4f456b9f2dfeaf1097',1,'glm']]], ['packed_5flowp_5fdmat2x2',['packed_lowp_dmat2x2',['../a00303.html#gaeeaff6c132ec91ebd21da3a2399548ea',1,'glm']]], ['packed_5flowp_5fdmat2x3',['packed_lowp_dmat2x3',['../a00303.html#ga2ccdcd4846775cbe4f9d12e71d55b5d2',1,'glm']]], ['packed_5flowp_5fdmat2x4',['packed_lowp_dmat2x4',['../a00303.html#gac870c47d2d9d48503f6c9ee3baec8ce1',1,'glm']]], ['packed_5flowp_5fdmat3',['packed_lowp_dmat3',['../a00303.html#ga3894a059eeaacec8791c25de398d9955',1,'glm']]], ['packed_5flowp_5fdmat3x2',['packed_lowp_dmat3x2',['../a00303.html#ga23ec236950f5859f59197663266b535d',1,'glm']]], ['packed_5flowp_5fdmat3x3',['packed_lowp_dmat3x3',['../a00303.html#ga4a7c7d8c3a663d0ec2a858cbfa14e54c',1,'glm']]], ['packed_5flowp_5fdmat3x4',['packed_lowp_dmat3x4',['../a00303.html#ga8fc0e66da83599071b7ec17510686cd9',1,'glm']]], ['packed_5flowp_5fdmat4',['packed_lowp_dmat4',['../a00303.html#ga03e1edf5666c40affe39aee35c87956f',1,'glm']]], ['packed_5flowp_5fdmat4x2',['packed_lowp_dmat4x2',['../a00303.html#ga39658fb13369db869d363684bd8399c0',1,'glm']]], ['packed_5flowp_5fdmat4x3',['packed_lowp_dmat4x3',['../a00303.html#ga30b0351eebc18c6056101359bdd3a359',1,'glm']]], ['packed_5flowp_5fdmat4x4',['packed_lowp_dmat4x4',['../a00303.html#ga0294d4c45151425c86a11deee7693c0e',1,'glm']]], ['packed_5flowp_5fdvec1',['packed_lowp_dvec1',['../a00303.html#ga054050e9d4e78d81db0e6d1573b1c624',1,'glm']]], 
['packed_5flowp_5fdvec2',['packed_lowp_dvec2',['../a00303.html#gadc19938ddb204bfcb4d9ef35b1e2bf93',1,'glm']]], ['packed_5flowp_5fdvec3',['packed_lowp_dvec3',['../a00303.html#ga9189210cabd6651a5e14a4c46fb20598',1,'glm']]], ['packed_5flowp_5fdvec4',['packed_lowp_dvec4',['../a00303.html#ga262dafd0c001c3a38d1cc91d024ca738',1,'glm']]], ['packed_5flowp_5fivec1',['packed_lowp_ivec1',['../a00303.html#gaf22b77f1cf3e73b8b1dddfe7f959357c',1,'glm']]], ['packed_5flowp_5fivec2',['packed_lowp_ivec2',['../a00303.html#ga52635859f5ef660ab999d22c11b7867f',1,'glm']]], ['packed_5flowp_5fivec3',['packed_lowp_ivec3',['../a00303.html#ga98c9d122a959e9f3ce10a5623c310f5d',1,'glm']]], ['packed_5flowp_5fivec4',['packed_lowp_ivec4',['../a00303.html#ga931731b8ae3b54c7ecc221509dae96bc',1,'glm']]], ['packed_5flowp_5fmat2',['packed_lowp_mat2',['../a00303.html#ga70dcb9ef0b24e832772a7405efa9669a',1,'glm']]], ['packed_5flowp_5fmat2x2',['packed_lowp_mat2x2',['../a00303.html#gac70667c7642ec8d50245e6e6936a3927',1,'glm']]], ['packed_5flowp_5fmat2x3',['packed_lowp_mat2x3',['../a00303.html#ga3e7df5a11e1be27bc29a4c0d3956f234',1,'glm']]], ['packed_5flowp_5fmat2x4',['packed_lowp_mat2x4',['../a00303.html#gaea9c555e669dc56c45d95dcc75d59bf3',1,'glm']]], ['packed_5flowp_5fmat3',['packed_lowp_mat3',['../a00303.html#ga0d22400969dd223465b2900fecfb4f53',1,'glm']]], ['packed_5flowp_5fmat3x2',['packed_lowp_mat3x2',['../a00303.html#ga128cd52649621861635fab746df91735',1,'glm']]], ['packed_5flowp_5fmat3x3',['packed_lowp_mat3x3',['../a00303.html#ga5adf1802c5375a9dfb1729691bedd94e',1,'glm']]], ['packed_5flowp_5fmat3x4',['packed_lowp_mat3x4',['../a00303.html#ga92247ca09fa03c4013ba364f3a0fca7f',1,'glm']]], ['packed_5flowp_5fmat4',['packed_lowp_mat4',['../a00303.html#ga2a1dd2387725a335413d4c4fee8609c4',1,'glm']]], ['packed_5flowp_5fmat4x2',['packed_lowp_mat4x2',['../a00303.html#ga8f22607dcd090cd280071ccc689f4079',1,'glm']]], 
['packed_5flowp_5fmat4x3',['packed_lowp_mat4x3',['../a00303.html#ga7661d759d6ad218e132e3d051e7b2c6c',1,'glm']]], ['packed_5flowp_5fmat4x4',['packed_lowp_mat4x4',['../a00303.html#ga776f18d1a6e7d399f05d386167dc60f5',1,'glm']]], ['packed_5flowp_5fuvec1',['packed_lowp_uvec1',['../a00303.html#gaf111fed760ecce16cb1988807569bee5',1,'glm']]], ['packed_5flowp_5fuvec2',['packed_lowp_uvec2',['../a00303.html#ga958210fe245a75b058325d367c951132',1,'glm']]], ['packed_5flowp_5fuvec3',['packed_lowp_uvec3',['../a00303.html#ga576a3f8372197a56a79dee1c8280f485',1,'glm']]], ['packed_5flowp_5fuvec4',['packed_lowp_uvec4',['../a00303.html#gafdd97922b4a2a42cd0c99a13877ff4da',1,'glm']]], ['packed_5flowp_5fvec1',['packed_lowp_vec1',['../a00303.html#ga0a6198fe64166a6a61084d43c71518a9',1,'glm']]], ['packed_5flowp_5fvec2',['packed_lowp_vec2',['../a00303.html#gafbf1c2cce307c5594b165819ed83bf5d',1,'glm']]], ['packed_5flowp_5fvec3',['packed_lowp_vec3',['../a00303.html#ga3a30c137c1f8cce478c28eab0427a570',1,'glm']]], ['packed_5flowp_5fvec4',['packed_lowp_vec4',['../a00303.html#ga3cc94fb8de80bbd8a4aa7a5b206d304a',1,'glm']]], ['packed_5fmat2',['packed_mat2',['../a00303.html#gadd019b43fcf42e1590d45dddaa504a1a',1,'glm']]], ['packed_5fmat2x2',['packed_mat2x2',['../a00303.html#ga51eaadcdc292c8750f746a5dc3e6c517',1,'glm']]], ['packed_5fmat2x3',['packed_mat2x3',['../a00303.html#ga301b76a89b8a9625501ca58815017f20',1,'glm']]], ['packed_5fmat2x4',['packed_mat2x4',['../a00303.html#gac401da1dd9177ad81d7618a2a5541e23',1,'glm']]], ['packed_5fmat3',['packed_mat3',['../a00303.html#ga9bc12b0ab7be8448836711b77cc7b83a',1,'glm']]], ['packed_5fmat3x2',['packed_mat3x2',['../a00303.html#ga134f0d99fbd2459c13cd9ebd056509fa',1,'glm']]], ['packed_5fmat3x3',['packed_mat3x3',['../a00303.html#ga6c1dbe8cde9fbb231284b01f8aeaaa99',1,'glm']]], ['packed_5fmat3x4',['packed_mat3x4',['../a00303.html#gad63515526cccfe88ffa8fe5ed64f95f8',1,'glm']]], 
['packed_5fmat4',['packed_mat4',['../a00303.html#ga2c139854e5b04cf08a957dee3b510441',1,'glm']]], ['packed_5fmat4x2',['packed_mat4x2',['../a00303.html#ga379c1153f1339bdeaefd592bebf538e8',1,'glm']]], ['packed_5fmat4x3',['packed_mat4x3',['../a00303.html#gab286466e19f7399c8d25089da9400d43',1,'glm']]], ['packed_5fmat4x4',['packed_mat4x4',['../a00303.html#ga67e7102557d6067bb6ac00d4ad0e1374',1,'glm']]], ['packed_5fmediump_5fbvec1',['packed_mediump_bvec1',['../a00303.html#ga5546d828d63010a8f9cf81161ad0275a',1,'glm']]], ['packed_5fmediump_5fbvec2',['packed_mediump_bvec2',['../a00303.html#gab4c6414a59539e66a242ad4cf4b476b4',1,'glm']]], ['packed_5fmediump_5fbvec3',['packed_mediump_bvec3',['../a00303.html#ga70147763edff3fe96b03a0b98d6339a2',1,'glm']]], ['packed_5fmediump_5fbvec4',['packed_mediump_bvec4',['../a00303.html#ga7b1620f259595b9da47a6374fc44588a',1,'glm']]], ['packed_5fmediump_5fdmat2',['packed_mediump_dmat2',['../a00303.html#ga9d60e32d3fcb51f817046cd881fdbf57',1,'glm']]], ['packed_5fmediump_5fdmat2x2',['packed_mediump_dmat2x2',['../a00303.html#ga39e8bb9b70e5694964e8266a21ba534e',1,'glm']]], ['packed_5fmediump_5fdmat2x3',['packed_mediump_dmat2x3',['../a00303.html#ga8897c6d9adb4140b1c3b0a07b8f0a430',1,'glm']]], ['packed_5fmediump_5fdmat2x4',['packed_mediump_dmat2x4',['../a00303.html#gaaa4126969c765e7faa2ebf6951c22ffb',1,'glm']]], ['packed_5fmediump_5fdmat3',['packed_mediump_dmat3',['../a00303.html#gaf969eb879c76a5f4576e4a1e10095cf6',1,'glm']]], ['packed_5fmediump_5fdmat3x2',['packed_mediump_dmat3x2',['../a00303.html#ga86efe91cdaa2864c828a5d6d46356c6a',1,'glm']]], ['packed_5fmediump_5fdmat3x3',['packed_mediump_dmat3x3',['../a00303.html#gaf85877d38d8cfbc21d59d939afd72375',1,'glm']]], ['packed_5fmediump_5fdmat3x4',['packed_mediump_dmat3x4',['../a00303.html#gad5dcaf93df267bc3029174e430e0907f',1,'glm']]], ['packed_5fmediump_5fdmat4',['packed_mediump_dmat4',['../a00303.html#ga4b0ee7996651ddd04eaa0c4cdbb66332',1,'glm']]], 
['packed_5fmediump_5fdmat4x2',['packed_mediump_dmat4x2',['../a00303.html#ga9a15514a0631f700de6312b9d5db3a73',1,'glm']]], ['packed_5fmediump_5fdmat4x3',['packed_mediump_dmat4x3',['../a00303.html#gab5b36cc9caee1bb1c5178fe191bf5713',1,'glm']]], ['packed_5fmediump_5fdmat4x4',['packed_mediump_dmat4x4',['../a00303.html#ga21e86cf2f6c126bacf31b8985db06bd4',1,'glm']]], ['packed_5fmediump_5fdvec1',['packed_mediump_dvec1',['../a00303.html#ga8920e90ea9c01d9c97e604a938ce2cbd',1,'glm']]], ['packed_5fmediump_5fdvec2',['packed_mediump_dvec2',['../a00303.html#ga0c754a783b6fcf80374c013371c4dae9',1,'glm']]], ['packed_5fmediump_5fdvec3',['packed_mediump_dvec3',['../a00303.html#ga1f18ada6f7cdd8c46db33ba987280fc4',1,'glm']]], ['packed_5fmediump_5fdvec4',['packed_mediump_dvec4',['../a00303.html#ga568b850f1116b667043533cf77826968',1,'glm']]], ['packed_5fmediump_5fivec1',['packed_mediump_ivec1',['../a00303.html#ga09507ef020a49517a7bcd50438f05056',1,'glm']]], ['packed_5fmediump_5fivec2',['packed_mediump_ivec2',['../a00303.html#gaaa891048dddef4627df33809ec726219',1,'glm']]], ['packed_5fmediump_5fivec3',['packed_mediump_ivec3',['../a00303.html#ga06f26d54dca30994eb1fdadb8e69f4a2',1,'glm']]], ['packed_5fmediump_5fivec4',['packed_mediump_ivec4',['../a00303.html#ga70130dc8ed9c966ec2a221ce586d45d8',1,'glm']]], ['packed_5fmediump_5fmat2',['packed_mediump_mat2',['../a00303.html#ga43cd36d430c5187bfdca34a23cb41581',1,'glm']]], ['packed_5fmediump_5fmat2x2',['packed_mediump_mat2x2',['../a00303.html#ga2d2a73e662759e301c22b8931ff6a526',1,'glm']]], ['packed_5fmediump_5fmat2x3',['packed_mediump_mat2x3',['../a00303.html#ga99049db01faf1e95ed9fb875a47dffe2',1,'glm']]], ['packed_5fmediump_5fmat2x4',['packed_mediump_mat2x4',['../a00303.html#gad43a240533f388ce0504b495d9df3d52',1,'glm']]], ['packed_5fmediump_5fmat3',['packed_mediump_mat3',['../a00303.html#ga13a75c6cbd0a411f694bc82486cd1e55',1,'glm']]], 
['packed_5fmediump_5fmat3x2',['packed_mediump_mat3x2',['../a00303.html#ga04cfaf1421284df3c24ea0985dab24e7',1,'glm']]], ['packed_5fmediump_5fmat3x3',['packed_mediump_mat3x3',['../a00303.html#gaaa9cea174d342dd9650e3436823cab23',1,'glm']]], ['packed_5fmediump_5fmat3x4',['packed_mediump_mat3x4',['../a00303.html#gabc93a9560593bd32e099c908531305f5',1,'glm']]], ['packed_5fmediump_5fmat4',['packed_mediump_mat4',['../a00303.html#gae89d72ffc149147f61df701bbc8755bf',1,'glm']]], ['packed_5fmediump_5fmat4x2',['packed_mediump_mat4x2',['../a00303.html#gaa458f9d9e0934bae3097e2a373b24707',1,'glm']]], ['packed_5fmediump_5fmat4x3',['packed_mediump_mat4x3',['../a00303.html#ga02ca6255394aa778abaeb0f733c4d2b6',1,'glm']]], ['packed_5fmediump_5fmat4x4',['packed_mediump_mat4x4',['../a00303.html#gaf304f64c06743c1571401504d3f50259',1,'glm']]], ['packed_5fmediump_5fuvec1',['packed_mediump_uvec1',['../a00303.html#ga2c29fb42bab9a4f9b66bc60b2e514a34',1,'glm']]], ['packed_5fmediump_5fuvec2',['packed_mediump_uvec2',['../a00303.html#gaa1f95690a78dc12e39da32943243aeef',1,'glm']]], ['packed_5fmediump_5fuvec3',['packed_mediump_uvec3',['../a00303.html#ga1ea2bbdbcb0a69242f6d884663c1b0ab',1,'glm']]], ['packed_5fmediump_5fuvec4',['packed_mediump_uvec4',['../a00303.html#ga63a73be86a4f07ea7a7499ab0bfebe45',1,'glm']]], ['packed_5fmediump_5fvec1',['packed_mediump_vec1',['../a00303.html#ga71d63cead1e113fca0bcdaaa33aad050',1,'glm']]], ['packed_5fmediump_5fvec2',['packed_mediump_vec2',['../a00303.html#ga6844c6f4691d1bf67673240850430948',1,'glm']]], ['packed_5fmediump_5fvec3',['packed_mediump_vec3',['../a00303.html#gab0eb771b708c5b2205d9b14dd1434fd8',1,'glm']]], ['packed_5fmediump_5fvec4',['packed_mediump_vec4',['../a00303.html#ga68c9bb24f387b312bae6a0a68e74d95e',1,'glm']]], ['packed_5fuvec1',['packed_uvec1',['../a00303.html#ga5621493caac01bdd22ab6be4416b0314',1,'glm']]], ['packed_5fuvec2',['packed_uvec2',['../a00303.html#gabcc33efb4d5e83b8fe4706360e75b932',1,'glm']]], 
['packed_5fuvec3',['packed_uvec3',['../a00303.html#gab96804e99e3a72a35740fec690c79617',1,'glm']]], ['packed_5fuvec4',['packed_uvec4',['../a00303.html#ga8e5d92e84ebdbe2480cf96bc17d6e2f2',1,'glm']]], ['packed_5fvec1',['packed_vec1',['../a00303.html#ga14741e3d9da9ae83765389927f837331',1,'glm']]], ['packed_5fvec2',['packed_vec2',['../a00303.html#ga3254defa5a8f0ae4b02b45fedba84a66',1,'glm']]], ['packed_5fvec3',['packed_vec3',['../a00303.html#gaccccd090e185450caa28b5b63ad4e8f0',1,'glm']]], ['packed_5fvec4',['packed_vec4',['../a00303.html#ga37a0e0bf653169b581c5eea3d547fa5d',1,'glm']]], ['packf2x11_5f1x10',['packF2x11_1x10',['../a00298.html#ga4944ad465ff950e926d49621f916c78d',1,'glm']]], ['packf3x9_5fe1x5',['packF3x9_E1x5',['../a00298.html#ga3f648fc205467792dc6d8c59c748f8a6',1,'glm']]], ['packhalf',['packHalf',['../a00298.html#ga2d8bbce673ebc04831c1fb05c47f5251',1,'glm']]], ['packhalf1x16',['packHalf1x16',['../a00298.html#ga43f2093b6ff192a79058ff7834fc3528',1,'glm']]], ['packhalf2x16',['packHalf2x16',['../a00372.html#ga20f134b07db3a3d3a38efb2617388c92',1,'glm']]], ['packhalf4x16',['packHalf4x16',['../a00298.html#gafe2f7b39caf8f5ec555e1c059ec530e6',1,'glm']]], ['packi3x10_5f1x2',['packI3x10_1x2',['../a00298.html#ga06ecb6afb902dba45419008171db9023',1,'glm']]], ['packing_2ehpp',['packing.hpp',['../a00120.html',1,'']]], ['packint2x16',['packInt2x16',['../a00298.html#ga3644163cf3a47bf1d4af1f4b03013a7e',1,'glm']]], ['packint2x32',['packInt2x32',['../a00298.html#gad1e4c8a9e67d86b61a6eec86703a827a',1,'glm']]], ['packint2x8',['packInt2x8',['../a00298.html#ga8884b1f2292414f36d59ef3be5d62914',1,'glm']]], ['packint4x16',['packInt4x16',['../a00298.html#ga1989f093a27ae69cf9207145be48b3d7',1,'glm']]], ['packint4x8',['packInt4x8',['../a00298.html#gaf2238401d5ce2aaade1a44ba19709072',1,'glm']]], ['packrgbm',['packRGBM',['../a00298.html#ga0466daf4c90f76cc64b3f105ce727295',1,'glm']]], ['packsnorm',['packSnorm',['../a00298.html#gaa54b5855a750d6aeb12c1c902f5939b8',1,'glm']]], 
['packsnorm1x16',['packSnorm1x16',['../a00298.html#gab22f8bcfdb5fc65af4701b25f143c1af',1,'glm']]], ['packsnorm1x8',['packSnorm1x8',['../a00298.html#gae3592e0795e62aaa1865b3a10496a7a1',1,'glm']]], ['packsnorm2x16',['packSnorm2x16',['../a00372.html#ga977ab172da5494e5ac63e952afacfbe2',1,'glm']]], ['packsnorm2x8',['packSnorm2x8',['../a00298.html#ga6be3cfb2cce3702f03e91bbeb5286d7e',1,'glm']]], ['packsnorm3x10_5f1x2',['packSnorm3x10_1x2',['../a00298.html#gab997545661877d2c7362a5084d3897d3',1,'glm']]], ['packsnorm4x16',['packSnorm4x16',['../a00298.html#ga358943934d21da947d5bcc88c2ab7832',1,'glm']]], ['packsnorm4x8',['packSnorm4x8',['../a00372.html#ga85e8f17627516445026ab7a9c2e3531a',1,'glm']]], ['packu3x10_5f1x2',['packU3x10_1x2',['../a00298.html#gada3d88d59f0f458f9c51a9fd359a4bc0',1,'glm']]], ['packuint2x16',['packUint2x16',['../a00298.html#ga5eecc9e8cbaf51ac6cf57501e670ee19',1,'glm']]], ['packuint2x32',['packUint2x32',['../a00298.html#gaa864081097b86e83d8e4a4d79c382b22',1,'glm']]], ['packuint2x8',['packUint2x8',['../a00298.html#ga3c3c9fb53ae7823b10fa083909357590',1,'glm']]], ['packuint4x16',['packUint4x16',['../a00298.html#ga2ceb62cca347d8ace42ee90317a3f1f9',1,'glm']]], ['packuint4x8',['packUint4x8',['../a00298.html#gaa0fe2f09aeb403cd66c1a062f58861ab',1,'glm']]], ['packunorm',['packUnorm',['../a00298.html#gaccd3f27e6ba5163eb7aa9bc8ff96251a',1,'glm']]], ['packunorm1x16',['packUnorm1x16',['../a00298.html#ga9f82737bf2a44bedff1d286b76837886',1,'glm']]], ['packunorm1x5_5f1x6_5f1x5',['packUnorm1x5_1x6_1x5',['../a00298.html#ga768e0337dd6246773f14aa0a421fe9a8',1,'glm']]], ['packunorm1x8',['packUnorm1x8',['../a00298.html#ga4b2fa60df3460403817d28b082ee0736',1,'glm']]], ['packunorm2x16',['packUnorm2x16',['../a00372.html#ga0e2d107039fe608a209497af867b85fb',1,'glm']]], ['packunorm2x3_5f1x2',['packUnorm2x3_1x2',['../a00298.html#ga7f9abdb50f9be1aa1c14912504a0d98d',1,'glm']]], ['packunorm2x4',['packUnorm2x4',['../a00298.html#gab6bbd5be3b8e6db538ecb33a7844481c',1,'glm']]], 
['packunorm2x8',['packUnorm2x8',['../a00298.html#ga9a666b1c688ab54100061ed06526de6e',1,'glm']]], ['packunorm3x10_5f1x2',['packUnorm3x10_1x2',['../a00298.html#ga8a1ee625d2707c60530fb3fca2980b19',1,'glm']]], ['packunorm3x5_5f1x1',['packUnorm3x5_1x1',['../a00298.html#gaec4112086d7fb133bea104a7c237de52',1,'glm']]], ['packunorm4x16',['packUnorm4x16',['../a00298.html#ga1f63c264e7ab63264e2b2a99fd393897',1,'glm']]], ['packunorm4x4',['packUnorm4x4',['../a00298.html#gad3e7e3ce521513584a53aedc5f9765c1',1,'glm']]], ['packunorm4x8',['packUnorm4x8',['../a00372.html#gaf7d2f7341a9eeb4a436929d6f9ad08f2',1,'glm']]], ['perlin',['perlin',['../a00297.html#ga1e043ce3b51510e9bc4469227cefc38a',1,'glm::perlin(vec< L, T, Q > const &p)'],['../a00297.html#gac270edc54c5fc52f5985a45f940bb103',1,'glm::perlin(vec< L, T, Q > const &p, vec< L, T, Q > const &rep)']]], ['perp',['perp',['../a00349.html#ga264cfc4e180cf9b852e943b35089003c',1,'glm']]], ['perpendicular_2ehpp',['perpendicular.hpp',['../a00121.html',1,'']]], ['perspective',['perspective',['../a00243.html#ga747c8cf99458663dd7ad1bb3a2f07787',1,'glm']]], ['perspectivefov',['perspectiveFov',['../a00243.html#gaebd02240fd36e85ad754f02ddd9a560d',1,'glm']]], ['perspectivefovlh',['perspectiveFovLH',['../a00243.html#ga6aebe16c164bd8e52554cbe0304ef4aa',1,'glm']]], ['perspectivefovlh_5fno',['perspectiveFovLH_NO',['../a00243.html#gad18a4495b77530317327e8d466488c1a',1,'glm']]], ['perspectivefovlh_5fzo',['perspectiveFovLH_ZO',['../a00243.html#gabdd37014f529e25b2fa1b3ba06c10d5c',1,'glm']]], ['perspectivefovno',['perspectiveFovNO',['../a00243.html#gaf30e7bd3b1387a6776433dd5383e6633',1,'glm']]], ['perspectivefovrh',['perspectiveFovRH',['../a00243.html#gaf32bf563f28379c68554a44ee60c6a85',1,'glm']]], ['perspectivefovrh_5fno',['perspectiveFovRH_NO',['../a00243.html#ga257b733ff883c9a065801023cf243eb2',1,'glm']]], ['perspectivefovrh_5fzo',['perspectiveFovRH_ZO',['../a00243.html#ga7dcbb25331676f5b0795aced1a905c44',1,'glm']]], 
['perspectivefovzo',['perspectiveFovZO',['../a00243.html#ga4bc69fa1d1f95128430aa3d2a712390b',1,'glm']]], ['perspectivelh',['perspectiveLH',['../a00243.html#ga9bd34951dc7022ac256fcb51d7f6fc2f',1,'glm']]], ['perspectivelh_5fno',['perspectiveLH_NO',['../a00243.html#gaead4d049d1feab463b700b5641aa590e',1,'glm']]], ['perspectivelh_5fzo',['perspectiveLH_ZO',['../a00243.html#gaca32af88c2719005c02817ad1142986c',1,'glm']]], ['perspectiveno',['perspectiveNO',['../a00243.html#gaf497e6bca61e7c87088370b126a93758',1,'glm']]], ['perspectiverh',['perspectiveRH',['../a00243.html#ga26b88757fbd90601b80768a7e1ad3aa1',1,'glm']]], ['perspectiverh_5fno',['perspectiveRH_NO',['../a00243.html#gad1526cb2cbe796095284e8f34b01c582',1,'glm']]], ['perspectiverh_5fzo',['perspectiveRH_ZO',['../a00243.html#ga4da358d6e1b8e5b9ae35d1f3f2dc3b9a',1,'glm']]], ['perspectivezo',['perspectiveZO',['../a00243.html#gaa9dfba5c2322da54f72b1eb7c7c11b47',1,'glm']]], ['pi',['pi',['../a00259.html#ga94bafeb2a0f23ab6450fed1f98ee4e45',1,'glm']]], ['pickmatrix',['pickMatrix',['../a00245.html#gaf6b21eadb7ac2ecbbe258a9a233b4c82',1,'glm']]], ['pitch',['pitch',['../a00299.html#ga7603e81477b46ddb448896909bc04928',1,'glm']]], ['polar',['polar',['../a00350.html#gab83ac2c0e55b684b06b6c46c28b1590d',1,'glm']]], ['polar_5fcoordinates_2ehpp',['polar_coordinates.hpp',['../a00122.html',1,'']]], ['pow',['pow',['../a00242.html#ga2254981952d4f333b900a6bf5167a6c4',1,'glm::pow(vec< L, T, Q > const &base, vec< L, T, Q > const &exponent)'],['../a00256.html#ga4975ffcacd312a8c0bbd046a76c5607e',1,'glm::pow(qua< T, Q > const &q, T y)'],['../a00330.html#ga465016030a81d513fa2fac881ebdaa83',1,'glm::pow(int x, uint y)'],['../a00330.html#ga998e5ee915d3769255519e2fbaa2bbf0',1,'glm::pow(uint x, uint y)']]], ['pow2',['pow2',['../a00347.html#ga19aaff3213bf23bdec3ef124ace237e9',1,'glm::gtx']]], ['pow3',['pow3',['../a00347.html#ga35689d03cd434d6ea819f1942d3bf82e',1,'glm::gtx']]], 
['pow4',['pow4',['../a00347.html#gacef0968763026e180e53e735007dbf5a',1,'glm::gtx']]], ['poweroftwoabove',['powerOfTwoAbove',['../a00309.html#ga8cda2459871f574a0aecbe702ac93291',1,'glm::powerOfTwoAbove(genIUType Value)'],['../a00309.html#ga2bbded187c5febfefc1e524ba31b3fab',1,'glm::powerOfTwoAbove(vec< L, T, Q > const &value)']]], ['poweroftwobelow',['powerOfTwoBelow',['../a00309.html#ga3de7df63c589325101a2817a56f8e29d',1,'glm::powerOfTwoBelow(genIUType Value)'],['../a00309.html#gaf78ddcc4152c051b2a21e68fecb10980',1,'glm::powerOfTwoBelow(vec< L, T, Q > const &value)']]], ['poweroftwonearest',['powerOfTwoNearest',['../a00309.html#ga5f65973a5d2ea38c719e6a663149ead9',1,'glm::powerOfTwoNearest(genIUType Value)'],['../a00309.html#gac87e65d11e16c3d6b91c3bcfaef7da0b',1,'glm::powerOfTwoNearest(vec< L, T, Q > const &value)']]], ['prevmultiple',['prevMultiple',['../a00261.html#gada3bdd871ffe31f2d484aa668362f636',1,'glm::prevMultiple(genIUType v, genIUType Multiple)'],['../a00274.html#ga7b3915a7cd3d50ff4976ab7a75a6880a',1,'glm::prevMultiple(vec< L, T, Q > const &v, T Multiple)'],['../a00274.html#ga51e04379e8aebbf83e2e5ab094578ee9',1,'glm::prevMultiple(vec< L, T, Q > const &v, vec< L, T, Q > const &Multiple)']]], ['prevpoweroftwo',['prevPowerOfTwo',['../a00261.html#gab21902a0e7e5a8451a7ad80333618727',1,'glm::prevPowerOfTwo(genIUType v)'],['../a00274.html#ga759db73f14d79f63612bd2398b577e7a',1,'glm::prevPowerOfTwo(vec< L, T, Q > const &v)']]], ['proj',['proj',['../a00351.html#ga58384b7170801dd513de46f87c7fb00e',1,'glm']]], ['proj2d',['proj2D',['../a00363.html#ga5b992a0cdc8298054edb68e228f0d93e',1,'glm']]], ['proj3d',['proj3D',['../a00363.html#gaa2b7f4f15b98f697caede11bef50509e',1,'glm']]], ['project',['project',['../a00245.html#gaf36e96033f456659e6705472a06b6e11',1,'glm']]], ['projection_2ehpp',['projection.hpp',['../a00123.html',1,'']]], ['projectno',['projectNO',['../a00245.html#ga05249751f48d14cb282e4979802b8111',1,'glm']]], 
['projectzo',['projectZO',['../a00245.html#ga77d157525063dec83a557186873ee080',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_e.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_e.js ================================================ var searchData= [ ['qr_5fdecompose',['qr_decompose',['../a00336.html#gac62d7bfc8dc661e616620d70552cd566',1,'glm']]], ['quadraticeasein',['quadraticEaseIn',['../a00318.html#gaf42089d35855695132d217cd902304a0',1,'glm']]], ['quadraticeaseinout',['quadraticEaseInOut',['../a00318.html#ga03e8fc2d7945a4e63ee33b2159c14cea',1,'glm']]], ['quadraticeaseout',['quadraticEaseOut',['../a00318.html#ga283717bc2d937547ad34ec0472234ee3',1,'glm']]], ['quarter_5fpi',['quarter_pi',['../a00290.html#ga3c9df42bd73c519a995c43f0f99e77e0',1,'glm']]], ['quarticeasein',['quarticEaseIn',['../a00318.html#ga808b41f14514f47dad5dcc69eb924afd',1,'glm']]], ['quarticeaseinout',['quarticEaseInOut',['../a00318.html#ga6d000f852de12b197e154f234b20c505',1,'glm']]], ['quarticeaseout',['quarticEaseOut',['../a00318.html#ga4dfb33fa7664aa888eb647999d329b98',1,'glm']]], ['quat',['quat',['../a00252.html#gab0b441adb4509bc58d2946c2239a8942',1,'glm']]], ['quat_5fcast',['quat_cast',['../a00299.html#ga1108a4ab88ca87bac321454eea7702f8',1,'glm::quat_cast(mat< 3, 3, T, Q > const &x)'],['../a00299.html#ga4524810f07f72e8c7bdc7764fa11cb58',1,'glm::quat_cast(mat< 4, 4, T, Q > const &x)']]], ['quat_5fidentity',['quat_identity',['../a00352.html#ga5ee8332600b2aca3a77622a28d857b55',1,'glm']]], ['quaternion_5fcommon_2ehpp',['quaternion_common.hpp',['../a00127.html',1,'']]], ['quaternion_5fdouble_2ehpp',['quaternion_double.hpp',['../a00128.html',1,'']]], ['quaternion_5fdouble_5fprecision_2ehpp',['quaternion_double_precision.hpp',['../a00129.html',1,'']]], ['quaternion_5fexponential_2ehpp',['quaternion_exponential.hpp',['../a00130.html',1,'']]], ['quaternion_5ffloat_2ehpp',['quaternion_float.hpp',['../a00131.html',1,'']]], ['quaternion_5ffloat_5fprecision_2ehpp',['quaternion_float_precision.hpp',['../a00132.html',1,'']]], 
['quaternion_5fgeometric_2ehpp',['quaternion_geometric.hpp',['../a00133.html',1,'']]], ['quaternion_5frelational_2ehpp',['quaternion_relational.hpp',['../a00134.html',1,'']]], ['quaternion_5ftransform_2ehpp',['quaternion_transform.hpp',['../a00135.html',1,'']]], ['quaternion_5ftrigonometric_2ehpp',['quaternion_trigonometric.hpp',['../a00136.html',1,'']]], ['quatlookat',['quatLookAt',['../a00299.html#gabe7fc5ec5feb41ab234d5d2b6254697f',1,'glm']]], ['quatlookatlh',['quatLookAtLH',['../a00299.html#ga2da350c73411be3bb19441b226b81a74',1,'glm']]], ['quatlookatrh',['quatLookAtRH',['../a00299.html#gaf6529ac8c04a57fcc35865b5c9437cc8',1,'glm']]], ['quinticeasein',['quinticEaseIn',['../a00318.html#ga097579d8e087dcf48037588140a21640',1,'glm']]], ['quinticeaseinout',['quinticEaseInOut',['../a00318.html#ga2a82d5c46df7e2d21cc0108eb7b83934',1,'glm']]], ['quinticeaseout',['quinticEaseOut',['../a00318.html#ga7dbd4d5c8da3f5353121f615e7b591d7',1,'glm']]], ['qword',['qword',['../a00354.html#ga4021754ffb8e5ef14c75802b15657714',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_f.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/all_f.js ================================================ var searchData= [ ['recommended_20extensions',['Recommended extensions',['../a00286.html',1,'']]], ['radialgradient',['radialGradient',['../a00327.html#gaaecb1e93de4cbe0758b882812d4da294',1,'glm']]], ['radians',['radians',['../a00373.html#ga6e1db4862c5e25afd553930e2fdd6a68',1,'glm']]], ['random_2ehpp',['random.hpp',['../a00137.html',1,'']]], ['range_2ehpp',['range.hpp',['../a00138.html',1,'']]], ['raw_5fdata_2ehpp',['raw_data.hpp',['../a00139.html',1,'']]], ['reciprocal_2ehpp',['reciprocal.hpp',['../a00140.html',1,'']]], ['reflect',['reflect',['../a00279.html#ga5631dd1d5618de5450b1ea3cf3e94905',1,'glm']]], ['refract',['refract',['../a00279.html#ga01da3dff9e2ef6b9d4915c3047e22b74',1,'glm']]], ['repeat',['repeat',['../a00369.html#ga809650c6310ea7c42666e918c117fb6f',1,'glm']]], ['rgb2ycocg',['rgb2YCoCg',['../a00313.html#ga0606353ec2a9b9eaa84f1b02ec391bc5',1,'glm']]], ['rgb2ycocgr',['rgb2YCoCgR',['../a00313.html#ga0389772e44ca0fd2ba4a79bdd8efe898',1,'glm']]], ['rgbcolor',['rgbColor',['../a00312.html#ga5f9193be46f45f0655c05a0cdca006db',1,'glm']]], ['righthanded',['rightHanded',['../a00328.html#ga99386a5ab5491871b947076e21699cc8',1,'glm']]], ['roll',['roll',['../a00299.html#ga0cc5ad970d0b00829b139fe0fe5a1e13',1,'glm']]], ['root_5ffive',['root_five',['../a00290.html#gae9ebbded75b53d4faeb1e4ef8b3347a2',1,'glm']]], ['root_5fhalf_5fpi',['root_half_pi',['../a00290.html#ga4e276cb823cc5e612d4f89ed99c75039',1,'glm']]], ['root_5fln_5ffour',['root_ln_four',['../a00290.html#ga4129412e96b33707a77c1a07652e23e2',1,'glm']]], ['root_5fpi',['root_pi',['../a00290.html#ga261380796b2cd496f68d2cf1d08b8eb9',1,'glm']]], ['root_5fthree',['root_three',['../a00290.html#ga4f286be4abe88be1eed7d2a9f6cb193e',1,'glm']]], 
['root_5ftwo',['root_two',['../a00290.html#ga74e607d29020f100c0d0dc46ce2ca950',1,'glm']]], ['root_5ftwo_5fpi',['root_two_pi',['../a00290.html#ga2bcedc575039fe0cd765742f8bbb0bd3',1,'glm']]], ['rotate',['rotate',['../a00247.html#gaee9e865eaa9776370996da2940873fd4',1,'glm::rotate(mat< 4, 4, T, Q > const &m, T angle, vec< 3, T, Q > const &axis)'],['../a00256.html#gabfc57de6d4d2e11970f54119c5ccf0f5',1,'glm::rotate(qua< T, Q > const &q, T const &angle, vec< 3, T, Q > const &axis)'],['../a00341.html#gad5c84a4932a758f385a87098ce1b1660',1,'glm::rotate(mat< 3, 3, T, Q > const &m, T angle)'],['../a00352.html#ga07da6ef58646442efe93b0c273d73776',1,'glm::rotate(qua< T, Q > const &q, vec< 3, T, Q > const &v)'],['../a00352.html#gafcb78dfff45fbf19a7fcb2bd03fbf196',1,'glm::rotate(qua< T, Q > const &q, vec< 4, T, Q > const &v)'],['../a00356.html#gab64a67b52ff4f86c3ba16595a5a25af6',1,'glm::rotate(vec< 2, T, Q > const &v, T const &angle)'],['../a00356.html#ga1ba501ef83d1a009a17ac774cc560f21',1,'glm::rotate(vec< 3, T, Q > const &v, T const &angle, vec< 3, T, Q > const &normal)'],['../a00356.html#ga1005f1267ed9c57faa3f24cf6873b961',1,'glm::rotate(vec< 4, T, Q > const &v, T const &angle, vec< 3, T, Q > const &normal)'],['../a00362.html#gaf599be4c0e9d99be1f9cddba79b6018b',1,'glm::rotate(T angle, vec< 3, T, Q > const &v)']]], ['rotate_5fnormalized_5faxis_2ehpp',['rotate_normalized_axis.hpp',['../a00141.html',1,'']]], ['rotate_5fvector_2ehpp',['rotate_vector.hpp',['../a00142.html',1,'']]], ['rotatenormalizedaxis',['rotateNormalizedAxis',['../a00355.html#ga50efd7ebca0f7a603bb3cc11e34c708d',1,'glm::rotateNormalizedAxis(mat< 4, 4, T, Q > const &m, T const &angle, vec< 3, T, Q > const &axis)'],['../a00355.html#ga08f9c5411437d528019a25bfc01473d1',1,'glm::rotateNormalizedAxis(qua< T, Q > const &q, T const &angle, vec< 3, T, Q > const &axis)']]], ['rotatex',['rotateX',['../a00356.html#ga059fdbdba4cca35cdff172a9d0d0afc9',1,'glm::rotateX(vec< 3, T, Q > const &v, T const 
&angle)'],['../a00356.html#ga4333b1ea8ebf1bd52bc3801a7617398a',1,'glm::rotateX(vec< 4, T, Q > const &v, T const &angle)']]], ['rotatey',['rotateY',['../a00356.html#gaebdc8b054ace27d9f62e054531c6f44d',1,'glm::rotateY(vec< 3, T, Q > const &v, T const &angle)'],['../a00356.html#ga3ce3db0867b7f8efd878ee34f95a623b',1,'glm::rotateY(vec< 4, T, Q > const &v, T const &angle)']]], ['rotatez',['rotateZ',['../a00356.html#ga5a048838a03f6249acbacb4dbacf79c4',1,'glm::rotateZ(vec< 3, T, Q > const &v, T const &angle)'],['../a00356.html#ga923b75c6448161053768822d880702e6',1,'glm::rotateZ(vec< 4, T, Q > const &v, T const &angle)']]], ['rotation',['rotation',['../a00352.html#ga03e61282831cc3f52cc76f72f52ad2c5',1,'glm']]], ['round',['round',['../a00241.html#gafa03aca8c4713e1cc892aa92ca135a7e',1,'glm']]], ['round_2ehpp',['round.hpp',['../a00143.html',1,'']]], ['roundeven',['roundEven',['../a00241.html#ga76b81785045a057989a84d99aeeb1578',1,'glm']]], ['roundmultiple',['roundMultiple',['../a00302.html#gab892defcc9c0b0618df7251253dc0fbb',1,'glm::roundMultiple(genType v, genType Multiple)'],['../a00302.html#ga2f1a68332d761804c054460a612e3a4b',1,'glm::roundMultiple(vec< L, T, Q > const &v, vec< L, T, Q > const &Multiple)']]], ['roundpoweroftwo',['roundPowerOfTwo',['../a00302.html#gae4e1bf5d1cd179f59261a7342bdcafca',1,'glm::roundPowerOfTwo(genIUType v)'],['../a00302.html#ga258802a7d55c03c918f28cf4d241c4d0',1,'glm::roundPowerOfTwo(vec< L, T, Q > const &v)']]], ['row',['row',['../a00293.html#ga259e5ebd0f31ec3f83440f8cae7f5dba',1,'glm::row(genType const &m, length_t index)'],['../a00293.html#gaadcc64829aadf4103477679e48c7594f',1,'glm::row(genType const &m, length_t index, typename genType::row_type const &x)']]], ['rowmajor2',['rowMajor2',['../a00338.html#gaf5b1aee9e3eb1acf9d6c3c8be1e73bb8',1,'glm::rowMajor2(vec< 2, T, Q > const &v1, vec< 2, T, Q > const &v2)'],['../a00338.html#gaf66c75ed69ca9e87462550708c2c6726',1,'glm::rowMajor2(mat< 2, 2, T, Q > const &m)']]], 
['rowmajor3',['rowMajor3',['../a00338.html#ga2ae46497493339f745754e40f438442e',1,'glm::rowMajor3(vec< 3, T, Q > const &v1, vec< 3, T, Q > const &v2, vec< 3, T, Q > const &v3)'],['../a00338.html#gad8a3a50ab47bbe8d36cdb81d90dfcf77',1,'glm::rowMajor3(mat< 3, 3, T, Q > const &m)']]], ['rowmajor4',['rowMajor4',['../a00338.html#ga9636cd6bbe2c32a8d0c03ffb8b1ef284',1,'glm::rowMajor4(vec< 4, T, Q > const &v1, vec< 4, T, Q > const &v2, vec< 4, T, Q > const &v3, vec< 4, T, Q > const &v4)'],['../a00338.html#gac92ad1c2acdf18d3eb7be45a32f9566b',1,'glm::rowMajor4(mat< 4, 4, T, Q > const &m)']]], ['rq_5fdecompose',['rq_decompose',['../a00336.html#ga82874e2ebe891ba35ac21d9993873758',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_0.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_0.js ================================================ var searchData= [ ['associated_5fmin_5fmax_2ehpp',['associated_min_max.hpp',['../a00007.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_1.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_1.js ================================================ var searchData= [ ['bit_2ehpp',['bit.hpp',['../a00008.html',1,'']]], ['bitfield_2ehpp',['bitfield.hpp',['../a00009.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_10.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_10.js ================================================ var searchData= [ ['scalar_5fcommon_2ehpp',['scalar_common.hpp',['../a00144.html',1,'']]], ['scalar_5fconstants_2ehpp',['scalar_constants.hpp',['../a00145.html',1,'']]], ['scalar_5fint_5fsized_2ehpp',['scalar_int_sized.hpp',['../a00146.html',1,'']]], ['scalar_5finteger_2ehpp',['scalar_integer.hpp',['../a00147.html',1,'']]], ['scalar_5fmultiplication_2ehpp',['scalar_multiplication.hpp',['../a00148.html',1,'']]], ['scalar_5fuint_5fsized_2ehpp',['scalar_uint_sized.hpp',['../a00151.html',1,'']]], ['scalar_5fulp_2ehpp',['scalar_ulp.hpp',['../a00152.html',1,'']]], ['spline_2ehpp',['spline.hpp',['../a00154.html',1,'']]], ['std_5fbased_5ftype_2ehpp',['std_based_type.hpp',['../a00155.html',1,'']]], ['string_5fcast_2ehpp',['string_cast.hpp',['../a00156.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_11.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_11.js ================================================ var searchData= [ ['texture_2ehpp',['texture.hpp',['../a00157.html',1,'']]], ['transform_2ehpp',['transform.hpp',['../a00158.html',1,'']]], ['transform2_2ehpp',['transform2.hpp',['../a00159.html',1,'']]], ['trigonometric_2ehpp',['trigonometric.hpp',['../a00160.html',1,'']]], ['type_5fmat2x2_2ehpp',['type_mat2x2.hpp',['../a00165.html',1,'']]], ['type_5fmat2x3_2ehpp',['type_mat2x3.hpp',['../a00166.html',1,'']]], ['type_5fmat2x4_2ehpp',['type_mat2x4.hpp',['../a00167.html',1,'']]], ['type_5fmat3x2_2ehpp',['type_mat3x2.hpp',['../a00168.html',1,'']]], ['type_5fmat3x3_2ehpp',['type_mat3x3.hpp',['../a00169.html',1,'']]], ['type_5fmat3x4_2ehpp',['type_mat3x4.hpp',['../a00170.html',1,'']]], ['type_5fmat4x2_2ehpp',['type_mat4x2.hpp',['../a00171.html',1,'']]], ['type_5fmat4x3_2ehpp',['type_mat4x3.hpp',['../a00172.html',1,'']]], ['type_5fmat4x4_2ehpp',['type_mat4x4.hpp',['../a00173.html',1,'']]], ['type_5fprecision_2ehpp',['type_precision.hpp',['../a00174.html',1,'']]], ['type_5fptr_2ehpp',['type_ptr.hpp',['../a00175.html',1,'']]], ['type_5fquat_2ehpp',['type_quat.hpp',['../a00176.html',1,'']]], ['type_5ftrait_2ehpp',['type_trait.hpp',['../a00177.html',1,'']]], ['type_5fvec1_2ehpp',['type_vec1.hpp',['../a00178.html',1,'']]], ['type_5fvec2_2ehpp',['type_vec2.hpp',['../a00179.html',1,'']]], ['type_5fvec3_2ehpp',['type_vec3.hpp',['../a00180.html',1,'']]], ['type_5fvec4_2ehpp',['type_vec4.hpp',['../a00181.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_12.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_12.js ================================================ var searchData= [ ['ulp_2ehpp',['ulp.hpp',['../a00182.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_13.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_13.js ================================================ var searchData= [ ['vec1_2ehpp',['vec1.hpp',['../a00183.html',1,'']]], ['vec2_2ehpp',['vec2.hpp',['../a00184.html',1,'']]], ['vec3_2ehpp',['vec3.hpp',['../a00185.html',1,'']]], ['vec4_2ehpp',['vec4.hpp',['../a00186.html',1,'']]], ['vec_5fswizzle_2ehpp',['vec_swizzle.hpp',['../a00187.html',1,'']]], ['vector_5fangle_2ehpp',['vector_angle.hpp',['../a00188.html',1,'']]], ['vector_5fbool1_2ehpp',['vector_bool1.hpp',['../a00189.html',1,'']]], ['vector_5fbool1_5fprecision_2ehpp',['vector_bool1_precision.hpp',['../a00190.html',1,'']]], ['vector_5fbool2_2ehpp',['vector_bool2.hpp',['../a00191.html',1,'']]], ['vector_5fbool2_5fprecision_2ehpp',['vector_bool2_precision.hpp',['../a00192.html',1,'']]], ['vector_5fbool3_2ehpp',['vector_bool3.hpp',['../a00193.html',1,'']]], ['vector_5fbool3_5fprecision_2ehpp',['vector_bool3_precision.hpp',['../a00194.html',1,'']]], ['vector_5fbool4_2ehpp',['vector_bool4.hpp',['../a00195.html',1,'']]], ['vector_5fbool4_5fprecision_2ehpp',['vector_bool4_precision.hpp',['../a00196.html',1,'']]], ['vector_5fcommon_2ehpp',['vector_common.hpp',['../a00197.html',1,'']]], ['vector_5fdouble1_2ehpp',['vector_double1.hpp',['../a00198.html',1,'']]], ['vector_5fdouble1_5fprecision_2ehpp',['vector_double1_precision.hpp',['../a00199.html',1,'']]], ['vector_5fdouble2_2ehpp',['vector_double2.hpp',['../a00200.html',1,'']]], ['vector_5fdouble2_5fprecision_2ehpp',['vector_double2_precision.hpp',['../a00201.html',1,'']]], ['vector_5fdouble3_2ehpp',['vector_double3.hpp',['../a00202.html',1,'']]], ['vector_5fdouble3_5fprecision_2ehpp',['vector_double3_precision.hpp',['../a00203.html',1,'']]], ['vector_5fdouble4_2ehpp',['vector_double4.hpp',['../a00204.html',1,'']]], ['vector_5fdouble4_5fprecision_2ehpp',['vector_double4_precision.hpp',['../a00205.html',1,'']]], 
['vector_5ffloat1_2ehpp',['vector_float1.hpp',['../a00206.html',1,'']]], ['vector_5ffloat1_5fprecision_2ehpp',['vector_float1_precision.hpp',['../a00207.html',1,'']]], ['vector_5ffloat2_2ehpp',['vector_float2.hpp',['../a00208.html',1,'']]], ['vector_5ffloat2_5fprecision_2ehpp',['vector_float2_precision.hpp',['../a00209.html',1,'']]], ['vector_5ffloat3_2ehpp',['vector_float3.hpp',['../a00210.html',1,'']]], ['vector_5ffloat3_5fprecision_2ehpp',['vector_float3_precision.hpp',['../a00211.html',1,'']]], ['vector_5ffloat4_2ehpp',['vector_float4.hpp',['../a00212.html',1,'']]], ['vector_5ffloat4_5fprecision_2ehpp',['vector_float4_precision.hpp',['../a00213.html',1,'']]], ['vector_5fint1_2ehpp',['vector_int1.hpp',['../a00214.html',1,'']]], ['vector_5fint1_5fprecision_2ehpp',['vector_int1_precision.hpp',['../a00215.html',1,'']]], ['vector_5fint2_2ehpp',['vector_int2.hpp',['../a00216.html',1,'']]], ['vector_5fint2_5fprecision_2ehpp',['vector_int2_precision.hpp',['../a00217.html',1,'']]], ['vector_5fint3_2ehpp',['vector_int3.hpp',['../a00218.html',1,'']]], ['vector_5fint3_5fprecision_2ehpp',['vector_int3_precision.hpp',['../a00219.html',1,'']]], ['vector_5fint4_2ehpp',['vector_int4.hpp',['../a00220.html',1,'']]], ['vector_5fint4_5fprecision_2ehpp',['vector_int4_precision.hpp',['../a00221.html',1,'']]], ['vector_5finteger_2ehpp',['vector_integer.hpp',['../a00222.html',1,'']]], ['vector_5fquery_2ehpp',['vector_query.hpp',['../a00223.html',1,'']]], ['vector_5frelational_2ehpp',['vector_relational.hpp',['../a00225.html',1,'']]], ['vector_5fuint1_2ehpp',['vector_uint1.hpp',['../a00226.html',1,'']]], ['vector_5fuint1_5fprecision_2ehpp',['vector_uint1_precision.hpp',['../a00227.html',1,'']]], ['vector_5fuint2_2ehpp',['vector_uint2.hpp',['../a00228.html',1,'']]], ['vector_5fuint2_5fprecision_2ehpp',['vector_uint2_precision.hpp',['../a00229.html',1,'']]], ['vector_5fuint3_2ehpp',['vector_uint3.hpp',['../a00230.html',1,'']]], 
['vector_5fuint3_5fprecision_2ehpp',['vector_uint3_precision.hpp',['../a00231.html',1,'']]], ['vector_5fuint4_2ehpp',['vector_uint4.hpp',['../a00232.html',1,'']]], ['vector_5fuint4_5fprecision_2ehpp',['vector_uint4_precision.hpp',['../a00233.html',1,'']]], ['vector_5fulp_2ehpp',['vector_ulp.hpp',['../a00234.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_14.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_14.js ================================================ var searchData= [ ['wrap_2ehpp',['wrap.hpp',['../a00235.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_2.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_2.js ================================================ var searchData= [ ['closest_5fpoint_2ehpp',['closest_point.hpp',['../a00010.html',1,'']]], ['color_5fencoding_2ehpp',['color_encoding.hpp',['../a00011.html',1,'']]], ['color_5fspace_5fycocg_2ehpp',['color_space_YCoCg.hpp',['../a00014.html',1,'']]], ['common_2ehpp',['common.hpp',['../a00015.html',1,'']]], ['compatibility_2ehpp',['compatibility.hpp',['../a00017.html',1,'']]], ['component_5fwise_2ehpp',['component_wise.hpp',['../a00018.html',1,'']]], ['constants_2ehpp',['constants.hpp',['../a00021.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_3.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_3.js ================================================ var searchData= [ ['dual_5fquaternion_2ehpp',['dual_quaternion.hpp',['../a00022.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_4.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_4.js ================================================ var searchData= [ ['easing_2ehpp',['easing.hpp',['../a00023.html',1,'']]], ['epsilon_2ehpp',['epsilon.hpp',['../a00024.html',1,'']]], ['euler_5fangles_2ehpp',['euler_angles.hpp',['../a00025.html',1,'']]], ['exponential_2ehpp',['exponential.hpp',['../a00026.html',1,'']]], ['ext_2ehpp',['ext.hpp',['../a00027.html',1,'']]], ['extend_2ehpp',['extend.hpp',['../a00028.html',1,'']]], ['extended_5fmin_5fmax_2ehpp',['extended_min_max.hpp',['../a00029.html',1,'']]], ['exterior_5fproduct_2ehpp',['exterior_product.hpp',['../a00030.html',1,'']]], ['matrix_5ftransform_2ehpp',['matrix_transform.hpp',['../a00108.html',1,'']]], ['scalar_5frelational_2ehpp',['scalar_relational.hpp',['../a00149.html',1,'']]], ['vector_5frelational_2ehpp',['vector_relational.hpp',['../a00224.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_5.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_5.js ================================================ var searchData= [ ['fast_5fexponential_2ehpp',['fast_exponential.hpp',['../a00031.html',1,'']]], ['fast_5fsquare_5froot_2ehpp',['fast_square_root.hpp',['../a00032.html',1,'']]], ['fast_5ftrigonometry_2ehpp',['fast_trigonometry.hpp',['../a00033.html',1,'']]], ['functions_2ehpp',['functions.hpp',['../a00034.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_6.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_6.js ================================================ var searchData= [ ['color_5fspace_2ehpp',['color_space.hpp',['../a00012.html',1,'']]], ['color_5fspace_2ehpp',['color_space.hpp',['../a00013.html',1,'']]], ['common_2ehpp',['common.hpp',['../a00016.html',1,'']]], ['geometric_2ehpp',['geometric.hpp',['../a00036.html',1,'']]], ['glm_2ehpp',['glm.hpp',['../a00037.html',1,'']]], ['gradient_5fpaint_2ehpp',['gradient_paint.hpp',['../a00038.html',1,'']]], ['integer_2ehpp',['integer.hpp',['../a00042.html',1,'']]], ['integer_2ehpp',['integer.hpp',['../a00041.html',1,'']]], ['matrix_5ftransform_2ehpp',['matrix_transform.hpp',['../a00109.html',1,'']]], ['packing_2ehpp',['packing.hpp',['../a00119.html',1,'']]], ['quaternion_2ehpp',['quaternion.hpp',['../a00125.html',1,'']]], ['quaternion_2ehpp',['quaternion.hpp',['../a00126.html',1,'']]], ['scalar_5frelational_2ehpp',['scalar_relational.hpp',['../a00150.html',1,'']]], ['type_5faligned_2ehpp',['type_aligned.hpp',['../a00162.html',1,'']]], ['type_5faligned_2ehpp',['type_aligned.hpp',['../a00161.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_7.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_7.js ================================================ var searchData= [ ['handed_5fcoordinate_5fspace_2ehpp',['handed_coordinate_space.hpp',['../a00039.html',1,'']]], ['hash_2ehpp',['hash.hpp',['../a00040.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_8.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_8.js ================================================ var searchData= [ ['integer_2ehpp',['integer.hpp',['../a00043.html',1,'']]], ['intersect_2ehpp',['intersect.hpp',['../a00044.html',1,'']]], ['io_2ehpp',['io.hpp',['../a00045.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_9.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_9.js ================================================ var searchData= [ ['log_5fbase_2ehpp',['log_base.hpp',['../a00046.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_a.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_a.js ================================================ var searchData= [ ['mat2x2_2ehpp',['mat2x2.hpp',['../a00048.html',1,'']]], ['mat2x3_2ehpp',['mat2x3.hpp',['../a00049.html',1,'']]], ['mat2x4_2ehpp',['mat2x4.hpp',['../a00050.html',1,'']]], ['mat3x2_2ehpp',['mat3x2.hpp',['../a00051.html',1,'']]], ['mat3x3_2ehpp',['mat3x3.hpp',['../a00052.html',1,'']]], ['mat3x4_2ehpp',['mat3x4.hpp',['../a00053.html',1,'']]], ['mat4x2_2ehpp',['mat4x2.hpp',['../a00054.html',1,'']]], ['mat4x3_2ehpp',['mat4x3.hpp',['../a00055.html',1,'']]], ['mat4x4_2ehpp',['mat4x4.hpp',['../a00056.html',1,'']]], ['matrix_2ehpp',['matrix.hpp',['../a00057.html',1,'']]], ['matrix_5faccess_2ehpp',['matrix_access.hpp',['../a00058.html',1,'']]], ['matrix_5fclip_5fspace_2ehpp',['matrix_clip_space.hpp',['../a00059.html',1,'']]], ['matrix_5fcommon_2ehpp',['matrix_common.hpp',['../a00060.html',1,'']]], ['matrix_5fcross_5fproduct_2ehpp',['matrix_cross_product.hpp',['../a00061.html',1,'']]], ['matrix_5fdecompose_2ehpp',['matrix_decompose.hpp',['../a00062.html',1,'']]], ['matrix_5fdouble2x2_2ehpp',['matrix_double2x2.hpp',['../a00063.html',1,'']]], ['matrix_5fdouble2x2_5fprecision_2ehpp',['matrix_double2x2_precision.hpp',['../a00064.html',1,'']]], ['matrix_5fdouble2x3_2ehpp',['matrix_double2x3.hpp',['../a00065.html',1,'']]], ['matrix_5fdouble2x3_5fprecision_2ehpp',['matrix_double2x3_precision.hpp',['../a00066.html',1,'']]], ['matrix_5fdouble2x4_2ehpp',['matrix_double2x4.hpp',['../a00067.html',1,'']]], ['matrix_5fdouble2x4_5fprecision_2ehpp',['matrix_double2x4_precision.hpp',['../a00068.html',1,'']]], ['matrix_5fdouble3x2_2ehpp',['matrix_double3x2.hpp',['../a00069.html',1,'']]], ['matrix_5fdouble3x2_5fprecision_2ehpp',['matrix_double3x2_precision.hpp',['../a00070.html',1,'']]], ['matrix_5fdouble3x3_2ehpp',['matrix_double3x3.hpp',['../a00071.html',1,'']]], 
['matrix_5fdouble3x3_5fprecision_2ehpp',['matrix_double3x3_precision.hpp',['../a00072.html',1,'']]], ['matrix_5fdouble3x4_2ehpp',['matrix_double3x4.hpp',['../a00073.html',1,'']]], ['matrix_5fdouble3x4_5fprecision_2ehpp',['matrix_double3x4_precision.hpp',['../a00074.html',1,'']]], ['matrix_5fdouble4x2_2ehpp',['matrix_double4x2.hpp',['../a00075.html',1,'']]], ['matrix_5fdouble4x2_5fprecision_2ehpp',['matrix_double4x2_precision.hpp',['../a00076.html',1,'']]], ['matrix_5fdouble4x3_2ehpp',['matrix_double4x3.hpp',['../a00077.html',1,'']]], ['matrix_5fdouble4x3_5fprecision_2ehpp',['matrix_double4x3_precision.hpp',['../a00078.html',1,'']]], ['matrix_5fdouble4x4_2ehpp',['matrix_double4x4.hpp',['../a00079.html',1,'']]], ['matrix_5fdouble4x4_5fprecision_2ehpp',['matrix_double4x4_precision.hpp',['../a00080.html',1,'']]], ['matrix_5ffactorisation_2ehpp',['matrix_factorisation.hpp',['../a00081.html',1,'']]], ['matrix_5ffloat2x2_2ehpp',['matrix_float2x2.hpp',['../a00082.html',1,'']]], ['matrix_5ffloat2x2_5fprecision_2ehpp',['matrix_float2x2_precision.hpp',['../a00083.html',1,'']]], ['matrix_5ffloat2x3_2ehpp',['matrix_float2x3.hpp',['../a00084.html',1,'']]], ['matrix_5ffloat2x3_5fprecision_2ehpp',['matrix_float2x3_precision.hpp',['../a00085.html',1,'']]], ['matrix_5ffloat2x4_2ehpp',['matrix_float2x4.hpp',['../a00086.html',1,'']]], ['matrix_5ffloat2x4_5fprecision_2ehpp',['matrix_float2x4_precision.hpp',['../a00087.html',1,'']]], ['matrix_5ffloat3x2_2ehpp',['matrix_float3x2.hpp',['../a00088.html',1,'']]], ['matrix_5ffloat3x2_5fprecision_2ehpp',['matrix_float3x2_precision.hpp',['../a00089.html',1,'']]], ['matrix_5ffloat3x3_2ehpp',['matrix_float3x3.hpp',['../a00090.html',1,'']]], ['matrix_5ffloat3x3_5fprecision_2ehpp',['matrix_float3x3_precision.hpp',['../a00091.html',1,'']]], ['matrix_5ffloat3x4_2ehpp',['matrix_float3x4.hpp',['../a00092.html',1,'']]], ['matrix_5ffloat3x4_5fprecision_2ehpp',['matrix_float3x4_precision.hpp',['../a00093.html',1,'']]], 
['matrix_5ffloat4x2_2ehpp',['matrix_float4x2.hpp',['../a00094.html',1,'']]], ['matrix_5ffloat4x3_2ehpp',['matrix_float4x3.hpp',['../a00096.html',1,'']]], ['matrix_5ffloat4x3_5fprecision_2ehpp',['matrix_float4x3_precision.hpp',['../a00097.html',1,'']]], ['matrix_5ffloat4x4_2ehpp',['matrix_float4x4.hpp',['../a00098.html',1,'']]], ['matrix_5ffloat4x4_5fprecision_2ehpp',['matrix_float4x4_precision.hpp',['../a00099.html',1,'']]], ['matrix_5finteger_2ehpp',['matrix_integer.hpp',['../a00100.html',1,'']]], ['matrix_5finterpolation_2ehpp',['matrix_interpolation.hpp',['../a00101.html',1,'']]], ['matrix_5finverse_2ehpp',['matrix_inverse.hpp',['../a00102.html',1,'']]], ['matrix_5fmajor_5fstorage_2ehpp',['matrix_major_storage.hpp',['../a00103.html',1,'']]], ['matrix_5foperation_2ehpp',['matrix_operation.hpp',['../a00104.html',1,'']]], ['matrix_5fprojection_2ehpp',['matrix_projection.hpp',['../a00105.html',1,'']]], ['matrix_5fquery_2ehpp',['matrix_query.hpp',['../a00106.html',1,'']]], ['matrix_5frelational_2ehpp',['matrix_relational.hpp',['../a00107.html',1,'']]], ['matrix_5ftransform_5f2d_2ehpp',['matrix_transform_2d.hpp',['../a00110.html',1,'']]], ['mixed_5fproduct_2ehpp',['mixed_product.hpp',['../a00111.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_b.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_b.js ================================================ var searchData= [ ['noise_2ehpp',['noise.hpp',['../a00112.html',1,'']]], ['norm_2ehpp',['norm.hpp',['../a00113.html',1,'']]], ['normal_2ehpp',['normal.hpp',['../a00114.html',1,'']]], ['normalize_5fdot_2ehpp',['normalize_dot.hpp',['../a00115.html',1,'']]], ['number_5fprecision_2ehpp',['number_precision.hpp',['../a00116.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_c.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_c.js ================================================ var searchData= [ ['optimum_5fpow_2ehpp',['optimum_pow.hpp',['../a00117.html',1,'']]], ['orthonormalize_2ehpp',['orthonormalize.hpp',['../a00118.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_d.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_d.js ================================================ var searchData= [ ['packing_2ehpp',['packing.hpp',['../a00120.html',1,'']]], ['perpendicular_2ehpp',['perpendicular.hpp',['../a00121.html',1,'']]], ['polar_5fcoordinates_2ehpp',['polar_coordinates.hpp',['../a00122.html',1,'']]], ['projection_2ehpp',['projection.hpp',['../a00123.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_e.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_e.js ================================================ var searchData= [ ['quaternion_5fcommon_2ehpp',['quaternion_common.hpp',['../a00127.html',1,'']]], ['quaternion_5fdouble_2ehpp',['quaternion_double.hpp',['../a00128.html',1,'']]], ['quaternion_5fdouble_5fprecision_2ehpp',['quaternion_double_precision.hpp',['../a00129.html',1,'']]], ['quaternion_5fexponential_2ehpp',['quaternion_exponential.hpp',['../a00130.html',1,'']]], ['quaternion_5ffloat_2ehpp',['quaternion_float.hpp',['../a00131.html',1,'']]], ['quaternion_5ffloat_5fprecision_2ehpp',['quaternion_float_precision.hpp',['../a00132.html',1,'']]], ['quaternion_5fgeometric_2ehpp',['quaternion_geometric.hpp',['../a00133.html',1,'']]], ['quaternion_5frelational_2ehpp',['quaternion_relational.hpp',['../a00134.html',1,'']]], ['quaternion_5ftransform_2ehpp',['quaternion_transform.hpp',['../a00135.html',1,'']]], ['quaternion_5ftrigonometric_2ehpp',['quaternion_trigonometric.hpp',['../a00136.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_f.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/files_f.js ================================================ var searchData= [ ['random_2ehpp',['random.hpp',['../a00137.html',1,'']]], ['range_2ehpp',['range.hpp',['../a00138.html',1,'']]], ['raw_5fdata_2ehpp',['raw_data.hpp',['../a00139.html',1,'']]], ['reciprocal_2ehpp',['reciprocal.hpp',['../a00140.html',1,'']]], ['rotate_5fnormalized_5faxis_2ehpp',['rotate_normalized_axis.hpp',['../a00141.html',1,'']]], ['rotate_5fvector_2ehpp',['rotate_vector.hpp',['../a00142.html',1,'']]], ['round_2ehpp',['round.hpp',['../a00143.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_0.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_0.js ================================================ var searchData= [ ['abs',['abs',['../a00241.html#ga439e60a72eadecfeda2df5449c613a64',1,'glm::abs(genType x)'],['../a00241.html#ga81d3abddd0ef0c8de579bc541ecadab6',1,'glm::abs(vec< L, T, Q > const &x)']]], ['acos',['acos',['../a00373.html#gacc9b092df8257c68f19c9053703e2563',1,'glm']]], ['acosh',['acosh',['../a00373.html#ga858f35dc66fd2688f20c52b5f25be76a',1,'glm']]], ['acot',['acot',['../a00301.html#gaeadfb9c9d71093f7865b2ba2ca8d104d',1,'glm']]], ['acoth',['acoth',['../a00301.html#gafaca98a7100170db8841f446282debfa',1,'glm']]], ['acsc',['acsc',['../a00301.html#ga1b4bed91476b9b915e76b4a30236d330',1,'glm']]], ['acsch',['acsch',['../a00301.html#ga4b50aa5e5afc7e19ec113ab91596c576',1,'glm']]], ['adjugate',['adjugate',['../a00339.html#ga40a38402a30860af6e508fe76211e659',1,'glm::adjugate(mat< 2, 2, T, Q > const &m)'],['../a00339.html#gaddb09f7abc1a9c56a243d32ff3538be6',1,'glm::adjugate(mat< 3, 3, T, Q > const &m)'],['../a00339.html#ga9aaa7d1f40391b0b5cacccb60e104ba8',1,'glm::adjugate(mat< 4, 4, T, Q > const &m)']]], ['affineinverse',['affineInverse',['../a00295.html#gae0fcc5fc8783291f9702272de428fa0e',1,'glm']]], ['all',['all',['../a00374.html#ga87e53f50b679f5f95c5cb4780311b3dd',1,'glm']]], ['angle',['angle',['../a00257.html#ga8aa248b31d5ade470c87304df5eb7bd8',1,'glm::angle(qua< T, Q > const &x)'],['../a00367.html#ga2e2917b4cb75ca3d043ac15ff88f14e1',1,'glm::angle(vec< L, T, Q > const &x, vec< L, T, Q > const &y)']]], ['angleaxis',['angleAxis',['../a00257.html#ga5c0095cfcb218c75a4b79d7687950036',1,'glm']]], ['any',['any',['../a00374.html#ga911b3f8e41459dd551ccb6d385d91061',1,'glm']]], ['arecollinear',['areCollinear',['../a00368.html#ga13da4a787a2ff70e95d561fb19ff91b4',1,'glm']]], 
['areorthogonal',['areOrthogonal',['../a00368.html#gac7b95b3f798e3c293262b2bdaad47c57',1,'glm']]], ['areorthonormal',['areOrthonormal',['../a00368.html#ga1b091c3d7f9ee3b0708311c001c293e3',1,'glm']]], ['asec',['asec',['../a00301.html#ga2c5b7f962c2c9ff684e6d2de48db1f10',1,'glm']]], ['asech',['asech',['../a00301.html#gaec7586dccfe431f850d006f3824b8ca6',1,'glm']]], ['asin',['asin',['../a00373.html#ga0552d2df4865fa8c3d7cfc3ec2caac73',1,'glm']]], ['asinh',['asinh',['../a00373.html#ga3ef16b501ee859fddde88e22192a5950',1,'glm']]], ['associatedmax',['associatedMax',['../a00308.html#ga7d9c8785230c8db60f72ec8975f1ba45',1,'glm::associatedMax(T x, U a, T y, U b)'],['../a00308.html#ga5c6758bc50aa7fbe700f87123a045aad',1,'glm::associatedMax(vec< L, T, Q > const &x, vec< L, U, Q > const &a, vec< L, T, Q > const &y, vec< L, U, Q > const &b)'],['../a00308.html#ga0d169d6ce26b03248df175f39005d77f',1,'glm::associatedMax(T x, vec< L, U, Q > const &a, T y, vec< L, U, Q > const &b)'],['../a00308.html#ga4086269afabcb81dd7ded33cb3448653',1,'glm::associatedMax(vec< L, T, Q > const &x, U a, vec< L, T, Q > const &y, U b)'],['../a00308.html#gaec891e363d91abbf3a4443cf2f652209',1,'glm::associatedMax(T x, U a, T y, U b, T z, U c)'],['../a00308.html#gab84fdc35016a31e8cd0cbb8296bddf7c',1,'glm::associatedMax(vec< L, T, Q > const &x, vec< L, U, Q > const &a, vec< L, T, Q > const &y, vec< L, U, Q > const &b, vec< L, T, Q > const &z, vec< L, U, Q > const &c)'],['../a00308.html#gadd2a2002f4f2144bbc39eb2336dd2fba',1,'glm::associatedMax(T x, vec< L, U, Q > const &a, T y, vec< L, U, Q > const &b, T z, vec< L, U, Q > const &c)'],['../a00308.html#ga19f59d1141a51a3b2108a9807af78f7f',1,'glm::associatedMax(vec< L, T, Q > const &x, U a, vec< L, T, Q > const &y, U b, vec< L, T, Q > const &z, U c)'],['../a00308.html#ga3038ffcb43eaa6af75897a99a5047ccc',1,'glm::associatedMax(T x, U a, T y, U b, T z, U c, T w, U d)'],['../a00308.html#gaf5ab0c428f8d1cd9e3b45fcfbf6423a6',1,'glm::associatedMax(vec< L, T, Q > const &x, vec< 
L, U, Q > const &a, vec< L, T, Q > const &y, vec< L, U, Q > const &b, vec< L, T, Q > const &z, vec< L, U, Q > const &c, vec< L, T, Q > const &w, vec< L, U, Q > const &d)'],['../a00308.html#ga11477c2c4b5b0bfd1b72b29df3725a9d',1,'glm::associatedMax(T x, vec< L, U, Q > const &a, T y, vec< L, U, Q > const &b, T z, vec< L, U, Q > const &c, T w, vec< L, U, Q > const &d)'],['../a00308.html#gab9c3dd74cac899d2c625b5767ea3b3fb',1,'glm::associatedMax(vec< L, T, Q > const &x, U a, vec< L, T, Q > const &y, U b, vec< L, T, Q > const &z, U c, vec< L, T, Q > const &w, U d)']]], ['associatedmin',['associatedMin',['../a00308.html#gacc01bd272359572fc28437ae214a02df',1,'glm::associatedMin(T x, U a, T y, U b)'],['../a00308.html#gac2f0dff90948f2e44386a5eafd941d1c',1,'glm::associatedMin(vec< L, T, Q > const &x, vec< L, U, Q > const &a, vec< L, T, Q > const &y, vec< L, U, Q > const &b)'],['../a00308.html#gacfec519c820331d023ef53a511749319',1,'glm::associatedMin(T x, const vec< L, U, Q > &a, T y, const vec< L, U, Q > &b)'],['../a00308.html#ga4757c7cab2d809124a8525d0a9deeb37',1,'glm::associatedMin(vec< L, T, Q > const &x, U a, vec< L, T, Q > const &y, U b)'],['../a00308.html#gad0aa8f86259a26d839d34a3577a923fc',1,'glm::associatedMin(T x, U a, T y, U b, T z, U c)'],['../a00308.html#ga723e5411cebc7ffbd5c81ffeec61127d',1,'glm::associatedMin(vec< L, T, Q > const &x, vec< L, U, Q > const &a, vec< L, T, Q > const &y, vec< L, U, Q > const &b, vec< L, T, Q > const &z, vec< L, U, Q > const &c)'],['../a00308.html#ga432224ebe2085eaa2b63a077ecbbbff6',1,'glm::associatedMin(T x, U a, T y, U b, T z, U c, T w, U d)'],['../a00308.html#ga66b08118bc88f0494bcacb7cdb940556',1,'glm::associatedMin(vec< L, T, Q > const &x, vec< L, U, Q > const &a, vec< L, T, Q > const &y, vec< L, U, Q > const &b, vec< L, T, Q > const &z, vec< L, U, Q > const &c, vec< L, T, Q > const &w, vec< L, U, Q > const &d)'],['../a00308.html#ga78c28fde1a7080fb7420bd88e68c6c68',1,'glm::associatedMin(T x, vec< L, U, Q > const &a, T y, vec< L, U, 
Q > const &b, T z, vec< L, U, Q > const &c, T w, vec< L, U, Q > const &d)'],['../a00308.html#ga2db7e351994baee78540a562d4bb6d3b',1,'glm::associatedMin(vec< L, T, Q > const &x, U a, vec< L, T, Q > const &y, U b, vec< L, T, Q > const &z, U c, vec< L, T, Q > const &w, U d)']]], ['atan',['atan',['../a00373.html#gac61629f3a4aa14057e7a8cae002291db',1,'glm::atan(vec< L, T, Q > const &y, vec< L, T, Q > const &x)'],['../a00373.html#ga5229f087eaccbc466f1c609ce3107b95',1,'glm::atan(vec< L, T, Q > const &y_over_x)']]], ['atan2',['atan2',['../a00315.html#gac63011205bf6d0be82589dc56dd26708',1,'glm::atan2(T x, T y)'],['../a00315.html#ga83bc41bd6f89113ee8006576b12bfc50',1,'glm::atan2(const vec< 2, T, Q > &x, const vec< 2, T, Q > &y)'],['../a00315.html#gac39314f5087e7e51e592897cabbc1927',1,'glm::atan2(const vec< 3, T, Q > &x, const vec< 3, T, Q > &y)'],['../a00315.html#gaba86c28da7bf5bdac64fecf7d56e8ff3',1,'glm::atan2(const vec< 4, T, Q > &x, const vec< 4, T, Q > &y)']]], ['atanh',['atanh',['../a00373.html#gabc925650e618357d07da255531658b87',1,'glm']]], ['axis',['axis',['../a00257.html#ga764254f10248b505e936e5309a88c23d',1,'glm']]], ['axisangle',['axisAngle',['../a00337.html#gafefe32ce5a90a135287ba34fac3623bc',1,'glm']]], ['axisanglematrix',['axisAngleMatrix',['../a00337.html#ga3a788e2f5223397df5c426413ecc2f6b',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_1.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_1.js ================================================ var searchData= [ ['backeasein',['backEaseIn',['../a00318.html#ga93cddcdb6347a44d5927cc2bf2570816',1,'glm::backEaseIn(genType const &a)'],['../a00318.html#ga33777c9dd98f61d9472f96aafdf2bd36',1,'glm::backEaseIn(genType const &a, genType const &o)']]], ['backeaseinout',['backEaseInOut',['../a00318.html#gace6d24722a2f6722b56398206eb810bb',1,'glm::backEaseInOut(genType const &a)'],['../a00318.html#ga68a7b760f2afdfab298d5cd6d7611fb1',1,'glm::backEaseInOut(genType const &a, genType const &o)']]], ['backeaseout',['backEaseOut',['../a00318.html#gabf25069fa906413c858fd46903d520b9',1,'glm::backEaseOut(genType const &a)'],['../a00318.html#ga640c1ac6fe9d277a197da69daf60ee4f',1,'glm::backEaseOut(genType const &a, genType const &o)']]], ['ballrand',['ballRand',['../a00300.html#ga7c53b7797f3147af68a11c767679fa3f',1,'glm']]], ['bitcount',['bitCount',['../a00370.html#ga44abfe3379e11cbd29425a843420d0d6',1,'glm::bitCount(genType v)'],['../a00370.html#gaac7b15e40bdea8d9aa4c4cb34049f7b5',1,'glm::bitCount(vec< L, T, Q > const &v)']]], ['bitfielddeinterleave',['bitfieldDeinterleave',['../a00288.html#ga091d934233a2e121df91b8c7230357c8',1,'glm::bitfieldDeinterleave(glm::uint16 x)'],['../a00288.html#ga7d1cc24dfbcdd932c3a2abbb76235f98',1,'glm::bitfieldDeinterleave(glm::uint32 x)'],['../a00288.html#ga8dbb8c87092f33bd815dd8a840be5d60',1,'glm::bitfieldDeinterleave(glm::uint64 x)']]], ['bitfieldextract',['bitfieldExtract',['../a00370.html#ga346b25ab11e793e91a4a69c8aa6819f2',1,'glm']]], ['bitfieldfillone',['bitfieldFillOne',['../a00288.html#ga46f9295abe3b5c7658f5b13c7f819f0a',1,'glm::bitfieldFillOne(genIUType Value, int FirstBit, int BitCount)'],['../a00288.html#ga3e96dd1f0a4bc892f063251ed118c0c1',1,'glm::bitfieldFillOne(vec< L, T, Q > const &Value, int FirstBit, int BitCount)']]], 
['bitfieldfillzero',['bitfieldFillZero',['../a00288.html#ga697b86998b7d74ee0a69d8e9f8819fee',1,'glm::bitfieldFillZero(genIUType Value, int FirstBit, int BitCount)'],['../a00288.html#ga0d16c9acef4be79ea9b47c082a0cf7c2',1,'glm::bitfieldFillZero(vec< L, T, Q > const &Value, int FirstBit, int BitCount)']]], ['bitfieldinsert',['bitfieldInsert',['../a00370.html#ga2e82992340d421fadb61a473df699b20',1,'glm']]], ['bitfieldinterleave',['bitfieldInterleave',['../a00288.html#ga24cad0069f9a0450abd80b3e89501adf',1,'glm::bitfieldInterleave(int8 x, int8 y)'],['../a00288.html#ga9a4976a529aec2cee56525e1165da484',1,'glm::bitfieldInterleave(uint8 x, uint8 y)'],['../a00288.html#ga4a76bbca39c40153f3203d0a1926e142',1,'glm::bitfieldInterleave(u8vec2 const &v)'],['../a00288.html#gac51c33a394593f0631fa3aa5bb778809',1,'glm::bitfieldInterleave(int16 x, int16 y)'],['../a00288.html#ga94f3646a5667f4be56f8dcf3310e963f',1,'glm::bitfieldInterleave(uint16 x, uint16 y)'],['../a00288.html#ga406c4ee56af4ca37a73f449f154eca3e',1,'glm::bitfieldInterleave(u16vec2 const &v)'],['../a00288.html#gaebb756a24a0784e3d6fba8bd011ab77a',1,'glm::bitfieldInterleave(int32 x, int32 y)'],['../a00288.html#ga2f1e2b3fe699e7d897ae38b2115ddcbd',1,'glm::bitfieldInterleave(uint32 x, uint32 y)'],['../a00288.html#ga8cb17574d60abd6ade84bc57c10e8f78',1,'glm::bitfieldInterleave(u32vec2 const &v)'],['../a00288.html#ga8fdb724dccd4a07d57efc01147102137',1,'glm::bitfieldInterleave(int8 x, int8 y, int8 z)'],['../a00288.html#ga9fc2a0dd5dcf8b00e113f272a5feca93',1,'glm::bitfieldInterleave(uint8 x, uint8 y, uint8 z)'],['../a00288.html#gaa901c36a842fa5d126ea650549f17b24',1,'glm::bitfieldInterleave(int16 x, int16 y, int16 z)'],['../a00288.html#ga3afd6d38881fe3948c53d4214d2197fd',1,'glm::bitfieldInterleave(uint16 x, uint16 y, uint16 z)'],['../a00288.html#gad2075d96a6640121edaa98ea534102ca',1,'glm::bitfieldInterleave(int32 x, int32 y, int32 z)'],['../a00288.html#gab19fbc739fc0cf7247978602c36f7da8',1,'glm::bitfieldInterleave(uint32 x, uint32 y, 
uint32 z)'],['../a00288.html#ga8a44ae22f5c953b296c42d067dccbe6d',1,'glm::bitfieldInterleave(int8 x, int8 y, int8 z, int8 w)'],['../a00288.html#ga14bb274d54a3c26f4919dd7ed0dd0c36',1,'glm::bitfieldInterleave(uint8 x, uint8 y, uint8 z, uint8 w)'],['../a00288.html#ga180a63161e1319fbd5a53c84d0429c7a',1,'glm::bitfieldInterleave(int16 x, int16 y, int16 z, int16 w)'],['../a00288.html#gafca8768671a14c8016facccb66a89f26',1,'glm::bitfieldInterleave(uint16 x, uint16 y, uint16 z, uint16 w)']]], ['bitfieldreverse',['bitfieldReverse',['../a00370.html#ga750a1d92464489b7711dee67aa3441b6',1,'glm']]], ['bitfieldrotateleft',['bitfieldRotateLeft',['../a00288.html#ga2eb49678a344ce1495bdb5586d9896b9',1,'glm::bitfieldRotateLeft(genIUType In, int Shift)'],['../a00288.html#gae186317091b1a39214ebf79008d44a1e',1,'glm::bitfieldRotateLeft(vec< L, T, Q > const &In, int Shift)']]], ['bitfieldrotateright',['bitfieldRotateRight',['../a00288.html#ga1c33d075c5fb8bd8dbfd5092bfc851ca',1,'glm::bitfieldRotateRight(genIUType In, int Shift)'],['../a00288.html#ga590488e1fc00a6cfe5d3bcaf93fbfe88',1,'glm::bitfieldRotateRight(vec< L, T, Q > const &In, int Shift)']]], ['bounceeasein',['bounceEaseIn',['../a00318.html#gaac30767f2e430b0c3fc859a4d59c7b5b',1,'glm']]], ['bounceeaseinout',['bounceEaseInOut',['../a00318.html#gadf9f38eff1e5f4c2fa5b629a25ae413e',1,'glm']]], ['bounceeaseout',['bounceEaseOut',['../a00318.html#ga94007005ff0dcfa0749ebfa2aec540b2',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_10.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_10.js ================================================ var searchData= [ ['saturate',['saturate',['../a00315.html#ga0fd09e616d122bc2ed9726682ffd44b7',1,'glm::saturate(T x)'],['../a00315.html#gaee97b8001c794a78a44f5d59f62a8aba',1,'glm::saturate(const vec< 2, T, Q > &x)'],['../a00315.html#ga39bfe3a421286ee31680d45c31ccc161',1,'glm::saturate(const vec< 3, T, Q > &x)'],['../a00315.html#ga356f8c3a7e7d6376d3d4b0a026407183',1,'glm::saturate(const vec< 4, T, Q > &x)']]], ['saturation',['saturation',['../a00312.html#ga01a97152b44e1550edcac60bd849e884',1,'glm::saturation(T const s)'],['../a00312.html#ga2156cea600e90148ece5bc96fd6db43a',1,'glm::saturation(T const s, vec< 3, T, Q > const &color)'],['../a00312.html#gaba0eacee0736dae860e9371cc1ae4785',1,'glm::saturation(T const s, vec< 4, T, Q > const &color)']]], ['scale',['scale',['../a00247.html#ga05051adbee603fb3c5095d8cf5cc229b',1,'glm::scale(mat< 4, 4, T, Q > const &m, vec< 3, T, Q > const &v)'],['../a00341.html#gadb47d2ad2bd984b213e8ff7d9cd8154e',1,'glm::scale(mat< 3, 3, T, Q > const &m, vec< 2, T, Q > const &v)'],['../a00362.html#gafbeefee8fec884d566e4ada0049174d7',1,'glm::scale(vec< 3, T, Q > const &v)']]], ['scalebias',['scaleBias',['../a00363.html#gabf249498b236e62c983d90d30d63c99c',1,'glm::scaleBias(T scale, T bias)'],['../a00363.html#gae2bdd91a76759fecfbaef97e3020aa8e',1,'glm::scaleBias(mat< 4, 4, T, Q > const &m, T scale, T bias)']]], ['sec',['sec',['../a00301.html#gae4bcbebee670c5ea155f0777b3acbd84',1,'glm']]], ['sech',['sech',['../a00301.html#ga9a5cfd1e7170104a7b33863b1b75e5ae',1,'glm']]], ['shearx',['shearX',['../a00341.html#ga2a118ece5db1e2022112b954846012af',1,'glm']]], ['shearx2d',['shearX2D',['../a00363.html#gabf714b8a358181572b32a45555f71948',1,'glm']]], ['shearx3d',['shearX3D',['../a00363.html#ga73e867c6cd4d700fe2054437e56106c4',1,'glm']]], 
['sheary',['shearY',['../a00341.html#ga717f1833369c1ac4a40e4ac015af885e',1,'glm']]], ['sheary2d',['shearY2D',['../a00363.html#gac7998d0763d9181550c77e8af09a182c',1,'glm']]], ['sheary3d',['shearY3D',['../a00363.html#gade5bb65ffcb513973db1a1314fb5cfac',1,'glm']]], ['shearz3d',['shearZ3D',['../a00363.html#ga6591e0a3a9d2c9c0b6577bb4dace0255',1,'glm']]], ['shortmix',['shortMix',['../a00352.html#gadc576cc957adc2a568cdcbc3799175bc',1,'glm']]], ['sign',['sign',['../a00241.html#ga1e2e5cfff800056540e32f6c9b604b28',1,'glm::sign(vec< L, T, Q > const &x)'],['../a00333.html#ga04ef803a24f3d4f8c67dbccb33b0fce0',1,'glm::sign(vec< L, T, Q > const &x, vec< L, T, Q > const &base)']]], ['simplex',['simplex',['../a00297.html#ga8122468c69015ff397349a7dcc638b27',1,'glm']]], ['sin',['sin',['../a00373.html#ga29747fd108cb7292ae5a284f69691a69',1,'glm']]], ['sineeasein',['sineEaseIn',['../a00318.html#gafb338ac6f6b2bcafee50e3dca5201dbf',1,'glm']]], ['sineeaseinout',['sineEaseInOut',['../a00318.html#gaa46e3d5fbf7a15caa28eff9ef192d7c7',1,'glm']]], ['sineeaseout',['sineEaseOut',['../a00318.html#gab3e454f883afc1606ef91363881bf5a3',1,'glm']]], ['sinh',['sinh',['../a00373.html#gac7c39ff21809e281552b4dbe46f4a39d',1,'glm']]], ['slerp',['slerp',['../a00248.html#gae7fc3c945be366b9942b842f55da428a',1,'glm::slerp(qua< T, Q > const &x, qua< T, Q > const &y, T a)'],['../a00356.html#ga8b11b18ce824174ea1a5a69ea14e2cee',1,'glm::slerp(vec< 3, T, Q > const &x, vec< 3, T, Q > const &y, T const &a)']]], ['smoothstep',['smoothstep',['../a00241.html#ga562edf7eca082cc5b7a0aaf180436daf',1,'glm']]], ['sphericalrand',['sphericalRand',['../a00300.html#ga22f90fcaccdf001c516ca90f6428e138',1,'glm']]], ['sqrt',['sqrt',['../a00242.html#gaa83e5f1648b7ccdf33b87c07c76cb77c',1,'glm::sqrt(vec< L, T, Q > const &v)'],['../a00256.html#ga64b7b255ed7bcba616fe6b44470b022e',1,'glm::sqrt(qua< T, Q > const &q)'],['../a00330.html#ga7ce36693a75879ccd9bb10167cfa722d',1,'glm::sqrt(int 
x)'],['../a00330.html#ga1975d318978d6dacf78b6444fa5ed7bc',1,'glm::sqrt(uint x)']]], ['squad',['squad',['../a00352.html#ga0b9bf3459e132ad8a18fe970669e3e35',1,'glm']]], ['step',['step',['../a00241.html#ga015a1261ff23e12650211aa872863cce',1,'glm::step(genType edge, genType x)'],['../a00241.html#ga8f9a911a48ef244b51654eaefc81c551',1,'glm::step(T edge, vec< L, T, Q > const &x)'],['../a00241.html#gaf4a5fc81619c7d3e8b22f53d4a098c7f',1,'glm::step(vec< L, T, Q > const &edge, vec< L, T, Q > const &x)']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_11.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_11.js ================================================ var searchData= [ ['tan',['tan',['../a00373.html#ga293a34cfb9f0115cc606b4a97c84f11f',1,'glm']]], ['tanh',['tanh',['../a00373.html#gaa1bccbfdcbe40ed2ffcddc2aa8bfd0f1',1,'glm']]], ['third',['third',['../a00290.html#ga3077c6311010a214b69ddc8214ec13b5',1,'glm']]], ['three_5fover_5ftwo_5fpi',['three_over_two_pi',['../a00290.html#gae94950df74b0ce382b1fc1d978ef7394',1,'glm']]], ['to_5fstring',['to_string',['../a00360.html#ga8f0dced1fd45e67e2d77e80ab93c7af5',1,'glm']]], ['tomat3',['toMat3',['../a00352.html#gaab0afabb894b28a983fb8ec610409d56',1,'glm']]], ['tomat4',['toMat4',['../a00352.html#gadfa2c77094e8cc9adad321d938855ffb',1,'glm']]], ['toquat',['toQuat',['../a00352.html#ga798de5d186499c9a9231cd92c8afaef1',1,'glm::toQuat(mat< 3, 3, T, Q > const &x)'],['../a00352.html#ga5eb36f51e1638e710451eba194dbc011',1,'glm::toQuat(mat< 4, 4, T, Q > const &x)']]], ['translate',['translate',['../a00247.html#ga1a4ecc4ad82652b8fb14dcb087879284',1,'glm::translate(mat< 4, 4, T, Q > const &m, vec< 3, T, Q > const &v)'],['../a00341.html#gaf4573ae47c80938aa9053ef6a33755ab',1,'glm::translate(mat< 3, 3, T, Q > const &m, vec< 2, T, Q > const &v)'],['../a00362.html#ga309a30e652e58c396e2c3d4db3ee7658',1,'glm::translate(vec< 3, T, Q > const &v)']]], ['transpose',['transpose',['../a00371.html#gae679d841da8ce9dbcc6c2d454f15bc35',1,'glm']]], ['trianglenormal',['triangleNormal',['../a00344.html#gaff1cb5496925dfa7962df457772a7f35',1,'glm']]], ['trunc',['trunc',['../a00241.html#gaf9375e3e06173271d49e6ffa3a334259',1,'glm']]], ['tweakedinfiniteperspective',['tweakedInfinitePerspective',['../a00243.html#gaaeacc04a2a6f4b18c5899d37e7bb3ef9',1,'glm::tweakedInfinitePerspective(T fovy, T aspect, T near)'],['../a00243.html#gaf5b3c85ff6737030a1d2214474ffa7a8',1,'glm::tweakedInfinitePerspective(T fovy, T aspect, T 
near, T ep)']]], ['two_5fover_5fpi',['two_over_pi',['../a00290.html#ga74eadc8a211253079683219a3ea0462a',1,'glm']]], ['two_5fover_5froot_5fpi',['two_over_root_pi',['../a00290.html#ga5827301817640843cf02026a8d493894',1,'glm']]], ['two_5fpi',['two_pi',['../a00290.html#gaa5276a4617566abcfe49286f40e3a256',1,'glm']]], ['two_5fthirds',['two_thirds',['../a00290.html#ga9b4d2f4322edcf63a6737b92a29dd1f5',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_12.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_12.js ================================================ var searchData= [ ['uaddcarry',['uaddCarry',['../a00370.html#gaedcec48743632dff6786bcc492074b1b',1,'glm']]], ['uintbitstofloat',['uintBitsToFloat',['../a00241.html#gab2bae0d15dcdca6093f88f76b3975d97',1,'glm::uintBitsToFloat(uint const &v)'],['../a00241.html#ga97f46b5f7b42fe44482e13356eb394ae',1,'glm::uintBitsToFloat(vec< L, uint, Q > const &v)']]], ['umulextended',['umulExtended',['../a00370.html#ga732e2fb56db57ea541c7e5c92b7121be',1,'glm']]], ['unpackdouble2x32',['unpackDouble2x32',['../a00372.html#ga5f4296dc5f12f0aa67ac05b8bb322483',1,'glm']]], ['unpackf2x11_5f1x10',['unpackF2x11_1x10',['../a00298.html#ga2b1fd1e854705b1345e98409e0a25e50',1,'glm']]], ['unpackf3x9_5fe1x5',['unpackF3x9_E1x5',['../a00298.html#gab9e60ebe3ad3eeced6a9ec6eb876d74e',1,'glm']]], ['unpackhalf',['unpackHalf',['../a00298.html#ga30d6b2f1806315bcd6047131f547d33b',1,'glm']]], ['unpackhalf1x16',['unpackHalf1x16',['../a00298.html#gac37dedaba24b00adb4ec6e8f92c19dbf',1,'glm']]], ['unpackhalf2x16',['unpackHalf2x16',['../a00372.html#gaf59b52e6b28da9335322c4ae19b5d745',1,'glm']]], ['unpackhalf4x16',['unpackHalf4x16',['../a00298.html#ga57dfc41b2eb20b0ac00efae7d9c49dcd',1,'glm']]], ['unpacki3x10_5f1x2',['unpackI3x10_1x2',['../a00298.html#ga9a05330e5490be0908d3b117d82aff56',1,'glm']]], ['unpackint2x16',['unpackInt2x16',['../a00298.html#gaccde055882918a3175de82f4ca8b7d8e',1,'glm']]], ['unpackint2x32',['unpackInt2x32',['../a00298.html#gab297c0bfd38433524791eb0584d8f08d',1,'glm']]], ['unpackint2x8',['unpackInt2x8',['../a00298.html#gab0c59f1e259fca9e68adb2207a6b665e',1,'glm']]], ['unpackint4x16',['unpackInt4x16',['../a00298.html#ga52c154a9b232b62c22517a700cc0c78c',1,'glm']]], ['unpackint4x8',['unpackInt4x8',['../a00298.html#ga1cd8d2038cdd33a860801aa155a26221',1,'glm']]], 
['unpackrgbm',['unpackRGBM',['../a00298.html#ga5c1ec97894b05ea21a05aea4f0204a02',1,'glm']]], ['unpacksnorm',['unpackSnorm',['../a00298.html#ga6d49b31e5c3f9df8e1f99ab62b999482',1,'glm']]], ['unpacksnorm1x16',['unpackSnorm1x16',['../a00298.html#ga96dd15002370627a443c835ab03a766c',1,'glm']]], ['unpacksnorm1x8',['unpackSnorm1x8',['../a00298.html#ga4851ff86678aa1c7ace9d67846894285',1,'glm']]], ['unpacksnorm2x16',['unpackSnorm2x16',['../a00372.html#gacd8f8971a3fe28418be0d0fa1f786b38',1,'glm']]], ['unpacksnorm2x8',['unpackSnorm2x8',['../a00298.html#ga8b128e89be449fc71336968a66bf6e1a',1,'glm']]], ['unpacksnorm3x10_5f1x2',['unpackSnorm3x10_1x2',['../a00298.html#ga7a4fbf79be9740e3c57737bc2af05e5b',1,'glm']]], ['unpacksnorm4x16',['unpackSnorm4x16',['../a00298.html#gaaddf9c353528fe896106f7181219c7f4',1,'glm']]], ['unpacksnorm4x8',['unpackSnorm4x8',['../a00372.html#ga2db488646d48b7c43d3218954523fe82',1,'glm']]], ['unpacku3x10_5f1x2',['unpackU3x10_1x2',['../a00298.html#ga48df3042a7d079767f5891a1bfd8a60a',1,'glm']]], ['unpackuint2x16',['unpackUint2x16',['../a00298.html#ga035bbbeab7ec2b28c0529757395b645b',1,'glm']]], ['unpackuint2x32',['unpackUint2x32',['../a00298.html#gaf942ff11b65e83eb5f77e68329ebc6ab',1,'glm']]], ['unpackuint2x8',['unpackUint2x8',['../a00298.html#gaa7600a6c71784b637a410869d2a5adcd',1,'glm']]], ['unpackuint4x16',['unpackUint4x16',['../a00298.html#gab173834ef14cfc23a96a959f3ff4b8dc',1,'glm']]], ['unpackuint4x8',['unpackUint4x8',['../a00298.html#gaf6dc0e4341810a641c7ed08f10e335d1',1,'glm']]], ['unpackunorm',['unpackUnorm',['../a00298.html#ga3e6ac9178b59f0b1b2f7599f2183eb7f',1,'glm']]], ['unpackunorm1x16',['unpackUnorm1x16',['../a00298.html#ga83d34160a5cb7bcb5339823210fc7501',1,'glm']]], ['unpackunorm1x5_5f1x6_5f1x5',['unpackUnorm1x5_1x6_1x5',['../a00298.html#gab3bc08ecfc0f3339be93fb2b3b56d88a',1,'glm']]], ['unpackunorm1x8',['unpackUnorm1x8',['../a00298.html#ga1319207e30874fb4931a9ee913983ee1',1,'glm']]], 
['unpackunorm2x16',['unpackUnorm2x16',['../a00372.html#ga1f66188e5d65afeb9ffba1ad971e4007',1,'glm']]], ['unpackunorm2x3_5f1x2',['unpackUnorm2x3_1x2',['../a00298.html#ga6abd5a9014df3b5ce4059008d2491260',1,'glm']]], ['unpackunorm2x4',['unpackUnorm2x4',['../a00298.html#ga2e50476132fe5f27f08e273d9c70d85b',1,'glm']]], ['unpackunorm2x8',['unpackUnorm2x8',['../a00298.html#ga637cbe3913dd95c6e7b4c99c61bd611f',1,'glm']]], ['unpackunorm3x10_5f1x2',['unpackUnorm3x10_1x2',['../a00298.html#ga5156d3060355fe332865da2c7f78815f',1,'glm']]], ['unpackunorm3x5_5f1x1',['unpackUnorm3x5_1x1',['../a00298.html#ga5ff95ff5bc16f396432ab67243dbae4d',1,'glm']]], ['unpackunorm4x16',['unpackUnorm4x16',['../a00298.html#ga2ae149c5d2473ac1e5f347bb654a242d',1,'glm']]], ['unpackunorm4x4',['unpackUnorm4x4',['../a00298.html#gac58ee89d0e224bb6df5e8bbb18843a2d',1,'glm']]], ['unpackunorm4x8',['unpackUnorm4x8',['../a00372.html#ga7f903259150b67e9466f5f8edffcd197',1,'glm']]], ['unproject',['unProject',['../a00245.html#ga36641e5d60f994e01c3d8f56b10263d2',1,'glm']]], ['unprojectno',['unProjectNO',['../a00245.html#gae089ba9fc150ff69c252a20e508857b5',1,'glm']]], ['unprojectzo',['unProjectZO',['../a00245.html#gade5136413ce530f8e606124d570fba32',1,'glm']]], ['uround',['uround',['../a00292.html#ga6715b9d573972a0f7763d30d45bcaec4',1,'glm']]], ['usubborrow',['usubBorrow',['../a00370.html#gae3316ba1229ad9b9f09480833321b053',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_13.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_13.js ================================================ var searchData= [ ['value_5fptr',['value_ptr',['../a00305.html#ga1c64669e1ba1160ad9386e43dc57569a',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_14.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_14.js ================================================ var searchData= [ ['wrapangle',['wrapAngle',['../a00325.html#ga069527c6dbd64f53435b8ebc4878b473',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_15.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_15.js ================================================ var searchData= [ ['yaw',['yaw',['../a00299.html#ga8da38cdfdc452dafa660c2f46506bad5',1,'glm']]], ['yawpitchroll',['yawPitchRoll',['../a00319.html#gae6aa26ccb020d281b449619e419a609e',1,'glm']]], ['ycocg2rgb',['YCoCg2rgb',['../a00313.html#ga163596b804c7241810b2534a99eb1343',1,'glm']]], ['ycocgr2rgb',['YCoCgR2rgb',['../a00313.html#gaf8d30574c8576838097d8e20c295384a',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_16.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_16.js ================================================ var searchData= [ ['zero',['zero',['../a00290.html#ga788f5a421fc0f40a1296ebc094cbaa8a',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_2.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_2.js ================================================ var searchData= [ ['catmullrom',['catmullRom',['../a00358.html#ga8119c04f8210fd0d292757565cd6918d',1,'glm']]], ['ceil',['ceil',['../a00241.html#gafb9d2a645a23aca12d4d6de0104b7657',1,'glm']]], ['ceilmultiple',['ceilMultiple',['../a00302.html#ga1d89ac88582aaf4d5dfa5feb4a376fd4',1,'glm::ceilMultiple(genType v, genType Multiple)'],['../a00302.html#gab77fdcc13f8e92d2e0b1b7d7aeab8e9d',1,'glm::ceilMultiple(vec< L, T, Q > const &v, vec< L, T, Q > const &Multiple)']]], ['ceilpoweroftwo',['ceilPowerOfTwo',['../a00302.html#ga5c3ef36ae32aa4271f1544f92bd578b6',1,'glm::ceilPowerOfTwo(genIUType v)'],['../a00302.html#gab53d4a97c0d3e297be5f693cdfdfe5d2',1,'glm::ceilPowerOfTwo(vec< L, T, Q > const &v)']]], ['circulareasein',['circularEaseIn',['../a00318.html#ga34508d4b204a321ec26d6086aa047997',1,'glm']]], ['circulareaseinout',['circularEaseInOut',['../a00318.html#ga0c1027637a5b02d4bb3612aa12599d69',1,'glm']]], ['circulareaseout',['circularEaseOut',['../a00318.html#ga26fefde9ced9b72745fe21f1a3fe8da7',1,'glm']]], ['circularrand',['circularRand',['../a00300.html#ga9dd05c36025088fae25b97c869e88517',1,'glm']]], ['clamp',['clamp',['../a00241.html#ga7cd77683da6361e297c56443fc70806d',1,'glm::clamp(genType x, genType minVal, genType maxVal)'],['../a00241.html#gafba2e0674deb5953878d89483cd6323d',1,'glm::clamp(vec< L, T, Q > const &x, T minVal, T maxVal)'],['../a00241.html#gaa0f2f12e9108b09e22a3f0b2008a0b5d',1,'glm::clamp(vec< L, T, Q > const &x, vec< L, T, Q > const &minVal, vec< L, T, Q > const &maxVal)'],['../a00369.html#ga6c0cc6bd1d67ea1008d2592e998bad33',1,'glm::clamp(genType const &Texcoord)']]], ['closebounded',['closeBounded',['../a00314.html#gab7d89c14c48ad01f720fb5daf8813161',1,'glm']]], 
['closestpointonline',['closestPointOnLine',['../a00310.html#ga36529c278ef716986151d58d151d697d',1,'glm::closestPointOnLine(vec< 3, T, Q > const &point, vec< 3, T, Q > const &a, vec< 3, T, Q > const &b)'],['../a00310.html#ga55bcbcc5fc06cb7ff7bc7a6e0e155eb0',1,'glm::closestPointOnLine(vec< 2, T, Q > const &point, vec< 2, T, Q > const &a, vec< 2, T, Q > const &b)']]], ['colmajor2',['colMajor2',['../a00338.html#gaaff72f11286e59a4a88ed21a347f284c',1,'glm::colMajor2(vec< 2, T, Q > const &v1, vec< 2, T, Q > const &v2)'],['../a00338.html#gafc25fd44196c92b1397b127aec1281ab',1,'glm::colMajor2(mat< 2, 2, T, Q > const &m)']]], ['colmajor3',['colMajor3',['../a00338.html#ga1e25b72b085087740c92f5c70f3b051f',1,'glm::colMajor3(vec< 3, T, Q > const &v1, vec< 3, T, Q > const &v2, vec< 3, T, Q > const &v3)'],['../a00338.html#ga86bd0656e787bb7f217607572590af27',1,'glm::colMajor3(mat< 3, 3, T, Q > const &m)']]], ['colmajor4',['colMajor4',['../a00338.html#gaf4aa6c7e17bfce41a6c13bf6469fab05',1,'glm::colMajor4(vec< 4, T, Q > const &v1, vec< 4, T, Q > const &v2, vec< 4, T, Q > const &v3, vec< 4, T, Q > const &v4)'],['../a00338.html#gaf3f9511c366c20ba2e4a64c9e4cec2b3',1,'glm::colMajor4(mat< 4, 4, T, Q > const &m)']]], ['column',['column',['../a00293.html#ga96022eb0d3fae39d89fc7a954e59b374',1,'glm::column(genType const &m, length_t index)'],['../a00293.html#ga9e757377523890e8b80c5843dbe4dd15',1,'glm::column(genType const &m, length_t index, typename genType::col_type const &x)']]], ['compadd',['compAdd',['../a00316.html#gaf71833350e15e74d31cbf8a3e7f27051',1,'glm']]], ['compmax',['compMax',['../a00316.html#gabfa4bb19298c8c73d4217ba759c496b6',1,'glm']]], ['compmin',['compMin',['../a00316.html#gab5d0832b5c7bb01b8d7395973bfb1425',1,'glm']]], ['compmul',['compMul',['../a00316.html#gae8ab88024197202c9479d33bdc5a8a5d',1,'glm']]], ['compnormalize',['compNormalize',['../a00316.html#ga8f2b81ada8515875e58cb1667b6b9908',1,'glm']]], 
['compscale',['compScale',['../a00316.html#ga80abc2980d65d675f435d178c36880eb',1,'glm']]], ['conjugate',['conjugate',['../a00248.html#ga10d7bda73201788ac2ab28cd8d0d409b',1,'glm']]], ['convertd65xyztod50xyz',['convertD65XYZToD50XYZ',['../a00311.html#gad12f4f65022b2c80e33fcba2ced0dc48',1,'glm']]], ['convertd65xyztolinearsrgb',['convertD65XYZToLinearSRGB',['../a00311.html#ga5265386fc3ac29e4c580d37ed470859c',1,'glm']]], ['convertlinearsrgbtod50xyz',['convertLinearSRGBToD50XYZ',['../a00311.html#ga1522ba180e3d83d554a734056da031f9',1,'glm']]], ['convertlinearsrgbtod65xyz',['convertLinearSRGBToD65XYZ',['../a00311.html#gaf9e130d9d4ccf51cc99317de7449f369',1,'glm']]], ['convertlineartosrgb',['convertLinearToSRGB',['../a00289.html#ga42239e7b3da900f7ef37cec7e2476579',1,'glm::convertLinearToSRGB(vec< L, T, Q > const &ColorLinear)'],['../a00289.html#gaace0a21167d13d26116c283009af57f6',1,'glm::convertLinearToSRGB(vec< L, T, Q > const &ColorLinear, T Gamma)']]], ['convertsrgbtolinear',['convertSRGBToLinear',['../a00289.html#ga16c798b7a226b2c3079dedc55083d187',1,'glm::convertSRGBToLinear(vec< L, T, Q > const &ColorSRGB)'],['../a00289.html#gad1b91f27a9726c9cb403f9fee6e2e200',1,'glm::convertSRGBToLinear(vec< L, T, Q > const &ColorSRGB, T Gamma)']]], ['cos',['cos',['../a00373.html#ga6a41efc740e3b3c937447d3a6284130e',1,'glm']]], ['cosh',['cosh',['../a00373.html#ga4e260e372742c5f517aca196cf1e62b3',1,'glm']]], ['cot',['cot',['../a00301.html#ga3a7b517a95bbd3ad74da3aea87a66314',1,'glm']]], ['coth',['coth',['../a00301.html#ga6b8b770eb7198e4dea59d52e6db81442',1,'glm']]], ['cross',['cross',['../a00254.html#ga755beaa929c75751dee646cccba37e4c',1,'glm::cross(qua< T, Q > const &q1, qua< T, Q > const &q2)'],['../a00279.html#gaeeec0794212fe84fc9d261de067c9587',1,'glm::cross(vec< 3, T, Q > const &x, vec< 3, T, Q > const &y)'],['../a00322.html#gac36e72b934ea6a9dd313772d7e78fa93',1,'glm::cross(vec< 2, T, Q > const &v, vec< 2, T, Q > const 
&u)'],['../a00352.html#ga2f32f970411c44cdd38bb98960198385',1,'glm::cross(qua< T, Q > const &q, vec< 3, T, Q > const &v)'],['../a00352.html#ga9f5f77255756e5668dfee7f0d07ed021',1,'glm::cross(vec< 3, T, Q > const &v, qua< T, Q > const &q)']]], ['csc',['csc',['../a00301.html#ga59dd0005b6474eea48af743b4f14ebbb',1,'glm']]], ['csch',['csch',['../a00301.html#ga6d95843ff3ca6472ab399ba171d290a0',1,'glm']]], ['cubic',['cubic',['../a00358.html#ga6b867eb52e2fc933d2e0bf26aabc9a70',1,'glm']]], ['cubiceasein',['cubicEaseIn',['../a00318.html#gaff52f746102b94864d105563ba8895ae',1,'glm']]], ['cubiceaseinout',['cubicEaseInOut',['../a00318.html#ga55134072b42d75452189321d4a2ad91c',1,'glm']]], ['cubiceaseout',['cubicEaseOut',['../a00318.html#ga40d746385d8bcc5973f5bc6a2340ca91',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_3.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_3.js ================================================ var searchData= [ ['decompose',['decompose',['../a00335.html#gac0e342656ba09a9bc97c57182ba73124',1,'glm']]], ['degrees',['degrees',['../a00373.html#ga8faec9e303538065911ba8b3caf7326b',1,'glm']]], ['derivedeuleranglex',['derivedEulerAngleX',['../a00319.html#ga994b8186b3b80d91cf90bc403164692f',1,'glm']]], ['derivedeulerangley',['derivedEulerAngleY',['../a00319.html#ga0a4c56ecce7abcb69508ebe6313e9d10',1,'glm']]], ['derivedeuleranglez',['derivedEulerAngleZ',['../a00319.html#gae8b397348201c42667be983ba3f344df',1,'glm']]], ['determinant',['determinant',['../a00371.html#gad7928795124768e058f99dce270f5c8d',1,'glm']]], ['diagonal2x2',['diagonal2x2',['../a00339.html#ga58a32a2beeb2478dae2a721368cdd4ac',1,'glm']]], ['diagonal2x3',['diagonal2x3',['../a00339.html#gab69f900206a430e2875a5a073851e175',1,'glm']]], ['diagonal2x4',['diagonal2x4',['../a00339.html#ga30b4dbfed60a919d66acc8a63bcdc549',1,'glm']]], ['diagonal3x2',['diagonal3x2',['../a00339.html#ga832c805d5130d28ad76236958d15b47d',1,'glm']]], ['diagonal3x3',['diagonal3x3',['../a00339.html#ga5487ff9cdbc8e04d594adef1bcb16ee0',1,'glm']]], ['diagonal3x4',['diagonal3x4',['../a00339.html#gad7551139cff0c4208d27f0ad3437833e',1,'glm']]], ['diagonal4x2',['diagonal4x2',['../a00339.html#gacb8969e6543ba775c6638161a37ac330',1,'glm']]], ['diagonal4x3',['diagonal4x3',['../a00339.html#gae235def5049d6740f0028433f5e13f90',1,'glm']]], ['diagonal4x4',['diagonal4x4',['../a00339.html#ga0b4cd8dea436791b072356231ee8578f',1,'glm']]], ['diskrand',['diskRand',['../a00300.html#gaa0b18071f3f97dbf8bcf6f53c6fe5f73',1,'glm']]], ['distance',['distance',['../a00279.html#gaa68de6c53e20dfb2dac2d20197562e3f',1,'glm']]], ['distance2',['distance2',['../a00343.html#ga85660f1b79f66c09c7b5a6f80e68c89f',1,'glm']]], 
['dot',['dot',['../a00254.html#ga84865a56acb8fbd7bc4f5c0b928e3cfc',1,'glm::dot(qua< T, Q > const &x, qua< T, Q > const &y)'],['../a00279.html#gaad6c5d9d39bdc0bf43baf1b22e147a0a',1,'glm::dot(vec< L, T, Q > const &x, vec< L, T, Q > const &y)']]], ['dual_5fquat_5fidentity',['dual_quat_identity',['../a00317.html#ga0b35c0e30df8a875dbaa751e0bd800e0',1,'glm']]], ['dualquat_5fcast',['dualquat_cast',['../a00317.html#gac4064ff813759740201765350eac4236',1,'glm::dualquat_cast(mat< 2, 4, T, Q > const &x)'],['../a00317.html#ga91025ebdca0f4ea54da08497b00e8c84',1,'glm::dualquat_cast(mat< 3, 4, T, Q > const &x)']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_4.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_4.js ================================================ var searchData= [ ['e',['e',['../a00290.html#ga4b7956eb6e2fbedfc7cf2e46e85c5139',1,'glm']]], ['elasticeasein',['elasticEaseIn',['../a00318.html#ga230918eccee4e113d10ec5b8cdc58695',1,'glm']]], ['elasticeaseinout',['elasticEaseInOut',['../a00318.html#ga2db4ac8959559b11b4029e54812908d6',1,'glm']]], ['elasticeaseout',['elasticEaseOut',['../a00318.html#gace9c9d1bdf88bf2ab1e7cdefa54c7365',1,'glm']]], ['epsilon',['epsilon',['../a00259.html#ga2a1e57fc5592b69cfae84174cbfc9429',1,'glm']]], ['epsilonequal',['epsilonEqual',['../a00291.html#ga91b417866cafadd076004778217a1844',1,'glm::epsilonEqual(vec< L, T, Q > const &x, vec< L, T, Q > const &y, T const &epsilon)'],['../a00291.html#gaa7f227999ca09e7ca994e8b35aba47bb',1,'glm::epsilonEqual(genType const &x, genType const &y, genType const &epsilon)']]], ['epsilonnotequal',['epsilonNotEqual',['../a00291.html#gaf840d33b9a5261ec78dcd5125743b025',1,'glm::epsilonNotEqual(vec< L, T, Q > const &x, vec< L, T, Q > const &y, T const &epsilon)'],['../a00291.html#ga50a92103fb0cbd796908e1bf20c79aaf',1,'glm::epsilonNotEqual(genType const &x, genType const &y, genType const &epsilon)']]], ['equal',['equal',['../a00246.html#ga27e90dcb7941c9b70e295dc3f6f6369f',1,'glm::equal(mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y)'],['../a00246.html#gaf5d687d70d11708b68c36c6db5777040',1,'glm::equal(mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, T epsilon)'],['../a00246.html#gafa6a053e81179fa4292b35651c83c3fb',1,'glm::equal(mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, vec< C, T, Q > const &epsilon)'],['../a00246.html#gab3a93f19e72e9141f50527c9de21d0c0',1,'glm::equal(mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, int ULPs)'],['../a00246.html#ga5305af376173f1902719fa309bbae671',1,'glm::equal(mat< C, R, T, Q > const &x, 
mat< C, R, T, Q > const &y, vec< C, int, Q > const &ULPs)'],['../a00255.html#gad7827af0549504ff1cd6a359786acc7a',1,'glm::equal(qua< T, Q > const &x, qua< T, Q > const &y)'],['../a00255.html#gaa001eecb91106463169a8e5ef1577b39',1,'glm::equal(qua< T, Q > const &x, qua< T, Q > const &y, T epsilon)'],['../a00275.html#ga2ac7651a2fa7354f2da610dbd50d28e2',1,'glm::equal(vec< L, T, Q > const &x, vec< L, T, Q > const &y, T epsilon)'],['../a00275.html#ga37d261a65f69babc82cec2ae1af7145f',1,'glm::equal(vec< L, T, Q > const &x, vec< L, T, Q > const &y, vec< L, T, Q > const &epsilon)'],['../a00275.html#ga2b46cb50911e97b32f4cd743c2c69771',1,'glm::equal(vec< L, T, Q > const &x, vec< L, T, Q > const &y, int ULPs)'],['../a00275.html#ga7da2b8605be7f245b39cb6fbf6d9d581',1,'glm::equal(vec< L, T, Q > const &x, vec< L, T, Q > const &y, vec< L, int, Q > const &ULPs)'],['../a00374.html#gab4c5cfdaa70834421397a85aa83ad946',1,'glm::equal(vec< L, T, Q > const &x, vec< L, T, Q > const &y)']]], ['euclidean',['euclidean',['../a00350.html#ga1821d5b3324201e60a9e2823d0b5d0c8',1,'glm']]], ['euler',['euler',['../a00290.html#gad8fe2e6f90bce9d829e9723b649fbd42',1,'glm']]], ['eulerangles',['eulerAngles',['../a00299.html#gaf4dd967dead22dd932fc7460ceecb03f',1,'glm']]], ['euleranglex',['eulerAngleX',['../a00319.html#gafba6282e4ed3ff8b5c75331abfba3489',1,'glm']]], ['euleranglexy',['eulerAngleXY',['../a00319.html#ga64036577ee17a2d24be0dbc05881d4e2',1,'glm']]], ['euleranglexyx',['eulerAngleXYX',['../a00319.html#ga29bd0787a28a6648159c0d6e69706066',1,'glm']]], ['euleranglexyz',['eulerAngleXYZ',['../a00319.html#ga1975e0f0e9bed7f716dc9946da2ab645',1,'glm']]], ['euleranglexz',['eulerAngleXZ',['../a00319.html#gaa39bd323c65c2fc0a1508be33a237ce9',1,'glm']]], ['euleranglexzx',['eulerAngleXZX',['../a00319.html#ga60171c79a17aec85d7891ae1d1533ec9',1,'glm']]], ['euleranglexzy',['eulerAngleXZY',['../a00319.html#ga996dce12a60d8a674ba6737a535fa910',1,'glm']]], 
['eulerangley',['eulerAngleY',['../a00319.html#gab84bf4746805fd69b8ecbb230e3974c5',1,'glm']]], ['eulerangleyx',['eulerAngleYX',['../a00319.html#ga4f57e6dd25c3cffbbd4daa6ef3f4486d',1,'glm']]], ['eulerangleyxy',['eulerAngleYXY',['../a00319.html#ga750fba9894117f87bcc529d7349d11de',1,'glm']]], ['eulerangleyxz',['eulerAngleYXZ',['../a00319.html#gab8ba99a9814f6d9edf417b6c6d5b0c10',1,'glm']]], ['eulerangleyz',['eulerAngleYZ',['../a00319.html#ga220379e10ac8cca55e275f0c9018fed9',1,'glm']]], ['eulerangleyzx',['eulerAngleYZX',['../a00319.html#ga08bef16357b8f9b3051b3dcaec4b7848',1,'glm']]], ['eulerangleyzy',['eulerAngleYZY',['../a00319.html#ga5e5e40abc27630749b42b3327c76d6e4',1,'glm']]], ['euleranglez',['eulerAngleZ',['../a00319.html#ga5b3935248bb6c3ec6b0d9297d406e251',1,'glm']]], ['euleranglezx',['eulerAngleZX',['../a00319.html#ga483903115cd4059228961046a28d69b5',1,'glm']]], ['euleranglezxy',['eulerAngleZXY',['../a00319.html#gab4505c54d2dd654df4569fd1f04c43aa',1,'glm']]], ['euleranglezxz',['eulerAngleZXZ',['../a00319.html#ga178f966c52b01e4d65e31ebd007e3247',1,'glm']]], ['euleranglezy',['eulerAngleZY',['../a00319.html#ga400b2bd5984999efab663f3a68e1d020',1,'glm']]], ['euleranglezyx',['eulerAngleZYX',['../a00319.html#ga2e61f1e39069c47530acab9167852dd6',1,'glm']]], ['euleranglezyz',['eulerAngleZYZ',['../a00319.html#gacd795f1dbecaf74974f9c76bbcca6830',1,'glm']]], ['exp',['exp',['../a00242.html#ga071566cadc7505455e611f2a0353f4d4',1,'glm::exp(vec< L, T, Q > const &v)'],['../a00256.html#gaab2d37ef7265819f1d2939b9dc2c52ac',1,'glm::exp(qua< T, Q > const &q)']]], ['exp2',['exp2',['../a00242.html#gaff17ace6b579a03bf223ed4d1ed2cd16',1,'glm']]], ['exponentialeasein',['exponentialEaseIn',['../a00318.html#ga7f24ee9219ab4c84dc8de24be84c1e3c',1,'glm']]], ['exponentialeaseinout',['exponentialEaseInOut',['../a00318.html#ga232fb6dc093c5ce94bee105ff2947501',1,'glm']]], ['exponentialeaseout',['exponentialEaseOut',['../a00318.html#ga517f2bcfd15bc2c25c466ae50808efc3',1,'glm']]], 
['extend',['extend',['../a00320.html#ga8140caae613b0f847ab0d7175dc03a37',1,'glm']]], ['extracteuleranglexyx',['extractEulerAngleXYX',['../a00319.html#gaf1077a72171d0f3b08f022ab5ff88af7',1,'glm']]], ['extracteuleranglexyz',['extractEulerAngleXYZ',['../a00319.html#gacea701562f778c1da4d3a0a1cf091000',1,'glm']]], ['extracteuleranglexzx',['extractEulerAngleXZX',['../a00319.html#gacf0bc6c031f25fa3ee0055b62c8260d0',1,'glm']]], ['extracteuleranglexzy',['extractEulerAngleXZY',['../a00319.html#gabe5a65d8eb1cd873c8de121cce1a15ed',1,'glm']]], ['extracteulerangleyxy',['extractEulerAngleYXY',['../a00319.html#gaab8868556361a190db94374e9983ed39',1,'glm']]], ['extracteulerangleyxz',['extractEulerAngleYXZ',['../a00319.html#gaf0937518e63037335a0e8358b6f053c5',1,'glm']]], ['extracteulerangleyzx',['extractEulerAngleYZX',['../a00319.html#ga9049b78466796c0de2971756e25b93d3',1,'glm']]], ['extracteulerangleyzy',['extractEulerAngleYZY',['../a00319.html#ga11dad972c109e4bf8694c915017c44a6',1,'glm']]], ['extracteuleranglezxy',['extractEulerAngleZXY',['../a00319.html#ga81fbbca2ba0c778b9662d5355b4e2363',1,'glm']]], ['extracteuleranglezxz',['extractEulerAngleZXZ',['../a00319.html#ga59359fef9bad92afaca55e193f91e702',1,'glm']]], ['extracteuleranglezyx',['extractEulerAngleZYX',['../a00319.html#ga2d6c11a4abfa60c565483cee2d3f7665',1,'glm']]], ['extracteuleranglezyz',['extractEulerAngleZYZ',['../a00319.html#gafdfa880a64b565223550c2d3938b1aeb',1,'glm']]], ['extractmatrixrotation',['extractMatrixRotation',['../a00337.html#gabbc1c7385a145f04b5c54228965df145',1,'glm']]], ['extractrealcomponent',['extractRealComponent',['../a00352.html#ga321953c1b2e7befe6f5dcfddbfc6b76b',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_5.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_5.js ================================================ var searchData= [ ['faceforward',['faceforward',['../a00279.html#ga7aed0a36c738169402404a3a5d54e43b',1,'glm']]], ['factorial',['factorial',['../a00330.html#ga8cbd3120905f398ec321b5d1836e08fb',1,'glm']]], ['fastacos',['fastAcos',['../a00325.html#ga9721d63356e5d94fdc4b393a426ab26b',1,'glm']]], ['fastasin',['fastAsin',['../a00325.html#ga562cb62c51fbfe7fac7db0bce706b81f',1,'glm']]], ['fastatan',['fastAtan',['../a00325.html#ga8d197c6ef564f5e5d59af3b3f8adcc2c',1,'glm::fastAtan(T y, T x)'],['../a00325.html#gae25de86a968490ff56856fa425ec9d30',1,'glm::fastAtan(T angle)']]], ['fastcos',['fastCos',['../a00325.html#gab34c8b45c23c0165a64dcecfcc3b302a',1,'glm']]], ['fastdistance',['fastDistance',['../a00324.html#gaac333418d0c4e0cc6d3d219ed606c238',1,'glm::fastDistance(genType x, genType y)'],['../a00324.html#ga42d3e771fa7cb3c60d828e315829df19',1,'glm::fastDistance(vec< L, T, Q > const &x, vec< L, T, Q > const &y)']]], ['fastexp',['fastExp',['../a00323.html#gaa3180ac8f96ab37ab96e0cacaf608e10',1,'glm::fastExp(T x)'],['../a00323.html#ga3ba6153aec6bd74628f8b00530aa8d58',1,'glm::fastExp(vec< L, T, Q > const &x)']]], ['fastexp2',['fastExp2',['../a00323.html#ga0af50585955eb14c60bb286297fabab2',1,'glm::fastExp2(T x)'],['../a00323.html#gacaaed8b67d20d244b7de217e7816c1b6',1,'glm::fastExp2(vec< L, T, Q > const &x)']]], ['fastinversesqrt',['fastInverseSqrt',['../a00324.html#ga7f081b14d9c7035c8714eba5f7f75a8f',1,'glm::fastInverseSqrt(genType x)'],['../a00324.html#gadcd7be12b1e5ee182141359d4c45dd24',1,'glm::fastInverseSqrt(vec< L, T, Q > const &x)']]], ['fastlength',['fastLength',['../a00324.html#gafe697d6287719538346bbdf8b1367c59',1,'glm::fastLength(genType x)'],['../a00324.html#ga90f66be92ef61e705c005e7b3209edb8',1,'glm::fastLength(vec< L, T, Q > const &x)']]], 
['fastlog',['fastLog',['../a00323.html#gae1bdc97b7f96a600e29c753f1cd4388a',1,'glm::fastLog(T x)'],['../a00323.html#ga937256993a7219e73f186bb348fe6be8',1,'glm::fastLog(vec< L, T, Q > const &x)']]], ['fastlog2',['fastLog2',['../a00323.html#ga6e98118685f6dc9e05fbb13dd5e5234e',1,'glm::fastLog2(T x)'],['../a00323.html#ga7562043539194ccc24649f8475bc5584',1,'glm::fastLog2(vec< L, T, Q > const &x)']]], ['fastmix',['fastMix',['../a00352.html#ga264e10708d58dd0ff53b7902a2bd2561',1,'glm']]], ['fastnormalize',['fastNormalize',['../a00324.html#ga3b02c1d6e0c754144e2f1e110bf9f16c',1,'glm']]], ['fastnormalizedot',['fastNormalizeDot',['../a00345.html#ga2746fb9b5bd22b06b2f7c8babba5de9e',1,'glm']]], ['fastpow',['fastPow',['../a00323.html#ga5340e98a11fcbbd936ba6e983a154d50',1,'glm::fastPow(genType x, genType y)'],['../a00323.html#ga15325a8ed2d1c4ed2412c4b3b3927aa2',1,'glm::fastPow(vec< L, T, Q > const &x, vec< L, T, Q > const &y)'],['../a00323.html#ga7f2562db9c3e02ae76169c36b086c3f6',1,'glm::fastPow(genTypeT x, genTypeU y)'],['../a00323.html#ga1abe488c0829da5b9de70ac64aeaa7e5',1,'glm::fastPow(vec< L, T, Q > const &x)']]], ['fastsin',['fastSin',['../a00325.html#ga0aab3257bb3b628d10a1e0483e2c6915',1,'glm']]], ['fastsqrt',['fastSqrt',['../a00324.html#ga6c460e9414a50b2fc455c8f64c86cdc9',1,'glm::fastSqrt(genType x)'],['../a00324.html#gae83f0c03614f73eae5478c5b6274ee6d',1,'glm::fastSqrt(vec< L, T, Q > const &x)']]], ['fasttan',['fastTan',['../a00325.html#gaf29b9c1101a10007b4f79ee89df27ba2',1,'glm']]], ['fclamp',['fclamp',['../a00321.html#ga1e28539d3a46965ed9ef92ec7cb3b18a',1,'glm::fclamp(genType x, genType minVal, genType maxVal)'],['../a00321.html#ga60796d08903489ee185373593bc16b9d',1,'glm::fclamp(vec< L, T, Q > const &x, T minVal, T maxVal)'],['../a00321.html#ga5c15fa4709763c269c86c0b8b3aa2297',1,'glm::fclamp(vec< L, T, Q > const &x, vec< L, T, Q > const &minVal, vec< L, T, Q > const &maxVal)']]], 
['findlsb',['findLSB',['../a00370.html#gaf74c4d969fa34ab8acb9d390f5ca5274',1,'glm::findLSB(genIUType x)'],['../a00370.html#ga4454c0331d6369888c28ab677f4810c7',1,'glm::findLSB(vec< L, T, Q > const &v)']]], ['findmsb',['findMSB',['../a00370.html#ga7e4a794d766861c70bc961630f8ef621',1,'glm::findMSB(genIUType x)'],['../a00370.html#ga39ac4d52028bb6ab08db5ad6562c2872',1,'glm::findMSB(vec< L, T, Q > const &v)']]], ['findnsb',['findNSB',['../a00261.html#ga2777901e41ad6e1e9d0ad6cc855d1075',1,'glm::findNSB(genIUType x, int significantBitCount)'],['../a00274.html#gaff61eca266da315002a3db92ff0dd604',1,'glm::findNSB(vec< L, T, Q > const &Source, vec< L, int, Q > SignificantBitCount)']]], ['fliplr',['fliplr',['../a00336.html#gaf39f4e5f78eb29c1a90277d45b9b3feb',1,'glm']]], ['flipud',['flipud',['../a00336.html#ga85003371f0ba97380dd25e8905de1870',1,'glm']]], ['floatbitstoint',['floatBitsToInt',['../a00241.html#ga1425c1c3160ec51214b03a0469a3013d',1,'glm::floatBitsToInt(float const &v)'],['../a00241.html#ga99f7d62f78ac5ea3b49bae715c9488ed',1,'glm::floatBitsToInt(vec< L, float, Q > const &v)']]], ['floatbitstouint',['floatBitsToUint',['../a00241.html#ga70e0271c34af52f3100c7960e18c3f2b',1,'glm::floatBitsToUint(float const &v)'],['../a00241.html#ga49418ba4c8a60fbbb5d57b705f3e26db',1,'glm::floatBitsToUint(vec< L, float, Q > const &v)']]], ['floor',['floor',['../a00241.html#gaa9d0742639e85b29c7c5de11cfd6840d',1,'glm']]], ['floor_5flog2',['floor_log2',['../a00330.html#ga7011b4e1c1e1ed492149b028feacc00e',1,'glm']]], ['floormultiple',['floorMultiple',['../a00302.html#ga2ffa3cd5f2ea746ee1bf57c46da6315e',1,'glm::floorMultiple(genType v, genType Multiple)'],['../a00302.html#gacdd8901448f51f0b192380e422fae3e4',1,'glm::floorMultiple(vec< L, T, Q > const &v, vec< L, T, Q > const &Multiple)']]], ['floorpoweroftwo',['floorPowerOfTwo',['../a00302.html#gafe273a57935d04c9db677bf67f9a71f4',1,'glm::floorPowerOfTwo(genIUType 
v)'],['../a00302.html#gaf0d591a8fca8ddb9289cdeb44b989c2d',1,'glm::floorPowerOfTwo(vec< L, T, Q > const &v)']]], ['fma',['fma',['../a00241.html#gad0f444d4b81cc53c3b6edf5aa25078c2',1,'glm']]], ['fmax',['fmax',['../a00258.html#ga36920478565cf608e93064283ce06421',1,'glm::fmax(T a, T b)'],['../a00258.html#ga0007bba71ca451ac70e99d28dfbeaab9',1,'glm::fmax(T a, T b, T C)'],['../a00258.html#ga27e260b1ff4d04c3ad4b864d26cbaf08',1,'glm::fmax(T a, T b, T C, T D)'],['../a00267.html#gad66b6441f7200db16c9f341711733c56',1,'glm::fmax(vec< L, T, Q > const &a, T b)'],['../a00267.html#ga8df4be3f48d6717c40ea788fd30deebf',1,'glm::fmax(vec< L, T, Q > const &a, vec< L, T, Q > const &b)'],['../a00267.html#ga0f04ba924294dae4234ca93ede23229a',1,'glm::fmax(vec< L, T, Q > const &a, vec< L, T, Q > const &b, vec< L, T, Q > const &c)'],['../a00267.html#ga4ed3eb250ccbe17bfe8ded8a6b72d230',1,'glm::fmax(vec< L, T, Q > const &a, vec< L, T, Q > const &b, vec< L, T, Q > const &c, vec< L, T, Q > const &d)'],['../a00321.html#gae5792cb2b51190057e4aea027eb56f81',1,'glm::fmax(genType x, genType y)']]], ['fmin',['fmin',['../a00258.html#ga7b2b438a765e2a62098c79eb212f28f0',1,'glm::fmin(T a, T b)'],['../a00258.html#ga1a95fe4cf5437e8133f1093fe9726a64',1,'glm::fmin(T a, T b, T c)'],['../a00258.html#ga3d6f9c6c16bfd6f38f2c4f8076e8b661',1,'glm::fmin(T a, T b, T c, T d)'],['../a00267.html#gae989203363cff9eab5093630df4fe071',1,'glm::fmin(vec< L, T, Q > const &x, T y)'],['../a00267.html#ga7c42e93cd778c9181d1cdeea4d3e43bd',1,'glm::fmin(vec< L, T, Q > const &x, vec< L, T, Q > const &y)'],['../a00267.html#ga7e62739055b49189d9355471f78fe000',1,'glm::fmin(vec< L, T, Q > const &a, vec< L, T, Q > const &b, vec< L, T, Q > const &c)'],['../a00267.html#ga4a543dd7d22ad1f3b8b839f808a9d93c',1,'glm::fmin(vec< L, T, Q > const &a, vec< L, T, Q > const &b, vec< L, T, Q > const &c, vec< L, T, Q > const &d)'],['../a00321.html#gaa3200559611ac5b9b9ae7283547916a7',1,'glm::fmin(genType x, genType y)']]], 
['fmod',['fmod',['../a00314.html#gae5e80425df9833164ad469e83b475fb4',1,'glm']]], ['four_5fover_5fpi',['four_over_pi',['../a00290.html#ga753950e5140e4ea6a88e4a18ba61dc09',1,'glm']]], ['fract',['fract',['../a00241.html#ga8ba89e40e55ae5cdf228548f9b7639c7',1,'glm::fract(genType x)'],['../a00241.html#ga2df623004f634b440d61e018d62c751b',1,'glm::fract(vec< L, T, Q > const &x)']]], ['frexp',['frexp',['../a00241.html#gaddf5ef73283c171730e0bcc11833fa81',1,'glm']]], ['frustum',['frustum',['../a00243.html#ga0bcd4542e0affc63a0b8c08fcb839ea9',1,'glm']]], ['frustumlh',['frustumLH',['../a00243.html#gae4277c37f61d81da01bc9db14ea90296',1,'glm']]], ['frustumlh_5fno',['frustumLH_NO',['../a00243.html#ga259520cad03b3f8bca9417920035ed01',1,'glm']]], ['frustumlh_5fzo',['frustumLH_ZO',['../a00243.html#ga94218b094862d17798370242680b9030',1,'glm']]], ['frustumno',['frustumNO',['../a00243.html#gae34ec664ad44860bf4b5ba631f0e0e90',1,'glm']]], ['frustumrh',['frustumRH',['../a00243.html#ga4366ab45880c6c5f8b3e8c371ca4b136',1,'glm']]], ['frustumrh_5fno',['frustumRH_NO',['../a00243.html#ga9236c8439f21be186b79c97b588836b9',1,'glm']]], ['frustumrh_5fzo',['frustumRH_ZO',['../a00243.html#ga7654a9227f14d5382786b9fc0eb5692d',1,'glm']]], ['frustumzo',['frustumZO',['../a00243.html#gaa73322e152edf50cf30a6edac342a757',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_6.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_6.js ================================================ var searchData= [ ['gauss',['gauss',['../a00326.html#ga0b50b197ff74261a0fad90f4b8d24702',1,'glm::gauss(T x, T ExpectedValue, T StandardDeviation)'],['../a00326.html#gad19ec8754a83c0b9a8dc16b7e60705ab',1,'glm::gauss(vec< 2, T, Q > const &Coord, vec< 2, T, Q > const &ExpectedValue, vec< 2, T, Q > const &StandardDeviation)']]], ['gaussrand',['gaussRand',['../a00300.html#ga5193a83e49e4fdc5652c084711083574',1,'glm']]], ['glm_5faligned_5ftypedef',['GLM_ALIGNED_TYPEDEF',['../a00364.html#gab5cd5c5fad228b25c782084f1cc30114',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_int8, aligned_lowp_int8, 1)'],['../a00364.html#ga5bb5dd895ef625c1b113f2cf400186b0',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_int16, aligned_lowp_int16, 2)'],['../a00364.html#gac6efa54cf7c6c86f7158922abdb1a430',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_int32, aligned_lowp_int32, 4)'],['../a00364.html#ga6612eb77c8607048e7552279a11eeb5f',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_int64, aligned_lowp_int64, 8)'],['../a00364.html#ga7ddc1848ff2223026db8968ce0c97497',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_int8_t, aligned_lowp_int8_t, 1)'],['../a00364.html#ga22240dd9458b0f8c11fbcc4f48714f68',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_int16_t, aligned_lowp_int16_t, 2)'],['../a00364.html#ga8130ea381d76a2cc34a93ccbb6cf487d',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_int32_t, aligned_lowp_int32_t, 4)'],['../a00364.html#ga7ccb60f3215d293fd62b33b31ed0e7be',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_int64_t, aligned_lowp_int64_t, 8)'],['../a00364.html#gac20d508d2ef5cc95ad3daf083c57ec2a',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_i8, aligned_lowp_i8, 1)'],['../a00364.html#ga50257b48069a31d0c8d9c1f644d267de',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_i16, aligned_lowp_i16, 2)'],['../a00364.html#gaa07e98e67b7a3435c0746018c7a2a839',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_i32, aligned_lowp_i32, 
4)'],['../a00364.html#ga62601fc6f8ca298b77285bedf03faffd',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_i64, aligned_lowp_i64, 8)'],['../a00364.html#gac8cff825951aeb54dd846037113c72db',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_int8, aligned_mediump_int8, 1)'],['../a00364.html#ga78f443d88f438575a62b5df497cdf66b',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_int16, aligned_mediump_int16, 2)'],['../a00364.html#ga0680cd3b5d4e8006985fb41a4f9b57af',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_int32, aligned_mediump_int32, 4)'],['../a00364.html#gad9e5babb1dd3e3531b42c37bf25dd951',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_int64, aligned_mediump_int64, 8)'],['../a00364.html#ga353fd9fa8a9ad952fcabd0d53ad9a6dd',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_int8_t, aligned_mediump_int8_t, 1)'],['../a00364.html#ga2196442c0e5c5e8c77842de388c42521',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_int16_t, aligned_mediump_int16_t, 2)'],['../a00364.html#ga1284488189daf897cf095c5eefad9744',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_int32_t, aligned_mediump_int32_t, 4)'],['../a00364.html#ga73fdc86a539808af58808b7c60a1c4d8',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_int64_t, aligned_mediump_int64_t, 8)'],['../a00364.html#gafafeea923e1983262c972e2b83922d3b',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_i8, aligned_mediump_i8, 1)'],['../a00364.html#ga4b35ca5fe8f55c9d2fe54fdb8d8896f4',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_i16, aligned_mediump_i16, 2)'],['../a00364.html#ga63b882e29170d428463d99c3d630acc6',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_i32, aligned_mediump_i32, 4)'],['../a00364.html#ga8b20507bb048c1edea2d441cc953e6f0',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_i64, aligned_mediump_i64, 8)'],['../a00364.html#ga56c5ca60813027b603c7b61425a0479d',1,'glm::GLM_ALIGNED_TYPEDEF(highp_int8, aligned_highp_int8, 1)'],['../a00364.html#ga7a751b3aff24c0259f4a7357c2969089',1,'glm::GLM_ALIGNED_TYPEDEF(highp_int16, aligned_highp_int16, 2)'],['../a00364.html#ga70cd2144351c556469ee6119e59971fc',1,'glm::GLM_ALIGNED_TYPEDEF(highp_int32, aligned_highp_int32, 
4)'],['../a00364.html#ga46bbf08dc004d8c433041e0b5018a5d3',1,'glm::GLM_ALIGNED_TYPEDEF(highp_int64, aligned_highp_int64, 8)'],['../a00364.html#gab3e10c77a20d1abad2de1c561c7a5c18',1,'glm::GLM_ALIGNED_TYPEDEF(highp_int8_t, aligned_highp_int8_t, 1)'],['../a00364.html#ga968f30319ebeaca9ebcd3a25a8e139fb',1,'glm::GLM_ALIGNED_TYPEDEF(highp_int16_t, aligned_highp_int16_t, 2)'],['../a00364.html#gaae773c28e6390c6aa76f5b678b7098a3',1,'glm::GLM_ALIGNED_TYPEDEF(highp_int32_t, aligned_highp_int32_t, 4)'],['../a00364.html#ga790cfff1ca39d0ed696ffed980809311',1,'glm::GLM_ALIGNED_TYPEDEF(highp_int64_t, aligned_highp_int64_t, 8)'],['../a00364.html#ga8265b91eb23c120a9b0c3e381bc37b96',1,'glm::GLM_ALIGNED_TYPEDEF(highp_i8, aligned_highp_i8, 1)'],['../a00364.html#gae6d384de17588d8edb894fbe06e0d410',1,'glm::GLM_ALIGNED_TYPEDEF(highp_i16, aligned_highp_i16, 2)'],['../a00364.html#ga9c8172b745ee03fc5b2b91c350c2922f',1,'glm::GLM_ALIGNED_TYPEDEF(highp_i32, aligned_highp_i32, 4)'],['../a00364.html#ga77e0dff12aa4020ddc3f8cabbea7b2e6',1,'glm::GLM_ALIGNED_TYPEDEF(highp_i64, aligned_highp_i64, 8)'],['../a00364.html#gabd82b9faa9d4d618dbbe0fc8a1efee63',1,'glm::GLM_ALIGNED_TYPEDEF(int8, aligned_int8, 1)'],['../a00364.html#ga285649744560be21000cfd81bbb5d507',1,'glm::GLM_ALIGNED_TYPEDEF(int16, aligned_int16, 2)'],['../a00364.html#ga07732da630b2deda428ce95c0ecaf3ff',1,'glm::GLM_ALIGNED_TYPEDEF(int32, aligned_int32, 4)'],['../a00364.html#ga1a8da2a8c51f69c07a2e7f473aa420f4',1,'glm::GLM_ALIGNED_TYPEDEF(int64, aligned_int64, 8)'],['../a00364.html#ga848aedf13e2d9738acf0bb482c590174',1,'glm::GLM_ALIGNED_TYPEDEF(int8_t, aligned_int8_t, 1)'],['../a00364.html#gafd2803d39049dd45a37a63931e25d943',1,'glm::GLM_ALIGNED_TYPEDEF(int16_t, aligned_int16_t, 2)'],['../a00364.html#gae553b33349d6da832cf0724f1e024094',1,'glm::GLM_ALIGNED_TYPEDEF(int32_t, aligned_int32_t, 4)'],['../a00364.html#ga16d223a2b3409e812e1d3bd87f0e9e5c',1,'glm::GLM_ALIGNED_TYPEDEF(int64_t, aligned_int64_t, 
8)'],['../a00364.html#ga2de065d2ddfdb366bcd0febca79ae2ad',1,'glm::GLM_ALIGNED_TYPEDEF(i8, aligned_i8, 1)'],['../a00364.html#gabd786bdc20a11c8cb05c92c8212e28d3',1,'glm::GLM_ALIGNED_TYPEDEF(i16, aligned_i16, 2)'],['../a00364.html#gad4aefe56691cdb640c72f0d46d3fb532',1,'glm::GLM_ALIGNED_TYPEDEF(i32, aligned_i32, 4)'],['../a00364.html#ga8fe9745f7de24a8394518152ff9fccdc',1,'glm::GLM_ALIGNED_TYPEDEF(i64, aligned_i64, 8)'],['../a00364.html#gaaad735483450099f7f882d4e3a3569bd',1,'glm::GLM_ALIGNED_TYPEDEF(ivec1, aligned_ivec1, 4)'],['../a00364.html#gac7b6f823802edbd6edbaf70ea25bf068',1,'glm::GLM_ALIGNED_TYPEDEF(ivec2, aligned_ivec2, 8)'],['../a00364.html#ga3e235bcd2b8029613f25b8d40a2d3ef7',1,'glm::GLM_ALIGNED_TYPEDEF(ivec3, aligned_ivec3, 16)'],['../a00364.html#ga50d8a9523968c77f8325b4c9bfbff41e',1,'glm::GLM_ALIGNED_TYPEDEF(ivec4, aligned_ivec4, 16)'],['../a00364.html#ga9ec20fdfb729c702032da9378c79679f',1,'glm::GLM_ALIGNED_TYPEDEF(i8vec1, aligned_i8vec1, 1)'],['../a00364.html#ga25b3fe1d9e8d0a5e86c1949c1acd8131',1,'glm::GLM_ALIGNED_TYPEDEF(i8vec2, aligned_i8vec2, 2)'],['../a00364.html#ga2958f907719d94d8109b562540c910e2',1,'glm::GLM_ALIGNED_TYPEDEF(i8vec3, aligned_i8vec3, 4)'],['../a00364.html#ga1fe6fc032a978f1c845fac9aa0668714',1,'glm::GLM_ALIGNED_TYPEDEF(i8vec4, aligned_i8vec4, 4)'],['../a00364.html#gaa4161e7a496dc96972254143fe873e55',1,'glm::GLM_ALIGNED_TYPEDEF(i16vec1, aligned_i16vec1, 2)'],['../a00364.html#ga9d7cb211ccda69b1c22ddeeb0f3e7aba',1,'glm::GLM_ALIGNED_TYPEDEF(i16vec2, aligned_i16vec2, 4)'],['../a00364.html#gaaee91dd2ab34423bcc11072ef6bd0f02',1,'glm::GLM_ALIGNED_TYPEDEF(i16vec3, aligned_i16vec3, 8)'],['../a00364.html#ga49f047ccaa8b31fad9f26c67bf9b3510',1,'glm::GLM_ALIGNED_TYPEDEF(i16vec4, aligned_i16vec4, 8)'],['../a00364.html#ga904e9c2436bb099397c0823506a0771f',1,'glm::GLM_ALIGNED_TYPEDEF(i32vec1, aligned_i32vec1, 4)'],['../a00364.html#gaf90651cf2f5e7ee2b11cfdc5a6749534',1,'glm::GLM_ALIGNED_TYPEDEF(i32vec2, aligned_i32vec2, 
8)'],['../a00364.html#ga7354a4ead8cb17868aec36b9c30d6010',1,'glm::GLM_ALIGNED_TYPEDEF(i32vec3, aligned_i32vec3, 16)'],['../a00364.html#gad2ecbdea18732163e2636e27b37981ee',1,'glm::GLM_ALIGNED_TYPEDEF(i32vec4, aligned_i32vec4, 16)'],['../a00364.html#ga965b1c9aa1800e93d4abc2eb2b5afcbf',1,'glm::GLM_ALIGNED_TYPEDEF(i64vec1, aligned_i64vec1, 8)'],['../a00364.html#ga1f9e9c2ea2768675dff9bae5cde2d829',1,'glm::GLM_ALIGNED_TYPEDEF(i64vec2, aligned_i64vec2, 16)'],['../a00364.html#gad77c317b7d942322cd5be4c8127b3187',1,'glm::GLM_ALIGNED_TYPEDEF(i64vec3, aligned_i64vec3, 32)'],['../a00364.html#ga716f8ea809bdb11b5b542d8b71aeb04f',1,'glm::GLM_ALIGNED_TYPEDEF(i64vec4, aligned_i64vec4, 32)'],['../a00364.html#gad46f8e9082d5878b1bc04f9c1471cdaa',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_uint8, aligned_lowp_uint8, 1)'],['../a00364.html#ga1246094581af624aca6c7499aaabf801',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_uint16, aligned_lowp_uint16, 2)'],['../a00364.html#ga7a5009a1d0196bbf21dd7518f61f0249',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_uint32, aligned_lowp_uint32, 4)'],['../a00364.html#ga45213fd18b3bb1df391671afefe4d1e7',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_uint64, aligned_lowp_uint64, 8)'],['../a00364.html#ga0ba26b4e3fd9ecbc25358efd68d8a4ca',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_uint8_t, aligned_lowp_uint8_t, 1)'],['../a00364.html#gaf2b58f5fb6d4ec8ce7b76221d3af43e1',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_uint16_t, aligned_lowp_uint16_t, 2)'],['../a00364.html#gadc246401847dcba155f0699425e49dcd',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_uint32_t, aligned_lowp_uint32_t, 4)'],['../a00364.html#gaace64bddf51a9def01498da9a94fb01c',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_uint64_t, aligned_lowp_uint64_t, 8)'],['../a00364.html#gad7bb97c29d664bd86ffb1bed4abc5534',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_u8, aligned_lowp_u8, 1)'],['../a00364.html#ga404bba7785130e0b1384d695a9450b28',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_u16, aligned_lowp_u16, 2)'],['../a00364.html#ga31ba41fd896257536958ec6080203d2a',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_u32, aligned_lowp_u32, 
4)'],['../a00364.html#gacca5f13627f57b3505676e40a6e43e5e',1,'glm::GLM_ALIGNED_TYPEDEF(lowp_u64, aligned_lowp_u64, 8)'],['../a00364.html#ga5faf1d3e70bf33174dd7f3d01d5b883b',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_uint8, aligned_mediump_uint8, 1)'],['../a00364.html#ga727e2bf2c433bb3b0182605860a48363',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_uint16, aligned_mediump_uint16, 2)'],['../a00364.html#ga12566ca66d5962dadb4a5eb4c74e891e',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_uint32, aligned_mediump_uint32, 4)'],['../a00364.html#ga7b66a97a8acaa35c5a377b947318c6bc',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_uint64, aligned_mediump_uint64, 8)'],['../a00364.html#gaa9cde002439b74fa66120a16a9f55fcc',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_uint8_t, aligned_mediump_uint8_t, 1)'],['../a00364.html#ga1ca98c67f7d1e975f7c5202f1da1df1f',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_uint16_t, aligned_mediump_uint16_t, 2)'],['../a00364.html#ga1dc8bc6199d785f235576948d80a597c',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_uint32_t, aligned_mediump_uint32_t, 4)'],['../a00364.html#gad14a0f2ec93519682b73d70b8e401d81',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_uint64_t, aligned_mediump_uint64_t, 8)'],['../a00364.html#gada8b996eb6526dc1ead813bd49539d1b',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_u8, aligned_mediump_u8, 1)'],['../a00364.html#ga28948f6bfb52b42deb9d73ae1ea8d8b0',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_u16, aligned_mediump_u16, 2)'],['../a00364.html#gad6a7c0b5630f89d3f1c5b4ef2919bb4c',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_u32, aligned_mediump_u32, 4)'],['../a00364.html#gaa0fc531cbaa972ac3a0b86d21ef4a7fa',1,'glm::GLM_ALIGNED_TYPEDEF(mediump_u64, aligned_mediump_u64, 8)'],['../a00364.html#ga0ee829f7b754b262bbfe6317c0d678ac',1,'glm::GLM_ALIGNED_TYPEDEF(highp_uint8, aligned_highp_uint8, 1)'],['../a00364.html#ga447848a817a626cae08cedc9778b331c',1,'glm::GLM_ALIGNED_TYPEDEF(highp_uint16, aligned_highp_uint16, 2)'],['../a00364.html#ga6027ae13b2734f542a6e7beee11b8820',1,'glm::GLM_ALIGNED_TYPEDEF(highp_uint32, aligned_highp_uint32, 
4)'],['../a00364.html#ga2aca46c8608c95ef991ee4c332acde5f',1,'glm::GLM_ALIGNED_TYPEDEF(highp_uint64, aligned_highp_uint64, 8)'],['../a00364.html#gaff50b10dd1c48be324fdaffd18e2c7ea',1,'glm::GLM_ALIGNED_TYPEDEF(highp_uint8_t, aligned_highp_uint8_t, 1)'],['../a00364.html#ga9fc4421dbb833d5461e6d4e59dcfde55',1,'glm::GLM_ALIGNED_TYPEDEF(highp_uint16_t, aligned_highp_uint16_t, 2)'],['../a00364.html#ga329f1e2b94b33ba5e3918197030bcf03',1,'glm::GLM_ALIGNED_TYPEDEF(highp_uint32_t, aligned_highp_uint32_t, 4)'],['../a00364.html#ga71e646f7e301aa422328194162c9c998',1,'glm::GLM_ALIGNED_TYPEDEF(highp_uint64_t, aligned_highp_uint64_t, 8)'],['../a00364.html#ga8942e09f479489441a7a5004c6d8cb66',1,'glm::GLM_ALIGNED_TYPEDEF(highp_u8, aligned_highp_u8, 1)'],['../a00364.html#gaab32497d6e4db16ee439dbedd64c5865',1,'glm::GLM_ALIGNED_TYPEDEF(highp_u16, aligned_highp_u16, 2)'],['../a00364.html#gaaadbb34952eca8e3d7fe122c3e167742',1,'glm::GLM_ALIGNED_TYPEDEF(highp_u32, aligned_highp_u32, 4)'],['../a00364.html#ga92024d27c74a3650afb55ec8e024ed25',1,'glm::GLM_ALIGNED_TYPEDEF(highp_u64, aligned_highp_u64, 8)'],['../a00364.html#gabde1d0b4072df35453db76075ab896a6',1,'glm::GLM_ALIGNED_TYPEDEF(uint8, aligned_uint8, 1)'],['../a00364.html#ga06c296c9e398b294c8c9dd2a7693dcbb',1,'glm::GLM_ALIGNED_TYPEDEF(uint16, aligned_uint16, 2)'],['../a00364.html#gacf1744488c96ebd33c9f36ad33b2010a',1,'glm::GLM_ALIGNED_TYPEDEF(uint32, aligned_uint32, 4)'],['../a00364.html#ga3328061a64c20ba59d5f9da24c2cd059',1,'glm::GLM_ALIGNED_TYPEDEF(uint64, aligned_uint64, 8)'],['../a00364.html#gaf6ced36f13bae57f377bafa6f5fcc299',1,'glm::GLM_ALIGNED_TYPEDEF(uint8_t, aligned_uint8_t, 1)'],['../a00364.html#gafbc7fb7847bfc78a339d1d371c915c73',1,'glm::GLM_ALIGNED_TYPEDEF(uint16_t, aligned_uint16_t, 2)'],['../a00364.html#gaa86bc56a73fd8120b1121b5f5e6245ae',1,'glm::GLM_ALIGNED_TYPEDEF(uint32_t, aligned_uint32_t, 4)'],['../a00364.html#ga68c0b9e669060d0eb5ab8c3ddeb483d8',1,'glm::GLM_ALIGNED_TYPEDEF(uint64_t, aligned_uint64_t, 
8)'],['../a00364.html#ga4f3bab577daf3343e99cc005134bce86',1,'glm::GLM_ALIGNED_TYPEDEF(u8, aligned_u8, 1)'],['../a00364.html#ga13a2391339d0790d43b76d00a7611c4f',1,'glm::GLM_ALIGNED_TYPEDEF(u16, aligned_u16, 2)'],['../a00364.html#ga197570e03acbc3d18ab698e342971e8f',1,'glm::GLM_ALIGNED_TYPEDEF(u32, aligned_u32, 4)'],['../a00364.html#ga0f033b21e145a1faa32c62ede5878993',1,'glm::GLM_ALIGNED_TYPEDEF(u64, aligned_u64, 8)'],['../a00364.html#ga509af83527f5cd512e9a7873590663aa',1,'glm::GLM_ALIGNED_TYPEDEF(uvec1, aligned_uvec1, 4)'],['../a00364.html#ga94e86186978c502c6dc0c0d9c4a30679',1,'glm::GLM_ALIGNED_TYPEDEF(uvec2, aligned_uvec2, 8)'],['../a00364.html#ga5cec574686a7f3c8ed24bb195c5e2d0a',1,'glm::GLM_ALIGNED_TYPEDEF(uvec3, aligned_uvec3, 16)'],['../a00364.html#ga47edfdcee9c89b1ebdaf20450323b1d4',1,'glm::GLM_ALIGNED_TYPEDEF(uvec4, aligned_uvec4, 16)'],['../a00364.html#ga5611d6718e3a00096918a64192e73a45',1,'glm::GLM_ALIGNED_TYPEDEF(u8vec1, aligned_u8vec1, 1)'],['../a00364.html#ga19837e6f72b60d994a805ef564c6c326',1,'glm::GLM_ALIGNED_TYPEDEF(u8vec2, aligned_u8vec2, 2)'],['../a00364.html#ga9740cf8e34f068049b42a2753f9601c2',1,'glm::GLM_ALIGNED_TYPEDEF(u8vec3, aligned_u8vec3, 4)'],['../a00364.html#ga8b8588bb221448f5541a858903822a57',1,'glm::GLM_ALIGNED_TYPEDEF(u8vec4, aligned_u8vec4, 4)'],['../a00364.html#ga991abe990c16de26b2129d6bc2f4c051',1,'glm::GLM_ALIGNED_TYPEDEF(u16vec1, aligned_u16vec1, 2)'],['../a00364.html#gac01bb9fc32a1cd76c2b80d030f71df4c',1,'glm::GLM_ALIGNED_TYPEDEF(u16vec2, aligned_u16vec2, 4)'],['../a00364.html#ga09540dbca093793a36a8997e0d4bee77',1,'glm::GLM_ALIGNED_TYPEDEF(u16vec3, aligned_u16vec3, 8)'],['../a00364.html#gaecafb5996f5a44f57e34d29c8670741e',1,'glm::GLM_ALIGNED_TYPEDEF(u16vec4, aligned_u16vec4, 8)'],['../a00364.html#gac6b161a04d2f8408fe1c9d857e8daac0',1,'glm::GLM_ALIGNED_TYPEDEF(u32vec1, aligned_u32vec1, 4)'],['../a00364.html#ga1fa0dfc8feb0fa17dab2acd43e05342b',1,'glm::GLM_ALIGNED_TYPEDEF(u32vec2, aligned_u32vec2, 
8)'],['../a00364.html#ga0019500abbfa9c66eff61ca75eaaed94',1,'glm::GLM_ALIGNED_TYPEDEF(u32vec3, aligned_u32vec3, 16)'],['../a00364.html#ga14fd29d01dae7b08a04e9facbcc18824',1,'glm::GLM_ALIGNED_TYPEDEF(u32vec4, aligned_u32vec4, 16)'],['../a00364.html#gab253845f534a67136f9619843cade903',1,'glm::GLM_ALIGNED_TYPEDEF(u64vec1, aligned_u64vec1, 8)'],['../a00364.html#ga929427a7627940cdf3304f9c050b677d',1,'glm::GLM_ALIGNED_TYPEDEF(u64vec2, aligned_u64vec2, 16)'],['../a00364.html#gae373b6c04fdf9879f33d63e6949c037e',1,'glm::GLM_ALIGNED_TYPEDEF(u64vec3, aligned_u64vec3, 32)'],['../a00364.html#ga53a8a03dca2015baec4584f45b8e9cdc',1,'glm::GLM_ALIGNED_TYPEDEF(u64vec4, aligned_u64vec4, 32)'],['../a00364.html#gab3301bae94ef5bf59fbdd9a24e7d2a01',1,'glm::GLM_ALIGNED_TYPEDEF(float32, aligned_float32, 4)'],['../a00364.html#gada9b0bea273d3ae0286f891533b9568f',1,'glm::GLM_ALIGNED_TYPEDEF(float32_t, aligned_float32_t, 4)'],['../a00364.html#gadbce23b9f23d77bb3884e289a574ebd5',1,'glm::GLM_ALIGNED_TYPEDEF(float32, aligned_f32, 4)'],['../a00364.html#ga75930684ff2233171c573e603f216162',1,'glm::GLM_ALIGNED_TYPEDEF(float64, aligned_float64, 8)'],['../a00364.html#ga6e3a2d83b131336219a0f4c7cbba2a48',1,'glm::GLM_ALIGNED_TYPEDEF(float64_t, aligned_float64_t, 8)'],['../a00364.html#gaa4deaa0dea930c393d55e7a4352b0a20',1,'glm::GLM_ALIGNED_TYPEDEF(float64, aligned_f64, 8)'],['../a00364.html#ga81bc497b2bfc6f80bab690c6ee28f0f9',1,'glm::GLM_ALIGNED_TYPEDEF(vec1, aligned_vec1, 4)'],['../a00364.html#gada3e8f783e9d4b90006695a16c39d4d4',1,'glm::GLM_ALIGNED_TYPEDEF(vec2, aligned_vec2, 8)'],['../a00364.html#gab8d081fac3a38d6f55fa552f32168d32',1,'glm::GLM_ALIGNED_TYPEDEF(vec3, aligned_vec3, 16)'],['../a00364.html#ga12fe7b9769c964c5b48dcfd8b7f40198',1,'glm::GLM_ALIGNED_TYPEDEF(vec4, aligned_vec4, 16)'],['../a00364.html#gaefab04611c7f8fe1fd9be3071efea6cc',1,'glm::GLM_ALIGNED_TYPEDEF(fvec1, aligned_fvec1, 4)'],['../a00364.html#ga2543c05ba19b3bd19d45b1227390c5b4',1,'glm::GLM_ALIGNED_TYPEDEF(fvec2, aligned_fvec2, 
8)'],['../a00364.html#ga009afd727fd657ef33a18754d6d28f60',1,'glm::GLM_ALIGNED_TYPEDEF(fvec3, aligned_fvec3, 16)'],['../a00364.html#ga2f26177e74bfb301a3d0e02ec3c3ef53',1,'glm::GLM_ALIGNED_TYPEDEF(fvec4, aligned_fvec4, 16)'],['../a00364.html#ga309f495a1d6b75ddf195b674b65cb1e4',1,'glm::GLM_ALIGNED_TYPEDEF(f32vec1, aligned_f32vec1, 4)'],['../a00364.html#ga5e185865a2217d0cd47187644683a8c3',1,'glm::GLM_ALIGNED_TYPEDEF(f32vec2, aligned_f32vec2, 8)'],['../a00364.html#gade4458b27b039b9ca34f8ec049f3115a',1,'glm::GLM_ALIGNED_TYPEDEF(f32vec3, aligned_f32vec3, 16)'],['../a00364.html#ga2e8a12c5e6a9c4ae4ddaeda1d1cffe3b',1,'glm::GLM_ALIGNED_TYPEDEF(f32vec4, aligned_f32vec4, 16)'],['../a00364.html#ga3e0f35fa0c626285a8bad41707e7316c',1,'glm::GLM_ALIGNED_TYPEDEF(dvec1, aligned_dvec1, 8)'],['../a00364.html#ga78bfec2f185d1d365ea0a9ef1e3d45b8',1,'glm::GLM_ALIGNED_TYPEDEF(dvec2, aligned_dvec2, 16)'],['../a00364.html#ga01fe6fee6db5df580b6724a7e681f069',1,'glm::GLM_ALIGNED_TYPEDEF(dvec3, aligned_dvec3, 32)'],['../a00364.html#ga687d5b8f551d5af32425c0b2fba15e99',1,'glm::GLM_ALIGNED_TYPEDEF(dvec4, aligned_dvec4, 32)'],['../a00364.html#ga8e842371d46842ff8f1813419ba49d0f',1,'glm::GLM_ALIGNED_TYPEDEF(f64vec1, aligned_f64vec1, 8)'],['../a00364.html#ga32814aa0f19316b43134fc25f2aad2b9',1,'glm::GLM_ALIGNED_TYPEDEF(f64vec2, aligned_f64vec2, 16)'],['../a00364.html#gaf3d3bbc1e93909b689123b085e177a14',1,'glm::GLM_ALIGNED_TYPEDEF(f64vec3, aligned_f64vec3, 32)'],['../a00364.html#ga804c654cead1139bd250f90f9bb01fad',1,'glm::GLM_ALIGNED_TYPEDEF(f64vec4, aligned_f64vec4, 32)'],['../a00364.html#gacce4ac532880b8c7469d3c31974420a1',1,'glm::GLM_ALIGNED_TYPEDEF(mat2, aligned_mat2, 16)'],['../a00364.html#ga0498e0e249a6faddaf96aa55d7f81c3b',1,'glm::GLM_ALIGNED_TYPEDEF(mat3, aligned_mat3, 16)'],['../a00364.html#ga7435d87de82a0d652b35dc5b9cc718d5',1,'glm::GLM_ALIGNED_TYPEDEF(mat4, aligned_mat4, 16)'],['../a00364.html#ga719da577361541a4c43a2dd1d0e361e1',1,'glm::GLM_ALIGNED_TYPEDEF(fmat2x2, aligned_fmat2, 
16)'],['../a00364.html#ga6e7ee4f541e1d7db66cd1a224caacafb',1,'glm::GLM_ALIGNED_TYPEDEF(fmat3x3, aligned_fmat3, 16)'],['../a00364.html#gae5d672d359f2a39f63f98c7975057486',1,'glm::GLM_ALIGNED_TYPEDEF(fmat4x4, aligned_fmat4, 16)'],['../a00364.html#ga6fa2df037dbfc5fe8c8e0b4db8a34953',1,'glm::GLM_ALIGNED_TYPEDEF(fmat2x2, aligned_fmat2x2, 16)'],['../a00364.html#ga0743b4f4f69a3227b82ff58f6abbad62',1,'glm::GLM_ALIGNED_TYPEDEF(fmat2x3, aligned_fmat2x3, 16)'],['../a00364.html#ga1a76b325fdf70f961d835edd182c63dd',1,'glm::GLM_ALIGNED_TYPEDEF(fmat2x4, aligned_fmat2x4, 16)'],['../a00364.html#ga4b4e181cd041ba28c3163e7b8074aef0',1,'glm::GLM_ALIGNED_TYPEDEF(fmat3x2, aligned_fmat3x2, 16)'],['../a00364.html#ga27b13f465abc8a40705698145e222c3f',1,'glm::GLM_ALIGNED_TYPEDEF(fmat3x3, aligned_fmat3x3, 16)'],['../a00364.html#ga2608d19cc275830a6f8c0b6405625a4f',1,'glm::GLM_ALIGNED_TYPEDEF(fmat3x4, aligned_fmat3x4, 16)'],['../a00364.html#ga93f09768241358a287c4cca538f1f7e7',1,'glm::GLM_ALIGNED_TYPEDEF(fmat4x2, aligned_fmat4x2, 16)'],['../a00364.html#ga7c117e3ecca089e10247b1d41d88aff9',1,'glm::GLM_ALIGNED_TYPEDEF(fmat4x3, aligned_fmat4x3, 16)'],['../a00364.html#ga07c75cd04ba42dc37fa3e105f89455c5',1,'glm::GLM_ALIGNED_TYPEDEF(fmat4x4, aligned_fmat4x4, 16)'],['../a00364.html#ga65ff0d690a34a4d7f46f9b2eb51525ee',1,'glm::GLM_ALIGNED_TYPEDEF(f32mat2x2, aligned_f32mat2, 16)'],['../a00364.html#gadd8ddbe2bf65ccede865ba2f510176dc',1,'glm::GLM_ALIGNED_TYPEDEF(f32mat3x3, aligned_f32mat3, 16)'],['../a00364.html#gaf18dbff14bf13d3ff540c517659ec045',1,'glm::GLM_ALIGNED_TYPEDEF(f32mat4x4, aligned_f32mat4, 16)'],['../a00364.html#ga66339f6139bf7ff19e245beb33f61cc8',1,'glm::GLM_ALIGNED_TYPEDEF(f32mat2x2, aligned_f32mat2x2, 16)'],['../a00364.html#ga1558a48b3934011b52612809f443e46d',1,'glm::GLM_ALIGNED_TYPEDEF(f32mat2x3, aligned_f32mat2x3, 16)'],['../a00364.html#gaa52e5732daa62851627021ad551c7680',1,'glm::GLM_ALIGNED_TYPEDEF(f32mat2x4, aligned_f32mat2x4, 
16)'],['../a00364.html#gac09663c42566bcb58d23c6781ac4e85a',1,'glm::GLM_ALIGNED_TYPEDEF(f32mat3x2, aligned_f32mat3x2, 16)'],['../a00364.html#ga3f510999e59e1b309113e1d561162b29',1,'glm::GLM_ALIGNED_TYPEDEF(f32mat3x3, aligned_f32mat3x3, 16)'],['../a00364.html#ga2c9c94f0c89cd71ce56551db6cf4aaec',1,'glm::GLM_ALIGNED_TYPEDEF(f32mat3x4, aligned_f32mat3x4, 16)'],['../a00364.html#ga99ce8274c750fbfdf0e70c95946a2875',1,'glm::GLM_ALIGNED_TYPEDEF(f32mat4x2, aligned_f32mat4x2, 16)'],['../a00364.html#ga9476ef66790239df53dbe66f3989c3b5',1,'glm::GLM_ALIGNED_TYPEDEF(f32mat4x3, aligned_f32mat4x3, 16)'],['../a00364.html#gacc429b3b0b49921e12713b6d31e14e1d',1,'glm::GLM_ALIGNED_TYPEDEF(f32mat4x4, aligned_f32mat4x4, 16)'],['../a00364.html#ga88f6c6fa06e6e64479763e69444669cf',1,'glm::GLM_ALIGNED_TYPEDEF(f64mat2x2, aligned_f64mat2, 32)'],['../a00364.html#gaae8e4639c991e64754145ab8e4c32083',1,'glm::GLM_ALIGNED_TYPEDEF(f64mat3x3, aligned_f64mat3, 32)'],['../a00364.html#ga6e9094f3feb3b5b49d0f83683a101fde',1,'glm::GLM_ALIGNED_TYPEDEF(f64mat4x4, aligned_f64mat4, 32)'],['../a00364.html#gadbd2c639c03de1c3e9591b5a39f65559',1,'glm::GLM_ALIGNED_TYPEDEF(f64mat2x2, aligned_f64mat2x2, 32)'],['../a00364.html#gab059d7b9fe2094acc563b7223987499f',1,'glm::GLM_ALIGNED_TYPEDEF(f64mat2x3, aligned_f64mat2x3, 32)'],['../a00364.html#gabbc811d1c52ed2b8cfcaff1378f75c69',1,'glm::GLM_ALIGNED_TYPEDEF(f64mat2x4, aligned_f64mat2x4, 32)'],['../a00364.html#ga9ddf5212777734d2fd841a84439f3bdf',1,'glm::GLM_ALIGNED_TYPEDEF(f64mat3x2, aligned_f64mat3x2, 32)'],['../a00364.html#gad1dda32ed09f94bfcf0a7d8edfb6cf13',1,'glm::GLM_ALIGNED_TYPEDEF(f64mat3x3, aligned_f64mat3x3, 32)'],['../a00364.html#ga5875e0fa72f07e271e7931811cbbf31a',1,'glm::GLM_ALIGNED_TYPEDEF(f64mat3x4, aligned_f64mat3x4, 32)'],['../a00364.html#ga41e82cd6ac07f912ba2a2d45799dcf0d',1,'glm::GLM_ALIGNED_TYPEDEF(f64mat4x2, aligned_f64mat4x2, 32)'],['../a00364.html#ga0892638d6ba773043b3d63d1d092622e',1,'glm::GLM_ALIGNED_TYPEDEF(f64mat4x3, aligned_f64mat4x3, 
32)'],['../a00364.html#ga912a16432608b822f1e13607529934c1',1,'glm::GLM_ALIGNED_TYPEDEF(f64mat4x4, aligned_f64mat4x4, 32)'],['../a00364.html#gafd945a8ea86b042aba410e0560df9a3d',1,'glm::GLM_ALIGNED_TYPEDEF(quat, aligned_quat, 16)'],['../a00364.html#ga19c2ba545d1f2f36bcb7b60c9a228622',1,'glm::GLM_ALIGNED_TYPEDEF(quat, aligned_fquat, 16)'],['../a00364.html#gaabc28c84a3288b697605d4688686f9a9',1,'glm::GLM_ALIGNED_TYPEDEF(dquat, aligned_dquat, 32)'],['../a00364.html#ga1ed8aeb5ca67fade269a46105f1bf273',1,'glm::GLM_ALIGNED_TYPEDEF(f32quat, aligned_f32quat, 16)'],['../a00364.html#ga95cc03b8b475993fa50e05e38e203303',1,'glm::GLM_ALIGNED_TYPEDEF(f64quat, aligned_f64quat, 32)']]], ['golden_5fratio',['golden_ratio',['../a00290.html#ga748cf8642830657c5b7eae04d0a80899',1,'glm']]], ['greaterthan',['greaterThan',['../a00299.html#ga8f7fa76e06c417b757ddfd438f3f677b',1,'glm::greaterThan(qua< T, Q > const &x, qua< T, Q > const &y)'],['../a00374.html#gadfdb8ea82deca869ddc7e63ea5a63ae4',1,'glm::greaterThan(vec< L, T, Q > const &x, vec< L, T, Q > const &y)']]], ['greaterthanequal',['greaterThanEqual',['../a00299.html#ga388cbeba987dae7b5937f742efa49a5a',1,'glm::greaterThanEqual(qua< T, Q > const &x, qua< T, Q > const &y)'],['../a00374.html#ga859975f538940f8d18fe62f916b9abd7',1,'glm::greaterThanEqual(vec< L, T, Q > const &x, vec< L, T, Q > const &y)']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_7.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_7.js ================================================ var searchData= [ ['half_5fpi',['half_pi',['../a00290.html#ga0c36b41d462e45641faf7d7938948bac',1,'glm']]], ['hermite',['hermite',['../a00358.html#gaa69e143f6374d32f934a8edeaa50bac9',1,'glm']]], ['highestbitvalue',['highestBitValue',['../a00309.html#ga0dcc8fe7c3d3ad60dea409281efa3d05',1,'glm::highestBitValue(genIUType Value)'],['../a00309.html#ga898ef075ccf809a1e480faab48fe96bf',1,'glm::highestBitValue(vec< L, T, Q > const &value)']]], ['hsvcolor',['hsvColor',['../a00312.html#ga789802bec2d4fe0f9741c731b4a8a7d8',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_8.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_8.js ================================================ var searchData= [ ['identity',['identity',['../a00247.html#ga81696f2b8d1db02ea1aff8da8f269314',1,'glm']]], ['imulextended',['imulExtended',['../a00370.html#gac0c510a70e852f57594a9141848642e3',1,'glm']]], ['infiniteperspective',['infinitePerspective',['../a00243.html#ga44fa38a18349450325cae2661bb115ca',1,'glm']]], ['infiniteperspectivelh',['infinitePerspectiveLH',['../a00243.html#ga3201b30f5b3ea0f933246d87bfb992a9',1,'glm']]], ['infiniteperspectiverh',['infinitePerspectiveRH',['../a00243.html#ga99672ffe5714ef478dab2437255fe7e1',1,'glm']]], ['intbitstofloat',['intBitsToFloat',['../a00241.html#ga4fb7c21c2dce064b26fd9ccdaf9adcd4',1,'glm::intBitsToFloat(int const &v)'],['../a00241.html#ga7a0a8291a1cf3e1c2aee33030a1bd7b0',1,'glm::intBitsToFloat(vec< L, int, Q > const &v)']]], ['intermediate',['intermediate',['../a00352.html#gacc5cd5f3e78de61d141c2355417424de',1,'glm']]], ['interpolate',['interpolate',['../a00337.html#ga4e67863d150724b10c1ac00972dc958c',1,'glm']]], ['intersectlinesphere',['intersectLineSphere',['../a00331.html#ga9c68139f3d8a4f3d7fe45f9dbc0de5b7',1,'glm']]], ['intersectlinetriangle',['intersectLineTriangle',['../a00331.html#ga9d29b9b3acb504d43986502f42740df4',1,'glm']]], ['intersectrayplane',['intersectRayPlane',['../a00331.html#gad3697a9700ea379739a667ea02573488',1,'glm']]], ['intersectraysphere',['intersectRaySphere',['../a00331.html#gac88f8cd84c4bcb5b947d56acbbcfa56e',1,'glm::intersectRaySphere(genType const &rayStarting, genType const &rayNormalizedDirection, genType const &sphereCenter, typename genType::value_type const sphereRadiusSquared, typename genType::value_type &intersectionDistance)'],['../a00331.html#gad28c00515b823b579c608aafa1100c1d',1,'glm::intersectRaySphere(genType const &rayStarting, genType const &rayNormalizedDirection, genType 
const &sphereCenter, const typename genType::value_type sphereRadius, genType &intersectionPosition, genType &intersectionNormal)']]], ['intersectraytriangle',['intersectRayTriangle',['../a00331.html#ga65bf2c594482f04881c36bc761f9e946',1,'glm']]], ['inverse',['inverse',['../a00248.html#gab41da854ae678e23e114b598cbca4065',1,'glm::inverse(qua< T, Q > const &q)'],['../a00317.html#ga070f521a953f6461af4ab4cf8ccbf27e',1,'glm::inverse(tdualquat< T, Q > const &q)'],['../a00371.html#gaed509fe8129b01e4f20a6d0de5690091',1,'glm::inverse(mat< C, R, T, Q > const &m)']]], ['inversesqrt',['inversesqrt',['../a00242.html#ga523dd6bd0ad9f75ae2d24c8e4b017b7a',1,'glm']]], ['inversetranspose',['inverseTranspose',['../a00295.html#gab213cd0e3ead5f316d583f99d6312008',1,'glm']]], ['iround',['iround',['../a00292.html#ga57824268ebe13a922f1d69a5d37f637f',1,'glm']]], ['iscompnull',['isCompNull',['../a00368.html#gaf6ec1688eab7442fe96fe4941d5d4e76',1,'glm']]], ['isdenormal',['isdenormal',['../a00314.html#ga74aa7c7462245d83bd5a9edf9c6c2d91',1,'glm']]], ['isfinite',['isfinite',['../a00315.html#gaf4b04dcd3526996d68c1bfe17bfc8657',1,'glm::isfinite(genType const &x)'],['../a00315.html#gac3b12b8ac3014418fe53c299478b6603',1,'glm::isfinite(const vec< 1, T, Q > &x)'],['../a00315.html#ga8e76dc3e406ce6a4155c2b12a2e4b084',1,'glm::isfinite(const vec< 2, T, Q > &x)'],['../a00315.html#ga929ef27f896d902c1771a2e5e150fc97',1,'glm::isfinite(const vec< 3, T, Q > &x)'],['../a00315.html#ga19925badbe10ce61df1d0de00be0b5ad',1,'glm::isfinite(const vec< 4, T, Q > &x)']]], ['isidentity',['isIdentity',['../a00340.html#gaee935d145581c82e82b154ccfd78ad91',1,'glm']]], ['isinf',['isinf',['../a00241.html#ga2885587c23a106301f20443896365b62',1,'glm::isinf(vec< L, T, Q > const &x)'],['../a00248.html#ga45722741ea266b4e861938b365c5f362',1,'glm::isinf(qua< T, Q > const &x)']]], ['ismultiple',['isMultiple',['../a00261.html#gaec593d33956a8fe43f78fccc63ddde9a',1,'glm::isMultiple(genIUType v, genIUType 
Multiple)'],['../a00274.html#ga354caf634ef333d9cb4844407416256a',1,'glm::isMultiple(vec< L, T, Q > const &v, T Multiple)'],['../a00274.html#gabb4360e38c0943d8981ba965dead519d',1,'glm::isMultiple(vec< L, T, Q > const &v, vec< L, T, Q > const &Multiple)']]], ['isnan',['isnan',['../a00241.html#ga29ef934c00306490de837b4746b4e14d',1,'glm::isnan(vec< L, T, Q > const &x)'],['../a00248.html#ga1bb55f8963616502e96dc564384d8a03',1,'glm::isnan(qua< T, Q > const &x)']]], ['isnormalized',['isNormalized',['../a00340.html#gae785af56f47ce220a1609f7f84aa077a',1,'glm::isNormalized(mat< 2, 2, T, Q > const &m, T const &epsilon)'],['../a00340.html#gaa068311695f28f5f555f5f746a6a66fb',1,'glm::isNormalized(mat< 3, 3, T, Q > const &m, T const &epsilon)'],['../a00340.html#ga4d9bb4d0465df49fedfad79adc6ce4ad',1,'glm::isNormalized(mat< 4, 4, T, Q > const &m, T const &epsilon)'],['../a00368.html#gac3c974f459fd75453134fad7ae89a39e',1,'glm::isNormalized(vec< L, T, Q > const &v, T const &epsilon)']]], ['isnull',['isNull',['../a00340.html#ga9790ec222ce948c0ff0d8ce927340dba',1,'glm::isNull(mat< 2, 2, T, Q > const &m, T const &epsilon)'],['../a00340.html#gae14501c6b14ccda6014cc5350080103d',1,'glm::isNull(mat< 3, 3, T, Q > const &m, T const &epsilon)'],['../a00340.html#ga2b98bb30a9fefa7cdea5f1dcddba677b',1,'glm::isNull(mat< 4, 4, T, Q > const &m, T const &epsilon)'],['../a00368.html#gab4a3637dbcb4bb42dc55caea7a1e0495',1,'glm::isNull(vec< L, T, Q > const &v, T const &epsilon)']]], ['isorthogonal',['isOrthogonal',['../a00340.html#ga58f3289f74dcab653387dd78ad93ca40',1,'glm']]], ['ispoweroftwo',['isPowerOfTwo',['../a00261.html#gadf491730354aa7da67fbe23d4d688763',1,'glm::isPowerOfTwo(genIUType v)'],['../a00274.html#gabf2b61ded7049bcb13e25164f832a290',1,'glm::isPowerOfTwo(vec< L, T, Q > const &v)']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_9.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_9.js ================================================ var searchData= [ ['l1norm',['l1Norm',['../a00343.html#gae2fc0b2aa967bebfd6a244700bff6997',1,'glm::l1Norm(vec< 3, T, Q > const &x, vec< 3, T, Q > const &y)'],['../a00343.html#ga1a7491e2037ceeb37f83ce41addfc0be',1,'glm::l1Norm(vec< 3, T, Q > const &v)']]], ['l2norm',['l2Norm',['../a00343.html#ga41340b2ef40a9307ab0f137181565168',1,'glm::l2Norm(vec< 3, T, Q > const &x, vec< 3, T, Q > const &y)'],['../a00343.html#gae288bde8f0e41fb4ed62e65137b18cba',1,'glm::l2Norm(vec< 3, T, Q > const &x)']]], ['ldexp',['ldexp',['../a00241.html#gac3010e0a0c35a1b514540f2fb579c58c',1,'glm']]], ['lefthanded',['leftHanded',['../a00328.html#ga6f1bad193b9a3b048543d1935cf04dd3',1,'glm']]], ['length',['length',['../a00254.html#gab703732449be6c7199369b3f9a91ed38',1,'glm::length(qua< T, Q > const &q)'],['../a00279.html#ga0cdabbb000834d994a1d6dc56f8f5263',1,'glm::length(vec< L, T, Q > const &x)']]], ['length2',['length2',['../a00343.html#ga8d1789651050adb7024917984b41c3de',1,'glm::length2(vec< L, T, Q > const &x)'],['../a00352.html#ga58a609b1b8ab965f5df2702e8ca4e75b',1,'glm::length2(qua< T, Q > const &q)']]], ['lerp',['lerp',['../a00248.html#ga6033dc0741051fa463a0a147ba29f293',1,'glm::lerp(qua< T, Q > const &x, qua< T, Q > const &y, T a)'],['../a00315.html#ga5494ba3a95ea6594c86fc75236886864',1,'glm::lerp(T x, T y, T a)'],['../a00315.html#gaa551c0a0e16d2d4608e49f7696df897f',1,'glm::lerp(const vec< 2, T, Q > &x, const vec< 2, T, Q > &y, T a)'],['../a00315.html#ga44a8b5fd776320f1713413dec959b32a',1,'glm::lerp(const vec< 3, T, Q > &x, const vec< 3, T, Q > &y, T a)'],['../a00315.html#ga89ac8e000199292ec7875519d27e214b',1,'glm::lerp(const vec< 4, T, Q > &x, const vec< 4, T, Q > &y, T a)'],['../a00315.html#gaf68de5baf72d16135368b8ef4f841604',1,'glm::lerp(const vec< 2, T, Q > &x, const vec< 2, T, Q > &y, 
const vec< 2, T, Q > &a)'],['../a00315.html#ga4ae1a616c8540a2649eab8e0cd051bb3',1,'glm::lerp(const vec< 3, T, Q > &x, const vec< 3, T, Q > &y, const vec< 3, T, Q > &a)'],['../a00315.html#gab5477ab69c40de4db5d58d3359529724',1,'glm::lerp(const vec< 4, T, Q > &x, const vec< 4, T, Q > &y, const vec< 4, T, Q > &a)'],['../a00317.html#gace8380112d16d33f520839cb35a4d173',1,'glm::lerp(tdualquat< T, Q > const &x, tdualquat< T, Q > const &y, T const &a)']]], ['lessthan',['lessThan',['../a00299.html#gad091a2d22c8acfebfa92bcfca1dfe9c4',1,'glm::lessThan(qua< T, Q > const &x, qua< T, Q > const &y)'],['../a00374.html#gae90ed1592c395f93e3f3dfce6b2f39c6',1,'glm::lessThan(vec< L, T, Q > const &x, vec< L, T, Q > const &y)']]], ['lessthanequal',['lessThanEqual',['../a00299.html#gac00012eea281800d2403f4ea8443134d',1,'glm::lessThanEqual(qua< T, Q > const &x, qua< T, Q > const &y)'],['../a00374.html#gab0bdafc019d227257ff73fb5bcca1718',1,'glm::lessThanEqual(vec< L, T, Q > const &x, vec< L, T, Q > const &y)']]], ['levels',['levels',['../a00361.html#gaa8c377f4e63486db4fa872d77880da73',1,'glm']]], ['lineargradient',['linearGradient',['../a00327.html#ga849241df1e55129b8ce9476200307419',1,'glm']]], ['linearinterpolation',['linearInterpolation',['../a00318.html#ga290c3e47cb0a49f2e8abe90b1872b649',1,'glm']]], ['linearrand',['linearRand',['../a00300.html#ga04e241ab88374a477a2c2ceadd2fa03d',1,'glm::linearRand(genType Min, genType Max)'],['../a00300.html#ga94731130c298a9ff5e5025fdee6d97a0',1,'glm::linearRand(vec< L, T, Q > const &Min, vec< L, T, Q > const &Max)']]], ['lmaxnorm',['lMaxNorm',['../a00343.html#gad58a8231fc32e38104a9e1c4d3c0cb64',1,'glm::lMaxNorm(vec< 3, T, Q > const &x, vec< 3, T, Q > const &y)'],['../a00343.html#ga6968a324837a8e899396d44de23d5aae',1,'glm::lMaxNorm(vec< 3, T, Q > const &x)']]], ['ln_5fln_5ftwo',['ln_ln_two',['../a00290.html#gaca94292c839ed31a405ab7a81ae7e850',1,'glm']]], ['ln_5ften',['ln_ten',['../a00290.html#gaf97ebc6c059ffd788e6c4946f71ef66c',1,'glm']]], 
['ln_5ftwo',['ln_two',['../a00290.html#ga24f4d27765678116f41a2f336ab7975c',1,'glm']]], ['log',['log',['../a00242.html#ga918c9f3fd086ce20e6760c903bd30fa9',1,'glm::log(vec< L, T, Q > const &v)'],['../a00256.html#gaa5f7b20e296671b16ce25a2ab7ad5473',1,'glm::log(qua< T, Q > const &q)'],['../a00333.html#ga60a7b0a401da660869946b2b77c710c9',1,'glm::log(genType const &x, genType const &base)']]], ['log2',['log2',['../a00242.html#ga82831c7d9cca777cebedfe03a19c8d75',1,'glm::log2(vec< L, T, Q > const &v)'],['../a00292.html#ga9bd682e74bfacb005c735305207ec417',1,'glm::log2(genIUType x)']]], ['lookat',['lookAt',['../a00247.html#gaa64aa951a0e99136bba9008d2b59c78e',1,'glm']]], ['lookatlh',['lookAtLH',['../a00247.html#gab2c09e25b0a16d3a9d89cc85bbae41b0',1,'glm']]], ['lookatrh',['lookAtRH',['../a00247.html#gacfa12c8889c754846bc20c65d9b5c701',1,'glm']]], ['lowestbitvalue',['lowestBitValue',['../a00309.html#ga2ff6568089f3a9b67f5c30918855fc6f',1,'glm']]], ['luminosity',['luminosity',['../a00312.html#gad028e0a4f1a9c812b39439b746295b34',1,'glm']]], ['lxnorm',['lxNorm',['../a00343.html#gacad23d30497eb16f67709f2375d1f66a',1,'glm::lxNorm(vec< 3, T, Q > const &x, vec< 3, T, Q > const &y, unsigned int Depth)'],['../a00343.html#gac61b6d81d796d6eb4d4183396a19ab91',1,'glm::lxNorm(vec< 3, T, Q > const &x, unsigned int Depth)']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_a.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_a.js ================================================ var searchData= [ ['make_5fmat2',['make_mat2',['../a00305.html#ga04409e74dc3da251d2501acf5b4b546c',1,'glm']]], ['make_5fmat2x2',['make_mat2x2',['../a00305.html#gae49e1c7bcd5abec74d1c34155031f663',1,'glm']]], ['make_5fmat2x3',['make_mat2x3',['../a00305.html#ga21982104164789cf8985483aaefc25e8',1,'glm']]], ['make_5fmat2x4',['make_mat2x4',['../a00305.html#ga078b862c90b0e9a79ed43a58997d8388',1,'glm']]], ['make_5fmat3',['make_mat3',['../a00305.html#ga611ee7c4d4cadfc83a8fa8e1d10a170f',1,'glm']]], ['make_5fmat3x2',['make_mat3x2',['../a00305.html#ga27a24e121dc39e6857620e0f85b6e1a8',1,'glm']]], ['make_5fmat3x3',['make_mat3x3',['../a00305.html#gaf2e8337b15c3362aaeb6e5849e1c0536',1,'glm']]], ['make_5fmat3x4',['make_mat3x4',['../a00305.html#ga05dd66232aedb993e3b8e7b35eaf932b',1,'glm']]], ['make_5fmat4',['make_mat4',['../a00305.html#gae7bcedb710d1446c87fd1fc93ed8ee9a',1,'glm']]], ['make_5fmat4x2',['make_mat4x2',['../a00305.html#ga8b34c9b25bf3310d8ff9c828c7e2d97c',1,'glm']]], ['make_5fmat4x3',['make_mat4x3',['../a00305.html#ga0330bf6640092d7985fac92927bbd42b',1,'glm']]], ['make_5fmat4x4',['make_mat4x4',['../a00305.html#ga8f084be30e404844bfbb4a551ac2728c',1,'glm']]], ['make_5fquat',['make_quat',['../a00305.html#ga58110d7d81cf7d029e2bab7f8cd9b246',1,'glm']]], ['make_5fvec1',['make_vec1',['../a00305.html#ga4135f03f3049f0a4eb76545c4967957c',1,'glm::make_vec1(vec< 1, T, Q > const &v)'],['../a00305.html#ga13c92b81e55f201b052a6404d57da220',1,'glm::make_vec1(vec< 2, T, Q > const &v)'],['../a00305.html#ga3c23cc74086d361e22bbd5e91a334e03',1,'glm::make_vec1(vec< 3, T, Q > const &v)'],['../a00305.html#ga6af06bb60d64ca8bcd169e3c93bc2419',1,'glm::make_vec1(vec< 4, T, Q > const &v)']]], ['make_5fvec2',['make_vec2',['../a00305.html#ga8476d0e6f1b9b4a6193cc25f59d8a896',1,'glm::make_vec2(vec< 1, 
T, Q > const &v)'],['../a00305.html#gae54bd325a08ad26edf63929201adebc7',1,'glm::make_vec2(vec< 2, T, Q > const &v)'],['../a00305.html#ga0084fea4694cf47276e9cccbe7b1015a',1,'glm::make_vec2(vec< 3, T, Q > const &v)'],['../a00305.html#ga2b81f71f3a222fe5bba81e3983751249',1,'glm::make_vec2(vec< 4, T, Q > const &v)'],['../a00305.html#ga81253cf7b0ebfbb1e70540c5774e6824',1,'glm::make_vec2(T const *const ptr)']]], ['make_5fvec3',['make_vec3',['../a00305.html#ga9147e4b3a5d0f4772edfbfd179d7ea0b',1,'glm::make_vec3(vec< 1, T, Q > const &v)'],['../a00305.html#ga482b60a842a5b154d3eed392417a9511',1,'glm::make_vec3(vec< 2, T, Q > const &v)'],['../a00305.html#gacd57046034df557b8b1c457f58613623',1,'glm::make_vec3(vec< 3, T, Q > const &v)'],['../a00305.html#ga8b589ed7d41a298b516d2a69169248f1',1,'glm::make_vec3(vec< 4, T, Q > const &v)'],['../a00305.html#gad9e0d36ff489cb30c65ad1fa40351651',1,'glm::make_vec3(T const *const ptr)']]], ['make_5fvec4',['make_vec4',['../a00305.html#ga600cb97f70c5d50d3a4a145e1cafbf37',1,'glm::make_vec4(vec< 1, T, Q > const &v)'],['../a00305.html#gaa9bd116caf28196fd1cf00b278286fa7',1,'glm::make_vec4(vec< 2, T, Q > const &v)'],['../a00305.html#ga4036328ba4702c74cbdfad1fc03d1b8f',1,'glm::make_vec4(vec< 3, T, Q > const &v)'],['../a00305.html#gaa95cb15732f708f613e65a0578895ae5',1,'glm::make_vec4(vec< 4, T, Q > const &v)'],['../a00305.html#ga63f576518993efc22a969f18f80e29bb',1,'glm::make_vec4(T const *const ptr)']]], ['mask',['mask',['../a00288.html#gad7eba518a0b71662114571ee76939f8a',1,'glm::mask(genIUType Bits)'],['../a00288.html#ga2e64e3b922a296033b825311e7f5fff1',1,'glm::mask(vec< L, T, Q > const &v)']]], ['mat2x4_5fcast',['mat2x4_cast',['../a00317.html#gae99d143b37f9cad4cd9285571aab685a',1,'glm']]], ['mat3_5fcast',['mat3_cast',['../a00299.html#ga333ab70047fbe4132406100c292dbc89',1,'glm']]], ['mat3x4_5fcast',['mat3x4_cast',['../a00317.html#gaf59f5bb69620d2891c3795c6f2639179',1,'glm']]], 
['mat4_5fcast',['mat4_cast',['../a00299.html#ga1113212d9bdefc2e31ad40e5bbb506f3',1,'glm']]], ['matrixcompmult',['matrixCompMult',['../a00371.html#gaf14569404c779fedca98d0b9b8e58c1f',1,'glm']]], ['matrixcross3',['matrixCross3',['../a00334.html#ga5802386bb4c37b3332a3b6fd8b6960ff',1,'glm']]], ['matrixcross4',['matrixCross4',['../a00334.html#ga20057fff91ddafa102934adb25458cde',1,'glm']]], ['max',['max',['../a00241.html#gae02d42887fc5570451f880e3c624b9ac',1,'glm::max(genType x, genType y)'],['../a00241.html#ga03e45d6e60d1c36edb00c52edeea0f31',1,'glm::max(vec< L, T, Q > const &x, T y)'],['../a00241.html#gac1fec0c3303b572a6d4697a637213870',1,'glm::max(vec< L, T, Q > const &x, vec< L, T, Q > const &y)'],['../a00258.html#gaa20839d9ab14514f8966f69877ea0de8',1,'glm::max(T a, T b, T c)'],['../a00258.html#ga2274b5e75ed84b0b1e50d8d22f1f2f67',1,'glm::max(T a, T b, T c, T d)'],['../a00267.html#gaa45d34f6a2906f8bf58ab2ba5429234d',1,'glm::max(vec< L, T, Q > const &x, vec< L, T, Q > const &y, vec< L, T, Q > const &z)'],['../a00267.html#ga94d42b8da2b4ded5ddf7504fbdc6bf10',1,'glm::max(vec< L, T, Q > const &x, vec< L, T, Q > const &y, vec< L, T, Q > const &z, vec< L, T, Q > const &w)'],['../a00321.html#ga04991ccb9865c4c4e58488cfb209ce69',1,'glm::max(T const &x, T const &y, T const &z)'],['../a00321.html#gae1b7bbe5c91de4924835ea3e14530744',1,'glm::max(C< T > const &x, typename C< T >::T const &y, typename C< T >::T const &z)'],['../a00321.html#gaf832e9d4ab4826b2dda2fda25935a3a4',1,'glm::max(C< T > const &x, C< T > const &y, C< T > const &z)'],['../a00321.html#ga78e04a0cef1c4863fcae1a2130500d87',1,'glm::max(T const &x, T const &y, T const &z, T const &w)'],['../a00321.html#ga7cca8b53cfda402040494cdf40fbdf4a',1,'glm::max(C< T > const &x, typename C< T >::T const &y, typename C< T >::T const &z, typename C< T >::T const &w)'],['../a00321.html#gaacffbc466c2d08c140b181e7fd8a4858',1,'glm::max(C< T > const &x, C< T > const &y, C< T > const &z, C< T > const &w)']]], 
['min',['min',['../a00241.html#ga6cf8098827054a270ee36b18e30d471d',1,'glm::min(genType x, genType y)'],['../a00241.html#gaa7d015eba1f9f48519251f4abe69b14d',1,'glm::min(vec< L, T, Q > const &x, T y)'],['../a00241.html#ga31f49ef9e7d1beb003160c5e009b0c48',1,'glm::min(vec< L, T, Q > const &x, vec< L, T, Q > const &y)'],['../a00258.html#ga420b37cbd98c395b93dab0278305cd46',1,'glm::min(T a, T b, T c)'],['../a00258.html#ga0d24a9acb8178df77e4aff90cbb2010d',1,'glm::min(T a, T b, T c, T d)'],['../a00267.html#ga3cd83d80fd4f433d8e333593ec56dddf',1,'glm::min(vec< L, T, Q > const &a, vec< L, T, Q > const &b, vec< L, T, Q > const &c)'],['../a00267.html#gab66920ed064ab518d6859c5a889c4be4',1,'glm::min(vec< L, T, Q > const &a, vec< L, T, Q > const &b, vec< L, T, Q > const &c, vec< L, T, Q > const &d)'],['../a00321.html#ga713d3f9b3e76312c0d314e0c8611a6a6',1,'glm::min(T const &x, T const &y, T const &z)'],['../a00321.html#ga74d1a96e7cdbac40f6d35142d3bcbbd4',1,'glm::min(C< T > const &x, typename C< T >::T const &y, typename C< T >::T const &z)'],['../a00321.html#ga42b5c3fc027fd3d9a50d2ccc9126d9f0',1,'glm::min(C< T > const &x, C< T > const &y, C< T > const &z)'],['../a00321.html#ga95466987024d03039607f09e69813d69',1,'glm::min(T const &x, T const &y, T const &z, T const &w)'],['../a00321.html#ga4fe35dd31dd0c45693c9b60b830b8d47',1,'glm::min(C< T > const &x, typename C< T >::T const &y, typename C< T >::T const &z, typename C< T >::T const &w)'],['../a00321.html#ga7471ea4159eed8dd9ea4ac5d46c2fead',1,'glm::min(C< T > const &x, C< T > const &y, C< T > const &z, C< T > const &w)']]], ['mirrorclamp',['mirrorClamp',['../a00369.html#gaa6856a0a048d2749252848da35e10c8b',1,'glm']]], ['mirrorrepeat',['mirrorRepeat',['../a00369.html#ga16a89b0661b60d5bea85137bbae74d73',1,'glm']]], ['mix',['mix',['../a00241.html#ga8e93f374aae27d1a88b921860351f8d4',1,'glm::mix(genTypeT x, genTypeT y, genTypeU a)'],['../a00248.html#gafbfe587b8da11fb89a30c3d67dd5ccc2',1,'glm::mix(qua< T, Q > const &x, qua< T, Q > const &y, 
T a)']]], ['mixedproduct',['mixedProduct',['../a00342.html#gab3c6048fbb67f7243b088a4fee48d020',1,'glm']]], ['mod',['mod',['../a00241.html#ga9b197a452cd52db3c5c18bac72bd7798',1,'glm::mod(vec< L, T, Q > const &x, vec< L, T, Q > const &y)'],['../a00330.html#gaabfbb41531ab7ad8d06fc176edfba785',1,'glm::mod(int x, int y)'],['../a00330.html#ga63fc8d63e7da1706439233b386ba8b6f',1,'glm::mod(uint x, uint y)']]], ['modf',['modf',['../a00241.html#ga85e33f139b8db1b39b590a5713b9e679',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_b.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_b.js ================================================ var searchData= [ ['nextmultiple',['nextMultiple',['../a00261.html#gab770a3835c44c8a6fd225be4f4e6b317',1,'glm::nextMultiple(genIUType v, genIUType Multiple)'],['../a00274.html#gace38d00601cbf49cd4dc03f003ab42b7',1,'glm::nextMultiple(vec< L, T, Q > const &v, T Multiple)'],['../a00274.html#gacda365edad320c7aff19cc283a3b8ca2',1,'glm::nextMultiple(vec< L, T, Q > const &v, vec< L, T, Q > const &Multiple)']]], ['nextpoweroftwo',['nextPowerOfTwo',['../a00261.html#ga3a37c2f2fd347886c9af6a3ca3db04dc',1,'glm::nextPowerOfTwo(genIUType v)'],['../a00274.html#gabba67f8aac9915e10fca727277274502',1,'glm::nextPowerOfTwo(vec< L, T, Q > const &v)']]], ['nlz',['nlz',['../a00330.html#ga78dff8bdb361bf0061194c93e003d189',1,'glm']]], ['normalize',['normalize',['../a00254.html#gabf30e3263fffe8dcc6659aea76ae8927',1,'glm::normalize(qua< T, Q > const &q)'],['../a00279.html#ga3b8d3dcae77870781392ed2902cce597',1,'glm::normalize(vec< L, T, Q > const &x)'],['../a00317.html#ga299b8641509606b1958ffa104a162cfe',1,'glm::normalize(tdualquat< T, Q > const &q)']]], ['normalizedot',['normalizeDot',['../a00345.html#gacb140a2b903115d318c8b0a2fb5a5daa',1,'glm']]], ['not_5f',['not_',['../a00374.html#ga610fcd175791fd246e328ffee10dbf1e',1,'glm']]], ['notequal',['notEqual',['../a00246.html#ga8504f18a7e2bf315393032c2137dad83',1,'glm::notEqual(mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y)'],['../a00246.html#ga29071147d118569344d10944b7d5c378',1,'glm::notEqual(mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, T epsilon)'],['../a00246.html#gad7959e14fbc35b4ed2617daf4d67f6cd',1,'glm::notEqual(mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, vec< C, T, Q > const &epsilon)'],['../a00246.html#gaa1cd7fc228ef6e26c73583fd0d9c6552',1,'glm::notEqual(mat< C, R, T, Q > const &x, mat< C, R, T, Q > const 
&y, int ULPs)'],['../a00246.html#gaa5517341754149ffba742d230afd1f32',1,'glm::notEqual(mat< C, R, T, Q > const &x, mat< C, R, T, Q > const &y, vec< C, int, Q > const &ULPs)'],['../a00255.html#gab441cee0de5867a868f3a586ee68cfe1',1,'glm::notEqual(qua< T, Q > const &x, qua< T, Q > const &y)'],['../a00255.html#ga5117a44c1bf21af857cd23e44a96d313',1,'glm::notEqual(qua< T, Q > const &x, qua< T, Q > const &y, T epsilon)'],['../a00275.html#ga4a99cc41341567567a608719449c1fac',1,'glm::notEqual(vec< L, T, Q > const &x, vec< L, T, Q > const &y, T epsilon)'],['../a00275.html#ga417cf51304359db18e819dda9bce5767',1,'glm::notEqual(vec< L, T, Q > const &x, vec< L, T, Q > const &y, vec< L, T, Q > const &epsilon)'],['../a00275.html#ga8b5c2c3f83422ae5b71fa960d03b0339',1,'glm::notEqual(vec< L, T, Q > const &x, vec< L, T, Q > const &y, int ULPs)'],['../a00275.html#ga0b15ffe32987a6029b14398eb0def01a',1,'glm::notEqual(vec< L, T, Q > const &x, vec< L, T, Q > const &y, vec< L, int, Q > const &ULPs)'],['../a00374.html#ga17c19dc1b76cd5aef63e9e7ff3aa3c27',1,'glm::notEqual(vec< L, T, Q > const &x, vec< L, T, Q > const &y)']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_c.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_c.js ================================================ var searchData= [ ['one',['one',['../a00290.html#ga39c2fb227631ca25894326529bdd1ee5',1,'glm']]], ['one_5fover_5fpi',['one_over_pi',['../a00290.html#ga555150da2b06d23c8738981d5013e0eb',1,'glm']]], ['one_5fover_5froot_5ftwo',['one_over_root_two',['../a00290.html#ga788fa23a0939bac4d1d0205fb4f35818',1,'glm']]], ['one_5fover_5ftwo_5fpi',['one_over_two_pi',['../a00290.html#ga7c922b427986cbb2e4c6ac69874eefbc',1,'glm']]], ['openbounded',['openBounded',['../a00314.html#gafd303042ba2ba695bf53b2315f53f93f',1,'glm']]], ['orientate2',['orientate2',['../a00319.html#gae16738a9f1887cf4e4db6a124637608d',1,'glm']]], ['orientate3',['orientate3',['../a00319.html#ga7ca98668a5786f19c7b38299ebbc9b4c',1,'glm::orientate3(T const &angle)'],['../a00319.html#ga7238c8e15c7720e3ca6a45ab151eeabb',1,'glm::orientate3(vec< 3, T, Q > const &angles)']]], ['orientate4',['orientate4',['../a00319.html#ga4a044653f71a4ecec68e0b623382b48a',1,'glm']]], ['orientation',['orientation',['../a00356.html#ga1a32fceb71962e6160e8af295c91930a',1,'glm']]], ['orientedangle',['orientedAngle',['../a00367.html#ga9556a803dce87fe0f42fdabe4ebba1d5',1,'glm::orientedAngle(vec< 2, T, Q > const &x, vec< 2, T, Q > const &y)'],['../a00367.html#ga706fce3d111f485839756a64f5a48553',1,'glm::orientedAngle(vec< 3, T, Q > const &x, vec< 3, T, Q > const &y, vec< 3, T, Q > const &ref)']]], ['ortho',['ortho',['../a00243.html#gae5b6b40ed882cd56cd7cb97701909c06',1,'glm::ortho(T left, T right, T bottom, T top)'],['../a00243.html#ga6615d8a9d39432e279c4575313ecb456',1,'glm::ortho(T left, T right, T bottom, T top, T zNear, T zFar)']]], ['ortholh',['orthoLH',['../a00243.html#gad122a79aadaa5529cec4ac197203db7f',1,'glm']]], ['ortholh_5fno',['orthoLH_NO',['../a00243.html#ga526416735ea7c5c5cd255bf99d051bd8',1,'glm']]], 
['ortholh_5fzo',['orthoLH_ZO',['../a00243.html#gab37ac3eec8d61f22fceda7775e836afa',1,'glm']]], ['orthono',['orthoNO',['../a00243.html#gab219d28a8f178d4517448fcd6395a073',1,'glm']]], ['orthonormalize',['orthonormalize',['../a00348.html#ga4cab5d698e6e2eccea30c8e81c74371f',1,'glm::orthonormalize(mat< 3, 3, T, Q > const &m)'],['../a00348.html#gac3bc7ef498815026bc3d361ae0b7138e',1,'glm::orthonormalize(vec< 3, T, Q > const &x, vec< 3, T, Q > const &y)']]], ['orthorh',['orthoRH',['../a00243.html#ga16264c9b838edeb9dd1de7a1010a13a4',1,'glm']]], ['orthorh_5fno',['orthoRH_NO',['../a00243.html#gaa2f7a1373170bf0a4a2ddef9b0706780',1,'glm']]], ['orthorh_5fzo',['orthoRH_ZO',['../a00243.html#ga9aea2e515b08fd7dce47b7b6ec34d588',1,'glm']]], ['orthozo',['orthoZO',['../a00243.html#gaea11a70817af2c0801c869dea0b7a5bc',1,'glm']]], ['outerproduct',['outerProduct',['../a00371.html#gac29fb7bae75a8e4c1b74cbbf85520e50',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_d.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_d.js ================================================ var searchData= [ ['packdouble2x32',['packDouble2x32',['../a00372.html#gaa916ca426b2bb0343ba17e3753e245c2',1,'glm']]], ['packf2x11_5f1x10',['packF2x11_1x10',['../a00298.html#ga4944ad465ff950e926d49621f916c78d',1,'glm']]], ['packf3x9_5fe1x5',['packF3x9_E1x5',['../a00298.html#ga3f648fc205467792dc6d8c59c748f8a6',1,'glm']]], ['packhalf',['packHalf',['../a00298.html#ga2d8bbce673ebc04831c1fb05c47f5251',1,'glm']]], ['packhalf1x16',['packHalf1x16',['../a00298.html#ga43f2093b6ff192a79058ff7834fc3528',1,'glm']]], ['packhalf2x16',['packHalf2x16',['../a00372.html#ga20f134b07db3a3d3a38efb2617388c92',1,'glm']]], ['packhalf4x16',['packHalf4x16',['../a00298.html#gafe2f7b39caf8f5ec555e1c059ec530e6',1,'glm']]], ['packi3x10_5f1x2',['packI3x10_1x2',['../a00298.html#ga06ecb6afb902dba45419008171db9023',1,'glm']]], ['packint2x16',['packInt2x16',['../a00298.html#ga3644163cf3a47bf1d4af1f4b03013a7e',1,'glm']]], ['packint2x32',['packInt2x32',['../a00298.html#gad1e4c8a9e67d86b61a6eec86703a827a',1,'glm']]], ['packint2x8',['packInt2x8',['../a00298.html#ga8884b1f2292414f36d59ef3be5d62914',1,'glm']]], ['packint4x16',['packInt4x16',['../a00298.html#ga1989f093a27ae69cf9207145be48b3d7',1,'glm']]], ['packint4x8',['packInt4x8',['../a00298.html#gaf2238401d5ce2aaade1a44ba19709072',1,'glm']]], ['packrgbm',['packRGBM',['../a00298.html#ga0466daf4c90f76cc64b3f105ce727295',1,'glm']]], ['packsnorm',['packSnorm',['../a00298.html#gaa54b5855a750d6aeb12c1c902f5939b8',1,'glm']]], ['packsnorm1x16',['packSnorm1x16',['../a00298.html#gab22f8bcfdb5fc65af4701b25f143c1af',1,'glm']]], ['packsnorm1x8',['packSnorm1x8',['../a00298.html#gae3592e0795e62aaa1865b3a10496a7a1',1,'glm']]], ['packsnorm2x16',['packSnorm2x16',['../a00372.html#ga977ab172da5494e5ac63e952afacfbe2',1,'glm']]], 
['packsnorm2x8',['packSnorm2x8',['../a00298.html#ga6be3cfb2cce3702f03e91bbeb5286d7e',1,'glm']]], ['packsnorm3x10_5f1x2',['packSnorm3x10_1x2',['../a00298.html#gab997545661877d2c7362a5084d3897d3',1,'glm']]], ['packsnorm4x16',['packSnorm4x16',['../a00298.html#ga358943934d21da947d5bcc88c2ab7832',1,'glm']]], ['packsnorm4x8',['packSnorm4x8',['../a00372.html#ga85e8f17627516445026ab7a9c2e3531a',1,'glm']]], ['packu3x10_5f1x2',['packU3x10_1x2',['../a00298.html#gada3d88d59f0f458f9c51a9fd359a4bc0',1,'glm']]], ['packuint2x16',['packUint2x16',['../a00298.html#ga5eecc9e8cbaf51ac6cf57501e670ee19',1,'glm']]], ['packuint2x32',['packUint2x32',['../a00298.html#gaa864081097b86e83d8e4a4d79c382b22',1,'glm']]], ['packuint2x8',['packUint2x8',['../a00298.html#ga3c3c9fb53ae7823b10fa083909357590',1,'glm']]], ['packuint4x16',['packUint4x16',['../a00298.html#ga2ceb62cca347d8ace42ee90317a3f1f9',1,'glm']]], ['packuint4x8',['packUint4x8',['../a00298.html#gaa0fe2f09aeb403cd66c1a062f58861ab',1,'glm']]], ['packunorm',['packUnorm',['../a00298.html#gaccd3f27e6ba5163eb7aa9bc8ff96251a',1,'glm']]], ['packunorm1x16',['packUnorm1x16',['../a00298.html#ga9f82737bf2a44bedff1d286b76837886',1,'glm']]], ['packunorm1x5_5f1x6_5f1x5',['packUnorm1x5_1x6_1x5',['../a00298.html#ga768e0337dd6246773f14aa0a421fe9a8',1,'glm']]], ['packunorm1x8',['packUnorm1x8',['../a00298.html#ga4b2fa60df3460403817d28b082ee0736',1,'glm']]], ['packunorm2x16',['packUnorm2x16',['../a00372.html#ga0e2d107039fe608a209497af867b85fb',1,'glm']]], ['packunorm2x3_5f1x2',['packUnorm2x3_1x2',['../a00298.html#ga7f9abdb50f9be1aa1c14912504a0d98d',1,'glm']]], ['packunorm2x4',['packUnorm2x4',['../a00298.html#gab6bbd5be3b8e6db538ecb33a7844481c',1,'glm']]], ['packunorm2x8',['packUnorm2x8',['../a00298.html#ga9a666b1c688ab54100061ed06526de6e',1,'glm']]], ['packunorm3x10_5f1x2',['packUnorm3x10_1x2',['../a00298.html#ga8a1ee625d2707c60530fb3fca2980b19',1,'glm']]], 
['packunorm3x5_5f1x1',['packUnorm3x5_1x1',['../a00298.html#gaec4112086d7fb133bea104a7c237de52',1,'glm']]], ['packunorm4x16',['packUnorm4x16',['../a00298.html#ga1f63c264e7ab63264e2b2a99fd393897',1,'glm']]], ['packunorm4x4',['packUnorm4x4',['../a00298.html#gad3e7e3ce521513584a53aedc5f9765c1',1,'glm']]], ['packunorm4x8',['packUnorm4x8',['../a00372.html#gaf7d2f7341a9eeb4a436929d6f9ad08f2',1,'glm']]], ['perlin',['perlin',['../a00297.html#ga1e043ce3b51510e9bc4469227cefc38a',1,'glm::perlin(vec< L, T, Q > const &p)'],['../a00297.html#gac270edc54c5fc52f5985a45f940bb103',1,'glm::perlin(vec< L, T, Q > const &p, vec< L, T, Q > const &rep)']]], ['perp',['perp',['../a00349.html#ga264cfc4e180cf9b852e943b35089003c',1,'glm']]], ['perspective',['perspective',['../a00243.html#ga747c8cf99458663dd7ad1bb3a2f07787',1,'glm']]], ['perspectivefov',['perspectiveFov',['../a00243.html#gaebd02240fd36e85ad754f02ddd9a560d',1,'glm']]], ['perspectivefovlh',['perspectiveFovLH',['../a00243.html#ga6aebe16c164bd8e52554cbe0304ef4aa',1,'glm']]], ['perspectivefovlh_5fno',['perspectiveFovLH_NO',['../a00243.html#gad18a4495b77530317327e8d466488c1a',1,'glm']]], ['perspectivefovlh_5fzo',['perspectiveFovLH_ZO',['../a00243.html#gabdd37014f529e25b2fa1b3ba06c10d5c',1,'glm']]], ['perspectivefovno',['perspectiveFovNO',['../a00243.html#gaf30e7bd3b1387a6776433dd5383e6633',1,'glm']]], ['perspectivefovrh',['perspectiveFovRH',['../a00243.html#gaf32bf563f28379c68554a44ee60c6a85',1,'glm']]], ['perspectivefovrh_5fno',['perspectiveFovRH_NO',['../a00243.html#ga257b733ff883c9a065801023cf243eb2',1,'glm']]], ['perspectivefovrh_5fzo',['perspectiveFovRH_ZO',['../a00243.html#ga7dcbb25331676f5b0795aced1a905c44',1,'glm']]], ['perspectivefovzo',['perspectiveFovZO',['../a00243.html#ga4bc69fa1d1f95128430aa3d2a712390b',1,'glm']]], ['perspectivelh',['perspectiveLH',['../a00243.html#ga9bd34951dc7022ac256fcb51d7f6fc2f',1,'glm']]], ['perspectivelh_5fno',['perspectiveLH_NO',['../a00243.html#gaead4d049d1feab463b700b5641aa590e',1,'glm']]], 
['perspectivelh_5fzo',['perspectiveLH_ZO',['../a00243.html#gaca32af88c2719005c02817ad1142986c',1,'glm']]], ['perspectiveno',['perspectiveNO',['../a00243.html#gaf497e6bca61e7c87088370b126a93758',1,'glm']]], ['perspectiverh',['perspectiveRH',['../a00243.html#ga26b88757fbd90601b80768a7e1ad3aa1',1,'glm']]], ['perspectiverh_5fno',['perspectiveRH_NO',['../a00243.html#gad1526cb2cbe796095284e8f34b01c582',1,'glm']]], ['perspectiverh_5fzo',['perspectiveRH_ZO',['../a00243.html#ga4da358d6e1b8e5b9ae35d1f3f2dc3b9a',1,'glm']]], ['perspectivezo',['perspectiveZO',['../a00243.html#gaa9dfba5c2322da54f72b1eb7c7c11b47',1,'glm']]], ['pi',['pi',['../a00259.html#ga94bafeb2a0f23ab6450fed1f98ee4e45',1,'glm']]], ['pickmatrix',['pickMatrix',['../a00245.html#gaf6b21eadb7ac2ecbbe258a9a233b4c82',1,'glm']]], ['pitch',['pitch',['../a00299.html#ga7603e81477b46ddb448896909bc04928',1,'glm']]], ['polar',['polar',['../a00350.html#gab83ac2c0e55b684b06b6c46c28b1590d',1,'glm']]], ['pow',['pow',['../a00242.html#ga2254981952d4f333b900a6bf5167a6c4',1,'glm::pow(vec< L, T, Q > const &base, vec< L, T, Q > const &exponent)'],['../a00256.html#ga4975ffcacd312a8c0bbd046a76c5607e',1,'glm::pow(qua< T, Q > const &q, T y)'],['../a00330.html#ga465016030a81d513fa2fac881ebdaa83',1,'glm::pow(int x, uint y)'],['../a00330.html#ga998e5ee915d3769255519e2fbaa2bbf0',1,'glm::pow(uint x, uint y)']]], ['pow2',['pow2',['../a00347.html#ga19aaff3213bf23bdec3ef124ace237e9',1,'glm::gtx']]], ['pow3',['pow3',['../a00347.html#ga35689d03cd434d6ea819f1942d3bf82e',1,'glm::gtx']]], ['pow4',['pow4',['../a00347.html#gacef0968763026e180e53e735007dbf5a',1,'glm::gtx']]], ['poweroftwoabove',['powerOfTwoAbove',['../a00309.html#ga8cda2459871f574a0aecbe702ac93291',1,'glm::powerOfTwoAbove(genIUType Value)'],['../a00309.html#ga2bbded187c5febfefc1e524ba31b3fab',1,'glm::powerOfTwoAbove(vec< L, T, Q > const &value)']]], ['poweroftwobelow',['powerOfTwoBelow',['../a00309.html#ga3de7df63c589325101a2817a56f8e29d',1,'glm::powerOfTwoBelow(genIUType 
Value)'],['../a00309.html#gaf78ddcc4152c051b2a21e68fecb10980',1,'glm::powerOfTwoBelow(vec< L, T, Q > const &value)']]], ['poweroftwonearest',['powerOfTwoNearest',['../a00309.html#ga5f65973a5d2ea38c719e6a663149ead9',1,'glm::powerOfTwoNearest(genIUType Value)'],['../a00309.html#gac87e65d11e16c3d6b91c3bcfaef7da0b',1,'glm::powerOfTwoNearest(vec< L, T, Q > const &value)']]], ['prevmultiple',['prevMultiple',['../a00261.html#gada3bdd871ffe31f2d484aa668362f636',1,'glm::prevMultiple(genIUType v, genIUType Multiple)'],['../a00274.html#ga7b3915a7cd3d50ff4976ab7a75a6880a',1,'glm::prevMultiple(vec< L, T, Q > const &v, T Multiple)'],['../a00274.html#ga51e04379e8aebbf83e2e5ab094578ee9',1,'glm::prevMultiple(vec< L, T, Q > const &v, vec< L, T, Q > const &Multiple)']]], ['prevpoweroftwo',['prevPowerOfTwo',['../a00261.html#gab21902a0e7e5a8451a7ad80333618727',1,'glm::prevPowerOfTwo(genIUType v)'],['../a00274.html#ga759db73f14d79f63612bd2398b577e7a',1,'glm::prevPowerOfTwo(vec< L, T, Q > const &v)']]], ['proj',['proj',['../a00351.html#ga58384b7170801dd513de46f87c7fb00e',1,'glm']]], ['proj2d',['proj2D',['../a00363.html#ga5b992a0cdc8298054edb68e228f0d93e',1,'glm']]], ['proj3d',['proj3D',['../a00363.html#gaa2b7f4f15b98f697caede11bef50509e',1,'glm']]], ['project',['project',['../a00245.html#gaf36e96033f456659e6705472a06b6e11',1,'glm']]], ['projectno',['projectNO',['../a00245.html#ga05249751f48d14cb282e4979802b8111',1,'glm']]], ['projectzo',['projectZO',['../a00245.html#ga77d157525063dec83a557186873ee080',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_e.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_e.js ================================================ var searchData= [ ['qr_5fdecompose',['qr_decompose',['../a00336.html#gac62d7bfc8dc661e616620d70552cd566',1,'glm']]], ['quadraticeasein',['quadraticEaseIn',['../a00318.html#gaf42089d35855695132d217cd902304a0',1,'glm']]], ['quadraticeaseinout',['quadraticEaseInOut',['../a00318.html#ga03e8fc2d7945a4e63ee33b2159c14cea',1,'glm']]], ['quadraticeaseout',['quadraticEaseOut',['../a00318.html#ga283717bc2d937547ad34ec0472234ee3',1,'glm']]], ['quarter_5fpi',['quarter_pi',['../a00290.html#ga3c9df42bd73c519a995c43f0f99e77e0',1,'glm']]], ['quarticeasein',['quarticEaseIn',['../a00318.html#ga808b41f14514f47dad5dcc69eb924afd',1,'glm']]], ['quarticeaseinout',['quarticEaseInOut',['../a00318.html#ga6d000f852de12b197e154f234b20c505',1,'glm']]], ['quarticeaseout',['quarticEaseOut',['../a00318.html#ga4dfb33fa7664aa888eb647999d329b98',1,'glm']]], ['quat_5fcast',['quat_cast',['../a00299.html#ga1108a4ab88ca87bac321454eea7702f8',1,'glm::quat_cast(mat< 3, 3, T, Q > const &x)'],['../a00299.html#ga4524810f07f72e8c7bdc7764fa11cb58',1,'glm::quat_cast(mat< 4, 4, T, Q > const &x)']]], ['quat_5fidentity',['quat_identity',['../a00352.html#ga5ee8332600b2aca3a77622a28d857b55',1,'glm']]], ['quatlookat',['quatLookAt',['../a00299.html#gabe7fc5ec5feb41ab234d5d2b6254697f',1,'glm']]], ['quatlookatlh',['quatLookAtLH',['../a00299.html#ga2da350c73411be3bb19441b226b81a74',1,'glm']]], ['quatlookatrh',['quatLookAtRH',['../a00299.html#gaf6529ac8c04a57fcc35865b5c9437cc8',1,'glm']]], ['quinticeasein',['quinticEaseIn',['../a00318.html#ga097579d8e087dcf48037588140a21640',1,'glm']]], ['quinticeaseinout',['quinticEaseInOut',['../a00318.html#ga2a82d5c46df7e2d21cc0108eb7b83934',1,'glm']]], ['quinticeaseout',['quinticEaseOut',['../a00318.html#ga7dbd4d5c8da3f5353121f615e7b591d7',1,'glm']]] ]; 
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_f.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/functions_f.js ================================================ var searchData= [ ['radialgradient',['radialGradient',['../a00327.html#gaaecb1e93de4cbe0758b882812d4da294',1,'glm']]], ['radians',['radians',['../a00373.html#ga6e1db4862c5e25afd553930e2fdd6a68',1,'glm']]], ['reflect',['reflect',['../a00279.html#ga5631dd1d5618de5450b1ea3cf3e94905',1,'glm']]], ['refract',['refract',['../a00279.html#ga01da3dff9e2ef6b9d4915c3047e22b74',1,'glm']]], ['repeat',['repeat',['../a00369.html#ga809650c6310ea7c42666e918c117fb6f',1,'glm']]], ['rgb2ycocg',['rgb2YCoCg',['../a00313.html#ga0606353ec2a9b9eaa84f1b02ec391bc5',1,'glm']]], ['rgb2ycocgr',['rgb2YCoCgR',['../a00313.html#ga0389772e44ca0fd2ba4a79bdd8efe898',1,'glm']]], ['rgbcolor',['rgbColor',['../a00312.html#ga5f9193be46f45f0655c05a0cdca006db',1,'glm']]], ['righthanded',['rightHanded',['../a00328.html#ga99386a5ab5491871b947076e21699cc8',1,'glm']]], ['roll',['roll',['../a00299.html#ga0cc5ad970d0b00829b139fe0fe5a1e13',1,'glm']]], ['root_5ffive',['root_five',['../a00290.html#gae9ebbded75b53d4faeb1e4ef8b3347a2',1,'glm']]], ['root_5fhalf_5fpi',['root_half_pi',['../a00290.html#ga4e276cb823cc5e612d4f89ed99c75039',1,'glm']]], ['root_5fln_5ffour',['root_ln_four',['../a00290.html#ga4129412e96b33707a77c1a07652e23e2',1,'glm']]], ['root_5fpi',['root_pi',['../a00290.html#ga261380796b2cd496f68d2cf1d08b8eb9',1,'glm']]], ['root_5fthree',['root_three',['../a00290.html#ga4f286be4abe88be1eed7d2a9f6cb193e',1,'glm']]], ['root_5ftwo',['root_two',['../a00290.html#ga74e607d29020f100c0d0dc46ce2ca950',1,'glm']]], ['root_5ftwo_5fpi',['root_two_pi',['../a00290.html#ga2bcedc575039fe0cd765742f8bbb0bd3',1,'glm']]], ['rotate',['rotate',['../a00247.html#gaee9e865eaa9776370996da2940873fd4',1,'glm::rotate(mat< 4, 4, T, Q > const &m, T angle, vec< 3, T, Q > const 
&axis)'],['../a00256.html#gabfc57de6d4d2e11970f54119c5ccf0f5',1,'glm::rotate(qua< T, Q > const &q, T const &angle, vec< 3, T, Q > const &axis)'],['../a00341.html#gad5c84a4932a758f385a87098ce1b1660',1,'glm::rotate(mat< 3, 3, T, Q > const &m, T angle)'],['../a00352.html#ga07da6ef58646442efe93b0c273d73776',1,'glm::rotate(qua< T, Q > const &q, vec< 3, T, Q > const &v)'],['../a00352.html#gafcb78dfff45fbf19a7fcb2bd03fbf196',1,'glm::rotate(qua< T, Q > const &q, vec< 4, T, Q > const &v)'],['../a00356.html#gab64a67b52ff4f86c3ba16595a5a25af6',1,'glm::rotate(vec< 2, T, Q > const &v, T const &angle)'],['../a00356.html#ga1ba501ef83d1a009a17ac774cc560f21',1,'glm::rotate(vec< 3, T, Q > const &v, T const &angle, vec< 3, T, Q > const &normal)'],['../a00356.html#ga1005f1267ed9c57faa3f24cf6873b961',1,'glm::rotate(vec< 4, T, Q > const &v, T const &angle, vec< 3, T, Q > const &normal)'],['../a00362.html#gaf599be4c0e9d99be1f9cddba79b6018b',1,'glm::rotate(T angle, vec< 3, T, Q > const &v)']]], ['rotatenormalizedaxis',['rotateNormalizedAxis',['../a00355.html#ga50efd7ebca0f7a603bb3cc11e34c708d',1,'glm::rotateNormalizedAxis(mat< 4, 4, T, Q > const &m, T const &angle, vec< 3, T, Q > const &axis)'],['../a00355.html#ga08f9c5411437d528019a25bfc01473d1',1,'glm::rotateNormalizedAxis(qua< T, Q > const &q, T const &angle, vec< 3, T, Q > const &axis)']]], ['rotatex',['rotateX',['../a00356.html#ga059fdbdba4cca35cdff172a9d0d0afc9',1,'glm::rotateX(vec< 3, T, Q > const &v, T const &angle)'],['../a00356.html#ga4333b1ea8ebf1bd52bc3801a7617398a',1,'glm::rotateX(vec< 4, T, Q > const &v, T const &angle)']]], ['rotatey',['rotateY',['../a00356.html#gaebdc8b054ace27d9f62e054531c6f44d',1,'glm::rotateY(vec< 3, T, Q > const &v, T const &angle)'],['../a00356.html#ga3ce3db0867b7f8efd878ee34f95a623b',1,'glm::rotateY(vec< 4, T, Q > const &v, T const &angle)']]], ['rotatez',['rotateZ',['../a00356.html#ga5a048838a03f6249acbacb4dbacf79c4',1,'glm::rotateZ(vec< 3, T, Q > const &v, T const 
&angle)'],['../a00356.html#ga923b75c6448161053768822d880702e6',1,'glm::rotateZ(vec< 4, T, Q > const &v, T const &angle)']]], ['rotation',['rotation',['../a00352.html#ga03e61282831cc3f52cc76f72f52ad2c5',1,'glm']]], ['round',['round',['../a00241.html#gafa03aca8c4713e1cc892aa92ca135a7e',1,'glm']]], ['roundeven',['roundEven',['../a00241.html#ga76b81785045a057989a84d99aeeb1578',1,'glm']]], ['roundmultiple',['roundMultiple',['../a00302.html#gab892defcc9c0b0618df7251253dc0fbb',1,'glm::roundMultiple(genType v, genType Multiple)'],['../a00302.html#ga2f1a68332d761804c054460a612e3a4b',1,'glm::roundMultiple(vec< L, T, Q > const &v, vec< L, T, Q > const &Multiple)']]], ['roundpoweroftwo',['roundPowerOfTwo',['../a00302.html#gae4e1bf5d1cd179f59261a7342bdcafca',1,'glm::roundPowerOfTwo(genIUType v)'],['../a00302.html#ga258802a7d55c03c918f28cf4d241c4d0',1,'glm::roundPowerOfTwo(vec< L, T, Q > const &v)']]], ['row',['row',['../a00293.html#ga259e5ebd0f31ec3f83440f8cae7f5dba',1,'glm::row(genType const &m, length_t index)'],['../a00293.html#gaadcc64829aadf4103477679e48c7594f',1,'glm::row(genType const &m, length_t index, typename genType::row_type const &x)']]], ['rowmajor2',['rowMajor2',['../a00338.html#gaf5b1aee9e3eb1acf9d6c3c8be1e73bb8',1,'glm::rowMajor2(vec< 2, T, Q > const &v1, vec< 2, T, Q > const &v2)'],['../a00338.html#gaf66c75ed69ca9e87462550708c2c6726',1,'glm::rowMajor2(mat< 2, 2, T, Q > const &m)']]], ['rowmajor3',['rowMajor3',['../a00338.html#ga2ae46497493339f745754e40f438442e',1,'glm::rowMajor3(vec< 3, T, Q > const &v1, vec< 3, T, Q > const &v2, vec< 3, T, Q > const &v3)'],['../a00338.html#gad8a3a50ab47bbe8d36cdb81d90dfcf77',1,'glm::rowMajor3(mat< 3, 3, T, Q > const &m)']]], ['rowmajor4',['rowMajor4',['../a00338.html#ga9636cd6bbe2c32a8d0c03ffb8b1ef284',1,'glm::rowMajor4(vec< 4, T, Q > const &v1, vec< 4, T, Q > const &v2, vec< 4, T, Q > const &v3, vec< 4, T, Q > const &v4)'],['../a00338.html#gac92ad1c2acdf18d3eb7be45a32f9566b',1,'glm::rowMajor4(mat< 4, 4, T, Q > const 
&m)']]], ['rq_5fdecompose',['rq_decompose',['../a00336.html#ga82874e2ebe891ba35ac21d9993873758',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/groups_0.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/groups_0.js ================================================ var searchData= [ ['angle_20and_20trigonometry_20functions',['Angle and Trigonometry Functions',['../a00373.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/groups_1.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/groups_1.js ================================================ var searchData= [ ['core_20features',['Core features',['../a00280.html',1,'']]], ['common_20functions',['Common functions',['../a00241.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/groups_2.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/groups_2.js ================================================ var searchData= [ ['exponential_20functions',['Exponential functions',['../a00242.html',1,'']]], ['experimental_20extensions',['Experimental extensions',['../a00287.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/groups_3.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/groups_3.js ================================================ var searchData= [ ['floating_2dpoint_20pack_20and_20unpack_20functions',['Floating-Point Pack and Unpack Functions',['../a00372.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/groups_4.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/groups_4.js ================================================ var searchData= [ ['geometric_20functions',['Geometric functions',['../a00279.html',1,'']]], ['glm_5fext_5fmatrix_5fclip_5fspace',['GLM_EXT_matrix_clip_space',['../a00243.html',1,'']]], ['glm_5fext_5fmatrix_5fcommon',['GLM_EXT_matrix_common',['../a00244.html',1,'']]], ['glm_5fext_5fmatrix_5fprojection',['GLM_EXT_matrix_projection',['../a00245.html',1,'']]], ['glm_5fext_5fmatrix_5frelational',['GLM_EXT_matrix_relational',['../a00246.html',1,'']]], ['glm_5fext_5fmatrix_5ftransform',['GLM_EXT_matrix_transform',['../a00247.html',1,'']]], ['glm_5fext_5fquaternion_5fcommon',['GLM_EXT_quaternion_common',['../a00248.html',1,'']]], ['glm_5fext_5fquaternion_5fdouble',['GLM_EXT_quaternion_double',['../a00249.html',1,'']]], ['glm_5fext_5fquaternion_5fdouble_5fprecision',['GLM_EXT_quaternion_double_precision',['../a00250.html',1,'']]], ['glm_5fext_5fquaternion_5fexponential',['GLM_EXT_quaternion_exponential',['../a00251.html',1,'']]], ['glm_5fext_5fquaternion_5ffloat',['GLM_EXT_quaternion_float',['../a00252.html',1,'']]], ['glm_5fext_5fquaternion_5ffloat_5fprecision',['GLM_EXT_quaternion_float_precision',['../a00253.html',1,'']]], ['glm_5fext_5fquaternion_5fgeometric',['GLM_EXT_quaternion_geometric',['../a00254.html',1,'']]], ['glm_5fext_5fquaternion_5frelational',['GLM_EXT_quaternion_relational',['../a00255.html',1,'']]], ['glm_5fext_5fquaternion_5ftransform',['GLM_EXT_quaternion_transform',['../a00256.html',1,'']]], ['glm_5fext_5fquaternion_5ftrigonometric',['GLM_EXT_quaternion_trigonometric',['../a00257.html',1,'']]], ['glm_5fext_5fscalar_5fcommon',['GLM_EXT_scalar_common',['../a00258.html',1,'']]], ['glm_5fext_5fscalar_5fconstants',['GLM_EXT_scalar_constants',['../a00259.html',1,'']]], ['glm_5fext_5fscalar_5fint_5fsized',['GLM_EXT_scalar_int_sized',['../a00260.html',1,'']]], 
['glm_5fext_5fscalar_5finteger',['GLM_EXT_scalar_integer',['../a00261.html',1,'']]], ['glm_5fext_5fscalar_5frelational',['GLM_EXT_scalar_relational',['../a00262.html',1,'']]], ['glm_5fext_5fscalar_5fuint_5fsized',['GLM_EXT_scalar_uint_sized',['../a00263.html',1,'']]], ['glm_5fext_5fscalar_5fulp',['GLM_EXT_scalar_ulp',['../a00264.html',1,'']]], ['glm_5fext_5fvector_5fbool1',['GLM_EXT_vector_bool1',['../a00265.html',1,'']]], ['glm_5fext_5fvector_5fbool1_5fprecision',['GLM_EXT_vector_bool1_precision',['../a00266.html',1,'']]], ['glm_5fext_5fvector_5fcommon',['GLM_EXT_vector_common',['../a00267.html',1,'']]], ['glm_5fext_5fvector_5fdouble1',['GLM_EXT_vector_double1',['../a00268.html',1,'']]], ['glm_5fext_5fvector_5fdouble1_5fprecision',['GLM_EXT_vector_double1_precision',['../a00269.html',1,'']]], ['glm_5fext_5fvector_5ffloat1',['GLM_EXT_vector_float1',['../a00270.html',1,'']]], ['glm_5fext_5fvector_5ffloat1_5fprecision',['GLM_EXT_vector_float1_precision',['../a00271.html',1,'']]], ['glm_5fext_5fvector_5fint1',['GLM_EXT_vector_int1',['../a00272.html',1,'']]], ['glm_5fext_5fvector_5fint1_5fprecision',['GLM_EXT_vector_int1_precision',['../a00273.html',1,'']]], ['glm_5fext_5fvector_5finteger',['GLM_EXT_vector_integer',['../a00274.html',1,'']]], ['glm_5fext_5fvector_5frelational',['GLM_EXT_vector_relational',['../a00275.html',1,'']]], ['glm_5fext_5fvector_5fuint1',['GLM_EXT_vector_uint1',['../a00276.html',1,'']]], ['glm_5fext_5fvector_5fuint1_5fprecision',['GLM_EXT_vector_uint1_precision',['../a00277.html',1,'']]], ['glm_5fext_5fvector_5fulp',['GLM_EXT_vector_ulp',['../a00278.html',1,'']]], ['glm_5fgtc_5fbitfield',['GLM_GTC_bitfield',['../a00288.html',1,'']]], ['glm_5fgtc_5fcolor_5fspace',['GLM_GTC_color_space',['../a00289.html',1,'']]], ['glm_5fgtc_5fconstants',['GLM_GTC_constants',['../a00290.html',1,'']]], ['glm_5fgtc_5fepsilon',['GLM_GTC_epsilon',['../a00291.html',1,'']]], ['glm_5fgtc_5finteger',['GLM_GTC_integer',['../a00292.html',1,'']]], 
['glm_5fgtc_5fmatrix_5faccess',['GLM_GTC_matrix_access',['../a00293.html',1,'']]], ['glm_5fgtc_5fmatrix_5finteger',['GLM_GTC_matrix_integer',['../a00294.html',1,'']]], ['glm_5fgtc_5fmatrix_5finverse',['GLM_GTC_matrix_inverse',['../a00295.html',1,'']]], ['glm_5fgtc_5fmatrix_5ftransform',['GLM_GTC_matrix_transform',['../a00296.html',1,'']]], ['glm_5fgtc_5fnoise',['GLM_GTC_noise',['../a00297.html',1,'']]], ['glm_5fgtc_5fpacking',['GLM_GTC_packing',['../a00298.html',1,'']]], ['glm_5fgtc_5fquaternion',['GLM_GTC_quaternion',['../a00299.html',1,'']]], ['glm_5fgtc_5frandom',['GLM_GTC_random',['../a00300.html',1,'']]], ['glm_5fgtc_5freciprocal',['GLM_GTC_reciprocal',['../a00301.html',1,'']]], ['glm_5fgtc_5fround',['GLM_GTC_round',['../a00302.html',1,'']]], ['glm_5fgtc_5ftype_5faligned',['GLM_GTC_type_aligned',['../a00303.html',1,'']]], ['glm_5fgtc_5ftype_5fprecision',['GLM_GTC_type_precision',['../a00304.html',1,'']]], ['glm_5fgtc_5ftype_5fptr',['GLM_GTC_type_ptr',['../a00305.html',1,'']]], ['glm_5fgtc_5fulp',['GLM_GTC_ulp',['../a00306.html',1,'']]], ['glm_5fgtc_5fvec1',['GLM_GTC_vec1',['../a00307.html',1,'']]], ['glm_5fgtx_5fassociated_5fmin_5fmax',['GLM_GTX_associated_min_max',['../a00308.html',1,'']]], ['glm_5fgtx_5fbit',['GLM_GTX_bit',['../a00309.html',1,'']]], ['glm_5fgtx_5fclosest_5fpoint',['GLM_GTX_closest_point',['../a00310.html',1,'']]], ['glm_5fgtx_5fcolor_5fencoding',['GLM_GTX_color_encoding',['../a00311.html',1,'']]], ['glm_5fgtx_5fcolor_5fspace',['GLM_GTX_color_space',['../a00312.html',1,'']]], ['glm_5fgtx_5fcolor_5fspace_5fycocg',['GLM_GTX_color_space_YCoCg',['../a00313.html',1,'']]], ['glm_5fgtx_5fcommon',['GLM_GTX_common',['../a00314.html',1,'']]], ['glm_5fgtx_5fcompatibility',['GLM_GTX_compatibility',['../a00315.html',1,'']]], ['glm_5fgtx_5fcomponent_5fwise',['GLM_GTX_component_wise',['../a00316.html',1,'']]], ['glm_5fgtx_5fdual_5fquaternion',['GLM_GTX_dual_quaternion',['../a00317.html',1,'']]], 
['glm_5fgtx_5feasing',['GLM_GTX_easing',['../a00318.html',1,'']]], ['glm_5fgtx_5feuler_5fangles',['GLM_GTX_euler_angles',['../a00319.html',1,'']]], ['glm_5fgtx_5fextend',['GLM_GTX_extend',['../a00320.html',1,'']]], ['glm_5fgtx_5fextented_5fmin_5fmax',['GLM_GTX_extented_min_max',['../a00321.html',1,'']]], ['glm_5fgtx_5fexterior_5fproduct',['GLM_GTX_exterior_product',['../a00322.html',1,'']]], ['glm_5fgtx_5ffast_5fexponential',['GLM_GTX_fast_exponential',['../a00323.html',1,'']]], ['glm_5fgtx_5ffast_5fsquare_5froot',['GLM_GTX_fast_square_root',['../a00324.html',1,'']]], ['glm_5fgtx_5ffast_5ftrigonometry',['GLM_GTX_fast_trigonometry',['../a00325.html',1,'']]], ['glm_5fgtx_5ffunctions',['GLM_GTX_functions',['../a00326.html',1,'']]], ['glm_5fgtx_5fgradient_5fpaint',['GLM_GTX_gradient_paint',['../a00327.html',1,'']]], ['glm_5fgtx_5fhanded_5fcoordinate_5fspace',['GLM_GTX_handed_coordinate_space',['../a00328.html',1,'']]], ['glm_5fgtx_5fhash',['GLM_GTX_hash',['../a00329.html',1,'']]], ['glm_5fgtx_5finteger',['GLM_GTX_integer',['../a00330.html',1,'']]], ['glm_5fgtx_5fintersect',['GLM_GTX_intersect',['../a00331.html',1,'']]], ['glm_5fgtx_5fio',['GLM_GTX_io',['../a00332.html',1,'']]], ['glm_5fgtx_5flog_5fbase',['GLM_GTX_log_base',['../a00333.html',1,'']]], ['glm_5fgtx_5fmatrix_5fcross_5fproduct',['GLM_GTX_matrix_cross_product',['../a00334.html',1,'']]], ['glm_5fgtx_5fmatrix_5fdecompose',['GLM_GTX_matrix_decompose',['../a00335.html',1,'']]], ['glm_5fgtx_5fmatrix_5ffactorisation',['GLM_GTX_matrix_factorisation',['../a00336.html',1,'']]], ['glm_5fgtx_5fmatrix_5finterpolation',['GLM_GTX_matrix_interpolation',['../a00337.html',1,'']]], ['glm_5fgtx_5fmatrix_5fmajor_5fstorage',['GLM_GTX_matrix_major_storage',['../a00338.html',1,'']]], ['glm_5fgtx_5fmatrix_5foperation',['GLM_GTX_matrix_operation',['../a00339.html',1,'']]], ['glm_5fgtx_5fmatrix_5fquery',['GLM_GTX_matrix_query',['../a00340.html',1,'']]], 
['glm_5fgtx_5fmatrix_5ftransform_5f2d',['GLM_GTX_matrix_transform_2d',['../a00341.html',1,'']]], ['glm_5fgtx_5fmixed_5fproducte',['GLM_GTX_mixed_producte',['../a00342.html',1,'']]], ['glm_5fgtx_5fnorm',['GLM_GTX_norm',['../a00343.html',1,'']]], ['glm_5fgtx_5fnormal',['GLM_GTX_normal',['../a00344.html',1,'']]], ['glm_5fgtx_5fnormalize_5fdot',['GLM_GTX_normalize_dot',['../a00345.html',1,'']]], ['glm_5fgtx_5fnumber_5fprecision',['GLM_GTX_number_precision',['../a00346.html',1,'']]], ['glm_5fgtx_5foptimum_5fpow',['GLM_GTX_optimum_pow',['../a00347.html',1,'']]], ['glm_5fgtx_5forthonormalize',['GLM_GTX_orthonormalize',['../a00348.html',1,'']]], ['glm_5fgtx_5fperpendicular',['GLM_GTX_perpendicular',['../a00349.html',1,'']]], ['glm_5fgtx_5fpolar_5fcoordinates',['GLM_GTX_polar_coordinates',['../a00350.html',1,'']]], ['glm_5fgtx_5fprojection',['GLM_GTX_projection',['../a00351.html',1,'']]], ['glm_5fgtx_5fquaternion',['GLM_GTX_quaternion',['../a00352.html',1,'']]], ['glm_5fgtx_5frange',['GLM_GTX_range',['../a00353.html',1,'']]], ['glm_5fgtx_5fraw_5fdata',['GLM_GTX_raw_data',['../a00354.html',1,'']]], ['glm_5fgtx_5frotate_5fnormalized_5faxis',['GLM_GTX_rotate_normalized_axis',['../a00355.html',1,'']]], ['glm_5fgtx_5frotate_5fvector',['GLM_GTX_rotate_vector',['../a00356.html',1,'']]], ['glm_5fgtx_5fscalar_5frelational',['GLM_GTX_scalar_relational',['../a00357.html',1,'']]], ['glm_5fgtx_5fspline',['GLM_GTX_spline',['../a00358.html',1,'']]], ['glm_5fgtx_5fstd_5fbased_5ftype',['GLM_GTX_std_based_type',['../a00359.html',1,'']]], ['glm_5fgtx_5fstring_5fcast',['GLM_GTX_string_cast',['../a00360.html',1,'']]], ['glm_5fgtx_5ftexture',['GLM_GTX_texture',['../a00361.html',1,'']]], ['glm_5fgtx_5ftransform',['GLM_GTX_transform',['../a00362.html',1,'']]], ['glm_5fgtx_5ftransform2',['GLM_GTX_transform2',['../a00363.html',1,'']]], ['glm_5fgtx_5ftype_5faligned',['GLM_GTX_type_aligned',['../a00364.html',1,'']]], ['glm_5fgtx_5ftype_5ftrait',['GLM_GTX_type_trait',['../a00365.html',1,'']]], 
['glm_5fgtx_5fvec_5fswizzle',['GLM_GTX_vec_swizzle',['../a00366.html',1,'']]], ['glm_5fgtx_5fvector_5fangle',['GLM_GTX_vector_angle',['../a00367.html',1,'']]], ['glm_5fgtx_5fvector_5fquery',['GLM_GTX_vector_query',['../a00368.html',1,'']]], ['glm_5fgtx_5fwrap',['GLM_GTX_wrap',['../a00369.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/groups_5.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/groups_5.js ================================================ var searchData= [ ['integer_20functions',['Integer functions',['../a00370.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/groups_6.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/groups_6.js ================================================ var searchData= [ ['matrix_20functions',['Matrix functions',['../a00371.html',1,'']]], ['matrix_20types',['Matrix types',['../a00283.html',1,'']]], ['matrix_20types_20with_20precision_20qualifiers',['Matrix types with precision qualifiers',['../a00284.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/groups_7.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/groups_7.js ================================================ var searchData= [ ['recommended_20extensions',['Recommended extensions',['../a00286.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/groups_8.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/groups_8.js ================================================ var searchData= [ ['stable_20extensions',['Stable extensions',['../a00285.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/groups_9.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/groups_9.js ================================================ var searchData= [ ['vector_20relational_20functions',['Vector Relational Functions',['../a00374.html',1,'']]], ['vector_20types',['Vector types',['../a00281.html',1,'']]], ['vector_20types_20with_20precision_20qualifiers',['Vector types with precision qualifiers',['../a00282.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/nomatches.html ================================================
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/pages_0.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/pages_0.js ================================================ var searchData= [ ['opengl_20mathematics_20_28glm_29',['OpenGL Mathematics (GLM)',['../index.html',1,'']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/search.css ================================================ /*---------------- Search Box */ #FSearchBox { float: left; } #MSearchBox { white-space : nowrap; position: absolute; float: none; display: inline; margin-top: 8px; right: 0px; width: 170px; z-index: 102; background-color: white; } #MSearchBox .left { display:block; position:absolute; left:10px; width:20px; height:19px; background:url('search_l.png') no-repeat; background-position:right; } #MSearchSelect { display:block; position:absolute; width:20px; height:19px; } .left #MSearchSelect { left:4px; } .right #MSearchSelect { right:5px; } #MSearchField { display:block; position:absolute; height:19px; background:url('search_m.png') repeat-x; border:none; width:111px; margin-left:20px; padding-left:4px; color: #909090; outline: none; font: 9pt Arial, Verdana, sans-serif; } #FSearchBox #MSearchField { margin-left:15px; } #MSearchBox .right { display:block; position:absolute; right:10px; top:0px; width:20px; height:19px; background:url('search_r.png') no-repeat; background-position:left; } #MSearchClose { display: none; position: absolute; top: 4px; background : none; border: none; margin: 0px 4px 0px 0px; padding: 0px 0px; outline: none; } .left #MSearchClose { left: 6px; } .right #MSearchClose { right: 2px; } .MSearchBoxActive #MSearchField { color: #000000; } /*---------------- Search filter selection */ #MSearchSelectWindow { display: none; position: absolute; left: 0; top: 0; border: 1px solid #90A5CE; background-color: #F9FAFC; z-index: 1; padding-top: 4px; 
padding-bottom: 4px; -moz-border-radius: 4px; -webkit-border-top-left-radius: 4px; -webkit-border-top-right-radius: 4px; -webkit-border-bottom-left-radius: 4px; -webkit-border-bottom-right-radius: 4px; -webkit-box-shadow: 5px 5px 5px rgba(0, 0, 0, 0.15); } .SelectItem { font: 8pt Arial, Verdana, sans-serif; padding-left: 2px; padding-right: 12px; border: 0px; } span.SelectionMark { margin-right: 4px; font-family: monospace; outline-style: none; text-decoration: none; } a.SelectItem { display: block; outline-style: none; color: #000000; text-decoration: none; padding-left: 6px; padding-right: 12px; } a.SelectItem:focus, a.SelectItem:active { color: #000000; outline-style: none; text-decoration: none; } a.SelectItem:hover { color: #FFFFFF; background-color: #3D578C; outline-style: none; text-decoration: none; cursor: pointer; display: block; } /*---------------- Search results window */ iframe#MSearchResults { width: 60ex; height: 15em; } #MSearchResultsWindow { display: none; position: absolute; left: 0; top: 0; border: 1px solid #000; background-color: #EEF1F7; } /* ----------------------------------- */ #SRIndex { clear:both; padding-bottom: 15px; } .SREntry { font-size: 10pt; padding-left: 1ex; } .SRPage .SREntry { font-size: 8pt; padding: 1px 5px; } body.SRPage { margin: 5px 2px; } .SRChildren { padding-left: 3ex; padding-bottom: .5em } .SRPage .SRChildren { display: none; } .SRSymbol { font-weight: bold; color: #425E97; font-family: Arial, Verdana, sans-serif; text-decoration: none; outline: none; } a.SRScope { display: block; color: #425E97; font-family: Arial, Verdana, sans-serif; text-decoration: none; outline: none; } a.SRSymbol:focus, a.SRSymbol:active, a.SRScope:focus, a.SRScope:active { text-decoration: underline; } span.SRScope { padding-left: 4px; } .SRPage .SRStatus { padding: 2px 5px; font-size: 8pt; font-style: italic; } .SRResult { display: none; } DIV.searchresults { margin-left: 10px; margin-right: 10px; } /*---------------- External search page 
results */ .searchresult { background-color: #F0F3F8; } .pages b { color: white; padding: 5px 5px 3px 5px; background-image: url("../tab_a.png"); background-repeat: repeat-x; text-shadow: 0 1px 1px #000000; } .pages { line-height: 17px; margin-left: 4px; text-decoration: none; } .hl { font-weight: bold; } #searchresults { margin-bottom: 20px; } .searchpages { margin-top: 10px; } ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/search.js ================================================ function convertToId(search) { var result = ''; for (i=0;i do a search { this.Search(); } } this.OnSearchSelectKey = function(evt) { var e = (evt) ? evt : window.event; // for IE if (e.keyCode==40 && this.searchIndex0) // Up { this.searchIndex--; this.OnSelectItem(this.searchIndex); } else if (e.keyCode==13 || e.keyCode==27) { this.OnSelectItem(this.searchIndex); this.CloseSelectionWindow(); this.DOMSearchField().focus(); } return false; } // --------- Actions // Closes the results window. this.CloseResultsWindow = function() { this.DOMPopupSearchResultsWindow().style.display = 'none'; this.DOMSearchClose().style.display = 'none'; this.Activate(false); } this.CloseSelectionWindow = function() { this.DOMSearchSelectWindow().style.display = 'none'; } // Performs a search. 
this.Search = function() { this.keyTimeout = 0; // strip leading whitespace var searchValue = this.DOMSearchField().value.replace(/^ +/, ""); var code = searchValue.toLowerCase().charCodeAt(0); var idxChar = searchValue.substr(0, 1).toLowerCase(); if ( 0xD800 <= code && code <= 0xDBFF && searchValue > 1) // surrogate pair { idxChar = searchValue.substr(0, 2); } var resultsPage; var resultsPageWithSearch; var hasResultsPage; var idx = indexSectionsWithContent[this.searchIndex].indexOf(idxChar); if (idx!=-1) { var hexCode=idx.toString(16); resultsPage = this.resultsPath + '/' + indexSectionNames[this.searchIndex] + '_' + hexCode + '.html'; resultsPageWithSearch = resultsPage+'?'+escape(searchValue); hasResultsPage = true; } else // nothing available for this search term { resultsPage = this.resultsPath + '/nomatches.html'; resultsPageWithSearch = resultsPage; hasResultsPage = false; } window.frames.MSearchResults.location = resultsPageWithSearch; var domPopupSearchResultsWindow = this.DOMPopupSearchResultsWindow(); if (domPopupSearchResultsWindow.style.display!='block') { var domSearchBox = this.DOMSearchBox(); this.DOMSearchClose().style.display = 'inline'; if (this.insideFrame) { var domPopupSearchResults = this.DOMPopupSearchResults(); domPopupSearchResultsWindow.style.position = 'relative'; domPopupSearchResultsWindow.style.display = 'block'; var width = document.body.clientWidth - 8; // the -8 is for IE :-( domPopupSearchResultsWindow.style.width = width + 'px'; domPopupSearchResults.style.width = width + 'px'; } else { var domPopupSearchResults = this.DOMPopupSearchResults(); var left = getXPos(domSearchBox) + 150; // domSearchBox.offsetWidth; var top = getYPos(domSearchBox) + 20; // domSearchBox.offsetHeight + 1; domPopupSearchResultsWindow.style.display = 'block'; left -= domPopupSearchResults.offsetWidth; domPopupSearchResultsWindow.style.top = top + 'px'; domPopupSearchResultsWindow.style.left = left + 'px'; } } this.lastSearchValue = searchValue; 
this.lastResultsPage = resultsPage; } // -------- Activation Functions // Activates or deactivates the search panel, resetting things to // their default values if necessary. this.Activate = function(isActive) { if (isActive || // open it this.DOMPopupSearchResultsWindow().style.display == 'block' ) { this.DOMSearchBox().className = 'MSearchBoxActive'; var searchField = this.DOMSearchField(); if (searchField.value == this.searchLabel) // clear "Search" term upon entry { searchField.value = ''; this.searchActive = true; } } else if (!isActive) // directly remove the panel { this.DOMSearchBox().className = 'MSearchBoxInactive'; this.DOMSearchField().value = this.searchLabel; this.searchActive = false; this.lastSearchValue = '' this.lastResultsPage = ''; } } } // ----------------------------------------------------------------------- // The class that handles everything on the search results page. function SearchResults(name) { // The number of matches from the last run of . this.lastMatchCount = 0; this.lastKey = 0; this.repeatOn = false; // Toggles the visibility of the passed element ID. this.FindChildElement = function(id) { var parentElement = document.getElementById(id); var element = parentElement.firstChild; while (element && element!=parentElement) { if (element.nodeName == 'DIV' && element.className == 'SRChildren') { return element; } if (element.nodeName == 'DIV' && element.hasChildNodes()) { element = element.firstChild; } else if (element.nextSibling) { element = element.nextSibling; } else { do { element = element.parentNode; } while (element && element!=parentElement && !element.nextSibling); if (element && element!=parentElement) { element = element.nextSibling; } } } } this.Toggle = function(id) { var element = this.FindChildElement(id); if (element) { if (element.style.display == 'block') { element.style.display = 'none'; } else { element.style.display = 'block'; } } } // Searches for the passed string. 
If there is no parameter, // it takes it from the URL query. // // Always returns true, since other documents may try to call it // and that may or may not be possible. this.Search = function(search) { if (!search) // get search word from URL { search = window.location.search; search = search.substring(1); // Remove the leading '?' search = unescape(search); } search = search.replace(/^ +/, ""); // strip leading spaces search = search.replace(/ +$/, ""); // strip trailing spaces search = search.toLowerCase(); search = convertToId(search); var resultRows = document.getElementsByTagName("div"); var matches = 0; var i = 0; while (i < resultRows.length) { var row = resultRows.item(i); if (row.className == "SRResult") { var rowMatchName = row.id.toLowerCase(); rowMatchName = rowMatchName.replace(/^sr\d*_/, ''); // strip 'sr123_' if (search.length<=rowMatchName.length && rowMatchName.substr(0, search.length)==search) { row.style.display = 'block'; matches++; } else { row.style.display = 'none'; } } i++; } document.getElementById("Searching").style.display='none'; if (matches == 0) // no results { document.getElementById("NoMatches").style.display='block'; } else // at least one result { document.getElementById("NoMatches").style.display='none'; } this.lastMatchCount = matches; return true; } // return the first item with index index or higher that is visible this.NavNext = function(index) { var focusItem; while (1) { var focusName = 'Item'+index; focusItem = document.getElementById(focusName); if (focusItem && focusItem.parentNode.parentNode.style.display=='block') { break; } else if (!focusItem) // last element { break; } focusItem=null; index++; } return focusItem; } this.NavPrev = function(index) { var focusItem; while (1) { var focusName = 'Item'+index; focusItem = document.getElementById(focusName); if (focusItem && focusItem.parentNode.parentNode.style.display=='block') { break; } else if (!focusItem) // last element { break; } focusItem=null; index--; } return 
focusItem; } this.ProcessKeys = function(e) { if (e.type == "keydown") { this.repeatOn = false; this.lastKey = e.keyCode; } else if (e.type == "keypress") { if (!this.repeatOn) { if (this.lastKey) this.repeatOn = true; return false; // ignore first keypress after keydown } } else if (e.type == "keyup") { this.lastKey = 0; this.repeatOn = false; } return this.lastKey!=0; } this.Nav = function(evt,itemIndex) { var e = (evt) ? evt : window.event; // for IE if (e.keyCode==13) return true; if (!this.ProcessKeys(e)) return false; if (this.lastKey==38) // Up { var newIndex = itemIndex-1; var focusItem = this.NavPrev(newIndex); if (focusItem) { var child = this.FindChildElement(focusItem.parentNode.parentNode.id); if (child && child.style.display == 'block') // children visible { var n=0; var tmpElem; while (1) // search for last child { tmpElem = document.getElementById('Item'+newIndex+'_c'+n); if (tmpElem) { focusItem = tmpElem; } else // found it! { break; } n++; } } } if (focusItem) { focusItem.focus(); } else // return focus to search field { parent.document.getElementById("MSearchField").focus(); } } else if (this.lastKey==40) // Down { var newIndex = itemIndex+1; var focusItem; var item = document.getElementById('Item'+itemIndex); var elem = this.FindChildElement(item.parentNode.parentNode.id); if (elem && elem.style.display == 'block') // children visible { focusItem = document.getElementById('Item'+itemIndex+'_c0'); } if (!focusItem) focusItem = this.NavNext(newIndex); if (focusItem) focusItem.focus(); } else if (this.lastKey==39) // Right { var item = document.getElementById('Item'+itemIndex); var elem = this.FindChildElement(item.parentNode.parentNode.id); if (elem) elem.style.display = 'block'; } else if (this.lastKey==37) // Left { var item = document.getElementById('Item'+itemIndex); var elem = this.FindChildElement(item.parentNode.parentNode.id); if (elem) elem.style.display = 'none'; } else if (this.lastKey==27) // Escape { 
parent.searchBox.CloseResultsWindow(); parent.document.getElementById("MSearchField").focus(); } else if (this.lastKey==13) // Enter { return true; } return false; } this.NavChild = function(evt,itemIndex,childIndex) { var e = (evt) ? evt : window.event; // for IE if (e.keyCode==13) return true; if (!this.ProcessKeys(e)) return false; if (this.lastKey==38) // Up { if (childIndex>0) { var newIndex = childIndex-1; document.getElementById('Item'+itemIndex+'_c'+newIndex).focus(); } else // already at first child, jump to parent { document.getElementById('Item'+itemIndex).focus(); } } else if (this.lastKey==40) // Down { var newIndex = childIndex+1; var elem = document.getElementById('Item'+itemIndex+'_c'+newIndex); if (!elem) // last child, jump to parent next parent { elem = this.NavNext(itemIndex+1); } if (elem) { elem.focus(); } } else if (this.lastKey==27) // Escape { parent.searchBox.CloseResultsWindow(); parent.document.getElementById("MSearchField").focus(); } else if (this.lastKey==13) // Enter { return true; } return false; } } function setKeyActions(elem,action) { elem.setAttribute('onkeydown',action); elem.setAttribute('onkeypress',action); elem.setAttribute('onkeyup',action); } function setClassAttr(elem,attr) { elem.setAttribute('class',attr); elem.setAttribute('className',attr); } function createResults() { var results = document.getElementById("SRResults"); for (var e=0; e
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_0.js ================================================ var searchData= [ ['aligned_5fbvec1',['aligned_bvec1',['../a00303.html#ga780a35f764020f553a9601a3fcdcd059',1,'glm']]], ['aligned_5fbvec2',['aligned_bvec2',['../a00303.html#gae766b317c5afec852bfb3d74a3c54bc8',1,'glm']]], ['aligned_5fbvec3',['aligned_bvec3',['../a00303.html#gae1964ba70d15915e5b710926decbb3cb',1,'glm']]], ['aligned_5fbvec4',['aligned_bvec4',['../a00303.html#gae164a1f7879f828bc35e50b79d786b05',1,'glm']]], ['aligned_5fdmat2',['aligned_dmat2',['../a00303.html#ga6783859382677d35fcd5dac7dcbefdbd',1,'glm']]], ['aligned_5fdmat2x2',['aligned_dmat2x2',['../a00303.html#ga449a3ec2dde6b6bb4bb94c49a6aad388',1,'glm']]], ['aligned_5fdmat2x3',['aligned_dmat2x3',['../a00303.html#ga53d519a7b1bfb69076b3ec206a6b3bd1',1,'glm']]], ['aligned_5fdmat2x4',['aligned_dmat2x4',['../a00303.html#ga5ccb2baeb0ab57b818c24e0d486c59d0',1,'glm']]], ['aligned_5fdmat3',['aligned_dmat3',['../a00303.html#ga19aa695ffdb45ce29f7ea0b5029627de',1,'glm']]], ['aligned_5fdmat3x2',['aligned_dmat3x2',['../a00303.html#ga5f5123d834bd1170edf8c386834e112c',1,'glm']]], ['aligned_5fdmat3x3',['aligned_dmat3x3',['../a00303.html#ga635bf3732281a2c2ca54d8f9d33d178f',1,'glm']]], ['aligned_5fdmat3x4',['aligned_dmat3x4',['../a00303.html#gaf488c6ad88c185054595d4d5c7ba5b9d',1,'glm']]], ['aligned_5fdmat4',['aligned_dmat4',['../a00303.html#ga001bb387ae8192fa94dbd8b23b600439',1,'glm']]], ['aligned_5fdmat4x2',['aligned_dmat4x2',['../a00303.html#gaa409cfb737bd59b68dc683e9b03930cc',1,'glm']]], ['aligned_5fdmat4x3',['aligned_dmat4x3',['../a00303.html#ga621e89ca1dbdcb7b5a3e7de237c44121',1,'glm']]], ['aligned_5fdmat4x4',['aligned_dmat4x4',['../a00303.html#gac9bda778d0b7ad82f656dab99b71857a',1,'glm']]], ['aligned_5fdvec1',['aligned_dvec1',['../a00303.html#ga4974f46ae5a19415d91316960a53617a',1,'glm']]], 
['aligned_5fdvec2',['aligned_dvec2',['../a00303.html#ga18d859f87122b2b3b2992ffe86dbebc0',1,'glm']]], ['aligned_5fdvec3',['aligned_dvec3',['../a00303.html#gaa37869eea77d28419b2fb0ff70b69bf0',1,'glm']]], ['aligned_5fdvec4',['aligned_dvec4',['../a00303.html#ga8a9f0a4795ccc442fa9901845026f9f5',1,'glm']]], ['aligned_5fhighp_5fbvec1',['aligned_highp_bvec1',['../a00303.html#ga862843a45b01c35ffe4d44c47ea774ad',1,'glm']]], ['aligned_5fhighp_5fbvec2',['aligned_highp_bvec2',['../a00303.html#ga0731b593c5e33559954c80f8687e76c6',1,'glm']]], ['aligned_5fhighp_5fbvec3',['aligned_highp_bvec3',['../a00303.html#ga0913bdf048d0cb74af1d2512aec675bc',1,'glm']]], ['aligned_5fhighp_5fbvec4',['aligned_highp_bvec4',['../a00303.html#ga9df1d0c425852cf63a57e533b7a83f4f',1,'glm']]], ['aligned_5fhighp_5fdmat2',['aligned_highp_dmat2',['../a00303.html#ga3a7eeae43cb7673e14cc89bf02f7dd45',1,'glm']]], ['aligned_5fhighp_5fdmat2x2',['aligned_highp_dmat2x2',['../a00303.html#gaef26dfe3855a91644665b55c9096a8c8',1,'glm']]], ['aligned_5fhighp_5fdmat2x3',['aligned_highp_dmat2x3',['../a00303.html#gaa7c9d4ab7ab651cdf8001fe7843e238b',1,'glm']]], ['aligned_5fhighp_5fdmat2x4',['aligned_highp_dmat2x4',['../a00303.html#gaa0d2b8a75f1908dcf32c27f8524bdced',1,'glm']]], ['aligned_5fhighp_5fdmat3',['aligned_highp_dmat3',['../a00303.html#gad8f6abb2c9994850b5d5c04a5f979ed8',1,'glm']]], ['aligned_5fhighp_5fdmat3x2',['aligned_highp_dmat3x2',['../a00303.html#gab069b2fc2ec785fc4e193cf26c022679',1,'glm']]], ['aligned_5fhighp_5fdmat3x3',['aligned_highp_dmat3x3',['../a00303.html#ga66073b1ddef34b681741f572338ddb8e',1,'glm']]], ['aligned_5fhighp_5fdmat3x4',['aligned_highp_dmat3x4',['../a00303.html#ga683c8ca66de323ea533a760abedd0efc',1,'glm']]], ['aligned_5fhighp_5fdmat4',['aligned_highp_dmat4',['../a00303.html#gacaa7407ea00ffdd322ce86a57adb547e',1,'glm']]], ['aligned_5fhighp_5fdmat4x2',['aligned_highp_dmat4x2',['../a00303.html#ga93a23ca3d42818d56e0702213c66354b',1,'glm']]], 
['aligned_5fhighp_5fdmat4x3',['aligned_highp_dmat4x3',['../a00303.html#gacab7374b560745cb1d0a306a90353f58',1,'glm']]], ['aligned_5fhighp_5fdmat4x4',['aligned_highp_dmat4x4',['../a00303.html#ga1fbfba14368b742972d3b58a0a303682',1,'glm']]], ['aligned_5fhighp_5fdvec1',['aligned_highp_dvec1',['../a00303.html#gaf0448b0f7ceb8273f7eda3a92205eefc',1,'glm']]], ['aligned_5fhighp_5fdvec2',['aligned_highp_dvec2',['../a00303.html#gab173a333e6b7ce153ceba66ac4a321cf',1,'glm']]], ['aligned_5fhighp_5fdvec3',['aligned_highp_dvec3',['../a00303.html#gae94ef61edfa047d05bc69b6065fc42ba',1,'glm']]], ['aligned_5fhighp_5fdvec4',['aligned_highp_dvec4',['../a00303.html#ga8fad35c5677f228e261fe541f15363a4',1,'glm']]], ['aligned_5fhighp_5fivec1',['aligned_highp_ivec1',['../a00303.html#gad63b8c5b4dc0500d54d7414ef555178f',1,'glm']]], ['aligned_5fhighp_5fivec2',['aligned_highp_ivec2',['../a00303.html#ga41563650f36cb7f479e080de21e08418',1,'glm']]], ['aligned_5fhighp_5fivec3',['aligned_highp_ivec3',['../a00303.html#ga6eca5170bb35eac90b4972590fd31a06',1,'glm']]], ['aligned_5fhighp_5fivec4',['aligned_highp_ivec4',['../a00303.html#ga31bfa801e1579fdba752ec3f7a45ec91',1,'glm']]], ['aligned_5fhighp_5fmat2',['aligned_highp_mat2',['../a00303.html#gaf9db5e8a929c317da5aa12cc53741b63',1,'glm']]], ['aligned_5fhighp_5fmat2x2',['aligned_highp_mat2x2',['../a00303.html#gab559d943abf92bc588bcd3f4c0e4664b',1,'glm']]], ['aligned_5fhighp_5fmat2x3',['aligned_highp_mat2x3',['../a00303.html#ga50c9af5aa3a848956d625fc64dc8488e',1,'glm']]], ['aligned_5fhighp_5fmat2x4',['aligned_highp_mat2x4',['../a00303.html#ga0edcfdd179f8a158342eead48a4d0c2a',1,'glm']]], ['aligned_5fhighp_5fmat3',['aligned_highp_mat3',['../a00303.html#gabab3afcc04459c7b123604ae5dc663f6',1,'glm']]], ['aligned_5fhighp_5fmat3x2',['aligned_highp_mat3x2',['../a00303.html#ga9fc2167b47c9be9295f2d8eea7f0ca75',1,'glm']]], ['aligned_5fhighp_5fmat3x3',['aligned_highp_mat3x3',['../a00303.html#ga2f7b8c99ba6f2d07c73a195a8143c259',1,'glm']]], 
['aligned_5fhighp_5fmat3x4',['aligned_highp_mat3x4',['../a00303.html#ga52e00afd0eb181e6738f40cf41787049',1,'glm']]], ['aligned_5fhighp_5fmat4',['aligned_highp_mat4',['../a00303.html#ga058ae939bfdbcbb80521dd4a3b01afba',1,'glm']]], ['aligned_5fhighp_5fmat4x2',['aligned_highp_mat4x2',['../a00303.html#ga84e1f5e0718952a079b748825c03f956',1,'glm']]], ['aligned_5fhighp_5fmat4x3',['aligned_highp_mat4x3',['../a00303.html#gafff1684c4ff19b4a818138ccacc1e78d',1,'glm']]], ['aligned_5fhighp_5fmat4x4',['aligned_highp_mat4x4',['../a00303.html#ga40d49648083a0498a12a4bb41ae6ece8',1,'glm']]], ['aligned_5fhighp_5fuvec1',['aligned_highp_uvec1',['../a00303.html#ga5b80e28396c6ef7d32c6fd18df498451',1,'glm']]], ['aligned_5fhighp_5fuvec2',['aligned_highp_uvec2',['../a00303.html#ga04db692662a4908beeaf5a5ba6e19483',1,'glm']]], ['aligned_5fhighp_5fuvec3',['aligned_highp_uvec3',['../a00303.html#ga073fd6e8b241afade6d8afbd676b2667',1,'glm']]], ['aligned_5fhighp_5fuvec4',['aligned_highp_uvec4',['../a00303.html#gabdd60462042859f876c17c7346c732a5',1,'glm']]], ['aligned_5fhighp_5fvec1',['aligned_highp_vec1',['../a00303.html#ga4d0bd70d5fac49b800546d608b707513',1,'glm']]], ['aligned_5fhighp_5fvec2',['aligned_highp_vec2',['../a00303.html#gac9f8482dde741fb6bab7248b81a45465',1,'glm']]], ['aligned_5fhighp_5fvec3',['aligned_highp_vec3',['../a00303.html#ga65415d2d68c9cc0ca554524a8f5510b2',1,'glm']]], ['aligned_5fhighp_5fvec4',['aligned_highp_vec4',['../a00303.html#ga7cb26d354dd69d23849c34c4fba88da9',1,'glm']]], ['aligned_5fivec1',['aligned_ivec1',['../a00303.html#ga76298aed82a439063c3d55980c84aa0b',1,'glm']]], ['aligned_5fivec2',['aligned_ivec2',['../a00303.html#gae4f38fd2c86cee6940986197777b3ca4',1,'glm']]], ['aligned_5fivec3',['aligned_ivec3',['../a00303.html#ga32794322d294e5ace7fed4a61896f270',1,'glm']]], ['aligned_5fivec4',['aligned_ivec4',['../a00303.html#ga7f79eae5927c9033d84617e49f6f34e4',1,'glm']]], 
['aligned_5flowp_5fbvec1',['aligned_lowp_bvec1',['../a00303.html#gac6036449ab1c4abf8efe1ea00fcdd1c9',1,'glm']]], ['aligned_5flowp_5fbvec2',['aligned_lowp_bvec2',['../a00303.html#ga59fadcd3835646e419372ae8b43c5d37',1,'glm']]], ['aligned_5flowp_5fbvec3',['aligned_lowp_bvec3',['../a00303.html#ga83aab4d191053f169c93a3e364f2e118',1,'glm']]], ['aligned_5flowp_5fbvec4',['aligned_lowp_bvec4',['../a00303.html#gaa7a76555ee4853614e5755181a8dd54e',1,'glm']]], ['aligned_5flowp_5fdmat2',['aligned_lowp_dmat2',['../a00303.html#ga79a90173d8faa9816dc852ce447d66ca',1,'glm']]], ['aligned_5flowp_5fdmat2x2',['aligned_lowp_dmat2x2',['../a00303.html#ga07cb8e846666cbf56045b064fb553d2e',1,'glm']]], ['aligned_5flowp_5fdmat2x3',['aligned_lowp_dmat2x3',['../a00303.html#ga7a4536b6e1f2ebb690f63816b5d7e48b',1,'glm']]], ['aligned_5flowp_5fdmat2x4',['aligned_lowp_dmat2x4',['../a00303.html#gab0cf4f7c9a264941519acad286e055ea',1,'glm']]], ['aligned_5flowp_5fdmat3',['aligned_lowp_dmat3',['../a00303.html#gac00e15efded8a57c9dec3aed0fb547e7',1,'glm']]], ['aligned_5flowp_5fdmat3x2',['aligned_lowp_dmat3x2',['../a00303.html#gaa281a47d5d627313984d0f8df993b648',1,'glm']]], ['aligned_5flowp_5fdmat3x3',['aligned_lowp_dmat3x3',['../a00303.html#ga7f3148a72355e39932d6855baca42ebc',1,'glm']]], ['aligned_5flowp_5fdmat3x4',['aligned_lowp_dmat3x4',['../a00303.html#gaea3ccc5ef5b178e6e49b4fa1427605d3',1,'glm']]], ['aligned_5flowp_5fdmat4',['aligned_lowp_dmat4',['../a00303.html#gab92c6d7d58d43dfb8147e9aedfe8351b',1,'glm']]], ['aligned_5flowp_5fdmat4x2',['aligned_lowp_dmat4x2',['../a00303.html#gaf806dfdaffb2e9f7681b1cd2825898ce',1,'glm']]], ['aligned_5flowp_5fdmat4x3',['aligned_lowp_dmat4x3',['../a00303.html#gab0931ac7807fa1428c7bbf249efcdf0d',1,'glm']]], ['aligned_5flowp_5fdmat4x4',['aligned_lowp_dmat4x4',['../a00303.html#gad8220a93d2fca2dd707821b4ab6f809e',1,'glm']]], ['aligned_5flowp_5fdvec1',['aligned_lowp_dvec1',['../a00303.html#ga7f8a2cc5a686e52b1615761f4978ca62',1,'glm']]], 
['aligned_5flowp_5fdvec2',['aligned_lowp_dvec2',['../a00303.html#ga0e37cff4a43cca866101f0a35f01db6d',1,'glm']]], ['aligned_5flowp_5fdvec3',['aligned_lowp_dvec3',['../a00303.html#gab9e669c4efd52d3347fc6d5f6b20fd59',1,'glm']]], ['aligned_5flowp_5fdvec4',['aligned_lowp_dvec4',['../a00303.html#ga226f5ec7a953cea559c16fe3aff9924f',1,'glm']]], ['aligned_5flowp_5fivec1',['aligned_lowp_ivec1',['../a00303.html#ga1101d3a82b2e3f5f8828bd8f3adab3e1',1,'glm']]], ['aligned_5flowp_5fivec2',['aligned_lowp_ivec2',['../a00303.html#ga44c4accad582cfbd7226a19b83b0cadc',1,'glm']]], ['aligned_5flowp_5fivec3',['aligned_lowp_ivec3',['../a00303.html#ga65663f10a02e52cedcddbcfe36ddf38d',1,'glm']]], ['aligned_5flowp_5fivec4',['aligned_lowp_ivec4',['../a00303.html#gaae92fcec8b2e0328ffbeac31cc4fc419',1,'glm']]], ['aligned_5flowp_5fmat2',['aligned_lowp_mat2',['../a00303.html#ga17c424412207b00dba1cf587b099eea3',1,'glm']]], ['aligned_5flowp_5fmat2x2',['aligned_lowp_mat2x2',['../a00303.html#ga0e44aeb930a47f9cbf2db15b56433b0f',1,'glm']]], ['aligned_5flowp_5fmat2x3',['aligned_lowp_mat2x3',['../a00303.html#ga7dec6d96bc61312b1e56d137c9c74030',1,'glm']]], ['aligned_5flowp_5fmat2x4',['aligned_lowp_mat2x4',['../a00303.html#gaa694fab1f8df5f658846573ba8ffc563',1,'glm']]], ['aligned_5flowp_5fmat3',['aligned_lowp_mat3',['../a00303.html#ga1eb9076cc28ead5020fd3029fd0472c5',1,'glm']]], ['aligned_5flowp_5fmat3x2',['aligned_lowp_mat3x2',['../a00303.html#ga2d6639f0bd777bae1ee0eba71cd7bfdc',1,'glm']]], ['aligned_5flowp_5fmat3x3',['aligned_lowp_mat3x3',['../a00303.html#gaeaab04e378a90956eec8d68a99d777ed',1,'glm']]], ['aligned_5flowp_5fmat3x4',['aligned_lowp_mat3x4',['../a00303.html#ga1f03696ab066572c6c044e63edf635a2',1,'glm']]], ['aligned_5flowp_5fmat4',['aligned_lowp_mat4',['../a00303.html#ga25ea2f684e36aa5e978b4f2f86593824',1,'glm']]], ['aligned_5flowp_5fmat4x2',['aligned_lowp_mat4x2',['../a00303.html#ga2cb16c3fdfb15e0719d942ee3b548bc4',1,'glm']]], 
['aligned_5flowp_5fmat4x3',['aligned_lowp_mat4x3',['../a00303.html#ga7e96981e872f17a780d9f1c22dc1f512',1,'glm']]], ['aligned_5flowp_5fmat4x4',['aligned_lowp_mat4x4',['../a00303.html#gadae3dcfc22d28c64d0548cbfd9d08719',1,'glm']]], ['aligned_5flowp_5fuvec1',['aligned_lowp_uvec1',['../a00303.html#gad09b93acc43c43423408d17a64f6d7ca',1,'glm']]], ['aligned_5flowp_5fuvec2',['aligned_lowp_uvec2',['../a00303.html#ga6f94fcd28dde906fc6cad5f742b55c1a',1,'glm']]], ['aligned_5flowp_5fuvec3',['aligned_lowp_uvec3',['../a00303.html#ga9e9f006970b1a00862e3e6e599eedd4c',1,'glm']]], ['aligned_5flowp_5fuvec4',['aligned_lowp_uvec4',['../a00303.html#ga46b1b0b9eb8625a5d69137bd66cd13dc',1,'glm']]], ['aligned_5flowp_5fvec1',['aligned_lowp_vec1',['../a00303.html#gab34aee3d5e121c543fea11d2c50ecc43',1,'glm']]], ['aligned_5flowp_5fvec2',['aligned_lowp_vec2',['../a00303.html#ga53ac5d252317f1fa43c2ef921857bf13',1,'glm']]], ['aligned_5flowp_5fvec3',['aligned_lowp_vec3',['../a00303.html#ga98f0b5cd65fce164ff1367c2a3b3aa1e',1,'glm']]], ['aligned_5flowp_5fvec4',['aligned_lowp_vec4',['../a00303.html#ga82f7275d6102593a69ce38cdad680409',1,'glm']]], ['aligned_5fmat2',['aligned_mat2',['../a00303.html#ga5a8a5f8c47cd7d5502dd9932f83472b9',1,'glm']]], ['aligned_5fmat2x2',['aligned_mat2x2',['../a00303.html#gabb04f459d81d753d278b2072e2375e8e',1,'glm']]], ['aligned_5fmat2x3',['aligned_mat2x3',['../a00303.html#ga832476bb1c59ef673db37433ff34e399',1,'glm']]], ['aligned_5fmat2x4',['aligned_mat2x4',['../a00303.html#gadab11a7504430825b648ff7c7e36b725',1,'glm']]], ['aligned_5fmat3',['aligned_mat3',['../a00303.html#ga43a92a24ca863e0e0f3b65834b3cf714',1,'glm']]], ['aligned_5fmat3x2',['aligned_mat3x2',['../a00303.html#ga5c0df24ba85eafafc0eb0c90690510ed',1,'glm']]], ['aligned_5fmat3x3',['aligned_mat3x3',['../a00303.html#gadb065dbe5c11271fef8cf2ea8608f187',1,'glm']]], ['aligned_5fmat3x4',['aligned_mat3x4',['../a00303.html#ga88061c72c997b94c420f2b0a60d9df26',1,'glm']]], 
['aligned_5fmat4',['aligned_mat4',['../a00303.html#gab0fddcf95dd51cbcbf624ea7c40dfeb8',1,'glm']]], ['aligned_5fmat4x2',['aligned_mat4x2',['../a00303.html#gac9a2d0fb815fd5c2bd58b869c55e32d3',1,'glm']]], ['aligned_5fmat4x3',['aligned_mat4x3',['../a00303.html#ga452bbbfd26e244de216e4d004d50bb74',1,'glm']]], ['aligned_5fmat4x4',['aligned_mat4x4',['../a00303.html#ga8b8fb86973a0b768c5bd802c92fac1a1',1,'glm']]], ['aligned_5fmediump_5fbvec1',['aligned_mediump_bvec1',['../a00303.html#gadd3b8bd71a758f7fb0da8e525156f34e',1,'glm']]], ['aligned_5fmediump_5fbvec2',['aligned_mediump_bvec2',['../a00303.html#gacb183eb5e67ec0d0ea5a016cba962810',1,'glm']]], ['aligned_5fmediump_5fbvec3',['aligned_mediump_bvec3',['../a00303.html#gacfa4a542f1b20a5b63ad702dfb6fd587',1,'glm']]], ['aligned_5fmediump_5fbvec4',['aligned_mediump_bvec4',['../a00303.html#ga91bc1f513bb9b0fd60281d57ded9a48c',1,'glm']]], ['aligned_5fmediump_5fdmat2',['aligned_mediump_dmat2',['../a00303.html#ga62a2dfd668c91072b72c3109fc6cda28',1,'glm']]], ['aligned_5fmediump_5fdmat2x2',['aligned_mediump_dmat2x2',['../a00303.html#ga9b7feec247d378dd407ba81f56ea96c8',1,'glm']]], ['aligned_5fmediump_5fdmat2x3',['aligned_mediump_dmat2x3',['../a00303.html#gafcb189f4f93648fe7ca802ca4aca2eb8',1,'glm']]], ['aligned_5fmediump_5fdmat2x4',['aligned_mediump_dmat2x4',['../a00303.html#ga92f8873e3bbd5ca1323c8bbe5725cc5e',1,'glm']]], ['aligned_5fmediump_5fdmat3',['aligned_mediump_dmat3',['../a00303.html#ga6dc2832b747c00e0a0df621aba196960',1,'glm']]], ['aligned_5fmediump_5fdmat3x2',['aligned_mediump_dmat3x2',['../a00303.html#ga5a97f0355d801de3444d42c1d5b40438',1,'glm']]], ['aligned_5fmediump_5fdmat3x3',['aligned_mediump_dmat3x3',['../a00303.html#ga649d0acf01054b17e679cf00e150e025',1,'glm']]], ['aligned_5fmediump_5fdmat3x4',['aligned_mediump_dmat3x4',['../a00303.html#ga45e155a4840f69b2fa4ed8047a676860',1,'glm']]], ['aligned_5fmediump_5fdmat4',['aligned_mediump_dmat4',['../a00303.html#ga8a9376d82f0e946e25137eb55543e6ce',1,'glm']]], 
['aligned_5fmediump_5fdmat4x2',['aligned_mediump_dmat4x2',['../a00303.html#gabc25e547f4de4af62403492532cd1b6d',1,'glm']]], ['aligned_5fmediump_5fdmat4x3',['aligned_mediump_dmat4x3',['../a00303.html#gae84f4763ecdc7457ecb7930bad12057c',1,'glm']]], ['aligned_5fmediump_5fdmat4x4',['aligned_mediump_dmat4x4',['../a00303.html#gaa292ebaa907afdecb2d5967fb4fb1247',1,'glm']]], ['aligned_5fmediump_5fdvec1',['aligned_mediump_dvec1',['../a00303.html#ga7180b685c581adb224406a7f831608e3',1,'glm']]], ['aligned_5fmediump_5fdvec2',['aligned_mediump_dvec2',['../a00303.html#ga9af1eabe22f569e70d9893be72eda0f5',1,'glm']]], ['aligned_5fmediump_5fdvec3',['aligned_mediump_dvec3',['../a00303.html#ga058e7ddab1428e47f2197bdd3a5a6953',1,'glm']]], ['aligned_5fmediump_5fdvec4',['aligned_mediump_dvec4',['../a00303.html#gaffd747ea2aea1e69c2ecb04e68521b21',1,'glm']]], ['aligned_5fmediump_5fivec1',['aligned_mediump_ivec1',['../a00303.html#ga20e63dd980b81af10cadbbe219316650',1,'glm']]], ['aligned_5fmediump_5fivec2',['aligned_mediump_ivec2',['../a00303.html#gaea13d89d49daca2c796aeaa82fc2c2f2',1,'glm']]], ['aligned_5fmediump_5fivec3',['aligned_mediump_ivec3',['../a00303.html#gabbf0f15e9c3d9868e43241ad018f82bd',1,'glm']]], ['aligned_5fmediump_5fivec4',['aligned_mediump_ivec4',['../a00303.html#ga6099dd7878d0a78101a4250d8cd2d736',1,'glm']]], ['aligned_5fmediump_5fmat2',['aligned_mediump_mat2',['../a00303.html#gaf6f041b212c57664d88bc6aefb7e36f3',1,'glm']]], ['aligned_5fmediump_5fmat2x2',['aligned_mediump_mat2x2',['../a00303.html#ga04bf49316ee777d42fcfe681ee37d7be',1,'glm']]], ['aligned_5fmediump_5fmat2x3',['aligned_mediump_mat2x3',['../a00303.html#ga26a0b61e444a51a37b9737cf4d84291b',1,'glm']]], ['aligned_5fmediump_5fmat2x4',['aligned_mediump_mat2x4',['../a00303.html#ga163facc9ed2692ea1300ed57c5d12b17',1,'glm']]], ['aligned_5fmediump_5fmat3',['aligned_mediump_mat3',['../a00303.html#ga3b76ba17ae5d53debeb6f7e55919a57c',1,'glm']]], 
['aligned_5fmediump_5fmat3x2',['aligned_mediump_mat3x2',['../a00303.html#ga80dee705d714300378e0847f45059097',1,'glm']]], ['aligned_5fmediump_5fmat3x3',['aligned_mediump_mat3x3',['../a00303.html#ga721f5404caf40d68962dcc0529de71d9',1,'glm']]], ['aligned_5fmediump_5fmat3x4',['aligned_mediump_mat3x4',['../a00303.html#ga98f4dc6722a2541a990918c074075359',1,'glm']]], ['aligned_5fmediump_5fmat4',['aligned_mediump_mat4',['../a00303.html#gaeefee8317192174596852ce19b602720',1,'glm']]], ['aligned_5fmediump_5fmat4x2',['aligned_mediump_mat4x2',['../a00303.html#ga46f372a006345c252a41267657cc22c0',1,'glm']]], ['aligned_5fmediump_5fmat4x3',['aligned_mediump_mat4x3',['../a00303.html#ga0effece4545acdebdc2a5512a303110e',1,'glm']]], ['aligned_5fmediump_5fmat4x4',['aligned_mediump_mat4x4',['../a00303.html#ga312864244cae4e8f10f478cffd0f76de',1,'glm']]], ['aligned_5fmediump_5fuvec1',['aligned_mediump_uvec1',['../a00303.html#gacb78126ea2eb779b41c7511128ff1283',1,'glm']]], ['aligned_5fmediump_5fuvec2',['aligned_mediump_uvec2',['../a00303.html#ga081d53e0a71443d0b68ea61c870f9adc',1,'glm']]], ['aligned_5fmediump_5fuvec3',['aligned_mediump_uvec3',['../a00303.html#gad6fc921bdde2bdbc7e09b028e1e9b379',1,'glm']]], ['aligned_5fmediump_5fuvec4',['aligned_mediump_uvec4',['../a00303.html#ga73ea0c1ba31580e107d21270883f51fc',1,'glm']]], ['aligned_5fmediump_5fvec1',['aligned_mediump_vec1',['../a00303.html#ga6b797eec76fa471e300158f3453b3b2e',1,'glm']]], ['aligned_5fmediump_5fvec2',['aligned_mediump_vec2',['../a00303.html#ga026a55ddbf2bafb1432f1157a2708616',1,'glm']]], ['aligned_5fmediump_5fvec3',['aligned_mediump_vec3',['../a00303.html#ga3a25e494173f6a64637b08a1b50a2132',1,'glm']]], ['aligned_5fmediump_5fvec4',['aligned_mediump_vec4',['../a00303.html#ga320d1c661cff2ef214eb50241f2928b2',1,'glm']]], ['aligned_5fuvec1',['aligned_uvec1',['../a00303.html#ga1ff8ed402c93d280ff0597c1c5e7c548',1,'glm']]], ['aligned_5fuvec2',['aligned_uvec2',['../a00303.html#ga074137e3be58528d67041c223d49f398',1,'glm']]], 
['aligned_5fuvec3',['aligned_uvec3',['../a00303.html#ga2a8d9c3046f89d854eb758adfa0811c0',1,'glm']]], ['aligned_5fuvec4',['aligned_uvec4',['../a00303.html#gabf842c45eea186170c267a328e3f3b7d',1,'glm']]], ['aligned_5fvec1',['aligned_vec1',['../a00303.html#ga05e6d4c908965d04191c2070a8d0a65e',1,'glm']]], ['aligned_5fvec2',['aligned_vec2',['../a00303.html#ga0682462f8096a226773e20fac993cde5',1,'glm']]], ['aligned_5fvec3',['aligned_vec3',['../a00303.html#ga7cf643b66664e0cd3c48759ae66c2bd0',1,'glm']]], ['aligned_5fvec4',['aligned_vec4',['../a00303.html#ga85d89e83cb8137e1be1446de8c3b643a',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_1.html ================================================
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_1.js ================================================ var searchData= [ ['bool1',['bool1',['../a00315.html#gaddcd7aa2e30e61af5b38660613d3979e',1,'glm']]], ['bool1x1',['bool1x1',['../a00315.html#ga7f895c936f0c29c8729afbbf22806090',1,'glm']]], ['bool2',['bool2',['../a00315.html#gaa09ab65ec9c3c54305ff502e2b1fe6d9',1,'glm']]], ['bool2x2',['bool2x2',['../a00315.html#gadb3703955e513632f98ba12fe051ba3e',1,'glm']]], ['bool2x3',['bool2x3',['../a00315.html#ga9ae6ee155d0f90cb1ae5b6c4546738a0',1,'glm']]], ['bool2x4',['bool2x4',['../a00315.html#ga4d7fa65be8e8e4ad6d920b45c44e471f',1,'glm']]], ['bool3',['bool3',['../a00315.html#ga99629f818737f342204071ef8296b2ed',1,'glm']]], ['bool3x2',['bool3x2',['../a00315.html#gac7d7311f7e0fa8b6163d96dab033a755',1,'glm']]], ['bool3x3',['bool3x3',['../a00315.html#ga6c97b99aac3e302053ffb58aace9033c',1,'glm']]], ['bool3x4',['bool3x4',['../a00315.html#gae7d6b679463d37d6c527d478fb470fdf',1,'glm']]], ['bool4',['bool4',['../a00315.html#ga13c3200b82708f73faac6d7f09ec91a3',1,'glm']]], ['bool4x2',['bool4x2',['../a00315.html#ga9ed830f52408b2f83c085063a3eaf1d0',1,'glm']]], ['bool4x3',['bool4x3',['../a00315.html#gad0f5dc7f22c2065b1b06d57f1c0658fe',1,'glm']]], ['bool4x4',['bool4x4',['../a00315.html#ga7d2a7d13986602ae2896bfaa394235d4',1,'glm']]], ['bvec1',['bvec1',['../a00265.html#ga067af382616d93f8e850baae5154cdcc',1,'glm']]], ['bvec2',['bvec2',['../a00281.html#ga0b6123e03653cc1bbe366fc55238a934',1,'glm']]], ['bvec3',['bvec3',['../a00281.html#ga197151b72dfaf289daf98b361760ffe7',1,'glm']]], ['bvec4',['bvec4',['../a00281.html#ga9f7b9712373ff4342d9114619b55f5e3',1,'glm']]], ['byte',['byte',['../a00354.html#ga3005cb0d839d546c616becfa6602c607',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_2.html 
================================================
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_2.js ================================================ var searchData= [ ['ddualquat',['ddualquat',['../a00317.html#ga3d71f98d84ba59dfe4e369fde4714cd6',1,'glm']]], ['dmat2',['dmat2',['../a00283.html#ga21dbd1f987775d7cc7607c139531c7e6',1,'glm']]], ['dmat2x2',['dmat2x2',['../a00283.html#ga66b6a9af787e468a46dfe24189e87f9b',1,'glm']]], ['dmat2x3',['dmat2x3',['../a00283.html#ga92cd388753d48e20de69ea2dbedf826a',1,'glm']]], ['dmat2x4',['dmat2x4',['../a00283.html#gaef2198807e937072803ae0ae45e1965e',1,'glm']]], ['dmat3',['dmat3',['../a00283.html#ga6f40aa56265b4b0ccad41b86802efe33',1,'glm']]], ['dmat3x2',['dmat3x2',['../a00283.html#ga001e3e0638fbf8719788fc64c5b8cf39',1,'glm']]], ['dmat3x3',['dmat3x3',['../a00283.html#ga970cb3306be25a5ca5db5a9456831228',1,'glm']]], ['dmat3x4',['dmat3x4',['../a00283.html#ga0412a634d183587e6188e9b11869f8f4',1,'glm']]], ['dmat4',['dmat4',['../a00283.html#ga0f34486bb7fec8e5a5b3830b6a6cbeca',1,'glm']]], ['dmat4x2',['dmat4x2',['../a00283.html#ga9bc0b3ab8b6ba2cb6782e179ad7ad156',1,'glm']]], ['dmat4x3',['dmat4x3',['../a00283.html#gacd18864049f8c83799babe7e596ca05b',1,'glm']]], ['dmat4x4',['dmat4x4',['../a00283.html#gad5a6484b983b74f9d801cab8bc4e6a10',1,'glm']]], ['double1',['double1',['../a00315.html#ga20b861a9b6e2a300323671c57a02525b',1,'glm']]], ['double1x1',['double1x1',['../a00315.html#ga45f16a4dd0db1f199afaed9fd12fe9a8',1,'glm']]], ['double2',['double2',['../a00315.html#ga31b729b04facccda73f07ed26958b3c2',1,'glm']]], ['double2x2',['double2x2',['../a00315.html#gae57d0201096834d25f2b91b319e7cdbd',1,'glm']]], ['double2x3',['double2x3',['../a00315.html#ga3655bc324008553ca61f39952d0b2d08',1,'glm']]], ['double2x4',['double2x4',['../a00315.html#gacd33061fc64a7b2dcfd7322c49d9557a',1,'glm']]], ['double3',['double3',['../a00315.html#ga3d8b9028a1053a44a98902cd1c389472',1,'glm']]], 
['double3x2',['double3x2',['../a00315.html#ga5ec08fc39c9d783dfcc488be240fe975',1,'glm']]], ['double3x3',['double3x3',['../a00315.html#ga4bad5bb20c6ddaecfe4006c93841d180',1,'glm']]], ['double3x4',['double3x4',['../a00315.html#ga2ef022e453d663d70aec414b2a80f756',1,'glm']]], ['double4',['double4',['../a00315.html#gaf92f58af24f35617518aeb3d4f63fda6',1,'glm']]], ['double4x2',['double4x2',['../a00315.html#gabca29ccceea53669618b751aae0ba83d',1,'glm']]], ['double4x3',['double4x3',['../a00315.html#gafad66a02ccd360c86d6ab9ff9cfbc19c',1,'glm']]], ['double4x4',['double4x4',['../a00315.html#gaab541bed2e788e4537852a2492860806',1,'glm']]], ['dquat',['dquat',['../a00249.html#ga1181459aa5d640a3ea43861b118f3f0b',1,'glm']]], ['dualquat',['dualquat',['../a00317.html#gae93abee0c979902fbec6a7bee0f6fae1',1,'glm']]], ['dvec1',['dvec1',['../a00268.html#ga6221af17edc2d4477a4583d2cd53e569',1,'glm']]], ['dvec2',['dvec2',['../a00281.html#ga8b09c71aaac7da7867ae58377fe219a8',1,'glm']]], ['dvec3',['dvec3',['../a00281.html#ga5b83ae3d0fdec519c038e4d2cf967cf0',1,'glm']]], ['dvec4',['dvec4',['../a00281.html#ga57debab5d98ce618f7b2a97fe26eb3ac',1,'glm']]], ['dword',['dword',['../a00354.html#ga86e46fff9f80ae33893d8d697f2ca98a',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_3.html ================================================
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_3.js ================================================ var searchData= [ ['f32',['f32',['../a00304.html#gabe6a542dd6c1d5ffd847f1b9b4c9c9b7',1,'glm']]], ['f32mat1',['f32mat1',['../a00346.html#ga145ad477a2a3e152855511c3b52469a6',1,'glm::gtx']]], ['f32mat1x1',['f32mat1x1',['../a00346.html#gac88c6a4dbfc380aa26e3adbbade36348',1,'glm::gtx']]], ['f32mat2',['f32mat2',['../a00304.html#gab12383ed6ac7595ed6fde4d266c58425',1,'glm']]], ['f32mat2x2',['f32mat2x2',['../a00304.html#ga04100c76f7d55a0dd0983ccf05142bff',1,'glm']]], ['f32mat2x3',['f32mat2x3',['../a00304.html#gab256cdab5eb582e426d749ae77b5b566',1,'glm']]], ['f32mat2x4',['f32mat2x4',['../a00304.html#gaf512b74c4400b68f9fdf9388b3d6aac8',1,'glm']]], ['f32mat3',['f32mat3',['../a00304.html#ga856f3905ee7cc2e4890a8a1d56c150be',1,'glm']]], ['f32mat3x2',['f32mat3x2',['../a00304.html#ga1320a08e14fdff3821241eefab6947e9',1,'glm']]], ['f32mat3x3',['f32mat3x3',['../a00304.html#ga65261fa8a21045c8646ddff114a56174',1,'glm']]], ['f32mat3x4',['f32mat3x4',['../a00304.html#gab90ade28222f8b861d5ceaf81a3a7f5d',1,'glm']]], ['f32mat4',['f32mat4',['../a00304.html#ga99d1b85ff99956b33da7e9992aad129a',1,'glm']]], ['f32mat4x2',['f32mat4x2',['../a00304.html#ga3b32ca1e57a4ef91babbc3d35a34ea20',1,'glm']]], ['f32mat4x3',['f32mat4x3',['../a00304.html#ga239b96198771b7add8eea7e6b59840c0',1,'glm']]], ['f32mat4x4',['f32mat4x4',['../a00304.html#gaee4da0e9fbd8cfa2f89cb80889719dc3',1,'glm']]], ['f32quat',['f32quat',['../a00304.html#ga38e674196ba411d642be40c47bf33939',1,'glm']]], ['f32vec1',['f32vec1',['../a00304.html#ga701f32ab5b3fb06996b41f5c0d643805',1,'glm::f32vec1()'],['../a00346.html#ga07f8d7348eb7ae059a84c118fdfeb943',1,'glm::gtx::f32vec1()']]], ['f32vec2',['f32vec2',['../a00304.html#ga5d6c70e080409a76a257dc55bd8ea2c8',1,'glm']]], 
['f32vec3',['f32vec3',['../a00304.html#gaea5c4518e175162e306d2c2b5ef5ac79',1,'glm']]], ['f32vec4',['f32vec4',['../a00304.html#ga31c6ca0e074a44007f49a9a3720b18c8',1,'glm']]], ['f64',['f64',['../a00304.html#ga1d794d240091678f602e8de225b8d8c9',1,'glm']]], ['f64mat1',['f64mat1',['../a00346.html#ga59bfa589419b5265d01314fcecd33435',1,'glm::gtx']]], ['f64mat1x1',['f64mat1x1',['../a00346.html#ga448eeb08d0b7d8c43a8b292c981955fd',1,'glm::gtx']]], ['f64mat2',['f64mat2',['../a00304.html#gad9771450a54785d13080cdde0fe20c1d',1,'glm']]], ['f64mat2x2',['f64mat2x2',['../a00304.html#ga9ec7c4c79e303c053e30729a95fb2c37',1,'glm']]], ['f64mat2x3',['f64mat2x3',['../a00304.html#gae3ab5719fc4c1e966631dbbcba8d412a',1,'glm']]], ['f64mat2x4',['f64mat2x4',['../a00304.html#gac87278e0c702ba8afff76316d4eeb769',1,'glm']]], ['f64mat3',['f64mat3',['../a00304.html#ga9b69181efbf8f37ae934f135137b29c0',1,'glm']]], ['f64mat3x2',['f64mat3x2',['../a00304.html#ga2473d8bf3f4abf967c4d0e18175be6f7',1,'glm']]], ['f64mat3x3',['f64mat3x3',['../a00304.html#ga916c1aed91cf91f7b41399ebe7c6e185',1,'glm']]], ['f64mat3x4',['f64mat3x4',['../a00304.html#gaab239fa9e35b65a67cbaa6ac082f3675',1,'glm']]], ['f64mat4',['f64mat4',['../a00304.html#ga0ecd3f4952536e5ef12702b44d2626fc',1,'glm']]], ['f64mat4x2',['f64mat4x2',['../a00304.html#gab7daf79d6bc06a68bea1c6f5e11b5512',1,'glm']]], ['f64mat4x3',['f64mat4x3',['../a00304.html#ga3e2e66ffbe341a80bc005ba2b9552110',1,'glm']]], ['f64mat4x4',['f64mat4x4',['../a00304.html#gae52e2b7077a9ff928a06ab5ce600b81e',1,'glm']]], ['f64quat',['f64quat',['../a00304.html#ga2b114a2f2af0fe1dfeb569c767822940',1,'glm']]], ['f64vec1',['f64vec1',['../a00304.html#gade502df1ce14f837fae7f60a03ddb9b0',1,'glm::f64vec1()'],['../a00346.html#gae5987a61b8c03d5c432a9e62f0b3efe1',1,'glm::gtx::f64vec1()']]], ['f64vec2',['f64vec2',['../a00304.html#gadc4e1594f9555d919131ee02b17822a2',1,'glm']]], ['f64vec3',['f64vec3',['../a00304.html#gaa7a1ddca75c5f629173bf4772db7a635',1,'glm']]], 
['f64vec4',['f64vec4',['../a00304.html#ga66e92e57260bdb910609b9a56bf83e97',1,'glm']]], ['fdualquat',['fdualquat',['../a00317.html#ga237c2b9b42c9a930e49de5840ae0f930',1,'glm']]], ['float1',['float1',['../a00315.html#gaf5208d01f6c6fbcb7bb55d610b9c0ead',1,'glm']]], ['float1x1',['float1x1',['../a00315.html#ga73720b8dc4620835b17f74d428f98c0c',1,'glm']]], ['float2',['float2',['../a00315.html#ga02d3c013982c183906c61d74aa3166ce',1,'glm']]], ['float2x2',['float2x2',['../a00315.html#ga33d43ecbb60a85a1366ff83f8a0ec85f',1,'glm']]], ['float2x3',['float2x3',['../a00315.html#ga939b0cff15cee3030f75c1b2e36f89fe',1,'glm']]], ['float2x4',['float2x4',['../a00315.html#gafec3cfd901ab334a92e0242b8f2269b4',1,'glm']]], ['float3',['float3',['../a00315.html#ga821ff110fc8533a053cbfcc93e078cc0',1,'glm']]], ['float32',['float32',['../a00304.html#gaacdc525d6f7bddb3ae95d5c311bd06a1',1,'glm']]], ['float32_5ft',['float32_t',['../a00304.html#gaa4947bc8b47c72fceea9bda730ecf603',1,'glm']]], ['float3x2',['float3x2',['../a00315.html#gaa6c69f04ba95f3faedf95dae874de576',1,'glm']]], ['float3x3',['float3x3',['../a00315.html#ga6ceb5d38a58becdf420026e12a6562f3',1,'glm']]], ['float3x4',['float3x4',['../a00315.html#ga4d2679c321b793ca3784fe0315bb5332',1,'glm']]], ['float4',['float4',['../a00315.html#gae2da7345087db3815a25d8837a727ef1',1,'glm']]], ['float4x2',['float4x2',['../a00315.html#ga308b9af0c221145bcfe9bfc129d9098e',1,'glm']]], ['float4x3',['float4x3',['../a00315.html#gac0a51b4812038aa81d73ffcc37f741ac',1,'glm']]], ['float4x4',['float4x4',['../a00315.html#gad3051649b3715d828a4ab92cdae7c3bf',1,'glm']]], ['float64',['float64',['../a00304.html#ga232fad1b0d6dcc7c16aabde98b2e2a80',1,'glm']]], ['float64_5ft',['float64_t',['../a00304.html#ga728366fef72cd96f0a5fa6429f05469e',1,'glm']]], ['fmat2',['fmat2',['../a00304.html#ga4541dc2feb2a31d6ecb5a303f3dd3280',1,'glm']]], ['fmat2x2',['fmat2x2',['../a00304.html#ga3350c93c3275298f940a42875388e4b4',1,'glm']]], 
['fmat2x3',['fmat2x3',['../a00304.html#ga55a2d2a8eb09b5633668257eb3cad453',1,'glm']]], ['fmat2x4',['fmat2x4',['../a00304.html#ga681381f19f11c9e5ee45cda2c56937ff',1,'glm']]], ['fmat3',['fmat3',['../a00304.html#ga253d453c20e037730023fea0215cb6f6',1,'glm']]], ['fmat3x2',['fmat3x2',['../a00304.html#ga6af54d70d9beb0a7ef992a879e86b04f',1,'glm']]], ['fmat3x3',['fmat3x3',['../a00304.html#gaa07c86650253672a19dbfb898f3265b8',1,'glm']]], ['fmat3x4',['fmat3x4',['../a00304.html#ga44e158af77a670ee1b58c03cda9e1619',1,'glm']]], ['fmat4',['fmat4',['../a00304.html#ga8cb400c0f4438f2640035d7b9824a0ca',1,'glm']]], ['fmat4x2',['fmat4x2',['../a00304.html#ga8c8aa45aafcc23238edb1d5aeb801774',1,'glm']]], ['fmat4x3',['fmat4x3',['../a00304.html#ga4295048a78bdf46b8a7de77ec665b497',1,'glm']]], ['fmat4x4',['fmat4x4',['../a00304.html#gad01cc6479bde1fd1870f13d3ed9530b3',1,'glm']]], ['fvec1',['fvec1',['../a00304.html#ga98b9ed43cf8c5cf1d354b23c7df9119f',1,'glm']]], ['fvec2',['fvec2',['../a00304.html#ga24273aa02abaecaab7f160bac437a339',1,'glm']]], ['fvec3',['fvec3',['../a00304.html#ga89930533646b30d021759298aa6bf04a',1,'glm']]], ['fvec4',['fvec4',['../a00304.html#ga713c796c54875cf4092d42ff9d9096b0',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_4.html ================================================
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_4.js ================================================ var searchData= [ ['highp_5fbvec1',['highp_bvec1',['../a00266.html#gae8a1e14abae1387274f57741750c06a2',1,'glm']]], ['highp_5fbvec2',['highp_bvec2',['../a00282.html#gac6c781a85f012d77a75310a3058702c2',1,'glm']]], ['highp_5fbvec3',['highp_bvec3',['../a00282.html#gaedb70027d89a0a405046aefda4eabaa6',1,'glm']]], ['highp_5fbvec4',['highp_bvec4',['../a00282.html#gaee663ff64429443ab07a5327074192f6',1,'glm']]], ['highp_5fddualquat',['highp_ddualquat',['../a00317.html#ga8f67eafa7197d7a668dad5105a463d2a',1,'glm']]], ['highp_5fdmat2',['highp_dmat2',['../a00284.html#ga369b447bb1b312449b679ea1f90f3cea',1,'glm']]], ['highp_5fdmat2x2',['highp_dmat2x2',['../a00284.html#gae27ac20302c2e39b6c78e7fe18e62ef7',1,'glm']]], ['highp_5fdmat2x3',['highp_dmat2x3',['../a00284.html#gad4689ec33bc2c26e10132b174b49001a',1,'glm']]], ['highp_5fdmat2x4',['highp_dmat2x4',['../a00284.html#ga5ceeb46670fdc000a0701910cc5061c9',1,'glm']]], ['highp_5fdmat3',['highp_dmat3',['../a00284.html#ga86d6d4dbad92ffdcc759773340e15a97',1,'glm']]], ['highp_5fdmat3x2',['highp_dmat3x2',['../a00284.html#ga3647309010a2160e9ec89bc6f7c95c35',1,'glm']]], ['highp_5fdmat3x3',['highp_dmat3x3',['../a00284.html#gae367ea93c4ad8a7c101dd27b8b2b04ce',1,'glm']]], ['highp_5fdmat3x4',['highp_dmat3x4',['../a00284.html#ga6543eeeb64f48d79a0b96484308c50f0',1,'glm']]], ['highp_5fdmat4',['highp_dmat4',['../a00284.html#ga945254f459860741138bceb74da496b9',1,'glm']]], ['highp_5fdmat4x2',['highp_dmat4x2',['../a00284.html#gaeda1f474c668eaecc443bea85a4a4eca',1,'glm']]], ['highp_5fdmat4x3',['highp_dmat4x3',['../a00284.html#gacf237c2d8832fe8db2d7e187585d34bd',1,'glm']]], ['highp_5fdmat4x4',['highp_dmat4x4',['../a00284.html#ga118d24a3d12c034e7cccef7bf2f01b8a',1,'glm']]], 
['highp_5fdquat',['highp_dquat',['../a00250.html#gaf13a25f41afc03480b40fc71bd249cec',1,'glm']]], ['highp_5fdualquat',['highp_dualquat',['../a00317.html#ga9ef5bf1da52a9d4932335a517086ceaf',1,'glm']]], ['highp_5fdvec1',['highp_dvec1',['../a00269.html#ga77c22c4426da3a6865c88d3fc907e3fe',1,'glm']]], ['highp_5fdvec2',['highp_dvec2',['../a00282.html#gab98d77cca255914f5e29697fcbc2d975',1,'glm']]], ['highp_5fdvec3',['highp_dvec3',['../a00282.html#gab24dc20dcdc5b71282634bdbf6b70105',1,'glm']]], ['highp_5fdvec4',['highp_dvec4',['../a00282.html#gab654f4ed4a99d64a6cfc65320c2a7590',1,'glm']]], ['highp_5ff32',['highp_f32',['../a00304.html#ga6906e1ef0b34064b4b675489c5c38725',1,'glm']]], ['highp_5ff32mat2',['highp_f32mat2',['../a00304.html#ga298f7d4d273678d0282812368da27fda',1,'glm']]], ['highp_5ff32mat2x2',['highp_f32mat2x2',['../a00304.html#gae5eb02d92b7d4605a4b7f37ae5cb2968',1,'glm']]], ['highp_5ff32mat2x3',['highp_f32mat2x3',['../a00304.html#ga0aeb5cb001473b08c88175012708a379',1,'glm']]], ['highp_5ff32mat2x4',['highp_f32mat2x4',['../a00304.html#ga88938ee1e7981fa3402e88da6ad74531',1,'glm']]], ['highp_5ff32mat3',['highp_f32mat3',['../a00304.html#ga24f9ef3263b1638564713892cc37981f',1,'glm']]], ['highp_5ff32mat3x2',['highp_f32mat3x2',['../a00304.html#ga36537e701456f12c20e73f469cac4967',1,'glm']]], ['highp_5ff32mat3x3',['highp_f32mat3x3',['../a00304.html#gaab691ae40c37976d268d8cac0096e0e1',1,'glm']]], ['highp_5ff32mat3x4',['highp_f32mat3x4',['../a00304.html#gaa5086dbd6efb272d13fc88829330861d',1,'glm']]], ['highp_5ff32mat4',['highp_f32mat4',['../a00304.html#ga14c90ca49885723f51d06e295587236f',1,'glm']]], ['highp_5ff32mat4x2',['highp_f32mat4x2',['../a00304.html#ga602e119c6b246b4f6edcf66845f2aa0f',1,'glm']]], ['highp_5ff32mat4x3',['highp_f32mat4x3',['../a00304.html#ga66bffdd8e5c0d3ef9958bbab9ca1ba59',1,'glm']]], ['highp_5ff32mat4x4',['highp_f32mat4x4',['../a00304.html#gaf1b712b97b2322685fbbed28febe5f84',1,'glm']]], 
['highp_5ff32quat',['highp_f32quat',['../a00304.html#ga4252cf7f5b0e3cd47c3d3badf0ef43b3',1,'glm']]], ['highp_5ff32vec1',['highp_f32vec1',['../a00304.html#gab1b1c9e8667902b78b2c330e4d383a61',1,'glm']]], ['highp_5ff32vec2',['highp_f32vec2',['../a00304.html#ga0b8ebd4262331e139ff257d7cf2a4b77',1,'glm']]], ['highp_5ff32vec3',['highp_f32vec3',['../a00304.html#ga522775dbcc6d96246a1c5cf02344fd8c',1,'glm']]], ['highp_5ff32vec4',['highp_f32vec4',['../a00304.html#ga0f038d4e09862a74f03d102c59eda73e',1,'glm']]], ['highp_5ff64',['highp_f64',['../a00304.html#ga51d5266017d88f62737c1973923a7cf4',1,'glm']]], ['highp_5ff64mat2',['highp_f64mat2',['../a00304.html#gaf7adb92ce8de0afaff01436b039fd924',1,'glm']]], ['highp_5ff64mat2x2',['highp_f64mat2x2',['../a00304.html#ga773ea237a051827cfc20de960bc73ff0',1,'glm']]], ['highp_5ff64mat2x3',['highp_f64mat2x3',['../a00304.html#ga8342c7469384c6d769cacc9e309278d9',1,'glm']]], ['highp_5ff64mat2x4',['highp_f64mat2x4',['../a00304.html#ga5a67a7440b9c0d1538533540f99036a5',1,'glm']]], ['highp_5ff64mat3',['highp_f64mat3',['../a00304.html#ga609bf0ace941d6ab1bb2f9522a04e546',1,'glm']]], ['highp_5ff64mat3x2',['highp_f64mat3x2',['../a00304.html#ga5bdbfb4ce7d05ce1e1b663f50be17e8a',1,'glm']]], ['highp_5ff64mat3x3',['highp_f64mat3x3',['../a00304.html#ga7c2cadb9b85cc7e0d125db21ca19dea4',1,'glm']]], ['highp_5ff64mat3x4',['highp_f64mat3x4',['../a00304.html#gad310b1dddeec9ec837a104e7db8de580',1,'glm']]], ['highp_5ff64mat4',['highp_f64mat4',['../a00304.html#gad308e0ed27d64daa4213fb257fcbd5a5',1,'glm']]], ['highp_5ff64mat4x2',['highp_f64mat4x2',['../a00304.html#ga58c4631421e323e252fc716b6103e38c',1,'glm']]], ['highp_5ff64mat4x3',['highp_f64mat4x3',['../a00304.html#gae94823d65648e44d972863c6caa13103',1,'glm']]], ['highp_5ff64mat4x4',['highp_f64mat4x4',['../a00304.html#ga09a2374b725c4246d263ee36fb66434c',1,'glm']]], ['highp_5ff64quat',['highp_f64quat',['../a00304.html#gafcfdd74a115163af2ce1093551747352',1,'glm']]], 
['highp_5ff64vec1',['highp_f64vec1',['../a00304.html#ga62c31b133ceee9984fbee05ac4c434a9',1,'glm']]], ['highp_5ff64vec2',['highp_f64vec2',['../a00304.html#ga670ea1b0a1172bc73b1d7c1e0c26cce2',1,'glm']]], ['highp_5ff64vec3',['highp_f64vec3',['../a00304.html#gacd1196090ece7a69fb5c3e43a7d4d851',1,'glm']]], ['highp_5ff64vec4',['highp_f64vec4',['../a00304.html#ga61185c44c8cc0b25d9a0f67d8a267444',1,'glm']]], ['highp_5ffdualquat',['highp_fdualquat',['../a00317.html#ga4c4e55e9c99dc57b299ed590968da564',1,'glm']]], ['highp_5ffloat32',['highp_float32',['../a00304.html#gac5a7f21136e0a78d0a1b9f60ef2f8aea',1,'glm']]], ['highp_5ffloat32_5ft',['highp_float32_t',['../a00304.html#ga5376ef18dca9d248897c3363ef5a06b2',1,'glm']]], ['highp_5ffloat64',['highp_float64',['../a00304.html#gadbb198a4d7aad82a0f4dc466ef6f6215',1,'glm']]], ['highp_5ffloat64_5ft',['highp_float64_t',['../a00304.html#gaaeeb0077198cff40e3f48b1108ece139',1,'glm']]], ['highp_5ffmat2',['highp_fmat2',['../a00304.html#gae98c88d9a7befa9b5877f49176225535',1,'glm']]], ['highp_5ffmat2x2',['highp_fmat2x2',['../a00304.html#ga28635abcddb2f3e92c33c3f0fcc682ad',1,'glm']]], ['highp_5ffmat2x3',['highp_fmat2x3',['../a00304.html#gacf111095594996fef29067b2454fccad',1,'glm']]], ['highp_5ffmat2x4',['highp_fmat2x4',['../a00304.html#ga4920a1536f161f7ded1d6909b7fef0d2',1,'glm']]], ['highp_5ffmat3',['highp_fmat3',['../a00304.html#gaed2dc69e0d507d4191092dbd44b3eb75',1,'glm']]], ['highp_5ffmat3x2',['highp_fmat3x2',['../a00304.html#gae54e4d1aeb5a0f0c64822e6f1b299e19',1,'glm']]], ['highp_5ffmat3x3',['highp_fmat3x3',['../a00304.html#gaa5b44d3ef6efcf33f44876673a7a936e',1,'glm']]], ['highp_5ffmat3x4',['highp_fmat3x4',['../a00304.html#ga961fac2a885907ffcf4d40daac6615c5',1,'glm']]], ['highp_5ffmat4',['highp_fmat4',['../a00304.html#gabf28443ce0cc0959077ec39b21f32c39',1,'glm']]], ['highp_5ffmat4x2',['highp_fmat4x2',['../a00304.html#ga076961cf2d120c7168b957cb2ed107b3',1,'glm']]], 
['highp_5ffmat4x3',['highp_fmat4x3',['../a00304.html#gae406ec670f64170a7437b5e302eeb2cb',1,'glm']]], ['highp_5ffmat4x4',['highp_fmat4x4',['../a00304.html#gaee80c7cd3caa0f2635058656755f6f69',1,'glm']]], ['highp_5ffvec1',['highp_fvec1',['../a00304.html#gaa1040342c4efdedc8f90e6267db8d41c',1,'glm']]], ['highp_5ffvec2',['highp_fvec2',['../a00304.html#ga7c0d196f5fa79f7e892a2f323a0be1ae',1,'glm']]], ['highp_5ffvec3',['highp_fvec3',['../a00304.html#ga6ef77413883f48d6b53b4169b25edbd0',1,'glm']]], ['highp_5ffvec4',['highp_fvec4',['../a00304.html#ga8b839abbb44f5102609eed89f6ed61f7',1,'glm']]], ['highp_5fi16',['highp_i16',['../a00304.html#ga0336abc2604dd2c20c30e036454b64f8',1,'glm']]], ['highp_5fi16vec1',['highp_i16vec1',['../a00304.html#ga70fdfcc1fd38084bde83c3f06a8b9f19',1,'glm']]], ['highp_5fi16vec2',['highp_i16vec2',['../a00304.html#gaa7db3ad10947cf70cae6474d05ebd227',1,'glm']]], ['highp_5fi16vec3',['highp_i16vec3',['../a00304.html#ga5609c8fa2b7eac3dec337d321cb0ca96',1,'glm']]], ['highp_5fi16vec4',['highp_i16vec4',['../a00304.html#ga7a18659438828f91ccca28f1a1e067b4',1,'glm']]], ['highp_5fi32',['highp_i32',['../a00304.html#ga727675ac6b5d2fc699520e0059735e25',1,'glm']]], ['highp_5fi32vec1',['highp_i32vec1',['../a00304.html#ga6a9d71cc62745302f70422b7dc98755c',1,'glm']]], ['highp_5fi32vec2',['highp_i32vec2',['../a00304.html#gaa9b4579f8e6f3d9b649a965bcb785530',1,'glm']]], ['highp_5fi32vec3',['highp_i32vec3',['../a00304.html#ga31e070ea3bdee623e6e18a61ba5718b1',1,'glm']]], ['highp_5fi32vec4',['highp_i32vec4',['../a00304.html#gadf70eaaa230aeed5a4c9f4c9c5c55902',1,'glm']]], ['highp_5fi64',['highp_i64',['../a00304.html#gac25db6d2b1e2a0f351b77ba3409ac4cd',1,'glm']]], ['highp_5fi64vec1',['highp_i64vec1',['../a00304.html#gabd2fda3cd208acf5a370ec9b5b3c58d4',1,'glm']]], ['highp_5fi64vec2',['highp_i64vec2',['../a00304.html#gad9d1903cb20899966e8ebe0670889a5f',1,'glm']]], ['highp_5fi64vec3',['highp_i64vec3',['../a00304.html#ga62324224b9c6cce9c6b4db96bb704a8a',1,'glm']]], 
['highp_5fi64vec4',['highp_i64vec4',['../a00304.html#gad23b1be9b3bf20352089a6b738f0ebba',1,'glm']]], ['highp_5fi8',['highp_i8',['../a00304.html#gacb88796f2d08ef253d0345aff20c3aee',1,'glm']]], ['highp_5fi8vec1',['highp_i8vec1',['../a00304.html#ga1d8c10949691b0fd990253476f47beb3',1,'glm']]], ['highp_5fi8vec2',['highp_i8vec2',['../a00304.html#ga50542e4cb9b2f9bec213b66e06145d07',1,'glm']]], ['highp_5fi8vec3',['highp_i8vec3',['../a00304.html#ga8396bfdc081d9113190d0c39c9f67084',1,'glm']]], ['highp_5fi8vec4',['highp_i8vec4',['../a00304.html#ga4824e3ddf6e608117dfe4809430737b4',1,'glm']]], ['highp_5fimat2',['highp_imat2',['../a00294.html#ga8499cc3b016003f835314c1c756e9db9',1,'glm']]], ['highp_5fimat2x2',['highp_imat2x2',['../a00294.html#gaa389e2d1c3b10941cae870bc0aeba5b3',1,'glm']]], ['highp_5fimat2x3',['highp_imat2x3',['../a00294.html#gaba49d890e06c9444795f5a133fbf1336',1,'glm']]], ['highp_5fimat2x4',['highp_imat2x4',['../a00294.html#ga05a970fd4366dad6c8a0be676b1eae5b',1,'glm']]], ['highp_5fimat3',['highp_imat3',['../a00294.html#gaca4506a3efa679eff7c006d9826291fd',1,'glm']]], ['highp_5fimat3x2',['highp_imat3x2',['../a00294.html#ga91c671c3ff9706c2393e78b22fd84bcb',1,'glm']]], ['highp_5fimat3x3',['highp_imat3x3',['../a00294.html#ga07d7b7173e2a6f843ff5f1c615a95b41',1,'glm']]], ['highp_5fimat3x4',['highp_imat3x4',['../a00294.html#ga53008f580be99018a17b357b5a4ffc0d',1,'glm']]], ['highp_5fimat4',['highp_imat4',['../a00294.html#ga7cfb09b34e0fcf73eaf6512d6483ef56',1,'glm']]], ['highp_5fimat4x2',['highp_imat4x2',['../a00294.html#ga1858820fb292cae396408b2034407f72',1,'glm']]], ['highp_5fimat4x3',['highp_imat4x3',['../a00294.html#ga6be0b80ae74bb309bc5b964d93d68fc5',1,'glm']]], ['highp_5fimat4x4',['highp_imat4x4',['../a00294.html#ga2c783ee6f8f040ab37df2f70392c8b44',1,'glm']]], ['highp_5fint16',['highp_int16',['../a00304.html#ga5fde0fa4a3852a9dd5d637a92ee74718',1,'glm']]], ['highp_5fint16_5ft',['highp_int16_t',['../a00304.html#gacaea06d0a79ef3172e887a7a6ba434ff',1,'glm']]], 
['highp_5fint32',['highp_int32',['../a00304.html#ga84ed04b4e0de18c977e932d617e7c223',1,'glm']]], ['highp_5fint32_5ft',['highp_int32_t',['../a00304.html#ga2c71c8bd9e2fe7d2e93ca250d8b6157f',1,'glm']]], ['highp_5fint64',['highp_int64',['../a00304.html#ga226a8d52b4e3f77aaa6231135e886aac',1,'glm']]], ['highp_5fint64_5ft',['highp_int64_t',['../a00304.html#ga73c6abb280a45feeff60f9accaee91f3',1,'glm']]], ['highp_5fint8',['highp_int8',['../a00304.html#gad0549c902a96a7164e4ac858d5f39dbf',1,'glm']]], ['highp_5fint8_5ft',['highp_int8_t',['../a00304.html#ga1085c50dd8fbeb5e7e609b1c127492a5',1,'glm']]], ['highp_5fivec1',['highp_ivec1',['../a00273.html#ga7e02566f2bd2caa68e61be45a477c77e',1,'glm']]], ['highp_5fivec2',['highp_ivec2',['../a00282.html#gaa18f6b80b41c214f10666948539c1f93',1,'glm']]], ['highp_5fivec3',['highp_ivec3',['../a00282.html#ga7dd782c3ef5719bc6d5c3ca826b8ad18',1,'glm']]], ['highp_5fivec4',['highp_ivec4',['../a00282.html#gafb84dccdf5d82443df3ffc8428dcaf3e',1,'glm']]], ['highp_5fmat2',['highp_mat2',['../a00284.html#ga4d5a0055544a516237dcdace049b143d',1,'glm']]], ['highp_5fmat2x2',['highp_mat2x2',['../a00284.html#ga2352ae43b284c9f71446674c0208c05d',1,'glm']]], ['highp_5fmat2x3',['highp_mat2x3',['../a00284.html#ga7a0e3fe41512b0494e598f5c58722f19',1,'glm']]], ['highp_5fmat2x4',['highp_mat2x4',['../a00284.html#ga61f36a81f2ed1b5f9fc8bc3b26faec8f',1,'glm']]], ['highp_5fmat3',['highp_mat3',['../a00284.html#ga3fd9849f3da5ed6e3decc3fb10a20b3e',1,'glm']]], ['highp_5fmat3x2',['highp_mat3x2',['../a00284.html#ga1eda47a00027ec440eac05d63739c71b',1,'glm']]], ['highp_5fmat3x3',['highp_mat3x3',['../a00284.html#ga2ea82e12f4d7afcfce8f59894d400230',1,'glm']]], ['highp_5fmat3x4',['highp_mat3x4',['../a00284.html#ga6454b3a26ea30f69de8e44c08a63d1b7',1,'glm']]], ['highp_5fmat4',['highp_mat4',['../a00284.html#gad72e13d669d039f12ae5afa23148adc1',1,'glm']]], ['highp_5fmat4x2',['highp_mat4x2',['../a00284.html#gab68b66e6d2c37b804d0baf970fa4f0e5',1,'glm']]], 
['highp_5fmat4x3',['highp_mat4x3',['../a00284.html#ga8d5a4e65fb976e4553b84995b95ecb38',1,'glm']]], ['highp_5fmat4x4',['highp_mat4x4',['../a00284.html#ga58cc504be0e3b61c48bc91554a767b9f',1,'glm']]], ['highp_5fquat',['highp_quat',['../a00253.html#gaa2fd8085774376310aeb80588e0eab6e',1,'glm']]], ['highp_5fu16',['highp_u16',['../a00304.html#ga8e62c883d13f47015f3b70ed88751369',1,'glm']]], ['highp_5fu16vec1',['highp_u16vec1',['../a00304.html#gad064202b4cf9a2972475c03de657cb39',1,'glm']]], ['highp_5fu16vec2',['highp_u16vec2',['../a00304.html#ga791b15ceb3f1e09d1a0ec6f3057ca159',1,'glm']]], ['highp_5fu16vec3',['highp_u16vec3',['../a00304.html#gacfd806749008f0ade6ac4bb9dd91082f',1,'glm']]], ['highp_5fu16vec4',['highp_u16vec4',['../a00304.html#ga8a85a3d54a8a9e14fe7a1f96196c4f61',1,'glm']]], ['highp_5fu32',['highp_u32',['../a00304.html#ga7a6f1929464dcc680b16381a4ee5f2cf',1,'glm']]], ['highp_5fu32vec1',['highp_u32vec1',['../a00304.html#ga0e35a565b9036bfc3989f5e23a0792e3',1,'glm']]], ['highp_5fu32vec2',['highp_u32vec2',['../a00304.html#ga2f256334f83fba4c2d219e414b51df6c',1,'glm']]], ['highp_5fu32vec3',['highp_u32vec3',['../a00304.html#gaf14d7a50502464e7cbfa074f24684cb1',1,'glm']]], ['highp_5fu32vec4',['highp_u32vec4',['../a00304.html#ga22166f0da65038b447f3c5e534fff1c2',1,'glm']]], ['highp_5fu64',['highp_u64',['../a00304.html#ga0c181fdf06a309691999926b6690c969',1,'glm']]], ['highp_5fu64vec1',['highp_u64vec1',['../a00304.html#gae4fe774744852c4d7d069be2e05257ab',1,'glm']]], ['highp_5fu64vec2',['highp_u64vec2',['../a00304.html#ga78f77b8b2d17b431ac5a68c0b5d7050d',1,'glm']]], ['highp_5fu64vec3',['highp_u64vec3',['../a00304.html#ga41bdabea6e589029659331ba47eb78c1',1,'glm']]], ['highp_5fu64vec4',['highp_u64vec4',['../a00304.html#ga4f15b41aa24b11cc42ad5798c04a2325',1,'glm']]], ['highp_5fu8',['highp_u8',['../a00304.html#gacd1259f3a9e8d2a9df5be2d74322ef9c',1,'glm']]], ['highp_5fu8vec1',['highp_u8vec1',['../a00304.html#ga8408cb76b6550ff01fa0a3024e7b68d2',1,'glm']]], 
['highp_5fu8vec2',['highp_u8vec2',['../a00304.html#ga27585b7c3ab300059f11fcba465f6fd2',1,'glm']]], ['highp_5fu8vec3',['highp_u8vec3',['../a00304.html#ga45721c13b956eb691cbd6c6c1429167a',1,'glm']]], ['highp_5fu8vec4',['highp_u8vec4',['../a00304.html#gae0b75ad0fed8c00ddc0b5ce335d31060',1,'glm']]], ['highp_5fuint16',['highp_uint16',['../a00304.html#ga746dc6da204f5622e395f492997dbf57',1,'glm']]], ['highp_5fuint16_5ft',['highp_uint16_t',['../a00304.html#gacf54c3330ef60aa3d16cb676c7bcb8c7',1,'glm']]], ['highp_5fuint32',['highp_uint32',['../a00304.html#ga256b12b650c3f2fb86878fd1c5db8bc3',1,'glm']]], ['highp_5fuint32_5ft',['highp_uint32_t',['../a00304.html#gae978599c9711ac263ba732d4ac225b0e',1,'glm']]], ['highp_5fuint64',['highp_uint64',['../a00304.html#gaa38d732f5d4a7bc42a1b43b9d3c141ce',1,'glm']]], ['highp_5fuint64_5ft',['highp_uint64_t',['../a00304.html#gaa46172d7dc1c7ffe3e78107ff88adf08',1,'glm']]], ['highp_5fuint8',['highp_uint8',['../a00304.html#ga97432f9979e73e66567361fd01e4cffb',1,'glm']]], ['highp_5fuint8_5ft',['highp_uint8_t',['../a00304.html#gac4e00a26a2adb5f2c0a7096810df29e5',1,'glm']]], ['highp_5fumat2',['highp_umat2',['../a00294.html#ga42cbce64c4c1cd121b8437daa6e110de',1,'glm']]], ['highp_5fumat2x2',['highp_umat2x2',['../a00294.html#ga5337b7bc95f9cbac08a0c00b3f936b28',1,'glm']]], ['highp_5fumat2x3',['highp_umat2x3',['../a00294.html#ga90718c7128320b24b52f9ea70e643ad4',1,'glm']]], ['highp_5fumat2x4',['highp_umat2x4',['../a00294.html#gadca0a4724b4a6f56a2355b6f6e19248b',1,'glm']]], ['highp_5fumat3',['highp_umat3',['../a00294.html#gaa1143120339b7d2d469d327662e8a172',1,'glm']]], ['highp_5fumat3x2',['highp_umat3x2',['../a00294.html#ga844a5da2e7fc03fc7cccc7f1b70809c4',1,'glm']]], ['highp_5fumat3x3',['highp_umat3x3',['../a00294.html#ga1f7d41c36b980774a4d2e7c1647fb4b2',1,'glm']]], ['highp_5fumat3x4',['highp_umat3x4',['../a00294.html#ga25ee15c323924f2d0fe9896d329e5086',1,'glm']]], 
['highp_5fumat4',['highp_umat4',['../a00294.html#gaf665e4e78c2cc32a54ab40325738f9c9',1,'glm']]], ['highp_5fumat4x2',['highp_umat4x2',['../a00294.html#gae69eb82ec08b0dc9bf2ead2a339ff801',1,'glm']]], ['highp_5fumat4x3',['highp_umat4x3',['../a00294.html#ga45a8163d02c43216252056b0c120f3a5',1,'glm']]], ['highp_5fumat4x4',['highp_umat4x4',['../a00294.html#ga6a56cbb769aed334c95241664415f9ba',1,'glm']]], ['highp_5fuvec1',['highp_uvec1',['../a00277.html#gacda57dd8c2bff4934c7f09ddd87c0f39',1,'glm']]], ['highp_5fuvec2',['highp_uvec2',['../a00282.html#gad5dd50da9e37387ca6b4e6f9c80fe6f8',1,'glm']]], ['highp_5fuvec3',['highp_uvec3',['../a00282.html#gaef61508dd40ec523416697982f9ceaae',1,'glm']]], ['highp_5fuvec4',['highp_uvec4',['../a00282.html#gaeebd7dd9f3e678691f8620241e5f9221',1,'glm']]], ['highp_5fvec1',['highp_vec1',['../a00271.html#ga9e8ed21862a897c156c0b2abca70b1e9',1,'glm']]], ['highp_5fvec2',['highp_vec2',['../a00282.html#gaa92c1954d71b1e7914874bd787b43d1c',1,'glm']]], ['highp_5fvec3',['highp_vec3',['../a00282.html#gaca61dfaccbf2f58f2d8063a4e76b44a9',1,'glm']]], ['highp_5fvec4',['highp_vec4',['../a00282.html#gad281decae52948b82feb3a9db8f63a7b',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_5.html ================================================
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_5.js ================================================ var searchData= [ ['i16',['i16',['../a00304.html#ga3ab5fe184343d394fb6c2723c3ee3699',1,'glm']]], ['i16vec1',['i16vec1',['../a00304.html#gafe730798732aa7b0647096a004db1b1c',1,'glm']]], ['i16vec2',['i16vec2',['../a00304.html#ga2996630ba7b10535af8e065cf326f761',1,'glm']]], ['i16vec3',['i16vec3',['../a00304.html#gae9c90a867a6026b1f6eab00456f3fb8b',1,'glm']]], ['i16vec4',['i16vec4',['../a00304.html#ga550831bfc26d1e0101c1cb3d79938c06',1,'glm']]], ['i32',['i32',['../a00304.html#ga96faea43ac5f875d2d3ffbf8d213e3eb',1,'glm']]], ['i32vec1',['i32vec1',['../a00304.html#ga54b8a4e0f5a7203a821bf8e9c1265bcf',1,'glm']]], ['i32vec2',['i32vec2',['../a00304.html#ga8b44026374982dcd1e52d22bac99247e',1,'glm']]], ['i32vec3',['i32vec3',['../a00304.html#ga7f526b5cccef126a2ebcf9bdd890394e',1,'glm']]], ['i32vec4',['i32vec4',['../a00304.html#ga866a05905c49912309ed1fa5f5980e61',1,'glm']]], ['i64',['i64',['../a00304.html#gadb997e409103d4da18abd837e636a496',1,'glm']]], ['i64vec1',['i64vec1',['../a00304.html#ga2b65767f8b5aed1bd1cf86c541662b50',1,'glm']]], ['i64vec2',['i64vec2',['../a00304.html#ga48310188e1d0c616bf8d78c92447523b',1,'glm']]], ['i64vec3',['i64vec3',['../a00304.html#ga667948cfe6fb3d6606c750729ec49f77',1,'glm']]], ['i64vec4',['i64vec4',['../a00304.html#gaa4e31c3d9de067029efeb161a44b0232',1,'glm']]], ['i8',['i8',['../a00304.html#ga302ec977b0c0c3ea245b6c9275495355',1,'glm']]], ['i8vec1',['i8vec1',['../a00304.html#ga7e80d927ff0a3861ced68dfff8a4020b',1,'glm']]], ['i8vec2',['i8vec2',['../a00304.html#gad06935764d78f43f9d542c784c2212ec',1,'glm']]], ['i8vec3',['i8vec3',['../a00304.html#ga5a08d36cf7917cd19d081a603d0eae3e',1,'glm']]], ['i8vec4',['i8vec4',['../a00304.html#ga4177a44206121dabc8c4ff1c0f544574',1,'glm']]], 
['imat2',['imat2',['../a00294.html#gaabe04f9948d4a213bb1c20137de03e01',1,'glm']]], ['imat2x2',['imat2x2',['../a00294.html#gaa4732a240522ad9bc28144fda2fc14ec',1,'glm']]], ['imat2x3',['imat2x3',['../a00294.html#ga3f42dd3d5d94a0fd5706f7ec8dd0c605',1,'glm']]], ['imat2x4',['imat2x4',['../a00294.html#ga9d8faafdca42583d67e792dd038fc668',1,'glm']]], ['imat3',['imat3',['../a00294.html#ga038f68437155ffa3c2583a15264a8195',1,'glm']]], ['imat3x2',['imat3x2',['../a00294.html#ga7b33bbe4f12c060892bd3cc8d4cd737f',1,'glm']]], ['imat3x3',['imat3x3',['../a00294.html#ga6aacc960f62e8f7d2fe9d32d5050e7a4',1,'glm']]], ['imat3x4',['imat3x4',['../a00294.html#ga6e9ce23496d8b08dfc302d4039694b58',1,'glm']]], ['imat4',['imat4',['../a00294.html#ga96b0d26a33b81bb6a60ca0f39682f7eb',1,'glm']]], ['imat4x2',['imat4x2',['../a00294.html#ga8ce7ef51d8b2c1901fa5414deccbc3fa',1,'glm']]], ['imat4x3',['imat4x3',['../a00294.html#ga705ee0bf49d6c3de4404ce2481bf0df5',1,'glm']]], ['imat4x4',['imat4x4',['../a00294.html#ga43ed5e4f475b6f4cad7cba78f29c405b',1,'glm']]], ['int1',['int1',['../a00315.html#ga0670a2111b5e4a6410bd027fa0232fc3',1,'glm']]], ['int16',['int16',['../a00260.html#ga259fa4834387bd68627ddf37bb3ebdb9',1,'glm']]], ['int16_5ft',['int16_t',['../a00304.html#gae8f5e3e964ca2ae240adc2c0d74adede',1,'glm']]], ['int1x1',['int1x1',['../a00315.html#ga056ffe02d3a45af626f8e62221881c7a',1,'glm']]], ['int2',['int2',['../a00315.html#gafe3a8fd56354caafe24bfe1b1e3ad22a',1,'glm']]], ['int2x2',['int2x2',['../a00315.html#ga4e5ce477c15836b21e3c42daac68554d',1,'glm']]], ['int2x3',['int2x3',['../a00315.html#ga197ded5ad8354f6b6fb91189d7a269b3',1,'glm']]], ['int2x4',['int2x4',['../a00315.html#ga2749d59a7fddbac44f34ba78e57ef807',1,'glm']]], ['int3',['int3',['../a00315.html#ga909c38a425f215a50c847145d7da09f0',1,'glm']]], ['int32',['int32',['../a00260.html#ga43d43196463bde49cb067f5c20ab8481',1,'glm']]], ['int32_5ft',['int32_t',['../a00304.html#ga042ef09ff2f0cb24a36f541bcb3a3710',1,'glm']]], 
['int3x2',['int3x2',['../a00315.html#gaa4cbe16a92cf3664376c7a2fc5126aa8',1,'glm']]], ['int3x3',['int3x3',['../a00315.html#ga15c9649286f0bf431bdf9b3509580048',1,'glm']]], ['int3x4',['int3x4',['../a00315.html#gaacac46ddc7d15d0f9529d05c92946a0f',1,'glm']]], ['int4',['int4',['../a00315.html#gaecdef18c819c205aeee9f94dc93de56a',1,'glm']]], ['int4x2',['int4x2',['../a00315.html#ga97a39dd9bc7d572810d80b8467cbffa1',1,'glm']]], ['int4x3',['int4x3',['../a00315.html#gae4a2c53f14aeec9a17c2b81142b7e82d',1,'glm']]], ['int4x4',['int4x4',['../a00315.html#ga04dee1552424198b8f58b377c2ee00d8',1,'glm']]], ['int64',['int64',['../a00260.html#gaff5189f97f9e842d9636a0f240001b2e',1,'glm']]], ['int64_5ft',['int64_t',['../a00304.html#ga322a7d7d2c2c68994dc872a33de63c61',1,'glm']]], ['int8',['int8',['../a00260.html#ga1b956fe1df85f3c132b21edb4e116458',1,'glm']]], ['int8_5ft',['int8_t',['../a00304.html#ga4bf09d8838a86866b39ee6e109341645',1,'glm']]], ['ivec1',['ivec1',['../a00272.html#gaedd0562c2e77714929d7723a7e2e0dba',1,'glm']]], ['ivec2',['ivec2',['../a00281.html#ga6f9269106d91b2d2b91bcf27cd5f5560',1,'glm']]], ['ivec3',['ivec3',['../a00281.html#gad0d784d8eee201aca362484d2daee46c',1,'glm']]], ['ivec4',['ivec4',['../a00281.html#ga5abb4603dae0ce58c595e66d9123d812',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_6.html ================================================
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_6.js ================================================ var searchData= [ ['lowp_5fbvec1',['lowp_bvec1',['../a00266.html#ga24a3d364e2ddd444f5b9e7975bbef8f9',1,'glm']]], ['lowp_5fbvec2',['lowp_bvec2',['../a00282.html#ga5a5452140650988b94d5716e4d872465',1,'glm']]], ['lowp_5fbvec3',['lowp_bvec3',['../a00282.html#ga79e0922a977662a8fd39d7829be3908b',1,'glm']]], ['lowp_5fbvec4',['lowp_bvec4',['../a00282.html#ga15ac87724048ab7169bb5d3572939dd3',1,'glm']]], ['lowp_5fddualquat',['lowp_ddualquat',['../a00317.html#gab4c5103338af3dac7e0fbc86895a3f1a',1,'glm']]], ['lowp_5fdmat2',['lowp_dmat2',['../a00284.html#gad8e2727a6e7aa68280245bb0022118e1',1,'glm']]], ['lowp_5fdmat2x2',['lowp_dmat2x2',['../a00284.html#gac61b94f5d9775f83f321bac899322fe2',1,'glm']]], ['lowp_5fdmat2x3',['lowp_dmat2x3',['../a00284.html#gaf6bf2f5bde7ad5b9c289f777b93094af',1,'glm']]], ['lowp_5fdmat2x4',['lowp_dmat2x4',['../a00284.html#ga97507a31ecee8609887d0f23bbde92c7',1,'glm']]], ['lowp_5fdmat3',['lowp_dmat3',['../a00284.html#ga0cab80beee64a5f8d2ae4e823983063a',1,'glm']]], ['lowp_5fdmat3x2',['lowp_dmat3x2',['../a00284.html#ga1e0ea3fba496bc7c6f620d2590acb66b',1,'glm']]], ['lowp_5fdmat3x3',['lowp_dmat3x3',['../a00284.html#gac017848a9df570f60916a21a297b1e8e',1,'glm']]], ['lowp_5fdmat3x4',['lowp_dmat3x4',['../a00284.html#ga93add35d2a44c5830978b827e8c295e8',1,'glm']]], ['lowp_5fdmat4',['lowp_dmat4',['../a00284.html#ga708bc5b91bbfedd21debac8dcf2a64cd',1,'glm']]], ['lowp_5fdmat4x2',['lowp_dmat4x2',['../a00284.html#ga382dc5295cead78766239a8457abfa98',1,'glm']]], ['lowp_5fdmat4x3',['lowp_dmat4x3',['../a00284.html#ga3d7ea07da7c6e5c81a3f4c8b3d44056e',1,'glm']]], ['lowp_5fdmat4x4',['lowp_dmat4x4',['../a00284.html#ga5b0413198b7e9f061f7534a221c9dac9',1,'glm']]], ['lowp_5fdquat',['lowp_dquat',['../a00250.html#ga9e6e5f42e67dd5877350ba485c191f1c',1,'glm']]], 
['lowp_5fdualquat',['lowp_dualquat',['../a00317.html#gade05d29ebd4deea0f883d0e1bb4169aa',1,'glm']]], ['lowp_5fdvec1',['lowp_dvec1',['../a00269.html#gaf906eb86b6e96c35138d0e4928e1435a',1,'glm']]], ['lowp_5fdvec2',['lowp_dvec2',['../a00282.html#ga108086730d086b7f6f7a033955dfb9c3',1,'glm']]], ['lowp_5fdvec3',['lowp_dvec3',['../a00282.html#ga42c518b2917e19ce6946a84c64a3a4b2',1,'glm']]], ['lowp_5fdvec4',['lowp_dvec4',['../a00282.html#ga0b4432cb8d910e406576d10d802e190d',1,'glm']]], ['lowp_5ff32',['lowp_f32',['../a00304.html#gaeea53879fc327293cf3352a409b7867b',1,'glm']]], ['lowp_5ff32mat2',['lowp_f32mat2',['../a00304.html#ga52409bc6d4a2ce3421526c069220d685',1,'glm']]], ['lowp_5ff32mat2x2',['lowp_f32mat2x2',['../a00304.html#ga1d091b6abfba1772450e1745a06525bc',1,'glm']]], ['lowp_5ff32mat2x3',['lowp_f32mat2x3',['../a00304.html#ga961ccb34cd1a5654c772c8709e001dc5',1,'glm']]], ['lowp_5ff32mat2x4',['lowp_f32mat2x4',['../a00304.html#gacc6bf0209dda0c7c14851a646071c974',1,'glm']]], ['lowp_5ff32mat3',['lowp_f32mat3',['../a00304.html#ga4187f89f196505b40e63f516139511e5',1,'glm']]], ['lowp_5ff32mat3x2',['lowp_f32mat3x2',['../a00304.html#gac53f9d7ab04eace67adad026092fb1e8',1,'glm']]], ['lowp_5ff32mat3x3',['lowp_f32mat3x3',['../a00304.html#ga841211b641cff1fcf861bdb14e5e4abc',1,'glm']]], ['lowp_5ff32mat3x4',['lowp_f32mat3x4',['../a00304.html#ga21b1b22dec013a72656e3644baf8a1e1',1,'glm']]], ['lowp_5ff32mat4',['lowp_f32mat4',['../a00304.html#ga766aed2871e6173a81011a877f398f04',1,'glm']]], ['lowp_5ff32mat4x2',['lowp_f32mat4x2',['../a00304.html#gae6f3fcb702a666de07650c149cfa845a',1,'glm']]], ['lowp_5ff32mat4x3',['lowp_f32mat4x3',['../a00304.html#gac21eda58a1475449a5709b412ebd776c',1,'glm']]], ['lowp_5ff32mat4x4',['lowp_f32mat4x4',['../a00304.html#ga4143d129898f91545948c46859adce44',1,'glm']]], ['lowp_5ff32quat',['lowp_f32quat',['../a00304.html#gaa3ba60ef8f69c6aeb1629594eaa95347',1,'glm']]], ['lowp_5ff32vec1',['lowp_f32vec1',['../a00304.html#ga43e5b41c834fcaf4db5a831c0e28128e',1,'glm']]], 
['lowp_5ff32vec2',['lowp_f32vec2',['../a00304.html#gaf3b694b2b8ded7e0b9f07b061917e1a0',1,'glm']]], ['lowp_5ff32vec3',['lowp_f32vec3',['../a00304.html#gaf739a2cd7b81783a43148b53e40d983b',1,'glm']]], ['lowp_5ff32vec4',['lowp_f32vec4',['../a00304.html#ga4e2e1debe022074ab224c9faf856d374',1,'glm']]], ['lowp_5ff64',['lowp_f64',['../a00304.html#gabc7a97c07cbfac8e35eb5e63beb4b679',1,'glm']]], ['lowp_5ff64mat2',['lowp_f64mat2',['../a00304.html#gafc730f6b4242763b0eda0ffa25150292',1,'glm']]], ['lowp_5ff64mat2x2',['lowp_f64mat2x2',['../a00304.html#ga771fda9109933db34f808d92b9b84d7e',1,'glm']]], ['lowp_5ff64mat2x3',['lowp_f64mat2x3',['../a00304.html#ga39e90adcffe33264bd608fa9c6bd184b',1,'glm']]], ['lowp_5ff64mat2x4',['lowp_f64mat2x4',['../a00304.html#ga50265a202fbfe0a25fc70066c31d9336',1,'glm']]], ['lowp_5ff64mat3',['lowp_f64mat3',['../a00304.html#ga58119a41d143ebaea0df70fe882e8a40',1,'glm']]], ['lowp_5ff64mat3x2',['lowp_f64mat3x2',['../a00304.html#gab0eb2d65514ee3e49905aa2caad8c0ad',1,'glm']]], ['lowp_5ff64mat3x3',['lowp_f64mat3x3',['../a00304.html#gac8f8a12ee03105ef8861dc652434e3b7',1,'glm']]], ['lowp_5ff64mat3x4',['lowp_f64mat3x4',['../a00304.html#gade8d1edfb23996ab6c622e65e3893271',1,'glm']]], ['lowp_5ff64mat4',['lowp_f64mat4',['../a00304.html#ga7451266e67794bd1125163502bc4a570',1,'glm']]], ['lowp_5ff64mat4x2',['lowp_f64mat4x2',['../a00304.html#gab0cecb80fd106bc369b9e46a165815ce',1,'glm']]], ['lowp_5ff64mat4x3',['lowp_f64mat4x3',['../a00304.html#gae731613b25db3a5ef5a05d21e57a57d3',1,'glm']]], ['lowp_5ff64mat4x4',['lowp_f64mat4x4',['../a00304.html#ga8c9cd734e03cd49674f3e287aa4a6f95',1,'glm']]], ['lowp_5ff64quat',['lowp_f64quat',['../a00304.html#gaa3ee2bc4af03cc06578b66b3e3f878ae',1,'glm']]], ['lowp_5ff64vec1',['lowp_f64vec1',['../a00304.html#gaf2d02c5f4d59135b9bc524fe317fd26b',1,'glm']]], ['lowp_5ff64vec2',['lowp_f64vec2',['../a00304.html#ga4e641a54d70c81eabf56c25c966d04bd',1,'glm']]], 
['lowp_5ff64vec3',['lowp_f64vec3',['../a00304.html#gae7a4711107b7d078fc5f03ce2227b90b',1,'glm']]], ['lowp_5ff64vec4',['lowp_f64vec4',['../a00304.html#gaa666bb9e6d204d3bea0b3a39a3a335f4',1,'glm']]], ['lowp_5ffdualquat',['lowp_fdualquat',['../a00317.html#gaa38f671be25a7f3b136a452a8bb42860',1,'glm']]], ['lowp_5ffloat32',['lowp_float32',['../a00304.html#ga41b0d390bd8cc827323b1b3816ff4bf8',1,'glm']]], ['lowp_5ffloat32_5ft',['lowp_float32_t',['../a00304.html#gaea881cae4ddc6c0fbf7cc5b08177ca5b',1,'glm']]], ['lowp_5ffloat64',['lowp_float64',['../a00304.html#ga3714dab2c16a6545a405cb0c3b3aaa6f',1,'glm']]], ['lowp_5ffloat64_5ft',['lowp_float64_t',['../a00304.html#ga7286a37076a09da140df18bfa75d4e38',1,'glm']]], ['lowp_5ffmat2',['lowp_fmat2',['../a00304.html#ga5bba0ce31210e274f73efacd3364c03f',1,'glm']]], ['lowp_5ffmat2x2',['lowp_fmat2x2',['../a00304.html#gab0feb11edd0d3ab3e8ed996d349a5066',1,'glm']]], ['lowp_5ffmat2x3',['lowp_fmat2x3',['../a00304.html#ga71cdb53801ed4c3aadb3603c04723210',1,'glm']]], ['lowp_5ffmat2x4',['lowp_fmat2x4',['../a00304.html#gaab217601c74974a84acbca428123ecf7',1,'glm']]], ['lowp_5ffmat3',['lowp_fmat3',['../a00304.html#ga83079315e230e8f39728f4bf0d2f9a9b',1,'glm']]], ['lowp_5ffmat3x2',['lowp_fmat3x2',['../a00304.html#ga49b98e7d71804af45d86886a489e633c',1,'glm']]], ['lowp_5ffmat3x3',['lowp_fmat3x3',['../a00304.html#gaba56275dd04a7a61560b0e8fa5d365b4',1,'glm']]], ['lowp_5ffmat3x4',['lowp_fmat3x4',['../a00304.html#ga28733aec7288191b314d42154fd0b690',1,'glm']]], ['lowp_5ffmat4',['lowp_fmat4',['../a00304.html#ga5803cb9ae26399762d8bba9e0b2fc09f',1,'glm']]], ['lowp_5ffmat4x2',['lowp_fmat4x2',['../a00304.html#ga5868c2dcce41cc3ea5edcaeae239f62c',1,'glm']]], ['lowp_5ffmat4x3',['lowp_fmat4x3',['../a00304.html#ga5e649bbdb135fbcb4bfe950f4c73a444',1,'glm']]], ['lowp_5ffmat4x4',['lowp_fmat4x4',['../a00304.html#gac2f5263708ac847b361a9841e74ddf9f',1,'glm']]], ['lowp_5ffvec1',['lowp_fvec1',['../a00304.html#ga346b2336fff168a7e0df1583aae3e5a5',1,'glm']]], 
['lowp_5ffvec2',['lowp_fvec2',['../a00304.html#ga62a32c31f4e2e8ca859663b6e3289a2d',1,'glm']]], ['lowp_5ffvec3',['lowp_fvec3',['../a00304.html#ga40b5c557efebb5bb99d6b9aa81095afa',1,'glm']]], ['lowp_5ffvec4',['lowp_fvec4',['../a00304.html#ga755484ffbe39ae3db2875953ed04e7b7',1,'glm']]], ['lowp_5fi16',['lowp_i16',['../a00304.html#ga392b673fd10847bfb78fb808c6cf8ff7',1,'glm']]], ['lowp_5fi16vec1',['lowp_i16vec1',['../a00304.html#ga501a2f313f1c220eef4ab02bdabdc3c6',1,'glm']]], ['lowp_5fi16vec2',['lowp_i16vec2',['../a00304.html#ga7cac84b520a6b57f2fbd880d3d63c51b',1,'glm']]], ['lowp_5fi16vec3',['lowp_i16vec3',['../a00304.html#gab69ef9cbc2a9214bf5596c528c801b72',1,'glm']]], ['lowp_5fi16vec4',['lowp_i16vec4',['../a00304.html#ga1d47d94d17c2406abdd1f087a816e387',1,'glm']]], ['lowp_5fi32',['lowp_i32',['../a00304.html#ga7ff73a45cea9613ebf1a9fad0b9f82ac',1,'glm']]], ['lowp_5fi32vec1',['lowp_i32vec1',['../a00304.html#gae31ac3608cf643ceffd6554874bec4a0',1,'glm']]], ['lowp_5fi32vec2',['lowp_i32vec2',['../a00304.html#ga867a3c2d99ab369a454167d2c0a24dbd',1,'glm']]], ['lowp_5fi32vec3',['lowp_i32vec3',['../a00304.html#ga5fe17c87ede1b1b4d92454cff4da076d',1,'glm']]], ['lowp_5fi32vec4',['lowp_i32vec4',['../a00304.html#gac9b2eb4296ffe50a32eacca9ed932c08',1,'glm']]], ['lowp_5fi64',['lowp_i64',['../a00304.html#ga354736e0c645099cd44c42fb2f87c2b8',1,'glm']]], ['lowp_5fi64vec1',['lowp_i64vec1',['../a00304.html#gab0f7d875db5f3cc9f3168c5a0ed56437',1,'glm']]], ['lowp_5fi64vec2',['lowp_i64vec2',['../a00304.html#gab485c48f06a4fdd6b8d58d343bb49f3c',1,'glm']]], ['lowp_5fi64vec3',['lowp_i64vec3',['../a00304.html#ga5cb1dc9e8d300c2cdb0d7ff2308fa36c',1,'glm']]], ['lowp_5fi64vec4',['lowp_i64vec4',['../a00304.html#gabb4229a4c1488bf063eed0c45355bb9c',1,'glm']]], ['lowp_5fi8',['lowp_i8',['../a00304.html#ga552a6bde5e75984efb0f863278da2e54',1,'glm']]], ['lowp_5fi8vec1',['lowp_i8vec1',['../a00304.html#ga036d6c7ca9fbbdc5f3871bfcb937c85c',1,'glm']]], 
['lowp_5fi8vec2',['lowp_i8vec2',['../a00304.html#gac03e5099d27eeaa74b6016ea435a1df2',1,'glm']]], ['lowp_5fi8vec3',['lowp_i8vec3',['../a00304.html#gae2f43ace6b5b33ab49516d9e40af1845',1,'glm']]], ['lowp_5fi8vec4',['lowp_i8vec4',['../a00304.html#ga6d388e9b9aa1b389f0672d9c7dfc61c5',1,'glm']]], ['lowp_5fimat2',['lowp_imat2',['../a00294.html#gaa0bff0be804142bb16d441aec0a7962e',1,'glm']]], ['lowp_5fimat2x2',['lowp_imat2x2',['../a00294.html#ga92b95b679975d408645547ab45a8dcd8',1,'glm']]], ['lowp_5fimat2x3',['lowp_imat2x3',['../a00294.html#ga8c9e7a388f8e7c52f1e6857dee8afb65',1,'glm']]], ['lowp_5fimat2x4',['lowp_imat2x4',['../a00294.html#ga9cc13bd1f8dd2933e9fa31fe3f70e16e',1,'glm']]], ['lowp_5fimat3',['lowp_imat3',['../a00294.html#ga69bfe668f4170379fc1f35d82b060c43',1,'glm']]], ['lowp_5fimat3x2',['lowp_imat3x2',['../a00294.html#ga33db8f27491d30906cd37c0d86b3f432',1,'glm']]], ['lowp_5fimat3x3',['lowp_imat3x3',['../a00294.html#ga664f061df00020048c3f8530329ace45',1,'glm']]], ['lowp_5fimat3x4',['lowp_imat3x4',['../a00294.html#ga9273faab33623d944af4080befbb2c80',1,'glm']]], ['lowp_5fimat4',['lowp_imat4',['../a00294.html#gad1e77f7270cad461ca4fcb4c3ec2e98c',1,'glm']]], ['lowp_5fimat4x2',['lowp_imat4x2',['../a00294.html#ga26ec1a2ba08a1488f5f05336858a0f09',1,'glm']]], ['lowp_5fimat4x3',['lowp_imat4x3',['../a00294.html#ga8f40483a3ae634ead8ad22272c543a33',1,'glm']]], ['lowp_5fimat4x4',['lowp_imat4x4',['../a00294.html#gaf65677e53ac8e31a107399340d5e2451',1,'glm']]], ['lowp_5fint16',['lowp_int16',['../a00304.html#ga698e36b01167fc0f037889334dce8def',1,'glm']]], ['lowp_5fint16_5ft',['lowp_int16_t',['../a00304.html#ga8b2cd8d31eb345b2d641d9261c38db1a',1,'glm']]], ['lowp_5fint32',['lowp_int32',['../a00304.html#ga864aabca5f3296e176e0c3ed9cc16b02',1,'glm']]], ['lowp_5fint32_5ft',['lowp_int32_t',['../a00304.html#ga0350631d35ff800e6133ac6243b13cbc',1,'glm']]], ['lowp_5fint64',['lowp_int64',['../a00304.html#gaf645b1a60203b39c0207baff5e3d8c3c',1,'glm']]], 
['lowp_5fint64_5ft',['lowp_int64_t',['../a00304.html#gaebf341fc4a5be233f7dde962c2e33847',1,'glm']]], ['lowp_5fint8',['lowp_int8',['../a00304.html#ga760bcf26fdb23a2c3ecad3c928a19ae6',1,'glm']]], ['lowp_5fint8_5ft',['lowp_int8_t',['../a00304.html#ga119c41d73fe9977358174eb3ac1035a3',1,'glm']]], ['lowp_5fivec1',['lowp_ivec1',['../a00273.html#ga836dbb1dc516c233b7f5fe9763bc15dc',1,'glm']]], ['lowp_5fivec2',['lowp_ivec2',['../a00282.html#ga8433c6c1fdd80c0a83941d94aff73fa0',1,'glm']]], ['lowp_5fivec3',['lowp_ivec3',['../a00282.html#gac1a86a75b3c68ebb704d7094043669d6',1,'glm']]], ['lowp_5fivec4',['lowp_ivec4',['../a00282.html#ga27fc23da61859cd6356326c5f1c796de',1,'glm']]], ['lowp_5fmat2',['lowp_mat2',['../a00284.html#gae400c4ce1f5f3e1fa12861b2baed331a',1,'glm']]], ['lowp_5fmat2x2',['lowp_mat2x2',['../a00284.html#ga2df7cdaf9a571ce7a1b09435f502c694',1,'glm']]], ['lowp_5fmat2x3',['lowp_mat2x3',['../a00284.html#ga3eee3a74d0f1de8635d846dfb29ec4bb',1,'glm']]], ['lowp_5fmat2x4',['lowp_mat2x4',['../a00284.html#gade27f8324a16626cbce5d3e7da66b070',1,'glm']]], ['lowp_5fmat3',['lowp_mat3',['../a00284.html#ga6271ebc85ed778ccc15458c3d86fc854',1,'glm']]], ['lowp_5fmat3x2',['lowp_mat3x2',['../a00284.html#gaabf6cf90fd31efe25c94965507e98390',1,'glm']]], ['lowp_5fmat3x3',['lowp_mat3x3',['../a00284.html#ga63362cb4a63fc1be7d2e49cd5d574c84',1,'glm']]], ['lowp_5fmat3x4',['lowp_mat3x4',['../a00284.html#gac5fc6786688eff02904ca5e7d6960092',1,'glm']]], ['lowp_5fmat4',['lowp_mat4',['../a00284.html#ga2dedee030500865267cd5851c00c139d',1,'glm']]], ['lowp_5fmat4x2',['lowp_mat4x2',['../a00284.html#gafa3cdb8f24d09d761ec9ae2a4c7e5e21',1,'glm']]], ['lowp_5fmat4x3',['lowp_mat4x3',['../a00284.html#ga534c3ef5c3b8fdd8656b6afc205b4b77',1,'glm']]], ['lowp_5fmat4x4',['lowp_mat4x4',['../a00284.html#ga686468a9a815bd4db8cddae42a6d6b87',1,'glm']]], ['lowp_5fquat',['lowp_quat',['../a00253.html#gade62c5316c1c11a79c34c00c189558eb',1,'glm']]], 
['lowp_5fu16',['lowp_u16',['../a00304.html#ga504ce1631cb2ac02fcf1d44d8c2aa126',1,'glm']]], ['lowp_5fu16vec1',['lowp_u16vec1',['../a00304.html#gaa6aab4ee7189b86716f5d7015d43021d',1,'glm']]], ['lowp_5fu16vec2',['lowp_u16vec2',['../a00304.html#ga2a7d997da9ac29cb931e35bd399f58df',1,'glm']]], ['lowp_5fu16vec3',['lowp_u16vec3',['../a00304.html#gac0253db6c3d3bae1f591676307a9dd8c',1,'glm']]], ['lowp_5fu16vec4',['lowp_u16vec4',['../a00304.html#gaa7f00459b9a2e5b2757e70afc0c189e1',1,'glm']]], ['lowp_5fu32',['lowp_u32',['../a00304.html#ga4f072ada9552e1e480bbb3b1acde5250',1,'glm']]], ['lowp_5fu32vec1',['lowp_u32vec1',['../a00304.html#gabed3be8dfdc4a0df4bf3271dbd7344c4',1,'glm']]], ['lowp_5fu32vec2',['lowp_u32vec2',['../a00304.html#gaf7e286e81347011e257ee779524e73b9',1,'glm']]], ['lowp_5fu32vec3',['lowp_u32vec3',['../a00304.html#gad3ad390560a671b1f676fbf03cd3aa15',1,'glm']]], ['lowp_5fu32vec4',['lowp_u32vec4',['../a00304.html#ga4502885718742aa238c36a312c3f3f20',1,'glm']]], ['lowp_5fu64',['lowp_u64',['../a00304.html#ga30069d1f02b19599cbfadf98c23ac6ed',1,'glm']]], ['lowp_5fu64vec1',['lowp_u64vec1',['../a00304.html#ga859be7b9d3a3765c1cafc14dbcf249a6',1,'glm']]], ['lowp_5fu64vec2',['lowp_u64vec2',['../a00304.html#ga581485db4ba6ddb501505ee711fd8e42',1,'glm']]], ['lowp_5fu64vec3',['lowp_u64vec3',['../a00304.html#gaa4a8682bec7ec8af666ef87fae38d5d1',1,'glm']]], ['lowp_5fu64vec4',['lowp_u64vec4',['../a00304.html#ga6fccc89c34045c86339f6fa781ce96de',1,'glm']]], ['lowp_5fu8',['lowp_u8',['../a00304.html#ga1b09f03da7ac43055c68a349d5445083',1,'glm']]], ['lowp_5fu8vec1',['lowp_u8vec1',['../a00304.html#ga4b2e0e10d8d154fec9cab50e216588ec',1,'glm']]], ['lowp_5fu8vec2',['lowp_u8vec2',['../a00304.html#gae6f63fa38635431e51a8f2602f15c566',1,'glm']]], ['lowp_5fu8vec3',['lowp_u8vec3',['../a00304.html#ga150dc47e31c6b8cf8461803c8d56f7bd',1,'glm']]], ['lowp_5fu8vec4',['lowp_u8vec4',['../a00304.html#ga9910927f3a4d1addb3da6a82542a8287',1,'glm']]], 
['lowp_5fuint16',['lowp_uint16',['../a00304.html#gad68bfd9f881856fc863a6ebca0b67f78',1,'glm']]], ['lowp_5fuint16_5ft',['lowp_uint16_t',['../a00304.html#ga91c4815f93177eb423362fd296a87e9f',1,'glm']]], ['lowp_5fuint32',['lowp_uint32',['../a00304.html#gaa6a5b461bbf5fe20982472aa51896d4b',1,'glm']]], ['lowp_5fuint32_5ft',['lowp_uint32_t',['../a00304.html#gaf1b735b4b1145174f4e4167d13778f9b',1,'glm']]], ['lowp_5fuint64',['lowp_uint64',['../a00304.html#gaa212b805736a759998e312cbdd550fae',1,'glm']]], ['lowp_5fuint64_5ft',['lowp_uint64_t',['../a00304.html#ga8dd3a3281ae5c970ffe0c41d538aa153',1,'glm']]], ['lowp_5fuint8',['lowp_uint8',['../a00304.html#gaf49470869e9be2c059629b250619804e',1,'glm']]], ['lowp_5fuint8_5ft',['lowp_uint8_t',['../a00304.html#ga667b2ece2b258be898812dc2177995d1',1,'glm']]], ['lowp_5fumat2',['lowp_umat2',['../a00294.html#gaf2fba702d990437fc88ff3f3a76846ee',1,'glm']]], ['lowp_5fumat2x2',['lowp_umat2x2',['../a00294.html#ga7b2e9d89745f7175051284e54c81d81c',1,'glm']]], ['lowp_5fumat2x3',['lowp_umat2x3',['../a00294.html#ga3072f90fd86f17a862e21589fbb14c0f',1,'glm']]], ['lowp_5fumat2x4',['lowp_umat2x4',['../a00294.html#ga8bb45fec4bd77bd81b4ae7eb961a270d',1,'glm']]], ['lowp_5fumat3',['lowp_umat3',['../a00294.html#gaf1145f72bcdd590f5808c4bc170c2924',1,'glm']]], ['lowp_5fumat3x2',['lowp_umat3x2',['../a00294.html#ga56ea68c6a6cba8d8c21d17bb14e69c6b',1,'glm']]], ['lowp_5fumat3x3',['lowp_umat3x3',['../a00294.html#ga4f660a39a395cc14f018f985e7dfbeb5',1,'glm']]], ['lowp_5fumat3x4',['lowp_umat3x4',['../a00294.html#gaec3d624306bd59649f021864709d56b5',1,'glm']]], ['lowp_5fumat4',['lowp_umat4',['../a00294.html#gac092c6105827bf9ea080db38074b78eb',1,'glm']]], ['lowp_5fumat4x2',['lowp_umat4x2',['../a00294.html#ga7716c2b210d141846f1ac4e774adef5e',1,'glm']]], ['lowp_5fumat4x3',['lowp_umat4x3',['../a00294.html#ga09ab33a2636f5f43f7fae29cfbc20fff',1,'glm']]], ['lowp_5fumat4x4',['lowp_umat4x4',['../a00294.html#ga10aafc66cf1a0ece336b1c5ae13d0cc0',1,'glm']]], 
['lowp_5fuvec1',['lowp_uvec1',['../a00277.html#ga8bf3fc8a7863d140f48b29341c750402',1,'glm']]], ['lowp_5fuvec2',['lowp_uvec2',['../a00282.html#ga752ee45136011301b64afd8c310c47a4',1,'glm']]], ['lowp_5fuvec3',['lowp_uvec3',['../a00282.html#ga7b2efbdd6bdc2f8250c57f3e5dc9a292',1,'glm']]], ['lowp_5fuvec4',['lowp_uvec4',['../a00282.html#ga5e6a632ec1165cf9f54ceeaa5e9b2b1e',1,'glm']]], ['lowp_5fvec1',['lowp_vec1',['../a00271.html#ga0a57630f03031706b1d26a7d70d9184c',1,'glm']]], ['lowp_5fvec2',['lowp_vec2',['../a00282.html#ga30e8baef5d56d5c166872a2bc00f36e9',1,'glm']]], ['lowp_5fvec3',['lowp_vec3',['../a00282.html#ga868e8e4470a3ef97c7ee3032bf90dc79',1,'glm']]], ['lowp_5fvec4',['lowp_vec4',['../a00282.html#gace3acb313c800552a9411953eb8b2ed7',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_7.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_7.js ================================================ var searchData= [ ['mat2',['mat2',['../a00283.html#ga8dd59e7fc6913ac5d61b86553e9148ba',1,'glm']]], ['mat2x2',['mat2x2',['../a00283.html#gaaa17ef6bfa4e4f2692348b1460c8efcb',1,'glm']]], ['mat2x3',['mat2x3',['../a00283.html#ga493ab21243abe564b3f7d381e677d29a',1,'glm']]], ['mat2x4',['mat2x4',['../a00283.html#ga8e879b57ddd81e5bf5a88929844e8b40',1,'glm']]], ['mat3',['mat3',['../a00283.html#gaefb0fc7a4960b782c18708bb6b655262',1,'glm']]], ['mat3x2',['mat3x2',['../a00280.html#ga2c27aea32de57d58aec8e92d5d2181e2',1,'glm']]], ['mat3x3',['mat3x3',['../a00283.html#gab91887d7565059dac640e3a1921c914a',1,'glm']]], ['mat3x4',['mat3x4',['../a00283.html#gaf991cad0b34f64e33af186326dbc4d66',1,'glm']]], ['mat4',['mat4',['../a00283.html#ga0db98d836c5549d31cf64ecd043b7af7',1,'glm']]], ['mat4x2',['mat4x2',['../a00283.html#gad941c947ad6cdd117a0e8554a4754983',1,'glm']]], ['mat4x3',['mat4x3',['../a00283.html#gac7574544bb94777bdbd2eb224eb72fd0',1,'glm']]], ['mat4x4',['mat4x4',['../a00283.html#gab2d35cc2655f44d60958d60a1de34e81',1,'glm']]], ['mediump_5fbvec1',['mediump_bvec1',['../a00266.html#ga7b4ccb989ba179fa44f7b0879c782621',1,'glm']]], ['mediump_5fbvec2',['mediump_bvec2',['../a00282.html#ga1e743764869efa9223c2bcefccedaddc',1,'glm']]], ['mediump_5fbvec3',['mediump_bvec3',['../a00282.html#ga50c783c25082882ef00fe2e5cddba4aa',1,'glm']]], ['mediump_5fbvec4',['mediump_bvec4',['../a00282.html#ga0be2c682258604a35004f088782a9645',1,'glm']]], ['mediump_5fddualquat',['mediump_ddualquat',['../a00317.html#ga0fb11e48e2d16348ccb06a25213641b4',1,'glm']]], ['mediump_5fdmat2',['mediump_dmat2',['../a00284.html#ga6205fd19be355600334edef6af0b27cb',1,'glm']]], ['mediump_5fdmat2x2',['mediump_dmat2x2',['../a00284.html#ga51dc36a7719cb458fa5114831c20d64f',1,'glm']]], 
['mediump_5fdmat2x3',['mediump_dmat2x3',['../a00284.html#ga741e05adf1f12d5d913f67088db1009a',1,'glm']]], ['mediump_5fdmat2x4',['mediump_dmat2x4',['../a00284.html#ga685bda24922d112786af385deb4deb43',1,'glm']]], ['mediump_5fdmat3',['mediump_dmat3',['../a00284.html#ga939fbf9c53008a8e84c7dd7cf8de29e2',1,'glm']]], ['mediump_5fdmat3x2',['mediump_dmat3x2',['../a00284.html#ga2076157df85e49b8c021e03e46a376c1',1,'glm']]], ['mediump_5fdmat3x3',['mediump_dmat3x3',['../a00284.html#ga47bd2aae4701ee2fc865674a9df3d7a6',1,'glm']]], ['mediump_5fdmat3x4',['mediump_dmat3x4',['../a00284.html#ga3a132bd05675c2e46556f67cf738600b',1,'glm']]], ['mediump_5fdmat4',['mediump_dmat4',['../a00284.html#gaf650bc667bf2a0e496b5a9182bc8d378',1,'glm']]], ['mediump_5fdmat4x2',['mediump_dmat4x2',['../a00284.html#gae220fa4c5a7b13ef2ab0420340de645c',1,'glm']]], ['mediump_5fdmat4x3',['mediump_dmat4x3',['../a00284.html#ga43ef60e4d996db15c9c8f069a96ff763',1,'glm']]], ['mediump_5fdmat4x4',['mediump_dmat4x4',['../a00284.html#ga5389b3ab32dc0d72bea00057ab6d1dd3',1,'glm']]], ['mediump_5fdquat',['mediump_dquat',['../a00250.html#gacdf73b1f7fd8f5a0c79a3934e99c1a14',1,'glm']]], ['mediump_5fdualquat',['mediump_dualquat',['../a00317.html#gaa7aeb54c167712b38f2178a1be2360ad',1,'glm']]], ['mediump_5fdvec1',['mediump_dvec1',['../a00269.html#ga79a789ebb176b37a45848f7ccdd3b3dd',1,'glm']]], ['mediump_5fdvec2',['mediump_dvec2',['../a00282.html#ga2f4f6e9a69a0281d06940fd0990cafc3',1,'glm']]], ['mediump_5fdvec3',['mediump_dvec3',['../a00282.html#ga61c3b1dff4ec7c878af80503141b9f37',1,'glm']]], ['mediump_5fdvec4',['mediump_dvec4',['../a00282.html#ga23a8bca00914a51542bfea13a4778186',1,'glm']]], ['mediump_5ff32',['mediump_f32',['../a00304.html#ga3b27fcd9eaa2757f0aaf6b0ce0d85c80',1,'glm']]], ['mediump_5ff32mat2',['mediump_f32mat2',['../a00304.html#gaf9020c6176a75bc84828ab01ea7dac25',1,'glm']]], ['mediump_5ff32mat2x2',['mediump_f32mat2x2',['../a00304.html#gaa3ca74a44102035b3ffb5c9c52dfdd3f',1,'glm']]], 
['mediump_5ff32mat2x3',['mediump_f32mat2x3',['../a00304.html#gad4cc829ab1ad3e05ac0a24828a3c95cf',1,'glm']]], ['mediump_5ff32mat2x4',['mediump_f32mat2x4',['../a00304.html#gae71445ac6cd0b9fba3e5c905cd030fb1',1,'glm']]], ['mediump_5ff32mat3',['mediump_f32mat3',['../a00304.html#gaaaf878d0d7bfc0aac054fe269a886ca8',1,'glm']]], ['mediump_5ff32mat3x2',['mediump_f32mat3x2',['../a00304.html#gaaab39454f56cf9fc6d940358ce5e6a0f',1,'glm']]], ['mediump_5ff32mat3x3',['mediump_f32mat3x3',['../a00304.html#gacd80ad7640e9e32f2edcb8330b1ffe4f',1,'glm']]], ['mediump_5ff32mat3x4',['mediump_f32mat3x4',['../a00304.html#ga8df705d775b776f5ae6b39e2ab892899',1,'glm']]], ['mediump_5ff32mat4',['mediump_f32mat4',['../a00304.html#ga4491baaebbc46a20f1cb5da985576bf4',1,'glm']]], ['mediump_5ff32mat4x2',['mediump_f32mat4x2',['../a00304.html#gab005efe0fa4de1a928e8ddec4bc2c43f',1,'glm']]], ['mediump_5ff32mat4x3',['mediump_f32mat4x3',['../a00304.html#gade108f16633cf95fa500b5b8c36c8b00',1,'glm']]], ['mediump_5ff32mat4x4',['mediump_f32mat4x4',['../a00304.html#ga936e95b881ecd2d109459ca41913fa99',1,'glm']]], ['mediump_5ff32quat',['mediump_f32quat',['../a00304.html#gaa40c03d52dbfbfaf03e75773b9606ff3',1,'glm']]], ['mediump_5ff32vec1',['mediump_f32vec1',['../a00304.html#gabb33cab7d7c74cc14aa95455d0690865',1,'glm']]], ['mediump_5ff32vec2',['mediump_f32vec2',['../a00304.html#gad6eb11412a3161ca8dc1d63b2a307c4b',1,'glm']]], ['mediump_5ff32vec3',['mediump_f32vec3',['../a00304.html#ga062ffef2973bd8241df993c3b30b327c',1,'glm']]], ['mediump_5ff32vec4',['mediump_f32vec4',['../a00304.html#gad80c84bcd5f585840faa6179f6fd446c',1,'glm']]], ['mediump_5ff64',['mediump_f64',['../a00304.html#ga6d40381d78472553f878f66e443feeef',1,'glm']]], ['mediump_5ff64mat2',['mediump_f64mat2',['../a00304.html#gac1281da5ded55047e8892b0e1f1ae965',1,'glm']]], ['mediump_5ff64mat2x2',['mediump_f64mat2x2',['../a00304.html#ga4fd527644cccbca4cb205320eab026f3',1,'glm']]], 
['mediump_5ff64mat2x3',['mediump_f64mat2x3',['../a00304.html#gafd9a6ebc0c7b95f5c581d00d16a17c54',1,'glm']]], ['mediump_5ff64mat2x4',['mediump_f64mat2x4',['../a00304.html#gaf306dd69e53633636aee38cea79d4cb7',1,'glm']]], ['mediump_5ff64mat3',['mediump_f64mat3',['../a00304.html#gad35fb67eb1d03c5a514f0bd7aed1c776',1,'glm']]], ['mediump_5ff64mat3x2',['mediump_f64mat3x2',['../a00304.html#gacd926d36a72433f6cac51dd60fa13107',1,'glm']]], ['mediump_5ff64mat3x3',['mediump_f64mat3x3',['../a00304.html#ga84d88a6e3a54ccd2b67e195af4a4c23e',1,'glm']]], ['mediump_5ff64mat3x4',['mediump_f64mat3x4',['../a00304.html#gad38c544d332b8c4bd0b70b1bd9feccc2',1,'glm']]], ['mediump_5ff64mat4',['mediump_f64mat4',['../a00304.html#gaa805ef691c711dc41e2776cfb67f5cf5',1,'glm']]], ['mediump_5ff64mat4x2',['mediump_f64mat4x2',['../a00304.html#ga17d36f0ea22314117e1cec9594b33945',1,'glm']]], ['mediump_5ff64mat4x3',['mediump_f64mat4x3',['../a00304.html#ga54697a78f9a4643af6a57fc2e626ec0d',1,'glm']]], ['mediump_5ff64mat4x4',['mediump_f64mat4x4',['../a00304.html#ga66edb8de17b9235029472f043ae107e9',1,'glm']]], ['mediump_5ff64quat',['mediump_f64quat',['../a00304.html#ga5e52f485059ce6e3010c590b882602c9',1,'glm']]], ['mediump_5ff64vec1',['mediump_f64vec1',['../a00304.html#gac30fdf8afa489400053275b6a3350127',1,'glm']]], ['mediump_5ff64vec2',['mediump_f64vec2',['../a00304.html#ga8ebc04ecf6440c4ee24718a16600ce6b',1,'glm']]], ['mediump_5ff64vec3',['mediump_f64vec3',['../a00304.html#ga461c4c7d0757404dd0dba931760b25cf',1,'glm']]], ['mediump_5ff64vec4',['mediump_f64vec4',['../a00304.html#gacfea053bd6bb3eddb996a4f94de22a3e',1,'glm']]], ['mediump_5ffdualquat',['mediump_fdualquat',['../a00317.html#ga4a6b594ff7e81150d8143001367a9431',1,'glm']]], ['mediump_5ffloat32',['mediump_float32',['../a00304.html#ga7812bf00676fb1a86dcd62cca354d2c7',1,'glm']]], ['mediump_5ffloat32_5ft',['mediump_float32_t',['../a00304.html#gae4dee61f8fe1caccec309fbed02faf12',1,'glm']]], 
['mediump_5ffloat64',['mediump_float64',['../a00304.html#gab83d8aae6e4f115e97a785e8574a115f',1,'glm']]], ['mediump_5ffloat64_5ft',['mediump_float64_t',['../a00304.html#gac61843e4fa96c1f4e9d8316454f32a8e',1,'glm']]], ['mediump_5ffmat2',['mediump_fmat2',['../a00304.html#ga74e9133378fd0b4da8ac0bc0876702ff',1,'glm']]], ['mediump_5ffmat2x2',['mediump_fmat2x2',['../a00304.html#ga98a687c17b174ea316b5f397b64f44bc',1,'glm']]], ['mediump_5ffmat2x3',['mediump_fmat2x3',['../a00304.html#gaa03f939d90d5ef157df957d93f0b9a64',1,'glm']]], ['mediump_5ffmat2x4',['mediump_fmat2x4',['../a00304.html#ga35223623e9ccebd8a281873b71b7d213',1,'glm']]], ['mediump_5ffmat3',['mediump_fmat3',['../a00304.html#ga80823dfad5dba98512c76af498343847',1,'glm']]], ['mediump_5ffmat3x2',['mediump_fmat3x2',['../a00304.html#ga42569e5b92f8635cedeadb1457ee1467',1,'glm']]], ['mediump_5ffmat3x3',['mediump_fmat3x3',['../a00304.html#gaa6f526388c74a66b3d52315a14d434ae',1,'glm']]], ['mediump_5ffmat3x4',['mediump_fmat3x4',['../a00304.html#gaefe8ef520c6cb78590ebbefe648da4d4',1,'glm']]], ['mediump_5ffmat4',['mediump_fmat4',['../a00304.html#gac1c38778c0b5a1263f07753c05a4f7b9',1,'glm']]], ['mediump_5ffmat4x2',['mediump_fmat4x2',['../a00304.html#gacea38a85893e17e6834b6cb09a9ad0cf',1,'glm']]], ['mediump_5ffmat4x3',['mediump_fmat4x3',['../a00304.html#ga41ad497f7eae211556aefd783cb02b90',1,'glm']]], ['mediump_5ffmat4x4',['mediump_fmat4x4',['../a00304.html#ga22e27beead07bff4d5ce9d6065a57279',1,'glm']]], ['mediump_5ffvec1',['mediump_fvec1',['../a00304.html#ga367964fc2133d3f1b5b3755ff9cf6c9b',1,'glm']]], ['mediump_5ffvec2',['mediump_fvec2',['../a00304.html#ga44bfa55cda5dbf53f24a1fb7610393d6',1,'glm']]], ['mediump_5ffvec3',['mediump_fvec3',['../a00304.html#ga999dc6703ad16e3d3c26b74ea8083f07',1,'glm']]], ['mediump_5ffvec4',['mediump_fvec4',['../a00304.html#ga1bed890513c0f50b7e7ba4f7f359dbfb',1,'glm']]], ['mediump_5fi16',['mediump_i16',['../a00304.html#ga62a17cddeb4dffb4e18fe3aea23f051a',1,'glm']]], 
['mediump_5fi16vec1',['mediump_i16vec1',['../a00304.html#gacc44265ed440bf5e6e566782570de842',1,'glm']]], ['mediump_5fi16vec2',['mediump_i16vec2',['../a00304.html#ga4b5e2c9aaa5d7717bf71179aefa12e88',1,'glm']]], ['mediump_5fi16vec3',['mediump_i16vec3',['../a00304.html#ga3be6c7fc5fe08fa2274bdb001d5f2633',1,'glm']]], ['mediump_5fi16vec4',['mediump_i16vec4',['../a00304.html#gaf52982bb23e3a3772649b2c5bb84b107',1,'glm']]], ['mediump_5fi32',['mediump_i32',['../a00304.html#gaf5e94bf2a20af7601787c154751dc2e1',1,'glm']]], ['mediump_5fi32vec1',['mediump_i32vec1',['../a00304.html#ga46a57f71e430637559097a732b550a7e',1,'glm']]], ['mediump_5fi32vec2',['mediump_i32vec2',['../a00304.html#ga20bf224bd4f8a24ecc4ed2004a40c219',1,'glm']]], ['mediump_5fi32vec3',['mediump_i32vec3',['../a00304.html#ga13a221b910aa9eb1b04ca1c86e81015a',1,'glm']]], ['mediump_5fi32vec4',['mediump_i32vec4',['../a00304.html#ga6addd4dfee87fc09ab9525e3d07db4c8',1,'glm']]], ['mediump_5fi64',['mediump_i64',['../a00304.html#ga3ebcb1f6d8d8387253de8bccb058d77f',1,'glm']]], ['mediump_5fi64vec1',['mediump_i64vec1',['../a00304.html#ga8343e9d244fb17a5bbf0d94d36b3695e',1,'glm']]], ['mediump_5fi64vec2',['mediump_i64vec2',['../a00304.html#ga2c94aeae3457325944ca1059b0b68330',1,'glm']]], ['mediump_5fi64vec3',['mediump_i64vec3',['../a00304.html#ga8089722ffdf868cdfe721dea1fb6a90e',1,'glm']]], ['mediump_5fi64vec4',['mediump_i64vec4',['../a00304.html#gabf1f16c5ab8cb0484bd1e846ae4368f1',1,'glm']]], ['mediump_5fi8',['mediump_i8',['../a00304.html#gacf1ded173e1e2d049c511d095b259e21',1,'glm']]], ['mediump_5fi8vec1',['mediump_i8vec1',['../a00304.html#ga85e8893f4ae3630065690a9000c0c483',1,'glm']]], ['mediump_5fi8vec2',['mediump_i8vec2',['../a00304.html#ga2a8bdc32184ea0a522ef7bd90640cf67',1,'glm']]], ['mediump_5fi8vec3',['mediump_i8vec3',['../a00304.html#ga6dd1c1618378c6f94d522a61c28773c9',1,'glm']]], ['mediump_5fi8vec4',['mediump_i8vec4',['../a00304.html#gac7bb04fb857ef7b520e49f6c381432be',1,'glm']]], 
['mediump_5fimat2',['mediump_imat2',['../a00294.html#ga20f4cc7ab23e2aa1f4db9fdb5496d378',1,'glm']]], ['mediump_5fimat2x2',['mediump_imat2x2',['../a00294.html#ga4b2aeb11a329940721dda9583e71f856',1,'glm']]], ['mediump_5fimat2x3',['mediump_imat2x3',['../a00294.html#ga74362470ba99843ac70aee5ac38cc674',1,'glm']]], ['mediump_5fimat2x4',['mediump_imat2x4',['../a00294.html#ga8da25cd380ba30fc5b68a4687deb3e09',1,'glm']]], ['mediump_5fimat3',['mediump_imat3',['../a00294.html#ga6c63bdc736efd3466e0730de0251cb71',1,'glm']]], ['mediump_5fimat3x2',['mediump_imat3x2',['../a00294.html#gac0b4e42d648fb3eaf4bb88da82ecc809',1,'glm']]], ['mediump_5fimat3x3',['mediump_imat3x3',['../a00294.html#gad99cc2aad8fc57f068cfa7719dbbea12',1,'glm']]], ['mediump_5fimat3x4',['mediump_imat3x4',['../a00294.html#ga67689a518b181a26540bc44a163525cd',1,'glm']]], ['mediump_5fimat4',['mediump_imat4',['../a00294.html#gaf348552978553630d2a00b78eb887ced',1,'glm']]], ['mediump_5fimat4x2',['mediump_imat4x2',['../a00294.html#ga8b2d35816f7103f0f4c82dd2f27571fc',1,'glm']]], ['mediump_5fimat4x3',['mediump_imat4x3',['../a00294.html#ga5b10acc696759e03f6ab918f4467e94c',1,'glm']]], ['mediump_5fimat4x4',['mediump_imat4x4',['../a00294.html#ga2596869d154dec1180beadbb9df80501',1,'glm']]], ['mediump_5fint16',['mediump_int16',['../a00304.html#gadff3608baa4b5bd3ed28f95c1c2c345d',1,'glm']]], ['mediump_5fint16_5ft',['mediump_int16_t',['../a00304.html#ga80e72fe94c88498537e8158ba7591c54',1,'glm']]], ['mediump_5fint32',['mediump_int32',['../a00304.html#ga5244cef85d6e870e240c76428a262ae8',1,'glm']]], ['mediump_5fint32_5ft',['mediump_int32_t',['../a00304.html#ga26fc7ced1ad7ca5024f1c973c8dc9180',1,'glm']]], ['mediump_5fint64',['mediump_int64',['../a00304.html#ga7b968f2b86a0442a89c7359171e1d866',1,'glm']]], ['mediump_5fint64_5ft',['mediump_int64_t',['../a00304.html#gac3bc41bcac61d1ba8f02a6f68ce23f64',1,'glm']]], ['mediump_5fint8',['mediump_int8',['../a00304.html#ga6fbd69cbdaa44345bff923a2cf63de7e',1,'glm']]], 
['mediump_5fint8_5ft',['mediump_int8_t',['../a00304.html#ga6d7b3789ecb932c26430009478cac7ae',1,'glm']]], ['mediump_5fivec1',['mediump_ivec1',['../a00273.html#gad628c608970b3d0aa6cfb63ce6e53e56',1,'glm']]], ['mediump_5fivec2',['mediump_ivec2',['../a00282.html#gac57496299d276ed97044074097bd5e2c',1,'glm']]], ['mediump_5fivec3',['mediump_ivec3',['../a00282.html#ga27cfb51e0dbe15bba27a14a8590e8466',1,'glm']]], ['mediump_5fivec4',['mediump_ivec4',['../a00282.html#ga92a204c37e66ac6c1dc7ae91142f2ea5',1,'glm']]], ['mediump_5fmat2',['mediump_mat2',['../a00284.html#ga745452bd9c89f5ad948203e4fb4b4ea3',1,'glm']]], ['mediump_5fmat2x2',['mediump_mat2x2',['../a00284.html#ga0cdf57d29f9448864237b2fb3e39aa1d',1,'glm']]], ['mediump_5fmat2x3',['mediump_mat2x3',['../a00284.html#ga497d513d552d927537d61fa11e3701ab',1,'glm']]], ['mediump_5fmat2x4',['mediump_mat2x4',['../a00284.html#gae7b75ea2e09fa686a79bbe9b6ca68ee5',1,'glm']]], ['mediump_5fmat3',['mediump_mat3',['../a00284.html#ga5aae49834d02732942f44e61d7bce136',1,'glm']]], ['mediump_5fmat3x2',['mediump_mat3x2',['../a00284.html#ga9e1c9ee65fef547bde793e69723e24eb',1,'glm']]], ['mediump_5fmat3x3',['mediump_mat3x3',['../a00284.html#gabc0f2f4ad21c90b341881cf056f8650e',1,'glm']]], ['mediump_5fmat3x4',['mediump_mat3x4',['../a00284.html#gaa669c6675c3405f76c0b14020d1c0d61',1,'glm']]], ['mediump_5fmat4',['mediump_mat4',['../a00284.html#gab8531bc3f269aa45835cd6e1972b7fc7',1,'glm']]], ['mediump_5fmat4x2',['mediump_mat4x2',['../a00284.html#gad75706b70545412ba9ac27d5ee210f66',1,'glm']]], ['mediump_5fmat4x3',['mediump_mat4x3',['../a00284.html#ga4a1440b5ea3cf84d5b06c79b534bd770',1,'glm']]], ['mediump_5fmat4x4',['mediump_mat4x4',['../a00284.html#ga15bca2b70917d9752231160d9da74b01',1,'glm']]], ['mediump_5fquat',['mediump_quat',['../a00253.html#gad2a59409de1bb12ccb6eb692ee7e9d8d',1,'glm']]], ['mediump_5fu16',['mediump_u16',['../a00304.html#ga9df98857be695d5a30cb30f5bfa38a80',1,'glm']]], 
['mediump_5fu16vec1',['mediump_u16vec1',['../a00304.html#ga400ce8cc566de093a9b28e59e220d6e4',1,'glm']]], ['mediump_5fu16vec2',['mediump_u16vec2',['../a00304.html#ga429c201b3e92c90b4ef4356f2be52ee1',1,'glm']]], ['mediump_5fu16vec3',['mediump_u16vec3',['../a00304.html#gac9ba20234b0c3751d45ce575fc71e551',1,'glm']]], ['mediump_5fu16vec4',['mediump_u16vec4',['../a00304.html#ga5793393686ce5bd2d5968ff9144762b8',1,'glm']]], ['mediump_5fu32',['mediump_u32',['../a00304.html#ga1bd0e914158bf03135f8a317de6debe9',1,'glm']]], ['mediump_5fu32vec1',['mediump_u32vec1',['../a00304.html#ga8a11ccd2e38f674bbf3c2d1afc232aee',1,'glm']]], ['mediump_5fu32vec2',['mediump_u32vec2',['../a00304.html#ga94f74851fce338549c705b5f0d601c4f',1,'glm']]], ['mediump_5fu32vec3',['mediump_u32vec3',['../a00304.html#ga012c24c8fc69707b90260474c70275a2',1,'glm']]], ['mediump_5fu32vec4',['mediump_u32vec4',['../a00304.html#ga5d43ee8b5dbaa06c327b03b83682598a',1,'glm']]], ['mediump_5fu64',['mediump_u64',['../a00304.html#ga2af9490085ae3bdf36a544e9dd073610',1,'glm']]], ['mediump_5fu64vec1',['mediump_u64vec1',['../a00304.html#ga659f372ccb8307d5db5beca942cde5e8',1,'glm']]], ['mediump_5fu64vec2',['mediump_u64vec2',['../a00304.html#ga73a08ef5a74798f3a1a99250b5f86a7d',1,'glm']]], ['mediump_5fu64vec3',['mediump_u64vec3',['../a00304.html#ga1900c6ab74acd392809425953359ef52',1,'glm']]], ['mediump_5fu64vec4',['mediump_u64vec4',['../a00304.html#gaec7ee455cb379ec2993e81482123e1cc',1,'glm']]], ['mediump_5fu8',['mediump_u8',['../a00304.html#gad1213a22bbb9e4107f07eaa4956f8281',1,'glm']]], ['mediump_5fu8vec1',['mediump_u8vec1',['../a00304.html#ga4a43050843b141bdc7e85437faef6f55',1,'glm']]], ['mediump_5fu8vec2',['mediump_u8vec2',['../a00304.html#ga907f85d4a0eac3d8aaf571e5c2647194',1,'glm']]], ['mediump_5fu8vec3',['mediump_u8vec3',['../a00304.html#gaddc6f7748b699254942c5216b68f8f7f',1,'glm']]], ['mediump_5fu8vec4',['mediump_u8vec4',['../a00304.html#gaaf4ee3b76d43d98da02ec399b99bda4b',1,'glm']]], 
['mediump_5fuint16',['mediump_uint16',['../a00304.html#ga2885a6c89916911e418c06bb76b9bdbb',1,'glm']]], ['mediump_5fuint16_5ft',['mediump_uint16_t',['../a00304.html#ga3963b1050fc65a383ee28e3f827b6e3e',1,'glm']]], ['mediump_5fuint32',['mediump_uint32',['../a00304.html#ga34dd5ec1988c443bae80f1b20a8ade5f',1,'glm']]], ['mediump_5fuint32_5ft',['mediump_uint32_t',['../a00304.html#gaf4dae276fd29623950de14a6ca2586b5',1,'glm']]], ['mediump_5fuint64',['mediump_uint64',['../a00304.html#ga30652709815ad9404272a31957daa59e',1,'glm']]], ['mediump_5fuint64_5ft',['mediump_uint64_t',['../a00304.html#ga9b170dd4a8f38448a2dc93987c7875e9',1,'glm']]], ['mediump_5fuint8',['mediump_uint8',['../a00304.html#ga1fa92a233b9110861cdbc8c2ccf0b5a3',1,'glm']]], ['mediump_5fuint8_5ft',['mediump_uint8_t',['../a00304.html#gadfe65c78231039e90507770db50c98c7',1,'glm']]], ['mediump_5fumat2',['mediump_umat2',['../a00294.html#ga43041378b3410ea951b7de0dfd2bc7ee',1,'glm']]], ['mediump_5fumat2x2',['mediump_umat2x2',['../a00294.html#ga3b209b1b751f041422137e3c065dfa98',1,'glm']]], ['mediump_5fumat2x3',['mediump_umat2x3',['../a00294.html#gaee2c1f13b41f4c92ea5b3efe367a1306',1,'glm']]], ['mediump_5fumat2x4',['mediump_umat2x4',['../a00294.html#gae1317ddca16d01e119a40b7f0ee85f95',1,'glm']]], ['mediump_5fumat3',['mediump_umat3',['../a00294.html#ga1730dbe3c67801f53520b06d1aa0a34a',1,'glm']]], ['mediump_5fumat3x2',['mediump_umat3x2',['../a00294.html#gaadc28bfdc8ebca81ae85121b11994970',1,'glm']]], ['mediump_5fumat3x3',['mediump_umat3x3',['../a00294.html#ga48f2fc38d3f7fab3cfbc961278ced53d',1,'glm']]], ['mediump_5fumat3x4',['mediump_umat3x4',['../a00294.html#ga78009a1e4ca64217e46b418535e52546',1,'glm']]], ['mediump_5fumat4',['mediump_umat4',['../a00294.html#ga5087c2beb26a11d9af87432e554cf9d1',1,'glm']]], ['mediump_5fumat4x2',['mediump_umat4x2',['../a00294.html#gaf35aefd81cc13718f6b059623f7425fa',1,'glm']]], ['mediump_5fumat4x3',['mediump_umat4x3',['../a00294.html#ga4e1bed14fbc7f4b376aaed064f89f0fb',1,'glm']]], 
['mediump_5fumat4x4',['mediump_umat4x4',['../a00294.html#gaa9428fc8430dc552aad920653f822ef3',1,'glm']]], ['mediump_5fuvec1',['mediump_uvec1',['../a00277.html#ga38fde73aaf1420175ece8d4882558a3f',1,'glm']]], ['mediump_5fuvec2',['mediump_uvec2',['../a00282.html#gaa3b4f7806dad03d83bb3da0baa1e3b9b',1,'glm']]], ['mediump_5fuvec3',['mediump_uvec3',['../a00282.html#ga83b7df38feefbb357f3673d950fafef7',1,'glm']]], ['mediump_5fuvec4',['mediump_uvec4',['../a00282.html#ga64ed0deb6573375b7016daf82ffd53a7',1,'glm']]], ['mediump_5fvec1',['mediump_vec1',['../a00271.html#ga645f53e6b8056609023a894b4e2beef4',1,'glm']]], ['mediump_5fvec2',['mediump_vec2',['../a00282.html#gabc61976261c406520c7a8e4d946dc3f0',1,'glm']]], ['mediump_5fvec3',['mediump_vec3',['../a00282.html#ga2384e263df19f1404b733016eff78fca',1,'glm']]], ['mediump_5fvec4',['mediump_vec4',['../a00282.html#ga5c6978d3ffba06738416a33083853fc0',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_8.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_8.js ================================================ var searchData= [ ['packed_5fbvec1',['packed_bvec1',['../a00303.html#ga88632cea9008ac0ac1388e94e804a53c',1,'glm']]], ['packed_5fbvec2',['packed_bvec2',['../a00303.html#gab85245913eaa40ab82adabcae37086cb',1,'glm']]], ['packed_5fbvec3',['packed_bvec3',['../a00303.html#ga0c48f9417f649e27f3fb0c9f733a18bd',1,'glm']]], ['packed_5fbvec4',['packed_bvec4',['../a00303.html#ga3180d7db84a74c402157df3bbc0ae3ed',1,'glm']]], ['packed_5fdmat2',['packed_dmat2',['../a00303.html#gad87408a8350918711f845f071bbe43fb',1,'glm']]], ['packed_5fdmat2x2',['packed_dmat2x2',['../a00303.html#gaaa33d8e06657a777efb0c72c44ce87a9',1,'glm']]], ['packed_5fdmat2x3',['packed_dmat2x3',['../a00303.html#gac3a5315f588ba04ad255188071ec4e22',1,'glm']]], ['packed_5fdmat2x4',['packed_dmat2x4',['../a00303.html#gae398fc3156f51d3684b08f62c1a5a6d4',1,'glm']]], ['packed_5fdmat3',['packed_dmat3',['../a00303.html#ga03dfc90d539cc87ea3a15a9caa5d2245',1,'glm']]], ['packed_5fdmat3x2',['packed_dmat3x2',['../a00303.html#gae36de20a4c0e0b1444b7903ae811d94e',1,'glm']]], ['packed_5fdmat3x3',['packed_dmat3x3',['../a00303.html#gab9b909f1392d86854334350efcae85f5',1,'glm']]], ['packed_5fdmat3x4',['packed_dmat3x4',['../a00303.html#ga199131fd279c92c2ac12df6d978f1dd6',1,'glm']]], ['packed_5fdmat4',['packed_dmat4',['../a00303.html#gada980a3485640aa8151f368f17ad3086',1,'glm']]], ['packed_5fdmat4x2',['packed_dmat4x2',['../a00303.html#ga6dc65249730698d3cc9ac5d7e1bc4d72',1,'glm']]], ['packed_5fdmat4x3',['packed_dmat4x3',['../a00303.html#gadf202aaa9ed71c09f9bbe347e43f8764',1,'glm']]], ['packed_5fdmat4x4',['packed_dmat4x4',['../a00303.html#gae20617435a6d042d7c38da2badd64a09',1,'glm']]], ['packed_5fdvec1',['packed_dvec1',['../a00303.html#ga532f0c940649b1ee303acd572fc35531',1,'glm']]], 
['packed_5fdvec2',['packed_dvec2',['../a00303.html#ga5c194b11fbda636f2ab20c3bd0079196',1,'glm']]], ['packed_5fdvec3',['packed_dvec3',['../a00303.html#ga0581ea552d86b2b5de7a2804bed80e72',1,'glm']]], ['packed_5fdvec4',['packed_dvec4',['../a00303.html#gae8a9b181f9dc813ad6e125a52b14b935',1,'glm']]], ['packed_5fhighp_5fbvec1',['packed_highp_bvec1',['../a00303.html#ga439e97795314b81cd15abd4e5c2e6e7a',1,'glm']]], ['packed_5fhighp_5fbvec2',['packed_highp_bvec2',['../a00303.html#gad791d671f4fcf1ed1ea41f752916b70a',1,'glm']]], ['packed_5fhighp_5fbvec3',['packed_highp_bvec3',['../a00303.html#ga6a5a3250b57dfadc66735bc72911437f',1,'glm']]], ['packed_5fhighp_5fbvec4',['packed_highp_bvec4',['../a00303.html#ga09f517d88b996ef1b2f42fd54222b82d',1,'glm']]], ['packed_5fhighp_5fdmat2',['packed_highp_dmat2',['../a00303.html#gae29686632fd05efac0675d9a6370d77b',1,'glm']]], ['packed_5fhighp_5fdmat2x2',['packed_highp_dmat2x2',['../a00303.html#ga22bd6382b16052e301edbfc031b9f37a',1,'glm']]], ['packed_5fhighp_5fdmat2x3',['packed_highp_dmat2x3',['../a00303.html#ga999d82719696d4c59f4d236dd08f273d',1,'glm']]], ['packed_5fhighp_5fdmat2x4',['packed_highp_dmat2x4',['../a00303.html#ga6998ac2a8d7fe456b651a6336ed26bb0',1,'glm']]], ['packed_5fhighp_5fdmat3',['packed_highp_dmat3',['../a00303.html#gadac7c040c4810dd52b36fcd09d097400',1,'glm']]], ['packed_5fhighp_5fdmat3x2',['packed_highp_dmat3x2',['../a00303.html#gab462744977beb85fb5c782bc2eea7b15',1,'glm']]], ['packed_5fhighp_5fdmat3x3',['packed_highp_dmat3x3',['../a00303.html#ga49e5a709d098523823b2f824e48672a6',1,'glm']]], ['packed_5fhighp_5fdmat3x4',['packed_highp_dmat3x4',['../a00303.html#ga2c67b3b0adab71c8680c3d819f1fa9b7',1,'glm']]], ['packed_5fhighp_5fdmat4',['packed_highp_dmat4',['../a00303.html#ga6718822cd7af005a9b5bd6ee282f6ba6',1,'glm']]], ['packed_5fhighp_5fdmat4x2',['packed_highp_dmat4x2',['../a00303.html#ga12e39e797fb724a5b51fcbea2513a7da',1,'glm']]], 
['packed_5fhighp_5fdmat4x3',['packed_highp_dmat4x3',['../a00303.html#ga79c2e9f82e67963c1ecad0ad6d0ec72e',1,'glm']]], ['packed_5fhighp_5fdmat4x4',['packed_highp_dmat4x4',['../a00303.html#ga2df58e03e5afded28707b4f7d077afb4',1,'glm']]], ['packed_5fhighp_5fdvec1',['packed_highp_dvec1',['../a00303.html#gab472b2d917b5e6efd76e8c7dbfbbf9f1',1,'glm']]], ['packed_5fhighp_5fdvec2',['packed_highp_dvec2',['../a00303.html#ga5b2dc48fa19b684d207d69c6b145eb63',1,'glm']]], ['packed_5fhighp_5fdvec3',['packed_highp_dvec3',['../a00303.html#gaaac6b356ef00154da41aaae7d1549193',1,'glm']]], ['packed_5fhighp_5fdvec4',['packed_highp_dvec4',['../a00303.html#ga81b5368fe485e2630aa9b44832d592e7',1,'glm']]], ['packed_5fhighp_5fivec1',['packed_highp_ivec1',['../a00303.html#ga7245acc887a5438f46fd85fdf076bb3b',1,'glm']]], ['packed_5fhighp_5fivec2',['packed_highp_ivec2',['../a00303.html#ga54f368ec6b514a5aa4f28d40e6f93ef7',1,'glm']]], ['packed_5fhighp_5fivec3',['packed_highp_ivec3',['../a00303.html#ga865a9c7bb22434b1b8c5ac31e164b628',1,'glm']]], ['packed_5fhighp_5fivec4',['packed_highp_ivec4',['../a00303.html#gad6f1b4e3a51c2c051814b60d5d1b8895',1,'glm']]], ['packed_5fhighp_5fmat2',['packed_highp_mat2',['../a00303.html#ga2f2d913d8cca2f935b2522964408c0b2',1,'glm']]], ['packed_5fhighp_5fmat2x2',['packed_highp_mat2x2',['../a00303.html#ga245c12d2daf67feecaa2d3277c8f6661',1,'glm']]], ['packed_5fhighp_5fmat2x3',['packed_highp_mat2x3',['../a00303.html#ga069cc8892aadae144c00f35297617d44',1,'glm']]], ['packed_5fhighp_5fmat2x4',['packed_highp_mat2x4',['../a00303.html#ga6904d09b62141d09712b76983892f95b',1,'glm']]], ['packed_5fhighp_5fmat3',['packed_highp_mat3',['../a00303.html#gabdd5fbffe8b8b8a7b33523f25b120dbe',1,'glm']]], ['packed_5fhighp_5fmat3x2',['packed_highp_mat3x2',['../a00303.html#ga2624719cb251d8de8cad1beaefc3a3f9',1,'glm']]], ['packed_5fhighp_5fmat3x3',['packed_highp_mat3x3',['../a00303.html#gaf2e07527d678440bf0c20adbeb9177c5',1,'glm']]], 
['packed_5fhighp_5fmat3x4',['packed_highp_mat3x4',['../a00303.html#ga72102fa6ac2445aa3bb203128ad52449',1,'glm']]], ['packed_5fhighp_5fmat4',['packed_highp_mat4',['../a00303.html#ga253e8379b08d2dc6fe2800b2fb913203',1,'glm']]], ['packed_5fhighp_5fmat4x2',['packed_highp_mat4x2',['../a00303.html#gae389c2071cf3cdb33e7812c6fd156710',1,'glm']]], ['packed_5fhighp_5fmat4x3',['packed_highp_mat4x3',['../a00303.html#ga4584f64394bd7123b7a8534741e4916c',1,'glm']]], ['packed_5fhighp_5fmat4x4',['packed_highp_mat4x4',['../a00303.html#ga0149fe15668925147e07c94fd2c2d6ae',1,'glm']]], ['packed_5fhighp_5fuvec1',['packed_highp_uvec1',['../a00303.html#ga8c32b53f628a3616aa5061e58d66fe74',1,'glm']]], ['packed_5fhighp_5fuvec2',['packed_highp_uvec2',['../a00303.html#gab704d4fb15f6f96d70e363d5db7060cd',1,'glm']]], ['packed_5fhighp_5fuvec3',['packed_highp_uvec3',['../a00303.html#ga0b570da473fec4619db5aa0dce5133b0',1,'glm']]], ['packed_5fhighp_5fuvec4',['packed_highp_uvec4',['../a00303.html#gaa582f38c82aef61dea7aaedf15bb06a6',1,'glm']]], ['packed_5fhighp_5fvec1',['packed_highp_vec1',['../a00303.html#ga56473759d2702ee19ab7f91d0017fa70',1,'glm']]], ['packed_5fhighp_5fvec2',['packed_highp_vec2',['../a00303.html#ga6b8b9475e7c3b16aed13edbc460bbc4d',1,'glm']]], ['packed_5fhighp_5fvec3',['packed_highp_vec3',['../a00303.html#ga3815661df0e2de79beff8168c09adf1e',1,'glm']]], ['packed_5fhighp_5fvec4',['packed_highp_vec4',['../a00303.html#ga4015f36bf5a5adb6ac5d45beed959867',1,'glm']]], ['packed_5fivec1',['packed_ivec1',['../a00303.html#ga11581a06fc7bf941fa4d4b6aca29812c',1,'glm']]], ['packed_5fivec2',['packed_ivec2',['../a00303.html#ga1fe4c5f56b8087d773aa90dc88a257a7',1,'glm']]], ['packed_5fivec3',['packed_ivec3',['../a00303.html#gae157682a7847161787951ba1db4cf325',1,'glm']]], ['packed_5fivec4',['packed_ivec4',['../a00303.html#gac228b70372abd561340d5f926a7c1778',1,'glm']]], ['packed_5flowp_5fbvec1',['packed_lowp_bvec1',['../a00303.html#gae3c8750f53259ece334d3aa3b3649a40',1,'glm']]], 
['packed_5flowp_5fbvec2',['packed_lowp_bvec2',['../a00303.html#gac969befedbda69eb78d4e23f751fdbee',1,'glm']]], ['packed_5flowp_5fbvec3',['packed_lowp_bvec3',['../a00303.html#ga7c20adbe1409e3fe4544677a7f6fe954',1,'glm']]], ['packed_5flowp_5fbvec4',['packed_lowp_bvec4',['../a00303.html#gae473587cff3092edc0877fc691c26a0b',1,'glm']]], ['packed_5flowp_5fdmat2',['packed_lowp_dmat2',['../a00303.html#gac93f9b1a35b9de4f456b9f2dfeaf1097',1,'glm']]], ['packed_5flowp_5fdmat2x2',['packed_lowp_dmat2x2',['../a00303.html#gaeeaff6c132ec91ebd21da3a2399548ea',1,'glm']]], ['packed_5flowp_5fdmat2x3',['packed_lowp_dmat2x3',['../a00303.html#ga2ccdcd4846775cbe4f9d12e71d55b5d2',1,'glm']]], ['packed_5flowp_5fdmat2x4',['packed_lowp_dmat2x4',['../a00303.html#gac870c47d2d9d48503f6c9ee3baec8ce1',1,'glm']]], ['packed_5flowp_5fdmat3',['packed_lowp_dmat3',['../a00303.html#ga3894a059eeaacec8791c25de398d9955',1,'glm']]], ['packed_5flowp_5fdmat3x2',['packed_lowp_dmat3x2',['../a00303.html#ga23ec236950f5859f59197663266b535d',1,'glm']]], ['packed_5flowp_5fdmat3x3',['packed_lowp_dmat3x3',['../a00303.html#ga4a7c7d8c3a663d0ec2a858cbfa14e54c',1,'glm']]], ['packed_5flowp_5fdmat3x4',['packed_lowp_dmat3x4',['../a00303.html#ga8fc0e66da83599071b7ec17510686cd9',1,'glm']]], ['packed_5flowp_5fdmat4',['packed_lowp_dmat4',['../a00303.html#ga03e1edf5666c40affe39aee35c87956f',1,'glm']]], ['packed_5flowp_5fdmat4x2',['packed_lowp_dmat4x2',['../a00303.html#ga39658fb13369db869d363684bd8399c0',1,'glm']]], ['packed_5flowp_5fdmat4x3',['packed_lowp_dmat4x3',['../a00303.html#ga30b0351eebc18c6056101359bdd3a359',1,'glm']]], ['packed_5flowp_5fdmat4x4',['packed_lowp_dmat4x4',['../a00303.html#ga0294d4c45151425c86a11deee7693c0e',1,'glm']]], ['packed_5flowp_5fdvec1',['packed_lowp_dvec1',['../a00303.html#ga054050e9d4e78d81db0e6d1573b1c624',1,'glm']]], ['packed_5flowp_5fdvec2',['packed_lowp_dvec2',['../a00303.html#gadc19938ddb204bfcb4d9ef35b1e2bf93',1,'glm']]], 
['packed_5flowp_5fdvec3',['packed_lowp_dvec3',['../a00303.html#ga9189210cabd6651a5e14a4c46fb20598',1,'glm']]], ['packed_5flowp_5fdvec4',['packed_lowp_dvec4',['../a00303.html#ga262dafd0c001c3a38d1cc91d024ca738',1,'glm']]], ['packed_5flowp_5fivec1',['packed_lowp_ivec1',['../a00303.html#gaf22b77f1cf3e73b8b1dddfe7f959357c',1,'glm']]], ['packed_5flowp_5fivec2',['packed_lowp_ivec2',['../a00303.html#ga52635859f5ef660ab999d22c11b7867f',1,'glm']]], ['packed_5flowp_5fivec3',['packed_lowp_ivec3',['../a00303.html#ga98c9d122a959e9f3ce10a5623c310f5d',1,'glm']]], ['packed_5flowp_5fivec4',['packed_lowp_ivec4',['../a00303.html#ga931731b8ae3b54c7ecc221509dae96bc',1,'glm']]], ['packed_5flowp_5fmat2',['packed_lowp_mat2',['../a00303.html#ga70dcb9ef0b24e832772a7405efa9669a',1,'glm']]], ['packed_5flowp_5fmat2x2',['packed_lowp_mat2x2',['../a00303.html#gac70667c7642ec8d50245e6e6936a3927',1,'glm']]], ['packed_5flowp_5fmat2x3',['packed_lowp_mat2x3',['../a00303.html#ga3e7df5a11e1be27bc29a4c0d3956f234',1,'glm']]], ['packed_5flowp_5fmat2x4',['packed_lowp_mat2x4',['../a00303.html#gaea9c555e669dc56c45d95dcc75d59bf3',1,'glm']]], ['packed_5flowp_5fmat3',['packed_lowp_mat3',['../a00303.html#ga0d22400969dd223465b2900fecfb4f53',1,'glm']]], ['packed_5flowp_5fmat3x2',['packed_lowp_mat3x2',['../a00303.html#ga128cd52649621861635fab746df91735',1,'glm']]], ['packed_5flowp_5fmat3x3',['packed_lowp_mat3x3',['../a00303.html#ga5adf1802c5375a9dfb1729691bedd94e',1,'glm']]], ['packed_5flowp_5fmat3x4',['packed_lowp_mat3x4',['../a00303.html#ga92247ca09fa03c4013ba364f3a0fca7f',1,'glm']]], ['packed_5flowp_5fmat4',['packed_lowp_mat4',['../a00303.html#ga2a1dd2387725a335413d4c4fee8609c4',1,'glm']]], ['packed_5flowp_5fmat4x2',['packed_lowp_mat4x2',['../a00303.html#ga8f22607dcd090cd280071ccc689f4079',1,'glm']]], ['packed_5flowp_5fmat4x3',['packed_lowp_mat4x3',['../a00303.html#ga7661d759d6ad218e132e3d051e7b2c6c',1,'glm']]], 
['packed_5flowp_5fmat4x4',['packed_lowp_mat4x4',['../a00303.html#ga776f18d1a6e7d399f05d386167dc60f5',1,'glm']]], ['packed_5flowp_5fuvec1',['packed_lowp_uvec1',['../a00303.html#gaf111fed760ecce16cb1988807569bee5',1,'glm']]], ['packed_5flowp_5fuvec2',['packed_lowp_uvec2',['../a00303.html#ga958210fe245a75b058325d367c951132',1,'glm']]], ['packed_5flowp_5fuvec3',['packed_lowp_uvec3',['../a00303.html#ga576a3f8372197a56a79dee1c8280f485',1,'glm']]], ['packed_5flowp_5fuvec4',['packed_lowp_uvec4',['../a00303.html#gafdd97922b4a2a42cd0c99a13877ff4da',1,'glm']]], ['packed_5flowp_5fvec1',['packed_lowp_vec1',['../a00303.html#ga0a6198fe64166a6a61084d43c71518a9',1,'glm']]], ['packed_5flowp_5fvec2',['packed_lowp_vec2',['../a00303.html#gafbf1c2cce307c5594b165819ed83bf5d',1,'glm']]], ['packed_5flowp_5fvec3',['packed_lowp_vec3',['../a00303.html#ga3a30c137c1f8cce478c28eab0427a570',1,'glm']]], ['packed_5flowp_5fvec4',['packed_lowp_vec4',['../a00303.html#ga3cc94fb8de80bbd8a4aa7a5b206d304a',1,'glm']]], ['packed_5fmat2',['packed_mat2',['../a00303.html#gadd019b43fcf42e1590d45dddaa504a1a',1,'glm']]], ['packed_5fmat2x2',['packed_mat2x2',['../a00303.html#ga51eaadcdc292c8750f746a5dc3e6c517',1,'glm']]], ['packed_5fmat2x3',['packed_mat2x3',['../a00303.html#ga301b76a89b8a9625501ca58815017f20',1,'glm']]], ['packed_5fmat2x4',['packed_mat2x4',['../a00303.html#gac401da1dd9177ad81d7618a2a5541e23',1,'glm']]], ['packed_5fmat3',['packed_mat3',['../a00303.html#ga9bc12b0ab7be8448836711b77cc7b83a',1,'glm']]], ['packed_5fmat3x2',['packed_mat3x2',['../a00303.html#ga134f0d99fbd2459c13cd9ebd056509fa',1,'glm']]], ['packed_5fmat3x3',['packed_mat3x3',['../a00303.html#ga6c1dbe8cde9fbb231284b01f8aeaaa99',1,'glm']]], ['packed_5fmat3x4',['packed_mat3x4',['../a00303.html#gad63515526cccfe88ffa8fe5ed64f95f8',1,'glm']]], ['packed_5fmat4',['packed_mat4',['../a00303.html#ga2c139854e5b04cf08a957dee3b510441',1,'glm']]], ['packed_5fmat4x2',['packed_mat4x2',['../a00303.html#ga379c1153f1339bdeaefd592bebf538e8',1,'glm']]], 
['packed_5fmat4x3',['packed_mat4x3',['../a00303.html#gab286466e19f7399c8d25089da9400d43',1,'glm']]], ['packed_5fmat4x4',['packed_mat4x4',['../a00303.html#ga67e7102557d6067bb6ac00d4ad0e1374',1,'glm']]], ['packed_5fmediump_5fbvec1',['packed_mediump_bvec1',['../a00303.html#ga5546d828d63010a8f9cf81161ad0275a',1,'glm']]], ['packed_5fmediump_5fbvec2',['packed_mediump_bvec2',['../a00303.html#gab4c6414a59539e66a242ad4cf4b476b4',1,'glm']]], ['packed_5fmediump_5fbvec3',['packed_mediump_bvec3',['../a00303.html#ga70147763edff3fe96b03a0b98d6339a2',1,'glm']]], ['packed_5fmediump_5fbvec4',['packed_mediump_bvec4',['../a00303.html#ga7b1620f259595b9da47a6374fc44588a',1,'glm']]], ['packed_5fmediump_5fdmat2',['packed_mediump_dmat2',['../a00303.html#ga9d60e32d3fcb51f817046cd881fdbf57',1,'glm']]], ['packed_5fmediump_5fdmat2x2',['packed_mediump_dmat2x2',['../a00303.html#ga39e8bb9b70e5694964e8266a21ba534e',1,'glm']]], ['packed_5fmediump_5fdmat2x3',['packed_mediump_dmat2x3',['../a00303.html#ga8897c6d9adb4140b1c3b0a07b8f0a430',1,'glm']]], ['packed_5fmediump_5fdmat2x4',['packed_mediump_dmat2x4',['../a00303.html#gaaa4126969c765e7faa2ebf6951c22ffb',1,'glm']]], ['packed_5fmediump_5fdmat3',['packed_mediump_dmat3',['../a00303.html#gaf969eb879c76a5f4576e4a1e10095cf6',1,'glm']]], ['packed_5fmediump_5fdmat3x2',['packed_mediump_dmat3x2',['../a00303.html#ga86efe91cdaa2864c828a5d6d46356c6a',1,'glm']]], ['packed_5fmediump_5fdmat3x3',['packed_mediump_dmat3x3',['../a00303.html#gaf85877d38d8cfbc21d59d939afd72375',1,'glm']]], ['packed_5fmediump_5fdmat3x4',['packed_mediump_dmat3x4',['../a00303.html#gad5dcaf93df267bc3029174e430e0907f',1,'glm']]], ['packed_5fmediump_5fdmat4',['packed_mediump_dmat4',['../a00303.html#ga4b0ee7996651ddd04eaa0c4cdbb66332',1,'glm']]], ['packed_5fmediump_5fdmat4x2',['packed_mediump_dmat4x2',['../a00303.html#ga9a15514a0631f700de6312b9d5db3a73',1,'glm']]], ['packed_5fmediump_5fdmat4x3',['packed_mediump_dmat4x3',['../a00303.html#gab5b36cc9caee1bb1c5178fe191bf5713',1,'glm']]], 
['packed_5fmediump_5fdmat4x4',['packed_mediump_dmat4x4',['../a00303.html#ga21e86cf2f6c126bacf31b8985db06bd4',1,'glm']]], ['packed_5fmediump_5fdvec1',['packed_mediump_dvec1',['../a00303.html#ga8920e90ea9c01d9c97e604a938ce2cbd',1,'glm']]], ['packed_5fmediump_5fdvec2',['packed_mediump_dvec2',['../a00303.html#ga0c754a783b6fcf80374c013371c4dae9',1,'glm']]], ['packed_5fmediump_5fdvec3',['packed_mediump_dvec3',['../a00303.html#ga1f18ada6f7cdd8c46db33ba987280fc4',1,'glm']]], ['packed_5fmediump_5fdvec4',['packed_mediump_dvec4',['../a00303.html#ga568b850f1116b667043533cf77826968',1,'glm']]], ['packed_5fmediump_5fivec1',['packed_mediump_ivec1',['../a00303.html#ga09507ef020a49517a7bcd50438f05056',1,'glm']]], ['packed_5fmediump_5fivec2',['packed_mediump_ivec2',['../a00303.html#gaaa891048dddef4627df33809ec726219',1,'glm']]], ['packed_5fmediump_5fivec3',['packed_mediump_ivec3',['../a00303.html#ga06f26d54dca30994eb1fdadb8e69f4a2',1,'glm']]], ['packed_5fmediump_5fivec4',['packed_mediump_ivec4',['../a00303.html#ga70130dc8ed9c966ec2a221ce586d45d8',1,'glm']]], ['packed_5fmediump_5fmat2',['packed_mediump_mat2',['../a00303.html#ga43cd36d430c5187bfdca34a23cb41581',1,'glm']]], ['packed_5fmediump_5fmat2x2',['packed_mediump_mat2x2',['../a00303.html#ga2d2a73e662759e301c22b8931ff6a526',1,'glm']]], ['packed_5fmediump_5fmat2x3',['packed_mediump_mat2x3',['../a00303.html#ga99049db01faf1e95ed9fb875a47dffe2',1,'glm']]], ['packed_5fmediump_5fmat2x4',['packed_mediump_mat2x4',['../a00303.html#gad43a240533f388ce0504b495d9df3d52',1,'glm']]], ['packed_5fmediump_5fmat3',['packed_mediump_mat3',['../a00303.html#ga13a75c6cbd0a411f694bc82486cd1e55',1,'glm']]], ['packed_5fmediump_5fmat3x2',['packed_mediump_mat3x2',['../a00303.html#ga04cfaf1421284df3c24ea0985dab24e7',1,'glm']]], ['packed_5fmediump_5fmat3x3',['packed_mediump_mat3x3',['../a00303.html#gaaa9cea174d342dd9650e3436823cab23',1,'glm']]], 
['packed_5fmediump_5fmat3x4',['packed_mediump_mat3x4',['../a00303.html#gabc93a9560593bd32e099c908531305f5',1,'glm']]], ['packed_5fmediump_5fmat4',['packed_mediump_mat4',['../a00303.html#gae89d72ffc149147f61df701bbc8755bf',1,'glm']]], ['packed_5fmediump_5fmat4x2',['packed_mediump_mat4x2',['../a00303.html#gaa458f9d9e0934bae3097e2a373b24707',1,'glm']]], ['packed_5fmediump_5fmat4x3',['packed_mediump_mat4x3',['../a00303.html#ga02ca6255394aa778abaeb0f733c4d2b6',1,'glm']]], ['packed_5fmediump_5fmat4x4',['packed_mediump_mat4x4',['../a00303.html#gaf304f64c06743c1571401504d3f50259',1,'glm']]], ['packed_5fmediump_5fuvec1',['packed_mediump_uvec1',['../a00303.html#ga2c29fb42bab9a4f9b66bc60b2e514a34',1,'glm']]], ['packed_5fmediump_5fuvec2',['packed_mediump_uvec2',['../a00303.html#gaa1f95690a78dc12e39da32943243aeef',1,'glm']]], ['packed_5fmediump_5fuvec3',['packed_mediump_uvec3',['../a00303.html#ga1ea2bbdbcb0a69242f6d884663c1b0ab',1,'glm']]], ['packed_5fmediump_5fuvec4',['packed_mediump_uvec4',['../a00303.html#ga63a73be86a4f07ea7a7499ab0bfebe45',1,'glm']]], ['packed_5fmediump_5fvec1',['packed_mediump_vec1',['../a00303.html#ga71d63cead1e113fca0bcdaaa33aad050',1,'glm']]], ['packed_5fmediump_5fvec2',['packed_mediump_vec2',['../a00303.html#ga6844c6f4691d1bf67673240850430948',1,'glm']]], ['packed_5fmediump_5fvec3',['packed_mediump_vec3',['../a00303.html#gab0eb771b708c5b2205d9b14dd1434fd8',1,'glm']]], ['packed_5fmediump_5fvec4',['packed_mediump_vec4',['../a00303.html#ga68c9bb24f387b312bae6a0a68e74d95e',1,'glm']]], ['packed_5fuvec1',['packed_uvec1',['../a00303.html#ga5621493caac01bdd22ab6be4416b0314',1,'glm']]], ['packed_5fuvec2',['packed_uvec2',['../a00303.html#gabcc33efb4d5e83b8fe4706360e75b932',1,'glm']]], ['packed_5fuvec3',['packed_uvec3',['../a00303.html#gab96804e99e3a72a35740fec690c79617',1,'glm']]], ['packed_5fuvec4',['packed_uvec4',['../a00303.html#ga8e5d92e84ebdbe2480cf96bc17d6e2f2',1,'glm']]], 
['packed_5fvec1',['packed_vec1',['../a00303.html#ga14741e3d9da9ae83765389927f837331',1,'glm']]], ['packed_5fvec2',['packed_vec2',['../a00303.html#ga3254defa5a8f0ae4b02b45fedba84a66',1,'glm']]], ['packed_5fvec3',['packed_vec3',['../a00303.html#gaccccd090e185450caa28b5b63ad4e8f0',1,'glm']]], ['packed_5fvec4',['packed_vec4',['../a00303.html#ga37a0e0bf653169b581c5eea3d547fa5d',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_9.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_9.js ================================================ var searchData= [ ['quat',['quat',['../a00252.html#gab0b441adb4509bc58d2946c2239a8942',1,'glm']]], ['qword',['qword',['../a00354.html#ga4021754ffb8e5ef14c75802b15657714',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_a.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_a.js ================================================ var searchData= [ ['sint',['sint',['../a00330.html#gada7e83fdfe943aba4f1d5bf80cb66f40',1,'glm']]], ['size1',['size1',['../a00359.html#gaeb877ac8f9a3703961736c1c5072cf68',1,'glm']]], ['size1_5ft',['size1_t',['../a00359.html#gaaf6accc57f5aa50447ba7310ce3f0d6f',1,'glm']]], ['size2',['size2',['../a00359.html#ga1bfe8c4975ff282bce41be2bacd524fe',1,'glm']]], ['size2_5ft',['size2_t',['../a00359.html#ga5976c25657d4e2b5f73f39364c3845d6',1,'glm']]], ['size3',['size3',['../a00359.html#gae1c72956d0359b0db332c6c8774d3b04',1,'glm']]], ['size3_5ft',['size3_t',['../a00359.html#gaf2654983c60d641fd3808e65a8dfad8d',1,'glm']]], ['size4',['size4',['../a00359.html#ga3a19dde617beaf8ce3cfc2ac5064e9aa',1,'glm']]], ['size4_5ft',['size4_t',['../a00359.html#gaa423efcea63675a2df26990dbcb58656',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_b.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_b.js ================================================ var searchData= [ ['u16',['u16',['../a00304.html#gaa2d7acc0adb536fab71fe261232a40ff',1,'glm']]], ['u16vec1',['u16vec1',['../a00304.html#ga08c05ba8ffb19f5d14ab584e1e9e9ee5',1,'glm::u16vec1()'],['../a00346.html#ga52cc069a92e126c3a8dcde93424d2ef0',1,'glm::gtx::u16vec1()']]], ['u16vec2',['u16vec2',['../a00304.html#ga2a78447eb9d66a114b193f4a25899c16',1,'glm']]], ['u16vec3',['u16vec3',['../a00304.html#ga1c522ca821c27b862fe51cf4024b064b',1,'glm']]], ['u16vec4',['u16vec4',['../a00304.html#ga529496d75775fb656a07993ea9af2450',1,'glm']]], ['u32',['u32',['../a00304.html#ga8165913e068444f7842302d40ba897b9',1,'glm']]], ['u32vec1',['u32vec1',['../a00304.html#gae627372cfd5f20dd87db490387b71195',1,'glm::u32vec1()'],['../a00346.html#ga9bbc1e14aea65cba5e2dcfef6a67d9f3',1,'glm::gtx::u32vec1()']]], ['u32vec2',['u32vec2',['../a00304.html#ga2a266e46ee218d0c680f12b35c500cc0',1,'glm']]], ['u32vec3',['u32vec3',['../a00304.html#gae267358ff2a41d156d97f5762630235a',1,'glm']]], ['u32vec4',['u32vec4',['../a00304.html#ga31cef34e4cd04840c54741ff2f7005f0',1,'glm']]], ['u64',['u64',['../a00304.html#gaf3f312156984c365e9f65620354da70b',1,'glm']]], ['u64vec1',['u64vec1',['../a00304.html#gaf09f3ca4b671a4a4f84505eb4cc865fd',1,'glm::u64vec1()'],['../a00346.html#ga818de170e2584ab037130f2881925974',1,'glm::gtx::u64vec1()']]], ['u64vec2',['u64vec2',['../a00304.html#gaef3824ed4fe435a019c5b9dddf53fec5',1,'glm']]], ['u64vec3',['u64vec3',['../a00304.html#ga489b89ba93d4f7b3934df78debc52276',1,'glm']]], ['u64vec4',['u64vec4',['../a00304.html#ga3945dd6515d4498cb603e65ff867ab03',1,'glm']]], ['u8',['u8',['../a00304.html#gaecc7082561fc9028b844b6cf3d305d36',1,'glm']]], 
['u8vec1',['u8vec1',['../a00304.html#ga29b349e037f0b24320b4548a143daee2',1,'glm::u8vec1()'],['../a00346.html#ga5853fe457f4c8a6bc09343d0e9833980',1,'glm::gtx::u8vec1()']]], ['u8vec2',['u8vec2',['../a00304.html#ga518b8d948a6b4ddb72f84d5c3b7b6611',1,'glm']]], ['u8vec3',['u8vec3',['../a00304.html#ga7c5706f6bbe5282e5598acf7e7b377e2',1,'glm']]], ['u8vec4',['u8vec4',['../a00304.html#ga20779a61de2fd526a17f12fe53ec46b1',1,'glm']]], ['uint16',['uint16',['../a00263.html#ga05f6b0ae8f6a6e135b0e290c25fe0e4e',1,'glm']]], ['uint16_5ft',['uint16_t',['../a00304.html#ga91f91f411080c37730856ff5887f5bcf',1,'glm']]], ['uint32',['uint32',['../a00263.html#ga1134b580f8da4de94ca6b1de4d37975e',1,'glm']]], ['uint32_5ft',['uint32_t',['../a00304.html#ga2171d9dc1fefb1c82e2817f45b622eac',1,'glm']]], ['uint64',['uint64',['../a00263.html#gab630f76c26b50298187f7889104d4b9c',1,'glm']]], ['uint64_5ft',['uint64_t',['../a00304.html#ga3999d3e7ff22025c16ddb601e14dfdee',1,'glm']]], ['uint8',['uint8',['../a00263.html#gadde6aaee8457bee49c2a92621fe22b79',1,'glm']]], ['uint8_5ft',['uint8_t',['../a00304.html#ga28d97808322d3c92186e4a0c067d7e8e',1,'glm']]], ['umat2',['umat2',['../a00294.html#ga4cae85566f900debf930c41944b64691',1,'glm']]], ['umat2x2',['umat2x2',['../a00294.html#gabf8acdd33ce8951051edbca5200898aa',1,'glm']]], ['umat2x3',['umat2x3',['../a00294.html#ga1870da7578d5022b973a83155d386ab3',1,'glm']]], ['umat2x4',['umat2x4',['../a00294.html#ga57936a3998e992370e59a223e0ee4fd4',1,'glm']]], ['umat3',['umat3',['../a00294.html#ga5085e3ff02abbac5e537eb7b89ab63b6',1,'glm']]], ['umat3x2',['umat3x2',['../a00294.html#ga9cd7fa637a4a6788337f45231fad9e1a',1,'glm']]], ['umat3x3',['umat3x3',['../a00294.html#ga1f2cfcf3357db0cdf31fcb15e3c6bafb',1,'glm']]], ['umat3x4',['umat3x4',['../a00294.html#gae7c78ff3fc4309605ab0fa186c8d48ba',1,'glm']]], ['umat4',['umat4',['../a00294.html#ga38bc7bb6494e344185df596deeb4544c',1,'glm']]], ['umat4x2',['umat4x2',['../a00294.html#ga70fa2d05896aa83cbc8c07672a429b53',1,'glm']]], 
['umat4x3',['umat4x3',['../a00294.html#ga87581417945411f75cb31dd6ca1dba98',1,'glm']]], ['umat4x4',['umat4x4',['../a00294.html#gaf72e6d399c42985db6872c50f53d7eb8',1,'glm']]], ['uvec1',['uvec1',['../a00276.html#gac3bdd96183d23876c58a1424585fefe7',1,'glm']]], ['uvec2',['uvec2',['../a00281.html#ga2f6d9ec3ae14813ade37d6aee3715fdb',1,'glm']]], ['uvec3',['uvec3',['../a00281.html#ga3d3e55874babd4bf93baa7bbc83ae418',1,'glm']]], ['uvec4',['uvec4',['../a00281.html#gaa57e96bb337867329d5f43bcc27c1095',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_c.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_c.js ================================================ var searchData= [ ['vec1',['vec1',['../a00270.html#gadfc071d934d8dae7955a1d530a3cf656',1,'glm']]], ['vec2',['vec2',['../a00281.html#gabe65c061834f61b4f7cb6037b19006a4',1,'glm']]], ['vec3',['vec3',['../a00281.html#ga9c3019b13faf179e4ad3626ea66df334',1,'glm']]], ['vec4',['vec4',['../a00281.html#gac215a35481a6597d1bf622a382e9d6e2',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_d.html ================================================
Loading...
Searching...
No Matches
================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/search/typedefs_d.js ================================================ var searchData= [ ['word',['word',['../a00354.html#ga16e9fea0ef1e6c4ef472d3d1731c49a5',1,'glm']]] ]; ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/api/tabs.css ================================================ .tabs, .tabs2, .tabs3 { background-image: url('tab_b.png'); width: 100%; z-index: 101; font-size: 13px; font-family: 'Lucida Grande',Geneva,Helvetica,Arial,sans-serif; } .tabs2 { font-size: 10px; } .tabs3 { font-size: 9px; } .tablist { margin: 0; padding: 0; display: table; } .tablist li { float: left; display: table-cell; background-image: url('tab_b.png'); line-height: 36px; list-style: none; } .tablist a { display: block; padding: 0 20px; font-weight: bold; background-image:url('tab_s.png'); background-repeat:no-repeat; background-position:right; color: #283A5D; text-shadow: 0px 1px 1px rgba(255, 255, 255, 0.9); text-decoration: none; outline: none; } .tabs3 .tablist a { padding: 0 10px; } .tablist a:hover { background-image: url('tab_h.png'); background-repeat:repeat-x; color: #fff; text-shadow: 0px 1px 1px rgba(0, 0, 0, 1.0); text-decoration: none; } .tablist li.current a { background-image: url('tab_a.png'); background-repeat:repeat-x; color: #fff; text-shadow: 0px 1px 1px rgba(0, 0, 0, 1.0); } ================================================ FILE: field_construction/submodules/diff-langsurf-rasterizer/third_party/glm/doc/man.doxy ================================================ # Doxyfile 1.8.10 # This file describes the settings to be used by the documentation system # doxygen (www.doxygen.org) for a project. # # All text after a double hash (##) is considered a comment and is placed in # front of the TAG it is preceding. 
# # All text after a single hash (#) is considered a comment and will be ignored. # The format is: # TAG = value [value, ...] # For lists, items can also be appended using: # TAG += value [value, ...] # Values that contain spaces should be placed between quotes (\" \"). #--------------------------------------------------------------------------- # Project related configuration options #--------------------------------------------------------------------------- # This tag specifies the encoding used for all characters in the config file # that follow. The default is UTF-8 which is also the encoding used for all text # before the first occurrence of this tag. Doxygen uses libiconv (or the iconv # built into libc) for the transcoding. See http://www.gnu.org/software/libiconv # for the list of possible encodings. # The default value is: UTF-8. DOXYFILE_ENCODING = UTF-8 # The PROJECT_NAME tag is a single word (or a sequence of words surrounded by # double-quotes, unless you are using Doxywizard) that should identify the # project for which the documentation is generated. This name is used in the # title of most generated pages and in a few other places. # The default value is: My Project. PROJECT_NAME = "0.9.9 API documentation" # The PROJECT_NUMBER tag can be used to enter a project or revision number. This # could be handy for archiving the generated documentation or if some version # control system is used. PROJECT_NUMBER = # Using the PROJECT_BRIEF tag one can provide an optional one line description # for a project that appears at the top of each page and should give viewer a # quick idea about the purpose of the project. Keep the description short. PROJECT_BRIEF = # With the PROJECT_LOGO tag one can specify a logo or an icon that is included # in the documentation. The maximum height of the logo should not exceed 55 # pixels and the maximum width should not exceed 200 pixels. Doxygen will copy # the logo to the output directory. 
PROJECT_LOGO = theme/logo-mini.png # The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path # into which the generated documentation will be written. If a relative path is # entered, it will be relative to the location where doxygen was started. If # left blank the current directory will be used. OUTPUT_DIRECTORY = . # If the CREATE_SUBDIRS tag is set to YES then doxygen will create 4096 sub- # directories (in 2 levels) under the output directory of each output format and # will distribute the generated files over these directories. Enabling this # option can be useful when feeding doxygen a huge amount of source files, where # putting all generated files in the same directory would otherwise causes # performance problems for the file system. # The default value is: NO. CREATE_SUBDIRS = NO # If the ALLOW_UNICODE_NAMES tag is set to YES, doxygen will allow non-ASCII # characters to appear in the names of generated files. If set to NO, non-ASCII # characters will be escaped, for example _xE3_x81_x84 will be used for Unicode # U+3044. # The default value is: NO. ALLOW_UNICODE_NAMES = NO # The OUTPUT_LANGUAGE tag is used to specify the language in which all # documentation generated by doxygen is written. Doxygen will use this # information to generate all constant output in the proper language. # Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese, # Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States), # Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian, # Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages), # Korean, Korean-en (Korean with English messages), Latvian, Lithuanian, # Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian, # Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish, # Ukrainian and Vietnamese. # The default value is: English. 
OUTPUT_LANGUAGE = English # If the BRIEF_MEMBER_DESC tag is set to YES, doxygen will include brief member # descriptions after the members that are listed in the file and class # documentation (similar to Javadoc). Set to NO to disable this. # The default value is: YES. BRIEF_MEMBER_DESC = YES # If the REPEAT_BRIEF tag is set to YES, doxygen will prepend the brief # description of a member or function before the detailed description # # Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the # brief descriptions will be completely suppressed. # The default value is: YES. REPEAT_BRIEF = YES # This tag implements a quasi-intelligent brief description abbreviator that is # used to form the text in various listings. Each string in this list, if found # as the leading text of the brief description, will be stripped from the text # and the result, after processing the whole list, is used as the annotated # text. Otherwise, the brief description is used as-is. If left blank, the # following values are used ($name is automatically replaced with the name of # the entity):The $name class, The $name widget, The $name file, is, provides, # specifies, contains, represents, a, an and the. ABBREVIATE_BRIEF = "The $name class " \ "The $name widget " \ "The $name file " \ is \ provides \ specifies \ contains \ represents \ a \ an \ the # If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then # doxygen will generate a detailed section even if there is only a brief # description. # The default value is: NO. ALWAYS_DETAILED_SEC = NO # If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all # inherited members of a class in the documentation of that class as if those # members were ordinary class members. Constructors, destructors and assignment # operators of the base classes will not be shown. # The default value is: NO. 
INLINE_INHERITED_MEMB  = NO

# If the FULL_PATH_NAMES tag is set to YES, doxygen will prepend the full path
# before file names in the file list and in the header files. If set to NO the
# shortest path that makes the file name unique will be used.
# The default value is: YES.

FULL_PATH_NAMES        = NO

# The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path.
# Stripping is only done if one of the specified strings matches the left-hand
# part of the path. The tag can be used to show relative paths in the file list.
# If left blank the directory from which doxygen is run is used as the path to
# strip.
#
# Note that you can specify absolute paths here, but also relative paths, which
# will be relative from the directory where doxygen is started.
# This tag requires that the tag FULL_PATH_NAMES is set to YES.

STRIP_FROM_PATH        = "C:/Documents and Settings/Groove/ "

# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the
# path mentioned in the documentation of a class, which tells the reader which
# header file to include in order to use a class. If left blank only the name of
# the header file containing the class definition is used. Otherwise one should
# specify the list of include paths that are normally passed to the compiler
# using the -I flag.

STRIP_FROM_INC_PATH    =

# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but
# less readable) file names. This can be useful if your file system doesn't
# support long names like on DOS, Mac, or CD-ROM.
# The default value is: NO.

SHORT_NAMES            = YES

# If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the
# first line (until the first dot) of a Javadoc-style comment as the brief
# description. If set to NO, the Javadoc-style will behave just like regular Qt-
# style comments (thus requiring an explicit @brief command for a brief
# description.)
# The default value is: NO.
JAVADOC_AUTOBRIEF = YES # If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first # line (until the first dot) of a Qt-style comment as the brief description. If # set to NO, the Qt-style will behave just like regular Qt-style comments (thus # requiring an explicit \brief command for a brief description.) # The default value is: NO. QT_AUTOBRIEF = NO # The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a # multi-line C++ special comment block (i.e. a block of //! or /// comments) as # a brief description. This used to be the default behavior. The new default is # to treat a multi-line C++ comment block as a detailed description. Set this # tag to YES if you prefer the old behavior instead. # # Note that setting this tag to YES also means that rational rose comments are # not recognized any more. # The default value is: NO. MULTILINE_CPP_IS_BRIEF = NO # If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the # documentation from any documented member that it re-implements. # The default value is: YES. INHERIT_DOCS = YES # If the SEPARATE_MEMBER_PAGES tag is set to YES then doxygen will produce a new # page for each member. If set to NO, the documentation of a member will be part # of the file/class/namespace that contains it. # The default value is: NO. SEPARATE_MEMBER_PAGES = NO # The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen # uses this value to replace tabs by spaces in code fragments. # Minimum value: 1, maximum value: 16, default value: 4. TAB_SIZE = 8 # This tag can be used to specify a number of aliases that act as commands in # the documentation. An alias has the form: # name=value # For example adding # "sideeffect=@par Side Effects:\n" # will allow you to put the command \sideeffect (or @sideeffect) in the # documentation, which will result in a user-defined paragraph with heading # "Side Effects:". 
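# As a concrete configuration line, the alias example above would read
# (illustrative only, not an entry used by this configuration):
#
# ALIASES = "sideeffect=@par Side Effects:\n"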
You can put \n's in the value part of an alias to insert # newlines. ALIASES = # This tag can be used to specify a number of word-keyword mappings (TCL only). # A mapping has the form "name=value". For example adding "class=itcl::class" # will allow you to use the command class in the itcl::class meaning. TCL_SUBST = # Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources # only. Doxygen will then generate output that is more tailored for C. For # instance, some of the names that are used will be different. The list of all # members will be omitted, etc. # The default value is: NO. OPTIMIZE_OUTPUT_FOR_C = NO # Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or # Python sources only. Doxygen will then generate output that is more tailored # for that language. For instance, namespaces will be presented as packages, # qualified scopes will look different, etc. # The default value is: NO. OPTIMIZE_OUTPUT_JAVA = NO # Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran # sources. Doxygen will then generate output that is tailored for Fortran. # The default value is: NO. OPTIMIZE_FOR_FORTRAN = NO # Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL # sources. Doxygen will then generate output that is tailored for VHDL. # The default value is: NO. OPTIMIZE_OUTPUT_VHDL = NO # Doxygen selects the parser to use depending on the extension of the files it # parses. With this tag you can assign which parser to use for a given # extension. Doxygen has a built-in mapping, but you can override or extend it # using this tag. The format is ext=language, where ext is a file extension, and # language is one of the parsers supported by doxygen: IDL, Java, Javascript, # C#, C, C++, D, PHP, Objective-C, Python, Fortran (fixed format Fortran: # FortranFixed, free formatted Fortran: FortranFree, unknown formatted Fortran: # Fortran. 
# In the latter case the parser tries to guess whether the code is fixed
# or free formatted code, this is the default for Fortran type files), VHDL. For
# instance to make doxygen treat .inc files as Fortran files (default is PHP),
# and .f files as C (default is Fortran), use: inc=Fortran f=C.
#
# Note: For files without extension you can use no_extension as a placeholder.
#
# Note that for custom extensions you also need to set FILE_PATTERNS otherwise
# the files are not read by doxygen.

EXTENSION_MAPPING      =

# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments
# according to the Markdown format, which allows for more readable
# documentation. See http://daringfireball.net/projects/markdown/ for details.
# The output of markdown processing is further processed by doxygen, so you can
# mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in
# case of backward compatibility issues.
# The default value is: YES.

MARKDOWN_SUPPORT       = YES

# When enabled doxygen tries to link words that correspond to documented
# classes, or namespaces to their corresponding documentation. Such a link can
# be prevented in individual cases by putting a % sign in front of the word or
# globally by setting AUTOLINK_SUPPORT to NO.
# The default value is: YES.

AUTOLINK_SUPPORT       = YES

# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want
# to include (a tag file for) the STL sources as input, then you should set this
# tag to YES in order to let doxygen match functions declarations and
# definitions whose arguments contain STL classes (e.g. func(std::string);
# versus func(std::string) {}). This also makes the inheritance and
# collaboration diagrams that involve STL classes more complete and accurate.
# The default value is: NO.

BUILTIN_STL_SUPPORT    = NO

# If you use Microsoft's C++/CLI language, you should set this option to YES to
# enable parsing support.
# The default value is: NO.
CPP_CLI_SUPPORT        = NO

# Set the SIP_SUPPORT tag to YES if your project consists of sip (see:
# http://www.riverbankcomputing.co.uk/software/sip/intro) sources only. Doxygen
# will parse them like normal C++ but will assume all classes use public instead
# of private inheritance when no explicit protection keyword is present.
# The default value is: NO.

SIP_SUPPORT            = NO

# For Microsoft's IDL there are propget and propput attributes to indicate
# getter and setter methods for a property. Setting this option to YES will make
# doxygen replace the get and set methods by a property in the documentation.
# This will only work if the methods are indeed getting or setting a simple
# type. If this is not the case, or you want to show the methods anyway, you
# should set this option to NO.
# The default value is: YES.

IDL_PROPERTY_SUPPORT   = YES

# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC
# tag is set to YES then doxygen will reuse the documentation of the first
# member in the group (if any) for the other members of the group. By default
# all members of a group must be documented explicitly.
# The default value is: NO.

DISTRIBUTE_GROUP_DOC   = NO

# If one adds a struct or class to a group and this option is enabled, then also
# any nested class or struct is added to the same group. By default this option
# is disabled and one has to add nested compounds explicitly via \ingroup.
# The default value is: NO.

GROUP_NESTED_COMPOUNDS = NO

# Set the SUBGROUPING tag to YES to allow class member groups of the same type
# (for instance a group of public functions) to be put as a subgroup of that
# type (e.g. under the Public Functions section). Set it to NO to prevent
# subgrouping. Alternatively, this can be done per class using the
# \nosubgrouping command.
# The default value is: YES.

SUBGROUPING            = NO

# When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions
# are shown inside the group in which they are included (e.g.
using \ingroup) # instead of on a separate page (for HTML and Man pages) or section (for LaTeX # and RTF). # # Note that this feature does not work in combination with # SEPARATE_MEMBER_PAGES. # The default value is: NO. INLINE_GROUPED_CLASSES = NO # When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions # with only public data fields or simple typedef fields will be shown inline in # the documentation of the scope in which they are defined (i.e. file, # namespace, or group documentation), provided this scope is documented. If set # to NO, structs, classes, and unions are shown on a separate page (for HTML and # Man pages) or section (for LaTeX and RTF). # The default value is: NO. INLINE_SIMPLE_STRUCTS = NO # When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or # enum is documented as struct, union, or enum with the name of the typedef. So # typedef struct TypeS {} TypeT, will appear in the documentation as a struct # with name TypeT. When disabled the typedef will appear as a member of a file, # namespace, or class. And the struct will be named TypeS. This can typically be # useful for C code in case the coding convention dictates that all compound # types are typedef'ed and only the typedef is referenced, never the tag name. # The default value is: NO. TYPEDEF_HIDES_STRUCT = NO # The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This # cache is used to resolve symbols given their name and scope. Since this can be # an expensive process and often the same symbol appears multiple times in the # code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small # doxygen will become slower. If the cache is too large, memory is wasted. The # cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range # is 0..9, the default is 0, corresponding to a cache size of 2^16=65536 # symbols. 
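# As a worked example of that formula (illustrative, not a change to this
# configuration): setting LOOKUP_CACHE_SIZE to 2 would give a cache of
# 2^(16+2) = 2^18 = 262144 symbols.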
At the end of a run doxygen will report the cache usage and suggest # the optimal cache size from a speed point of view. # Minimum value: 0, maximum value: 9, default value: 0. LOOKUP_CACHE_SIZE = 0 #--------------------------------------------------------------------------- # Build related configuration options #--------------------------------------------------------------------------- # If the EXTRACT_ALL tag is set to YES, doxygen will assume all entities in # documentation are documented, even if no documentation was available. Private # class members and static file members will be hidden unless the # EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES. # Note: This will also disable the warnings about undocumented members that are # normally produced when WARNINGS is set to YES. # The default value is: NO. EXTRACT_ALL = NO # If the EXTRACT_PRIVATE tag is set to YES, all private members of a class will # be included in the documentation. # The default value is: NO. EXTRACT_PRIVATE = NO # If the EXTRACT_PACKAGE tag is set to YES, all members with package or internal # scope will be included in the documentation. # The default value is: NO. EXTRACT_PACKAGE = NO # If the EXTRACT_STATIC tag is set to YES, all static members of a file will be # included in the documentation. # The default value is: NO. EXTRACT_STATIC = YES # If the EXTRACT_LOCAL_CLASSES tag is set to YES, classes (and structs) defined # locally in source files will be included in the documentation. If set to NO, # only classes defined in header files are included. Does not have any effect # for Java sources. # The default value is: YES. EXTRACT_LOCAL_CLASSES = NO # This flag is only useful for Objective-C code. If set to YES, local methods, # which are defined in the implementation section but not in the interface are # included in the documentation. If set to NO, only methods in the interface are # included. # The default value is: NO. 
EXTRACT_LOCAL_METHODS  = NO

# If this flag is set to YES, the members of anonymous namespaces will be
# extracted and appear in the documentation as a namespace called
# 'anonymous_namespace{file}', where file will be replaced with the base name of
# the file that contains the anonymous namespace. By default anonymous
# namespaces are hidden.
# The default value is: NO.

EXTRACT_ANON_NSPACES   = NO

# If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all
# undocumented members inside documented classes or files. If set to NO these
# members will be included in the various overviews, but no documentation
# section is generated. This option has no effect if EXTRACT_ALL is enabled.
# The default value is: NO.

HIDE_UNDOC_MEMBERS     = YES

# If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all
# undocumented classes that are normally visible in the class hierarchy. If set
# to NO, these classes will be included in the various overviews. This option
# has no effect if EXTRACT_ALL is enabled.
# The default value is: NO.

HIDE_UNDOC_CLASSES     = YES

# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend
# (class|struct|union) declarations. If set to NO, these declarations will be
# included in the documentation.
# The default value is: NO.

HIDE_FRIEND_COMPOUNDS  = YES

# If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any
# documentation blocks found inside the body of a function. If set to NO, these
# blocks will be appended to the function's detailed documentation block.
# The default value is: NO.

HIDE_IN_BODY_DOCS      = YES

# The INTERNAL_DOCS tag determines if documentation that is typed after a
# \internal command is included. If the tag is set to NO then the documentation
# will be excluded. Set it to YES to include the internal documentation.
# The default value is: NO.

INTERNAL_DOCS          = NO

# If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file
# names in lower-case letters.
If set to YES, upper-case letters are also # allowed. This is useful if you have classes or files whose names only differ # in case and if your file system supports case sensitive file names. Windows # and Mac users are advised to set this option to NO. # The default value is: system dependent. CASE_SENSE_NAMES = YES # If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with # their full class and namespace scopes in the documentation. If set to YES, the # scope will be hidden. # The default value is: NO. HIDE_SCOPE_NAMES = YES # If the HIDE_COMPOUND_REFERENCE tag is set to NO (default) then doxygen will # append additional text to a page's title, such as Class Reference. If set to # YES the compound reference will be hidden. # The default value is: NO. HIDE_COMPOUND_REFERENCE= NO # If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of # the files that are included by a file in the documentation of that file. # The default value is: YES. SHOW_INCLUDE_FILES = NO # If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each # grouped member an include statement to the documentation, telling the reader # which file to include in order to use the member. # The default value is: NO. SHOW_GROUPED_MEMB_INC = NO # If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include # files with double quotes in the documentation rather than with sharp brackets. # The default value is: NO. FORCE_LOCAL_INCLUDES = NO # If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the # documentation for inline members. # The default value is: YES. INLINE_INFO = NO # If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the # (detailed) documentation of file and class members alphabetically by member # name. If set to NO, the members will appear in declaration order. # The default value is: YES. 
SORT_MEMBER_DOCS = YES # If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief # descriptions of file, namespace and class members alphabetically by member # name. If set to NO, the members will appear in declaration order. Note that # this will also influence the order of the classes in the class list. # The default value is: NO. SORT_BRIEF_DOCS = YES # If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the # (brief and detailed) documentation of class members so that constructors and # destructors are listed first. If set to NO the constructors will appear in the # respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS. # Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief # member documentation. # Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting # detailed member documentation. # The default value is: NO. SORT_MEMBERS_CTORS_1ST = NO # If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy # of group names into alphabetical order. If set to NO the group names will # appear in their defined order. # The default value is: NO. SORT_GROUP_NAMES = NO # If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by # fully-qualified names, including namespaces. If set to NO, the class list will # be sorted only by class name, not including the namespace part. # Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES. # Note: This option applies only to the class list, not to the alphabetical # list. # The default value is: NO. SORT_BY_SCOPE_NAME = YES # If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper # type resolution of all parameters of a function it will reject a match between # the prototype and the implementation of a member function even if there is # only one candidate or it is obvious which candidate to choose by doing a # simple string match. 
By disabling STRICT_PROTO_MATCHING doxygen will still # accept a match between prototype and implementation in such cases. # The default value is: NO. STRICT_PROTO_MATCHING = NO # The GENERATE_TODOLIST tag can be used to enable (YES) or disable (NO) the todo # list. This list is created by putting \todo commands in the documentation. # The default value is: YES. GENERATE_TODOLIST = YES # The GENERATE_TESTLIST tag can be used to enable (YES) or disable (NO) the test # list. This list is created by putting \test commands in the documentation. # The default value is: YES. GENERATE_TESTLIST = YES # The GENERATE_BUGLIST tag can be used to enable (YES) or disable (NO) the bug # list. This list is created by putting \bug commands in the documentation. # The default value is: YES. GENERATE_BUGLIST = YES # The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or disable (NO) # the deprecated list. This list is created by putting \deprecated commands in # the documentation. # The default value is: YES. GENERATE_DEPRECATEDLIST= YES # The ENABLED_SECTIONS tag can be used to enable conditional documentation # sections, marked by \if ... \endif and \cond # ... \endcond blocks. ENABLED_SECTIONS = # The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the # initial value of a variable or macro / define can have for it to appear in the # documentation. If the initializer consists of more lines than specified here # it will be hidden. Use a value of 0 to hide initializers completely. The # appearance of the value of individual variables and macros / defines can be # controlled using \showinitializer or \hideinitializer command in the # documentation regardless of this setting. # Minimum value: 0, maximum value: 10000, default value: 30. MAX_INITIALIZER_LINES = 30 # Set the SHOW_USED_FILES tag to NO to disable the list of files generated at # the bottom of the documentation of classes and structs. 
If set to YES, the # list will mention the files that were used to generate the documentation. # The default value is: YES. SHOW_USED_FILES = NO # Set the SHOW_FILES tag to NO to disable the generation of the Files page. This # will remove the Files entry from the Quick Index and from the Folder Tree View # (if specified). # The default value is: YES. SHOW_FILES = YES # Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces # page. This will remove the Namespaces entry from the Quick Index and from the # Folder Tree View (if specified). # The default value is: YES. SHOW_NAMESPACES = YES # The FILE_VERSION_FILTER tag can be used to specify a program or script that # doxygen should invoke to get the current version for each file (typically from # the version control system). Doxygen will invoke the program by executing (via # popen()) the command command input-file, where command is the value of the # FILE_VERSION_FILTER tag, and input-file is the name of an input file provided # by doxygen. Whatever the program writes to standard output is used as the file # version. For an example see the documentation. FILE_VERSION_FILTER = # The LAYOUT_FILE tag can be used to specify a layout file which will be parsed # by doxygen. The layout file controls the global structure of the generated # output files in an output format independent way. To create the layout file # that represents doxygen's defaults, run doxygen with the -l option. You can # optionally specify a file name after the option, if omitted DoxygenLayout.xml # will be used as the name of the layout file. # # Note that if you run doxygen from a directory containing a file called # DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE # tag is left empty. LAYOUT_FILE = # The CITE_BIB_FILES tag can be used to specify one or more bib files containing # the reference definitions. This must be a list of .bib files. 
The .bib # extension is automatically appended if omitted. This requires the bibtex tool # to be installed. See also http://en.wikipedia.org/wiki/BibTeX for more info. # For LaTeX the style of the bibliography can be controlled using # LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the # search path. See also \cite for info how to create references. CITE_BIB_FILES = #--------------------------------------------------------------------------- # Configuration options related to warning and progress messages #--------------------------------------------------------------------------- # The QUIET tag can be used to turn on/off the messages that are generated to # standard output by doxygen. If QUIET is set to YES this implies that the # messages are off. # The default value is: NO. QUIET = NO # The WARNINGS tag can be used to turn on/off the warning messages that are # generated to standard error (stderr) by doxygen. If WARNINGS is set to YES # this implies that the warnings are on. # # Tip: Turn warnings on while writing the documentation. # The default value is: YES. WARNINGS = YES # If the WARN_IF_UNDOCUMENTED tag is set to YES then doxygen will generate # warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag # will automatically be disabled. # The default value is: YES. WARN_IF_UNDOCUMENTED = YES # If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for # potential errors in the documentation, such as not documenting some parameters # in a documented function, or documenting parameters that don't exist or using # markup commands wrongly. # The default value is: YES. WARN_IF_DOC_ERROR = YES # This WARN_NO_PARAMDOC option can be enabled to get warnings for functions that # are documented, but have no documentation for their parameters or return # value. If set to NO, doxygen will only warn about wrong or incomplete # parameter documentation, but not about the absence of documentation. 
# The default value is: NO. WARN_NO_PARAMDOC = NO # The WARN_FORMAT tag determines the format of the warning messages that doxygen # can produce. The string should contain the $file, $line, and $text tags, which # will be replaced by the file and line number from which the warning originated # and the warning text. Optionally the format may contain $version, which will # be replaced by the version of the file (if it could be obtained via # FILE_VERSION_FILTER) # The default value is: $file:$line: $text. WARN_FORMAT = "$file:$line: $text" # The WARN_LOGFILE tag can be used to specify a file to which warning and error # messages should be written. If left blank the output is written to standard # error (stderr). WARN_LOGFILE = #--------------------------------------------------------------------------- # Configuration options related to the input files #--------------------------------------------------------------------------- # The INPUT tag is used to specify the files and/or directories that contain # documented source files. You may enter file names like myfile.cpp or # directories like /usr/src/myproject. Separate the files or directories with # spaces. See also FILE_PATTERNS and EXTENSION_MAPPING # Note: If this tag is empty the current directory is searched. INPUT = ../glm \ . # This tag can be used to specify the character encoding of the source files # that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses # libiconv (or the iconv built into libc) for the transcoding. See the libiconv # documentation (see: http://www.gnu.org/software/libiconv) for the list of # possible encodings. # The default value is: UTF-8. INPUT_ENCODING = UTF-8 # If the value of the INPUT tag contains directories, you can use the # FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and # *.h) to filter out the source-files in the directories. 
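# For example (an illustrative addition, not used by this configuration), to
# have doxygen parse files with a custom .cuh extension as C++, both the
# EXTENSION_MAPPING and FILE_PATTERNS tags would need an entry:
#
# EXTENSION_MAPPING = cuh=C++
# FILE_PATTERNS     = *.cuh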
# # Note that for custom extensions or not directly supported extensions you also # need to set EXTENSION_MAPPING for the extension otherwise the files are not # read by doxygen. # # If left blank the following patterns are tested:*.c, *.cc, *.cxx, *.cpp, # *.c++, *.java, *.ii, *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h, # *.hh, *.hxx, *.hpp, *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc, # *.m, *.markdown, *.md, *.mm, *.dox, *.py, *.f90, *.f, *.for, *.tcl, *.vhd, # *.vhdl, *.ucf, *.qsf, *.as and *.js. FILE_PATTERNS = *.hpp \ *.doxy # The RECURSIVE tag can be used to specify whether or not subdirectories should # be searched for input files as well. # The default value is: NO. RECURSIVE = YES # The EXCLUDE tag can be used to specify files and/or directories that should be # excluded from the INPUT source files. This way you can easily exclude a # subdirectory from a directory tree whose root is specified with the INPUT tag. # # Note that relative paths are relative to the directory from which doxygen is # run. EXCLUDE = # The EXCLUDE_SYMLINKS tag can be used to select whether or not files or # directories that are symbolic links (a Unix file system feature) are excluded # from the input. # The default value is: NO. EXCLUDE_SYMLINKS = NO # If the value of the INPUT tag contains directories, you can use the # EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude # certain files from those directories. # # Note that the wildcards are matched against the file with absolute path, so to # exclude all test directories for example use the pattern */test/* EXCLUDE_PATTERNS = # The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names # (namespaces, classes, functions, etc.) that should be excluded from the # output. The symbol name can be a fully qualified name, a word, or if the # wildcard * is used, a substring. 
# Examples: ANamespace, AClass,
# AClass::ANamespace, ANamespace::*Test
#
# Note that the wildcards are matched against the file with absolute path, so to
# exclude all test directories use the pattern */test/*

EXCLUDE_SYMBOLS        =

# The EXAMPLE_PATH tag can be used to specify one or more files or directories
# that contain example code fragments that are included (see the \include
# command).

EXAMPLE_PATH           =

# If the value of the EXAMPLE_PATH tag contains directories, you can use the
# EXAMPLE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
# *.h) to filter out the source-files in the directories. If left blank all
# files are included.

EXAMPLE_PATTERNS       = *

# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
# searched for input files to be used with the \include or \dontinclude commands
# irrespective of the value of the RECURSIVE tag.
# The default value is: NO.

EXAMPLE_RECURSIVE      = NO

# The IMAGE_PATH tag can be used to specify one or more files or directories
# that contain images that are to be included in the documentation (see the
# \image command).

IMAGE_PATH             =

# The INPUT_FILTER tag can be used to specify a program that doxygen should
# invoke to filter for each input file. Doxygen will invoke the filter program
# by executing (via popen()) the command:
#
# <filter> <input-file>
#
# where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the
# name of an input file. Doxygen will then use the output that the filter
# program writes to standard output. If FILTER_PATTERNS is specified, this tag
# will be ignored.
#
# Note that the filter must not add or remove lines; it is applied before the
# code is scanned, but not when the output code is generated. If lines are added
# or removed, the anchors will not be placed correctly.

INPUT_FILTER           =

# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern
# basis. Doxygen will compare the file name with each pattern and apply the
# filter if there is a match.
The filters are a list of the form: pattern=filter # (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how # filters are used. If the FILTER_PATTERNS tag is empty or if none of the # patterns match the file name, INPUT_FILTER is applied. FILTER_PATTERNS = # If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using # INPUT_FILTER) will also be used to filter the input files that are used for # producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES). # The default value is: NO. FILTER_SOURCE_FILES = NO # The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file # pattern. A pattern will override the setting for FILTER_PATTERN (if any) and # it is also possible to disable source filtering for a specific pattern using # *.ext= (so without naming a filter). # This tag requires that the tag FILTER_SOURCE_FILES is set to YES. FILTER_SOURCE_PATTERNS = # If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that # is part of the input, its contents will be placed on the main page # (index.html). This can be useful if you have a project on for instance GitHub # and want to reuse the introduction page also for the doxygen output. USE_MDFILE_AS_MAINPAGE = #--------------------------------------------------------------------------- # Configuration options related to source browsing #--------------------------------------------------------------------------- # If the SOURCE_BROWSER tag is set to YES then a list of source files will be # generated. Documented entities will be cross-referenced with these sources. # # Note: To get rid of all source code in the generated output, make sure that # also VERBATIM_HEADERS is set to NO. # The default value is: NO. SOURCE_BROWSER = YES # Setting the INLINE_SOURCES tag to YES will include the body of functions, # classes and enums directly into the documentation. # The default value is: NO. 
INLINE_SOURCES = NO

# Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any
# special comment blocks from generated source code fragments. Normal C, C++ and
# Fortran comments will always remain visible.
# The default value is: YES.

STRIP_CODE_COMMENTS = YES

# If the REFERENCED_BY_RELATION tag is set to YES then for each documented
# function all documented functions referencing it will be listed.
# The default value is: NO.

REFERENCED_BY_RELATION = YES

# If the REFERENCES_RELATION tag is set to YES then for each documented function
# all documented entities called/used by that function will be listed.
# The default value is: NO.

REFERENCES_RELATION = YES

# If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set
# to YES then the hyperlinks from functions in REFERENCES_RELATION and
# REFERENCED_BY_RELATION lists will link to the source code. Otherwise they will
# link to the documentation.
# The default value is: YES.

REFERENCES_LINK_SOURCE = YES

# If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the
# source code will show a tooltip with additional information such as prototype,
# brief description and links to the definition and documentation. Since this
# will make the HTML file larger and loading of large files a bit slower, you
# can opt to disable this feature.
# The default value is: YES.
# This tag requires that the tag SOURCE_BROWSER is set to YES.

SOURCE_TOOLTIPS = YES

# If the USE_HTAGS tag is set to YES then the references to source code will
# point to the HTML generated by the htags(1) tool instead of doxygen built-in
# source browser. The htags tool is part of GNU's global source tagging system
# (see http://www.gnu.org/software/global/global.html). You will need version
# 4.8.6 or higher.
#
# To use it do the following:
# - Install the latest version of global
# - Enable SOURCE_BROWSER and USE_HTAGS in the config file
# - Make sure the INPUT points to the root of the source tree
# - Run doxygen as normal
#
# Doxygen will invoke htags (and that will in turn invoke gtags), so these
# tools must be available from the command line (i.e. in the search path).
#
# The result: instead of the source browser generated by doxygen, the links to
# source code will now point to the output of htags.
# The default value is: NO.
# This tag requires that the tag SOURCE_BROWSER is set to YES.

USE_HTAGS = NO

# If the VERBATIM_HEADERS tag is set to YES then doxygen will generate a
# verbatim copy of the header file for each class for which an include is
# specified. Set to NO to disable this.
# See also: Section \class.
# The default value is: YES.

VERBATIM_HEADERS = YES

# If the CLANG_ASSISTED_PARSING tag is set to YES then doxygen will use the
# clang parser (see: http://clang.llvm.org/) for more accurate parsing at the
# cost of reduced performance. This can be particularly helpful with template
# rich C++ code for which doxygen's built-in parser lacks the necessary type
# information.
# Note: The availability of this option depends on whether or not doxygen was
# compiled with the --with-libclang option.
# The default value is: NO.

CLANG_ASSISTED_PARSING = NO

# If clang assisted parsing is enabled you can provide the compiler with command
# line options that you would normally use when invoking the compiler. Note that
# the include paths will already be set by doxygen for the files and directories
# specified with INPUT and INCLUDE_PATH.
# This tag requires that the tag CLANG_ASSISTED_PARSING is set to YES.
CLANG_OPTIONS =

#---------------------------------------------------------------------------
# Configuration options related to the alphabetical class index
#---------------------------------------------------------------------------

# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all
# compounds will be generated. Enable this if the project contains a lot of
# classes, structs, unions or interfaces.
# The default value is: YES.

ALPHABETICAL_INDEX = NO

# The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in
# which the alphabetical index list will be split.
# Minimum value: 1, maximum value: 20, default value: 5.
# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.

COLS_IN_ALPHA_INDEX = 5

# In case all classes in a project start with a common prefix, all classes will
# be put under the same header in the alphabetical index. The IGNORE_PREFIX tag
# can be used to specify a prefix (or a list of prefixes) that should be ignored
# while generating the index headers.
# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.

IGNORE_PREFIX =

#---------------------------------------------------------------------------
# Configuration options related to the HTML output
#---------------------------------------------------------------------------

# If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output.
# The default value is: YES.

GENERATE_HTML = YES

# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
# it.
# The default directory is: html.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_OUTPUT = html

# The HTML_FILE_EXTENSION tag can be used to specify the file extension for each
# generated HTML page (for example: .htm, .php, .asp).
# The default value is: .html.
# This tag requires that the tag GENERATE_HTML is set to YES.
HTML_FILE_EXTENSION = .html

# The HTML_HEADER tag can be used to specify a user-defined HTML header file for
# each generated HTML page. If the tag is left blank doxygen will generate a
# standard header.
#
# To get valid HTML, the header file must include any scripts and style sheets
# that doxygen needs, which depend on the configuration options used (e.g. the
# setting GENERATE_TREEVIEW). It is highly recommended to start with a default
# header using
# doxygen -w html new_header.html new_footer.html new_stylesheet.css YourConfigFile
# and then modify the file new_header.html. See also section "Doxygen usage"
# for information on how to generate the default header that doxygen normally
# uses.
# Note: The header is subject to change so you typically have to regenerate the
# default header when upgrading to a newer version of doxygen. For a description
# of the possible markers and block names see the documentation.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_HEADER =

# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each
# generated HTML page. If the tag is left blank doxygen will generate a standard
# footer. See HTML_HEADER for more information on how to generate a default
# footer and what special commands can be used inside the footer. See also
# section "Doxygen usage" for information on how to generate the default footer
# that doxygen normally uses.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_FOOTER =

# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style
# sheet that is used by each HTML page. It can be used to fine-tune the look of
# the HTML output. If left blank doxygen will generate a default style sheet.
# See also section "Doxygen usage" for information on how to generate the style
# sheet that doxygen normally uses.
# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as
# it is more robust and this tag (HTML_STYLESHEET) will in the future become
# obsolete.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_STYLESHEET =

# The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined
# cascading style sheets that are included after the standard style sheets
# created by doxygen. Using this option one can overrule certain style aspects.
# This is preferred over using HTML_STYLESHEET since it does not replace the
# standard style sheet and is therefore more robust against future updates.
# Doxygen will copy the style sheet files to the output directory.
# Note: The order of the extra style sheet files is of importance (e.g. the last
# style sheet in the list overrules the setting of the previous ones in the
# list). For an example see the documentation.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_EXTRA_STYLESHEET =

# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or
# other source files which should be copied to the HTML output directory. Note
# that these files will be copied to the base HTML output directory. Use the
# $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these
# files. In the HTML_STYLESHEET file, use the file name only. Also note that the
# files will be copied as-is; there are no commands or markers available.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_EXTRA_FILES =

# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen
# will adjust the colors in the style sheet and background images according to
# this color. Hue is specified as an angle on a colorwheel, see
# http://en.wikipedia.org/wiki/Hue for more information. For instance the value
# 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300
# purple, and 360 is red again.
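# A minimal sketch of the HTML_EXTRA_STYLESHEET tag described above (the file
# name custom.css is a hypothetical example, not part of this configuration):
#
# HTML_EXTRA_STYLESHEET = custom.css
#
# Rules in custom.css are loaded after doxygen's standard style sheet, so they
# override only the aspects they mention, which is why this tag is preferred
# over a full HTML_STYLESHEET replacement.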
# Minimum value: 0, maximum value: 359, default value: 220.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_COLORSTYLE_HUE = 220

# The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors
# in the HTML output. For a value of 0 the output will use grayscales only. A
# value of 255 will produce the most vivid colors.
# Minimum value: 0, maximum value: 255, default value: 100.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_COLORSTYLE_SAT = 100

# The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the
# luminance component of the colors in the HTML output. Values below 100
# gradually make the output lighter, whereas values above 100 make the output
# darker. The value divided by 100 is the actual gamma applied, so 80 represents
# a gamma of 0.8, the value 220 represents a gamma of 2.2, and 100 does not
# change the gamma.
# Minimum value: 40, maximum value: 240, default value: 80.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_COLORSTYLE_GAMMA = 80

# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML
# page will contain the date and time when the page was generated. Setting this
# to YES can help to show when doxygen was last run and thus if the
# documentation is up to date.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_TIMESTAMP = NO

# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML
# documentation will contain sections that can be hidden and shown after the
# page has loaded.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_DYNAMIC_SECTIONS = NO

# With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries
# shown in the various tree structured indices initially; the user can expand
# and collapse entries dynamically later on.
# Doxygen will expand the tree to
# such a level that at most the specified number of entries are visible (unless
# a fully collapsed tree already exceeds this amount). So setting the number of
# entries to 1 will produce a fully collapsed tree by default. 0 is a special
# value representing an infinite number of entries and will result in a fully
# expanded tree by default.
# Minimum value: 0, maximum value: 9999, default value: 100.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_INDEX_NUM_ENTRIES = 100

# If the GENERATE_DOCSET tag is set to YES, additional index files will be
# generated that can be used as input for Apple's Xcode 3 integrated development
# environment (see: http://developer.apple.com/tools/xcode/), introduced with
# OSX 10.5 (Leopard). To create a documentation set, doxygen will generate a
# Makefile in the HTML output directory. Running make will produce the docset in
# that directory and running make install will install the docset in
# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at
# startup. See http://developer.apple.com/tools/creatingdocsetswithdoxygen.html
# for more information.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_DOCSET = NO

# This tag determines the name of the docset feed. A documentation feed provides
# an umbrella under which multiple documentation sets from a single provider
# (such as a company or product suite) can be grouped.
# The default value is: Doxygen generated docs.
# This tag requires that the tag GENERATE_DOCSET is set to YES.

DOCSET_FEEDNAME = "Doxygen generated docs"

# This tag specifies a string that should uniquely identify the documentation
# set bundle. This should be a reverse domain-name style string, e.g.
# com.mycompany.MyDocSet. Doxygen will append .docset to the name.
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_DOCSET is set to YES.
DOCSET_BUNDLE_ID = org.doxygen.Project

# The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify
# the documentation publisher. This should be a reverse domain-name style
# string, e.g. com.mycompany.MyDocSet.documentation.
# The default value is: org.doxygen.Publisher.
# This tag requires that the tag GENERATE_DOCSET is set to YES.

DOCSET_PUBLISHER_ID = org.doxygen.Publisher

# The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher.
# The default value is: Publisher.
# This tag requires that the tag GENERATE_DOCSET is set to YES.

DOCSET_PUBLISHER_NAME = Publisher

# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three
# additional HTML index files: index.hhp, index.hhc, and index.hhk. The
# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop
# (see: http://www.microsoft.com/en-us/download/details.aspx?id=21138) on
# Windows.
#
# The HTML Help Workshop contains a compiler that can convert all HTML output
# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML
# files are now used as the Windows 98 help format, and will replace the old
# Windows help format (.hlp) on all Windows platforms in the future. Compressed
# HTML files also contain an index, a table of contents, and you can search for
# words in the documentation. The HTML workshop also contains a viewer for
# compressed HTML files.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_HTMLHELP = NO

# The CHM_FILE tag can be used to specify the file name of the resulting .chm
# file. You can add a path in front of the file if the result should not be
# written to the html output directory.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

CHM_FILE =

# The HHC_LOCATION tag can be used to specify the location (absolute path
# including file name) of the HTML help compiler (hhc.exe).
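# A minimal sketch of a .chm build using the HTML Help tags described here (the
# output file name and the hhc.exe path are hypothetical examples; adjust them
# to the local HTML Help Workshop installation):
#
# GENERATE_HTMLHELP = YES
# CHM_FILE = project.chm
# HHC_LOCATION = "C:/Program Files (x86)/HTML Help Workshop/hhc.exe"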
# If non-empty,
# doxygen will try to run the HTML help compiler on the generated index.hhp.
# The file has to be specified with full path.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

HHC_LOCATION =

# The GENERATE_CHI flag controls if a separate .chi index file is generated
# (YES) or that it should be included in the master .chm file (NO).
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

GENERATE_CHI = NO

# The CHM_INDEX_ENCODING is used to encode HtmlHelp index (hhk), content (hhc)
# and project file content.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

CHM_INDEX_ENCODING =

# The BINARY_TOC flag controls whether a binary table of contents is generated
# (YES) or a normal table of contents (NO) in the .chm file. Furthermore it
# enables the Previous and Next buttons.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

BINARY_TOC = NO

# The TOC_EXPAND flag can be set to YES to add extra items for group members to
# the table of contents of the HTML help documentation and to the tree view.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

TOC_EXPAND = NO

# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and
# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that
# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help
# (.qch) of the generated HTML documentation.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_QHP = NO

# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify
# the file name of the resulting .qch file. The path specified is relative to
# the HTML output folder.
# This tag requires that the tag GENERATE_QHP is set to YES.

QCH_FILE =

# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help
# Project output.
# For more information please see Qt Help Project / Namespace
# (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#namespace).
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_NAMESPACE = org.doxygen.Project

# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt
# Help Project output. For more information please see Qt Help Project /
# Virtual Folders
# (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#virtual-folders).
# The default value is: doc.
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_VIRTUAL_FOLDER = doc

# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom
# filter to add. For more information please see Qt Help Project / Custom
# Filters
# (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-filters).
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_CUST_FILTER_NAME =

# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the
# custom filter to add. For more information please see Qt Help Project / Custom
# Filters
# (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom-filters).
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_CUST_FILTER_ATTRS =

# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this
# project's filter section matches. Qt Help Project / Filter Attributes (see:
# http://qt-project.org/doc/qt-4.8/qthelpproject.html#filter-attributes).
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_SECT_FILTER_ATTRS =

# The QHG_LOCATION tag can be used to specify the location of Qt's
# qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the
# generated .qhp file.
# This tag requires that the tag GENERATE_QHP is set to YES.
QHG_LOCATION =

# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be
# generated that, together with the HTML files, form an Eclipse help plugin. To
# install this plugin and make it available under the help contents menu in
# Eclipse, the contents of the directory containing the HTML and XML files needs
# to be copied into the plugins directory of eclipse. The name of the directory
# within the plugins directory should be the same as the ECLIPSE_DOC_ID value.
# After copying Eclipse needs to be restarted before the help appears.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_ECLIPSEHELP = NO

# A unique identifier for the Eclipse help plugin. When installing the plugin
# the directory name containing the HTML and XML files should also have this
# name. Each documentation set should have its own identifier.
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES.

ECLIPSE_DOC_ID = org.doxygen.Project

# If you want full control over the layout of the generated HTML pages it might
# be necessary to disable the index and replace it with your own. The
# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top
# of each HTML page. A value of NO enables the index and the value YES disables
# it. Since the tabs in the index contain the same information as the navigation
# tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

DISABLE_INDEX = NO

# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index
# structure should be generated to display hierarchical information. If the tag
# value is set to YES, a side panel will be generated containing a tree-like
# index structure (just like the one that is generated for HTML Help).
# For this
# to work a browser that supports JavaScript, DHTML, CSS and frames is required
# (i.e. any modern browser). Windows users are probably better off using the
# HTML help feature. Via custom style sheets (see HTML_EXTRA_STYLESHEET) one can
# further fine-tune the look of the index. As an example, the default style
# sheet generated by doxygen has an example that shows how to put an image at
# the root of the tree instead of the PROJECT_NAME. Since the tree basically has
# the same information as the tab index, you could consider setting
# DISABLE_INDEX to YES when enabling this option.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_TREEVIEW = NO

# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that
# doxygen will group on one line in the generated HTML documentation.
#
# Note that a value of 0 will completely suppress the enum values from appearing
# in the overview section.
# Minimum value: 0, maximum value: 20, default value: 4.
# This tag requires that the tag GENERATE_HTML is set to YES.

ENUM_VALUES_PER_LINE = 4

# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used
# to set the initial width (in pixels) of the frame in which the tree is shown.
# Minimum value: 0, maximum value: 1500, default value: 250.
# This tag requires that the tag GENERATE_HTML is set to YES.

TREEVIEW_WIDTH = 250

# If the EXT_LINKS_IN_WINDOW option is set to YES, doxygen will open links to
# external symbols imported via tag files in a separate window.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

EXT_LINKS_IN_WINDOW = NO

# Use this tag to change the font size of LaTeX formulas included as images in
# the HTML documentation. When you change the font size after a successful
# doxygen run you need to manually remove any form_*.png images from the HTML
# output directory to force them to be regenerated.
# Minimum value: 8, maximum value: 50, default value: 10.
# This tag requires that the tag GENERATE_HTML is set to YES.

FORMULA_FONTSIZE = 10

# Use the FORMULA_TRANSPARENT tag to determine whether or not the images
# generated for formulas are transparent PNGs. Transparent PNGs are not
# supported properly for IE 6.0, but are supported on all modern browsers.
#
# Note that when changing this option you need to delete any form_*.png files in
# the HTML output directory before the changes have effect.
# The default value is: YES.
# This tag requires that the tag GENERATE_HTML is set to YES.

FORMULA_TRANSPARENT = YES

# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see
# http://www.mathjax.org) which uses client side Javascript for the rendering
# instead of using pre-rendered bitmaps. Use this if you do not have LaTeX
# installed or if you want the formulas to look prettier in the HTML output.
# When enabled you may also need to install MathJax separately and configure
# the path to it using the MATHJAX_RELPATH option.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

USE_MATHJAX = NO

# When MathJax is enabled you can set the default output format to be used for
# the MathJax output. See the MathJax site (see:
# http://docs.mathjax.org/en/latest/output.html) for more details.
# Possible values are: HTML-CSS (which is slower, but has the best
# compatibility), NativeMML (i.e. MathML) and SVG.
# The default value is: HTML-CSS.
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_FORMAT = HTML-CSS

# When MathJax is enabled you need to specify the location relative to the HTML
# output directory using the MATHJAX_RELPATH option. The destination directory
# should contain the MathJax.js script. For instance, if the mathjax directory
# is located at the same level as the HTML output directory, then
# MATHJAX_RELPATH should be ../mathjax.
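# A minimal sketch of the MathJax options described here, assuming a local copy
# of MathJax has been placed in a mathjax directory next to the HTML output
# directory (the relative path is an example, not part of this configuration):
#
# USE_MATHJAX = YES
# MATHJAX_FORMAT = HTML-CSS
# MATHJAX_RELPATH = ../mathjax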
# The default value points to the MathJax
# Content Delivery Network so you can quickly see the result without installing
# MathJax. However, it is strongly recommended to install a local copy of
# MathJax from http://www.mathjax.org before deployment.
# The default value is: http://cdn.mathjax.org/mathjax/latest.
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_RELPATH = http://www.mathjax.org/mathjax

# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax
# extension names that should be enabled during MathJax rendering. For example
# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_EXTENSIONS =

# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces
# of code that will be used on startup of the MathJax code. See the MathJax site
# (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an
# example see the documentation.
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_CODEFILE =

# When the SEARCHENGINE tag is enabled doxygen will generate a search box for
# the HTML output. The underlying search engine uses javascript and DHTML and
# should work on any modern browser. Note that when using HTML help
# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)
# there is already a search function so this one should typically be disabled.
# For large projects the javascript based search engine can be slow, then
# enabling SERVER_BASED_SEARCH may provide a better solution. It is possible to
# search using the keyboard; to jump to the search box use <access key> + S
# (what the <access key> is depends on the OS and browser, but it is typically
# <CTRL>, <ALT>/<cmd>