[
  {
    "path": ".github/ISSUE_TEMPLATE/but_report.yml",
    "content": "name: 🐛 Bug Report\ndescription: File a bug report\ntitle: '🐛 '\nlabels: [🐛 bug]\nbody:\n  - type: textarea\n    attributes:\n      label: What's happening?\n      description: Explain what you are trying to do and what happened instead. Be as precise as possible, I can't help you if I don't understand your issue.\n      placeholder: I wanted to take a picture, but the method failed with this error \"[capture/photo-not-enabled] Failed to take photo, photo is not enabled!\"\n    validations:\n      required: true\n  - type: textarea\n    attributes:\n      label: Reproduceable Code\n      description: >\n        Share a small reproduceable code snippet here (or the entire file if necessary).\n        Most importantly, share how you use the `<Camera>` component and what props you pass to it.\n        This will be automatically formatted into code, so no need for backticks.\n      render: tsx\n      placeholder: >\n        const faceDetectionOptions = useRef<FaceDetectionOptions>( {\n        } ).current\n\n        // ...\n\n        <Camera\n          style={StyleSheet.absoluteFill}\n          device={device}\n          isActive={true} \n          faceDetectionCallback={ handleFacesDetection }\n          faceDetectionOptions={ faceDetectionOptions }\n        />\n    validations:\n      required: true\n  - type: textarea\n    attributes:\n      label: Relevant log output\n      description: >\n        Paste any relevant **native log output** (Xcode Logs/Android Studio Logcat) here.\n        This will be automatically formatted into code, so no need for backticks.\n\n        * For iOS, run the project through Xcode and copy the logs from the log window.\n\n        * For Android, either open the project through Android Studio and paste the logs from the logcat window, or run `adb logcat` in terminal.\n      render: shell\n      placeholder: >\n        09:03:46 I ReactNativeJS: Running \"FaceDetectorExample\" with {\"rootTag\":11}\n\n        09:03:47 I ReactNativeJS: Re-rendering App. Camera: undefined | Microphone: undefined\n\n        09:03:47 I VisionCamera: Installing JSI bindings...\n\n        09:03:47 I VisionCamera: Finished installing JSI bindings!\n        ...\n    validations:\n      required: true\n  - type: input\n    attributes:\n      label: Device\n      description: >\n        Which device are you seeing this Problem on?\n        Mention the full name of the phone, as well as the operating system and version.\n        If you have tested this on multiple devices (ex. Android and iOS) then mention all of those devices (comma separated)\n      placeholder: ex. iPhone 11 Pro (iOS 14.3), Galaxy S24 (Android 16)\n    validations:\n      required: true\n  - type: input\n    attributes:\n      label: VisionCamera Version\n      description: Which version of react-native-vision-camera are you using?\n      placeholder: ex. 4.7.2\n    validations:\n      required: true\n  - type: input\n    attributes:\n      label: VisionCameraFaceDetector Version\n      description: Which version of react-native-vision-camera-face-detector are you using?\n      placeholder: ex. 
1.10.1\n    validations:\n      required: true\n  - type: dropdown\n    attributes:\n      label: Can you reproduce this issue in the VisionCameraFaceDetector Example app?\n      description: >\n        Try to build the example app (`example/`) and see if the issue is reproducible there.\n        **Note:** If you don't try this in the example app, I most likely won't help you with your issue.\n      options:\n        - I didn't try (⚠️ your issue might get ignored & closed if you don't try this)\n        - Yes, I can reproduce the same issue in the Example app\n        - No, I cannot reproduce the issue in the Example app\n      default: 0\n    validations:\n      required: true\n  - type: checkboxes\n    attributes:\n      label: Additional information\n      description: Please check all the boxes that apply\n      options:\n        - label: I am using Expo\n        - label: I have enabled Frame Processors (react-native-worklets-core)\n        - label: I have read the [VisionCamera Troubleshooting Guide](https://react-native-vision-camera.com/docs/guides/troubleshooting)\n          required: true\n        - label: I have read the entire [VisionCameraFaceDetector README](https://github.com/luicfrr/react-native-vision-camera-face-detector)\n          required: true\n        - label: I searched for [similar issues in this repository](https://github.com/luicfrr/react-native-vision-camera-face-detector/issues) and found none.\n          required: true\n        - label: I understand this is an open-source project and that I am not paying anything to use this package, so I do not expect an urgent fix, a custom feature, or a tutorial on how to do something.\n          required: true\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/config.yml",
    "content": "blank_issues_enabled: false\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature_request.md",
    "content": "---\nname: Feature request\nabout: Suggest an idea for this project\ntitle: \"[FEAT ✨] Replace your title here\"\nlabels: ''\nassignees: ''\n\n---\n<!-- Please DON'T PIN ME. I'm not a free support for you -->\n\n<!-- Issues without any log, screenshot, dependencies versions, etc... will be ignored and closed without any response. Please help me to help you 😉 -->\n\n<!-- Before opening issues make sure you have searched for an already closed issue with the same problem than yours. Also read releases notes as they have usefull information about versions. -->\n\n<!-- Please note that this is an open source project. I don't expect you to pay me to use it but take this into consideration before requiring an urgent fix and/or response -->\n\n**Is your feature request related to a problem? Please describe.**\nA clear and concise description of what the problem is. \n<!-- E.g: This feature is related to ... problem -->\n\n**Describe the solution you'd like**\nA clear and concise description of what you want to happen.\n<!-- E.g: The package should have the option to ... -->\n\n**Additional context**\nAdd any other context, logs or screenshots about the feature request here.\n"
  },
  {
    "path": ".gitignore",
    "content": "# OSX\n#\n.DS_Store\n\n# node.js\n#\nnode_modules/\nnpm-debug.log\nyarn-error.log\n\nlib/\n\n# Android/IntelliJ\n#\nbuild/\n.idea\n.gradle\nlocal.properties\n*.iml\n*.hprof\n.cxx/\n*.keystore\n!debug.keystore\n\n# Xcode\n#\nbuild/\n*.pbxuser\n!default.pbxuser\n*.mode1v3\n!default.mode1v3\n*.mode2v3\n!default.mode2v3\n*.perspectivev3\n!default.perspectivev3\nxcuserdata\n*.xccheckout\n*.moved-aside\nDerivedData\n*.hmap\n*.ipa\n*.xcuserstate\nios/.xcode.env.local\n\n# Example\n#\nexample/android\nexample/ios\nexample/.expo\n"
  },
  {
    "path": ".yarnrc",
    "content": "# Override Yarn command so we can automatically setup the repo on running `yarn`\n\nyarn-path \"scripts/bootstrap.js\"\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "## 📚 Introduction\n\n`react-native-vision-camera-face-detector` is a React Native library that integrates with the Vision Camera module to provide face detection functionality. It allows you to easily detect faces in real-time using device's front/back camera. Also supports static image face detections (thanks to @XChikuX).\n\nIs this package usefull to you?\n\n<a href=\"https://www.buymeacoffee.com/luicfrr\" target=\"_blank\"><img src=\"https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png\" alt=\"Buy Me A Coffee\" style=\"height: 60px !important;width: 217px !important;\" ></a>\n\nOr give it a ⭐ on [GitHub](https://github.com/luicfrr/react-native-vision-camera-face-detector).\n\n## 🏗️ Features\n\n- Real-time face detection using front and back camera\n- Adjustable face detection settings\n- Optional native side face bounds, contour and landmarks auto scaling\n- Can be combined with [Skia Frame Processor](https://react-native-vision-camera.com/docs/guides/skia-frame-processors)\n\n## 🧰 Installation\n\n```bash\nyarn add react-native-vision-camera-face-detector\n```\n\nThen you need to add `react-native-worklets-core` plugin to `babel.config.js`. More details [here](https://react-native-vision-camera.com/docs/guides/frame-processors#react-native-worklets-core).\n\n## 🪲 Knowing Bugs\n\nThere are open issues ([here](https://github.com/mrousavy/react-native-vision-camera/issues/3362), [here](https://github.com/mrousavy/react-native-vision-camera/issues/3034), and [here](https://github.com/mrousavy/react-native-vision-camera/issues/2951)) about a bug on Skia Frame Processor that may cause a Black Screen on some Android Devices.\nThis bug can be easily fixed with [this trick](https://github.com/mrousavy/react-native-vision-camera/issues/3362#issuecomment-2624299305) but it makes Frame drawings to be in incorrect orientation.\n\n## 💡 Usage\n\nRecommended way (see [Example App](https://github.com/luicfrr/react-native-vision-camera-face-detector/blob/main/example/src/index.tsx) for Skia usage):\n```jsx\nimport { \n  StyleSheet, \n  Text, \n  View \n} from 'react-native'\nimport { \n  useEffect, \n  useState,\n  useRef\n} from 'react'\nimport {\n  Frame,\n  useCameraDevice\n} from 'react-native-vision-camera'\nimport {\n  Face,\n  Camera,\n  FaceDetectionOptions\n} from 'react-native-vision-camera-face-detector'\n\nexport default function App() {\n  const faceDetectionOptions = useRef<FaceDetectionOptions>( {\n    // detection options\n  } ).current\n\n  const device = useCameraDevice('front')\n\n  useEffect(() => {\n    (async () => {\n      const status = await Camera.requestCameraPermission()\n      console.log({ status })\n    })()\n  }, [device])\n\n  function handleFacesDetection(\n    faces: Face[],\n    frame: Frame\n  ) { \n    console.log(\n      'faces', faces.length,\n      'frame', frame.toString()\n    )\n  }\n\n  return (\n    <View style={{ flex: 1 }}>\n      {!!device? 
<Camera\n        style={StyleSheet.absoluteFill}\n        device={device}\n        isActive={true}\n        faceDetectionCallback={ handleFacesDetection }\n        faceDetectionOptions={ faceDetectionOptions }\n      /> : <Text>\n        No Device\n      </Text>}\n    </View>\n  )\n}\n```\n\nOr use it following the [vision-camera docs](https://react-native-vision-camera.com/docs/guides/frame-processors-interacting):\n```jsx\nimport { \n  StyleSheet, \n  Text, \n  View\n} from 'react-native'\nimport { \n  useEffect, \n  useRef\n} from 'react'\nimport {\n  Camera,\n  runAsync,\n  useCameraDevice,\n  useFrameProcessor\n} from 'react-native-vision-camera'\nimport { \n  Face,\n  useFaceDetector,\n  FaceDetectionOptions\n} from 'react-native-vision-camera-face-detector'\nimport { Worklets } from 'react-native-worklets-core'\n\nexport default function App() {\n  const faceDetectionOptions = useRef<FaceDetectionOptions>( {\n    // detection options\n  } ).current\n\n  const device = useCameraDevice('front')\n  const { \n    detectFaces,\n    stopListeners\n  } = useFaceDetector( faceDetectionOptions )\n\n  useEffect( () => {\n    return () => {\n      // you must call `stopListeners` when the current component is unmounted\n      stopListeners()\n    }\n  }, [] )\n\n  useEffect(() => {\n    if(!device) {\n      // you must call `stopListeners` when the `Camera` component is unmounted\n      stopListeners()\n      return\n    }\n\n    (async () => {\n      const status = await Camera.requestCameraPermission()\n      console.log({ status })\n    })()\n  }, [device])\n\n  const handleDetectedFaces = Worklets.createRunOnJS( (\n    faces: Face[]\n  ) => { \n    console.log( 'faces detected', faces )\n  })\n\n  const frameProcessor = useFrameProcessor((frame) => {\n    'worklet'\n    runAsync(frame, () => {\n      'worklet'\n      const faces = detectFaces(frame)\n      // ... chain some asynchronous frame processor\n      // ... do something asynchronously with frame\n      handleDetectedFaces(faces)\n    })\n    // ... chain frame processors\n    // ... do something with frame\n  }, [handleDetectedFaces])\n\n  return (\n    <View style={{ flex: 1 }}>\n      {!!device ? 
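/* keep isActive set to true so the frame processor receives frames */\n      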
<Camera\n        style={StyleSheet.absoluteFill}\n        device={device}\n        isActive={true}\n        frameProcessor={frameProcessor}\n      /> : <Text>\n        No Device\n      </Text>}\n    </View>\n  )\n}\n```\n\nAs face detection is a heavy process, you should run it in an asynchronous thread so it can finish without blocking your camera preview.\nYou should read the `vision-camera` [docs](https://react-native-vision-camera.com/docs/guides/frame-processors-interacting#running-asynchronously) about this feature.\n\n## 🖼️ Static Image Face Detection\n\nYou can detect faces in static images without the camera (picking images from your gallery/files), or you can use it to detect faces in photos taken with the camera (see the [Example App](https://github.com/luicfrr/react-native-vision-camera-face-detector/blob/main/example/src/index.tsx)):\n\nSupported image sources: \n- Bundled assets (`require('path/to/file')`)\n- URI string (`file://`, `content://`, `http(s)://`)\n- Object (`{ uri: string }`)\n\n```ts\nimport { \n  detectFaces,\n  ImageFaceDetectionOptions\n} from 'react-native-vision-camera-face-detector'\n\nconst detectionOptions: ImageFaceDetectionOptions = {\n  // detection options\n}\n// Using a bundled asset\nconst faces1 = await detectFaces({\n  image: require('./assets/photo.jpg'),\n  options: detectionOptions\n})\n// Using a local file path or content URI (e.g. from an image picker)\nconst faces2 = await detectFaces({\n  image: 'file:///storage/emulated/0/Download/pic.jpg',\n  options: detectionOptions\n})\nconst faces3 = await detectFaces({\n  image: { uri: 'content://media/external/images/media/12345' },\n  options: detectionOptions\n})\n\nconsole.log({ \n  faces1, \n  faces2, \n  faces3 \n})\n```\n\n## Face Detection Options\n\n#### Common (Frame Processor and Static Images)\n| Option  | Description | Default | Options |\n| ------------- | ------------- | ------------- | ------------- |\n| `performanceMode` | Favor speed or accuracy when detecting faces. | `fast` | `fast`, `accurate` |\n| `landmarkMode` | Whether to attempt to identify facial `landmarks`: eyes, ears, nose, cheeks, mouth, and so on. | `none` | `none`, `all` |\n| `contourMode` | Whether to detect the contours of facial features. Contours are detected for only the most prominent face in an image. | `none` | `none`, `all` |\n| `classificationMode` | Whether or not to classify faces into categories such as 'smiling' and 'eyes open'. | `none` | `none`, `all` |\n| `minFaceSize` | Sets the smallest desired face size, expressed as the ratio of the width of the head to the width of the image. | `0.15` | `number` |\n| `trackingEnabled` | Whether or not to assign faces an ID, which can be used to track faces across images. Note that when contour detection is enabled, only one face is detected, so face tracking doesn't produce useful results. For this reason, and to improve detection speed, don't enable both contour detection and face tracking. | `false` | `boolean` |\n\n\n#### Frame Processor\n| Option  | Description | Default | Options |\n| ------------- | ------------- | ------------- | ------------- |\n| `cameraFacing` | Current active camera | `front` | `front`, `back` |\n| `autoMode` | Whether to auto-scale face bounds, contours and landmarks and handle rotation on the native side. If this option is disabled, all detection results are relative to frame coordinates, not to screen/preview coordinates. When enabled, you must also pass `windowWidth` and `windowHeight` so the native side can compute the scale factors. You shouldn't use this option if you want to draw on screen using a `Skia Frame Processor`. 
See [this](https://github.com/luicfrr/react-native-vision-camera-face-detector/issues/30#issuecomment-2058805546) and [this](https://github.com/luicfrr/react-native-vision-camera-face-detector/issues/35) for more details. | `false` | `boolean` |\n| `windowWidth` | * Required if you want to use `autoMode`. You must implement your own logic to get the screen size (with or without the status bar, etc.). | `1.0` | `number` |\n| `windowHeight` | * Required if you want to use `autoMode`. You must implement your own logic to get the screen size (with or without the status bar, etc.). | `1.0` | `number` |\n\n#### Static Images\n| Option  | Description | Default | Options |\n| ------------- | ------------- | ------------- | ------------- |\n| `image` | Image source | - | `number`, `string`, `{ uri: string }` |\n\n## 🔧 Troubleshooting\n\nHere are some common issues you may run into when using this package and how you can try to fix them:\n\n- `Regular javascript function cannot be shared. Try decorating the function with the 'worklet' keyword...`:\n  - If you're using `react-native-reanimated`, maybe you're missing [this](https://github.com/mrousavy/react-native-vision-camera/issues/1791#issuecomment-1892130378) step.\n- `Execution failed for task ':react-native-vision-camera-face-detector:compileDebugKotlin'...`:\n  - This error is probably related to the Gradle cache. Try [this](https://github.com/luicfrr/react-native-vision-camera-face-detector/issues/71#issuecomment-2186614831) solution first.\n  - Also check [this](https://github.com/luicfrr/react-native-vision-camera-face-detector/issues/90#issuecomment-2358160166) comment.\n\nIf you find other errors while using this package, you're welcome to open a new issue or create a PR with the fix.\n\n## 👷 Built With\n\n- [React Native](https://reactnative.dev/)\n- [Google MLKit](https://developers.google.com/ml-kit)\n- [Vision Camera](https://react-native-vision-camera.com/)\n\n## 🔎 About\n\nThis package was tested using the following:\n\n- `react-native`: `0.79.5` (new arch disabled)\n- `react-native-vision-camera`: `4.7.2`\n- `react-native-worklets-core`: `1.6.2`\n- `@shopify/react-native-skia`: `2.2.19`\n- `react-native-reanimated`: `~3.17.4`\n- `@react-native-firebase`: `^22.2.1`\n- `expo`: `^53`\n\nMinimum OS version:\n\n- `Android`: `SDK 26` (Android 8)\n- `iOS`: `15.5`\n\nMake sure you are using the tested versions and that your device meets the minimum OS version before opening issues.\n\n## 📚 Author\n\nMade with ❤️ by [luicfrr](https://github.com/luicfrr)\n"
  },
  {
    "path": "VisionCameraFaceDetector.podspec",
    "content": "require \"json\"\n\npackage = JSON.parse(File.read(File.join(__dir__, \"package.json\")))\n\nPod::Spec.new do |s|\n  s.name         = \"VisionCameraFaceDetector\"\n  s.version      = package[\"version\"]\n  s.summary      = package[\"description\"]\n  s.homepage     = package[\"homepage\"]\n  s.license      = package[\"license\"]\n  s.authors      = package[\"author\"]\n\n  s.platforms    = { :ios => \"15.5\" } # 15.5 is the minimum version for GoogleMLKit/FaceDetection 7.0.0\n  s.source       = { :git => \"https://github.com/luicfrr/react-native-vision-camera-face-detector.git\", :tag => \"#{s.version}\" }\n\n  s.source_files = \"ios/**/*.{h,m,mm,swift}\"\n\n  s.dependency \"React-Core\"\n  s.dependency \"GoogleMLKit/FaceDetection\" , \"8.0.0\"\n  s.dependency \"VisionCamera\"\nend\n"
  },
  {
    "path": "android/build.gradle",
    "content": "def safeExtGet(prop, fallback) {\n    rootProject.ext.has(prop) ? rootProject.ext.get(prop) : fallback\n}\ndef kotlinVersion = safeExtGet(\"VisionCameraFaceDetector_kotlinVersion\", \"2.1.20\")\n\napply plugin: \"com.android.library\"\napply plugin: \"kotlin-android\"\n\nbuildscript {\n    repositories {\n        google()\n        mavenCentral()\n    }\n\n    dependencies {\n        classpath \"com.android.tools.build:gradle:8.13.0\"\n        classpath \"org.jetbrains.kotlin:kotlin-gradle-plugin:${kotlinVersion}\"\n    }\n}\n\nandroid {\n    buildToolsVersion = safeExtGet(\"VisionCameraFaceDetector_buildToolsVersion\", \"35.0.0\")\n    ndkVersion safeExtGet(\"VisionCameraFaceDetector_ndkVersion\", \"27.3.13750724\")\n    defaultConfig {\n        minSdkVersion safeExtGet(\"VisionCameraFaceDetector_minSdkVersion\", 26)\n        compileSdkVersion safeExtGet(\"VisionCameraFaceDetector_compileSdkVersion\", 35)\n        targetSdkVersion safeExtGet(\"VisionCameraFaceDetector_targetSdkVersion\", 35)\n        versionCode 1\n        versionName \"1.0\"\n    }\n\n    buildTypes {\n        release {\n            minifyEnabled false\n        }\n    }\n    lintOptions {\n        disable \"GradleCompatible\"\n    }\n}\n\nrepositories {\n    mavenLocal()\n    maven {\n        // All of React Native (JS, Obj-C sources, Android binaries) is installed from npm\n        url(\"$rootDir/../node_modules/react-native/android\")\n    }\n    google()\n    mavenCentral()\n}\n\ndependencies {\n    //noinspection GradleDynamicVersion\n    implementation \"com.facebook.react:react-native:+\"  // From node_modules\n    api project(\":react-native-vision-camera\")\n    implementation \"androidx.annotation:annotation:1.9.1\"\n    implementation \"androidx.camera:camera-core:1.5.1\"\n    implementation \"com.google.mlkit:face-detection:16.1.7\"\n}\n"
  },
  {
    "path": "android/gradle/wrapper/gradle-wrapper.properties",
    "content": "distributionBase=GRADLE_USER_HOME\ndistributionPath=wrapper/dists\ndistributionUrl=https\\://services.gradle.org/distributions/gradle-8.13-bin.zip\nnetworkTimeout=10000\nvalidateDistributionUrl=true\nzipStoreBase=GRADLE_USER_HOME\nzipStorePath=wrapper/dists\n"
  },
  {
    "path": "android/gradlew",
    "content": "#!/usr/bin/env sh\n\n#\n# Copyright 2015 the original author or authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#      https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n##############################################################################\n##\n##  Gradle start up script for UN*X\n##\n##############################################################################\n\n# Attempt to set APP_HOME\n# Resolve links: $0 may be a link\nPRG=\"$0\"\n# Need this for relative symlinks.\nwhile [ -h \"$PRG\" ] ; do\n    ls=`ls -ld \"$PRG\"`\n    link=`expr \"$ls\" : '.*-> \\(.*\\)$'`\n    if expr \"$link\" : '/.*' > /dev/null; then\n        PRG=\"$link\"\n    else\n        PRG=`dirname \"$PRG\"`\"/$link\"\n    fi\ndone\nSAVED=\"`pwd`\"\ncd \"`dirname \\\"$PRG\\\"`/\" >/dev/null\nAPP_HOME=\"`pwd -P`\"\ncd \"$SAVED\" >/dev/null\n\nAPP_NAME=\"Gradle\"\nAPP_BASE_NAME=`basename \"$0\"`\n\n# Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.\nDEFAULT_JVM_OPTS='\"-Xmx64m\" \"-Xms64m\"'\n\n# Use the maximum available, or set MAX_FD != -1 to use that value.\nMAX_FD=\"maximum\"\n\nwarn () {\n    echo \"$*\"\n}\n\ndie () {\n    echo\n    echo \"$*\"\n    echo\n    exit 1\n}\n\n# OS specific support (must be 'true' or 'false').\ncygwin=false\nmsys=false\ndarwin=false\nnonstop=false\ncase \"`uname`\" in\n  CYGWIN* )\n    cygwin=true\n    ;;\n  Darwin* )\n    darwin=true\n    ;;\n  MINGW* )\n    msys=true\n    ;;\n  NONSTOP* )\n    nonstop=true\n    ;;\nesac\n\nCLASSPATH=$APP_HOME/gradle/wrapper/gradle-wrapper.jar\n\n\n# Determine the Java command to use to start the JVM.\nif [ -n \"$JAVA_HOME\" ] ; then\n    if [ -x \"$JAVA_HOME/jre/sh/java\" ] ; then\n        # IBM's JDK on AIX uses strange locations for the executables\n        JAVACMD=\"$JAVA_HOME/jre/sh/java\"\n    else\n        JAVACMD=\"$JAVA_HOME/bin/java\"\n    fi\n    if [ ! -x \"$JAVACMD\" ] ; then\n        die \"ERROR: JAVA_HOME is set to an invalid directory: $JAVA_HOME\n\nPlease set the JAVA_HOME variable in your environment to match the\nlocation of your Java installation.\"\n    fi\nelse\n    JAVACMD=\"java\"\n    which java >/dev/null 2>&1 || die \"ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.\n\nPlease set the JAVA_HOME variable in your environment to match the\nlocation of your Java installation.\"\nfi\n\n# Increase the maximum file descriptors if we can.\nif [ \"$cygwin\" = \"false\" -a \"$darwin\" = \"false\" -a \"$nonstop\" = \"false\" ] ; then\n    MAX_FD_LIMIT=`ulimit -H -n`\n    if [ $? -eq 0 ] ; then\n        if [ \"$MAX_FD\" = \"maximum\" -o \"$MAX_FD\" = \"max\" ] ; then\n            MAX_FD=\"$MAX_FD_LIMIT\"\n        fi\n        ulimit -n $MAX_FD\n        if [ $? 
-ne 0 ] ; then\n            warn \"Could not set maximum file descriptor limit: $MAX_FD\"\n        fi\n    else\n        warn \"Could not query maximum file descriptor limit: $MAX_FD_LIMIT\"\n    fi\nfi\n\n# For Darwin, add options to specify how the application appears in the dock\nif $darwin; then\n    GRADLE_OPTS=\"$GRADLE_OPTS \\\"-Xdock:name=$APP_NAME\\\" \\\"-Xdock:icon=$APP_HOME/media/gradle.icns\\\"\"\nfi\n\n# For Cygwin or MSYS, switch paths to Windows format before running java\nif [ \"$cygwin\" = \"true\" -o \"$msys\" = \"true\" ] ; then\n    APP_HOME=`cygpath --path --mixed \"$APP_HOME\"`\n    CLASSPATH=`cygpath --path --mixed \"$CLASSPATH\"`\n\n    JAVACMD=`cygpath --unix \"$JAVACMD\"`\n\n    # We build the pattern for arguments to be converted via cygpath\n    ROOTDIRSRAW=`find -L / -maxdepth 1 -mindepth 1 -type d 2>/dev/null`\n    SEP=\"\"\n    for dir in $ROOTDIRSRAW ; do\n        ROOTDIRS=\"$ROOTDIRS$SEP$dir\"\n        SEP=\"|\"\n    done\n    OURCYGPATTERN=\"(^($ROOTDIRS))\"\n    # Add a user-defined pattern to the cygpath arguments\n    if [ \"$GRADLE_CYGPATTERN\" != \"\" ] ; then\n        OURCYGPATTERN=\"$OURCYGPATTERN|($GRADLE_CYGPATTERN)\"\n    fi\n    # Now convert the arguments - kludge to limit ourselves to /bin/sh\n    i=0\n    for arg in \"$@\" ; do\n        CHECK=`echo \"$arg\"|egrep -c \"$OURCYGPATTERN\" -`\n        CHECK2=`echo \"$arg\"|egrep -c \"^-\"`                                 ### Determine if an option\n\n        if [ $CHECK -ne 0 ] && [ $CHECK2 -eq 0 ] ; then                    ### Added a condition\n            eval `echo args$i`=`cygpath --path --ignore --mixed \"$arg\"`\n        else\n            eval `echo args$i`=\"\\\"$arg\\\"\"\n        fi\n        i=`expr $i + 1`\n    done\n    case $i in\n        0) set -- ;;\n        1) set -- \"$args0\" ;;\n        2) set -- \"$args0\" \"$args1\" ;;\n        3) set -- \"$args0\" \"$args1\" \"$args2\" ;;\n        4) set -- \"$args0\" \"$args1\" \"$args2\" \"$args3\" ;;\n        5) set -- \"$args0\" \"$args1\" \"$args2\" \"$args3\" \"$args4\" ;;\n        6) set -- \"$args0\" \"$args1\" \"$args2\" \"$args3\" \"$args4\" \"$args5\" ;;\n        7) set -- \"$args0\" \"$args1\" \"$args2\" \"$args3\" \"$args4\" \"$args5\" \"$args6\" ;;\n        8) set -- \"$args0\" \"$args1\" \"$args2\" \"$args3\" \"$args4\" \"$args5\" \"$args6\" \"$args7\" ;;\n        9) set -- \"$args0\" \"$args1\" \"$args2\" \"$args3\" \"$args4\" \"$args5\" \"$args6\" \"$args7\" \"$args8\" ;;\n    esac\nfi\n\n# Escape application args\nsave () {\n    for i do printf %s\\\\n \"$i\" | sed \"s/'/'\\\\\\\\''/g;1s/^/'/;\\$s/\\$/' \\\\\\\\/\" ; done\n    echo \" \"\n}\nAPP_ARGS=`save \"$@\"`\n\n# Collect all arguments for the java command, following the shell quoting and substitution rules\neval set -- $DEFAULT_JVM_OPTS $JAVA_OPTS $GRADLE_OPTS \"\\\"-Dorg.gradle.appname=$APP_BASE_NAME\\\"\" -classpath \"\\\"$CLASSPATH\\\"\" org.gradle.wrapper.GradleWrapperMain \"$APP_ARGS\"\n\nexec \"$JAVACMD\" \"$@\"\n"
  },
  {
    "path": "android/gradlew.bat",
    "content": "@rem\r\n@rem Copyright 2015 the original author or authors.\r\n@rem\r\n@rem Licensed under the Apache License, Version 2.0 (the \"License\");\r\n@rem you may not use this file except in compliance with the License.\r\n@rem You may obtain a copy of the License at\r\n@rem\r\n@rem      https://www.apache.org/licenses/LICENSE-2.0\r\n@rem\r\n@rem Unless required by applicable law or agreed to in writing, software\r\n@rem distributed under the License is distributed on an \"AS IS\" BASIS,\r\n@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r\n@rem See the License for the specific language governing permissions and\r\n@rem limitations under the License.\r\n@rem\r\n\r\n@if \"%DEBUG%\" == \"\" @echo off\r\n@rem ##########################################################################\r\n@rem\r\n@rem  Gradle startup script for Windows\r\n@rem\r\n@rem ##########################################################################\r\n\r\n@rem Set local scope for the variables with windows NT shell\r\nif \"%OS%\"==\"Windows_NT\" setlocal\r\n\r\nset DIRNAME=%~dp0\r\nif \"%DIRNAME%\" == \"\" set DIRNAME=.\r\nset APP_BASE_NAME=%~n0\r\nset APP_HOME=%DIRNAME%\r\n\r\n@rem Resolve any \".\" and \"..\" in APP_HOME to make it shorter.\r\nfor %%i in (\"%APP_HOME%\") do set APP_HOME=%%~fi\r\n\r\n@rem Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.\r\nset DEFAULT_JVM_OPTS=\"-Xmx64m\" \"-Xms64m\"\r\n\r\n@rem Find java.exe\r\nif defined JAVA_HOME goto findJavaFromJavaHome\r\n\r\nset JAVA_EXE=java.exe\r\n%JAVA_EXE% -version >NUL 2>&1\r\nif \"%ERRORLEVEL%\" == \"0\" goto execute\r\n\r\necho.\r\necho ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.\r\necho.\r\necho Please set the JAVA_HOME variable in your environment to match the\r\necho location of your Java installation.\r\n\r\ngoto fail\r\n\r\n:findJavaFromJavaHome\r\nset JAVA_HOME=%JAVA_HOME:\"=%\r\nset JAVA_EXE=%JAVA_HOME%/bin/java.exe\r\n\r\nif exist \"%JAVA_EXE%\" goto execute\r\n\r\necho.\r\necho ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME%\r\necho.\r\necho Please set the JAVA_HOME variable in your environment to match the\r\necho location of your Java installation.\r\n\r\ngoto fail\r\n\r\n:execute\r\n@rem Setup the command line\r\n\r\nset CLASSPATH=%APP_HOME%\\gradle\\wrapper\\gradle-wrapper.jar\r\n\r\n\r\n@rem Execute Gradle\r\n\"%JAVA_EXE%\" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %GRADLE_OPTS% \"-Dorg.gradle.appname=%APP_BASE_NAME%\" -classpath \"%CLASSPATH%\" org.gradle.wrapper.GradleWrapperMain %*\r\n\r\n:end\r\n@rem End local scope for the variables with windows NT shell\r\nif \"%ERRORLEVEL%\"==\"0\" goto mainEnd\r\n\r\n:fail\r\nrem Set variable GRADLE_EXIT_CONSOLE if you need the _script_ return code instead of\r\nrem the _cmd.exe /c_ return code!\r\nif  not \"\" == \"%GRADLE_EXIT_CONSOLE%\" exit 1\r\nexit /b 1\r\n\r\n:mainEnd\r\nif \"%OS%\"==\"Windows_NT\" endlocal\r\n\r\n:omega\r\n"
  },
  {
    "path": "android/src/main/AndroidManifest.xml",
    "content": "<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\"\n          package=\"com.visioncamerafacedetector\">\n\n</manifest>\n"
  },
  {
    "path": "android/src/main/java/com/visioncamerafacedetector/FaceDetectorCommon.kt",
    "content": "package com.visioncamerafacedetector\n\nimport android.graphics.Rect\nimport android.view.Surface\nimport com.mrousavy.camera.core.types.Position\nimport com.google.mlkit.vision.face.Face\nimport com.google.mlkit.vision.face.FaceLandmark\nimport com.google.mlkit.vision.face.FaceContour\nimport com.google.mlkit.vision.face.FaceDetection\nimport com.google.mlkit.vision.face.FaceDetector\nimport com.google.mlkit.vision.face.FaceDetectorOptions\n\ndata class FaceDetectorResult(\n  val runContours: Boolean = false,\n  val runClassifications: Boolean = false,\n  val runLandmarks: Boolean = false,\n  val trackingEnabled: Boolean = false,\n  val faceDetector: FaceDetector\n)\n\nclass FaceDetectorCommon() {\n  private fun processBoundingBox(\n    boundingBox: Rect,\n    sourceWidth: Double = 0.0,\n    sourceHeight: Double = 0.0,\n    scaleX: Double = 1.0,\n    scaleY: Double = 1.0,\n    autoMode: Boolean = false,\n    cameraFacing: Position = Position.FRONT,\n    orientation: Int? = Surface.ROTATION_0\n  ): Map<String, Any>  {\n    val bounds: MutableMap<String, Any> = HashMap()\n    val width = boundingBox.width().toDouble() * scaleX\n    val height = boundingBox.height().toDouble() * scaleY\n    val x = boundingBox.left.toDouble()\n    val y = boundingBox.top.toDouble()\n\n    bounds[\"width\"] = width\n    bounds[\"height\"] = height\n    bounds[\"x\"] = x * scaleX\n    bounds[\"y\"] = y * scaleY\n\n    if(!autoMode) return bounds\n\n    // using front camera\n    if(cameraFacing == Position.FRONT) {\n      when (orientation) {\n        // device is portrait\n        Surface.ROTATION_0 -> {\n          bounds[\"x\"] = ((-x * scaleX) + sourceWidth * scaleX) - width\n          bounds[\"y\"] = y * scaleY\n        }\n        // device is landscape right\n        Surface.ROTATION_270 -> {\n          bounds[\"x\"] = y * scaleX\n          bounds[\"y\"] = x * scaleY\n        }\n        // device is upside down\n        Surface.ROTATION_180 -> {\n          bounds[\"x\"] = x * scaleX\n          bounds[\"y\"] = ((-y * scaleY) + sourceHeight * scaleY) - height\n        }\n        // device is landscape left\n        Surface.ROTATION_90 -> {\n          bounds[\"x\"] = ((-y * scaleX) + sourceWidth * scaleX) - width\n          bounds[\"y\"] = ((-x * scaleY) + sourceHeight * scaleY) - height\n        }\n      }\n      return bounds\n    }\n\n    // using back camera\n    when (orientation) {\n      // device is portrait\n      Surface.ROTATION_0 -> {\n        bounds[\"x\"] = x * scaleX\n        bounds[\"y\"] = y * scaleY\n      }\n      // device is landscape right\n      Surface.ROTATION_270 -> {\n        bounds[\"x\"] = y * scaleX\n        bounds[\"y\"] = ((-x * scaleY) + sourceHeight * scaleY) - height\n      }\n      // device is upside down\n      Surface.ROTATION_180 -> {\n        bounds[\"x\"] =((-x * scaleX) + sourceWidth * scaleX) - width\n        bounds[\"y\"] = ((-y * scaleY) + sourceHeight * scaleY) - height\n      }\n      // device is landscape left\n      Surface.ROTATION_90 -> {\n        bounds[\"x\"] = ((-y * scaleX) + sourceWidth * scaleX) - width\n        bounds[\"y\"] = x * scaleY\n      }\n    }\n    return bounds\n  }\n\n  private fun processLandmarks(\n    face: Face,\n    sourceWidth: Double = 0.0,\n    sourceHeight: Double = 0.0,\n    scaleX: Double = 1.0,\n    scaleY: Double = 1.0,\n    autoMode: Boolean = false,\n    cameraFacing: Position = Position.FRONT,\n    orientation: Int? 
= Surface.ROTATION_0\n  ): Map<String, Any> {\n    val faceLandmarksTypes = intArrayOf(\n      FaceLandmark.LEFT_CHEEK,\n      FaceLandmark.LEFT_EAR,\n      FaceLandmark.LEFT_EYE,\n      FaceLandmark.MOUTH_BOTTOM,\n      FaceLandmark.MOUTH_LEFT,\n      FaceLandmark.MOUTH_RIGHT,\n      FaceLandmark.NOSE_BASE,\n      FaceLandmark.RIGHT_CHEEK,\n      FaceLandmark.RIGHT_EAR,\n      FaceLandmark.RIGHT_EYE\n    )\n    val faceLandmarksTypesStrings = arrayOf(\n      \"LEFT_CHEEK\",\n      \"LEFT_EAR\",\n      \"LEFT_EYE\",\n      \"MOUTH_BOTTOM\",\n      \"MOUTH_LEFT\",\n      \"MOUTH_RIGHT\",\n      \"NOSE_BASE\",\n      \"RIGHT_CHEEK\",\n      \"RIGHT_EAR\",\n      \"RIGHT_EYE\"\n    )\n    val faceLandmarksTypesMap: MutableMap<String, Any> = HashMap()\n    for (i in faceLandmarksTypesStrings.indices) {\n      val landmark = face.getLandmark(faceLandmarksTypes[i])\n      val landmarkName = faceLandmarksTypesStrings[i]\n\n      if (landmark == null) continue\n\n      val point = landmark.position\n      val currentPointsMap: MutableMap<String, Double> = HashMap()\n      val x = point.x.toDouble()\n      val y = point.y.toDouble()\n      currentPointsMap[\"x\"] = x * scaleX\n      currentPointsMap[\"y\"] = y * scaleY\n\n      if(autoMode) {\n        if(cameraFacing == Position.FRONT) {\n          // using front camera\n          when (orientation) {\n            // device is portrait\n            Surface.ROTATION_0 -> {\n              currentPointsMap[\"x\"] = ((-x * scaleX) + sourceWidth * scaleX)\n              currentPointsMap[\"y\"] = y * scaleY\n            }\n            // device is landscape right\n            Surface.ROTATION_270 -> {\n              currentPointsMap[\"x\"] = y * scaleX\n              currentPointsMap[\"y\"] = x * scaleY\n            }\n            // device is upside down\n            Surface.ROTATION_180 -> {\n              currentPointsMap[\"x\"] = x * scaleX\n              currentPointsMap[\"y\"] = ((-y * scaleY) + sourceHeight * scaleY)\n            }\n            // device is landscape left\n            Surface.ROTATION_90 -> {\n              currentPointsMap[\"x\"] = ((-y * scaleX) + sourceWidth * scaleX)\n              currentPointsMap[\"y\"] = ((-x * scaleY) + sourceHeight * scaleY)\n            }\n            else -> {\n              currentPointsMap[\"x\"] = x * scaleX\n              currentPointsMap[\"y\"] = y * scaleY\n            }\n          }\n        } else {\n          // using back camera\n          when (orientation) {\n            // device is portrait\n            Surface.ROTATION_0 -> {\n              currentPointsMap[\"x\"] = x * scaleX\n              currentPointsMap[\"y\"] = y * scaleY\n            }\n            // device is landscape right\n            Surface.ROTATION_270 -> {\n              currentPointsMap[\"x\"] = y * scaleX\n              currentPointsMap[\"y\"] = ((-x * scaleY) + sourceHeight * scaleY)\n            }\n            // device is upside down\n            Surface.ROTATION_180 -> {\n              currentPointsMap[\"x\"] =((-x * scaleX) + sourceWidth * scaleX)\n              currentPointsMap[\"y\"] = ((-y * scaleY) + sourceHeight * scaleY)\n            }\n            // device is landscape left\n            Surface.ROTATION_90 -> {\n              currentPointsMap[\"x\"] = ((-y * scaleX) + sourceWidth * scaleX)\n              currentPointsMap[\"y\"] = x * scaleY\n            }\n            else -> {\n              currentPointsMap[\"x\"] = x * scaleX\n              currentPointsMap[\"y\"] = y * scaleY\n            }\n          
}\n        }\n      } \n\n      faceLandmarksTypesMap[landmarkName] = currentPointsMap\n    }\n\n    return faceLandmarksTypesMap\n  }\n\n  private fun processFaceContours(\n    face: Face,\n    sourceWidth: Double = 0.0,\n    sourceHeight: Double = 0.0,\n    scaleX: Double = 1.0,\n    scaleY: Double = 1.0,\n    autoMode: Boolean = false,\n    cameraFacing: Position = Position.FRONT,\n    orientation: Int? = Surface.ROTATION_0\n  ): Map<String, Any> {\n    val faceContoursTypes = intArrayOf(\n      FaceContour.FACE,\n      FaceContour.LEFT_CHEEK,\n      FaceContour.LEFT_EYE,\n      FaceContour.LEFT_EYEBROW_BOTTOM,\n      FaceContour.LEFT_EYEBROW_TOP,\n      FaceContour.LOWER_LIP_BOTTOM,\n      FaceContour.LOWER_LIP_TOP,\n      FaceContour.NOSE_BOTTOM,\n      FaceContour.NOSE_BRIDGE,\n      FaceContour.RIGHT_CHEEK,\n      FaceContour.RIGHT_EYE,\n      FaceContour.RIGHT_EYEBROW_BOTTOM,\n      FaceContour.RIGHT_EYEBROW_TOP,\n      FaceContour.UPPER_LIP_BOTTOM,\n      FaceContour.UPPER_LIP_TOP\n    )\n    val faceContoursTypesStrings = arrayOf(\n      \"FACE\",\n      \"LEFT_CHEEK\",\n      \"LEFT_EYE\",\n      \"LEFT_EYEBROW_BOTTOM\",\n      \"LEFT_EYEBROW_TOP\",\n      \"LOWER_LIP_BOTTOM\",\n      \"LOWER_LIP_TOP\",\n      \"NOSE_BOTTOM\",\n      \"NOSE_BRIDGE\",\n      \"RIGHT_CHEEK\",\n      \"RIGHT_EYE\",\n      \"RIGHT_EYEBROW_BOTTOM\",\n      \"RIGHT_EYEBROW_TOP\",\n      \"UPPER_LIP_BOTTOM\",\n      \"UPPER_LIP_TOP\"\n    )\n    val faceContoursTypesMap: MutableMap<String, Any> = HashMap()\n    for (i in faceContoursTypesStrings.indices) {\n      val contour = face.getContour(faceContoursTypes[i])\n      val contourName = faceContoursTypesStrings[i]\n\n      if (contour == null) continue\n\n      val points = contour.points\n      val pointsMap: MutableList<Map<String, Double>> = mutableListOf()\n      for (j in points.indices) {\n        val currentPointsMap: MutableMap<String, Double> = HashMap()\n        val x = points[j].x.toDouble()\n        val y = points[j].y.toDouble()\n        currentPointsMap[\"x\"] = points[j].x.toDouble() * scaleX\n        currentPointsMap[\"y\"] = points[j].y.toDouble() * scaleY\n\n        if(autoMode) {\n          if(cameraFacing == Position.FRONT) {\n            // using front camera\n            when (orientation) {\n              // device is portrait\n              Surface.ROTATION_0 -> {\n                currentPointsMap[\"x\"] = ((-x * scaleX) + sourceWidth * scaleX)\n                currentPointsMap[\"y\"] = y * scaleY\n              }\n              // device is landscape right\n              Surface.ROTATION_270 -> {\n                currentPointsMap[\"x\"] = y * scaleX\n                currentPointsMap[\"y\"] = x * scaleY\n              }\n              // device is upside down\n              Surface.ROTATION_180 -> {\n                currentPointsMap[\"x\"] = x * scaleX\n                currentPointsMap[\"y\"] = ((-y * scaleY) + sourceHeight * scaleY)\n              }\n              // device is landscape left\n              Surface.ROTATION_90 -> {\n                currentPointsMap[\"x\"] = ((-y * scaleX) + sourceWidth * scaleX)\n                currentPointsMap[\"y\"] = ((-x * scaleY) + sourceHeight * scaleY)\n              }\n              else -> {\n                currentPointsMap[\"x\"] = x * scaleX\n                currentPointsMap[\"y\"] = y * scaleY\n              }\n            }\n          } else {\n            // using back camera\n            when (orientation) {\n              // device is portrait\n              
Surface.ROTATION_0 -> {\n                currentPointsMap[\"x\"] = x * scaleX\n                currentPointsMap[\"y\"] = y * scaleY\n              }\n              // device is landscape right\n              Surface.ROTATION_270 -> {\n                currentPointsMap[\"x\"] = y * scaleX\n                currentPointsMap[\"y\"] = ((-x * scaleY) + sourceHeight * scaleY)\n              }\n              // device is upside down\n              Surface.ROTATION_180 -> {\n                currentPointsMap[\"x\"] =((-x * scaleX) + sourceWidth * scaleX)\n                currentPointsMap[\"y\"] = ((-y * scaleY) + sourceHeight * scaleY)\n              }\n              // device is landscape left\n              Surface.ROTATION_90 -> {\n                currentPointsMap[\"x\"] = ((-y * scaleX) + sourceWidth * scaleX)\n                currentPointsMap[\"y\"] = x * scaleY\n              }\n              else -> {\n                currentPointsMap[\"x\"] = x * scaleX\n                currentPointsMap[\"y\"] = y * scaleY\n              }\n            }\n          }\n        }\n\n        pointsMap.add(currentPointsMap)\n      }\n\n      faceContoursTypesMap[contourName] = pointsMap\n    }\n    return faceContoursTypesMap\n  }\n\n  fun getFaceDetector(\n    options: Map<String, Any>?\n  ): FaceDetectorResult {\n    var performanceModeValue = FaceDetectorOptions.PERFORMANCE_MODE_FAST\n    var landmarkModeValue = FaceDetectorOptions.LANDMARK_MODE_NONE\n    var classificationModeValue = FaceDetectorOptions.CLASSIFICATION_MODE_NONE\n    var contourModeValue = FaceDetectorOptions.CONTOUR_MODE_NONE\n    var runLandmarks = false\n    var runClassifications = false\n    var runContours = false\n    var trackingEnabled = false\n\n    if (options?.get(\"performanceMode\").toString() == \"accurate\") {\n      performanceModeValue = FaceDetectorOptions.PERFORMANCE_MODE_ACCURATE\n    }\n\n    if (options?.get(\"landmarkMode\").toString() == \"all\") {\n      runLandmarks = true\n      landmarkModeValue = FaceDetectorOptions.LANDMARK_MODE_ALL\n    }\n\n    if (options?.get(\"classificationMode\").toString() == \"all\") {\n      runClassifications = true\n      classificationModeValue = FaceDetectorOptions.CLASSIFICATION_MODE_ALL\n    }\n\n    if (options?.get(\"contourMode\").toString() == \"all\") {\n      runContours = true\n      contourModeValue = FaceDetectorOptions.CONTOUR_MODE_ALL\n    }\n\n    val minFaceSize = (options?.get(\"minFaceSize\") ?: 0.15) as Double\n    val optionsBuilder = FaceDetectorOptions.Builder()\n      .setPerformanceMode(performanceModeValue)\n      .setLandmarkMode(landmarkModeValue)\n      .setContourMode(contourModeValue)\n      .setClassificationMode(classificationModeValue)\n      .setMinFaceSize(minFaceSize.toFloat())\n\n    if (options?.get(\"trackingEnabled\").toString() == \"true\") {\n      trackingEnabled = true\n      optionsBuilder.enableTracking()\n    }\n\n    val faceDetector = FaceDetection.getClient(\n      optionsBuilder.build()\n    )\n\n    return FaceDetectorResult(\n      runContours = runContours,\n      runClassifications = runClassifications,\n      runLandmarks = runLandmarks,\n      trackingEnabled = trackingEnabled,\n      faceDetector = faceDetector\n    )\n  }\n\n  fun processFaces(\n    faces: List<Face>,\n    runLandmarks: Boolean,\n    runClassifications: Boolean,\n    runContours: Boolean,\n    trackingEnabled: Boolean,\n    sourceWidth: Double = 0.0,\n    sourceHeight: Double = 0.0,\n    scaleX: Double = 1.0,\n    scaleY: Double = 1.0,\n    autoMode: Boolean 
= false,\n    cameraFacing: Position = Position.FRONT,\n    orientation: Int? = Surface.ROTATION_0\n  ): ArrayList<Map<String, Any>> {\n    val result = ArrayList<Map<String, Any>>()\n\n    faces.forEach{face ->\n      val map: MutableMap<String, Any> = HashMap()\n\n      if (runLandmarks) {\n        map[\"landmarks\"] = processLandmarks(\n          face,\n          sourceWidth,\n          sourceHeight,\n          scaleX,\n          scaleY,\n          autoMode,\n          cameraFacing,\n          orientation\n        )\n      }\n\n      if (runClassifications) {\n        map[\"leftEyeOpenProbability\"] = face.leftEyeOpenProbability?.toDouble() ?: -1\n        map[\"rightEyeOpenProbability\"] = face.rightEyeOpenProbability?.toDouble() ?: -1\n        map[\"smilingProbability\"] = face.smilingProbability?.toDouble() ?: -1\n      }\n\n      if (runContours) {\n        map[\"contours\"] = processFaceContours(\n          face,\n          sourceWidth,\n          sourceHeight,\n          scaleX,\n          scaleY,\n          autoMode,\n          cameraFacing,\n          orientation\n        )\n      }\n\n      if (trackingEnabled) {\n        map[\"trackingId\"] = face.trackingId ?: -1\n      }\n\n      map[\"rollAngle\"] = face.headEulerAngleZ.toDouble()\n      map[\"pitchAngle\"] = face.headEulerAngleX.toDouble()\n      map[\"yawAngle\"] = face.headEulerAngleY.toDouble()\n      map[\"bounds\"] = processBoundingBox(\n        face.boundingBox,\n        sourceWidth,\n        sourceHeight,\n        scaleX,\n        scaleY,\n        autoMode,\n        cameraFacing,\n        orientation\n      )\n\n      result.add(map)\n    }\n\n    return result\n  }\n}\n"
  },
  {
    "path": "android/src/main/java/com/visioncamerafacedetector/ImageFaceDetectorModule.kt",
    "content": "package com.visioncamerafacedetector\n\nimport android.util.Log\nimport android.graphics.Bitmap\nimport android.graphics.BitmapFactory\nimport android.net.Uri\nimport com.facebook.react.bridge.*\nimport com.google.mlkit.vision.common.InputImage\nimport java.io.InputStream\nimport java.net.HttpURLConnection\nimport java.net.URL\n\nprivate const val TAG = \"ImageFaceDetector\"\nclass ImageFaceDetectorModule(\n  private val reactContext: ReactApplicationContext\n): ReactContextBaseJavaModule(reactContext) {\n  override fun getName(): String = \"ImageFaceDetector\"\n\n  private fun toWritableArray(\n    list: ArrayList<Map<String, Any>>\n  ): WritableArray {\n    val array = Arguments.createArray()\n\n    for (map in list) {\n      val writableMap = Arguments.createMap()\n\n      for ((key, value) in map) {\n        @Suppress(\"UNCHECKED_CAST\")\n        when (value) {\n          is Boolean -> writableMap.putBoolean(key, value)\n          is Int -> writableMap.putInt(key, value)\n          is Double -> writableMap.putDouble(key, value)\n          is Float -> writableMap.putDouble(key, value.toDouble())\n          is String -> writableMap.putString(key, value)\n          is Map<*, *> -> writableMap.putMap(key, toWritableMap(value as Map<String, Any>))\n          is ArrayList<*> -> writableMap.putArray(key, toWritableArray(value as ArrayList<Map<String, Any>>))\n          else -> writableMap.putNull(key)\n        }\n      }\n\n      array.pushMap(writableMap)\n    }\n\n    return array\n  }\n\n  private fun toWritableMap(\n    map: Map<String, Any>\n  ): WritableMap {\n    val writableMap = Arguments.createMap()\n\n    for ((key, value) in map) {\n      @Suppress(\"UNCHECKED_CAST\")\n      when (value) {\n        is Boolean -> writableMap.putBoolean(key, value)\n        is Int -> writableMap.putInt(key, value)\n        is Double -> writableMap.putDouble(key, value)\n        is Float -> writableMap.putDouble(key, value.toDouble())\n        is String -> writableMap.putString(key, value)\n        is Map<*, *> -> writableMap.putMap(key, toWritableMap(value as Map<String, Any>))\n        is ArrayList<*> -> writableMap.putArray(key, toWritableArray(value as ArrayList<Map<String, Any>>))\n        else -> writableMap.putNull(key)\n      }\n    }\n\n    return writableMap\n  }\n\n  private fun toMap(\n    readableMap: ReadableMap?\n  ): Map<String, Any> {\n    val map = mutableMapOf<String, Any>()\n    if (readableMap == null) return map\n\n    val iterator = readableMap.keySetIterator()\n    while (iterator.hasNextKey()) {\n      val key = iterator.nextKey()\n      when (readableMap.getType(key)) {\n        ReadableType.Null -> map[key] = \"\"\n        ReadableType.Boolean -> map[key] = readableMap.getBoolean(key)\n        ReadableType.Number -> map[key] = readableMap.getDouble(key)\n        ReadableType.String -> map[key] = readableMap.getString(key) ?: \"\"\n        ReadableType.Map -> map[key] = toMap(readableMap.getMap(key))\n        ReadableType.Array -> map[key] = toList(readableMap.getArray(key))\n      }\n    }\n    return map\n  }\n\n  private fun toList(\n    readableArray: ReadableArray?\n  ): ArrayList<Any> {\n    val list = arrayListOf<Any>()\n    if (readableArray == null) return list\n\n    for (i in 0 until readableArray.size()) {\n      when (readableArray.getType(i)) {\n        ReadableType.Null -> list.add(\"\")\n        ReadableType.Boolean -> list.add(readableArray.getBoolean(i))\n        ReadableType.Number -> list.add(readableArray.getDouble(i))\n        
ReadableType.String -> list.add(readableArray.getString(i) ?: \"\")\n        ReadableType.Map -> list.add(toMap(readableArray.getMap(i)))\n        ReadableType.Array -> list.add(toList(readableArray.getArray(i)))\n      }\n    }\n    return list\n  }\n\n  @ReactMethod\n  fun detectFaces(\n    uri: String, \n    options: ReadableMap?,\n    promise: Promise\n  ) {\n    try {\n      val common = FaceDetectorCommon()\n      val (\n        runContours,\n        runClassifications,\n        runLandmarks,\n        trackingEnabled,\n        faceDetector\n      ) = common.getFaceDetector(\n        toMap(options)\n      )\n\n      val bitmap = loadBitmapFromUri(uri)\n      if (bitmap == null) {\n        Log.e(TAG, \"Unable to load bitmap from uri: $uri\")\n        // resolve empty list on error\n        promise.resolve(Arguments.createArray())\n        return\n      }\n\n      val image = InputImage.fromBitmap(bitmap, 0)\n      faceDetector.process(image)\n        .addOnSuccessListener { faces ->\n          val result = common.processFaces(\n            faces,\n            runLandmarks,\n            runClassifications,\n            runContours,\n            trackingEnabled\n          )\n\n          promise.resolve(\n            toWritableArray(result)\n          )\n        }\n        .addOnFailureListener { e ->\n          Log.e(TAG, \"Error processing image face detection: \", e)\n          // resolve empty list on error\n          promise.resolve(Arguments.createArray())\n        }\n    } catch (e: Exception) {\n      Log.e(TAG, \"Error preparing face detection: \", e)\n      // resolve empty list on error\n      promise.resolve(Arguments.createArray())\n    }\n  }\n\n  private fun loadBitmapFromUri(uriString: String): Bitmap? {\n    return try {\n      val uri = Uri.parse(uriString)\n      when (uri.scheme?.lowercase()) {\n        \"content\", \"android.resource\" -> {\n          val stream = reactContext.contentResolver.openInputStream(uri)\n          stream.useDecode()\n        }\n        \"file\" -> {\n          val path = uri.path ?: return null\n          if (path.startsWith(\"/android_asset/\")) {\n            val assetPath = path.removePrefix(\"/android_asset/\")\n            reactContext.assets.open(assetPath).useDecode()\n          } else {\n            BitmapFactory.decodeFile(path)\n          }\n        }\n        \"asset\" -> {\n          // strip the longer prefix first so \"asset:///\" URIs don't keep leading slashes\n          val assetPath = uriString.removePrefix(\"asset:///\").removePrefix(\"asset:/\")\n          reactContext.assets.open(assetPath).useDecode()\n        }\n        \"http\", \"https\" -> {\n          val url = URL(uriString)\n          val conn = url.openConnection() as HttpURLConnection\n          conn.connect()\n          val input = conn.inputStream\n          input.useDecode()\n        }\n        else -> {\n          // Fallback: try as a plain file path\n          BitmapFactory.decodeFile(uriString)\n        }\n      }\n    } catch (e: Exception) {\n      null\n    }\n  }\n}\n\nprivate fun InputStream?.useDecode(): Bitmap? {\n  if (this == null) return null\n  return try {\n    this.use { \n      BitmapFactory.decodeStream(it)\n    }\n  } catch (e: Exception) {\n    null\n  }\n}\n"
  },
  {
    "path": "android/src/main/java/com/visioncamerafacedetector/VisionCameraFaceDetectorOrientation.kt",
    "content": "package com.visioncamerafacedetector\n\nimport android.util.Log\nimport android.view.OrientationEventListener\nimport android.view.Surface\nimport com.facebook.react.bridge.ReactApplicationContext\n\nprivate const val TAG = \"FaceDetectorOrientation\"\nclass VisionCameraFaceDetectorOrientation(\n  private val context: ReactApplicationContext\n) {\n  var orientation = Surface.ROTATION_0\n  private var orientationListener: OrientationEventListener? = null\n\n  init {\n    if (orientationListener == null) {\n      Log.d(TAG, \"Assigning new device orientation listener\")\n      orientationListener = object : OrientationEventListener(context) {\n        override fun onOrientationChanged(rotationDegrees: Int) {\n          orientation = degreesToSurfaceRotation(rotationDegrees)\n        }\n      }\n    }\n\n    orientation = Surface.ROTATION_0\n    startDeviceOrientationListener()\n  }\n\n  private fun startDeviceOrientationListener() {\n    if (\n      orientationListener != null &&\n      orientationListener!!.canDetectOrientation()\n    ) {\n      Log.d(TAG, \"Enabling device orientation listener\")\n      orientationListener!!.enable()\n    }\n  }\n\n  fun stopDeviceOrientationListener() {\n    orientationListener?.disable()\n    orientationListener = null\n    Log.d(TAG, \"Disabled device orientation listener\")\n  }\n\n  private fun degreesToSurfaceRotation(degrees: Int): Int =\n    when (degrees) {\n      in 45..135 -> Surface.ROTATION_270\n      in 135..225 -> Surface.ROTATION_180\n      in 225..315 -> Surface.ROTATION_90\n      else -> Surface.ROTATION_0\n    }\n}\n"
  },
  {
    "path": "android/src/main/java/com/visioncamerafacedetector/VisionCameraFaceDetectorPlugin.kt",
    "content": "package com.visioncamerafacedetector\n\nimport android.util.Log\nimport android.view.Surface\nimport com.google.android.gms.tasks.Tasks\nimport com.google.mlkit.vision.common.InputImage\nimport com.google.mlkit.vision.face.FaceDetector\nimport com.mrousavy.camera.core.FrameInvalidError\nimport com.mrousavy.camera.core.types.Position\nimport com.mrousavy.camera.frameprocessors.Frame\nimport com.mrousavy.camera.frameprocessors.FrameProcessorPlugin\n\nprivate const val TAG = \"FaceDetector\"\nclass VisionCameraFaceDetectorPlugin(\n  options: Map<String, Any>?,\n  private val orientationManager: VisionCameraFaceDetectorOrientation\n) : FrameProcessorPlugin() {\n  // detection props\n  private var autoMode = false\n  private var faceDetector: FaceDetector? = null\n  private var runLandmarks = false\n  private var runClassifications = false\n  private var runContours = false\n  private var trackingEnabled = false\n  private var windowWidth = 1.0\n  private var windowHeight = 1.0\n  private var cameraFacing: Position = Position.FRONT\n  private val common = FaceDetectorCommon()\n\n  init {\n    // handle auto scaling\n    autoMode = options?.get(\"autoMode\").toString() == \"true\"\n    windowWidth = (options?.get(\"windowWidth\") ?: 1.0) as Double\n    windowHeight = (options?.get(\"windowHeight\") ?: 1.0) as Double\n\n    if (options?.get(\"cameraFacing\").toString() == \"back\") {\n      cameraFacing = Position.BACK\n    }\n\n    val faceDetectorResult = common.getFaceDetector(options)\n    runLandmarks = faceDetectorResult.runLandmarks\n    runClassifications = faceDetectorResult.runClassifications\n    runContours = faceDetectorResult.runContours\n    trackingEnabled = faceDetectorResult.trackingEnabled\n    faceDetector = faceDetectorResult.faceDetector\n  }\n  \n\n  override fun callback(\n    frame: Frame,\n    params: Map<String, Any>?\n  ): ArrayList<Map<String, Any>> {\n    try {\n      val image = InputImage.fromMediaImage(\n        frame.image,\n        frame.imageProxy.imageInfo.rotationDegrees\n      )\n      // we need to invert sizes as frame is always -90deg rotated\n      val width = image.height.toDouble()\n      val height = image.width.toDouble()\n      val scaleX = if(autoMode) windowWidth / width else 1.0\n      val scaleY = if(autoMode) windowHeight / height else 1.0\n      val task = faceDetector!!.process(image)\n      val faces = Tasks.await(task)\n\n      return common.processFaces(\n        faces,\n        runLandmarks,\n        runClassifications,\n        runContours,\n        trackingEnabled,\n        width,\n        height,\n        scaleX,\n        scaleY,\n        autoMode,\n        cameraFacing,\n        orientationManager.orientation\n      )\n    } catch (e: Exception) {\n      Log.e(TAG, \"Error processing face detection: \", e)\n    } catch (e: FrameInvalidError) {\n      Log.e(TAG, \"Frame invalid error: \", e)\n    }\n\n    return ArrayList()\n  }\n}\n"
  },
  {
    "path": "android/src/main/java/com/visioncamerafacedetector/VisionCameraFaceDetectorPluginPackage.kt",
    "content": "package com.visioncamerafacedetector\n\nimport com.facebook.react.ReactPackage\nimport com.facebook.react.bridge.NativeModule\nimport com.facebook.react.bridge.ReactApplicationContext\nimport com.facebook.react.bridge.ReactContextBaseJavaModule\nimport com.facebook.react.bridge.ReactMethod\nimport com.facebook.react.uimanager.ViewManager\nimport com.mrousavy.camera.frameprocessors.FrameProcessorPluginRegistry\n\nclass VisionCameraFaceDetectorPluginPackage: ReactPackage {\n  companion object {\n    private var orientationManager: VisionCameraFaceDetectorOrientation? = null\n\n    init {\n      FrameProcessorPluginRegistry.addFrameProcessorPlugin(\"detectFaces\") { proxy, options ->\n        if(orientationManager == null) {\n          orientationManager = VisionCameraFaceDetectorOrientation(proxy.context)\n        }\n        VisionCameraFaceDetectorPlugin(options, orientationManager!!)\n      }\n    }\n\n    fun stopDeviceOrientationListener() {\n      orientationManager?.stopDeviceOrientationListener()\n      orientationManager = null\n    }\n  }\n\n  override fun createNativeModules(reactContext: ReactApplicationContext): List<NativeModule> {\n    return listOf(\n      VisionCameraFaceDetectorOrientationManager(reactContext),\n      ImageFaceDetectorModule(reactContext)\n    )\n  }\n\n\n  override fun createViewManagers(reactContext: ReactApplicationContext): List<ViewManager<*, *>> {\n    return emptyList()\n  }\n}\n\nclass VisionCameraFaceDetectorOrientationManager(context: ReactApplicationContext) :\n  ReactContextBaseJavaModule(context) {\n  override fun getName(): String {\n    return \"VisionCameraFaceDetectorOrientationManager\"\n  }\n\n  @ReactMethod\n  fun stopDeviceOrientationListener() {\n    VisionCameraFaceDetectorPluginPackage.stopDeviceOrientationListener()\n  }\n}\n"
  },
  {
    "path": "babel.config.js",
    "content": "module.exports = {\n  presets: ['module:metro-react-native-babel-preset'],\n};\n"
  },
  {
    "path": "example/README.md",
    "content": "## Getting Started\n\nInstall all dependencies with:\n```bash\nnpm instal\n# or\nyarn\n```\n\nBefore running example app you need to delete everything inside `/node_modules/react-native-vision-camera-face-detector/node_modules` except `.bin` folder.\n\nThen run the app in development mode:\n```bash\nnpm run android\n# or\nnpm run ios\n# or\nyarn android\n# or\nyarn ios\n```\n\nOr in production mode:\n```bash\nnpm run android:prod\n# or\nnpm run ios:prod\n# or\nyarn android:prod\n# or\nyarn ios:prod\n```\n\n## Cleaning\n\nIf, for some reason, you need to clean the project just run:\n```bash\nnpm run prebuild:clean\n# or\nyarn prebuild:clean\n```\n"
  },
  {
    "path": "example/app.config.js",
    "content": "export default {\n  expo: {\n    newArchEnabled: false,\n    name: 'Face Detector Example',\n    slug: 'face-detector-example',\n    version: '1.0.0',\n    jsEngine: 'hermes',\n    orientation: 'portrait',\n    icon: './assets/icon.png',\n    splash: {\n      'image': './assets/splash.png',\n      'resizeMode': 'contain',\n      'backgroundColor': '#ffffff'\n    },\n    assetBundlePatterns: [\n      '**/*'\n    ],\n    ios: {\n      bundleIdentifier: 'com.facedetector.example',\n      buildNumber: '1',\n      privacyManifests: {\n        NSPrivacyAccessedAPITypes: [ {\n          NSPrivacyAccessedAPIType: 'NSPrivacyAccessedAPICategoryUserDefaults',\n          NSPrivacyAccessedAPITypeReasons: [ 'CA92.1' ]\n        } ]\n      }\n    },\n    android: {\n      package: 'com.facedetector.example',\n      versionCode: 1,\n      adaptiveIcon: {\n        foregroundImage: './assets/adaptive-icon.png',\n        backgroundColor: '#ffffff'\n      }\n    },\n    plugins: [\n      [ 'react-native-vision-camera', {\n        cameraPermissionText: '$(PRODUCT_NAME) needs to access your device\\'s camera.'\n      } ],\n      [ 'expo-image-picker', {\n        photosPermission: 'The app accesses your photos to let you share them with your friends.'\n      } ],\n      [ 'expo-build-properties', {\n        android: {\n          // android 8\n          minSdkVersion: 26,\n          // android 14\n          compileSdkVersion: 35,\n          targetSdkVersion: 35,\n          buildToolsVersion: '35.0.0'\n        },\n        ios: {\n          deploymentTarget: '15.5',\n          useFrameworks: 'static'\n        }\n      } ]\n    ]\n  }\n}\n"
  },
  {
    "path": "example/babel.config.js",
    "content": "const path = require( \"path\" )\nconst pak = require( \"../package.json\" )\n\nmodule.exports = {\n  presets: [ 'babel-preset-expo' ],\n  plugins: [ [\n    'module-resolver', {\n      alias: {\n        [ pak.name ]: path.join( __dirname, \"..\", pak.source )\n      },\n      root: [ './src' ],\n      'extensions': [\n        '.tsx',\n        '.ts',\n        '.js',\n        '.json'\n      ]\n    } ], [\n    'react-native-reanimated/plugin', {\n      processNestedWorklets: true\n    }\n  ], [\n    'react-native-worklets-core/plugin'\n  ] ]\n}\n"
  },
  {
    "path": "example/index.js",
    "content": "import { registerRootComponent } from 'expo'\nimport App from './src'\n\nregisterRootComponent( App )\n"
  },
  {
    "path": "example/metro.config.js",
    "content": "// Learn more https://docs.expo.io/guides/customizing-metro\nconst { getDefaultConfig } = require( 'expo/metro-config' )\nconst blacklist = require( 'metro-config/src/defaults/exclusionList' )\nconst path = require( 'path' )\nconst escape = require( 'escape-string-regexp' )\nconst pak = require( '../package.json' )\nconst root = path.resolve( __dirname, '..' )\nconst defaultConfig = getDefaultConfig( __dirname )\nconst modules = Object.keys( { ...pak.peerDependencies } )\n\nmodule.exports = {\n  ...defaultConfig,\n  projectRoot: __dirname,\n  watchFolders: [ root ],\n  // We need to make sure that only one version is loaded for peerDependencies\n  // So we blacklist them at the root, and alias them to the versions in example's node_modules\n  resolver: {\n    ...defaultConfig.resolver,\n    blacklistRE: blacklist( modules.map( ( m ) => (\n      new RegExp( `^${ escape( path.join( root, 'node_modules', m ) ) }\\\\/.*$` )\n    ) ) ),\n    extraNodeModules: modules.reduce( ( acc, name ) => {\n      acc[ name ] = path.join( __dirname, 'node_modules', name )\n      return acc\n    }, {} )\n  },\n  transformer: {\n    ...defaultConfig.transformer,\n    getTransformOptions: async () => ( {\n      transform: {\n        experimentalImportSupport: false,\n        inlineRequires: true\n      }\n    } )\n  }\n}\n\n\n\n"
  },
  {
    "path": "example/package.json",
    "content": "{\n  \"name\": \"face-detector-example\",\n  \"version\": \"1.0.0\",\n  \"private\": true,\n  \"scripts\": {\n    \"pods\": \"pod-install --quiet\",\n    \"lint\": \"yarn test && eslint --quiet --fix --ext .js,.ts,.tsx,.jsx .\",\n    \"test\": \"tsc\",\n    \"prebuild\": \"npx expo prebuild\",\n    \"prebuild:clean\": \"npx expo prebuild --clean\",\n    \"android\": \"yarn prebuild && npx expo run:android -d\",\n    \"android:prod\": \"yarn prebuild && npx expo run:android -d --variant release\",\n    \"ios\": \"yarn prebuild && npx expo run:ios -d\",\n    \"ios:prod\": \"yarn prebuild && npx expo run:ios -d --configuration Release\",\n    \"start\": \"expo start --dev-client\"\n  },\n  \"main\": \"index.js\",\n  \"dependencies\": {\n    \"@react-native-community/hooks\": \"^100.1.0\",\n    \"@react-native-firebase/app\": \"^22.2.1\",\n    \"@react-native-firebase/messaging\": \"^22.2.1\",\n    \"@react-navigation/native\": \"^7.1.10\",\n    \"@shopify/react-native-skia\": \"2.2.19\",\n    \"expo\": \"^53\",\n    \"expo-application\": \"~6.1.5\",\n    \"expo-build-properties\": \"~0.14.8\",\n    \"expo-dev-client\": \"~5.2.4\",\n    \"expo-image-picker\": \"~16.1.4\",\n    \"react\": \"19.0.0\",\n    \"react-native\": \"../node_modules/react-native\",\n    \"react-native-reanimated\": \"~3.17.4\",\n    \"react-native-safe-area-context\": \"5.4.0\",\n    \"react-native-vision-camera\": \"../node_modules/react-native-vision-camera\",\n    \"react-native-vision-camera-face-detector\": \"link:../\",\n    \"react-native-worklets-core\": \"../node_modules/react-native-worklets-core\"\n  },\n  \"devDependencies\": {\n    \"@babel/core\": \"^7.28.4\",\n    \"@babel/preset-env\": \"^7.28.3\",\n    \"@babel/runtime\": \"^7.28.3\",\n    \"@types/react\": \"~19.0.10\",\n    \"babel-plugin-module-resolver\": \"^5.0.2\",\n    \"eslint\": \"../node_modules/eslint\",\n    \"metro-react-native-babel-preset\": \"^0.77.0\",\n    \"pod-install\": \"^0.3.7\",\n    \"typescript\": \"~5.8.3\"\n  }\n}\n"
  },
  {
    "path": "example/src/index.tsx",
    "content": "import React, {\n  ReactNode,\n  useEffect,\n  useRef,\n  useState\n} from 'react'\nimport {\n  StyleSheet,\n  Text,\n  Button,\n  View,\n  useWindowDimensions\n} from 'react-native'\nimport {\n  CameraPosition,\n  DrawableFrame,\n  Frame,\n  Camera as VisionCamera,\n  useCameraDevice,\n  useCameraPermission\n} from 'react-native-vision-camera'\nimport { launchImageLibraryAsync } from 'expo-image-picker'\nimport { useIsFocused } from '@react-navigation/core'\nimport { useAppState } from '@react-native-community/hooks'\nimport { SafeAreaProvider } from 'react-native-safe-area-context'\nimport { NavigationContainer } from '@react-navigation/native'\nimport {\n  Face,\n  Camera,\n  Contours,\n  Landmarks,\n  detectFaces,\n  FrameFaceDetectionOptions\n} from 'react-native-vision-camera-face-detector'\nimport {\n  ClipOp,\n  Skia,\n  TileMode\n} from '@shopify/react-native-skia'\nimport Animated, {\n  useAnimatedStyle,\n  useSharedValue,\n  withTiming\n} from 'react-native-reanimated'\n\n/**\n * Entry point component\n *\n * @return {ReactNode} Component\n */\nfunction Index(): ReactNode {\n  return (\n    <SafeAreaProvider>\n      <NavigationContainer>\n        <FaceDetection />\n      </NavigationContainer>\n    </SafeAreaProvider>\n  )\n}\n\n/**\n * Face detection component\n *\n * @return {ReactNode} Component\n */\nfunction FaceDetection(): ReactNode {\n  const {\n    width,\n    height\n  } = useWindowDimensions()\n  const {\n    hasPermission,\n    requestPermission\n  } = useCameraPermission()\n  const [\n    cameraMounted,\n    setCameraMounted\n  ] = useState<boolean>( false )\n  const [\n    cameraPaused,\n    setCameraPaused\n  ] = useState<boolean>( false )\n  const [\n    autoMode,\n    setAutoMode\n  ] = useState<boolean>( true )\n  const [\n    cameraFacing,\n    setCameraFacing\n  ] = useState<CameraPosition>( 'front' )\n  const faceDetectionOptions = useRef<FrameFaceDetectionOptions>( {\n    performanceMode: 'fast',\n    classificationMode: 'all',\n    contourMode: 'all',\n    landmarkMode: 'all',\n    windowWidth: width,\n    windowHeight: height\n  } ).current\n  const isFocused = useIsFocused()\n  const appState = useAppState()\n  const isCameraActive = (\n    !cameraPaused &&\n    isFocused &&\n    appState === 'active'\n  )\n  const cameraDevice = useCameraDevice( cameraFacing )\n  //\n  // vision camera ref\n  //\n  const camera = useRef<VisionCamera>( null )\n  //\n  // face rectangle position\n  //\n  const aFaceW = useSharedValue( 0 )\n  const aFaceH = useSharedValue( 0 )\n  const aFaceX = useSharedValue( 0 )\n  const aFaceY = useSharedValue( 0 )\n  const aRot = useSharedValue( 0 )\n  const boundingBoxStyle = useAnimatedStyle( () => ( {\n    position: 'absolute',\n    borderWidth: 4,\n    borderLeftColor: 'rgb(0,255,0)',\n    borderRightColor: 'rgb(0,255,0)',\n    borderBottomColor: 'rgb(0,255,0)',\n    borderTopColor: 'rgb(255,0,0)',\n    width: withTiming( aFaceW.value, {\n      duration: 100\n    } ),\n    height: withTiming( aFaceH.value, {\n      duration: 100\n    } ),\n    left: withTiming( aFaceX.value, {\n      duration: 100\n    } ),\n    top: withTiming( aFaceY.value, {\n      duration: 100\n    } ),\n    transform: [ {\n      rotate: `${ aRot.value }deg`\n    } ]\n  } ) )\n\n  useEffect( () => {\n    if ( hasPermission ) return\n    requestPermission()\n  }, [] )\n\n  /**\n   * Handle camera UI rotation\n   * \n   * @param {number} rotation Camera rotation\n   */\n  function handleUiRotation(\n    rotation: number\n  ) {\n    aRot.value = 
rotation\n  }\n\n  /**\n   * Handles camera mount error event\n   *\n   * @param {any} error Error event\n   */\n  function handleCameraMountError(\n    error: any\n  ) {\n    console.error( 'camera mount error', error )\n  }\n\n  /**\n   * Handle detection result\n   * \n   * @param {Face[]} faces Detection result \n   * @param {Frame} frame Current frame\n   * @returns {void}\n   */\n  function handleFacesDetected(\n    faces: Face[],\n    frame: Frame\n  ): void {\n    // if no faces are detected we do nothing\n    if ( faces.length <= 0 ) {\n      aFaceW.value = 0\n      aFaceH.value = 0\n      aFaceX.value = 0\n      aFaceY.value = 0\n      return\n    }\n\n    console.log(\n      'faces', faces.length,\n      'frame', frame.toString(),\n      'faces', JSON.stringify( faces )\n    )\n\n    const { bounds } = faces[ 0 ]\n    const {\n      width,\n      height,\n      x,\n      y\n    } = bounds\n    aFaceW.value = width\n    aFaceH.value = height\n    aFaceX.value = x\n    aFaceY.value = y\n\n    // only call camera methods if ref is defined\n    if ( camera.current ) {\n      // take photo, capture video, etc...\n    }\n  }\n\n  /**\n   * Handle skia frame actions\n   * \n   * @param {Face[]} faces Detection result \n   * @param {DrawableFrame} frame Current frame\n   * @returns {void}\n   */\n  function handleSkiaActions(\n    faces: Face[],\n    frame: DrawableFrame\n  ): void {\n    'worklet'\n    // if no faces are detected we do nothing\n    if ( faces.length <= 0 ) return\n\n    console.log(\n      'SKIA - faces', faces.length,\n      'frame', frame.toString()\n    )\n\n    const {\n      bounds,\n      contours,\n      landmarks\n    } = faces[ 0 ]\n\n    // draw a blur shape around the face points\n    const blurRadius = 25\n    const blurFilter = Skia.ImageFilter.MakeBlur(\n      blurRadius,\n      blurRadius,\n      TileMode.Repeat,\n      null\n    )\n    const blurPaint = Skia.Paint()\n    blurPaint.setImageFilter( blurFilter )\n    const contourPath = Skia.Path.Make()\n    const necessaryContours: ( keyof Contours )[] = [\n      'FACE',\n      'LEFT_CHEEK',\n      'RIGHT_CHEEK'\n    ]\n\n    necessaryContours.forEach( ( key ) => {\n      contours?.[ key ]?.forEach( ( point, index ) => {\n        if ( index === 0 ) {\n          // it's a starting point\n          contourPath.moveTo( point.x, point.y )\n        } else {\n          // it's a continuation\n          contourPath.lineTo( point.x, point.y )\n        }\n      } )\n      contourPath.close()\n    } )\n\n    frame.save()\n    frame.clipPath( contourPath, ClipOp.Intersect, true )\n    frame.render( blurPaint )\n    frame.restore()\n\n    // draw mouth shape\n    const mouthPath = Skia.Path.Make()\n    const mouthPaint = Skia.Paint()\n    mouthPaint.setColor( Skia.Color( 'red' ) )\n    const necessaryLandmarks: ( keyof Landmarks )[] = [\n      'MOUTH_BOTTOM',\n      'MOUTH_LEFT',\n      'MOUTH_RIGHT'\n    ]\n\n    necessaryLandmarks.forEach( ( key, index ) => {\n      const point = landmarks?.[ key ]\n      if ( !point ) return\n\n      if ( index === 0 ) {\n        // it's a starting point\n        mouthPath.moveTo( point.x, point.y )\n      } else {\n        // it's a continuation\n        mouthPath.lineTo( point.x, point.y )\n      }\n    } )\n    mouthPath.close()\n    frame.drawPath( mouthPath, mouthPaint )\n\n    // draw a rectangle around the face\n    const rectPaint = Skia.Paint()\n    rectPaint.setColor( Skia.Color( 'blue' ) )\n    // 1 = stroke style (PaintStyle.Stroke)\n    rectPaint.setStyle( 1 )\n    rectPaint.setStrokeWidth( 5 )\n    frame.drawRect( bounds, 
rectPaint )\n  }\n\n  /**\n   * Detect faces from image\n   * \n   * @returns {Promise<void>} Promise\n   */\n  async function detectFacesFromImage(): Promise<void> {\n    // No permissions request is necessary for launching the image library\n    const result = await launchImageLibraryAsync( {\n      mediaTypes: [ 'images' ],\n      allowsEditing: true,\n      aspect: [ 4, 3 ],\n      quality: 1\n    } )\n\n    if ( result.canceled ) return\n\n    const faces = await detectFaces( {\n      image: result.assets[ 0 ].uri\n    } )\n    console.log( 'image detected faces', faces )\n  }\n\n  /**\n   * Detect faces from photo\n   * \n   * @returns {Promise<void>} Promise\n   */\n  async function detectFacesFromPhoto(): Promise<void> {\n    if ( !camera.current ) return\n    // taking a snapshot is faster than taking a photo,\n    // but the captured image is not post-processed\n    const { path } = await camera.current.takeSnapshot()\n    const faces = await detectFaces( {\n      image: \`file://\${ path }\`\n    } )\n    console.log( 'photo detected faces', faces )\n  }\n\n  return ( <>\n    <View\n      style={ [\n        StyleSheet.absoluteFill, {\n          alignItems: 'center',\n          justifyContent: 'center'\n        }\n      ] }\n    >\n      { hasPermission && cameraDevice ? <>\n        { cameraMounted && <>\n          <Camera\n            // @ts-ignore\n            ref={ camera }\n            style={ StyleSheet.absoluteFill }\n            isActive={ isCameraActive }\n            device={ cameraDevice }\n            onError={ handleCameraMountError }\n            faceDetectionCallback={ handleFacesDetected }\n            onUIRotationChanged={ handleUiRotation }\n            // @ts-ignore\n            skiaActions={ handleSkiaActions }\n            faceDetectionOptions={ {\n              ...faceDetectionOptions,\n              autoMode,\n              cameraFacing\n            } }\n          />\n\n          <Animated.View\n            style={ boundingBoxStyle }\n          />\n\n          { cameraPaused && <Text\n            style={ {\n              width: '100%',\n              backgroundColor: 'rgb(0,0,255)',\n              textAlign: 'center',\n              color: 'white'\n            } }\n          >\n            Camera is PAUSED\n          </Text> }\n        </> }\n\n        { !cameraMounted && <Text\n          style={ {\n            width: '100%',\n            backgroundColor: 'rgb(255,255,0)',\n            textAlign: 'center'\n          } }\n        >\n          Camera is NOT mounted\n        </Text> }\n      </> : <Text\n        style={ {\n          width: '100%',\n          backgroundColor: 'rgb(255,0,0)',\n          textAlign: 'center',\n          color: 'white'\n        } }\n      >\n        No camera device or permission\n      </Text> }\n    </View>\n\n    <View\n      style={ {\n        position: 'absolute',\n        bottom: 20,\n        left: 0,\n        right: 0,\n        display: 'flex',\n        flexDirection: 'column'\n      } }\n    >\n      <View\n        style={ {\n          width: '100%',\n          display: 'flex',\n          flexDirection: 'row',\n          justifyContent: 'space-around'\n        } }\n      >\n        <Button\n          onPress={ detectFacesFromImage }\n          title={ 'Detect from file' }\n        />\n\n        <Button\n          disabled={ !cameraMounted }\n          onPress={ detectFacesFromPhoto }\n          title={ 'Detect from photo' }\n        />\n      </View>\n\n      <View\n        style={ {\n          width: '100%',\n          display: 'flex',\n     
     flexDirection: 'row',\n          justifyContent: 'space-around'\n        } }\n      >\n        <Button\n          onPress={ () => setCameraFacing( ( current ) => (\n            current === 'front' ? 'back' : 'front'\n          ) ) }\n          title={ 'Toggle Cam' }\n        />\n\n        <Button\n          onPress={ () => setAutoMode( ( current ) => !current ) }\n          title={ `${ autoMode ? 'Disable' : 'Enable' } AutoMode` }\n        />\n      </View>\n      <View\n        style={ {\n          width: '100%',\n          display: 'flex',\n          flexDirection: 'row',\n          justifyContent: 'space-around'\n        } }\n      >\n        <Button\n          onPress={ () => setCameraPaused( ( current ) => !current ) }\n          title={ `${ cameraPaused ? 'Resume' : 'Pause' } Cam` }\n        />\n\n        <Button\n          onPress={ () => setCameraMounted( ( current ) => !current ) }\n          title={ `${ cameraMounted ? 'Unmount' : 'Mount' } Cam` }\n        />\n      </View>\n    </View>\n  </> )\n}\n\nexport default Index\n"
  },
  {
    "path": "example/tsconfig.dev.json",
    "content": "{\n  \"include\": [\n    \".eslintrc.js\"\n  ]\n}\n"
  },
  {
    "path": "example/tsconfig.json",
    "content": "{\n  \"extends\": \"expo/tsconfig.base\",\n  \"compilerOptions\": {\n    \"strict\": true,\n    \"forceConsistentCasingInFileNames\": true,\n    \"isolatedModules\": true,\n    \"typeRoots\": [\n      \"./node_modules/@types\"\n    ],\n    \"noUnusedLocals\": true,\n    \"noImplicitReturns\": true,\n    \"sourceMap\": true,\n    \"paths\": {\n      \"react-native-vision-camera-face-detector\": [\n        \"../src/index\"\n      ]\n    },\n    \"strictPropertyInitialization\": false\n  },\n  \"compileOnSave\": true,\n  \"include\": [\n    \"index.js\",\n    \"src\"\n  ]\n}\n"
  },
  {
    "path": "ios/FaceDetectorCommon.swift",
    "content": "import MLKitFaceDetection\nimport VisionCamera\nimport AVFoundation\n\nstruct FaceDetectorResult {\n    let runContours: Bool\n    let runClassifications: Bool\n    let runLandmarks: Bool\n    let trackingEnabled: Bool\n    let faceDetector: FaceDetector\n}\n\nfinal class FaceDetectorCommon {\n  func getConfig(\n    withArguments arguments: [AnyHashable: Any]?\n  ) -> [String:Any] {\n    guard let arguments = arguments, !arguments.isEmpty else {\n      return [:]\n    }\n\n    let config = Dictionary(uniqueKeysWithValues: arguments.map {\n      (key, value) in (key as? String ?? \"\", value)\n    })\n\n    return config\n  }\n\n  private func processBoundingBox(\n    from face: Face,\n    sourceWidth: CGFloat = 0.0,\n    sourceHeight: CGFloat = 0.0,\n    scaleX: CGFloat = 1.0,\n    scaleY: CGFloat = 1.0,\n    autoMode: Bool = false\n  ) -> [String:Any] {\n    let boundingBox = face.frame\n    let width = boundingBox.width * scaleX\n    let height = boundingBox.height * scaleY\n    // inverted because we also inverted sourceWidth/height\n    let x = boundingBox.origin.y * scaleX\n    let y = boundingBox.origin.x * scaleY\n    \n    if(autoMode) {\n      return [\n        \"width\": width,\n        \"height\": height,\n        \"x\": (-x + sourceWidth * scaleX) - width,\n        \"y\": y\n      ]\n    }\n    \n    return [\n      \"width\": width,\n      \"height\": height,\n      \"x\": y,\n      \"y\": x\n    ]\n  }\n\n  private func processLandmarks(\n    from face: Face,\n    sourceWidth: CGFloat = 0.0,\n    sourceHeight: CGFloat = 0.0,\n    scaleX: CGFloat = 1.0,\n    scaleY: CGFloat = 1.0,\n    autoMode: Bool = false\n  ) -> [String:[String: CGFloat?]] {\n    let faceLandmarkTypes = [\n      FaceLandmarkType.leftCheek,\n      FaceLandmarkType.leftEar,\n      FaceLandmarkType.leftEye,\n      FaceLandmarkType.mouthBottom,\n      FaceLandmarkType.mouthLeft,\n      FaceLandmarkType.mouthRight,\n      FaceLandmarkType.noseBase,\n      FaceLandmarkType.rightCheek,\n      FaceLandmarkType.rightEar,\n      FaceLandmarkType.rightEye\n    ]\n\n    let faceLandmarksTypesStrings = [\n      \"LEFT_CHEEK\",\n      \"LEFT_EAR\",\n      \"LEFT_EYE\",\n      \"MOUTH_BOTTOM\",\n      \"MOUTH_LEFT\",\n      \"MOUTH_RIGHT\",\n      \"NOSE_BASE\",\n      \"RIGHT_CHEEK\",\n      \"RIGHT_EAR\",\n      \"RIGHT_EYE\"\n    ];\n\n    var faceLandMarksTypesMap: [String: [String: CGFloat?]] = [:]\n    for i in 0..<faceLandmarkTypes.count {\n      let landmark = face.landmark(ofType: faceLandmarkTypes[i]);\n      // inverted because we also inverted sourceWidth/height\n      let x = (landmark?.position.y ?? 0.0) * scaleX\n      let y = (landmark?.position.x ?? 
0.0) * scaleY\n\n      var position: [String: CGFloat] \n      if autoMode {\n        position = [\n          \"x\": (-x + sourceWidth * scaleX),\n          \"y\": y\n        ]\n      } else {\n        position = [\n          \"x\": y,\n          \"y\": x\n        ]\n      }\n      \n      faceLandMarksTypesMap[faceLandmarksTypesStrings[i]] = position\n    }\n\n    return faceLandMarksTypesMap\n  }\n\n  private func processFaceContours(\n    from face: Face,\n    sourceWidth: CGFloat = 0.0,\n    sourceHeight: CGFloat = 0.0,\n    scaleX: CGFloat = 1.0,\n    scaleY: CGFloat = 1.0,\n    autoMode: Bool = false,\n    cameraFacing: AVCaptureDevice.Position = .front,\n    orientation: Orientation = .portrait\n  ) -> [String:[[String:CGFloat]]] {\n    let faceContoursTypes = [\n      FaceContourType.face,\n      FaceContourType.leftCheek,\n      FaceContourType.leftEye,\n      FaceContourType.leftEyebrowBottom,\n      FaceContourType.leftEyebrowTop,\n      FaceContourType.lowerLipBottom,\n      FaceContourType.lowerLipTop,\n      FaceContourType.noseBottom,\n      FaceContourType.noseBridge,\n      FaceContourType.rightCheek,\n      FaceContourType.rightEye,\n      FaceContourType.rightEyebrowBottom,\n      FaceContourType.rightEyebrowTop,\n      FaceContourType.upperLipBottom,\n      FaceContourType.upperLipTop\n    ]\n\n    let faceContoursTypesStrings = [\n      \"FACE\",\n      \"LEFT_CHEEK\",\n      \"LEFT_EYE\",\n      \"LEFT_EYEBROW_BOTTOM\",\n      \"LEFT_EYEBROW_TOP\",\n      \"LOWER_LIP_BOTTOM\",\n      \"LOWER_LIP_TOP\",\n      \"NOSE_BOTTOM\",\n      \"NOSE_BRIDGE\",\n      \"RIGHT_CHEEK\",\n      \"RIGHT_EYE\",\n      \"RIGHT_EYEBROW_BOTTOM\",\n      \"RIGHT_EYEBROW_TOP\",\n      \"UPPER_LIP_BOTTOM\",\n      \"UPPER_LIP_TOP\"\n    ];\n\n    var faceContoursTypesMap: [String:[[String:CGFloat]]] = [:]\n    for i in 0..<faceContoursTypes.count {\n      let contour = face.contour(ofType: faceContoursTypes[i])\n\n      var pointsArray: [[String:CGFloat]] = []\n      if let points = contour?.points {\n        for point in points {\n          var x = point.x\n          var y = point.y\n\n          switch orientation {\n            case .portrait:\n              swap(&x, &y)\n            case .landscapeLeft:\n              break\n            case .portraitUpsideDown:\n              x = -x\n              y = -y\n            case .landscapeRight:\n              swap(&x, &y)\n              x = -x\n              y = -y\n            default:\n              break\n          }\n\n          x *= scaleX\n          y *= scaleY\n\n          if autoMode && cameraFacing == .front {\n            x = sourceWidth * scaleX - x\n          }\n\n          pointsArray.append([\n            \"x\": x,\n            \"y\": y\n          ])\n        }\n\n        faceContoursTypesMap[faceContoursTypesStrings[i]] = pointsArray\n      }\n    }\n\n    return faceContoursTypesMap\n  }\n  \n  func getFaceDetector(\n    config: [String:Any]\n  ) -> FaceDetectorResult {\n    var runLandmarks = false\n    var runClassifications = false\n    var runContours = false\n    var trackingEnabled = false\n    \n    let minFaceSize = 0.15\n    let optionsBuilder = FaceDetectorOptions()\n        optionsBuilder.performanceMode = .fast\n        optionsBuilder.landmarkMode = .none\n        optionsBuilder.contourMode = .none\n        optionsBuilder.classificationMode = .none\n        optionsBuilder.minFaceSize = minFaceSize\n        optionsBuilder.isTrackingEnabled = false\n\n    if config[\"performanceMode\"] as? 
String == \"accurate\" {\n      optionsBuilder.performanceMode = .accurate\n    }\n\n    if config[\"landmarkMode\"] as? String == \"all\" {\n      runLandmarks = true\n      optionsBuilder.landmarkMode = .all\n    }\n\n    if config[\"classificationMode\"] as? String == \"all\" {\n      runClassifications = true\n      optionsBuilder.classificationMode = .all\n    }\n\n    if config[\"contourMode\"] as? String == \"all\" {\n      runContours = true\n      optionsBuilder.contourMode = .all\n    }\n\n    let minFaceSizeParam = config[\"minFaceSize\"] as? Double\n    if minFaceSizeParam != nil && minFaceSizeParam != minFaceSize {\n      optionsBuilder.minFaceSize = CGFloat(minFaceSizeParam!)\n    }\n\n    if config[\"trackingEnabled\"] as? Bool == true {\n      trackingEnabled = true\n      optionsBuilder.isTrackingEnabled = true\n    }\n\n    let faceDetector = FaceDetector.faceDetector(\n      options: optionsBuilder\n    )\n  \n    return FaceDetectorResult(\n      runContours: runContours,\n      runClassifications: runClassifications,\n      runLandmarks: runLandmarks,\n      trackingEnabled: trackingEnabled,\n      faceDetector: faceDetector\n    )\n  }\n  \n  func processFaces(\n    faces: [Face],\n    runLandmarks: Bool,\n    runClassifications: Bool,\n    runContours: Bool,\n    trackingEnabled: Bool,\n    sourceWidth: CGFloat = 0.0,\n    sourceHeight: CGFloat = 0.0,\n    scaleX: CGFloat = 1.0,\n    scaleY: CGFloat = 1.0,\n    autoMode: Bool = false,\n    cameraFacing: AVCaptureDevice.Position = .front,\n    orientation: Orientation = .portrait\n  ) -> [Any] {\n    var result: [Any] = []\n    \n    for face in faces {\n      var map: [String: Any] = [:]\n      \n      if runLandmarks {\n        map[\"landmarks\"] = processLandmarks(\n          from: face,\n          sourceWidth: sourceWidth,\n          sourceHeight: sourceHeight,\n          scaleX: scaleX,\n          scaleY: scaleY,\n          autoMode: autoMode\n        )\n      }\n      \n      if runClassifications {\n        map[\"leftEyeOpenProbability\"] = face.leftEyeOpenProbability\n        map[\"rightEyeOpenProbability\"] = face.rightEyeOpenProbability\n        map[\"smilingProbability\"] = face.smilingProbability\n      }\n      \n      if runContours {\n        map[\"contours\"] = processFaceContours(\n          from: face,\n          sourceWidth: sourceWidth,\n          sourceHeight: sourceHeight,\n          scaleX: scaleX,\n          scaleY: scaleY,\n          autoMode: autoMode,\n          cameraFacing: cameraFacing,\n          orientation: orientation\n        )\n      }\n      \n      if trackingEnabled {\n        map[\"trackingId\"] = face.trackingID\n      }\n      \n      map[\"rollAngle\"] = face.headEulerAngleZ\n      map[\"pitchAngle\"] = face.headEulerAngleX\n      map[\"yawAngle\"] = face.headEulerAngleY\n      map[\"bounds\"] = processBoundingBox(\n        from: face,\n        sourceWidth: sourceWidth,\n        sourceHeight: sourceHeight,\n        scaleX: scaleX,\n        scaleY: scaleY,\n        autoMode: autoMode\n      )\n      \n      result.append(map)\n    }\n    \n    return result\n  }\n}\n"
  },
  {
    "path": "ios/ImageFaceDetectorModule.m",
    "content": "#import <React/RCTBridgeModule.h>\n\n@interface RCT_EXTERN_MODULE(ImageFaceDetector, NSObject)\nRCT_EXTERN_METHOD(\n  detectFaces:(NSString *)uri \n  options:(NSDictionary *)options\n  resolver:(RCTPromiseResolveBlock)resolve \n  rejecter:(RCTPromiseRejectBlock)reject\n)\n@end\n"
  },
  {
    "path": "ios/ImageFaceDetectorModule.swift",
    "content": "import Foundation\nimport React\nimport MLKitFaceDetection\nimport MLKitVision\nimport UIKit\n\n@objc(ImageFaceDetector)\nclass ImageFaceDetectorModule: NSObject {\n  @objc static func requiresMainQueueSetup() -> Bool { false }\n\n  private func loadUIImage(\n    from uriString: String\n  ) throws -> UIImage {\n    if let url = URL(string: uriString) {\n      if url.isFileURL {\n        if let img = UIImage(contentsOfFile: url.path) { return img }\n        throw NSError(domain: \"ImageFaceDetector\", code: 1, userInfo: [NSLocalizedDescriptionKey: \"Could not load image from file path: \\(url.path)\"])\n      }\n\n      // Support bundled assets with no scheme (rare) or http(s) URIs\n      if url.scheme == \"http\" || url.scheme == \"https\" {\n        let data = try Data(contentsOf: url)\n        if let img = UIImage(data: data) { return img }\n        throw NSError(domain: \"ImageFaceDetector\", code: 2, userInfo: [NSLocalizedDescriptionKey: \"Could not decode image from network data\"])\n      }\n\n      // Try generic loading via Data for other schemes if possible\n      if let data = try? Data(contentsOf: url), let img = UIImage(data: data) {\n        return img\n      }\n    }\n\n    // Fallback: treat string as local path\n    if FileManager.default.fileExists(atPath: uriString), let img = UIImage(contentsOfFile: uriString) {\n      return img\n    }\n\n    throw NSError(domain: \"ImageFaceDetector\", code: 3, userInfo: [NSLocalizedDescriptionKey: \"Unsupported or unreadable uri: \\(uriString)\"])\n  }\n\n  @objc(detectFaces:options:resolver:rejecter:)\n  func detectFaces(\n    _ uri: String, \n    options: [AnyHashable : Any]? = [:],\n    resolver: @escaping RCTPromiseResolveBlock,\n    rejecter: @escaping RCTPromiseRejectBlock\n  ) {\n    let common = FaceDetectorCommon()\n    do {\n      let config = common.getConfig(withArguments: options)\n      let faceDetectorResult = common.getFaceDetector(\n        config: config\n      )\n      let runLandmarks = faceDetectorResult.runLandmarks\n      let runClassifications = faceDetectorResult.runClassifications\n      let runContours = faceDetectorResult.runContours\n      let trackingEnabled = faceDetectorResult.trackingEnabled\n      let faceDetector = faceDetectorResult.faceDetector\n      \n      let image = try self.loadUIImage(from: uri)\n      let visionImage = VisionImage(image: image)\n      visionImage.orientation = image.imageOrientation\n\n      faceDetector.process(visionImage) { faces, error in\n        if let error = error {\n          print(\"Error processing image face detection: \\(error)\")\n          // resolve empty list on error\n          resolver([])\n          return\n        }\n\n        let result = common.processFaces(\n          faces: faces ?? [],\n          runLandmarks: runLandmarks,\n          runClassifications: runClassifications,\n          runContours: runContours,\n          trackingEnabled: trackingEnabled\n        )\n        resolver(result)\n      }\n    } catch let error {\n      print(\"Error preparing face detection: \\(error)\")\n      // resolve empty list on error\n      resolver([])\n    }\n  }\n}\n"
  },
  {
    "path": "ios/VisionCameraFaceDetector-Bridging-Header.h",
    "content": "#if VISION_CAMERA_ENABLE_FRAME_PROCESSORS\n#import <VisionCamera/FrameProcessorPlugin.h>\n#import <VisionCamera/Frame.h>\n#endif\n"
  },
  {
    "path": "ios/VisionCameraFaceDetector.m",
    "content": "#import <Foundation/Foundation.h>\n#import <VisionCamera/FrameProcessorPlugin.h>\n#import <VisionCamera/FrameProcessorPluginRegistry.h>\n#import <VisionCamera/Frame.h>\n\n#if __has_include(\"VisionCameraFaceDetector/VisionCameraFaceDetector-Swift.h\")\n#import \"VisionCameraFaceDetector/VisionCameraFaceDetector-Swift.h\"\n#else\n#import \"VisionCameraFaceDetector-Swift.h\"\n#endif\n\n@interface VisionCameraFaceDetector (FrameProcessorPluginLoader)\n@end\n\n@implementation VisionCameraFaceDetector (FrameProcessorPluginLoader)\n+ (void) load {\n  [FrameProcessorPluginRegistry addFrameProcessorPlugin:@\"detectFaces\"\n    withInitializer:^FrameProcessorPlugin*(VisionCameraProxyHolder* proxy, NSDictionary* options) {\n    return [[VisionCameraFaceDetector alloc] initWithProxy:proxy withOptions:options];\n  }];\n}\n@end\n"
  },
  {
    "path": "ios/VisionCameraFaceDetector.swift",
    "content": "import VisionCamera\nimport Foundation\nimport MLKitFaceDetection\nimport MLKitVision\nimport CoreML\nimport UIKit\nimport AVFoundation\nimport SceneKit\n\n@objc(VisionCameraFaceDetector)\npublic class VisionCameraFaceDetector: FrameProcessorPlugin {\n  enum CameraFacing: String {\n    case front = \"front\"\n    case back = \"back\"\n  }\n  \n  // detection props\n  private var autoMode = false\n  private var faceDetector: FaceDetector! = nil\n  private var runLandmarks = false\n  private var runClassifications = false\n  private var runContours = false\n  private var trackingEnabled = false\n  private var windowWidth = 1.0\n  private var windowHeight = 1.0\n  private var cameraFacing: AVCaptureDevice.Position = .front\n  private var common: FaceDetectorCommon! = nil\n  private var orientationManager: VisionCameraFaceDetectorOrientation! = nil\n\n  public override init(\n    proxy: VisionCameraProxyHolder, \n    options: [AnyHashable : Any]? = [:]\n  ) {\n    super.init(proxy: proxy, options: options)\n    common = FaceDetectorCommon()\n    orientationManager = VisionCameraFaceDetectorOrientation()\n\n    let config = common.getConfig(withArguments: options)\n    let windowWidthParam = config[\"windowWidth\"] as? Double\n    if windowWidthParam != nil && windowWidthParam != windowWidth {\n      windowWidth = CGFloat(windowWidthParam!)\n    }\n\n    let windowHeightParam = config[\"windowHeight\"] as? Double\n    if windowHeightParam != nil && windowHeightParam != windowHeight {\n      windowHeight = CGFloat(windowHeightParam!)\n    }\n    \n    if config[\"cameraFacing\"] as? String == \"back\" {\n      cameraFacing = .back\n    }\n\n    // handle auto scaling and rotation\n    autoMode = config[\"autoMode\"] as? Bool == true\n    let faceDetectorResult = common.getFaceDetector(\n      config: config\n    )\n    \n    runLandmarks = faceDetectorResult.runLandmarks\n    runClassifications = faceDetectorResult.runClassifications\n    runContours = faceDetectorResult.runContours\n    trackingEnabled = faceDetectorResult.trackingEnabled\n    faceDetector = faceDetectorResult.faceDetector\n  }\n\n  func getImageOrientation() -> UIImage.Orientation {\n    switch orientationManager!.orientation {\n      case .portrait:\n        return cameraFacing == .front ? .leftMirrored : .right\n      case .landscapeLeft:\n        return cameraFacing == .front ? .upMirrored : .up\n      case .portraitUpsideDown:\n        return cameraFacing == .front ? .rightMirrored : .left\n      case .landscapeRight:\n        return cameraFacing == .front ? 
.downMirrored : .down\n      @unknown default:\n        return .up\n    }\n  }\n  \n  public override func callback(\n    _ frame: Frame, \n    withArguments arguments: [AnyHashable: Any]?\n  ) -> Any {\n    do {\n      // we need to invert sizes as frame is always -90deg rotated\n      let width = CGFloat(frame.height)\n      let height = CGFloat(frame.width)\n      let image = VisionImage(buffer: frame.buffer)\n      image.orientation = getImageOrientation()\n    \n      var scaleX:CGFloat\n      var scaleY:CGFloat\n      if (autoMode) {\n        scaleX = windowWidth / width\n        scaleY = windowHeight / height\n      } else {\n        scaleX = CGFloat(1)\n        scaleY = CGFloat(1)\n      }\n\n      let faces: [Face] = try faceDetector!.results(in: image)\n      return common.processFaces(\n        faces: faces,\n        runLandmarks: runLandmarks,\n        runClassifications: runClassifications,\n        runContours: runContours,\n        trackingEnabled: trackingEnabled,\n        sourceWidth: width,\n        sourceHeight: height,\n        scaleX: scaleX,\n        scaleY: scaleY,\n        autoMode: autoMode,\n        cameraFacing: cameraFacing,\n        orientation: orientationManager!.orientation\n      )\n    } catch let error {\n      print(\"Error processing face detection: \\(error)\")\n    }\n\n    return []\n  }\n}\n"
  },
  {
    "path": "ios/VisionCameraFaceDetector.xcodeproj/project.pbxproj",
    "content": "// !$*UTF8*$!\n{\n\tarchiveVersion = 1;\n\tclasses = {\n\t};\n\tobjectVersion = 46;\n\tobjects = {\n\n/* Begin PBXBuildFile section */\n\t\tF3D654022AE06AAB009F11D2 /* VisionCameraFaceDetector.m in Sources */ = {isa = PBXBuildFile; fileRef = F3D654002AE06A9C009F11D2 /* VisionCameraFaceDetector.m */; };\n\t\tF3D654032AE06AAB009F11D2 /* VisionCameraFaceDetector.swift in Sources */ = {isa = PBXBuildFile; fileRef = F3D654012AE06A9C009F11D2 /* VisionCameraFaceDetector.swift */; };\n/* End PBXBuildFile section */\n\n/* Begin PBXCopyFilesBuildPhase section */\n\t\t58B511D91A9E6C8500147676 /* CopyFiles */ = {\n\t\t\tisa = PBXCopyFilesBuildPhase;\n\t\t\tbuildActionMask = 2147483647;\n\t\t\tdstPath = \"include/$(PRODUCT_NAME)\";\n\t\t\tdstSubfolderSpec = 16;\n\t\t\tfiles = (\n\t\t\t);\n\t\t\trunOnlyForDeploymentPostprocessing = 0;\n\t\t};\n/* End PBXCopyFilesBuildPhase section */\n\n/* Begin PBXFileReference section */\n\t\t134814201AA4EA6300B7C361 /* libVisionCameraFaceDetector.a */ = {isa = PBXFileReference; explicitFileType = archive.ar; includeInIndex = 0; path = libVisionCameraFaceDetector.a; sourceTree = BUILT_PRODUCTS_DIR; };\n\t\tF3D653FF2AE06A9C009F11D2 /* VisionCameraFaceDetector-Bridging-Header.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = \"VisionCameraFaceDetector-Bridging-Header.h\"; sourceTree = \"<group>\"; };\n\t\tF3D654002AE06A9C009F11D2 /* VisionCameraFaceDetector.m */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.objc; path = VisionCameraFaceDetector.m; sourceTree = \"<group>\"; };\n\t\tF3D654012AE06A9C009F11D2 /* VisionCameraFaceDetector.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = VisionCameraFaceDetector.swift; sourceTree = \"<group>\"; };\n/* End PBXFileReference section */\n\n/* Begin PBXFrameworksBuildPhase section */\n\t\t58B511D81A9E6C8500147676 /* Frameworks */ = {\n\t\t\tisa = PBXFrameworksBuildPhase;\n\t\t\tbuildActionMask = 2147483647;\n\t\t\tfiles = (\n\t\t\t);\n\t\t\trunOnlyForDeploymentPostprocessing = 0;\n\t\t};\n/* End PBXFrameworksBuildPhase section */\n\n/* Begin PBXGroup section */\n\t\t134814211AA4EA7D00B7C361 /* Products */ = {\n\t\t\tisa = PBXGroup;\n\t\t\tchildren = (\n\t\t\t\t134814201AA4EA6300B7C361 /* libVisionCameraFaceDetector.a */,\n\t\t\t);\n\t\t\tname = Products;\n\t\t\tsourceTree = \"<group>\";\n\t\t};\n\t\t58B511D21A9E6C8500147676 = {\n\t\t\tisa = PBXGroup;\n\t\t\tchildren = (\n\t\t\t\tF3D653FF2AE06A9C009F11D2 /* VisionCameraFaceDetector-Bridging-Header.h */,\n\t\t\t\tF3D654002AE06A9C009F11D2 /* VisionCameraFaceDetector.m */,\n\t\t\t\tF3D654012AE06A9C009F11D2 /* VisionCameraFaceDetector.swift */,\n\t\t\t\t134814211AA4EA7D00B7C361 /* Products */,\n\t\t\t);\n\t\t\tsourceTree = \"<group>\";\n\t\t};\n/* End PBXGroup section */\n\n/* Begin PBXNativeTarget section */\n\t\t58B511DA1A9E6C8500147676 /* VisionCameraFaceDetector */ = {\n\t\t\tisa = PBXNativeTarget;\n\t\t\tbuildConfigurationList = 58B511EF1A9E6C8500147676 /* Build configuration list for PBXNativeTarget \"VisionCameraFaceDetector\" */;\n\t\t\tbuildPhases = (\n\t\t\t\t58B511D71A9E6C8500147676 /* Sources */,\n\t\t\t\t58B511D81A9E6C8500147676 /* Frameworks */,\n\t\t\t\t58B511D91A9E6C8500147676 /* CopyFiles */,\n\t\t\t);\n\t\t\tbuildRules = (\n\t\t\t);\n\t\t\tdependencies = (\n\t\t\t);\n\t\t\tname = VisionCameraFaceDetector;\n\t\t\tproductName = RCTDataManager;\n\t\t\tproductReference = 134814201AA4EA6300B7C361 /* libVisionCameraFaceDetector.a */;\n\t\t\tproductType = 
\"com.apple.product-type.library.static\";\n\t\t};\n/* End PBXNativeTarget section */\n\n/* Begin PBXProject section */\n\t\t58B511D31A9E6C8500147676 /* Project object */ = {\n\t\t\tisa = PBXProject;\n\t\t\tattributes = {\n\t\t\t\tLastUpgradeCheck = 920;\n\t\t\t\tORGANIZATIONNAME = Facebook;\n\t\t\t\tTargetAttributes = {\n\t\t\t\t\t58B511DA1A9E6C8500147676 = {\n\t\t\t\t\t\tCreatedOnToolsVersion = 6.1.1;\n\t\t\t\t\t};\n\t\t\t\t};\n\t\t\t};\n\t\t\tbuildConfigurationList = 58B511D61A9E6C8500147676 /* Build configuration list for PBXProject \"VisionCameraFaceDetector\" */;\n\t\t\tcompatibilityVersion = \"Xcode 3.2\";\n\t\t\tdevelopmentRegion = English;\n\t\t\thasScannedForEncodings = 0;\n\t\t\tknownRegions = (\n\t\t\t\tEnglish,\n\t\t\t\ten,\n\t\t\t);\n\t\t\tmainGroup = 58B511D21A9E6C8500147676;\n\t\t\tproductRefGroup = 58B511D21A9E6C8500147676;\n\t\t\tprojectDirPath = \"\";\n\t\t\tprojectRoot = \"\";\n\t\t\ttargets = (\n\t\t\t\t58B511DA1A9E6C8500147676 /* VisionCameraFaceDetector */,\n\t\t\t);\n\t\t};\n/* End PBXProject section */\n\n/* Begin PBXSourcesBuildPhase section */\n\t\t58B511D71A9E6C8500147676 /* Sources */ = {\n\t\t\tisa = PBXSourcesBuildPhase;\n\t\t\tbuildActionMask = 2147483647;\n\t\t\tfiles = (\n\t\t\t\tF3D654022AE06AAB009F11D2 /* VisionCameraFaceDetector.m in Sources */,\n\t\t\t\tF3D654032AE06AAB009F11D2 /* VisionCameraFaceDetector.swift in Sources */,\n\t\t\t);\n\t\t\trunOnlyForDeploymentPostprocessing = 0;\n\t\t};\n/* End PBXSourcesBuildPhase section */\n\n/* Begin XCBuildConfiguration section */\n\t\t58B511ED1A9E6C8500147676 /* Debug */ = {\n\t\t\tisa = XCBuildConfiguration;\n\t\t\tbuildSettings = {\n\t\t\t\tALWAYS_SEARCH_USER_PATHS = NO;\n\t\t\t\tCLANG_CXX_LANGUAGE_STANDARD = \"gnu++0x\";\n\t\t\t\tCLANG_CXX_LIBRARY = \"libc++\";\n\t\t\t\tCLANG_ENABLE_MODULES = YES;\n\t\t\t\tCLANG_ENABLE_OBJC_ARC = YES;\n\t\t\t\tCLANG_WARN_BLOCK_CAPTURE_AUTORELEASING = YES;\n\t\t\t\tCLANG_WARN_BOOL_CONVERSION = YES;\n\t\t\t\tCLANG_WARN_COMMA = YES;\n\t\t\t\tCLANG_WARN_CONSTANT_CONVERSION = YES;\n\t\t\t\tCLANG_WARN_DIRECT_OBJC_ISA_USAGE = YES_ERROR;\n\t\t\t\tCLANG_WARN_EMPTY_BODY = YES;\n\t\t\t\tCLANG_WARN_ENUM_CONVERSION = YES;\n\t\t\t\tCLANG_WARN_INFINITE_RECURSION = YES;\n\t\t\t\tCLANG_WARN_INT_CONVERSION = YES;\n\t\t\t\tCLANG_WARN_NON_LITERAL_NULL_CONVERSION = YES;\n\t\t\t\tCLANG_WARN_OBJC_LITERAL_CONVERSION = YES;\n\t\t\t\tCLANG_WARN_OBJC_ROOT_CLASS = YES_ERROR;\n\t\t\t\tCLANG_WARN_RANGE_LOOP_ANALYSIS = YES;\n\t\t\t\tCLANG_WARN_STRICT_PROTOTYPES = YES;\n\t\t\t\tCLANG_WARN_SUSPICIOUS_MOVE = YES;\n\t\t\t\tCLANG_WARN_UNREACHABLE_CODE = YES;\n\t\t\t\tCLANG_WARN__DUPLICATE_METHOD_MATCH = YES;\n\t\t\t\tCOPY_PHASE_STRIP = NO;\n\t\t\t\tENABLE_STRICT_OBJC_MSGSEND = YES;\n\t\t\t\tENABLE_TESTABILITY = YES;\n\t\t\t\tGCC_C_LANGUAGE_STANDARD = gnu99;\n\t\t\t\tGCC_DYNAMIC_NO_PIC = NO;\n\t\t\t\tGCC_NO_COMMON_BLOCKS = YES;\n\t\t\t\tGCC_OPTIMIZATION_LEVEL = 0;\n\t\t\t\tGCC_PREPROCESSOR_DEFINITIONS = (\n\t\t\t\t\t\"DEBUG=1\",\n\t\t\t\t\t\"$(inherited)\",\n\t\t\t\t);\n\t\t\t\tGCC_SYMBOLS_PRIVATE_EXTERN = NO;\n\t\t\t\tGCC_WARN_64_TO_32_BIT_CONVERSION = YES;\n\t\t\t\tGCC_WARN_ABOUT_RETURN_TYPE = YES_ERROR;\n\t\t\t\tGCC_WARN_UNDECLARED_SELECTOR = YES;\n\t\t\t\tGCC_WARN_UNINITIALIZED_AUTOS = YES_AGGRESSIVE;\n\t\t\t\tGCC_WARN_UNUSED_FUNCTION = YES;\n\t\t\t\tGCC_WARN_UNUSED_VARIABLE = YES;\n\t\t\t\tIPHONEOS_DEPLOYMENT_TARGET = 13.0;\n\t\t\t\tMTL_ENABLE_DEBUG_INFO = YES;\n\t\t\t\tONLY_ACTIVE_ARCH = YES;\n\t\t\t\tSDKROOT = iphoneos;\n\t\t\t};\n\t\t\tname = Debug;\n\t\t};\n\t\t58B511EE1A9E6C8500147676 /* Release */ = 
{\n\t\t\tisa = XCBuildConfiguration;\n\t\t\tbuildSettings = {\n\t\t\t\tALWAYS_SEARCH_USER_PATHS = NO;\n\t\t\t\tCLANG_CXX_LANGUAGE_STANDARD = \"gnu++0x\";\n\t\t\t\tCLANG_CXX_LIBRARY = \"libc++\";\n\t\t\t\tCLANG_ENABLE_MODULES = YES;\n\t\t\t\tCLANG_ENABLE_OBJC_ARC = YES;\n\t\t\t\tCLANG_WARN_BLOCK_CAPTURE_AUTORELEASING = YES;\n\t\t\t\tCLANG_WARN_BOOL_CONVERSION = YES;\n\t\t\t\tCLANG_WARN_COMMA = YES;\n\t\t\t\tCLANG_WARN_CONSTANT_CONVERSION = YES;\n\t\t\t\tCLANG_WARN_DIRECT_OBJC_ISA_USAGE = YES_ERROR;\n\t\t\t\tCLANG_WARN_EMPTY_BODY = YES;\n\t\t\t\tCLANG_WARN_ENUM_CONVERSION = YES;\n\t\t\t\tCLANG_WARN_INFINITE_RECURSION = YES;\n\t\t\t\tCLANG_WARN_INT_CONVERSION = YES;\n\t\t\t\tCLANG_WARN_NON_LITERAL_NULL_CONVERSION = YES;\n\t\t\t\tCLANG_WARN_OBJC_LITERAL_CONVERSION = YES;\n\t\t\t\tCLANG_WARN_OBJC_ROOT_CLASS = YES_ERROR;\n\t\t\t\tCLANG_WARN_RANGE_LOOP_ANALYSIS = YES;\n\t\t\t\tCLANG_WARN_STRICT_PROTOTYPES = YES;\n\t\t\t\tCLANG_WARN_SUSPICIOUS_MOVE = YES;\n\t\t\t\tCLANG_WARN_UNREACHABLE_CODE = YES;\n\t\t\t\tCLANG_WARN__DUPLICATE_METHOD_MATCH = YES;\n\t\t\t\tCOPY_PHASE_STRIP = YES;\n\t\t\t\tENABLE_NS_ASSERTIONS = NO;\n\t\t\t\tENABLE_STRICT_OBJC_MSGSEND = YES;\n\t\t\t\tGCC_C_LANGUAGE_STANDARD = gnu99;\n\t\t\t\tGCC_NO_COMMON_BLOCKS = YES;\n\t\t\t\tGCC_WARN_64_TO_32_BIT_CONVERSION = YES;\n\t\t\t\tGCC_WARN_ABOUT_RETURN_TYPE = YES_ERROR;\n\t\t\t\tGCC_WARN_UNDECLARED_SELECTOR = YES;\n\t\t\t\tGCC_WARN_UNINITIALIZED_AUTOS = YES_AGGRESSIVE;\n\t\t\t\tGCC_WARN_UNUSED_FUNCTION = YES;\n\t\t\t\tGCC_WARN_UNUSED_VARIABLE = YES;\n\t\t\t\tIPHONEOS_DEPLOYMENT_TARGET = 13.0;\n\t\t\t\tMTL_ENABLE_DEBUG_INFO = NO;\n\t\t\t\tSDKROOT = iphoneos;\n\t\t\t\tVALIDATE_PRODUCT = YES;\n\t\t\t};\n\t\t\tname = Release;\n\t\t};\n\t\t58B511F01A9E6C8500147676 /* Debug */ = {\n\t\t\tisa = XCBuildConfiguration;\n\t\t\tbuildSettings = {\n\t\t\t\tHEADER_SEARCH_PATHS = (\n\t\t\t\t\t\"$(inherited)\",\n\t\t\t\t\t/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include,\n\t\t\t\t\t\"$(SRCROOT)/../../../React/**\",\n\t\t\t\t\t\"$(SRCROOT)/../../react-native/React/**\",\n\t\t\t\t);\n\t\t\t\tLIBRARY_SEARCH_PATHS = \"$(inherited)\";\n\t\t\t\tOTHER_LDFLAGS = \"-ObjC\";\n\t\t\t\tPRODUCT_NAME = VisionCameraFaceDetector;\n\t\t\t\tSKIP_INSTALL = YES;\n\t\t\t\tSWIFT_OBJC_BRIDGING_HEADER = \"VisionCameraFaceDetector-Bridging-Header.h\";\n\t\t\t\tSWIFT_OBJC_INTERFACE_HEADER_NAME = \"VisionCameraFaceDetector-Swift.h\";\n\t\t\t\tSWIFT_OPTIMIZATION_LEVEL = \"-Onone\";\n\t\t\t\tSWIFT_VERSION = 5.0;\n\t\t\t};\n\t\t\tname = Debug;\n\t\t};\n\t\t58B511F11A9E6C8500147676 /* Release */ = {\n\t\t\tisa = XCBuildConfiguration;\n\t\t\tbuildSettings = {\n\t\t\t\tHEADER_SEARCH_PATHS = (\n\t\t\t\t\t\"$(inherited)\",\n\t\t\t\t\t/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include,\n\t\t\t\t\t\"$(SRCROOT)/../../../React/**\",\n\t\t\t\t\t\"$(SRCROOT)/../../react-native/React/**\",\n\t\t\t\t);\n\t\t\t\tLIBRARY_SEARCH_PATHS = \"$(inherited)\";\n\t\t\t\tOTHER_LDFLAGS = \"-ObjC\";\n\t\t\t\tPRODUCT_NAME = VisionCameraFaceDetector;\n\t\t\t\tSKIP_INSTALL = YES;\n\t\t\t\tSWIFT_OBJC_BRIDGING_HEADER = \"VisionCameraFaceDetector-Bridging-Header.h\";\n\t\t\t\tSWIFT_OBJC_INTERFACE_HEADER_NAME = \"VisionCameraFaceDetector-Swift.h\";\n\t\t\t\tSWIFT_VERSION = 5.0;\n\t\t\t};\n\t\t\tname = Release;\n\t\t};\n/* End XCBuildConfiguration section */\n\n/* Begin XCConfigurationList section */\n\t\t58B511D61A9E6C8500147676 /* Build configuration list for PBXProject \"VisionCameraFaceDetector\" */ = {\n\t\t\tisa = 
XCConfigurationList;\n\t\t\tbuildConfigurations = (\n\t\t\t\t58B511ED1A9E6C8500147676 /* Debug */,\n\t\t\t\t58B511EE1A9E6C8500147676 /* Release */,\n\t\t\t);\n\t\t\tdefaultConfigurationIsVisible = 0;\n\t\t\tdefaultConfigurationName = Release;\n\t\t};\n\t\t58B511EF1A9E6C8500147676 /* Build configuration list for PBXNativeTarget \"VisionCameraFaceDetector\" */ = {\n\t\t\tisa = XCConfigurationList;\n\t\t\tbuildConfigurations = (\n\t\t\t\t58B511F01A9E6C8500147676 /* Debug */,\n\t\t\t\t58B511F11A9E6C8500147676 /* Release */,\n\t\t\t);\n\t\t\tdefaultConfigurationIsVisible = 0;\n\t\t\tdefaultConfigurationName = Release;\n\t\t};\n/* End XCConfigurationList section */\n\t};\n\trootObject = 58B511D31A9E6C8500147676 /* Project object */;\n}\n"
  },
  {
    "path": "ios/VisionCameraFaceDetector.xcodeproj/project.xcworkspace/xcshareddata/IDEWorkspaceChecks.plist",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!DOCTYPE plist PUBLIC \"-//Apple//DTD PLIST 1.0//EN\" \"http://www.apple.com/DTDs/PropertyList-1.0.dtd\">\n<plist version=\"1.0\">\n<dict>\n\t<key>IDEDidComputeMac32BitWarning</key>\n\t<true/>\n</dict>\n</plist>\n"
  },
  {
    "path": "ios/VisionCameraFaceDetectorOrientation.swift",
    "content": "import VisionCamera\nimport AVFoundation\nimport CoreMotion\nimport Foundation\nimport UIKit\n\nfinal class VisionCameraFaceDetectorOrientation {\n  private let motionManager = CMMotionManager()\n  private let operationQueue = OperationQueue()\n  \n  // The orientation of the physical device's gyro sensor/accelerometer\n  var orientation: Orientation {\n    didSet {\n      if oldValue != orientation {\n        print(\"Device Orientation changed from \\(oldValue) -> \\(orientation)\")\n      }\n    }\n  }\n    \n  init() {\n    // default value\n    orientation = .portrait\n    startDeviceOrientationListener()\n  }\n    \n  deinit {\n    stopDeviceOrientationListener()\n  }\n  \n  private func startDeviceOrientationListener() {\n    stopDeviceOrientationListener()\n    if motionManager.isAccelerometerAvailable {\n      motionManager.accelerometerUpdateInterval = 0.2\n      motionManager.startAccelerometerUpdates(to: operationQueue) { accelerometerData, error in\n        if let error {\n          print(\"Failed to get Accelerometer data! \\(error)\")\n        }\n        if let accelerometerData {\n          self.orientation = accelerometerData.deviceOrientation\n        }\n      }\n    }\n  }\n  \n  private func stopDeviceOrientationListener() {\n    if motionManager.isAccelerometerActive {\n      motionManager.stopAccelerometerUpdates()\n    }\n  }\n}\n\nextension CMAccelerometerData {\n  /**\n   Get the current device orientation from the given acceleration/gyro data.\n   */\n  var deviceOrientation: Orientation {\n    let acceleration = acceleration\n    let xNorm = abs(acceleration.x)\n    let yNorm = abs(acceleration.y)\n    let zNorm = abs(acceleration.z)\n\n    // If the z-axis is greater than the other axes, the phone is flat.\n    if zNorm > xNorm && zNorm > yNorm {\n      return .portrait\n    }\n\n    if xNorm > yNorm {\n      if acceleration.x > 0 {\n        return .landscapeRight\n      } else {\n        return .landscapeLeft\n      }\n    } else {\n      if acceleration.y > 0 {\n        return .portraitUpsideDown\n      } else {\n        return .portrait\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "package.json",
    "content": "{\n  \"name\": \"react-native-vision-camera-face-detector\",\n  \"version\": \"1.10.2\",\n  \"description\": \"Frame Processor Plugin to detect faces using MLKit Vision Face Detector for React Native Vision Camera!\",\n  \"main\": \"lib/commonjs/index\",\n  \"module\": \"lib/module/index\",\n  \"types\": \"lib/typescript/src/index.d.ts\",\n  \"react-native\": \"src/index\",\n  \"source\": \"src/index\",\n  \"files\": [\n    \"src\",\n    \"lib\",\n    \"!**/__tests__\",\n    \"!**/__fixtures__\",\n    \"!**/__mocks__\",\n    \"android\",\n    \"ios\",\n    \"cpp\",\n    \"VisionCameraFaceDetector.podspec\",\n    \"!android/build\",\n    \"!ios/build\"\n  ],\n  \"scripts\": {\n    \"typescript\": \"tsc --noEmit\",\n    \"lint\": \"eslint \\\"**/*.{js,ts,tsx}\\\"\",\n    \"prepare\": \"bob build\",\n    \"release\": \"release-it\",\n    \"example\": \"yarn --cwd example\",\n    \"bootstrap\": \"yarn example && yarn install && yarn example pods\"\n  },\n  \"keywords\": [\n    \"vision-camera\",\n    \"face-detector\",\n    \"face-detection\",\n    \"frame-processor\",\n    \"react-native\"\n  ],\n  \"repository\": {\n    \"type\": \"git\",\n    \"url\": \"https://github.com/luicfrr/react-native-vision-camera-face-detector\"\n  },\n  \"author\": \"luicfrr\",\n  \"license\": \"MIT\",\n  \"bugs\": {\n    \"url\": \"https://github.com/luicfrr/react-native-vision-camera-face-detector\"\n  },\n  \"homepage\": \"https://github.com/luicfrr/react-native-vision-camera-face-detector\",\n  \"publishConfig\": {\n    \"registry\": \"https://registry.npmjs.org/\"\n  },\n  \"devDependencies\": {\n    \"@react-native-community/eslint-config\": \"^3.2.0\",\n    \"@release-it/conventional-changelog\": \"^10.0.1\",\n    \"@tsconfig/react-native\": \"^3.0.7\",\n    \"@types/react\": \"^19\",\n    \"eslint\": \"^9.37.0\",\n    \"eslint-config-prettier\": \"^10.1.8\",\n    \"eslint-plugin-prettier\": \"^5.5.4\",\n    \"prettier\": \"^3.6.2\",\n    \"react\": \"^19\",\n    \"react-native\": \"0.79.6\",\n    \"react-native-builder-bob\": \"^0.40.13\",\n    \"react-native-vision-camera\": \"4.7.2\",\n    \"react-native-worklets-core\": \"^1.6.2\",\n    \"release-it\": \"^19.0.5\",\n    \"typescript\": \"~5.9.3\"\n  },\n  \"peerDependencies\": {\n    \"react\": \">= 18\",\n    \"react-native\": \">= 0.74\",\n    \"react-native-vision-camera\": \">= 4.0\"\n  },\n  \"jest\": {\n    \"preset\": \"react-native\",\n    \"modulePathIgnorePatterns\": [\n      \"<rootDir>/lib/\"\n    ]\n  },\n  \"release-it\": {\n    \"git\": {\n      \"commitMessage\": \"chore: release ${version}\",\n      \"tagName\": \"v${version}\"\n    },\n    \"npm\": {\n      \"publish\": true\n    },\n    \"github\": {\n      \"release\": true\n    },\n    \"publishConfig\": {\n      \"registry\": \"https://registry.npmjs.org\"\n    }\n  },\n  \"eslintConfig\": {\n    \"root\": true,\n    \"extends\": [\n      \"@react-native-community\",\n      \"prettier\"\n    ],\n    \"rules\": {\n      \"prettier/prettier\": [\n        \"error\",\n        {\n          \"quoteProps\": \"consistent\",\n          \"singleQuote\": true,\n          \"tabWidth\": 2,\n          \"trailingComma\": \"es5\",\n          \"useTabs\": false\n        }\n      ]\n    }\n  },\n  \"eslintIgnore\": [\n    \"node_modules/\",\n    \"lib/\"\n  ],\n  \"prettier\": {\n    \"quoteProps\": \"consistent\",\n    \"singleQuote\": true,\n    \"tabWidth\": 2,\n    \"trailingComma\": \"es5\",\n    \"useTabs\": false\n  },\n  \"react-native-builder-bob\": {\n    \"source\": 
\"src\",\n    \"output\": \"lib\",\n    \"targets\": [\n      \"commonjs\",\n      \"module\",\n      \"typescript\"\n    ]\n  },\n  \"directories\": {\n    \"lib\": \"lib\"\n  }\n}\n"
  },
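All public APIs resolve through the entry points declared above (`main`, `module`, `types`), so consumers import everything from the package root. A minimal sketch of the root imports, matching the re-exports in `src/index.ts` further below; note that the peer dependencies (`react-native-vision-camera` >= 4.0) must be installed alongside, and `react-native-worklets-core` (listed here only as a devDependency) is typically installed too, since the `<Camera>` wrapper uses it at runtime:

```tsx
// root imports, as re-exported by src/index.ts
import {
  Camera,           // VisionCamera wrapper with face detection
  useFaceDetector,  // frame-processor plugin hook
  detectFaces,      // static-image detection
  type Face,
  type FrameFaceDetectionOptions
} from 'react-native-vision-camera-face-detector'
```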
  {
    "path": "scripts/bootstrap.js",
    "content": "const os = require(\"os\");\nconst path = require(\"path\");\nconst child_process = require(\"child_process\");\n\nconst root = path.resolve(__dirname, \"..\");\nconst args = process.argv.slice(2);\nconst options = {\n  cwd: process.cwd(),\n  env: process.env,\n  stdio: \"inherit\",\n  encoding: \"utf-8\",\n};\n\nif (os.type() === \"Windows_NT\") {\n  options.shell = true;\n}\n\nlet result;\n\nif (process.cwd() !== root || args.length) {\n  // We're not in the root of the project, or additional arguments were passed\n  // In this case, forward the command to `yarn`\n  result = child_process.spawnSync(\"yarn\", args, options);\n} else {\n  // If `yarn` is run without arguments, perform bootstrap\n  result = child_process.spawnSync(\"yarn\", [\"bootstrap\"], options);\n}\n\nprocess.exitCode = result.status;\n"
  },
  {
    "path": "src/Camera.tsx",
    "content": "import React, {\n  useEffect\n} from 'react'\nimport {\n  Camera as VisionCamera,\n  // runAsync,\n  useFrameProcessor,\n  useSkiaFrameProcessor\n} from 'react-native-vision-camera'\nimport {\n  Worklets,\n  // useRunOnJS,\n  useSharedValue\n} from 'react-native-worklets-core'\nimport { useFaceDetector } from './FaceDetector'\n\n// types\nimport type {\n  DependencyList,\n  RefObject\n} from 'react'\nimport type {\n  CameraProps,\n  DrawableFrame,\n  Frame,\n  FrameInternal\n} from 'react-native-vision-camera'\nimport type {\n  Face,\n  FrameFaceDetectionOptions\n} from './FaceDetector'\n\ntype UseWorkletType = (\n  frame: FrameInternal\n) => Promise<void>\n\ntype UseRunInJSType = (\n  faces: Face[],\n  frame: Frame\n) => Promise<void | Promise<void>>\n\ntype CallbackType = (\n  faces: Face[],\n  frame: Frame\n) => void | Promise<void>\n\ntype ComponentType = {\n  ref: RefObject<VisionCamera | null>\n  faceDetectionOptions?: FrameFaceDetectionOptions\n  faceDetectionCallback: CallbackType\n  skiaActions?: (\n    faces: Face[],\n    frame: DrawableFrame\n  ) => void | Promise<void>\n} & CameraProps\n\n/**\n * Create a Worklet function that persists between re-renders.\n * The returned function can be called from both a Worklet context and the JS context, but will execute on a Worklet context.\n *\n * @param {function} func The Worklet. Must be marked with the `'worklet'` directive.\n * @param {DependencyList} dependencyList The React dependencies of this Worklet.\n * @returns {UseWorkletType} A memoized Worklet\n */\nfunction useWorklet(\n  func: ( frame: FrameInternal ) => void,\n  dependencyList: DependencyList\n): UseWorkletType {\n  const worklet = React.useMemo( () => {\n    const context = Worklets.defaultContext\n    return context.createRunAsync( func )\n  }, dependencyList )\n\n  return worklet\n}\n\n/**\n * Create a Worklet function that runs the giver function on JS context.\n * The returned function can be called from a Worklet to hop back to the JS thread.\n * \n * @param {function} func The Worklet. Must be marked with the `'worklet'` directive.\n * @param {DependencyList} dependencyList The React dependencies of this Worklet.\n * @returns {UseRunInJSType} a memoized Worklet\n */\nfunction useRunInJS(\n  func: CallbackType,\n  dependencyList: DependencyList\n): UseRunInJSType {\n  return React.useMemo( () => (\n    Worklets.createRunOnJS( func )\n  ), dependencyList )\n}\n\n/**\n * Vision camera wrapper\n * \n * @param {ComponentType} props Camera + face detection props \n * @returns \n */\nexport function Camera( {\n  ref,\n  faceDetectionOptions,\n  faceDetectionCallback,\n  skiaActions,\n  ...props\n}: ComponentType ) {\n  /** \n   * Is there an async task already running?\n   */\n  const isAsyncContextBusy = useSharedValue( false )\n  const faces = useSharedValue<string>( '[]' )\n  const {\n    detectFaces,\n    stopListeners\n  } = useFaceDetector( faceDetectionOptions )\n\n  useEffect( () => {\n    return () => stopListeners()\n  }, [] )\n\n  /** \n   * Throws logs/errors back on js thread\n   */\n  const logOnJs = Worklets.createRunOnJS( (\n    log: string,\n    error?: Error\n  ) => {\n    if ( error ) {\n      console.error( log, error.message ?? 
JSON.stringify( error ) )\n    } else {\n      console.log( log )\n    }\n  } )\n\n  /**\n   * Runs on detection callback on js thread\n   */\n  const runOnJs = useRunInJS( faceDetectionCallback, [\n    faceDetectionCallback\n  ] )\n\n  /**\n   * Async context that will handle face detection\n   */\n  const runOnAsyncContext = useWorklet( (\n    frame: FrameInternal\n  ) => {\n    'worklet'\n    try {\n      faces.value = JSON.stringify(\n        detectFaces( frame )\n      )\n      // increment frame count so we can use frame on \n      // js side without frame processor getting stuck\n      frame.incrementRefCount()\n      runOnJs(\n        JSON.parse(\n          faces.value\n        ), frame\n      ).finally( () => {\n        'worklet'\n        // finally decrement frame count so it can be dropped\n        frame.decrementRefCount()\n      } )\n    } catch ( error: any ) {\n      logOnJs( 'Execution error:', error )\n    } finally {\n      frame.decrementRefCount()\n      isAsyncContextBusy.value = false\n    }\n  }, [\n    detectFaces,\n    runOnJs\n  ] )\n\n  /**\n   * Detect faces on frame on an async context without blocking camera preview\n   * \n   * @param {Frame} frame Current frame\n   */\n  function runAsync( frame: Frame | DrawableFrame ) {\n    'worklet'\n    if ( isAsyncContextBusy.value ) return\n    // set async context as busy\n    isAsyncContextBusy.value = true\n    // cast to internal frame and increment ref count\n    const internal = frame as FrameInternal\n    internal.incrementRefCount()\n    // detect faces in async context\n    runOnAsyncContext( internal )\n  }\n\n  /**\n   * Skia frame processor\n   */\n  const skiaFrameProcessor = useSkiaFrameProcessor( ( frame ) => {\n    'worklet'\n    frame.render()\n    skiaActions!( JSON.parse(\n      faces.value\n    ), frame )\n    runAsync( frame )\n  }, [\n    runOnAsyncContext,\n    skiaActions\n  ] )\n\n  /**\n   * Default frame processor\n   */\n  const cameraFrameProcessor = useFrameProcessor( ( frame ) => {\n    'worklet'\n    runAsync( frame )\n  }, [ runOnAsyncContext ] )\n\n  /**\n   * Camera frame processor\n   */\n  const frameProcessor = ( () => {\n    const { autoMode } = faceDetectionOptions ?? {}\n\n    if (\n      !autoMode &&\n      !!skiaActions\n    ) return skiaFrameProcessor\n\n    return cameraFrameProcessor\n  } )()\n\n  //\n  // use bellow when vision-camera's  \n  // context creation issue is solved\n  //\n  // /**\n  //  * Runs on detection callback on js thread\n  //  */\n  // const runOnJs = useRunOnJS( faceDetectionCallback, [\n  //   faceDetectionCallback\n  // ] )\n\n  // const cameraFrameProcessor = useFrameProcessor( ( frame ) => {\n  //   'worklet'\n  //   runAsync( frame, () => {\n  //     'worklet'\n  //     runOnJs(\n  //       detectFaces( frame ),\n  //       frame\n  //     )\n  //   } )\n  // }, [ runOnJs ] )\n\n  return <VisionCamera\n    { ...props }\n    ref={ ref }\n    frameProcessor={ frameProcessor }\n    pixelFormat='yuv'\n  />\n}\n"
  },
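A hedged usage sketch of the wrapper above. The `skiaActions` callback strokes the last known face bounds on every frame through the Skia frame processor path, which (per the `frameProcessor` selection logic) is only taken when `autoMode` is off and `skiaActions` is provided. The drawing calls assume `@shopify/react-native-skia` is installed; the component name and paint setup are illustrative, not part of this package:

```tsx
import React, { useRef } from 'react'
import { StyleSheet } from 'react-native'
import { PaintStyle, Skia } from '@shopify/react-native-skia'
import {
  Camera as VisionCamera,
  useCameraDevice
} from 'react-native-vision-camera'
import {
  Camera,
  type Face
} from 'react-native-vision-camera-face-detector'

export function FaceBoundsExample() {
  const ref = useRef<VisionCamera>( null )
  const device = useCameraDevice( 'front' )

  if ( !device ) return null
  return (
    <Camera
      ref={ ref }
      style={ StyleSheet.absoluteFill }
      device={ device }
      isActive={ true }
      faceDetectionCallback={ ( faces: Face[] ) => {
        console.log( `detected ${ faces.length } face(s)` )
      } }
      skiaActions={ ( faces, frame ) => {
        'worklet'
        // stroke a rectangle around every detected face
        const paint = Skia.Paint()
        paint.setStyle( PaintStyle.Stroke )
        paint.setStrokeWidth( 4 )
        paint.setColor( Skia.Color( 'red' ) )
        for ( const face of faces ) {
          const { x, y, width, height } = face.bounds
          frame.drawRect( Skia.XYWHRect( x, y, width, height ), paint )
        }
      } }
    />
  )
}
```

Since `skiaActions` is invoked inside the Skia frame processor worklet, it must itself carry the `'worklet'` directive, as shown.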
  {
    "path": "src/FaceDetector.ts",
    "content": "import { useMemo } from 'react'\nimport {\n  Platform,\n  NativeModules\n} from 'react-native'\nimport {\n  VisionCameraProxy,\n  type CameraPosition,\n  type Frame\n} from 'react-native-vision-camera'\n\ntype FaceDetectorPlugin = {\n  /**\n   * Detect faces on frame\n   * \n   * @param {Frame} frame Frame to detect faces\n   */\n  detectFaces: ( frame: Frame ) => Face[]\n  /**\n   * Stop orientation listeners for Android.\n   * Does nothing for IOS.\n   * \n   * @returns {void}\n   */\n  stopListeners: () => void\n}\n\ntype Point = {\n  x: number\n  y: number\n}\n\nexport interface Face {\n  pitchAngle: number\n  rollAngle: number\n  yawAngle: number\n  bounds: Bounds\n  leftEyeOpenProbability: number\n  rightEyeOpenProbability: number\n  smilingProbability: number\n  contours?: Contours\n  landmarks?: Landmarks\n  trackingId?: number\n}\n\nexport interface Bounds {\n  width: number\n  height: number\n  x: number\n  y: number\n}\n\nexport interface Contours {\n  FACE: Point[]\n  LEFT_EYEBROW_TOP: Point[]\n  LEFT_EYEBROW_BOTTOM: Point[]\n  RIGHT_EYEBROW_TOP: Point[]\n  RIGHT_EYEBROW_BOTTOM: Point[]\n  LEFT_EYE: Point[]\n  RIGHT_EYE: Point[]\n  UPPER_LIP_TOP: Point[]\n  UPPER_LIP_BOTTOM: Point[]\n  LOWER_LIP_TOP: Point[]\n  LOWER_LIP_BOTTOM: Point[]\n  NOSE_BRIDGE: Point[]\n  NOSE_BOTTOM: Point[]\n  LEFT_CHEEK: Point[]\n  RIGHT_CHEEK: Point[]\n}\n\nexport interface Landmarks {\n  LEFT_CHEEK: Point\n  LEFT_EAR: Point\n  LEFT_EYE: Point\n  MOUTH_BOTTOM: Point\n  MOUTH_LEFT: Point\n  MOUTH_RIGHT: Point\n  NOSE_BASE: Point\n  RIGHT_CHEEK: Point\n  RIGHT_EAR: Point\n  RIGHT_EYE: Point\n}\n\nexport interface CommonFaceDetectionOptions {\n  /**\n   * Favor speed or accuracy when detecting faces.\n   *\n   * @default 'fast'\n   */\n  performanceMode?: 'fast' | 'accurate'\n\n  /**\n   * Whether to attempt to identify facial 'landmarks': eyes, ears, nose, cheeks, mouth, and so on.\n   *\n   * @default 'none'\n   */\n  landmarkMode?: 'none' | 'all'\n\n  /**\n   * Whether to detect the contours of facial features. Contours are detected for only the most prominent face in an image.\n   *\n   * @default 'none'\n   */\n  contourMode?: 'none' | 'all'\n\n  /**\n   * Whether or not to classify faces into categories such as 'smiling', and 'eyes open'.\n   *\n   * @default 'none'\n   */\n  classificationMode?: 'none' | 'all'\n\n  /**\n   * Sets the smallest desired face size, expressed as the ratio of the width of the head to width of the image.\n   *\n   * @default 0.15\n   */\n  minFaceSize?: number\n\n  /**\n   * Whether or not to assign faces an ID, which can be used to track faces across images.\n   *\n   * Note that when contour detection is enabled, only one face is detected, so face tracking doesn't produce useful results. For this reason, and to improve detection speed, don't enable both contour detection and face tracking.\n   *\n   * @default false\n   */\n  trackingEnabled?: boolean\n}\n\nexport interface FrameFaceDetectionOptions\n  extends CommonFaceDetectionOptions {\n  /**\n   * Current active camera\n   * \n   * @default front\n   */\n  cameraFacing?: CameraPosition\n\n  /**\n   * Should handle auto scale (face bounds, contour and landmarks) and rotation on native side? 
\n   * This option should be disabled if you want to draw on frame using `Skia Frame Processor`.\n   * See [this](https://github.com/luicfrr/react-native-vision-camera-face-detector/issues/30#issuecomment-2058805546) and [this](https://github.com/luicfrr/react-native-vision-camera-face-detector/issues/35) for more details. \n   * \n   * @default false\n   */\n  autoMode?: boolean\n\n  /**\n   * Required if you want to use `autoMode`. You must handle your own logic to get screen sizes, with or without statusbar size, etc...\n   * \n   * @default 1.0\n   */\n  windowWidth?: number\n\n  /**\n   * Required if you want to use `autoMode`. You must handle your own logic to get screen sizes, with or without statusbar size, etc...\n   * \n   * @default 1.0\n   */\n  windowHeight?: number\n}\n\n/**\n * Create a new instance of face detector plugin\n * \n * @param {FrameFaceDetectionOptions | undefined} options Detection options\n * @returns {FaceDetectorPlugin} Plugin instance\n */\nfunction createFaceDetectorPlugin(\n  options?: FrameFaceDetectionOptions\n): FaceDetectorPlugin {\n  const plugin = VisionCameraProxy.initFrameProcessorPlugin( 'detectFaces', {\n    ...options\n  } )\n\n  if ( !plugin ) {\n    throw new Error( 'Failed to load Frame Processor Plugin \"detectFaces\"!' )\n  }\n\n  return {\n    detectFaces: (\n      frame: Frame\n    ): Face[] => {\n      'worklet'\n      // @ts-ignore\n      return plugin.call( frame ) as Face[]\n    },\n    stopListeners: () => {\n      if ( Platform.OS !== 'android' ) return\n\n      const { VisionCameraFaceDetectorOrientationManager } = NativeModules\n      VisionCameraFaceDetectorOrientationManager?.stopDeviceOrientationListener()\n    }\n  }\n}\n\n/**\n * Use an instance of face detector plugin.\n * \n * @param {FrameFaceDetectionOptions | undefined} options Detection options\n * @returns {FaceDetectorPlugin} Memoized plugin instance that will be \n * destroyed once the component using `useFaceDetector()` unmounts.\n */\nexport function useFaceDetector(\n  options?: FrameFaceDetectionOptions\n): FaceDetectorPlugin {\n  return useMemo( () => (\n    createFaceDetectorPlugin( options )\n  ), [ options ] )\n}\n"
  },
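A minimal sketch of using `useFaceDetector()` directly in a custom frame processor, without the `<Camera>` wrapper. The hook name `useFaceFrameProcessor` and the option values are illustrative; the `useRef` pattern is there because, as the JSDoc above notes, the plugin is memoized on the options object's identity:

```tsx
import { useRef } from 'react'
import { useFrameProcessor } from 'react-native-vision-camera'
import {
  useFaceDetector,
  type FrameFaceDetectionOptions
} from 'react-native-vision-camera-face-detector'

export function useFaceFrameProcessor() {
  // stable options reference so the useMemo inside
  // useFaceDetector() doesn't recreate the plugin each render
  const options = useRef<FrameFaceDetectionOptions>( {
    performanceMode: 'fast',
    classificationMode: 'all'
  } ).current
  const { detectFaces } = useFaceDetector( options )

  return useFrameProcessor( ( frame ) => {
    'worklet'
    const faces = detectFaces( frame )
    if ( faces.length > 0 ) {
      console.log( 'smiling probability:', faces[ 0 ]?.smilingProbability )
    }
  }, [ detectFaces ] )
}
```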
  {
    "path": "src/ImageFaceDetector.ts",
    "content": "import {\n  Image,\n  NativeModules\n} from 'react-native'\nimport type {\n  Face,\n  CommonFaceDetectionOptions\n} from './FaceDetector'\n\ntype InputImage = number | string | { uri: string }\nexport interface ImageFaceDetectionOptions {\n  image: InputImage,\n  options?: CommonFaceDetectionOptions\n}\n\n/**\n * Resolves input image\n * \n * @param {InputImage} image Image path\n * @returns {string} Resolved image\n */\nfunction resolveUri( image: InputImage ): string {\n  const uri = ( () => {\n    switch ( typeof image ) {\n      case 'number': {\n        const source = Image.resolveAssetSource( image )\n        return source?.uri\n      }\n      case 'string': {\n        return image\n      }\n      case 'object': {\n        return image?.uri\n      }\n      default: {\n        return undefined\n      }\n    }\n  } )()\n\n  if ( !uri ) throw new Error( 'Unable to resolve image' )\n  return uri\n}\n\n/**\n * Detect faces in a static image\n * \n * @param {InputImage} image Image path\n * @returns {Promise<Face[]>} List of detected faces\n */\nexport async function detectFaces( {\n  image,\n  options\n}: ImageFaceDetectionOptions ): Promise<Face[]> {\n  const uri = resolveUri( image )\n  // @ts-ignore\n  const { ImageFaceDetector } = NativeModules\n  return await ImageFaceDetector?.detectFaces(\n    uri,\n    options\n  )\n}\n"
  },
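A short sketch of static-image detection using the function above. All three `InputImage` shapes accepted by `resolveUri` work here (a `require()` asset, a URI string, or a `{ uri }` object); the file path and wrapper function are hypothetical:

```tsx
import { detectFaces } from 'react-native-vision-camera-face-detector'

async function countFacesInPhoto(): Promise<number> {
  // hypothetical local file URI, for illustration only
  const faces = await detectFaces( {
    image: { uri: 'file:///tmp/photo.jpg' },
    options: {
      performanceMode: 'accurate',
      landmarkMode: 'all'
    }
  } )
  return faces.length
}
```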
  {
    "path": "src/index.ts",
    "content": "export * from './Camera'\nexport * from './FaceDetector'\nexport * from './ImageFaceDetector'\n"
  },
  {
    "path": "tsconfig.build.json",
    "content": "{\n  \"extends\": \"./tsconfig\",\n  \"exclude\": [\n    \"example\"\n  ]\n}\n"
  },
  {
    "path": "tsconfig.json",
    "content": "{\n  \"compilerOptions\": {\n    \"rootDir\": \".\",\n    \"paths\": {\n      \"react-native-vision-camera-face-detector\": [\n        \"./src/index\"\n      ]\n    },\n    \"allowUnreachableCode\": false,\n    \"allowUnusedLabels\": false,\n    \"esModuleInterop\": true,\n    \"forceConsistentCasingInFileNames\": true,\n    \"jsx\": \"react\",\n    \"lib\": [\n      \"esnext\"\n    ],\n    \"module\": \"esnext\",\n    \"moduleResolution\": \"node\",\n    \"noFallthroughCasesInSwitch\": true,\n    \"noImplicitReturns\": true,\n    \"noImplicitUseStrict\": false,\n    \"noStrictGenericChecks\": false,\n    \"noUncheckedIndexedAccess\": true,\n    \"noUnusedLocals\": true,\n    \"noUnusedParameters\": true,\n    \"resolveJsonModule\": true,\n    \"skipLibCheck\": true,\n    \"strict\": true,\n    \"target\": \"esnext\",\n    \"verbatimModuleSyntax\": true\n  },\n  \"exclude\": [\n    \"example/**\"\n  ]\n}\n"
  }
]