[
  {
    "path": ".editorconfig",
    "content": "# This file is for unifying the coding style of different editors and IDEs.\n# editorconfig.org\n\nroot = true\n\n[*]\ncharset = utf-8\nend_of_line = lf\nindent_size = 4\nindent_style = space\ninsert_final_newline = true\ntrim_trailing_whitespace = true\n\n[*.json]\nindent_size = 2\n"
  },
  {
    "path": ".github/CONTRIBUTING.md",
    "content": "# Contributing\n\nWe love pull requests from everyone.\n\n[Fork](https://help.github.com/articles/fork-a-repo/), then\n[clone](https://help.github.com/articles/cloning-a-repository/) the repo:\n\n```\ngit clone git@github.com:your-username/SpeechRecognitionPlugin.git\n```\n\nSet up a branch for your feature or bugfix with a link to the original repo:\n\n```\ngit checkout -b my-awesome-new-feature\ngit push --set-upstream origin my-awesome-new-feature\ngit remote add upstream https://github.com/macdonst/SpeechRecognitionPlugin.git\n```\n\nSet up the project:\n\n```\nnpm install\n```\n\nMake sure the tests pass before changing anything:\n\n```\nnpm test\n```\n\nMake your change. Add tests for your change. Make the tests pass:\n\n```\nnpm test\n```\n\nCommit changes:\n\n```\ngit commit -m \"Cool stuff\"\n```\n\nConsider starting the commit message with an applicable emoji:\n\n* :art: `:art:` when improving the format/structure of the code\n* :zap: `:zap:` when improving performance\n* :non-potable_water: `:non-potable_water:` when plugging memory leaks\n* :memo: `:memo:` when writing docs\n* :ambulance: `:ambulance:` a critical hotfix.\n* :sparkles: `:sparkles:` when introducing new features\n* :bookmark: `:bookmark:` when releasing / version tags\n* :rocket: `:rocket:` when deploying stuff\n* :penguin: `:penguin:` when fixing something on Android\n* :apple: `:apple:` when fixing something on iOS\n* :checkered_flag: `:checkered_flag:` when fixing something on Windows\n* :bug: `:bug:` when fixing a bug\n* :fire: `:fire:` when removing code or files\n* :green_heart: `:green_heart:` when fixing the CI build\n* :white_check_mark: `:white_check_mark:` when adding tests\n* :lock: `:lock:` when dealing with security\n* :arrow_up: `:arrow_up:` when upgrading dependencies\n* :arrow_down: `:arrow_down:` when downgrading dependencies\n* :shirt: `:shirt:` when removing linter warnings\n* :hammer: `:hammer:` when doing heavy refactoring\n* :heavy_minus_sign: 
`:heavy_minus_sign:` when removing a dependency\n* :heavy_plus_sign: `:heavy_plus_sign:` when adding a dependency\n* :wrench: `:wrench:` when changing configuration files\n* :globe_with_meridians: `:globe_with_meridians:` when dealing with\n  internationalization and localization\n* :pencil2: `:pencil2:` when fixing typos\n* :hankey: `:hankey:` when writing bad code that needs to be improved\n* :package: `:package:` when updating compiled files or packages\n\nMake sure your branch is up to date with the original repo:\n\n```\ngit fetch upstream\ngit merge upstream/master\n```\n\nReview your changes and any possible conflicts and push to your fork:\n\n```\ngit push origin\n```\n\n[Submit a pull request](https://help.github.com/articles/creating-a-pull-request/).\n\nAt this point you're waiting on us. We do our best to keep on top of all the\npull requests. We may suggest some changes, improvements or alternatives.\n\nSome things that will increase the chance that your pull request is accepted:\n\n* Write tests.\n* Write a [good commit message](http://chris.beams.io/posts/git-commit/).\n* Make sure the PR merges cleanly with the latest master.\n* Describe your feature/bugfix and why it's needed/important in the pull request\n  description.\n\n## Editor Config\n\nThe project uses [.editorconfig](http://editorconfig.org/) to define the coding\nstyle of each file. We recommend that you install the EditorConfig extension\nfor your preferred IDE. Consistency is key.\n\n## ESLint\n\nThe project uses [ESLint](http://eslint.org/) to define the JavaScript coding\nconventions. Most editors now have an ESLint add-on to provide on-save or on-edit\nlinting.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE.md",
    "content": "### Expected Behaviour\n\n### Actual Behaviour\n\n### Reproduce Scenario (including but not limited to)\n\n#### Steps to Reproduce\n\n#### Platform and Version (e.g. Android 5.0 or iOS 9.2.1)\n\n#### (Android) What device vendor (e.g. Samsung, HTC, Sony...)\n\n#### Cordova CLI version and cordova platform version\n\n    cordova --version                                    # e.g. 6.0.0\n    cordova platform version android                     # e.g. 4.1.1\n\n#### Plugin version\n\n    cordova plugin ls | grep phonegap-plugin-speech-recognition   # e.g. 1.5.3\n\n#### Sample Code that illustrates the problem\n\n#### Logs taken while reproducing problem\n"
  },
  {
    "path": ".github/PULL_REQUEST_TEMPLATE.md",
    "content": "<!--- Provide a general summary of your changes in the Title above -->\n\n## Description\n<!--- Describe your changes in detail -->\n\n## Related Issue\n<!--- This project only accepts pull requests related to open issues -->\n<!--- If suggesting a new feature or change, please discuss it in an issue first -->\n<!--- If fixing a bug, there should be an issue describing it with steps to reproduce -->\n<!--- Please link to the issue here: -->\n\n## Motivation and Context\n<!--- Why is this change required? What problem does it solve? -->\n\n## How Has This Been Tested?\n<!--- Please describe in detail how you tested your changes. -->\n<!--- Include details of your testing environment, and the tests you ran to -->\n<!--- see how your change affects other areas of the code, etc. -->\n\n## Screenshots (if appropriate):\n\n## Types of changes\n<!--- What types of changes does your code introduce? Put an `x` in all the boxes that apply: -->\n- [ ] Bug fix (non-breaking change which fixes an issue)\n- [ ] New feature (non-breaking change which adds functionality)\n- [ ] Breaking change (fix or feature that would cause existing functionality to change)\n\n## Checklist:\n<!--- Go over all the following points, and put an `x` in all the boxes that apply. -->\n<!--- If you're unsure about any of these, don't hesitate to ask. We're here to help! -->\n- [ ] My code follows the code style of this project.\n- [ ] My change requires a change to the documentation.\n- [ ] I have updated the documentation accordingly.\n- [ ] I have read the **CONTRIBUTING** document.\n- [ ] I have added tests to cover my changes.\n- [ ] All new and existing tests passed.\n"
  },
  {
    "path": ".gitignore",
    "content": "# built application files\n*.apk\n*.ap_\n\n# files for the dex VM\n*.dex\n\n# Java class files\n*.class\n\n# generated files\nbin/\ngen/\n\n# Local configuration file (sdk path, etc)\nlocal.properties\n\n# Eclipse project files\n.classpath\n.project\n\n.DS_Store\n/node_modules/\n"
  },
  {
    "path": ".jshintrc",
    "content": "{\n    \"asi\": false,\n    \"boss\": false,\n    \"camelcase\": true,\n    \"curly\": true,\n    \"eqeqeq\": true,\n    \"eqnull\": false,\n    \"es5\": false,\n    \"evil\": false,\n    \"expr\": false,\n    \"forin\": true,\n    \"funcscope\": false,\n    \"jasmine\": true,\n    \"immed\": true,\n    \"indent\": 4,\n    \"latedef\": true,\n    \"loopfunc\": false,\n    \"maxerr\": 7,\n    \"newcap\": true,\n    \"node\": true,\n    \"nonew\": true,\n    \"plusplus\": false,\n    \"quotmark\": \"single\",\n    \"shadow\": false,\n    \"strict\": false,\n    \"supernew\": false,\n    \"trailing\": true,\n    \"undef\": true,\n    \"white\": true\n}\n"
  },
  {
    "path": "CHANGELOG.md",
    "content": "# Change Log\n"
  },
  {
    "path": "LICENSE",
    "content": "The MIT License (MIT)\n\nCopyright (c) 2013 macdonst\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of\nthis software and associated documentation files (the \"Software\"), to deal in\nthe Software without restriction, including without limitation the rights to\nuse, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of\nthe Software, and to permit persons to whom the Software is furnished to do so,\nsubject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS\nFOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR\nCOPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER\nIN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN\nCONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "SpeechRecognitionPlugin\n=======================\n\nW3C Web Speech API - Speech Recognition plugin for PhoneGap\n\nUpdate 2013/09/05\n=================\n\nBack to work on this, but it's not ready yet, so don't try to use it.\n\nUpdate 2013/08/05\n=================\n\nHi, you are all probably wondering where the code is after seeing my PhoneGap Day US presentation or reading the slides. Well, I've been dealing with an illness in the family and have not had as much spare time as I would have hoped to update this project. However, things are working out better than I could have hoped for and I should have time to concentrate on this very soon.\n\nUpdate 2015/04/04\n=================\n\nA basic example is working on iOS and Android:\n```\n<script type=\"text/javascript\">\nvar recognition;\ndocument.addEventListener('deviceready', onDeviceReady, false);\n\nfunction onDeviceReady() {\n    recognition = new SpeechRecognition();\n    recognition.onresult = function(event) {\n        if (event.results.length > 0) {\n            q.value = event.results[0][0].transcript;\n            q.form.submit();\n        }\n    };\n}\n</script>\n<form action=\"http://www.example.com/search\">\n    <input type=\"search\" id=\"q\" name=\"q\" size=\"60\">\n    <input type=\"button\" value=\"Click to Speak\" onclick=\"recognition.start()\">\n</form>\n```\n\nThe example is from section 6.1, Speech Recognition Examples, of the W3C specification\n(https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html#examples).\n\nTo install the plugin use\n\n```\ncordova plugin add https://github.com/macdonst/SpeechRecognitionPlugin\n```\n\nSince iOS 10 it's mandatory to add an `NSMicrophoneUsageDescription` entry in the Info.plist to access the microphone.\n\nTo add this entry you can pass the `MICROPHONE_USAGE_DESCRIPTION` variable on plugin install.\n\nExample:\n\n`cordova plugin add https://github.com/macdonst/SpeechRecognitionPlugin --variable MICROPHONE_USAGE_DESCRIPTION=\"your usage message\"`\n\nIf the variable\n
is not provided it will use an empty message, but a usage description string is mandatory to submit your app to the App Store.\n\nOn iOS 10 and greater the plugin uses the native SFSpeechRecognizer (same as Siri).\n\nSupported locales for SFSpeechRecognizer are:\nro-RO, en-IN, he-IL, tr-TR, en-NZ, sv-SE, fr-BE, it-CH, de-CH, pl-PL, pt-PT, uk-UA, fi-FI, vi-VN, ar-SA, zh-TW, es-ES, en-GB, yue-CN, th-TH, en-ID, ja-JP, en-SA, en-AE, da-DK, fr-FR, sk-SK, de-AT, ms-MY, hu-HU, ca-ES, ko-KR, fr-CH, nb-NO, en-AU, el-GR, ru-RU, zh-CN, en-US, en-IE, nl-BE, es-CO, pt-BR, es-US, hr-HR, fr-CA, zh-HK, es-MX, id-ID, it-IT, nl-NL, cs-CZ, en-ZA, es-CL, en-PH, en-CA, en-SG, de-DE\n\nTwo-character codes can be used too.\n\nOn iOS 9 and older the plugin uses the iSpeech SDK, which requires an API key; get one for free at https://www.ispeech.org/.\nTo provide the key, add this preference inside the config.xml:\n\n```\n<preference name=\"apiKey\" value=\"yourApiKeyHere\" />\n```\n\nIf none is provided it will use the demo key \"developerdemokeydeveloperdemokey\".\n\niSpeech supported languages are:\n\nEnglish (Canada) (en-CA)\nEnglish (United States) (en-US)\nSpanish (Spain) (es-ES)\nFrench (France) (fr-FR)\nItalian (Italy) (it-IT)\nPolish (Poland) (pl-PL)\nPortuguese (Portugal) (pt-PT)\n\nTwo-character codes can be used too, but for English, \"en\" will use \"en-US\".\n"
  },
  {
    "path": "package.json",
    "content": "{\n  \"name\": \"phonegap-plugin-speech-recognition\",\n  \"version\": \"0.3.0\",\n  \"description\": \"Cordova Speech Recognition Plugin\",\n  \"cordova\": {\n    \"id\": \"phonegap-plugin-speech-recognition\",\n    \"platforms\": [\"android\", \"ios\"]\n  },\n  \"repository\": {\n    \"type\": \"git\",\n    \"url\": \"git+https://github.com/macdonst/SpeechRecognitionPlugin.git\"\n  },\n  \"keywords\": [\n    \"cordova\",\n    \"speech\",\n    \"recognition\",\n    \"ecosystem:cordova\",\n    \"cordova-android\",\n    \"cordova-ios\"\n  ],\n  \"author\": \"Simon MacDonald\",\n  \"license\": \"MIT\",\n  \"bugs\": {\n    \"url\": \"https://github.com/macdonst/SpeechRecognitionPlugin/issues\"\n  },\n  \"homepage\": \"https://github.com/macdonst/SpeechRecognitionPlugin#readme\"\n}\n"
  },
  {
    "path": "plugin.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<plugin\n    xmlns=\"http://www.phonegap.com/ns/plugins/1.0\"\n    xmlns:android=\"http://schemas.android.com/apk/res/android\" id=\"phonegap-plugin-speech-recognition\" version=\"0.3.0\">\n    <name>SpeechRecognition</name>\n    <description>Cordova Speech Recognition Plugin</description>\n    <license>MIT</license>\n    <keywords>cordova,speech,recognition</keywords>\n    <dependency id=\"cordova-plugin-compat\" version=\"^1.0.0\" />\n    <!-- android -->\n    <platform name=\"android\">\n        <js-module src=\"www/SpeechRecognition.js\" name=\"SpeechRecognition\">\n            <clobbers target=\"SpeechRecognition\" />\n        </js-module>\n        <js-module src=\"www/SpeechRecognitionError.js\" name=\"SpeechRecognitionError\">\n            <clobbers target=\"SpeechRecognitionError\" />\n        </js-module>\n        <js-module src=\"www/SpeechRecognitionAlternative.js\" name=\"SpeechRecognitionAlternative\">\n            <clobbers target=\"SpeechRecognitionAlternative\" />\n        </js-module>\n        <js-module src=\"www/SpeechRecognitionResult.js\" name=\"SpeechRecognitionResult\">\n            <clobbers target=\"SpeechRecognitionResult\" />\n        </js-module>\n        <js-module src=\"www/SpeechRecognitionResultList.js\" name=\"SpeechRecognitionResultList\">\n            <clobbers target=\"SpeechRecognitionResultList\" />\n        </js-module>\n        <js-module src=\"www/SpeechRecognitionEvent.js\" name=\"SpeechRecognitionEvent\">\n            <clobbers target=\"SpeechRecognitionEvent\" />\n        </js-module>\n        <js-module src=\"www/SpeechGrammar.js\" name=\"SpeechGrammar\">\n            <clobbers target=\"SpeechGrammar\" />\n        </js-module>\n        <js-module src=\"www/SpeechGrammarList.js\" name=\"SpeechGrammarList\">\n            <clobbers target=\"SpeechGrammarList\" />\n        </js-module>\n        <config-file target=\"res/xml/config.xml\" parent=\"/*\">\n            
<feature name=\"SpeechRecognition\">\n                <param name=\"android-package\" value=\"org.apache.cordova.speech.SpeechRecognition\"/>\n            </feature>\n        </config-file>\n        <config-file target=\"AndroidManifest.xml\" parent=\"/*\">\n            <uses-permission android:name=\"android.permission.RECORD_AUDIO\" />\n        </config-file>\n        <source-file src=\"src/android/SpeechRecognition.java\" target-dir=\"src/org/apache/cordova/speech\" />\n    </platform>\n    <!-- ios -->\n    <platform name=\"ios\">\n        <js-module src=\"www/SpeechRecognition.js\" name=\"SpeechRecognition\">\n            <clobbers target=\"SpeechRecognition\" />\n        </js-module>\n        <js-module src=\"www/SpeechRecognitionError.js\" name=\"SpeechRecognitionError\">\n            <clobbers target=\"SpeechRecognitionError\" />\n        </js-module>\n        <js-module src=\"www/SpeechRecognitionAlternative.js\" name=\"SpeechRecognitionAlternative\">\n            <clobbers target=\"SpeechRecognitionAlternative\" />\n        </js-module>\n        <js-module src=\"www/SpeechRecognitionResult.js\" name=\"SpeechRecognitionResult\">\n            <clobbers target=\"SpeechRecognitionResult\" />\n        </js-module>\n        <js-module src=\"www/SpeechRecognitionResultList.js\" name=\"SpeechRecognitionResultList\">\n            <clobbers target=\"SpeechRecognitionResultList\" />\n        </js-module>\n        <js-module src=\"www/SpeechRecognitionEvent.js\" name=\"SpeechRecognitionEvent\">\n            <clobbers target=\"SpeechRecognitionEvent\" />\n        </js-module>\n        <js-module src=\"www/SpeechGrammar.js\" name=\"SpeechGrammar\">\n            <clobbers target=\"SpeechGrammar\" />\n        </js-module>\n        <js-module src=\"www/SpeechGrammarList.js\" name=\"SpeechGrammarList\">\n            <clobbers target=\"SpeechGrammarList\" />\n        </js-module>\n        <config-file target=\"config.xml\" parent=\"/*\">\n            <feature 
name=\"SpeechRecognition\">\n                <param name=\"ios-package\" value=\"SpeechRecognition\"/>\n            </feature>\n        </config-file>\n        <source-file src=\"src/ios/SpeechRecognition.m\" />\n        <source-file src=\"src/ios/libiSpeechSDK.a\" framework=\"true\" />\n        <header-file src=\"src/ios/SpeechRecognition.h\" />\n        <header-file src=\"src/ios/Headers/iSpeechSDK.h\" />\n        <header-file src=\"src/ios/Headers/ISSpeechRecognition.h\" />\n        <header-file src=\"src/ios/Headers/ISSpeechRecognitionLocales.h\" />\n        <header-file src=\"src/ios/Headers/ISSpeechRecognitionResult.h\" />\n        <header-file src=\"src/ios/Headers/ISSpeechSynthesis.h\" />\n        <header-file src=\"src/ios/Headers/ISSpeechSynthesisVoices.h\" />\n        <framework src=\"AudioToolbox.framework\" />\n        <framework src=\"SystemConfiguration.framework\" />\n        <framework src=\"Security.framework\" />\n        <framework src=\"CFNetwork.framework\" />\n        <framework src=\"Speech.framework\" weak=\"true\" />\n        <resource-file src=\"src/ios/iSpeechSDK.bundle\" />\n        <preference name=\"MICROPHONE_USAGE_DESCRIPTION\" default=\" \" />\n        <config-file target=\"*-Info.plist\" parent=\"NSMicrophoneUsageDescription\">\n            <string>$MICROPHONE_USAGE_DESCRIPTION</string>\n        </config-file>\n        <preference name=\"SPEECH_RECOGNITION_USAGE_DESCRIPTION\" default=\" \" />\n        <config-file target=\"*-Info.plist\" parent=\"NSSpeechRecognitionUsageDescription\">\n            <string>$SPEECH_RECOGNITION_USAGE_DESCRIPTION</string>\n        </config-file>\n    </platform>\n    <platform name=\"browser\">\n        <js-module src=\"www/browser/SpeechRecognition.js\" name=\"SpeechRecognition\">\n            <runs/>\n        </js-module>\n    </platform>\n</plugin>\n"
  },
  {
    "path": "src/android/SpeechRecognition.java",
    "content": "package org.apache.cordova.speech;\n\nimport java.util.ArrayList;\n\nimport org.apache.cordova.PermissionHelper;\nimport org.json.JSONArray;\nimport org.json.JSONException;\nimport org.json.JSONObject;\n\nimport org.apache.cordova.CallbackContext;\nimport org.apache.cordova.CordovaPlugin;\nimport org.apache.cordova.PluginResult;\n\nimport android.content.pm.PackageManager;\nimport android.util.Log;\nimport android.content.Intent;\nimport android.os.Bundle;\nimport android.os.Handler;\nimport android.os.Looper;\nimport android.speech.RecognitionListener;\nimport android.speech.RecognizerIntent;\nimport android.speech.SpeechRecognizer;\nimport android.Manifest;\n\n/**\n * Style and such borrowed from the TTS and PhoneListener plugins\n */\npublic class SpeechRecognition extends CordovaPlugin {\n    private static final String LOG_TAG = SpeechRecognition.class.getSimpleName();\n    public static final String ACTION_INIT = \"init\";\n    public static final String ACTION_SPEECH_RECOGNIZE_START = \"start\";\n    public static final String ACTION_SPEECH_RECOGNIZE_STOP = \"stop\";\n    public static final String ACTION_SPEECH_RECOGNIZE_ABORT = \"abort\";\n    public static final String NOT_PRESENT_MESSAGE = \"Speech recognition is not present or enabled\";\n\n    private CallbackContext speechRecognizerCallbackContext;\n    private boolean recognizerPresent = false;\n    private SpeechRecognizer recognizer;\n    private boolean aborted = false;\n    private boolean listening = false;\n    private String lang;\n\n    private static String [] permissions = { Manifest.permission.RECORD_AUDIO };\n    private static int RECORD_AUDIO = 0;\n\n    protected void getMicPermission()\n    {\n        PermissionHelper.requestPermission(this, RECORD_AUDIO, permissions[RECORD_AUDIO]);\n    }\n\n    private void promptForMic()\n    {\n        if(PermissionHelper.hasPermission(this, permissions[RECORD_AUDIO])) {\n            this.startRecognition();\n        }\n        
else\n        {\n            getMicPermission();\n        }\n\n    }\n\n    public void onRequestPermissionResult(int requestCode, String[] permissions,\n                                          int[] grantResults) throws JSONException\n    {\n        for(int r:grantResults)\n        {\n            if(r == PackageManager.PERMISSION_DENIED)\n            {\n                fireErrorEvent();\n                fireEvent(\"end\");\n                return;\n            }\n        }\n        promptForMic();\n    }\n\n    @Override\n    public boolean execute(String action, JSONArray args, CallbackContext callbackContext) {\n        // Dispatcher\n        if (ACTION_INIT.equals(action)) {\n            // init\n            if (DoInit()) {\n                callbackContext.sendPluginResult(new PluginResult(PluginResult.Status.OK));\n\n                Handler loopHandler = new Handler(Looper.getMainLooper());\n                loopHandler.post(new Runnable() {\n\n                    @Override\n                    public void run() {\n                        recognizer = SpeechRecognizer.createSpeechRecognizer(cordova.getActivity().getBaseContext());\n                        recognizer.setRecognitionListener(new SpeechRecognitionListner());\n                    }\n\n                });\n            } else {\n                callbackContext.sendPluginResult(new PluginResult(PluginResult.Status.ERROR, NOT_PRESENT_MESSAGE));\n            }\n        }\n        else if (ACTION_SPEECH_RECOGNIZE_START.equals(action)) {\n            // recognize speech\n            if (!recognizerPresent) {\n                // Bail out early; otherwise we would try to start recognition without a recognizer.\n                callbackContext.sendPluginResult(new PluginResult(PluginResult.Status.ERROR, NOT_PRESENT_MESSAGE));\n                return true;\n            }\n            this.lang = args.optString(0, \"en\");\n            this.speechRecognizerCallbackContext = callbackContext;\n            this.promptForMic();\n        }\n        else if (ACTION_SPEECH_RECOGNIZE_STOP.equals(action)) {\n    
        stop(false);\n        }\n        else if (ACTION_SPEECH_RECOGNIZE_ABORT.equals(action)) {\n            stop(true);\n        }\n        else {\n            // Invalid action\n            String res = \"Unknown action: \" + action;\n            return false;\n        }\n        return true;\n    }\n\n    private void startRecognition() {\n\n        final Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);\n        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);\n        intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE,\"voice.recognition.test\");\n        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE,lang);\n\n        intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS,5);\n\n        Handler loopHandler = new Handler(Looper.getMainLooper());\n        loopHandler.post(new Runnable() {\n\n            @Override\n            public void run() {\n                recognizer.startListening(intent);\n            }\n\n        });\n\n        PluginResult res = new PluginResult(PluginResult.Status.NO_RESULT);\n        res.setKeepCallback(true);\n        this.speechRecognizerCallbackContext.sendPluginResult(res);\n    }\n    \n    private void stop(boolean abort) {\n        this.aborted = abort;\n        Handler loopHandler = new Handler(Looper.getMainLooper());\n        loopHandler.post(new Runnable() {\n\n            @Override\n            public void run() {\n                recognizer.stopListening();\n            }\n            \n        });\n    }\n\n    /**\n     * Initialize the speech recognizer by checking if one exists.\n     */\n    private boolean DoInit() {\n        this.recognizerPresent = SpeechRecognizer.isRecognitionAvailable(this.cordova.getActivity().getBaseContext());\n        return this.recognizerPresent;\n    }\n\n    private void fireRecognitionEvent(ArrayList<String> transcripts, float[] confidences) {\n        JSONObject event = new JSONObject();\n        JSONArray 
results = new JSONArray();\n        try {\n            for(int i=0; i<transcripts.size(); i++) {\n                JSONArray alternatives = new JSONArray();\n                JSONObject result = new JSONObject();\n                result.put(\"transcript\", transcripts.get(i));\n                result.put(\"final\", true);\n                if (confidences != null) {\n                    result.put(\"confidence\", confidences[i]);\n                }\n                alternatives.put(result);\n                results.put(alternatives);\n            }\n            event.put(\"type\", \"result\");\n            event.put(\"emma\", null);\n            event.put(\"interpretation\", null);\n            event.put(\"results\", results);\n        } catch (JSONException e) {\n            // this will never happen\n        }\n        PluginResult pr = new PluginResult(PluginResult.Status.OK, event);\n        pr.setKeepCallback(true);\n        this.speechRecognizerCallbackContext.sendPluginResult(pr); \n    }\n\n    private void fireEvent(String type) {\n        JSONObject event = new JSONObject();\n        try {\n            event.put(\"type\",type);\n        } catch (JSONException e) {\n            // this will never happen\n        }\n        PluginResult pr = new PluginResult(PluginResult.Status.OK, event);\n        pr.setKeepCallback(true);\n        this.speechRecognizerCallbackContext.sendPluginResult(pr); \n    }\n\n    private void fireErrorEvent() {\n        JSONObject event = new JSONObject();\n        try {\n            event.put(\"type\",\"error\");\n        } catch (JSONException e) {\n            // this will never happen\n        }\n        PluginResult pr = new PluginResult(PluginResult.Status.ERROR, event);\n        pr.setKeepCallback(true);\n        this.speechRecognizerCallbackContext.sendPluginResult(pr); \n    }\n\n    class SpeechRecognitionListner implements RecognitionListener {\n\n        @Override\n        public void onBeginningOfSpeech() {\n            
Log.d(LOG_TAG, \"begin speech\");\n            fireEvent(\"start\");\n            fireEvent(\"audiostart\");\n            fireEvent(\"soundstart\");\n            fireEvent(\"speechstart\");\n        }\n\n        @Override\n        public void onBufferReceived(byte[] buffer) {\n            Log.d(LOG_TAG, \"buffer received\");\n        }\n\n        @Override\n        public void onEndOfSpeech() {\n            Log.d(LOG_TAG, \"end speech\");\n            fireEvent(\"speechend\");\n            fireEvent(\"soundend\");\n            fireEvent(\"audioend\");\n            fireEvent(\"end\");\n        }\n\n        @Override\n        public void onError(int error) {\n            Log.d(LOG_TAG, \"error speech \" + error);\n            // Report permission errors even if listening never started.\n            if (listening || error == SpeechRecognizer.ERROR_INSUFFICIENT_PERMISSIONS) {\n                fireErrorEvent();\n                fireEvent(\"end\");\n            }\n            listening = false;\n        }\n\n        @Override\n        public void onEvent(int eventType, Bundle params) {\n            Log.d(LOG_TAG, \"event speech\");\n        }\n\n        @Override\n        public void onPartialResults(Bundle partialResults) {\n            Log.d(LOG_TAG, \"partial results\");\n        }\n\n        @Override\n        public void onReadyForSpeech(Bundle params) {\n            Log.d(LOG_TAG, \"ready for speech\");\n            listening = true;\n        }\n\n        @Override\n        public void onResults(Bundle results) {\n            Log.d(LOG_TAG, \"onResults \" + results);\n            ArrayList<String> transcript = results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);\n            float[] confidence = results.getFloatArray(SpeechRecognizer.CONFIDENCE_SCORES);\n            if (transcript.size() > 0) {\n                Log.d(LOG_TAG, \"fire recognition event\");\n                fireRecognitionEvent(transcript, confidence);\n            } else {\n                Log.d(LOG_TAG, \"fire no match event\");\n                fireEvent(\"nomatch\");\n            }\n            listening = false;\n        }\n\n        @Override\n        public void onRmsChanged(float rmsdB) {\n            Log.d(LOG_TAG, \"rms changed\");\n        }\n\n    }\n}\n"
  },
  {
    "path": "src/ios/Headers/ISSpeechRecognition.h",
    "content": "//\n//  ISSpeechRecognition.h\n//  iSpeechSDK\n//\n//  Copyright (c) 2012 iSpeech, Inc. All rights reserved.\n//\n\n#import <Foundation/Foundation.h>\n#import <AudioToolbox/AudioToolbox.h>\n\n#import \"iSpeechSDK.h\"\n\n#import \"ISSpeechRecognitionLocales.h\"\n#import \"ISSpeechRecognitionResult.h\"\n\n/**\n * The type of model to use when transcribing audio. Currently, only SMS and Dictation are available.\n */\nenum {\n\tISFreeFormTypeSMS = 1,\n\tISFreeFormTypeVoicemail = 2,\n\tISFreeFormTypeDictation = 3,\n\tISFreeFormTypeMessage = 4,\n\tISFreeFormTypeInstantMessage = 5,\n\tISFreeFormTypeTranscript = 6,\n\tISFreeFormTypeMemo = 7,\n};\n\ntypedef NSUInteger ISFreeFormType;\n\n#if NS_BLOCKS_AVAILABLE\n\n/**\n * The callback handler for a speech recognition request.\n *\n * @param error An error, if one occurred, or `nil`.\n * @param result The result of a successful recognition, or `nil` if there's an error.\n * @param cancelledByUser Whether speech recognition finished because a user cancelled the request.\n */\ntypedef void(^ISSpeechRecognitionHandler)(NSError *error, ISSpeechRecognitionResult *result, BOOL cancelledByUser);\n\n#endif\n\n@class ISSpeechRecognition;\n\n/**\n * Delegate protocol for `ISSpeechRecognition`.\n * \n * The only required method is for getting the result from speech recognition.\n */\n@protocol ISSpeechRecognitionDelegate <NSObject>\n\n@required\n\n/**\n * Speech recognition successfully completed, and a result was sent back from the server.\n * \n * If you get no result text back, and a confidence level of 0.0, then, most likely, the user didn't speak anything.\n * \n * @param speechRecognition The speech recognition instance that completed.\n * @param result The result text and confidence level.\n */\n- (void)recognition:(ISSpeechRecognition *)speechRecognition didGetRecognitionResult:(ISSpeechRecognitionResult *)result;\n\n@optional\n\n/**\n * Something went wrong, speech recognition failed, and an error was passed back.\n 
* \n * @param speechRecognition The speech recognition instance that failed.\n * @param error The actual error. Errors from the SDK internals will have the error domain of `iSpeechErrorDomain`. You may get some URL connection errors if something happens with the network.\n */\n- (void)recognition:(ISSpeechRecognition *)speechRecognition didFailWithError:(NSError *)error;\n\n/**\n * Speech recognition was cancelled by the user.\n * \n * @param speechRecognition The speech recognition instance that was cancelled.\n */\n- (void)recognitionCancelledByUser:(ISSpeechRecognition *)speechRecognition;\n\n/**\n * Recording the user's speech has started.\n * \n * @param speechRecognition The speech recognition instance that started recording audio.\n */\n- (void)recognitionDidBeginRecording:(ISSpeechRecognition *)speechRecognition;\n\n/**\n * Speech recognition has finished recording and is moving on to recognizing the text.\n * \n * This happens when the timeout is hit for a timed listen, or when the user taps the \"Done\" button on the dialog.\n * \n * @param speechRecognition The speech recognition instance that finished recording.\n */\n- (void)recognitionDidFinishRecording:(ISSpeechRecognition *)speechRecognition;\n\n@end\n\n/**\n * The interface for doing speech recognition in the SDK.\n */\n@interface ISSpeechRecognition : NSObject\n\n/** @name Getting and Setting the Delegate */\n\n/**\n * The delegate of a speech recognition object.\n * \n * The delegate must adopt the `<ISSpeechRecognitionDelegate>` protocol.\n */\n@property (nonatomic, unsafe_unretained) id <ISSpeechRecognitionDelegate> delegate;\n\n/** @name Configuration Properties */\n\n/**\n * Sets the locale to use for speech recognition.\n *\n * Most of the time, the value passed is an ISO country code. 
To get our supported ISOs, consult \"Freeform Dictation Languages\" under \"Speech Recognition Settings\" when viewing details about a specific key.\n */\n@property (nonatomic, copy) NSString *locale CONFIGURATION_METHOD;\n\n/**\n * Allows you to set a custom language model for speech recognition.\n */\n@property (nonatomic, copy) NSString *model CONFIGURATION_METHOD;\n\n/**\n * The type of model to use when transcribing audio. Defaults to `ISFreeFormTypeDictation`. \n */\n@property (nonatomic, assign) ISFreeFormType freeformType CONFIGURATION_METHOD;\n\n/**\n * Whether silence detection should be used to automatically detect when someone's done talking.\n */\n@property (nonatomic, assign) BOOL silenceDetectionEnabled CONFIGURATION_METHOD;\n\n/** @name Detecting Audio Input */\n\n/**\n * Returns whether audio input is available for speech recognition. You can check this before creating an instance of `ISSpeechRecognition`, as well as to dynamically update your UI with what you can do.\n * \n * @return Returns whether audio input is available.\n */\n+ (BOOL)audioInputAvailable;\n\n/** @name Aliases and Commands */\n\n/**\n * Adds a list of items as an alias.\n * \n * Think of using an alias list as a way of doing a regular expression. For example, if you want to use a regular expression to match \"call joe\", \"call charlie\", or \"call ben\", then it would be `call (joe|charlie|ben)`. Similarly, to do that with an alias list,\n *\n *\t[speechRecognitionInstance addAlias:@\"PEOPLE\" forItems:[NSArray arrayWithObjects:@\"joe\", @\"charlie\", @\"ben\", nil]];\n *\t[speechRecognitionInstance addCommand:@\"call %PEOPLE%\"];\n * \n * @param alias The string to use for the alias key.\n * @param items The array of items to be substituted for the alias.\n */\n- (void)addAlias:(NSString *)alias forItems:(NSArray *)items;\n\n/**\n * Adds a command.\n *\n * If you want to reference an alias list, the format is `%ALIAS_LIST_NAME%`. 
Replace `ALIAS_LIST_NAME` with the actual name of your alias list.\n * \n * @param command The command to be added.\n */\n- (void)addCommand:(NSString *)command;\n\n/**\n * Add multiple commands from an array.\n * \n * @param commands The array of commands to be added.\n */\n- (void)addCommands:(NSArray *)commands;\n\n/**\n * Clears any command or alias lists on this speech recognition object.\n */\n- (void)resetCommandsAndAliases;\n\n/** @name Listening and Recognizing */\n\n/**\n * Start an untimed listen.\n * \n * An untimed listen means that the SDK will start listening, and will not stop unless you tell it to by calling -[ISSpeechRecognition finishListenAndStartRecognize], or until silence detection kicks in, if you have that enabled.\n * \n * If you're using a command or alias list, use `-listenAndRecognizeWithTimeout:error:` instead. This will ensure that speech recognition will only last as long as is necessary, thus saving the user's battery life and data plan, ensuring that you get a result back, and providing a better overall experience.\n * \n * @param err An `NSError` pointer to get an error object out of the method if something goes wrong.\n * @return Returns whether speech recognition successfully started. If this returns `NO`, check the error for details on what went wrong.\n * @see listenAndRecognizeWithTimeout:error:\n * @see finishListenAndStartRecognize\n */\n- (BOOL)listen:(NSError **)err;\n\n/**\n * If you're running an untimed listen, or if you want to cut a timed listen short, call this method to tell the SDK to stop listening for audio, and finish up transcribing.\n */\n- (void)finishListenAndStartRecognize;\n\n/**\n * Starts a timed listen. 
After a set timeout, the SDK will stop listening for audio and will start to transcribe it.\n * \n * Useful when using command lists to ensure that the user doesn't talk longer than necessary.\n * \n * @param timeout The amount of time, in seconds, for the timed listen to last.\n * @param err An `NSError` pointer to get an error object out of the method if something goes wrong.\n * @return Returns whether speech recognition successfully started. If this returns `NO`, check the error for details on what went wrong.\n */\n- (BOOL)listenAndRecognizeWithTimeout:(NSTimeInterval)timeout error:(NSError **)err;\n\n#if NS_BLOCKS_AVAILABLE\n\n/**\n * Start an untimed listen.\n *\n * An untimed listen means that the SDK will start listening, and will not stop unless you tell it to by calling -[ISSpeechRecognition finishListenAndStartRecognize], or until silence detection kicks in, if you have that enabled.\n *\n * If you're using a command or alias list, use `-listenAndRecognizeWithTimeout:handler:` instead. This will ensure that speech recognition will only last as long as is necessary, thus saving the user's battery life and data plan, ensuring that you get a result back, and providing a better overall experience.\n *\n * @param handler An `ISSpeechRecognitionHandler` block that will be executed on the main thread when speech recognition completes, or when an error occurs.\n * @see listenAndRecognizeWithTimeout:handler:\n * @see finishListenAndStartRecognize\n */\n- (void)listenWithHandler:(ISSpeechRecognitionHandler)handler;\n\n/**\n * Starts a timed listen. 
After a set timeout, the SDK will stop listening for audio and will start to transcribe it.\n *\n * Useful when using command lists to ensure that the user doesn't talk longer than necessary.\n *\n * @param timeout The amount of time, in seconds, for the timed listen to last.\n * @param handler An `ISSpeechRecognitionHandler` block that will be executed on the main thread when speech recognition completes, or when an error occurs.\n */\n- (void)listenAndRecognizeWithTimeout:(NSTimeInterval)timeout handler:(ISSpeechRecognitionHandler)handler;\n\n#endif\n\n/**\n * Cancels an in-progress speech recognition action.\n * \n * If, for some reason, you need to cancel an in-progress speech recognition action, call this method. It's also a good idea to provide feedback to the user as to why you cancelled it.\n */\n- (void)cancel;\n\n@end"
  },
  {
    "path": "src/ios/Headers/ISSpeechRecognitionLocales.h",
    "content": "//\n//  ISSpeechRecognitionLocales.h\n//  iSpeechSDK\n//\n//  Copyright (c) 2012 iSpeech, Inc. All rights reserved.\n//\n\n#import <Foundation/Foundation.h>\n\nextern NSString *const ISLocaleUSEnglish;\nextern NSString *const ISLocaleCAEnglish;\nextern NSString *const ISLocaleGBEnglish;\nextern NSString *const ISLocaleAUEnglish;\nextern NSString *const ISLocaleESSpanish;\nextern NSString *const ISLocaleMXSpanish;\nextern NSString *const ISLocaleITItalian;\nextern NSString *const ISLocaleFRFrench;\nextern NSString *const ISLocaleCAFrench;\nextern NSString *const ISLocalePLPolish;\nextern NSString *const ISLocaleBRPortuguese;\nextern NSString *const ISLocalePTPortuguese;\nextern NSString *const ISLocaleCACatalan;\nextern NSString *const ISLocaleCNChinese;\nextern NSString *const ISLocaleHKChinese;\nextern NSString *const ISLocaleTWChinese;\nextern NSString *const ISLocaleDKDanish;\nextern NSString *const ISLocaleDEGerman;\nextern NSString *const ISLocaleFIFinish;\nextern NSString *const ISLocaleJAJapanese;\nextern NSString *const ISLocaleKRKorean;\nextern NSString *const ISLocaleNLDutch;\nextern NSString *const ISLocaleNONorwegian;\nextern NSString *const ISLocaleRURussian;\nextern NSString *const ISLocaleSESwedish;\n"
  },
  {
    "path": "src/ios/Headers/ISSpeechRecognitionResult.h",
    "content": "//\n//  ISSpeechRecognitionResult.h\n//  iSpeechSDK\n//\n//  Copyright (c) 2012 iSpeech, Inc. All rights reserved.\n//\n\n#import <Foundation/Foundation.h>\n\n/**\n * This class contains information about a successful recognition.\n */\n@interface ISSpeechRecognitionResult : NSObject\n\n/**\n * The transcribed text returned from a recognition.\n */\n@property (nonatomic, copy, readonly) NSString *text;\n\n/**\n * How confident the speech recognizer was. Scale from 0.0 to 1.0.\n */\n@property (nonatomic, assign, readonly) float confidence;\n\n@end\n"
  },
  {
    "path": "src/ios/Headers/ISSpeechSynthesis.h",
    "content": "//\n//  ISSpeechSynthesis.h\n//  iSpeechSDK\n//\n//  Copyright (c) 2012 iSpeech, Inc. All rights reserved.\n//\n\n#import <Foundation/Foundation.h>\n\n#import \"iSpeechSDK.h\"\n#import \"ISSpeechSynthesisVoices.h\"\n\n#if NS_BLOCKS_AVAILABLE\n\n/*^*\n * The callback handler for a speech synthesis request.\n *\n * @param error An error for the request, if one occurred, otherwise, `nil`.\n * @param userCancelled Whether speech synthesis finished as a result of user cancellation or not.\n */\ntypedef void(^ISSpeechSynthesisHandler)(NSError *error, BOOL userCancelled);\n\n#endif\n\n@class ISSpeechSynthesis;\n\n/**\n * Delegate protocol for `ISSpeechSynthesis`.\n * \n * All methods are optional.\n */\n@protocol ISSpeechSynthesisDelegate <NSObject>\n\n@optional\n\n/**\n * The specified speech synthesis instance started speaking. Audio is now playing.\n * \n * @param speechSynthesis The speech synthesis object that is speaking.\n */\n- (void)synthesisDidStartSpeaking:(ISSpeechSynthesis *)speechSynthesis;\n\n/**\n * The specified speech synthesis isntance finished speaking, either on its own or because the user cancelled it.\n * \n * @param speechSynthesis The speech synthesis object that finished speaking.\n * @param userCancelled Whether the user was responsible for cancelling the speech synthesis, usually by tapping the \"Cancel\" button on the dialog.\n */\n- (void)synthesisDidFinishSpeaking:(ISSpeechSynthesis *)speechSynthesis userCancelled:(BOOL)userCancelled;\n\n/**\n * Something went wrong with the speech synthesis. Usually this is used for errors returned by the server.\n * \n * @param speechSynthesis The speech synthesis object that the error occurred on.\n * @param error The acutal error. Errors from the SDK internals will have the error domain of `iSpeechErrorDomain`. 
You may get some URL connection errors if something happens with the network.\n */\n- (void)synthesis:(ISSpeechSynthesis *)speechSynthesis didFailWithError:(NSError *)error;\n\n@end\n\n/**\n * The interface for doing speech synthesis in the SDK.\n */\n@interface ISSpeechSynthesis : NSObject\n\n/** @name Getting and Setting the Delegate */\n\n/**\n * The delegate of a speech synthesis object.\n * \n * The delegate must adopt the `<ISSpeechSynthesisDelegate>` protocol.\n */\n@property (nonatomic, unsafe_unretained) id <ISSpeechSynthesisDelegate> delegate;\n\n/** @name Configuration Properties */\n\n/**\n * Sets the voice to use for this speech synthesis instance.\n * \n * Voices are listed in the `ISSpeechSynthesisVoices.h` header file. You are not limited to that list; they are only standard voices. If you specify an invalid voice, the delegate will get an error.\n */\n@property (nonatomic, copy) NSString *voice CONFIGURATION_METHOD;\n\n/**\n * Sets the speed to use for speech synthesis.\n * \n * This should be a number anywhere between -10 and 10, with -10 being the slowest, and 10 being the fastest. If you provide a number larger than 10, the speed will be set to 10. Likewise, if you provide a number smaller than -10, the speed will be set to -10.\n */\n@property (nonatomic, assign) NSInteger speed CONFIGURATION_METHOD;\n\n/**\n * The bitrate of the synthesised speech.\n * \n * The higher the bitrate, the better the audio quality, but the larger the file size of the data being sent, which results in more buffering needed to load all that data. Default value is 48, which is suitable for WiFi, 4G, and 3G. 
\n * \n * Valid values include 8, 16, 24, 32, 48, 56, 64, 80, 96, 112, 128, 144, 160, 192, 224, 256, and 320, as well as any others listed under \"Bit Rates\" for an API key's Text-to-Speech Settings.\n */\n@property (nonatomic, assign) NSInteger bitrate CONFIGURATION_METHOD;\n\n/** @name Getting and Setting the Text */\n\n/**\n * The text to speak.\n */\n@property (nonatomic, copy) NSString *text;\n\n/** @name Creating an Instance */\n\n/**\n * Create a new `ISSpeechSynthesis` object with the supplied text.\n *\n * @param text The initial text for the speech synthesis object.\n */\n- (id)initWithText:(NSString *)text;\n\n/** @name Speaking Text */\n\n/**\n * Speak the text that was specified when creating this instance.\n * \n * @param err An `NSError` pointer to get an error object out of the method if something goes wrong.\n * @return Whether synthesis successfully started. If this returns `NO`, check the error for details on what went wrong.\n */\n- (BOOL)speak:(NSError **)err;\n\n#if NS_BLOCKS_AVAILABLE\n\n/**\n * Speak the text that was specified when creating this instance.\n *\n * @param handler An `ISSpeechSynthesisHandler` block to be executed when speaking finishes, or when an error occurs. This handler will be called on the main thread.\n */\n- (void)speakWithHandler:(ISSpeechSynthesisHandler)handler;\n\n#endif\n\n/**\n * Cancels an in-progress speech synthesis action.\n */\n- (void)cancel;\n\n@end\n"
  },
  {
    "path": "src/ios/Headers/ISSpeechSynthesisVoices.h",
    "content": "//\n//  ISSpeechSynthesisVoices.h\n//  iSpeechSDK\n//\n//  Copyright (c) 2012 iSpeech, Inc. All rights reserved.\n//\n\n#import <Foundation/Foundation.h>\n\nextern NSString *const ISVoiceUSEnglishFemale;\nextern NSString *const ISVoiceUSEnglishMale;\nextern NSString *const ISVoiceUKEnglishFemale;\nextern NSString *const ISVoiceUKEnglishMale;\nextern NSString *const ISVoiceAUEnglishFemale;\nextern NSString *const ISVoiceUSSpanishFemale;\nextern NSString *const ISVoiceUSSpanishMale;\nextern NSString *const ISVoiceCHChineseFemale;\nextern NSString *const ISVoiceCHChineseMale;\nextern NSString *const ISVoiceHKChineseFemale;\nextern NSString *const ISVoiceTWChineseFemale;\nextern NSString *const ISVoiceJPJapaneseFemale;\nextern NSString *const ISVoiceJPJapaneseMale;\nextern NSString *const ISVoiceKRKoreanFemale;\nextern NSString *const ISVoiceKRKoreanMale;\nextern NSString *const ISVoiceCAEnglishFemale;\nextern NSString *const ISVoiceHUHungarianFemale;\nextern NSString *const ISVoiceBRPortugueseFemale;\nextern NSString *const ISVoiceEURPortugueseFemale;\nextern NSString *const ISVoiceEURPortugueseMale;\nextern NSString *const ISVoiceEURSpanishFemale;\nextern NSString *const ISVoiceEURSpanishMale;\nextern NSString *const ISVoiceEURCatalanFemale;\nextern NSString *const ISVoiceEURCzechFemale;\nextern NSString *const ISVoiceEURDanishFemale;\nextern NSString *const ISVoiceEURFinnishFemale;\nextern NSString *const ISVoiceEURFrenchFemale;\nextern NSString *const ISVoiceEURFrenchMale;\nextern NSString *const ISVoiceEURNorwegianFemale;\nextern NSString *const ISVoiceEURDutchFemale;\nextern NSString *const ISVoiceEURDutchMale;\nextern NSString *const ISVoiceEURPolishFemale;\nextern NSString *const ISVoiceEURItalianFemale;\nextern NSString *const ISVoiceEURItalianMale;\nextern NSString *const ISVoiceEURTurkishFemale;\nextern NSString *const ISVoiceEURTurkishMale;\nextern NSString *const ISVoiceEURGermanFemale;\nextern NSString *const ISVoiceEURGermanMale;\nextern 
NSString *const ISVoiceRURussianFemale;\nextern NSString *const ISVoiceRURussianMale;\nextern NSString *const ISVoiceSWSwedishFemale;\nextern NSString *const ISVoiceCAFrenchFemale;\nextern NSString *const ISVoiceCAFrenchMale;\nextern NSString *const ISVoiceArabicMale;\n"
  },
  {
    "path": "src/ios/Headers/iSpeechSDK.h",
    "content": "//\n//  iSpeechSDK.h\n//  iSpeechSDK\n//\n//  Copyright (c) 2012 iSpeech, Inc. All rights reserved.\n//\n\n#import <Foundation/Foundation.h>\n\n// Methods marked with `CONFIGURATION_METHOD` can be set globally, for all objects, by calling the methods on [[iSpeechSDK sharedSDK] configuration]. This mimics the Appearance API in iOS 5.\n#define CONFIGURATION_METHOD \n\n#import \"ISSpeechSynthesis.h\"\n#import \"ISSpeechRecognition.h\"\n\n// Protocol used by objects that act as the proxy for the Configuration API. For details on each property here, look at ISSpeechSynthesis and ISSpeechRecognition.\n@protocol ISConfiguration <NSObject>\n\n@property (nonatomic, copy) NSString *voice;\n@property (nonatomic, assign) NSInteger speed;\n@property (nonatomic, assign) NSInteger bitrate;\n\n@property (nonatomic, copy) NSString *locale;\n@property (nonatomic, copy) NSString *model;\n\n@property (nonatomic, assign) NSUInteger freeformType;\n\n@property (nonatomic, assign) BOOL silenceDetectionEnabled;\n@property (nonatomic, assign) BOOL adaptiveBitrateEnabled;\n\n@end\n\n/**\n * The error domain for errors returned by the SDK.\n */\nextern NSString *const iSpeechErrorDomain;\n\n/**\n * Possible error codes returned by the SDK.\n * \n * Some of these should not be returned by the SDK (ones like `kISpeechErrorCodeInvalidFileFormat` and `kISpeechErrorCodeInvalidContentType`) because you don't have control over them. However, they are included in the off chance that something does go wrong with the server and they are returned. Codes that shouldn't be returned are marked with an asterisk (`*`).\n *\n * When you get an error during speech recognition, tell the user that something went wrong. If you get `kISpeechErrorCodeNoInputAvailable`, `kISpeechErrorCodeNoInternetConnection`, or `kISpeechErrorCodeLostInput` the error messages on those NSError instances have been localized, and are presentable to the user. 
\n */\nenum _ISpeechErrorCode {\n\tkISpeechErrorCodeInvalidAPIKey = 1,\t\t\t\t\t// You provided an invalid API key.\n\tkISpeechErrorCodeUnableToConvert = 2,\t\t\t\t// The server was unable to convert your text to speech.\n\tkISpeechErrorCodeNotEnoughCredits = 3,\t\t\t\t// Your API key doesn't have the necessary credits required to complete this transaction.\n\tkISpeechErrorCodeNoActionSpecified = 4,\t\t\t\t// *\n\tkISpeechErrorCodeInvalidText = 5,\t\t\t\t\t// Usually, this error occurs when no text is sent to the server, or, for example, Japanese characters are sent to the English voice.\n\tkISpeechErrorCodeTooManyWords = 6,\t\t\t\t\t// You tried to convert too many words to speech.\n\tkISpeechErrorCodeInvalidTextEntry = 7,\t\t\t\t// *\n\tkISpeechErrorCodeInvalidVoice = 8,\t\t\t\t\t// You specified a voice that either doesn't exist, or that you don't have access to.\n\tkISpeechErrorCodeInvalidFileFormat = 12,\t\t\t// *\n\tkISpeechErrorCodeInvalidSpeed = 13,\t\t\t\t\t// *\n\tkISpeechErrorCodeInvalidDictionary = 14,\t\t\t// *\n\tkISpeechErrorCodeInvalidBitrate = 15,\t\t\t\t// You specified a bitrate that isn't one of the allowed values. See -[ISSpeechSynthesis bitrate] for details on valid values.\n\tkISpeechErrorCodeInvalidFrequency = 16,\t\t\t\t// *\n\tkISpeechErrorCodeInvalidAliasList = 17,\t\t\t\t// *\n\tkISpeechErrorCodeAliasMissing = 18,\t\t\t\t\t// *\n\tkISpeechErrorCodeInvalidContentType = 19,\t\t\t// *\n\tkISpeechErrorCodeAliasListTooComplex = 20,\t\t\t// *\n\tkISpeechErrorCodeCouldNotRecognize = 21,\t\t\t// If the audio isn't clear enough, or corrupted, this error will get returned. It's usually good UX to prompt the user to try again.\n\tkISpeechErrorCodeOptionNotEnabled = 30,\t\t\t\t// Option not enabled for your account. 
Please contact iSpeech sales at +1 (917) 338-7723 or at sales@ispeech.org to modify your license.\n\tkISpeechErrorCodeNoAPIAccess = 997,\t\t\t\t\t// *\n\tkISpeechErrorCodeUnsupportedOutputType = 998,\t\t// *\n\tkISpeechErrorCodeInvalidRequest = 999,\t\t\t\t// *\n\tkISpeechErrorCodeTrialPeriodExceeded = 100,\t\t\t// This evaluation account has exceeded its trial period. Please contact iSpeech sales at +1 (917) 338-7723 or at sales@ispeech.org to upgrade your license.\n\tkISpeechErrorCodeAPIKeyDisabled = 101,\t\t\t\t// Your key has been disabled. Please contact iSpeech sales at +1 (917) 338-7723 or at sales@ispeech.org to modify your license.\n\tkISpeechErrorCodeInvalidRequestMethod = 1000,\t\t// *\n\t\n\t// Error code 300 was \"UserCancelled\", but that has been wrapped into the SDK and is no longer used.\n\tkISpeechErrorCodeNoInputAvailable = 301,\t\t\t// You wanted to do speech recognition, but there's no mic available.\n\tkISpeechErrorCodeNoInternetConnection = 302,\t\t// There's no connection to the cloud to do speech synthesis or speech recognition.\n\tkISpeechErrorCodeSDKIsBusy = 303,\t\t\t\t\t// The SDK is busy doing either recognition or synthesis.\n\tkISpeechErrorCodeSDKInterrupted = 304,\t\t\t\t// The SDK was in the middle of doing something, and then got an audio session interruption\n\tkISpeechErrorCodeCouldNotActiveAudioSession = 305,\t// Unable to activate the audio session. Can happen when another audio session has higher precedence than ours does.\n\tkISpeechErrorCodeCouldNotStartAudioQueue = 306,\t\t// Unable to start an audio queue. Can happen when another audio queue has higher precedence than ours does.\n\tkISpeechErrorCodeServerDied = 307,\t\t\t\t\t// Server Died error. 
mediaserverd has died, and we need to clear out all our audio objects and start fresh.\n\tkISpeechErrorCodeLostInput = 308,\t\t\t\t\t// There was audio input, and speech recognition was happening, and then the audio input went away for some reason.\n\tkISpeechErrorCodeBadHost = 309,\t\t\t\t\t\t// The SSL Certificate chain was invalid, probably a result of some redirect away from iSpeech's servers. An example of this happening is when connected to a WiFi network that requires authentication before sending network requests.\n\t\n\tkISpeechErrorCodeUnknownError = 399\n};\n\ntypedef NSUInteger iSpeechErrorCode;\n\n@class iSpeechSDK;\n\n/**\n * iSpeechSDKDelegate has optional methods to be notified when things happen on the SDK. Currently only notifies when an audio session interruption begins and ends.\n */\n@protocol iSpeechSDKDelegate <NSObject>\n\n@optional\n\n/**\n * The audio session has been interrupted. See [Responding to Audio Session Interruptions](https://developer.apple.com/library/ios/#documentation/Audio/Conceptual/AudioSessionProgrammingGuide/Cookbook/Cookbook.html#//apple_ref/doc/uid/TP40007875-CH6-SW7) in the [Audio Session Programming Guide](https://developer.apple.com/library/ios/#documentation/Audio/Conceptual/AudioSessionProgrammingGuide/Introduction/Introduction.html).\n * \n * @param sdk The shared instance of the SDK.\n */\n- (void)iSpeechSDKDidBeginInterruption:(iSpeechSDK *)sdk;\n\n/**\n * The interruption on the audio session has ended. 
See [Responding to Audio Session Interruptions](https://developer.apple.com/library/ios/#documentation/Audio/Conceptual/AudioSessionProgrammingGuide/Cookbook/Cookbook.html#//apple_ref/doc/uid/TP40007875-CH6-SW7) in the [Audio Session Programming Guide](https://developer.apple.com/library/ios/#documentation/Audio/Conceptual/AudioSessionProgrammingGuide/Introduction/Introduction.html).\n * \n * @param sdk The shared instance of the SDK.\n */\n- (void)iSpeechSDKDidEndInterruption:(iSpeechSDK *)sdk;\n\n@end\n\n/**\n * The shared SDK class. Configuration of the SDK happens on this object, as well as getting the configuration object to set global configuration for ISSpeechSynthesis and ISSpeechRecognition objects.\n */\n@interface iSpeechSDK : NSObject\n\n/** @name Configuring the SDK Instance */\n\n/**\n * Whether the SDK should use the Mobile Development Server (`YES`) or the Mobile Production Server (`NO`). This is set to `NO` by default.\n */\n@property (nonatomic, assign) BOOL usesDevServer;\n\n/**\n * Whether the SDK should vibrate on the Start Recording and Stop Recording prompts.\n *\n * These are off by default (`NO`) and will need to be turned on by setting this to `YES`.\n */\n@property (nonatomic, assign) BOOL vibrateOnPrompts;\n\n/**\n * Whether the SDK should play Success and Fail prompts on a successful or unsuccessful recognition.\n *\n * These are off by default (`NO`) and will need to be turned on by setting this to `YES`.\n */\n@property (nonatomic, assign) BOOL playsSuccessAndFailPrompts;\n\n/**\n * Allows you to tell the SDK whether or not it should deactivate the audio session once it's finished its work. If you're playing your own audio in the app (such as music, an audiobook, etc.), you'd use this to make sure that your audio doesn't go away once the SDK finishes its speech synthesis or speech recognition. 
\n */\n@property (nonatomic, assign) BOOL shouldDeactivateAudioSessionWhenFinished;\n\n/**\n * Any extra server params you want to send to the server.\n *\n * Use only if directed.\n */\n@property (nonatomic, copy) NSString *extraServerParams;\n\n/**\n * Sets the APIKey to send to the server.\n * \n * The best place to set this is once in your `-applicationDidFinishLaunching:` method on your app delegate. Once set, you shouldn't have a reason to change it.\n */\n@property (nonatomic, copy) NSString *APIKey;\n\n/** @name Setting and Getting the Delegate */\n\n/**\n * Set the delegate to be notified of audio session interruptions.\n * \n * The delegate must adopt the `<iSpeechSDKDelegate>` protocol.\n */\n@property (nonatomic, unsafe_unretained) id <iSpeechSDKDelegate> delegate;\n\n/** @name SDK Properties */\n\n/**\n * Returns whether the SDK is currently busy doing something, such as performing speech recognition or speech synthesis.\n */\n@property (nonatomic, assign, readonly) BOOL isBusy;\n\n/**\n * Returns the version number of the SDK. Useful for debugging purposes and bug reports.\n */\n@property (nonatomic, copy, readonly) NSString *version;\n\n/** @name Getting the SDK Instance */\n\n/**\n * The single instance of the iSpeechSDK class.\n * \n * @return Returns the shared instance of the SDK.\n */\n+ (iSpeechSDK *)sharedSDK;\n\n/** @name Getting the Configuration Instance */\n\n/**\n * Method to get the configuration object to set properties globally for all objects. 
For example, if you wanted to set the voice for all speech synthesis requests, you'd call `[[[iSpeechSDK sharedSDK] configuration] setVoice:VOICE_HERE]` and all subsequent speech synthesis requests would use that voice.\n *\n * @return Returns the configuration proxy.\n */\n- (id <ISConfiguration>)configuration;\n\n/** @name Resetting the SDK */\n\n/**\n * If you get a lot of 303 errors, even though you know for a fact that the SDK isn't doing anything, call this method to reset the SDK's internals.\n * \n * Configuration properties set, including your API key, and anything sent to `[[iSpeechSDK sharedSDK] configuration]` will not be affected by this call. The delegate for any active speech synthesis or speech recognition will get a `kISpeechErrorCodeServerDied` error code.\n *\n * @warning This is a temporary fix and will be removed for the final 1.0 release of the SDK.\n */\n- (void)resetSDK;\n\n// The following methods are provided in the event that you initialize the audio session before the SDK has a chance to. If you do, you MUST call these methods in your interruption listener, otherwise the SDK WILL break.\n\n/** @name Interruption Handling */\n\n/**\n * Tells the SDK that an interruption has begun. If you initialize the audio session before the SDK, you must call this method to ensure that the SDK does not break.\n */\n- (void)beginInterruption;\n\n/**\n * Tells the SDK that an interruption has ended. If you initialize the audio session before the SDK, you must call this method to ensure that the SDK does not break.\n */\n- (void)endInterruption;\n\n@end\n"
  },
  {
    "path": "src/ios/SpeechRecognition.h",
    "content": "#import <Cordova/CDV.h>\n#import \"ISpeechSDK.h\"\n#import <Speech/Speech.h>\n\n@interface SpeechRecognition : CDVPlugin <ISSpeechRecognitionDelegate>\n\n@property (nonatomic, strong) CDVInvokedUrlCommand * command;\n@property (nonatomic, strong) CDVPluginResult* pluginResult;\n@property (nonatomic, strong) ISSpeechRecognition* iSpeechRecognition;\n@property (nonatomic, strong) SFSpeechRecognizer *sfSpeechRecognizer;\n@property (nonatomic, strong) AVAudioEngine *audioEngine;\n@property (nonatomic, strong) SFSpeechAudioBufferRecognitionRequest *recognitionRequest;\n@property (nonatomic, strong) SFSpeechRecognitionTask *recognitionTask;\n\n- (void) init:(CDVInvokedUrlCommand*)command;\n- (void) start:(CDVInvokedUrlCommand*)command;\n- (void) stop:(CDVInvokedUrlCommand*)command;\n- (void) abort:(CDVInvokedUrlCommand*)command;\n\n@end\n"
  },
  {
    "path": "src/ios/SpeechRecognition.m",
    "content": "//\n//  Created by jcesarmobile on 30/11/14.\n//\n//\n\n#import \"SpeechRecognition.h\"\n#import \"ISpeechSDK.h\"\n#import <Speech/Speech.h>\n\n@implementation SpeechRecognition\n\n- (void) init:(CDVInvokedUrlCommand*)command\n{\n    NSString * key = [self.commandDelegate.settings objectForKey:[@\"apiKey\" lowercaseString]];\n    if (!key) {\n        key = @\"developerdemokeydeveloperdemokey\";\n    }\n    iSpeechSDK *sdk = [iSpeechSDK sharedSDK];\n    sdk.APIKey = key;\n    self.iSpeechRecognition = [[ISSpeechRecognition alloc] init];\n    self.audioEngine = [[AVAudioEngine alloc] init];\n    // Report success so the JS side's init success callback fires.\n    [self.commandDelegate sendPluginResult:[CDVPluginResult resultWithStatus:CDVCommandStatus_OK] callbackId:command.callbackId];\n}\n\n- (void) start:(CDVInvokedUrlCommand*)command\n{\n    self.command = command;\n    NSMutableDictionary * event = [[NSMutableDictionary alloc] init];\n    [event setValue:@\"start\" forKey:@\"type\"];\n    self.pluginResult = [CDVPluginResult resultWithStatus:CDVCommandStatus_OK messageAsDictionary:event];\n    [self.pluginResult setKeepCallbackAsBool:YES];\n    [self.commandDelegate sendPluginResult:self.pluginResult callbackId:self.command.callbackId];\n    [self recognize];\n}\n\n
- (void) recognize\n{\n    NSString * lang = [self.command argumentAtIndex:0];\n    if (lang && [lang isEqualToString:@\"en\"]) {\n        lang = @\"en-US\";\n    }\n\n    if (NSClassFromString(@\"SFSpeechRecognizer\")) {\n\n        if (![self permissionIsSet]) {\n            [SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus status){\n                dispatch_async(dispatch_get_main_queue(), ^{\n\n                    if (status == SFSpeechRecognizerAuthorizationStatusAuthorized) {\n                        [self recordAndRecognizeWithLang:lang];\n                    } else {\n                        [self sendErrorWithMessage:@\"Permission not allowed\" andCode:4];\n                    }\n\n                });\n            }];\n        } else {\n            [self recordAndRecognizeWithLang:lang];\n        }\n    } else {\n        [self.iSpeechRecognition setDelegate:self];\n        [self.iSpeechRecognition setLocale:lang];\n        [self.iSpeechRecognition setFreeformType:ISFreeFormTypeDictation];\n        NSError *error;\n        if (![self.iSpeechRecognition listenAndRecognizeWithTimeout:10 error:&error]) {\n            NSLog(@\"ERROR: %@\", error);\n        }\n    }\n}\n\n
- (void) recordAndRecognizeWithLang:(NSString *) lang\n{\n    NSLocale *locale = [[NSLocale alloc] initWithLocaleIdentifier:lang];\n    self.sfSpeechRecognizer = [[SFSpeechRecognizer alloc] initWithLocale:locale];\n    if (!self.sfSpeechRecognizer) {\n        [self sendErrorWithMessage:@\"The language is not supported\" andCode:7];\n    } else {\n\n        // Cancel the previous task if it's running.\n        if ( self.recognitionTask ) {\n            [self.recognitionTask cancel];\n            self.recognitionTask = nil;\n        }\n\n        [self initAudioSession];\n\n        self.recognitionRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];\n        self.recognitionRequest.shouldReportPartialResults = [[self.command argumentAtIndex:1] boolValue];\n\n        self.recognitionTask = [self.sfSpeechRecognizer recognitionTaskWithRequest:self.recognitionRequest resultHandler:^(SFSpeechRecognitionResult *result, NSError *error) {\n\n            if (error) {\n                NSLog(@\"Recognition error: %@\", error);\n                [self stopAndRelease];\n                [self sendErrorWithMessage:[error localizedDescription] andCode:error.code];\n            }\n\n            if (result) {\n                NSMutableArray * alternatives = [[NSMutableArray alloc] init];\n                int maxAlternatives = [[self.command argumentAtIndex:2] intValue];\n                for ( SFTranscription *transcription in result.transcriptions ) {\n                    if (alternatives.count < maxAlternatives) {\n                        float confMed = 0;\n                        for ( SFTranscriptionSegment *transcriptionSegment in transcription.segments ) {\n                            NSLog(@\"transcriptionSegment.confidence %f\", transcriptionSegment.confidence);\n                            confMed += transcriptionSegment.confidence;\n                        }\n                        NSMutableDictionary * resultDict = [[NSMutableDictionary alloc] init];\n                        [resultDict setValue:transcription.formattedString forKey:@\"transcript\"];\n                        [resultDict setValue:[NSNumber numberWithBool:result.isFinal] forKey:@\"final\"];\n                        [resultDict setValue:[NSNumber numberWithFloat:confMed / transcription.segments.count] forKey:@\"confidence\"];\n                        [alternatives addObject:resultDict];\n                    }\n                }\n                [self sendResults:@[alternatives]];\n                if ( result.isFinal ) {\n                    [self stopAndRelease];\n                }\n            }\n        }];\n\n        AVAudioFormat *recordingFormat = [self.audioEngine.inputNode outputFormatForBus:0];\n\n        [self.audioEngine.inputNode installTapOnBus:0 bufferSize:1024 format:recordingFormat block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {\n            [self.recognitionRequest appendAudioPCMBuffer:buffer];\n        }];\n\n        [self.audioEngine prepare];\n        [self.audioEngine startAndReturnError:nil];\n    }\n}\n\n
- (void) initAudioSession\n{\n    AVAudioSession *audioSession = [AVAudioSession sharedInstance];\n    [audioSession setCategory:AVAudioSessionCategoryRecord error:nil];\n    [audioSession setMode:AVAudioSessionModeMeasurement error:nil];\n    [audioSession setActive:YES withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:nil];\n}\n\n- (BOOL) permissionIsSet\n{\n    SFSpeechRecognizerAuthorizationStatus status = [SFSpeechRecognizer authorizationStatus];\n    return status != SFSpeechRecognizerAuthorizationStatusNotDetermined;\n}\n\n- (void) recognition:(ISSpeechRecognition *)speechRecognition didGetRecognitionResult:(ISSpeechRecognitionResult *)result\n{\n    NSMutableDictionary * resultDict = [[NSMutableDictionary alloc] init];\n    [resultDict setValue:result.text forKey:@\"transcript\"];\n    [resultDict setValue:[NSNumber numberWithBool:YES] forKey:@\"final\"];\n    [resultDict setValue:[NSNumber numberWithFloat:result.confidence] forKey:@\"confidence\"];\n    NSArray * alternatives = @[resultDict];\n    NSArray * results = @[alternatives];\n    [self sendResults:results];\n}\n\n- (void) recognition:(ISSpeechRecognition *)speechRecognition didFailWithError:(NSError *)error\n{\n    // Map iSpeech SDK errors 28 and 23 to the Web Speech API error code 7 (language-not-supported).\n    if (error.code == 28 || error.code == 23) {\n        [self sendErrorWithMessage:[error localizedDescription] andCode:7];\n    }\n}\n\n
- (void) sendResults:(NSArray *) results\n{\n    NSMutableDictionary * event = [[NSMutableDictionary alloc] init];\n    [event setValue:@\"result\" forKey:@\"type\"];\n    [event setValue:nil forKey:@\"emma\"];\n    [event setValue:nil forKey:@\"interpretation\"];\n    [event setValue:results forKey:@\"results\"];\n\n    self.pluginResult = [CDVPluginResult resultWithStatus:CDVCommandStatus_OK messageAsDictionary:event];\n    [self.pluginResult setKeepCallbackAsBool:YES];\n    [self.commandDelegate sendPluginResult:self.pluginResult callbackId:self.command.callbackId];\n}\n\n- (void) sendErrorWithMessage:(NSString *)errorMessage andCode:(NSInteger) code\n{\n    NSMutableDictionary * event = [[NSMutableDictionary alloc] init];\n    [event setValue:@\"error\" forKey:@\"type\"];\n    [event setValue:[NSNumber numberWithInteger:code] forKey:@\"error\"];\n    [event setValue:errorMessage forKey:@\"message\"];\n    self.pluginResult = [CDVPluginResult resultWithStatus:CDVCommandStatus_ERROR messageAsDictionary:event];\n    [self.pluginResult setKeepCallbackAsBool:NO];\n    [self.commandDelegate sendPluginResult:self.pluginResult callbackId:self.command.callbackId];\n}\n\n- (void) stop:(CDVInvokedUrlCommand*)command\n{\n    [self stopOrAbort];\n}\n\n- (void) abort:(CDVInvokedUrlCommand*)command\n{\n    [self stopOrAbort];\n}\n\n- (void) stopOrAbort\n{\n    if (NSClassFromString(@\"SFSpeechRecognizer\")) {\n        if (self.audioEngine.isRunning) {\n            [self.audioEngine stop];\n            [self.recognitionRequest endAudio];\n        }\n    } else {\n        [self.iSpeechRecognition cancel];\n    }\n}\n\n- (void) stopAndRelease\n{\n    [self.audioEngine stop];\n    [self.audioEngine.inputNode removeTapOnBus:0];\n    self.recognitionRequest = nil;\n    self.recognitionTask = nil;\n}\n\n@end\n"
  },
  {
    "path": "www/SpeechGrammar.js",
    "content": "var SpeechGrammar = function() {\n    this.src = \"\";\n    this.weight = 1.0;\n};\n\nmodule.exports = SpeechGrammar;\n"
  },
  {
    "path": "www/SpeechGrammarList.js",
    "content": "var SpeechGrammarList = function(data) {\n    this._list = data;\n    this.length = this._list.length;\n};\n\nSpeechGrammarList.prototype.item = function(index) {\n    return this._list[index];\n};\n\nSpeechGrammarList.prototype.addFromUri = function(src, weight) {\n};\n\nSpeechGrammarList.prototype.addFromString = function(string, weight) {\n};\n\nmodule.exports = SpeechGrammarList;\n"
  },
  {
    "path": "www/SpeechRecognition.js",
    "content": "var exec = require(\"cordova/exec\");\n\n/**\n    attribute SpeechGrammarList grammars;\n    attribute DOMString lang;\n    attribute boolean continuous;\n    attribute boolean interimResults;\n    attribute unsigned long maxAlternatives;\n    attribute DOMString serviceURI;\n */\nvar SpeechRecognition = function () {\n    this.grammars = null;\n    this.lang = \"en\";\n    this.continuous = false;\n    this.interimResults = false;\n    this.maxAlternatives = 1;\n    this.serviceURI = \"\";\n\n    // event handlers\n    this.onaudiostart = null;\n    this.onsoundstart = null;\n    this.onspeechstart = null;\n    this.onspeechend = null;\n    this.onsoundend = null;\n    this.onaudioend = null;\n    this.onresult = null;\n    this.onnomatch = null;\n    this.onerror = null;\n    this.onstart = null;\n    this.onend = null;\n\n    exec(function() {\n        console.log(\"initialized\");\n    }, function(e) {\n        console.log(\"error: \" + e);\n    }, \"SpeechRecognition\", \"init\", []);\n};\n\n
SpeechRecognition.prototype.start = function() {\n    var that = this;\n    // Dispatch each native event to the matching on<type> handler, if one is set.\n    var successCallback = function(event) {\n        var handler = that[\"on\" + event.type];\n        if (typeof handler === \"function\") {\n            handler.call(that, event);\n        }\n    };\n    var errorCallback = function(err) {\n        if (typeof that.onerror === \"function\") {\n            that.onerror(err);\n        }\n    };\n\n    exec(successCallback, errorCallback, \"SpeechRecognition\", \"start\", [this.lang, this.interimResults, this.maxAlternatives]);\n};\n\nSpeechRecognition.prototype.stop = function() {\n    exec(null, null, \"SpeechRecognition\", \"stop\", []);\n};\n\nSpeechRecognition.prototype.abort = function() {\n    exec(null, null, \"SpeechRecognition\", \"abort\", []);\n};\n\nmodule.exports = SpeechRecognition;\n"
  },
  {
    "path": "www/SpeechRecognitionAlternative.js",
    "content": "var SpeechRecognitionAlternative = function() {\n    this.transcript = null;\n    this.confidence = 0.0;\n};\n\nmodule.exports = SpeechRecognitionAlternative;\n"
  },
  {
    "path": "www/SpeechRecognitionError.js",
    "content": "var SpeechRecognitionError = function() {\n    this.error = null;\n    this.message = null;\n};\n\nSpeechRecognitionError['no-speech'] = 0;\nSpeechRecognitionError['aborted'] = 1;\nSpeechRecognitionError['audio-capture'] = 2;\nSpeechRecognitionError['network'] = 3;\nSpeechRecognitionError['not-allowed'] = 4;\nSpeechRecognitionError['service-not-allowed'] = 5;\nSpeechRecognitionError['bad-grammar'] = 6;\nSpeechRecognitionError['language-not-supported'] = 7;\n\nmodule.exports = SpeechRecognitionError;\n"
  },
  {
    "path": "www/SpeechRecognitionEvent.js",
    "content": "var SpeechRecognitionEvent = function() {\n    this.resultIndex = 0;\n    this.results = null;\n    this.interpretation = null;\n    this.emma = null;\n};\n\nmodule.exports = SpeechRecognitionEvent;\n"
  },
  {
    "path": "www/SpeechRecognitionResult.js",
    "content": "// A complete one-shot simple response\nvar SpeechRecognitionResult = function() {\n    this._result = [];\n    this.length = 0;\n    this.final = false;\n};\n\nSpeechRecognitionResult.prototype.item = function(item) {\n    return this._result[item];\n};\n\nmodule.exports = SpeechRecognitionResult;\n"
  },
  {
    "path": "www/SpeechRecognitionResultList.js",
    "content": "// A collection of responses (used in continuous mode)\nvar SpeechRecognitionResultList = function() {\n    this._result = [];\n    this.length = 0;\n};\n\nSpeechRecognitionResultList.prototype.item = function(item) {\n    return this._result[item];\n};\n\nmodule.exports = SpeechRecognitionResultList;\n"
  },
  {
    "path": "www/browser/SpeechRecognition.js",
    "content": "if (!window.SpeechRecognition && window.webkitSpeechRecognition) {\n    window.SpeechRecognition = window.webkitSpeechRecognition;\n}\n\nif (!window.SpeechRecognitionError && window.webkitSpeechRecognitionError) {\n    window.SpeechRecognitionError = window.webkitSpeechRecognitionError;\n}\n\nif (!window.SpeechRecognitionEvent && window.webkitSpeechRecognitionEvent) {\n    window.SpeechRecognitionEvent = window.webkitSpeechRecognitionEvent;\n}\n\nif (!window.SpeechGrammar && window.webkitSpeechGrammar) {\n    window.SpeechGrammar = window.webkitSpeechGrammar;\n}\n\nif (!window.SpeechGrammarList && window.webkitSpeechGrammarList) {\n    window.SpeechGrammarList = window.webkitSpeechGrammarList;\n    // Alias the spec-named addFromURI to WebKit's addFromUri implementation.\n    window.SpeechGrammarList.prototype.addFromURI = window.SpeechGrammarList.prototype.addFromUri;\n}\n"
  }
]