Repository: 0x006F/react-media-recorder
Branch: master
Commit: 0623e8085c6f
Files: 9
Total size: 27.8 KB
Directory structure:
gitextract_t7w5y6m2/
├── .github/
│ └── workflows/
│ └── publish.yaml
├── .gitignore
├── .npmignore
├── LICENSE
├── README.md
├── index.js
├── package.json
├── src/
│ └── index.ts
└── tsconfig.json
================================================
FILE CONTENTS
================================================
================================================
FILE: .github/workflows/publish.yaml
================================================
name: Publish Package to npmjs
on:
  release:
    types: [published]
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20.x'
          registry-url: 'https://registry.npmjs.org'
      - run: npm ci
      - run: npm publish --provenance --access public
        env:
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
================================================
FILE: .gitignore
================================================
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
# Runtime data
pids
*.pid
*.seed
*.pid.lock
# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov
# Coverage directory used by tools like istanbul
coverage
# nyc test coverage
.nyc_output
# Grunt intermediate storage (http://gruntjs.com/creating-plugins#storing-task-files)
.grunt
# Bower dependency directory (https://bower.io/)
bower_components
# node-waf configuration
.lock-wscript
# Compiled binary addons (http://nodejs.org/api/addons.html)
build/Release
# Dependency directories
node_modules/
jspm_packages/
# Typescript v1 declaration files
typings/
# Optional npm cache directory
.npm
# Optional eslint cache
.eslintcache
# Optional REPL history
.node_repl_history
# Output of 'npm pack'
*.tgz
# Yarn Integrity file
.yarn-integrity
# dotenv environment variables file
.env
# build directories
lib
================================================
FILE: .npmignore
================================================
src
================================================
FILE: LICENSE
================================================
MIT License
Copyright (c) 2018 Giridharan GM
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
================================================
FILE: README.md
================================================
# react-media-recorder :o2: :video_camera: :microphone: :computer:
`react-media-recorder` is a fully typed React component with a render prop, or a React hook, that can be used to:
- Record audio/video
- Record screen
using [MediaRecorder API](https://developer.mozilla.org/en-US/docs/Web/API/MediaRecorder).
## Installation
```
npm i react-media-recorder
```
or
```
yarn add react-media-recorder
```
## Usage
```javascript
import { ReactMediaRecorder } from "react-media-recorder";

const RecordView = () => (
  <div>
    <ReactMediaRecorder
      video
      render={({ status, startRecording, stopRecording, mediaBlobUrl }) => (
        <div>
          <p>{status}</p>
          <button onClick={startRecording}>Start Recording</button>
          <button onClick={stopRecording}>Stop Recording</button>
          <video src={mediaBlobUrl} controls autoPlay loop />
        </div>
      )}
    />
  </div>
);
```
Since `react-media-recorder` uses a render prop, you can define what to render in the view. Just don't forget to wire the `startRecording`, `stopRecording` and `mediaBlobUrl` to your component.
## Usage with react hooks
```javascript
import { useReactMediaRecorder } from "react-media-recorder";

const RecordView = () => {
  const { status, startRecording, stopRecording, mediaBlobUrl } =
    useReactMediaRecorder({ video: true });

  return (
    <div>
      <p>{status}</p>
      <button onClick={startRecording}>Start Recording</button>
      <button onClick={stopRecording}>Stop Recording</button>
      <video src={mediaBlobUrl} controls autoPlay loop />
    </div>
  );
};
```
The hook receives an object as argument with the same ReactMediaRecorder options / props (except the `render` function).
### Options / Props
#### audio
Can be either a boolean value or a [MediaTrackConstraints](https://developer.mozilla.org/en-US/docs/Web/API/MediaTrackConstraints) object.
type: `boolean` or `object`
default: `true`
#### blobPropertyBag
[From MDN](https://developer.mozilla.org/en-US/docs/Web/API/Blob/Blob):
An optional `BlobPropertyBag` dictionary which may specify the following two attributes (for the `mediaBlob`):
- `type`, that represents the MIME type of the content of the array that will be put in the blob.
- `endings`, with a default value of "transparent", that specifies how strings containing the line ending character \n are to be written out. It is one of the two values: "native", meaning that line ending characters are changed to match host OS filesystem convention, or "transparent", meaning that endings are stored in the blob without change
type: `object`
default:
if `video` is enabled,
```
{
type: "video/mp4"
}
```
if only `audio` is enabled,
```
{
type: "audio/wav"
}
```
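To make the fallback concrete, here is a minimal sketch of how the recorded chunks end up in a single `Blob`. The `chunks` array stands in for data the `MediaRecorder` produced; the property-bag resolution mirrors the defaults described above (an explicit `blobPropertyBag` wins, otherwise the video/audio default applies):

```javascript
// `chunks` stands in for the recorded data (hypothetical sample data).
const chunks = [new Blob(["fake-frame"], { type: "video/webm" })];

const video = true; // assumption: a video track was recorded
const blobPropertyBag = undefined; // the caller did not override it

// explicit bag > video default ("video/mp4") > audio default ("audio/wav")
const blobProperty = Object.assign(
  { type: chunks[0].type },
  blobPropertyBag || (video ? { type: "video/mp4" } : { type: "audio/wav" })
);
const recordedBlob = new Blob(chunks, blobProperty);
console.log(recordedBlob.type); // "video/mp4"
```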
#### customMediaStream
An optional `MediaStream` object of your own. If provided, it is used directly instead of letting the library acquire a stream.
#### mediaRecorderOptions
An optional options object that will be passed to `MediaRecorder`. Please note that if you specify the MIME type via either `audio` or `video` prop _and_ through this `mediaRecorderOptions`, the `mediaRecorderOptions` have higher precedence.
type: `object`
default: `{}`
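As a sketch, a typical options object asks for a specific container/codec; support varies by browser, so guard with `MediaRecorder.isTypeSupported` before relying on it (the chosen MIME type here is just an example):

```javascript
// Hypothetical options object requesting a specific container/codec.
const mediaRecorderOptions = {
  mimeType: "video/webm;codecs=vp9",
};

// In a browser you could guard with:
// if (!MediaRecorder.isTypeSupported(mediaRecorderOptions.mimeType)) {
//   console.warn("Falling back to the browser's default container");
// }
```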
#### onStart
A `function` that gets invoked when the MediaRecorder starts.
type: `function()`
default: `() => null`
#### onStop
A `function` that gets invoked when the MediaRecorder stops. It provides the blob url and the blob as its params.
type: `function(blobUrl: string, blob: Blob)`
default: `() => null`
#### stopStreamsOnStop
Whether to stop all media streams when recording stops. By default, it's `true`.
#### render
A `function` which accepts an object containing the fields `status`, `startRecording`, `stopRecording` and `mediaBlobUrl`. This function should return a React element/component.
type: `function`
default: `() => null`
#### screen
A `boolean` value. Lets you record your current screen. Not all browsers support this; please [check here](https://caniuse.com/#search=getDisplayMedia) for availability. Please note that at the moment, the MediaRecorder won't record two alike streams at a time: if you provide both `screen` and `video` props, the **screen capturing takes precedence** over the video capturing. However, you can still provide the `video` prop (_as MediaTrackConstraints_), which will then be utilized by the screen capture (for example, `height`, `width`, etc.).
#### video
Can be either a boolean value or a [MediaTrackConstraints](https://developer.mozilla.org/en-US/docs/Web/API/MediaTrackConstraints) object.
type: `boolean` or `object`
default: `false`
#### askPermissionOnMount
A boolean value. If set to `true`, media permission will be requested on mount.
type: `boolean`
default: `false`
#### preferCurrentTab
A boolean value. If set to `true`, the browser will offer the current tab as the most prominent capture source, i.e. as a separate "This Tab" option in the "Choose what to share" options presented to the user.
type: `boolean`
default: `false`
#### selfBrowserSurface
An enumerated value specifying whether the browser should allow the user to select the current tab for capture. Possible values are `include`, which hints that the browser should include the current tab in the choices offered for capture, and `exclude`, which hints that it should be excluded.
type: `undefined` | `'include'` | `'exclude'`;
default: `undefined`
### Props available in the `render` function
#### error
A string enum. Possible values:
- `media_aborted`
- `permission_denied`
- `no_specified_media_found`
- `media_in_use`
- `invalid_media_constraints`
- `no_constraints`
- `recorder_error`
#### status
A string `enum`. Possible values:
- `media_aborted`
- `permission_denied`
- `no_specified_media_found`
- `media_in_use`
- `invalid_media_constraints`
- `no_constraints`
- `recorder_error`
- `idle`
- `acquiring_media`
- `delayed_start`
- `recording`
- `stopping`
- `stopped`
- `paused`
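A small (hypothetical) helper can map these status values to user-facing labels; `statusLabel` below is not part of the library:

```javascript
// Map the status string to a UI label (illustrative helper, not library API).
const statusLabel = (status) => {
  switch (status) {
    case "idle":
      return "Ready to record";
    case "acquiring_media":
      return "Requesting permissions...";
    case "recording":
      return "Recording...";
    case "paused":
      return "Paused";
    case "stopping":
    case "stopped":
      return "Finished";
    default:
      // everything else is one of the error statuses
      return `Something went wrong: ${status}`;
  }
};
```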
#### startRecording
A `function`, which starts recording when invoked.
#### pauseRecording
A `function`, which pauses the recording when invoked.
#### resumeRecording
A `function`, which resumes the recording when invoked.
#### stopRecording
A `function`, which stops recording when invoked.
#### muteAudio
A `function`, which mutes the audio tracks when invoked.
#### unmuteAudio
A `function` which unmutes the audio tracks when invoked.
#### mediaBlobUrl
A `blob` url that can be wired to an `<audio />`, `<video />` or an `<a />` element.
#### clearBlobUrl
A `function` which clears the existing generated blob url (if any) and resets the workflow to its initial `idle` state.
#### isMuted
A boolean prop that tells whether the audio is muted or not.
#### previewStream
If you want to create a live preview of the video for the user, you can use this _stream_ and attach it to a `<video />` element. Please note that this is a **muted stream**. This is by design, to avoid internal microphone feedback on machines like laptops.
For example:
```tsx
const VideoPreview = ({ stream }: { stream: MediaStream | null }) => {
  const videoRef = useRef<HTMLVideoElement>(null);

  useEffect(() => {
    if (videoRef.current && stream) {
      videoRef.current.srcObject = stream;
    }
  }, [stream]);
  if (!stream) {
    return null;
  }
  return <video ref={videoRef} width={500} height={500} autoPlay controls />;
};

const App = () => (
  <ReactMediaRecorder
    video
    render={({ previewStream }) => {
      return <VideoPreview stream={previewStream} />;
    }}
  />
);
```
#### previewAudioStream
If you want access to the live audio stream for use in sound visualisations, you can use this _stream_ as your audio source and extract data from it using the [AudioContext](https://developer.mozilla.org/en-US/docs/Web/API/AudioContext) and [AnalyzerNode](https://developer.mozilla.org/en-US/docs/Web/API/AnalyserNode) features of the Web Audio API. Some javascript examples of how to do this can be found [here](https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API).
## Contributing
Feel free to submit a PR if you found a bug (I might've missed many! :grinning:) or if you want to enhance it further.
Thanks! Happy recording!
================================================
FILE: index.js
================================================
module.exports = require("./lib");
================================================
FILE: package.json
================================================
{
"name": "react-media-recorder",
"version": "1.7.2",
"description": "A React component based on MediaRecorder() API to record audio/video streams",
"main": "index.js",
"scripts": {
"build": "tsc && jsmin -o ./lib/index.js ./lib/index.js",
"prepare": "npm run build"
},
"files": [
"lib"
],
"repository": {
"type": "git",
"url": "git+https://github.com/giridharangm/react-media-recorder.git"
},
"keywords": [
"react",
"recorder",
"voice recording",
"video recording",
"media recording",
"getusermedia",
"MediaRecorder",
"getDisplayMedia",
"screen recorder",
"video recorder",
"audio recorder"
],
"author": "Giridharan GM",
"license": "MIT",
"bugs": {
"url": "https://github.com/giridharangm/react-media-recorder/issues"
},
"homepage": "https://github.com/giridharangm/react-media-recorder#readme",
"devDependencies": {
"@types/react": "^16.9.11",
"jsmin": "^1.0.1",
"typescript": "^4.4.3"
},
"types": "./lib/index.d.ts",
"dependencies": {
"extendable-media-recorder": "^6.6.5",
"extendable-media-recorder-wav-encoder": "^7.0.68"
}
}
================================================
FILE: src/index.ts
================================================
import {
register,
MediaRecorder as ExtendableMediaRecorder,
IMediaRecorder,
} from "extendable-media-recorder";
import { ReactElement, useCallback, useEffect, useRef, useState } from "react";
import { connect } from "extendable-media-recorder-wav-encoder";
export type ReactMediaRecorderRenderProps = {
error: string;
muteAudio: () => void;
unMuteAudio: () => void;
startRecording: () => void;
pauseRecording: () => void;
resumeRecording: () => void;
stopRecording: () => void;
mediaBlobUrl: undefined | string;
status: StatusMessages;
isAudioMuted: boolean;
previewStream: MediaStream | null;
previewAudioStream: MediaStream | null;
clearBlobUrl: () => void;
};
export type ReactMediaRecorderHookProps = {
audio?: boolean | MediaTrackConstraints;
video?: boolean | MediaTrackConstraints;
screen?: boolean;
selfBrowserSurface?: SelfBrowserSurface;
preferCurrentTab?: PreferCurrentTab;
onStop?: (blobUrl: string, blob: Blob) => void;
onStart?: () => void;
blobPropertyBag?: BlobPropertyBag;
mediaRecorderOptions?: MediaRecorderOptions | undefined;
customMediaStream?: MediaStream | null;
stopStreamsOnStop?: boolean;
askPermissionOnMount?: boolean;
};
export type ReactMediaRecorderProps = ReactMediaRecorderHookProps & {
render: (props: ReactMediaRecorderRenderProps) => ReactElement;
};
/**
* Experimental (optional).
* An enumerated value specifying whether the browser should allow the user to select the current tab for capture.
* This helps to avoid the "infinite hall of mirrors" effect experienced when a video conferencing app inadvertently shares its own display.
* Possible values are include, which hints that the browser should include the current tab in the choices offered for capture,
* and exclude, which hints that it should be excluded.
* A default value is not mandated by the spec.
* See specs at: https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getDisplayMedia#selfbrowsersurface
*/
export type SelfBrowserSurface = undefined | 'include' | 'exclude';
/**
* Experimental (optional).
* A boolean; a value of true instructs the browser to offer the current tab as the most prominent capture source, i.e. as a separate "This Tab" option in the "Choose what to share" options presented to the user.
* This is useful as many app types generally just want to share the current tab.
* See specs at: https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getDisplayMedia#prefercurrenttab
*/
export type PreferCurrentTab = true | false;
export type StatusMessages =
| "media_aborted"
| "permission_denied"
| "no_specified_media_found"
| "media_in_use"
| "invalid_media_constraints"
| "no_constraints"
| "recorder_error"
| "idle"
| "acquiring_media"
| "delayed_start"
| "recording"
| "stopping"
| "stopped"
| "paused";
export enum RecorderErrors {
AbortError = "media_aborted",
NotAllowedError = "permission_denied",
NotFoundError = "no_specified_media_found",
NotReadableError = "media_in_use",
OverconstrainedError = "invalid_media_constraints",
TypeError = "no_constraints",
NONE = "",
NO_RECORDER = "recorder_error",
}
export function useReactMediaRecorder({
audio = true,
video = false,
selfBrowserSurface = undefined,
preferCurrentTab = false,
onStop = () => null,
onStart = () => null,
blobPropertyBag,
screen = false,
mediaRecorderOptions = undefined,
customMediaStream = null,
stopStreamsOnStop = true,
askPermissionOnMount = false,
}: ReactMediaRecorderHookProps): ReactMediaRecorderRenderProps {
const mediaRecorder = useRef<IMediaRecorder | null>(null);
const mediaChunks = useRef<Blob[]>([]);
const mediaStream = useRef<MediaStream | null>(null);
const [status, setStatus] = useState<StatusMessages>("idle");
const [isAudioMuted, setIsAudioMuted] = useState<boolean>(false);
const [mediaBlobUrl, setMediaBlobUrl] = useState<string | undefined>(
undefined
);
const [error, setError] = useState<keyof typeof RecorderErrors>("NONE");
const [init, setInit] = useState(false);
useEffect(() => {
// avoid re-registering the encoder
if (init) {
return;
}
const setup = async () => {
try {
await register(await connect());
} catch (e) {
//
}
};
setup();
setInit(true);
}, []);
const getMediaStream = useCallback(async () => {
setStatus("acquiring_media");
const requiredMedia: MediaStreamConstraints = {
audio: typeof audio === "boolean" ? !!audio : audio,
video: typeof video === "boolean" ? !!video : video,
};
try {
if (customMediaStream) {
mediaStream.current = customMediaStream;
} else if (screen) {
const stream = (await window.navigator.mediaDevices.getDisplayMedia({
video: video || true,
// @ts-ignore experimental feature, useful for Chrome
selfBrowserSurface,
preferCurrentTab
})) as MediaStream;
stream.getVideoTracks()[0].addEventListener("ended", () => {
stopRecording();
});
if (audio) {
const audioStream = await window.navigator.mediaDevices.getUserMedia({
audio,
});
audioStream
.getAudioTracks()
.forEach((audioTrack) => stream.addTrack(audioTrack));
}
mediaStream.current = stream;
} else {
const stream = await window.navigator.mediaDevices.getUserMedia(
requiredMedia
);
mediaStream.current = stream;
}
setStatus("idle");
} catch (error: any) {
setError(error.name);
setStatus("idle");
}
}, [audio, video, screen]);
useEffect(() => {
if (!window.MediaRecorder) {
throw new Error("Unsupported Browser");
}
if (screen) {
if (!window.navigator.mediaDevices.getDisplayMedia) {
throw new Error("This browser doesn't support screen capturing");
}
}
const checkConstraints = (mediaType: MediaTrackConstraints) => {
const supportedMediaConstraints =
navigator.mediaDevices.getSupportedConstraints();
const unSupportedConstraints = Object.keys(mediaType).filter(
(constraint) =>
!(supportedMediaConstraints as { [key: string]: any })[constraint]
);
if (unSupportedConstraints.length > 0) {
console.error(
`The constraints ${unSupportedConstraints.join(
","
)} are not supported on this browser. Please check your ReactMediaRecorder component.`
);
}
};
if (typeof audio === "object") {
checkConstraints(audio);
}
if (typeof video === "object") {
checkConstraints(video);
}
if (mediaRecorderOptions && mediaRecorderOptions.mimeType) {
if (!MediaRecorder.isTypeSupported(mediaRecorderOptions.mimeType)) {
console.error(
`The specified MIME type supplied to MediaRecorder is not supported by this browser`
);
}
}
if (!mediaStream.current && askPermissionOnMount) {
getMediaStream();
}
return () => {
if (mediaStream.current) {
const tracks = mediaStream.current.getTracks();
tracks.forEach((track) => track.clone().stop());
}
};
}, [
audio,
screen,
video,
getMediaStream,
mediaRecorderOptions,
askPermissionOnMount,
]);
// Media Recorder Handlers
const startRecording = async () => {
setError("NONE");
if (!mediaStream.current) {
await getMediaStream();
}
if (mediaStream.current) {
const isStreamEnded = mediaStream.current
.getTracks()
.some((track) => track.readyState === "ended");
if (isStreamEnded) {
await getMediaStream();
}
// User blocked the permissions (getMediaStream errored out)
if (!mediaStream.current.active) {
return;
}
mediaRecorder.current = new ExtendableMediaRecorder(
mediaStream.current,
mediaRecorderOptions || undefined
);
mediaRecorder.current.ondataavailable = onRecordingActive;
mediaRecorder.current.onstop = onRecordingStop;
mediaRecorder.current.onstart = onRecordingStart;
mediaRecorder.current.onerror = () => {
setError("NO_RECORDER");
setStatus("idle");
};
mediaRecorder.current.start();
setStatus("recording");
}
};
const onRecordingActive = ({ data }: BlobEvent) => {
mediaChunks.current.push(data);
};
const onRecordingStart = () => {
onStart();
};
const onRecordingStop = () => {
const [chunk] = mediaChunks.current;
const blobProperty: BlobPropertyBag = Object.assign(
{ type: chunk.type },
blobPropertyBag || (video ? { type: "video/mp4" } : { type: "audio/wav" })
);
const blob = new Blob(mediaChunks.current, blobProperty);
const url = URL.createObjectURL(blob);
setStatus("stopped");
setMediaBlobUrl(url);
onStop(url, blob);
};
const muteAudio = (mute: boolean) => {
setIsAudioMuted(mute);
if (mediaStream.current) {
mediaStream.current
.getAudioTracks()
.forEach((audioTrack) => (audioTrack.enabled = !mute));
}
};
const pauseRecording = () => {
if (mediaRecorder.current && mediaRecorder.current.state === "recording") {
setStatus("paused");
mediaRecorder.current.pause();
}
};
const resumeRecording = () => {
if (mediaRecorder.current && mediaRecorder.current.state === "paused") {
setStatus("recording");
mediaRecorder.current.resume();
}
};
const stopRecording = () => {
if (mediaRecorder.current) {
if (mediaRecorder.current.state !== "inactive") {
setStatus("stopping");
mediaRecorder.current.stop();
if (stopStreamsOnStop) {
mediaStream.current &&
mediaStream.current.getTracks().forEach((track) => track.stop());
}
mediaChunks.current = [];
}
}
};
return {
error: RecorderErrors[error],
muteAudio: () => muteAudio(true),
unMuteAudio: () => muteAudio(false),
startRecording,
pauseRecording,
resumeRecording,
stopRecording,
mediaBlobUrl,
status,
isAudioMuted,
previewStream: mediaStream.current
? new MediaStream(mediaStream.current.getVideoTracks())
: null,
previewAudioStream: mediaStream.current
? new MediaStream(mediaStream.current.getAudioTracks())
: null,
clearBlobUrl: () => {
if (mediaBlobUrl) {
URL.revokeObjectURL(mediaBlobUrl);
}
setMediaBlobUrl(undefined);
setStatus("idle");
},
};
}
export const ReactMediaRecorder = (props: ReactMediaRecorderProps) =>
props.render(useReactMediaRecorder(props));
================================================
FILE: tsconfig.json
================================================
{
"compilerOptions": {
/* Basic Options */
// "incremental": true, /* Enable incremental compilation */
"target": "es5" /* Specify ECMAScript target version: 'ES3' (default), 'ES5', 'ES2015', 'ES2016', 'ES2017', 'ES2018', 'ES2019' or 'ESNEXT'. */,
"module": "commonjs" /* Specify module code generation: 'none', 'commonjs', 'amd', 'system', 'umd', 'es2015', or 'ESNext'. */,
"lib": [
"ES2015",
"DOM"
] /* Specify library files to be included in the compilation. */,
// "allowJs": true, /* Allow javascript files to be compiled. */
// "checkJs": true, /* Report errors in .js files. */
// "jsx": "preserve", /* Specify JSX code generation: 'preserve', 'react-native', or 'react'. */
"declaration": true /* Generates corresponding '.d.ts' file. */,
// "declarationMap": true, /* Generates a sourcemap for each corresponding '.d.ts' file. */
// "sourceMap": true, /* Generates corresponding '.map' file. */
// "outFile": "./", /* Concatenate and emit output to single file. */
"outDir": "./lib" /* Redirect output structure to the directory. */,
"rootDir": "./src" /* Specify the root directory of input files. Use to control the output directory structure with --outDir. */,
// "composite": true, /* Enable project compilation */
// "tsBuildInfoFile": "./", /* Specify file to store incremental compilation information */
// "removeComments": true, /* Do not emit comments to output. */
// "noEmit": true, /* Do not emit outputs. */
// "importHelpers": true, /* Import emit helpers from 'tslib'. */
// "downlevelIteration": true, /* Provide full support for iterables in 'for-of', spread, and destructuring when targeting 'ES5' or 'ES3'. */
// "isolatedModules": true, /* Transpile each file as a separate module (similar to 'ts.transpileModule'). */
/* Strict Type-Checking Options */
"strict": true /* Enable all strict type-checking options. */,
// "noImplicitAny": true, /* Raise error on expressions and declarations with an implied 'any' type. */
// "strictNullChecks": true, /* Enable strict null checks. */
// "strictFunctionTypes": true, /* Enable strict checking of function types. */
// "strictBindCallApply": true, /* Enable strict 'bind', 'call', and 'apply' methods on functions. */
// "strictPropertyInitialization": true, /* Enable strict checking of property initialization in classes. */
// "noImplicitThis": true, /* Raise error on 'this' expressions with an implied 'any' type. */
// "alwaysStrict": true, /* Parse in strict mode and emit "use strict" for each source file. */
/* Additional Checks */
// "noUnusedLocals": true, /* Report errors on unused locals. */
// "noUnusedParameters": true, /* Report errors on unused parameters. */
// "noImplicitReturns": true, /* Report error when not all code paths in function return a value. */
// "noFallthroughCasesInSwitch": true, /* Report errors for fallthrough cases in switch statement. */
/* Module Resolution Options */
// "moduleResolution": "node", /* Specify module resolution strategy: 'node' (Node.js) or 'classic' (TypeScript pre-1.6). */
// "baseUrl": "./", /* Base directory to resolve non-absolute module names. */
// "paths": {}, /* A series of entries which re-map imports to lookup locations relative to the 'baseUrl'. */
// "rootDirs": [], /* List of root folders whose combined content represents the structure of the project at runtime. */
// "typeRoots": [], /* List of folders to include type definitions from. */
// "types": [], /* Type declaration files to be included in compilation. */
// "allowSyntheticDefaultImports": true, /* Allow default imports from modules with no default export. This does not affect code emit, just typechecking. */
"esModuleInterop": true /* Enables emit interoperability between CommonJS and ES Modules via creation of namespace objects for all imports. Implies 'allowSyntheticDefaultImports'. */,
// "preserveSymlinks": true, /* Do not resolve the real path of symlinks. */
// "allowUmdGlobalAccess": true, /* Allow accessing UMD globals from modules. */
/* Source Map Options */
// "sourceRoot": "", /* Specify the location where debugger should locate TypeScript files instead of source locations. */
// "mapRoot": "", /* Specify the location where debugger should locate map files instead of generated locations. */
// "inlineSourceMap": true, /* Emit a single file with source maps instead of having a separate file. */
// "inlineSources": true, /* Emit the source alongside the sourcemaps within a single file; requires '--inlineSourceMap' or '--sourceMap' to be set. */
/* Experimental Options */
// "experimentalDecorators": true, /* Enables experimental support for ES7 decorators. */
// "emitDecoratorMetadata": true, /* Enables experimental support for emitting type metadata for decorators. */
/* Advanced Options */
"forceConsistentCasingInFileNames": true /* Disallow inconsistently-cased references to the same file. */
}
}