Repository: full-stack-deep-learning/fsdl-text-recognizer-2022-labs
Branch: main
Commit: 485b963cb41d
Files: 424
Total size: 3.9 MB

Directory structure:
fsdl-text-recognizer-2022-labs/

├── .flake8
├── .github/
│   ├── ISSUE_TEMPLATE/
│   │   └── this-repository-is-automatically-generated--don-t-open-issues-here-.md
│   └── pull_request_template.md
├── .gitignore
├── .pre-commit-config.yaml
├── LICENSE.txt
├── Makefile
├── data/
│   └── raw/
│       ├── emnist/
│       │   ├── metadata.toml
│       │   └── readme.md
│       ├── fsdl_handwriting/
│       │   ├── fsdl_handwriting.jsonl
│       │   ├── manifest.csv
│       │   ├── metadata.toml
│       │   └── readme.md
│       └── iam/
│           ├── metadata.toml
│           └── readme.md
├── environment.yml
├── lab01/
│   ├── notebooks/
│   │   └── lab01_pytorch.ipynb
│   └── text_recognizer/
│       ├── __init__.py
│       ├── data/
│       │   └── util.py
│       ├── metadata/
│       │   ├── mnist.py
│       │   └── shared.py
│       ├── models/
│       │   ├── __init__.py
│       │   └── mlp.py
│       └── util.py
├── lab02/
│   ├── notebooks/
│   │   ├── lab01_pytorch.ipynb
│   │   ├── lab02a_lightning.ipynb
│   │   └── lab02b_cnn.ipynb
│   ├── text_recognizer/
│   │   ├── __init__.py
│   │   ├── data/
│   │   │   ├── __init__.py
│   │   │   ├── base_data_module.py
│   │   │   ├── emnist.py
│   │   │   ├── emnist_essentials.json
│   │   │   ├── emnist_lines.py
│   │   │   ├── mnist.py
│   │   │   ├── sentence_generator.py
│   │   │   └── util.py
│   │   ├── lit_models/
│   │   │   ├── __init__.py
│   │   │   └── base.py
│   │   ├── metadata/
│   │   │   ├── emnist.py
│   │   │   ├── emnist_lines.py
│   │   │   ├── mnist.py
│   │   │   └── shared.py
│   │   ├── models/
│   │   │   ├── __init__.py
│   │   │   ├── cnn.py
│   │   │   ├── line_cnn_simple.py
│   │   │   └── mlp.py
│   │   ├── stems/
│   │   │   └── image.py
│   │   └── util.py
│   └── training/
│       ├── __init__.py
│       ├── run_experiment.py
│       └── util.py
├── lab03/
│   ├── notebooks/
│   │   ├── lab01_pytorch.ipynb
│   │   ├── lab02a_lightning.ipynb
│   │   ├── lab02b_cnn.ipynb
│   │   └── lab03_transformers.ipynb
│   ├── text_recognizer/
│   │   ├── __init__.py
│   │   ├── data/
│   │   │   ├── __init__.py
│   │   │   ├── base_data_module.py
│   │   │   ├── emnist.py
│   │   │   ├── emnist_essentials.json
│   │   │   ├── emnist_lines.py
│   │   │   ├── iam.py
│   │   │   ├── iam_paragraphs.py
│   │   │   ├── mnist.py
│   │   │   ├── sentence_generator.py
│   │   │   └── util.py
│   │   ├── lit_models/
│   │   │   ├── __init__.py
│   │   │   ├── base.py
│   │   │   ├── metrics.py
│   │   │   ├── transformer.py
│   │   │   └── util.py
│   │   ├── metadata/
│   │   │   ├── emnist.py
│   │   │   ├── emnist_lines.py
│   │   │   ├── iam.py
│   │   │   ├── iam_paragraphs.py
│   │   │   ├── mnist.py
│   │   │   └── shared.py
│   │   ├── models/
│   │   │   ├── __init__.py
│   │   │   ├── cnn.py
│   │   │   ├── line_cnn_simple.py
│   │   │   ├── mlp.py
│   │   │   ├── resnet_transformer.py
│   │   │   └── transformer_util.py
│   │   ├── stems/
│   │   │   ├── image.py
│   │   │   └── paragraph.py
│   │   └── util.py
│   └── training/
│       ├── __init__.py
│       ├── run_experiment.py
│       └── util.py
├── lab04/
│   ├── notebooks/
│   │   ├── lab01_pytorch.ipynb
│   │   ├── lab02a_lightning.ipynb
│   │   ├── lab02b_cnn.ipynb
│   │   ├── lab03_transformers.ipynb
│   │   └── lab04_experiments.ipynb
│   ├── text_recognizer/
│   │   ├── __init__.py
│   │   ├── callbacks/
│   │   │   ├── __init__.py
│   │   │   ├── imtotext.py
│   │   │   ├── model.py
│   │   │   ├── optim.py
│   │   │   └── util.py
│   │   ├── data/
│   │   │   ├── __init__.py
│   │   │   ├── base_data_module.py
│   │   │   ├── emnist.py
│   │   │   ├── emnist_essentials.json
│   │   │   ├── emnist_lines.py
│   │   │   ├── iam.py
│   │   │   ├── iam_lines.py
│   │   │   ├── iam_paragraphs.py
│   │   │   ├── mnist.py
│   │   │   ├── sentence_generator.py
│   │   │   └── util.py
│   │   ├── lit_models/
│   │   │   ├── __init__.py
│   │   │   ├── base.py
│   │   │   ├── metrics.py
│   │   │   ├── transformer.py
│   │   │   └── util.py
│   │   ├── metadata/
│   │   │   ├── emnist.py
│   │   │   ├── emnist_lines.py
│   │   │   ├── iam.py
│   │   │   ├── iam_lines.py
│   │   │   ├── iam_paragraphs.py
│   │   │   ├── mnist.py
│   │   │   └── shared.py
│   │   ├── models/
│   │   │   ├── __init__.py
│   │   │   ├── cnn.py
│   │   │   ├── line_cnn.py
│   │   │   ├── line_cnn_simple.py
│   │   │   ├── line_cnn_transformer.py
│   │   │   ├── mlp.py
│   │   │   ├── resnet_transformer.py
│   │   │   └── transformer_util.py
│   │   ├── stems/
│   │   │   ├── image.py
│   │   │   ├── line.py
│   │   │   └── paragraph.py
│   │   └── util.py
│   └── training/
│       ├── __init__.py
│       ├── run_experiment.py
│       └── util.py
├── lab05/
│   ├── .flake8
│   ├── .github/
│   │   └── workflows/
│   │       └── pre-commit.yml
│   ├── .pre-commit-config.yaml
│   ├── notebooks/
│   │   ├── lab01_pytorch.ipynb
│   │   ├── lab02a_lightning.ipynb
│   │   ├── lab02b_cnn.ipynb
│   │   ├── lab03_transformers.ipynb
│   │   ├── lab04_experiments.ipynb
│   │   └── lab05_troubleshooting.ipynb
│   ├── tasks/
│   │   └── lint.sh
│   ├── text_recognizer/
│   │   ├── __init__.py
│   │   ├── callbacks/
│   │   │   ├── __init__.py
│   │   │   ├── imtotext.py
│   │   │   ├── model.py
│   │   │   ├── optim.py
│   │   │   └── util.py
│   │   ├── data/
│   │   │   ├── __init__.py
│   │   │   ├── base_data_module.py
│   │   │   ├── emnist.py
│   │   │   ├── emnist_essentials.json
│   │   │   ├── emnist_lines.py
│   │   │   ├── fake_images.py
│   │   │   ├── iam.py
│   │   │   ├── iam_lines.py
│   │   │   ├── iam_paragraphs.py
│   │   │   ├── mnist.py
│   │   │   ├── sentence_generator.py
│   │   │   └── util.py
│   │   ├── lit_models/
│   │   │   ├── __init__.py
│   │   │   ├── base.py
│   │   │   ├── metrics.py
│   │   │   ├── transformer.py
│   │   │   └── util.py
│   │   ├── metadata/
│   │   │   ├── emnist.py
│   │   │   ├── emnist_lines.py
│   │   │   ├── iam.py
│   │   │   ├── iam_lines.py
│   │   │   ├── iam_paragraphs.py
│   │   │   ├── mnist.py
│   │   │   └── shared.py
│   │   ├── models/
│   │   │   ├── __init__.py
│   │   │   ├── cnn.py
│   │   │   ├── line_cnn.py
│   │   │   ├── line_cnn_simple.py
│   │   │   ├── line_cnn_transformer.py
│   │   │   ├── mlp.py
│   │   │   ├── resnet_transformer.py
│   │   │   └── transformer_util.py
│   │   ├── stems/
│   │   │   ├── image.py
│   │   │   ├── line.py
│   │   │   └── paragraph.py
│   │   ├── tests/
│   │   │   ├── test_callback_utils.py
│   │   │   └── test_iam.py
│   │   └── util.py
│   └── training/
│       ├── __init__.py
│       ├── run_experiment.py
│       ├── tests/
│       │   ├── test_memorize_iam.sh
│       │   └── test_run_experiment.sh
│       └── util.py
├── lab06/
│   ├── .flake8
│   ├── .github/
│   │   └── workflows/
│   │       └── pre-commit.yml
│   ├── .pre-commit-config.yaml
│   ├── notebooks/
│   │   ├── lab01_pytorch.ipynb
│   │   ├── lab02a_lightning.ipynb
│   │   ├── lab02b_cnn.ipynb
│   │   ├── lab03_transformers.ipynb
│   │   ├── lab04_experiments.ipynb
│   │   ├── lab05_troubleshooting.ipynb
│   │   └── lab06_data.ipynb
│   ├── tasks/
│   │   └── lint.sh
│   ├── text_recognizer/
│   │   ├── __init__.py
│   │   ├── callbacks/
│   │   │   ├── __init__.py
│   │   │   ├── imtotext.py
│   │   │   ├── model.py
│   │   │   ├── optim.py
│   │   │   └── util.py
│   │   ├── data/
│   │   │   ├── __init__.py
│   │   │   ├── base_data_module.py
│   │   │   ├── emnist.py
│   │   │   ├── emnist_essentials.json
│   │   │   ├── emnist_lines.py
│   │   │   ├── fake_images.py
│   │   │   ├── iam.py
│   │   │   ├── iam_lines.py
│   │   │   ├── iam_original_and_synthetic_paragraphs.py
│   │   │   ├── iam_paragraphs.py
│   │   │   ├── iam_synthetic_paragraphs.py
│   │   │   ├── mnist.py
│   │   │   ├── sentence_generator.py
│   │   │   └── util.py
│   │   ├── lit_models/
│   │   │   ├── __init__.py
│   │   │   ├── base.py
│   │   │   ├── metrics.py
│   │   │   ├── transformer.py
│   │   │   └── util.py
│   │   ├── metadata/
│   │   │   ├── emnist.py
│   │   │   ├── emnist_lines.py
│   │   │   ├── iam.py
│   │   │   ├── iam_lines.py
│   │   │   ├── iam_paragraphs.py
│   │   │   ├── iam_synthetic_paragraphs.py
│   │   │   ├── mnist.py
│   │   │   └── shared.py
│   │   ├── models/
│   │   │   ├── __init__.py
│   │   │   ├── cnn.py
│   │   │   ├── line_cnn.py
│   │   │   ├── line_cnn_simple.py
│   │   │   ├── line_cnn_transformer.py
│   │   │   ├── mlp.py
│   │   │   ├── resnet_transformer.py
│   │   │   └── transformer_util.py
│   │   ├── stems/
│   │   │   ├── image.py
│   │   │   ├── line.py
│   │   │   └── paragraph.py
│   │   ├── tests/
│   │   │   ├── test_callback_utils.py
│   │   │   └── test_iam.py
│   │   └── util.py
│   └── training/
│       ├── __init__.py
│       ├── run_experiment.py
│       ├── tests/
│       │   ├── test_memorize_iam.sh
│       │   └── test_run_experiment.sh
│       └── util.py
├── lab07/
│   ├── .flake8
│   ├── .github/
│   │   └── workflows/
│   │       └── pre-commit.yml
│   ├── .pre-commit-config.yaml
│   ├── api_serverless/
│   │   ├── Dockerfile
│   │   ├── __init__.py
│   │   └── api.py
│   ├── app_gradio/
│   │   ├── Dockerfile
│   │   ├── README.md
│   │   ├── __init__.py
│   │   ├── app.py
│   │   └── tests/
│   │       └── test_app.py
│   ├── notebooks/
│   │   ├── lab01_pytorch.ipynb
│   │   ├── lab02a_lightning.ipynb
│   │   ├── lab02b_cnn.ipynb
│   │   ├── lab03_transformers.ipynb
│   │   ├── lab04_experiments.ipynb
│   │   ├── lab05_troubleshooting.ipynb
│   │   ├── lab06_data.ipynb
│   │   └── lab07_deployment.ipynb
│   ├── tasks/
│   │   └── lint.sh
│   ├── text_recognizer/
│   │   ├── __init__.py
│   │   ├── callbacks/
│   │   │   ├── __init__.py
│   │   │   ├── imtotext.py
│   │   │   ├── model.py
│   │   │   ├── optim.py
│   │   │   └── util.py
│   │   ├── data/
│   │   │   ├── __init__.py
│   │   │   ├── base_data_module.py
│   │   │   ├── emnist.py
│   │   │   ├── emnist_essentials.json
│   │   │   ├── emnist_lines.py
│   │   │   ├── fake_images.py
│   │   │   ├── iam.py
│   │   │   ├── iam_lines.py
│   │   │   ├── iam_original_and_synthetic_paragraphs.py
│   │   │   ├── iam_paragraphs.py
│   │   │   ├── iam_synthetic_paragraphs.py
│   │   │   ├── mnist.py
│   │   │   ├── sentence_generator.py
│   │   │   └── util.py
│   │   ├── lit_models/
│   │   │   ├── __init__.py
│   │   │   ├── base.py
│   │   │   ├── metrics.py
│   │   │   ├── transformer.py
│   │   │   └── util.py
│   │   ├── metadata/
│   │   │   ├── emnist.py
│   │   │   ├── emnist_lines.py
│   │   │   ├── iam.py
│   │   │   ├── iam_lines.py
│   │   │   ├── iam_paragraphs.py
│   │   │   ├── iam_synthetic_paragraphs.py
│   │   │   ├── mnist.py
│   │   │   └── shared.py
│   │   ├── models/
│   │   │   ├── __init__.py
│   │   │   ├── cnn.py
│   │   │   ├── line_cnn.py
│   │   │   ├── line_cnn_simple.py
│   │   │   ├── line_cnn_transformer.py
│   │   │   ├── mlp.py
│   │   │   ├── resnet_transformer.py
│   │   │   └── transformer_util.py
│   │   ├── paragraph_text_recognizer.py
│   │   ├── stems/
│   │   │   ├── image.py
│   │   │   ├── line.py
│   │   │   └── paragraph.py
│   │   ├── tests/
│   │   │   ├── test_callback_utils.py
│   │   │   └── test_iam.py
│   │   └── util.py
│   └── training/
│       ├── __init__.py
│       ├── cleanup_artifacts.py
│       ├── run_experiment.py
│       ├── stage_model.py
│       ├── tests/
│       │   ├── test_memorize_iam.sh
│       │   ├── test_model_development.sh
│       │   └── test_run_experiment.sh
│       └── util.py
├── lab08/
│   ├── .flake8
│   ├── .github/
│   │   └── workflows/
│   │       └── pre-commit.yml
│   ├── .pre-commit-config.yaml
│   ├── api_serverless/
│   │   ├── Dockerfile
│   │   ├── __init__.py
│   │   └── api.py
│   ├── app_gradio/
│   │   ├── Dockerfile
│   │   ├── README.md
│   │   ├── __init__.py
│   │   ├── app.py
│   │   ├── flagging.py
│   │   ├── s3_util.py
│   │   └── tests/
│   │       └── test_app.py
│   ├── notebooks/
│   │   ├── lab01_pytorch.ipynb
│   │   ├── lab02a_lightning.ipynb
│   │   ├── lab02b_cnn.ipynb
│   │   ├── lab03_transformers.ipynb
│   │   ├── lab04_experiments.ipynb
│   │   ├── lab05_troubleshooting.ipynb
│   │   ├── lab06_data.ipynb
│   │   ├── lab07_deployment.ipynb
│   │   └── lab08_monitoring.ipynb
│   ├── tasks/
│   │   └── lint.sh
│   ├── text_recognizer/
│   │   ├── __init__.py
│   │   ├── callbacks/
│   │   │   ├── __init__.py
│   │   │   ├── imtotext.py
│   │   │   ├── model.py
│   │   │   ├── optim.py
│   │   │   └── util.py
│   │   ├── data/
│   │   │   ├── __init__.py
│   │   │   ├── base_data_module.py
│   │   │   ├── emnist.py
│   │   │   ├── emnist_essentials.json
│   │   │   ├── emnist_lines.py
│   │   │   ├── fake_images.py
│   │   │   ├── iam.py
│   │   │   ├── iam_lines.py
│   │   │   ├── iam_original_and_synthetic_paragraphs.py
│   │   │   ├── iam_paragraphs.py
│   │   │   ├── iam_synthetic_paragraphs.py
│   │   │   ├── mnist.py
│   │   │   ├── sentence_generator.py
│   │   │   └── util.py
│   │   ├── lit_models/
│   │   │   ├── __init__.py
│   │   │   ├── base.py
│   │   │   ├── metrics.py
│   │   │   ├── transformer.py
│   │   │   └── util.py
│   │   ├── metadata/
│   │   │   ├── emnist.py
│   │   │   ├── emnist_lines.py
│   │   │   ├── iam.py
│   │   │   ├── iam_lines.py
│   │   │   ├── iam_paragraphs.py
│   │   │   ├── iam_synthetic_paragraphs.py
│   │   │   ├── mnist.py
│   │   │   └── shared.py
│   │   ├── models/
│   │   │   ├── __init__.py
│   │   │   ├── cnn.py
│   │   │   ├── line_cnn.py
│   │   │   ├── line_cnn_simple.py
│   │   │   ├── line_cnn_transformer.py
│   │   │   ├── mlp.py
│   │   │   ├── resnet_transformer.py
│   │   │   └── transformer_util.py
│   │   ├── paragraph_text_recognizer.py
│   │   ├── stems/
│   │   │   ├── image.py
│   │   │   ├── line.py
│   │   │   └── paragraph.py
│   │   ├── tests/
│   │   │   ├── test_callback_utils.py
│   │   │   └── test_iam.py
│   │   └── util.py
│   └── training/
│       ├── __init__.py
│       ├── cleanup_artifacts.py
│       ├── run_experiment.py
│       ├── stage_model.py
│       ├── tests/
│       │   ├── test_memorize_iam.sh
│       │   ├── test_model_development.sh
│       │   └── test_run_experiment.sh
│       └── util.py
├── overview.ipynb
├── pyproject.toml
├── readme.md
├── requirements/
│   ├── dev-lint.in
│   ├── dev.in
│   ├── dev.txt
│   ├── prod.in
│   └── prod.txt
└── setup/
    └── readme.md

================================================
FILE CONTENTS
================================================

================================================
FILE: .flake8
================================================
[flake8]
select = ANN,B,B9,BLK,C,D,E,F,I,S,W
  # only check selected error codes
max-complexity = 12
  # C9 - flake8 McCabe Complexity checker -- threshold
max-line-length = 120
  # E501 - flake8 -- line length too long, actually handled by black
extend-ignore =
  # E W - flake8 PEP style check
    E203,E402,E501,W503,  # whitespace, import, line length, binary operator line breaks
  # S - flake8-bandit safety check
    S101,S113,S311,S105,  # assert removed in bytecode, no request timeout, pRNG not secure, hardcoded password
  # ANN - flake8-annotations type annotation check
    ANN,ANN002,ANN003,ANN101,ANN102,ANN202,  # ignore all for now, but always ignore some
  # D1 - flake8-docstrings docstring style check
    D100,D102,D103,D104,D105,  # missing docstrings
  # D2 D4 - flake8-docstrings docstring style check
    D200,D205,D400,D401,  # whitespace issues and first line content
  # DAR - flake8-darglint docstring correctness check
    DAR103,  # mismatched or missing type in docstring
application-import-names = app_gradio,text_recognizer,tests,training
  # flake8-import-order: which names are first party?
import-order-style = google
  # flake8-import-order: which import order style guide do we use?
docstring-convention = numpy
  # flake8-docstrings: which docstring style guide do we use?
strictness = short
  # darglint: how "strict" are we with docstring completeness?
docstring-style = numpy
  # darglint: which docstring style guide do we use?
suppress-none-returning = true
  # flake8-annotations: do we allow un-annotated Nones in returns?
mypy-init-return = true
  # flake8-annotations: do we allow init to have no return annotation?
per-file-ignores =
  # list of case-by-case ignores, see files for details
  */__init__.py:F401,I
  */data/*.py:DAR
  data/*.py:F,I
  *text_recognizer/util.py:DAR101,F401
  *training/run_experiment.py:I202
  *app_gradio/app.py:I202
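
flake8 discovers this file automatically when run from the repository root; the pre-commit hook below also passes it explicitly with `--config`. A sketch of a manual lint run (assuming flake8 and the plugins listed in `.pre-commit-config.yaml` are installed; the canonical entry point is `tasks/lint.sh`):

```shell
# Lint the first-party packages named in application-import-names above:
#
#   flake8 --config .flake8 text_recognizer training app_gradio
#
# Check that flake8 is available before relying on the hook locally:
command -v flake8 >/dev/null 2>&1 && flake8 --version || echo "flake8 not on PATH"
```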


================================================
FILE: .github/ISSUE_TEMPLATE/this-repository-is-automatically-generated--don-t-open-issues-here-.md
================================================
---
name: This repository is automatically generated! Don't open issues here.
about: Open issues in the generating repo instead, at https://fsdl.me/2022-repo.
title: ''
labels: ''
assignees: ''

---

Thanks for your interest in contributing!

This repository is automatically generated from a source repo,
so the preferred place for issues and the only place for PRs is there.

Please open your issues [there](https://github.com/full-stack-deep-learning/fsdl-text-recognizer-2022).
Looking forward to hearing from you!


================================================
FILE: .github/pull_request_template.md
================================================
Thanks for your interest in contributing!

This repository is automatically generated from [a source repo](https://fsdl.me/2022-repo), so the preferred place for issues and the only place for PRs is there.

Please open your issues [there](https://github.com/full-stack-deep-learning/fsdl-text-recognizer-2022).

Looking forward to hearing from you!


================================================
FILE: .gitignore
================================================
# Data
data/downloaded
data/processed
data/interim


# Editors
.vscode
*.sw?
*~

# Node
node_modules

# Python
__pycache__
.pytest_cache

# notebooks
.ipynb_checkpoints
*.nbconvert*.ipynb
.notebook_test.sh

# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg

# logging
wandb
*.pt
*.ckpt
lightning_logs/
logs
*/training/logs
*/training/*sweep.yaml
flagged

# Misc
.aws/credentials
.DS_Store
.env
_labs
.mypy_cache
lab9/requirements.txt
.coverage*
/requirements.txt
requirements/dev-lint.txt
bootstrap.py
**/fixme.py
.server.env


================================================
FILE: .pre-commit-config.yaml
================================================
repos:
  # a set of useful Python-based pre-commit hooks
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.1.0
    hooks:
      # list of definitions and supported hooks: https://pre-commit.com/hooks.html
      - id: trailing-whitespace      # removes any whitespace at the ends of lines
      - id: check-toml               # check toml syntax by loading all toml files
      - id: check-yaml               # check yaml syntax by loading all yaml files
      - id: check-json               # check-json syntax by loading all json files
      - id: check-merge-conflict     # check for files with merge conflict strings
        args: ['--assume-in-merge']  #  and run this check even when not explicitly in a merge
      - id: check-added-large-files  # check that no "large" files have been added
        args: ['--maxkb=10240']      #  where large means 10MB+, as in Hugging Face's git server
      - id: debug-statements         # check for python debug statements (import pdb, breakpoint, etc.)
      - id: detect-private-key       # checks for private keys (BEGIN X PRIVATE KEY, etc.)

  # black python autoformatting
  - repo: https://github.com/psf/black
    rev: 22.3.0
    hooks:
      - id: black
    # additional configuration of black in pyproject.toml

  # flake8 python linter with all the fixins
  - repo: https://github.com/PyCQA/flake8
    rev: 3.9.2
    hooks:
      - id: flake8
        exclude: (lab01|lab02|lab03|lab04|lab06|lab07|lab08)
        additional_dependencies: [
          flake8-bandit, flake8-bugbear, flake8-docstrings,
          flake8-import-order, darglint, mypy, pycodestyle, pydocstyle]
        args: ["--config", ".flake8"]
    # additional configuration of flake8 and extensions in .flake8

  # shellcheck-py for linting shell files
  - repo: https://github.com/shellcheck-py/shellcheck-py
    rev: v0.8.0.4
    hooks:
      - id: shellcheck
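
Once registered, these hooks run on every commit; they can also be run manually. A sketch of the usual workflow (assuming the `pre-commit` package is installed, e.g. via the dev requirements):

```shell
# Register the hooks once per clone, then run them on demand:
#
#   pre-commit install                 # wire the hooks into .git/hooks
#   pre-commit run --all-files         # run every hook over the whole tree
#   pre-commit run black --all-files   # or a single hook, by id
#
# Note the flake8 hook excludes lab01-lab08: those trees are generated copies.
command -v pre-commit >/dev/null 2>&1 && pre-commit --version || echo "pre-commit not on PATH"
```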


================================================
FILE: LICENSE.txt
================================================
MIT License

Copyright (c) 2022 Full Stack Deep Learning, LLC

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


================================================
FILE: Makefile
================================================
# Arcane incantation to print all the other targets, from https://stackoverflow.com/a/26339924
help:
	@$(MAKE) -pRrq -f $(lastword $(MAKEFILE_LIST)) : 2>/dev/null | awk -v RS= -F: '/^# File/,/^# Finished Make data base/ {if ($$1 !~ "^[#.]") {print $$1}}' | sort | egrep -v -e '^[^[:alnum:]]' -e '^$@$$'

# Install exact Python and CUDA versions
conda-update:
	conda env update --prune -f environment.yml
	echo "!!!RUN THE conda activate COMMAND ABOVE RIGHT NOW!!!"

# Compile and install exact pip packages
pip-tools:
	pip install pip-tools==6.13.0 setuptools==67.7.2
	pip-compile requirements/prod.in && pip-compile requirements/dev.in
	pip-sync requirements/prod.txt requirements/dev.txt

# Compile and install the requirements for local linting (optional)
pip-tools-lint:
	pip install pip-tools==6.13.0 setuptools==67.7.2
	pip-compile requirements/prod.in && pip-compile requirements/dev.in && pip-compile requirements/dev-lint.in
	pip-sync requirements/prod.txt requirements/dev.txt requirements/dev-lint.txt

# Bump versions of transitive dependencies
pip-tools-upgrade:
	pip install pip-tools==6.13.0 setuptools==67.7.2
	pip-compile --upgrade requirements/prod.in && pip-compile --upgrade requirements/dev.in && pip-compile --upgrade requirements/dev-lint.in

# Example training command
train-mnist-cnn-ddp:
	python training/run_experiment.py --max_epochs=10 --gpus=-1 --accelerator=ddp --num_workers=20 --data_class=MNIST --model_class=CNN

# Lint
lint:
	tasks/lint.sh

# Test notebooks in source repo
test-notebooks:
	tasks/notebook_test.sh $(SELECT_BY)

# Test all lab notebooks in the folder for the provided lab InDeX
test-labs-up-to:
	cd lab$(IDX) && ./.notebook_test.sh

# Test only the notebooks for the provided lab InDeX
test-lab:
	cd lab$(IDX) && ./.notebook_test.sh $(IDX)
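
The test targets take the lab index through the `IDX` make variable. A sketch of typical invocations from the repository root (assuming make and the tooling each target calls are installed):

```shell
# Typical usage:
#
#   make help                 # list all targets via the awk incantation above
#   make conda-update         # create or update the conda environment
#   make pip-tools            # compile and sync pinned requirements
#   make lint                 # run tasks/lint.sh
#   make test-lab IDX=02      # notebook tests for lab02 only
#
# IDX is spliced into the lab directory name, so IDX=02 tests inside lab02:
IDX=02
echo "cd lab${IDX} && ./.notebook_test.sh ${IDX}"
```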


================================================
FILE: data/raw/emnist/metadata.toml
================================================
filename = 'matlab.zip'
sha256 = 'e1fa805cdeae699a52da0b77c2db17f6feb77eed125f9b45c022e7990444df95'
url = 'https://s3-us-west-2.amazonaws.com/fsdl-public-assets/matlab.zip'


================================================
FILE: data/raw/emnist/readme.md
================================================
# EMNIST dataset

"The EMNIST dataset is a set of handwritten character digits derived from the NIST Special Database 19
and converted to a 28x28 pixel image format and dataset structure that directly matches the MNIST dataset."
From https://www.nist.gov/itl/iad/image-group/emnist-dataset

The original URL is http://www.itl.nist.gov/iaui/vip/cs_links/EMNIST/matlab.zip

We uploaded the same file to our S3 bucket for faster download.


================================================
FILE: data/raw/fsdl_handwriting/fsdl_handwriting.jsonl
================================================
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/85a46e90-8af1-48fe-9e75-d18f7f05d6e9___a01-000u.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.1422924901185771,0.18948824343015216],[0.875494071146245,0.18948824343015216],[0.875494071146245,0.25034578146611347],[0.1422924901185771,0.25034578146611347]],"notes":"A MOVE to stop Mr. Gaitskiell from","imageWidth":1240,"imageHeight":1771},{"label":["line"],"shape":"rectangle","points":[[0.1324110671936759,0.24481327800829875],[0.9407114624505929,0.24481327800829875],[0.9407114624505929,0.3029045643153527],[0.1324110671936759,0.3029045643153527]],"notes":"nominating any more Labour life Peers","imageWidth":1240,"imageHeight":1771},{"label":["line"],"shape":"rectangle","points":[[0.13636363636363635,0.29737206085753803],[0.9802371541501976,0.29737206085753803],[0.9802371541501976,0.35408022130013833],[0.13636363636363635,0.35408022130013833]],"notes":"","imageWidth":1240,"imageHeight":1771},{"label":["line"],"shape":"rectangle","points":[[0.1422924901185771,0.34439834024896265],[0.958498023715415,0.34439834024896265],[0.958498023715415,0.40110650069156295],[0.1422924901185771,0.40110650069156295]],"notes":"","imageWidth":1240,"imageHeight":1771},{"label":["line"],"shape":"rectangle","points":[[0.1284584980237155,0.39695712309820197],[0.9268774703557316,0.39695712309820197],[0.9268774703557316,0.4591977869986169],[0.1284584980237155,0.4591977869986169]],"notes":"","imageWidth":1240,"imageHeight":1771}],"extras":null,"metadata":{"first_done_at":1551319523000,"last_updated_at":1551319736000,"sec_taken":0,"last_updated_by":"69FI7aSdl6aSMhn3Anp3BRvA8gg2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/84ae6d5f-1283-46b6-88ec-2e8c87343d1e___Vv_%28Name%29_001.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.11607142857142858,0.2962194559704933],[0.8526785714285714,0.2962194559704933],[0.8526785714285714,0.3515444905486399],[0.11607142857142858,0.3515444905486399]],"notes":"Mathematicians seek and use patterns to formulate","imageWidth":2536,"imageHeight":3274},{"label":["line"],"shape":"rectangle","points":[[0.12797619047619047,0.34923928077455046],[0.7901785714285714,0.34923928077455046],[0.7901785714285714,0.39073305670816044],[0.12797619047619047,0.39073305670816044]],"notes":"new conjectures by mathematical proof. When","imageWidth":2536,"imageHeight":3274},{"label":["line"],"shape":"rectangle","points":[[0.12648809523809523,0.3849700322729368],[0.7455357142857143,0.3849700322729368],[0.7455357142857143,0.4287690179806362],[0.12648809523809523,0.4287690179806362]],"notes":"mathematical structures are good models of","imageWidth":2536,"imageHeight":3274},{"label":["line"],"shape":"rectangle","points":[[0.13392857142857142,0.4287690179806362],[0.8318452380952381,0.4287690179806362],[0.8318452380952381,0.46680497925311204],[0.13392857142857142,0.46680497925311204]],"notes":"real phenomenona, then mathematical reasoning can","imageWidth":2536,"imageHeight":3274},{"label":["line"],"shape":"rectangle","points":[[0.11607142857142858,0.46680497925311204],[0.8735119047619048,0.46680497925311204],[0.8735119047619048,0.5094513600737667],[0.11607142857142858,0.5094513600737667]],"notes":"provide insight or predictions about nature.","imageWidth":2536,"imageHeight":3274},{"label":["line"],"shape":"rectangle","points":[[0.8655753968253969,0.5599738742892271],[0.11656746031746032,0.5599738742892271],[0.11656746031746032,0.5071461502996774],[0.8655753968253969,0.5071461502996774]],"notes":"Through the use of abstraction and 
logic,","imageWidth":2536,"imageHeight":3274},{"label":["line"],"shape":"rectangle","points":[[0.8258928571428572,0.6012755494083295],[0.11408730158730158,0.6012755494083295],[0.11408730158730158,0.5522898417089289],[0.8258928571428572,0.5522898417089289]],"notes":"mathematics developed from counting, calculation,","imageWidth":2536,"imageHeight":3274},{"label":["line"],"shape":"rectangle","points":[[0.8444940476190477,0.6396957123098203],[0.11780753968253968,0.6396957123098203],[0.11780753968253968,0.5916705086829569],[0.8444940476190477,0.5916705086829569]],"notes":"measurement, and the systematic study of","imageWidth":2536,"imageHeight":3274},{"label":["line"],"shape":"rectangle","points":[[0.8544146825396826,0.6896419240817582],[0.13020833333333334,0.6896419240817582],[0.13020833333333334,0.6320116797295221],[0.8544146825396826,0.6320116797295221]],"notes":"the shapes and motions of physical objects","imageWidth":2536,"imageHeight":3274}],"extras":null,"metadata":{"first_done_at":1551411138000,"last_updated_at":1551411138000,"sec_taken":0,"last_updated_by":"69FI7aSdl6aSMhn3Anp3BRvA8gg2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/0f7a387a-6820-44e4-a0bb-43ea9f72023b___Vv_%28Name%29_001.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.11934156378600823,0.29936305732484075],[0.8518518518518519,0.29936305732484075],[0.8518518518518519,0.3487261146496815],[0.11934156378600823,0.3487261146496815]],"notes":"","imageWidth":2536,"imageHeight":3274},{"label":["line"],"shape":"rectangle","points":[[0.11934156378600823,0.39171974522292996],[0.8004115226337448,0.39171974522292996],[0.8004115226337448,0.5222929936305732],[0.11934156378600823,0.5222929936305732]],"notes":"","imageWidth":2536,"imageHeight":3274}],"extras":null,"metadata":{"first_done_at":1551642111000,"last_updated_at":1551642111000,"sec_taken":77,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/88918018-1bc8-49c0-ad45-d751aa0e863b___page_0103.jpg","annotation":[],"extras":null,"metadata":{"first_done_at":1551642137000,"last_updated_at":1551642216000,"sec_taken":13,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/153188d0-e001-4485-9c23-4a271b3f6ac5___page_0063.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.06804123711340206,0.3057324840764331],[0.865979381443299,0.3057324840764331],[0.865979381443299,0.3328025477707006],[0.06804123711340206,0.3328025477707006]],"notes":"","imageWidth":1276,"imageHeight":1651}],"extras":null,"metadata":{"first_done_at":1551642322000,"last_updated_at":1551642322000,"sec_taken":28,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/efc70808-e131-4eb1-a146-6e1575b4feb9___page_0088.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.14766839378238342,0.268],[0.9145077720207254,0.268],[0.9145077720207254,0.308],[0.14766839378238342,0.308]],"notes":"Mathematical analysis is the branch of mathematics","imageWidth":1275,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.07512953367875648,0.3],[0.8989637305699482,0.3],[0.8989637305699482,0.344],[0.07512953367875648,0.344]],"notes":"dealing with limits and related theories, such as","imageWidth":1275,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.08549222797927461,0.35],[0.9093264248704663,0.35],[0.9093264248704663,0.382],[0.08549222797927461,0.382]],"notes":"differentiation, integration, measure, infinite series, and","imageWidth":1275,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.09585492227979274,0.388],[0.4378238341968912,0.388],[0.8782383419689119,0.388],[0.8782383419689119,0.412],[0.2772020725388601,0.41],[0.08549222797927461,0.41]],"notes":"analytic functions. These theories are usually studied in","imageWidth":1275,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.11658031088082901,0.42],[0.4326424870466321,0.422],[0.8264248704663213,0.424],[0.9145077720207254,0.418],[0.9015544041450777,0.448],[0.47668393782383417,0.454],[0.21761658031088082,0.45],[0.10621761658031088,0.444]],"notes":"the context of real and complex numbers and functions,","imageWidth":1275,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.11139896373056994,0.458],[0.44559585492227977,0.464],[0.8808290155440415,0.46],[0.8626943005181347,0.484],[0.33419689119170987,0.498],[0.10621761658031088,0.5]],"notes":"Analytics evolved from solutions, which involves the","imageWidth":1275,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.11398963730569948,0.512],[0.5751295336787565,0.508],[0.8601036269430051,0.502],[0.8523316062176166,0.544],[0.3134715025906736,0.556],[0.10362694300518134,0.542]],"notes":"elementary concepts and techniques of analysis","imageWidth":1275,"imageHeight":1651}],"extras":null,"metadata":{"first_done_at":1551642388000,"last_updated_at":1551642743000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/fb971620-c18e-495f-ade8-ee265da4c82c___page_0076.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.11637251143436657,0.3042066195892488],[0.8869472493105777,0.3042066195892488],[0.8869472493105777,0.3341418454530844],[0.11637251143436657,0.3341418454530844]],"notes":"","imageWidth":1273,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.8785600412792719,0.33495090561156643],[0.11532411043045336,0.33495090561156643],[0.11532411043045336,0.3681223721093303],[0.8785600412792719,0.3681223721093303]],"notes":"","imageWidth":1273,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.8565436201970945,0.37135861274325843],[0.12056611545001941,0.37135861274325843],[0.12056611545001941,0.39967571829013],[0.8565436201970945,0.39967571829013]],"notes":"","imageWidth":1273,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.8303335950992642,0.4045300792410223],[0.12056611545001941,0.4045300792410223],[0.12056611545001941,0.44174684653119634],[0.8303335950992642,0.44174684653119634]],"notes":"","imageWidth":1273,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.8722696352557927,0.4377015457387861],[0.11532411043045336,0.4377015457387861],[0.11532411043045336,0.47896361382137037],[0.8722696352557927,0.47896361382137037]],"notes":"","imageWidth":1273,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.8596888232088341,0.48219985445529856],[0.11637251143436657,0.48219985445529856],[0.11637251143436657,0.5226528623794008],[0.8596888232088341,0.5226528623794008]],"notes":"","imageWidth":1273,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.8397692041344831,0.5234619225378828],[0.11532411043045336,0.5234619225378828],[0.11532411043045336,0.563914930461985],[0.8397692041344831,0.563914930461985]],"notes":"","imageWidth":1273,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.8722696352557927,0.5582515093526107],[0.11742091243827978,0.5582515093526107],[0.11742091243827978,0.6132676001293897],[0.8722696352557927,0.6132676001293897]],"notes":"","imageWidth":1273,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.22959981985699351,0.6116494798124257],[0.11637251143436657,0.6116494798124257],[0.11637251143436657,0.6432028259932254],[0.22959981985699351,0.6432028259932254]],"notes":"","imageWidth":1273,"imageHeight":1651}],"extras":null,"metadata":{"first_done_at":1551642111000,"last_updated_at":1551642325000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/3bbb25d4-f032-49ad-b5b6-1c652054d661___page_0062.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.04492958942654768,0.31433367149243563],[0.28419577480801256,0.30927543129985063],[0.5804946028443109,0.30927543129985063],[0.7272358898231037,0.31505627361318417],[0.926312693574225,0.3331131394393547],[0.9094909427542906,0.3547830158692679],[0.29728769060058546,0.34322130589525557],[0.05894590277463495,0.34249870377450703]],"notes":"Deep learning (also known as deep structured learning or","imageWidth":1275,"imageHeight":1649},{"label":["line"],"shape":"rectangle","points":[[0.9403397415230607,0.3778898079587543],[0.03931876368652401,0.3778898079587543],[0.03931876368652401,0.34033918599418533],[0.9403397415230607,0.34033918599418533]],"notes":"hierarchical learning) is a part of a broader family","imageWidth":1275,"imageHeight":1649},{"label":["line"],"shape":"rectangle","points":[[0.9506155016442616,0.40822264660307133],[0.06549186477122727,0.40822264660307133],[0.06549186477122727,0.37211725422734465],[0.9506155016442616,0.37211725422734465]],"notes":"of machine learnig methods based on learning data","imageWidth":1275,"imageHeight":1649},{"label":["line"],"shape":"rectangle","points":[[0.8356581455905015,0.42916989556489065],[0.06455672441435275,0.42916989556489065],[0.06455672441435275,0.3995596843886675],[0.8356581455905015,0.3995596843886675]],"notes":"representations, as opposed to task-specific","imageWidth":1275,"imageHeight":1649},{"label":["line"],"shape":"rectangle","points":[[0.9057504141292395,0.4645610250964833],[0.06736213728436459,0.4645610250964833],[0.06736213728436459,0.42195216293932924],[0.9057504141292395,0.42195216293932924]],"notes":"","imageWidth":1275,"imageHeight":1649},{"label":["line"],"shape":"rectangle","points":[[0.9141666691404984,0.49561644051420345],[0.06549186477122727,0.49561644051420345],[0.06549186477122727,0.4587801574358044],[0.9141666691404984,0.4587801574358044]],"notes":"belief networks and recurrent neural networks","imageWidth":1275,"imageHeight":1649},{"label":["line"],"shape":"rectangle","points":[[0.9506155016442616,0.5259493045058657],[0.05239994077804262,0.5259493045058657],[0.05239994077804262,0.4847856719375541],[0.9506155016442616,0.4847856719375541]],"notes":"have been applied to field including computer","imageWidth":1275,"imageHeight":1649},{"label":["line"],"shape":"rectangle","points":[[0.9149477172695168,0.5467789079888803],[0.07686220154246657,0.5467789079888803],[0.07686220154246657,0.5130780492825777],[0.9149477172695168,0.5130780492825777]],"notes":"vision, speech recognition, natural language","imageWidth":1275,"imageHeight":1649},{"label":["line"],"shape":"rectangle","points":[[0.6789419844017848,0.5732606315165131],[0.07140723662889453,0.5732606315165131],[0.07140723662889453,0.5479832448567945],[0.6789419844017848,0.5479832448567945]],"notes":"processing, and audio recognition","imageWidth":1275,"imageHeight":1649}],"extras":null,"metadata":{"first_done_at":1551642500000,"last_updated_at":1551642500000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/b85e999f-7d09-4a86-ba70-00ea5fdd95f8___page_0102.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.15785041889328175,0.3840635656715723],[0.8367832614937538,0.3840635656715723],[0.8367832614937538,0.34882008552759275],[0.15785041889328175,0.34882008552759275]],"notes":"(i.e. , when a low-probability event occurs), the event","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.1607844415492907,0.41343313245822194],[0.8426513068057717,0.41343313245822194],[0.8426513068057717,0.38044884975936927],[0.1607844415492907,0.38044884975936927]],"notes":"carries more \"information\" (\"surprisal\") than when","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.16313165967409787,0.4536468469814807],[0.8197659300889019,0.4536468469814807],[0.8197659300889019,0.41162577450212046],[0.16313165967409787,0.41162577450212046]],"notes":"the source data has a higher-probability value.","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.17134692311092292,0.4870829691693588],[0.8027485986840499,0.4870829691693588],[0.8027485986840499,0.44867661260220154],[0.17134692311092292,0.44867661260220154]],"notes":"The amount of information conveyed by each","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.17134692311092292,0.5146451779999069],[0.8074430349336642,0.5146451779999069],[0.8074430349336642,0.48392009274618114],[0.17134692311092292,0.48392009274618114]],"notes":"event defined in this way becomes a random","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.17193372764212472,0.5476294606987596],[0.7428945365014672,0.5476294606987596],[0.7428945365014672,0.5160006964669831],[0.17193372764212472,0.5160006964669831]],"notes":"variable whose expected value is the","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.8056826213400589,0.5788063854415108],[0.179562186547748,0.5788063854415108],[0.179562186547748,0.5498886581438864],[0.8056826213400589,0.5498886581438864]],"notes":"information entropy. Generally entropy refers","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.15639462707566004,0.3481336872710597],[0.81831013827128,0.3481336872710597],[0.15639462707566004,0.3174086020173338],[0.81831013827128,0.3174086020173338]],"notes":"When the data source has a lower-probability value","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.18132260014135337,0.5950726070464244],[0.8772727741466774,0.585132138287866],[0.875512360553072,0.611338828651338],[0.1860170363909677,0.6221829763879472],[0.18366981826616055,0.6090796312062111]],"notes":"to disorder or uncertainty, and the definition","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.17956218654774803,0.6267013712782009],[0.7945333352472249,0.620827457920871],[0.7945333352472249,0.6483896667514192],[0.17838857748534445,0.6574264565319268],[0.17838857748534445,0.6569746170429014],[0.17838857748534445,0.6569746170429014]],"notes":"of entropy used in information theory is","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.16334450764181302,0.6652999622788471],[0.8425915778808454,0.6561940603143804],[0.8462871549768141,0.6857882416988971],[0.1603880459650381,0.7005853323911555]],"notes":"directly analogous to the definition used","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.16999654641455653,0.7045691645006097],[0.6762906085622575,0.6966015002817013],[0.6733341468854825,0.7313177515212306],[0.16777920015697537,0.7392854157401388]],"notes":"in statistical thermodynamics","imageWidth":1274,"imageHeight":1653}],"extras":null,"metadata":{"first_done_at":1551642302000,"last_updated_at":1551642481000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/886f2e7c-c81e-46ea-91a5-8736740da295___page_0114.jpg","annotation":null,"extras":null,"metadata":{"first_done_at":1551642515000,"last_updated_at":1551642515000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/fa6c1cc0-217a-49a9-a000-540f36ba204e___page_0048.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.1289134438305709,0.3252840909090909],[0.9465930018416207,0.30113636363636365],[0.9465930018416207,0.34801136363636365],[0.13259668508287292,0.3678977272727273]],"notes":"","imageWidth":1277,"imageHeight":1655},{"label":["line"],"shape":"polygon","points":[[0.1141804788213628,0.3678977272727273],[0.9097605893186004,0.34801136363636365],[0.9097605893186004,0.37926136363636365],[0.1141804788213628,0.4090909090909091]],"notes":"","imageWidth":1277,"imageHeight":1655},{"label":["line"],"shape":"polygon","points":[[0.1270718232044199,0.40198863636363635],[0.9521178637200737,0.37642045454545453],[0.9484346224677717,0.4119318181818182],[0.1252302025782689,0.4460227272727273]],"notes":"","imageWidth":1277,"imageHeight":1655},{"label":["line"],"shape":"polygon","points":[[0.10681399631675875,0.4460227272727273],[0.9337016574585635,0.40625],[0.9318600368324125,0.4446022727272727],[0.12154696132596685,0.48295454545454547]],"notes":"","imageWidth":1277,"imageHeight":1655},{"label":["line"],"shape":"polygon","points":[[0.13812154696132597,0.4758522727272727],[0.9226519337016574,0.44886363636363635],[0.9152854511970534,0.4815340909090909],[0.13627992633517497,0.5198863636363636]],"notes":"","imageWidth":1277,"imageHeight":1655},{"label":["line"],"shape":"polygon","points":[[0.11602209944751381,0.515625],[0.9871086556169429,0.48579545454545453],[0.9723756906077348,0.5284090909090909],[0.12154696132596685,0.5639204545454546]],"notes":"","imageWidth":1277,"imageHeight":1655},{"label":["line"],"shape":"polygon","points":[[0.10313075506445672,0.5582386363636364],[0.9042357274401474,0.5184659090909091],[0.9042357274401474,0.5696022727272727],[0.1252302025782689,0.5965909090909091]],"notes":"","imageWidth":1277,"imageHeight":1655},{"label":["line"],"shape":"polygon","points":[[0.1141804788213628,0.5923295454545454],[0.8710865561694291,0.5610795454545454],[0.8710865561694291,0.6036931818181818],[0.12154696132596685,0.6448863636363636]],"notes":"","imageWidth":1277,"imageHeight":1655}],"extras":null,"metadata":{"first_done_at":1551642527000,"last_updated_at":1551642527000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/35db7729-5dcc-4b03-a996-0ceeb1090d88___page_0074.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.05502392344497608,0.3044280442804428],[0.9497607655502392,0.2933579335793358],[0.9425837320574163,0.3247232472324723],[0.04784688995215311,0.3376383763837638]],"notes":"when the data source has a lower probability value (i.e. when","imageWidth":1276,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.05263157894736842,0.34317343173431736],[0.9856459330143541,0.32287822878228783],[0.9760765550239234,0.34870848708487084],[0.050239234449760764,0.3671586715867159]],"notes":"a low probability event occurs), the event carries more informaton ' ","imageWidth":1276,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.05502392344497608,0.37084870848708484],[0.9976076555023924,0.35424354243542433],[0.992822966507177,0.3837638376383764],[0.045454545454545456,0.3966789667896679]],"notes":"(\" surprisal \") than when the source data has a higher-probability","imageWidth":1276,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.04784688995215311,0.4003690036900369],[0.9210526315789473,0.37822878228782286],[0.916267942583732,0.4003690036900369],[0.050239234449760764,0.4261992619926199]],"notes":"value. The amount of information conveyed by each event ","imageWidth":1276,"imageHeight":1654}],"extras":null,"metadata":{"first_done_at":1551642470000,"last_updated_at":1551642609000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/7c569746-270b-4736-9a28-2b38fb77bc11___page_0060.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.12424559496895629,0.29239130016027703],[0.8647493409839357,0.29239130016027703],[0.8647493409839357,0.335672907091897],[0.12424559496895629,0.335672907091897]],"notes":"From the technology perspective, speech recognition has a","imageWidth":1278,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.8759314445311419,0.3683745656624543],[0.12424559496895629,0.3683745656624543],[0.12424559496895629,0.327016585705573],[0.8759314445311419,0.327016585705573]],"notes":"long history with several waves of major innovations. Most","imageWidth":1278,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.11554840332112935,0.41646524003092095],[0.4646785251838965,0.41069435910670493],[0.8548096933864193,0.39338171633405694],[0.8448700457889028,0.36068005776349965],[0.45101150973731136,0.3654891252003463],[0.11679085927081892,0.378954514023517]],"notes":"recently, the field has benefited from advances in deep","imageWidth":1278,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.1205182271198876,0.41646524003092095],[0.4162227431460036,0.4068471051572276],[0.9107202111224496,0.4039616646951196],[0.9069928432733809,0.4510905255762169],[0.7827472483044247,0.4433960176772622],[0.3516150337621463,0.4568614065004329],[0.20997505549753614,0.4664795413741262],[0.11927577117019804,0.4520523390635862]],"notes":"learning and big data. The advances are evidenced not only","imageWidth":1278,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.11430594737143979,0.4674413548614956],[0.5889241201528528,0.4510905255762169],[0.829960574392628,0.44243420418989293],[0.9132051230218288,0.4501287120888476],[0.9082352992230704,0.48475399763414356],[0.4187076550453827,0.49341031902046756],[0.34043293021494025,0.5030284538941608],[0.22736943879319002,0.5049520808688995],[0.11554840332112935,0.5020666404067915]],"notes":"by the surge of academic papers published in the field,","imageWidth":1278,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.11585660358533005,0.5070778522812383],[0.4117365450494037,0.4960394228438236],[0.7798040625935677,0.48569089524624726],[0.9179407822529997,0.4898303062852778],[0.9117023497522512,0.5229455945975219],[0.7628711743772503,0.5429527479528361],[0.5320491718495541,0.5374335332341287],[0.43847268433832604,0.5491618645113818],[0.3056831925366785,0.5484719626715434],[0.11229178501347374,0.539503238753644]],"notes":"but more importantly by the worldwide industry adaption","imageWidth":1278,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.11229178501347374,0.5512315700308971],[0.25844934645958245,0.5360537295544519],[0.4046069079056911,0.5484719626715434],[0.4402550936242542,0.5381234350739671],[0.5115514650613804,0.5470921589918666],[0.5979983154288959,0.5339840240349366],[0.7619799697342862,0.5360537295544519],[0.9714130608308443,0.5277749074763909],[0.9580449911863832,0.5802074473041108],[0.6345377057904231,0.5850367601829797],[0.3769795639738047,0.5857266620228181],[0.2040858632387737,0.5843468583431413],[0.1176390128712582,0.5843468583431413]],"notes":"of a variety of deep learning methods in designing and deploying","imageWidth":1278,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.1176390128712582,0.590555974901687],[0.35559065254266686,0.5836569565033028],[0.5008570093458115,0.590555974901687],[0.4919449629161707,0.6264308705732848],[0.12387744537200675,0.625051066893608]],"notes":"speech recognition systems","imageWidth":1278,"imageHeight":1653}],"extras":null,"metadata":{"first_done_at":1551642108000,"last_updated_at":1551642449000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/af2350a1-ec02-4282-8302-4e272bac49dc___page_0075.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.14876456634926738,0.28535026568872723],[0.8697005417341785,0.27358324442321275],[0.8747865098144955,0.3206513294852708],[0.14622158230910898,0.3196707443798113]],"notes":"Gradient descent is a first-order iterative optimization","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"polygon","points":[[0.1538505344295842,0.32555425501256857],[0.8671575576940201,0.32555425501256857],[0.8633430816337826,0.35006888264905717],[0.152579042409505,0.35203005285997624]],"notes":"","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"polygon","points":[[0.15639351846974264,0.358894148598193],[0.8582571135534657,0.3559523932818144],[0.8506281614329905,0.37458351028554576],[0.1513075503894258,0.3883117017619793]],"notes":"","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"polygon","points":[[0.16275097857013868,0.3941952123947366],[0.8747865098144955,0.3785058507073839],[0.8722435257743371,0.4089039889766298],[0.16147948655005948,0.42459335066398246]],"notes":"","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"polygon","points":[[0.16275097857013868,0.42851569108582066],[0.8722435257743371,0.4167486698203061],[0.8798724778948123,0.44028271235133515],[0.17165142271069314,0.45793324424960696]],"notes":"","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"polygon","points":[[0.16020799452998027,0.46675851019874287],[0.8900444140554459,0.447146808089552],[0.9078453023365548,0.447146808089552],[0.9192887305172677,0.4461662229840924],[0.9205602225373469,0.4795061165697169],[0.16910843867053474,0.48833138251885283]],"notes":"","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"polygon","points":[[0.16147948655005948,0.4961760633625292],[0.8875014300152875,0.48440904209701463],[0.8798724778948123,0.5206906909990178],[0.16020799452998027,0.5177489356826391]],"notes":"","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"polygon","points":[[0.16020799452998027,0.5216712761044773],[0.8824154619349707,0.5246130314208559],[0.8684290497140994,0.5746228717992927],[0.1640224705902179,0.5520694143737231]],"notes":"maximum of that function the procedure is then known as gradient ascent","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"polygon","points":[[0.14876456634926738,0.5579529250064804],[0.48189547561002,0.5657976058501568],[0.4768095075297032,0.5961957441194027],[0.15512202644966344,0.5834481377484285]],"notes":"as gradient ascent","imageWidth":1274,"imageHeight":1652}],"extras":null,"metadata":{"first_done_at":1551642588000,"last_updated_at":1551642759000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/727cb1dd-56ae-4960-9308-d3ace1b098a2___page_0101.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.03322180454046187,0.28531308855693593],[0.034198916438710744,0.316259746310466],[0.956592548385652,0.3117309671270226],[0.9546383245891543,0.27776512325119684]],"notes":"Information is the resolution of uncertainty; it is that which answers the question of \"what an","imageWidth":1276,"imageHeight":1652},{"label":["line"],"shape":"polygon","points":[[0.035176028336959625,0.32003372896333554],[0.03713025213345738,0.3441872179417005],[0.973203450655883,0.35701875896145685],[0.9751576744523808,0.32003372896333554]],"notes":"entity is\" and is thus that which specifies the nature of that entity, ad well as essentiality","imageWidth":1276,"imageHeight":1652},{"label":["line"],"shape":"polygon","points":[[0.03908447592995514,0.3494707936557178],[0.04201581162470178,0.3743790791646566],[0.9761347863506296,0.3947585854901521],[0.9722263387576341,0.35928314855317856]],"notes":"of its properties. Information is associated with data and knowledge, as data is meaningful","imageWidth":1276,"imageHeight":1652},{"label":["line"],"shape":"polygon","points":[[0.041038699726452896,0.3804174514092479],[0.041038699726452896,0.40834492304048237],[0.9438900937084166,0.43023402242712555],[0.9438900937084166,0.3977777716124477]],"notes":"information and represents the values attributed to parameters, and knowledge signifies","imageWidth":1276,"imageHeight":1652},{"label":["line"],"shape":"polygon","points":[[0.04201581162470178,0.4151380918156475],[0.5882213627458248,0.42872442936597777],[0.5852900270510782,0.46269027324180345],[0.04299292352295065,0.4528779183443427]],"notes":"understanding of an abstract or concrete concept.","imageWidth":1276,"imageHeight":1652}],"extras":null,"metadata":{"first_done_at":1551642360000,"last_updated_at":1551642533000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/cd013df3-cfcf-4737-bb2c-2f8ff1bca5c0___page_0105.jpg","annotation":null,"extras":null,"metadata":{"first_done_at":1551641885000,"last_updated_at":1551641885000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/d5990a8f-acd1-4ef6-b445-1c1df96f05c2___page_0111.jpg","annotation":[],"extras":null,"metadata":{"first_done_at":1551642236000,"last_updated_at":1551642236000,"sec_taken":19,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/e19b8034-88ff-49de-bdc9-a1af0a82af0d___page_0071.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.0509868540276039,0.2874260130672446],[0.896441597176236,0.2874260130672446],[0.896441597176236,0.33318537833168155],[0.0509868540276039,0.33318537833168155]],"notes":"From the technology perspective, speech recognition has along history with several waves of","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.02688397757819115,0.33175539816716787],[0.983582765877959,0.33175539816716787],[0.983582765877959,0.36535993203323874],[0.02688397757819115,0.36535993203323874]],"notes":"major innovations. Most recently, the field has benefitted from advances in deep learning and big","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.017613640482263167,0.36535993203323874],[0.9492825186230255,0.36535993203323874],[0.9492825186230255,0.3946745254057687],[0.017613640482263167,0.3946745254057687]],"notes":"ta. The advances are evidenced not only by the urge of academic papers published in the","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.024102876449412756,0.3961045055702823],[0.9298148107215767,0.3961045055702823],[0.9298148107215767,0.4268490791073259],[0.024102876449412756,0.4268490791073259]],"notes":"field, but more importantly by the worldwide industry adoption of a variety of deep","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.02781101128778395,0.4625985832201673],[0.8130085633128841,0.4625985832201673],[0.8130085633128841,0.4318540096831237],[0.02781101128778395,0.4318540096831237]],"notes":"learning methods in designing and deploying speech recognition systems.","imageWidth":1274,"imageHeight":1653}],"extras":null,"metadata":{"first_done_at":1551642401000,"last_updated_at":1551642401000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/632a93ac-bef9-4642-a4f6-7c9e96af9e9d___page_0059.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.07921326940063682,0.31591498942434393],[0.866793476659842,0.31591498942434393],[0.866793476659842,0.35944105463392023],[0.07921326940063682,0.35944105463392023]],"notes":"","imageWidth":1275,"imageHeight":1653}],"extras":null,"metadata":{"first_done_at":1551641914000,"last_updated_at":1551641914000,"sec_taken":55,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/31d87003-20b1-42cc-939e-dd03589e1694___page_0058.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.1621105995278845,0.3695700146550101],[0.8231108609831319,0.3695700146550101],[0.8231108609831319,0.3176542268820444],[0.1621105995278845,0.3176542268820444]],"notes":"Computer science is teh study of process that","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.15868572770687286,0.4047671589078682],[0.8162611173411086,0.4047671589078682],[0.8162611173411086,0.37220980047397445],[0.15868572770687286,0.37220980047397445]],"notes":"interact with data and that can be represented","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.16096897558754727,0.44876358922394083],[0.8002783821763876,0.44876358922394083],[0.8002783821763876,0.4074069447268326],[0.16096897558754727,0.4074069447268326]],"notes":"as data in the form of programs. It enables the","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.15982735164721007,0.4575628752871554],[0.7751626554889689,0.4575628752871554],[0.7751626554889689,0.49803959117794216],[0.15982735164721007,0.49803959117794216]],"notes":"use of algorithms to manipulate, store, and","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.1689603431699078,0.5490754503445865],[0.7888621427730155,0.5490754503445865],[0.7888621427730155,0.5041990914221923],[0.1689603431699078,0.5041990914221923]],"notes":"communicate digital information. A computer","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.1689603431699078,0.556114879195158],[0.8105529976394226,0.556114879195158],[0.8105529976394226,0.5913120234480161],[0.1689603431699078,0.5913120234480161]],"notes":"scientist studies the theory of computation","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.8826095814097336,0.5976818707166152],[0.17526104545136137,0.5976818707166152],[0.17526104545136137,0.6491894140466649],[0.8826095814097336,0.6491894140466649]],"notes":"and the practice of designing software systems.","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.16265233714550803,0.6472457331662856],[0.8220877815416375,0.6472457331662856],[0.8220877815416375,0.6948659147355769],[0.16265233714550803,0.6948659147355769]],"notes":"Its fields can be devided into theoretical and ","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.1689566912984347,0.6997251169365251],[0.7968703649299308,0.6997251169365251],[0.7968703649299308,0.7385987345441097],[0.1689566912984347,0.7385987345441097]],"notes":"practical disciplines. Computational","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.1538262413314107,0.7356832132235409],[0.8296530065251495,0.7356832132235409],[0.8296530065251495,0.7823315543526426],[0.1538262413314107,0.7823315543526426]],"notes":"complexity theory is highly abstract, while","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.9216965771578789,0.7658102668694191],[0.1651740788066787,0.7658102668694191],[0.1651740788066787,0.819261491079848],[0.9216965771578789,0.819261491079848]],"notes":"computer graphics emphasizes real-world applications.","imageWidth":1274,"imageHeight":1652}],"extras":null,"metadata":{"first_done_at":1551642451000,"last_updated_at":1551642645000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/3dabc6d4-9f8a-4fc1-a7b3-3d9288ceb74f___page_0064.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.12568189620300696,0.32634637601373423],[0.9641124406099086,0.3097740991067868],[0.9674198589310404,0.2804539168868029],[0.1223744778818752,0.29192703166853573]],"notes":"Natural Language processing (NLP) is a subfield of","imageWidth":1275,"imageHeight":1652},{"label":["line"],"shape":"polygon","points":[[0.09756884047338699,0.3556665582337182],[0.1041836771156505,0.3301707476076452],[0.9723809864127381,0.3072245180441795],[0.9707272772521722,0.3378194907954671],[0.9707272772521722,0.33654470026416344]],"notes":"computer science, information engineering, and artificial","imageWidth":1275,"imageHeight":1652},{"label":["line"],"shape":"polygon","points":[[0.09756884047338699,0.38116236885979116],[0.8946566558661416,0.3531169771711109],[0.8781195642604829,0.3301707476076452],[0.09922254963395287,0.35056739610850357]],"notes":"intelligence concerned with the interactions between","imageWidth":1275,"imageHeight":1652},{"label":["line"],"shape":"polygon","points":[[0.0876465855099917,0.40793297001716783],[0.09426142215225522,0.37733799726588024],[0.9773421138944357,0.35056739610850357],[0.9740346955733039,0.3747884162032729],[0.9740346955733039,0.3735136256719693],[0.9740346955733039,0.3735136256719693]],"notes":"computes and human (natural) languages, in particular","imageWidth":1275,"imageHeight":1652},{"label":["line"],"shape":"polygon","points":[[0.0876465855099917,0.4359783617058481],[0.9773421138944357,0.4155817132049897],[0.9756884047338699,0.38243715939109485],[0.08930029467055757,0.4053833889545605]],"notes":"how to program computers to process and analyze large","imageWidth":1275,"imageHeight":1652},{"label":["line"],"shape":"polygon","points":[[0.08268545802829405,0.469122915519743],[0.08268545802829405,0.43470357117454445],[0.9012714925084052,0.40920776054847147],[0.8979640741872734,0.4334287806432408]],"notes":"amounts of natural language data. Challenges in","imageWidth":1275,"imageHeight":1652},{"label":["line"],"shape":"polygon","points":[[0.08103174886772817,0.5022674693336379],[0.08103174886772817,0.46784812498843936],[0.9376530940408545,0.43087919958063353],[0.9376530940408545,0.4601993818006174]],"notes":"","imageWidth":1275,"imageHeight":1652},{"label":["line"],"shape":"polygon","points":[[0.07441691222546465,0.5315876515536219],[0.07607062138603053,0.5009926788023342],[0.9277308390774592,0.4589245912693138],[0.9211160024351958,0.4920691450832087]],"notes":"recognition, natural language understanding","imageWidth":1275,"imageHeight":1652},{"label":["line"],"shape":"polygon","points":[[0.07441691222546465,0.5660069958988203],[0.07607062138603053,0.5303128610223181],[0.5490314413078725,0.5073666314588525],[0.5490314413078725,0.53796160421014]],"notes":"natural language generation","imageWidth":1275,"imageHeight":1652}],"extras":null,"metadata":{"first_done_at":1551597360000,"last_updated_at":1551597360000,"sec_taken":0,"last_updated_by":"69FI7aSdl6aSMhn3Anp3BRvA8gg2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/74143f70-aeb1-4b5c-b97c-1d7b74e57efc___page_0070.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.15682656826568267,0.32670454545454547],[0.8302583025830258,0.3039772727272727],[0.8357933579335793,0.3352272727272727],[0.15867158671586715,0.3565340909090909]],"notes":"","imageWidth":1273,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.15498154981549817,0.3778409090909091],[0.8782287822878229,0.32954545454545453],[0.9095940959409594,0.3678977272727273],[0.12546125461254612,0.3991477272727273]],"notes":"","imageWidth":1273,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.13284132841328414,0.41051136363636365],[0.9280442804428044,0.3650568181818182],[0.9317343173431735,0.40198863636363635],[0.13099630996309963,0.4431818181818182]],"notes":"","imageWidth":1273,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.13284132841328414,0.4502840909090909],[0.8690036900369004,0.4161931818181818],[0.8634686346863468,0.4431818181818182],[0.13468634686346864,0.48863636363636365]],"notes":"","imageWidth":1273,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.13468634686346864,0.4900568181818182],[0.8837638376383764,0.45454545454545453],[0.8837638376383764,0.4900568181818182],[0.13284132841328414,0.5369318181818182]],"notes":"","imageWidth":1273,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.12915129151291513,0.5326704545454546],[0.8726937269372693,0.4971590909090909],[0.8653136531365314,0.53125],[0.12361623616236163,0.5724431818181818]],"notes":"","imageWidth":1273,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.14206642066420663,0.5639204545454546],[0.8800738007380073,0.5284090909090909],[0.8745387453874539,0.5610795454545454],[0.14206642066420663,0.6079545454545454]],"notes":"","imageWidth":1273,"imageHeight":1653},{"label":["line"],"shape":"polygon
","points":[[0.15313653136531366,0.5980113636363636],[0.940959409594096,0.5525568181818182],[0.9501845018450185,0.5894886363636364],[0.14944649446494465,0.6477272727272727]],"notes":"","imageWidth":1273,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.14391143911439114,0.640625],[0.9391143911439115,0.6051136363636364],[0.9391143911439115,0.6448863636363636],[0.14022140221402213,0.6846590909090909]],"notes":"","imageWidth":1273,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.15867158671586715,0.6775568181818182],[0.9760147601476015,0.6477272727272727],[0.977859778597786,0.7017045454545454],[0.15129151291512916,0.7400568181818182]],"notes":"","imageWidth":1273,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.16974169741697417,0.71875],[0.988929889298893,0.6832386363636364],[0.9907749077490775,0.7357954545454546],[0.1752767527675277,0.7585227272727273]],"notes":"","imageWidth":1273,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.18265682656826568,0.7613636363636364],[0.9446494464944649,0.7244318181818182],[0.9428044280442804,0.7713068181818182],[0.1863468634686347,0.8110795454545454]],"notes":"","imageWidth":1273,"imageHeight":1653}],"extras":null,"metadata":{"first_done_at":1551642063000,"last_updated_at":1551642376000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/a6a99b01-c4c7-44e9-864b-ca8aab68d290___page_0110.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.08617234468937876,0.30124223602484473],[0.905811623246493,0.30124223602484473],[0.905811623246493,0.2748447204968944],[0.08617234468937876,0.2748447204968944]],"notes":"","imageWidth":1280,"imageHeight":1654}],"extras":null,"metadata":{"first_done_at":1551641579000,"last_updated_at":1551641579000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/0d30b30b-ad40-4163-b137-676d77c4f67f___page_0112.jpg","annotation":null,"extras":null,"metadata":{"first_done_at":1551641900000,"last_updated_at":1551641900000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/44744051-aa5b-45f4-8682-fb06a8acf900___page_0066.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.11244019138755981,0.3044280442804428],[0.9760765550239234,0.27490774907749077],[0.9545454545454546,0.33025830258302585],[0.1076555023923445,0.34501845018450183]],"notes":"","imageWidth":1275,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.11004784688995216,0.35424354243542433],[0.12200956937799043,0.3985239852398524],[0.8923444976076556,0.3892988929889299],[0.9019138755980861,0.34501845018450183]],"notes":"","imageWidth":1275,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.12200956937799043,0.4151291512915129],[0.9688995215311005,0.3874538745387454],[0.9784688995215312,0.45202952029520294],[0.1291866028708134,0.45018450184501846]],"notes":"","imageWidth":1275,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.13157894736842105,0.48154981549815495],[0.1291866028708134,0.5295202952029521],[0.9641148325358851,0.5166051660516605],[0.9473684210526315,0.466789667896679]],"notes":"","imageWidth":1275,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.1339712918660287,0.5369003690036901],[0.1339712918660287,0.5793357933579336],[0.937799043062201,0.5627306273062731],[0.9354066985645934,0.518450184501845]],"notes":"","imageWidth":1275,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.1339712918660287,0.5885608856088561],[0.9712918660287081,0.5756457564575646],[0.9784688995215312,0.6273062730627307],[0.13636363636363635,0.6199261992619927]],"notes":"","imageWidth":1275,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.10047846889952153,0.6439114391143912]],"notes":"","imageWidth":1275,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.4880382775119617,0.6143911439114391],[0.7464114832535885,0.6328413284132841],[0.9473684210526315,0.647
6014760147601],[0.930622009569378,0.6808118081180812],[0.5287081339712919,0.6697416974169742],[0.39952153110047844,0.6642066420664207],[0.20334928229665072,0.6642066420664207],[0.10526315789473684,0.6863468634686347],[0.11004784688995216,0.6457564575645757]],"notes":"","imageWidth":1275,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.12200956937799043,0.6992619926199262],[0.23205741626794257,0.6678966789667896],[0.8803827751196173,0.7084870848708487],[0.8444976076555024,0.7472324723247232],[0.13157894736842105,0.7361623616236163],[0.11483253588516747,0.7453874538745388]],"notes":"","imageWidth":1275,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.1076555023923445,0.7361623616236163],[0.12679425837320574,0.7693726937269373],[0.9210526315789473,0.7749077490774908],[0.9282296650717703,0.7472324723247232]],"notes":"","imageWidth":1275,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.11244019138755981,0.7767527675276753],[0.11244019138755981,0.8191881918819188],[0.6076555023923444,0.8118081180811808],[0.6124401913875598,0.7767527675276753]],"notes":"","imageWidth":1275,"imageHeight":1651}],"extras":null,"metadata":{"first_done_at":1551642247000,"last_updated_at":1551642247000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/9b412681-c5a5-43a6-87a4-d4138f644e00___page_0072.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.8928571428571429,0.2875722543352601],[0.16165413533834586,0.2875722543352601],[0.16165413533834586,0.3208092485549133],[0.8928571428571429,0.3208092485549133]],"notes":"","imageWidth":1271,"imageHeight":1653}],"extras":null,"metadata":{"first_done_at":1551642383000,"last_updated_at":1551642383000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/1d2e05c3-a70b-4e51-89b5-c621481a0c88___page_0099.jpg","annotation":null,"extras":null,"metadata":{"first_done_at":1551642454000,"last_updated_at":1551642486000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/5713d627-e625-4aef-97ee-48fc4ae0d10a___page_0098.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.1618798955613577,0.2796780684104628],[0.9086161879895561,0.2796780684104628],[0.9086161879895561,0.32595573440643866],[0.1618798955613577,0.32595573440643866]],"notes":"","imageWidth":1273,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.057441253263707574,0.317907444668008],[0.9451697127937336,0.317907444668008],[0.9451697127937336,0.3722334004024145],[0.057441253263707574,0.3722334004024145]],"notes":"","imageWidth":1273,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.07571801566579635,0.3762575452716298],[0.8877284595300261,0.3762575452716298],[0.8877284595300261,0.43259557344064387],[0.07571801566579635,0.43259557344064387]],"notes":"","imageWidth":1273,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.056323637429408505,0.43094301319767464],[0.9595133947795663,0.43094301319767464],[0.9595133947795663,0.4805479643571192],[0.056323637429408505,0.4805479643571192]],"notes":"","imageWidth":1273,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.0623583128682737,0.48984889269951504],[0.9776174210961619,0.48984889269951504],[0.9776174210961619,0.5394538438589596],[0.0623583128682737,0.5394538438589596]],"notes":"","imageWidth":1273,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.07241610526638237,0.5487547722013555],[0.9957214474127576,0.5487547722013555],[0.9957214474127576,0.6138612705981265],[0.07241610526638237,0.6138612705981265]],"notes":"","imageWidth":1273,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.9414093684629707,0.6727671500999669],[0.07241610526638237,0.6727671500999669],[0.07241610526638237,0.6138612705981265],[0.9414093684629707,0.6138612705981265]],"notes":"signifies understanding of an abstract 
or","imageWidth":1273,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.10460104094033008,0.685168387889828],[0.3962770204854813,0.685168387889828],[0.3962770204854813,0.7394238032204705],[0.10460104094033008,0.7394238032204705]],"notes":"concrete concept.","imageWidth":1273,"imageHeight":1652}],"extras":null,"metadata":{"first_done_at":1551641700000,"last_updated_at":1551641700000,"sec_taken":187,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/0da2b94d-2d6b-4b6b-a4b8-091819b49bc9___page_0107.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.14661673952495569,0.31196126201482305],[0.8659880802157455,0.31196126201482305],[0.8659880802157455,0.3517688188865062],[0.14661673952495569,0.3517688188865062]],"notes":"Gradient descent isa first-order iterative optimization algorithm","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.14767153621218557,0.3623300074442997],[0.8554401133434465,0.3623300074442997],[0.8554401133434465,0.40701195903496445],[0.14767153621218557,0.40701195903496445]],"notes":"for finding the minimum of a function.  To find a local minimum","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.16454828320786394,0.40701195903496445],[0.8744264537135846,0.40701195903496445],[0.8744264537135846,0.4500691123859687],[0.16454828320786394,0.4500691123859687]],"notes":"of a function using gradient descent, on takes steps proportional","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.15821950308448454,0.4646922965429135],[0.532672327051098,0.44519471766698704],[0.8733716570263547,0.44275752030749627],[0.8744264537135846,0.4720038886213859],[0.4282474550153382,0.4915014674973124],[0.16032909645894433,0.5012502569352756]],"notes":"to the negative of the gradient (or approximate gradient) of the","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.14978112958664538,0.5069370507740875],[0.8680976735902052,0.4776906824601978],[0.8744264537135846,0.5118114454930691],[0.7425768678098474,0.5280594278896744],[0.40504192789628046,0.5337462217284863],[0.14978112958664538,0.538620616447468]],"notes":"function at the current point.  
If, instead, one takes steps proportional","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.14872633289941548,0.5475570067656009],[0.6202204520911795,0.5313090243689955],[0.8723168603391248,0.5256222305301836],[0.8723168603391248,0.5621801909225457],[0.33753493991356703,0.5833025680381327],[0.14978112958664538,0.5833025680381327]],"notes":"to the positive of the gradient, one approaches a local maximum of","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.15189072296110517,0.5881769627571143],[0.4662201357556145,0.5768033750794905],[0.8860292172731135,0.562992590042376],[0.8870840139603434,0.5963009539554169],[0.39765835108567116,0.6239225240296461],[0.16032909645894433,0.6271721205089672]],"notes":"that function; the procedure is then known as gradient ascent.","imageWidth":1274,"imageHeight":1653}],"extras":null,"metadata":{"first_done_at":1551642270000,"last_updated_at":1551642479000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/50fa8599-62ac-42a2-8141-fe510a164fdb___page_0113.jpg","annotation":[],"extras":null,"metadata":{"first_done_at":1551642233000,"last_updated_at":1551642233000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/45152f96-4d00-43a7-9049-2d7fbab41b73___page_0028.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.12658430181286598,0.2715296999168771],[0.8368274162604412,0.2715296999168771],[0.8368274162604412,0.3047739984993239],[0.12658430181286598,0.3047739984993239]],"notes":"Mathematical analysis is the branch of mathematics","imageWidth":1274,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.12489361458636344,0.3323770429267258],[0.8124467565658244,0.3323770429267258],[0.8124467565658244,0.3020286872738697],[0.12489361458636344,0.3020286872738697]],"notes":"dealing with limits and related theories, such as","imageWidth":1274,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.12489361458636344,0.3612295056952805],[0.8932181013391373,0.3612295056952805],[0.8932181013391373,0.3263114553983094],[0.12489361458636344,0.3263114553983094]],"notes":"differentiation, integration, measure, infinite series, and","imageWidth":1274,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.12489361458636344,0.39157786134813654],[0.8755053256420379,0.39157786134813654],[0.8755053256420379,0.3536270329209625],[0.12489361458636344,0.3536270329209625]],"notes":"analytic functions. These theories are usually studied","imageWidth":1274,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.12686170415675385,0.4219466975477875],[0.8262499545482879,0.4219466975477875],[0.8262499545482879,0.38702867851882683],[0.12686170415675385,0.38702867851882683]],"notes":"in the context of real and complex numbers","imageWidth":1274,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.12292552501597303,0.45229508446865396],[0.9089893584555768,0.45229508446865396],[0.9089893584555768,0.40979507321217023],[0.12292552501597303,0.40979507321217023]],"notes":"and functions. 
Analysis evolved from calculus, which","imageWidth":1274,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.12095744559105406,0.4841803253674116],[0.8951861604731134,0.4841803253674116],[0.8951861604731134,0.4386270241659196],[0.12095744559105406,0.4386270241659196]],"notes":"involved the elementary concepts and techniques of","imageWidth":1274,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.11898935602066364,0.5251639319247887],[0.2588563675576068,0.5251639319247887],[0.2588563675576068,0.4917622863269243],[0.11898935602066364,0.4917622863269243]],"notes":"analysis","imageWidth":1274,"imageHeight":1651}],"extras":null,"metadata":{"first_done_at":1551641986000,"last_updated_at":1551642217000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/730580a1-7c34-42d8-805e-95e93ca6aab4___page_0000.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.11998585392590445,0.33858742103311024],[0.9481808944388548,0.33858742103311024],[0.9481808944388548,0.37094133015405184],[0.11998585392590445,0.37094133015405184]],"notes":"","imageWidth":1280,"imageHeight":1658},{"label":["line"],"shape":"rectangle","points":[[0.12388783291536476,0.37545582910116],[0.9579358419125055,0.37545582910116],[0.9579358419125055,0.39953315681907003],[0.12388783291536476,0.39953315681907003]],"notes":"","imageWidth":1280,"imageHeight":1658},{"label":["line"],"shape":"rectangle","points":[[0.114132885441714,0.4446781462901514],[0.9442789154493945,0.4446781462901514],[0.9442789154493945,0.4732699729551696],[0.114132885441714,0.4732699729551696]],"notes":"","imageWidth":1280,"imageHeight":1658},{"label":["line"],"shape":"rectangle","points":[[0.11705936968380923,0.4807941378670165],[0.8584353776812677,0.4807941378670165],[0.8584353776812677,0.5048714655849266],[0.11705936968380923,0.5048714655849266]],"notes":"","imageWidth":1280,"imageHeight":1658}],"extras":null,"metadata":{"first_done_at":1551642485000,"last_updated_at":1551642485000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/84428a31-96f5-47e9-b2b0-4a1e4e33fa0f___page_0014.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.14787980517008467,0.2951591907825168],[0.9830751827973561,0.2506339600562004],[0.9933058611424563,0.2815143620115491],[0.1581104835151851,0.3296303371512783],[0.15625036017971208,0.3310666349166432]],"notes":"gradient descent is a first-order iterative optimization","imageWidth":1279,"imageHeight":1655},{"label":["line"],"shape":"polygon","points":[[0.15424697803388596,0.3357447043694809],[0.9826596273020838,0.2885145624704102],[0.9813299119742055,0.32239705557191745],[0.15956583934539925,0.3706539396861853]],"notes":"algorithm for finding the minimum of a function","imageWidth":1279,"imageHeight":1655},{"label":["line"],"shape":"polygon","points":[[0.1622252700011559,0.3727074241165797],[0.9946270652529887,0.325477282217509],[0.9919676345972321,0.3521725798126359],[0.16754413131266924,0.4076166594332841]],"notes":"To find a local minimum of a function using gradient","imageWidth":1279,"imageHeight":1655},{"label":["line"],"shape":"polygon","points":[[0.1755224232799392,0.41377711272446727],[0.9959567805808671,0.36141325974941063],[0.9946270652529887,0.39632249506611505],[0.17286299262418253,0.45279331690196045]],"notes":"descent, one takes steps proportional to the negative","imageWidth":1279,"imageHeight":1655},{"label":["line"],"shape":"polygon","points":[[0.17052120851774863,0.45530151144511877],[0.9593810049316325,0.4023880925474428],[0.9530063803141465,0.4429960651898453],[0.17689583313523455,0.48852621633435717]],"notes":"of the gradient of approximate gradient","imageWidth":1279,"imageHeight":1655},{"label":["line"],"shape":"polygon","points":[[0.181676801598349,0.4946789394619939],[0.8892601341392872,0.4589931453217008],[0.8908537902936586,0.4872956717088298],[0.1896450823702064,0.530364733602287]],"notes":"of the function at the current 
point.","imageWidth":1279,"imageHeight":1655},{"label":["line"],"shape":"polygon","points":[[0.1450227100478049,0.5352869121043964],[0.9577873487772609,0.4959094840875213],[0.9466317556966606,0.5365174567299238],[0.15777195928277676,0.5882003310020724]],"notes":"If instead one takes steps proportional to","imageWidth":1279,"imageHeight":1655}],"extras":null,"metadata":{"first_done_at":1551642487000,"last_updated_at":1551642487000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/de461a9b-7af6-4049-8b8a-3a49b42eba27___page_0015.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.13551052137336192,0.3129159013602186],[0.9736681906086005,0.3129159013602186],[0.9736681906086005,0.3438976737721214],[0.13551052137336192,0.3438976737721214]],"notes":"When the data source has a lower-probability value (i.e., when a low-probability event","imageWidth":1276,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.13049161317434851,0.3562903827368825],[0.9425509597747175,0.3562903827368825],[0.9425509597747175,0.39037033238997565],[0.13049161317434851,0.39037033238997565]],"notes":"occurs), the event carries more \"information\" (\"surprise\") than when the source","imageWidth":1276,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.12848404989474316,0.41747938325039063],[0.9204677636990585,0.41747938325039063],[0.9204677636990585,0.4445884341108056],[0.12848404989474316,0.4445884341108056]],"notes":"data has a higher-probability value. The amount of information conveyed by","imageWidth":1276,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.1274802682549405,0.4593047760064594],[0.9214715453388611,0.4593047760064594],[0.9214715453388611,0.49106109272865983],[0.1274802682549405,0.49106109272865983]],"notes":"each event defined in this way becomes a random variable whose expected","imageWidth":1276,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.1264764866151378,0.5104247004860991],[0.978687098807614,0.5104247004860991],[0.978687098807614,0.5630937135863339],[0.1264764866151378,0.5630937135863339]],"notes":"value is the information entrophy. 
Generally, entropy refers to a disorder or uncertainty","imageWidth":1276,"imageHeight":1652}],"extras":null,"metadata":{"first_done_at":1551642217000,"last_updated_at":1551642409000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/b8c470a9-1856-4205-8e0f-74f528b2727d___page_0001.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.13889398232459746,0.2773116101391781],[0.1420269292943252,0.3119755614065754],[0.8709592575843178,0.3039141773909016],[0.8709592575843178,0.26844408772193695],[0.8699149419277419,0.27328091813134125]],"notes":"Data is measured, collected and reported, and","imageWidth":1276,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.1409826136377493,0.3160062534144123],[0.1409826136377493,0.3490579278786748],[0.894978517685564,0.34663951267397264],[0.8855796767763807,0.31036328460344065]],"notes":"analyzed, where upon it can be visualized using","imageWidth":1276,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.13784966666802154,0.35067020468180954],[0.13993829798117338,0.38049732553980253],[0.8709592575843178,0.3748543567288309],[0.8636490479882863,0.340996543863001]],"notes":"graphs, images or other analysis tools. 
Data as","imageWidth":1276,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.13784966666802154,0.39097712476017843],[0.13889398232459746,0.4095183079962282],[0.9210864090999621,0.4159674152087672],[0.9095989368776269,0.3788850487366678]],"notes":"a general concept refers to the fact that some","imageWidth":1276,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.13889398232459746,0.41516127680719983],[0.13784966666802154,0.4393454288542212],[0.922130724756538,0.44901908967302967],[0.9189977777868102,0.4167735536103346]],"notes":"existing information or knowledge is represented or","imageWidth":1276,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.14411556060747704,0.4506313664761645],[0.1462041919206289,0.4861014561451291],[0.9252636717262657,0.49416284016080286],[0.9231750404131138,0.4554681968855687]],"notes":"coded in some form suitable for better usage or","imageWidth":1276,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.1409826136377493,0.48771373294826387],[0.13784966666802154,0.519959269010959],[0.9169091464736584,0.5280206530266328],[0.9210864090999621,0.503030362578044]],"notes":"processing. 
Raw data (\"unprocessed data\") is a","imageWidth":1276,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.14411556060747704,0.5288267914282001],[0.14411556060747704,0.5634907426955974],[0.9377954596051767,0.5868687563410514],[0.9304852500091453,0.5376943138454413]],"notes":"collection of number of characters before it has","imageWidth":1276,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.13889398232459746,0.5683275731050017],[0.1409826136377493,0.6150836003959096],[0.9106432525342029,0.6239511228131508],[0.9148205151605066,0.592511725152023]],"notes":"been \"cleaned\" and corrected by researchers.","imageWidth":1276,"imageHeight":1654}],"extras":null,"metadata":{"first_done_at":1551642056000,"last_updated_at":1551642542000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/7f7ecd81-03b4-41fb-8a58-3c0259be2f91___page_0029.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.6018972108873774,0.4227590737457191],[0.5801987220936318,0.33689750627240617],[0.6320864126873712,0.3587267183418925],[0.6320864126873712,0.35945435874420867]],"notes":"","imageWidth":1274,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.7507991593488054,0.3565437971349439]],"notes":"","imageWidth":1274,"imageHeight":1651}],"extras":null,"metadata":{"first_done_at":1551642405000,"last_updated_at":1551642405000,"sec_taken":55,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/b872b837-a64b-40dc-bcef-3b2daa9def90___page_0017.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.0751233291352395,0.29338367799802967],[0.9215128373922713,0.29338367799802967],[0.9215128373922713,0.3461412692169736],[0.0751233291352395,0.3461412692169736]],"notes":"Mathematical analysis is the branch of mathematics","imageWidth":1274,"imageHeight":1654},{"label":["line"],"shape":"rectangle","points":[[0.08180095839170524,0.3795973026728892],[0.8764388399111276,0.3795973026728892],[0.8764388399111276,0.3461412692169736],[0.08180095839170524,0.3461412692169736]],"notes":"declining with limits and related theories, such as","imageWidth":1274,"imageHeight":1654},{"label":["line"],"shape":"rectangle","points":[[0.08180095839170524,0.389891466813171],[0.9248516520205041,0.389891466813171],[0.9248516520205041,0.4362152054444388],[0.08180095839170524,0.4362152054444388]],"notes":"differentiation integration, measure, infinite series, and","imageWidth":1274,"imageHeight":1654},{"label":["line"],"shape":"rectangle","points":[[0.07679273644935594,0.44522259906718537],[0.8113319546605866,0.44522259906718537],[0.8113319546605866,0.4825389440757067],[0.07679273644935594,0.4825389440757067]],"notes":"analytic function. 
These theories are usually","imageWidth":1274,"imageHeight":1654},{"label":["line"],"shape":"rectangle","points":[[0.08680918033405453,0.48768602614584755],[0.9448845397899013,0.48768602614584755],[0.9448845397899013,0.5262891416719041],[0.08680918033405453,0.5262891416719041]],"notes":"studied on the context of real and complex numbers","imageWidth":1274,"imageHeight":1654},{"label":["line"],"shape":"rectangle","points":[[0.9565703909887163,0.5352965352946506],[0.0801315510775888,0.5352965352946506],[0.0801315510775888,0.5764731918557776],[0.9565703909887163,0.5764731918557776]],"notes":"and functions analysis evolved from calculus, which","imageWidth":1274,"imageHeight":1654},{"label":["line"],"shape":"rectangle","points":[[0.8680918033405454,0.5816202739259184],[0.07846214376347237,0.5816202739259184],[0.07846214376347237,0.6215101599695102],[0.8680918033405454,0.6215101599695102]],"notes":"involves the elementary concepts and techniques","imageWidth":1274,"imageHeight":1654},{"label":["line"],"shape":"rectangle","points":[[0.3188567969962388,0.6652603575657076],[0.07679273644935594,0.6652603575657076],[0.07679273644935594,0.6227969304870454],[0.3188567969962388,0.6227969304870454]],"notes":"of analysis.","imageWidth":1274,"imageHeight":1654}],"extras":null,"metadata":{"first_done_at":1551642137000,"last_updated_at":1551642256000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/8458cec1-b8d7-45ed-aa8b-285188d70131___page_0003.jpg","annotation":null,"extras":null,"metadata":{"first_done_at":1551642214000,"last_updated_at":1551642758000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/47496a4a-8671-4531-a6ae-952f6312e47e___page_0002.jpg","annotation":null,"extras":null,"metadata":{"first_done_at":1551641931000,"last_updated_at":1551641931000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/7fd48af1-f073-4dfe-a486-2bfd7c6d40a5___page_0016.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.1519707592927914,0.29922677536138453],[0.7747205011773822,0.29922677536138453],[0.7747205011773822,0.3580491329110584],[0.1519707592927914,0.3580491329110584]],"notes":"Natural language processing (NLP) is a subfield","imageWidth":1281,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.15527447145133033,0.5869448285934851],[0.822624327476197,0.5869448285934851],[0.822624327476197,0.6137985135618145],[0.15527447145133033,0.6137985135618145]],"notes":"understanding, and natural image generation","imageWidth":1281,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.16023003968913876,0.3644428674273273],[0.8242761835554664,0.3644428674273273],[0.8242761835554664,0.39385404620216424],[0.16023003968913876,0.39385404620216424]],"notes":"of computer science, information engineering, and","imageWidth":1281,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.14371147889644403,0.3976902869119256],[0.8077576227627717,0.3976902869119256],[0.8077576227627717,0.424543971880255],[0.14371147889644403,0.424543971880255]],"notes":"artificial intelligence concerned with the instructions","imageWidth":1281,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.15031890321352193,0.45267640375183815],[0.8440984565067001,0.45267640375183815],[0.8440984565067001,0.47697259491365995],[0.15031890321352193,0.47697259491365995]],"notes":"in particular how to program computers to process","imageWidth":1281,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.15527447145133033,0.5140562551080196],[0.787935349811538,0.5140562551080196],[0.787935349811538,0.5511399153023792],[0.15527447145133033,0.5511399153023792]],"notes":"data. \nchallenge in natural language processing","imageWidth":1281,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.13875591065863563,0.5562549029153944],[0.8738318659335506,0.5562549029153944],[0.8738318659335506,0.5805510940772162],[0.13875591065863563,0.5805510940772162]],"notes":"frequently involve speech recognition, natural language","imageWidth":1281,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.13710405457936617,0.4795300887201675],[0.8655725855372032,0.4795300887201675],[0.8655725855372032,0.5076625205917507],[0.13710405457936617,0.5076625205917507]],"notes":"and analyze large amount of natural language","imageWidth":1281,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.13710405457936617,0.41942898426723985],[0.8870467145677063,0.41942898426723985],[0.8870467145677063,0.45779139136485325],[0.13710405457936617,0.45779139136485325]],"notes":"between computers and human (natural) images,","imageWidth":1281,"imageHeight":1653}],"extras":null,"metadata":{"first_done_at":1551642070000,"last_updated_at":1551642703000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/23c1d93f-c104-4fbd-bb1b-3c8e1e19a046___page_0012.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.0832148135216203,0.30503882421183975],[0.8097785616888856,0.30503882421183975],[0.8097785616888856,0.4561779701448554],[0.0832148135216203,0.4561779701448554]],"notes":"","imageWidth":1278,"imageHeight":1656}],"extras":null,"metadata":{"first_done_at":1551642350000,"last_updated_at":1551642350000,"sec_taken":115,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/3fa22029-8053-4435-aad8-5dbbd0486aec___page_0007.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.13433246428237078,0.2807275109597344],[0.8911163472196872,0.2807275109597344],[0.8911163472196872,0.3228880528987421],[0.13433246428237078,0.3228880528987421]],"notes":"Mathematicians seek and use patterns to formulae new","imageWidth":1280,"imageHeight":1656},{"label":["line"],"shape":"rectangle","points":[[0.12236224469285258,0.3249446647006449],[0.897766469213864,0.3249446647006449],[0.897766469213864,0.3599070653329928],[0.12236224469285258,0.3599070653329928]],"notes":"conjectures; they resolve the truth or falsify of conjectures","imageWidth":1280,"imageHeight":1656},{"label":["line"],"shape":"rectangle","points":[[0.11571212269867581,0.36196367713489563],[0.8192950296825782,0.36196367713489563],[0.8192950296825782,0.3958977718662921],[0.11571212269867581,0.3958977718662921]],"notes":"by mathematical proof. When mathematical","imageWidth":1280,"imageHeight":1656},{"label":["line"],"shape":"rectangle","points":[[0.1330024398835354,0.3979543836681949],[0.8498855908557913,0.3979543836681949],[0.8498855908557913,0.4339450902014942],[0.1330024398835354,0.4339450902014942]],"notes":"structures are good models of real phenomena, then","imageWidth":1280,"imageHeight":1656},{"label":["line"],"shape":"rectangle","points":[[0.1396525618777122,0.4380583138052998],[0.920376883994065,0.4380583138052998],[0.920376883994065,0.4689074908338421],[0.1396525618777122,0.4689074908338421]],"notes":"mathematic reasoning can provide insight on predictions","imageWidth":1280,"imageHeight":1656},{"label":["line"],"shape":"rectangle","points":[[0.12635231788935863,0.46376596132908504],[0.8884562984220166,0.46376596132908504],[0.8884562984220166,0.5079831150699956],[0.12635231788935863,0.5079831150699956]],"notes":"about nature. \nThrough the use of abstraction and logic,","imageWidth":1280,"imageHeight":1656},{"label":["line"],"shape":"rectangle","points":[[0.1449726594730536,0.5048981973671414],[0.7940245661047064,0.5048981973671414],[0.7940245661047064,0.5388322920985378],[0.1449726594730536,0.5388322920985378]],"notes":"mathematics, developed from counting, calculation,","imageWidth":1280,"imageHeight":1656},{"label":["line"],"shape":"rectangle","points":[[0.1303423910858647,0.5388322920985378],[0.9123967376010529,0.5388322920985378],[0.9123967376010529,0.5727663868299343],[0.1303423910858647,0.5727663868299343]],"notes":"measurement, and the systematic study of the shapes","imageWidth":1280,"imageHeight":1656},{"label":["line"],"shape":"rectangle","points":[[0.1423126106753829,0.5614550219194688],[0.5838807110887204,0.5614550219194688],[0.5838807110887204,0.5974457284527681],[0.1423126106753829,0.5974457284527681]],"notes":"and motions of physical objects.","imageWidth":1280,"imageHeight":1656}],"extras":null,"metadata":{"first_done_at":1551642538000,"last_updated_at":1551642538000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/a9509c7d-fcf0-4c19-b31d-957ffddb37a9___page_0013.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.13739097105283643,0.2936915170956773],[0.8563516775281906,0.28406228702696656],[0.8610354606322647,0.32619016857757605],[0.13739097105283643,0.3400321868013477]],"notes":"In the simplest case, an optimization problem consists of","imageWidth":1277,"imageHeight":1657},{"label":["line"],"shape":"polygon","points":[[0.13661034053549076,0.3430413211978198],[0.8906994202913998,0.3243846879396928],[0.8985057254648564,0.37012353076606874],[0.1319265574314168,0.3743363189211297]],"notes":"maximizing or minimizing a real function by systematically","imageWidth":1277,"imageHeight":1657},{"label":["line"],"shape":"polygon","points":[[0.1319265574314168,0.37734545331760183],[0.8977250949475107,0.37313266516254084],[0.8906994202913998,0.4002148747307898],[0.13270718794876246,0.4086404510409117]],"notes":"choosing input values from within an allowed set and","imageWidth":1277,"imageHeight":1657},{"label":["line"],"shape":"polygon","points":[[0.1319265574314168,0.4122514123166782],[0.8883575287393628,0.40442766288585075],[0.8906994202913998,0.4459537175571658],[0.1280234048446885,0.4561847745051709]],"notes":"computing the value of the function. \nThe generalization","imageWidth":1277,"imageHeight":1657},{"label":["line"],"shape":"polygon","points":[[0.12926569857121709,0.4568707277224312],[0.9292021319372552,0.44716384257252123],[0.9224870307127764,0.4834028804655186],[0.12171120969367841,0.4937568912920893]],"notes":"of optimization theory and techniques to other formulations","imageWidth":1277,"imageHeight":1657},{"label":["line"],"shape":"polygon","points":[[0.11835365908143901,0.4885798858788039],[0.828475613570073,0.48728563452548257],[0.8242786753047738,0.5312901805384079],[0.1301050862242769,0.5267603008017833]],"notes":"constitutes a large area of applied mathematics.","imageWidth":1277,"imageHeight":1657}],"extras":null,"metadata":{"first_done_at":1551642399000,"last_updated_at":1551642421000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/ae4e63e9-61a8-4335-b38d-8a6a6bc5741f___page_0005.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.11409090059286668,0.2833943336948642],[0.9192466847768116,0.2758705903224342],[0.9268527448163361,0.31766916461371214],[0.12061038062674478,0.32937276541526994]],"notes":"From the technology perspective, speech recognition","imageWidth":1274,"imageHeight":1655},{"label":["line"],"shape":"polygon","points":[[0.11517748059851303,0.33689650878769994],[0.8638311044888477,0.3277008224436188],[0.8605713644719087,0.36197565336246673],[0.11517748059851303,0.37953105456480346]],"notes":"has a long history with several waves of major","imageWidth":1274,"imageHeight":1655},{"label":["line"],"shape":"polygon","points":[[0.11300432058722033,0.3845468834797568],[0.9072943047147017,0.36615551079159453],[0.9040345646977627,0.40126631319626793],[0.10865800056463494,0.41464185696947686]],"notes":"innovations. \nMost recently, the field has benefited","imageWidth":1274,"imageHeight":1655},{"label":["line"],"shape":"polygon","points":[[0.10865800056463494,0.4221656003419069],[0.8540518844380306,0.4037742276537446],[0.8475324044041526,0.43136128668598805],[0.11517748059851303,0.4539325168032781]],"notes":"from advances in deeplearning and big data.","imageWidth":1274,"imageHeight":1655},{"label":["line"],"shape":"polygon","points":[[0.10974458057028129,0.4606202886898826],[0.10322510053640319,0.49907497703785825],[0.8714371645283722,0.47566777543474265],[0.8627445244832014,0.4464087734308481]],"notes":"The advances are evidenced not only by the","imageWidth":1274,"imageHeight":1655},{"label":["line"],"shape":"polygon","points":[[0.10539826054769588,0.5049267774386371],[0.9040345646977627,0.480683604349696],[0.9138137847485798,0.5199742641834972],[0.10648484055334224,0.5333498079567062]],"notes":"surge of academic papers published in the field,","imageWidth":1274,"imageHeight":1655},{"label":["line"],"shape":"polygon","points":[[0.10757142055898859,0.5408735513291362],[0.9170735247655188,0.5216462071551483],[0.9181601047711652,0.5575929810456474],[0.10974458057028129,0.5692965818472051]],"notes":"but more importantly by the worldwide industry","imageWidth":1274,"imageHeight":1655},{"label":["line"],"shape":"polygon","points":[[0.11083116057592764,0.573476439276333],[0.10865800056463494,0.5977196123652742],[0.8258008042912255,0.6010634983085764],[0.8312337043194573,0.5651167244180774],[0.8312337043194573,0.560100895503124]],"notes":"adoption of a variety of deep learning","imageWidth":1274,"imageHeight":1655},{"label":["line"],"shape":"polygon","points":[[0.11191774058157399,0.6027354412802275],[0.11300432058722033,0.637846243684901],[0.8138484242291156,0.6370102721990754],[0.8051557841839448,0.6035714127660531]],"notes":"methods in designing and \ndeploying","imageWidth":1274,"imageHeight":1655},{"label":["line"],"shape":"polygon","points":[[0.11191774058157399,0.647041930028982],[0.699757523636249,0.6436980440856799],[0.6954112036136636,0.6846606468911322],[0.11626406060415938,0.6804807894620044]],"notes":"speech recognition systems","imageWidth":1274,"imageHeight":1655}],"extras":null,"metadata":{"first_done_at":1551641986000,"last_updated_at":1551642498000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/440fa873-d9f6-4605-917f-eaba293624b1___page_0011.jpg","annotation":null,"extras":null,"metadata":{"first_done_at":1551641921000,"last_updated_at":1551641921000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/90d345d6-bc45-4fc0-9e9e-9bb10d4ee06e___page_0038.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.1451990632318501,0.30685920577617326],[0.9344262295081968,0.2851985559566787],[0.9508196721311475,0.3194945848375451],[0.1522248243559719,0.351985559566787]],"notes":"In mathematical analysis, the maxima and minima (the respective","imageWidth":1273,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.9812646370023419,0.3664259927797834],[0.9718969555035128,0.3285198555956679],[0.14285714285714285,0.3574007220216607],[0.1451990632318501,0.3862815884476534],[0.14754098360655737,0.388086642599278],[0.14754098360655737,0.3916967509025271],[0.14754098360655737,0.388086642599278],[0.9742388758782201,0.3628158844765343],[0.9742388758782201,0.36101083032490977],[0.9742388758782201,0.3592057761732852]],"notes":"plurals of maximum and minimum) of a ruction, known collectively","imageWidth":1273,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.1405152224824356,0.3953068592057762],[0.8711943793911007,0.37184115523465705],[0.8711943793911007,0.36823104693140796],[0.8711943793911007,0.36823104693140796],[0.8711943793911007,0.3664259927797834],[0.8711943793911007,0.3628158844765343],[0.8594847775175644,0.3935018050541516],[0.14754098360655737,0.42057761732851984]],"notes":"as extrema (the plural of extremum), are the largest","imageWidth":1273,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.14754098360655737,0.4296028880866426],[0.8875878220140515,0.3971119133574007],[0.9039812646370023,0.3953068592057762],[0.9016393442622951,0.4187725631768953],[0.9039812646370023,0.427797833935018],[0.14988290398126464,0.4602888086642599],[0.1358313817330211,0.4332129963898917]],"notes":"and smallest value of the function, either within a \ngiven","imageWidth":1273,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.1451990632318501,0.4657039711191336],[0.9461358313817331,0.4332129963898917],[0.955503512880562,0.4296028880866426],[0.955503512880562,0.4657039711191336],[0.1451990632318501,0.4927797833935018],[0.1358313817330211,0.4657039711191336],[0.1358313817330211,0.5018050541516246]],"notes":"range (the local or relative extrema) or on the entire domain","imageWidth":1273,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.13817330210772832,0.5],[0.9765807962529274,0.4675090252707581],[0.9929742388758782,0.5],[0.1405152224824356,0.5306859205776173]],"notes":"of a function (the global or absolute extrema). Pierre de Fermat","imageWidth":1273,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.14285714285714285,0.5379061371841155],[0.9765807962529274,0.5054151624548736],[0.9695550351288056,0.5451263537906137],[0.1451990632318501,0.5740072202166066]],"notes":"was one of the first mathematicians to propose a general","imageWidth":1273,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.13817330210772832,0.5812274368231047],[0.9344262295081968,0.5523465703971119],[0.9344262295081968,0.5794223826714802],[0.1358313817330211,0.6137184115523465]],"notes":"technique, adequality for finding the maxima and minima","imageWidth":1273,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.13348946135831383,0.6209386281588448],[0.3255269320843091,0.6155234657039711],[0.32786885245901637,0.6534296028880866],[0.1358313817330211,0.6570397111913358],[0.12177985948477751,0.6155234657039711]],"notes":"of functions.","imageWidth":1273,"imageHeight":1654}],"extras":null,"metadata":{"first_done_at":1551642026000,"last_updated_at":1551642404000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/689c78f1-025c-4c2f-b424-ce06d35cfe3e___page_0010.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.07932692307692307,0.29944547134935307],[0.8990384615384616,0.29944547134935307],[0.8990384615384616,0.3438077634011091],[0.07932692307692307,0.3438077634011091]],"notes":"","imageWidth":1273,"imageHeight":1656},{"label":["line"],"shape":"rectangle","points":[[0.0889423076923077,0.35489833641404805],[0.9038461538461539,0.35489833641404805],[0.9038461538461539,0.3955637707948244],[0.0889423076923077,0.3955637707948244]],"notes":"","imageWidth":1273,"imageHeight":1656},{"label":["line"],"shape":"rectangle","points":[[0.07211538461538461,0.3974121996303142],[0.9423076923076923,0.3974121996303142],[0.9423076923076923,0.4343807763401109],[0.07211538461538461,0.4343807763401109]],"notes":"","imageWidth":1273,"imageHeight":1656},{"label":["line"],"shape":"rectangle","points":[[0.08653846153846154,0.43807763401109057],[0.9086538461538461,0.43807763401109057],[0.9086538461538461,0.46950092421441775],[0.08653846153846154,0.46950092421441775]],"notes":"","imageWidth":1273,"imageHeight":1656},{"label":["line"],"shape":"rectangle","points":[[0.057692307692307696,0.47504621072088726],[0.9711538461538461,0.47504621072088726],[0.9711538461538461,0.512014787430684],[0.057692307692307696,0.512014787430684]],"notes":"","imageWidth":1273,"imageHeight":1656},{"label":["line"],"shape":"rectangle","points":[[0.08653846153846154,0.5175600739371534],[0.9615384615384616,0.5175600739371534],[0.9615384615384616,0.5508317929759704],[0.08653846153846154,0.5508317929759704]],"notes":"","imageWidth":1273,"imageHeight":1656},{"label":["line"],"shape":"rectangle","points":[[0.10096153846153846,0.5600739371534196],[0.9423076923076923,0.5600739371534196],[0.9423076923076923,0.5951940850277264],[0.10096153846153846,0.5951940850277264]],"notes":"","imageWidth":1273,"imageHeight":1656},{"label":["line"],"shape":"rectangle","points":[[0.08173076923076923,0.5970425138632163],[0.9783653846153846,0.5970425138632163],[0.9783653846153846,0.6377079482439926],[0.08173076923076923,0.6377079482439926]],"notes":"","imageWidth":1273,"imageHeight":1656},{"label":["line"],"shape":"rectangle","points":[[0.11057692307692307,0.6414048059149723],[0.8100961538461539,0.6414048059149723],[0.8100961538461539,0.6968576709796673],[0.11057692307692307,0.6968576709796673]],"notes":"","imageWidth":1273,"imageHeight":1656}],"extras":null,"metadata":{"first_done_at":1551642448000,"last_updated_at":1551642448000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/a1628e99-7b9a-4f84-953b-66c24eeabe1a___page_0004.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.06325661478110305,0.36367123323056577],[0.06472769884577985,0.3251515387450853],[0.9400227173284846,0.30702462369309447],[0.9402678980059309,0.3512089791323221]],"notes":"Computer science is the study of processes","imageWidth":1276,"imageHeight":1656},{"label":["line"],"shape":"polygon","points":[[0.0465843287147658,0.4116320293056248],[0.044132521940304446,0.36442652135773207],[0.9390419946187002,0.35026486897336423],[0.9451715115548536,0.38425283469584703]],"notes":"that interact with data and that can","imageWidth":1276,"imageHeight":1656},{"label":["line"],"shape":"polygon","points":[[0.03432529484245901,0.45883753725351756],[0.04045481177861241,0.41257613946458266],[0.913298023486856,0.3927498261264677],[0.9108462167123945,0.43429067312061337]],"notes":"be represented as data in the form","imageWidth":1276,"imageHeight":1656},{"label":["line"],"shape":"polygon","points":[[0.031873488067997655,0.4890490623401689],[0.038003005004151054,0.45694931693560187],[0.9770449996228513,0.42296135121311906],[0.985626323333466,0.4607257575714333]],"notes":"of programs. It enables the use of algorithms","imageWidth":1276,"imageHeight":1656},{"label":["line"],"shape":"polygon","points":[[0.02696987451907494,0.5305899093343146],[0.015936744033998827,0.4890490623401689],[0.9108462167123945,0.456005206776644],[0.9255570573591627,0.501322494406621]],"notes":"to manipulate, store, and communicate digital","imageWidth":1276,"imageHeight":1656},{"label":["line"],"shape":"polygon","points":[[0.015936744033998827,0.5655219852157553],[0.01348493725953747,0.5287016890163988],[0.924331153971932,0.4862167318632954],[0.9476233183293149,0.5305899093343146]],"notes":"information. \nA computer scientist studies the","imageWidth":1276,"imageHeight":1656},{"label":["line"],"shape":"polygon","points":[[0.017162647421229506,0.6042305017330273],[0.014710840646768149,0.5674102055336709],[0.9047166997762411,0.5249252483805674],[0.9145239268740866,0.5674102055336709]],"notes":"theory of computation and the practice","imageWidth":1276,"imageHeight":1656},{"label":["line"],"shape":"polygon","points":[[0.017162647421229506,0.6467154588861307],[0.014710840646768149,0.6004540610971958],[0.9108462167123945,0.5692984258515866],[0.9120721200996252,0.6155598236405215]],"notes":"of designing software systems. Its fields","imageWidth":1276,"imageHeight":1656},{"label":["line"],"shape":"polygon","points":[[0.008581323710614753,0.6844798652444449],[0.012259033872306792,0.6495477893630043],[0.821355269444555,0.6089510525278166],[0.8238070762190164,0.6457713487271729]],"notes":"can be divided into theoretical and","imageWidth":1276,"imageHeight":1656},{"label":["line"],"shape":"polygon","points":[[0.008581323710614753,0.7250766020796328],[0.003677710161692037,0.6844798652444449],[0.9660118691377751,0.6278332557069737],[0.981948613171774,0.6750387636548664]],"notes":"practical disciplines. \nComputational complexity","imageWidth":1276,"imageHeight":1656},{"label":["line"],"shape":"polygon","points":[[0.018388550808460188,0.7543440170073262],[0.011033130485076112,0.7184678309669277],[0.787029974602096,0.67787109413174],[0.7992890084744028,0.7222442716027592]],"notes":"is highly abstract, while computer","imageWidth":1276,"imageHeight":1656},{"label":["line"],"shape":"polygon","points":[[0.022066260970152223,0.8006054147962611],[0.018388550808460188,0.7543440170073262],[0.8201293660573243,0.7213001614438013],[0.8274847863807084,0.7618968982789891]],"notes":"graphics emphasizes real-world","imageWidth":1276,"imageHeight":1656},{"label":["line"],"shape":"polygon","points":[[0.02942168129353629,0.82326405861125],[0.028195777906305647,0.8006054147962619],[0.3371234314884368,0.7864437624118941],[0.3371234314884368,0.8223199484522927]],"notes":"applications","imageWidth":1276,"imageHeight":1656}],"extras":null,"metadata":{"first_done_at":1551641899000,"last_updated_at":1551642413000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"CORRECT"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/4b4847ca-35ad-49ca-9c7d-11217b51dcb4___page_0009.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.16563203811314875,0.29920697287551934],[0.17160076020731627,0.3446634168316079],[0.975139972134619,0.3078379432469286],[0.9788704234434737,0.2692862755879674]],"notes":"Gradient descent is a first-order iterative optimization","imageWidth":1274,"imageHeight":1650},{"label":["line"],"shape":"polygon","points":[[0.18414220221641592,0.3944108534188876],[0.1740980820955205,0.3485791503211118],[0.31304174376790705,0.34147846392568176],[0.8227808399033493,0.3143667522340397],[0.9625615115858105,0.3104936505638051],[0.9290811111828258,0.3356688114203299],[0.7415908689261114,0.35955293838677643]],"notes":"algorithm for finding the minimum of a function.","imageWidth":1274,"imageHeight":1650},{"label":["line"],"shape":"polygon","points":[[0.18246818219626668,0.3944108534188876],[0.18246818219626668,0.42281359900060783],[0.9525173914649151,0.3905377517486531],[0.9726056317067059,0.3614894892218937],[0.6670969780294704,0.3569708706066201]],"notes":"To find a local minimum of a function using gradient","imageWidth":1274,"imageHeight":1650},{"label":["line"],"shape":"polygon","points":[[0.18330519220634126,0.424104632890686],[0.18330519220634126,0.46412668348311004],[0.9466583213943929,0.42668670067084224],[0.935777191263423,0.3950563703639268]],"notes":"descent, one takes steps proportional to the negative of ","imageWidth":1274,"imageHeight":1650},{"label":["line"],"shape":"polygon","points":[[0.1903054702443,0.46283086228848935],[0.1986522013953658,0.4885794498984192],[0.28879689782687634,0.4885794498984192],[0.777080670164225,0.46283086228848935],[0.9598740823725658,0.45703743007625514],[0.9523620243366065,0.4293576983955805],[0.8730680784014816,0.4287139837053323]],"notes":"the gradient (or approximate gradient) of the \nfunction","imageWidth":1274,"imageHeight":1650},{"label":["line"],"shape":"polygon","points":[[0.19809454641782176,0.4914711490862539],[0.21771711941203994,0.5202964364226911],[0.5513008603137493,0.5130901145885818],[0.9119824401122361,0.4857060916189665],[0.9101136236365962,0.46048396519958396]],"notes":"at the current point. If, instead, one takes steps","imageWidth":1274,"imageHeight":1650},{"label":["line"],"shape":"polygon","points":[[0.21024185350948063,0.5606518386937032],[0.21024185350948063,0.5282233904402113],[0.5372847367464505,0.51164885022176],[0.9045071742096767,0.5030012640208288],[0.9063759906853165,0.5339884479074988]],"notes":"one approaches a local maximum of that function","imageWidth":1274,"imageHeight":1650},{"label":["line"],"shape":"polygon","points":[[0.21491389469858022,0.5743438501785108],[0.5400879614599102,0.5505629881259502],[0.8989007247827572,0.5375916088245534],[0.901703949496217,0.5671375283444015],[0.21491389469858022,0.6082135627988244]],"notes":"","imageWidth":1274,"imageHeight":1650},{"label":["line"],"shape":"polygon","points":[[0.21865152764985987,0.6161405168163447],[0.21491389469858022,0.6500102294366584],[0.9568340355275918,0.6089341949822354],[0.9512275861006724,0.5736232179950999]],"notes":"the procedure is then known as gradient ascent","imageWidth":1274,"imageHeight":1650}],"extras":null,"metadata":{"first_done_at":1551642278000,"last_updated_at":1551642496000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/d24d8d1c-4385-4dbc-98a3-b72e78b7a95f___page_0021.jpg","annotation":null,"extras":null,"metadata":{"first_done_at":1551642589000,"last_updated_at":1551642589000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/f63c79eb-1b21-4921-8ff6-8acb5415d7f6___page_0035.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.14758836868157094,0.32632492232155297],[0.8758748982504443,0.32632492232155297],[0.8758748982504443,0.3549778423302747],[0.14758836868157094,0.3549778423302747]],"notes":"Deep learning (also known as deep structured learning or","imageWidth":1273,"imageHeight":1654},{"label":["line"],"shape":"rectangle","points":[[0.15241602560106157,0.3571002808494393],[0.866909249685676,0.3571002808494393],[0.866909249685676,0.38256954307941415],[0.15241602560106157,0.38256954307941415]],"notes":"hierarchical learning) is part of a broader family of","imageWidth":1273,"imageHeight":1654},{"label":["line"],"shape":"rectangle","points":[[0.1517263603268486,0.38628381048795213],[0.9807040199308125,0.38628381048795213],[0.9807040199308125,0.4053857571604333],[0.1517263603268486,0.4053857571604333]],"notes":"machine learning methods based on learning data representations,","imageWidth":1273,"imageHeight":1654},{"label":["line"],"shape":"rectangle","points":[[0.1517263603268486,0.40963063419876244],[0.8648402538630371,0.40963063419876244],[0.8648402538630371,0.42926319050103473],[0.1517263603268486,0.42926319050103473]],"notes":"as opposed to task-specific algorithms. Learning can be","imageWidth":1273,"imageHeight":1654},{"label":["line"],"shape":"rectangle","points":[[0.14965736450420977,0.4324468482797816],[0.8896682037347032,0.4324468482797816],[0.8896682037347032,0.45261001421184505],[0.14965736450420977,0.45261001421184505]],"notes":"supervised, semi-supervised or unsupervised. \nDeep networks","imageWidth":1273,"imageHeight":1654},{"label":["line"],"shape":"rectangle","points":[[0.1517263603268486,0.45685489125017414],[0.8896682037347032,0.45685489125017414],[0.8896682037347032,0.47967110533119334],[0.1517263603268486,0.47967110533119334]],"notes":"have been applied to fields including computer vision, speech","imageWidth":1273,"imageHeight":1654},{"label":["line"],"shape":"rectangle","points":[[0.1503470297784227,0.4844465919993136],[0.9613933922528499,0.4844465919993136],[0.9613933922528499,0.5003648808930479],[0.1503470297784227,0.5003648808930479]],"notes":"recognition natural language processing, and audio recognition.","imageWidth":1273,"imageHeight":1654}],"extras":null,"metadata":{"first_done_at":1551642005000,"last_updated_at":1551642503000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/7f7effd3-09e4-4836-a476-6c724cb6a2a2___page_0034.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.1456223957524815,0.3119468566362452],[0.9691665270711269,0.3119468566362452],[0.9691665270711269,0.34461144895417667],[0.1456223957524815,0.34461144895417667]],"notes":"From the technology perspective, speech recognition has a long history with","imageWidth":1273,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.1449154909187316,0.34842231805793533],[0.9727010512398765,0.34842231805793533],[0.9727010512398765,0.37346517216834946],[0.1449154909187316,0.37346517216834946]],"notes":"several waves of major innovations. Most recently, the field has benefited","imageWidth":1273,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.1458882093429722,0.3736080488203237],[0.9708358486847941,0.3736080488203237],[0.9708358486847941,0.3966074878273449],[0.1458882093429722,0.3966074878273449]],"notes":"from advances in deep learning and big data. \nThe advances are","imageWidth":1273,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.13962952138444584,0.39850513130977233],[0.9683717983074215,0.39850513130977233],[0.9683717983074215,0.42254194875385387],[0.13962952138444584,0.42254194875385387]],"notes":"evidence not only by the surge of academic papers published","imageWidth":1273,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.9889055514521929,0.40293296610210316],[0.9913696018295656,0.40293296610210316],[0.9889055514521929,0.40293296610210316],[0.9913696018295656,0.40293296610210316]],"notes":"","imageWidth":1273,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.14415026823142896,0.4244376027495618],[0.9574259676022351,0.4244376027495618],[0.9574259676022351,0.4569674920113896],[0.14415026823142896,0.4569674920113896]],"notes":"in the field, but more importantly by the worldwide industry","imageWidth":1273,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.1421388691398276,0.4610982716001931],[0.2713195659387469,0.4610982716001931],[0.2713195659387469,0.4912539277338767],[0.1421388691398276,0.4912539277338767]],"notes":"adoption","imageWidth":1273,"imageHeight":1652}],"extras":null,"metadata":{"first_done_at":1551642230000,"last_updated_at":1551642517000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/2de4e581-35d9-462d-995d-cd461d0de696___page_0020.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.10386473429951697,0.2705223880597015],[0.9065562376402465,0.2705223880597015],[0.9065562376402465,0.3298105715295672],[0.10386473429951697,0.3298105715295672]],"notes":"In the simplest case, an optimization problem consists of","imageWidth":1278,"imageHeight":1654},{"label":["line"],"shape":"rectangle","points":[[0.11036336806055175,0.37852105594008795],[0.9341470796553845,0.37852105594008795],[0.9341470796553845,0.33285497680522474],[0.11036336806055175,0.33285497680522474]],"notes":"Maximizing or minimizing a real function by systematically","imageWidth":1278,"imageHeight":1654},{"label":["line"],"shape":"rectangle","points":[[0.13029008729370695,0.4363647561775813],[0.8682356237303328,0.4363647561775813],[0.8682356237303328,0.3873160045142097],[0.13029008729370695,0.3873160045142097]],"notes":"Choosing input values from within or allowed set onl","imageWidth":1278,"imageHeight":1654},{"label":["line"],"shape":"rectangle","points":[[0.1368593353925493,0.4752654902554277],[0.9503512249658623,0.4752654902554277],[0.9503512249658623,0.44228443310247095],[0.1368593353925493,0.44228443310247095]],"notes":"computing the value of the function. 
The generalizations","imageWidth":1278,"imageHeight":1654},{"label":["line"],"shape":"rectangle","points":[[0.1379542100756897,0.48625917597307994],[0.9634897211635471,0.48625917597307994],[0.9634897211635471,0.5437646089577224],[0.1379542100756897,0.5437646089577224]],"notes":"of optimization theory and techniques to other formulations","imageWidth":1278,"imageHeight":1654},{"label":["line"],"shape":"rectangle","points":[[0.8441483807012441,0.5953503650174753],[0.13247983665998772,0.5953503650174753],[0.13247983665998772,0.5395362682970869],[0.8441483807012441,0.5395362682970869]],"notes":"constitutes a large area of applied mathematics.","imageWidth":1278,"imageHeight":1654}],"extras":null,"metadata":{"first_done_at":1551642039000,"last_updated_at":1551642473000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/95971e2b-2a14-43ce-a243-59b92f9b39b8___page_0036.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.08163527840800211,0.25929895472498055],[0.08673748330850224,0.30906340058128995],[0.874177772952356,0.28549076833356446],[0.8690755680518558,0.25733456870433674]],"notes":"In the simplest case, an optimization problem consists of","imageWidth":1271,"imageHeight":1652},{"label":["line"],"shape":"polygon","points":[[0.08851196004094052,0.315787138950529],[0.08851196004094052,0.3385056381556031],[0.8792188030733424,0.31502985564369324],[0.8762684044053111,0.2945832063591266],[0.8762684044053111,0.2923113564386192],[0.875284938182634,0.289282223211276],[0.8654502759558628,0.28852493990444017],[0.8654502759558628,0.28852493990444017],[0.8644668097331857,0.2862530899839328],[0.8644668097331857,0.2862530899839328],[0.09047889248629475,0.31200072241635]],"notes":"maximizing or minimizing a real function by systematically","imageWidth":1271,"imageHeight":1652},{"label":["line"],"shape":"polygon","points":[[0.08654502759558629,0.40590385246398936],[0.08654502759558629,0.430136918282735],[0.8929873301908221,0.39605916947512393],[0.8949542626361763,0.3748552368837215],[0.8949542626361763,0.3680396871221993],[0.8949542626361763,0.3657678372016919],[0.8979046613042077,0.36046685405384127]],"notes":"optimization theory and techniques to other formulations","imageWidth":1271,"imageHeight":1652},{"label":["line"],"shape":"polygon","points":[[0.08949542626361763,0.4399816012716004],[0.09047889248629475,0.46042825055616704],[0.7946407079231104,0.4278650683622276],[0.7907068430324019,0.40741841907766096],[0.7867729781416934,0.40363200254348197],[0.7867729781416934,0.39681645278195976]],"notes":"constitutes a large area of applied 
mathematics.","imageWidth":1271,"imageHeight":1652},{"label":["line"],"shape":"polygon","points":[[0.08556156137290916,0.3385056381556031],[0.08556156137290916,0.3627387039743487],[0.8654502759558628,0.33471922162142403],[0.8585660123971229,0.3066997392684994]],"notes":"choosing input values from within an allowed set and","imageWidth":1271,"imageHeight":1652},{"label":["line"],"shape":"polygon","points":[[0.08654502759558629,0.3703115370427067],[0.08949542626361763,0.39833101939563137],[0.8969211950815306,0.3703115370427067],[0.8939707964134992,0.34304933799661785],[0.8910203977454678,0.33623378823509564]],"notes":"computing the value of the function. The generalization of","imageWidth":1271,"imageHeight":1652}],"extras":null,"metadata":{"first_done_at":1551642285000,"last_updated_at":1551642285000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"CORRECT"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/6d0d7e7d-f506-406d-b4c2-b50447d1b1ae___page_0022.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.1503638731818887,0.2813181677261853],[0.9385127654976947,0.2813181677261853],[0.9385127654976947,0.3249575510853719],[0.1503638731818887,0.3249575510853719]],"notes":"Data is measured, collected and reported, and","imageWidth":1275,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.07064074579014906,0.3670383850388733],[0.9163113882493621,0.3670383850388733],[0.9163113882493621,0.32573682578821456],[0.07064074579014906,0.32573682578821456]],"notes":"analyzed, whereupon it can be visualized","imageWidth":1275,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.052475982586967874,0.3647005609303455],[0.9667690638137543,0.3647005609303455],[0.9667690638137543,0.404443570775319],[0.052475982586967874,0.404443570775319]],"notes":"using graphs, images or other analysis tools","imageWidth":1275,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.045411908007952966,0.44574513002597776],[0.9990619761749653,0.44574513002597776],[0.9990619761749653,0.4028850213696338],[0.045411908007952966,0.4028850213696338]],"notes":"Data as a general concepts refers to the fact","imageWidth":1275,"imageHeight":1652}],"extras":null,"metadata":{"first_done_at":1551642374000,"last_updated_at":1551642473000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/bce05240-8bdf-4d63-91a9-84ce6e8dcaf4___page_0037.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[1,1],[0.7181069958847737,1],[0.7181069958847737,0.6449044585987261],[1,0.6449044585987261]],"notes":"","imageWidth":1276,"imageHeight":1649}],"extras":null,"metadata":{"first_done_at":1551642147000,"last_updated_at":1551642147000,"sec_taken":15,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/171942cf-00fd-4a5a-a3c2-0c3f153c57bb___page_0033.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.5012800867937639,0.30610368414063827],[0.6506844123365307,0.30610368414063827],[0.6506844123365307,0.34397218114772754],[0.5012800867937639,0.34397218114772754]],"notes":"","imageWidth":1277,"imageHeight":1650},{"label":["line"],"shape":"rectangle","points":[[0.5004636697142952,0.3244067910273981],[0.39188019814496206,0.3244067910273981],[0.39188019814496206,0.3427098979141579],[0.5004636697142952,0.3427098979141579]],"notes":"","imageWidth":1277,"imageHeight":1650}],"extras":null,"metadata":{"first_done_at":1551641913000,"last_updated_at":1551641913000,"sec_taken":270,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/5f5bb547-376b-4572-b7e9-45f95bead94f___page_0027.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.1660684104163008,0.2924327181842311],[0.9269439378792215,0.2924327181842311],[0.9269439378792215,0.34689726455092174],[0.1660684104163008,0.34689726455092174]],"notes":"","imageWidth":1279,"imageHeight":1657},{"label":["line"],"shape":"rectangle","points":[[0.09334564245622136,0.35360059333451443],[0.9768730024488282,0.35360059333451443],[0.9768730024488282,0.3996859787217142],[0.09334564245622136,0.3996859787217142]],"notes":"","imageWidth":1279,"imageHeight":1657},{"label":["line"],"shape":"rectangle","points":[[0.09877271469204818,0.4156063845827468],[0.9812146602374897,0.4156063845827468],[0.9812146602374897,0.4767742597330301],[0.09877271469204818,0.4767742597330301]],"notes":"","imageWidth":1279,"imageHeight":1657},{"label":["line"],"shape":"rectangle","points":[[0.10202895803354428,0.4918567494961137],[0.9573355423998516,0.4918567494961137],[0.9573355423998516,0.5563762890381934],[0.10202895803354428,0.5563762890381934]],"notes":"","imageWidth":1279,"imageHeight":1657},{"label":["line"],"shape":"rectangle","points":[[0.10202895803354428,0.5697829466053786],[0.970360515765836,0.5697829466053786],[0.970360515765836,0.6317887378536111],[0.10202895803354428,0.6317887378536111]],"notes":"","imageWidth":1279,"imageHeight":1657},{"label":["line"],"shape":"rectangle","points":[[0.09443105690338673,0.6426816471269492],[0.92260228009056,0.6426816471269492],[0.92260228009056,0.698822025689538],[0.09443105690338673,0.698822025689538]],"notes":"","imageWidth":1279,"imageHeight":1657},{"label":["line"],"shape":"rectangle","points":[[0.11722476029385938,0.7239595086280106],[0.46238655449244537,0.7239595086280106],[0.46238655449244537,0.7742344745049557],[0.11722476029385938,0.7742344745049557]],"notes":"","imageWidth":1279,"imageHeight":1657}],"extras":null,"met
adata":{"first_done_at":1551641916000,"last_updated_at":1551642055000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/a2f9fc97-65c1-42ac-b6b9-a1c5ddb28356___page_0032.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.15132252227721377,0.30347144316314506],[0.8390869718689105,0.29407433107976894],[0.8412384769249845,0.31950181083478657],[0.5895123853643115,0.330557236815229],[0.5371590956665077,0.3327683220113175],[0.5228157286260134,0.3399543488986051],[0.47978562750453085,0.33332109331033966],[0.3894224151494174,0.33387386460936175],[0.2603321117849697,0.334979407207406],[0.15132252227721377,0.3382960350015387]],"notes":"","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.15275685898126318,0.3427182053937157],[0.15419119568531262,0.36704014255068906],[0.3815335632771456,0.35819580176633514],[0.5134925400496922,0.35487917397220237],[0.570866008211669,0.3593013443643794],[0.6784412610153755,0.35598471657024666],[0.7680873050184642,0.35156254617806965],[0.8512788338533305,0.35100977487904755],[0.8448243186851081,0.3244767525259857]],"notes":"","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.15849420579746087,0.39191485100668455],[0.687047281239672,0.38472882411939696],[0.8017942175636256,0.3769900259330873],[0.7996427125075514,0.3543264026731803],[0.15705986909341144,0.37090954164384393]],"notes":"","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.15490836403733732,0.39688979269788366],[0.15203969062923847,0.4250811289480119],[0.3449579773238854,0.42839775674214464],[0.8469758237411823,0.40407581958517125],[0.8455414870371328,0.37643725463406513]],"notes":"","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.15453302859568263,0.4319596895623016],[0.15648914288170393,0.46512936904003505],[0.5770537143762833,0.45834466187413503],[0.6063954286666028,0.46060623092943503],[0.6484518858160607,0.4485445293011683],[0.86362445727840
35,0.4379905403764349],[0.8655805715644249,0.4100978553610682]],"notes":"","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.15844525716772523,0.4704063635024017],[0.1594233143107359,0.4990529048695352],[0.8489536001332438,0.4764372143165351],[0.8440633144181906,0.44703681659763495],[0.307109942905344,0.45834466187413503]],"notes":"","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.15844525716772523,0.5050837556836685],[0.15844525716772523,0.538253435161402],[0.3882886857752279,0.5344841534025686],[0.8264582858439989,0.5073453247389685],[0.8274363429870095,0.48397577783420176]],"notes":"","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.1613794285967572,0.545038142327302],[0.16626971431181042,0.5797155345085688],[0.859712228706361,0.551822849493202],[0.856778057277329,0.5133761755531019]],"notes":"","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.15453302859568263,0.5865002416744688],[0.16040137145374653,0.6219314902075023],[0.9037248001418402,0.5827309599156355],[0.9047028572848509,0.5510689931414353],[0.326671085765557,0.5736846836944355]],"notes":"","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.1613794285967572,0.629470053725169],[0.17018194288385302,0.661132020499369],[0.6797497143924015,0.6392701862981357],[0.7746212572644344,0.6309777664287023],[0.787336000123573,0.633993191835769],[0.8254802287009882,0.6257007719663357],[0.8890539429966805,0.6166544957451356],[0.8880758858536698,0.5925310924886021],[0.8861197715676485,0.5872540980262355]],"notes":"","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.16040137145374653,0.6679167276652691],[0.16431360002578912,0.7010864071430025],[0.3296052571945889,0.7025941198465359],[0.8734050287085101,0.6535934569817023],[0.8724269715654994,0.627208484669869],[0.379486171488132,0.6505780315746357]],"
notes":"","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.1574672000247146,0.7078711143089026],[0.15257691430966133,0.7448100755454694],[0.45479657149995206,0.7289790921583693],[0.44990628578489883,0.7010864071430025],[0.33840777148168477,0.6995786944394692]],"notes":"","imageWidth":1274,"imageHeight":1653}],"extras":null,"metadata":{"first_done_at":1551642472000,"last_updated_at":1551642472000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/e63382fe-8bd9-440d-9707-15b18cf0f6ea___page_0030.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.15517241379310345,0.3521594684385382],[0.9116379310344828,0.3521594684385382],[0.9116379310344828,0.31893687707641194],[0.15517241379310345,0.31893687707641194]],"notes":"resolve the truth or falsity of conjectures by mathematical proog. When","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.15517241379310345,0.3903654485049834],[0.9525862068965517,0.3903654485049834],[0.9525862068965517,0.3554817275747508],[0.15517241379310345,0.3554817275747508]],"notes":"mathematical structures are good models of real phenomena, then mathematical","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.16163793103448276,0.4318936877076412],[0.9267241379310345,0.4318936877076412],[0.9267241379310345,0.3953488372093023],[0.16163793103448276,0.3953488372093023]],"notes":"reasoning can provide insight of predictions about nature. 
Through the","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.14870689655172414,0.4435215946843854],[0.8728448275862069,0.4435215946843854],[0.8728448275862069,0.48172757475083056],[0.14870689655172414,0.48172757475083056]],"notes":"use of abstraction and logic, mathematics developed from counting,","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.15086206896551724,0.5149501661129569],[0.8879310344827587,0.5149501661129569],[0.8879310344827587,0.48172757475083056],[0.15086206896551724,0.48172757475083056]],"notes":"calculation, measurement, and systematic study of the shapes","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.5581896551724138,0.5548172757475083],[0.14870689655172414,0.5548172757475083],[0.14870689655172414,0.5166112956810631],[0.5581896551724138,0.5166112956810631]],"notes":"and motions of physical objets.","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.15948275862068967,0.27906976744186046],[0.8987068965517241,0.27906976744186046],[0.8987068965517241,0.3222591362126246],[0.15948275862068967,0.3222591362126246]],"notes":"Mathematicians seek and use patters to formulate new conjectures; they","imageWidth":1274,"imageHeight":1653}],"extras":null,"metadata":{"first_done_at":1551642117000,"last_updated_at":1551642383000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/d7ad6428-c7e8-4fa7-94fd-13fac1b3886a___page_0018.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.10691106390356535,0.3210322806307629],[0.9311207774857028,0.3210322806307629],[0.9311207774857028,0.35361466135149705],[0.10691106390356535,0.35361466135149705]],"notes":"mathematics seek and use patterns to formulate new","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.12804464630310733,0.36319771450465416],[0.9062577393685946,0.36319771450465416],[0.9062577393685946,0.4082380643244925],[0.12804464630310733,0.4082380643244925]],"notes":"conjuctures; they resolve the truth or falsity of","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.1355035577382398,0.4101546749551239],[0.9423091446384015,0.4101546749551239],[0.9423091446384015,0.44848688756775235],[0.1355035577382398,0.44848688756775235]],"notes":"conjuctures by mathematical proff when mathematical","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.1355035577382398,0.45615333009027803],[0.9895489170609071,0.45615333009027803],[0.9895489170609071,0.49831876396416924],[0.1355035577382398,0.49831876396416924]],"notes":"structures are good models of real phenomenon, then","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.10815421580942075,0.49736045864885353],[0.9945215246843288,0.49736045864885353],[0.9945215246843288,0.5529421669371647],[0.10815421580942075,0.5529421669371647]],"notes":"mathematical reasoning can provide insight or predictions","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.14420562107922766,0.5510255563065333],[0.8950693722158959,0.5510255563065333],[0.8950693722158959,0.588399463603846],[0.14420562107922766,0.588399463603846]],"notes":"about nature through the use of 
abstractions","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.13674670964409522,0.5893577689191617],[0.9659290308496543,0.5893577689191617],[0.9659290308496543,0.6190652336939487],[0.13674670964409522,0.6190652336939487]],"notes":"and logic, mathematics developed from courtesy","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.11685627915040864,0.6238567602705273],[0.9522543598852448,0.6238567602705273],[0.9522543598852448,0.6669804994597343],[0.11685627915040864,0.6669804994597343]],"notes":"calcualtions, measurement, and the systematic","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.12058573486797486,0.66793880477505],[0.8764220936280648,0.66793880477505],[0.8764220936280648,0.7120208492795727],[0.12058573486797486,0.7120208492795727]],"notes":"study of the shapes and motions of physical","imageWidth":1274,"imageHeight":1652}],"extras":null,"metadata":{"first_done_at":1551642585000,"last_updated_at":1551642718000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/1e201b1a-ca44-4bd4-9beb-58bc5e353edc___page_0019.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.14812779202958973,0.2577482912953182],[0.9492823745310903,0.2577482912953182],[0.9492823745310903,0.30372501352637493],[0.14812779202958973,0.30372501352637493]],"notes":"Thus a neural network is either a biological neural network,","imageWidth":1275,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.09393469738461788,0.2939723754773629],[0.9736692671213276,0.2939723754773629],[0.09393469738461788,0.3413423317154214],[0.9736692671213276,0.3413423317154214]],"notes":"made up of real biological neurons, or an artificial neural network,","imageWidth":1275,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.0858057331878721,0.3406457147119205],[0.8815410062248755,0.3406457147119205],[0.8815410062248755,0.3810495009149704],[0.0858057331878721,0.3810495009149704]],"notes":"for solving artificial intelligence (AI) problems. The connections of","imageWidth":1275,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.09393469738461788,0.37895964990446784],[0.9086375535473614,0.37895964990446784],[0.9086375535473614,0.42006005311101857],[0.09393469738461788,0.42006005311101857]],"notes":"the biological neuron are modeled as weights. 
A positive weight","imageWidth":1275,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.09032182440828643,0.41657696809351424],[0.8661862960754668,0.41657696809351424],[0.8661862960754668,0.45698075429656415],[0.09032182440828643,0.45698075429656415]],"notes":"reflects an excitatory connection, while negative values mean","imageWidth":1275,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.08670895143195496,0.4562841372930633],[0.41819004701036616,0.4562841372930633],[0.41819004701036616,0.49390145548210973],[0.08670895143195496,0.49390145548210973]],"notes":"inhibitory connections.","imageWidth":1275,"imageHeight":1652}],"extras":null,"metadata":{"first_done_at":1551642105000,"last_updated_at":1551642105000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/f7aa9b59-6baf-4f30-8d58-ef25153bf7bd___page_0042.jpg","annotation":null,"extras":null,"metadata":{"first_done_at":1551641898000,"last_updated_at":1551641898000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/624156aa-4323-4af4-bb63-28f130d3a4df___page_0056.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.10052596062794851,0.3011990316098742],[0.9643045852829135,0.3011990316098742],[0.9643045852829135,0.32936458990927087],[0.10052596062794851,0.32936458990927087]],"notes":"From the technology perspective, speech recognition has a","imageWidth":1276,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.10052596062794851,0.3379866995927596],[0.9829205039177188,0.31901805828908436],[0.9881329611354642,0.3489080385251787],[0.09605814015559525,0.37075071639001694]],"notes":"long history with several waves of major innovations. Most","imageWidth":1276,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.10573841784569399,0.3730499456389473],[0.9322852052310484,0.3563805335842023],[0.934519115467225,0.38454609188359895],[0.10946160157265504,0.4017903112505765]],"notes":"recently, the field has benefitted from advances in deep","imageWidth":1276,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.1139294220450083,0.40064069662611135],[0.9829205039177188,0.3799476333857383],[0.9896222346262487,0.412136842870763],[0.11095087506343947,0.4270818329888102]],"notes":"learning and big data. 
The advances are evidenced not","imageWidth":1276,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.11169551180883168,0.43168029148667086],[0.9769634099545811,0.412136842870763],[0.9821758671723265,0.43915278654569445],[0.1131847852996161,0.4667435375328585]],"notes":"only by the surge of academic papers published in the","imageWidth":1276,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.10797232808187063,0.4604206570983001],[0.9784526834453654,0.4391527865456945],[0.987388324390072,0.4719168033429518],[0.10946160157265505,0.4977831323934181]],"notes":"field, but more importantly by the worldwide industry","imageWidth":1276,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.11020623831804725,0.49548390314448776],[0.9397315726849705,0.4747908399041147],[0.9397315726849705,0.5018067835790462],[0.11095087506343947,0.5219250395071866]],"notes":"adoption of a variety of deep learning methods in","imageWidth":1276,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.12286506298971485,0.5202006175704889],[0.9576028545743835,0.5041060128279765],[0.959092128065168,0.542043295435327],[0.12286506298971485,0.5512402124310485]],"notes":"designing and deploying speech recognition systems.","imageWidth":1276,"imageHeight":1653}],"extras":null,"metadata":{"first_done_at":1551642454000,"last_updated_at":1551642751000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/9a824824-7b2a-4619-9ae4-110cceab66e8___page_0080.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.12686579645970855,0.2806875920128781],[0.8672055383182101,0.2806875920128781],[0.8672055383182101,0.30991551267106515],[0.12686579645970855,0.30991551267106515]],"notes":"In the simplest case, an optimization problem","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.1277125964808945,0.3249786622137598],[0.8466245634263271,0.3249786622137598],[0.8466245634263271,0.35303906879044994],[0.1277125964808945,0.35303906879044994]],"notes":"consists of maximizing or minimizing a real","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.12263792377304437,0.37131096144503883],[0.9202073176901536,0.37131096144503883],[0.9202073176901536,0.4032867735905694],[0.12263792377304437,0.4032867735905694]],"notes":"function by systematically choosing input values from","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.11925480863447763,0.42612663940880563],[0.9041375207819616,0.42612663940880563],[0.9041375207819616,0.4496190728218485],[0.11925480863447763,0.4496190728218485]],"notes":"within an allowed set and computing the value","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.12010058741911932,0.47180637104527784],[0.9464264600140458,0.47180637104527784],[0.9464264600140458,0.494646236863514],[0.12010058741911932,0.494646236863514]],"notes":"of the function. 
The generalization of optimization","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.12010058741911932,0.5214015082505904],[0.8660774754730858,0.5214015082505904],[0.8660774754730858,0.5553350231805413],[0.12010058741911932,0.5553350231805413]],"notes":"theory and the techniques to other formulations","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.12348370255768606,0.5762171862143574],[0.876226820888786,0.5762171862143574],[0.876226820888786,0.6029724576014339],[0.12348370255768606,0.6029724576014339]],"notes":"constitutes a large area of applied mathematics.","imageWidth":1274,"imageHeight":1652}],"extras":null,"metadata":{"first_done_at":1551642513000,"last_updated_at":1551642513000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/baab8a0c-2e0a-4a61-9e7c-69ca080baacd___page_0055.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.14320109467693395,0.28556282769363056],[0.9013245370842314,0.27128468630894903],[0.9030092558451365,0.3076290461972293],[0.13646221963331354,0.3193111618756051]],"notes":"Natural language processing (N.L.P) is a","imageWidth":1276,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.13477750087240845,0.3543575089107325],[0.9973535064558224,0.3387813546728981],[0.9990382252167275,0.30503302049092357],[0.13646221963331354,0.32190718758191084]],"notes":"subfield of computer science, information","imageWidth":1276,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.14320109467693395,0.3855098173864013],[0.9990382252167275,0.36733763744226117],[0.9956687876949173,0.3387813546728981],[0.14151637591602886,0.35825154747019106]],"notes":"engineering and artificial intelligence","imageWidth":1276,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.13814693839421865,0.39070186879901275],[0.13309278211150333,0.4192581515683758],[0.9939840689340121,0.3984899459179299],[0.9973535064558224,0.368635650295414]],"notes":"concerned with the interactions between","imageWidth":1276,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.1314080633505982,0.4296422543935987],[0.9552355374331948,0.40238398447738855],[0.9333341935414284,0.43223828009990445],[0.1314080633505982,0.4607945628692675]],"notes":"computers and human (natural)","imageWidth":1276,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.13646221963331354,0.4672846271350318],[0.128038625828788,0.5023309741701593],[0.9653438499986254,0.4685826399881847],[0.9569202561940999,0.4348343058062102]],"notes":"languages, in particular how 
to","imageWidth":1276,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.128038625828788,0.5062250127296178],[0.128038625828788,0.5425693726178981],[0.9249105997369029,0.5127150769953822],[0.9063786933669468,0.4789667428134076]],"notes":"program computers to process","imageWidth":1276,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.12972334458969312,0.5477614240305095],[0.128038625828788,0.5750196939467197],[0.9788216000858663,0.563337578268344],[0.9602896937159101,0.5192051412611465],[0.958604974955005,0.514013089848535]],"notes":"and analyze large amounts of","imageWidth":1276,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.128038625828788,0.582807771065637],[0.128038625828788,0.611364053835],[0.9788216000858663,0.5957878995971656],[0.9805063188467713,0.5646355911214969]],"notes":"natural language data. Challenges","imageWidth":1276,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.11746056540276852,0.6589456729826468],[0.11746056540276852,0.6985389752219474],[0.9158253458747108,0.6674299520339255],[0.8993074538649465,0.6419771148800894]],"notes":"","imageWidth":1276,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.11562524406835026,0.6179383242347997],[0.11195460139951374,0.6518754404399145],[0.9708849859072585,0.6334928358288107],[0.967214343238422,0.5995557196236958]],"notes":"in natural languages 
processing","imageWidth":1276,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.113789922733932,0.6999530217304939],[0.113789922733932,0.7254058588843301],[0.9617083792351672,0.6999530217304939],[0.9470258085598211,0.6702580450510184]],"notes":"","imageWidth":1276,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.113789922733932,0.7367182309527016],[0.11195460139951374,0.7593429750894449],[0.9066487392026195,0.7522727425467126],[0.8974721325305282,0.7112653937988656]],"notes":"","imageWidth":1276,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.11746056540276852,0.7649991611236308],[0.11929588673718677,0.7961081843116526],[0.7286225697640485,0.8102486493971172],[0.7322932124328849,0.756514882072352]],"notes":"","imageWidth":1276,"imageHeight":1654}],"extras":null,"metadata":{"first_done_at":1551642443000,"last_updated_at":1551642751000,"sec_taken":274,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/3061f91f-e31b-42d4-8323-885c7c41e90c___page_0041.jpg","annotation":null,"extras":null,"metadata":{"first_done_at":1551642196000,"last_updated_at":1551642748000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/9231ab59-5925-457e-b988-c44188514e8d___page_0096.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.08340495409000775,0.2661502149288549],[0.9035536693084173,0.2661502149288549],[0.9035536693084173,0.3155167870527554],[0.08340495409000775,0.3155167870527554]],"notes":"There are so many interesting problems to","imageWidth":1274,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.08479503665817455,0.3198095324548337],[0.9007735041720838,0.3198095324548337],[0.9007735041720838,0.3627369864756167],[0.08479503665817455,0.3627369864756167]],"notes":"work on how do you pick the one to focus on","imageWidth":1274,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.07923470638550736,0.37024929092925374],[0.8910429261949162,0.37024929092925374],[0.8910429261949162,0.4142499313005564],[0.07923470638550736,0.4142499313005564]],"notes":"right now? The is no clear-cut answer,","imageWidth":1274,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.06949793359369577,0.42029385702368843],[0.9570444605298523,0.42029385702368843],[0.9570444605298523,0.4571813497943845],[0.06949793359369577,0.4571813497943845]],"notes":"yet some good rules of thumb apply. For instance,","imageWidth":1274,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.07239368082676644,0.465005969473017],[0.9599402077629229,0.465005969473017],[0.9599402077629229,0.5074824762998791],[0.07239368082676644,0.5074824762998791]],"notes":"consider impact/feasibility framework. 
You always","imageWidth":1274,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.0709458072102311,0.5164248987897448],[0.8310794558912786,0.5164248987897448],[0.8310794558912786,0.5510767859379745],[0.0709458072102311,0.5510767859379745]],"notes":"whant to be working on high impact and","imageWidth":1274,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.06805005997716045,0.558901405616607],[0.5357132381180716,0.558901405616607],[0.5357132381180716,0.6069669264996351],[0.06805005997716045,0.6069669264996351]],"notes":"high feasibility problems.","imageWidth":1274,"imageHeight":1651}],"extras":null,"metadata":{"first_done_at":1551642378000,"last_updated_at":1551642378000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/8d8193ed-1c92-467f-8d58-6b8a7ec1a4a7___page_0097.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.06920823696737605,0.2603155293182308],[0.9148760092262723,0.2603155293182308],[0.9148760092262723,0.29980159275414225],[0.06920823696737605,0.29980159275414225]],"notes":"","imageWidth":1274,"imageHeight":1652}],"extras":null,"metadata":{"first_done_at":1551641856000,"last_updated_at":1551641856000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/a1352ccc-e927-4869-b760-c54f8835190d___page_0054.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.05995203836930456,0.2680221811460259],[0.920863309352518,0.2680221811460259],[0.920863309352518,0.3031423290203327],[0.05995203836930456,0.3031423290203327]],"notes":"","imageWidth":1277,"imageHeight":1654},{"label":["line"],"shape":"rectangle","points":[[0.08872901678657075,0.31053604436229204],[0.9760191846522782,0.31053604436229204],[0.9760191846522782,0.34935304990757854],[0.08872901678657075,0.34935304990757854]],"notes":"","imageWidth":1277,"imageHeight":1654},{"label":["line"],"shape":"rectangle","points":[[0.11270983213429256,0.34935304990757854],[0.973621103117506,0.34935304990757854],[0.973621103117506,0.3900184842883549],[0.11270983213429256,0.3900184842883549]],"notes":"","imageWidth":1277,"imageHeight":1654},{"label":["line"],"shape":"rectangle","points":[[0.09352517985611511,0.38817005545286504],[0.9280575539568345,0.38817005545286504],[0.9280575539568345,0.4417744916820702],[0.09352517985611511,0.4417744916820702]],"notes":"","imageWidth":1277,"imageHeight":1654},{"label":["line"],"shape":"rectangle","points":[[0.11990407673860912,0.4454713493530499],[0.9568345323741008,0.4454713493530499],[0.9568345323741008,0.4787430683918669],[0.11990407673860912,0.4787430683918669]],"notes":"","imageWidth":1277,"imageHeight":1654},{"label":["line"],"shape":"rectangle","points":[[0.10311750599520383,0.4824399260628466],[0.9856115107913669,0.4824399260628466],[0.9856115107913669,0.5249537892791127],[0.10311750599520383,0.5249537892791127]],"notes":"","imageWidth":1277,"imageHeight":1654},{"label":["line"],"shape":"rectangle","points":[[0.1223021582733813,0.5415896487985212],[0.894484412470024,0.5415896487985212],[0.894484412470024,0.5822550831792976],[0.1223021582733813,0.5822550831792976]],"notes":"","imageWidth":1277,"imageHeight":1654},{"label":["line
"],"shape":"rectangle","points":[[0.12709832134292565,0.5933456561922366],[0.6594724220623501,0.5933456561922366],[0.6594724220623501,0.6561922365988909],[0.12709832134292565,0.6561922365988909]],"notes":"","imageWidth":1277,"imageHeight":1654}],"extras":null,"metadata":{"first_done_at":1551642491000,"last_updated_at":1551642491000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/acb4d474-0457-4fbb-b149-ef4d0d568dc4___page_0068.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.11648745519713262,0.26832641770401106],[0.953405017921147,0.26832641770401106],[0.953405017921147,0.30428769017980634],[0.11648745519713262,0.30428769017980634]],"notes":"","imageWidth":1274,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.10215053763440861,0.31950207468879666],[0.9336917562724014,0.31950207468879666],[0.9336917562724014,0.35408022130013833],[0.10215053763440861,0.35408022130013833]],"notes":"","imageWidth":1274,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.11648745519713262,0.3651452282157676],[0.9372759856630825,0.3651452282157676],[0.9372759856630825,0.3997233748271093],[0.11648745519713262,0.3997233748271093]],"notes":"","imageWidth":1274,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.10931899641577061,0.4107883817427386],[0.9121863799283154,0.4107883817427386],[0.9121863799283154,0.45228215767634855],[0.10931899641577061,0.45228215767634855]],"notes":"","imageWidth":1274,"imageHeight":1651},{"label":["line"],"shape":"rectangle","points":[[0.10215053763440861,0.4633471645919779],[0.989247311827957,0.4633471645919779],[0.989247311827957,0.5131396957123098],[0.10215053763440861,0.5131396957123098]],"notes":"","imageWidth":1274,"imageHeight":1651}],"extras":null,"metadata":{"first_done_at":1551642481000,"last_updated_at":1551642481000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/49ba0b43-11fb-4b06-9183-f0dc66946f96___page_0108.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.15214929862493645,0.31794688190620146],[0.9471293839402295,0.29055453515735946],[0.9534689380496019,0.3394694400660059],[0.1597567635561833,0.37175327730571245]],"notes":"","imageWidth":1277,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.05959180862810012,0.3688183830111937],[0.9522010272277274,0.34240433436052464],[0.9522010272277274,0.37175327730571245],[0.05959180862810012,0.4089286050362837]],"notes":"","imageWidth":1277,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.05832389780622564,0.41577669172349424],[0.948397294762104,0.37762306589475003],[0.948397294762104,0.4069720088399379],[0.060859719449974585,0.45099542325771963]],"notes":"","imageWidth":1277,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.05705598698435117,0.45588691374858425],[0.9509331164058529,0.41088520123262956],[0.9522010272277274,0.4402341441778174],[0.050716432874978824,0.48034436620290744]],"notes":"","imageWidth":1277,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.05325225451872776,0.4881707509882909],[0.9179674350371166,0.45099542325771963],[0.9179674350371166,0.48425755859559916],[0.05705598698435117,0.5233894825225163]],"notes":"","imageWidth":1277,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.04944852205310435,0.5312158673078997],[0.9141637025714933,0.49110564528280964],[0.92177116750274,0.5165413958353058],[0.06212763027184905,0.5546950216640499],[0.05198434369685329,0.567412896940298]],"notes":"","imageWidth":1277,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.06212763027184905,0.5615431083512604],[0.991506262705836,0.5184979920316516],[0.9940420843495849,0.5556733197622229],[0.060859719449974585,0.593826945590967]],"notes":"","imageWidth":1277
,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.05198434369685329,0.6036099265726963],[0.9953099951714593,0.5625214064494334],[0.9953099951714593,0.5899137531982753],[0.07353882766871929,0.6261107828306737],[0.050716432874978824,0.6407852543032676]],"notes":"","imageWidth":1277,"imageHeight":1653}],"extras":null,"metadata":{"first_done_at":1551642504000,"last_updated_at":1551642504000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/b93495bd-d87c-48f2-8295-f7a90d211a79___page_0050.jpg","annotation":null,"extras":null,"metadata":{"first_done_at":1551642099000,"last_updated_at":1551642533000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/9df2d063-395e-4c65-8b97-8309a12f95f0___page_0078.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.11704411877612936,0.3437182440748323],[0.890022986526817,0.332448793449428],[0.8973382439503251,0.3690745079819921],[0.12435937619963744,0.3775265959510454]],"notes":"when the data source has a lower-probability value","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.07315257423508086,0.37940483772194605],[0.8717348429680468,0.37940483772194605],[0.8717348429680468,0.42260439845266273],[0.07315257423508086,0.42260439845266273]],"notes":"(i.e., when a low-probability event occurs), the event","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.10363281349969787,0.42636088199446415],[0.9070919205150025,0.42636088199446415],[0.9070919205150025,0.46016923387067715],[0.10363281349969787,0.46016923387067715]],"notes":"carries more \"information\" (\"surprisal\") than wehen the","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.06705652638215745,0.475195168037883],[0.4828069899515336,0.4639257174124786],[0.9314761119266961,0.4592301129852268],[0.9424489980619583,0.4911602230905391],[0.48036857081036427,0.498673190174142],[0.06827573595274213,0.5146382452267981]],"notes":"source data has a higher-probability value. 
The amount of","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.06827573595274213,0.5193338496540499],[0.5961934800159089,0.49961231105959236],[0.9387913693502042,0.5005514319450427],[0.9412297884913736,0.5362380255921565],[0.2974871352226621,0.5559595641866141],[0.0768102029468349,0.5578378059575148]],"notes":"information conveyed by each event defined in this way becomes","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"polygon","points":[[0.08046783165858894,0.5963417622609796],[0.9229416449326033,0.586950553406476],[0.9241608545031881,0.5418727509048586],[0.5510827259042758,0.5522030806448126],[0.0768102029468349,0.5653507730411177]],"notes":"a random variable whose expected value is the information","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.07924862208800425,0.5963417622609796],[0.9314761119266961,0.5963417622609796],[0.9314761119266961,0.6470542900752991],[0.07924862208800425,0.6470542900752991]],"notes":"entropy. 
Generally, entropy refers to disorder or uncertainty,","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.09022150822326638,0.6498716527316503],[0.954641093767805,0.6498716527316503],[0.954641093767805,0.6864973672642143],[0.09022150822326638,0.6864973672642143]],"notes":"and the definition of entropy used in information theory is","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.0890022986526817,0.6902538508060158],[0.8010206878741353,0.6902538508060158],[0.8010206878741353,0.7334534115367324],[0.0890022986526817,0.7334534115367324]],"notes":"directly analogous to the definition used in","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.11460569963495999,0.7362707741930835],[0.5096296005043965,0.7362707741930835],[0.5096296005043965,0.7747747304965483],[0.11460569963495999,0.7747747304965483]],"notes":"statistical thermodynamics.","imageWidth":1274,"imageHeight":1653}],"extras":null,"metadata":{"first_done_at":1551642165000,"last_updated_at":1551642651000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/e3301354-71bf-40a0-a5ff-5f7d98102573___page_0093.jpg","annotation":[],"extras":null,"metadata":{"first_done_at":1551641913000,"last_updated_at":1551641958000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/52197e17-80a2-40c6-8be5-0fb972141774___page_0087.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.11210134568238246,0.3066743032522675],[0.932335839795026,0.3066743032522675],[0.932335839795026,0.33770682203374697],[0.11210134568238246,0.33770682203374697]],"notes":"Mathematicians seek and use patterns to formulate new conjectures; they resolve the truth or","imageWidth":1275,"imageHeight":1656},{"label":["line"],"shape":"rectangle","points":[[0.11368023787509207,0.3480509949609068],[0.8723379364720607,0.3480509949609068],[0.8723379364720607,0.37360718689859573],[0.11368023787509207,0.37360718689859573]],"notes":"falsity of conjectures by mathematical proof. When mathematical structures are good models of","imageWidth":1275,"imageHeight":1656},{"label":["line"],"shape":"rectangle","points":[[0.9670714680346374,0.3815174367840709],[0.11289079177873727,0.3815174367840709],[0.11289079177873727,0.4119414748051292],[0.9670714680346374,0.4119414748051292]],"notes":"real phenomena, then mathematical reasoning can provide insight or predictions about nature.","imageWidth":1275,"imageHeight":1656},{"label":["line"],"shape":"rectangle","points":[[0.8960213193627049,0.41680932088849854],[0.11131189958602766,0.41680932088849854],[0.11131189958602766,0.45575208955545315],[0.8960213193627049,0.45575208955545315]],"notes":"Through the use of abstraction and logic, mathematics developed from counting, calculation,","imageWidth":1275,"imageHeight":1656},{"label":["line"],"shape":"rectangle","points":[[0.8983896576517694,0.4612284163992436],[0.11210134568238246,0.4612284163992436],[0.11210134568238246,0.4916524544203019],[0.8983896576517694,0.4916524544203019]],"notes":"measurement, ans the systematic study of the shapes and motions of physical 
objects.","imageWidth":1275,"imageHeight":1656}],"extras":null,"metadata":{"first_done_at":1551642260000,"last_updated_at":1551642262000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/4587050e-e1c1-47fc-8faf-4508e4471ade___page_0092.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.09429280397022333,0.32122370936902483],[0.9156327543424317,0.30975143403441685],[0.9205955334987593,0.33460803059273425],[0.09429280397022333,0.35372848948374763],[0.09429280397022333,0.37093690248565964]],"notes":"Computer Science is the study of process that interact with data and","imageWidth":1273,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.09181141439205956,0.37093690248565964],[0.9181141439205955,0.35181644359464626],[0.9181141439205955,0.37667304015296366],[0.09925558312655088,0.39579349904397704]],"notes":"that can be represented as data in the form of programs. It enables the","imageWidth":1273,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.09925558312655088,0.4130019120458891],[0.9652605459057072,0.3919694072657744],[0.9627791563275434,0.4187380497131931],[0.10173697270471464,0.4359464627151052]],"notes":"use of algorithms to manipulate. Store, and communicate digital information","imageWidth":1273,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.0967741935483871,0.45315487571701724],[0.9354838709677419,0.4321223709369025],[0.9330024813895782,0.4569789674952199],[0.10421836228287841,0.47992351816443596]],"notes":"A computer scientist studies the theory of computation and the","imageWidth":1273,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.10421836228287841,0.4933078393881453],[0.8833746898263027,0.47418738049713194],[0.8808933002481389,0.49521988527724664],[0.10669975186104218,0.5162523900573613]],"notes":"theoritical and practical disciplines. 
Computational complexity theory","imageWidth":1273,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.10669975186104218,0.5315487571701721],[0.9379652605459057,0.51434034416826],[0.9354838709677419,0.5449330783938815],[0.11166253101736973,0.5583173996175909]],"notes":"theoretical and practical disciplines. Computational complexity theory","imageWidth":1273,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.10421836228287841,0.5621414913957935],[0.9330024813895782,0.5506692160611855],[0.9429280397022333,0.5889101338432122],[0.11662531017369727,0.6061185468451242]],"notes":"is highly abstract, while computer graphics emphasizes real world applications","imageWidth":1273,"imageHeight":1651}],"extras":null,"metadata":{"first_done_at":1551642751000,"last_updated_at":1551642751000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/c63becc8-5ce8-470e-b996-f95a8308f311___page_0079.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.05394486132843181,0.33062331179310883],[0.890090211919125,0.3145644652203007],[0.8925422510704173,0.34101433016374944],[0.05762292005537035,0.356128538702863]],"notes":"Deep learning (also known as deep structured learning or hierarchical learning)","imageWidth":1274,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.07143527205309437,0.3651997725230968],[0.45467585320280335,0.3562743198341413],[0.8398471173809743,0.34809265486926544],[0.8369510928382813,0.37263764976389313],[0.41992355869048714,0.39346370603812264],[0.07240061356732537,0.392719918314043]],"notes":"is a part of a broader family of machine learning methods based on","imageWidth":1274,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.07170017871567709,0.4009404181386089],[0.8995113329784944,0.3783404363228627],[0.9049431646993791,0.41182189086470894],[0.6637698362921016,0.4210292908637167],[0.3606736262667393,0.4260515090449936],[0.12601849592452338,0.4360959454075475],[0.07170017871567709,0.4302366908627244]],"notes":"learning data representations, as opposed to task-specific algorithms. Learning","imageWidth":1274,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.07278654505985402,0.43944409086173214],[0.9386205213688638,0.41182189086470894],[0.9397068877130407,0.44279223631591674],[0.7941337975933327,0.4553477817691091],[0.07278654505985402,0.46622925449520913]],"notes":"can be supervised, semi-supervised or unsupervised. 
Deep learning architectures","imageWidth":1274,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.07387291140403095,0.48296998176613226],[0.5497013701535244,0.46622925449520913],[0.9103749964202636,0.45953296358683987],[0.9027704320110252,0.4804588726754938],[0.5171103798282166,0.49887367267350924],[0.2118414371145005,0.5147773635808862],[0.07495927774820788,0.5147773635808862]],"notes":"such as deep neural networks, deep belief networks and recurrent neural","imageWidth":1274,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.07713201043656172,0.5290069817611709],[0.46496479530772417,0.508081072672517],[0.7300381832868941,0.4997107090370554],[0.9386205213688638,0.4938514544922323],[0.9386205213688638,0.5206366181257093],[0.6040196873623707,0.5382143817601786],[0.3150462398113084,0.5490958544862786],[0.07821837678073865,0.5549551090311017]],"notes":"networks have been applied to fields including computer vision, speech recognition,","imageWidth":1274,"imageHeight":1651},{"label":["line"],"shape":"polygon","points":[[0.08147747581326942,0.567510654484294],[0.6463879747852708,0.545747709032094],[0.6507334401619784,0.577555090846848],[0.3595872599225624,0.5942958181177711],[0.0803911094690925,0.5942958181177711]],"notes":"natural language processing and audio recognition.","imageWidth":1274,"imageHeight":1651}],"extras":null,"metadata":{"first_done_at":1551642333000,"last_updated_at":1551642333000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/60fd45a9-95ba-4a0d-8194-310423ab0aa2___page_0053.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.09855532502582025,0.3529281588444158],[0.5081758946643856,0.3529281588444158],[0.5081758946643856,0.31133729837453517],[0.09855532502582025,0.31133729837453517]],"notes":"","imageWidth":1276,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.512795675524971,0.35411646914355527],[0.49739640598968654,0.35411646914355527],[0.49739640598968654,0.3731294339297864],[0.512795675524971,0.3731294339297864]],"notes":"","imageWidth":1276,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.24484838561102218,0.42066184589536426],[0.0970153980722918,0.42066184589536426],[0.0970153980722918,0.337480124955603],[0.24484838561102218,0.337480124955603]],"notes":"","imageWidth":1276,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.2494681664716075,0.3362918146564635],[0.9655341998623327,0.3362918146564635],[0.9655341998623327,0.41709691499794593],[0.2494681664716075,0.41709691499794593]],"notes":"","imageWidth":1276,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.10779488674699089,0.4218501561945037],[0.9778536154905603,0.4218501561945037],[0.9778536154905603,0.6250512173473491],[0.10779488674699089,0.6250512173473491]],"notes":"","imageWidth":1276,"imageHeight":1652}],"extras":null,"metadata":{"first_done_at":1551642397000,"last_updated_at":1551642397000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/742cee36-5414-4f6c-aa62-82c1d1065c87___page_0084.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.14214461210211474,0.2920787712271451],[0.8656116723183953,0.2920787712271451],[0.8656116723183953,0.32151306600197366],[0.14214461210211474,0.32151306600197366]],"notes":"Information is the resolution of uncertainty; it is that","imageWidth":1273,"imageHeight":1654}],"extras":null,"metadata":{"first_done_at":1551642328000,"last_updated_at":1551642328000,"sec_taken":271,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"INCORRECT"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/75d834a1-3405-46d9-a2a3-0023693a18ee___page_0090.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.11528570550666367,0.5809108384394237],[0.932487918876908,0.5809108384394237],[0.932487918876908,0.6296476070229748],[0.11528570550666367,0.6296476070229748]],"notes":"while negative values mean inhibitory connections.","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.09692161082418628,0.5313879929432347],[0.8794360897941954,0.5313879929432347],[0.8794360897941954,0.57855260770151],[0.09692161082418628,0.57855260770151]],"notes":"A positive weight reflects an excitatory connection,","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.15507457731869803,0.4952284549618903],[0.2938255149196384,0.4952284549618903],[0.2938255149196384,0.5306019160305967],[0.15507457731869803,0.5306019160305967]],"notes":"weights.","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.09488115585946656,0.45278030167944255],[0.939629511253427,0.45278030167944255],[0.939629511253427,0.4952284549618903],[0.09488115585946656,0.4952284549618903]],"notes":"the connection of the biological neuron are modeled as","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.12140707040082281,0.4024713792706156],[0.5386801106860037,0.4024713792706156],[0.5386801106860037,0.4433473787277875],[0.12140707040082281,0.4433473787277875]],"notes":"intelligence (AI) problems.","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.11834638795374323,0.3655257643766333],[0.9212654165709495,0.3655257643766333],[0.9212654165709495,0.40718784074644315],[0.11834638795374323,0.40718784074644315]],"notes":"an artificial neural network, for solving 
artificial","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.12344752536554252,0.328580149482651],[0.903921549370832,0.328580149482651],[0.903921549370832,0.37024222585246086],[0.12344752536554252,0.37024222585246086]],"notes":"network, made up of real biological neurons, or","imageWidth":1274,"imageHeight":1653},{"label":["line"],"shape":"rectangle","points":[[0.07141592376518989,0.289276303850755],[0.9080024593002715,0.289276303850755],[0.9080024593002715,0.34037130317221986],[0.07141592376518989,0.34037130317221986]],"notes":"Thus a neural network is either a biological neural","imageWidth":1274,"imageHeight":1653}],"extras":null,"metadata":{"first_done_at":1551642379000,"last_updated_at":1551642488000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/e92f801d-0e36-4516-96ab-cf06f0ebcc0c___page_0091.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.16170212765957448,0.3567921440261866],[0.4808510638297872,0.3436988543371522],[0.7148936170212766,0.33878887070376434],[0.9680851063829787,0.3404255319148936],[0.9744680851063829,0.3027823240589198],[0.1702127659574468,0.3158756137479542],[0.15106382978723404,0.3142389525368249]],"notes":"","imageWidth":1274,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.16808510638297872,0.39279869067103107],[0.16808510638297872,0.36006546644844517],[0.7,0.34206219312602293],[0.9148936170212766,0.3436988543371522],[0.9148936170212766,0.36824877250409166]],"notes":"","imageWidth":1274,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.16595744680851063,0.3960720130932897],[0.9744680851063829,0.36824877250409166],[0.9723404255319149,0.40589198036006546],[0.16170212765957448,0.4320785597381342],[0.15531914893617021,0.4369885433715221],[0.16382978723404254,0.4386252045826514],[0.16382978723404254,0.4386252045826514]],"notes":"","imageWidth":1274,"imageHeight":1654}],"extras":null,"metadata":{"first_done_at":1551597003000,"last_updated_at":1551597003000,"sec_taken":83,"last_updated_by":"69FI7aSdl6aSMhn3Anp3BRvA8gg2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/a0153bd0-62f9-480c-92a7-e9db47b4053d___page_0085.jpg","annotation":[{"label":["line"],"shape":"rectangle","points":[[0.9046920211971423,0.46804044558785357],[0.9976298853329644,0.46804044558785357],[0.9976298853329644,0.5272436523089902],[0.9046920211971423,0.5272436523089902]],"notes":"","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.9295454545454546,0.5052631578947369],[0.9431818181818182,0.5052631578947369],[0.9431818181818182,0.49298245614035085],[0.9295454545454546,0.49298245614035085]],"notes":"","imageWidth":1274,"imageHeight":1652},{"label":["line"],"shape":"rectangle","points":[[0.9659090909090909,0.5157894736842106],[0.9772727272727273,0.5157894736842106],[0.9772727272727273,0.5],[0.9659090909090909,0.5]],"notes":"","imageWidth":1274,"imageHeight":1652}],"extras":null,"metadata":{"first_done_at":1551642170000,"last_updated_at":1551642483000,"sec_taken":142,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/f1614407-b739-4cf7-8495-9df7e67da4c4___page_0052.jpg","annotation":[{"label":["line"],"shape":"polygon","points":[[0.07493391582194713,0.3422201463385958],[0.8308297916758387,0.31550675938811473],[0.8270830958847413,0.2887933724376336],[0.06931387213530109,0.3111748588015502]],"notes":"There are many solutions around is necessary to","imageWidth":1276,"imageHeight":1654},{"label":["line"],"shape":"polygon","points":[[0.08149063345636749,0.3443860966318781],[0.87860016301233,0.31550675938811473],[0.8832835327512016,0.34583006349406625],[0.0889840250385622,0.36460163270251245]],"notes":"explore. using different techniques in order to get better solutions","imageWidth":1276,"imageHeight":1654}],"extras":null,"metadata":{"first_done_at":1551642334000,"last_updated_at":1551642478000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}
{"content": "http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/633887b3-356f-44b3-b602-abb6de17bc99___page_0046.jpg","annotation":null,"extras":null,"metadata":{"first_done_at":1551642488000,"last_updated_at":1551642488000,"sec_taken":0,"last_updated_by":"qYxNqy3ztcMtWhTynXyHGxbAArx2","status":"done","evaluation":"NONE"}}


================================================
FILE: data/raw/fsdl_handwriting/manifest.csv
================================================
form
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-001.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-002.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-003.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-004.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-005.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-006.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-007.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-008.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-009.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-010.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-011.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-012.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-013.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-014.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-015.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-016.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-017.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-018.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-019.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-020.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-021.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-022.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-023.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-024.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-025.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-026.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-027.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-028.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-029.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-030.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-031.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-032.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-033.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-034.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-035.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-036.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-037.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-038.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-039.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-040.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-041.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-042.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-043.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-044.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-045.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-046.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-047.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-048.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-049.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-050.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-051.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-052.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-053.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-054.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-055.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-056.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-057.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-058.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-059.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-060.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-061.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-062.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-063.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-064.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-065.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-066.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-067.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-068.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-069.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-070.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-071.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-072.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-073.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-074.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-075.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-076.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-077.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-078.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-079.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-080.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-081.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-082.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-083.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-084.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-085.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-086.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-087.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-088.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-089.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-090.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-091.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-092.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-093.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-094.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-095.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-096.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-097.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-098.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-099.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-100.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-101.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-102.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-103.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-104.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-105.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-106.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-107.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-108.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-109.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-110.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-111.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-112.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-113.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-114.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-115.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-116.jpg
https://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-117.jpg


================================================
FILE: data/raw/fsdl_handwriting/metadata.toml
================================================
url = "https://dataturks.com/projects/sergeykarayev/fsdl_handwriting/export"
filename = "fsdl_handwriting.json"
sha256 = "720d6c72b4317a9a5492630a1c9f6d83a20d36101a29311a5cf7825c1d60c180"


================================================
FILE: data/raw/fsdl_handwriting/readme.md
================================================
# FSDL Handwriting Dataset

## Collection

Handwritten paragraphs were collected in the FSDL March 2019 class.
The resulting PDF was stored at https://fsdl-public-assets.s3-us-west-2.amazonaws.com/fsdl_handwriting_20190302.pdf

Pages were extracted from the PDF by running `gs -q -dBATCH -dNOPAUSE -sDEVICE=jpeg -r300 -sOutputFile=page-%03d.jpg -f fsdl_handwriting_20190302.pdf` and uploaded to S3, with URLs like https://fsdl-public-assets.s3-us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-001.jpg
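The annotation export in `fsdl_handwriting.jsonl` stores one DataTurks-style JSON record per line, with line boundaries given as points normalized to `[0, 1]` alongside the pixel dimensions of each page. A minimal sketch of turning those records into pixel-space bounding boxes (the function name is illustrative, not the repo's actual loader):

```python
import json


def load_line_boxes(jsonl_path: str) -> list:
    """Parse DataTurks-style records (one JSON object per line) into pixel boxes."""
    boxes = []
    with open(jsonl_path) as f:
        for line in f:
            record = json.loads(line)
            # records with no annotations have "annotation": null
            for ann in record.get("annotation") or []:
                width, height = ann["imageWidth"], ann["imageHeight"]
                xs = [x * width for x, _ in ann["points"]]
                ys = [y * height for _, y in ann["points"]]
                boxes.append((record["content"], min(xs), min(ys), max(xs), max(ys)))
    return boxes
```

Both `rectangle` and `polygon` shapes reduce to an axis-aligned box this way, since we take the min/max over all points.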


================================================
FILE: data/raw/iam/metadata.toml
================================================
url = 'https://s3-us-west-2.amazonaws.com/fsdl-public-assets/iam/iamdb.zip'
filename = 'iamdb.zip'
sha256 = 'f3c9e87a88a313e557c6d3548ed8a2a1af2dc3c4a678c5f3fc6f972ba4a50c55'
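The `url`/`filename`/`sha256` fields above are exactly what a download-and-verify step needs. A minimal sketch of such a helper (the function name and flow are illustrative, not the repo's actual downloader):

```python
import hashlib
from pathlib import Path
from urllib.request import urlretrieve


def download_and_verify(url: str, filename: str, sha256: str) -> Path:
    """Fetch `url` into `filename` (skipping if already present) and check its SHA-256."""
    destination = Path(filename)
    if not destination.exists():
        urlretrieve(url, destination)
    digest = hashlib.sha256(destination.read_bytes()).hexdigest()
    if digest != sha256:
        raise ValueError(f"checksum mismatch for {destination}: got {digest}")
    return destination
```

Verifying the digest catches both corrupted downloads and silent upstream changes to the archive.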


================================================
FILE: data/raw/iam/readme.md
================================================
# IAM Dataset

The IAM Handwriting Database contains forms of handwritten English text which can be used to train and test handwritten text recognizers and to perform writer identification and verification experiments.

- 657 writers contributed samples of their handwriting
- 1,539 pages of scanned text
- 13,353 isolated and labeled text lines

- http://www.fki.inf.unibe.ch/databases/iam-handwriting-database

## Pre-processing

First, all forms were placed into one directory called `forms`, from original directories like `formsA-D`.

To save space, I converted the original PNG files to JPG and resized them to half size:
```
mkdir forms-resized
cd forms
ls -1 *.png | parallel --eta -j 6 convert '{}' -adaptive-resize 50% '../forms-resized/{.}.jpg'
```
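The same conversion can be sketched in Python with Pillow, which the labs already use for image handling; the function name and JPEG quality setting here are assumptions, not the repo's actual preprocessing code:

```python
from pathlib import Path

from PIL import Image


def downsize_forms(src_dir: str, dst_dir: str) -> None:
    """Convert each PNG form to a half-size JPG, mirroring the `convert` command above."""
    out = Path(dst_dir)
    out.mkdir(exist_ok=True)
    for png in sorted(Path(src_dir).glob("*.png")):
        with Image.open(png) as im:
            # integer halving approximates ImageMagick's -adaptive-resize 50%
            half = im.resize((im.width // 2, im.height // 2))
            half.convert("RGB").save(out / f"{png.stem}.jpg", quality=90)
```

Note that `-adaptive-resize` uses a different interpolation filter than Pillow's default, so the outputs are visually similar but not byte-identical.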

## Split

The data split we will use is loosely based on the IAM Lines Large Writer Independent Text Line Recognition Task (`lwitlrt`), which provides 4 data splits:
 - Train: 6,161 text lines from 747 pages written by 283 writers
 - Validation 1: 900 text lines from 105 pages written by 46 writers
 - Validation 2: 940 text lines from 115 pages written by 43 writers
 - Test: 1,861 text lines from 232 pages written by 128 writers

In total: 9,862 text lines from 1,199 pages written by 500 writers.
The writer sets of the four splits are mutually exclusive, so each writer contributed to one split only.

The data splits contain far fewer text lines (9,862) than the full dataset (13,353). This is because:
 - pages of 157 writers (`657 - 500`) are not included in the data splits
 - 511 text lines are dropped from the 1,199 pages that are included in the data splits

To avoid losing the dropped data, we slightly modify the data splits. We:
 - use all text lines on a page and never drop text lines
 - merge Validation 1 and Validation 2 into a single Validation split
 - add the pages of the 157 missing writers to the train split

Our final data splits are:
 - Train: 9,462 text lines from 1,087 pages written by 440 writers
 - Validation: 1,926 text lines from 220 pages written by 89 writers
 - Test: 1,965 text lines from 232 pages written by 128 writers

In total: 13,353 text lines from 1,539 pages written by 657 writers.
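The remapping above can be sketched as a small set operation over page IDs; the split names (`val1`, `val2`) and the use of page-ID sets here are illustrative assumptions, not the repo's actual split code:

```python
# Merge the two lwitlrt validation splits and route every page that the
# lwitlrt task left unassigned (the 157 missing writers) into train.
def remap_splits(lwitlrt: dict, all_pages: set) -> dict:
    """Return {"train", "val", "test"} sets of page IDs covering all_pages."""
    splits = {
        "train": set(lwitlrt["train"]),
        "val": set(lwitlrt["val1"]) | set(lwitlrt["val2"]),
        "test": set(lwitlrt["test"]),
    }
    assigned = splits["train"] | splits["val"] | splits["test"]
    splits["train"] |= all_pages - assigned  # pages of the missing writers
    return splits
```

Because the union of the final splits equals `all_pages`, no page (and hence no text line, once we keep all lines on a page) is dropped.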


================================================
FILE: environment.yml
================================================
name: fsdl-text-recognizer-2022
channels:
  - pytorch
  - nvidia
  - defaults
dependencies:
  - python=3.10  # versioned to match Google Colab  # version also pinned in Dockerfile
  - pytorch=2.1.1  # versioned to match Google Colab
  - pytorch-cuda=12.1  # versioned to match Google Colab
  - pip=23.1.2  # versioned to match Google Colab  # version also pinned in Dockerfile


================================================
FILE: lab01/notebooks/lab01_pytorch.ipynb
================================================
{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "FlH0lCOttCs5"
      },
      "source": [
        "<img src=\"https://fsdl.me/logo-720-dark-horizontal\">"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZUPRHaeetRnT"
      },
      "source": [
        "# Lab 01: Deep Neural Networks in PyTorch"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "bry3Hr-PcgDs"
      },
      "source": [
        "### What You Will Learn\n",
        "\n",
        "- How to write a basic neural network from scratch in PyTorch\n",
        "- How the submodules of `torch`, like `torch.nn` and `torch.utils.data`, make writing performant neural network training and inference code easier"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6c7bFQ20LbLB"
      },
      "source": [
        "At its core, PyTorch is a library for\n",
        "- doing math on arrays\n",
        "- with automatic calculation of gradients\n",
        "- that is easy to accelerate with GPUs and distribute over nodes.\n",
        "\n",
        "Much of the time,\n",
        "we work at a remove from the core features of PyTorch,\n",
        "using abstractions from `torch.nn`\n",
        "or from frameworks on top of PyTorch.\n",
        "\n",
        "This tutorial builds those abstractions up\n",
        "from core PyTorch,\n",
        "showing how to go from basic iterated\n",
        "gradient computation and application\n",
        "to a solid training and validation loop.\n",
        "It is adapted from the PyTorch tutorial\n",
        "[What is `torch.nn` really?](https://pytorch.org/tutorials/beginner/nn_tutorial.html).\n",
        "\n",
        "We assume familiarity with the fundamentals of ML and DNNs here,\n",
        "like gradient-based optimization and statistical learning.\n",
        "For refreshing on those, we recommend\n",
        "[3Blue1Brown's videos](https://www.youtube.com/watch?v=aircAruvnKk&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi&ab_channel=3Blue1Brown)\n",
        "or\n",
        "[the NYU course on deep learning by LeCun and Canziani](https://cds.nyu.edu/deep-learning/)."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vs0LXXlCU6Ix"
      },
      "source": [
        "# Setup"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZkQiK7lkgeXm"
      },
      "source": [
        "If you're running this notebook on Google Colab,\n",
        "the cell below will run full environment setup.\n",
        "\n",
        "It should take about three minutes to run."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "sVx7C7H0PIZC"
      },
      "outputs": [],
      "source": [
        "lab_idx = 1\n",
        "\n",
        "if \"bootstrap\" not in locals() or bootstrap.run:\n",
        "    # path management for Python\n",
        "    pythonpath, = !echo $PYTHONPATH\n",
        "    if \".\" not in pythonpath.split(\":\"):\n",
        "        pythonpath = \".:\" + pythonpath\n",
        "        %env PYTHONPATH={pythonpath}\n",
        "        !echo $PYTHONPATH\n",
        "\n",
        "    # get both Colab and local notebooks into the same state\n",
        "    !wget --quiet https://fsdl.me/gist-bootstrap -O bootstrap.py\n",
        "    import bootstrap\n",
        "\n",
        "    # change into the lab directory\n",
        "    bootstrap.change_to_lab_dir(lab_idx=lab_idx)\n",
        "\n",
        "    # allow \"hot-reloading\" of modules\n",
        "    %load_ext autoreload\n",
        "    %autoreload 2\n",
        "    # needed for inline plots in some contexts\n",
        "    %matplotlib inline\n",
        "\n",
        "    bootstrap.run = False  # change to True to re-run setup\n",
        "    \n",
        "!pwd\n",
        "%ls"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6wJ8r7BTPB-t"
      },
      "source": [
        "# Getting data and making `Tensor`s"
      ]
    },
    {
      "cell_type": "markdown",
      
Download .txt
gitextract_8c13vqgo/

├── .flake8
├── .github/
│   ├── ISSUE_TEMPLATE/
│   │   └── this-repository-is-automatically-generated--don-t-open-issues-here-.md
│   └── pull_request_template.md
├── .gitignore
├── .pre-commit-config.yaml
├── LICENSE.txt
├── Makefile
├── data/
│   └── raw/
│       ├── emnist/
│       │   ├── metadata.toml
│       │   └── readme.md
│       ├── fsdl_handwriting/
│       │   ├── fsdl_handwriting.jsonl
│       │   ├── manifest.csv
│       │   ├── metadata.toml
│       │   └── readme.md
│       └── iam/
│           ├── metadata.toml
│           └── readme.md
├── environment.yml
├── lab01/
│   ├── notebooks/
│   │   └── lab01_pytorch.ipynb
│   └── text_recognizer/
│       ├── __init__.py
│       ├── data/
│       │   └── util.py
│       ├── metadata/
│       │   ├── mnist.py
│       │   └── shared.py
│       ├── models/
│       │   ├── __init__.py
│       │   └── mlp.py
│       └── util.py
├── lab02/
│   ├── notebooks/
│   │   ├── lab01_pytorch.ipynb
│   │   ├── lab02a_lightning.ipynb
│   │   └── lab02b_cnn.ipynb
│   ├── text_recognizer/
│   │   ├── __init__.py
│   │   ├── data/
│   │   │   ├── __init__.py
│   │   │   ├── base_data_module.py
│   │   │   ├── emnist.py
│   │   │   ├── emnist_essentials.json
│   │   │   ├── emnist_lines.py
│   │   │   ├── mnist.py
│   │   │   ├── sentence_generator.py
│   │   │   └── util.py
│   │   ├── lit_models/
│   │   │   ├── __init__.py
│   │   │   └── base.py
│   │   ├── metadata/
│   │   │   ├── emnist.py
│   │   │   ├── emnist_lines.py
│   │   │   ├── mnist.py
│   │   │   └── shared.py
│   │   ├── models/
│   │   │   ├── __init__.py
│   │   │   ├── cnn.py
│   │   │   ├── line_cnn_simple.py
│   │   │   └── mlp.py
│   │   ├── stems/
│   │   │   └── image.py
│   │   └── util.py
│   └── training/
│       ├── __init__.py
│       ├── run_experiment.py
│       └── util.py
├── lab03/
│   ├── notebooks/
│   │   ├── lab01_pytorch.ipynb
│   │   ├── lab02a_lightning.ipynb
│   │   ├── lab02b_cnn.ipynb
│   │   └── lab03_transformers.ipynb
│   ├── text_recognizer/
│   │   ├── __init__.py
│   │   ├── data/
│   │   │   ├── __init__.py
│   │   │   ├── base_data_module.py
│   │   │   ├── emnist.py
│   │   │   ├── emnist_essentials.json
│   │   │   ├── emnist_lines.py
│   │   │   ├── iam.py
│   │   │   ├── iam_paragraphs.py
│   │   │   ├── mnist.py
│   │   │   ├── sentence_generator.py
│   │   │   └── util.py
│   │   ├── lit_models/
│   │   │   ├── __init__.py
│   │   │   ├── base.py
│   │   │   ├── metrics.py
│   │   │   ├── transformer.py
│   │   │   └── util.py
│   │   ├── metadata/
│   │   │   ├── emnist.py
│   │   │   ├── emnist_lines.py
│   │   │   ├── iam.py
│   │   │   ├── iam_paragraphs.py
│   │   │   ├── mnist.py
│   │   │   └── shared.py
│   │   ├── models/
│   │   │   ├── __init__.py
│   │   │   ├── cnn.py
│   │   │   ├── line_cnn_simple.py
│   │   │   ├── mlp.py
│   │   │   ├── resnet_transformer.py
│   │   │   └── transformer_util.py
│   │   ├── stems/
│   │   │   ├── image.py
│   │   │   └── paragraph.py
│   │   └── util.py
│   └── training/
│       ├── __init__.py
│       ├── run_experiment.py
│       └── util.py
├── lab04/
│   ├── notebooks/
│   │   ├── lab01_pytorch.ipynb
│   │   ├── lab02a_lightning.ipynb
│   │   ├── lab02b_cnn.ipynb
│   │   ├── lab03_transformers.ipynb
│   │   └── lab04_experiments.ipynb
│   ├── text_recognizer/
│   │   ├── __init__.py
│   │   ├── callbacks/
│   │   │   ├── __init__.py
│   │   │   ├── imtotext.py
│   │   │   ├── model.py
│   │   │   ├── optim.py
│   │   │   └── util.py
│   │   ├── data/
│   │   │   ├── __init__.py
│   │   │   ├── base_data_module.py
│   │   │   ├── emnist.py
│   │   │   ├── emnist_essentials.json
│   │   │   ├── emnist_lines.py
│   │   │   ├── iam.py
│   │   │   ├── iam_lines.py
│   │   │   ├── iam_paragraphs.py
│   │   │   ├── mnist.py
│   │   │   ├── sentence_generator.py
│   │   │   └── util.py
│   │   ├── lit_models/
│   │   │   ├── __init__.py
│   │   │   ├── base.py
│   │   │   ├── metrics.py
│   │   │   ├── transformer.py
│   │   │   └── util.py
│   │   ├── metadata/
│   │   │   ├── emnist.py
│   │   │   ├── emnist_lines.py
│   │   │   ├── iam.py
│   │   │   ├── iam_lines.py
│   │   │   ├── iam_paragraphs.py
│   │   │   ├── mnist.py
│   │   │   └── shared.py
│   │   ├── models/
│   │   │   ├── __init__.py
│   │   │   ├── cnn.py
│   │   │   ├── line_cnn.py
│   │   │   ├── line_cnn_simple.py
│   │   │   ├── line_cnn_transformer.py
│   │   │   ├── mlp.py
│   │   │   ├── resnet_transformer.py
│   │   │   └── transformer_util.py
│   │   ├── stems/
│   │   │   ├── image.py
│   │   │   ├── line.py
│   │   │   └── paragraph.py
│   │   └── util.py
│   └── training/
│       ├── __init__.py
│       ├── run_experiment.py
│       └── util.py
├── lab05/
│   ├── .flake8
│   ├── .github/
│   │   └── workflows/
│   │       └── pre-commit.yml
│   ├── .pre-commit-config.yaml
│   ├── notebooks/
│   │   ├── lab01_pytorch.ipynb
│   │   ├── lab02a_lightning.ipynb
│   │   ├── lab02b_cnn.ipynb
│   │   ├── lab03_transformers.ipynb
│   │   ├── lab04_experiments.ipynb
│   │   └── lab05_troubleshooting.ipynb
│   ├── tasks/
│   │   └── lint.sh
│   ├── text_recognizer/
│   │   ├── __init__.py
│   │   ├── callbacks/
│   │   │   ├── __init__.py
│   │   │   ├── imtotext.py
│   │   │   ├── model.py
│   │   │   ├── optim.py
│   │   │   └── util.py
│   │   ├── data/
│   │   │   ├── __init__.py
│   │   │   ├── base_data_module.py
│   │   │   ├── emnist.py
│   │   │   ├── emnist_essentials.json
│   │   │   ├── emnist_lines.py
│   │   │   ├── fake_images.py
│   │   │   ├── iam.py
│   │   │   ├── iam_lines.py
│   │   │   ├── iam_paragraphs.py
│   │   │   ├── mnist.py
│   │   │   ├── sentence_generator.py
│   │   │   └── util.py
│   │   ├── lit_models/
│   │   │   ├── __init__.py
│   │   │   ├── base.py
│   │   │   ├── metrics.py
│   │   │   ├── transformer.py
│   │   │   └── util.py
│   │   ├── metadata/
│   │   │   ├── emnist.py
│   │   │   ├── emnist_lines.py
│   │   │   ├── iam.py
│   │   │   ├── iam_lines.py
│   │   │   ├── iam_paragraphs.py
│   │   │   ├── mnist.py
│   │   │   └── shared.py
│   │   ├── models/
│   │   │   ├── __init__.py
│   │   │   ├── cnn.py
│   │   │   ├── line_cnn.py
│   │   │   ├── line_cnn_simple.py
│   │   │   ├── line_cnn_transformer.py
│   │   │   ├── mlp.py
│   │   │   ├── resnet_transformer.py
│   │   │   └── transformer_util.py
│   │   ├── stems/
│   │   │   ├── image.py
│   │   │   ├── line.py
│   │   │   └── paragraph.py
│   │   ├── tests/
│   │   │   ├── test_callback_utils.py
│   │   │   └── test_iam.py
│   │   └── util.py
│   └── training/
│       ├── __init__.py
│       ├── run_experiment.py
│       ├── tests/
│       │   ├── test_memorize_iam.sh
│       │   └── test_run_experiment.sh
│       └── util.py
├── lab06/
│   ├── .flake8
│   ├── .github/
│   │   └── workflows/
│   │       └── pre-commit.yml
│   ├── .pre-commit-config.yaml
│   ├── notebooks/
│   │   ├── lab01_pytorch.ipynb
│   │   ├── lab02a_lightning.ipynb
│   │   ├── lab02b_cnn.ipynb
│   │   ├── lab03_transformers.ipynb
│   │   ├── lab04_experiments.ipynb
│   │   ├── lab05_troubleshooting.ipynb
│   │   └── lab06_data.ipynb
│   ├── tasks/
│   │   └── lint.sh
│   ├── text_recognizer/
│   │   ├── __init__.py
│   │   ├── callbacks/
│   │   │   ├── __init__.py
│   │   │   ├── imtotext.py
│   │   │   ├── model.py
│   │   │   ├── optim.py
│   │   │   └── util.py
│   │   ├── data/
│   │   │   ├── __init__.py
│   │   │   ├── base_data_module.py
│   │   │   ├── emnist.py
│   │   │   ├── emnist_essentials.json
│   │   │   ├── emnist_lines.py
│   │   │   ├── fake_images.py
│   │   │   ├── iam.py
│   │   │   ├── iam_lines.py
│   │   │   ├── iam_original_and_synthetic_paragraphs.py
│   │   │   ├── iam_paragraphs.py
│   │   │   ├── iam_synthetic_paragraphs.py
│   │   │   ├── mnist.py
│   │   │   ├── sentence_generator.py
│   │   │   └── util.py
│   │   ├── lit_models/
│   │   │   ├── __init__.py
│   │   │   ├── base.py
│   │   │   ├── metrics.py
│   │   │   ├── transformer.py
│   │   │   └── util.py
│   │   ├── metadata/
│   │   │   ├── emnist.py
│   │   │   ├── emnist_lines.py
│   │   │   ├── iam.py
│   │   │   ├── iam_lines.py
│   │   │   ├── iam_paragraphs.py
│   │   │   ├── iam_synthetic_paragraphs.py
│   │   │   ├── mnist.py
│   │   │   └── shared.py
│   │   ├── models/
│   │   │   ├── __init__.py
│   │   │   ├── cnn.py
│   │   │   ├── line_cnn.py
│   │   │   ├── line_cnn_simple.py
│   │   │   ├── line_cnn_transformer.py
│   │   │   ├── mlp.py
│   │   │   ├── resnet_transformer.py
│   │   │   └── transformer_util.py
│   │   ├── stems/
│   │   │   ├── image.py
│   │   │   ├── line.py
│   │   │   └── paragraph.py
│   │   ├── tests/
│   │   │   ├── test_callback_utils.py
│   │   │   └── test_iam.py
│   │   └── util.py
│   └── training/
│       ├── __init__.py
│       ├── run_experiment.py
│       ├── tests/
│       │   ├── test_memorize_iam.sh
│       │   └── test_run_experiment.sh
│       └── util.py
├── lab07/
│   ├── .flake8
│   ├── .github/
│   │   └── workflows/
│   │       └── pre-commit.yml
│   ├── .pre-commit-config.yaml
│   ├── api_serverless/
│   │   ├── Dockerfile
│   │   ├── __init__.py
│   │   └── api.py
│   ├── app_gradio/
│   │   ├── Dockerfile
│   │   ├── README.md
│   │   ├── __init__.py
│   │   ├── app.py
│   │   └── tests/
│   │       └── test_app.py
│   ├── notebooks/
│   │   ├── lab01_pytorch.ipynb
│   │   ├── lab02a_lightning.ipynb
│   │   ├── lab02b_cnn.ipynb
│   │   ├── lab03_transformers.ipynb
│   │   ├── lab04_experiments.ipynb
│   │   ├── lab05_troubleshooting.ipynb
│   │   ├── lab06_data.ipynb
│   │   └── lab07_deployment.ipynb
│   ├── tasks/
│   │   └── lint.sh
│   ├── text_recognizer/
│   │   ├── __init__.py
│   │   ├── callbacks/
│   │   │   ├── __init__.py
│   │   │   ├── imtotext.py
│   │   │   ├── model.py
│   │   │   ├── optim.py
│   │   │   └── util.py
│   │   ├── data/
│   │   │   ├── __init__.py
│   │   │   ├── base_data_module.py
│   │   │   ├── emnist.py
│   │   │   ├── emnist_essentials.json
│   │   │   ├── emnist_lines.py
│   │   │   ├── fake_images.py
│   │   │   ├── iam.py
│   │   │   ├── iam_lines.py
│   │   │   ├── iam_original_and_synthetic_paragraphs.py
│   │   │   ├── iam_paragraphs.py
│   │   │   ├── iam_synthetic_paragraphs.py
│   │   │   ├── mnist.py
│   │   │   ├── sentence_generator.py
│   │   │   └── util.py
│   │   ├── lit_models/
│   │   │   ├── __init__.py
│   │   │   ├── base.py
│   │   │   ├── metrics.py
│   │   │   ├── transformer.py
│   │   │   └── util.py
│   │   ├── metadata/
│   │   │   ├── emnist.py
│   │   │   ├── emnist_lines.py
│   │   │   ├── iam.py
│   │   │   ├── iam_lines.py
│   │   │   ├── iam_paragraphs.py
│   │   │   ├── iam_synthetic_paragraphs.py
│   │   │   ├── mnist.py
│   │   │   └── shared.py
│   │   ├── models/
│   │   │   ├── __init__.py
│   │   │   ├── cnn.py
│   │   │   ├── line_cnn.py
│   │   │   ├── line_cnn_simple.py
│   │   │   ├── line_cnn_transformer.py
│   │   │   ├── mlp.py
│   │   │   ├── resnet_transformer.py
│   │   │   └── transformer_util.py
│   │   ├── paragraph_text_recognizer.py
│   │   ├── stems/
│   │   │   ├── image.py
│   │   │   ├── line.py
│   │   │   └── paragraph.py
│   │   ├── tests/
│   │   │   ├── test_callback_utils.py
│   │   │   └── test_iam.py
│   │   └── util.py
│   └── training/
│       ├── __init__.py
│       ├── cleanup_artifacts.py
│       ├── run_experiment.py
│       ├── stage_model.py
│       ├── tests/
│       │   ├── test_memorize_iam.sh
│       │   ├── test_model_development.sh
│       │   └── test_run_experiment.sh
│       └── util.py
├── lab08/
│   ├── .flake8
│   ├── .github/
│   │   └── workflows/
│   │       └── pre-commit.yml
│   ├── .pre-commit-config.yaml
│   ├── api_serverless/
│   │   ├── Dockerfile
│   │   ├── __init__.py
│   │   └── api.py
│   ├── app_gradio/
│   │   ├── Dockerfile
│   │   ├── README.md
│   │   ├── __init__.py
│   │   ├── app.py
│   │   ├── flagging.py
│   │   ├── s3_util.py
│   │   └── tests/
│   │       └── test_app.py
│   ├── notebooks/
│   │   ├── lab01_pytorch.ipynb
│   │   ├── lab02a_lightning.ipynb
│   │   ├── lab02b_cnn.ipynb
│   │   ├── lab03_transformers.ipynb
│   │   ├── lab04_experiments.ipynb
│   │   ├── lab05_troubleshooting.ipynb
│   │   ├── lab06_data.ipynb
│   │   ├── lab07_deployment.ipynb
│   │   └── lab08_monitoring.ipynb
│   ├── tasks/
│   │   └── lint.sh
│   ├── text_recognizer/
│   │   ├── __init__.py
│   │   ├── callbacks/
│   │   │   ├── __init__.py
│   │   │   ├── imtotext.py
│   │   │   ├── model.py
│   │   │   ├── optim.py
│   │   │   └── util.py
│   │   ├── data/
│   │   │   ├── __init__.py
│   │   │   ├── base_data_module.py
│   │   │   ├── emnist.py
│   │   │   ├── emnist_essentials.json
│   │   │   ├── emnist_lines.py
│   │   │   ├── fake_images.py
│   │   │   ├── iam.py
│   │   │   ├── iam_lines.py
│   │   │   ├── iam_original_and_synthetic_paragraphs.py
│   │   │   ├── iam_paragraphs.py
│   │   │   ├── iam_synthetic_paragraphs.py
│   │   │   ├── mnist.py
│   │   │   ├── sentence_generator.py
│   │   │   └── util.py
│   │   ├── lit_models/
│   │   │   ├── __init__.py
│   │   │   ├── base.py
│   │   │   ├── metrics.py
│   │   │   ├── transformer.py
│   │   │   └── util.py
│   │   ├── metadata/
│   │   │   ├── emnist.py
│   │   │   ├── emnist_lines.py
│   │   │   ├── iam.py
│   │   │   ├── iam_lines.py
│   │   │   ├── iam_paragraphs.py
│   │   │   ├── iam_synthetic_paragraphs.py
│   │   │   ├── mnist.py
│   │   │   └── shared.py
│   │   ├── models/
│   │   │   ├── __init__.py
│   │   │   ├── cnn.py
│   │   │   ├── line_cnn.py
│   │   │   ├── line_cnn_simple.py
│   │   │   ├── line_cnn_transformer.py
│   │   │   ├── mlp.py
│   │   │   ├── resnet_transformer.py
│   │   │   └── transformer_util.py
│   │   ├── paragraph_text_recognizer.py
│   │   ├── stems/
│   │   │   ├── image.py
│   │   │   ├── line.py
│   │   │   └── paragraph.py
│   │   ├── tests/
│   │   │   ├── test_callback_utils.py
│   │   │   └── test_iam.py
│   │   └── util.py
│   └── training/
│       ├── __init__.py
│       ├── cleanup_artifacts.py
│       ├── run_experiment.py
│       ├── stage_model.py
│       ├── tests/
│       │   ├── test_memorize_iam.sh
│       │   ├── test_model_development.sh
│       │   └── test_run_experiment.sh
│       └── util.py
├── overview.ipynb
├── pyproject.toml
├── readme.md
├── requirements/
│   ├── dev-lint.in
│   ├── dev.in
│   ├── dev.txt
│   ├── prod.in
│   └── prod.txt
└── setup/
    └── readme.md
SYMBOL INDEX (1635 symbols across 221 files)

FILE: lab01/text_recognizer/data/util.py
  class BaseDataset (line 11) | class BaseDataset(torch.utils.data.Dataset):
    method __init__ (line 28) | def __init__(
    method __len__ (line 43) | def __len__(self) -> int:
    method __getitem__ (line 47) | def __getitem__(self, index: int) -> Tuple[Any, Any]:
  function convert_strings_to_labels (line 70) | def convert_strings_to_labels(strings: Sequence[str], mapping: Dict[str,...
  function split_dataset (line 84) | def split_dataset(base_dataset: BaseDataset, fraction: float, seed: int)...
  function resize_image (line 96) | def resize_image(image: Image.Image, scale_factor: int) -> Image.Image:
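
The `split_dataset(base_dataset, fraction, seed)` helper above produces a reproducible train/val split. Its core idea can be sketched framework-free as a seeded shuffle-and-slice over indices (a minimal sketch based only on the listed signature; the repo's version operates on a `BaseDataset`):

```python
import random
from typing import List, Tuple


def split_by_fraction(indices: List[int], fraction: float, seed: int) -> Tuple[List[int], List[int]]:
    """Deterministically split indices into two disjoint parts of relative
    sizes fraction and (1 - fraction)."""
    shuffled = indices[:]
    random.Random(seed).shuffle(shuffled)  # seeded, so the split is reproducible
    cut = int(fraction * len(shuffled))
    return shuffled[:cut], shuffled[cut:]
```

Fixing the seed means the same call always yields the same partition, which keeps the validation set stable across runs.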

FILE: lab01/text_recognizer/models/mlp.py
  class MLP (line 14) | class MLP(nn.Module):
    method __init__ (line 17) | def __init__(
    method forward (line 39) | def forward(self, x):
    method add_to_argparse (line 51) | def add_to_argparse(parser):

FILE: lab01/text_recognizer/util.py
  function to_categorical (line 17) | def to_categorical(y, num_classes):
  function read_image_pil (line 22) | def read_image_pil(image_uri: Union[Path, str], grayscale=False) -> Image:
  function read_image_pil_file (line 27) | def read_image_pil_file(image_file, grayscale=False) -> Image:
  function temporary_working_directory (line 37) | def temporary_working_directory(working_dir: Union[str, Path]):
  function compute_sha256 (line 47) | def compute_sha256(filename: Union[Path, str]):
  class TqdmUpTo (line 53) | class TqdmUpTo(tqdm):
    method update_to (line 56) | def update_to(self, blocks=1, bsize=1, tsize=None):
  function download_url (line 72) | def download_url(url, filename):
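
`to_categorical(y, num_classes)` converts integer class labels to one-hot vectors. Judging from the signature alone, a minimal NumPy sketch (the repo's exact dtype and edge-case handling may differ):

```python
import numpy as np


def to_categorical(y, num_classes: int) -> np.ndarray:
    """One-hot encode integer labels: row i is all zeros except a 1 at column y[i]."""
    return np.eye(num_classes, dtype="uint8")[np.asarray(y)]
```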

FILE: lab02/text_recognizer/data/base_data_module.py
  function load_and_print_info (line 16) | def load_and_print_info(data_module_class) -> None:
  function _download_raw_dataset (line 27) | def _download_raw_dataset(metadata: Dict, dl_dirname: Path) -> Path:
  class BaseDataModule (line 51) | class BaseDataModule(pl.LightningDataModule):
    method __init__ (line 57) | def __init__(self, args: argparse.Namespace = None) -> None:
    method data_dirname (line 74) | def data_dirname(cls):
    method add_to_argparse (line 78) | def add_to_argparse(parser):
    method config (line 93) | def config(self):
    method prepare_data (line 97) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 104) | def setup(self, stage: Optional[str] = None) -> None:
    method train_dataloader (line 111) | def train_dataloader(self):
    method val_dataloader (line 120) | def val_dataloader(self):
    method test_dataloader (line 129) | def test_dataloader(self):

FILE: lab02/text_recognizer/data/emnist.py
  class EMNIST (line 32) | class EMNIST(BaseDataModule):
    method __init__ (line 43) | def __init__(self, args=None):
    method prepare_data (line 52) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 56) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 71) | def __repr__(self):
  function _download_and_process_emnist (line 85) | def _download_and_process_emnist():
  function _process_raw_dataset (line 91) | def _process_raw_dataset(filename: str, dirname: Path):
  function _sample_to_balance (line 133) | def _sample_to_balance(x, y):
  function _augment_emnist_characters (line 148) | def _augment_emnist_characters(characters: Sequence[str]) -> Sequence[str]:
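
EMNIST's by-class counts are uneven, and `_sample_to_balance(x, y)` subsamples toward a balanced dataset. One plausible policy, capping each class at the mean class count, can be sketched as follows (the capping rule and seeding here are assumptions, not necessarily the repo's exact behavior):

```python
import numpy as np


def sample_to_balance(x: np.ndarray, y: np.ndarray, seed: int = 0):
    """Subsample (x, y) so no class appears more often than the mean class count."""
    rng = np.random.default_rng(seed)
    labels = y.flatten()
    num_to_keep = int(np.bincount(labels).mean())  # cap per class
    keep_idxs = []
    for label in np.unique(labels):
        idxs = np.where(labels == label)[0]
        chosen = rng.choice(idxs, size=min(len(idxs), num_to_keep), replace=False)
        keep_idxs.append(chosen)
    idxs = np.concatenate(keep_idxs)
    return x[idxs], y[idxs]
```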

FILE: lab02/text_recognizer/data/emnist_lines.py
  class EMNISTLines (line 28) | class EMNISTLines(BaseDataModule):
    method __init__ (line 31) | def __init__(
    method add_to_argparse (line 55) | def add_to_argparse(parser):
    method data_filename (line 79) | def data_filename(self):
    method prepare_data (line 85) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 93) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 111) | def __repr__(self) -> str:
    method _generate_data (line 132) | def _generate_data(self, split: str) -> None:
  function get_samples_by_char (line 168) | def get_samples_by_char(samples, labels, mapping):
  function select_letter_samples_for_string (line 175) | def select_letter_samples_for_string(string, samples_by_char, char_shape...
  function construct_image_from_string (line 187) | def construct_image_from_string(
  function create_dataset_of_images (line 202) | def create_dataset_of_images(N, samples_by_char, sentence_generator, min...
  function convert_strings_to_labels (line 212) | def convert_strings_to_labels(

FILE: lab02/text_recognizer/data/mnist.py
  class MNIST (line 12) | class MNIST(BaseDataModule):
    method __init__ (line 15) | def __init__(self, args: argparse.Namespace) -> None:
    method prepare_data (line 23) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 28) | def setup(self, stage=None) -> None:

FILE: lab02/text_recognizer/data/sentence_generator.py
  class SentenceGenerator (line 15) | class SentenceGenerator:
    method __init__ (line 18) | def __init__(self, max_length: Optional[int] = None):
    method generate (line 23) | def generate(self, max_length: Optional[int] = None) -> str:
    method _get_end_ind_candidates (line 49) | def _get_end_ind_candidates(self, first_ind: int, start_ind: int, max_...
  function brown_text (line 58) | def brown_text():
  function load_nltk_brown_corpus (line 67) | def load_nltk_brown_corpus():

FILE: lab02/text_recognizer/data/util.py
  class BaseDataset (line 11) | class BaseDataset(torch.utils.data.Dataset):
    method __init__ (line 28) | def __init__(
    method __len__ (line 43) | def __len__(self) -> int:
    method __getitem__ (line 47) | def __getitem__(self, index: int) -> Tuple[Any, Any]:
  function convert_strings_to_labels (line 70) | def convert_strings_to_labels(strings: Sequence[str], mapping: Dict[str,...
  function split_dataset (line 84) | def split_dataset(base_dataset: BaseDataset, fraction: float, seed: int)...
  function resize_image (line 96) | def resize_image(image: Image.Image, scale_factor: int) -> Image.Image:

FILE: lab02/text_recognizer/lit_models/base.py
  class BaseLitModel (line 15) | class BaseLitModel(pl.LightningModule):
    method __init__ (line 20) | def __init__(self, model, args: argparse.Namespace = None):
    method add_to_argparse (line 46) | def add_to_argparse(parser):
    method configure_optimizers (line 54) | def configure_optimizers(self):
    method forward (line 63) | def forward(self, x):
    method predict (line 66) | def predict(self, x):
    method training_step (line 70) | def training_step(self, batch, batch_idx):
    method _run_on_batch (line 81) | def _run_on_batch(self, batch, with_preds=False):
    method validation_step (line 88) | def validation_step(self, batch, batch_idx):
    method test_step (line 99) | def test_step(self, batch, batch_idx):

FILE: lab02/text_recognizer/models/cnn.py
  class ConvBlock (line 15) | class ConvBlock(nn.Module):
    method __init__ (line 20) | def __init__(self, input_channels: int, output_channels: int) -> None:
    method forward (line 25) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class CNN (line 43) | class CNN(nn.Module):
    method __init__ (line 46) | def __init__(self, data_config: Dict[str, Any], args: argparse.Namespa...
    method forward (line 75) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method add_to_argparse (line 101) | def add_to_argparse(parser):

FILE: lab02/text_recognizer/models/line_cnn_simple.py
  class LineCNNSimple (line 16) | class LineCNNSimple(nn.Module):
    method __init__ (line 19) | def __init__(
    method forward (line 38) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method add_to_argparse (line 73) | def add_to_argparse(parser):

FILE: lab02/text_recognizer/models/mlp.py
  class MLP (line 14) | class MLP(nn.Module):
    method __init__ (line 17) | def __init__(
    method forward (line 39) | def forward(self, x):
    method add_to_argparse (line 51) | def add_to_argparse(parser):

FILE: lab02/text_recognizer/stems/image.py
  class ImageStem (line 5) | class ImageStem:
    method __init__ (line 23) | def __init__(self):
    method __call__ (line 28) | def __call__(self, img):
  class MNISTStem (line 38) | class MNISTStem(ImageStem):
    method __init__ (line 41) | def __init__(self):

FILE: lab02/text_recognizer/util.py
  function to_categorical (line 17) | def to_categorical(y, num_classes):
  function read_image_pil (line 22) | def read_image_pil(image_uri: Union[Path, str], grayscale=False) -> Image:
  function read_image_pil_file (line 27) | def read_image_pil_file(image_file, grayscale=False) -> Image:
  function temporary_working_directory (line 37) | def temporary_working_directory(working_dir: Union[str, Path]):
  function compute_sha256 (line 47) | def compute_sha256(filename: Union[Path, str]):
  class TqdmUpTo (line 53) | class TqdmUpTo(tqdm):
    method update_to (line 56) | def update_to(self, blocks=1, bsize=1, tsize=None):
  function download_url (line 72) | def download_url(url, filename):

FILE: lab02/training/run_experiment.py
  function _setup_parser (line 19) | def _setup_parser():
  function _ensure_logging_dir (line 73) | def _ensure_logging_dir(experiment_dir):
  function main (line 78) | def main():

FILE: lab02/training/util.py
  function import_class (line 9) | def import_class(module_and_class_name: str) -> type:
  function setup_data_and_model_from_args (line 17) | def setup_data_and_model_from_args(args: argparse.Namespace):
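
`import_class` resolves dotted strings such as `text_recognizer.models.MLP` (passed via command-line flags to `run_experiment.py`) into class objects. The idiomatic implementation uses `importlib` (a sketch consistent with the listed signature):

```python
import importlib


def import_class(module_and_class_name: str) -> type:
    """Import e.g. 'text_recognizer.models.MLP' and return the class object."""
    module_name, class_name = module_and_class_name.rsplit(".", 1)
    module = importlib.import_module(module_name)
    return getattr(module, class_name)
```

This is what lets the training script swap models and data modules from the command line without hard-coding imports.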

FILE: lab03/text_recognizer/data/base_data_module.py
  function load_and_print_info (line 16) | def load_and_print_info(data_module_class) -> None:
  function _download_raw_dataset (line 27) | def _download_raw_dataset(metadata: Dict, dl_dirname: Path) -> Path:
  class BaseDataModule (line 51) | class BaseDataModule(pl.LightningDataModule):
    method __init__ (line 57) | def __init__(self, args: argparse.Namespace = None) -> None:
    method data_dirname (line 74) | def data_dirname(cls):
    method add_to_argparse (line 78) | def add_to_argparse(parser):
    method config (line 93) | def config(self):
    method prepare_data (line 97) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 104) | def setup(self, stage: Optional[str] = None) -> None:
    method train_dataloader (line 111) | def train_dataloader(self):
    method val_dataloader (line 120) | def val_dataloader(self):
    method test_dataloader (line 129) | def test_dataloader(self):

FILE: lab03/text_recognizer/data/emnist.py
  class EMNIST (line 32) | class EMNIST(BaseDataModule):
    method __init__ (line 43) | def __init__(self, args=None):
    method prepare_data (line 52) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 56) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 71) | def __repr__(self):
  function _download_and_process_emnist (line 85) | def _download_and_process_emnist():
  function _process_raw_dataset (line 91) | def _process_raw_dataset(filename: str, dirname: Path):
  function _sample_to_balance (line 133) | def _sample_to_balance(x, y):
  function _augment_emnist_characters (line 148) | def _augment_emnist_characters(characters: Sequence[str]) -> Sequence[str]:

FILE: lab03/text_recognizer/data/emnist_lines.py
  class EMNISTLines (line 28) | class EMNISTLines(BaseDataModule):
    method __init__ (line 31) | def __init__(
    method add_to_argparse (line 55) | def add_to_argparse(parser):
    method data_filename (line 79) | def data_filename(self):
    method prepare_data (line 85) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 93) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 111) | def __repr__(self) -> str:
    method _generate_data (line 132) | def _generate_data(self, split: str) -> None:
  function get_samples_by_char (line 168) | def get_samples_by_char(samples, labels, mapping):
  function select_letter_samples_for_string (line 175) | def select_letter_samples_for_string(string, samples_by_char, char_shape...
  function construct_image_from_string (line 187) | def construct_image_from_string(
  function create_dataset_of_images (line 202) | def create_dataset_of_images(N, samples_by_char, sentence_generator, min...
  function convert_strings_to_labels (line 212) | def convert_strings_to_labels(

FILE: lab03/text_recognizer/data/iam.py
  class IAM (line 22) | class IAM:
    method __init__ (line 40) | def __init__(self):
    method prepare_data (line 43) | def prepare_data(self):
    method load_image (line 49) | def load_image(self, id: str) -> Image.Image:
    method __repr__ (line 62) | def __repr__(self):
    method all_ids (line 74) | def all_ids(self):
    method ids_by_split (line 79) | def ids_by_split(self):
    method split_by_id (line 83) | def split_by_id(self):
    method train_ids (line 91) | def train_ids(self):
    method test_ids (line 96) | def test_ids(self):
    method xml_filenames (line 101) | def xml_filenames(self) -> List[Path]:
    method validation_ids (line 106) | def validation_ids(self):
    method form_filenames (line 113) | def form_filenames(self) -> List[Path]:
    method xml_filenames_by_id (line 118) | def xml_filenames_by_id(self):
    method form_filenames_by_id (line 123) | def form_filenames_by_id(self):
    method line_strings_by_id (line 128) | def line_strings_by_id(self):
    method line_regions_by_id (line 133) | def line_regions_by_id(self):
    method paragraph_string_by_id (line 138) | def paragraph_string_by_id(self):
    method paragraph_region_by_id (line 143) | def paragraph_region_by_id(self):
  function _extract_raw_dataset (line 156) | def _extract_raw_dataset(filename: Path, dirname: Path) -> None:
  function _get_ids_from_lwitlrt_split_file (line 163) | def _get_ids_from_lwitlrt_split_file(filename: str) -> List[str]:
  function _get_line_strings_from_xml_file (line 172) | def _get_line_strings_from_xml_file(filename: str) -> List[str]:
  function _get_text_from_xml_element (line 178) | def _get_text_from_xml_element(xml_element: Any) -> str:
  function _get_line_regions_from_xml_file (line 183) | def _get_line_regions_from_xml_file(filename: str) -> List[Dict[str, int]]:
  function _get_line_elements_from_xml_file (line 210) | def _get_line_elements_from_xml_file(filename: str) -> List[Any]:
  function _get_region_from_xml_element (line 216) | def _get_region_from_xml_element(xml_elem: Any, xml_path: str) -> Option...
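
`paragraph_region_by_id` presumably derives a paragraph's bounding box from its constituent line regions. Assuming line regions are dicts with `x1`, `y1`, `x2`, `y2` keys (as the return type of `_get_line_regions_from_xml_file` suggests), the union box can be sketched as:

```python
from typing import Dict, List


def paragraph_region_from_lines(line_regions: List[Dict[str, int]]) -> Dict[str, int]:
    """Smallest bounding box covering every line region (keys x1, y1, x2, y2)."""
    return {
        "x1": min(r["x1"] for r in line_regions),
        "y1": min(r["y1"] for r in line_regions),
        "x2": max(r["x2"] for r in line_regions),
        "y2": max(r["y2"] for r in line_regions),
    }
```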

FILE: lab03/text_recognizer/data/iam_paragraphs.py
  class IAMParagraphs (line 24) | class IAMParagraphs(BaseDataModule):
    method __init__ (line 27) | def __init__(self, args: argparse.Namespace = None):
    method add_to_argparse (line 41) | def add_to_argparse(parser):
    method prepare_data (line 46) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 75) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 91) | def __repr__(self) -> str:
  function validate_input_and_output_dimensions (line 114) | def validate_input_and_output_dimensions(
  function get_paragraph_crops_and_labels (line 127) | def get_paragraph_crops_and_labels(
  function save_crops_and_labels (line 143) | def save_crops_and_labels(crops: Dict[str, Image.Image], labels: Dict[st...
  function load_processed_crops_and_labels (line 154) | def load_processed_crops_and_labels(split: str) -> Tuple[Sequence[Image....
  function get_dataset_properties (line 167) | def get_dataset_properties() -> dict:
  function _labels_filename (line 188) | def _labels_filename(split: str) -> Path:
  function _crop_filename (line 193) | def _crop_filename(id_: str, split: str) -> Path:
  function _num_lines (line 198) | def _num_lines(label: str) -> int:

FILE: lab03/text_recognizer/data/mnist.py
  class MNIST (line 12) | class MNIST(BaseDataModule):
    method __init__ (line 15) | def __init__(self, args: argparse.Namespace) -> None:
    method prepare_data (line 23) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 28) | def setup(self, stage=None) -> None:

FILE: lab03/text_recognizer/data/sentence_generator.py
  class SentenceGenerator (line 15) | class SentenceGenerator:
    method __init__ (line 18) | def __init__(self, max_length: Optional[int] = None):
    method generate (line 23) | def generate(self, max_length: Optional[int] = None) -> str:
    method _get_end_ind_candidates (line 49) | def _get_end_ind_candidates(self, first_ind: int, start_ind: int, max_...
  function brown_text (line 58) | def brown_text():
  function load_nltk_brown_corpus (line 67) | def load_nltk_brown_corpus():

FILE: lab03/text_recognizer/data/util.py
  class BaseDataset (line 11) | class BaseDataset(torch.utils.data.Dataset):
    method __init__ (line 28) | def __init__(
    method __len__ (line 43) | def __len__(self) -> int:
    method __getitem__ (line 47) | def __getitem__(self, index: int) -> Tuple[Any, Any]:
  function convert_strings_to_labels (line 70) | def convert_strings_to_labels(strings: Sequence[str], mapping: Dict[str,...
  function split_dataset (line 84) | def split_dataset(base_dataset: BaseDataset, fraction: float, seed: int)...
  function resize_image (line 96) | def resize_image(image: Image.Image, scale_factor: int) -> Image.Image:

FILE: lab03/text_recognizer/lit_models/base.py
  class BaseLitModel (line 17) | class BaseLitModel(pl.LightningModule):
    method __init__ (line 22) | def __init__(self, model, args: argparse.Namespace = None):
    method add_to_argparse (line 48) | def add_to_argparse(parser):
    method configure_optimizers (line 56) | def configure_optimizers(self):
    method forward (line 65) | def forward(self, x):
    method predict (line 68) | def predict(self, x):
    method training_step (line 72) | def training_step(self, batch, batch_idx):
    method _run_on_batch (line 83) | def _run_on_batch(self, batch, with_preds=False):
    method validation_step (line 90) | def validation_step(self, batch, batch_idx):
    method test_step (line 101) | def test_step(self, batch, batch_idx):
  class BaseImageToTextLitModel (line 109) | class BaseImageToTextLitModel(BaseLitModel):  # pylint: disable=too-many...
    method __init__ (line 112) | def __init__(self, model, args: argparse.Namespace = None):

FILE: lab03/text_recognizer/lit_models/metrics.py
  class CharacterErrorRate (line 8) | class CharacterErrorRate(torchmetrics.CharErrorRate):
    method __init__ (line 11) | def __init__(self, ignore_tokens: Sequence[int], *args):
    method update (line 15) | def update(self, preds: torch.Tensor, targets: torch.Tensor):  # type:...
  function test_character_error_rate (line 21) | def test_character_error_rate():
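
`CharacterErrorRate` subclasses `torchmetrics.CharErrorRate`, i.e. Levenshtein edit distance between predicted and target strings, normalized by target length. The underlying computation can be written framework-free (a reference sketch of the metric, not the repo's tensor-based version):

```python
def character_error_rate(pred: str, target: str) -> float:
    """Levenshtein edit distance from pred to target, divided by len(target)."""
    prev = list(range(len(target) + 1))  # distances for the empty prefix of pred
    for i, p in enumerate(pred, start=1):
        curr = [i]
        for j, t in enumerate(target, start=1):
            cost = 0 if p == t else 1
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + cost))  # substitution / match
        prev = curr
    return prev[-1] / len(target)
```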

FILE: lab03/text_recognizer/lit_models/transformer.py
  class TransformerLitModel (line 10) | class TransformerLitModel(BaseImageToTextLitModel):
    method __init__ (line 18) | def __init__(self, model, args=None):
    method forward (line 22) | def forward(self, x):
    method teacher_forward (line 25) | def teacher_forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.T...
    method training_step (line 44) | def training_step(self, batch, batch_idx):
    method validation_step (line 55) | def validation_step(self, batch, batch_idx):
    method test_step (line 72) | def test_step(self, batch, batch_idx):
    method map (line 89) | def map(self, ks: Sequence[int], ignore: bool = True) -> str:
    method batchmap (line 96) | def batchmap(self, ks: Sequence[Sequence[int]], ignore=True) -> List[s...
    method get_preds (line 100) | def get_preds(self, logitlikes: torch.Tensor, replace_after_end: bool ...

FILE: lab03/text_recognizer/lit_models/util.py
  function first_appearance (line 6) | def first_appearance(x: torch.Tensor, element: Union[int, float], dim: i...
  function replace_after (line 47) | def replace_after(x: torch.Tensor, element: Union[int, float], replace: ...
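
`first_appearance` and `replace_after` are the utilities that truncate decoded token sequences at the end token. Based on the listed signatures, a NumPy rendering of both (the repo's versions operate on `torch.Tensor`s along a given `dim`):

```python
import numpy as np


def first_appearance(x: np.ndarray, element: int) -> np.ndarray:
    """Column index of the first occurrence of element in each row (row length if absent)."""
    matches = x == element
    return np.where(matches.any(axis=1), matches.argmax(axis=1), x.shape[1])


def replace_after(x: np.ndarray, element: int, replace: int) -> np.ndarray:
    """Replace everything strictly after the first occurrence of element in each row."""
    firsts = first_appearance(x, element)
    cols = np.arange(x.shape[1])
    return np.where(cols[None, :] > firsts[:, None], replace, x)
```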

FILE: lab03/text_recognizer/models/cnn.py
  class ConvBlock (line 15) | class ConvBlock(nn.Module):
    method __init__ (line 20) | def __init__(self, input_channels: int, output_channels: int) -> None:
    method forward (line 25) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class CNN (line 43) | class CNN(nn.Module):
    method __init__ (line 46) | def __init__(self, data_config: Dict[str, Any], args: argparse.Namespa...
    method forward (line 75) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method add_to_argparse (line 101) | def add_to_argparse(parser):

FILE: lab03/text_recognizer/models/line_cnn_simple.py
  class LineCNNSimple (line 16) | class LineCNNSimple(nn.Module):
    method __init__ (line 19) | def __init__(
    method forward (line 38) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method add_to_argparse (line 73) | def add_to_argparse(parser):

FILE: lab03/text_recognizer/models/mlp.py
  class MLP (line 14) | class MLP(nn.Module):
    method __init__ (line 17) | def __init__(
    method forward (line 39) | def forward(self, x):
    method add_to_argparse (line 51) | def add_to_argparse(parser):

FILE: lab03/text_recognizer/models/resnet_transformer.py
  class ResnetTransformer (line 21) | class ResnetTransformer(nn.Module):
    method __init__ (line 24) | def __init__(
    method forward (line 75) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method init_weights (line 111) | def init_weights(self):
    method encode (line 123) | def encode(self, x: torch.Tensor) -> torch.Tensor:
    method decode (line 147) | def decode(self, x, y):
    method add_to_argparse (line 179) | def add_to_argparse(parser):

FILE: lab03/text_recognizer/models/transformer_util.py
  class PositionalEncodingImage (line 9) | class PositionalEncodingImage(nn.Module):
    method __init__ (line 16) | def __init__(self, d_model: int, max_h: int = 2000, max_w: int = 2000,...
    method make_pe (line 26) | def make_pe(d_model: int, max_h: int, max_w: int) -> torch.Tensor:
    method forward (line 36) | def forward(self, x: Tensor) -> Tensor:
  class PositionalEncoding (line 44) | class PositionalEncoding(torch.nn.Module):
    method __init__ (line 47) | def __init__(self, d_model: int, dropout: float = 0.1, max_len: int = ...
    method make_pe (line 56) | def make_pe(d_model: int, max_len: int) -> torch.Tensor:
    method forward (line 65) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  function generate_square_subsequent_mask (line 72) | def generate_square_subsequent_mask(size: int) -> torch.Tensor:
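
`PositionalEncoding.make_pe` and `generate_square_subsequent_mask` implement two standard Transformer building blocks: sinusoidal positional encodings and the additive causal mask that blocks attention to future positions. Both are illustrated here in NumPy (the repo's versions return `torch.Tensor`s):

```python
import numpy as np


def make_positional_encoding(d_model: int, max_len: int) -> np.ndarray:
    """Sinusoidal positional encoding of shape (max_len, d_model):
    sines in even columns, cosines in odd columns."""
    position = np.arange(max_len)[:, None]
    div_term = np.exp(np.arange(0, d_model, 2) * (-np.log(10000.0) / d_model))
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(position * div_term)
    pe[:, 1::2] = np.cos(position * div_term)
    return pe


def generate_square_subsequent_mask(size: int) -> np.ndarray:
    """Additive causal mask: 0 on and below the diagonal, -inf strictly above."""
    return np.triu(np.full((size, size), -np.inf), k=1)
```

Adding the mask to attention scores before the softmax zeroes out each position's attention to later positions, which is what lets the decoder train with teacher forcing.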

FILE: lab03/text_recognizer/stems/image.py
  class ImageStem (line 5) | class ImageStem:
    method __init__ (line 23) | def __init__(self):
    method __call__ (line 28) | def __call__(self, img):
  class MNISTStem (line 38) | class MNISTStem(ImageStem):
    method __init__ (line 41) | def __init__(self):

FILE: lab03/text_recognizer/stems/paragraph.py
  class ParagraphStem (line 14) | class ParagraphStem(ImageStem):
    method __init__ (line 17) | def __init__(

FILE: lab03/text_recognizer/util.py
  function to_categorical (line 17) | def to_categorical(y, num_classes):
  function read_image_pil (line 22) | def read_image_pil(image_uri: Union[Path, str], grayscale=False) -> Image:
  function read_image_pil_file (line 27) | def read_image_pil_file(image_file, grayscale=False) -> Image:
  function temporary_working_directory (line 37) | def temporary_working_directory(working_dir: Union[str, Path]):
  function compute_sha256 (line 47) | def compute_sha256(filename: Union[Path, str]):
  class TqdmUpTo (line 53) | class TqdmUpTo(tqdm):
    method update_to (line 56) | def update_to(self, blocks=1, bsize=1, tsize=None):
  function download_url (line 72) | def download_url(url, filename):

FILE: lab03/training/run_experiment.py
  function _setup_parser (line 19) | def _setup_parser():
  function _ensure_logging_dir (line 73) | def _ensure_logging_dir(experiment_dir):
  function main (line 78) | def main():

FILE: lab03/training/util.py
  function import_class (line 9) | def import_class(module_and_class_name: str) -> type:
  function setup_data_and_model_from_args (line 17) | def setup_data_and_model_from_args(args: argparse.Namespace):

FILE: lab04/text_recognizer/callbacks/imtotext.py
  class ImageToTextTableLogger (line 14) | class ImageToTextTableLogger(pl.Callback):
    method __init__ (line 17) | def __init__(self, max_images_to_log=32, on_train=True):
    method on_train_batch_end (line 24) | def on_train_batch_end(self, trainer, module, output, batch, batch_idx):
    method on_validation_batch_end (line 33) | def on_validation_batch_end(self, trainer, module, output, batch, batc...
    method _log_image_text_table (line 40) | def _log_image_text_table(self, trainer, output, batch, key):
    method has_metrics (line 55) | def has_metrics(self, output):
  class ImageToTextCaptionLogger (line 59) | class ImageToTextCaptionLogger(pl.Callback):
    method __init__ (line 62) | def __init__(self, max_images_to_log=32, on_train=True):
    method on_train_batch_end (line 69) | def on_train_batch_end(self, trainer, module, output, batch, batch_idx):
    method on_validation_batch_end (line 77) | def on_validation_batch_end(self, trainer, module, output, batch, batc...
    method on_test_batch_end (line 85) | def on_test_batch_end(self, trainer, module, output, batch, batch_idx,...
    method _log_image_text_caption (line 92) | def _log_image_text_caption(self, trainer, output, batch, key):
    method has_metrics (line 102) | def has_metrics(self, output):

FILE: lab04/text_recognizer/callbacks/model.py
  class ModelSizeLogger (line 19) | class ModelSizeLogger(pl.Callback):
    method __init__ (line 22) | def __init__(self, print_size=True):
    method on_fit_start (line 27) | def on_fit_start(self, trainer, module):
    method _run (line 30) | def _run(self, trainer, module):
    method get_model_disksize (line 43) | def get_model_disksize(module):
  class GraphLogger (line 51) | class GraphLogger(pl.Callback):
    method __init__ (line 54) | def __init__(self, output_key="logits"):
    method on_train_batch_end (line 62) | def on_train_batch_end(self, trainer, module, outputs, batch, batch_id...
    method log_graph (line 72) | def log_graph(trainer, module, outputs):
  function count_params (line 84) | def count_params(module):

FILE: lab04/text_recognizer/callbacks/optim.py
  class LearningRateMonitor (line 6) | class LearningRateMonitor(pl.callbacks.LearningRateMonitor):
    method _add_prefix (line 13) | def _add_prefix(self, *args, **kwargs) -> str:

FILE: lab04/text_recognizer/callbacks/util.py
  function check_and_warn (line 6) | def check_and_warn(logger, attribute, feature):
  function warn_no_attribute (line 12) | def warn_no_attribute(blocked_feature, missing_attribute):

FILE: lab04/text_recognizer/data/base_data_module.py
  function load_and_print_info (line 16) | def load_and_print_info(data_module_class) -> None:
  function _download_raw_dataset (line 27) | def _download_raw_dataset(metadata: Dict, dl_dirname: Path) -> Path:
  class BaseDataModule (line 51) | class BaseDataModule(pl.LightningDataModule):
    method __init__ (line 57) | def __init__(self, args: argparse.Namespace = None) -> None:
    method data_dirname (line 74) | def data_dirname(cls):
    method add_to_argparse (line 78) | def add_to_argparse(parser):
    method config (line 93) | def config(self):
    method prepare_data (line 97) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 104) | def setup(self, stage: Optional[str] = None) -> None:
    method train_dataloader (line 111) | def train_dataloader(self):
    method val_dataloader (line 120) | def val_dataloader(self):
    method test_dataloader (line 129) | def test_dataloader(self):

FILE: lab04/text_recognizer/data/emnist.py
  class EMNIST (line 32) | class EMNIST(BaseDataModule):
    method __init__ (line 43) | def __init__(self, args=None):
    method prepare_data (line 52) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 56) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 71) | def __repr__(self):
  function _download_and_process_emnist (line 85) | def _download_and_process_emnist():
  function _process_raw_dataset (line 91) | def _process_raw_dataset(filename: str, dirname: Path):
  function _sample_to_balance (line 133) | def _sample_to_balance(x, y):
  function _augment_emnist_characters (line 148) | def _augment_emnist_characters(characters: Sequence[str]) -> Sequence[str]:

FILE: lab04/text_recognizer/data/emnist_lines.py
  class EMNISTLines (line 28) | class EMNISTLines(BaseDataModule):
    method __init__ (line 31) | def __init__(
    method add_to_argparse (line 55) | def add_to_argparse(parser):
    method data_filename (line 79) | def data_filename(self):
    method prepare_data (line 85) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 93) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 111) | def __repr__(self) -> str:
    method _generate_data (line 132) | def _generate_data(self, split: str) -> None:
  function get_samples_by_char (line 168) | def get_samples_by_char(samples, labels, mapping):
  function select_letter_samples_for_string (line 175) | def select_letter_samples_for_string(string, samples_by_char, char_shape...
  function construct_image_from_string (line 187) | def construct_image_from_string(
  function create_dataset_of_images (line 202) | def create_dataset_of_images(N, samples_by_char, sentence_generator, min...
  function convert_strings_to_labels (line 212) | def convert_strings_to_labels(

FILE: lab04/text_recognizer/data/iam.py
  class IAM (line 22) | class IAM:
    method __init__ (line 40) | def __init__(self):
    method prepare_data (line 43) | def prepare_data(self):
    method load_image (line 49) | def load_image(self, id: str) -> Image.Image:
    method __repr__ (line 62) | def __repr__(self):
    method all_ids (line 74) | def all_ids(self):
    method ids_by_split (line 79) | def ids_by_split(self):
    method split_by_id (line 83) | def split_by_id(self):
    method train_ids (line 91) | def train_ids(self):
    method test_ids (line 96) | def test_ids(self):
    method xml_filenames (line 101) | def xml_filenames(self) -> List[Path]:
    method validation_ids (line 106) | def validation_ids(self):
    method form_filenames (line 113) | def form_filenames(self) -> List[Path]:
    method xml_filenames_by_id (line 118) | def xml_filenames_by_id(self):
    method form_filenames_by_id (line 123) | def form_filenames_by_id(self):
    method line_strings_by_id (line 128) | def line_strings_by_id(self):
    method line_regions_by_id (line 133) | def line_regions_by_id(self):
    method paragraph_string_by_id (line 138) | def paragraph_string_by_id(self):
    method paragraph_region_by_id (line 143) | def paragraph_region_by_id(self):
  function _extract_raw_dataset (line 156) | def _extract_raw_dataset(filename: Path, dirname: Path) -> None:
  function _get_ids_from_lwitlrt_split_file (line 163) | def _get_ids_from_lwitlrt_split_file(filename: str) -> List[str]:
  function _get_line_strings_from_xml_file (line 172) | def _get_line_strings_from_xml_file(filename: str) -> List[str]:
  function _get_text_from_xml_element (line 178) | def _get_text_from_xml_element(xml_element: Any) -> str:
  function _get_line_regions_from_xml_file (line 183) | def _get_line_regions_from_xml_file(filename: str) -> List[Dict[str, int]]:
  function _get_line_elements_from_xml_file (line 210) | def _get_line_elements_from_xml_file(filename: str) -> List[Any]:
  function _get_region_from_xml_element (line 216) | def _get_region_from_xml_element(xml_elem: Any, xml_path: str) -> Option...

FILE: lab04/text_recognizer/data/iam_lines.py
  class IAMLines (line 24) | class IAMLines(BaseDataModule):
    method __init__ (line 27) | def __init__(self, args: argparse.Namespace = None):
    method add_to_argparse (line 38) | def add_to_argparse(parser):
    method prepare_data (line 43) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 64) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 91) | def __repr__(self) -> str:
  function generate_line_crops_and_labels (line 114) | def generate_line_crops_and_labels(iam: IAM, split: str, scale_factor=IM...
  function save_images_and_labels (line 131) | def save_images_and_labels(crops: Sequence[Image.Image], labels: Sequenc...
  function load_processed_crops_and_labels (line 140) | def load_processed_crops_and_labels(split: str, data_dirname: Path):
  function load_processed_line_crops (line 148) | def load_processed_line_crops(split: str, data_dirname: Path):
  function load_processed_line_labels (line 155) | def load_processed_line_labels(split: str, data_dirname: Path):

FILE: lab04/text_recognizer/data/iam_paragraphs.py
  class IAMParagraphs (line 24) | class IAMParagraphs(BaseDataModule):
    method __init__ (line 27) | def __init__(self, args: argparse.Namespace = None):
    method add_to_argparse (line 41) | def add_to_argparse(parser):
    method prepare_data (line 46) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 75) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 91) | def __repr__(self) -> str:
  function validate_input_and_output_dimensions (line 114) | def validate_input_and_output_dimensions(
  function get_paragraph_crops_and_labels (line 127) | def get_paragraph_crops_and_labels(
  function save_crops_and_labels (line 143) | def save_crops_and_labels(crops: Dict[str, Image.Image], labels: Dict[st...
  function load_processed_crops_and_labels (line 154) | def load_processed_crops_and_labels(split: str) -> Tuple[Sequence[Image....
  function get_dataset_properties (line 167) | def get_dataset_properties() -> dict:
  function _labels_filename (line 188) | def _labels_filename(split: str) -> Path:
  function _crop_filename (line 193) | def _crop_filename(id_: str, split: str) -> Path:
  function _num_lines (line 198) | def _num_lines(label: str) -> int:

FILE: lab04/text_recognizer/data/mnist.py
  class MNIST (line 12) | class MNIST(BaseDataModule):
    method __init__ (line 15) | def __init__(self, args: argparse.Namespace) -> None:
    method prepare_data (line 23) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 28) | def setup(self, stage=None) -> None:

FILE: lab04/text_recognizer/data/sentence_generator.py
  class SentenceGenerator (line 15) | class SentenceGenerator:
    method __init__ (line 18) | def __init__(self, max_length: Optional[int] = None):
    method generate (line 23) | def generate(self, max_length: Optional[int] = None) -> str:
    method _get_end_ind_candidates (line 49) | def _get_end_ind_candidates(self, first_ind: int, start_ind: int, max_...
  function brown_text (line 58) | def brown_text():
  function load_nltk_brown_corpus (line 67) | def load_nltk_brown_corpus():

FILE: lab04/text_recognizer/data/util.py
  class BaseDataset (line 11) | class BaseDataset(torch.utils.data.Dataset):
    method __init__ (line 28) | def __init__(
    method __len__ (line 43) | def __len__(self) -> int:
    method __getitem__ (line 47) | def __getitem__(self, index: int) -> Tuple[Any, Any]:
  function convert_strings_to_labels (line 70) | def convert_strings_to_labels(strings: Sequence[str], mapping: Dict[str,...
  function split_dataset (line 84) | def split_dataset(base_dataset: BaseDataset, fraction: float, seed: int)...
  function resize_image (line 96) | def resize_image(image: Image.Image, scale_factor: int) -> Image.Image:

FILE: lab04/text_recognizer/lit_models/base.py
  class BaseLitModel (line 17) | class BaseLitModel(pl.LightningModule):
    method __init__ (line 22) | def __init__(self, model, args: argparse.Namespace = None):
    method add_to_argparse (line 48) | def add_to_argparse(parser):
    method configure_optimizers (line 56) | def configure_optimizers(self):
    method forward (line 65) | def forward(self, x):
    method predict (line 68) | def predict(self, x):
    method training_step (line 72) | def training_step(self, batch, batch_idx):
    method _run_on_batch (line 84) | def _run_on_batch(self, batch, with_preds=False):
    method validation_step (line 91) | def validation_step(self, batch, batch_idx):
    method test_step (line 103) | def test_step(self, batch, batch_idx):
    method add_on_first_batch (line 110) | def add_on_first_batch(self, metrics, outputs, batch_idx):
    method add_on_logged_batches (line 114) | def add_on_logged_batches(self, metrics, outputs):
    method is_logged_batch (line 118) | def is_logged_batch(self):
  class BaseImageToTextLitModel (line 125) | class BaseImageToTextLitModel(BaseLitModel):  # pylint: disable=too-many...
    method __init__ (line 128) | def __init__(self, model, args: argparse.Namespace = None):

FILE: lab04/text_recognizer/lit_models/metrics.py
  class CharacterErrorRate (line 8) | class CharacterErrorRate(torchmetrics.CharErrorRate):
    method __init__ (line 11) | def __init__(self, ignore_tokens: Sequence[int], *args):
    method update (line 15) | def update(self, preds: torch.Tensor, targets: torch.Tensor):  # type:...
  function test_character_error_rate (line 21) | def test_character_error_rate():

FILE: lab04/text_recognizer/lit_models/transformer.py
  class TransformerLitModel (line 10) | class TransformerLitModel(BaseImageToTextLitModel):
    method __init__ (line 18) | def __init__(self, model, args=None):
    method forward (line 22) | def forward(self, x):
    method teacher_forward (line 25) | def teacher_forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.T...
    method training_step (line 44) | def training_step(self, batch, batch_idx):
    method validation_step (line 59) | def validation_step(self, batch, batch_idx):
    method test_step (line 80) | def test_step(self, batch, batch_idx):
    method map (line 101) | def map(self, ks: Sequence[int], ignore: bool = True) -> str:
    method batchmap (line 108) | def batchmap(self, ks: Sequence[Sequence[int]], ignore=True) -> List[s...
    method get_preds (line 112) | def get_preds(self, logitlikes: torch.Tensor, replace_after_end: bool ...

FILE: lab04/text_recognizer/lit_models/util.py
  function first_appearance (line 6) | def first_appearance(x: torch.Tensor, element: Union[int, float], dim: i...
  function replace_after (line 47) | def replace_after(x: torch.Tensor, element: Union[int, float], replace: ...

FILE: lab04/text_recognizer/models/cnn.py
  class ConvBlock (line 15) | class ConvBlock(nn.Module):
    method __init__ (line 20) | def __init__(self, input_channels: int, output_channels: int) -> None:
    method forward (line 25) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class CNN (line 43) | class CNN(nn.Module):
    method __init__ (line 46) | def __init__(self, data_config: Dict[str, Any], args: argparse.Namespa...
    method forward (line 75) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method add_to_argparse (line 101) | def add_to_argparse(parser):

FILE: lab04/text_recognizer/models/line_cnn.py
  class ConvBlock (line 21) | class ConvBlock(nn.Module):
    method __init__ (line 26) | def __init__(
    method forward (line 38) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class LineCNN (line 56) | class LineCNN(nn.Module):
    method __init__ (line 61) | def __init__(
    method _init_weights (line 100) | def _init_weights(self):
    method forward (line 119) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method add_to_argparse (line 146) | def add_to_argparse(parser):

FILE: lab04/text_recognizer/models/line_cnn_simple.py
  class LineCNNSimple (line 16) | class LineCNNSimple(nn.Module):
    method __init__ (line 19) | def __init__(
    method forward (line 38) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method add_to_argparse (line 73) | def add_to_argparse(parser):

FILE: lab04/text_recognizer/models/line_cnn_transformer.py
  class LineCNNTransformer (line 20) | class LineCNNTransformer(nn.Module):
    method __init__ (line 23) | def __init__(
    method init_weights (line 65) | def init_weights(self):
    method encode (line 71) | def encode(self, x: torch.Tensor) -> torch.Tensor:
    method decode (line 90) | def decode(self, x, y):
    method forward (line 117) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method add_to_argparse (line 150) | def add_to_argparse(parser):

FILE: lab04/text_recognizer/models/mlp.py
  class MLP (line 14) | class MLP(nn.Module):
    method __init__ (line 17) | def __init__(
    method forward (line 39) | def forward(self, x):
    method add_to_argparse (line 51) | def add_to_argparse(parser):

FILE: lab04/text_recognizer/models/resnet_transformer.py
  class ResnetTransformer (line 21) | class ResnetTransformer(nn.Module):
    method __init__ (line 24) | def __init__(
    method forward (line 75) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method init_weights (line 111) | def init_weights(self):
    method encode (line 123) | def encode(self, x: torch.Tensor) -> torch.Tensor:
    method decode (line 147) | def decode(self, x, y):
    method add_to_argparse (line 179) | def add_to_argparse(parser):

FILE: lab04/text_recognizer/models/transformer_util.py
  class PositionalEncodingImage (line 9) | class PositionalEncodingImage(nn.Module):
    method __init__ (line 16) | def __init__(self, d_model: int, max_h: int = 2000, max_w: int = 2000,...
    method make_pe (line 26) | def make_pe(d_model: int, max_h: int, max_w: int) -> torch.Tensor:
    method forward (line 36) | def forward(self, x: Tensor) -> Tensor:
  class PositionalEncoding (line 44) | class PositionalEncoding(torch.nn.Module):
    method __init__ (line 47) | def __init__(self, d_model: int, dropout: float = 0.1, max_len: int = ...
    method make_pe (line 56) | def make_pe(d_model: int, max_len: int) -> torch.Tensor:
    method forward (line 65) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  function generate_square_subsequent_mask (line 72) | def generate_square_subsequent_mask(size: int) -> torch.Tensor:

FILE: lab04/text_recognizer/stems/image.py
  class ImageStem (line 5) | class ImageStem:
    method __init__ (line 23) | def __init__(self):
    method __call__ (line 28) | def __call__(self, img):
  class MNISTStem (line 38) | class MNISTStem(ImageStem):
    method __init__ (line 41) | def __init__(self):

FILE: lab04/text_recognizer/stems/line.py
  class LineStem (line 11) | class LineStem(ImageStem):
    method __init__ (line 14) | def __init__(self, augment=False, color_jitter_kwargs=None, random_aff...
  class IAMLineStem (line 37) | class IAMLineStem(ImageStem):
    method __init__ (line 40) | def __init__(self, augment=False, color_jitter_kwargs=None, random_aff...

FILE: lab04/text_recognizer/stems/paragraph.py
  class ParagraphStem (line 14) | class ParagraphStem(ImageStem):
    method __init__ (line 17) | def __init__(

FILE: lab04/text_recognizer/util.py
  function to_categorical (line 17) | def to_categorical(y, num_classes):
  function read_image_pil (line 22) | def read_image_pil(image_uri: Union[Path, str], grayscale=False) -> Image:
  function read_image_pil_file (line 27) | def read_image_pil_file(image_file, grayscale=False) -> Image:
  function temporary_working_directory (line 37) | def temporary_working_directory(working_dir: Union[str, Path]):
  function compute_sha256 (line 47) | def compute_sha256(filename: Union[Path, str]):
  class TqdmUpTo (line 53) | class TqdmUpTo(tqdm):
    method update_to (line 56) | def update_to(self, blocks=1, bsize=1, tsize=None):
  function download_url (line 72) | def download_url(url, filename):

FILE: lab04/training/run_experiment.py
  function _setup_parser (line 21) | def _setup_parser():
  function _ensure_logging_dir (line 81) | def _ensure_logging_dir(experiment_dir):
  function main (line 86) | def main():

FILE: lab04/training/util.py
  function import_class (line 9) | def import_class(module_and_class_name: str) -> type:
  function setup_data_and_model_from_args (line 17) | def setup_data_and_model_from_args(args: argparse.Namespace):

FILE: lab05/text_recognizer/callbacks/imtotext.py
  class ImageToTextTableLogger (line 14) | class ImageToTextTableLogger(pl.Callback):
    method __init__ (line 17) | def __init__(self, max_images_to_log=32, on_train=True):
    method on_train_batch_end (line 24) | def on_train_batch_end(self, trainer, module, output, batch, batch_idx):
    method on_validation_batch_end (line 33) | def on_validation_batch_end(self, trainer, module, output, batch, batc...
    method _log_image_text_table (line 40) | def _log_image_text_table(self, trainer, output, batch, key):
    method has_metrics (line 55) | def has_metrics(self, output):
  class ImageToTextCaptionLogger (line 59) | class ImageToTextCaptionLogger(pl.Callback):
    method __init__ (line 62) | def __init__(self, max_images_to_log=32, on_train=True):
    method on_train_batch_end (line 69) | def on_train_batch_end(self, trainer, module, output, batch, batch_idx):
    method on_validation_batch_end (line 77) | def on_validation_batch_end(self, trainer, module, output, batch, batc...
    method on_test_batch_end (line 85) | def on_test_batch_end(self, trainer, module, output, batch, batch_idx,...
    method _log_image_text_caption (line 92) | def _log_image_text_caption(self, trainer, output, batch, key):
    method has_metrics (line 102) | def has_metrics(self, output):

FILE: lab05/text_recognizer/callbacks/model.py
  class ModelSizeLogger (line 19) | class ModelSizeLogger(pl.Callback):
    method __init__ (line 22) | def __init__(self, print_size=True):
    method on_fit_start (line 27) | def on_fit_start(self, trainer, module):
    method _run (line 30) | def _run(self, trainer, module):
    method get_model_disksize (line 43) | def get_model_disksize(module):
  class GraphLogger (line 51) | class GraphLogger(pl.Callback):
    method __init__ (line 54) | def __init__(self, output_key="logits"):
    method on_train_batch_end (line 62) | def on_train_batch_end(self, trainer, module, outputs, batch, batch_id...
    method log_graph (line 72) | def log_graph(trainer, module, outputs):
  function count_params (line 84) | def count_params(module):

FILE: lab05/text_recognizer/callbacks/optim.py
  class LearningRateMonitor (line 6) | class LearningRateMonitor(pl.callbacks.LearningRateMonitor):
    method _add_prefix (line 13) | def _add_prefix(self, *args, **kwargs) -> str:

FILE: lab05/text_recognizer/callbacks/util.py
  function check_and_warn (line 6) | def check_and_warn(logger, attribute, feature):
  function warn_no_attribute (line 12) | def warn_no_attribute(blocked_feature, missing_attribute):

FILE: lab05/text_recognizer/data/base_data_module.py
  function load_and_print_info (line 16) | def load_and_print_info(data_module_class) -> None:
  function _download_raw_dataset (line 27) | def _download_raw_dataset(metadata: Dict, dl_dirname: Path) -> Path:
  class BaseDataModule (line 51) | class BaseDataModule(pl.LightningDataModule):
    method __init__ (line 57) | def __init__(self, args: argparse.Namespace = None) -> None:
    method data_dirname (line 74) | def data_dirname(cls):
    method add_to_argparse (line 78) | def add_to_argparse(parser):
    method config (line 93) | def config(self):
    method prepare_data (line 97) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 104) | def setup(self, stage: Optional[str] = None) -> None:
    method train_dataloader (line 111) | def train_dataloader(self):
    method val_dataloader (line 120) | def val_dataloader(self):
    method test_dataloader (line 129) | def test_dataloader(self):

FILE: lab05/text_recognizer/data/emnist.py
  class EMNIST (line 32) | class EMNIST(BaseDataModule):
    method __init__ (line 43) | def __init__(self, args=None):
    method prepare_data (line 52) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 56) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 71) | def __repr__(self):
  function _download_and_process_emnist (line 85) | def _download_and_process_emnist():
  function _process_raw_dataset (line 91) | def _process_raw_dataset(filename: str, dirname: Path):
  function _sample_to_balance (line 133) | def _sample_to_balance(x, y):
  function _augment_emnist_characters (line 148) | def _augment_emnist_characters(characters: Sequence[str]) -> Sequence[str]:

FILE: lab05/text_recognizer/data/emnist_lines.py
  class EMNISTLines (line 28) | class EMNISTLines(BaseDataModule):
    method __init__ (line 31) | def __init__(
    method add_to_argparse (line 55) | def add_to_argparse(parser):
    method data_filename (line 79) | def data_filename(self):
    method prepare_data (line 85) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 93) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 111) | def __repr__(self) -> str:
    method _generate_data (line 132) | def _generate_data(self, split: str) -> None:
  function get_samples_by_char (line 168) | def get_samples_by_char(samples, labels, mapping):
  function select_letter_samples_for_string (line 175) | def select_letter_samples_for_string(string, samples_by_char, char_shape...
  function construct_image_from_string (line 187) | def construct_image_from_string(
  function create_dataset_of_images (line 202) | def create_dataset_of_images(N, samples_by_char, sentence_generator, min...
  function convert_strings_to_labels (line 212) | def convert_strings_to_labels(

FILE: lab05/text_recognizer/data/fake_images.py
  class FakeImageData (line 15) | class FakeImageData(BaseDataModule):
    method __init__ (line 18) | def __init__(self, args: argparse.Namespace = None):
    method add_to_argparse (line 28) | def add_to_argparse(parser):
    method setup (line 36) | def setup(self, stage: str = None) -> None:

FILE: lab05/text_recognizer/data/iam.py
  class IAM (line 22) | class IAM:
    method __init__ (line 40) | def __init__(self):
    method prepare_data (line 43) | def prepare_data(self):
    method load_image (line 49) | def load_image(self, id: str) -> Image.Image:
    method __repr__ (line 62) | def __repr__(self):
    method all_ids (line 74) | def all_ids(self):
    method ids_by_split (line 79) | def ids_by_split(self):
    method split_by_id (line 83) | def split_by_id(self):
    method train_ids (line 91) | def train_ids(self):
    method test_ids (line 96) | def test_ids(self):
    method xml_filenames (line 101) | def xml_filenames(self) -> List[Path]:
    method validation_ids (line 106) | def validation_ids(self):
    method form_filenames (line 113) | def form_filenames(self) -> List[Path]:
    method xml_filenames_by_id (line 118) | def xml_filenames_by_id(self):
    method form_filenames_by_id (line 123) | def form_filenames_by_id(self):
    method line_strings_by_id (line 128) | def line_strings_by_id(self):
    method line_regions_by_id (line 133) | def line_regions_by_id(self):
    method paragraph_string_by_id (line 138) | def paragraph_string_by_id(self):
    method paragraph_region_by_id (line 143) | def paragraph_region_by_id(self):
  function _extract_raw_dataset (line 156) | def _extract_raw_dataset(filename: Path, dirname: Path) -> None:
  function _get_ids_from_lwitlrt_split_file (line 163) | def _get_ids_from_lwitlrt_split_file(filename: str) -> List[str]:
  function _get_line_strings_from_xml_file (line 172) | def _get_line_strings_from_xml_file(filename: str) -> List[str]:
  function _get_text_from_xml_element (line 178) | def _get_text_from_xml_element(xml_element: Any) -> str:
  function _get_line_regions_from_xml_file (line 183) | def _get_line_regions_from_xml_file(filename: str) -> List[Dict[str, int]]:
  function _get_line_elements_from_xml_file (line 210) | def _get_line_elements_from_xml_file(filename: str) -> List[Any]:
  function _get_region_from_xml_element (line 216) | def _get_region_from_xml_element(xml_elem: Any, xml_path: str) -> Option...

FILE: lab05/text_recognizer/data/iam_lines.py
  class IAMLines (line 24) | class IAMLines(BaseDataModule):
    method __init__ (line 27) | def __init__(self, args: argparse.Namespace = None):
    method add_to_argparse (line 38) | def add_to_argparse(parser):
    method prepare_data (line 43) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 64) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 91) | def __repr__(self) -> str:
  function generate_line_crops_and_labels (line 114) | def generate_line_crops_and_labels(iam: IAM, split: str, scale_factor=IM...
  function save_images_and_labels (line 131) | def save_images_and_labels(crops: Sequence[Image.Image], labels: Sequenc...
  function load_processed_crops_and_labels (line 140) | def load_processed_crops_and_labels(split: str, data_dirname: Path):
  function load_processed_line_crops (line 148) | def load_processed_line_crops(split: str, data_dirname: Path):
  function load_processed_line_labels (line 155) | def load_processed_line_labels(split: str, data_dirname: Path):

FILE: lab05/text_recognizer/data/iam_paragraphs.py
  class IAMParagraphs (line 24) | class IAMParagraphs(BaseDataModule):
    method __init__ (line 27) | def __init__(self, args: argparse.Namespace = None):
    method add_to_argparse (line 41) | def add_to_argparse(parser):
    method prepare_data (line 46) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 75) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 91) | def __repr__(self) -> str:
  function validate_input_and_output_dimensions (line 114) | def validate_input_and_output_dimensions(
  function get_paragraph_crops_and_labels (line 127) | def get_paragraph_crops_and_labels(
  function save_crops_and_labels (line 143) | def save_crops_and_labels(crops: Dict[str, Image.Image], labels: Dict[st...
  function load_processed_crops_and_labels (line 154) | def load_processed_crops_and_labels(split: str) -> Tuple[Sequence[Image....
  function get_dataset_properties (line 167) | def get_dataset_properties() -> dict:
  function _labels_filename (line 188) | def _labels_filename(split: str) -> Path:
  function _crop_filename (line 193) | def _crop_filename(id_: str, split: str) -> Path:
  function _num_lines (line 198) | def _num_lines(label: str) -> int:

FILE: lab05/text_recognizer/data/mnist.py
  class MNIST (line 12) | class MNIST(BaseDataModule):
    method __init__ (line 15) | def __init__(self, args: argparse.Namespace) -> None:
    method prepare_data (line 23) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 28) | def setup(self, stage=None) -> None:

FILE: lab05/text_recognizer/data/sentence_generator.py
  class SentenceGenerator (line 15) | class SentenceGenerator:
    method __init__ (line 18) | def __init__(self, max_length: Optional[int] = None):
    method generate (line 23) | def generate(self, max_length: Optional[int] = None) -> str:
    method _get_end_ind_candidates (line 49) | def _get_end_ind_candidates(self, first_ind: int, start_ind: int, max_...
  function brown_text (line 58) | def brown_text():
  function load_nltk_brown_corpus (line 67) | def load_nltk_brown_corpus():

FILE: lab05/text_recognizer/data/util.py
  class BaseDataset (line 11) | class BaseDataset(torch.utils.data.Dataset):
    method __init__ (line 28) | def __init__(
    method __len__ (line 43) | def __len__(self) -> int:
    method __getitem__ (line 47) | def __getitem__(self, index: int) -> Tuple[Any, Any]:
  function convert_strings_to_labels (line 70) | def convert_strings_to_labels(strings: Sequence[str], mapping: Dict[str,...
  function split_dataset (line 84) | def split_dataset(base_dataset: BaseDataset, fraction: float, seed: int)...
  function resize_image (line 96) | def resize_image(image: Image.Image, scale_factor: int) -> Image.Image:

FILE: lab05/text_recognizer/lit_models/base.py
  class BaseLitModel (line 17) | class BaseLitModel(pl.LightningModule):
    method __init__ (line 22) | def __init__(self, model, args: argparse.Namespace = None):
    method add_to_argparse (line 48) | def add_to_argparse(parser):
    method configure_optimizers (line 56) | def configure_optimizers(self):
    method forward (line 65) | def forward(self, x):
    method predict (line 68) | def predict(self, x):
    method training_step (line 72) | def training_step(self, batch, batch_idx):
    method _run_on_batch (line 84) | def _run_on_batch(self, batch, with_preds=False):
    method validation_step (line 91) | def validation_step(self, batch, batch_idx):
    method test_step (line 103) | def test_step(self, batch, batch_idx):
    method add_on_first_batch (line 110) | def add_on_first_batch(self, metrics, outputs, batch_idx):
    method add_on_logged_batches (line 114) | def add_on_logged_batches(self, metrics, outputs):
    method is_logged_batch (line 118) | def is_logged_batch(self):
  class BaseImageToTextLitModel (line 125) | class BaseImageToTextLitModel(BaseLitModel):  # pylint: disable=too-many...
    method __init__ (line 128) | def __init__(self, model, args: argparse.Namespace = None):

FILE: lab05/text_recognizer/lit_models/metrics.py
  class CharacterErrorRate (line 8) | class CharacterErrorRate(torchmetrics.CharErrorRate):
    method __init__ (line 11) | def __init__(self, ignore_tokens: Sequence[int], *args):
    method update (line 15) | def update(self, preds: torch.Tensor, targets: torch.Tensor):  # type:...
  function test_character_error_rate (line 21) | def test_character_error_rate():

FILE: lab05/text_recognizer/lit_models/transformer.py
  class TransformerLitModel (line 10) | class TransformerLitModel(BaseImageToTextLitModel):
    method __init__ (line 18) | def __init__(self, model, args=None):
    method forward (line 22) | def forward(self, x):
    method teacher_forward (line 25) | def teacher_forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.T...
    method training_step (line 44) | def training_step(self, batch, batch_idx):
    method validation_step (line 59) | def validation_step(self, batch, batch_idx):
    method test_step (line 80) | def test_step(self, batch, batch_idx):
    method map (line 101) | def map(self, ks: Sequence[int], ignore: bool = True) -> str:
    method batchmap (line 108) | def batchmap(self, ks: Sequence[Sequence[int]], ignore=True) -> List[s...
    method get_preds (line 112) | def get_preds(self, logitlikes: torch.Tensor, replace_after_end: bool ...

FILE: lab05/text_recognizer/lit_models/util.py
  function first_appearance (line 6) | def first_appearance(x: torch.Tensor, element: Union[int, float], dim: i...
  function replace_after (line 47) | def replace_after(x: torch.Tensor, element: Union[int, float], replace: ...

FILE: lab05/text_recognizer/models/cnn.py
  class ConvBlock (line 15) | class ConvBlock(nn.Module):
    method __init__ (line 20) | def __init__(self, input_channels: int, output_channels: int) -> None:
    method forward (line 25) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class CNN (line 43) | class CNN(nn.Module):
    method __init__ (line 46) | def __init__(self, data_config: Dict[str, Any], args: argparse.Namespa...
    method forward (line 75) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method add_to_argparse (line 101) | def add_to_argparse(parser):

FILE: lab05/text_recognizer/models/line_cnn.py
  class ConvBlock (line 21) | class ConvBlock(nn.Module):
    method __init__ (line 26) | def __init__(
    method forward (line 38) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class LineCNN (line 56) | class LineCNN(nn.Module):
    method __init__ (line 61) | def __init__(
    method _init_weights (line 100) | def _init_weights(self):
    method forward (line 119) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method add_to_argparse (line 146) | def add_to_argparse(parser):

FILE: lab05/text_recognizer/models/line_cnn_simple.py
  class LineCNNSimple (line 16) | class LineCNNSimple(nn.Module):
    method __init__ (line 19) | def __init__(
    method forward (line 38) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method add_to_argparse (line 73) | def add_to_argparse(parser):

FILE: lab05/text_recognizer/models/line_cnn_transformer.py
  class LineCNNTransformer (line 20) | class LineCNNTransformer(nn.Module):
    method __init__ (line 23) | def __init__(
    method init_weights (line 65) | def init_weights(self):
    method encode (line 71) | def encode(self, x: torch.Tensor) -> torch.Tensor:
    method decode (line 90) | def decode(self, x, y):
    method forward (line 117) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method add_to_argparse (line 150) | def add_to_argparse(parser):

FILE: lab05/text_recognizer/models/mlp.py
  class MLP (line 14) | class MLP(nn.Module):
    method __init__ (line 17) | def __init__(
    method forward (line 39) | def forward(self, x):
    method add_to_argparse (line 51) | def add_to_argparse(parser):

FILE: lab05/text_recognizer/models/resnet_transformer.py
  class ResnetTransformer (line 21) | class ResnetTransformer(nn.Module):
    method __init__ (line 24) | def __init__(
    method forward (line 75) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method init_weights (line 111) | def init_weights(self):
    method encode (line 123) | def encode(self, x: torch.Tensor) -> torch.Tensor:
    method decode (line 147) | def decode(self, x, y):
    method add_to_argparse (line 179) | def add_to_argparse(parser):

FILE: lab05/text_recognizer/models/transformer_util.py
  class PositionalEncodingImage (line 9) | class PositionalEncodingImage(nn.Module):
    method __init__ (line 16) | def __init__(self, d_model: int, max_h: int = 2000, max_w: int = 2000,...
    method make_pe (line 26) | def make_pe(d_model: int, max_h: int, max_w: int) -> torch.Tensor:
    method forward (line 36) | def forward(self, x: Tensor) -> Tensor:
  class PositionalEncoding (line 44) | class PositionalEncoding(torch.nn.Module):
    method __init__ (line 47) | def __init__(self, d_model: int, dropout: float = 0.1, max_len: int = ...
    method make_pe (line 56) | def make_pe(d_model: int, max_len: int) -> torch.Tensor:
    method forward (line 65) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  function generate_square_subsequent_mask (line 72) | def generate_square_subsequent_mask(size: int) -> torch.Tensor:

FILE: lab05/text_recognizer/stems/image.py
  class ImageStem (line 5) | class ImageStem:
    method __init__ (line 23) | def __init__(self):
    method __call__ (line 28) | def __call__(self, img):
  class MNISTStem (line 38) | class MNISTStem(ImageStem):
    method __init__ (line 41) | def __init__(self):

FILE: lab05/text_recognizer/stems/line.py
  class LineStem (line 11) | class LineStem(ImageStem):
    method __init__ (line 14) | def __init__(self, augment=False, color_jitter_kwargs=None, random_aff...
  class IAMLineStem (line 37) | class IAMLineStem(ImageStem):
    method __init__ (line 40) | def __init__(self, augment=False, color_jitter_kwargs=None, random_aff...

FILE: lab05/text_recognizer/stems/paragraph.py
  class ParagraphStem (line 14) | class ParagraphStem(ImageStem):
    method __init__ (line 17) | def __init__(

FILE: lab05/text_recognizer/tests/test_callback_utils.py
  function test_check_and_warn_simple (line 11) | def test_check_and_warn_simple():
  function test_check_and_warn_tblogger (line 23) | def test_check_and_warn_tblogger():
  function test_check_and_warn_wandblogger (line 32) | def test_check_and_warn_wandblogger():

FILE: lab05/text_recognizer/tests/test_iam.py
  function test_iam_parsed_lines (line 5) | def test_iam_parsed_lines():
  function test_iam_data_splits (line 13) | def test_iam_data_splits():

FILE: lab05/text_recognizer/util.py
  function to_categorical (line 17) | def to_categorical(y, num_classes):
  function read_image_pil (line 22) | def read_image_pil(image_uri: Union[Path, str], grayscale=False) -> Image:
  function read_image_pil_file (line 27) | def read_image_pil_file(image_file, grayscale=False) -> Image:
  function temporary_working_directory (line 37) | def temporary_working_directory(working_dir: Union[str, Path]):
  function compute_sha256 (line 47) | def compute_sha256(filename: Union[Path, str]):
  class TqdmUpTo (line 53) | class TqdmUpTo(tqdm):
    method update_to (line 56) | def update_to(self, blocks=1, bsize=1, tsize=None):
  function download_url (line 72) | def download_url(url, filename):
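
The download helpers in `util.py` pair `download_url` with `compute_sha256`, which the data modules use to verify raw downloads against the checksum recorded in each dataset's `metadata.toml`. A plausible stdlib-only sketch (the chunked read is an assumption; the real function may read the file in one pass):

```python
import hashlib
from pathlib import Path
from typing import Union


def compute_sha256(filename: Union[Path, str]) -> str:
    """Return the SHA-256 hex digest of a file, reading it in chunks

    so that large dataset archives don't need to fit in memory."""
    sha = hashlib.sha256()
    with open(filename, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha.update(chunk)
    return sha.hexdigest()
```

Comparing this digest to the expected value catches both corrupted downloads and upstream files that changed silently.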

FILE: lab05/training/run_experiment.py
  function _setup_parser (line 21) | def _setup_parser():
  function _ensure_logging_dir (line 87) | def _ensure_logging_dir(experiment_dir):
  function main (line 92) | def main():

FILE: lab05/training/util.py
  function import_class (line 9) | def import_class(module_and_class_name: str) -> type:
  function setup_data_and_model_from_args (line 17) | def setup_data_and_model_from_args(args: argparse.Namespace):
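
`import_class` is what lets `run_experiment.py` accept `--model_class` and `--data_class` as strings on the command line and resolve them to real classes. A likely-faithful sketch using only `importlib`:

```python
import importlib


def import_class(module_and_class_name: str) -> type:
    """Import a class from a dotted string,

    e.g. 'text_recognizer.models.MLP' -> the MLP class object."""
    module_name, class_name = module_and_class_name.rsplit(".", 1)
    module = importlib.import_module(module_name)
    return getattr(module, class_name)


# Demonstrated with a stdlib class, since text_recognizer may not be installed:
OrderedDict = import_class("collections.OrderedDict")
```

`setup_data_and_model_from_args` presumably calls this twice, once per class argument, then instantiates each class with the parsed `argparse.Namespace`.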

FILE: lab06/text_recognizer/callbacks/imtotext.py
  class ImageToTextTableLogger (line 14) | class ImageToTextTableLogger(pl.Callback):
    method __init__ (line 17) | def __init__(self, max_images_to_log=32, on_train=True):
    method on_train_batch_end (line 24) | def on_train_batch_end(self, trainer, module, output, batch, batch_idx):
    method on_validation_batch_end (line 33) | def on_validation_batch_end(self, trainer, module, output, batch, batc...
    method _log_image_text_table (line 40) | def _log_image_text_table(self, trainer, output, batch, key):
    method has_metrics (line 55) | def has_metrics(self, output):
  class ImageToTextCaptionLogger (line 59) | class ImageToTextCaptionLogger(pl.Callback):
    method __init__ (line 62) | def __init__(self, max_images_to_log=32, on_train=True):
    method on_train_batch_end (line 69) | def on_train_batch_end(self, trainer, module, output, batch, batch_idx):
    method on_validation_batch_end (line 77) | def on_validation_batch_end(self, trainer, module, output, batch, batc...
    method on_test_batch_end (line 85) | def on_test_batch_end(self, trainer, module, output, batch, batch_idx,...
    method _log_image_text_caption (line 92) | def _log_image_text_caption(self, trainer, output, batch, key):
    method has_metrics (line 102) | def has_metrics(self, output):

FILE: lab06/text_recognizer/callbacks/model.py
  class ModelSizeLogger (line 19) | class ModelSizeLogger(pl.Callback):
    method __init__ (line 22) | def __init__(self, print_size=True):
    method on_fit_start (line 27) | def on_fit_start(self, trainer, module):
    method _run (line 30) | def _run(self, trainer, module):
    method get_model_disksize (line 43) | def get_model_disksize(module):
  class GraphLogger (line 51) | class GraphLogger(pl.Callback):
    method __init__ (line 54) | def __init__(self, output_key="logits"):
    method on_train_batch_end (line 62) | def on_train_batch_end(self, trainer, module, outputs, batch, batch_id...
    method log_graph (line 72) | def log_graph(trainer, module, outputs):
  function count_params (line 84) | def count_params(module):

FILE: lab06/text_recognizer/callbacks/optim.py
  class LearningRateMonitor (line 6) | class LearningRateMonitor(pl.callbacks.LearningRateMonitor):
    method _add_prefix (line 13) | def _add_prefix(self, *args, **kwargs) -> str:

FILE: lab06/text_recognizer/callbacks/util.py
  function check_and_warn (line 6) | def check_and_warn(logger, attribute, feature):
  function warn_no_attribute (line 12) | def warn_no_attribute(blocked_feature, missing_attribute):

FILE: lab06/text_recognizer/data/base_data_module.py
  function load_and_print_info (line 16) | def load_and_print_info(data_module_class) -> None:
  function _download_raw_dataset (line 27) | def _download_raw_dataset(metadata: Dict, dl_dirname: Path) -> Path:
  class BaseDataModule (line 51) | class BaseDataModule(pl.LightningDataModule):
    method __init__ (line 57) | def __init__(self, args: argparse.Namespace = None) -> None:
    method data_dirname (line 74) | def data_dirname(cls):
    method add_to_argparse (line 78) | def add_to_argparse(parser):
    method config (line 93) | def config(self):
    method prepare_data (line 97) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 104) | def setup(self, stage: Optional[str] = None) -> None:
    method train_dataloader (line 111) | def train_dataloader(self):
    method val_dataloader (line 120) | def val_dataloader(self):
    method test_dataloader (line 129) | def test_dataloader(self):

FILE: lab06/text_recognizer/data/emnist.py
  class EMNIST (line 32) | class EMNIST(BaseDataModule):
    method __init__ (line 43) | def __init__(self, args=None):
    method prepare_data (line 52) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 56) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 71) | def __repr__(self):
  function _download_and_process_emnist (line 85) | def _download_and_process_emnist():
  function _process_raw_dataset (line 91) | def _process_raw_dataset(filename: str, dirname: Path):
  function _sample_to_balance (line 133) | def _sample_to_balance(x, y):
  function _augment_emnist_characters (line 148) | def _augment_emnist_characters(characters: Sequence[str]) -> Sequence[str]:

FILE: lab06/text_recognizer/data/emnist_lines.py
  class EMNISTLines (line 28) | class EMNISTLines(BaseDataModule):
    method __init__ (line 31) | def __init__(
    method add_to_argparse (line 55) | def add_to_argparse(parser):
    method data_filename (line 79) | def data_filename(self):
    method prepare_data (line 85) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 93) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 111) | def __repr__(self) -> str:
    method _generate_data (line 132) | def _generate_data(self, split: str) -> None:
  function get_samples_by_char (line 168) | def get_samples_by_char(samples, labels, mapping):
  function select_letter_samples_for_string (line 175) | def select_letter_samples_for_string(string, samples_by_char, char_shape...
  function construct_image_from_string (line 187) | def construct_image_from_string(
  function create_dataset_of_images (line 202) | def create_dataset_of_images(N, samples_by_char, sentence_generator, min...
  function convert_strings_to_labels (line 212) | def convert_strings_to_labels(

FILE: lab06/text_recognizer/data/fake_images.py
  class FakeImageData (line 15) | class FakeImageData(BaseDataModule):
    method __init__ (line 18) | def __init__(self, args: argparse.Namespace = None):
    method add_to_argparse (line 28) | def add_to_argparse(parser):
    method setup (line 36) | def setup(self, stage: str = None) -> None:

FILE: lab06/text_recognizer/data/iam.py
  class IAM (line 22) | class IAM:
    method __init__ (line 40) | def __init__(self):
    method prepare_data (line 43) | def prepare_data(self):
    method load_image (line 49) | def load_image(self, id: str) -> Image.Image:
    method __repr__ (line 62) | def __repr__(self):
    method all_ids (line 74) | def all_ids(self):
    method ids_by_split (line 79) | def ids_by_split(self):
    method split_by_id (line 83) | def split_by_id(self):
    method train_ids (line 91) | def train_ids(self):
    method test_ids (line 96) | def test_ids(self):
    method xml_filenames (line 101) | def xml_filenames(self) -> List[Path]:
    method validation_ids (line 106) | def validation_ids(self):
    method form_filenames (line 113) | def form_filenames(self) -> List[Path]:
    method xml_filenames_by_id (line 118) | def xml_filenames_by_id(self):
    method form_filenames_by_id (line 123) | def form_filenames_by_id(self):
    method line_strings_by_id (line 128) | def line_strings_by_id(self):
    method line_regions_by_id (line 133) | def line_regions_by_id(self):
    method paragraph_string_by_id (line 138) | def paragraph_string_by_id(self):
    method paragraph_region_by_id (line 143) | def paragraph_region_by_id(self):
  function _extract_raw_dataset (line 156) | def _extract_raw_dataset(filename: Path, dirname: Path) -> None:
  function _get_ids_from_lwitlrt_split_file (line 163) | def _get_ids_from_lwitlrt_split_file(filename: str) -> List[str]:
  function _get_line_strings_from_xml_file (line 172) | def _get_line_strings_from_xml_file(filename: str) -> List[str]:
  function _get_text_from_xml_element (line 178) | def _get_text_from_xml_element(xml_element: Any) -> str:
  function _get_line_regions_from_xml_file (line 183) | def _get_line_regions_from_xml_file(filename: str) -> List[Dict[str, int]]:
  function _get_line_elements_from_xml_file (line 210) | def _get_line_elements_from_xml_file(filename: str) -> List[Any]:
  function _get_region_from_xml_element (line 216) | def _get_region_from_xml_element(xml_elem: Any, xml_path: str) -> Option...

FILE: lab06/text_recognizer/data/iam_lines.py
  class IAMLines (line 24) | class IAMLines(BaseDataModule):
    method __init__ (line 27) | def __init__(self, args: argparse.Namespace = None):
    method add_to_argparse (line 38) | def add_to_argparse(parser):
    method prepare_data (line 43) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 64) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 91) | def __repr__(self) -> str:
  function generate_line_crops_and_labels (line 114) | def generate_line_crops_and_labels(iam: IAM, split: str, scale_factor=IM...
  function save_images_and_labels (line 131) | def save_images_and_labels(crops: Sequence[Image.Image], labels: Sequenc...
  function load_processed_crops_and_labels (line 140) | def load_processed_crops_and_labels(split: str, data_dirname: Path):
  function load_processed_line_crops (line 148) | def load_processed_line_crops(split: str, data_dirname: Path):
  function load_processed_line_labels (line 155) | def load_processed_line_labels(split: str, data_dirname: Path):

FILE: lab06/text_recognizer/data/iam_original_and_synthetic_paragraphs.py
  class IAMOriginalAndSyntheticParagraphs (line 11) | class IAMOriginalAndSyntheticParagraphs(BaseDataModule):
    method __init__ (line 14) | def __init__(self, args: argparse.Namespace = None):
    method add_to_argparse (line 26) | def add_to_argparse(parser):
    method prepare_data (line 32) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 36) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 47) | def __repr__(self) -> str:

FILE: lab06/text_recognizer/data/iam_paragraphs.py
  class IAMParagraphs (line 24) | class IAMParagraphs(BaseDataModule):
    method __init__ (line 27) | def __init__(self, args: argparse.Namespace = None):
    method add_to_argparse (line 41) | def add_to_argparse(parser):
    method prepare_data (line 46) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 75) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 91) | def __repr__(self) -> str:
  function validate_input_and_output_dimensions (line 114) | def validate_input_and_output_dimensions(
  function get_paragraph_crops_and_labels (line 127) | def get_paragraph_crops_and_labels(
  function save_crops_and_labels (line 143) | def save_crops_and_labels(crops: Dict[str, Image.Image], labels: Dict[st...
  function load_processed_crops_and_labels (line 154) | def load_processed_crops_and_labels(split: str) -> Tuple[Sequence[Image....
  function get_dataset_properties (line 167) | def get_dataset_properties() -> dict:
  function _labels_filename (line 188) | def _labels_filename(split: str) -> Path:
  function _crop_filename (line 193) | def _crop_filename(id_: str, split: str) -> Path:
  function _num_lines (line 198) | def _num_lines(label: str) -> int:

FILE: lab06/text_recognizer/data/iam_synthetic_paragraphs.py
  class IAMSyntheticParagraphs (line 29) | class IAMSyntheticParagraphs(IAMParagraphs):
    method __init__ (line 32) | def __init__(self, args: argparse.Namespace = None):
    method prepare_data (line 39) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 58) | def setup(self, stage: str = None) -> None:
    method _load_processed_crops_and_labels (line 73) | def _load_processed_crops_and_labels(self):
    method __repr__ (line 79) | def __repr__(self) -> str:
    method add_to_argparse (line 98) | def add_to_argparse(parser):
  class IAMSyntheticParagraphsDataset (line 103) | class IAMSyntheticParagraphsDataset(torch.utils.data.Dataset):
    method __init__ (line 106) | def __init__(
    method __len__ (line 131) | def __len__(self) -> int:
    method _set_seed (line 135) | def _set_seed(self, seed):
    method __getitem__ (line 141) | def __getitem__(self, index: int) -> Tuple[Any, Any]:
  function join_line_crops_to_form_paragraph (line 169) | def join_line_crops_to_form_paragraph(line_crops: Sequence[Image.Image])...

FILE: lab06/text_recognizer/data/mnist.py
  class MNIST (line 12) | class MNIST(BaseDataModule):
    method __init__ (line 15) | def __init__(self, args: argparse.Namespace) -> None:
    method prepare_data (line 23) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 28) | def setup(self, stage=None) -> None:

FILE: lab06/text_recognizer/data/sentence_generator.py
  class SentenceGenerator (line 15) | class SentenceGenerator:
    method __init__ (line 18) | def __init__(self, max_length: Optional[int] = None):
    method generate (line 23) | def generate(self, max_length: Optional[int] = None) -> str:
    method _get_end_ind_candidates (line 49) | def _get_end_ind_candidates(self, first_ind: int, start_ind: int, max_...
  function brown_text (line 58) | def brown_text():
  function load_nltk_brown_corpus (line 67) | def load_nltk_brown_corpus():

FILE: lab06/text_recognizer/data/util.py
  class BaseDataset (line 11) | class BaseDataset(torch.utils.data.Dataset):
    method __init__ (line 28) | def __init__(
    method __len__ (line 43) | def __len__(self) -> int:
    method __getitem__ (line 47) | def __getitem__(self, index: int) -> Tuple[Any, Any]:
  function convert_strings_to_labels (line 70) | def convert_strings_to_labels(strings: Sequence[str], mapping: Dict[str,...
  function split_dataset (line 84) | def split_dataset(base_dataset: BaseDataset, fraction: float, seed: int)...
  function resize_image (line 96) | def resize_image(image: Image.Image, scale_factor: int) -> Image.Image:
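
`convert_strings_to_labels` is the bridge between raw transcription strings and the fixed-length integer sequences the sequence models train on. A list-based sketch of the idea, assuming the repo's usual special tokens `<S>` (start), `<E>` (end), and `<P>` (padding); the real function returns a `torch.Tensor` rather than nested lists:

```python
from typing import Dict, List, Sequence


def convert_strings_to_labels(
    strings: Sequence[str], mapping: Dict[str, int], length: int
) -> List[List[int]]:
    """Map each string to <S> c1 ... cN <E>, padded to `length` with <P>,

    then convert every token to its integer id via `mapping`."""
    labels = []
    for string in strings:
        tokens = ["<S>", *string, "<E>"]
        tokens += ["<P>"] * (length - len(tokens))
        labels.append([mapping[t] for t in tokens])
    return labels


mapping = {"<S>": 0, "<E>": 1, "<P>": 2, "a": 3, "b": 4}
labels = convert_strings_to_labels(["ab"], mapping, length=6)
# → [[0, 3, 4, 1, 2, 2]]
```

Padding every sequence to a common `length` is what allows strings of different lengths to be stacked into one batch tensor.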

FILE: lab06/text_recognizer/lit_models/base.py
  class BaseLitModel (line 17) | class BaseLitModel(pl.LightningModule):
    method __init__ (line 22) | def __init__(self, model, args: argparse.Namespace = None):
    method add_to_argparse (line 48) | def add_to_argparse(parser):
    method configure_optimizers (line 56) | def configure_optimizers(self):
    method forward (line 65) | def forward(self, x):
    method predict (line 68) | def predict(self, x):
    method training_step (line 72) | def training_step(self, batch, batch_idx):
    method _run_on_batch (line 84) | def _run_on_batch(self, batch, with_preds=False):
    method validation_step (line 91) | def validation_step(self, batch, batch_idx):
    method test_step (line 103) | def test_step(self, batch, batch_idx):
    method add_on_first_batch (line 110) | def add_on_first_batch(self, metrics, outputs, batch_idx):
    method add_on_logged_batches (line 114) | def add_on_logged_batches(self, metrics, outputs):
    method is_logged_batch (line 118) | def is_logged_batch(self):
  class BaseImageToTextLitModel (line 125) | class BaseImageToTextLitModel(BaseLitModel):  # pylint: disable=too-many...
    method __init__ (line 128) | def __init__(self, model, args: argparse.Namespace = None):

FILE: lab06/text_recognizer/lit_models/metrics.py
  class CharacterErrorRate (line 8) | class CharacterErrorRate(torchmetrics.CharErrorRate):
    method __init__ (line 11) | def __init__(self, ignore_tokens: Sequence[int], *args):
    method update (line 15) | def update(self, preds: torch.Tensor, targets: torch.Tensor):  # type:...
  function test_character_error_rate (line 21) | def test_character_error_rate():

FILE: lab06/text_recognizer/lit_models/transformer.py
  class TransformerLitModel (line 10) | class TransformerLitModel(BaseImageToTextLitModel):
    method __init__ (line 18) | def __init__(self, model, args=None):
    method forward (line 22) | def forward(self, x):
    method teacher_forward (line 25) | def teacher_forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.T...
    method training_step (line 44) | def training_step(self, batch, batch_idx):
    method validation_step (line 59) | def validation_step(self, batch, batch_idx):
    method test_step (line 80) | def test_step(self, batch, batch_idx):
    method map (line 101) | def map(self, ks: Sequence[int], ignore: bool = True) -> str:
    method batchmap (line 108) | def batchmap(self, ks: Sequence[Sequence[int]], ignore=True) -> List[s...
    method get_preds (line 112) | def get_preds(self, logitlikes: torch.Tensor, replace_after_end: bool ...

FILE: lab06/text_recognizer/lit_models/util.py
  function first_appearance (line 6) | def first_appearance(x: torch.Tensor, element: Union[int, float], dim: i...
  function replace_after (line 47) | def replace_after(x: torch.Tensor, element: Union[int, float], replace: ...

FILE: lab06/text_recognizer/models/cnn.py
  class ConvBlock (line 15) | class ConvBlock(nn.Module):
    method __init__ (line 20) | def __init__(self, input_channels: int, output_channels: int) -> None:
    method forward (line 25) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class CNN (line 43) | class CNN(nn.Module):
    method __init__ (line 46) | def __init__(self, data_config: Dict[str, Any], args: argparse.Namespa...
    method forward (line 75) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method add_to_argparse (line 101) | def add_to_argparse(parser):

FILE: lab06/text_recognizer/models/line_cnn.py
  class ConvBlock (line 21) | class ConvBlock(nn.Module):
    method __init__ (line 26) | def __init__(
    method forward (line 38) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class LineCNN (line 56) | class LineCNN(nn.Module):
    method __init__ (line 61) | def __init__(
    method _init_weights (line 100) | def _init_weights(self):
    method forward (line 119) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method add_to_argparse (line 146) | def add_to_argparse(parser):

FILE: lab06/text_recognizer/models/line_cnn_simple.py
  class LineCNNSimple (line 16) | class LineCNNSimple(nn.Module):
    method __init__ (line 19) | def __init__(
    method forward (line 38) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method add_to_argparse (line 73) | def add_to_argparse(parser):

FILE: lab06/text_recognizer/models/line_cnn_transformer.py
  class LineCNNTransformer (line 20) | class LineCNNTransformer(nn.Module):
    method __init__ (line 23) | def __init__(
    method init_weights (line 65) | def init_weights(self):
    method encode (line 71) | def encode(self, x: torch.Tensor) -> torch.Tensor:
    method decode (line 90) | def decode(self, x, y):
    method forward (line 117) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method add_to_argparse (line 150) | def add_to_argparse(parser):

FILE: lab06/text_recognizer/models/mlp.py
  class MLP (line 14) | class MLP(nn.Module):
    method __init__ (line 17) | def __init__(
    method forward (line 39) | def forward(self, x):
    method add_to_argparse (line 51) | def add_to_argparse(parser):

FILE: lab06/text_recognizer/models/resnet_transformer.py
  class ResnetTransformer (line 21) | class ResnetTransformer(nn.Module):
    method __init__ (line 24) | def __init__(
    method forward (line 75) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method init_weights (line 111) | def init_weights(self):
    method encode (line 123) | def encode(self, x: torch.Tensor) -> torch.Tensor:
    method decode (line 147) | def decode(self, x, y):
    method add_to_argparse (line 179) | def add_to_argparse(parser):

FILE: lab06/text_recognizer/models/transformer_util.py
  class PositionalEncodingImage (line 9) | class PositionalEncodingImage(nn.Module):
    method __init__ (line 16) | def __init__(self, d_model: int, max_h: int = 2000, max_w: int = 2000,...
    method make_pe (line 26) | def make_pe(d_model: int, max_h: int, max_w: int) -> torch.Tensor:
    method forward (line 36) | def forward(self, x: Tensor) -> Tensor:
  class PositionalEncoding (line 44) | class PositionalEncoding(torch.nn.Module):
    method __init__ (line 47) | def __init__(self, d_model: int, dropout: float = 0.1, max_len: int = ...
    method make_pe (line 56) | def make_pe(d_model: int, max_len: int) -> torch.Tensor:
    method forward (line 65) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  function generate_square_subsequent_mask (line 72) | def generate_square_subsequent_mask(size: int) -> torch.Tensor:

FILE: lab06/text_recognizer/stems/image.py
  class ImageStem (line 5) | class ImageStem:
    method __init__ (line 23) | def __init__(self):
    method __call__ (line 28) | def __call__(self, img):
  class MNISTStem (line 38) | class MNISTStem(ImageStem):
    method __init__ (line 41) | def __init__(self):

FILE: lab06/text_recognizer/stems/line.py
  class LineStem (line 11) | class LineStem(ImageStem):
    method __init__ (line 14) | def __init__(self, augment=False, color_jitter_kwargs=None, random_aff...
  class IAMLineStem (line 37) | class IAMLineStem(ImageStem):
    method __init__ (line 40) | def __init__(self, augment=False, color_jitter_kwargs=None, random_aff...

FILE: lab06/text_recognizer/stems/paragraph.py
  class ParagraphStem (line 14) | class ParagraphStem(ImageStem):
    method __init__ (line 17) | def __init__(

FILE: lab06/text_recognizer/tests/test_callback_utils.py
  function test_check_and_warn_simple (line 11) | def test_check_and_warn_simple():
  function test_check_and_warn_tblogger (line 23) | def test_check_and_warn_tblogger():
  function test_check_and_warn_wandblogger (line 32) | def test_check_and_warn_wandblogger():

FILE: lab06/text_recognizer/tests/test_iam.py
  function test_iam_parsed_lines (line 5) | def test_iam_parsed_lines():
  function test_iam_data_splits (line 13) | def test_iam_data_splits():

FILE: lab06/text_recognizer/util.py
  function to_categorical (line 17) | def to_categorical(y, num_classes):
  function read_image_pil (line 22) | def read_image_pil(image_uri: Union[Path, str], grayscale=False) -> Image:
  function read_image_pil_file (line 27) | def read_image_pil_file(image_file, grayscale=False) -> Image:
  function temporary_working_directory (line 37) | def temporary_working_directory(working_dir: Union[str, Path]):
  function compute_sha256 (line 47) | def compute_sha256(filename: Union[Path, str]):
  class TqdmUpTo (line 53) | class TqdmUpTo(tqdm):
    method update_to (line 56) | def update_to(self, blocks=1, bsize=1, tsize=None):
  function download_url (line 72) | def download_url(url, filename):

FILE: lab06/training/run_experiment.py
  function _setup_parser (line 21) | def _setup_parser():
  function _ensure_logging_dir (line 87) | def _ensure_logging_dir(experiment_dir):
  function main (line 92) | def main():

FILE: lab06/training/util.py
  function import_class (line 9) | def import_class(module_and_class_name: str) -> type:
  function setup_data_and_model_from_args (line 17) | def setup_data_and_model_from_args(args: argparse.Namespace):

FILE: lab07/api_serverless/api.py
  function handler (line 12) | def handler(event, _context):
  function _load_image (line 30) | def _load_image(event):
  function _from_string (line 46) | def _from_string(event):

FILE: lab07/app_gradio/app.py
  function main (line 28) | def main(args):
  function make_frontend (line 41) | def make_frontend(
  class PredictorBackend (line 73) | class PredictorBackend:
    method __init__ (line 81) | def __init__(self, url=None):
    method run (line 89) | def run(self, image):
    method _predict_with_metrics (line 94) | def _predict_with_metrics(self, image):
    method _predict_from_endpoint (line 107) | def _predict_from_endpoint(self, image):
    method _log_inference (line 133) | def _log_inference(self, pred, metrics):
  function _load_readme (line 139) | def _load_readme(with_logging=False):
  function _make_parser (line 149) | def _make_parser():

FILE: lab07/app_gradio/tests/test_app.py
  function test_local_run (line 16) | def test_local_run():

FILE: lab07/text_recognizer/callbacks/imtotext.py
  class ImageToTextTableLogger (line 14) | class ImageToTextTableLogger(pl.Callback):
    method __init__ (line 17) | def __init__(self, max_images_to_log=32, on_train=True):
    method on_train_batch_end (line 24) | def on_train_batch_end(self, trainer, module, output, batch, batch_idx):
    method on_validation_batch_end (line 33) | def on_validation_batch_end(self, trainer, module, output, batch, batc...
    method _log_image_text_table (line 40) | def _log_image_text_table(self, trainer, output, batch, key):
    method has_metrics (line 55) | def has_metrics(self, output):
  class ImageToTextCaptionLogger (line 59) | class ImageToTextCaptionLogger(pl.Callback):
    method __init__ (line 62) | def __init__(self, max_images_to_log=32, on_train=True):
    method on_train_batch_end (line 69) | def on_train_batch_end(self, trainer, module, output, batch, batch_idx):
    method on_validation_batch_end (line 77) | def on_validation_batch_end(self, trainer, module, output, batch, batc...
    method on_test_batch_end (line 85) | def on_test_batch_end(self, trainer, module, output, batch, batch_idx,...
    method _log_image_text_caption (line 92) | def _log_image_text_caption(self, trainer, output, batch, key):
    method has_metrics (line 102) | def has_metrics(self, output):

FILE: lab07/text_recognizer/callbacks/model.py
  class ModelSizeLogger (line 19) | class ModelSizeLogger(pl.Callback):
    method __init__ (line 22) | def __init__(self, print_size=True):
    method on_fit_start (line 27) | def on_fit_start(self, trainer, module):
    method _run (line 30) | def _run(self, trainer, module):
    method get_model_disksize (line 43) | def get_model_disksize(module):
  class GraphLogger (line 51) | class GraphLogger(pl.Callback):
    method __init__ (line 54) | def __init__(self, output_key="logits"):
    method on_train_batch_end (line 62) | def on_train_batch_end(self, trainer, module, outputs, batch, batch_id...
    method log_graph (line 72) | def log_graph(trainer, module, outputs):
  function count_params (line 84) | def count_params(module):

FILE: lab07/text_recognizer/callbacks/optim.py
  class LearningRateMonitor (line 6) | class LearningRateMonitor(pl.callbacks.LearningRateMonitor):
    method _add_prefix (line 13) | def _add_prefix(self, *args, **kwargs) -> str:

FILE: lab07/text_recognizer/callbacks/util.py
  function check_and_warn (line 6) | def check_and_warn(logger, attribute, feature):
  function warn_no_attribute (line 12) | def warn_no_attribute(blocked_feature, missing_attribute):

FILE: lab07/text_recognizer/data/base_data_module.py
  function load_and_print_info (line 16) | def load_and_print_info(data_module_class) -> None:
  function _download_raw_dataset (line 27) | def _download_raw_dataset(metadata: Dict, dl_dirname: Path) -> Path:
  class BaseDataModule (line 51) | class BaseDataModule(pl.LightningDataModule):
    method __init__ (line 57) | def __init__(self, args: argparse.Namespace = None) -> None:
    method data_dirname (line 74) | def data_dirname(cls):
    method add_to_argparse (line 78) | def add_to_argparse(parser):
    method config (line 93) | def config(self):
    method prepare_data (line 97) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 104) | def setup(self, stage: Optional[str] = None) -> None:
    method train_dataloader (line 111) | def train_dataloader(self):
    method val_dataloader (line 120) | def val_dataloader(self):
    method test_dataloader (line 129) | def test_dataloader(self):

FILE: lab07/text_recognizer/data/emnist.py
  class EMNIST (line 32) | class EMNIST(BaseDataModule):
    method __init__ (line 43) | def __init__(self, args=None):
    method prepare_data (line 52) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 56) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 71) | def __repr__(self):
  function _download_and_process_emnist (line 85) | def _download_and_process_emnist():
  function _process_raw_dataset (line 91) | def _process_raw_dataset(filename: str, dirname: Path):
  function _sample_to_balance (line 133) | def _sample_to_balance(x, y):
  function _augment_emnist_characters (line 148) | def _augment_emnist_characters(characters: Sequence[str]) -> Sequence[str]:

FILE: lab07/text_recognizer/data/emnist_lines.py
  class EMNISTLines (line 28) | class EMNISTLines(BaseDataModule):
    method __init__ (line 31) | def __init__(
    method add_to_argparse (line 55) | def add_to_argparse(parser):
    method data_filename (line 79) | def data_filename(self):
    method prepare_data (line 85) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 93) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 111) | def __repr__(self) -> str:
    method _generate_data (line 132) | def _generate_data(self, split: str) -> None:
  function get_samples_by_char (line 168) | def get_samples_by_char(samples, labels, mapping):
  function select_letter_samples_for_string (line 175) | def select_letter_samples_for_string(string, samples_by_char, char_shape...
  function construct_image_from_string (line 187) | def construct_image_from_string(
  function create_dataset_of_images (line 202) | def create_dataset_of_images(N, samples_by_char, sentence_generator, min...
  function convert_strings_to_labels (line 212) | def convert_strings_to_labels(

FILE: lab07/text_recognizer/data/fake_images.py
  class FakeImageData (line 15) | class FakeImageData(BaseDataModule):
    method __init__ (line 18) | def __init__(self, args: argparse.Namespace = None):
    method add_to_argparse (line 28) | def add_to_argparse(parser):
    method setup (line 36) | def setup(self, stage: str = None) -> None:

FILE: lab07/text_recognizer/data/iam.py
  class IAM (line 22) | class IAM:
    method __init__ (line 40) | def __init__(self):
    method prepare_data (line 43) | def prepare_data(self):
    method load_image (line 49) | def load_image(self, id: str) -> Image.Image:
    method __repr__ (line 62) | def __repr__(self):
    method all_ids (line 74) | def all_ids(self):
    method ids_by_split (line 79) | def ids_by_split(self):
    method split_by_id (line 83) | def split_by_id(self):
    method train_ids (line 91) | def train_ids(self):
    method test_ids (line 96) | def test_ids(self):
    method xml_filenames (line 101) | def xml_filenames(self) -> List[Path]:
    method validation_ids (line 106) | def validation_ids(self):
    method form_filenames (line 113) | def form_filenames(self) -> List[Path]:
    method xml_filenames_by_id (line 118) | def xml_filenames_by_id(self):
    method form_filenames_by_id (line 123) | def form_filenames_by_id(self):
    method line_strings_by_id (line 128) | def line_strings_by_id(self):
    method line_regions_by_id (line 133) | def line_regions_by_id(self):
    method paragraph_string_by_id (line 138) | def paragraph_string_by_id(self):
    method paragraph_region_by_id (line 143) | def paragraph_region_by_id(self):
  function _extract_raw_dataset (line 156) | def _extract_raw_dataset(filename: Path, dirname: Path) -> None:
  function _get_ids_from_lwitlrt_split_file (line 163) | def _get_ids_from_lwitlrt_split_file(filename: str) -> List[str]:
  function _get_line_strings_from_xml_file (line 172) | def _get_line_strings_from_xml_file(filename: str) -> List[str]:
  function _get_text_from_xml_element (line 178) | def _get_text_from_xml_element(xml_element: Any) -> str:
  function _get_line_regions_from_xml_file (line 183) | def _get_line_regions_from_xml_file(filename: str) -> List[Dict[str, int]]:
  function _get_line_elements_from_xml_file (line 210) | def _get_line_elements_from_xml_file(filename: str) -> List[Any]:
  function _get_region_from_xml_element (line 216) | def _get_region_from_xml_element(xml_elem: Any, xml_path: str) -> Option...

FILE: lab07/text_recognizer/data/iam_lines.py
  class IAMLines (line 24) | class IAMLines(BaseDataModule):
    method __init__ (line 27) | def __init__(self, args: argparse.Namespace = None):
    method add_to_argparse (line 38) | def add_to_argparse(parser):
    method prepare_data (line 43) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 64) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 91) | def __repr__(self) -> str:
  function generate_line_crops_and_labels (line 114) | def generate_line_crops_and_labels(iam: IAM, split: str, scale_factor=IM...
  function save_images_and_labels (line 131) | def save_images_and_labels(crops: Sequence[Image.Image], labels: Sequenc...
  function load_processed_crops_and_labels (line 140) | def load_processed_crops_and_labels(split: str, data_dirname: Path):
  function load_processed_line_crops (line 148) | def load_processed_line_crops(split: str, data_dirname: Path):
  function load_processed_line_labels (line 155) | def load_processed_line_labels(split: str, data_dirname: Path):

FILE: lab07/text_recognizer/data/iam_original_and_synthetic_paragraphs.py
  class IAMOriginalAndSyntheticParagraphs (line 11) | class IAMOriginalAndSyntheticParagraphs(BaseDataModule):
    method __init__ (line 14) | def __init__(self, args: argparse.Namespace = None):
    method add_to_argparse (line 26) | def add_to_argparse(parser):
    method prepare_data (line 32) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 36) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 47) | def __repr__(self) -> str:

FILE: lab07/text_recognizer/data/iam_paragraphs.py
  class IAMParagraphs (line 24) | class IAMParagraphs(BaseDataModule):
    method __init__ (line 27) | def __init__(self, args: argparse.Namespace = None):
    method add_to_argparse (line 41) | def add_to_argparse(parser):
    method prepare_data (line 46) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 75) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 91) | def __repr__(self) -> str:
  function validate_input_and_output_dimensions (line 114) | def validate_input_and_output_dimensions(
  function get_paragraph_crops_and_labels (line 127) | def get_paragraph_crops_and_labels(
  function save_crops_and_labels (line 143) | def save_crops_and_labels(crops: Dict[str, Image.Image], labels: Dict[st...
  function load_processed_crops_and_labels (line 154) | def load_processed_crops_and_labels(split: str) -> Tuple[Sequence[Image....
  function get_dataset_properties (line 167) | def get_dataset_properties() -> dict:
  function _labels_filename (line 188) | def _labels_filename(split: str) -> Path:
  function _crop_filename (line 193) | def _crop_filename(id_: str, split: str) -> Path:
  function _num_lines (line 198) | def _num_lines(label: str) -> int:

FILE: lab07/text_recognizer/data/iam_synthetic_paragraphs.py
  class IAMSyntheticParagraphs (line 29) | class IAMSyntheticParagraphs(IAMParagraphs):
    method __init__ (line 32) | def __init__(self, args: argparse.Namespace = None):
    method prepare_data (line 39) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 58) | def setup(self, stage: str = None) -> None:
    method _load_processed_crops_and_labels (line 73) | def _load_processed_crops_and_labels(self):
    method __repr__ (line 79) | def __repr__(self) -> str:
    method add_to_argparse (line 98) | def add_to_argparse(parser):
  class IAMSyntheticParagraphsDataset (line 103) | class IAMSyntheticParagraphsDataset(torch.utils.data.Dataset):
    method __init__ (line 106) | def __init__(
    method __len__ (line 131) | def __len__(self) -> int:
    method _set_seed (line 135) | def _set_seed(self, seed):
    method __getitem__ (line 141) | def __getitem__(self, index: int) -> Tuple[Any, Any]:
  function join_line_crops_to_form_paragraph (line 169) | def join_line_crops_to_form_paragraph(line_crops: Sequence[Image.Image])...

FILE: lab07/text_recognizer/data/mnist.py
  class MNIST (line 12) | class MNIST(BaseDataModule):
    method __init__ (line 15) | def __init__(self, args: argparse.Namespace) -> None:
    method prepare_data (line 23) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 28) | def setup(self, stage=None) -> None:

FILE: lab07/text_recognizer/data/sentence_generator.py
  class SentenceGenerator (line 15) | class SentenceGenerator:
    method __init__ (line 18) | def __init__(self, max_length: Optional[int] = None):
    method generate (line 23) | def generate(self, max_length: Optional[int] = None) -> str:
    method _get_end_ind_candidates (line 49) | def _get_end_ind_candidates(self, first_ind: int, start_ind: int, max_...
  function brown_text (line 58) | def brown_text():
  function load_nltk_brown_corpus (line 67) | def load_nltk_brown_corpus():

FILE: lab07/text_recognizer/data/util.py
  class BaseDataset (line 11) | class BaseDataset(torch.utils.data.Dataset):
    method __init__ (line 28) | def __init__(
    method __len__ (line 43) | def __len__(self) -> int:
    method __getitem__ (line 47) | def __getitem__(self, index: int) -> Tuple[Any, Any]:
  function convert_strings_to_labels (line 70) | def convert_strings_to_labels(strings: Sequence[str], mapping: Dict[str,...
  function split_dataset (line 84) | def split_dataset(base_dataset: BaseDataset, fraction: float, seed: int)...
  function resize_image (line 96) | def resize_image(image: Image.Image, scale_factor: int) -> Image.Image:
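The `convert_strings_to_labels` helper listed above (it also appears in `emnist_lines.py`) maps strings to fixed-length integer label rows for sequence models. A minimal pure-Python sketch of the likely behavior — the start/end/padding token names (`<S>`, `<E>`, `<P>`) are assumptions for illustration, not taken from the repository:

```python
from typing import Dict, List, Sequence


def convert_strings_to_labels(
    strings: Sequence[str], mapping: Dict[str, int], length: int
) -> List[List[int]]:
    """Encode each string as a fixed-length row of integer labels.

    Assumed convention: wrap each string in start/end tokens and pad
    with a padding token out to `length`.
    """
    labels = []
    for s in strings:
        tokens = ["<S>", *s, "<E>"]
        tokens += ["<P>"] * (length - len(tokens))
        labels.append([mapping[t] for t in tokens])
    return labels


# Hypothetical mapping for demonstration only.
mapping = {"<S>": 1, "<E>": 2, "<P>": 3, "a": 4, "b": 5}
print(convert_strings_to_labels(["ab"], mapping, 6))  # [[1, 4, 5, 2, 3, 3]]
```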

FILE: lab07/text_recognizer/lit_models/base.py
  class BaseLitModel (line 17) | class BaseLitModel(pl.LightningModule):
    method __init__ (line 22) | def __init__(self, model, args: argparse.Namespace = None):
    method add_to_argparse (line 48) | def add_to_argparse(parser):
    method configure_optimizers (line 56) | def configure_optimizers(self):
    method forward (line 65) | def forward(self, x):
    method predict (line 68) | def predict(self, x):
    method training_step (line 72) | def training_step(self, batch, batch_idx):
    method _run_on_batch (line 84) | def _run_on_batch(self, batch, with_preds=False):
    method validation_step (line 91) | def validation_step(self, batch, batch_idx):
    method test_step (line 103) | def test_step(self, batch, batch_idx):
    method add_on_first_batch (line 110) | def add_on_first_batch(self, metrics, outputs, batch_idx):
    method add_on_logged_batches (line 114) | def add_on_logged_batches(self, metrics, outputs):
    method is_logged_batch (line 118) | def is_logged_batch(self):
  class BaseImageToTextLitModel (line 125) | class BaseImageToTextLitModel(BaseLitModel):  # pylint: disable=too-many...
    method __init__ (line 128) | def __init__(self, model, args: argparse.Namespace = None):

FILE: lab07/text_recognizer/lit_models/metrics.py
  class CharacterErrorRate (line 8) | class CharacterErrorRate(torchmetrics.CharErrorRate):
    method __init__ (line 11) | def __init__(self, ignore_tokens: Sequence[int], *args):
    method update (line 15) | def update(self, preds: torch.Tensor, targets: torch.Tensor):  # type:...
  function test_character_error_rate (line 21) | def test_character_error_rate():

FILE: lab07/text_recognizer/lit_models/transformer.py
  class TransformerLitModel (line 10) | class TransformerLitModel(BaseImageToTextLitModel):
    method __init__ (line 18) | def __init__(self, model, args=None):
    method forward (line 22) | def forward(self, x):
    method teacher_forward (line 25) | def teacher_forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.T...
    method training_step (line 44) | def training_step(self, batch, batch_idx):
    method validation_step (line 59) | def validation_step(self, batch, batch_idx):
    method test_step (line 80) | def test_step(self, batch, batch_idx):
    method map (line 101) | def map(self, ks: Sequence[int], ignore: bool = True) -> str:
    method batchmap (line 108) | def batchmap(self, ks: Sequence[Sequence[int]], ignore=True) -> List[s...
    method get_preds (line 112) | def get_preds(self, logitlikes: torch.Tensor, replace_after_end: bool ...

FILE: lab07/text_recognizer/lit_models/util.py
  function first_appearance (line 6) | def first_appearance(x: torch.Tensor, element: Union[int, float], dim: i...
  function replace_after (line 47) | def replace_after(x: torch.Tensor, element: Union[int, float], replace: ...
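The two utilities in `lit_models/util.py` operate on `torch.Tensor`s in the repository; the semantics their names suggest can be sketched in pure Python. `first_appearance` finds where a given element (e.g. an end-of-sequence token) first occurs, and `replace_after` blanks out everything after it — a common post-processing step for decoder outputs. This is an interpretation from the names and signatures, not the repo's implementation:

```python
from typing import List, Union

Number = Union[int, float]


def first_appearance(xs: List[Number], element: Number) -> int:
    """Index of the first occurrence of `element`, or len(xs) if absent."""
    for i, x in enumerate(xs):
        if x == element:
            return i
    return len(xs)


def replace_after(xs: List[Number], element: Number, replace: Number) -> List[Number]:
    """Keep entries up to and including the first `element`; overwrite the rest."""
    cut = first_appearance(xs, element)
    return xs[: cut + 1] + [replace] * (len(xs) - cut - 1)


print(replace_after([5, 9, 2, 7, 7], 2, 0))  # [5, 9, 2, 0, 0]
```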

FILE: lab07/text_recognizer/models/cnn.py
  class ConvBlock (line 15) | class ConvBlock(nn.Module):
    method __init__ (line 20) | def __init__(self, input_channels: int, output_channels: int) -> None:
    method forward (line 25) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class CNN (line 43) | class CNN(nn.Module):
    method __init__ (line 46) | def __init__(self, data_config: Dict[str, Any], args: argparse.Namespa...
    method forward (line 75) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method add_to_argparse (line 101) | def add_to_argparse(parser):

FILE: lab07/text_recognizer/models/line_cnn.py
  class ConvBlock (line 21) | class ConvBlock(nn.Module):
    method __init__ (line 26) | def __init__(
    method forward (line 38) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class LineCNN (line 56) | class LineCNN(nn.Module):
    method __init__ (line 61) | def __init__(
    method _init_weights (line 100) | def _init_weights(self):
    method forward (line 119) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method add_to_argparse (line 146) | def add_to_argparse(parser):

FILE: lab07/text_recognizer/models/line_cnn_simple.py
  class LineCNNSimple (line 16) | class LineCNNSimple(nn.Module):
    method __init__ (line 19) | def __init__(
    method forward (line 38) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method add_to_argparse (line 73) | def add_to_argparse(parser):

FILE: lab07/text_recognizer/models/line_cnn_transformer.py
  class LineCNNTransformer (line 20) | class LineCNNTransformer(nn.Module):
    method __init__ (line 23) | def __init__(
    method init_weights (line 65) | def init_weights(self):
    method encode (line 71) | def encode(self, x: torch.Tensor) -> torch.Tensor:
    method decode (line 90) | def decode(self, x, y):
    method forward (line 117) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method add_to_argparse (line 150) | def add_to_argparse(parser):

FILE: lab07/text_recognizer/models/mlp.py
  class MLP (line 14) | class MLP(nn.Module):
    method __init__ (line 17) | def __init__(
    method forward (line 39) | def forward(self, x):
    method add_to_argparse (line 51) | def add_to_argparse(parser):

FILE: lab07/text_recognizer/models/resnet_transformer.py
  class ResnetTransformer (line 21) | class ResnetTransformer(nn.Module):
    method __init__ (line 24) | def __init__(
    method forward (line 75) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method init_weights (line 111) | def init_weights(self):
    method encode (line 123) | def encode(self, x: torch.Tensor) -> torch.Tensor:
    method decode (line 147) | def decode(self, x, y):
    method add_to_argparse (line 179) | def add_to_argparse(parser):

FILE: lab07/text_recognizer/models/transformer_util.py
  class PositionalEncodingImage (line 9) | class PositionalEncodingImage(nn.Module):
    method __init__ (line 16) | def __init__(self, d_model: int, max_h: int = 2000, max_w: int = 2000,...
    method make_pe (line 26) | def make_pe(d_model: int, max_h: int, max_w: int) -> torch.Tensor:
    method forward (line 36) | def forward(self, x: Tensor) -> Tensor:
  class PositionalEncoding (line 44) | class PositionalEncoding(torch.nn.Module):
    method __init__ (line 47) | def __init__(self, d_model: int, dropout: float = 0.1, max_len: int = ...
    method make_pe (line 56) | def make_pe(d_model: int, max_len: int) -> torch.Tensor:
    method forward (line 65) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  function generate_square_subsequent_mask (line 72) | def generate_square_subsequent_mask(size: int) -> torch.Tensor:
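`generate_square_subsequent_mask` is the standard causal-attention mask used with `nn.Transformer`-style decoders: position `i` may attend only to positions `<= i`. The repository version presumably returns a `torch.Tensor`; this dependency-free sketch just shows the pattern (allowed entries are `0.0`, future positions are `-inf` so they vanish after the softmax):

```python
import math
from typing import List


def generate_square_subsequent_mask(size: int) -> List[List[float]]:
    """Causal ("subsequent") attention mask: row i allows columns j <= i."""
    return [
        [0.0 if j <= i else -math.inf for j in range(size)]
        for i in range(size)
    ]


for row in generate_square_subsequent_mask(3):
    print(row)
# [0.0, -inf, -inf]
# [0.0, 0.0, -inf]
# [0.0, 0.0, 0.0]
```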

FILE: lab07/text_recognizer/paragraph_text_recognizer.py
  class ParagraphTextRecognizer (line 26) | class ParagraphTextRecognizer:
    method __init__ (line 29) | def __init__(self, model_path=None):
    method predict (line 38) | def predict(self, image: Union[str, Path, Image.Image]) -> str:
  function convert_y_label_to_string (line 51) | def convert_y_label_to_string(y: torch.Tensor, mapping: Sequence[str], i...
  function main (line 55) | def main():

FILE: lab07/text_recognizer/stems/image.py
  class ImageStem (line 5) | class ImageStem:
    method __init__ (line 23) | def __init__(self):
    method __call__ (line 28) | def __call__(self, img):
  class MNISTStem (line 38) | class MNISTStem(ImageStem):
    method __init__ (line 41) | def __init__(self):

FILE: lab07/text_recognizer/stems/line.py
  class LineStem (line 11) | class LineStem(ImageStem):
    method __init__ (line 14) | def __init__(self, augment=False, color_jitter_kwargs=None, random_aff...
  class IAMLineStem (line 37) | class IAMLineStem(ImageStem):
    method __init__ (line 40) | def __init__(self, augment=False, color_jitter_kwargs=None, random_aff...

FILE: lab07/text_recognizer/stems/paragraph.py
  class ParagraphStem (line 14) | class ParagraphStem(ImageStem):
    method __init__ (line 17) | def __init__(

FILE: lab07/text_recognizer/tests/test_callback_utils.py
  function test_check_and_warn_simple (line 11) | def test_check_and_warn_simple():
  function test_check_and_warn_tblogger (line 23) | def test_check_and_warn_tblogger():
  function test_check_and_warn_wandblogger (line 32) | def test_check_and_warn_wandblogger():

FILE: lab07/text_recognizer/tests/test_iam.py
  function test_iam_parsed_lines (line 5) | def test_iam_parsed_lines():
  function test_iam_data_splits (line 13) | def test_iam_data_splits():

FILE: lab07/text_recognizer/util.py
  function to_categorical (line 17) | def to_categorical(y, num_classes):
  function read_image_pil (line 22) | def read_image_pil(image_uri: Union[Path, str], grayscale=False) -> Image:
  function read_image_pil_file (line 27) | def read_image_pil_file(image_file, grayscale=False) -> Image:
  function temporary_working_directory (line 37) | def temporary_working_directory(working_dir: Union[str, Path]):
  function encode_b64_image (line 47) | def encode_b64_image(image, format="png"):
  function compute_sha256 (line 55) | def compute_sha256(filename: Union[Path, str]):
  class TqdmUpTo (line 61) | class TqdmUpTo(tqdm):
    method update_to (line 64) | def update_to(self, blocks=1, bsize=1, tsize=None):
  function download_url (line 80) | def download_url(url, filename):
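`compute_sha256` in `util.py` is used alongside `download_url` to verify downloaded raw datasets against the checksums recorded in `metadata.toml`. A minimal standalone sketch matching the listed signature — the chunked-read detail is an assumption, chosen so large archives need not fit in memory:

```python
import hashlib
from pathlib import Path
from typing import Union


def compute_sha256(filename: Union[Path, str]) -> str:
    """Return the hex SHA-256 digest of a file, reading it in 64 KiB chunks."""
    sha = hashlib.sha256()
    with open(filename, "rb") as f:
        for block in iter(lambda: f.read(1 << 16), b""):
            sha.update(block)
    return sha.hexdigest()
```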

FILE: lab07/training/cleanup_artifacts.py
  function _setup_parser (line 30) | def _setup_parser():
  function main (line 87) | def main(args):
  function clean_run_artifacts (line 101) | def clean_run_artifacts(run, selector, protect_aliases=True, verbose=Fal...
  function remove_artifact (line 108) | def remove_artifact(artifact, protect_aliases, verbose=False, dryrun=True):
  function _get_runs (line 117) | def _get_runs(project_path, run_ids=None, run_name_res=None, verbose=Fal...
  function _get_run_by_id (line 134) | def _get_run_by_id(project_path, run_id, verbose=False):
  function _get_runs_by_name_re (line 142) | def _get_runs_by_name_re(project_path, run_name_re, verbose=False):
  function _get_selector_from (line 152) | def _get_selector_from(args, verbose=False):
  function _get_entity_from (line 173) | def _get_entity_from(args, verbose=False):

FILE: lab07/training/run_experiment.py
  function _setup_parser (line 21) | def _setup_parser():
  function _ensure_logging_dir (line 87) | def _ensure_logging_dir(experiment_dir):
  function main (line 92) | def main():

FILE: lab07/training/stage_model.py
  function main (line 45) | def main(args):
  function find_artifact (line 85) | def find_artifact(entity: str, project: str, type: str, alias: str, run=...
  function get_logging_run (line 115) | def get_logging_run(artifact):
  function print_info (line 120) | def print_info(artifact, run=None):
  function get_checkpoint_metadata (line 134) | def get_checkpoint_metadata(run, checkpoint):
  function download_artifact (line 148) | def download_artifact(artifact_path, target_directory):
  function load_model_from_checkpoint (line 159) | def load_model_from_checkpoint(ckpt_metadata, directory):
  function save_model_to_torchscript (line 173) | def save_model_to_torchscript(model, directory):
  function upload_staged_model (line 179) | def upload_staged_model(staged_at, from_directory):
  function _find_artifact_run (line 184) | def _find_artifact_run(entity, project, type, run, alias):
  function _find_artifact_project (line 196) | def _find_artifact_project(entity, project, type, alias):
  function _get_entity_from (line 215) | def _get_entity_from(args):
  function _setup_parser (line 225) | def _setup_parser():

FILE: lab07/training/util.py
  function import_class (line 9) | def import_class(module_and_class_name: str) -> type:
  function setup_data_and_model_from_args (line 17) | def setup_data_and_model_from_args(args: argparse.Namespace):

FILE: lab08/api_serverless/api.py
  function handler (line 12) | def handler(event, _context):
  function _load_image (line 30) | def _load_image(event):
  function _from_string (line 46) | def _from_string(event):

FILE: lab08/app_gradio/app.py
  function main (line 34) | def main(args):
  function make_frontend (line 45) | def make_frontend(
  class PredictorBackend (line 96) | class PredictorBackend:
    method __init__ (line 104) | def __init__(self, url=None):
    method run (line 112) | def run(self, image):
    method _predict_with_metrics (line 117) | def _predict_with_metrics(self, image):
    method _predict_from_endpoint (line 130) | def _predict_from_endpoint(self, image):
    method _log_inference (line 156) | def _log_inference(self, pred, metrics):
  function _load_readme (line 162) | def _load_readme(with_logging=False):
  function _make_parser (line 172) | def _make_parser():

FILE: lab08/app_gradio/flagging.py
  class GantryImageToTextLogger (line 13) | class GantryImageToTextLogger(gr.FlaggingCallback):
    method __init__ (line 16) | def __init__(self, application: str, version: Union[int, str, None] = ...
    method setup (line 50) | def setup(self, components: List[Component], flagging_dir: str):
    method flag (line 58) | def flag(self, flag_data, flag_option=None, flag_index=None, username=...
    method _to_gantry (line 74) | def _to_gantry(self, input_image_url, output_text, feedback):
    method _to_s3 (line 80) | def _to_s3(self, image_bytes, key=None, filetype=None):
    method _find_image_and_text_components (line 91) | def _find_image_and_text_components(self, components: List[Component]):
  function get_api_key (line 107) | def get_api_key() -> Optional[str]:

FILE: lab08/app_gradio/s3_util.py
  function get_or_create_bucket (line 13) | def get_or_create_bucket(name):
  function _create_bucket (line 31) | def _create_bucket(name):
  function make_key (line 42) | def make_key(fileobj, filetype=None):
  function make_unique_bucket_name (line 51) | def make_unique_bucket_name(prefix, seed):
  function get_url_of (line 57) | def get_url_of(bucket, key=None):
  function get_uri_of (line 68) | def get_uri_of(bucket, key=None):
  function enable_bucket_versioning (line 79) | def enable_bucket_versioning(bucket):
  function add_access_policy (line 88) | def add_access_policy(bucket):
  function _get_policy (line 94) | def _get_policy(bucket_name):
  function make_identifier (line 127) | def make_identifier(byte_data):
  function _get_region (line 136) | def _get_region(bucket):
  function _format_url (line 148) | def _format_url(bucket_name, region, key=None):
  function _format_uri (line 154) | def _format_uri(bucket_name, key=None):

FILE: lab08/app_gradio/tests/test_app.py
  function test_local_run (line 16) | def test_local_run():

FILE: lab08/text_recognizer/callbacks/imtotext.py
  class ImageToTextTableLogger (line 14) | class ImageToTextTableLogger(pl.Callback):
    method __init__ (line 17) | def __init__(self, max_images_to_log=32, on_train=True):
    method on_train_batch_end (line 24) | def on_train_batch_end(self, trainer, module, output, batch, batch_idx):
    method on_validation_batch_end (line 33) | def on_validation_batch_end(self, trainer, module, output, batch, batc...
    method _log_image_text_table (line 40) | def _log_image_text_table(self, trainer, output, batch, key):
    method has_metrics (line 55) | def has_metrics(self, output):
  class ImageToTextCaptionLogger (line 59) | class ImageToTextCaptionLogger(pl.Callback):
    method __init__ (line 62) | def __init__(self, max_images_to_log=32, on_train=True):
    method on_train_batch_end (line 69) | def on_train_batch_end(self, trainer, module, output, batch, batch_idx):
    method on_validation_batch_end (line 77) | def on_validation_batch_end(self, trainer, module, output, batch, batc...
    method on_test_batch_end (line 85) | def on_test_batch_end(self, trainer, module, output, batch, batch_idx,...
    method _log_image_text_caption (line 92) | def _log_image_text_caption(self, trainer, output, batch, key):
    method has_metrics (line 102) | def has_metrics(self, output):

FILE: lab08/text_recognizer/callbacks/model.py
  class ModelSizeLogger (line 19) | class ModelSizeLogger(pl.Callback):
    method __init__ (line 22) | def __init__(self, print_size=True):
    method on_fit_start (line 27) | def on_fit_start(self, trainer, module):
    method _run (line 30) | def _run(self, trainer, module):
    method get_model_disksize (line 43) | def get_model_disksize(module):
  class GraphLogger (line 51) | class GraphLogger(pl.Callback):
    method __init__ (line 54) | def __init__(self, output_key="logits"):
    method on_train_batch_end (line 62) | def on_train_batch_end(self, trainer, module, outputs, batch, batch_id...
    method log_graph (line 72) | def log_graph(trainer, module, outputs):
  function count_params (line 84) | def count_params(module):

FILE: lab08/text_recognizer/callbacks/optim.py
  class LearningRateMonitor (line 6) | class LearningRateMonitor(pl.callbacks.LearningRateMonitor):
    method _add_prefix (line 13) | def _add_prefix(self, *args, **kwargs) -> str:

FILE: lab08/text_recognizer/callbacks/util.py
  function check_and_warn (line 6) | def check_and_warn(logger, attribute, feature):
  function warn_no_attribute (line 12) | def warn_no_attribute(blocked_feature, missing_attribute):

FILE: lab08/text_recognizer/data/base_data_module.py
  function load_and_print_info (line 16) | def load_and_print_info(data_module_class) -> None:
  function _download_raw_dataset (line 27) | def _download_raw_dataset(metadata: Dict, dl_dirname: Path) -> Path:
  class BaseDataModule (line 51) | class BaseDataModule(pl.LightningDataModule):
    method __init__ (line 57) | def __init__(self, args: argparse.Namespace = None) -> None:
    method data_dirname (line 74) | def data_dirname(cls):
    method add_to_argparse (line 78) | def add_to_argparse(parser):
    method config (line 93) | def config(self):
    method prepare_data (line 97) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 104) | def setup(self, stage: Optional[str] = None) -> None:
    method train_dataloader (line 111) | def train_dataloader(self):
    method val_dataloader (line 120) | def val_dataloader(self):
    method test_dataloader (line 129) | def test_dataloader(self):

FILE: lab08/text_recognizer/data/emnist.py
  class EMNIST (line 32) | class EMNIST(BaseDataModule):
    method __init__ (line 43) | def __init__(self, args=None):
    method prepare_data (line 52) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 56) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 71) | def __repr__(self):
  function _download_and_process_emnist (line 85) | def _download_and_process_emnist():
  function _process_raw_dataset (line 91) | def _process_raw_dataset(filename: str, dirname: Path):
  function _sample_to_balance (line 133) | def _sample_to_balance(x, y):
  function _augment_emnist_characters (line 148) | def _augment_emnist_characters(characters: Sequence[str]) -> Sequence[str]:

FILE: lab08/text_recognizer/data/emnist_lines.py
  class EMNISTLines (line 28) | class EMNISTLines(BaseDataModule):
    method __init__ (line 31) | def __init__(
    method add_to_argparse (line 55) | def add_to_argparse(parser):
    method data_filename (line 79) | def data_filename(self):
    method prepare_data (line 85) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 93) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 111) | def __repr__(self) -> str:
    method _generate_data (line 132) | def _generate_data(self, split: str) -> None:
  function get_samples_by_char (line 168) | def get_samples_by_char(samples, labels, mapping):
  function select_letter_samples_for_string (line 175) | def select_letter_samples_for_string(string, samples_by_char, char_shape...
  function construct_image_from_string (line 187) | def construct_image_from_string(
  function create_dataset_of_images (line 202) | def create_dataset_of_images(N, samples_by_char, sentence_generator, min...
  function convert_strings_to_labels (line 212) | def convert_strings_to_labels(

FILE: lab08/text_recognizer/data/fake_images.py
  class FakeImageData (line 15) | class FakeImageData(BaseDataModule):
    method __init__ (line 18) | def __init__(self, args: argparse.Namespace = None):
    method add_to_argparse (line 28) | def add_to_argparse(parser):
    method setup (line 36) | def setup(self, stage: str = None) -> None:

FILE: lab08/text_recognizer/data/iam.py
  class IAM (line 22) | class IAM:
    method __init__ (line 40) | def __init__(self):
    method prepare_data (line 43) | def prepare_data(self):
    method load_image (line 49) | def load_image(self, id: str) -> Image.Image:
    method __repr__ (line 62) | def __repr__(self):
    method all_ids (line 74) | def all_ids(self):
    method ids_by_split (line 79) | def ids_by_split(self):
    method split_by_id (line 83) | def split_by_id(self):
    method train_ids (line 91) | def train_ids(self):
    method test_ids (line 96) | def test_ids(self):
    method xml_filenames (line 101) | def xml_filenames(self) -> List[Path]:
    method validation_ids (line 106) | def validation_ids(self):
    method form_filenames (line 113) | def form_filenames(self) -> List[Path]:
    method xml_filenames_by_id (line 118) | def xml_filenames_by_id(self):
    method form_filenames_by_id (line 123) | def form_filenames_by_id(self):
    method line_strings_by_id (line 128) | def line_strings_by_id(self):
    method line_regions_by_id (line 133) | def line_regions_by_id(self):
    method paragraph_string_by_id (line 138) | def paragraph_string_by_id(self):
    method paragraph_region_by_id (line 143) | def paragraph_region_by_id(self):
  function _extract_raw_dataset (line 156) | def _extract_raw_dataset(filename: Path, dirname: Path) -> None:
  function _get_ids_from_lwitlrt_split_file (line 163) | def _get_ids_from_lwitlrt_split_file(filename: str) -> List[str]:
  function _get_line_strings_from_xml_file (line 172) | def _get_line_strings_from_xml_file(filename: str) -> List[str]:
  function _get_text_from_xml_element (line 178) | def _get_text_from_xml_element(xml_element: Any) -> str:
  function _get_line_regions_from_xml_file (line 183) | def _get_line_regions_from_xml_file(filename: str) -> List[Dict[str, int]]:
  function _get_line_elements_from_xml_file (line 210) | def _get_line_elements_from_xml_file(filename: str) -> List[Any]:
  function _get_region_from_xml_element (line 216) | def _get_region_from_xml_element(xml_elem: Any, xml_path: str) -> Option...

FILE: lab08/text_recognizer/data/iam_lines.py
  class IAMLines (line 24) | class IAMLines(BaseDataModule):
    method __init__ (line 27) | def __init__(self, args: argparse.Namespace = None):
    method add_to_argparse (line 38) | def add_to_argparse(parser):
    method prepare_data (line 43) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 64) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 91) | def __repr__(self) -> str:
  function generate_line_crops_and_labels (line 114) | def generate_line_crops_and_labels(iam: IAM, split: str, scale_factor=IM...
  function save_images_and_labels (line 131) | def save_images_and_labels(crops: Sequence[Image.Image], labels: Sequenc...
  function load_processed_crops_and_labels (line 140) | def load_processed_crops_and_labels(split: str, data_dirname: Path):
  function load_processed_line_crops (line 148) | def load_processed_line_crops(split: str, data_dirname: Path):
  function load_processed_line_labels (line 155) | def load_processed_line_labels(split: str, data_dirname: Path):

FILE: lab08/text_recognizer/data/iam_original_and_synthetic_paragraphs.py
  class IAMOriginalAndSyntheticParagraphs (line 11) | class IAMOriginalAndSyntheticParagraphs(BaseDataModule):
    method __init__ (line 14) | def __init__(self, args: argparse.Namespace = None):
    method add_to_argparse (line 26) | def add_to_argparse(parser):
    method prepare_data (line 32) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 36) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 47) | def __repr__(self) -> str:

FILE: lab08/text_recognizer/data/iam_paragraphs.py
  class IAMParagraphs (line 24) | class IAMParagraphs(BaseDataModule):
    method __init__ (line 27) | def __init__(self, args: argparse.Namespace = None):
    method add_to_argparse (line 41) | def add_to_argparse(parser):
    method prepare_data (line 46) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 75) | def setup(self, stage: str = None) -> None:
    method __repr__ (line 91) | def __repr__(self) -> str:
  function validate_input_and_output_dimensions (line 114) | def validate_input_and_output_dimensions(
  function get_paragraph_crops_and_labels (line 127) | def get_paragraph_crops_and_labels(
  function save_crops_and_labels (line 143) | def save_crops_and_labels(crops: Dict[str, Image.Image], labels: Dict[st...
  function load_processed_crops_and_labels (line 154) | def load_processed_crops_and_labels(split: str) -> Tuple[Sequence[Image....
  function get_dataset_properties (line 167) | def get_dataset_properties() -> dict:
  function _labels_filename (line 188) | def _labels_filename(split: str) -> Path:
  function _crop_filename (line 193) | def _crop_filename(id_: str, split: str) -> Path:
  function _num_lines (line 198) | def _num_lines(label: str) -> int:

FILE: lab08/text_recognizer/data/iam_synthetic_paragraphs.py
  class IAMSyntheticParagraphs (line 29) | class IAMSyntheticParagraphs(IAMParagraphs):
    method __init__ (line 32) | def __init__(self, args: argparse.Namespace = None):
    method prepare_data (line 39) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 58) | def setup(self, stage: str = None) -> None:
    method _load_processed_crops_and_labels (line 73) | def _load_processed_crops_and_labels(self):
    method __repr__ (line 79) | def __repr__(self) -> str:
    method add_to_argparse (line 98) | def add_to_argparse(parser):
  class IAMSyntheticParagraphsDataset (line 103) | class IAMSyntheticParagraphsDataset(torch.utils.data.Dataset):
    method __init__ (line 106) | def __init__(
    method __len__ (line 131) | def __len__(self) -> int:
    method _set_seed (line 135) | def _set_seed(self, seed):
    method __getitem__ (line 141) | def __getitem__(self, index: int) -> Tuple[Any, Any]:
  function join_line_crops_to_form_paragraph (line 169) | def join_line_crops_to_form_paragraph(line_crops: Sequence[Image.Image])...

FILE: lab08/text_recognizer/data/mnist.py
  class MNIST (line 12) | class MNIST(BaseDataModule):
    method __init__ (line 15) | def __init__(self, args: argparse.Namespace) -> None:
    method prepare_data (line 23) | def prepare_data(self, *args, **kwargs) -> None:
    method setup (line 28) | def setup(self, stage=None) -> None:

FILE: lab08/text_recognizer/data/sentence_generator.py
  class SentenceGenerator (line 15) | class SentenceGenerator:
    method __init__ (line 18) | def __init__(self, max_length: Optional[int] = None):
    method generate (line 23) | def generate(self, max_length: Optional[int] = None) -> str:
    method _get_end_ind_candidates (line 49) | def _get_end_ind_candidates(self, first_ind: int, start_ind: int, max_...
  function brown_text (line 58) | def brown_text():
  function load_nltk_brown_corpus (line 67) | def load_nltk_brown_corpus():

FILE: lab08/text_recognizer/data/util.py
  class BaseDataset (line 11) | class BaseDataset(torch.utils.data.Dataset):
    method __init__ (line 28) | def __init__(
    method __len__ (line 43) | def __len__(self) -> int:
    method __getitem__ (line 47) | def __getitem__(self, index: int) -> Tuple[Any, Any]:
  function convert_strings_to_labels (line 70) | def convert_strings_to_labels(strings: Sequence[str], mapping: Dict[str,...
  function split_dataset (line 84) | def split_dataset(base_dataset: BaseDataset, fraction: float, seed: int)...
  function resize_image (line 96) | def resize_image(image: Image.Image, scale_factor: int) -> Image.Image:

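The `convert_strings_to_labels` helper listed above turns text into fixed-length token-index sequences. The sketch below is illustrative, not the repo's implementation (which returns a NumPy array): it assumes the `<S>`/`<E>`/`<P>` special tokens visible in `emnist_essentials.json`, and the tiny `mapping` dict is a made-up stand-in for the full character mapping.

```python
from typing import Dict, List, Sequence


def convert_strings_to_labels(
    strings: Sequence[str], mapping: Dict[str, int], length: int
) -> List[List[int]]:
    """Encode each string as <S> ...chars... <E>, padded with <P> to `length`.

    Sketch only: token layout assumed from emnist_essentials.json.
    """
    labels = []
    for s in strings:
        tokens = [mapping["<S>"]] + [mapping[c] for c in s] + [mapping["<E>"]]
        tokens += [mapping["<P>"]] * (length - len(tokens))  # right-pad
        labels.append(tokens)
    return labels


# Hypothetical miniature mapping for demonstration.
mapping = {"<B>": 0, "<S>": 1, "<E>": 2, "<P>": 3, "a": 4, "b": 5}
encoded = convert_strings_to_labels(["ab"], mapping, length=6)
```

Fixed-length padding like this is what lets variable-length transcriptions be batched into a single tensor.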
FILE: lab08/text_recognizer/lit_models/base.py
  class BaseLitModel (line 17) | class BaseLitModel(pl.LightningModule):
    method __init__ (line 22) | def __init__(self, model, args: argparse.Namespace = None):
    method add_to_argparse (line 48) | def add_to_argparse(parser):
    method configure_optimizers (line 56) | def configure_optimizers(self):
    method forward (line 65) | def forward(self, x):
    method predict (line 68) | def predict(self, x):
    method training_step (line 72) | def training_step(self, batch, batch_idx):
    method _run_on_batch (line 84) | def _run_on_batch(self, batch, with_preds=False):
    method validation_step (line 91) | def validation_step(self, batch, batch_idx):
    method test_step (line 103) | def test_step(self, batch, batch_idx):
    method add_on_first_batch (line 110) | def add_on_first_batch(self, metrics, outputs, batch_idx):
    method add_on_logged_batches (line 114) | def add_on_logged_batches(self, metrics, outputs):
    method is_logged_batch (line 118) | def is_logged_batch(self):
  class BaseImageToTextLitModel (line 125) | class BaseImageToTextLitModel(BaseLitModel):  # pylint: disable=too-many...
    method __init__ (line 128) | def __init__(self, model, args: argparse.Namespace = None):

FILE: lab08/text_recognizer/lit_models/metrics.py
  class CharacterErrorRate (line 8) | class CharacterErrorRate(torchmetrics.CharErrorRate):
    method __init__ (line 11) | def __init__(self, ignore_tokens: Sequence[int], *args):
    method update (line 15) | def update(self, preds: torch.Tensor, targets: torch.Tensor):  # type:...
  function test_character_error_rate (line 21) | def test_character_error_rate():

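`CharacterErrorRate` above wraps `torchmetrics.CharErrorRate` and additionally drops ignored tokens. The core quantity is edit distance normalized by target length; here is a minimal stdlib-only sketch of that computation, assuming (the names and `ignore` behavior are inferred, not copied from the source) that special-token indices are filtered out before scoring.

```python
from typing import Sequence


def levenshtein(a: Sequence[int], b: Sequence[int]) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, start=1):
        curr = [i]
        for j, y in enumerate(b, start=1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = curr
    return prev[-1]


def char_error_rate(pred: Sequence[int], target: Sequence[int], ignore=()) -> float:
    """Edit distance between token sequences, normalized by target length,
    after dropping ignored tokens (e.g. the <S>/<E>/<P> indices)."""
    pred = [t for t in pred if t not in ignore]
    target = [t for t in target if t not in ignore]
    return levenshtein(pred, target) / max(len(target), 1)
```

For example, with `<S>`=1 and `<E>`=2 ignored, predicting `[1, 4, 5, 2]` against target `[1, 4, 4, 2]` compares `[4, 5]` to `[4, 4]`: one substitution over two target characters, i.e. a CER of 0.5.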
FILE: lab08/text_recognizer/lit_models/transformer.py
  class TransformerLitModel (line 10) | class TransformerLitModel(BaseImageToTextLitModel):
    method __init__ (line 18) | def __init__(self, model, args=None):
    method forward (line 22) | def forward(self, x):
    method teacher_forward (line 25) | def teacher_forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.T...
    method training_step (line 44) | def training_step(self, batch, batch_idx):
    method validation_step (line 59) | def validation_step(self, batch, batch_idx):
    method test_step (line 80) | def test_step(self, batch, batch_idx):
    method map (line 101) | def map(self, ks: Sequence[int], ignore: bool = True) -> str:
    method batchmap (line 108) | def batchmap(self, ks: Sequence[Sequence[int]], ignore=True) -> List[s...
    method get_preds (line 112) | def get_preds(self, logitlikes: torch.Tensor, replace_after_end: bool ...

FILE: lab08/text_recognizer/lit_models/util.py
  function first_appearance (line 6) | def first_appearance(x: torch.Tensor, element: Union[int, float], dim: i...
  function replace_after (line 47) | def replace_after(x: torch.Tensor, element: Union[int, float], replace: ...

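The two utilities above support decoding: find where a token (such as `<E>`) first appears, then overwrite everything after it. The real versions are vectorized over a batch dimension with torch; this per-row, pure-Python sketch shows the intended semantics as inferred from the names and their use in `get_preds` — it is not the source implementation.

```python
from typing import List, Sequence, Union

Number = Union[int, float]


def first_appearance(row: Sequence[Number], element: Number) -> int:
    """Index of the first occurrence of `element`; len(row) if it never appears."""
    for idx, value in enumerate(row):
        if value == element:
            return idx
    return len(row)


def replace_after(row: Sequence[Number], element: Number, replace: Number) -> List[Number]:
    """Keep values up to and including the first `element`; overwrite the rest.

    Used to mask whatever a decoder emits after the end-of-sequence token."""
    cut = first_appearance(row, element)
    return list(row[: cut + 1]) + [replace] * max(len(row) - cut - 1, 0)
```

Replacing post-`<E>` tokens with padding keeps spurious trailing predictions from inflating the character error rate.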
FILE: lab08/text_recognizer/models/cnn.py
  class ConvBlock (line 15) | class ConvBlock(nn.Module):
    method __init__ (line 20) | def __init__(self, input_channels: int, output_channels: int) -> None:
    method forward (line 25) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class CNN (line 43) | class CNN(nn.Module):
    method __init__ (line 46) | def __init__(self, data_config: Dict[str, Any], args: argparse.Namespa...
    method forward (line 75) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method add_to_argparse (line 101) | def add_to_argparse(parser):

FILE: lab08/text_recognizer/models/line_cnn.py
  class ConvBlock (line 21) | class ConvBlock(nn.Module):
    method __init__ (line 26) | def __init__(
    method forward (line 38) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  class LineCNN (line 56) | class LineCNN(nn.Module):
    method __init__ (line 61) | def __init__(
    method _init_weights (line 100) | def _init_weights(self):
    method forward (line 119) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method add_to_argparse (line 146) | def add_to_argparse(parser):

FILE: lab08/text_recognizer/models/line_cnn_simple.py
  class LineCNNSimple (line 16) | class LineCNNSimple(nn.Module):
    method __init__ (line 19) | def __init__(
    method forward (line 38) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method add_to_argparse (line 73) | def add_to_argparse(parser):

FILE: lab08/text_recognizer/models/line_cnn_transformer.py
  class LineCNNTransformer (line 20) | class LineCNNTransformer(nn.Module):
    method __init__ (line 23) | def __init__(
    method init_weights (line 65) | def init_weights(self):
    method encode (line 71) | def encode(self, x: torch.Tensor) -> torch.Tensor:
    method decode (line 90) | def decode(self, x, y):
    method forward (line 117) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method add_to_argparse (line 150) | def add_to_argparse(parser):

FILE: lab08/text_recognizer/models/mlp.py
  class MLP (line 14) | class MLP(nn.Module):
    method __init__ (line 17) | def __init__(
    method forward (line 39) | def forward(self, x):
    method add_to_argparse (line 51) | def add_to_argparse(parser):

FILE: lab08/text_recognizer/models/resnet_transformer.py
  class ResnetTransformer (line 21) | class ResnetTransformer(nn.Module):
    method __init__ (line 24) | def __init__(
    method forward (line 75) | def forward(self, x: torch.Tensor) -> torch.Tensor:
    method init_weights (line 111) | def init_weights(self):
    method encode (line 123) | def encode(self, x: torch.Tensor) -> torch.Tensor:
    method decode (line 147) | def decode(self, x, y):
    method add_to_argparse (line 179) | def add_to_argparse(parser):

FILE: lab08/text_recognizer/models/transformer_util.py
  class PositionalEncodingImage (line 9) | class PositionalEncodingImage(nn.Module):
    method __init__ (line 16) | def __init__(self, d_model: int, max_h: int = 2000, max_w: int = 2000,...
    method make_pe (line 26) | def make_pe(d_model: int, max_h: int, max_w: int) -> torch.Tensor:
    method forward (line 36) | def forward(self, x: Tensor) -> Tensor:
  class PositionalEncoding (line 44) | class PositionalEncoding(torch.nn.Module):
    method __init__ (line 47) | def __init__(self, d_model: int, dropout: float = 0.1, max_len: int = ...
    method make_pe (line 56) | def make_pe(d_model: int, max_len: int) -> torch.Tensor:
    method forward (line 65) | def forward(self, x: torch.Tensor) -> torch.Tensor:
  function generate_square_subsequent_mask (line 72) | def generate_square_subsequent_mask(size: int) -> torch.Tensor:

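`generate_square_subsequent_mask` builds the causal (subsequent) mask that keeps a Transformer decoder from attending to future positions. A stdlib sketch of the standard additive form follows, with a list of lists standing in for the tensor the repo's version presumably returns: 0.0 where attention is allowed, `-inf` where it is blocked, so the mask can simply be added to attention scores before the softmax.

```python
import math


def generate_square_subsequent_mask(size: int):
    """Additive causal mask: position i may attend to j <= i (0.0),
    and is blocked from j > i (-inf)."""
    return [
        [0.0 if j <= i else -math.inf for j in range(size)]
        for i in range(size)
    ]


mask = generate_square_subsequent_mask(3)
```

After the softmax, the `-inf` entries become exactly zero attention weight, which is what enforces left-to-right decoding during teacher forcing.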
FILE: lab08/text_recognizer/paragraph_text_recognizer.py
  class ParagraphTextRecognizer (line 26) | class ParagraphTextRecognizer:
    method __init__ (line 29) | def __init__(self, model_path=None):
    method predict (line 38) | def predict(self, image: Union[str, Path, Image.Image]) -> str:
  function convert_y_label_to_string (line 51) | def convert_y_label_to_string(y: torch.Tensor, mapping: Sequence[str], i...
  function main (line 55) | def main():

FILE: lab08/text_recognizer/stems/image.py
  class ImageStem (line 5) | class ImageStem:
    method __init__ (line 23) | def __init__(self):
    method __call__ (line 28) | def __call__(self, img):
  class MNISTStem (line 38) | class MNISTStem(ImageStem):
    method __init__ (line 41) | def __init__(self):

FILE: lab08/text_recognizer/stems/line.py
  class LineStem (line 11) | class LineStem(ImageStem):
    method __init__ (line 14) | def __init__(self, augment=False, color_jitter_kwargs=None, random_aff...
  class IAMLineStem (line 37) | class IAMLineStem(ImageStem):
    method __init__ (line 40) | def __init__(self, augment=False, color_jitter_kwargs=None, random_aff...

FILE: lab08/text_recognizer/stems/paragraph.py
  class ParagraphStem (line 14) | class ParagraphStem(ImageStem):
    method __init__ (line 17) | def __init__(

FILE: lab08/text_recognizer/tests/test_callback_utils.py
  function test_check_and_warn_simple (line 11) | def test_check_and_warn_simple():
  function test_check_and_warn_tblogger (line 23) | def test_check_and_warn_tblogger():
  function test_check_and_warn_wandblogger (line 32) | def test_check_and_warn_wandblogger():

FILE: lab08/text_recognizer/tests/test_iam.py
  function test_iam_parsed_lines (line 5) | def test_iam_parsed_lines():
  function test_iam_data_splits (line 13) | def test_iam_data_splits():

FILE: lab08/text_recognizer/util.py
  function to_categorical (line 17) | def to_categorical(y, num_classes):
  function read_image_pil (line 22) | def read_image_pil(image_uri: Union[Path, str], grayscale=False) -> Image:
  function read_image_pil_file (line 27) | def read_image_pil_file(image_file, grayscale=False) -> Image:
  function temporary_working_directory (line 37) | def temporary_working_directory(working_dir: Union[str, Path]):
  function read_b64_image (line 47) | def read_b64_image(b64_string, grayscale=False):
  function read_b64_string (line 56) | def read_b64_string(b64_string, return_data_type=False):
  function get_b64_filetype (line 66) | def get_b64_filetype(data_header):
  function split_and_validate_b64_string (line 72) | def split_and_validate_b64_string(b64_string):
  function encode_b64_image (line 81) | def encode_b64_image(image, format="png"):
  function compute_sha256 (line 89) | def compute_sha256(filename: Union[Path, str]):
  class TqdmUpTo (line 95) | class TqdmUpTo(tqdm):
    method update_to (line 98) | def update_to(self, blocks=1, bsize=1, tsize=None):
  function download_url (line 114) | def download_url(url, filename):

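The base64 helpers above service the serving path, where images arrive as data-URI strings. This stdlib sketch shows the assumed split-then-decode flow; the real `read_b64_string` also reports the data type and `read_b64_image` goes on to build a PIL `Image`, which is omitted here.

```python
import base64


def split_and_validate_b64_string(b64_string: str):
    """Split a data-URI-style string ('data:<mediatype>;base64,<payload>')
    into header and payload, checking for the base64 marker."""
    header, _, payload = b64_string.partition(",")
    if not header.endswith(";base64"):
        raise ValueError("string is not base64-encoded data")
    return header, payload


def read_b64_string(b64_string: str) -> bytes:
    """Decode the payload of a base64 data URI into raw bytes."""
    _, payload = split_and_validate_b64_string(b64_string)
    return base64.b64decode(payload)


# Round-trip demonstration on a tiny payload.
uri = "data:text/plain;base64," + base64.b64encode(b"hi").decode()
decoded = read_b64_string(uri)
```

Accepting data URIs keeps the model's HTTP interface to a single JSON string field rather than multipart uploads.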
FILE: lab08/training/cleanup_artifacts.py
  function _setup_parser (line 30) | def _setup_parser():
  function main (line 87) | def main(args):
  function clean_run_artifacts (line 101) | def clean_run_artifacts(run, selector, protect_aliases=True, verbose=Fal...
  function remove_artifact (line 108) | def remove_artifact(artifact, protect_aliases, verbose=False, dryrun=True):
  function _get_runs (line 117) | def _get_runs(project_path, run_ids=None, run_name_res=None, verbose=Fal...
  function _get_run_by_id (line 134) | def _get_run_by_id(project_path, run_id, verbose=False):
  function _get_runs_by_name_re (line 142) | def _get_runs_by_name_re(project_path, run_name_re, verbose=False):
  function _get_selector_from (line 152) | def _get_selector_from(args, verbose=False):
  function _get_entity_from (line 173) | def _get_entity_from(args, verbose=False):

FILE: lab08/training/run_experiment.py
  function _setup_parser (line 21) | def _setup_parser():
  function _ensure_logging_dir (line 87) | def _ensure_logging_dir(experiment_dir):
  function main (line 92) | def main():

FILE: lab08/training/stage_model.py
  function main (line 45) | def main(args):
  function find_artifact (line 85) | def find_artifact(entity: str, project: str, type: str, alias: str, run=...
  function get_logging_run (line 115) | def get_logging_run(artifact):
  function print_info (line 120) | def print_info(artifact, run=None):
  function get_checkpoint_metadata (line 134) | def get_checkpoint_metadata(run, checkpoint):
  function download_artifact (line 148) | def download_artifact(artifact_path, target_directory):
  function load_model_from_checkpoint (line 159) | def load_model_from_checkpoint(ckpt_metadata, directory):
  function save_model_to_torchscript (line 173) | def save_model_to_torchscript(model, directory):
  function upload_staged_model (line 179) | def upload_staged_model(staged_at, from_directory):
  function _find_artifact_run (line 184) | def _find_artifact_run(entity, project, type, run, alias):
  function _find_artifact_project (line 196) | def _find_artifact_project(entity, project, type, alias):
  function _get_entity_from (line 215) | def _get_entity_from(args):
  function _setup_parser (line 225) | def _setup_parser():

FILE: lab08/training/util.py
  function import_class (line 9) | def import_class(module_and_class_name: str) -> type:
  function setup_data_and_model_from_args (line 17) | def setup_data_and_model_from_args(args: argparse.Namespace):
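`import_class` is what lets `run_experiment.py` accept `--data_class` and `--model_class` as strings and instantiate the named classes. A plausible stdlib sketch follows; the `DATA_CLASS_MODULE` prefixing hinted at in the `training/util.py` preview is left out, so the exact resolution rules here are assumptions.

```python
import importlib


def import_class(module_and_class_name: str) -> type:
    """Resolve a dotted 'package.module.ClassName' string to the class object."""
    module_name, class_name = module_and_class_name.rsplit(".", 1)
    module = importlib.import_module(module_name)
    return getattr(module, class_name)


cls = import_class("collections.OrderedDict")
```

This string-to-class indirection means adding a new model or datamodule requires no changes to the experiment runner, only a new importable class.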
Condensed preview — 424 files, each showing path, character count, and a content snippet.
[
  {
    "path": ".flake8",
    "chars": 1897,
    "preview": "[flake8]\nselect = ANN,B,B9,BLK,C,D,E,F,I,S,W\n  # only check selected error codes\nmax-complexity = 12\n  # C9 - flake8 McC"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/this-repository-is-automatically-generated--don-t-open-issues-here-.md",
    "chars": 522,
    "preview": "---\nname: This repository is automatically generated! Don't open issues here.\nabout: Open issues in the generating repo "
  },
  {
    "path": ".github/pull_request_template.md",
    "chars": 352,
    "preview": "Thanks for your interest in contributing!\n\nThis repository is automatically generated from [a source repo](https://fsdl."
  },
  {
    "path": ".gitignore",
    "chars": 631,
    "preview": "# Data\ndata/downloaded\ndata/processed\ndata/interim\n\n\n# Editors\n.vscode\n*.sw?\n*~\n\n# Node\nnode_modules\n\n# Python\n__pycache"
  },
  {
    "path": ".pre-commit-config.yaml",
    "chars": 1902,
    "preview": "repos:\n  # a set of useful Python-based pre-commit hooks\n  - repo: https://github.com/pre-commit/pre-commit-hooks\n    re"
  },
  {
    "path": "LICENSE.txt",
    "chars": 1086,
    "preview": "MIT License\n\nCopyright (c) 2022 Full Stack Deep Learning, LLC\n\nPermission is hereby granted, free of charge, to any pers"
  },
  {
    "path": "Makefile",
    "chars": 1789,
    "preview": "# Arcane incantation to print all the other targets, from https://stackoverflow.com/a/26339924\nhelp:\n\t@$(MAKE) -pRrq -f "
  },
  {
    "path": "data/raw/emnist/metadata.toml",
    "chars": 173,
    "preview": "filename = 'matlab.zip'\nsha256 = 'e1fa805cdeae699a52da0b77c2db17f6feb77eed125f9b45c022e7990444df95'\nurl = 'https://s3-us"
  },
  {
    "path": "data/raw/emnist/readme.md",
    "chars": 431,
    "preview": "# EMNIST dataset\n\nThe EMNIST dataset is a set of handwritten character digits derived from the NIST Special Database 19\n"
  },
  {
    "path": "data/raw/fsdl_handwriting/fsdl_handwriting.jsonl",
    "chars": 170325,
    "preview": "{\"content\": \"http://com.dataturks.a96-i23.open.s3.amazonaws.com/2c9fafb068c19d77016931d84bde07ba/85a46e90-8af1-48fe-9e75"
  },
  {
    "path": "data/raw/fsdl_handwriting/manifest.csv",
    "chars": 10886,
    "preview": "form\nhttps://fsdl-public-assets.s3.us-west-2.amazonaws.com/fsdl_handwriting_20190302/page-001.jpg\nhttps://fsdl-public-as"
  },
  {
    "path": "data/raw/fsdl_handwriting/metadata.toml",
    "chars": 188,
    "preview": "url = \"https://dataturks.com/projects/sergeykarayev/fsdl_handwriting/export\"\nfilename = \"fsdl_handwriting.json\"\nsha256 ="
  },
  {
    "path": "data/raw/fsdl_handwriting/readme.md",
    "chars": 508,
    "preview": "# FSDL Handwriting Dataset\n\n## Collection\n\nHandwritten paragraphs were collected in the FSDL March 2019 class.\nThe resul"
  },
  {
    "path": "data/raw/iam/metadata.toml",
    "chars": 175,
    "preview": "url = 'https://s3-us-west-2.amazonaws.com/fsdl-public-assets/iam/iamdb.zip'\nfilename = 'iamdb.zip'\nsha256 = 'f3c9e87a88a"
  },
  {
    "path": "data/raw/iam/readme.md",
    "chars": 2263,
    "preview": "# IAM Dataset\n\nThe IAM Handwriting Database contains forms of handwritten English text which can be used to train and te"
  },
  {
    "path": "environment.yml",
    "chars": 377,
    "preview": "name: fsdl-text-recognizer-2022\nchannels:\n  - pytorch\n  - nvidia\n  - defaults\ndependencies:\n  - python=3.10  # versioned"
  },
  {
    "path": "lab01/notebooks/lab01_pytorch.ipynb",
    "chars": 79942,
    "preview": "{\n  \"cells\": [\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"FlH0lCOttCs5\"\n      },\n      \"sou"
  },
  {
    "path": "lab01/text_recognizer/__init__.py",
    "chars": 58,
    "preview": "\"\"\"Modules for creating and running a text recognizer.\"\"\"\n"
  },
  {
    "path": "lab01/text_recognizer/data/util.py",
    "chars": 3261,
    "preview": "\"\"\"Base Dataset class.\"\"\"\nfrom typing import Any, Callable, Dict, Sequence, Tuple, Union\n\nfrom PIL import Image\nimport t"
  },
  {
    "path": "lab01/text_recognizer/metadata/mnist.py",
    "chars": 246,
    "preview": "\"\"\"Metadata for the MNIST dataset.\"\"\"\nimport text_recognizer.metadata.shared as shared\n\nDOWNLOADED_DATA_DIRNAME = shared"
  },
  {
    "path": "lab01/text_recognizer/metadata/shared.py",
    "chars": 140,
    "preview": "from pathlib import Path\n\nDATA_DIRNAME = Path(__file__).resolve().parents[3] / \"data\"\nDOWNLOADED_DATA_DIRNAME = DATA_DIR"
  },
  {
    "path": "lab01/text_recognizer/models/__init__.py",
    "chars": 80,
    "preview": "\"\"\"Models for character and text recognition in images.\"\"\"\nfrom .mlp import MLP\n"
  },
  {
    "path": "lab01/text_recognizer/models/mlp.py",
    "chars": 1512,
    "preview": "import argparse\nfrom typing import Any, Dict\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.func"
  },
  {
    "path": "lab01/text_recognizer/util.py",
    "chars": 2278,
    "preview": "\"\"\"Utility functions for text_recognizer module.\"\"\"\nimport base64\nimport contextlib\nimport hashlib\nfrom io import BytesI"
  },
  {
    "path": "lab02/notebooks/lab01_pytorch.ipynb",
    "chars": 79942,
    "preview": "{\n  \"cells\": [\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"FlH0lCOttCs5\"\n      },\n      \"sou"
  },
  {
    "path": "lab02/notebooks/lab02a_lightning.ipynb",
    "chars": 57375,
    "preview": "{\n  \"cells\": [\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"FlH0lCOttCs5\"\n      },\n      \"sou"
  },
  {
    "path": "lab02/notebooks/lab02b_cnn.ipynb",
    "chars": 69401,
    "preview": "{\n  \"cells\": [\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"FlH0lCOttCs5\"\n      },\n      \"sou"
  },
  {
    "path": "lab02/text_recognizer/__init__.py",
    "chars": 58,
    "preview": "\"\"\"Modules for creating and running a text recognizer.\"\"\"\n"
  },
  {
    "path": "lab02/text_recognizer/data/__init__.py",
    "chars": 505,
    "preview": "\"\"\"Module containing submodules for each dataset.\n\nEach dataset is defined as a class in that submodule.\n\nThe datasets s"
  },
  {
    "path": "lab02/text_recognizer/data/base_data_module.py",
    "chars": 4895,
    "preview": "\"\"\"Base DataModule class.\"\"\"\nimport argparse\nimport os\nfrom pathlib import Path\nfrom typing import Collection, Dict, Opt"
  },
  {
    "path": "lab02/text_recognizer/data/emnist.py",
    "chars": 7150,
    "preview": "\"\"\"EMNIST dataset. Downloads from NIST website and saves as .npz file if not already present.\"\"\"\nimport json\nimport os\nf"
  },
  {
    "path": "lab02/text_recognizer/data/emnist_essentials.json",
    "chars": 465,
    "preview": "{\"characters\": [\"<B>\", \"<S>\", \"<E>\", \"<P>\", \"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\", \"A\", \"B\", \"C\", \"D\", \"E\", \""
  },
  {
    "path": "lab02/text_recognizer/data/emnist_lines.py",
    "chars": 8909,
    "preview": "import argparse\nfrom collections import defaultdict\nfrom typing import Dict, Sequence\n\nimport h5py\nimport numpy as np\nim"
  },
  {
    "path": "lab02/text_recognizer/data/mnist.py",
    "chars": 1409,
    "preview": "\"\"\"MNIST DataModule.\"\"\"\nimport argparse\n\nfrom torch.utils.data import random_split\nfrom torchvision.datasets import MNIS"
  },
  {
    "path": "lab02/text_recognizer/data/sentence_generator.py",
    "chars": 2875,
    "preview": "\"\"\"SentenceGenerator class and supporting functions.\"\"\"\nimport itertools\nimport re\nimport string\nfrom typing import List"
  },
  {
    "path": "lab02/text_recognizer/data/util.py",
    "chars": 3261,
    "preview": "\"\"\"Base Dataset class.\"\"\"\nfrom typing import Any, Callable, Dict, Sequence, Tuple, Union\n\nfrom PIL import Image\nimport t"
  },
  {
    "path": "lab02/text_recognizer/lit_models/__init__.py",
    "chars": 31,
    "preview": "from .base import BaseLitModel\n"
  },
  {
    "path": "lab02/text_recognizer/lit_models/base.py",
    "chars": 3563,
    "preview": "\"\"\"Basic LightningModules on which other modules can be built.\"\"\"\nimport argparse\n\nimport pytorch_lightning as pl\nimport"
  },
  {
    "path": "lab02/text_recognizer/metadata/emnist.py",
    "chars": 1381,
    "preview": "from pathlib import Path\n\nimport text_recognizer.metadata.shared as shared\n\nRAW_DATA_DIRNAME = shared.DATA_DIRNAME / \"ra"
  },
  {
    "path": "lab02/text_recognizer/metadata/emnist_lines.py",
    "chars": 469,
    "preview": "from pathlib import Path\n\nimport text_recognizer.metadata.emnist as emnist\nimport text_recognizer.metadata.shared as sha"
  },
  {
    "path": "lab02/text_recognizer/metadata/mnist.py",
    "chars": 246,
    "preview": "\"\"\"Metadata for the MNIST dataset.\"\"\"\nimport text_recognizer.metadata.shared as shared\n\nDOWNLOADED_DATA_DIRNAME = shared"
  },
  {
    "path": "lab02/text_recognizer/metadata/shared.py",
    "chars": 140,
    "preview": "from pathlib import Path\n\nDATA_DIRNAME = Path(__file__).resolve().parents[3] / \"data\"\nDOWNLOADED_DATA_DIRNAME = DATA_DIR"
  },
  {
    "path": "lab02/text_recognizer/models/__init__.py",
    "chars": 145,
    "preview": "\"\"\"Models for character and text recognition in images.\"\"\"\nfrom .mlp import MLP\n\nfrom .cnn import CNN\nfrom .line_cnn_sim"
  },
  {
    "path": "lab02/text_recognizer/models/cnn.py",
    "chars": 3497,
    "preview": "\"\"\"Basic convolutional model building blocks.\"\"\"\nimport argparse\nfrom typing import Any, Dict\n\nimport torch\nfrom torch i"
  },
  {
    "path": "lab02/text_recognizer/models/line_cnn_simple.py",
    "chars": 3024,
    "preview": "\"\"\"Simplest version of LineCNN that works on cleanly-separated characters.\"\"\"\nimport argparse\nimport math\nfrom typing im"
  },
  {
    "path": "lab02/text_recognizer/models/mlp.py",
    "chars": 1512,
    "preview": "import argparse\nfrom typing import Any, Dict\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.func"
  },
  {
    "path": "lab02/text_recognizer/stems/image.py",
    "chars": 1277,
    "preview": "import torch\nfrom torchvision import transforms\n\n\nclass ImageStem:\n    \"\"\"A stem for models operating on images.\n\n    Im"
  },
  {
    "path": "lab02/text_recognizer/util.py",
    "chars": 2278,
    "preview": "\"\"\"Utility functions for text_recognizer module.\"\"\"\nimport base64\nimport contextlib\nimport hashlib\nfrom io import BytesI"
  },
  {
    "path": "lab02/training/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "lab02/training/run_experiment.py",
    "chars": 5258,
    "preview": "\"\"\"Experiment-running framework.\"\"\"\nimport argparse\nfrom pathlib import Path\n\nimport numpy as np\nimport pytorch_lightnin"
  },
  {
    "path": "lab02/training/util.py",
    "chars": 823,
    "preview": "\"\"\"Utilities for model development scripts: training and staging.\"\"\"\nimport argparse\nimport importlib\n\nDATA_CLASS_MODULE"
  },
  {
    "path": "lab03/notebooks/lab01_pytorch.ipynb",
    "chars": 79942,
    "preview": "{\n  \"cells\": [\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"FlH0lCOttCs5\"\n      },\n      \"sou"
  },
  {
    "path": "lab03/notebooks/lab02a_lightning.ipynb",
    "chars": 57375,
    "preview": "{\n  \"cells\": [\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"FlH0lCOttCs5\"\n      },\n      \"sou"
  },
  {
    "path": "lab03/notebooks/lab02b_cnn.ipynb",
    "chars": 69401,
    "preview": "{\n  \"cells\": [\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"FlH0lCOttCs5\"\n      },\n      \"sou"
  },
  {
    "path": "lab03/notebooks/lab03_transformers.ipynb",
    "chars": 73349,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"FlH0lCOttCs5\"\n   },\n   \"source\": [\n    \"<img s"
  },
  {
    "path": "lab03/text_recognizer/__init__.py",
    "chars": 58,
    "preview": "\"\"\"Modules for creating and running a text recognizer.\"\"\"\n"
  },
  {
    "path": "lab03/text_recognizer/data/__init__.py",
    "chars": 549,
    "preview": "\"\"\"Module containing submodules for each dataset.\n\nEach dataset is defined as a class in that submodule.\n\nThe datasets s"
  },
  {
    "path": "lab03/text_recognizer/data/base_data_module.py",
    "chars": 4895,
    "preview": "\"\"\"Base DataModule class.\"\"\"\nimport argparse\nimport os\nfrom pathlib import Path\nfrom typing import Collection, Dict, Opt"
  },
  {
    "path": "lab03/text_recognizer/data/emnist.py",
    "chars": 7150,
    "preview": "\"\"\"EMNIST dataset. Downloads from NIST website and saves as .npz file if not already present.\"\"\"\nimport json\nimport os\nf"
  },
  {
    "path": "lab03/text_recognizer/data/emnist_essentials.json",
    "chars": 465,
    "preview": "{\"characters\": [\"<B>\", \"<S>\", \"<E>\", \"<P>\", \"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\", \"A\", \"B\", \"C\", \"D\", \"E\", \""
  },
  {
    "path": "lab03/text_recognizer/data/emnist_lines.py",
    "chars": 8909,
    "preview": "import argparse\nfrom collections import defaultdict\nfrom typing import Dict, Sequence\n\nimport h5py\nimport numpy as np\nim"
  },
  {
    "path": "lab03/text_recognizer/data/iam.py",
    "chars": 10287,
    "preview": "\"\"\"Class for loading the IAM handwritten text dataset, which encompasses both paragraphs and lines, plus utilities.\"\"\"\nf"
  },
  {
    "path": "lab03/text_recognizer/data/iam_paragraphs.py",
    "chars": 8229,
    "preview": "\"\"\"IAM Paragraphs Dataset class.\"\"\"\nimport argparse\nimport json\nfrom pathlib import Path\nfrom typing import Callable, Di"
  },
  {
    "path": "lab03/text_recognizer/data/mnist.py",
    "chars": 1409,
    "preview": "\"\"\"MNIST DataModule.\"\"\"\nimport argparse\n\nfrom torch.utils.data import random_split\nfrom torchvision.datasets import MNIS"
  },
  {
    "path": "lab03/text_recognizer/data/sentence_generator.py",
    "chars": 2875,
    "preview": "\"\"\"SentenceGenerator class and supporting functions.\"\"\"\nimport itertools\nimport re\nimport string\nfrom typing import List"
  },
  {
    "path": "lab03/text_recognizer/data/util.py",
    "chars": 3261,
    "preview": "\"\"\"Base Dataset class.\"\"\"\nfrom typing import Any, Callable, Dict, Sequence, Tuple, Union\n\nfrom PIL import Image\nimport t"
  },
  {
    "path": "lab03/text_recognizer/lit_models/__init__.py",
    "chars": 77,
    "preview": "from .base import BaseLitModel\n\nfrom .transformer import TransformerLitModel\n"
  },
  {
    "path": "lab03/text_recognizer/lit_models/base.py",
    "chars": 4402,
    "preview": "\"\"\"Basic LightningModules on which other modules can be built.\"\"\"\nimport argparse\n\nimport pytorch_lightning as pl\nimport"
  },
  {
    "path": "lab03/text_recognizer/lit_models/metrics.py",
    "chars": 1282,
    "preview": "\"\"\"Special-purpose metrics for tracking our model performance.\"\"\"\nfrom typing import Sequence\n\nimport torch\nimport torch"
  },
  {
    "path": "lab03/text_recognizer/lit_models/transformer.py",
    "chars": 4179,
    "preview": "\"\"\"An encoder-decoder Transformer model\"\"\"\nfrom typing import List, Sequence\n\nimport torch\n\nfrom .base import BaseImageT"
  },
  {
    "path": "lab03/text_recognizer/lit_models/util.py",
    "chars": 2511,
    "preview": "from typing import Union\n\nimport torch\n\n\ndef first_appearance(x: torch.Tensor, element: Union[int, float], dim: int = 1)"
  },
  {
    "path": "lab03/text_recognizer/metadata/emnist.py",
    "chars": 1381,
    "preview": "from pathlib import Path\n\nimport text_recognizer.metadata.shared as shared\n\nRAW_DATA_DIRNAME = shared.DATA_DIRNAME / \"ra"
  },
  {
    "path": "lab03/text_recognizer/metadata/emnist_lines.py",
    "chars": 469,
    "preview": "from pathlib import Path\n\nimport text_recognizer.metadata.emnist as emnist\nimport text_recognizer.metadata.shared as sha"
  },
  {
    "path": "lab03/text_recognizer/metadata/iam.py",
    "chars": 431,
    "preview": "import text_recognizer.metadata.shared as shared\n\nRAW_DATA_DIRNAME = shared.DATA_DIRNAME / \"raw\" / \"iam\"\nMETADATA_FILENA"
  },
  {
    "path": "lab03/text_recognizer/metadata/iam_paragraphs.py",
    "chars": 447,
    "preview": "import text_recognizer.metadata.emnist as emnist\nimport text_recognizer.metadata.shared as shared\n\n\nPROCESSED_DATA_DIRNA"
  },
  {
    "path": "lab03/text_recognizer/metadata/mnist.py",
    "chars": 246,
    "preview": "\"\"\"Metadata for the MNIST dataset.\"\"\"\nimport text_recognizer.metadata.shared as shared\n\nDOWNLOADED_DATA_DIRNAME = shared"
  },
  {
    "path": "lab03/text_recognizer/metadata/shared.py",
    "chars": 140,
    "preview": "from pathlib import Path\n\nDATA_DIRNAME = Path(__file__).resolve().parents[3] / \"data\"\nDOWNLOADED_DATA_DIRNAME = DATA_DIR"
  },
  {
    "path": "lab03/text_recognizer/models/__init__.py",
    "chars": 197,
    "preview": "\"\"\"Models for character and text recognition in images.\"\"\"\nfrom .mlp import MLP\n\nfrom .cnn import CNN\nfrom .line_cnn_sim"
  },
  {
    "path": "lab03/text_recognizer/models/cnn.py",
    "chars": 3497,
    "preview": "\"\"\"Basic convolutional model building blocks.\"\"\"\nimport argparse\nfrom typing import Any, Dict\n\nimport torch\nfrom torch i"
  },
  {
    "path": "lab03/text_recognizer/models/line_cnn_simple.py",
    "chars": 3024,
    "preview": "\"\"\"Simplest version of LineCNN that works on cleanly-separated characters.\"\"\"\nimport argparse\nimport math\nfrom typing im"
  },
  {
    "path": "lab03/text_recognizer/models/mlp.py",
    "chars": 1512,
    "preview": "import argparse\nfrom typing import Any, Dict\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.func"
  },
  {
    "path": "lab03/text_recognizer/models/resnet_transformer.py",
    "chars": 7404,
    "preview": "\"\"\"Model combining a ResNet with a Transformer for image-to-sequence tasks.\"\"\"\nimport argparse\nimport math\nfrom typing i"
  },
  {
    "path": "lab03/text_recognizer/models/transformer_util.py",
    "chars": 3217,
    "preview": "\"\"\"Position Encoding and other utilities for Transformers.\"\"\"\nimport math\n\nimport torch\nfrom torch import Tensor\nimport "
  },
  {
    "path": "lab03/text_recognizer/stems/image.py",
    "chars": 1277,
    "preview": "import torch\nfrom torchvision import transforms\n\n\nclass ImageStem:\n    \"\"\"A stem for models operating on images.\n\n    Im"
  },
  {
    "path": "lab03/text_recognizer/stems/paragraph.py",
    "chars": 2427,
    "preview": "\"\"\"IAMParagraphs Stem class.\"\"\"\nimport torchvision.transforms as transforms\n\nimport text_recognizer.metadata.iam_paragra"
  },
  {
    "path": "lab03/text_recognizer/util.py",
    "chars": 2278,
    "preview": "\"\"\"Utility functions for text_recognizer module.\"\"\"\nimport base64\nimport contextlib\nimport hashlib\nfrom io import BytesI"
  },
  {
    "path": "lab03/training/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "lab03/training/run_experiment.py",
    "chars": 5461,
    "preview": "\"\"\"Experiment-running framework.\"\"\"\nimport argparse\nfrom pathlib import Path\n\nimport numpy as np\nimport pytorch_lightnin"
  },
  {
    "path": "lab03/training/util.py",
    "chars": 823,
    "preview": "\"\"\"Utilities for model development scripts: training and staging.\"\"\"\nimport argparse\nimport importlib\n\nDATA_CLASS_MODULE"
  },
  {
    "path": "lab04/notebooks/lab01_pytorch.ipynb",
    "chars": 79942,
    "preview": "{\n  \"cells\": [\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"FlH0lCOttCs5\"\n      },\n      \"sou"
  },
  {
    "path": "lab04/notebooks/lab02a_lightning.ipynb",
    "chars": 57375,
    "preview": "{\n  \"cells\": [\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"FlH0lCOttCs5\"\n      },\n      \"sou"
  },
  {
    "path": "lab04/notebooks/lab02b_cnn.ipynb",
    "chars": 69401,
    "preview": "{\n  \"cells\": [\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"FlH0lCOttCs5\"\n      },\n      \"sou"
  },
  {
    "path": "lab04/notebooks/lab03_transformers.ipynb",
    "chars": 73349,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"FlH0lCOttCs5\"\n   },\n   \"source\": [\n    \"<img s"
  },
  {
    "path": "lab04/notebooks/lab04_experiments.ipynb",
    "chars": 69482,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"FlH0lCOttCs5\"\n   },\n   \"source\": [\n    \"<img s"
  },
  {
    "path": "lab04/text_recognizer/__init__.py",
    "chars": 58,
    "preview": "\"\"\"Modules for creating and running a text recognizer.\"\"\"\n"
  },
  {
    "path": "lab04/text_recognizer/callbacks/__init__.py",
    "chars": 164,
    "preview": "from .model import ModelSizeLogger\nfrom .optim import LearningRateMonitor\n\nfrom . import imtotext\nfrom .imtotext import "
  },
  {
    "path": "lab04/text_recognizer/callbacks/imtotext.py",
    "chars": 3870,
    "preview": "import pytorch_lightning as pl\nfrom pytorch_lightning.utilities import rank_zero_only\n\ntry:\n    import wandb\n\n    has_wa"
  },
  {
    "path": "lab04/text_recognizer/callbacks/model.py",
    "chars": 2761,
    "preview": "import os\nfrom pathlib import Path\nimport tempfile\n\nimport pytorch_lightning as pl\nfrom pytorch_lightning.utilities.rank"
  },
  {
    "path": "lab04/text_recognizer/callbacks/optim.py",
    "chars": 411,
    "preview": "import pytorch_lightning as pl\n\nKEY = \"optimizer\"\n\n\nclass LearningRateMonitor(pl.callbacks.LearningRateMonitor):\n    \"\"\""
  },
  {
    "path": "lab04/text_recognizer/callbacks/util.py",
    "chars": 384,
    "preview": "import logging\n\nlogging.basicConfig(level=logging.WARNING)\n\n\ndef check_and_warn(logger, attribute, feature):\n    if not "
  },
  {
    "path": "lab04/text_recognizer/data/__init__.py",
    "chars": 583,
    "preview": "\"\"\"Module containing submodules for each dataset.\n\nEach dataset is defined as a class in that submodule.\n\nThe datasets s"
  },
  {
    "path": "lab04/text_recognizer/data/base_data_module.py",
    "chars": 4895,
    "preview": "\"\"\"Base DataModule class.\"\"\"\nimport argparse\nimport os\nfrom pathlib import Path\nfrom typing import Collection, Dict, Opt"
  },
  {
    "path": "lab04/text_recognizer/data/emnist.py",
    "chars": 7150,
    "preview": "\"\"\"EMNIST dataset. Downloads from NIST website and saves as .npz file if not already present.\"\"\"\nimport json\nimport os\nf"
  },
  {
    "path": "lab04/text_recognizer/data/emnist_essentials.json",
    "chars": 465,
    "preview": "{\"characters\": [\"<B>\", \"<S>\", \"<E>\", \"<P>\", \"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\", \"A\", \"B\", \"C\", \"D\", \"E\", \""
  },
  {
    "path": "lab04/text_recognizer/data/emnist_lines.py",
    "chars": 8909,
    "preview": "import argparse\nfrom collections import defaultdict\nfrom typing import Dict, Sequence\n\nimport h5py\nimport numpy as np\nim"
  },
  {
    "path": "lab04/text_recognizer/data/iam.py",
    "chars": 10287,
    "preview": "\"\"\"Class for loading the IAM handwritten text dataset, which encompasses both paragraphs and lines, plus utilities.\"\"\"\nf"
  },
  {
    "path": "lab04/text_recognizer/data/iam_lines.py",
    "chars": 7307,
    "preview": "\"\"\"A dataset of lines of handwritten text derived from the IAM dataset.\"\"\"\nimport argparse\nimport json\nfrom pathlib impo"
  },
  {
    "path": "lab04/text_recognizer/data/iam_paragraphs.py",
    "chars": 8229,
    "preview": "\"\"\"IAM Paragraphs Dataset class.\"\"\"\nimport argparse\nimport json\nfrom pathlib import Path\nfrom typing import Callable, Di"
  },
  {
    "path": "lab04/text_recognizer/data/mnist.py",
    "chars": 1409,
    "preview": "\"\"\"MNIST DataModule.\"\"\"\nimport argparse\n\nfrom torch.utils.data import random_split\nfrom torchvision.datasets import MNIS"
  },
  {
    "path": "lab04/text_recognizer/data/sentence_generator.py",
    "chars": 2875,
    "preview": "\"\"\"SentenceGenerator class and supporting functions.\"\"\"\nimport itertools\nimport re\nimport string\nfrom typing import List"
  },
  {
    "path": "lab04/text_recognizer/data/util.py",
    "chars": 3261,
    "preview": "\"\"\"Base Dataset class.\"\"\"\nfrom typing import Any, Callable, Dict, Sequence, Tuple, Union\n\nfrom PIL import Image\nimport t"
  },
  {
    "path": "lab04/text_recognizer/lit_models/__init__.py",
    "chars": 77,
    "preview": "from .base import BaseLitModel\n\nfrom .transformer import TransformerLitModel\n"
  },
  {
    "path": "lab04/text_recognizer/lit_models/base.py",
    "chars": 4989,
    "preview": "\"\"\"Basic LightningModules on which other modules can be built.\"\"\"\nimport argparse\n\nimport pytorch_lightning as pl\nimport"
  },
  {
    "path": "lab04/text_recognizer/lit_models/metrics.py",
    "chars": 1282,
    "preview": "\"\"\"Special-purpose metrics for tracking our model performance.\"\"\"\nfrom typing import Sequence\n\nimport torch\nimport torch"
  },
  {
    "path": "lab04/text_recognizer/lit_models/transformer.py",
    "chars": 4898,
    "preview": "\"\"\"An encoder-decoder Transformer model\"\"\"\nfrom typing import List, Sequence\n\nimport torch\n\nfrom .base import BaseImageT"
  },
  {
    "path": "lab04/text_recognizer/lit_models/util.py",
    "chars": 2511,
    "preview": "from typing import Union\n\nimport torch\n\n\ndef first_appearance(x: torch.Tensor, element: Union[int, float], dim: int = 1)"
  },
  {
    "path": "lab04/text_recognizer/metadata/emnist.py",
    "chars": 1381,
    "preview": "from pathlib import Path\n\nimport text_recognizer.metadata.shared as shared\n\nRAW_DATA_DIRNAME = shared.DATA_DIRNAME / \"ra"
  },
  {
    "path": "lab04/text_recognizer/metadata/emnist_lines.py",
    "chars": 469,
    "preview": "from pathlib import Path\n\nimport text_recognizer.metadata.emnist as emnist\nimport text_recognizer.metadata.shared as sha"
  },
  {
    "path": "lab04/text_recognizer/metadata/iam.py",
    "chars": 431,
    "preview": "import text_recognizer.metadata.shared as shared\n\nRAW_DATA_DIRNAME = shared.DATA_DIRNAME / \"raw\" / \"iam\"\nMETADATA_FILENA"
  },
  {
    "path": "lab04/text_recognizer/metadata/iam_lines.py",
    "chars": 489,
    "preview": "import text_recognizer.metadata.emnist as emnist\nimport text_recognizer.metadata.shared as shared\n\nPROCESSED_DATA_DIRNAM"
  },
  {
    "path": "lab04/text_recognizer/metadata/iam_paragraphs.py",
    "chars": 447,
    "preview": "import text_recognizer.metadata.emnist as emnist\nimport text_recognizer.metadata.shared as shared\n\n\nPROCESSED_DATA_DIRNA"
  },
  {
    "path": "lab04/text_recognizer/metadata/mnist.py",
    "chars": 246,
    "preview": "\"\"\"Metadata for the MNIST dataset.\"\"\"\nimport text_recognizer.metadata.shared as shared\n\nDOWNLOADED_DATA_DIRNAME = shared"
  },
  {
    "path": "lab04/text_recognizer/metadata/shared.py",
    "chars": 140,
    "preview": "from pathlib import Path\n\nDATA_DIRNAME = Path(__file__).resolve().parents[3] / \"data\"\nDOWNLOADED_DATA_DIRNAME = DATA_DIR"
  },
  {
    "path": "lab04/text_recognizer/models/__init__.py",
    "chars": 252,
    "preview": "\"\"\"Models for character and text recognition in images.\"\"\"\nfrom .mlp import MLP\n\nfrom .cnn import CNN\nfrom .line_cnn_sim"
  },
  {
    "path": "lab04/text_recognizer/models/cnn.py",
    "chars": 3497,
    "preview": "\"\"\"Basic convolutional model building blocks.\"\"\"\nimport argparse\nfrom typing import Any, Dict\n\nimport torch\nfrom torch i"
  },
  {
    "path": "lab04/text_recognizer/models/line_cnn.py",
    "chars": 5355,
    "preview": "\"\"\"Basic building blocks for convolutional models over lines of text.\"\"\"\nimport argparse\nimport math\nfrom typing import "
  },
  {
    "path": "lab04/text_recognizer/models/line_cnn_simple.py",
    "chars": 3024,
    "preview": "\"\"\"Simplest version of LineCNN that works on cleanly-separated characters.\"\"\"\nimport argparse\nimport math\nfrom typing im"
  },
  {
    "path": "lab04/text_recognizer/models/line_cnn_transformer.py",
    "chars": 5552,
    "preview": "\"\"\"Model that combines a LineCNN with a Transformer model for text prediction.\"\"\"\nimport argparse\nimport math\nfrom typin"
  },
  {
    "path": "lab04/text_recognizer/models/mlp.py",
    "chars": 1512,
    "preview": "import argparse\nfrom typing import Any, Dict\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.func"
  },
  {
    "path": "lab04/text_recognizer/models/resnet_transformer.py",
    "chars": 7404,
    "preview": "\"\"\"Model combining a ResNet with a Transformer for image-to-sequence tasks.\"\"\"\nimport argparse\nimport math\nfrom typing i"
  },
  {
    "path": "lab04/text_recognizer/models/transformer_util.py",
    "chars": 3217,
    "preview": "\"\"\"Position Encoding and other utilities for Transformers.\"\"\"\nimport math\n\nimport torch\nfrom torch import Tensor\nimport "
  },
  {
    "path": "lab04/text_recognizer/stems/image.py",
    "chars": 1277,
    "preview": "import torch\nfrom torchvision import transforms\n\n\nclass ImageStem:\n    \"\"\"A stem for models operating on images.\n\n    Im"
  },
  {
    "path": "lab04/text_recognizer/stems/line.py",
    "chars": 2991,
    "preview": "import random\n\nfrom PIL import Image\nfrom torchvision import transforms\n\n\nimport text_recognizer.metadata.iam_lines as m"
  },
  {
    "path": "lab04/text_recognizer/stems/paragraph.py",
    "chars": 2427,
    "preview": "\"\"\"IAMParagraphs Stem class.\"\"\"\nimport torchvision.transforms as transforms\n\nimport text_recognizer.metadata.iam_paragra"
  },
  {
    "path": "lab04/text_recognizer/util.py",
    "chars": 2278,
    "preview": "\"\"\"Utility functions for text_recognizer module.\"\"\"\nimport base64\nimport contextlib\nimport hashlib\nfrom io import BytesI"
  },
  {
    "path": "lab04/training/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "lab04/training/run_experiment.py",
    "chars": 6255,
    "preview": "\"\"\"Experiment-running framework.\"\"\"\nimport argparse\nfrom pathlib import Path\n\nimport numpy as np\nimport pytorch_lightnin"
  },
  {
    "path": "lab04/training/util.py",
    "chars": 823,
    "preview": "\"\"\"Utilities for model development scripts: training and staging.\"\"\"\nimport argparse\nimport importlib\n\nDATA_CLASS_MODULE"
  },
  {
    "path": "lab05/.flake8",
    "chars": 1897,
    "preview": "[flake8]\nselect = ANN,B,B9,BLK,C,D,E,F,I,S,W\n  # only check selected error codes\nmax-complexity = 12\n  # C9 - flake8 McC"
  },
  {
    "path": "lab05/.github/workflows/pre-commit.yml",
    "chars": 322,
    "preview": "name: pre-commit\n\non:\n  pull_request:\n  push:\n  # allows this Action to be triggered manually\n  workflow_dispatch:\n\njobs"
  },
  {
    "path": "lab05/.pre-commit-config.yaml",
    "chars": 1902,
    "preview": "repos:\n  # a set of useful Python-based pre-commit hooks\n  - repo: https://github.com/pre-commit/pre-commit-hooks\n    re"
  },
  {
    "path": "lab05/notebooks/lab01_pytorch.ipynb",
    "chars": 79942,
    "preview": "{\n  \"cells\": [\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"FlH0lCOttCs5\"\n      },\n      \"sou"
  },
  {
    "path": "lab05/notebooks/lab02a_lightning.ipynb",
    "chars": 57375,
    "preview": "{\n  \"cells\": [\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"FlH0lCOttCs5\"\n      },\n      \"sou"
  },
  {
    "path": "lab05/notebooks/lab02b_cnn.ipynb",
    "chars": 69401,
    "preview": "{\n  \"cells\": [\n    {\n      \"cell_type\": \"markdown\",\n      \"metadata\": {\n        \"id\": \"FlH0lCOttCs5\"\n      },\n      \"sou"
  },
  {
    "path": "lab05/notebooks/lab03_transformers.ipynb",
    "chars": 73349,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"FlH0lCOttCs5\"\n   },\n   \"source\": [\n    \"<img s"
  },
  {
    "path": "lab05/notebooks/lab04_experiments.ipynb",
    "chars": 69482,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"FlH0lCOttCs5\"\n   },\n   \"source\": [\n    \"<img s"
  },
  {
    "path": "lab05/notebooks/lab05_troubleshooting.ipynb",
    "chars": 76249,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {\n    \"id\": \"FlH0lCOttCs5\"\n   },\n   \"source\": [\n    \"<img s"
  },
  {
    "path": "lab05/tasks/lint.sh",
    "chars": 539,
    "preview": "#!/bin/bash\nset -uo pipefail\nset +e\n\nFAILURE=false\n\n# apply automatic formatting\necho \"black\"\npre-commit run black || FA"
  },
  {
    "path": "lab05/text_recognizer/__init__.py",
    "chars": 58,
    "preview": "\"\"\"Modules for creating and running a text recognizer.\"\"\"\n"
  },
  {
    "path": "lab05/text_recognizer/callbacks/__init__.py",
    "chars": 164,
    "preview": "from .model import ModelSizeLogger\nfrom .optim import LearningRateMonitor\n\nfrom . import imtotext\nfrom .imtotext import "
  },
  {
    "path": "lab05/text_recognizer/callbacks/imtotext.py",
    "chars": 3870,
    "preview": "import pytorch_lightning as pl\nfrom pytorch_lightning.utilities import rank_zero_only\n\ntry:\n    import wandb\n\n    has_wa"
  },
  {
    "path": "lab05/text_recognizer/callbacks/model.py",
    "chars": 2761,
    "preview": "import os\nfrom pathlib import Path\nimport tempfile\n\nimport pytorch_lightning as pl\nfrom pytorch_lightning.utilities.rank"
  },
  {
    "path": "lab05/text_recognizer/callbacks/optim.py",
    "chars": 411,
    "preview": "import pytorch_lightning as pl\n\nKEY = \"optimizer\"\n\n\nclass LearningRateMonitor(pl.callbacks.LearningRateMonitor):\n    \"\"\""
  },
  {
    "path": "lab05/text_recognizer/callbacks/util.py",
    "chars": 384,
    "preview": "import logging\n\nlogging.basicConfig(level=logging.WARNING)\n\n\ndef check_and_warn(logger, attribute, feature):\n    if not "
  },
  {
    "path": "lab05/text_recognizer/data/__init__.py",
    "chars": 624,
    "preview": "\"\"\"Module containing submodules for each dataset.\n\nEach dataset is defined as a class in that submodule.\n\nThe datasets s"
  },
  {
    "path": "lab05/text_recognizer/data/base_data_module.py",
    "chars": 4895,
    "preview": "\"\"\"Base DataModule class.\"\"\"\nimport argparse\nimport os\nfrom pathlib import Path\nfrom typing import Collection, Dict, Opt"
  },
  {
    "path": "lab05/text_recognizer/data/emnist.py",
    "chars": 7150,
    "preview": "\"\"\"EMNIST dataset. Downloads from NIST website and saves as .npz file if not already present.\"\"\"\nimport json\nimport os\nf"
  },
  {
    "path": "lab05/text_recognizer/data/emnist_essentials.json",
    "chars": 465,
    "preview": "{\"characters\": [\"<B>\", \"<S>\", \"<E>\", \"<P>\", \"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\", \"A\", \"B\", \"C\", \"D\", \"E\", \""
  },
  {
    "path": "lab05/text_recognizer/data/emnist_lines.py",
    "chars": 8909,
    "preview": "import argparse\nfrom collections import defaultdict\nfrom typing import Dict, Sequence\n\nimport h5py\nimport numpy as np\nim"
  },
  {
    "path": "lab05/text_recognizer/data/fake_images.py",
    "chars": 1704,
    "preview": "\"\"\"A fake image dataset for testing.\"\"\"\nimport argparse\n\nimport torch\nimport torchvision\n\nfrom text_recognizer.data.base"
  },
  {
    "path": "lab05/text_recognizer/data/iam.py",
    "chars": 10287,
    "preview": "\"\"\"Class for loading the IAM handwritten text dataset, which encompasses both paragraphs and lines, plus utilities.\"\"\"\nf"
  },
  {
    "path": "lab05/text_recognizer/data/iam_lines.py",
    "chars": 7307,
    "preview": "\"\"\"A dataset of lines of handwritten text derived from the IAM dataset.\"\"\"\nimport argparse\nimport json\nfrom pathlib impo"
  },
  {
    "path": "lab05/text_recognizer/data/iam_paragraphs.py",
    "chars": 8229,
    "preview": "\"\"\"IAM Paragraphs Dataset class.\"\"\"\nimport argparse\nimport json\nfrom pathlib import Path\nfrom typing import Callable, Di"
  },
  {
    "path": "lab05/text_recognizer/data/mnist.py",
    "chars": 1409,
    "preview": "\"\"\"MNIST DataModule.\"\"\"\nimport argparse\n\nfrom torch.utils.data import random_split\nfrom torchvision.datasets import MNIS"
  },
  {
    "path": "lab05/text_recognizer/data/sentence_generator.py",
    "chars": 2875,
    "preview": "\"\"\"SentenceGenerator class and supporting functions.\"\"\"\nimport itertools\nimport re\nimport string\nfrom typing import List"
  },
  {
    "path": "lab05/text_recognizer/data/util.py",
    "chars": 3261,
    "preview": "\"\"\"Base Dataset class.\"\"\"\nfrom typing import Any, Callable, Dict, Sequence, Tuple, Union\n\nfrom PIL import Image\nimport t"
  },
  {
    "path": "lab05/text_recognizer/lit_models/__init__.py",
    "chars": 77,
    "preview": "from .base import BaseLitModel\n\nfrom .transformer import TransformerLitModel\n"
  },
  {
    "path": "lab05/text_recognizer/lit_models/base.py",
    "chars": 4989,
    "preview": "\"\"\"Basic LightningModules on which other modules can be built.\"\"\"\nimport argparse\n\nimport pytorch_lightning as pl\nimport"
  },
  {
    "path": "lab05/text_recognizer/lit_models/metrics.py",
    "chars": 1282,
    "preview": "\"\"\"Special-purpose metrics for tracking our model performance.\"\"\"\nfrom typing import Sequence\n\nimport torch\nimport torch"
  },
  {
    "path": "lab05/text_recognizer/lit_models/transformer.py",
    "chars": 4898,
    "preview": "\"\"\"An encoder-decoder Transformer model\"\"\"\nfrom typing import List, Sequence\n\nimport torch\n\nfrom .base import BaseImageT"
  },
  {
    "path": "lab05/text_recognizer/lit_models/util.py",
    "chars": 2511,
    "preview": "from typing import Union\n\nimport torch\n\n\ndef first_appearance(x: torch.Tensor, element: Union[int, float], dim: int = 1)"
  },
  {
    "path": "lab05/text_recognizer/metadata/emnist.py",
    "chars": 1381,
    "preview": "from pathlib import Path\n\nimport text_recognizer.metadata.shared as shared\n\nRAW_DATA_DIRNAME = shared.DATA_DIRNAME / \"ra"
  },
  {
    "path": "lab05/text_recognizer/metadata/emnist_lines.py",
    "chars": 469,
    "preview": "from pathlib import Path\n\nimport text_recognizer.metadata.emnist as emnist\nimport text_recognizer.metadata.shared as sha"
  },
  {
    "path": "lab05/text_recognizer/metadata/iam.py",
    "chars": 431,
    "preview": "import text_recognizer.metadata.shared as shared\n\nRAW_DATA_DIRNAME = shared.DATA_DIRNAME / \"raw\" / \"iam\"\nMETADATA_FILENA"
  },
  {
    "path": "lab05/text_recognizer/metadata/iam_lines.py",
    "chars": 489,
    "preview": "import text_recognizer.metadata.emnist as emnist\nimport text_recognizer.metadata.shared as shared\n\nPROCESSED_DATA_DIRNAM"
  },
  {
    "path": "lab05/text_recognizer/metadata/iam_paragraphs.py",
    "chars": 447,
    "preview": "import text_recognizer.metadata.emnist as emnist\nimport text_recognizer.metadata.shared as shared\n\n\nPROCESSED_DATA_DIRNA"
  },
  {
    "path": "lab05/text_recognizer/metadata/mnist.py",
    "chars": 246,
    "preview": "\"\"\"Metadata for the MNIST dataset.\"\"\"\nimport text_recognizer.metadata.shared as shared\n\nDOWNLOADED_DATA_DIRNAME = shared"
  },
  {
    "path": "lab05/text_recognizer/metadata/shared.py",
    "chars": 140,
    "preview": "from pathlib import Path\n\nDATA_DIRNAME = Path(__file__).resolve().parents[3] / \"data\"\nDOWNLOADED_DATA_DIRNAME = DATA_DIR"
  },
  {
    "path": "lab05/text_recognizer/models/__init__.py",
    "chars": 252,
    "preview": "\"\"\"Models for character and text recognition in images.\"\"\"\nfrom .mlp import MLP\n\nfrom .cnn import CNN\nfrom .line_cnn_sim"
  },
  {
    "path": "lab05/text_recognizer/models/cnn.py",
    "chars": 3497,
    "preview": "\"\"\"Basic convolutional model building blocks.\"\"\"\nimport argparse\nfrom typing import Any, Dict\n\nimport torch\nfrom torch i"
  },
  {
    "path": "lab05/text_recognizer/models/line_cnn.py",
    "chars": 5355,
    "preview": "\"\"\"Basic building blocks for convolutional models over lines of text.\"\"\"\nimport argparse\nimport math\nfrom typing import "
  },
  {
    "path": "lab05/text_recognizer/models/line_cnn_simple.py",
    "chars": 3024,
    "preview": "\"\"\"Simplest version of LineCNN that works on cleanly-separated characters.\"\"\"\nimport argparse\nimport math\nfrom typing im"
  },
  {
    "path": "lab05/text_recognizer/models/line_cnn_transformer.py",
    "chars": 5552,
    "preview": "\"\"\"Model that combines a LineCNN with a Transformer model for text prediction.\"\"\"\nimport argparse\nimport math\nfrom typin"
  },
  {
    "path": "lab05/text_recognizer/models/mlp.py",
    "chars": 1512,
    "preview": "import argparse\nfrom typing import Any, Dict\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.func"
  },
  {
    "path": "lab05/text_recognizer/models/resnet_transformer.py",
    "chars": 7404,
    "preview": "\"\"\"Model combining a ResNet with a Transformer for image-to-sequence tasks.\"\"\"\nimport argparse\nimport math\nfrom typing i"
  },
  {
    "path": "lab05/text_recognizer/models/transformer_util.py",
    "chars": 3217,
    "preview": "\"\"\"Position Encoding and other utilities for Transformers.\"\"\"\nimport math\n\nimport torch\nfrom torch import Tensor\nimport "
  },
  {
    "path": "lab05/text_recognizer/stems/image.py",
    "chars": 1277,
    "preview": "import torch\nfrom torchvision import transforms\n\n\nclass ImageStem:\n    \"\"\"A stem for models operating on images.\n\n    Im"
  },
  {
    "path": "lab05/text_recognizer/stems/line.py",
    "chars": 2991,
    "preview": "import random\n\nfrom PIL import Image\nfrom torchvision import transforms\n\n\nimport text_recognizer.metadata.iam_lines as m"
  },
  {
    "path": "lab05/text_recognizer/stems/paragraph.py",
    "chars": 2427,
    "preview": "\"\"\"IAMParagraphs Stem class.\"\"\"\nimport torchvision.transforms as transforms\n\nimport text_recognizer.metadata.iam_paragra"
  },
  {
    "path": "lab05/text_recognizer/tests/test_callback_utils.py",
    "chars": 1311,
    "preview": "\"\"\"Tests for the text_recognizer.callbacks.util module.\"\"\"\nimport random\nimport string\nimport tempfile\n\nimport pytorch_l"
  },
  {
    "path": "lab05/text_recognizer/tests/test_iam.py",
    "chars": 685,
    "preview": "\"\"\"Test for data.iam module.\"\"\"\nfrom text_recognizer.data.iam import IAM\n\n\ndef test_iam_parsed_lines():\n    \"\"\"Tests tha"
  },
  {
    "path": "lab05/text_recognizer/util.py",
    "chars": 2278,
    "preview": "\"\"\"Utility functions for text_recognizer module.\"\"\"\nimport base64\nimport contextlib\nimport hashlib\nfrom io import BytesI"
  },
  {
    "path": "lab05/training/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "lab05/training/run_experiment.py",
    "chars": 6945,
    "preview": "\"\"\"Experiment-running framework.\"\"\"\nimport argparse\nfrom pathlib import Path\n\nimport numpy as np\nimport pytorch_lightnin"
  },
  {
    "path": "lab05/training/tests/test_memorize_iam.sh",
    "chars": 1189,
    "preview": "#!/bin/bash\nset -uo pipefail\nset +e\n\n# tests whether we can achieve a criterion loss\n#  on a single batch within a certa"
  },
  {
    "path": "lab05/training/tests/test_run_experiment.sh",
    "chars": 753,
    "preview": "#!/bin/bash\nset -uo pipefail\nset +e\n\nFAILURE=false\n\necho \"running full loop test with CNN on fake data\"\npython training/"
  },
  {
    "path": "lab05/training/util.py",
    "chars": 823,
    "preview": "\"\"\"Utilities for model development scripts: training and staging.\"\"\"\nimport argparse\nimport importlib\n\nDATA_CLASS_MODULE"
  },
  {
    "path": "lab06/.flake8",
    "chars": 1897,
    "preview": "[flake8]\nselect = ANN,B,B9,BLK,C,D,E,F,I,S,W\n  # only check selected error codes\nmax-complexity = 12\n  # C9 - flake8 McC"
  },
  {
    "path": "lab06/.github/workflows/pre-commit.yml",
    "chars": 322,
    "preview": "name: pre-commit\n\non:\n  pull_request:\n  push:\n  # allows this Action to be triggered manually\n  workflow_dispatch:\n\njobs"
  },
  {
    "path": "lab06/.pre-commit-config.yaml",
    "chars": 1902,
    "preview": "repos:\n  # a set of useful Python-based pre-commit hooks\n  - repo: https://github.com/pre-commit/pre-commit-hooks\n    re"
  }
]

// ... and 224 more files (download for full content)

About this extraction

This page contains the full source code of the full-stack-deep-learning/fsdl-text-recognizer-2022-labs GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 424 files (3.9 MB), approximately 1.0M tokens, and a symbol index with 1635 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
