Full Code of analysiscenter/cardio for AI

Repository: analysiscenter/cardio
Branch: master
Commit: eb6d1cbe7f11
Files: 75
Total size: 3.0 MB

Directory structure:
gitextract_p3quxox6/

├── .gitattributes
├── .gitignore
├── .gitmodules
├── CONTRIBUTING.md
├── LICENSE
├── MANIFEST.in
├── README.md
├── RELEASE.md
├── cardio/
│   ├── __init__.py
│   ├── core/
│   │   ├── __init__.py
│   │   ├── ecg_batch.py
│   │   ├── ecg_batch_tools.py
│   │   ├── ecg_dataset.py
│   │   ├── kernels.py
│   │   └── utils.py
│   ├── models/
│   │   ├── __init__.py
│   │   ├── dirichlet_model/
│   │   │   ├── __init__.py
│   │   │   ├── dirichlet_model.py
│   │   │   └── dirichlet_model_training.ipynb
│   │   ├── fft_model/
│   │   │   ├── __init__.py
│   │   │   ├── fft_model.py
│   │   │   └── fft_model_training.ipynb
│   │   ├── hmm/
│   │   │   ├── __init__.py
│   │   │   ├── hmm.py
│   │   │   └── hmmodel_training.ipynb
│   │   ├── keras_custom_objects.py
│   │   ├── layers.py
│   │   └── metrics.py
│   ├── pipelines/
│   │   ├── __init__.py
│   │   └── pipelines.py
│   └── tests/
│       ├── data/
│       │   ├── A00001.hea
│       │   ├── A00001.mat
│       │   ├── A00002.hea
│       │   ├── A00002.mat
│       │   ├── A00004.hea
│       │   ├── A00004.mat
│       │   ├── A00005.hea
│       │   ├── A00005.mat
│       │   ├── A00008.hea
│       │   ├── A00008.mat
│       │   ├── A00013.hea
│       │   ├── A00013.mat
│       │   ├── REFERENCE.csv
│       │   ├── sample.dcm
│       │   ├── sample.edf
│       │   ├── sample.xml
│       │   ├── sel100.atr
│       │   ├── sel100.hea
│       │   └── sel100.pu1
│       └── test_ecgbatch.py
├── docs/
│   ├── Makefile
│   ├── api/
│   │   ├── api.rst
│   │   ├── core.rst
│   │   ├── models.rst
│   │   └── pipelines.rst
│   ├── conf.py
│   ├── index.rst
│   ├── make.bat
│   ├── modules/
│   │   ├── core.rst
│   │   ├── models.rst
│   │   ├── modules.rst
│   │   └── pipelines.rst
│   └── tutorials.rst
├── examples/
│   ├── Getting_started.ipynb
│   └── Load_XML.ipynb
├── pylintrc
├── requirements-shippable.txt
├── requirements.txt
├── setup.py
├── shippable.yml
└── tutorials/
    ├── I.CardIO.ipynb
    ├── II.Pipelines.ipynb
    ├── III.Models.ipynb
    ├── IV.Research.ipynb
    └── pn2017_data_to_wfdb_format.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitattributes
================================================
# Set the default behavior, in case people don't have core.autocrlf set.
* text=auto

# Explicitly declare text files you want to always be normalized on checkout.
*.py text
*.sh text


================================================
FILE: .gitignore
================================================
*.pyc
docs/_build/
.cache/
__pycache__/
.ipynb_checkpoints/


================================================
FILE: .gitmodules
================================================
[submodule "cardio/batchflow"]
	path = cardio/batchflow
	url = https://github.com/analysiscenter/dataset.git


================================================
FILE: CONTRIBUTING.md
================================================
- Before any operations with repositories, every user must have a name and email address configured:
```bash
git config --global user.name "Firstname Lastnameov"
git config --global user.email f.lastnameov@analysiscenter.ru
```
The email **must match** the email specified in your GitHub account (the account may list several emails).

- The root directory of every repository must contain a README.md file with a brief description of the project and the source code structure, installation instructions, and links to the documentation.

- All substantive files should be placed in subdirectories; the root directory should hold only descriptive files (README.md, INSTALL.md, etc.),
installation files (setup.py, requirements.txt, etc.), and configuration and make files.

- File names must contain only Latin letters. Spaces in file names are not allowed.

- Commits to the `master` branch are not allowed. It must be protected from deletion and history rewriting
(Settings - Branches - Protected branches).

- Changes to the source code and repository files should be made only within the scope of issues.
For each change the assignee opens a separate branch named like <iTASK-ID>-<short branch name> (e.g., `i15-dataset` or `i22-HMM`).

- Several branches may be created in one repository within the scope of a single issue.

- If there is no issue for your change, it makes sense to open one and register it explicitly in the issue tracker.

- Commit to working branches regularly, so that each commit contains changes that are not too large,
yet complete and independent of everything else
(it is better to commit 3 changed lines than 300 at once).

- Each commit must have a one-line English message (20-60 characters long)
that reflects the source code and file changes it includes.

- A more detailed description of the changes should be kept in a HISTORY.md file placed in the root directory of the repository.

- Having finished the task and completed all changes, the assignee opens a pull request to merge the working branch into the production branch (e.g., master).

- Before merging, the working branch must not be behind the production branch (which can be checked with `git status`). To ensure this, synchronize the working branch first (`git pull`).

- After merging, the working branch is deleted.
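As an illustration of the workflow above (identity configuration and the `i<TASK-ID>-<short name>` branch naming), a session might look like this; the throwaway repository, issue number, and branch name are hypothetical:

```shell
# Demo in a throwaway repository so nothing real is touched.
repo=$(mktemp -d)
cd "$repo"
git init -q .

# Identity must be configured before any commits (use --global in real setups;
# the email must match the one in your GitHub account).
git config user.name "Firstname Lastnameov"
git config user.email f.lastnameov@analysiscenter.ru

git commit -q --allow-empty -m "Initial commit"

# One working branch per issue, named i<TASK-ID>-<short name>:
git checkout -q -b i15-dataset
git rev-parse --abbrev-ref HEAD
```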


================================================
FILE: LICENSE
================================================
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "{}"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright {yyyy} {name of copyright owner}

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.


================================================
FILE: MANIFEST.in
================================================
include MANIFEST.in
include LICENSE
include README.md
include setup.py

recursive-include cardio *
recursive-include docs *
recursive-include tutorials *
recursive-exclude docs/_build *

global-exclude *.pyc *.pyo *.pyd
global-exclude *.git
global-exclude *.so
global-exclude *~
global-exclude \#*
global-exclude .DS_Store


================================================
FILE: README.md
================================================
# CardIO

`CardIO` is designed to build end-to-end machine learning models for deep research of electrocardiograms.

Main features:

* load and save signals in various formats: WFDB, DICOM, EDF, XML (Schiller), etc.
* resample, crop, flip and filter signals
* detect PQ, QT, QRS segments
* calculate heart rate and other ECG characteristics
* perform complex processing such as Fourier and wavelet transforms
* apply custom functions to the data
* recognize heart diseases (e.g. atrial fibrillation)
* efficiently work with large datasets that do not even fit into memory
* perform end-to-end ECG processing
* build, train and test neural networks and other machine learning models

For more details see [the documentation and tutorials](https://analysiscenter.github.io/cardio/).


## About CardIO

> CardIO is based on [BatchFlow](https://github.com/analysiscenter/batchflow). You might benefit from reading [its documentation](https://analysiscenter.github.io/batchflow).
> However, this is not required, especially at the beginning.


CardIO has three modules: [``core``](https://analysiscenter.github.io/cardio/modules/core.html),
[``models``](https://analysiscenter.github.io/cardio/modules/models.html) and
[``pipelines``](https://analysiscenter.github.io/cardio/modules/pipelines.html).


The ``core`` module contains the ``EcgBatch`` and ``EcgDataset`` classes.
``EcgBatch`` defines how ECGs are stored and includes actions for ECG processing. These actions can be used to build multi-stage workflows that may also involve machine learning models. ``EcgDataset`` stores indices of ECGs and generates batches of type ``EcgBatch``.
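As a rough sketch of this division of labor (toy stand-in classes, not cardio's actual API), a dataset object stores the index of ECG identifiers and hands out batches of a given batch class:

```python
import random

class ToyBatch:
    """Stand-in for EcgBatch: holds the identifiers of the ECGs it contains."""
    def __init__(self, index):
        self.index = list(index)

class ToyDataset:
    """Stand-in for EcgDataset: stores ECG identifiers, generates ToyBatch objects."""
    def __init__(self, index, batch_class=ToyBatch):
        self.index = list(index)
        self.batch_class = batch_class

    def gen_batch(self, batch_size, shuffle=False, drop_last=False):
        # Iterate over the index in (optionally shuffled) chunks of batch_size.
        order = list(range(len(self.index)))
        if shuffle:
            random.shuffle(order)
        for start in range(0, len(order), batch_size):
            chunk = order[start:start + batch_size]
            if drop_last and len(chunk) < batch_size:
                return  # discard the incomplete final batch
            yield self.batch_class(self.index[i] for i in chunk)

dataset = ToyDataset(["A00001", "A00002", "A00004", "A00005", "A00008"])
for batch in dataset.gen_batch(batch_size=2, drop_last=True):
    print(batch.index)  # ['A00001', 'A00002'] then ['A00004', 'A00005']
```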

The ``models`` module provides several ready-to-use models for important problems in ECG analysis:

* how to detect specific ECG features, such as R-peaks, P-waves, and T-waves
* how to recognize heart diseases, such as atrial fibrillation, from an ECG

The ``pipelines`` module contains predefined workflows to

* train a model and perform an inference to detect PQ, QT, QRS segments and calculate heart rate
* train a model and perform an inference to find probabilities of heart diseases, in particular, atrial fibrillation


## Basic usage

Here is an example of a pipeline that loads ECG signals, preprocesses them, and trains a model for 50 epochs:
```python
train_pipeline = (
  ds.Pipeline()
    .init_model("dynamic", DirichletModel, name="dirichlet", config=model_config)
    .init_variable("loss_history", init_on_each_run=list)
    .load(components=["signal", "meta"], fmt="wfdb")
    .load(components="target", fmt="csv", src=LABELS_PATH)
    .drop_labels(["~"])
    .rename_labels({"N": "NO", "O": "NO"})
    .flip_signals()
    .random_resample_signals("normal", loc=300, scale=10)
    .random_split_signals(2048, {"A": 9, "NO": 3})
    .binarize_labels()
    .train_model("dirichlet", make_data=concatenate_ecg_batch, fetches="loss", save_to=V("loss_history"), mode="a")
    .run(batch_size=100, shuffle=True, drop_last=True, n_epochs=50)
)
```


## Installation

> The `CardIO` module is in the beta stage. Your suggestions and improvements are very welcome.

> `CardIO` supports Python 3.5 or higher.


### Installation as a python package

With [pipenv](https://docs.pipenv.org/):

    pipenv install git+https://github.com/analysiscenter/cardio.git#egg=cardio

With [pip](https://pip.pypa.io/en/stable/):

    pip3 install git+https://github.com/analysiscenter/cardio.git

After that just import `cardio`:
```python
import cardio
```


### Installation as a project repository

When cloning the repo from GitHub, use the ``--recursive`` flag to make sure that the ``batchflow`` submodule is cloned as well.

    git clone --recursive https://github.com/analysiscenter/cardio.git


## Citing CardIO

Please cite CardIO in your publications if it helps your research.

[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.1156085.svg)](https://doi.org/10.5281/zenodo.1156085)

    Khudorozhkov R., Illarionov E., Kuvaev A., Podvyaznikov D. CardIO library for deep research of heart signals. 2017.

```
@misc{cardio_2017_1156085,
  author       = {R. Khudorozhkov and E. Illarionov and A. Kuvaev and D. Podvyaznikov},
  title        = {CardIO library for deep research of heart signals},
  year         = 2017,
  doi          = {10.5281/zenodo.1156085},
  url          = {https://doi.org/10.5281/zenodo.1156085}
}
```


================================================
FILE: RELEASE.md
================================================
# Release 0.3.0

## Major Features and Improvements
* `load` method now supports Schiller XML format.
* Added channels processing methods:
	* `EcgBatch.reorder_channels`
	* `EcgBatch.convert_units`

## Breaking Changes to the API
* `Dataset` submodule was updated and renamed to `BatchFlow`.


# Release 0.2.0

## Major Features and Improvements
* `load` method now supports new signal formats:
	* DICOM
	* EDF
	* WAV
* `meta` component structure has changed - now it always contains a number of predefined keys.
* Added channels processing methods:
	* `EcgBatch.keep_channels`
	* `EcgBatch.drop_channels`
	* `EcgBatch.rename_channels`
* Added `apply_to_each_channel` method.
* Added `standardize` method.
* Added complex ECG transformations:
	* Fourier-based transformations:
		* `EcgBatch.fft`
		* `EcgBatch.ifft`
		* `EcgBatch.rfft`
		* `EcgBatch.irfft`
		* `EcgBatch.spectrogram`
	* Wavelet-based transformations:
		* `EcgBatch.dwt`
		* `EcgBatch.idwt`
		* `EcgBatch.wavedec`
		* `EcgBatch.waverec`
		* `EcgBatch.pdwt`
		* `EcgBatch.cwt`

## Breaking Changes to the API
* Changed signature of the following methods:
	* `EcgBatch.apply_transform`
	* `EcgBatch.show_ecg`
	* `EcgBatch.calc_ecg_parameters`
* Changed signature of the following pipelines:
	* `dirichlet_train_pipeline`
	* `dirichlet_predict_pipeline`
	* `hmm_preprocessing_pipeline`
	* `hmm_train_pipeline`
	* `hmm_predict_pipeline`
* `wavelet_transform` method has been deleted.
* `update` method has been deleted.
* `replace_labels` method has been renamed to `rename_labels`.
* `slice_signal` method has been renamed to `slice_signals`.
* `ravel` method has been renamed to `unstack_signals`.


# Release 0.1.0

Initial release of CardIO.


================================================
FILE: cardio/__init__.py
================================================
""" CardIO package """

from . import batchflow  # pylint: disable=wildcard-import
from .core import *  # pylint: disable=wildcard-import


__version__ = '0.3.0'


================================================
FILE: cardio/core/__init__.py
================================================
""" Core CardIO objects """

from .ecg_batch import EcgBatch, add_actions
from .ecg_dataset import EcgDataset
from . import kernels


================================================
FILE: cardio/core/ecg_batch.py
================================================
"""Contains ECG Batch class."""
# pylint: disable=too-many-lines

import copy
from textwrap import dedent

import numpy as np
import pandas as pd
import scipy
import scipy.signal
import matplotlib.pyplot as plt
import pywt

from .. import batchflow as bf
from . import kernels
from . import ecg_batch_tools as bt
from .utils import get_units_conversion_factor, partialmethod, LabelBinarizer


ACTIONS_DICT = {
    "fft": (np.fft.fft, "numpy.fft.fft", "a Discrete Fourier Transform"),
    "ifft": (np.fft.ifft, "numpy.fft.ifft", "an inverse Discrete Fourier Transform"),
    "rfft": (np.fft.rfft, "numpy.fft.rfft", "a real-input Discrete Fourier Transform"),
    "irfft": (np.fft.irfft, "numpy.fft.irfft", "a real-input inverse Discrete Fourier Transform"),
    "dwt": (pywt.dwt, "pywt.dwt", "a single level Discrete Wavelet Transform"),
    "idwt": (lambda x, *args, **kwargs: pywt.idwt(*x, *args, **kwargs), "pywt.idwt",
             "a single level inverse Discrete Wavelet Transform"),
    "wavedec": (pywt.wavedec, "pywt.wavedec", "a multilevel 1D Discrete Wavelet Transform"),
    "waverec": (lambda x, *args, **kwargs: pywt.waverec(list(x), *args, **kwargs), "pywt.waverec",
                "a multilevel 1D Inverse Discrete Wavelet Transform"),
    "pdwt": (lambda x, part, *args, **kwargs: pywt.downcoef(part, x, *args, **kwargs), "pywt.downcoef",
             "a partial Discrete Wavelet Transform data decomposition"),
    "cwt": (lambda x, *args, **kwargs: pywt.cwt(x, *args, **kwargs)[0], "pywt.cwt", "a Continuous Wavelet Transform"),
}


TEMPLATE_DOCSTRING = """
    Compute {description} for each slice of a signal over the axis 0
    (typically the channel axis).

    This method simply wraps ``apply_to_each_channel`` method by setting the
    ``func`` argument to ``{full_name}``.

    Parameters
    ----------
    src : str, optional
        Batch attribute or component name to get the data from.
    dst : str, optional
        Batch attribute or component name to put the result in.
    args : misc
        Any additional positional arguments to ``{full_name}``.
    kwargs : misc
        Any additional named arguments to ``{full_name}``.

    Returns
    -------
    batch : EcgBatch
        Transformed batch. Changes ``dst`` attribute or component.
"""
TEMPLATE_DOCSTRING = dedent(TEMPLATE_DOCSTRING).strip()


def add_actions(actions_dict, template_docstring):
    """Add new actions in ``EcgBatch`` by setting ``func`` argument in
    ``EcgBatch.apply_to_each_channel`` method to given callables.

    Parameters
    ----------
    actions_dict : dict
        A dictionary, containing new methods' names as keys and a callable,
        its full name and description for each method as values.
    template_docstring : str
        A string, that will be formatted for each new method from
        ``actions_dict`` using ``full_name`` and ``description`` parameters
        and assigned to its ``__doc__`` attribute.

    Returns
    -------
    decorator : callable
        Class decorator.
    """
    def decorator(cls):
        """Returned decorator."""
        for method_name, (func, full_name, description) in actions_dict.items():
            docstring = template_docstring.format(full_name=full_name, description=description)
            method = partialmethod(cls.apply_to_each_channel, func)
            method.__doc__ = docstring
            setattr(cls, method_name, method)
        return cls
    return decorator


@add_actions(ACTIONS_DICT, TEMPLATE_DOCSTRING)  # pylint: disable=too-many-public-methods,too-many-instance-attributes
class EcgBatch(bf.Batch):
    """Batch class for ECG signals storing.

    Contains ECG signals and additional metadata along with various processing
    methods.

    Parameters
    ----------
    index : DatasetIndex
        Unique identifiers of ECGs in the batch.
    preloaded : tuple, optional
        Data to put in the batch if given. Defaults to ``None``.
    unique_labels : 1-D ndarray, optional
        Array with unique labels in a dataset.

    Attributes
    ----------
    index : DatasetIndex
        Unique identifiers of ECGs in the batch.
    signal : 1-D ndarray
        Array of 2-D ndarrays with ECG signals in channels first format.
    annotation : 1-D ndarray
        Array of dicts with different types of annotations.
    meta : 1-D ndarray
        Array of dicts with metadata about signals.
    target : 1-D ndarray
        Array with signals' labels.
    unique_labels : 1-D ndarray
        Array with unique labels in a dataset.
    label_binarizer : LabelBinarizer
        Object for label one-hot encoding.

    Note
    ----
    Some batch methods take ``index`` as their first argument after ``self``.
    You should not specify it in your code, it will be passed automatically by
    ``inbatch_parallel`` decorator. For example, ``resample_signals`` method
    with ``index`` and ``fs`` arguments should be called as
    ``batch.resample_signals(fs)``.
    """

    components = "signal", "annotation", "meta", "target"

    def __init__(self, index, preloaded=None, unique_labels=None):
        super().__init__(index, preloaded)
        self.signal = self.array_of_nones
        self.annotation = self.array_of_dicts
        self.meta = self.array_of_dicts
        self.target = self.array_of_nones
        self._unique_labels = None
        self._label_binarizer = None
        self.unique_labels = unique_labels

    @property
    def array_of_nones(self):
        """1-D ndarray: ``NumPy`` array with ``None`` values."""
        return np.array([None] * len(self.index))

    @property
    def array_of_dicts(self):
        """1-D ndarray: ``NumPy`` array with empty ``dict`` values."""
        return np.array([{} for _ in range(len(self.index))])

    @property
    def unique_labels(self):
        """1-D ndarray: Unique labels in a dataset."""
        return self._unique_labels

    @unique_labels.setter
    def unique_labels(self, val):
        """Set unique labels value to ``val``. Updates
        ``self.label_binarizer`` instance.

        Parameters
        ----------
        val : 1-D ndarray
            New unique labels.
        """
        self._unique_labels = val
        if self.unique_labels is None or len(self.unique_labels) == 0:
            self._label_binarizer = None
        else:
            self._label_binarizer = LabelBinarizer().fit(self.unique_labels)

    @property
    def label_binarizer(self):
        """LabelBinarizer: Label binarizer object for unique labels in a
        dataset."""
        return self._label_binarizer

    def _reraise_exceptions(self, results):
        """Reraise all exceptions in the ``results`` list.

        Parameters
        ----------
        results : list
            Post function computation results.

        Raises
        ------
        RuntimeError
            If any paralleled action raised an ``Exception``.
        """
        if bf.any_action_failed(results):
            all_errors = self.get_errors(results)
            raise RuntimeError("Cannot assemble the batch", all_errors)

    @staticmethod
    def _check_2d(signal):
        """Check if given signal is 2-D.

        Parameters
        ----------
        signal : ndarray
            Signal to check.

        Raises
        ------
        ValueError
            If given signal is not two-dimensional.
        """
        if signal.ndim != 2:
            raise ValueError("Each signal in batch must be 2-D ndarray")

    # Input/output methods

    @bf.action
    def load(self, src=None, fmt=None, components=None, ann_ext=None, *args, **kwargs):
        """Load given batch components from source.

        Most of the ``EcgBatch`` actions work under the assumption that both
        ``signal`` and ``meta`` components are loaded. In case this assumption
        is not fulfilled, normal operation of the actions is not guaranteed.

        This method supports loading of signals from WFDB, DICOM, EDF, WAV,
        XML and Blosc formats.

        Parameters
        ----------
        src : misc, optional
            Source to load components from.
        fmt : str, optional
            Source format.
        components : str or array-like, optional
            Components to load.
        ann_ext : str, optional
            Extension of the annotation file.

        Returns
        -------
        batch : EcgBatch
            Batch with loaded components. Changes batch data inplace.
        """
        if components is None:
            components = self.components
        components = np.unique(components).ravel().tolist()

        if (fmt == "csv" or fmt is None and isinstance(src, pd.Series)) and components == ['target']:
            return self._load_labels(src)
        if fmt in ["wfdb", "dicom", "edf", "wav", "xml"]:
            unexpected_components = set(components) - set(self.components)
            if unexpected_components:
                raise ValueError("Unexpected components: {}".format(unexpected_components))
            return self._load_data(src=src, fmt=fmt, components=components, ann_ext=ann_ext, *args, **kwargs)
        return super().load(src=src, fmt=fmt, components=components, **kwargs)

    @bf.inbatch_parallel(init="indices", post="_assemble_load", target="threads")
    def _load_data(self, index, src=None, fmt=None, components=None, *args, **kwargs):
        """Load given components from WFDB, DICOM, EDF, WAV or XML files.

        Parameters
        ----------
        src : misc, optional
            Source to load components from. Must be a collection that can be
            indexed by batch indices. If ``None`` and ``index`` has
            ``FilesIndex`` type, the path from ``index`` is used.
        fmt : str, optional
            Source format.
        components : iterable, optional
            Components to load.
        ann_ext : str, optional
            Extension of the annotation file.

        Returns
        -------
        batch : EcgBatch
            Batch with loaded components. Changes batch data inplace.

        Raises
        ------
        ValueError
            If source path is not specified and batch's ``index`` is not a
            ``FilesIndex``.
        """
        loaders = {
            "wfdb": bt.load_wfdb,
            "dicom": bt.load_dicom,
            "edf": bt.load_edf,
            "wav": bt.load_wav,
            "xml": bt.load_xml,
        }

        if src is not None:
            path = src[index]
        elif isinstance(self.index, bf.FilesIndex):
            path = self.index.get_fullpath(index)  # pylint: disable=no-member
        else:
            raise ValueError("Source path is not specified")
        return loaders[fmt](path, components, *args, **kwargs)

    def _assemble_load(self, results, *args, **kwargs):
        """Concatenate results of different workers and update ``self``.

        Parameters
        ----------
        results : list
            Workers' results.

        Returns
        -------
        batch : EcgBatch
            Assembled batch. Changes components inplace.
        """
        _ = args
        self._reraise_exceptions(results)
        components = kwargs.get("components", None)
        if components is None:
            components = self.components
        for comp, data in zip(components, zip(*results)):
            if comp == "signal":
                data = np.array(data + (None,))[:-1]
            else:
                data = np.array(data)
            setattr(self, comp, data)
        return self
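
The ``np.array(data + (None,))[:-1]`` idiom in ``_assemble_load`` deserves a note: appending ``None`` makes NumPy build a 1-D object array of per-item signals even when every signal happens to have the same shape, whereas a plain ``np.array(data)`` would stack them into one multi-dimensional array. A minimal sketch (``dtype=object`` is spelled out here because recent NumPy releases reject the implicit ragged conversion the original relies on):

```python
import numpy as np

signals = [np.zeros((2, 5)), np.zeros((2, 5))]

# Without the trick, same-shaped signals get stacked into a 3-D array.
stacked = np.array(signals)
print(stacked.shape)  # (2, 2, 5)

# With a trailing None the result is a 1-D object array of signals,
# which is then trimmed back to the original length.
ragged = np.array(signals + [None], dtype=object)[:-1]
print(ragged.shape, ragged.dtype)  # (2,) object
```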

    def _load_labels(self, src):
        """Load labels from a csv file or ``pandas.Series``.

        Parameters
        ----------
        src : str or Series
            Path to csv file or ``pandas.Series``. The file should contain two
            columns: ECG index and label. It shouldn't have a header.

        Returns
        -------
        batch : EcgBatch
            Batch with loaded labels. Changes ``self.target`` inplace.

        Raises
        ------
        TypeError
            If ``src`` is not a string or ``pandas.Series``.
        RuntimeError
            If ``unique_labels`` has not been defined and the batch was not
            created by a ``Pipeline``.
        """
        if not isinstance(src, (str, pd.Series)):
            raise TypeError("Unsupported type of source")
        if isinstance(src, str):
            src = pd.read_csv(src, header=None, names=["index", "label"], index_col=0)["label"]
        self.target = src[self.indices].values
        if self.unique_labels is None:
            if self.pipeline is None:
                raise RuntimeError("Batch with undefined unique_labels must be created in a pipeline")
            ds_indices = self.pipeline.dataset.indices
            self.unique_labels = np.sort(src[ds_indices].unique())
        return self
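
A sketch of the expected label-file layout, using an in-memory CSV (the record names below are illustrative): two columns, no header, ECG index first.

```python
import io

import pandas as pd

# Two columns, no header: ECG index and label.
csv = io.StringIO("A00001,N\nA00002,A\nA00004,O\n")
labels = pd.read_csv(csv, header=None, names=["index", "label"], index_col=0)["label"]

# Indexing by batch indices picks the targets for the batch.
print(labels[["A00002", "A00004"]].tolist())  # ['A', 'O']
```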

    def show_ecg(self, index=None, start=0, end=None, annot=None, subplot_size=(10, 4)):  # pylint: disable=too-many-locals, line-too-long
        """Plot an ECG signal.

        Optionally highlight QRS complexes along with P and T waves. Each
        channel is displayed on a separate subplot.

        Parameters
        ----------
        index : element of ``self.indices``, optional
            Index of a signal to plot. If undefined, the first ECG in the
            batch is used.
        start : int, optional
            The start point of the displayed part of the signal (in seconds).
        end : int, optional
            The end point of the displayed part of the signal (in seconds).
        annot : str, optional
            If not ``None``, specifies attribute that stores annotation
            obtained from ``cardio.models.HMModel``.
        subplot_size : tuple
            Width and height of each subplot in inches.

        Raises
        ------
        ValueError
            If the chosen signal is not two-dimensional.
        """
        i = 0 if index is None else self.get_pos(None, "signal", index)
        signal, meta = self.signal[i], self.meta[i]
        self._check_2d(signal)

        fs = meta["fs"]
        num_channels = signal.shape[0]
        start = int(start * fs)
        end = signal.shape[1] if end is None else int(end * fs)

        figsize = (subplot_size[0], subplot_size[1] * num_channels)
        _, axes = plt.subplots(num_channels, 1, squeeze=False, figsize=figsize)
        for channel, (ax,) in enumerate(axes):
            lead_name = "undefined" if meta["signame"][channel] == "None" else meta["signame"][channel]
            units = "undefined" if meta["units"][channel] is None else meta["units"][channel]
            ax.plot((np.arange(start, end) / fs), signal[channel, start:end])
            ax.set_title("Lead name: {}".format(lead_name))
            ax.set_xlabel("Time (sec)")
            ax.set_ylabel("Amplitude ({})".format(units))
            ax.grid(True, which="major")

        if annot and hasattr(self, annot):
            def fill_segments(segment_states, color):
                """Fill ECG segments with a given color."""
                starts, ends = bt.find_intervals_borders(signal_states, segment_states)
                for start_t, end_t in zip((starts + start) / fs, (ends + start) / fs):
                    for (ax,) in axes:
                        ax.axvspan(start_t, end_t, color=color, alpha=0.3)

            signal_states = getattr(self, annot)[i][start:end]
            fill_segments(bt.QRS_STATES, "red")
            fill_segments(bt.P_STATES, "green")
            fill_segments(bt.T_STATES, "blue")
        plt.tight_layout()
        plt.show()

    # Batch processing

    @classmethod
    def merge(cls, batches, batch_size=None):
        """Concatenate a list of ``EcgBatch`` instances and split the result
        into two batches of sizes ``batch_size`` and ``sum(lens of batches) -
        batch_size`` respectively.

        Parameters
        ----------
        batches : list
            List of ``EcgBatch`` instances.
        batch_size : positive int, optional
            Length of the first resulting batch. If ``None``, equals the
            length of the concatenated batch.

        Returns
        -------
        new_batch : EcgBatch
            Batch of no more than ``batch_size`` first items from the
            concatenation of input batches. Contains a deep copy of input
            batches' data.
        rest_batch : EcgBatch
            Batch of the remaining items. Contains a deep copy of input
            batches' data.

        Raises
        ------
        ValueError
            If ``batch_size`` is non-positive or non-integer.
        """
        batches = [batch for batch in batches if batch is not None]
        if len(batches) == 0:
            return None, None
        total_len = np.sum([len(batch) for batch in batches])
        if batch_size is None:
            batch_size = total_len
        elif not isinstance(batch_size, int) or batch_size < 1:
            raise ValueError("Batch size must be positive int")
        indices = np.arange(total_len)

        data = []
        for comp in batches[0].components:
            data.append(np.concatenate([batch.get(component=comp) for batch in batches]))
        data = copy.deepcopy(data)

        new_indices = indices[:batch_size]
        new_batch = cls(bf.DatasetIndex(new_indices), unique_labels=batches[0].unique_labels)
        new_batch._data = tuple(comp[:batch_size] for comp in data)  # pylint: disable=protected-access, attribute-defined-outside-init, line-too-long
        if total_len <= batch_size:
            rest_batch = None
        else:
            rest_indices = indices[batch_size:]
            rest_batch = cls(bf.DatasetIndex(rest_indices), unique_labels=batches[0].unique_labels)
            rest_batch._data = tuple(comp[batch_size:] for comp in data)  # pylint: disable=protected-access, attribute-defined-outside-init, line-too-long
        return new_batch, rest_batch
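
The split logic boils down to concatenating each component and slicing at ``batch_size``; a sketch with plain arrays standing in for batch components:

```python
import numpy as np

# Two "batches" of lengths 3 and 4, concatenated into one component array.
data = np.concatenate([np.arange(3), np.arange(3, 7)])
batch_size = 5

# First batch gets the first batch_size items, the rest batch gets the tail.
first, rest = data[:batch_size], data[batch_size:]
print(first.tolist(), rest.tolist())  # [0, 1, 2, 3, 4] [5, 6]
```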

    # Versatile components processing

    @bf.action
    def apply_transform(self, func, *args, src="signal", dst="signal", **kwargs):
        """Apply a function to each item in the batch.

        Parameters
        ----------
        func : callable
            A function to apply. Must accept an item of ``src`` as its first
            argument if ``src`` is not ``None``.
        src : str, array-like or ``None``, optional
            The source to get the data from. If ``src`` is ``str``, it is
            treated as the batch attribute or component name. Defaults to
            ``signal`` component.
        dst : str, writeable array-like or ``None``, optional
            The source to put the result in. If ``dst`` is ``str``, it is
            treated as the batch attribute or component name. Defaults to
            ``signal`` component.
        args : misc
            Any additional positional arguments to ``func``.
        kwargs : misc
            Any additional named arguments to ``func``.

        Returns
        -------
        batch : EcgBatch
            Transformed batch. If ``dst`` is ``str``, the corresponding
            attribute or component is changed inplace.
        """
        if isinstance(dst, str) and not hasattr(self, dst):
            setattr(self, dst, np.array([None] * len(self.index)))
        return super().apply_transform(func, *args, src=src, dst=dst, **kwargs)

    def _init_component(self, *args, **kwargs):
        """Create and preallocate a new attribute with the name ``dst`` if it
        does not exist and return batch indices."""
        _ = args
        dst = kwargs.get("dst")
        if dst is None:
            raise KeyError("dst argument must be specified")
        if not hasattr(self, dst):
            setattr(self, dst, np.array([None] * len(self.index)))
        return self.indices

    @bf.action
    @bf.inbatch_parallel(init="_init_component", src="signal", dst="signal", target="threads")
    def apply_to_each_channel(self, index, func, *args, src="signal", dst="signal", **kwargs):
        """Apply a function to each slice of a signal over the axis 0
        (typically the channel axis).

        Parameters
        ----------
        func : callable
            A function to apply. Must accept a signal as its first argument.
        src : str, optional
            Batch attribute or component name to get the data from. Defaults
            to ``signal`` component.
        dst : str, optional
            Batch attribute or component name to put the result in. Defaults
            to ``signal`` component.
        args : misc
            Any additional positional arguments to ``func``.
        kwargs : misc
            Any additional named arguments to ``func``.

        Returns
        -------
        batch : EcgBatch
            Transformed batch. Changes ``dst`` attribute or component.
        """
        i = self.get_pos(None, src, index)
        src_data = getattr(self, src)[i]
        dst_data = np.array([func(slc, *args, **kwargs) for slc in src_data])
        getattr(self, dst)[i] = dst_data
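
The per-item transform is a plain Python loop over axis 0; for a 2-channel signal and a hypothetical ``func`` such as ``np.cumsum`` this amounts to:

```python
import numpy as np

signal = np.arange(6).reshape(2, 3)  # 2 channels, 3 samples

# Apply the function to each channel slice and re-stack the results.
out = np.array([np.cumsum(channel) for channel in signal])
print(out.tolist())  # [[0, 1, 3], [3, 7, 12]]
```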

    # Labels processing

    def _filter_batch(self, keep_mask):
        """Drop elements from a batch with corresponding ``False`` values in
        ``keep_mask``.

        This method creates a new batch and updates only components and
        ``unique_labels`` attribute. The information stored in other
        attributes will be lost.

        Parameters
        ----------
        keep_mask : bool 1-D ndarray
            Filtering mask.

        Returns
        -------
        batch : same class as self
            Filtered batch.

        Raises
        ------
        SkipBatchException
            If all batch data was dropped. If the batch is created by a
            ``pipeline``, its processing will be stopped and the ``pipeline``
            will create the next batch.
        """
        indices = self.indices[keep_mask]
        if len(indices) == 0:
            raise bf.SkipBatchException("All batch data was dropped")
        batch = self.__class__(bf.DatasetIndex(indices), unique_labels=self.unique_labels)
        for component in self.components:
            setattr(batch, component, getattr(self, component)[keep_mask])
        return batch

    @bf.action
    def drop_labels(self, drop_list):
        """Drop elements whose labels are in ``drop_list``.

        This method creates a new batch and updates only components and
        ``unique_labels`` attribute. The information stored in other
        attributes will be lost.

        Parameters
        ----------
        drop_list : list
            Labels to be dropped from a batch.

        Returns
        -------
        batch : EcgBatch
            Filtered batch. Creates a new ``EcgBatch`` instance.

        Raises
        ------
        SkipBatchException
            If all batch data was dropped. If the batch is created by a
            ``pipeline``, its processing will be stopped and the ``pipeline``
            will create the next batch.
        """
        drop_arr = np.asarray(drop_list)
        self.unique_labels = np.setdiff1d(self.unique_labels, drop_arr)
        keep_mask = ~np.in1d(self.target, drop_arr)
        return self._filter_batch(keep_mask)
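
The filtering itself is a set difference on ``unique_labels`` plus an inverted membership mask on ``target``; sketched with plain arrays:

```python
import numpy as np

unique_labels = np.array(["A", "N", "O"])
target = np.array(["A", "N", "A", "O"])
drop_arr = np.asarray(["A"])

# Remove dropped labels from the set of unique labels.
new_unique = np.setdiff1d(unique_labels, drop_arr)
# Keep only the items whose target is not in the drop list.
keep_mask = ~np.in1d(target, drop_arr)
print(new_unique.tolist())  # ['N', 'O']
print(keep_mask.tolist())   # [False, True, False, True]
```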

    @bf.action
    def keep_labels(self, keep_list):
        """Drop elements whose labels are not in ``keep_list``.

        This method creates a new batch and updates only components and
        ``unique_labels`` attribute. The information stored in other
        attributes will be lost.

        Parameters
        ----------
        keep_list : list
            Labels to be kept in a batch.

        Returns
        -------
        batch : EcgBatch
            Filtered batch. Creates a new ``EcgBatch`` instance.

        Raises
        ------
        SkipBatchException
            If all batch data was dropped. If the batch is created by a
            ``pipeline``, its processing will be stopped and the ``pipeline``
            will create the next batch.
        """
        keep_arr = np.asarray(keep_list)
        self.unique_labels = np.intersect1d(self.unique_labels, keep_arr)
        keep_mask = np.in1d(self.target, keep_arr)
        return self._filter_batch(keep_mask)

    @bf.action
    def rename_labels(self, rename_dict):
        """Rename labels with corresponding values from ``rename_dict``.

        Parameters
        ----------
        rename_dict : dict
            Dictionary containing ``(old label : new label)`` pairs.

        Returns
        -------
        batch : EcgBatch
            Batch with renamed labels. Changes ``self.target`` inplace.
        """
        self.unique_labels = np.array(sorted({rename_dict.get(t, t) for t in self.unique_labels}))
        self.target = np.array([rename_dict.get(t, t) for t in self.target])
        return self
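
``dict.get(key, default)`` leaves labels absent from ``rename_dict`` untouched; note that renaming can merge classes, which is why the unique labels are rebuilt from a set:

```python
import numpy as np

rename_dict = {"AF": "A"}  # merge "AF" into the existing "A" class
target = np.array(["AF", "N", "A"])

new_target = np.array([rename_dict.get(t, t) for t in target])
new_unique = np.array(sorted({rename_dict.get(t, t) for t in ["A", "AF", "N"]}))
print(new_target.tolist())  # ['A', 'N', 'A']
print(new_unique.tolist())  # ['A', 'N']
```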

    @bf.action
    def binarize_labels(self):
        """Binarize labels in a batch in a one-vs-all fashion.

        Returns
        -------
        batch : EcgBatch
            Batch with binarized labels. Changes ``self.target`` inplace.
        """
        self.target = self.label_binarizer.transform(self.target)
        return self

    # Channels processing

    @bf.inbatch_parallel(init="indices", target="threads")
    def _filter_channels(self, index, names=None, indices=None, invert_mask=False):
        """Build and apply a boolean mask for each channel of a signal based
        on provided channels ``names`` and ``indices``.

        Mask value for a channel is set to ``True`` if its name or index is
        contained in ``names`` or ``indices`` respectively. The mask can be
        inverted before its application if ``invert_mask`` flag is set to
        ``True``.

        Parameters
        ----------
        names : str or list or tuple, optional
            Channels names used to construct the mask.
        indices : int or list or tuple, optional
            Channels indices used to construct the mask.
        invert_mask : bool, optional
            Specifies whether to invert the mask before its application.

        Returns
        -------
        batch : EcgBatch
            Batch with filtered channels. Changes ``self.signal`` and
            ``self.meta`` inplace.

        Raises
        ------
        ValueError
            If both ``names`` and ``indices`` are empty.
        ValueError
            If all channels should be dropped.
        """
        i = self.get_pos(None, "signal", index)
        channels_names = np.asarray(self.meta[i]["signame"])
        mask = np.zeros_like(channels_names, dtype=bool)
        if names is None and indices is None:
            raise ValueError("Both names and indices cannot be empty")
        if names is not None:
            names = np.asarray(names)
            mask |= np.in1d(channels_names, names)
        if indices is not None:
            indices = np.asarray(indices)
            mask |= np.array([i in indices for i in range(len(channels_names))])
        if invert_mask:
            # known pylint bug: https://github.com/PyCQA/pylint/issues/2436
            mask = ~mask  # pylint: disable=invalid-unary-operand-type
        if np.sum(mask) == 0:
            raise ValueError("All channels cannot be dropped")
        self.signal[i] = self.signal[i][mask]
        self.meta[i]["signame"] = channels_names[mask]
        self.meta[i]["units"] = self.meta[i]["units"][mask]

    @bf.action
    def drop_channels(self, names=None, indices=None):
        """Drop channels whose names are in ``names`` or whose indices are in
        ``indices``.

        Parameters
        ----------
        names : str or list or tuple, optional
            Names of channels to be dropped from a batch.
        indices : int or list or tuple, optional
            Indices of channels to be dropped from a batch.

        Returns
        -------
        batch : EcgBatch
            Batch with dropped channels. Changes ``self.signal`` and
            ``self.meta`` inplace.

        Raises
        ------
        ValueError
            If both ``names`` and ``indices`` are empty.
        ValueError
            If all channels should be dropped.
        """
        return self._filter_channels(names, indices, invert_mask=True)

    @bf.action
    def keep_channels(self, names=None, indices=None):
        """Drop channels whose names are not in ``names`` and whose indices
        are not in ``indices``.

        Parameters
        ----------
        names : str or list or tuple, optional
            Names of channels to be kept in a batch.
        indices : int or list or tuple, optional
            Indices of channels to be kept in a batch.

        Returns
        -------
        batch : EcgBatch
            Batch with kept channels. Changes ``self.signal`` and
            ``self.meta`` inplace.

        Raises
        ------
        ValueError
            If both ``names`` and ``indices`` are empty.
        ValueError
            If all channels should be dropped.
        """
        return self._filter_channels(names, indices, invert_mask=False)

    @bf.action
    @bf.inbatch_parallel(init="indices", target="threads")
    def rename_channels(self, index, rename_dict):
        """Rename channels with corresponding values from ``rename_dict``.

        Parameters
        ----------
        rename_dict : dict
            Dictionary containing ``(old channel name : new channel name)``
            pairs.

        Returns
        -------
        batch : EcgBatch
            Batch with renamed channels. Changes ``self.meta`` inplace.
        """
        i = self.get_pos(None, "signal", index)
        old_names = self.meta[i]["signame"]
        new_names = np.array([rename_dict.get(name, name) for name in old_names], dtype=object)
        self.meta[i]["signame"] = new_names

    @bf.action
    @bf.inbatch_parallel(init="indices", target="threads")
    def reorder_channels(self, index, new_order):
        """Change the order of channels in the batch according to the
        ``new_order``.

        Parameters
        ----------
        new_order : array_like
            A list of channel names specifying the order of channels in the
            transformed batch.

        Returns
        -------
        batch : EcgBatch
            Batch with reordered channels. Changes ``self.signal`` and
            ``self.meta`` inplace.

        Raises
        ------
        ValueError
            If unknown lead names are specified.
        ValueError
            If all channels should be dropped.
        """
        i = self.get_pos(None, "signal", index)
        old_order = self.meta[i]["signame"]
        diff = np.setdiff1d(new_order, old_order)
        if diff.size > 0:
            raise ValueError("Unknown lead names: {}".format(", ".join(diff)))
        if len(new_order) == 0:
            raise ValueError("All channels cannot be dropped")
        transform_dict = {name: pos for pos, name in enumerate(old_order)}
        indices = [transform_dict[name] for name in new_order]
        self.signal[i] = self.signal[i][indices]
        self.meta[i]["signame"] = self.meta[i]["signame"][indices]
        self.meta[i]["units"] = self.meta[i]["units"][indices]

    @bf.action
    @bf.inbatch_parallel(init="indices", target="threads")
    def convert_units(self, index, new_units):
        """Convert units of signal's channels to ``new_units``.

        Parameters
        ----------
        new_units : str, dict or array_like
            New units of signal's channels. Must be specified in SI format and
            can be of one of the following types:
                * ``str`` - defines the same new units for each channel.
                * ``dict`` - defines the mapping from channel names to new
                  units. Channels, whose names are not in the dictionary,
                  remain unchanged.
                * ``array_like`` - defines new units for corresponding
                  channels. The length of the sequence in this case must match
                  the number of channels.

        Returns
        -------
        batch : EcgBatch
            Batch with converted units. Changes ``self.signal`` and
            ``self.meta`` inplace.

        Raises
        ------
        ValueError
            If ``new_units`` is ``array_like`` and its length doesn't match
            the number of channels.
        ValueError
            If unknown units are used.
        ValueError
            If conversion between incompatible units is performed.
        """
        i = self.get_pos(None, "signal", index)
        old_units = self.meta[i]["units"]
        channels_names = self.meta[i]["signame"]
        if isinstance(new_units, str):
            new_units = [new_units] * len(old_units)
        elif isinstance(new_units, dict):
            new_units = [new_units.get(name, unit) for name, unit in zip(channels_names, old_units)]
        elif len(new_units) != len(old_units):
            raise ValueError("The length of the new and old units lists must be the same")
        factors = [get_units_conversion_factor(old, new) for old, new in zip(old_units, new_units)]
        factors = np.array(factors).reshape(*([-1] + [1] * (self.signal[i].ndim - 1)))
        self.signal[i] *= factors
        self.meta[i]["units"] = np.asarray(new_units)

    # Signal processing

    @bf.action
    def convolve_signals(self, kernel, padding_mode="edge", axis=-1, **kwargs):
        """Convolve signals with given ``kernel``.

        Parameters
        ----------
        kernel : 1-D array_like
            Convolution kernel.
        padding_mode : str or function, optional
            ``np.pad`` padding mode.
        axis : int, optional
            Axis along which signals are sliced. Default value is -1.
        kwargs : misc
            Any additional named arguments to ``np.pad``.

        Returns
        -------
        batch : EcgBatch
            Convolved batch. Changes ``self.signal`` inplace.

        Raises
        ------
        ValueError
            If ``kernel`` is not one-dimensional or has non-numeric ``dtype``.
        """
        for i in range(len(self.signal)):
            self.signal[i] = bt.convolve_signals(self.signal[i], kernel, padding_mode, axis, **kwargs)
        return self

    @bf.action
    @bf.inbatch_parallel(init="indices", target="threads")
    def band_pass_signals(self, index, low=None, high=None, axis=-1):
        """Reject frequencies outside a given range.

        Parameters
        ----------
        low : positive float, optional
            High-pass filter cutoff frequency (in Hz).
        high : positive float, optional
            Low-pass filter cutoff frequency (in Hz).
        axis : int, optional
            Axis along which signals are sliced. Default value is -1.

        Returns
        -------
        batch : EcgBatch
            Filtered batch. Changes ``self.signal`` inplace.
        """
        i = self.get_pos(None, "signal", index)
        self.signal[i] = bt.band_pass_signals(self.signal[i], self.meta[i]["fs"], low, high, axis)

    @bf.action
    def drop_short_signals(self, min_length, axis=-1):
        """Drop short signals from a batch.

        Parameters
        ----------
        min_length : positive int
            Minimal signal length.
        axis : int, optional
            Axis along which length is calculated. Default value is -1.

        Returns
        -------
        batch : EcgBatch
            Filtered batch. Creates a new ``EcgBatch`` instance.
        """
        keep_mask = np.array([sig.shape[axis] >= min_length for sig in self.signal])
        return self._filter_batch(keep_mask)

    @bf.action
    @bf.inbatch_parallel(init="indices", target="threads")
    def flip_signals(self, index, window_size=None, threshold=0):
        """Flip 2-D signals whose R-peaks are directed downwards.

        Each element of ``self.signal`` must be a 2-D ndarray. Signals are
        flipped along axis 1 (signal axis). For each subarray of
        ``window_size`` length skewness is calculated and compared with
        ``threshold`` to decide whether this subarray should be flipped or
        not. Then the mode of the result is calculated to make the final
        decision.

        Parameters
        ----------
        window_size : int, optional
            Signal is split into K subarrays of ``window_size`` length. If the
            signal length is not a multiple of ``window_size``, data at the
            end of the signal is discarded. If ``window_size`` is not given,
            the whole array is checked without splitting.
        threshold : float, optional
            If skewness of a subarray is less than the ``threshold``, it
            "votes" for flipping the signal. Default value is 0.

        Returns
        -------
        batch : EcgBatch
            Batch with flipped signals. Changes ``self.signal`` inplace.

        Raises
        ------
        ValueError
            If given signal is not two-dimensional.
        """
        i = self.get_pos(None, "signal", index)
        self._check_2d(self.signal[i])
        sig = bt.band_pass_signals(self.signal[i], self.meta[i]["fs"], low=5, high=50)
        sig = bt.convolve_signals(sig, kernels.gaussian(11, 3))

        if window_size is None:
            window_size = sig.shape[1]

        number_of_splits = sig.shape[1] // window_size
        sig = sig[:, : window_size * number_of_splits]

        splits = np.split(sig, number_of_splits, axis=-1)
        votes = [np.where(scipy.stats.skew(subseq, axis=-1) < threshold, -1, 1).reshape(-1, 1) for subseq in splits]
        mode_of_votes = scipy.stats.mode(votes)[0].reshape(-1, 1)
        self.signal[i] *= mode_of_votes
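
The voting rule rests on the observation that a signal with upward R-peaks is positively skewed; a sketch of a single vote with the default ``threshold`` of 0 (assuming ``scipy`` is available):

```python
import numpy as np
import scipy.stats

# A mostly-flat signal with a few large upward spikes: positive skew.
sig = np.zeros(100)
sig[::10] = 5.0

vote = -1 if scipy.stats.skew(sig) < 0 else 1
print(vote)  # 1 -> leave the signal as is

# The same signal flipped is negatively skewed and votes for flipping.
vote_flipped = -1 if scipy.stats.skew(-sig) < 0 else 1
print(vote_flipped)  # -1
```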

    @bf.action
    @bf.inbatch_parallel(init="indices", target="threads")
    def slice_signals(self, index, selection_object):
        """Perform indexing or slicing of signals in a batch. Allows basic
        ``NumPy`` indexing and slicing along with advanced indexing.

        Parameters
        ----------
        selection_object : slice or int or a tuple of slices and ints
            An object that is used to slice signals.

        Returns
        -------
        batch : EcgBatch
            Batch with sliced signals. Changes ``self.signal`` inplace.
        """
        i = self.get_pos(None, "signal", index)
        self.signal[i] = self.signal[i][selection_object]

    @staticmethod
    def _pad_signal(signal, length, pad_value):
        """Pad signal with ``pad_value`` to the left along axis 1 (signal
        axis).

        Parameters
        ----------
        signal : 2-D ndarray
            Signal to pad.
        length : positive int
            Length of padded signal along axis 1.
        pad_value : float
            Padding value.

        Returns
        -------
        signal : 2-D ndarray
            Padded signal.
        """
        pad_len = length - signal.shape[1]
        sig = np.pad(signal, ((0, 0), (pad_len, 0)), "constant", constant_values=pad_value)
        return sig
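
Left-padding along the signal axis is a direct ``np.pad`` call with a ``(pad_len, 0)`` spec on axis 1:

```python
import numpy as np

signal = np.ones((2, 3))  # 2 channels, 3 samples
length = 5
pad_len = length - signal.shape[1]

# Pad only on the left of axis 1; axis 0 (channels) is untouched.
padded = np.pad(signal, ((0, 0), (pad_len, 0)), "constant", constant_values=0)
print(padded.shape)        # (2, 5)
print(padded[0].tolist())  # [0.0, 0.0, 1.0, 1.0, 1.0]
```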

    @staticmethod
    def _get_segmentation_arg(arg, arg_name, target):
        """Get segmentation step or number of segments for a given signal.

        Parameters
        ----------
        arg : int or dict
            Segmentation step or number of segments.
        arg_name : str
            Argument name.
        target : hashable
            Signal target.

        Returns
        -------
        arg : positive int
            Segmentation step or number of segments for given signal.

        Raises
        ------
        KeyError
            If ``arg`` dict has no ``target`` key.
        ValueError
            If ``arg`` is not int or dict.
        """
        if isinstance(arg, int):
            return arg
        if isinstance(arg, dict):
            arg = arg.get(target)
            if arg is None:
                raise KeyError("Undefined {} for target {}".format(arg_name, target))
            return arg
        raise ValueError("Unsupported {} type".format(arg_name))

    @staticmethod
    def _check_segmentation_args(signal, target, length, arg, arg_name):
        """Check values of segmentation parameters.

        Parameters
        ----------
        signal : 2-D ndarray
            Signals to segment.
        target : hashable
            Signal target.
        length : positive int
            Length of each segment along axis 1.
        arg : positive int or dict
            Segmentation step or number of segments.
        arg_name : str
            Argument name.

        Returns
        -------
        arg : positive int
            Segmentation step or number of segments for given signal.

        Raises
        ------
        ValueError
            If:
                * given signal is not two-dimensional,
                * ``arg`` is not int or dict,
                * ``length`` or ``arg`` for a given signal is negative or
                  non-integer.
        KeyError
            If ``arg`` dict has no ``target`` key.
        """
        EcgBatch._check_2d(signal)
        if not isinstance(length, int) or length <= 0:
            raise ValueError("Segment length must be positive integer")
        arg = EcgBatch._get_segmentation_arg(arg, arg_name, target)
        if not isinstance(arg, int) or arg <= 0:
            raise ValueError("{} must be positive integer".format(arg_name))
        return arg

    @bf.action
    @bf.inbatch_parallel(init="indices", target="threads")
    def split_signals(self, index, length, step, pad_value=0):
        """Split 2-D signals along axis 1 (signal axis) with given ``length``
        and ``step``.

        If signal length along axis 1 is less than ``length``, it is padded to
        the left with ``pad_value``.

        Note that each resulting signal will be a 3-D ndarray of shape
        ``[n_segments, n_channels, length]``. To get a number of 2-D signals
        of shape ``[n_channels, length]`` instead, apply the
        ``unstack_signals`` method afterwards.

        Parameters
        ----------
        length : positive int
            Length of each segment along axis 1.
        step : positive int or dict
            Segmentation step. If ``step`` is dict, segmentation step is
            fetched by signal's target key.
        pad_value : float, optional
            Padding value. Defaults to 0.

        Returns
        -------
        batch : EcgBatch
            Batch of split signals. Changes ``self.signal`` inplace.

        Raises
        ------
        ValueError
            If:
                * given signal is not two-dimensional,
                * ``step`` is not int or dict,
                * ``length`` or ``step`` for a given signal is negative or
                  non-integer.
        KeyError
            If ``step`` dict has no signal's target key.
        """
        i = self.get_pos(None, "signal", index)
        step = self._check_segmentation_args(self.signal[i], self.target[i], length, step, "step size")
        if self.signal[i].shape[1] < length:
            tmp_sig = self._pad_signal(self.signal[i], length, pad_value)
            self.signal[i] = tmp_sig[np.newaxis, ...]
        else:
            self.signal[i] = bt.split_signals(self.signal[i], length, step)

    @bf.action
    @bf.inbatch_parallel(init="indices", target="threads")
    def random_split_signals(self, index, length, n_segments, pad_value=0):
        """Split 2-D signals along axis 1 (signal axis) ``n_segments`` times
        with random start position and given ``length``.

        If signal length along axis 1 is less than ``length``, it is padded to
        the left with ``pad_value``.

        Note that each resulting signal will be a 3-D ndarray of shape
        ``[n_segments, n_channels, length]``. To get a number of 2-D signals
        of shape ``[n_channels, length]`` instead, apply the
        ``unstack_signals`` method afterwards.

        Parameters
        ----------
        length : positive int
            Length of each segment along axis 1.
        n_segments : positive int or dict
            Number of segments. If ``n_segments`` is dict, number of segments
            is fetched by signal's target key.
        pad_value : float, optional
            Padding value. Defaults to 0.

        Returns
        -------
        batch : EcgBatch
            Batch of split signals. Changes ``self.signal`` inplace.

        Raises
        ------
        ValueError
            If:
                * given signal is not two-dimensional,
                * ``n_segments`` is not int or dict,
                * ``length`` or ``n_segments`` for a given signal is negative
                  or non-integer.
        KeyError
            If ``n_segments`` dict has no signal's target key.
        """
        i = self.get_pos(None, "signal", index)
        n_segments = self._check_segmentation_args(self.signal[i], self.target[i], length,
                                                   n_segments, "number of segments")
        if self.signal[i].shape[1] < length:
            tmp_sig = self._pad_signal(self.signal[i], length, pad_value)
            self.signal[i] = np.tile(tmp_sig, (n_segments, 1, 1))
        else:
            self.signal[i] = bt.random_split_signals(self.signal[i], length, n_segments)

    @bf.action
    def unstack_signals(self):
        """Create a new batch in which each signal's element along axis 0 is
        considered as a separate signal.

        This method creates a new batch and updates only components and
        ``unique_labels`` attribute. Signal's data from non-``signal``
        components is duplicated using a deep copy for each of the resulting
        signals. The information stored in other attributes will be lost.

        Returns
        -------
        batch : same class as self
            Batch with split signals and duplicated other components.

        Examples
        --------
        >>> batch.signal
        array([array([[ 0,  1,  2,  3],
                      [ 4,  5,  6,  7],
                      [ 8,  9, 10, 11]])],
              dtype=object)

        >>> batch = batch.unstack_signals()
        >>> batch.signal
        array([array([0, 1, 2, 3]),
               array([4, 5, 6, 7]),
               array([ 8,  9, 10, 11])],
              dtype=object)
        """
        n_reps = [sig.shape[0] for sig in self.signal]
        signal = np.array([channel for signal in self.signal for channel in signal] + [None])[:-1]
        index = bf.DatasetIndex(np.arange(len(signal)))
        batch = self.__class__(index, unique_labels=self.unique_labels)
        batch.signal = signal
        for component_name in set(self.components) - {"signal"}:
            val = []
            component = getattr(self, component_name)
            is_object_dtype = (component.dtype.kind == "O")
            for elem, n in zip(component, n_reps):
                for _ in range(n):
                    val.append(copy.deepcopy(elem))
            if is_object_dtype:
                val = np.array(val + [None])[:-1]
            else:
                val = np.array(val)
            setattr(batch, component_name, val)
        return batch

    def _safe_fs_resample(self, index, fs):
        """Resample 2-D signal along axis 1 (signal axis) to given sampling
        rate.

        The new sampling rate is assumed to be a positive float; callers are
        expected to validate it.

        Parameters
        ----------
        fs : positive float
            New sampling rate.

        Raises
        ------
        ValueError
            If given signal is not two-dimensional.
        """
        i = self.get_pos(None, "signal", index)
        self._check_2d(self.signal[i])
        new_len = max(1, int(fs * self.signal[i].shape[1] / self.meta[i]["fs"]))
        self.meta[i]["fs"] = fs
        self.signal[i] = bt.resample_signals(self.signal[i], new_len)

    @bf.action
    @bf.inbatch_parallel(init="indices", target="threads")
    def resample_signals(self, index, fs):
        """Resample 2-D signals along axis 1 (signal axis) to given sampling
        rate.

        Parameters
        ----------
        fs : positive float
            New sampling rate.

        Returns
        -------
        batch : EcgBatch
            Resampled batch. Changes ``self.signal`` and ``self.meta``
            inplace.

        Raises
        ------
        ValueError
            If given signal is not two-dimensional.
        ValueError
            If ``fs`` is not positive.
        """
        if fs <= 0:
            raise ValueError("Sampling rate must be a positive float")
        self._safe_fs_resample(index, fs)

    @bf.action
    @bf.inbatch_parallel(init="indices", target="threads")
    def random_resample_signals(self, index, distr, **kwargs):
        """Resample 2-D signals along axis 1 (signal axis) to a new sampling
        rate, sampled from a given distribution.

        If new sampling rate is negative, the signal is left unchanged.

        Parameters
        ----------
        distr : str or callable
            ``NumPy`` distribution name or a callable to sample from.
        kwargs : misc
            Distribution parameters.

        Returns
        -------
        batch : EcgBatch
            Resampled batch. Changes ``self.signal`` and ``self.meta``
            inplace.

        Raises
        ------
        ValueError
            If given signal is not two-dimensional.
        ValueError
            If ``distr`` is not a string or a callable.
        """
        if isinstance(distr, str) and hasattr(np.random, distr):
            distr_fn = getattr(np.random, distr)
        elif callable(distr):
            distr_fn = distr
        else:
            raise ValueError("Unknown type of distribution parameter")
        fs = distr_fn(**kwargs)
        if fs <= 0:
            fs = self[index].meta["fs"]
        self._safe_fs_resample(index, fs)

    # Complex ECG processing

    @bf.action
    @bf.inbatch_parallel(init="_init_component", src="signal", dst="signal", target="threads")
    def spectrogram(self, index, *args, src="signal", dst="signal", **kwargs):
        """Compute a spectrogram for each slice of a signal over the axis 0
        (typically the channel axis).

        This method is a wrapper around ``scipy.signal.spectrogram``, that
        accepts the same arguments, except the ``fs`` which is substituted
        automatically from signal's meta. The method returns only the
        spectrogram itself.

        Parameters
        ----------
        src : str, optional
            Batch attribute or component name to get the data from.
        dst : str, optional
            Batch attribute or component name to put the result in.
        args : misc
            Any additional positional arguments to
            ``scipy.signal.spectrogram``.
        kwargs : misc
            Any additional named arguments to ``scipy.signal.spectrogram``.

        Returns
        -------
        batch : EcgBatch
            Transformed batch. Changes ``dst`` attribute or component.
        """
        i = self.get_pos(None, src, index)
        fs = self.meta[i]["fs"]
        src_data = getattr(self, src)[i]
        dst_data = np.array([scipy.signal.spectrogram(slc, fs, *args, **kwargs)[-1] for slc in src_data])
        getattr(self, dst)[i] = dst_data

    @bf.action
    @bf.inbatch_parallel(init="_init_component", src="signal", dst="signal", target="threads")
    def standardize(self, index, axis=None, eps=1e-10, *, src="signal", dst="signal"):
        """Standardize data along specified axes by removing the mean and
        scaling to unit variance.

        Parameters
        ----------
        axis : ``None`` or int or tuple of ints, optional
            Axis or axes along which standardization is performed. The default
            is to compute for the flattened array.
        eps : float, optional
            Small addition to avoid division by zero. Defaults to 1e-10.
        src : str, optional
            Batch attribute or component name to get the data from.
        dst : str, optional
            Batch attribute or component name to put the result in.

        Returns
        -------
        batch : EcgBatch
            Transformed batch. Changes ``dst`` attribute or component.
        """
        i = self.get_pos(None, src, index)
        src_data = getattr(self, src)[i]
        dst_data = ((src_data - np.mean(src_data, axis=axis, keepdims=True)) /
                    (np.std(src_data, axis=axis, keepdims=True) + eps))
        getattr(self, dst)[i] = dst_data
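A standalone sketch of per-channel standardization, with `eps` added to the denominator so that constant signals do not cause a division by zero:

```python
import numpy as np

def standardize(x, axis=None, eps=1e-10):
    # Zero-mean, unit-variance scaling; eps in the denominator guards
    # against division by zero for constant signals.
    mean = np.mean(x, axis=axis, keepdims=True)
    std = np.std(x, axis=axis, keepdims=True)
    return (x - mean) / (std + eps)

x = np.array([[1.0, 2.0, 3.0], [10.0, 20.0, 30.0]])
z = standardize(x, axis=1)  # standardize each channel separately
print(np.allclose(z.mean(axis=1), 0))  # True
```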

    @bf.action
    @bf.inbatch_parallel(init="indices", target="threads")
    def calc_ecg_parameters(self, index, src=None):
        """Calculate ECG report parameters and write them to the ``meta``
        component.

        Calculates PQ, QT, QRS intervals along with their borders and the
        heart rate value based on the annotation and writes them to the
        ``meta`` component.

        Parameters
        ----------
        src : str
            Batch attribute or component name to get the annotation from.

        Returns
        -------
        batch : EcgBatch
            Batch with report parameters stored in the ``meta`` component.

        Raises
        ------
        ValueError
            If ``src`` is ``None`` or is not an attribute of a batch.
        """
        if not (src and hasattr(self, src)):
            raise ValueError("Batch does not have an attribute or component {}!".format(src))

        i = self.get_pos(None, "signal", index)
        src_data = getattr(self, src)[i]

        self.meta[i]["hr"] = bt.calc_hr(self.signal[i],
                                        src_data,
                                        np.float64(self.meta[i]["fs"]),
                                        bt.R_STATE)

        self.meta[i]["pq"] = bt.calc_pq(src_data,
                                        np.float64(self.meta[i]["fs"]),
                                        bt.P_STATES,
                                        bt.Q_STATE,
                                        bt.R_STATE)

        self.meta[i]["qt"] = bt.calc_qt(src_data,
                                        np.float64(self.meta[i]["fs"]),
                                        bt.T_STATES,
                                        bt.Q_STATE,
                                        bt.R_STATE)

        self.meta[i]["qrs"] = bt.calc_qrs(src_data,
                                          np.float64(self.meta[i]["fs"]),
                                          bt.S_STATE,
                                          bt.Q_STATE,
                                          bt.R_STATE)

        self.meta[i]["qrs_segments"] = np.vstack(bt.find_intervals_borders(src_data,
                                                                           bt.QRS_STATES))

        self.meta[i]["p_segments"] = np.vstack(bt.find_intervals_borders(src_data,
                                                                         bt.P_STATES))

        self.meta[i]["t_segments"] = np.vstack(bt.find_intervals_borders(src_data,
                                                                         bt.T_STATES))


================================================
FILE: cardio/core/ecg_batch_tools.py
================================================
"""Contains ECG processing tools."""

import os
import struct
import datetime
from xml.etree import ElementTree

import numpy as np
from numba import njit
from scipy.io import wavfile
import pyedflib
import wfdb

try:
    import pydicom as dicom
except ImportError:
    import dicom

# Constants

# This is the predefined keys of the meta component.
# Each key is initialized with None.
META_KEYS = [
    "age",
    "sex",
    "timestamp",
    "comments",
    "fs",
    "signame",
    "units",
]

# This is the mapping from inner HMM states to human-understandable
# cardiological terms.
P_STATES = np.array([14, 15, 16], np.int64)
T_STATES = np.array([5, 6, 7, 8, 9, 10], np.int64)
QRS_STATES = np.array([0, 1, 2], np.int64)
Q_STATE = np.array([0], np.int64)
R_STATE = np.array([1], np.int64)
S_STATE = np.array([2], np.int64)


def check_signames(signame, nsig):
    """Check that signame is in proper format.

    Check if signame is a list of values that can be cast to strings;
    otherwise generate a new signame list with numbers 0 to ``nsig - 1``
    as strings.

    Parameters
    ----------
    signame : misc
        Signal names from file.
    nsig : int
        Number of signals / channels.

    Returns
    -------
    signame : ndarray
        Array of string names of signals / channels.
    """
    if isinstance(signame, (tuple, list)) and len(signame) == nsig:
        signame = [str(name) for name in signame]
    else:
        signame = [str(number) for number in range(nsig)]
    return np.array(signame)


def check_units(units, nsig):
    """Check that units are in proper format.

    Check if units is a list of values with length equal to the number
    of channels.

    Parameters
    ----------
    units : misc
        Units from file.
    nsig : int
        Number of signals / channels.

    Returns
    -------
    units : ndarray
        Array of units of signals / channels.
    """
    if not (isinstance(units, (tuple, list)) and len(units) == nsig):
        units = [None for number in range(nsig)]
    return np.array(units)


def unify_sex(sex):
    """Map the sex of a patient to one of the following values: "M", "F" or
    ``None``.

    Parameters
    ----------
    sex : str
        Sex of the patient.

    Returns
    -------
    sex : str
        Transformed sex of the patient.
    """
    transform_dict = {
        "MALE": "M",
        "M": "M",
        "FEMALE": "F",
        "F": "F",
    }
    return transform_dict.get(sex)


def load_wfdb(path, components, *args, **kwargs):
    """Load given components from wfdb file.

    Parameters
    ----------
    path : str
        Path to .hea file.
    components : iterable
        Components to load.
    ann_ext : str, optional
        Extension of the annotation file.

    Returns
    -------
    ecg_data : list
        List of ecg data components.
    """
    _ = args

    ann_ext = kwargs.get("ann_ext")

    path = os.path.splitext(path)[0]
    record = wfdb.rdrecord(path)
    signal = record.__dict__.pop("p_signal").T
    record_meta = record.__dict__
    nsig = record_meta["n_sig"]

    if "annotation" in components and ann_ext is not None:
        annotation = wfdb.rdann(path, ann_ext)
        annot = {"annsamp": annotation.sample,
                 "anntype": annotation.symbol}
    else:
        annot = {}

    # Initialize meta with defined keys, load values from record
    # meta and preprocess to our format.
    meta = dict(zip(META_KEYS, [None] * len(META_KEYS)))
    meta.update(record_meta)

    meta["signame"] = check_signames(meta.pop("sig_name"), nsig)
    meta["units"] = check_units(meta["units"], nsig)

    data = {"signal": signal,
            "annotation": annot,
            "meta": meta}
    return [data[comp] for comp in components]


def load_dicom(path, components, *args, **kwargs):
    """
    Load given components from DICOM file.

    Parameters
    ----------
    path : str
        Path to .hea file.
    components : iterable
        Components to load.

    Returns
    -------
    ecg_data : list
        List of ecg data components.
    """

    def signal_decoder(record, nsig):
        """
        Helper function to decode signal from binaries when reading from dicom.
        """
        definition = record.WaveformSequence[0].ChannelDefinitionSequence
        data = record.WaveformSequence[0].WaveformData

        unpack_fmt = "<{}h".format(int(len(data) / 2))
        factor = np.ones(nsig)
        baseline = np.zeros(nsig)

        for i in range(nsig):

            assert definition[i].WaveformBitsStored == 16

            channel_sens = definition[i].get("ChannelSensitivity")
            channel_sens_cf = definition[i].get("ChannelSensitivityCorrectionFactor")
            if channel_sens is not None and channel_sens_cf is not None:
                factor[i] = float(channel_sens) * float(channel_sens_cf)

            channel_bl = definition[i].get("ChannelBaseline")
            if channel_bl is not None:
                baseline[i] = float(channel_bl)

        unpacked_data = struct.unpack(unpack_fmt, data)

        signals = np.asarray(unpacked_data, dtype=np.float32).reshape(-1, nsig)
        signals = ((signals + baseline) * factor).T

        return signals

    _ = args, kwargs

    record = dicom.read_file(path)

    sequence = record.WaveformSequence[0]

    assert sequence.WaveformSampleInterpretation == 'SS'
    assert sequence.WaveformBitsAllocated == 16

    nsig = sequence.NumberOfWaveformChannels

    annot = {}

    meta = dict(zip(META_KEYS, [None] * len(META_KEYS)))

    if record.PatientAge[-1] == "Y":
        age = int(record.PatientAge[:-1])
    else:
        age = int(record.PatientAge[:-1]) / 12.0

    meta["age"] = age
    meta["sex"] = record.PatientSex
    meta["timestamp"] = record.AcquisitionDateTime
    meta["comments"] = [section.UnformattedTextValue for section in
                        record.WaveformAnnotationSequence if section.AnnotationGroupNumber == 0]
    meta["fs"] = sequence.SamplingFrequency
    meta["signame"] = [section.ChannelSourceSequence[0].CodeMeaning for section in
                       sequence.ChannelDefinitionSequence]
    meta["units"] = [section.ChannelSensitivityUnitsSequence[0].CodeValue for section in
                     sequence.ChannelDefinitionSequence]

    meta["signame"] = check_signames(meta["signame"], nsig)
    meta["units"] = check_units(meta["units"], nsig)

    signal = signal_decoder(record, nsig)

    data = {"signal": signal,
            "annotation": annot,
            "meta": meta}
    return [data[comp] for comp in components]


def load_edf(path, components, *args, **kwargs):
    """
    Load given components from EDF file.

    Parameters
    ----------
    path : str
        Path to .hea file.
    components : iterable
        Components to load.

    Returns
    -------
    ecg_data : list
        List of ecg data components.
    """
    _ = args, kwargs

    record = pyedflib.EdfReader(path)

    annot = {}
    meta = dict(zip(META_KEYS, [None] * len(META_KEYS)))

    meta["sex"] = record.getGender() if record.getGender() != '' else None
    meta["timestamp"] = record.getStartdatetime().strftime("%Y%m%d%H%M%S")
    nsig = record.signals_in_file

    if len(np.unique(record.getNSamples())) != 1:
        raise ValueError("Different signal lengths are not supported!")

    if len(np.unique(record.getSampleFrequencies())) == 1:
        meta["fs"] = record.getSampleFrequencies()[0]
    else:
        raise ValueError("Different sampling rates are not supported!")

    meta["signame"] = record.getSignalLabels()
    meta["units"] = [record.getSignalHeader(sig)["dimension"] for sig in range(nsig)]

    meta.update(record.getHeader())

    meta["signame"] = check_signames(meta["signame"], nsig)
    meta["units"] = check_units(meta["units"], nsig)

    signal = np.array([record.readSignal(i) for i in range(nsig)])

    data = {"signal": signal,
            "annotation": annot,
            "meta": meta}
    return [data[comp] for comp in components]


def load_wav(path, components, *args, **kwargs):
    """
    Load given components from wav file.

    Parameters
    ----------
    path : str
        Path to .hea file.
    components : iterable
        Components to load.

    Returns
    -------
    ecg_data : list
        List of ecg data components.
    """
    _ = args, kwargs

    fs, signal = wavfile.read(path)
    if signal.ndim == 1:
        nsig = 1
        signal = signal.reshape([-1, 1])
    elif signal.ndim == 2:
        nsig = signal.shape[1]
    else:
        raise ValueError("Unexpected number of dimensions in signal array: {}".format(signal.ndim))

    signal = signal.T

    annot = {}
    meta = dict(zip(META_KEYS, [None] * len(META_KEYS)))

    meta["fs"] = fs
    meta["signame"] = check_signames(meta["signame"], nsig)
    meta["units"] = check_units(meta["units"], nsig)

    data = {"signal": signal,
            "annotation": annot,
            "meta": meta}
    return [data[comp] for comp in components]


def load_xml(path, components, xml_type, *args, **kwargs):
    """Load given components from an XML file.

    Parameters
    ----------
    path : str
        A path to an .xml file.
    components : iterable
        Components to load.
    xml_type : str
        Defines the structure of the file. The following values of the
        argument are supported:
            * "schiller" - Schiller XML

    Returns
    -------
    loaded_data : list
        A list of loaded ECG data components.
    """
    loaders = {
        "schiller": load_xml_schiller,
    }
    loader = loaders.get(xml_type)
    if loader is None:
        err_str = "Unsupported XML type {}. Currently supported XML types: {}"
        err_msg = err_str.format(xml_type, ", ".join(sorted(loaders.keys())))
        raise ValueError(err_msg)
    return loader(path, components, *args, **kwargs)


def load_xml_schiller(path, components, *args, **kwargs):  # pylint: disable=too-many-locals
    """Load given components from a Schiller XML file.

    Parameters
    ----------
    path : str
        A path to an .xml file.
    components : iterable
        Components to load.

    Returns
    -------
    loaded_data : list
        A list of loaded ECG data components.
    """
    _ = args, kwargs

    root = ElementTree.parse(path).getroot()

    birthdate = root.find("./patdata/birthdate").text
    if birthdate is None:
        age = None
    else:
        today = datetime.date.today()
        birthdate = datetime.datetime.strptime(birthdate, "%Y%m%d")
        age = today.year - birthdate.year - ((today.month, today.day) < (birthdate.month, birthdate.day))

    sex = unify_sex(root.find("./patdata/gender").text)

    date = root.find("./examdescript/startdatetime/date").text
    time = root.find("./examdescript/startdatetime/time").text
    timestamp = datetime.datetime.strptime(date + time, "%Y%m%d%H%M%S%f")

    ecg_data = root.find("./eventdata/event/wavedata[type='ECG_RHYTHMS']")
    sig_info = []
    for channel in ecg_data.findall("./channel"):
        sig = [float(val) for val in channel.find("data").text.split(",") if val]
        name = channel.find("name").text
        sig_info.append((sig, name))
    signal, signame = zip(*sig_info)
    signal = np.array(signal)
    signame = np.array(signame)

    fs = float(ecg_data.find("./resolution/samplerate/value").text)
    units = ecg_data.find("./resolution/yres/units").text
    if units == "UV":
        units = "uV"
    units = np.array([units] * len(signame))

    meta = {
        "age": age,
        "sex": sex,
        "timestamp": timestamp,
        "comments": None,
        "fs": fs,
        "signame": signame,
        "units": units,
    }

    data = {
        "signal": signal,
        "annotation": {},
        "meta": meta
    }
    return [data[comp] for comp in components]


@njit(nogil=True)
def split_signals(signals, length, step):
    """Split signals along axis 1 with given ``length`` and ``step``.

    Parameters
    ----------
    signals : 2-D ndarray
        Signals to split.
    length : positive int
        Length of each segment along axis 1.
    step : positive int
        Segmentation step.

    Returns
    -------
    signals : 3-D ndarray
        Split signals stacked along new axis with index 0.
    """
    res = np.empty(((signals.shape[1] - length) // step + 1, signals.shape[0], length), dtype=signals.dtype)
    for i in range(res.shape[0]):
        res[i, :, :] = signals[:, i * step : i * step + length]
    return res
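A pure-NumPy equivalent (without the `numba` compilation) that illustrates the windowing arithmetic, including the segment count:

```python
import numpy as np

def split_signals_py(signals, length, step):
    # Pure-NumPy sketch of the numba-compiled split_signals: windows of
    # `length` samples taken every `step` samples along axis 1, stacked
    # along a new leading axis.
    n_segments = (signals.shape[1] - length) // step + 1
    return np.stack([signals[:, i * step : i * step + length]
                     for i in range(n_segments)])

sig = np.arange(10).reshape(1, 10)      # 1 channel, 10 samples
out = split_signals_py(sig, length=4, step=3)
print(out.shape)   # (3, 1, 4)
print(out[1, 0])   # [3 4 5 6]
```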


@njit(nogil=True)
def random_split_signals(signals, length, n_segments):
    """Split signals along axis 1 ``n_segments`` times with random start
    position and given ``length``.

    Parameters
    ----------
    signals : 2-D ndarray
        Signals to split.
    length : positive int
        Length of each segment along axis 1.
    n_segments : positive int
        Number of segments.

    Returns
    -------
    signals : 3-D ndarray
        Split signals stacked along new axis with index 0.
    """
    res = np.empty((n_segments, signals.shape[0], length), dtype=signals.dtype)
    for i in range(res.shape[0]):
        ix = np.random.randint(0, signals.shape[1] - length + 1)
        res[i, :, :] = signals[:, ix : ix + length]
    return res


@njit(nogil=True)
def resample_signals(signals, new_length):
    """Resample signals to new length along axis 1 using linear interpolation.

    Parameters
    ----------
    signals : 2-D ndarray
        Signals to resample.
    new_length : positive int
        New signals shape along axis 1.

    Returns
    -------
    signals : 2-D ndarray
        Resampled signals.
    """
    arg = np.linspace(0, signals.shape[1] - 1, new_length)
    x_left = arg.astype(np.int32)  # pylint: disable=no-member
    x_right = x_left + 1
    x_right[-1] = x_left[-1]
    alpha = arg - x_left
    y_left = signals[:, x_left]
    y_right = signals[:, x_right]
    return y_left + (y_right - y_left) * alpha
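The same linear-interpolation scheme can be expressed with `np.interp` for cross-checking; a small sketch upsampling one channel:

```python
import numpy as np

def resample_linear(signals, new_length):
    # Linear-interpolation resampling along axis 1, one channel at a time,
    # matching the sample grid used by resample_signals.
    old_x = np.arange(signals.shape[1])
    new_x = np.linspace(0, signals.shape[1] - 1, new_length)
    return np.stack([np.interp(new_x, old_x, ch) for ch in signals])

sig = np.array([[0.0, 2.0, 4.0, 6.0]])   # 4 samples at the old rate
up = resample_linear(sig, 7)             # roughly double the sampling rate
print(up[0])  # [0. 1. 2. 3. 4. 5. 6.]
```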


def convolve_signals(signals, kernel, padding_mode="edge", axis=-1, **kwargs):
    """Convolve signals with given ``kernel``.

    Parameters
    ----------
    signals : ndarray
        Signals to convolve.
    kernel : array_like
        Convolution kernel.
    padding_mode : str or function
        ``np.pad`` padding mode.
    axis : int
        Axis along which signals are sliced.
    kwargs : misc
        Any additional named arguments to ``np.pad``.

    Returns
    -------
    signals : ndarray
        Convolved signals.

    Raises
    ------
    ValueError
        If ``kernel`` is not one-dimensional or has non-numeric ``dtype``.
    """
    kernel = np.asarray(kernel)
    if len(kernel.shape) == 0:
        kernel = kernel.ravel()
    if len(kernel.shape) != 1:
        raise ValueError("Kernel must be 1-D array")
    if not np.issubdtype(kernel.dtype, np.number):
        raise ValueError("Kernel must have numeric dtype")
    pad = len(kernel) // 2

    def conv_func(x):
        """Convolve padded signal."""
        x_pad = np.pad(x, pad, padding_mode, **kwargs)
        conv = np.convolve(x_pad, kernel, "same")
        if pad > 0:
            conv = conv[pad:-pad]
        return conv

    signals = np.apply_along_axis(conv_func, arr=signals, axis=axis)
    return signals
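For a uniform (moving-average) kernel, the edge-padded convolution above behaves as in this small self-contained sketch:

```python
import numpy as np

def smooth(signals, kernel_size=3):
    # Moving-average smoothing with edge padding, mirroring what
    # convolve_signals does for a uniform kernel.
    kernel = np.ones(kernel_size) / kernel_size
    pad = kernel_size // 2

    def conv(x):
        x_pad = np.pad(x, pad, "edge")
        return np.convolve(x_pad, kernel, "same")[pad:-pad]

    return np.apply_along_axis(conv, axis=-1, arr=signals)

sig = np.array([[0.0, 0.0, 3.0, 0.0, 0.0]])
print(smooth(sig))  # [[0. 1. 1. 1. 0.]]
```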


def band_pass_signals(signals, freq, low=None, high=None, axis=-1):
    """Reject frequencies outside given range.

    Parameters
    ----------
    signals : ndarray
        Signals to filter.
    freq : positive float
        Sampling rate.
    low : positive float
        High-pass filter cutoff frequency (Hz).
    high : positive float
        Low-pass filter cutoff frequency (Hz).
    axis : int
        Axis along which signals are sliced.

    Returns
    -------
    signals : ndarray
        Filtered signals.

    Raises
    ------
    ValueError
        If ``freq`` is negative or non-numeric.
    """
    if freq <= 0:
        raise ValueError("Sampling rate must be a positive float")
    sig_rfft = np.fft.rfft(signals, axis=axis)
    sig_freq = np.fft.rfftfreq(signals.shape[axis], 1 / freq)
    mask = np.zeros(len(sig_freq), dtype=bool)
    if low is not None:
        mask |= (sig_freq <= low)
    if high is not None:
        mask |= (sig_freq >= high)
    slc = [slice(None)] * signals.ndim
    slc[axis] = mask
    sig_rfft[tuple(slc)] = 0
    return np.fft.irfft(sig_rfft, n=signals.shape[axis], axis=axis)
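
A self-contained sketch of the same rFFT masking idea, applied to a synthetic two-tone signal. With a 1–20 Hz pass band, the 50 Hz component is rejected while the 5 Hz component survives:

```python
import numpy as np

def band_pass(signals, freq, low=None, high=None, axis=-1):
    """Standalone sketch of band_pass_signals: zero rFFT bins outside [low, high]."""
    sig_rfft = np.fft.rfft(signals, axis=axis)
    sig_freq = np.fft.rfftfreq(signals.shape[axis], 1 / freq)
    mask = np.zeros(len(sig_freq), dtype=bool)
    if low is not None:
        mask |= sig_freq <= low
    if high is not None:
        mask |= sig_freq >= high
    slc = [slice(None)] * signals.ndim
    slc[axis] = mask
    sig_rfft[tuple(slc)] = 0
    return np.fft.irfft(sig_rfft, n=signals.shape[axis], axis=axis)

fs = 250.0
t = np.arange(int(fs)) / fs                    # one second of signal
sig = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 50 * t)
filtered = band_pass(sig, fs, low=1, high=20)  # keep only the 5 Hz tone
```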


@njit(nogil=True)
def find_intervals_borders(hmm_annotation, inter_val):
    """Find starts and ends of the intervals.

    This function finds starts and ends of continuous intervals of values
    from inter_val in hmm_annotation.

    Parameters
    ----------
    hmm_annotation : 1-D ndarray
        Annotation of the signal predicted by the HMM model.
    inter_val : array_like
        Values that form interval of interest.

    Returns
    -------
    starts : 1-D ndarray
        Indices of the starts of the intervals.
    ends : 1-D ndarray
        Indices of the ends of the intervals.
    """
    intervals = np.zeros(hmm_annotation.shape, dtype=np.int8)
    for val in inter_val:
        intervals = np.logical_or(intervals, (hmm_annotation == val).astype(np.int8)).astype(np.int8)
    mask = np.diff(intervals)
    starts = np.where(mask == 1)[0] + 1
    ends = np.where(mask == -1)[0] + 1
    if np.any(inter_val == hmm_annotation[:1]):
        ends = ends[1:]
    if np.any(inter_val == hmm_annotation[-1:]):
        starts = starts[:-1]
    return starts, ends
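
The edge-difference trick is easiest to see on a toy annotation. A plain-NumPy sketch (without the numba decoration) of the same logic:

```python
import numpy as np

def interval_borders(annotation, inter_val):
    """Plain-NumPy sketch of find_intervals_borders."""
    intervals = np.isin(annotation, inter_val).astype(np.int8)
    edges = np.diff(intervals)
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if intervals[0]:     # an interval open at the left edge has no start
        ends = ends[1:]
    if intervals[-1]:    # an interval open at the right edge has no end
        starts = starts[:-1]
    return starts, ends

ann = np.array([0, 1, 1, 0, 0, 1, 1, 1, 0])
starts, ends = interval_borders(ann, np.array([1]))
# Runs of the value 1 span half-open ranges [1, 3) and [5, 8)
```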


@njit(nogil=True)
def find_maxes(signal, starts, ends):
    """ Find index of the maximum of the segment.

    Parameters
    ----------
    signal : 2-D ndarray
        ECG signal.
    starts : 1-D ndarray
        Indices of the starts of the intervals.
    ends : 1-D ndarray
        Indices of the ends of the intervals.

    Returns
    -------
    maxes : 1-D ndarray
        Indices of max values of each interval.

    Notes
    -----
    Currently works with first lead only.
    """

    maxes = np.empty(starts.shape, dtype=np.float64)
    for i in range(maxes.shape[0]):
        maxes[i] = starts[i] + np.argmax(signal[0][starts[i]:ends[i]])
    return maxes


@njit(nogil=True)
def calc_hr(signal, hmm_annotation, fs, r_state=R_STATE):
    """ Calculate heart rate based on HMM prediction.

    Parameters
    ----------
    signal : 2-D ndarray
        ECG signal.
    hmm_annotation : 1-D ndarray
        Annotation of the signal predicted by the HMM model.
    fs : float
        Sampling rate of the signal.
    r_state : 1-D ndarray
        Array with values that represent R peak.
        Default value is R_STATE, which is a constant of this module.

    Returns
    -------
    hr_val : float
        Heart rate in beats per minute.
    """

    starts, ends = find_intervals_borders(hmm_annotation, r_state)
    # NOTE: Currently works on first lead signal only
    maxes = find_maxes(signal, starts, ends)
    diff = maxes[1:] - maxes[:-1]
    hr_val = (np.median(diff / fs) ** -1) * 60
    return hr_val
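
The arithmetic reduces to: median RR interval in seconds, inverted and scaled to beats per minute. A sketch with illustrative R-peak indices (not real ECG data):

```python
import numpy as np

# Heart rate from R-peak sample indices, mirroring the calc_hr arithmetic.
fs = 250.0                                    # assumed sampling rate, Hz
r_peaks = np.array([0, 250, 500, 750, 1000])  # one beat per second
rr = np.diff(r_peaks) / fs                    # RR intervals in seconds
hr = 60 / np.median(rr)                       # beats per minute
```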


@njit(nogil=True)
def calc_pq(hmm_annotation, fs, p_states=P_STATES, q_state=Q_STATE, r_state=R_STATE):
    """ Calculate PQ based on HMM prediction.

    Parameters
    ----------
    hmm_annotation : 1-D ndarray
        Annotation of the signal predicted by the HMM model.
    fs : float
        Sampling rate of the signal.
    p_states : 1-D ndarray
        Array with values that represent P peak.
        Default value is P_STATES, which is a constant of this module.
    q_state : 1-D ndarray
        Array with values that represent Q peak.
        Default value is Q_STATE, which is a constant of this module.
    r_state : 1-D ndarray
        Array with values that represent R peak.
        Default value is R_STATE, which is a constant of this module.

    Returns
    -------
    pq_val : float
        Duration of PQ interval in seconds.
    """

    p_starts, _ = find_intervals_borders(hmm_annotation, p_states)
    q_starts, _ = find_intervals_borders(hmm_annotation, q_state)
    r_starts, _ = find_intervals_borders(hmm_annotation, r_state)

    maxlen = hmm_annotation.shape[0]

    # Guard against empty state sequences before allocating interval buffers,
    # since np.ones(r_starts.shape[0] - 1) fails when r_starts is empty.
    if not p_starts.shape[0] * q_starts.shape[0] * r_starts.shape[0]:
        return 0.00

    p_final = - np.ones(r_starts.shape[0] - 1)
    q_final = - np.ones(r_starts.shape[0] - 1)

    temp_p = np.zeros(maxlen)
    temp_p[p_starts] = 1
    temp_q = np.zeros(maxlen)
    temp_q[q_starts] = 1

    for i in range(len(r_starts) - 1):
        low = r_starts[i]
        high = r_starts[i + 1]

        inds_p = np.where(temp_p[low:high])[0] + low
        inds_q = np.where(temp_q[low:high])[0] + low

        if inds_p.shape[0] == 1 and inds_q.shape[0] == 1:
            p_final[i] = inds_p[0]
            q_final[i] = inds_q[0]

    p_final = p_final[p_final > -1]
    q_final = q_final[q_final > -1]

    intervals = q_final - p_final
    return np.median(intervals) / fs
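
The pairing logic above can be sketched without numba: for each RR interval keep the (P, Q) pair only when exactly one P onset and one Q onset fall inside it, then take the median Q − P duration. The indices below are illustrative, not real ECG data:

```python
import numpy as np

fs = 250.0
r_starts = np.array([100, 350, 600])
p_starts = np.array([60, 310, 560])
q_starts = np.array([95, 345, 595])

durations = []
for low, high in zip(r_starts[:-1], r_starts[1:]):
    p_in = p_starts[(p_starts >= low) & (p_starts < high)]
    q_in = q_starts[(q_starts >= low) & (q_starts < high)]
    if len(p_in) == 1 and len(q_in) == 1:      # unambiguous pair only
        durations.append((q_in[0] - p_in[0]) / fs)
pq = np.median(durations)                      # PQ duration in seconds
```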


@njit(nogil=True)
def calc_qt(hmm_annotation, fs, t_states=T_STATES, q_state=Q_STATE, r_state=R_STATE):
    """ Calculate QT interval based on HMM prediction.

    Parameters
    ----------
    hmm_annotation : 1-D ndarray
        Annotation of the signal predicted by the HMM model.
    fs : float
        Sampling rate of the signal.
    t_states : 1-D ndarray
        Array with values that represent T peak.
        Default value is T_STATES, which is a constant of this module.
    q_state : 1-D ndarray
        Array with values that represent Q peak.
        Default value is Q_STATE, which is a constant of this module.
    r_state : 1-D ndarray
        Array with values that represent R peak.
        Default value is R_STATE, which is a constant of this module.

    Returns
    -------
    qt_val : float
        Duration of QT interval in seconds.
    """

    _, t_ends = find_intervals_borders(hmm_annotation, t_states)
    q_starts, _ = find_intervals_borders(hmm_annotation, q_state)
    r_starts, _ = find_intervals_borders(hmm_annotation, r_state)

    maxlen = hmm_annotation.shape[0]

    # Guard against empty state sequences before allocating interval buffers,
    # since np.ones(r_starts.shape[0] - 1) fails when r_starts is empty.
    if not t_ends.shape[0] * q_starts.shape[0] * r_starts.shape[0]:
        return 0.00

    t_final = - np.ones(r_starts.shape[0] - 1)
    q_final = - np.ones(r_starts.shape[0] - 1)

    temp_t = np.zeros(maxlen)
    temp_t[t_ends] = 1
    temp_q = np.zeros(maxlen)
    temp_q[q_starts] = 1

    for i in range(len(r_starts) - 1):
        low = r_starts[i]
        high = r_starts[i + 1]

        inds_t = np.where(temp_t[low:high])[0] + low
        inds_q = np.where(temp_q[low:high])[0] + low

        if inds_t.shape[0] == 1 and inds_q.shape[0] == 1:
            t_final[i] = inds_t[0]
            q_final[i] = inds_q[0]

    t_final = t_final[t_final > -1][1:]
    q_final = q_final[q_final > -1][:-1]

    intervals = t_final - q_final
    return np.median(intervals) / fs


@njit(nogil=True)
def calc_qrs(hmm_annotation, fs, s_state=S_STATE, q_state=Q_STATE, r_state=R_STATE):
    """ Calculate QRS interval based on HMM prediction.

    Parameters
    ----------
    hmm_annotation : 1-D ndarray
        Annotation of the signal predicted by the HMM model.
    fs : float
        Sampling rate of the signal.
    s_state : 1-D ndarray
        Array with values that represent S peak.
        Default value is S_STATE, which is a constant of this module.
    q_state : 1-D ndarray
        Array with values that represent Q peak.
        Default value is Q_STATE, which is a constant of this module.
    r_state : 1-D ndarray
        Array with values that represent R peak.
        Default value is R_STATE, which is a constant of this module.

    Returns
    -------
    qrs_val : float
        Duration of QRS complex in seconds.
    """
    _, s_ends = find_intervals_borders(hmm_annotation, s_state)
    q_starts, _ = find_intervals_borders(hmm_annotation, q_state)
    r_starts, _ = find_intervals_borders(hmm_annotation, r_state)

    maxlen = hmm_annotation.shape[0]

    # Guard against empty state sequences before allocating interval buffers,
    # since np.ones(r_starts.shape[0] - 1) fails when r_starts is empty.
    if not s_ends.shape[0] * q_starts.shape[0] * r_starts.shape[0]:
        return 0.00

    s_final = - np.ones(r_starts.shape[0] - 1)
    q_final = - np.ones(r_starts.shape[0] - 1)

    temp_s = np.zeros(maxlen)
    temp_s[s_ends] = 1
    temp_q = np.zeros(maxlen)
    temp_q[q_starts] = 1

    for i in range(len(r_starts) - 1):
        low = r_starts[i]
        high = r_starts[i + 1]

        inds_s = np.where(temp_s[low:high])[0] + low
        inds_q = np.where(temp_q[low:high])[0] + low

        if inds_s.shape[0] == 1 and inds_q.shape[0] == 1:
            s_final[i] = inds_s[0]
            q_final[i] = inds_q[0]

    s_final = s_final[s_final > -1][1:]
    q_final = q_final[q_final > -1][:-1]

    intervals = s_final - q_final
    return np.median(intervals) / fs


================================================
FILE: cardio/core/ecg_dataset.py
================================================
"""Contains ECG Dataset class."""

from .. import batchflow as bf
from .ecg_batch import EcgBatch


class EcgDataset(bf.Dataset):
    """Dataset that generates batches of ``EcgBatch`` class.

    Contains indices of ECGs and a specific ``batch_class`` to create and
    process batches - small subsets of data.

    Parameters
    ----------
    index : DatasetIndex or None, optional
        Unique identifiers of ECGs in a dataset. If ``index`` is not given, it
        is constructed by instantiating ``index_class`` with ``args`` and
        ``kwargs``.
    batch_class : type, optional
        Class of batches, generated by dataset. Must be inherited from
        ``Batch``.
    preloaded : tuple, optional
        Data to put in created batches. Defaults to ``None``.
    index_class : type, optional
        Class of built index if ``index`` is not given. Must be inherited from
        ``DatasetIndex``.
    args : misc, optional
        Additional positional arguments to ``index_class.__init__``.
    kwargs : misc, optional
        Additional named arguments to ``index_class.__init__``.
    """

    def __init__(self, index=None, batch_class=EcgBatch, preloaded=None, index_class=bf.FilesIndex, *args, **kwargs):
        if index is None:
            index = index_class(*args, **kwargs)
        super().__init__(index, batch_class, preloaded)


================================================
FILE: cardio/core/kernels.py
================================================
"""Contains kernel generation functions."""

import numpy as np


def _check_kernel_size(size):
    """Check if kernel size is a positive integer."""
    if not isinstance(size, int) or size < 1:
        raise ValueError("Kernel size must be a positive integer")


def gaussian(size, sigma=None):
    """Create a 1-D Gaussian kernel.

    Parameters
    ----------
    size : positive int
        Kernel size.
    sigma : positive float, optional
        Standard deviation of Gaussian distribution. Controls the degree of
        smoothing. If ``None``, it is set to ``(size + 1) / 6``.

    Returns
    -------
    kernel : 1-D ndarray
        Gaussian kernel.

    Raises
    ------
    ValueError
        If ``size`` is not a positive integer or ``sigma`` is not a positive
        number.
    """
    _check_kernel_size(size)
    if sigma is None:
        sigma = (size + 1) / 6
    elif not isinstance(sigma, (int, float)) or sigma <= 0:
        raise ValueError("Sigma must be a positive integer or float")
    i = np.arange(size) - (size - 1) / 2
    kernel = np.exp(-i**2 / (2 * sigma**2))
    return kernel / sum(kernel)
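
A quick standalone check of the kernel's properties (the function is duplicated here so the snippet runs on its own): it is symmetric, peaks at the center, and sums to one, so convolving with it preserves the signal's mean level:

```python
import numpy as np

def gaussian_kernel(size, sigma=None):
    """Standalone copy of the gaussian kernel above (validation omitted)."""
    if sigma is None:
        sigma = (size + 1) / 6
    i = np.arange(size) - (size - 1) / 2
    kernel = np.exp(-i**2 / (2 * sigma**2))
    return kernel / kernel.sum()

k = gaussian_kernel(5)   # symmetric, unit-sum, center-peaked
```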


================================================
FILE: cardio/core/utils.py
================================================
"""Miscellaneous ECG Batch utils."""

import functools

import pint
import numpy as np
from sklearn.preprocessing import LabelBinarizer as LB


UNIT_REGISTRY = pint.UnitRegistry()


def get_units_conversion_factor(old_units, new_units):
    """Return a multiplicative factor to convert a measured quantity from old
    to new units.

    Parameters
    ----------
    old_units : str
        Current units in SI format.
    new_units : str
        Target units in SI format.

    Returns
    -------
    factor : float
        A factor to convert quantities between units.
    """
    try:  # pint exceptions are wrapped in ValueError because they don't implement the __repr__ method
        factor = UNIT_REGISTRY(old_units).to(new_units).magnitude
    except Exception as error:
        raise ValueError(error.__class__.__name__ + ": " + str(error))
    return factor
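
To illustrate what the factor means, here is a hand-rolled sketch that skips pint entirely: a hypothetical table of scale factors relative to volts (illustration only; the real implementation delegates to pint's unit registry):

```python
# Hypothetical scale table -- NOT part of the library; pint does the real work.
SCALE_TO_VOLTS = {"uV": 1e-6, "mV": 1e-3, "V": 1.0}

def conversion_factor(old_units, new_units):
    """Multiplicative factor converting quantities from old to new units."""
    try:
        return SCALE_TO_VOLTS[old_units] / SCALE_TO_VOLTS[new_units]
    except KeyError as error:
        raise ValueError("Unknown units: " + str(error))

factor = conversion_factor("mV", "uV")  # multiply mV readings by ~1000 for uV
```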


def partialmethod(func, *frozen_args, **frozen_kwargs):
    """Wrap a method with partial application of given positional and keyword
    arguments.

    Parameters
    ----------
    func : callable
        A method to wrap.
    frozen_args : misc
        Fixed positional arguments.
    frozen_kwargs : misc
        Fixed keyword arguments.

    Returns
    -------
    method : callable
        Wrapped method.
    """
    @functools.wraps(func)
    def method(self, *args, **kwargs):
        """Wrapped method."""
        return func(self, *frozen_args, *args, **frozen_kwargs, **kwargs)
    return method
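
A small usage sketch (the wrapper is duplicated so the snippet is self-contained; ``Scaler`` is a hypothetical class): freezing the first positional argument of a method leaves the remaining parameters free at call time:

```python
import functools

def partialmethod(func, *frozen_args, **frozen_kwargs):
    """Standalone copy of the wrapper above."""
    @functools.wraps(func)
    def method(self, *args, **kwargs):
        return func(self, *frozen_args, *args, **frozen_kwargs, **kwargs)
    return method

class Scaler:
    def __init__(self, value):
        self.value = value

    def scale(self, factor, offset=0):
        return factor * self.value + offset

    # Freeze factor=2: double() now only accepts the remaining arguments.
    double = partialmethod(scale, 2)

s = Scaler(10)
```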


class LabelBinarizer(LB):
    """Encode categorical features using a one-hot scheme.

    Unlike ``sklearn.preprocessing.LabelBinarizer``, each label will be
    encoded using ``n_classes`` numbers even for binary problems.
    """
    # pylint: disable=invalid-name

    def transform(self, y):
        """Transform ``y`` using one-hot encoding.

        Parameters
        ----------
        y : 1-D ndarray of shape ``[n_samples,]``
            Class labels.

        Returns
        -------
        Y : 2-D ndarray of shape ``[n_samples, n_classes]``
            One-hot encoded labels.
        """
        Y = super().transform(y)
        if len(self.classes_) == 1:
            Y = 1 - Y
        if len(self.classes_) == 2:
            Y = np.hstack((1 - Y, Y))
        return Y

    def inverse_transform(self, Y, threshold=None):
        """Transform one-hot encoded labels back to class labels.

        Parameters
        ----------
        Y : 2-D ndarray of shape ``[n_samples, n_classes]``
            One-hot encoded labels.
        threshold : float, optional
            The threshold used in the binary and multi-label cases. If
            ``None``, it is assumed to be half way between ``neg_label`` and
            ``pos_label``.

        Returns
        -------
        y : 1-D ndarray of shape ``[n_samples,]``
            Class labels.
        """
        if len(self.classes_) == 1:
            y = super().inverse_transform(1 - Y, threshold)
        elif len(self.classes_) == 2:
            y = super().inverse_transform(Y[:, 1], threshold)
        else:
            y = super().inverse_transform(Y, threshold)
        return y
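
The binary special case is the interesting one. A pure-NumPy sketch of what the override does: scikit-learn's ``LabelBinarizer`` returns a single column for two classes, while this subclass stacks ``(1 - Y, Y)`` so every label gets a full ``n_classes``-wide one-hot row:

```python
import numpy as np

y = np.array([0, 1, 1, 0])
Y_parent = y.reshape(-1, 1)                # single column, as the parent yields
Y_full = np.hstack((1 - Y_parent, Y_parent))
# Each row now sums to one and has exactly one hot entry.
```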


================================================
FILE: cardio/models/__init__.py
================================================
"""Contains ECG models and custom functions."""

from .dirichlet_model import *  # pylint: disable=wildcard-import
from .fft_model import *  # pylint: disable=wildcard-import
from .hmm import *  # pylint: disable=wildcard-import
from . import metrics


================================================
FILE: cardio/models/dirichlet_model/__init__.py
================================================
"""Contains the Dirichlet model class."""

from .dirichlet_model import DirichletModel, concatenate_ecg_batch


================================================
FILE: cardio/models/dirichlet_model/dirichlet_model.py
================================================
"""Contains Dirichlet model class."""

from itertools import zip_longest

import numpy as np
import tensorflow as tf

from ..layers import conv1d_block, resnet1d_block
from ...batchflow.models.tf import TFModel #pylint: disable=no-name-in-module, import-error


def concatenate_ecg_batch(batch, model, return_targets=True):
    """Concatenate batch signals and (optionally) targets.

    Parameters
    ----------
    batch : EcgBatch
        Batch to concatenate.
    model : BaseModel
        A model to get the resulting arguments.
    return_targets : bool
        Specifies whether to return concatenated targets.

    Returns
    -------
    kwargs : dict
        Named arguments for the model's train or predict method. Has the following
        structure:
        "feed_dict" : dict
            "signals" : 3-D ndarray
                Concatenated signals.
            "targets" : 2-D ndarray, optional
                Concatenated targets.
        "split_indices" : 1-D ndarray
            Split indices to undo the concatenation.
    """
    _ = model
    x = np.concatenate(batch.signal)
    split_indices = np.cumsum([item.signal.shape[0] for item in batch])[:-1]
    res_dict = {"feed_dict": {"signals": x}, "split_indices": split_indices}
    if return_targets:
        y = np.concatenate([np.tile(item.target, (item.signal.shape[0], 1)) for item in batch])
        res_dict["feed_dict"]["targets"] = y
    return res_dict
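
The concatenation round-trips via ``np.split``: stack each item's segments along the first axis, record cumulative segment counts, and split the model output back into per-item chunks. A sketch with illustrative shapes (zeros stand in for real signal segments):

```python
import numpy as np

signals = [np.zeros((2, 2048, 1)),   # item with 2 segments
           np.zeros((3, 2048, 1)),   # item with 3 segments
           np.zeros((1, 2048, 1))]   # item with 1 segment
x = np.concatenate(signals)
split_indices = np.cumsum([s.shape[0] for s in signals])[:-1]  # [2, 5]
recovered = np.split(x, split_indices)                         # undo the concat
```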


class DirichletModelBase(TFModel):
    """Dirichlet model class.

    The model predicts Dirichlet distribution parameters from which class
    probabilities are sampled.

    Notes
    -----
    **Configuration**

    Model config must contain the following keys:

    * input_shape : tuple
        Input signal's shape without the batch dimension.
    * class_names : array_like
        Class names.
    * loss : ``None``
        The model has a predefined loss, so you should leave it ``None``.
    """

    def _build(self, *args, **kwargs):  # pylint: disable=too-many-locals
        """Build Dirichlet model."""
        _ = args, kwargs
        input_shape = self.config["input_shape"]
        class_names = self.config["class_names"]

        with self.graph.as_default():
            self.store_to_attr("class_names", tf.constant(class_names))

            signals = tf.placeholder(tf.float32, shape=(None,) + input_shape, name="signals")
            self.store_to_attr("signals", signals)
            signals_channels_last = tf.transpose(signals, perm=[0, 2, 1], name="signals_channels_last")

            k = 0.001
            targets = tf.placeholder(tf.float32, shape=(None, len(class_names)), name="targets")
            self.store_to_attr("targets", targets)
            targets_soft = (1 - 2 * k) * targets + k

            block = conv1d_block("conv", signals_channels_last, is_training=self.is_training,
                                 filters=8, kernel_size=5)

            block_config = [
                (8, 3, True),
                (8, 3, False),
                (8, 3, True),
                (8, 3, False),
                (12, 3, True),
                (12, 3, False),
                (12, 3, True),
                (12, 3, False),
                (16, 3, True),
                (16, 3, False),
                (16, 3, False),
                (16, 3, True),
                (16, 3, False),
                (16, 3, False),
                (20, 3, True),
                (20, 3, False),
            ]
            for i, (filters, kernel_size, downsample) in enumerate(block_config):
                block = resnet1d_block("block_" + str(i + 1), block, is_training=self.is_training,
                                       filters=filters, kernel_size=kernel_size, downsample=downsample)

            with tf.variable_scope("global_max_pooling"):  # pylint: disable=not-context-manager
                block = tf.reduce_max(block, axis=1)

            with tf.variable_scope("dense"):  # pylint: disable=not-context-manager
                dense = tf.layers.dense(block, len(class_names), use_bias=False, name="dense")
                bnorm = tf.layers.batch_normalization(dense, training=self.is_training, name="batch_norm", fused=True)
                act = tf.nn.softplus(bnorm, name="activation")

            parameters = tf.identity(act, name="parameters")
            self.store_to_attr("parameters", parameters)
            predictions = tf.identity(act, name="predictions")
            self.store_to_attr("predictions", predictions)
            loss = tf.reduce_mean(tf.lbeta(parameters) -
                                  tf.reduce_sum((parameters - 1) * tf.log(targets_soft), axis=1), name="loss")
            tf.losses.add_loss(loss)


class DirichletModel(DirichletModelBase):
    """Dirichlet model with overloaded train and predict methods.

    * ``train`` method is identical to ``DirichletModelBase.train``, but also
      accepts ``args`` and ``kwargs``.

    * ``predict`` method splits the resulting tensor for ``parameters`` fetch
      using ``split_indices``. It also splits and aggregates results for
      ``predictions`` fetch to get class probabilities.
    """

    @staticmethod
    def _get_dirichlet_mixture_stats(alpha):
        """Get mean and variance vectors of the mixture of Dirichlet
        distributions with equal weights and given parameters.

        Parameters
        ----------
        alpha : 2-D ndarray
            Dirichlet distribution parameters along axis 1 for each mixture
            component.

        Returns
        -------
        mean : 1-D ndarray
            Mean of the mixture.
        var : 1-D ndarray
            Variance of the mixture.
        """
        alpha_sum = np.sum(alpha, axis=1)[:, np.newaxis]
        comp_m1 = alpha / alpha_sum
        comp_m2 = (alpha * (alpha + 1)) / (alpha_sum * (alpha_sum + 1))
        mean = np.mean(comp_m1, axis=0)
        var = np.mean(comp_m2, axis=0) - mean**2
        return mean, var
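
A numeric sketch (the helper is duplicated so the snippet runs standalone): the mixture mean of the component Dirichlet means is itself a probability vector, and the variance stays positive:

```python
import numpy as np

def dirichlet_mixture_stats(alpha):
    """Standalone copy of _get_dirichlet_mixture_stats."""
    alpha_sum = np.sum(alpha, axis=1)[:, np.newaxis]
    comp_m1 = alpha / alpha_sum                                  # component means
    comp_m2 = (alpha * (alpha + 1)) / (alpha_sum * (alpha_sum + 1))
    mean = np.mean(comp_m1, axis=0)
    var = np.mean(comp_m2, axis=0) - mean**2
    return mean, var

# Two mixture components over two classes (illustrative parameters).
alpha = np.array([[2.0, 1.0], [4.0, 2.0]])
mean, var = dirichlet_mixture_stats(alpha)
```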

    def train(self, fetches=None, feed_dict=None, use_lock=False, *args, **kwargs):
        """Train the model with the data provided.

        The only difference between ``DirichletModel.train`` and
        ``TFModel.train`` is that the former also accepts ``args`` and
        ``kwargs``.

        Parameters
        ----------
        fetches : tf.Operation or tf.Tensor or array-like sequence of them
            Graph element to evaluate in addition to ``train_step``.
        feed_dict : dict
            A dictionary that maps graph elements to values.
        use_lock : bool
            If ``True``, the whole train step is locked, thus allowing for
            multithreading.

        Returns
        -------
        output : same structure as ``fetches``
            Calculated values for each element in ``fetches``.
        """
        _ = args, kwargs
        return super().train(fetches, feed_dict, use_lock)

    def predict(self, fetches=None, feed_dict=None, split_indices=None):  # pylint: disable=arguments-differ
        """Get predictions on the data provided.

        Parameters
        ----------
        fetches : tf.Operation or tf.Tensor or array-like sequence of them
            Graph element to evaluate.
            If ``fetches`` contains ``parameters`` tensor, the corresponding
            output is split using ``split_indices``.
            If ``fetches`` contains ``predictions`` tensor, the corresponding
            output is split using ``split_indices`` and then aggregated to get
            class probabilities.
        feed_dict : dict
            A dictionary that maps graph elements to values.
        split_indices : 1-D ndarray
            Indices used to split ``parameters`` and ``predictions`` tensors.

        Returns
        -------
        output : same structure as ``fetches``
            Calculated values for each element in ``fetches``.
        """
        if isinstance(fetches, (list, tuple)):
            fetches_list = list(fetches)
        else:
            fetches_list = [fetches]
        output = super().predict(fetches_list, feed_dict)
        for i, fetch in enumerate(fetches_list):
            if fetch == "parameters":
                output[i] = np.split(output[i], split_indices)
            elif fetch == "predictions":
                class_names = self.class_names.eval(session=self.session)  # pylint: disable=no-member
                class_names = [c.decode("utf-8") for c in class_names]
                n_classes = len(class_names)
                max_var = (n_classes - 1) / n_classes**2
                alpha = np.split(output[i], split_indices)
                targets = feed_dict.get("targets")
                targets = [] if targets is None else [t[0] for t in np.split(targets, split_indices)]
                res = []
                for a, t in zip_longest(alpha, targets):
                    mean, var = self._get_dirichlet_mixture_stats(a)
                    uncertainty = var[np.argmax(mean)] / max_var
                    predictions_dict = {"target_pred": dict(zip(class_names, mean)),
                                        "uncertainty": uncertainty}
                    if t is not None:
                        predictions_dict["target_true"] = dict(zip(class_names, t))
                    res.append(predictions_dict)
                output[i] = res
        if isinstance(fetches, list):
            pass
        elif isinstance(fetches, tuple):
            output = tuple(output)
        else:
            output = output[0]
        return output


================================================
FILE: cardio/models/dirichlet_model/dirichlet_model_training.ipynb
================================================
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Dirichlet model training"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this notebook we will train a Dirichlet model for atrial fibrillation detection."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Table of contents"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "* [Dataset initialization](#Dataset-initialization)\n",
    "* [Training pipeline](#Training-pipeline)\n",
    "* [Saving the model](#Saving-the-model)\n",
    "* [Testing pipeline](#Testing-pipeline)\n",
    "* [Predicting pipeline](#Predicting-pipeline)\n",
    "* [Analyzing the uncertainty](#Analyzing-the-uncertainty)\n",
    "* [Visualizing predictions](#Visualizing-predictions)\n",
    "    * [Certain prediction](#Certain-prediction)\n",
    "    * [Uncertain prediction](#Uncertain-prediction)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Using TensorFlow backend.\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "import sys\n",
    "from functools import partial\n",
    "\n",
    "import numpy as np\n",
    "import tensorflow as tf\n",
    "import matplotlib.pyplot as plt\n",
    "import seaborn as sns\n",
    "from scipy.stats import beta\n",
    "\n",
    "sys.path.append(os.path.join(\"..\", \"..\", \"..\"))\n",
    "import cardio.batchflow as bf\n",
    "from cardio import EcgDataset\n",
    "from cardio.batchflow import B, V, F\n",
    "from cardio.models.dirichlet_model import DirichletModel, concatenate_ecg_batch\n",
    "from cardio.models.metrics import f1_score, classification_report, confusion_matrix"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Set Seaborn plotting parameters:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "sns.set(\"talk\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "By default, TensorFlow attempts to allocate almost the entire memory on all of the available GPUs. Executing this instruction makes only the GPU with id 0 visible for TensorFlow in this process."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "env: CUDA_VISIBLE_DEVICES=0\n"
     ]
    }
   ],
   "source": [
    "%env CUDA_VISIBLE_DEVICES=0"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Dataset initialization"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First, we need to specify paths to ECG signals and their labels:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "SIGNALS_PATH = \"D:\\\\Projects\\\\data\\\\ecg\\\\training2017\\\\\"\n",
    "SIGNALS_MASK = SIGNALS_PATH + \"*.hea\"\n",
    "LABELS_PATH = SIGNALS_PATH + \"REFERENCE.csv\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let's create an ECG dataset and perform a train/test split:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "eds = EcgDataset(path=SIGNALS_MASK, no_ext=True, sort=True)\n",
    "eds.split(0.8)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Training pipeline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The Dirichlet model builder expects the model config to contain the input signal's shape and class names:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.5, allow_growth=True)\n",
    "\n",
    "model_config = {\n",
    "    \"session\": {\"config\": tf.ConfigProto(gpu_options=gpu_options)},\n",
    "    \"input_shape\": F(lambda batch: batch.signal[0].shape[1:]),\n",
    "    \"class_names\": F(lambda batch: batch.label_binarizer.classes_),\n",
    "    \"loss\": None,\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "N_EPOCH = 1000\n",
    "BATCH_SIZE = 256"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Model training pipeline is composed of:\n",
    "* model initialization with the config defined above\n",
    "* data loading, preprocessing (e.g. flipping) and augmentation (e.g. resampling)\n",
    "* train step execution\n",
    "\n",
    "Let's create a template pipeline, then link it to our training dataset and run:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "template_train_ppl = (\n",
    "    bf.Pipeline()\n",
    "      .init_model(\"dynamic\", DirichletModel, name=\"dirichlet\", config=model_config)\n",
    "      .init_variable(\"loss_history\", init_on_each_run=list)\n",
    "      .load(components=[\"signal\", \"meta\"], fmt=\"wfdb\")\n",
    "      .load(components=\"target\", fmt=\"csv\", src=LABELS_PATH)\n",
    "      .drop_labels([\"~\"])\n",
    "      .rename_labels({\"N\": \"NO\", \"O\": \"NO\"})\n",
    "      .flip_signals()\n",
    "      .random_resample_signals(\"normal\", loc=300, scale=10)\n",
    "      .random_split_signals(2048, {\"A\": 9, \"NO\": 3})\n",
    "      .binarize_labels()\n",
    "      .train_model(\"dirichlet\", make_data=concatenate_ecg_batch,\n",
    "                   fetches=\"loss\", save_to=V(\"loss_history\"), mode=\"a\")\n",
    "      .run(batch_size=BATCH_SIZE, shuffle=True, drop_last=True, n_epochs=N_EPOCH, lazy=True)\n",
    ")\n",
    "\n",
    "train_ppl = (eds.train >> template_train_ppl).run()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Training loss is stored in \"loss_history\" pipeline variable. Let's take a look at its plot:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "<base64-encoded PNG of the training loss plot omitted>"
WLu+++m9LSUn7/+9+zZs0abrrpprACYnl5OVu3bj3iOa+99hp33XUXv/jFLzj//PN7PZ6QkACA\n2+3GZrN1f5yUFH6IampyxOQIot2eTGurg5Dv0P6He/e3YKX3fZgix+Lw/goGQ9EuR05C6jGJJPWX\nRJL6SyItVnssI8PW72NhBcSamhpmzZoFwPvvv8/ixYsBKCgooLOzcwBKhMcee4xnn32Wxx9/nDlz\n5vR5jt1uJzMzk8rKSrKysgCorKxk7NixYb9OKBQiEBiQkgdcMBgiPs6MyTAIhkK0dnoJBGKnkWRo\nCwZD6ieJKPWYRJL6SyJJ/SWRNpR6LKyxtBEjRvDhhx/y3nvvUVVVxVlnnQXAyy+/zJgxY064iJdf\nfpk//vGPvPjii/2Gw4MuuOACHnnkEVpbW9m9ezfPP/883//+90+4hlhhMgxSkrsWqml3eKNcjYiI\niIiIDCdhjSDecMMN/Ou//iuBQICzzz6biRMncs899/DSSy/x+OOPn3ARTz75JA6HgwsvvLDH8RUr\nVjB27FjKysp46qmnmDlzJjfeeCP33HMPCxcuxDAMrrrqKhYuXHjCNcSS1KQ42jq9dDgVEEVERERE\nZPAYoVAorLHO5uZm6urqmDhxIgC7du0iLS2NzMzMiBY40BoaOqJdQi9ms0FGho3m5k4CgRDL/nsd\nm3e38J2ZI/jRt0uiXZ4Mcd/sL5GBph6TSFJ/SSSpvyTSYrXHsrNT+n3smJZrKS4uBmDz5s38+c9/\nZtOmTSdWmfQpJTkOgHaNIIqIiIiIyCAKKyC+++67zJ8/ny+//JI9e/Zw5ZVX8tZbb3HjjTfy3HPP\nRbrGYSf9wF6ITW3hb98hIiIiIiJyosIKiA899BA//elPKS8vZ8WKFeTn57Ny5UqWLVvGM888E+ES\nh58ce9c+jw2trihXIiIiIiIiw0lYAXH37t2cd955ALz33nucffbZAEyYMIHGxsbIVTdMZR8IiG0O\nLx5vjO7JISIiIiIiJ52wAmJubi5btmxhy5Yt7NixgzPPPBPo2hOxqKgoogUORwdHEAEa2jSKKCIi\nIiIigyOsbS6uvvpqbrjhBgCmTZvGjBkzePTRR/mP//gPfv3rX0e0wOEoPTUes8kgEAzR0OKiKNsW\n7ZJERERERGQYCCsgXnbZZUybNo3q6mrmzp0LwNy5c/n2t79NaWlpRAscjswmE1lpCdS1uKhuclBG\ndrRLEhERERGRYSCsgAgwadIkLBYLq1evJhgMUlxcrHAYQWMKUqlrcbF1Xyvnzol2NSIiIiIiMhyE\nFRDb29v5t3/7N95//33S0tIIBAI4HA7Kysr43e9+R0pK/xstyvEpHZnOp5vr2L6vDX8giMV8TFtW\nioiIiIiIHLOwUsevfvUr6uvrWblyJZ9//jlr167ljTfewO126x7ECJkwKh0Ajy/A/gZHlKsRERER\nEZHhIKyAuHr1au644w7Gjh3bfaykpITbb7+dv/71rxErbjjLTksgztL19tRrP0QRERERERkEYQVE\ni8VCfHx8r+MJCQn4fL4BL0rAMAyyDmx30aiAKCIiIiIigyCsgFheXs6vf/1rWltbu481Nzfzm9/8\nhvLy8ogVN9xlpSUA0KCAKCIiIiIigyCsRWpuvfVW/v7v/54zzzyToqIiAKqqqhg7diz33HNPRAsc\nzrLTukYQG9rcUa5ERERERESGg7ACYnZ2Nm+88QYffvghu3btIj4+njFjxlBeXo5hGJGucdjKsneN\nIGqKqYiIiIiIDIZ+A6LX6+11bO7cucydO7f7zwfvP4yLi4tAaZKTfmAEsdWN2+snIS7sbStFRERE\nRESOWb+JY+rUqUcdHQyFQhiGQUVFxYAXJlBSZMcAgqEQW/e2cuq4rGiXJCIiIiIiJ7F+A+Kzzz47\nmHVIH2yJVkbmpbCntoPNu5sVEEVEREREJKL6DYizZs0azDqkH6Uj
7eyp7WB3bUe0SxERERERkZNc\nWNtcSPTkpicBWqhGREREREQiL2YC4uOPP878+fOZOXMmV155Jdu2bevzvObmZiZMmEBZWVn3r9tv\nv32Qqx08B1cybe304vMHolyNiIiIiIiczGJiWcxXXnmF119/neeee478/HyefPJJrrvuOlatWoXJ\n1DPDVlRUUFJSwptvvhmlagfXwb0QARrb3ORnJkexGhEREREROZnFREBsaWnhxz/+MSNGjADgqquu\n4qGHHqK2tpaCgoIe527ZsoXS0tLjfi3DMDDFzLhpF5PJ6PH74bLTEzGAENDc4aEoxza4xcmQd6T+\nEhkI6jGJJPWXRJL6SyJtKPZYWAHxrLPO6nPLC8MwsFqt5Obmcu6553LRRRf1ew2/34/T6ex13GQy\ncc011/Q4tnr1aux2O3l5eb3Or6iooKqqinPOOYfOzk7mzZvHrbfeSmpqajifCpmZyUfdviNa7Pa+\nRwcz0xJobHPj9AXJyFBAlOPTX3+JDBT1mESS+ksiSf0lkTaUeiysgHhwRO+KK65g2rRpAGzYsIHn\nn3+eiy66iIyMDB5++GE6OztZsmRJn9dYs2ZNn48VFhayevXqHufdcccdLF26tNf0UgCbzcbpp5/O\ntddei8/n45ZbbuGOO+5g+fLlYX3CTU2OmBxBtNuTaW11EAyGej2ebU+ksc3Njr3NNJdmR6FCGcqO\n1l8iJ0o9JpGk/pJIUn9JpMVqjx1p0CmsgPj6669z1113ccEFF3QfO/vss5kwYQK///3veeWVV5g0\naRJ33nlnvwGxvLycrVu3HvF1XnvtNe666y5+8YtfcP755/d5ztKlS3v8+Wc/+xmXX345wWCwz0D5\nTaFQiECMrvUSDIYIBHo3TlG2jYo9Leyp7ezzcZFw9NdfIgNFPSaRpP6SSFJ/SaQNpR4Layxt165d\nnHLKKb2Ol5aWsmPHDgDGjBlDQ0PDcRfy2GOPce+99/L444/zwx/+sM9zgsEgy5Yto6qqqvuYx+PB\narWGFQ6HqpG5XQl/X30HodDQaCwRERERERl6wkpVkydP5g9/+AN+v7/7WCAQ4JlnnuleMOaLL74g\nPz//uIp4+eWX+eMf/8iLL77InDlz+i/WZGL9+vU8+OCDOJ1OGhoaePDBB1m8ePFxve5QMeLAwjQu\nT4DGNneUqxERERERkZNVWFNMb7/9dq655hrOOussJk2aRDAYZOvWrQQCAZ588knWrl3Lbbfdxi9/\n+cvjKuLJJ5/E4XBw4YUX9ji+YsUKxo4dS1lZGU899RQzZ87kgQceYOnSpcyfPx/DMFi0aBE333zz\ncb3uUFGQlYzZZBAIhqhq6CTbnnj0J4mIiIiIiBwjIxTmnMXOzk5WrlzJtm3bsFgslJSUcN5555GQ\nkEBVVRWdnZ0ntP3EYGlo6Ih2Cb2YzQYZGTaam/u/x/C2Jz+jrtnJRQvGsvD0UYNcoQxl4fSXyIlQ\nj0kkqb8kktRfEmmx2mPZ2Sn9Phb2Pog2m41LLrmkz8eKioqOvSo5JvkZSdQ1O6lt6r1ViIiIiIiI\nyEAIKyDu3buXBx54gE2bNuHz+XotlPLxxx9HpDg5JC8zCXZATbMCooiIiIiIREZYAfG2226jubmZ\nJUuWYLNpo/ZoyMtIAtAIooiIiIiIRExYAXHjxo2sWLGC8ePHR7oe6Ud+ZldA7HT56HT5sCVao1yR\niIiIiIicbMLa5qKgoIDOzs5I1yJHcHAEETSKKCIiIiIikRHWCOJNN93EXXfdxb/8y78watQorNae\no1fFxcURKU4OSUmKw5ZopdPlo6bZwbiitGiXJCIiIiIiJ5mwAuJPf/rTHr8DGIZBKBTCMAwqKioi\nU530kJeRxI79bRpBFBERERGRiAgrIK5atSrSdUgY8jIPBEStZCoiIiIiIhEQVkAsLCyMdB0ShvwD\n9yHWaARRREREREQioN+AOHfu
XP7nf/6H9PR05s6de8SLaB/EwXFwoZqGVhf+QBCLOaw1hkRERERE\nRMLSb0C86aabSE5O7v5Yoi/vwFYXgWCIhlYX+ZnJUa5IREREREROJv0GxMWLF/f5sURPtj0Rs8kg\nEAxR2+xUQBQRERERkQEV1j2IHo+HFStWsHHjRnw+X6/Hly1bNuCFSW8Ws4lseyK1zc6ulUxLol2R\niIiIiIicTMK6ie3f//3f+c1vfoPD4SAuLq7XLxk8+QemmVY3OaJciYiIiIiInGzCGkF89913efjh\nh5k3b16k65GjKMhKZt32RvY3KCCKiIiIiMjACmsEMTk5maKiokjXImEozO6677C6yUEwFIpyNSIi\nIiIicjIJKyBeddVVLFu2jLa2tkjXI0dRmGUDwOsL0tTmjnI1IiIiIiJyMglriumqVavYvHkzs2fP\nJjU1FavV2uNx7YM4ePIykjAZBsFQiP0NDrLtidEuSUREREREThJhBcRLL7000nVImKwWE7kZidQ0\nOdnf2Mm0kqxolyQiIiIiIieJsAKi9kGMLQVZyQcCohaqERERERGRgdNvQLzpppu46667sNls3HTT\nTUe8yInug+j1ernnnnt4++238fl8zJo1izvvvJPc3Nw+z73zzjt59913sVgsXHnllfzkJz85odcf\nagqzkvliawPVWslUREREREQGUL+L1By+v2Ffex8O5D6Ijz32GDt37uTtt9/m008/xW6388tf/rLP\nc5cvX051dTWrVq3ixRdf5KWXXuKtt9464RqGksLsroVqqpucBINayVRERERERAZGvyOI9957b58f\nR8L111+Pz+cjISGBlpYWHA4H6enpfZ77+uuvs2zZMlJSUkhJSeGKK67g1VdfZdGiRWG9lmEYmMJa\nu3XwmExGj9+PZkRO11YX/kCQpnY3eZlJEatNhr5j7S+RY6Uek0hSf0kkqb8k0oZij4V1DyLA+vXr\n2b59O8FgEIBQKITX62Xz5s3cf//9R32+3+/H6XT2Om4ymbDZbJjNZh599FEeffRRcnJyeOGFF3qd\n29bWRlNTE+PGjes+Vlxc3Oe5/cnMTMYwYvMNstuTwzovNS0Ji9nAHwjR5vYzKcMW4crkZBBuf4kc\nL/WYRJL6SyJJ/SWRNpR6LKyA+Nvf/pbf/e535OTkUF9fT25uLo2NjQQCAb773e+G9UJr1qxhyZIl\nvY4XFhayevVqAP7xH/+Ra6+9lgceeIBrrrmGlStX9thSw+VyAZCYeGhrh4SEBNzu8PcDbGpyxOQI\not2eTGurI+wpo3kZSVQ1OPi6sokJhakRrlCGsuPpL5FjoR6TSFJ/SSSpvyTSYrXHMo4wwBRWQHz5\n5Ze58847ueSSS1iwYAHPPvssaWlp3HDDDYwaNSqsIsrLy9m6desRz4mPjwfg3/7t3/jv//5vtm3b\nxuTJk7sfT0hIAMDtdmOz2bo/TkoKf4plKBQiEAj79EEVDIYIBMJrnIKsZKoaHFTVd4b9HBnejqW/\nRI6HekwiSf0lkaT+kkgbSj0W1lhaS0sLZ5xxBgClpaV89dVXpKam8rOf/YyVK1eecBG33XYbL774\nYvefA4EAwWCQ1NSeI2N2u53MzEwqKyu7j1VWVjJ27NgTrmGoKczqGqbeVd1OKDQ0mk1ERERERGJb\nWAExOzuburo6AMaMGUNFRQUA6enpNDU1nXARU6dO5emnn6aqqgqXy8WvfvUrZsyYwYgRI3qde8EF\nF/DII4/Q2trK7t27ef755/n+979/wjUMNZOLMwFobHOzt64zytWIiIiIiMjJIKyAuGjRIm6++WbW\nrl3LvHnzePnll3njjTf47W9/y5gxY064iEsvvZQf/OAH/OhHP2LBggW4XC4eeuih7sfLyspYu3Yt\nADfeeCOjR49m4cKFXHbZZVx88cUsXLjwhGsYaorzU8hM7ZqS+8W2hihXIyIiIiIiJwMjFMb8xE
Ag\nwJNPPsn48eM5++yzefjhh3nmmWfIzc3l/vvvZ+rUqYNR64BoaOiIdgm9mM0GGRk2mpuP7X7CZ9/+\nmvfXVzNxVDo3/6gsghXKUHa8/SUSLvWYRJL6SyJJ/SWRFqs9lp2d0u9jYS1S89xzz/HDH/6Q3Nxc\noGvfwuuvv35gqpPjVpyfyvvrq9ld20EwFMIUo9t3iIiIiIjI0BDWFNNHH30Uj8cT6VrkGBXndy3i\n4/L4qWvuvcekiIiIiIjIsQgrIJ555pm88MILtLe3R7oeOQYFWcnEW80A7Nyv90ZERERERE5MWFNM\n9+zZw8qVK3n22Wex2Wzd+xUe9PHHH0ekODkyk8mgpCiNTZXNVOxpZu7U/GiXJCIiIiIiQ1hYAfHy\nyy+PdB1ynCaNzmBTZTNbdrcQCoUwdB+iiIiIiIgcp34D4qOPPso111xDYmIiixcvHsya5BhMLs6A\n96DN4WXL7pauP4uIiIiIiByHfu9BfOyxx3A6tfBJrCvKTmZsQddiNX/5294oVyMiIiIiIkNZvwEx\njO0RJQYYhsH8skIAtu1tJRjU+yYiIiIiIsfniPcg+nw+vF7vUS8SFxc3YAXJsRtzYATR6w9S2+yk\nICs5yhWJiIiIiMhQdMSAuGDBgrAuUlFRMSDFyPHJTU8izmLC6w+yt75DAVFERERERI7LEQPiww8/\nTFpa2mDVIsfJZDIoyrGxq7qdvbWdzJ4U7YpERERERGQo6jcgGobB9OnTyczMHMx65DiNKUhlV3U7\nGyubuJhx0S5HRERERESGIC1Sc5KYMT4bgP0NDmqaHFGuRkREREREhqJ+A+LixYuJj48fzFrkBJQU\n2UlN7los6KsdTVGuRkREREREhqJ+A+K9996LzWYbzFrkBJhMBpNGpQPw9d6WKFcjIiIiIiJDUb8B\nUYae0gMBcdu+VgLBYJSrERERERGRoUYB8SQy8UBAdHsD/M//7tZ9pCIiIiIickwUEE8i2fZEZkzo\nWqzmjf/dzasfVUa5IhERERERGUoUEE8yPzq7BKul6239bHOtRhFFRERERCRsMREQvV4vd955J7Nn\nz2bGjBn85Cc/oa6urs9zm5ubmTBhAmVlZd2/br/99kGuOHZlpCZwy2XTAWhsc1Pb7IxyRSIiIiIi\nMlTEREB87LHH2LlzJ2+//TaffvopdrudX/7yl32eW1FRQUlJCevWrev+tXTp0kGuOLaNzk/BlmgF\n4LPNfQdtERERERGRb4qJgHj99dfz1FNPYbfbcTgcOBwO0tPT+zx3y5YtlJaWDnKFQ4vJMJg7JR+A\nd7+owumm+ODlAAAgAElEQVT2R7kiEREREREZCiyD9UJ+vx+ns/d0R5PJhM1mw2w28+ijj/Loo4+S\nk5PDCy+80Od1KioqqKqq4pxzzqGzs5N58+Zx6623kpqaGlYdhmFgiolYfIjJZPT4fSCcWz6KVV9W\n4fL4+WxLLd85bcSAXVuGlkj0l8jh1GMSSeoviST1l0TaUOwxIzRIq5h88sknLFmypNfxwsJCVq9e\nDYDH4yEUCvHAAw/w4YcfsnLlSqxWa4/zb7/9dtLS0rj22mvx+XzccsstpKamsnz58rDqCIVCGMbQ\neYNOxPL/+pLVa/eRn5XMwzfNJyFu0P4/QEREREREhqBBC4jHwuv1Mn36dP70pz8xefLkI567adMm\nLr/8ctatW4cpjKHBxsbOmBxBtNuTaW11EAwO3Nuxu6adO//zb4RCUH5KHv90waRhE47lkEj1l8hB\n6jGJJPWXRJL6SyItVnssI8PW72MxMaR02223MWXKFC677DIAAoEAwWCw17TRYDDI8uXLueSSSygq\nKgK6Rh2tVmtY4RC6RhADgYGtf6AEgyECgYFrnBE5Kfxw3hhe/mAXn2yq5ZQxGcyelDdg15ehZaD7\nS+Sb1GMSSeoviST1l0TaUOqxmBhLmzp1Kk8//TRVVVW4XC
5+9atfMWPGDEaM6HnfnMlkYv369Tz4\n4IM4nU4aGhp48MEHWbx4cZQqj30LZ49i8uiuBX/eXVsV5WpERERERCSWxURAvPTSS/nBD37Aj370\nIxYsWIDL5eKhhx7qfrysrIy1a9cC8MADD+DxeJg/fz7nnXce48eP5+abb45W6THPZBicc/ooAHZV\nt/PF1oYoVyQiIiIiIrEqJu9BjKSGho5ol9CL2WyQkWGjubkzIkPPwVCIX/5xLXtquz73xWcUc/63\nigf8dSQ2Rbq/RNRjEknqL4kk9ZdEWqz2WHZ2Sr+PxcQIokSWyTD4Pz84hfSUeABe/3g3VQ2dUa5K\nRERERERijQLiMJFtT+Tua08nPSWeYCjE8v//K9od3miXJSIiIiIiMUQBcRhJjLdw3QWTsVpMtHR4\nWPnpnmiXJCIiIiIiMUQBcZgZP8LOwtNHAvDO2n389qWv+NvX9QyzW1FFRERERKQPCojD0HdPG0Fh\nVjIAG3Y28cRrm/hsS12UqxIRERERkWhTQByGkhKs/OLvZzJtXFb3sdVfVmkUUURERERkmFNAHKbi\nrGZ++ndTuOp7EwDYub+df/z1+2zY2RjlykREREREJFoUEIcxwzA4c1pB90hiMBTiP//8NX9Zs5fa\nZmeUqxMRERERkcGmgDjMGYbBdRdMZt6pBQC0dXr50+od3PH0GjbuasLnDxLU1FMRERERkWHBEu0C\nJPri48z8w8JSxham8tGGGvbWduD1B3nitU34/EFG5tq44aJTSU2Ki3apIiIiIiISQRpBlG5nTC3g\n51fM4M6rZ2ExG7i9AQLBEJU1Hfz6xXW0O7zRLlFERERERCJIAVF6yctIYsmiiZSVZLFo9ihMhkF1\no4OHX95Ac7ubtk5PtEsUEREREZEI0BRT6dOcyXnMmZwHQLY9gT++vZVd1e3838c/AWBycQaXLBhH\nUY6NuhYnFpOJzLSEaJYsIiIiIiInSAFRjmreqQXUNbt4f/1+3N4AAJsrm7mjcg0lRWlsr2ojMd7C\nPdfNJinegsWsgWkRERERkaFIAVGOyjAMLj5rHOeVj2bN13X8+bM9NLS6CQHbqtoAcHr83Pjwx5hN\nBqeOy6KsJIsZE7KxmE2YTAYmw4juJyEiIiIiIkelgChhS0qwMH9aIfOnFeLzB3l//X4qdrewfkdj\n9zmBYIgvtzXw5bYG/rCyAsMAQjB7ci75mcl897QRxFnN3ef7A0Hc3gC2RGsUPiMRERERETmcAqIc\nF6vFxHdmjuA7M0fQ7vTy4jvb8PmD2BKtrN1aj8vTNRX14BaKn26uA6BiTwvnzRlFxd4WNu5qZk9t\nBxazwb9fNZORuSnR+nRERERERAQwQqHhtQt6Q0NHtEvoxWw2yMiw0dzcSSBwcrwdKz/dzRdbGzAM\n2FfvIDnBQtsRtskozE7GYjKRm5HIWdOLGFeYxpuf7KZiTwszJmTz7ZkjBq/4k8zJ2F8SW9RjEknq\nL4kk9ZdEWqz2WHZ2/wMzGkGUiDh3zmjOnTMagFAohGEYvPLhLt78ZHef5+9vcACwp66DNRX1PR7b\nuq+VjzfWYLWYOHVsFqdNzOHrPS2kpyQwpiCVeKsJi9mEofscRUREREROiAKiRNzB4PbDeWOYPDqd\nPXWdtDu85GcmMXVsJkufWYvb66dsfDZf7Wikw+nrdY29dZ0A7Nzfzisf7ur3tWZPyuVbU/LJTk9k\nb20HYwpSsSVae9z3KCIiIiIifYu5gLhixQp+85vf8Pnnn/f5uNfr5c477+Tdd9/FYrFw5ZVX8pOf\n/GSQq5TjNWFkOhNGpvc4dv+P54ABJsOgsqadR1/ZSEFWMotmj6K60cEL72wDwDAgMc6C0+Pv9/qf\nbanjsy11PY7FW83dezQWZSeTGG9hT20HhtEVXmeMz6Yox4bfHyQzLQGn28/ba/YyZUwm804twGrp\n2rbj4EioiIiIiMjJKq
YC4r59+7jvvvswm/sf7Vm+fDnV1dWsWrWKpqYmrr76akaNGsWiRYsGsVIZ\nSCbTodBVnJ/KA/+nvDuIlY60M21cFlaridSkOJxuP+u2N9Dc4aG+2Ul2eiLVjY5e01IP5/EFqG7s\nmsJ68PfD7apu7/N5G3Y28fbnezitNJev97aw+8CI5LdnFjG+yM7u2g5KitIIBkMYJoPUpDhcHj+v\nfVRJZloCLo+fpAQLZ88owmQY7KntoGJPCzMnZJNlTzzur5eCqoiIiIhESswsUhMIBLjiiiuYPn06\nK1as6HcEsby8nGXLljFnzhwAnn76aT799FOeeuqpsF6nsbETU4zt424yGdjtybS2OggGY+LtGJIc\nLh/xcWY27Wpm464mivNTsVpMtDu8BIIhXB4/W3Y3U9Pk7HMa64kwGQaj8lKorOkdNieNTic/M4nV\nX+wnBCTGm8lOSyQh3szFZ42jpMiOy+OnYk8LE0bY2biriZG5KeRlJPHxxhpG5Ngozk8F4KX3dvDX\nNfu4/sKpTBmb2fPzd/vocPrIy0jqWZv6SyJMPSaRpP6SSFJ/SaTFao9lZNj6fWzQAqLf78fpdPY6\nbjKZsNlsPPHEE9TU1HDuuedy/fXX9xkQ29ramDVrFh9//DHZ2dkAvPfee9x9992sWrUqrDo0+iIA\nTW0uEuMt7KhqZce+NopybYwpSOPTjTV4fQG+PWsk7Q4vT7y8gY07G49+wRMwKi+FPbU9V9e1WroW\n3nEdmE47ozSHTpePrXtaus/5zqyRlE8toGSEnaQEKzc8+D776jo4fXIel353AqPyUggEQiTEH32i\nQCAQpLKmneKCNMwm/f0QERERGa4GbYrpmjVrWLJkSa/jhYWFPPzww7zxxhusWLGCTZs29XsNl8sF\nQGLioel5CQkJuN3usOtoanJoBFEwALczQFFGIkUZB/opGOBbk3MACHh9JFsN/vXiqeyp66AwO5k4\nS9fU56/3tNDS6SE1KY7VX1QRF2emOD+VxlYXX+9thRCUT8lj1sQctu1r5e3P91FZ087IXBuTizP4\n82d7e9TyzXAI4PMH8fmD3X/+4uveU2jfWbOXd9Z0XSs12Uq7o2tU9PPNtXy+uRYAW6KVH545hqBh\nsGNvC60dHmaU5jC+KI2UpDhWfLCT5jY3bQ4vNU1OpozJ5Kd/N4X4uK7PNRgK4fYE+HJbAw63j7Nn\nFGExH/oLdPA/XDy+wIGVZePJz0zuvm/zoNomJ15/gJG5KTS3u/H4AuRnJof9fkls0/cwiST1l0SS\n+ksiLVZ77EgjiIMWEMvLy9m6dWuv4263mwsvvJC7776b5OQj/4MxISGh+zk2m63746SkpCM9rYdQ\nKEQgcAyFD6JgMBRT+6NIl5E5XfvEHHxvSors3Y+VfmPBnW86rTSX00pz8QeCmE0GhmEwvSSbdqeX\njJQEPtpQTWK8hdYODyNzU6huchAIhoi3mrvutWz3UJCVTH2LE/+B10+IMzOtJIvdNR3UNneNyh8M\nh9/U6fLx7Ns9/959vbe133o37mrin37zPsX5KSTEWdhd29E9ignw4jvbAchMTaCp3U1SvIV/WFjK\n6i+ruq9rGJCXkcRF88cxrSSLNoeXO55eg9sbYM7k3O5FhK49dxKzJ+fS2OYmIc5McoKVVz/aRSAQ\nIjs9EZfHz5nTCkiKt1Df6uKJVzcxtjCNK7834Yhfc4kefQ+TSFJ/SSSpvyTShlKPRf0exLVr13LN\nNddgtVqBrnsRXS4XNpuNN954g4KCgh7nl5eX89BDD3HaaacBXfcgrlmzhv/4j/8I6/UaGnqP1kRb\nrG6gKbHDHwgSCITwB4MkxVswDINgMERts5O6ZidtDi9xVhNlJdlYLSb+VlHP5xV17KnroNPpw2Qy\nSIq30Obw9nl9A0izxdHa2ffjx2tcURoWk9FvKLVaTPgDQeKtZr51Sj6rvqzqVdc3/0bc
eNGp7Kvv\noLrRgd0Wz9kzishI7frPo8OnkK/f3shHG6pxefzMm1ZAYZaNnfvbKClKozD70P+a7a3roLXTQ056\nEjn2RBrbXGyqbCYYDDG/rBCL2USny8cH6/fj8QWZNzX/qIsM1be6qGl0MHVs5glNad9T28HKT3fz\nvVkjGVuYdtzXiTR9D5NIUn9JJKm/JNJitceys1P6fSzqAfGbPv/8837vQQS477772LJlCw8//DCt\nra1cffXV3HzzzSxcuDCs6ysgynBjMkF6uo3WVgd7ajtY+3U9cVYzhgGzSnNpd3qJs5gozLZRVd/J\nl9sa2FXTjt0Wz5iCVDzeAE3tbswmgy+2NWA2GUwancGO/W09pscumF7IgrJC3vtyP++t2z9on19q\nkpVpJVlsrmymtdPLGVPzmT05j/te+LLf50wfn82ovBS++LqevfWd/Z43v6yQkqI0Xvjrtu7tVQqy\nkrnjH2ZiPTDlOBgMESKE+cDcdZfHz8+f+oy2Ti9XL5rI3Kn53dfbXdvO5spmvj1zBPEH9uasb3WR\nYDXj9gVoaXd3bwMTCoVY+se17KntwGwyuOMfTiMvM6nHFF/omgZsGsD7qtsdXmyJ1h6rCx+NvodJ\nJKm/JJLUXxJpsdpjQz4glpWV8dRTTzFz5kzcbjf33HMP77zzDoZhcNVVV/HjH/847OsrIMpwE8n+\n2lPbwQvvbsNui+cfz5vUfe+h0+1j3fZGtle10uH0kRRv4ZKzS3h/3X4+2lDNT35wCh9+VcP76/Zj\nMRvd02dtiVZuuXw6aclxBAJBNu5q5um3KgCYOCqdvXUdONz974M5WIqybYwrSqPT5WPDzka8viAz\nS3PweAPUNjtoaD10X3RZSRbjitJoaffwwVfV+PxBCrOS+YeFpbi8fh56aQNms0EgECIQDHH1ookk\nJ1hoanfz4rvbe7xujj2RixaM5dRxWVjMJqoaOln2p/W0dXqZMiaTxfOKGZ3XteKt0+3D4wvicPnY\nuq+VpnY3k0dnMLk4o9fnEwqFcLj97NjfxiMrNlA6Kp1Jo9NJSrCyoKzwiF+Ltk4PSYkWCvLs+h4m\nEaGfkRJJ6i+JtFjtsSEVECNNAVGGm1jur2AwhMlkdK3QureViaO6VmQ9XJvDS1VDJ5NGpeNw+3n+\nr1txuP1cvWgivkCQv3y+98AiQCl0OL2s29616qzJMLj+wikkxlv4dFMtnS4fp4zJxGI2ePmDXbR0\neAC49KxxzC8r5IutDdQ0O5k8Op2RuSn851sVrN3aAEBRdjI3XTKNv67d12uRoWjITU8kJSmOHfvb\nehy3JVpZsrCUT7fUsbaPhY0ARuelYEuyMqU4k9Mn5RIfZ+bZt7/m0811fZ4/YYSdy78zHpfXj8Vs\nYlReCht3NpGZmsAH66tZva4KW6KVScWZeDx+ppVkMSLHxui8lO7ptaFQiM27m/nvVTv47mkjmDsl\nP6wRyg6nl799Xc+sibnYEnv2hT8QpK3TS2ZaQr/P/9vX9fzvxhou+FYxYwpSj/p6AyEQDFLT5KQw\nK1krZg+QaH0Pq2ly8P66ahbNHkmaLX7QXlcGVyz/jJSTQ6z2mALiYRQQZbgZTv0VCoXYsruFxjYX\nYwvTKMrue4Wu1k4PH2+oYcaE7H5XUw0GQ6zb3kBTm5vZk/NITY7D5w/y25e+Yld1OyNybGTZE9hS\n2Uy700d6SjxlJVmYDIPxI+yk2eLYub+drXtbqGl24vb4yctMZntVK+F+1y3OT+WfF5/CY69upLLm\n+L93pSRZB3zvz6MZlZfCGVPzWb+9kYo9LQS+sXJbWnIcISAp3sKcU/KYMykXe0o8FrOJvXUdrP6y\nig+/quk+/6IFYzltQg6PvbaJhhZX95Tfq743gYKsZLLtibyzdh/25Di+c9oIXvlwFys/3dP9/J/+\ncApl47u2R9rf6OC1j3ZRUphG+ZR8Nu5sYkSurd9+
aWh18dSbW5g0Kp0fnDGm+/i6bQ00trs589QC\n4g5MGX56ZQUfb6zh4gXjmDs1n88217K9qo0rvzehV8g9mvfX7WfF+zv5+4WlnFaac9TzQ6EQu2ra\nKchMJjGM7W2Gimh9D7vh4Y/ocPo4dWwmN1x0akReo2J3M5lpCeSkh7/Yngys4fQzUqIjVntMAfEw\nCogy3Ki/Im9/o4PstITukHAknS4fPn+Q+hYnlTUdzDklj8rqdkKhEB5fgKff+pq8jEQWzR7FzNKc\n7nsOm9vdvL1mL4lxFvbVd+IPBEmMt7Bo9igaWl2EgN+/uaXH9ig/OKOY6SXZFGQnU9vk5I3/rWTn\n/naa2vveGmh8URqj81OZMSGb//nf3WyqbD7q5zO5OAOfP8C2fW1HPTcc2fYEmto8BCPwo6msJIu0\n5Dg+3VKHx9t7OeuROTYa2lyMKUjD4wtQ2+QkzRbH/gZH9zmLZo+i/JQ89tV38rs3NgOQn5nEOaeP\nJBgM8ce3e6/WDfCdmSP40bdL8AeCvP35XmqanLR0uMmyJ3JKcQaj8lLIPSwkNLW5ufmJT4CulYGf\nvHk+rR1eKva0YDEbnD4pF5cnQFJCVxAMhUK8+M52Vn1ZxdSxmfzT+ZOo2NOKPSWOUbkpve5d7Y/P\nH+y1TU0oFGLd9kZy7Il4/UH+691tFOencunZJcd0r+o3hUIhQiF6XcPt9eP2BrAfGLWLxvew5nY3\n//fxT7r//IdbFgz4iHDFnhZ+81/rSEmysvyncwf0XmIJn35GSqTFao8pIB5GAVGGG/XX0OLzB7CY\nTcf1j9GWDg8797cxpiCVOKu5zxGrg9/yqxsduLwBdlS1YU+JY9bE3B7/QA2GQny+pY6c9EQaWl1k\npCTw9d4W1m9vZN60AlweP+MK0ygpsnf32LbKBgwM6pqd/GXNPrbua+3eJqUwO5nFZ4zBlmiltdND\ndaMDi9nErup2NlU24w8Ee9RpS7SSn5lEp8tHY5u7O/jGW83MnJCN2Wziw6+qj/j1KCvJYkxBKi9/\nsOuYv5YDzWQYjCtKY9u+/reZKSlKo7ndjdlsor7FddRrGsAZpxYwd0o+DW0unvqfLf2em5mawLem\n5OFw+wkEQ+TYEykdZWd0XioNrS46XT6e+fPX7KvvZFReCn5/kNTkOK45dyKfb6njpfd39rrm5NHp\nXPrt8Xy+pZZAIMSC6YVkpSXi8QZ4+8AerWdMzccXCGIyDLIPW/3XHwhy97Nr6XT5+OfFUxidl8L2\nqjbuf+FLQoDZZPDzK2dQnJ/a43vYmi31GEbXVOnKmg4mjrJjNpu6A31KkrV7VkAoFKKlw9O9yvHh\nr20yGZgMA68vgNls4HD7SU2K6z7nv97dzjtr93X/edq4LP7pgkkkxIU3Mtvc7sZqMVHT5GRffScL\nygp7BeFHXt7QPSX+ziWnMTK3/3+sHb5C8+FcHj8Ws9G9aNbx8PmDtHZ6erw/w4l+RkqkxWqPKSAe\nRgFRhhv1l0TaifZYu9PL55vrWP1lFUkJVqaMyWBBWWH3fV+BYJDPNtdhtZiYMiaze/pkQ6uLD7+q\nJt5qJj0lHntKPLv2t9Hp8mMY8P25xSTGW9jf0MlDKzbg8QXISEnAbDY4a3oh44vsvL++mtyMREyG\nwa6aduqbu0Z2D05hzUlPpHxyHqm2OKobHXy2uY5OV9d03cKsZM6dM4oP1lez9bDgl5eRhMcXIBTq\n2irl3bVV3c85mSXEmRk/ws6Oqrbur9/hJo5KZ/r4bDpdPjqdvh7b2kwdm4nHG+jxdSzOT2H8CDv+\nQIjyaYV89OU+3l935P8UAJgzORd/IIRhwJqKek4rzSEYDFFZ2863Tsnnr3/bR0K8mZE5KWzc1dT9\nvAvnj6UwK5lHX9nYa0o0dC1Odfl3SmhodbNlTzNnTS9iRI4Ni9noXsUYoK7ZyR3/uYY4ixmH20co\n1HWv83dnjQS6
/vNlX10n//XuNrZVdY28Lygr7HeP18qadh7803qmlWSxZNHE7v/IeePjSl7/uJKx\nhWlctGAs7Q4f08dn9QqSB8NlIBjE4fKTmhzX47GHVmxgw84mrl40kVkTc6htdjIixzZs7qHVz0jp\ni88foLnD02Nmx0EeXwCXx989y+FoYrXHFBAPo4Aow436SyLtZOsxnz+AxxfE5w+SlhzXY+QnEAxS\n3+LqWo02O7k7GNS3urj/hS+x2+K57YrpPaZ0Ot0+Vn66h8+21OHzBynKTuZffjiVDqeX1k4PzR0e\n/vL53u4tV0pH2vEeWO32wvljeWjFBgKBEBfOH8vYwlQ+3lDD2q/raXf68AeCNLYdmjJcOtJOdaOD\nedMKWFBWRFO7m/96dxuVNV3bpfQVfA5KTrBwWmkONU1OKmvb8foOjeoe/tzF88ZgNhmsOGxU8WjX\nHoqK81O5eMFYVn25v99Fn6Dr63bquCzyMpIIBENs3NXErur2XufFx5k589QCapudbNjZ1Otxuy2O\nM6cVctb0QmyJVvyBIFaLmXuf/4LtB4Lkj75dwsSR6XywvrrXvrEA58waycVnjcMfCLJxVxOfbq5j\n464mkuIteH0BXJ4AsyblsOj0USQnWtlX38FvX9rQ/fyxBansrG4nMd5M6ch0GlrdfH/uaCYXZ5AQ\nZyEQDHaFzUCI5g539z+eff4APn+IxHgzXn+wexsfgP0NnYToCs4ZqQm88sFOpozJ5MyyQuKtZjqc\nXnbubyc3IxGzySDlwEhuX/fRNra68PiDNLS42FjZxKRRGbR2ethc2cwV3x3fY7S4pslBtj3x6NOr\njRCW+DhCPj+BQIid1W2s29ZISVEap47LOvJzw9Tc7sZuiz+hKdlDlcvj5y9r9jK5OIOSInu0ywnb\n029V8PGGGq45dyLfmpLf47Ffv/gl26vauPXy6WHtURyrPyMVEA+jgCjDjfpLIk091uVE9qT0+YO8\n+clustISOOPUgvBfMxhiX30n+xs7MZtMzJqYc8SRn8ZWF4++spExBamUjkrn/XX7SU2O46zpRYzM\ntfWYQvnVjkZe/ahreu61500iGAyxY38bZ04rwDAMXnpvBy0dHs4vH40t0crjr21ie1Ubmanx/Osl\n00iMt/Dxhhq27m1hx/52PL6u+z7zMpJobHPjDwSJs5jwHnbfbHpKPPf+02w+/Kqa99btp7bZ2b2o\nU5zVRF5GEnvreu5dOml0Oum2eD6vqO81Vbk/k0ens3Vfa/cWO99UkJXMjRdOJevAtMtPNtXw+zcr\nwrr2QDCAEF3h81i39plZmkN9i7PX12mgJMVbukeIz5iaz/gRdl58dzsmAwqzbWyvauW6CyYza2Iu\nX2yt57FXN/V7rczUeJraPX0+lhhvYXReCtPHZzPv1ALaHB5+8Yc1fd4/DF33Q99w4VSaOzy8s2Yf\nq76sYkSOjX/54RS+3tPCX/+2j3nTCigrySI1KY5HXtmI1xcgGOxa3OmfF59CYryVB/+0nkCw6+/y\nvdfN7nPqbbvDS0Obi7EFXeHA6fbx9pq9rNlST6vDQ5zFTPkpeVxy1jjW72jkkZc3Mq4wjZ/84BTS\nU/oedQoEg+yu7WBkjq17yvD+hk5aOj2cUpwJdC2MZbWYOGVMZr9f06OprGln7dZ6zior6rUK9I79\nbXyxtZ4zphZQkNX3Am7H6o2PK3nt40oAfvp3UygryR6Q60ZSKBTimvvf6/7z07ee1f1xp8vH9Q99\nBHR9L7vnn2Yf9Xqx+jNSAfEwCogy3Ki/JNLUYwLg9QVYv6OR0pHpPaYxHu7gIjgujx+Hy0eWPZFg\nKMRrH1XS2OriB2cU91jRMxgKYTEbeEMmgj4fcRYzO/a3kW1P5MttDXS6fJw7Z1R3MN/f0Em81cyz\nf91KYpyFvz9nAh1OH3FWM9urWnnlw13Yk+P42SXTcHv8+AJB0pLjMZng001d04
fLT8nrs/49tR0s\nfeZvAPz0wql89FU1Da1uHG4fLR0esu0JuDyB7unEifFm4qxmpozJ5Iut9bg8h4LNlDGZTBhppyjb\nRmFWMv/f7z/rMWL7TbnpidR9477UxHgLF80fy2eba3F7A7Q5vbR1ens9Nz7OjAG4+wlWaclxTBqd\nwaeba/t9/ePxzfA/VOVnJjG5OIO9dZ00tblIjLdS1XAofC+eN4ZPNtVS1+zs9dyD9xUfDMFxFhOL\nZo/irBlFmE0G76/fz7jCNNJs8Tz68gaqGhwUZCXzo7NL+GhDNWsqukaulywqpTDLxt3PrgXg1sun\nM35Ez9E4nz/ItqpWmtvcvLduPyNybMwvK6Slw0NZSdfU4692NHZPoU5JsjJ/WiGTizMYP8JOc7ub\nX/zhc1yeAHFWE3f8w2n9rvJ9LO557ovuLZkyU+O5/8flfY6k1re6ePN/d5MQZwYDxhfZmXlg9WaX\nx89f/7aPQDDE+eWjey2kdSJ8/gB/Wr2DUAgu/854TCaDxtb/196dB0dZ5Wsc/6aTdLYmGySEsARC\ncmeQeWkAABMfSURBVCHsWwIYZM0II0GNoDODNS5VlIAoXi3KcSlrUBihEJ1iuV7QsQaxiGBgHEBB\nxQuCoqAhhLBJFEJIyA5JyNLZut/7R0JPYoI4JKHD8Hyq8keft8l7TteP7n5yznteK8+t+9bxnGB/\nL2IGdGX62DB+yi7ljS2pjmP/++wEPMy/fA1wR/2MVEBsRAFRbjeqL2lvqjFpTx2pvn7ILMYwDKJ6\nBzra6mx2CkusjiWmeZcq8e/k0WSTKJvdjqvJ5LjNzc+/3F8sLKfcWouXhxvJZwrp03Af0dxLFXQP\nsjA4PJAte3/iyJlCovsHc/+E+mW+jWeLq2rq2H3oAmdzSvH1MRPdL5ggfy+6B/k0zJQauDcE7N2H\nMgnt4kNY104M6B2A2d2VT77NxMvsysTh3amz2dmbcpGQQG/O55Vx9MdCKqy1hIfWB56ry6GvznRe\nj8nFhbCQTgyL7EKfkE5s2fsTF4v+tTuwxcudhbOGUG6t5UpFDZl5Zew7erHF39U1wIsewRbuHhNG\nTlEFm/akXzP8Wrzc/+3rf3083Rg9oCt7U1o+/y/p19O/yXW012J2M+Hl6dZioP81fDzdiBvVk8IS\nK7mXKunW2ZuLRRVk5rX8HffBSRF08na/5mvV0h8gwkI6MTqqK14ertw5NJQP9/7EsZ+K6pdTd/Ym\n9cciegRZmDAslLM5pdjtBmMGhFBaUcPxc5f4KbuUQF8Pdhw83+x8dwwKobrGRn6xlUBfD6qq6xzX\n417l4lIf2DzcXfm/I9mcbxjbzAnhTB/bG8MwyC+2EmDxcAQ0u91g0550LhZVMG5wN1LSC8m7XMl/\nPzCE4ABvauvsfPrdBS7kl/Hb0WGUlFezbf9Zci/Vh/u4kT0wmVzw8zG3uDHX8Mgu9AiysPObf43p\nruiePDgpApOp/jrfyqo6xzJpu2HgAri5mTrMe1hjCoiNKCDK7Ub1Je1NNSbtSfXV8ZSWV+Nn8SC7\nsJyTGZeJ7h9Mda2NzPwyfDzdKSi24udjxt/iQXiob4szRt//UEB6Vgl5lyqYNiaMgY1CN9TPSJ/N\nuYKtIazmFFXQq6uFh6f1b7ZDc3WNrX55ZHohB1Jz8O9k5vnZIzC7u5L4RTpeHm7MmtgXm80gM7+M\nI2cKcXGpX0rcvYsPPUP9OXIql5zCCmIHd6NXVwsHjuXw+fdZjvAAOG4xY3Jx4au03CZ9iIkKZt69\ngzh0Ko+8S5XU1NXf0gaga6A3C2cO5p2dpxxB5+c8za5MHNadPclZ7Xo9r6+PmWERnfk6La/F2wlF\nhQVwOrO4Tc/panKhi59nsxB6o4ZHdqG4rJrzeWX4ersT1Tuw/vrUWhvZjW5LdJWvjxl/i/mGll27\nuZquu3S9fy9/nrx/MO9+cprUH4uIG9WTS1
eqSM8qobbOTsL4Pjx098AO9x6mgNiIAqLcblRf0t5U\nY9KeVF/Snq5XX3bDoK7OjpubiZpam+M63YuF5djsBkfOFJJ27hJz7xlISGDTHS9/zC7hYmEFo/oH\nO0LtNydy2frlWSxeZsYN6caP2SX07xVATFQwnbzNZOReIbugnCB/L9KzS8jKL6ewxIqvjxmzuysJ\nd/Zh+8HzZOSUEtLZh8y8MsqttUT28MPN1USFtZbH7o7iclkVZy6UcOhkHlcq62dRB4UH8vBd/eji\n70VtnY2Ckip2HsygsKR+461R/YOJH9ubrfvP8nVa7i/Ovvp6u1NmreV6KSLQ14MHJ0Uwsl8Q3/9Q\nwPavz1NhrcUwDEYP6MqJc5cpKKkPjnePCWPWxL6cuVDM6x+kNgmwoV18sFbXUVzW8jWrN6p/L3+6\nd7E02/TJzdXEE/cNoldXC25uJpZsSG5yD+E7BoXwzYlftyy7k7c7iUvu7nDvYQqIjSggyu1G9SXt\nTTUm7Un1Je3pVq8vu2FQYa11LGv8uYqqWg6k5hDZ05+IX7HjZmNVNXVs2pNOSVk13YMsTI3pxdmL\npXTydue/evpTZq2lpKwaD7MrdXV2Dp/O53xeGQ9OjKBHsIXKqjq8PFxb3Dir8e1XyitrHbc1uuq7\n0/l8fTyX6WPCAOjVtROGYfD18Tyy8suwGwYZuWXkXa7Ex9ON4ZFBGBj0DLIwbkg3Dp7Io6ikisHh\ngXxxJNuxc/DQvp159Lf9+eZEHmZ3VyaN6I4L8O3JPIrLqunXK4DS8mr6dPNtsituaUUN//PRcX7K\nLqWzryfL540h91IlKWcK2XHwfLPZ2JBAb8YPDcXkAuHdfRk7rGeHqzEFxEYUEOV2o/qS9qYak/ak\n+pL2pPq6ddkNg/O5ZXQP8mlya5WWXA2krVFbZ2d/6kWiegfSvdEur5l5ZRw6lYeX2Y0fs0uoqrXx\n1P1DHJtdddQa+6WA2PwmMyIiIiIiIh2YycWF8FDfX/Xc1oZDAHc3E3GjejZrDwvpRFjItcPWrajt\n9okVERERERGRW5oCooiIiIiIiAAKiCIiIiIiItJAAVFEREREREQABUQRERERERFpoIAoIiIiIiIi\ngAKiiIiIiIiINFBAFBEREREREQBcDMMwnN0JERERERERcT7NIIqIiIiIiAiggCgiIiIiIiINFBBF\nREREREQEUEAUERERERGRBgqIIiIiIiIiAiggioiIiIiISAMFRBEREREREQEUEEVERERERKSBAqKI\niIiIiIgACohOd+rUKWbNmsWwYcO49957SU1NdXaX5BaTnJzMAw88wMiRI4mLi2Pz5s0AlJaWsmDB\nAkaOHMnEiRNJSkpy/BvDMHjjjTcYM2YM0dHRLF26FJvN5qwhyC2gqKiIsWPHsm/fPkD1JW0nLy+P\nuXPnMmLECMaPH8/GjRsB1Zi0jZSUFO6//35GjBjB1KlT2blzJ6D6ktZLS0tj3LhxjsetqamPP/6Y\nKVOmMGzYMObOnUtRUdFNHUszhjhNVVWVceeddxqbNm0yampqjKSkJGPMmDFGeXm5s7smt4iSkhIj\nOjra2LFjh2Gz2YwTJ04Y0dHRxsGDB42nnnrKWLRokVFVVWUcO3bMiImJMY4ePWoYhmG8//77Rnx8\nvJGfn28UFBQYCQkJxttvv+3k0UhH9vjjjxv9+/c39u7daxiGofqSNmG3242EhARj+fLlRk1NjZGe\nnm5ER0cbR44cUY1Jq9XV1Rljxowxdu/ebRiGYXz//ffGgAEDjKysLNWX3DC73W4kJSUZI0eONGJi\nYhztN1pTp0+fNkaMGGGkpqYaVqvVePHFF405c+Y4ZWxXaQbRiQ4dOoTJZGL27Nm4u7sza9YsunTp\nwv79+53dNblF5OTkMGHCBGbMmIHJZGLgwIGMHj2alJQUvvjiCxYuXIiHhwdDhgwhPj6ef/7znwBs\n376dRx
55hODgYIKCgpg7dy4fffSRk0cjHdUHH3yAl5cX3bp1A6CiokL1JW3i2LFjFBQUsGjRItzd\n3YmMjGTz5s107dpVNSatduXKFS5fvozNZsMwDFxcXHB3d8fV1VX1JTds3bp1bNy4kXnz5jnaWvO5\nuHPnTqZMmcLQoUPx9PRk0aJFfPXVV06dRVRAdKKMjAz69u3bpK1Pnz6cO3fOST2SW01UVBSvv/66\n43FpaSnJyckAuLm50bNnT8exxrV17tw5IiIimhzLyMjAMIyb1HO5VWRkZPD3v/+dxYsXO9oyMzNV\nX9ImTp48SWRkJK+//jqxsbFMnTqVY8eOUVpaqhqTVgsICGD27Nk8++yzDBw4kIceeoiXX36Z4uJi\n1ZfcsJkzZ7J9+3YGDx7saGvN5+LPjwUEBODn50dGRsZNGE3LFBCdqLKyEi8vryZtnp6eVFVVOalH\ncisrKytj3rx5jllET0/PJscb15bVam1y3MvLC7vdTk1NzU3ts3RsdXV1PPfcc7z00kv4+/s72isr\nK1Vf0iZKS0s5fPgwAQEB7Nu3j2XLlrFkyRLVmLQJu92Op6cnq1atIjU1lXXr1vHaa69RXl6u+pIb\nFhwcjIuLS5O21rxn/fzY1eNWq7WdRnB9CohO5OXl1SwMVlVV4e3t7aQeya0qKyuL3//+9/j5+bF2\n7Vq8vb2prq5u8pzGteXp6dnkuNVqxc3NDQ8Pj5vab+nY3nrrLaKiopgwYUKTdi8vL9WXtAmz2Yyf\nnx9z587FbDY7NhJZvXq1akxa7fPPPyctLY1p06ZhNpuZOHEiEydOZM2aNaovaVOt+VxsaXLIarU6\nNQ8oIDpReHh4s+njjIyMJtPMItdz8uRJHnzwQcaNG8dbb72Fp6cnYWFh1NbWkpOT43he49rq27dv\nk9rLyMggPDz8pvddOrZdu3bxySefMGrUKEaNGkVOTg7PPvssX375pepL2kSfPn2w2WxNdvOz2WwM\nGDBANSatlpub22zWz83NjYEDB6q+pE215nvXz49dvnyZ0tLSZpeh3UwKiE40duxYampqeP/996mt\nrWXr1q0UFRU12TJX5JcUFRUxZ84cHnvsMV544QVMpvr/0haLhSlTpvDGG29gtVpJS0vj448/ZsaM\nGQDcc889vPvuu+Tl5VFUVMT69eu59957nTkU6YA+/fRTjhw5QnJyMsnJyYSGhvLmm2+yYMEC1Ze0\nidjYWDw9PVm7di11dXWkpKSwZ88epk2bphqTVrvjjjs4ffo027ZtwzAMvvvuO/bs2cP06dNVX9Km\nWvO9Kz4+ns8//5zk5GSqq6t58803GT9+PAEBAU4bj4uhK26d6ocffmDx4sWcOXOGsLAwFi9ezLBh\nw5zdLblFrFu3jr/+9a/NliE8/PDDPPbYY/z5z3/m22+/xdvbmyeffJJZs2YB9X+hX716Ndu2baO2\ntpYZM2bwwgsv4Orq6oxhyC1i8uTJvPzyy0yaNImSkhLVl7SJzMxMXn31VY4fP47FYmHBggXMnDlT\nNSZtYu/evaxatYqsrCxCQ0N5+umn+c1vfqP6klY7fPgwCxcu5PDhwwCtqqldu3axatUqCgsLGTVq\nFMuWLaNz585OG5sCooiIiIiIiABaYioiIiIiIiINFBBFREREREQEUEAUERERERGRBgqIIiIiIiIi\nAiggioiIiIiISAMFRBEREREREQEUEEVERFo0efJk+vXr1+LPhg0b2vXc//jHP4iNjW3Xc4iIiLTE\nzdkdEBER6agWLVrEfffd16zdYrE4oTciIiLtTwFRRETkGiwWC0FBQc7uhoiIyE2jJaYiIiI3YM2a\nNTz55JO88sorDB8+nEmTJrFly5Ymz9m5cyfx8fEMGTKE6dOn89lnnzU5npiYyF133cXQoUP53e9+\nR1paWpPjb7/9NrGxsQwfPpznn3+e6upqAMrLy3nmmWeIiYlh+PDhzJ8/
n7y8vPYdsIiI3BYUEEVE\nRG7Ql19+yaVLl0hKSmL+/Pm8+uqr7N27F4AdO3bw0ksv8cgjj7B9+3YSEhJ45plnOHbsGADbtm1j\nxYoVPPHEE+zcuZNBgwbx+OOPY7VaASgqKiI1NZX33nuP1atXs3v3bj788EMAVq1axYULF9i4cSNb\nt26lrKyMJUuWOOdFEBGR/yhaYioiInINr732GitWrGjWfuDAAQB8fHxYvnw53t7eREREkJyczJYt\nW5g8eTIbNmzgD3/4Aw888AAAc+bM4cSJE7zzzjusXbuWxMREZs+e7bjG8U9/+hPu7u6UlpYCYDKZ\nWLZsGX5+fkRERBAbG8upU6cAyM7Oxtvbmx49emCxWFi+fDnFxcU34yUREZH/cAqIIiIi1zBv3jzi\n4+Obtfv4+AAQFRWFt7e3o33w4MG89957AJw9e5Y5c+Y0+XcjRowgMTGxxeNms5nnn3/e8djPzw8/\nPz/HY19fX8cS00cffZT58+czduxYYmJiiIuLIyEhobXDFRERUUAUERG5lsDAQMLCwq553NXVtclj\nm83maPPw8Gj2fLvdjt1uB8Dd3f0Xz/3z3w1gGAYAo0eP5sCBA+zbt4/9+/ezcuVKtm/fTmJiIiaT\nrh4REZEbp08RERGRG5Senk5dXZ3j8fHjx+nXrx8A4eHhpKamNnl+SkoKffr0AaB3796cPn3accxu\ntxMXF8fBgweve94NGzaQkpLCjBkzWLlyJe+++y5Hjx4lNze3LYYlIiK3MQVEERGRaygvL6ewsLDZ\nz5UrVwAoKChg6dKlnDt3jsTERD777DP++Mc/AvXXHG7evJmkpCTOnz/P3/72N/bs2cNDDz0E1C8T\n3bRpE7t27SIzM5O//OUvVFdXM3To0Ov2Kz8/n6VLl5KcnExWVhY7duwgKCiI4ODg9nsxRETktqAl\npiIiItewcuVKVq5c2ax96tSpREZGMmDAAOx2OwkJCYSEhLBy5Uqio6MBiIuL48UXX2T9+vW88sor\n9O3bl9WrVzN+/HgApk+fTkFBAStWrKC4uJjBgwfzzjvvYLFYrtuvp59+moqKChYuXEhZWRmDBg1i\n/fr11122KiIicj0uxtULGkRERORXW7NmDV999ZXj1hMiIiL/CbTEVERERERERAAFRBEREREREWmg\nJaYiIiIiIiICaAZRREREREREGiggioiIiIiICKCAKCIiIiIiIg0UEEVERERERARQQBQREREREZEG\n/w/G2ZMkAT/nCwAAAABJRU5ErkJggg==\n",
      "text/plain": [
       "<matplotlib.figure.Figure at 0x28d625d2080>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "train_loss = [np.mean(l) for l in np.array_split(train_ppl.get_variable(\"loss_history\"), N_EPOCH)]\n",
    "\n",
    "fig = plt.figure(figsize=(15, 4))\n",
    "plt.plot(train_loss)\n",
    "plt.xlabel(\"Epochs\")\n",
    "plt.ylabel(\"Training loss\")\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As we can see, the training loss has almost reached a plateau by the end of training."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Saving the model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<cardio.batchflow.batchflow.pipeline.Pipeline at 0x28d6564ad68>"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "MODEL_PATH = \"D:\\\\Projects\\\\data\\\\ecg\\\\dirichlet_model\"\n",
    "train_ppl.save_model(\"dirichlet\", path=MODEL_PATH)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Testing pipeline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The testing pipeline is almost identical to the training one. The differences are the absence of signal resampling and a modified segmentation procedure. Notice that the model is imported from the training pipeline rather than being constructed from scratch."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "template_test_ppl = (\n",
    "    bf.Pipeline()\n",
    "      .import_model(\"dirichlet\", train_ppl)\n",
    "      .init_variable(\"predictions_list\", init_on_each_run=list)\n",
    "      .load(components=[\"signal\", \"meta\"], fmt=\"wfdb\")\n",
    "      .load(components=\"target\", fmt=\"csv\", src=LABELS_PATH)\n",
    "      .drop_labels([\"~\"])\n",
    "      .rename_labels({\"N\": \"NO\", \"O\": \"NO\"})\n",
    "      .flip_signals()\n",
    "      .split_signals(2048, 2048)\n",
    "      .binarize_labels()\n",
    "      .predict_model(\"dirichlet\", make_data=concatenate_ecg_batch,\n",
    "                     fetches=\"predictions\", save_to=V(\"predictions_list\"), mode=\"e\")\n",
    "      .run(batch_size=BATCH_SIZE, shuffle=False, drop_last=False, n_epochs=1, lazy=True)\n",
    ")\n",
    "\n",
    "test_ppl = (eds.test >> template_test_ppl).run()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now the \"predictions_list\" pipeline variable stores model predictions and true targets for all signals labeled \"A\", \"O\" and \"N\" in the testing dataset."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will use [F1-score](https://en.wikipedia.org/wiki/F1_score) with macro averaging to measure classification performance:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "predictions = test_ppl.get_variable(\"predictions_list\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.90342902903869682"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "f1_score(predictions)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "             precision    recall  f1-score   support\n",
      "\n",
      "          A       0.85      0.81      0.83       171\n",
      "         NO       0.98      0.98      0.98      1484\n",
      "\n",
      "avg / total       0.96      0.96      0.96      1655\n",
      "\n"
     ]
    }
   ],
   "source": [
    "print(classification_report(predictions))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For a more detailed report, take a look at the confusion matrix:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style>\n",
       "    .dataframe thead tr:only-child th {\n",
       "        text-align: right;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: left;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th>True</th>\n",
       "      <th>A</th>\n",
       "      <th>NO</th>\n",
       "      <th>All</th>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>Pred</th>\n",
       "      <th></th>\n",
       "      <th></th>\n",
       "      <th></th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>A</th>\n",
       "      <td>138</td>\n",
       "      <td>25</td>\n",
       "      <td>163</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>NO</th>\n",
       "      <td>33</td>\n",
       "      <td>1459</td>\n",
       "      <td>1492</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>All</th>\n",
       "      <td>171</td>\n",
       "      <td>1484</td>\n",
       "      <td>1655</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "True    A    NO   All\n",
       "Pred                 \n",
       "A     138    25   163\n",
       "NO     33  1459  1492\n",
       "All   171  1484  1655"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "confusion_matrix(predictions)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The model misclassifies 33 patients with atrial fibrillation and 25 patients with normal and other rhythms. All other patients were classified correctly."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We’ve already obtained good classification performance. Let’s see if we can do even better."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Analyzing the uncertainty"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In addition to class probabilities the model returns its uncertainty in the prediction, which varies from 0 (absolutely sure) to 1 (absolutely unsure). You can see the uncertainty histogram on the plot below:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAA2oAAAEOCAYAAAD40eRuAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJzt3X1Y1fX9x/HXOaDAOXSNtDtdiiiKzgT0cGmmUxPLKwVv\n0mp2s81SR7mL6+rGLTWSsnTXNd3iytLatbrStkrQC8tpEtNohtmY5b0uFZ2O3BUhNBAQzvn+/ugH\ni7zhoHwPnwPPxz/I93xv3pzzCs+r7/d8dViWZQkAAAAAYAxnWw8AAAAAAGiKogYAAAAAhqGoAQAA\nAIBhKGoAAAAAYBiKGgAAAAAYJrQtD/7VV/9ty8NfkMPhUNeubn39dZW4ISZaG/mCncgX7ES+YDcy\nBjuZnK9rr73qgss5o/Y9Tue3L6STZwY2IF+wE/mCncgX7EbGYKdgzFcQjQoAAAAAHQNFDQAAAAAM\nQ1EDAAAAAMNQ1AAAAADAMBQ1AAAAADAMRQ0AAAAADENRAwAAAADD+F3USktLNXz4cG3btk2SVFFR\noblz58rj8WjMmDHKzs62bUgAAAAA6EhC/V1x4cKFKi8vb/w+IyNDLpdLhYWFOnz4sGbPnq2+ffsq\nMTHRlkEBAAAAoKPwq6i99dZbioiIULdu3SRJVVVVys/P15YtWxQWFqb4+HilpKQoNze33RS1bbv+\nLZ9ltWibMYk/tGkaAAAAAB1Js0WtuLhYr7/+utauXas777xTknTixAmFhoaqR48ejevFxMQoLy+v\nRQd3OBxyGvYpOafTIUlyOCWnz9GibUNCWrY+Op6GfDV8BVoT+YKdyBfsRsZgp2DM1yWLWn19vX71\nq19p4cKFioqKalx+9uxZhYeHN1k3PDxcNTU1LTp4165uORxmPlmuiLAWb9OlS6QNk6A9iopyt/UI\naMfIF+xEvmA3MgY7BVO+LlnUXn75ZQ0YMECjR49usjwiIkK1tbVNltXU1MjlcrXo4F9/XWXkGbWo\nKLfOVtfK8rVs27KySnuGQrvRkK/y8ir5fC27tBZoDvmCncgX7EbGYCeT83Wxkz2XLGqbNm3SV199\npU2bNkmSKisr9dhjj2nWrFmqq6tTSUmJunfvLunbSyRjY2NbNJRlWfJ6W7RJwFg+tfgzal6vWS86\nzOXzWeQFtiFfsBP5gt3IGOwUTPm6ZFF7//33m3w/duxYZWRk6NZbb9WhQ4e0fPlyPffcc/riiy+0\nceNGvfrqq7YOCwAAAAAdwWVfeLh48WLV19dr9OjRSk9P17x585SQkNCaswEAAABAh+T3v6MmSVu3\nbm38c1RUlLKyslp9IAAAAADo6Ay7lQcAAAAAgKIGAAAAAIahqAEAAACAYShqAAAAAGAYihoAAAAA\nGIaiBgAAAACGoagBAAAAgGEoagAAAABgGIoaAAAAABiGogYAAAAAhqGoAQAAAIBhKGoAAAAAYBiK\nGgAAAAAYhqIGAAAAAIahqAEAAACAYShqAAAAAGAYihoAAAAAGIaiBgAAAACGoagBAAAAgGEoagAA\nAABgGIoaAAAAABiGogYAAAAAhqGoAQAAAIBhKGoAAAAAYBiKGgAAAAAYhqIGAAAAAIahqAEAAACA\nYShqAAAAAGAYihoAAAAAGIaiBgAAAACGoagBAAAAgGEoagAAAABgGIoaAAAAABiGogYAAAAAhqGo\nAQAAAIBhKGoAAAAAYBiKGgAAAAAYhqIGAAAAAIahqAEAAACAYShqAAAAAGAYihoAAAAAGIaiBgAA\nAACG8auobdq0SXfccYcGDx6siRMnKj8/X5JUUVGhuXPnyuPxaMyYMcrOzrZ1WAAAAADoCEKbW6G4\nuFgLFizQa6+9piFDhqiwsFBz5szRRx99pMzMTLlcLhUWFurw4cOaPXu2+vbtq8TExEDMDgAAAADt\nUrNFLSYmRh9//LHcbrfq6+tVWloqt9
utzp07Kz8/X1u2bFFYWJji4+OVkpKi3Nxcv4uaw+GQ07CL\nL51OhyTJ4ZScPkeLtg0Jadn66Hga8tXwFWhN5At2Il+wGxmDnYIxX80WNUlyu906efKkxo8fL5/P\np8zMTP3rX/9SaGioevTo0bheTEyM8vLy/D54165uORxmPlmuiLAWb9OlS6QNk6A9iopyt/UIaMfI\nF+xEvmA3MgY7BVO+/CpqktStWzft3r1bRUVFeuSRR/TQQw8pPDy8yTrh4eGqqanx++Bff11l5Bm1\nqCi3zlbXyvK1bNuyskp7hkK70ZCv8vIq+XxWW4+DdoZ8wU7kC3YjY7CTyfm62Mkev4taaOi3qw4f\nPly333679u3bp9ra2ibr1NTUyOVy+T2UZVnyev1ePaAsn+SzWvYier1mvegwl89nkRfYhnzBTuQL\ndiNjsFMw5avZ81kFBQX6+c9/3mRZXV2devbsqbq6OpWUlDQuLy4uVmxsbKsPCQAAAAAdSbNF7Uc/\n+pH27dun3Nxc+Xw+FRQUqKCgQPfcc4+Sk5O1fPlyVVdXa8+ePdq4caNSU1MDMTcAAAAAtFvNFrVr\nr71Wq1at0urVq5WUlKSsrCy99NJL6tOnjxYvXqz6+nqNHj1a6enpmjdvnhISEgIxNwAAAAC0W359\nRi0pKUnr168/b3lUVJSysrJafSgAAAAA6MgMu+ciAAAAAICiBgAAAACGoagBAAAAgGEoagAAAABg\nGIoaAAAAABiGogYAAAAAhqGoAQAAAIBhKGoAAAAAYBiKGgAAAAAYhqIGAAAAAIahqAEAAACAYShq\nAAAAAGAYihoAAAAAGIaiBgAAAACGoagBAAAAgGEoagAAAABgGIoaAAAAABiGogYAAAAAhqGoAQAA\nAIBhKGoAAAAAYBiKGgAAAAAYhqIGAAAAAIahqAEAAACAYShqAAAAAGAYihoAAAAAGIaiBgAAAACG\noagBAAAAgGEoagAAAABgGIoaAAAAABiGogYAAAAAhqGoAQAAAIBhKGoAAAAAYBiKGgAAAAAYhqIG\nAAAAAIahqAEAAACAYShqAAAAAGAYihoAAAAAGIaiBgAAAACGoagBAAAAgGEoagAAAABgGIoaAAAA\nABjGr6JWVFSku+66Sx6PR+PGjdPbb78tSaqoqNDcuXPl8Xg0ZswYZWdn2zosAAAAAHQEoc2tUFFR\noUceeUQZGRmaOHGiDh48qJkzZ6pnz556++235XK5VFhYqMOHD2v27Nnq27evEhMTAzE7AAAAALRL\nzRa1kpISjR49WqmpqZKkgQMHatiwYdq1a5fy8/O1ZcsWhYWFKT4+XikpKcrNzfW7qDkcDjkNu/jS\n6XRIkhxOyelztGjbkJCWrY+OpyFfDV+B1kS+YCfyBbuRMdgpGPPVbFEbMGCAfvvb3zZ+X1FRoaKi\nIsXFxSk0NFQ9evRofCwmJkZ5eXl+H7xrV7ccDjOfLFdEWIu36dIl0oZJ0B5FRbnbegS0Y+QLdiJf\nsBsZg52CKV/NFrXv+u9//6u0tLTGs2qrV69u8nh4eLhqamr83t/XX1cZeUYtKsqts9W1snwt27as\nrNKeodBuNOSrvLxKPp/V1uOgnSFfsBP5gt3IGOxkcr4udrLH76J28uRJpaWlqUePHnrhhRd09OhR\n1dbWNlmnpqZGLpfL76Esy5LX6/fqAWX5JJ/VshfR6zXrRYe5fD6LvMA25At2Il+wGxmDnYIpX36d\nz9q/f7/uvvtujRw5Ui+//LLCw8MVHR2turo6lZSUNK5XXFys2NhY24YFAAAAgI6g2aJWWlqqWbNm\naebMmZo/f76c/3+tYmRkpJKTk7V8+XJVV1drz5492rhxY+NNRwAAAAAAl6fZSx9zcnJUVlamlStX\nauXKlY3Lf/rTn2rx4sVatGiRRo8eLZfLpXnz5ikhIcHWgQEAAACgvWu2qKWlpSktLe2ij2dlZbXq\nQA
AAAADQ0Rl2z0UAAAAAAEUNAAAAAAxDUQMAAAAAw1DUAAAAAMAwFDUAAAAAMAxFDQAAAAAMQ1ED\nAAAAAMNQ1AAAAADAMBQ1AAAAADAMRQ0AAAAADENRAwAAAADDUNQAAAAAwDAUNQAAAAAwDEUNAAAA\nAAxDUQMAAAAAw1DUAAAAAMAwFDUAAAAAMAxFDQAAAAAMQ1EDAAAAAMNQ1AAAAADAMBQ1AAAAADAM\nRQ0AAAAADENRAwAAAADDUNQAAAAAwDAUNQAAAAAwDEUNAAAAAAxDUQMAAAAAw1DUAAAAAMAwFDUA\nAAAAMAxFDQAAAAAMQ1EDAAAAAMNQ1AAAAADAMBQ1AAAAADAMRQ0AAAAADENRAwAAAADDUNQAAAAA\nwDAUNQAAAAAwDEUNAAAAAAxDUQMAAAAAw1DUAAAAAMAwFDUAAAAAMEyLitqePXs0cuTIxu8rKio0\nd+5ceTwejRkzRtnZ2a0+IAAAAAB0NKH+rGRZltatW6ff/OY3CgkJaVyekZEhl8ulwsJCHT58WLNn\nz1bfvn2VmJho28AAAAAA0N75dUZt1apVWr16tdLS0hqXVVVVKT8/X+np6QoLC1N8fLxSUlKUm5tr\n27AAAAAA0BH4dUZt2rRpSktL06efftq47MSJEwoNDVWPHj0al8XExCgvL8/vgzscDjkN+5Sc0+mQ\nJDmcktPnaNG2ISEtWx8dT0O+Gr4CrYl8wU7kC3YjY7BTMObLr6J23XXXnbfs7NmzCg8Pb7IsPDxc\nNTU1fh+8a1e3HA4znyxXRFiLt+nSJdKGSdAeRUW523oEtGPkC3YiX7AbGYOdgilffhW1C4mIiFBt\nbW2TZTU1NXK5XH7v4+uvq4w8oxYV5dbZ6lpZvpZtW1ZWac9QaDca8lVeXiWfz2rrcdDOkC/YiXzB\nbmQMdjI5Xxc72XPZRS06Olp1dXUqKSlR9+7dJUnFxcWKjY31ex+WZcnrvdwJ7GX5JJ/VshfR6zXr\nRYe5fD6LvMA25At2Il+wGxmDnYIpX5d9PisyMlLJyclavny5qqurtWfPHm3cuFGpqamtOR8AAAAA\ndDhXdOHh4sWLVV9fr9GjRys9PV3z5s1TQkJCa80GAAAAAB1Siy59HDZsmHbu3Nn4fVRUlLKyslp9\nKAAAAADoyAy7lQcAAAAAgKIGAAAAAIahqAEAAACAYShqAAAAAGAYihoAAAAAGIaiBgAAAACGoagB\nAAAAgGEoagAAAABgGIoaAAAAABiGogYAAAAAhqGoAQAAAIBhKGoAAAAAYBiKGgAAAAAYhqIGAAAA\nAIahqAEAAACAYShqAAAAAGAYihoAAAAAGIaiBgAAAACGoagBAAAAgGEoagAAAABgGIoaAAAAABiG\nogYAAAAAhqGoAQAAAIBhKGoAAAAAYBiKGgAAAAAYhqIGAAAAAIahqAEAAACAYShqAAAAAGAYihoA\nAAAAGIaiBgAAAACGoagBAAAAgGEoagAAAABgGIoaAAAAABgmtK0HaE8+/PzfLd5mTOIPbZgEAAAA\nQDDjjBoAAAAAGIaiBgAAAACGoagBAAAAgGEoagAAAABgGG4m0sa4AQkAAACA7+OMGgAAAAAYhjNq\nQehyzsJJnIkDAAAAgsUVF7UDBw7o6aef1pEjRxQdHa1nnnlGiYmJrTEbWtnlFrxAoEQCAAAA/3NF\nRa22tlZpaWlKS0vTXXfdpQ0bNujhhx9Wfn6+3G53a80IXFAgzyzyWUK0hZbmzulwaNq4OJumAQAA\ngXRFRe2TTz6R0+nUvffeK0maPn263njjDRUUFGjChAmtMiA6hkCe7QvUsS50HKfDIbc7TFVVtfJZ\n1nmPB7LcBep5ML2wBqqEm3xGO5Da4/NtesYDpT2+tvgWGQfaxhUVteLiYvXp06fJspiYGB07dsyv\n7R0Oh5yG3c7E6XRIkhxOyelztPE0aG8czv99vVC+PtpdErBZnI7A
5DuQP9PluJzn4XJ+pkA83w6n\n9P6O4zpbXSvLZ/vhLkt7er4bmJ7x1uJwSq6IsIvmqz2+tvhWoDKenHSjpP+9FwNaU0OugilfV1TU\nzp49q4iIiCbLwsPDVVNT49f211wTeSWHt9WdY7l8CAAAINCiovj4DOwTTPm6ovNZERER55Wympoa\nuVyuKxoKAAAAADqyKypqvXv3VnFxcZNlxcXFio2NvaKhAAAAAKAju6KiNnz4cJ07d05r1qxRXV2d\ncnJyVFpaqpEjR7bWfAAAAADQ4Tgs6wK3nmuBQ4cOKTMzU4cPH1Z0dLQyMzP5d9QAAAAA4ApccVED\nAAAAALQuw26ODwAAAACgqAEAAACAYShqAAAAAGAYihoAAAAAGKZDFrUDBw5o+vTpSkxM1OTJk/X5\n559fcL2NGzcqOTlZiYmJ+sUvfqHS0tIAT4pg5W/G1q5dq9tvv11DhgzRtGnTVFRUFOBJEYz8zVeD\nHTt2qH///qqqqgrQhAhm/uarqKhIU6dO1eDBg5WamqodO3YEeFIEI3/zlZ2dreTkZHk8Hv3kJz/R\nvn37AjwpgtmePXsu+c+FBc17fKuDqampsX784x9bf/rTn6xz585Z2dnZ1s0332xVVlY2We/gwYPW\nkCFDrM8//9yqrq62FixYYM2aNauNpkYw8TdjO3bssIYNG2YdOHDA8nq91vr16y2Px2OVlZW10eQI\nBv7mq0F5ebk1ZswYq1+/fhddB2jgb75Onz5tJSUlWe+//77l8/ms9957z/J4PFZ1dXUbTY5g0JL3\nYEOHDrWOHTtmeb1e65VXXrHGjh3bRlMjmPh8Pis7O9vyeDzW0KFDL7hOML3H73Bn1D755BM5nU7d\ne++96tSpk6ZPn65rrrlGBQUFTdZ77733lJycrISEBIWHh+uJJ57Q3/72N3MbN4zhb8ZOnz6thx56\nSAMGDJDT6dTUqVMVEhKiI0eOtNHkCAb+5qtBZmamJkyYEOApEaz8zdeGDRt0yy23aPz48XI4HEpJ\nSdEbb7whp7PDva1AC/ibrxMnTsjn88nr9cqyLDmdToWHh7fR1Agmq1at0urVq5WWlnbRdYLpPX6H\n+41aXFysPn36NFkWExOjY8eONVl27NgxxcbGNn5/9dVX6wc/+IGKi4sDMieCl78ZmzJlimbPnt34\n/T/+8Q9VVVWdty3wXf7mS5LeffddffPNN5oxY0agxkOQ8zdf+/fv1/XXX6+5c+dq2LBhuueee+T1\netW5c+dAjosg42++Ro4cqV69emnixIkaNGiQXnnlFS1btiyQoyJITZs2TRs2bNCgQYMuuk4wvcfv\ncEXt7NmzioiIaLIsPDxcNTU1TZZVV1ef939vIiIiVF1dbfuMCG7+Zuy7jhw5ovT0dKWnp6tLly52\nj4gg5m++SkpKlJWVpSVLlgRyPAQ5f/NVUVGh7OxszZgxQ9u3b9ekSZM0Z84cVVRUBHJcBBl/81Vb\nW6vY2Fjl5OTos88+089+9jP98pe/vOTfo4AkXXfddXI4HJdcJ5je43e4ohYREXHef+g1NTVyuVxN\nll2svH1/PeD7/M1Yg+3bt2vGjBm67777NGfOnECMiCDmT758Pp9+/etf69FHH9X1118f6BERxPz9\n/dW5c2eNGjVKI0eOVKdOnXTffffJ5XJp165dgRwXQcbffK1YsUI33HCDBg0apLCwMM2dO1d1dXUq\nLCwM5Lhop4LpPX6HK2q9e/c+79RmcXFxk1OgktSnT58m65WVlamiooLL0tAsfzMmSevWrVN6eroW\nLVqkRx55JFAjIoj5k6/Tp09r9+7dyszMVFJSkiZNmiRJGj16NHcWxSX5+/srJiZG586da7LM5/PJ\nsizbZ0Tw8jdfJSUlTfLlcDgUEhKikJCQgMyJ9i2Y3uN3uKI2fPhwnTt3TmvWrFFdXZ1ycnJUWlp6\n3i08U1JSlJeXp6KiItXW1up3
v/udRo0apauvvrqNJkew8DdjO3bs0DPPPKNXX31VKSkpbTQtgo0/\n+erevbv27NmjoqIiFRUV6d1335UkFRQUKCkpqa1GRxDw9/fX5MmTtX37dn344Yfy+Xxas2aNamtr\nNWzYsDaaHMHA33yNGTNGOTk52r9/v+rr6/X666/L6/XK4/G00eRoT4LpPb7D6oD/++vQoUPKzMzU\n4cOHFR0drczMTCUmJurpp5+WJD377LOSpE2bNikrK0tfffWVkpKStHTpUnXt2rUtR0eQ8CdjDz74\noHbs2HHeddJZWVkaNWpUW4yNIOHv77AGp06dUnJysnbt2iW3290WIyOI+Juv7du3a9myZTpx4oRi\nYmK0aNEiJSQktOXoCAL+5MuyLP3hD3/Q22+/rW+++UYDBgxQRkaG+vXr18bTI1js3LlT6enp2rlz\npyQF7Xv8DlnUAAAAAMBkHe7SRwAAAAAwHUUNAAAAAAxDUQMAAAAAw1DUAAAAAMAwFDUAAAAAMAxF\nDQAAAAAMQ1EDAFxSXFyc4uLi9M9//vO8x/bs2aO4uDg98MADl73/F198UXfffbdf665fv14jRoy4\n7GMF0rlz5/TnP//Z7/UfeOABLVu2zJZ9AwCCD0UNANCsTp06KT8//7zleXl5cjgcbTCR+f7yl7/o\npZde8nv9F198UQ8//LAt+wYABB+KGgCgWUOHDr1gUfvggw+UmJjYBhOZz7KsFq0fFRUlt9tty74B\nAMGHogYAaNa4ceN04MABnT59unHZ4cOHVVlZqSFDhjRZ9/jx40pLS1NSUpKGDx+u5557TrW1tY2P\n7969W9OnT1d8fLxmzpypM2fONNn+6NGjevDBB5WQkKCxY8fqhRdeUF1dXbMz7ty5U3FxcU2OtWzZ\nssbLMnfu3KkRI0Zo3bp1uvXWWxUfH685c+aorKysyT7uvvtuJSQkaPz48dqwYYNfc61fv17Tpk3T\nY489Jo/Ho9dff13z589XaWmp4uLidOrUKVVWViojI0MjRozQwIEDNXbsWL355puN+//upY8vvvii\n0tPTtWTJEg0dOlRJSUl6/vnn5fP5tHPnzib7brj89NSpU437Ki8v10033aT9+/c3+7wBAMxEUQMA\nNOvGG29UXFxck7NqH3zwgcaNGyen839/lZSXl+vee++V2+3WW2+9peXLl2vr1q1aunSpJKmsrEyz\nZs1SYmKicnNzlZycrHfeeadx+9raWs2aNUv9+vVTbm6ulixZovfff1+///3vW+XnKC8vV05Ojlas\nWKE33nhDe/fu1auvvipJOnbsmB566CENGzZMubm5mjNnjhYuXKjdu3f7Nde+ffvUpUsXrVu3TuPH\nj9eCBQvUpUsXbd++Xd26ddPSpUu1f/9+rVq1Sps3b9aUKVO0ZMkSffnllxecdevWraqpqdE777yj\np556Sm+++aY+/PBDDR48uMm+Bw4cqOjoaG3evLlx27y8PPXo0UMDBw5slecNABB4FDUAgF9uu+22\nJkUtLy9P48ePb7LOe++9J6fTqaVLl6pv37665ZZbtGjRIq1du1YVFRXavHmz3G635s+fr969e+v+\n++/XuHHjmmzvcrn05JNPKiYmRjfffLOeeuoprVmzRl6v94p/hvr6ei1YsEADBw7U4MGDNWnSJO3d\nu1eSlJ2drf79++vxxx9XTEyMpk2bpscff1x1dXV+zzV37lz16tVL3bt311VXXSWn06lrr71WISEh\n8ng8ev755zVo0CD17NlTDz/8sLxer44dO3bBWV0ulzIyMhQTE6MpU6aof//+2rt3rzp37nzevlNT\nU7Vp06bGbTdu3KiUlJQrfr4AAG0ntK0HAAAEh3HjxmnlypX65ptvdObMGf3nP//R0KFDVVhY2LjO\n0aNH1b9/f3Xu3LlxmcfjkdfrVXFxsY4cOaJ+/fopJCSk8fFBgwY1nlU6evSoiouLNXjw4MbHLc
vS\nuXPn9O9//7tVfo6YmJjGP0dGRqq+vr7x2IMGDWqy7syZMyVJf/3rX5udKzIyUldfffVFjzt16lRt\n3bpV69evV3FxsQ4ePChJFy2gP/zhD9WpU6cLzvp9qampWrFihY4fPy6Xy6WioiI999xzF50FAGA+\nihoAwC/9+/dX9+7dtW3bNn311VcaO3asQkOb/jUSFhZ23nYNRcTr9crhcJx3I4zv7qO+vl4ej+eC\nJeOGG2645HwXuvvkhUrQd8uP9L8bc3x/+Xf5M1d4ePgl53vyySe1c+dOTZ48WXfeeacSExN16623\nXnT9C81zsZuI9OrVS/Hx8dq8ebMiIyN10003qWfPnpecBwBgNi59BAD4bdy4cdq2bZvy8/PPu+xR\nknr37q1Dhw7p3Llzjcs+++wzOZ1O9erVS/369dPBgwebPH7gwIHGP/fp00fHjx9Xt27dFB0drejo\naH355Zdavnx5s3c6bCg2lZWVjctOnjzp98/Wq1evxrNcDR5//HGtWLHisub6bnE8c+aMcnNztWzZ\nMj366KOaMGGCzp49K+ny7uB4oVKampqqbdu2adu2bUpNTW3xPgEAZqGoAQD8dtttt6mgoEBHjx7V\nLbfcct7jqampcjgcmj9/vo4cOaLCwkI9++yzuuOOO9S1a1dNnDhRkrRo0SIdPXpU2dnZTW6CMWnS\nJEnfnn364osv9Pe//10LFy5UaGjoBc/WfVffvn0VHh6ulStX6uTJk1q7dq0+/vhjv3+2GTNmaP/+\n/XrppZd04sQJ5eTkaMuWLRo1atRlzeVyuVRZWamjR48qMjJSbrdbeXl5OnXqlD799FM98cQTkuTX\nHS0vte+GyyEnTJigAwcOaNeuXZowYUKL9wkAMAtFDQDgt8TERLndbo0aNarJ59AauFwu/fGPf1Rp\naanuvPNOzZs3T+PHj2+86+NVV12l1157TcePH9fUqVOVk5Oj+++//7ztz5w5o+nTpys9PV0jRozw\n6/NWkZGRWrJkibZu3aqJEyeqoKBAc+bM8ftnu/HGG7Vy5Url5eUpJSVFr732mpYtW6b4+PjLmuvm\nm29WbGyspkyZooMHD2r58uUqKCjQhAkTlJGRoYkTJyoxMVH79u3ze8YL7bvhjOQ111yjoUOHyuPx\nqGvXri3eJwDALA6LfzUTAIB2ITU1VbNmzdLkyZPbehQAwBXiZiIAAAS5jz76SEVFRSotLb3gZwcB\nAMGHogYAQJBbs2aN9u7dqyVLljR790kAQHDg0kcAAAAAMAw3EwEAAAAAw1DUAAAAAMAwFDUAAAAA\nMAxFDQApcMrIAAAAEklEQVQAAAAMQ1EDAAAAAMP8H2sgip3wq2oKAAAAAElFTkSuQmCC\n",
      "text/plain": [
       "<matplotlib.figure.Figure at 0x28dfa7d6f98>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "uncertainty = [d[\"uncertainty\"] for d in predictions]\n",
    "\n",
    "fig = plt.figure(figsize=(15, 4))\n",
    "sns.distplot(uncertainty, hist=True, norm_hist=True, kde=False)\n",
    "plt.xlabel(\"Model uncertainty\")\n",
    "plt.xlim(-0.05, 1.05)\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The figure above shows, that the model is almost always certain in its predictions."
   ]
  },
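   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "We can quantify this impression directly, for example by computing the share of predictions whose uncertainty stays below some small threshold (0.1 here is an arbitrary illustrative value):"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# Fraction of predictions the model is fairly certain about (threshold chosen for illustration)\n",
     "np.mean(np.array(uncertainty) < 0.1)"
    ]
   },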
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Compare the metrics for the full testing dataset above with the same metrics for 90% most certain predictions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "q = 90\n",
    "thr = np.percentile(uncertainty, q)\n",
    "certain_predictions = [d for d in predictions if d[\"uncertainty\"] <= thr]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.95671826175423291"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "f1_score(certain_predictions)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "             precision    recall  f1-score   support\n",
      "\n",
      "          A       0.95      0.89      0.92       102\n",
      "         NO       0.99      1.00      0.99      1387\n",
      "\n",
      "avg / total       0.99      0.99      0.99      1489\n",
      "\n"
     ]
    }
   ],
   "source": [
    "print(classification_report(certain_predictions))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style>\n",
       "    .dataframe thead tr:only-child th {\n",
       "        text-align: right;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: left;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th>True</th>\n",
       "      <th>A</th>\n",
       "      <th>NO</th>\n",
       "      <th>All</th>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>Pred</th>\n",
       "      <th></th>\n",
       "      <th></th>\n",
       "      <th></th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>A</th>\n",
       "      <td>91</td>\n",
       "      <td>5</td>\n",
       "      <td>96</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>NO</th>\n",
       "      <td>11</td>\n",
       "      <td>1382</td>\n",
       "      <td>1393</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>All</th>\n",
       "      <td>102</td>\n",
       "      <td>1387</td>\n",
       "      <td>1489</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "True    A    NO   All\n",
       "Pred                 \n",
       "A      91     5    96\n",
       "NO     11  1382  1393\n",
       "All   102  1387  1489"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "confusion_matrix(certain_predictions)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can observe a significant increase in precision, recall and F1-score for the atrial fibrillation class. Now only 16 signals were misclassified."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Predicting pipeline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let's predict class probabilities for a new, unobserved ECG signal.<br>\n",
    "Besides, we will load pretrained model from MODEL_PATH directory instead of importing it from another pipeline:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "SIGNALS_PATH = \"D:\\\\Projects\\\\data\\\\ecg\\\\training2017\\\\\"\n",
    "MODEL_PATH = \"D:\\\\Projects\\\\data\\\\ecg\\\\dirichlet_model\"\n",
    "\n",
    "BATCH_SIZE = 100"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.5, allow_growth=True)\n",
    "\n",
    "model_config = {\n",
    "    \"session\": {\"config\": tf.ConfigProto(gpu_options=gpu_options)},\n",
    "    \"build\": False,\n",
    "    \"load\": {\"path\": MODEL_PATH},\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:Restoring parameters from D:\\Projects\\data\\ecg\\dirichlet_model\\model-26000\n"
     ]
    }
   ],
   "source": [
    "template_predict_ppl = (\n",
    "    bf.Pipeline()\n",
    "      .init_model(\"static\", DirichletModel, name=\"dirichlet\", config=model_config)\n",
    "      .init_variable(\"predictions_list\", init_on_each_run=list)\n",
    "      .load(fmt=\"wfdb\", components=[\"signal\", \"meta\"])\n",
    "      .flip_signals()\n",
    "      .split_signals(2048, 2048)\n",
    "      .predict_model(\"dirichlet\", make_data=partial(concatenate_ecg_batch, return_targets=False),\n",
    "                     fetches=\"predictions\", save_to=V(\"predictions_list\"), mode=\"e\")\n",
    "      .run(batch_size=BATCH_SIZE, shuffle=False, drop_last=False, n_epochs=1, lazy=True)\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We need to create a dataset with a single ECG in it, then link it to the template predicting pipeline defined above and run it. Model prediction will be stored in the \"predictions_list\" variable."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "signal_name = \"A00001.hea\"\n",
    "signal_path = SIGNALS_PATH + signal_name\n",
    "predict_eds = EcgDataset(path=signal_path, no_ext=True, sort=True)\n",
    "predict_ppl = (predict_eds >> template_predict_ppl).run()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'target_pred': {'A': 0.020024037, 'NO': 0.97997594},\n",
       "  'uncertainty': 0.0063641071319580078}]"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "predict_ppl.get_variable(\"predictions_list\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The length of the resulting list equals the length of the index of the dataset (1 in out case)."
   ]
  },
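   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "Each element is a dictionary of predicted class probabilities plus the model's uncertainty, so a hard label can be obtained by taking the most probable class. A minimal sketch:"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# Pick the class with the highest predicted probability\n",
     "pred = predict_ppl.get_variable(\"predictions_list\")[0]\n",
     "max(pred[\"target_pred\"], key=pred[\"target_pred\"].get)"
    ]
   },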
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Visualizing predictions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let's look at the target Dirichlet mixture density for a given signal. The pipeline below stores the signal and Dirichlet distribution parameters in its variables in addition to the predicted class probabilities."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:Restoring parameters from D:\\Projects\\data\\ecg\\dirichlet_model\\model-26000\n"
     ]
    }
   ],
   "source": [
    "template_full_predict_ppl = (\n",
    "    bf.Pipeline()\n",
    "      .init_model(\"static\", DirichletModel, name=\"dirichlet\", config=model_config)\n",
    "      .init_variable(\"signals\", init_on_each_run=list)\n",
    "      .init_variable(\"predictions_list\", init_on_each_run=list)\n",
    "      .init_variable(\"parameters_list\", init_on_each_run=list)\n",
    "      .load(fmt=\"wfdb\", components=[\"signal\", \"meta\"])\n",
    "      .update_variable(\"signals\", value=B(\"signal\"))\n",
    "      .flip_signals()\n",
    "      .split_signals(2048, 2048)\n",
    "      .predict_model(\"dirichlet\", make_data=partial(concatenate_ecg_batch, return_targets=False),\n",
    "                     fetches=[\"predictions\", \"parameters\"],\n",
    "                     save_to=[V(\"predictions_list\"), V(\"parameters_list\")], mode=\"e\")\n",
    "      .run(batch_size=BATCH_SIZE, shuffle=False, drop_last=False, n_epochs=1, lazy=True)\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def predict_and_visualize(signal_path):\n",
    "    predict_eds = EcgDataset(path=signal_path, no_ext=True, sort=True)\n",
    "    \n",
    "    full_predict_ppl = (predict_eds >> template_full_predict_ppl).run()\n",
    "    signal = full_predict_ppl.get_variable(\"signals\")[0][0][0][:2000].ravel()\n",
    "    predictions = full_predict_ppl.get_variable(\"predictions_list\")[0]\n",
    "    parameters = full_predict_ppl.get_variable(\"parameters_list\")[0]\n",
    "    \n",
    "    print(predictions)\n",
    "\n",
    "    x = np.linspace(0.001, 0.999, 1000)\n",
    "    y = np.zeros_like(x)\n",
    "    for alpha in parameters:\n",
    "        y += beta.pdf(x, *alpha)\n",
    "    y /= len(parameters)\n",
    "    \n",
    "    fig, (ax1, ax2) = plt.subplots(1, 2, gridspec_kw={\"width_ratios\": [2.5, 1]}, figsize=(15, 4))\n",
    "\n",
    "    ax1.plot(signal)\n",
    "\n",
    "    ax2.plot(x, y)\n",
    "    ax2.fill_between(x, y, alpha=0.3)\n",
    "    ax2.set_ylim(ymin=0)\n",
    "\n",
    "    plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Certain prediction"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First, let’s look at the healthy person’s ECG. The signal is shown on the left plot. Note that it has a clear quasi periodic structure. The right plot shows the pdf of the mixture distributions with atrial fibrillation probability plotted on the horizontal axis. The model is absolutely certain in the absence of AF: almost all the probability density is concentrated around 0."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'target_pred': {'A': 0.016624907, 'NO': 0.98337513}, 'uncertainty': 0.0040795803070068359}\n"
     ]
    },
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAA3kAAAD8CAYAAADKZW0jAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJzsvXl8HNWV9/2r6kW9aJcs74sk7+ANA47BAYNnQgJ2CI6T\nMCQ875OZACaeMG8WJkN4YogZZt7MBBIT3pjBEAJ2iAMmwWCzGLOYGNt4wza2vEteZFu71FK31GvV\n80d1VVd1V0vdanV3dd/z/Xz8sbpU6r7n9q2qc+7ZOFEURRAEQRAEQRAEQRB5AZ/tARAEQRAEQRAE\nQRBDBxl5BEEQBEEQBEEQeQQZeQRBEARBEARBEHkEGXkEQRAEQRAEQRB5BBl5BEEQBEEQBEEQeYQ5\n2wNIlNbWnrS+P8dxqKhwor3dA1YKjrIoM0Byk9xswKLcLMoMkNzplHvYsKK0vG8+M5C+li/rNR/k\nyAcZgPyQIxkZEr0vkScvDM9LE8wzNCMsygyQ3CQ3G7AoN4syAyQ3a3LnOvnyveWDHPkgA5AfcqRD\nhhyeDoIgCIIgCIIgCCIaMvIIgiAIgiAIgiDyCDLyCIIgCIIgCIIg8ggy8giCIAiCIAiCIPIIMvII\ngiAIgiAIgiDyiCEz8g4fPowFCxbE/f3mzZuxaNEizJ49G/fddx/a2tqG6qMJgiAIgiAIgiCIMCkb\neaIoYuPGjfjHf/xHBAIB3XOOHz+ORx55BE8++SR2796NyspKPPTQQ6l+NEEQBEEQBEEQBBFFys3Q\nn3nmGbz99ttYvnw51q5dq3vOm2++iUWLFmHWrFkAgJ/85CeYP38+2traUFlZmdDnpLv/Bc9zmv9Z\nIN9kbrjUjZJCK8qLbf2el29yJwrJTXLnOzzP4fjZDvzHH/Zg8XXj8XdXj832kDICi981wK7cuU67\ny4tddS2YVVOOAosp28MhiLwlZSPv61//OpYvX449e/bEPae+vh5z5sxRXpeVlaGkpAQNDQ0JG3kV\nFU5wXPpv5KWlzrR/htHIB5nPXe7Goy/sBQC88h+3wV4w8NLOB7kHA8nNFqzJ/b1ffgifP4R1757E\nN780LdvDySisfdcyrMqdq/z5g1P4tK4F37ypFl+eNz7bwyGIvCVlI6+qqmrAc/r6+mCzab0rdrsd\nfX19CX9Oe7sn7Z680lInuro8EAQxfR9kIPJJ5s+ONSk/f36yGRNHl8Q9N5/kTgaSm+TOd3ieg88f\nUl53dLizOJrMweJ3DWRG7vLywrS8L8v09EqpPT19+ik+BEEMDSkbeYlgs9ng9Xo1x/r6+uBwOBJ+\nD1EUEQoNfF6qCIKIUIidhySQHzILYmT8fn8oIXnyQe7BQHKzBatyA2BObla/a1blznVE+soIIq1k\npIVCbW0tGhoalNcdHR1wuVyora3NxMcTLKB6WIQY2skmYmm43I2NH52Bm3aJCYIgDEcmUm8IgsiQ\nkbd48WJs3boV+/btg8/nw5NPPokbbrgBZWVlmfh4JvAHMuDmNDBqTx4ZeWzz2Iv78Nbuc3h528ls\nD4UgCIIgCCIrpM3IW7lyJVauXAkAmDZtGh577DE8/PDDmD9/PlpaWvCf//mf6fpo5nht+xnc/+R2\nbN17IdtDyRpqw47CdggAOHy6PdtDIAiCIOIgUrwmQaSVIcvJmzdvHj799FPl9apVqzS/v/XWW3Hr\nrbcO1ccRKrbsOgcA2PD+KXzpGjZKhkcTDKk9eUIWR0IYBYs5I4EKBEEQRBJQtCZBZAbSgoi8IBSK\nGHash2v6GA/dlSEjj2CZYEjAS++ewPv7G7M9FILQhe0nNUGkH9KCiLwgSEYeAGDv8RZ8/8nt2Lzz\nbLaHkhXUZdTNJjZvb+6+APM5ugTw7p7z+Oizi/jjeyc1OcsEkW3IkUcQmYFNLYjIOzThmgzn5K15\n/QhEEfjLx/XZHkpW8Acjxo2JZ0+VaO
7oxY///0/wH+v2Z3sohoBl4+bz+g7l52CQQtgJA8Lu5UkQ\nGYGMPCIvCApqTx4pNKyi/upZVPDXbT2BQFDA+RY3U02x48F0YQeV7OpIB4LIPtIGnEhWHkGkFTLy\niLwgFKIWCoRWaWBxGajzMUmxZ7vZMq/yZAcZjm4gjAcVXiGIzEBGHpEXaPrkkULDLGrvlcCgR5dT\nZbsEyMhj2pupbjhNBj9hSNi9PAkiI5CRR+QFokqHIU8eu6g9NwzaeJodctrsIE+eDBn8hJEgRx5B\nZAYy8oi8QAD1ySO0Hl0Wc/LUyhN5b9hcAzK82pNHhVcIA8Lu1UkQmYGMvByH6cICKmgaCCDKk8fg\notCG6LEnfzQs3x/VXl1aC4ShIFceQWQEMvJyHApNlNDkYtGUMAvLOVhAtGJP3huWl4Pak0fhmoSR\nkHOHGd6DIYiMQEZejkNGnoR6x57l3XvWUX/3LG4WU7ENLSzfC7T5mbQWCCPC7vVJEJmAjLwch3XP\nhYy24AbNCauwGKKphkL0tNCtQILuiYShYHEHjiCyABl5OQ7rSq2MoPHkZXEgBoJFL4ZaZI7BZkys\ne/Ki74csXgMymo2v7A2DIOLC8OVJEBmBjLwch26SEppwTQoBAcBmIAxtekRgMUQv2mPF8nIQKISd\nMCjsbb8RRHYgIy/HIaVWQj0NNCUSLCp2rEelaTc72CN6ybMcpqiWncFbAWFg5IADWpYEkV7IyMtx\nRIaVGDWs90fTg8VpYP16YF2xp3DNCCGBPHmEwaFlSRBphYy8HCdap2X1YS6QJy9GwWXRi8G6gc96\nn8Do+x+LcyBDbWUI40IBmwSRCcjIy3GilRpWn+XUQoHykQA2ZVYTYrwAkRCVhsjiHMho1gJZeYQB\nofx5gkgvZOTlOBSeJMF6mBoQ2zORRS+GWmYGi2syv9lBnrwIInnyCIPC4r2ZILIBGXk5TrQOw6pO\nw3qYGqDnyWNvHlj97mXUniwWpyImZJnBOZDRhrAzPBGE4ZBtPFqWBJFeyMjLcWLCNRm9a4qMh6kB\nel6MLA0ki7D63cuo1wCLOZmxm17szYEM65VWCYIgWCdlI6+urg7Lli3D7Nmzcfvtt+PgwYO65736\n6qtYtGgR5s6dizvvvBNHjhxJ9aMJxCryDOp1AGjXGqAiPACbho0a1j3aseHrWRqIARDpnkj0w4ED\nB7B06VJcddVVuOWWW/Dmm28CAFwuF1asWIG5c+di4cKFePXVV4f+wylckyAyQkpGns/nw/Lly7F0\n6VLs3bsXd999N+6//354PB7NecePH8evfvUrPPfcc9i7dy9uvvlm/Mu//EtKAyckyJMnQZ48UnAB\n7TpgUY8QGPfeRBv5LBv91FaGiEcoFMKKFStw77334sCBA3j88cfxb//2b2hsbMTPf/5zOBwO7Ny5\nE0899RR+9atfxd28TxValgSRXlIy8nbv3g2e53HXXXfBYrFg2bJlqKysxPbt2zXnnTt3DoIgIBQK\nQRRF8DwPm82W0sAJiVjvTXbGkW1YLzgB6DSCZnAeGNbpAdB1EC0xy9X7NGtB6OdEgjm6u7vR0dGh\n6GQcx8FiscBkMmHbtm144IEHUFBQgJkzZ2Lx4sV4/fXXh/TzOSa34Agi85hT+eOGhgbU1tZqjlVX\nV6O+vl5zbMGCBZgwYQJuu+02mEwmOJ1OvPTSS0l9Fsdx4NOYQcjznOb/XCG6ShXHAyZTYjLkqsx6\niFE/9zcH+SS3mmhxOE47D/kqtxr19cBxHEwmjgm5ZfRMGhbk7o9E74e5jN4a16wFLj/ngaVreygp\nKyvDXXfdhR/96Ed48MEHIQgCHn/8cXR2dsJsNmPs2LHKudXV1di6dWtS7z+Qvqbcp3N8XebD+ssH\nGYD8kCMdMqRk5PX29sJut2uO2Ww2eL1ezTGfz4eJEydi5cqVmDx5MtauXYt//ud/xpYtWxL26FVU\nOM
FloO5uaakz7Z8xlLi8Ic3r0lInihzWpN4j12TWw2QyKT9brWaUlxcO+Df5ILcagTdpXpeUOFFe\nZo85L9/kVuNs7VV+5nlOsw7yWW4Z9T3Sbi8AwIbcMn5R+4woKrYndC/IF9TftVpRcDgK8noeWFrj\nQ4EgCLDZbFi9ejVuvvlm7Ny5Ez/+8Y+xZs2aGJ1MT6cbiIH0NYtVUj0LEnxWG518WH/5IAOQH3IM\npQwpGXl2uz3m4vd6vXA4HJpjTz/9NEaMGIEZM2YAAFasWIFXXnkFO3fuxM0335zQZ7W3e9LuySst\ndaKry5NTeRxdXb2a1x0dbgS8iRl5uSqzHv5AUPm5zxtAR4c77rn5JLeaTpf2WuzscsMkRjYB8lVu\nNd3dfcrPwZCAjg43E3LLBIORuDy3R1oPLMgt09XVp3ntcvWioyO5Ta9cRG+Na9aC29vvPTFXycS1\nnQ9GSDRbt27F4cOH8dOf/hQAsHDhQixcuBC//e1v4fP5NOfq6XQDMZC+FgxIzyWfr/9ntdHJh2dL\nPsgA5IccyciQ6H0pJSOvpqYG69ev1xxraGjA4sWLNccuXbqk8fhJYVQmjfdlIERRRCg08HmpIggi\nQqHcWSCBoDbZIhhMfvy5JrMe6kbgicqTD3KrCUathUCctZBvcqtRz4EoQiNnPsstoym2Eb4mWJBb\nJiSkfj/MZdTftfqeGMrzNcDSGh8KLl++DL/frzlmNptxxRVXYP/+/bh06RJGjRoFQNLpJk6cmNT7\nD6SvyfmiQtQ9OlfJh/WXDzIA+SHHUMqQkm9s/vz58Pv9WLduHQKBADZu3Ii2tjYsWLBAc97ChQux\nceNGHD16FMFgEC+88AJCoRDmzp2b0uAJ6gslw3rBCUCvuiZ788B8CwFBfR1kcSBZgu6HEUSqrknE\n4brrrsOxY8fw2muvQRRF7NmzB++99x5uu+02LFq0CE888QT6+vpw+PBhbN68GUuWLBnSz89A5g1B\nEEjRk2e1WrF27Vo8+uijePLJJzF+/HisWbMGDocDK1euBACsWrUK3/rWt9Dd3Y0f/OAH6O7uxrRp\n0/Dcc8+hsDD/wiAyDTXAltD2hMreOLJJrIKbnXFkE00LAQblJyOX7ocymnsiyxNBxDBlyhQ89dRT\nWL16NR5//HGMGjUKv/zlLzFjxgw89thjeOSRR3DjjTfC4XDgwQcfxKxZs4Z4BGTlEUQmSMnIA4Cp\nU6diw4YNMcdXrVql/MxxHO69917ce++9qX4cEQV5byTUyh2Lyi0QK3euxqWnAuseXXXLAAbFj70G\nWJyEMFpPXhYHQhiSm2++WbcmQmlpKVavXp2FEREEMdSksZQJkQnIeyNBnjwKVQO0yiyD4keFa7I3\nAXQNRNBeC+zOA2E85HBNWpcEkV7IyMtxom+SrN40WffgAHpe3SwNJIsIUQV4WENkXLGP9WZnaSAG\ngPXQZcL40LIkiPRCRl6OE5ODkqVxZBtSaGLlZjFUTWPsM6hCsB6iF30/ZNHQlWHd4CeMC2XkEURm\nICMvx4k26lh9mLNecALQ8+pmaSBZRGDdyGFcsY+WmMEpUGDd4CcMjBKvmd1hEES+Q0ZejkOKvQR5\n8siTB5D3QmT8OoitNszgJIRh/VogCIJgHTLycpzonBNWH+asezAAyskDqE8c69cBhWtGoIrDhFGR\nwzVpVRJEeiEjL8ehPnkS2lwsNqF2GtrvnkX5WQ9XpWrDEVhvp0EQBME6ZOTlOFQyXEKbf8LqHGhf\nszgP2uqaWRxIlmC9yiz1yYvAuleXMDCKK4/WJUGkEzLychzKyZNQK/TszgGtBdaNHG0BouyNI1vQ\nRkcE1vMzCeNC4ZoEkRnIyMtxKERPQhuaxOgc0FrQei/A3hyw3gw9ZqODQW+ujHrji2VjlyAIglXI\nyMtxKAdFQuPBYNGFAR0vBoPzELPpkaVxZAttRcXsjSNbULhmBGqh
gitextract_p3quxox6/

├── .gitattributes
├── .gitignore
├── .gitmodules
├── CONTRIBUTING.md
├── LICENSE
├── MANIFEST.in
├── README.md
├── RELEASE.md
├── cardio/
│   ├── __init__.py
│   ├── core/
│   │   ├── __init__.py
│   │   ├── ecg_batch.py
│   │   ├── ecg_batch_tools.py
│   │   ├── ecg_dataset.py
│   │   ├── kernels.py
│   │   └── utils.py
│   ├── models/
│   │   ├── __init__.py
│   │   ├── dirichlet_model/
│   │   │   ├── __init__.py
│   │   │   ├── dirichlet_model.py
│   │   │   └── dirichlet_model_training.ipynb
│   │   ├── fft_model/
│   │   │   ├── __init__.py
│   │   │   ├── fft_model.py
│   │   │   └── fft_model_training.ipynb
│   │   ├── hmm/
│   │   │   ├── __init__.py
│   │   │   ├── hmm.py
│   │   │   └── hmmodel_training.ipynb
│   │   ├── keras_custom_objects.py
│   │   ├── layers.py
│   │   └── metrics.py
│   ├── pipelines/
│   │   ├── __init__.py
│   │   └── pipelines.py
│   └── tests/
│       ├── data/
│       │   ├── A00001.hea
│       │   ├── A00001.mat
│       │   ├── A00002.hea
│       │   ├── A00002.mat
│       │   ├── A00004.hea
│       │   ├── A00004.mat
│       │   ├── A00005.hea
│       │   ├── A00005.mat
│       │   ├── A00008.hea
│       │   ├── A00008.mat
│       │   ├── A00013.hea
│       │   ├── A00013.mat
│       │   ├── REFERENCE.csv
│       │   ├── sample.dcm
│       │   ├── sample.edf
│       │   ├── sample.xml
│       │   ├── sel100.atr
│       │   ├── sel100.hea
│       │   └── sel100.pu1
│       └── test_ecgbatch.py
├── docs/
│   ├── Makefile
│   ├── api/
│   │   ├── api.rst
│   │   ├── core.rst
│   │   ├── models.rst
│   │   └── pipelines.rst
│   ├── conf.py
│   ├── index.rst
│   ├── make.bat
│   ├── modules/
│   │   ├── core.rst
│   │   ├── models.rst
│   │   ├── modules.rst
│   │   └── pipelines.rst
│   └── tutorials.rst
├── examples/
│   ├── Getting_started.ipynb
│   └── Load_XML.ipynb
├── pylintrc
├── requirements-shippable.txt
├── requirements.txt
├── setup.py
├── shippable.yml
└── tutorials/
    ├── I.CardIO.ipynb
    ├── II.Pipelines.ipynb
    ├── III.Models.ipynb
    ├── IV.Research.ipynb
    └── pn2017_data_to_wfdb_format.py
SYMBOL INDEX (150 symbols across 14 files)

FILE: cardio/core/ecg_batch.py
  function add_actions (line 63) | def add_actions(actions_dict, template_docstring):
  class EcgBatch (line 94) | class EcgBatch(bf.Batch):
    method __init__ (line 137) | def __init__(self, index, preloaded=None, unique_labels=None):
    method array_of_nones (line 148) | def array_of_nones(self):
    method array_of_dicts (line 153) | def array_of_dicts(self):
    method unique_labels (line 158) | def unique_labels(self):
    method unique_labels (line 163) | def unique_labels(self, val):
    method label_binarizer (line 179) | def label_binarizer(self):
    method _reraise_exceptions (line 184) | def _reraise_exceptions(self, results):
    method _check_2d (line 202) | def _check_2d(signal):
    method load (line 221) | def load(self, src=None, fmt=None, components=None, ann_ext=None, *arg...
    method _load_data (line 261) | def _load_data(self, index, src=None, fmt=None, components=None, *args...
    method _assemble_load (line 304) | def _assemble_load(self, results, *args, **kwargs):
    method _load_labels (line 330) | def _load_labels(self, src):
    method show_ecg (line 364) | def show_ecg(self, index=None, start=0, end=None, annot=None, subplot_...
    method merge (line 428) | def merge(cls, batches, batch_size=None):
    method apply_transform (line 485) | def apply_transform(self, func, *args, src="signal", dst="signal", **k...
    method _init_component (line 516) | def _init_component(self, *args, **kwargs):
    method apply_to_each_channel (line 529) | def apply_to_each_channel(self, index, func, *args, src="signal", dst=...
    method _filter_batch (line 560) | def _filter_batch(self, keep_mask):
    method drop_labels (line 594) | def drop_labels(self, drop_list):
    method keep_labels (line 624) | def keep_labels(self, keep_list):
    method rename_labels (line 654) | def rename_labels(self, rename_dict):
    method binarize_labels (line 672) | def binarize_labels(self):
    method _filter_channels (line 686) | def _filter_channels(self, index, names=None, indices=None, invert_mas...
    method drop_channels (line 738) | def drop_channels(self, names=None, indices=None):
    method keep_channels (line 765) | def keep_channels(self, names=None, indices=None):
    method rename_channels (line 793) | def rename_channels(self, index, rename_dict):
    method reorder_channels (line 814) | def reorder_channels(self, index, new_order):
    method convert_units (line 852) | def convert_units(self, index, new_units):
    method convolve_signals (line 901) | def convolve_signals(self, kernel, padding_mode="edge", axis=-1, **kwa...
    method band_pass_signals (line 931) | def band_pass_signals(self, index, low=None, high=None, axis=-1):
    method drop_short_signals (line 952) | def drop_short_signals(self, min_length, axis=-1):
    method flip_signals (line 972) | def flip_signals(self, index, window_size=None, threshold=0):
    method slice_signals (line 1021) | def slice_signals(self, index, selection_object):
    method _pad_signal (line 1039) | def _pad_signal(signal, length, pad_value):
    method _get_segmentation_arg (line 1062) | def _get_segmentation_arg(arg, arg_name, target):
    method _check_segmentation_args (line 1096) | def _check_segmentation_args(signal, target, length, arg, arg_name):
    method split_signals (line 1138) | def split_signals(self, index, length, step, pad_value=0):
    method random_split_signals (line 1186) | def random_split_signals(self, index, length, n_segments, pad_value=0):
    method unstack_signals (line 1234) | def unstack_signals(self):
    method _safe_fs_resample (line 1282) | def _safe_fs_resample(self, index, fs):
    method resample_signals (line 1306) | def resample_signals(self, index, fs):
    method random_resample_signals (line 1334) | def random_resample_signals(self, index, distr, **kwargs):
    method spectrogram (line 1375) | def spectrogram(self, index, *args, src="signal", dst="signal", **kwar...
    method standardize (line 1409) | def standardize(self, index, axis=None, eps=1e-10, *, src="signal", ds...
    method calc_ecg_parameters (line 1438) | def calc_ecg_parameters(self, index, src=None):
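The `standardize` entry above lists the signature `standardize(index, axis=None, eps=1e-10, src="signal", dst="signal")`. The core computation it suggests is ordinary zero-mean, unit-variance scaling; a plain-numpy sketch of that step (the batch/index plumbing is omitted, and the exact in-batch behavior is an assumption based only on the signature):

```python
import numpy as np

def standardize(signal, axis=None, eps=1e-10):
    """Zero-mean, unit-variance scaling of an ECG signal.

    Mirrors the axis/eps parameters listed for EcgBatch.standardize;
    the eps term guards against division by zero on flat channels.
    """
    mean = signal.mean(axis=axis, keepdims=True)
    std = signal.std(axis=axis, keepdims=True)
    return (signal - mean) / (std + eps)

# A two-channel toy signal, standardized per channel (axis=1):
sig = np.array([[0.0, 1.0, 2.0, 3.0],
                [10.0, 10.0, 10.0, 10.0]])
out = standardize(sig, axis=1)
```

The flat second channel maps to zeros rather than NaN, which is the point of the `eps` guard.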

FILE: cardio/core/ecg_batch_tools.py
  function check_signames (line 43) | def check_signames(signame, nsig):
  function check_units (line 69) | def check_units(units, nsig):
  function unify_sex (line 92) | def unify_sex(sex):
  function load_wfdb (line 115) | def load_wfdb(path, components, *args, **kwargs):
  function load_dicom (line 163) | def load_dicom(path, components, *args, **kwargs):
  function load_edf (line 253) | def load_edf(path, components, *args, **kwargs):
  function load_wav (line 304) | def load_wav(path, components, *args, **kwargs):
  function load_xml (line 346) | def load_xml(path, components, xml_type, *args, **kwargs):
  function load_xml_schiller (line 376) | def load_xml_schiller(path, components, *args, **kwargs):  # pylint: dis...
  function split_signals (line 444) | def split_signals(signals, length, step):
  function random_split_signals (line 468) | def random_split_signals(signals, length, n_segments):
  function resample_signals (line 494) | def resample_signals(signals, new_length):
  function convolve_signals (line 519) | def convolve_signals(signals, kernel, padding_mode="edge", axis=-1, **kw...
  function band_pass_signals (line 566) | def band_pass_signals(signals, freq, low=None, high=None, axis=-1):
  function find_intervals_borders (line 608) | def find_intervals_borders(hmm_annotation, inter_val):
  function find_maxes (line 642) | def find_maxes(signal, starts, ends):
  function calc_hr (line 671) | def calc_hr(signal, hmm_annotation, fs, r_state=R_STATE):
  function calc_pq (line 701) | def calc_pq(hmm_annotation, fs, p_states=P_STATES, q_state=Q_STATE, r_st...
  function calc_qt (line 762) | def calc_qt(hmm_annotation, fs, t_states=T_STATES, q_state=Q_STATE, r_st...
  function calc_qrs (line 823) | def calc_qrs(hmm_annotation, fs, s_state=S_STATE, q_state=Q_STATE, r_sta...
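Several of the tools above slice a long recording into fixed-length segments; `split_signals(signals, length, step)` names the parameters directly. A plain-numpy sketch of the overlapping-window operation that signature plausibly describes (segment layout and axis conventions are assumptions, not taken from the source):

```python
import numpy as np

def split_signals(signals, length, step):
    """Split a (channels, n_samples) array into fixed-length segments.

    Windows of `length` samples are taken every `step` samples along
    the time axis and stacked into a new leading segment axis.
    """
    n_samples = signals.shape[-1]
    starts = range(0, n_samples - length + 1, step)
    return np.stack([signals[..., s:s + length] for s in starts])

# One-channel toy signal, 10 samples -> four overlapping 4-sample segments
sig = np.arange(10, dtype=float).reshape(1, 10)
segments = split_signals(sig, length=4, step=2)
```

With `step < length` the segments overlap, a common trick for augmenting short ECG datasets.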

FILE: cardio/core/ecg_dataset.py
  class EcgDataset (line 7) | class EcgDataset(bf.Dataset):
    method __init__ (line 33) | def __init__(self, index=None, batch_class=EcgBatch, preloaded=None, i...

FILE: cardio/core/kernels.py
  function _check_kernel_size (line 6) | def _check_kernel_size(size):
  function gaussian (line 12) | def gaussian(size, sigma=None):
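The listed `gaussian(size, sigma=None)` generator pairs naturally with `convolve_signals` for smoothing. A sketch matching that signature; the default-sigma rule (`size / 6` here) and the normalization to unit sum are assumptions, not read from the source:

```python
import numpy as np

def gaussian(size, sigma=None):
    """Discrete Gaussian smoothing kernel of a given size.

    Normalized to unit sum so convolution preserves signal scale.
    """
    if sigma is None:
        sigma = size / 6.0  # assumed default: ~99.7% of mass fits the window
    x = np.arange(size) - (size - 1) / 2.0  # centered sample positions
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    return kernel / kernel.sum()

k = gaussian(7)
```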

FILE: cardio/core/utils.py
  function get_units_conversion_factor (line 13) | def get_units_conversion_factor(old_units, new_units):
  function partialmethod (line 36) | def partialmethod(func, *frozen_args, **frozen_kwargs):
  class LabelBinarizer (line 61) | class LabelBinarizer(LB):
    method transform (line 69) | def transform(self, y):
    method inverse_transform (line 89) | def inverse_transform(self, Y, threshold=None):

FILE: cardio/models/dirichlet_model/dirichlet_model.py
  function concatenate_ecg_batch (line 12) | def concatenate_ecg_batch(batch, model, return_targets=True):
  class DirichletModelBase (line 47) | class DirichletModelBase(TFModel):
    method _build (line 67) | def _build(self, *args, **kwargs):  # pylint: disable=too-many-locals
  class DirichletModel (line 127) | class DirichletModel(DirichletModelBase):
    method _get_dirichlet_mixture_stats (line 139) | def _get_dirichlet_mixture_stats(alpha):
    method train (line 163) | def train(self, fetches=None, feed_dict=None, use_lock=False, *args, *...
    method predict (line 188) | def predict(self, fetches=None, feed_dict=None, split_indices=None):  ...

FILE: cardio/models/fft_model/fft_model.py
  class FFTModel (line 14) | class FFTModel(KerasModel):#pylint: disable=too-many-locals
    method _build (line 19) | def _build(self, **kwargs):#pylint: disable=too-many-locals

FILE: cardio/models/hmm/hmm.py
  function prepare_hmm_input (line 9) | def prepare_hmm_input(batch, model, features, channel_ix):
  class HMModel (line 42) | class HMModel(BaseModel):
    method __init__ (line 50) | def __init__(self, *args, **kwargs):
    method build (line 54) | def build(self, *args, **kwargs):
    method save (line 76) | def save(self, path, *args, **kwargs):  # pylint: disable=arguments-di...
    method load (line 91) | def load(self, path, *args, **kwargs):  # pylint: disable=arguments-di...
    method train (line 103) | def train(self, *args, **kwargs):
    method predict (line 124) | def predict(self, *args, **kwargs):

FILE: cardio/models/keras_custom_objects.py
  class RFFT (line 9) | class RFFT(Layer):
    method rfft (line 21) | def rfft(self, x, fft_fn):
    method call (line 46) | def call(self, x):
    method compute_output_shape (line 52) | def compute_output_shape(self, input_shape):
  class Crop (line 61) | class Crop(Layer):
    method __init__ (line 85) | def __init__(self, begin, size, *agrs, **kwargs):
    method call (line 90) | def call(self, x):
    method compute_output_shape (line 96) | def compute_output_shape(self, input_shape):
  class Inception2D (line 103) | class Inception2D(Layer):#pylint: disable=too-many-instance-attributes
    method __init__ (line 139) | def __init__(self, base_dim, nb_filters,#pylint: disable=too-many-argu...
    method build (line 150) | def build(self, input_shape):
    method call (line 178) | def call(self, x):
    method compute_output_shape (line 195) | def compute_output_shape(self, input_shape):

FILE: cardio/models/layers.py
  function conv1d_block (line 6) | def conv1d_block(scope, input_layer, is_training, filters, kernel_size, ...
  function resnet1d_block (line 43) | def resnet1d_block(scope, input_layer, is_training, filters, kernel_size...

FILE: cardio/models/metrics.py
  function get_class_prob (line 11) | def get_class_prob(predictions_dict):
  function get_labels (line 34) | def get_labels(predictions_list):
  function get_probs (line 59) | def get_probs(predictions_list):
  function f1_score (line 84) | def f1_score(predictions_list, labels=None, average="macro", **kwargs):
  function auc (line 108) | def auc(predictions_list, average="macro", **kwargs):
  function classification_report (line 129) | def classification_report(predictions_list, **kwargs):
  function confusion_matrix (line 148) | def confusion_matrix(predictions_list, margins=True, **kwargs):
  function calculate_metrics (line 180) | def calculate_metrics(metrics_list, predictions_list):
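The `f1_score` entry above exposes an `average="macro"` default. Independent of the module's `predictions_list` format, macro averaging means: compute F1 per class, then average with equal class weight. A pure-numpy sketch of that computation (label values here are illustrative, echoing the N/A/O classes in the test data):

```python
import numpy as np

def macro_f1(y_true, y_pred, labels):
    """Macro-averaged F1: per-class F1 scores, averaged with equal weight."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    scores = []
    for label in labels:
        tp = np.sum((y_pred == label) & (y_true == label))
        fp = np.sum((y_pred == label) & (y_true != label))
        fn = np.sum((y_pred != label) & (y_true == label))
        denom = 2 * tp + fp + fn
        # F1 = 2*tp / (2*tp + fp + fn); define it as 0 for an absent class
        scores.append(2 * tp / denom if denom else 0.0)
    return float(np.mean(scores))

score = macro_f1(["A", "N", "A", "O"], ["A", "N", "N", "O"], ["A", "N", "O"])
```

Here class O scores a perfect 1.0 while A and N each score 2/3, so the macro average is 7/9; rare classes weigh as much as common ones, which is why macro averaging suits imbalanced ECG label sets.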

FILE: cardio/pipelines/pipelines.py
  function dirichlet_train_pipeline (line 15) | def dirichlet_train_pipeline(labels_path, batch_size=256, n_epochs=1000,...
  function dirichlet_predict_pipeline (line 66) | def dirichlet_predict_pipeline(model_path, batch_size=100, gpu_options=N...
  function hmm_preprocessing_pipeline (line 108) | def hmm_preprocessing_pipeline(batch_size=20, features="hmm_features"):
  function hmm_train_pipeline (line 150) | def hmm_train_pipeline(hmm_preprocessed, batch_size=20, features="hmm_fe...
  function hmm_predict_pipeline (line 266) | def hmm_predict_pipeline(model_path, batch_size=20, features="hmm_featur...

FILE: cardio/tests/test_ecgbatch.py
  function setup_module_load (line 14) | def setup_module_load(request):
  function setup_class_methods (line 40) | def setup_class_methods(request):
  function setup_class_dataset (line 58) | def setup_class_dataset(request):
  function setup_class_pipeline (line 74) | def setup_class_pipeline(request):
  class TestEcgBatchLoad (line 94) | class TestEcgBatchLoad():
    method test_load_wfdb (line 99) | def test_load_wfdb(self, setup_module_load): #pylint: disable=redefine...
    method test_load_wfdb_annotation (line 120) | def test_load_wfdb_annotation(self, setup_module_load): #pylint: disab...
    method test_load_dicom (line 144) | def test_load_dicom(self, setup_module_load): #pylint: disable=redefin...
    method test_load_edf (line 166) | def test_load_edf(self, setup_module_load): #pylint: disable=redefined...
    method test_load_wav (line 188) | def test_load_wav(self, setup_module_load): #pylint: disable=redefined...
  class TestEcgBatchSingleMethods (line 210) | class TestEcgBatchSingleMethods:
    method tets_drop_short_signal (line 215) | def tets_drop_short_signal(self, setup_class_methods): #pylint: disabl...
    method test_split_signals (line 231) | def test_split_signals(self, setup_class_methods): #pylint: disable=re...
    method test_resample_signals (line 241) | def test_resample_signals(self, setup_class_methods):#pylint: disable=...
    method test_band_pass_signals (line 252) | def test_band_pass_signals(self, setup_class_methods): #pylint: disabl...
    method test_flip_signals (line 263) | def test_flip_signals(self, setup_class_methods): #pylint: disable=red...
  class TestEcgBatchDataset (line 274) | class TestEcgBatchDataset:
    method test_cv_split (line 279) | def test_cv_split(self, setup_class_dataset): #pylint: disable=redefin...
    method test_pipeline_load (line 298) | def test_pipeline_load(self, setup_class_dataset, setup_module_load): ...
  class TestEcgBatchPipelineMethods (line 327) | class TestEcgBatchPipelineMethods:
    method test_pipeline_1 (line 332) | def test_pipeline_1(self, setup_class_pipeline): #pylint: disable=rede...
    method test_pipeline_2 (line 348) | def test_pipeline_2(self, setup_class_pipeline): #pylint: disable=rede...
    method test_get_signal_with_meta (line 366) | def test_get_signal_with_meta(self, setup_module_load): #pylint: disab...
  class TestIntervalBatchTools (line 390) | class TestIntervalBatchTools:
    method test_find_interval_borders (line 394) | def test_find_interval_borders(self): #pylint: disable=redefined-outer...
    method test_find_maxes (line 419) | def test_find_maxes(self): #pylint: disable=redefined-outer-name
    method test_calc_hr (line 452) | def test_calc_hr(self):

FILE: tutorials/pn2017_data_to_wfdb_format.py
  function check_format (line 9) | def check_format(line_values):
  function main (line 25) | def main():
Condensed preview: 75 files, each showing path, character count, and a content snippet (full structured content: 3,215K chars).
[
  {
    "path": ".gitattributes",
    "chars": 184,
    "preview": "# Set the default behavior, in case people don't have core.autocrlf set.\n* text=auto\n\n# Explicitly declare text files yo"
  },
  {
    "path": ".gitignore",
    "chars": 60,
    "preview": "*.pyc\ndocs/_build/\n.cache/\n__pycache__/\n.ipynb_checkpoints/\n"
  },
  {
    "path": ".gitmodules",
    "chars": 109,
    "preview": "[submodule \"cardio/batchflow\"]\n\tpath = cardio/batchflow\n\turl = https://github.com/analysiscenter/dataset.git\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "chars": 2366,
    "preview": "- Перед любыми операциями с репозиториями у каждого пользователя должно быть настроено имя и адрес почты:\n```bash\ngit co"
  },
  {
    "path": "LICENSE",
    "chars": 11357,
    "preview": "                                 Apache License\n                           Version 2.0, January 2004\n                   "
  },
  {
    "path": "MANIFEST.in",
    "chars": 323,
    "preview": "include MANIFEST.in\ninclude LICENSE\ninclude README.md\ninclude setup.py\n\nrecursive-include cardio *\nrecursive-include doc"
  },
  {
    "path": "README.md",
    "chars": 4355,
    "preview": "# CardIO\n\n`CardIO` is designed to build end-to-end machine learning models for deep research of electrocardiograms.\n\nMai"
  },
  {
    "path": "RELEASE.md",
    "chars": 1708,
    "preview": "# Release 0.3.0\n\n## Major Features and Improvements\n* `load` method now supports Schiller XML format.\n* Added channels p"
  },
  {
    "path": "cardio/__init__.py",
    "chars": 162,
    "preview": "\"\"\" CardIO package \"\"\"\n\nfrom . import batchflow  # pylint: disable=wildcard-import\nfrom .core import *  # pylint: disabl"
  },
  {
    "path": "cardio/core/__init__.py",
    "chars": 132,
    "preview": "\"\"\" Core CardIO objects \"\"\"\n\nfrom .ecg_batch import EcgBatch, add_actions\nfrom .ecg_dataset import EcgDataset\nfrom . imp"
  },
  {
    "path": "cardio/core/ecg_batch.py",
    "chars": 55207,
    "preview": "\"\"\"Contains ECG Batch class.\"\"\"\n# pylint: disable=too-many-lines\n\nimport copy\nfrom textwrap import dedent\n\nimport numpy "
  },
  {
    "path": "cardio/core/ecg_batch_tools.py",
    "chars": 24707,
    "preview": "\"\"\"Сontains ECG processing tools.\"\"\"\n\nimport os\nimport struct\nimport datetime\nfrom xml.etree import ElementTree\n\nimport "
  },
  {
    "path": "cardio/core/ecg_dataset.py",
    "chars": 1356,
    "preview": "\"\"\"Contains ECG Dataset class.\"\"\"\n\nfrom .. import batchflow as bf\nfrom .ecg_batch import EcgBatch\n\n\nclass EcgDataset(bf."
  },
  {
    "path": "cardio/core/kernels.py",
    "chars": 1099,
    "preview": "\"\"\"Contains kernel generation functions.\"\"\"\n\nimport numpy as np\n\n\ndef _check_kernel_size(size):\n    \"\"\"Check if kernel s"
  },
  {
    "path": "cardio/core/utils.py",
    "chars": 3147,
    "preview": "\"\"\"Miscellaneous ECG Batch utils.\"\"\"\n\nimport functools\n\nimport pint\nimport numpy as np\nfrom sklearn.preprocessing import"
  },
  {
    "path": "cardio/models/__init__.py",
    "chars": 251,
    "preview": "\"\"\"Contains ECG models and custom functions.\"\"\"\n\nfrom .dirichlet_model import *  # pylint: disable=wildcard-import\nfrom "
  },
  {
    "path": "cardio/models/dirichlet_model/__init__.py",
    "chars": 106,
    "preview": "\"\"\"Contains dirichlet model class.\"\"\"\n\nfrom .dirichlet_model import DirichletModel, concatenate_ecg_batch\n"
  },
  {
    "path": "cardio/models/dirichlet_model/dirichlet_model.py",
    "chars": 9518,
    "preview": "\"\"\"Contains Dirichlet model class.\"\"\"\n\nfrom itertools import zip_longest\n\nimport numpy as np\nimport tensorflow as tf\n\nfr"
  },
  {
    "path": "cardio/models/dirichlet_model/dirichlet_model_training.ipynb",
    "chars": 158087,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# Dirichlet model training\"\n   ]\n  "
  },
  {
    "path": "cardio/models/fft_model/__init__.py",
    "chars": 65,
    "preview": "\"\"\"Contains fft model class.\"\"\"\n\nfrom .fft_model import FFTModel\n"
  },
  {
    "path": "cardio/models/fft_model/fft_model.py",
    "chars": 2036,
    "preview": "\"\"\" Contains fft_model architecture \"\"\"\n\nimport tensorflow as tf\nimport keras.backend as K\n\nfrom keras.layers import Inp"
  },
  {
    "path": "cardio/models/fft_model/fft_model_training.ipynb",
    "chars": 33501,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# FFT model training\\n\",\n    \"\\n\",\n"
  },
  {
    "path": "cardio/models/hmm/__init__.py",
    "chars": 87,
    "preview": "\"\"\"Contains HMM annotation model class.\"\"\"\nfrom .hmm import HMModel, prepare_hmm_input\n"
  },
  {
    "path": "cardio/models/hmm/hmm.py",
    "chars": 5135,
    "preview": "\"\"\" HMModel \"\"\"\n\nimport numpy as np\nimport dill\n\nfrom ...batchflow.models.base import BaseModel #pylint: disable=no-name"
  },
  {
    "path": "cardio/models/hmm/hmmodel_training.ipynb",
    "chars": 170339,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# HMModel trainig\"\n   ]\n  },\n  {\n  "
  },
  {
    "path": "cardio/models/keras_custom_objects.py",
    "chars": 6537,
    "preview": "\"\"\" Contains keras custom objects \"\"\"\n\nimport tensorflow as tf\nfrom keras.engine.topology import Layer\nfrom keras.layers"
  },
  {
    "path": "cardio/models/layers.py",
    "chars": 3546,
    "preview": "\"\"\"Contains helper functions for tensorflow layers creation.\"\"\"\n\nimport tensorflow as tf\n\n\ndef conv1d_block(scope, input"
  },
  {
    "path": "cardio/models/metrics.py",
    "chars": 7216,
    "preview": "\"\"\"Contains metric functions.\"\"\"\n\nimport numpy as np\nimport pandas as pd\nfrom sklearn import metrics\n\n\n__all__ = [\"f1_sc"
  },
  {
    "path": "cardio/pipelines/__init__.py",
    "chars": 91,
    "preview": "\"\"\"Contains ECG pipelines.\"\"\"\n\nfrom .pipelines import *  # pylint: disable=wildcard-import\n"
  },
  {
    "path": "cardio/pipelines/pipelines.py",
    "chars": 12288,
    "preview": "\"\"\"Contains pipelines.\"\"\"\n\nfrom functools import partial\nimport numpy as np\n\nimport tensorflow as tf\nfrom hmmlearn impor"
  },
  {
    "path": "cardio/tests/data/A00001.hea",
    "chars": 83,
    "preview": "A00001 1 300 9000 05:05:15 08/05/2014 \nA00001.mat 16+24 1000/mV 16 0 -127 0 0 ECG \n"
  },
  {
    "path": "cardio/tests/data/A00002.hea",
    "chars": 82,
    "preview": "A00002 1 300 9000 11:05:25 16/05/2014 \nA00002.mat 16+24 1000/mV 16 0 128 0 0 ECG \n"
  },
  {
    "path": "cardio/tests/data/A00004.hea",
    "chars": 82,
    "preview": "A00004 1 300 9000 11:05:48 01/05/2014 \nA00004.mat 16+24 1000/mV 16 0 519 0 0 ECG \n"
  },
  {
    "path": "cardio/tests/data/A00005.hea",
    "chars": 84,
    "preview": "A00005 1 300 18000 08:12:08 23/12/2013 \nA00005.mat 16+24 1000/mV 16 0 -188 0 0 ECG \n"
  },
  {
    "path": "cardio/tests/data/A00008.hea",
    "chars": 84,
    "preview": "A00008 1 300 18000 06:11:36 24/11/2013 \nA00008.mat 16+24 1000/mV 16 0 -187 0 0 ECG \n"
  },
  {
    "path": "cardio/tests/data/A00013.hea",
    "chars": 80,
    "preview": "A00013 1 300 9000 10:06:01 10/06/2014 \nA00013.mat 16+24 1000/mV 16 0 3 0 0 ECG \n"
  },
  {
    "path": "cardio/tests/data/REFERENCE.csv",
    "chars": 54,
    "preview": "A00001,N\nA00002,N\nA00004,A\nA00005,A\nA00008,O\nA00013,O\n"
  },
  {
    "path": "cardio/tests/data/sample.xml",
    "chars": 292241,
    "preview": "<?xml version=\"1.0\"?>\n<com.xiriuz.sema.xml.schillerEDI.SchillerEDI>\n<patdata>\n<id></id>\n<lastname></lastname>\n<firstname"
  },
  {
    "path": "cardio/tests/data/sel100.hea",
    "chars": 214,
    "preview": "sel100 2 250/360 225000\nsel100.dat 212 200(0) 11 1024 945 -13873 0 MLII\nsel100.dat 212 200(0) 11 1024 955 14507 0 V5\n# 6"
  },
  {
    "path": "cardio/tests/test_ecgbatch.py",
    "chars": 17535,
    "preview": "\"\"\"Module for testing ecg_batch methods\"\"\"\n\nimport os\nimport random\nimport numpy as np\nimport pytest\n\nfrom cardio import"
  },
  {
    "path": "docs/Makefile",
    "chars": 607,
    "preview": "# Minimal makefile for Sphinx documentation\n#\n\n# You can set these variables from the command line.\nSPHINXOPTS    =\nSPHI"
  },
  {
    "path": "docs/api/api.rst",
    "chars": 73,
    "preview": "===\nAPI\n===\n\n.. toctree::\n   :maxdepth: 2\n\n   core\n   models\n   pipelines"
  },
  {
    "path": "docs/api/core.rst",
    "chars": 2398,
    "preview": "=====\nCore\n=====\n\n\nEcgBatch\n========\n\n.. autoclass:: cardio.EcgBatch\n\t:show-inheritance:\n\nMethods\n-------\n\nInput/output "
  },
  {
    "path": "docs/api/models.rst",
    "chars": 704,
    "preview": "======\nModels\n======\n\n--------------\nDirichletModel\n--------------\n\n.. autoclass:: cardio.models.DirichletModel\n    :mem"
  },
  {
    "path": "docs/api/pipelines.rst",
    "chars": 572,
    "preview": "=========\nPipelines\n=========\n\ndirichlet_train_pipeline\n------------------------\n\n.. autofunction:: cardio.pipelines.dir"
  },
  {
    "path": "docs/conf.py",
    "chars": 5078,
    "preview": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n#\n# CardIO documentation build configuration file, created by\n# hand on T"
  },
  {
    "path": "docs/index.rst",
    "chars": 4576,
    "preview": "===================================\nWelcome to CardIO's documentation!\n===================================\n`CardIO` is d"
  },
  {
    "path": "docs/make.bat",
    "chars": 768,
    "preview": "@ECHO OFF\n\npushd %~dp0\n\nREM Command file for Sphinx documentation\n\nif \"%SPHINXBUILD%\" == \"\" (\n\tset SPHINXBUILD=python -m"
  },
  {
    "path": "docs/modules/core.rst",
    "chars": 2145,
    "preview": "====\nCore\n====\n\nCardIO's core classes are ``EcgBatch`` and ``EcgDataset``. They are responsible for\nstoring of ECGs, bat"
  },
  {
    "path": "docs/modules/models.rst",
    "chars": 6309,
    "preview": "======\nModels\n======\n\nThis is a place where ECG models live. You can write your own model or exploit provided :doc:`mode"
  },
  {
    "path": "docs/modules/modules.rst",
    "chars": 87,
    "preview": "========\nModules\n========\n\n.. toctree::\n   :maxdepth: 2\n\n   core\n   models\n   pipelines"
  },
  {
    "path": "docs/modules/pipelines.rst",
    "chars": 3178,
    "preview": "=========\nPipelines\n=========\n\nModule ``pipelines`` contains functions that build pipelines that we used to train models"
  },
  {
    "path": "docs/tutorials.rst",
    "chars": 1350,
    "preview": "=========\nTutorials\n=========\n\nThe following tutorials will help you to explore CardIO framework from simple to complex "
  },
  {
    "path": "examples/Getting_started.ipynb",
    "chars": 93650,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"This notebook briefly shows capabil"
  },
  {
    "path": "examples/Load_XML.ipynb",
    "chars": 561732,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 1,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n "
  },
  {
    "path": "pylintrc",
    "chars": 488,
    "preview": "[MASTER]\nextension-pkg-whitelist=numpy\ninit-hook='import sys; sys.path.append(\".\")'\n\n[FORMAT]\nmax-line-length=120\nmax-at"
  },
  {
    "path": "requirements-shippable.txt",
    "chars": 16,
    "preview": "pytest>=3.3.0\n.\n"
  },
  {
    "path": "requirements.txt",
    "chars": 16,
    "preview": "pytest>=3.3.0\n.\n"
  },
  {
    "path": "setup.py",
    "chars": 1876,
    "preview": "\"\"\"\nCardIO is a library that works with electrocardiograms.\nDocumentation - https://analysiscenter.github.io/cardio/\n\"\"\""
  },
  {
    "path": "shippable.yml",
    "chars": 748,
    "preview": "language: none\n\nenv:\n  global:\n    - DOCKER_ACC=analysiscenter1\n    - DOCKER_REPO=ds-py3\n    - TAG=\"latest\"\n\nbuild:\n  pr"
  },
  {
    "path": "tutorials/I.CardIO.ipynb",
    "chars": 1016528,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# CardIO framework for deep researc"
  },
  {
    "path": "tutorials/II.Pipelines.ipynb",
    "chars": 353475,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# Build and run pipelines\"\n   ]\n  }"
  },
  {
    "path": "tutorials/III.Models.ipynb",
    "chars": 262705,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# Models for deep research of ECG\"\n"
  },
  {
    "path": "tutorials/IV.Research.ipynb",
    "chars": 21763,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"markdown\",\n   \"metadata\": {},\n   \"source\": [\n    \"# Deep research of electrocardiogra"
  },
  {
    "path": "tutorials/pn2017_data_to_wfdb_format.py",
    "chars": 2389,
    "preview": "\"\"\" Simple script to format PhysioNet/CinC 2017 Challenge data according to\nwfdb standard.\"\"\"\n\nimport os\nimport argparse"
  }
]
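The listing above is a JSON array of per-file entries. As a minimal sketch of how such a manifest could be consumed programmatically (the entries below are copied from the listing, with previews omitted for brevity; saving the full array to a file and loading it with `json.load` works the same way):

```python
import json

# A few entries copied from the manifest above; each entry records the
# file path and its size in characters.
manifest_json = """
[
  {"path": "cardio/tests/data/A00001.hea", "chars": 83},
  {"path": "cardio/tests/data/A00002.hea", "chars": 82},
  {"path": "cardio/tests/data/REFERENCE.csv", "chars": 54}
]
"""
entries = json.loads(manifest_json)

# Collect the WFDB header (.hea) records listed in the manifest.
hea_records = [e["path"] for e in entries if e["path"].endswith(".hea")]
print(hea_records)
# -> ['cardio/tests/data/A00001.hea', 'cardio/tests/data/A00002.hea']
```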

// ... and 10 more files (download for full content)


Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
