Full Code of eloquentarduino/micromlgen for AI

Repository: eloquentarduino/micromlgen
Branch: master
Commit: e0889bd71687
Files: 79
Total size: 52.2 KB

Directory structure:
gitextract_zfuhlvcy/

├── .gitattributes
├── .gitignore
├── LICENSE.txt
├── MANIFEST
├── README.md
├── micromlgen/
│   ├── __init__.py
│   ├── decisiontreeclassifier.py
│   ├── decisiontreeregressor.py
│   ├── gaussiannb.py
│   ├── linear_regression.py
│   ├── logisticregression.py
│   ├── micromlgen.py
│   ├── pca.py
│   ├── platforms.py
│   ├── principalfft.py
│   ├── randomforestclassifier.py
│   ├── randomforestregressor.py
│   ├── rvm.py
│   ├── sefr.py
│   ├── svm.py
│   ├── templates/
│   │   ├── __init__.py
│   │   ├── _skeleton.jinja
│   │   ├── classmap.jinja
│   │   ├── decisiontree/
│   │   │   ├── __init__.py
│   │   │   ├── decisiontree.jinja
│   │   │   ├── decisiontree_regressor.jinja
│   │   │   ├── tree.jinja
│   │   │   └── tree_regressor.jinja
│   │   ├── dot.jinja
│   │   ├── gaussiannb/
│   │   │   ├── __init__.py
│   │   │   ├── gaussiannb.jinja
│   │   │   └── vote.jinja
│   │   ├── linearregression/
│   │   │   ├── __init__.py
│   │   │   └── linearregression.jinja
│   │   ├── logisticregression/
│   │   │   ├── __init__.py
│   │   │   ├── logisticregression.jinja
│   │   │   ├── vote.arduino.jinja
│   │   │   └── vote.attiny.jinja
│   │   ├── pca/
│   │   │   ├── __init__.py
│   │   │   └── pca.jinja
│   │   ├── principalfft/
│   │   │   ├── __init__.py
│   │   │   ├── lut.jinja
│   │   │   ├── lut_bool.jinja
│   │   │   └── principalfft.jinja
│   │   ├── randomforest/
│   │   │   ├── __init__.py
│   │   │   ├── randomforest.jinja
│   │   │   ├── randomforest_regressor.jinja
│   │   │   ├── tree.jinja
│   │   │   └── tree_regressor.jinja
│   │   ├── rvm/
│   │   │   ├── __init__.py
│   │   │   └── rvm.jinja
│   │   ├── sefr/
│   │   │   ├── __init__.py
│   │   │   ├── dot.jinja
│   │   │   └── sefr.jinja
│   │   ├── svm/
│   │   │   ├── __init__.py
│   │   │   ├── computations/
│   │   │   │   ├── class.jinja
│   │   │   │   ├── decisions.jinja
│   │   │   │   ├── kernel/
│   │   │   │   │   ├── arduino.jinja
│   │   │   │   │   └── attiny.jinja
│   │   │   │   └── votes.jinja
│   │   │   ├── kernel/
│   │   │   │   ├── arduino.jinja
│   │   │   │   ├── attiny.jinja
│   │   │   │   └── kernel.jinja
│   │   │   └── svm.jinja
│   │   ├── testset.jinja
│   │   ├── trainset.jinja
│   │   ├── vote.jinja
│   │   ├── wifiindoorpositioning/
│   │   │   ├── __init__.py
│   │   │   └── wifiindoorpositioning.jinja
│   │   └── xgboost/
│   │       ├── __init__.py
│   │       ├── tree.jinja
│   │       └── xgboost.jinja
│   ├── utils.py
│   ├── wifiindoorpositioning.py
│   └── xgboost.py
├── publish
├── setup.cfg
├── setup.py
└── setup_template.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitattributes
================================================
*.jinja linguist-detectable=false
*.py linguist-detectable=true

================================================
FILE: .gitignore
================================================
venv
.idea
tests
micromlgen/__pycache__
examples

================================================
FILE: LICENSE.txt
================================================
MIT License
Copyright (c) 2018 YOUR NAME
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

================================================
FILE: MANIFEST
================================================
# file GENERATED by distutils, do NOT edit
setup.cfg
setup.py
micromlgen/__init__.py
micromlgen/decisiontreeclassifier.py
micromlgen/decisiontreeregressor.py
micromlgen/gaussiannb.py
micromlgen/linear_regression.py
micromlgen/logisticregression.py
micromlgen/micromlgen.py
micromlgen/pca.py
micromlgen/platforms.py
micromlgen/principalfft.py
micromlgen/randomforestclassifier.py
micromlgen/randomforestregressor.py
micromlgen/rvm.py
micromlgen/sefr.py
micromlgen/svm.py
micromlgen/utils.py
micromlgen/wifiindoorpositioning.py
micromlgen/xgboost.py
micromlgen/templates/__init__.py
micromlgen/templates/_skeleton.jinja
micromlgen/templates/classmap.jinja
micromlgen/templates/dot.jinja
micromlgen/templates/testset.jinja
micromlgen/templates/trainset.jinja
micromlgen/templates/vote.jinja
micromlgen/templates/__pycache__/__init__.cpython-37.pyc
micromlgen/templates/decisiontree/__init__.py
micromlgen/templates/decisiontree/decisiontree.jinja
micromlgen/templates/decisiontree/decisiontree_regressor.jinja
micromlgen/templates/decisiontree/tree.jinja
micromlgen/templates/decisiontree/tree_regressor.jinja
micromlgen/templates/gaussiannb/__init__.py
micromlgen/templates/gaussiannb/gaussiannb.jinja
micromlgen/templates/gaussiannb/vote.jinja
micromlgen/templates/linearregression/__init__.py
micromlgen/templates/linearregression/linearregression.jinja
micromlgen/templates/logisticregression/__init__.py
micromlgen/templates/logisticregression/logisticregression.jinja
micromlgen/templates/logisticregression/vote.arduino.jinja
micromlgen/templates/logisticregression/vote.attiny.jinja
micromlgen/templates/pca/__init__.py
micromlgen/templates/pca/pca.jinja
micromlgen/templates/principalfft/__init__.py
micromlgen/templates/principalfft/lut.jinja
micromlgen/templates/principalfft/lut_bool.jinja
micromlgen/templates/principalfft/principalfft.jinja
micromlgen/templates/randomforest/__init__.py
micromlgen/templates/randomforest/randomforest.jinja
micromlgen/templates/randomforest/randomforest_regressor.jinja
micromlgen/templates/randomforest/tree.jinja
micromlgen/templates/randomforest/tree_regressor.jinja
micromlgen/templates/rvm/__init__.py
micromlgen/templates/rvm/rvm.jinja
micromlgen/templates/sefr/__init__.py
micromlgen/templates/sefr/dot.jinja
micromlgen/templates/sefr/sefr.jinja
micromlgen/templates/svm/__init__.py
micromlgen/templates/svm/svm.jinja
micromlgen/templates/svm/computations/class.jinja
micromlgen/templates/svm/computations/decisions.jinja
micromlgen/templates/svm/computations/votes.jinja
micromlgen/templates/svm/computations/kernel/arduino.jinja
micromlgen/templates/svm/computations/kernel/attiny.jinja
micromlgen/templates/svm/kernel/arduino.jinja
micromlgen/templates/svm/kernel/attiny.jinja
micromlgen/templates/svm/kernel/kernel.jinja
micromlgen/templates/wifiindoorpositioning/__init__.py
micromlgen/templates/wifiindoorpositioning/wifiindoorpositioning.jinja
micromlgen/templates/xgboost/__init__.py
micromlgen/templates/xgboost/tree.jinja
micromlgen/templates/xgboost/xgboost.jinja


================================================
FILE: README.md
================================================
# Introducing MicroML

MicroML is an attempt to bring Machine Learning algorithms to microcontrollers.
Please refer to [this blog post](https://eloquentarduino.github.io/2019/11/you-can-run-machine-learning-on-arduino/)
for an introduction to the topic.

**This repository is archived because it does what it was meant to do: generate C++ code for the supported models. I'm focusing on a more comprehensive library ([https://github.com/eloquentarduino/tinyml4all-python/](https://github.com/eloquentarduino/tinyml4all-python/)), so this will not receive updates**.

## Install

`pip install micromlgen`

## Supported classifiers

`micromlgen` can port many types of classifiers to plain C:

 - DecisionTree
 - RandomForest
 - XGBoost
 - GaussianNB
 - Support Vector Machines (SVC and OneClassSVM)
 - Relevance Vector Machines (from the `skbayes.rvm_ard_models` package)
 - SEFR
 - PCA

```python
from micromlgen import port
from sklearn.svm import SVC
from sklearn.datasets import load_iris


if __name__ == '__main__':
    iris = load_iris()
    X = iris.data
    y = iris.target
    clf = SVC(kernel='linear').fit(X, y)
    print(port(clf))
```

You may pass a classmap to get readable class names in the ported code

```python
from micromlgen import port
from sklearn.svm import SVC
from sklearn.datasets import load_iris


if __name__ == '__main__':
    iris = load_iris()
    X = iris.data
    y = iris.target
    clf = SVC(kernel='linear').fit(X, y)
    print(port(clf, classmap={
        0: 'setosa',
        1: 'versicolor',
        2: 'virginica'
    }))
```
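If you already have the ordered list of label names (for iris, `load_iris().target_names` yields this order), you can build the classmap programmatically instead of writing it by hand:

```python
# index i must match the integer class label i used during training
labels = ['setosa', 'versicolor', 'virginica']
classmap = {idx: name for idx, name in enumerate(labels)}
print(classmap)  # {0: 'setosa', 1: 'versicolor', 2: 'virginica'}
```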

## PCA

It can export a PCA transformer.

```python
from sklearn.decomposition import PCA
from sklearn.datasets import load_iris
from micromlgen import port

if __name__ == '__main__':
    X = load_iris().data
    pca = PCA(n_components=2, whiten=False).fit(X)
    
    print(port(pca))
```
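Under the hood, the generated `transform()` projects the mean-centered input onto each principal component; a pure-Python sketch of the same arithmetic (toy numbers, not from a real fit):

```python
def pca_transform(x, components, mean):
    """Mirror of the generated C++ transform():
    u[i] = sum_j (x[j] - mean[j]) * components[i][j]"""
    return [sum((xj - mj) * cj for xj, mj, cj in zip(x, mean, comp))
            for comp in components]

# toy 2-component projection of a 3-feature vector (made-up numbers)
components = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
mean = [0.5, 0.5, 0.5]
print(pca_transform([1.0, 2.0, 3.0], components, mean))  # [0.5, 1.5]
```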

## SEFR

Read the post about [SEFR](https://eloquentarduino.github.io/2020/07/sefr-a-fast-linear-time-classifier-for-ultra-low-power-devices/).

```shell script
pip install sefr
```

```python
from sefr import SEFR
from micromlgen import port


# X, y: your training data (SEFR is a binary classifier)
clf = SEFR()
clf.fit(X, y)
print(port(clf))
```

## DecisionTreeRegressor and RandomForestRegressor

```bash
pip install micromlgen>=1.1.26
```

```python
from sklearn.datasets import load_boston  # removed in scikit-learn >= 1.2
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from micromlgen import port


if __name__ == '__main__':
    X, y = load_boston(return_X_y=True)
    tree = DecisionTreeRegressor(max_depth=10, min_samples_leaf=5).fit(X, y)
    forest = RandomForestRegressor(n_estimators=10, max_depth=10, min_samples_leaf=5).fit(X, y)

    with open('DecisionTreeRegressor.h', 'w') as file:
        file.write(port(tree))

    with open('RandomForestRegressor.h', 'w') as file:
        file.write(port(forest))
```

```cpp
// Arduino sketch
#include "RandomForestRegressor.h"

Eloquent::ML::Port::RandomForestRegressor regressor;
float X[] = {...};


void setup() {
}

void loop() {
    float y_pred = regressor.predict(X);
}
```


================================================
FILE: micromlgen/__init__.py
================================================
import micromlgen.platforms as platforms
from micromlgen.micromlgen import port
from micromlgen.utils import port_testset, port_trainset
from micromlgen.wifiindoorpositioning import port_wifi_indoor_positioning


================================================
FILE: micromlgen/decisiontreeclassifier.py
================================================
from micromlgen.utils import jinja, check_type


def is_decisiontree(clf):
    """Test if classifier can be ported"""
    return check_type(clf, 'DecisionTreeClassifier')


def port_decisiontree(clf, **kwargs):
    """Port sklearn's DecisionTreeClassifier"""
    return jinja('decisiontree/decisiontree.jinja', {
        'left': clf.tree_.children_left,
        'right': clf.tree_.children_right,
        'features': clf.tree_.feature,
        'thresholds': clf.tree_.threshold,
        'classes': clf.tree_.value,
        'i': 0
    }, {
        'classname': 'DecisionTree'
    }, **kwargs)

================================================
FILE: micromlgen/decisiontreeregressor.py
================================================
from micromlgen.utils import jinja, check_type


def is_decisiontree_regressor(clf):
    """
    Test if classifier can be ported
    :param clf:
    :return: bool
    """
    return check_type(clf, 'DecisionTreeRegressor')


def port_decisiontree_regressor(clf, **kwargs):
    """
    Port sklearn's DecisionTreeRegressor
    :param clf:
    :return: str ported classifier
    """
    return jinja('decisiontree/decisiontree_regressor.jinja', {
        'dtype': 'float',
        'left': clf.tree_.children_left,
        'right': clf.tree_.children_right,
        'features': clf.tree_.feature,
        'thresholds': clf.tree_.threshold,
        'values': clf.tree_.value,
        'i': 0
    }, {
        'classname': 'DecisionTreeRegressor'
    }, **kwargs)

================================================
FILE: micromlgen/gaussiannb.py
================================================
from micromlgen.utils import jinja, check_type


def is_gaussiannb(clf):
    """Test if classifier can be ported"""
    return check_type(clf, 'GaussianNB')


def port_gaussiannb(clf, **kwargs):
    """Port sklearn's GaussianNB"""
    return jinja('gaussiannb/gaussiannb.jinja', {
        'sigma': clf.sigma_,
        'theta': clf.theta_,
        'prior': clf.class_prior_,
        'classes': clf.classes_,
        'n_classes': len(clf.classes_)
    }, {
        'classname': 'GaussianNB'
    }, **kwargs)

================================================
FILE: micromlgen/linear_regression.py
================================================
from micromlgen.utils import jinja, check_type


def is_linear_regression(clf):
    """Test if classifier can be ported"""
    return check_type(clf, 'LinearRegression')


def port_linear_regression(clf, classname=None, **kwargs):
    """Port Linear Regression"""
    return jinja('linearregression/linearregression.jinja', {
        'coefs': clf.coef_,
        'intercept': clf.intercept_,
        'dimension': len(clf.coef_),
        'dtype': 'float'
    }, {
        'classname': 'LinearRegression'
    }, **kwargs)

================================================
FILE: micromlgen/logisticregression.py
================================================
from micromlgen.utils import jinja, check_type


def is_logisticregression(clf):
    """Test if classifier can be ported"""
    return check_type(clf, 'LogisticRegression')


def port_logisticregression(clf, **kwargs):
    """Port sklearn's LogisticRegression"""
    return jinja('logisticregression/logisticregression.jinja', {
        'weights': clf.coef_,
        'intercept': clf.intercept_,
        'classes': clf.classes_,
        'n_classes': len(clf.classes_)
    }, {
        'classname': 'LogisticRegression'
    }, **kwargs)

================================================
FILE: micromlgen/micromlgen.py
================================================
from micromlgen import platforms
from micromlgen.svm import is_svm, port_svm
from micromlgen.rvm import is_rvm, port_rvm
from micromlgen.sefr import is_sefr, port_sefr
from micromlgen.decisiontreeclassifier import is_decisiontree, port_decisiontree
from micromlgen.decisiontreeregressor import is_decisiontree_regressor, port_decisiontree_regressor
from micromlgen.randomforestclassifier import is_randomforest, port_randomforest
from micromlgen.randomforestregressor import is_randomforest_regressor, port_randomforest_regressor
from micromlgen.logisticregression import is_logisticregression, port_logisticregression
from micromlgen.gaussiannb import is_gaussiannb, port_gaussiannb
from micromlgen.pca import is_pca, port_pca
from micromlgen.principalfft import is_principalfft, port_principalfft
from micromlgen.linear_regression import is_linear_regression, port_linear_regression
from micromlgen.xgboost import is_xgboost, port_xgboost


def port(
        clf,
        classname=None,
        classmap=None,
        platform=platforms.ARDUINO,
        precision=None,
        **kwargs):
    """Port a classifier to plain C++"""
    assert platform in platforms.ALL, 'Unknown platform %s. Use one of %s' % (platform, ', '.join(platforms.ALL))

    if is_svm(clf):
        return port_svm(**locals())
    elif is_rvm(clf):
        return port_rvm(**locals())
    elif is_sefr(clf):
        return port_sefr(**locals())
    elif is_decisiontree(clf):
        return port_decisiontree(**locals())
    elif is_randomforest(clf):
        return port_randomforest(**locals())
    elif is_logisticregression(clf):
        return port_logisticregression(**locals())
    elif is_gaussiannb(clf):
        return port_gaussiannb(**locals())
    elif is_pca(clf):
        return port_pca(**locals())
    elif is_principalfft(clf):
        return port_principalfft(**locals(), **kwargs)
    elif is_linear_regression(clf):
        return port_linear_regression(**locals(), **kwargs)
    elif is_xgboost(clf):
        return port_xgboost(**locals(), **kwargs)
    elif is_decisiontree_regressor(clf):
        return port_decisiontree_regressor(**locals(), **kwargs)
    elif is_randomforest_regressor(clf):
        return port_randomforest_regressor(**locals(), **kwargs)
    raise TypeError('clf MUST be one of %s' % ', '.join(platforms.ALLOWED_CLASSIFIERS))

================================================
FILE: micromlgen/pca.py
================================================
from micromlgen.utils import jinja, check_type


def is_pca(clf):
    """Test if classifier can be ported"""
    return check_type(clf, 'PCA')


def port_pca(clf, **kwargs):
    """Port a PCA"""
    return jinja('pca/pca.jinja', {
        'arrays': {
            'components': clf.components_,
            'mean': clf.mean_
        },
    }, {
        'classname': 'PCA'
    }, **kwargs)

================================================
FILE: micromlgen/platforms.py
================================================
ARDUINO = 'arduino'
ATTINY = 'attiny'
ALL = [
    ARDUINO,
    ATTINY
]

ALLOWED_CLASSIFIERS = [
    'SVC',
    'OneClassSVM',
    'RVC',
    'SEFR',
    'DecisionTreeClassifier',
    'DecisionTreeRegressor',
    'RandomForestClassifier',
    'RandomForestRegressor',
    'GaussianNB',
    'LogisticRegression',
    'PCA',
    'PrincipalFFT',
    'LinearRegression',
    'XGBClassifier'
]

================================================
FILE: micromlgen/principalfft.py
================================================
from micromlgen.utils import jinja, check_type
from math import pi


def is_principalfft(clf):
    """Test if classifier can be ported"""
    return check_type(clf, 'PrincipalFFT') or check_type(clf, 'KFFT')


def port_principalfft(clf, optimize_sin=False, lookup_cos=None, lookup_sin=None, **kwargs):
    """Port PrincipalFFT classifier"""
    return jinja('principalfft/principalfft.jinja', {
        'fft': clf,
        'PI': pi,
        'size': len(clf.idx),
        'optmize_sin': optimize_sin,
        'lookup_cos': lookup_cos,
        'lookup_sin': lookup_sin,
    }, **kwargs)

================================================
FILE: micromlgen/randomforestclassifier.py
================================================
from micromlgen.utils import jinja, check_type


def is_randomforest(clf):
    """Test if classifier can be ported"""
    return check_type(clf, 'RandomForestClassifier')


def port_randomforest(clf, **kwargs):
    """Port sklearn's RandomForestClassifier"""
    return jinja('randomforest/randomforest.jinja', {
        'n_classes': clf.n_classes_,
        'trees': [{
            'left': clf.tree_.children_left,
            'right': clf.tree_.children_right,
            'features': clf.tree_.feature,
            'thresholds': clf.tree_.threshold,
            'classes': clf.tree_.value,
        } for clf in clf.estimators_]
    }, {
        'classname': 'RandomForest'
    }, **kwargs)

================================================
FILE: micromlgen/randomforestregressor.py
================================================
from micromlgen.utils import jinja, check_type


def is_randomforest_regressor(clf):
    """
    Test if classifier can be ported
    """
    return check_type(clf, 'RandomForestRegressor')


def port_randomforest_regressor(clf, **kwargs):
    """
    Port sklearn's RandomForestRegressor
    """
    return jinja('randomforest/randomforest_regressor.jinja', {
        'dtype': 'float',
        'n_estimators': clf.n_estimators,
        'trees': [{
            'left': clf.tree_.children_left,
            'right': clf.tree_.children_right,
            'features': clf.tree_.feature,
            'thresholds': clf.tree_.threshold,
            'values': clf.tree_.value,
        } for clf in clf.estimators_]
    }, {
        'classname': 'RandomForestRegressor'
    }, **kwargs)

================================================
FILE: micromlgen/rvm.py
================================================
from micromlgen.utils import jinja, check_type


def is_rvm(clf):
    """Test if classifier can be ported"""
    return check_type(clf, 'RVC')


def port_rvm(clf, **kwargs):
    """Port a RVM classifier"""
    return jinja('rvm/rvm.jinja', {
        'n_classes': len(clf.intercept_),
        'kernel': {
            'type': clf.kernel,
            'gamma': clf.gamma,
            'coef0': clf.coef0,
            'degree': clf.degree
        },
        'sizes': {
            'features': clf.relevant_vectors_[0].shape[1],
        },
        'arrays': {
            'vectors': clf.relevant_vectors_,
            'coefs': clf.coef_,
            'actives': clf.active_,
            'intercepts': clf.intercept_,
            'mean': clf._x_mean,
            'std': clf._x_std
        },
    }, {
        'classname': 'RVC'
    }, **kwargs)


================================================
FILE: micromlgen/sefr.py
================================================
from micromlgen.utils import jinja, check_type


def is_sefr(clf):
    """Test if classifier can be ported"""
    return check_type(clf, 'SEFR')


def port_sefr(clf, classname=None, **kwargs):
    """Port SEFR classifier"""
    return jinja('sefr/sefr.jinja', {
        'weights': clf.weights,
        'bias': clf.bias,
        'dimension': len(clf.weights),
    }, {
        'classname': 'SEFR'
    }, **kwargs)

================================================
FILE: micromlgen/svm.py
================================================
from micromlgen.utils import jinja, check_type


def is_svm(clf):
    """Test if classifier can be ported"""
    return check_type(clf, 'SVC', 'OneClassSVM')


def port_svm(clf, **kwargs):
    """Port a SVC / OneClassSVM classifier"""
    assert isinstance(clf.gamma, float), 'You probably didn\'t set an explicit value for gamma: 0.001 is a good default'

    support_v = clf.support_vectors_
    n_classes = len(clf.n_support_)

    return jinja('svm/svm.jinja', {
        'kernel': {
            'type': clf.kernel,
            'gamma': clf.gamma,
            'coef0': clf.coef0,
            'degree': clf.degree
        },
        'sizes': {
            'features': len(support_v[0]),
            'vectors': len(support_v),
            'classes': n_classes,
            'decisions': n_classes * (n_classes - 1) // 2,
            'supports': clf.n_support_
        },
        'arrays': {
            'supports': support_v,
            'intercepts': clf.intercept_,
            'coefs': clf.dual_coef_
        }
    }, {
        'classname': 'OneClassSVM' if check_type(clf, 'OneClassSVM') else 'SVM'
    }, **kwargs)
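The `decisions` size computed above is the number of one-vs-one decision functions a multi-class SVC trains, one per unordered pair of classes; a quick sketch of the arithmetic:

```python
def n_decisions(n_classes):
    # one-vs-one: one decision function per unordered pair of classes
    return n_classes * (n_classes - 1) // 2

print([n_decisions(n) for n in (2, 3, 4)])  # [1, 3, 6]
```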

================================================
FILE: micromlgen/templates/__init__.py
================================================
# here just to force pip to copy this folder

================================================
FILE: micromlgen/templates/_skeleton.jinja
================================================
#pragma once
#include <cstdarg>

namespace Eloquent {
    namespace ML {
        namespace Port {
            class {{ classname }} {
                public:

                    /**
                     * Predict class for features vector
                     */
                    {{ dtype|default("int", true) }} predict(float *x) {
                        {% block predict %}{% endblock %}
                    }

                    {% include 'classmap.jinja' %}

                    {% block public %}{% endblock %}

                protected:

                    {% block protected %}{% endblock %}
            };
        }
    }
}

================================================
FILE: micromlgen/templates/classmap.jinja
================================================
{% if classmap is not none %}

/**
 * Predict readable class name
 */
const char* predictLabel(float *x) {
    return idxToLabel(predict(x));
}

/**
 * Convert class idx to readable name
 */
const char* idxToLabel(uint8_t classIdx) {
    switch (classIdx) {
        {% for idx, name in classmap.items() %}
            case {{ idx }}:
                return "{{ name }}";
        {% endfor %}
        default:
            return "Houston we have a problem";
    }
}

{% endif %}

================================================
FILE: micromlgen/templates/decisiontree/__init__.py
================================================
# here just to force pip to copy this folder


================================================
FILE: micromlgen/templates/decisiontree/decisiontree.jinja
================================================
{% extends '_skeleton.jinja' %}

{% block predict %}
    {% include 'decisiontree/tree.jinja' %}
{% endblock %}

================================================
FILE: micromlgen/templates/decisiontree/decisiontree_regressor.jinja
================================================
{% extends '_skeleton.jinja' %}

{% block predict %}
    {% include 'decisiontree/tree_regressor.jinja' %}
{% endblock %}

================================================
FILE: micromlgen/templates/decisiontree/tree.jinja
================================================
{% if left[i] != right[i] %}
    if (x[{{ features[i] }}] <= {{ thresholds[i] }}) {
        {% with i = left[i] %}
            {% include 'decisiontree/tree.jinja' %}
        {% endwith %}
    }
    else {
        {% with i = right[i] %}
            {% include 'decisiontree/tree.jinja' %}
        {% endwith %}
    }
{% else %}
    return {{ classes[i].argmax() }};
{% endif %}
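This template unrolls sklearn's flat tree arrays (`children_left`, `children_right`, `feature`, `threshold`, `value`) into nested `if`/`else` statements via recursive `include`; a pure-Python sketch of the same unrolling (toy arrays, not from a real tree):

```python
def unroll(left, right, feature, threshold, value, i=0, depth=0):
    """Emit nested C if/else from sklearn's flat tree arrays,
    mirroring templates/decisiontree/tree.jinja."""
    pad = '    ' * depth
    if left[i] != right[i]:  # internal node: recurse into both children
        return ([pad + 'if (x[%d] <= %s) {' % (feature[i], threshold[i])]
                + unroll(left, right, feature, threshold, value, left[i], depth + 1)
                + [pad + '} else {']
                + unroll(left, right, feature, threshold, value, right[i], depth + 1)
                + [pad + '}'])
    # leaf node: the class with the highest sample count wins (argmax)
    counts = value[i]
    return [pad + 'return %d;' % counts.index(max(counts))]

# toy stump: one split on feature 0 at threshold 0.5
code = unroll(left=[1, -1, -1], right=[2, -1, -1],
              feature=[0, -2, -2], threshold=[0.5, -2, -2],
              value=[[0, 0], [10, 2], [1, 9]])
print('\n'.join(code))
```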

================================================
FILE: micromlgen/templates/decisiontree/tree_regressor.jinja
================================================
{% if left[i] != right[i] %}
    if (x[{{ features[i] }}] <= {{ thresholds[i] }}) {
        {% with i = left[i] %}
            {% include 'decisiontree/tree_regressor.jinja' %}
        {% endwith %}
    }
    else {
        {% with i = right[i] %}
            {% include 'decisiontree/tree_regressor.jinja' %}
        {% endwith %}
    }
{% else %}
    return {{ values[i].mean() }}f;
{% endif %}

================================================
FILE: micromlgen/templates/dot.jinja
================================================
{% if platform == 'attiny' %}
    {% set signature = 'float w[%d]' % dimension  %}
    {% set wi = 'w[i]' %}
    {% set preamble = '' %}
{% else %}
    {% set signature = '...' %}
    {% set wi = 'va_arg(w, double)' %}
    {% set preamble = 'va_list w;\nva_start(w, %d);' % dimension %}
{% endif %}

/**
 * Compute dot product
 */
float dot(float *x, {{ signature }}) {
    {{ preamble }}
    float dot = 0.0;

    for (uint16_t i = 0; i < {{ dimension }}; i++) {
        const float wi = {{ wi }};
        dot += {{ expr }};
    }

    return dot;
}

================================================
FILE: micromlgen/templates/gaussiannb/__init__.py
================================================
# here just to force pip to copy this folder

================================================
FILE: micromlgen/templates/gaussiannb/gaussiannb.jinja
================================================
{% extends '_skeleton.jinja' %}

{% block predict %}
    float votes[{{ classes|length }}] = { 0.0f };

    {% include 'gaussiannb/vote.jinja' %}
    {% include 'vote.jinja' %}
{% endblock %}

{% block protected %}
    /**
     * Compute gaussian value
     */
    float gauss(float *x, float *theta, float *sigma) {
        float gauss = 0.0f;

        for (uint16_t i = 0; i < {{ theta[0]|length }}; i++) {
            gauss += log(sigma[i]);
            gauss += abs(x[i] - theta[i]) / sigma[i];
        }

        return gauss;
    }
{% endblock %}

================================================
FILE: micromlgen/templates/gaussiannb/vote.jinja
================================================
float theta[{{ theta[0]|length }}] = { 0 };
float sigma[{{ sigma[0]|length }}] = { 0 };

{% for i, (t, s) in f.enumerate(f.zip(theta, sigma)) %}
    {% for j, tj in f.enumerate(t) %}theta[{{ j }}] = {{ f.round(tj) }}; {% endfor %}
    {% for j, sj in f.enumerate(s) %}sigma[{{ j }}] = {{ f.round(sj) }}; {% endfor %}
    votes[{{ i }}] = {{ f.round(prior[i]) }} - gauss(x, theta, sigma);
{% endfor %}
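Taken together with `gauss()` in `gaussiannb.jinja`, this template scores each class as `prior - gauss(x, theta, sigma)` and predicts the argmax; a pure-Python mirror of that computation (note the template uses `|x - theta| / sigma` rather than the exact Gaussian log-likelihood):

```python
from math import log

def gnb_votes(x, theta, sigma, prior):
    """Mirror of gaussiannb/vote.jinja + gaussiannb.jinja:
    vote[i] = prior[i] - sum_j(log(sigma[i][j]) + |x[j] - theta[i][j]| / sigma[i][j])"""
    def gauss(t, s):
        return sum(log(si) + abs(xi - ti) / si for xi, ti, si in zip(x, t, s))
    return [p - gauss(t, s) for t, s, p in zip(theta, sigma, prior)]

# toy 1-feature, 2-class model: class 0 matches exactly -> highest vote
votes = gnb_votes(x=[1.0], theta=[[1.0], [2.0]], sigma=[[1.0], [1.0]], prior=[0.0, 0.0])
print(votes)  # [0.0, -1.0]
```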

================================================
FILE: micromlgen/templates/linearregression/__init__.py
================================================
# here just to force pip to copy this folder

================================================
FILE: micromlgen/templates/linearregression/linearregression.jinja
================================================
{% extends '_skeleton.jinja' %}

{% block predict %}
    return dot(x, {{ f.to_array(coefs) }}) + {{ intercept }};
{% endblock %}

{% block protected %}
    {% with expr = 'x[i] * wi' %}
        {% include 'dot.jinja' %}
    {% endwith %}
{% endblock %}

================================================
FILE: micromlgen/templates/logisticregression/__init__.py
================================================
# here just to force pip to copy this folder

================================================
FILE: micromlgen/templates/logisticregression/logisticregression.jinja
================================================
{% extends '_skeleton.jinja' %}

{% block predict %}
    float votes[{{ classes|length }}] = { {% for i, x in f.enumerate(intercept) %}{% if i > 0 %},{% endif %}{{ f.round(x) }} {% endfor %} };

    {% include 'logisticregression/vote.%s.jinja' % platform %}
    {% include 'vote.jinja' %}
{% endblock %}

{% block protected %}
    {% with dimension = weights[0]|length, expr = 'x[i] * wi' %}
        {% include 'dot.jinja' %}
    {% endwith %}
{% endblock %}

================================================
FILE: micromlgen/templates/logisticregression/vote.arduino.jinja
================================================
{% for i, w in f.enumerate(weights) %}
    votes[{{ i }}] += dot(x, {% for j, wj in f.enumerate(w) %} {% if j > 0 %},{% endif %} {{ f.round(wj) }} {% endfor %});
{% endfor %}

================================================
FILE: micromlgen/templates/logisticregression/vote.attiny.jinja
================================================
float w[{{ weights[0]|length }}] = { 0 };

{% for i, w in f.enumerate(weights) %}
    {% for j, wj in f.enumerate(w) %}w[{{ j }}] = {{ f.round(wj) }}; {% endfor %}
    votes[{{ i }}] += dot(x, w);
{% endfor %}

================================================
FILE: micromlgen/templates/pca/__init__.py
================================================
# here just to force pip to copy this folder

================================================
FILE: micromlgen/templates/pca/pca.jinja
================================================
#pragma once
#include <cstdarg>

namespace Eloquent {
    namespace ML {
        namespace Port {

            class {{ classname }} {
                public:

                    /**
                     * Apply dimensionality reduction
                     * @warn Will override the source vector if no dest provided!
                     */
                    void transform(float *x, float *dest = NULL) {
                        static float u[{{ arrays.components|length }}] = { 0 };

                        {% for i, component in f.enumerate(arrays.components) %}
                        u[{{ i }}] = dot(x, {% for j, cj in f.enumerate(component) %} {% if j > 0 %},{% endif %} {{ f.round(cj) }} {% endfor %});
                        {% endfor %}

                        memcpy(dest != NULL ? dest : x, u, sizeof(float) * {{ arrays.components|length }});
                    }

                protected:

                    /**
                     * Compute dot product with varargs
                     */
                    float dot(float *x, ...) {
                        va_list w;
                        va_start(w, x);

                        static float mean[] = { {% for i, m in f.enumerate(arrays.mean) %}{% if i > 0 %},{% endif %} {{ f.round(m) }} {% endfor %} };
                        float dot = 0.0;

                        for (uint16_t i = 0; i < {{ arrays.components[0]|length }}; i++) {
                            dot += (x[i] - mean[i]) * va_arg(w, double);
                        }

                        va_end(w);
                        return dot;
                    }
            };
        }
    }
}

================================================
FILE: micromlgen/templates/principalfft/__init__.py
================================================
# here just to force pip to copy this folder

================================================
FILE: micromlgen/templates/principalfft/lut.jinja
================================================
const float {{ op }}LUT[{{ size }}][{{ fft.original_size }}] = {
    {% for i in range(0, size) %}
        { {% for n in range(0, fft.original_size) %} {{ math[op](2 * PI / fft.original_size * fft.idx[i] * n) }}, {% endfor %} },
    {% endfor %}
};

================================================
FILE: micromlgen/templates/principalfft/lut_bool.jinja
================================================
const bool {{ op }}LUT[{{ size }}][{{ fft.original_size }}] = {
    {% for i in range(0, size) %}
        { {% for n in range(0, fft.original_size) %} {{ "true" if math[op](2 * PI / fft.original_size * fft.idx[i] * n) > 0 else "false" }}, {% endfor %} },
    {% endfor %}
};

================================================
FILE: micromlgen/templates/principalfft/principalfft.jinja
================================================
void principalFFT(float *features, float *fft) {
    // apply principal FFT (naive implementation for the top N frequencies only)
    const int topFrequencies[] = { {{ f.to_array(fft.idx, True) }} };

    {% if lookup_cos %}
        {% with op="cos" %}
            {% include "principalfft/lut.jinja" %}
        {% endwith %}

        {# sin lookup is available only if cos lookup is used #}
        {% if lookup_sin %}
            {% with op="sin" %}
                {% include "principalfft/lut.jinja" %}
            {% endwith %}
        {% else %}
            {% with op="sin" %}
                {% include "principalfft/lut_bool.jinja" %}
            {% endwith %}
        {% endif %}
    {% endif %}

    for (int i = 0; i < {{ size }}; i++) {
        const int k = topFrequencies[i];
        {% if not lookup_cos %}
            const float harmonic = {{ 2 * PI / fft.original_size }} * k;
        {% endif %}
        float re = 0;
        float im = 0;

        // optimized case
        if (k == 0) {
            for (int n = 0; n < {{ fft.original_size }}; n++) {
                re += features[n];
            }
        }
        else {
            for (int n = 0; n < {{ fft.original_size }}; n++) {
                {% if lookup_cos %}
                    const float cos_n = cosLUT[i][n];

                    {% if lookup_sin %}
                        const float sin_n = sinLUT[i][n];
                    {% else %}
                        const float sin_n = (sinLUT[i][n] ? 1 : -1) * sqrt(1 - cos_n * cos_n);
                    {% endif %}
                {% else %}
                    const float harmonicN = harmonic * n;
                    const float cos_n = cos(harmonicN);
                    const float sin_n = sin(harmonicN);
                {% endif %}

                re += features[n] * cos_n;
                im -= features[n] * sin_n;
            }
        }

        fft[i] = sqrt(re * re + im * im);
    }
}
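
The template above computes, for each of the top-N frequency bins, a single naive DFT term (re/im pair) and its magnitude, instead of a full FFT. A minimal pure-Python sketch of the same per-bin math (the function name `principal_fft` and the ramp signal are illustrative, not part of the package API):

```python
from math import cos, sin, pi, sqrt

def principal_fft(features, top_frequencies):
    # Naive DFT restricted to the selected bins, mirroring the
    # generated C code: O(N) work per requested frequency.
    n_samples = len(features)
    magnitudes = []
    for k in top_frequencies:
        re = im = 0.0
        for n, x in enumerate(features):
            angle = 2 * pi / n_samples * k * n
            re += x * cos(angle)
            im -= x * sin(angle)
        magnitudes.append(sqrt(re * re + im * im))
    return magnitudes

# bin 0 (the DC term) is just the plain sum of the signal,
# which is why the template special-cases k == 0
mags = principal_fft([0, 1, 2, 3], [0, 1])
```

The `k == 0` branch in the template is the same optimization: for the DC bin, `cos` is always 1 and `sin` always 0, so the magnitude is `|sum(features)|`.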

================================================
FILE: micromlgen/templates/randomforest/__init__.py
================================================
# here just to force pip to copy this folder

================================================
FILE: micromlgen/templates/randomforest/randomforest.jinja
================================================
{% extends '_skeleton.jinja' %}

{% block predict %}
    uint8_t votes[{{ n_classes }}] = { 0 };

    {% for k, tree in f.enumerate(trees) %}
        {% with i = 0 %}
            // tree #{{ k + 1 }}
            {% include 'randomforest/tree.jinja' %}
        {% endwith %}
    {% endfor %}

    {% include 'vote.jinja' %}
{% endblock %}

================================================
FILE: micromlgen/templates/randomforest/randomforest_regressor.jinja
================================================
{% extends '_skeleton.jinja' %}

{% block predict %}
    float y_pred = 0;

    {% for k, tree in f.enumerate(trees) %}
        {% with i = 0 %}
            // tree #{{ k + 1 }}
            {% include 'randomforest/tree_regressor.jinja' %}
        {% endwith %}
    {% endfor %}

    return y_pred / {{ n_estimators }};
{% endblock %}

================================================
FILE: micromlgen/templates/randomforest/tree.jinja
================================================
{% if tree['left'][i] != tree['right'][i] %}
    if (x[{{ tree['features'][i] }}] <= {{ tree['thresholds'][i] }}) {
        {% with i = tree['left'][i] %}
            {% include 'randomforest/tree.jinja' %}
        {% endwith %}
    }
    else {
        {% with i = tree['right'][i] %}
            {% include 'randomforest/tree.jinja' %}
        {% endwith %}
    }
{% else %}
    votes[{{ tree['classes'][i].argmax() }}] += 1;
{% endif %}

================================================
FILE: micromlgen/templates/randomforest/tree_regressor.jinja
================================================
{% if tree['left'][i] != tree['right'][i] %}
    if (x[{{ tree['features'][i] }}] <= {{ tree['thresholds'][i] }}) {
        {% with i = tree['left'][i] %}
            {% include 'randomforest/tree_regressor.jinja' %}
        {% endwith %}
    }
    else {
        {% with i = tree['right'][i] %}
            {% include 'randomforest/tree_regressor.jinja' %}
        {% endwith %}
    }
{% else %}
    y_pred += {{ tree['values'][i].mean() }};
{% endif %}

================================================
FILE: micromlgen/templates/rvm/__init__.py
================================================
# here just to force pip to copy this folder

================================================
FILE: micromlgen/templates/rvm/rvm.jinja
================================================
{% extends '_skeleton.jinja' %}

{% block predict %}
    float votes[{{ arrays.vectors|length }}] = { 0 };
    {% for i, (rv, cf, act, b) in f.enumerate(f.zip(arrays.vectors, arrays.coefs, arrays.actives, arrays.intercepts)) %}
        {% if rv.shape[0] == 0 %}
            votes[{{ i }}] = {{ b }};
        {% else %}
            votes[{{ i }}] = (compute_kernel(x, {% for vi in rv[0] %}{% if loop.index > 1 %},{% endif %} {{ f.round(vi) }}{% endfor %}) - {{ f.round(arrays.mean[act][0]) }} ) * {{ f.round(cf[act][0] / arrays.std[act][0]) }} + {{ f.round(b) }};
        {% endif %}
    {% endfor %}

    {% include 'vote.jinja' %}
{% endblock %}

{% block protected %}
    {% include 'svm/kernel/%s.jinja' % platform %}
{% endblock %}

================================================
FILE: micromlgen/templates/sefr/__init__.py
================================================
# here just to force pip to copy this folder

================================================
FILE: micromlgen/templates/sefr/dot.jinja
================================================
/**
 * Compute dot product between features vector and classifier weights
 */
float dot(float *x, ...) {
    va_list w;
    va_start(w, x);

    float kernel = 0.0;

    for (uint16_t i = 0; i < {{ weights|length }}; i++) {
        kernel += x[i] * va_arg(w, double);
    }

    va_end(w);
    return kernel;
}

================================================
FILE: micromlgen/templates/sefr/sefr.jinja
================================================
{% extends '_skeleton.jinja' %}

{% block predict %}
    return dot(x, {% for i, wi in f.enumerate(weights) %} {% if i > 0 %},{% endif %} {{ f.round(wi) }} {% endfor %}) <= {{ bias }} ? 0 : 1;
{% endblock %}

{% block protected %}
    {% with expr = 'x[i] * wi' %}
        {% include 'dot.jinja' %}
    {% endwith %}
{% endblock %}

================================================
FILE: micromlgen/templates/svm/__init__.py
================================================
# here just to force pip to copy this folder

================================================
FILE: micromlgen/templates/svm/computations/class.jinja
================================================
int val = votes[0];
int idx = 0;

for (int i = 1; i < {{ sizes.classes }}; i++) {
    if (votes[i] > val) {
        val = votes[i];
        idx = i;
    }
}

return idx;

================================================
FILE: micromlgen/templates/svm/computations/decisions.jinja
================================================
{% set helpers = {'ii': 0} %}

{% for i in range(0, sizes.classes) %}
    {% for j in range(i + 1, sizes.classes) %}
        {% set start_i = sizes.supports[:i].sum() %}
        {% set start_j = sizes.supports[:j].sum() %}
        decisions[{{ helpers.ii }}] = {{ f.round(arrays.intercepts[helpers.ii]) }}
        {% for k in range(start_i, start_i + sizes.supports[i]) %}
            {% with coef=arrays.coefs[j-1][k] %}
                {% if coef == 1 %}
                    + kernels[{{ k }}]
                {% elif coef == -1 %}
                    - kernels[{{ k }}]
                {% elif coef != 0 %}
                    + kernels[{{ k }}] * {{ f.round(coef) }}
                {% endif %}
            {% endwith %}
        {% endfor %}
        {% for k in range(start_j, start_j + sizes.supports[j]) %}
            {% with coef=arrays.coefs[i][k] %}
                {% if coef == 1 %}
                    + kernels[{{ k }}]
                {% elif coef == -1 %}
                    - kernels[{{ k }}]
                {% elif coef %}
                    + kernels[{{ k }}] * {{ f.round(coef) }}
                {% endif %}
            {% endwith %}
        {% endfor %};
        {% if helpers.update({'ii': helpers.ii + 1}) %}{% endif %}
    {% endfor %}
{% endfor %}


================================================
FILE: micromlgen/templates/svm/computations/kernel/arduino.jinja
================================================
{% for i, w in f.enumerate(arrays.supports) %}
    kernels[{{ i }}] = compute_kernel(x, {% for j, wj in f.enumerate(w) %} {% if j > 0 %},{% endif %} {{ f.round(wj) }} {% endfor %});
{% endfor %}

================================================
FILE: micromlgen/templates/svm/computations/kernel/attiny.jinja
================================================
float w[{{ arrays.supports[0]|length }}] = { 0 };

{% for i, w in f.enumerate(arrays.supports) %}
    {% for j, wj in f.enumerate(w) %}w[{{ j }}] = {{ f.round(wj) }}; {% endfor %}
    kernels[{{ i }}] = compute_kernel(x, w);
{% endfor %}

================================================
FILE: micromlgen/templates/svm/computations/votes.jinja
================================================
{% set helpers = {'ii': 0} %}

{% for i in range(0, sizes.classes) %}
    {% for j in range(i + 1, sizes.classes) %}
        votes[decisions[{{ helpers.ii }}] > 0 ? {{ i }} : {{ j }}] += 1;
        {% if helpers.update({'ii': helpers.ii + 1}) %}{% endif %}
    {% endfor %}
{% endfor %}

================================================
FILE: micromlgen/templates/svm/kernel/arduino.jinja
================================================
/**
 * Compute kernel between feature vector and support vector.
 * Kernel type: {{ kernel['type'] }}
 */
float compute_kernel(float *x, ...) {
    va_list w;
    va_start(w, x);

    {% with %}
        {% set wi = 'va_arg(w, double)' %}
        {% include 'svm/kernel/kernel.jinja' %}
    {% endwith %}
}

================================================
FILE: micromlgen/templates/svm/kernel/attiny.jinja
================================================
/**
 * Compute kernel between feature vector and support vector.
 * Kernel type: {{ kernel['type'] }}
 */
float compute_kernel(float *x, float w[{{ sizes.features }}]) {
    {% with %}
        {% set wi = 'w[i]' %}
        {% include 'svm/kernel/kernel.jinja' %}
    {% endwith %}
}

================================================
FILE: micromlgen/templates/svm/kernel/kernel.jinja
================================================
float kernel = 0.0;

for (uint16_t i = 0; i < {{ sizes.features }}; i++) {
    {% if kernel['type'] in ['linear', 'poly', 'sigmoid'] %}
        kernel += x[i] * {{ wi }};
    {% elif kernel['type'] == 'rbf' %}
        kernel += pow(x[i] - {{ wi }}, 2);
    {% else %}
        #error "UNKNOWN KERNEL {{ kernel['type'] }}"
    {% endif %}
}

{% if kernel['type'] == 'poly' %}
    return pow(({{ kernel['gamma'] }} * kernel) + {{ kernel['coef0'] }}, {{ kernel['degree'] }});
{% elif kernel['type'] == 'rbf' %}
    return exp(-{{ kernel['gamma'] }} * kernel);
{% elif kernel['type'] == 'sigmoid' %}
    return sigmoid(({{ kernel['gamma'] }} * kernel) + {{ kernel['coef0'] }});
{% else %}
    return kernel;
{% endif %}

================================================
FILE: micromlgen/templates/svm/svm.jinja
================================================
{% extends '_skeleton.jinja' %}

{% block predict %}

    float kernels[{{ sizes.vectors }}] = { 0 };
    {% if sizes.classes > 1 %}
        float decisions[{{ sizes.decisions }}] = { 0 };
        int votes[{{ sizes.classes }}] = { 0 };
    {% endif %}

    {% include 'svm/computations/kernel/%s.jinja' % platform %}

    {% if sizes.classes == 1 %}
        float decision = {{ f.round(arrays.intercepts[0]) }} - ({% for i, coef in f.enumerate(arrays.coefs[0]) %} + kernels[{{ i }}] {% if coef != 1 %}* {{ f.round(coef) }}{% endif %} {% endfor %});

        return decision > 0 ? 0 : 1;

    {% elif sizes.classes == 2 %}
        float decision = {{ f.round(arrays.intercepts[0]) }};

        decision = decision - ({% for i in range(0, sizes.supports[0]) %} + kernels[{{ i }}] * {{ f.round(arrays.coefs[0][i]) }} {% endfor %});
        decision = decision - ({% for i in range(sizes.supports[0], sizes.supports[0] + sizes.supports[1]) %} + kernels[{{ i }}] * {{ f.round(arrays.coefs[0][i]) }} {% endfor %});

        return decision > 0 ? 0 : 1;
    {% else %}
        {% include 'svm/computations/decisions.jinja' %}
        {% include 'svm/computations/votes.jinja' %}
        {% include 'svm/computations/class.jinja' %}
    {% endif %}
{% endblock %}

{% block protected %}
    {% include 'svm/kernel/%s.jinja' % platform %}
{% endblock %}

================================================
FILE: micromlgen/templates/testset.jinja
================================================
#pragma once

namespace Eloquent {
    namespace ML {
        namespace Test {

            /**
             * A tailor made test set
             */
            class {{ classname }} {
                public:
                    {{ classname }}() :
                        _x{  {% for x in X %}
                                { {% for xi in x %} {% if loop.index > 1 %}, {% endif %} {{ f.round(xi) }} {% endfor %} },
                            {% endfor %}
                        },
                        _y{  {% for yi in y %} {% if loop.index > 1 %}, {% endif %} {{ f.round(yi) }} {% endfor %}  },
                        _tp(0),
                        _tn(0),
                        _fp(0),
                        _fn(0)
                    {}

                    template<class Classifier>
                    void test(Classifier clf) {
                        for (uint16_t i = 0; i < {{ X|length }}; i++) {
                            int predicted = clf.predict(_x[i]);
                            int actual = _y[i];

                            if (predicted > 0 && predicted == actual)       _tp++;
                            else if (predicted > 0 && predicted != actual)  _fp++;
                            else if (predicted <= 0 && predicted == actual) _tn++;
                            else _fn++;
                        }
                    }

                    String dump() {
                      return String("Support:\t")
                            + support()
                            + "\nTP:\t"
                            + _tp
                            + "\nTN:\t"
                            + _tn
                            + "\nFP:\t"
                            + _fp
                            + "\nFN:\t"
                            + _fn
                            + "\nAccuracy:\t"
                            + accuracy()
                            + "\nPrecision:\t"
                            + precision()
                            + "\nRecall:\t"
                            + recall()
                            + "\nSpecificity:\t"
                            + specificity()
                          ;
                    }

                    float accuracy() {
                        return (1.0f * _tp + _tn) / support();
                    }

                    float precision() {
                        return (1.0f * _tp) / (_tp + _fp);
                    }
                    float recall() {
                        return (1.0f * _tp) / (_tp + _fn);
                    }

                    float specificity() {
                      return (1.0f * _tn) / (_tn + _fp);
                    }

                    uint16_t support() {
                        return _tp + _tn + _fp + _fn;
                    }

                protected:
                    float _x[{{ X|length }}][{{ X[0]|length }}];
                    int _y[{{ X|length }}];
                    uint16_t _tp, _fp, _tn, _fn;
            };
        }
    }
}

================================================
FILE: micromlgen/templates/trainset.jinja
================================================
#pragma once

namespace Eloquent {
    namespace ML {
        namespace Train {

            /**
             * A tailor made training set
             */
            class {{ classname }} {
                public:
                    {{ classname }}() :
                        _x{  {% for x in X %}
                                { {% for xi in x %} {% if loop.index > 1 %}, {% endif %} {{ f.round(xi) }} {% endfor %} },
                            {% endfor %}
                        },
                        _y{  {% for yi in y %} {% if loop.index > 1 %}, {% endif %} {{ f.round(yi) }} {% endfor %}  }
                    {}

                    template<class Classifier>
                    void fit(Classifier *clf) {
                        clf->fit(_x, _y, {{ X|length }});
                    }

                protected:
                    float _x[{{ X|length }}][{{ X[0]|length }}];
                    int _y[{{ X|length }}];
            };
        }
    }
}

================================================
FILE: micromlgen/templates/vote.jinja
================================================
// return argmax of votes
uint8_t classIdx = 0;
float maxVotes = votes[0];

for (uint8_t i = 1; i < {{ n_classes }}; i++) {
    if (votes[i] > maxVotes) {
        classIdx = i;
        maxVotes = votes[i];
    }
}

return classIdx;

================================================
FILE: micromlgen/templates/wifiindoorpositioning/__init__.py
================================================
# here just to force pip to copy this folder

================================================
FILE: micromlgen/templates/wifiindoorpositioning/wifiindoorpositioning.jinja
================================================
#pragma once

namespace Eloquent {
    namespace Projects {
        class WifiIndoorPositioning {
            public:
                float features[{{ X[0]|length }}] = {0};

                /**
                 * Get feature vector
                 */
                float* scan() {
                    uint8_t numNetworks = WiFi.scanNetworks();

                    for (uint8_t i = 0; i < {{ X[0]|length }}; i++) {
                        features[i] = 0;
                    }

                    for (uint8_t i = 0; i < numNetworks; i++) {
                        int featureIdx = ssidToFeatureIdx(WiFi.SSID(i));

                        if (featureIdx >= 0) {
                            features[featureIdx] = WiFi.RSSI(i);
                        }
                    }

                    return features;
                }

            protected:
                /**
                 * Convert SSID to featureIdx
                 */
                int ssidToFeatureIdx(String ssid) {
                    {% for network, idx in networkmap.items() %}
                    if (ssid.equals("{{ network }}"))
                        return {{ idx }};
                    {% endfor %}

                    return -1;
                }
        };
    }
}

================================================
FILE: micromlgen/templates/xgboost/__init__.py
================================================
# here just to force pip to copy this folder

================================================
FILE: micromlgen/templates/xgboost/tree.jinja
================================================
{% if tree['left'][i] != tree['right'][i] %}
    if (x[{{ tree['features'][i] }}] <= {{ tree['thresholds'][i] }}) {
        {% with i = tree['left'][i] %}
            {% include 'xgboost/tree.jinja' %}
        {% endwith %}
    }
    else {
        {% with i = tree['right'][i] %}
            {% include 'xgboost/tree.jinja' %}
        {% endwith %}
    }
{% else %}
    votes[{{ class_idx }}] += {{ tree['thresholds'][i] }};
{% endif %}

================================================
FILE: micromlgen/templates/xgboost/xgboost.jinja
================================================
{% extends '_skeleton.jinja' %}

{% block predict %}
    float votes[{{ n_classes }}] = { 0.0f };

    {% for k, tree in f.enumerate(trees) %}
        {% with i = 0, class_idx = k % n_classes %}
            // tree #{{ k + 1 }}
            {% include 'xgboost/tree.jinja' %}
        {% endwith %}
    {% endfor %}

    {% include 'vote.jinja' %}
{% endblock %}

================================================
FILE: micromlgen/utils.py
================================================
import os
import re
from math import sin, cos
from collections.abc import Iterable
from inspect import getmro
from jinja2 import FileSystemLoader, Environment


def check_type(instance, *classes):
    """Check if object is instance of given class"""
    for klass in classes:
        if type(instance).__name__ == klass:
            return True
        for T in getmro(type(instance)):
            if T.__name__ == klass:
                return True
    return False


def prettify(code):
    """A super simple code prettifier"""
    pretty = []
    indent = 0
    for line in code.split('\n'):
        line = line.strip()
        # skip empty lines
        if len(line) == 0:
            continue
        # lower indentation on closing braces
        if line[-1] == '}' or line == '};' or line == 'protected:':
            indent -= 1
        pretty.append(('    ' * indent) + line)
        # increase indentation on opening braces
        if line[-1] == '{' or line == 'public:' or line == 'protected:':
            indent += 1
    pretty = '\n'.join(pretty)
    # leave empty line before {return, for, if}
    pretty = re.sub(r'([;])\n(\s*?)(for|return|if) ', lambda m: '%s\n\n%s%s ' % m.groups(), pretty)
    # leave empty line after closing braces
    pretty = re.sub(r'}\n', '}\n\n', pretty)
    # strip empty lines between closing braces (2 times)
    pretty = re.sub(r'\}\n\n(\s*?)\}', lambda m: '}\n%s}' % m.groups(), pretty)
    pretty = re.sub(r'\}\n\n(\s*?)\}', lambda m: '}\n%s}' % m.groups(), pretty)
    # remove "," before "}"
    pretty = re.sub(r',\s*\}', '}', pretty)
    return pretty


def jinja(template_file, data, defaults=None, **kwargs):
    """Render Jinja template"""
    dir_path = os.path.dirname(os.path.realpath(__file__))
    loader = FileSystemLoader(dir_path + '/templates')
    template = Environment(loader=loader).get_template(template_file)
    data = {k: v for k, v in data.items() if v is not None}
    kwargs = {k: v for k, v in kwargs.items() if v is not None}
    precision = data.get('precision', 12) or 12
    precision_fmt = '%.' + str(precision) + 'f'
    if defaults is None:
        defaults = {}
    defaults.setdefault('platform', 'arduino')
    defaults.setdefault('classmap', None)
    defaults.update({
        'f': {
            'enumerate': enumerate,
            'round': lambda x: round(x, precision),
            'zip': zip,
            'signed': lambda x: '' if x == 0 else '+' + str(x) if x >= 0 else x,
            'to_array': lambda x, as_int=False: ', '.join([precision_fmt % xx if not as_int else str(xx) for xx in x])
        },
        'math': {
            'cos': cos,
            'sin': sin
        }
    })
    data = {
        **defaults,
        **kwargs,
        **data
    }
    code = template.render(data)
    return prettify(code)


def port_trainset(X, y, classname='TrainSet'):
    return jinja('trainset.jinja', locals())


def port_testset(X, y, classname='TestSet'):
    return jinja('testset.jinja', locals())


def port_array(arr, precision=9):
    """
    Convert array to C
    :param arr: list|ndarray
    :param precision: int how many decimal digits to print
    :return: str C-array contents
    """
    if not isinstance(arr, Iterable):
        fmt = '%%.%df' % precision
        return fmt % arr

    return '{%s}' % (', '.join([port_array(x, precision) for x in arr]))
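

For reference, `port_array` recurses over nested iterables and emits brace-wrapped C initializer strings. The copy below is duplicated verbatim from this file so the snippet runs standalone (caveat: passing a plain string would recurse indefinitely, since strings are iterable):

```python
from collections.abc import Iterable

def port_array(arr, precision=9):
    # scalar: format with the requested number of decimals
    if not isinstance(arr, Iterable):
        fmt = '%%.%df' % precision
        return fmt % arr
    # iterable: recurse and wrap in braces, C-initializer style
    return '{%s}' % (', '.join(port_array(x, precision) for x in arr))

print(port_array([1.5, [2, 3]], precision=2))  # → {1.50, {2.00, 3.00}}
```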

================================================
FILE: micromlgen/wifiindoorpositioning.py
================================================
import json
import numpy as np
from micromlgen.utils import jinja


def parse_samples(samples, parser):
    for line in samples.split('\n'):
        if '{' in line and '}' in line:
            data = json.loads(line)
            info = {k: v for k, v in data.items() if k.startswith('__')}
            networks = {k: v for k, v in data.items() if not k.startswith('__')}
            yield parser(info, networks)


def get_classmap(samples):
    """Get {location: classIdx} mapping"""

    def parser(info, networks):
        return info['__location']

    locations = list(parse_samples(samples, parser))
    return {location: i for i, location in enumerate(sorted(set(locations)))}


def get_networkmap(samples):
    """Get {network: featureIdx} mapping"""

    def parser(info, networks):
        return networks.keys()

    networks = [list(x) for x in parse_samples(samples, parser)]
    networks = [network for sub in networks for network in sub]
    return {network: i for i, network in enumerate(sorted(set(networks)))}


def get_x(samples, networkmap):
    """Get features array"""

    def parser(info, networks):
        x = [0] * len(networkmap)
        for network, rssi in networks.items():
            x[networkmap.get(network)] = rssi
        return x

    return np.asarray(list(parse_samples(samples, parser)), dtype=np.int8)


def get_y(samples, classmap):
    """Get locationIdx array"""

    def parser(info, networks):
        location = info['__location']
        assert location in classmap, 'Unknown location %s' % location
        return classmap[location]

    return np.asarray(list(parse_samples(samples, parser)))


def port_wifi_indoor_positioning(samples):
    classmap = get_classmap(samples)
    networkmap = get_networkmap(samples)
    X = get_x(samples, networkmap)
    y = get_y(samples, classmap)
    # classmap is flipped wrt the format `port` expects: flip it
    classmap = {v: k for k, v in classmap.items()}
    return X, y, classmap, jinja('wifiindoorpositioning/wifiindoorpositioning.jinja', {
        'X': X,
        'networkmap': networkmap
    })
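

The functions above expect one JSON object per line, where keys starting with `__` carry metadata (only `__location` is used) and the remaining keys map SSID to RSSI. A condensed standalone sketch of the classmap/networkmap construction (the SSIDs and locations are invented for illustration):

```python
import json

# one JSON object per line, as parse_samples() expects
samples = '\n'.join([
    '{"__location": "kitchen", "HomeWiFi": -60, "Neighbor": -80}',
    '{"__location": "bedroom", "HomeWiFi": -75}',
])

def build_maps(samples):
    locations, networks = set(), set()
    for line in samples.split('\n'):
        if '{' in line and '}' in line:
            data = json.loads(line)
            # keys starting with "__" are metadata, the rest are SSIDs
            locations.add(data['__location'])
            networks.update(k for k in data if not k.startswith('__'))
    classmap = {loc: i for i, loc in enumerate(sorted(locations))}
    networkmap = {net: i for i, net in enumerate(sorted(networks))}
    return classmap, networkmap

classmap, networkmap = build_maps(samples)
```

Both maps are built from sorted unique values, so indices are deterministic across runs as long as the set of locations/SSIDs is the same.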


================================================
FILE: micromlgen/xgboost.py
================================================
from micromlgen.utils import jinja, check_type
from tempfile import NamedTemporaryFile
import json


def format_tree(tree):
    """
    Format an XGBoost tree like a sklearn DecisionTree
    :param tree: dict of node arrays from the XGBoost JSON model dump
    :return: dict with sklearn-like keys (left, right, features, thresholds)
    """
    split_indices = tree['split_indices']
    split_conditions = tree['split_conditions']
    left_children = tree['left_children']
    right_children = tree['right_children']
    return {
        'left': left_children,
        'right': right_children,
        'features': split_indices,
        'thresholds': split_conditions
    }


def is_xgboost(clf):
    """
    Test if classifier can be ported
    """
    return check_type(clf, 'XGBClassifier')


def port_xgboost(clf, tmp_file=None, **kwargs):
    """
    Port an XGBoost classifier
    @updated 1.1.28
    :param clf: fitted XGBClassifier to port
    :param tmp_file: if not None, use the given file as temporary export destination
    """
    if tmp_file is None:
        with NamedTemporaryFile('w+', suffix='.json', encoding='utf-8') as tmp:
            clf.save_model(tmp.name)
            tmp.seek(0)
            decoded = json.load(tmp)
    else:
        clf.save_model(tmp_file)

        with open(tmp_file, encoding='utf-8') as file:
            decoded = json.load(file)

    trees = [format_tree(tree) for tree in decoded['learner']['gradient_booster']['model']['trees']]

    return jinja('xgboost/xgboost.jinja', {
        'n_classes': int(decoded['learner']['learner_model_param']['num_class']),
        'trees': trees,
    }, {
        'classname': 'XGBClassifier'
    }, **kwargs)
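
`format_tree` is a pure key remapping. A standalone sketch with a hand-built stump (all field values are invented for illustration): in the XGBoost JSON dump, leaf nodes have `left_children[i] == right_children[i] == -1` and carry their output value in `split_conditions`, which is how `xgboost/tree.jinja` detects leaves and why it reads the leaf value from `thresholds`:

```python
def format_tree(tree):
    # same remapping as micromlgen.xgboost.format_tree above
    return {
        'left': tree['left_children'],
        'right': tree['right_children'],
        'features': tree['split_indices'],
        'thresholds': tree['split_conditions'],
    }

# a stump: node 0 splits on feature 2 at 0.5; nodes 1 and 2 are leaves
stump = {
    'split_indices': [2, 0, 0],
    'split_conditions': [0.5, -1.0, 1.0],  # leaves store their output value here
    'left_children': [1, -1, -1],
    'right_children': [2, -1, -1],
}
formatted = format_tree(stump)
```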

================================================
FILE: publish
================================================
#!/bin/bash

templates=$(python -c 'import glob; print([f.replace("micromlgen/", "") for f in glob.glob("micromlgen/templates/**/*", recursive=True)])')
templates=$(printf '%s\n' "$templates" | sed -e 's/[\/&]/\\&/g')
rm dist/micromlgen-${1}.tar.gz
cp setup_template.py setup.py
sed -i "" "s/PACKAGE_DATA/$templates/" setup.py
sed -i "" "s/VERSION/${1}/" setup.py
python setup.py sdist
twine upload -u eloquentarduino dist/micromlgen-${1}.tar.gz
git add .
git commit -m "bump dist to ${1}"
git push origin master -f

================================================
FILE: setup.cfg
================================================
[metadata]
description-file = README.md

================================================
FILE: setup.py
================================================
from distutils.core import setup


setup(
  name = 'micromlgen',
  packages = ['micromlgen'],
  version = '1.1.28',
  license='MIT',
  description = 'Generate C code for microcontrollers from Python\'s sklearn classifiers',
  author = 'Simone Salerno',
  author_email = 'eloquentarduino@gmail.com',
  url = 'https://github.com/eloquentarduino/micromlgen',
  download_url = 'https://github.com/eloquentarduino/micromlgen/blob/master/dist/micromlgen-1.1.28.tar.gz?raw=true',
  keywords = [
    'ML',
    'microcontrollers',
    'sklearn',
    'machine learning'
  ],
  install_requires=[
    'jinja2',
  ],
  package_data={
    'micromlgen': ['templates/pca', 'templates/linearregression', 'templates/wifiindoorpositioning', 'templates/vote.jinja', 'templates/dot.jinja', 'templates/gaussiannb', 'templates/rvm', 'templates/xgboost', 'templates/decisiontree', 'templates/randomforest', 'templates/__init__.py', 'templates/__pycache__', 'templates/_skeleton.jinja', 'templates/sefr', 'templates/svm', 'templates/logisticregression', 'templates/principalfft', 'templates/classmap.jinja', 'templates/trainset.jinja', 'templates/testset.jinja', 'templates/pca/__init__.py', 'templates/pca/pca.jinja', 'templates/linearregression/__init__.py', 'templates/linearregression/linearregression.jinja', 'templates/wifiindoorpositioning/__init__.py', 'templates/wifiindoorpositioning/wifiindoorpositioning.jinja', 'templates/gaussiannb/vote.jinja', 'templates/gaussiannb/gaussiannb.jinja', 'templates/gaussiannb/__init__.py', 'templates/rvm/__init__.py', 'templates/rvm/rvm.jinja', 'templates/xgboost/tree.jinja', 'templates/xgboost/__init__.py', 'templates/xgboost/xgboost.jinja', 'templates/decisiontree/tree_regressor.jinja', 'templates/decisiontree/tree.jinja', 'templates/decisiontree/decisiontree.jinja', 'templates/decisiontree/__init__.py', 'templates/decisiontree/decisiontree_regressor.jinja', 'templates/randomforest/randomforest_regressor.jinja', 'templates/randomforest/randomforest.jinja', 'templates/randomforest/tree_regressor.jinja', 'templates/randomforest/tree.jinja', 'templates/randomforest/__init__.py', 'templates/__pycache__/__init__.cpython-37.pyc', 'templates/sefr/sefr.jinja', 'templates/sefr/dot.jinja', 'templates/sefr/__init__.py', 'templates/svm/__init__.py', 'templates/svm/computations', 'templates/svm/svm.jinja', 'templates/svm/kernel', 'templates/svm/computations/class.jinja', 'templates/svm/computations/decisions.jinja', 'templates/svm/computations/votes.jinja', 'templates/svm/computations/kernel', 'templates/svm/computations/kernel/attiny.jinja', 
'templates/svm/computations/kernel/arduino.jinja', 'templates/svm/kernel/attiny.jinja', 'templates/svm/kernel/arduino.jinja', 'templates/svm/kernel/kernel.jinja', 'templates/logisticregression/__init__.py', 'templates/logisticregression/vote.arduino.jinja', 'templates/logisticregression/logisticregression.jinja', 'templates/logisticregression/vote.attiny.jinja', 'templates/principalfft/lut.jinja', 'templates/principalfft/__init__.py', 'templates/principalfft/principalfft.jinja', 'templates/principalfft/lut_bool.jinja']
  },
  classifiers=[
    'Development Status :: 5 - Production/Stable',
    'Intended Audience :: Developers',
    'Topic :: Software Development :: Code Generators',
    'License :: OSI Approved :: MIT License',
    'Programming Language :: Python :: 3',
    'Programming Language :: Python :: 3.4',
    'Programming Language :: Python :: 3.5',
    'Programming Language :: Python :: 3.6',
  ],
)


================================================
FILE: setup_template.py
================================================
from distutils.core import setup


setup(
  name = 'micromlgen',
  packages = ['micromlgen'],
  version = 'VERSION',
  license='MIT',
  description = 'Generate C code for microcontrollers from Python\'s sklearn classifiers',
  author = 'Simone Salerno',
  author_email = 'eloquentarduino@gmail.com',
  url = 'https://github.com/eloquentarduino/micromlgen',
  download_url = 'https://github.com/eloquentarduino/micromlgen/blob/master/dist/micromlgen-VERSION.tar.gz?raw=true',
  keywords = [
    'ML',
    'microcontrollers',
    'sklearn',
    'machine learning'
  ],
  install_requires=[
    'jinja2',
  ],
  package_data={
    'micromlgen': PACKAGE_DATA
  },
  classifiers=[
    'Development Status :: 5 - Production/Stable',
    'Intended Audience :: Developers',
    'Topic :: Software Development :: Code Generators',
    'License :: OSI Approved :: MIT License',
    'Programming Language :: Python :: 3',
    'Programming Language :: Python :: 3.4',
    'Programming Language :: Python :: 3.5',
    'Programming Language :: Python :: 3.6',
  ],
)
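The VERSION and PACKAGE_DATA placeholders above are not valid Python on their own; the repository's `publish` script fills them in before building (its preview shows it globbing the files under `micromlgen/templates/`). A minimal sketch of that substitution step, with assumed helper names (`template_paths`, `render_setup` are illustrative, not part of the repo):

```python
import glob

def template_paths(package_root="micromlgen"):
    """Glob every file under the package's templates/ folder and strip the
    package prefix, since setuptools' package_data expects paths relative
    to the package directory."""
    return [
        f.replace(package_root + "/", "")
        for f in glob.glob(package_root + "/templates/**", recursive=True)
    ]

def render_setup(template_text, version, paths):
    """Substitute the VERSION and PACKAGE_DATA placeholders in
    setup_template.py, producing a concrete setup.py."""
    return (template_text
            .replace("VERSION", version)
            .replace("PACKAGE_DATA", repr(paths)))
```

This mirrors why the generated setup.py above carries one enormous hand-unfriendly list: it is emitted mechanically from a glob rather than maintained by hand.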
SYMBOL INDEX (40 symbols across 16 files)

FILE: micromlgen/decisiontreeclassifier.py
  function is_decisiontree (line 4) | def is_decisiontree(clf):
  function port_decisiontree (line 9) | def port_decisiontree(clf, **kwargs):

FILE: micromlgen/decisiontreeregressor.py
  function is_decisiontree_regressor (line 4) | def is_decisiontree_regressor(clf):
  function port_decisiontree_regressor (line 13) | def port_decisiontree_regressor(clf, **kwargs):

FILE: micromlgen/gaussiannb.py
  function is_gaussiannb (line 4) | def is_gaussiannb(clf):
  function port_gaussiannb (line 9) | def port_gaussiannb(clf, **kwargs):

FILE: micromlgen/linear_regression.py
  function is_linear_regression (line 4) | def is_linear_regression(clf):
  function port_linear_regression (line 9) | def port_linear_regression(clf, classname=None, **kwargs):

FILE: micromlgen/logisticregression.py
  function is_logisticregression (line 4) | def is_logisticregression(clf):
  function port_logisticregression (line 9) | def port_logisticregression(clf, **kwargs):

FILE: micromlgen/micromlgen.py
  function port (line 17) | def port(

FILE: micromlgen/pca.py
  function is_pca (line 4) | def is_pca(clf):
  function port_pca (line 9) | def port_pca(clf, **kwargs):

FILE: micromlgen/principalfft.py
  function is_principalfft (line 5) | def is_principalfft(clf):
  function port_principalfft (line 10) | def port_principalfft(clf, optimize_sin=False, lookup_cos=None, lookup_s...

FILE: micromlgen/randomforestclassifier.py
  function is_randomforest (line 4) | def is_randomforest(clf):
  function port_randomforest (line 9) | def port_randomforest(clf, **kwargs):

FILE: micromlgen/randomforestregressor.py
  function is_randomforest_regressor (line 4) | def is_randomforest_regressor(clf):
  function port_randomforest_regressor (line 11) | def port_randomforest_regressor(clf, **kwargs):

FILE: micromlgen/rvm.py
  function is_rvm (line 4) | def is_rvm(clf):
  function port_rvm (line 9) | def port_rvm(clf, **kwargs):

FILE: micromlgen/sefr.py
  function is_sefr (line 4) | def is_sefr(clf):
  function port_sefr (line 9) | def port_sefr(clf, classname=None, **kwargs):

FILE: micromlgen/svm.py
  function is_svm (line 4) | def is_svm(clf):
  function port_svm (line 9) | def port_svm(clf, **kwargs):

FILE: micromlgen/utils.py
  function check_type (line 9) | def check_type(instance, *classes):
  function prettify (line 20) | def prettify(code):
  function jinja (line 49) | def jinja(template_file, data, defaults=None, **kwargs):
  function port_trainset (line 84) | def port_trainset(X, y, classname='TrainSet'):
  function port_testset (line 88) | def port_testset(X, y, classname='TestSet'):
  function port_array (line 92) | def port_array(arr, precision=9):

FILE: micromlgen/wifiindoorpositioning.py
  function parse_samples (line 6) | def parse_samples(samples, parser):
  function get_classmap (line 15) | def get_classmap(samples):
  function get_networkmap (line 25) | def get_networkmap(samples):
  function get_x (line 36) | def get_x(samples, networkmap):
  function get_y (line 48) | def get_y(samples, classmap):
  function port_wifi_indoor_positioning (line 59) | def port_wifi_indoor_positioning(samples):

FILE: micromlgen/xgboost.py
  function format_tree (line 6) | def format_tree(tree):
  function is_xgboost (line 24) | def is_xgboost(clf):
  function port_xgboost (line 31) | def port_xgboost(clf, tmp_file=None, **kwargs):
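The symbol index shows a consistent `is_*` / `port_*` pair for every supported model, with `micromlgen.port` as the single entry point. A stdlib-only sketch of the dispatch pattern those pairs suggest (the real implementation lives in micromlgen/micromlgen.py; the predicate body and placeholder output here are simplified assumptions):

```python
def is_svm(clf):
    # the real predicate (micromlgen/svm.py) uses check_type to inspect the
    # class hierarchy; comparing the class name is enough for this sketch
    return type(clf).__name__ in ('SVC', 'OneClassSVM')

def port_svm(clf, **kwargs):
    return '// generated C++ for an SVM (placeholder)'

# port() tries each (predicate, porter) pair in turn
PORTERS = [(is_svm, port_svm)]

def port(clf, **kwargs):
    for predicate, porter in PORTERS:
        if predicate(clf):
            return porter(clf, **kwargs)
    raise TypeError('cannot port %s' % type(clf).__name__)

class SVC:            # stand-in for sklearn.svm.SVC
    pass

print(port(SVC()))
```

Adding support for a new model type then amounts to writing one more `is_*` / `port_*` pair plus a Jinja template, which matches the per-model file layout of the package.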
Condensed preview — 79 files, each showing path, character count, and a content snippet (the full structured content is about 61K chars).
[
  {
    "path": ".gitattributes",
    "chars": 63,
    "preview": "*.jinja linguist-detectable=false\n*.py linguist-detectable=true"
  },
  {
    "path": ".gitignore",
    "chars": 48,
    "preview": "venv\n.idea\ntests\nmicromlgen/__pycache__\nexamples"
  },
  {
    "path": "LICENSE.txt",
    "chars": 1061,
    "preview": "MIT License\nCopyright (c) 2018 YOUR NAME\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof"
  },
  {
    "path": "MANIFEST",
    "chars": 3026,
    "preview": "# file GENERATED by distutils, do NOT edit\nsetup.cfg\nsetup.py\nmicromlgen/__init__.py\nmicromlgen/decisiontreeclassifier.p"
  },
  {
    "path": "README.md",
    "chars": 2972,
    "preview": "# Introducing MicroML\n\nMicroML is an attempt to bring Machine Learning algorithms to microcontrollers.\nPlease refer to ["
  },
  {
    "path": "micromlgen/__init__.py",
    "chars": 211,
    "preview": "import micromlgen.platforms as platforms\nfrom micromlgen.micromlgen import port\nfrom micromlgen.utils import port_testse"
  },
  {
    "path": "micromlgen/decisiontreeclassifier.py",
    "chars": 591,
    "preview": "from micromlgen.utils import jinja, check_type\n\n\ndef is_decisiontree(clf):\n    \"\"\"Test if classifier can be ported\"\"\"\n  "
  },
  {
    "path": "micromlgen/decisiontreeregressor.py",
    "chars": 759,
    "preview": "from micromlgen.utils import jinja, check_type\n\n\ndef is_decisiontree_regressor(clf):\n    \"\"\"\n    Test if classifier can "
  },
  {
    "path": "micromlgen/gaussiannb.py",
    "chars": 505,
    "preview": "from micromlgen.utils import jinja, check_type\n\n\ndef is_gaussiannb(clf):\n    \"\"\"Test if classifier can be ported\"\"\"\n    "
  },
  {
    "path": "micromlgen/linear_regression.py",
    "chars": 518,
    "preview": "from micromlgen.utils import jinja, check_type\n\n\ndef is_linear_regression(clf):\n    \"\"\"Test if classifier can be ported\""
  },
  {
    "path": "micromlgen/logisticregression.py",
    "chars": 545,
    "preview": "from micromlgen.utils import jinja, check_type\n\n\ndef is_logisticregression(clf):\n    \"\"\"Test if classifier can be ported"
  },
  {
    "path": "micromlgen/micromlgen.py",
    "chars": 2349,
    "preview": "from micromlgen import platforms\nfrom micromlgen.svm import is_svm, port_svm\nfrom micromlgen.rvm import is_rvm, port_rvm"
  },
  {
    "path": "micromlgen/pca.py",
    "chars": 387,
    "preview": "from micromlgen.utils import jinja, check_type\n\n\ndef is_pca(clf):\n    \"\"\"Test if classifier can be ported\"\"\"\n    return "
  },
  {
    "path": "micromlgen/platforms.py",
    "chars": 388,
    "preview": "ARDUINO = 'arduino'\nATTINY = 'attiny'\nALL = [\n    ARDUINO,\n    ATTINY\n]\n\nALLOWED_CLASSIFIERS = [\n    'SVC',\n    'OneClas"
  },
  {
    "path": "micromlgen/principalfft.py",
    "chars": 584,
    "preview": "from micromlgen.utils import jinja, check_type\nfrom math import pi\n\n\ndef is_principalfft(clf):\n    \"\"\"Test if classifier"
  },
  {
    "path": "micromlgen/randomforestclassifier.py",
    "chars": 691,
    "preview": "from micromlgen.utils import jinja, check_type\n\n\ndef is_randomforest(clf):\n    \"\"\"Test if classifier can be ported\"\"\"\n  "
  },
  {
    "path": "micromlgen/randomforestregressor.py",
    "chars": 778,
    "preview": "from micromlgen.utils import jinja, check_type\n\n\ndef is_randomforest_regressor(clf):\n    \"\"\"\n    Test if classifier can "
  },
  {
    "path": "micromlgen/rvm.py",
    "chars": 836,
    "preview": "from micromlgen.utils import jinja, check_type\n\n\ndef is_rvm(clf):\n    \"\"\"Test if classifier can be ported\"\"\"\n    return "
  },
  {
    "path": "micromlgen/sefr.py",
    "chars": 412,
    "preview": "from micromlgen.utils import jinja, check_type\n\n\ndef is_sefr(clf):\n    \"\"\"Test if classifier can be ported\"\"\"\n    return"
  },
  {
    "path": "micromlgen/svm.py",
    "chars": 1119,
    "preview": "from micromlgen.utils import jinja, check_type\n\n\ndef is_svm(clf):\n    \"\"\"Test if classifier can be ported\"\"\"\n    return "
  },
  {
    "path": "micromlgen/templates/__init__.py",
    "chars": 44,
    "preview": "# here just to force pip to copy this folder"
  },
  {
    "path": "micromlgen/templates/_skeleton.jinja",
    "chars": 640,
    "preview": "#pragma once\n#include <cstdarg>\n\nnamespace Eloquent {\n    namespace ML {\n        namespace Port {\n            class {{ c"
  },
  {
    "path": "micromlgen/templates/classmap.jinja",
    "chars": 477,
    "preview": "{% if classmap is not none %}\n\n/**\n * Predict readable class name\n */\nconst char* predictLabel(float *x) {\n    return id"
  },
  {
    "path": "micromlgen/templates/decisiontree/__init__.py",
    "chars": 45,
    "preview": "# here just to force pip to copy this folder\n"
  },
  {
    "path": "micromlgen/templates/decisiontree/decisiontree.jinja",
    "chars": 111,
    "preview": "{% extends '_skeleton.jinja' %}\n\n{% block predict %}\n    {% include 'decisiontree/tree.jinja' %}\n{% endblock %}"
  },
  {
    "path": "micromlgen/templates/decisiontree/decisiontree_regressor.jinja",
    "chars": 121,
    "preview": "{% extends '_skeleton.jinja' %}\n\n{% block predict %}\n    {% include 'decisiontree/tree_regressor.jinja' %}\n{% endblock %"
  },
  {
    "path": "micromlgen/templates/decisiontree/tree.jinja",
    "chars": 378,
    "preview": "{% if left[i] != right[i] %}\n    if (x[{{ features[i] }}] <= {{ thresholds[i] }}) {\n        {% with i = left[i] %}\n     "
  },
  {
    "path": "micromlgen/templates/decisiontree/tree_regressor.jinja",
    "chars": 396,
    "preview": "{% if left[i] != right[i] %}\n    if (x[{{ features[i] }}] <= {{ thresholds[i] }}) {\n        {% with i = left[i] %}\n     "
  },
  {
    "path": "micromlgen/templates/dot.jinja",
    "chars": 550,
    "preview": "{% if platform == 'attiny' %}\n    {% set signature = 'float w[%d]' % dimension  %}\n    {% set wi = 'w[i]' %}\n    {% set "
  },
  {
    "path": "micromlgen/templates/gaussiannb/__init__.py",
    "chars": 44,
    "preview": "# here just to force pip to copy this folder"
  },
  {
    "path": "micromlgen/templates/gaussiannb/gaussiannb.jinja",
    "chars": 552,
    "preview": "{% extends '_skeleton.jinja' %}\n\n{% block predict %}\n    float votes[{{ classes|length }}] = { 0.0f };\n\n    {% include '"
  },
  {
    "path": "micromlgen/templates/gaussiannb/vote.jinja",
    "chars": 400,
    "preview": "float theta[{{ theta[0]|length }}] = { 0 };\nfloat sigma[{{ sigma[0]|length }}] = { 0 };\n\n{% for i, (t, s) in f.enumerate"
  },
  {
    "path": "micromlgen/templates/linearregression/__init__.py",
    "chars": 44,
    "preview": "# here just to force pip to copy this folder"
  },
  {
    "path": "micromlgen/templates/linearregression/linearregression.jinja",
    "chars": 253,
    "preview": "{% extends '_skeleton.jinja' %}\n\n{% block predict %}\n    return dot(x, {{ f.to_array(coefs) }}) + {{ intercept }};\n{% en"
  },
  {
    "path": "micromlgen/templates/logisticregression/__init__.py",
    "chars": 44,
    "preview": "# here just to force pip to copy this folder"
  },
  {
    "path": "micromlgen/templates/logisticregression/logisticregression.jinja",
    "chars": 459,
    "preview": "{% extends '_skeleton.jinja' %}\n\n{% block predict %}\n    float votes[{{ classes|length }}] = { {% for i, x in f.enumerat"
  },
  {
    "path": "micromlgen/templates/logisticregression/vote.arduino.jinja",
    "chars": 174,
    "preview": "{% for i, w in f.enumerate(weights) %}\n    votes[{{ i }}] += dot(x, {% for j, wj in f.enumerate(w) %} {% if j > 0 %},{% "
  },
  {
    "path": "micromlgen/templates/logisticregression/vote.attiny.jinja",
    "chars": 209,
    "preview": "float w[{{ weights[0]|length }}] = { 0 };\n\n{% for i, w in f.enumerate(weights) %}\n    {% for j, wj in f.enumerate(w) %}w"
  },
  {
    "path": "micromlgen/templates/pca/__init__.py",
    "chars": 44,
    "preview": "# here just to force pip to copy this folder"
  },
  {
    "path": "micromlgen/templates/pca/pca.jinja",
    "chars": 1648,
    "preview": "#pragma once\n#include <cstdarg>\n\nnamespace Eloquent {\n    namespace ML {\n        namespace Port {\n\n            class {{ "
  },
  {
    "path": "micromlgen/templates/principalfft/__init__.py",
    "chars": 44,
    "preview": "# here just to force pip to copy this folder"
  },
  {
    "path": "micromlgen/templates/principalfft/lut.jinja",
    "chars": 248,
    "preview": "const float {{ op }}LUT[{{ size }}][{{ fft.original_size }}] = {\n    {% for i in range(0, size) %}\n        { {% for n in"
  },
  {
    "path": "micromlgen/templates/principalfft/lut_bool.jinja",
    "chars": 274,
    "preview": "const bool {{ op }}LUT[{{ size }}][{{ fft.original_size }}] = {\n    {% for i in range(0, size) %}\n        { {% for n in "
  },
  {
    "path": "micromlgen/templates/principalfft/principalfft.jinja",
    "chars": 1933,
    "preview": "void principalFFT(float *features, float *fft) {\n    // apply principal FFT (naive implementation for the top N frequenc"
  },
  {
    "path": "micromlgen/templates/randomforest/__init__.py",
    "chars": 44,
    "preview": "# here just to force pip to copy this folder"
  },
  {
    "path": "micromlgen/templates/randomforest/randomforest.jinja",
    "chars": 337,
    "preview": "{% extends '_skeleton.jinja' %}\n\n{% block predict %}\n    uint8_t votes[{{ n_classes }}] = { 0 };\n\n    {% for k, tree in "
  },
  {
    "path": "micromlgen/templates/randomforest/randomforest_regressor.jinja",
    "chars": 334,
    "preview": "{% extends '_skeleton.jinja' %}\n\n{% block predict %}\n    float y_pred = 0;\n\n    {% for k, tree in f.enumerate(trees) %}\n"
  },
  {
    "path": "micromlgen/templates/randomforest/tree.jinja",
    "chars": 439,
    "preview": "{% if tree['left'][i] != tree['right'][i] %}\n    if (x[{{ tree['features'][i] }}] <= {{ tree['thresholds'][i] }}) {\n    "
  },
  {
    "path": "micromlgen/templates/randomforest/tree_regressor.jinja",
    "chars": 454,
    "preview": "{% if tree['left'][i] != tree['right'][i] %}\n    if (x[{{ tree['features'][i] }}] <= {{ tree['thresholds'][i] }}) {\n    "
  },
  {
    "path": "micromlgen/templates/rvm/__init__.py",
    "chars": 44,
    "preview": "# here just to force pip to copy this folder"
  },
  {
    "path": "micromlgen/templates/rvm/rvm.jinja",
    "chars": 735,
    "preview": "{% extends '_skeleton.jinja' %}\n\n{% block predict %}\n    float votes[{{ arrays.vectors|length }}] = { 0 };\n    {% for i,"
  },
  {
    "path": "micromlgen/templates/sefr/__init__.py",
    "chars": 44,
    "preview": "# here just to force pip to copy this folder"
  },
  {
    "path": "micromlgen/templates/sefr/dot.jinja",
    "chars": 314,
    "preview": "/**\n * Compute dot product between features vector and classifier weights\n */\nfloat dot(float *x, ...) {\n    va_list w;\n"
  },
  {
    "path": "micromlgen/templates/sefr/sefr.jinja",
    "chars": 331,
    "preview": "{% extends '_skeleton.jinja' %}\n\n{% block predict %}\n    return dot(x, {% for i, wi in f.enumerate(weights) %} {% if i >"
  },
  {
    "path": "micromlgen/templates/svm/__init__.py",
    "chars": 44,
    "preview": "# here just to force pip to copy this folder"
  },
  {
    "path": "micromlgen/templates/svm/computations/class.jinja",
    "chars": 169,
    "preview": "int val = votes[0];\nint idx = 0;\n\nfor (int i = 1; i < {{ sizes.classes }}; i++) {\n    if (votes[i] > val) {\n        val "
  },
  {
    "path": "micromlgen/templates/svm/computations/decisions.jinja",
    "chars": 1277,
    "preview": "{% set helpers = {'ii': 0} %}\n\n{% for i in range(0, sizes.classes) %}\n    {% for j in range(i + 1, sizes.classes) %}\n   "
  },
  {
    "path": "micromlgen/templates/svm/computations/kernel/arduino.jinja",
    "chars": 194,
    "preview": "{% for i, w in f.enumerate(arrays.supports) %}\n    kernels[{{ i }}] = compute_kernel(x, {% for j, wj in f.enumerate(w) %"
  },
  {
    "path": "micromlgen/templates/svm/computations/kernel/attiny.jinja",
    "chars": 237,
    "preview": "float w[{{ arrays.supports[0]|length }}] = { 0 };\n\n{% for i, w in f.enumerate(arrays.supports) %}\n    {% for j, wj in f."
  },
  {
    "path": "micromlgen/templates/svm/computations/votes.jinja",
    "chars": 283,
    "preview": "{% set helpers = {'ii': 0} %}\n\n{% for i in range(0, sizes.classes) %}\n  {% for j in range(i + 1, sizes.classes) %}\n     "
  },
  {
    "path": "micromlgen/templates/svm/kernel/arduino.jinja",
    "chars": 324,
    "preview": "/**\n * Compute kernel between feature vector and support vector.\n * Kernel type: {{ kernel['type'] }}\n */\nfloat compute_"
  },
  {
    "path": "micromlgen/templates/svm/kernel/attiny.jinja",
    "chars": 282,
    "preview": "/**\n * Compute kernel between feature vector and support vector.\n * Kernel type: {{ kernel['type'] }}\n */\nfloat compute_"
  },
  {
    "path": "micromlgen/templates/svm/kernel/kernel.jinja",
    "chars": 715,
    "preview": "float kernel = 0.0;\n\nfor (uint16_t i = 0; i < {{ sizes.features }}; i++) {\n    {% if kernel['type'] in ['linear', 'poly'"
  },
  {
    "path": "micromlgen/templates/svm/svm.jinja",
    "chars": 1345,
    "preview": "{% extends '_skeleton.jinja' %}\n\n{% block predict %}\n\n    float kernels[{{ sizes.vectors }}] = { 0 };\n    {% if sizes.cl"
  },
  {
    "path": "micromlgen/templates/testset.jinja",
    "chars": 3022,
    "preview": "#pragma once\n\nnamespace Eloquent {\n    namespace ML {\n        namespace Test {\n\n            /**\n             * A tailor "
  },
  {
    "path": "micromlgen/templates/trainset.jinja",
    "chars": 978,
    "preview": "#pragma once\n\nnamespace Eloquent {\n    namespace ML {\n        namespace Train {\n\n            /**\n             * A tailor"
  },
  {
    "path": "micromlgen/templates/vote.jinja",
    "chars": 231,
    "preview": "// return argmax of votes\nuint8_t classIdx = 0;\nfloat maxVotes = votes[0];\n\nfor (uint8_t i = 1; i < {{ n_classes }}; i++"
  },
  {
    "path": "micromlgen/templates/wifiindoorpositioning/__init__.py",
    "chars": 44,
    "preview": "# here just to force pip to copy this folder"
  },
  {
    "path": "micromlgen/templates/wifiindoorpositioning/wifiindoorpositioning.jinja",
    "chars": 1262,
    "preview": "#pragma once\n\nnamespace Eloquent {\n    namespace Projects {\n        class WifiIndoorPositioning {\n            public:\n  "
  },
  {
    "path": "micromlgen/templates/xgboost/__init__.py",
    "chars": 44,
    "preview": "# here just to force pip to copy this folder"
  },
  {
    "path": "micromlgen/templates/xgboost/tree.jinja",
    "chars": 437,
    "preview": "{% if tree['left'][i] != tree['right'][i] %}\n    if (x[{{ tree['features'][i] }}] <= {{ tree['thresholds'][i] }}) {\n    "
  },
  {
    "path": "micromlgen/templates/xgboost/xgboost.jinja",
    "chars": 360,
    "preview": "{% extends '_skeleton.jinja' %}\n\n{% block predict %}\n    float votes[{{ n_classes }}] = { 0.0f };\n\n    {% for k, tree in"
  },
  {
    "path": "micromlgen/utils.py",
    "chars": 3362,
    "preview": "import os\nimport re\nfrom math import sin, cos\nfrom collections.abc import Iterable\nfrom inspect import getmro\nfrom jinja"
  },
  {
    "path": "micromlgen/wifiindoorpositioning.py",
    "chars": 2094,
    "preview": "import json\nimport numpy as np\nfrom micromlgen.utils import jinja\n\n\ndef parse_samples(samples, parser):\n    for line in "
  },
  {
    "path": "micromlgen/xgboost.py",
    "chars": 1547,
    "preview": "from micromlgen.utils import jinja, check_type\nfrom tempfile import NamedTemporaryFile\nimport json\n\n\ndef format_tree(tre"
  },
  {
    "path": "publish",
    "chars": 515,
    "preview": "#!/bin/bash\n\ntemplates=$(python -c 'import glob; print([f.replace(\"micromlgen/\", \"\") for f in glob.glob(\"micromlgen/temp"
  },
  {
    "path": "setup.cfg",
    "chars": 39,
    "preview": "[metadata]\ndescription-file = README.md"
  },
  {
    "path": "setup.py",
    "chars": 3498,
    "preview": "from distutils.core import setup\n\n\nsetup(\n  name = 'micromlgen',\n  packages = ['micromlgen'],\n  version = '1.1.28',\n  li"
  },
  {
    "path": "setup_template.py",
    "chars": 1053,
    "preview": "from distutils.core import setup\n\n\nsetup(\n  name = 'micromlgen',\n  packages = ['micromlgen'],\n  version = 'VERSION',\n  l"
  }
]
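Every `is_*` predicate above delegates to `utils.check_type`, and the utils.py preview shows `from inspect import getmro` — suggesting the classifier's class hierarchy is matched against class *names*, so micromlgen never has to import sklearn or xgboost itself. A sketch of that check (behavior inferred from the signature and imports; the real code is in micromlgen/utils.py):

```python
from inspect import getmro

def check_type(instance, *classes):
    """Return True if any class in the instance's MRO matches one of the
    given class names; matching by name keeps sklearn/xgboost optional."""
    mro_names = [klass.__name__ for klass in getmro(type(instance))]
    return any(name in mro_names for name in classes)

class BaseEstimator:          # stand-in hierarchy for illustration
    pass

class SVC(BaseEstimator):
    pass

print(check_type(SVC(), 'SVC'))         # matches the class itself
print(check_type(SVC(), 'RVC', 'SVC'))  # matches any of several names
print(check_type(SVC(), 'KMeans'))      # no match
```

Name-based matching also explains the ALLOWED_CLASSIFIERS list of strings in micromlgen/platforms.py ('SVC', 'OneClas…').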

About this extraction

This page contains the full source code of the eloquentarduino/micromlgen GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 79 files (52.2 KB), approximately 16.6k tokens, and a symbol index with 40 extracted functions, classes, methods, constants, and types. It can be used with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
