Repository: vc1492a/PyNomaly
Branch: main
Commit: d0ace6f9938f
Files: 23
Total size: 110.1 KB
Directory structure:
gitextract_82bct4dc/
├── .github/
│ ├── FUNDING.yml
│ └── workflows/
│ └── tests.yml
├── .gitignore
├── LICENSE
├── PyNomaly/
│ ├── __init__.py
│ └── loop.py
├── README.md
├── changelog.md
├── examples/
│ ├── iris.py
│ ├── iris_dist_grid.py
│ ├── multiple_gaussian_2d.py
│ ├── numba_speed_diff.py
│ ├── numpy_array.py
│ └── stream.py
├── paper/
│ ├── codemeta.json
│ ├── paper.bib
│ └── paper.md
├── requirements.txt
├── requirements_ci.txt
├── requirements_examples.txt
├── setup.py
└── tests/
├── __init__.py
└── test_loop.py
================================================
FILE CONTENTS
================================================
================================================
FILE: .github/FUNDING.yml
================================================
github: [vc1492a]
================================================
FILE: .github/workflows/tests.yml
================================================
# This workflow will install Python dependencies, run tests and lint with a variety of Python versions
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python
name: tests
on:
push:
branches: [ "main", "dev" ]
pull_request:
branches: [ "main", "dev" ]
jobs:
test:
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
python-version: ["3.8", "3.9", "3.10", "3.11", "3.12", "3.13"]
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v3
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install flake8 pytest
pip install -r requirements.txt
pip install -r requirements_ci.txt
- name: Lint with flake8
run: |
# stop the build if there are Python syntax errors or undefined names
flake8 . --count --exit-zero --select=E9,F63,F7,F82 --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: Test with pytest
run: |
pytest --cov=PyNomaly
================================================
FILE: .gitignore
================================================
*.DS_STORE
.idea/
__pycache__/
*.csv
nasaValve
rel_research
PyNomaly/loop_dev.py
/PyNomaly.egg-info/
*.pyc
*.coverage.*
.coveragerc
.pypirc
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml
# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
================================================
FILE: LICENSE
================================================
Copyright 2017 Valentino Constantinou.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
================================================
FILE: PyNomaly/__init__.py
================================================
# Authors: Valentino Constantinou <vc@valentino.io>
# License: Apache 2.0
from PyNomaly.loop import (
LocalOutlierProbability,
PyNomalyError,
ValidationError,
ClusterSizeError,
MissingValuesError,
)
__all__ = [
"LocalOutlierProbability",
"PyNomalyError",
"ValidationError",
"ClusterSizeError",
"MissingValuesError",
]
================================================
FILE: PyNomaly/loop.py
================================================
from math import erf, sqrt
import numpy as np
from python_utils.terminal import get_terminal_size
import sys
from typing import Tuple, Union
import warnings
try:
import numba
except ImportError:
pass
__author__ = "Valentino Constantinou"
__version__ = "0.3.5"
__license__ = "Apache License, Version 2.0"
# Custom Exceptions
class PyNomalyError(Exception):
"""Base exception for PyNomaly."""
pass
class ValidationError(PyNomalyError):
"""Raised when input validation fails."""
pass
class ClusterSizeError(ValidationError):
"""Raised when cluster size is smaller than n_neighbors."""
pass
class MissingValuesError(ValidationError):
"""Raised when data contains missing values."""
pass
class Utils:
@staticmethod
def emit_progress_bar(progress: str, index: int, total: int) -> str:
"""
A progress bar that is continuously updated in Python's standard
out.
:param progress: a string printed to stdout that is updated and later
returned.
:param index: the current index of the iteration within the tracked
process.
:param total: the total length of the tracked process.
:return: progress string.
"""
w, h = get_terminal_size()
sys.stdout.write("\r")
if total < w:
block_size = int(w / total)
else:
block_size = int(total / w)
if index % block_size == 0:
progress += "="
percent = index / total
sys.stdout.write("[ %s ] %.2f%%" % (progress, percent * 100))
sys.stdout.flush()
return progress
class LocalOutlierProbability(object):
"""
:param data: a Pandas DataFrame or Numpy array of float data
:param extent: an integer value [1, 2, 3] that controls the statistical
extent, e.g. lambda times the standard deviation from the mean (optional,
default 3)
:param n_neighbors: the total number of neighbors to consider w.r.t. each
sample (optional, default 10)
:param cluster_labels: a numpy array of cluster assignments w.r.t. each
sample (optional, default None)
:return:
Based on the work of Kriegel, Kröger, Schubert, and Zimek (2009) in LoOP:
Local Outlier Probabilities.
References
----------
.. [1] Breunig M., Kriegel H.-P., Ng R., Sander, J. LOF: Identifying
Density-based Local Outliers. ACM SIGMOD
International Conference on Management of Data (2000).
.. [2] Kriegel H.-P., Kröger P., Schubert E., Zimek A. LoOP: Local Outlier
Probabilities. 18th ACM conference on
Information and knowledge management, CIKM (2009).
.. [3] Goldstein M., Uchida S. A Comparative Evaluation of Unsupervised
Anomaly Detection Algorithms for Multivariate Data. PLoS ONE 11(4):
e0152173 (2016).
.. [4] Hamlet C., Straub J., Russell M., Kerlin S. An incremental and
approximate local outlier probability algorithm for intrusion
detection and its evaluation. Journal of Cyber Security Technology
(2016).
"""
"""
Validation methods.
These methods validate inputs and raise exceptions or warnings as appropriate.
"""
@staticmethod
def _convert_to_array(obj: Union["pd.DataFrame", np.ndarray]) -> np.ndarray:
"""
Converts the input data to a numpy array if it is a Pandas DataFrame
or validates it is already a numpy array.
:param obj: user-provided input data.
:return: a vector of values to be used in calculating the local
outlier probability.
"""
if obj.__class__.__name__ == "DataFrame":
points_vector = obj.values
return points_vector
elif obj.__class__.__name__ == "ndarray":
points_vector = obj
return points_vector
else:
warnings.warn(
"Provided data or distance matrix must be in ndarray "
"or DataFrame.",
UserWarning,
)
if isinstance(obj, list):
points_vector = np.array(obj)
return points_vector
points_vector = np.array([obj])
return points_vector
def _validate_inputs(self):
"""
Validates the inputs provided during initialization to ensure
that the needed objects are provided.
:return: a tuple of (data, distance_matrix, neighbor_matrix) on
success, or False after issuing a warning for invalid inputs.
"""
if all(v is None for v in [self.data, self.distance_matrix]):
warnings.warn(
"Data or a distance matrix must be provided.", UserWarning
)
return False
elif all(v is not None for v in [self.data, self.distance_matrix]):
warnings.warn(
"Only one of the following may be provided: data or a "
"distance matrix (not both).",
UserWarning,
)
return False
if self.data is not None:
points_vector = self._convert_to_array(self.data)
return points_vector, self.distance_matrix, self.neighbor_matrix
if all(
matrix is not None
for matrix in [self.neighbor_matrix, self.distance_matrix]
):
dist_vector = self._convert_to_array(self.distance_matrix)
neigh_vector = self._convert_to_array(self.neighbor_matrix)
else:
warnings.warn(
"A neighbor index matrix and distance matrix must both be "
"provided when not using raw input data.",
UserWarning,
)
return False
if self.distance_matrix.shape != self.neighbor_matrix.shape:
warnings.warn(
"The shape of the distance and neighbor "
"index matrices must match.",
UserWarning,
)
return False
elif (self.distance_matrix.shape[1] != self.n_neighbors) or (
self.neighbor_matrix.shape[1] != self.n_neighbors
):
warnings.warn(
"The shape of the distance or "
"neighbor index matrix does not "
"match the number of neighbors "
"specified.",
UserWarning,
)
return False
return self.data, dist_vector, neigh_vector
def _check_cluster_size(self) -> None:
"""
Validates the cluster labels to ensure that the smallest cluster
size (number of observations in the cluster) is larger than the
specified number of neighbors.
:raises ClusterSizeError: if any cluster is too small.
"""
c_labels = self._cluster_labels()
for cluster_id in set(c_labels):
c_size = np.where(c_labels == cluster_id)[0].shape[0]
if c_size <= self.n_neighbors:
raise ClusterSizeError(
"Number of neighbors specified larger than smallest "
"cluster. Specify a number of neighbors smaller than "
"the smallest cluster size (observations in smallest "
"cluster minus one)."
)
def _check_n_neighbors(self) -> bool:
"""
Validates the specified number of neighbors to ensure that it is
greater than 0 and that the specified value is less than the total
number of observations.
:return: a boolean indicating whether validation has passed without
adjustment.
"""
if not self.n_neighbors > 0:
self.n_neighbors = 10
warnings.warn(
"n_neighbors must be greater than 0."
" Fit with " + str(self.n_neighbors) + " instead.",
UserWarning,
)
return False
elif self.n_neighbors >= self._n_observations():
self.n_neighbors = self._n_observations() - 1
warnings.warn(
"n_neighbors must be less than the number of observations."
" Fit with " + str(self.n_neighbors) + " instead.",
UserWarning,
)
return True
def _check_extent(self) -> bool:
"""
Validates the specified extent parameter to ensure it is either 1,
2, or 3.
:return: a boolean indicating whether validation has passed.
"""
if self.extent not in [1, 2, 3]:
warnings.warn(
"extent parameter (lambda) must be 1, 2, or 3.", UserWarning
)
return False
return True
def _check_missing_values(self) -> None:
"""
Validates the provided data to ensure that it contains no
missing values.
:raises MissingValuesError: if data contains NaN values.
"""
if np.any(np.isnan(self.data)):
raise MissingValuesError(
"Method does not support missing values in input data."
)
def _check_is_fit(self) -> bool:
"""
Checks that the model was fit prior to calling the stream() method.
:return: a boolean indicating whether the model has been fit.
"""
if self.is_fit is False:
warnings.warn(
"Must fit on historical data by calling fit() prior to "
"calling stream(x).",
UserWarning,
)
return False
return True
def _check_no_cluster_labels(self) -> bool:
"""
Checks whether cluster labels are being used with stream() and, if
so, returns False. As PyNomaly does not accept clustering algorithms
as input, the stream approach does not support clustering.
:return: a boolean indicating whether the data forms a single
cluster (i.e. no cluster labels were provided).
"""
if len(set(self._cluster_labels())) > 1:
warnings.warn(
"Stream approach does not support clustered data. "
"Automatically refit using single cluster of points.",
UserWarning,
)
return False
return True
"""
Decorators.
"""
def accepts(*types):
"""
A decorator that facilitates a form of type checking for the inputs
which can be used in Python 3.4-3.7 in lieu of Python 3.5+'s type
hints.
:param types: the input types of the objects being passed as arguments
in __init__.
:return: a decorator.
"""
def decorator(f):
assert len(types) == f.__code__.co_argcount
def new_f(*args, **kwds):
for a, t in zip(args, types):
if type(a).__name__ == "DataFrame":
a = np.array(a)
if isinstance(a, t) is False:
warnings.warn(
"Argument %r is not of type %s" % (a, t), UserWarning
)
opt_types = {
"distance_matrix": {"type": types[2]},
"neighbor_matrix": {"type": types[3]},
"extent": {"type": types[4]},
"n_neighbors": {"type": types[5]},
"cluster_labels": {"type": types[6]},
"use_numba": {"type": types[7]},
"progress_bar": {"type": types[8]},
}
for x in kwds:
opt_types[x]["value"] = kwds[x]
for k in opt_types:
try:
if (
isinstance(opt_types[k]["value"], opt_types[k]["type"])
is False
):
warnings.warn(
"Argument %r is not of type %s."
% (k, opt_types[k]["type"]),
UserWarning,
)
except KeyError:
pass
return f(*args, **kwds)
new_f.__name__ = f.__name__
return new_f
return decorator
@accepts(
object,
np.ndarray,
np.ndarray,
np.ndarray,
(int, np.integer),
(int, np.integer),
list,
bool,
bool,
)
def __init__(
self,
data=None,
distance_matrix=None,
neighbor_matrix=None,
extent=3,
n_neighbors=10,
cluster_labels=None,
use_numba=False,
progress_bar=False,
) -> None:
self.data = data
self.distance_matrix = distance_matrix
self.neighbor_matrix = neighbor_matrix
self.extent = extent
self.n_neighbors = n_neighbors
self.cluster_labels = cluster_labels
self.use_numba = use_numba
self.points_vector = None
self.prob_distances = None
self.prob_distances_ev = None
self.norm_prob_local_outlier_factor = None
self.local_outlier_probabilities = None
self._objects = {}
self.progress_bar = progress_bar
self.is_fit = False
if self.use_numba is True and "numba" not in sys.modules:
self.use_numba = False
warnings.warn(
"Numba is not available, falling back to pure python mode.", UserWarning
)
self._validate_inputs()
self._check_extent()
"""
Private methods.
"""
@staticmethod
def _standard_distance(cardinality: float, sum_squared_distance: float) -> float:
"""
Calculates the standard distance of an observation.
:param cardinality: the cardinality of the input observation.
:param sum_squared_distance: the sum squared distance between all
neighbors of the input observation.
:return: the standard distance.
"""
division_result = sum_squared_distance / cardinality
st_dist = sqrt(division_result)
return st_dist
@staticmethod
def _prob_distance(extent: int, standard_distance: float) -> float:
"""
Calculates the probabilistic distance of an observation.
:param extent: the extent value specified during initialization.
:param standard_distance: the standard distance of the input
observation.
:return: the probabilistic distance.
"""
return extent * standard_distance
@staticmethod
def _prob_outlier_factor(
probabilistic_distance: np.ndarray, ev_prob_dist: np.ndarray
) -> np.ndarray:
"""
Calculates the probabilistic outlier factor of an observation.
:param probabilistic_distance: the probabilistic distance of the
input observation.
:param ev_prob_dist: the expected value of the probabilistic distance.
:return: the probabilistic outlier factor.
"""
if np.all(probabilistic_distance == ev_prob_dist):
return np.zeros(probabilistic_distance.shape)
else:
ev_prob_dist[ev_prob_dist == 0.0] = 1.0e-8
result = np.divide(probabilistic_distance, ev_prob_dist) - 1.0
return result
@staticmethod
def _norm_prob_outlier_factor(
extent: float, ev_probabilistic_outlier_factor: list
) -> list:
"""
Calculates the normalized probabilistic outlier factor of an
observation.
:param extent: the extent value specified during initialization.
:param ev_probabilistic_outlier_factor: the expected probabilistic
outlier factor of the input observation.
:return: the normalized probabilistic outlier factor.
"""
npofs = []
for i in ev_probabilistic_outlier_factor:
npofs.append(extent * sqrt(i))
return npofs
@staticmethod
def _local_outlier_probability(
plof_val: np.ndarray, nplof_val: np.ndarray
) -> np.ndarray:
"""
Calculates the local outlier probability of an observation.
:param plof_val: the probabilistic outlier factor of the input
observation.
:param nplof_val: the normalized probabilistic outlier factor of the
input observation.
:return: the local outlier probability.
"""
erf_vec = np.vectorize(erf)
if np.all(plof_val == nplof_val):
return np.zeros(plof_val.shape)
else:
return np.maximum(0, erf_vec(plof_val / (nplof_val * np.sqrt(2.0))))
def _n_observations(self) -> int:
"""
Calculates the number of observations in the data.
:return: the number of observations in the input data.
"""
if self.data is not None:
return len(self.data)
return len(self.distance_matrix)
def _store(self) -> np.ndarray:
"""
Initializes the storage matrix that includes the input value,
cluster labels, local outlier probability, etc. for the input data.
:return: an empty numpy array of shape [n_observations, 3].
"""
return np.empty([self._n_observations(), 3], dtype=object)
def _cluster_labels(self) -> np.ndarray:
"""
Returns a numpy array of cluster labels that corresponds to the
input labels or that is an array of all 0 values to indicate all
points belong to the same cluster.
:return: a numpy array of cluster labels.
"""
if self.cluster_labels is None:
if self.data is not None:
return np.array([0] * len(self.data))
return np.array([0] * len(self.distance_matrix))
return np.array(self.cluster_labels)
@staticmethod
def _euclidean(vector1: np.ndarray, vector2: np.ndarray) -> np.ndarray:
"""
Calculates the euclidean distance between two observations in the
input data.
:param vector1: a numpy array corresponding to observation 1.
:param vector2: a numpy array corresponding to observation 2.
:return: the euclidean distance between the two observations.
"""
diff = vector1 - vector2
return np.dot(diff, diff) ** 0.5
def _assign_distances(self, data_store: np.ndarray) -> np.ndarray:
"""
Takes a distance matrix, produced by _distances or provided through
user input, and assigns distances for each observation to the storage
matrix, data_store.
:param data_store: the storage matrix that collects information on
each observation.
:return: the updated storage matrix that collects information on
each observation.
"""
for vec, cluster_id in zip(
range(self.distance_matrix.shape[0]), self._cluster_labels()
):
data_store[vec][0] = cluster_id
data_store[vec][1] = self.distance_matrix[vec]
data_store[vec][2] = self.neighbor_matrix[vec]
return data_store
@staticmethod
def _compute_distance_and_neighbor_matrix(
clust_points_vector: np.ndarray,
indices: np.ndarray,
distances: np.ndarray,
indexes: np.ndarray,
) -> Tuple[np.ndarray, np.ndarray, int]:
"""
This helper method provides the heavy lifting for the _distances
method and is only intended for use therein. The code has been
written so that it can make full use of Numba's jit capabilities if
desired.
"""
for i in range(clust_points_vector.shape[0]):
for j in range(i + 1, clust_points_vector.shape[0]):
# Global index of the points
global_i = indices[0][i]
global_j = indices[0][j]
# Compute Euclidean distance
diff = clust_points_vector[i] - clust_points_vector[j]
d = np.dot(diff, diff) ** 0.5
# Update distance and neighbor index for global_i
idx_max = distances[global_i].argmax()
if d < distances[global_i][idx_max]:
distances[global_i][idx_max] = d
indexes[global_i][idx_max] = global_j
# Update distance and neighbor index for global_j
idx_max = distances[global_j].argmax()
if d < distances[global_j][idx_max]:
distances[global_j][idx_max] = d
indexes[global_j][idx_max] = global_i
yield distances, indexes, i
def _distances(self, progress_bar: bool = False) -> None:
"""
Provides the distances between each observation and its closest
neighbors. When input data is provided, calculates the euclidean
distance between every pair of observations. Otherwise, the
user-provided distance matrix is used.
:return: None; the distance and neighbor index matrices are stored
as self.distance_matrix and self.neighbor_matrix.
"""
distances = np.full(
[self._n_observations(), self.n_neighbors], 9e10, dtype=float
)
indexes = np.full([self._n_observations(), self.n_neighbors], 9e10, dtype=float)
self.points_vector = self._convert_to_array(self.data)
compute = (
numba.jit(self._compute_distance_and_neighbor_matrix, cache=True)
if self.use_numba
else self._compute_distance_and_neighbor_matrix
)
progress = "="
for cluster_id in set(self._cluster_labels()):
indices = np.where(self._cluster_labels() == cluster_id)
clust_points_vector = np.array(
self.points_vector.take(indices, axis=0)[0], dtype=np.float64
)
# a generator that yields an updated distance matrix on each loop
for c in compute(clust_points_vector, indices, distances, indexes):
distances, indexes, i = c
# update the progress bar
if progress_bar is True:
progress = Utils.emit_progress_bar(
progress, i + 1, clust_points_vector.shape[0]
)
self.distance_matrix = distances
self.neighbor_matrix = indexes
def _ssd(self, data_store: np.ndarray) -> np.ndarray:
"""
Calculates the sum squared distance between neighbors for each
observation in the input data.
:param data_store: the storage matrix that collects information on
each observation.
:return: the updated storage matrix that collects information on
each observation.
"""
self.cluster_labels_u = np.unique(data_store[:, 0])
ssd_array = np.empty([self._n_observations(), 1])
for cluster_id in self.cluster_labels_u:
indices = np.where(data_store[:, 0] == cluster_id)
cluster_distances = np.take(data_store[:, 1], indices).tolist()
ssd = np.power(cluster_distances[0], 2).sum(axis=1)
for i, j in zip(indices[0], ssd):
ssd_array[i] = j
data_store = np.hstack((data_store, ssd_array))
return data_store
def _standard_distances(self, data_store: np.ndarray) -> np.ndarray:
"""
Calculates the standard distance for each observation in the input
data. First calculates the cardinality and then calculates the
standard distance with respect to each observation.
:param data_store: the storage matrix that collects information on
each observation.
:return: the updated storage matrix that collects information on
each observation.
"""
cardinality = [self.n_neighbors] * self._n_observations()
vals = data_store[:, 3].tolist()
std_distances = []
for c, v in zip(cardinality, vals):
std_distances.append(self._standard_distance(c, v))
return np.hstack((data_store, np.array([std_distances]).T))
def _prob_distances(self, data_store: np.ndarray) -> np.ndarray:
"""
Calculates the probabilistic distance for each observation in the
input data.
:param data_store: the storage matrix that collects information on
each observation.
:return: the updated storage matrix that collects information on
each observation.
"""
prob_distances = []
for i in range(data_store[:, 4].shape[0]):
prob_distances.append(self._prob_distance(self.extent, data_store[:, 4][i]))
return np.hstack((data_store, np.array([prob_distances]).T))
def _prob_distances_ev(self, data_store: np.ndarray) -> np.ndarray:
"""
Calculates the expected value of the probabilistic distance for
each observation in the input data with respect to the cluster the
observation belongs to.
:param data_store: the storage matrix that collects information on
each observation.
:return: the updated storage matrix that collects information on
each observation.
"""
prob_set_distance_ev = np.empty([self._n_observations(), 1])
for cluster_id in self.cluster_labels_u:
indices = np.where(data_store[:, 0] == cluster_id)[0]
for index in indices:
# Global neighbor indices for the current point
nbrhood = data_store[index][2].astype(int) # Ensure global indices
nbrhood_prob_distances = np.take(data_store[:, 5], nbrhood).astype(
float
)
nbrhood_prob_distances_nonan = nbrhood_prob_distances[
np.logical_not(np.isnan(nbrhood_prob_distances))
]
prob_set_distance_ev[index] = nbrhood_prob_distances_nonan.mean()
self.prob_distances_ev = prob_set_distance_ev
return np.hstack((data_store, prob_set_distance_ev))
def _prob_local_outlier_factors(self, data_store: np.ndarray) -> np.ndarray:
"""
Calculates the probabilistic local outlier factor for each
observation in the input data.
:param data_store: the storage matrix that collects information on
each observation.
:return: the updated storage matrix that collects information on
each observation.
"""
return np.hstack(
(
data_store,
np.array(
[
np.apply_along_axis(
self._prob_outlier_factor,
0,
data_store[:, 5],
data_store[:, 6],
)
]
).T,
)
)
def _prob_local_outlier_factors_ev(self, data_store: np.ndarray) -> np.ndarray:
"""
Calculates the expected value of the probabilistic local outlier factor
for each observation in the input data with respect to the cluster the
observation belongs to.
:param data_store: the storage matrix that collects information on
each observation.
:return: the updated storage matrix that collects information on
each observation.
"""
prob_local_outlier_factor_ev_dict = {}
for cluster_id in self.cluster_labels_u:
indices = np.where(data_store[:, 0] == cluster_id)
prob_local_outlier_factors = np.take(data_store[:, 7], indices).astype(
float
)
prob_local_outlier_factors_nonan = prob_local_outlier_factors[
np.logical_not(np.isnan(prob_local_outlier_factors))
]
prob_local_outlier_factor_ev_dict[cluster_id] = np.power(
prob_local_outlier_factors_nonan, 2
).sum() / float(prob_local_outlier_factors_nonan.size)
data_store = np.hstack(
(
data_store,
np.array(
[
[
prob_local_outlier_factor_ev_dict[x]
for x in data_store[:, 0].tolist()
]
]
).T,
)
)
return data_store
def _norm_prob_local_outlier_factors(self, data_store: np.ndarray) -> np.ndarray:
"""
Calculates the normalized probabilistic local outlier factor for each
observation in the input data.
:param data_store: the storage matrix that collects information on
each observation.
:return: the updated storage matrix that collects information on
each observation.
"""
return np.hstack(
(
data_store,
np.array(
[
self._norm_prob_outlier_factor(
self.extent, data_store[:, 8].tolist()
)
]
).T,
)
)
def _local_outlier_probabilities(self, data_store: np.ndarray) -> np.ndarray:
"""
Calculates the local outlier probability for each observation in the
input data.
:param data_store: the storage matrix that collects information on
each observation.
:return: the updated storage matrix that collects information on
each observation.
"""
return np.hstack(
(
data_store,
np.array(
[
np.apply_along_axis(
self._local_outlier_probability,
0,
data_store[:, 7],
data_store[:, 9],
)
]
).T,
)
)
"""
Public methods
"""
def fit(self) -> "LocalOutlierProbability":
"""
Calculates the local outlier probability for each observation in the
input data according to the input parameters extent, n_neighbors, and
cluster_labels.
:return: self, which contains the local outlier probabilities as
self.local_outlier_probabilities.
:raises ClusterSizeError: if any cluster is smaller than n_neighbors.
:raises MissingValuesError: if data contains missing values.
"""
self._check_n_neighbors()
self._check_cluster_size()
if self.data is not None:
self._check_missing_values()
store = self._store()
if self.data is not None:
self._distances(progress_bar=self.progress_bar)
store = self._assign_distances(store)
store = self._ssd(store)
store = self._standard_distances(store)
store = self._prob_distances(store)
self.prob_distances = store[:, 5]
store = self._prob_distances_ev(store)
store = self._prob_local_outlier_factors(store)
store = self._prob_local_outlier_factors_ev(store)
store = self._norm_prob_local_outlier_factors(store)
self.norm_prob_local_outlier_factor = store[:, 9].max()
store = self._local_outlier_probabilities(store)
self.local_outlier_probabilities = store[:, 10]
self.is_fit = True
return self
def stream(self, x: np.ndarray) -> np.ndarray:
"""
Calculates the local outlier probability for an individual sample
according to the input parameters extent, n_neighbors, and
cluster_labels after first calling fit(). Observations are assigned
a local outlier probability against the mean of expected values of
probabilistic distance and the normalized probabilistic outlier
factor from the earlier model, provided when calling fit().
:param x: an observation to score for its local outlier probability.
:return: the local outlier probability of the input observation.
"""
orig_cluster_labels = None
if self._check_no_cluster_labels() is False:
orig_cluster_labels = self.cluster_labels
self.cluster_labels = np.array([0] * len(self.data))
if self._check_is_fit() is False:
self.fit()
point_vector = self._convert_to_array(x)
distances = np.full([1, self.n_neighbors], 9e10, dtype=float)
if self.data is not None:
matrix = self.points_vector
else:
matrix = self.distance_matrix
# When using distance matrix mode, x is a scalar distance value.
# Extract scalar from array to avoid NumPy assignment errors.
if point_vector.size == 1:
point_vector = float(point_vector.flat[0])
for p in range(0, matrix.shape[0]):
if self.data is not None:
d = self._euclidean(matrix[p, :], point_vector)
else:
d = point_vector
idx_max = distances[0].argmax()
if d < distances[0][idx_max]:
distances[0][idx_max] = d
ssd = np.power(distances, 2).sum()
std_dist = np.sqrt(np.divide(ssd, self.n_neighbors))
prob_dist = self._prob_distance(self.extent, std_dist)
plof = self._prob_outlier_factor(
np.array(prob_dist), np.array(self.prob_distances_ev.mean())
)
loop = self._local_outlier_probability(
plof, self.norm_prob_local_outlier_factor
)
if orig_cluster_labels is not None:
self.cluster_labels = orig_cluster_labels
return loop
================================================
FILE: README.md
================================================
# PyNomaly
PyNomaly is a Python 3 implementation of LoOP (Local Outlier Probabilities).
LoOP is a local density based outlier detection method by Kriegel, Kröger, Schubert, and Zimek which provides outlier
scores in the range of [0,1] that are directly interpretable as the probability of a sample being an outlier.
PyNomaly is a core library of [deepchecks](https://github.com/deepchecks/deepchecks), [OmniDocBench](https://github.com/opendatalab/OmniDocBench) and [pysad](https://github.com/selimfirat/pysad).
[](https://opensource.org/licenses/Apache-2.0)
[](https://pypi.python.org/pypi/PyNomaly/0.3.5)
[](https://pepy.tech/projects/pynomaly)
[](https://pepy.tech/projects/pynomaly)

[](https://coveralls.io/github/vc1492a/PyNomaly?branch=main)
[](http://joss.theoj.org/papers/f4d2cfe680768526da7c1f6a2c103266)
The outlier score of each sample is called the Local Outlier Probability.
It measures the local deviation of density of a given sample with
respect to its neighbors as Local Outlier Factor (LOF), but provides normalized
outlier scores in the range [0,1]. These outlier scores are directly interpretable
as a probability of an object being an outlier. Since Local Outlier Probabilities provides scores in the
range [0,1], practitioners are free to interpret the results according to the application.
Like LOF, it is local in that the anomaly score depends on how isolated the sample is
with respect to the surrounding neighborhood. Locality is given by k-nearest neighbors,
whose distance is used to estimate the local density. By comparing the local density of a sample to the
local densities of its neighbors, one can identify samples that lie in regions of lower
density compared to their neighbors and thus identify samples that may be outliers according to their Local
Outlier Probability.
The authors' 2009 paper detailing LoOP's theory, formulation, and application is provided by
Ludwig-Maximilians University Munich - Institute for Informatics;
[LoOP: Local Outlier Probabilities](http://www.dbs.ifi.lmu.de/Publikationen/Papers/LoOP1649.pdf).
## Implementation
This Python 3 implementation uses Numpy and the formulas outlined in
[LoOP: Local Outlier Probabilities](http://www.dbs.ifi.lmu.de/Publikationen/Papers/LoOP1649.pdf)
to calculate the Local Outlier Probability of each sample.
## Dependencies
- Python 3.8 - 3.13
- numpy >= 1.16.3
- python-utils >= 2.3.0
- (optional) numba >= 0.45.1
Numba just-in-time (JIT) compiles the function which calculates the Euclidean
distance between observations, providing a reduction in computation time
(significantly when a large number of observations are scored). Numba is not a
requirement and PyNomaly may still be used solely with numpy if desired
(details below).
## Quick Start
First install the package from the Python Package Index:
```shell
pip install PyNomaly # or pip3 install ... if you're using both Python 3 and 2.
```
Alternatively, you can use conda to install the package from conda-forge:
```shell
conda install conda-forge::pynomaly
```
Then you can do something like this:
```python
from PyNomaly import loop
m = loop.LocalOutlierProbability(data).fit()
scores = m.local_outlier_probabilities
print(scores)
```
where *data* is a NxM (N rows, M columns; 2-dimensional) set of data as either a Pandas DataFrame or Numpy array.
LocalOutlierProbability sets the *extent* (an integer value of 1, 2, or 3) and *n_neighbors* (must be greater than 0) parameters with the default
values of 3 and 10, respectively. You're free to set these parameters on your own as below:
```python
from PyNomaly import loop
m = loop.LocalOutlierProbability(data, extent=2, n_neighbors=20).fit()
scores = m.local_outlier_probabilities
print(scores)
```
This implementation of LoOP also includes an optional *cluster_labels* parameter. This is useful in cases where regions
of varying density occur within the same set of data. When using *cluster_labels*, the Local Outlier Probability of a
sample is calculated with respect to its cluster assignment.
```python
from PyNomaly import loop
from sklearn.cluster import DBSCAN
db = DBSCAN(eps=0.6, min_samples=50).fit(data)
m = loop.LocalOutlierProbability(data, extent=2, n_neighbors=20, cluster_labels=list(db.labels_)).fit()
scores = m.local_outlier_probabilities
print(scores)
```
**NOTE**: Unless your data is all the same scale, it may be a good idea to normalize your data with z-scores or another
normalization scheme prior to using LoOP, especially when working with multiple dimensions of varying scale.
Users must also appropriately handle missing values prior to using LoOP, as LoOP does not support Pandas
DataFrames or Numpy arrays with missing values.
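As a brief illustration of such a normalization step (the values below are hypothetical), a z-score transform with plain numpy might look like the following; the resulting array can then be passed to `LocalOutlierProbability` as usual:

```python
import numpy as np

# toy data on two very different scales (hypothetical values)
data = np.array([
    [1.0, 1000.0],
    [1.2, 1100.0],
    [0.9, 950.0],
    [1.1, 5000.0],  # outlying in the second feature
    [1.0, 1020.0],
])

# z-score each column so every feature contributes on a comparable scale
normalized = (data - data.mean(axis=0)) / data.std(axis=0)

# each column now has mean ~0 and standard deviation ~1
print(normalized.mean(axis=0), normalized.std(axis=0))
```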
### Utilizing Numba and Progress Bars
It may be helpful to use just-in-time (JIT) compilation in the cases where a lot of
observations are scored. Numba, a JIT compiler for Python, may be used
with PyNomaly by setting `use_numba=True`:
```python
from PyNomaly import loop
m = loop.LocalOutlierProbability(data, extent=2, n_neighbors=20, use_numba=True, progress_bar=True).fit()
scores = m.local_outlier_probabilities
print(scores)
```
Numba must be installed in order to use JIT compilation and improve the
speed of multiple calls to `LocalOutlierProbability()`, and PyNomaly has been
tested with Numba version 0.45.1. An example of the speed difference that can
be realized with using Numba is available in `examples/numba_speed_diff.py`.
You may also choose to print progress bars _with or without_ the use of numba
by passing `progress_bar=True` to the `LocalOutlierProbability()` method as above.
### Choosing Parameters
The *extent* parameter controls the sensitivity of the scoring in practice. The parameter corresponds to
the statistical notion of an outlier defined as an object deviating more than a given lambda (*extent*)
times the standard deviation from the mean. A value of 2 implies outliers deviating more than 2 standard deviations
from the mean, and corresponds to 95.0% in the empirical "three-sigma" rule. The appropriate parameter should be selected
according to the level of sensitivity needed for the input data and application. The question to ask is whether it is
more reasonable to assume outliers in your data are 1, 2, or 3 standard deviations from the mean, and select the value
likely most appropriate to your data and application.
The *n_neighbors* parameter defines the number of neighbors to consider about
each sample (the neighborhood size) when determining its Local Outlier Probability with respect to the density
of the sample's defined neighborhood. The ideal number of neighbors to consider is dependent on the
input data. However, the notion of an outlier implies it would be considered as such regardless of the number
of neighbors considered. One potential approach is to use a number of different neighborhood sizes and average
the results for each observation. Those observations which rank highly with varying neighborhood sizes are
more than likely outliers. Another approach is to
select a value proportional to the number of observations, such as an odd-valued integer close to the square root
of the number of observations in your data (*sqrt(n_observations)*).
## Iris Data Example
We'll be using the well-known Iris dataset to show LoOP's capabilities. There are a few things you'll need for this
example beyond the standard prerequisites listed above:
- matplotlib 2.0.0 or greater
- PyDataset 0.2.0 or greater
- scikit-learn 0.18.1 or greater
First, let's import the packages and libraries we will need for this example.
```python
from PyNomaly import loop
import pandas as pd
from pydataset import data
import numpy as np
from sklearn.cluster import DBSCAN
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
```
Now let's load the Iris data, which we'll score both with and without clustering.
```python
# import the data and remove any non-numeric columns
iris = pd.DataFrame(data('iris').drop(columns=['Species']))
```
Next, let's cluster the data using DBSCAN and generate two sets of scores. In both cases, we will use the default
values for both *extent* (3) and *n_neighbors* (10).
```python
db = DBSCAN(eps=0.9, min_samples=10).fit(iris)
m = loop.LocalOutlierProbability(iris).fit()
scores_noclust = m.local_outlier_probabilities
m_clust = loop.LocalOutlierProbability(iris, cluster_labels=list(db.labels_)).fit()
scores_clust = m_clust.local_outlier_probabilities
```
Organize the data into two separate Pandas DataFrames.
```python
iris_clust = pd.DataFrame(iris.copy())
iris_clust['scores'] = scores_clust
iris_clust['labels'] = db.labels_
iris['scores'] = scores_noclust
```
And finally, let's visualize the scores provided by LoOP in both cases (with and without clustering).
```python
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(iris['Sepal.Width'], iris['Petal.Width'], iris['Sepal.Length'],
c=iris['scores'], cmap='seismic', s=50)
ax.set_xlabel('Sepal.Width')
ax.set_ylabel('Petal.Width')
ax.set_zlabel('Sepal.Length')
plt.show()
plt.clf()
plt.cla()
plt.close()
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(iris_clust['Sepal.Width'], iris_clust['Petal.Width'], iris_clust['Sepal.Length'],
c=iris_clust['scores'], cmap='seismic', s=50)
ax.set_xlabel('Sepal.Width')
ax.set_ylabel('Petal.Width')
ax.set_zlabel('Sepal.Length')
plt.show()
plt.clf()
plt.cla()
plt.close()
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(iris_clust['Sepal.Width'], iris_clust['Petal.Width'], iris_clust['Sepal.Length'],
c=iris_clust['labels'], cmap='Set1', s=50)
ax.set_xlabel('Sepal.Width')
ax.set_ylabel('Petal.Width')
ax.set_zlabel('Sepal.Length')
plt.show()
plt.clf()
plt.cla()
plt.close()
```
Your results should look like the following:
**LoOP Scores without Clustering**

**LoOP Scores with Clustering**

**DBSCAN Cluster Assignments**

Note the differences between using LocalOutlierProbability with and without clustering. In the example without clustering, samples are
scored according to the distribution of the entire data set. In the example with clustering, each sample is scored
according to the distribution of each cluster. Which approach is suitable depends on the use case.
**NOTE**: Data was not normalized in this example, but it's probably a good idea to do so in practice.
## Using Numpy
When using numpy, make sure to use 2-dimensional arrays in tabular format:
```python
data = np.array([
[43.3, 30.2, 90.2],
[62.9, 58.3, 49.3],
[55.2, 56.2, 134.2],
[48.6, 80.3, 50.3],
[67.1, 60.0, 55.9],
[421.5, 90.3, 50.0]
])
scores = loop.LocalOutlierProbability(data, n_neighbors=3).fit().local_outlier_probabilities
print(scores)
```
The shape of the input array corresponds to the rows (observations) and columns (features) in the data:
```python
print(data.shape)
# (6,3), which matches number of observations and features in the above example
```
Similar to the above:
```python
data = np.random.rand(100, 5)
scores = loop.LocalOutlierProbability(data).fit().local_outlier_probabilities
print(scores)
```
## Specifying a Distance Matrix
PyNomaly provides the ability to specify a distance matrix so that any
distance metric can be used (a neighbor index matrix must also be provided).
This can be useful when wanting to use a distance other than the Euclidean.
Note that in order to maintain alignment with the LoOP definition of closest neighbors,
an additional neighbor is added when using [scikit-learn's NearestNeighbors](https://scikit-learn.org/1.5/modules/neighbors.html), since `NearestNeighbors`
includes the point itself when calculating the closest neighbors (whereas the LoOP method does not include the distance of a point to itself).
```python
import numpy as np
from PyNomaly import loop
from sklearn.neighbors import NearestNeighbors
data = np.array([
[43.3, 30.2, 90.2],
[62.9, 58.3, 49.3],
[55.2, 56.2, 134.2],
[48.6, 80.3, 50.3],
[67.1, 60.0, 55.9],
[421.5, 90.3, 50.0]
])
# Generate distance and neighbor matrices
n_neighbors = 3 # the number of neighbors according to the LoOP definition
neigh = NearestNeighbors(n_neighbors=n_neighbors+1, metric='hamming')
neigh.fit(data)
d, idx = neigh.kneighbors(data, return_distance=True)
# Remove self-distances - you MUST do this to preserve the same results as intended by the definition of LoOP
idx = np.delete(idx, 0, 1)
d = np.delete(d, 0, 1)
# Fit and return scores
m = loop.LocalOutlierProbability(distance_matrix=d, neighbor_matrix=idx, n_neighbors=n_neighbors).fit()
scores = m.local_outlier_probabilities
```
The below visualization shows the results by a few known distance metrics:
**LoOP Scores by Distance Metric**

## Streaming Data
PyNomaly also contains an implementation of Hamlet et al.'s modifications
to the original LoOP approach [[4](http://www.tandfonline.com/doi/abs/10.1080/23742917.2016.1226651?journalCode=tsec20)],
which may be used for applications involving streaming data or where rapid calculations may be necessary.
First, the standard LoOP algorithm is fit to "training" data, with certain attributes of the fitted data
stored from the original LoOP approach. Then, as new points are considered, these fitted attributes are
used when calculating the score of the incoming streaming data, e.g. a global value for the expected value
of the probabilistic distance in place of per-neighborhood expected values. Despite the potential
for increased error when compared to the standard approach, this may be effective in streaming applications where
refitting the standard approach over all points would be computationally expensive.
While the iris dataset is not streaming data, we'll use it in this example by taking the first 120 observations
as training data and take the remaining 30 observations as a stream, scoring each observation
individually.
Split the data.
```python
iris = iris.sample(frac=1) # shuffle data
iris_train = iris.iloc[:, 0:4].head(120)
iris_test = iris.iloc[:, 0:4].tail(30)
```
Fit to each set.
```python
m = loop.LocalOutlierProbability(iris).fit()
scores_noclust = m.local_outlier_probabilities
iris['scores'] = scores_noclust
m_train = loop.LocalOutlierProbability(iris_train, n_neighbors=10)
m_train.fit()
iris_train_scores = m_train.local_outlier_probabilities
```
```python
iris_test_scores = []
for index, row in iris_test.iterrows():
    array = np.array([row['Sepal.Length'], row['Sepal.Width'], row['Petal.Length'], row['Petal.Width']])
    iris_test_scores.append(m_train.stream(array))
iris_test_scores = np.array(iris_test_scores)
```
Concatenate the scores and assess.
```python
iris['stream_scores'] = np.hstack((iris_train_scores, iris_test_scores))
# iris['scores'] from earlier example
rmse = np.sqrt(((iris['scores'] - iris['stream_scores']) ** 2).mean(axis=None))
print(rmse)
```
The root mean squared error (RMSE) between the two approaches is approximately 0.199 (your scores will vary depending on the data and specification).
The plot below shows the scores from the stream approach.
```python
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(iris['Sepal.Width'], iris['Petal.Width'], iris['Sepal.Length'],
c=iris['stream_scores'], cmap='seismic', s=50)
ax.set_xlabel('Sepal.Width')
ax.set_ylabel('Petal.Width')
ax.set_zlabel('Sepal.Length')
plt.show()
plt.clf()
plt.cla()
plt.close()
```
**LoOP Scores using Stream Approach with n=10**

### Notes
When calculating the LoOP score of incoming data, the original fitted scores are not updated.
In some applications, it may be beneficial to refit the data periodically. The stream functionality
also assumes that either data or a distance matrix (or value) will be used in both fitting
and streaming, with no changes in specification between steps.
## Contributing
Please use the issue tracker to report any erroneous behavior or desired
feature requests.
If you would like to contribute to development, please fork the repository and make
any changes to a branch which corresponds to an open issue. Hot fixes
and bug fixes can be represented by branches with the prefix `fix/` versus
`feature/` for new capabilities or code improvements. Pull requests will
then be made from these branches into the repository's `dev` branch
prior to being pulled into `main`.
### Commit Messages and Releases
**Your commit messages are important** - here's why.
PyNomaly leverages [release-please](https://github.com/googleapis/release-please-action) to help automate the release process using the [Conventional Commits](https://www.conventionalcommits.org/) specification. When pull requests are opened to the `main` branch, release-please will collate the git commit messages and prepare an organized changelog and release notes. This process can be completed because of the Conventional Commits specification.
Conventional Commits provides an easy set of rules for creating an explicit commit history, which makes it easier to write automated tools on top of. This convention dovetails with SemVer by describing the features, fixes, and breaking changes made in commit messages. You can check out examples [here](https://www.conventionalcommits.org/en/v1.0.0/#examples). Make a best effort to use the specification when contributing to PyNomaly, as it dramatically eases the documentation around releases and their features, breaking changes, bug fixes and documentation updates.
### Tests
When contributing, please ensure to run unit tests and add additional tests as
necessary if adding new functionality. To run the unit tests, use `pytest`:
```
python3 -m pytest --cov=PyNomaly -s -v
```
To run the tests with Numba enabled, simply set the flag `NUMBA` in `test_loop.py`
to `True`. Note that a drop in coverage is expected due to portions of the code
being compiled upon code execution.
## Versioning
[Semantic versioning](http://semver.org/) is used for this project. If contributing, please conform to semantic
versioning guidelines when submitting a pull request.
## License
This project is licensed under the Apache 2.0 license.
## Research
If citing PyNomaly, use the following:
```
@article{Constantinou2018,
doi = {10.21105/joss.00845},
url = {https://doi.org/10.21105/joss.00845},
year = {2018},
month = {oct},
publisher = {The Open Journal},
volume = {3},
number = {30},
pages = {845},
author = {Valentino Constantinou},
title = {{PyNomaly}: Anomaly detection using Local Outlier Probabilities ({LoOP}).},
journal = {Journal of Open Source Software}
}
```
## References
1. Breunig M., Kriegel H.-P., Ng R., Sander, J. LOF: Identifying Density-based Local Outliers. ACM SIGMOD International Conference on Management of Data (2000). [PDF](http://www.dbs.ifi.lmu.de/Publikationen/Papers/LOF.pdf).
2. Kriegel H., Kröger P., Schubert E., Zimek A. LoOP: Local Outlier Probabilities. 18th ACM conference on Information and knowledge management, CIKM (2009). [PDF](http://www.dbs.ifi.lmu.de/Publikationen/Papers/LoOP1649.pdf).
3. Goldstein M., Uchida S. A Comparative Evaluation of Unsupervised Anomaly Detection Algorithms for Multivariate Data. PLoS ONE 11(4): e0152173 (2016).
4. Hamlet C., Straub J., Russell M., Kerlin S. An incremental and approximate local outlier probability algorithm for intrusion detection and its evaluation. Journal of Cyber Security Technology (2016). [DOI](http://www.tandfonline.com/doi/abs/10.1080/23742917.2016.1226651?journalCode=tsec20).
## Acknowledgements
- The authors of LoOP (Local Outlier Probabilities)
- Hans-Peter Kriegel
- Peer Kröger
- Erich Schubert
- Arthur Zimek
- [NASA Jet Propulsion Laboratory](https://jpl.nasa.gov/)
- [Kyle Hundman](https://github.com/khundman)
- [Ian Colwell](https://github.com/iancolwell)
================================================
FILE: changelog.md
================================================
# Changelog
All notable changes to PyNomaly will be documented in this Changelog.
The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/)
and adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).
## 0.3.5
### Changed
- Refactored the `Validate` class by dissolving it and moving validation methods
directly into `LocalOutlierProbability` as instance methods
([Issue #69](https://github.com/vc1492a/PyNomaly/issues/69)).
- Renamed validation methods for clarity: `_fit()` → `_check_is_fit()`,
`_data()` → `_convert_to_array()`, `_inputs()` → `_validate_inputs()`,
`_cluster_size()` → `_check_cluster_size()`, `_n_neighbors()` → `_check_n_neighbors()`,
`_extent()` → `_check_extent()`, `_missing_values()` → `_check_missing_values()`,
`_no_cluster_labels()` → `_check_no_cluster_labels()`.
- Replaced `sys.exit()` calls with proper exception handling. The library no longer
terminates the Python process on validation errors.
### Added
- Custom exception classes for better error handling: `PyNomalyError` (base),
`ValidationError`, `ClusterSizeError`, and `MissingValuesError`. These are now
exported from the package and can be caught by users.
### Fixed
- Fixed a compatibility issue with NumPy in Python 3.11+ where assigning an array
to a scalar position in `stream()` would raise a `ValueError` when using distance
matrix mode.
## 0.3.4
### Changed
- Changed source code as necessary to address a [user-reported issue](https://github.com/vc1492a/PyNomaly/issues/49), corrected in [this commit](https://github.com/vc1492a/PyNomaly/commit/bbdd12a318316ca9c7e0272a5b06909f3fc4f9b0)
## 0.3.3
### Changed
- The implementation of the progress bar to support use when the number of
observations is less than the width of the Python console in which the code
is being executed (tracked in [this issue](https://github.com/vc1492a/PyNomaly/issues/35)).
### Added
- Docstring to the testing functions to provide some additional documentation
of the testing (tracked in [this issue](https://github.com/vc1492a/PyNomaly/issues/41)).
## 0.3.2
### Changed
- Removed numba as a strict dependency, which is now an optional dependency
that is not needed to use PyNomaly but which provides performance enhancements
when functions are called repeatedly, such as when the number of observations
is large. This relaxes the numba requirement introduced in version 0.3.0.
### Added
- Added progress bar functionality that can be called using
`LocalOutlierProbability(progress_bar=True)` in both native
Python and numba just-in-time (JIT) compiled modes.
This is helpful in cases where PyNomaly is processing a large amount
of observations.
## 0.3.1
### Changed
- Removed Numba JIT compilation from the `_standard_distance` and
`_prob_distance` calculations. Using Numba JIT compilation there does
not result in a speed improvement and only adds compilation overhead.
- Integrated [pull request #33](https://github.com/vc1492a/PyNomaly/pull/33)
which decreases runtime by about 30 to more than 90 percent in some cases, in
particular on repeated calls with larger datasets.
### Added
- Type hinting for unit tests in `tests/test_loop.py`.
## 0.3.0
### Changed
- The manner in which the standard distance is calculated from list
comprehension to a vectorized Numpy implementation, reducing compute
time for that specific calculation by approximately 75%.
- Removed formal testing and support for Python 3.4
([Python 3 adoption rates](https://rushter.com/blog/python-3-adoption/)).
- Raised the minimum numpy version requirement from 1.12.0 to 1.16.3.
### Added
- Numba just in time (JIT) compilation to improve the speed of some
of the core functionality, consistently achieving a further 20% reduction
in compute time when _n_ = 1000. Future optimizations could yield
further reductions in computation time. For now, requiring a strict numba version of `0.43.1`
in anticipation of [this deprecation](http://numba.pydata.org/numba-doc/latest/reference/deprecation.html#deprecation-of-reflection-for-list-and-set-types) -
which does not yet have an implemented solution.
## 0.2.7
### Changed
- Integrated various performance enhancements as described in
[pull request #30](https://github.com/vc1492a/PyNomaly/pull/30) that
increase PyNomaly's performance by up to 50% in some cases.
- Changed the `Validate` class's functions from public to private, as they are
only used in validating the specification and data input into PyNomaly.
### Added
- [Issue #27](https://github.com/vc1492a/PyNomaly/issues/27) - Added
docstring to key functions in PyNomaly to ease future development and
provide additional information.
- Additional unit tests to raise code coverage from 96% to 100%.
## 0.2.6
### Fixed
- [Issue #25](https://github.com/vc1492a/PyNomaly/issues/25) - Fixed an issue
that caused zero division errors when all the values in a neighborhood are
duplicate samples.
### Changed
- The error behavior when attempting to use the stream approach
before calling `fit`. While the previous implementation resulted in a
warning and system exit, PyNomaly now attempts to `fit` (assumes data or a
distance matrix is available) and then later calls `stream`. If no
data or distance matrix is provided, a warning is raised.
### Added
- [Issue #24](https://github.com/vc1492a/PyNomaly/issues/24) - Added
the ability to use one's own distance matrix,
provided a neighbor index matrix is also provided. This ensures
PyNomaly can be used with distances other than the euclidean.
See the file `iris_dist_grid.py` for examples.
- [Issue #23](https://github.com/vc1492a/PyNomaly/issues/23) - Added
Python 3.7 to the tested distributions in Travis CI and passed tests.
- Unit tests to monitor the issues and features covered
in issues [24](https://github.com/vc1492a/PyNomaly/issues/24) and
[25](https://github.com/vc1492a/PyNomaly/issues/25).
## 0.2.5
### Fixed
- [Issue #20](https://github.com/vc1492a/PyNomaly/issues/20) - Fixed
a bug that inadvertently used global means of the probabilistic distance
as the expected value of the probabilistic distance, as opposed to the
expected value of the probabilistic distance within a neighborhood of
a point.
- Integrated [pull request #21](https://github.com/vc1492a/PyNomaly/pull/21) -
This pull request addressed the issue noted above.
### Changed
- Changed the default behavior to strictly not supporting the
use of missing values in the input data, as opposed to the soft enforcement
(a simple user warning) used in the previous behavior.
## 0.2.4
### Fixed
- [Issue #17](https://github.com/vc1492a/PyNomaly/issues/17) - Fixed
a bug that allowed for a column of empty values in the primary data store.
- Integrated [pull request #18](https://github.com/vc1492a/PyNomaly/pull/18) -
Fixed a bug that was not causing dependencies such as numpy to skip
installation when installing PyNomaly via pip.
## 0.2.3
### Fixed
- [Issue #14](https://github.com/vc1492a/PyNomaly/issues/14) - Fixed an issue
that was causing a ZeroDivisionError when the specified neighborhood size
is larger than the total number of observations in the smallest cluster.
## 0.2.2
### Changed
- This implementation to align more closely with the specification of the
approach in the original paper. The extent parameter now takes an integer
value of 1, 2, or 3 that corresponds to the lambda parameter specified
in the paper. See the [readme](https://github.com/vc1492a/PyNomaly/blob/master/readme.md) for more details.
- Refactored the code base and created the Validate class, which includes
checks for data type, correct specification, and other dependencies.
### Added
- Automated tests to ensure the desired functionality is being met can now be
found in the `PyNomaly/tests` directory.
- Code for the examples in the readme can now be found in the `examples` directory.
- Additional information for parameter selection in the [readme](https://github.com/vc1492a/PyNomaly/blob/master/readme.md).
## 0.2.1
### Fixed
- [Issue #10](https://github.com/vc1492a/PyNomaly/issues/10) - Fixed error on line
142 which was causing the class to fail. More explicit examples
were also included in the readme for using numpy arrays.
### Added
- An improvement to the Euclidean distance calculation by [MichaelSchreier](https://github.com/MichaelSchreier)
which brings over a 50% reduction in computation time.
## 0.2.0
### Added
- Added new functionality to PyNomaly by integrating a modified LoOP
approach introduced by Hamlet et al. which can be used for streaming
data applications or in the case where computational expense is a concern.
Data is first fit to a "training set", with any additional observations
considered for outlierness against this initial set.
## 0.1.8
### Fixed
- Fixed an issue which allowed the number of neighbors considered to exceed the number of observations. Added a check
to ensure this is no longer possible.
## 0.1.7
### Fixed
- Fixed an issue inadvertently introduced in 0.1.6 that caused distance calculations to be incorrect,
thus resulting in incorrect LoOP values.
## 0.1.6
### Fixed
- Updated the distance calculation such that the euclidean distance calculation has been separated from
the main distance calculation function.
- Fixed an error in the calculation of the standard distance.
### Changed
- .fit() now returns a fitted object instead of local_outlier_probabilities. Local outlier probabilities can
now be retrieved by calling .local_outlier_probabilities. See the readme for an example.
- Some private functions have been renamed.
## 0.1.5
### Fixed
- [Issue #4](https://github.com/vc1492a/PyNomaly/issues/4) - Separated parameter type checks
from checks for invalid parameter values.
- @accepts decorator verifies LocalOutlierProbability parameters are of correct type.
- Parameter value checks moved from .fit() to init.
- Fixed parameter check to ensure extent value is in the range (0., 1.] instead of [0, 1] (extent cannot be zero).
- [Issue #1](https://github.com/vc1492a/PyNomaly/issues/1) - Added type check using @accepts decorator for cluster_labels.
## 0.1.4
### Fixed
- [Issue #3](https://github.com/vc1492a/PyNomaly/issues/3) - .fit() fails if the sum of squared distances sums to 0.
- Added check to ensure the sum of square distances is greater than zero.
- Added UserWarning to increase the neighborhood size if all neighbors in n_neighbors are
zero distance from an observation.
- Added UserWarning to check for integer type n_neighbor conditions versus float type.
- Changed calculation of the probabilistic local outlier factor expected value to Numpy operation
from base Python.
## 0.1.3
### Fixed
- Altered the distance matrix computation to return a triangular matrix instead of a
fully populated matrix. This was made to ensure no duplicate neighbors were present
in computing the neighborhood distance for each observation.
## 0.1.2
### Added
- LICENSE.txt file of Apache License, Version 2.0.
- setup.py, setup.cfg files configured for release to PyPi.
- Changed name throughout code base from PyLoOP to PyNomaly.
### Other
- Initial release to PyPi.
## 0.1.1
### Other
- A bad push to PyPi necessitated the need to skip a version number.
- Chosen name of PyLoOP not present on test index but present on production PyPi index.
- Issue not known until push was made to the test index.
- Skipped version number to align test and production PyPi indices.
## 0.1.0 - 2017-05-19
### Added
- readme.md file documenting methodology, package dependencies, use cases,
how to contribute, and acknowledgements.
- Initial open release of PyNomaly codebase on Github.
================================================
FILE: examples/iris.py
================================================
from PyNomaly import loop
import pandas as pd
from pydataset import data
from sklearn.cluster import DBSCAN
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
iris = pd.DataFrame(data('iris'))
iris = pd.DataFrame(iris.drop('Species', axis=1))
db = DBSCAN(eps=0.9, min_samples=10).fit(iris)
m = loop.LocalOutlierProbability(iris).fit()
scores_noclust = m.local_outlier_probabilities
m_clust = loop.LocalOutlierProbability(iris, cluster_labels=list(db.labels_)).fit()
scores_clust = m_clust.local_outlier_probabilities
iris_clust = pd.DataFrame(iris.copy())
iris_clust['scores'] = scores_clust
iris_clust['labels'] = db.labels_
iris['scores'] = scores_noclust
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(iris['Sepal.Width'], iris['Petal.Width'], iris['Sepal.Length'],
c=iris['scores'], cmap='seismic', s=50)
ax.set_xlabel('Sepal.Width')
ax.set_ylabel('Petal.Width')
ax.set_zlabel('Sepal.Length')
plt.show()
plt.clf()
plt.cla()
plt.close()
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(iris_clust['Sepal.Width'], iris_clust['Petal.Width'], iris_clust['Sepal.Length'],
c=iris_clust['scores'], cmap='seismic', s=50)
ax.set_xlabel('Sepal.Width')
ax.set_ylabel('Petal.Width')
ax.set_zlabel('Sepal.Length')
plt.show()
plt.clf()
plt.cla()
plt.close()
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(iris_clust['Sepal.Width'], iris_clust['Petal.Width'], iris_clust['Sepal.Length'],
c=iris_clust['labels'], cmap='Set1', s=50)
ax.set_xlabel('Sepal.Width')
ax.set_ylabel('Petal.Width')
ax.set_zlabel('Sepal.Length')
plt.show()
plt.clf()
plt.cla()
plt.close()
================================================
FILE: examples/iris_dist_grid.py
================================================
from PyNomaly import loop
import pandas as pd
from pydataset import data
from sklearn.neighbors import NearestNeighbors
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
iris = pd.DataFrame(data('iris'))
iris = pd.DataFrame(iris.drop('Species', axis=1))
distance_metrics = [
'braycurtis',
'canberra',
'cityblock',
'chebyshev',
'cosine',
'euclidean',
'hamming',
'l1',
'manhattan'
]
fig = plt.figure(figsize=(17, 17))
for i in range(1, 10):
neigh = NearestNeighbors(n_neighbors=10, metric=distance_metrics[i-1])
neigh.fit(iris)
d, idx = neigh.kneighbors(iris, return_distance=True)
m = loop.LocalOutlierProbability(distance_matrix=d,
neighbor_matrix=idx).fit()
iris['scores'] = m.local_outlier_probabilities
ax = fig.add_subplot(3, 3, i, projection='3d')
plt.title(distance_metrics[i-1], loc='left', fontsize=18)
ax.scatter(iris['Sepal.Width'], iris['Petal.Width'], iris['Sepal.Length'],
c=iris['scores'], cmap='seismic', s=50)
ax.set_xlabel('Sepal.Width')
ax.set_ylabel('Petal.Width')
ax.set_zlabel('Sepal.Length')
plt.show()
plt.clf()
plt.cla()
plt.close()
================================================
FILE: examples/multiple_gaussian_2d.py
================================================
import numpy as np
import matplotlib.pyplot as plt
from PyNomaly import loop
import pandas as pd
# import the multiple gaussian data #
df = pd.read_csv('../data/multiple-gaussian-2d-data-only.csv')
print(df)
# fit LoOP according to the original settings outlined in the paper #
m = loop.LocalOutlierProbability(df[['x', 'y']], n_neighbors=20, extent=3).fit()
scores = m.local_outlier_probabilities
print(scores)
# plot the results #
# base 3 width, then set as multiple
threshold = 0.1
color = np.where(scores > threshold, "white", "black")
label_mask = np.where(scores > threshold)
area = (20 * scores) ** 2
plt.scatter(df['x'], df['y'], c=color, s=area.astype(float), edgecolor='red', linewidth=1)
plt.scatter(df['x'], df['y'], c='black', s=3)
for i in range(len(scores)):
if scores[i] > threshold:
plt.text(df['x'].loc[i] * (1 + 0.01), df['y'].loc[i] * (1 + 0.01), round(scores[i], 2), fontsize=8)
plt.show()
================================================
FILE: examples/numba_speed_diff.py
================================================
import numpy as np
from PyNomaly import loop
import time
# generate a large set of data
data = np.ones(shape=(10000, 4))
# first time the process without Numba
# use the progress bar to track progress
t1 = time.time()
scores_numpy = loop.LocalOutlierProbability(
data,
n_neighbors=3,
use_numba=False,
progress_bar=True
).fit().local_outlier_probabilities
t2 = time.time()
seconds_no_numba = t2 - t1
print("\nComputation took " + str(seconds_no_numba) + " seconds without Numba JIT.")
t3 = time.time()
scores_numba = loop.LocalOutlierProbability(
data,
n_neighbors=3,
use_numba=True,
progress_bar=True
).fit().local_outlier_probabilities
t4 = time.time()
seconds_numba = t4 - t3
print("\nComputation took " + str(seconds_numba) + " seconds with Numba JIT.")
================================================
FILE: examples/numpy_array.py
================================================
================================================
FILE: examples/stream.py
================================================
import numpy as np
from PyNomaly import loop
import pandas as pd
from pydataset import data
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
iris = pd.DataFrame(data('iris'))
iris = pd.DataFrame(iris.drop('Species', axis=1))
iris_train = iris.iloc[:, 0:4].head(120)
iris_test = iris.iloc[:, 0:4].tail(30)
m = loop.LocalOutlierProbability(iris).fit()
scores_noclust = m.local_outlier_probabilities
iris['scores'] = scores_noclust
m_train = loop.LocalOutlierProbability(iris_train, n_neighbors=10)
m_train.fit()
iris_train_scores = m_train.local_outlier_probabilities
iris_test_scores = []
for index, row in iris_test.iterrows():
array = np.array([row['Sepal.Length'], row['Sepal.Width'], row['Petal.Length'], row['Petal.Width']])
iris_test_scores.append(m_train.stream(array))
iris_test_scores = np.array(iris_test_scores)
iris['stream_scores'] = np.hstack((iris_train_scores, iris_test_scores))
# iris['scores'] from earlier example
rmse = np.sqrt(((iris['scores'] - iris['stream_scores']) ** 2).mean(axis=None))
print(rmse)
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(iris['Sepal.Width'], iris['Petal.Width'], iris['Sepal.Length'],
c=iris['stream_scores'], cmap='seismic', s=50)
ax.set_xlabel('Sepal.Width')
ax.set_ylabel('Petal.Width')
ax.set_zlabel('Sepal.Length')
plt.show()
plt.clf()
plt.cla()
plt.close()
================================================
FILE: paper/codemeta.json
================================================
{
"@context": "https://raw.githubusercontent.com/codemeta/codemeta/master/codemeta.jsonld",
"@type": "Code",
"author": [
{
"@id": "http://orcid.org/0000-0002-5279-4143",
"@type": "Person",
"email": "vconstan@jpl.caltech.edu",
"name": "Valentino Constantinou",
"affiliation": "NASA Jet Propulsion Laboratory"
}
],
"identifier": "",
"codeRepository": "https://www.github.com/vc1492a/PyNomaly",
"datePublished": "2018-05-07",
"dateModified": "2018-05-07",
"dateCreated": "2018-05-07",
"description": "Anomaly detection using Local Outlier Probabilities (LoOP).",
"keywords": "machine learning, unsupervised learning, outlier detection, anomaly detection, nearest neighbors, statistics, probability",
"license": "Apache 2.0",
"title": "PyNomaly",
"version": "v0.2.0"
}
================================================
FILE: paper/paper.bib
================================================
@inproceedings{Breunig,
author = {Breunig, Markus M. and Kriegel, Hans-Peter and Ng, Raymond T. and Sander, J\"{o}rg},
title = {LOF: Identifying Density-based Local Outliers},
booktitle = {Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data},
series = {SIGMOD '00},
year = {2000},
isbn = {1-58113-217-4},
location = {Dallas, Texas, USA},
pages = {93--104},
numpages = {12},
url = {http://doi.acm.org/10.1145/342009.335388},
doi = {10.1145/342009.335388},
acmid = {335388},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {database mining, outlier detection},
}
@inproceedings{Kriegel,
author = {Kriegel, Hans-Peter and Kr\"{o}ger, Peer and Schubert, Erich and Zimek, Arthur},
title = {LoOP: Local Outlier Probabilities},
booktitle = {Proceedings of the 18th ACM Conference on Information and Knowledge Management},
series = {CIKM '09},
year = {2009},
isbn = {978-1-60558-512-3},
location = {Hong Kong, China},
pages = {1649--1652},
numpages = {4},
url = {http://doi.acm.org/10.1145/1645953.1646195},
doi = {10.1145/1645953.1646195},
acmid = {1646195},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {outlier detection},
}
@article{Hamlet,
author = {Connor Hamlet and Jeremy Straub and Matthew Russell and Scott Kerlin},
title = {An incremental and approximate local outlier probability algorithm for intrusion detection and its evaluation},
journal = {Journal of Cyber Security Technology},
volume = {1},
number = {2},
pages = {75-87},
year = {2017},
publisher = {Taylor & Francis},
doi = {10.1080/23742917.2016.1226651},
URL = {https://doi.org/10.1080/23742917.2016.1226651},
eprint = {https://doi.org/10.1080/23742917.2016.1226651}
}
================================================
FILE: paper/paper.md
================================================
---
title: 'PyNomaly: Anomaly detection using Local Outlier Probabilities (LoOP).'
tags:
- outlier detection
- anomaly detection
- probability
- nearest neighbors
- unsupervised learning
- machine learning
- statistics
authors:
- name: Valentino Constantinou
orcid: 0000-0002-5279-4143
affiliation: 1
affiliations:
- name: NASA Jet Propulsion Laboratory
index: 1
date: 7 May 2018
bibliography: paper.bib
---
# Summary
``PyNomaly`` is a Python 3 implementation of LoOP (Local Outlier
Probabilities) [@Kriegel]. LoOP is a local density-based outlier detection
method by Kriegel, Kröger, Schubert, and Zimek, which provides
outlier scores in the range [0,1] that are directly
interpretable as the probability of a sample being an outlier.
``PyNomaly`` also implements a modified approach to LoOP [@Hamlet], which may be used for applications involving
streaming data or where rapid calculations may be necessary.
The outlier score of each sample is called the Local Outlier
Probability. It measures the local deviation of the density of a
given sample with respect to its neighbors, as Local Outlier
Factor (LOF) [@Breunig] does, but provides normalized outlier scores in the
range [0,1]. These outlier scores are directly interpretable
as the probability of an object being an outlier. Since Local
Outlier Probabilities provides scores in the range [0,1],
practitioners are free to interpret the results according to
the application.
Like LOF, it is local in that the anomaly score depends on
how isolated the sample is with respect to the surrounding
neighborhood. Locality is given by k-nearest neighbors,
whose distance is used to estimate the local density.
By comparing the local density of a sample to the local
densities of its neighbors, one can identify samples that
lie in regions of lower density compared to their neighbors
and thus identify samples that may be outliers according to
their Local Outlier Probability.
``PyNomaly`` includes an optional _cluster_labels_ parameter.
This is useful in cases where regions of varying density
occur within the same set of data. When using _cluster_labels_,
the Local Outlier Probability of a sample is calculated with
respect to its cluster assignment.
## Research
PyNomaly is currently being used in the following research:
- Y. Zhao and M.K. Hryniewicki, "XGBOD: Improving Supervised
Outlier Detection with Unsupervised Representation Learning,"
International Joint Conference on Neural Networks (IJCNN),
IEEE, 2018.
## Acknowledgements
The authors recognize the support of Kyle Hundman and Ian Colwell.
# References
================================================
FILE: requirements.txt
================================================
numpy>=1.12.0
python-utils>=2.3.0
================================================
FILE: requirements_ci.txt
================================================
coveralls>=1.8.0
pandas>=0.24.2
pytest>=4.6.2
pytest-cov>=2.7.1
scikit-learn>=0.21.2
scipy>=1.3.0
wheel>=0.33.4
================================================
FILE: requirements_examples.txt
================================================
matplotlib==3.1.0
pandas>=0.24.2
pydataset>=0.2.0
scikit-learn>=0.21.2
scipy>=1.3.0
================================================
FILE: setup.py
================================================
from setuptools import setup
from pathlib import Path
this_directory = Path(__file__).parent
long_description = (this_directory / "README.md").read_text()
setup(
name='PyNomaly',
packages=['PyNomaly'],
version='0.3.5',
description='A Python 3 implementation of LoOP: Local Outlier '
'Probabilities, a local density based outlier detection '
'method providing an outlier score in the range of [0,1].',
author='Valentino Constantinou',
author_email='vc@valentino.io',
long_description=long_description,
long_description_content_type='text/markdown',
url='https://github.com/vc1492a/PyNomaly',
download_url='https://github.com/vc1492a/PyNomaly/archive/0.3.5.tar.gz',
keywords=['outlier', 'anomaly', 'detection', 'machine', 'learning',
'probability'],
classifiers=[],
license='Apache License, Version 2.0',
install_requires=['numpy', 'python-utils']
)
================================================
FILE: tests/__init__.py
================================================
================================================
FILE: tests/test_loop.py
================================================
# Authors: Valentino Constantinou <vc@valentino.io>
# License: Apache 2.0
from PyNomaly import loop
from PyNomaly.loop import ClusterSizeError, MissingValuesError
import logging
from typing import Tuple
import numpy as np
from numpy.testing import assert_array_equal, assert_array_almost_equal
import pandas as pd
import pytest
from sklearn.datasets import load_iris
from sklearn.metrics import roc_auc_score
from sklearn.neighbors import NearestNeighbors
from sklearn.utils import check_random_state
import sys
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
# flag to enable or disable NUMBA
NUMBA = False
if NUMBA is False:
logging.info(
"Numba is disabled. Coverage statistics are reflective of "
"testing native Python code. Consider also testing with numba"
" enabled."
)
else:
logging.warning(
"Numba is enabled. Coverage statistics will be impacted (reduced) due"
" to the just-in-time compilation of native Python code."
)
# load the iris dataset
# and randomly permute it
rng = check_random_state(0)
iris = load_iris()
perm = rng.permutation(iris.target.size)
iris.data = iris.data[perm]
iris.target = iris.target[perm]
# fixtures
@pytest.fixture()
def X_n8() -> np.ndarray:
"""
Fixture that generates a small Numpy array with two anomalous values
(last two observations).
:return: a Numpy array.
"""
# Toy sample (the last two samples are outliers):
X = np.array(
[[-2, -1], [-1, -1], [-1, -2], [1, 2], [1, 2], [2, 1], [5, 3], [-4, 2]]
)
return X
@pytest.fixture()
def X_n20_scores() -> Tuple[np.ndarray, np.ndarray]:
"""
Fixture that returns a tuple containing a 20 element Numpy array
and the precalculated LoOP scores based on that array.
:return: tuple(input_data, expected_scores)
"""
input_data = np.array(
[
0.02059752,
0.32629926,
0.63036653,
0.94409321,
0.63251097,
0.47598494,
0.80204026,
0.34845067,
0.81556468,
0.89183,
0.25210317,
0.11460502,
0.19953434,
0.36955067,
0.06038041,
0.34527368,
0.56621582,
0.90533649,
0.33773613,
0.71573306,
]
)
expected_scores = np.array(
[
0.6356276742921594,
0.0,
0.0,
0.48490790006974044,
0.0,
0.0,
0.0,
0.0,
0.021728288376168012,
0.28285086151683225,
0.0,
0.18881886507113213,
0.0,
0.0,
0.45350246469681843,
0.0,
0.07886635748113013,
0.3349068501560546,
0.0,
0.0,
]
)
return (input_data, expected_scores)
@pytest.fixture()
def X_n120() -> np.ndarray:
"""
Fixture that generates a Numpy array with 120 observations. Each
observation contains two float values.
:return: a Numpy array.
"""
# Generate train/test data
rng = check_random_state(2)
X = 0.3 * rng.randn(120, 2)
return X
@pytest.fixture()
def X_n140_outliers(X_n120) -> np.ndarray:
"""
Fixture that generates a Numpy array with 140 observations, where the
first 120 observations are "normal" and the last 20 considered anomalous.
:param X_n120: A pytest Fixture that generates the first 120 observations.
:return: A Numpy array.
"""
# Generate some abnormal novel observations
X_outliers = rng.uniform(low=-4, high=4, size=(20, 2))
X = np.r_[X_n120, X_outliers]
return X
@pytest.fixture()
def X_n1000() -> np.ndarray:
"""
Fixture that generates a Numpy array with 1000 observations.
:return: A Numpy array.
"""
# Generate train/test data
rng = check_random_state(2)
X = 0.3 * rng.randn(1000, 2)
return X
def test_loop(X_n8) -> None:
"""
Tests the basic functionality and asserts that the anomalous observations
are detected as anomalies. Tests the functionality using inputs
as Numpy arrays and as Pandas dataframes.
:param X_n8: A pytest Fixture that generates the 8 observations.
:return: None
"""
# Test LocalOutlierProbability:
clf = loop.LocalOutlierProbability(X_n8, n_neighbors=5, use_numba=NUMBA)
score = clf.fit().local_outlier_probabilities
share_outlier = 2.0 / 8.0
predictions = [-1 if s > share_outlier else 1 for s in score]
assert_array_equal(predictions, 6 * [1] + 2 * [-1])
# Assert smallest outlier score is greater than largest inlier score:
assert np.min(score[-2:]) > np.max(score[:-2])
# Test the DataFrame functionality
X_df = pd.DataFrame(X_n8)
# Test LocalOutlierProbability:
clf = loop.LocalOutlierProbability(X_df, n_neighbors=5, use_numba=NUMBA)
score = clf.fit().local_outlier_probabilities
share_outlier = 2.0 / 8.0
predictions = [-1 if s > share_outlier else 1 for s in score]
assert_array_equal(predictions, 6 * [1] + 2 * [-1])
# Assert smallest outlier score is greater than largest inlier score:
assert np.min(score[-2:]) > np.max(score[:-2])
def test_regression(X_n20_scores) -> None:
"""
Tests for potential regression errors by comparing current results
to the expected results. Any changes to the code should still return
the same result given the same dataset.
"""
input_data, expected_scores = X_n20_scores
clf = loop.LocalOutlierProbability(input_data).fit()
scores = clf.local_outlier_probabilities
assert_array_almost_equal(scores, expected_scores, 6)
def test_loop_performance(X_n120) -> None:
"""
Using a set of known anomalies (labels), tests the performance (using
ROC / AUC score) of the software and ensures it is able to capture most
anomalies under this basic scenario.
:param X_n120: A pytest Fixture that generates the 120 observations.
:return: None
"""
# Generate some abnormal novel observations
X_outliers = rng.uniform(low=-4, high=4, size=(20, 2))
X_test = np.r_[X_n120, X_outliers]
X_labels = np.r_[np.repeat(1, X_n120.shape[0]), np.repeat(-1, X_outliers.shape[0])]
# fit the model
clf = loop.LocalOutlierProbability(
X_test,
n_neighbors=X_test.shape[0] - 1,
# test the progress bar
progress_bar=True,
use_numba=NUMBA,
)
# predict scores (the lower, the more normal)
score = clf.fit().local_outlier_probabilities
share_outlier = X_outliers.shape[0] / X_test.shape[0]
X_pred = [-1 if s > share_outlier else 1 for s in score]
# check that roc_auc is good
assert roc_auc_score(X_pred, X_labels) >= 0.98
def test_input_nodata(X_n140_outliers) -> None:
"""
Test to ensure that the proper warning is issued if no data is
provided.
:param X_n140_outliers: A pytest Fixture that generates 140 observations.
:return: None
"""
with pytest.warns(UserWarning) as record:
# attempt to fit loop without data or a distance matrix
loop.LocalOutlierProbability(
n_neighbors=X_n140_outliers.shape[0] - 1, use_numba=NUMBA
)
# check that only one warning was raised
assert len(record) == 1
# check that the message matches
assert record[0].message.args[0] == "Data or a distance matrix must be provided."
def test_input_incorrect_type(X_n140_outliers) -> None:
"""
Test to ensure that the proper warning is issued if the type of an
argument is the incorrect type.
:param X_n140_outliers: A pytest Fixture that generates 140 observations.
:return: None
"""
with pytest.warns(UserWarning) as record:
# attempt to fit loop with a string input for n_neighbors
loop.LocalOutlierProbability(
X_n140_outliers,
n_neighbors=str(X_n140_outliers.shape[0] - 1),
use_numba=NUMBA,
)
# check that only one warning was raised
assert len(record) == 1
# check that the message matches
assert (
record[0].message.args[0]
== "Argument 'n_neighbors' is not of type (<class 'int'>, "
"<class 'numpy.integer'>)."
)
def test_input_neighbor_zero(X_n120) -> None:
"""
Test to ensure that the proper warning is issued if the neighbor size
is specified as 0 (must be greater than 0).
:param X_n120: A pytest Fixture that generates 120 observations.
:return: None
"""
clf = loop.LocalOutlierProbability(X_n120, n_neighbors=0, use_numba=NUMBA)
with pytest.warns(UserWarning) as record:
# attempt to fit loop with a 0 neighbor count
clf.fit()
# check that only one warning was raised
assert len(record) == 1
# check that the message matches
assert (
record[0].message.args[0]
== "n_neighbors must be greater than 0. Fit with 10 instead."
)
def test_input_distonly(X_n120) -> None:
"""
Test to ensure that the proper warning is issued if only a distance
matrix is provided (without a neighbor matrix).
:param X_n120: A pytest Fixture that generates 120 observations.
:return: None
"""
# generate distance and neighbor indices
neigh = NearestNeighbors(metric="euclidean")
neigh.fit(X_n120)
d, idx = neigh.kneighbors(X_n120, n_neighbors=10, return_distance=True)
with pytest.warns(UserWarning) as record:
# attempt to fit loop with a distance matrix and no neighbor matrix
loop.LocalOutlierProbability(distance_matrix=d, use_numba=NUMBA)
# check that only one warning was raised
assert len(record) == 1
# check that the message matches
assert (
record[0].message.args[0]
== "A neighbor index matrix and distance matrix must both "
"be provided when not using raw input data."
)
def test_input_neighboronly(X_n120) -> None:
"""
Test to ensure that the proper warning is issued if only a neighbor
matrix is provided (without a distance matrix).
:param X_n120: A pytest Fixture that generates 120 observations.
:return: None
"""
# generate distance and neighbor indices
neigh = NearestNeighbors(metric="euclidean")
neigh.fit(X_n120)
d, idx = neigh.kneighbors(X_n120, n_neighbors=10, return_distance=True)
with pytest.warns(UserWarning) as record:
# attempt to fit loop with a neighbor matrix and no distance matrix
loop.LocalOutlierProbability(neighbor_matrix=idx, use_numba=NUMBA)
# check that only one warning was raised
assert len(record) == 1
# check that the message matches
assert record[0].message.args[0] == "Data or a distance matrix must be provided."
def test_input_too_many(X_n120) -> None:
"""
Test to ensure that the proper warning is issued if both a data matrix
and a distance matrix are provided (can only be data matrix).
:param X_n120: A pytest Fixture that generates 120 observations.
:return: None
"""
# generate distance and neighbor indices
neigh = NearestNeighbors(metric="euclidean")
neigh.fit(X_n120)
d, idx = neigh.kneighbors(X_n120, n_neighbors=10, return_distance=True)
with pytest.warns(UserWarning) as record:
# attempt to fit loop with data and a distance matrix
loop.LocalOutlierProbability(
X_n120, distance_matrix=d, neighbor_matrix=idx, use_numba=NUMBA
)
# check that only one warning was raised
assert len(record) == 1
# check that the message matches
assert (
record[0].message.args[0]
== "Only one of the following may be provided: data or a "
"distance matrix (not both)."
)
def test_distance_neighbor_shape_mismatch(X_n120) -> None:
"""
Test to ensure that the proper warning is issued if there is a mismatch
between the shape of the provided distance and neighbor matrices.
:param X_n120: A pytest Fixture that generates 120 observations.
:return: None
"""
# generate distance and neighbor indices
neigh = NearestNeighbors(metric="euclidean")
neigh.fit(X_n120)
d, idx = neigh.kneighbors(X_n120, n_neighbors=10, return_distance=True)
# generate distance and neighbor indices of a different shape
neigh_2 = NearestNeighbors(metric="euclidean")
neigh_2.fit(X_n120)
d_2, idx_2 = neigh_2.kneighbors(X_n120, n_neighbors=5, return_distance=True)
with pytest.warns(UserWarning) as record:
# attempt to fit loop with a mismatch in shapes
loop.LocalOutlierProbability(
distance_matrix=d, neighbor_matrix=idx_2, n_neighbors=5, use_numba=NUMBA
)
# check that only one warning was raised
assert len(record) == 1
# check that the message matches
assert (
record[0].message.args[0] == "The shape of the distance and neighbor "
"index matrices must match."
)
def test_input_neighbor_mismatch(X_n120) -> None:
"""
Test to ensure that the proper warning is issued if the supplied distance
(and neighbor) matrix and specified number of neighbors do not match.
:param X_n120: A pytest Fixture that generates 120 observations.
:return: None
"""
# generate distance and neighbor indices
neigh = NearestNeighbors(metric="euclidean")
neigh.fit(X_n120)
d, idx = neigh.kneighbors(X_n120, n_neighbors=5, return_distance=True)
with pytest.warns(UserWarning) as record:
# attempt to fit loop with a neighbor size mismatch
loop.LocalOutlierProbability(
distance_matrix=d, neighbor_matrix=idx, n_neighbors=10, use_numba=NUMBA
)
# check that only one warning was raised
assert len(record) == 1
# check that the message matches
assert (
record[0].message.args[0] == "The shape of the distance or "
"neighbor index matrix does not "
"match the number of neighbors "
"specified."
)
def test_loop_dist_matrix(X_n120) -> None:
"""
Tests to ensure the proper results are returned when supplying the
appropriate format distance and neighbor matrices.
:param X_n120: A pytest Fixture that generates 120 observations.
:return: None
"""
# generate distance and neighbor indices
neigh = NearestNeighbors(metric="euclidean")
neigh.fit(X_n120)
d, idx = neigh.kneighbors(X_n120, n_neighbors=10, return_distance=True)
# fit loop using data and distance matrix
clf1 = loop.LocalOutlierProbability(X_n120, use_numba=NUMBA)
clf2 = loop.LocalOutlierProbability(
distance_matrix=d, neighbor_matrix=idx, use_numba=NUMBA
)
scores1 = clf1.fit().local_outlier_probabilities
scores2 = clf2.fit().local_outlier_probabilities
# compare the agreement between the results
assert np.all(np.abs(scores2 - scores1) <= 0.1)
def test_lambda_values(X_n140_outliers) -> None:
"""
Test to ensure results are returned which correspond to what is expected
when varying the extent parameter (we expect larger extent values to
result in more constrained scores).
:param X_n140_outliers: A pytest Fixture that generates 140 observations.
:return: None
"""
# Fit the model with different extent (lambda) values
clf1 = loop.LocalOutlierProbability(X_n140_outliers, extent=1, use_numba=NUMBA)
clf2 = loop.LocalOutlierProbability(X_n140_outliers, extent=2, use_numba=NUMBA)
clf3 = loop.LocalOutlierProbability(X_n140_outliers, extent=3, use_numba=NUMBA)
# predict scores (the lower, the more normal)
score1 = clf1.fit().local_outlier_probabilities
score2 = clf2.fit().local_outlier_probabilities
score3 = clf3.fit().local_outlier_probabilities
# Get the mean of all the scores
score_mean1 = np.mean(score1)
score_mean2 = np.mean(score2)
score_mean3 = np.mean(score3)
# check that the means align with expectations
assert score_mean1 > score_mean2
assert score_mean2 > score_mean3
def test_parameters(X_n120) -> None:
"""
Test to ensure that the model object contains the needed attributes after
the model is fit. This is important in the context of the streaming
functionality.
:param X_n120: A pytest Fixture that generates 120 observations.
:return: None
"""
# fit the model
clf = loop.LocalOutlierProbability(X_n120, use_numba=NUMBA).fit()
# check that the model has attributes post fit
assert hasattr(clf, "n_neighbors") and clf.n_neighbors is not None
assert hasattr(clf, "extent") and clf.extent is not None
assert hasattr(clf, "cluster_labels") and clf._cluster_labels() is not None
assert hasattr(clf, "prob_distances") and clf.prob_distances is not None
assert hasattr(clf, "prob_distances_ev") and clf.prob_distances_ev is not None
assert (
hasattr(clf, "norm_prob_local_outlier_factor")
and clf.norm_prob_local_outlier_factor is not None
)
assert (
hasattr(clf, "local_outlier_probabilities")
and clf.local_outlier_probabilities is not None
)
def test_n_neighbors() -> None:
"""
Tests the functionality of providing a large number of neighbors that
is greater than the number of observations (software defaults to the
data input size and provides a UserWarning).
:return: None
"""
X = iris.data
clf = loop.LocalOutlierProbability(X, n_neighbors=500, use_numba=NUMBA).fit()
assert clf.n_neighbors == X.shape[0] - 1
clf = loop.LocalOutlierProbability(X, n_neighbors=500, use_numba=NUMBA)
with pytest.warns(UserWarning) as record:
clf.fit()
# check that only one warning was raised
assert len(record) == 1
assert clf.n_neighbors == X.shape[0] - 1
def test_extent() -> None:
"""
Test to ensure that a UserWarning is issued when providing an invalid
extent parameter value (can be 1, 2, or 3).
:return: None
"""
X = np.array([[1, 1], [1, 0]])
clf = loop.LocalOutlierProbability(X, n_neighbors=2, extent=4, use_numba=NUMBA)
with pytest.warns(UserWarning) as record:
clf.fit()
# check that only one warning was raised
assert len(record) == 1
def test_data_format() -> None:
"""
Test to ensure that a UserWarning is issued when the shape of the input
data is not explicitly correct. This is corrected by the software when
possible.
:return: None
"""
X = [1.3, 1.1, 0.9, 1.4, 1.5, 3.2]
clf = loop.LocalOutlierProbability(X, n_neighbors=3, use_numba=NUMBA)
with pytest.warns(UserWarning) as record:
clf.fit()
# check that only one warning was raised
assert len(record) == 1
def test_missing_values() -> None:
"""
Test to ensure that MissingValuesError is raised if a missing value is
encountered in the input data, as this is not allowable.
:return: None
"""
X = np.array([1.3, 1.1, 0.9, 1.4, 1.5, np.nan, 3.2])
clf = loop.LocalOutlierProbability(X, n_neighbors=3, use_numba=NUMBA)
with pytest.raises(MissingValuesError) as record:
clf.fit()
# check that the message matches
assert (
str(record.value)
== "Method does not support missing values in input data."
)
def test_small_cluster_size(X_n140_outliers) -> None:
"""
Test to ensure that ClusterSizeError is raised when the specified number of
neighbors is larger than the smallest cluster size in the input data.
:param X_n140_outliers: A pytest Fixture that generates 140 observations.
:return: None
"""
# Generate cluster labels
a = [0] * 120
b = [1] * 18
cluster_labels = a + b
clf = loop.LocalOutlierProbability(
X_n140_outliers, n_neighbors=50, cluster_labels=cluster_labels, use_numba=NUMBA
)
with pytest.raises(ClusterSizeError) as record:
clf.fit()
# check that the message matches
assert (
str(record.value)
== "Number of neighbors specified larger than smallest "
"cluster. Specify a number of neighbors smaller than "
"the smallest cluster size (observations in smallest "
"cluster minus one)."
)
def test_stream_fit(X_n140_outliers) -> None:
"""
Test to ensure that the proper warning is issued if the user attempts
to use the streaming approach prior to the classical approach being fit.
:param X_n140_outliers: A pytest Fixture that generates 140 observations.
:return: None
"""
# Fit the model
X_train = X_n140_outliers[0:138]
X_test = X_n140_outliers[139]
clf = loop.LocalOutlierProbability(X_train, use_numba=NUMBA)
with pytest.warns(UserWarning) as record:
clf.stream(X_test)
# check that the message matches
messages = [i.message.args[0] for i in record]
assert (
"Must fit on historical data by calling fit() prior to "
"calling stream(x)." in messages
)
def test_stream_distance(X_n140_outliers) -> None:
"""
Test to ensure that the streaming approach functions as desired when
providing matrices for use and that the returned results are within some
margin of error when compared to the classical approach (using the RMSE).
:param X_n140_outliers: A pytest Fixture that generates 140 observations.
:return: None
"""
X_train = X_n140_outliers[0:100]
X_test = X_n140_outliers[100:140]
# generate distance and neighbor indices
neigh = NearestNeighbors(metric="euclidean")
neigh.fit(X_train)
d, idx = neigh.kneighbors(X_train, n_neighbors=10, return_distance=True)
# Fit the models in standard and distance matrix form
m = loop.LocalOutlierProbability(X_train, use_numba=NUMBA).fit()
m_dist = loop.LocalOutlierProbability(
distance_matrix=d, neighbor_matrix=idx, use_numba=NUMBA
).fit()
# Collect the scores
X_test_scores = []
for i in range(X_test.shape[0]):
X_test_scores.append(m.stream(np.array(X_test[i])))
X_test_scores = np.array(X_test_scores)
X_test_dist_scores = []
for i in range(X_test.shape[0]):
dd, ii = neigh.kneighbors(np.array([X_test[i]]), return_distance=True)
X_test_dist_scores.append(m_dist.stream(np.mean(dd)))
X_test_dist_scores = np.array(X_test_dist_scores)
# calculate the rmse and ensure score is below threshold
rmse = np.sqrt(((X_test_scores - X_test_dist_scores) ** 2).mean(axis=None))
    assert rmse <= 0.075
def test_stream_cluster(X_n140_outliers) -> None:
"""
Test to ensure that the proper warning is issued if the streaming approach
is called on clustered data, as the streaming approach does not support
this functionality.
:param X_n140_outliers: A pytest Fixture that generates 140 observations.
:return: None
"""
# Generate cluster labels
a = [0] * 120
b = [1] * 18
cluster_labels = a + b
# Fit the model
X_train = X_n140_outliers[0:138]
X_test = X_n140_outliers[139]
clf = loop.LocalOutlierProbability(
X_train, cluster_labels=cluster_labels, use_numba=NUMBA
).fit()
with pytest.warns(UserWarning) as record:
clf.stream(X_test)
# check that only one warning was raised
assert len(record) == 1
# check that the message matches
assert (
record[0].message.args[0] == "Stream approach does not support clustered data. "
"Automatically refit using single cluster of points."
)
def test_stream_performance(X_n140_outliers) -> None:
"""
Test to ensure that the streaming approach works as desired when using
a regular set of input data (no distance and neighbor matrices) and that
the result is within some expected level of error when compared to the
classical approach.
:param X_n140_outliers: A pytest Fixture that generates 140 observations.
    :return: None
"""
X_train = X_n140_outliers[0:100]
X_test = X_n140_outliers[100:140]
# Fit the models in standard and stream form
m = loop.LocalOutlierProbability(X_n140_outliers, use_numba=NUMBA).fit()
scores_noclust = m.local_outlier_probabilities
m_train = loop.LocalOutlierProbability(X_train, use_numba=NUMBA)
m_train.fit()
X_train_scores = m_train.local_outlier_probabilities
X_test_scores = []
for idx in range(X_test.shape[0]):
X_test_scores.append(m_train.stream(X_test[idx]))
X_test_scores = np.array(X_test_scores)
stream_scores = np.hstack((X_train_scores, X_test_scores))
# calculate the rmse and ensure score is below threshold
rmse = np.sqrt(((scores_noclust - stream_scores) ** 2).mean(axis=None))
    assert rmse < 0.35
def test_progress_bar(X_n8) -> None:
"""
Tests the progress bar functionality on a small number of observations,
when the number of observations is less than the width of the console
window.
:param X_n8: a numpy array with 8 observations.
:return: None
"""
# attempt to use the progress bar on a small number of observations
loop.LocalOutlierProbability(X_n8, use_numba=NUMBA, progress_bar=True).fit()
def test_data_flipping() -> None:
    """
    Tests the flipping of data and cluster labels and ensures that the
    resulting local outlier probabilities are invariant to the ordering
    of the input observations.
    :return: None
    """
np.random.seed(1)
n = 9
data = np.append(
np.random.normal(2, 1, [n, 2]), np.random.normal(8, 1, [n, 2]), axis=0
)
clus = np.append(np.ones(n), 2 * np.ones(n)).tolist()
model = loop.LocalOutlierProbability(data, n_neighbors=5, cluster_labels=clus)
fit = model.fit()
res = fit.local_outlier_probabilities
data_flipped = np.flipud(data)
clus_flipped = np.flipud(clus).tolist()
model2 = loop.LocalOutlierProbability(
data_flipped, n_neighbors=5, cluster_labels=clus_flipped
)
fit2 = model2.fit()
res2 = np.flipud(fit2.local_outlier_probabilities)
assert_array_almost_equal(res, res2, decimal=6)
assert_array_almost_equal(
fit.norm_prob_local_outlier_factor,
fit2.norm_prob_local_outlier_factor,
decimal=6,
)
def test_distance_matrix_consistency(X_n120) -> None:
"""
Test to ensure that the distance matrix is consistent with the neighbor
matrix and that the software is able to handle self-distances.
:return: None
"""
neigh = NearestNeighbors(metric='euclidean')
neigh.fit(X_n120)
distances, indices = neigh.kneighbors(X_n120, n_neighbors=11, return_distance=True)
    # remove the closest neighbor (it is the point itself) from each row
    # of the indices and distances matrices
indices = np.delete(indices, 0, 1)
distances = np.delete(distances, 0, 1)
# Fit LoOP with and without distance matrix
clf_data = loop.LocalOutlierProbability(X_n120, n_neighbors=10)
    clf_dist = loop.LocalOutlierProbability(
        distance_matrix=distances, neighbor_matrix=indices, n_neighbors=11
    )
# Attempt to retrieve scores and check types
scores_data = clf_data.fit().local_outlier_probabilities
scores_dist = clf_dist.fit().local_outlier_probabilities
# Debugging prints to investigate types and contents
print("Type of scores_data:", type(scores_data))
print("Type of scores_dist:", type(scores_dist))
print("Value of scores_data:", scores_data)
print("Value of scores_dist:", scores_dist)
print("Shape of scores_data:", scores_data.shape)
print("Shape of scores_dist:", scores_dist.shape)
# Convert to arrays if they aren't already
scores_data = np.array(scores_data) if not isinstance(scores_data, np.ndarray) else scores_data
scores_dist = np.array(scores_dist) if not isinstance(scores_dist, np.ndarray) else scores_dist
# Check shapes and types before assertion
assert scores_data.shape == scores_dist.shape, "Score shapes mismatch"
assert isinstance(scores_data, np.ndarray), "Expected scores_data to be a numpy array"
assert isinstance(scores_dist, np.ndarray), "Expected scores_dist to be a numpy array"
# Compare scores allowing for minor floating-point differences
    assert_array_almost_equal(
        scores_data,
        scores_dist,
        decimal=10,
        err_msg="Inconsistent LoOP scores due to self-distances",
    )
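The streaming tests above compare classical and streaming scores by thresholding their root-mean-square error. A minimal standalone sketch of that comparison pattern, using plain NumPy and made-up score vectors (not the fixtures above):

```python
import numpy as np


def rmse(scores_a: np.ndarray, scores_b: np.ndarray) -> float:
    """Root-mean-square error between two equal-length score vectors."""
    return float(np.sqrt(((scores_a - scores_b) ** 2).mean()))


# Hypothetical example scores from two scoring approaches.
a = np.array([0.10, 0.20, 0.90, 0.15])
b = np.array([0.12, 0.18, 0.88, 0.16])

# Assert agreement within a chosen tolerance, as the tests do.
assert rmse(a, b) <= 0.075
```

The threshold values used in the tests (0.075 and 0.35) are empirical tolerances for this dataset, not universal constants.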