Full Code of marius-team/marius for AI

(Preview only: 1,840K characters total; the file contents below are truncated.)
Repository: marius-team/marius
Branch: main
Commit: 2f27ffedfbff
Files: 446
Total size: 1.7 MB

Directory structure:
gitextract_pqlh87yg/

├── .clang-format
├── .flake8
├── .github/
│   ├── ISSUE_TEMPLATE/
│   │   ├── bug_report.md
│   │   ├── documentation-improvement.md
│   │   ├── feature_request.md
│   │   └── general-question.md
│   ├── PULL_REQUEST_TEMPLATE/
│   │   └── pull_request_template.md
│   └── workflows/
│       ├── build_and_test.yml
│       ├── db2graph_test_postgres.yml
│       └── lint.yml
├── .gitignore
├── .gitmodules
├── CMakeLists.txt
├── CONTRIBUTING.md
├── LICENSE
├── MANIFEST.in
├── README.md
├── docs/
│   ├── .nojekyll
│   ├── CMakeLists.txt
│   ├── Doxyfile
│   ├── Doxyfile.in
│   ├── README.md
│   ├── _static/
│   │   └── css/
│   │       └── marius_theme.css
│   ├── _templates/
│   │   └── layout.html
│   ├── conf.py
│   ├── config_interface/
│   │   ├── configuration.rst
│   │   ├── full_schema.rst
│   │   ├── index.rst
│   │   └── samples.rst
│   ├── db2graph/
│   │   └── db2graph.rst
│   ├── examples/
│   │   ├── config/
│   │   │   ├── index.rst
│   │   │   ├── lp_custom.rst
│   │   │   ├── lp_fb15k237.rst
│   │   │   ├── lp_paleobiology.rst
│   │   │   ├── nc_custom.rst
│   │   │   ├── nc_ogbn_arxiv.rst
│   │   │   └── resume_training.rst
│   │   ├── index.rst
│   │   ├── introduction.rst
│   │   ├── prediction/
│   │   │   ├── command_line.rst
│   │   │   └── python.rst
│   │   ├── preprocessing/
│   │   │   ├── command_line.rst
│   │   │   └── python.rst
│   │   └── python/
│   │       ├── index.rst
│   │       ├── lp_custom.rst
│   │       ├── lp_fb15k237.rst
│   │       └── nc_ogbn_arxiv.rst
│   ├── export_and_inference/
│   │   ├── index.rst
│   │   ├── marius_postprocess.rst
│   │   └── marius_predict.rst
│   ├── graph_learning/
│   │   ├── decoders.rst
│   │   ├── downstream_tasks.rst
│   │   ├── encoders.rst
│   │   ├── index.rst
│   │   ├── intro.rst
│   │   └── learning_tasks.rst
│   ├── index.rst
│   ├── introduction.rst
│   ├── preprocess_datasets/
│   │   ├── built_in.rst
│   │   ├── command_line.rst
│   │   ├── index.rst
│   │   └── python.rst
│   ├── python_api/
│   │   ├── configuration/
│   │   │   └── index.rst
│   │   ├── index.rst
│   │   ├── manager/
│   │   │   └── index.rst
│   │   ├── nn/
│   │   │   ├── activation.rst
│   │   │   ├── decoders/
│   │   │   │   ├── decoder.rst
│   │   │   │   ├── edge/
│   │   │   │   │   ├── comparators.rst
│   │   │   │   │   ├── complex.rst
│   │   │   │   │   ├── distmult.rst
│   │   │   │   │   ├── edge_decoder.rst
│   │   │   │   │   ├── index.rst
│   │   │   │   │   ├── relation_operators.rst
│   │   │   │   │   └── transe.rst
│   │   │   │   ├── index.rst
│   │   │   │   └── node/
│   │   │   │       ├── index.rst
│   │   │   │       ├── node_decoder.rst
│   │   │   │       └── noop_node_decoder.rst
│   │   │   ├── encoders/
│   │   │   │   ├── general_encoder.rst
│   │   │   │   └── index.rst
│   │   │   ├── index.rst
│   │   │   ├── initialization.rst
│   │   │   ├── layers/
│   │   │   │   ├── embedding.rst
│   │   │   │   ├── feature.rst
│   │   │   │   ├── gnn.rst
│   │   │   │   ├── index.rst
│   │   │   │   ├── layer.rst
│   │   │   │   └── reduction.rst
│   │   │   ├── loss.rst
│   │   │   ├── model.rst
│   │   │   └── optim.rst
│   │   ├── pipeline/
│   │   │   ├── evaluator.rst
│   │   │   ├── graph_encoder.rst
│   │   │   ├── index.rst
│   │   │   └── trainer.rst
│   │   ├── reporting/
│   │   │   ├── index.rst
│   │   │   ├── metrics.rst
│   │   │   └── reporters.rst
│   │   ├── storage/
│   │   │   ├── graph_storage.rst
│   │   │   ├── index.rst
│   │   │   └── storage.rst
│   │   └── tools/
│   │       ├── configuration/
│   │       │   ├── constants.rst
│   │       │   ├── datatypes.rst
│   │       │   ├── index.rst
│   │       │   └── marius_config.rst
│   │       ├── index.rst
│   │       └── preprocess/
│   │           ├── converters/
│   │           │   └── index.rst
│   │           ├── datasets/
│   │           │   └── index.rst
│   │           ├── index.rst
│   │           ├── partitioners/
│   │           │   └── index.rst
│   │           ├── readers/
│   │           │   └── index.rst
│   │           └── writers/
│   │               └── index.rst
│   └── quickstart.rst
├── examples/
│   ├── configuration/
│   │   ├── custom_lp.yaml
│   │   ├── custom_nc.yaml
│   │   ├── fb15k_237.yaml
│   │   ├── ogbn_arxiv.yaml
│   │   └── sakila.yaml
│   ├── db2graph/
│   │   ├── dockerfile
│   │   └── run.sh
│   ├── docker/
│   │   ├── README.md
│   │   ├── cpu_ubuntu/
│   │   │   └── dockerfile
│   │   └── gpu_ubuntu/
│   │       └── dockerfile
│   ├── preprocessing/
│   │   └── custom_dataset.py
│   └── python/
│       ├── custom.py
│       ├── custom_lp.py
│       ├── custom_nc_graphsage.py
│       ├── fb15k_237.py
│       ├── fb15k_237_gpu.py
│       └── ogbn_arxiv_nc.py
├── pyproject.toml
├── setup.cfg
├── setup.py
├── src/
│   ├── __init__.py
│   ├── cpp/
│   │   ├── cmake/
│   │   │   └── FindSphinx.cmake
│   │   ├── include/
│   │   │   ├── common/
│   │   │   │   ├── datatypes.h
│   │   │   │   ├── exception.h
│   │   │   │   ├── pybind_headers.h
│   │   │   │   └── util.h
│   │   │   ├── configuration/
│   │   │   │   ├── config.h
│   │   │   │   ├── constants.h
│   │   │   │   ├── options.h
│   │   │   │   └── util.h
│   │   │   ├── data/
│   │   │   │   ├── batch.h
│   │   │   │   ├── dataloader.h
│   │   │   │   ├── graph.h
│   │   │   │   ├── ordering.h
│   │   │   │   └── samplers/
│   │   │   │       ├── edge.h
│   │   │   │       ├── negative.h
│   │   │   │       └── neighbor.h
│   │   │   ├── marius.h
│   │   │   ├── nn/
│   │   │   │   ├── activation.h
│   │   │   │   ├── decoders/
│   │   │   │   │   ├── decoder.h
│   │   │   │   │   ├── edge/
│   │   │   │   │   │   ├── comparators.h
│   │   │   │   │   │   ├── complex.h
│   │   │   │   │   │   ├── decoder_methods.h
│   │   │   │   │   │   ├── distmult.h
│   │   │   │   │   │   ├── edge_decoder.h
│   │   │   │   │   │   ├── relation_operators.h
│   │   │   │   │   │   └── transe.h
│   │   │   │   │   └── node/
│   │   │   │   │       ├── node_decoder.h
│   │   │   │   │       └── noop_node_decoder.h
│   │   │   │   ├── encoders/
│   │   │   │   │   └── encoder.h
│   │   │   │   ├── initialization.h
│   │   │   │   ├── layers/
│   │   │   │   │   ├── embedding/
│   │   │   │   │   │   └── embedding.h
│   │   │   │   │   ├── feature/
│   │   │   │   │   │   └── feature.h
│   │   │   │   │   ├── gnn/
│   │   │   │   │   │   ├── gat_layer.h
│   │   │   │   │   │   ├── gcn_layer.h
│   │   │   │   │   │   ├── gnn_layer.h
│   │   │   │   │   │   ├── graph_sage_layer.h
│   │   │   │   │   │   ├── layer_helpers.h
│   │   │   │   │   │   └── rgcn_layer.h
│   │   │   │   │   ├── layer.h
│   │   │   │   │   └── reduction/
│   │   │   │   │       ├── concat.h
│   │   │   │   │       ├── linear.h
│   │   │   │   │       └── reduction_layer.h
│   │   │   │   ├── loss.h
│   │   │   │   ├── model.h
│   │   │   │   ├── model_helpers.h
│   │   │   │   ├── optim.h
│   │   │   │   └── regularizer.h
│   │   │   ├── pipeline/
│   │   │   │   ├── evaluator.h
│   │   │   │   ├── graph_encoder.h
│   │   │   │   ├── pipeline.h
│   │   │   │   ├── pipeline_constants.h
│   │   │   │   ├── pipeline_cpu.h
│   │   │   │   ├── pipeline_gpu.h
│   │   │   │   ├── pipeline_monitor.h
│   │   │   │   ├── queue.h
│   │   │   │   └── trainer.h
│   │   │   ├── reporting/
│   │   │   │   ├── logger.h
│   │   │   │   └── reporting.h
│   │   │   └── storage/
│   │   │       ├── buffer.h
│   │   │       ├── checkpointer.h
│   │   │       ├── graph_storage.h
│   │   │       ├── io.h
│   │   │       └── storage.h
│   │   ├── python_bindings/
│   │   │   ├── configuration/
│   │   │   │   ├── config_wrap.cpp
│   │   │   │   ├── options_wrap.cpp
│   │   │   │   └── wrap.cpp
│   │   │   ├── manager/
│   │   │   │   ├── marius_wrap.cpp
│   │   │   │   └── wrap.cpp
│   │   │   ├── nn/
│   │   │   │   ├── activation_wrap.cpp
│   │   │   │   ├── decoders/
│   │   │   │   │   ├── decoder_wrap.cpp
│   │   │   │   │   ├── edge/
│   │   │   │   │   │   ├── comparators_wrap.cpp
│   │   │   │   │   │   ├── complex_wrap.cpp
│   │   │   │   │   │   ├── distmult_wrap.cpp
│   │   │   │   │   │   ├── edge_decoder_wrap.cpp
│   │   │   │   │   │   ├── relation_operators_wrap.cpp
│   │   │   │   │   │   └── transe_wrap.cpp
│   │   │   │   │   └── node/
│   │   │   │   │       ├── node_decoder_wrap.cpp
│   │   │   │   │       └── noop_node_decoder.cpp
│   │   │   │   ├── encoders/
│   │   │   │   │   └── encoder_wrap.cpp
│   │   │   │   ├── initialization_wrap.cpp
│   │   │   │   ├── layers/
│   │   │   │   │   ├── embedding/
│   │   │   │   │   │   └── embedding_wrap.cpp
│   │   │   │   │   ├── feature/
│   │   │   │   │   │   └── feature_wrap.cpp
│   │   │   │   │   ├── gnn/
│   │   │   │   │   │   ├── gat_layer_wrap.cpp
│   │   │   │   │   │   ├── gcn_layer_wrap.cpp
│   │   │   │   │   │   ├── gnn_layer_wrap.cpp
│   │   │   │   │   │   ├── graph_sage_layer_wrap.cpp
│   │   │   │   │   │   ├── layer_helpers_wrap.cpp
│   │   │   │   │   │   └── rgcn_layer_wrap.cpp
│   │   │   │   │   ├── layer_wrap.cpp
│   │   │   │   │   └── reduction/
│   │   │   │   │       ├── concat_wrap.cpp
│   │   │   │   │       ├── linear_wrap.cpp
│   │   │   │   │       └── reduction_layer_wrap.cpp
│   │   │   │   ├── loss_wrap.cpp
│   │   │   │   ├── model_wrap.cpp
│   │   │   │   ├── optim_wrap.cpp
│   │   │   │   ├── regularizer_wrap.cpp
│   │   │   │   └── wrap.cpp
│   │   │   ├── pipeline/
│   │   │   │   ├── evaluator_wrap.cpp
│   │   │   │   ├── graph_encoder_wrap.cpp
│   │   │   │   ├── trainer_wrap.cpp
│   │   │   │   └── wrap.cpp
│   │   │   ├── reporting/
│   │   │   │   ├── reporting_wrap.cpp
│   │   │   │   └── wrap.cpp
│   │   │   └── storage/
│   │   │       ├── graph_storage_wrap.cpp
│   │   │       ├── io_wrap.cpp
│   │   │       ├── storage_wrap.cpp
│   │   │       └── wrap.cpp
│   │   ├── src/
│   │   │   ├── common/
│   │   │   │   └── util.cpp
│   │   │   ├── configuration/
│   │   │   │   ├── config.cpp
│   │   │   │   ├── options.cpp
│   │   │   │   └── util.cpp
│   │   │   ├── data/
│   │   │   │   ├── batch.cpp
│   │   │   │   ├── dataloader.cpp
│   │   │   │   ├── graph.cpp
│   │   │   │   ├── ordering.cpp
│   │   │   │   └── samplers/
│   │   │   │       ├── edge.cpp
│   │   │   │       ├── negative.cpp
│   │   │   │       └── neighbor.cpp
│   │   │   ├── marius.cpp
│   │   │   ├── nn/
│   │   │   │   ├── activation.cpp
│   │   │   │   ├── decoders/
│   │   │   │   │   ├── edge/
│   │   │   │   │   │   ├── comparators.cpp
│   │   │   │   │   │   ├── complex.cpp
│   │   │   │   │   │   ├── decoder_methods.cpp
│   │   │   │   │   │   ├── distmult.cpp
│   │   │   │   │   │   ├── edge_decoder.cpp
│   │   │   │   │   │   ├── relation_operators.cpp
│   │   │   │   │   │   └── transe.cpp
│   │   │   │   │   └── node/
│   │   │   │   │       └── noop_node_decoder.cpp
│   │   │   │   ├── encoders/
│   │   │   │   │   └── encoder.cpp
│   │   │   │   ├── initialization.cpp
│   │   │   │   ├── layers/
│   │   │   │   │   ├── embedding/
│   │   │   │   │   │   └── embedding.cpp
│   │   │   │   │   ├── feature/
│   │   │   │   │   │   └── feature.cpp
│   │   │   │   │   ├── gnn/
│   │   │   │   │   │   ├── gat_layer.cpp
│   │   │   │   │   │   ├── gcn_layer.cpp
│   │   │   │   │   │   ├── graph_sage_layer.cpp
│   │   │   │   │   │   ├── layer_helpers.cpp
│   │   │   │   │   │   └── rgcn_layer.cpp
│   │   │   │   │   ├── layer.cpp
│   │   │   │   │   └── reduction/
│   │   │   │   │       ├── concat.cpp
│   │   │   │   │       └── linear.cpp
│   │   │   │   ├── loss.cpp
│   │   │   │   ├── model.cpp
│   │   │   │   ├── optim.cpp
│   │   │   │   └── regularizer.cpp
│   │   │   ├── pipeline/
│   │   │   │   ├── evaluator.cpp
│   │   │   │   ├── graph_encoder.cpp
│   │   │   │   ├── pipeline.cpp
│   │   │   │   ├── pipeline_cpu.cpp
│   │   │   │   ├── pipeline_gpu.cpp
│   │   │   │   └── trainer.cpp
│   │   │   ├── reporting/
│   │   │   │   └── reporting.cpp
│   │   │   └── storage/
│   │   │       ├── buffer.cpp
│   │   │       ├── checkpointer.cpp
│   │   │       ├── graph_storage.cpp
│   │   │       ├── io.cpp
│   │   │       └── storage.cpp
│   │   └── third_party/
│   │       └── CMakeLists.txt
│   ├── cuda/
│   │   └── third_party/
│   │       └── pytorch_scatter/
│   │           ├── atomics.cuh
│   │           ├── index_info.cuh
│   │           ├── reducer.cuh
│   │           ├── segment_csr_cuda.cu
│   │           ├── segment_csr_cuda.h
│   │           ├── segment_max.cpp
│   │           ├── segment_max.h
│   │           └── utils.cuh
│   └── python/
│       ├── __init__.py
│       ├── console_scripts/
│       │   ├── __init__.py
│       │   ├── marius_eval.py
│       │   └── marius_train.py
│       ├── distribution/
│       │   ├── generate_stubs.py
│       │   └── marius_env_info.py
│       └── tools/
│           ├── __init__.py
│           ├── configuration/
│           │   ├── __init__.py
│           │   ├── constants.py
│           │   ├── datatypes.py
│           │   ├── marius_config.py
│           │   └── validation.py
│           ├── db2graph/
│           │   └── marius_db2graph.py
│           ├── marius_config_generator.py
│           ├── marius_postprocess.py
│           ├── marius_predict.py
│           ├── marius_preprocess.py
│           ├── postprocess/
│           │   ├── __init__.py
│           │   └── in_memory_exporter.py
│           ├── prediction/
│           │   ├── link_prediction.py
│           │   └── node_classification.py
│           └── preprocess/
│               ├── __init__.py
│               ├── converters/
│               │   ├── __init__.py
│               │   ├── partitioners/
│               │   │   ├── __init__.py
│               │   │   ├── partitioner.py
│               │   │   ├── spark_partitioner.py
│               │   │   └── torch_partitioner.py
│               │   ├── readers/
│               │   │   ├── __init__.py
│               │   │   ├── pandas_readers.py
│               │   │   ├── reader.py
│               │   │   └── spark_readers.py
│               │   ├── spark_constants.py
│               │   ├── spark_converter.py
│               │   ├── torch_constants.py
│               │   ├── torch_converter.py
│               │   └── writers/
│               │       ├── __init__.py
│               │       ├── spark_writer.py
│               │       ├── torch_writer.py
│               │       └── writer.py
│               ├── custom.py
│               ├── dataset.py
│               ├── dataset_stats.tsv
│               ├── datasets/
│               │   ├── __init__.py
│               │   ├── dataset_helpers.py
│               │   ├── fb15k.py
│               │   ├── fb15k_237.py
│               │   ├── freebase86m.py
│               │   ├── friendster.py
│               │   ├── livejournal.py
│               │   ├── ogb_mag240m.py
│               │   ├── ogb_wikikg90mv2.py
│               │   ├── ogbl_citation2.py
│               │   ├── ogbl_collab.py
│               │   ├── ogbl_ppa.py
│               │   ├── ogbl_wikikg2.py
│               │   ├── ogbn_arxiv.py
│               │   ├── ogbn_papers100m.py
│               │   ├── ogbn_products.py
│               │   └── twitter.py
│               └── utils.py
├── test/
│   ├── CMakeLists.txt
│   ├── README.md
│   ├── __init__.py
│   ├── cpp/
│   │   ├── CMakeLists.txt
│   │   ├── end_to_end/
│   │   │   ├── CMakeLists.txt
│   │   │   ├── main.cpp
│   │   │   └── test_main.cpp
│   │   ├── integration/
│   │   │   ├── CMakeLists.txt
│   │   │   └── main.cpp
│   │   ├── performance/
│   │   │   ├── CMakeLists.txt
│   │   │   └── main.cpp
│   │   └── unit/
│   │       ├── CMakeLists.txt
│   │       ├── main.cpp
│   │       ├── nn/
│   │       │   ├── test_activation.cpp
│   │       │   ├── test_initialization.cpp
│   │       │   ├── test_loss.cpp
│   │       │   └── test_model.cpp
│   │       ├── test_buffer.cpp
│   │       ├── test_storage.cpp
│   │       ├── testing_util.cpp
│   │       └── testing_util.h
│   ├── db2graph/
│   │   └── test_postgres.py
│   ├── python/
│   │   ├── bindings/
│   │   │   ├── end_to_end/
│   │   │   │   ├── test_fb15k_acc.py
│   │   │   │   ├── test_interval_checkpointing.py
│   │   │   │   ├── test_lp_basic.py
│   │   │   │   ├── test_lp_buffer.py
│   │   │   │   ├── test_lp_storage.py
│   │   │   │   ├── test_model_dir.py
│   │   │   │   ├── test_nc_basic.py
│   │   │   │   ├── test_nc_buffer.py
│   │   │   │   ├── test_nc_storage.py
│   │   │   │   └── test_resume_training.py
│   │   │   └── integration/
│   │   │       ├── test_config.py
│   │   │       ├── test_data.py
│   │   │       └── test_nn.py
│   │   ├── constants.py
│   │   ├── helpers.py
│   │   ├── postprocessing/
│   │   │   └── test_in_memory_exporter.py
│   │   ├── predict/
│   │   │   └── test_predict.py
│   │   └── preprocessing/
│   │       ├── test_spark_converter.py
│   │       └── test_torch_converter.py
│   ├── test_configs/
│   │   ├── generate_test_configs.py
│   │   ├── lp/
│   │   │   ├── evaluation/
│   │   │   │   ├── async.yaml
│   │   │   │   ├── async_deg.yaml
│   │   │   │   ├── async_filtered.yaml
│   │   │   │   ├── sync.yaml
│   │   │   │   ├── sync_deg.yaml
│   │   │   │   └── sync_filtered.yaml
│   │   │   ├── model/
│   │   │   │   ├── distmult.yaml
│   │   │   │   ├── distmult_feat.yaml
│   │   │   │   ├── gat_1_layer.yaml
│   │   │   │   ├── gat_3_layer.yaml
│   │   │   │   ├── gs_1_layer.yaml
│   │   │   │   ├── gs_1_layer_feat.yaml
│   │   │   │   ├── gs_1_layer_uniform.yaml
│   │   │   │   ├── gs_3_layer.yaml
│   │   │   │   ├── gs_3_layer_feat.yaml
│   │   │   │   └── gs_3_layer_uniform.yaml
│   │   │   ├── storage/
│   │   │   │   ├── edges_disk.yaml
│   │   │   │   ├── in_memory.yaml
│   │   │   │   └── part_buffer.yaml
│   │   │   └── training/
│   │   │       ├── async.yaml
│   │   │       ├── async_deg.yaml
│   │   │       ├── async_filtered.yaml
│   │   │       ├── sync.yaml
│   │   │       ├── sync_deg.yaml
│   │   │       └── sync_filtered.yaml
│   │   └── nc/
│   │       ├── evaluation/
│   │       │   ├── async.yaml
│   │       │   └── sync.yaml
│   │       ├── model/
│   │       │   ├── gat_1_layer.yaml
│   │       │   ├── gat_3_layer.yaml
│   │       │   ├── gs_1_layer.yaml
│   │       │   ├── gs_1_layer_emb.yaml
│   │       │   ├── gs_1_layer_uniform.yaml
│   │       │   ├── gs_3_layer.yaml
│   │       │   ├── gs_3_layer_emb.yaml
│   │       │   └── gs_3_layer_uniform.yaml
│   │       ├── storage/
│   │       │   ├── in_memory.yaml
│   │       │   └── part_buffer.yaml
│   │       └── training/
│   │           ├── async.yaml
│   │           └── sync.yaml
│   └── test_data/
│       ├── generate.py
│       ├── test_edges.txt
│       ├── train_edges.txt
│       ├── train_edges_weights.txt
│       └── valid_edges.txt
└── tox.ini

================================================
FILE CONTENTS
================================================

================================================
FILE: .clang-format
================================================
---
Language:        Cpp
# BasedOnStyle:  Google
AccessModifierOffset: -1
AlignAfterOpenBracket: Align
AlignArrayOfStructures: None
AlignConsecutiveMacros: None
AlignConsecutiveAssignments: None
AlignConsecutiveBitFields: None
AlignConsecutiveDeclarations: None
AlignEscapedNewlines: Left
AlignOperands:   Align
AlignTrailingComments: true
AllowAllArgumentsOnNextLine: true
AllowAllParametersOfDeclarationOnNextLine: true
AllowShortEnumsOnASingleLine: true
AllowShortBlocksOnASingleLine: Never
AllowShortCaseLabelsOnASingleLine: false
AllowShortFunctionsOnASingleLine: All
AllowShortLambdasOnASingleLine: All
AllowShortIfStatementsOnASingleLine: WithoutElse
AllowShortLoopsOnASingleLine: true
AlwaysBreakAfterDefinitionReturnType: None
AlwaysBreakAfterReturnType: None
AlwaysBreakBeforeMultilineStrings: true
AlwaysBreakTemplateDeclarations: Yes
AttributeMacros:
  - __capability
BinPackArguments: true
BinPackParameters: true
BraceWrapping:
  AfterCaseLabel:  false
  AfterClass:      false
  AfterControlStatement: Never
  AfterEnum:       false
  AfterFunction:   false
  AfterNamespace:  false
  AfterObjCDeclaration: false
  AfterStruct:     false
  AfterUnion:      false
  AfterExternBlock: false
  BeforeCatch:     false
  BeforeElse:      false
  BeforeLambdaBody: false
  BeforeWhile:     false
  IndentBraces:    false
  SplitEmptyFunction: true
  SplitEmptyRecord: true
  SplitEmptyNamespace: true
BreakBeforeBinaryOperators: None
BreakBeforeConceptDeclarations: true
BreakBeforeBraces: Attach
BreakBeforeInheritanceComma: false
BreakInheritanceList: BeforeColon
BreakBeforeTernaryOperators: true
BreakConstructorInitializersBeforeComma: false
BreakConstructorInitializers: BeforeColon
BreakAfterJavaFieldAnnotations: false
BreakStringLiterals: true
ColumnLimit:     160
CommentPragmas:  '^ IWYU pragma:'
QualifierAlignment: Leave
CompactNamespaces: false
ConstructorInitializerIndentWidth: 4
ContinuationIndentWidth: 4
Cpp11BracedListStyle: true
DeriveLineEnding: true
DerivePointerAlignment: true
DisableFormat:   false
EmptyLineAfterAccessModifier: Never
EmptyLineBeforeAccessModifier: LogicalBlock
ExperimentalAutoDetectBinPacking: false
PackConstructorInitializers: NextLine
BasedOnStyle:    ''
ConstructorInitializerAllOnOneLineOrOnePerLine: false
AllowAllConstructorInitializersOnNextLine: true
FixNamespaceComments: true
ForEachMacros:
  - foreach
  - Q_FOREACH
  - BOOST_FOREACH
IfMacros:
  - KJ_IF_MAYBE
IncludeBlocks:   Regroup
IncludeCategories:
  - Regex:           '^<ext/.*\.h>'
    Priority:        2
    SortPriority:    0
    CaseSensitive:   false
  - Regex:           '^<.*\.h>'
    Priority:        1
    SortPriority:    0
    CaseSensitive:   false
  - Regex:           '^<.*'
    Priority:        2
    SortPriority:    0
    CaseSensitive:   false
  - Regex:           '.*'
    Priority:        3
    SortPriority:    0
    CaseSensitive:   false
IncludeIsMainRegex: '([-_](test|unittest))?$'
IncludeIsMainSourceRegex: ''
IndentAccessModifiers: false
IndentCaseLabels: true
IndentCaseBlocks: false
IndentGotoLabels: true
IndentPPDirectives: BeforeHash
IndentExternBlock: AfterExternBlock
IndentRequires:  false
IndentWidth:     4
IndentWrappedFunctionNames: false
InsertTrailingCommas: None
JavaScriptQuotes: Leave
JavaScriptWrapImports: true
KeepEmptyLinesAtTheStartOfBlocks: false
LambdaBodyIndentation: Signature
MacroBlockBegin: ''
MacroBlockEnd:   ''
MaxEmptyLinesToKeep: 1
NamespaceIndentation: None
ObjCBinPackProtocolList: Never
ObjCBlockIndentWidth: 2
ObjCBreakBeforeNestedBlockParam: true
ObjCSpaceAfterProperty: false
ObjCSpaceBeforeProtocolList: true
PenaltyBreakAssignment: 2
PenaltyBreakBeforeFirstCallParameter: 1
PenaltyBreakComment: 300
PenaltyBreakFirstLessLess: 120
PenaltyBreakOpenParenthesis: 0
PenaltyBreakString: 1000
PenaltyBreakTemplateDeclaration: 10
PenaltyExcessCharacter: 1000000
PenaltyReturnTypeOnItsOwnLine: 200
PenaltyIndentedWhitespace: 0
PointerAlignment: Left
PPIndentWidth:   -1
RawStringFormats:
  - Language:        Cpp
    Delimiters:
      - cc
      - CC
      - cpp
      - Cpp
      - CPP
      - 'c++'
      - 'C++'
    CanonicalDelimiter: ''
    BasedOnStyle:    google
  - Language:        TextProto
    Delimiters:
      - pb
      - PB
      - proto
      - PROTO
    EnclosingFunctions:
      - EqualsProto
      - EquivToProto
      - PARSE_PARTIAL_TEXT_PROTO
      - PARSE_TEST_PROTO
      - PARSE_TEXT_PROTO
      - ParseTextOrDie
      - ParseTextProtoOrDie
      - ParseTestProto
      - ParsePartialTestProto
    CanonicalDelimiter: pb
    BasedOnStyle:    google
ReferenceAlignment: Pointer
ReflowComments:  true
RemoveBracesLLVM: false
SeparateDefinitionBlocks: Leave
ShortNamespaceLines: 1
SortIncludes:    CaseSensitive
SortJavaStaticImport: Before
SortUsingDeclarations: true
SpaceAfterCStyleCast: false
SpaceAfterLogicalNot: false
SpaceAfterTemplateKeyword: true
SpaceBeforeAssignmentOperators: true
SpaceBeforeCaseColon: false
SpaceBeforeCpp11BracedList: false
SpaceBeforeCtorInitializerColon: true
SpaceBeforeInheritanceColon: true
SpaceBeforeParens: ControlStatements
SpaceBeforeParensOptions:
  AfterControlStatements: true
  AfterForeachMacros: true
  AfterFunctionDefinitionName: false
  AfterFunctionDeclarationName: false
  AfterIfMacros:   true
  AfterOverloadedOperator: false
  BeforeNonEmptyParentheses: false
SpaceAroundPointerQualifiers: Default
SpaceBeforeRangeBasedForLoopColon: true
SpaceInEmptyBlock: false
SpaceInEmptyParentheses: false
SpacesBeforeTrailingComments: 2
SpacesInAngles:  Never
SpacesInConditionalStatement: false
SpacesInContainerLiterals: true
SpacesInCStyleCastParentheses: false
SpacesInLineCommentPrefix:
  Minimum:         1
  Maximum:         -1
SpacesInParentheses: false
SpacesInSquareBrackets: false
SpaceBeforeSquareBrackets: false
BitFieldColonSpacing: Both
Standard:        Auto
StatementAttributeLikeMacros:
  - Q_EMIT
StatementMacros:
  - Q_UNUSED
  - QT_REQUIRE_VERSION
TabWidth:        8
UseCRLF:         false
UseTab:          Never
WhitespaceSensitiveMacros:
  - STRINGIZE
  - PP_STRINGIZE
  - BOOST_PP_STRINGIZE
  - NS_SWIFT_NAME
  - CF_SWIFT_NAME
...
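
The `IncludeCategories` rules above bucket each `#include` by the first regex that matches it, and `Priority` then orders the buckets. A minimal Python sketch of that first-match classification (the function name `include_priority` is illustrative, not part of clang-format):

```python
import re

# (regex, priority) pairs copied from the IncludeCategories list above;
# like clang-format, the first matching regex decides the bucket.
CATEGORIES = [
    (re.compile(r"^<ext/.*\.h>"), 2),
    (re.compile(r"^<.*\.h>"), 1),
    (re.compile(r"^<.*"), 2),
    (re.compile(r".*"), 3),
]

def include_priority(header: str) -> int:
    """Return the priority bucket for an #include target such as '<vector>'."""
    for pattern, priority in CATEGORIES:
        if pattern.search(header):
            return priority
    return 3  # unreachable: the '.*' catch-all always matches
```

Under these rules, `<stdio.h>` lands in group 1, `<vector>` and `<ext/foo.h>` in group 2, and quoted project headers like `"configuration/config.h"` in group 3.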



================================================
FILE: .flake8
================================================
#########################
# Flake8 Configuration  #
# (.flake8)             #
#########################
[flake8]
ignore =
    # first argument of a classmethod should be named 'cls'
    N804
    # line break before binary operator
    W503
    # whitespace before ':'
    E203
exclude =
    .tox
    .git,
    __pycache__,
    build,
    *.pyc,
    *third_party*,
    scripts
max-line-length = 120
max-complexity = 25
import-order-style = pycharm
application-import-names =
    marius


================================================
FILE: .github/ISSUE_TEMPLATE/bug_report.md
================================================
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: bug
assignees: ''

---

**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Environment**
List your operating system and dependency versions. You can obtain these by running `marius_env_info` from the command line.

**Additional context**
Add any other context about the problem here.


================================================
FILE: .github/ISSUE_TEMPLATE/documentation-improvement.md
================================================
---
name: Documentation Improvement
about: 'Suggest improvements to the documentation '
title: ''
labels: documentation
assignees: ''

---

**What is the documentation lacking? Please describe.**
A clear and concise description of what the problem is. 

**Describe the improvement you'd like**
A clear and concise description of what you want added or fixed.

**Additional context**
Provide additional information and links to the relevant sections of the documentation (if applicable).


================================================
FILE: .github/ISSUE_TEMPLATE/feature_request.md
================================================
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''

---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.


================================================
FILE: .github/ISSUE_TEMPLATE/general-question.md
================================================
---
name: General Question
about: Ask a question
title: ''
labels: question
assignees: ''

---




================================================
FILE: .github/PULL_REQUEST_TEMPLATE/pull_request_template.md
================================================
If there is no outstanding issue related to this change, please open an issue before submitting this pull request. For small and trivial changes, this step can be skipped.

**Describe the pull request.**
A clear and concise description of what the pull request contains. 

**How was this tested?**
Describe the tests that were added and any manual testing if applicable.

**Please link the issue(s) this relates to.**

**Additional context**
Add any other context or screenshots for the pull request here. Include notes on any follow-up work that may be required.


================================================
FILE: .github/workflows/build_and_test.yml
================================================
name: Build and Test

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
env:
  BUILD_TYPE: Release

jobs:
  build:
    name: ${{ matrix.config.name }}
    runs-on: ${{ matrix.config.os }}
    strategy:
      fail-fast: false
      matrix:
        config:
        - {
            name: "Ubuntu 20.04 GCC", artifact: "Linux.7z",
            os: ubuntu-20.04,
            cc: "gcc", cxx: "g++"
          }
    steps:
    - uses: actions/checkout@v2

    - name: Install dependencies
      working-directory: ${{github.workspace}}
      shell: bash
      run: |
        python3 --version

        sudo python3 -m pip install pyarrow

        if [ "$RUNNER_OS" == "Linux" ]; then
            sudo pip3 install torch --extra-index-url https://download.pytorch.org/whl/cpu
        else
            echo "$RUNNER_OS not supported"
            exit 1
        fi
      
    - name: Install Marius
      working-directory: ${{github.workspace}}
      shell: bash
      run: |
        sudo pip3 install .[tests] --verbose
        marius_env_info

    - name: Run Tests
      working-directory: ${{github.workspace}}
      shell: bash
      run: OMP_NUM_THREADS=1 MARIUS_TEST_HOME=test/ python3 -m pytest test/python --verbose



================================================
FILE: .github/workflows/db2graph_test_postgres.yml
================================================
name: Testing DB2GRAPH using postgres
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:

  db2graph:
    runs-on: ubuntu-latest
    container: ${{ matrix.python_container }}
    strategy:
      matrix:
        python_container: ["python:3.7", "python:3.8", "python:3.9", "python:3.10"]

    services:
      postgres:
        # Docker Hub image
        image: postgres
        # Provide the password for postgres
        env:
          POSTGRES_PASSWORD: postgres
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5

    steps:
      # Downloads a copy of the code in your repository before running CI tests
      - name: Check out repository code
        uses: actions/checkout@v3

      - name: Installing dependencies
        run: MARIUS_NO_BINDINGS=1 python3 -m pip install .[db2graph,tests]

      - name: Running pytest
        run: MARIUS_NO_BINDINGS=1 pytest -s test/db2graph/test_postgres.py
        # Environment variables used in the test
        env:
          # The hostname used to communicate with the PostgreSQL service container
          POSTGRES_HOST: postgres
          # The default PostgreSQL port - using default port
          POSTGRES_PORT: 5432

================================================
FILE: .github/workflows/lint.yml
================================================
name: Lint

on: [push, pull_request]

jobs:
  linting:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Install Tox
      run: pip3 install tox
    - name: Update clang-format
      run: pip3 install --upgrade pip; pip3 install clang-format
    - name: Check linting with Flake8
      run: tox -e check_lint


================================================
FILE: .gitignore
================================================
CMakeCache.txt
CMakeFiles
CMakeScripts
Testing
Makefile
cmake_install.cmake
install_manifest.txt
compile_commands.json
CTestTestfile.cmake
.idea/
cmake-*/
logs/
data/
!src/cpp/src/data
!src/cpp/include/data
test/test_data/generated/
*.dylib

# Created by https://www.toptal.com/developers/gitignore/api/python
# Edit at https://www.toptal.com/developers/gitignore?templates=python

### Python ###
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
docs_build/
docs_html/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
pytestdebug.log

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/
doc/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
#   However, in case of collaboration, if having platform-specific dependencies or dependencies
#   having no cross-platform support, pipenv may install dependencies that don't work, or not
#   install all needed dependencies.
#Pipfile.lock

# poetry
#poetry.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
# .env
.env/
.venv/
env/
venv/
ENV/
env.bak/
venv.bak/
pythonenv*

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# operating system-related files
# file properties cache/storage on macOS
*.DS_Store
# thumbnail cache on Windows
Thumbs.db

# profiling data
.prof


# End of https://www.toptal.com/developers/gitignore/api/python


================================================
FILE: .gitmodules
================================================
[submodule "src/cpp/third_party/pybind11"]
	path = src/cpp/third_party/pybind11
	url = https://github.com/pybind/pybind11.git
[submodule "src/cpp/third_party/spdlog"]
	path = src/cpp/third_party/spdlog
	url = https://github.com/gabime/spdlog.git
[submodule "src/cpp/third_party/googletest"]
	path = src/cpp/third_party/googletest
	url = https://github.com/google/googletest.git
[submodule "src/cpp/third_party/parallel-hashmap"]
	path = src/cpp/third_party/parallel-hashmap
	url = https://github.com/greg7mdp/parallel-hashmap.git


================================================
FILE: CMakeLists.txt
================================================
cmake_minimum_required(VERSION 3.12.2)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
cmake_policy(SET CMP0048 NEW)

project(marius VERSION 0.1 LANGUAGES CXX)

include(FindPackageHandleStandardArgs)

add_compile_definitions(_GLIBCXX_USE_CXX11_ABI=0)

set(CMAKE_CXX_VISIBILITY_PRESET default)

if ("${CMAKE_CXX_COMPILER_ID}" MATCHES "Clang")
    if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 11.0)
        message(FATAL_ERROR "Clang version must be at least 11!")
    endif()
    set(CLANG TRUE)
elseif ("${CMAKE_CXX_COMPILER_ID}" MATCHES "GNU")
    if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 7.0)
        message(FATAL_ERROR "GCC version must be at least 7.0!")
    endif()
    set(GCC TRUE)
else ()
    message(FATAL_ERROR "Unknown compiler")
endif ()

if (${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
    set(CMAKE_MACOSX_RPATH 1)
endif ()

if(${USE_CUDA})
    add_definitions(-DMARIUS_CUDA=${USE_CUDA})
    set(CMAKE_CUDA_STANDARD 14)
    set(CMAKE_CUDA_STANDARD_REQUIRED TRUE)
    enable_language(CUDA)
    set(CMAKE_CUDA_FLAGS "${CMAKE_CUDA_FLAGS} --expt-relaxed-constexpr")
endif()

# Find torch location
execute_process(
        COMMAND python3 -c "import torch; import os; print(os.path.dirname(torch.__file__), end='')"
        OUTPUT_VARIABLE TorchPath
)
list(APPEND CMAKE_PREFIX_PATH ${TorchPath})

execute_process(
        COMMAND python3 -c "import torch; print(torch.__version__, end='')"
        OUTPUT_VARIABLE TorchVersion
)

message(STATUS "Torch Version: ${TorchVersion}")

# Add the cmake folder so the FindSphinx module is found

set(MARIUS_CPP_SOURCE ${CMAKE_CURRENT_LIST_DIR}/src/cpp)
set(CMAKE_MODULE_PATH "${MARIUS_CPP_SOURCE}/cmake" ${CMAKE_MODULE_PATH})
set(project_INCLUDE_DIR ${MARIUS_CPP_SOURCE}/include)
set(project_SOURCE_DIR ${MARIUS_CPP_SOURCE}/src)
set(project_CUDA_INCLUDE_DIR ${CMAKE_CURRENT_LIST_DIR}/src/cuda/include)
set(project_CUDA_SOURCE_DIR ${CMAKE_CURRENT_LIST_DIR}/src/cuda/src)
set(project_CUDA_THIRD_PARTY_DIR ${CMAKE_CURRENT_LIST_DIR}/src/cuda/third_party)
set(project_TEST_DIR ${CMAKE_CURRENT_LIST_DIR}/test)
set(project_DOCS_DIR ${CMAKE_CURRENT_LIST_DIR}/docs)
set(project_BINDINGS_DIR ${MARIUS_CPP_SOURCE}/python_bindings)
set(project_THIRD_PARTY_DIR ${MARIUS_CPP_SOURCE}/third_party)

set(project_WORKING_DIR ${CMAKE_CURRENT_BINARY_DIR})
add_definitions(-DMARIUS_BASE_DIRECTORY="${CMAKE_CURRENT_LIST_DIR}")
add_definitions(-DMARIUS_TEST_DIRECTORY="${project_TEST_DIR}")

if (EXISTS ${project_INCLUDE_DIR})
    file(GLOB_RECURSE project_HEADERS ${project_HEADERS} ${project_INCLUDE_DIR}/*.h)
endif ()
if (EXISTS ${project_SOURCE_DIR})
    file(GLOB_RECURSE project_SOURCES ${project_SOURCES} ${project_SOURCE_DIR}/*.cpp)
endif ()

if(${USE_CUDA})
    if (EXISTS ${project_CUDA_INCLUDE_DIR})
        file(GLOB_RECURSE project_CUDA_HEADERS ${project_CUDA_INCLUDE_DIR} ${project_CUDA_INCLUDE_DIR}/*.cuh)
    endif ()
    if (EXISTS ${project_CUDA_SOURCE_DIR})
        file(GLOB_RECURSE project_CUDA_SOURCES ${project_CUDA_SOURCE_DIR} ${project_CUDA_SOURCE_DIR}/*.cu)
    endif ()

    if (EXISTS ${project_CUDA_THIRD_PARTY_DIR})
        file(GLOB_RECURSE project_CUDA_THIRD_PARTY_HEADERS ${project_CUDA_THIRD_PARTY_DIR} ${project_CUDA_THIRD_PARTY_DIR}/*.cuh ${project_CUDA_THIRD_PARTY_DIR}/*.h)
    endif ()
    if (EXISTS ${project_CUDA_THIRD_PARTY_DIR})
        file(GLOB_RECURSE project_CUDA_THIRD_PARTY_SOURCES ${project_CUDA_THIRD_PARTY_DIR} ${project_CUDA_THIRD_PARTY_DIR}/*.cu ${project_CUDA_THIRD_PARTY_DIR}/*.cpp)
    endif ()
endif ()

message(STATUS "project_CUDA_THIRD_PARTY_HEADERS ${project_CUDA_THIRD_PARTY_HEADERS}")
message(STATUS "project_CUDA_THIRD_PARTY_SOURCES ${project_CUDA_THIRD_PARTY_SOURCES}")

find_package(Python3 COMPONENTS Development Interpreter REQUIRED)
find_package(Torch REQUIRED)

execute_process(
        COMMAND python3 -c "import torch; print(torch._C._PYBIND11_COMPILER_TYPE, end='')"
        OUTPUT_VARIABLE _PYBIND11_COMPILER_TYPE
)
execute_process(
        COMMAND python3 -c "import torch; print(torch._C._PYBIND11_STDLIB, end='')"
        OUTPUT_VARIABLE _PYBIND11_STDLIB
)
execute_process(
        COMMAND python3 -c "import torch; print(torch._C._PYBIND11_BUILD_ABI, end='')"
        OUTPUT_VARIABLE _PYBIND11_BUILD_ABI
)

message(STATUS "PYBIND11_COMPILER_TYPE:" ${_PYBIND11_COMPILER_TYPE})
message(STATUS "PYBIND11_STDLIB:" ${_PYBIND11_STDLIB})
message(STATUS "PYBIND11_BUILD_ABI:" ${_PYBIND11_BUILD_ABI})

add_compile_definitions(PYBIND11_COMPILER_TYPE="${_PYBIND11_COMPILER_TYPE}" PYBIND11_STDLIB="${_PYBIND11_STDLIB}" PYBIND11_BUILD_ABI="${_PYBIND11_BUILD_ABI}")

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")
message(STATUS "CMAKE_CXX_FLAGS: ${CMAKE_CXX_FLAGS}")

message(STATUS "Python3_INCLUDE_DIRS ${Python3_INCLUDE_DIRS}")
add_subdirectory(${project_THIRD_PARTY_DIR})
set_property(TARGET spdlog PROPERTY POSITION_INDEPENDENT_CODE ON)

include_directories(${Python3_INCLUDE_DIRS})
include_directories(${project_INCLUDE_DIR})
include_directories(${project_CUDA_INCLUDE_DIR})
include_directories(${project_CUDA_THIRD_PARTY_DIR})
include_directories(${TORCH_INCLUDE_DIRS})
include_directories(${project_THIRD_PARTY_DIR}/parallel-hashmap/)
include_directories(${project_BINDINGS_DIR})

add_library(${PROJECT_NAME}
            SHARED
            ${project_SOURCES}
            ${project_HEADERS}
            ${project_CUDA_HEADERS}
            ${project_CUDA_SOURCES}
            ${project_CUDA_THIRD_PARTY_HEADERS}
            ${project_CUDA_THIRD_PARTY_SOURCES})

if(NOT APPLE)
    target_link_libraries(${PROJECT_NAME} ${Python3_LIBRARIES})
else()
    set_target_properties(${PROJECT_NAME} PROPERTIES LINK_FLAGS "-undefined dynamic_lookup")
endif()

target_link_libraries(${PROJECT_NAME} ${TORCH_LIBRARIES})
target_link_libraries(${PROJECT_NAME} spdlog)
set_target_properties(${PROJECT_NAME} PROPERTIES PUBLIC_HEADER "${project_HEADERS}")
set_target_properties(${PROJECT_NAME} PROPERTIES POSITION_INDEPENDENT_CODE ON)

if(${USE_CUDA})
    set(NVCC_FLAGS "${NVCC_FLAGS} --expt-relaxed-constexpr")
endif()

if(${USE_OMP})
    add_definitions(-DMARIUS_OMP=${USE_OMP})
    if(APPLE)
        if(CMAKE_C_COMPILER_ID MATCHES "Clang")
            set(OpenMP_C "${CMAKE_C_COMPILER}")
            set(OpenMP_C_FLAGS "-Xpreprocessor -fopenmp")
            set(OpenMP_C_LIB_NAMES "omp")
            set(OpenMP_omp_LIBRARY omp)
        endif()
        if(CMAKE_CXX_COMPILER_ID MATCHES "Clang")
            set(OpenMP_CXX "${CMAKE_CXX_COMPILER}")
            set(OpenMP_CXX_FLAGS "-Xpreprocessor -fopenmp")
            set(OpenMP_CXX_LIB_NAMES "omp")
            set(OpenMP_omp_LIBRARY omp)
        endif()
    endif()

    if("${CMAKE_CXX_COMPILER_ID}" MATCHES "GNU")
        set(OpenMP_CXX "${CMAKE_CXX_COMPILER}")
        set(OpenMP_CXX_FLAGS "-fopenmp")
    endif()
    find_package(OpenMP REQUIRED)
    target_link_libraries(${PROJECT_NAME} OpenMP::OpenMP_CXX)
endif()

if (EXISTS ${project_INCLUDE_DIR})
    target_include_directories(${PROJECT_NAME} PUBLIC ${project_INCLUDE_DIR})
endif ()
if (EXISTS ${project_SOURCE_DIR})
    target_include_directories(${PROJECT_NAME} PRIVATE ${project_SOURCE_DIR})
endif ()

IF(CMAKE_BUILD_TYPE MATCHES Debug AND MARIUS_USE_TSAN)
    message("Using thread sanitizer")
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fsanitize=thread")
    set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} -fsanitize=thread")
ENDIF(CMAKE_BUILD_TYPE MATCHES Debug AND MARIUS_USE_TSAN)

IF(CMAKE_BUILD_TYPE MATCHES Debug AND MARIUS_USE_ASAN)
    message("Using address sanitizer")
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fsanitize=address -fsanitize=leak")
    set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} -fsanitize=address -fsanitize=leak")
ENDIF(CMAKE_BUILD_TYPE MATCHES Debug AND MARIUS_USE_ASAN)


IF(BUILD_DOCS)
    add_subdirectory(${project_DOCS_DIR})
ENDIF()

if (EXISTS ${project_TEST_DIR})
    enable_testing()
    add_subdirectory(${project_TEST_DIR})
endif ()

add_executable(marius_train ${project_SOURCE_DIR}/marius.cpp)
add_executable(marius_eval ${project_SOURCE_DIR}/marius.cpp)
target_link_libraries(marius_train ${PROJECT_NAME})
target_link_libraries(marius_eval ${PROJECT_NAME})

find_library(TORCH_PYTHON_LIBRARY torch_python PATHS "${TORCH_INSTALL_PREFIX}/lib")
message(STATUS "TORCH_PYTHON_LIBRARY: ${TORCH_PYTHON_LIBRARY}")

file(GLOB_RECURSE CONFIG_BINDINGS ${project_BINDINGS} ${project_BINDINGS_DIR}/configuration/*.cpp)
pybind11_add_module(_config ${CONFIG_BINDINGS})
target_link_libraries(_config PRIVATE ${PROJECT_NAME} ${TORCH_PYTHON_LIBRARY})

file(GLOB_RECURSE DATA_BINDINGS ${project_BINDINGS} ${project_BINDINGS_DIR}/data/*.cpp)
pybind11_add_module(_data ${DATA_BINDINGS})
target_link_libraries(_data PRIVATE ${PROJECT_NAME} ${TORCH_PYTHON_LIBRARY})

file(GLOB_RECURSE NN_BINDINGS ${project_BINDINGS} ${project_BINDINGS_DIR}/nn/*.cpp)
pybind11_add_module(_nn ${NN_BINDINGS})
target_link_libraries(_nn PRIVATE ${PROJECT_NAME} ${TORCH_PYTHON_LIBRARY})

file(GLOB_RECURSE MANAGER_BINDINGS ${project_BINDINGS} ${project_BINDINGS_DIR}/manager/*.cpp)
pybind11_add_module(_manager ${MANAGER_BINDINGS})
target_link_libraries(_manager PRIVATE ${PROJECT_NAME} ${TORCH_PYTHON_LIBRARY})

file(GLOB_RECURSE PIPELINE_BINDINGS ${project_BINDINGS} ${project_BINDINGS_DIR}/pipeline/*.cpp)
pybind11_add_module(_pipeline ${PIPELINE_BINDINGS})
target_link_libraries(_pipeline PRIVATE ${PROJECT_NAME} ${TORCH_PYTHON_LIBRARY})

file(GLOB_RECURSE REPORT_BINDINGS ${project_BINDINGS} ${project_BINDINGS_DIR}/reporting/*.cpp)
pybind11_add_module(_report ${REPORT_BINDINGS})
target_link_libraries(_report PRIVATE ${PROJECT_NAME} ${TORCH_PYTHON_LIBRARY})

file(GLOB_RECURSE STORAGE_BINDINGS ${project_BINDINGS} ${project_BINDINGS_DIR}/storage/*.cpp)
pybind11_add_module(_storage ${STORAGE_BINDINGS})
target_link_libraries(_storage PRIVATE ${PROJECT_NAME} ${TORCH_PYTHON_LIBRARY})

add_custom_target(bindings)
add_dependencies(bindings _config _data _manager _nn _pipeline _report _storage)


================================================
FILE: CONTRIBUTING.md
================================================
# Contributing to Marius

Contributions to Marius are welcome. Here are a few ways to contribute:


- Adding new models
- Adding new datasets and converters
- Downstream inference examples
- Documentation improvements
- Bug fixes

## Contributing Code

1. Fork the Marius repository
2. Clone the forked repo and create a new branch for your change  
- `git clone https://github.com/<YourUsername>/marius`  
- `git checkout -b <feature_branch>`
   
3. Add your changes to the feature branch 

4. Write tests for your changes  
- C++ tests use GoogleTest (gtest) and are located under `test/cpp`
- Python tests are located in `test/python`

5. Run tests and verify nothing is broken.
See `test/README.md` for how to build and run the tests.
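
The tests under `test/python` are pytest-based, so a new test file generally just needs plain functions whose names start with `test_`. The sketch below is purely illustrative: `edge_count` is a hypothetical stand-in for whatever logic a contribution adds, and a real test would import from the `marius` package instead.

```python
# Hypothetical sketch of a pytest-style test file for test/python/.
# "edge_count" stands in for code a contribution might add; real tests
# would import the function under test from the marius package.
def edge_count(src_nodes, dst_nodes):
    # Stand-in logic: number of (src, dst) pairs forming edges.
    return min(len(src_nodes), len(dst_nodes))


def test_edge_count():
    assert edge_count([0, 1, 2], [3, 4, 5]) == 3
```

Running `pytest test/python` will pick up any such file automatically.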

## Submitting a Pull Request

Once your changes are complete, or if you want to submit an in-progress pull request to get early feedback, please follow these steps:

1. Sync your feature branch with the main branch

- `git remote add upstream https://github.com/marius-team/marius.git`

- `git fetch upstream main`

- `git merge upstream/main`

2. Create and submit a pull request that follows the provided template. The pull request will be reviewed by the maintainers of Marius.

3. Address the comments from the reviewer(s) and update your pull request accordingly. 

4. Once the review process is complete your changes will be merged in!


================================================
FILE: LICENSE
================================================
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.


================================================
FILE: MANIFEST.in
================================================
graft src
include CMakeLists.txt
global-exclude *.py[cod] __pycache__ *.so *.dylib .DS_Store *.gpickle

================================================
FILE: README.md
================================================
# Marius and MariusGNN #

This repository contains the code for the Marius and MariusGNN papers. 
We have combined the two works into one unified system for training 
graph embeddings and graph neural networks over large-scale graphs 
on a single machine using the entire memory hierarchy.

Marius ([OSDI '21 Paper](https://www.usenix.org/conference/osdi21/presentation/mohoney)) is designed to reduce data movement overheads for graph embedding training using:
- Pipelined training and IO
- Partition caching and a buffer-aware data ordering to minimize IO for disk-based training (called BETA)

MariusGNN ([EuroSys '23 Paper](https://dl.acm.org/doi/abs/10.1145/3552326.3567501)) 
utilizes the data movement optimizations from Marius and adds support for scalable graph neural network training through:
- An optimized data structure for neighbor sampling and GNN aggregation (called DENSE)
- An improved data ordering for disk-based training (called COMET) which minimizes IO and maximizes model accuracy (with COMET now subsuming BETA)

## Build and Install ##

### Requirements ###

* CUDA >= 10.1
* CuDNN >= 7 
* PyTorch >= 1.8
* Python >= 3.7
* GCC >= 7 (On Linux) or Clang >= 11.0 (On MacOS)
* CMake >= 3.12
* Make >= 3.8

### Docker Installation ###
We recommend using Docker for building and installation. 
We provide a Dockerfile that installs all the necessary 
requirements, along with end-to-end instructions, in `examples/docker/`.


### Pip Installation ###
With the required dependencies installed, Marius and MariusGNN can be built using Pip:  

```
git clone https://github.com/marius-team/marius.git
cd marius
pip3 install .
```

### Installation Result ###

After installation, the Python API can be accessed with ``import marius``.

The following command line tools will also be installed:
- marius_train: Train models using configuration files and the command line
- marius_eval: Command line model evaluation
- marius_preprocess: Built-in dataset downloading and preprocessing
- marius_predict: Batch inference tool for link prediction or node classification

## Command Line Interface ##

The command line interface supports performant in-memory and out-of-core 
training and evaluation of graph learning models. Experimental results 
from our papers can be reproduced using this interface (we also provide
an exact experiment artifact for each paper in separate branches).

### Quick Start: ###

First make sure Marius is installed. 

Preprocess the FB15K_237 dataset with `marius_preprocess --dataset fb15k_237 --output_dir datasets/fb15k_237_example/`

Train using the example configuration file (assuming we are in the root directory of the repository) with `marius_train examples/configuration/fb15k_237.yaml`

After running this configuration file, the MRR reported by the system should be approximately 0.25 after 10 epochs.

Perform batch inference on the test set with `marius_predict --config examples/configuration/fb15k_237.yaml --metrics mrr --save_scores --save_ranks`

See the [full example](http://marius-project.org/marius/examples/config/lp_fb15k237.html#small-scale-link-prediction-fb15k-237) for details.
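
For convenience, the quick start steps above can be collected into a single shell session (assuming Marius is installed and you are in the repository root):

```
marius_preprocess --dataset fb15k_237 --output_dir datasets/fb15k_237_example/
marius_train examples/configuration/fb15k_237.yaml
marius_predict --config examples/configuration/fb15k_237.yaml --metrics mrr --save_scores --save_ranks
```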

## Python API ##

The Python API is currently experimental and can be used to perform in-memory training and evaluation of graph learning models. 

See the [documentation](http://marius-project.org/marius/examples/python/index.html#) and `examples/python/` for Python API usage and examples.


## Citing Marius or MariusGNN ##
Marius (out-of-core graph embeddings)
```
@inproceedings{Marius,
    author = {Jason Mohoney and Roger Waleffe and Henry Xu and Theodoros Rekatsinas and Shivaram Venkataraman},
    title = {Marius: Learning Massive Graph Embeddings on a Single Machine},
    booktitle = {15th {USENIX} Symposium on Operating Systems Design and Implementation ({OSDI} 21)},
    year = {2021},
    isbn = {9781939133229},
    pages = {533--549},
    url = {https://www.usenix.org/conference/osdi21/presentation/mohoney},
    publisher = {{USENIX} Association}
}
```

MariusGNN (out-of-core GNN training)
```
@inproceedings{MariusGNN, 
    author = {Roger Waleffe and Jason Mohoney and Theodoros Rekatsinas and Shivaram Venkataraman},
    title = {MariusGNN: Resource-Efficient Out-of-Core Training of Graph Neural Networks}, 
    booktitle = {Proceedings of the Eighteenth European Conference on Computer Systems}, 
    year = {2023}, 
    isbn = {9781450394871}, 
    pages = {144--161},
    url = {https://doi.org/10.1145/3552326.3567501},
    publisher = {Association for Computing Machinery}
}
```


================================================
FILE: docs/.nojekyll
================================================


================================================
FILE: docs/CMakeLists.txt
================================================
# https://devblogs.microsoft.com/cppblog/clear-functional-c-documentation-with-sphinx-breathe-doxygen-cmake/
find_package(Doxygen REQUIRED)
find_package(Sphinx REQUIRED)

# Find all the public headers
file(GLOB_RECURSE PROJECT_HEADERS ${project_INCLUDE_DIR}/*.h)

set(DOXYGEN_INPUT_DIR ${PROJECT_SOURCE_DIR}/src/cpp/include)
set(DOXYGEN_OUTPUT_DIR ${CMAKE_CURRENT_BINARY_DIR}/doxygen)
set(DOXYGEN_INDEX_FILE ${DOXYGEN_OUTPUT_DIR}/xml/index.xml)
set(DOXYFILE_IN ${CMAKE_CURRENT_SOURCE_DIR}/Doxyfile.in)
set(DOXYFILE_OUT ${CMAKE_CURRENT_BINARY_DIR}/Doxyfile)

# Replace variables inside @@ with the current values
configure_file(${DOXYFILE_IN} ${DOXYFILE_OUT} @ONLY)

# Doxygen won't create this for us
file(MAKE_DIRECTORY ${DOXYGEN_OUTPUT_DIR})

# Only regenerate Doxygen when the Doxyfile or public headers change
add_custom_command(OUTPUT ${DOXYGEN_INDEX_FILE}
        DEPENDS ${PROJECT_HEADERS}
        COMMAND ${DOXYGEN_EXECUTABLE} ${DOXYFILE_OUT}
        MAIN_DEPENDENCY ${DOXYFILE_OUT} ${DOXYFILE_IN}
        COMMENT "Generating docs"
        VERBATIM)

# Nice named target so we can run the job easily
add_custom_target(Doxygen ALL DEPENDS ${DOXYGEN_INDEX_FILE})

set(SPHINX_SOURCE ${CMAKE_CURRENT_SOURCE_DIR})
set(SPHINX_BUILD ${CMAKE_CURRENT_BINARY_DIR}/html)
set(SPHINX_INDEX_FILE ${SPHINX_BUILD}/index.html)

# Only regenerate Sphinx when:
# - Doxygen has rerun
# - Our doc files have been updated
# - The Sphinx config has been updated
add_custom_command(OUTPUT ${SPHINX_INDEX_FILE}
        COMMAND
        ${SPHINX_EXECUTABLE} -b html
        # Tell Breathe where to find the Doxygen output
        -Dbreathe_projects.Marius=${DOXYGEN_OUTPUT_DIR}/xml
        ${SPHINX_SOURCE} ${SPHINX_BUILD}
        WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
        DEPENDS
        # Other docs files you want to track should go here (or in some variable)
        ${CMAKE_CURRENT_SOURCE_DIR}/index.rst
        ${DOXYGEN_INDEX_FILE}
        MAIN_DEPENDENCY ${SPHINX_SOURCE}/conf.py
        COMMENT "Generating documentation with Sphinx")

# Nice named target so we can run the job easily
add_custom_target(Sphinx ALL DEPENDS ${SPHINX_INDEX_FILE})

# Add an install target to install the docs
include(GNUInstallDirs)
install(DIRECTORY ${SPHINX_BUILD}
        DESTINATION ${CMAKE_INSTALL_DOCDIR})

================================================
FILE: docs/Doxyfile
================================================
# Doxyfile 1.8.20

# This file describes the settings to be used by the documentation system
# doxygen (www.doxygen.org) for a project.
#
# All text after a double hash (##) is considered a comment and is placed in
# front of the TAG it is preceding.
#
# All text after a single hash (#) is considered a comment and will be ignored.
# The format is:
# TAG = value [value, ...]
# For lists, items can also be appended using:
# TAG += value [value, ...]
# Values that contain spaces should be placed between quotes (\" \").

#---------------------------------------------------------------------------
# Project related configuration options
#---------------------------------------------------------------------------

# This tag specifies the encoding used for all characters in the configuration
# file that follow. The default is UTF-8 which is also the encoding used for all
# text before the first occurrence of this tag. Doxygen uses libiconv (or the
# iconv built into libc) for the transcoding. See
# https://www.gnu.org/software/libiconv/ for the list of possible encodings.
# The default value is: UTF-8.

DOXYFILE_ENCODING      = UTF-8

# The PROJECT_NAME tag is a single word (or a sequence of words surrounded by
# double-quotes, unless you are using Doxywizard) that should identify the
# project for which the documentation is generated. This name is used in the
# title of most generated pages and in a few other places.
# The default value is: My Project.

PROJECT_NAME           = "Marius"

# The PROJECT_NUMBER tag can be used to enter a project or revision number. This
# could be handy for archiving the generated documentation or if some version
# control system is used.

PROJECT_NUMBER         =

# Using the PROJECT_BRIEF tag one can provide an optional one line description
# for a project that appears at the top of each page and should give viewer a
# quick idea about the purpose of the project. Keep the description short.

PROJECT_BRIEF          =

# With the PROJECT_LOGO tag one can specify a logo or an icon that is included
# in the documentation. The maximum height of the logo should not exceed 55
# pixels and the maximum width should not exceed 200 pixels. Doxygen will copy
# the logo to the output directory.

PROJECT_LOGO           =

# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path
# into which the generated documentation will be written. If a relative path is
# entered, it will be relative to the location where doxygen was started. If
# left blank the current directory will be used.

OUTPUT_DIRECTORY       =

# If the CREATE_SUBDIRS tag is set to YES then doxygen will create 4096 sub-
# directories (in 2 levels) under the output directory of each output format and
# will distribute the generated files over these directories. Enabling this
# option can be useful when feeding doxygen a huge amount of source files, where
# putting all generated files in the same directory would otherwise cause
# performance problems for the file system.
# The default value is: NO.

CREATE_SUBDIRS         = NO

# If the ALLOW_UNICODE_NAMES tag is set to YES, doxygen will allow non-ASCII
# characters to appear in the names of generated files. If set to NO, non-ASCII
# characters will be escaped, for example _xE3_x81_x84 will be used for Unicode
# U+3044.
# The default value is: NO.

ALLOW_UNICODE_NAMES    = NO

# The OUTPUT_LANGUAGE tag is used to specify the language in which all
# documentation generated by doxygen is written. Doxygen will use this
# information to generate all constant output in the proper language.
# Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese,
# Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States),
# Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian,
# Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages),
# Korean, Korean-en (Korean with English messages), Latvian, Lithuanian,
# Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian,
# Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish,
# Ukrainian and Vietnamese.
# The default value is: English.

OUTPUT_LANGUAGE        = English

# The OUTPUT_TEXT_DIRECTION tag is used to specify the direction in which all
# documentation generated by doxygen is written. Doxygen will use this
# information to generate all generated output in the proper direction.
# Possible values are: None, LTR, RTL and Context.
# The default value is: None.

OUTPUT_TEXT_DIRECTION  = None

# If the BRIEF_MEMBER_DESC tag is set to YES, doxygen will include brief member
# descriptions after the members that are listed in the file and class
# documentation (similar to Javadoc). Set to NO to disable this.
# The default value is: YES.

BRIEF_MEMBER_DESC      = YES

# If the REPEAT_BRIEF tag is set to YES, doxygen will prepend the brief
# description of a member or function before the detailed description
#
# Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the
# brief descriptions will be completely suppressed.
# The default value is: YES.

REPEAT_BRIEF           = YES

# This tag implements a quasi-intelligent brief description abbreviator that is
# used to form the text in various listings. Each string in this list, if found
# as the leading text of the brief description, will be stripped from the text
# and the result, after processing the whole list, is used as the annotated
# text. Otherwise, the brief description is used as-is. If left blank, the
# following values are used ($name is automatically replaced with the name of
# the entity):The $name class, The $name widget, The $name file, is, provides,
# specifies, contains, represents, a, an and the.

ABBREVIATE_BRIEF       = "The $name class" \
                         "The $name widget" \
                         "The $name file" \
                         is \
                         provides \
                         specifies \
                         contains \
                         represents \
                         a \
                         an \
                         the

# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then
# doxygen will generate a detailed section even if there is only a brief
# description.
# The default value is: NO.

ALWAYS_DETAILED_SEC    = NO

# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all
# inherited members of a class in the documentation of that class as if those
# members were ordinary class members. Constructors, destructors and assignment
# operators of the base classes will not be shown.
# The default value is: NO.

INLINE_INHERITED_MEMB  = NO

# If the FULL_PATH_NAMES tag is set to YES, doxygen will prepend the full path
# before each file's name in the file list and in the header files. If set to NO
# the shortest path that makes the file name unique will be used.
# The default value is: YES.

FULL_PATH_NAMES        = YES

# The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path.
# Stripping is only done if one of the specified strings matches the left-hand
# part of the path. The tag can be used to show relative paths in the file list.
# If left blank the directory from which doxygen is run is used as the path to
# strip.
#
# Note that you can specify absolute paths here, but also relative paths, which
# will be relative from the directory where doxygen is started.
# This tag requires that the tag FULL_PATH_NAMES is set to YES.

STRIP_FROM_PATH        =

# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the
# path mentioned in the documentation of a class, which tells the reader which
# header file to include in order to use a class. If left blank only the name of
# the header file containing the class definition is used. Otherwise one should
# specify the list of include paths that are normally passed to the compiler
# using the -I flag.

STRIP_FROM_INC_PATH    =

# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but
# less readable) file names. This can be useful if your file system doesn't
# support long names like on DOS, Mac, or CD-ROM.
# The default value is: NO.

SHORT_NAMES            = NO

# If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the
# first line (until the first dot) of a Javadoc-style comment as the brief
# description. If set to NO, the Javadoc-style will behave just like regular Qt-
# style comments (thus requiring an explicit @brief command for a brief
# description.)
# The default value is: NO.

JAVADOC_AUTOBRIEF      = NO

# If the JAVADOC_BANNER tag is set to YES then doxygen will interpret a line
# such as
# /***************
# as being the beginning of a Javadoc-style comment "banner". If set to NO, the
# Javadoc-style will behave just like regular comments and it will not be
# interpreted by doxygen.
# The default value is: NO.

JAVADOC_BANNER         = NO

# If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first
# line (until the first dot) of a Qt-style comment as the brief description. If
# set to NO, the Qt-style will behave just like regular Qt-style comments (thus
# requiring an explicit \brief command for a brief description.)
# The default value is: NO.

QT_AUTOBRIEF           = NO

# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a
# multi-line C++ special comment block (i.e. a block of //! or /// comments) as
# a brief description. This used to be the default behavior. The new default is
# to treat a multi-line C++ comment block as a detailed description. Set this
# tag to YES if you prefer the old behavior instead.
#
# Note that setting this tag to YES also means that rational rose comments are
# not recognized any more.
# The default value is: NO.

MULTILINE_CPP_IS_BRIEF = NO

# By default Python docstrings are displayed as preformatted text and doxygen's
# special commands cannot be used. By setting PYTHON_DOCSTRING to NO,
# doxygen's special commands can be used and the contents of the docstring
# documentation blocks is shown as doxygen documentation.
# The default value is: YES.

PYTHON_DOCSTRING       = YES

# If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the
# documentation from any documented member that it re-implements.
# The default value is: YES.

INHERIT_DOCS           = YES

# If the SEPARATE_MEMBER_PAGES tag is set to YES then doxygen will produce a new
# page for each member. If set to NO, the documentation of a member will be part
# of the file/class/namespace that contains it.
# The default value is: NO.

SEPARATE_MEMBER_PAGES  = NO

# The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen
# uses this value to replace tabs by spaces in code fragments.
# Minimum value: 1, maximum value: 16, default value: 4.

TAB_SIZE               = 4

# This tag can be used to specify a number of aliases that act as commands in
# the documentation. An alias has the form:
# name=value
# For example adding
# "sideeffect=@par Side Effects:\n"
# will allow you to put the command \sideeffect (or @sideeffect) in the
# documentation, which will result in a user-defined paragraph with heading
# "Side Effects:". You can put \n's in the value part of an alias to insert
# newlines (in the resulting output). You can put ^^ in the value part of an
# alias to insert a newline as if a physical newline was in the original file.
# When you need a literal { or } or , in the value part of an alias you have to
# escape them by means of a backslash (\), this can lead to conflicts with the
# commands \{ and \} for these it is advised to use the version @{ and @} or use
# a double escape (\\{ and \\})

ALIASES                =

# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources
# only. Doxygen will then generate output that is more tailored for C. For
# instance, some of the names that are used will be different. The list of all
# members will be omitted, etc.
# The default value is: NO.

OPTIMIZE_OUTPUT_FOR_C  = NO

# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or
# Python sources only. Doxygen will then generate output that is more tailored
# for that language. For instance, namespaces will be presented as packages,
# qualified scopes will look different, etc.
# The default value is: NO.

OPTIMIZE_OUTPUT_JAVA   = NO

# Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran
# sources. Doxygen will then generate output that is tailored for Fortran.
# The default value is: NO.

OPTIMIZE_FOR_FORTRAN   = NO

# Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL
# sources. Doxygen will then generate output that is tailored for VHDL.
# The default value is: NO.

OPTIMIZE_OUTPUT_VHDL   = NO

# Set the OPTIMIZE_OUTPUT_SLICE tag to YES if your project consists of Slice
# sources only. Doxygen will then generate output that is more tailored for that
# language. For instance, namespaces will be presented as modules, types will be
# separated into more groups, etc.
# The default value is: NO.

OPTIMIZE_OUTPUT_SLICE  = NO

# Doxygen selects the parser to use depending on the extension of the files it
# parses. With this tag you can assign which parser to use for a given
# extension. Doxygen has a built-in mapping, but you can override or extend it
# using this tag. The format is ext=language, where ext is a file extension, and
# language is one of the parsers supported by doxygen: IDL, Java, JavaScript,
# Csharp (C#), C, C++, D, PHP, md (Markdown), Objective-C, Python, Slice, VHDL,
# Fortran (fixed format Fortran: FortranFixed, free formatted Fortran:
# FortranFree, unknown formatted Fortran: Fortran. In the latter case the parser
# tries to guess whether the code is fixed or free formatted code, this is the
# default for Fortran type files). For instance to make doxygen treat .inc files
# as Fortran files (default is PHP), and .f files as C (default is Fortran),
# use: inc=Fortran f=C.
#
# Note: For files without extension you can use no_extension as a placeholder.
#
# Note that for custom extensions you also need to set FILE_PATTERNS otherwise
# the files are not read by doxygen.

EXTENSION_MAPPING      =

# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments
# according to the Markdown format, which allows for more readable
# documentation. See https://daringfireball.net/projects/markdown/ for details.
# The output of markdown processing is further processed by doxygen, so you can
# mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in
# case of backward compatibilities issues.
# The default value is: YES.

MARKDOWN_SUPPORT       = YES

# When the TOC_INCLUDE_HEADINGS tag is set to a non-zero value, all headings up
# to that level are automatically included in the table of contents, even if
# they do not have an id attribute.
# Note: This feature currently applies only to Markdown headings.
# Minimum value: 0, maximum value: 99, default value: 5.
# This tag requires that the tag MARKDOWN_SUPPORT is set to YES.

TOC_INCLUDE_HEADINGS   = 5

# When enabled doxygen tries to link words that correspond to documented
# classes, or namespaces to their corresponding documentation. Such a link can
# be prevented in individual cases by putting a % sign in front of the word or
# globally by setting AUTOLINK_SUPPORT to NO.
# The default value is: YES.

AUTOLINK_SUPPORT       = YES

# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want
# to include (a tag file for) the STL sources as input, then you should set this
# tag to YES in order to let doxygen match functions declarations and
# definitions whose arguments contain STL classes (e.g. func(std::string);
# versus func(std::string) {}). This also makes the inheritance and collaboration
# diagrams that involve STL classes more complete and accurate.
# The default value is: NO.

BUILTIN_STL_SUPPORT    = NO

# If you use Microsoft's C++/CLI language, you should set this option to YES to
# enable parsing support.
# The default value is: NO.

CPP_CLI_SUPPORT        = NO

# Set the SIP_SUPPORT tag to YES if your project consists of sip (see:
# https://www.riverbankcomputing.com/software/sip/intro) sources only. Doxygen
# will parse them like normal C++ but will assume all classes use public instead
# of private inheritance when no explicit protection keyword is present.
# The default value is: NO.

SIP_SUPPORT            = NO

# For Microsoft's IDL there are propget and propput attributes to indicate
# getter and setter methods for a property. Setting this option to YES will make
# doxygen to replace the get and set methods by a property in the documentation.
# This will only work if the methods are indeed getting or setting a simple
# type. If this is not the case, or you want to show the methods anyway, you
# should set this option to NO.
# The default value is: YES.

IDL_PROPERTY_SUPPORT   = YES

# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC
# tag is set to YES then doxygen will reuse the documentation of the first
# member in the group (if any) for the other members of the group. By default
# all members of a group must be documented explicitly.
# The default value is: NO.

DISTRIBUTE_GROUP_DOC   = NO

# If one adds a struct or class to a group and this option is enabled, then also
# any nested class or struct is added to the same group. By default this option
# is disabled and one has to add nested compounds explicitly via \ingroup.
# The default value is: NO.

GROUP_NESTED_COMPOUNDS = NO

# Set the SUBGROUPING tag to YES to allow class member groups of the same type
# (for instance a group of public functions) to be put as a subgroup of that
# type (e.g. under the Public Functions section). Set it to NO to prevent
# subgrouping. Alternatively, this can be done per class using the
# \nosubgrouping command.
# The default value is: YES.

SUBGROUPING            = YES

# When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions
# are shown inside the group in which they are included (e.g. using \ingroup)
# instead of on a separate page (for HTML and Man pages) or section (for LaTeX
# and RTF).
#
# Note that this feature does not work in combination with
# SEPARATE_MEMBER_PAGES.
# The default value is: NO.

INLINE_GROUPED_CLASSES = NO

# When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions
# with only public data fields or simple typedef fields will be shown inline in
# the documentation of the scope in which they are defined (i.e. file,
# namespace, or group documentation), provided this scope is documented. If set
# to NO, structs, classes, and unions are shown on a separate page (for HTML and
# Man pages) or section (for LaTeX and RTF).
# The default value is: NO.

INLINE_SIMPLE_STRUCTS  = NO

# When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or
# enum is documented as struct, union, or enum with the name of the typedef. So
# typedef struct TypeS {} TypeT, will appear in the documentation as a struct
# with name TypeT. When disabled the typedef will appear as a member of a file,
# namespace, or class. And the struct will be named TypeS. This can typically be
# useful for C code in case the coding convention dictates that all compound
# types are typedef'ed and only the typedef is referenced, never the tag name.
# The default value is: NO.

TYPEDEF_HIDES_STRUCT   = NO

# The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This
# cache is used to resolve symbols given their name and scope. Since this can be
# an expensive process and often the same symbol appears multiple times in the
# code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small
# doxygen will become slower. If the cache is too large, memory is wasted. The
# cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range
# is 0..9, the default is 0, corresponding to a cache size of 2^16=65536
# symbols. At the end of a run doxygen will report the cache usage and suggest
# the optimal cache size from a speed point of view.
# Minimum value: 0, maximum value: 9, default value: 0.

LOOKUP_CACHE_SIZE      = 0

# The NUM_PROC_THREADS specifies the number of threads doxygen is allowed to use
# during processing. When set to 0 doxygen will base this on the number of
# cores available in the system. You can set it explicitly to a value larger
# than 0 to get more control over the balance between CPU load and processing
# speed. At this moment only the input processing can be done using multiple
# threads. Since this is still an experimental feature the default is set to 1,
# which efficively disables parallel processing. Please report any issues you
# encounter. Generating dot graphs in parallel is controlled by the
# DOT_NUM_THREADS setting.
# Minimum value: 0, maximum value: 32, default value: 1.

NUM_PROC_THREADS       = 1

#---------------------------------------------------------------------------
# Build related configuration options
#---------------------------------------------------------------------------

# If the EXTRACT_ALL tag is set to YES, doxygen will assume all entities in
# documentation are documented, even if no documentation was available. Private
# class members and static file members will be hidden unless the
# EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES.
# Note: This will also disable the warnings about undocumented members that are
# normally produced when WARNINGS is set to YES.
# The default value is: NO.

EXTRACT_ALL            = YES

# If the EXTRACT_PRIVATE tag is set to YES, all private members of a class will
# be included in the documentation.
# The default value is: NO.

EXTRACT_PRIVATE        = YES

# If the EXTRACT_PRIV_VIRTUAL tag is set to YES, documented private virtual
# methods of a class will be included in the documentation.
# The default value is: NO.

EXTRACT_PRIV_VIRTUAL   = YES

# If the EXTRACT_PACKAGE tag is set to YES, all members with package or internal
# scope will be included in the documentation.
# The default value is: NO.

EXTRACT_PACKAGE        = YES

# If the EXTRACT_STATIC tag is set to YES, all static members of a file will be
# included in the documentation.
# The default value is: NO.

EXTRACT_STATIC         = YES

# If the EXTRACT_LOCAL_CLASSES tag is set to YES, classes (and structs) defined
# locally in source files will be included in the documentation. If set to NO,
# only classes defined in header files are included. Does not have any effect
# for Java sources.
# The default value is: YES.

EXTRACT_LOCAL_CLASSES  = YES

# This flag is only useful for Objective-C code. If set to YES, local methods,
# which are defined in the implementation section but not in the interface are
# included in the documentation. If set to NO, only methods in the interface are
# included.
# The default value is: NO.

EXTRACT_LOCAL_METHODS  = NO

# If this flag is set to YES, the members of anonymous namespaces will be
# extracted and appear in the documentation as a namespace called
# 'anonymous_namespace{file}', where file will be replaced with the base name of
# the file that contains the anonymous namespace. By default anonymous namespace
# are hidden.
# The default value is: NO.

EXTRACT_ANON_NSPACES   = NO

# If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all
# undocumented members inside documented classes or files. If set to NO these
# members will be included in the various overviews, but no documentation
# section is generated. This option has no effect if EXTRACT_ALL is enabled.
# The default value is: NO.

HIDE_UNDOC_MEMBERS     = NO

# If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all
# undocumented classes that are normally visible in the class hierarchy. If set
# to NO, these classes will be included in the various overviews. This option
# has no effect if EXTRACT_ALL is enabled.
# The default value is: NO.

HIDE_UNDOC_CLASSES     = NO

# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend
# declarations. If set to NO, these declarations will be included in the
# documentation.
# The default value is: NO.

HIDE_FRIEND_COMPOUNDS  = NO

# If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any
# documentation blocks found inside the body of a function. If set to NO, these
# blocks will be appended to the function's detailed documentation block.
# The default value is: NO.

HIDE_IN_BODY_DOCS      = NO

# The INTERNAL_DOCS tag determines if documentation that is typed after a
# \internal command is included. If the tag is set to NO then the documentation
# will be excluded. Set it to YES to include the internal documentation.
# The default value is: NO.

INTERNAL_DOCS          = NO

# If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file
# names in lower-case letters. If set to YES, upper-case letters are also
# allowed. This is useful if you have classes or files whose names only differ
# in case and if your file system supports case sensitive file names. Windows
# (including Cygwin) and Mac users are advised to set this option to NO.
# The default value is: system dependent.

CASE_SENSE_NAMES       = NO

# If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with
# their full class and namespace scopes in the documentation. If set to YES, the
# scope will be hidden.
# The default value is: NO.

HIDE_SCOPE_NAMES       = NO

# If the HIDE_COMPOUND_REFERENCE tag is set to NO (default) then doxygen will
# append additional text to a page's title, such as Class Reference. If set to
# YES the compound reference will be hidden.
# The default value is: NO.

HIDE_COMPOUND_REFERENCE= NO

# If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of
# the files that are included by a file in the documentation of that file.
# The default value is: YES.

SHOW_INCLUDE_FILES     = YES

# If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each
# grouped member an include statement to the documentation, telling the reader
# which file to include in order to use the member.
# The default value is: NO.

SHOW_GROUPED_MEMB_INC  = NO

# If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include
# files with double quotes in the documentation rather than with sharp brackets.
# The default value is: NO.

FORCE_LOCAL_INCLUDES   = NO

# If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the
# documentation for inline members.
# The default value is: YES.

INLINE_INFO            = YES

# If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the
# (detailed) documentation of file and class members alphabetically by member
# name. If set to NO, the members will appear in declaration order.
# The default value is: YES.

SORT_MEMBER_DOCS       = YES

# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief
# descriptions of file, namespace and class members alphabetically by member
# name. If set to NO, the members will appear in declaration order. Note that
# this will also influence the order of the classes in the class list.
# The default value is: NO.

SORT_BRIEF_DOCS        = NO

# If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the
# (brief and detailed) documentation of class members so that constructors and
# destructors are listed first. If set to NO the constructors will appear in the
# respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS.
# Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief
# member documentation.
# Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting
# detailed member documentation.
# The default value is: NO.

SORT_MEMBERS_CTORS_1ST = NO

# If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy
# of group names into alphabetical order. If set to NO the group names will
# appear in their defined order.
# The default value is: NO.

SORT_GROUP_NAMES       = NO

# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by
# fully-qualified names, including namespaces. If set to NO, the class list will
# be sorted only by class name, not including the namespace part.
# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.
# Note: This option applies only to the class list, not to the alphabetical
# list.
# The default value is: NO.

SORT_BY_SCOPE_NAME     = NO

# If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper
# type resolution of all parameters of a function it will reject a match between
# the prototype and the implementation of a member function even if there is
# only one candidate or it is obvious which candidate to choose by doing a
# simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still
# accept a match between prototype and implementation in such cases.
# The default value is: NO.

STRICT_PROTO_MATCHING  = NO

# The GENERATE_TODOLIST tag can be used to enable (YES) or disable (NO) the todo
# list. This list is created by putting \todo commands in the documentation.
# The default value is: YES.

GENERATE_TODOLIST      = YES

# The GENERATE_TESTLIST tag can be used to enable (YES) or disable (NO) the test
# list. This list is created by putting \test commands in the documentation.
# The default value is: YES.

GENERATE_TESTLIST      = YES

# The GENERATE_BUGLIST tag can be used to enable (YES) or disable (NO) the bug
# list. This list is created by putting \bug commands in the documentation.
# The default value is: YES.

GENERATE_BUGLIST       = YES

# The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or disable (NO)
# the deprecated list. This list is created by putting \deprecated commands in
# the documentation.
# The default value is: YES.

GENERATE_DEPRECATEDLIST= YES

# The ENABLED_SECTIONS tag can be used to enable conditional documentation
# sections, marked by \if <section_label> ... \endif and \cond <section_label>
# ... \endcond blocks.

ENABLED_SECTIONS       =

# The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the
# initial value of a variable or macro / define can have for it to appear in the
# documentation. If the initializer consists of more lines than specified here
# it will be hidden. Use a value of 0 to hide initializers completely. The
# appearance of the value of individual variables and macros / defines can be
# controlled using \showinitializer or \hideinitializer command in the
# documentation regardless of this setting.
# Minimum value: 0, maximum value: 10000, default value: 30.

MAX_INITIALIZER_LINES  = 30

# Set the SHOW_USED_FILES tag to NO to disable the list of files generated at
# the bottom of the documentation of classes and structs. If set to YES, the
# list will mention the files that were used to generate the documentation.
# The default value is: YES.

SHOW_USED_FILES        = YES

# Set the SHOW_FILES tag to NO to disable the generation of the Files page. This
# will remove the Files entry from the Quick Index and from the Folder Tree View
# (if specified).
# The default value is: YES.

SHOW_FILES             = YES

# Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces
# page. This will remove the Namespaces entry from the Quick Index and from the
# Folder Tree View (if specified).
# The default value is: YES.

SHOW_NAMESPACES        = YES

# The FILE_VERSION_FILTER tag can be used to specify a program or script that
# doxygen should invoke to get the current version for each file (typically from
# the version control system). Doxygen will invoke the program by executing (via
# popen()) the command <command> <input-file>, where <command> is the value of
# the FILE_VERSION_FILTER tag, and <input-file> is the name of an input file
# provided by doxygen. Whatever the program writes to standard output is used as
# the file
# version. For an example see the documentation.

FILE_VERSION_FILTER    =
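#
# For example (illustrative only; this assumes the sources live in a git
# repository and that git is on the search path), a filter that prints the last
# commit hash for each file could be:
#
# FILE_VERSION_FILTER = "git log -n 1 --pretty=format:%h --"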

# The LAYOUT_FILE tag can be used to specify a layout file which will be parsed
# by doxygen. The layout file controls the global structure of the generated
# output files in an output format independent way. To create the layout file
# that represents doxygen's defaults, run doxygen with the -l option. You can
# optionally specify a file name after the option, if omitted DoxygenLayout.xml
# will be used as the name of the layout file.
#
# Note that if you run doxygen from a directory containing a file called
# DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE
# tag is left empty.

LAYOUT_FILE            =

# The CITE_BIB_FILES tag can be used to specify one or more bib files containing
# the reference definitions. This must be a list of .bib files. The .bib
# extension is automatically appended if omitted. This requires the bibtex tool
# to be installed. See also https://en.wikipedia.org/wiki/BibTeX for more info.
# For LaTeX the style of the bibliography can be controlled using
# LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the
# search path. See also \cite for info how to create references.

CITE_BIB_FILES         =

#---------------------------------------------------------------------------
# Configuration options related to warning and progress messages
#---------------------------------------------------------------------------

# The QUIET tag can be used to turn on/off the messages that are generated to
# standard output by doxygen. If QUIET is set to YES this implies that the
# messages are off.
# The default value is: NO.

QUIET                  = NO

# The WARNINGS tag can be used to turn on/off the warning messages that are
# generated to standard error (stderr) by doxygen. If WARNINGS is set to YES
# this implies that the warnings are on.
#
# Tip: Turn warnings on while writing the documentation.
# The default value is: YES.

WARNINGS               = YES

# If the WARN_IF_UNDOCUMENTED tag is set to YES then doxygen will generate
# warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag
# will automatically be disabled.
# The default value is: YES.

WARN_IF_UNDOCUMENTED   = YES

# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for
# potential errors in the documentation, such as not documenting some parameters
# in a documented function, or documenting parameters that don't exist or using
# markup commands wrongly.
# The default value is: YES.

WARN_IF_DOC_ERROR      = YES

# The WARN_NO_PARAMDOC option can be enabled to get warnings for functions that
# are documented, but have no documentation for their parameters or return
# value. If set to NO, doxygen will only warn about wrong or incomplete
# parameter documentation, but not about the absence of documentation. If
# EXTRACT_ALL is set to YES then this flag will automatically be disabled.
# The default value is: NO.

WARN_NO_PARAMDOC       = NO

# If the WARN_AS_ERROR tag is set to YES then doxygen will immediately stop when
# a warning is encountered.
# The default value is: NO.

WARN_AS_ERROR          = NO

# The WARN_FORMAT tag determines the format of the warning messages that doxygen
# can produce. The string should contain the $file, $line, and $text tags, which
# will be replaced by the file and line number from which the warning originated
# and the warning text. Optionally the format may contain $version, which will
# be replaced by the version of the file (if it could be obtained via
# FILE_VERSION_FILTER)
# The default value is: $file:$line: $text.

WARN_FORMAT            = "$file:$line: $text"

# The WARN_LOGFILE tag can be used to specify a file to which warning and error
# messages should be written. If left blank the output is written to standard
# error (stderr).

WARN_LOGFILE           =

#---------------------------------------------------------------------------
# Configuration options related to the input files
#---------------------------------------------------------------------------

# The INPUT tag is used to specify the files and/or directories that contain
# documented source files. You may enter file names like myfile.cpp or
# directories like /usr/src/myproject. Separate the files or directories with
# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
# Note: If this tag is empty the current directory is searched.

INPUT                  =

# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
# libiconv (or the iconv built into libc) for the transcoding. See the libiconv
# documentation (see: https://www.gnu.org/software/libiconv/) for the list of
# possible encodings.
# The default value is: UTF-8.

INPUT_ENCODING         = UTF-8

# If the value of the INPUT tag contains directories, you can use the
# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
# *.h) to filter out the source-files in the directories.
#
# Note that for custom extensions or not directly supported extensions you also
# need to set EXTENSION_MAPPING for the extension otherwise the files are not
# read by doxygen.
#
# If left blank the following patterns are tested: *.c, *.cc, *.cxx, *.cpp,
# *.c++, *.java, *.ii, *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h,
# *.hh, *.hxx, *.hpp, *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc,
# *.m, *.markdown, *.md, *.mm, *.dox (to be provided as doxygen C comment),
# *.doc (to be provided as doxygen C comment), *.txt (to be provided as doxygen
# C comment), *.py, *.pyw, *.f90, *.f95, *.f03, *.f08, *.f18, *.f, *.for, *.vhd,
# *.vhdl, *.ucf, *.qsf and *.ice.

FILE_PATTERNS          = *.c \
                         *.cc \
                         *.cxx \
                         *.cpp \
                         *.c++ \
                         *.java \
                         *.ii \
                         *.ixx \
                         *.ipp \
                         *.i++ \
                         *.inl \
                         *.idl \
                         *.ddl \
                         *.odl \
                         *.h \
                         *.hh \
                         *.hxx \
                         *.hpp \
                         *.h++ \
                         *.cs \
                         *.d \
                         *.php \
                         *.php4 \
                         *.php5 \
                         *.phtml \
                         *.inc \
                         *.m \
                         *.markdown \
                         *.md \
                         *.mm \
                         *.dox \
                         *.doc \
                         *.txt \
                         *.py \
                         *.pyw \
                         *.f90 \
                         *.f95 \
                         *.f03 \
                         *.f08 \
                         *.f18 \
                         *.f \
                         *.for \
                         *.vhd \
                         *.vhdl \
                         *.ucf \
                         *.qsf \
                         *.ice

# The RECURSIVE tag can be used to specify whether or not subdirectories should
# be searched for input files as well.
# The default value is: NO.

RECURSIVE              = NO

# The EXCLUDE tag can be used to specify files and/or directories that should be
# excluded from the INPUT source files. This way you can easily exclude a
# subdirectory from a directory tree whose root is specified with the INPUT tag.
#
# Note that relative paths are relative to the directory from which doxygen is
# run.

EXCLUDE                =

# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or
# directories that are symbolic links (a Unix file system feature) are excluded
# from the input.
# The default value is: NO.

EXCLUDE_SYMLINKS       = NO

# If the value of the INPUT tag contains directories, you can use the
# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude
# certain files from those directories.
#
# Note that the wildcards are matched against the file with absolute path, so to
# exclude all test directories for example use the pattern */test/*

EXCLUDE_PATTERNS       =

# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names
# (namespaces, classes, functions, etc.) that should be excluded from the
# output. The symbol name can be a fully qualified name, a word, or if the
# wildcard * is used, a substring. Examples: ANamespace, AClass,
# AClass::ANamespace, ANamespace::*Test
#
# Note that the wildcards are matched against the file with absolute path, so to
# exclude all test directories use the pattern */test/*

EXCLUDE_SYMBOLS        =

# The EXAMPLE_PATH tag can be used to specify one or more files or directories
# that contain example code fragments that are included (see the \include
# command).

EXAMPLE_PATH           =

# If the value of the EXAMPLE_PATH tag contains directories, you can use the
# EXAMPLE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
# *.h) to filter out the source-files in the directories. If left blank all
# files are included.

EXAMPLE_PATTERNS       = *

# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
# searched for input files to be used with the \include or \dontinclude commands
# irrespective of the value of the RECURSIVE tag.
# The default value is: NO.

EXAMPLE_RECURSIVE      = NO

# The IMAGE_PATH tag can be used to specify one or more files or directories
# that contain images that are to be included in the documentation (see the
# \image command).

IMAGE_PATH             =

# The INPUT_FILTER tag can be used to specify a program that doxygen should
# invoke to filter for each input file. Doxygen will invoke the filter program
# by executing (via popen()) the command:
#
# <filter> <input-file>
#
# where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the
# name of an input file. Doxygen will then use the output that the filter
# program writes to standard output. If FILTER_PATTERNS is specified, this tag
# will be ignored.
#
# Note that the filter must not add or remove lines; it is applied before the
# code is scanned, but not when the output code is generated. If lines are added
# or removed, the anchors will not be placed correctly.
#
# Note that for custom extensions or not directly supported extensions you also
# need to set EXTENSION_MAPPING for the extension otherwise the files are not
# properly processed by doxygen.

INPUT_FILTER           =

# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern
# basis. Doxygen will compare the file name with each pattern and apply the
# filter if there is a match. The filters are a list of the form: pattern=filter
# (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how
# filters are used. If the FILTER_PATTERNS tag is empty or if none of the
# patterns match the file name, INPUT_FILTER is applied.
#
# Note that for custom extensions or not directly supported extensions you also
# need to set EXTENSION_MAPPING for the extension otherwise the files are not
# properly processed by doxygen.

FILTER_PATTERNS        =
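#
# For example (hypothetical filter script; the path ./scripts/py_filter is an
# assumption, not part of doxygen), Python sources could be routed through a
# custom filter while all other files bypass filtering:
#
# FILTER_PATTERNS = *.py=./scripts/py_filter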

# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using
# INPUT_FILTER) will also be used to filter the input files that are used for
# producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES).
# The default value is: NO.

FILTER_SOURCE_FILES    = NO

# The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file
# pattern. A pattern will override the setting for FILTER_PATTERNS (if any) and
# it is also possible to disable source filtering for a specific pattern using
# *.ext= (so without naming a filter).
# This tag requires that the tag FILTER_SOURCE_FILES is set to YES.

FILTER_SOURCE_PATTERNS =

# If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that
# is part of the input, its contents will be placed on the main page
# (index.html). This can be useful if you have a project on, for instance,
# GitHub
# and want to reuse the introduction page also for the doxygen output.

USE_MDFILE_AS_MAINPAGE =

#---------------------------------------------------------------------------
# Configuration options related to source browsing
#---------------------------------------------------------------------------

# If the SOURCE_BROWSER tag is set to YES then a list of source files will be
# generated. Documented entities will be cross-referenced with these sources.
#
# Note: To get rid of all source code in the generated output, make sure that
# also VERBATIM_HEADERS is set to NO.
# The default value is: NO.

SOURCE_BROWSER         = NO

# Setting the INLINE_SOURCES tag to YES will include the body of functions,
# classes and enums directly into the documentation.
# The default value is: NO.

INLINE_SOURCES         = NO

# Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any
# special comment blocks from generated source code fragments. Normal C, C++ and
# Fortran comments will always remain visible.
# The default value is: YES.

STRIP_CODE_COMMENTS    = YES

# If the REFERENCED_BY_RELATION tag is set to YES then for each documented
# entity all documented functions referencing it will be listed.
# The default value is: NO.

REFERENCED_BY_RELATION = NO

# If the REFERENCES_RELATION tag is set to YES then for each documented function
# all documented entities called/used by that function will be listed.
# The default value is: NO.

REFERENCES_RELATION    = NO

# If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set
# to YES then the hyperlinks from functions in REFERENCES_RELATION and
# REFERENCED_BY_RELATION lists will link to the source code. Otherwise they will
# link to the documentation.
# The default value is: YES.

REFERENCES_LINK_SOURCE = YES

# If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the
# source code will show a tooltip with additional information such as prototype,
# brief description and links to the definition and documentation. Since this
# will make the HTML file larger and loading of large files a bit slower, you
# can opt to disable this feature.
# The default value is: YES.
# This tag requires that the tag SOURCE_BROWSER is set to YES.

SOURCE_TOOLTIPS        = YES

# If the USE_HTAGS tag is set to YES then the references to source code will
# point to the HTML generated by the htags(1) tool instead of doxygen built-in
# source browser. The htags tool is part of GNU's global source tagging system
# (see https://www.gnu.org/software/global/global.html). You will need version
# 4.8.6 or higher.
#
# To use it do the following:
# - Install the latest version of global
# - Enable SOURCE_BROWSER and USE_HTAGS in the configuration file
# - Make sure the INPUT points to the root of the source tree
# - Run doxygen as normal
#
# Doxygen will invoke htags (and that will in turn invoke gtags), so these
# tools must be available from the command line (i.e. in the search path).
#
# The result: instead of the source browser generated by doxygen, the links to
# source code will now point to the output of htags.
# The default value is: NO.
# This tag requires that the tag SOURCE_BROWSER is set to YES.

USE_HTAGS              = NO

# If the VERBATIM_HEADERS tag is set to YES then doxygen will generate a
# verbatim copy of the header file for each class for which an include is
# specified. Set to NO to disable this.
# See also: Section \class.
# The default value is: YES.

VERBATIM_HEADERS       = YES

#---------------------------------------------------------------------------
# Configuration options related to the alphabetical class index
#---------------------------------------------------------------------------

# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all
# compounds will be generated. Enable this if the project contains a lot of
# classes, structs, unions or interfaces.
# The default value is: YES.

ALPHABETICAL_INDEX     = YES

# The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in
# which the alphabetical index list will be split.
# Minimum value: 1, maximum value: 20, default value: 5.
# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.

COLS_IN_ALPHA_INDEX    = 5

# In case all classes in a project start with a common prefix, all classes will
# be put under the same header in the alphabetical index. The IGNORE_PREFIX tag
# can be used to specify a prefix (or a list of prefixes) that should be ignored
# while generating the index headers.
# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.

IGNORE_PREFIX          =
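#
# For example (illustrative; "Mx" is a hypothetical project prefix), the
# following would index a class MxGraph under "G" rather than "M":
#
# IGNORE_PREFIX = Mx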

#---------------------------------------------------------------------------
# Configuration options related to the HTML output
#---------------------------------------------------------------------------

# If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output
# The default value is: YES.

GENERATE_HTML          = YES

# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
# it.
# The default directory is: html.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_OUTPUT            = html

# The HTML_FILE_EXTENSION tag can be used to specify the file extension for each
# generated HTML page (for example: .htm, .php, .asp).
# The default value is: .html.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_FILE_EXTENSION    = .html

# The HTML_HEADER tag can be used to specify a user-defined HTML header file for
# each generated HTML page. If the tag is left blank doxygen will generate a
# standard header.
#
# To get valid HTML, the header file must include any scripts and style sheets
# that doxygen needs, which depend on the configuration options used (e.g. the
# setting GENERATE_TREEVIEW). It is highly recommended to start with a default
# header using
# doxygen -w html new_header.html new_footer.html new_stylesheet.css
# YourConfigFile
# and then modify the file new_header.html. See also section "Doxygen usage"
# for information on how to generate the default header that doxygen normally
# uses.
# Note: The header is subject to change so you typically have to regenerate the
# default header when upgrading to a newer version of doxygen. For a description
# of the possible markers and block names see the documentation.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_HEADER            =

# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each
# generated HTML page. If the tag is left blank doxygen will generate a standard
# footer. See HTML_HEADER for more information on how to generate a default
# footer and what special commands can be used inside the footer. See also
# section "Doxygen usage" for information on how to generate the default footer
# that doxygen normally uses.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_FOOTER            =

# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style
# sheet that is used by each HTML page. It can be used to fine-tune the look of
# the HTML output. If left blank doxygen will generate a default style sheet.
# See also section "Doxygen usage" for information on how to generate the style
# sheet that doxygen normally uses.
# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as
# it is more robust and this tag (HTML_STYLESHEET) will in the future become
# obsolete.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_STYLESHEET        =

# The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined
# cascading style sheets that are included after the standard style sheets
# created by doxygen. Using this option one can overrule certain style aspects.
# This is preferred over using HTML_STYLESHEET since it does not replace the
# standard style sheet and is therefore more robust against future updates.
# Doxygen will copy the style sheet files to the output directory.
# Note: The order of the extra style sheet files is of importance (e.g. the last
# style sheet in the list overrules the setting of the previous ones in the
# list). For an example see the documentation.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_EXTRA_STYLESHEET  =

# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or
# other source files which should be copied to the HTML output directory. Note
# that these files will be copied to the base HTML output directory. Use the
# $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these
# files. In the HTML_STYLESHEET file, use the file name only. Also note that the
# files will be copied as-is; there are no commands or markers available.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_EXTRA_FILES       =

# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen
# will adjust the colors in the style sheet and background images according to
# this color. Hue is specified as an angle on a colorwheel, see
# https://en.wikipedia.org/wiki/Hue for more information. For instance the value
# 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300
# purple, and 360 is red again.
# Minimum value: 0, maximum value: 359, default value: 220.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_COLORSTYLE_HUE    = 220

# The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors
# in the HTML output. For a value of 0 the output will use grayscales only. A
# value of 255 will produce the most vivid colors.
# Minimum value: 0, maximum value: 255, default value: 100.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_COLORSTYLE_SAT    = 100

# The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the
# luminance component of the colors in the HTML output. Values below 100
# gradually make the output lighter, whereas values above 100 make the output
# darker. The value divided by 100 is the actual gamma applied, so 80 represents
# a gamma of 0.8. The value 220 represents a gamma of 2.2, and 100 does not
# change the gamma.
# Minimum value: 40, maximum value: 240, default value: 80.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_COLORSTYLE_GAMMA  = 80

# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML
# page will contain the date and time when the page was generated. Setting this
# to YES can help to show when doxygen was last run and thus if the
# documentation is up to date.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_TIMESTAMP         = NO

# If the HTML_DYNAMIC_MENUS tag is set to YES then the generated HTML
# documentation will contain a main index with vertical navigation menus that
# are dynamically created via JavaScript. If disabled, the navigation index will
# consist of multiple levels of tabs that are statically embedded in every HTML
# page. Disable this option to support browsers that do not have JavaScript,
# like the Qt help browser.
# The default value is: YES.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_DYNAMIC_MENUS     = YES

# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML
# documentation will contain sections that can be hidden and shown after the
# page has loaded.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_DYNAMIC_SECTIONS  = NO

# With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries
# shown in the various tree structured indices initially; the user can expand
# and collapse entries dynamically later on. Doxygen will expand the tree to
# such a level that at most the specified number of entries are visible (unless
# a fully collapsed tree already exceeds this amount). So setting the number of
# entries to 1 will produce a fully collapsed tree by default. 0 is a special
# value representing an infinite number of entries and will result in a fully
# expanded tree by default.
# Minimum value: 0, maximum value: 9999, default value: 100.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_INDEX_NUM_ENTRIES = 100

# If the GENERATE_DOCSET tag is set to YES, additional index files will be
# generated that can be used as input for Apple's Xcode 3 integrated development
# environment (see: https://developer.apple.com/xcode/), introduced with OSX
# 10.5 (Leopard). To create a documentation set, doxygen will generate a
# Makefile in the HTML output directory. Running make will produce the docset in
# that directory and running make install will install the docset in
# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at
# startup. See https://developer.apple.com/library/archive/featuredarticles/Doxy
# genXcode/_index.html for more information.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_DOCSET        = NO

# This tag determines the name of the docset feed. A documentation feed provides
# an umbrella under which multiple documentation sets from a single provider
# (such as a company or product suite) can be grouped.
# The default value is: Doxygen generated docs.
# This tag requires that the tag GENERATE_DOCSET is set to YES.

DOCSET_FEEDNAME        = "Doxygen generated docs"

# This tag specifies a string that should uniquely identify the documentation
# set bundle. This should be a reverse domain-name style string, e.g.
# com.mycompany.MyDocSet. Doxygen will append .docset to the name.
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_DOCSET is set to YES.

DOCSET_BUNDLE_ID       = org.doxygen.Project

# The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify
# the documentation publisher. This should be a reverse domain-name style
# string, e.g. com.mycompany.MyDocSet.documentation.
# The default value is: org.doxygen.Publisher.
# This tag requires that the tag GENERATE_DOCSET is set to YES.

DOCSET_PUBLISHER_ID    = org.doxygen.Publisher

# The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher.
# The default value is: Publisher.
# This tag requires that the tag GENERATE_DOCSET is set to YES.

DOCSET_PUBLISHER_NAME  = Publisher

# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three
# additional HTML index files: index.hhp, index.hhc, and index.hhk. The
# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop
# (see: https://www.microsoft.com/en-us/download/details.aspx?id=21138) on
# Windows.
#
# The HTML Help Workshop contains a compiler that can convert all HTML output
# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML
# files have been the standard Windows help format since Windows 98, replacing
# the older Windows help format (.hlp). Compressed HTML files also contain an
# index and a table of contents, and support searching for words in the
# documentation. The HTML Help Workshop also contains a viewer for
# compressed HTML files.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_HTMLHELP      = NO

# The CHM_FILE tag can be used to specify the file name of the resulting .chm
# file. You can add a path in front of the file if the result should not be
# written to the html output directory.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

CHM_FILE               =

# The HHC_LOCATION tag can be used to specify the location (absolute path
# including file name) of the HTML help compiler (hhc.exe). If non-empty,
# doxygen will try to run the HTML help compiler on the generated index.hhp.
# The file has to be specified with full path.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

HHC_LOCATION           =

# The GENERATE_CHI flag controls whether a separate .chi index file is generated
# (YES) or included in the main .chm file (NO).
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

GENERATE_CHI           = NO

# The CHM_INDEX_ENCODING is used to encode HtmlHelp index (hhk), content (hhc)
# and project file content.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

CHM_INDEX_ENCODING     =

# The BINARY_TOC flag controls whether a binary table of contents is generated
# (YES) or a normal table of contents (NO) in the .chm file. Furthermore it
# enables the Previous and Next buttons.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

BINARY_TOC             = NO

# The TOC_EXPAND flag can be set to YES to add extra items for group members to
# the table of contents of the HTML help documentation and to the tree view.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

TOC_EXPAND             = NO

# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and
# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that
# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help
# (.qch) of the generated HTML documentation.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_QHP           = NO

# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify
# the file name of the resulting .qch file. The path specified is relative to
# the HTML output folder.
# This tag requires that the tag GENERATE_QHP is set to YES.

QCH_FILE               =

# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help
# Project output. For more information please see Qt Help Project / Namespace
# (see: https://doc.qt.io/archives/qt-4.8/qthelpproject.html#namespace).
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_NAMESPACE          = org.doxygen.Project

# The QHP_VIRTUAL_FOLDER tag specifies the virtual folder to use when
# generating Qt Help Project output. For more information please see Qt Help
# Project / Virtual Folders (see:
# https://doc.qt.io/archives/qt-4.8/qthelpproject.html#virtual-folders).
# The default value is: doc.
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_VIRTUAL_FOLDER     = doc

# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom
# filter to add. For more information please see Qt Help Project / Custom
# Filters (see: https://doc.qt.io/archives/qt-4.8/qthelpproject.html#custom-
# filters).
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_CUST_FILTER_NAME   =

# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the
# custom filter to add. For more information please see Qt Help Project / Custom
# Filters (see: https://doc.qt.io/archives/qt-4.8/qthelpproject.html#custom-
# filters).
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_CUST_FILTER_ATTRS  =

# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this
# project's filter section matches. Qt Help Project / Filter Attributes (see:
# https://doc.qt.io/archives/qt-4.8/qthelpproject.html#filter-attributes).
# This tag requires that the tag GENERATE_QHP is set to YES.

QHP_SECT_FILTER_ATTRS  =

# The QHG_LOCATION tag can be used to specify the location of Qt's
# qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the
# generated .qhp file.
# This tag requires that the tag GENERATE_QHP is set to YES.

QHG_LOCATION           =

# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be
# generated together with the HTML files; together they form an Eclipse help
# plugin. To install this plugin and make it available under the help contents
# menu in Eclipse, the contents of the directory containing the HTML and XML
# files need to be copied into the plugins directory of Eclipse. The name of
# the directory within the plugins directory should be the same as the
# ECLIPSE_DOC_ID value. After copying, Eclipse needs to be restarted before the
# help appears.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_ECLIPSEHELP   = NO

# A unique identifier for the Eclipse help plugin. When installing the plugin
# the directory name containing the HTML and XML files should also have this
# name. Each documentation set should have its own identifier.
# The default value is: org.doxygen.Project.
# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES.

ECLIPSE_DOC_ID         = org.doxygen.Project

# If you want full control over the layout of the generated HTML pages it might
# be necessary to disable the index and replace it with your own. The
# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top
# of each HTML page. A value of NO enables the index and the value YES disables
# it. Since the tabs in the index contain the same information as the navigation
# tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

DISABLE_INDEX          = NO

# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index
# structure should be generated to display hierarchical information. If the tag
# value is set to YES, a side panel will be generated containing a tree-like
# index structure (just like the one that is generated for HTML Help). For this
# to work a browser that supports JavaScript, DHTML, CSS and frames is required
# (i.e. any modern browser). Windows users are probably better off using the
# HTML help feature. Via custom style sheets (see HTML_EXTRA_STYLESHEET) one can
# further fine-tune the look of the index. As an example, the default style
# sheet generated by doxygen has an example that shows how to put an image at
# the root of the tree instead of the PROJECT_NAME. Since the tree basically has
# the same information as the tab index, you could consider setting
# DISABLE_INDEX to YES when enabling this option.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

GENERATE_TREEVIEW      = NO

# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that
# doxygen will group on one line in the generated HTML documentation.
#
# Note that a value of 0 will completely suppress the enum values from appearing
# in the overview section.
# Minimum value: 0, maximum value: 20, default value: 4.
# This tag requires that the tag GENERATE_HTML is set to YES.

ENUM_VALUES_PER_LINE   = 4

# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used
# to set the initial width (in pixels) of the frame in which the tree is shown.
# Minimum value: 0, maximum value: 1500, default value: 250.
# This tag requires that the tag GENERATE_HTML is set to YES.

TREEVIEW_WIDTH         = 250

# If the EXT_LINKS_IN_WINDOW option is set to YES, doxygen will open links to
# external symbols imported via tag files in a separate window.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

EXT_LINKS_IN_WINDOW    = NO

# If the HTML_FORMULA_FORMAT option is set to svg, doxygen will use the pdf2svg
# tool (see https://github.com/dawbarton/pdf2svg) or inkscape (see
# https://inkscape.org) to generate formulas as SVG images instead of PNGs for
# the HTML output. These images will generally look nicer at scaled resolutions.
# Possible values are: png (the default) and svg (looks nicer but requires the
# pdf2svg or inkscape tool).
# The default value is: png.
# This tag requires that the tag GENERATE_HTML is set to YES.

HTML_FORMULA_FORMAT    = png

# Use this tag to change the font size of LaTeX formulas included as images in
# the HTML documentation. When you change the font size after a successful
# doxygen run you need to manually remove any form_*.png images from the HTML
# output directory to force them to be regenerated.
# Minimum value: 8, maximum value: 50, default value: 10.
# This tag requires that the tag GENERATE_HTML is set to YES.

FORMULA_FONTSIZE       = 10

# Use the FORMULA_TRANSPARENT tag to determine whether or not the images
# generated for formulas are transparent PNGs. Transparent PNGs are not
# supported properly for IE 6.0, but are supported on all modern browsers.
#
# Note that when changing this option you need to delete any form_*.png files in
# the HTML output directory before the changes have effect.
# The default value is: YES.
# This tag requires that the tag GENERATE_HTML is set to YES.

FORMULA_TRANSPARENT    = YES

# The FORMULA_MACROFILE can contain LaTeX \newcommand and \renewcommand commands
# to create new LaTeX commands to be used in formulas as building blocks. See
# the section "Including formulas" for details.

FORMULA_MACROFILE      =

# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see
# https://www.mathjax.org) which uses client side JavaScript for the rendering
# instead of using pre-rendered bitmaps. Use this if you do not have LaTeX
# installed or if you want the formulas to look prettier in the HTML output.
# enabled you may also need to install MathJax separately and configure the path
# to it using the MATHJAX_RELPATH option.
# The default value is: NO.
# This tag requires that the tag GENERATE_HTML is set to YES.

USE_MATHJAX            = NO

# When MathJax is enabled you can set the default output format to be used for
# the MathJax output. See the MathJax site (see:
# http://docs.mathjax.org/en/latest/output.html) for more details.
# Possible values are: HTML-CSS (which is slower, but has the best
# compatibility), NativeMML (i.e. MathML) and SVG.
# The default value is: HTML-CSS.
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_FORMAT         = HTML-CSS

# When MathJax is enabled you need to specify the location relative to the HTML
# output directory using the MATHJAX_RELPATH option. The destination directory
# should contain the MathJax.js script. For instance, if the mathjax directory
# is located at the same level as the HTML output directory, then
# MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax
# Content Delivery Network so you can quickly see the result without installing
# MathJax. However, it is strongly recommended to install a local copy of
# MathJax from https://www.mathjax.org before deployment.
# The default value is: https://cdn.jsdelivr.net/npm/mathjax@2.
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_RELPATH        = https://cdn.jsdelivr.net/npm/mathjax@2

# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax
# extension names that should be enabled during MathJax rendering. For example
# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_EXTENSIONS     =

# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces
# of code that will be used on startup of the MathJax code. See the MathJax site
# (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an
# example see the documentation.
# This tag requires that the tag USE_MATHJAX is set to YES.

MATHJAX_CODEFILE       =

# When the SEARCHENGINE tag is enabled doxygen will generate a search box for
# the HTML output. The underlying search engine uses javascript and DHTML and
# should work on any modern browser. Note that when using HTML help
# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)
# there is already a search function so this one should typically be disabled.
# For large projects the javascript based search engine can be slow; in that
# case enabling SERVER_BASED_SEARCH may provide a better solution. It is
# possible to
# search using the keyboard; to jump to the search box use <access key> + S
# (what the <access key> is depends on the OS and browser, but it is typically
# <CTRL>, <ALT>/<option>, or both). Inside the search box use the <cursor down
# key> to jump into the search results window, the results can be navigated
# using the <cursor keys>. Press <Enter> to select an item or <escape> to cancel
# the search. The filter options can be selected when the cursor is inside the
# search box by pressing <Shift>+<cursor down>. Also here use the <cursor keys>
# to select a filter and <Enter> or <escape> to activate or cancel the filter
# option.
# The default value is: YES.
# This tag requires that the tag GENERATE_HTML is set to YES.

SEARCHENGINE           = YES

# When the SERVER_BASED_SEARCH tag is enabled the search engine will be
# implemented using a web server instead of a web client using JavaScript. There
# are two flavors of web server based searching depending on the EXTERNAL_SEARCH
# setting. When disabled, doxygen will generate a PHP script for searching and
# an index file used by the script. When EXTERNAL_SEARCH is enabled the indexing
# and searching needs to be provided by external tools. See the section
# "External Indexing and Searching" for details.
# The default value is: NO.
# This tag requires that the tag SEARCHENGINE is set to YES.

SERVER_BASED_SEARCH    = NO

# When EXTERNAL_SEARCH tag is enabled doxygen will no longer generate the PHP
# script for searching. Instead the search results are written to an XML file
# which needs to be processed by an external indexer. Doxygen will invoke an
# external search engine pointed to by the SEARCHENGINE_URL option to obtain the
# search results.
#
# Doxygen ships with an example indexer (doxyindexer) and search engine
# (doxysearch.cgi) which are based on the open source search engine library
# Xapian (see: https://xapian.org/).
#
# See the section "External Indexing and Searching" for details.
# The default value is: NO.
# This tag requires that the tag SEARCHENGINE is set to YES.

EXTERNAL_SEARCH        = NO

# The SEARCHENGINE_URL should point to a search engine hosted by a web server
# which will return the search results when EXTERNAL_SEARCH is enabled.
#
# Doxygen ships with an example indexer (doxyindexer) and search engine
# (doxysearch.cgi) which are based on the open source search engine library
# Xapian (see: https://xapian.org/). See the section "External Indexing and
# Searching" for details.
# This tag requires that the tag SEARCHENGINE is set to YES.

SEARCHENGINE_URL       =

# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the unindexed
# search data is written to a file for indexing by an external tool. With the
# SEARCHDATA_FILE tag the name of this file can be specified.
# The default file is: searchdata.xml.
# This tag requires that the tag SEARCHENGINE is set to YES.

SEARCHDATA_FILE        = searchdata.xml

# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the
# EXTERNAL_SEARCH_ID tag can be used as an identifier for the project. This is
# useful in combination with EXTRA_SEARCH_MAPPINGS to search through multiple
# projects and redirect the results back to the right project.
# This tag requires that the tag SEARCHENGINE is set to YES.

EXTERNAL_SEARCH_ID     =

# The EXTRA_SEARCH_MAPPINGS tag can be used to enable searching through doxygen
# projects other than the one defined by this configuration file, but that are
# all added to the same external search index. Each project needs to have a
# unique id set via EXTERNAL_SEARCH_ID. The search mapping then maps the id of
# a project to a relative location where the documentation can be found. The
# format is:
# EXTRA_SEARCH_MAPPINGS = tagname1=loc1 tagname2=loc2 ...
# This tag requires that the tag SEARCHENGINE is set to YES.

EXTRA_SEARCH_MAPPINGS  =

#---------------------------------------------------------------------------
# Configuration options related to the LaTeX output
#---------------------------------------------------------------------------

# If the GENERATE_LATEX tag is set to YES, doxygen will generate LaTeX output.
# The default value is: YES.

GENERATE_LATEX         = YES

# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
# it.
# The default directory is: latex.
# This tag requires that the tag GENERATE_LATEX is set to YES.

LATEX_OUTPUT           = latex

# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be
# invoked.
#
# Note that when not enabling USE_PDFLATEX the default is latex; when enabling
# USE_PDFLATEX the default is pdflatex, and if in the latter case latex is
# chosen this is overwritten by pdflatex. For specific output languages the
# default may have been set differently; this depends on the implementation of
# the output language.
# This tag requires that the tag GENERATE_LATEX is set to YES.

LATEX_CMD_NAME         =

# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to generate
# index for LaTeX.
# Note: This tag is used in the Makefile / make.bat.
# See also: LATEX_MAKEINDEX_CMD for the part in the generated output file
# (.tex).
# The default file is: makeindex.
# This tag requires that the tag GENERATE_LATEX is set to YES.

MAKEINDEX_CMD_NAME     = makeindex

# The LATEX_MAKEINDEX_CMD tag can be used to specify the command name to
# generate index for LaTeX. In case there is no backslash (\) as first character
# it will be automatically added in the LaTeX code.
# Note: This tag is used in the generated output file (.tex).
# See also: MAKEINDEX_CMD_NAME for the part in the Makefile / make.bat.
# The default value is: makeindex.
# This tag requires that the tag GENERATE_LATEX is set to YES.

LATEX_MAKEINDEX_CMD    = makeindex

# If the COMPACT_LATEX tag is set to YES, doxygen generates more compact LaTeX
# documents. This may be useful for small projects and may help to save some
# trees in general.
# The default value is: NO.
# This tag requires that the tag GENERATE_LATEX is set to YES.

COMPACT_LATEX          = NO

# The PAPER_TYPE tag can be used to set the paper type that is used by the
# printer.
# Possible values are: a4 (210 x 297 mm), letter (8.5 x 11 inches), legal (8.5 x
# 14 inches) and executive (7.25 x 10.5 inches).
# The default value is: a4.
# This tag requires that the tag GENERATE_LATEX is set to YES.

PAPER_TYPE             = a4

# The EXTRA_PACKAGES tag can be used to specify one or more LaTeX package names
# that should be included in the LaTeX output. The package can be specified just
# by its name or with the correct syntax as to be used with the LaTeX
# \usepackage command. To get the times font for instance you can specify :
# EXTRA_PACKAGES=times or EXTRA_PACKAGES={times}
# To use the option intlimits with the amsmath package you can specify:
# EXTRA_PACKAGES=[intlimits]{amsmath}
# If left blank no extra packages will be included.
# This tag requires that the tag GENERATE_LATEX is set to YES.

EXTRA_PACKAGES         =

# The LATEX_HEADER tag can be used to specify a personal LaTeX header for the
# generated LaTeX document. The header should contain everything until the first
# chapter. If it is left blank doxygen will generate a standard header. See
# section "Doxygen usage" for information on how to let doxygen write the
# default header to a separate file.
#
# Note: Only use a user-defined header if you know what you are doing! The
# following commands have a special meaning inside the header: $title,
# $datetime, $date, $doxygenversion, $projectname, $projectnumber,
# $projectbrief, $projectlogo. Doxygen will replace $title with the empty
# string; for the replacement values of the other commands the user is referred
# to HTML_HEADER.
# This tag requires that the tag GENERATE_LATEX is set to YES.

LATEX_HEADER           =

# The LATEX_FOOTER tag can be used to specify a personal LaTeX footer for the
# generated LaTeX document. The footer should contain everything after the last
# chapter. If it is left blank doxygen will generate a standard footer. See
# LATEX_HEADER for more information on how to generate a default footer and what
# special commands can be used inside the footer.
#
# Note: Only use a user-defined footer if you know what you are doing!
# This tag requires that the tag GENERATE_LATEX is set to YES.

LATEX_FOOTER           =

# The LATEX_EXTRA_STYLESHEET tag can be used to specify additional user-defined
# LaTeX style sheets that are included after the standard style sheets created
# by doxygen. Using this option one can overrule certain style aspects. Doxygen
# will copy the style sheet files to the output directory.
# Note: The order of the extra style sheet files is of importance (e.g. the last
# style sheet in the list overrules the setting of the previous ones in the
# list).
# This tag requires that the tag GENERATE_LATEX is set to YES.

LATEX_EXTRA_STYLESHEET =

# The LATEX_EXTRA_FILES tag can be used to specify one or more extra images or
# other source files which should be copied to the LATEX_OUTPUT output
# directory. Note that the files will be copied as-is; there are no commands or
# markers available.
# This tag requires that the tag GENERATE_LATEX is set to YES.

LATEX_EXTRA_FILES      =

# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated is
# prepared for conversion to PDF (using ps2pdf or pdflatex). The PDF file will
# contain links (just like the HTML output) instead of page references. This
# makes the output suitable for online browsing using a PDF viewer.
# The default value is: YES.
# This tag requires that the tag GENERATE_LATEX is set to YES.

PDF_HYPERLINKS         = YES

# If the USE_PDFLATEX tag is set to YES, doxygen will use the engine as
# specified with LATEX_CMD_NAME to generate the PDF file directly from the LaTeX
# files. Set this option to YES to get higher quality PDF documentation.
#
# See also section LATEX_CMD_NAME for selecting the engine.
# The default value is: YES.
# This tag requires that the tag GENERATE_LATEX is set to YES.

USE_PDFLATEX           = YES

# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \batchmode
# command to the generated LaTeX files. This will instruct LaTeX to keep running
# if errors occur, instead of asking the user for help. This option is also used
# when generating formulas in HTML.
# The default value is: NO.
# This tag requires that the tag GENERATE_LATEX is set to YES.

LATEX_BATCHMODE        = NO

# If the LATEX_HIDE_INDICES tag is set to YES then doxygen will not include the
# index chapters (such as File Index, Compound Index, etc.) in the output.
# The default value is: NO.
# This tag requires that the tag GENERATE_LATEX is set to YES.

LATEX_HIDE_INDICES     = NO

# If the LATEX_SOURCE_CODE tag is set to YES then doxygen will include source
# code with syntax highlighting in the LaTeX output.
#
# Note that which sources are shown also depends on other settings such as
# SOURCE_BROWSER.
# The default value is: NO.
# This tag requires that the tag GENERATE_LATEX is set to YES.

LATEX_SOURCE_CODE      = NO

# The LATEX_BIB_STYLE tag can be used to specify the style to use for the
# bibliography, e.g. plainnat, or ieeetr. See
# https://en.wikipedia.org/wiki/BibTeX and \cite for more info.
# The default value is: plain.
# This tag requires that the tag GENERATE_LATEX is set to YES.

LATEX_BIB_STYLE        = plain

# If the LATEX_TIMESTAMP tag is set to YES then the footer of each generated
# page will contain the date and time when the page was generated. Setting this
# to NO can help when comparing the output of multiple runs.
# The default value is: NO.
# This tag requires that the tag GENERATE_LATEX is set to YES.

LATEX_TIMESTAMP        = NO

# The LATEX_EMOJI_DIRECTORY tag is used to specify the (relative or absolute)
# path from which the emoji images will be read. If a relative path is entered,
# it will be relative to the LATEX_OUTPUT directory. If left blank the
# LATEX_OUTPUT directory will be used.
# This tag requires that the tag GENERATE_LATEX is set to YES.

LATEX_EMOJI_DIRECTORY  =

#---------------------------------------------------------------------------
# Configuration options related to the RTF output
#---------------------------------------------------------------------------

# If the GENERATE_RTF tag is set to YES, doxygen will generate RTF output. The
# RTF output is optimized for Word 97 and may not look too pretty with other RTF
# readers/editors.
# The default value is: NO.

GENERATE_RTF           = NO

# The RTF_OUTPUT tag is used to specify where the RTF docs will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
# it.
# The default directory is: rtf.
# This tag requires that the tag GENERATE_RTF is set to YES.

RTF_OUTPUT             = rtf

# If the COMPACT_RTF tag is set to YES, doxygen generates more compact RTF
# documents. This may be useful for small projects and may help to save some
# trees in general.
# The default value is: NO.
# This tag requires that the tag GENERATE_RTF is set to YES.

COMPACT_RTF            = NO

# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated will
# contain hyperlink fields. The RTF file will contain links (just like the HTML
# output) instead of page references. This makes the output suitable for online
# browsing using Word or some other Word compatible readers that support those
# fields.
#
# Note: WordPad (write) and others do not support links.
# The default value is: NO.
# This tag requires that the tag GENERATE_RTF is set to YES.

RTF_HYPERLINKS         = NO

# Load stylesheet definitions from file. Syntax is similar to doxygen's
# configuration file, i.e. a series of assignments. You only have to provide
# replacements, missing definitions are set to their default value.
#
# See also section "Doxygen usage" for information on how to generate the
# default style sheet that doxygen normally uses.
# This tag requires that the tag GENERATE_RTF is set to YES.

RTF_STYLESHEET_FILE    =

# Set optional variables used in the generation of an RTF document. Syntax is
# similar to doxygen's configuration file. A template extensions file can be
# generated using doxygen -e rtf extensionFile.
# This tag requires that the tag GENERATE_RTF is set to YES.

RTF_EXTENSIONS_FILE    =

# If the RTF_SOURCE_CODE tag is set to YES then doxygen will include source code
# with syntax highlighting in the RTF output.
#
# Note that which sources are shown also depends on other settings such as
# SOURCE_BROWSER.
# The default value is: NO.
# This tag requires that the tag GENERATE_RTF is set to YES.

RTF_SOURCE_CODE        = NO

#---------------------------------------------------------------------------
# Configuration options related to the man page output
#---------------------------------------------------------------------------

# If the GENERATE_MAN tag is set to YES, doxygen will generate man pages for
# classes and files.
# The default value is: NO.

GENERATE_MAN           = NO

# The MAN_OUTPUT tag is used to specify where the man pages will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
# it. A directory man3 will be created inside the directory specified by
# MAN_OUTPUT.
# The default directory is: man.
# This tag requires that the tag GENERATE_MAN is set to YES.

MAN_OUTPUT             = man

# The MAN_EXTENSION tag determines the extension that is added to the generated
# man pages. In case the manual section does not start with a number, the number
# 3 is prepended. The dot (.) at the beginning of the MAN_EXTENSION tag is
# optional.
# The default value is: .3.
# This tag requires that the tag GENERATE_MAN is set to YES.

MAN_EXTENSION          = .3

# The MAN_SUBDIR tag determines the name of the directory created within
# MAN_OUTPUT in which the man pages are placed. It defaults to man followed by
# MAN_EXTENSION with the initial . removed.
# This tag requires that the tag GENERATE_MAN is set to YES.

MAN_SUBDIR             =

# If the MAN_LINKS tag is set to YES and doxygen generates man output, then it
# will generate one additional man file for each entity documented in the real
# man page(s). These additional files only source the real man page, but without
# them the man command would be unable to find the correct page.
# The default value is: NO.
# This tag requires that the tag GENERATE_MAN is set to YES.

MAN_LINKS              = NO

#---------------------------------------------------------------------------
# Configuration options related to the XML output
#---------------------------------------------------------------------------

# If the GENERATE_XML tag is set to YES, doxygen will generate an XML file that
# captures the structure of the code including all documentation.
# The default value is: NO.

GENERATE_XML           = NO

# The XML_OUTPUT tag is used to specify where the XML pages will be put. If a
# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
# it.
# The default directory is: xml.
# This tag requires that the tag GENERATE_XML is set to YES.

XML_OUTPUT             = xml

# If the XML_PROGRAMLISTING tag is set to YES, doxygen will dump the program
# listings (including syntax highlighting and cross-referencing information) to
# the XML output. Note that enabling this will significantly increase the size
# of the XML output.
# The default value is: YES.
# This tag requires that the tag GENERATE_XML is set to YES.

XML_PROGRAMLISTING     = YES

# If the XML_NS_MEMB_FILE_SCOPE tag is set to YES, doxygen will include
# namespace members in file scope as well, matching the HTML output.
# The default value is: NO.
# This tag requires that the tag GENERATE_XML is set to YES.

XML_NS_MEMB_FILE_SCOPE = NO

#---------------------------------------------------------------------------
# Configuration options related to the DOCBOOK output
#---------------------------------------------------------------------------

# If the GENERATE_DOCBOOK tag is set to YES, doxygen will generate Docbook files
# that can be used to generate PDF.
# The default value is: NO.

GENERATE_DOCBOOK       = NO

# The DOCBOOK_OUTPUT tag is used to specify where the Docbook pages will be put.
# If a relative path is entered the value of OUTPUT_DIRECTORY will be put in
# front of it.
# The default directory is: docbook.
# This tag requires that the tag GENERATE_DOCBOOK is set to YES.

DOCBOOK_OUTPUT         = docbook

# If the DOCBOOK_PROGRAMLISTING tag is set to YES, doxygen will include the
# program listings (including syntax highlighting and cross-referencing
# information) in the DOCBOOK output. Note that enabling this will significantly
# increase the size of the DOCBOOK output.
# The default value is: NO.
# This tag requires that the tag GENERATE_DOCBOOK is set to YES.

DOCBOOK_PROGRAMLISTING = NO

#---------------------------------------------------------------------------
# Configuration options for the AutoGen Definitions output
#---------------------------------------------------------------------------

# If the GENERATE_AUTOGEN_DEF tag is set to YES, doxygen will generate an
# AutoGen Definitions (see http://autogen.sourceforge.net/) file that captures
# the structure of the code including all documentation. Note that this feature
# is still experimental and incomplete at the moment.
# The default value is: NO.

GENERATE_AUTOGEN_DEF   = NO

#---------------------------------------------------------------------------
# Configuration options related to the Perl module output
#---------------------------------------------------------------------------

# If the GENERATE_PERLMOD tag is set to YES, doxygen will generate a Perl module
# file that captures the structure of the code including all documentation.
#
# Note that this feature is still experimental and incomplete at the moment.
# The default value is: NO.

GENERATE_PERLMOD       = NO

# If the PERLMOD_LATEX tag is set to YES, doxygen will generate the necessary
# Makefile rules, Perl scripts and LaTeX code to be able to generate PDF and DVI
# output from the Perl module output.
# The default value is: NO.
# This tag requires that the tag GENERATE_PERLMOD is set to YES.

PERLMOD_LATEX          = NO

# If the PERLMOD_PRETTY tag is set to YES, the Perl module output will be nicely
# formatted so it can be parsed by a human reader. This is useful if you want to
# understand what is going on. On the other hand, if this tag is set to NO, the
# size of the Perl module output will be much smaller and Perl will parse it
# just the same.
# The default value is: YES.
# This tag requires that the tag GENERATE_PERLMOD is set to YES.

PERLMOD_PRETTY         = YES

# The names of the make variables in the generated doxyrules.make file are
# prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. This is useful
# so different doxyrules.make files included by the same Makefile don't
# overwrite each other's variables.
# This tag requires that the tag GENERATE_PERLMOD is set to YES.

PERLMOD_MAKEVAR_PREFIX =

#---------------------------------------------------------------------------
# Configuration options related to the preprocessor
#---------------------------------------------------------------------------

# If the ENABLE_PREPROCESSING tag is set to YES, doxygen will evaluate all
# C-preprocessor directives found in the sources and include files.
# The default value is: YES.

ENABLE_PREPROCESSING   = YES

# If the MACRO_EXPANSION tag is set to YES, doxygen will expand all macro names
# in the source code. If set to NO, only conditional compilation will be
# performed. Macro expansion can be done in a controlled way by setting
# EXPAND_ONLY_PREDEF to YES.
# The default value is: NO.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.

MACRO_EXPANSION        = NO

# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES then
# the macro expansion is limited to the macros specified with the PREDEFINED and
# EXPAND_AS_DEFINED tags.
# The default value is: NO.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.

EXPAND_ONLY_PREDEF     = NO

# If the SEARCH_INCLUDES tag is set to YES, the include files in the
# INCLUDE_PATH will be searched if a #include is found.
# The default value is: YES.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.

SEARCH_INCLUDES        = YES

# The INCLUDE_PATH tag can be used to specify one or more directories that
# contain include files that are not input files but should be processed by the
# preprocessor.
# This tag requires that the tag SEARCH_INCLUDES is set to YES.

INCLUDE_PATH           =

# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard
# patterns (like *.h and *.hpp) to filter out the header-files in the
# directories. If left blank, the patterns specified with FILE_PATTERNS will be
# used.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.

INCLUDE_FILE_PATTERNS  =

# The PREDEFINED tag can be used to specify one or more macro names that are
# defined before the preprocessor is started (similar to the -D option of e.g.
# gcc). The argument of the tag is a list of macros of the form: name or
# name=definition (no spaces). If the definition and the "=" are omitted, "=1"
# is assumed. To prevent a macro definition from being undefined via #undef or
# recursively expanded use the := operator instead of the = operator.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.

PREDEFINED             =

# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this
# tag can be used to specify a list of macro names that should be expanded. The
# macro definition that is found in the sources will be used. Use the PREDEFINED
# tag if you want to use a different macro definition that overrules the
# definition found in the source code.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.

EXPAND_AS_DEFINED      =

# If the SKIP_FUNCTION_MACROS tag is set to YES then doxygen's preprocessor will
# remove all references to function-like macros that are alone on a line, have
# an all uppercase name, and do not end with a semicolon. Such function macros
# are typically used for boiler-plate code, and will confuse the parser if not
# removed.
# The default value is: YES.
# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.

SKIP_FUNCTION_MACROS   = YES

#---------------------------------------------------------------------------
# Configuration options related to external references
#---------------------------------------------------------------------------

# The TAGFILES tag can be used to specify one or more tag files. For each tag
# file the location of the external documentation should be added. The format of
# a tag file without this location is as follows:
# TAGFILES = file1 file2 ...
# Adding location for the tag files is done as follows:
# TAGFILES = file1=loc1 "file2 = loc2" ...
# where loc1 and loc2 can be relative or absolute paths or URLs. See the
# section "Linking to external documentation" for more information about the use
# of tag files.
# Note: Each tag file must have a unique name (where the name does NOT include
# the path). If a tag file is not located in the directory in which doxygen is
# run, you must also specify the path to the tagfile here.

TAGFILES               =

# When a file name is specified after GENERATE_TAGFILE, doxygen will create a
# tag file that is based on the input files it reads. See section "Linking to
# external documentation" for more information about the usage of tag files.

GENERATE_TAGFILE       =

# If the ALLEXTERNALS tag is set to YES, all external class will be listed in
# the class index. If set to NO, only the inherited external classes will be
# listed.
# The default value is: NO.

ALLEXTERNALS           = NO

# If the EXTERNAL_GROUPS tag is set to YES, all external groups will be listed
# in the modules index. If set to NO, only the current project's groups will be
# listed.
# The default value is: YES.

EXTERNAL_GROUPS        = YES

# If the EXTERNAL_PAGES tag is set to YES, all external pages will be listed in
# the related pages index. If set to NO, only the current project's pages will
# be listed.
# The default value is: YES.

EXTERNAL_PAGES         = YES

#---------------------------------------------------------------------------
# Configuration options related to the dot tool
#---------------------------------------------------------------------------

# If the CLASS_DIAGRAMS tag is set to YES, doxygen will generate a class diagram
# (in HTML and LaTeX) for classes with base or super classes. Setting the tag to
# NO turns the diagrams off. Note that this option also works with HAVE_DOT
# disabled, but it is recommended to install and use dot, since it yields more
# powerful graphs.
# The default value is: YES.

CLASS_DIAGRAMS         = YES

# You can include diagrams made with dia in doxygen documentation. Doxygen will
# then run dia to produce the diagram and insert it in the documentation. The
# DIA_PATH tag allows you to specify the directory where the dia binary resides.
# If left empty dia is assumed to be found in the default search path.

DIA_PATH               =

# If set to YES the inheritance and collaboration graphs will hide inheritance
# and usage relations if the target is undocumented or is not a class.
# The default value is: YES.

HIDE_UNDOC_RELATIONS   = YES

# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is
# available from the path. This tool is part of Graphviz (see:
# http://www.graphviz.org/), a graph visualization toolkit from AT&T and Lucent
# Bell Labs. The other options in this section have no effect if this option is
# set to NO
# The default value is: NO.

HAVE_DOT               = NO

# The DOT_NUM_THREADS specifies the number of dot invocations doxygen is allowed
# to run in parallel. When set to 0 doxygen will base this on the number of
# processors available in the system. You can set it explicitly to a value
# larger than 0 to get control over the balance between CPU load and processing
# speed.
# Minimum value: 0, maximum value: 32, default value: 0.
# This tag requires that the tag HAVE_DOT is set to YES.

DOT_NUM_THREADS        = 0

# When you want a differently looking font in the dot files that doxygen
# generates you can specify the font name using DOT_FONTNAME. You need to make
# sure dot is able to find the font, which can be done by putting it in a
# standard location or by setting the DOTFONTPATH environment variable or by
# setting DOT_FONTPATH to the directory containing the font.
# The default value is: Helvetica.
# This tag requires that the tag HAVE_DOT is set to YES.

DOT_FONTNAME           = Helvetica

# The DOT_FONTSIZE tag can be used to set the size (in points) of the font of
# dot graphs.
# Minimum value: 4, maximum value: 24, default value: 10.
# This tag requires that the tag HAVE_DOT is set to YES.

DOT_FONTSIZE           = 10

# By default doxygen will tell dot to use the default font as specified with
# DOT_FONTNAME. If you specify a different font using DOT_FONTNAME you can set
# the path where dot can find it using this tag.
# This tag requires that the tag HAVE_DOT is set to YES.

DOT_FONTPATH           =

# If the CLASS_GRAPH tag is set to YES then doxygen will generate a graph for
# each documented class showing the direct and indirect inheritance relations.
# Setting this tag to YES will force the CLASS_DIAGRAMS tag to NO.
# The default value is: YES.
# This tag requires that the tag HAVE_DOT is set to YES.

CLASS_GRAPH            = YES

# If the COLLABORATION_GRAPH tag is set to YES then doxygen will generate a
# graph for each documented class showing the direct and indirect implementation
# dependencies (inheritance, containment, and class references variables) of the
# class with other documented classes.
# The default value is: YES.
# This tag requires that the tag HAVE_DOT is set to YES.

COLLABORATION_GRAPH    = YES

# If the GROUP_GRAPHS tag is set to YES then doxygen will generate a graph for
# groups, showing the direct groups dependencies.
# The default value is: YES.
# This tag requires that the tag HAVE_DOT is set to YES.

GROUP_GRAPHS           = YES

# If the UML_LOOK tag is set to YES, doxygen will generate inheritance and
# collaboration diagrams in a style similar to the OMG's Unified Modeling
# Language.
# The default value is: NO.
# This tag requires that the tag HAVE_DOT is set to YES.

UML_LOOK               = NO

# If the UML_LOOK tag is enabled, the fields and methods are shown inside the
# class node. If there are many fields or methods and many nodes the graph may
# become too big to be useful. The UML_LIMIT_NUM_FIELDS threshold limits the
# number of items for each type to make the size more manageable. Set this to 0
# for no limit. Note that the threshold may be exceeded by 50% before the limit
# is enforced. So when you set the threshold to 10, up to 15 fields may appear,
# but if the number exceeds 15, the total amount of fields shown is limited to
# 10.
# Minimum value: 0, maximum value: 100, default value: 10.
# This tag requires that the tag HAVE_DOT is set to YES.

UML_LIMIT_NUM_FIELDS   = 10

# If the TEMPLATE_RELATIONS tag is set to YES then the inheritance and
# collaboration graphs will show the relations between templates and their
# instances.
# The default value is: NO.
# This tag requires that the tag HAVE_DOT is set to YES.

TEMPLATE_RELATIONS     = NO

# If the INCLUDE_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are set to
# YES then doxygen will generate a graph for each documented file showing the
# direct and indirect include dependencies of the file with other documented
# files.
# The default value is: YES.
# This tag requires that the tag HAVE_DOT is set to YES.

INCLUDE_GRAPH          = YES

# If the INCLUDED_BY_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are
# set to YES then doxygen will generate a graph for each documented file showing
# the direct and indirect include dependencies of the file with other documented
# files.
# The default value is: YES.
# This tag requires that the tag HAVE_DOT is set to YES.

INCLUDED_BY_GRAPH      = YES

# If the CALL_GRAPH tag is set to YES then doxygen will generate a call
# dependency graph for every global function or class method.
#
# Note that enabling this option will significantly increase the time of a run.
# So in most cases it will be better to enable call graphs for selected
# functions only using the \callgraph command. Disabling a call graph can be
# accomplished by means of the command \hidecallgraph.
# The default value is: NO.
# This tag requires that the tag HAVE_DOT is set to YES.

CALL_GRAPH             = NO

# If the CALLER_GRAPH tag is set to YES then doxygen will generate a caller
# dependency graph for every global function or class method.
#
# Note that enabling this option will significantly increase the time of a run.
# So in most cases it will be better to enable caller graphs for selected
# functions only using the \callergraph command. Disabling a caller graph can be
# accomplished by means of the command \hidecallergraph.
# The default value is: NO.
# This tag requires that the tag HAVE_DOT is set to YES.

CALLER_GRAPH           = NO

# If the GRAPHICAL_HIERARCHY tag is set to YES then doxygen will show a
# graphical hierarchy of all classes instead of a textual one.
# The default value is: YES.
# This tag requires that the tag HAVE_DOT is set to YES.

GRAPHICAL_HIERARCHY    = YES

# If the DIRECTORY_GRAPH tag is set to YES then doxygen will show the
# dependencies a directory has on other directories in a graphical way. The
# dependency relations are determined by the #include relations between the
# files in the directories.
# The default value is: YES.
# This tag requires that the tag HAVE_DOT is set to YES.

DIRECTORY_GRAPH        = YES

# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images
# generated by dot. For an explanation of the image formats see the section
# output formats in the documentation of the dot tool (Graphviz (see:
# http://www.graphviz.org/)).
# Note: If you choose svg you need to set HTML_FILE_EXTENSION to xhtml in order
# to make the SVG files visible in IE 9+ (other browsers do not have this
# requirement).
# Possible values are: png, jpg, gif, svg, png:gd, png:gd:gd, png:cairo,
# png:cairo:gd, png:cairo:cairo, png:cairo:gdiplus, png:gdiplus and
# png:gdiplus:gdiplus.
# The default value is: png.
# This tag requires that the tag HAVE_DOT is set to YES.

DOT_IMAGE_FORMAT       = png

# If DOT_IMAGE_FORMAT is set to svg, then this option can be set to YES to
# enable generation of interactive SVG images that allow zooming and panning.
#
# Note that this requires a modern browser other than Internet Explorer. Tested
# and working are Firefox, Chrome, Safari, and Opera.
# Note: For IE 9+ you need to set HTML_FILE_EXTENSION to xhtml in order to make
# the SVG files visible. Older versions of IE do not have SVG support.
# The default value is: NO.
# This tag requires that the tag HAVE_DOT is set to YES.

INTERACTIVE_SVG        = NO

# The DOT_PATH tag can be used to specify the path where the dot tool can be
# found. If left blank, it is assumed the dot tool can be found in the path.
# This tag requires that the tag HAVE_DOT is set to YES.

DOT_PATH               =

# The DOTFILE_DIRS tag can be used to specify one or more directories that
# contain dot files that are included in the documentation (see the \dotfile
# command).
# This tag requires that the tag HAVE_DOT is set to YES.

DOTFILE_DIRS           =

# The MSCFILE_DIRS tag can be used to specify one or more directories that
# contain msc files that are included in the documentation (see the \mscfile
# command).

MSCFILE_DIRS           =

# The DIAFILE_DIRS tag can be used to specify one or more directories that
# contain dia files that are included in the documentation (see the \diafile
# command).

DIAFILE_DIRS           =

# When using plantuml, the PLANTUML_JAR_PATH tag should be used to specify the
# path where java can find the plantuml.jar file. If left blank, it is assumed
# PlantUML is not used or called during a preprocessing step. Doxygen will
# generate a warning when it encounters a \startuml command in this case and
# will not generate output for the diagram.

PLANTUML_JAR_PATH      =

# When using plantuml, the PLANTUML_CFG_FILE tag can be used to specify a
# configuration file for plantuml.

PLANTUML_CFG_FILE      =

# When using plantuml, the specified paths are searched for files specified by
# the !include statement in a plantuml block.

PLANTUML_INCLUDE_PATH  =

# The DOT_GRAPH_MAX_NODES tag can be used to set the maximum number of nodes
# that will be shown in the graph. If the number of nodes in a graph becomes
# larger than this value, doxygen will truncate the graph, which is visualized
# by representing a node as a red box. Note that if the number of direct
# children of the root node in a graph is already larger than
# DOT_GRAPH_MAX_NODES then the graph will not be shown at all. Also note that
# the size of a graph can be further restricted by MAX_DOT_GRAPH_DEPTH.
# Minimum value: 0, maximum value: 10000, default value: 50.
# This tag requires that the tag HAVE_DOT is set to YES.

DOT_GRAPH_MAX_NODES    = 50

# The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the graphs
# generated by dot. A depth value of 3 means that only nodes reachable from the
# root by following a path via at most 3 edges will be shown. Nodes that lay
# further from the root node will be omitted. Note that setting this option to 1
# or 2 may greatly reduce the computation time needed for large code bases. Also
# note that the size of a graph can be further restricted by
# DOT_GRAPH_MAX_NODES. Using a depth of 0 means no depth restriction.
# Minimum value: 0, maximum value: 1000, default value: 0.
# This tag requires that the tag HAVE_DOT is set to YES.

MAX_DOT_GRAPH_DEPTH    = 0

# Set the DOT_TRANSPARENT tag to YES to generate images with a transparent
# background. This is disabled by default, because dot on Windows does not seem
# to support this out of the box.
#
# Warning: Depending on the platform used, enabling this option may lead to
# badly anti-aliased labels on the edges of a graph (i.e. they become hard to
# read).
# The default value is: NO.
# This tag requires that the tag HAVE_DOT is set to YES.

DOT_TRANSPARENT        = NO

# Set the DOT_MULTI_TARGETS tag to YES to allow dot to generate multiple output
# files in one run (i.e. multiple -o and -T options on the command line). This
# makes dot run faster, but since only newer versions of dot (>1.8.10) support
# this, this feature is disabled by default.
# The default value is: NO.
# This tag requires that the tag HAVE_DOT is set to YES.

DOT_MULTI_TARGETS      = NO

# If the GENERATE_LEGEND tag is set to YES doxygen will generate a legend page
# explaining the meaning of the various boxes and arrows in the dot generated
# graphs.
# The default value is: YES.
# This tag requires that the tag HAVE_DOT is set to YES.

GENERATE_LEGEND        = YES

# If the DOT_CLEANUP tag is set to YES, doxygen will remove the intermediate dot
# files that are used to generate the various graphs.
# The default value is: YES.
# This tag requires that the tag HAVE_DOT is set to YES.

DOT_CLEANUP            = YES


================================================
FILE: docs/Doxyfile.in
================================================
#...
INPUT = "@DOXYGEN_INPUT_DIR@"
#...
OUTPUT_DIRECTORY = "@DOXYGEN_OUTPUT_DIR@"
#...
GENERATE_XML = YES
#...


================================================
FILE: docs/README.md
================================================
# Building the Docs #

1. Clone the main repository: `git clone https://github.com/marius-team/marius.git`.

2. Clone the `gh-pages` branch into a separate directory named `html`: `git clone -b gh-pages https://github.com/marius-team/marius.git html`.

3. Enter the main repo: `cd marius`. Create a build directory and run CMake with `BUILD_DOCS` enabled: `mkdir build; cd build; cmake ../ -DBUILD_DOCS=1`.

4. Build the documentation with Sphinx: `make Sphinx -j`.

5. The output HTML files will be generated in the `html` directory. Push the changes to `gh-pages` for the site to update at https://marius-project.org/marius/.
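
The steps above can be consolidated into one session (paths are relative to the starting directory; adjust as needed):

```bash
# Clone the source and, side by side, the gh-pages branch that receives the built HTML
git clone https://github.com/marius-team/marius.git
git clone -b gh-pages https://github.com/marius-team/marius.git html

# Configure a build with documentation enabled
cd marius
mkdir build
cd build
cmake ../ -DBUILD_DOCS=1

# Build the docs; the generated HTML lands in the sibling html/ checkout
make Sphinx -j
```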

================================================
FILE: docs/_static/css/marius_theme.css
================================================
@import url("theme.css");

.wy-nav-content {
    max-width: 50vw;
}

:root {
    /*--marius_purple: #180A5B;*/
    /*--marius_offwhite: #FFFDF3;*/
    --marius_offwhite: #FFFDFB;
    --marius_lightblue: #7175a2;
    --marius_lighterblue: #999ecd;
}

h1, h2, h3, h4, h5, h6 {
    font-family: Avenir;
    font-weight: 800;
}

.rst-content table.docutils caption, .rst-content table.field-list caption, .wy-table caption {
    font-style: normal;
    font-weight: 400;
    color: black;
}

.wy-body-for-nav {
    font-family: FreightSans, Helvetica Neue, Helvetica, Arial, sans-serif;
    font-weight: 400;
    color: black;
}

.rst-footer-buttons {
    display: none;
}

.wy-side-nav-search, .wy-nav-top {
    /*background: var(--marius_purple);*/
    background: var(--marius_lightblue);
}

.wy-nav-side {
    /* background: var(--marius_offwhite); */
    background: var(--marius_lighterblue);
    border-right: 1px solid #e1e4e5;
    color: black;
}

.wy-menu-vertical a {
    color: black;
}

.wy-menu-vertical li.current a {
    border-right: None;
    font-weight: 400;
    color: var(--marius_offwhite);
}

.wy-menu-vertical li.current>a, .wy-menu-vertical li.on a {
    /* background: var(--marius_offwhite); */
    background: var(--marius_lighterblue);
}

.wy-nav-content {
    background: var(--marius_offwhite);
}

.wy-nav-content-wrap {
    background: var(--marius_offwhite);
}

.wy-table-responsive table td, .wy-table-responsive table th {
    white-space: normal;
}

/*hide nested bullet points from toclist*/
.rst-content .section ul li, .rst-content .toctree-wrapper ul li, .rst-content section ul li, .wy-plain-list-disc li, article ul li {
    list-style: none;
    margin: 0;
    padding: 0;
    font-weight: normal;
}

/*hide nested bullet points from toclist*/
.rst-content .section ul li li, .rst-content .toctree-wrapper ul li li, .rst-content section ul li li, .wy-plain-list-disc li li, article ul li li {
    list-style: none;
    margin-left: 24px;
    padding: 0;
    font-weight: normal;
}

.wy-menu-vertical a:hover {
    background: none;
}

.wy-menu-vertical li.current {
    background: none;

}

.wy-menu-vertical li.current:hover {
    background: none;
}

.wy-menu-vertical li.toctree-l2.current>a, .wy-menu-vertical li.toctree-l2.current li.toctree-l3>a {
    background: none;
}

.wy-menu-vertical li.toctree-l2.current>a:hover, .wy-menu-vertical li.toctree-l2.current li.toctree-l3>a:hover {
    background: none;
}

.wy-menu-vertical li.current a:hover {
    background: none;
}

.wy-menu-vertical li.toctree-l3.current>a, .wy-menu-vertical li.toctree-l3.current li.toctree-l4>a {
    background: none;
}

.wy-menu-vertical li.toctree-l3.current>a:hover, .wy-menu-vertical li.toctree-l3.current li.toctree-l4>a:hover {
    background: none;
}


.wy-menu-vertical li.toctree-l1.current>a {
    border-top: none;
    border-bottom: none;
}

.wy-menu-vertical li.toctree-l1.current>a:hover {
    background: none;
}

.wy-menu-vertical li.current>a {
    border-top: none;
    border-bottom: none;
}

html.writer-html4 .rst-content dl:not(.docutils) .property, html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple) .property {
    display: inline;
    padding-right: 8px;
    max-width: 100%;
}

div.leftside {
    width: 60%;
    padding: 0px 10px 0px 0px;
    float: left;
}

div.rightside {
    margin-left: 10%;
    /* float: right; */
}

================================================
FILE: docs/_templates/layout.html
================================================
{% extends "!layout.html" %}
  {% block menu %} {{ super() }}

  <!-- <style>
    a.gh-font {
        font-weight: 800;
        color: rgb(160, 45, 45);
    }
    </style> -->
  <p>
  <!-- <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1000 1000">
    <path
      d="M439.55 236.05L244 40.45a28.87 28.87 0 0 0-40.81 0l-40.66 40.63 51.52 51.52c27.06-9.14 52.68 16.77 43.39 43.68l49.66 49.66c34.23-11.8 61.18 31 35.47 56.69-26.49 26.49-70.21-2.87-56-37.34L240.22 199v121.85c25.3 12.54 22.26 41.85 9.08 55a34.34 34.34 0 0 1-48.55 0c-17.57-17.6-11.07-46.91 11.25-56v-123c-20.8-8.51-24.6-30.74-18.64-45L142.57 101 8.45 235.14a28.86 28.86 0 0 0 0 40.81l195.61 195.6a28.86 28.86 0 0 0 40.8 0l194.69-194.69a28.86 28.86 0 0 0 0-40.81z" />
  </svg> -->
  <a href="https://github.com/marius-team/marius">GitHub</a>
</p>
{% endblock %}

================================================
FILE: docs/conf.py
================================================
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html

# -- Path setup --------------------------------------------------------------

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))


# -- Project information -----------------------------------------------------

project = "Marius"
# copyright = '2020, Jason Mohoney'
author = "Jason Mohoney"

# The full version, including alpha/beta/rc tags
release = "0.0.2"


# -- General configuration ---------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = ["breathe", "sphinx.ext.autodoc", "sphinx_autodoc_typehints"]

# Breathe Configuration
breathe_default_project = "Marius"

# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]

autodoc_typehints = "description"
autodoc_member_order = "bysource"
# -- Options for HTML output -------------------------------------------------

# The theme to use for HTML and HTML Help pages.  See the documentation for
# a list of builtin themes.
#
html_theme = "sphinx_rtd_theme"

html_style = "css/marius_theme.css"

html_logo = "marius_logo_scaled.png"

html_favicon = "favicon.ico"

html_theme_options = {"logo_only": True}

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["_static"]


================================================
FILE: docs/config_interface/configuration.rst
================================================

Overview
======================

The configuration interface allows for high-performance training and evaluation of models without the need to write code.

Configuration files are written in YAML format and are grouped into four sections:

- Model: Defines the architecture of the model, the neighbor sampling configuration, the loss, and the optimizer(s).
- Storage: Specifies the input dataset and how to store the graph, features, and embeddings.
- Training: Sets options and hyperparameters for the training procedure, e.g. batch size and negative sampling.
- Evaluation: Sets options for the evaluation procedure (if any). These options are similar to those in the training section.
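
At a glance, a configuration file has the following top-level shape. The section names come from the list above; the nested keys and comments are illustrative, drawn from the link prediction example that follows:

.. code-block:: yaml

   model:
     encoder: ...          # layers and neighbor sampling
     decoder: ...          # e.g. DISTMULT
     loss: ...             # e.g. SOFTMAX_CE
     dense_optimizer: ...  # e.g. ADAM
   storage: ...            # input dataset; graph, feature, and embedding storage
   training: ...           # batch size, negative sampling, etc.
   evaluation: ...         # evaluation-time counterparts of the training options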


Link Prediction Example
-----------------------

In this example, we show how to define a configuration file for training a :doc:`3-layer GraphSage GNN <../examples/config/lp_fb15k237>` for link prediction on :doc:`fb15k_237 <../examples/config/lp_fb15k237>`.

This example assumes that Marius has been installed with :doc:`pip <../build>` and that the dataset has been preprocessed with the following command:

``marius_preprocess --dataset fb15k_237 --output_dir /home/data/datasets/fb15k_237/``


1. Define the model:
^^^^^^^^^^^^^^^^^^^^

+-------------------------------------------+-----------------------------------------------+
|                                           |                                               |
|.. code-block:: yaml                       |.. image:: ../assets/configuration_lp.png      |
|                                           |  :width: 700                                  |
|   model:                                  |                                               |
|     encoder:                              |                                               |
|       train_neighbor_sampling:            |                                               |
|         - type: ALL                       |                                               |
|         - type: ALL                       |                                               |
|         - type: ALL                       |                                               |
|       layers:                             |                                               |
|         - - type: EMBEDDING               |                                               |
|             output_dim: 50                |                                               |
|             bias: true                    |                                               |
|                                           |                                               |
|           - type: FEATURE                 |                                               |
|             output_dim: 50                |                                               |
|             bias: true                    |                                               |
|                                           |                                               |
|         - - type: REDUCTION               |                                               |
|             input_dim: 100                |                                               |
|             output_dim: 50                |                                               |
|             bias: true                    |                                               |
|             options:                      |                                               |
|               type: LINEAR                |                                               |
|                                           |                                               |
|         - - type: GNN                     |                                               |
|             options:                      |                                               |
|               type: GRAPH_SAGE           |                                               |
|               aggregator: MEAN           |                                               |
|             input_dim: 50                 |                                               |
|             output_dim: 50                |                                               |
|             bias: true                    |                                               |
|             init:                         |                                               |
|               type: GLOROT_NORMAL         |                                               |
|                                           |                                               |
|         - - type: GNN                     |                                               |
|             options:                      |                                               |
|               type: GRAPH_SAGE           |                                               |
|               aggregator: MEAN           |                                               |
|             input_dim: 50                 |                                               |
|             output_dim: 50                |                                               |
|             bias: true                    |                                               |
|             init:                         |                                               |
|               type: GLOROT_NORMAL         |                                               |
|                                           |                                               |
|         - - type: GNN                     |                                               |
|             options:                      |                                               |
|             type: GRAPH_SAGE              |                                               |
|             aggregator: MEAN              |                                               |
|             input_dim: 50                 |                                               |
|             output_dim: 50                |                                               |
|             bias: true                    |                                               |
|             init:                         |                                               |
|               type: GLOROT_NORMAL         |                                               |
|                                           |                                               |
|     decoder:                              |                                               |
|       type: DISTMULT                      |                                               |
|     loss:                                 |                                               |
|       type: SOFTMAX_CE                    |                                               |
|       options:                            |                                               |
|         reduction: SUM                    |                                               |
|     dense_optimizer:                      |                                               |
|       type: ADAM                          |                                               |
|       options:                            |                                               |
|         learning_rate: 0.01               |                                               |
|     sparse_optimizer:                     |                                               |
|       type: ADAGRAD                       |                                               |
|       options:                            |                                               |
|         learning_rate: 0.1                |                                               |
|                                           |                                               |
+-------------------------------------------+-----------------------------------------------+

The above model configuration has 5 stages in the encoder section, each stage introduced by a `- -` in the YAML list of lists. The first stage has 2 layers: an embedding layer with 
output dimension 50 and a feature layer with output dimension 50. The reduction layer in stage 2 takes as input the combined vector of dimension 
100 and outputs a 50-dimensional vector. It is followed by 3 stages of GNN layers. The output from the encoder is fed to the decoder of type DISTMULT. 
The loss function is SoftmaxCrossEntropy with sum as the reduction method. The dense optimizer covers all model parameters except the node embeddings;
node embeddings are optimized by the sparse optimizer. 

2. Set storage and dataset:
^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: yaml

   storage:
     device_type: cpu
     dataset:
       dataset_dir: /home/data/datasets/fb15k_237/
     edges:
       type: DEVICE_MEMORY
       options:
         dtype: int
     embeddings:
       type: DEVICE_MEMORY
       options:
         dtype: float

The storage configuration provides information on the location and statistics of the pre-processed dataset. It also specifies where 
to store the embeddings and edges during training. The `device_type` is set to `cpu` here; set it to `cuda` for GPU training.
Since the device is `cpu`, `DEVICE_MEMORY` in this case states that the embeddings are stored in CPU memory.

3. Configure training and evaluation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: yaml

   training:
     batch_size: 1000
     negative_sampling:
       num_chunks: 10
       negatives_per_positive: 10
       degree_fraction: 0
       filtered: false
     num_epochs: 10
     pipeline:
       sync: true
     epochs_per_shuffle: 1
     logs_per_epoch: 10
   evaluation:
     batch_size: 1000
     negative_sampling:
       filtered: true
     epochs_per_eval: 1
     pipeline:
       sync: true

The training configuration specifies the number of data samples in each batch and the total number of epochs to train the model for. 
Marius groups edges into chunks and reuses negative samples within a chunk: `num_chunks` * `negatives_per_positive` negative edges are 
sampled for each positive edge. Marius also uses pipelining to overlap data movement with training, which introduces bounded staleness 
into the system. We can explicitly set `sync` to true if we want every mini-batch to see the latest embeddings. 
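
To make the arithmetic concrete, below is a small sketch in plain Python (illustrative only, not the Marius API) of how many negatives each positive edge is scored against under the configuration above.

.. code-block:: python

   # Negative sampling arithmetic for the training configuration above.
   # Marius reuses a chunk's negatives across all positives in that chunk,
   # so each positive edge is scored against
   # num_chunks * negatives_per_positive negative edges.
   batch_size = 1000
   num_chunks = 10
   negatives_per_positive = 10

   negatives_per_edge = num_chunks * negatives_per_positive  # 100
   positives_per_chunk = batch_size // num_chunks            # 100

   print(negatives_per_edge, positives_per_chunk)  # 100 100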

Node Classification Example
---------------------------

In this example, we show how to define a configuration file for training a :doc:`3-layer GAT GNN <../examples/config/nc_ogbn_arxiv>` for node classification on :doc:`ogbn_arxiv <../examples/config/nc_ogbn_arxiv>`.

This example assumes that marius has been installed with :doc:`pip <../build>` and that the dataset has been preprocessed with the following command:

``marius_preprocess --dataset ogbn_arxiv --output_dir /home/data/datasets/ogbn_arxiv/``


1. Define the model:
^^^^^^^^^^^^^^^^^^^^

+-------------------------------------------+-----------------------------------------------+
|                                           |                                               |
|.. code-block:: yaml                       |.. image:: ../assets/configuration_nc.png      |
|                                           |                                               |
|   model:                                  |                                               |
|     learning_task: NODE_CLASSIFICATION    |                                               |
|     encoder:                              |                                               |
|       train_neighbor_sampling:            |                                               |
|         - type: ALL                       |                                               |
|       layers:                             |                                               |
|         - - type: FEATURE                 |                                               |
|             output_dim: 128               |                                               |
|             bias: false                   |                                               |
|             init:                         |                                               |
|               type: GLOROT_NORMAL         |                                               |
|         - - type: GNN                     |                                               |
|             options:                      |                                               |
|               type: GRAPH_SAGE            |                                               |
|               aggregator: MEAN            |                                               |
|             input_dim: 128                |                                               |
|             output_dim: 40                |                                               |
|             bias: true                    |                                               |
|             init:                         |                                               |
|               type: GLOROT_NORMAL         |                                               |
|     decoder:                              |                                               |
|       type: NODE                          |                                               |
|     loss:                                 |                                               |
|       type: CROSS_ENTROPY                 |                                               |
|       options:                            |                                               |
|         reduction: SUM                    |                                               |
|     dense_optimizer:                      |                                               |
|       type: ADAM                          |                                               |
|       options:                            |                                               |
|         learning_rate: 0.01               |                                               |
|     sparse_optimizer:                     |                                               |
|       type: ADAGRAD                       |                                               |
|       options:                            |                                               |
|         learning_rate: 0.1                |                                               |
|                                           |                                               |
+-------------------------------------------+-----------------------------------------------+

The above node classification example has 2 layers in the encoder section: a feature layer and a GNN layer whose output dimension of 40 matches
the number of ogbn_arxiv classes. The number of training/evaluation sampling layers should be equal to the number of GNN stages in the model.
The model has a decoder of type `NODE` for node classification. The loss function is Cross Entropy with sum as the reduction method.

2. Set storage and dataset:
^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: yaml

   storage:
     device_type: cuda
     dataset:
       dataset_dir: /home/data/datasets/ogbn_arxiv/
     edges:
       type: DEVICE_MEMORY
     nodes:
       type: DEVICE_MEMORY
     features:
       type: DEVICE_MEMORY
     embeddings:
       type: DEVICE_MEMORY
       options:
         dtype: float
     prefetch: true
     shuffle_input: true
     full_graph_evaluation: true

The storage configuration here is very similar to the Link Prediction example above, with additional `nodes` and `features` backends and flags to prefetch data, shuffle the input edges, and evaluate on the full graph.

3. Configure training and evaluation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: yaml

   training:
     batch_size: 1000
     num_epochs: 5
     pipeline:
       sync: true
     epochs_per_shuffle: 1
     logs_per_epoch: 1
   evaluation:
     batch_size: 1000
     pipeline:
       sync: true
     epochs_per_eval: 1

The above training configuration specifies a training batch size of 1000 and a total of 5 epochs. The `logs_per_epoch` attribute 
sets how often to report progress during training. `epochs_per_eval` sets how often to evaluate the model. 

Defining Encoder Architectures
------------------------------

The interface enables users to define complex model architectures. The layers field is a list of lists: a list of stages, where each 
stage is itself a list of layers. The total output dimension of a stage must equal the net input dimension of the next stage. The 
following conditions must be met while stacking the layers of a model,

#. Embedding/Feature layers have only an output dimension; their `input_dim` is set to -1 by default
#. A Reduction layer can take inputs from multiple layers in the previous stage and has a single output
#. The number of training/evaluation sampling layers should be equal to the number of GNN stages in the model
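
These rules can be checked mechanically. Below is an illustrative Python sketch (the `validate_stages` helper and the layer dictionaries are hypothetical, not the Marius API) of the dimension-matching rule:

.. code-block:: python

   # Hypothetical validator for the encoder stacking rules (not the Marius API).
   # `stages` is a list of stages; each stage is a list of layer dicts.
   def validate_stages(stages):
       for prev, nxt in zip(stages, stages[1:]):
           out_dim = sum(layer["output_dim"] for layer in prev)
           in_dim = sum(layer["input_dim"] for layer in nxt)
           if out_dim != in_dim:
               raise ValueError(
                   f"stage output {out_dim} != next stage input {in_dim}")

   # Stage 1: embedding (50) + feature (50); stage 2: linear reduction 100 -> 50;
   # stage 3: a GNN layer taking the 50-dimensional reduction output.
   validate_stages([
       [{"type": "EMBEDDING", "output_dim": 50},
        {"type": "FEATURE", "output_dim": 50}],
       [{"type": "REDUCTION", "input_dim": 100, "output_dim": 50}],
       [{"type": "GNN", "input_dim": 50, "output_dim": 50}],
   ])  # passes silently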

Advanced Configuration
----------------------

Pipeline
^^^^^^^^
Marius uses a pipelined training architecture that interleaves data access, transfer, and computation to achieve high utilization. This 
introduces the possibility of a few mini-batches using stale parameters during training. If `sync` is set to true, training becomes 
synchronous and there is no staleness. Below is a sample configuration where training is asynchronous, so there is bounded staleness in the system.


.. code-block:: yaml

   pipeline:
     sync: false
     staleness_bound: 16
     batch_host_queue_size: 4
     batch_device_queue_size: 4
     gradients_device_queue_size: 4
     gradients_host_queue_size: 4
     batch_loader_threads: 4
     batch_transfer_threads: 2
     compute_threads: 1
     gradient_transfer_threads: 2
     gradient_update_threads: 4


.. image:: ../assets/marius_arch.png
  :width: 700
  :align: center


Marius follows a 5-stage pipeline architecture: 4 stages are responsible for data movement and the fifth performs model computation 
and in-GPU parameter updates. The `pipeline` field has options for setting thread counts for each of these stages. `staleness_bound` 
sets the maximum number of mini-batches that can be present in the pipeline at any time. With the configuration above, after a set of 
node embedding updates at most 16 mini-batches use stale node embeddings. 
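
One way to picture `staleness_bound` is as a cap on the number of in-flight mini-batches. Below is a simplified Python sketch of that behavior (illustrative only, not the actual pipeline implementation):

.. code-block:: python

   from collections import deque

   STALENESS_BOUND = 16
   in_flight = deque()

   def admit(batch_id):
       # A new mini-batch enters the pipeline only if fewer than
       # STALENESS_BOUND batches are currently in flight.
       if len(in_flight) >= STALENESS_BOUND:
           return False
       in_flight.append(batch_id)
       return True

   def retire():
       # A batch that finishes its parameter update leaves the pipeline,
       # freeing a slot for a new mini-batch.
       return in_flight.popleft()

   admitted = [admit(i) for i in range(20)]
   print(sum(admitted))  # 16: only 16 batches can be in flight at once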

Partition Buffer
^^^^^^^^^^^^^^^^
One of the storage backends supported for node embeddings is the `PARTITION_BUFFER` mode, where the nodes are bucketed into p partitions 
and every edge falls into one of the p^2 buckets. When pre-processed in the partitioned mode, the edges are ordered in a way that reduces
the number of node-embedding bucket swaps from the buffer. 

The following command pre-processes the fb15k_237 dataset into 10 partitions as required by Marius for training in `PARTITION_BUFFER` mode.

``marius_preprocess --dataset fb15k_237 --num_partitions 10 --output_dir /home/data/datasets/fb15k_237_partitioned/``

Now, we can set the storage backend for node embeddings to `PARTITION_BUFFER` mode


.. code-block:: yaml

   embeddings:
     type: PARTITION_BUFFER
     options:
       dtype: float
       num_partitions: 10
       buffer_capacity: 5
       prefetching: true


`num_partitions` should hold the same value that was earlier supplied to `marius_preprocess`. `buffer_capacity` states the maximum number of 
node embedding buckets that can be present in memory at any given time. Setting `prefetching` enables the system to prefetch partitions 
asynchronously, reducing IO wait times at the cost of additional memory overhead. 
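
A quick sketch of the bucket arithmetic under this configuration (plain Python, not the Marius API): with 10 partitions the edges fall into 10^2 buckets, and a buffer holding 5 node-embedding partitions can train 5^2 of those buckets without a swap.

.. code-block:: python

   num_partitions = 10    # matches marius_preprocess --num_partitions
   buffer_capacity = 5    # node-embedding partitions resident in memory

   edge_buckets = num_partitions ** 2       # 100 edge buckets in total
   resident_buckets = buffer_capacity ** 2  # 25 buckets trainable without a swap

   print(edge_buckets, resident_buckets)  # 100 25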

================================================
FILE: docs/config_interface/full_schema.rst
================================================
.. _config_schema:

Configuration Schema
=========================

.. list-table:: MariusConfig
   :widths: 15 10 50 15
   :header-rows: 1

   * - Key
     - Type
     - Description
     - Required
   * - model
     - ModelConfig
     - Defines model architecture, learning task, optimizers and loss function.
     - Yes
   * - storage
     - StorageConfig
     - Defines the input graph and how to store the graph (edges, features) and learned model (embeddings).
     - Yes
   * - training
     - TrainingConfig
     - Hyperparameters for training.
     - Required for training
   * - evaluation
     - EvaluationConfig
     - Hyperparameters for evaluation.
     - Required for evaluation

Below is a sample end-to-end configuration file for link prediction on the `fb15k_237` dataset. The model consists of an embedding layer
in the encoder phase whose output is fed directly to the `DISTMULT` decoder. Both embeddings and edges are stored in `cpu` memory. 

.. code-block:: yaml 

   model:
     learning_task: LINK_PREDICTION
     encoder:
       layers:
         - - type: EMBEDDING
             output_dim: 50
             bias: true
             init:
               type: GLOROT_NORMAL
     decoder:
       type: DISTMULT
     loss:
       type: SOFTMAX_CE
       options:
         reduction: SUM
     dense_optimizer:
       type: ADAM
       options:
         learning_rate: 0.01
     sparse_optimizer:
       type: ADAGRAD
       options:
         learning_rate: 0.1
   storage:
     full_graph_evaluation: true
     device_type: cpu
     dataset:
       dataset_dir: /home/data/datasets/fb15k_237/
     edges:
       type: DEVICE_MEMORY
       options:
         dtype: int
     embeddings:
       type: DEVICE_MEMORY
       options:
         dtype: float
   training:
     batch_size: 1000
     negative_sampling:
       num_chunks: 10
       negatives_per_positive: 10
       degree_fraction: 0
       filtered: false
     num_epochs: 10
     pipeline:
       sync: true
     epochs_per_shuffle: 1
     logs_per_epoch: 10
     resume_training: false
   evaluation:
     batch_size: 1000
     negative_sampling:
       filtered: true
     epochs_per_eval: 1
     pipeline:
       sync: true


Model Configuration
--------------------


.. list-table:: ModelConfig
   :widths: 15 10 50 15
   :header-rows: 1

   * - Key
     - Type
     - Description
     - Required
   * - random_seed
     - Int
     - Random seed used to initialize, train, and evaluate the model. If not given, a seed will be generated.
     - No
   * - learning_task
     - String
     - Learning task for which the model is used. Valid values are ["LINK_PREDICTION", "NODE_CLASSIFICATION"] (case insensitive). "LP" and "NC" can be used for shorthand.
     - Yes
   * - encoder
     - :ref:`EncoderConfig<encoder-conf-section>`
     - Defines the architecture of the encoder and configuration of neighbor samplers.
     - Yes
   * - decoder
     - :ref:`DecoderConfig<decoder-conf-section>`
     - Denotes the decoder to apply to the output of the encoder. The decoder is learning task specific.
     - Yes
   * - loss
     - :ref:`LossConfig<loss-conf-section>`
     - Loss function to apply over the output of the decoder.
     - Required for training
   * - dense_optimizer
     - :ref:`OptimizerConfig<optimizer-conf-section>`
     - Optimizer to use for dense model parameters. Dense model parameters are all parameters besides the node embeddings; the node embeddings are handled by the sparse_optimizer.
     - Required for training
   * - sparse_optimizer
     - :ref:`OptimizerConfig<optimizer-conf-section>`
     - Optimizer to use for the node embedding parameters. Currently only ADAGRAD is supported.
     - No

Below is a full view of the `model` attribute and the corresponding parameters that can be set in the model configuration. It consists
of an embedding layer in the encoder phase and a `DISTMULT` decoder.

.. code-block:: yaml

   model:
     random_seed: 456356765463
     learning_task: LINK_PREDICTION
     encoder:
       layers:
         - - type: EMBEDDING
             output_dim: 50
             bias: true
             init:
               type: GLOROT_NORMAL
             optimizer:
               type: DEFAULT
               options:
                 learning_rate: 0.1
     decoder:
       type: DISTMULT
       options:
         inverse_edges: true
         use_relation_features: false
         edge_decoder_method: CORRUPT_NODE
       optimizer:
         type: ADAGRAD
         options:
           learning_rate: 0.1
     loss:
       type: SOFTMAX_CE
       options:
         reduction: SUM
     dense_optimizer:
       type: ADAM
       options:
         learning_rate: 0.01
     sparse_optimizer:
       type: ADAGRAD
       options:
         learning_rate: 0.1

.. _encoder-conf-section:

Encoder Configuration
^^^^^^^^^^^^^^^^^^^^^

.. list-table:: EncoderConfig
   :widths: 15 10 50 15
   :header-rows: 1

   * - Key
     - Type
     - Description
     - Required
   * - use_incoming_nbrs
     - Boolean
     - Whether to use incoming neighbors for the encoder. One of use_incoming_nbrs or use_outgoing_nbrs must be set to true.
     - No
   * - use_outgoing_nbrs
     - Boolean
     - Whether to use outgoing neighbors for the encoder. One of use_incoming_nbrs or use_outgoing_nbrs must be set to true.
     - No
   * - layers
     - List[List[:ref:`LayerConfig<layer-conf-section>`]]
     - Defines the architecture of the encoder. Layers of the encoder are grouped into stages, where the layers within a stage are executed in parallel and the output of a stage is the input to the successive stage.
     - Yes
   * - train_neighbor_sampling
     - List[:ref:`NeighborSamplingConfig<neighbor-sampling-conf-section>`]
     - Sets the neighbor sampling configuration for each GNN layer for training (and evaluation if eval_neighbor_sampling is not set). Defined as a list of neighbor sampling configurations, where the size of the list must match the number of GNN layers in the encoder.
     - Only for GNNs
   * - eval_neighbor_sampling
     - List[:ref:`NeighborSamplingConfig<neighbor-sampling-conf-section>`]
     - Sets the neighbor sampling configuration for each GNN layer for evaluation. Defined as a list of neighbor sampling configurations, where the size of the list must match the number of GNN layers in the encoder. If this field is not set then the sampling configuration used for training will be used for evaluation.
     - No

The below example depicts a configuration where there is one embedding layer, followed by three GNN layers.  

.. code-block:: yaml

   encoder:
     train_neighbor_sampling:
       - type: ALL
       - type: ALL
       - type: ALL
     eval_neighbor_sampling:
       - type: ALL
       - type: ALL
       - type: ALL
     layers:
       - - type: EMBEDDING
           output_dim: 10
           bias: true
           init:
             type: GLOROT_NORMAL

       - - type: GNN
           options:
             type: GAT
           input_dim: 10
           output_dim: 10
           bias: true
           init:
             type: GLOROT_NORMAL

       - - type: GNN
           options:
             type: GAT
           input_dim: 10
           output_dim: 10
           bias: true
           init:
             type: GLOROT_NORMAL

       - - type: GNN
           options:
             type: GAT
           input_dim: 10
           output_dim: 10
           bias: true
           init:
             type: GLOROT_NORMAL


.. _neighbor-sampling-conf-section:

.. list-table:: NeighborSamplingConfig
   :widths: 15 10 50 15
   :header-rows: 1

   * - Key
     - Type
     - Description
     - Required
   * - type
     - String
     - Denotes the type of the neighbor sampling layer. Options: ["ALL", "UNIFORM", "DROPOUT"].
     - Yes
   * - options
     - NeighborSamplingOptions
     - Specific options depending on the type of sampling layer.
     - No


.. list-table:: UniformSamplingOptions[NeighborSamplingOptions]
   :widths: 15 10 50 15
   :header-rows: 1

   * - Key
     - Type
     - Description
     - Required
   * - max_neighbors
     - Int
     - Number of neighbors to sample in a given uniform sampling layer.
     - Yes

The below configuration is suitable for a model with 2 GNN layers. It specifies that at most 
10 neighboring nodes will be sampled for any given node during training.

.. code-block:: yaml 

   train_neighbor_sampling:
     - type: UNIFORM
       options:
         max_neighbors: 10
     - type: UNIFORM
       options:
         max_neighbors: 10


.. list-table:: DropoutSamplingOptions[NeighborSamplingOptions]
   :widths: 15 10 50 15
   :header-rows: 1

   * - Key
     - Type
     - Description
     - Required
   * - rate
     - Float
     - The dropout rate for a dropout layer.
     - Yes

`DROPOUT` mode neighbor sampling randomly drops `rate * 100` percent of the neighbors during sampling. 

.. code-block:: yaml 

   train_neighbor_sampling:
     - type: DROPOUT
       options:
         rate: 0.05
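
For illustration, one plausible reading of the two sampling modes in plain Python (the helper functions are hypothetical, not the Marius implementation): `UNIFORM` caps the neighbor list at `max_neighbors`, while `DROPOUT` drops each neighbor independently with probability `rate`.

.. code-block:: python

   import random

   def uniform_sample(neighbors, max_neighbors):
       # UNIFORM: keep at most max_neighbors neighbors, without replacement.
       if len(neighbors) <= max_neighbors:
           return list(neighbors)
       return random.sample(neighbors, max_neighbors)

   def dropout_sample(neighbors, rate):
       # DROPOUT: drop each neighbor independently with probability `rate`.
       return [n for n in neighbors if random.random() >= rate]

   nbrs = list(range(100))
   print(len(uniform_sample(nbrs, 10)))   # 10
   print(len(dropout_sample(nbrs, 0.0)))  # 100: rate 0 keeps everything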


.. _layer-conf-section:

Layer Configuration
"""""""""""""""""""

.. list-table:: LayerConfig
   :widths: 15 10 50 15
   :header-rows: 1

   * - Key
     - Type
     - Description
     - Required
   * - type
     - String
     - Denotes the type of layer. Options: ["EMBEDDING", "FEATURE", "GNN", "REDUCTION"]
     - Yes
   * - options
     - LayerOptions
     - Layer specific options depending on the type.
     - No
   * - input_dim
     - Int
     - The dimension of the input to the layer.
     - GNN and Reduction layers
   * - output_dim
     - Int
     - The output dimension of the layer.
     - Yes
   * - init
     - :ref:`InitConfig<init-conf-section>`
     - Initialization method for the layer parameters. (Default GLOROT_UNIFORM).
     - No
   * - optimizer
     - OptimizerConfig
     - Optimizer to use for the parameters of this layer. If not given, the dense_optimizer is used.
     - No
   * - bias
     - Bool
     - Enable a bias to be applied to the output of the layer. (Default False)
     - No
   * - bias_init
     - :ref:`InitConfig<init-conf-section>`
     - Initialization method for the bias. The default initialization is zeroes.
     - No
   * - activation
     - String
     - Activation function to apply to the output of the layer. Options ["RELU", "SIGMOID", "NONE"]. (Default "NONE")
     - No

Below is a configuration for creating an embedding layer with output dimension 50. Its weights use Glorot normal initialization, the 
bias is initialized with zeros, and no activation is set.

.. code-block:: yaml

   layers:
   - - type: EMBEDDING
       input_dim: -1
       output_dim: 50
       init:
         type: GLOROT_NORMAL
       optimizer:
         type: DEFAULT
         options:
           learning_rate: 0.1
       bias: true
       bias_init:
         type: ZEROS
       activation: NONE


A GNN layer of type GAT (Graph Attention) with input and output dimension of 50 is as follows.

.. code-block:: yaml 

   layers:
   - - type: GNN
       options:
         type: GAT
       input_dim: 50
       output_dim: 50
       bias: true
       init:
         type: GLOROT_NORMAL


A Reduction layer of type Linear with an input dimension of 100 and an output dimension of 50 is as follows. 

.. code-block:: yaml

   layers:
   - - type: REDUCTION
       input_dim: 100
       output_dim: 50
       bias: true
       options:
         type: LINEAR


Below is a simple Feature layer with an output dimension of 50. The input dimension is set to -1 by default since neither Feature nor 
Embedding layers take an input. 

.. code-block:: yaml

   layers:
   - - type: FEATURE
       output_dim: 50
       bias: true


Layer Options
"""""""""""""

**GNN Layer Options**

.. list-table:: GraphSageLayerOptions[LayerOptions]
   :widths: 15 10 50 15
   :header-rows: 1

   * - Key
     - Type
     - Description
     - Required
   * - type
     - String
     - The type of the GNN layer, for GraphSage, this must be equal to "GRAPH_SAGE".
     - Yes
   * - aggregator
     - String
     - Aggregation to use for graph sage, options are ["GCN", "MEAN"]. (Default "MEAN")
     - No

Below is a GNN layer of type `GRAPH_SAGE` with the aggregator set to `MEAN`. Another possible option is `GCN` (Graph Convolution).

.. code-block:: yaml

   - - type: GNN
       options:
         type: GRAPH_SAGE
         aggregator: MEAN


.. list-table:: GATLayerOptions[LayerOptions]
   :widths: 15 10 50 15
   :header-rows: 1

   * - Key
     - Type
     - Description
     - Required
   * - type
     - String
     - The type of the GNN layer, for GAT, this must be equal to "GAT".
     - Yes
│   │   │   ├── pipeline/
│   │   │   │   ├── evaluator.h
│   │   │   │   ├── graph_encoder.h
│   │   │   │   ├── pipeline.h
│   │   │   │   ├── pipeline_constants.h
│   │   │   │   ├── pipeline_cpu.h
│   │   │   │   ├── pipeline_gpu.h
│   │   │   │   ├── pipeline_monitor.h
│   │   │   │   ├── queue.h
│   │   │   │   └── trainer.h
│   │   │   ├── reporting/
│   │   │   │   ├── logger.h
│   │   │   │   └── reporting.h
│   │   │   └── storage/
│   │   │       ├── buffer.h
│   │   │       ├── checkpointer.h
│   │   │       ├── graph_storage.h
│   │   │       ├── io.h
│   │   │       └── storage.h
│   │   ├── python_bindings/
│   │   │   ├── configuration/
│   │   │   │   ├── config_wrap.cpp
│   │   │   │   ├── options_wrap.cpp
│   │   │   │   └── wrap.cpp
│   │   │   ├── manager/
│   │   │   │   ├── marius_wrap.cpp
│   │   │   │   └── wrap.cpp
│   │   │   ├── nn/
│   │   │   │   ├── activation_wrap.cpp
│   │   │   │   ├── decoders/
│   │   │   │   │   ├── decoder_wrap.cpp
│   │   │   │   │   ├── edge/
│   │   │   │   │   │   ├── comparators_wrap.cpp
│   │   │   │   │   │   ├── complex_wrap.cpp
│   │   │   │   │   │   ├── distmult_wrap.cpp
│   │   │   │   │   │   ├── edge_decoder_wrap.cpp
│   │   │   │   │   │   ├── relation_operators_wrap.cpp
│   │   │   │   │   │   └── transe_wrap.cpp
│   │   │   │   │   └── node/
│   │   │   │   │       ├── node_decoder_wrap.cpp
│   │   │   │   │       └── noop_node_decoder.cpp
│   │   │   │   ├── encoders/
│   │   │   │   │   └── encoder_wrap.cpp
│   │   │   │   ├── initialization_wrap.cpp
│   │   │   │   ├── layers/
│   │   │   │   │   ├── embedding/
│   │   │   │   │   │   └── embedding_wrap.cpp
│   │   │   │   │   ├── feature/
│   │   │   │   │   │   └── feature_wrap.cpp
│   │   │   │   │   ├── gnn/
│   │   │   │   │   │   ├── gat_layer_wrap.cpp
│   │   │   │   │   │   ├── gcn_layer_wrap.cpp
│   │   │   │   │   │   ├── gnn_layer_wrap.cpp
│   │   │   │   │   │   ├── graph_sage_layer_wrap.cpp
│   │   │   │   │   │   ├── layer_helpers_wrap.cpp
│   │   │   │   │   │   └── rgcn_layer_wrap.cpp
│   │   │   │   │   ├── layer_wrap.cpp
│   │   │   │   │   └── reduction/
│   │   │   │   │       ├── concat_wrap.cpp
│   │   │   │   │       ├── linear_wrap.cpp
│   │   │   │   │       └── reduction_layer_wrap.cpp
│   │   │   │   ├── loss_wrap.cpp
│   │   │   │   ├── model_wrap.cpp
│   │   │   │   ├── optim_wrap.cpp
│   │   │   │   ├── regularizer_wrap.cpp
│   │   │   │   └── wrap.cpp
│   │   │   ├── pipeline/
│   │   │   │   ├── evaluator_wrap.cpp
│   │   │   │   ├── graph_encoder_wrap.cpp
│   │   │   │   ├── trainer_wrap.cpp
│   │   │   │   └── wrap.cpp
│   │   │   ├── reporting/
│   │   │   │   ├── reporting_wrap.cpp
│   │   │   │   └── wrap.cpp
│   │   │   └── storage/
│   │   │       ├── graph_storage_wrap.cpp
│   │   │       ├── io_wrap.cpp
│   │   │       ├── storage_wrap.cpp
│   │   │       └── wrap.cpp
│   │   ├── src/
│   │   │   ├── common/
│   │   │   │   └── util.cpp
│   │   │   ├── configuration/
│   │   │   │   ├── config.cpp
│   │   │   │   ├── options.cpp
│   │   │   │   └── util.cpp
│   │   │   ├── data/
│   │   │   │   ├── batch.cpp
│   │   │   │   ├── dataloader.cpp
│   │   │   │   ├── graph.cpp
│   │   │   │   ├── ordering.cpp
│   │   │   │   └── samplers/
│   │   │   │       ├── edge.cpp
│   │   │   │       ├── negative.cpp
│   │   │   │       └── neighbor.cpp
│   │   │   ├── marius.cpp
│   │   │   ├── nn/
│   │   │   │   ├── activation.cpp
│   │   │   │   ├── decoders/
│   │   │   │   │   ├── edge/
│   │   │   │   │   │   ├── comparators.cpp
│   │   │   │   │   │   ├── complex.cpp
│   │   │   │   │   │   ├── decoder_methods.cpp
│   │   │   │   │   │   ├── distmult.cpp
│   │   │   │   │   │   ├── edge_decoder.cpp
│   │   │   │   │   │   ├── relation_operators.cpp
│   │   │   │   │   │   └── transe.cpp
│   │   │   │   │   └── node/
│   │   │   │   │       └── noop_node_decoder.cpp
│   │   │   │   ├── encoders/
│   │   │   │   │   └── encoder.cpp
│   │   │   │   ├── initialization.cpp
│   │   │   │   ├── layers/
│   │   │   │   │   ├── embedding/
│   │   │   │   │   │   └── embedding.cpp
│   │   │   │   │   ├── feature/
│   │   │   │   │   │   └── feature.cpp
│   │   │   │   │   ├── gnn/
│   │   │   │   │   │   ├── gat_layer.cpp
│   │   │   │   │   │   ├── gcn_layer.cpp
│   │   │   │   │   │   ├── graph_sage_layer.cpp
│   │   │   │   │   │   ├── layer_helpers.cpp
│   │   │   │   │   │   └── rgcn_layer.cpp
│   │   │   │   │   ├── layer.cpp
│   │   │   │   │   └── reduction/
│   │   │   │   │       ├── concat.cpp
│   │   │   │   │       └── linear.cpp
│   │   │   │   ├── loss.cpp
│   │   │   │   ├── model.cpp
│   │   │   │   ├── optim.cpp
│   │   │   │   └── regularizer.cpp
│   │   │   ├── pipeline/
│   │   │   │   ├── evaluator.cpp
│   │   │   │   ├── graph_encoder.cpp
│   │   │   │   ├── pipeline.cpp
│   │   │   │   ├── pipeline_cpu.cpp
│   │   │   │   ├── pipeline_gpu.cpp
│   │   │   │   └── trainer.cpp
│   │   │   ├── reporting/
│   │   │   │   └── reporting.cpp
│   │   │   └── storage/
│   │   │       ├── buffer.cpp
│   │   │       ├── checkpointer.cpp
│   │   │       ├── graph_storage.cpp
│   │   │       ├── io.cpp
│   │   │       └── storage.cpp
│   │   └── third_party/
│   │       └── CMakeLists.txt
│   ├── cuda/
│   │   └── third_party/
│   │       └── pytorch_scatter/
│   │           ├── atomics.cuh
│   │           ├── index_info.cuh
│   │           ├── reducer.cuh
│   │           ├── segment_csr_cuda.cu
│   │           ├── segment_csr_cuda.h
│   │           ├── segment_max.cpp
│   │           ├── segment_max.h
│   │           └── utils.cuh
│   └── python/
│       ├── __init__.py
│       ├── console_scripts/
│       │   ├── __init__.py
│       │   ├── marius_eval.py
│       │   └── marius_train.py
│       ├── distribution/
│       │   ├── generate_stubs.py
│       │   └── marius_env_info.py
│       └── tools/
│           ├── __init__.py
│           ├── configuration/
│           │   ├── __init__.py
│           │   ├── constants.py
│           │   ├── datatypes.py
│           │   ├── marius_config.py
│           │   └── validation.py
│           ├── db2graph/
│           │   └── marius_db2graph.py
│           ├── marius_config_generator.py
│           ├── marius_postprocess.py
│           ├── marius_predict.py
│           ├── marius_preprocess.py
│           ├── postprocess/
│           │   ├── __init__.py
│           │   └── in_memory_exporter.py
│           ├── prediction/
│           │   ├── link_prediction.py
│           │   └── node_classification.py
│           └── preprocess/
│               ├── __init__.py
│               ├── converters/
│               │   ├── __init__.py
│               │   ├── partitioners/
│               │   │   ├── __init__.py
│               │   │   ├── partitioner.py
│               │   │   ├── spark_partitioner.py
│               │   │   └── torch_partitioner.py
│               │   ├── readers/
│               │   │   ├── __init__.py
│               │   │   ├── pandas_readers.py
│               │   │   ├── reader.py
│               │   │   └── spark_readers.py
│               │   ├── spark_constants.py
│               │   ├── spark_converter.py
│               │   ├── torch_constants.py
│               │   ├── torch_converter.py
│               │   └── writers/
│               │       ├── __init__.py
│               │       ├── spark_writer.py
│               │       ├── torch_writer.py
│               │       └── writer.py
│               ├── custom.py
│               ├── dataset.py
│               ├── dataset_stats.tsv
│               ├── datasets/
│               │   ├── __init__.py
│               │   ├── dataset_helpers.py
│               │   ├── fb15k.py
│               │   ├── fb15k_237.py
│               │   ├── freebase86m.py
│               │   ├── friendster.py
│               │   ├── livejournal.py
│               │   ├── ogb_mag240m.py
│               │   ├── ogb_wikikg90mv2.py
│               │   ├── ogbl_citation2.py
│               │   ├── ogbl_collab.py
│               │   ├── ogbl_ppa.py
│               │   ├── ogbl_wikikg2.py
│               │   ├── ogbn_arxiv.py
│               │   ├── ogbn_papers100m.py
│               │   ├── ogbn_products.py
│               │   └── twitter.py
│               └── utils.py
├── test/
│   ├── CMakeLists.txt
│   ├── README.md
│   ├── __init__.py
│   ├── cpp/
│   │   ├── CMakeLists.txt
│   │   ├── end_to_end/
│   │   │   ├── CMakeLists.txt
│   │   │   ├── main.cpp
│   │   │   └── test_main.cpp
│   │   ├── integration/
│   │   │   ├── CMakeLists.txt
│   │   │   └── main.cpp
│   │   ├── performance/
│   │   │   ├── CMakeLists.txt
│   │   │   └── main.cpp
│   │   └── unit/
│   │       ├── CMakeLists.txt
│   │       ├── main.cpp
│   │       ├── nn/
│   │       │   ├── test_activation.cpp
│   │       │   ├── test_initialization.cpp
│   │       │   ├── test_loss.cpp
│   │       │   └── test_model.cpp
│   │       ├── test_buffer.cpp
│   │       ├── test_storage.cpp
│   │       ├── testing_util.cpp
│   │       └── testing_util.h
│   ├── db2graph/
│   │   └── test_postgres.py
│   ├── python/
│   │   ├── bindings/
│   │   │   ├── end_to_end/
│   │   │   │   ├── test_fb15k_acc.py
│   │   │   │   ├── test_interval_checkpointing.py
│   │   │   │   ├── test_lp_basic.py
│   │   │   │   ├── test_lp_buffer.py
│   │   │   │   ├── test_lp_storage.py
│   │   │   │   ├── test_model_dir.py
│   │   │   │   ├── test_nc_basic.py
│   │   │   │   ├── test_nc_buffer.py
│   │   │   │   ├── test_nc_storage.py
│   │   │   │   └── test_resume_training.py
│   │   │   └── integration/
│   │   │       ├── test_config.py
│   │   │       ├── test_data.py
│   │   │       └── test_nn.py
│   │   ├── constants.py
│   │   ├── helpers.py
│   │   ├── postprocessing/
│   │   │   └── test_in_memory_exporter.py
│   │   ├── predict/
│   │   │   └── test_predict.py
│   │   └── preprocessing/
│   │       ├── test_spark_converter.py
│   │       └── test_torch_converter.py
│   ├── test_configs/
│   │   ├── generate_test_configs.py
│   │   ├── lp/
│   │   │   ├── evaluation/
│   │   │   │   ├── async.yaml
│   │   │   │   ├── async_deg.yaml
│   │   │   │   ├── async_filtered.yaml
│   │   │   │   ├── sync.yaml
│   │   │   │   ├── sync_deg.yaml
│   │   │   │   └── sync_filtered.yaml
│   │   │   ├── model/
│   │   │   │   ├── distmult.yaml
│   │   │   │   ├── distmult_feat.yaml
│   │   │   │   ├── gat_1_layer.yaml
│   │   │   │   ├── gat_3_layer.yaml
│   │   │   │   ├── gs_1_layer.yaml
│   │   │   │   ├── gs_1_layer_feat.yaml
│   │   │   │   ├── gs_1_layer_uniform.yaml
│   │   │   │   ├── gs_3_layer.yaml
│   │   │   │   ├── gs_3_layer_feat.yaml
│   │   │   │   └── gs_3_layer_uniform.yaml
│   │   │   ├── storage/
│   │   │   │   ├── edges_disk.yaml
│   │   │   │   ├── in_memory.yaml
│   │   │   │   └── part_buffer.yaml
│   │   │   └── training/
│   │   │       ├── async.yaml
│   │   │       ├── async_deg.yaml
│   │   │       ├── async_filtered.yaml
│   │   │       ├── sync.yaml
│   │   │       ├── sync_deg.yaml
│   │   │       └── sync_filtered.yaml
│   │   └── nc/
│   │       ├── evaluation/
│   │       │   ├── async.yaml
│   │       │   └── sync.yaml
│   │       ├── model/
│   │       │   ├── gat_1_layer.yaml
│   │       │   ├── gat_3_layer.yaml
│   │       │   ├── gs_1_layer.yaml
│   │       │   ├── gs_1_layer_emb.yaml
│   │       │   ├── gs_1_layer_uniform.yaml
│   │       │   ├── gs_3_layer.yaml
│   │       │   ├── gs_3_layer_emb.yaml
│   │       │   └── gs_3_layer_uniform.yaml
│   │       ├── storage/
│   │       │   ├── in_memory.yaml
│   │       │   └── part_buffer.yaml
│   │       └── training/
│   │           ├── async.yaml
│   │           └── sync.yaml
│   └── test_data/
│       ├── generate.py
│       ├── test_edges.txt
│       ├── train_edges.txt
│       ├── train_edges_weights.txt
│       └── valid_edges.txt
└── tox.ini
SYMBOL INDEX (1092 symbols across 198 files)

FILE: examples/python/custom_lp.py
  class MYDATASET (line 13) | class MYDATASET(LinkPredictionDataset):
    method __init__ (line 14) | def __init__(self, output_directory: Path, spark=False):
    method download (line 20) | def download(self, overwrite=False):
    method preprocess (line 35) | def preprocess(self, remap_ids=True, splits=None):
  function init_model (line 51) | def init_model(embedding_dim, num_nodes, num_relations, device, dtype):
  function train_epoch (line 87) | def train_epoch(model, dataloader):
  function eval_epoch (line 104) | def eval_epoch(model, dataloader):
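
  The custom link-prediction example above follows a download/preprocess template: fetch raw edges, remap string ids to integers, then split into train/held-out sets. A minimal sketch of that shape, using a stand-in base class rather than the real marius `LinkPredictionDataset` (the stub, file name, and split logic below are illustrative assumptions, not the library's API):

```python
from pathlib import Path
import tempfile

class LinkPredictionDatasetStub:
    """Stand-in for the marius base class; only stores the output directory."""
    def __init__(self, output_directory: Path):
        self.output_directory = Path(output_directory)

class MyDataset(LinkPredictionDatasetStub):
    def download(self, overwrite=False):
        # Fetch (or here, synthesize) raw tab-separated edges.
        self.output_directory.mkdir(parents=True, exist_ok=True)
        raw = self.output_directory / "edges.tsv"
        if overwrite or not raw.exists():
            raw.write_text("a\tlikes\tb\nb\tlikes\tc\nc\tlikes\ta\n")
        return raw

    def preprocess(self, splits=(0.8, 0.1, 0.1)):
        # Remap string ids to dense integers, then split edges by ratio.
        lines = (self.output_directory / "edges.tsv").read_text().splitlines()
        edges = [tuple(line.split("\t")) for line in lines]
        node_ids = {n: i for i, n in enumerate(sorted({n for s, _, d in edges for n in (s, d)}))}
        rel_ids = {r: i for i, r in enumerate(sorted({r for _, r, _ in edges}))}
        remapped = [(node_ids[s], rel_ids[r], node_ids[d]) for s, r, d in edges]
        n_train = max(1, int(len(remapped) * splits[0]))
        return remapped[:n_train], remapped[n_train:]

# Demo run in a throwaway directory.
ds = MyDataset(Path(tempfile.mkdtemp()))
ds.download()
train_edges, held_out = ds.preprocess()
```

  The real converter additionally writes the remapped splits to binary files that the marius storage layer can memory-map.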

FILE: examples/python/custom_nc_graphsage.py
  function switch_to_num (line 17) | def switch_to_num(row):
  class MYDATASET (line 36) | class MYDATASET(NodeClassificationDataset):
    method __init__ (line 37) | def __init__(self, output_directory: Path, spark=False):
    method download (line 43) | def download(self, overwrite=False):
    method preprocess (line 103) | def preprocess(
  function init_model (line 164) | def init_model(feature_dim, num_classes, device):
  function train_epoch (line 202) | def train_epoch(model, dataloader):
  function eval_epoch (line 218) | def eval_epoch(model, dataloader):

FILE: examples/python/fb15k_237.py
  function init_model (line 11) | def init_model(embedding_dim, num_nodes, num_relations, device, dtype):
  function train_epoch (line 47) | def train_epoch(model, dataloader):
  function eval_epoch (line 64) | def eval_epoch(model, dataloader):

FILE: examples/python/fb15k_237_gpu.py
  function init_model (line 11) | def init_model(embedding_dim, num_nodes, num_relations, device, dtype):
  function train_epoch (line 47) | def train_epoch(model, dataloader):
  function eval_epoch (line 64) | def eval_epoch(model, dataloader):

FILE: examples/python/ogbn_arxiv_nc.py
  function init_model (line 11) | def init_model(feature_dim, num_classes, device):
  function train_epoch (line 49) | def train_epoch(model, dataloader):
  function eval_epoch (line 65) | def eval_epoch(model, dataloader):

FILE: setup.py
  class CMakeExtension (line 10) | class CMakeExtension(Extension):
    method __init__ (line 11) | def __init__(self, name, sourcedir=""):
  class CMakeBuild (line 16) | class CMakeBuild(build_ext):
    method run (line 17) | def run(self):
    method build_extension (line 32) | def build_extension(self, ext):

FILE: src/cpp/include/common/datatypes.h
  class DummyCudaEvent (line 32) | class DummyCudaEvent {
  class DummyCudaStream (line 45) | class DummyCudaStream {
  class DummyCudaStreamGuard (line 52) | class DummyCudaStreamGuard {
  type CudaEvent (line 65) | typedef at::cuda::CUDAEvent CudaEvent;
  type CudaStream (line 66) | typedef at::cuda::CUDAStream CudaStream;
  type CudaStreamGuard (line 67) | typedef at::cuda::CUDAStreamGuard CudaStreamGuard;
  type CudaEvent (line 72) | typedef DummyCudaEvent CudaEvent;
  type CudaStream (line 73) | typedef DummyCudaStream CudaStream;
  type CudaStreamGuard (line 74) | typedef DummyCudaStreamGuard CudaStreamGuard;
  type EdgeList (line 91) | typedef torch::Tensor EdgeList;
  type Indices (line 94) | typedef torch::Tensor Indices;
  type Gradients (line 97) | typedef torch::Tensor Gradients;
  type OptimizerState (line 100) | typedef torch::Tensor OptimizerState;
  type Timestamp (line 102) | typedef std::chrono::time_point<std::chrono::steady_clock> Timestamp;

FILE: src/cpp/include/common/exception.h
  type MariusRuntimeException (line 12) | struct MariusRuntimeException
  type UndefinedTensorException (line 17) | struct UndefinedTensorException : public MariusRuntimeException {
  type NANTensorException (line 22) | struct NANTensorException : public MariusRuntimeException {
  type OOMTensorException (line 27) | struct OOMTensorException : public MariusRuntimeException {
  type TensorSizeMismatchException (line 32) | struct TensorSizeMismatchException : public MariusRuntimeException {
  type UnexpectedNullPtrException (line 39) | struct UnexpectedNullPtrException : public MariusRuntimeException {
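
  These exception types form a single hierarchy rooted at MariusRuntimeException, so callers can catch one specific failure or any Marius error in one handler. A rough Python analogue of that layout (class names are taken from the header; the messages and the `guarded` helper are illustrative):

```python
class MariusRuntimeException(RuntimeError):
    """Root of the error hierarchy, mirroring common/exception.h."""
    def __init__(self, message="Marius runtime error"):
        super().__init__(message)

class UndefinedTensorException(MariusRuntimeException):
    def __init__(self):
        super().__init__("Tensor is undefined")

class NANTensorException(MariusRuntimeException):
    def __init__(self):
        super().__init__("Tensor contains NaN values")

class OOMTensorException(MariusRuntimeException):
    def __init__(self):
        super().__init__("Tensor allocation ran out of memory")

def guarded(fn):
    # Catching the base class handles every derived error in one place.
    try:
        fn()
        return "ok"
    except MariusRuntimeException as e:
        return f"marius error: {e}"

def raise_nan():
    raise NANTensorException()
```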

FILE: src/cpp/include/common/util.h
  class Timer (line 10) | class Timer {
  function start (line 29) | void start() {
  function stop (line 36) | void stop() {

FILE: src/cpp/include/configuration/config.h
  type NeighborSamplingConfig (line 16) | struct NeighborSamplingConfig {
  type OptimizerConfig (line 22) | struct OptimizerConfig {
  type InitConfig (line 27) | struct InitConfig {
  type LossConfig (line 35) | struct LossConfig {
  type LayerConfig (line 40) | struct LayerConfig {
  type EncoderConfig (line 52) | struct EncoderConfig {
  type DecoderConfig (line 60) | struct DecoderConfig {
  type StorageBackendConfig (line 66) | struct StorageBackendConfig {
  type DatasetConfig (line 71) | struct DatasetConfig {
  type NegativeSamplingConfig (line 84) | struct NegativeSamplingConfig {
  type PipelineConfig (line 92) | struct PipelineConfig {
  type CheckpointConfig (line 108) | struct CheckpointConfig {
  type ModelConfig (line 115) | struct ModelConfig {
  type StorageConfig (line 125) | struct StorageConfig {
  type TrainingConfig (line 142) | struct TrainingConfig {
  type EvaluationConfig (line 155) | struct EvaluationConfig {
  type MariusConfig (line 164) | struct MariusConfig {

FILE: src/cpp/include/configuration/constants.h
  namespace PathConstants (line 16) | namespace PathConstants {

FILE: src/cpp/include/configuration/options.h
  type class (line 12) | enum class
  type class (line 16) | enum class
  type class (line 20) | enum class
  type class (line 24) | enum class
  type class (line 28) | enum class
  type class (line 32) | enum class
  type class (line 36) | enum class
  type class (line 44) | enum class
  type class (line 48) | enum class
  type class (line 52) | enum class
  type class (line 56) | enum class
  type class (line 60) | enum class
  type class (line 64) | enum class
  type class (line 68) | enum class
  type class (line 72) | enum class
  type class (line 76) | enum class
  type class (line 80) | enum class
  type class (line 84) | enum class
  type InitOptions (line 92) | struct InitOptions {
  type ConstantInitOptions (line 96) | struct ConstantInitOptions : InitOptions {
  type UniformInitOptions (line 102) | struct UniformInitOptions : InitOptions {
  type NormalInitOptions (line 108) | struct NormalInitOptions : InitOptions {
  type LossOptions (line 115) | struct LossOptions {
  type RankingLossOptions (line 121) | struct RankingLossOptions : LossOptions {
  type OptimizerOptions (line 126) | struct OptimizerOptions {
  type AdagradOptions (line 132) | struct AdagradOptions : OptimizerOptions {
  type AdamOptions (line 139) | struct AdamOptions : OptimizerOptions {
  type LayerOptions (line 147) | struct LayerOptions {
  type EmbeddingLayerOptions (line 151) | struct EmbeddingLayerOptions : LayerOptions {}
  type FeatureLayerOptions (line 153) | struct FeatureLayerOptions : LayerOptions {}
  type DenseLayerOptions (line 155) | struct DenseLayerOptions : LayerOptions {
  type ReductionLayerOptions (line 159) | struct ReductionLayerOptions : LayerOptions {
  type GNNLayerOptions (line 163) | struct GNNLayerOptions : LayerOptions {
  type GraphSageLayerOptions (line 168) | struct GraphSageLayerOptions : GNNLayerOptions {
  type GATLayerOptions (line 172) | struct GATLayerOptions : GNNLayerOptions {
  type DecoderOptions (line 180) | struct DecoderOptions {
  type EdgeDecoderOptions (line 184) | struct EdgeDecoderOptions : DecoderOptions {
  type StorageOptions (line 190) | struct StorageOptions {
  type PartitionBufferOptions (line 195) | struct PartitionBufferOptions : StorageOptions {
  type NeighborSamplingOptions (line 206) | struct NeighborSamplingOptions {
  type UniformSamplingOptions (line 210) | struct UniformSamplingOptions : NeighborSamplingOptions {
  type DropoutSamplingOptions (line 214) | struct DropoutSamplingOptions : NeighborSamplingOptions {

FILE: src/cpp/include/data/batch.h
  type class (line 17) | enum class
  class Batch (line 32) | class Batch {

FILE: src/cpp/include/data/dataloader.h
  class DataLoader (line 22) | class DataLoader {
  function getNumEdges (line 180) | int64_t getNumEdges() { return graph_storage_->getNumEdges(); }
  function getEpochsProcessed (line 182) | int64_t getEpochsProcessed() { return epochs_processed_; }
  function getBatchesProcessed (line 184) | int64_t getBatchesProcessed() { return batches_processed_; }
  function isTrain (line 186) | bool isTrain() { return train_; }
  function setTrainSet (line 191) | void setTrainSet() {
  function setValidationSet (line 207) | void setValidationSet() {
  function setTestSet (line 220) | void setTestSet() {
  function setEncode (line 233) | void setEncode() {

FILE: src/cpp/include/data/graph.h
  class MariusGraph (line 16) | class MariusGraph {
  class DENSEGraph (line 108) | class DENSEGraph : public MariusGraph {

FILE: src/cpp/include/data/samplers/edge.h
  class EdgeSampler (line 13) | class EdgeSampler {
  class RandomEdgeSampler (line 27) | class RandomEdgeSampler : public EdgeSampler {

FILE: src/cpp/include/data/samplers/negative.h
  class NegativeSampler (line 31) | class NegativeSampler {
  class CorruptNodeNegativeSampler (line 45) | class CorruptNodeNegativeSampler : public NegativeSampler {
  class CorruptRelNegativeSampler (line 59) | class CorruptRelNegativeSampler : public NegativeSampler {
  class NegativeEdgeSampler (line 70) | class NegativeEdgeSampler : public NegativeSampler {

FILE: src/cpp/include/data/samplers/neighbor.h
  class NeighborSampler (line 32) | class NeighborSampler {
  class LayeredNeighborSampler (line 47) | class LayeredNeighborSampler : public NeighborSampler {

FILE: src/cpp/include/nn/decoders/decoder.h
  class Decoder (line 12) | class Decoder {

FILE: src/cpp/include/nn/decoders/edge/comparators.h
  class Comparator (line 13) | class Comparator {
  class L2Compare (line 19) | class L2Compare : public Comparator {
  class CosineCompare (line 26) | class CosineCompare : public Comparator {
  class DotCompare (line 33) | class DotCompare : public Comparator {

FILE: src/cpp/include/nn/decoders/edge/edge_decoder.h
  class EdgeDecoder (line 13) | class EdgeDecoder : public Decoder {

FILE: src/cpp/include/nn/decoders/edge/relation_operators.h
  class RelationOperator (line 11) | class RelationOperator {
  class HadamardOperator (line 17) | class HadamardOperator : public RelationOperator {
  class ComplexHadamardOperator (line 22) | class ComplexHadamardOperator : public RelationOperator {
  class TranslationOperator (line 27) | class TranslationOperator : public RelationOperator {
  class NoOp (line 32) | class NoOp : public RelationOperator {

FILE: src/cpp/include/nn/decoders/node/node_decoder.h
  class NodeDecoder (line 10) | class NodeDecoder : public Decoder {

FILE: src/cpp/include/nn/encoders/encoder.h
  class GeneralEncoder (line 11) | class GeneralEncoder : public torch::nn::Cloneable<GeneralEncoder> {

FILE: src/cpp/include/nn/layers/embedding/embedding.h
  class EmbeddingLayer (line 12) | class EmbeddingLayer : public Layer {

FILE: src/cpp/include/nn/layers/feature/feature.h
  class FeatureLayer (line 11) | class FeatureLayer : public Layer {

FILE: src/cpp/include/nn/layers/gnn/gat_layer.h
  class GATLayer (line 10) | class GATLayer : public GNNLayer {

FILE: src/cpp/include/nn/layers/gnn/gcn_layer.h
  class GCNLayer (line 10) | class GCNLayer : public GNNLayer {

FILE: src/cpp/include/nn/layers/gnn/gnn_layer.h
  class GNNLayer (line 14) | class GNNLayer : public Layer {

FILE: src/cpp/include/nn/layers/gnn/graph_sage_layer.h
  class GraphSageLayer (line 10) | class GraphSageLayer : public GNNLayer {

FILE: src/cpp/include/nn/layers/gnn/rgcn_layer.h
  class RGCNLayer (line 10) | class RGCNLayer : public GNNLayer {

FILE: src/cpp/include/nn/layers/layer.h
  function ~Layer (line 22) | virtual ~Layer(){}

FILE: src/cpp/include/nn/layers/reduction/concat.h
  class ConcatReduction (line 11) | class ConcatReduction : public ReductionLayer {

FILE: src/cpp/include/nn/layers/reduction/linear.h
  class LinearReduction (line 11) | class LinearReduction : public ReductionLayer {

FILE: src/cpp/include/nn/layers/reduction/reduction_layer.h
  class ReductionLayer (line 15) | class ReductionLayer : public Layer {

FILE: src/cpp/include/nn/loss.h
  class LossFunction (line 21) | class LossFunction {
  class SoftmaxCrossEntropy (line 33) | class SoftmaxCrossEntropy : public LossFunction {
  class RankingLoss (line 43) | class RankingLoss : public LossFunction {
  class CrossEntropyLoss (line 57) | class CrossEntropyLoss : public LossFunction {
  class BCEAfterSigmoidLoss (line 67) | class BCEAfterSigmoidLoss : public LossFunction {
  class BCEWithLogitsLoss (line 77) | class BCEWithLogitsLoss : public LossFunction {
  class MSELoss (line 87) | class MSELoss : public LossFunction {
  class SoftPlusLoss (line 97) | class SoftPlusLoss : public LossFunction {

FILE: src/cpp/include/nn/optim.h
  class Optimizer (line 11) | class Optimizer {
  class SGDOptimizer (line 33) | class SGDOptimizer : public Optimizer {
  class AdagradOptimizer (line 52) | class AdagradOptimizer : public Optimizer {
  class AdamOptimizer (line 79) | class AdamOptimizer : public Optimizer {

FILE: src/cpp/include/nn/regularizer.h
  class Regularizer (line 10) | class Regularizer {
  class NormRegularizer (line 17) | class NormRegularizer : public Regularizer {

FILE: src/cpp/include/pipeline/evaluator.h
  class Evaluator (line 17) | class Evaluator {
  class PipelineEvaluator (line 30) | class PipelineEvaluator : public Evaluator {
  class SynchronousEvaluator (line 39) | class SynchronousEvaluator : public Evaluator {

FILE: src/cpp/include/pipeline/graph_encoder.h
  class GraphEncoder (line 12) | class GraphEncoder {
  class PipelineGraphEncoder (line 25) | class PipelineGraphEncoder : public GraphEncoder {
  class SynchronousGraphEncoder (line 34) | class SynchronousGraphEncoder : public GraphEncoder {

FILE: src/cpp/include/pipeline/pipeline.h
  class Worker (line 18) | class Worker {
  class UpdateBatchWorker (line 57) | class UpdateBatchWorker : public Worker {
  class WriteNodesWorker (line 64) | class WriteNodesWorker : public Worker {
  class Pipeline (line 71) | class Pipeline {

FILE: src/cpp/include/pipeline/pipeline_cpu.h
  class ComputeWorkerCPU (line 11) | class ComputeWorkerCPU : public Worker {
  class EncodeNodesWorkerCPU (line 18) | class EncodeNodesWorkerCPU : public Worker {
  class PipelineCPU (line 27) | class PipelineCPU : public Pipeline {

FILE: src/cpp/include/pipeline/pipeline_gpu.h
  class BatchToDeviceWorker (line 11) | class BatchToDeviceWorker : public Worker {
  class PipelineGPU (line 45) | class PipelineGPU : public Pipeline {

FILE: src/cpp/include/pipeline/queue.h
  function unlock (line 83) | void unlock() { mutex_->unlock(); }
  function flush (line 85) | void flush() {
  function size (line 91) | int size() { return queue_.size(); }
  function isFull (line 93) | bool isFull() { return queue_.size() == max_size_; }
  function isEmpty (line 95) | bool isEmpty() { return queue_.size() == 0; }
  function getMaxSize (line 97) | int getMaxSize() { return max_size_; }
  type queue_type (line 99) | typedef typename std::deque<T> queue_type;
  type iterator (line 101) | typedef typename queue_type::iterator iterator;
  type const_iterator (line 102) | typedef typename queue_type::const_iterator const_iterator;
  function begin (line 104) | inline iterator begin() noexcept { return queue_.begin(); }
  function end (line 108) | inline iterator end() noexcept { return queue_.end(); }
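
  The entries for queue.h describe a bounded, mutex-protected queue with size/isFull/isEmpty/flush accessors, used to hand batches between pipeline workers. A simplified Python sketch of the same interface (the real C++ template also blocks producers and consumers on condition variables; this only mirrors the capacity bookkeeping):

```python
from collections import deque
from threading import Lock

class BoundedQueue:
    def __init__(self, max_size):
        self._queue = deque()
        self._max_size = max_size
        self._mutex = Lock()

    def push(self, item):
        # Reject pushes past capacity; the C++ pipeline blocks instead.
        with self._mutex:
            if len(self._queue) == self._max_size:
                return False
            self._queue.append(item)
            return True

    def pop(self):
        with self._mutex:
            return self._queue.popleft() if self._queue else None

    def flush(self):
        # Drop all queued items, mirroring Queue::flush().
        with self._mutex:
            self._queue.clear()

    def size(self):
        return len(self._queue)

    def is_full(self):
        return len(self._queue) == self._max_size

    def is_empty(self):
        return len(self._queue) == 0

    def get_max_size(self):
        return self._max_size
```

  Bounding the queue is what gives the pipeline backpressure: a slow GPU compute stage stops the CPU loading stage from racing ahead and exhausting memory.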

FILE: src/cpp/include/pipeline/trainer.h
  class Trainer (line 14) | class Trainer {
  class PipelineTrainer (line 28) | class PipelineTrainer : public Trainer {
  class SynchronousTrainer (line 37) | class SynchronousTrainer : public Trainer {

FILE: src/cpp/include/reporting/logger.h
  class MariusLogger (line 18) | class MariusLogger {

FILE: src/cpp/include/reporting/reporting.h
  class Metric (line 10) | class Metric {
  class RankingMetric (line 18) | class RankingMetric : public Metric {
  class HitskMetric (line 23) | class HitskMetric : public RankingMetric {
  class MeanRankMetric (line 32) | class MeanRankMetric : public RankingMetric {
  class MeanReciprocalRankMetric (line 39) | class MeanReciprocalRankMetric : public RankingMetric {
  class ClassificationMetric (line 46) | class ClassificationMetric : public Metric {
  class CategoricalAccuracyMetric (line 51) | class CategoricalAccuracyMetric : public ClassificationMetric {
  class Reporter (line 58) | class Reporter {
  function report (line 119) | void report() override;
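
  The ranking metrics listed here (Hits@k, mean rank, mean reciprocal rank) all reduce a collection of per-query ranks of the true answer to a single scalar. Computed on plain Python lists for illustration (the real metric classes operate on torch tensors of ranks):

```python
def hits_at_k(ranks, k):
    # Fraction of queries whose true answer ranked within the top k.
    return sum(r <= k for r in ranks) / len(ranks)

def mean_rank(ranks):
    # Average position of the true answer (lower is better).
    return sum(ranks) / len(ranks)

def mean_reciprocal_rank(ranks):
    # Reciprocal rank rewards top answers far more than mid-list ones.
    return sum(1.0 / r for r in ranks) / len(ranks)
```

  MRR is usually preferred over mean rank for link prediction because a single badly-ranked query cannot dominate the average.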

FILE: src/cpp/include/storage/buffer.h
  class Partition (line 11) | class Partition {
  class PartitionedFile (line 41) | class PartitionedFile {
  class LookaheadBlock (line 62) | class LookaheadBlock {
  class AsyncWriteBlock (line 89) | class AsyncWriteBlock {
  class PartitionBuffer (line 116) | class PartitionBuffer {

FILE: src/cpp/include/storage/checkpointer.h
  type CheckpointMeta (line 12) | struct CheckpointMeta {
  class Checkpointer (line 23) | class Checkpointer {

FILE: src/cpp/include/storage/graph_storage.h
  type GraphModelStoragePtrs (line 12) | struct GraphModelStoragePtrs {
  type InMemorySubgraphState (line 32) | struct InMemorySubgraphState {
  class GraphModelStorage (line 43) | class GraphModelStorage {

FILE: src/cpp/include/storage/storage.h
  class Storage (line 35) | class Storage {
  class PartitionBufferStorage (line 89) | class PartitionBufferStorage : public Storage {
  function load (line 168) | void load() override;

FILE: src/cpp/python_bindings/configuration/config_wrap.cpp
  function init_config (line 4) | void init_config(py::module &m) {

FILE: src/cpp/python_bindings/configuration/options_wrap.cpp
  function init_options (line 4) | void init_options(py::module &m) {

FILE: src/cpp/python_bindings/configuration/wrap.cpp
  function PYBIND11_MODULE (line 7) | PYBIND11_MODULE(_config, m) {

FILE: src/cpp/python_bindings/manager/marius_wrap.cpp
  function init_marius (line 8) | void init_marius(py::module &m) {

FILE: src/cpp/python_bindings/manager/wrap.cpp
  function PYBIND11_MODULE (line 5) | PYBIND11_MODULE(_manager, m) {

FILE: src/cpp/python_bindings/nn/activation_wrap.cpp
  function init_activation (line 6) | void init_activation(py::module &m) { m.def("apply_activation", &apply_a...

FILE: src/cpp/python_bindings/nn/decoders/decoder_wrap.cpp
  class PyDecoder (line 8) | class PyDecoder : Decoder {
  function init_decoder (line 12) | void init_decoder(py::module &m) { py::class_<Decoder, PyDecoder, shared...

FILE: src/cpp/python_bindings/nn/decoders/edge/comparators_wrap.cpp
  class PyComparator (line 8) | class PyComparator : Comparator {
  function init_comparators (line 16) | void init_comparators(py::module &m) {

FILE: src/cpp/python_bindings/nn/decoders/edge/complex_wrap.cpp
  function init_complex (line 8) | void init_complex(py::module &m) {

FILE: src/cpp/python_bindings/nn/decoders/edge/distmult_wrap.cpp
  function init_distmult (line 5) | void init_distmult(py::module &m) {

FILE: src/cpp/python_bindings/nn/decoders/edge/edge_decoder_wrap.cpp
  function init_edge_decoder (line 8) | void init_edge_decoder(py::module &m) {

FILE: src/cpp/python_bindings/nn/decoders/edge/relation_operators_wrap.cpp
  class PyRelationOperator (line 9) | class PyRelationOperator : RelationOperator {
  function init_relation_operators (line 17) | void init_relation_operators(py::module &m) {

FILE: src/cpp/python_bindings/nn/decoders/edge/transe_wrap.cpp
  function init_transe (line 8) | void init_transe(py::module &m) {

FILE: src/cpp/python_bindings/nn/decoders/node/node_decoder_wrap.cpp
  function init_node_decoder (line 9) | void init_node_decoder(py::module &m) {

FILE: src/cpp/python_bindings/nn/decoders/node/noop_node_decoder.cpp
  function init_noop_node_decoder (line 9) | void init_noop_node_decoder(py::module &m) {

FILE: src/cpp/python_bindings/nn/encoders/encoder_wrap.cpp
  function init_encoder (line 5) | void init_encoder(py::module &m) {

FILE: src/cpp/python_bindings/nn/initialization_wrap.cpp
  function init_initialization (line 4) | void init_initialization(py::module &m) {

FILE: src/cpp/python_bindings/nn/layers/embedding/embedding_wrap.cpp
  function init_embedding_layer (line 4) | void init_embedding_layer(py::module &m) {

FILE: src/cpp/python_bindings/nn/layers/feature/feature_wrap.cpp
  function init_feature_layer (line 8) | void init_feature_layer(py::module &m) {

FILE: src/cpp/python_bindings/nn/layers/gnn/gat_layer_wrap.cpp
  function init_gat_layer (line 8) | void init_gat_layer(py::module &m) {

FILE: src/cpp/python_bindings/nn/layers/gnn/gcn_layer_wrap.cpp
  function init_gcn_layer (line 8) | void init_gcn_layer(py::module &m) {

FILE: src/cpp/python_bindings/nn/layers/gnn/gnn_layer_wrap.cpp
  class PyGNNLayer (line 8) | class PyGNNLayer : GNNLayer {
    method forward (line 11) | torch::Tensor forward(torch::Tensor inputs, DENSEGraph dense_graph, bo...
  function init_gnn_layer (line 16) | void init_gnn_layer(py::module &m) {

FILE: src/cpp/python_bindings/nn/layers/gnn/graph_sage_layer_wrap.cpp
  function init_graph_sage_layer (line 8) | void init_graph_sage_layer(py::module &m) {

FILE: src/cpp/python_bindings/nn/layers/gnn/layer_helpers_wrap.cpp
  function init_layer_helpers (line 5) | void init_layer_helpers(py::module &m) {

FILE: src/cpp/python_bindings/nn/layers/gnn/rgcn_layer_wrap.cpp
  function init_rgcn_layer (line 8) | void init_rgcn_layer(py::module &m) {

FILE: src/cpp/python_bindings/nn/layers/layer_wrap.cpp
  class PyLayer (line 10) | class PyLayer : Layer {
  function init_layer (line 15) | void init_layer(py::module &m) {

FILE: src/cpp/python_bindings/nn/layers/reduction/concat_wrap.cpp
  function init_concat_reduction_layer (line 8) | void init_concat_reduction_layer(py::module &m) {

FILE: src/cpp/python_bindings/nn/layers/reduction/linear_wrap.cpp
  function init_linear_reduction_layer (line 8) | void init_linear_reduction_layer(py::module &m) {

FILE: src/cpp/python_bindings/nn/layers/reduction/reduction_layer_wrap.cpp
  class PyReductionLayer (line 8) | class PyReductionLayer : ReductionLayer {
    method forward (line 11) | torch::Tensor forward(std::vector<torch::Tensor> inputs) override { PY...
  function init_reduction_layer (line 14) | void init_reduction_layer(py::module &m) {

FILE: src/cpp/python_bindings/nn/loss_wrap.cpp
  class PyLossFunction (line 6) | class PyLossFunction : LossFunction {
  function init_loss (line 14) | void init_loss(py::module &m) {

FILE: src/cpp/python_bindings/nn/model_wrap.cpp
  class PyModel (line 10) | class PyModel : Model {
  function init_model (line 15) | void init_model(py::module &m) {

FILE: src/cpp/python_bindings/nn/optim_wrap.cpp
  function PYBIND11_OVERRIDE_PURE_NAME (line 12) | PYBIND11_OVERRIDE_PURE_NAME(void, Optimizer, "step", step); }

FILE: src/cpp/python_bindings/nn/regularizer_wrap.cpp
  class PyRegularizer (line 6) | class PyRegularizer : Regularizer {
  function init_regularizer (line 14) | void init_regularizer(py::module &m) {

FILE: src/cpp/python_bindings/nn/wrap.cpp
  function PYBIND11_MODULE (line 57) | PYBIND11_MODULE(_nn, m) {

FILE: src/cpp/python_bindings/pipeline/evaluator_wrap.cpp
  class PyEvaluator (line 7) | class PyEvaluator : Evaluator {
    method evaluate (line 10) | void evaluate(bool validation) override { PYBIND11_OVERRIDE_PURE(void,...
  function init_evaluator (line 13) | void init_evaluator(py::module &m) {

FILE: src/cpp/python_bindings/pipeline/graph_encoder_wrap.cpp
  class PyGraphEncoder (line 7) | class PyGraphEncoder : GraphEncoder {
    method encode (line 10) | void encode(bool separate_layers) override { PYBIND11_OVERRIDE_PURE(vo...
  function init_graph_encoder (line 13) | void init_graph_encoder(py::module &m) {

FILE: src/cpp/python_bindings/pipeline/trainer_wrap.cpp
  class PyTrainer (line 7) | class PyTrainer : Trainer {
    method train (line 10) | void train(int num_epochs = 1) override { PYBIND11_OVERRIDE_PURE(void,...
  function init_trainer (line 13) | void init_trainer(py::module &m) {

FILE: src/cpp/python_bindings/pipeline/wrap.cpp
  function PYBIND11_MODULE (line 8) | PYBIND11_MODULE(_pipeline, m) {

FILE: src/cpp/python_bindings/reporting/reporting_wrap.cpp
  function PYBIND11_OVERRIDE_PURE_NAME (line 7) | PYBIND11_OVERRIDE_PURE_NAME(void, Reporter, "report", report); }

FILE: src/cpp/python_bindings/reporting/wrap.cpp
  function PYBIND11_MODULE (line 6) | PYBIND11_MODULE(_report, m) {

FILE: src/cpp/python_bindings/storage/graph_storage_wrap.cpp
  function init_graph_storage (line 4) | void init_graph_storage(py::module &m) {

FILE: src/cpp/python_bindings/storage/io_wrap.cpp
  function init_io (line 7) | void init_io(py::module &m) {

FILE: src/cpp/python_bindings/storage/storage_wrap.cpp
  class PyStorage (line 7) | class PyStorage : Storage {
    method indexRead (line 11) | torch::Tensor indexRead(torch::Tensor indices) override { PYBIND11_OVE...
    method indexAdd (line 13) | void indexAdd(torch::Tensor indices, torch::Tensor values) override { ...
    method range (line 15) | torch::Tensor range(int64_t offset, int64_t n) override { PYBIND11_OVE...
    method indexPut (line 17) | void indexPut(torch::Tensor indices, torch::Tensor values) override { ...
    method rangePut (line 19) | void rangePut(int64_t offset, int64_t n, torch::Tensor values) overrid...
    method load (line 21) | void load() override { PYBIND11_OVERRIDE_PURE(void, Storage, load); }
    method write (line 23) | void write() override { PYBIND11_OVERRIDE_PURE(void, Storage, write); }
    method unload (line 25) | void unload(bool write) override { PYBIND11_OVERRIDE_PURE(void, Storag...
    method shuffle (line 27) | void shuffle() override { PYBIND11_OVERRIDE_PURE(void, Storage, shuffl...
    method sort (line 29) | void sort(bool src) override { PYBIND11_OVERRIDE_PURE(void, Storage, s...
  function init_storage (line 32) | void init_storage(py::module &m) {

FILE: src/cpp/python_bindings/storage/wrap.cpp
  function PYBIND11_MODULE (line 8) | PYBIND11_MODULE(_storage, m) {

FILE: src/cpp/src/common/util.cpp
  function assert_no_nans (line 14) | void assert_no_nans(torch::Tensor values) {
  function assert_no_neg (line 20) | void assert_no_neg(torch::Tensor values) {
  function assert_in_range (line 26) | void assert_in_range(torch::Tensor values, int64_t start, int64_t end) {
  function process_mem_usage (line 32) | void process_mem_usage() {
  function pread_wrapper (line 89) | int64_t pread_wrapper(int fd, void *buf, int64_t count, int64_t offset) {
  function pwrite_wrapper (line 109) | int64_t pwrite_wrapper(int fd, const void *buf, int64_t count, int64_t o...
  function transfer_tensor (line 129) | torch::Tensor transfer_tensor(torch::Tensor input, torch::Device device,...
  function get_dtype_size_wrapper (line 147) | int64_t get_dtype_size_wrapper(torch::Dtype dtype_) {
  function get_directory (line 168) | std::string get_directory(std::string filename) {
  function map_tensors (line 180) | std::tuple<torch::Tensor, std::vector<torch::Tensor>> map_tensors(std::v...
  function apply_tensor_map (line 209) | std::vector<torch::Tensor> apply_tensor_map(torch::Tensor map, std::vect...

FILE: src/cpp/src/configuration/config.cpp
  function check_missing (line 10) | bool check_missing(pyobj python_object) {
  function cast_helper (line 25) | T cast_helper(pyobj python_object) {
  function initNeighborSamplingConfig (line 36) | shared_ptr<NeighborSamplingConfig> initNeighborSamplingConfig(pyobj pyth...
  function initInitConfig (line 61) | shared_ptr<InitConfig> initInitConfig(pyobj python_object) {
  function initOptimizerConfig (line 89) | shared_ptr<OptimizerConfig> initOptimizerConfig(pyobj python_config) {
  function initDatasetConfig (line 129) | shared_ptr<DatasetConfig> initDatasetConfig(pyobj python_config) {
  function initLayerConfig (line 146) | shared_ptr<LayerConfig> initLayerConfig(pyobj python_config) {
  function initEncoderConfig (line 231) | shared_ptr<EncoderConfig> initEncoderConfig(pyobj python_config) {
  function initDecoderConfig (line 277) | shared_ptr<DecoderConfig> initDecoderConfig(pyobj python_config) {
  function initLossConfig (line 301) | shared_ptr<LossConfig> initLossConfig(pyobj python_config) {
  function initStorageBackendConfig (line 326) | shared_ptr<StorageBackendConfig> initStorageBackendConfig(pyobj python_c...
  function initNegativeSamplingConfig (line 358) | shared_ptr<NegativeSamplingConfig> initNegativeSamplingConfig(pyobj pyth...
  function initPipelineConfig (line 381) | shared_ptr<PipelineConfig> initPipelineConfig(pyobj python_config) {
  function initCheckpointConfig (line 403) | shared_ptr<CheckpointConfig> initCheckpointConfig(pyobj python_config) {
  function initModelConfig (line 416) | shared_ptr<ModelConfig> initModelConfig(pyobj python_config) {
  function initStorageConfig (line 430) | shared_ptr<StorageConfig> initStorageConfig(pyobj python_config) {
  function initTrainingConfig (line 464) | shared_ptr<TrainingConfig> initTrainingConfig(pyobj python_config) {
  function initEvaluationConfig (line 480) | shared_ptr<EvaluationConfig> initEvaluationConfig(pyobj python_config) {
  function initMariusConfig (line 491) | shared_ptr<MariusConfig> initMariusConfig(pyobj python_config) {
  function loadConfig (line 502) | shared_ptr<MariusConfig> loadConfig(string config_path, bool save) {

FILE: src/cpp/src/configuration/options.cpp
  function getLearningTask (line 7) | LearningTask getLearningTask(std::string string_val) {
  function getInitDistribution (line 19) | InitDistribution getInitDistribution(std::string string_val) {
  function getLossFunctionType (line 41) | LossFunctionType getLossFunctionType(std::string string_val) {
  function getLossReduction (line 63) | LossReduction getLossReduction(std::string string_val) {
  function getActivationFunction (line 75) | ActivationFunction getActivationFunction(std::string string_val) {
  function getOptimizerType (line 89) | OptimizerType getOptimizerType(std::string string_val) {
  function getReductionLayerType (line 105) | ReductionLayerType getReductionLayerType(std::string string_val) {
  function getDenseLayerType (line 119) | DenseLayerType getDenseLayerType(std::string string_val) {
  function getLayerType (line 133) | LayerType getLayerType(std::string string_val) {
  function getGNNLayerType (line 153) | GNNLayerType getGNNLayerType(std::string string_val) {
  function getGraphSageAggregator (line 171) | GraphSageAggregator getGraphSageAggregator(std::string string_val) {
  function getDecoderType (line 183) | DecoderType getDecoderType(std::string string_val) {
  function getEdgeDecoderMethod (line 199) | EdgeDecoderMethod getEdgeDecoderMethod(std::string string_val) {
  function getStorageBackend (line 219) | StorageBackend getStorageBackend(std::string string_val) {
  function getEdgeBucketOrderingEnum (line 235) | EdgeBucketOrdering getEdgeBucketOrderingEnum(std::string string_val) {
  function getNodePartitionOrderingEnum (line 253) | NodePartitionOrdering getNodePartitionOrderingEnum(std::string string_va...
  function getNeighborSamplingLayer (line 267) | NeighborSamplingLayer getNeighborSamplingLayer(std::string string_val) {
  function getLocalFilterMode (line 281) | LocalFilterMode getLocalFilterMode(std::string string_val) {
  function getDtype (line 293) | torch::Dtype getDtype(std::string string_val) {
  function getLogLevel (line 309) | spdlog::level::level_enum getLogLevel(std::string string_val) {

FILE: src/cpp/src/configuration/util.cpp
  function devices_from_config (line 7) | std::vector<torch::Device> devices_from_config(std::shared_ptr<StorageCo...

FILE: src/cpp/src/data/graph.cpp
  function getNodeIDs (line 56) | Indices MariusGraph::getNodeIDs() { return node_ids_; }
  function getEdges (line 58) | Indices MariusGraph::getEdges(bool incoming) {
  function getRelationIDs (line 66) | Indices MariusGraph::getRelationIDs(bool incoming) {
  function getNeighborOffsets (line 78) | Indices MariusGraph::getNeighborOffsets(bool incoming) {
  function getNumNeighbors (line 86) | Indices MariusGraph::getNumNeighbors(bool incoming) {
  function getNeighborIDs (line 322) | Indices DENSEGraph::getNeighborIDs(bool incoming, bool global_ids) {

FILE: src/cpp/src/data/ordering.cpp
  function getEdgeBucketOrdering (line 12) | std::tuple<vector<torch::Tensor>, vector<torch::Tensor>> getEdgeBucketOr...
  function getNodePartitionOrdering (line 36) | std::tuple<vector<torch::Tensor>, vector<torch::Tensor>> getNodePartitio...
  function convertEdgeBucketOrderToTensors (line 55) | std::tuple<vector<torch::Tensor>, vector<torch::Tensor>> convertEdgeBuck...
  function getBetaOrderingHelper (line 78) | vector<vector<int>> getBetaOrderingHelper(int num_partitions, int buffer...
  function greedyAssignEdgeBucketsToBuffers (line 128) | vector<vector<std::pair<int, int>>> greedyAssignEdgeBucketsToBuffers(vec...
  function randomlyAssignEdgeBucketsToBuffers (line 150) | vector<vector<std::pair<int, int>>> randomlyAssignEdgeBucketsToBuffers(v...
  function getTwoLevelBetaOrdering (line 241) | std::tuple<vector<torch::Tensor>, vector<torch::Tensor>> getTwoLevelBeta...
  function getDispersedNodePartitionOrdering (line 294) | std::tuple<vector<torch::Tensor>, vector<torch::Tensor>> getDispersedNod...
  function getSequentialNodePartitionOrdering (line 389) | std::tuple<vector<torch::Tensor>, vector<torch::Tensor>> getSequentialNo...
  function getCustomNodePartitionOrdering (line 412) | std::tuple<vector<torch::Tensor>, vector<torch::Tensor>> getCustomNodePa...
  function getCustomEdgeBucketOrdering (line 418) | std::tuple<vector<torch::Tensor>, vector<torch::Tensor>> getCustomEdgeBu...

FILE: src/cpp/src/data/samplers/edge.cpp
  function getEdges (line 12) | EdgeList RandomEdgeSampler::getEdges(shared_ptr<Batch> batch) {

FILE: src/cpp/src/data/samplers/negative.cpp
  function batch_sample (line 7) | std::tuple<torch::Tensor, torch::Tensor> batch_sample(torch::Tensor edge...
  function deg_negative_local_filter (line 21) | torch::Tensor deg_negative_local_filter(torch::Tensor deg_sample_indices...
  function compute_filter_corruption (line 41) | torch::Tensor compute_filter_corruption(shared_ptr<MariusGraph> graph, t...
  function compute_filter_corruption_cpu (line 50) | torch::Tensor compute_filter_corruption_cpu(shared_ptr<MariusGraph> grap...
  function compute_filter_corruption_gpu (line 197) | torch::Tensor compute_filter_corruption_gpu(shared_ptr<MariusGraph> grap...
  function apply_score_filter (line 306) | torch::Tensor apply_score_filter(torch::Tensor scores, torch::Tensor fil...

FILE: src/cpp/src/data/samplers/neighbor.cpp
  function sample_all_gpu (line 9) | std::tuple<torch::Tensor, torch::Tensor> sample_all_gpu(torch::Tensor ed...
  function sample_all_cpu (line 19) | std::tuple<torch::Tensor, torch::Tensor> sample_all_cpu(torch::Tensor ed...
  function sample_uniform_gpu (line 80) | std::tuple<torch::Tensor, torch::Tensor> sample_uniform_gpu(torch::Tenso...
  function sample_uniform_cpu (line 104) | std::tuple<torch::Tensor, torch::Tensor> sample_uniform_cpu(torch::Tenso...
  function sample_dropout_gpu (line 236) | std::tuple<torch::Tensor, torch::Tensor> sample_dropout_gpu(torch::Tenso...
  function sample_dropout_cpu (line 255) | std::tuple<torch::Tensor, torch::Tensor> sample_dropout_cpu(torch::Tenso...
  function getNeighbors (line 402) | DENSEGraph LayeredNeighborSampler::getNeighbors(torch::Tensor node_ids, ...

FILE: src/cpp/src/marius.cpp
  function encode_and_export (line 13) | void encode_and_export(shared_ptr<DataLoader> dataloader, shared_ptr<Mod...
  function marius_init (line 38) | std::tuple<shared_ptr<Model>, shared_ptr<GraphModelStorage>, shared_ptr<...
  function marius_train (line 105) | void marius_train(shared_ptr<MariusConfig> marius_config) {
  function marius_eval (line 165) | void marius_eval(shared_ptr<MariusConfig> marius_config) {
  function marius (line 187) | void marius(int argc, char *argv[]) {
  function main (line 207) | int main(int argc, char *argv[]) { marius(argc, argv); }

FILE: src/cpp/src/nn/activation.cpp
  function apply_activation (line 7) | torch::Tensor apply_activation(ActivationFunction activation_function, t...

FILE: src/cpp/src/nn/decoders/edge/comparators.cpp
  function pad_and_reshape (line 7) | torch::Tensor pad_and_reshape(torch::Tensor input, int num_chunks) {

FILE: src/cpp/src/nn/decoders/edge/decoder_methods.cpp
  function only_pos_forward (line 7) | std::tuple<torch::Tensor, torch::Tensor> only_pos_forward(shared_ptr<Edg...
  function neg_and_pos_forward (line 44) | std::tuple<torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor> n...
  function node_corrupt_forward (line 57) | std::tuple<torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor> n...
  function rel_corrupt_forward (line 116) | std::tuple<torch::Tensor, torch::Tensor, torch::Tensor, torch::Tensor> r...

FILE: src/cpp/src/nn/encoders/encoder.cpp
  function device_ (line 29) | device_(torch::kCPU) {

FILE: src/cpp/src/nn/initialization.cpp
  function compute_fans (line 7) | std::tuple<int64_t, int64_t> compute_fans(std::vector<int64_t> shape) {
  function glorot_uniform (line 26) | torch::Tensor glorot_uniform(std::vector<int64_t> shape, std::tuple<int6...
  function glorot_normal (line 43) | torch::Tensor glorot_normal(std::vector<int64_t> shape, std::tuple<int64...
  function uniform_init (line 58) | torch::Tensor uniform_init(float scale_factor, std::vector<int64_t> shap...
  function normal_init (line 61) | torch::Tensor normal_init(float mean, float std, std::vector<int64_t> sh...
  function constant_init (line 65) | torch::Tensor constant_init(float constant, std::vector<int64_t> shape, ...
  function initialize_tensor (line 67) | torch::Tensor initialize_tensor(shared_ptr<InitConfig> init_config, std:...
  function initialize_subtensor (line 100) | torch::Tensor initialize_subtensor(shared_ptr<InitConfig> init_config, s...

FILE: src/cpp/src/nn/layers/gnn/layer_helpers.cpp
  function segment_ids_from_offsets (line 11) | torch::Tensor segment_ids_from_offsets(torch::Tensor segment_offsets, in...
  function segmented_sum (line 19) | torch::Tensor segmented_sum(torch::Tensor tensor, torch::Tensor segment_...
  function segmented_sum_with_offsets (line 27) | torch::Tensor segmented_sum_with_offsets(torch::Tensor tensor, torch::Te...
  function segmented_max_with_offsets (line 32) | torch::Tensor segmented_max_with_offsets(torch::Tensor tensor, torch::Te...
  function attention_softmax (line 44) | std::tuple<torch::Tensor, torch::Tensor> attention_softmax(torch::Tensor...

FILE: src/cpp/src/nn/loss.cpp
  function check_score_shapes (line 7) | void check_score_shapes(torch::Tensor pos_scores, torch::Tensor neg_scor...
  function to_one_hot (line 31) | torch::Tensor to_one_hot(torch::Tensor labels, int num_classes) {
  function scores_to_labels (line 37) | std::tuple<torch::Tensor, torch::Tensor> scores_to_labels(torch::Tensor ...
  function getLossFunction (line 177) | std::shared_ptr<LossFunction> getLossFunction(shared_ptr<LossConfig> con...

FILE: src/cpp/src/nn/model.cpp
  function initModelFromConfig (line 381) | shared_ptr<Model> initModelFromConfig(shared_ptr<ModelConfig> model_conf...

FILE: src/cpp/src/storage/buffer.cpp
  function getRandomIds (line 457) | Indices PartitionBuffer::getRandomIds(int64_t size) { return torch::rand...

FILE: src/cpp/src/storage/checkpointer.cpp
  function loadMetadata (line 75) | CheckpointMeta Checkpointer::loadMetadata(string directory) {

FILE: src/cpp/src/storage/graph_storage.cpp
  function getEdges (line 163) | EdgeList GraphModelStorage::getEdges(Indices indices) {
  function getEdgesRange (line 171) | EdgeList GraphModelStorage::getEdgesRange(int64_t start, int64_t size) {
  function getRandomNodeIds (line 181) | Indices GraphModelStorage::getRandomNodeIds(int64_t size) {
  function getNodeIdsRange (line 198) | Indices GraphModelStorage::getNodeIdsRange(int64_t start, int64_t size) {
  function getNodeEmbeddingState (line 297) | OptimizerState GraphModelStorage::getNodeEmbeddingState(Indices indices) {
  function getNodeEmbeddingStateRange (line 305) | OptimizerState GraphModelStorage::getNodeEmbeddingStateRange(int64_t sta...
  function merge_sorted_edge_buckets (line 737) | EdgeList GraphModelStorage::merge_sorted_edge_buckets(EdgeList edges, to...

FILE: src/cpp/src/storage/io.cpp
  function initializeEdges (line 12) | std::tuple<shared_ptr<Storage>, shared_ptr<Storage>, shared_ptr<Storage>...
  function initializeNodeEmbeddings (line 154) | std::tuple<shared_ptr<Storage>, shared_ptr<Storage>> initializeNodeEmbed...
  function initializeNodeIds (line 226) | std::tuple<shared_ptr<Storage>, shared_ptr<Storage>, shared_ptr<Storage>...
  function initializeRelationFeatures (line 294) | shared_ptr<Storage> initializeRelationFeatures(shared_ptr<Model> model, ...
  function initializeNodeFeatures (line 311) | shared_ptr<Storage> initializeNodeFeatures(shared_ptr<Model> model, shar...
  function initializeNodeLabels (line 347) | shared_ptr<Storage> initializeNodeLabels(shared_ptr<Model> model, shared...
  function initializeStorageLinkPrediction (line 378) | shared_ptr<GraphModelStorage> initializeStorageLinkPrediction(shared_ptr...
  function initializeStorageNodeClassification (line 402) | shared_ptr<GraphModelStorage> initializeStorageNodeClassification(shared...
  function initializeStorage (line 433) | shared_ptr<GraphModelStorage> initializeStorage(shared_ptr<Model> model,...

FILE: src/cpp/src/storage/storage.cpp
  function renameFile (line 19) | void renameFile(string old_filename, string new_filename) {
  function copyFile (line 27) | void copyFile(string src_file, string dst_file) {
  function fileExists (line 40) | bool fileExists(string filename) {
  function createDir (line 49) | void createDir(string path, bool exist_ok) {

FILE: src/cuda/third_party/pytorch_scatter/segment_max.cpp
  function list2vec (line 4) | inline std::vector<int64_t> list2vec(const c10::List<int64_t> list) {
  class SegmentMaxCSR (line 16) | class SegmentMaxCSR : public torch::autograd::Function<SegmentMaxCSR> {
    method forward (line 18) | static variable_list forward(AutogradContext *ctx, Variable src,
    method backward (line 32) | static variable_list backward(AutogradContext *ctx, variable_list grad...
  function segment_max_csr (line 47) | std::tuple<torch::Tensor, torch::Tensor>

FILE: src/python/console_scripts/marius_eval.py
  function main (line 7) | def main():

FILE: src/python/console_scripts/marius_train.py
  function main (line 7) | def main():

FILE: src/python/distribution/generate_stubs.py
  function generate_stubs (line 6) | def generate_stubs(output_dir, module_name):
  function gen_all_stubs (line 23) | def gen_all_stubs(output_dir):

FILE: src/python/distribution/marius_env_info.py
  class MyDumper (line 10) | class MyDumper(yaml.Dumper):
    method increase_indent (line 11) | def increase_indent(self, flow=False, indentless=False):
  function get_os_info (line 15) | def get_os_info():
  function get_cpu_info (line 20) | def get_cpu_info():
  function get_gpu_info (line 33) | def get_gpu_info():
  function get_python_info (line 49) | def get_python_info():
  function get_cuda_info (line 75) | def get_cuda_info():
  function get_openmp_info (line 87) | def get_openmp_info():
  function get_pytorch_info (line 99) | def get_pytorch_info():
  function get_marius_info (line 112) | def get_marius_info():
  function get_pybind_info (line 132) | def get_pybind_info():
  function get_cmake_info (line 146) | def get_cmake_info():
  function main (line 158) | def main():

FILE: src/python/tools/configuration/constants.py
  class PathConstants (line 5) | class PathConstants:

FILE: src/python/tools/configuration/datatypes.py
  class InitOptions (line 8) | class InitOptions:
  class UniformInitOptions (line 13) | class UniformInitOptions(InitOptions):
    method __post_init__ (line 16) | def __post_init__(self):
  class NormalInitOptions (line 22) | class NormalInitOptions(InitOptions):
    method __post_init__ (line 26) | def __post_init__(self):
  class ConstantInitOptions (line 32) | class ConstantInitOptions(InitOptions):
  class LossOptions (line 37) | class LossOptions:
  class RankingLossOptions (line 42) | class RankingLossOptions(LossOptions):
  class OptimizerOptions (line 47) | class OptimizerOptions:
    method __post_init__ (line 50) | def __post_init__(self):
  class AdagradOptions (line 56) | class AdagradOptions(OptimizerOptions):
    method __post_init__ (line 63) | def __post_init__(self):
  class AdamOptions (line 74) | class AdamOptions(OptimizerOptions):
    method __post_init__ (line 82) | def __post_init__(self):
  class LayerOptions (line 93) | class LayerOptions:
  class EmbeddingLayerOptions (line 98) | class EmbeddingLayerOptions(LayerOptions):
  class FeatureLayerOptions (line 103) | class FeatureLayerOptions(LayerOptions):
  class DenseLayerOptions (line 108) | class DenseLayerOptions(LayerOptions):
  class ReductionLayerOptions (line 113) | class ReductionLayerOptions(LayerOptions):
  class GNNLayerOptions (line 118) | class GNNLayerOptions(LayerOptions):
  class GraphSageLayerOptions (line 124) | class GraphSageLayerOptions(GNNLayerOptions):
  class GATLayerOptions (line 130) | class GATLayerOptions(GNNLayerOptions):
    method __post_init__ (line 138) | def __post_init__(self):
  class DecoderOptions (line 144) | class DecoderOptions:
  class EdgeDecoderOptions (line 149) | class EdgeDecoderOptions(DecoderOptions):
  class StorageOptions (line 156) | class StorageOptions:
  class PartitionBufferOptions (line 161) | class PartitionBufferOptions(StorageOptions):
    method __post_init__ (line 171) | def __post_init__(self):
  class NeighborSamplingOptions (line 187) | class NeighborSamplingOptions:
  class UniformSamplingOptions (line 192) | class UniformSamplingOptions(NeighborSamplingOptions):
    method __post_init__ (line 195) | def __post_init__(self):
  class DropoutSamplingOptions (line 201) | class DropoutSamplingOptions(NeighborSamplingOptions):
    method __post_init__ (line 204) | def __post_init__(self):

FILE: src/python/tools/configuration/marius_config.py
  function get_model_dir_path (line 47) | def get_model_dir_path(dataset_dir):
  class NeighborSamplingConfig (line 60) | class NeighborSamplingConfig:
    method merge (line 65) | def merge(self, input_config: DictConfig):
  class OptimizerConfig (line 95) | class OptimizerConfig:
    method merge (line 99) | def merge(self, input_config: DictConfig):
  class InitConfig (line 129) | class InitConfig:
    method merge (line 133) | def merge(self, input_config: DictConfig):
  class LossConfig (line 162) | class LossConfig:
    method merge (line 166) | def merge(self, input_config: DictConfig):
  class LayerConfig (line 190) | class LayerConfig:
    method merge (line 201) | def merge(self, input_config: DictConfig):
  class EncoderConfig (line 258) | class EncoderConfig:
    method merge (line 266) | def merge(self, input_config: DictConfig):
  class DecoderConfig (line 313) | class DecoderConfig:
    method merge (line 318) | def merge(self, input_config: DictConfig):
  class ModelConfig (line 345) | class ModelConfig:
    method __post_init__ (line 354) | def __post_init__(self):
    method merge (line 358) | def merge(self, input_config: DictConfig):
  class StorageBackendConfig (line 396) | class StorageBackendConfig:
    method merge (line 400) | def merge(self, input_config: DictConfig):
  class DatasetConfig (line 424) | class DatasetConfig:
    method __post_init__ (line 437) | def __post_init__(self):
    method populate_dataset_stats (line 470) | def populate_dataset_stats(self):
    method merge (line 495) | def merge(self, input_config: DictConfig):
  class StorageConfig (line 514) | class StorageConfig:
    method __post_init__ (line 534) | def __post_init__(self):
    method merge (line 546) | def merge(self, input_config: DictConfig):
  class NegativeSamplingConfig (line 607) | class NegativeSamplingConfig:
    method __post_init__ (line 614) | def __post_init__(self):
    method merge (line 623) | def merge(self, input_config: DictConfig):
  class CheckpointConfig (line 649) | class CheckpointConfig:
    method merge (line 654) | def merge(self, input_config: DictConfig):
  class PipelineConfig (line 672) | class PipelineConfig:
    method __post_init__ (line 687) | def __post_init__(self):
    method merge (line 709) | def merge(self, input_config: DictConfig):
  class TrainingConfig (line 725) | class TrainingConfig:
    method __post_init__ (line 737) | def __post_init__(self):
    method merge (line 747) | def merge(self, input_config: DictConfig):
  class EvaluationConfig (line 777) | class EvaluationConfig:
    method __post_init__ (line 784) | def __post_init__(self):
    method merge (line 788) | def merge(self, input_config: DictConfig):
  class MariusConfig (line 813) | class MariusConfig:
    method __init__ (line 820) | def __init__(self):
    method __post_init__ (line 826) | def __post_init__(self):
  function type_safe_merge (line 836) | def type_safe_merge(base_config: MariusConfig, input_config: DictConfig):
  function initialize_model_dir (line 861) | def initialize_model_dir(output_config):
  function infer_model_dir (line 875) | def infer_model_dir(output_config):
  function load_config (line 899) | def load_config(input_config_path, save=False):

FILE: src/python/tools/configuration/validation.py
  function get_lines_in_file (line 10) | def get_lines_in_file(filepath):
  function validate_dataset_config (line 14) | def validate_dataset_config(output_config):
  function validate_storage_config (line 134) | def validate_storage_config(output_config):
  function check_encoder_layer_dimensions (line 186) | def check_encoder_layer_dimensions(output_config):
  function check_gnn_layers_alignment (line 257) | def check_gnn_layers_alignment(output_config):
  function retrieve_memory_info (line 276) | def retrieve_memory_info():
  function get_storage_overheads (line 282) | def get_storage_overheads(output_config):
  function check_full_graph_evaluation (line 305) | def check_full_graph_evaluation(output_config):

FILE: src/python/tools/db2graph/marius_db2graph.py
  function set_args (line 21) | def set_args():
  function config_parser_fn (line 48) | def config_parser_fn(config_name):
  function connect_to_db (line 137) | def connect_to_db(db_server, db_name, db_user, db_password, db_host):
  function validation_check_edge_entity_entity_queries (line 173) | def validation_check_edge_entity_entity_queries(edges_queries_list):
  function clean_token (line 231) | def clean_token(token):
  function get_init_fetch_size (line 243) | def get_init_fetch_size():
  function get_fetch_size (line 256) | def get_fetch_size(fetch_size, limit_fetch_size, mem_copy_used):
  function get_cursor (line 278) | def get_cursor(cnx, db_server, cursor_name):
  function post_processing (line 295) | def post_processing(output_dir, cnx, edges_queries_list, edge_rel_list, ...
  function main (line 380) | def main():

FILE: src/python/tools/marius_config_generator.py
  function output_config (line 12) | def output_config(config_dict, output_dir):
  function read_template (line 50) | def read_template(file):
  function set_up_files (line 70) | def set_up_files(output_directory):
  function update_dataset_stats (line 80) | def update_dataset_stats(dataset, arg_dict, config_dict):
  function update_stats (line 92) | def update_stats(stats, arg_dict, config_dict, opt="stats"):
  function update_data_path (line 112) | def update_data_path(dir, arg_dict):
  function set_args (line 161) | def set_args():
  function parse_args (line 272) | def parse_args(args, config_dict):
  function main (line 302) | def main():

FILE: src/python/tools/marius_postprocess.py
  function set_args (line 10) | def set_args():
  function main (line 47) | def main():

FILE: src/python/tools/marius_predict.py
  function str2bool (line 25) | def str2bool(v):
  function set_args (line 36) | def set_args():
  function get_metrics (line 228) | def get_metrics(config, args):
  function get_dtype (line 274) | def get_dtype(storage_backend, args):
  function get_columns (line 294) | def get_columns(config, args):
  function infer_input_shape (line 317) | def infer_input_shape(config, args):
  function get_nbrs_config (line 362) | def get_nbrs_config(config, args):
  function get_neg_config (line 390) | def get_neg_config(config, args):
  function preprocess_input_file (line 401) | def preprocess_input_file(config, args):
  function get_input_file_storage (line 490) | def get_input_file_storage(config, args):
  function run_predict (line 520) | def run_predict(args):
  function main (line 582) | def main():
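
  Note: the `str2bool` helper listed above is a common argparse idiom (`type=bool` treats any non-empty string as true, so CLI tools supply their own parser). A minimal self-contained sketch of that pattern, not the repository's exact implementation:

```python
import argparse

def str2bool(v):
    """Parse common boolean spellings from the command line."""
    if isinstance(v, bool):
        return v
    if v.lower() in ("yes", "true", "t", "y", "1"):
        return True
    if v.lower() in ("no", "false", "f", "n", "0"):
        return False
    raise argparse.ArgumentTypeError(f"boolean value expected, got {v!r}")
```

  Used as `parser.add_argument("--flag", type=str2bool)`, this accepts `--flag yes` or `--flag 0` and rejects anything else with a clear error.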

FILE: src/python/tools/marius_preprocess.py
  function set_args (line 24) | def set_args():
  function main (line 133) | def main():

FILE: src/python/tools/postprocess/in_memory_exporter.py
  function get_ordered_raw_ids (line 15) | def get_ordered_raw_ids(mapping_path):
  function save_df (line 28) | def save_df(output_df: pd.DataFrame, output_dir: Path, name: str, fmt: s...
  class InMemoryExporter (line 45) | class InMemoryExporter(object):
    method __init__ (line 46) | def __init__(self, model_dir: Path, fmt: str = "CSV", delim: str = ","...
    method export_node_embeddings (line 61) | def export_node_embeddings(self, output_dir: Path):
    method export_rel_embeddings (line 103) | def export_rel_embeddings(self, output_dir: Path):
    method export_model (line 143) | def export_model(self, output_dir: Path):
    method copy_model (line 161) | def copy_model(self, output_dir: Path):
    method export (line 171) | def export(self, output_dir: Path):

FILE: src/python/tools/prediction/link_prediction.py
  function infer_lp (line 4) | def infer_lp(

FILE: src/python/tools/prediction/node_classification.py
  function infer_nc (line 4) | def infer_nc(

FILE: src/python/tools/preprocess/converters/partitioners/partitioner.py
  class Partitioner (line 4) | class Partitioner(ABC):
    method __init__ (line 5) | def __init__(self):

FILE: src/python/tools/preprocess/converters/partitioners/spark_partitioner.py
  function get_partition_size (line 11) | def get_partition_size(nodes, num_partitions):
  function get_edge_buckets (line 16) | def get_edge_buckets(edges_df: DataFrame, partition_size):
  class SparkPartitioner (line 23) | class SparkPartitioner(Partitioner):
    method __init__ (line 24) | def __init__(self, spark, partitioned_evaluation):
    method partition_edges (line 30) | def partition_edges(self, train_edges_df, valid_edges_df, test_edges_d...
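
  Note: the helper names above (`get_partition_size`, `get_edge_buckets`) suggest the standard edge-bucket scheme: nodes are split into fixed-size contiguous ranges, and each edge falls into the bucket indexed by its endpoints' partitions. A rough sketch of that arithmetic (assumed from the names, not taken from the source):

```python
import math

def get_partition_size(num_nodes, num_partitions):
    # Ceiling division so every node falls in exactly one partition.
    return math.ceil(num_nodes / num_partitions)

def edge_bucket(src, dst, partition_size):
    # An edge belongs to the (src_partition, dst_partition) bucket,
    # giving num_partitions**2 buckets in total.
    return (src // partition_size, dst // partition_size)
```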

FILE: src/python/tools/preprocess/converters/partitioners/torch_partitioner.py
  function dataframe_to_tensor (line 8) | def dataframe_to_tensor(df):
  function partition_edges (line 12) | def partition_edges(edges, num_nodes, num_partitions, edge_weights=None):
  class TorchPartitioner (line 49) | class TorchPartitioner(Partitioner):
    method __init__ (line 50) | def __init__(self, partitioned_evaluation):
    method partition_edges (line 55) | def partition_edges(

FILE: src/python/tools/preprocess/converters/readers/pandas_readers.py
  class PandasDelimitedFileReader (line 9) | class PandasDelimitedFileReader(Reader):
    method __init__ (line 10) | def __init__(
    method read_single_file (line 44) | def read_single_file(self, file_path):
    method read (line 77) | def read(self):

FILE: src/python/tools/preprocess/converters/readers/reader.py
  class Reader (line 4) | class Reader(ABC):
    method __init__ (line 5) | def __init__(self):
    method read (line 9) | def read(self):

FILE: src/python/tools/preprocess/converters/readers/spark_readers.py
  class SparkDelimitedFileReader (line 9) | class SparkDelimitedFileReader(Reader):
    method __init__ (line 10) | def __init__(
    method read (line 67) | def read(self):

FILE: src/python/tools/preprocess/converters/spark_converter.py
  function remap_columns (line 29) | def remap_columns(df, has_rels):
  function get_nodes_df (line 36) | def get_nodes_df(edges_df):
  function get_relations_df (line 49) | def get_relations_df(edges_df):
  function assign_ids (line 62) | def assign_ids(df, col_id):
  function remap_edges (line 68) | def remap_edges(edges_df, nodes, rels):
  function write_df_to_csv (line 94) | def write_df_to_csv(df, output_filename):
  class SparkEdgeListConverter (line 101) | class SparkEdgeListConverter(object):
    method __init__ (line 102) | def __init__(
    method convert (line 169) | def convert(self):

FILE: src/python/tools/preprocess/converters/torch_constants.py
  class TorchConverterColumnKeys (line 5) | class TorchConverterColumnKeys(Enum):
    method __hash__ (line 11) | def __hash__(self) -> int:

FILE: src/python/tools/preprocess/converters/torch_converter.py
  function dataframe_to_tensor (line 19) | def dataframe_to_tensor(df):
  function apply_mapping_edges (line 23) | def apply_mapping_edges(input_edges, node_mapping_df, rel_mapping_df=None):
  function apply_mapping1d (line 69) | def apply_mapping1d(input_ids, mapping_df):
  function extract_tensors_from_df (line 80) | def extract_tensors_from_df(df, column_mappings):
  function map_edge_list_dfs (line 97) | def map_edge_list_dfs(
  function extract_tensor_from_tens (line 174) | def extract_tensor_from_tens(edges_tensor, column_mappings):
  function map_edge_lists (line 191) | def map_edge_lists(
  function split_edges (line 374) | def split_edges(edges, edges_weights, splits):
  class TorchEdgeListConverter (line 428) | class TorchEdgeListConverter(object):
    method __init__ (line 429) | def __init__(
    method convert (line 626) | def convert(self):
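
  Note: `split_edges(edges, edges_weights, splits)` above presumably cuts the edge list into train/valid/test slices by split fractions. A simplified, dependency-free sketch of that idea (hypothetical signature, ignoring edge weights):

```python
import random

def split_edges(edges, splits=(0.8, 0.1, 0.1), seed=0):
    """Shuffle an edge list and cut it into train/valid/test slices.

    `splits` are fractions summing to 1; the last slice absorbs
    any rounding remainder.
    """
    edges = list(edges)
    random.Random(seed).shuffle(edges)
    n = len(edges)
    train_end = int(splits[0] * n)
    valid_end = train_end + int(splits[1] * n)
    return edges[:train_end], edges[train_end:valid_end], edges[valid_end:]
```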

FILE: src/python/tools/preprocess/converters/writers/spark_writer.py
  function convert_to_binary (line 26) | def convert_to_binary(input_filename, output_filename):
  function merge_csvs (line 37) | def merge_csvs(input_directory, output_file):
  function write_df_to_csv (line 50) | def write_df_to_csv(df, output_filename):
  function write_partitioned_df_to_csv (line 56) | def write_partitioned_df_to_csv(partition_triples, num_partitions, outpu...
  class SparkWriter (line 111) | class SparkWriter(object):
    method __init__ (line 112) | def __init__(self, spark, output_dir, partitioned_evaluation):
    method write_to_csv (line 119) | def write_to_csv(self, train_edges_df, valid_edges_df, test_edges_df, ...
    method write_to_binary (line 193) | def write_to_binary(self, train_edges_df, valid_edges_df, test_edges_d...

FILE: src/python/tools/preprocess/converters/writers/torch_writer.py
  class TorchWriter (line 10) | class TorchWriter(object):
    method __init__ (line 11) | def __init__(self, output_dir, partitioned_evaluation):
    method write_to_binary (line 17) | def write_to_binary(

FILE: src/python/tools/preprocess/custom.py
  class CustomLinkPredictionDataset (line 14) | class CustomLinkPredictionDataset(LinkPredictionDataset):
    method __init__ (line 15) | def __init__(
    method download (line 36) | def download(self, overwrite=False):
    method preprocess (line 39) | def preprocess(

FILE: src/python/tools/preprocess/dataset.py
  class Dataset (line 9) | class Dataset(ABC):
    method __init__ (line 32) | def __init__(self, output_directory, spark=False):
    method download (line 45) | def download(self, overwrite=False):
    method preprocess (line 49) | def preprocess(self) -> DatasetConfig:
  class NodeClassificationDataset (line 53) | class NodeClassificationDataset(Dataset):
    method __init__ (line 54) | def __init__(self, output_directory, spark):
  class LinkPredictionDataset (line 64) | class LinkPredictionDataset(Dataset):
    method __init__ (line 65) | def __init__(self, output_directory, spark):
  class GraphClassificationDataset (line 78) | class GraphClassificationDataset(Dataset):
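
  Note: the listing above shows the dataset interface as an ABC with `download`/`preprocess` hooks, subclassed per task (node classification, link prediction). A hypothetical skeleton of that pattern, not the repository's code:

```python
from abc import ABC, abstractmethod

class Dataset(ABC):
    """Each dataset downloads its raw files, then converts them."""

    def __init__(self, output_directory):
        self.output_directory = output_directory

    @abstractmethod
    def download(self, overwrite=False):
        ...

    @abstractmethod
    def preprocess(self):
        ...

class ToyDataset(Dataset):
    # Concrete subclasses implement both hooks; only then can
    # the class be instantiated.
    def download(self, overwrite=False):
        return "downloaded"

    def preprocess(self):
        return "preprocessed"
```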

FILE: src/python/tools/preprocess/datasets/dataset_helpers.py
  function remap_nodes (line 4) | def remap_nodes(node_mapping, train_nodes, valid_nodes, test_nodes, feat...
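
  Note: `remap_nodes` here and `apply_mapping1d` in the converter both revolve around mapping raw node identifiers to contiguous integer IDs. A minimal dict-based sketch of that remapping (illustrative only; the repository operates on DataFrames/tensors):

```python
def build_node_mapping(raw_ids):
    # Assign each distinct raw ID a contiguous integer, in
    # first-seen order.
    mapping = {}
    for raw in raw_ids:
        if raw not in mapping:
            mapping[raw] = len(mapping)
    return mapping

def apply_mapping(raw_ids, mapping):
    return [mapping[raw] for raw in raw_ids]
```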

FILE: src/python/tools/preprocess/datasets/fb15k.py
  class FB15K (line 8) | class FB15K(LinkPredictionDataset):
    method __init__ (line 17) | def __init__(self, output_directory: Path, spark=False):
    method download (line 23) | def download(self, overwrite=False):
    method preprocess (line 45) | def preprocess(

FILE: src/python/tools/preprocess/datasets/fb15k_237.py
  class FB15K237 (line 8) | class FB15K237(LinkPredictionDataset):
    method __init__ (line 19) | def __init__(self, output_directory: Path, spark=False):
    method download (line 25) | def download(self, overwrite=False):
    method preprocess (line 47) | def preprocess(

FILE: src/python/tools/preprocess/datasets/freebase86m.py
  class Freebase86m (line 8) | class Freebase86m(LinkPredictionDataset):
    method __init__ (line 15) | def __init__(self, output_directory: Path, spark=False):
    method download (line 21) | def download(self, overwrite=False):
    method preprocess (line 43) | def preprocess(

FILE: src/python/tools/preprocess/datasets/friendster.py
  class Friendster (line 8) | class Friendster(LinkPredictionDataset):
    method __init__ (line 21) | def __init__(self, output_directory: Path, spark=False):
    method download (line 27) | def download(self, overwrite=False):
    method preprocess (line 35) | def preprocess(

FILE: src/python/tools/preprocess/datasets/livejournal.py
  class Livejournal (line 8) | class Livejournal(LinkPredictionDataset):
    method __init__ (line 20) | def __init__(self, output_directory: Path, spark=False):
    method download (line 26) | def download(self, overwrite=False):
    method preprocess (line 34) | def preprocess(

FILE: src/python/tools/preprocess/datasets/ogb_mag240m.py
  class OGBMag240M (line 16) | class OGBMag240M(NodeClassificationDataset):
    method __init__ (line 29) | def __init__(self, output_directory: Path, spark=False):
    method download (line 35) | def download(self, overwrite=False):
    method preprocess (line 66) | def preprocess(

FILE: src/python/tools/preprocess/datasets/ogb_wikikg90mv2.py
  class OGBWikiKG90Mv2 (line 12) | class OGBWikiKG90Mv2(LinkPredictionDataset):
    method __init__ (line 22) | def __init__(self, output_directory: Path, spark=False):
    method download (line 28) | def download(self, overwrite=False):
    method preprocess (line 53) | def preprocess(

FILE: src/python/tools/preprocess/datasets/ogbl_citation2.py
  class OGBLCitation2 (line 12) | class OGBLCitation2(LinkPredictionDataset):
    method __init__ (line 23) | def __init__(self, output_directory: Path, spark=False):
    method download (line 29) | def download(self, overwrite=False):
    method preprocess (line 49) | def preprocess(

FILE: src/python/tools/preprocess/datasets/ogbl_collab.py
  class OGBLCollab (line 12) | class OGBLCollab(LinkPredictionDataset):
    method __init__ (line 25) | def __init__(self, output_directory: Path, spark=False, include_edge_t...
    method download (line 34) | def download(self, overwrite=False):
    method preprocess (line 61) | def preprocess(

FILE: src/python/tools/preprocess/datasets/ogbl_ppa.py
  class OGBLPpa (line 10) | class OGBLPpa(LinkPredictionDataset):
    method __init__ (line 21) | def __init__(self, output_directory: Path, spark=False):
    method download (line 27) | def download(self, overwrite=False, remap_ids=True):
    method preprocess (line 47) | def preprocess(

FILE: src/python/tools/preprocess/datasets/ogbl_wikikg2.py
  class OGBLWikiKG2 (line 12) | class OGBLWikiKG2(LinkPredictionDataset):
    method __init__ (line 23) | def __init__(self, output_directory: Path, spark=False):
    method download (line 29) | def download(self, overwrite=False):
    method preprocess (line 49) | def preprocess(

FILE: src/python/tools/preprocess/datasets/ogbn_arxiv.py
  class OGBNArxiv (line 13) | class OGBNArxiv(NodeClassificationDataset):
    method __init__ (line 27) | def __init__(self, output_directory: Path, spark=False):
    method download (line 33) | def download(self, overwrite=False):
    method preprocess (line 73) | def preprocess(

FILE: src/python/tools/preprocess/datasets/ogbn_papers100m.py
  class OGBNPapers100M (line 15) | class OGBNPapers100M(NodeClassificationDataset):
    method __init__ (line 25) | def __init__(self, output_directory: Path, spark=False):
    method download (line 31) | def download(self, overwrite=False):
    method preprocess (line 66) | def preprocess(

FILE: src/python/tools/preprocess/datasets/ogbn_products.py
  class OGBNProducts (line 13) | class OGBNProducts(NodeClassificationDataset):
    method __init__ (line 26) | def __init__(self, output_directory: Path, spark=False):
    method download (line 32) | def download(self, overwrite=False):
    method preprocess (line 72) | def preprocess(

FILE: src/python/tools/preprocess/datasets/twitter.py
  class Twitter (line 8) | class Twitter(LinkPredictionDataset):
    method __init__ (line 18) | def __init__(self, output_directory: Path, spark=False):
    method download (line 24) | def download(self, overwrite=False):
    method preprocess (line 31) | def preprocess(

FILE: src/python/tools/preprocess/utils.py
  function get_df_count (line 12) | def get_df_count(df, col):
  function download_url (line 18) | def download_url(url, output_dir, overwrite):
  function extract_file (line 37) | def extract_file(filepath, remove_input=True):
  function strip_header (line 78) | def strip_header(filepath, num_lines):
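
  Note: `strip_header(filepath, num_lines)` above is a typical cleanup step before parsing a downloaded edge list. A minimal sketch of what such a helper does (assumed behavior: drop the first N lines in place):

```python
def strip_header(filepath, num_lines):
    # Rewrite the file without its first `num_lines` lines.
    with open(filepath) as f:
        lines = f.readlines()
    with open(filepath, "w") as f:
        f.writelines(lines[num_lines:])
```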

FILE: test/cpp/end_to_end/main.cpp
  function main (line 7) | int main(int argc, char **argv) {

FILE: test/cpp/end_to_end/test_main.cpp
  function TEST (line 13) | TEST(TestMain, TestLinkPred) {

FILE: test/cpp/integration/main.cpp
  function main (line 7) | int main(int argc, char **argv) {

FILE: test/cpp/performance/main.cpp
  function main (line 7) | int main(int argc, char **argv) {

FILE: test/cpp/unit/main.cpp
  function main (line 7) | int main(int argc, char **argv) {

FILE: test/cpp/unit/nn/test_activation.cpp
  function TEST (line 15) | TEST(TestActivation, TestRelu) {
  function TEST (line 33) | TEST(TestActivation, TestSigmoid) {
  function TEST (line 51) | TEST(TestActivation, TestNone) {

FILE: test/cpp/unit/nn/test_initialization.cpp
  function TEST (line 16) | TEST(TestInitialization, TestUniform) {
  function TEST (line 52) | TEST(TestInitialization, TestNormal) {
  function TEST (line 97) | TEST(TestInitialization, TestConstant) {
  function TEST (line 132) | TEST(TestInitialization, TestComputeFans) {
  function TEST (line 186) | TEST(TestInitialization, TestGlorotUniform) {
  function TEST (line 283) | TEST(TestInitialization, TestGlorotNormal) {
  function TEST (line 313) | TEST(TestInitialization, TestTensorInit) {
  function TEST (line 375) | TEST(TestInitialization, TestSubtensorInit) {

FILE: test/cpp/unit/nn/test_loss.cpp
  function TEST (line 30) | TEST(TestLoss, TestShapeMismatch) {
  function TEST (line 55) | TEST(TestLoss, TestSoftmaxCrossEntropy) {
  function TEST (line 90) | TEST(TestLoss, TestRankingLoss) {
  function TEST (line 151) | TEST(TestLoss, TestCrossEntropyLoss) {
  function TEST (line 186) | TEST(TestLoss, TestBCEAfterSigmoid) {
  function TEST (line 221) | TEST(TestLoss, TestBCEWithLogits) {
  function TEST (line 256) | TEST(TestLoss, TestMSE) {
  function TEST (line 291) | TEST(TestLoss, TestSoftPlus) {
  function TEST (line 326) | TEST(TestLoss, TestGetLossFunction) {

FILE: test/cpp/unit/nn/test_model.cpp
  function TEST (line 17) | TEST(TestModel, TestInitModelFromConfigLP) {
  function TEST (line 125) | TEST(TestModel, TestInitModelFromConfigNC) {

FILE: test/cpp/unit/test_buffer.cpp
  class PartitionBufferTest (line 19) | class PartitionBufferTest : public ::testing::Test {
    method PartitionBufferTest (line 37) | PartitionBufferTest() {
    method SetUp (line 52) | void SetUp() override {
    method initializePartitionBuffer (line 90) | void initializePartitionBuffer(bool prefetch) {
    method TearDown (line 96) | void TearDown() override {
  class PartitionedFileTest (line 103) | class PartitionedFileTest : public ::testing::Test {
    method PartitionedFileTest (line 116) | PartitionedFileTest() {
    method SetUp (line 127) | void SetUp() override {
    method TearDown (line 136) | void TearDown() override {
  class LookaheadBlockTest (line 143) | class LookaheadBlockTest : public ::testing::Test {
    method LookaheadBlockTest (line 160) | LookaheadBlockTest() {
    method SetUp (line 173) | void SetUp() override {
    method TearDown (line 182) | void TearDown() override {
  class AsyncWriteBlockTest (line 193) | class AsyncWriteBlockTest : public ::testing::Test {
    method AsyncWriteBlockTest (line 209) | AsyncWriteBlockTest() {
    method SetUp (line 222) | void SetUp() override {
    method TearDown (line 231) | void TearDown() override {
  function TEST_F (line 241) | TEST_F(PartitionBufferTest, TestPartitionBufferOrdering) {
  function TEST_F (line 259) | TEST_F(PartitionBufferTest, TestPartitionBufferPrefetch) {
  function TEST_F (line 275) | TEST_F(PartitionBufferTest, TestPartitionBufferIndexRead) {
  function TEST_F (line 285) | TEST_F(PartitionBufferTest, TestPartitionBufferIndexAdd) {
  function TEST_F (line 299) | TEST_F(PartitionBufferTest, TestPartitionBufferSync) {
  function TEST_F (line 310) | TEST_F(PartitionBufferTest, TestPartitionBufferGlobalMap) {
  function TEST_F (line 320) | TEST_F(PartitionedFileTest, TestReadPartition) {
  function TEST_F (line 336) | TEST_F(PartitionedFileTest, TestWritePartition) {
  function TEST_F (line 363) | TEST_F(LookaheadBlockTest, TestMoveToBuffer) {
  function TEST_F (line 410) | TEST_F(AsyncWriteBlockTest, TestAsyncWrite) {

FILE: test/cpp/unit/test_storage.cpp
  class StorageTest (line 17) | class StorageTest : public ::testing::Test {
    method StorageTest (line 27) | StorageTest() {
    method SetUp (line 36) | void SetUp() override {
    method TearDown (line 46) | void TearDown() override {
  class FlatFileTest (line 54) | class FlatFileTest : public StorageTest {
    method FlatFileTest (line 61) | FlatFileTest() {
  class InMemoryTest (line 72) | class InMemoryTest : public StorageTest {
    method InMemoryTest (line 79) | InMemoryTest() {
  class PartitionBufferStorageTest (line 90) | class PartitionBufferStorageTest : public StorageTest {
    method PartitionBufferStorageTest (line 99) | PartitionBufferStorageTest() {
  function TEST_F (line 141) | TEST_F(FlatFileTest, TestFlatFileWrite) {
  function TEST_F (line 168) | TEST_F(FlatFileTest, TestFlatFileCopy) {
  function TEST_F (line 189) | TEST_F(FlatFileTest, TestFlatFileShuffle) {
  function TEST_F (line 205) | TEST_F(FlatFileTest, TestFlatFileSort) {
  function TEST_F (line 222) | TEST_F(FlatFileTest, TestFlatFileSortEdges) {
  function TEST_F (line 260) | TEST_F(InMemoryTest, TestIndexRead) {
  function TEST_F (line 274) | TEST_F(InMemoryTest, TestIndexAdd) {
  function TEST_F (line 294) | TEST_F(InMemoryTest, TestIndexPut) {
  function TEST_F (line 318) | TEST_F(InMemoryTest, TestInMemoryShuffle) {
  function TEST_F (line 331) | TEST_F(InMemoryTest, TestInMemorySort) {
  function TEST_F (line 345) | TEST_F(InMemoryTest, TestInMemorySortEdges) {
  function TEST_F (line 381) | TEST_F(PartitionBufferStorageTest, TestBufferOrdering) {
  function TEST_F (line 402) | TEST_F(PartitionBufferStorageTest, TestIndexRead) {
  function TEST_F (line 417) | TEST_F(PartitionBufferStorageTest, TestRangePut) {
  function TEST_F (line 440) | TEST_F(PartitionBufferStorageTest, TestIndexAdd) {

FILE: test/cpp/unit/testing_util.cpp
  function createTmpFile (line 6) | int createTmpFile(std::string &filename) { return open(filename.c_str(),...
  function getRandTensor (line 8) | torch::Tensor getRandTensor(int dim0_size, int dim1_size, torch::Dtype d...
  function genRandTensorAndWriteToFile (line 15) | int genRandTensorAndWriteToFile(torch::Tensor &rand_tensor, int total_em...
  function checkPermOf2dTensor (line 21) | bool checkPermOf2dTensor(torch::Tensor &a, torch::Tensor &b) {
  function sortWithinEdgeBuckets (line 36) | void sortWithinEdgeBuckets(torch::Tensor &rand_tensor, vector<int64_t> &...
  function sortEdgesSrcDest (line 47) | bool sortEdgesSrcDest(vector<int> &edge1, vector<int> &edge2) {
  function partitionEdges (line 53) | vector<int64_t> partitionEdges(torch::Tensor &edges, int num_partitions,...

FILE: test/db2graph/test_postgres.py
  class TestConnector (line 10) | class TestConnector:
    method fill_db (line 40) | def fill_db(self):
    method test_connect_to_db (line 175) | def test_connect_to_db(self):
    method test_edges_entity_entity (line 229) | def test_edges_entity_entity(self):

FILE: test/python/bindings/end_to_end/test_fb15k_acc.py
  class TestFB15K (line 10) | class TestFB15K(unittest.TestCase):
    method setUp (line 12) | def setUp(self):
    method tearDown (line 17) | def tearDown(self):
    method test_one_epoch (line 23) | def test_one_epoch(self):

FILE: test/python/bindings/end_to_end/test_interval_checkpointing.py
  function replace_string_in_file (line 12) | def replace_string_in_file(filepath, before, after):
  function get_line_in_file (line 16) | def get_line_in_file(filepath, line_num):
  function run_config (line 20) | def run_config(config_file, enable_checkpointing, checkpoint_interval, s...
  class TestIntervalCheckpointing (line 29) | class TestIntervalCheckpointing(unittest.TestCase):
    method setUp (line 34) | def setUp(self):
    method tearDown (line 40) | def tearDown(self):
    method init_dataset_dir (line 44) | def init_dataset_dir(self, name):
    method test_checkpointing_with_state (line 71) | def test_checkpointing_with_state(self):
    method test_checkpointing_wo_state (line 118) | def test_checkpointing_wo_state(self):

FILE: test/python/bindings/end_to_end/test_lp_basic.py
  function run_configs (line 14) | def run_configs(directory, partitioned_eval=False):
  class TestLP (line 28) | class TestLP(unittest.TestCase):
    method setUp (line 32) | def setUp(self):
    method tearDown (line 51) | def tearDown(self):
    method test_dm (line 56) | def test_dm(self):
    method test_gs (line 72) | def test_gs(self):
    method test_gs_uniform (line 88) | def test_gs_uniform(self):
    method test_gat (line 104) | def test_gat(self):
    method test_sync_training (line 120) | def test_sync_training(self):
    method test_async_training (line 137) | def test_async_training(self):
    method test_sync_eval (line 153) | def test_sync_eval(self):
    method test_async_eval (line 170) | def test_async_eval(self):
  class TestLPNoRelations (line 186) | class TestLPNoRelations(unittest.TestCase):
    method setUp (line 190) | def setUp(self):
    method tearDown (line 209) | def tearDown(self):
    method test_dm (line 214) | def test_dm(self):
    method test_gs (line 230) | def test_gs(self):
    method test_gs_uniform (line 246) | def test_gs_uniform(self):
    method test_gat (line 262) | def test_gat(self):
    method test_sync_training (line 278) | def test_sync_training(self):
    method test_async_training (line 295) | def test_async_training(self):
    method test_sync_eval (line 311) | def test_sync_eval(self):
    method test_async_eval (line 328) | def test_async_eval(self):

FILE: test/python/bindings/end_to_end/test_lp_buffer.py
  function run_configs (line 14) | def run_configs(directory, partitioned_eval=False):
  class TestLPBuffer (line 29) | class TestLPBuffer(unittest.TestCase):
    method setUp (line 33) | def setUp(self):
    method tearDown (line 53) | def tearDown(self):
    method test_dm (line 58) | def test_dm(self):
    method test_gs (line 74) | def test_gs(self):
    method test_gs_uniform (line 90) | def test_gs_uniform(self):
    method test_gat (line 106) | def test_gat(self):
    method test_sync_training (line 123) | def test_sync_training(self):
    method test_async_training (line 140) | def test_async_training(self):
    method test_sync_eval (line 156) | def test_sync_eval(self):
    method test_async_eval (line 173) | def test_async_eval(self):
    method test_partitioned_eval (line 189) | def test_partitioned_eval(self):
  class TestLPBufferNoRelations (line 218) | class TestLPBufferNoRelations(unittest.TestCase):
    method setUp (line 222) | def setUp(self):
    method tearDown (line 242) | def tearDown(self):
    method test_dm (line 247) | def test_dm(self):
    method test_gs (line 264) | def test_gs(self):
    method test_gs_uniform (line 281) | def test_gs_uniform(self):
    method test_gat (line 297) | def test_gat(self):
    method test_sync_training (line 314) | def test_sync_training(self):
    method test_async_training (line 331) | def test_async_training(self):
    method test_sync_eval (line 347) | def test_sync_eval(self):
    method test_async_eval (line 364) | def test_async_eval(self):
    method test_partitioned_eval (line 380) | def test_partitioned_eval(self):

FILE: test/python/bindings/end_to_end/test_lp_storage.py
  function run_configs (line 14) | def run_configs(directory, partitioned_eval=False):
  class TestLPStorage (line 28) | class TestLPStorage(unittest.TestCase):
    method setUp (line 32) | def setUp(self):
    method tearDown (line 37) | def tearDown(self):
    method test_no_valid (line 42) | def test_no_valid(self):
    method test_only_train (line 69) | def test_only_train(self):
    method test_no_valid_no_relations (line 95) | def test_no_valid_no_relations(self):
    method test_only_train_no_relations (line 122) | def test_only_train_no_relations(self):
    method test_no_valid_buffer (line 148) | def test_no_valid_buffer(self):
    method test_only_train_buffer (line 177) | def test_only_train_buffer(self):
    method test_no_valid_buffer_no_relations (line 204) | def test_no_valid_buffer_no_relations(self):
    method test_only_train_buffer_no_relations (line 233) | def test_only_train_buffer_no_relations(self):

FILE: test/python/bindings/end_to_end/test_model_dir.py
  function run_configs (line 14) | def run_configs(directory, model_dir=None, partitioned_eval=False, seque...
  function has_model_params (line 50) | def has_model_params(model_dir_path, task="lp", has_embeddings=False, ha...
  class TestLP (line 83) | class TestLP(unittest.TestCase):
    method setUp (line 87) | def setUp(self):
    method tearDown (line 106) | def tearDown(self):
    method test_dm (line 111) | def test_dm(self):
  class TestNC (line 152) | class TestNC(unittest.TestCase):
    method setUp (line 156) | def setUp(self):
    method tearDown (line 176) | def tearDown(self):
    method test_gs (line 181) | def test_gs(self):
    method test_async (line 203) | def test_async(self):
    method test_emb (line 224) | def test_emb(self):
  class TestLPBufferNoRelations (line 245) | class TestLPBufferNoRelations(unittest.TestCase):
    method setUp (line 249) | def setUp(self):
    method tearDown (line 269) | def tearDown(self):
    method test_dm (line 274) | def test_dm(self):
    method test_partitioned_eval (line 295) | def test_partitioned_eval(self):
  class TestNCBuffer (line 329) | class TestNCBuffer(unittest.TestCase):
    method setUp (line 333) | def setUp(self):
    method tearDown (line 354) | def tearDown(self):
    method test_gs (line 359) | def test_gs(self):
    method test_async (line 381) | def test_async(self):
    method test_emb (line 402) | def test_emb(self):
    method test_partitioned_eval (line 423) | def test_partitioned_eval(self):
    method test_sequential (line 459) | def test_sequential(self):

FILE: test/python/bindings/end_to_end/test_nc_basic.py
  function run_configs (line 14) | def run_configs(directory, partitioned_eval=False):
  class TestNC (line 28) | class TestNC(unittest.TestCase):
    method setUp (line 32) | def setUp(self):
    method tearDown (line 52) | def tearDown(self):
    method test_gs (line 57) | def test_gs(self):
    method test_gs_uniform (line 73) | def test_gs_uniform(self):
    method test_gat (line 89) | def test_gat(self):
    method test_async (line 106) | def test_async(self):
    method test_emb (line 122) | def test_emb(self):
  class TestNCNoRelations (line 138) | class TestNCNoRelations(unittest.TestCase):
    method setUp (line 142) | def setUp(self):
    method tearDown (line 162) | def tearDown(self):
    method test_gs (line 167) | def test_gs(self):
    method test_gs_uniform (line 183) | def test_gs_uniform(self):
    method test_gat (line 199) | def test_gat(self):
    method test_async (line 216) | def test_async(self):
    method test_emb (line 232) | def test_emb(self):

FILE: test/python/bindings/end_to_end/test_nc_buffer.py
  function run_configs (line 14) | def run_configs(directory, partitioned_eval=False, sequential_train_node...
  class TestNCBuffer (line 32) | class TestNCBuffer(unittest.TestCase):
    method setUp (line 36) | def setUp(self):
    method tearDown (line 57) | def tearDown(self):
    method test_gs (line 62) | def test_gs(self):
    method test_gs_uniform (line 78) | def test_gs_uniform(self):
    method test_gat (line 94) | def test_gat(self):
    method test_async (line 111) | def test_async(self):
    method test_emb (line 127) | def test_emb(self):
    method test_partitioned_eval (line 143) | def test_partitioned_eval(self):
    method test_sequential (line 174) | def test_sequential(self):
  class TestNCBufferNoRelations (line 206) | class TestNCBufferNoRelations(unittest.TestCase):
    method setUp (line 210) | def setUp(self):
    method tearDown (line 231) | def tearDown(self):
    method test_gs (line 236) | def test_gs(self):
    method test_gs_uniform (line 252) | def test_gs_uniform(self):
    method test_gat (line 268) | def test_gat(self):
    method test_async (line 285) | def test_async(self):
    method test_emb (line 301) | def test_emb(self):
    method test_partitioned_eval (line 317) | def test_partitioned_eval(self):
    method test_sequential (line 348) | def test_sequential(self):

FILE: test/python/bindings/end_to_end/test_nc_storage.py
  function run_configs (line 14) | def run_configs(directory, partitioned_eval=False):
  class TestNCStorage (line 28) | class TestNCStorage(unittest.TestCase):
    method setUp (line 32) | def setUp(self):
    method tearDown (line 37) | def tearDown(self):
    method test_no_valid (line 42) | def test_no_valid(self):
    method test_only_train (line 70) | def test_only_train(self):
    method test_no_valid_no_relations (line 97) | def test_no_valid_no_relations(self):
    method test_only_train_no_relations (line 125) | def test_only_train_no_relations(self):
    method test_no_valid_buffer (line 152) | def test_no_valid_buffer(self):
    method test_only_train_buffer (line 182) | def test_only_train_buffer(self):
    method test_no_valid_buffer_no_relations (line 210) | def test_no_valid_buffer_no_relations(self):
    method test_only_train_buffer_no_relations (line 240) | def test_only_train_buffer_no_relations(self):

FILE: test/python/bindings/end_to_end/test_resume_training.py
  function replace_string_in_file (line 12) | def replace_string_in_file(filepath, before, after):
  function get_line_in_file (line 16) | def get_line_in_file(filepath, line_num):
  function run_config (line 20) | def run_config(config_file):
  class TestResumeTraining (line 25) | class TestResumeTraining(unittest.TestCase):
    method setUp (line 30) | def setUp(self):
    method tearDown (line 36) | def tearDown(self):
    method init_dataset_dir (line 40) | def init_dataset_dir(self, name):
    method test_resume_training_model_dir (line 69) | def test_resume_training_model_dir(self):
    method test_resume_training_checkpoint_dir (line 111) | def test_resume_training_checkpoint_dir(self):
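The resume-training tests rely on two small file helpers whose signatures appear in the outline above (`replace_string_in_file`, `get_line_in_file`). A hedged sketch of what such helpers could look like — the bodies and the 0-indexed line convention are assumptions, only the signatures come from the listing:

```python
from pathlib import Path


def replace_string_in_file(filepath, before, after):
    # Assumed behavior: rewrite the file in place with every occurrence
    # of `before` swapped for `after` (e.g. to bump an epoch count in a
    # config before resuming training).
    path = Path(filepath)
    path.write_text(path.read_text().replace(before, after))


def get_line_in_file(filepath, line_num):
    # Assumed behavior: return the line at `line_num` (0-indexed here,
    # which is an assumption), without its trailing newline.
    return Path(filepath).read_text().splitlines()[line_num]
```

For example, a resume test could rewrite `epochs: 5` to `epochs: 10` in a checkpointed config, then read a line back to assert the edit took effect.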

FILE: test/python/bindings/integration/test_config.py
  class TestConfig (line 14) | class TestConfig(unittest.TestCase):
    method setUp (line 32) | def setUp(self):
    method tearDown (line 39) | def tearDown(self):
    method test_missing_config (line 43) | def test_missing_config(self):
    method test_missing_dataset_yaml (line 50) | def test_missing_dataset_yaml(self):
    method test_load_config (line 93) | def test_load_config(self):

FILE: test/python/bindings/integration/test_data.py
  class TestBatch (line 9) | class TestBatch(unittest.TestCase):
    method test_construction (line 14) | def test_construction(self):
    method test_accumulate_gradients (line 34) | def test_accumulate_gradients(self):
    method test_clear (line 49) | def test_clear(self):
  class TestDataloader (line 67) | class TestDataloader(unittest.TestCase):
    method test_lp_only_edges (line 68) | def test_lp_only_edges(self):
    method test_lp_negs (line 108) | def test_lp_negs(self):
    method test_lp_negs_nbrs (line 164) | def test_lp_negs_nbrs(self):
    method test_lp_nbrs (line 222) | def test_lp_nbrs(self):
    method test_nc_nbrs (line 264) | def test_nc_nbrs(self):
    method test_nc_no_nbrs (line 307) | def test_nc_no_nbrs(self):

FILE: test/python/bindings/integration/test_nn.py
  function get_test_model_lp (line 35) | def get_test_model_lp():
  function get_test_model_lp_neg (line 59) | def get_test_model_lp_neg():
  function get_test_model_nc (line 83) | def get_test_model_nc():
  class CustomModelBasic (line 99) | class CustomModelBasic(Model):
    method __init__ (line 100) | def __init__(self, encoder, decoder):
  class CustomModelOverrideForward (line 113) | class CustomModelOverrideForward(Model):
    method __init__ (line 114) | def __init__(self, encoder, decoder):
    method forward_lp (line 124) | def forward_lp(self, batch, train):
  class TestModel (line 130) | class TestModel(unittest.TestCase):
    method test_construction_lp (line 135) | def test_construction_lp(self):
    method test_construction_nc (line 138) | def test_construction_nc(self):
    method test_forward_nc (line 141) | def test_forward_nc(self):
    method test_forward_lp (line 148) | def test_forward_lp(self):
    method test_train_batch (line 162) | def test_train_batch(self):
    method test_clear_grad (line 185) | def test_clear_grad(self):
    method test_step (line 197) | def test_step(self):
    method test_save (line 209) | def test_save(self):
    method test_load (line 212) | def test_load(self):
    method test_custom_model_basic (line 215) | def test_custom_model_basic(self):
    method test_custom_model_forward_override (line 230) | def test_custom_model_forward_override(self):
    method test_init_from_config (line 246) | def test_init_from_config(self):

FILE: test/python/helpers.py
  function dataset_generator (line 5) | def dataset_generator(

FILE: test/python/postprocessing/test_in_memory_exporter.py
  function check_output (line 16) | def check_output(output_dir, fmt, has_rels=False):
  class TestLP (line 90) | class TestLP(unittest.TestCase):
    method setUp (line 94) | def setUp(self):
    method tearDown (line 134) | def tearDown(self):
    method test_export_csv (line 138) | def test_export_csv(self):
    method test_export_binary (line 148) | def test_export_binary(self):
    method test_export_parquet (line 163) | def test_export_parquet(self):
    method test_export_no_model (line 179) | def test_export_no_model(self):
    method test_export_overwrite (line 186) | def test_export_overwrite(self):
  class TestNC (line 203) | class TestNC(unittest.TestCase):
    method setUp (line 207) | def setUp(self):
    method tearDown (line 248) | def tearDown(self):
    method test_export_csv (line 252) | def test_export_csv(self):
    method test_export_binary (line 262) | def test_export_binary(self):
    method test_export_parquet (line 277) | def test_export_parquet(self):
    method test_export_no_model (line 293) | def test_export_no_model(self):
    method test_export_overwrite (line 300) | def test_export_overwrite(self):

FILE: test/python/predict/test_predict.py
  function validate_metrics (line 15) | def validate_metrics(config, metrics, num_items, output_dir=None):
  function validate_scores (line 57) | def validate_scores(config, num_edges, save_scores, save_ranks, output_d...
  function validate_labels (line 82) | def validate_labels(config, num_nodes, output_dir=None):
  class TestPredictLP (line 97) | class TestPredictLP(unittest.TestCase):
    method setUp (line 101) | def setUp(self):
    method tearDown (line 138) | def tearDown(self):
    method test_basic_lp (line 142) | def test_basic_lp(self):
    method test_lp_metrics (line 150) | def test_lp_metrics(self):
    method test_predict_model_dir (line 177) | def test_predict_model_dir(self):
    method test_lp_save_ranks (line 272) | def test_lp_save_ranks(self):
    method test_lp_save_scores (line 275) | def test_lp_save_scores(self):

FILE: test/python/preprocessing/test_spark_converter.py
  class TestSparkConverter (line 23) | class TestSparkConverter(unittest.TestCase):
    method setUp (line 25) | def setUp(self):
    method tearDown (line 34) | def tearDown(self):
    method make_directory_tree (line 39) | def make_directory_tree(self, dir_path):
    method test_delimited_defaults (line 48) | def test_delimited_defaults(self):
    method test_delimited_str_ids (line 66) | def test_delimited_str_ids(self):
    method test_columns (line 94) | def test_columns(self):
    method test_header (line 112) | def test_header(self):
    method test_delim (line 138) | def test_delim(self):
    method test_partitions (line 159) | def test_partitions(self):

FILE: test/python/preprocessing/test_torch_converter.py
  function validate_partitioned_output_dir (line 20) | def validate_partitioned_output_dir(
  function validate_output_dir (line 77) | def validate_output_dir(
  class TestTorchConverter (line 155) | class TestTorchConverter(unittest.TestCase):
    method setUp (line 161) | def setUp(self):
    method tearDown (line 170) | def tearDown(self):
    method test_delimited_defaults (line 174) | def test_delimited_defaults(self):
    method test_delimited_str_ids (line 198) | def test_delimited_str_ids(self):
    method test_numpy_defaults (line 230) | def test_numpy_defaults(self):
    method test_pytorch_defaults (line 258) | def test_pytorch_defaults(self):
    method test_splits (line 286) | def test_splits(self):
    method test_columns (line 313) | def test_columns(self):
    method test_header (line 336) | def test_header(self):
    method test_delim (line 366) | def test_delim(self):
    method test_dtype (line 393) | def test_dtype(self):
    method test_partitions (line 424) | def test_partitions(self):
    method test_no_remap (line 467) | def test_no_remap(self):
    method test_torch_no_relation_no_remap (line 494) | def test_torch_no_relation_no_remap(self):
    method test_pandas_no_relation_no_remap (line 524) | def test_pandas_no_relation_no_remap(self):
    method test_torch_no_relation_remap (line 551) | def test_torch_no_relation_remap(self):
    method test_pandas_no_relation_remap (line 582) | def test_pandas_no_relation_remap(self):
    method test_torch_only_weights_no_remap (line 609) | def test_torch_only_weights_no_remap(self):
    method test_pandas_only_weights_no_remap (line 643) | def test_pandas_only_weights_no_remap(self):
    method test_torch_only_weights_remap (line 673) | def test_torch_only_weights_remap(self):
    method test_pandas_only_weights_remap (line 707) | def test_pandas_only_weights_remap(self):
    method test_torch_relationship_weights_no_remap (line 737) | def test_torch_relationship_weights_no_remap(self):
    method test_pandas_relationship_weights_no_remap (line 773) | def test_pandas_relationship_weights_no_remap(self):
    method test_torch_relationship_weights_remap (line 805) | def test_torch_relationship_weights_remap(self):
    method test_pandas_relationship_weights_remap (line 840) | def test_pandas_relationship_weights_remap(self):
    method test_torch_relationship_weights_remap_partioned (line 872) | def test_torch_relationship_weights_remap_partioned(self):
    method test_pandas_relationship_weights_remap_partioned (line 910) | def test_pandas_relationship_weights_remap_partioned(self):

FILE: test/test_configs/generate_test_configs.py
  function get_config (line 11) | def get_config(model_config_path, storage_config_path, train_config_path...
  function set_dataset_config (line 22) | def set_dataset_config(base_config, dataset_dir):
  function config_from_sub_configs (line 38) | def config_from_sub_configs(model_config, storage_config, train_config, ...
  function get_cartesian_product_of_configs (line 49) | def get_cartesian_product_of_configs(config_directory, model_names, stor...
  function get_all_configs_for_dataset (line 106) | def get_all_configs_for_dataset(
  function generate_configs_for_dataset (line 122) | def generate_configs_for_dataset(

FILE: test/test_data/generate.py
  function get_random_graph (line 12) | def get_random_graph(num_nodes, num_edges, num_rels=1):
  function generate_features (line 25) | def generate_features(num_nodes, feature_dim):
  function generate_labels (line 29) | def generate_labels(num_nodes, num_classes):
  function shuffle_with_map (line 33) | def shuffle_with_map(values, node_mapping):
  function apply_mapping (line 39) | def apply_mapping(values, node_mapping):
  function remap_nc (line 44) | def remap_nc(output_dir, train_nodes, labels, num_nodes, valid_nodes=Non...
  function remap_lp (line 66) | def remap_lp(output_dir, features=None):
  function generate_random_dataset_nc (line 73) | def generate_random_dataset_nc(
  function generate_random_dataset_lp (line 186) | def generate_random_dataset_lp(
  function generate_random_dataset (line 244) | def generate_random_dataset(
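The `test/test_data/generate.py` outline above centers on `get_random_graph(num_nodes, num_edges, num_rels=1)`. A minimal sketch of that contract — edge triples `(src, rel, dst)` with ids drawn uniformly at random — using only the standard library; the repo's version presumably returns numpy/torch tensors, and the `seed` parameter is added here for reproducibility and is not part of the listed signature:

```python
import random


def get_random_graph(num_nodes, num_edges, num_rels=1, seed=0):
    # Sketch of the generator's assumed contract: produce `num_edges`
    # triples (src, rel, dst) with node ids in [0, num_nodes) and
    # relation ids in [0, num_rels).
    rng = random.Random(seed)
    return [
        (rng.randrange(num_nodes), rng.randrange(num_rels), rng.randrange(num_nodes))
        for _ in range(num_edges)
    ]
```

Helpers like `generate_features` and `generate_labels` would then attach per-node feature vectors and class labels over the same `num_nodes` id space, and `remap_nc`/`remap_lp` apply a node-id permutation before writing the dataset out.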
Condensed preview — 446 files, each showing path, character count, and a content snippet.
[
  {
    "path": ".clang-format",
    "chars": 6105,
    "preview": "---\nLanguage:        Cpp\n# BasedOnStyle:  Google\nAccessModifierOffset: -1\nAlignAfterOpenBracket: Align\nAlignArrayOfStruc"
  },
  {
    "path": ".flake8",
    "chars": 485,
    "preview": "#########################\n# Flake8 Configuration  #\n# (.flake8)             #\n#########################\n[flake8]\nignore "
  },
  {
    "path": ".github/ISSUE_TEMPLATE/bug_report.md",
    "chars": 600,
    "preview": "---\nname: Bug report\nabout: Create a report to help us improve\ntitle: ''\nlabels: bug\nassignees: ''\n\n---\n\n**Describe the "
  },
  {
    "path": ".github/ISSUE_TEMPLATE/documentation-improvement.md",
    "chars": 487,
    "preview": "---\nname: Documentation Improvement\nabout: 'Suggest improvements to the documentation '\ntitle: ''\nlabels: documentation\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature_request.md",
    "chars": 604,
    "preview": "---\nname: Feature request\nabout: Suggest an idea for this project\ntitle: ''\nlabels: enhancement\nassignees: ''\n\n---\n\n**Is"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/general-question.md",
    "chars": 97,
    "preview": "---\nname: General Question\nabout: Ask a question\ntitle: ''\nlabels: question\nassignees: ''\n\n---\n\n\n"
  },
  {
    "path": ".github/PULL_REQUEST_TEMPLATE/pull_request_template.md",
    "chars": 564,
    "preview": "If there is no outstanding issue related to this change, please open an issue before submitting this pull request. For s"
  },
  {
    "path": ".github/workflows/build_and_test.yml",
    "chars": 1267,
    "preview": "name: Build and Test\n\non:\n  push:\n    branches:\n      - main\n  pull_request:\n    branches:\n      - main\nenv:\n  BUILD_TYP"
  },
  {
    "path": ".github/workflows/db2graph_test_postgres.yml",
    "chars": 1364,
    "preview": "name: Testing DB2GRAPH using postgres\non:\n  push:\n    branches:\n      - main\n  pull_request:\n    branches:\n      - main\n"
  },
  {
    "path": ".github/workflows/lint.yml",
    "chars": 341,
    "preview": "name: Lint\n\non: [push, pull_request]\n\njobs:\n  linting:\n    runs-on: ubuntu-latest\n    steps:\n    - uses: actions/checkou"
  },
  {
    "path": ".gitignore",
    "chars": 2533,
    "preview": "CMakeCache.txt\nCMakeFiles\nCMakeScripts\nTesting\nMakefile\ncmake_install.cmake\ninstall_manifest.txt\ncompile_commands.json\nC"
  },
  {
    "path": ".gitmodules",
    "chars": 530,
    "preview": "[submodule \"src/cpp/third_party/pybind11\"]\n\tpath = src/cpp/third_party/pybind11\n\turl = https://github.com/pybind/pybind1"
  },
  {
    "path": "CMakeLists.txt",
    "chars": 9959,
    "preview": "cmake_minimum_required(VERSION 3.12.2)\nset(CMAKE_CXX_STANDARD 17)\nset(CMAKE_CXX_STANDARD_REQUIRED ON)\ncmake_policy(SET C"
  },
  {
    "path": "CONTRIBUTING.md",
    "chars": 1432,
    "preview": "# Contributing to Marius\n\nAny contributions users wish to make to Marius are welcome. To name a few, here are some ways "
  },
  {
    "path": "LICENSE",
    "chars": 11357,
    "preview": "                                 Apache License\n                           Version 2.0, January 2004\n                   "
  },
  {
    "path": "MANIFEST.in",
    "chars": 102,
    "preview": "graft src\ninclude CMakeLists.txt\nglobal-exclude *.py[cod] __pycache__ *.so *.dylib .DS_Store *.gpickle"
  },
  {
    "path": "README.md",
    "chars": 4543,
    "preview": "# Marius and MariusGNN #\n\nThis repository contains the code for the Marius and MariusGNN papers. \nWe have combined the t"
  },
  {
    "path": "docs/.nojekyll",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "docs/CMakeLists.txt",
    "chars": 2293,
    "preview": "# https://devblogs.microsoft.com/cppblog/clear-functional-c-documentation-with-sphinx-breathe-doxygen-cmake/\nfind_packag"
  },
  {
    "path": "docs/Doxyfile",
    "chars": 112413,
    "preview": "# Doxyfile 1.8.20\n\n# This file describes the settings to be used by the documentation system\n# doxygen (www.doxygen.org)"
  },
  {
    "path": "docs/Doxyfile.in",
    "chars": 111,
    "preview": "#...\nINPUT = \"@DOXYGEN_INPUT_DIR@\"\n#...\nOUTPUT_DIRECTORY = \"@DOXYGEN_OUTPUT_DIR@\"\n#...\nGENERATE_XML = YES\n#...\n"
  },
  {
    "path": "docs/README.md",
    "chars": 590,
    "preview": "# Building the Docs #\n\n1. Clone main repository: `git clone https://github.com/marius-team/marius.git`.\n\n2. Clone `gh-pa"
  },
  {
    "path": "docs/_static/css/marius_theme.css",
    "chars": 3444,
    "preview": "@import url(\"theme.css\");\n\n.wy-nav-content {\n    max-width: 50vw;\n}\n\n:root {\n    /*--marius_purple: #180A5B;*/\n    /*--m"
  },
  {
    "path": "docs/_templates/layout.html",
    "chars": 829,
    "preview": "{% extends \"!layout.html\" %}\n  {% block menu %} {{ super() }}\n\n  <!-- <style>\n    a.gh-font {\n        font-weight: 800;\n"
  },
  {
    "path": "docs/conf.py",
    "chars": 2267,
    "preview": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common op"
  },
  {
    "path": "docs/config_interface/configuration.rst",
    "chars": 19556,
    "preview": "\nOverview\n======================\n\nThe configuration interface allows for high-performance training and evaluation of mod"
  },
  {
    "path": "docs/config_interface/full_schema.rst",
    "chars": 32758,
    "preview": ".. _config_schema\n\nConfiguration Schema\n=========================\n\n.. list-table:: MariusConfig\n   :widths: 15 10 50 15\n"
  },
  {
    "path": "docs/config_interface/index.rst",
    "chars": 166,
    "preview": "\nConfiguration Interface\n**************************\n\n.. toctree::\n    :glob:\n    :maxdepth: 2\n    :caption: Contents\n\n  "
  },
  {
    "path": "docs/config_interface/samples.rst",
    "chars": 15267,
    "preview": "\nSample Files\n======================\n\nModel Configs\n-------------\n\nDistMult\n^^^^^^^^\n\n+---------------------------------"
  },
  {
    "path": "docs/db2graph/db2graph.rst",
    "chars": 25540,
    "preview": "Db2Graph: Database to Graph conversion tool\n============================================\n\nIntroduction\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\""
  },
  {
    "path": "docs/examples/config/index.rst",
    "chars": 206,
    "preview": ".. _configuration_examples\n\n\nConfiguration Examples\n**************************\n\n.. toctree::\n    :glob:\n    :maxdepth: 2"
  },
  {
    "path": "docs/examples/config/lp_custom.rst",
    "chars": 14439,
    "preview": "Custom Dataset Link Prediction\n---------------------------------------------\n\nIn this tutorial, we use the **OGBN_Arxiv "
  },
  {
    "path": "docs/examples/config/lp_fb15k237.rst",
    "chars": 13151,
    "preview": "Small Scale Link Prediction (FB15K-237)\n---------------------------------------------\n\nIn this tutorial, we use the **FB"
  },
  {
    "path": "docs/examples/config/lp_paleobiology.rst",
    "chars": 8596,
    "preview": ".. _lp_paleo:\n\nPaleobiology Dataset Link Prediction\n---------------------------------------------\nIn this tutorial, we w"
  },
  {
    "path": "docs/examples/config/nc_custom.rst",
    "chars": 19536,
    "preview": "Custom Dataset Node Classification\n---------------------------------------------\nIn this tutorial, we use the **Cora dat"
  },
  {
    "path": "docs/examples/config/nc_ogbn_arxiv.rst",
    "chars": 13359,
    "preview": "Small Scale Node Classification (OGBN-Arxiv)\n---------------------------------------------\n\nIn this tutorial, we use the"
  },
  {
    "path": "docs/examples/config/resume_training.rst",
    "chars": 3536,
    "preview": "Resume Training (FB15K-237)\n---------------------------------------------\n\nIn this tutorial, we use the **FB15K_237 know"
  },
  {
    "path": "docs/examples/index.rst",
    "chars": 116,
    "preview": "\nExamples\n**************************\n\n.. toctree::\n    :glob:\n    :maxdepth: 2\n\n    config/index\n    python/index\n\n\n"
  },
  {
    "path": "docs/examples/introduction.rst",
    "chars": 51,
    "preview": ".. _introduction\n\nIntroduction\n********************"
  },
  {
    "path": "docs/examples/prediction/command_line.rst",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "docs/examples/prediction/python.rst",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "docs/examples/preprocessing/command_line.rst",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "docs/examples/preprocessing/python.rst",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "docs/examples/python/index.rst",
    "chars": 134,
    "preview": "\nPython Examples\n**************************\n\n.. toctree::\n    :glob:\n    :maxdepth: 2\n\n    lp_fb15k237\n    lp_custom\n   "
  },
  {
    "path": "docs/examples/python/lp_custom.rst",
    "chars": 11436,
    "preview": "Custom Dataset Link Prediction\n---------------------------------------------\nThis example will demonstrate how to use Ma"
  },
  {
    "path": "docs/examples/python/lp_fb15k237.rst",
    "chars": 8098,
    "preview": "Small Scale Link Prediction (FB15K-237)\n---------------------------------------------\nThis example will demonstrate how "
  },
  {
    "path": "docs/examples/python/nc_ogbn_arxiv.rst",
    "chars": 8814,
    "preview": "Small Scale Node Classification (OGBN-Arxiv)\n---------------------------------------------\nOGBN-Arxiv is a built in data"
  },
  {
    "path": "docs/export_and_inference/index.rst",
    "chars": 142,
    "preview": "\nModel Export and Inference\n**************************\n\n.. toctree::\n    :glob:\n    :maxdepth: 2\n\n    marius_predict\n   "
  },
  {
    "path": "docs/export_and_inference/marius_postprocess.rst",
    "chars": 2926,
    "preview": ".. _marius_postprocess\n\nModel exporting tool (marius_postprocess)\n==================================================\n\nTh"
  },
  {
    "path": "docs/export_and_inference/marius_predict.rst",
    "chars": 13000,
    "preview": ".. _marius_predict:\n\nBatch Inference (marius_predict)\n==================================================\n\nThis document "
  },
  {
    "path": "docs/graph_learning/decoders.rst",
    "chars": 29,
    "preview": "Decoders\n********************"
  },
  {
    "path": "docs/graph_learning/downstream_tasks.rst",
    "chars": 86,
    "preview": "Downstream Tasks and Applications\n*********************************\n\n- :ref:`lp_paleo`"
  },
  {
    "path": "docs/graph_learning/encoders.rst",
    "chars": 29,
    "preview": "Encoders\n********************"
  },
  {
    "path": "docs/graph_learning/index.rst",
    "chars": 119,
    "preview": "\nGraph Learning\n**************************\n\n.. toctree::\n    :glob:\n    :maxdepth: 2\n\n    intro\n    downstream_tasks\n\n\n"
  },
  {
    "path": "docs/graph_learning/intro.rst",
    "chars": 2785,
    "preview": "Intro to Graph Embeddings\n***************************\n\nA brief overview of graph-structured data, graph embeddings, and "
  },
  {
    "path": "docs/graph_learning/learning_tasks.rst",
    "chars": 35,
    "preview": "Learning Tasks\n********************"
  },
  {
    "path": "docs/index.rst",
    "chars": 345,
    "preview": ".. Marius documentation master file, created by\n    sphinx-quickstart on Tue Oct 20 13:17:05 2020.\n\nMarius\n*************"
  },
  {
    "path": "docs/introduction.rst",
    "chars": 4700,
    "preview": ".. _introduction\n\nIntroduction\n=========================\n\nMarius is a system for scaling graph learning on a single mach"
  },
  {
    "path": "docs/preprocess_datasets/built_in.rst",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "docs/preprocess_datasets/command_line.rst",
    "chars": 4548,
    "preview": "\nCommand Line Preprocessing\n================================\n\nThe preprocessing procedure takes datasets in their raw fo"
  },
  {
    "path": "docs/preprocess_datasets/index.rst",
    "chars": 150,
    "preview": "\nDatasets and Preprocessing\n**************************\n\n.. toctree::\n    :glob:\n    :maxdepth: 2\n    :caption: Contents\n"
  },
  {
    "path": "docs/preprocess_datasets/python.rst",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "docs/python_api/configuration/index.rst",
    "chars": 124,
    "preview": "\nmarius.config\n********************\n\n.. automodule:: marius.config\n    :members:\n    :undoc-members:\n    :imported-membe"
  },
  {
    "path": "docs/python_api/index.rst",
    "chars": 220,
    "preview": "\nPython API\n********************\n\n.. toctree::\n    :glob:\n    :maxdepth: 2\n\n    configuration/index\n    data/index\n    m"
  },
  {
    "path": "docs/python_api/manager/index.rst",
    "chars": 79,
    "preview": "\nmarius.manager\n********************\n\n.. toctree::\n    :glob:\n    :maxdepth: 2\n"
  },
  {
    "path": "docs/python_api/nn/activation.rst",
    "chars": 197,
    "preview": "Activation Functions\n=======================================\n\n.. function:: marius.nn.apply_activation(activation_functi"
  },
  {
    "path": "docs/python_api/nn/decoders/decoder.rst",
    "chars": 157,
    "preview": "Decoder\n=======================================\n\n.. autoclass:: marius.nn.decoders.Decoder\n    :members:\n    :undoc-memb"
  },
  {
    "path": "docs/python_api/nn/decoders/edge/comparators.rst",
    "chars": 545,
    "preview": "Comparator\n=======================================\n\n.. autoclass:: marius.nn.decoders.edge.Comparator\n    :members:\n    "
  },
  {
    "path": "docs/python_api/nn/decoders/edge/complex.rst",
    "chars": 161,
    "preview": "ComplEx\n=======================================\n\n.. autoclass:: marius.nn.decoders.edge.ComplEx\n    :members:\n    :undoc"
  },
  {
    "path": "docs/python_api/nn/decoders/edge/distmult.rst",
    "chars": 163,
    "preview": "DistMult\n=======================================\n\n.. autoclass:: marius.nn.decoders.edge.DistMult\n    :members:\n    :und"
  },
  {
    "path": "docs/python_api/nn/decoders/edge/edge_decoder.rst",
    "chars": 667,
    "preview": "EdgeDecoder\n=======================================\n\n.. autoclass:: marius.nn.decoders.edge.EdgeDecoder\n    :members:\n  "
  },
  {
    "path": "docs/python_api/nn/decoders/edge/index.rst",
    "chars": 161,
    "preview": "\nedge\n********************\n\n.. toctree::\n    :glob:\n    :maxdepth: 2\n\n    comparators\n    complex\n    distmult\n    edge_"
  },
  {
    "path": "docs/python_api/nn/decoders/edge/relation_operators.rst",
    "chars": 670,
    "preview": "RelationOperator\n=======================================\n\n.. autoclass:: marius.nn.decoders.edge.RelationOperator\n    :m"
  },
  {
    "path": "docs/python_api/nn/decoders/edge/transe.rst",
    "chars": 159,
    "preview": "TransE\n=======================================\n\n.. autoclass:: marius.nn.decoders.edge.TransE\n    :members:\n    :undoc-m"
  },
  {
    "path": "docs/python_api/nn/decoders/index.rst",
    "chars": 115,
    "preview": "\ndecoders\n********************\n\n.. toctree::\n    :glob:\n    :maxdepth: 2\n\n    edge/index\n    node/index\n    decoder"
  },
  {
    "path": "docs/python_api/nn/decoders/node/index.rst",
    "chars": 108,
    "preview": "\nnode\n********************\n\n.. toctree::\n    :glob:\n    :maxdepth: 2\n\n    node_decoder\n    noop_node_decoder"
  },
  {
    "path": "docs/python_api/nn/decoders/node/node_decoder.rst",
    "chars": 169,
    "preview": "NodeDecoder\n=======================================\n\n.. autoclass:: marius.nn.decoders.node.NodeDecoder\n    :members:\n  "
  },
  {
    "path": "docs/python_api/nn/decoders/node/noop_node_decoder.rst",
    "chars": 177,
    "preview": "NoOpNodeDecoder\n=======================================\n\n.. autoclass:: marius.nn.decoders.node.NoOpNodeDecoder\n    :mem"
  },
  {
    "path": "docs/python_api/nn/encoders/general_encoder.rst",
    "chars": 668,
    "preview": "GeneralEncoder\n=======================================\n\n.. autoclass:: marius.nn.encoders.GeneralEncoder\n    :members:\n "
  },
  {
    "path": "docs/python_api/nn/encoders/index.rst",
    "chars": 94,
    "preview": "\nencoders\n********************\n\n.. toctree::\n    :glob:\n    :maxdepth: 2\n\n    general_encoder\n"
  },
  {
    "path": "docs/python_api/nn/index.rst",
    "chars": 193,
    "preview": "\nmarius.nn\n********************\n\n.. toctree::\n    :glob:\n    :maxdepth: 2\n\n    decoders/index\n    encoders/index\n    lay"
  },
  {
    "path": "docs/python_api/nn/initialization.rst",
    "chars": 1168,
    "preview": "Initialization\n=======================================\n\n.. autofunction:: marius.nn.compute_fans\n\n.. function:: marius.n"
  },
  {
    "path": "docs/python_api/nn/layers/embedding.rst",
    "chars": 813,
    "preview": "EmbeddingLayer\n=======================================\n\n.. autoclass:: marius.nn.layers.EmbeddingLayer\n    :members:\n   "
  },
  {
    "path": "docs/python_api/nn/layers/feature.rst",
    "chars": 642,
    "preview": "FeatureLayer\n=======================================\n\n.. autoclass:: marius.nn.layers.FeatureLayer\n    :members:\n    :un"
  },
  {
    "path": "docs/python_api/nn/layers/gnn.rst",
    "chars": 2533,
    "preview": "GNNLayer\n=======================================\n\n.. autoclass:: marius.nn.layers.GNNLayer\n    :members:\n    :undoc-memb"
  },
  {
    "path": "docs/python_api/nn/layers/index.rst",
    "chars": 130,
    "preview": "\nlayers\n********************\n\n.. toctree::\n    :glob:\n    :maxdepth: 2\n\n    embedding\n    feature\n    gnn\n    layer\n    "
  },
  {
    "path": "docs/python_api/nn/layers/layer.rst",
    "chars": 273,
    "preview": "Layer\n********************\n\n.. autoclass:: marius.nn.layers.Layer\n    :members:\n    :undoc-members:\n    :exclude-members"
  },
  {
    "path": "docs/python_api/nn/layers/reduction.rst",
    "chars": 1605,
    "preview": "ReductionLayer\n=======================================\n\n.. autoclass:: marius.nn.layers.ReductionLayer\n    :members:\n   "
  },
  {
    "path": "docs/python_api/nn/loss.rst",
    "chars": 904,
    "preview": "Loss Functions\n=======================================\n\n.. autoclass:: marius.nn.LossFunction\n    :members:\n    :undoc-m"
  },
  {
    "path": "docs/python_api/nn/model.rst",
    "chars": 950,
    "preview": "Model\n********************\n\n.. autoclass:: marius.nn.Model\n    :members:\n    :undoc-members:\n    :exclude-members: __ini"
  },
  {
    "path": "docs/python_api/nn/optim.rst",
    "chars": 1243,
    "preview": "Optimizers\n********************\n\n.. autoclass:: marius.nn.Optimizer\n    :members:\n    :undoc-members:\n    :exclude-membe"
  },
  {
    "path": "docs/python_api/pipeline/evaluator.rst",
    "chars": 389,
    "preview": "Evaluator\n=======================================\n\n.. autoclass:: marius.pipeline.Evaluator\n    :members:\n    :undoc-mem"
  },
  {
    "path": "docs/python_api/pipeline/graph_encoder.rst",
    "chars": 419,
    "preview": "GraphEncoder\n=======================================\n\n.. autoclass:: marius.pipeline.GraphEncoder\n    :members:\n    :und"
  },
  {
    "path": "docs/python_api/pipeline/index.rst",
    "chars": 124,
    "preview": "\nmarius.pipeline\n********************\n\n.. toctree::\n    :glob:\n    :maxdepth: 2\n\n    evaluator\n    trainer\n    graph_enc"
  },
  {
    "path": "docs/python_api/pipeline/trainer.rst",
    "chars": 381,
    "preview": "Trainer\n=======================================\n\n.. autoclass:: marius.pipeline.Trainer\n    :members:\n    :undoc-members"
  },
  {
    "path": "docs/python_api/reporting/index.rst",
    "chars": 92,
    "preview": "\nmarius.report\n********************\n\n.. toctree::\n    :glob:\n    :maxdepth: 2\n\n    reporters"
  },
  {
    "path": "docs/python_api/reporting/metrics.rst",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "docs/python_api/reporting/reporters.rst",
    "chars": 1056,
    "preview": "\nReporter\n=======================================\n\n.. autoclass:: marius.report.Reporter\n    :members:\n    :undoc-member"
  },
  {
    "path": "docs/python_api/storage/graph_storage.rst",
    "chars": 3621,
    "preview": "GraphStorage\n=======================================\n\n.. autoclass:: marius.storage.GraphModelStorage\n    :members:\n    "
  },
  {
    "path": "docs/python_api/storage/index.rst",
    "chars": 110,
    "preview": "\nmarius.storage\n********************\n\n.. toctree::\n    :glob:\n    :maxdepth: 2\n\n    graph_storage\n    storage\n"
  },
  {
    "path": "docs/python_api/storage/storage.rst",
    "chars": 2806,
    "preview": "Storage\n=======================================\n\n.. function:: marius.storage.tensor_from_file(filename: str, shape: Lis"
  },
  {
    "path": "docs/python_api/tools/configuration/constants.rst",
    "chars": 137,
    "preview": "constants\n=======================================\n\n.. automodule:: marius.tools.configuration.constants\n    :members:\n  "
  },
  {
    "path": "docs/python_api/tools/configuration/datatypes.rst",
    "chars": 138,
    "preview": "datatypes\n=======================================\n\n.. automodule:: marius.tools.configuration.datatypes\n    :members:\n  "
  },
  {
    "path": "docs/python_api/tools/configuration/index.rst",
    "chars": 125,
    "preview": "\nconfiguration\n********************\n\n.. toctree::\n    :glob:\n    :maxdepth: 2\n\n    constants\n    datatypes\n    marius_co"
  },
  {
    "path": "docs/python_api/tools/configuration/marius_config.rst",
    "chars": 146,
    "preview": "marius_config\n=======================================\n\n.. automodule:: marius.tools.configuration.marius_config\n    :mem"
  },
  {
    "path": "docs/python_api/tools/index.rst",
    "chars": 163,
    "preview": "\nmarius.tools\n********************\n\n.. toctree::\n    :glob:\n    :maxdepth: 2\n\n    configuration/index\n    postprocess/in"
  },
  {
    "path": "docs/python_api/tools/preprocess/converters/index.rst",
    "chars": 143,
    "preview": "\nconverters\n********************\n\n.. automodule:: marius.tools.preprocess.converters\n    :members:\n    :undoc-members:\n "
  },
  {
    "path": "docs/python_api/tools/preprocess/datasets/index.rst",
    "chars": 2069,
    "preview": "\ndatasets\n********************\n\n.. autoclass:: marius.tools.preprocess.dataset.Dataset\n    :members:\n    :undoc-members:"
  },
  {
    "path": "docs/python_api/tools/preprocess/index.rst",
    "chars": 208,
    "preview": "\npreprocess\n********************\n\n.. toctree::\n    :glob:\n    :maxdepth: 2\n\n    converters/index\n    datasets/index\n    "
  },
  {
    "path": "docs/python_api/tools/preprocess/partitioners/index.rst",
    "chars": 157,
    "preview": "\npartitioners\n********************\n\n.. automodule:: marius.tools.preprocess.converters.partitioners\n    :members:\n    :u"
  },
  {
    "path": "docs/python_api/tools/preprocess/readers/index.rst",
    "chars": 146,
    "preview": "\nreaders\n********************\n\n.. automodule:: marius.tools.preprocess.converters.readers\n    :members:\n    :undoc-membe"
  },
  {
    "path": "docs/python_api/tools/preprocess/writers/index.rst",
    "chars": 146,
    "preview": "\nwriters\n********************\n\n.. automodule:: marius.tools.preprocess.converters.writers\n    :members:\n    :undoc-membe"
  },
  {
    "path": "docs/quickstart.rst",
    "chars": 10486,
    "preview": ".. _quickstart\n\nGetting Started\n=========================\n\nBuild and Install\n##############################\n\nRequirement"
  },
  {
    "path": "examples/configuration/custom_lp.yaml",
    "chars": 903,
    "preview": "model:\n  learning_task: LINK_PREDICTION\n  encoder:\n    layers:\n      - - type: EMBEDDING\n          output_dim: 50\n  deco"
  },
  {
    "path": "examples/configuration/custom_nc.yaml",
    "chars": 1251,
    "preview": "model:\n  learning_task: NODE_CLASSIFICATION\n  encoder:\n    train_neighbor_sampling:\n      - type: ALL\n      - type: ALL\n"
  },
  {
    "path": "examples/configuration/fb15k_237.yaml",
    "chars": 888,
    "preview": "model:\n  learning_task: LINK_PREDICTION\n  encoder:\n    layers:\n      - - type: EMBEDDING\n          output_dim: 50\n  deco"
  },
  {
    "path": "examples/configuration/ogbn_arxiv.yaml",
    "chars": 1240,
    "preview": "model:\n  learning_task: NODE_CLASSIFICATION\n  encoder:\n    train_neighbor_sampling:\n      - type: ALL\n      - type: ALL\n"
  },
  {
    "path": "examples/configuration/sakila.yaml",
    "chars": 1190,
    "preview": "model:\n  learning_task: LINK_PREDICTION # set the learning task to link prediction\n  encoder:\n    layers:\n      - - type"
  },
  {
    "path": "examples/db2graph/dockerfile",
    "chars": 1440,
    "preview": "# setup for Marius\nFROM nvidia/cuda:11.4.2-cudnn8-devel-ubuntu20.04\n\nENV TZ=US\n\nRUN ln -snf /usr/share/zoneinfo/$TZ /etc"
  },
  {
    "path": "examples/db2graph/run.sh",
    "chars": 659,
    "preview": "#!/bin/sh\nsystemctl start mysql\nmkdir /db2graph_eg\nwget -O /db2graph_eg/sakila-db.tar.gz https://downloads.mysql.com/doc"
  },
  {
    "path": "examples/docker/README.md",
    "chars": 2849,
    "preview": "# Docker Installation\n\nThe following instructions install the necessary dependencies and build\nthe system using Docker. "
  },
  {
    "path": "examples/docker/cpu_ubuntu/dockerfile",
    "chars": 884,
    "preview": "FROM ubuntu:22.04\nRUN apt update\n\nRUN apt install -y g++ \\\n         make \\\n         wget \\\n         unzip \\\n         vim"
  },
  {
    "path": "examples/docker/gpu_ubuntu/dockerfile",
    "chars": 933,
    "preview": "FROM nvidia/cuda:11.8.0-cudnn8-devel-ubuntu22.04\nRUN apt update\n\nRUN apt install -y g++ \\\n         make \\\n         wget "
  },
  {
    "path": "examples/preprocessing/custom_dataset.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "examples/python/custom.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "examples/python/custom_lp.py",
    "chars": 6284,
    "preview": "from pathlib import Path\n\nfrom omegaconf import OmegaConf\n\nimport marius as m\nfrom marius.tools.preprocess.converters.to"
  },
  {
    "path": "examples/python/custom_nc_graphsage.py",
    "chars": 11766,
    "preview": "from pathlib import Path\n\nimport numpy as np\nimport pandas as pd\nfrom omegaconf import OmegaConf\n\nimport marius as m\nfro"
  },
  {
    "path": "examples/python/fb15k_237.py",
    "chars": 4561,
    "preview": "from pathlib import Path\n\nfrom omegaconf import OmegaConf\n\nimport marius as m\nfrom marius.tools.preprocess.datasets.fb15"
  },
  {
    "path": "examples/python/fb15k_237_gpu.py",
    "chars": 4562,
    "preview": "from pathlib import Path\n\nfrom omegaconf import OmegaConf\n\nimport marius as m\nfrom marius.tools.preprocess.datasets.fb15"
  },
  {
    "path": "examples/python/ogbn_arxiv_nc.py",
    "chars": 4616,
    "preview": "from pathlib import Path\n\nfrom omegaconf import OmegaConf\n\nimport marius as m\nfrom marius.tools.preprocess.datasets.ogbn"
  },
  {
    "path": "pyproject.toml",
    "chars": 237,
    "preview": "[tool.black]\nline-length = 120\n\n[tool.isort]\nprofile = \"black\"\nline_length = 120\n\n[tool.pytest.ini_options]\npythonpath ="
  },
  {
    "path": "setup.cfg",
    "chars": 1683,
    "preview": "[metadata]\nname = marius\nversion = 0.0.2\ndescription = A system for training embeddings for large scale graphs on a sing"
  },
  {
    "path": "setup.py",
    "chars": 3155,
    "preview": "import os\nimport platform\nimport subprocess\nimport sys\n\nfrom setuptools import Extension, setup\nfrom setuptools.command."
  },
  {
    "path": "src/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "src/cpp/cmake/FindSphinx.cmake",
    "chars": 373,
    "preview": "#Look for an executable called sphinx-build\nfind_program(SPHINX_EXECUTABLE\n        NAMES sphinx-build\n        DOC \"Path "
  },
  {
    "path": "src/cpp/include/common/datatypes.h",
    "chars": 2236,
    "preview": "//\n// Created by jasonmohoney on 10/19/19.\n//\n\n#ifndef MARIUS_DATATYPES_H\n#define MARIUS_DATATYPES_H\n\n#include <map>\n#in"
  },
  {
    "path": "src/cpp/include/common/exception.h",
    "chars": 1386,
    "preview": "//\n// Created by Jason Mohoney on 2/4/22.\n//\n\n#ifndef MARIUS_EXCEPTION_H\n#define MARIUS_EXCEPTION_H\n\n#include <exception"
  },
  {
    "path": "src/cpp/include/common/pybind_headers.h",
    "chars": 230,
    "preview": "//\n// Created by Jason Mohoney on 3/7/22.\n//\n\n#ifndef MARIUS_PYBIND_HEADERS_H\n#define MARIUS_PYBIND_HEADERS_H\n\n#include "
  },
  {
    "path": "src/cpp/include/common/util.h",
    "chars": 2587,
    "preview": "//\n// Created by Jason Mohoney on 7/30/20.\n//\n\n#ifndef MARIUS_UTIL_H\n#define MARIUS_UTIL_H\n\n#include \"datatypes.h\"\n\nclas"
  },
  {
    "path": "src/cpp/include/configuration/config.h",
    "chars": 6220,
    "preview": "//\n// Created by Jason Mohoney on 10/7/21.\n//\n\n#ifndef MARIUS_CONFIG_H\n#define MARIUS_CONFIG_H\n\n#include \"common/datatyp"
  },
  {
    "path": "src/cpp/include/configuration/constants.h",
    "chars": 1495,
    "preview": "//\n// Created by Jason Mohoney on 2/18/20.\n//\n\n#ifndef MARIUS_CONSTANTS_H\n#define MARIUS_CONSTANTS_H\n\n#include <string>\n"
  },
  {
    "path": "src/cpp/include/configuration/options.h",
    "chars": 5355,
    "preview": "//\n// Created by Jason Mohoney on 10/7/21.\n//\n\n#ifndef MARIUS_OPTIONS_H\n#define MARIUS_OPTIONS_H\n\n#include \"common/datat"
  },
  {
    "path": "src/cpp/include/configuration/util.h",
    "chars": 276,
    "preview": "//\n// Created by Jason Mohoney on 1/19/22.\n//\n\n#ifndef MARIUS_CONFIGURATION_UTIL_H\n#define MARIUS_CONFIGURATION_UTIL_H\n\n"
  },
  {
    "path": "src/cpp/include/data/batch.h",
    "chars": 3971,
    "preview": "//\n// Created by Jason Mohoney on 7/9/20.\n//\n\n#ifndef MARIUS_BATCH_H\n#define MARIUS_BATCH_H\n\n#include \"common/datatypes."
  },
  {
    "path": "src/cpp/include/data/dataloader.h",
    "chars": 7566,
    "preview": "//\n// Created by jasonmohoney on 10/4/19.\n//\n\n#ifndef MARIUS_DATASET_H\n#define MARIUS_DATASET_H\n\n#include <map>\n#include"
  },
  {
    "path": "src/cpp/include/data/graph.h",
    "chars": 5006,
    "preview": "//\n// Created by Jason Mohoney on 8/25/21.\n//\n\n#ifndef MARIUS_SRC_CPP_INCLUDE_GRAPH_H_\n#define MARIUS_SRC_CPP_INCLUDE_GR"
  },
  {
    "path": "src/cpp/include/data/ordering.h",
    "chars": 2784,
    "preview": "//\n// Created by Jason Mohoney on 7/17/20.\n//\n\n#ifndef MARIUS_ORDERING_H\n#define MARIUS_ORDERING_H\n\n#include \"batch.h\"\n\n"
  },
  {
    "path": "src/cpp/include/data/samplers/edge.h",
    "chars": 767,
    "preview": "//\n// Created by Jason Mohoney on 2/8/22.\n//\n\n#ifndef MARIUS_EDGE_H\n#define MARIUS_EDGE_H\n\n#include \"storage/graph_stora"
  },
  {
    "path": "src/cpp/include/data/samplers/negative.h",
    "chars": 3506,
    "preview": "//\n// Created by Jason Mohoney on 2/8/22.\n//\n\n#ifndef MARIUS_NEGATIVE_H\n#define MARIUS_NEGATIVE_H\n\n#include \"storage/gra"
  },
  {
    "path": "src/cpp/include/data/samplers/neighbor.h",
    "chars": 3669,
    "preview": "//\n// Created by Jason Mohoney on 2/8/22.\n//\n\n#ifndef MARIUS_NEIGHBOR_SAMPLER_H\n#define MARIUS_NEIGHBOR_SAMPLER_H\n\n#incl"
  },
  {
    "path": "src/cpp/include/marius.h",
    "chars": 589,
    "preview": "#include \"configuration/config.h\"\n#include \"data/dataloader.h\"\n#include \"nn/model.h\"\n#include \"storage/graph_storage.h\"\n"
  },
  {
    "path": "src/cpp/include/nn/activation.h",
    "chars": 294,
    "preview": "//\n// Created by Jason Mohoney on 10/7/21.\n//\n\n#ifndef MARIUS_ACTIVATION_H\n#define MARIUS_ACTIVATION_H\n\n#include \"common"
  },
  {
    "path": "src/cpp/include/nn/decoders/decoder.h",
    "chars": 284,
    "preview": "//\n// Created by Jason Mohoney on 9/29/21.\n//\n\n#ifndef MARIUS_DECODER_H\n#define MARIUS_DECODER_H\n\n#include <configuratio"
  },
  {
    "path": "src/cpp/include/nn/decoders/edge/comparators.h",
    "chars": 872,
    "preview": "//\n// Created by Jason Mohoney on 9/29/21.\n//\n\n#ifndef MARIUS_COMPARATOR_H\n#define MARIUS_COMPARATOR_H\n\n#include \"common"
  },
  {
    "path": "src/cpp/include/nn/decoders/edge/complex.h",
    "chars": 516,
    "preview": "//\n// Created by Jason Mohoney on 9/29/21.\n//\n\n#ifndef MARIUS_COMPLEX_H\n#define MARIUS_COMPLEX_H\n\n#include \"nn/decoders/"
  },
  {
    "path": "src/cpp/include/nn/decoders/edge/decoder_methods.h",
    "chars": 1429,
    "preview": "//\n// Created by Jason Mohoney on 3/31/22.\n//\n\n#ifndef MARIUS_DECODER_METHODS_H\n#define MARIUS_DECODER_METHODS_H\n\n#inclu"
  },
  {
    "path": "src/cpp/include/nn/decoders/edge/distmult.h",
    "chars": 523,
    "preview": "//\n// Created by Jason Mohoney on 9/29/21.\n//\n\n#ifndef MARIUS_DISTMULT_H\n#define MARIUS_DISTMULT_H\n\n#include \"nn/decoder"
  },
  {
    "path": "src/cpp/include/nn/decoders/edge/edge_decoder.h",
    "chars": 907,
    "preview": "//\n// Created by Jason Mohoney on 2/6/22.\n//\n\n#ifndef MARIUS_EDGE_DECODER_H\n#define MARIUS_EDGE_DECODER_H\n\n#include \"com"
  },
  {
    "path": "src/cpp/include/nn/decoders/edge/relation_operators.h",
    "chars": 1014,
    "preview": "//\n// Created by Jason Mohoney on 9/29/21.\n//\n\n#ifndef MARIUS_RELATION_OPERATOR_H\n#define MARIUS_RELATION_OPERATOR_H\n\n#i"
  },
  {
    "path": "src/cpp/include/nn/decoders/edge/transe.h",
    "chars": 509,
    "preview": "//\n// Created by Jason Mohoney on 9/29/21.\n//\n\n#ifndef MARIUS_TRANSE_H\n#define MARIUS_TRANSE_H\n\n#include \"nn/decoders/ed"
  },
  {
    "path": "src/cpp/include/nn/decoders/node/node_decoder.h",
    "chars": 290,
    "preview": "//\n// Created by Jason Mohoney on 2/5/22.\n//\n\n#ifndef MARIUS_NODE_DECODER_H\n#define MARIUS_NODE_DECODER_H\n\n#include \"nn/"
  },
  {
    "path": "src/cpp/include/nn/decoders/node/noop_node_decoder.h",
    "chars": 473,
    "preview": "//\n// Created by Jason Mohoney on 2/7/22.\n//\n\n#ifndef MARIUS_NOOP_NODE_DECODER_H\n#define MARIUS_NOOP_NODE_DECODER_H\n\n#in"
  },
  {
    "path": "src/cpp/include/nn/encoders/encoder.h",
    "chars": 1413,
    "preview": "//\n// Created by Jason Mohoney on 10/7/21.\n//\n\n#ifndef MARIUS_ENCODER_H\n#define MARIUS_ENCODER_H\n\n#include \"configuratio"
  },
  {
    "path": "src/cpp/include/nn/initialization.h",
    "chars": 1383,
    "preview": "//\n// Created by Jason Mohoney on 10/7/21.\n//\n\n#ifndef MARIUS_INITIALIZATION_H\n#define MARIUS_INITIALIZATION_H\n\n#include"
  },
  {
    "path": "src/cpp/include/nn/layers/embedding/embedding.h",
    "chars": 519,
    "preview": "//\n// Created by Jason Mohoney on 2/1/22.\n//\n\n#ifndef MARIUS_EMBEDDING_H\n#define MARIUS_EMBEDDING_H\n\n#include \"common/da"
  },
  {
    "path": "src/cpp/include/nn/layers/feature/feature.h",
    "chars": 425,
    "preview": "//\n// Created by Jason Mohoney on 2/1/22.\n//\n\n#ifndef MARIUS_FEATURE_H\n#define MARIUS_FEATURE_H\n\n#include \"common/dataty"
  },
  {
    "path": "src/cpp/include/nn/layers/gnn/gat_layer.h",
    "chars": 612,
    "preview": "//\n// Created by Jason Mohoney on 9/29/21.\n//\n\n#ifndef MARIUS_GAT_LAYER_H\n#define MARIUS_GAT_LAYER_H\n\n#include \"gnn_laye"
  },
  {
    "path": "src/cpp/include/nn/layers/gnn/gcn_layer.h",
    "chars": 523,
    "preview": "//\n// Created by Jason Mohoney on 9/29/21.\n//\n\n#ifndef MARIUS_GCN_LAYER_H\n#define MARIUS_GCN_LAYER_H\n\n#include \"gnn_laye"
  },
  {
    "path": "src/cpp/include/nn/layers/gnn/gnn_layer.h",
    "chars": 519,
    "preview": "//\n// Created by Jason Mohoney on 9/29/21.\n//\n\n#ifndef MARIUS_GNN_LAYER_H\n#define MARIUS_GNN_LAYER_H\n\n#include \"common/d"
  },
  {
    "path": "src/cpp/include/nn/layers/gnn/graph_sage_layer.h",
    "chars": 538,
    "preview": "//\n// Created by Jason Mohoney on 9/29/21.\n//\n\n#ifndef MARIUS_GRAPH_SAGE_LAYER_H\n#define MARIUS_GRAPH_SAGE_LAYER_H\n\n#inc"
  },
  {
    "path": "src/cpp/include/nn/layers/gnn/layer_helpers.h",
    "chars": 802,
    "preview": "//\n// Created by Jason Mohoney on 10/1/21.\n//\n\n#ifndef MARIUS_LAYER_HELPERS_H\n#define MARIUS_LAYER_HELPERS_H\n\n#include \""
  },
  {
    "path": "src/cpp/include/nn/layers/gnn/rgcn_layer.h",
    "chars": 617,
    "preview": "//\n// Created by Jason Mohoney on 9/29/21.\n//\n\n#ifndef MARIUS_RGCN_LAYER_H\n#define MARIUS_RGCN_LAYER_H\n\n#include \"gnn_la"
  },
  {
    "path": "src/cpp/include/nn/layers/layer.h",
    "chars": 555,
    "preview": "//\n// Created by Jason Mohoney on 2/1/22.\n//\n\n#ifndef MARIUS_LAYER_H\n#define MARIUS_LAYER_H\n\n#include \"common/datatypes."
  },
  {
    "path": "src/cpp/include/nn/layers/reduction/concat.h",
    "chars": 428,
    "preview": "//\n// Created by Jason Mohoney on 12/10/21.\n//\n\n#ifndef MARIUS_CONCAT_H\n#define MARIUS_CONCAT_H\n\n#include \"common/dataty"
  },
  {
    "path": "src/cpp/include/nn/layers/reduction/linear.h",
    "chars": 463,
    "preview": "//\n// Created by Jason Mohoney on 12/10/21.\n//\n\n#ifndef MARIUS_LINEAR_H\n#define MARIUS_LINEAR_H\n\n#include \"common/dataty"
  },
  {
    "path": "src/cpp/include/nn/layers/reduction/reduction_layer.h",
    "chars": 546,
    "preview": "//\n// Created by Jason Mohoney on 8/25/21.\n//\n\n#ifndef MARIUS_FEATURIZER_H_\n#define MARIUS_FEATURIZER_H_\n\n#include \"comm"
  },
  {
    "path": "src/cpp/include/nn/loss.h",
    "chars": 3341,
    "preview": "//\n// Created by Jason Mohoney on 8/25/21.\n//\n\n#ifndef MARIUS_SRC_CPP_INCLUDE_LOSS_H_\n#define MARIUS_SRC_CPP_INCLUDE_LOS"
  },
  {
    "path": "src/cpp/include/nn/model.h",
    "chars": 1906,
    "preview": "//\n// Created by Jason Mohoney on 2/11/21.\n//\n\n#ifndef MARIUS_INCLUDE_MODEL_H_\n#define MARIUS_INCLUDE_MODEL_H_\n\n#include"
  },
  {
    "path": "src/cpp/include/nn/model_helpers.h",
    "chars": 2097,
    "preview": "//\n// Created by Jason Mohoney on 9/17/21.\n//\n\n#ifndef MARIUS_MODEL_HELPERS_H\n#define MARIUS_MODEL_HELPERS_H\n\n#include \""
  },
  {
    "path": "src/cpp/include/nn/optim.h",
    "chars": 2633,
    "preview": "//\n// Created by Jason Mohoney on 12/9/21.\n//\n\n#ifndef MARIUS_OPTIM_H\n#define MARIUS_OPTIM_H\n\n#include \"common/datatypes"
  },
  {
    "path": "src/cpp/include/nn/regularizer.h",
    "chars": 648,
    "preview": "//\n// Created by Jason Mohoney on 8/25/21.\n//\n\n#ifndef MARIUS_SRC_CPP_INCLUDE_REGULARIZER_H_\n#define MARIUS_SRC_CPP_INCL"
  },
  {
    "path": "src/cpp/include/pipeline/evaluator.h",
    "chars": 1106,
    "preview": "//\n// Created by Jason Mohoney on 2/28/20.\n//\n\n#ifndef MARIUS_EVALUATOR_H\n#define MARIUS_EVALUATOR_H\n\n#include <iostream"
  },
  {
    "path": "src/cpp/include/pipeline/graph_encoder.h",
    "chars": 1218,
    "preview": "//\n// Created by Jason Mohoney on 1/21/22.\n//\n\n#ifndef MARIUS_GRAPH_ENCODER_H\n#define MARIUS_GRAPH_ENCODER_H\n\n#include \""
  },
  {
    "path": "src/cpp/include/pipeline/pipeline.h",
    "chars": 2686,
    "preview": "//\n// Created by Jason Mohoney on 2/29/20.\n//\n#ifndef MARIUS_PIPELINE_H\n#define MARIUS_PIPELINE_H\n\n#include <time.h>\n\n#i"
  },
  {
    "path": "src/cpp/include/pipeline/pipeline_constants.h",
    "chars": 655,
    "preview": "//\n// Created by Jason Mohoney on 1/21/22.\n//\n\n#ifndef MARIUS_PIPELINE_CONSTANTS_H\n#define MARIUS_PIPELINE_CONSTANTS_H\n\n"
  },
  {
    "path": "src/cpp/include/pipeline/pipeline_cpu.h",
    "chars": 1144,
    "preview": "//\n// Created by Jason Mohoney on 1/21/22.\n//\n\n#ifndef MARIUS_PIPELINE_CPU_H\n#define MARIUS_PIPELINE_CPU_H\n\n#include \"pi"
  },
  {
    "path": "src/cpp/include/pipeline/pipeline_gpu.h",
    "chars": 1994,
    "preview": "//\n// Created by Jason Mohoney on 1/21/22.\n//\n\n#ifndef MARIUS_PIPELINE_GPU_H\n#define MARIUS_PIPELINE_GPU_H\n\n#include \"pi"
  },
  {
    "path": "src/cpp/include/pipeline/pipeline_monitor.h",
    "chars": 153,
    "preview": "//\n// Created by Jason Mohoney on 1/21/22.\n//\n\n#ifndef MARIUS_PIPELINE_MONITOR_H\n#define MARIUS_PIPELINE_MONITOR_H\n\n#end"
  },
  {
    "path": "src/cpp/include/pipeline/queue.h",
    "chars": 2672,
    "preview": "//\n// Created by Jason Mohoney on 1/21/22.\n//\n\n#ifndef MARIUS_QUEUE_H\n#define MARIUS_QUEUE_H\n\ntemplate <class T>\nclass Q"
  },
  {
    "path": "src/cpp/include/pipeline/trainer.h",
    "chars": 1241,
    "preview": "//\n// Created by Jason Mohoney on 2/28/20.\n//\n#ifndef MARIUS_TRAINER_H\n#define MARIUS_TRAINER_H\n\n#include \"data/dataload"
  },
  {
    "path": "src/cpp/include/reporting/logger.h",
    "chars": 2912,
    "preview": "//\n// Created by Jason Mohoney on 7/2/20.\n//\n\n#ifndef MARIUS_LOGGER_H\n#define MARIUS_LOGGER_H\n#define SPDLOG_ACTIVE_LEVE"
  },
  {
    "path": "src/cpp/include/reporting/reporting.h",
    "chars": 3283,
    "preview": "//\n// Created by Jason Mohoney on 8/24/21.\n//\n\n#ifndef MARIUS_SRC_CPP_INCLUDE_REPORTING_H_\n#define MARIUS_SRC_CPP_INCLUD"
  },
  {
    "path": "src/cpp/include/storage/buffer.h",
    "chars": 6013,
    "preview": "//\n// Created by Jason Mohoney on 5/26/20.\n//\n\n#ifndef MARIUS_BUFFER_H\n#define MARIUS_BUFFER_H\n\n#include \"common/datatyp"
  },
  {
    "path": "src/cpp/include/storage/checkpointer.h",
    "chars": 1326,
    "preview": "//\n// Created by Jason Mohoney on 12/15/21.\n//\n\n#ifndef MARIUS_CHECKPOINTER_H\n#define MARIUS_CHECKPOINTER_H\n\n#include \"d"
  },
  {
    "path": "src/cpp/include/storage/graph_storage.h",
    "chars": 11175,
    "preview": "//\n// Created by Jason Mohoney on 6/18/21.\n//\n\n#ifndef MARIUS_SRC_CPP_INCLUDE_GRAPH_STORAGE_H_\n#define MARIUS_SRC_CPP_IN"
  },
  {
    "path": "src/cpp/include/storage/io.h",
    "chars": 2204,
    "preview": "//\n// Created by jasonmohoney on 10/4/19.\n//\n\n#ifndef MARIUS_IO_H\n#define MARIUS_IO_H\n\n#include <sys/ioctl.h>\n#include <"
  },
  {
    "path": "src/cpp/include/storage/storage.h",
    "chars": 6011,
    "preview": "//\n// Created by Jason Mohoney on 4/21/20.\n//\n\n#ifndef MARIUS_STORAGE_H\n#define MARIUS_STORAGE_H\n\n#include <fstream>\n#in"
  },
  {
    "path": "src/cpp/python_bindings/configuration/config_wrap.cpp",
    "chars": 8728,
    "preview": "#include \"common/pybind_headers.h\"\n#include \"configuration/config.h\"\n\nvoid init_config(py::module &m) {\n    py::class_<N"
  },
  {
    "path": "src/cpp/python_bindings/configuration/options_wrap.cpp",
    "chars": 11697,
    "preview": "#include \"common/pybind_headers.h\"\n#include \"configuration/options.h\"\n\nvoid init_options(py::module &m) {\n    py::enum_<"
  },
  {
    "path": "src/cpp/python_bindings/configuration/wrap.cpp",
    "chars": 274,
    "preview": "#include \"common/pybind_headers.h\"\n\n// configuration\nvoid init_config(py::module &);\nvoid init_options(py::module &);\n\nP"
  }
]

// ... and 246 more files (download for full content)
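The manifest above is a plain JSON array of `{path, chars, preview}` objects, so it can be consumed programmatically. A minimal sketch, using a small inline sample of the same shape (in practice you would load the full array from a saved file; `manifest.json` below is a hypothetical filename):

```python
import json
from collections import Counter
from pathlib import PurePosixPath

# Inline sample mirroring the manifest entries above; to process the real
# dump, replace this with something like:
#   entries = json.loads(Path("manifest.json").read_text())
sample = """
[
  {"path": "docs/python_api/pipeline/trainer.rst", "chars": 381, "preview": "Trainer"},
  {"path": "examples/configuration/fb15k_237.yaml", "chars": 888, "preview": "model:"},
  {"path": "src/cpp/include/nn/model.h", "chars": 1906, "preview": "//"}
]
"""

entries = json.loads(sample)

# Total extracted characters across the listed files.
total_chars = sum(e["chars"] for e in entries)

# Count files by extension to get a rough picture of the repository layout.
by_ext = Counter(PurePosixPath(e["path"]).suffix for e in entries)

print(total_chars)     # 3175
print(by_ext[".rst"])  # 1
```

Note that `preview` fields are truncated at a fixed character limit, so they are suitable for skimming and filtering by path, not for recovering file contents.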

About this extraction

This page contains the full source code of the marius-team/marius GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 446 files (1.7 MB), approximately 432.2k tokens, and a symbol index with 1092 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
