Repository: RUCAIBox/RecBole-GNN
Branch: main
Commit: 632ef8885899
Files: 74
Total size: 361.6 KB
Directory structure:
gitextract_g58jlyza/
├── .github/
│ ├── ISSUE_TEMPLATE/
│ │ ├── bug_report.md
│ │ ├── bug_report_CN.md
│ │ ├── feature_request.md
│ │ └── feature_request_CN.md
│ └── workflows/
│ └── python-package.yml
├── .gitignore
├── LICENSE
├── README.md
├── recbole_gnn/
│ ├── config.py
│ ├── data/
│ │ ├── __init__.py
│ │ ├── dataloader.py
│ │ ├── dataset.py
│ │ └── transform.py
│ ├── model/
│ │ ├── abstract_recommender.py
│ │ ├── general_recommender/
│ │ │ ├── __init__.py
│ │ │ ├── directau.py
│ │ │ ├── hmlet.py
│ │ │ ├── lightgcl.py
│ │ │ ├── lightgcn.py
│ │ │ ├── ncl.py
│ │ │ ├── ngcf.py
│ │ │ ├── sgl.py
│ │ │ ├── simgcl.py
│ │ │ ├── ssl4rec.py
│ │ │ └── xsimgcl.py
│ │ ├── layers.py
│ │ ├── sequential_recommender/
│ │ │ ├── __init__.py
│ │ │ ├── gcegnn.py
│ │ │ ├── gcsan.py
│ │ │ ├── lessr.py
│ │ │ ├── niser.py
│ │ │ ├── sgnnhn.py
│ │ │ ├── srgnn.py
│ │ │ └── tagnn.py
│ │ └── social_recommender/
│ │ ├── __init__.py
│ │ ├── diffnet.py
│ │ ├── mhcn.py
│ │ └── sept.py
│ ├── properties/
│ │ ├── model/
│ │ │ ├── DiffNet.yaml
│ │ │ ├── DirectAU.yaml
│ │ │ ├── GCEGNN.yaml
│ │ │ ├── GCSAN.yaml
│ │ │ ├── HMLET.yaml
│ │ │ ├── LESSR.yaml
│ │ │ ├── LightGCL.yaml
│ │ │ ├── LightGCN.yaml
│ │ │ ├── MHCN.yaml
│ │ │ ├── NCL.yaml
│ │ │ ├── NGCF.yaml
│ │ │ ├── NISER.yaml
│ │ │ ├── SEPT.yaml
│ │ │ ├── SGL.yaml
│ │ │ ├── SGNNHN.yaml
│ │ │ ├── SRGNN.yaml
│ │ │ ├── SSL4REC.yaml
│ │ │ ├── SimGCL.yaml
│ │ │ ├── TAGNN.yaml
│ │ │ └── XSimGCL.yaml
│ │ └── quick_start_config/
│ │ ├── sequential_base.yaml
│ │ └── social_base.yaml
│ ├── quick_start.py
│ ├── trainer.py
│ └── utils.py
├── results/
│ ├── README.md
│ ├── general/
│ │ └── ml-1m.md
│ ├── sequential/
│ │ └── diginetica.md
│ └── social/
│ └── lastfm.md
├── run_hyper.py
├── run_recbole_gnn.py
├── run_test.sh
└── tests/
├── test_data/
│ └── test/
│ ├── test.inter
│ └── test.net
├── test_model.py
└── test_model.yaml
================================================
FILE CONTENTS
================================================
================================================
FILE: .github/ISSUE_TEMPLATE/bug_report.md
================================================
---
name: Bug report
about: Create a report to help us improve
title: "[\U0001F41BBUG] Describe your problem in one sentence."
labels: bug
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. extra yaml file
2. your code
3. script for running
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Colab Links**
If applicable, add links to Colab or other Jupyter laboratory platforms that can reproduce the bug.
**Desktop (please complete the following information):**
- OS: [e.g. Linux, macOS or Windows]
- RecBole Version [e.g. 0.1.0]
- Python Version [e.g. 3.7.9]
- PyTorch Version [e.g. 1.6.0]
- cudatoolkit Version [e.g. 9.2, none]
================================================
FILE: .github/ISSUE_TEMPLATE/bug_report_CN.md
================================================
---
name: Bug report (CN)
about: Submit a bug report to help make RecBole-GNN better
title: "[\U0001F41BBUG] Describe your problem in one sentence."
labels: bug
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. the extra yaml file you introduced
2. your code
3. your running script
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
Add screenshots to help explain your problem. (optional)
**Links**
Add links to code that can reproduce the bug, e.g. on Colab or other online Jupyter platforms. (optional)
**Environment (please complete the following information):**
- OS: [e.g. Linux, macOS or Windows]
- RecBole Version [e.g. 0.1.0]
- Python Version [e.g. 3.7.9]
- PyTorch Version [e.g. 1.6.0]
- cudatoolkit Version [e.g. 9.2, none]
================================================
FILE: .github/ISSUE_TEMPLATE/feature_request.md
================================================
---
name: Feature request
about: Suggest an idea for this project
title: "[\U0001F4A1SUG] Description of what you want to happen in one sentence"
labels: enhancement
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
================================================
FILE: .github/ISSUE_TEMPLATE/feature_request_CN.md
================================================
---
name: Feature request (CN)
about: Suggest a new feature/idea for this project
title: "[\U0001F4A1SUG] Describe the feature you would like in one sentence"
labels: enhancement
assignees: ''
---
**Is your feature request related to a problem?**
A clear and concise description of what the problem is, e.g., I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of the solution you want.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered that could achieve this.
**Additional context**
You may add any other material, links or screenshots here to help us understand this new feature.
================================================
FILE: .github/workflows/python-package.yml
================================================
name: RecBole-GNN tests
# Controls when the action will run.
on:
# Triggers the workflow on push or pull request events but only for the master branch
push:
pull_request:
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.9]
torch-version: [2.0.0]
defaults:
run:
shell: bash -l {0}
steps:
- uses: actions/checkout@v2
- name: Setup Miniconda
uses: conda-incubator/setup-miniconda@v2
with:
python-version: ${{ matrix.python-version }}
channels: conda-forge
channel-priority: true
auto-activate-base: true
      # install setuptools as an interim solution for bugs in PyTorch 1.10.2 (#69904)
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install pytest
pip install dgl
pip install torch==${{ matrix.torch-version}}+cpu -f https://download.pytorch.org/whl/torch_stable.html
pip install torch-scatter torch-sparse torch-cluster torch-spline-conv torch-geometric -f https://data.pyg.org/whl/torch-${{ matrix.torch-version }}+cpu.html
pip install recbole==1.1.1
conda install -c conda-forge faiss-cpu
# Use "python -m pytest" instead of "pytest" to fix imports
- name: Test model
run: |
python -m pytest -v tests/test_model.py
================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# RecBole
log_tensorboard/
saved/
dataset/
================================================
FILE: LICENSE
================================================
MIT License
Copyright (c) 2021 RUCAIBox
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
================================================
FILE: README.md
================================================
# RecBole-GNN

-----
*Updates*:
* [Oct 29, 2023] Add [SSL4Rec](https://github.com/RUCAIBox/RecBole-GNN/blob/main/recbole_gnn/model/general_recommender/ssl4rec.py). (https://github.com/RUCAIBox/RecBole-GNN/pull/76, by [@downeykking](https://github.com/downeykking))
* [Oct 23, 2023] Add sparse tensor support, accelerating LightGCN & NGCF by ~5x while using about 1/6 of the GPU memory. (https://github.com/RUCAIBox/RecBole-GNN/pull/75, by [@downeykking](https://github.com/downeykking))
* [Oct 20, 2023] Add [DirectAU](https://github.com/RUCAIBox/RecBole-GNN/blob/main/recbole_gnn/model/general_recommender/directau.py). (https://github.com/RUCAIBox/RecBole-GNN/pull/74, by [@downeykking](https://github.com/downeykking))
* [Oct 16, 2023] Add [XSimGCL](https://github.com/RUCAIBox/RecBole-GNN/blob/main/recbole_gnn/model/general_recommender/xsimgcl.py). (https://github.com/RUCAIBox/RecBole-GNN/pull/72, by [@downeykking](https://github.com/downeykking))
* [Apr 12, 2023] Add [LightGCL](https://github.com/RUCAIBox/RecBole-GNN/blob/main/recbole_gnn/model/general_recommender/lightgcl.py). (https://github.com/RUCAIBox/RecBole-GNN/pull/63, by [@wending0417](https://github.com/wending0417))
* [Oct 29, 2022] Adaptation to RecBole 1.1.1. (https://github.com/RUCAIBox/RecBole-GNN/pull/53)
* [Jun 15, 2022] Add [MultiBehaviorDataset](https://github.com/RUCAIBox/RecBole-GNN/blob/8c61463451b294dce9af2d1939a5e054f7955e0f/recbole_gnn/data/dataset.py#L145). (https://github.com/RUCAIBox/RecBole-GNN/pull/43, by [@Tokkiu](https://github.com/Tokkiu))
-----
**RecBole-GNN** is a library built upon [PyTorch](https://pytorch.org) and [RecBole](https://github.com/RUCAIBox/RecBole) for reproducing and developing recommendation algorithms based on graph neural networks (GNNs). Our library includes algorithms covering three major categories:
* **General Recommendation** with user-item interaction graphs;
* **Sequential Recommendation** with session/sequence graphs;
* **Social Recommendation** with social networks.

## Highlights
* **Easy-to-use and unified API**:
Our library shares the same unified API and input format (atomic files) as RecBole.
* **Efficient and reusable graph processing**:
We provide highly efficient and reusable basic datasets, dataloaders and layers for graph processing and learning.
* **Extensive graph library**:
Graph neural network operators from widely-used libraries like [PyG](https://github.com/pyg-team/pytorch_geometric) are incorporated. Recently proposed graph algorithms can be easily plugged in and compared with existing methods.
## Requirements
```
recbole==1.1.1
pyg>=2.0.4
pytorch>=1.7.0
python>=3.7.0
```
> If you are using `recbole==1.0.1`, please refer to our `recbole1.0.1` branch [[link]](https://github.com/hyp1231/RecBole-GNN/tree/recbole1.0.1).
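To set up these dependencies, here is a minimal CPU-only sketch mirroring our CI workflow (adjust the torch version and the `+cpu` suffix to your platform and CUDA version):
```bash
pip install torch==2.0.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
pip install torch-scatter torch-sparse torch-cluster torch-spline-conv torch-geometric -f https://data.pyg.org/whl/torch-2.0.0+cpu.html
pip install recbole==1.1.1
```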
## Quick-Start
With the source code, you can run the provided script to get started with our library:
```bash
python run_recbole_gnn.py
```
If you want to change the model or dataset, just run the script with additional command-line parameters:
```bash
python run_recbole_gnn.py -m [model] -d [dataset]
```
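For example, to train and evaluate LightGCN on the `ml-1m` dataset used in our leaderboard:
```bash
python run_recbole_gnn.py -m LightGCN -d ml-1m
```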
## Implemented Models
We list currently supported models according to category:
**General Recommendation**:
* **[NGCF](recbole_gnn/model/general_recommender/ngcf.py)** from Wang *et al.*: [Neural Graph Collaborative Filtering](https://arxiv.org/abs/1905.08108) (SIGIR 2019).
* **[LightGCN](recbole_gnn/model/general_recommender/lightgcn.py)** from He *et al.*: [LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation](https://arxiv.org/abs/2002.02126) (SIGIR 2020).
* **[SSL4Rec](recbole_gnn/model/general_recommender/ssl4rec.py)** from Yao *et al.*: [Self-supervised Learning for Large-scale Item Recommendations](https://arxiv.org/abs/2007.12865) (CIKM 2021).
* **[SGL](recbole_gnn/model/general_recommender/sgl.py)** from Wu *et al.*: [Self-supervised Graph Learning for Recommendation](https://arxiv.org/abs/2010.10783) (SIGIR 2021).
* **[HMLET](recbole_gnn/model/general_recommender/hmlet.py)** from Kong *et al.*: [Linear, or Non-Linear, That is the Question!](https://arxiv.org/abs/2111.07265) (WSDM 2022).
* **[NCL](recbole_gnn/model/general_recommender/ncl.py)** from Lin *et al.*: [Improving Graph Collaborative Filtering with Neighborhood-enriched Contrastive Learning](https://arxiv.org/abs/2202.06200) (TheWebConf 2022).
* **[DirectAU](recbole_gnn/model/general_recommender/directau.py)** from Wang *et al.*: [Towards Representation Alignment and Uniformity in Collaborative Filtering](https://arxiv.org/abs/2206.12811) (KDD 2022).
* **[SimGCL](recbole_gnn/model/general_recommender/simgcl.py)** from Yu *et al.*: [Are Graph Augmentations Necessary? Simple Graph Contrastive Learning for Recommendation](https://arxiv.org/abs/2112.08679) (SIGIR 2022).
* **[XSimGCL](recbole_gnn/model/general_recommender/xsimgcl.py)** from Yu *et al.*: [XSimGCL: Towards Extremely Simple Graph Contrastive Learning for Recommendation](https://arxiv.org/abs/2209.02544) (TKDE 2023).
* **[LightGCL](recbole_gnn/model/general_recommender/lightgcl.py)** from Cai *et al.*: [LightGCL: Simple Yet Effective Graph Contrastive Learning for Recommendation](https://arxiv.org/abs/2302.08191) (ICLR 2023).
**Sequential Recommendation**:
* **[SR-GNN](recbole_gnn/model/sequential_recommender/srgnn.py)** from Wu *et al.*: [Session-based Recommendation with Graph Neural Networks](https://arxiv.org/abs/1811.00855) (AAAI 2019).
* **[GC-SAN](recbole_gnn/model/sequential_recommender/gcsan.py)** from Xu *et al.*: [Graph Contextualized Self-Attention Network for Session-based Recommendation](https://www.ijcai.org/proceedings/2019/547) (IJCAI 2019).
* **[NISER+](recbole_gnn/model/sequential_recommender/niser.py)** from Gupta *et al.*: [NISER: Normalized Item and Session Representations to Handle Popularity Bias](https://arxiv.org/abs/1909.04276) (GRLA, CIKM 2019 workshop).
* **[LESSR](recbole_gnn/model/sequential_recommender/lessr.py)** from Chen *et al.*: [Handling Information Loss of Graph Neural Networks for Session-based Recommendation](https://dl.acm.org/doi/10.1145/3394486.3403170) (KDD 2020).
* **[TAGNN](recbole_gnn/model/sequential_recommender/tagnn.py)** from Yu *et al.*: [TAGNN: Target Attentive Graph Neural Networks for Session-based Recommendation](https://arxiv.org/abs/2005.02844) (SIGIR 2020 short).
* **[GCE-GNN](recbole_gnn/model/sequential_recommender/gcegnn.py)** from Wang *et al.*: [Global Context Enhanced Graph Neural Networks for Session-based Recommendation](https://arxiv.org/abs/2106.05081) (SIGIR 2020).
* **[SGNN-HN](recbole_gnn/model/sequential_recommender/sgnnhn.py)** from Pan *et al.*: [Star Graph Neural Networks for Session-based Recommendation](https://dl.acm.org/doi/10.1145/3340531.3412014) (CIKM 2020).
**Social Recommendation**:
> Note that datasets for social recommendation methods can be downloaded from [Social-Datasets](https://github.com/Sherry-XLL/Social-Datasets).
* **[DiffNet](recbole_gnn/model/social_recommender/diffnet.py)** from Wu *et al.*: [A Neural Influence Diffusion Model for Social Recommendation](https://arxiv.org/abs/1904.10322) (SIGIR 2019).
* **[MHCN](recbole_gnn/model/social_recommender/mhcn.py)** from Yu *et al.*: [Self-Supervised Multi-Channel Hypergraph Convolutional Network for Social Recommendation](https://doi.org/10.1145/3442381.3449844) (WWW 2021).
* **[SEPT](recbole_gnn/model/social_recommender/sept.py)** from Yu *et al.*: [Socially-Aware Self-Supervised Tri-Training for Recommendation](https://doi.org/10.1145/3447548.3467340) (KDD 2021).
## Results
### Leaderboard
We carefully tune the hyper-parameters of the implemented models in each research field and release the corresponding leaderboards for reference:
- **General** recommendation on `MovieLens-1M` dataset [[link]](results/general/ml-1m.md);
- **Sequential** recommendation on `Diginetica` dataset [[link]](results/sequential/diginetica.md);
- **Social** recommendation on `LastFM` dataset [[link]](results/social/lastfm.md);
### Efficiency
With our session-graph preprocessing technique and efficient GNN layers, we substantially speed up the training of our sequential recommenders.

## The Team
RecBole-GNN was initially developed and is maintained by members of [RUCAIBox](http://aibox.ruc.edu.cn/); the main developers are Yupeng Hou ([@hyp1231](https://github.com/hyp1231)), Lanling Xu ([@Sherry-XLL](https://github.com/Sherry-XLL)) and Changxin Tian ([@ChangxinTian](https://github.com/ChangxinTian)). We also thank Xinzhou ([@downeykking](https://github.com/downeykking)), Wanli ([@wending0417](https://github.com/wending0417)), and Jingqi ([@Tokkiu](https://github.com/Tokkiu)) for their great contributions! ❤️
## Acknowledgement
The implementation is based on the open-source recommendation library [RecBole](https://github.com/RUCAIBox/RecBole). RecBole-GNN is part of [RecBole 2.0](https://github.com/RUCAIBox/RecBole2.0) now!
Please cite the following papers as references if you use our code or processed datasets.
```bibtex
@inproceedings{zhao2022recbole2,
author={Wayne Xin Zhao and Yupeng Hou and Xingyu Pan and Chen Yang and Zeyu Zhang and Zihan Lin and Jingsen Zhang and Shuqing Bian and Jiakai Tang and Wenqi Sun and Yushuo Chen and Lanling Xu and Gaowei Zhang and Zhen Tian and Changxin Tian and Shanlei Mu and Xinyan Fan and Xu Chen and Ji-Rong Wen},
title={RecBole 2.0: Towards a More Up-to-Date Recommendation Library},
booktitle = {{CIKM}},
year={2022}
}
@inproceedings{zhao2021recbole,
author = {Wayne Xin Zhao and Shanlei Mu and Yupeng Hou and Zihan Lin and Yushuo Chen and Xingyu Pan and Kaiyuan Li and Yujie Lu and Hui Wang and Changxin Tian and Yingqian Min and Zhichao Feng and Xinyan Fan and Xu Chen and Pengfei Wang and Wendi Ji and Yaliang Li and Xiaoling Wang and Ji{-}Rong Wen},
title = {RecBole: Towards a Unified, Comprehensive and Efficient Framework for Recommendation Algorithms},
booktitle = {{CIKM}},
pages = {4653--4664},
publisher = {{ACM}},
year = {2021}
}
```
================================================
FILE: recbole_gnn/config.py
================================================
import os
import recbole
from recbole.config.configurator import Config as RecBole_Config
from recbole.utils import ModelType as RecBoleModelType
from recbole_gnn.utils import get_model, ModelType
class Config(RecBole_Config):
def __init__(self, model=None, dataset=None, config_file_list=None, config_dict=None):
"""
Args:
model (str/AbstractRecommender): the model name or the model class, default is None, if it is None, config
will search the parameter 'model' from the external input as the model name or model class.
dataset (str): the dataset name, default is None, if it is None, config will search the parameter 'dataset'
from the external input as the dataset name.
config_file_list (list of str): the external config file, it allows multiple config files, default is None.
config_dict (dict): the external parameter dictionaries, default is None.
"""
if recbole.__version__ == "1.1.1":
self.compatibility_settings()
super(Config, self).__init__(model, dataset, config_file_list, config_dict)
def compatibility_settings(self):
import numpy as np
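        # NumPy >= 1.20 deprecates (and NumPy >= 1.24 removes) these built-in
        # aliases, but recbole 1.1.1 still references them; restore them so
        # the two versions can work together.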
np.bool = np.bool_
np.int = np.int_
np.float = np.float_
np.complex = np.complex_
np.object = np.object_
np.str = np.str_
np.long = np.int_
np.unicode = np.unicode_
def _get_model_and_dataset(self, model, dataset):
if model is None:
try:
model = self.external_config_dict['model']
except KeyError:
raise KeyError(
                    'model needs to be specified in at least one of these ways: '
'[model variable, config file, config dict, command line] '
)
if not isinstance(model, str):
final_model_class = model
final_model = model.__name__
else:
final_model = model
final_model_class = get_model(final_model)
if dataset is None:
try:
final_dataset = self.external_config_dict['dataset']
except KeyError:
raise KeyError(
                    'dataset needs to be specified in at least one of these ways: '
'[dataset variable, config file, config dict, command line] '
)
else:
final_dataset = dataset
return final_model, final_model_class, final_dataset
def _load_internal_config_dict(self, model, model_class, dataset):
super()._load_internal_config_dict(model, model_class, dataset)
current_path = os.path.dirname(os.path.realpath(__file__))
model_init_file = os.path.join(current_path, './properties/model/' + model + '.yaml')
quick_start_config_path = os.path.join(current_path, './properties/quick_start_config/')
sequential_base_init = os.path.join(quick_start_config_path, 'sequential_base.yaml')
social_base_init = os.path.join(quick_start_config_path, 'social_base.yaml')
if os.path.isfile(model_init_file):
config_dict = self._update_internal_config_dict(model_init_file)
self.internal_config_dict['MODEL_TYPE'] = model_class.type
if self.internal_config_dict['MODEL_TYPE'] == RecBoleModelType.SEQUENTIAL:
self._update_internal_config_dict(sequential_base_init)
if self.internal_config_dict['MODEL_TYPE'] == ModelType.SOCIAL:
self._update_internal_config_dict(social_base_init)
================================================
FILE: recbole_gnn/data/__init__.py
================================================
================================================
FILE: recbole_gnn/data/dataloader.py
================================================
import numpy as np
import torch
from recbole.data.interaction import cat_interactions
from recbole.data.dataloader.general_dataloader import TrainDataLoader, NegSampleEvalDataLoader, FullSortEvalDataLoader
from recbole_gnn.data.transform import gnn_construct_transform
class CustomizedTrainDataLoader(TrainDataLoader):
def __init__(self, config, dataset, sampler, shuffle=False):
super().__init__(config, dataset, sampler, shuffle=shuffle)
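        # Attach the GNN-specific transform (e.g. session-graph batching)
        # when `gnn_transform` is set in the config.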
if config['gnn_transform'] is not None:
self.transform = gnn_construct_transform(config)
class CustomizedNegSampleEvalDataLoader(NegSampleEvalDataLoader):
def __init__(self, config, dataset, sampler, shuffle=False):
super().__init__(config, dataset, sampler, shuffle=shuffle)
if config['gnn_transform'] is not None:
self.transform = gnn_construct_transform(config)
def collate_fn(self, index):
index = np.array(index)
if (
self.neg_sample_args["distribution"] != "none"
and self.neg_sample_args["sample_num"] != "none"
):
uid_list = self.uid_list[index]
data_list = []
idx_list = []
positive_u = []
positive_i = torch.tensor([], dtype=torch.int64)
for idx, uid in enumerate(uid_list):
index = self.uid2index[uid]
data_list.append(self._neg_sampling(self._dataset[index]))
idx_list += [idx for i in range(self.uid2items_num[uid] * self.times)]
positive_u += [idx for i in range(self.uid2items_num[uid])]
positive_i = torch.cat(
(positive_i, self._dataset[index][self.iid_field]), 0
)
cur_data = cat_interactions(data_list)
idx_list = torch.from_numpy(np.array(idx_list)).long()
positive_u = torch.from_numpy(np.array(positive_u)).long()
return self.transform(self._dataset, cur_data), idx_list, positive_u, positive_i
else:
data = self._dataset[index]
transformed_data = self.transform(self._dataset, data)
cur_data = self._neg_sampling(transformed_data)
return cur_data, None, None, None
class CustomizedFullSortEvalDataLoader(FullSortEvalDataLoader):
def __init__(self, config, dataset, sampler, shuffle=False):
super().__init__(config, dataset, sampler, shuffle=shuffle)
if config['gnn_transform'] is not None:
self.transform = gnn_construct_transform(config)
================================================
FILE: recbole_gnn/data/dataset.py
================================================
import os
import torch
import numpy as np
import pandas as pd
from tqdm import tqdm
from torch_geometric.nn.conv.gcn_conv import gcn_norm
from torch_geometric.utils import degree
try:
from torch_sparse import SparseTensor
is_sparse = True
except ImportError:
is_sparse = False
from recbole.data.dataset import SequentialDataset
from recbole.data.dataset import Dataset as RecBoleDataset
from recbole.utils import set_color, FeatureSource
import recbole
import pickle
from recbole.utils import ensure_dir
class GeneralGraphDataset(RecBoleDataset):
def __init__(self, config):
super().__init__(config)
if recbole.__version__ == "1.1.1":
def save(self):
"""Saving this :class:`Dataset` object to :attr:`config['checkpoint_dir']`."""
save_dir = self.config["checkpoint_dir"]
ensure_dir(save_dir)
file = os.path.join(save_dir, f'{self.config["dataset"]}-{self.__class__.__name__}.pth')
self.logger.info(
set_color("Saving filtered dataset into ", "pink") + f"[{file}]"
)
with open(file, "wb") as f:
pickle.dump(self, f)
@staticmethod
def edge_index_to_adj_t(edge_index, edge_weight, m_num_nodes, n_num_nodes):
adj = SparseTensor(row=edge_index[0],
col=edge_index[1],
value=edge_weight,
sparse_sizes=(m_num_nodes, n_num_nodes))
return adj.t()
    def get_norm_adj_mat(self, enable_sparse=False):
        r"""Get the normalized interaction matrix of users and items.

        Construct the square adjacency matrix from the training data and
        symmetrically normalize it with the degree matrix:

        .. math::
            \hat{A} = D^{-0.5} \times A \times D^{-0.5}

        Returns:
            The normalized interaction matrix in Tensor.
        """
        self.is_sparse = is_sparse
row = self.inter_feat[self.uid_field]
col = self.inter_feat[self.iid_field] + self.user_num
edge_index1 = torch.stack([row, col])
edge_index2 = torch.stack([col, row])
edge_index = torch.cat([edge_index1, edge_index2], dim=1)
edge_weight = torch.ones(edge_index.size(1))
num_nodes = self.user_num + self.item_num
if enable_sparse:
if not is_sparse:
self.logger.warning(
"Import `torch_sparse` error, please install corrsponding version of `torch_sparse`. Now we will use dense edge_index instead of SparseTensor in dataset.")
else:
adj_t = self.edge_index_to_adj_t(edge_index, edge_weight, num_nodes, num_nodes)
adj_t = gcn_norm(adj_t, None, num_nodes, add_self_loops=False)
return adj_t, None
edge_index, edge_weight = gcn_norm(edge_index, edge_weight, num_nodes, add_self_loops=False)
return edge_index, edge_weight
    def get_bipartite_inter_mat(self, row='user', row_norm=True):
        r"""Get the normalized bipartite interaction matrix of users and items.

        If ``row_norm`` is True, each edge is normalized by the degree of its
        row node; otherwise symmetric normalization over both endpoints is used.
        """
if row == 'user':
row_field, col_field = self.uid_field, self.iid_field
else:
row_field, col_field = self.iid_field, self.uid_field
row = self.inter_feat[row_field]
col = self.inter_feat[col_field]
edge_index = torch.stack([row, col])
if row_norm:
deg = degree(edge_index[0], self.num(row_field))
norm_deg = 1. / torch.where(deg == 0, torch.ones([1]), deg)
edge_weight = norm_deg[edge_index[0]]
else:
row_deg = degree(edge_index[0], self.num(row_field))
col_deg = degree(edge_index[1], self.num(col_field))
row_norm_deg = 1. / torch.sqrt(torch.where(row_deg == 0, torch.ones([1]), row_deg))
col_norm_deg = 1. / torch.sqrt(torch.where(col_deg == 0, torch.ones([1]), col_deg))
edge_weight = row_norm_deg[edge_index[0]] * col_norm_deg[edge_index[1]]
return edge_index, edge_weight
class SessionGraphDataset(SequentialDataset):
def __init__(self, config):
super().__init__(config)
def session_graph_construction(self):
# Default session graph dataset follows the graph construction operator like SR-GNN.
self.logger.info('Constructing session graphs.')
item_seq = self.inter_feat[self.item_id_list_field]
item_seq_len = self.inter_feat[self.item_list_length_field]
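        # Per session: `x` stores the unique items, `alias_inputs` maps each
        # position of the original sequence to its index in `x`, and
        # `edge_index` connects consecutive clicks (duplicate edges removed).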
x = []
edge_index = []
alias_inputs = []
for i, seq in enumerate(tqdm(list(torch.chunk(item_seq, item_seq.shape[0])))):
seq, idx = torch.unique(seq, return_inverse=True)
x.append(seq)
alias_seq = idx.squeeze(0)[:item_seq_len[i]]
alias_inputs.append(alias_seq)
# No repeat click
edge = torch.stack([alias_seq[:-1], alias_seq[1:]]).unique(dim=-1)
edge_index.append(edge)
self.inter_feat.interaction['graph_idx'] = torch.arange(item_seq.shape[0])
self.graph_objs = {
'x': x,
'edge_index': edge_index,
'alias_inputs': alias_inputs
}
def build(self):
datasets = super().build()
for dataset in datasets:
dataset.session_graph_construction()
return datasets
class MultiBehaviorDataset(SessionGraphDataset):
def session_graph_construction(self):
self.logger.info('Constructing multi-behavior session graphs.')
self.item_behavior_list_field = self.config['ITEM_BEHAVIOR_LIST_FIELD']
self.behavior_id_field = self.config['BEHAVIOR_ID_FIELD']
item_seq = self.inter_feat[self.item_id_list_field]
item_seq_len = self.inter_feat[self.item_list_length_field]
        if self.item_behavior_list_field is None or self.behavior_id_field is None:
# To be compatible with existing datasets
item_behavior_seq = torch.tensor([0] * len(item_seq))
self.behavior_id_field = 'behavior_id'
self.field2id_token[self.behavior_id_field] = {0: 'interaction'}
else:
            item_behavior_seq = self.inter_feat[self.item_behavior_list_field]
edge_index = []
alias_inputs = []
behaviors = torch.unique(item_behavior_seq)
x = {}
for behavior in behaviors:
x[behavior.item()] = []
behavior_seqs = list(torch.chunk(item_behavior_seq, item_seq.shape[0]))
for i, seq in enumerate(tqdm(list(torch.chunk(item_seq, item_seq.shape[0])))):
bseq = behavior_seqs[i]
for behavior in behaviors:
bidx = torch.where(bseq == behavior)
subseq = torch.index_select(seq, 0, bidx[0])
subseq, _ = torch.unique(subseq, return_inverse=True)
x[behavior.item()].append(subseq)
seq, idx = torch.unique(seq, return_inverse=True)
alias_seq = idx.squeeze(0)[:item_seq_len[i]]
alias_inputs.append(alias_seq)
# No repeat click
edge = torch.stack([alias_seq[:-1], alias_seq[1:]]).unique(dim=-1)
edge_index.append(edge)
nx = {}
for k, v in x.items():
behavior_name = self.id2token(self.behavior_id_field, k)
nx[behavior_name] = v
self.inter_feat.interaction['graph_idx'] = torch.arange(item_seq.shape[0])
self.graph_objs = {
'x': nx,
'edge_index': edge_index,
'alias_inputs': alias_inputs
}
class LESSRDataset(SessionGraphDataset):
def session_graph_construction(self):
self.logger.info('Constructing LESSR session graphs.')
item_seq = self.inter_feat[self.item_id_list_field]
item_seq_len = self.inter_feat[self.item_list_length_field]
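        # LESSR uses two graphs per session: an edge-order preserving (EOP)
        # multigraph built from consecutive clicks, and a shortcut graph that
        # links every item to all items clicked after it in the session.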
empty_edge = torch.stack([torch.LongTensor([]), torch.LongTensor([])])
x = []
edge_index_EOP = []
edge_index_shortcut = []
is_last = []
for i, seq in enumerate(tqdm(list(torch.chunk(item_seq, item_seq.shape[0])))):
seq, idx = torch.unique(seq, return_inverse=True)
x.append(seq)
alias_seq = idx.squeeze(0)[:item_seq_len[i]]
edge = torch.stack([alias_seq[:-1], alias_seq[1:]])
edge_index_EOP.append(edge)
last = torch.zeros_like(seq, dtype=torch.bool)
last[alias_seq[-1]] = True
is_last.append(last)
sub_edges = []
for j in range(1, item_seq_len[i]):
sub_edges.append(torch.stack([alias_seq[:-j], alias_seq[j:]]))
shortcut_edge = torch.cat(sub_edges, dim=-1).unique(dim=-1) if len(sub_edges) > 0 else empty_edge
edge_index_shortcut.append(shortcut_edge)
self.inter_feat.interaction['graph_idx'] = torch.arange(item_seq.shape[0])
self.graph_objs = {
'x': x,
'edge_index_EOP': edge_index_EOP,
'edge_index_shortcut': edge_index_shortcut,
'is_last': is_last
}
self.node_attr = ['x', 'is_last']
class GCEGNNDataset(SequentialDataset):
def __init__(self, config):
super().__init__(config)
def reverse_session(self):
self.logger.info('Reversing sessions.')
item_seq = self.inter_feat[self.item_id_list_field]
item_seq_len = self.inter_feat[self.item_list_length_field]
for i in tqdm(range(item_seq.shape[0])):
item_seq[i, :item_seq_len[i]] = item_seq[i, :item_seq_len[i]].flip(dims=[0])
def bidirectional_edge(self, edge_index):
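        # Return a boolean mask over edges whose reverse also occurs in
        # edge_index, i.e. item transitions clicked in both directions.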
seq_len = edge_index.shape[1]
ed = edge_index.T
ed2 = edge_index.T.flip(dims=[1])
idc = ed.unsqueeze(1).expand(-1, seq_len, 2) == ed2.unsqueeze(0).expand(seq_len, -1, 2)
return torch.logical_and(idc[:, :, 0], idc[:, :, 1]).any(dim=-1)
def session_graph_construction(self):
self.logger.info('Constructing session graphs.')
item_seq = self.inter_feat[self.item_id_list_field]
item_seq_len = self.inter_feat[self.item_list_length_field]
x = []
edge_index = []
edge_attr = []
alias_inputs = []
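        # Edge attributes follow the four GCE-GNN edge types: 0 marks self
        # loops, 1 and 2 mark the two traversal directions, and 3 marks
        # transitions that occur in both directions within a session.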
for i, seq in enumerate(tqdm(list(torch.chunk(item_seq, item_seq.shape[0])))):
seq, idx = torch.unique(seq, return_inverse=True)
x.append(seq)
alias_seq = idx.squeeze(0)[:item_seq_len[i]]
alias_inputs.append(alias_seq)
edge_index_backward = torch.stack([alias_seq[:-1], alias_seq[1:]])
edge_attr_backward = torch.where(self.bidirectional_edge(edge_index_backward), 3, 1)
edge_backward = torch.cat([edge_index_backward, edge_attr_backward.unsqueeze(0)], dim=0)
edge_index_forward = torch.stack([alias_seq[1:], alias_seq[:-1]])
edge_attr_forward = torch.where(self.bidirectional_edge(edge_index_forward), 3, 2)
edge_forward = torch.cat([edge_index_forward, edge_attr_forward.unsqueeze(0)], dim=0)
edge_index_selfloop = torch.stack([alias_seq, alias_seq])
edge_selfloop = torch.cat([edge_index_selfloop, torch.zeros([1, edge_index_selfloop.shape[1]])], dim=0)
edge = torch.cat([edge_backward, edge_forward, edge_selfloop], dim=-1).long()
edge = edge.unique(dim=-1)
cur_edge_index = edge[:2]
cur_edge_attr = edge[2]
edge_index.append(cur_edge_index)
edge_attr.append(cur_edge_attr)
self.inter_feat.interaction['graph_idx'] = torch.arange(item_seq.shape[0])
self.graph_objs = {
'x': x,
'edge_index': edge_index,
'edge_attr': edge_attr,
'alias_inputs': alias_inputs
}
def build(self):
datasets = super().build()
for dataset in datasets:
dataset.reverse_session()
dataset.session_graph_construction()
return datasets
class SocialDataset(GeneralGraphDataset):
    """:class:`SocialDataset` is based on :class:`~recbole_gnn.data.dataset.GeneralGraphDataset`,
    and also loads ``.net`` files.
    All users in ``.inter`` and ``.net`` are remapped into the same ID section.
    Users that only exist in the social network are filtered out.
    It also provides several interfaces to transfer ``.net`` features into a coo sparse matrix,
    a csr sparse matrix, :class:`DGL.Graph` or :class:`PyG.Data`.
Attributes:
net_src_field (str): The same as ``config['NET_SOURCE_ID_FIELD']``.
net_tgt_field (str): The same as ``config['NET_TARGET_ID_FIELD']``.
net_feat (pandas.DataFrame): Internal data structure stores the users' social network relations.
It's loaded from file ``.net``.
"""
def __init__(self, config):
super().__init__(config)
def _get_field_from_config(self):
super()._get_field_from_config()
self.net_src_field = self.config['NET_SOURCE_ID_FIELD']
self.net_tgt_field = self.config['NET_TARGET_ID_FIELD']
self.filter_net_by_inter = self.config['filter_net_by_inter']
self.undirected_net = self.config['undirected_net']
self._check_field('net_src_field', 'net_tgt_field')
self.logger.debug(set_color('net_src_field', 'blue') + f': {self.net_src_field}')
self.logger.debug(set_color('net_tgt_field', 'blue') + f': {self.net_tgt_field}')
def _data_filtering(self):
super()._data_filtering()
if self.filter_net_by_inter:
self._filter_net_by_inter()
def _filter_net_by_inter(self):
"""Filter users in ``net_feat`` that don't occur in interactions.
"""
inter_uids = set(self.inter_feat[self.uid_field])
self.net_feat.drop(self.net_feat.index[~self.net_feat[self.net_src_field].isin(inter_uids)], inplace=True)
self.net_feat.drop(self.net_feat.index[~self.net_feat[self.net_tgt_field].isin(inter_uids)], inplace=True)
def _load_data(self, token, dataset_path):
super()._load_data(token, dataset_path)
self.net_feat = self._load_net(self.dataset_name, self.dataset_path)
@property
def net_num(self):
"""Get the number of social network records.
Returns:
int: Number of social network records.
"""
return len(self.net_feat)
def __str__(self):
info = [
super().__str__(),
set_color('The number of social network relations', 'blue') + f': {self.net_num}'
] # yapf: disable
return '\n'.join(info)
def _build_feat_name_list(self):
feat_name_list = super()._build_feat_name_list()
if self.net_feat is not None:
feat_name_list.append('net_feat')
return feat_name_list
def _load_net(self, token, dataset_path):
self.logger.debug(set_color(f'Loading social network from [{dataset_path}].', 'green'))
net_path = os.path.join(dataset_path, f'{token}.net')
if not os.path.isfile(net_path):
raise ValueError(f'[{token}.net] not found in [{dataset_path}].')
df = self._load_feat(net_path, FeatureSource.NET)
if self.undirected_net:
row = df[self.net_src_field]
col = df[self.net_tgt_field]
df_net_src = pd.concat([row, col], axis=0)
df_net_tgt = pd.concat([col, row], axis=0)
df_net_src.name = self.net_src_field
df_net_tgt.name = self.net_tgt_field
df = pd.concat([df_net_src, df_net_tgt], axis=1)
self._check_net(df)
return df
def _check_net(self, net):
net_warn_message = 'net data requires field [{}]'
assert self.net_src_field in net, net_warn_message.format(self.net_src_field)
assert self.net_tgt_field in net, net_warn_message.format(self.net_tgt_field)
def _init_alias(self):
"""Add :attr:`alias_of_user_id`.
"""
self._set_alias('user_id', [self.uid_field, self.net_src_field, self.net_tgt_field])
self._set_alias('item_id', [self.iid_field])
for alias_name_1, alias_1 in self.alias.items():
for alias_name_2, alias_2 in self.alias.items():
if alias_name_1 != alias_name_2:
intersect = np.intersect1d(alias_1, alias_2, assume_unique=True)
if len(intersect) > 0:
raise ValueError(
f'`alias_of_{alias_name_1}` and `alias_of_{alias_name_2}` '
f'should not have the same field {list(intersect)}.'
)
self._rest_fields = self.token_like_fields
for alias_name, alias in self.alias.items():
isin = np.isin(alias, self._rest_fields, assume_unique=True)
            if not isin.all():
raise ValueError(
f'`alias_of_{alias_name}` should not contain '
f'non-token-like field {list(alias[~isin])}.'
)
self._rest_fields = np.setdiff1d(self._rest_fields, alias, assume_unique=True)
    def get_norm_net_adj_mat(self, row_norm=False):
        r"""Get the normalized social adjacency matrix of users.

        Construct the square matrix from the social network data and
        normalize it with the degree matrix:

        .. math::
            \hat{A} = D^{-0.5} \times A \times D^{-0.5}

        Returns:
            The normalized social network matrix in Tensor.
        """
row = self.net_feat[self.net_src_field]
col = self.net_feat[self.net_tgt_field]
edge_index = torch.stack([row, col])
deg = degree(edge_index[0], self.user_num)
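        # row_norm=True yields the random-walk normalization D^{-1}A;
        # otherwise the symmetric D^{-0.5}AD^{-0.5} normalization is used.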
if row_norm:
norm_deg = 1. / torch.where(deg == 0, torch.ones([1]), deg)
edge_weight = norm_deg[edge_index[0]]
else:
norm_deg = 1. / torch.sqrt(torch.where(deg == 0, torch.ones([1]), deg))
edge_weight = norm_deg[edge_index[0]] * norm_deg[edge_index[1]]
return edge_index, edge_weight
def net_matrix(self, form='coo', value_field=None):
"""Get sparse matrix that describe social relations between user_id and user_id.
Sparse matrix has shape (user_num, user_num).
Returns:
scipy.sparse: Sparse matrix in form ``coo`` or ``csr``.
"""
return self._create_sparse_matrix(self.net_feat, self.net_src_field, self.net_tgt_field, form, value_field)
================================================
FILE: recbole_gnn/data/transform.py
================================================
from logging import getLogger
import torch
from torch.nn.utils.rnn import pad_sequence
from recbole.data.interaction import Interaction
def gnn_construct_transform(config):
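    # Map the `gnn_transform` config value to a transform class; currently
    # only session-graph batching is supported.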
if config['gnn_transform'] is None:
raise ValueError('config["gnn_transform"] is None but trying to construct transform.')
str2transform = {
'sess_graph': SessionGraph,
}
return str2transform[config['gnn_transform']](config)
class SessionGraph:
def __init__(self, config):
self.logger = getLogger()
self.logger.info('SessionGraph Transform in DataLoader.')
def __call__(self, dataset, interaction):
graph_objs = dataset.graph_objs
index = interaction['graph_idx']
graph_batch = {
k: [graph_objs[k][_.item()] for _ in index]
for k in graph_objs
}
graph_batch['batch'] = []
tot_node_num = torch.ones([1], dtype=torch.long)
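        # Merge the per-session graphs into one big disconnected graph, PyG
        # Batch-style: node IDs of each graph are shifted by the number of
        # nodes accumulated so far. The offset starts at 1 because a zero
        # padding node is prepended to every node-level attribute below.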
for i in range(index.shape[0]):
for k in graph_batch:
if 'edge_index' in k:
graph_batch[k][i] = graph_batch[k][i] + tot_node_num
if 'alias_inputs' in graph_batch:
graph_batch['alias_inputs'][i] = graph_batch['alias_inputs'][i] + tot_node_num
graph_batch['batch'].append(torch.full_like(graph_batch['x'][i], i))
tot_node_num += graph_batch['x'][i].shape[0]
if hasattr(dataset, 'node_attr'):
node_attr = ['batch'] + dataset.node_attr
else:
node_attr = ['x', 'batch']
for k in node_attr:
graph_batch[k] = [torch.zeros([1], dtype=graph_batch[k][-1].dtype)] + graph_batch[k]
for k in graph_batch:
if k == 'alias_inputs':
graph_batch[k] = pad_sequence(graph_batch[k], batch_first=True)
else:
graph_batch[k] = torch.cat(graph_batch[k], dim=-1)
interaction.update(Interaction(graph_batch))
return interaction
================================================
FILE: recbole_gnn/model/abstract_recommender.py
================================================
from recbole.model.abstract_recommender import GeneralRecommender
from recbole.utils import ModelType as RecBoleModelType
from recbole_gnn.utils import ModelType
class GeneralGraphRecommender(GeneralRecommender):
"""This is an abstract general graph recommender. All the general graph models should implement in this class.
The base general graph recommender class provide the basic U-I graph dataset and parameters information.
"""
type = RecBoleModelType.GENERAL
def __init__(self, config, dataset):
super(GeneralGraphRecommender, self).__init__(config, dataset)
self.edge_index, self.edge_weight = dataset.get_norm_adj_mat(enable_sparse=config["enable_sparse"])
self.use_sparse = config["enable_sparse"] and dataset.is_sparse
if self.use_sparse:
self.edge_index, self.edge_weight = self.edge_index.to(self.device), None
else:
self.edge_index, self.edge_weight = self.edge_index.to(self.device), self.edge_weight.to(self.device)
class SocialRecommender(GeneralRecommender):
"""This is an abstract social recommender. All the social graph model should implement this class.
The base social recommender class provide the basic social graph dataset and parameters information.
"""
type = ModelType.SOCIAL
def __init__(self, config, dataset):
super(SocialRecommender, self).__init__(config, dataset)
================================================
FILE: recbole_gnn/model/general_recommender/__init__.py
================================================
from recbole_gnn.model.general_recommender.lightgcn import LightGCN
from recbole_gnn.model.general_recommender.hmlet import HMLET
from recbole_gnn.model.general_recommender.ncl import NCL
from recbole_gnn.model.general_recommender.ngcf import NGCF
from recbole_gnn.model.general_recommender.sgl import SGL
from recbole_gnn.model.general_recommender.lightgcl import LightGCL
from recbole_gnn.model.general_recommender.simgcl import SimGCL
from recbole_gnn.model.general_recommender.xsimgcl import XSimGCL
from recbole_gnn.model.general_recommender.directau import DirectAU
from recbole_gnn.model.general_recommender.ssl4rec import SSL4REC
================================================
FILE: recbole_gnn/model/general_recommender/directau.py
================================================
# r"""
# DiretAU
# ################################################
# Reference:
# Chenyang Wang et al. "Towards Representation Alignment and Uniformity in Collaborative Filtering." in KDD 2022.
# Reference code:
# https://github.com/THUwangcy/DirectAU
# """
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from recbole.model.init import xavier_normal_initialization
from recbole.utils import InputType
from recbole.model.general_recommender import BPR
from recbole_gnn.model.general_recommender import LightGCN
from recbole_gnn.model.abstract_recommender import GeneralGraphRecommender
class DirectAU(GeneralGraphRecommender):
input_type = InputType.PAIRWISE
def __init__(self, config, dataset):
super(DirectAU, self).__init__(config, dataset)
# load parameters info
self.embedding_size = config['embedding_size']
self.gamma = config['gamma']
self.encoder_name = config['encoder']
# define encoder
if self.encoder_name == 'MF':
self.encoder = MFEncoder(config, dataset)
elif self.encoder_name == 'LightGCN':
self.encoder = LGCNEncoder(config, dataset)
else:
raise ValueError('Non-implemented Encoder.')
# storage variables for full sort evaluation acceleration
self.restore_user_e = None
self.restore_item_e = None
# parameters initialization
self.apply(xavier_normal_initialization)
def forward(self, user, item):
user_e, item_e = self.encoder(user, item)
return F.normalize(user_e, dim=-1), F.normalize(item_e, dim=-1)
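    # Loss terms from the DirectAU paper:
    #   alignment  = E ||f(u) - f(i)||^2 over positive user-item pairs;
    #   uniformity = log E exp(-2 ||f(x) - f(x')||^2) over in-batch pairs.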
@staticmethod
def alignment(x, y, alpha=2):
return (x - y).norm(p=2, dim=1).pow(alpha).mean()
@staticmethod
def uniformity(x, t=2):
return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()
def calculate_loss(self, interaction):
if self.restore_user_e is not None or self.restore_item_e is not None:
self.restore_user_e, self.restore_item_e = None, None
user = interaction[self.USER_ID]
item = interaction[self.ITEM_ID]
user_e, item_e = self.forward(user, item)
align = self.alignment(user_e, item_e)
uniform = self.gamma * (self.uniformity(user_e) + self.uniformity(item_e)) / 2
return align, uniform
    def predict(self, interaction):
        user = interaction[self.USER_ID]
        item = interaction[self.ITEM_ID]
        # DirectAU itself holds no embedding tables; score via the encoder.
        user_e, item_e = self.encoder(user, item)
        return torch.mul(user_e, item_e).sum(dim=1)
def full_sort_predict(self, interaction):
user = interaction[self.USER_ID]
if self.encoder_name == 'LightGCN':
if self.restore_user_e is None or self.restore_item_e is None:
self.restore_user_e, self.restore_item_e = self.encoder.get_all_embeddings()
user_e = self.restore_user_e[user]
all_item_e = self.restore_item_e
else:
user_e = self.encoder.user_embedding(user)
all_item_e = self.encoder.item_embedding.weight
score = torch.matmul(user_e, all_item_e.transpose(0, 1))
return score.view(-1)
class MFEncoder(BPR):
def __init__(self, config, dataset):
super(MFEncoder, self).__init__(config, dataset)
def forward(self, user_id, item_id):
return super().forward(user_id, item_id)
def get_all_embeddings(self):
user_embeddings = self.user_embedding.weight
item_embeddings = self.item_embedding.weight
return user_embeddings, item_embeddings
class LGCNEncoder(LightGCN):
def __init__(self, config, dataset):
super(LGCNEncoder, self).__init__(config, dataset)
def forward(self, user_id, item_id):
user_all_embeddings, item_all_embeddings = self.get_all_embeddings()
u_embed = user_all_embeddings[user_id]
i_embed = item_all_embeddings[item_id]
return u_embed, i_embed
def get_all_embeddings(self):
return super().forward()
================================================
FILE: recbole_gnn/model/general_recommender/hmlet.py
================================================
# @Time : 2022/3/21
# @Author : Yupeng Hou
# @Email : houyupeng@ruc.edu.cn
r"""
HMLET
################################################
Reference:
Taeyong Kong et al. "Linear, or Non-Linear, That is the Question!." in WSDM 2022.
Reference code:
https://github.com/qbxlvnf11/HMLET
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
from recbole.model.init import xavier_uniform_initialization
from recbole.model.loss import BPRLoss, EmbLoss
from recbole.model.layers import activation_layer
from recbole.utils import InputType
from recbole_gnn.model.abstract_recommender import GeneralGraphRecommender
from recbole_gnn.model.layers import LightGCNConv
class Gating_Net(nn.Module):
def __init__(self, embedding_dim, mlp_dims, dropout_p):
super(Gating_Net, self).__init__()
self.embedding_dim = embedding_dim
fc_layers = []
for i in range(len(mlp_dims)):
if i == 0:
fc = nn.Linear(embedding_dim*2, mlp_dims[i])
fc_layers.append(fc)
else:
fc = nn.Linear(mlp_dims[i-1], mlp_dims[i])
fc_layers.append(fc)
if i != len(mlp_dims) - 1:
fc_layers.append(nn.BatchNorm1d(mlp_dims[i]))
fc_layers.append(nn.Dropout(p=dropout_p))
fc_layers.append(nn.ReLU(inplace=True))
self.mlp = nn.Sequential(*fc_layers)
def gumbel_softmax(self, logits, temperature, hard):
"""Sample from the Gumbel-Softmax distribution and optionally discretize.
Args:
logits: [batch_size, n_class] unnormalized log-probs
temperature: non-negative scalar
hard: if True, take argmax, but differentiate w.r.t. soft sample y
Returns:
[batch_size, n_class] sample from the Gumbel-Softmax distribution.
If hard=True, then the returned sample will be one-hot, otherwise it will
            be a probability distribution that sums to 1 across classes.
"""
y = self.gumbel_softmax_sample(logits, temperature) ## (0.6, 0.2, 0.1,..., 0.11)
if hard:
k = logits.size(1) # k is numb of classes
# y_hard = tf.cast(tf.one_hot(tf.argmax(y,1),k), y.dtype) ## (1, 0, 0, ..., 0)
y_hard = torch.eq(y, torch.max(y, dim=1, keepdim=True)[0]).type_as(y)
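            # Straight-through estimator: the forward pass uses the one-hot
            # y_hard while gradients flow through the soft sample y.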
y = (y_hard - y).detach() + y
return y
def gumbel_softmax_sample(self, logits, temperature):
""" Draw a sample from the Gumbel-Softmax distribution"""
noise = self.sample_gumbel(logits)
y = (logits + noise) / temperature
return F.softmax(y, dim=1)
def sample_gumbel(self, logits):
"""Sample from Gumbel(0, 1)"""
noise = torch.rand(logits.size())
eps = 1e-20
noise.add_(eps).log_().neg_()
noise.add_(eps).log_().neg_()
        return noise.to(logits.device)
def forward(self, feature, temperature, hard):
x = self.mlp(feature)
out = self.gumbel_softmax(x, temperature, hard)
out_value = out.unsqueeze(2)
gating_out = out_value.repeat(1, 1, self.embedding_dim)
return gating_out
class HMLET(GeneralGraphRecommender):
r"""HMLET combines both linear and non-linear propagation layers for general recommendation and yields better performance.
"""
input_type = InputType.PAIRWISE
def __init__(self, config, dataset):
super(HMLET, self).__init__(config, dataset)
# load parameters info
self.latent_dim = config['embedding_size'] # int type:the embedding size of lightGCN
self.n_layers = config['n_layers'] # int type:the layer num of lightGCN
self.reg_weight = config['reg_weight'] # float32 type: the weight decay for l2 normalization
self.require_pow = config['require_pow'] # bool type: whether to require pow when regularization
self.gate_layer_ids = config['gate_layer_ids'] # list type: layer ids for non-linear gating
self.gating_mlp_dims = config['gating_mlp_dims'] # list type: list of mlp dimensions in gating module
self.dropout_ratio = config['dropout_ratio'] # dropout ratio for mlp in gating module
self.gum_temp = config['ori_temp']
self.logger.info(f'Model initialization, gumbel softmax temperature: {self.gum_temp}')
# define layers and loss
self.user_embedding = torch.nn.Embedding(num_embeddings=self.n_users, embedding_dim=self.latent_dim)
self.item_embedding = torch.nn.Embedding(num_embeddings=self.n_items, embedding_dim=self.latent_dim)
self.gcn_conv = LightGCNConv(dim=self.latent_dim)
self.activation = nn.ELU() if config['activation_function'] == 'elu' else activation_layer(config['activation_function'])
self.gating_nets = nn.ModuleList([
Gating_Net(self.latent_dim, self.gating_mlp_dims, self.dropout_ratio) for _ in range(len(self.gate_layer_ids))
])
self.mf_loss = BPRLoss()
self.reg_loss = EmbLoss()
# storage variables for full sort evaluation acceleration
self.restore_user_e = None
self.restore_item_e = None
# parameters initialization
self.apply(xavier_uniform_initialization)
self.other_parameter_name = ['restore_user_e', 'restore_item_e', 'gum_temp']
for gating in self.gating_nets:
self._gating_freeze(gating, False)
def _gating_freeze(self, model, freeze_flag):
for name, child in model.named_children():
for param in child.parameters():
param.requires_grad = freeze_flag
def __choosing_one(self, features, gumbel_out):
feature = torch.sum(torch.mul(features, gumbel_out), dim=1) # batch x embedding_dim (or batch x embedding_dim x layer_num)
return feature
def __where(self, idx, lst):
for i in range(len(lst)):
if lst[i] == idx:
return i
raise ValueError(f'{idx} not in {lst}.')
def get_ego_embeddings(self):
r"""Get the embedding of users and items and combine to an embedding matrix.
Returns:
            Tensor of the embedding matrix. Shape of [n_users+n_items, embedding_dim]
"""
user_embeddings = self.user_embedding.weight
item_embeddings = self.item_embedding.weight
ego_embeddings = torch.cat([user_embeddings, item_embeddings], dim=0)
return ego_embeddings
def forward(self):
all_embeddings = self.get_ego_embeddings()
embeddings_list = [all_embeddings]
non_lin_emb_list = [all_embeddings]
for layer_idx in range(self.n_layers):
linear_embeddings = self.gcn_conv(all_embeddings, self.edge_index, self.edge_weight)
if layer_idx not in self.gate_layer_ids:
all_embeddings = linear_embeddings
else:
non_lin_id = self.__where(layer_idx, self.gate_layer_ids)
last_non_lin_emb = non_lin_emb_list[non_lin_id]
non_lin_embeddings = self.activation(self.gcn_conv(last_non_lin_emb, self.edge_index, self.edge_weight))
stack_embeddings = torch.stack([linear_embeddings, non_lin_embeddings], dim=1)
concat_embeddings = torch.cat((linear_embeddings, non_lin_embeddings), dim=-1)
gumbel_out = self.gating_nets[non_lin_id](concat_embeddings, self.gum_temp, not self.training)
all_embeddings = self.__choosing_one(stack_embeddings, gumbel_out)
non_lin_emb_list.append(all_embeddings)
embeddings_list.append(all_embeddings)
hmlet_all_embeddings = torch.stack(embeddings_list, dim=1)
hmlet_all_embeddings = torch.mean(hmlet_all_embeddings, dim=1)
user_all_embeddings, item_all_embeddings = torch.split(hmlet_all_embeddings, [self.n_users, self.n_items])
return user_all_embeddings, item_all_embeddings
def calculate_loss(self, interaction):
# clear the storage variable when training
if self.restore_user_e is not None or self.restore_item_e is not None:
self.restore_user_e, self.restore_item_e = None, None
user = interaction[self.USER_ID]
pos_item = interaction[self.ITEM_ID]
neg_item = interaction[self.NEG_ITEM_ID]
user_all_embeddings, item_all_embeddings = self.forward()
u_embeddings = user_all_embeddings[user]
pos_embeddings = item_all_embeddings[pos_item]
neg_embeddings = item_all_embeddings[neg_item]
# calculate BPR Loss
pos_scores = torch.mul(u_embeddings, pos_embeddings).sum(dim=1)
neg_scores = torch.mul(u_embeddings, neg_embeddings).sum(dim=1)
mf_loss = self.mf_loss(pos_scores, neg_scores)
# calculate regularization Loss
u_ego_embeddings = self.user_embedding(user)
pos_ego_embeddings = self.item_embedding(pos_item)
neg_ego_embeddings = self.item_embedding(neg_item)
reg_loss = self.reg_loss(u_ego_embeddings, pos_ego_embeddings, neg_ego_embeddings, require_pow=self.require_pow)
loss = mf_loss + self.reg_weight * reg_loss
return loss
def predict(self, interaction):
user = interaction[self.USER_ID]
item = interaction[self.ITEM_ID]
user_all_embeddings, item_all_embeddings = self.forward()
u_embeddings = user_all_embeddings[user]
i_embeddings = item_all_embeddings[item]
scores = torch.mul(u_embeddings, i_embeddings).sum(dim=1)
return scores
def full_sort_predict(self, interaction):
user = interaction[self.USER_ID]
if self.restore_user_e is None or self.restore_item_e is None:
self.restore_user_e, self.restore_item_e = self.forward()
# get user embedding from storage variable
u_embeddings = self.restore_user_e[user]
# dot with all item embedding to accelerate
scores = torch.matmul(u_embeddings, self.restore_item_e.transpose(0, 1))
return scores.view(-1)
================================================
FILE: recbole_gnn/model/general_recommender/lightgcl.py
================================================
# -*- coding: utf-8 -*-
# @Time : 2023/04/12
# @Author : Wanli Yang
# @Email : 2013774@mail.nankai.edu.cn
r"""
LightGCL
################################################
Reference:
Xuheng Cai et al. "LightGCL: Simple Yet Effective Graph Contrastive Learning for Recommendation" in ICLR 2023.
Reference code:
https://github.com/HKUDS/LightGCL
"""
import numpy as np
import scipy.sparse as sp
import torch
import torch.nn as nn
from recbole.model.abstract_recommender import GeneralRecommender
from recbole.model.init import xavier_uniform_initialization
from recbole.model.loss import EmbLoss
from recbole.utils import InputType
import torch.nn.functional as F
class LightGCL(GeneralRecommender):
r"""LightGCL is a GCN-based recommender model.
LightGCL guides graph augmentation by singular value decomposition (SVD) to not only
distill the useful information of user-item interactions but also inject the global
collaborative context into the representation alignment of contrastive learning.
We implement the model following the original author with a pairwise training mode.
"""
input_type = InputType.PAIRWISE
def __init__(self, config, dataset):
super(LightGCL, self).__init__(config, dataset)
self._user = dataset.inter_feat[dataset.uid_field]
self._item = dataset.inter_feat[dataset.iid_field]
# load parameters info
self.embed_dim = config["embedding_size"]
self.n_layers = config["n_layers"]
self.dropout = config["dropout"]
self.temp = config["temp"]
self.lambda_1 = config["lambda1"]
self.lambda_2 = config["lambda2"]
self.q = config["q"]
self.act = nn.LeakyReLU(0.5)
self.reg_loss = EmbLoss()
# get the normalized adjacency matrix
self.adj_norm = self.coo2tensor(self.create_adjust_matrix())
# perform svd reconstruction
svd_u, s, svd_v = torch.svd_lowrank(self.adj_norm, q=self.q)
self.u_mul_s = svd_u @ (torch.diag(s))
self.v_mul_s = svd_v @ (torch.diag(s))
del s
self.ut = svd_u.T
self.vt = svd_v.T
self.E_u_0 = nn.Parameter(nn.init.xavier_uniform_(torch.empty(self.n_users, self.embed_dim)))
self.E_i_0 = nn.Parameter(nn.init.xavier_uniform_(torch.empty(self.n_items, self.embed_dim)))
self.E_u_list = [None] * (self.n_layers + 1)
self.E_i_list = [None] * (self.n_layers + 1)
self.E_u_list[0] = self.E_u_0
self.E_i_list[0] = self.E_i_0
self.Z_u_list = [None] * (self.n_layers + 1)
self.Z_i_list = [None] * (self.n_layers + 1)
self.G_u_list = [None] * (self.n_layers + 1)
self.G_i_list = [None] * (self.n_layers + 1)
self.G_u_list[0] = self.E_u_0
self.G_i_list[0] = self.E_i_0
self.E_u = None
self.E_i = None
self.restore_user_e = None
self.restore_item_e = None
self.apply(xavier_uniform_initialization)
self.other_parameter_name = ['restore_user_e', 'restore_item_e']
def create_adjust_matrix(self):
r"""Get the normalized interaction matrix of users and items.
Returns:
coo_matrix of the normalized interaction matrix.
"""
ratings = np.ones_like(self._user, dtype=np.float32)
matrix = sp.csr_matrix(
(ratings, (self._user, self._item)),
shape=(self.n_users, self.n_items),
).tocoo()
rowD = np.squeeze(np.array(matrix.sum(1)), axis=1)
colD = np.squeeze(np.array(matrix.sum(0)), axis=0)
for i in range(len(matrix.data)):
matrix.data[i] = matrix.data[i] / pow(rowD[matrix.row[i]] * colD[matrix.col[i]], 0.5)
return matrix
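# Note: the Python loop above realizes the symmetric normalization
# A_hat[u, i] = A[u, i] / sqrt(deg(u) * deg(i)). A vectorized equivalent
# (a sketch, not part of the original code) would be:
#   matrix.data = matrix.data / np.sqrt(rowD[matrix.row] * colD[matrix.col])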
def coo2tensor(self, matrix: sp.coo_matrix):
r"""Convert coo_matrix to tensor.
Args:
matrix (scipy.sparse.coo_matrix): Sparse matrix to be converted.
Returns:
torch.sparse.FloatTensor: Transformed sparse matrix.
"""
indices = torch.from_numpy(
np.vstack((matrix.row, matrix.col)).astype(np.int64))
values = torch.from_numpy(matrix.data)
shape = torch.Size(matrix.shape)
x = torch.sparse.FloatTensor(indices, values, shape).coalesce().to(self.device)
return x
def sparse_dropout(self, matrix, dropout):
if dropout == 0.0:
return matrix
indices = matrix.indices()
values = F.dropout(matrix.values(), p=dropout)
size = matrix.size()
return torch.sparse.FloatTensor(indices, values, size)
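# Note: F.dropout defaults to training=True when called functionally, so the
# surviving values are rescaled by 1 / (1 - dropout) and dropout is applied
# whenever this method runs (also at evaluation time) unless dropout == 0.0.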
def forward(self):
for layer in range(1, self.n_layers + 1):
# GNN propagation
self.Z_u_list[layer] = torch.spmm(self.sparse_dropout(self.adj_norm, self.dropout),
self.E_i_list[layer - 1])
self.Z_i_list[layer] = torch.spmm(self.sparse_dropout(self.adj_norm, self.dropout).transpose(0, 1),
self.E_u_list[layer - 1])
# aggregate
self.E_u_list[layer] = self.Z_u_list[layer]
self.E_i_list[layer] = self.Z_i_list[layer]
# aggregate across layers
self.E_u = sum(self.E_u_list)
self.E_i = sum(self.E_i_list)
return self.E_u, self.E_i
def calculate_loss(self, interaction):
if self.restore_user_e is not None or self.restore_item_e is not None:
self.restore_user_e, self.restore_item_e = None, None
user_list = interaction[self.USER_ID]
pos_item_list = interaction[self.ITEM_ID]
neg_item_list = interaction[self.NEG_ITEM_ID]
E_u_norm, E_i_norm = self.forward()
bpr_loss = self.calc_bpr_loss(E_u_norm, E_i_norm, user_list, pos_item_list, neg_item_list)
ssl_loss = self.calc_ssl_loss(E_u_norm, E_i_norm, user_list, pos_item_list)
total_loss = bpr_loss + ssl_loss
return total_loss
def calc_bpr_loss(self, E_u_norm, E_i_norm, user_list, pos_item_list, neg_item_list):
r"""Calculate the pairwise Bayesian Personalized Ranking (BPR) loss and parameter regularization loss.
Args:
E_u_norm (torch.Tensor): Ego embedding of all users after forwarding.
E_i_norm (torch.Tensor): Ego embedding of all items after forwarding.
user_list (torch.Tensor): List of user ids.
pos_item_list (torch.Tensor): List of positive examples.
neg_item_list (torch.Tensor): List of negative examples.
Returns:
torch.Tensor: Loss of BPR tasks and parameter regularization.
"""
u_e = E_u_norm[user_list]
pi_e = E_i_norm[pos_item_list]
ni_e = E_i_norm[neg_item_list]
pos_scores = torch.mul(u_e, pi_e).sum(dim=1)
neg_scores = torch.mul(u_e, ni_e).sum(dim=1)
loss1 = -(pos_scores - neg_scores).sigmoid().log().mean()
# reg loss
loss_reg = 0
for param in self.parameters():
loss_reg += param.norm(2).square()
loss_reg *= self.lambda_2
return loss1 + loss_reg
def calc_ssl_loss(self, E_u_norm, E_i_norm, user_list, pos_item_list):
r"""Calculate the loss of self-supervised tasks.
Args:
E_u_norm (torch.Tensor): Ego embedding of all users in the original graph after forwarding.
E_i_norm (torch.Tensor): Ego embedding of all items in the original graph after forwarding.
user_list (torch.Tensor): List of user ids.
pos_item_list (torch.Tensor): List of positive examples.
Returns:
torch.Tensor: Loss of self-supervised tasks.
"""
# calculate G_u_norm & G_i_norm
for layer in range(1, self.n_layers + 1):
# svd_adj propagation
vt_ei = self.vt @ self.E_i_list[layer - 1]
self.G_u_list[layer] = self.u_mul_s @ vt_ei
ut_eu = self.ut @ self.E_u_list[layer - 1]
self.G_i_list[layer] = self.v_mul_s @ ut_eu
# aggregate across layers
G_u_norm = sum(self.G_u_list)
G_i_norm = sum(self.G_i_list)
neg_score = torch.log(torch.exp(G_u_norm[user_list] @ E_u_norm.T / self.temp).sum(1) + 1e-8).mean()
neg_score += torch.log(torch.exp(G_i_norm[pos_item_list] @ E_i_norm.T / self.temp).sum(1) + 1e-8).mean()
pos_score = (torch.clamp((G_u_norm[user_list] * E_u_norm[user_list]).sum(1) / self.temp, -5.0, 5.0)).mean() + (
torch.clamp((G_i_norm[pos_item_list] * E_i_norm[pos_item_list]).sum(1) / self.temp, -5.0, 5.0)).mean()
ssl_loss = -pos_score + neg_score
return self.lambda_1 * ssl_loss
def predict(self, interaction):
if self.restore_user_e is None or self.restore_item_e is None:
self.restore_user_e, self.restore_item_e = self.forward()
user = self.restore_user_e[interaction[self.USER_ID]]
item = self.restore_item_e[interaction[self.ITEM_ID]]
return torch.sum(user * item, dim=1)
def full_sort_predict(self, interaction):
if self.restore_user_e is None or self.restore_item_e is None:
self.restore_user_e, self.restore_item_e = self.forward()
user = self.restore_user_e[interaction[self.USER_ID]]
return user.matmul(self.restore_item_e.T)
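# A minimal self-check sketch (illustrative, not part of the original file):
# caching u_mul_s = U S and vt = V^T makes the SVD view cheap, because
# propagating through the rank-q reconstruction A_hat ~= U S V^T can be done
# as (U S) @ (V^T @ E) without ever materializing the dense reconstruction.
# Shapes below are illustrative assumptions.
if __name__ == "__main__":
    n_users, n_items, dim, q = 100, 200, 16, 8
    adj = torch.rand(n_users, n_items)
    svd_u, s, svd_v = torch.svd_lowrank(adj, q=q)
    E_i = torch.rand(n_items, dim)
    fast = (svd_u @ torch.diag(s)) @ (svd_v.T @ E_i)  # factored propagation
    slow = (svd_u @ torch.diag(s) @ svd_v.T) @ E_i    # dense reconstruction
    assert torch.allclose(fast, slow, atol=1e-3, rtol=1e-4)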
================================================
FILE: recbole_gnn/model/general_recommender/lightgcn.py
================================================
# @Time : 2022/3/8
# @Author : Lanling Xu
# @Email : xulanling_sherry@163.com
r"""
LightGCN
################################################
Reference:
Xiangnan He et al. "LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation." in SIGIR 2020.
Reference code:
https://github.com/kuandeng/LightGCN
"""
import numpy as np
import torch
from recbole.model.init import xavier_uniform_initialization
from recbole.model.loss import BPRLoss, EmbLoss
from recbole.utils import InputType
from recbole_gnn.model.abstract_recommender import GeneralGraphRecommender
from recbole_gnn.model.layers import LightGCNConv
class LightGCN(GeneralGraphRecommender):
r"""LightGCN is a GCN-based recommender model, implemented via PyG.
LightGCN includes only the most essential component in GCN — neighborhood aggregation — for
collaborative filtering. Specifically, LightGCN learns user and item embeddings by linearly
propagating them on the user-item interaction graph, and uses the weighted sum of the embeddings
learned at all layers as the final embedding.
We implement the model following the original author with a pairwise training mode.
"""
input_type = InputType.PAIRWISE
def __init__(self, config, dataset):
super(LightGCN, self).__init__(config, dataset)
# load parameters info
self.latent_dim = config['embedding_size'] # int type: the embedding size of LightGCN
self.n_layers = config['n_layers'] # int type: the layer num of LightGCN
self.reg_weight = config['reg_weight'] # float32 type: the weight decay for l2 normalization
self.require_pow = config['require_pow'] # bool type: whether to require pow when regularization
# define layers and loss
self.user_embedding = torch.nn.Embedding(num_embeddings=self.n_users, embedding_dim=self.latent_dim)
self.item_embedding = torch.nn.Embedding(num_embeddings=self.n_items, embedding_dim=self.latent_dim)
self.gcn_conv = LightGCNConv(dim=self.latent_dim)
self.mf_loss = BPRLoss()
self.reg_loss = EmbLoss()
# storage variables for full sort evaluation acceleration
self.restore_user_e = None
self.restore_item_e = None
# parameters initialization
self.apply(xavier_uniform_initialization)
self.other_parameter_name = ['restore_user_e', 'restore_item_e']
def get_ego_embeddings(self):
r"""Get the embedding of users and items and combine to an embedding matrix.
Returns:
Tensor of the embedding matrix. Shape of [n_items+n_users, embedding_dim]
"""
user_embeddings = self.user_embedding.weight
item_embeddings = self.item_embedding.weight
ego_embeddings = torch.cat([user_embeddings, item_embeddings], dim=0)
return ego_embeddings
def forward(self):
all_embeddings = self.get_ego_embeddings()
embeddings_list = [all_embeddings]
for layer_idx in range(self.n_layers):
all_embeddings = self.gcn_conv(all_embeddings, self.edge_index, self.edge_weight)
embeddings_list.append(all_embeddings)
lightgcn_all_embeddings = torch.stack(embeddings_list, dim=1)
lightgcn_all_embeddings = torch.mean(lightgcn_all_embeddings, dim=1)
user_all_embeddings, item_all_embeddings = torch.split(lightgcn_all_embeddings, [self.n_users, self.n_items])
return user_all_embeddings, item_all_embeddings
def calculate_loss(self, interaction):
# clear the storage variable when training
if self.restore_user_e is not None or self.restore_item_e is not None:
self.restore_user_e, self.restore_item_e = None, None
user = interaction[self.USER_ID]
pos_item = interaction[self.ITEM_ID]
neg_item = interaction[self.NEG_ITEM_ID]
user_all_embeddings, item_all_embeddings = self.forward()
u_embeddings = user_all_embeddings[user]
pos_embeddings = item_all_embeddings[pos_item]
neg_embeddings = item_all_embeddings[neg_item]
# calculate BPR Loss
pos_scores = torch.mul(u_embeddings, pos_embeddings).sum(dim=1)
neg_scores = torch.mul(u_embeddings, neg_embeddings).sum(dim=1)
mf_loss = self.mf_loss(pos_scores, neg_scores)
# calculate regularization Loss
u_ego_embeddings = self.user_embedding(user)
pos_ego_embeddings = self.item_embedding(pos_item)
neg_ego_embeddings = self.item_embedding(neg_item)
reg_loss = self.reg_loss(u_ego_embeddings, pos_ego_embeddings, neg_ego_embeddings, require_pow=self.require_pow)
loss = mf_loss + self.reg_weight * reg_loss
return loss
def predict(self, interaction):
user = interaction[self.USER_ID]
item = interaction[self.ITEM_ID]
user_all_embeddings, item_all_embeddings = self.forward()
u_embeddings = user_all_embeddings[user]
i_embeddings = item_all_embeddings[item]
scores = torch.mul(u_embeddings, i_embeddings).sum(dim=1)
return scores
def full_sort_predict(self, interaction):
user = interaction[self.USER_ID]
if self.restore_user_e is None or self.restore_item_e is None:
self.restore_user_e, self.restore_item_e = self.forward()
# get user embedding from storage variable
u_embeddings = self.restore_user_e[user]
# dot with all item embedding to accelerate
scores = torch.matmul(u_embeddings, self.restore_item_e.transpose(0, 1))
return scores.view(-1)
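# A short note (not part of the original file): with the identity-free LightGCN
# propagation, forward() computes E^(k+1) = A_hat @ E^(k) and the final
# representation is the uniform mean over layers 0..n_layers,
# E = 1 / (n_layers + 1) * sum_k E^(k), which is what the stack + mean realizes.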
================================================
FILE: recbole_gnn/model/general_recommender/ncl.py
================================================
# -*- coding: utf-8 -*-
r"""
NCL
################################################
Reference:
Zihan Lin*, Changxin Tian*, Yupeng Hou*, Wayne Xin Zhao. "Improving Graph Collaborative Filtering with Neighborhood-enriched Contrastive Learning." in WWW 2022.
"""
import torch
import torch.nn.functional as F
from recbole.model.init import xavier_uniform_initialization
from recbole.model.loss import BPRLoss, EmbLoss
from recbole.utils import InputType
from recbole_gnn.model.abstract_recommender import GeneralGraphRecommender
from recbole_gnn.model.layers import LightGCNConv
class NCL(GeneralGraphRecommender):
input_type = InputType.PAIRWISE
def __init__(self, config, dataset):
super(NCL, self).__init__(config, dataset)
# load parameters info
self.latent_dim = config['embedding_size'] # int type: the embedding size of the base model
self.n_layers = config['n_layers'] # int type: the layer num of the base model
self.reg_weight = config['reg_weight'] # float32 type: the weight decay for l2 normalization
self.ssl_temp = config['ssl_temp']
self.ssl_reg = config['ssl_reg']
self.hyper_layers = config['hyper_layers']
self.alpha = config['alpha']
self.proto_reg = config['proto_reg']
self.k = config['num_clusters']
# define layers and loss
self.user_embedding = torch.nn.Embedding(num_embeddings=self.n_users, embedding_dim=self.latent_dim)
self.item_embedding = torch.nn.Embedding(num_embeddings=self.n_items, embedding_dim=self.latent_dim)
self.gcn_conv = LightGCNConv(dim=self.latent_dim)
self.mf_loss = BPRLoss()
self.reg_loss = EmbLoss()
# storage variables for full sort evaluation acceleration
self.restore_user_e = None
self.restore_item_e = None
# parameters initialization
self.apply(xavier_uniform_initialization)
self.other_parameter_name = ['restore_user_e', 'restore_item_e']
self.user_centroids = None
self.user_2cluster = None
self.item_centroids = None
self.item_2cluster = None
def e_step(self):
user_embeddings = self.user_embedding.weight.detach().cpu().numpy()
item_embeddings = self.item_embedding.weight.detach().cpu().numpy()
self.user_centroids, self.user_2cluster = self.run_kmeans(user_embeddings)
self.item_centroids, self.item_2cluster = self.run_kmeans(item_embeddings)
def run_kmeans(self, x):
"""Run K-means algorithm to get k clusters of the input tensor x
"""
import faiss
kmeans = faiss.Kmeans(d=self.latent_dim, k=self.k, gpu=True)
kmeans.train(x)
cluster_cents = kmeans.centroids
_, I = kmeans.index.search(x, 1)
# convert to cuda Tensors for broadcast
centroids = torch.Tensor(cluster_cents).to(self.device)
centroids = F.normalize(centroids, p=2, dim=1)
node2cluster = torch.LongTensor(I).squeeze().to(self.device)
return centroids, node2cluster
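# Note: faiss is an extra dependency assumed here (gpu=True additionally needs
# a GPU build such as faiss-gpu). kmeans.index.search(x, 1) returns a
# (distances, ids) pair, so I holds the nearest-centroid id for each row of x.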
def get_ego_embeddings(self):
r"""Get the embedding of users and items and combine to an embedding matrix.
Returns:
Tensor of the embedding matrix. Shape of [n_items+n_users, embedding_dim]
"""
user_embeddings = self.user_embedding.weight
item_embeddings = self.item_embedding.weight
ego_embeddings = torch.cat([user_embeddings, item_embeddings], dim=0)
return ego_embeddings
def forward(self):
all_embeddings = self.get_ego_embeddings()
embeddings_list = [all_embeddings]
for layer_idx in range(max(self.n_layers, self.hyper_layers * 2)):
all_embeddings = self.gcn_conv(all_embeddings, self.edge_index, self.edge_weight)
embeddings_list.append(all_embeddings)
lightgcn_all_embeddings = torch.stack(embeddings_list[:self.n_layers + 1], dim=1)
lightgcn_all_embeddings = torch.mean(lightgcn_all_embeddings, dim=1)
user_all_embeddings, item_all_embeddings = torch.split(lightgcn_all_embeddings, [self.n_users, self.n_items])
return user_all_embeddings, item_all_embeddings, embeddings_list
def ProtoNCE_loss(self, node_embedding, user, item):
user_embeddings_all, item_embeddings_all = torch.split(node_embedding, [self.n_users, self.n_items])
user_embeddings = user_embeddings_all[user] # [B, e]
norm_user_embeddings = F.normalize(user_embeddings)
user2cluster = self.user_2cluster[user] # [B,]
user2centroids = self.user_centroids[user2cluster] # [B, e]
pos_score_user = torch.mul(norm_user_embeddings, user2centroids).sum(dim=1)
pos_score_user = torch.exp(pos_score_user / self.ssl_temp)
ttl_score_user = torch.matmul(norm_user_embeddings, self.user_centroids.transpose(0, 1))
ttl_score_user = torch.exp(ttl_score_user / self.ssl_temp).sum(dim=1)
proto_nce_loss_user = -torch.log(pos_score_user / ttl_score_user).sum()
item_embeddings = item_embeddings_all[item]
norm_item_embeddings = F.normalize(item_embeddings)
item2cluster = self.item_2cluster[item] # [B, ]
item2centroids = self.item_centroids[item2cluster] # [B, e]
pos_score_item = torch.mul(norm_item_embeddings, item2centroids).sum(dim=1)
pos_score_item = torch.exp(pos_score_item / self.ssl_temp)
ttl_score_item = torch.matmul(norm_item_embeddings, self.item_centroids.transpose(0, 1))
ttl_score_item = torch.exp(ttl_score_item / self.ssl_temp).sum(dim=1)
proto_nce_loss_item = -torch.log(pos_score_item / ttl_score_item).sum()
proto_nce_loss = self.proto_reg * (proto_nce_loss_user + proto_nce_loss_item)
return proto_nce_loss
def ssl_layer_loss(self, current_embedding, previous_embedding, user, item):
current_user_embeddings, current_item_embeddings = torch.split(current_embedding, [self.n_users, self.n_items])
previous_user_embeddings_all, previous_item_embeddings_all = torch.split(previous_embedding, [self.n_users, self.n_items])
current_user_embeddings = current_user_embeddings[user]
previous_user_embeddings = previous_user_embeddings_all[user]
norm_user_emb1 = F.normalize(current_user_embeddings)
norm_user_emb2 = F.normalize(previous_user_embeddings)
norm_all_user_emb = F.normalize(previous_user_embeddings_all)
pos_score_user = torch.mul(norm_user_emb1, norm_user_emb2).sum(dim=1)
ttl_score_user = torch.matmul(norm_user_emb1, norm_all_user_emb.transpose(0, 1))
pos_score_user = torch.exp(pos_score_user / self.ssl_temp)
ttl_score_user = torch.exp(ttl_score_user / self.ssl_temp).sum(dim=1)
ssl_loss_user = -torch.log(pos_score_user / ttl_score_user).sum()
current_item_embeddings = current_item_embeddings[item]
previous_item_embeddings = previous_item_embeddings_all[item]
norm_item_emb1 = F.normalize(current_item_embeddings)
norm_item_emb2 = F.normalize(previous_item_embeddings)
norm_all_item_emb = F.normalize(previous_item_embeddings_all)
pos_score_item = torch.mul(norm_item_emb1, norm_item_emb2).sum(dim=1)
ttl_score_item = torch.matmul(norm_item_emb1, norm_all_item_emb.transpose(0, 1))
pos_score_item = torch.exp(pos_score_item / self.ssl_temp)
ttl_score_item = torch.exp(ttl_score_item / self.ssl_temp).sum(dim=1)
ssl_loss_item = -torch.log(pos_score_item / ttl_score_item).sum()
ssl_loss = self.ssl_reg * (ssl_loss_user + self.alpha * ssl_loss_item)
return ssl_loss
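# Both ProtoNCE_loss and ssl_layer_loss above instantiate the InfoNCE template
# -log(exp(s_pos / tau) / sum_j exp(s_j / tau)): in ProtoNCE the positive pairs
# a node with its assigned prototype and the negatives are all prototypes; in
# the layer-wise loss the positive pairs a node's even-layer view
# (embeddings_list[hyper_layers * 2]) with its initial layer-0 view, and the
# negatives are all layer-0 embeddings of the same type (user or item).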
def calculate_loss(self, interaction):
# clear the storage variable when training
if self.restore_user_e is not None or self.restore_item_e is not None:
self.restore_user_e, self.restore_item_e = None, None
user = interaction[self.USER_ID]
pos_item = interaction[self.ITEM_ID]
neg_item = interaction[self.NEG_ITEM_ID]
user_all_embeddings, item_all_embeddings, embeddings_list = self.forward()
center_embedding = embeddings_list[0]
context_embedding = embeddings_list[self.hyper_layers * 2]
ssl_loss = self.ssl_layer_loss(context_embedding, center_embedding, user, pos_item)
proto_loss = self.ProtoNCE_loss(center_embedding, user, pos_item)
u_embeddings = user_all_embeddings[user]
pos_embeddings = item_all_embeddings[pos_item]
neg_embeddings = item_all_embeddings[neg_item]
# calculate BPR Loss
pos_scores = torch.mul(u_embeddings, pos_embeddings).sum(dim=1)
neg_scores = torch.mul(u_embeddings, neg_embeddings).sum(dim=1)
mf_loss = self.mf_loss(pos_scores, neg_scores)
u_ego_embeddings = self.user_embedding(user)
pos_ego_embeddings = self.item_embedding(pos_item)
neg_ego_embeddings = self.item_embedding(neg_item)
reg_loss = self.reg_loss(u_ego_embeddings, pos_ego_embeddings, neg_ego_embeddings)
return mf_loss + self.reg_weight * reg_loss, ssl_loss, proto_loss
def predict(self, interaction):
user = interaction[self.USER_ID]
item = interaction[self.ITEM_ID]
user_all_embeddings, item_all_embeddings, embeddings_list = self.forward()
u_embeddings = user_all_embeddings[user]
i_embeddings = item_all_embeddings[item]
scores = torch.mul(u_embeddings, i_embeddings).sum(dim=1)
return scores
def full_sort_predict(self, interaction):
user = interaction[self.USER_ID]
if self.restore_user_e is None or self.restore_item_e is None:
self.restore_user_e, self.restore_item_e, embedding_list = self.forward()
# get user embedding from storage variable
u_embeddings = self.restore_user_e[user]
# dot with all item embedding to accelerate
scores = torch.matmul(u_embeddings, self.restore_item_e.transpose(0, 1))
return scores.view(-1)
================================================
FILE: recbole_gnn/model/general_recommender/ngcf.py
================================================
# @Time : 2022/3/8
# @Author : Changxin Tian
# @Email : cx.tian@outlook.com
r"""
NGCF
################################################
Reference:
Xiang Wang et al. "Neural Graph Collaborative Filtering." in SIGIR 2019.
Reference code:
https://github.com/xiangwang1223/neural_graph_collaborative_filtering
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.utils import dropout_adj
from recbole.model.init import xavier_normal_initialization
from recbole.model.loss import BPRLoss, EmbLoss
from recbole.utils import InputType
from recbole_gnn.model.abstract_recommender import GeneralGraphRecommender
from recbole_gnn.model.layers import BiGNNConv
class NGCF(GeneralGraphRecommender):
r"""NGCF is a model that incorporate GNN for recommendation.
We implement the model following the original author with a pairwise training mode.
"""
input_type = InputType.PAIRWISE
def __init__(self, config, dataset):
super(NGCF, self).__init__(config, dataset)
# load parameters info
self.embedding_size = config['embedding_size']
self.hidden_size_list = config['hidden_size_list']
self.hidden_size_list = [self.embedding_size] + self.hidden_size_list
self.node_dropout = config['node_dropout']
self.message_dropout = config['message_dropout']
self.reg_weight = config['reg_weight']
# define layers and loss
self.user_embedding = nn.Embedding(self.n_users, self.embedding_size)
self.item_embedding = nn.Embedding(self.n_items, self.embedding_size)
self.GNNlayers = torch.nn.ModuleList()
for input_size, output_size in zip(self.hidden_size_list[:-1], self.hidden_size_list[1:]):
self.GNNlayers.append(BiGNNConv(input_size, output_size))
self.mf_loss = BPRLoss()
self.reg_loss = EmbLoss()
# storage variables for full sort evaluation acceleration
self.restore_user_e = None
self.restore_item_e = None
# parameters initialization
self.apply(xavier_normal_initialization)
self.other_parameter_name = ['restore_user_e', 'restore_item_e']
def get_ego_embeddings(self):
r"""Get the embedding of users and items and combine to an embedding matrix.
Returns:
Tensor of the embedding matrix. Shape of (n_items+n_users, embedding_dim)
"""
user_embeddings = self.user_embedding.weight
item_embeddings = self.item_embedding.weight
ego_embeddings = torch.cat([user_embeddings, item_embeddings], dim=0)
return ego_embeddings
def forward(self):
if self.node_dropout == 0:
edge_index, edge_weight = self.edge_index, self.edge_weight
else:
edge_index, edge_weight = self.edge_index, self.edge_weight
if self.use_sparse:
row, col, edge_weight = edge_index.t().coo()
edge_index = torch.stack([row, col], 0)
edge_index, edge_weight = dropout_adj(edge_index=edge_index, edge_attr=edge_weight,
p=self.node_dropout, training=self.training)
from torch_sparse import SparseTensor
edge_index = SparseTensor(row=edge_index[0], col=edge_index[1], value=edge_weight,
sparse_sizes=(self.n_users + self.n_items, self.n_users + self.n_items))
edge_index = edge_index.t()
edge_weight = None
else:
edge_index, edge_weight = dropout_adj(edge_index=edge_index, edge_attr=edge_weight,
p=self.node_dropout, training=self.training)
all_embeddings = self.get_ego_embeddings()
embeddings_list = [all_embeddings]
for gnn in self.GNNlayers:
all_embeddings = gnn(all_embeddings, edge_index, edge_weight)
all_embeddings = nn.LeakyReLU(negative_slope=0.2)(all_embeddings)
all_embeddings = nn.Dropout(self.message_dropout)(all_embeddings)
all_embeddings = F.normalize(all_embeddings, p=2, dim=1)
embeddings_list += [all_embeddings] # store the output embedding of each layer
ngcf_all_embeddings = torch.cat(embeddings_list, dim=1)
user_all_embeddings, item_all_embeddings = torch.split(ngcf_all_embeddings, [self.n_users, self.n_items])
return user_all_embeddings, item_all_embeddings
def calculate_loss(self, interaction):
# clear the storage variable when training
if self.restore_user_e is not None or self.restore_item_e is not None:
self.restore_user_e, self.restore_item_e = None, None
user = interaction[self.USER_ID]
pos_item = interaction[self.ITEM_ID]
neg_item = interaction[self.NEG_ITEM_ID]
user_all_embeddings, item_all_embeddings = self.forward()
u_embeddings = user_all_embeddings[user]
pos_embeddings = item_all_embeddings[pos_item]
neg_embeddings = item_all_embeddings[neg_item]
pos_scores = torch.mul(u_embeddings, pos_embeddings).sum(dim=1)
neg_scores = torch.mul(u_embeddings, neg_embeddings).sum(dim=1)
mf_loss = self.mf_loss(pos_scores, neg_scores) # calculate BPR Loss
reg_loss = self.reg_loss(u_embeddings, pos_embeddings, neg_embeddings) # L2 regularization of embeddings
return mf_loss + self.reg_weight * reg_loss
def predict(self, interaction):
user = interaction[self.USER_ID]
item = interaction[self.ITEM_ID]
user_all_embeddings, item_all_embeddings = self.forward()
u_embeddings = user_all_embeddings[user]
i_embeddings = item_all_embeddings[item]
scores = torch.mul(u_embeddings, i_embeddings).sum(dim=1)
return scores
def full_sort_predict(self, interaction):
user = interaction[self.USER_ID]
if self.restore_user_e is None or self.restore_item_e is None:
self.restore_user_e, self.restore_item_e = self.forward()
# get user embedding from storage variable
u_embeddings = self.restore_user_e[user]
# dot with all item embedding to accelerate
scores = torch.matmul(u_embeddings, self.restore_item_e.transpose(0, 1))
return scores.view(-1)
================================================
FILE: recbole_gnn/model/general_recommender/sgl.py
================================================
# -*- coding: utf-8 -*-
# @Time : 2022/3/8
# @Author : Changxin Tian
# @Email : cx.tian@outlook.com
r"""
SGL
################################################
Reference:
Jiancan Wu et al. "SGL: Self-supervised Graph Learning for Recommendation" in SIGIR 2021.
Reference code:
https://github.com/wujcan/SGL
"""
import numpy as np
import torch
import torch.nn.functional as F
from torch_geometric.utils import degree
from torch_geometric.nn.conv.gcn_conv import gcn_norm
from recbole.model.init import xavier_uniform_initialization
from recbole.model.loss import EmbLoss
from recbole.utils import InputType
from recbole_gnn.model.abstract_recommender import GeneralGraphRecommender
from recbole_gnn.model.layers import LightGCNConv
class SGL(GeneralGraphRecommender):
r"""SGL is a GCN-based recommender model.
SGL supplements the classical supervised task of recommendation with an auxiliary
self-supervised task, which reinforces node representation learning via
self-discrimination. Specifically, SGL generates multiple views of a node, maximizing the
agreement between different views of the same node compared to that of other nodes.
SGL devises three operators to generate the views — node dropout, edge dropout, and
random walk — that change the graph structure in different manners.
We implement the model following the original author with a pairwise training mode.
"""
input_type = InputType.PAIRWISE
def __init__(self, config, dataset):
super(SGL, self).__init__(config, dataset)
# load parameters info
self.latent_dim = config["embedding_size"]
self.n_layers = int(config["n_layers"])
self.aug_type = config["type"]
self.drop_ratio = config["drop_ratio"]
self.ssl_tau = config["ssl_tau"]
self.reg_weight = config["reg_weight"]
self.ssl_weight = config["ssl_weight"]
self._user = dataset.inter_feat[dataset.uid_field]
self._item = dataset.inter_feat[dataset.iid_field]
self.dataset = dataset
# define layers and loss
self.user_embedding = torch.nn.Embedding(self.n_users, self.latent_dim)
self.item_embedding = torch.nn.Embedding(self.n_items, self.latent_dim)
self.gcn_conv = LightGCNConv(dim=self.latent_dim)
self.reg_loss = EmbLoss()
# storage variables for full sort evaluation acceleration
self.restore_user_e = None
self.restore_item_e = None
# parameters initialization
self.apply(xavier_uniform_initialization)
self.other_parameter_name = ['restore_user_e', 'restore_item_e']
def train(self, mode: bool = True):
r"""Override train method of base class. The subgraph is reconstructed each time it is called.
"""
T = super().train(mode=mode)
if mode:
self.graph_construction()
return T
def graph_construction(self):
r"""Devise three operators to generate the views — node dropout, edge dropout, and random walk of a node.
"""
if self.aug_type == "ND" or self.aug_type == "ED":
self.sub_graph1 = [self.random_graph_augment()] * self.n_layers
self.sub_graph2 = [self.random_graph_augment()] * self.n_layers
elif self.aug_type == "RW":
self.sub_graph1 = [self.random_graph_augment() for _ in range(self.n_layers)]
self.sub_graph2 = [self.random_graph_augment() for _ in range(self.n_layers)]
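# For "ND"/"ED" a single augmented graph is sampled and shared by all layers
# (the list repeats one (edge_index, edge_weight) pair), while "RW" resamples
# an independent subgraph for every layer, which is what makes it a random walk.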
def random_graph_augment(self):
def rand_sample(high, size=None, replace=True):
return np.random.choice(np.arange(high), size=size, replace=replace)
if self.aug_type == "ND":
drop_user = rand_sample(self.n_users, size=int(self.n_users * self.drop_ratio), replace=False)
drop_item = rand_sample(self.n_items, size=int(self.n_items * self.drop_ratio), replace=False)
mask = np.isin(self._user.numpy(), drop_user)
mask |= np.isin(self._item.numpy(), drop_item)
keep = np.where(~mask)
row = self._user[keep]
col = self._item[keep] + self.n_users
elif self.aug_type == "ED" or self.aug_type == "RW":
keep = rand_sample(len(self._user), size=int(len(self._user) * (1 - self.drop_ratio)), replace=False)
row = self._user[keep]
col = self._item[keep] + self.n_users
edge_index1 = torch.stack([row, col])
edge_index2 = torch.stack([col, row])
edge_index = torch.cat([edge_index1, edge_index2], dim=1)
edge_weight = torch.ones(edge_index.size(1))
num_nodes = self.n_users + self.n_items
if self.use_sparse:
adj_t = self.dataset.edge_index_to_adj_t(edge_index, edge_weight, num_nodes, num_nodes)
adj_t = gcn_norm(adj_t, None, num_nodes, add_self_loops=False)
return adj_t.to(self.device), None
edge_index, edge_weight = gcn_norm(edge_index, edge_weight, num_nodes, add_self_loops=False)
return edge_index.to(self.device), edge_weight.to(self.device)
def forward(self, graph=None):
all_embeddings = torch.cat([self.user_embedding.weight, self.item_embedding.weight])
embeddings_list = [all_embeddings]
if graph is None: # for the original graph
for _ in range(self.n_layers):
all_embeddings = self.gcn_conv(all_embeddings, self.edge_index, self.edge_weight)
embeddings_list.append(all_embeddings)
else: # for the augmented graph
for graph_edge_index, graph_edge_weight in graph:
all_embeddings = self.gcn_conv(all_embeddings, graph_edge_index, graph_edge_weight)
embeddings_list.append(all_embeddings)
embeddings_list = torch.stack(embeddings_list, dim=1)
embeddings_list = torch.mean(embeddings_list, dim=1, keepdim=False)
user_all_embeddings, item_all_embeddings = torch.split(embeddings_list, [self.n_users, self.n_items], dim=0)
return user_all_embeddings, item_all_embeddings
def calc_bpr_loss(self, user_emd, item_emd, user_list, pos_item_list, neg_item_list):
r"""Calculate the the pairwise Bayesian Personalized Ranking (BPR) loss and parameter regularization loss.
Args:
user_emd (torch.Tensor): Ego embedding of all users after forwarding.
item_emd (torch.Tensor): Ego embedding of all items after forwarding.
user_list (torch.Tensor): List of user ids.
pos_item_list (torch.Tensor): List of positive examples.
neg_item_list (torch.Tensor): List of negative examples.
Returns:
torch.Tensor: Loss of BPR tasks and parameter regularization.
"""
u_e = user_emd[user_list]
pi_e = item_emd[pos_item_list]
ni_e = item_emd[neg_item_list]
p_scores = torch.mul(u_e, pi_e).sum(dim=1)
n_scores = torch.mul(u_e, ni_e).sum(dim=1)
l1 = torch.sum(-F.logsigmoid(p_scores - n_scores))
u_e_p = self.user_embedding(user_list)
pi_e_p = self.item_embedding(pos_item_list)
ni_e_p = self.item_embedding(neg_item_list)
l2 = self.reg_loss(u_e_p, pi_e_p, ni_e_p)
return l1 + l2 * self.reg_weight
def calc_ssl_loss(self, user_list, pos_item_list, user_sub1, user_sub2, item_sub1, item_sub2):
r"""Calculate the loss of self-supervised tasks.
Args:
user_list (torch.Tensor): List of user ids.
pos_item_list (torch.Tensor): List of positive examples.
user_sub1 (torch.Tensor): Ego embedding of all users in the first subgraph after forwarding.
user_sub2 (torch.Tensor): Ego embedding of all users in the second subgraph after forwarding.
item_sub1 (torch.Tensor): Ego embedding of all items in the first subgraph after forwarding.
item_sub2 (torch.Tensor): Ego embedding of all items in the second subgraph after forwarding.
Returns:
torch.Tensor: Loss of self-supervised tasks.
"""
u_emd1 = F.normalize(user_sub1[user_list], dim=1)
u_emd2 = F.normalize(user_sub2[user_list], dim=1)
all_user2 = F.normalize(user_sub2, dim=1)
v1 = torch.sum(u_emd1 * u_emd2, dim=1)
v2 = u_emd1.matmul(all_user2.T)
v1 = torch.exp(v1 / self.ssl_tau)
v2 = torch.sum(torch.exp(v2 / self.ssl_tau), dim=1)
ssl_user = -torch.sum(torch.log(v1 / v2))
i_emd1 = F.normalize(item_sub1[pos_item_list], dim=1)
i_emd2 = F.normalize(item_sub2[pos_item_list], dim=1)
all_item2 = F.normalize(item_sub2, dim=1)
v3 = torch.sum(i_emd1 * i_emd2, dim=1)
v4 = i_emd1.matmul(all_item2.T)
v3 = torch.exp(v3 / self.ssl_tau)
v4 = torch.sum(torch.exp(v4 / self.ssl_tau), dim=1)
ssl_item = -torch.sum(torch.log(v3 / v4))
return (ssl_item + ssl_user) * self.ssl_weight
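# Note (a sketch, not the original code): the exp/sum/log chain above can
# overflow for small ssl_tau; an equivalent, more numerically stable form is
#   logits = sims / self.ssl_tau
#   ssl = -torch.sum(pos_logits - torch.logsumexp(all_logits, dim=1))
# where pos_logits are v1/v3 before exp and all_logits are v2/v4 before exp.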
def calculate_loss(self, interaction):
if self.restore_user_e is not None or self.restore_item_e is not None:
self.restore_user_e, self.restore_item_e = None, None
user_list = interaction[self.USER_ID]
pos_item_list = interaction[self.ITEM_ID]
neg_item_list = interaction[self.NEG_ITEM_ID]
user_emd, item_emd = self.forward()
user_sub1, item_sub1 = self.forward(self.sub_graph1)
user_sub2, item_sub2 = self.forward(self.sub_graph2)
total_loss = self.calc_bpr_loss(user_emd, item_emd, user_list, pos_item_list, neg_item_list) + \
self.calc_ssl_loss(user_list, pos_item_list, user_sub1, user_sub2, item_sub1, item_sub2)
return total_loss
def predict(self, interaction):
if self.restore_user_e is None or self.restore_item_e is None:
self.restore_user_e, self.restore_item_e = self.forward()
user = self.restore_user_e[interaction[self.USER_ID]]
item = self.restore_item_e[interaction[self.ITEM_ID]]
return torch.sum(user * item, dim=1)
def full_sort_predict(self, interaction):
if self.restore_user_e is None or self.restore_item_e is None:
self.restore_user_e, self.restore_item_e = self.forward()
user = self.restore_user_e[interaction[self.USER_ID]]
return user.matmul(self.restore_item_e.T)
================================================
FILE: recbole_gnn/model/general_recommender/simgcl.py
================================================
# -*- coding: utf-8 -*-
r"""
SimGCL
################################################
Reference:
Junliang Yu, Hongzhi Yin, Xin Xia, Tong Chen, Lizhen Cui, Quoc Viet Hung Nguyen. "Are Graph Augmentations Necessary? Simple Graph Contrastive Learning for Recommendation." in SIGIR 2022.
"""
import torch
import torch.nn.functional as F
from recbole_gnn.model.general_recommender import LightGCN
class SimGCL(LightGCN):
def __init__(self, config, dataset):
super(SimGCL, self).__init__(config, dataset)
self.cl_rate = config['lambda']
self.eps = config['eps']
self.temperature = config['temperature']
def forward(self, perturbed=False):
all_embs = self.get_ego_embeddings()
embeddings_list = []
for layer_idx in range(self.n_layers):
all_embs = self.gcn_conv(all_embs, self.edge_index, self.edge_weight)
if perturbed:
random_noise = torch.rand_like(all_embs, device=all_embs.device)
all_embs = all_embs + torch.sign(all_embs) * F.normalize(random_noise, dim=-1) * self.eps
embeddings_list.append(all_embs)
lightgcn_all_embeddings = torch.stack(embeddings_list, dim=1)
lightgcn_all_embeddings = torch.mean(lightgcn_all_embeddings, dim=1)
user_all_embeddings, item_all_embeddings = torch.split(lightgcn_all_embeddings, [self.n_users, self.n_items])
return user_all_embeddings, item_all_embeddings
def calculate_cl_loss(self, x1, x2):
x1, x2 = F.normalize(x1, dim=-1), F.normalize(x2, dim=-1)
pos_score = (x1 * x2).sum(dim=-1)
pos_score = torch.exp(pos_score / self.temperature)
ttl_score = torch.matmul(x1, x2.transpose(0, 1))
ttl_score = torch.exp(ttl_score / self.temperature).sum(dim=1)
return -torch.log(pos_score / ttl_score).sum()
def calculate_loss(self, interaction):
loss = super().calculate_loss(interaction)
user = torch.unique(interaction[self.USER_ID])
pos_item = torch.unique(interaction[self.ITEM_ID])
perturbed_user_embs_1, perturbed_item_embs_1 = self.forward(perturbed=True)
perturbed_user_embs_2, perturbed_item_embs_2 = self.forward(perturbed=True)
user_cl_loss = self.calculate_cl_loss(perturbed_user_embs_1[user], perturbed_user_embs_2[user])
item_cl_loss = self.calculate_cl_loss(perturbed_item_embs_1[pos_item], perturbed_item_embs_2[pos_item])
return loss + self.cl_rate * (user_cl_loss + item_cl_loss)
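# A short note (not part of the original file): SimGCL replaces structural
# augmentation with embedding-level noise. Each perturbed forward pass adds a
# random unit-norm direction (sign-aligned with the embedding) scaled by eps,
# so two such passes yield two distinct views of the same graph for contrast.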
================================================
FILE: recbole_gnn/model/general_recommender/ssl4rec.py
================================================
r"""
SSL4REC
################################################
Reference:
Tiansheng Yao et al. "Self-supervised Learning for Large-scale Item Recommendations." in CIKM 2021.
Reference code:
https://github.com/Coder-Yu/SELFRec/model/graph/SSL4Rec.py
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
from recbole.model.loss import EmbLoss
from recbole.utils import InputType
from recbole.model.init import xavier_uniform_initialization
from recbole_gnn.model.abstract_recommender import GeneralGraphRecommender
class SSL4REC(GeneralGraphRecommender):
input_type = InputType.PAIRWISE
def __init__(self, config, dataset):
super(SSL4REC, self).__init__(config, dataset)
# load parameters info
self.tau = config["tau"]
self.reg_weight = config["reg_weight"]
self.cl_rate = config["ssl_weight"]
self.require_pow = config["require_pow"]
self.reg_loss = EmbLoss()
self.encoder = DNN_Encoder(config, dataset)
# storage variables for full sort evaluation acceleration
self.restore_user_e = None
self.restore_item_e = None
# parameters initialization
self.apply(xavier_uniform_initialization)
self.other_parameter_name = ['restore_user_e', 'restore_item_e']
def forward(self, user, item):
user_e, item_e = self.encoder(user, item)
return user_e, item_e
def calculate_batch_softmax_loss(self, user_emb, item_emb, temperature):
user_emb, item_emb = F.normalize(user_emb, dim=1), F.normalize(item_emb, dim=1)
pos_score = (user_emb * item_emb).sum(dim=-1)
pos_score = torch.exp(pos_score / temperature)
ttl_score = torch.matmul(user_emb, item_emb.transpose(0, 1))
ttl_score = torch.exp(ttl_score / temperature).sum(dim=1)
loss = -torch.log(pos_score / ttl_score + 10e-6)
return torch.mean(loss)
def calculate_loss(self, interaction):
# clear the storage variable when training
if self.restore_user_e is not None or self.restore_item_e is not None:
self.restore_user_e, self.restore_item_e = None, None
user = interaction[self.USER_ID]
pos_item = interaction[self.ITEM_ID]
user_embeddings, item_embeddings = self.forward(user, pos_item)
rec_loss = self.calculate_batch_softmax_loss(user_embeddings, item_embeddings, self.tau)
cl_loss = self.encoder.calculate_cl_loss(pos_item)
reg_loss = self.reg_loss(user_embeddings, item_embeddings, require_pow=self.require_pow)
loss = rec_loss + self.cl_rate * cl_loss + self.reg_weight * reg_loss
return loss
def predict(self, interaction):
user = interaction[self.USER_ID]
item = interaction[self.ITEM_ID]
# forward() already returns embeddings aligned with the given user/item ids,
# so they must not be indexed again by the raw ids
u_embeddings, i_embeddings = self.forward(user, item)
scores = torch.mul(u_embeddings, i_embeddings).sum(dim=1)
return scores
def full_sort_predict(self, interaction):
user = interaction[self.USER_ID]
if self.restore_user_e is None or self.restore_item_e is None:
self.restore_user_e, self.restore_item_e = self.forward(torch.arange(
self.n_users, device=self.device), torch.arange(self.n_items, device=self.device))
# get user embedding from storage variable
u_embeddings = self.restore_user_e[user]
# dot with all item embedding to accelerate
scores = torch.matmul(u_embeddings, self.restore_item_e.transpose(0, 1))
return scores.view(-1)
class DNN_Encoder(nn.Module):
def __init__(self, config, dataset):
super(DNN_Encoder, self).__init__()
self.emb_size = config["embedding_size"]
self.drop_ratio = config["drop_ratio"]
self.tau = config["tau"]
self.USER_ID = config["USER_ID_FIELD"]
self.ITEM_ID = config["ITEM_ID_FIELD"]
self.n_users = dataset.num(self.USER_ID)
self.n_items = dataset.num(self.ITEM_ID)
self.user_tower = nn.Sequential(
nn.Linear(self.emb_size, 1024),
nn.ReLU(True),
nn.Linear(1024, 128),
nn.Tanh()
)
self.item_tower = nn.Sequential(
nn.Linear(self.emb_size, 1024),
nn.ReLU(True),
nn.Linear(1024, 128),
nn.Tanh()
)
self.dropout = nn.Dropout(self.drop_ratio)
self.initial_user_emb = nn.Embedding(self.n_users, self.emb_size)
self.initial_item_emb = nn.Embedding(self.n_items, self.emb_size)
self.reset_parameters()
def reset_parameters(self):
nn.init.xavier_uniform_(self.initial_user_emb.weight)
nn.init.xavier_uniform_(self.initial_item_emb.weight)
def forward(self, q, x):
q_emb = self.initial_user_emb(q)
i_emb = self.initial_item_emb(x)
q_emb = self.user_tower(q_emb)
i_emb = self.item_tower(i_emb)
return q_emb, i_emb
def item_encoding(self, x):
i_emb = self.initial_item_emb(x)
i1_emb = self.dropout(i_emb)
i2_emb = self.dropout(i_emb)
i1_emb = self.item_tower(i1_emb)
i2_emb = self.item_tower(i2_emb)
return i1_emb, i2_emb
def calculate_cl_loss(self, idx):
x1, x2 = self.item_encoding(idx)
x1, x2 = F.normalize(x1, dim=-1), F.normalize(x2, dim=-1)
pos_score = (x1 * x2).sum(dim=-1)
pos_score = torch.exp(pos_score / self.tau)
ttl_score = torch.matmul(x1, x2.transpose(0, 1))
ttl_score = torch.exp(ttl_score / self.tau).sum(dim=1)
return -torch.log(pos_score / ttl_score).mean()
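# Note (not part of the original file): the two contrastive item views come
# purely from applying dropout twice to the same initial item embeddings before
# the shared item tower, i.e. feature-level rather than graph augmentation.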
================================================
FILE: recbole_gnn/model/general_recommender/xsimgcl.py
================================================
# -*- coding: utf-8 -*-
r"""
XSimGCL
################################################
Reference:
Junliang Yu, Xin Xia, Tong Chen, Lizhen Cui, Nguyen Quoc Viet Hung, Hongzhi Yin. "XSimGCL: Towards Extremely Simple Graph Contrastive Learning for Recommendation" in TKDE 2023.
Reference code:
https://github.com/Coder-Yu/SELFRec/blob/main/model/graph/XSimGCL.py
"""
import torch
import torch.nn.functional as F
from recbole_gnn.model.general_recommender import LightGCN
class XSimGCL(LightGCN):
def __init__(self, config, dataset):
super(XSimGCL, self).__init__(config, dataset)
self.cl_rate = config['lambda']
self.eps = config['eps']
self.temperature = config['temperature']
self.layer_cl = config['layer_cl']
def forward(self, perturbed=False):
all_embs = self.get_ego_embeddings()
all_embs_cl = all_embs
embeddings_list = []
for layer_idx in range(self.n_layers):
all_embs = self.gcn_conv(all_embs, self.edge_index, self.edge_weight)
if perturbed:
random_noise = torch.rand_like(all_embs, device=all_embs.device)
all_embs = all_embs + torch.sign(all_embs) * F.normalize(random_noise, dim=-1) * self.eps
embeddings_list.append(all_embs)
if layer_idx == self.layer_cl - 1:
all_embs_cl = all_embs
lightgcn_all_embeddings = torch.stack(embeddings_list, dim=1)
lightgcn_all_embeddings = torch.mean(lightgcn_all_embeddings, dim=1)
user_all_embeddings, item_all_embeddings = torch.split(lightgcn_all_embeddings, [self.n_users, self.n_items])
user_all_embeddings_cl, item_all_embeddings_cl = torch.split(all_embs_cl, [self.n_users, self.n_items])
if perturbed:
return user_all_embeddings, item_all_embeddings, user_all_embeddings_cl, item_all_embeddings_cl
return user_all_embeddings, item_all_embeddings
def calculate_cl_loss(self, x1, x2):
x1, x2 = F.normalize(x1, dim=-1), F.normalize(x2, dim=-1)
pos_score = (x1 * x2).sum(dim=-1)
pos_score = torch.exp(pos_score / self.temperature)
ttl_score = torch.matmul(x1, x2.transpose(0, 1))
ttl_score = torch.exp(ttl_score / self.temperature).sum(dim=1)
return -torch.log(pos_score / ttl_score).mean()
def calculate_loss(self, interaction):
# clear the storage variable when training
if self.restore_user_e is not None or self.restore_item_e is not None:
self.restore_user_e, self.restore_item_e = None, None
user = interaction[self.USER_ID]
pos_item = interaction[self.ITEM_ID]
neg_item = interaction[self.NEG_ITEM_ID]
user_all_embeddings, item_all_embeddings, user_all_embeddings_cl, item_all_embeddings_cl = self.forward(perturbed=True)
u_embeddings = user_all_embeddings[user]
pos_embeddings = item_all_embeddings[pos_item]
neg_embeddings = item_all_embeddings[neg_item]
# calculate BPR Loss
pos_scores = torch.mul(u_embeddings, pos_embeddings).sum(dim=1)
neg_scores = torch.mul(u_embeddings, neg_embeddings).sum(dim=1)
mf_loss = self.mf_loss(pos_scores, neg_scores)
# calculate regularization Loss
u_ego_embeddings = self.user_embedding(user)
pos_ego_embeddings = self.item_embedding(pos_item)
neg_ego_embeddings = self.item_embedding(neg_item)
reg_loss = self.reg_loss(u_ego_embeddings, pos_ego_embeddings, neg_ego_embeddings, require_pow=self.require_pow)
user = torch.unique(interaction[self.USER_ID])
pos_item = torch.unique(interaction[self.ITEM_ID])
# calculate CL Loss
user_cl_loss = self.calculate_cl_loss(user_all_embeddings[user], user_all_embeddings_cl[user])
item_cl_loss = self.calculate_cl_loss(item_all_embeddings[pos_item], item_all_embeddings_cl[pos_item])
return mf_loss, self.reg_weight * reg_loss, self.cl_rate * (user_cl_loss + item_cl_loss)
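# A short note (not part of the original file): unlike SimGCL, which needs two
# extra perturbed forward passes, XSimGCL contrasts the final aggregated
# embeddings with the intermediate embeddings taken after layer `layer_cl` of
# the same perturbed pass, so the contrastive views come almost for free.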
================================================
FILE: recbole_gnn/model/layers.py
================================================
import numpy as np
import torch
import torch.nn as nn
from torch_geometric.nn import MessagePassing
from torch_sparse import matmul
class LightGCNConv(MessagePassing):
def __init__(self, dim):
super(LightGCNConv, self).__init__(aggr='add')
self.dim = dim
def forward(self, x, edge_index, edge_weight):
return self.propagate(edge_index, x=x, edge_weight=edge_weight)
def message(self, x_j, edge_weight):
return edge_weight.view(-1, 1) * x_j
def message_and_aggregate(self, adj_t, x):
return matmul(adj_t, x, reduce=self.aggr)
def __repr__(self):
return '{}({})'.format(self.__class__.__name__, self.dim)
class BipartiteGCNConv(MessagePassing):
def __init__(self, dim):
super(BipartiteGCNConv, self).__init__(aggr='add')
self.dim = dim
def forward(self, x, edge_index, edge_weight, size):
return self.propagate(edge_index, x=x, edge_weight=edge_weight, size=size)
def message(self, x_j, edge_weight):
return edge_weight.view(-1, 1) * x_j
def __repr__(self):
return '{}({})'.format(self.__class__.__name__, self.dim)
class BiGNNConv(MessagePassing):
r"""Propagate a layer of Bi-interaction GNN
.. math::
output = (L+I)EW_1 + LE \otimes EW_2
"""
def __init__(self, in_channels, out_channels):
super().__init__(aggr='add')
self.in_channels, self.out_channels = in_channels, out_channels
self.lin1 = torch.nn.Linear(in_features=in_channels, out_features=out_channels)
self.lin2 = torch.nn.Linear(in_features=in_channels, out_features=out_channels)
def forward(self, x, edge_index, edge_weight):
x_prop = self.propagate(edge_index, x=x, edge_weight=edge_weight)
x_trans = self.lin1(x_prop + x)
x_inter = self.lin2(torch.mul(x_prop, x))
return x_trans + x_inter
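# Mapping the forward pass to the docstring formula: x_prop = L * E is the
# propagated signal, so lin1(x_prop + x) realizes (L + I) E W_1 and
# lin2(x_prop * x) realizes (L E) \otimes E W_2 (elementwise product).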
def message(self, x_j, edge_weight):
return edge_weight.view(-1, 1) * x_j
def message_and_aggregate(self, adj_t, x):
return matmul(adj_t, x, reduce=self.aggr)
def __repr__(self):
return '{}({},{})'.format(self.__class__.__name__, self.in_channels, self.out_channels)
class SRGNNConv(MessagePassing):
def __init__(self, dim):
# mean aggregation to incorporate weight naturally
super(SRGNNConv, self).__init__(aggr='mean')
self.lin = torch.nn.Linear(dim, dim)
def forward(self, x, edge_index):
x = self.lin(x)
return self.propagate(edge_index, x=x)
class SRGNNCell(nn.Module):
def __init__(self, dim):
super(SRGNNCell, self).__init__()
self.dim = dim
self.incomming_conv = SRGNNConv(dim)
self.outcomming_conv = SRGNNConv(dim)
self.lin_ih = nn.Linear(2 * dim, 3 * dim)
self.lin_hh = nn.Linear(dim, 3 * dim)
self._reset_parameters()
def forward(self, hidden, edge_index):
input_in = self.incomming_conv(hidden, edge_index)
reversed_edge_index = torch.flip(edge_index, dims=[0])
input_out = self.outcomming_conv(hidden, reversed_edge_index)
inputs = torch.cat([input_in, input_out], dim=-1)
gi = self.lin_ih(inputs)
gh = self.lin_hh(hidden)
i_r, i_i, i_n = gi.chunk(3, -1)
h_r, h_i, h_n = gh.chunk(3, -1)
reset_gate = torch.sigmoid(i_r + h_r)
input_gate = torch.sigmoid(i_i + h_i)
new_gate = torch.tanh(i_n + reset_gate * h_n)
hy = (1 - input_gate) * hidden + input_gate * new_gate
return hy
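# The gating above is a GRU-style update over graph-propagated inputs: gi comes
# from the concatenated incoming/outgoing convolutions, gh from the previous
# hidden state, and hy interpolates between the old hidden state and the new
# candidate state through the reset and input (update) gates.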
def _reset_parameters(self):
stdv = 1.0 / np.sqrt(self.dim)
for weight in self.parameters():
weight.data.uniform_(-stdv, stdv)
================================================
FILE: recbole_gnn/model/sequential_recommender/__init__.py
================================================
from recbole_gnn.model.sequential_recommender.gcegnn import GCEGNN
from recbole_gnn.model.sequential_recommender.gcsan import GCSAN
from recbole_gnn.model.sequential_recommender.lessr import LESSR
from recbole_gnn.model.sequential_recommender.niser import NISER
from recbole_gnn.model.sequential_recommender.sgnnhn import SGNNHN
from recbole_gnn.model.sequential_recommender.srgnn import SRGNN
from recbole_gnn.model.sequential_recommender.tagnn import TAGNN
================================================
FILE: recbole_gnn/model/sequential_recommender/gcegnn.py
================================================
# @Time : 2022/3/22
# @Author : Yupeng Hou
# @Email : houyupeng@ruc.edu.cn
r"""
GCE-GNN
################################################
Reference:
Ziyang Wang et al. "Global Context Enhanced Graph Neural Networks for Session-based Recommendation." in SIGIR 2020.
Reference code:
https://github.com/CCIIPLab/GCE-GNN
"""
import numpy as np
from tqdm import tqdm
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import MessagePassing
from torch_geometric.utils import softmax
from recbole.model.loss import BPRLoss
from recbole.model.abstract_recommender import SequentialRecommender
class LocalAggregator(MessagePassing):
def __init__(self, dim, alpha):
super().__init__(aggr='add')
self.edge_emb = nn.Embedding(4, dim)
self.leakyrelu = nn.LeakyReLU(alpha)
def forward(self, x, edge_index, edge_attr):
return self.propagate(edge_index, x=x, edge_attr=edge_attr)
def message(self, x_j, x_i, edge_attr, index, ptr, size_i):
x = x_j * x_i
a = self.edge_emb(edge_attr)
e = (x * a).sum(dim=-1)
e = self.leakyrelu(e)
e = softmax(e, index, ptr, size_i)
return e.unsqueeze(-1) * x_j
class GlobalAggregator(nn.Module):
def __init__(self, dim, dropout, act=torch.relu):
super(GlobalAggregator, self).__init__()
self.dropout = dropout
self.act = act
self.dim = dim
self.w_1 = nn.Parameter(torch.Tensor(self.dim + 1, self.dim))
self.w_2 = nn.Parameter(torch.Tensor(self.dim, 1))
self.w_3 = nn.Parameter(torch.Tensor(2 * self.dim, self.dim))
self.bias = nn.Parameter(torch.Tensor(self.dim))
def forward(self, self_vectors, neighbor_vector, batch_size, masks, neighbor_weight, extra_vector=None):
if extra_vector is not None:
alpha = torch.matmul(torch.cat([extra_vector.unsqueeze(2).repeat(1, 1, neighbor_vector.shape[2], 1)*neighbor_vector, neighbor_weight.unsqueeze(-1)], -1), self.w_1).squeeze(-1)
alpha = F.leaky_relu(alpha, negative_slope=0.2)
alpha = torch.matmul(alpha, self.w_2).squeeze(-1)
alpha = torch.softmax(alpha, -1).unsqueeze(-1)
neighbor_vector = torch.sum(alpha * neighbor_vector, dim=-2)
else:
neighbor_vector = torch.mean(neighbor_vector, dim=2)
# self_vectors = F.dropout(self_vectors, 0.5, training=self.training)
output = torch.cat([self_vectors, neighbor_vector], -1)
output = F.dropout(output, self.dropout, training=self.training)
output = torch.matmul(output, self.w_3)
output = output.view(batch_size, -1, self.dim)
output = self.act(output)
return output
class GCEGNN(SequentialRecommender):
def __init__(self, config, dataset):
super(GCEGNN, self).__init__(config, dataset)
# load parameters info
self.embedding_size = config['embedding_size']
self.leakyrelu_alpha = config['leakyrelu_alpha']
self.dropout_local = config['dropout_local']
self.dropout_global = config['dropout_global']
self.dropout_gcn = config['dropout_gcn']
self.device = config['device']
self.loss_type = config['loss_type']
self.build_global_graph = config['build_global_graph']
self.sample_num = config['sample_num']
self.hop = config['hop']
self.max_seq_length = dataset.field2seqlen[self.ITEM_SEQ]
# global graph construction
self.global_graph = None
if self.build_global_graph:
self.global_adj, self.global_weight = self.construct_global_graph(dataset)
# item embedding
self.item_embedding = nn.Embedding(self.n_items, self.embedding_size, padding_idx=0)
self.pos_embedding = nn.Embedding(self.max_seq_length, self.embedding_size)
# define layers and loss
# Aggregator
self.local_agg = LocalAggregator(self.embedding_size, self.leakyrelu_alpha)
global_agg_list = []
for i in range(self.hop):
global_agg_list.append(GlobalAggregator(self.embedding_size, self.dropout_gcn))
self.global_agg = nn.ModuleList(global_agg_list)
self.w_1 = nn.Linear(2 * self.embedding_size, self.embedding_size, bias=False)
self.w_2 = nn.Linear(self.embedding_size, 1, bias=False)
self.glu1 = nn.Linear(self.embedding_size, self.embedding_size)
self.glu2 = nn.Linear(self.embedding_size, self.embedding_size, bias=False)
if self.loss_type == 'BPR':
self.loss_fct = BPRLoss()
elif self.loss_type == 'CE':
self.loss_fct = nn.CrossEntropyLoss()
else:
raise NotImplementedError("Make sure 'loss_type' in ['BPR', 'CE']!")
self.reset_parameters()
self.other_parameter_name = ['global_adj', 'global_weight']
def reset_parameters(self):
stdv = 1.0 / np.sqrt(self.embedding_size)
for weight in self.parameters():
weight.data.uniform_(-stdv, stdv)
def _add_edge(self, graph, sid, tid):
if tid not in graph[sid]:
graph[sid][tid] = 0
graph[sid][tid] += 1
def construct_global_graph(self, dataset):
self.logger.info('Constructing global graphs.')
item_id_list = dataset.inter_feat['item_id_list']
src_item_ids = item_id_list[:,:4].tolist()
tgt_item_id = dataset.inter_feat['item_id'].tolist()
global_graph = [{} for _ in range(self.n_items)]
for i in tqdm(range(len(tgt_item_id)), desc='Converting: '):
tid = tgt_item_id[i]
for sid in src_item_ids[i]:
if sid > 0:
self._add_edge(global_graph, tid, sid)
self._add_edge(global_graph, sid, tid)
global_adj = [[] for _ in range(self.n_items)]
global_weight = [[] for _ in range(self.n_items)]
for i in tqdm(range(self.n_items), desc='Sorting: '):
sorted_out_edges = [v for v in sorted(global_graph[i].items(), reverse=True, key=lambda x: x[1])]
global_adj[i] = [v[0] for v in sorted_out_edges[:self.sample_num]]
global_weight[i] = [v[1] for v in sorted_out_edges[:self.sample_num]]
if len(global_adj[i]) < self.sample_num:
for j in range(self.sample_num - len(global_adj[i])):
global_adj[i].append(0)
global_weight[i].append(0)
return torch.LongTensor(global_adj).to(self.device), torch.FloatTensor(global_weight).to(self.device)
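# Per item, out-edges are sorted by co-occurrence weight and truncated to
# sample_num neighbors; items with fewer neighbors are padded with id 0 (the
# padding item) and weight 0 so the adjacency tensors stay rectangular.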
def fusion(self, hidden, mask):
batch_size = hidden.shape[0]
length = hidden.shape[1]
pos_emb = self.pos_embedding.weight[:length]
pos_emb = pos_emb.unsqueeze(0).expand(batch_size, -1, -1)
hs = torch.sum(hidden * mask, -2) / torch.sum(mask, 1)
hs = hs.unsqueeze(-2).expand(-1, length, -1)
nh = self.w_1(torch.cat([pos_emb, hidden], -1))
nh = torch.tanh(nh)
nh = torch.sigmoid(self.glu1(nh) + self.glu2(hs))
beta = self.w_2(nh)
beta = beta * mask
final_h = torch.sum(beta * hidden, 1)
return final_h
def forward(self, x, edge_index, edge_attr, alias_inputs, item_seq_len):
batch_size = alias_inputs.shape[0]
mask = alias_inputs.gt(0).unsqueeze(-1)
h = self.item_embedding(x)
# local
h_local = self.local_agg(h, edge_index, edge_attr)
# global
item_neighbors = [F.pad(x[alias_inputs], (0, self.max_seq_length - x[alias_inputs].shape[1]), "constant", 0)]
weight_neighbors = []
support_size = self.max_seq_length
for i in range(self.hop):
item_sample_i, weight_sample_i = self.global_adj[item_neighbors[-1].view(-1)], self.global_weight[item_neighbors[-1].view(-1)]
support_size *= self.sample_num
item_neighbors.append(item_sample_i.view(batch_size, support_size))
weight_neighbors.append(weight_sample_i.view(batch_size, support_size))
entity_vectors = [self.item_embedding(i) for i in item_neighbors]
weight_vectors = weight_neighbors
session_info = []
item_emb = h[alias_inputs] * mask
# mean
sum_item_emb = torch.sum(item_emb, 1) / torch.sum(mask.float(), 1)
# sum
# sum_item_emb = torch.sum(item_emb, 1)
sum_item_emb = sum_item_emb.unsqueeze(-2)
for i in range(self.hop):
session_info.append(sum_item_emb.repeat(1, entity_vectors[i].shape[1], 1))
for n_hop in range(self.hop):
entity_vectors_next_iter = []
shape = [batch_size, -1, self.sample_num, self.embedding_size]
for hop in range(self.hop - n_hop):
aggregator = self.global_agg[n_hop]
vector = aggregator(self_vectors=entity_vectors[hop],
neighbor_vector=entity_vectors[hop + 1].view(shape),
masks=None,
batch_size=batch_size,
neighbor_weight=weight_vectors[hop].view(batch_size, -1, self.sample_num),
extra_vector=session_info[hop])
entity_vectors_next_iter.append(vector)
entity_vectors = entity_vectors_next_iter
h_global = entity_vectors[0].view(batch_size, self.max_seq_length, self.embedding_size)
h_global = h_global[:,:alias_inputs.shape[1],:]
h_local = F.dropout(h_local, self.dropout_local, training=self.training)
h_global = F.dropout(h_global, self.dropout_global, training=self.training)
h_local = h_local[alias_inputs]
h_session = h_local + h_global
h_session = self.fusion(h_session, mask)
return h_session
def calculate_loss(self, interaction):
x = interaction['x']
edge_index = interaction['edge_index']
edge_attr = interaction['edge_attr']
alias_inputs = interaction['alias_inputs']
item_seq_len = interaction[self.ITEM_SEQ_LEN]
seq_output = self.forward(x, edge_index, edge_attr, alias_inputs, item_seq_len)
pos_items = interaction[self.POS_ITEM_ID]
if self.loss_type == 'BPR':
neg_items = interaction[self.NEG_ITEM_ID]
pos_items_emb = self.item_embedding(pos_items)
neg_items_emb = self.item_embedding(neg_items)
pos_score = torch.sum(seq_output * pos_items_emb, dim=-1) # [B]
neg_score = torch.sum(seq_output * neg_items_emb, dim=-1) # [B]
loss = self.loss_fct(pos_score, neg_score)
return loss
else: # self.loss_type = 'CE'
test_item_emb = self.item_embedding.weight
logits = torch.matmul(seq_output, test_item_emb.transpose(0, 1))
loss = self.loss_fct(logits, pos_items)
return loss
def predict(self, interaction):
test_item = interaction[self.ITEM_ID]
x = interaction['x']
edge_index = interaction['edge_index']
edge_attr = interaction['edge_attr']
alias_inputs = interaction['alias_inputs']
item_seq_len = interaction[self.ITEM_SEQ_LEN]
seq_output = self.forward(x, edge_index, edge_attr, alias_inputs, item_seq_len)
test_item_emb = self.item_embedding(test_item)
scores = torch.mul(seq_output, test_item_emb).sum(dim=1) # [B]
return scores
def full_sort_predict(self, interaction):
x = interaction['x']
edge_index = interaction['edge_index']
edge_attr = interaction['edge_attr']
alias_inputs = interaction['alias_inputs']
item_seq_len = interaction[self.ITEM_SEQ_LEN]
seq_output = self.forward(x, edge_index, edge_attr, alias_inputs, item_seq_len)
test_items_emb = self.item_embedding.weight
scores = torch.matmul(seq_output, test_items_emb.transpose(0, 1)) # [B, n_items]
return scores
================================================
FILE: recbole_gnn/model/sequential_recommender/gcsan.py
================================================
# @Time : 2022/3/7
# @Author : Yupeng Hou
# @Email : houyupeng@ruc.edu.cn
r"""
GCSAN
################################################
Reference:
Chengfeng Xu et al. "Graph Contextualized Self-Attention Network for Session-based Recommendation." in IJCAI 2019.
"""
import torch
from torch import nn
from recbole.model.layers import TransformerEncoder
from recbole.model.loss import EmbLoss, BPRLoss
from recbole.model.abstract_recommender import SequentialRecommender
from recbole_gnn.model.layers import SRGNNCell
class GCSAN(SequentialRecommender):
r"""GCSAN captures rich local dependencies via graph neural network,
and learns long-range dependencies by applying the self-attention mechanism.
Note:
In the original paper, the attention mechanism in the self-attention layer is a single head,
for the reusability of the project code, we use a unified transformer component.
According to the experimental results, we only applied regularization to embedding.
"""
def __init__(self, config, dataset):
super(GCSAN, self).__init__(config, dataset)
# load parameters info
self.n_layers = config['n_layers']
self.n_heads = config['n_heads']
self.hidden_size = config['hidden_size'] # same as embedding_size
self.inner_size = config['inner_size'] # the dimensionality in feed-forward layer
self.hidden_dropout_prob = config['hidden_dropout_prob']
self.attn_dropout_prob = config['attn_dropout_prob']
self.hidden_act = config['hidden_act']
self.layer_norm_eps = config['layer_norm_eps']
self.step = config['step']
self.device = config['device']
self.weight = config['weight']
self.reg_weight = config['reg_weight']
self.loss_type = config['loss_type']
self.initializer_range = config['initializer_range']
# item embedding
self.item_embedding = nn.Embedding(self.n_items, self.hidden_size, padding_idx=0)
# define layers and loss
self.gnncell = SRGNNCell(self.hidden_size)
self.self_attention = TransformerEncoder(
n_layers=self.n_layers,
n_heads=self.n_heads,
hidden_size=self.hidden_size,
inner_size=self.inner_size,
hidden_dropout_prob=self.hidden_dropout_prob,
attn_dropout_prob=self.attn_dropout_prob,
hidden_act=self.hidden_act,
layer_norm_eps=self.layer_norm_eps
)
self.reg_loss = EmbLoss()
if self.loss_type == 'BPR':
self.loss_fct = BPRLoss()
elif self.loss_type == 'CE':
self.loss_fct = nn.CrossEntropyLoss()
else:
raise NotImplementedError("Make sure 'loss_type' in ['BPR', 'CE']!")
# parameters initialization
self.apply(self._init_weights)
def _init_weights(self, module):
""" Initialize the weights """
if isinstance(module, (nn.Linear, nn.Embedding)):
# Slightly different from the TF version which uses truncated_normal for initialization
# cf https://github.com/pytorch/pytorch/pull/5617
module.weight.data.normal_(mean=0.0, std=self.initializer_range)
elif isinstance(module, nn.LayerNorm):
module.bias.data.zero_()
module.weight.data.fill_(1.0)
if isinstance(module, nn.Linear) and module.bias is not None:
module.bias.data.zero_()
def get_attention_mask(self, item_seq):
"""Generate left-to-right uni-directional attention mask for multi-head attention."""
attention_mask = (item_seq > 0).long()
extended_attention_mask = attention_mask.unsqueeze(1).unsqueeze(2) # torch.int64
# mask for left-to-right unidirectional
max_len = attention_mask.size(-1)
attn_shape = (1, max_len, max_len)
subsequent_mask = torch.triu(torch.ones(attn_shape), diagonal=1) # torch.uint8
subsequent_mask = (subsequent_mask == 0).unsqueeze(1)
subsequent_mask = subsequent_mask.long().to(item_seq.device)
extended_attention_mask = extended_attention_mask * subsequent_mask
extended_attention_mask = extended_attention_mask.to(dtype=next(self.parameters()).dtype) # fp16 compatibility
extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0
return extended_attention_mask
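    # Toy example (illustration only): for a padded sequence [7, 9, 0], the
    # additive mask lets position i attend only to real tokens at positions
    # j <= i; e.g. the mask row for position 1 is [0, 0, -10000], so attention
    # may look at positions 0-1 but never at PAD or at future positions.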
def forward(self, x, edge_index, alias_inputs, item_seq_len):
hidden = self.item_embedding(x)
for i in range(self.step):
hidden = self.gnncell(hidden, edge_index)
seq_hidden = hidden[alias_inputs]
# fetch the last hidden state of last timestamp
ht = self.gather_indexes(seq_hidden, item_seq_len - 1)
attention_mask = self.get_attention_mask(alias_inputs)
outputs = self.self_attention(seq_hidden, attention_mask, output_all_encoded_layers=True)
output = outputs[-1]
at = self.gather_indexes(output, item_seq_len - 1)
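        # fuse the self-attention output (at) and the GNN-encoded last item (ht)
        # with the trade-off weight from config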
seq_output = self.weight * at + (1 - self.weight) * ht
return seq_output
def calculate_loss(self, interaction):
x = interaction['x']
edge_index = interaction['edge_index']
alias_inputs = interaction['alias_inputs']
item_seq_len = interaction[self.ITEM_SEQ_LEN]
seq_output = self.forward(x, edge_index, alias_inputs, item_seq_len)
pos_items = interaction[self.POS_ITEM_ID]
if self.loss_type == 'BPR':
neg_items = interaction[self.NEG_ITEM_ID]
pos_items_emb = self.item_embedding(pos_items)
neg_items_emb = self.item_embedding(neg_items)
pos_score = torch.sum(seq_output * pos_items_emb, dim=-1) # [B]
neg_score = torch.sum(seq_output * neg_items_emb, dim=-1) # [B]
loss = self.loss_fct(pos_score, neg_score)
else: # self.loss_type = 'CE'
test_item_emb = self.item_embedding.weight
logits = torch.matmul(seq_output, test_item_emb.transpose(0, 1))
loss = self.loss_fct(logits, pos_items)
reg_loss = self.reg_loss(self.item_embedding.weight)
total_loss = loss + self.reg_weight * reg_loss
return total_loss
def predict(self, interaction):
test_item = interaction[self.ITEM_ID]
x = interaction['x']
edge_index = interaction['edge_index']
alias_inputs = interaction['alias_inputs']
item_seq_len = interaction[self.ITEM_SEQ_LEN]
seq_output = self.forward(x, edge_index, alias_inputs, item_seq_len)
test_item_emb = self.item_embedding(test_item)
scores = torch.mul(seq_output, test_item_emb).sum(dim=1) # [B]
return scores
def full_sort_predict(self, interaction):
x = interaction['x']
edge_index = interaction['edge_index']
alias_inputs = interaction['alias_inputs']
item_seq_len = interaction[self.ITEM_SEQ_LEN]
seq_output = self.forward(x, edge_index, alias_inputs, item_seq_len)
test_items_emb = self.item_embedding.weight
scores = torch.matmul(seq_output, test_items_emb.transpose(0, 1)) # [B, n_items]
return scores
================================================
FILE: recbole_gnn/model/sequential_recommender/lessr.py
================================================
# @Time : 2022/3/11
# @Author : Yupeng Hou
# @Email : houyupeng@ruc.edu.cn
r"""
LESSR
################################################
Reference:
Tianwen Chen and Raymond Chi-Wing Wong. "Handling Information Loss of Graph Neural Networks for Session-based Recommendation." in KDD 2020.
Reference code:
https://github.com/twchen/lessr
"""
import torch
from torch import nn
from torch_geometric.utils import softmax
from torch_geometric.nn import global_add_pool
from recbole.model.abstract_recommender import SequentialRecommender
class EOPA(nn.Module):
def __init__(
self, input_dim, output_dim, batch_norm=True, feat_drop=0.0, activation=None
):
super().__init__()
self.batch_norm = nn.BatchNorm1d(input_dim) if batch_norm else None
self.feat_drop = nn.Dropout(feat_drop)
self.gru = nn.GRU(input_dim, input_dim, batch_first=True)
self.fc_self = nn.Linear(input_dim, output_dim, bias=False)
self.fc_neigh = nn.Linear(input_dim, output_dim, bias=False)
self.activation = activation
def reducer(self, nodes):
m = nodes.mailbox['m'] # (num_nodes, deg, d)
# m[i]: the messages passed to the i-th node with in-degree equal to 'deg'
# the order of messages follows the order of incoming edges
# since the edges are sorted by occurrence time when the EOP multigraph is built
# the messages are in the order required by EOPA
_, hn = self.gru(m) # hn: (1, num_nodes, d)
return {'neigh': hn.squeeze(0)}
def forward(self, mg, feat):
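        # lazy import: DGL is an optional dependency (see the LESSR class Note)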
import dgl.function as fn
with mg.local_scope():
if self.batch_norm is not None:
feat = self.batch_norm(feat)
mg.ndata['ft'] = self.feat_drop(feat)
if mg.number_of_edges() > 0:
mg.update_all(fn.copy_u('ft', 'm'), self.reducer)
neigh = mg.ndata['neigh']
rst = self.fc_self(feat) + self.fc_neigh(neigh)
else:
rst = self.fc_self(feat)
if self.activation is not None:
rst = self.activation(rst)
return rst
class SGAT(nn.Module):
def __init__(
self,
input_dim,
hidden_dim,
output_dim,
batch_norm=True,
feat_drop=0.0,
activation=None,
):
super().__init__()
self.batch_norm = nn.BatchNorm1d(input_dim) if batch_norm else None
self.feat_drop = nn.Dropout(feat_drop)
self.fc_q = nn.Linear(input_dim, hidden_dim, bias=True)
self.fc_k = nn.Linear(input_dim, hidden_dim, bias=False)
self.fc_v = nn.Linear(input_dim, output_dim, bias=False)
self.fc_e = nn.Linear(hidden_dim, 1, bias=False)
self.activation = activation
def forward(self, sg, feat):
import dgl.ops as F
if self.batch_norm is not None:
feat = self.batch_norm(feat)
feat = self.feat_drop(feat)
q = self.fc_q(feat)
k = self.fc_k(feat)
v = self.fc_v(feat)
e = F.u_add_v(sg, q, k)
e = self.fc_e(torch.sigmoid(e))
a = F.edge_softmax(sg, e)
rst = F.u_mul_e_sum(sg, v, a)
if self.activation is not None:
rst = self.activation(rst)
return rst
class AttnReadout(nn.Module):
def __init__(
self,
input_dim,
hidden_dim,
output_dim,
batch_norm=True,
feat_drop=0.0,
activation=None,
):
super().__init__()
self.batch_norm = nn.BatchNorm1d(input_dim) if batch_norm else None
self.feat_drop = nn.Dropout(feat_drop)
self.fc_u = nn.Linear(input_dim, hidden_dim, bias=False)
self.fc_v = nn.Linear(input_dim, hidden_dim, bias=True)
self.fc_e = nn.Linear(hidden_dim, 1, bias=False)
self.fc_out = (
nn.Linear(input_dim, output_dim, bias=False)
if output_dim != input_dim else None
)
self.activation = activation
def forward(self, g, feat, last_nodes, batch):
if self.batch_norm is not None:
feat = self.batch_norm(feat)
feat = self.feat_drop(feat)
feat_u = self.fc_u(feat)
feat_v = self.fc_v(feat[last_nodes])
feat_v = torch.index_select(feat_v, dim=0, index=batch)
e = self.fc_e(torch.sigmoid(feat_u + feat_v))
alpha = softmax(e, batch)
feat_norm = feat * alpha
rst = global_add_pool(feat_norm, batch)
if self.fc_out is not None:
rst = self.fc_out(rst)
if self.activation is not None:
rst = self.activation(rst)
return rst
class LESSR(SequentialRecommender):
r"""LESSR analyzes the information losses when constructing session graphs,
and emphasises lossy session encoding problem and the ineffective long-range dependency capturing problem.
To solve the first problem, authors propose a lossless encoding scheme and an edge-order preserving aggregation layer.
To solve the second problem, authors propose a shortcut graph attention layer that effectively captures long-range dependencies.
Note:
        We follow the original implementation, which requires the DGL package.
        These operations are difficult to reproduce with PyG, so we keep the DGL-based implementation.
        If you would like to test this model, please install DGL first.
"""
def __init__(self, config, dataset):
super().__init__(config, dataset)
embedding_dim = config['embedding_size']
self.num_layers = config['n_layers']
batch_norm = config['batch_norm']
feat_drop = config['feat_drop']
self.loss_type = config['loss_type']
self.item_embedding = nn.Embedding(self.n_items, embedding_dim, max_norm=1)
self.layers = nn.ModuleList()
input_dim = embedding_dim
for i in range(self.num_layers):
if i % 2 == 0:
layer = EOPA(
input_dim,
embedding_dim,
batch_norm=batch_norm,
feat_drop=feat_drop,
activation=nn.PReLU(embedding_dim),
)
else:
layer = SGAT(
input_dim,
embedding_dim,
embedding_dim,
batch_norm=batch_norm,
feat_drop=feat_drop,
activation=nn.PReLU(embedding_dim),
)
input_dim += embedding_dim
self.layers.append(layer)
self.readout = AttnReadout(
input_dim,
embedding_dim,
embedding_dim,
batch_norm=batch_norm,
feat_drop=feat_drop,
activation=nn.PReLU(embedding_dim),
)
input_dim += embedding_dim
self.batch_norm = nn.BatchNorm1d(input_dim) if batch_norm else None
self.feat_drop = nn.Dropout(feat_drop)
self.fc_sr = nn.Linear(input_dim, embedding_dim, bias=False)
if self.loss_type == 'CE':
self.loss_fct = nn.CrossEntropyLoss()
else:
raise NotImplementedError("Make sure 'loss_type' in ['CE']!")
def forward(self, x, edge_index_EOP, edge_index_shortcut, batch, is_last):
import dgl
mg = dgl.graph((edge_index_EOP[0], edge_index_EOP[1]), num_nodes=batch.shape[0])
sg = dgl.graph((edge_index_shortcut[0], edge_index_shortcut[1]), num_nodes=batch.shape[0])
feat = self.item_embedding(x)
for i, layer in enumerate(self.layers):
if i % 2 == 0:
out = layer(mg, feat)
else:
out = layer(sg, feat)
feat = torch.cat([out, feat], dim=1)
sr_g = self.readout(mg, feat, is_last, batch)
sr_l = feat[is_last]
sr = torch.cat([sr_l, sr_g], dim=1)
if self.batch_norm is not None:
sr = self.batch_norm(sr)
sr = self.fc_sr(self.feat_drop(sr))
return sr
def calculate_loss(self, interaction):
x = interaction['x']
edge_index_EOP = interaction['edge_index_EOP']
edge_index_shortcut = interaction['edge_index_shortcut']
batch = interaction['batch']
is_last = interaction['is_last']
seq_output = self.forward(x, edge_index_EOP, edge_index_shortcut, batch, is_last)
pos_items = interaction[self.POS_ITEM_ID]
test_item_emb = self.item_embedding.weight
logits = torch.matmul(seq_output, test_item_emb.transpose(0, 1))
loss = self.loss_fct(logits, pos_items)
return loss
def predict(self, interaction):
test_item = interaction[self.ITEM_ID]
x = interaction['x']
edge_index_EOP = interaction['edge_index_EOP']
edge_index_shortcut = interaction['edge_index_shortcut']
batch = interaction['batch']
is_last = interaction['is_last']
seq_output = self.forward(x, edge_index_EOP, edge_index_shortcut, batch, is_last)
test_item_emb = self.item_embedding(test_item)
scores = torch.mul(seq_output, test_item_emb).sum(dim=1) # [B]
return scores
def full_sort_predict(self, interaction):
x = interaction['x']
edge_index_EOP = interaction['edge_index_EOP']
edge_index_shortcut = interaction['edge_index_shortcut']
batch = interaction['batch']
is_last = interaction['is_last']
seq_output = self.forward(x, edge_index_EOP, edge_index_shortcut, batch, is_last)
test_items_emb = self.item_embedding.weight
scores = torch.matmul(seq_output, test_items_emb.transpose(0, 1)) # [B, n_items]
return scores
================================================
FILE: recbole_gnn/model/sequential_recommender/niser.py
================================================
# @Time : 2022/3/7
# @Author : Yupeng Hou
# @Email : houyupeng@ruc.edu.cn
r"""
NISER
################################################
Reference:
Priyanka Gupta et al. "NISER: Normalized Item and Session Representations to Handle Popularity Bias." in CIKM 2019 GRLA workshop.
"""
import numpy as np
import torch
from torch import nn
import torch.nn.functional as F
from recbole.model.loss import BPRLoss
from recbole.model.abstract_recommender import SequentialRecommender
from recbole_gnn.model.layers import SRGNNCell
class NISER(SequentialRecommender):
r"""NISER+ is a GNN-based model that normalizes session and item embeddings to handle popularity bias.
"""
def __init__(self, config, dataset):
super(NISER, self).__init__(config, dataset)
# load parameters info
self.embedding_size = config['embedding_size']
self.step = config['step']
self.device = config['device']
self.loss_type = config['loss_type']
self.sigma = config['sigma']
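        # sigma scales the normalized (cosine-like) scores before the loss,
        # following the NISER paper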
self.max_seq_length = dataset.field2seqlen[self.ITEM_SEQ]
# item embedding
self.item_embedding = nn.Embedding(self.n_items, self.embedding_size, padding_idx=0)
self.pos_embedding = nn.Embedding(self.max_seq_length, self.embedding_size)
self.item_dropout = nn.Dropout(config['item_dropout'])
# define layers and loss
self.gnncell = SRGNNCell(self.embedding_size)
self.linear_one = nn.Linear(self.embedding_size, self.embedding_size)
self.linear_two = nn.Linear(self.embedding_size, self.embedding_size)
self.linear_three = nn.Linear(self.embedding_size, 1, bias=False)
self.linear_transform = nn.Linear(self.embedding_size * 2, self.embedding_size)
if self.loss_type == 'BPR':
self.loss_fct = BPRLoss()
elif self.loss_type == 'CE':
self.loss_fct = nn.CrossEntropyLoss()
else:
raise NotImplementedError("Make sure 'loss_type' in ['BPR', 'CE']!")
# parameters initialization
self._reset_parameters()
def _reset_parameters(self):
stdv = 1.0 / np.sqrt(self.embedding_size)
for weight in self.parameters():
weight.data.uniform_(-stdv, stdv)
def forward(self, x, edge_index, alias_inputs, item_seq_len):
mask = alias_inputs.gt(0)
hidden = self.item_embedding(x)
# Dropout in NISER+
hidden = self.item_dropout(hidden)
# Normalize item embeddings
hidden = F.normalize(hidden, dim=-1)
for i in range(self.step):
hidden = self.gnncell(hidden, edge_index)
seq_hidden = hidden[alias_inputs]
batch_size = seq_hidden.shape[0]
pos_emb = self.pos_embedding.weight[:seq_hidden.shape[1]]
pos_emb = pos_emb.unsqueeze(0).expand(batch_size, -1, -1)
seq_hidden = seq_hidden + pos_emb
# fetch the last hidden state of last timestamp
ht = self.gather_indexes(seq_hidden, item_seq_len - 1)
q1 = self.linear_one(ht).view(ht.size(0), 1, ht.size(1))
q2 = self.linear_two(seq_hidden)
alpha = self.linear_three(torch.sigmoid(q1 + q2))
a = torch.sum(alpha * seq_hidden * mask.view(mask.size(0), -1, 1).float(), 1)
seq_output = self.linear_transform(torch.cat([a, ht], dim=1))
# Normalize session embeddings
seq_output = F.normalize(seq_output, dim=-1)
return seq_output
def calculate_loss(self, interaction):
x = interaction['x']
edge_index = interaction['edge_index']
alias_inputs = interaction['alias_inputs']
item_seq_len = interaction[self.ITEM_SEQ_LEN]
seq_output = self.forward(x, edge_index, alias_inputs, item_seq_len)
pos_items = interaction[self.POS_ITEM_ID]
if self.loss_type == 'BPR':
neg_items = interaction[self.NEG_ITEM_ID]
pos_items_emb = F.normalize(self.item_embedding(pos_items), dim=-1)
neg_items_emb = F.normalize(self.item_embedding(neg_items), dim=-1)
pos_score = torch.sum(seq_output * pos_items_emb, dim=-1) # [B]
neg_score = torch.sum(seq_output * neg_items_emb, dim=-1) # [B]
loss = self.loss_fct(self.sigma * pos_score, self.sigma * neg_score)
return loss
else: # self.loss_type = 'CE'
test_item_emb = F.normalize(self.item_embedding.weight, dim=-1)
logits = self.sigma * torch.matmul(seq_output, test_item_emb.transpose(0, 1))
loss = self.loss_fct(logits, pos_items)
return loss
def predict(self, interaction):
test_item = interaction[self.ITEM_ID]
x = interaction['x']
edge_index = interaction['edge_index']
alias_inputs = interaction['alias_inputs']
item_seq_len = interaction[self.ITEM_SEQ_LEN]
seq_output = self.forward(x, edge_index, alias_inputs, item_seq_len)
test_item_emb = F.normalize(self.item_embedding(test_item), dim=-1)
scores = torch.mul(seq_output, test_item_emb).sum(dim=1) # [B]
return scores
def full_sort_predict(self, interaction):
x = interaction['x']
edge_index = interaction['edge_index']
alias_inputs = interaction['alias_inputs']
item_seq_len = interaction[self.ITEM_SEQ_LEN]
seq_output = self.forward(x, edge_index, alias_inputs, item_seq_len)
test_items_emb = F.normalize(self.item_embedding.weight, dim=-1)
scores = torch.matmul(seq_output, test_items_emb.transpose(0, 1)) # [B, n_items]
return scores
================================================
FILE: recbole_gnn/model/sequential_recommender/sgnnhn.py
================================================
# @Time : 2022/3/28
# @Author : Yupeng Hou
# @Email : houyupeng@ruc.edu.cn
r"""
SGNN-HN
################################################
Reference:
Zhiqiang Pan et al. "Star Graph Neural Networks for Session-based Recommendation." in CIKM 2020.
Reference code:
https://bitbucket.org/nudtpanzq/sgnn-hn
"""
import math
import numpy as np
import torch
from torch import nn
from torch_geometric.nn import global_mean_pool, global_add_pool
from torch_geometric.utils import softmax
from recbole.model.abstract_recommender import SequentialRecommender
from recbole.model.loss import BPRLoss
from recbole_gnn.model.layers import SRGNNCell
def layer_norm(x):
ave_x = torch.mean(x, -1).unsqueeze(-1)
x = x - ave_x
norm_x = torch.sqrt(torch.sum(x**2, -1)).unsqueeze(-1)
y = x / norm_x
return y
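# Note: unlike nn.LayerNorm, this is a parameter-free variant: mean-centering
# followed by L2 (rather than standard-deviation) normalization over the last dimension.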
class SGNNHN(SequentialRecommender):
r"""SGNN-HN applies a star graph neural network to model the complex transition relationship between items in an ongoing session.
To avoid overfitting, it applies highway networks to adaptively select embeddings from item representations.
"""
def __init__(self, config, dataset):
super(SGNNHN, self).__init__(config, dataset)
# load parameters info
self.embedding_size = config['embedding_size']
self.step = config['step']
self.device = config['device']
self.loss_type = config['loss_type']
self.scale = config['scale']
# item embedding
self.item_embedding = nn.Embedding(self.n_items, self.embedding_size, padding_idx=0)
self.max_seq_length = dataset.field2seqlen[self.ITEM_SEQ]
self.pos_embedding = nn.Embedding(self.max_seq_length, self.embedding_size)
# define layers and loss
self.gnncell = SRGNNCell(self.embedding_size)
self.linear_one = nn.Linear(self.embedding_size, self.embedding_size)
self.linear_two = nn.Linear(self.embedding_size, self.embedding_size)
self.linear_three = nn.Linear(self.embedding_size, self.embedding_size)
self.linear_four = nn.Linear(self.embedding_size, 1, bias=False)
self.linear_transform = nn.Linear(self.embedding_size * 2, self.embedding_size)
if self.loss_type == 'BPR':
self.loss_fct = BPRLoss()
elif self.loss_type == 'CE':
self.loss_fct = nn.CrossEntropyLoss()
else:
raise NotImplementedError("Make sure 'loss_type' in ['BPR', 'CE']!")
# parameters initialization
self._reset_parameters()
def _reset_parameters(self):
stdv = 1.0 / np.sqrt(self.embedding_size)
for weight in self.parameters():
weight.data.uniform_(-stdv, stdv)
def att_out(self, hidden, star_node, batch):
star_node_repeat = torch.index_select(star_node, 0, batch)
sim = (hidden * star_node_repeat).sum(dim=-1)
sim = softmax(sim, batch)
att_hidden = sim.unsqueeze(-1) * hidden
output = global_add_pool(att_hidden, batch)
return output
def forward(self, x, edge_index, batch, alias_inputs, item_seq_len):
mask = alias_inputs.gt(0)
hidden = self.item_embedding(x)
star_node = global_mean_pool(hidden, batch)
for i in range(self.step):
hidden = self.gnncell(hidden, edge_index)
star_node_repeat = torch.index_select(star_node, 0, batch)
sim = (hidden * star_node_repeat).sum(dim=-1, keepdim=True) / math.sqrt(self.embedding_size)
alpha = torch.sigmoid(sim)
hidden = (1 - alpha) * hidden + alpha * star_node_repeat
star_node = self.att_out(hidden, star_node, batch)
seq_hidden = hidden[alias_inputs]
bs, item_num, _ = seq_hidden.shape
pos_emb = self.pos_embedding.weight[:item_num]
pos_emb = pos_emb.unsqueeze(0).expand(bs, -1, -1)
seq_hidden = seq_hidden + pos_emb
# fetch the last hidden state of last timestamp
ht = self.gather_indexes(seq_hidden, item_seq_len - 1)
q1 = self.linear_one(ht).view(ht.size(0), 1, ht.size(1))
q2 = self.linear_two(seq_hidden)
q3 = self.linear_three(star_node).view(star_node.shape[0], 1, star_node.shape[1])
alpha = self.linear_four(torch.sigmoid(q1 + q2 + q3))
a = torch.sum(alpha * seq_hidden * mask.view(mask.size(0), -1, 1).float(), 1)
seq_output = self.linear_transform(torch.cat([a, ht], dim=1))
return layer_norm(seq_output)
def calculate_loss(self, interaction):
x = interaction['x']
edge_index = interaction['edge_index']
batch = interaction['batch']
alias_inputs = interaction['alias_inputs']
item_seq_len = interaction[self.ITEM_SEQ_LEN]
seq_output = self.forward(x, edge_index, batch, alias_inputs, item_seq_len)
pos_items = interaction[self.POS_ITEM_ID]
if self.loss_type == 'BPR':
neg_items = interaction[self.NEG_ITEM_ID]
pos_items_emb = layer_norm(self.item_embedding(pos_items))
neg_items_emb = layer_norm(self.item_embedding(neg_items))
pos_score = torch.sum(seq_output * pos_items_emb, dim=-1) * self.scale # [B]
neg_score = torch.sum(seq_output * neg_items_emb, dim=-1) * self.scale # [B]
loss = self.loss_fct(pos_score, neg_score)
return loss
else: # self.loss_type = 'CE'
test_item_emb = layer_norm(self.item_embedding.weight)
logits = torch.matmul(seq_output, test_item_emb.transpose(0, 1)) * self.scale
loss = self.loss_fct(logits, pos_items)
return loss
def predict(self, interaction):
test_item = interaction[self.ITEM_ID]
x = interaction['x']
edge_index = interaction['edge_index']
batch = interaction['batch']
alias_inputs = interaction['alias_inputs']
item_seq_len = interaction[self.ITEM_SEQ_LEN]
seq_output = self.forward(x, edge_index, batch, alias_inputs, item_seq_len)
test_item_emb = layer_norm(self.item_embedding(test_item))
scores = torch.mul(seq_output, test_item_emb).sum(dim=1) * self.scale # [B]
return scores
def full_sort_predict(self, interaction):
x = interaction['x']
edge_index = interaction['edge_index']
batch = interaction['batch']
alias_inputs = interaction['alias_inputs']
item_seq_len = interaction[self.ITEM_SEQ_LEN]
seq_output = self.forward(x, edge_index, batch, alias_inputs, item_seq_len)
test_items_emb = layer_norm(self.item_embedding.weight)
scores = torch.matmul(seq_output, test_items_emb.transpose(0, 1)) * self.scale # [B, n_items]
return scores
================================================
FILE: recbole_gnn/model/sequential_recommender/srgnn.py
================================================
# @Time : 2022/3/7
# @Author : Yupeng Hou
# @Email : houyupeng@ruc.edu.cn
r"""
SRGNN
################################################
Reference:
Shu Wu et al. "Session-based Recommendation with Graph Neural Networks." in AAAI 2019.
Reference code:
https://github.com/CRIPAC-DIG/SR-GNN
"""
import numpy as np
import torch
from torch import nn
from recbole.model.loss import BPRLoss
from recbole.model.abstract_recommender import SequentialRecommender
from recbole_gnn.model.layers import SRGNNCell
class SRGNN(SequentialRecommender):
r"""SRGNN regards the conversation history as a directed graph.
In addition to considering the connection between the item and the adjacent item,
it also considers the connection with other interactive items.
Such as: A example of a session sequence(eg:item1, item2, item3, item2, item4) and the connection matrix A
Outgoing edges:
=== ===== ===== ===== =====
\ 1 2 3 4
=== ===== ===== ===== =====
1 0 1 0 0
2 0 0 1/2 1/2
3 0 1 0 0
4 0 0 0 0
=== ===== ===== ===== =====
Incoming edges:
=== ===== ===== ===== =====
\ 1 2 3 4
=== ===== ===== ===== =====
1 0 0 0 0
2 1/2 0 1/2 0
3 0 1 0 0
4 0 1 0 0
=== ===== ===== ===== =====
"""
def __init__(self, config, dataset):
super(SRGNN, self).__init__(config, dataset)
# load parameters info
self.embedding_size = config['embedding_size']
self.step = config['step']
self.device = config['device']
self.loss_type = config['loss_type']
# item embedding
self.item_embedding = nn.Embedding(self.n_items, self.embedding_size, padding_idx=0)
# define layers and loss
self.gnncell = SRGNNCell(self.embedding_size)
self.linear_one = nn.Linear(self.embedding_size, self.embedding_size)
self.linear_two = nn.Linear(self.embedding_size, self.embedding_size)
self.linear_three = nn.Linear(self.embedding_size, 1, bias=False)
self.linear_transform = nn.Linear(self.embedding_size * 2, self.embedding_size)
if self.loss_type == 'BPR':
self.loss_fct = BPRLoss()
elif self.loss_type == 'CE':
self.loss_fct = nn.CrossEntropyLoss()
else:
raise NotImplementedError("Make sure 'loss_type' in ['BPR', 'CE']!")
# parameters initialization
self._reset_parameters()
def _reset_parameters(self):
stdv = 1.0 / np.sqrt(self.embedding_size)
for weight in self.parameters():
weight.data.uniform_(-stdv, stdv)
def forward(self, x, edge_index, alias_inputs, item_seq_len):
mask = alias_inputs.gt(0)
hidden = self.item_embedding(x)
for i in range(self.step):
hidden = self.gnncell(hidden, edge_index)
seq_hidden = hidden[alias_inputs]
# fetch the last hidden state of last timestamp
ht = self.gather_indexes(seq_hidden, item_seq_len - 1)
q1 = self.linear_one(ht).view(ht.size(0), 1, ht.size(1))
q2 = self.linear_two(seq_hidden)
alpha = self.linear_three(torch.sigmoid(q1 + q2))
a = torch.sum(alpha * seq_hidden * mask.view(mask.size(0), -1, 1).float(), 1)
seq_output = self.linear_transform(torch.cat([a, ht], dim=1))
return seq_output
def calculate_loss(self, interaction):
x = interaction['x']
edge_index = interaction['edge_index']
alias_inputs = interaction['alias_inputs']
item_seq_len = interaction[self.ITEM_SEQ_LEN]
seq_output = self.forward(x, edge_index, alias_inputs, item_seq_len)
pos_items = interaction[self.POS_ITEM_ID]
if self.loss_type == 'BPR':
neg_items = interaction[self.NEG_ITEM_ID]
pos_items_emb = self.item_embedding(pos_items)
neg_items_emb = self.item_embedding(neg_items)
pos_score = torch.sum(seq_output * pos_items_emb, dim=-1) # [B]
neg_score = torch.sum(seq_output * neg_items_emb, dim=-1) # [B]
loss = self.loss_fct(pos_score, neg_score)
return loss
else: # self.loss_type = 'CE'
test_item_emb = self.item_embedding.weight
logits = torch.matmul(seq_output, test_item_emb.transpose(0, 1))
loss = self.loss_fct(logits, pos_items)
return loss
def predict(self, interaction):
test_item = interaction[self.ITEM_ID]
x = interaction['x']
edge_index = interaction['edge_index']
alias_inputs = interaction['alias_inputs']
item_seq_len = interaction[self.ITEM_SEQ_LEN]
seq_output = self.forward(x, edge_index, alias_inputs, item_seq_len)
test_item_emb = self.item_embedding(test_item)
scores = torch.mul(seq_output, test_item_emb).sum(dim=1) # [B]
return scores
def full_sort_predict(self, interaction):
x = interaction['x']
edge_index = interaction['edge_index']
alias_inputs = interaction['alias_inputs']
item_seq_len = interaction[self.ITEM_SEQ_LEN]
seq_output = self.forward(x, edge_index, alias_inputs, item_seq_len)
test_items_emb = self.item_embedding.weight
scores = torch.matmul(seq_output, test_items_emb.transpose(0, 1)) # [B, n_items]
return scores
================================================
FILE: recbole_gnn/model/sequential_recommender/tagnn.py
================================================
# @Time : 2022/3/17
# @Author : Yupeng Hou
# @Email : houyupeng@ruc.edu.cn
r"""
TAGNN
################################################
Reference:
Feng Yu et al. "TAGNN: Target Attentive Graph Neural Networks for Session-based Recommendation." in SIGIR 2020 short.
Implemented using PyTorch Geometric.
Reference code:
https://github.com/CRIPAC-DIG/TAGNN
"""
import numpy as np
import torch
from torch import nn
import torch.nn.functional as F
from recbole.model.abstract_recommender import SequentialRecommender
from recbole_gnn.model.layers import SRGNNCell
class TAGNN(SequentialRecommender):
r"""TAGNN introduces target-aware attention and adaptively activates different user interests with respect to varied target items.
"""
def __init__(self, config, dataset):
super(TAGNN, self).__init__(config, dataset)
# load parameters info
self.embedding_size = config['embedding_size']
self.step = config['step']
self.device = config['device']
self.loss_type = config['loss_type']
# item embedding
self.item_embedding = nn.Embedding(self.n_items, self.embedding_size, padding_idx=0)
# define layers and loss
self.gnncell = SRGNNCell(self.embedding_size)
self.linear_one = nn.Linear(self.embedding_size, self.embedding_size)
self.linear_two = nn.Linear(self.embedding_size, self.embedding_size)
self.linear_three = nn.Linear(self.embedding_size, 1, bias=False)
self.linear_transform = nn.Linear(self.embedding_size * 2, self.embedding_size)
        self.linear_t = nn.Linear(self.embedding_size, self.embedding_size, bias=False)  # target attention
if self.loss_type == 'CE':
self.loss_fct = nn.CrossEntropyLoss()
else:
            raise NotImplementedError("Make sure 'loss_type' in ['CE']!")
# parameters initialization
self._reset_parameters()
def _reset_parameters(self):
stdv = 1.0 / np.sqrt(self.embedding_size)
for weight in self.parameters():
weight.data.uniform_(-stdv, stdv)
def forward(self, x, edge_index, alias_inputs, item_seq_len):
mask = alias_inputs.gt(0)
hidden = self.item_embedding(x)
for i in range(self.step):
hidden = self.gnncell(hidden, edge_index)
seq_hidden = hidden[alias_inputs]
# fetch the last hidden state of last timestamp
ht = self.gather_indexes(seq_hidden, item_seq_len - 1)
q1 = self.linear_one(ht).view(ht.size(0), 1, ht.size(1))
q2 = self.linear_two(seq_hidden)
alpha = self.linear_three(torch.sigmoid(q1 + q2))
alpha = F.softmax(alpha, 1)
a = torch.sum(alpha * seq_hidden * mask.view(mask.size(0), -1, 1).float(), 1)
seq_output = self.linear_transform(torch.cat([a, ht], dim=1))
seq_hidden = seq_hidden * mask.view(mask.shape[0], -1, 1).float()
qt = self.linear_t(seq_hidden)
b = self.item_embedding.weight
beta = F.softmax(b @ qt.transpose(1,2), -1)
target = beta @ seq_hidden
a = seq_output.view(ht.shape[0], 1, ht.shape[1]) # b,1,d
a = a + target # b,n,d
scores = torch.sum(a * b, -1) # b,n
return scores
def calculate_loss(self, interaction):
x = interaction['x']
edge_index = interaction['edge_index']
alias_inputs = interaction['alias_inputs']
item_seq_len = interaction[self.ITEM_SEQ_LEN]
logits = self.forward(x, edge_index, alias_inputs, item_seq_len)
pos_items = interaction[self.POS_ITEM_ID]
loss = self.loss_fct(logits, pos_items)
return loss
    def predict(self, interaction):
        # TAGNN's target attention scores all items at once in forward(),
        # so point-wise prediction over sampled items is not implemented here.
        pass
def full_sort_predict(self, interaction):
x = interaction['x']
edge_index = interaction['edge_index']
alias_inputs = interaction['alias_inputs']
item_seq_len = interaction[self.ITEM_SEQ_LEN]
scores = self.forward(x, edge_index, alias_inputs, item_seq_len)
return scores
================================================
FILE: recbole_gnn/model/social_recommender/__init__.py
================================================
from recbole_gnn.model.social_recommender.diffnet import DiffNet
from recbole_gnn.model.social_recommender.mhcn import MHCN
from recbole_gnn.model.social_recommender.sept import SEPT
================================================
FILE: recbole_gnn/model/social_recommender/diffnet.py
================================================
# @Time : 2022/3/15
# @Author : Lanling Xu
# @Email : xulanling_sherry@163.com
r"""
DiffNet
################################################
Reference:
Le Wu et al. "A Neural Influence Diffusion Model for Social Recommendation." in SIGIR 2019.
Reference code:
https://github.com/PeiJieSun/diffnet
"""
import numpy as np
import torch
import torch.nn as nn
from recbole.model.init import xavier_uniform_initialization
from recbole.model.loss import BPRLoss, EmbLoss
from recbole.utils import InputType
from recbole_gnn.model.abstract_recommender import SocialRecommender
from recbole_gnn.model.layers import BipartiteGCNConv
class DiffNet(SocialRecommender):
r"""DiffNet is a deep influence propagation model to stimulate how users are influenced by the recursive social diffusion process for social recommendation.
We implement the model following the original author with a pairwise training mode.
"""
input_type = InputType.PAIRWISE
def __init__(self, config, dataset):
super(DiffNet, self).__init__(config, dataset)
# load dataset info
self.edge_index, self.edge_weight = dataset.get_bipartite_inter_mat(row='user')
self.edge_index, self.edge_weight = self.edge_index.to(self.device), self.edge_weight.to(self.device)
self.net_edge_index, self.net_edge_weight = dataset.get_norm_net_adj_mat(row_norm=True)
self.net_edge_index, self.net_edge_weight = self.net_edge_index.to(self.device), self.net_edge_weight.to(self.device)
# load parameters info
        self.embedding_size = config['embedding_size']  # int type: the embedding size of DiffNet
        self.n_layers = config['n_layers']  # int type: the number of GCN layers over the social network
        self.reg_weight = config['reg_weight']  # float32 type: the weight decay for L2 regularization
        self.pretrained_review = config['pretrained_review']  # bool type: whether to load pretrained review vectors of users and items
# define layers and loss
self.user_embedding = torch.nn.Embedding(num_embeddings=self.n_users, embedding_dim=self.embedding_size)
self.item_embedding = torch.nn.Embedding(num_embeddings=self.n_items, embedding_dim=self.embedding_size)
self.bipartite_gcn_conv = BipartiteGCNConv(dim=self.embedding_size)
self.mf_loss = BPRLoss()
self.reg_loss = EmbLoss()
# storage variables for full sort evaluation acceleration
self.restore_user_e = None
self.restore_item_e = None
# parameters initialization
self.apply(xavier_uniform_initialization)
self.other_parameter_name = ['restore_user_e', 'restore_item_e']
if self.pretrained_review:
# handle review information, map the origin review into the new space
self.user_review_embedding = nn.Embedding(self.n_users, self.embedding_size, padding_idx=0)
self.user_review_embedding.weight.requires_grad = False
self.user_review_embedding.weight.data.copy_(self.convertDistribution(dataset.user_feat['user_review_emb']))
self.item_review_embedding = nn.Embedding(self.n_items, self.embedding_size, padding_idx=0)
self.item_review_embedding.weight.requires_grad = False
self.item_review_embedding.weight.data.copy_(self.convertDistribution(dataset.item_feat['item_review_emb']))
self.user_fusion_layer = nn.Linear(self.embedding_size, self.embedding_size)
self.item_fusion_layer = nn.Linear(self.embedding_size, self.embedding_size)
self.activation = nn.Sigmoid()
def convertDistribution(self, x):
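        # rescale x to zero mean and standard deviation 0.2 (following the
        # reference implementation) so the pretrained review vectors roughly
        # match the scale of the learned embeddings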
mean, std = torch.mean(x), torch.std(x)
y = (x - mean) * 0.2 / std
return y
def forward(self):
user_embedding = self.user_embedding.weight
final_item_embedding = self.item_embedding.weight
if self.pretrained_review:
user_reduce_dim_vector_matrix = self.activation(self.user_fusion_layer(self.user_review_embedding.weight))
item_reduce_dim_vector_matrix = self.activation(self.item_fusion_layer(self.item_review_embedding.weight))
user_review_vector_matrix = self.convertDistribution(user_reduce_dim_vector_matrix)
item_review_vector_matrix = self.convertDistribution(item_reduce_dim_vector_matrix)
user_embedding = user_embedding + user_review_vector_matrix
final_item_embedding = final_item_embedding + item_review_vector_matrix
user_embedding_from_consumed_items = self.bipartite_gcn_conv(x=(final_item_embedding, user_embedding), edge_index=self.edge_index.flip([0]), edge_weight=self.edge_weight, size=(self.n_items, self.n_users))
embeddings_list = [user_embedding]
for layer_idx in range(self.n_layers):
user_embedding = self.bipartite_gcn_conv((user_embedding, user_embedding), self.net_edge_index.flip([0]), self.net_edge_weight, size=(self.n_users, self.n_users))
embeddings_list.append(user_embedding)
final_user_embedding = torch.stack(embeddings_list, dim=1)
final_user_embedding = torch.sum(final_user_embedding, dim=1) + user_embedding_from_consumed_items
return final_user_embedding, final_item_embedding
def calculate_loss(self, interaction):
# clear the storage variable when training
if self.restore_user_e is not None or self.restore_item_e is not None:
self.restore_user_e, self.restore_item_e = None, None
user = interaction[self.USER_ID]
pos_item = interaction[self.ITEM_ID]
neg_item = interaction[self.NEG_ITEM_ID]
user_all_embeddings, item_all_embeddings = self.forward()
u_embeddings = user_all_embeddings[user]
pos_embeddings = item_all_embeddings[pos_item]
neg_embeddings = item_all_embeddings[neg_item]
# calculate BPR Loss
pos_scores = torch.mul(u_embeddings, pos_embeddings).sum(dim=1)
neg_scores = torch.mul(u_embeddings, neg_embeddings).sum(dim=1)
mf_loss = self.mf_loss(pos_scores, neg_scores)
# calculate regularization Loss
u_ego_embeddings = self.user_embedding(user)
pos_ego_embeddings = self.item_embedding(pos_item)
neg_ego_embeddings = self.item_embedding(neg_item)
reg_loss = self.reg_loss(u_ego_embeddings, pos_ego_embeddings, neg_ego_embeddings)
loss = mf_loss + self.reg_weight * reg_loss
return loss
def predict(self, interaction):
user = interaction[self.USER_ID]
item = interaction[self.ITEM_ID]
user_all_embeddings, item_all_embeddings = self.forward()
u_embeddings = user_all_embeddings[user]
i_embeddings = item_all_embeddings[item]
scores = torch.mul(u_embeddings, i_embeddings).sum(dim=1)
return scores
def full_sort_predict(self, interaction):
user = interaction[self.USER_ID]
if self.restore_user_e is None or self.restore_item_e is None:
self.restore_user_e, self.restore_item_e = self.forward()
# get user embedding from storage variable
u_embeddings = self.restore_user_e[user]
# dot with all item embedding to accelerate
scores = torch.matmul(u_embeddings, self.restore_item_e.transpose(0, 1))
return scores.view(-1)
================================================
FILE: recbole_gnn/model/social_recommender/mhcn.py
================================================
# @Time : 2022/4/5
# @Author : Lanling Xu
# @Email : xulanling_sherry@163.com
r"""
MHCN
################################################
Reference:
Junliang Yu et al. "Self-Supervised Multi-Channel Hypergraph Convolutional Network for Social Recommendation." in WWW 2021.
Reference code:
https://github.com/Coder-Yu/QRec
"""
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.sparse import coo_matrix
from recbole.model.init import xavier_uniform_initialization
from recbole.model.loss import BPRLoss, EmbLoss
from recbole.utils import InputType
from recbole_gnn.model.abstract_recommender import SocialRecommender
from recbole_gnn.model.layers import BipartiteGCNConv
class GatingLayer(nn.Module):
def __init__(self, dim):
super(GatingLayer, self).__init__()
self.dim = dim
self.linear = nn.Linear(self.dim, self.dim)
self.activation = nn.Sigmoid()
def forward(self, emb):
embedding = self.linear(emb)
embedding = self.activation(embedding)
embedding = torch.mul(emb, embedding)
return embedding
class AttLayer(nn.Module):
def __init__(self, dim):
super(AttLayer, self).__init__()
self.dim = dim
self.attention_mat = nn.Parameter(torch.randn([self.dim, self.dim]))
self.attention = nn.Parameter(torch.randn([1, self.dim]))
def forward(self, *embs):
weights = []
emb_list = []
for embedding in embs:
weights.append(torch.sum(torch.mul(self.attention, torch.matmul(embedding, self.attention_mat)), dim=1))
emb_list.append(embedding)
score = torch.nn.Softmax(dim=0)(torch.stack(weights, dim=0))
embeddings = torch.stack(emb_list, dim=0)
mixed_embeddings = torch.mul(embeddings, score.unsqueeze(dim=2).repeat(1, 1, self.dim)).sum(dim=0)
return mixed_embeddings
class MHCN(SocialRecommender):
r"""MHCN fuses hypergraph modeling and graph neural networks in social recommendation by
exploiting multiple types of high-order user relations under a multi-channel setting.
We implement the model following the original author with a pairwise training mode.
"""
input_type = InputType.PAIRWISE
def __init__(self, config, dataset):
super(MHCN, self).__init__(config, dataset)
# load dataset info
self.R_user_edge_index, self.R_user_edge_weight, self.R_item_edge_index, self.R_item_edge_weight = self.get_bipartite_inter_mat(dataset)
H_s, H_j, H_p = self.get_motif_adj_matrix(dataset)
# transform matrix to edge index and edge weight for convolution
self.H_s_edge_index, self.H_s_edge_weight = self.get_edge_index_weight(H_s)
self.H_j_edge_index, self.H_j_edge_weight = self.get_edge_index_weight(H_j)
self.H_p_edge_index, self.H_p_edge_weight = self.get_edge_index_weight(H_p)
# load parameters info
self.embedding_size = config['embedding_size']
self.n_layers = config['n_layers']
self.ssl_reg = config['ssl_reg']
self.reg_weight = config['reg_weight']
# define embedding and loss
self.user_embedding = nn.Embedding(self.n_users, self.embedding_size)
self.item_embedding = nn.Embedding(self.n_items, self.embedding_size)
self.bipartite_gcn_conv = BipartiteGCNConv(dim=self.embedding_size)
self.mf_loss = BPRLoss()
self.reg_loss = EmbLoss()
# define gating layers
self.gating_c1 = GatingLayer(self.embedding_size)
self.gating_c2 = GatingLayer(self.embedding_size)
self.gating_c3 = GatingLayer(self.embedding_size)
self.gating_simple = GatingLayer(self.embedding_size)
# define self supervised gating layers
self.ss_gating_c1 = GatingLayer(self.embedding_size)
self.ss_gating_c2 = GatingLayer(self.embedding_size)
self.ss_gating_c3 = GatingLayer(self.embedding_size)
# define attention layers
self.attention_layer = AttLayer(self.embedding_size)
# storage variables for full sort evaluation acceleration
self.restore_user_e = None
self.restore_item_e = None
# parameters initialization
self.apply(xavier_uniform_initialization)
self.other_parameter_name = ['restore_user_e', 'restore_item_e']
def get_bipartite_inter_mat(self, dataset):
R_user_edge_index, R_user_edge_weight = dataset.get_bipartite_inter_mat(row='user', row_norm=False)
R_item_edge_index, R_item_edge_weight = dataset.get_bipartite_inter_mat(row='item', row_norm=False)
return R_user_edge_index.to(self.device), R_user_edge_weight.to(self.device), R_item_edge_index.to(self.device), R_item_edge_weight.to(self.device)
def get_edge_index_weight(self, matrix):
matrix = coo_matrix(matrix)
edge_index = torch.stack([torch.LongTensor(matrix.row), torch.LongTensor(matrix.col)])
edge_weight = torch.FloatTensor(matrix.data)
return edge_index.to(self.device), edge_weight.to(self.device)
def get_motif_adj_matrix(self, dataset):
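        # Following the MHCN paper: B and U are the bidirectional and
        # unidirectional parts of the social network S, and Y is the user-item
        # interaction matrix. A1-A7 are the adjacency matrices induced by the
        # social triangle motifs (M1-M7), A8-A9 by the joint social/purchase
        # motifs (M8-M9), and A10 by the purchase motif (M10).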
S = dataset.net_matrix()
Y = dataset.inter_matrix()
B = S.multiply(S.T)
U = S - B
C1 = (U.dot(U)).multiply(U.T)
A1 = C1 + C1.T
C2 = (B.dot(U)).multiply(U.T) + (U.dot(B)).multiply(U.T) + (U.dot(U)).multiply(B)
A2 = C2 + C2.T
C3 = (B.dot(B)).multiply(U) + (B.dot(U)).multiply(B) + (U.dot(B)).multiply(B)
A3 = C3 + C3.T
A4 = (B.dot(B)).multiply(B)
C5 = (U.dot(U)).multiply(U) + (U.dot(U.T)).multiply(U) + (U.T.dot(U)).multiply(U)
A5 = C5 + C5.T
A6 = (U.dot(B)).multiply(U) + (B.dot(U.T)).multiply(U.T) + (U.T.dot(U)).multiply(B)
A7 = (U.T.dot(B)).multiply(U.T) + (B.dot(U)).multiply(U) + (U.dot(U.T)).multiply(B)
A8 = (Y.dot(Y.T)).multiply(B)
A9 = (Y.dot(Y.T)).multiply(U)
A9 = A9 + A9.T
A10 = Y.dot(Y.T) - A8 - A9
# addition and row-normalization
H_s = sum([A1, A2, A3, A4, A5, A6, A7])
# add epsilon to avoid divide by zero Warning
H_s = H_s.multiply(1.0 / (H_s.sum(axis=1) + 1e-7).reshape(-1, 1))
H_j = sum([A8, A9])
H_j = H_j.multiply(1.0 / (H_j.sum(axis=1) + 1e-7).reshape(-1, 1))
H_p = A10
H_p = H_p.multiply(H_p > 1)
H_p = H_p.multiply(1.0 / (H_p.sum(axis=1) + 1e-7).reshape(-1, 1))
return H_s, H_j, H_p
def forward(self):
# get ego embeddings
user_embeddings = self.user_embedding.weight
item_embeddings = self.item_embedding.weight
# self-gating
user_embeddings_c1 = self.gating_c1(user_embeddings)
user_embeddings_c2 = self.gating_c2(user_embeddings)
user_embeddings_c3 = self.gating_c3(user_embeddings)
simple_user_embeddings = self.gating_simple(user_embeddings)
all_embeddings_c1 = [user_embeddings_c1]
all_embeddings_c2 = [user_embeddings_c2]
all_embeddings_c3 = [user_embeddings_c3]
all_embeddings_simple = [simple_user_embeddings]
all_embeddings_i = [item_embeddings]
for layer_idx in range(self.n_layers):
mixed_embedding = self.attention_layer(user_embeddings_c1, user_embeddings_c2, user_embeddings_c3) + simple_user_embeddings / 2
# Channel S
user_embeddings_c1 = self.bipartite_gcn_conv((user_embeddings_c1, user_embeddings_c1), self.H_s_edge_index.flip([0]), self.H_s_edge_weight, size=(self.n_users, self.n_users))
norm_embeddings = F.normalize(user_embeddings_c1, p=2, dim=1)
all_embeddings_c1 += [norm_embeddings]
# Channel J
user_embeddings_c2 = self.bipartite_gcn_conv((user_embeddings_c2, user_embeddings_c2), self.H_j_edge_index.flip([0]), self.H_j_edge_weight, size=(self.n_users, self.n_users))
norm_embeddings = F.normalize(user_embeddings_c2, p=2, dim=1)
all_embeddings_c2 += [norm_embeddings]
# Channel P
user_embeddings_c3 = self.bipartite_gcn_conv((user_embeddings_c3, user_embeddings_c3), self.H_p_edge_index.flip([0]), self.H_p_edge_weight, size=(self.n_users, self.n_users))
norm_embeddings = F.normalize(user_embeddings_c3, p=2, dim=1)
all_embeddings_c3 += [norm_embeddings]
# item convolution
new_item_embeddings = self.bipartite_gcn_conv((mixed_embedding, item_embeddings), self.R_item_edge_index.flip([0]), self.R_item_edge_weight, size=(self.n_users, self.n_items))
norm_embeddings = F.normalize(new_item_embeddings, p=2, dim=1)
all_embeddings_i += [norm_embeddings]
simple_user_embeddings = self.bipartite_gcn_conv((item_embeddings, simple_user_embeddings), self.R_user_edge_index.flip([0]), self.R_user_edge_weight, size=(self.n_items, self.n_users))
norm_embeddings = F.normalize(simple_user_embeddings, p=2, dim=1)
all_embeddings_simple += [norm_embeddings]
item_embeddings = new_item_embeddings
        # sum the layer-wise embeddings of each channel
user_embeddings_c1 = torch.stack(all_embeddings_c1, dim=0).sum(dim=0)
user_embeddings_c2 = torch.stack(all_embeddings_c2, dim=0).sum(dim=0)
user_embeddings_c3 = torch.stack(all_embeddings_c3, dim=0).sum(dim=0)
simple_user_embeddings = torch.stack(all_embeddings_simple, dim=0).sum(dim=0)
item_all_embeddings = torch.stack(all_embeddings_i, dim=0).sum(dim=0)
# aggregating channel-specific embeddings
user_all_embeddings = self.attention_layer(user_embeddings_c1, user_embeddings_c2, user_embeddings_c3)
user_all_embeddings += simple_user_embeddings / 2
return user_all_embeddings, item_all_embeddings
def hierarchical_self_supervision(self, user_embeddings, edge_index, edge_weight):
def row_shuffle(embedding):
shuffled_embeddings = embedding[torch.randperm(embedding.size(0))]
return shuffled_embeddings
def row_column_shuffle(embedding):
shuffled_embeddings = embedding[:, torch.randperm(embedding.size(1))]
shuffled_embeddings = shuffled_embeddings[torch.randperm(embedding.size(0))]
return shuffled_embeddings
def score(x1, x2):
return torch.sum(torch.mul(x1, x2), dim=1)
# For Douban, normalization is needed.
# user_embeddings = F.normalize(user_embeddings, p=2, dim=1)
edge_embeddings = self.bipartite_gcn_conv((user_embeddings, user_embeddings), edge_index.flip([0]), edge_weight, size=(self.n_users, self.n_users))
# Local MIM
pos = score(user_embeddings, edge_embeddings)
neg1 = score(row_shuffle(user_embeddings), edge_embeddings)
neg2 = score(row_column_shuffle(edge_embeddings), user_embeddings)
local_loss = torch.sum(-torch.log(torch.sigmoid(pos - neg1)) - torch.log(torch.sigmoid(neg1 - neg2)))
# Global MIM
graph = torch.mean(edge_embeddings, dim=0, keepdim=True)
pos = score(edge_embeddings, graph)
neg1 = score(row_column_shuffle(edge_embeddings), graph)
global_loss = torch.sum(-torch.log(torch.sigmoid(pos - neg1)))
return global_loss + local_loss
def calculate_loss(self, interaction):
# clear the storage variable when training
if self.restore_user_e is not None or self.restore_item_e is not None:
self.restore_user_e, self.restore_item_e = None, None
user = interaction[self.USER_ID]
pos_item = interaction[self.ITEM_ID]
neg_item = interaction[self.NEG_ITEM_ID]
user_all_embeddings, item_all_embeddings = self.forward()
u_embeddings = user_all_embeddings[user]
pos_embeddings = item_all_embeddings[pos_item]
neg_embeddings = item_all_embeddings[neg_item]
# calculate BPR Loss
pos_scores = torch.mul(u_embeddings, pos_embeddings).sum(dim=1)
neg_scores = torch.mul(u_embeddings, neg_embeddings).sum(dim=1)
mf_loss = self.mf_loss(pos_scores, neg_scores)
# calculate self-supervised loss
ss_loss = self.hierarchical_self_supervision(self.ss_gating_c1(user_all_embeddings), self.H_s_edge_index, self.H_s_edge_weight)
ss_loss += self.hierarchical_self_supervision(self.ss_gating_c2(user_all_embeddings), self.H_j_edge_index, self.H_j_edge_weight)
ss_loss += self.hierarchical_self_supervision(self.ss_gating_c3(user_all_embeddings), self.H_p_edge_index, self.H_p_edge_weight)
# calculate regularization Loss
u_ego_embeddings = self.user_embedding(user)
pos_ego_embeddings = self.item_embedding(pos_item)
neg_ego_embeddings = self.item_embedding(neg_item)
reg_loss = self.reg_loss(u_ego_embeddings, pos_ego_embeddings, neg_ego_embeddings)
loss = mf_loss + self.ssl_reg * ss_loss + self.reg_weight * reg_loss
return loss
def predict(self, interaction):
user = interaction[self.USER_ID]
item = interaction[self.ITEM_ID]
user_all_embeddings, item_all_embeddings = self.forward()
u_embeddings = user_all_embeddings[user]
i_embeddings = item_all_embeddings[item]
scores = torch.mul(u_embeddings, i_embeddings).sum(dim=1)
return scores
def full_sort_predict(self, interaction):
user = interaction[self.USER_ID]
if self.restore_user_e is None or self.restore_item_e is None:
self.restore_user_e, self.restore_item_e = self.forward()
# get user embedding from storage variable
u_embeddings = self.restore_user_e[user]
# dot with all item embedding to accelerate
scores = torch.matmul(u_embeddings, self.restore_item_e.transpose(0, 1))
return scores.view(-1)
================================================
FILE: recbole_gnn/model/social_recommender/sept.py
================================================
# @Time : 2022/3/29
# @Author : Lanling Xu
# @Email : xulanling_sherry@163.com
r"""
SEPT
################################################
Reference:
Junliang Yu et al. "Socially-Aware Self-Supervised Tri-Training for Recommendation." in KDD 2021.
Reference code:
https://github.com/Coder-Yu/QRec
"""
import numpy as np
import torch
import torch.nn.functional as F
from scipy.sparse import coo_matrix, eye
from torch_geometric.utils import degree
from recbole.model.init import xavier_uniform_initialization
from recbole.model.loss import BPRLoss, EmbLoss
from recbole.utils import InputType
from recbole_gnn.model.abstract_recommender import SocialRecommender
from recbole_gnn.model.layers import LightGCNConv
class SEPT(SocialRecommender):
r"""SEPT is a socially-aware GCN-based SSL framework that integrates tri-training.
Under the regime of tri-training for multi-view encoding, the framework builds three graph
encoders (one for recommendation) upon the augmented views and iteratively improves each
encoder with self-supervision signals from other users, generated by the other two encoders.
We implement the model in a pairwise training mode, following the original author's implementation.
"""
input_type = InputType.PAIRWISE
def __init__(self, config, dataset):
super(SEPT, self).__init__(config, dataset)
# load dataset info
self.edge_index, self.edge_weight = dataset.get_norm_adj_mat()
self.edge_index, self.edge_weight = self.edge_index.to(self.device), self.edge_weight.to(self.device)
# generate intermediate data
self.social_edge_index, self.social_edge_weight, self.sharing_edge_index, \
self.sharing_edge_weight = self.get_user_view_matrix(dataset)
self._user = dataset.inter_feat[dataset.uid_field]
self._item = dataset.inter_feat[dataset.iid_field]
self._src_user = dataset.net_feat[dataset.net_src_field]
self._tgt_user = dataset.net_feat[dataset.net_tgt_field]
# load parameters info
self.latent_dim = config["embedding_size"]
self.n_layers = int(config["n_layers"])
self.drop_ratio = config["drop_ratio"]
self.instance_cnt = config["instance_cnt"]
self.reg_weight = config["reg_weight"]
self.ssl_weight = config["ssl_weight"]
self.ssl_tau = config["ssl_tau"]
# define layers and loss
self.user_embedding = torch.nn.Embedding(self.n_users, self.latent_dim)
self.item_embedding = torch.nn.Embedding(self.n_items, self.latent_dim)
self.gcn_conv = LightGCNConv(dim=self.latent_dim)
self.mf_loss = BPRLoss()
self.reg_loss = EmbLoss()
# storage variables for full sort evaluation acceleration
self.user_all_embeddings = None
self.restore_user_e = None
self.restore_item_e = None
# parameters initialization
self.apply(xavier_uniform_initialization)
self.other_parameter_name = ['restore_user_e', 'restore_item_e']
def get_norm_edge_weight(self, edge_index, node_num):
r"""Get normalized edge weight using the laplace matrix.
"""
deg = degree(edge_index[0], node_num)
norm_deg = 1. / torch.sqrt(torch.where(deg == 0, torch.ones([1]), deg))
edge_weight = norm_deg[edge_index[0]] * norm_deg[edge_index[1]]
return edge_weight
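# A minimal sanity check (illustrative sketch, not part of the model): for a
# two-node graph with edges 0 -> 1 and 1 -> 0, both nodes have degree 1, so
# every edge weight is 1/sqrt(1) * 1/sqrt(1) = 1.0:
#
#   edge_index = torch.tensor([[0, 1], [1, 0]])
#   model.get_norm_edge_weight(edge_index, node_num=2)  # tensor([1., 1.])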
def get_user_view_matrix(self, dataset):
# Friend View: A_f = (SS) ⊙ S
social_mat = dataset.net_matrix()
social_matrix = social_mat.dot(social_mat)
social_matrix = social_matrix.toarray() * social_mat.toarray() + eye(self.n_users)
social_matrix = coo_matrix(social_matrix)
social_edge_index = torch.stack([torch.LongTensor(social_matrix.row), torch.LongTensor(social_matrix.col)])
social_edge_weight = self.get_norm_edge_weight(social_edge_index, self.n_users)
# Sharing View: A_s = (RR^T) ⊙ S
rating_mat = dataset.inter_matrix()
sharing_matrix = rating_mat.dot(rating_mat.T)
sharing_matrix = sharing_matrix.toarray() * social_mat.toarray() + eye(self.n_users)
sharing_matrix = coo_matrix(sharing_matrix)
sharing_edge_index = torch.stack([torch.LongTensor(sharing_matrix.row), torch.LongTensor(sharing_matrix.col)])
sharing_edge_weight = self.get_norm_edge_weight(sharing_edge_index, self.n_users)
return social_edge_index.to(self.device), social_edge_weight.to(self.device), \
sharing_edge_index.to(self.device), sharing_edge_weight.to(self.device)
def subgraph_construction(self):
r"""Perturb the joint graph to construct subgraph for integrated self-supervision signals.
"""
def rand_sample(high, size=None, replace=True):
return np.random.choice(np.arange(high), size=size, replace=replace)
# perturb the raw graph with edge dropout
keep = rand_sample(len(self._user), size=int(len(self._user) * (1 - self.drop_ratio)), replace=False)
row = self._user[keep]
col = self._item[keep] + self.n_users
# perturb the social graph with edge dropout
net_keep = rand_sample(len(self._src_user), size=int(len(self._src_user) * (1 - self.drop_ratio)), replace=False)
net_row = self._src_user[net_keep]
net_col = self._tgt_user[net_keep]
# concatenation and normalization
edge_index1 = torch.stack([row, col])
edge_index2 = torch.stack([col, row])
edge_index3 = torch.stack([net_row, net_col])
edge_index = torch.cat([edge_index1, edge_index2, edge_index3], dim=1)
edge_weight = self.get_norm_edge_weight(edge_index, self.n_users + self.n_items)
self.sub_graph = edge_index.to(self.device), edge_weight.to(self.device)
def get_ego_embeddings(self):
r"""Get the embedding of users and items and combine to an embedding matrix.
Returns:
Tensor of the embedding matrix. Shape of [n_items+n_users, embedding_dim]
"""
user_embeddings = self.user_embedding.weight
item_embeddings = self.item_embedding.weight
ego_embeddings = torch.cat([user_embeddings, item_embeddings], dim=0)
return ego_embeddings
def forward(self, graph=None):
all_embeddings = torch.cat([self.user_embedding.weight, self.item_embedding.weight])
embeddings_list = [all_embeddings]
if graph is None: # for the original graph
edge_index, edge_weight = self.edge_index, self.edge_weight
else: # for the augmented graph
edge_index, edge_weight = graph
for _ in range(self.n_layers):
all_embeddings = self.gcn_conv(all_embeddings, edge_index, edge_weight)
norm_embeddings = F.normalize(all_embeddings, p=2, dim=1)
embeddings_list.append(norm_embeddings)
all_embeddings = torch.stack(embeddings_list, dim=1)
all_embeddings = torch.sum(all_embeddings, dim=1)
user_all_embeddings, item_all_embeddings = torch.split(all_embeddings, [self.n_users, self.n_items], dim=0)
return user_all_embeddings, item_all_embeddings
def user_view_forward(self):
all_social_embeddings = self.user_embedding.weight
all_sharing_embeddings = self.user_embedding.weight
social_embeddings_list = [all_social_embeddings]
sharing_embeddings_list = [all_sharing_embeddings]
for _ in range(self.n_layers):
# friend view
all_social_embeddings = self.gcn_conv(all_social_embeddings, self.social_edge_index, self.social_edge_weight)
norm_social_embeddings = F.normalize(all_social_embeddings, p=2, dim=1)
social_embeddings_list.append(norm_social_embeddings)
# sharing view
all_sharing_embeddings = self.gcn_conv(all_sharing_embeddings, self.sharing_edge_index, self.sharing_edge_weight)
norm_sharing_embeddings = F.normalize(all_sharing_embeddings, p=2, dim=1)
sharing_embeddings_list.append(norm_sharing_embeddings)
social_all_embeddings = torch.stack(social_embeddings_list, dim=1)
social_all_embeddings = torch.sum(social_all_embeddings, dim=1)
sharing_all_embeddings = torch.stack(sharing_embeddings_list, dim=1)
sharing_all_embeddings = torch.sum(sharing_all_embeddings, dim=1)
return social_all_embeddings, sharing_all_embeddings
def label_prediction(self, emb, aug_emb):
prob = torch.matmul(emb, aug_emb.transpose(0, 1))
prob = F.softmax(prob, dim=1)
return prob
def sampling(self, logits):
return torch.topk(logits, k=self.instance_cnt)[1]
def generate_pesudo_labels(self, prob1, prob2):
positive = (prob1 + prob2) / 2
pos_examples = self.sampling(positive)
return pos_examples
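# Added commentary: `label_prediction` turns an encoder's user-user similarities
# into a softmax distribution, and `generate_pesudo_labels` averages two such
# distributions and keeps the `instance_cnt` most probable users as
# pseudo-positive examples for the remaining encoder.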
def calculate_ssl_loss(self, aug_emb, positive, emb):
pos_emb = aug_emb[positive]
pos_score = torch.sum(emb.unsqueeze(dim=1).repeat(1, self.instance_cnt, 1) * pos_emb, dim=2)
ttl_score = torch.matmul(emb, aug_emb.transpose(0, 1))
pos_score = torch.sum(torch.exp(pos_score / self.ssl_tau), dim=1)
ttl_score = torch.sum(torch.exp(ttl_score / self.ssl_tau), dim=1)
ssl_loss = - torch.sum(torch.log(pos_score / ttl_score))
return ssl_loss
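# Added commentary: this is an InfoNCE-style objective. For every user, the
# summed exp-similarity to its `instance_cnt` pseudo-positives (numerator) is
# contrasted against the exp-similarity to all users (denominator), scaled by
# the temperature `ssl_tau`.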
def calculate_rec_loss(self, interaction):
# clear the storage variable when training
if self.restore_user_e is not None or self.restore_item_e is not None:
self.restore_user_e, self.restore_item_e = None, None
user = interaction[self.USER_ID]
pos_item = interaction[self.ITEM_ID]
neg_item = interaction[self.NEG_ITEM_ID]
self.user_all_embeddings, item_all_embeddings = self.forward()
u_embeddings = self.user_all_embeddings[user]
pos_embeddings = item_all_embeddings[pos_item]
neg_embeddings = item_all_embeddings[neg_item]
# calculate BPR Loss
pos_scores = torch.mul(u_embeddings, pos_embeddings).sum(dim=1)
neg_scores = torch.mul(u_embeddings, neg_embeddings).sum(dim=1)
mf_loss = self.mf_loss(pos_scores, neg_scores)
# calculate regularization Loss
u_ego_embeddings = self.user_embedding(user)
pos_ego_embeddings = self.item_embedding(pos_item)
neg_ego_embeddings = self.item_embedding(neg_item)
reg_loss = self.reg_loss(u_ego_embeddings, pos_ego_embeddings, neg_ego_embeddings)
loss = mf_loss + self.reg_weight * reg_loss
return loss
def calculate_loss(self, interaction):
# preference view
rec_loss = self.calculate_rec_loss(interaction)
# unlabeled sample view
aug_user_embeddings, _ = self.forward(graph=self.sub_graph)
# friend and sharing views
friend_view_embeddings, sharing_view_embeddings = self.user_view_forward()
user = interaction[self.USER_ID]
aug_u_embeddings = aug_user_embeddings[user]
social_u_embeddings = friend_view_embeddings[user]
sharing_u_embeddings = sharing_view_embeddings[user]
rec_u_embeddings = self.user_all_embeddings[user]
aug_u_embeddings = F.normalize(aug_u_embeddings, p=2, dim=1)
social_u_embeddings = F.normalize(social_u_embeddings, p=2, dim=1)
sharing_u_embeddings = F.normalize(sharing_u_embeddings, p=2, dim=1)
rec_u_embeddings = F.normalize(rec_u_embeddings, p=2, dim=1)
# self-supervision prediction
social_prediction = self.label_prediction(social_u_embeddings, aug_u_embeddings)
sharing_prediction = self.label_prediction(sharing_u_embeddings, aug_u_embeddings)
rec_prediction = self.label_prediction(rec_u_embeddings, aug_u_embeddings)
# find informative positive examples for each encoder
friend_pos = self.generate_pesudo_labels(sharing_prediction, rec_prediction)
sharing_pos = self.generate_pesudo_labels(social_prediction, rec_prediction)
rec_pos = self.generate_pesudo_labels(social_prediction, sharing_prediction)
# neighbor-discrimination based contrastive learning
ssl_loss = self.calculate_ssl_loss(aug_u_embeddings, friend_pos, social_u_embeddings)
ssl_loss += self.calculate_ssl_loss(aug_u_embeddings, sharing_pos, sharing_u_embeddings)
ssl_loss += self.calculate_ssl_loss(aug_u_embeddings, rec_pos, rec_u_embeddings)
# L = L_r + β * L_{ssl}
loss = rec_loss + self.ssl_weight * ssl_loss
return loss
def predict(self, interaction):
user = interaction[self.USER_ID]
item = interaction[self.ITEM_ID]
user_all_embeddings, item_all_embeddings = self.forward()
u_embeddings = user_all_embeddings[user]
i_embeddings = item_all_embeddings[item]
scores = torch.mul(u_embeddings, i_embeddings).sum(dim=1)
return scores
def full_sort_predict(self, interaction):
user = interaction[self.USER_ID]
if self.restore_user_e is None or self.restore_item_e is None:
self.restore_user_e, self.restore_item_e = self.forward()
# get user embedding from storage variable
u_embeddings = self.restore_user_e[user]
# dot with all item embedding to accelerate
scores = torch.matmul(u_embeddings, self.restore_item_e.transpose(0, 1))
return scores.view(-1)
================================================
FILE: recbole_gnn/properties/model/DiffNet.yaml
================================================
embedding_size: 64
n_layers: 2
reg_weight: 1e-05
pretrained_review: False
================================================
FILE: recbole_gnn/properties/model/DirectAU.yaml
================================================
embedding_size: 64
encoder: "MF" # "MF" or "lightGCN"
gamma: 0.5
weight_decay: 1e-6
train_batch_size: 256
# n_layers: 3 # needed for LightGCN
================================================
FILE: recbole_gnn/properties/model/GCEGNN.yaml
================================================
embedding_size: 64
leakyrelu_alpha: 0.2
dropout_local: 0.
dropout_global: 0.5
dropout_gcn: 0.
loss_type: CE
gnn_transform: sess_graph
# global
build_global_graph: True
sample_num: 12
hop: 1
================================================
FILE: recbole_gnn/properties/model/GCSAN.yaml
================================================
n_layers: 1
n_heads: 1
hidden_size: 64
inner_size: 256
hidden_dropout_prob: 0.2
attn_dropout_prob: 0.2
hidden_act: 'gelu'
layer_norm_eps: 1e-12
initializer_range: 0.02
step: 1
weight: 0.6
reg_weight: 5e-5
loss_type: 'CE'
gnn_transform: sess_graph
================================================
FILE: recbole_gnn/properties/model/HMLET.yaml
================================================
embedding_size: 64
n_layers: 4
reg_weight: 1e-05
require_pow: True
gate_layer_ids: [2,3]
gating_mlp_dims: [64,16,2]
dropout_ratio: 0.2
activation_function: elu
warm_up_epochs: 50
ori_temp: 0.7
min_temp: 0.01
gum_temp_decay: 0.005
epoch_temp_decay: 1
================================================
FILE: recbole_gnn/properties/model/LESSR.yaml
================================================
embedding_size: 64
n_layers: 4
batch_norm: True
feat_drop: 0.2
loss_type: CE
gnn_transform: sess_graph
================================================
FILE: recbole_gnn/properties/model/LightGCL.yaml
================================================
embedding_size: 64 # (int) The embedding size of users and items.
n_layers: 2 # (int) The number of layers in LightGCL.
dropout: 0.0 # (float) The dropout ratio.
temp: 0.8 # (float) The temperature in softmax.
lambda1: 0.01 # (float) The hyperparameter to control the strengths of SSL.
lambda2: 1e-05 # (float) The L2 regularization weight.
q: 5 # (int) A slightly overestimated rank of the adjacency matrix.
================================================
FILE: recbole_gnn/properties/model/LightGCN.yaml
================================================
embedding_size: 64
n_layers: 2
reg_weight: 1e-05
require_pow: True
================================================
FILE: recbole_gnn/properties/model/MHCN.yaml
================================================
embedding_size: 64
n_layers: 2
ssl_reg: 1e-05
reg_weight: 1e-05
================================================
FILE: recbole_gnn/properties/model/NCL.yaml
================================================
embedding_size: 64
n_layers: 3
reg_weight: 1e-4
ssl_temp: 0.1
ssl_reg: 1e-7
hyper_layers: 1
alpha: 1
proto_reg: 8e-8
num_clusters: 1000
m_step: 1
warm_up_step: 20
================================================
FILE: recbole_gnn/properties/model/NGCF.yaml
================================================
embedding_size: 64
hidden_size_list: [64,64,64]
node_dropout: 0.0
message_dropout: 0.1
reg_weight: 1e-5
================================================
FILE: recbole_gnn/properties/model/NISER.yaml
================================================
embedding_size: 64
step: 1
sigma: 16
item_dropout: 0.1
loss_type: 'CE'
gnn_transform: sess_graph
================================================
FILE: recbole_gnn/properties/model/SEPT.yaml
================================================
warm_up_epochs: 100
embedding_size: 64
n_layers: 2
drop_ratio: 0.3
instance_cnt: 10
reg_weight: 1e-05
ssl_weight: 1e-07
ssl_tau: 0.1
================================================
FILE: recbole_gnn/properties/model/SGL.yaml
================================================
type: "ED"
n_layers: 3
ssl_tau: 0.5
reg_weight: 1e-5
ssl_weight: 0.05
drop_ratio: 0.1
embedding_size: 64
================================================
FILE: recbole_gnn/properties/model/SGNNHN.yaml
================================================
embedding_size: 64
step: 6
scale: 12
loss_type: 'CE'
gnn_transform: sess_graph
================================================
FILE: recbole_gnn/properties/model/SRGNN.yaml
================================================
embedding_size: 64
step: 1
loss_type: 'CE'
gnn_transform: sess_graph
================================================
FILE: recbole_gnn/properties/model/SSL4REC.yaml
================================================
embedding_size: 64
drop_ratio: 0.1
tau: 0.1
reg_weight: 1e-04
ssl_weight: 1e-05
require_pow: True
================================================
FILE: recbole_gnn/properties/model/SimGCL.yaml
================================================
embedding_size: 64
n_layers: 2
reg_weight: 1e-4
lambda: 0.5
eps: 0.1
temperature: 0.2
================================================
FILE: recbole_gnn/properties/model/TAGNN.yaml
================================================
embedding_size: 64
step: 1
loss_type: 'CE'
gnn_transform: sess_graph
================================================
FILE: recbole_gnn/properties/model/XSimGCL.yaml
================================================
embedding_size: 64
n_layers: 2
reg_weight: 0.0001
lambda: 0.1
eps: 0.2
temperature: 0.2
layer_cl: 1
require_pow: True
================================================
FILE: recbole_gnn/properties/quick_start_config/sequential_base.yaml
================================================
train_neg_sample_args: ~
================================================
FILE: recbole_gnn/properties/quick_start_config/social_base.yaml
================================================
NET_SOURCE_ID_FIELD: source_id
NET_TARGET_ID_FIELD: target_id
load_col:
inter: ['user_id', 'item_id', 'rating', 'timestamp']
net: [source_id, target_id]
filter_net_by_inter: True
undirected_net: True
================================================
FILE: recbole_gnn/quick_start.py
================================================
import logging
from logging import getLogger
from recbole.utils import init_logger, init_seed, set_color
from recbole_gnn.config import Config
from recbole_gnn.utils import create_dataset, data_preparation, get_model, get_trainer
def run_recbole_gnn(model=None, dataset=None, config_file_list=None, config_dict=None, saved=True):
r""" A fast running api, which includes the complete process of
training and testing a model on a specified dataset
Args:
model (str, optional): Model name. Defaults to ``None``.
dataset (str, optional): Dataset name. Defaults to ``None``.
config_file_list (list, optional): Config files used to modify experiment parameters. Defaults to ``None``.
config_dict (dict, optional): Parameters dictionary used to modify experiment parameters. Defaults to ``None``.
saved (bool, optional): Whether to save the model. Defaults to ``True``.
"""
# configurations initialization
config = Config(model=model, dataset=dataset, config_file_list=config_file_list, config_dict=config_dict)
try:
assert config["enable_sparse"] in [True, False, None]
except AssertionError:
raise ValueError("Your config `enable_sparse` must be `True` or `False` or `None`")
init_seed(config['seed'], config['reproducibility'])
# logger initialization
init_logger(config)
logger = getLogger()
logger.info(config)
# dataset filtering
dataset = create_dataset(config)
logger.info(dataset)
# dataset splitting
train_data, valid_data, test_data = data_preparation(config, dataset)
# model loading and initialization
init_seed(config['seed'], config['reproducibility'])
model = get_model(config['model'])(config, train_data.dataset).to(config['device'])
logger.info(model)
# trainer loading and initialization
trainer = get_trainer(config['MODEL_TYPE'], config['model'])(config, model)
# model training
best_valid_score, best_valid_result = trainer.fit(
train_data, valid_data, saved=saved, show_progress=config['show_progress']
)
# model evaluation
test_result = trainer.evaluate(test_data, load_best_model=saved, show_progress=config['show_progress'])
logger.info(set_color('best valid ', 'yellow') + f': {best_valid_result}')
logger.info(set_color('test result', 'yellow') + f': {test_result}')
return {
'best_valid_score': best_valid_score,
'valid_score_bigger': config['valid_metric_bigger'],
'best_valid_result': best_valid_result,
'test_result': test_result
}
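# Example usage (a minimal sketch; the extra config file name is a placeholder):
#
#   from recbole_gnn.quick_start import run_recbole_gnn
#
#   result = run_recbole_gnn(model='LightGCN', dataset='ml-100k',
#                            config_file_list=['extra_config.yaml'])
#   print(result['best_valid_result'], result['test_result'])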
def objective_function(config_dict=None, config_file_list=None, saved=True):
r""" The default objective_function used in HyperTuning
Args:
config_dict (dict, optional): Parameters dictionary used to modify experiment parameters. Defaults to ``None``.
config_file_list (list, optional): Config files used to modify experiment parameters. Defaults to ``None``.
saved (bool, optional): Whether to save the model. Defaults to ``True``.
"""
config = Config(config_dict=config_dict, config_file_list=config_file_list)
try:
assert config["enable_sparse"] in [True, False, None]
except AssertionError:
raise ValueError("Your config `enable_sparse` must be `True` or `False` or `None`")
init_seed(config['seed'], config['reproducibility'])
logging.basicConfig(level=logging.ERROR)
dataset = create_dataset(config)
train_data, valid_data, test_data = data_preparation(config, dataset)
init_seed(config['seed'], config['reproducibility'])
model = get_model(config['model'])(config, train_data.dataset).to(config['device'])
trainer = get_trainer(config['MODEL_TYPE'], config['model'])(config, model)
best_valid_score, best_valid_result = trainer.fit(train_data, valid_data, verbose=False, saved=saved)
test_result = trainer.evaluate(test_data, load_best_model=saved)
return {
'model': config['model'],
'best_valid_score': best_valid_score,
'valid_score_bigger': config['valid_metric_bigger'],
'best_valid_result': best_valid_result,
'test_result': test_result
}
================================================
FILE: recbole_gnn/trainer.py
================================================
from time import time
import math
from torch.nn.utils.clip_grad import clip_grad_norm_
from tqdm import tqdm
from recbole.trainer import Trainer
from recbole.utils import early_stopping, dict2str, set_color, get_gpu_usage
class NCLTrainer(Trainer):
def __init__(self, config, model):
super(NCLTrainer, self).__init__(config, model)
self.num_m_step = config['m_step']
assert self.num_m_step is not None
def fit(self, train_data, valid_data=None, verbose=True, saved=True, show_progress=False, callback_fn=None):
r"""Train the model based on the train data and the valid data.
Args:
train_data (DataLoader): the train data
valid_data (DataLoader, optional): the valid data, default: None.
If it is ``None``, early stopping is disabled.
verbose (bool, optional): whether to write training and evaluation information to logger, default: True
saved (bool, optional): whether to save the model parameters, default: True
show_progress (bool): Show the progress of training epoch and evaluate epoch. Defaults to ``False``.
callback_fn (callable): Optional callback function executed at the end of each epoch.
It takes (epoch_idx, valid_score) as input arguments.
Returns:
(float, dict): best valid score and best valid result. If valid_data is None, it returns (-1, None)
"""
if saved and self.start_epoch >= self.epochs:
self._save_checkpoint(-1)
self.eval_collector.data_collect(train_data)
for epoch_idx in range(self.start_epoch, self.epochs):
# the only difference from the original trainer: periodically run the E-step
if epoch_idx % self.num_m_step == 0:
self.logger.info("Running E-step!")
self.model.e_step()
# train
training_start_time = time()
train_loss = self._train_epoch(train_data, epoch_idx, show_progress=show_progress)
self.train_loss_dict[epoch_idx] = sum(train_loss) if isinstance(train_loss, tuple) else train_loss
training_end_time = time()
train_loss_output = \
self._generate_train_loss_output(epoch_idx, training_start_time, training_end_time, train_loss)
if verbose:
self.logger.info(train_loss_output)
self._add_train_loss_to_tensorboard(epoch_idx, train_loss)
# eval
if self.eval_step <= 0 or not valid_data:
if saved:
self._save_checkpoint(epoch_idx)
update_output = set_color('Saving current', 'blue') + ': %s' % self.saved_model_file
if verbose:
self.logger.info(update_output)
continue
if (epoch_idx + 1) % self.eval_step == 0:
valid_start_time = time()
valid_score, valid_result = self._valid_epoch(valid_data, show_progress=show_progress)
self.best_valid_score, self.cur_step, stop_flag, update_flag = early_stopping(
valid_score,
self.best_valid_score,
self.cur_step,
max_step=self.stopping_step,
bigger=self.valid_metric_bigger
)
valid_end_time = time()
valid_score_output = (set_color("epoch %d evaluating", 'green') + " [" + set_color("time", 'blue')
+ ": %.2fs, " + set_color("valid_score", 'blue') + ": %f]") % \
(epoch_idx, valid_end_time - valid_start_time, valid_score)
valid_result_output = set_color('valid result', 'blue') + ': \n' + dict2str(valid_result)
if verbose:
self.logger.info(valid_score_output)
self.logger.info(valid_result_output)
self.tensorboard.add_scalar('Valid_score', valid_score, epoch_idx)
if update_flag:
if saved:
self._save_checkpoint(epoch_idx)
update_output = set_color('Saving current best', 'blue') + ': %s' % self.saved_model_file
if verbose:
self.logger.info(update_output)
self.best_valid_result = valid_result
if callback_fn:
callback_fn(epoch_idx, valid_score)
if stop_flag:
stop_output = 'Finished training, best eval result in epoch %d' % \
(epoch_idx - self.cur_step * self.eval_step)
if verbose:
self.logger.info(stop_output)
break
self._add_hparam_to_tensorboard(self.best_valid_score)
return self.best_valid_score, self.best_valid_result
def _train_epoch(self, train_data, epoch_idx, loss_func=None, show_progress=False):
r"""Train the model in an epoch
Args:
train_data (DataLoader): The train data.
epoch_idx (int): The current epoch id.
loss_func (function): The loss function of :attr:`model`. If it is ``None``, the loss function will be
:attr:`self.model.calculate_loss`. Defaults to ``None``.
show_progress (bool): Show the progress of training epoch. Defaults to ``False``.
Returns:
float/tuple: The sum of the losses over all batches in this epoch. If each batch contains
multiple loss parts and the model returns them separately instead of a single summed loss,
a tuple with the sum of each part over the epoch is returned.
"""
self.model.train()
loss_func = loss_func or self.model.calculate_loss
total_loss = None
iter_data = (
tqdm(
train_data,
total=len(train_data),
ncols=100,
desc=set_color(f"Train {epoch_idx:>5}", 'pink'),
) if show_progress else train_data
)
for batch_idx, interaction in enumerate(iter_data):
interaction = interaction.to(self.device)
self.optimizer.zero_grad()
losses = loss_func(interaction)
if isinstance(losses, tuple):
if epoch_idx < self.config['warm_up_step']:
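# during warm-up, drop the last loss term (for NCL, the prototype-level
# contrastive loss, which is only meaningful once the E-step clustering has run)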
losses = losses[:-1]
loss = sum(losses)
loss_tuple = tuple(per_loss.item() for per_loss in losses)
total_loss = loss_tuple if total_loss is None else tuple(map(sum, zip(total_loss, loss_tuple)))
else:
loss = losses
total_loss = losses.item() if total_loss is None else total_loss + losses.item()
self._check_nan(loss)
loss.backward()
if self.clip_grad_norm:
clip_grad_norm_(self.model.parameters(), **self.clip_grad_norm)
self.optimizer.step()
if self.gpu_available and show_progress:
iter_data.set_postfix_str(set_color('GPU RAM: ' + get_gpu_usage(self.device), 'yellow'))
return total_loss
class HMLETTrainer(Trainer):
def __init__(self, config, model):
super(HMLETTrainer, self).__init__(config, model)
self.warm_up_epochs = config['warm_up_epochs']
self.ori_temp = config['ori_temp']
self.min_temp = config['min_temp']
self.gum_temp_decay = config['gum_temp_decay']
self.epoch_temp_decay = config['epoch_temp_decay']
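# Added commentary: after warm-up, the Gumbel-softmax temperature decays as
# gum_temp = ori_temp * exp(-gum_temp_decay * (epoch_idx - warm_up_epochs)),
# clipped from below at min_temp. With the HMLET.yaml defaults (ori_temp=0.7,
# gum_temp_decay=0.005), 100 epochs after warm-up this gives
# 0.7 * exp(-0.5) ~= 0.425.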
def _train_epoch(self, train_data, epoch_idx, loss_func=None, show_progress=False):
if epoch_idx > self.warm_up_epochs:
# Temp decay
gum_temp = self.ori_temp * math.exp(-self.gum_temp_decay*(epoch_idx - self.warm_up_epochs))
self.model.gum_temp = max(gum_temp, self.min_temp)
self.logger.info(f'Current gumbel softmax temperature: {self.model.gum_temp}')
for gating in self.model.gating_nets:
self.model._gating_freeze(gating, True)
return super()._train_epoch(train_data, epoch_idx, loss_func, show_progress)
class SEPTTrainer(Trainer):
def __init__(self, config, model):
super(SEPTTrainer, self).__init__(config, model)
self.warm_up_epochs = config['warm_up_epochs']
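# Added commentary on the schedule below: for the first `warm_up_epochs` epochs
# only SEPT.calculate_rec_loss (BPR + regularization) is optimized; afterwards a
# freshly perturbed subgraph is sampled at the start of every epoch and the full
# objective from SEPT.calculate_loss (rec + weighted ssl) is used.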
def _train_epoch(self, train_data, epoch_idx, loss_func=None, show_progress=False):
if epoch_idx < self.warm_up_epochs:
loss_func = self.model.calculate_rec_loss
else:
self.model.subgraph_construction()
return super()._train_epoch(train_data, epoch_idx, loss_func, show_progress)
================================================
FILE: recbole_gnn/utils.py
================================================
import os
import pickle
import importlib
from logging import getLogger
from recbole.data.utils import load_split_dataloaders, create_samplers, save_split_dataloaders
from recbole.data.utils import create_dataset as create_recbole_dataset
from recbole.data.utils import data_preparation as recbole_data_preparation
from recbole.utils import set_color, Enum
from recbole.utils import get_model as get_recbole_model
from recbole.utils import get_trainer as get_recbole_trainer
from recbole.utils.argument_list import dataset_arguments
from recbole_gnn.data.dataloader import CustomizedTrainDataLoader, CustomizedNegSampleEvalDataLoader, CustomizedFullSortEvalDataLoader
def create_dataset(config):
"""Create dataset according to :attr:`config['model']` and :attr:`config['MODEL_TYPE']`.
If :attr:`config['dataset_save_path']` file exists and
its :attr:`config` of dataset is equal to current :attr:`config` of dataset.
It will return the saved dataset in :attr:`config['dataset_save_path']`.
Args:
config (Config): An instance object of Config, used to record parameter information.
Returns:
Dataset: Constructed dataset.
"""
model_type = config['MODEL_TYPE']
dataset_module = importlib.import_module('recbole_gnn.data.dataset')
gen_graph_module_path = '.'.join(['recbole_gnn.model.general_recommender', config['model'].lower()])
seq_module_path = '.'.join(['recbole_gnn.model.sequential_recommender', config['model'].lower()])
if hasattr(dataset_module, config['model'] + 'Dataset'):
dataset_class = getattr(dataset_module, config['model'] + 'Dataset')
elif importlib.util.find_spec(gen_graph_module_path, __name__):
dataset_class = getattr(dataset_module, 'GeneralGraphDataset')
elif importlib.util.find_spec(seq_module_path, __name__):
dataset_class = getattr(dataset_module, 'SessionGraphDataset')
elif model_type == ModelType.SOCIAL:
dataset_class = getattr(dataset_module, 'SocialDataset')
else:
return create_recbole_dataset(config)
default_file = os.path.join(config['checkpoint_dir'], f'{config["dataset"]}-{dataset_class.__name__}.pth')
file = config['dataset_save_path'] or default_file
if os.path.exists(file):
with open(file, 'rb') as f:
dataset = pickle.load(f)
dataset_args_unchanged = True
for arg in dataset_arguments + ['seed', 'repeatable']:
if config[arg] != dataset.config[arg]:
dataset_args_unchanged = False
break
if dataset_args_unchanged:
logger = getLogger()
logger.info(set_color('Load filtered dataset from', 'pink') + f': [{file}]')
return dataset
dataset = dataset_class(config)
if config['save_dataset']:
dataset.save()
return dataset
def get_model(model_name):
r"""Automatically select model class based on model name
Args:
model_name (str): model name
Returns:
Recommender: model class
"""
model_submodule = [
'general_recommender', 'sequential_recommender', 'social_recommender'
]
model_file_name = model_name.lower()
model_module = None
for submodule in model_submodule:
module_path = '.'.join(['recbole_gnn.model', submodule, model_file_name])
if importlib.util.find_spec(module_path, __name__):
model_module = importlib.import_module(module_path, __name__)
break
if model_module is None:
model_class = get_recbole_model(model_name)
else:
model_class = getattr(model_module, model_name)
return model_class
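# Example resolution (a sketch): models re-implemented in RecBole-GNN resolve to
# the local class, and anything else falls back to RecBole itself.
#
#   get_model('LightGCN')  # -> recbole_gnn.model.general_recommender.lightgcn.LightGCN
#   get_model('BPR')       # -> RecBole's own BPR via get_recbole_model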
def _get_customized_dataloader(config, phase):
if phase == 'train':
return CustomizedTrainDataLoader
else:
eval_mode = config["eval_args"]["mode"]
if eval_mode == 'full':
return CustomizedFullSortEvalDataLoader
else:
return CustomizedNegSampleEvalDataLoader
def data_preparation(config, dataset):
"""Split the dataset by :attr:`config['eval_args']` and create training, validation and test dataloader.
Note:
If we can load split dataloaders by :meth:`load_split_dataloaders`, we will not create new split dataloaders.
Args:
config (Config): An instance object of Config, used to record parameter information.
dataset (Dataset): An instance object of Dataset, which contains all interaction records.
Returns:
tuple:
- train_data (AbstractDataLoader): The dataloader for training.
- valid_data (AbstractDataLoader): The dataloader for validation.
- test_data (AbstractDataLoader): The dataloader for testing.
"""
seq_module_path = '.'.join(['recbole_gnn.model.sequential_recommender', config['model'].lower()])
if importlib.util.find_spec(seq_module_path, __name__):
# Special condition for sequential models of RecBole-GNN
dataloaders = load_split_dataloaders(config)
if dataloaders is not None:
train_data, valid_data, test_data = dataloaders
else:
built_datasets = dataset.build()
train_dataset, valid_dataset, test_dataset = built_datasets
train_sampler, valid_sampler, test_sampler = create_samplers(config, dataset, built_datasets)
train_data = _get_customized_dataloader(config, 'train')(config, train_dataset, train_sampler, shuffle=True)
valid_data = _get_customized_dataloader(config, 'evaluation')(config, valid_dataset, valid_sampler, shuffle=False)
test_data = _get_customized_dataloader(config, 'evaluation')(config, test_dataset, test_sampler, shuffle=False)
if config['save_dataloaders']:
save_split_dataloaders(config, dataloaders=(train_data, valid_data, test_data))
logger = getLogger()
logger.info(
set_color('[Training]: ', 'pink') + set_color('train_batch_size', 'cyan') + ' = ' +
set_color(f'[{config["train_batch_size"]}]', 'yellow') + set_color(' negative sampling', 'cyan') + ': ' +
set_color(f'[{config["train_neg_sample_args"]}]', 'yellow')
)
logger.info(
set_color('[Evaluation]: ', 'pink') + set_color('eval_batch_size', 'cyan') + ' = ' +
set_color(f'[{config["eval_batch_size"]}]', 'yellow') + set_color(' eval_args', 'cyan') + ': ' +
set_color(f'[{config["eval_args"]}]', 'yellow')
)
return train_data, valid_data, test_data
else:
return recbole_data_preparation(config, dataset)
def get_trainer(model_type, model_name):
r"""Automatically select trainer class based on model type and model name
Args:
model_type (ModelType): model type
model_name (str): model name
Returns:
Trainer: trainer class
"""
try:
return getattr(importlib.import_module('recbole_gnn.trainer'), model_name + 'Trainer')
except AttributeError:
return get_recbole_trainer(model_type, model_name)
class ModelType(Enum):
"""Type of models.
- ``Social``: Social-based Recommendation
"""
SOCIAL = 7
================================================
FILE: results/README.md
================================================
## General Model Results
* [ml-1m](general/ml-1m.md)
## Sequential Model Results
* [diginetica](sequential/diginetica.md)
## Social-aware Model Results
* [lastfm](social/lastfm.md)
================================================
FILE: results/general/ml-1m.md
================================================
# Experimental Setting
**Dataset:** [MovieLens-1M](https://grouplens.org/datasets/movielens/)
**Filtering:** Remove interactions with a rating score of less than 3
**Evaluation:** ratio-based 8:1:1, full sort
**Metrics:** Recall@10, NDCG@10, MRR@10, Hit@10, Precision@10
**Properties:**
```yaml
# dataset config
field_separator: "\t"
seq_separator: " "
USER_ID_FIELD: user_id
ITEM_ID_FIELD: item_id
RATING_FIELD: rating
NEG_PREFIX: neg_
LABEL_FIELD: label
load_col:
inter: [user_id, item_id, rating]
val_interval:
rating: "[3,inf)"
unused_col:
inter: [rating]
# training and evaluation
epochs: 500
train_batch_size: 4096
valid_metric: MRR@10
eval_batch_size: 4096000
```
For fairness, we restrict the embedding dimension of users and items as follows. Please adjust the names of the corresponding arguments for different models.
```
embedding_size: 64
```
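For instance, the LightGCN row below can be reproduced with the quick-start script (the config file bundling the settings above is a placeholder name):
```
python run_recbole_gnn.py -m LightGCN -d ml-1m --config_files=ml-1m.yaml
```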
# Dataset Statistics
| Dataset | #Users | #Items | #Interactions | Sparsity |
| ---------- | ------ | ------ | ------------- | -------- |
| ml-1m | 6,040 | 3,629 | 836,478 | 96.18% |
# Evaluation Results
| Method | Recall@10 | MRR@10 | NDCG@10 | Hit@10 | Precision@10 |
|--------------|-----------|--------|---------|--------|--------------|
| **BPR** | 0.1776 | 0.4187 | 0.2401 | 0.7199 | 0.1779 |
| **NeuMF** | 0.1651 | 0.4020 | 0.2271 | 0.7029 | 0.1700 |
| **NGCF** | 0.1814 | 0.4354 | 0.2508 | 0.7239 | 0.1850 |
| **LightGCN** | 0.1861 | 0.4388 | 0.2538 | 0.7330 | 0.1863 |
| **LightGCL** | 0.1867 | 0.4283 | 0.2479 | 0.7370 | 0.1815 |
| **SGL** | 0.1889 | 0.4315 | 0.2505 | 0.7392 | 0.1843 |
| **HMLET** | 0.1847 | 0.4297 | 0.2490 | 0.7305 | 0.1836 |
| **NCL** | 0.2021 | 0.4599 | 0.2702 | 0.7565 | 0.1962 |
| **SimGCL** | 0.2029 | 0.4550 | 0.2667 | 0.7640 | 0.1933 |
| **XSimGCL** | 0.2116 | 0.4638 | 0.2750 | 0.7743 | 0.1987 |
# Hyper-parameters
| | Best hyper-parameters | Tuning range |
|--------------|------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **BPR** | learning_rate=0.001 | learning_rate choice [0.05, 0.02, 0.01, 0.005, 0.002, 0.001, 0.0005, 0.0002, 0.0001, 0.00005, 0.00002, 0.00001] |
| **NeuMF** | learning_rate=0.0001<br>mlp_hidden_size=[32,16,8]<br>dropout_prob=0 | learning_rate choice [0.005, 0.002, 0.001, 0.0005, 0.0002, 0.0001, 0.00005]<br>mlp_hidden_size choice ['[64,64]', '[64,32]', '[64,32,16]', '[32,16,8]']<br>dropout_prob choice [0, 0.1, 0.2] |
| **NGCF** | learning_rate=0.0002<br>message_dropout=0.0<br>node_dropout=0.0 | learning_rate choice [0.001, 0.0005, 0.0002]<br>node_dropout choice [0.0, 0.1]<br>message_dropout choice [0.0, 0.1] |
| **LightGCN** | learning_rate=0.002<br>n_layers=3<br>reg_weight=0.0001 | learning_rate choice [0.005, 0.002, 0.001]<br>n_layers choice [2, 3]<br>reg_weight choice [1e-4, 1e-5] |
| **LightGCL** | learning_rate=0.001<br>n_layers=2<br>lambda1=0.0001<br>temp=2<br>lambda2=1e-7<br>dropout=0.1 | learning_rate choice [0.001]<br>n_layers choice [2, 3]<br>lambda1 choice [0.01, 0.005, 0.001, 0.0001, 1e-5, 1e-7]<br>temp choice [0.5, 0.8, 2, 3]<br>lambda2 choice [1e-4, 1e-5, 1e-7]<br>dropout choice [0.0, 0.1, 0.25] |
| **SGL** | learning_rate=0.002<br>n_layers=3<br>reg_weight=0.0001<br>ssl_tau=0.5<br>drop_ratio=0.1<br>ssl_weight=0.005 | learning_rate choice [0.002]<br>n_layers choice [3]<br>reg_weight choice [1e-4]<br>ssl_tau choice [0.1, 0.5]<br>drop_ratio choice [0.1, 0.3]<br>ssl_weight choice [1e-5, 1e-6, 1e-7, 0.005, 0.01, 0.05] |
| **HMLET** | learning_rate=0.002<br>n_layers=4<br>activation_function=leakyrelu | learning_rate choice [0.002, 0.001, 0.0005]<br>n_layers choice [3, 4]<br>activation_function choice ['elu', 'leakyrelu'] |
| **NCL** | learning_rate=0.002<br>n_layers=3<br>reg_weight=0.0001<br>ssl_temp=0.1<br>ssl_reg=1e-06<br>hyper_layers=1<br>alpha=1.5 | learning_rate choice [0.002]<br>n_layers choice [3]<br>reg_weight choice [1e-4]<br>ssl_temp choice [0.1, 0.05]<br>ssl_reg choice [1e-7, 1e-6]<br>hyper_layers choice [1]<br>alpha choice [1, 0.8, 1.5] |
| **SimGCL** | learning_rate=0.002<br>n_layers=2<br>reg_weight=0.0001<br>temperature=0.05<br>lambda=1e-5<br>eps=0.1 | learning_rate choice [0.002]<br>n_layers choice [2, 3]<br>reg_weight choice [1e-4]<br>temperature choice [0.05, 0.1, 0.2]<br>lambda choice [1e-5, 1e-6, 1e-7, 0.005, 0.01, 0.05]<br>eps choice [0.1, 0.2] |
| **XSimGCL** | learning_rate=0.002<br>n_layers=2<br>reg_weight=0.0001<br>temperature=0.2<br>lambda=0.1<br>eps=0.2<br>layer_cl=1 | learning_rate choice [0.002]<br>n_layers choice [2, 3]<br>reg_weight choice [1e-4]<br>temperature choice [0.05, 0.1, 0.2]<br>lambda choice [1e-5, 1e-6, 1e-7, 1e-4, 0.005, 0.01, 0.05, 0.1]<br>eps choice [0.1, 0.2]<br>layer_cl choice [1] |
================================================
FILE: results/sequential/diginetica.md
================================================
# Experimental Setting
**Dataset:** diginetica-not-merged
**Filtering:** Remove users and items with fewer than 5 interactions
**Evaluation:** leave-one-out, full sort
**Metrics:** Recall@10, NDCG@10, MRR@10, Hit@10, Precision@10
**Properties:**
```yaml
# dataset config
field_separator: "\t"
seq_separator: " "
USER_ID_FIELD: session_id
ITEM_ID_FIELD: item_id
TIME_FIELD: timestamp
NEG_PREFIX: neg_
ITEM_LIST_LENGTH_FIELD: item_length
LIST_SUFFIX: _list
MAX_ITEM_LIST_LENGTH: 20
POSITION_FIELD: position_id
load_col:
inter: [session_id, item_id, timestamp]
user_inter_num_interval: "[5,inf)"
item_inter_num_interval: "[5,inf)"
# training and evaluation
epochs: 500
train_batch_size: 4096
eval_batch_size: 2000
valid_metric: MRR@10
eval_args:
split: {'LS':"valid_and_test"}
mode: full
order: TO
train_neg_sample_args: ~
```
For fairness, we restrict the embedding dimension of users and items as follows. Please adjust the names of the corresponding arguments for different models.
```
embedding_size: 64
```
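For instance, a session-based model such as SRGNN can be run with the quick-start script (the config file name is a placeholder):
```
python run_recbole_gnn.py -m SRGNN -d diginetica --config_files=diginetica.yaml
```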
# Dataset Statistics
| Dataset | #Users | #Items | #Interactions | Sparsity |
| ---------- | ------ | ------ | ------------- | -------- |
| diginetica | 72,014 | 29,454 | 580,490 | 99.97% |
# Evaluation Results
| Method | Recall@10 | MRR@10 | NDCG@10 | Hit@10 | Precision@10 |
| -------------------- | --------- | ------ | ------- | ------ | ------------ |
| **GRU4Rec** | 0.3691 | 0.1632 | 0.2114 | 0.3691 | 0.0369 |
| **NARM** | 0.3801 | 0.1695 | 0.2188 | 0.3801 | 0.0380 |
| **SASRec** | 0.4144 | 0.1857 | 0.2393 | 0.4144 | 0.0414 |
| **SR-GNN** | 0.3881 | 0.1754 | 0.2253 | 0.3881 | 0.0388 |
| **GC-SAN** | 0.4127 | 0.1881 | 0.2408 | 0.4127 | 0.0413 |
| **NISER+** | 0.4144 | 0.1904 | 0.2430 | 0.4144 | 0.0414 |
| **LESSR** | 0.3964 | 0.1763 | 0.2279 | 0.3964 | 0.0396 |
| **TAGNN** | 0.3894 | 0.1763 | 0.2263 | 0.3894 | 0.0389 |
| **GCE-GNN** | 0.4284 | 0.1961 | 0.2507 | 0.4284 | 0.0428 |
| **SGNN-HN** | 0.4183 | 0.1877 | 0.2418 | 0.4183 | 0.0418 |
# Hyper-parameters
| | Best hyper-parameters | Tuning range |
| -------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| **GRU4Rec** | learning_rate=0.01<br>hidden_size=128<br>dropout_prob=0.3<br>num_layers=1 | learning_rate in [1e-2, 1e-3, 3e-3]<br>num_layers in [1, 2, 3]<br>hidden_size in [128]<br>dropout_prob in [0.1, 0.2, 0.3] |
| **SASRec** | learning_rate=0.001<br>n_layers=2<br>attn_dropout_prob=0.2<br>hidden_dropout_prob=0.2 | learning_rate in [0.001, 0.0001]<br>n_layers in [1, 2]<br>hidden_dropout_prob in [0.2, 0.5]<br>attn_dropout_prob in [0.2, 0.5] |
| **NARM** | learning_rate=0.001<br>hidden_size=128<br>n_layers=1<br>dropout_probs=[0.25, 0.5] | learning_rate in [0.001, 0.01, 0.03]<br>hidden_size in [128]<br>n_layers in [1, 2]<br>dropout_probs in ['[0.25,0.5]', '[0.2,0.2]', '[0.1,0.2]'] |
| **SR-GNN** | learning_rate=0.001<br>step=1 | learning_rate in [0.01, 0.001, 0.0001]<br>step in [1, 2] |
| **GC-SAN** | learning_rate=0.001<br>step=1 | learning_rate in [0.01, 0.001, 0.0001]<br>step in [1, 2] |
| **NISER+** | learning_rate=0.001<br>sigma=16 | learning_rate in [0.01, 0.001, 0.003]<br>sigma in [10, 16, 20] |
| **LESSR** | learning_rate=0.001<br>n_layers=4 | learning_rate in [0.01, 0.001, 0.003]<br>n_layers in [2, 4] |
| **TAGNN** | learning_rate=0.001 | learning_rate in [0.01, 0.001, 0.003]<br>train_batch_size=512 |
| **GCE-GNN** | learning_rate=0.001<br>dropout_global=0.5 | learning_rate in [0.01, 0.001, 0.003]<br>dropout_global in [0.2, 0.5] |
| **SGNN-HN** | learning_rate=0.003<br>scale=12<br>step=2 | learning_rate in [0.01, 0.001, 0.003]<br>scale in [12, 16, 20]<br>step in [2, 4, 6] |
================================================
FILE: results/social/lastfm.md
================================================
# Experimental Setting
**Dataset:** [LastFM](http://files.grouplens.org/datasets/hetrec2011/)
> Note that datasets for social recommendation methods can be downloaded from [Social-Datasets](https://github.com/Sherry-XLL/Social-Datasets).
**Filtering:** None
**Evaluation:** ratio-based 8:1:1, full sort
**Metrics:** Recall@10, NDCG@10, MRR@10, Hit@10, Precision@10
**Properties:**
```yaml
# dataset config
field_separator: "\t"
seq_separator: " "
USER_ID_FIELD: user_id
ITEM_ID_FIELD: artist_id
NET_SOURCE_ID_FIELD: source_id
NET_TARGET_ID_FIELD: target_id
LABEL_FIELD: label
NEG_PREFIX: neg_
load_col:
inter: [user_id, artist_id]
net: [source_id, target_id]
# social network config
filter_net_by_inter: True
undirected_net: True
# training and evaluation
epochs: 5000
train_batch_size: 4096
eval_batch_size: 409600000
valid_metric: NDCG@10
stopping_step: 50
```
For fairness, we restrict the embedding dimension of users and items as follows. Please adjust the names of the corresponding arguments for different models.
```
embedding_size: 64
```
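For instance, a social model such as MHCN can be run with the quick-start script (the config file name is a placeholder):
```
python run_recbole_gnn.py -m MHCN -d lastfm --config_files=lastfm.yaml
```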
# Dataset Statistics
| Dataset | #Users | #Items | #Interactions | Sparsity |
| ---------- | ------ | ------ | ------------- | -------- |
| lastfm | 1,892 | 17,632 | 92,834 | 99.72% |
# Evaluation Results
| Method | Recall@10 | MRR@10 | NDCG@10 | Hit@10 | Precision@10 |
| -------------------- | --------- | ------ | ------- | ------ | ------------ |
| **BPR** | 0.1761 | 0.3026 | 0.1674 | 0.5573 | 0.0858 |
| **NeuMF** | 0.1696 | 0.2924 | 0.1604 | 0.5456 | 0.0828 |
| **NGCF** | 0.1960 | 0.3479 | 0.1898 | 0.6141 | 0.0961 |
| **LightGCN** | 0.2064 | 0.3559 | 0.1972 | 0.6322 | 0.1009 |
| **DiffNet** | 0.1757 | 0.3117 | 0.1694 | 0.5621 | 0.0857 |
| **MHCN** | 0.2123 | 0.3782 | 0.2068 | 0.6523 | 0.1042 |
| **SEPT** | 0.2127 | 0.3703 | 0.2057 | 0.6465 | 0.1044 |
# Hyper-parameters
| | Best hyper-parameters | Tuning range |
| -------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| **BPR** | learning_rate=0.0005 | learning_rate in [0.01, 0.005, 0.001, 0.0005, 0.0001] |
| **NeuMF** | learning_rate=0.0005<br>dropout_prob=0.1 | learning_rate in [0.01, 0.005, 0.001, 0.0005, 0.0001]<br>dropout_prob in [0.1, 0.2, 0.3] |
| **NGCF** | learning_rate=0.0005<br>hidden_size_list=[64,64,64] | learning_rate in [0.01, 0.005, 0.001, 0.0005, 0.0001]<br>hidden_size_list in ['[64]', '[64,64]', '[64,64,64]'] |
| **LightGCN** | learning_rate=0.001<br>n_layers=3 | learning_rate in [0.01, 0.005, 0.001, 0.0005, 0.0001]<br>n_layers in [1, 2, 3] |
| **DiffNet** | learning_rate=0.0005<br>n_layers=1 | learning_rate in [0.01, 0.005, 0.001, 0.0005, 0.0001]<br>n_layers in [1, 2, 3] |
| **MHCN** | learning_rate=0.0005<br>n_layers=2<br>ssl_reg=1e-05 | learning_rate in [0.01, 0.005, 0.001, 0.0005, 0.0001]<br>n_layers in [1, 2, 3]<br>ssl_reg in [1e-04, 1e-05, 1e-06] |
| **SEPT** | learning_rate=0.0005<br>n_layers=2<br>ssl_weight=1e-07 | learning_rate in [0.01, 0.005, 0.001, 0.0005, 0.0001]<br>n_layers in [1, 2, 3]<br>ssl_weight in [1e-3, 1e-4, 1e-5, 1e-6, 1e-7] |
================================================
FILE: run_hyper.py
================================================
import argparse
from recbole.trainer import HyperTuning
from recbole_gnn.quick_start import objective_function
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--config_files', type=str, default=None, help='fixed config files')
parser.add_argument('--params_file', type=str, default=None, help='parameters file')
parser.add_argument('--output_file', type=str, default='hyper_example.result', help='output file')
args, _ = parser.parse_known_args()
# Please set algo='exhaustive' to use exhaustive search; in this case, max_evals is set automatically
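# Example invocation (a sketch; file names are placeholders):
#   python run_hyper.py --config_files=config.yaml --params_file=hyper.params --output_file=hyper.result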
config_file_list = args.config_files.strip().split(' ') if args.config_files else None
hp = HyperTuning(objective_function, algo='exhaustive',
params_file=args.params_file, fixed_config_file_list=config_file_list)
hp.run()
hp.export_result(output_file=args.output_file)
print('best params: ', hp.best_params)
print('best result: ')
print(hp.params2result[hp.params2str(hp.best_params)])
if __name__ == '__main__':
main()
================================================
FILE: run_recbole_gnn.py
================================================
import argparse
from recbole_gnn.quick_start import run_recbole_gnn
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--model', '-m', type=str, default='BPR', help='name of models')
parser.add_argument('--dataset', '-d', type=str, default='ml-100k', help='name of datasets')
parser.add_argument('--config_files', type=str, default=None, help='config files')
args, _ = parser.parse_known_args()
config_file_list = args.config_files.strip().split(' ') if args.config_files else None
run_recbole_gnn(model=args.model, dataset=args.dataset, config_file_list=config_file_list)
================================================
FILE: run_test.sh
================================================
#!/bin/bash
python -m pytest -v tests/test_model.py
echo "model tests finished"
================================================
FILE: tests/test_data/test/test.inter
================================================
user_id:token item_id:token rating:float timestamp:float
196 242 3 881250949
186 302 3 891717742
22 377 1 878887116
244 51 2 880606923
166 346 1 886397596
298 474 4 884182806
115 265 2 881171488
253 465 5 891628467
305 451 3 886324817
6 86 3 883603013
62 257 2 879372434
286 1014 5 879781125
200 222 5 876042340
210 40 3 891035994
224 29 3 888104457
303 785 3 879485318
122 387 5 879270459
194 274 2 879539794
291 1042 4 874834944
234 1184 2 892079237
119 392 4 886176814
167 486 4 892738452
299 144 4 877881320
291 118 2 874833878
308 1 4 887736532
95 546 2 879196566
38 95 5 892430094
102 768 2 883748450
63 277 4 875747401
160 234 5 876861185
50 246 3 877052329
301 98 4 882075827
225 193 4 879539727
290 88 4 880731963
97 194 3 884238860
157 274 4 886890835
181 1081 1 878962623
278 603 5 891295330
276 796 1 874791932
7 32 4 891350932
10 16 4 877888877
284 304 4 885329322
201 979 2 884114233
276 564 3 874791805
287 327 5 875333916
246 201 5 884921594
242 1137 5 879741196
249 241 5 879641194
99 4 5 886519097
178 332 3 882823437
251 100 4 886271884
81 432 2 876535131
260 322 4 890618898
25 181 5 885853415
59 196 5 888205088
72 679 2 880037164
87 384 4 879877127
290 143 5 880474293
42 423 5 881107687
292 515 4 881103977
115 20 3 881171009
20 288 1 879667584
201 219 4 884112673
13 526 3 882141053
246 919 4 884920949
138 26 5 879024232
167 232 1 892738341
60 427 5 883326620
57 304 5 883698581
223 274 4 891550094
189 512 4 893277702
243 15 3 879987440
92 1049 1 890251826
246 416 3 884923047
194 165 4 879546723
241 690 2 887249482
178 248 4 882823954
254 1444 3 886475558
293 5 3 888906576
127 229 5 884364867
225 237 5 879539643
299 229 3 878192429
225 480 5 879540748
276 54 3 874791025
291 144 5 874835091
222 366 4 878183381
267 518 5 878971773
42 403 3 881108684
11 111 4 891903862
95 625 4 888954412
8 338 4 879361873
162 25 4 877635573
87 1016 4 879876194
279 154 5 875296291
145 275 2 885557505
119 1153 5 874781198
62 498 4 879373848
62 382 3 879375537
28 209 4 881961214
135 23 4 879857765
32 294 3 883709863
90 382 5 891383835
286 208 4 877531942
293 685 3 888905170
216 144 4 880234639
166 328 5 886397722
250 496 4 878090499
271 132 5 885848672
160 174 5 876860807
265 118 4 875320714
198 498 3 884207492
42 96 5 881107178
168 151 5 884288058
110 307 4 886987260
58 144 4 884304936
90 648 4 891384754
271 346 4 885844430
62 21 3 879373460
279 832 3 881375854
237 514 4 879376641
94 789 4 891720887
128 485 3 879966895
298 317 4 884182806
44 195 5 878347874
264 200 5 886122352
194 385 2 879524643
72 195 5 880037702
222 750 5 883815120
250 264 3 878089182
41 265 3 890687042
224 245 3 888082216
82 135 3 878769629
262 1147 4 879791710
293 471 3 888904884
216 658 3 880245029
250 140 3 878092059
59 23 5 888205300
286 379 5 877533771
244 815 4 880605185
7 479 4 891352010
174 368 1 886434402
87 274 4 879876734
194 1211 2 879551380
82 1134 2 884714402
13 836 2 882139746
13 272 4 884538403
244 756 2 880605157
305 427 5 886323090
95 787 2 888954930
43 14 2 883955745
299 955 4 889502823
57 419 3 883698454
84 405 3 883452363
269 504 4 891449922
299 111 3 877878184
194 466 4 879525876
160 135 4 876860807
99 268 3 885678247
10 486 4 877886846
259 117 4 874724988
85 427 3 879456350
303 919 4 879467295
213 273 5 878870987
121 514 3 891387947
90 98 5 891383204
49 559 2 888067405
42 794 3 881108425
155 323 2 879371261
68 117 4 876973939
172 177 4 875537965
19 4 4 885412840
268 231 4 875744136
5 2 3 875636053
305 117 2 886324028
44 294 4 883612356
43 137 4 875975656
279 1336 1 875298353
80 466 5 887401701
254 164 4 886472768
298 281 3 884183336
279 1240 1 892174404
66 298 4 883601324
18 443 3 880130193
268 1035 2 875542174
99 79 4 885680138
13 98 4 881515011
26 258 3 891347949
7 455 4 891353086
222 755 4 878183481
200 673 5 884128554
119 328 4 876923913
213 172 5 878955442
276 322 3 874786392
94 1217 3 891723086
130 379 4 875801662
38 328 4 892428688
160 719 3 876857977
293 1267 3 888906966
26 930 2 891385985
130 216 4 875216545
92 1079 3 886443455
256 452 4 882164999
1 61 4 878542420
72 48 4 880036718
56 755 3 892910207
13 360 4 882140926
15 405 2 879455957
92 77 3 875654637
207 476 2 884386343
292 174 5 881105481
232 483 5 888549622
251 748 2 886272175
224 26 3 888104153
181 220 4 878962392
259 255 4 874724710
305 471 4 886323648
52 280 3 882922806
161 202 5 891170769
148 408 5 877399018
125 235 2 892838559
97 228 5 884238860
58 1098 4 884304936
83 234 4 887665548
90 347 4 891383319
272 178 5 879455113
194 181 3 879521396
125 478 4 879454628
110 688 1 886987605
299 14 4 877877775
151 10 5 879524921
269 127 4 891446165
6 14 5 883599249
54 106 3 880937882
303 69 5 879467542
16 944 1 877727122
301 790 4 882078621
276 1091 3 874793035
305 214 2 886323068
194 1028 2 879541148
91 323 2 891438397
87 554 4 879875940
294 109 4 877819599
286 171 4 877531791
200 318 5 884128458
229 328 1 891632142
178 568 4 882826555
303 842 2 879484804
62 65 4 879374686
207 591 3 876018608
92 172 4 875653271
301 401 4 882078040
36 339 5 882157581
70 746 3 884150257
63 242 3 875747190
28 201 3 881961671
279 68 4 875307407
250 7 4 878089716
14 98 3 890881335
299 1018 3 889502324
194 54 3 879525876
303 815 3 879485532
119 237 5 874775038
295 218 5 879966498
268 930 2 875742942
268 2 2 875744173
66 258 4 883601089
233 202 5 879394264
83 623 4 880308578
214 334 3 891542540
192 476 2 881368243
100 344 4 891374868
268 145 1 875744501
301 56 4 882076587
307 89 5 879283786
234 141 3 892334609
83 576 4 880308755
181 264 2 878961624
297 133 4 875240090
38 153 5 892430369
7 382 4 891352093
264 813 4 886122952
181 872 1 878961814
201 146 1 884140579
85 507 4 879456199
269 367 3 891450023
59 468 3 888205855
286 143 4 889651549
193 96 1 889124507
113 595 5 875936424
292 11 5 881104093
130 1014 3 876250718
275 98 4 875155140
189 520 5 893265380
219 82 1 889452455
218 209 5 877488546
123 427 3 879873020
119 222 5 874775311
158 177 4 880134407
222 118 4 877563802
302 322 2 879436875
279 501 3 875308843
301 79 5 882076403
181 3 2 878963441
201 695 1 884140115
13 198 3 881515193
1 189 3 888732928
145 237 5 875270570
23 385 4 874786462
201 767 4 884114505
296 705 5 884197193
42 546 3 881105817
33 872 3 891964230
301 554 3 882078830
16 64 5 877720297
95 135 3 879197562
154 357 4 879138713
77 484 5 884733766
296 508 5 884196584
302 303 2 879436785
244 673 3 880606667
222 77 4 878183616
13 215 5 882140588
16 705 5 877722736
270 452 4 876956264
145 15 2 875270655
187 64 5 879465631
200 304 5 876041644
170 749 5 887646170
101 829 3 877136138
184 218 3 889909840
128 204 4 879967478
181 1295 1 878961781
184 153 3 889911285
1 33 4 878542699
1 160 4 875072547
184 321 5 889906967
54 595 3 880937813
94 343 4 891725009
128 508 4 879967767
23 323 2 874784266
301 227 3 882077222
301 191 3 882075672
112 903 1 892440172
82 183 3 878769848
222 724 3 878181976
218 430 3 877488316
308 1197 4 887739521
303 134 5 879467959
133 751 3 890588547
215 212 2 891435680
69 256 5 882126156
254 662 4 887347350
276 2 4 874792436
104 984 1 888442575
63 1067 3 875747514
267 410 4 878970785
13 56 5 881515011
240 879 3 885775745
286 237 2 875806800
294 271 5 889241426
90 1086 4 891384424
18 26 4 880129731
92 229 3 875656201
308 649 4 887739292
144 89 3 888105691
191 302 4 891560253
59 951 3 888206409
200 96 5 884129409
16 197 5 877726146
61 678 3 892302309
271 199 4 885848448
271 709 3 885849325
142 169 5 888640356
275 597 3 876197678
222 151 3 878182109
87 40 3 879876917
207 258 4 877879172
272 1393 2 879454663
177 333 4 880130397
207 1115 2 879664906
299 577 3 889503806
271 378 4 885849447
305 425 4 886324486
49 959 2 888068912
94 1224 3 891722802
130 1017 3 874953895
10 175 3 877888677
203 321 3 880433418
191 286 4 891560842
43 323 3 875975110
21 558 5 874951695
197 96 5 891409839
13 344 2 888073635
194 66 3 879527264
234 206 4 892334543
308 402 4 887740700
308 640 4 887737036
269 522 5 891447773
94 265 4 891721889
268 62 3 875310824
272 12 5 879455254
121 291 3 891390477
296 20 5 884196921
134 286 3 891732334
180 462 5 877544218
234 612 3 892079140
104 117 2 888465972
38 758 1 892434626
269 845 1 891456255
7 163 4 891353444
234 1451 3 892078343
275 405 2 876197645
52 250 3 882922661
102 823 3 888801465
13 186 4 890704999
178 731 4 882827532
236 71 3 890116671
256 781 5 882165296
263 176 5 891299752
244 186 3 880605697
279 1181 4 875314001
43 815 4 883956189
83 78 2 880309089
151 197 5 879528710
254 436 2 886474216
109 631 3 880579371
297 716 3 875239422
249 188 4 879641067
144 699 4 888106106
301 604 4 882075994
64 392 3 889737542
92 501 2 875653665
222 97 4 878181739
268 436 3 875310745
293 135 5 888905550
213 173 5 878955442
160 460 2 876861185
13 498 4 882139901
59 715 5 888205921
5 17 4 875636198
125 163 5 879454956
174 315 5 886432749
114 505 3 881260203
213 515 4 878870518
23 196 2 874786926
128 15 4 879968827
239 56 4 889179478
181 279 1 878962955
291 80 4 875086354
250 238 4 878089963
201 649 3 884114275
60 60 5 883327734
181 325 2 878961814
119 407 3 887038665
287 1 5 875334088
216 228 3 880245642
216 531 4 880233810
203 471 4 880434463
92 587 3 875660408
13 892 3 882774224
213 176 4 878956338
286 288 5 875806672
117 1047 2 881009697
99 111 1 885678886
11 558 3 891904214
65 47 2 879216672
295 194 4 879517412
269 217 2 891451610
85 259 2 881705026
250 596 5 878089921
137 144 5 881433689
201 960 2 884112077
257 137 4 882049932
111 328 4 891679939
91 480 4 891438875
215 211 4 891436202
181 938 1 878961586
189 1060 5 893264301
1 20 4 887431883
303 404 4 879468375
299 305 3 879737314
187 210 4 879465242
222 278 2 877563913
214 568 4 892668197
293 770 3 888906655
285 191 4 890595859
303 252 3 879544791
96 156 4 884402860
72 1110 3 880037334
115 1067 4 881171009
7 430 3 891352178
116 350 3 886977926
73 480 4 888625753
269 246 5 891457067
263 419 5 891299514
70 431 3 884150257
221 475 4 875244204
72 182 5 880036515
25 357 4 885852757
290 50 5 880473582
189 526 4 893266205
299 303 3 877618584
264 294 3 886121516
200 365 5 884129962
187 135 4 879465653
184 187 4 889909024
63 289 2 875746985
13 229 4 882397650
298 486 3 884183063
235 185 4 889655435
62 712 4 879376178
246 94 2 884923505
54 742 5 880934806
63 762 3 875747688
11 732 3 891904596
92 168 4 875653723
8 550 3 879362356
307 174 4 879283480
303 200 4 879468459
256 849 2 882164603
72 54 3 880036854
164 406 2 889402389
117 150 4 880125101
224 77 4 888103872
193 869 3 889127811
94 184 2 891720862
281 338 2 881200457
130 109 3 874953794
128 371 1 879966954
94 720 1 891723593
182 845 3 885613067
129 873 1 883245452
254 229 4 886474580
64 381 4 879365491
151 176 2 879524293
45 25 4 881014015
193 879 3 889123257
276 922 4 889174849
276 57 3 874787526
234 187 4 892079140
181 306 1 878962006
21 370 1 874951293
293 249 3 888905229
264 721 5 886123656
10 611 5 877886722
197 346 3 891409070
276 142 3 874792945
308 427 4 887736584
221 943 4 875246759
131 126 4 883681514
268 824 2 876518557
109 8 3 880572642
198 58 3 884208173
230 680 4 880484286
181 741 1 878962918
192 1061 4 881368891
234 448 3 892335501
90 900 4 891382309
193 941 4 889124890
128 603 5 879966839
126 905 2 887855283
244 265 4 880606634
90 289 3 891382310
157 25 3 886890787
305 71 3 886323684
119 382 5 874781742
21 222 2 874951382
231 181 4 888605273
280 508 3 891700453
288 132 3 886374129
279 1497 2 890780576
301 33 4 882078228
72 699 3 880036783
90 259 2 891382392
308 55 3 887738760
59 742 3 888203053
94 744 4 891721462
130 642 4 875216933
26 1015 3 891352136
56 121 5 892679480
82 508 2 884714249
62 12 4 879373613
276 40 3 874791871
181 1015 1 878963121
152 301 3 880147407
178 845 4 882824291
217 597 4 889070087
79 303 4 891271203
138 484 4 879024127
308 81 5 887737293
75 284 2 884050393
269 198 4 891447062
307 94 3 877122695
222 781 3 881059677
121 740 3 891390544
269 22 1 891448072
13 864 4 882141924
230 742 5 880485043
269 507 4 891448800
239 1099 5 889179253
245 1028 5 888513447
56 546 3 892679460
295 961 5 879519556
271 1028 2 885848102
222 812 2 881059117
69 240 3 882126156
10 7 4 877892210
22 376 3 878887112
294 931 3 889242857
82 717 1 884714492
279 399 4 875313859
269 234 1 891449406
6 98 5 883600680
243 1039 4 879988184
298 181 4 884125629
282 325 1 881703044
78 323 1 879633567
118 200 5 875384647
283 1114 5 879297545
171 292 4 891034835
70 217 4 884151119
10 100 5 877891747
245 181 4 888513664
107 333 3 891264267
246 561 1 884923445
13 901 1 883670672
276 70 4 874790826
244 17 2 880607205
189 56 5 893265263
226 242 5 883888671
62 1016 4 879373008
276 417 4 874792907
214 478 4 891544052
306 235 4 876504354
222 26 3 878183043
280 631 5 891700751
60 430 5 883326122
56 71 4 892683275
42 274 5 881105817
1 202 5 875072442
13 809 4 882397582
173 289 4 877556988
15 749 1 879455311
185 23 4 883524249
280 540 3 891702304
244 381 4 880604077
150 293 4 878746946
7 497 4 891352134
178 317 4 882826915
178 742 3 882823833
95 1217 3 880572658
234 1462 3 892333865
97 222 5 884238887
109 127 2 880563471
117 268 5 880124306
269 705 2 891448850
130 1246 3 876252497
264 655 4 886123530
207 13 3 875506839
42 588 5 881108147
246 409 2 884923372
87 367 4 879876702
101 304 3 877135677
256 127 4 882164406
92 794 3 875654798
181 762 2 878963418
213 235 1 878955115
92 739 2 876175582
292 661 5 881105561
246 665 4 884922831
274 845 5 878945579
188 692 5 875072583
18 86 4 880129731
5 439 1 878844423
236 632 3 890116254
193 407 4 889127921
144 709 4 888105940
90 1198 5 891383866
48 609 4 879434819
5 225 2 875635723
22 128 5 878887983
311 432 4 884365485
8 22 5 879362183
276 188 4 874792547
222 173 5 878183043
72 866 4 880035887
299 134 4 878192311
1 171 5 889751711
308 295 3 887741461
165 216 4 879525778
222 49 3 878183512
181 121 4 878962623
200 11 5 884129542
234 626 4 892336358
244 707 4 880606243
90 25 5 891384789
208 216 5 883108324
263 96 4 891298336
134 323 4 891732335
279 586 4 892864663
2 292 4 888550774
288 593 2 886892127
49 302 4 888065432
286 153 5 877531406
205 304 3 888284313
22 80 4 878887227
234 318 4 892078890
223 328 3 891548959
15 25 3 879456204
268 147 4 876514002
94 1220 3 891722678
274 405 4 878945840
7 492 5 891352010
268 217 2 875744501
16 55 5 877717956
164 620 3 889402298
290 161 4 880474293
92 515 4 875640800
239 1070 5 889179032
56 449 5 892679308
248 234 4 884534968
234 10 3 891227851
280 1049 2 891702486
308 187 5 887738760
276 64 5 874787441
192 948 3 881368302
122 509 4 879270511
85 588 3 880838306
262 931 2 879790874
201 272 3 886013700
181 870 2 878962623
295 739 4 879518319
263 568 4 891299387
295 39 4 879518279
201 1100 4 884112800
93 820 3 888705966
159 1028 5 880557539
158 665 2 880134532
293 423 3 888906070
82 597 3 878768882
276 181 5 874786488
13 823 5 882397833
217 2 3 889069782
83 660 4 880308256
189 20 5 893264466
222 796 4 878183684
146 1022 5 891458193
267 121 3 878970681
126 294 3 887855087
181 1060 1 878962675
125 80 4 892838865
43 120 4 884029430
13 780 1 882142057
253 259 2 891628883
42 44 3 881108548
77 518 4 884753202
291 686 5 874835165
268 21 3 875742822
262 28 3 879792220
234 81 3 892334680
29 245 3 882820803
236 57 5 890116575
158 729 3 880133116
156 661 4 888185947
232 52 5 888550130
168 866 5 884287927
37 288 4 880915258
141 245 3 884584426
235 230 4 889655162
102 70 3 888803537
77 172 3 884752562
90 506 5 891383319
186 566 5 879023663
44 660 5 878347915
118 774 5 875385198
7 661 5 891351624
49 1003 2 888068651
62 68 1 879374969
42 1028 4 881106072
178 433 4 882827834
85 51 2 879454782
77 474 5 884732407
58 1099 2 892243079
56 1047 4 892911290
197 688 1 891409564
286 99 4 878141681
90 258 3 891382121
181 1288 1 878962349
295 190 4 879517062
224 69 4 888082495
272 317 4 879454977
221 1010 3 875246662
66 877 1 883601089
207 318 5 877124871
234 487 3 892079237
7 648 5 891351653
87 82 5 879875774
195 1052 1 877835102
44 449 5 883613334
306 287 4 876504442
194 172 3 879521474
94 62 3 891722933
167 659 4 892738277
108 100 4 879879720
230 304 5 880484286
181 927 1 878962675
54 302 4 880928519
90 22 4 891384357
181 696 2 878962997
286 357 4 877531537
14 269 4 892242403
311 179 2 884365357
92 121 5 875640679
21 440 1 874951798
244 550 1 880602264
181 405 4 878962919
65 806 4 879216529
37 540 2 880916070
44 443 5 878348289
244 183 4 880606043
1 265 4 878542441
270 25 5 876954456
299 387 2 889502756
94 572 3 891723883
286 746 4 877533058
239 272 5 889181247
216 55 5 880245145
254 121 3 886472369
62 665 2 879376483
178 385 4 882826982
194 23 4 879522819
268 955 3 875745160
188 143 5 875072674
276 294 4 874786366
158 1098 4 880135069
207 845 3 881681663
161 48 1 891170745
305 654 4 886323937
47 324 3 879439078
64 736 4 889739212
191 751 3 891560753
7 378 5 891353011
59 92 5 888204997
69 268 5 882027109
10 461 3 877888944
21 129 4 874951382
58 9 4 884304328
194 152 3 879549996
7 200 5 891353543
113 126 5 875076827
173 328 5 877557028
95 233 4 879196354
16 194 5 877720733
59 323 4 888206809
311 654 3 884365075
292 589 4 881105516
43 203 4 883955224
79 50 4 891271545
235 70 5 889655619
125 190 5 892836309
284 322 3 885329671
303 161 5 879468547
254 378 3 886474396
255 1034 1 883217030
104 301 2 888442275
90 923 5 891383912
6 463 4 883601713
279 122 1 875297433
286 298 4 875807004
222 448 3 878183565
297 57 5 875239383
42 625 3 881108873
130 1217 4 875801778
254 357 3 886472466
109 475 1 880563641
230 1444 2 880485726
244 310 3 880601905
6 301 2 883600406
36 748 4 882157285
256 443 3 882164727
102 515 1 888801316
104 285 4 888465201
21 447 5 874951695
111 301 4 891680028
18 408 5 880129628
25 222 4 885852817
110 944 3 886989501
270 98 5 876955868
68 237 5 876974133
83 215 4 880307940
6 258 2 883268278
89 216 5 879459859
128 317 4 879968029
305 512 4 886323525
184 412 2 889912691
286 175 5 877532470
279 1428 3 888465209
256 86 5 882165103
221 48 5 875245462
140 332 3 879013617
190 977 2 891042938
11 227 3 891905896
201 203 5 884114471
150 181 5 878746685
126 245 3 887854726
20 208 2 879669401
144 742 4 888104122
181 930 1 878963275
109 566 4 880578814
85 1065 3 879455021
213 133 3 878955973
222 379 1 878184290
223 11 3 891550649
215 421 4 891435704
218 208 3 877488366
174 937 5 886432989
275 186 3 880314383
68 742 1 876974198
268 583 4 876513830
160 462 4 876858346
195 273 4 878019342
224 178 4 888082468
5 110 1 875636493
99 1016 5 885678724
2 251 5 888552084
292 9 4 881104148
72 568 4 880037203
85 228 3 882813248
83 281 5 880307072
92 831 2 886443708
7 543 3 891351772
87 401 2 879876813
287 926 4 875334340
1 155 2 878542201
234 632 2 892079538
222 53 5 878184113
24 64 5 875322758
7 554 3 891354639
82 56 3 878769410
161 318 3 891170824
196 393 4 881251863
56 91 4 892683275
82 477 3 876311344
7 472 2 891353357
256 761 4 882164644
226 56 4 883889102
279 741 5 875296891
308 1286 3 887738151
16 8 5 877722736
180 202 3 877128388
203 93 4 880434940
145 56 5 875271896
288 305 4 886372527
84 742 3 883450643
44 644 3 878347818
17 13 3 885272654
313 117 4 891015319
148 1 4 877019411
197 347 4 891409070
21 164 5 874951695
279 982 3 875298314
239 491 5 889181015
185 287 5 883526288
297 89 4 875239125
303 68 4 879467361
186 250 1 879023607
73 206 3 888625754
104 756 2 888465739
94 216 3 885870665
239 194 5 889178833
197 511 5 891409839
280 1 4 891700426
1 117 3 874965739
224 583 1 888103729
303 397 1 879543831
60 162 4 883327734
198 258 4 884204501
239 513 5 889178887
6 69 3 883601277
233 375 4 876374419
85 642 4 882995615
110 38 3 886988574
184 522 3 889908462
99 873 1 885678436
13 418 2 882398763
201 518 4 884112201
13 858 1 882397068
214 131 3 891544465
296 228 4 884197264
222 87 3 878182589
279 725 4 875314144
217 182 2 889070109
85 433 3 879828720
239 234 3 889178762
13 72 4 882141727
194 77 3 879527421
208 663 5 883108476
109 178 3 880572950
230 172 4 880484523
59 485 2 888204466
313 478 3 891014373
70 1133 3 884151344
62 182 5 879375169
198 234 3 884207833
65 125 4 879217509
174 660 5 886514261
90 12 5 891383241
130 1248 3 880396702
100 354 2 891375260
283 432 5 879297965
275 418 3 875154718
311 98 5 884364502
195 751 4 883295500
130 105 4 876251160
269 252 1 891456350
286 73 5 877532965
7 623 3 891354217
56 222 5 892679439
210 204 5 887730676
239 9 5 889180446
96 87 4 884403531
297 73 2 875239691
249 239 3 879572284
94 860 2 891723706
84 121 4 883452307
275 265 4 880314031
135 1046 3 879858003
291 1178 4 875086354
125 382 1 892836623
70 399 4 884068521
311 9 4 884963365
301 523 4 882076146
152 685 5 880149074
244 172 4 880605665
275 1091 2 875154535
53 281 4 879443288
198 118 2 884206513
244 790 4 880608037
26 125 4 891371676
151 13 3 879542688
124 496 1 890286933
24 191 5 875323003
271 65 3 885849419
307 634 3 879283385
294 1245 3 877819265
234 241 2 892335042
25 501 3 885852301
293 137 3 888904653
201 432 3 884111312
75 240 1 884050661
13 181 5 882140354
207 68 2 877125350
2 50 5 888552084
313 566 4 891016220
144 125 4 888104191
188 443 4 875074329
276 324 4 874786419
145 974 1 882182634
72 234 4 880037418
83 385 4 887665549
181 619 3 878963086
109 402 4 880581344
207 107 3 876198301
185 216 4 883526268
14 213 5 890881557
149 319 2 883512658
57 79 5 883698495
230 963 5 880484370
176 875 4 886047442
253 97 4 891628501
284 269 4 885328991
106 526 4 881452685
121 180 3 891388286
62 86 2 879374640
291 418 4 875086920
84 1033 4 883452711
293 380 2 888907527
207 58 3 875991047
194 187 4 879520813
109 97 3 880578711
283 845 4 879297442
297 275 5 874954260
181 334 1 878961749
78 255 4 879633745
11 425 4 891904300
308 59 4 887737647
193 1078 4 889126943
297 234 3 875239018
87 585 4 879877008
250 204 2 878091682
8 50 5 879362124
186 148 4 891719774
312 692 4 891699426
91 683 3 891438351
5 454 1 875721432
291 376 3 875086534
175 127 5 877107640
145 737 2 875272833
7 644 5 891351685
276 419 5 874792907
83 210 5 880307751
102 524 3 888803537
153 174 1 881371140
62 302 3 879371909
49 995 3 888065577
268 298 3 875742647
207 554 2 877822854
313 616 5 891015049
286 44 3 877532173
279 168 5 875296435
276 474 5 889174904
62 59 4 879373821
254 219 1 886475980
83 97 4 880308690
63 100 5 875747319
16 178 5 877719333
297 233 2 875239914
90 945 5 891383866
85 25 2 879452769
42 98 4 881106711
303 393 4 879484981
274 50 5 878944679
104 299 3 888442436
94 792 4 885873006
184 98 4 889908539
293 708 3 888907527
248 589 4 884534968
18 950 3 880130764
217 27 1 889070011
200 892 4 884127082
201 148 1 884140751
296 222 5 884196640
7 662 3 892133739
196 381 4 881251728
69 427 3 882145465
72 196 4 880036747
256 472 4 882152471
128 182 4 879967225
151 747 3 879524564
7 171 3 891351287
286 85 5 877533224
172 220 4 875537441
308 516 4 887736743
190 974 2 891625949
82 756 1 878768741
308 436 4 887739257
59 235 1 888203658
64 1063 3 889739539
145 756 2 885557506
220 298 4 881198966
21 324 4 874950889
285 269 4 890595313
207 65 3 878104594
198 658 3 884208173
220 333 3 881197771
210 70 4 887730589
181 14 1 878962392
158 128 2 880134296
143 682 3 888407741
75 237 2 884050309
199 221 4 883782854
223 1150 2 891549841
297 25 4 874954497
276 78 4 877934828
299 847 4 877877649
293 325 2 888904353
301 138 2 882079446
1 47 4 875072125
164 281 4 889401906
96 673 4 884402860
291 1016 4 874833827
7 451 5 891353892
233 177 4 877661496
6 517 4 883602212
202 283 3 879727153
214 117 4 891543241
184 602 4 889909691
277 257 3 879543487
194 212 1 879524216
95 68 4 879196231
25 257 4 885853415
6 23 4 883601365
38 573 1 892433660
313 436 4 891029877
22 241 3 878888025
262 617 3 879793715
130 569 3 880396494
66 181 5 883601425
21 948 1 874951054
181 1332 1 878962278
262 174 3 879791948
206 302 5 888180227
222 22 5 878183285
76 61 4 875028123
151 703 4 879542460
314 28 5 877888346
13 147 3 882397502
44 258 4 878340824
303 418 4 879483510
16 89 2 877717833
270 558 5 876954927
248 117 5 884535433
125 318 5 879454309
138 523 5 879024043
268 386 2 875743978
291 15 5 874833668
234 147 3 892335372
239 96 5 889178798
15 331 3 879455166
94 155 2 891723807
136 89 4 882848925
223 423 3 891550684
82 194 4 878770027
145 355 3 888396967
280 845 3 891700925
179 339 1 892151366
178 199 4 882826306
307 949 4 877123315
10 488 5 877888613
116 331 3 876451911
23 258 5 876785704
308 174 4 887736696
185 114 4 883524320
188 237 3 875073648
118 654 5 875385007
246 721 4 884921794
234 98 4 892078567
194 239 3 879522917
94 24 4 885873423
122 378 4 879270769
312 100 4 891698613
262 64 5 879793022
154 242 3 879138235
223 763 3 891550067
99 403 4 885680374
83 43 4 880308690
130 307 4 877984546
174 402 5 886513729
256 487 5 882164231
59 177 4 888204349
161 168 1 891171174
244 53 3 880607489
250 196 4 878091818
43 40 3 883956468
285 150 5 890595636
42 953 2 881108815
97 670 5 884239744
122 510 4 879270327
61 323 3 891206450
222 106 2 883816184
4 264 3 892004275
304 259 1 884967253
37 403 5 880915942
49 68 1 888069513
303 1098 4 879467959
165 372 5 879525987
176 324 5 886047292
3 335 1 889237269
56 869 3 892683895
44 15 4 878341343
190 117 4 891033697
29 189 4 882821942
94 174 4 885870231
130 949 3 876251944
117 181 5 880124648
303 779 1 879543418
19 435 5 885412840
194 191 4 879521856
158 24 4 880134261
56 447 4 892679067
262 223 3 879791816
181 1334 1 878962240
214 137 4 891543227
92 747 4 875656164
188 96 5 875073128
58 173 5 884305353
244 154 5 880606385
134 879 4 891732393
298 625 4 884183406
254 230 4 886472400
230 138 3 880485197
16 209 5 877722736
151 835 5 879524199
181 1327 1 878963305
145 1248 3 875272195
200 588 5 884128499
248 257 3 884535840
297 432 4 875239658
312 133 5 891699296
151 12 5 879524368
110 568 3 886988449
305 483 5 886323068
141 258 5 884584338
44 240 4 878346997
186 263 3 879023571
214 213 4 891544414
233 208 4 880610814
104 287 2 888465347
312 153 2 891699491
1 222 4 878873388
206 323 1 888179833
230 419 4 880484587
56 450 3 892679374
94 651 5 891725332
205 316 4 888284710
14 174 5 890881294
268 790 2 876513785
276 1081 3 880913705
83 929 3 880307140
268 580 3 875309344
222 1041 3 881060155
279 89 4 875306910
5 424 1 875635807
112 331 4 884992603
296 429 5 884197330
18 202 3 880130515
13 868 5 882139901
87 210 5 879875734
10 285 5 877889186
181 328 3 878961227
23 463 4 874785843
253 746 3 891628630
234 228 3 892079190
299 1047 2 877880041
66 1 3 883601324
216 174 5 881432488
290 208 3 880475245
79 1161 2 891271697
264 448 2 886122031
4 303 5 892002352
144 831 3 888104805
138 517 4 879024279
64 433 2 889740286
5 1 4 875635748
276 357 5 874787526
62 433 5 879375588
239 475 5 889178689
293 166 3 888905520
130 234 5 875216932
264 70 4 886123596
208 197 5 883108797
24 763 5 875322875
279 1162 3 875314334
3 245 1 889237247
101 596 3 877136564
162 1019 4 877636556
223 908 1 891548802
99 246 3 888469392
239 430 3 889180338
160 160 5 876862078
172 580 4 875538028
303 1160 2 879544629
54 676 5 880935294
44 507 3 878347392
210 97 5 887736454
164 930 4 889402340
299 240 2 877878414
28 217 3 881961671
305 79 3 886324276
18 729 3 880131236
82 343 1 884713755
109 1012 4 880564570
207 25 4 876079113
92 1209 1 875660468
109 1 4 880563619
15 222 3 879455730
58 709 5 884304812
303 693 4 879466771
152 111 5 880148782
194 160 2 879551380
92 241 3 875655961
77 91 3 884752924
244 662 3 880606533
177 321 2 880130481
131 221 3 883681561
197 302 3 891409070
227 50 4 879035347
85 282 3 879829618
295 72 4 879518714
181 1 3 878962392
277 255 4 879544145
279 96 4 875310606
1 253 5 874965970
18 182 4 880130640
276 568 4 882659211
87 177 5 879875940
177 69 1 880131088
213 13 4 878955139
125 134 5 879454532
128 739 4 879969349
291 428 5 874871766
25 208 4 885852337
288 272 5 889225463
207 1350 2 877878772
271 56 3 885848559
5 363 3 875635225
274 748 5 878944406
70 419 5 884065035
311 559 2 884366187
151 919 5 879524368
199 268 5 883782509
201 209 3 884112801
99 274 1 885679157
11 740 4 891903067
59 77 4 888206254
184 277 3 889907971
222 88 4 878183336
38 161 5 892432062
59 418 2 888205188
104 300 3 888442275
298 1346 3 884126061
180 1119 3 877128156
7 674 2 891352659
121 14 5 891390014
268 1041 1 875743735
252 277 4 891456797
303 411 4 879483802
210 527 5 887736232
234 648 3 892826760
312 573 5 891712535
308 215 3 887737483
234 1397 4 892334976
75 546 3 884050422
117 15 5 880125887
246 239 3 884921380
64 516 5 889737376
85 187 5 879454235
239 81 3 889179808
59 54 4 888205921
256 220 3 882151690
216 196 5 880245145
203 282 1 880434919
13 195 3 881515296
144 153 5 888105823
100 268 3 891374982
210 274 5 887730676
94 471 4 891721642
13 807 1 886304229
125 657 3 892836422
65 1142 4 879217349
1 113 5 878542738
76 175 4 875028853
294 508 4 877819532
263 1451 4 891299949
294 930 3 889242704
121 117 1 891388600
85 13 3 879452866
303 426 3 879542535
212 180 1 879303974
6 492 5 883601089
181 240 1 878963122
279 746 5 875310233
303 1109 4 879467936
184 191 4 889908716
310 116 5 879436104
313 22 3 891014870
314 1150 4 877887002
13 121 5 882397503
43 5 4 875981421
58 214 2 884305296
215 164 3 891436633
62 288 2 879371909
280 127 5 891702544
161 898 3 891170191
11 723 5 891904637
94 218 3 891721851
35 243 2 875459046
311 566 4 884366112
48 680 3 879434330
85 604 4 882995132
288 527 3 886373565
184 514 5 889908497
151 929 3 879543457
90 690 4 891383319
11 38 3 891905936
104 1016 1 888466002
106 582 4 881451199
181 1010 1 878962774
37 117 4 880915674
276 845 4 874786807
22 258 5 878886261
70 82 4 884068075
5 98 3 875720691
308 95 4 887737130
60 208 5 883326028
270 778 5 876955711
243 208 4 879989134
92 540 2 875813197
81 280 4 876534214
293 412 1 888905377
200 478 5 884128788
13 308 3 881514726
56 184 4 892679088
116 250 4 876452606
295 172 4 879516986
63 1007 5 875747368
295 235 4 879517943
104 1010 1 888465554
156 641 5 888185677
269 1165 1 891446904
160 430 5 876861799
237 191 4 879376773
287 252 1 875334361
290 132 3 880473993
45 109 5 881012356
224 678 3 888082277
145 764 2 888398257
277 1011 3 879543697
65 100 3 879217558
272 1101 5 879454977
116 255 3 876452524
184 86 5 889908694
285 151 5 890595636
222 148 2 881061164
72 28 4 880036824
271 187 5 885848343
94 211 5 891721142
246 425 5 884921918
115 8 5 881171982
176 327 3 886047176
13 396 3 882141727
129 331 2 883244737
257 1260 2 880496892
95 1 5 879197329
147 904 5 885594015
151 58 4 879524849
184 660 3 889909962
311 386 3 884365747
105 268 4 889214268
158 510 3 880134296
34 312 4 888602742
72 427 5 880037702
263 416 5 891299697
94 1048 4 891722678
200 291 3 891825292
45 118 4 881014550
279 144 4 880850073
145 22 5 875273021
71 89 5 880864462
182 69 5 876435435
193 627 4 889126972
214 302 4 892668197
151 485 5 879525002
102 322 3 883277645
234 571 2 892318158
249 930 2 879640585
195 328 4 884420059
109 258 5 880562908
222 552 2 878184596
282 288 4 879949367
117 758 2 881011217
23 381 4 874787350
112 327 1 884992535
303 145 1 879543573
252 300 4 891448664
151 372 5 879524819
282 327 5 879949417
304 237 5 884968415
290 568 3 880474716
64 160 4 889739288
28 79 4 881961003
168 1278 3 884287560
265 471 4 875320302
18 113 5 880129628
83 82 5 887665423
90 499 5 891383866
234 1186 4 892335707
87 196 5 879877681
26 685 3 891371676
150 129 4 878746946
161 98 4 891171357
70 210 4 884065854
51 182 3 883498790
222 1057 4 881061370
92 176 5 875652981
204 216 4 892513864
164 685 5 889402160
57 682 3 883696824
184 207 4 889908903
60 403 3 883327087
92 180 5 875653016
43 204 4 883956122
222 1042 4 878184514
197 300 4 891409422
92 790 3 875907618
294 282 3 877821796
201 747 2 884113635
201 215 2 884140382
193 410 3 889127633
271 705 4 885849052
214 693 3 891544414
73 657 5 888625422
90 187 4 891383561
315 273 3 879821349
48 309 3 879434132
255 472 1 883216958
270 671 4 876956360
66 7 3 883601355
6 478 4 883602762
101 222 3 877136243
207 1046 4 875509787
144 182 3 888105743
85 83 4 886282959
102 625 3 883748418
158 770 5 880134477
297 588 4 875238579
90 507 5 891383987
271 482 5 885848519
130 901 1 884624044
178 276 3 882823978
90 245 3 891382612
181 1094 1 878963086
311 143 3 884364812
267 17 4 878971773
201 51 2 884140751
194 647 4 879521531
59 387 3 888206562
1 227 4 876892946
116 751 3 890131577
170 292 5 884103732
110 578 3 886988536
60 1021 5 883326185
287 347 4 888177040
197 55 3 891409982
38 679 5 892432062
195 1014 4 879673925
279 227 4 889326161
84 748 4 883449530
31 886 2 881547877
316 98 5 880853743
25 25 5 885853415
168 274 4 884287865
103 24 4 880415847
299 588 4 877880852
194 478 3 879521329
287 294 5 875333873
234 582 4 892334883
279 1048 1 886015533
87 9 4 879877931
181 408 1 878962550
279 1151 2 875744584
49 47 5 888068715
296 855 5 884197352
44 95 4 878347569
92 216 3 875653867
135 39 3 879857931
13 66 3 882141485
262 386 3 879795512
7 676 3 891354499
116 942 3 876454090
318 474 4 884495742
141 826 2 884585437
269 13 4 891446662
222 1044 4 881060578
82 455 4 876311319
279 254 3 879572960
42 685 4 881105972
145 1245 5 875271397
184 161 2 889909640
49 625 3 888067031
177 243 1 882142141
313 99 4 891014029
32 290 3 883717913
308 848 4 887736925
145 448 5 877343121
130 542 3 875801778
130 806 3 875217096
165 288 2 879525673
249 255 3 879571752
49 581 3 888068143
195 300 3 890588925
118 475 5 875384793
130 316 4 888211794
104 293 3 888465166
201 1229 3 884140307
142 82 4 888640356
119 718 5 874774956
303 94 3 879485318
99 50 5 885679998
306 14 5 876503995
92 709 2 875654590
227 295 5 879035387
3 337 1 889236983
94 820 1 891723186
59 1107 4 888206254
30 539 3 885941454
262 821 3 879794887
6 508 3 883599530
311 716 4 884365718
268 364 3 875743979
262 553 4 879795122
214 275 3 891542968
16 56 5 877719863
262 293 2 879790906
293 132 4 888905481
62 132 5 879375022
94 346 4 891725410
13 59 4 882140425
240 313 5 885775604
102 161 2 888801876
83 301 2 891181430
291 7 5 874834481
312 28 4 891698300
31 484 5 881548030
291 70 4 874868146
56 172 5 892737191
109 588 4 880578388
110 1246 2 886989613
59 429 4 888204597
246 1218 3 884922801
65 196 5 879216637
24 367 2 875323241
92 115 3 875654125
308 741 4 887739863
301 660 4 882076782
214 1129 4 892668249
158 241 4 880134445
269 674 2 891451754
308 493 3 887737293
32 151 3 883717850
224 191 4 888082468
215 423 5 891435526
32 1012 4 883717581
154 289 2 879138345
201 509 3 884111546
85 298 4 880581629
180 68 5 877127721
184 36 3 889910195
188 218 5 875074667
305 11 1 886323237
144 508 4 888104150
73 94 1 888625754
194 205 3 879524291
177 203 4 880131026
276 273 4 874786517
198 7 4 884205317
108 290 4 879880076
189 197 5 893265291
73 56 4 888626041
172 462 3 875537717
120 546 2 889490979
101 471 3 877136535
5 102 3 875721196
26 235 2 891372429
268 1249 2 875743793
276 773 3 874792794
13 150 5 882140588
7 401 4 891354257
128 482 4 879967432
104 7 3 888465972
293 39 3 888906804
256 25 5 882150552
90 821 3 891385843
275 69 3 880314089
22 510 5 878887765
312 494 5 891698454
207 192 3 877822350
264 504 5 886122577
137 687 4 881432756
185 740 4 883524475
307 687 1 879114143
42 176 3 881107178
145 472 3 875271128
189 634 3 893265506
262 121 3 879790536
251 148 2 886272547
259 772 4 874724882
239 58 5 889179623
312 921 5 891699295
92 15 3 875640189
81 742 2 876533764
311 419 3 884364931
102 448 3 888803002
249 746 5 879641209
95 527 4 888954440
19 655 3 885412723
79 100 5 891271652
189 751 4 893265046
253 510 5 891628416
201 919 3 884141208
1 17 3 875073198
214 42 5 892668130
7 81 5 891352626
234 132 4 892333865
59 148 3 888203175
13 354 2 888779458
6 469 5 883601155
82 14 4 876311280
109 627 5 880582133
305 50 5 886321799
195 154 3 888737525
277 279 4 879543592
223 8 2 891550684
92 81 3 875654929
201 69 2 884112901
94 58 5 891720540
217 144 4 889069782
244 148 2 880605071
313 200 3 891017736
181 874 1 878961749
116 1216 3 876452582
303 433 4 879467985
117 151 4 880126373
221 327 4 875243968
46 307 3 883611430
91 28 4 891439243
151 317 5 879524610
64 176 4 889737567
90 553 2 891384959
116 271 4 886310197
291 1139 3 874871671
62 111 3 879372670
196 251 3 881251274
303 120 2 879544099
49 547 5 888066187
307 1022 4 879283008
303 176 5 879467260
286 154 4 877533381
291 501 4 875087100
235 87 4 889655162
254 379 1 886474650
276 157 5 874790773
135 1208 3 879858003
57 243 3 883696547
276 1157 2 874795772
7 576 5 892132943
250 404 4 878092144
318 768 2 884498022
234 808 2 892335707
289 282 3 876789180
87 1079 2 879877240
50 823 3 877052784
25 258 5 885853199
18 496 5 880130470
193 790 3 889127381
263 510 4 891298392
209 906 2 883589546
207 716 3 875508783
314 535 4 877887002
250 338 4 883263374
262 568 3 879794113
95 172 4 879196847
94 470 4 891722006
59 583 5 888205921
277 282 4 879543697
303 1286 4 879467413
271 714 3 885848863
269 235 3 891446756
148 140 1 877019882
223 977 2 891550295
210 357 5 887736206
185 199 4 883526268
174 80 1 886515210
235 480 4 889655044
276 939 3 874790855
99 354 2 888469332
308 163 4 887737084
303 738 2 879544276
224 873 2 888082187
298 252 4 884183833
44 208 4 878347420
315 13 4 879821158
215 197 4 891435357
269 9 4 891446246
42 195 5 881107949
293 79 3 888906045
246 68 5 884922341
101 405 4 877137015
92 665 3 875906853
249 88 4 879572668
60 525 5 883325944
13 331 3 881515457
271 750 4 885844698
92 731 4 875653769
254 188 3 886473672
311 203 5 884365201
263 197 4 891299752
201 660 3 884140927
279 79 3 875296461
138 496 4 879024043
209 251 5 883417810
217 7 4 889069741
261 340 5 890454045
176 258 4 886047026
303 1037 3 879544340
81 169 4 876534751
62 114 4 879373568
72 530 4 880037164
276 364 3 877935377
88 750 2 891037276
49 7 4 888067307
263 117 3 891299387
9 298 5 886960055
92 528 4 875657681
249 708 4 879572403
262 754 3 879961283
196 655 5 881251793
207 1436 3 878191574
256 771 2 882164999
276 226 4 874792520
134 313 5 891732150
311 849 3 884365781
181 1383 1 878962086
203 148 3 880434755
247 736 5 893097024
313 745 3 891016583
311 83 5 884364812
251 1014 5 886272486
227 411 4 879035897
59 550 5 888206605
201 206 2 884112029
58 100 5 884304553
249 723 4 879641093
286 1316 5 884583549
11 725 3 891905568
7 228 4 891350845
92 846 3 886443471
160 56 5 876770222
103 127 4 880416331
11 110 3 891905324
87 2 4 879876074
45 763 2 881013563
293 605 3 888907702
291 732 4 874868097
254 575 3 886476165
49 334 4 888065744
222 1284 4 878184422
161 162 2 891171413
268 1 3 875742341
59 215 5 888204430
177 209 4 880130736
151 1298 4 879528520
299 235 1 877878184
29 332 4 882820869
30 435 5 885941156
297 182 3 875239125
315 185 4 879821267
23 172 4 874785889
262 47 2 879794599
321 496 4 879438607
191 754 3 891560366
106 778 4 881453040
7 151 4 891352749
178 678 3 882823530
84 12 5 883452874
94 168 5 891721378
264 33 3 886122644
239 529 5 889179808
90 657 5 891385190
261 875 5 890454351
190 302 5 891033606
112 289 5 884992690
144 106 3 888104684
199 258 4 883782403
224 20 1 888104487
85 501 3 880838306
301 202 5 882076211
145 743 1 888398516
294 127 5 877819265
130 206 3 875801695
103 121 3 880415766
152 412 2 880149328
267 840 4 878970926
286 231 3 877532094
200 24 2 884127370
5 211 4 875636631
160 117 4 876767822
6 357 4 883602422
158 72 3 880135118
297 736 4 875239975
250 244 4 878089786
57 760 2 883697617
58 268 5 884304288
23 1006 3 874785809
301 1228 4 882079423
307 265 3 877122816
276 1095 1 877935135
223 411 1 891550005
92 24 3 875640448
137 300 5 881432524
164 117 5 889401816
276 38 3 874792574
213 294 3 878870226
286 34 5 877534701
232 197 4 888549563
150 221 4 878747017
21 103 1 874951245
130 731 3 876251922
222 441 2 881059920
1 90 4 878542300
189 1005 4 893265971
49 38 1 888068289
311 5 3 884365853
36 307 4 882157227
128 228 3 879969329
151 89 5 879524491
248 475 5 884535446
95 1229 2 879198800
213 609 4 878955533
203 181 5 880434278
308 863 3 887736881
269 47 4 891448386
198 100 1 884207325
297 307 4 878771124
305 189 5 886323303
266 676 3 892257897
197 229 3 891410039
74 272 5 888333194
127 294 4 884363803
194 4 4 879521397
177 56 5 880130618
45 473 3 881014417
57 28 4 883698324
239 187 5 889178798
268 94 2 875743630
238 252 3 883576644
201 1010 3 884140579
131 1281 4 883681561
270 97 4 876955633
159 127 5 880989744
230 202 4 880485352
92 219 4 875654888
318 356 4 884496671
123 531 3 879872671
267 403 4 878971939
232 630 3 888550060
5 382 5 875636587
16 155 3 877719157
180 762 4 877126241
178 282 3 882823978
319 313 5 889816026
180 737 3 877128327
270 736 5 876955087
269 658 2 891448497
293 496 5 888905840
269 793 4 891449880
54 685 3 880935504
21 98 5 874951657
303 209 5 879467328
13 766 4 882139686
314 95 5 877888168
151 387 5 879542353
230 378 5 880485159
201 403 3 884112427
95 1206 4 888956137
270 370 5 876956232
256 716 5 882165135
80 582 3 887401701
303 435 5 879466491
312 121 3 891698174
151 1006 1 879524974
62 258 5 879371909
189 1115 4 893264270
77 195 5 884733695
99 742 5 885679114
291 1028 3 875086561
293 748 2 888904327
181 1342 1 878962168
206 900 1 888179980
83 338 4 883868647
262 179 4 879962570
253 216 4 891628252
223 596 3 891549713
108 50 4 879879739
94 347 5 891724950
293 779 1 888908066
101 281 2 877136842
267 980 3 878970578
201 1245 4 884141015
314 1263 2 877890611
271 111 4 885847956
314 276 1 877886413
18 387 4 880130155
207 4 4 876198457
313 96 5 891015144
21 299 1 874950931
215 144 4 891435107
279 1376 4 886016680
234 1015 2 892079617
296 248 5 884196765
270 83 4 876954995
210 161 5 887736393
201 79 4 884112245
5 376 2 879198045
184 181 4 889907426
104 411 1 888465739
275 449 3 876198328
185 269 5 883524428
276 550 4 874792574
279 1182 3 875314370
216 69 5 880235229
21 457 1 874951054
16 471 3 877724845
147 292 5 885594040
291 250 4 874805927
28 95 3 881956917
29 539 2 882821044
291 471 4 874833746
7 580 3 892132171
181 16 1 878962996
297 218 3 875409827
308 559 4 887740367
87 211 5 879876812
97 89 5 884238939
21 596 3 874951617
59 710 3 888205463
238 756 3 883576476
178 209 4 882826944
186 470 5 879023693
299 615 4 878192555
10 504 5 877892110
110 682 4 886987354
109 101 1 880578186
157 250 1 886890296
267 386 3 878973597
181 327 3 878961780
207 87 4 884386260
47 995 3 879440429
148 114 5 877016735
94 9 5 885872684
60 222 4 883327441
244 409 4 880605294
276 246 4 874786686
90 906 2 891382240
234 20 4 891227979
106 107 4 883876961
216 697 4 883981700
294 1199 2 889242142
323 257 2 878739393
140 268 4 879013684
220 303 4 881198014
67 64 5 875379211
170 299 3 886190476
230 142 4 880485633
299 641 4 889501514
7 581 5 891353477
275 501 3 875154718
44 250 5 878346709
291 214 4 874868146
11 741 5 891902745
59 286 3 888202532
174 395 1 886515154
194 234 3 879521167
57 204 4 883698272
314 417 4 877888855
201 197 4 884113422
184 155 3 889912656
194 792 4 879524504
159 1037 2 884360502
186 983 3 879023152
181 979 2 878963241
68 7 3 876974096
286 721 3 877532329
316 306 4 880853072
280 781 4 891701699
13 14 4 884538727
211 127 4 879461498
187 215 3 879465805
71 134 3 885016614
306 242 5 876503793
64 684 4 889740199
303 277 3 879468547
198 135 5 884208061
232 91 5 888549515
98 47 4 880498898
53 24 3 879442538
299 971 2 889502353
254 1116 3 886473448
7 106 4 891353892
12 300 4 879958639
239 10 5 889180338
238 111 4 883576603
130 267 5 875801239
90 662 5 891385842
63 20 3 875748004
40 268 4 889041430
181 221 1 878962465
298 152 3 884183336
104 327 2 888442202
42 185 4 881107449
181 995 1 878961585
258 288 1 885700919
291 578 4 874835242
148 70 5 877021271
305 187 4 886323189
184 71 4 889911552
94 556 3 891722882
158 1011 4 880132579
7 528 5 891352659
174 237 4 886434047
158 190 5 880134332
201 853 4 884114635
276 43 1 874791383
278 311 4 891295130
229 347 1 891632073
101 252 3 877136628
63 1028 3 875748198
275 520 4 880314218
275 173 3 875154795
62 1073 4 879374752
230 234 4 880484756
109 975 3 880572351
73 357 5 888626007
83 118 3 880307071
4 361 5 892002353
130 245 1 874953526
64 778 5 889739806
15 473 1 879456204
244 89 5 880602210
7 643 4 891350932
219 347 1 889386819
295 704 5 879519266
293 288 3 888904327
125 997 2 892838976
279 487 3 890282182
76 582 3 882607444
272 48 4 879455143
269 285 5 891446165
244 380 4 880608133
271 220 3 885848179
321 287 3 879438857
306 864 3 876504286
224 332 3 888103429
57 1047 4 883697679
145 591 4 879161848
85 277 2 879452938
116 7 2 876453915
52 95 4 882922927
209 688 1 883589626
145 260 4 875269871
208 202 4 883108476
160 187 5 876770168
141 274 5 884585220
260 990 5 890618729
177 299 4 880130500
82 231 2 878769815
223 969 5 891550649
107 271 2 891264432
26 25 3 891373727
297 1016 3 874955131
244 167 3 880607853
15 678 1 879455311
286 709 4 877532748
82 411 3 878768902
167 364 3 892738212
99 181 5 885680138
56 196 2 892678628
293 346 3 888904004
7 650 3 891350965
90 425 4 891384996
228 475 3 889388521
82 919 3 876311280
43 151 4 875975613
10 289 4 877886223
197 515 5 891409935
57 756 3 883697730
246 82 2 884921986
62 24 4 879372633
323 223 4 878739699
13 320 1 882397010
268 63 1 875743792
18 863 3 880130680
271 410 2 885848238
307 509 3 877121019
54 298 4 892681300
295 47 5 879518166
194 237 3 879538959
194 82 2 879524216
311 385 5 884365284
287 257 4 875334224
290 82 4 880473918
262 96 4 879793022
279 491 5 875296435
290 393 3 880475169
145 393 5 875273174
305 61 4 886323378
269 156 5 891449364
276 180 5 874787353
323 298 4 878739275
296 258 5 884196469
18 965 4 880132012
72 528 4 880036664
224 949 3 888104057
125 239 5 892838375
244 652 5 880606533
135 431 2 879857868
138 211 4 879024183
59 604 3 888204927
221 1059 4 875245077
13 451 1 882141872
42 69 4 881107375
10 340 4 880371312
219 882 3 889386741
60 604 4 883327997
125 152 1 879454892
63 50 4 875747292
255 448 3 883216544
311 172 5 884364763
7 582 5 892135347
7 127 5 891351728
189 203 3 893265921
59 470 3 888205714
313 148 2 891031979
234 161 3 892335824
6 143 2 883601053
305 960 1 886324362
226 147 3 883889479
204 340 5 892389195
13 493 5 882140206
186 281 4 879023390
6 275 4 883599102
269 82 2 891450780
69 300 3 882027204
259 959 4 888720593
5 62 4 875637575
181 1164 3 878962464
135 449 3 879857843
222 1207 2 881060659
5 231 2 875635947
286 258 4 877530390
104 249 3 888465675
303 65 4 879467436
295 73 4 879519009
201 686 2 884112352
13 289 2 882140759
184 100 5 889907652
262 786 3 879795319
234 614 3 892334609
1 64 5 875072404
325 485 3 891478599
312 641 5 891698300
207 810 2 877125506
262 509 3 879792818
239 478 5 889178986
142 181 5 888640317
296 242 4 884196057
291 571 2 875086608
13 488 3 890704999
294 676 3 877821514
69 174 5 882145548
195 265 4 888737346
121 509 5 891388145
279 509 3 875296552
49 17 2 888068651
7 196 5 891351432
280 472 2 891702086
221 780 3 875246552
175 96 3 877108051
180 431 4 877442098
311 1222 3 884366010
44 120 4 878346977
318 257 5 884471030
59 588 2 888204389
320 117 4 884748641
256 939 5 882164893
310 24 4 879436242
236 265 2 890116191
83 139 3 880308959
280 128 3 891701188
43 52 4 883955224
18 494 3 880131497
303 87 3 879466421
91 427 4 891439057
318 631 4 884496855
275 258 3 875154310
97 482 5 884238693
174 160 5 886514377
268 470 3 875310745
188 769 2 875074720
94 89 3 885870284
7 44 5 891351728
158 85 4 880135118
256 765 4 882165328
221 69 4 875245641
196 67 5 881252017
232 175 5 888549815
159 685 4 880557347
99 182 4 886518810
175 71 4 877107942
254 624 2 886473254
326 22 4 879874989
303 291 3 879484804
270 53 4 876956106
181 1001 1 878963038
254 418 3 886473078
56 235 1 892911348
11 190 3 891904174
162 181 4 877635798
117 829 3 881010219
268 52 3 875309319
320 177 5 884749360
6 294 2 883599938
210 380 4 887736482
151 969 5 879542510
42 684 4 881108093
62 365 2 879376096
207 121 3 875504876
59 70 3 888204758
26 455 3 891371506
234 705 5 892318002
270 466 5 876955899
97 484 3 884238966
11 660 3 891904573
5 377 1 878844615
56 797 4 892910860
305 923 5 886323237
173 286 5 877556626
67 1095 4 875379287
213 12 5 878955409
268 684 3 875744321
36 883 5 882157581
100 321 1 891375112
269 729 2 891448569
131 100 5 883681418
308 298 5 887741383
14 709 5 879119693
284 305 4 885328906
191 752 3 891560481
222 29 3 878184571
201 421 2 884111708
207 864 3 877750738
303 1315 3 879544791
52 1086 4 882922562
305 529 5 886324097
223 318 4 891550711
22 79 4 878887765
137 546 5 881433116
292 328 3 877560833
249 11 5 879640868
269 616 4 891450453
197 294 4 891409290
42 603 4 881107502
26 1016 3 891377609
7 560 3 892132798
193 435 4 889124439
7 559 5 891354882
299 186 3 889503233
115 127 5 881171760
59 433 5 888205982
217 22 5 889069741
279 709 4 875310195
257 345 4 887066556
279 789 4 875306580
279 919 3 892864663
63 222 3 875747635
178 73 5 882827985
90 1194 4 891383718
111 313 4 891679901
13 848 5 882140001
94 625 4 891723086
59 496 4 888205144
179 905 4 892151331
303 302 4 879465986
299 516 4 889503159
10 505 4 877886846
62 464 4 879375196
56 69 4 892678893
92 289 3 875641367
308 378 3 887740700
13 144 4 882397146
181 1348 1 878962200
15 932 1 879456465
244 155 3 880608599
234 233 2 892335990
15 127 2 879455505
110 1179 2 886989501
181 302 2 878961511
236 313 4 890115777
310 536 4 879436137
37 55 3 880915942
234 617 3 892078741
303 369 1 879544130
75 409 3 884050829
197 518 1 891409982
314 692 5 877888445
187 523 3 879465125
151 402 3 879543423
268 264 3 876513607
224 215 4 888082612
292 195 5 881103568
16 191 5 877719454
99 597 4 885679210
234 482 4 892334803
303 323 1 879466214
233 99 3 877663383
66 249 4 883602158
280 204 3 891700643
301 174 5 882075827
92 1142 4 886442422
99 410 5 885679262
221 1250 2 875247855
97 98 4 884238728
313 673 4 891016622
58 109 4 884304396
270 781 5 876955750
13 476 2 882141997
189 1 5 893264174
67 147 3 875379357
234 50 4 892079237
40 880 3 889041643
294 222 4 877819353
293 629 3 888907753
7 241 4 891354053
87 775 2 879876848
314 1289 2 877887388
131 750 5 883681723
296 48 5 884197091
81 3 4 876592546
151 186 4 879524222
57 926 3 883697831
234 134 5 892333573
53 174 5 879442561
280 544 4 891701302
123 135 5 879872868
109 797 3 880582856
96 479 4 884403758
236 286 5 890115777
201 313 5 884110598
174 471 5 886433804
130 931 2 880396881
151 15 4 879524879
90 529 5 891385132
59 12 5 888204260
3 343 3 889237122
310 845 5 879436534
224 658 1 888103840
4 357 4 892003525
25 615 5 885852611
11 517 2 891905222
298 91 2 884182932
59 170 4 888204430
147 305 4 885593997
314 1518 4 877891426
256 413 4 882163956
234 618 3 892078343
246 8 3 884921245
255 678 2 883215795
92 106 3 875640609
272 127 5 879454725
104 269 5 888441878
276 406 2 874786831
276 34 2 877934264
97 50 5 884239471
150 121 2 878747322
14 530 5 890881433
23 170 4 874785348
13 97 4 882399357
165 325 4 879525672
244 7 4 880602558
95 416 4 888954961
28 98 5 881961531
259 269 3 877923906
82 596 3 876311195
28 173 3 881956220
94 455 3 891721777
276 384 3 874792189
298 8 5 884182748
151 210 4 879524419
77 238 5 884733965
200 241 4 884129782
201 405 4 884112427
193 332 3 889123257
38 139 2 892432786
291 226 5 874834895
113 326 5 875935609
313 191 5 891013829
207 531 4 877878342
214 151 5 892668153
44 123 4 878346532
18 154 4 880131358
297 628 4 874954497
279 116 1 888799670
7 28 5 891352341
115 92 4 881172049
308 581 4 887740500
62 138 1 879376709
81 824 3 876534437
293 1161 2 888905062
13 781 3 882399528
13 338 1 882140740
41 28 4 890687353
280 554 1 891701998
287 249 5 875334430
117 50 5 880126022
178 106 2 882824983
201 117 2 884112487
256 1057 2 882163805
221 204 4 875246008
318 659 4 884495868
262 11 4 879793597
154 488 4 879138831
186 385 4 879023894
303 1095 2 879543988
302 323 2 879436875
198 179 4 884209264
99 168 5 885680374
229 313 2 891631948
126 262 4 887854726
72 226 4 880037307
109 31 4 880577844
34 242 5 888601628
173 323 5 877556926
156 276 3 888185854
122 215 4 879270676
276 583 3 874791444
224 528 3 888082658
208 88 5 883108324
295 483 5 879517348
279 65 1 875306767
43 64 5 875981247
89 197 5 879459859
308 435 4 887737484
315 305 5 881017419
42 1041 4 881109060
164 299 4 889401383
7 153 5 891352220
93 412 2 888706037
125 1180 3 892838865
70 50 4 884064188
177 960 3 880131161
75 476 1 884050393
62 401 3 879376727
130 366 5 876251972
312 228 3 891699040
158 414 4 880135118
279 42 4 875308843
210 58 4 887730177
43 66 4 875981506
151 490 5 879528418
293 665 2 888908117
293 36 1 888908041
102 405 2 888801812
276 291 3 874791169
21 839 1 874951797
194 663 4 879524292
38 432 1 892430282
92 453 1 875906882
311 180 4 884364764
198 214 4 884208273
82 661 4 878769703
267 238 4 878971629
291 466 5 874834768
151 692 3 879524669
60 47 4 883326399
92 79 4 875653198
97 115 5 884239525
314 1218 4 877887525
319 338 2 879977242
5 407 3 875635431
15 685 4 879456288
99 204 4 885679952
123 192 5 879873119
47 340 5 879439078
222 135 5 878181563
224 149 1 888103999
58 284 4 884304519
320 294 4 884748418
268 135 4 875309583
83 640 2 880308550
106 692 3 881453290
287 11 5 875335124
305 186 4 886323902
181 1320 1 878962279
49 49 2 888068990
6 221 4 883599431
85 647 4 879453844
128 736 5 879968352
279 827 1 888426577
271 630 2 885848943
303 748 2 879466214
249 124 5 879572646
280 693 3 891701027
207 827 3 876018501
60 616 3 883327087
21 184 4 874951797
286 628 4 875806800
145 183 5 875272009
311 28 5 884365140
25 228 4 885852920
76 92 4 882606108
246 406 3 884924749
201 292 3 884110598
235 647 4 889655045
286 133 4 877531730
48 174 5 879434723
144 685 3 888105473
5 24 4 879198229
85 272 4 893110061
286 7 4 875807003
64 93 2 889739025
151 429 5 879528673
191 301 4 891561336
287 56 5 875334759
96 153 4 884403624
125 615 3 879454793
150 100 2 878746636
93 15 5 888705388
84 528 5 883453617
318 50 2 884495696
13 167 4 882141659
213 471 3 878870816
178 234 4 882826783
128 418 4 879968164
195 496 4 888737525
13 570 5 882397581
276 843 4 874792989
54 268 5 883963510
305 347 3 886308111
14 474 4 890881557
18 58 4 880130613
263 921 3 891298727
289 849 4 876789943
194 321 3 879520306
11 746 4 891905032
298 842 4 884127249
56 215 5 892678547
13 844 1 882397010
38 465 5 892432476
308 165 3 887736696
214 652 4 891543972
102 300 3 875886434
7 420 5 891353219
61 328 5 891206371
307 100 3 879206424
21 590 1 874951898
311 68 1 884365824
95 1230 1 888956901
303 182 5 879467105
145 13 5 875270507
50 253 5 877052550
194 530 4 879521167
145 1 3 882181396
222 157 4 878181976
7 188 5 891352778
109 100 4 880563080
90 631 5 891384570
7 78 3 891354165
181 1324 1 878962464
201 332 2 884110887
13 685 5 882397582
82 73 4 878769888
267 423 3 878972842
194 1206 1 879554453
269 106 1 891451947
99 895 3 885678304
235 1149 4 889655595
200 665 4 884130621
312 188 3 891698793
145 50 5 885557660
234 71 3 892334338
213 48 5 878955848
244 216 4 880605869
316 588 1 880853992
85 175 4 879828912
124 50 3 890287508
137 237 4 881432965
13 567 1 882396955
151 162 5 879528779
187 116 5 879464978
193 554 3 889126088
49 741 4 888068079
291 54 4 874834963
316 292 4 880853072
271 514 4 885848408
194 404 3 879522445
268 721 3 875743587
277 1197 4 879543768
301 606 3 882076890
89 1048 3 879460027
253 50 4 891628518
102 732 3 888804089
311 662 4 884365018
201 943 3 884114275
246 816 4 884925218
172 488 3 875537965
280 38 3 891701832
43 1057 2 884029777
311 661 3 884365075
59 287 5 888203175
268 83 4 875309344
315 651 3 879799457
145 299 4 875269822
248 174 3 884534992
327 191 4 887820828
268 672 2 875744501
297 286 5 874953892
295 151 4 879517635
13 877 2 882140792
70 584 3 884150236
145 460 1 875271312
275 176 4 880314320
48 259 4 879434270
235 419 5 889655858
83 413 1 891182379
147 258 4 885594040
92 521 4 875813412
246 728 1 884923829
43 284 5 883955441
207 203 3 877124625
234 485 3 892079434
201 587 4 884140975
286 689 5 884583549
69 12 5 882145567
237 494 4 879376553
85 133 4 879453876
276 85 3 874791871
311 366 5 884366010
320 399 3 884749411
114 175 5 881259955
42 121 4 881110578
7 680 4 891350703
154 302 4 879138235
106 660 4 881451631
313 71 4 891030144
90 526 5 891383866
94 186 4 891722278
224 43 3 888104456
44 230 2 883613335
229 315 1 891632945
151 480 5 879524151
311 505 4 884365451
320 202 4 884750946
113 329 3 875935312
255 859 3 883216748
193 827 2 890859916
276 789 3 874791623
259 750 4 888630424
204 172 3 892513819
78 412 4 879634223
85 98 4 879453716
279 393 1 875314093
222 323 3 877562839
288 127 5 886374451
42 606 3 881107538
25 729 4 885852697
119 213 5 874781257
116 185 3 876453519
123 13 3 879873988
315 657 4 879821299
142 243 1 888640199
13 480 3 881515193
201 326 2 884111095
43 631 2 883955675
195 387 4 891762491
95 174 5 879196231
130 332 4 876250582
233 482 4 877661437
44 530 5 878348725
292 86 4 881105778
176 294 2 886047220
157 405 3 886890342
207 787 3 876079054
239 204 3 889180888
251 144 5 886271920
269 923 4 891447169
178 148 4 882824325
138 121 4 879023558
30 82 4 875060217
302 245 2 879436911
34 690 4 888602513
292 276 5 881103915
271 11 4 885848408
69 175 3 882145586
42 456 3 881106113
311 568 5 884365325
183 241 4 892323453
269 411 1 891451013
288 196 5 886373474
268 42 4 875310384
308 634 4 887737334
308 166 3 887737837
57 831 1 883697785
207 410 3 877838946
271 211 5 885849164
16 144 5 877721142
90 603 5 891385132
209 408 4 883417517
299 238 4 877880852
279 1228 4 890779991
128 140 4 879968308
307 173 5 879283786
167 392 1 892738307
22 791 1 878887227
291 159 4 875087488
194 705 2 879524007
10 489 4 877892210
95 128 3 879196354
10 657 4 877892110
59 855 4 888204502
124 11 5 890287645
7 133 5 891353192
256 692 5 882165066
85 629 3 879454685
271 1266 2 885848943
276 1416 3 874792634
155 988 2 879371261
318 476 4 884495164
307 258 5 879283786
28 7 5 881961531
236 729 5 890118372
38 672 3 892434800
7 93 5 891351042
255 217 2 883216600
184 729 3 889909840
154 175 5 879138784
311 403 4 884365889
116 301 3 892683732
94 229 3 891722979
221 508 4 875244160
95 636 1 879196566
44 56 2 878348601
305 203 4 886323839
207 508 4 877879259
130 161 4 875802058
98 163 3 880499053
328 9 4 885045993
178 218 3 882827776
293 293 4 888904795
162 742 4 877635758
128 79 4 879967692
307 1411 4 877124058
269 514 4 891449123
195 186 3 888737240
327 533 4 887822530
189 91 3 893265684
206 1394 1 888179981
95 143 4 880571951
31 682 2 881547834
94 157 5 891725332
73 588 2 888625754
256 819 4 882151052
291 366 3 874868255
222 153 4 878182416
207 98 4 875509887
222 298 4 877563253
286 151 5 875806800
116 262 3 876751342
7 174 5 891350757
148 495 4 877016735
311 495 4 884366066
178 255 4 882824001
181 597 3 878963276
123 847 4 879873193
291 77 4 874834799
237 528 5 879376606
140 301 3 879013747
290 222 4 880731778
177 79 4 880130758
65 202 4 879217852
311 181 4 884364724
125 796 3 892838591
77 168 4 884752721
58 960 4 884305004
117 405 5 880126174
248 127 5 884535084
5 423 4 875636793
254 286 1 887346861
289 7 4 876789628
241 294 3 887250085
213 690 3 878870275
99 508 4 885678840
275 523 4 880314031
168 284 2 884288112
28 380 4 881961394
144 31 3 888105823
198 651 4 884207424
181 1093 1 878962391
221 268 5 876502910
267 739 4 878973276
129 303 3 883244011
301 496 5 882075743
94 33 3 891721919
318 64 4 884495590
298 477 4 884126202
290 476 3 880475837
16 942 4 877719863
130 815 3 874953866
181 304 1 878961586
178 125 4 882824431
42 506 3 881108760
320 284 4 884748818
138 151 4 879023389
197 849 3 891410124
215 157 4 891435573
94 1119 4 891723261
293 724 3 888907061
79 246 5 891271545
279 1492 4 888430806
189 30 4 893266205
233 806 4 880610396
198 24 2 884205385
222 172 5 878183079
276 301 4 877584219
70 417 3 884066823
305 15 1 886322796
201 370 1 884114506
57 409 4 883697655
13 314 1 884538485
206 245 1 888179772
125 173 5 879454100
128 143 5 879967300
92 763 3 886443192
65 56 3 879217816
236 506 5 890118153
262 77 2 879794829
90 958 4 891383561
144 91 2 888106106
63 841 1 875747917
323 117 3 878739355
197 176 5 891409798
277 273 5 879544145
176 288 3 886046979
38 838 2 892433680
99 546 4 885679353
326 186 4 879877143
59 663 4 888204928
59 702 5 888205463
26 15 4 891386369
7 182 4 891350965
112 354 3 891304031
109 154 2 880578121
121 405 2 891390579
293 167 3 888907702
297 198 3 875238923
276 11 5 874787497
222 210 4 878184338
287 92 4 875334896
62 443 3 879375080
106 703 4 881450039
276 1218 4 874792040
230 210 5 880484975
246 184 4 884921948
22 511 4 878887983
165 258 5 879525672
161 174 2 891170800
109 89 4 880573263
305 87 1 886323153
195 181 5 875771440
7 193 5 892135346
326 480 4 879875691
77 125 3 884733014
85 58 4 879829689
186 588 4 879024535
256 280 5 882151167
84 529 5 883453108
74 288 3 888333280
102 432 3 883748418
194 770 4 879525342
267 114 5 878971514
1 92 3 876892425
16 504 5 877718168
211 300 2 879461395
90 31 4 891384673
234 657 4 892079840
60 1020 4 883327018
92 947 4 875654929
158 1 4 880132443
87 1000 3 879877173
276 104 1 874836682
1 228 5 878543541
42 143 4 881108229
43 26 5 883954901
299 1322 3 877878001
130 200 5 875217392
307 71 5 879283169
147 339 5 885594204
311 229 5 884365890
296 286 5 884196209
217 82 5 889069842
80 886 4 883605238
314 9 4 877886375
64 527 4 879365590
249 79 5 879572777
21 298 5 874951382
68 118 2 876974248
215 151 5 891435761
305 238 3 886323617
308 417 3 887740254
102 118 3 888801465
189 120 1 893264954
112 750 4 884992444
130 622 3 875802173
188 474 4 875072674
56 585 3 892911366
56 230 5 892676339
20 11 2 879669401
20 176 2 879669152
222 25 3 877563437
49 148 1 888068195
307 431 4 877123333
144 313 5 888103407
23 404 4 874787860
144 961 3 888106106
160 3 3 876770124
22 227 4 878888067
79 508 3 891271676
18 647 4 880129595
151 481 3 879524669
312 480 5 891698224
256 29 4 882164644
158 568 4 880134532
311 141 4 884366187
303 179 5 879466491
25 478 5 885852271
195 407 2 877835302
152 147 3 880149045
145 1001 4 875271607
151 260 1 879523998
194 576 2 879528568
271 624 2 885849558
162 121 4 877636000
313 65 2 891016962
6 532 3 883600066
22 433 3 878886479
13 915 5 892015023
327 461 3 887746665
200 402 4 884129029
271 22 5 885848518
269 478 4 891448980
315 431 2 879821300
178 121 5 882824291
210 502 3 891035965
76 135 5 875028792
318 648 5 884495534
279 1291 4 875297708
75 121 4 884050450
90 618 5 891385335
44 174 5 878347662
293 729 2 888907145
217 195 5 889069709
224 708 2 888104153
246 121 4 884922627
284 906 3 885328836
301 172 5 882076403
244 31 4 880603484
95 395 3 888956928
303 330 3 879552065
198 640 3 884208651
256 802 3 882164955
46 690 5 883611274
305 209 5 886322966
83 364 1 886534501
224 1208 1 888104554
295 67 4 879519042
116 248 3 876452492
201 37 2 884114635
155 748 2 879371261
318 508 4 884494976
274 288 4 878944379
263 333 2 891296842
145 172 5 882181632
188 191 3 875073128
119 313 5 886176135
270 306 5 876953744
262 91 3 879792713
131 845 4 883681351
250 260 4 878089144
33 307 3 891964148
37 183 4 880930042
6 211 5 883601155
85 517 5 879455238
308 164 4 887738664
42 746 3 881108279
102 1025 2 883278200
311 70 4 884364999
181 1322 1 878962086
17 508 3 885272779
174 396 1 886515104
125 150 1 879454892
181 1364 1 878962464
235 511 5 889655162
1 266 1 885345728
295 727 5 879517682
56 194 5 892676908
83 1035 4 880308959
100 355 4 891375313
106 828 2 883876872
270 327 5 876953900
181 680 1 878961709
115 228 4 881171488
286 771 2 877535119
234 151 3 892334481
16 92 4 877721905
130 410 5 875802105
271 121 2 885848132
320 1157 4 884751336
189 462 5 893265741
313 31 4 891015486
49 238 4 888068762
60 79 4 883326620
13 226 4 882397651
1 121 4 875071823
150 246 5 878746719
13 548 3 882398743
179 751 1 892151565
222 426 1 878181351
7 614 5 891352489
157 1132 3 886891132
193 368 1 889127860
130 993 5 874953665
166 322 5 886397723
62 4 4 879374640
253 183 5 891628341
261 117 4 890455974
269 1020 4 891449571
269 136 4 891449075
322 197 5 887313983
7 647 5 891352489
112 748 3 884992651
170 245 5 884103758
271 823 3 885848237
294 288 5 877818729
151 522 5 879524443
311 213 4 884365075
26 257 3 891371596
291 627 4 875086991
26 7 3 891350826
221 468 3 875246824
318 204 5 884496218
87 996 3 879876848
279 88 1 882146554
279 562 3 890451433
207 14 4 875504876
279 163 5 875313311
230 238 1 880484778
94 235 4 891722980
293 931 1 888905252
121 86 5 891388286
198 180 3 884207298
292 653 4 881105442
92 781 3 875907649
291 572 3 874834944
48 690 4 879434211
102 264 2 883277645
1 114 5 875072173
180 79 3 877442037
255 879 3 883215660
250 2 4 878090414
119 716 5 874782190
101 282 3 877135883
244 220 2 880605264
67 1 3 875379445
291 99 4 875086887
59 238 5 888204553
311 73 4 884366187
177 919 4 880130736
1 132 4 878542889
144 778 4 888106044
1 74 1 889751736
268 68 4 875744173
232 705 5 888549838
49 758 1 888067596
102 313 3 887048184
279 1093 4 875298330
279 1493 1 888465068
22 173 5 878886368
122 715 5 879270741
145 315 5 883840797
119 1101 5 874781779
261 259 4 890454843
1 134 4 875073067
94 45 5 886008764
330 11 4 876546561
291 741 5 874834481
6 180 4 883601311
188 88 4 875075300
299 921 3 889502087
253 203 4 891628651
215 194 4 891436150
291 273 3 874833705
303 867 3 879484373
6 477 1 883599509
307 1110 4 877122208
130 876 4 874953291
95 483 3 879198697
74 326 4 888333329
13 305 4 881514811
4 260 4 892004275
261 294 4 890454217
159 259 4 893255969
137 55 5 881433689
174 699 5 886514220
286 158 3 877533472
87 1183 3 879875995
270 230 3 876955868
91 172 4 891439208
296 272 5 884198772
125 483 4 879454628
62 1118 3 879375537
328 200 4 885046420
296 510 5 884197264
234 500 3 892078890
237 100 5 879376381
150 13 4 878746889
301 610 3 882077176
151 25 4 879528496
271 8 4 885848770
87 303 3 879875471
293 1220 2 888907552
113 294 4 875935277
311 518 3 884365451
181 123 2 878963276
328 905 3 888641999
110 301 2 886987505
288 742 3 886893063
111 887 3 891679692
194 196 3 879524007
239 605 4 889180446
109 5 3 880580637
291 824 4 874833962
16 168 4 877721142
14 357 2 890881294
22 687 1 878887476
207 746 4 877878342
312 1299 4 891698832
268 250 4 875742530
68 411 1 876974596
195 887 4 886782489
271 50 5 885848640
74 9 4 888333458
308 802 3 887738717
144 66 4 888106078
195 14 4 890985390
18 199 3 880129769
13 918 3 892524090
174 41 1 886515063
109 159 4 880578121
227 293 5 879035387
233 357 5 877661553
264 475 5 886122706
205 678 1 888284618
275 1066 3 880313679
56 68 3 892910913
78 1160 5 879634134
130 682 4 881076059
127 380 5 884364950
130 568 5 876251693
58 1100 2 884304979
49 473 3 888067164
13 273 3 882397502
203 336 3 880433474
330 136 5 876546378
109 195 5 880578038
186 406 1 879023272
293 148 1 888907015
280 1028 5 891702276
143 331 5 888407622
183 96 3 891463617
60 699 4 883327539
178 131 4 882827947
297 216 4 875409423
59 1117 4 888203313
276 429 5 874790972
179 258 5 892151270
87 386 2 879877006
198 1169 4 884208834
119 54 4 886176814
297 20 4 874954763
1 98 4 875072404
268 205 5 875309859
279 174 4 875306636
64 187 5 889737395
119 1262 3 890627252
75 1017 5 884050502
27 742 3 891543129
307 21 4 876433101
37 685 3 880915528
82 15 3 876311365
244 238 5 880606118
271 274 3 885848014
174 1014 3 890664424
210 135 5 887736352
262 258 4 879961282
320 68 5 884749327
85 660 4 879829618
311 348 4 884364108
82 208 3 878769815
1 186 4 875073128
145 368 3 888398492
276 401 3 874792094
23 213 3 874785675
64 515 5 889737478
63 237 3 875747342
293 227 2 888906990
322 32 5 887314417
74 285 3 888333428
297 202 3 875238638
82 216 4 878769949
280 145 3 891702198
200 227 5 884129006
290 21 3 880475695
43 820 2 884029742
95 573 1 888954808
181 20 1 878962919
178 926 4 882824671
81 476 2 876534124
194 410 3 879541042
325 402 2 891479706
276 347 4 885159630
207 133 4 875812281
87 135 5 879875649
331 7 4 877196633
315 8 3 879820961
106 435 3 881452355
286 83 5 877531975
87 157 3 879877799
87 163 4 879877083
286 655 3 889651746
232 8 2 888549757
254 380 4 886474456
96 91 5 884403250
232 1 4 880062302
315 98 4 879821193
43 553 4 875981159
305 679 3 886324792
61 690 2 891206407
44 665 1 883613372
92 1016 2 875640582
168 255 1 884287560
276 270 4 879131395
328 568 3 885047896
222 1053 3 881060735
93 222 4 888705295
330 235 5 876544690
82 504 4 878769917
2 314 1 888980085
89 732 5 879459909
38 216 5 892430486
308 85 4 887741245
24 153 4 875323368
235 1464 4 889655266
1 221 5 887431921
222 715 2 878183924
222 69 5 878182338
43 114 5 883954950
331 486 3 877196308
223 322 4 891548920
201 452 1 884114770
158 271 4 880132232
32 249 4 883717645
314 90 2 877888758
313 245 3 891013144
102 576 2 888802722
211 526 4 879459952
268 425 4 875310549
332 770 3 888098170
38 508 2 892429399
280 975 4 891702252
10 463 4 877889186
92 386 3 875907727
268 374 2 875744895
69 258 4 882027204
210 96 4 887736616
213 144 5 878956047
254 50 5 886471151
58 272 5 884647314
327 210 3 887744065
291 385 4 874835141
291 324 1 874805453
246 596 3 884921511
11 714 4 891904214
329 100 4 891655812
86 258 5 879570366
7 621 5 892132773
246 80 2 884923329
308 481 4 887737997
54 820 3 880937992
177 651 3 880130862
10 655 5 877891904
83 631 2 887664566
145 993 3 875270616
255 185 4 883216449
18 607 3 880131752
226 180 4 883889322
234 616 2 892334976
274 25 5 878945541
293 156 4 888905948
83 476 3 880307359
295 173 5 879518257
286 1039 5 877531730
42 48 5 881107821
208 204 3 883108360
232 275 2 885939945
267 94 3 878972558
271 242 4 885844495
125 97 3 879454385
323 333 4 878738865
305 56 1 886323068
145 250 5 882182944
38 1030 5 892434475
202 515 1 879726778
181 975 2 878963343
332 566 4 888360342
108 13 3 879879834
194 520 5 879545114
144 62 2 888105902
194 1183 2 879554453
148 172 5 877016513
144 1147 4 888105587
269 961 5 891457067
290 71 5 880473667
249 597 2 879640436
65 676 5 879217689
301 395 1 882079384
267 546 3 878970877
207 754 4 879577345
201 777 1 884112673
314 1095 3 877887356
210 631 5 887736796
22 456 1 878887413
59 931 2 888203610
92 715 4 875656288
50 475 5 877052167
188 159 3 875074589
303 700 3 879485718
197 288 3 891409387
244 676 4 880604858
44 88 2 878348885
164 597 4 889402225
11 230 4 891905783
6 297 3 883599134
186 925 5 879023152
190 147 4 891033863
184 1137 5 889907812
85 269 3 891289966
185 127 5 883525183
44 257 4 878346689
293 484 5 888906217
150 1 4 878746441
60 179 4 883326566
75 147 3 884050134
269 640 5 891457067
138 493 4 879024382
299 271 3 879737472
92 928 3 886443582
299 24 3 877877732
292 183 5 881103478
5 394 2 879198031
62 559 3 879375912
198 549 3 884208518
288 1039 2 886373565
152 272 5 890322298
42 999 4 881108982
64 333 3 879365313
99 682 2 885678371
59 121 4 888203313
135 233 3 879857843
7 22 5 891351121
24 427 5 875323002
144 747 5 888105473
261 322 4 890454974
201 475 4 884112748
133 258 5 890588639
110 245 3 886987540
5 384 3 875636389
139 268 4 879537876
112 322 4 884992690
234 596 2 891227979
301 184 4 882077222
291 1471 3 874834914
285 216 3 890595900
85 53 3 882995643
275 183 3 880314500
296 275 4 884196555
271 197 4 885848915
29 748 2 882821558
221 172 5 875245907
323 9 4 878739325
111 340 4 891679692
95 176 3 879196298
207 170 4 877125221
136 276 5 882693489
124 616 4 890287645
185 528 4 883526268
167 404 3 892738278
286 341 5 884069544
84 322 3 883449567
151 529 5 879542610
264 401 5 886123656
289 1 3 876789736
144 64 5 888105140
56 29 3 892910913
23 528 4 874786974
328 742 4 885047309
125 785 3 892838558
200 72 4 884129542
249 23 4 879572432
130 56 5 875216283
140 319 4 879013617
49 102 2 888067164
158 483 5 880133225
222 58 3 878182479
194 213 2 879523575
177 89 5 880131088
7 268 3 891350703
59 549 4 888205659
145 411 2 875271522
265 7 2 875320689
248 282 2 884535582
239 47 2 889180169
319 879 5 876280338
42 102 5 881108873
301 1035 4 882078809
326 69 2 879874964
180 67 1 877127591
280 99 2 891700475
145 682 3 879161624
214 79 4 891544306
259 210 4 874725485
57 864 3 883697512
261 597 4 890456142
136 298 4 882693569
293 705 5 888906338
194 470 3 879527421
75 496 5 884051921
202 172 3 879726778
23 183 3 874785728
38 403 1 892432205
52 1009 5 882922328
95 720 2 879196513
65 97 5 879216605
207 290 2 878104627
201 2 2 884112487
190 751 4 891033606
162 685 3 877635917
221 250 5 875244633
92 134 4 875656623
49 695 3 888068957
102 391 2 888802767
6 500 4 883601277
152 25 3 880149045
145 278 4 875272871
328 271 3 885044607
116 750 4 886309481
90 237 4 891385215
221 318 5 875245690
128 283 5 879966729
94 467 4 885873423
221 1218 3 875246745
281 332 4 881200603
294 539 4 889241707
300 948 4 875650018
326 153 4 879875751
62 28 3 879375169
159 249 4 884027269
76 811 4 882606323
74 237 4 888333428
81 411 2 876534244
280 227 3 891702153
224 22 5 888103581
64 77 3 889737420
194 756 1 879549899
15 20 3 879455541
43 328 4 875975061
244 100 4 880604252
327 805 4 887819462
21 928 3 874951616
83 254 2 880327839
14 22 3 890881521
318 610 5 884496525
92 756 3 886443582
222 1078 2 878183449
62 157 3 879374686
13 840 3 886261387
271 300 2 885844583
59 13 5 888203415
208 514 4 883108324
289 815 3 876789581
279 249 3 878878420
326 50 5 879875112
73 12 5 888624976
28 234 4 881956144
6 95 2 883602133
90 354 3 891382240
96 519 4 884402896
7 627 3 891352594
254 649 1 886474619
328 519 5 885046420
247 751 3 893081411
45 472 3 881014417
323 127 5 878739137
268 566 3 875744321
291 816 3 874867852
59 405 3 888203578
200 409 2 884127431
332 975 3 887938631
239 612 5 889178616
22 399 4 878887157
267 147 3 878970681
235 319 4 889654419
87 70 5 879876448
216 143 2 881428956
268 121 2 875743141
239 317 5 889179291
269 922 5 891457067
207 468 4 877124806
270 148 4 876954062
184 559 3 889910418
304 271 4 884968415
331 479 2 877196504
157 283 4 886890692
239 183 5 889180071
261 339 5 890454351
301 58 4 882077285
145 339 3 882181058
10 321 4 879163494
48 308 5 879434292
321 631 4 879440264
32 591 3 883717581
125 1036 2 892839191
1 84 4 875072923
21 742 3 874951617
22 186 5 878886368
292 324 3 881104533
72 129 4 880035588
256 642 4 882164893
92 1095 2 886443728
73 475 4 888625753
290 274 4 880731874
83 543 2 887665445
56 597 3 892679439
83 216 4 880307846
215 22 3 891435161
101 369 2 877136928
328 521 4 885047484
307 175 4 877117651
201 23 4 884111830
197 570 4 891410124
26 286 3 891347400
90 489 5 891384357
98 517 5 880498990
57 250 3 883697223
163 288 3 891220226
1 31 3 875072144
104 324 1 888442404
333 894 3 891045496
311 22 4 884364538
237 211 4 879376515
44 603 4 878347420
22 96 5 878887680
213 546 4 878870903
257 258 3 879029516
327 300 2 887743541
279 1017 3 875296891
53 845 3 879443083
85 97 2 879829667
43 286 4 875975028
181 7 4 878963037
297 574 1 875239092
201 651 4 884111217
320 99 4 884751440
94 180 5 885870284
235 85 4 889655232
305 131 3 886323440
234 229 4 892334189
328 591 3 885047018
328 754 4 885044607
258 323 4 885701062
3 323 2 889237269
16 70 4 877720118
286 425 2 877532013
327 702 2 887819021
200 265 5 884128372
207 131 3 878104377
292 10 5 881104606
214 179 5 892668130
155 321 4 879370963
106 213 4 881453065
200 586 4 884130391
305 216 5 886323563
279 1113 3 888806035
178 984 2 882823530
331 133 3 877196443
58 45 5 884305295
167 1306 5 892738385
151 191 3 879524326
326 168 3 879874859
297 443 2 875240133
191 288 3 891562090
81 471 3 876533586
284 258 4 885329146
5 267 4 875635064
150 325 1 878747322
257 59 5 879547440
145 443 3 882182658
271 191 5 885848448
176 297 3 886047918
158 38 4 880134607
152 716 5 884019001
232 638 5 888549988
109 930 3 880572351
243 660 4 879988422
57 744 5 883698581
145 1057 1 875271312
235 275 5 889655550
181 124 1 878962550
145 182 5 885622510
249 476 3 879640481
44 11 3 878347915
194 566 4 879522819
109 218 4 880578633
49 10 3 888066086
269 210 1 891449608
87 233 4 879876036
314 791 4 877889398
292 132 4 881105340
7 300 4 891350703
291 460 5 874834254
292 176 5 881103478
290 1028 3 880732365
122 427 3 879270165
17 151 4 885272751
59 47 5 888205574
29 689 2 882821705
274 411 3 878945888
190 340 1 891033153
213 50 5 878870456
14 111 3 876965165
321 131 4 879439883
221 1314 3 875247833
195 100 5 875771440
236 187 3 890118340
92 619 4 875640487
303 576 3 879485417
42 210 5 881108633
246 423 3 884920900
181 823 2 878963343
197 231 3 891410124
181 369 3 878963418
130 172 5 875801530
276 1131 3 874796116
252 742 4 891455743
221 1067 3 875244387
292 488 5 881105657
177 124 3 880130881
42 785 4 881109060
1 70 3 875072895
13 178 4 882139829
76 276 5 875027601
269 72 2 891451470
3 331 4 889237455
290 429 4 880474606
159 815 3 880557387
248 474 2 884534672
214 1065 5 892668173
30 181 4 875060217
8 182 5 879362183
238 118 3 883576509
249 176 4 879641109
264 1069 5 886123728
98 655 3 880498861
123 275 4 879873726
181 688 1 878961668
7 162 5 891353444
119 269 3 892564213
181 457 1 878961474
138 483 5 879024280
56 63 3 892910268
291 122 3 874834289
326 468 3 879875572
92 175 4 875653549
293 654 5 888905760
162 1047 5 877635896
303 549 3 879484846
325 504 3 891477905
267 654 5 878971902
130 546 4 876250932
216 577 1 881432453
301 53 1 882078883
91 423 5 891439090
301 384 5 882079315
291 672 3 874867741
18 196 3 880131297
195 1084 4 888737345
222 939 3 878182211
327 274 2 887819462
254 577 1 886476092
332 693 5 888098538
267 55 4 878972785
16 443 5 877727055
158 79 4 880134332
305 14 4 886322893
87 67 4 879877007
313 175 4 891014697
43 498 5 875981275
234 1035 3 892335142
90 11 4 891384113
230 196 5 880484755
1 60 5 875072370
262 185 3 879793164
221 1407 3 875247833
279 382 4 875312947
211 678 3 879461394
287 1016 5 875334430
167 603 4 892738212
119 154 5 874782022
126 878 5 887938392
60 474 5 883326028
296 427 5 884198772
300 243 4 875650068
194 971 3 879551049
83 186 4 880308601
207 1242 5 884386260
311 1116 3 884364623
181 406 1 878962955
130 550 5 878537602
245 222 4 888513212
168 235 2 884288270
256 756 4 882151167
1 177 5 876892701
59 10 4 888203234
223 258 1 891548802
243 225 3 879987655
148 1149 5 877016513
10 48 4 877889058
178 549 4 882827689
295 4 4 879518568
99 124 2 885678886
334 117 3 891544735
263 523 5 891298107
230 402 5 880485445
152 132 5 882475496
189 45 3 893265657
130 231 3 875801422
334 282 4 891544925
91 193 3 891439057
244 97 2 880605514
83 866 3 883867947
222 217 3 881060062
10 203 4 877891967
173 300 4 877556988
269 168 4 891448850
292 100 5 881103999
60 508 4 883327368
197 431 3 891409935
313 265 4 891016853
234 506 4 892318107
234 959 2 892334189
154 484 4 879139096
14 56 5 879119579
201 1211 3 884113806
181 359 1 878961668
52 748 4 882922629
308 579 3 887740700
212 515 4 879303571
13 42 4 882141393
268 99 3 875744744
119 245 4 886176618
44 202 4 878347315
126 884 5 887938392
159 111 4 880556981
90 301 4 891382392
320 42 4 884751712
301 25 3 882075110
114 269 4 881256090
9 691 5 886960055
315 17 1 879821003
137 195 5 881433689
183 562 3 891467003
297 301 4 876529834
334 603 5 891628849
18 954 3 880130640
152 97 5 882475618
184 498 5 889913687
325 430 5 891478028
39 315 4 891400094
231 127 3 879965565
302 309 2 879436820
63 150 4 875747292
201 375 3 884287140
200 103 2 891825521
13 94 3 882142057
297 22 4 875238984
201 844 2 884112537
14 93 3 879119311
240 343 3 885775831
184 716 3 889909987
216 12 5 881432544
38 122 1 892434801
257 276 5 882049973
256 778 4 882165103
200 229 5 884129696
148 177 2 877020715
249 22 5 879572926
184 47 4 889909640
276 58 4 874791169
268 432 3 875310145
224 258 3 888081947
145 25 2 875270655
298 261 4 884126805
244 743 5 880602170
289 410 2 876790361
59 132 5 888205744
301 1112 4 882079294
56 1090 3 892683641
327 192 5 887820828
285 288 5 890595584
133 328 3 890588577
71 346 4 885016248
293 1132 3 888905416
13 908 1 886302385
1 27 2 876892946
271 172 5 885848616
286 269 5 879780839
49 926 1 888069117
290 153 3 880475310
226 270 4 883888639
104 122 3 888465739
311 233 4 884365889
60 178 5 883326399
200 191 5 884128554
128 276 4 879967550
157 748 2 886890015
303 460 4 879543600
5 445 3 875720744
268 540 1 875542174
290 218 2 880475542
181 1346 1 878962086
189 276 3 893264300
90 659 4 891384357
321 134 3 879438607
279 108 4 892174381
197 770 3 891410082
217 566 4 889069903
193 682 1 889123377
34 310 4 888601628
293 157 5 888905779
297 300 3 874953892
24 742 4 875323915
259 405 3 874725120
303 1007 5 879544576
326 282 2 879875964
10 218 4 877889261
334 635 2 891548155
272 8 4 879455015
76 1129 5 875028075
13 300 1 881515736
194 431 4 879524291
256 291 5 882152630
148 185 1 877398385
276 318 5 874787496
227 126 4 879035158
311 553 3 884365451
198 427 4 884207009
13 180 5 882141248
286 100 3 876521650
271 451 3 885849447
59 318 5 888204349
328 655 4 886037429
25 174 5 885853415
90 971 4 891385250
157 150 5 874813703
106 69 4 881449886
173 322 4 877557028
276 1135 4 874791527
276 76 4 874791506
49 546 1 888069636
115 234 5 881171982
307 22 3 879205470
82 218 3 878769748
116 1082 3 876453171
80 50 3 887401533
59 381 5 888205659
236 143 4 890116163
56 174 5 892737191
82 413 1 884714593
82 69 4 878769948
144 727 3 888105765
7 526 5 891351042
49 531 3 888066511
1 260 1 875071713
243 129 2 879987526
313 488 5 891013496
207 273 4 878104569
334 222 4 891544904
83 95 4 880308453
162 230 2 877636860
326 496 5 879874825
236 686 3 890118372
17 9 3 885272558
92 1215 2 890251747
82 147 3 876311473
201 242 4 884110598
223 237 5 891549657
168 295 4 884287615
186 977 3 879023273
246 356 2 884923047
62 135 4 879375080
320 456 3 884748904
48 603 4 879434607
209 269 2 883589606
236 1328 4 890116132
92 673 4 875656392
71 285 3 877319414
5 167 2 875636281
67 240 5 875379566
188 554 2 875074891
326 54 3 879876300
234 462 4 892079840
31 302 4 881547719
228 886 1 889387173
172 603 3 875538027
314 1139 5 877888480
297 652 3 875239346
264 659 5 886122577
118 174 5 875385007
216 286 4 881432501
290 1013 2 880732131
256 278 5 882151517
200 820 3 884127370
49 312 3 888065786
118 433 5 875384793
293 195 3 888906119
13 29 2 882397833
42 405 4 881105541
293 566 3 888907312
125 158 4 892839066
315 230 4 879821300
296 83 5 884199624
188 204 4 875073478
201 4 4 884111830
253 747 3 891628501
315 531 5 879799457
210 134 5 887736070
119 1170 3 890627339
151 509 4 879524778
81 273 4 876533710
324 748 5 880575108
43 15 5 875975546
298 432 4 884183307
250 127 4 878089881
286 1265 5 884069544
203 294 2 880433398
267 226 3 878972463
194 735 4 879524718
303 99 4 879467514
193 195 1 889124507
57 588 4 883698454
92 672 3 875660028
207 269 4 877845577
325 154 3 891478480
280 86 4 891700475
197 449 5 891410124
39 352 5 891400704
197 510 5 891409935
117 1 4 880126083
132 922 5 891278996
271 180 5 885849087
222 433 4 881059876
103 117 4 880416313
201 26 4 884111927
270 387 5 876955689
104 100 4 888465166
95 96 4 879196298
130 204 5 875216718
290 239 2 880474451
314 833 4 877887155
313 969 4 891015387
295 722 4 879518881
269 412 3 891446904
49 1 2 888068651
332 228 5 888359980
301 11 4 882076291
125 434 4 879454100
336 66 3 877756126
1 145 2 875073067
327 230 4 887820448
262 292 4 879961282
313 205 5 891013652
321 523 3 879440687
248 185 3 884534772
38 384 5 892433660
224 778 1 888104057
217 1222 1 889070050
6 475 5 883599478
331 47 5 877196235
38 423 5 892430071
1 174 5 875073198
308 60 3 887737760
207 642 3 875991116
215 1039 5 891436543
56 239 4 892676970
109 1011 3 880571872
10 124 5 877888545
320 210 5 884749227
269 180 3 891448120
290 380 3 880731766
311 205 5 884365357
129 270 3 883243934
109 281 2 880571919
235 898 3 889654553
335 328 3 891566903
13 508 3 882140426
201 558 2 884112537
276 801 3 877935306
81 118 2 876533764
288 200 4 886373534
263 97 4 891299387
293 87 4 888907015
136 117 4 882694498
318 660 3 884497207
295 405 5 879518319
201 480 4 884111598
232 708 4 888550060
197 566 4 891409893
313 180 5 891014898
109 230 5 880579107
168 596 4 884287615
201 980 3 884140927
222 554 2 881060435
115 11 4 881171348
334 224 2 891545020
119 697 5 874782068
198 385 3 884208778
91 507 4 891438977
62 281 3 879373118
239 98 5 889180410
324 1033 4 880575589
201 823 3 884140975
322 50 5 887314418
107 305 4 891264327
64 2 3 889737609
28 50 4 881957090
246 202 3 884922272
168 1197 5 884287927
34 259 2 888602808
286 465 5 889651698
184 521 4 889908873
106 286 4 881449486
198 1117 3 884205252
291 53 5 874834827
25 477 4 885853155
1 159 3 875073180
181 1393 1 878961709
169 301 4 891268622
60 172 4 883326339
178 427 5 882826162
149 327 2 883512689
280 96 4 891700664
205 984 1 888284710
92 431 4 875660164
244 369 4 880605294
308 291 3 887739472
235 684 4 889655162
218 194 3 877488546
307 313 5 888095725
18 69 3 880129527
23 215 2 874787116
184 132 5 889913687
244 237 5 880602334
211 181 1 879461498
236 696 2 890117223
145 672 3 882182689
235 648 4 889655662
116 1016 2 876453376
178 358 1 888512993
11 561 2 891905936
329 512 4 891656347
183 405 4 891464393
308 467 4 887737194
207 576 3 877822904
198 249 2 884205277
100 750 4 891375016
291 168 5 874871800
115 762 4 881170508
151 169 5 879524268
305 403 2 886324792
338 494 3 879438570
292 525 5 881105701
234 671 3 892336257
234 584 3 892333653
279 275 3 875249232
234 638 4 892335989
110 79 4 886988480
106 273 3 881453290
128 111 3 879969215
298 151 3 884183952
42 845 5 881110719
128 747 3 879968742
190 717 3 891042938
1 82 5 878542589
99 421 3 885680772
313 208 3 891015167
13 45 3 882139863
305 302 4 886307860
94 185 5 885873684
271 204 4 885848314
128 83 5 879967691
267 50 5 878974783
142 189 4 888640317
1 56 4 875072716
18 214 4 880132078
188 234 4 875073048
235 100 4 889655550
303 408 4 879467035
100 266 2 891375484
178 302 4 892239796
42 781 4 881108280
18 488 3 880130065
184 14 4 889907738
293 521 3 888906288
293 849 2 888907891
198 156 3 884207058
234 966 4 892334189
181 1351 1 878962168
194 153 3 879546723
1 272 3 887431647
265 279 2 875320462
159 323 4 880485443
332 229 5 888360342
334 229 2 891549777
126 258 4 887853919
200 225 4 876042299
63 246 3 875747514
271 134 3 885848518
179 316 5 892151202
308 959 3 887739335
270 70 5 876955066
181 1198 1 878962585
21 445 3 874951658
326 675 4 879875457
268 823 2 875742942
109 845 4 880571684
339 132 5 891032953
244 95 4 880606418
62 702 2 879376079
321 615 5 879440109
254 141 3 886472836
295 423 4 879517372
271 241 3 885849207
7 519 4 891352831
334 52 4 891548579
136 14 5 882693338
192 1160 4 881367456
259 176 4 874725386
244 509 5 880606017
238 815 2 883576398
73 127 5 888625200
249 455 4 879640326
320 291 4 884749014
13 820 4 882398743
10 283 4 877892276
321 207 3 879440244
201 991 4 884110735
102 559 3 888803052
190 742 3 891033841
311 99 5 884365075
309 333 3 877370419
62 685 2 879373175
116 187 5 886310197
295 966 5 879518060
234 72 3 892335674
255 984 1 883215902
161 582 1 891170800
87 550 4 879876074
59 559 5 888206562
140 322 3 879013684
224 301 3 888082013
90 486 5 891383912
14 792 5 879119651
194 216 3 879523785
222 501 2 881060331
90 311 4 891382163
328 43 3 886038224
7 633 5 891351509
151 228 5 879524345
297 223 5 875238638
207 529 4 878191679
130 930 3 876251072
314 743 1 877886443
181 926 1 878962866
13 509 5 882140691
232 523 4 888549757
201 87 3 884111775
223 470 4 891550767
18 602 3 880131407
82 495 3 878769668
144 403 3 888105636
186 322 5 879022927
250 174 3 878092104
321 194 3 879441225
28 12 4 881956853
28 895 4 882826398
151 405 3 879543055
207 1102 3 880839891
201 164 3 884112627
6 509 4 883602664
42 380 4 881108548
221 895 2 885081339
328 10 4 885047099
270 159 4 876956233
269 340 5 891446132
216 249 3 880232917
201 1424 3 884113114
85 86 4 879454189
95 843 4 880572448
306 275 4 876503894
256 235 3 882153668
85 692 3 879828490
11 312 4 891902157
305 210 3 886323006
181 321 2 878961623
151 7 4 879524610
296 961 5 884197287
119 595 3 874781067
314 929 3 877887356
279 363 5 890451473
188 357 4 875073647
214 872 2 891542492
234 209 4 892317967
5 426 3 878844510
1 80 4 876893008
246 578 2 884923306
294 979 3 877819897
314 73 4 877889205
312 98 4 891698300
208 662 4 883108842
43 382 5 883955702
254 596 4 886473852
3 294 2 889237224
44 153 4 878347234
25 742 4 885852569
94 79 4 885882967
262 406 3 879791537
35 1025 3 875459237
148 501 4 877020297
70 423 5 884066910
83 265 5 880308186
5 222 4 875635174
308 1028 2 887738972
109 62 3 880578711
49 173 3 888067691
314 468 4 877892214
334 1163 4 891544764
269 205 3 891447841
38 318 3 892430071
102 222 3 888801406
329 297 4 891655868
305 1411 3 886324865
236 289 4 890117820
313 131 4 891015513
332 284 5 887938245
121 121 2 891388501
60 183 5 883326399
339 1030 1 891036707
296 544 4 884196938
11 720 1 891904717
263 272 5 891296919
303 203 5 879467669
288 182 4 886374497
291 17 4 874834850
308 628 3 887738104
13 755 3 882399014
64 231 3 889740880
277 24 4 879543931
130 572 3 878537853
293 386 2 888908065
279 368 1 886016352
189 253 4 893264150
296 32 4 884197131
305 169 5 886322893
303 262 5 879466065
95 211 3 879197652
207 1098 4 877879172
110 1248 3 886989126
312 408 4 891698174
279 1413 5 875314434
15 301 4 879455233
116 484 4 886310197
198 51 3 884208455
13 2 3 882397650
332 232 5 888098373
44 55 4 878347455
62 716 4 879375951
148 529 5 877398901
303 421 4 879466966
276 56 5 874791623
311 484 4 884366590
58 475 5 884304609
85 488 4 879455197
330 584 3 876547220
181 1067 1 878962550
301 515 3 882074561
13 830 1 882397581
127 268 1 884363990
37 56 5 880915810
314 924 5 877886921
201 210 2 884111270
198 511 4 884208326
94 742 3 891722214
209 258 2 883589626
305 610 3 886324128
67 405 5 875379794
294 120 2 889242937
246 98 4 884921428
194 162 3 879549899
307 393 3 877123041
95 976 2 879195703
268 252 3 875743182
216 298 5 881721819
5 453 1 879198898
223 845 4 891549713
293 124 4 888904696
224 1119 3 888082634
299 176 4 880699166
130 71 5 875801695
130 50 5 874953665
54 313 4 890608360
62 473 4 879373046
312 495 4 891699372
125 22 5 892836395
318 357 4 884496069
204 748 1 892392030
182 293 3 885613152
49 569 3 888067482
69 56 5 882145428
64 959 4 889739903
325 179 5 891478529
286 272 5 884069298
116 880 3 876680723
215 89 4 891435060
46 333 5 883611374
246 294 2 884924460
213 25 4 878870750
90 213 5 891383718
110 188 4 886988574
212 511 4 879304051
57 1059 3 883697432
57 825 1 883697761
297 282 3 874954845
276 176 5 874792401
106 45 3 881453290
151 66 4 879524974
276 66 3 874791993
269 76 3 891448456
154 286 4 879138235
210 219 3 887808581
306 319 4 876503793
324 471 5 880575412
265 472 3 875320542
85 389 3 882995832
54 325 3 880930146
18 498 4 880129940
271 345 3 885844666
123 22 4 879809943
87 1189 5 879877951
217 810 3 889070050
198 148 3 884206401
116 257 3 876452523
131 274 3 883681351
297 692 3 875239018
266 874 2 892257101
109 796 3 880582856
189 480 5 893265291
22 294 1 878886262
234 471 3 892335074
328 679 2 885049460
56 79 4 892676303
178 978 2 882824983
216 226 3 880244803
38 444 1 892433912
219 179 5 889492687
43 944 2 883956260
279 1484 3 875307587
236 507 3 890115897
296 1009 3 884196921
271 490 4 885848886
206 903 2 888180018
21 295 3 874951349
318 47 2 884496855
59 230 4 888205714
151 175 5 879524244
263 86 4 891299574
308 193 3 887737837
152 125 5 880149165
123 165 5 879872672
169 174 4 891359418
294 10 3 877819490
197 651 5 891409839
263 892 3 891297766
63 109 4 875747731
206 362 1 888180018
52 498 5 882922948
316 213 5 880853516
72 89 3 880037164
189 705 4 893265569
80 87 4 887401307
198 746 4 884207946
85 56 4 879453587
194 56 5 879521936
110 82 4 886988480
99 741 3 885678886
7 195 5 891352626
323 546 2 878739519
21 982 1 874951482
334 93 4 891545020
12 82 4 879959610
43 235 3 875975520
228 288 4 889387173
109 90 3 880583192
13 64 5 882140037
178 288 5 882823353
181 887 1 878962005
123 606 3 879872540
82 64 5 878770169
138 285 4 879023245
87 1182 3 879877043
201 304 2 884110967
70 202 4 884066713
178 655 4 882827247
327 558 4 887746196
315 654 5 879821193
251 55 3 886271856
42 70 3 881109148
311 482 4 884365104
129 272 4 883243972
307 193 3 879205470
10 4 4 877889130
338 211 4 879438092
95 514 2 888954076
342 1047 2 874984854
342 792 3 875318882
201 213 4 884111873
32 276 4 883717913
257 289 4 879029543
14 175 5 879119497
299 174 4 877880961
6 134 5 883602283
320 433 4 884751730
305 257 2 886322122
28 153 3 881961214
308 609 4 887739757
287 218 5 875335424
62 421 5 879375716
269 172 3 891449031
119 628 4 874775185
279 1142 1 890780603
224 1442 3 888104281
308 528 3 887737036
151 435 4 879524131
328 216 3 885045899
295 493 5 879516961
62 96 4 879374835
59 1109 3 888205088
255 258 4 883215406
102 195 4 888801360
128 660 2 879968415
8 79 4 879362286
197 1419 2 891410124
217 578 5 889070087
313 204 4 891014401
162 298 4 877635690
30 289 2 876847817
260 319 2 890618198
57 294 4 883696547
334 86 4 891548295
308 54 2 887740254
210 255 4 887730842
213 447 4 878955598
189 1021 5 893266251
220 306 4 881197664
104 1241 1 888465379
339 582 4 891032793
28 184 4 881961671
51 148 3 883498623
244 157 4 880604119
234 491 4 892079538
275 588 3 875154535
186 53 1 879023882
99 1052 1 885679533
269 131 5 891449728
311 720 3 884366307
270 1119 5 876955729
286 1035 3 877532094
311 94 3 884366187
211 257 5 879461498
239 671 5 889179290
201 98 4 884111312
43 403 4 883956305
315 216 4 879821120
53 924 3 879443303
308 452 2 887741329
338 613 3 879438597
90 357 5 891385132
303 327 1 879466166
247 271 2 893081411
144 303 4 888103407
102 1030 1 892994075
90 739 5 891384789
72 527 4 880036746
286 248 5 875806800
201 32 3 884140049
327 497 4 887818658
141 125 5 884585642
167 675 1 892738277
262 217 3 879792818
151 813 4 879524222
13 859 1 882397040
276 207 4 874795988
246 1073 4 884921380
298 98 4 884127720
23 88 3 874787410
94 700 2 891723427
130 772 4 876251804
5 403 3 875636152
297 176 4 881708055
178 250 4 888514821
128 417 4 879968447
270 281 5 876956137
63 251 4 875747514
42 357 5 881107687
100 288 2 891374603
334 100 5 891544707
162 222 4 877635758
184 1020 4 889908630
13 625 2 882398691
72 79 4 880037119
213 8 3 878955564
82 13 2 878768615
314 735 5 877888855
59 488 3 888205956
14 313 2 890880970
236 200 3 890115856
325 240 1 891479592
286 164 3 877533586
268 768 3 875744895
83 77 4 880308426
313 230 3 891015049
21 218 4 874951696
325 656 4 891478219
283 83 4 879298239
223 323 2 891549017
130 418 5 875801631
28 282 4 881957425
43 7 4 875975520
293 559 2 888906168
286 432 3 878141681
176 272 5 886047068
237 499 2 879376487
332 451 5 888360179
303 273 3 879468274
286 13 2 876521933
327 169 2 887744205
262 50 2 879962366
312 631 5 891699599
102 734 2 892993786
16 655 5 877724066
23 90 2 874787370
249 182 5 879640949
18 209 4 880130861
293 216 4 888905990
308 607 3 887737084
164 689 5 889401490
306 1009 4 876503995
327 655 4 887745303
280 756 4 891701649
106 97 5 881450810
109 147 4 880564679
156 58 4 888185906
133 260 1 890588878
23 511 5 874786770
112 689 4 884992668
116 313 5 886978155
271 13 4 885847714
313 136 5 891014474
240 898 5 885775770
52 405 4 882922610
280 202 3 891701090
262 1278 4 879961819
275 252 2 876197944
187 732 3 879465419
13 428 5 882140588
268 946 3 875310442
234 283 3 891227814
16 151 5 877721905
336 108 3 877757320
235 435 5 889655434
216 274 3 880233061
246 215 2 884921058
13 913 1 892014908
21 439 1 874951820
94 99 3 891721815
82 275 2 884714125
339 55 3 891032765
59 1116 3 888206562
217 685 5 889069782
295 736 5 879966498
170 328 3 884103860
151 826 1 879543212
13 212 5 882399385
223 1 4 891549324
246 196 3 884921861
154 137 3 879138657
158 144 4 880134445
11 120 2 891903935
18 630 3 880132188
197 181 5 891409893
235 433 4 889655596
331 69 5 877196384
244 278 3 880605294
217 540 1 889070087
312 134 5 891698764
299 168 4 878192039
234 1172 3 892079076
224 632 2 888103872
327 474 3 887743986
184 780 4 889913254
62 1107 1 879376159
65 70 1 879216529
101 928 2 877136302
210 465 4 887737131
144 237 4 888104258
320 250 4 884751992
311 692 4 884364652
159 328 3 893255993
128 77 3 879968447
167 48 1 892738277
291 558 4 874867757
56 143 3 892910182
38 392 5 892430120
293 264 3 888904392
115 69 1 881171825
276 250 4 874786784
280 225 4 891701974
295 588 4 879517682
26 321 3 891347949
302 328 3 879436844
145 109 4 875270903
201 380 1 884140825
57 252 2 883697807
280 100 3 891700385
310 258 3 879435606
26 269 4 891347478
308 4 5 887737890
269 174 1 891449124
262 71 4 879794951
221 684 4 875247454
263 521 3 891297988
256 276 3 882151198
1 229 4 878542075
266 508 4 892258004
59 127 5 888204430
325 505 4 891478557
327 133 4 887745662
282 269 4 879949347
151 300 4 879523942
104 283 4 888465582
291 1017 4 874833911
276 770 4 877935446
334 1108 4 891549632
224 879 3 888082099
64 1133 4 889739975
58 42 4 884304936
106 584 4 881453481
159 258 4 893255836
268 248 3 875742530
318 286 3 884470681
6 525 5 883601203
327 431 3 887820384
77 23 4 884753173
95 15 4 879195062
255 452 3 883216672
144 328 3 888103407
102 307 4 883748222
269 1014 3 891446838
184 172 4 889908497
306 306 5 876503792
49 732 3 888069040
181 1347 1 878962052
293 514 4 888906378
330 121 4 876544582
125 1074 3 892838827
291 147 4 874805768
269 214 3 891448547
13 168 4 881515193
305 76 1 886323506
313 435 5 891013803
307 229 5 879538921
314 54 4 877888892
269 529 5 891455815
283 186 5 879298239
158 8 5 880134948
92 87 3 876175077
85 842 3 882995704
20 118 4 879668442
193 393 4 889126808
167 222 4 892737995
201 1187 3 884112201
125 346 1 892835800
144 880 5 888103509
234 628 2 892826612
291 574 1 875087656
224 977 2 888104281
152 780 5 884019189
71 462 5 877319567
151 755 3 879543366
135 229 2 879857843
92 931 1 875644796
95 33 3 880571704
130 125 5 875801963
269 405 1 891450902
297 277 3 875048641
62 527 4 879373692
221 17 4 875245406
11 743 2 891904065
230 50 5 880484755
159 930 4 880557824
174 107 5 886434361
97 7 5 884238939
84 289 5 883449419
63 948 3 875746948
125 143 5 879454793
160 126 3 876769148
316 483 4 880853810
32 117 3 883717555
327 93 4 887744432
13 856 5 886303171
216 202 4 880234346
92 1212 3 876175626
1 140 1 878543133
263 183 4 891298655
5 173 4 875636675
85 372 4 879828720
194 519 4 879521474
109 550 5 880579107
201 198 4 884111873
340 172 4 884990620
49 117 1 888069459
7 642 3 892132277
239 286 1 889178512
198 568 3 884208710
237 23 4 879376606
239 135 5 889178762
5 241 1 875720948
72 382 4 880036691
297 480 4 875238923
249 826 1 879640481
25 127 3 885853030
94 227 3 891722759
195 591 4 892281779
92 85 3 875812364
85 709 5 879828941
308 502 5 887739521
311 117 4 884366852
247 251 4 893081395
235 792 4 889655490
329 326 3 891656639
338 79 4 879438715
244 428 4 880606155
187 70 4 879465394
253 483 5 891628122
194 62 2 879524504
70 71 3 884066399
203 332 5 880433474
49 72 2 888069246
308 673 4 887737243
246 426 3 884923471
280 231 3 891701974
180 433 5 877127273
110 1250 3 886988818
327 811 4 887747363
339 47 4 891032701
194 132 3 879520991
1 225 2 878542738
36 319 2 882157356
342 746 4 875320227
260 1105 5 890618729
40 754 4 889041790
175 31 4 877108051
62 827 2 879373421
138 100 5 879022956
252 9 5 891456797
59 421 5 888206015
110 540 3 886988793
1 235 5 875071589
334 269 3 891544049
301 95 5 882076334
63 6 3 875747439
269 805 2 891450623
151 357 5 879524585
268 404 4 875309430
199 473 4 883783005
22 780 1 878887377
28 441 2 881961782
299 210 4 889502980
317 326 3 891446438
254 384 1 886475790
178 245 3 882823460
297 194 3 875239453
90 966 5 891385843
11 734 3 891905349
325 514 4 891478006
249 411 3 879640436
18 964 3 880132252
311 118 3 884963203
334 293 3 891544840
294 483 4 889854323
297 86 5 875238883
293 647 5 888905760
294 876 3 889241633
286 142 4 877534793
308 569 3 887740410
222 164 4 878181768
49 721 2 888068934
303 1090 1 879485686
73 474 5 888625200
93 845 4 888705321
85 1101 4 879454046
223 216 5 891550925
42 1043 2 881108633
234 212 2 892334883
16 288 3 877717078
13 319 4 882139327
135 294 4 879857575
168 411 1 884288222
72 204 4 880037853
144 523 5 888105338
303 398 1 879485372
128 215 3 879967452
320 11 4 884749255
267 684 4 878973088
60 490 4 883326958
189 694 4 893265946
116 905 2 890131519
249 240 4 879640343
110 300 3 886987380
201 1063 3 884113453
180 121 5 877127830
87 1072 3 879876610
6 209 4 883601713
63 301 5 875747010
179 895 5 892151565
148 98 3 877017714
13 312 1 883670630
15 278 1 879455843
176 305 5 886047068
102 66 3 892992129
293 251 4 888904734
42 204 5 881107821
328 523 5 885046206
206 333 4 888179565
279 67 4 875310419
158 42 3 880134913
70 151 3 884148603
271 661 4 885848373
37 222 3 880915528
279 1095 1 886016480
250 200 5 883263374
103 144 4 880420510
50 1084 5 877052501
128 1141 4 879968827
336 577 1 877757396
275 191 4 880314797
95 173 5 879198547
87 651 4 879875893
21 678 2 874951005
145 1217 2 875272349
13 860 1 882396984
312 676 3 891699295
200 431 5 884129006
102 67 1 892993706
325 506 5 891478180
221 1073 4 875245846
2 297 4 888550871
305 733 3 886324661
275 969 2 880314412
11 215 3 891904389
341 876 4 890757886
231 126 5 888605273
269 474 4 891448823
13 540 3 882398410
102 809 3 888802768
254 240 1 886476165
234 486 3 892079373
256 932 3 882150508
249 58 5 879572516
305 947 4 886322838
262 15 3 879962366
325 187 3 891478455
184 836 4 889909142
11 428 4 891905032
40 258 3 889041981
313 740 2 891016540
276 1314 3 874796412
101 1051 2 877136891
236 699 4 890116095
207 134 4 875991160
215 82 3 891435995
125 945 5 892836465
120 282 4 889490172
293 461 2 888905519
160 93 5 876767572
298 418 4 884183406
326 444 4 879877413
246 849 1 884923687
278 301 2 891294980
166 288 3 886397510
328 4 3 885047895
70 265 4 884067503
298 465 4 884182806
343 186 4 876407485
205 313 3 888284313
201 461 4 884113924
276 1478 3 889174849
91 264 4 891438583
250 294 1 878089033
68 405 3 876974518
246 99 3 884922657
10 704 3 877892050
97 435 4 884238752
99 118 2 885679237
102 302 3 880680541
70 152 4 884149877
41 31 3 890687473
178 179 2 882828320
6 19 4 883602965
89 246 5 879461219
254 257 3 886471389
94 402 4 891723261
42 404 5 881108760
130 566 4 878537558
13 614 4 884538634
286 642 3 877531498
291 410 5 874834481
214 121 4 891543632
246 284 1 884922475
130 413 3 876251127
320 1210 4 884751316
60 810 4 883327201
141 744 5 884584981
288 97 4 886629750
145 750 4 885555884
189 496 5 893265380
130 55 5 875216507
328 431 2 885047822
177 1039 3 880130807
201 281 2 884112352
301 456 3 882074838
136 56 4 882848783
74 15 4 888333542
169 429 3 891359250
1 120 1 875241637
100 302 4 891374528
303 716 2 879467639
216 498 3 880235329
6 476 1 883600175
329 98 4 891656300
230 511 2 880485656
113 321 3 875075887
64 100 4 879365558
13 876 2 881515521
269 771 1 891451754
6 154 3 883602730
327 962 3 887820545
179 345 1 892151565
60 152 4 883328033
222 250 2 877563801
83 252 4 883868598
330 51 5 876546753
125 290 4 892838375
181 286 1 878961173
327 451 4 887819411
161 14 4 891171413
18 82 3 880131236
24 372 4 875323553
200 286 4 884125953
73 202 2 888626577
22 29 1 878888228
96 8 5 884403020
343 1107 3 876406977
297 12 5 875239619
279 1411 3 884556545
110 202 2 886988909
94 257 4 891724178
72 176 2 880037203
102 89 4 888801315
119 684 4 886177338
60 151 5 883326995
295 404 4 879518378
308 447 4 887739056
312 1203 5 891699599
343 55 3 876405129
284 259 2 885329593
276 563 3 874977334
280 736 2 891700341
311 310 4 884363865
18 739 3 880131776
87 209 5 879876488
13 90 3 882141872
58 1097 5 884504973
224 243 2 888082277
279 780 4 875314165
56 568 4 892910797
330 215 5 876547925
7 92 5 891352010
179 315 5 892151202
64 239 3 889740033
297 699 4 875239658
21 424 1 874951293
188 792 2 875075062
91 195 5 891439057
293 194 4 888906045
94 727 5 891722458
274 148 2 878946133
57 282 5 883697223
276 780 3 874792143
216 651 5 880233912
151 241 3 879542645
62 8 5 879373820
197 68 2 891410082
59 385 4 888205659
119 275 5 874774575
118 324 4 875384444
304 298 5 884968415
26 9 4 891386369
312 847 3 891698174
308 965 4 887738387
270 707 5 876954927
297 31 3 881708087
221 100 5 875244125
116 760 3 886309812
119 193 4 874781872
177 300 2 880130434
161 654 3 891171357
303 235 4 879484563
117 174 4 881011393
327 216 3 887818991
327 1098 4 887820828
23 516 4 874787330
181 1051 2 878962586
48 661 5 879434954
76 531 4 875028007
189 129 3 893264378
1 125 3 878542960
312 144 1 891698987
301 410 4 882074460
306 476 3 876504679
38 616 3 892433375
223 298 5 891549570
145 1292 1 875271357
328 528 5 886037457
174 458 4 886433862
303 31 3 879467361
23 83 4 874785926
6 175 4 883601426
173 938 3 877557076
313 239 3 891028873
38 780 4 892434217
184 89 4 889908572
44 155 3 878348947
244 13 4 880604379
13 263 5 881515647
344 479 4 884901093
40 340 2 889041454
141 222 4 884584865
144 286 4 888103370
324 597 4 880575493
222 700 3 881060550
96 484 5 884402860
90 199 5 891384423
1 215 3 876893145
270 379 5 876956232
251 257 3 886272378
246 109 5 884921794
130 90 4 875801920
326 318 5 879875612
9 521 4 886959343
221 32 4 875245223
20 186 3 879669040
37 79 4 880915810
279 871 4 875297410
163 56 4 891220097
84 284 3 883450093
201 676 2 884140927
46 1062 5 883614766
72 82 3 880037242
117 176 5 881012028
269 608 4 891449526
148 214 5 877019882
294 1067 4 877819421
121 174 3 891388063
20 172 3 879669181
59 724 5 888205265
108 125 3 879879864
49 53 4 888067405
294 678 2 877818861
240 301 5 885775683
299 602 3 878191995
246 802 1 884923471
13 788 1 882396914
303 1508 1 879544130
207 1283 4 884386260
255 271 4 883215525
195 477 2 885110922
312 557 5 891699599
144 302 3 888103530
102 399 2 888802722
297 515 5 874954353
106 165 5 881450536
291 421 4 875087352
145 552 5 888398747
89 936 5 879461219
85 71 4 879456308
282 271 3 881702919
339 856 5 891034922
135 227 3 879857843
151 91 2 879542796
221 467 4 875245928
286 196 4 877533543
116 195 4 876453626
94 738 2 891723558
144 172 4 888105312
214 208 5 892668153
234 519 5 892078342
244 596 4 880604735
222 739 4 878184924
74 126 3 888333428
45 127 5 881007272
344 306 5 884814359
116 887 3 881246591
181 1362 1 878962200
144 461 4 888106044
189 1099 5 893266074
53 228 3 879442561
2 290 3 888551441
299 739 3 889502865
313 139 3 891030334
274 275 5 878944679
321 521 2 879441201
134 539 4 891732335
269 486 3 891449922
94 655 4 891720862
262 1220 4 879794296
181 1265 1 878961668
109 4 2 880572756
12 96 4 879959583
109 42 1 880572756
90 307 5 891383319
77 498 5 884734016
314 620 3 877887212
48 210 3 879434886
305 1101 4 886323563
198 357 5 884207267
222 293 3 877563353
207 186 4 877879173
158 580 4 880135093
255 551 1 883216672
87 1047 3 879877280
301 9 3 882074291
279 1498 4 891208884
299 343 3 881605700
339 288 3 891036899
13 782 3 885744650
210 722 4 891036021
200 528 4 884128426
193 693 4 889124374
297 678 3 874954093
128 216 5 879967102
311 38 3 884365954
169 879 5 891268653
174 82 1 886515472
13 440 1 882397040
95 378 4 888954699
321 224 3 879439733
180 83 5 877128388
150 127 5 878746889
332 233 4 888360370
102 83 3 888803487
263 678 2 891297766
128 97 3 879968125
239 288 2 889178513
275 202 3 875155167
311 471 4 884963254
267 145 4 878972903
253 210 4 891628598
250 64 5 878090153
284 339 3 885329671
327 849 2 887822530
11 90 2 891905298
222 93 2 883815577
299 26 4 878192601
276 748 3 883822507
274 496 5 878946473
252 129 4 891456876
244 1225 2 880606818
75 820 3 884050979
194 52 4 879525876
328 627 3 885048365
201 955 3 884114895
253 198 5 891628392
221 39 4 875245798
334 317 3 891546000
271 414 4 885849470
158 525 5 880133288
64 705 5 879365558
294 24 4 877819761
28 480 5 881957002
269 959 5 891457067
299 270 4 878052375
151 655 4 879542645
177 87 4 880130931
269 15 2 891446348
279 740 3 875736276
332 673 5 888360307
269 483 4 891448800
91 682 2 891438184
246 17 2 884922658
290 418 3 880474293
9 487 5 886960056
217 797 4 889070011
234 14 3 891227730
292 1050 4 881105778
65 1129 4 879217258
222 231 2 878182005
299 32 3 877881169
279 685 3 884982881
15 620 4 879456204
68 178 5 876974755
293 210 3 888905665
43 931 1 884029742
344 278 3 884900454
56 368 3 892911589
339 30 3 891032765
144 518 3 888106182
125 734 3 892838977
12 735 5 879960826
269 484 3 891448895
90 179 5 891385389
185 237 4 883526268
243 275 3 879987084
269 1091 2 891451705
11 429 5 891904335
13 88 4 882141485
120 25 5 889490370
198 402 3 884209147
165 304 3 879525672
138 98 5 879024043
94 561 3 891722882
293 188 3 888906288
39 258 4 891400280
159 237 3 880485766
344 39 3 884901290
69 1017 5 882126156
230 673 3 880485573
160 124 4 876767360
44 228 5 883613334
298 1142 4 884183572
345 1160 3 884994606
94 133 4 885882685
121 122 2 891390501
325 109 2 891478528
160 1019 5 876857977
205 333 4 888284618
343 44 3 876406640
321 1028 2 879441064
102 986 1 888802319
268 123 3 875742794
19 153 4 885412840
125 511 5 879454699
332 1188 5 888098374
90 132 5 891384673
16 657 5 877723882
316 50 1 880853654
272 11 4 879455143
85 380 4 882995704
279 1118 3 875310631
269 761 2 891451374
75 696 4 884050979
249 469 4 879641285
311 671 3 884365954
58 222 4 884304656
254 99 3 886473254
308 632 3 887738057
125 1272 1 879454892
49 40 1 888069222
83 1101 2 880308256
16 294 4 877717116
94 214 5 891725332
295 624 5 879518654
152 866 5 880149224
128 227 2 879968946
119 235 5 874774956
122 1268 2 879270711
276 561 2 874792745
251 109 4 886272547
7 90 3 891352984
184 275 5 889913687
262 628 2 879962366
279 13 3 875249210
181 764 1 878962866
21 56 5 874951658
298 660 3 884182838
98 321 3 880498519
145 949 4 875272652
164 458 4 889402050
232 64 4 888549441
184 126 3 889907971
269 209 4 891448895
26 100 5 891386368
57 1093 3 883697352
117 338 3 886019636
297 97 5 875239871
276 969 4 874792839
119 1263 3 886177338
345 722 3 884993783
318 72 4 884498540
246 410 1 884923175
158 809 3 880134675
178 651 4 882826915
254 625 3 886473808
21 106 2 874951447
225 136 5 879540707
41 486 4 890687305
234 191 4 892334765
78 289 4 879633567
90 9 4 891385787
313 415 2 891030367
180 716 1 877128119
344 462 2 884901156
268 810 2 875744388
195 227 3 888737346
72 603 4 880037417
31 135 4 881548030
303 1267 3 879484327
64 731 3 889739648
62 89 5 879374640
151 662 4 879525054
189 1372 4 893264429
213 79 5 878956263
219 13 1 889452455
345 708 3 884992786
244 712 3 880607925
220 288 5 881197887
1 6 5 887431973
239 923 5 889179033
290 202 4 880474590
194 523 5 879521596
200 831 4 891825565
346 213 3 874948173
267 214 4 878972342
100 340 3 891374707
42 521 2 881107989
214 45 4 891543952
264 320 4 886122261
145 1102 1 888398162
10 22 5 877888812
299 71 3 878192238
313 608 4 891017585
209 242 4 883589606
221 92 4 875245989
293 646 3 888906244
184 1012 3 889907448
70 260 2 884065247
90 30 5 891385843
144 1169 4 888106044
1 104 1 875241619
21 288 3 874950932
6 523 5 883601528
248 181 4 884535374
168 409 4 884287846
234 878 2 892336477
44 238 4 878347598
296 1073 5 884197330
296 96 5 884197287
206 288 5 888179565
76 100 5 875028391
327 50 3 887745574
308 811 4 887739212
338 168 3 879438225
125 238 3 892838322
299 1074 3 889502786
85 203 5 879455402
77 431 5 884733695
18 367 4 880130802
293 572 2 888907931
286 228 3 889651576
246 568 4 884922451
174 902 3 890168363
268 163 2 875743656
291 555 1 874868629
151 478 5 879524471
269 63 1 891450857
11 97 4 891904300
83 748 2 886534501
83 125 5 880306811
145 717 3 888398702
56 426 4 892683303
339 435 4 891032189
35 242 2 875459166
18 462 3 880130065
194 708 3 879528106
14 514 4 879119579
345 651 4 884992493
279 415 3 875314313
12 471 5 879959670
126 332 2 887853735
16 22 5 877721071
116 758 1 876452980
220 325 1 881198435
151 328 3 879523838
280 11 5 891700570
10 155 4 877889186
73 1149 4 888626299
180 213 5 877128388
13 831 3 882398385
181 1291 1 878963167
92 132 3 875812211
345 202 3 884992218
269 482 3 891448823
59 241 4 888205574
322 508 4 887314073
18 25 3 880131591
343 135 5 876404568
62 856 4 879374866
144 528 4 888105846
24 662 5 875323440
108 282 3 879880055
95 518 4 888954076
276 383 2 877934828
187 427 5 879465597
13 315 5 884538466
332 98 5 888359903
12 172 4 879959088
347 22 5 881654005
201 8 3 884141438
90 855 5 891383752
193 1132 3 889127660
99 203 4 885680723
122 708 5 879270605
15 742 2 879456049
222 1239 2 881060762
57 56 3 883698646
332 595 4 887938574
6 498 4 883601053
339 58 3 891032379
268 154 4 875743563
102 202 4 892991269
213 474 2 878955635
73 196 4 888626177
283 70 4 879298206
122 212 5 879270567
201 454 2 884111830
298 652 3 884183099
7 10 4 891352864
314 29 5 877889234
130 1277 4 876250897
201 275 4 884113634
304 681 2 884967167
130 748 4 874953526
118 176 5 875384793
182 237 3 885613067
13 794 4 882399615
242 934 5 879741196
69 1134 5 882072998
77 153 5 884732685
151 196 4 879542670
279 202 4 875307587
233 958 5 875508372
284 682 3 885329322
181 301 2 878961303
286 419 5 889651990
327 14 4 887744167
256 195 5 882164406
331 1100 2 877196634
102 186 4 888803487
119 338 1 892565167
234 316 4 891033851
295 378 4 879518233
14 100 5 876965165
184 1006 3 889910078
216 721 4 880245213
130 148 4 876251127
130 229 4 875802173
158 100 5 880132401
222 972 2 881059758
122 792 3 879270459
59 14 5 888203234
31 705 5 881548110
254 501 3 886476281
297 475 5 874954426
193 328 3 889122993
292 28 4 881105734
1 49 3 878542478
242 1152 5 879741196
267 559 3 878972614
82 705 3 878769598
292 1039 4 881105778
14 455 4 880929745
308 511 5 887737130
236 170 5 890116451
334 4 3 891548345
130 1215 2 876251389
145 203 5 875271948
156 205 3 888185735
340 435 4 884990546
94 385 2 891721975
94 109 4 891721974
168 988 2 884287145
313 151 1 891014982
96 645 5 884403020
308 109 3 887738894
94 393 3 891721684
21 995 2 874950932
5 234 2 875720692
317 350 5 891446819
102 62 3 888801812
118 156 5 875384946
276 786 3 874791694
116 259 4 876452186
81 93 3 876533657
92 595 3 886443534
250 111 4 878091915
344 215 3 884900818
320 148 4 884748708
79 124 5 891271870
94 313 4 891724925
1 206 4 876893205
128 966 4 879968071
269 664 5 891457067
318 795 2 884498766
16 940 2 877721236
54 276 5 880931595
291 1109 4 874834768
298 172 4 884124993
234 292 4 891033821
106 15 3 883876518
114 1104 5 881260352
299 137 4 877877535
301 771 2 882079256
73 7 4 888625956
332 44 3 888360342
308 1019 4 887738570
187 28 4 879465597
94 783 2 891723495
15 137 4 879455939
286 56 2 877531469
222 756 4 877564031
18 699 5 880130802
68 245 3 876973777
134 748 5 891732365
334 1207 2 891550121
243 223 4 879988262
322 479 5 887313892
334 481 5 891546206
243 13 4 879987362
268 16 3 875306691
90 241 5 891384611
267 484 5 878971542
233 48 5 877663184
77 4 3 884752721
184 92 3 889908657
148 596 5 877020297
59 664 4 888205614
110 734 2 886989566
285 628 2 890595636
244 101 5 880603288
314 366 4 877891354
303 654 5 879467328
186 333 3 891718820
92 785 3 875660304
151 486 5 879525002
6 188 3 883602462
293 125 2 888905086
194 51 4 879549793
291 552 3 874834963
87 790 4 879876885
299 50 4 877877775
56 1 4 892683248
277 9 4 879543336
174 823 4 886434376
92 1047 1 875644796
177 182 5 880130684
41 751 4 890686872
1 76 4 878543176
113 262 2 875075983
271 657 4 885848559
323 7 2 878739355
303 373 2 879544276
138 238 5 879024382
325 98 4 891478079
106 64 4 881449830
222 155 4 878184113
345 367 4 884993069
273 328 3 891293048
144 1039 4 888105587
157 127 5 886890541
211 310 3 879461394
56 31 4 892679259
168 1016 5 884287615
303 129 5 879468547
76 258 3 875027206
223 249 2 891549876
60 28 5 883326155
321 507 3 879441336
141 932 3 884585128
73 286 4 888792192
226 480 4 883888853
90 713 4 891385466
272 172 4 879455043
19 313 2 885411792
145 286 3 875269755
342 764 1 875318762
224 322 2 888082013
328 1126 3 885046580
268 552 2 876514108
179 354 4 892151331
308 526 3 887739426
267 693 4 878972266
345 402 4 884993464
6 213 4 883602462
12 143 5 879959635
210 160 4 887737210
290 546 2 880475564
293 300 2 888904004
58 248 4 884794774
303 181 5 879468082
298 498 5 884182573
347 501 4 881654410
236 172 3 890116539
102 121 3 888801673
290 404 3 880475341
92 123 2 875640251
151 274 5 879542369
6 432 4 883601713
256 1289 4 882150552
43 216 5 875981128
189 632 5 893265624
263 514 3 891299387
22 117 4 878887869
250 44 4 878090199
269 188 2 891450675
278 98 4 891295360
155 294 3 879371194
140 334 2 879013684
18 190 4 880130155
239 198 5 889181047
104 342 3 888442437
251 258 3 886271496
72 64 5 880036549
305 338 3 886308252
72 566 4 880037277
339 226 2 891034744
1 72 4 878542678
194 511 4 879520991
316 549 5 880854049
201 150 4 884139983
206 1127 4 888180081
48 187 5 879434954
279 418 3 875733888
94 153 5 891725333
217 53 1 889069974
94 765 3 891723619
250 485 4 878092104
79 288 3 891272015
230 393 3 880485110
128 64 5 879966954
311 367 3 884365780
76 518 3 875498895
62 153 4 879374686
6 515 4 883599273
215 11 2 891436024
145 569 4 877343156
213 715 5 878955915
94 1199 3 891724798
10 294 3 879163524
344 181 3 884901047
53 100 5 879442537
20 678 4 879667684
207 294 3 875504669
123 285 5 879873830
256 1028 4 882151690
174 94 2 886515062
5 154 3 875636691
308 488 4 887736696
222 436 4 878184358
200 7 4 876042451
65 121 4 879217458
7 485 5 891351851
295 843 4 879517994
63 111 3 875747896
7 511 5 891351624
198 11 4 884207392
295 1503 2 879517082
267 28 4 878972524
91 99 2 891439386
151 321 4 879523900
13 302 5 881514811
293 1098 2 888905519
42 131 2 881108548
328 1135 1 885045528
14 519 5 890881335
234 142 2 892334852
230 154 4 880485159
152 98 2 882473974
164 313 5 889401284
55 144 5 878176398
318 1014 2 884494919
3 332 1 889237224
290 818 3 880732656
125 175 2 879455184
243 93 2 879987173
21 670 3 874951696
268 228 4 875309945
7 654 5 892135347
82 178 4 878769629
318 524 3 884496123
89 381 4 879459999
301 123 4 882074726
193 673 4 889126551
1 185 4 875072631
323 79 4 878739829
21 219 5 874951797
197 328 4 891409290
184 15 3 889907812
313 482 5 891016193
109 823 3 880572296
152 167 5 882477430
297 629 3 875410013
167 1147 4 892738384
264 524 3 886123596
280 571 3 891702338
222 577 1 878185137
21 591 3 874951382
210 501 4 887736998
280 230 3 891702153
86 286 3 879569555
320 174 4 884749255
144 50 5 888103929
256 97 4 882165103
65 427 5 879216734
198 429 4 884207691
184 217 3 889910394
151 709 5 879524778
18 530 4 880129877
43 724 4 875981390
86 319 3 879569555
242 305 5 879741340
97 28 5 884238778
114 195 4 881260861
188 69 4 875072009
301 230 4 882077033
85 241 3 882995340
129 313 3 883243934
106 77 4 881451716
261 748 3 890454310
188 7 5 875073477
13 208 5 882140624
342 288 5 875318267
299 286 4 877618524
311 204 5 884365617
125 813 1 879455184
276 463 4 874792839
13 421 2 882140389
141 472 5 884585274
222 550 3 878184623
191 896 3 891562090
144 516 2 888105197
216 1047 3 881428365
151 213 5 879524849
144 845 4 888104191
4 356 3 892003459
96 64 5 884403336
160 79 4 876859413
49 369 1 888069329
110 332 3 886987287
209 351 2 883589546
178 1004 4 882827375
344 97 3 884901156
11 203 4 891905856
241 307 4 887249795
239 312 2 889181247
276 719 3 877935336
18 191 4 880130193
141 535 5 884585195
18 971 4 880131878
162 42 3 877636675
342 591 3 875318629
278 525 5 891295330
102 217 2 888803149
16 447 5 877724066
343 82 5 876405735
109 357 2 880572528
301 732 4 882077351
303 202 5 879468149
250 378 4 878092059
234 507 4 892334803
217 68 3 889069974
87 523 5 879875649
95 26 3 880571951
245 94 2 888513081
95 289 2 879191590
334 1008 4 891545126
201 896 3 884110766
126 323 3 887853568
150 475 5 878746764
59 871 2 888203865
227 9 3 879035431
169 603 5 891359171
293 553 3 888907453
================================================
FILE: tests/test_data/test/test.net
================================================
source_id:token target_id:token
187 100
119 40
96 119
12 52
153 131
259 232
191 307
83 150
86 255
177 4
210 192
25 323
90 298
38 47
201 283
93 63
115 190
143 293
147 265
320 68
188 273
332 321
212 203
326 98
74 270
4 333
87 261
163 207
18 175
127 77
296 179
17 101
24 30
102 288
345 269
270 188
235 297
68 303
313 43
239 109
28 76
108 227
78 218
96 30
180 301
211 12
234 34
178 53
3 243
179 73
98 92
310 116
154 271
293 3
80 297
329 254
198 134
341 238
75 185
166 64
205 142
317 163
261 91
314 322
4 33
71 73
289 182
21 12
248 49
255 32
261 170
257 314
159 118
212 221
5 177
204 57
132 120
13 275
340 252
245 251
334 15
130 103
280 187
232 153
242 341
219 123
6 290
49 289
46 347
185 231
57 254
134 248
24 234
57 207
147 295
191 274
340 54
280 150
190 4
238 198
72 123
122 178
7 334
11 90
232 78
16 77
41 190
108 101
212 66
258 18
321 250
126 280
271 85
11 176
22 69
129 159
235 193
129 88
221 315
308 329
103 83
180 43
208 87
64 75
92 36
298 151
56 103
162 268
81 252
344 115
67 282
132 17
83 307
299 82
321 227
48 13
212 57
344 280
195 81
112 122
345 346
65 18
269 3
131 123
185 311
124 330
347 297
321 251
196 135
65 122
322 197
334 160
129 64
38 17
289 256
51 286
107 260
300 101
290 281
192 170
42 2
54 260
126 1
326 294
119 14
48 172
133 191
332 157
311 99
115 123
160 201
269 267
302 184
262 168
11 80
317 155
163 310
290 32
90 239
246 129
105 189
336 8
266 100
153 311
7 20
329 94
135 38
216 331
291 89
121 253
246 82
113 325
99 313
226 188
319 60
195 280
245 319
168 291
63 127
316 280
67 69
40 143
177 18
239 253
213 304
218 315
18 312
165 6
324 232
167 156
295 275
42 110
25 226
114 104
172 305
66 26
51 303
247 110
245 18
335 307
325 95
289 81
166 141
4 39
171 16
79 145
187 65
102 105
234 70
321 104
62 179
171 122
225 239
283 315
121 107
154 297
309 170
3 38
78 345
164 238
92 142
339 4
251 61
223 240
167 39
223 8
61 253
220 256
139 247
199 267
344 264
336 56
110 235
75 90
93 321
345 277
119 260
214 10
15 86
102 5
34 213
223 238
243 169
107 223
106 175
218 104
28 82
267 37
331 124
16 146
186 289
226 304
109 34
124 73
165 286
260 70
94 159
151 257
151 210
263 288
276 218
222 79
48 133
67 218
282 250
127 195
222 316
19 272
238 43
71 240
208 65
219 300
338 29
75 86
86 269
91 100
273 248
202 9
190 33
84 92
124 306
284 70
281 341
247 302
306 230
320 279
319 41
91 160
323 201
305 194
41 156
220 264
296 310
183 131
232 21
239 218
302 49
250 287
200 109
96 263
225 221
123 263
329 256
136 344
338 76
233 245
347 198
99 83
240 81
238 291
78 331
56 225
21 93
24 293
28 155
245 19
225 198
90 235
191 35
146 28
303 194
203 276
189 49
265 232
204 198
283 217
306 44
133 175
256 80
345 215
97 13
25 287
104 48
20 50
155 340
202 57
343 263
135 293
152 266
232 182
86 217
73 72
143 44
299 162
277 324
154 124
307 210
210 226
323 293
55 97
52 8
32 163
312 307
271 171
204 34
64 282
311 315
174 58
56 84
217 275
86 180
342 84
340 174
13 80
100 197
189 341
5 86
9 40
210 329
260 188
236 261
94 282
105 188
141 258
132 285
17 156
70 213
204 5
344 74
34 202
347 263
121 312
146 219
31 48
53 291
213 203
125 9
279 301
247 140
217 2
298 83
315 311
165 209
169 270
259 40
174 285
21 276
58 229
165 84
48 29
222 257
38 209
336 30
53 63
269 243
36 324
252 138
113 155
123 290
10 253
346 15
217 36
15 102
264 149
143 122
300 178
25 220
58 231
19 250
11 147
73 186
90 109
248 104
196 55
308 298
316 7
160 208
173 323
196 176
147 168
168 293
274 328
6 133
177 226
49 336
173 7
307 1
85 128
63 241
39 323
167 173
298 253
171 42
196 326
53 329
221 307
51 194
192 231
13 23
308 117
324 84
228 13
231 156
314 286
321 314
140 30
143 288
55 340
192 264
119 220
28 226
248 309
227 122
157 227
81 178
143 329
327 170
199 308
297 27
28 101
317 179
176 293
328 265
64 256
176 316
336 315
137 189
290 209
243 232
305 233
28 26
216 306
155 65
246 166
148 218
28 343
31 148
6 38
43 267
85 30
5 212
328 157
93 65
158 179
315 256
261 210
8 234
137 163
261 9
247 231
32 266
118 191
107 34
87 153
132 81
41 235
80 103
13 167
31 166
290 32
53 125
131 163
188 82
68 38
94 325
254 129
99 63
267 164
1 46
175 36
99 72
328 80
84 221
164 80
232 264
172 70
227 346
183 44
208 184
120 317
20 154
76 315
52 200
231 46
343 241
42 284
229 345
213 75
155 135
28 261
22 255
106 169
310 347
212 275
104 314
347 181
285 72
26 68
6 331
19 227
325 108
325 110
152 226
221 160
310 226
145 57
228 299
233 139
291 1
52 173
173 33
48 339
188 27
329 117
216 73
291 325
180 22
343 95
293 172
31 146
99 213
290 10
79 212
184 96
257 27
11 323
117 95
215 118
258 23
================================================
FILE: tests/test_model.py
================================================
import os
import unittest

from recbole_gnn.quick_start import objective_function

current_path = os.path.dirname(os.path.realpath(__file__))
config_file_list = [os.path.join(current_path, 'test_model.yaml')]


def quick_test(config_dict):
    objective_function(config_dict=config_dict, config_file_list=config_file_list, saved=False)


class TestGeneralRecommender(unittest.TestCase):
    def test_bpr(self):
        config_dict = {
            'model': 'BPR',
        }
        quick_test(config_dict)

    def test_neumf(self):
        config_dict = {
            'model': 'NeuMF',
        }
        quick_test(config_dict)

    def test_ngcf(self):
        config_dict = {
            'model': 'NGCF',
        }
        quick_test(config_dict)

    def test_lightgcn(self):
        config_dict = {
            'model': 'LightGCN',
        }
        quick_test(config_dict)

    def test_sgl(self):
        config_dict = {
            'model': 'SGL',
        }
        quick_test(config_dict)

    def test_hmlet(self):
        config_dict = {
            'model': 'HMLET',
        }
        quick_test(config_dict)

    def test_ncl(self):
        config_dict = {
            'model': 'NCL',
            'num_clusters': 10
        }
        quick_test(config_dict)

    def test_simgcl(self):
        config_dict = {
            'model': 'SimGCL'
        }
        quick_test(config_dict)

    def test_xsimgcl(self):
        config_dict = {
            'model': 'XSimGCL'
        }
        quick_test(config_dict)

    def test_lightgcl(self):
        config_dict = {
            'model': 'LightGCL'
        }
        quick_test(config_dict)

    def test_directau(self):
        config_dict = {
            'model': 'DirectAU'
        }
        quick_test(config_dict)

    def test_ssl4rec(self):
        config_dict = {
            'model': 'SSL4REC'
        }
        quick_test(config_dict)


class TestSequentialRecommender(unittest.TestCase):
    def test_gru4rec(self):
        config_dict = {
            'model': 'GRU4Rec',
        }
        quick_test(config_dict)

    def test_narm(self):
        config_dict = {
            'model': 'NARM',
        }
        quick_test(config_dict)

    def test_sasrec(self):
        config_dict = {
            'model': 'SASRec',
        }
        quick_test(config_dict)

    def test_srgnn(self):
        config_dict = {
            'model': 'SRGNN',
        }
        quick_test(config_dict)

    def test_srgnn_uni100(self):
        config_dict = {
            'model': 'SRGNN',
            'eval_args': {
                'split': {'LS': "valid_and_test"},
                'mode': 'uni100',
                'order': 'TO'
            }
        }
        quick_test(config_dict)

    def test_gcsan(self):
        config_dict = {
            'model': 'GCSAN',
        }
        quick_test(config_dict)

    def test_niser(self):
        config_dict = {
            'model': 'NISER',
        }
        quick_test(config_dict)

    def test_lessr(self):
        config_dict = {
            'model': 'LESSR'
        }
        quick_test(config_dict)

    def test_tagnn(self):
        config_dict = {
            'model': 'TAGNN'
        }
        quick_test(config_dict)

    def test_gcegnn(self):
        config_dict = {
            'model': 'GCEGNN'
        }
        quick_test(config_dict)

    def test_sgnnhn(self):
        config_dict = {
            'model': 'SGNNHN'
        }
        quick_test(config_dict)


class TestSocialRecommender(unittest.TestCase):
    def test_diffnet(self):
        config_dict = {
            'model': 'DiffNet',
        }
        quick_test(config_dict)

    def test_mhcn(self):
        config_dict = {
            'model': 'MHCN',
        }
        quick_test(config_dict)

    def test_sept(self):
        config_dict = {
            'model': 'SEPT',
        }
        quick_test(config_dict)


if __name__ == '__main__':
    unittest.main()
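
The suite above only swaps the 'model' key per case and layers it over tests/test_model.yaml through objective_function, so the whole model matrix runs as plain unittest cases. A minimal programmatic runner, as a sketch (it assumes RecBole-GNN and its dependencies are importable from the repository root):

# Equivalent to `python -m unittest tests.test_model -v` run from the repo root.
import unittest

suite = unittest.defaultTestLoader.discover('tests', pattern='test_model.py')
unittest.TextTestRunner(verbosity=2).run(suite)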
================================================
FILE: tests/test_model.yaml
================================================
dataset: test
epochs: 1
state: ERROR
data_path: tests/test_data/
# Atomic File Format
field_separator: "\t"
seq_separator: " "
# Common Features
USER_ID_FIELD: user_id
ITEM_ID_FIELD: item_id
RATING_FIELD: rating
TIME_FIELD: timestamp
seq_len: ~
# Label for Point-wise DataLoader
LABEL_FIELD: label
# NegSample Prefix for Pair-wise DataLoader
NEG_PREFIX: neg_
# Sequential Model Needed
ITEM_LIST_LENGTH_FIELD: item_length
LIST_SUFFIX: _list
MAX_ITEM_LIST_LENGTH: 50
POSITION_FIELD: position_id
# social network config
NET_SOURCE_ID_FIELD: source_id
NET_TARGET_ID_FIELD: target_id
filter_net_by_inter: True
undirected_net: True
# Selectively Loading
load_col:
    inter: [user_id, item_id, rating, timestamp]
    net: [source_id, target_id]
unload_col: ~
# Preprocessing
alias_of_user_id: ~
alias_of_item_id: ~
alias_of_entity_id: ~
alias_of_relation_id: ~
preload_weight: ~
normalize_field: ~
normalize_all: True
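
This yaml is the shared base configuration that quick_test passes in via config_file_list; any key in a per-test config_dict overrides it. A standalone sketch of the same call outside unittest (paths assume the repository root as the working directory):

from recbole_gnn.quick_start import objective_function

# 'model' comes from the override dict; the dataset, field names and the
# one-epoch training budget all come from tests/test_model.yaml.
objective_function(
    config_dict={'model': 'LightGCN'},
    config_file_list=['tests/test_model.yaml'],
    saved=False,
)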