[
  {
    "path": ".github/ISSUE_TEMPLATE/bug_report.md",
    "content": "---\nname: Bug report\nabout: Create a report to help us improve\ntitle: \"[\\U0001F41BBUG] Describe your problem in one sentence.\"\nlabels: bug\nassignees: ''\n\n---\n\n**Describe the bug**\nA clear and concise description of what the bug is.\n\n**To Reproduce**\nSteps to reproduce the behavior:\n1. Your extra yaml file\n2. Your code\n3. Your script for running\n\n**Expected behavior**\nA clear and concise description of what you expected to happen.\n\n**Screenshots**\nIf applicable, add screenshots to help explain your problem.\n\n**Colab Links**\nIf applicable, add links to Colab or other online Jupyter platforms that can reproduce the bug.\n\n**Desktop (please complete the following information):**\n- OS: [e.g. Linux, macOS or Windows]\n- RecBole Version [e.g. 0.1.0]\n- Python Version [e.g. 3.7.9]\n- PyTorch Version [e.g. 1.6.0]\n- cudatoolkit Version [e.g. 9.2, none]\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/bug_report_CN.md",
    "content": "---\nname: Bug 报告\nabout: 提交一份 bug 报告，帮助 RecBole-GNN 变得更好\ntitle: \"[\\U0001F41BBUG] 用一句话描述您的问题。\"\nlabels: bug\nassignees: ''\n\n---\n\n**描述这个 bug**\n对 bug 作一个清晰简明的描述。\n\n**如何复现**\n复现这个 bug 的步骤：\n1. 您引入的额外 yaml 文件\n2. 您的代码\n3. 您的运行脚本\n\n**预期**\n对您的预期作清晰简明的描述。\n\n**屏幕截图**\n添加屏幕截图以帮助解释您的问题。（可选）\n\n**链接**\n添加能够复现 bug 的代码链接，如 Colab 或者其他在线 Jupyter 平台。（可选）\n\n**实验环境（请补全下列信息）：**\n- 操作系统: [如 Linux, macOS 或 Windows]\n- RecBole 版本 [如 0.1.0]\n- Python 版本 [如 3.7.9]\n- PyTorch 版本 [如 1.6.0]\n- cudatoolkit 版本 [如 9.2, none]\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature_request.md",
    "content": "---\nname: Feature request\nabout: Suggest an idea for this project\ntitle: \"[\\U0001F4A1SUG] Description of what you want to happen in one sentence\"\nlabels: enhancement\nassignees: ''\n\n---\n\n**Is your feature request related to a problem? Please describe.**\nA clear and concise description of what the problem is. Ex. I'm always frustrated when [...]\n\n**Describe the solution you'd like**\nA clear and concise description of what you want to happen.\n\n**Describe alternatives you've considered**\nA clear and concise description of any alternative solutions or features you've considered.\n\n**Additional context**\nAdd any other context or screenshots about the feature request here.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature_request_CN.md",
    "content": "---\nname: 请求添加新功能\nabout: 提出一个关于本项目新功能/新特性的建议\ntitle: \"[\\U0001F4A1SUG] 一句话描述您希望新增的功能或特性\"\nlabels: enhancement\nassignees: ''\n\n---\n\n**您希望添加的功能是否与某个问题相关？**\n关于这个问题的简洁清晰的描述，例如，当 [...] 时，我总是很沮丧。\n\n**描述您希望的解决方案**\n关于解决方案的简洁清晰的描述。\n\n**描述您考虑的替代方案**\n关于您考虑的，能实现这个功能的其他替代方案的简洁清晰的描述。\n\n**其他**\n您可以添加其他任何的资料、链接或者屏幕截图，以帮助我们理解这个新功能。\n"
  },
  {
    "path": ".github/workflows/python-package.yml",
    "content": "name: RecBole-GNN tests\n\n# Controls when the action will run. \non:\n  # Triggers the workflow on push or pull request events but only for the master branch\n  push:\n  pull_request:\n\n  # Allows you to run this workflow manually from the Actions tab\n  workflow_dispatch:\n\njobs:\n  build:\n\n    runs-on: ubuntu-latest\n    strategy:\n      matrix:\n        python-version: [3.9]\n        torch-version: [2.0.0]\n    defaults:\n      run:\n        shell: bash -l {0}\n\n    steps:\n    - uses: actions/checkout@v2\n    - name: Setup Miniconda\n      uses: conda-incubator/setup-miniconda@v2\n      with:\n        python-version: ${{ matrix.python-version }}\n        channels: conda-forge\n        channel-priority: true\n        auto-activate-base: true\n    # install setuptools as an interim solution for bugs in PyTorch 1.10.2 (#69904)\n    - name: Install dependencies\n      run: |\n        python -m pip install --upgrade pip\n        pip install pytest\n        pip install dgl\n        pip install torch==${{ matrix.torch-version }}+cpu -f https://download.pytorch.org/whl/torch_stable.html\n        pip install torch-scatter torch-sparse torch-cluster torch-spline-conv torch-geometric -f https://data.pyg.org/whl/torch-${{ matrix.torch-version }}+cpu.html\n        pip install recbole==1.1.1\n        conda install -c conda-forge faiss-cpu\n    # Use \"python -m pytest\" instead of \"pytest\" to fix imports\n    - name: Test model\n      run: |\n        python -m pytest -v tests/test_model.py\n"
  },
  {
    "path": ".gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\npip-wheel-metadata/\nshare/python-wheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.nox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n*.py,cover\n.hypothesis/\n.pytest_cache/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\ndb.sqlite3-journal\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# IPython\nprofile_default/\nipython_config.py\n\n# pyenv\n.python-version\n\n# pipenv\n#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.\n#   However, in case of collaboration, if having platform-specific dependencies or dependencies\n#   having no cross-platform support, pipenv may install dependencies that don't work, or not\n#   install all needed dependencies.\n#Pipfile.lock\n\n# PEP 582; used by e.g. github.com/David-OConnor/pyflow\n__pypackages__/\n\n# Celery stuff\ncelerybeat-schedule\ncelerybeat.pid\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n.dmypy.json\ndmypy.json\n\n# Pyre type checker\n.pyre/\n\n# RecBole\nlog_tensorboard/\nsaved/\ndataset/\n"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2021 RUCAIBox\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# RecBole-GNN\n\n![](asset/recbole-gnn-logo.png)\n\n-----\n\n*Updates*:\n\n* [Oct 29, 2023] Add [SSL4Rec](https://github.com/RUCAIBox/RecBole-GNN/blob/main/recbole_gnn/model/general_recommender/ssl4rec.py). (https://github.com/RUCAIBox/RecBole-GNN/pull/76, by [@downeykking](https://github.com/downeykking))\n* [Oct 23, 2023] Add sparse tensor support, accelerating LightGCN & NGCF by ~5x with ~1/6 of the GPU memory. (https://github.com/RUCAIBox/RecBole-GNN/pull/75, by [@downeykking](https://github.com/downeykking))\n* [Oct 20, 2023] Add [DirectAU](https://github.com/RUCAIBox/RecBole-GNN/blob/main/recbole_gnn/model/general_recommender/directau.py). (https://github.com/RUCAIBox/RecBole-GNN/pull/74, by [@downeykking](https://github.com/downeykking))\n* [Oct 16, 2023] Add [XSimGCL](https://github.com/RUCAIBox/RecBole-GNN/blob/main/recbole_gnn/model/general_recommender/xsimgcl.py). (https://github.com/RUCAIBox/RecBole-GNN/pull/72, by [@downeykking](https://github.com/downeykking))\n* [Apr 12, 2023] Add [LightGCL](https://github.com/RUCAIBox/RecBole-GNN/blob/main/recbole_gnn/model/general_recommender/lightgcl.py). (https://github.com/RUCAIBox/RecBole-GNN/pull/63, by [@wending0417](https://github.com/wending0417))\n* [Oct 29, 2022] Adaptation to RecBole 1.1.1. (https://github.com/RUCAIBox/RecBole-GNN/pull/53)\n* [Jun 15, 2022] Add [MultiBehaviorDataset](https://github.com/RUCAIBox/RecBole-GNN/blob/8c61463451b294dce9af2d1939a5e054f7955e0f/recbole_gnn/data/dataset.py#L145). (https://github.com/RUCAIBox/RecBole-GNN/pull/43, by [@Tokkiu](https://github.com/Tokkiu))\n\n-----\n\n**RecBole-GNN** is a library built upon [PyTorch](https://pytorch.org) and [RecBole](https://github.com/RUCAIBox/RecBole) for reproducing and developing recommendation algorithms based on graph neural networks (GNNs). 
Our library includes algorithms covering three major categories:\n* **General Recommendation** with user-item interaction graphs;\n* **Sequential Recommendation** with session/sequence graphs;\n* **Social Recommendation** with social networks.\n\n![](asset/arch.png)\n\n## Highlights\n\n* **Easy-to-use and unified API**:\n    Our library shares the same unified APIs and input format (atomic files) as RecBole.\n* **Efficient and reusable graph processing**:\n    We provide highly efficient and reusable basic datasets, dataloaders and layers for graph processing and learning.\n* **Extensive graph library**:\n    Graph neural networks from widely-used libraries such as [PyG](https://github.com/pyg-team/pytorch_geometric) are incorporated. Recently proposed graph algorithms can be easily equipped and compared with existing methods.\n\n## Requirements\n\n```\nrecbole==1.1.1\npyg>=2.0.4\npytorch>=1.7.0\npython>=3.7.0\n```\n\n> If you are using `recbole==1.0.1`, please refer to our `recbole1.0.1` branch [[link]](https://github.com/hyp1231/RecBole-GNN/tree/recbole1.0.1).\n\n## Quick-Start\n\nFrom the source code, you can run the provided script to get started with our library:\n\n```bash\npython run_recbole_gnn.py\n```\n\nIf you want to change the models or datasets, run the script with additional command-line parameters:\n\n```bash\npython run_recbole_gnn.py -m [model] -d [dataset]\n```\n\n## Implemented Models\n\nWe list the currently supported models by category:\n\n**General Recommendation**:\n\n* **[NGCF](recbole_gnn/model/general_recommender/ngcf.py)** from Wang *et al.*: [Neural Graph Collaborative Filtering](https://arxiv.org/abs/1905.08108) (SIGIR 2019).\n* **[LightGCN](recbole_gnn/model/general_recommender/lightgcn.py)** from He *et al.*: [LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation](https://arxiv.org/abs/2002.02126) (SIGIR 2020).\n* **[SSL4Rec](recbole_gnn/model/general_recommender/ssl4rec.py)** from Yao *et al.*: [Self-supervised 
Learning for Large-scale Item Recommendations](https://arxiv.org/abs/2007.12865) (CIKM 2021).\n* **[SGL](recbole_gnn/model/general_recommender/sgl.py)** from Wu *et al.*: [Self-supervised Graph Learning for Recommendation](https://arxiv.org/abs/2010.10783) (SIGIR 2021).\n* **[HMLET](recbole_gnn/model/general_recommender/hmlet.py)** from Kong *et al.*: [Linear, or Non-Linear, That is the Question!](https://arxiv.org/abs/2111.07265) (WSDM 2022).\n* **[NCL](recbole_gnn/model/general_recommender/ncl.py)** from Lin *et al.*: [Improving Graph Collaborative Filtering with Neighborhood-enriched Contrastive Learning](https://arxiv.org/abs/2202.06200) (TheWebConf 2022).\n* **[DirectAU](recbole_gnn/model/general_recommender/directau.py)** from Wang *et al.*: [Towards Representation Alignment and Uniformity in Collaborative Filtering](https://arxiv.org/abs/2206.12811) (KDD 2022).\n* **[SimGCL](recbole_gnn/model/general_recommender/simgcl.py)** from Yu *et al.*: [Are Graph Augmentations Necessary? Simple Graph Contrastive Learning for Recommendation](https://arxiv.org/abs/2112.08679) (SIGIR 2022).\n* **[XSimGCL](recbole_gnn/model/general_recommender/xsimgcl.py)** from Yu *et al.*: [XSimGCL: Towards Extremely Simple Graph Contrastive Learning for Recommendation](https://arxiv.org/abs/2209.02544) (TKDE 2023).\n* **[LightGCL](recbole_gnn/model/general_recommender/lightgcl.py)** from Cai *et al.*: [LightGCL: Simple Yet Effective Graph Contrastive Learning for Recommendation](https://arxiv.org/abs/2302.08191) (ICLR 2023).\n\n**Sequential Recommendation**:\n\n* **[SR-GNN](recbole_gnn/model/sequential_recommender/srgnn.py)** from Wu *et al.*: [Session-based Recommendation with Graph Neural Networks](https://arxiv.org/abs/1811.00855) (AAAI 2019).\n* **[GC-SAN](recbole_gnn/model/sequential_recommender/gcsan.py)** from Xu *et al.*: [Graph Contextualized Self-Attention Network for Session-based Recommendation](https://www.ijcai.org/proceedings/2019/547) (IJCAI 2019).\n* 
**[NISER+](recbole_gnn/model/sequential_recommender/niser.py)** from Gupta *et al.*: [NISER: Normalized Item and Session Representations to Handle Popularity Bias](https://arxiv.org/abs/1909.04276) (GRLA, CIKM 2019 workshop).\n* **[LESSR](recbole_gnn/model/sequential_recommender/lessr.py)** from Chen *et al.*: [Handling Information Loss of Graph Neural Networks for Session-based Recommendation](https://dl.acm.org/doi/10.1145/3394486.3403170) (KDD 2020).\n* **[TAGNN](recbole_gnn/model/sequential_recommender/tagnn.py)** from Yu *et al.*: [TAGNN: Target Attentive Graph Neural Networks for Session-based Recommendation](https://arxiv.org/abs/2005.02844) (SIGIR 2020 short).\n* **[GCE-GNN](recbole_gnn/model/sequential_recommender/gcegnn.py)** from Wang *et al.*: [Global Context Enhanced Graph Neural Networks for Session-based Recommendation](https://arxiv.org/abs/2106.05081) (SIGIR 2020).\n* **[SGNN-HN](recbole_gnn/model/sequential_recommender/sgnnhn.py)** from Pan *et al.*: [Star Graph Neural Networks for Session-based Recommendation](https://dl.acm.org/doi/10.1145/3340531.3412014) (CIKM 2020).\n\n**Social Recommendation**:\n\n> Note that datasets for social recommendation methods can be downloaded from [Social-Datasets](https://github.com/Sherry-XLL/Social-Datasets).\n\n* **[DiffNet](recbole_gnn/model/social_recommender/diffnet.py)** from Wu *et al.*: [A Neural Influence Diffusion Model for Social Recommendation](https://arxiv.org/abs/1904.10322) (SIGIR 2019).\n* **[MHCN](recbole_gnn/model/social_recommender/mhcn.py)** from Yu *et al.*: [Self-Supervised Multi-Channel Hypergraph Convolutional Network for Social Recommendation](https://doi.org/10.1145/3442381.3449844) (WWW 2021).\n* **[SEPT](recbole_gnn/model/social_recommender/sept.py)** from Yu *et al.*: [Socially-Aware Self-Supervised Tri-Training for Recommendation](https://doi.org/10.1145/3447548.3467340) (KDD 2021).\n\n## Result\n\n### Leaderboard\n\nWe carefully tune the hyper-parameters of the implemented models 
of each research field and release the corresponding leaderboards for reference:\n\n- **General** recommendation on `MovieLens-1M` dataset [[link]](results/general/ml-1m.md);\n- **Sequential** recommendation on `Diginetica` dataset [[link]](results/sequential/diginetica.md);\n- **Social** recommendation on `LastFM` dataset [[link]](results/social/lastfm.md);\n\n### Efficiency\n\nWith our sequential/session graph preprocessing technique and efficient GNN layers, we greatly speed up the training of our sequential recommenders.\n\n<img src='asset/ml-1m.png' width='25%'><img src='asset/diginetica.png' width='25%'>\n\n## The Team\n\nRecBole-GNN was initially developed and is maintained by members from [RUCAIBox](http://aibox.ruc.edu.cn/); the main developers are Yupeng Hou ([@hyp1231](https://github.com/hyp1231)), Lanling Xu ([@Sherry-XLL](https://github.com/Sherry-XLL)) and Changxin Tian ([@ChangxinTian](https://github.com/ChangxinTian)). We also thank Xinzhou ([@downeykking](https://github.com/downeykking)), Wanli ([@wending0417](https://github.com/wending0417)), and Jingqi ([@Tokkiu](https://github.com/Tokkiu)) for their great contributions! ❤️\n\n## Acknowledgement\n\nThe implementation is based on the open-source recommendation library [RecBole](https://github.com/RUCAIBox/RecBole). 
RecBole-GNN is part of [RecBole 2.0](https://github.com/RUCAIBox/RecBole2.0) now!\n\nPlease cite the following paper as the reference if you use our code or processed datasets.\n\n```bibtex\n@inproceedings{zhao2022recbole2,\n  author={Wayne Xin Zhao and Yupeng Hou and Xingyu Pan and Chen Yang and Zeyu Zhang and Zihan Lin and Jingsen Zhang and Shuqing Bian and Jiakai Tang and Wenqi Sun and Yushuo Chen and Lanling Xu and Gaowei Zhang and Zhen Tian and Changxin Tian and Shanlei Mu and Xinyan Fan and Xu Chen and Ji-Rong Wen},\n  title={RecBole 2.0: Towards a More Up-to-Date Recommendation Library},\n  booktitle = {{CIKM}},\n  year={2022}\n}\n\n@inproceedings{zhao2021recbole,\n  author    = {Wayne Xin Zhao and Shanlei Mu and Yupeng Hou and Zihan Lin and Yushuo Chen and Xingyu Pan and Kaiyuan Li and Yujie Lu and Hui Wang and Changxin Tian and  Yingqian Min and Zhichao Feng and Xinyan Fan and Xu Chen and Pengfei Wang and Wendi Ji and Yaliang Li and Xiaoling Wang and Ji{-}Rong Wen},\n  title     = {RecBole: Towards a Unified, Comprehensive and Efficient Framework for Recommendation Algorithms},\n  booktitle = {{CIKM}},\n  pages     = {4653--4664},\n  publisher = {{ACM}},\n  year      = {2021}\n}\n```\n"
  },
  {
    "path": "recbole_gnn/config.py",
    "content": "import os\nimport recbole\nfrom recbole.config.configurator import Config as RecBole_Config\nfrom recbole.utils import ModelType as RecBoleModelType\n\nfrom recbole_gnn.utils import get_model, ModelType\n\n\nclass Config(RecBole_Config):\n    def __init__(self, model=None, dataset=None, config_file_list=None, config_dict=None):\n        \"\"\"\n        Args:\n            model (str/AbstractRecommender): the model name or the model class. Defaults to None; if None, the config\n            will search the external input for the parameter 'model' as the model name or model class.\n            dataset (str): the dataset name. Defaults to None; if None, the config will search the external input\n            for the parameter 'dataset' as the dataset name.\n            config_file_list (list of str): external config files; multiple config files are allowed. Defaults to None.\n            config_dict (dict): the external parameter dictionary. Defaults to None.\n        \"\"\"\n        if recbole.__version__ == \"1.1.1\":\n            self.compatibility_settings()\n        super(Config, self).__init__(model, dataset, config_file_list, config_dict)\n\n    def compatibility_settings(self):\n        import numpy as np\n        np.bool = np.bool_\n        np.int = np.int_\n        np.float = np.float_\n        np.complex = np.complex_\n        np.object = np.object_\n        np.str = np.str_\n        np.long = np.int_\n        np.unicode = np.unicode_\n\n    def _get_model_and_dataset(self, model, dataset):\n\n        if model is None:\n            try:\n                model = self.external_config_dict['model']\n            except KeyError:\n                raise KeyError(\n                    'model needs to be specified in at least one of these ways: '\n                    '[model variable, config file, config dict, command line] '\n                )\n        if not isinstance(model, str):\n            final_model_class = model\n            final_model = model.__name__\n        else:\n            final_model = model\n            final_model_class = get_model(final_model)\n\n        if dataset is None:\n            try:\n                final_dataset = self.external_config_dict['dataset']\n            except KeyError:\n                raise KeyError(\n                    'dataset needs to be specified in at least one of these ways: '\n                    '[dataset variable, config file, config dict, command line] '\n                )\n        else:\n            final_dataset = dataset\n\n        return final_model, final_model_class, final_dataset\n\n    def _load_internal_config_dict(self, model, model_class, dataset):\n        super()._load_internal_config_dict(model, model_class, dataset)\n        current_path = os.path.dirname(os.path.realpath(__file__))\n        model_init_file = os.path.join(current_path, './properties/model/' + model + '.yaml')\n        quick_start_config_path = os.path.join(current_path, './properties/quick_start_config/')\n        sequential_base_init = os.path.join(quick_start_config_path, 'sequential_base.yaml')\n        social_base_init = os.path.join(quick_start_config_path, 'social_base.yaml')\n\n        if os.path.isfile(model_init_file):\n            self._update_internal_config_dict(model_init_file)\n\n        self.internal_config_dict['MODEL_TYPE'] = model_class.type\n        if self.internal_config_dict['MODEL_TYPE'] == RecBoleModelType.SEQUENTIAL:\n            self._update_internal_config_dict(sequential_base_init)\n        if self.internal_config_dict['MODEL_TYPE'] == ModelType.SOCIAL:\n            self._update_internal_config_dict(social_base_init)\n"
  },
  {
    "path": "recbole_gnn/data/__init__.py",
    "content": ""
  },
  {
    "path": "recbole_gnn/data/dataloader.py",
    "content": "import numpy as np\nimport torch\nfrom recbole.data.interaction import cat_interactions\nfrom recbole.data.dataloader.general_dataloader import TrainDataLoader, NegSampleEvalDataLoader, FullSortEvalDataLoader\n\nfrom recbole_gnn.data.transform import gnn_construct_transform\n\n\nclass CustomizedTrainDataLoader(TrainDataLoader):\n    def __init__(self, config, dataset, sampler, shuffle=False):\n        super().__init__(config, dataset, sampler, shuffle=shuffle)\n        if config['gnn_transform'] is not None:\n            self.transform = gnn_construct_transform(config)\n\n\nclass CustomizedNegSampleEvalDataLoader(NegSampleEvalDataLoader):\n    def __init__(self, config, dataset, sampler, shuffle=False):\n        super().__init__(config, dataset, sampler, shuffle=shuffle)\n        if config['gnn_transform'] is not None:\n            self.transform = gnn_construct_transform(config)\n\n    def collate_fn(self, index):\n        index = np.array(index)\n        if (\n            self.neg_sample_args[\"distribution\"] != \"none\"\n            and self.neg_sample_args[\"sample_num\"] != \"none\"\n        ):\n            uid_list = self.uid_list[index]\n            data_list = []\n            idx_list = []\n            positive_u = []\n            positive_i = torch.tensor([], dtype=torch.int64)\n\n            for idx, uid in enumerate(uid_list):\n                index = self.uid2index[uid]\n                data_list.append(self._neg_sampling(self._dataset[index]))\n                idx_list += [idx for i in range(self.uid2items_num[uid] * self.times)]\n                positive_u += [idx for i in range(self.uid2items_num[uid])]\n                positive_i = torch.cat(\n                    (positive_i, self._dataset[index][self.iid_field]), 0\n                )\n\n            cur_data = cat_interactions(data_list)\n            idx_list = torch.from_numpy(np.array(idx_list)).long()\n            positive_u = torch.from_numpy(np.array(positive_u)).long()\n\n     
       return self.transform(self._dataset, cur_data), idx_list, positive_u, positive_i\n        else:\n            data = self._dataset[index]\n            transformed_data = self.transform(self._dataset, data)\n            cur_data = self._neg_sampling(transformed_data)\n            return cur_data, None, None, None\n\n\nclass CustomizedFullSortEvalDataLoader(FullSortEvalDataLoader):\n    def __init__(self, config, dataset, sampler, shuffle=False):\n        super().__init__(config, dataset, sampler, shuffle=shuffle)\n        if config['gnn_transform'] is not None:\n            self.transform = gnn_construct_transform(config)\n"
  },
  {
    "path": "recbole_gnn/data/dataset.py",
    "content": "import os\nimport torch\nimport numpy as np\nimport pandas as pd\n\nfrom tqdm import tqdm\nfrom torch_geometric.nn.conv.gcn_conv import gcn_norm\nfrom torch_geometric.utils import degree\ntry:\n    from torch_sparse import SparseTensor\n    is_sparse = True\nexcept ImportError:\n    is_sparse = False\n\nfrom recbole.data.dataset import SequentialDataset\nfrom recbole.data.dataset import Dataset as RecBoleDataset\nfrom recbole.utils import set_color, FeatureSource\n\nimport recbole\nimport pickle\nfrom recbole.utils import ensure_dir\n\n\nclass GeneralGraphDataset(RecBoleDataset):\n    def __init__(self, config):\n        super().__init__(config)\n\n    if recbole.__version__ == \"1.1.1\":\n\n        def save(self):\n            \"\"\"Save this :class:`Dataset` object to :attr:`config['checkpoint_dir']`.\"\"\"\n            save_dir = self.config[\"checkpoint_dir\"]\n            ensure_dir(save_dir)\n            file = os.path.join(save_dir, f'{self.config[\"dataset\"]}-{self.__class__.__name__}.pth')\n            self.logger.info(\n                set_color(\"Saving filtered dataset into \", \"pink\") + f\"[{file}]\"\n            )\n            with open(file, \"wb\") as f:\n                pickle.dump(self, f)\n\n    @staticmethod\n    def edge_index_to_adj_t(edge_index, edge_weight, m_num_nodes, n_num_nodes):\n        adj = SparseTensor(row=edge_index[0],\n                           col=edge_index[1],\n                           value=edge_weight,\n                           sparse_sizes=(m_num_nodes, n_num_nodes))\n        return adj.t()\n\n    def get_norm_adj_mat(self, enable_sparse=False):\n        r\"\"\"Get the normalized interaction matrix of users and items.\n        Construct the square matrix from the training data and normalize it\n        using the Laplacian matrix.\n        .. math::\n            \\hat{A} = D^{-0.5} \\times A \\times D^{-0.5}\n        Returns:\n            The normalized interaction matrix in Tensor.\n        \"\"\"\n        self.is_sparse = is_sparse\n\n        row = self.inter_feat[self.uid_field]\n        col = self.inter_feat[self.iid_field] + self.user_num\n        edge_index1 = torch.stack([row, col])\n        edge_index2 = torch.stack([col, row])\n        edge_index = torch.cat([edge_index1, edge_index2], dim=1)\n        edge_weight = torch.ones(edge_index.size(1))\n        num_nodes = self.user_num + self.item_num\n\n        if enable_sparse:\n            if not is_sparse:\n                self.logger.warning(\n                    \"Failed to import `torch_sparse`; please install the version of `torch_sparse` that matches your PyTorch. Falling back to dense edge_index instead of SparseTensor in dataset.\")\n            else:\n                adj_t = self.edge_index_to_adj_t(edge_index, edge_weight, num_nodes, num_nodes)\n                adj_t = gcn_norm(adj_t, None, num_nodes, add_self_loops=False)\n                return adj_t, None\n\n        edge_index, edge_weight = gcn_norm(edge_index, edge_weight, num_nodes, add_self_loops=False)\n\n        return edge_index, edge_weight\n\n    def get_bipartite_inter_mat(self, row='user', row_norm=True):\n        r\"\"\"Get the row-normalized bipartite interaction matrix of users and items.\n        \"\"\"\n        if row == 'user':\n            row_field, col_field = self.uid_field, self.iid_field\n        else:\n            row_field, col_field = self.iid_field, self.uid_field\n\n        row = self.inter_feat[row_field]\n        col = self.inter_feat[col_field]\n        edge_index = torch.stack([row, col])\n\n        if row_norm:\n            deg = degree(edge_index[0], self.num(row_field))\n            norm_deg = 1. 
/ torch.where(deg == 0, torch.ones([1]), deg)\n            edge_weight = norm_deg[edge_index[0]]\n        else:\n            row_deg = degree(edge_index[0], self.num(row_field))\n            col_deg = degree(edge_index[1], self.num(col_field))\n\n            row_norm_deg = 1. / torch.sqrt(torch.where(row_deg == 0, torch.ones([1]), row_deg))\n            col_norm_deg = 1. / torch.sqrt(torch.where(col_deg == 0, torch.ones([1]), col_deg))\n\n            edge_weight = row_norm_deg[edge_index[0]] * col_norm_deg[edge_index[1]]\n\n        return edge_index, edge_weight\n\n\nclass SessionGraphDataset(SequentialDataset):\n    def __init__(self, config):\n        super().__init__(config)\n\n    def session_graph_construction(self):\n        # Default session graph dataset follows the graph construction operator like SR-GNN.\n        self.logger.info('Constructing session graphs.')\n        item_seq = self.inter_feat[self.item_id_list_field]\n        item_seq_len = self.inter_feat[self.item_list_length_field]\n        x = []\n        edge_index = []\n        alias_inputs = []\n\n        for i, seq in enumerate(tqdm(list(torch.chunk(item_seq, item_seq.shape[0])))):\n            seq, idx = torch.unique(seq, return_inverse=True)\n            x.append(seq)\n            alias_seq = idx.squeeze(0)[:item_seq_len[i]]\n            alias_inputs.append(alias_seq)\n            # No repeat click\n            edge = torch.stack([alias_seq[:-1], alias_seq[1:]]).unique(dim=-1)\n            edge_index.append(edge)\n\n        self.inter_feat.interaction['graph_idx'] = torch.arange(item_seq.shape[0])\n        self.graph_objs = {\n            'x': x,\n            'edge_index': edge_index,\n            'alias_inputs': alias_inputs\n        }\n\n    def build(self):\n        datasets = super().build()\n        for dataset in datasets:\n            dataset.session_graph_construction()\n        return datasets\n\n\nclass MultiBehaviorDataset(SessionGraphDataset):\n\n    def 
session_graph_construction(self):\n        self.logger.info('Constructing multi-behavior session graphs.')\n        self.item_behavior_list_field = self.config['ITEM_BEHAVIOR_LIST_FIELD']\n        self.behavior_id_field = self.config['BEHAVIOR_ID_FIELD']\n        item_seq = self.inter_feat[self.item_id_list_field]\n        item_seq_len = self.inter_feat[self.item_list_length_field]\n        if self.item_behavior_list_field is None or self.behavior_id_field is None:\n            # To be compatible with existing datasets\n            item_behavior_seq = torch.tensor([0] * len(item_seq))\n            self.behavior_id_field = 'behavior_id'\n            self.field2id_token[self.behavior_id_field] = {0: 'interaction'}\n        else:\n            item_behavior_seq = self.inter_feat[self.item_behavior_list_field]\n\n        edge_index = []\n        alias_inputs = []\n        behaviors = torch.unique(item_behavior_seq)\n        x = {}\n        for behavior in behaviors:\n            x[behavior.item()] = []\n\n        behavior_seqs = list(torch.chunk(item_behavior_seq, item_seq.shape[0]))\n        for i, seq in enumerate(tqdm(list(torch.chunk(item_seq, item_seq.shape[0])))):\n            bseq = behavior_seqs[i]\n            for behavior in behaviors:\n                bidx = torch.where(bseq == behavior)\n                subseq = torch.index_select(seq, 0, bidx[0])\n                subseq, _ = torch.unique(subseq, return_inverse=True)\n                x[behavior.item()].append(subseq)\n\n            seq, idx = torch.unique(seq, return_inverse=True)\n            alias_seq = idx.squeeze(0)[:item_seq_len[i]]\n            alias_inputs.append(alias_seq)\n            # No repeat click\n            edge = torch.stack([alias_seq[:-1], alias_seq[1:]]).unique(dim=-1)\n            edge_index.append(edge)\n\n        nx = {}\n        for k, v in x.items():\n            behavior_name = self.id2token(self.behavior_id_field, k)\n            nx[behavior_name] = v\n\n        self.inter_feat.interaction['graph_idx'] = torch.arange(item_seq.shape[0])\n        self.graph_objs = {\n            'x': nx,\n            'edge_index': edge_index,\n            'alias_inputs': alias_inputs\n        }\n\n\nclass LESSRDataset(SessionGraphDataset):\n    def session_graph_construction(self):\n        self.logger.info('Constructing LESSR session graphs.')\n        item_seq = self.inter_feat[self.item_id_list_field]\n        item_seq_len = self.inter_feat[self.item_list_length_field]\n\n        empty_edge = torch.stack([torch.LongTensor([]), torch.LongTensor([])])\n\n        x = []\n        edge_index_EOP = []\n        edge_index_shortcut = []\n        is_last = []\n\n        for i, seq in enumerate(tqdm(list(torch.chunk(item_seq, item_seq.shape[0])))):\n            seq, idx = torch.unique(seq, return_inverse=True)\n            x.append(seq)\n            alias_seq = idx.squeeze(0)[:item_seq_len[i]]\n            edge = torch.stack([alias_seq[:-1], alias_seq[1:]])\n            edge_index_EOP.append(edge)\n            last = torch.zeros_like(seq, dtype=torch.bool)\n            last[alias_seq[-1]] = True\n            is_last.append(last)\n            sub_edges = []\n            for j in range(1, item_seq_len[i]):\n                sub_edges.append(torch.stack([alias_seq[:-j], alias_seq[j:]]))\n            shortcut_edge = torch.cat(sub_edges, dim=-1).unique(dim=-1) if len(sub_edges) > 0 else empty_edge\n            edge_index_shortcut.append(shortcut_edge)\n\n        self.inter_feat.interaction['graph_idx'] = torch.arange(item_seq.shape[0])\n        self.graph_objs = {\n            'x': x,\n            'edge_index_EOP': edge_index_EOP,\n            'edge_index_shortcut': edge_index_shortcut,\n            'is_last': is_last\n        }\n        self.node_attr = ['x', 'is_last']\n\n\nclass GCEGNNDataset(SequentialDataset):\n    def __init__(self, config):\n        super().__init__(config)\n\n    def reverse_session(self):\n        self.logger.info('Reversing 
sessions.')\n        item_seq = self.inter_feat[self.item_id_list_field]\n        item_seq_len = self.inter_feat[self.item_list_length_field]\n        for i in tqdm(range(item_seq.shape[0])):\n            item_seq[i, :item_seq_len[i]] = item_seq[i, :item_seq_len[i]].flip(dims=[0])\n\n    def bidirectional_edge(self, edge_index):\n        seq_len = edge_index.shape[1]\n        ed = edge_index.T\n        ed2 = edge_index.T.flip(dims=[1])\n        idc = ed.unsqueeze(1).expand(-1, seq_len, 2) == ed2.unsqueeze(0).expand(seq_len, -1, 2)\n        return torch.logical_and(idc[:, :, 0], idc[:, :, 1]).any(dim=-1)\n\n    def session_graph_construction(self):\n        self.logger.info('Constructing session graphs.')\n        item_seq = self.inter_feat[self.item_id_list_field]\n        item_seq_len = self.inter_feat[self.item_list_length_field]\n        x = []\n        edge_index = []\n        edge_attr = []\n        alias_inputs = []\n\n        for i, seq in enumerate(tqdm(list(torch.chunk(item_seq, item_seq.shape[0])))):\n            seq, idx = torch.unique(seq, return_inverse=True)\n            x.append(seq)\n            alias_seq = idx.squeeze(0)[:item_seq_len[i]]\n            alias_inputs.append(alias_seq)\n\n            edge_index_backward = torch.stack([alias_seq[:-1], alias_seq[1:]])\n            edge_attr_backward = torch.where(self.bidirectional_edge(edge_index_backward), 3, 1)\n            edge_backward = torch.cat([edge_index_backward, edge_attr_backward.unsqueeze(0)], dim=0)\n\n            edge_index_forward = torch.stack([alias_seq[1:], alias_seq[:-1]])\n            edge_attr_forward = torch.where(self.bidirectional_edge(edge_index_forward), 3, 2)\n            edge_forward = torch.cat([edge_index_forward, edge_attr_forward.unsqueeze(0)], dim=0)\n\n            edge_index_selfloop = torch.stack([alias_seq, alias_seq])\n            edge_selfloop = torch.cat([edge_index_selfloop, torch.zeros([1, edge_index_selfloop.shape[1]])], dim=0)\n\n            edge = 
torch.cat([edge_backward, edge_forward, edge_selfloop], dim=-1).long()\n            edge = edge.unique(dim=-1)\n\n            cur_edge_index = edge[:2]\n            cur_edge_attr = edge[2]\n            edge_index.append(cur_edge_index)\n            edge_attr.append(cur_edge_attr)\n\n        self.inter_feat.interaction['graph_idx'] = torch.arange(item_seq.shape[0])\n        self.graph_objs = {\n            'x': x,\n            'edge_index': edge_index,\n            'edge_attr': edge_attr,\n            'alias_inputs': alias_inputs\n        }\n\n    def build(self):\n        datasets = super().build()\n        for dataset in datasets:\n            dataset.reverse_session()\n            dataset.session_graph_construction()\n        return datasets\n\n\nclass SocialDataset(GeneralGraphDataset):\n    \"\"\":class:`SocialDataset` is based on :class:`~recbole_gnn.data.dataset.GeneralGraphDataset`,\n    and load ``.net``.\n\n    All users in ``.inter`` and ``.net`` are remapped into the same ID sections.\n    Users that only exist in social network will be filtered.\n\n    It also provides several interfaces to transfer ``.net`` features into coo sparse matrix,\n    csr sparse matrix, :class:`DGL.Graph` or :class:`PyG.Data`.\n\n    Attributes:\n        net_src_field (str): The same as ``config['NET_SOURCE_ID_FIELD']``.\n\n        net_tgt_field (str): The same as ``config['NET_TARGET_ID_FIELD']``.\n\n        net_feat (pandas.DataFrame): Internal data structure stores the users' social network relations.\n            It's loaded from file ``.net``.\n    \"\"\"\n\n    def __init__(self, config):\n        super().__init__(config)\n\n    def _get_field_from_config(self):\n        super()._get_field_from_config()\n\n        self.net_src_field = self.config['NET_SOURCE_ID_FIELD']\n        self.net_tgt_field = self.config['NET_TARGET_ID_FIELD']\n        self.filter_net_by_inter = self.config['filter_net_by_inter']\n        self.undirected_net = self.config['undirected_net']\n       
 self._check_field('net_src_field', 'net_tgt_field')\n\n        self.logger.debug(set_color('net_src_field', 'blue') + f': {self.net_src_field}')\n        self.logger.debug(set_color('net_tgt_field', 'blue') + f': {self.net_tgt_field}')\n\n    def _data_filtering(self):\n        super()._data_filtering()\n        if self.filter_net_by_inter:\n            self._filter_net_by_inter()\n\n    def _filter_net_by_inter(self):\n        \"\"\"Filter users in ``net_feat`` that don't occur in interactions.\n        \"\"\"\n        inter_uids = set(self.inter_feat[self.uid_field])\n        self.net_feat.drop(self.net_feat.index[~self.net_feat[self.net_src_field].isin(inter_uids)], inplace=True)\n        self.net_feat.drop(self.net_feat.index[~self.net_feat[self.net_tgt_field].isin(inter_uids)], inplace=True)\n\n    def _load_data(self, token, dataset_path):\n        super()._load_data(token, dataset_path)\n        self.net_feat = self._load_net(self.dataset_name, self.dataset_path)\n\n    @property\n    def net_num(self):\n        \"\"\"Get the number of social network records.\n\n        Returns:\n            int: Number of social network records.\n        \"\"\"\n        return len(self.net_feat)\n\n    def __str__(self):\n        info = [\n            super().__str__(),\n            set_color('The number of social network relations', 'blue') + f': {self.net_num}'\n        ]  # yapf: disable\n        return '\\n'.join(info)\n\n    def _build_feat_name_list(self):\n        feat_name_list = super()._build_feat_name_list()\n        if self.net_feat is not None:\n            feat_name_list.append('net_feat')\n        return feat_name_list\n\n    def _load_net(self, token, dataset_path):\n        self.logger.debug(set_color(f'Loading social network from [{dataset_path}].', 'green'))\n        net_path = os.path.join(dataset_path, f'{token}.net')\n        if not os.path.isfile(net_path):\n            raise ValueError(f'[{token}.net] not found in [{dataset_path}].')\n        df = 
self._load_feat(net_path, FeatureSource.NET)\n        if self.undirected_net:\n            row = df[self.net_src_field]\n            col = df[self.net_tgt_field]\n            df_net_src = pd.concat([row, col], axis=0)\n            df_net_tgt = pd.concat([col, row], axis=0)\n            df_net_src.name = self.net_src_field\n            df_net_tgt.name = self.net_tgt_field\n            df = pd.concat([df_net_src, df_net_tgt], axis=1)\n        self._check_net(df)\n        return df\n\n    def _check_net(self, net):\n        net_warn_message = 'net data requires field [{}]'\n        assert self.net_src_field in net, net_warn_message.format(self.net_src_field)\n        assert self.net_tgt_field in net, net_warn_message.format(self.net_tgt_field)\n\n    def _init_alias(self):\n        \"\"\"Add :attr:`alias_of_user_id`.\n        \"\"\"\n        self._set_alias('user_id', [self.uid_field, self.net_src_field, self.net_tgt_field])\n        self._set_alias('item_id', [self.iid_field])\n\n        for alias_name_1, alias_1 in self.alias.items():\n            for alias_name_2, alias_2 in self.alias.items():\n                if alias_name_1 != alias_name_2:\n                    intersect = np.intersect1d(alias_1, alias_2, assume_unique=True)\n                    if len(intersect) > 0:\n                        raise ValueError(\n                            f'`alias_of_{alias_name_1}` and `alias_of_{alias_name_2}` '\n                            f'should not have the same field {list(intersect)}.'\n                        )\n\n        self._rest_fields = self.token_like_fields\n        for alias_name, alias in self.alias.items():\n            isin = np.isin(alias, self._rest_fields, assume_unique=True)\n            if not isin.all():\n                raise ValueError(\n                    f'`alias_of_{alias_name}` should not contain '\n                    f'non-token-like field {list(alias[~isin])}.'\n                )\n            self._rest_fields = 
np.setdiff1d(self._rest_fields, alias, assume_unique=True)\n\n    def get_norm_net_adj_mat(self, row_norm=False):\n        r\"\"\"Get the normalized social matrix between users.\n        Construct the square matrix from the social network data and\n        normalize it with symmetric Laplacian normalization.\n        .. math::\n            \\hat{A} = D^{-0.5} \\times A \\times D^{-0.5}\n        Returns:\n            The normalized social network matrix in Tensor.\n        \"\"\"\n\n        row = self.net_feat[self.net_src_field]\n        col = self.net_feat[self.net_tgt_field]\n        edge_index = torch.stack([row, col])\n\n        deg = degree(edge_index[0], self.user_num)\n\n        if row_norm:\n            norm_deg = 1. / torch.where(deg == 0, torch.ones([1]), deg)\n            edge_weight = norm_deg[edge_index[0]]\n        else:\n            norm_deg = 1. / torch.sqrt(torch.where(deg == 0, torch.ones([1]), deg))\n            edge_weight = norm_deg[edge_index[0]] * norm_deg[edge_index[1]]\n\n        return edge_index, edge_weight\n\n    def net_matrix(self, form='coo', value_field=None):\n        \"\"\"Get sparse matrix that describes social relations between user_id and user_id.\n\n        Sparse matrix has shape (user_num, user_num).\n\n        Returns:\n            scipy.sparse: Sparse matrix in form ``coo`` or ``csr``.\n        \"\"\"\n        return self._create_sparse_matrix(self.net_feat, self.net_src_field, self.net_tgt_field, form, value_field)\n"
  },
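The SR-GNN-style construction in `session_graph_construction` above (deduplicate the item sequence, map it to local node indices, then link consecutive clicks) can be sketched in plain Python without torch; `build_session_graph` is a hypothetical helper name for illustration:

```python
def build_session_graph(item_seq):
    """Mirror the SR-GNN-style construction above on a plain Python list:
    unique node ids, an alias sequence of local indices, and deduplicated
    edges between consecutive clicks ("no repeat click")."""
    nodes = sorted(set(item_seq))  # torch.unique also returns sorted values
    index_of = {item: i for i, item in enumerate(nodes)}
    alias_seq = [index_of[item] for item in item_seq]
    # consecutive-click edges, deduplicated like .unique(dim=-1)
    edges = sorted(set(zip(alias_seq[:-1], alias_seq[1:])))
    return nodes, alias_seq, edges
```

For a session `[5, 3, 5, 2]` this yields nodes `[2, 3, 5]`, alias sequence `[2, 1, 2, 0]`, and edges `[(1, 2), (2, 0), (2, 1)]`; padding and batching are handled elsewhere.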
  {
    "path": "recbole_gnn/data/transform.py",
    "content": "from logging import getLogger\nimport torch\nfrom torch.nn.utils.rnn import pad_sequence\nfrom recbole.data.interaction import Interaction\n\n\ndef gnn_construct_transform(config):\n    if config['gnn_transform'] is None:\n        raise ValueError('config[\"gnn_transform\"] is None but trying to construct transform.')\n    str2transform = {\n        'sess_graph': SessionGraph,\n    }\n    return str2transform[config['gnn_transform']](config)\n\n\nclass SessionGraph:\n    def __init__(self, config):\n        self.logger = getLogger()\n        self.logger.info('SessionGraph Transform in DataLoader.')\n\n    def __call__(self, dataset, interaction):\n        graph_objs = dataset.graph_objs\n        index = interaction['graph_idx']\n        graph_batch = {\n            k: [graph_objs[k][_.item()] for _ in index]\n            for k in graph_objs\n        }\n        graph_batch['batch'] = []\n\n        tot_node_num = torch.ones([1], dtype=torch.long)\n        for i in range(index.shape[0]):\n            for k in graph_batch:\n                if 'edge_index' in k:\n                    graph_batch[k][i] = graph_batch[k][i] + tot_node_num\n            if 'alias_inputs' in graph_batch:\n                graph_batch['alias_inputs'][i] = graph_batch['alias_inputs'][i] + tot_node_num\n            graph_batch['batch'].append(torch.full_like(graph_batch['x'][i], i))\n            tot_node_num += graph_batch['x'][i].shape[0]\n\n        if hasattr(dataset, 'node_attr'):\n            node_attr = ['batch'] + dataset.node_attr\n        else:\n            node_attr = ['x', 'batch']\n        for k in node_attr:\n            graph_batch[k] = [torch.zeros([1], dtype=graph_batch[k][-1].dtype)] + graph_batch[k]\n\n        for k in graph_batch:\n            if k == 'alias_inputs':\n                graph_batch[k] = pad_sequence(graph_batch[k], batch_first=True)\n            else:\n                graph_batch[k] = torch.cat(graph_batch[k], dim=-1)\n\n        
interaction.update(Interaction(graph_batch))\n        return interaction\n"
  },
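In `SessionGraph.__call__` above, per-session graphs are merged into one large disconnected graph: each graph's node indices are shifted by a running offset (`tot_node_num`), which starts at 1 so that index 0 stays free as a padding node, and a `batch` vector records which graph each node came from. A dependency-free sketch of that offset bookkeeping, with a hypothetical helper and plain lists in place of tensors:

```python
def batch_graphs(node_lists, edge_lists):
    """Shift each graph's local node ids by a running offset (starting at 1
    to reserve node 0 for padding) and build the per-node batch vector."""
    batch, shifted, offset = [], [], 1
    for graph_id, (nodes, edges) in enumerate(zip(node_lists, edge_lists)):
        # every edge endpoint moves into the merged id space
        shifted.append([(s + offset, d + offset) for s, d in edges])
        # one batch entry per node, naming its source graph
        batch.extend([graph_id] * len(nodes))
        offset += len(nodes)
    return batch, shifted
```

Two graphs with 2 and 1 nodes thus occupy merged ids 1-2 and 3, exactly as the transform lays them out before `alias_inputs` is padded.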
  {
    "path": "recbole_gnn/model/abstract_recommender.py",
    "content": "from recbole.model.abstract_recommender import GeneralRecommender\nfrom recbole.utils import ModelType as RecBoleModelType\n\nfrom recbole_gnn.utils import ModelType\n\n\nclass GeneralGraphRecommender(GeneralRecommender):\n    \"\"\"This is an abstract general graph recommender. All general graph models should inherit from this class.\n    The base general graph recommender class provides the basic U-I graph dataset and parameters information.\n    \"\"\"\n    type = RecBoleModelType.GENERAL\n\n    def __init__(self, config, dataset):\n        super(GeneralGraphRecommender, self).__init__(config, dataset)\n        self.edge_index, self.edge_weight = dataset.get_norm_adj_mat(enable_sparse=config[\"enable_sparse\"])\n        self.use_sparse = config[\"enable_sparse\"] and dataset.is_sparse\n        if self.use_sparse:\n            self.edge_index, self.edge_weight = self.edge_index.to(self.device), None\n        else:\n            self.edge_index, self.edge_weight = self.edge_index.to(self.device), self.edge_weight.to(self.device)\n\n\nclass SocialRecommender(GeneralRecommender):\n    \"\"\"This is an abstract social recommender. All social graph models should inherit from this class.\n    The base social recommender class provides the basic social graph dataset and parameters information.\n    \"\"\"\n    type = ModelType.SOCIAL\n\n    def __init__(self, config, dataset):\n        super(SocialRecommender, self).__init__(config, dataset)\n"
  },
  {
    "path": "recbole_gnn/model/general_recommender/__init__.py",
    "content": "from recbole_gnn.model.general_recommender.lightgcn import LightGCN\nfrom recbole_gnn.model.general_recommender.hmlet import HMLET\nfrom recbole_gnn.model.general_recommender.ncl import NCL\nfrom recbole_gnn.model.general_recommender.ngcf import NGCF\nfrom recbole_gnn.model.general_recommender.sgl import SGL\nfrom recbole_gnn.model.general_recommender.lightgcl import LightGCL\nfrom recbole_gnn.model.general_recommender.simgcl import SimGCL\nfrom recbole_gnn.model.general_recommender.xsimgcl import XSimGCL\nfrom recbole_gnn.model.general_recommender.directau import DirectAU\nfrom recbole_gnn.model.general_recommender.ssl4rec import SSL4REC\n"
  },
  {
    "path": "recbole_gnn/model/general_recommender/directau.py",
    "content": "# r\"\"\"\n# DirectAU\n# ################################################\n# Reference:\n#     Chenyang Wang et al. \"Towards Representation Alignment and Uniformity in Collaborative Filtering.\" in KDD 2022.\n\n# Reference code:\n#     https://github.com/THUwangcy/DirectAU\n# \"\"\"\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom recbole.model.init import xavier_normal_initialization\nfrom recbole.utils import InputType\nfrom recbole.model.general_recommender import BPR\nfrom recbole_gnn.model.general_recommender import LightGCN\n\nfrom recbole_gnn.model.abstract_recommender import GeneralGraphRecommender\n\n\nclass DirectAU(GeneralGraphRecommender):\n    input_type = InputType.PAIRWISE\n\n    def __init__(self, config, dataset):\n        super(DirectAU, self).__init__(config, dataset)\n\n        # load parameters info\n        self.embedding_size = config['embedding_size']\n        self.gamma = config['gamma']\n        self.encoder_name = config['encoder']\n\n        # define encoder\n        if self.encoder_name == 'MF':\n            self.encoder = MFEncoder(config, dataset)\n        elif self.encoder_name == 'LightGCN':\n            self.encoder = LGCNEncoder(config, dataset)\n        else:\n            raise ValueError('Non-implemented Encoder.')\n\n        # storage variables for full sort evaluation acceleration\n        self.restore_user_e = None\n        self.restore_item_e = None\n\n        # parameters initialization\n        self.apply(xavier_normal_initialization)\n\n    def forward(self, user, item):\n        user_e, item_e = self.encoder(user, item)\n        return F.normalize(user_e, dim=-1), F.normalize(item_e, dim=-1)\n\n    @staticmethod\n    def alignment(x, y, alpha=2):\n        return (x - y).norm(p=2, dim=1).pow(alpha).mean()\n\n    @staticmethod\n    def uniformity(x, t=2):\n        return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()\n\n    def calculate_loss(self, 
interaction):\n        if self.restore_user_e is not None or self.restore_item_e is not None:\n            self.restore_user_e, self.restore_item_e = None, None\n\n        user = interaction[self.USER_ID]\n        item = interaction[self.ITEM_ID]\n\n        user_e, item_e = self.forward(user, item)\n        align = self.alignment(user_e, item_e)\n        uniform = self.gamma * (self.uniformity(user_e) + self.uniformity(item_e)) / 2\n\n        return align, uniform\n\n    def predict(self, interaction):\n        user = interaction[self.USER_ID]\n        item = interaction[self.ITEM_ID]\n        user_e = self.user_embedding(user)\n        item_e = self.item_embedding(item)\n        return torch.mul(user_e, item_e).sum(dim=1)\n\n    def full_sort_predict(self, interaction):\n        user = interaction[self.USER_ID]\n        if self.encoder_name == 'LightGCN':\n            if self.restore_user_e is None or self.restore_item_e is None:\n                self.restore_user_e, self.restore_item_e = self.encoder.get_all_embeddings()\n            user_e = self.restore_user_e[user]\n            all_item_e = self.restore_item_e\n        else:\n            user_e = self.encoder.user_embedding(user)\n            all_item_e = self.encoder.item_embedding.weight\n        score = torch.matmul(user_e, all_item_e.transpose(0, 1))\n        return score.view(-1)\n\n\nclass MFEncoder(BPR):\n    def __init__(self, config, dataset):\n        super(MFEncoder, self).__init__(config, dataset)\n\n    def forward(self, user_id, item_id):\n        return super().forward(user_id, item_id)\n\n    def get_all_embeddings(self):\n        user_embeddings = self.user_embedding.weight\n        item_embeddings = self.item_embedding.weight\n        return user_embeddings, item_embeddings\n\n\nclass LGCNEncoder(LightGCN):\n    def __init__(self, config, dataset):\n        super(LGCNEncoder, self).__init__(config, dataset)\n\n    def forward(self, user_id, item_id):\n        user_all_embeddings, 
item_all_embeddings = self.get_all_embeddings()\n        u_embed = user_all_embeddings[user_id]\n        i_embed = item_all_embeddings[item_id]\n        return u_embed, i_embed\n\n    def get_all_embeddings(self):\n        return super().forward()\n"
  },
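`DirectAU.alignment` and `DirectAU.uniformity` above have simple closed forms on normalized embeddings: alignment is the mean of ||u − i||^alpha over positive pairs, and uniformity is the log of the mean Gaussian kernel exp(−t·||x_i − x_j||²) over distinct pairs. A torch-free sketch on plain lists, with hypothetical helper names and the model's defaults alpha=2, t=2:

```python
import math

def alignment(users, items, alpha=2):
    # mean of ||u - i||_2^alpha over paired (user, item) embeddings
    return sum(
        math.dist(u, i) ** alpha for u, i in zip(users, items)
    ) / len(users)

def uniformity(xs, t=2):
    # log of the mean Gaussian kernel over distinct pairs, as torch.pdist
    # enumerates them (i < j)
    kernels = [
        math.exp(-t * math.dist(xs[i], xs[j]) ** 2)
        for i in range(len(xs)) for j in range(i + 1, len(xs))
    ]
    return math.log(sum(kernels) / len(kernels))
```

The total training objective is then `alignment(...) + gamma * (uniformity(users) + uniformity(items)) / 2`, matching `calculate_loss` above.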
  {
    "path": "recbole_gnn/model/general_recommender/hmlet.py",
    "content": "# @Time   : 2022/3/21\n# @Author : Yupeng Hou\n# @Email  : houyupeng@ruc.edu.cn\n\nr\"\"\"\nHMLET\n################################################\nReference:\n    Taeyong Kong et al. \"Linear, or Non-Linear, That is the Question!.\" in WSDM 2022.\n\nReference code:\n    https://github.com/qbxlvnf11/HMLET\n\"\"\"\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom recbole.model.init import xavier_uniform_initialization\nfrom recbole.model.loss import BPRLoss, EmbLoss\nfrom recbole.model.layers import activation_layer\nfrom recbole.utils import InputType\n\nfrom recbole_gnn.model.abstract_recommender import GeneralGraphRecommender\nfrom recbole_gnn.model.layers import LightGCNConv\n\n\nclass Gating_Net(nn.Module):\n    def __init__(self, embedding_dim, mlp_dims, dropout_p):\n        super(Gating_Net, self).__init__()\n        self.embedding_dim = embedding_dim\n\n        fc_layers = []\n        for i in range(len(mlp_dims)):\n            if i == 0:\n                fc = nn.Linear(embedding_dim*2, mlp_dims[i])\n                fc_layers.append(fc)\n            else:\n                fc = nn.Linear(mlp_dims[i-1], mlp_dims[i])\n                fc_layers.append(fc)\n            if i != len(mlp_dims) - 1:\n                fc_layers.append(nn.BatchNorm1d(mlp_dims[i]))\n                fc_layers.append(nn.Dropout(p=dropout_p))\n                fc_layers.append(nn.ReLU(inplace=True))\n        self.mlp = nn.Sequential(*fc_layers)\n\n    def gumbel_softmax(self, logits, temperature, hard):\n        \"\"\"Sample from the Gumbel-Softmax distribution and optionally discretize.\n        Args:\n          logits: [batch_size, n_class] unnormalized log-probs\n          temperature: non-negative scalar\n          hard: if True, take argmax, but differentiate w.r.t. 
soft sample y\n        Returns:\n          [batch_size, n_class] sample from the Gumbel-Softmax distribution.\n          If hard=True, then the returned sample will be one-hot, otherwise it will\n          be a probability distribution that sums to 1 across classes\n        \"\"\"\n        y = self.gumbel_softmax_sample(logits, temperature) ## (0.6, 0.2, 0.1,..., 0.11)\n        if hard:\n            k = logits.size(1) # k is the number of classes\n            # y_hard = tf.cast(tf.one_hot(tf.argmax(y,1),k), y.dtype)  ## (1, 0, 0, ..., 0)\n            y_hard = torch.eq(y, torch.max(y, dim=1, keepdim=True)[0]).type_as(y)\n            y = (y_hard - y).detach() + y\n        return y\n\n    def gumbel_softmax_sample(self, logits, temperature):\n        \"\"\" Draw a sample from the Gumbel-Softmax distribution\"\"\"\n        noise = self.sample_gumbel(logits)\n        y = (logits + noise) / temperature\n        return F.softmax(y, dim=1)\n\n    def sample_gumbel(self, logits):\n        \"\"\"Sample from Gumbel(0, 1)\"\"\"\n        noise = torch.rand(logits.size())\n        eps = 1e-20\n        noise.add_(eps).log_().neg_()\n        noise.add_(eps).log_().neg_()\n        return torch.Tensor(noise.float()).to(logits.device)\n\n    def forward(self, feature, temperature, hard):\n        x = self.mlp(feature)\n        out = self.gumbel_softmax(x, temperature, hard)\n        out_value = out.unsqueeze(2)\n        gating_out = out_value.repeat(1, 1, self.embedding_dim)\n        return gating_out\n\n\nclass HMLET(GeneralGraphRecommender):\n    r\"\"\"HMLET combines both linear and non-linear propagation layers for general recommendation and yields better performance.\n    \"\"\"\n    input_type = InputType.PAIRWISE\n\n    def __init__(self, config, dataset):\n        super(HMLET, self).__init__(config, dataset)\n\n        # load parameters info\n        self.latent_dim = config['embedding_size']  # int type:the embedding size of lightGCN\n        self.n_layers = config['n_layers']  # 
int type:the layer num of lightGCN\n        self.reg_weight = config['reg_weight']  # float32 type: the weight decay for l2 normalization\n        self.require_pow = config['require_pow']  # bool type: whether to require pow when regularization\n        self.gate_layer_ids = config['gate_layer_ids']  # list type: layer ids for non-linear gating\n        self.gating_mlp_dims = config['gating_mlp_dims']  # list type: list of mlp dimensions in gating module\n        self.dropout_ratio = config['dropout_ratio']  # dropout ratio for mlp in gating module\n        self.gum_temp = config['ori_temp']\n        self.logger.info(f'Model initialization, gumbel softmax temperature: {self.gum_temp}')\n\n        # define layers and loss\n        self.user_embedding = torch.nn.Embedding(num_embeddings=self.n_users, embedding_dim=self.latent_dim)\n        self.item_embedding = torch.nn.Embedding(num_embeddings=self.n_items, embedding_dim=self.latent_dim)\n        self.gcn_conv = LightGCNConv(dim=self.latent_dim)\n        self.activation = nn.ELU() if config['activation_function'] == 'elu' else activation_layer(config['activation_function'])\n        self.gating_nets = nn.ModuleList([\n            Gating_Net(self.latent_dim, self.gating_mlp_dims, self.dropout_ratio) for _ in range(len(self.gate_layer_ids))\n        ])\n\n        self.mf_loss = BPRLoss()\n        self.reg_loss = EmbLoss()\n\n        # storage variables for full sort evaluation acceleration\n        self.restore_user_e = None\n        self.restore_item_e = None\n\n        # parameters initialization\n        self.apply(xavier_uniform_initialization)\n        self.other_parameter_name = ['restore_user_e', 'restore_item_e', 'gum_temp']\n\n        for gating in self.gating_nets:\n            self._gating_freeze(gating, False)\n\n    def _gating_freeze(self, model, freeze_flag):\n        for name, child in model.named_children():\n            for param in child.parameters():\n                param.requires_grad = 
freeze_flag\n\n    def __choosing_one(self, features, gumbel_out):\n        feature = torch.sum(torch.mul(features, gumbel_out), dim=1)  # batch x embedding_dim (or batch x embedding_dim x layer_num)\n        return feature\n\n    def __where(self, idx, lst):\n        for i in range(len(lst)):\n            if lst[i] == idx:\n                return i\n        raise ValueError(f'{idx} not in {lst}.')\n\n    def get_ego_embeddings(self):\n        r\"\"\"Get the embedding of users and items and combine to an embedding matrix.\n        Returns:\n            Tensor of the embedding matrix. Shape of [n_items+n_users, embedding_dim]\n        \"\"\"\n        user_embeddings = self.user_embedding.weight\n        item_embeddings = self.item_embedding.weight\n        ego_embeddings = torch.cat([user_embeddings, item_embeddings], dim=0)\n        return ego_embeddings\n\n    def forward(self):\n        all_embeddings = self.get_ego_embeddings()\n        embeddings_list = [all_embeddings]\n        non_lin_emb_list = [all_embeddings]\n\n        for layer_idx in range(self.n_layers):\n            linear_embeddings = self.gcn_conv(all_embeddings, self.edge_index, self.edge_weight)\n            if layer_idx not in self.gate_layer_ids:\n                all_embeddings = linear_embeddings\n            else:\n                non_lin_id = self.__where(layer_idx, self.gate_layer_ids)\n                last_non_lin_emb = non_lin_emb_list[non_lin_id]\n                non_lin_embeddings = self.activation(self.gcn_conv(last_non_lin_emb, self.edge_index, self.edge_weight))\n                stack_embeddings = torch.stack([linear_embeddings, non_lin_embeddings], dim=1)\n                concat_embeddings = torch.cat((linear_embeddings, non_lin_embeddings), dim=-1)\n                gumbel_out = self.gating_nets[non_lin_id](concat_embeddings, self.gum_temp, not self.training)\n                all_embeddings = self.__choosing_one(stack_embeddings, gumbel_out)\n                
non_lin_emb_list.append(all_embeddings)\n            embeddings_list.append(all_embeddings)\n        hmlet_all_embeddings = torch.stack(embeddings_list, dim=1)\n        hmlet_all_embeddings = torch.mean(hmlet_all_embeddings, dim=1)\n\n        user_all_embeddings, item_all_embeddings = torch.split(hmlet_all_embeddings, [self.n_users, self.n_items])\n        return user_all_embeddings, item_all_embeddings\n\n    def calculate_loss(self, interaction):\n        # clear the storage variable when training\n        if self.restore_user_e is not None or self.restore_item_e is not None:\n            self.restore_user_e, self.restore_item_e = None, None\n\n        user = interaction[self.USER_ID]\n        pos_item = interaction[self.ITEM_ID]\n        neg_item = interaction[self.NEG_ITEM_ID]\n\n        user_all_embeddings, item_all_embeddings = self.forward()\n        u_embeddings = user_all_embeddings[user]\n        pos_embeddings = item_all_embeddings[pos_item]\n        neg_embeddings = item_all_embeddings[neg_item]\n\n        # calculate BPR Loss\n        pos_scores = torch.mul(u_embeddings, pos_embeddings).sum(dim=1)\n        neg_scores = torch.mul(u_embeddings, neg_embeddings).sum(dim=1)\n        mf_loss = self.mf_loss(pos_scores, neg_scores)\n\n        # calculate regularization Loss\n        u_ego_embeddings = self.user_embedding(user)\n        pos_ego_embeddings = self.item_embedding(pos_item)\n        neg_ego_embeddings = self.item_embedding(neg_item)\n\n        reg_loss = self.reg_loss(u_ego_embeddings, pos_ego_embeddings, neg_ego_embeddings, require_pow=self.require_pow)\n        loss = mf_loss + self.reg_weight * reg_loss\n\n        return loss\n\n    def predict(self, interaction):\n        user = interaction[self.USER_ID]\n        item = interaction[self.ITEM_ID]\n\n        user_all_embeddings, item_all_embeddings = self.forward()\n\n        u_embeddings = user_all_embeddings[user]\n        i_embeddings = item_all_embeddings[item]\n        scores = 
torch.mul(u_embeddings, i_embeddings).sum(dim=1)\n        return scores\n\n    def full_sort_predict(self, interaction):\n        user = interaction[self.USER_ID]\n        if self.restore_user_e is None or self.restore_item_e is None:\n            self.restore_user_e, self.restore_item_e = self.forward()\n        # get user embedding from storage variable\n        u_embeddings = self.restore_user_e[user]\n\n        # dot with all item embedding to accelerate\n        scores = torch.matmul(u_embeddings, self.restore_item_e.transpose(0, 1))\n\n        return scores.view(-1)"
  },
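The gating network above samples with the Gumbel-Softmax trick: add Gumbel(0, 1) noise `-log(-log(U))` to the logits, divide by the temperature, and softmax; `hard=True` then snaps to one-hot while routing gradients through the soft sample. The sampling step itself can be sketched without torch; this is a standalone illustration, with `rng` injectable to make the sketch deterministic:

```python
import math
import random

def gumbel_softmax_sample(logits, temperature, rng=random.random):
    """Softmax of (logits + Gumbel(0, 1) noise) / temperature."""
    eps = 1e-20  # same clamp as sample_gumbel above
    noisy = [
        (l - math.log(-math.log(rng() + eps) + eps)) / temperature
        for l in logits
    ]
    m = max(noisy)  # max-subtraction for numerical stability
    exps = [math.exp(v - m) for v in noisy]
    total = sum(exps)
    return [e / total for e in exps]
```

With a constant `rng` the noise is identical across classes and cancels, so the result reduces to a plain softmax of `logits / temperature`; with a real RNG, lower temperatures push samples toward one-hot.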
  {
    "path": "recbole_gnn/model/general_recommender/lightgcl.py",
    "content": "# -*- coding: utf-8 -*-\r\n# @Time   : 2023/04/12\r\n# @Author : Wanli Yang\r\n# @Email  : 2013774@mail.nankai.edu.cn\r\n\r\nr\"\"\"\r\nLightGCL\r\n################################################\r\nReference:\r\n    Xuheng Cai et al. \"LightGCL: Simple Yet Effective Graph Contrastive Learning for Recommendation\" in ICLR 2023.\r\n\r\nReference code:\r\n    https://github.com/HKUDS/LightGCL\r\n\"\"\"\r\n\r\nimport numpy as np\r\nimport scipy.sparse as sp\r\nimport torch\r\nimport torch.nn as nn\r\nfrom recbole.model.abstract_recommender import GeneralRecommender\r\nfrom recbole.model.init import xavier_uniform_initialization\r\nfrom recbole.model.loss import EmbLoss\r\nfrom recbole.utils import InputType\r\nimport torch.nn.functional as F\r\n\r\n\r\nclass LightGCL(GeneralRecommender):\r\n    r\"\"\"LightGCL is a GCN-based recommender model.\r\n\r\n    LightGCL guides graph augmentation by singular value decomposition (SVD) to not only\r\n    distill the useful information of user-item interactions but also inject the global\r\n    collaborative context into the representation alignment of contrastive learning.\r\n\r\n    We implement the model following the original author with a pairwise training mode.\r\n    \"\"\"\r\n    input_type = InputType.PAIRWISE\r\n\r\n    def __init__(self, config, dataset):\r\n        super(LightGCL, self).__init__(config, dataset)\r\n        self._user = dataset.inter_feat[dataset.uid_field]\r\n        self._item = dataset.inter_feat[dataset.iid_field]\r\n\r\n        # load parameters info\r\n        self.embed_dim = config[\"embedding_size\"]\r\n        self.n_layers = config[\"n_layers\"]\r\n        self.dropout = config[\"dropout\"]\r\n        self.temp = config[\"temp\"]\r\n        self.lambda_1 = config[\"lambda1\"]\r\n        self.lambda_2 = config[\"lambda2\"]\r\n        self.q = config[\"q\"]\r\n        self.act = nn.LeakyReLU(0.5)\r\n        self.reg_loss = EmbLoss()\r\n\r\n        # get the normalized adjust 
matrix\r\n        self.adj_norm = self.coo2tensor(self.create_adjust_matrix())\r\n\r\n        # perform svd reconstruction\r\n        svd_u, s, svd_v = torch.svd_lowrank(self.adj_norm, q=self.q)\r\n        self.u_mul_s = svd_u @ (torch.diag(s))\r\n        self.v_mul_s = svd_v @ (torch.diag(s))\r\n        del s\r\n        self.ut = svd_u.T\r\n        self.vt = svd_v.T\r\n\r\n        self.E_u_0 = nn.Parameter(nn.init.xavier_uniform_(torch.empty(self.n_users, self.embed_dim)))\r\n        self.E_i_0 = nn.Parameter(nn.init.xavier_uniform_(torch.empty(self.n_items, self.embed_dim)))\r\n        self.E_u_list = [None] * (self.n_layers + 1)\r\n        self.E_i_list = [None] * (self.n_layers + 1)\r\n        self.E_u_list[0] = self.E_u_0\r\n        self.E_i_list[0] = self.E_i_0\r\n        self.Z_u_list = [None] * (self.n_layers + 1)\r\n        self.Z_i_list = [None] * (self.n_layers + 1)\r\n        self.G_u_list = [None] * (self.n_layers + 1)\r\n        self.G_i_list = [None] * (self.n_layers + 1)\r\n        self.G_u_list[0] = self.E_u_0\r\n        self.G_i_list[0] = self.E_i_0\r\n\r\n        self.E_u = None\r\n        self.E_i = None\r\n        self.restore_user_e = None\r\n        self.restore_item_e = None\r\n\r\n        self.apply(xavier_uniform_initialization)\r\n        self.other_parameter_name = ['restore_user_e', 'restore_item_e']\r\n\r\n    def create_adjust_matrix(self):\r\n        r\"\"\"Get the normalized interaction matrix of users and items.\r\n\r\n        Returns:\r\n            coo_matrix of the normalized interaction matrix.\r\n        \"\"\"\r\n        ratings = np.ones_like(self._user, dtype=np.float32)\r\n        matrix = sp.csr_matrix(\r\n            (ratings, (self._user, self._item)),\r\n            shape=(self.n_users, self.n_items),\r\n        ).tocoo()\r\n        rowD = np.squeeze(np.array(matrix.sum(1)), axis=1)\r\n        colD = np.squeeze(np.array(matrix.sum(0)), axis=0)\r\n        for i in range(len(matrix.data)):\r\n            matrix.data[i] = 
matrix.data[i] / pow(rowD[matrix.row[i]] * colD[matrix.col[i]], 0.5)\r\n        return matrix\r\n\r\n    def coo2tensor(self, matrix: sp.coo_matrix):\r\n        r\"\"\"Convert coo_matrix to tensor.\r\n\r\n        Args:\r\n            matrix (scipy.coo_matrix): Sparse matrix to be converted.\r\n\r\n        Returns:\r\n            torch.sparse.FloatTensor: Transformed sparse matrix.\r\n        \"\"\"\r\n        indices = torch.from_numpy(\r\n            np.vstack((matrix.row, matrix.col)).astype(np.int64))\r\n        values = torch.from_numpy(matrix.data)\r\n        shape = torch.Size(matrix.shape)\r\n        x = torch.sparse.FloatTensor(indices, values, shape).coalesce().to(self.device)\r\n        return x\r\n\r\n    def sparse_dropout(self, matrix, dropout):\r\n        if dropout == 0.0:\r\n            return matrix\r\n        indices = matrix.indices()\r\n        values = F.dropout(matrix.values(), p=dropout)\r\n        size = matrix.size()\r\n        return torch.sparse.FloatTensor(indices, values, size)\r\n\r\n    def forward(self):\r\n        for layer in range(1, self.n_layers + 1):\r\n            # GNN propagation\r\n            self.Z_u_list[layer] = torch.spmm(self.sparse_dropout(self.adj_norm, self.dropout),\r\n                                              self.E_i_list[layer - 1])\r\n            self.Z_i_list[layer] = torch.spmm(self.sparse_dropout(self.adj_norm, self.dropout).transpose(0, 1),\r\n                                              self.E_u_list[layer - 1])\r\n            # aggregate\r\n            self.E_u_list[layer] = self.Z_u_list[layer]\r\n            self.E_i_list[layer] = self.Z_i_list[layer]\r\n\r\n        # aggregate across layer\r\n        self.E_u = sum(self.E_u_list)\r\n        self.E_i = sum(self.E_i_list)\r\n\r\n        return self.E_u, self.E_i\r\n\r\n    def calculate_loss(self, interaction):\r\n        if self.restore_user_e is not None or self.restore_item_e is not None:\r\n            self.restore_user_e, self.restore_item_e = 
None, None\r\n\r\n        user_list = interaction[self.USER_ID]\r\n        pos_item_list = interaction[self.ITEM_ID]\r\n        neg_item_list = interaction[self.NEG_ITEM_ID]\r\n        E_u_norm, E_i_norm = self.forward()\r\n        bpr_loss = self.calc_bpr_loss(E_u_norm, E_i_norm, user_list, pos_item_list, neg_item_list)\r\n        ssl_loss = self.calc_ssl_loss(E_u_norm, E_i_norm, user_list, pos_item_list)\r\n        total_loss = bpr_loss + ssl_loss\r\n        return total_loss\r\n\r\n    def calc_bpr_loss(self, E_u_norm, E_i_norm, user_list, pos_item_list, neg_item_list):\r\n        r\"\"\"Calculate the pairwise Bayesian Personalized Ranking (BPR) loss and parameter regularization loss.\r\n\r\n        Args:\r\n            E_u_norm (torch.Tensor): Ego embedding of all users after forwarding.\r\n            E_i_norm (torch.Tensor): Ego embedding of all items after forwarding.\r\n            user_list (torch.Tensor): List of user IDs.\r\n            pos_item_list (torch.Tensor): List of positive examples.\r\n            neg_item_list (torch.Tensor): List of negative examples.\r\n\r\n        Returns:\r\n            torch.Tensor: Loss of the BPR task and parameter regularization.\r\n        \"\"\"\r\n        u_e = E_u_norm[user_list]\r\n        pi_e = E_i_norm[pos_item_list]\r\n        ni_e = E_i_norm[neg_item_list]\r\n        pos_scores = torch.mul(u_e, pi_e).sum(dim=1)\r\n        neg_scores = torch.mul(u_e, ni_e).sum(dim=1)\r\n        loss1 = -(pos_scores - neg_scores).sigmoid().log().mean()\r\n\r\n        # reg loss\r\n        loss_reg = 0\r\n        for param in self.parameters():\r\n            loss_reg += param.norm(2).square()\r\n        loss_reg *= self.lambda_2\r\n        return loss1 + loss_reg\r\n\r\n    def calc_ssl_loss(self, E_u_norm, E_i_norm, user_list, pos_item_list):\r\n        r\"\"\"Calculate the loss of self-supervised tasks.\r\n\r\n        Args:\r\n            E_u_norm (torch.Tensor): Ego embedding of all users in the original graph after forwarding.\r\n            E_i_norm (torch.Tensor): Ego embedding of all items in the original graph after forwarding.\r\n            user_list (torch.Tensor): List of user IDs.\r\n            pos_item_list (torch.Tensor): List of positive examples.\r\n\r\n        Returns:\r\n            torch.Tensor: Loss of self-supervised tasks.\r\n        \"\"\"\r\n        # calculate G_u_norm and G_i_norm\r\n        for layer in range(1, self.n_layers + 1):\r\n            # svd_adj propagation\r\n            vt_ei = self.vt @ self.E_i_list[layer - 1]\r\n            self.G_u_list[layer] = self.u_mul_s @ vt_ei\r\n            ut_eu = self.ut @ self.E_u_list[layer - 1]\r\n            self.G_i_list[layer] = self.v_mul_s @ ut_eu\r\n\r\n        # aggregate across layers\r\n        G_u_norm = sum(self.G_u_list)\r\n        G_i_norm = sum(self.G_i_list)\r\n\r\n        neg_score = torch.log(torch.exp(G_u_norm[user_list] @ E_u_norm.T / self.temp).sum(1) + 1e-8).mean()\r\n        neg_score += torch.log(torch.exp(G_i_norm[pos_item_list] @ E_i_norm.T / self.temp).sum(1) + 1e-8).mean()\r\n        pos_score = (torch.clamp((G_u_norm[user_list] * E_u_norm[user_list]).sum(1) / self.temp, -5.0, 5.0)).mean() + (\r\n            torch.clamp((G_i_norm[pos_item_list] * E_i_norm[pos_item_list]).sum(1) / self.temp, -5.0, 5.0)).mean()\r\n        ssl_loss = -pos_score + neg_score\r\n        return self.lambda_1 * ssl_loss\r\n\r\n    def predict(self, interaction):\r\n        if self.restore_user_e is None or self.restore_item_e is None:\r\n            self.restore_user_e, self.restore_item_e = self.forward()\r\n        user = self.restore_user_e[interaction[self.USER_ID]]\r\n        item = self.restore_item_e[interaction[self.ITEM_ID]]\r\n        return torch.sum(user * item, dim=1)\r\n\r\n    def full_sort_predict(self, interaction):\r\n        if self.restore_user_e is None or self.restore_item_e is None:\r\n            self.restore_user_e, self.restore_item_e = self.forward()\r\n        user = 
self.restore_user_e[interaction[self.USER_ID]]\r\n        return user.matmul(self.restore_item_e.T)\r\n"
  },
  {
    "path": "recbole_gnn/model/general_recommender/lightgcn.py",
    "content": "# @Time   : 2022/3/8\n# @Author : Lanling Xu\n# @Email  : xulanling_sherry@163.com\n\nr\"\"\"\nLightGCN\n################################################\nReference:\n    Xiangnan He et al. \"LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation.\" in SIGIR 2020.\n\nReference code:\n    https://github.com/kuandeng/LightGCN\n\"\"\"\n\nimport numpy as np\nimport torch\n\nfrom recbole.model.init import xavier_uniform_initialization\nfrom recbole.model.loss import BPRLoss, EmbLoss\nfrom recbole.utils import InputType\n\nfrom recbole_gnn.model.abstract_recommender import GeneralGraphRecommender\nfrom recbole_gnn.model.layers import LightGCNConv\n\n\nclass LightGCN(GeneralGraphRecommender):\n    r\"\"\"LightGCN is a GCN-based recommender model, implemented via PyG.\n    LightGCN includes only the most essential component in GCN — neighborhood aggregation — for\n    collaborative filtering. Specifically, LightGCN learns user and item embeddings by linearly \n    propagating them on the user-item interaction graph, and uses the weighted sum of the embeddings\n    learned at all layers as the final embedding.\n    We implement the model following the original author with a pairwise training mode.\n    \"\"\"\n    input_type = InputType.PAIRWISE\n\n    def __init__(self, config, dataset):\n        super(LightGCN, self).__init__(config, dataset)\n\n        # load parameters info\n        self.latent_dim = config['embedding_size']  # int type:the embedding size of lightGCN\n        self.n_layers = config['n_layers']  # int type:the layer num of lightGCN\n        self.reg_weight = config['reg_weight']  # float32 type: the weight decay for l2 normalization\n        self.require_pow = config['require_pow']  # bool type: whether to require pow when regularization\n\n        # define layers and loss\n        self.user_embedding = torch.nn.Embedding(num_embeddings=self.n_users, embedding_dim=self.latent_dim)\n        self.item_embedding = 
torch.nn.Embedding(num_embeddings=self.n_items, embedding_dim=self.latent_dim)\n        self.gcn_conv = LightGCNConv(dim=self.latent_dim)\n        self.mf_loss = BPRLoss()\n        self.reg_loss = EmbLoss()\n\n        # storage variables for full sort evaluation acceleration\n        self.restore_user_e = None\n        self.restore_item_e = None\n\n        # parameters initialization\n        self.apply(xavier_uniform_initialization)\n        self.other_parameter_name = ['restore_user_e', 'restore_item_e']\n\n    def get_ego_embeddings(self):\n        r\"\"\"Get the embedding of users and items and combine to an embedding matrix.\n        Returns:\n            Tensor of the embedding matrix. Shape of [n_items+n_users, embedding_dim]\n        \"\"\"\n        user_embeddings = self.user_embedding.weight\n        item_embeddings = self.item_embedding.weight\n        ego_embeddings = torch.cat([user_embeddings, item_embeddings], dim=0)\n        return ego_embeddings\n\n    def forward(self):\n        all_embeddings = self.get_ego_embeddings()\n        embeddings_list = [all_embeddings]\n\n        for layer_idx in range(self.n_layers):\n            all_embeddings = self.gcn_conv(all_embeddings, self.edge_index, self.edge_weight)\n            embeddings_list.append(all_embeddings)\n        lightgcn_all_embeddings = torch.stack(embeddings_list, dim=1)\n        lightgcn_all_embeddings = torch.mean(lightgcn_all_embeddings, dim=1)\n\n        user_all_embeddings, item_all_embeddings = torch.split(lightgcn_all_embeddings, [self.n_users, self.n_items])\n        return user_all_embeddings, item_all_embeddings\n\n    def calculate_loss(self, interaction):\n        # clear the storage variable when training\n        if self.restore_user_e is not None or self.restore_item_e is not None:\n            self.restore_user_e, self.restore_item_e = None, None\n\n        user = interaction[self.USER_ID]\n        pos_item = interaction[self.ITEM_ID]\n        neg_item = 
interaction[self.NEG_ITEM_ID]\n\n        user_all_embeddings, item_all_embeddings = self.forward()\n        u_embeddings = user_all_embeddings[user]\n        pos_embeddings = item_all_embeddings[pos_item]\n        neg_embeddings = item_all_embeddings[neg_item]\n\n        # calculate BPR Loss\n        pos_scores = torch.mul(u_embeddings, pos_embeddings).sum(dim=1)\n        neg_scores = torch.mul(u_embeddings, neg_embeddings).sum(dim=1)\n        mf_loss = self.mf_loss(pos_scores, neg_scores)\n\n        # calculate regularization Loss\n        u_ego_embeddings = self.user_embedding(user)\n        pos_ego_embeddings = self.item_embedding(pos_item)\n        neg_ego_embeddings = self.item_embedding(neg_item)\n\n        reg_loss = self.reg_loss(u_ego_embeddings, pos_ego_embeddings, neg_ego_embeddings, require_pow=self.require_pow)\n        loss = mf_loss + self.reg_weight * reg_loss\n\n        return loss\n\n    def predict(self, interaction):\n        user = interaction[self.USER_ID]\n        item = interaction[self.ITEM_ID]\n\n        user_all_embeddings, item_all_embeddings = self.forward()\n\n        u_embeddings = user_all_embeddings[user]\n        i_embeddings = item_all_embeddings[item]\n        scores = torch.mul(u_embeddings, i_embeddings).sum(dim=1)\n        return scores\n\n    def full_sort_predict(self, interaction):\n        user = interaction[self.USER_ID]\n        if self.restore_user_e is None or self.restore_item_e is None:\n            self.restore_user_e, self.restore_item_e = self.forward()\n        # get user embedding from storage variable\n        u_embeddings = self.restore_user_e[user]\n\n        # dot with all item embedding to accelerate\n        scores = torch.matmul(u_embeddings, self.restore_item_e.transpose(0, 1))\n\n        return scores.view(-1)\n"
  },
  {
    "path": "recbole_gnn/model/general_recommender/ncl.py",
    "content": "# -*- coding: utf-8 -*-\nr\"\"\"\nNCL\n################################################\nReference:\n    Zihan Lin*, Changxin Tian*, Yupeng Hou*, Wayne Xin Zhao. \"Improving Graph Collaborative Filtering with Neighborhood-enriched Contrastive Learning.\" in WWW 2022.\n\"\"\"\n\nimport torch\nimport torch.nn.functional as F\n\nfrom recbole.model.init import xavier_uniform_initialization\nfrom recbole.model.loss import BPRLoss, EmbLoss\nfrom recbole.utils import InputType\n\nfrom recbole_gnn.model.abstract_recommender import GeneralGraphRecommender\nfrom recbole_gnn.model.layers import LightGCNConv\n\n\nclass NCL(GeneralGraphRecommender):\n    input_type = InputType.PAIRWISE\n\n    def __init__(self, config, dataset):\n        super(NCL, self).__init__(config, dataset)\n\n        # load parameters info\n        self.latent_dim = config['embedding_size']  # int type: the embedding size of the base model\n        self.n_layers = config['n_layers']          # int type: the layer num of the base model\n        self.reg_weight = config['reg_weight']      # float32 type: the weight decay for l2 normalization\n\n        self.ssl_temp = config['ssl_temp']\n        self.ssl_reg = config['ssl_reg']\n        self.hyper_layers = config['hyper_layers']\n\n        self.alpha = config['alpha']\n\n        self.proto_reg = config['proto_reg']\n        self.k = config['num_clusters']\n\n        # define layers and loss\n        self.user_embedding = torch.nn.Embedding(num_embeddings=self.n_users, embedding_dim=self.latent_dim)\n        self.item_embedding = torch.nn.Embedding(num_embeddings=self.n_items, embedding_dim=self.latent_dim)\n        self.gcn_conv = LightGCNConv(dim=self.latent_dim)\n        self.mf_loss = BPRLoss()\n        self.reg_loss = EmbLoss()\n\n        # storage variables for full sort evaluation acceleration\n        self.restore_user_e = None\n        self.restore_item_e = None\n\n        # parameters initialization\n        
self.apply(xavier_uniform_initialization)\n        self.other_parameter_name = ['restore_user_e', 'restore_item_e']\n\n        self.user_centroids = None\n        self.user_2cluster = None\n        self.item_centroids = None\n        self.item_2cluster = None\n\n    def e_step(self):\n        user_embeddings = self.user_embedding.weight.detach().cpu().numpy()\n        item_embeddings = self.item_embedding.weight.detach().cpu().numpy()\n        self.user_centroids, self.user_2cluster = self.run_kmeans(user_embeddings)\n        self.item_centroids, self.item_2cluster = self.run_kmeans(item_embeddings)\n\n    def run_kmeans(self, x):\n        \"\"\"Run K-means algorithm to get k clusters of the input tensor x\n        \"\"\"\n        import faiss\n        kmeans = faiss.Kmeans(d=self.latent_dim, k=self.k, gpu=True)\n        kmeans.train(x)\n        cluster_cents = kmeans.centroids\n\n        _, I = kmeans.index.search(x, 1)\n\n        # convert to cuda Tensors for broadcast\n        centroids = torch.Tensor(cluster_cents).to(self.device)\n        centroids = F.normalize(centroids, p=2, dim=1)\n\n        node2cluster = torch.LongTensor(I).squeeze().to(self.device)\n        return centroids, node2cluster\n\n    def get_ego_embeddings(self):\n        r\"\"\"Get the embedding of users and items and combine to an embedding matrix.\n        Returns:\n            Tensor of the embedding matrix. 
Shape of [n_items+n_users, embedding_dim]\n        \"\"\"\n        user_embeddings = self.user_embedding.weight\n        item_embeddings = self.item_embedding.weight\n        ego_embeddings = torch.cat([user_embeddings, item_embeddings], dim=0)\n        return ego_embeddings\n\n    def forward(self):\n        all_embeddings = self.get_ego_embeddings()\n        embeddings_list = [all_embeddings]\n        for layer_idx in range(max(self.n_layers, self.hyper_layers * 2)):\n            all_embeddings = self.gcn_conv(all_embeddings, self.edge_index, self.edge_weight)\n            embeddings_list.append(all_embeddings)\n\n        lightgcn_all_embeddings = torch.stack(embeddings_list[:self.n_layers + 1], dim=1)\n        lightgcn_all_embeddings = torch.mean(lightgcn_all_embeddings, dim=1)\n\n        user_all_embeddings, item_all_embeddings = torch.split(lightgcn_all_embeddings, [self.n_users, self.n_items])\n        return user_all_embeddings, item_all_embeddings, embeddings_list\n\n    def ProtoNCE_loss(self, node_embedding, user, item):\n        user_embeddings_all, item_embeddings_all = torch.split(node_embedding, [self.n_users, self.n_items])\n\n        user_embeddings = user_embeddings_all[user]     # [B, e]\n        norm_user_embeddings = F.normalize(user_embeddings)\n\n        user2cluster = self.user_2cluster[user]     # [B,]\n        user2centroids = self.user_centroids[user2cluster]   # [B, e]\n        pos_score_user = torch.mul(norm_user_embeddings, user2centroids).sum(dim=1)\n        pos_score_user = torch.exp(pos_score_user / self.ssl_temp)\n        ttl_score_user = torch.matmul(norm_user_embeddings, self.user_centroids.transpose(0, 1))\n        ttl_score_user = torch.exp(ttl_score_user / self.ssl_temp).sum(dim=1)\n\n        proto_nce_loss_user = -torch.log(pos_score_user / ttl_score_user).sum()\n\n        item_embeddings = item_embeddings_all[item]\n        norm_item_embeddings = F.normalize(item_embeddings)\n\n        item2cluster = self.item_2cluster[item]  
# [B, ]\n        item2centroids = self.item_centroids[item2cluster]  # [B, e]\n        pos_score_item = torch.mul(norm_item_embeddings, item2centroids).sum(dim=1)\n        pos_score_item = torch.exp(pos_score_item / self.ssl_temp)\n        ttl_score_item = torch.matmul(norm_item_embeddings, self.item_centroids.transpose(0, 1))\n        ttl_score_item = torch.exp(ttl_score_item / self.ssl_temp).sum(dim=1)\n        proto_nce_loss_item = -torch.log(pos_score_item / ttl_score_item).sum()\n\n        proto_nce_loss = self.proto_reg * (proto_nce_loss_user + proto_nce_loss_item)\n        return proto_nce_loss\n\n    def ssl_layer_loss(self, current_embedding, previous_embedding, user, item):\n        current_user_embeddings, current_item_embeddings = torch.split(current_embedding, [self.n_users, self.n_items])\n        previous_user_embeddings_all, previous_item_embeddings_all = torch.split(previous_embedding, [self.n_users, self.n_items])\n\n        current_user_embeddings = current_user_embeddings[user]\n        previous_user_embeddings = previous_user_embeddings_all[user]\n        norm_user_emb1 = F.normalize(current_user_embeddings)\n        norm_user_emb2 = F.normalize(previous_user_embeddings)\n        norm_all_user_emb = F.normalize(previous_user_embeddings_all)\n        pos_score_user = torch.mul(norm_user_emb1, norm_user_emb2).sum(dim=1)\n        ttl_score_user = torch.matmul(norm_user_emb1, norm_all_user_emb.transpose(0, 1))\n        pos_score_user = torch.exp(pos_score_user / self.ssl_temp)\n        ttl_score_user = torch.exp(ttl_score_user / self.ssl_temp).sum(dim=1)\n\n        ssl_loss_user = -torch.log(pos_score_user / ttl_score_user).sum()\n\n        current_item_embeddings = current_item_embeddings[item]\n        previous_item_embeddings = previous_item_embeddings_all[item]\n        norm_item_emb1 = F.normalize(current_item_embeddings)\n        norm_item_emb2 = F.normalize(previous_item_embeddings)\n        norm_all_item_emb = 
F.normalize(previous_item_embeddings_all)\n        pos_score_item = torch.mul(norm_item_emb1, norm_item_emb2).sum(dim=1)\n        ttl_score_item = torch.matmul(norm_item_emb1, norm_all_item_emb.transpose(0, 1))\n        pos_score_item = torch.exp(pos_score_item / self.ssl_temp)\n        ttl_score_item = torch.exp(ttl_score_item / self.ssl_temp).sum(dim=1)\n\n        ssl_loss_item = -torch.log(pos_score_item / ttl_score_item).sum()\n\n        ssl_loss = self.ssl_reg * (ssl_loss_user + self.alpha * ssl_loss_item)\n        return ssl_loss\n\n    def calculate_loss(self, interaction):\n        # clear the storage variable when training\n        if self.restore_user_e is not None or self.restore_item_e is not None:\n            self.restore_user_e, self.restore_item_e = None, None\n\n        user = interaction[self.USER_ID]\n        pos_item = interaction[self.ITEM_ID]\n        neg_item = interaction[self.NEG_ITEM_ID]\n\n        user_all_embeddings, item_all_embeddings, embeddings_list = self.forward()\n\n        center_embedding = embeddings_list[0]\n        context_embedding = embeddings_list[self.hyper_layers * 2]\n\n        ssl_loss = self.ssl_layer_loss(context_embedding, center_embedding, user, pos_item)\n        proto_loss = self.ProtoNCE_loss(center_embedding, user, pos_item)\n\n        u_embeddings = user_all_embeddings[user]\n        pos_embeddings = item_all_embeddings[pos_item]\n        neg_embeddings = item_all_embeddings[neg_item]\n\n        # calculate BPR Loss\n        pos_scores = torch.mul(u_embeddings, pos_embeddings).sum(dim=1)\n        neg_scores = torch.mul(u_embeddings, neg_embeddings).sum(dim=1)\n\n        mf_loss = self.mf_loss(pos_scores, neg_scores)\n\n        u_ego_embeddings = self.user_embedding(user)\n        pos_ego_embeddings = self.item_embedding(pos_item)\n        neg_ego_embeddings = self.item_embedding(neg_item)\n\n        reg_loss = self.reg_loss(u_ego_embeddings, pos_ego_embeddings, neg_ego_embeddings)\n\n        return mf_loss + 
self.reg_weight * reg_loss, ssl_loss, proto_loss\n\n    def predict(self, interaction):\n        user = interaction[self.USER_ID]\n        item = interaction[self.ITEM_ID]\n\n        user_all_embeddings, item_all_embeddings, embeddings_list = self.forward()\n\n        u_embeddings = user_all_embeddings[user]\n        i_embeddings = item_all_embeddings[item]\n        scores = torch.mul(u_embeddings, i_embeddings).sum(dim=1)\n        return scores\n\n    def full_sort_predict(self, interaction):\n        user = interaction[self.USER_ID]\n        if self.restore_user_e is None or self.restore_item_e is None:\n            self.restore_user_e, self.restore_item_e, embedding_list = self.forward()\n        # get user embedding from storage variable\n        u_embeddings = self.restore_user_e[user]\n\n        # dot with all item embedding to accelerate\n        scores = torch.matmul(u_embeddings, self.restore_item_e.transpose(0, 1))\n\n        return scores.view(-1)\n"
  },
  {
    "path": "recbole_gnn/model/general_recommender/ngcf.py",
    "content": "# @Time   : 2022/3/8\n# @Author : Changxin Tian\n# @Email  : cx.tian@outlook.com\nr\"\"\"\nNGCF\n################################################\nReference:\n    Xiang Wang et al. \"Neural Graph Collaborative Filtering.\" in SIGIR 2019.\n\nReference code:\n    https://github.com/xiangwang1223/neural_graph_collaborative_filtering\n\n\"\"\"\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch_geometric.utils import dropout_adj\n\nfrom recbole.model.init import xavier_normal_initialization\nfrom recbole.model.loss import BPRLoss, EmbLoss\nfrom recbole.utils import InputType\n\nfrom recbole_gnn.model.abstract_recommender import GeneralGraphRecommender\nfrom recbole_gnn.model.layers import BiGNNConv\n\n\nclass NGCF(GeneralGraphRecommender):\n    r\"\"\"NGCF is a model that incorporate GNN for recommendation.\n    We implement the model following the original author with a pairwise training mode.\n    \"\"\"\n    input_type = InputType.PAIRWISE\n\n    def __init__(self, config, dataset):\n        super(NGCF, self).__init__(config, dataset)\n\n        # load parameters info\n        self.embedding_size = config['embedding_size']\n        self.hidden_size_list = config['hidden_size_list']\n        self.hidden_size_list = [self.embedding_size] + self.hidden_size_list\n        self.node_dropout = config['node_dropout']\n        self.message_dropout = config['message_dropout']\n        self.reg_weight = config['reg_weight']\n\n        # define layers and loss\n        self.user_embedding = nn.Embedding(self.n_users, self.embedding_size)\n        self.item_embedding = nn.Embedding(self.n_items, self.embedding_size)\n        self.GNNlayers = torch.nn.ModuleList()\n        for input_size, output_size in zip(self.hidden_size_list[:-1], self.hidden_size_list[1:]):\n            self.GNNlayers.append(BiGNNConv(input_size, output_size))\n        self.mf_loss = BPRLoss()\n        self.reg_loss = EmbLoss()\n\n        # storage variables 
for full sort evaluation acceleration\n        self.restore_user_e = None\n        self.restore_item_e = None\n\n        # parameters initialization\n        self.apply(xavier_normal_initialization)\n        self.other_parameter_name = ['restore_user_e', 'restore_item_e']\n\n    def get_ego_embeddings(self):\n        r\"\"\"Get the embedding of users and items and combine to an embedding matrix.\n\n        Returns:\n            Tensor of the embedding matrix. Shape of (n_items+n_users, embedding_dim)\n        \"\"\"\n        user_embeddings = self.user_embedding.weight\n        item_embeddings = self.item_embedding.weight\n        ego_embeddings = torch.cat([user_embeddings, item_embeddings], dim=0)\n        return ego_embeddings\n\n    def forward(self):\n        if self.node_dropout == 0:\n            edge_index, edge_weight = self.edge_index, self.edge_weight\n        else:\n            edge_index, edge_weight = self.edge_index, self.edge_weight\n            if self.use_sparse:\n                row, col, edge_weight = edge_index.t().coo()\n                edge_index = torch.stack([row, col], 0)\n                edge_index, edge_weight = dropout_adj(edge_index=edge_index, edge_attr=edge_weight,\n                                                      p=self.node_dropout, training=self.training)\n                from torch_sparse import SparseTensor\n                edge_index = SparseTensor(row=edge_index[0], col=edge_index[1], value=edge_weight,\n                                          sparse_sizes=(self.n_users + self.n_items, self.n_users + self.n_items))\n                edge_index = edge_index.t()\n                edge_weight = None\n            else:\n                edge_index, edge_weight = dropout_adj(edge_index=edge_index, edge_attr=edge_weight,\n                                                      p=self.node_dropout, training=self.training)\n\n        all_embeddings = self.get_ego_embeddings()\n        embeddings_list = [all_embeddings]\n        for 
gnn in self.GNNlayers:\n            all_embeddings = gnn(all_embeddings, edge_index, edge_weight)\n            all_embeddings = nn.LeakyReLU(negative_slope=0.2)(all_embeddings)\n            all_embeddings = nn.Dropout(self.message_dropout)(all_embeddings)\n            all_embeddings = F.normalize(all_embeddings, p=2, dim=1)\n            embeddings_list += [all_embeddings]  # storage output embedding of each layer\n        ngcf_all_embeddings = torch.cat(embeddings_list, dim=1)\n\n        user_all_embeddings, item_all_embeddings = torch.split(ngcf_all_embeddings, [self.n_users, self.n_items])\n\n        return user_all_embeddings, item_all_embeddings\n\n    def calculate_loss(self, interaction):\n        # clear the storage variable when training\n        if self.restore_user_e is not None or self.restore_item_e is not None:\n            self.restore_user_e, self.restore_item_e = None, None\n\n        user = interaction[self.USER_ID]\n        pos_item = interaction[self.ITEM_ID]\n        neg_item = interaction[self.NEG_ITEM_ID]\n\n        user_all_embeddings, item_all_embeddings = self.forward()\n        u_embeddings = user_all_embeddings[user]\n        pos_embeddings = item_all_embeddings[pos_item]\n        neg_embeddings = item_all_embeddings[neg_item]\n\n        pos_scores = torch.mul(u_embeddings, pos_embeddings).sum(dim=1)\n        neg_scores = torch.mul(u_embeddings, neg_embeddings).sum(dim=1)\n        mf_loss = self.mf_loss(pos_scores, neg_scores)  # calculate BPR Loss\n\n        reg_loss = self.reg_loss(u_embeddings, pos_embeddings, neg_embeddings)  # L2 regularization of embeddings\n\n        return mf_loss + self.reg_weight * reg_loss\n\n    def predict(self, interaction):\n        user = interaction[self.USER_ID]\n        item = interaction[self.ITEM_ID]\n\n        user_all_embeddings, item_all_embeddings = self.forward()\n\n        u_embeddings = user_all_embeddings[user]\n        i_embeddings = item_all_embeddings[item]\n        scores = 
torch.mul(u_embeddings, i_embeddings).sum(dim=1)\n        return scores\n\n    def full_sort_predict(self, interaction):\n        user = interaction[self.USER_ID]\n        if self.restore_user_e is None or self.restore_item_e is None:\n            self.restore_user_e, self.restore_item_e = self.forward()\n        # get user embedding from storage variable\n        u_embeddings = self.restore_user_e[user]\n\n        # dot with all item embedding to accelerate\n        scores = torch.matmul(u_embeddings, self.restore_item_e.transpose(0, 1))\n\n        return scores.view(-1)\n"
  },
  {
    "path": "recbole_gnn/model/general_recommender/sgl.py",
    "content": "# -*- coding: utf-8 -*-\n# @Time   : 2022/3/8\n# @Author : Changxin Tian\n# @Email  : cx.tian@outlook.com\nr\"\"\"\nSGL\n################################################\nReference:\n    Jiancan Wu et al. \"SGL: Self-supervised Graph Learning for Recommendation\" in SIGIR 2021.\n\nReference code:\n    https://github.com/wujcan/SGL\n\"\"\"\n\nimport numpy as np\nimport torch\nimport torch.nn.functional as F\nfrom torch_geometric.utils import degree\nfrom torch_geometric.nn.conv.gcn_conv import gcn_norm\n\nfrom recbole.model.init import xavier_uniform_initialization\nfrom recbole.model.loss import EmbLoss\nfrom recbole.utils import InputType\n\nfrom recbole_gnn.model.abstract_recommender import GeneralGraphRecommender\nfrom recbole_gnn.model.layers import LightGCNConv\n\n\nclass SGL(GeneralGraphRecommender):\n    r\"\"\"SGL is a GCN-based recommender model.\n\n    SGL supplements the classical supervised task of recommendation with an auxiliary\n    self supervised task, which reinforces node representation learning via self-\n    discrimination.Specifically,SGL generates multiple views of a node, maximizing the\n    agreement between different views of the same node compared to that of other nodes.\n    SGL devises three operators to generate the views — node dropout, edge dropout, and\n    random walk — that change the graph structure in different manners.\n\n    We implement the model following the original author with a pairwise training mode.\n    \"\"\"\n    input_type = InputType.PAIRWISE\n\n    def __init__(self, config, dataset):\n        super(SGL, self).__init__(config, dataset)\n\n        # load parameters info\n        self.latent_dim = config[\"embedding_size\"]\n        self.n_layers = int(config[\"n_layers\"])\n        self.aug_type = config[\"type\"]\n        self.drop_ratio = config[\"drop_ratio\"]\n        self.ssl_tau = config[\"ssl_tau\"]\n        self.reg_weight = config[\"reg_weight\"]\n        self.ssl_weight = 
config[\"ssl_weight\"]\n\n        self._user = dataset.inter_feat[dataset.uid_field]\n        self._item = dataset.inter_feat[dataset.iid_field]\n        self.dataset = dataset\n\n        # define layers and loss\n        self.user_embedding = torch.nn.Embedding(self.n_users, self.latent_dim)\n        self.item_embedding = torch.nn.Embedding(self.n_items, self.latent_dim)\n        self.gcn_conv = LightGCNConv(dim=self.latent_dim)\n        self.reg_loss = EmbLoss()\n\n        # storage variables for full sort evaluation acceleration\n        self.restore_user_e = None\n        self.restore_item_e = None\n\n        # parameters initialization\n        self.apply(xavier_uniform_initialization)\n        self.other_parameter_name = ['restore_user_e', 'restore_item_e']\n\n    def train(self, mode: bool = True):\n        r\"\"\"Override train method of base class. The subgraph is reconstructed each time it is called.\n\n        \"\"\"\n        T = super().train(mode=mode)\n        if mode:\n            self.graph_construction()\n        return T\n\n    def graph_construction(self):\n        r\"\"\"Devise three operators to generate the views — node dropout, edge dropout, and random walk of a node.\n\n        \"\"\"\n        if self.aug_type == \"ND\" or self.aug_type == \"ED\":\n            self.sub_graph1 = [self.random_graph_augment()] * self.n_layers\n            self.sub_graph2 = [self.random_graph_augment()] * self.n_layers\n        elif self.aug_type == \"RW\":\n            self.sub_graph1 = [self.random_graph_augment() for _ in range(self.n_layers)]\n            self.sub_graph2 = [self.random_graph_augment() for _ in range(self.n_layers)]\n\n    def random_graph_augment(self):\n        def rand_sample(high, size=None, replace=True):\n            return np.random.choice(np.arange(high), size=size, replace=replace)\n\n        if self.aug_type == \"ND\":\n            drop_user = rand_sample(self.n_users, size=int(self.n_users * self.drop_ratio), replace=False)\n       
     drop_item = rand_sample(self.n_items, size=int(self.n_items * self.drop_ratio), replace=False)\n\n            mask = np.isin(self._user.numpy(), drop_user)\n            mask |= np.isin(self._item.numpy(), drop_item)\n            keep = np.where(~mask)\n\n            row = self._user[keep]\n            col = self._item[keep] + self.n_users\n\n        elif self.aug_type == \"ED\" or self.aug_type == \"RW\":\n            keep = rand_sample(len(self._user), size=int(len(self._user) * (1 - self.drop_ratio)), replace=False)\n            row = self._user[keep]\n            col = self._item[keep] + self.n_users\n\n        edge_index1 = torch.stack([row, col])\n        edge_index2 = torch.stack([col, row])\n        edge_index = torch.cat([edge_index1, edge_index2], dim=1)\n        edge_weight = torch.ones(edge_index.size(1))\n        num_nodes = self.n_users + self.n_items\n\n        if self.use_sparse:\n            adj_t = self.dataset.edge_index_to_adj_t(edge_index, edge_weight, num_nodes, num_nodes)\n            adj_t = gcn_norm(adj_t, None, num_nodes, add_self_loops=False)\n            return adj_t.to(self.device), None\n\n        edge_index, edge_weight = gcn_norm(edge_index, edge_weight, num_nodes, add_self_loops=False)\n\n        return edge_index.to(self.device), edge_weight.to(self.device)\n\n    def forward(self, graph=None):\n        all_embeddings = torch.cat([self.user_embedding.weight, self.item_embedding.weight])\n        embeddings_list = [all_embeddings]\n\n        if graph is None:  # for the original graph\n            for _ in range(self.n_layers):\n                all_embeddings = self.gcn_conv(all_embeddings, self.edge_index, self.edge_weight)\n                embeddings_list.append(all_embeddings)\n        else:  # for the augmented graph\n            for graph_edge_index, graph_edge_weight in graph:\n                all_embeddings = self.gcn_conv(all_embeddings, graph_edge_index, graph_edge_weight)\n                
embeddings_list.append(all_embeddings)\n\n        embeddings_list = torch.stack(embeddings_list, dim=1)\n        embeddings_list = torch.mean(embeddings_list, dim=1, keepdim=False)\n        user_all_embeddings, item_all_embeddings = torch.split(embeddings_list, [self.n_users, self.n_items], dim=0)\n\n        return user_all_embeddings, item_all_embeddings\n\n    def calc_bpr_loss(self, user_emd, item_emd, user_list, pos_item_list, neg_item_list):\n        r\"\"\"Calculate the pairwise Bayesian Personalized Ranking (BPR) loss and parameter regularization loss.\n\n        Args:\n            user_emd (torch.Tensor): Ego embedding of all users after forwarding.\n            item_emd (torch.Tensor): Ego embedding of all items after forwarding.\n            user_list (torch.Tensor): List of users.\n            pos_item_list (torch.Tensor): List of positive examples.\n            neg_item_list (torch.Tensor): List of negative examples.\n\n        Returns:\n            torch.Tensor: Loss of BPR tasks and parameter regularization.\n        \"\"\"\n        u_e = user_emd[user_list]\n        pi_e = item_emd[pos_item_list]\n        ni_e = item_emd[neg_item_list]\n        p_scores = torch.mul(u_e, pi_e).sum(dim=1)\n        n_scores = torch.mul(u_e, ni_e).sum(dim=1)\n\n        l1 = torch.sum(-F.logsigmoid(p_scores - n_scores))\n\n        u_e_p = self.user_embedding(user_list)\n        pi_e_p = self.item_embedding(pos_item_list)\n        ni_e_p = self.item_embedding(neg_item_list)\n\n        l2 = self.reg_loss(u_e_p, pi_e_p, ni_e_p)\n\n        return l1 + l2 * self.reg_weight\n\n    def calc_ssl_loss(self, user_list, pos_item_list, user_sub1, user_sub2, item_sub1, item_sub2):\n        r\"\"\"Calculate the loss of self-supervised tasks.\n\n        Args:\n            user_list (torch.Tensor): List of users.\n            pos_item_list (torch.Tensor): List of positive examples.\n            user_sub1 (torch.Tensor): Ego embedding of all users in the first subgraph after 
forwarding.\n            user_sub2 (torch.Tensor): Ego embedding of all users in the second subgraph after forwarding.\n            item_sub1 (torch.Tensor): Ego embedding of all items in the first subgraph after forwarding.\n            item_sub2 (torch.Tensor): Ego embedding of all items in the second subgraph after forwarding.\n\n        Returns:\n            torch.Tensor: Loss of self-supervised tasks.\n        \"\"\"\n\n        u_emd1 = F.normalize(user_sub1[user_list], dim=1)\n        u_emd2 = F.normalize(user_sub2[user_list], dim=1)\n        all_user2 = F.normalize(user_sub2, dim=1)\n        v1 = torch.sum(u_emd1 * u_emd2, dim=1)\n        v2 = u_emd1.matmul(all_user2.T)\n        v1 = torch.exp(v1 / self.ssl_tau)\n        v2 = torch.sum(torch.exp(v2 / self.ssl_tau), dim=1)\n        ssl_user = -torch.sum(torch.log(v1 / v2))\n\n        i_emd1 = F.normalize(item_sub1[pos_item_list], dim=1)\n        i_emd2 = F.normalize(item_sub2[pos_item_list], dim=1)\n        all_item2 = F.normalize(item_sub2, dim=1)\n        v3 = torch.sum(i_emd1 * i_emd2, dim=1)\n        v4 = i_emd1.matmul(all_item2.T)\n        v3 = torch.exp(v3 / self.ssl_tau)\n        v4 = torch.sum(torch.exp(v4 / self.ssl_tau), dim=1)\n        ssl_item = -torch.sum(torch.log(v3 / v4))\n\n        return (ssl_item + ssl_user) * self.ssl_weight\n\n    def calculate_loss(self, interaction):\n        if self.restore_user_e is not None or self.restore_item_e is not None:\n            self.restore_user_e, self.restore_item_e = None, None\n\n        user_list = interaction[self.USER_ID]\n        pos_item_list = interaction[self.ITEM_ID]\n        neg_item_list = interaction[self.NEG_ITEM_ID]\n\n        user_emd, item_emd = self.forward()\n        user_sub1, item_sub1 = self.forward(self.sub_graph1)\n        user_sub2, item_sub2 = self.forward(self.sub_graph2)\n\n        total_loss = self.calc_bpr_loss(user_emd, item_emd, user_list, pos_item_list, neg_item_list) + \\\n            self.calc_ssl_loss(user_list, 
pos_item_list, user_sub1, user_sub2, item_sub1, item_sub2)\n        return total_loss\n\n    def predict(self, interaction):\n        if self.restore_user_e is None or self.restore_item_e is None:\n            self.restore_user_e, self.restore_item_e = self.forward()\n\n        user = self.restore_user_e[interaction[self.USER_ID]]\n        item = self.restore_item_e[interaction[self.ITEM_ID]]\n        return torch.sum(user * item, dim=1)\n\n    def full_sort_predict(self, interaction):\n        if self.restore_user_e is None or self.restore_item_e is None:\n            self.restore_user_e, self.restore_item_e = self.forward()\n\n        user = self.restore_user_e[interaction[self.USER_ID]]\n        return user.matmul(self.restore_item_e.T)\n"
  },
  {
    "path": "recbole_gnn/model/general_recommender/simgcl.py",
    "content": "# -*- coding: utf-8 -*-\nr\"\"\"\nSimGCL\n################################################\nReference:\n    Junliang Yu, Hongzhi Yin, Xin Xia, Tong Chen, Lizhen Cui, Quoc Viet Hung Nguyen. \"Are Graph Augmentations Necessary? Simple Graph Contrastive Learning for Recommendation.\" in SIGIR 2022.\n\"\"\"\n\n\nimport torch\nimport torch.nn.functional as F\n\nfrom recbole_gnn.model.general_recommender import LightGCN\n\n\nclass SimGCL(LightGCN):\n    def __init__(self, config, dataset):\n        super(SimGCL, self).__init__(config, dataset)\n\n        self.cl_rate = config['lambda']\n        self.eps = config['eps']\n        self.temperature = config['temperature']\n\n    def forward(self, perturbed=False):\n        all_embs = self.get_ego_embeddings()\n        embeddings_list = []\n\n        for layer_idx in range(self.n_layers):\n            all_embs = self.gcn_conv(all_embs, self.edge_index, self.edge_weight)\n            if perturbed:\n                random_noise = torch.rand_like(all_embs, device=all_embs.device)\n                all_embs = all_embs + torch.sign(all_embs) * F.normalize(random_noise, dim=-1) * self.eps\n            embeddings_list.append(all_embs)\n        lightgcn_all_embeddings = torch.stack(embeddings_list, dim=1)\n        lightgcn_all_embeddings = torch.mean(lightgcn_all_embeddings, dim=1)\n\n        user_all_embeddings, item_all_embeddings = torch.split(lightgcn_all_embeddings, [self.n_users, self.n_items])\n        return user_all_embeddings, item_all_embeddings\n\n    def calculate_cl_loss(self, x1, x2):\n        x1, x2 = F.normalize(x1, dim=-1), F.normalize(x2, dim=-1)\n        pos_score = (x1 * x2).sum(dim=-1)\n        pos_score = torch.exp(pos_score / self.temperature)\n        ttl_score = torch.matmul(x1, x2.transpose(0, 1))\n        ttl_score = torch.exp(ttl_score / self.temperature).sum(dim=1)\n        return -torch.log(pos_score / ttl_score).sum()\n\n    def calculate_loss(self, interaction):\n        loss = 
super().calculate_loss(interaction)\n\n        user = torch.unique(interaction[self.USER_ID])\n        pos_item = torch.unique(interaction[self.ITEM_ID])\n\n        perturbed_user_embs_1, perturbed_item_embs_1 = self.forward(perturbed=True)\n        perturbed_user_embs_2, perturbed_item_embs_2 = self.forward(perturbed=True)\n\n        user_cl_loss = self.calculate_cl_loss(perturbed_user_embs_1[user], perturbed_user_embs_2[user])\n        item_cl_loss = self.calculate_cl_loss(perturbed_item_embs_1[pos_item], perturbed_item_embs_2[pos_item])\n\n        return loss + self.cl_rate * (user_cl_loss + item_cl_loss)\n"
  },
  {
    "path": "recbole_gnn/model/general_recommender/ssl4rec.py",
    "content": "r\"\"\"\nSSL4REC\n################################################\nReference:\n    Tiansheng Yao et al. \"Self-supervised Learning for Large-scale Item Recommendations.\" in CIKM 2021.\n\nReference code:\n    https://github.com/Coder-Yu/SELFRec/model/graph/SSL4Rec.py\n\"\"\"\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom recbole.model.loss import EmbLoss\nfrom recbole.utils import InputType\n\nfrom recbole.model.init import xavier_uniform_initialization\nfrom recbole_gnn.model.abstract_recommender import GeneralGraphRecommender\n\n\nclass SSL4REC(GeneralGraphRecommender):\n    input_type = InputType.PAIRWISE\n\n    def __init__(self, config, dataset):\n        super(SSL4REC, self).__init__(config, dataset)\n\n        # load parameters info\n        self.tau = config[\"tau\"]\n        self.reg_weight = config[\"reg_weight\"]\n        self.cl_rate = config[\"ssl_weight\"]\n        self.require_pow = config[\"require_pow\"]\n\n        self.reg_loss = EmbLoss()\n\n        self.encoder = DNN_Encoder(config, dataset)\n\n        # storage variables for full sort evaluation acceleration\n        self.restore_user_e = None\n        self.restore_item_e = None\n\n        # parameters initialization\n        self.apply(xavier_uniform_initialization)\n        self.other_parameter_name = ['restore_user_e', 'restore_item_e']\n\n    def forward(self, user, item):\n        user_e, item_e = self.encoder(user, item)\n        return user_e, item_e\n\n    def calculate_batch_softmax_loss(self, user_emb, item_emb, temperature):\n        user_emb, item_emb = F.normalize(user_emb, dim=1), F.normalize(item_emb, dim=1)\n        pos_score = (user_emb * item_emb).sum(dim=-1)\n        pos_score = torch.exp(pos_score / temperature)\n        ttl_score = torch.matmul(user_emb, item_emb.transpose(0, 1))\n        ttl_score = torch.exp(ttl_score / temperature).sum(dim=1)\n        loss = -torch.log(pos_score / ttl_score + 10e-6)\n        return 
torch.mean(loss)\n\n    def calculate_loss(self, interaction):\n        # clear the storage variable when training\n        if self.restore_user_e is not None or self.restore_item_e is not None:\n            self.restore_user_e, self.restore_item_e = None, None\n\n        user = interaction[self.USER_ID]\n        pos_item = interaction[self.ITEM_ID]\n\n        user_embeddings, item_embeddings = self.forward(user, pos_item)\n\n        rec_loss = self.calculate_batch_softmax_loss(user_embeddings, item_embeddings, self.tau)\n        cl_loss = self.encoder.calculate_cl_loss(pos_item)\n        reg_loss = self.reg_loss(user_embeddings, item_embeddings, require_pow=self.require_pow)\n\n        loss = rec_loss + self.cl_rate * cl_loss + self.reg_weight * reg_loss\n\n        return loss\n\n    def predict(self, interaction):\n        user = interaction[self.USER_ID]\n        item = interaction[self.ITEM_ID]\n\n        # forward already returns the embeddings of the given user/item ids,\n        # so no further indexing is needed\n        user_embeddings, item_embeddings = self.forward(user, item)\n        scores = torch.mul(user_embeddings, item_embeddings).sum(dim=1)\n        return scores\n\n    def full_sort_predict(self, interaction):\n        user = interaction[self.USER_ID]\n        if self.restore_user_e is None or self.restore_item_e is None:\n            self.restore_user_e, self.restore_item_e = self.forward(torch.arange(\n                self.n_users, device=self.device), torch.arange(self.n_items, device=self.device))\n        # get user embedding from storage variable\n        u_embeddings = self.restore_user_e[user]\n\n        # dot with all item embedding to accelerate\n        scores = torch.matmul(u_embeddings, self.restore_item_e.transpose(0, 1))\n\n        return scores.view(-1)\n\n\nclass DNN_Encoder(nn.Module):\n    def __init__(self, config, dataset):\n        super(DNN_Encoder, self).__init__()\n\n        self.emb_size = config[\"embedding_size\"]\n        self.drop_ratio = 
config[\"drop_ratio\"]\n        self.tau = config[\"tau\"]\n\n        self.USER_ID = config[\"USER_ID_FIELD\"]\n        self.ITEM_ID = config[\"ITEM_ID_FIELD\"]\n        self.n_users = dataset.num(self.USER_ID)\n        self.n_items = dataset.num(self.ITEM_ID)\n\n        self.user_tower = nn.Sequential(\n            nn.Linear(self.emb_size, 1024),\n            nn.ReLU(True),\n            nn.Linear(1024, 128),\n            nn.Tanh()\n        )\n        self.item_tower = nn.Sequential(\n            nn.Linear(self.emb_size, 1024),\n            nn.ReLU(True),\n            nn.Linear(1024, 128),\n            nn.Tanh()\n        )\n        self.dropout = nn.Dropout(self.drop_ratio)\n\n        self.initial_user_emb = nn.Embedding(self.n_users, self.emb_size)\n        self.initial_item_emb = nn.Embedding(self.n_items, self.emb_size)\n        self.reset_parameters()\n\n    def reset_parameters(self):\n        nn.init.xavier_uniform_(self.initial_user_emb.weight)\n        nn.init.xavier_uniform_(self.initial_item_emb.weight)\n\n    def forward(self, q, x):\n        q_emb = self.initial_user_emb(q)\n        i_emb = self.initial_item_emb(x)\n\n        q_emb = self.user_tower(q_emb)\n        i_emb = self.item_tower(i_emb)\n\n        return q_emb, i_emb\n\n    def item_encoding(self, x):\n        i_emb = self.initial_item_emb(x)\n        i1_emb = self.dropout(i_emb)\n        i2_emb = self.dropout(i_emb)\n\n        i1_emb = self.item_tower(i1_emb)\n        i2_emb = self.item_tower(i2_emb)\n\n        return i1_emb, i2_emb\n\n    def calculate_cl_loss(self, idx):\n        x1, x2 = self.item_encoding(idx)\n        x1, x2 = F.normalize(x1, dim=-1), F.normalize(x2, dim=-1)\n        pos_score = (x1 * x2).sum(dim=-1)\n        pos_score = torch.exp(pos_score / self.tau)\n        ttl_score = torch.matmul(x1, x2.transpose(0, 1))\n        ttl_score = torch.exp(ttl_score / self.tau).sum(dim=1)\n        return -torch.log(pos_score / ttl_score).mean()\n"
  },
  {
    "path": "recbole_gnn/model/general_recommender/xsimgcl.py",
    "content": "# -*- coding: utf-8 -*-\nr\"\"\"\nXSimGCL\n################################################\nReference:\n    Junliang Yu, Xin Xia, Tong Chen, Lizhen Cui, Nguyen Quoc Viet Hung, Hongzhi Yin. \"XSimGCL: Towards Extremely Simple Graph Contrastive Learning for Recommendation\" in TKDE 2023.\n\nReference code:\n    https://github.com/Coder-Yu/SELFRec/blob/main/model/graph/XSimGCL.py\n\"\"\"\n\n\nimport torch\nimport torch.nn.functional as F\n\nfrom recbole_gnn.model.general_recommender import LightGCN\n\n\nclass XSimGCL(LightGCN):\n    def __init__(self, config, dataset):\n        super(XSimGCL, self).__init__(config, dataset)\n\n        self.cl_rate = config['lambda']\n        self.eps = config['eps']\n        self.temperature = config['temperature']\n        self.layer_cl = config['layer_cl']\n\n    def forward(self, perturbed=False):\n        all_embs = self.get_ego_embeddings()\n        all_embs_cl = all_embs\n        embeddings_list = []\n\n        for layer_idx in range(self.n_layers):\n            all_embs = self.gcn_conv(all_embs, self.edge_index, self.edge_weight)\n            if perturbed:\n                random_noise = torch.rand_like(all_embs, device=all_embs.device)\n                all_embs = all_embs + torch.sign(all_embs) * F.normalize(random_noise, dim=-1) * self.eps\n            embeddings_list.append(all_embs)\n            if layer_idx == self.layer_cl - 1:\n                all_embs_cl = all_embs\n        lightgcn_all_embeddings = torch.stack(embeddings_list, dim=1)\n        lightgcn_all_embeddings = torch.mean(lightgcn_all_embeddings, dim=1)\n\n        user_all_embeddings, item_all_embeddings = torch.split(lightgcn_all_embeddings, [self.n_users, self.n_items])\n        user_all_embeddings_cl, item_all_embeddings_cl = torch.split(all_embs_cl, [self.n_users, self.n_items])\n        if perturbed:\n            return user_all_embeddings, item_all_embeddings, user_all_embeddings_cl, item_all_embeddings_cl\n        return 
user_all_embeddings, item_all_embeddings\n\n    def calculate_cl_loss(self, x1, x2):\n        x1, x2 = F.normalize(x1, dim=-1), F.normalize(x2, dim=-1)\n        pos_score = (x1 * x2).sum(dim=-1)\n        pos_score = torch.exp(pos_score / self.temperature)\n        ttl_score = torch.matmul(x1, x2.transpose(0, 1))\n        ttl_score = torch.exp(ttl_score / self.temperature).sum(dim=1)\n        return -torch.log(pos_score / ttl_score).mean()\n\n    def calculate_loss(self, interaction):\n        # clear the storage variable when training\n        if self.restore_user_e is not None or self.restore_item_e is not None:\n            self.restore_user_e, self.restore_item_e = None, None\n\n        user = interaction[self.USER_ID]\n        pos_item = interaction[self.ITEM_ID]\n        neg_item = interaction[self.NEG_ITEM_ID]\n\n        user_all_embeddings, item_all_embeddings, user_all_embeddings_cl, item_all_embeddings_cl = self.forward(perturbed=True)\n        u_embeddings = user_all_embeddings[user]\n        pos_embeddings = item_all_embeddings[pos_item]\n        neg_embeddings = item_all_embeddings[neg_item]\n\n        # calculate BPR Loss\n        pos_scores = torch.mul(u_embeddings, pos_embeddings).sum(dim=1)\n        neg_scores = torch.mul(u_embeddings, neg_embeddings).sum(dim=1)\n        mf_loss = self.mf_loss(pos_scores, neg_scores)\n\n        # calculate regularization Loss\n        u_ego_embeddings = self.user_embedding(user)\n        pos_ego_embeddings = self.item_embedding(pos_item)\n        neg_ego_embeddings = self.item_embedding(neg_item)\n        reg_loss = self.reg_loss(u_ego_embeddings, pos_ego_embeddings, neg_ego_embeddings, require_pow=self.require_pow)\n\n        user = torch.unique(interaction[self.USER_ID])\n        pos_item = torch.unique(interaction[self.ITEM_ID])\n\n        # calculate CL Loss\n        user_cl_loss = self.calculate_cl_loss(user_all_embeddings[user], user_all_embeddings_cl[user])\n        item_cl_loss = 
self.calculate_cl_loss(item_all_embeddings[pos_item], item_all_embeddings_cl[pos_item])\n\n        return mf_loss, self.reg_weight * reg_loss, self.cl_rate * (user_cl_loss + item_cl_loss)\n"
  },
  {
    "path": "recbole_gnn/model/layers.py",
    "content": "import numpy as np\nimport torch\nimport torch.nn as nn\nfrom torch_geometric.nn import MessagePassing\nfrom torch_sparse import matmul\n\n\nclass LightGCNConv(MessagePassing):\n    def __init__(self, dim):\n        super(LightGCNConv, self).__init__(aggr='add')\n        self.dim = dim\n\n    def forward(self, x, edge_index, edge_weight):\n        return self.propagate(edge_index, x=x, edge_weight=edge_weight)\n\n    def message(self, x_j, edge_weight):\n        return edge_weight.view(-1, 1) * x_j\n\n    def message_and_aggregate(self, adj_t, x):\n        return matmul(adj_t, x, reduce=self.aggr)\n\n    def __repr__(self):\n        return '{}({})'.format(self.__class__.__name__, self.dim)\n\n\nclass BipartiteGCNConv(MessagePassing):\n    def __init__(self, dim):\n        super(BipartiteGCNConv, self).__init__(aggr='add')\n        self.dim = dim\n\n    def forward(self, x, edge_index, edge_weight, size):\n        return self.propagate(edge_index, x=x, edge_weight=edge_weight, size=size)\n\n    def message(self, x_j, edge_weight):\n        return edge_weight.view(-1, 1) * x_j\n\n    def __repr__(self):\n        return '{}({})'.format(self.__class__.__name__, self.dim)\n\n\nclass BiGNNConv(MessagePassing):\n    r\"\"\"Propagate a layer of Bi-interaction GNN\n\n    .. 
math::\n        output = (L+I)EW_1 + LE \\otimes EW_2\n    \"\"\"\n\n    def __init__(self, in_channels, out_channels):\n        super().__init__(aggr='add')\n        self.in_channels, self.out_channels = in_channels, out_channels\n        self.lin1 = torch.nn.Linear(in_features=in_channels, out_features=out_channels)\n        self.lin2 = torch.nn.Linear(in_features=in_channels, out_features=out_channels)\n\n    def forward(self, x, edge_index, edge_weight):\n        x_prop = self.propagate(edge_index, x=x, edge_weight=edge_weight)\n        x_trans = self.lin1(x_prop + x)\n        x_inter = self.lin2(torch.mul(x_prop, x))\n        return x_trans + x_inter\n\n    def message(self, x_j, edge_weight):\n        return edge_weight.view(-1, 1) * x_j\n\n    def message_and_aggregate(self, adj_t, x):\n        return matmul(adj_t, x, reduce=self.aggr)\n\n    def __repr__(self):\n        return '{}({},{})'.format(self.__class__.__name__, self.in_channels, self.out_channels)\n\n\nclass SRGNNConv(MessagePassing):\n    def __init__(self, dim):\n        # mean aggregation to incorporate weight naturally\n        super(SRGNNConv, self).__init__(aggr='mean')\n\n        self.lin = torch.nn.Linear(dim, dim)\n\n    def forward(self, x, edge_index):\n        x = self.lin(x)\n        return self.propagate(edge_index, x=x)\n\n\nclass SRGNNCell(nn.Module):\n    def __init__(self, dim):\n        super(SRGNNCell, self).__init__()\n\n        self.dim = dim\n        self.incomming_conv = SRGNNConv(dim)\n        self.outcomming_conv = SRGNNConv(dim)\n\n        self.lin_ih = nn.Linear(2 * dim, 3 * dim)\n        self.lin_hh = nn.Linear(dim, 3 * dim)\n\n        self._reset_parameters()\n\n    def forward(self, hidden, edge_index):\n        input_in = self.incomming_conv(hidden, edge_index)\n        reversed_edge_index = torch.flip(edge_index, dims=[0])\n        input_out = self.outcomming_conv(hidden, reversed_edge_index)\n        inputs = torch.cat([input_in, input_out], dim=-1)\n\n        gi = 
self.lin_ih(inputs)\n        gh = self.lin_hh(hidden)\n        i_r, i_i, i_n = gi.chunk(3, -1)\n        h_r, h_i, h_n = gh.chunk(3, -1)\n        reset_gate = torch.sigmoid(i_r + h_r)\n        input_gate = torch.sigmoid(i_i + h_i)\n        new_gate = torch.tanh(i_n + reset_gate * h_n)\n        hy = (1 - input_gate) * hidden + input_gate * new_gate\n        return hy\n\n    def _reset_parameters(self):\n        stdv = 1.0 / np.sqrt(self.dim)\n        for weight in self.parameters():\n            weight.data.uniform_(-stdv, stdv)\n"
  },
  {
    "path": "recbole_gnn/model/sequential_recommender/__init__.py",
    "content": "from recbole_gnn.model.sequential_recommender.gcegnn import GCEGNN\nfrom recbole_gnn.model.sequential_recommender.gcsan import GCSAN\nfrom recbole_gnn.model.sequential_recommender.lessr import LESSR\nfrom recbole_gnn.model.sequential_recommender.niser import NISER\nfrom recbole_gnn.model.sequential_recommender.sgnnhn import SGNNHN\nfrom recbole_gnn.model.sequential_recommender.srgnn import SRGNN\nfrom recbole_gnn.model.sequential_recommender.tagnn import TAGNN\n"
  },
  {
    "path": "recbole_gnn/model/sequential_recommender/gcegnn.py",
    "content": "# @Time   : 2022/3/22\n# @Author : Yupeng Hou\n# @Email  : houyupeng@ruc.edu.cn\n\nr\"\"\"\nGCE-GNN\n################################################\n\nReference:\n    Ziyang Wang et al. \"Global Context Enhanced Graph Neural Networks for Session-based Recommendation.\" in SIGIR 2020.\n\nReference code:\n    https://github.com/CCIIPLab/GCE-GNN\n\n\"\"\"\n\nimport numpy as np\nfrom tqdm import tqdm\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch_geometric.nn import MessagePassing\nfrom torch_geometric.utils import softmax\nfrom recbole.model.loss import BPRLoss\nfrom recbole.model.abstract_recommender import SequentialRecommender\n\n\nclass LocalAggregator(MessagePassing):\n    def __init__(self, dim, alpha):\n        super().__init__(aggr='add')\n        self.edge_emb = nn.Embedding(4, dim)\n        self.leakyrelu = nn.LeakyReLU(alpha)\n\n    def forward(self, x, edge_index, edge_attr):\n        return self.propagate(edge_index, x=x, edge_attr=edge_attr)\n\n    def message(self, x_j, x_i, edge_attr, index, ptr, size_i):\n        x = x_j * x_i\n        a = self.edge_emb(edge_attr)\n        e = (x * a).sum(dim=-1)\n        e = self.leakyrelu(e)\n        e = softmax(e, index, ptr, size_i)\n        return e.unsqueeze(-1) * x_j\n\n\nclass GlobalAggregator(nn.Module):\n    def __init__(self, dim, dropout, act=torch.relu):\n        super(GlobalAggregator, self).__init__()\n        self.dropout = dropout\n        self.act = act\n        self.dim = dim\n\n        self.w_1 = nn.Parameter(torch.Tensor(self.dim + 1, self.dim))\n        self.w_2 = nn.Parameter(torch.Tensor(self.dim, 1))\n        self.w_3 = nn.Parameter(torch.Tensor(2 * self.dim, self.dim))\n        self.bias = nn.Parameter(torch.Tensor(self.dim))\n\n    def forward(self, self_vectors, neighbor_vector, batch_size, masks, neighbor_weight, extra_vector=None):\n        if extra_vector is not None:\n            alpha = 
torch.matmul(torch.cat([extra_vector.unsqueeze(2).repeat(1, 1, neighbor_vector.shape[2], 1)*neighbor_vector, neighbor_weight.unsqueeze(-1)], -1), self.w_1).squeeze(-1)\n            alpha = F.leaky_relu(alpha, negative_slope=0.2)\n            alpha = torch.matmul(alpha, self.w_2).squeeze(-1)\n            alpha = torch.softmax(alpha, -1).unsqueeze(-1)\n            neighbor_vector = torch.sum(alpha * neighbor_vector, dim=-2)\n        else:\n            neighbor_vector = torch.mean(neighbor_vector, dim=2)\n        # self_vectors = F.dropout(self_vectors, 0.5, training=self.training)\n        output = torch.cat([self_vectors, neighbor_vector], -1)\n        output = F.dropout(output, self.dropout, training=self.training)\n        output = torch.matmul(output, self.w_3)\n        output = output.view(batch_size, -1, self.dim)\n        output = self.act(output)\n        return output\n\n\nclass GCEGNN(SequentialRecommender):\n    def __init__(self, config, dataset):\n        super(GCEGNN, self).__init__(config, dataset)\n\n        # load parameters info\n        self.embedding_size = config['embedding_size']\n        self.leakyrelu_alpha = config['leakyrelu_alpha']\n        self.dropout_local = config['dropout_local']\n        self.dropout_global = config['dropout_global']\n        self.dropout_gcn = config['dropout_gcn']\n        self.device = config['device']\n        self.loss_type = config['loss_type']\n        self.build_global_graph = config['build_global_graph']\n        self.sample_num = config['sample_num']\n        self.hop = config['hop']\n        self.max_seq_length = dataset.field2seqlen[self.ITEM_SEQ]\n\n        # global graph construction\n        self.global_graph = None\n        if self.build_global_graph:\n            self.global_adj, self.global_weight = self.construct_global_graph(dataset)\n\n        # item embedding\n        self.item_embedding = nn.Embedding(self.n_items, self.embedding_size, padding_idx=0)\n        self.pos_embedding = 
nn.Embedding(self.max_seq_length, self.embedding_size)\n\n        # define layers and loss\n        # Aggregator\n        self.local_agg = LocalAggregator(self.embedding_size, self.leakyrelu_alpha)\n        global_agg_list = []\n        for i in range(self.hop):\n            global_agg_list.append(GlobalAggregator(self.embedding_size, self.dropout_gcn))\n        self.global_agg = nn.ModuleList(global_agg_list)\n\n        self.w_1 = nn.Linear(2 * self.embedding_size, self.embedding_size, bias=False)\n        self.w_2 = nn.Linear(self.embedding_size, 1, bias=False)\n        self.glu1 = nn.Linear(self.embedding_size, self.embedding_size)\n        self.glu2 = nn.Linear(self.embedding_size, self.embedding_size, bias=False)\n        if self.loss_type == 'BPR':\n            self.loss_fct = BPRLoss()\n        elif self.loss_type == 'CE':\n            self.loss_fct = nn.CrossEntropyLoss()\n        else:\n            raise NotImplementedError(\"Make sure 'loss_type' in ['BPR', 'CE']!\")\n\n        self.reset_parameters()\n        self.other_parameter_name = ['global_adj', 'global_weight']\n\n    def reset_parameters(self):\n        stdv = 1.0 / np.sqrt(self.embedding_size)\n        for weight in self.parameters():\n            weight.data.uniform_(-stdv, stdv)\n\n    def _add_edge(self, graph, sid, tid):\n        if tid not in graph[sid]:\n            graph[sid][tid] = 0\n        graph[sid][tid] += 1\n\n    def construct_global_graph(self, dataset):\n        self.logger.info('Constructing global graphs.')\n        item_id_list = dataset.inter_feat['item_id_list']\n        src_item_ids = item_id_list[:, :4].tolist()\n        tgt_item_id = dataset.inter_feat['item_id'].tolist()\n        global_graph = [{} for _ in range(self.n_items)]\n        for i in tqdm(range(len(tgt_item_id)), desc='Converting: '):\n            tid = tgt_item_id[i]\n            for sid in src_item_ids[i]:\n                if sid > 0:\n                    self._add_edge(global_graph, tid, sid)\n             
       self._add_edge(global_graph, sid, tid)\n        global_adj = [[] for _ in range(self.n_items)]\n        global_weight = [[] for _ in range(self.n_items)]\n        for i in tqdm(range(self.n_items), desc='Sorting: '):\n            sorted_out_edges = [v for v in sorted(global_graph[i].items(), reverse=True, key=lambda x: x[1])]\n            global_adj[i] = [v[0] for v in sorted_out_edges[:self.sample_num]]\n            global_weight[i] = [v[1] for v in sorted_out_edges[:self.sample_num]]\n            if len(global_adj[i]) < self.sample_num:\n                for j in range(self.sample_num - len(global_adj[i])):\n                    global_adj[i].append(0)\n                    global_weight[i].append(0)\n        return torch.LongTensor(global_adj).to(self.device), torch.FloatTensor(global_weight).to(self.device)\n\n    def fusion(self, hidden, mask):\n        batch_size = hidden.shape[0]\n        length = hidden.shape[1]\n        pos_emb = self.pos_embedding.weight[:length]\n        pos_emb = pos_emb.unsqueeze(0).expand(batch_size, -1, -1)\n\n        hs = torch.sum(hidden * mask, -2) / torch.sum(mask, 1)\n        hs = hs.unsqueeze(-2).expand(-1, length, -1)\n        nh = self.w_1(torch.cat([pos_emb, hidden], -1))\n        nh = torch.tanh(nh)\n        nh = torch.sigmoid(self.glu1(nh) + self.glu2(hs))\n        beta = self.w_2(nh)\n        beta = beta * mask\n        final_h = torch.sum(beta * hidden, 1)\n        return final_h\n\n    def forward(self, x, edge_index, edge_attr, alias_inputs, item_seq_len):\n        batch_size = alias_inputs.shape[0]\n        mask = alias_inputs.gt(0).unsqueeze(-1)\n        h = self.item_embedding(x)\n\n        # local\n        h_local = self.local_agg(h, edge_index, edge_attr)\n\n        # global\n        item_neighbors = [F.pad(x[alias_inputs], (0, self.max_seq_length - x[alias_inputs].shape[1]), \"constant\", 0)]\n        weight_neighbors = []\n        support_size = self.max_seq_length\n\n        for i in range(self.hop):\n      
      item_sample_i, weight_sample_i = self.global_adj[item_neighbors[-1].view(-1)], self.global_weight[item_neighbors[-1].view(-1)]\n            support_size *= self.sample_num\n            item_neighbors.append(item_sample_i.view(batch_size, support_size))\n            weight_neighbors.append(weight_sample_i.view(batch_size, support_size))\n\n        entity_vectors = [self.item_embedding(i) for i in item_neighbors]\n        weight_vectors = weight_neighbors\n\n        session_info = []\n        item_emb = h[alias_inputs] * mask\n\n        # mean \n        sum_item_emb = torch.sum(item_emb, 1) / torch.sum(mask.float(), 1)\n\n        # sum\n        # sum_item_emb = torch.sum(item_emb, 1)\n\n        sum_item_emb = sum_item_emb.unsqueeze(-2)\n        for i in range(self.hop):\n            session_info.append(sum_item_emb.repeat(1, entity_vectors[i].shape[1], 1))\n\n        for n_hop in range(self.hop):\n            entity_vectors_next_iter = []\n            shape = [batch_size, -1, self.sample_num, self.embedding_size]\n            for hop in range(self.hop - n_hop):\n                aggregator = self.global_agg[n_hop]\n                vector = aggregator(self_vectors=entity_vectors[hop],\n                                    neighbor_vector=entity_vectors[hop + 1].view(shape),\n                                    masks=None,\n                                    batch_size=batch_size,\n                                    neighbor_weight=weight_vectors[hop].view(batch_size, -1, self.sample_num),\n                                    extra_vector=session_info[hop])\n                entity_vectors_next_iter.append(vector)\n            entity_vectors = entity_vectors_next_iter\n\n        h_global = entity_vectors[0].view(batch_size, self.max_seq_length, self.embedding_size)\n        h_global = h_global[:,:alias_inputs.shape[1],:]\n\n        h_local = F.dropout(h_local, self.dropout_local, training=self.training)\n        h_global = F.dropout(h_global, self.dropout_global, 
training=self.training)\n        h_local = h_local[alias_inputs]\n\n        h_session = h_local + h_global\n        h_session = self.fusion(h_session, mask)\n        return h_session\n\n    def calculate_loss(self, interaction):\n        x = interaction['x']\n        edge_index = interaction['edge_index']\n        edge_attr = interaction['edge_attr']\n        alias_inputs = interaction['alias_inputs']\n        item_seq_len = interaction[self.ITEM_SEQ_LEN]\n        seq_output = self.forward(x, edge_index, edge_attr, alias_inputs, item_seq_len)\n        pos_items = interaction[self.POS_ITEM_ID]\n        if self.loss_type == 'BPR':\n            neg_items = interaction[self.NEG_ITEM_ID]\n            pos_items_emb = self.item_embedding(pos_items)\n            neg_items_emb = self.item_embedding(neg_items)\n            pos_score = torch.sum(seq_output * pos_items_emb, dim=-1)  # [B]\n            neg_score = torch.sum(seq_output * neg_items_emb, dim=-1)  # [B]\n            loss = self.loss_fct(pos_score, neg_score)\n            return loss\n        else:  # self.loss_type = 'CE'\n            test_item_emb = self.item_embedding.weight\n            logits = torch.matmul(seq_output, test_item_emb.transpose(0, 1))\n            loss = self.loss_fct(logits, pos_items)\n            return loss\n\n    def predict(self, interaction):\n        test_item = interaction[self.ITEM_ID]\n        x = interaction['x']\n        edge_index = interaction['edge_index']\n        edge_attr = interaction['edge_attr']\n        alias_inputs = interaction['alias_inputs']\n        item_seq_len = interaction[self.ITEM_SEQ_LEN]\n        seq_output = self.forward(x, edge_index, edge_attr, alias_inputs, item_seq_len)\n        test_item_emb = self.item_embedding(test_item)\n        scores = torch.mul(seq_output, test_item_emb).sum(dim=1)  # [B]\n        return scores\n\n    def full_sort_predict(self, interaction):\n        x = interaction['x']\n        edge_index = interaction['edge_index']\n        
edge_attr = interaction['edge_attr']\n        alias_inputs = interaction['alias_inputs']\n        item_seq_len = interaction[self.ITEM_SEQ_LEN]\n        seq_output = self.forward(x, edge_index, edge_attr, alias_inputs, item_seq_len)\n        test_items_emb = self.item_embedding.weight\n        scores = torch.matmul(seq_output, test_items_emb.transpose(0, 1))  # [B, n_items]\n        return scores\n"
  },
  {
    "path": "recbole_gnn/model/sequential_recommender/gcsan.py",
    "content": "# @Time   : 2022/3/7\n# @Author : Yupeng Hou\n# @Email  : houyupeng@ruc.edu.cn\n\nr\"\"\"\nGCSAN\n################################################\n\nReference:\n    Chengfeng Xu et al. \"Graph Contextualized Self-Attention Network for Session-based Recommendation.\" in IJCAI 2019.\n\n\"\"\"\n\nimport torch\nfrom torch import nn\nfrom recbole.model.layers import TransformerEncoder\nfrom recbole.model.loss import EmbLoss, BPRLoss\nfrom recbole.model.abstract_recommender import SequentialRecommender\n\nfrom recbole_gnn.model.layers import SRGNNCell\n\n\nclass GCSAN(SequentialRecommender):\n    r\"\"\"GCSAN captures rich local dependencies via graph neural network,\n     and learns long-range dependencies by applying the self-attention mechanism.\n     \n    Note:\n\n        In the original paper, the attention mechanism in the self-attention layer is a single head,\n        for the reusability of the project code, we use a unified transformer component.\n        According to the experimental results, we only applied regularization to embedding.\n    \"\"\"\n\n    def __init__(self, config, dataset):\n        super(GCSAN, self).__init__(config, dataset)\n\n        # load parameters info\n        self.n_layers = config['n_layers']\n        self.n_heads = config['n_heads']\n        self.hidden_size = config['hidden_size']  # same as embedding_size\n        self.inner_size = config['inner_size']  # the dimensionality in feed-forward layer\n        self.hidden_dropout_prob = config['hidden_dropout_prob']\n        self.attn_dropout_prob = config['attn_dropout_prob']\n        self.hidden_act = config['hidden_act']\n        self.layer_norm_eps = config['layer_norm_eps']\n\n        self.step = config['step']\n        self.device = config['device']\n        self.weight = config['weight']\n        self.reg_weight = config['reg_weight']\n        self.loss_type = config['loss_type']\n        self.initializer_range = config['initializer_range']\n\n        # item 
embedding\n        self.item_embedding = nn.Embedding(self.n_items, self.hidden_size, padding_idx=0)\n\n        # define layers and loss\n        self.gnncell = SRGNNCell(self.hidden_size)\n        self.self_attention = TransformerEncoder(\n            n_layers=self.n_layers,\n            n_heads=self.n_heads,\n            hidden_size=self.hidden_size,\n            inner_size=self.inner_size,\n            hidden_dropout_prob=self.hidden_dropout_prob,\n            attn_dropout_prob=self.attn_dropout_prob,\n            hidden_act=self.hidden_act,\n            layer_norm_eps=self.layer_norm_eps\n        )\n        self.reg_loss = EmbLoss()\n        if self.loss_type == 'BPR':\n            self.loss_fct = BPRLoss()\n        elif self.loss_type == 'CE':\n            self.loss_fct = nn.CrossEntropyLoss()\n        else:\n            raise NotImplementedError(\"Make sure 'loss_type' in ['BPR', 'CE']!\")\n\n        # parameters initialization\n        self.apply(self._init_weights)\n\n    def _init_weights(self, module):\n        \"\"\" Initialize the weights \"\"\"\n        if isinstance(module, (nn.Linear, nn.Embedding)):\n            # Slightly different from the TF version which uses truncated_normal for initialization\n            # cf https://github.com/pytorch/pytorch/pull/5617\n            module.weight.data.normal_(mean=0.0, std=self.initializer_range)\n        elif isinstance(module, nn.LayerNorm):\n            module.bias.data.zero_()\n            module.weight.data.fill_(1.0)\n        if isinstance(module, nn.Linear) and module.bias is not None:\n            module.bias.data.zero_()\n\n    def get_attention_mask(self, item_seq):\n        \"\"\"Generate left-to-right uni-directional attention mask for multi-head attention.\"\"\"\n        attention_mask = (item_seq > 0).long()\n        extended_attention_mask = attention_mask.unsqueeze(1).unsqueeze(2)  # torch.int64\n        # mask for left-to-right unidirectional\n        max_len = attention_mask.size(-1)\n       
 attn_shape = (1, max_len, max_len)\n        subsequent_mask = torch.triu(torch.ones(attn_shape), diagonal=1)  # torch.uint8\n        subsequent_mask = (subsequent_mask == 0).unsqueeze(1)\n        subsequent_mask = subsequent_mask.long().to(item_seq.device)\n\n        extended_attention_mask = extended_attention_mask * subsequent_mask\n        extended_attention_mask = extended_attention_mask.to(dtype=next(self.parameters()).dtype)  # fp16 compatibility\n        extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0\n        return extended_attention_mask\n\n    def forward(self, x, edge_index, alias_inputs, item_seq_len):\n        hidden = self.item_embedding(x)\n        for i in range(self.step):\n            hidden = self.gnncell(hidden, edge_index)\n\n        seq_hidden = hidden[alias_inputs]\n        # fetch the last hidden state of last timestamp\n        ht = self.gather_indexes(seq_hidden, item_seq_len - 1)\n\n        attention_mask = self.get_attention_mask(alias_inputs)\n        outputs = self.self_attention(seq_hidden, attention_mask, output_all_encoded_layers=True)\n        output = outputs[-1]\n        at = self.gather_indexes(output, item_seq_len - 1)\n        seq_output = self.weight * at + (1 - self.weight) * ht\n        return seq_output\n\n    def calculate_loss(self, interaction):\n        x = interaction['x']\n        edge_index = interaction['edge_index']\n        alias_inputs = interaction['alias_inputs']\n        item_seq_len = interaction[self.ITEM_SEQ_LEN]\n        seq_output = self.forward(x, edge_index, alias_inputs, item_seq_len)\n        pos_items = interaction[self.POS_ITEM_ID]\n        if self.loss_type == 'BPR':\n            neg_items = interaction[self.NEG_ITEM_ID]\n            pos_items_emb = self.item_embedding(pos_items)\n            neg_items_emb = self.item_embedding(neg_items)\n            pos_score = torch.sum(seq_output * pos_items_emb, dim=-1)  # [B]\n            neg_score = torch.sum(seq_output * 
neg_items_emb, dim=-1)  # [B]\n            loss = self.loss_fct(pos_score, neg_score)\n        else:  # self.loss_type = 'CE'\n            test_item_emb = self.item_embedding.weight\n            logits = torch.matmul(seq_output, test_item_emb.transpose(0, 1))\n            loss = self.loss_fct(logits, pos_items)\n        reg_loss = self.reg_loss(self.item_embedding.weight)\n        total_loss = loss + self.reg_weight * reg_loss\n        return total_loss\n\n    def predict(self, interaction):\n        test_item = interaction[self.ITEM_ID]\n        x = interaction['x']\n        edge_index = interaction['edge_index']\n        alias_inputs = interaction['alias_inputs']\n        item_seq_len = interaction[self.ITEM_SEQ_LEN]\n        seq_output = self.forward(x, edge_index, alias_inputs, item_seq_len)\n        test_item_emb = self.item_embedding(test_item)\n        scores = torch.mul(seq_output, test_item_emb).sum(dim=1)  # [B]\n        return scores\n\n    def full_sort_predict(self, interaction):\n        x = interaction['x']\n        edge_index = interaction['edge_index']\n        alias_inputs = interaction['alias_inputs']\n        item_seq_len = interaction[self.ITEM_SEQ_LEN]\n        seq_output = self.forward(x, edge_index, alias_inputs, item_seq_len)\n        test_items_emb = self.item_embedding.weight\n        scores = torch.matmul(seq_output, test_items_emb.transpose(0, 1))  # [B, n_items]\n        return scores\n"
  },
  {
    "path": "recbole_gnn/model/sequential_recommender/lessr.py",
    "content": "# @Time   : 2022/3/11\n# @Author : Yupeng Hou\n# @Email  : houyupeng@ruc.edu.cn\n\nr\"\"\"\nLESSR\n################################################\n\nReference:\n    Tianwen Chen and Raymond Chi-Wing Wong. \"Handling Information Loss of Graph Neural Networks for Session-based Recommendation.\" in KDD 2020.\n\nReference code:\n    https://github.com/twchen/lessr\n\n\"\"\"\n\nimport torch\nfrom torch import nn\nfrom torch_geometric.utils import softmax\nfrom torch_geometric.nn import global_add_pool\nfrom recbole.model.abstract_recommender import SequentialRecommender\n\n\nclass EOPA(nn.Module):\n    def __init__(\n        self, input_dim, output_dim, batch_norm=True, feat_drop=0.0, activation=None\n    ):\n        super().__init__()\n        self.batch_norm = nn.BatchNorm1d(input_dim) if batch_norm else None\n        self.feat_drop = nn.Dropout(feat_drop)\n        self.gru = nn.GRU(input_dim, input_dim, batch_first=True)\n        self.fc_self = nn.Linear(input_dim, output_dim, bias=False)\n        self.fc_neigh = nn.Linear(input_dim, output_dim, bias=False)\n        self.activation = activation\n\n    def reducer(self, nodes):\n        m = nodes.mailbox['m']  # (num_nodes, deg, d)\n        # m[i]: the messages passed to the i-th node with in-degree equal to 'deg'\n        # the order of messages follows the order of incoming edges\n        # since the edges are sorted by occurrence time when the EOP multigraph is built\n        # the messages are in the order required by EOPA\n        _, hn = self.gru(m)  # hn: (1, num_nodes, d)\n        return {'neigh': hn.squeeze(0)}\n\n    def forward(self, mg, feat):\n        import dgl.function as fn\n\n        with mg.local_scope():\n            if self.batch_norm is not None:\n                feat = self.batch_norm(feat)\n            mg.ndata['ft'] = self.feat_drop(feat)\n            if mg.number_of_edges() > 0:\n                mg.update_all(fn.copy_u('ft', 'm'), self.reducer)\n                neigh = 
mg.ndata['neigh']\n                rst = self.fc_self(feat) + self.fc_neigh(neigh)\n            else:\n                rst = self.fc_self(feat)\n            if self.activation is not None:\n                rst = self.activation(rst)\n            return rst\n\n\nclass SGAT(nn.Module):\n    def __init__(\n        self,\n        input_dim,\n        hidden_dim,\n        output_dim,\n        batch_norm=True,\n        feat_drop=0.0,\n        activation=None,\n    ):\n        super().__init__()\n        self.batch_norm = nn.BatchNorm1d(input_dim) if batch_norm else None\n        self.feat_drop = nn.Dropout(feat_drop)\n        self.fc_q = nn.Linear(input_dim, hidden_dim, bias=True)\n        self.fc_k = nn.Linear(input_dim, hidden_dim, bias=False)\n        self.fc_v = nn.Linear(input_dim, output_dim, bias=False)\n        self.fc_e = nn.Linear(hidden_dim, 1, bias=False)\n        self.activation = activation\n\n    def forward(self, sg, feat):\n        import dgl.ops as F\n\n        if self.batch_norm is not None:\n            feat = self.batch_norm(feat)\n        feat = self.feat_drop(feat)\n        q = self.fc_q(feat)\n        k = self.fc_k(feat)\n        v = self.fc_v(feat)\n        e = F.u_add_v(sg, q, k)\n        e = self.fc_e(torch.sigmoid(e))\n        a = F.edge_softmax(sg, e)\n        rst = F.u_mul_e_sum(sg, v, a)\n        if self.activation is not None:\n            rst = self.activation(rst)\n        return rst\n\n\nclass AttnReadout(nn.Module):\n    def __init__(\n        self,\n        input_dim,\n        hidden_dim,\n        output_dim,\n        batch_norm=True,\n        feat_drop=0.0,\n        activation=None,\n    ):\n        super().__init__()\n        self.batch_norm = nn.BatchNorm1d(input_dim) if batch_norm else None\n        self.feat_drop = nn.Dropout(feat_drop)\n        self.fc_u = nn.Linear(input_dim, hidden_dim, bias=False)\n        self.fc_v = nn.Linear(input_dim, hidden_dim, bias=True)\n        self.fc_e = nn.Linear(hidden_dim, 1, bias=False)\n        
self.fc_out = (\n            nn.Linear(input_dim, output_dim, bias=False)\n            if output_dim != input_dim else None\n        )\n        self.activation = activation\n\n    def forward(self, g, feat, last_nodes, batch):\n        if self.batch_norm is not None:\n            feat = self.batch_norm(feat)\n        feat = self.feat_drop(feat)\n        feat_u = self.fc_u(feat)\n        feat_v = self.fc_v(feat[last_nodes])\n        feat_v = torch.index_select(feat_v, dim=0, index=batch)\n        e = self.fc_e(torch.sigmoid(feat_u + feat_v))\n        alpha = softmax(e, batch)\n        feat_norm = feat * alpha\n        rst = global_add_pool(feat_norm, batch)\n        if self.fc_out is not None:\n            rst = self.fc_out(rst)\n        if self.activation is not None:\n            rst = self.activation(rst)\n        return rst\n\n\nclass LESSR(SequentialRecommender):\n    r\"\"\"LESSR analyzes the information losses when constructing session graphs,\n    and emphasizes the lossy session encoding problem and the ineffective long-range dependency capturing problem.\n    To solve the first problem, the authors propose a lossless encoding scheme and an edge-order preserving aggregation layer.\n    To solve the second problem, the authors propose a shortcut graph attention layer that effectively captures long-range dependencies.\n\n    Note:\n        We follow the original implementation, which requires the DGL package.\n        We found it difficult to implement these functions via PyG, so we keep the DGL-based implementation.\n        If you would like to test this model, please install DGL.\n    \"\"\"\n\n    def __init__(self, config, dataset):\n        super().__init__(config, dataset)\n\n        embedding_dim = config['embedding_size']\n        self.num_layers = config['n_layers']\n        batch_norm = config['batch_norm']\n        feat_drop = config['feat_drop']\n        self.loss_type = config['loss_type']\n\n        self.item_embedding = nn.Embedding(self.n_items, embedding_dim, max_norm=1)\n        
self.layers = nn.ModuleList()\n        input_dim = embedding_dim\n        for i in range(self.num_layers):\n            if i % 2 == 0:\n                layer = EOPA(\n                    input_dim,\n                    embedding_dim,\n                    batch_norm=batch_norm,\n                    feat_drop=feat_drop,\n                    activation=nn.PReLU(embedding_dim),\n                )\n            else:\n                layer = SGAT(\n                    input_dim,\n                    embedding_dim,\n                    embedding_dim,\n                    batch_norm=batch_norm,\n                    feat_drop=feat_drop,\n                    activation=nn.PReLU(embedding_dim),\n                )\n            input_dim += embedding_dim\n            self.layers.append(layer)\n        self.readout = AttnReadout(\n            input_dim,\n            embedding_dim,\n            embedding_dim,\n            batch_norm=batch_norm,\n            feat_drop=feat_drop,\n            activation=nn.PReLU(embedding_dim),\n        )\n        input_dim += embedding_dim\n        self.batch_norm = nn.BatchNorm1d(input_dim) if batch_norm else None\n        self.feat_drop = nn.Dropout(feat_drop)\n        self.fc_sr = nn.Linear(input_dim, embedding_dim, bias=False)\n\n        if self.loss_type == 'CE':\n            self.loss_fct = nn.CrossEntropyLoss()\n        else:\n            raise NotImplementedError(\"Make sure 'loss_type' in ['CE']!\")\n\n    def forward(self, x, edge_index_EOP, edge_index_shortcut, batch, is_last):\n        import dgl\n\n        mg = dgl.graph((edge_index_EOP[0], edge_index_EOP[1]), num_nodes=batch.shape[0])\n        sg = dgl.graph((edge_index_shortcut[0], edge_index_shortcut[1]), num_nodes=batch.shape[0])\n\n        feat = self.item_embedding(x)\n        for i, layer in enumerate(self.layers):\n            if i % 2 == 0:\n                out = layer(mg, feat)\n            else:\n                out = layer(sg, feat)\n            feat = torch.cat([out, 
feat], dim=1)\n        sr_g = self.readout(mg, feat, is_last, batch)\n        sr_l = feat[is_last]\n        sr = torch.cat([sr_l, sr_g], dim=1)\n        if self.batch_norm is not None:\n            sr = self.batch_norm(sr)\n        sr = self.fc_sr(self.feat_drop(sr))\n        return sr\n\n    def calculate_loss(self, interaction):\n        x = interaction['x']\n        edge_index_EOP = interaction['edge_index_EOP']\n        edge_index_shortcut = interaction['edge_index_shortcut']\n        batch = interaction['batch']\n        is_last = interaction['is_last']\n        seq_output = self.forward(x, edge_index_EOP, edge_index_shortcut, batch, is_last)\n        pos_items = interaction[self.POS_ITEM_ID]\n        test_item_emb = self.item_embedding.weight\n        logits = torch.matmul(seq_output, test_item_emb.transpose(0, 1))\n        loss = self.loss_fct(logits, pos_items)\n        return loss\n\n    def predict(self, interaction):\n        test_item = interaction[self.ITEM_ID]\n        x = interaction['x']\n        edge_index_EOP = interaction['edge_index_EOP']\n        edge_index_shortcut = interaction['edge_index_shortcut']\n        batch = interaction['batch']\n        is_last = interaction['is_last']\n        seq_output = self.forward(x, edge_index_EOP, edge_index_shortcut, batch, is_last)\n        test_item_emb = self.item_embedding(test_item)\n        scores = torch.mul(seq_output, test_item_emb).sum(dim=1)  # [B]\n        return scores\n\n    def full_sort_predict(self, interaction):\n        x = interaction['x']\n        edge_index_EOP = interaction['edge_index_EOP']\n        edge_index_shortcut = interaction['edge_index_shortcut']\n        batch = interaction['batch']\n        is_last = interaction['is_last']\n        seq_output = self.forward(x, edge_index_EOP, edge_index_shortcut, batch, is_last)\n        test_items_emb = self.item_embedding.weight\n        scores = torch.matmul(seq_output, test_items_emb.transpose(0, 1))  # [B, n_items]\n        return 
scores\n"
  },
  {
    "path": "recbole_gnn/model/sequential_recommender/niser.py",
    "content": "# @Time   : 2022/3/7\n# @Author : Yupeng Hou\n# @Email  : houyupeng@ruc.edu.cn\n\nr\"\"\"\nNISER\n################################################\n\nReference:\n    Priyanka Gupta et al. \"NISER: Normalized Item and Session Representations to Handle Popularity Bias.\" in CIKM 2019 GRLA workshop.\n\n\"\"\"\nimport numpy as np\nimport torch\nfrom torch import nn\nimport torch.nn.functional as F\nfrom recbole.model.loss import BPRLoss\nfrom recbole.model.abstract_recommender import SequentialRecommender\n\nfrom recbole_gnn.model.layers import SRGNNCell\n\n\nclass NISER(SequentialRecommender):\n    r\"\"\"NISER+ is a GNN-based model that normalizes session and item embeddings to handle popularity bias.\n    \"\"\"\n\n    def __init__(self, config, dataset):\n        super(NISER, self).__init__(config, dataset)\n\n        # load parameters info\n        self.embedding_size = config['embedding_size']\n        self.step = config['step']\n        self.device = config['device']\n        self.loss_type = config['loss_type']\n        self.sigma = config['sigma']\n        self.max_seq_length = dataset.field2seqlen[self.ITEM_SEQ]\n\n        # item embedding\n        self.item_embedding = nn.Embedding(self.n_items, self.embedding_size, padding_idx=0)\n        self.pos_embedding = nn.Embedding(self.max_seq_length, self.embedding_size)\n        self.item_dropout = nn.Dropout(config['item_dropout'])\n\n        # define layers and loss\n        self.gnncell = SRGNNCell(self.embedding_size)\n        self.linear_one = nn.Linear(self.embedding_size, self.embedding_size)\n        self.linear_two = nn.Linear(self.embedding_size, self.embedding_size)\n        self.linear_three = nn.Linear(self.embedding_size, 1, bias=False)\n        self.linear_transform = nn.Linear(self.embedding_size * 2, self.embedding_size)\n        if self.loss_type == 'BPR':\n            self.loss_fct = BPRLoss()\n        elif self.loss_type == 'CE':\n            self.loss_fct = 
nn.CrossEntropyLoss()\n        else:\n            raise NotImplementedError(\"Make sure 'loss_type' in ['BPR', 'CE']!\")\n\n        # parameters initialization\n        self._reset_parameters()\n\n    def _reset_parameters(self):\n        stdv = 1.0 / np.sqrt(self.embedding_size)\n        for weight in self.parameters():\n            weight.data.uniform_(-stdv, stdv)\n\n    def forward(self, x, edge_index, alias_inputs, item_seq_len):\n        mask = alias_inputs.gt(0)\n        hidden = self.item_embedding(x)\n        # Dropout in NISER+\n        hidden = self.item_dropout(hidden)\n        # Normalize item embeddings\n        hidden = F.normalize(hidden, dim=-1)\n        for i in range(self.step):\n            hidden = self.gnncell(hidden, edge_index)\n\n        seq_hidden = hidden[alias_inputs]\n        batch_size = seq_hidden.shape[0]\n        pos_emb = self.pos_embedding.weight[:seq_hidden.shape[1]]\n        pos_emb = pos_emb.unsqueeze(0).expand(batch_size, -1, -1)\n        seq_hidden = seq_hidden + pos_emb\n        # fetch the last hidden state of last timestamp\n        ht = self.gather_indexes(seq_hidden, item_seq_len - 1)\n        q1 = self.linear_one(ht).view(ht.size(0), 1, ht.size(1))\n        q2 = self.linear_two(seq_hidden)\n\n        alpha = self.linear_three(torch.sigmoid(q1 + q2))\n        a = torch.sum(alpha * seq_hidden * mask.view(mask.size(0), -1, 1).float(), 1)\n        seq_output = self.linear_transform(torch.cat([a, ht], dim=1))\n        # Normalize session embeddings\n        seq_output = F.normalize(seq_output, dim=-1)\n        return seq_output\n\n    def calculate_loss(self, interaction):\n        x = interaction['x']\n        edge_index = interaction['edge_index']\n        alias_inputs = interaction['alias_inputs']\n        item_seq_len = interaction[self.ITEM_SEQ_LEN]\n        seq_output = self.forward(x, edge_index, alias_inputs, item_seq_len)\n        pos_items = interaction[self.POS_ITEM_ID]\n        if self.loss_type == 'BPR':\n       
     neg_items = interaction[self.NEG_ITEM_ID]\n            pos_items_emb = F.normalize(self.item_embedding(pos_items), dim=-1)\n            neg_items_emb = F.normalize(self.item_embedding(neg_items), dim=-1)\n            pos_score = torch.sum(seq_output * pos_items_emb, dim=-1)  # [B]\n            neg_score = torch.sum(seq_output * neg_items_emb, dim=-1)  # [B]\n            loss = self.loss_fct(self.sigma * pos_score, self.sigma * neg_score)\n            return loss\n        else:  # self.loss_type = 'CE'\n            test_item_emb = F.normalize(self.item_embedding.weight, dim=-1)\n            logits = self.sigma * torch.matmul(seq_output, test_item_emb.transpose(0, 1))\n            loss = self.loss_fct(logits, pos_items)\n            return loss\n\n    def predict(self, interaction):\n        test_item = interaction[self.ITEM_ID]\n        x = interaction['x']\n        edge_index = interaction['edge_index']\n        alias_inputs = interaction['alias_inputs']\n        item_seq_len = interaction[self.ITEM_SEQ_LEN]\n        seq_output = self.forward(x, edge_index, alias_inputs, item_seq_len)\n        test_item_emb = F.normalize(self.item_embedding(test_item), dim=-1)\n        scores = torch.mul(seq_output, test_item_emb).sum(dim=1)  # [B]\n        return scores\n\n    def full_sort_predict(self, interaction):\n        x = interaction['x']\n        edge_index = interaction['edge_index']\n        alias_inputs = interaction['alias_inputs']\n        item_seq_len = interaction[self.ITEM_SEQ_LEN]\n        seq_output = self.forward(x, edge_index, alias_inputs, item_seq_len)\n        test_items_emb = F.normalize(self.item_embedding.weight, dim=-1)\n        scores = torch.matmul(seq_output, test_items_emb.transpose(0, 1))  # [B, n_items]\n        return scores\n"
  },
  {
    "path": "recbole_gnn/model/sequential_recommender/sgnnhn.py",
    "content": "# @Time   : 2022/3/28\n# @Author : Yupeng Hou\n# @Email  : houyupeng@ruc.edu.cn\n\nr\"\"\"\nSRGNN\n################################################\n\nReference:\n    Zhiqiang Pan et al. \"Star Graph Neural Networks for Session-based Recommendation.\" in CIKM 2020.\n\nReference code:\n    https://bitbucket.org/nudtpanzq/sgnn-hn\n\n\"\"\"\n\nimport math\nimport numpy as np\nimport torch\nfrom torch import nn\nfrom torch_geometric.nn import global_mean_pool, global_add_pool\nfrom torch_geometric.utils import softmax\nfrom recbole.model.abstract_recommender import SequentialRecommender\nfrom recbole.model.loss import BPRLoss\n\nfrom recbole_gnn.model.layers import SRGNNCell\n\n\ndef layer_norm(x):\n    ave_x = torch.mean(x, -1).unsqueeze(-1)\n    x = x - ave_x\n    norm_x = torch.sqrt(torch.sum(x**2, -1)).unsqueeze(-1)\n    y = x / norm_x\n    return y\n\n\nclass SGNNHN(SequentialRecommender):\n    r\"\"\"SGNN-HN applies a star graph neural network to model the complex transition relationship between items in an ongoing session.\n        To avoid overfitting, it applies highway networks to adaptively select embeddings from item representations.\n    \"\"\"\n\n    def __init__(self, config, dataset):\n        super(SGNNHN, self).__init__(config, dataset)\n\n        # load parameters info\n        self.embedding_size = config['embedding_size']\n        self.step = config['step']\n        self.device = config['device']\n        self.loss_type = config['loss_type']\n        self.scale = config['scale']\n\n        # item embedding\n        self.item_embedding = nn.Embedding(self.n_items, self.embedding_size, padding_idx=0)\n        self.max_seq_length = dataset.field2seqlen[self.ITEM_SEQ]\n        self.pos_embedding = nn.Embedding(self.max_seq_length, self.embedding_size)\n\n        # define layers and loss\n        self.gnncell = SRGNNCell(self.embedding_size)\n        self.linear_one = nn.Linear(self.embedding_size, self.embedding_size)\n        
self.linear_two = nn.Linear(self.embedding_size, self.embedding_size)\n        self.linear_three = nn.Linear(self.embedding_size, self.embedding_size)\n        self.linear_four = nn.Linear(self.embedding_size, 1, bias=False)\n        self.linear_transform = nn.Linear(self.embedding_size * 2, self.embedding_size)\n        if self.loss_type == 'BPR':\n            self.loss_fct = BPRLoss()\n        elif self.loss_type == 'CE':\n            self.loss_fct = nn.CrossEntropyLoss()\n        else:\n            raise NotImplementedError(\"Make sure 'loss_type' in ['BPR', 'CE']!\")\n\n        # parameters initialization\n        self._reset_parameters()\n\n    def _reset_parameters(self):\n        stdv = 1.0 / np.sqrt(self.embedding_size)\n        for weight in self.parameters():\n            weight.data.uniform_(-stdv, stdv)\n\n    def att_out(self, hidden, star_node, batch):\n        star_node_repeat = torch.index_select(star_node, 0, batch)\n        sim = (hidden * star_node_repeat).sum(dim=-1)\n        sim = softmax(sim, batch)\n        att_hidden = sim.unsqueeze(-1) * hidden\n        output = global_add_pool(att_hidden, batch)\n\n        return output\n\n    def forward(self, x, edge_index, batch, alias_inputs, item_seq_len):\n        mask = alias_inputs.gt(0)\n        hidden = self.item_embedding(x)\n\n        star_node = global_mean_pool(hidden, batch)\n        for i in range(self.step):\n            hidden = self.gnncell(hidden, edge_index)\n            star_node_repeat = torch.index_select(star_node, 0, batch)\n            sim = (hidden * star_node_repeat).sum(dim=-1, keepdim=True) / math.sqrt(self.embedding_size)\n            alpha = torch.sigmoid(sim)\n            hidden = (1 - alpha) * hidden + alpha * star_node_repeat\n            star_node = self.att_out(hidden, star_node, batch)\n\n        seq_hidden = hidden[alias_inputs]\n        bs, item_num, _ = seq_hidden.shape\n        pos_emb = self.pos_embedding.weight[:item_num]\n        pos_emb = 
pos_emb.unsqueeze(0).expand(bs, -1, -1)\n        seq_hidden = seq_hidden + pos_emb\n\n        # fetch the last hidden state of last timestamp\n        ht = self.gather_indexes(seq_hidden, item_seq_len - 1)\n        q1 = self.linear_one(ht).view(ht.size(0), 1, ht.size(1))\n        q2 = self.linear_two(seq_hidden)\n        q3 = self.linear_three(star_node).view(star_node.shape[0], 1, star_node.shape[1])\n\n        alpha = self.linear_four(torch.sigmoid(q1 + q2 + q3))\n        a = torch.sum(alpha * seq_hidden * mask.view(mask.size(0), -1, 1).float(), 1)\n        seq_output = self.linear_transform(torch.cat([a, ht], dim=1))\n        return layer_norm(seq_output)\n\n    def calculate_loss(self, interaction):\n        x = interaction['x']\n        edge_index = interaction['edge_index']\n        batch = interaction['batch']\n        alias_inputs = interaction['alias_inputs']\n        item_seq_len = interaction[self.ITEM_SEQ_LEN]\n        seq_output = self.forward(x, edge_index, batch, alias_inputs, item_seq_len)\n        pos_items = interaction[self.POS_ITEM_ID]\n        if self.loss_type == 'BPR':\n            neg_items = interaction[self.NEG_ITEM_ID]\n            pos_items_emb = layer_norm(self.item_embedding(pos_items))\n            neg_items_emb = layer_norm(self.item_embedding(neg_items))\n            pos_score = torch.sum(seq_output * pos_items_emb, dim=-1) * self.scale  # [B]\n            neg_score = torch.sum(seq_output * neg_items_emb, dim=-1) * self.scale  # [B]\n            loss = self.loss_fct(pos_score, neg_score)\n            return loss\n        else:  # self.loss_type = 'CE'\n            test_item_emb = layer_norm(self.item_embedding.weight)\n            logits = torch.matmul(seq_output, test_item_emb.transpose(0, 1)) * self.scale\n            loss = self.loss_fct(logits, pos_items)\n            return loss\n\n    def predict(self, interaction):\n        test_item = interaction[self.ITEM_ID]\n        x = interaction['x']\n        edge_index = 
interaction['edge_index']\n        batch = interaction['batch']\n        alias_inputs = interaction['alias_inputs']\n        item_seq_len = interaction[self.ITEM_SEQ_LEN]\n        seq_output = self.forward(x, edge_index, batch, alias_inputs, item_seq_len)\n        test_item_emb = layer_norm(self.item_embedding(test_item))\n        scores = torch.mul(seq_output, test_item_emb).sum(dim=1) * self.scale  # [B]\n        return scores\n\n    def full_sort_predict(self, interaction):\n        x = interaction['x']\n        edge_index = interaction['edge_index']\n        batch = interaction['batch']\n        alias_inputs = interaction['alias_inputs']\n        item_seq_len = interaction[self.ITEM_SEQ_LEN]\n        seq_output = self.forward(x, edge_index, batch, alias_inputs, item_seq_len)\n        test_items_emb = layer_norm(self.item_embedding.weight)\n        scores = torch.matmul(seq_output, test_items_emb.transpose(0, 1)) * self.scale  # [B, n_items]\n        return scores\n"
  },
  {
    "path": "recbole_gnn/model/sequential_recommender/srgnn.py",
    "content": "# @Time   : 2022/3/7\n# @Author : Yupeng Hou\n# @Email  : houyupeng@ruc.edu.cn\n\nr\"\"\"\nSRGNN\n################################################\n\nReference:\n    Shu Wu et al. \"Session-based Recommendation with Graph Neural Networks.\" in AAAI 2019.\n\nReference code:\n    https://github.com/CRIPAC-DIG/SR-GNN\n\n\"\"\"\nimport numpy as np\nimport torch\nfrom torch import nn\nfrom recbole.model.loss import BPRLoss\nfrom recbole.model.abstract_recommender import SequentialRecommender\n\nfrom recbole_gnn.model.layers import SRGNNCell\n\n\nclass SRGNN(SequentialRecommender):\n    r\"\"\"SRGNN regards the conversation history as a directed graph.\n    In addition to considering the connection between the item and the adjacent item,\n    it also considers the connection with other interactive items.\n\n    Such as: A example of a session sequence(eg:item1, item2, item3, item2, item4) and the connection matrix A\n\n    Outgoing edges:\n        === ===== ===== ===== =====\n         \\    1     2     3     4\n        === ===== ===== ===== =====\n         1    0     1     0     0\n         2    0     0    1/2   1/2\n         3    0     1     0     0\n         4    0     0     0     0\n        === ===== ===== ===== =====\n\n    Incoming edges:\n        === ===== ===== ===== =====\n         \\    1     2     3     4\n        === ===== ===== ===== =====\n         1    0     0     0     0\n         2   1/2    0    1/2    0\n         3    0     1     0     0\n         4    0     1     0     0\n        === ===== ===== ===== =====\n    \"\"\"\n\n    def __init__(self, config, dataset):\n        super(SRGNN, self).__init__(config, dataset)\n\n        # load parameters info\n        self.embedding_size = config['embedding_size']\n        self.step = config['step']\n        self.device = config['device']\n        self.loss_type = config['loss_type']\n\n        # item embedding\n        self.item_embedding = nn.Embedding(self.n_items, self.embedding_size, 
padding_idx=0)\n\n        # define layers and loss\n        self.gnncell = SRGNNCell(self.embedding_size)\n        self.linear_one = nn.Linear(self.embedding_size, self.embedding_size)\n        self.linear_two = nn.Linear(self.embedding_size, self.embedding_size)\n        self.linear_three = nn.Linear(self.embedding_size, 1, bias=False)\n        self.linear_transform = nn.Linear(self.embedding_size * 2, self.embedding_size)\n        if self.loss_type == 'BPR':\n            self.loss_fct = BPRLoss()\n        elif self.loss_type == 'CE':\n            self.loss_fct = nn.CrossEntropyLoss()\n        else:\n            raise NotImplementedError(\"Make sure 'loss_type' in ['BPR', 'CE']!\")\n\n        # parameters initialization\n        self._reset_parameters()\n\n    def _reset_parameters(self):\n        stdv = 1.0 / np.sqrt(self.embedding_size)\n        for weight in self.parameters():\n            weight.data.uniform_(-stdv, stdv)\n\n    def forward(self, x, edge_index, alias_inputs, item_seq_len):\n        mask = alias_inputs.gt(0)\n        hidden = self.item_embedding(x)\n        for i in range(self.step):\n            hidden = self.gnncell(hidden, edge_index)\n\n        seq_hidden = hidden[alias_inputs]\n        # fetch the last hidden state of last timestamp\n        ht = self.gather_indexes(seq_hidden, item_seq_len - 1)\n        q1 = self.linear_one(ht).view(ht.size(0), 1, ht.size(1))\n        q2 = self.linear_two(seq_hidden)\n\n        alpha = self.linear_three(torch.sigmoid(q1 + q2))\n        a = torch.sum(alpha * seq_hidden * mask.view(mask.size(0), -1, 1).float(), 1)\n        seq_output = self.linear_transform(torch.cat([a, ht], dim=1))\n        return seq_output\n\n    def calculate_loss(self, interaction):\n        x = interaction['x']\n        edge_index = interaction['edge_index']\n        alias_inputs = interaction['alias_inputs']\n        item_seq_len = interaction[self.ITEM_SEQ_LEN]\n        seq_output = self.forward(x, edge_index, alias_inputs, 
item_seq_len)\n        pos_items = interaction[self.POS_ITEM_ID]\n        if self.loss_type == 'BPR':\n            neg_items = interaction[self.NEG_ITEM_ID]\n            pos_items_emb = self.item_embedding(pos_items)\n            neg_items_emb = self.item_embedding(neg_items)\n            pos_score = torch.sum(seq_output * pos_items_emb, dim=-1)  # [B]\n            neg_score = torch.sum(seq_output * neg_items_emb, dim=-1)  # [B]\n            loss = self.loss_fct(pos_score, neg_score)\n            return loss\n        else:  # self.loss_type = 'CE'\n            test_item_emb = self.item_embedding.weight\n            logits = torch.matmul(seq_output, test_item_emb.transpose(0, 1))\n            loss = self.loss_fct(logits, pos_items)\n            return loss\n\n    def predict(self, interaction):\n        test_item = interaction[self.ITEM_ID]\n        x = interaction['x']\n        edge_index = interaction['edge_index']\n        alias_inputs = interaction['alias_inputs']\n        item_seq_len = interaction[self.ITEM_SEQ_LEN]\n        seq_output = self.forward(x, edge_index, alias_inputs, item_seq_len)\n        test_item_emb = self.item_embedding(test_item)\n        scores = torch.mul(seq_output, test_item_emb).sum(dim=1)  # [B]\n        return scores\n\n    def full_sort_predict(self, interaction):\n        x = interaction['x']\n        edge_index = interaction['edge_index']\n        alias_inputs = interaction['alias_inputs']\n        item_seq_len = interaction[self.ITEM_SEQ_LEN]\n        seq_output = self.forward(x, edge_index, alias_inputs, item_seq_len)\n        test_items_emb = self.item_embedding.weight\n        scores = torch.matmul(seq_output, test_items_emb.transpose(0, 1))  # [B, n_items]\n        return scores\n"
  },
  {
    "path": "recbole_gnn/model/sequential_recommender/tagnn.py",
    "content": "# @Time   : 2022/3/17\n# @Author : Yupeng Hou\n# @Email  : houyupeng@ruc.edu.cn\n\nr\"\"\"\nTAGNN\n################################################\n\nReference:\n    Feng Yu et al. \"TAGNN: Target Attentive Graph Neural Networks for Session-based Recommendation.\" in SIGIR 2020 short.\n    Implemented using PyTorch Geometric.\n\nReference code:\n    https://github.com/CRIPAC-DIG/TAGNN\n\n\"\"\"\nimport numpy as np\nimport torch\nfrom torch import nn\nimport torch.nn.functional as F\nfrom recbole.model.abstract_recommender import SequentialRecommender\n\nfrom recbole_gnn.model.layers import SRGNNCell\n\n\nclass TAGNN(SequentialRecommender):\n    r\"\"\"TAGNN introduces target-aware attention and adaptively activates different user interests with respect to varied target items.\n    \"\"\"\n\n    def __init__(self, config, dataset):\n        super(TAGNN, self).__init__(config, dataset)\n\n        # load parameters info\n        self.embedding_size = config['embedding_size']\n        self.step = config['step']\n        self.device = config['device']\n        self.loss_type = config['loss_type']\n\n        # item embedding\n        self.item_embedding = nn.Embedding(self.n_items, self.embedding_size, padding_idx=0)\n\n        # define layers and loss\n        self.gnncell = SRGNNCell(self.embedding_size)\n        self.linear_one = nn.Linear(self.embedding_size, self.embedding_size)\n        self.linear_two = nn.Linear(self.embedding_size, self.embedding_size)\n        self.linear_three = nn.Linear(self.embedding_size, 1, bias=False)\n        self.linear_transform = nn.Linear(self.embedding_size * 2, self.embedding_size)\n        self.linear_t = nn.Linear(self.embedding_size, self.embedding_size, bias=False)  #target attention\n        if self.loss_type == 'CE':\n            self.loss_fct = nn.CrossEntropyLoss()\n        else:\n            raise NotImplementedError(\"Make sure 'loss_type' in ['BPR', 'CE']!\")\n\n        # parameters initialization\n     
   self._reset_parameters()\n\n    def _reset_parameters(self):\n        stdv = 1.0 / np.sqrt(self.embedding_size)\n        for weight in self.parameters():\n            weight.data.uniform_(-stdv, stdv)\n\n    def forward(self, x, edge_index, alias_inputs, item_seq_len):\n        mask = alias_inputs.gt(0)\n        hidden = self.item_embedding(x)\n        for i in range(self.step):\n            hidden = self.gnncell(hidden, edge_index)\n\n        seq_hidden = hidden[alias_inputs]\n        # fetch the last hidden state of last timestamp\n        ht = self.gather_indexes(seq_hidden, item_seq_len - 1)\n        q1 = self.linear_one(ht).view(ht.size(0), 1, ht.size(1))\n        q2 = self.linear_two(seq_hidden)\n\n        alpha = self.linear_three(torch.sigmoid(q1 + q2))\n        alpha = F.softmax(alpha, 1)\n        a = torch.sum(alpha * seq_hidden * mask.view(mask.size(0), -1, 1).float(), 1)\n        seq_output = self.linear_transform(torch.cat([a, ht], dim=1))\n\n        seq_hidden = seq_hidden * mask.view(mask.shape[0], -1, 1).float()\n        qt = self.linear_t(seq_hidden)\n        b = self.item_embedding.weight\n        beta = F.softmax(b @ qt.transpose(1, 2), -1)\n        target = beta @ seq_hidden\n        a = seq_output.view(ht.shape[0], 1, ht.shape[1])  # b,1,d\n        a = a + target  # b,n,d\n        scores = torch.sum(a * b, -1)  # b,n\n        return scores\n\n    def calculate_loss(self, interaction):\n        x = interaction['x']\n        edge_index = interaction['edge_index']\n        alias_inputs = interaction['alias_inputs']\n        item_seq_len = interaction[self.ITEM_SEQ_LEN]\n        logits = self.forward(x, edge_index, alias_inputs, item_seq_len)\n        pos_items = interaction[self.POS_ITEM_ID]\n        loss = self.loss_fct(logits, pos_items)\n        return loss\n\n    def predict(self, interaction):\n        test_item = interaction[self.ITEM_ID]\n        x = interaction['x']\n        edge_index = interaction['edge_index']\n        alias_inputs = interaction['alias_inputs']\n        item_seq_len = interaction[self.ITEM_SEQ_LEN]\n        # forward() already scores all items via target attention,\n        # so gather each test item's score from the full score matrix\n        scores = self.forward(x, edge_index, alias_inputs, item_seq_len)\n        scores = scores.gather(1, test_item.unsqueeze(1)).squeeze(1)  # [B]\n        return scores\n\n    def full_sort_predict(self, interaction):\n        x = interaction['x']\n        edge_index = interaction['edge_index']\n        
alias_inputs = interaction['alias_inputs']\n        item_seq_len = interaction[self.ITEM_SEQ_LEN]\n        scores = self.forward(x, edge_index, alias_inputs, item_seq_len)\n        return scores\n"
  },
  {
    "path": "recbole_gnn/model/social_recommender/__init__.py",
    "content": "from recbole_gnn.model.social_recommender.diffnet import DiffNet\nfrom recbole_gnn.model.social_recommender.mhcn import MHCN\nfrom recbole_gnn.model.social_recommender.sept import SEPT"
  },
  {
    "path": "recbole_gnn/model/social_recommender/diffnet.py",
    "content": "# @Time   : 2022/3/15\n# @Author : Lanling Xu\n# @Email  : xulanling_sherry@163.com\n\nr\"\"\"\nDiffNet\n################################################\nReference:\n    Le Wu et al. \"A Neural Influence Diffusion Model for Social Recommendation.\" in SIGIR 2019.\n\nReference code:\n    https://github.com/PeiJieSun/diffnet\n\"\"\"\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\n\nfrom recbole.model.init import xavier_uniform_initialization\nfrom recbole.model.loss import BPRLoss, EmbLoss\nfrom recbole.utils import InputType\n\nfrom recbole_gnn.model.abstract_recommender import SocialRecommender\nfrom recbole_gnn.model.layers import BipartiteGCNConv\n\n\nclass DiffNet(SocialRecommender):\n    r\"\"\"DiffNet is a deep influence propagation model to stimulate how users are influenced by the recursive social diffusion process for social recommendation.\n    We implement the model following the original author with a pairwise training mode.\n    \"\"\"\n    input_type = InputType.PAIRWISE\n\n    def __init__(self, config, dataset):\n        super(DiffNet, self).__init__(config, dataset)\n\n        # load dataset info\n        self.edge_index, self.edge_weight = dataset.get_bipartite_inter_mat(row='user')\n        self.edge_index, self.edge_weight = self.edge_index.to(self.device), self.edge_weight.to(self.device)\n\n        self.net_edge_index, self.net_edge_weight = dataset.get_norm_net_adj_mat(row_norm=True)\n        self.net_edge_index, self.net_edge_weight = self.net_edge_index.to(self.device), self.net_edge_weight.to(self.device)\n\n        # load parameters info\n        self.embedding_size = config['embedding_size']  # int type:the embedding size of DiffNet\n        self.n_layers = config['n_layers']  # int type:the GCN layer num of DiffNet for social net\n        self.reg_weight = config['reg_weight']  # float32 type: the weight decay for l2 normalization\n        self.pretrained_review = config['pretrained_review']  # bool 
type:whether to load pre-trained review vectors of users and items\n\n        # define layers and loss\n        self.user_embedding = torch.nn.Embedding(num_embeddings=self.n_users, embedding_dim=self.embedding_size)\n        self.item_embedding = torch.nn.Embedding(num_embeddings=self.n_items, embedding_dim=self.embedding_size)\n        self.bipartite_gcn_conv = BipartiteGCNConv(dim=self.embedding_size)\n        self.mf_loss = BPRLoss()\n        self.reg_loss = EmbLoss()\n\n        # storage variables for full sort evaluation acceleration\n        self.restore_user_e = None\n        self.restore_item_e = None\n\n        # parameters initialization\n        self.apply(xavier_uniform_initialization)\n        self.other_parameter_name = ['restore_user_e', 'restore_item_e']\n\n        if self.pretrained_review:\n            # handle review information, map the origin review into the new space\n            self.user_review_embedding = nn.Embedding(self.n_users, self.embedding_size, padding_idx=0)\n            self.user_review_embedding.weight.requires_grad = False\n            self.user_review_embedding.weight.data.copy_(self.convertDistribution(dataset.user_feat['user_review_emb']))\n\n            self.item_review_embedding = nn.Embedding(self.n_items, self.embedding_size, padding_idx=0)\n            self.item_review_embedding.weight.requires_grad = False\n            self.item_review_embedding.weight.data.copy_(self.convertDistribution(dataset.item_feat['item_review_emb']))\n\n            self.user_fusion_layer = nn.Linear(self.embedding_size, self.embedding_size)\n            self.item_fusion_layer = nn.Linear(self.embedding_size, self.embedding_size)\n            self.activation = nn.Sigmoid()\n\n    def convertDistribution(self, x):\n        mean, std = torch.mean(x), torch.std(x)\n        y = (x - mean) * 0.2 / std\n        return y\n\n    def forward(self):\n        user_embedding = self.user_embedding.weight\n        final_item_embedding = 
self.item_embedding.weight\n\n        if self.pretrained_review:\n            user_reduce_dim_vector_matrix = self.activation(self.user_fusion_layer(self.user_review_embedding.weight))\n            item_reduce_dim_vector_matrix = self.activation(self.item_fusion_layer(self.item_review_embedding.weight))\n\n            user_review_vector_matrix = self.convertDistribution(user_reduce_dim_vector_matrix)\n            item_review_vector_matrix = self.convertDistribution(item_reduce_dim_vector_matrix)\n\n            user_embedding = user_embedding + user_review_vector_matrix\n            final_item_embedding = final_item_embedding + item_review_vector_matrix\n\n        user_embedding_from_consumed_items = self.bipartite_gcn_conv(x=(final_item_embedding, user_embedding), edge_index=self.edge_index.flip([0]), edge_weight=self.edge_weight, size=(self.n_items, self.n_users))\n\n        embeddings_list = [user_embedding]\n        for layer_idx in range(self.n_layers):\n            user_embedding = self.bipartite_gcn_conv((user_embedding, user_embedding), self.net_edge_index.flip([0]), self.net_edge_weight, size=(self.n_users, self.n_users))\n            embeddings_list.append(user_embedding)\n        final_user_embedding = torch.stack(embeddings_list, dim=1)\n        final_user_embedding = torch.sum(final_user_embedding, dim=1) + user_embedding_from_consumed_items\n\n        return final_user_embedding, final_item_embedding\n\n    def calculate_loss(self, interaction):\n        # clear the storage variable when training\n        if self.restore_user_e is not None or self.restore_item_e is not None:\n            self.restore_user_e, self.restore_item_e = None, None\n\n        user = interaction[self.USER_ID]\n        pos_item = interaction[self.ITEM_ID]\n        neg_item = interaction[self.NEG_ITEM_ID]\n\n        user_all_embeddings, item_all_embeddings = self.forward()\n        u_embeddings = user_all_embeddings[user]\n        pos_embeddings = item_all_embeddings[pos_item]\n  
      neg_embeddings = item_all_embeddings[neg_item]\n\n        # calculate BPR Loss\n        pos_scores = torch.mul(u_embeddings, pos_embeddings).sum(dim=1)\n        neg_scores = torch.mul(u_embeddings, neg_embeddings).sum(dim=1)\n        mf_loss = self.mf_loss(pos_scores, neg_scores)\n\n        # calculate regularization Loss\n        u_ego_embeddings = self.user_embedding(user)\n        pos_ego_embeddings = self.item_embedding(pos_item)\n        neg_ego_embeddings = self.item_embedding(neg_item)\n\n        reg_loss = self.reg_loss(u_ego_embeddings, pos_ego_embeddings, neg_ego_embeddings)\n        loss = mf_loss + self.reg_weight * reg_loss\n\n        return loss\n\n    def predict(self, interaction):\n        user = interaction[self.USER_ID]\n        item = interaction[self.ITEM_ID]\n\n        user_all_embeddings, item_all_embeddings = self.forward()\n\n        u_embeddings = user_all_embeddings[user]\n        i_embeddings = item_all_embeddings[item]\n        scores = torch.mul(u_embeddings, i_embeddings).sum(dim=1)\n        return scores\n\n    def full_sort_predict(self, interaction):\n        user = interaction[self.USER_ID]\n        if self.restore_user_e is None or self.restore_item_e is None:\n            self.restore_user_e, self.restore_item_e = self.forward()\n        # get user embedding from storage variable\n        u_embeddings = self.restore_user_e[user]\n\n        # dot with all item embedding to accelerate\n        scores = torch.matmul(u_embeddings, self.restore_item_e.transpose(0, 1))\n\n        return scores.view(-1)"
  },
  {
    "path": "recbole_gnn/model/social_recommender/mhcn.py",
    "content": "# @Time   : 2022/4/5\n# @Author : Lanling Xu\n# @Email  : xulanling_sherry@163.com\n\nr\"\"\"\nMHCN\n################################################\nReference:\n    Junliang Yu et al. \"Self-Supervised Multi-Channel Hypergraph Convolutional Network for Social Recommendation.\" in WWW 2021.\n\nReference code:\n    https://github.com/Coder-Yu/QRec\n\"\"\"\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom scipy.sparse import coo_matrix\n\nfrom recbole.model.init import xavier_uniform_initialization\nfrom recbole.model.loss import BPRLoss, EmbLoss\nfrom recbole.utils import InputType\n\nfrom recbole_gnn.model.abstract_recommender import SocialRecommender\nfrom recbole_gnn.model.layers import BipartiteGCNConv\n\n\nclass GatingLayer(nn.Module):\n    def __init__(self, dim):\n        super(GatingLayer, self).__init__()\n        self.dim = dim\n        self.linear = nn.Linear(self.dim, self.dim)\n        self.activation = nn.Sigmoid()\n\n    def forward(self, emb):\n        embedding = self.linear(emb)\n        embedding = self.activation(embedding)\n        embedding = torch.mul(emb, embedding)\n        return embedding\n\n\nclass AttLayer(nn.Module):\n    def __init__(self, dim):\n        super(AttLayer, self).__init__()\n        self.dim = dim\n        self.attention_mat = nn.Parameter(torch.randn([self.dim, self.dim]))\n        self.attention = nn.Parameter(torch.randn([1, self.dim]))\n\n    def forward(self, *embs):\n        weights = []\n        emb_list = []\n        for embedding in embs:\n            weights.append(torch.sum(torch.mul(self.attention, torch.matmul(embedding, self.attention_mat)), dim=1))\n            emb_list.append(embedding)\n        score = torch.nn.Softmax(dim=0)(torch.stack(weights, dim=0))\n        embeddings = torch.stack(emb_list, dim=0)\n        mixed_embeddings = torch.mul(embeddings, score.unsqueeze(dim=2).repeat(1, 1, self.dim)).sum(dim=0)\n        return 
mixed_embeddings\n\n\nclass MHCN(SocialRecommender):\n    r\"\"\"MHCN fuses hypergraph modeling and graph neural networks in social recommendation by \n    exploiting multiple types of high-order user relations under a multi-channel setting.\n    \n    We implement the model following the original author with a pairwise training mode.\n    \"\"\"\n    input_type = InputType.PAIRWISE\n\n    def __init__(self, config, dataset):\n        super(MHCN, self).__init__(config, dataset)\n\n        # load dataset info\n        self.R_user_edge_index, self.R_user_edge_weight, self.R_item_edge_index, self.R_item_edge_weight = self.get_bipartite_inter_mat(dataset)\n        H_s, H_j, H_p = self.get_motif_adj_matrix(dataset)\n\n        # transform matrix to edge index and edge weight for convolution\n        self.H_s_edge_index, self.H_s_edge_weight = self.get_edge_index_weight(H_s)\n        self.H_j_edge_index, self.H_j_edge_weight = self.get_edge_index_weight(H_j)\n        self.H_p_edge_index, self.H_p_edge_weight = self.get_edge_index_weight(H_p)\n\n        # load parameters info\n        self.embedding_size = config['embedding_size']\n        self.n_layers = config['n_layers']\n        self.ssl_reg = config['ssl_reg']\n        self.reg_weight = config['reg_weight']\n\n        # define embedding and loss\n        self.user_embedding = nn.Embedding(self.n_users, self.embedding_size)\n        self.item_embedding = nn.Embedding(self.n_items, self.embedding_size)\n        self.bipartite_gcn_conv = BipartiteGCNConv(dim=self.embedding_size)\n        self.mf_loss = BPRLoss()\n        self.reg_loss = EmbLoss()\n\n        # define gating layers\n        self.gating_c1 = GatingLayer(self.embedding_size)\n        self.gating_c2 = GatingLayer(self.embedding_size)\n        self.gating_c3 = GatingLayer(self.embedding_size)\n        self.gating_simple = GatingLayer(self.embedding_size)\n\n        # define self supervised gating layers\n        self.ss_gating_c1 = 
GatingLayer(self.embedding_size)\n        self.ss_gating_c2 = GatingLayer(self.embedding_size)\n        self.ss_gating_c3 = GatingLayer(self.embedding_size)\n\n        # define attention layers\n        self.attention_layer = AttLayer(self.embedding_size)\n\n        # storage variables for full sort evaluation acceleration\n        self.restore_user_e = None\n        self.restore_item_e = None\n\n        # parameters initialization\n        self.apply(xavier_uniform_initialization)\n        self.other_parameter_name = ['restore_user_e', 'restore_item_e']\n\n    def get_bipartite_inter_mat(self, dataset):\n        R_user_edge_index, R_user_edge_weight = dataset.get_bipartite_inter_mat(row='user', row_norm=False)\n        R_item_edge_index, R_item_edge_weight = dataset.get_bipartite_inter_mat(row='item', row_norm=False)\n        return R_user_edge_index.to(self.device), R_user_edge_weight.to(self.device), R_item_edge_index.to(self.device), R_item_edge_weight.to(self.device)\n\n    def get_edge_index_weight(self, matrix):\n        matrix = coo_matrix(matrix)\n        edge_index = torch.stack([torch.LongTensor(matrix.row), torch.LongTensor(matrix.col)])\n        edge_weight = torch.FloatTensor(matrix.data)\n        return edge_index.to(self.device), edge_weight.to(self.device)\n\n    def get_motif_adj_matrix(self, dataset):\n        S = dataset.net_matrix()\n        Y = dataset.inter_matrix()\n        B = S.multiply(S.T)\n        U = S - B\n        C1 = (U.dot(U)).multiply(U.T)\n        A1 = C1 + C1.T\n        C2 = (B.dot(U)).multiply(U.T) + (U.dot(B)).multiply(U.T) + (U.dot(U)).multiply(B)\n        A2 = C2 + C2.T\n        C3 = (B.dot(B)).multiply(U) + (B.dot(U)).multiply(B) + (U.dot(B)).multiply(B)\n        A3 = C3 + C3.T\n        A4 = (B.dot(B)).multiply(B)\n        C5 = (U.dot(U)).multiply(U) + (U.dot(U.T)).multiply(U) + (U.T.dot(U)).multiply(U)\n        A5 = C5 + C5.T\n        A6 = (U.dot(B)).multiply(U) + (B.dot(U.T)).multiply(U.T) + (U.T.dot(U)).multiply(B)\n     
   A7 = (U.T.dot(B)).multiply(U.T) + (B.dot(U)).multiply(U) + (U.dot(U.T)).multiply(B)\n        A8 = (Y.dot(Y.T)).multiply(B)\n        A9 = (Y.dot(Y.T)).multiply(U)\n        A9 = A9 + A9.T\n        A10  = Y.dot(Y.T) - A8 - A9\n        # addition and row-normalization\n        H_s = sum([A1, A2, A3, A4, A5, A6, A7])\n        # add epsilon to avoid divide by zero Warning\n        H_s = H_s.multiply(1.0 / (H_s.sum(axis=1) + 1e-7).reshape(-1, 1))\n        H_j = sum([A8, A9])\n        H_j = H_j.multiply(1.0 / (H_j.sum(axis=1) + 1e-7).reshape(-1, 1))\n        H_p = A10\n        H_p = H_p.multiply(H_p > 1)\n        H_p = H_p.multiply(1.0 / (H_p.sum(axis=1) + 1e-7).reshape(-1, 1))\n        return H_s, H_j, H_p\n\n    def forward(self):\n        # get ego embeddings\n        user_embeddings = self.user_embedding.weight\n        item_embeddings = self.item_embedding.weight\n\n        # self-gating\n        user_embeddings_c1 = self.gating_c1(user_embeddings)\n        user_embeddings_c2 = self.gating_c2(user_embeddings)\n        user_embeddings_c3 = self.gating_c3(user_embeddings)\n        simple_user_embeddings = self.gating_simple(user_embeddings)\n\n        all_embeddings_c1 = [user_embeddings_c1]\n        all_embeddings_c2 = [user_embeddings_c2]\n        all_embeddings_c3 = [user_embeddings_c3]\n        all_embeddings_simple = [simple_user_embeddings]\n        all_embeddings_i = [item_embeddings]\n\n        for layer_idx in range(self.n_layers):\n            mixed_embedding = self.attention_layer(user_embeddings_c1, user_embeddings_c2, user_embeddings_c3) + simple_user_embeddings / 2\n            \n            # Channel S\n            user_embeddings_c1 = self.bipartite_gcn_conv((user_embeddings_c1, user_embeddings_c1), self.H_s_edge_index.flip([0]), self.H_s_edge_weight, size=(self.n_users, self.n_users))\n            norm_embeddings = F.normalize(user_embeddings_c1, p=2, dim=1)\n            all_embeddings_c1 += [norm_embeddings]\n\n            # Channel J\n            
user_embeddings_c2 = self.bipartite_gcn_conv((user_embeddings_c2, user_embeddings_c2), self.H_j_edge_index.flip([0]), self.H_j_edge_weight, size=(self.n_users, self.n_users))\n            norm_embeddings = F.normalize(user_embeddings_c2, p=2, dim=1)\n            all_embeddings_c2 += [norm_embeddings]\n\n            # Channel P\n            user_embeddings_c3 = self.bipartite_gcn_conv((user_embeddings_c3, user_embeddings_c3), self.H_p_edge_index.flip([0]), self.H_p_edge_weight, size=(self.n_users, self.n_users))\n            norm_embeddings = F.normalize(user_embeddings_c3, p=2, dim=1)\n            all_embeddings_c3 += [norm_embeddings]\n\n            # item convolution\n            new_item_embeddings = self.bipartite_gcn_conv((mixed_embedding, item_embeddings), self.R_item_edge_index.flip([0]), self.R_item_edge_weight, size=(self.n_users, self.n_items))\n            norm_embeddings = F.normalize(new_item_embeddings, p=2, dim=1)\n            all_embeddings_i += [norm_embeddings]\n            simple_user_embeddings = self.bipartite_gcn_conv((item_embeddings, simple_user_embeddings), self.R_user_edge_index.flip([0]), self.R_user_edge_weight, size=(self.n_items, self.n_users))\n            norm_embeddings = F.normalize(simple_user_embeddings, p=2, dim=1)\n            all_embeddings_simple += [norm_embeddings]\n            item_embeddings = new_item_embeddings\n\n        # averaging the channel-specific embeddings\n        user_embeddings_c1 = torch.stack(all_embeddings_c1, dim=0).sum(dim=0)\n        user_embeddings_c2 = torch.stack(all_embeddings_c2, dim=0).sum(dim=0)\n        user_embeddings_c3 = torch.stack(all_embeddings_c3, dim=0).sum(dim=0)\n        simple_user_embeddings = torch.stack(all_embeddings_simple, dim=0).sum(dim=0)\n        item_all_embeddings = torch.stack(all_embeddings_i, dim=0).sum(dim=0)\n\n        # aggregating channel-specific embeddings\n        user_all_embeddings = self.attention_layer(user_embeddings_c1, user_embeddings_c2, 
user_embeddings_c3)\n        user_all_embeddings += simple_user_embeddings / 2\n\n        return user_all_embeddings, item_all_embeddings\n\n    def hierarchical_self_supervision(self, user_embeddings, edge_index, edge_weight):\n        def row_shuffle(embedding):\n            shuffled_embeddings = embedding[torch.randperm(embedding.size(0))]\n            return shuffled_embeddings\n        def row_column_shuffle(embedding):\n            shuffled_embeddings = embedding[:, torch.randperm(embedding.size(1))]\n            shuffled_embeddings = shuffled_embeddings[torch.randperm(embedding.size(0))]\n            return shuffled_embeddings\n        def score(x1, x2):\n            return torch.sum(torch.mul(x1, x2), dim=1)\n\n        # For Douban, normalization is needed.\n        # user_embeddings = F.normalize(user_embeddings, p=2, dim=1) \n        edge_embeddings = self.bipartite_gcn_conv((user_embeddings, user_embeddings), edge_index.flip([0]), edge_weight, size=(self.n_users, self.n_users))\n        # Local MIM\n        pos = score(user_embeddings, edge_embeddings)\n        neg1 = score(row_shuffle(user_embeddings), edge_embeddings)\n        neg2 = score(row_column_shuffle(edge_embeddings), user_embeddings)\n        local_loss = torch.sum(-torch.log(torch.sigmoid(pos - neg1)) - torch.log(torch.sigmoid(neg1 - neg2)))\n        # Global MIM\n        graph = torch.mean(edge_embeddings, dim=0, keepdim=True)\n        pos = score(edge_embeddings, graph)\n        neg1 = score(row_column_shuffle(edge_embeddings), graph)\n        global_loss = torch.sum(-torch.log(torch.sigmoid(pos - neg1)))\n        return global_loss + local_loss\n\n    def calculate_loss(self, interaction):\n        # clear the storage variable when training\n        if self.restore_user_e is not None or self.restore_item_e is not None:\n            self.restore_user_e, self.restore_item_e = None, None\n\n        user = interaction[self.USER_ID]\n        pos_item = interaction[self.ITEM_ID]\n        
neg_item = interaction[self.NEG_ITEM_ID]\n\n        user_all_embeddings, item_all_embeddings = self.forward()\n        u_embeddings = user_all_embeddings[user]\n        pos_embeddings = item_all_embeddings[pos_item]\n        neg_embeddings = item_all_embeddings[neg_item]\n\n        # calculate BPR Loss\n        pos_scores = torch.mul(u_embeddings, pos_embeddings).sum(dim=1)\n        neg_scores = torch.mul(u_embeddings, neg_embeddings).sum(dim=1)\n        mf_loss = self.mf_loss(pos_scores, neg_scores)\n\n        # calculate self-supervised loss\n        ss_loss = self.hierarchical_self_supervision(self.ss_gating_c1(user_all_embeddings), self.H_s_edge_index, self.H_s_edge_weight)\n        ss_loss += self.hierarchical_self_supervision(self.ss_gating_c2(user_all_embeddings), self.H_j_edge_index, self.H_j_edge_weight)\n        ss_loss += self.hierarchical_self_supervision(self.ss_gating_c3(user_all_embeddings), self.H_p_edge_index, self.H_p_edge_weight)\n\n        # calculate regularization Loss\n        u_ego_embeddings = self.user_embedding(user)\n        pos_ego_embeddings = self.item_embedding(pos_item)\n        neg_ego_embeddings = self.item_embedding(neg_item)\n\n        reg_loss = self.reg_loss(u_ego_embeddings, pos_ego_embeddings, neg_ego_embeddings)\n        loss = mf_loss + self.ssl_reg * ss_loss + self.reg_weight * reg_loss\n\n        return loss\n\n    def predict(self, interaction):\n        user = interaction[self.USER_ID]\n        item = interaction[self.ITEM_ID]\n\n        user_all_embeddings, item_all_embeddings = self.forward()\n\n        u_embeddings = user_all_embeddings[user]\n        i_embeddings = item_all_embeddings[item]\n        scores = torch.mul(u_embeddings, i_embeddings).sum(dim=1)\n        return scores\n\n    def full_sort_predict(self, interaction):\n        user = interaction[self.USER_ID]\n        if self.restore_user_e is None or self.restore_item_e is None:\n            self.restore_user_e, self.restore_item_e = self.forward()\n      
  # get user embedding from storage variable\n        u_embeddings = self.restore_user_e[user]\n\n        # dot with all item embedding to accelerate\n        scores = torch.matmul(u_embeddings, self.restore_item_e.transpose(0, 1))\n\n        return scores.view(-1)"
  },
  {
    "path": "recbole_gnn/model/social_recommender/sept.py",
    "content": "# @Time   : 2022/3/29\n# @Author : Lanling Xu\n# @Email  : xulanling_sherry@163.com\n\nr\"\"\"\nSEPT\n################################################\nReference:\n    Junliang Yu et al. \"Socially-Aware Self-Supervised Tri-Training for Recommendation.\" in KDD 2021.\n\nReference code:\n    https://github.com/Coder-Yu/QRec\n\"\"\"\n\nimport numpy as np\nimport torch\nimport torch.nn.functional as F\n\nfrom scipy.sparse import coo_matrix, eye\nfrom torch_geometric.utils import degree\n\nfrom recbole.model.init import xavier_uniform_initialization\nfrom recbole.model.loss import BPRLoss, EmbLoss\nfrom recbole.utils import InputType\n\nfrom recbole_gnn.model.abstract_recommender import SocialRecommender\nfrom recbole_gnn.model.layers import LightGCNConv\n\n\nclass SEPT(SocialRecommender):\n    r\"\"\"SEPT is a socially-aware GCN-based SSL framework that integrates tri-training.\n\n    Under the regime of tri-training for multi-view encoding, the framework builds three graph \n    encoders (one for recommendation) upon the augmented views and iteratively improves each \n    encoder with self-supervision signals from other users, generated by the other two encoders.\n\n    We implement the model following the original author with a pairwise training mode.\n    \"\"\"\n    input_type = InputType.PAIRWISE\n\n    def __init__(self, config, dataset):\n        super(SEPT, self).__init__(config, dataset)\n\n        # load dataset info\n        self.edge_index, self.edge_weight = dataset.get_norm_adj_mat()\n        self.edge_index, self.edge_weight = self.edge_index.to(self.device), self.edge_weight.to(self.device)\n\n        # generate intermediate data\n        self.social_edge_index, self.social_edge_weight, self.sharing_edge_index, \\\n        self.sharing_edge_weight = self.get_user_view_matrix(dataset)\n\n        self._user = dataset.inter_feat[dataset.uid_field]\n        self._item = dataset.inter_feat[dataset.iid_field]\n\n        self._src_user = 
dataset.net_feat[dataset.net_src_field]\n        self._tgt_user = dataset.net_feat[dataset.net_tgt_field]\n\n        # load parameters info\n        self.latent_dim = config[\"embedding_size\"]\n        self.n_layers = int(config[\"n_layers\"])\n        self.drop_ratio = config[\"drop_ratio\"]\n        self.instance_cnt = config[\"instance_cnt\"]\n        self.reg_weight = config[\"reg_weight\"]\n        self.ssl_weight = config[\"ssl_weight\"]\n        self.ssl_tau = config[\"ssl_tau\"]\n\n        # define layers and loss\n        self.user_embedding = torch.nn.Embedding(self.n_users, self.latent_dim)\n        self.item_embedding = torch.nn.Embedding(self.n_items, self.latent_dim)\n        self.gcn_conv = LightGCNConv(dim=self.latent_dim)\n        self.mf_loss = BPRLoss()\n        self.reg_loss = EmbLoss()\n\n        # storage variables for full sort evaluation acceleration\n        self.user_all_embeddings = None\n        self.restore_user_e = None\n        self.restore_item_e = None\n\n        # parameters initialization\n        self.apply(xavier_uniform_initialization)\n        self.other_parameter_name = ['restore_user_e', 'restore_item_e']\n\n    def get_norm_edge_weight(self, edge_index, node_num):\n        r\"\"\"Get normalized edge weight using the laplace matrix.\n        \"\"\"\n        deg = degree(edge_index[0], node_num)\n        norm_deg = 1. 
/ torch.sqrt(torch.where(deg == 0, torch.ones([1]), deg))\n        edge_weight = norm_deg[edge_index[0]] * norm_deg[edge_index[1]]\n        return edge_weight\n\n    def get_user_view_matrix(self, dataset):\n        # Friend View: A_f = (SS) ⊙ S\n        social_mat = dataset.net_matrix()\n        social_matrix = social_mat.dot(social_mat)\n        social_matrix =  social_matrix.toarray() * social_mat.toarray() + eye(self.n_users)\n        social_matrix = coo_matrix(social_matrix)\n        social_edge_index = torch.stack([torch.LongTensor(social_matrix.row), torch.LongTensor(social_matrix.col)])\n        social_edge_weight = self.get_norm_edge_weight(social_edge_index, self.n_users)\n\n        # Sharing View: A_s = (RR^T) ⊙ S\n        rating_mat = dataset.inter_matrix()\n        sharing_matrix = rating_mat.dot(rating_mat.T)\n        sharing_matrix = sharing_matrix.toarray() * social_mat.toarray() + eye(self.n_users)\n        sharing_matrix = coo_matrix(sharing_matrix)\n        sharing_edge_index = torch.stack([torch.LongTensor(sharing_matrix.row), torch.LongTensor(sharing_matrix.col)])\n        sharing_edge_weight = self.get_norm_edge_weight(sharing_edge_index, self.n_users)\n\n        return social_edge_index.to(self.device), social_edge_weight.to(self.device), \\\n               sharing_edge_index.to(self.device), sharing_edge_weight.to(self.device)\n\n    def subgraph_construction(self):\n        r\"\"\"Perturb the joint graph to construct subgraph for integrated self-supervision signals.\n        \"\"\"\n        def rand_sample(high, size=None, replace=True):\n            return np.random.choice(np.arange(high), size=size, replace=replace)\n        # perturb the raw graph with edge dropout\n        keep = rand_sample(len(self._user), size=int(len(self._user) * (1 - self.drop_ratio)), replace=False)\n        row = self._user[keep]\n        col = self._item[keep] + self.n_users\n\n        # perturb the social graph with edge dropout\n        net_keep = 
rand_sample(len(self._src_user), size=int(len(self._src_user) * (1 - self.drop_ratio)), replace=False)\n        net_row = self._src_user[net_keep]\n        net_col = self._tgt_user[net_keep]\n\n        # concatenation and normalization\n        edge_index1 = torch.stack([row, col])\n        edge_index2 = torch.stack([col, row])\n        edge_index3 = torch.stack([net_row, net_col])\n        edge_index = torch.cat([edge_index1, edge_index2, edge_index3], dim=1)\n        edge_weight = self.get_norm_edge_weight(edge_index, self.n_users + self.n_items)\n\n        self.sub_graph = edge_index.to(self.device), edge_weight.to(self.device)\n\n    def get_ego_embeddings(self):\n        r\"\"\"Get the embedding of users and items and combine to an embedding matrix.\n        Returns:\n            Tensor of the embedding matrix. Shape of [n_items+n_users, embedding_dim]\n        \"\"\"\n        user_embeddings = self.user_embedding.weight\n        item_embeddings = self.item_embedding.weight\n        ego_embeddings = torch.cat([user_embeddings, item_embeddings], dim=0)\n        return ego_embeddings\n\n    def forward(self, graph=None):\n        all_embeddings = torch.cat([self.user_embedding.weight, self.item_embedding.weight])\n        embeddings_list = [all_embeddings]\n\n        if graph is None:  # for the original graph\n            edge_index, edge_weight = self.edge_index, self.edge_weight\n        else:  # for the augmented graph\n            edge_index, edge_weight = graph\n\n        for _ in range(self.n_layers):\n            all_embeddings = self.gcn_conv(all_embeddings, edge_index, edge_weight)\n            norm_embeddings = F.normalize(all_embeddings, p=2, dim=1)\n            embeddings_list.append(norm_embeddings)\n\n        all_embeddings = torch.stack(embeddings_list, dim=1)\n        all_embeddings = torch.sum(all_embeddings, dim=1)\n        user_all_embeddings, item_all_embeddings = torch.split(all_embeddings, [self.n_users, self.n_items], dim=0)\n\n        
return user_all_embeddings, item_all_embeddings\n\n    def user_view_forward(self):\n        all_social_embeddings = self.user_embedding.weight\n        all_sharing_embeddings = self.user_embedding.weight\n        social_embeddings_list = [all_social_embeddings]\n        sharing_embeddings_list = [all_sharing_embeddings]\n\n        for _ in range(self.n_layers):\n            # friend view\n            all_social_embeddings = self.gcn_conv(all_social_embeddings, self.social_edge_index, self.social_edge_weight)\n            norm_social_embeddings = F.normalize(all_social_embeddings, p=2, dim=1)\n            social_embeddings_list.append(norm_social_embeddings)\n            # sharing view\n            all_sharing_embeddings = self.gcn_conv(all_sharing_embeddings, self.sharing_edge_index, self.sharing_edge_weight)\n            norm_sharing_embeddings = F.normalize(all_sharing_embeddings, p=2, dim=1)\n            sharing_embeddings_list.append(norm_sharing_embeddings)\n\n        social_all_embeddings = torch.stack(social_embeddings_list, dim=1)\n        social_all_embeddings = torch.sum(social_all_embeddings, dim=1)\n\n        sharing_all_embeddings = torch.stack(sharing_embeddings_list, dim=1)\n        sharing_all_embeddings = torch.sum(sharing_all_embeddings, dim=1)\n\n        return social_all_embeddings, sharing_all_embeddings\n\n    def label_prediction(self, emb, aug_emb):\n        prob = torch.matmul(emb, aug_emb.transpose(0, 1))\n        prob = F.softmax(prob, dim=1)\n        return prob\n\n    def sampling(self, logits):\n        return torch.topk(logits, k=self.instance_cnt)[1]\n\n    def generate_pesudo_labels(self, prob1, prob2):\n        positive = (prob1 + prob2) / 2\n        pos_examples = self.sampling(positive)\n        return pos_examples\n\n    def calculate_ssl_loss(self, aug_emb, positive, emb):\n        pos_emb = aug_emb[positive]\n        pos_score = torch.sum(emb.unsqueeze(dim=1).repeat(1, self.instance_cnt, 1) * pos_emb, dim=2)\n        
ttl_score = torch.matmul(emb, aug_emb.transpose(0, 1))\n        pos_score = torch.sum(torch.exp(pos_score / self.ssl_tau), dim=1)\n        ttl_score = torch.sum(torch.exp(ttl_score / self.ssl_tau), dim=1)\n        ssl_loss = - torch.sum(torch.log(pos_score / ttl_score))\n        return ssl_loss\n\n    def calculate_rec_loss(self, interaction):\n        # clear the storage variable when training\n        if self.restore_user_e is not None or self.restore_item_e is not None:\n            self.restore_user_e, self.restore_item_e = None, None\n\n        user = interaction[self.USER_ID]\n        pos_item = interaction[self.ITEM_ID]\n        neg_item = interaction[self.NEG_ITEM_ID]\n\n        self.user_all_embeddings, item_all_embeddings = self.forward()\n        u_embeddings = self.user_all_embeddings[user]\n        pos_embeddings = item_all_embeddings[pos_item]\n        neg_embeddings = item_all_embeddings[neg_item]\n\n        # calculate BPR Loss\n        pos_scores = torch.mul(u_embeddings, pos_embeddings).sum(dim=1)\n        neg_scores = torch.mul(u_embeddings, neg_embeddings).sum(dim=1)\n        mf_loss = self.mf_loss(pos_scores, neg_scores)\n\n        # calculate regularization Loss\n        u_ego_embeddings = self.user_embedding(user)\n        pos_ego_embeddings = self.item_embedding(pos_item)\n        neg_ego_embeddings = self.item_embedding(neg_item)\n\n        reg_loss = self.reg_loss(u_ego_embeddings, pos_ego_embeddings, neg_ego_embeddings)\n        loss = mf_loss + self.reg_weight * reg_loss\n\n        return loss\n\n    def calculate_loss(self, interaction):\n        # preference view\n        rec_loss = self.calculate_rec_loss(interaction)\n\n        # unlabeled sample view\n        aug_user_embeddings, _ = self.forward(graph=self.sub_graph)\n\n        # friend and sharing views\n        friend_view_embeddings, sharing_view_embeddings = self.user_view_forward()\n\n        user = interaction[self.USER_ID]\n        aug_u_embeddings = 
aug_user_embeddings[user]\n        social_u_embeddings = friend_view_embeddings[user]\n        sharing_u_embeddings = sharing_view_embeddings[user]\n        rec_u_embeddings = self.user_all_embeddings[user]\n\n        aug_u_embeddings = F.normalize(aug_u_embeddings, p=2, dim=1)\n        social_u_embeddings = F.normalize(social_u_embeddings, p=2, dim=1)\n        sharing_u_embeddings = F.normalize(sharing_u_embeddings, p=2, dim=1)\n        rec_u_embeddings = F.normalize(rec_u_embeddings, p=2, dim=1)\n\n        # self-supervision prediction\n        social_prediction = self.label_prediction(social_u_embeddings, aug_u_embeddings)\n        sharing_prediction = self.label_prediction(sharing_u_embeddings, aug_u_embeddings)\n        rec_prediction = self.label_prediction(rec_u_embeddings, aug_u_embeddings)\n\n        # find informative positive examples for each encoder\n        friend_pos = self.generate_pesudo_labels(sharing_prediction, rec_prediction)\n        sharing_pos = self.generate_pesudo_labels(social_prediction, rec_prediction)\n        rec_pos = self.generate_pesudo_labels(social_prediction, sharing_prediction)\n\n        # neighbor-discrimination based contrastive learning\n        ssl_loss = self.calculate_ssl_loss(aug_u_embeddings, friend_pos, social_u_embeddings)\n        ssl_loss += self.calculate_ssl_loss(aug_u_embeddings, sharing_pos, sharing_u_embeddings)\n        ssl_loss += self.calculate_ssl_loss(aug_u_embeddings, rec_pos, rec_u_embeddings)\n\n        # L = L_r + β * L_{ssl}\n        loss = rec_loss + self.ssl_weight * ssl_loss\n\n        return loss\n\n    def predict(self, interaction):\n        user = interaction[self.USER_ID]\n        item = interaction[self.ITEM_ID]\n\n        user_all_embeddings, item_all_embeddings = self.forward()\n\n        u_embeddings = user_all_embeddings[user]\n        i_embeddings = item_all_embeddings[item]\n        scores = torch.mul(u_embeddings, i_embeddings).sum(dim=1)\n        return scores\n\n    def 
full_sort_predict(self, interaction):\n        user = interaction[self.USER_ID]\n        if self.restore_user_e is None or self.restore_item_e is None:\n            self.restore_user_e, self.restore_item_e = self.forward()\n        # get user embedding from storage variable\n        u_embeddings = self.restore_user_e[user]\n\n        # dot with all item embedding to accelerate\n        scores = torch.matmul(u_embeddings, self.restore_item_e.transpose(0, 1))\n\n        return scores.view(-1)"
  },
  {
    "path": "recbole_gnn/properties/model/DiffNet.yaml",
    "content": "embedding_size: 64\nn_layers: 2\nreg_weight: 1e-05\npretrained_review: False"
  },
  {
    "path": "recbole_gnn/properties/model/DirectAU.yaml",
    "content": "embedding_size: 64\nencoder: \"MF\"   # \"MF\" or \"LightGCN\"\ngamma: 0.5\nweight_decay: 1e-6\ntrain_batch_size: 256\n\n# n_layers: 3 # needed for LightGCN"
  },
  {
    "path": "recbole_gnn/properties/model/GCEGNN.yaml",
    "content": "embedding_size: 64\nleakyrelu_alpha: 0.2\ndropout_local: 0.\ndropout_global: 0.5\ndropout_gcn: 0.\nloss_type: CE\ngnn_transform: sess_graph\n\n# global\nbuild_global_graph: True\nsample_num: 12\nhop: 1\n"
  },
  {
    "path": "recbole_gnn/properties/model/GCSAN.yaml",
    "content": "n_layers: 1\nn_heads: 1\nhidden_size: 64\ninner_size: 256\nhidden_dropout_prob: 0.2\nattn_dropout_prob: 0.2\nhidden_act: 'gelu'\nlayer_norm_eps: 1e-12\ninitializer_range: 0.02\nstep: 1\nweight: 0.6\nreg_weight: 5e-5\nloss_type: 'CE'\ngnn_transform: sess_graph\n"
  },
  {
    "path": "recbole_gnn/properties/model/HMLET.yaml",
    "content": "embedding_size: 64\nn_layers: 4\nreg_weight: 1e-05\nrequire_pow: True\ngate_layer_ids: [2,3]\ngating_mlp_dims: [64,16,2]\ndropout_ratio: 0.2\nactivation_function: elu\n\nwarm_up_epochs: 50\nori_temp: 0.7\nmin_temp: 0.01\ngum_temp_decay: 0.005\nepoch_temp_decay: 1\n"
  },
  {
    "path": "recbole_gnn/properties/model/LESSR.yaml",
    "content": "embedding_size: 64\nn_layers: 4\nbatch_norm: True\nfeat_drop: 0.2\nloss_type: CE\ngnn_transform: sess_graph\n"
  },
  {
    "path": "recbole_gnn/properties/model/LightGCL.yaml",
    "content": "embedding_size: 64              # (int) The embedding size of users and items.\nn_layers: 2                     # (int) The number of layers in LightGCL.\ndropout: 0.0                    # (float) The dropout ratio.\ntemp: 0.8                       # (float) The temperature in softmax.\nlambda1: 0.01                   # (float) The hyperparameter to control the strengths of SSL.\nlambda2: 1e-05                  # (float) The L2 regularization weight.\nq: 5                            # (int) A slightly overestimated rank of the adjacency matrix."
  },
  {
    "path": "recbole_gnn/properties/model/LightGCN.yaml",
    "content": "embedding_size: 64\nn_layers: 2\nreg_weight: 1e-05\nrequire_pow: True"
  },
  {
    "path": "recbole_gnn/properties/model/MHCN.yaml",
    "content": "embedding_size: 64\nn_layers: 2\nssl_reg: 1e-05\nreg_weight: 1e-05"
  },
  {
    "path": "recbole_gnn/properties/model/NCL.yaml",
    "content": "embedding_size: 64\nn_layers: 3\nreg_weight: 1e-4\n\nssl_temp: 0.1\nssl_reg: 1e-7\nhyper_layers: 1\n\nalpha: 1\n\nproto_reg: 8e-8\nnum_clusters: 1000\n\nm_step: 1\nwarm_up_step: 20\n"
  },
  {
    "path": "recbole_gnn/properties/model/NGCF.yaml",
    "content": "embedding_size: 64\nhidden_size_list: [64,64,64]\nnode_dropout: 0.0\nmessage_dropout: 0.1\nreg_weight: 1e-5"
  },
  {
    "path": "recbole_gnn/properties/model/NISER.yaml",
    "content": "embedding_size: 64\nstep: 1\nsigma: 16\nitem_dropout: 0.1\nloss_type: 'CE'\ngnn_transform: sess_graph"
  },
  {
    "path": "recbole_gnn/properties/model/SEPT.yaml",
    "content": "warm_up_epochs: 100\nembedding_size: 64\nn_layers: 2\ndrop_ratio: 0.3\ninstance_cnt: 10\nreg_weight: 1e-05\nssl_weight: 1e-07\nssl_tau: 0.1"
  },
  {
    "path": "recbole_gnn/properties/model/SGL.yaml",
    "content": "type: \"ED\"\nn_layers: 3\nssl_tau: 0.5\nreg_weight: 1e-5\nssl_weight: 0.05\ndrop_ratio: 0.1\nembedding_size: 64"
  },
  {
    "path": "recbole_gnn/properties/model/SGNNHN.yaml",
    "content": "embedding_size: 64\nstep: 6\nscale: 12\nloss_type: 'CE'\ngnn_transform: sess_graph\n"
  },
  {
    "path": "recbole_gnn/properties/model/SRGNN.yaml",
    "content": "embedding_size: 64\nstep: 1\nloss_type: 'CE'\ngnn_transform: sess_graph"
  },
  {
    "path": "recbole_gnn/properties/model/SSL4REC.yaml",
    "content": "embedding_size: 64\ndrop_ratio: 0.1\ntau: 0.1\nreg_weight: 1e-04\nssl_weight: 1e-05\nrequire_pow: True"
  },
  {
    "path": "recbole_gnn/properties/model/SimGCL.yaml",
    "content": "embedding_size: 64\nn_layers: 2\nreg_weight: 1e-4\n\nlambda: 0.5\neps: 0.1\ntemperature: 0.2\n"
  },
  {
    "path": "recbole_gnn/properties/model/TAGNN.yaml",
    "content": "embedding_size: 64\nstep: 1\nloss_type: 'CE'\ngnn_transform: sess_graph\n"
  },
  {
    "path": "recbole_gnn/properties/model/XSimGCL.yaml",
    "content": "embedding_size: 64\nn_layers: 2\nreg_weight: 0.0001\n\nlambda: 0.1\neps: 0.2\ntemperature: 0.2\nlayer_cl: 1\nrequire_pow: True\n"
  },
  {
    "path": "recbole_gnn/properties/quick_start_config/sequential_base.yaml",
    "content": "train_neg_sample_args: ~\n"
  },
  {
    "path": "recbole_gnn/properties/quick_start_config/social_base.yaml",
    "content": "NET_SOURCE_ID_FIELD: source_id\nNET_TARGET_ID_FIELD: target_id\n\nload_col: \n    inter: ['user_id', 'item_id', 'rating', 'timestamp']\n    net: [source_id, target_id]\n\nfilter_net_by_inter: True\nundirected_net: True"
  },
  {
    "path": "recbole_gnn/quick_start.py",
    "content": "import logging\nfrom logging import getLogger\nfrom recbole.utils import init_logger, init_seed, set_color\n\nfrom recbole_gnn.config import Config\nfrom recbole_gnn.utils import create_dataset, data_preparation, get_model, get_trainer\n\n\ndef run_recbole_gnn(model=None, dataset=None, config_file_list=None, config_dict=None, saved=True):\n    r\"\"\" A fast running api, which includes the complete process of\n    training and testing a model on a specified dataset\n    Args:\n        model (str, optional): Model name. Defaults to ``None``.\n        dataset (str, optional): Dataset name. Defaults to ``None``.\n        config_file_list (list, optional): Config files used to modify experiment parameters. Defaults to ``None``.\n        config_dict (dict, optional): Parameters dictionary used to modify experiment parameters. Defaults to ``None``.\n        saved (bool, optional): Whether to save the model. Defaults to ``True``.\n    \"\"\"\n    # configurations initialization\n    config = Config(model=model, dataset=dataset, config_file_list=config_file_list, config_dict=config_dict)\n    try:\n        assert config[\"enable_sparse\"] in [True, False, None]\n    except AssertionError:\n        raise ValueError(\"Your config `enable_sparse` must be `True` or `False` or `None`\")\n    init_seed(config['seed'], config['reproducibility'])\n    # logger initialization\n    init_logger(config)\n    logger = getLogger()\n\n    logger.info(config)\n\n    # dataset filtering\n    dataset = create_dataset(config)\n    logger.info(dataset)\n\n    # dataset splitting\n    train_data, valid_data, test_data = data_preparation(config, dataset)\n\n    # model loading and initialization\n    init_seed(config['seed'], config['reproducibility'])\n    model = get_model(config['model'])(config, train_data.dataset).to(config['device'])\n    logger.info(model)\n\n    # trainer loading and initialization\n    trainer = get_trainer(config['MODEL_TYPE'], config['model'])(config, 
model)\n\n    # model training\n    best_valid_score, best_valid_result = trainer.fit(\n        train_data, valid_data, saved=saved, show_progress=config['show_progress']\n    )\n\n    # model evaluation\n    test_result = trainer.evaluate(test_data, load_best_model=saved, show_progress=config['show_progress'])\n\n    logger.info(set_color('best valid ', 'yellow') + f': {best_valid_result}')\n    logger.info(set_color('test result', 'yellow') + f': {test_result}')\n\n    return {\n        'best_valid_score': best_valid_score,\n        'valid_score_bigger': config['valid_metric_bigger'],\n        'best_valid_result': best_valid_result,\n        'test_result': test_result\n    }\n\n\ndef objective_function(config_dict=None, config_file_list=None, saved=True):\n    r\"\"\" The default objective_function used in HyperTuning\n\n    Args:\n        config_dict (dict, optional): Parameters dictionary used to modify experiment parameters. Defaults to ``None``.\n        config_file_list (list, optional): Config files used to modify experiment parameters. Defaults to ``None``.\n        saved (bool, optional): Whether to save the model. 
Defaults to ``True``.\n    \"\"\"\n\n    config = Config(config_dict=config_dict, config_file_list=config_file_list)\n    try:\n        assert config[\"enable_sparse\"] in [True, False, None]\n    except AssertionError:\n        raise ValueError(\"Your config `enable_sparse` must be `True` or `False` or `None`\")\n    init_seed(config['seed'], config['reproducibility'])\n    logging.basicConfig(level=logging.ERROR)\n    dataset = create_dataset(config)\n    train_data, valid_data, test_data = data_preparation(config, dataset)\n    init_seed(config['seed'], config['reproducibility'])\n    model = get_model(config['model'])(config, train_data.dataset).to(config['device'])\n    trainer = get_trainer(config['MODEL_TYPE'], config['model'])(config, model)\n    best_valid_score, best_valid_result = trainer.fit(train_data, valid_data, verbose=False, saved=saved)\n    test_result = trainer.evaluate(test_data, load_best_model=saved)\n\n    return {\n        'model': config['model'],\n        'best_valid_score': best_valid_score,\n        'valid_score_bigger': config['valid_metric_bigger'],\n        'best_valid_result': best_valid_result,\n        'test_result': test_result\n    }\n"
  },
  {
    "path": "recbole_gnn/trainer.py",
    "content": "from time import time\nimport math\nfrom torch.nn.utils.clip_grad import clip_grad_norm_\nfrom tqdm import tqdm\nfrom recbole.trainer import Trainer\nfrom recbole.utils import early_stopping, dict2str, set_color, get_gpu_usage\n\n\nclass NCLTrainer(Trainer):\n    def __init__(self, config, model):\n        super(NCLTrainer, self).__init__(config, model)\n\n        self.num_m_step = config['m_step']\n        assert self.num_m_step is not None\n\n    def fit(self, train_data, valid_data=None, verbose=True, saved=True, show_progress=False, callback_fn=None):\n        r\"\"\"Train the model based on the train data and the valid data.\n        Args:\n            train_data (DataLoader): the train data\n            valid_data (DataLoader, optional): the valid data, default: None.\n                                               If it's None, the early_stopping is invalid.\n            verbose (bool, optional): whether to write training and evaluation information to logger, default: True\n            saved (bool, optional): whether to save the model parameters, default: True\n            show_progress (bool): Show the progress of training epoch and evaluate epoch. Defaults to ``False``.\n            callback_fn (callable): Optional callback function executed at end of epoch.\n                                    Includes (epoch_idx, valid_score) input arguments.\n        Returns:\n             (float, dict): best valid score and best valid result. If valid_data is None, it returns (-1, None)\n        \"\"\"\n        if saved and self.start_epoch >= self.epochs:\n            self._save_checkpoint(-1)\n\n        self.eval_collector.data_collect(train_data)\n\n        for epoch_idx in range(self.start_epoch, self.epochs):\n\n            # only differences from the original trainer\n            if epoch_idx % self.num_m_step == 0:\n                self.logger.info(\"Running E-step ! 
\")\n                self.model.e_step()\n            # train\n            training_start_time = time()\n            train_loss = self._train_epoch(train_data, epoch_idx, show_progress=show_progress)\n            self.train_loss_dict[epoch_idx] = sum(train_loss) if isinstance(train_loss, tuple) else train_loss\n            training_end_time = time()\n            train_loss_output = \\\n                self._generate_train_loss_output(epoch_idx, training_start_time, training_end_time, train_loss)\n            if verbose:\n                self.logger.info(train_loss_output)\n            self._add_train_loss_to_tensorboard(epoch_idx, train_loss)\n\n            # eval\n            if self.eval_step <= 0 or not valid_data:\n                if saved:\n                    self._save_checkpoint(epoch_idx)\n                    update_output = set_color('Saving current', 'blue') + ': %s' % self.saved_model_file\n                    if verbose:\n                        self.logger.info(update_output)\n                continue\n            if (epoch_idx + 1) % self.eval_step == 0:\n                valid_start_time = time()\n                valid_score, valid_result = self._valid_epoch(valid_data, show_progress=show_progress)\n                self.best_valid_score, self.cur_step, stop_flag, update_flag = early_stopping(\n                    valid_score,\n                    self.best_valid_score,\n                    self.cur_step,\n                    max_step=self.stopping_step,\n                    bigger=self.valid_metric_bigger\n                )\n                valid_end_time = time()\n                valid_score_output = (set_color(\"epoch %d evaluating\", 'green') + \" [\" + set_color(\"time\", 'blue')\n                                    + \": %.2fs, \" + set_color(\"valid_score\", 'blue') + \": %f]\") % \\\n                                     (epoch_idx, valid_end_time - valid_start_time, valid_score)\n                valid_result_output = set_color('valid result', 
'blue') + ': \\n' + dict2str(valid_result)\n                if verbose:\n                    self.logger.info(valid_score_output)\n                    self.logger.info(valid_result_output)\n                self.tensorboard.add_scalar('Valid_score', valid_score, epoch_idx)\n\n                if update_flag:\n                    if saved:\n                        self._save_checkpoint(epoch_idx)\n                        update_output = set_color('Saving current best', 'blue') + ': %s' % self.saved_model_file\n                        if verbose:\n                            self.logger.info(update_output)\n                    self.best_valid_result = valid_result\n\n                if callback_fn:\n                    callback_fn(epoch_idx, valid_score)\n\n                if stop_flag:\n                    stop_output = 'Finished training, best eval result in epoch %d' % \\\n                                  (epoch_idx - self.cur_step * self.eval_step)\n                    if verbose:\n                        self.logger.info(stop_output)\n                    break\n        self._add_hparam_to_tensorboard(self.best_valid_score)\n        return self.best_valid_score, self.best_valid_result\n\n    def _train_epoch(self, train_data, epoch_idx, loss_func=None, show_progress=False):\n        r\"\"\"Train the model in an epoch\n        Args:\n            train_data (DataLoader): The train data.\n            epoch_idx (int): The current epoch id.\n            loss_func (function): The loss function of :attr:`model`. If it is ``None``, the loss function will be\n                :attr:`self.model.calculate_loss`. Defaults to ``None``.\n            show_progress (bool): Show the progress of training epoch. Defaults to ``False``.\n        Returns:\n            float/tuple: The sum of loss returned by all batches in this epoch. 
If the loss in each batch contains\n            multiple parts and the model return these multiple parts loss instead of the sum of loss, it will return a\n            tuple which includes the sum of loss in each part.\n        \"\"\"\n        self.model.train()\n        loss_func = loss_func or self.model.calculate_loss\n        total_loss = None\n        iter_data = (\n            tqdm(\n                train_data,\n                total=len(train_data),\n                ncols=100,\n                desc=set_color(f\"Train {epoch_idx:>5}\", 'pink'),\n            ) if show_progress else train_data\n        )\n        for batch_idx, interaction in enumerate(iter_data):\n            interaction = interaction.to(self.device)\n            self.optimizer.zero_grad()\n            losses = loss_func(interaction)\n            if isinstance(losses, tuple):\n                if epoch_idx < self.config['warm_up_step']:\n                    losses = losses[:-1]\n                loss = sum(losses)\n                loss_tuple = tuple(per_loss.item() for per_loss in losses)\n                total_loss = loss_tuple if total_loss is None else tuple(map(sum, zip(total_loss, loss_tuple)))\n            else:\n                loss = losses\n                total_loss = losses.item() if total_loss is None else total_loss + losses.item()\n            self._check_nan(loss)\n            loss.backward()\n            if self.clip_grad_norm:\n                clip_grad_norm_(self.model.parameters(), **self.clip_grad_norm)\n            self.optimizer.step()\n            if self.gpu_available and show_progress:\n                iter_data.set_postfix_str(set_color('GPU RAM: ' + get_gpu_usage(self.device), 'yellow'))\n        return total_loss\n\n\nclass HMLETTrainer(Trainer):\n    def __init__(self, config, model):\n        super(HMLETTrainer, self).__init__(config, model)\n\n        self.warm_up_epochs = config['warm_up_epochs']\n        self.ori_temp = config['ori_temp']\n        self.min_temp = 
config['min_temp']\n        self.gum_temp_decay = config['gum_temp_decay']\n        self.epoch_temp_decay = config['epoch_temp_decay']\n\n    def _train_epoch(self, train_data, epoch_idx, loss_func=None, show_progress=False):\n        if epoch_idx > self.warm_up_epochs:\n            # Temp decay\n            gum_temp = self.ori_temp * math.exp(-self.gum_temp_decay*(epoch_idx - self.warm_up_epochs))\n            self.model.gum_temp = max(gum_temp, self.min_temp)\n            self.logger.info(f'Current gumbel softmax temperature: {self.model.gum_temp}')\n\n            for gating in self.model.gating_nets:\n                self.model._gating_freeze(gating, True)\n        return super()._train_epoch(train_data, epoch_idx, loss_func, show_progress)\n\n\nclass SEPTTrainer(Trainer):\n    def __init__(self, config, model):\n        super(SEPTTrainer, self).__init__(config, model)\n        self.warm_up_epochs = config['warm_up_epochs']\n\n    def _train_epoch(self, train_data, epoch_idx, loss_func=None, show_progress=False):\n        if epoch_idx < self.warm_up_epochs:\n            loss_func = self.model.calculate_rec_loss\n        else:\n            self.model.subgraph_construction()\n        return super()._train_epoch(train_data, epoch_idx, loss_func, show_progress)"
  },
  {
    "path": "recbole_gnn/utils.py",
    "content": "import os\nimport pickle\nimport importlib\nfrom logging import getLogger\nfrom recbole.data.utils import load_split_dataloaders, create_samplers, save_split_dataloaders\nfrom recbole.data.utils import create_dataset as create_recbole_dataset\nfrom recbole.data.utils import data_preparation as recbole_data_preparation\nfrom recbole.utils import set_color, Enum\nfrom recbole.utils import get_model as get_recbole_model\nfrom recbole.utils import get_trainer as get_recbole_trainer\nfrom recbole.utils.argument_list import dataset_arguments\n\nfrom recbole_gnn.data.dataloader import CustomizedTrainDataLoader, CustomizedNegSampleEvalDataLoader, CustomizedFullSortEvalDataLoader\n\n\ndef create_dataset(config):\n    \"\"\"Create dataset according to :attr:`config['model']` and :attr:`config['MODEL_TYPE']`.\n    If :attr:`config['dataset_save_path']` file exists and\n    its :attr:`config` of dataset is equal to current :attr:`config` of dataset.\n    It will return the saved dataset in :attr:`config['dataset_save_path']`.\n    Args:\n        config (Config): An instance object of Config, used to record parameter information.\n    Returns:\n        Dataset: Constructed dataset.\n    \"\"\"\n    model_type = config['MODEL_TYPE']\n    dataset_module = importlib.import_module('recbole_gnn.data.dataset')\n    gen_graph_module_path = '.'.join(['recbole_gnn.model.general_recommender', config['model'].lower()])\n    seq_module_path = '.'.join(['recbole_gnn.model.sequential_recommender', config['model'].lower()])\n    if hasattr(dataset_module, config['model'] + 'Dataset'):\n        dataset_class = getattr(dataset_module, config['model'] + 'Dataset')\n    elif importlib.util.find_spec(gen_graph_module_path, __name__):\n        dataset_class = getattr(dataset_module, 'GeneralGraphDataset')\n    elif importlib.util.find_spec(seq_module_path, __name__):\n        dataset_class = getattr(dataset_module, 'SessionGraphDataset')\n    elif model_type == ModelType.SOCIAL:\n   
     dataset_class = getattr(dataset_module, 'SocialDataset')\n    else:\n        return create_recbole_dataset(config)\n\n    default_file = os.path.join(config['checkpoint_dir'], f'{config[\"dataset\"]}-{dataset_class.__name__}.pth')\n    file = config['dataset_save_path'] or default_file\n    if os.path.exists(file):\n        with open(file, 'rb') as f:\n            dataset = pickle.load(f)\n        dataset_args_unchanged = True\n        for arg in dataset_arguments + ['seed', 'repeatable']:\n            if config[arg] != dataset.config[arg]:\n                dataset_args_unchanged = False\n                break\n        if dataset_args_unchanged:\n            logger = getLogger()\n            logger.info(set_color('Load filtered dataset from', 'pink') + f': [{file}]')\n            return dataset\n\n    dataset = dataset_class(config)\n    if config['save_dataset']:\n        dataset.save()\n    return dataset\n\n\ndef get_model(model_name):\n    r\"\"\"Automatically select model class based on model name\n    Args:\n        model_name (str): model name\n    Returns:\n        Recommender: model class\n    \"\"\"\n    model_submodule = [\n        'general_recommender', 'sequential_recommender', 'social_recommender'\n    ]\n\n    model_file_name = model_name.lower()\n    model_module = None\n    for submodule in model_submodule:\n        module_path = '.'.join(['recbole_gnn.model', submodule, model_file_name])\n        if importlib.util.find_spec(module_path, __name__):\n            model_module = importlib.import_module(module_path, __name__)\n            break\n\n    if model_module is None:\n        model_class = get_recbole_model(model_name)\n    else:\n        model_class = getattr(model_module, model_name)\n    return model_class\n\n\ndef _get_customized_dataloader(config, phase):\n    if phase == 'train':\n        return CustomizedTrainDataLoader\n    else:\n        eval_mode = config[\"eval_args\"][\"mode\"]\n        if eval_mode == 'full':\n            
return CustomizedFullSortEvalDataLoader\n        else:\n            return CustomizedNegSampleEvalDataLoader\n\n\ndef data_preparation(config, dataset):\n    \"\"\"Split the dataset by :attr:`config['eval_args']` and create training, validation and test dataloader.\n    Note:\n        If we can load split dataloaders by :meth:`load_split_dataloaders`, we will not create new split dataloaders.\n    Args:\n        config (Config): An instance object of Config, used to record parameter information.\n        dataset (Dataset): An instance object of Dataset, which contains all interaction records.\n    Returns:\n        tuple:\n            - train_data (AbstractDataLoader): The dataloader for training.\n            - valid_data (AbstractDataLoader): The dataloader for validation.\n            - test_data (AbstractDataLoader): The dataloader for testing.\n    \"\"\"\n    seq_module_path = '.'.join(['recbole_gnn.model.sequential_recommender', config['model'].lower()])\n    if importlib.util.find_spec(seq_module_path, __name__):\n        # Special condition for sequential models of RecBole-Graph\n        dataloaders = load_split_dataloaders(config)\n        if dataloaders is not None:\n            train_data, valid_data, test_data = dataloaders\n        else:\n            built_datasets = dataset.build()\n            train_dataset, valid_dataset, test_dataset = built_datasets\n            train_sampler, valid_sampler, test_sampler = create_samplers(config, dataset, built_datasets)\n\n            train_data = _get_customized_dataloader(config, 'train')(config, train_dataset, train_sampler, shuffle=True)\n            valid_data = _get_customized_dataloader(config, 'evaluation')(config, valid_dataset, valid_sampler, shuffle=False)\n            test_data = _get_customized_dataloader(config, 'evaluation')(config, test_dataset, test_sampler, shuffle=False)\n            if config['save_dataloaders']:\n                save_split_dataloaders(config, dataloaders=(train_data, 
valid_data, test_data))\n\n        logger = getLogger()\n        logger.info(\n            set_color('[Training]: ', 'pink') + set_color('train_batch_size', 'cyan') + ' = ' +\n            set_color(f'[{config[\"train_batch_size\"]}]', 'yellow') + set_color(' negative sampling', 'cyan') + ': ' +\n            set_color(f'[{config[\"train_neg_sample_args\"]}]', 'yellow')\n        )\n        logger.info(\n            set_color('[Evaluation]: ', 'pink') + set_color('eval_batch_size', 'cyan') + ' = ' +\n            set_color(f'[{config[\"eval_batch_size\"]}]', 'yellow') + set_color(' eval_args', 'cyan') + ': ' +\n            set_color(f'[{config[\"eval_args\"]}]', 'yellow')\n        )\n        return train_data, valid_data, test_data\n    else:\n        return recbole_data_preparation(config, dataset)\n\n\ndef get_trainer(model_type, model_name):\n    r\"\"\"Automatically select trainer class based on model type and model name\n    Args:\n        model_type (ModelType): model type\n        model_name (str): model name\n    Returns:\n        Trainer: trainer class\n    \"\"\"\n    try:\n        return getattr(importlib.import_module('recbole_gnn.trainer'), model_name + 'Trainer')\n    except AttributeError:\n        return get_recbole_trainer(model_type, model_name)\n\n\nclass ModelType(Enum):\n    \"\"\"Type of models.\n\n    - ``Social``: Social-based Recommendation\n    \"\"\"\n\n    SOCIAL = 7"
  },
  {
    "path": "results/README.md",
    "content": "## General Model Results\n\n* [ml-1m](general/ml-1m.md)\n\n## Sequential Model Results\n\n* [diginetica](sequential/diginetica.md)\n\n## Social-aware Model Results\n\n* [lastfm](social/lastfm.md)\n"
  },
  {
    "path": "results/general/ml-1m.md",
    "content": "# Experimental Setting\n\n**Dataset:** [MovieLens-1M](https://grouplens.org/datasets/movielens/)\n\n**Filtering:** Remove interactions with a rating score of less than 3\n\n**Evaluation:** ratio-based 8:1:1, full sort\n\n**Metrics:** Recall@10, NGCG@10, MRR@10, Hit@10, Precision@10\n\n**Properties:**\n\n```yaml\n# dataset config\nfield_separator: \"\\t\"\nseq_separator: \" \"\nUSER_ID_FIELD: user_id\nITEM_ID_FIELD: item_id\nRATING_FIELD: rating\nNEG_PREFIX: neg_\nLABEL_FIELD: label\nload_col:\n    inter: [user_id, item_id, rating]\nval_interval:\n    rating: \"[3,inf)\"\nunused_col: \n    inter: [rating]\n\n# training and evaluation\nepochs: 500\ntrain_batch_size: 4096\nvalid_metric: MRR@10\neval_batch_size: 4096000\n```\n\nFor fairness, we restrict users' and items' embedding dimension as following. Please adjust the name of the corresponding args of different models.\n```\nembedding_size: 64\n```\n\n# Dataset Statistics\n\n| Dataset    | #Users | #Items | #Interactions | Sparsity |\n| ---------- | ------ | ------ | ------------- | -------- |\n| ml-1m      | 6,040  | 3,629  | 836,478       | 96.18%   |\n\n# Evaluation Results\n\n| Method       | Recall@10 | MRR@10 | NDCG@10 | Hit@10 | Precision@10 |\n|--------------|-----------|--------|---------|--------|--------------|\n| **BPR**      | 0.1776    | 0.4187 | 0.2401  | 0.7199 | 0.1779       |\n| **NeuMF**    | 0.1651    | 0.4020 | 0.2271  | 0.7029 | 0.1700       |\n| **NGCF**     | 0.1814    | 0.4354 | 0.2508  | 0.7239 | 0.1850       |\n| **LightGCN** | 0.1861    | 0.4388 | 0.2538  | 0.7330 | 0.1863       |\n| **LightGCL** | 0.1867    | 0.4283 | 0.2479  | 0.7370 | 0.1815       |\n| **SGL**      | 0.1889    | 0.4315 | 0.2505  | 0.7392 | 0.1843       |\n| **HMLET**    | 0.1847    | 0.4297 | 0.2490  | 0.7305 | 0.1836       |\n| **NCL**      | 0.2021    | 0.4599 | 0.2702  | 0.7565 | 0.1962       |\n| **SimGCL**   | 0.2029    | 0.4550 | 0.2667  | 0.7640 | 0.1933       |\n| **XSimGCL**  | 0.2116    | 
0.4638 | 0.2750  | 0.7743 | 0.1987       |\n\n# Hyper-parameters\n\n|              | Best hyper-parameters                                                                                                              | Tuning range                                                                                                                                                                                                                   |\n|--------------|------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| **BPR**      | learning_rate=0.001                                                                                                                | learning_rate choice [0.05, 0.02, 0.01, 0.005, 0.002, 0.001, 0.0005, 0.0002, 0.0001, 0.00005, 0.00002, 0.00001]                                                                                                                |\n| **NeuMF**    | learning_rate=0.0001<br />mlp_hidden_size=[32,16,8]<br />dropout_prob=0                                                            | learning_rate choice [0.005, 0.002, 0.001, 0.0005, 0.0002, 0.0001, 0.00005]<br/>mlp_hidden_size choice ['[64,64]', '[64,32]', '[64,32,16]','[32,16,8]']<br/>dropout_prob choice [0, 0.1, 0.2]                                  |\n| **NGCF**     | learning_rate=0.0002<br />message_dropout=0.0<br />node_dropout=0.0                                                                | learning_rate choice [0.001, 0.0005, 0.0002]<br/>node_dropout choice [0.0, 0.1]<br/>message_dropout choice [0.0, 0.1]                                                                                                          |\n| **LightGCN** | learning_rate=0.002<br 
/>n_layers=3<br />reg_weight=0.0001                                                                         | learning_rate choice [0.005, 0.002, 0.001]<br/>n_layers choice [2, 3]<br/>reg_weight choice [1e-4, 1e-5]                                                                                                                       |\n| **LightGCL** | learning_rate=0.001<br />n_layers=2<br />lambda1=0.0001<br />temp=2<br />lambda2=1e-7<br />dropout=0.1                             | learning_rate choice [0.001]<br/>n_layers choice [2, 3]<br/>lambda1 choice [0.01, 0.005, 0.001, 0.0001, 1e-5, 1e-7]<br/>temp choice [0.5, 0.8, 2, 3]<br/>lambda2 choice [1e-4, 1e-5, 1e-7]<br/>dropout choice [0.0, 0.1, 0.25] |\n| **SGL**      | learning_rate=0.002<br />n_layers=3<br />reg_weight=0.0001<br />ssl_tau=0.5<br />drop_ratio=0.1<br />ssl_weight=0.005              | learning_rate choice [0.002]<br/>n_layers choice [3]<br/>reg_weight choice [1e-4]<br/>ssl_tau choice [0.1, 0.5]<br/>drop_ratio choice [0.1, 0.3]<br/>ssl_weight choice [1e-5, 1e-6, 1e-7, 0.005, 0.01, 0.05]                   |\n| **HMLET**    | learning_rate=0.002<br />n_layers=4<br />activation_function=leakyrelu                                                             | learning_rate choice [0.002, 0.001, 0.0005]<br/>n_layers choice [3, 4]<br/>activation_function choice ['elu', 'leakyrelu']                                                                                                     |\n| **NCL**      | learning_rate=0.002<br />n_layers=3<br />reg_weight=0.0001<br />ssl_temp=0.1<br />ssl_reg=1e-06<br />hyper_layers=1<br />alpha=1.5 | learning_rate choice [0.002]<br/>n_layers choice [3]<br/>reg_weight choice [1e-4]<br/>ssl_temp choice [0.1, 0.05]<br/>ssl_reg choice [1e-7, 1e-6]<br/>hyper_layers choice [1]<br/>alpha choice [1, 0.8, 1.5]                   |\n| **SimGCL**   | learning_rate=0.002<br />n_layers=2<br />reg_weight=0.0001<br />temperature=0.05<br />lambda=1e-5<br />eps=0.1                     | 
learning_rate choice [0.002]<br/>n_layers choice [2, 3]<br/>reg_weight choice [1e-4]<br/>temperature choice [0.05, 0.1, 0.2]<br/>lambda choice [1e-5, 1e-6, 1e-7, 0.005, 0.01, 0.05]<br/>eps choice [0.1, 0.2]                 |\n| **XSimGCL** | learning_rate=0.002<br />n_layers=2<br />reg_weight=0.0001<br />temperature=0.2<br />lambda=0.1<br />eps=0.2<br />layer_cl=1 | learning_rate choice [0.002]<br/>n_layers choice [2, 3]<br/>reg_weight choice [1e-4]<br/>temperature choice [0.05, 0.1, 0.2]<br/>lambda choice [1e-5, 1e-6, 1e-7, 1e-4, 0.005, 0.01, 0.05, 0.1]<br/>eps choice [0.1, 0.2]<br/>layer_cl choice [1] |\n"
  },
  {
    "path": "results/sequential/diginetica.md",
    "content": "# Experimental Setting\n\n**Dataset:** diginetica-not-merged\n\n**Filtering:** Remove users and items with less than 5 interactions\n\n**Evaluation:** leave one out, full sort\n\n**Metrics:** Recall@10, NGCG@10, MRR@10, Hit@10, Precision@10\n\n**Properties:**\n\n```yaml\n# dataset config\nfield_separator: \"\\t\"\nseq_separator: \" \"\nUSER_ID_FIELD: session_id\nITEM_ID_FIELD: item_id\nTIME_FIELD: timestamp\nNEG_PREFIX: neg_\nITEM_LIST_LENGTH_FIELD: item_length\nLIST_SUFFIX: _list\nMAX_ITEM_LIST_LENGTH: 20\nPOSITION_FIELD: position_id\nload_col:\n  inter: [session_id, item_id, timestamp]\nuser_inter_num_interval: \"[5,inf)\"\nitem_inter_num_interval: \"[5,inf)\"\n\n# training and evaluation\nepochs: 500\ntrain_batch_size: 4096\neval_batch_size: 2000\nvalid_metric: MRR@10\neval_args:\n  split: {'LS':\"valid_and_test\"}\n  mode: full\n  order: TO\ntrain_neg_sample_args: ~\n```\n\nFor fairness, we restrict users' and items' embedding dimension as following. Please adjust the name of the corresponding args of different models.\n```\nembedding_size: 64\n```\n\n# Dataset Statistics\n\n| Dataset    | #Users | #Items | #Interactions | Sparsity |\n| ---------- | ------ | ------ | ------------- | -------- |\n| diginetica | 72,014 | 29,454 | 580,490       | 99.97%   |\n\n# Evaluation Results\n\n| Method               | Recall@10 | MRR@10 | NDCG@10 | Hit@10 | Precision@10 |\n| -------------------- | --------- | ------ | ------- | ------ | ------------ |\n| **GRU4Rec**          | 0.3691    | 0.1632 | 0.2114  | 0.3691 | 0.0369       |\n| **NARM**             | 0.3801    | 0.1695 | 0.2188  | 0.3801 | 0.0380       |\n| **SASRec**           | 0.4144    | 0.1857 | 0.2393  | 0.4144 | 0.0414       |\n| **SR-GNN**           | 0.3881    | 0.1754 | 0.2253  | 0.3881 | 0.0388       |\n| **GC-SAN**           | 0.4127    | 0.1881 | 0.2408  | 0.4127 | 0.0413       |\n| **NISER+**           | 0.4144    | 0.1904 | 0.2430  | 0.4144 | 0.0414       |\n| **LESSR**            | 
0.3964    | 0.1763 | 0.2279  | 0.3964 | 0.0396       |\n| **TAGNN**            | 0.3894    | 0.1763 | 0.2263  | 0.3894 | 0.0389       |\n| **GCE-GNN**          | 0.4284    | 0.1961 | 0.2507  | 0.4284 | 0.0428       |\n| **SGNN-HN**          | 0.4183    | 0.1877 | 0.2418  | 0.4183 | 0.0418       |\n\n# Hyper-parameters\n\n|                      | Best hyper-parameters                                                     | Tuning range                                                     |\n| -------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |\n| **GRU4Rec** | learning_rate=0.01<br />hidden_size=128<br />dropout_prob=0.3<br />num_layers=1 | learning_rate in [1e-2, 1e-3, 3e-3]<br />num_layers in [1, 2, 3]<br />hidden_size in [128]<br />dropout_prob in [0.1, 0.2, 0.3] |\n| **SASRec**           | learning_rate=0.001<br />n_layers=2<br />attn_dropout_prob=0.2<br />hidden_dropout_prob=0.2 | learning_rate in [0.001, 0.0001]<br />n_layers in [1, 2]<br />hidden_dropout_prob in [0.2, 0.5]<br />attn_dropout_prob in [0.2, 0.5] |\n| **NARM**             | learning_rate=0.001<br />hidden_size=128<br />n_layers=1<br />dropout_probs=[0.25, 0.5] | learning_rate in [0.001, 0.01, 0.03]<br />hidden_size in [128]<br />n_layers in [1, 2]<br />dropout_probs in ['[0.25,0.5]', '[0.2,0.2]', '[0.1,0.2]'] |\n| **SR-GNN**            | learning_rate=0.001<br />step=1                              | learning_rate in [0.01, 0.001, 0.0001]<br />step in [1, 2]    |\n| **GC-SAN**            | learning_rate=0.001<br />step=1                              | learning_rate in [0.01, 0.001, 0.0001]<br />step in [1, 2]    |\n| **NISER+**            | learning_rate=0.001<br />sigma=16                              | learning_rate in [0.01, 0.001, 0.003]<br />sigma in [10, 16, 20]    |\n| **LESSR**            | learning_rate=0.001<br />n_layers=4                              | learning_rate in [0.01, 0.001, 0.003]<br 
/>n_layers in [2, 4]    |\n| **TAGNN**            | learning_rate=0.001                              | learning_rate in [0.01, 0.001, 0.003]<br />train_batch_size=512    |\n| **GCE-GNN**            | learning_rate=0.001<br />dropout_global=0.5                              | learning_rate in [0.01, 0.001, 0.003]<br />dropout_global in [0.2, 0.5]    |\n| **SGNN-HN**            | learning_rate=0.003<br />scale=12<br />step=2                              | learning_rate in [0.01, 0.001, 0.003]<br />scale in [12, 16, 20]<br />step in [2, 4, 6]    |\n"
  },
  {
    "path": "results/social/lastfm.md",
    "content": "# Experimental Setting\n\n**Dataset:** [LastFM](http://files.grouplens.org/datasets/hetrec2011/)\n\n> Note that datasets for social recommendation methods can be downloaded from [Social-Datasets](https://github.com/Sherry-XLL/Social-Datasets).\n\n**Filtering:** None\n\n**Evaluation:** ratio-based 8:1:1, full sort\n\n**Metrics:** Recall@10, NGCG@10, MRR@10, Hit@10, Precision@10\n\n**Properties:**\n\n```yaml\n# dataset config\nfield_separator: \"\\t\"\nseq_separator: \" \"\nUSER_ID_FIELD: user_id\nITEM_ID_FIELD: artist_id\nNET_SOURCE_ID_FIELD: source_id\nNET_TARGET_ID_FIELD: target_id\nLABEL_FIELD: label\nNEG_PREFIX: neg_\nload_col:\n  inter: [user_id, artist_id]\n  net: [source_id, target_id]\n\n# social network config\nfilter_net_by_inter: True\nundirected_net: True\n\n# training and evaluation\nepochs: 5000\ntrain_batch_size: 4096\neval_batch_size: 409600000\nvalid_metric: NDCG@10\nstopping_step: 50\n```\n\nFor fairness, we restrict users' and items' embedding dimension as following. 
Please adjust the name of the corresponding args of different models.\n```\nembedding_size: 64\n```\n\n# Dataset Statistics\n\n| Dataset    | #Users | #Items | #Interactions | Sparsity |\n| ---------- | ------ | ------ | ------------- | -------- |\n| lastfm     | 1,892  | 17,632 | 92,834        | 99.72%   |\n\n# Evaluation Results\n\n| Method               | Recall@10 | MRR@10 | NDCG@10 | Hit@10 | Precision@10 |\n| -------------------- | --------- | ------ | ------- | ------ | ------------ |\n| **BPR**              | 0.1761    | 0.3026 | 0.1674  | 0.5573 | 0.0858       |\n| **NeuMF**            | 0.1696    | 0.2924 | 0.1604  | 0.5456 | 0.0828       |\n| **NGCF**             | 0.1960    | 0.3479 | 0.1898  | 0.6141 | 0.0961       |\n| **LightGCN**         | 0.2064    | 0.3559 | 0.1972  | 0.6322 | 0.1009       |\n| **DiffNet**          | 0.1757    | 0.3117 | 0.1694  | 0.5621 | 0.0857       |\n| **MHCN**             | 0.2123    | 0.3782 | 0.2068  | 0.6523 | 0.1042       |\n| **SEPT**             | 0.2127    | 0.3703 | 0.2057  | 0.6465 | 0.1044       |\n\n# Hyper-parameters\n\n|                      | Best hyper-parameters                                                     | Tuning range                                                     |\n| -------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |\n| **BPR**             | learning_rate=0.0005                              | learning_rate in [0.01, 0.005, 0.001, 0.0005, 0.0001]    |\n| **NeuMF**           | learning_rate=0.0005<br />dropout_prob=0.1                           | learning_rate in [0.01, 0.005, 0.001, 0.0005, 0.0001]<br />dropout_prob in [0.1, 0.2, 0.3]   |\n| **NGCF**              | learning_rate=0.0005<br />hidden_size_list=[64,64,64]                              | learning_rate in [0.01, 0.005, 0.001, 0.0005, 0.0001]<br />hidden_size_list in ['[64]', '[64,64]', '[64,64,64]']    |\n| **LightGCN**             | 
learning_rate=0.001<br />n_layers=3                              | learning_rate in [0.01, 0.005, 0.001, 0.0005, 0.0001]<br />n_layers in [1, 2, 3]    |\n| **DiffNet**           | learning_rate=0.0005<br />n_layers=1                           | learning_rate in [0.01, 0.005, 0.001, 0.0005, 0.0001]<br />n_layers in [1, 2, 3]   |\n| **MHCN**              | learning_rate=0.0005<br />n_layers=2<br />ssl_reg=1e-05                              | learning_rate in [0.01, 0.005, 0.001, 0.0005, 0.0001]<br />n_layers in [1, 2, 3]<br />ssl_reg in [1e-04, 1e-05, 1e-06]    |\n| **SEPT**             | learning_rate=0.0005<br />n_layers=2<br />ssl_weight=1e-07                              | learning_rate in [0.01, 0.005, 0.001, 0.0005, 0.0001]<br />n_layers in [1, 2, 3]<br />ssl_weight in [1e-3, 1e-4, 1e-5, 1e-6, 1e-7]    |\n"
  },
  {
    "path": "run_hyper.py",
    "content": "import argparse\n\nfrom recbole.trainer import HyperTuning\nfrom recbole_gnn.quick_start import objective_function\n\n\ndef main():\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--config_files', type=str, default=None, help='fixed config files')\n    parser.add_argument('--params_file', type=str, default=None, help='parameters file')\n    parser.add_argument('--output_file', type=str, default='hyper_example.result', help='output file')\n    args, _ = parser.parse_known_args()\n\n    # plz set algo='exhaustive' to use exhaustive search, in this case, max_evals is auto set\n    config_file_list = args.config_files.strip().split(' ') if args.config_files else None\n    hp = HyperTuning(objective_function, algo='exhaustive',\n                     params_file=args.params_file, fixed_config_file_list=config_file_list)\n    hp.run()\n    hp.export_result(output_file=args.output_file)\n    print('best params: ', hp.best_params)\n    print('best result: ')\n    print(hp.params2result[hp.params2str(hp.best_params)])\n\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "run_recbole_gnn.py",
    "content": "import argparse\n\nfrom recbole_gnn.quick_start import run_recbole_gnn\n\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser()\n    parser.add_argument('--model', '-m', type=str, default='BPR', help='name of models')\n    parser.add_argument('--dataset', '-d', type=str, default='ml-100k', help='name of datasets')\n    parser.add_argument('--config_files', type=str, default=None, help='config files')\n\n    args, _ = parser.parse_known_args()\n\n    config_file_list = args.config_files.strip().split(' ') if args.config_files else None\n    run_recbole_gnn(model=args.model, dataset=args.dataset, config_file_list=config_file_list)\n"
  },
  {
    "path": "run_test.sh",
    "content": "#!/bin/bash\n\n\npython -m pytest -v tests/test_model.py\necho \"model tests finished\"\n"
  },
  {
    "path": "tests/test_data/test/test.inter",
    "content": "user_id:token\titem_id:token\trating:float\ttimestamp:float\n196\t242\t3\t881250949\n186\t302\t3\t891717742\n22\t377\t1\t878887116\n244\t51\t2\t880606923\n166\t346\t1\t886397596\n298\t474\t4\t884182806\n115\t265\t2\t881171488\n253\t465\t5\t891628467\n305\t451\t3\t886324817\n6\t86\t3\t883603013\n62\t257\t2\t879372434\n286\t1014\t5\t879781125\n200\t222\t5\t876042340\n210\t40\t3\t891035994\n224\t29\t3\t888104457\n303\t785\t3\t879485318\n122\t387\t5\t879270459\n194\t274\t2\t879539794\n291\t1042\t4\t874834944\n234\t1184\t2\t892079237\n119\t392\t4\t886176814\n167\t486\t4\t892738452\n299\t144\t4\t877881320\n291\t118\t2\t874833878\n308\t1\t4\t887736532\n95\t546\t2\t879196566\n38\t95\t5\t892430094\n102\t768\t2\t883748450\n63\t277\t4\t875747401\n160\t234\t5\t876861185\n50\t246\t3\t877052329\n301\t98\t4\t882075827\n225\t193\t4\t879539727\n290\t88\t4\t880731963\n97\t194\t3\t884238860\n157\t274\t4\t886890835\n181\t1081\t1\t878962623\n278\t603\t5\t891295330\n276\t796\t1\t874791932\n7\t32\t4\t891350932\n10\t16\t4\t877888877\n284\t304\t4\t885329322\n201\t979\t2\t884114233\n276\t564\t3\t874791805\n287\t327\t5\t875333916\n246\t201\t5\t884921594\n242\t1137\t5\t879741196\n249\t241\t5\t879641194\n99\t4\t5\t886519097\n178\t332\t3\t882823437\n251\t100\t4\t886271884\n81\t432\t2\t876535131\n260\t322\t4\t890618898\n25\t181\t5\t885853415\n59\t196\t5\t888205088\n72\t679\t2\t880037164\n87\t384\t4\t879877127\n290\t143\t5\t880474293\n42\t423\t5\t881107687\n292\t515\t4\t881103977\n115\t20\t3\t881171009\n20\t288\t1\t879667584\n201\t219\t4\t884112673\n13\t526\t3\t882141053\n246\t919\t4\t884920949\n138\t26\t5\t879024232\n167\t232\t1\t892738341\n60\t427\t5\t883326620\n57\t304\t5\t883698581\n223\t274\t4\t891550094\n189\t512\t4\t893277702\n243\t15\t3\t879987440\n92\t1049\t1\t890251826\n246\t416\t3\t884923047\n194\t165\t4\t879546723\n241\t690\t2\t887249482\n178\t248\t4\t882823954\n254\t1444\t3\t886475558\n293\t5\t3\t888906576\n127\t229\t5\t884364867\n225\t237\t5\t879539643\n299\t229\t3\t8
78192429\n225\t480\t5\t879540748\n276\t54\t3\t874791025\n291\t144\t5\t874835091\n222\t366\t4\t878183381\n267\t518\t5\t878971773\n42\t403\t3\t881108684\n11\t111\t4\t891903862\n95\t625\t4\t888954412\n8\t338\t4\t879361873\n162\t25\t4\t877635573\n87\t1016\t4\t879876194\n279\t154\t5\t875296291\n145\t275\t2\t885557505\n119\t1153\t5\t874781198\n62\t498\t4\t879373848\n62\t382\t3\t879375537\n28\t209\t4\t881961214\n135\t23\t4\t879857765\n32\t294\t3\t883709863\n90\t382\t5\t891383835\n286\t208\t4\t877531942\n293\t685\t3\t888905170\n216\t144\t4\t880234639\n166\t328\t5\t886397722\n250\t496\t4\t878090499\n271\t132\t5\t885848672\n160\t174\t5\t876860807\n265\t118\t4\t875320714\n198\t498\t3\t884207492\n42\t96\t5\t881107178\n168\t151\t5\t884288058\n110\t307\t4\t886987260\n58\t144\t4\t884304936\n90\t648\t4\t891384754\n271\t346\t4\t885844430\n62\t21\t3\t879373460\n279\t832\t3\t881375854\n237\t514\t4\t879376641\n94\t789\t4\t891720887\n128\t485\t3\t879966895\n298\t317\t4\t884182806\n44\t195\t5\t878347874\n264\t200\t5\t886122352\n194\t385\t2\t879524643\n72\t195\t5\t880037702\n222\t750\t5\t883815120\n250\t264\t3\t878089182\n41\t265\t3\t890687042\n224\t245\t3\t888082216\n82\t135\t3\t878769629\n262\t1147\t4\t879791710\n293\t471\t3\t888904884\n216\t658\t3\t880245029\n250\t140\t3\t878092059\n59\t23\t5\t888205300\n286\t379\t5\t877533771\n244\t815\t4\t880605185\n7\t479\t4\t891352010\n174\t368\t1\t886434402\n87\t274\t4\t879876734\n194\t1211\t2\t879551380\n82\t1134\t2\t884714402\n13\t836\t2\t882139746\n13\t272\t4\t884538403\n244\t756\t2\t880605157\n305\t427\t5\t886323090\n95\t787\t2\t888954930\n43\t14\t2\t883955745\n299\t955\t4\t889502823\n57\t419\t3\t883698454\n84\t405\t3\t883452363\n269\t504\t4\t891449922\n299\t111\t3\t877878184\n194\t466\t4\t879525876\n160\t135\t4\t876860807\n99\t268\t3\t885678247\n10\t486\t4\t877886846\n259\t117\t4\t874724988\n85\t427\t3\t879456350\n303\t919\t4\t879467295\n213\t273\t5\t878870987\n121\t514\t3\t891387947\n90\t98\t5\t891383204\n49\t559\t2\t888067405\n42\t794\t3\t8
81108425\n155\t323\t2\t879371261\n68\t117\t4\t876973939\n172\t177\t4\t875537965\n19\t4\t4\t885412840\n268\t231\t4\t875744136\n5\t2\t3\t875636053\n305\t117\t2\t886324028\n44\t294\t4\t883612356\n43\t137\t4\t875975656\n279\t1336\t1\t875298353\n80\t466\t5\t887401701\n254\t164\t4\t886472768\n298\t281\t3\t884183336\n279\t1240\t1\t892174404\n66\t298\t4\t883601324\n18\t443\t3\t880130193\n268\t1035\t2\t875542174\n99\t79\t4\t885680138\n13\t98\t4\t881515011\n26\t258\t3\t891347949\n7\t455\t4\t891353086\n222\t755\t4\t878183481\n200\t673\t5\t884128554\n119\t328\t4\t876923913\n213\t172\t5\t878955442\n276\t322\t3\t874786392\n94\t1217\t3\t891723086\n130\t379\t4\t875801662\n38\t328\t4\t892428688\n160\t719\t3\t876857977\n293\t1267\t3\t888906966\n26\t930\t2\t891385985\n130\t216\t4\t875216545\n92\t1079\t3\t886443455\n256\t452\t4\t882164999\n1\t61\t4\t878542420\n72\t48\t4\t880036718\n56\t755\t3\t892910207\n13\t360\t4\t882140926\n15\t405\t2\t879455957\n92\t77\t3\t875654637\n207\t476\t2\t884386343\n292\t174\t5\t881105481\n232\t483\t5\t888549622\n251\t748\t2\t886272175\n224\t26\t3\t888104153\n181\t220\t4\t878962392\n259\t255\t4\t874724710\n305\t471\t4\t886323648\n52\t280\t3\t882922806\n161\t202\t5\t891170769\n148\t408\t5\t877399018\n125\t235\t2\t892838559\n97\t228\t5\t884238860\n58\t1098\t4\t884304936\n83\t234\t4\t887665548\n90\t347\t4\t891383319\n272\t178\t5\t879455113\n194\t181\t3\t879521396\n125\t478\t4\t879454628\n110\t688\t1\t886987605\n299\t14\t4\t877877775\n151\t10\t5\t879524921\n269\t127\t4\t891446165\n6\t14\t5\t883599249\n54\t106\t3\t880937882\n303\t69\t5\t879467542\n16\t944\t1\t877727122\n301\t790\t4\t882078621\n276\t1091\t3\t874793035\n305\t214\t2\t886323068\n194\t1028\t2\t879541148\n91\t323\t2\t891438397\n87\t554\t4\t879875940\n294\t109\t4\t877819599\n286\t171\t4\t877531791\n200\t318\t5\t884128458\n229\t328\t1\t891632142\n178\t568\t4\t882826555\n303\t842\t2\t879484804\n62\t65\t4\t879374686\n207\t591\t3\t876018608\n92\t172\t4\t875653271\n301\t401\t4\t882078040\n36\t339\t5\t882157
581\n70\t746\t3\t884150257\n63\t242\t3\t875747190\n28\t201\t3\t881961671\n279\t68\t4\t875307407\n250\t7\t4\t878089716\n14\t98\t3\t890881335\n299\t1018\t3\t889502324\n194\t54\t3\t879525876\n303\t815\t3\t879485532\n119\t237\t5\t874775038\n295\t218\t5\t879966498\n268\t930\t2\t875742942\n268\t2\t2\t875744173\n66\t258\t4\t883601089\n233\t202\t5\t879394264\n83\t623\t4\t880308578\n214\t334\t3\t891542540\n192\t476\t2\t881368243\n100\t344\t4\t891374868\n268\t145\t1\t875744501\n301\t56\t4\t882076587\n307\t89\t5\t879283786\n234\t141\t3\t892334609\n83\t576\t4\t880308755\n181\t264\t2\t878961624\n297\t133\t4\t875240090\n38\t153\t5\t892430369\n7\t382\t4\t891352093\n264\t813\t4\t886122952\n181\t872\t1\t878961814\n201\t146\t1\t884140579\n85\t507\t4\t879456199\n269\t367\t3\t891450023\n59\t468\t3\t888205855\n286\t143\t4\t889651549\n193\t96\t1\t889124507\n113\t595\t5\t875936424\n292\t11\t5\t881104093\n130\t1014\t3\t876250718\n275\t98\t4\t875155140\n189\t520\t5\t893265380\n219\t82\t1\t889452455\n218\t209\t5\t877488546\n123\t427\t3\t879873020\n119\t222\t5\t874775311\n158\t177\t4\t880134407\n222\t118\t4\t877563802\n302\t322\t2\t879436875\n279\t501\t3\t875308843\n301\t79\t5\t882076403\n181\t3\t2\t878963441\n201\t695\t1\t884140115\n13\t198\t3\t881515193\n1\t189\t3\t888732928\n145\t237\t5\t875270570\n23\t385\t4\t874786462\n201\t767\t4\t884114505\n296\t705\t5\t884197193\n42\t546\t3\t881105817\n33\t872\t3\t891964230\n301\t554\t3\t882078830\n16\t64\t5\t877720297\n95\t135\t3\t879197562\n154\t357\t4\t879138713\n77\t484\t5\t884733766\n296\t508\t5\t884196584\n302\t303\t2\t879436785\n244\t673\t3\t880606667\n222\t77\t4\t878183616\n13\t215\t5\t882140588\n16\t705\t5\t877722736\n270\t452\t4\t876956264\n145\t15\t2\t875270655\n187\t64\t5\t879465631\n200\t304\t5\t876041644\n170\t749\t5\t887646170\n101\t829\t3\t877136138\n184\t218\t3\t889909840\n128\t204\t4\t879967478\n181\t1295\t1\t878961781\n184\t153\t3\t889911285\n1\t33\t4\t878542699\n1\t160\t4\t875072547\n184\t321\t5\t889906967\n54\t595\t3\t880937813\n9
4\t343\t4\t891725009\n128\t508\t4\t879967767\n23\t323\t2\t874784266\n301\t227\t3\t882077222\n301\t191\t3\t882075672\n112\t903\t1\t892440172\n82\t183\t3\t878769848\n222\t724\t3\t878181976\n218\t430\t3\t877488316\n308\t1197\t4\t887739521\n303\t134\t5\t879467959\n133\t751\t3\t890588547\n215\t212\t2\t891435680\n69\t256\t5\t882126156\n254\t662\t4\t887347350\n276\t2\t4\t874792436\n104\t984\t1\t888442575\n63\t1067\t3\t875747514\n267\t410\t4\t878970785\n13\t56\t5\t881515011\n240\t879\t3\t885775745\n286\t237\t2\t875806800\n294\t271\t5\t889241426\n90\t1086\t4\t891384424\n18\t26\t4\t880129731\n92\t229\t3\t875656201\n308\t649\t4\t887739292\n144\t89\t3\t888105691\n191\t302\t4\t891560253\n59\t951\t3\t888206409\n200\t96\t5\t884129409\n16\t197\t5\t877726146\n61\t678\t3\t892302309\n271\t199\t4\t885848448\n271\t709\t3\t885849325\n142\t169\t5\t888640356\n275\t597\t3\t876197678\n222\t151\t3\t878182109\n87\t40\t3\t879876917\n207\t258\t4\t877879172\n272\t1393\t2\t879454663\n177\t333\t4\t880130397\n207\t1115\t2\t879664906\n299\t577\t3\t889503806\n271\t378\t4\t885849447\n305\t425\t4\t886324486\n49\t959\t2\t888068912\n94\t1224\t3\t891722802\n130\t1017\t3\t874953895\n10\t175\t3\t877888677\n203\t321\t3\t880433418\n191\t286\t4\t891560842\n43\t323\t3\t875975110\n21\t558\t5\t874951695\n197\t96\t5\t891409839\n13\t344\t2\t888073635\n194\t66\t3\t879527264\n234\t206\t4\t892334543\n308\t402\t4\t887740700\n308\t640\t4\t887737036\n269\t522\t5\t891447773\n94\t265\t4\t891721889\n268\t62\t3\t875310824\n272\t12\t5\t879455254\n121\t291\t3\t891390477\n296\t20\t5\t884196921\n134\t286\t3\t891732334\n180\t462\t5\t877544218\n234\t612\t3\t892079140\n104\t117\t2\t888465972\n38\t758\t1\t892434626\n269\t845\t1\t891456255\n7\t163\t4\t891353444\n234\t1451\t3\t892078343\n275\t405\t2\t876197645\n52\t250\t3\t882922661\n102\t823\t3\t888801465\n13\t186\t4\t890704999\n178\t731\t4\t882827532\n236\t71\t3\t890116671\n256\t781\t5\t882165296\n263\t176\t5\t891299752\n244\t186\t3\t880605697\n279\t1181\t4\t875314001\n43\t815\t4\t88
3956189\n83\t78\t2\t880309089\n151\t197\t5\t879528710\n254\t436\t2\t886474216\n109\t631\t3\t880579371\n297\t716\t3\t875239422\n249\t188\t4\t879641067\n144\t699\t4\t888106106\n301\t604\t4\t882075994\n64\t392\t3\t889737542\n92\t501\t2\t875653665\n222\t97\t4\t878181739\n268\t436\t3\t875310745\n293\t135\t5\t888905550\n213\t173\t5\t878955442\n160\t460\t2\t876861185\n13\t498\t4\t882139901\n59\t715\t5\t888205921\n5\t17\t4\t875636198\n125\t163\t5\t879454956\n174\t315\t5\t886432749\n114\t505\t3\t881260203\n213\t515\t4\t878870518\n23\t196\t2\t874786926\n128\t15\t4\t879968827\n239\t56\t4\t889179478\n181\t279\t1\t878962955\n291\t80\t4\t875086354\n250\t238\t4\t878089963\n201\t649\t3\t884114275\n60\t60\t5\t883327734\n181\t325\t2\t878961814\n119\t407\t3\t887038665\n287\t1\t5\t875334088\n216\t228\t3\t880245642\n216\t531\t4\t880233810\n203\t471\t4\t880434463\n92\t587\t3\t875660408\n13\t892\t3\t882774224\n213\t176\t4\t878956338\n286\t288\t5\t875806672\n117\t1047\t2\t881009697\n99\t111\t1\t885678886\n11\t558\t3\t891904214\n65\t47\t2\t879216672\n295\t194\t4\t879517412\n269\t217\t2\t891451610\n85\t259\t2\t881705026\n250\t596\t5\t878089921\n137\t144\t5\t881433689\n201\t960\t2\t884112077\n257\t137\t4\t882049932\n111\t328\t4\t891679939\n91\t480\t4\t891438875\n215\t211\t4\t891436202\n181\t938\t1\t878961586\n189\t1060\t5\t893264301\n1\t20\t4\t887431883\n303\t404\t4\t879468375\n299\t305\t3\t879737314\n187\t210\t4\t879465242\n222\t278\t2\t877563913\n214\t568\t4\t892668197\n293\t770\t3\t888906655\n285\t191\t4\t890595859\n303\t252\t3\t879544791\n96\t156\t4\t884402860\n72\t1110\t3\t880037334\n115\t1067\t4\t881171009\n7\t430\t3\t891352178\n116\t350\t3\t886977926\n73\t480\t4\t888625753\n269\t246\t5\t891457067\n263\t419\t5\t891299514\n70\t431\t3\t884150257\n221\t475\t4\t875244204\n72\t182\t5\t880036515\n25\t357\t4\t885852757\n290\t50\t5\t880473582\n189\t526\t4\t893266205\n299\t303\t3\t877618584\n264\t294\t3\t886121516\n200\t365\t5\t884129962\n187\t135\t4\t879465653\n184\t187\t4\t889909024\n63\t289\t
2\t875746985\n13\t229\t4\t882397650\n298\t486\t3\t884183063\n235\t185\t4\t889655435\n62\t712\t4\t879376178\n246\t94\t2\t884923505\n54\t742\t5\t880934806\n63\t762\t3\t875747688\n11\t732\t3\t891904596\n92\t168\t4\t875653723\n8\t550\t3\t879362356\n307\t174\t4\t879283480\n303\t200\t4\t879468459\n256\t849\t2\t882164603\n72\t54\t3\t880036854\n164\t406\t2\t889402389\n117\t150\t4\t880125101\n224\t77\t4\t888103872\n193\t869\t3\t889127811\n94\t184\t2\t891720862\n281\t338\t2\t881200457\n130\t109\t3\t874953794\n128\t371\t1\t879966954\n94\t720\t1\t891723593\n182\t845\t3\t885613067\n129\t873\t1\t883245452\n254\t229\t4\t886474580\n64\t381\t4\t879365491\n151\t176\t2\t879524293\n45\t25\t4\t881014015\n193\t879\t3\t889123257\n276\t922\t4\t889174849\n276\t57\t3\t874787526\n234\t187\t4\t892079140\n181\t306\t1\t878962006\n21\t370\t1\t874951293\n293\t249\t3\t888905229\n264\t721\t5\t886123656\n10\t611\t5\t877886722\n197\t346\t3\t891409070\n276\t142\t3\t874792945\n308\t427\t4\t887736584\n221\t943\t4\t875246759\n131\t126\t4\t883681514\n268\t824\t2\t876518557\n109\t8\t3\t880572642\n198\t58\t3\t884208173\n230\t680\t4\t880484286\n181\t741\t1\t878962918\n192\t1061\t4\t881368891\n234\t448\t3\t892335501\n90\t900\t4\t891382309\n193\t941\t4\t889124890\n128\t603\t5\t879966839\n126\t905\t2\t887855283\n244\t265\t4\t880606634\n90\t289\t3\t891382310\n157\t25\t3\t886890787\n305\t71\t3\t886323684\n119\t382\t5\t874781742\n21\t222\t2\t874951382\n231\t181\t4\t888605273\n280\t508\t3\t891700453\n288\t132\t3\t886374129\n279\t1497\t2\t890780576\n301\t33\t4\t882078228\n72\t699\t3\t880036783\n90\t259\t2\t891382392\n308\t55\t3\t887738760\n59\t742\t3\t888203053\n94\t744\t4\t891721462\n130\t642\t4\t875216933\n26\t1015\t3\t891352136\n56\t121\t5\t892679480\n82\t508\t2\t884714249\n62\t12\t4\t879373613\n276\t40\t3\t874791871\n181\t1015\t1\t878963121\n152\t301\t3\t880147407\n178\t845\t4\t882824291\n217\t597\t4\t889070087\n79\t303\t4\t891271203\n138\t484\t4\t879024127\n308\t81\t5\t887737293\n75\t284\t2\t884050393\n269\t198\
t4\t891447062\n307\t94\t3\t877122695\n222\t781\t3\t881059677\n121\t740\t3\t891390544\n269\t22\t1\t891448072\n13\t864\t4\t882141924\n230\t742\t5\t880485043\n269\t507\t4\t891448800\n239\t1099\t5\t889179253\n245\t1028\t5\t888513447\n56\t546\t3\t892679460\n295\t961\t5\t879519556\n271\t1028\t2\t885848102\n222\t812\t2\t881059117\n69\t240\t3\t882126156\n10\t7\t4\t877892210\n22\t376\t3\t878887112\n294\t931\t3\t889242857\n82\t717\t1\t884714492\n279\t399\t4\t875313859\n269\t234\t1\t891449406\n6\t98\t5\t883600680\n243\t1039\t4\t879988184\n298\t181\t4\t884125629\n282\t325\t1\t881703044\n78\t323\t1\t879633567\n118\t200\t5\t875384647\n283\t1114\t5\t879297545\n171\t292\t4\t891034835\n70\t217\t4\t884151119\n10\t100\t5\t877891747\n245\t181\t4\t888513664\n107\t333\t3\t891264267\n246\t561\t1\t884923445\n13\t901\t1\t883670672\n276\t70\t4\t874790826\n244\t17\t2\t880607205\n189\t56\t5\t893265263\n226\t242\t5\t883888671\n62\t1016\t4\t879373008\n276\t417\t4\t874792907\n214\t478\t4\t891544052\n306\t235\t4\t876504354\n222\t26\t3\t878183043\n280\t631\t5\t891700751\n60\t430\t5\t883326122\n56\t71\t4\t892683275\n42\t274\t5\t881105817\n1\t202\t5\t875072442\n13\t809\t4\t882397582\n173\t289\t4\t877556988\n15\t749\t1\t879455311\n185\t23\t4\t883524249\n280\t540\t3\t891702304\n244\t381\t4\t880604077\n150\t293\t4\t878746946\n7\t497\t4\t891352134\n178\t317\t4\t882826915\n178\t742\t3\t882823833\n95\t1217\t3\t880572658\n234\t1462\t3\t892333865\n97\t222\t5\t884238887\n109\t127\t2\t880563471\n117\t268\t5\t880124306\n269\t705\t2\t891448850\n130\t1246\t3\t876252497\n264\t655\t4\t886123530\n207\t13\t3\t875506839\n42\t588\t5\t881108147\n246\t409\t2\t884923372\n87\t367\t4\t879876702\n101\t304\t3\t877135677\n256\t127\t4\t882164406\n92\t794\t3\t875654798\n181\t762\t2\t878963418\n213\t235\t1\t878955115\n92\t739\t2\t876175582\n292\t661\t5\t881105561\n246\t665\t4\t884922831\n274\t845\t5\t878945579\n188\t692\t5\t875072583\n18\t86\t4\t880129731\n5\t439\t1\t878844423\n236\t632\t3\t890116254\n193\t407\t4\t889127921\n144\
t709\t4\t888105940\n90\t1198\t5\t891383866\n48\t609\t4\t879434819\n5\t225\t2\t875635723\n22\t128\t5\t878887983\n311\t432\t4\t884365485\n8\t22\t5\t879362183\n276\t188\t4\t874792547\n222\t173\t5\t878183043\n72\t866\t4\t880035887\n299\t134\t4\t878192311\n1\t171\t5\t889751711\n308\t295\t3\t887741461\n165\t216\t4\t879525778\n222\t49\t3\t878183512\n181\t121\t4\t878962623\n200\t11\t5\t884129542\n234\t626\t4\t892336358\n244\t707\t4\t880606243\n90\t25\t5\t891384789\n208\t216\t5\t883108324\n263\t96\t4\t891298336\n134\t323\t4\t891732335\n279\t586\t4\t892864663\n2\t292\t4\t888550774\n288\t593\t2\t886892127\n49\t302\t4\t888065432\n286\t153\t5\t877531406\n205\t304\t3\t888284313\n22\t80\t4\t878887227\n234\t318\t4\t892078890\n223\t328\t3\t891548959\n15\t25\t3\t879456204\n268\t147\t4\t876514002\n94\t1220\t3\t891722678\n274\t405\t4\t878945840\n7\t492\t5\t891352010\n268\t217\t2\t875744501\n16\t55\t5\t877717956\n164\t620\t3\t889402298\n290\t161\t4\t880474293\n92\t515\t4\t875640800\n239\t1070\t5\t889179032\n56\t449\t5\t892679308\n248\t234\t4\t884534968\n234\t10\t3\t891227851\n280\t1049\t2\t891702486\n308\t187\t5\t887738760\n276\t64\t5\t874787441\n192\t948\t3\t881368302\n122\t509\t4\t879270511\n85\t588\t3\t880838306\n262\t931\t2\t879790874\n201\t272\t3\t886013700\n181\t870\t2\t878962623\n295\t739\t4\t879518319\n263\t568\t4\t891299387\n295\t39\t4\t879518279\n201\t1100\t4\t884112800\n93\t820\t3\t888705966\n159\t1028\t5\t880557539\n158\t665\t2\t880134532\n293\t423\t3\t888906070\n82\t597\t3\t878768882\n276\t181\t5\t874786488\n13\t823\t5\t882397833\n217\t2\t3\t889069782\n83\t660\t4\t880308256\n189\t20\t5\t893264466\n222\t796\t4\t878183684\n146\t1022\t5\t891458193\n267\t121\t3\t878970681\n126\t294\t3\t887855087\n181\t1060\t1\t878962675\n125\t80\t4\t892838865\n43\t120\t4\t884029430\n13\t780\t1\t882142057\n253\t259\t2\t891628883\n42\t44\t3\t881108548\n77\t518\t4\t884753202\n291\t686\t5\t874835165\n268\t21\t3\t875742822\n262\t28\t3\t879792220\n234\t81\t3\t892334680\n29\t245\t3\t882820803\n236\t57
\t5\t890116575\n158\t729\t3\t880133116\n156\t661\t4\t888185947\n232\t52\t5\t888550130\n168\t866\t5\t884287927\n37\t288\t4\t880915258\n141\t245\t3\t884584426\n235\t230\t4\t889655162\n102\t70\t3\t888803537\n77\t172\t3\t884752562\n90\t506\t5\t891383319\n186\t566\t5\t879023663\n44\t660\t5\t878347915\n118\t774\t5\t875385198\n7\t661\t5\t891351624\n49\t1003\t2\t888068651\n62\t68\t1\t879374969\n42\t1028\t4\t881106072\n178\t433\t4\t882827834\n85\t51\t2\t879454782\n77\t474\t5\t884732407\n58\t1099\t2\t892243079\n56\t1047\t4\t892911290\n197\t688\t1\t891409564\n286\t99\t4\t878141681\n90\t258\t3\t891382121\n181\t1288\t1\t878962349\n295\t190\t4\t879517062\n224\t69\t4\t888082495\n272\t317\t4\t879454977\n221\t1010\t3\t875246662\n66\t877\t1\t883601089\n207\t318\t5\t877124871\n234\t487\t3\t892079237\n7\t648\t5\t891351653\n87\t82\t5\t879875774\n195\t1052\t1\t877835102\n44\t449\t5\t883613334\n306\t287\t4\t876504442\n194\t172\t3\t879521474\n94\t62\t3\t891722933\n167\t659\t4\t892738277\n108\t100\t4\t879879720\n230\t304\t5\t880484286\n181\t927\t1\t878962675\n54\t302\t4\t880928519\n90\t22\t4\t891384357\n181\t696\t2\t878962997\n286\t357\t4\t877531537\n14\t269\t4\t892242403\n311\t179\t2\t884365357\n92\t121\t5\t875640679\n21\t440\t1\t874951798\n244\t550\t1\t880602264\n181\t405\t4\t878962919\n65\t806\t4\t879216529\n37\t540\t2\t880916070\n44\t443\t5\t878348289\n244\t183\t4\t880606043\n1\t265\t4\t878542441\n270\t25\t5\t876954456\n299\t387\t2\t889502756\n94\t572\t3\t891723883\n286\t746\t4\t877533058\n239\t272\t5\t889181247\n216\t55\t5\t880245145\n254\t121\t3\t886472369\n62\t665\t2\t879376483\n178\t385\t4\t882826982\n194\t23\t4\t879522819\n268\t955\t3\t875745160\n188\t143\t5\t875072674\n276\t294\t4\t874786366\n158\t1098\t4\t880135069\n207\t845\t3\t881681663\n161\t48\t1\t891170745\n305\t654\t4\t886323937\n47\t324\t3\t879439078\n64\t736\t4\t889739212\n191\t751\t3\t891560753\n7\t378\t5\t891353011\n59\t92\t5\t888204997\n69\t268\t5\t882027109\n10\t461\t3\t877888944\n21\t129\t4\t874951382\n58\t9\t4\t8843
04328\n194\t152\t3\t879549996\n7\t200\t5\t891353543\n113\t126\t5\t875076827\n173\t328\t5\t877557028\n95\t233\t4\t879196354\n16\t194\t5\t877720733\n59\t323\t4\t888206809\n311\t654\t3\t884365075\n292\t589\t4\t881105516\n43\t203\t4\t883955224\n79\t50\t4\t891271545\n235\t70\t5\t889655619\n125\t190\t5\t892836309\n284\t322\t3\t885329671\n303\t161\t5\t879468547\n254\t378\t3\t886474396\n255\t1034\t1\t883217030\n104\t301\t2\t888442275\n90\t923\t5\t891383912\n6\t463\t4\t883601713\n279\t122\t1\t875297433\n286\t298\t4\t875807004\n222\t448\t3\t878183565\n297\t57\t5\t875239383\n42\t625\t3\t881108873\n130\t1217\t4\t875801778\n254\t357\t3\t886472466\n109\t475\t1\t880563641\n230\t1444\t2\t880485726\n244\t310\t3\t880601905\n6\t301\t2\t883600406\n36\t748\t4\t882157285\n256\t443\t3\t882164727\n102\t515\t1\t888801316\n104\t285\t4\t888465201\n21\t447\t5\t874951695\n111\t301\t4\t891680028\n18\t408\t5\t880129628\n25\t222\t4\t885852817\n110\t944\t3\t886989501\n270\t98\t5\t876955868\n68\t237\t5\t876974133\n83\t215\t4\t880307940\n6\t258\t2\t883268278\n89\t216\t5\t879459859\n128\t317\t4\t879968029\n305\t512\t4\t886323525\n184\t412\t2\t889912691\n286\t175\t5\t877532470\n279\t1428\t3\t888465209\n256\t86\t5\t882165103\n221\t48\t5\t875245462\n140\t332\t3\t879013617\n190\t977\t2\t891042938\n11\t227\t3\t891905896\n201\t203\t5\t884114471\n150\t181\t5\t878746685\n126\t245\t3\t887854726\n20\t208\t2\t879669401\n144\t742\t4\t888104122\n181\t930\t1\t878963275\n109\t566\t4\t880578814\n85\t1065\t3\t879455021\n213\t133\t3\t878955973\n222\t379\t1\t878184290\n223\t11\t3\t891550649\n215\t421\t4\t891435704\n218\t208\t3\t877488366\n174\t937\t5\t886432989\n275\t186\t3\t880314383\n68\t742\t1\t876974198\n268\t583\t4\t876513830\n160\t462\t4\t876858346\n195\t273\t4\t878019342\n224\t178\t4\t888082468\n5\t110\t1\t875636493\n99\t1016\t5\t885678724\n2\t251\t5\t888552084\n292\t9\t4\t881104148\n72\t568\t4\t880037203\n85\t228\t3\t882813248\n83\t281\t5\t880307072\n92\t831\t2\t886443708\n7\t543\t3\t891351772\n87\t401\t2\t87987
6813\n287\t926\t4\t875334340\n1\t155\t2\t878542201\n234\t632\t2\t892079538\n222\t53\t5\t878184113\n24\t64\t5\t875322758\n7\t554\t3\t891354639\n82\t56\t3\t878769410\n161\t318\t3\t891170824\n196\t393\t4\t881251863\n56\t91\t4\t892683275\n82\t477\t3\t876311344\n7\t472\t2\t891353357\n256\t761\t4\t882164644\n226\t56\t4\t883889102\n279\t741\t5\t875296891\n308\t1286\t3\t887738151\n16\t8\t5\t877722736\n180\t202\t3\t877128388\n203\t93\t4\t880434940\n145\t56\t5\t875271896\n288\t305\t4\t886372527\n84\t742\t3\t883450643\n44\t644\t3\t878347818\n17\t13\t3\t885272654\n313\t117\t4\t891015319\n148\t1\t4\t877019411\n197\t347\t4\t891409070\n21\t164\t5\t874951695\n279\t982\t3\t875298314\n239\t491\t5\t889181015\n185\t287\t5\t883526288\n297\t89\t4\t875239125\n303\t68\t4\t879467361\n186\t250\t1\t879023607\n73\t206\t3\t888625754\n104\t756\t2\t888465739\n94\t216\t3\t885870665\n239\t194\t5\t889178833\n197\t511\t5\t891409839\n280\t1\t4\t891700426\n1\t117\t3\t874965739\n224\t583\t1\t888103729\n303\t397\t1\t879543831\n60\t162\t4\t883327734\n198\t258\t4\t884204501\n239\t513\t5\t889178887\n6\t69\t3\t883601277\n233\t375\t4\t876374419\n85\t642\t4\t882995615\n110\t38\t3\t886988574\n184\t522\t3\t889908462\n99\t873\t1\t885678436\n13\t418\t2\t882398763\n201\t518\t4\t884112201\n13\t858\t1\t882397068\n214\t131\t3\t891544465\n296\t228\t4\t884197264\n222\t87\t3\t878182589\n279\t725\t4\t875314144\n217\t182\t2\t889070109\n85\t433\t3\t879828720\n239\t234\t3\t889178762\n13\t72\t4\t882141727\n194\t77\t3\t879527421\n208\t663\t5\t883108476\n109\t178\t3\t880572950\n230\t172\t4\t880484523\n59\t485\t2\t888204466\n313\t478\t3\t891014373\n70\t1133\t3\t884151344\n62\t182\t5\t879375169\n198\t234\t3\t884207833\n65\t125\t4\t879217509\n174\t660\t5\t886514261\n90\t12\t5\t891383241\n130\t1248\t3\t880396702\n100\t354\t2\t891375260\n283\t432\t5\t879297965\n275\t418\t3\t875154718\n311\t98\t5\t884364502\n195\t751\t4\t883295500\n130\t105\t4\t876251160\n269\t252\t1\t891456350\n286\t73\t5\t877532965\n7\t623\t3\t891354217\n56\t222\t5
\t892679439\n210\t204\t5\t887730676\n239\t9\t5\t889180446\n96\t87\t4\t884403531\n297\t73\t2\t875239691\n249\t239\t3\t879572284\n94\t860\t2\t891723706\n84\t121\t4\t883452307\n275\t265\t4\t880314031\n135\t1046\t3\t879858003\n291\t1178\t4\t875086354\n125\t382\t1\t892836623\n70\t399\t4\t884068521\n311\t9\t4\t884963365\n301\t523\t4\t882076146\n152\t685\t5\t880149074\n244\t172\t4\t880605665\n275\t1091\t2\t875154535\n53\t281\t4\t879443288\n198\t118\t2\t884206513\n244\t790\t4\t880608037\n26\t125\t4\t891371676\n151\t13\t3\t879542688\n124\t496\t1\t890286933\n24\t191\t5\t875323003\n271\t65\t3\t885849419\n307\t634\t3\t879283385\n294\t1245\t3\t877819265\n234\t241\t2\t892335042\n25\t501\t3\t885852301\n293\t137\t3\t888904653\n201\t432\t3\t884111312\n75\t240\t1\t884050661\n13\t181\t5\t882140354\n207\t68\t2\t877125350\n2\t50\t5\t888552084\n313\t566\t4\t891016220\n144\t125\t4\t888104191\n188\t443\t4\t875074329\n276\t324\t4\t874786419\n145\t974\t1\t882182634\n72\t234\t4\t880037418\n83\t385\t4\t887665549\n181\t619\t3\t878963086\n109\t402\t4\t880581344\n207\t107\t3\t876198301\n185\t216\t4\t883526268\n14\t213\t5\t890881557\n149\t319\t2\t883512658\n57\t79\t5\t883698495\n230\t963\t5\t880484370\n176\t875\t4\t886047442\n253\t97\t4\t891628501\n284\t269\t4\t885328991\n106\t526\t4\t881452685\n121\t180\t3\t891388286\n62\t86\t2\t879374640\n291\t418\t4\t875086920\n84\t1033\t4\t883452711\n293\t380\t2\t888907527\n207\t58\t3\t875991047\n194\t187\t4\t879520813\n109\t97\t3\t880578711\n283\t845\t4\t879297442\n297\t275\t5\t874954260\n181\t334\t1\t878961749\n78\t255\t4\t879633745\n11\t425\t4\t891904300\n308\t59\t4\t887737647\n193\t1078\t4\t889126943\n297\t234\t3\t875239018\n87\t585\t4\t879877008\n250\t204\t2\t878091682\n8\t50\t5\t879362124\n186\t148\t4\t891719774\n312\t692\t4\t891699426\n91\t683\t3\t891438351\n5\t454\t1\t875721432\n291\t376\t3\t875086534\n175\t127\t5\t877107640\n145\t737\t2\t875272833\n7\t644\t5\t891351685\n276\t419\t5\t874792907\n83\t210\t5\t880307751\n102\t524\t3\t888803537\n153\t174\t1
\t881371140\n62\t302\t3\t879371909\n49\t995\t3\t888065577\n268\t298\t3\t875742647\n207\t554\t2\t877822854\n313\t616\t5\t891015049\n286\t44\t3\t877532173\n279\t168\t5\t875296435\n276\t474\t5\t889174904\n62\t59\t4\t879373821\n254\t219\t1\t886475980\n83\t97\t4\t880308690\n63\t100\t5\t875747319\n16\t178\t5\t877719333\n297\t233\t2\t875239914\n90\t945\t5\t891383866\n85\t25\t2\t879452769\n42\t98\t4\t881106711\n303\t393\t4\t879484981\n274\t50\t5\t878944679\n104\t299\t3\t888442436\n94\t792\t4\t885873006\n184\t98\t4\t889908539\n293\t708\t3\t888907527\n248\t589\t4\t884534968\n18\t950\t3\t880130764\n217\t27\t1\t889070011\n200\t892\t4\t884127082\n201\t148\t1\t884140751\n296\t222\t5\t884196640\n7\t662\t3\t892133739\n196\t381\t4\t881251728\n69\t427\t3\t882145465\n72\t196\t4\t880036747\n256\t472\t4\t882152471\n128\t182\t4\t879967225\n151\t747\t3\t879524564\n7\t171\t3\t891351287\n286\t85\t5\t877533224\n172\t220\t4\t875537441\n308\t516\t4\t887736743\n190\t974\t2\t891625949\n82\t756\t1\t878768741\n308\t436\t4\t887739257\n59\t235\t1\t888203658\n64\t1063\t3\t889739539\n145\t756\t2\t885557506\n220\t298\t4\t881198966\n21\t324\t4\t874950889\n285\t269\t4\t890595313\n207\t65\t3\t878104594\n198\t658\t3\t884208173\n220\t333\t3\t881197771\n210\t70\t4\t887730589\n181\t14\t1\t878962392\n158\t128\t2\t880134296\n143\t682\t3\t888407741\n75\t237\t2\t884050309\n199\t221\t4\t883782854\n223\t1150\t2\t891549841\n297\t25\t4\t874954497\n276\t78\t4\t877934828\n299\t847\t4\t877877649\n293\t325\t2\t888904353\n301\t138\t2\t882079446\n1\t47\t4\t875072125\n164\t281\t4\t889401906\n96\t673\t4\t884402860\n291\t1016\t4\t874833827\n7\t451\t5\t891353892\n233\t177\t4\t877661496\n6\t517\t4\t883602212\n202\t283\t3\t879727153\n214\t117\t4\t891543241\n184\t602\t4\t889909691\n277\t257\t3\t879543487\n194\t212\t1\t879524216\n95\t68\t4\t879196231\n25\t257\t4\t885853415\n6\t23\t4\t883601365\n38\t573\t1\t892433660\n313\t436\t4\t891029877\n22\t241\t3\t878888025\n262\t617\t3\t879793715\n130\t569\t3\t880396494\n66\t181\t5\t88360142
5\n21\t948\t1\t874951054\n181\t1332\t1\t878962278\n262\t174\t3\t879791948\n206\t302\t5\t888180227\n222\t22\t5\t878183285\n76\t61\t4\t875028123\n151\t703\t4\t879542460\n314\t28\t5\t877888346\n13\t147\t3\t882397502\n44\t258\t4\t878340824\n303\t418\t4\t879483510\n16\t89\t2\t877717833\n270\t558\t5\t876954927\n248\t117\t5\t884535433\n125\t318\t5\t879454309\n138\t523\t5\t879024043\n268\t386\t2\t875743978\n291\t15\t5\t874833668\n234\t147\t3\t892335372\n239\t96\t5\t889178798\n15\t331\t3\t879455166\n94\t155\t2\t891723807\n136\t89\t4\t882848925\n223\t423\t3\t891550684\n82\t194\t4\t878770027\n145\t355\t3\t888396967\n280\t845\t3\t891700925\n179\t339\t1\t892151366\n178\t199\t4\t882826306\n307\t949\t4\t877123315\n10\t488\t5\t877888613\n116\t331\t3\t876451911\n23\t258\t5\t876785704\n308\t174\t4\t887736696\n185\t114\t4\t883524320\n188\t237\t3\t875073648\n118\t654\t5\t875385007\n246\t721\t4\t884921794\n234\t98\t4\t892078567\n194\t239\t3\t879522917\n94\t24\t4\t885873423\n122\t378\t4\t879270769\n312\t100\t4\t891698613\n262\t64\t5\t879793022\n154\t242\t3\t879138235\n223\t763\t3\t891550067\n99\t403\t4\t885680374\n83\t43\t4\t880308690\n130\t307\t4\t877984546\n174\t402\t5\t886513729\n256\t487\t5\t882164231\n59\t177\t4\t888204349\n161\t168\t1\t891171174\n244\t53\t3\t880607489\n250\t196\t4\t878091818\n43\t40\t3\t883956468\n285\t150\t5\t890595636\n42\t953\t2\t881108815\n97\t670\t5\t884239744\n122\t510\t4\t879270327\n61\t323\t3\t891206450\n222\t106\t2\t883816184\n4\t264\t3\t892004275\n304\t259\t1\t884967253\n37\t403\t5\t880915942\n49\t68\t1\t888069513\n303\t1098\t4\t879467959\n165\t372\t5\t879525987\n176\t324\t5\t886047292\n3\t335\t1\t889237269\n56\t869\t3\t892683895\n44\t15\t4\t878341343\n190\t117\t4\t891033697\n29\t189\t4\t882821942\n94\t174\t4\t885870231\n130\t949\t3\t876251944\n117\t181\t5\t880124648\n303\t779\t1\t879543418\n19\t435\t5\t885412840\n194\t191\t4\t879521856\n158\t24\t4\t880134261\n56\t447\t4\t892679067\n262\t223\t3\t879791816\n181\t1334\t1\t878962240\n214\t137\t4\t891543227\n
92\t747\t4\t875656164\n188\t96\t5\t875073128\n58\t173\t5\t884305353\n244\t154\t5\t880606385\n134\t879\t4\t891732393\n298\t625\t4\t884183406\n254\t230\t4\t886472400\n230\t138\t3\t880485197\n16\t209\t5\t877722736\n151\t835\t5\t879524199\n181\t1327\t1\t878963305\n145\t1248\t3\t875272195\n200\t588\t5\t884128499\n248\t257\t3\t884535840\n297\t432\t4\t875239658\n312\t133\t5\t891699296\n151\t12\t5\t879524368\n110\t568\t3\t886988449\n305\t483\t5\t886323068\n141\t258\t5\t884584338\n44\t240\t4\t878346997\n186\t263\t3\t879023571\n214\t213\t4\t891544414\n233\t208\t4\t880610814\n104\t287\t2\t888465347\n312\t153\t2\t891699491\n1\t222\t4\t878873388\n206\t323\t1\t888179833\n230\t419\t4\t880484587\n56\t450\t3\t892679374\n94\t651\t5\t891725332\n205\t316\t4\t888284710\n14\t174\t5\t890881294\n268\t790\t2\t876513785\n276\t1081\t3\t880913705\n83\t929\t3\t880307140\n268\t580\t3\t875309344\n222\t1041\t3\t881060155\n279\t89\t4\t875306910\n5\t424\t1\t875635807\n112\t331\t4\t884992603\n296\t429\t5\t884197330\n18\t202\t3\t880130515\n13\t868\t5\t882139901\n87\t210\t5\t879875734\n10\t285\t5\t877889186\n181\t328\t3\t878961227\n23\t463\t4\t874785843\n253\t746\t3\t891628630\n234\t228\t3\t892079190\n299\t1047\t2\t877880041\n66\t1\t3\t883601324\n216\t174\t5\t881432488\n290\t208\t3\t880475245\n79\t1161\t2\t891271697\n264\t448\t2\t886122031\n4\t303\t5\t892002352\n144\t831\t3\t888104805\n138\t517\t4\t879024279\n64\t433\t2\t889740286\n5\t1\t4\t875635748\n276\t357\t5\t874787526\n62\t433\t5\t879375588\n239\t475\t5\t889178689\n293\t166\t3\t888905520\n130\t234\t5\t875216932\n264\t70\t4\t886123596\n208\t197\t5\t883108797\n24\t763\t5\t875322875\n279\t1162\t3\t875314334\n3\t245\t1\t889237247\n101\t596\t3\t877136564\n162\t1019\t4\t877636556\n223\t908\t1\t891548802\n99\t246\t3\t888469392\n239\t430\t3\t889180338\n160\t160\t5\t876862078\n172\t580\t4\t875538028\n303\t1160\t2\t879544629\n54\t676\t5\t880935294\n44\t507\t3\t878347392\n210\t97\t5\t887736454\n164\t930\t4\t889402340\n299\t240\t2\t877878414\n28\t217\t3\t881
961671\n305\t79\t3\t886324276\n18\t729\t3\t880131236\n82\t343\t1\t884713755\n109\t1012\t4\t880564570\n207\t25\t4\t876079113\n92\t1209\t1\t875660468\n109\t1\t4\t880563619\n15\t222\t3\t879455730\n58\t709\t5\t884304812\n303\t693\t4\t879466771\n152\t111\t5\t880148782\n194\t160\t2\t879551380\n92\t241\t3\t875655961\n77\t91\t3\t884752924\n244\t662\t3\t880606533\n177\t321\t2\t880130481\n131\t221\t3\t883681561\n197\t302\t3\t891409070\n227\t50\t4\t879035347\n85\t282\t3\t879829618\n295\t72\t4\t879518714\n181\t1\t3\t878962392\n277\t255\t4\t879544145\n279\t96\t4\t875310606\n1\t253\t5\t874965970\n18\t182\t4\t880130640\n276\t568\t4\t882659211\n87\t177\t5\t879875940\n177\t69\t1\t880131088\n213\t13\t4\t878955139\n125\t134\t5\t879454532\n128\t739\t4\t879969349\n291\t428\t5\t874871766\n25\t208\t4\t885852337\n288\t272\t5\t889225463\n207\t1350\t2\t877878772\n271\t56\t3\t885848559\n5\t363\t3\t875635225\n274\t748\t5\t878944406\n70\t419\t5\t884065035\n311\t559\t2\t884366187\n151\t919\t5\t879524368\n199\t268\t5\t883782509\n201\t209\t3\t884112801\n99\t274\t1\t885679157\n11\t740\t4\t891903067\n59\t77\t4\t888206254\n184\t277\t3\t889907971\n222\t88\t4\t878183336\n38\t161\t5\t892432062\n59\t418\t2\t888205188\n104\t300\t3\t888442275\n298\t1346\t3\t884126061\n180\t1119\t3\t877128156\n7\t674\t2\t891352659\n121\t14\t5\t891390014\n268\t1041\t1\t875743735\n252\t277\t4\t891456797\n303\t411\t4\t879483802\n210\t527\t5\t887736232\n234\t648\t3\t892826760\n312\t573\t5\t891712535\n308\t215\t3\t887737483\n234\t1397\t4\t892334976\n75\t546\t3\t884050422\n117\t15\t5\t880125887\n246\t239\t3\t884921380\n64\t516\t5\t889737376\n85\t187\t5\t879454235\n239\t81\t3\t889179808\n59\t54\t4\t888205921\n256\t220\t3\t882151690\n216\t196\t5\t880245145\n203\t282\t1\t880434919\n13\t195\t3\t881515296\n144\t153\t5\t888105823\n100\t268\t3\t891374982\n210\t274\t5\t887730676\n94\t471\t4\t891721642\n13\t807\t1\t886304229\n125\t657\t3\t892836422\n65\t1142\t4\t879217349\n1\t113\t5\t878542738\n76\t175\t4\t875028853\n294\t508\t4\t87781953
2\n263\t1451\t4\t891299949\n294\t930\t3\t889242704\n121\t117\t1\t891388600\n85\t13\t3\t879452866\n303\t426\t3\t879542535\n212\t180\t1\t879303974\n6\t492\t5\t883601089\n181\t240\t1\t878963122\n279\t746\t5\t875310233\n303\t1109\t4\t879467936\n184\t191\t4\t889908716\n310\t116\t5\t879436104\n313\t22\t3\t891014870\n314\t1150\t4\t877887002\n13\t121\t5\t882397503\n43\t5\t4\t875981421\n58\t214\t2\t884305296\n215\t164\t3\t891436633\n62\t288\t2\t879371909\n280\t127\t5\t891702544\n161\t898\t3\t891170191\n11\t723\t5\t891904637\n94\t218\t3\t891721851\n35\t243\t2\t875459046\n311\t566\t4\t884366112\n48\t680\t3\t879434330\n85\t604\t4\t882995132\n288\t527\t3\t886373565\n184\t514\t5\t889908497\n151\t929\t3\t879543457\n90\t690\t4\t891383319\n11\t38\t3\t891905936\n104\t1016\t1\t888466002\n106\t582\t4\t881451199\n181\t1010\t1\t878962774\n37\t117\t4\t880915674\n276\t845\t4\t874786807\n22\t258\t5\t878886261\n70\t82\t4\t884068075\n5\t98\t3\t875720691\n308\t95\t4\t887737130\n60\t208\t5\t883326028\n270\t778\t5\t876955711\n243\t208\t4\t879989134\n92\t540\t2\t875813197\n81\t280\t4\t876534214\n293\t412\t1\t888905377\n200\t478\t5\t884128788\n13\t308\t3\t881514726\n56\t184\t4\t892679088\n116\t250\t4\t876452606\n295\t172\t4\t879516986\n63\t1007\t5\t875747368\n295\t235\t4\t879517943\n104\t1010\t1\t888465554\n156\t641\t5\t888185677\n269\t1165\t1\t891446904\n160\t430\t5\t876861799\n237\t191\t4\t879376773\n287\t252\t1\t875334361\n290\t132\t3\t880473993\n45\t109\t5\t881012356\n224\t678\t3\t888082277\n145\t764\t2\t888398257\n277\t1011\t3\t879543697\n65\t100\t3\t879217558\n272\t1101\t5\t879454977\n116\t255\t3\t876452524\n184\t86\t5\t889908694\n285\t151\t5\t890595636\n222\t148\t2\t881061164\n72\t28\t4\t880036824\n271\t187\t5\t885848343\n94\t211\t5\t891721142\n246\t425\t5\t884921918\n115\t8\t5\t881171982\n176\t327\t3\t886047176\n13\t396\t3\t882141727\n129\t331\t2\t883244737\n257\t1260\t2\t880496892\n95\t1\t5\t879197329\n147\t904\t5\t885594015\n151\t58\t4\t879524849\n184\t660\t3\t889909962\n311\t386\t3\t884
365747\n105\t268\t4\t889214268\n158\t510\t3\t880134296\n34\t312\t4\t888602742\n72\t427\t5\t880037702\n263\t416\t5\t891299697\n94\t1048\t4\t891722678\n200\t291\t3\t891825292\n45\t118\t4\t881014550\n279\t144\t4\t880850073\n145\t22\t5\t875273021\n71\t89\t5\t880864462\n182\t69\t5\t876435435\n193\t627\t4\t889126972\n214\t302\t4\t892668197\n151\t485\t5\t879525002\n102\t322\t3\t883277645\n234\t571\t2\t892318158\n249\t930\t2\t879640585\n195\t328\t4\t884420059\n109\t258\t5\t880562908\n222\t552\t2\t878184596\n282\t288\t4\t879949367\n117\t758\t2\t881011217\n23\t381\t4\t874787350\n112\t327\t1\t884992535\n303\t145\t1\t879543573\n252\t300\t4\t891448664\n151\t372\t5\t879524819\n282\t327\t5\t879949417\n304\t237\t5\t884968415\n290\t568\t3\t880474716\n64\t160\t4\t889739288\n28\t79\t4\t881961003\n168\t1278\t3\t884287560\n265\t471\t4\t875320302\n18\t113\t5\t880129628\n83\t82\t5\t887665423\n90\t499\t5\t891383866\n234\t1186\t4\t892335707\n87\t196\t5\t879877681\n26\t685\t3\t891371676\n150\t129\t4\t878746946\n161\t98\t4\t891171357\n70\t210\t4\t884065854\n51\t182\t3\t883498790\n222\t1057\t4\t881061370\n92\t176\t5\t875652981\n204\t216\t4\t892513864\n164\t685\t5\t889402160\n57\t682\t3\t883696824\n184\t207\t4\t889908903\n60\t403\t3\t883327087\n92\t180\t5\t875653016\n43\t204\t4\t883956122\n222\t1042\t4\t878184514\n197\t300\t4\t891409422\n92\t790\t3\t875907618\n294\t282\t3\t877821796\n201\t747\t2\t884113635\n201\t215\t2\t884140382\n193\t410\t3\t889127633\n271\t705\t4\t885849052\n214\t693\t3\t891544414\n73\t657\t5\t888625422\n90\t187\t4\t891383561\n315\t273\t3\t879821349\n48\t309\t3\t879434132\n255\t472\t1\t883216958\n270\t671\t4\t876956360\n66\t7\t3\t883601355\n6\t478\t4\t883602762\n101\t222\t3\t877136243\n207\t1046\t4\t875509787\n144\t182\t3\t888105743\n85\t83\t4\t886282959\n102\t625\t3\t883748418\n158\t770\t5\t880134477\n297\t588\t4\t875238579\n90\t507\t5\t891383987\n271\t482\t5\t885848519\n130\t901\t1\t884624044\n178\t276\t3\t882823978\n90\t245\t3\t891382612\n181\t1094\t1\t878963086\n311\t143
\t3\t884364812\n267\t17\t4\t878971773\n201\t51\t2\t884140751\n194\t647\t4\t879521531\n59\t387\t3\t888206562\n1\t227\t4\t876892946\n116\t751\t3\t890131577\n170\t292\t5\t884103732\n110\t578\t3\t886988536\n60\t1021\t5\t883326185\n287\t347\t4\t888177040\n197\t55\t3\t891409982\n38\t679\t5\t892432062\n195\t1014\t4\t879673925\n279\t227\t4\t889326161\n84\t748\t4\t883449530\n31\t886\t2\t881547877\n316\t98\t5\t880853743\n25\t25\t5\t885853415\n168\t274\t4\t884287865\n103\t24\t4\t880415847\n299\t588\t4\t877880852\n194\t478\t3\t879521329\n287\t294\t5\t875333873\n234\t582\t4\t892334883\n279\t1048\t1\t886015533\n87\t9\t4\t879877931\n181\t408\t1\t878962550\n279\t1151\t2\t875744584\n49\t47\t5\t888068715\n296\t855\t5\t884197352\n44\t95\t4\t878347569\n92\t216\t3\t875653867\n135\t39\t3\t879857931\n13\t66\t3\t882141485\n262\t386\t3\t879795512\n7\t676\t3\t891354499\n116\t942\t3\t876454090\n318\t474\t4\t884495742\n141\t826\t2\t884585437\n269\t13\t4\t891446662\n222\t1044\t4\t881060578\n82\t455\t4\t876311319\n279\t254\t3\t879572960\n42\t685\t4\t881105972\n145\t1245\t5\t875271397\n184\t161\t2\t889909640\n49\t625\t3\t888067031\n177\t243\t1\t882142141\n313\t99\t4\t891014029\n32\t290\t3\t883717913\n308\t848\t4\t887736925\n145\t448\t5\t877343121\n130\t542\t3\t875801778\n130\t806\t3\t875217096\n165\t288\t2\t879525673\n249\t255\t3\t879571752\n49\t581\t3\t888068143\n195\t300\t3\t890588925\n118\t475\t5\t875384793\n130\t316\t4\t888211794\n104\t293\t3\t888465166\n201\t1229\t3\t884140307\n142\t82\t4\t888640356\n119\t718\t5\t874774956\n303\t94\t3\t879485318\n99\t50\t5\t885679998\n306\t14\t5\t876503995\n92\t709\t2\t875654590\n227\t295\t5\t879035387\n3\t337\t1\t889236983\n94\t820\t1\t891723186\n59\t1107\t4\t888206254\n30\t539\t3\t885941454\n262\t821\t3\t879794887\n6\t508\t3\t883599530\n311\t716\t4\t884365718\n268\t364\t3\t875743979\n262\t553\t4\t879795122\n214\t275\t3\t891542968\n16\t56\t5\t877719863\n262\t293\t2\t879790906\n293\t132\t4\t888905481\n62\t132\t5\t879375022\n94\t346\t4\t891725410\n13\t59\t4\t
882140425\n240\t313\t5\t885775604\n102\t161\t2\t888801876\n83\t301\t2\t891181430\n291\t7\t5\t874834481\n312\t28\t4\t891698300\n31\t484\t5\t881548030\n291\t70\t4\t874868146\n56\t172\t5\t892737191\n109\t588\t4\t880578388\n110\t1246\t2\t886989613\n59\t429\t4\t888204597\n246\t1218\t3\t884922801\n65\t196\t5\t879216637\n24\t367\t2\t875323241\n92\t115\t3\t875654125\n308\t741\t4\t887739863\n301\t660\t4\t882076782\n214\t1129\t4\t892668249\n158\t241\t4\t880134445\n269\t674\t2\t891451754\n308\t493\t3\t887737293\n32\t151\t3\t883717850\n224\t191\t4\t888082468\n215\t423\t5\t891435526\n32\t1012\t4\t883717581\n154\t289\t2\t879138345\n201\t509\t3\t884111546\n85\t298\t4\t880581629\n180\t68\t5\t877127721\n184\t36\t3\t889910195\n188\t218\t5\t875074667\n305\t11\t1\t886323237\n144\t508\t4\t888104150\n73\t94\t1\t888625754\n194\t205\t3\t879524291\n177\t203\t4\t880131026\n276\t273\t4\t874786517\n198\t7\t4\t884205317\n108\t290\t4\t879880076\n189\t197\t5\t893265291\n73\t56\t4\t888626041\n172\t462\t3\t875537717\n120\t546\t2\t889490979\n101\t471\t3\t877136535\n5\t102\t3\t875721196\n26\t235\t2\t891372429\n268\t1249\t2\t875743793\n276\t773\t3\t874792794\n13\t150\t5\t882140588\n7\t401\t4\t891354257\n128\t482\t4\t879967432\n104\t7\t3\t888465972\n293\t39\t3\t888906804\n256\t25\t5\t882150552\n90\t821\t3\t891385843\n275\t69\t3\t880314089\n22\t510\t5\t878887765\n312\t494\t5\t891698454\n207\t192\t3\t877822350\n264\t504\t5\t886122577\n137\t687\t4\t881432756\n185\t740\t4\t883524475\n307\t687\t1\t879114143\n42\t176\t3\t881107178\n145\t472\t3\t875271128\n189\t634\t3\t893265506\n262\t121\t3\t879790536\n251\t148\t2\t886272547\n259\t772\t4\t874724882\n239\t58\t5\t889179623\n312\t921\t5\t891699295\n92\t15\t3\t875640189\n81\t742\t2\t876533764\n311\t419\t3\t884364931\n102\t448\t3\t888803002\n249\t746\t5\t879641209\n95\t527\t4\t888954440\n19\t655\t3\t885412723\n79\t100\t5\t891271652\n189\t751\t4\t893265046\n253\t510\t5\t891628416\n201\t919\t3\t884141208\n1\t17\t3\t875073198\n214\t42\t5\t892668130\n7\t81\t5\t891352
626\n234\t132\t4\t892333865\n59\t148\t3\t888203175\n13\t354\t2\t888779458\n6\t469\t5\t883601155\n82\t14\t4\t876311280\n109\t627\t5\t880582133\n305\t50\t5\t886321799\n195\t154\t3\t888737525\n277\t279\t4\t879543592\n223\t8\t2\t891550684\n92\t81\t3\t875654929\n201\t69\t2\t884112901\n94\t58\t5\t891720540\n217\t144\t4\t889069782\n244\t148\t2\t880605071\n313\t200\t3\t891017736\n181\t874\t1\t878961749\n116\t1216\t3\t876452582\n303\t433\t4\t879467985\n117\t151\t4\t880126373\n221\t327\t4\t875243968\n46\t307\t3\t883611430\n91\t28\t4\t891439243\n151\t317\t5\t879524610\n64\t176\t4\t889737567\n90\t553\t2\t891384959\n116\t271\t4\t886310197\n291\t1139\t3\t874871671\n62\t111\t3\t879372670\n196\t251\t3\t881251274\n303\t120\t2\t879544099\n49\t547\t5\t888066187\n307\t1022\t4\t879283008\n303\t176\t5\t879467260\n286\t154\t4\t877533381\n291\t501\t4\t875087100\n235\t87\t4\t889655162\n254\t379\t1\t886474650\n276\t157\t5\t874790773\n135\t1208\t3\t879858003\n57\t243\t3\t883696547\n276\t1157\t2\t874795772\n7\t576\t5\t892132943\n250\t404\t4\t878092144\n318\t768\t2\t884498022\n234\t808\t2\t892335707\n289\t282\t3\t876789180\n87\t1079\t2\t879877240\n50\t823\t3\t877052784\n25\t258\t5\t885853199\n18\t496\t5\t880130470\n193\t790\t3\t889127381\n263\t510\t4\t891298392\n209\t906\t2\t883589546\n207\t716\t3\t875508783\n314\t535\t4\t877887002\n250\t338\t4\t883263374\n262\t568\t3\t879794113\n95\t172\t4\t879196847\n94\t470\t4\t891722006\n59\t583\t5\t888205921\n277\t282\t4\t879543697\n303\t1286\t4\t879467413\n271\t714\t3\t885848863\n269\t235\t3\t891446756\n148\t140\t1\t877019882\n223\t977\t2\t891550295\n210\t357\t5\t887736206\n185\t199\t4\t883526268\n174\t80\t1\t886515210\n235\t480\t4\t889655044\n276\t939\t3\t874790855\n99\t354\t2\t888469332\n308\t163\t4\t887737084\n303\t738\t2\t879544276\n224\t873\t2\t888082187\n298\t252\t4\t884183833\n44\t208\t4\t878347420\n315\t13\t4\t879821158\n215\t197\t4\t891435357\n269\t9\t4\t891446246\n42\t195\t5\t881107949\n293\t79\t3\t888906045\n246\t68\t5\t884922341\n101\t405\t4\t
877137015\n92\t665\t3\t875906853\n249\t88\t4\t879572668\n60\t525\t5\t883325944\n13\t331\t3\t881515457\n271\t750\t4\t885844698\n92\t731\t4\t875653769\n254\t188\t3\t886473672\n311\t203\t5\t884365201\n263\t197\t4\t891299752\n201\t660\t3\t884140927\n279\t79\t3\t875296461\n138\t496\t4\t879024043\n209\t251\t5\t883417810\n217\t7\t4\t889069741\n261\t340\t5\t890454045\n176\t258\t4\t886047026\n303\t1037\t3\t879544340\n81\t169\t4\t876534751\n62\t114\t4\t879373568\n72\t530\t4\t880037164\n276\t364\t3\t877935377\n88\t750\t2\t891037276\n49\t7\t4\t888067307\n263\t117\t3\t891299387\n9\t298\t5\t886960055\n92\t528\t4\t875657681\n249\t708\t4\t879572403\n262\t754\t3\t879961283\n196\t655\t5\t881251793\n207\t1436\t3\t878191574\n256\t771\t2\t882164999\n276\t226\t4\t874792520\n134\t313\t5\t891732150\n311\t849\t3\t884365781\n181\t1383\t1\t878962086\n203\t148\t3\t880434755\n247\t736\t5\t893097024\n313\t745\t3\t891016583\n311\t83\t5\t884364812\n251\t1014\t5\t886272486\n227\t411\t4\t879035897\n59\t550\t5\t888206605\n201\t206\t2\t884112029\n58\t100\t5\t884304553\n249\t723\t4\t879641093\n286\t1316\t5\t884583549\n11\t725\t3\t891905568\n7\t228\t4\t891350845\n92\t846\t3\t886443471\n160\t56\t5\t876770222\n103\t127\t4\t880416331\n11\t110\t3\t891905324\n87\t2\t4\t879876074\n45\t763\t2\t881013563\n293\t605\t3\t888907702\n291\t732\t4\t874868097\n254\t575\t3\t886476165\n49\t334\t4\t888065744\n222\t1284\t4\t878184422\n161\t162\t2\t891171413\n268\t1\t3\t875742341\n59\t215\t5\t888204430\n177\t209\t4\t880130736\n151\t1298\t4\t879528520\n299\t235\t1\t877878184\n29\t332\t4\t882820869\n30\t435\t5\t885941156\n297\t182\t3\t875239125\n315\t185\t4\t879821267\n23\t172\t4\t874785889\n262\t47\t2\t879794599\n321\t496\t4\t879438607\n191\t754\t3\t891560366\n106\t778\t4\t881453040\n7\t151\t4\t891352749\n178\t678\t3\t882823530\n84\t12\t5\t883452874\n94\t168\t5\t891721378\n264\t33\t3\t886122644\n239\t529\t5\t889179808\n90\t657\t5\t891385190\n261\t875\t5\t890454351\n190\t302\t5\t891033606\n112\t289\t5\t884992690\n144\t106\t3\
t888104684\n199\t258\t4\t883782403\n224\t20\t1\t888104487\n85\t501\t3\t880838306\n301\t202\t5\t882076211\n145\t743\t1\t888398516\n294\t127\t5\t877819265\n130\t206\t3\t875801695\n103\t121\t3\t880415766\n152\t412\t2\t880149328\n267\t840\t4\t878970926\n286\t231\t3\t877532094\n200\t24\t2\t884127370\n5\t211\t4\t875636631\n160\t117\t4\t876767822\n6\t357\t4\t883602422\n158\t72\t3\t880135118\n297\t736\t4\t875239975\n250\t244\t4\t878089786\n57\t760\t2\t883697617\n58\t268\t5\t884304288\n23\t1006\t3\t874785809\n301\t1228\t4\t882079423\n307\t265\t3\t877122816\n276\t1095\t1\t877935135\n223\t411\t1\t891550005\n92\t24\t3\t875640448\n137\t300\t5\t881432524\n164\t117\t5\t889401816\n276\t38\t3\t874792574\n213\t294\t3\t878870226\n286\t34\t5\t877534701\n232\t197\t4\t888549563\n150\t221\t4\t878747017\n21\t103\t1\t874951245\n130\t731\t3\t876251922\n222\t441\t2\t881059920\n1\t90\t4\t878542300\n189\t1005\t4\t893265971\n49\t38\t1\t888068289\n311\t5\t3\t884365853\n36\t307\t4\t882157227\n128\t228\t3\t879969329\n151\t89\t5\t879524491\n248\t475\t5\t884535446\n95\t1229\t2\t879198800\n213\t609\t4\t878955533\n203\t181\t5\t880434278\n308\t863\t3\t887736881\n269\t47\t4\t891448386\n198\t100\t1\t884207325\n297\t307\t4\t878771124\n305\t189\t5\t886323303\n266\t676\t3\t892257897\n197\t229\t3\t891410039\n74\t272\t5\t888333194\n127\t294\t4\t884363803\n194\t4\t4\t879521397\n177\t56\t5\t880130618\n45\t473\t3\t881014417\n57\t28\t4\t883698324\n239\t187\t5\t889178798\n268\t94\t2\t875743630\n238\t252\t3\t883576644\n201\t1010\t3\t884140579\n131\t1281\t4\t883681561\n270\t97\t4\t876955633\n159\t127\t5\t880989744\n230\t202\t4\t880485352\n92\t219\t4\t875654888\n318\t356\t4\t884496671\n123\t531\t3\t879872671\n267\t403\t4\t878971939\n232\t630\t3\t888550060\n5\t382\t5\t875636587\n16\t155\t3\t877719157\n180\t762\t4\t877126241\n178\t282\t3\t882823978\n319\t313\t5\t889816026\n180\t737\t3\t877128327\n270\t736\t5\t876955087\n269\t658\t2\t891448497\n293\t496\t5\t888905840\n269\t793\t4\t891449880\n54\t685\t3\t880935504\n21\t98
\t5\t874951657\n303\t209\t5\t879467328\n13\t766\t4\t882139686\n314\t95\t5\t877888168\n151\t387\t5\t879542353\n230\t378\t5\t880485159\n201\t403\t3\t884112427\n95\t1206\t4\t888956137\n270\t370\t5\t876956232\n256\t716\t5\t882165135\n80\t582\t3\t887401701\n303\t435\t5\t879466491\n312\t121\t3\t891698174\n151\t1006\t1\t879524974\n62\t258\t5\t879371909\n189\t1115\t4\t893264270\n77\t195\t5\t884733695\n99\t742\t5\t885679114\n291\t1028\t3\t875086561\n293\t748\t2\t888904327\n181\t1342\t1\t878962168\n206\t900\t1\t888179980\n83\t338\t4\t883868647\n262\t179\t4\t879962570\n253\t216\t4\t891628252\n223\t596\t3\t891549713\n108\t50\t4\t879879739\n94\t347\t5\t891724950\n293\t779\t1\t888908066\n101\t281\t2\t877136842\n267\t980\t3\t878970578\n201\t1245\t4\t884141015\n314\t1263\t2\t877890611\n271\t111\t4\t885847956\n314\t276\t1\t877886413\n18\t387\t4\t880130155\n207\t4\t4\t876198457\n313\t96\t5\t891015144\n21\t299\t1\t874950931\n215\t144\t4\t891435107\n279\t1376\t4\t886016680\n234\t1015\t2\t892079617\n296\t248\t5\t884196765\n270\t83\t4\t876954995\n210\t161\t5\t887736393\n201\t79\t4\t884112245\n5\t376\t2\t879198045\n184\t181\t4\t889907426\n104\t411\t1\t888465739\n275\t449\t3\t876198328\n185\t269\t5\t883524428\n276\t550\t4\t874792574\n279\t1182\t3\t875314370\n216\t69\t5\t880235229\n21\t457\t1\t874951054\n16\t471\t3\t877724845\n147\t292\t5\t885594040\n291\t250\t4\t874805927\n28\t95\t3\t881956917\n29\t539\t2\t882821044\n291\t471\t4\t874833746\n7\t580\t3\t892132171\n181\t16\t1\t878962996\n297\t218\t3\t875409827\n308\t559\t4\t887740367\n87\t211\t5\t879876812\n97\t89\t5\t884238939\n21\t596\t3\t874951617\n59\t710\t3\t888205463\n238\t756\t3\t883576476\n178\t209\t4\t882826944\n186\t470\t5\t879023693\n299\t615\t4\t878192555\n10\t504\t5\t877892110\n110\t682\t4\t886987354\n109\t101\t1\t880578186\n157\t250\t1\t886890296\n267\t386\t3\t878973597\n181\t327\t3\t878961780\n207\t87\t4\t884386260\n47\t995\t3\t879440429\n148\t114\t5\t877016735\n94\t9\t5\t885872684\n60\t222\t4\t883327441\n244\t409\t4\t880605294
\n276\t246\t4\t874786686\n90\t906\t2\t891382240\n234\t20\t4\t891227979\n106\t107\t4\t883876961\n216\t697\t4\t883981700\n294\t1199\t2\t889242142\n323\t257\t2\t878739393\n140\t268\t4\t879013684\n220\t303\t4\t881198014\n67\t64\t5\t875379211\n170\t299\t3\t886190476\n230\t142\t4\t880485633\n299\t641\t4\t889501514\n7\t581\t5\t891353477\n275\t501\t3\t875154718\n44\t250\t5\t878346709\n291\t214\t4\t874868146\n11\t741\t5\t891902745\n59\t286\t3\t888202532\n174\t395\t1\t886515154\n194\t234\t3\t879521167\n57\t204\t4\t883698272\n314\t417\t4\t877888855\n201\t197\t4\t884113422\n184\t155\t3\t889912656\n194\t792\t4\t879524504\n159\t1037\t2\t884360502\n186\t983\t3\t879023152\n181\t979\t2\t878963241\n68\t7\t3\t876974096\n286\t721\t3\t877532329\n316\t306\t4\t880853072\n280\t781\t4\t891701699\n13\t14\t4\t884538727\n211\t127\t4\t879461498\n187\t215\t3\t879465805\n71\t134\t3\t885016614\n306\t242\t5\t876503793\n64\t684\t4\t889740199\n303\t277\t3\t879468547\n198\t135\t5\t884208061\n232\t91\t5\t888549515\n98\t47\t4\t880498898\n53\t24\t3\t879442538\n299\t971\t2\t889502353\n254\t1116\t3\t886473448\n7\t106\t4\t891353892\n12\t300\t4\t879958639\n239\t10\t5\t889180338\n238\t111\t4\t883576603\n130\t267\t5\t875801239\n90\t662\t5\t891385842\n63\t20\t3\t875748004\n40\t268\t4\t889041430\n181\t221\t1\t878962465\n298\t152\t3\t884183336\n104\t327\t2\t888442202\n42\t185\t4\t881107449\n181\t995\t1\t878961585\n258\t288\t1\t885700919\n291\t578\t4\t874835242\n148\t70\t5\t877021271\n305\t187\t4\t886323189\n184\t71\t4\t889911552\n94\t556\t3\t891722882\n158\t1011\t4\t880132579\n7\t528\t5\t891352659\n174\t237\t4\t886434047\n158\t190\t5\t880134332\n201\t853\t4\t884114635\n276\t43\t1\t874791383\n278\t311\t4\t891295130\n229\t347\t1\t891632073\n101\t252\t3\t877136628\n63\t1028\t3\t875748198\n275\t520\t4\t880314218\n275\t173\t3\t875154795\n62\t1073\t4\t879374752\n230\t234\t4\t880484756\n109\t975\t3\t880572351\n73\t357\t5\t888626007\n83\t118\t3\t880307071\n4\t361\t5\t892002353\n130\t245\t1\t874953526\n64\t778\t5\t8897398
06\n15\t473\t1\t879456204\n244\t89\t5\t880602210\n7\t643\t4\t891350932\n219\t347\t1\t889386819\n295\t704\t5\t879519266\n293\t288\t3\t888904327\n125\t997\t2\t892838976\n279\t487\t3\t890282182\n76\t582\t3\t882607444\n272\t48\t4\t879455143\n269\t285\t5\t891446165\n244\t380\t4\t880608133\n271\t220\t3\t885848179\n321\t287\t3\t879438857\n306\t864\t3\t876504286\n224\t332\t3\t888103429\n57\t1047\t4\t883697679\n145\t591\t4\t879161848\n85\t277\t2\t879452938\n116\t7\t2\t876453915\n52\t95\t4\t882922927\n209\t688\t1\t883589626\n145\t260\t4\t875269871\n208\t202\t4\t883108476\n160\t187\t5\t876770168\n141\t274\t5\t884585220\n260\t990\t5\t890618729\n177\t299\t4\t880130500\n82\t231\t2\t878769815\n223\t969\t5\t891550649\n107\t271\t2\t891264432\n26\t25\t3\t891373727\n297\t1016\t3\t874955131\n244\t167\t3\t880607853\n15\t678\t1\t879455311\n286\t709\t4\t877532748\n82\t411\t3\t878768902\n167\t364\t3\t892738212\n99\t181\t5\t885680138\n56\t196\t2\t892678628\n293\t346\t3\t888904004\n7\t650\t3\t891350965\n90\t425\t4\t891384996\n228\t475\t3\t889388521\n82\t919\t3\t876311280\n43\t151\t4\t875975613\n10\t289\t4\t877886223\n197\t515\t5\t891409935\n57\t756\t3\t883697730\n246\t82\t2\t884921986\n62\t24\t4\t879372633\n323\t223\t4\t878739699\n13\t320\t1\t882397010\n268\t63\t1\t875743792\n18\t863\t3\t880130680\n271\t410\t2\t885848238\n307\t509\t3\t877121019\n54\t298\t4\t892681300\n295\t47\t5\t879518166\n194\t237\t3\t879538959\n194\t82\t2\t879524216\n311\t385\t5\t884365284\n287\t257\t4\t875334224\n290\t82\t4\t880473918\n262\t96\t4\t879793022\n279\t491\t5\t875296435\n290\t393\t3\t880475169\n145\t393\t5\t875273174\n305\t61\t4\t886323378\n269\t156\t5\t891449364\n276\t180\t5\t874787353\n323\t298\t4\t878739275\n296\t258\t5\t884196469\n18\t965\t4\t880132012\n72\t528\t4\t880036664\n224\t949\t3\t888104057\n125\t239\t5\t892838375\n244\t652\t5\t880606533\n135\t431\t2\t879857868\n138\t211\t4\t879024183\n59\t604\t3\t888204927\n221\t1059\t4\t875245077\n13\t451\t1\t882141872\n42\t69\t4\t881107375\n10\t340\t4\t880371312
\n219\t882\t3\t889386741\n60\t604\t4\t883327997\n125\t152\t1\t879454892\n63\t50\t4\t875747292\n255\t448\t3\t883216544\n311\t172\t5\t884364763\n7\t582\t5\t892135347\n7\t127\t5\t891351728\n189\t203\t3\t893265921\n59\t470\t3\t888205714\n313\t148\t2\t891031979\n234\t161\t3\t892335824\n6\t143\t2\t883601053\n305\t960\t1\t886324362\n226\t147\t3\t883889479\n204\t340\t5\t892389195\n13\t493\t5\t882140206\n186\t281\t4\t879023390\n6\t275\t4\t883599102\n269\t82\t2\t891450780\n69\t300\t3\t882027204\n259\t959\t4\t888720593\n5\t62\t4\t875637575\n181\t1164\t3\t878962464\n135\t449\t3\t879857843\n222\t1207\t2\t881060659\n5\t231\t2\t875635947\n286\t258\t4\t877530390\n104\t249\t3\t888465675\n303\t65\t4\t879467436\n295\t73\t4\t879519009\n201\t686\t2\t884112352\n13\t289\t2\t882140759\n184\t100\t5\t889907652\n262\t786\t3\t879795319\n234\t614\t3\t892334609\n1\t64\t5\t875072404\n325\t485\t3\t891478599\n312\t641\t5\t891698300\n207\t810\t2\t877125506\n262\t509\t3\t879792818\n239\t478\t5\t889178986\n142\t181\t5\t888640317\n296\t242\t4\t884196057\n291\t571\t2\t875086608\n13\t488\t3\t890704999\n294\t676\t3\t877821514\n69\t174\t5\t882145548\n195\t265\t4\t888737346\n121\t509\t5\t891388145\n279\t509\t3\t875296552\n49\t17\t2\t888068651\n7\t196\t5\t891351432\n280\t472\t2\t891702086\n221\t780\t3\t875246552\n175\t96\t3\t877108051\n180\t431\t4\t877442098\n311\t1222\t3\t884366010\n44\t120\t4\t878346977\n318\t257\t5\t884471030\n59\t588\t2\t888204389\n320\t117\t4\t884748641\n256\t939\t5\t882164893\n310\t24\t4\t879436242\n236\t265\t2\t890116191\n83\t139\t3\t880308959\n280\t128\t3\t891701188\n43\t52\t4\t883955224\n18\t494\t3\t880131497\n303\t87\t3\t879466421\n91\t427\t4\t891439057\n318\t631\t4\t884496855\n275\t258\t3\t875154310\n97\t482\t5\t884238693\n174\t160\t5\t886514377\n268\t470\t3\t875310745\n188\t769\t2\t875074720\n94\t89\t3\t885870284\n7\t44\t5\t891351728\n158\t85\t4\t880135118\n256\t765\t4\t882165328\n221\t69\t4\t875245641\n196\t67\t5\t881252017\n232\t175\t5\t888549815\n159\t685\t4\t880557347\n99\t18
2\t4\t886518810\n175\t71\t4\t877107942\n254\t624\t2\t886473254\n326\t22\t4\t879874989\n303\t291\t3\t879484804\n270\t53\t4\t876956106\n181\t1001\t1\t878963038\n254\t418\t3\t886473078\n56\t235\t1\t892911348\n11\t190\t3\t891904174\n162\t181\t4\t877635798\n117\t829\t3\t881010219\n268\t52\t3\t875309319\n320\t177\t5\t884749360\n6\t294\t2\t883599938\n210\t380\t4\t887736482\n151\t969\t5\t879542510\n42\t684\t4\t881108093\n62\t365\t2\t879376096\n207\t121\t3\t875504876\n59\t70\t3\t888204758\n26\t455\t3\t891371506\n234\t705\t5\t892318002\n270\t466\t5\t876955899\n97\t484\t3\t884238966\n11\t660\t3\t891904573\n5\t377\t1\t878844615\n56\t797\t4\t892910860\n305\t923\t5\t886323237\n173\t286\t5\t877556626\n67\t1095\t4\t875379287\n213\t12\t5\t878955409\n268\t684\t3\t875744321\n36\t883\t5\t882157581\n100\t321\t1\t891375112\n269\t729\t2\t891448569\n131\t100\t5\t883681418\n308\t298\t5\t887741383\n14\t709\t5\t879119693\n284\t305\t4\t885328906\n191\t752\t3\t891560481\n222\t29\t3\t878184571\n201\t421\t2\t884111708\n207\t864\t3\t877750738\n303\t1315\t3\t879544791\n52\t1086\t4\t882922562\n305\t529\t5\t886324097\n223\t318\t4\t891550711\n22\t79\t4\t878887765\n137\t546\t5\t881433116\n292\t328\t3\t877560833\n249\t11\t5\t879640868\n269\t616\t4\t891450453\n197\t294\t4\t891409290\n42\t603\t4\t881107502\n26\t1016\t3\t891377609\n7\t560\t3\t892132798\n193\t435\t4\t889124439\n7\t559\t5\t891354882\n299\t186\t3\t889503233\n115\t127\t5\t881171760\n59\t433\t5\t888205982\n217\t22\t5\t889069741\n279\t709\t4\t875310195\n257\t345\t4\t887066556\n279\t789\t4\t875306580\n279\t919\t3\t892864663\n63\t222\t3\t875747635\n178\t73\t5\t882827985\n90\t1194\t4\t891383718\n111\t313\t4\t891679901\n13\t848\t5\t882140001\n94\t625\t4\t891723086\n59\t496\t4\t888205144\n179\t905\t4\t892151331\n303\t302\t4\t879465986\n299\t516\t4\t889503159\n10\t505\t4\t877886846\n62\t464\t4\t879375196\n56\t69\t4\t892678893\n92\t289\t3\t875641367\n308\t378\t3\t887740700\n13\t144\t4\t882397146\n181\t1348\t1\t878962200\n15\t932\t1\t879456465\n244\t155
\t3\t880608599\n234\t233\t2\t892335990\n15\t127\t2\t879455505\n110\t1179\t2\t886989501\n181\t302\t2\t878961511\n236\t313\t4\t890115777\n310\t536\t4\t879436137\n37\t55\t3\t880915942\n234\t617\t3\t892078741\n303\t369\t1\t879544130\n75\t409\t3\t884050829\n197\t518\t1\t891409982\n314\t692\t5\t877888445\n187\t523\t3\t879465125\n151\t402\t3\t879543423\n268\t264\t3\t876513607\n224\t215\t4\t888082612\n292\t195\t5\t881103568\n16\t191\t5\t877719454\n99\t597\t4\t885679210\n234\t482\t4\t892334803\n303\t323\t1\t879466214\n233\t99\t3\t877663383\n66\t249\t4\t883602158\n280\t204\t3\t891700643\n301\t174\t5\t882075827\n92\t1142\t4\t886442422\n99\t410\t5\t885679262\n221\t1250\t2\t875247855\n97\t98\t4\t884238728\n313\t673\t4\t891016622\n58\t109\t4\t884304396\n270\t781\t5\t876955750\n13\t476\t2\t882141997\n189\t1\t5\t893264174\n67\t147\t3\t875379357\n234\t50\t4\t892079237\n40\t880\t3\t889041643\n294\t222\t4\t877819353\n293\t629\t3\t888907753\n7\t241\t4\t891354053\n87\t775\t2\t879876848\n314\t1289\t2\t877887388\n131\t750\t5\t883681723\n296\t48\t5\t884197091\n81\t3\t4\t876592546\n151\t186\t4\t879524222\n57\t926\t3\t883697831\n234\t134\t5\t892333573\n53\t174\t5\t879442561\n280\t544\t4\t891701302\n123\t135\t5\t879872868\n109\t797\t3\t880582856\n96\t479\t4\t884403758\n236\t286\t5\t890115777\n201\t313\t5\t884110598\n174\t471\t5\t886433804\n130\t931\t2\t880396881\n151\t15\t4\t879524879\n90\t529\t5\t891385132\n59\t12\t5\t888204260\n3\t343\t3\t889237122\n310\t845\t5\t879436534\n224\t658\t1\t888103840\n4\t357\t4\t892003525\n25\t615\t5\t885852611\n11\t517\t2\t891905222\n298\t91\t2\t884182932\n59\t170\t4\t888204430\n147\t305\t4\t885593997\n314\t1518\t4\t877891426\n256\t413\t4\t882163956\n234\t618\t3\t892078343\n246\t8\t3\t884921245\n255\t678\t2\t883215795\n92\t106\t3\t875640609\n272\t127\t5\t879454725\n104\t269\t5\t888441878\n276\t406\t2\t874786831\n276\t34\t2\t877934264\n97\t50\t5\t884239471\n150\t121\t2\t878747322\n14\t530\t5\t890881433\n23\t170\t4\t874785348\n13\t97\t4\t882399357\n165\t325\t4\t8
79525672\n244\t7\t4\t880602558\n95\t416\t4\t888954961\n28\t98\t5\t881961531\n259\t269\t3\t877923906\n82\t596\t3\t876311195\n28\t173\t3\t881956220\n94\t455\t3\t891721777\n276\t384\t3\t874792189\n298\t8\t5\t884182748\n151\t210\t4\t879524419\n77\t238\t5\t884733965\n200\t241\t4\t884129782\n201\t405\t4\t884112427\n193\t332\t3\t889123257\n38\t139\t2\t892432786\n291\t226\t5\t874834895\n113\t326\t5\t875935609\n313\t191\t5\t891013829\n207\t531\t4\t877878342\n214\t151\t5\t892668153\n44\t123\t4\t878346532\n18\t154\t4\t880131358\n297\t628\t4\t874954497\n279\t116\t1\t888799670\n7\t28\t5\t891352341\n115\t92\t4\t881172049\n308\t581\t4\t887740500\n62\t138\t1\t879376709\n81\t824\t3\t876534437\n293\t1161\t2\t888905062\n13\t781\t3\t882399528\n13\t338\t1\t882140740\n41\t28\t4\t890687353\n280\t554\t1\t891701998\n287\t249\t5\t875334430\n117\t50\t5\t880126022\n178\t106\t2\t882824983\n201\t117\t2\t884112487\n256\t1057\t2\t882163805\n221\t204\t4\t875246008\n318\t659\t4\t884495868\n262\t11\t4\t879793597\n154\t488\t4\t879138831\n186\t385\t4\t879023894\n303\t1095\t2\t879543988\n302\t323\t2\t879436875\n198\t179\t4\t884209264\n99\t168\t5\t885680374\n229\t313\t2\t891631948\n126\t262\t4\t887854726\n72\t226\t4\t880037307\n109\t31\t4\t880577844\n34\t242\t5\t888601628\n173\t323\t5\t877556926\n156\t276\t3\t888185854\n122\t215\t4\t879270676\n276\t583\t3\t874791444\n224\t528\t3\t888082658\n208\t88\t5\t883108324\n295\t483\t5\t879517348\n279\t65\t1\t875306767\n43\t64\t5\t875981247\n89\t197\t5\t879459859\n308\t435\t4\t887737484\n315\t305\t5\t881017419\n42\t1041\t4\t881109060\n164\t299\t4\t889401383\n7\t153\t5\t891352220\n93\t412\t2\t888706037\n125\t1180\t3\t892838865\n70\t50\t4\t884064188\n177\t960\t3\t880131161\n75\t476\t1\t884050393\n62\t401\t3\t879376727\n130\t366\t5\t876251972\n312\t228\t3\t891699040\n158\t414\t4\t880135118\n279\t42\t4\t875308843\n210\t58\t4\t887730177\n43\t66\t4\t875981506\n151\t490\t5\t879528418\n293\t665\t2\t888908117\n293\t36\t1\t888908041\n102\t405\t2\t888801812\n276\t291\t3\t8747
91169\n21\t839\t1\t874951797\n194\t663\t4\t879524292\n38\t432\t1\t892430282\n92\t453\t1\t875906882\n311\t180\t4\t884364764\n198\t214\t4\t884208273\n82\t661\t4\t878769703\n267\t238\t4\t878971629\n291\t466\t5\t874834768\n151\t692\t3\t879524669\n60\t47\t4\t883326399\n92\t79\t4\t875653198\n97\t115\t5\t884239525\n314\t1218\t4\t877887525\n319\t338\t2\t879977242\n5\t407\t3\t875635431\n15\t685\t4\t879456288\n99\t204\t4\t885679952\n123\t192\t5\t879873119\n47\t340\t5\t879439078\n222\t135\t5\t878181563\n224\t149\t1\t888103999\n58\t284\t4\t884304519\n320\t294\t4\t884748418\n268\t135\t4\t875309583\n83\t640\t2\t880308550\n106\t692\t3\t881453290\n287\t11\t5\t875335124\n305\t186\t4\t886323902\n181\t1320\t1\t878962279\n49\t49\t2\t888068990\n6\t221\t4\t883599431\n85\t647\t4\t879453844\n128\t736\t5\t879968352\n279\t827\t1\t888426577\n271\t630\t2\t885848943\n303\t748\t2\t879466214\n249\t124\t5\t879572646\n280\t693\t3\t891701027\n207\t827\t3\t876018501\n60\t616\t3\t883327087\n21\t184\t4\t874951797\n286\t628\t4\t875806800\n145\t183\t5\t875272009\n311\t28\t5\t884365140\n25\t228\t4\t885852920\n76\t92\t4\t882606108\n246\t406\t3\t884924749\n201\t292\t3\t884110598\n235\t647\t4\t889655045\n286\t133\t4\t877531730\n48\t174\t5\t879434723\n144\t685\t3\t888105473\n5\t24\t4\t879198229\n85\t272\t4\t893110061\n286\t7\t4\t875807003\n64\t93\t2\t889739025\n151\t429\t5\t879528673\n191\t301\t4\t891561336\n287\t56\t5\t875334759\n96\t153\t4\t884403624\n125\t615\t3\t879454793\n150\t100\t2\t878746636\n93\t15\t5\t888705388\n84\t528\t5\t883453617\n318\t50\t2\t884495696\n13\t167\t4\t882141659\n213\t471\t3\t878870816\n178\t234\t4\t882826783\n128\t418\t4\t879968164\n195\t496\t4\t888737525\n13\t570\t5\t882397581\n276\t843\t4\t874792989\n54\t268\t5\t883963510\n305\t347\t3\t886308111\n14\t474\t4\t890881557\n18\t58\t4\t880130613\n263\t921\t3\t891298727\n289\t849\t4\t876789943\n194\t321\t3\t879520306\n11\t746\t4\t891905032\n298\t842\t4\t884127249\n56\t215\t5\t892678547\n13\t844\t1\t882397010\n38\t465\t5\t892432476\n308\
t165\t3\t887736696\n214\t652\t4\t891543972\n102\t300\t3\t875886434\n7\t420\t5\t891353219\n61\t328\t5\t891206371\n307\t100\t3\t879206424\n21\t590\t1\t874951898\n311\t68\t1\t884365824\n95\t1230\t1\t888956901\n303\t182\t5\t879467105\n145\t13\t5\t875270507\n50\t253\t5\t877052550\n194\t530\t4\t879521167\n145\t1\t3\t882181396\n222\t157\t4\t878181976\n7\t188\t5\t891352778\n109\t100\t4\t880563080\n90\t631\t5\t891384570\n7\t78\t3\t891354165\n181\t1324\t1\t878962464\n201\t332\t2\t884110887\n13\t685\t5\t882397582\n82\t73\t4\t878769888\n267\t423\t3\t878972842\n194\t1206\t1\t879554453\n269\t106\t1\t891451947\n99\t895\t3\t885678304\n235\t1149\t4\t889655595\n200\t665\t4\t884130621\n312\t188\t3\t891698793\n145\t50\t5\t885557660\n234\t71\t3\t892334338\n213\t48\t5\t878955848\n244\t216\t4\t880605869\n316\t588\t1\t880853992\n85\t175\t4\t879828912\n124\t50\t3\t890287508\n137\t237\t4\t881432965\n13\t567\t1\t882396955\n151\t162\t5\t879528779\n187\t116\t5\t879464978\n193\t554\t3\t889126088\n49\t741\t4\t888068079\n291\t54\t4\t874834963\n316\t292\t4\t880853072\n271\t514\t4\t885848408\n194\t404\t3\t879522445\n268\t721\t3\t875743587\n277\t1197\t4\t879543768\n301\t606\t3\t882076890\n89\t1048\t3\t879460027\n253\t50\t4\t891628518\n102\t732\t3\t888804089\n311\t662\t4\t884365018\n201\t943\t3\t884114275\n246\t816\t4\t884925218\n172\t488\t3\t875537965\n280\t38\t3\t891701832\n43\t1057\t2\t884029777\n311\t661\t3\t884365075\n59\t287\t5\t888203175\n268\t83\t4\t875309344\n315\t651\t3\t879799457\n145\t299\t4\t875269822\n248\t174\t3\t884534992\n327\t191\t4\t887820828\n268\t672\t2\t875744501\n297\t286\t5\t874953892\n295\t151\t4\t879517635\n13\t877\t2\t882140792\n70\t584\t3\t884150236\n145\t460\t1\t875271312\n275\t176\t4\t880314320\n48\t259\t4\t879434270\n235\t419\t5\t889655858\n83\t413\t1\t891182379\n147\t258\t4\t885594040\n92\t521\t4\t875813412\n246\t728\t1\t884923829\n43\t284\t5\t883955441\n207\t203\t3\t877124625\n234\t485\t3\t892079434\n201\t587\t4\t884140975\n286\t689\t5\t884583549\n69\t12\t5\t882145567\
n237\t494\t4\t879376553\n85\t133\t4\t879453876\n276\t85\t3\t874791871\n311\t366\t5\t884366010\n320\t399\t3\t884749411\n114\t175\t5\t881259955\n42\t121\t4\t881110578\n7\t680\t4\t891350703\n154\t302\t4\t879138235\n106\t660\t4\t881451631\n313\t71\t4\t891030144\n90\t526\t5\t891383866\n94\t186\t4\t891722278\n224\t43\t3\t888104456\n44\t230\t2\t883613335\n229\t315\t1\t891632945\n151\t480\t5\t879524151\n311\t505\t4\t884365451\n320\t202\t4\t884750946\n113\t329\t3\t875935312\n255\t859\t3\t883216748\n193\t827\t2\t890859916\n276\t789\t3\t874791623\n259\t750\t4\t888630424\n204\t172\t3\t892513819\n78\t412\t4\t879634223\n85\t98\t4\t879453716\n279\t393\t1\t875314093\n222\t323\t3\t877562839\n288\t127\t5\t886374451\n42\t606\t3\t881107538\n25\t729\t4\t885852697\n119\t213\t5\t874781257\n116\t185\t3\t876453519\n123\t13\t3\t879873988\n315\t657\t4\t879821299\n142\t243\t1\t888640199\n13\t480\t3\t881515193\n201\t326\t2\t884111095\n43\t631\t2\t883955675\n195\t387\t4\t891762491\n95\t174\t5\t879196231\n130\t332\t4\t876250582\n233\t482\t4\t877661437\n44\t530\t5\t878348725\n292\t86\t4\t881105778\n176\t294\t2\t886047220\n157\t405\t3\t886890342\n207\t787\t3\t876079054\n239\t204\t3\t889180888\n251\t144\t5\t886271920\n269\t923\t4\t891447169\n178\t148\t4\t882824325\n138\t121\t4\t879023558\n30\t82\t4\t875060217\n302\t245\t2\t879436911\n34\t690\t4\t888602513\n292\t276\t5\t881103915\n271\t11\t4\t885848408\n69\t175\t3\t882145586\n42\t456\t3\t881106113\n311\t568\t5\t884365325\n183\t241\t4\t892323453\n269\t411\t1\t891451013\n288\t196\t5\t886373474\n268\t42\t4\t875310384\n308\t634\t4\t887737334\n308\t166\t3\t887737837\n57\t831\t1\t883697785\n207\t410\t3\t877838946\n271\t211\t5\t885849164\n16\t144\t5\t877721142\n90\t603\t5\t891385132\n209\t408\t4\t883417517\n299\t238\t4\t877880852\n279\t1228\t4\t890779991\n128\t140\t4\t879968308\n307\t173\t5\t879283786\n167\t392\t1\t892738307\n22\t791\t1\t878887227\n291\t159\t4\t875087488\n194\t705\t2\t879524007\n10\t489\t4\t877892210\n95\t128\t3\t879196354\n10\t657\t4\t8778
92110\n59\t855\t4\t888204502\n124\t11\t5\t890287645\n7\t133\t5\t891353192\n256\t692\t5\t882165066\n85\t629\t3\t879454685\n271\t1266\t2\t885848943\n276\t1416\t3\t874792634\n155\t988\t2\t879371261\n318\t476\t4\t884495164\n307\t258\t5\t879283786\n28\t7\t5\t881961531\n236\t729\t5\t890118372\n38\t672\t3\t892434800\n7\t93\t5\t891351042\n255\t217\t2\t883216600\n184\t729\t3\t889909840\n154\t175\t5\t879138784\n311\t403\t4\t884365889\n116\t301\t3\t892683732\n94\t229\t3\t891722979\n221\t508\t4\t875244160\n95\t636\t1\t879196566\n44\t56\t2\t878348601\n305\t203\t4\t886323839\n207\t508\t4\t877879259\n130\t161\t4\t875802058\n98\t163\t3\t880499053\n328\t9\t4\t885045993\n178\t218\t3\t882827776\n293\t293\t4\t888904795\n162\t742\t4\t877635758\n128\t79\t4\t879967692\n307\t1411\t4\t877124058\n269\t514\t4\t891449123\n195\t186\t3\t888737240\n327\t533\t4\t887822530\n189\t91\t3\t893265684\n206\t1394\t1\t888179981\n95\t143\t4\t880571951\n31\t682\t2\t881547834\n94\t157\t5\t891725332\n73\t588\t2\t888625754\n256\t819\t4\t882151052\n291\t366\t3\t874868255\n222\t153\t4\t878182416\n207\t98\t4\t875509887\n222\t298\t4\t877563253\n286\t151\t5\t875806800\n116\t262\t3\t876751342\n7\t174\t5\t891350757\n148\t495\t4\t877016735\n311\t495\t4\t884366066\n178\t255\t4\t882824001\n181\t597\t3\t878963276\n123\t847\t4\t879873193\n291\t77\t4\t874834799\n237\t528\t5\t879376606\n140\t301\t3\t879013747\n290\t222\t4\t880731778\n177\t79\t4\t880130758\n65\t202\t4\t879217852\n311\t181\t4\t884364724\n125\t796\t3\t892838591\n77\t168\t4\t884752721\n58\t960\t4\t884305004\n117\t405\t5\t880126174\n248\t127\t5\t884535084\n5\t423\t4\t875636793\n254\t286\t1\t887346861\n289\t7\t4\t876789628\n241\t294\t3\t887250085\n213\t690\t3\t878870275\n99\t508\t4\t885678840\n275\t523\t4\t880314031\n168\t284\t2\t884288112\n28\t380\t4\t881961394\n144\t31\t3\t888105823\n198\t651\t4\t884207424\n181\t1093\t1\t878962391\n221\t268\t5\t876502910\n267\t739\t4\t878973276\n129\t303\t3\t883244011\n301\t496\t5\t882075743\n94\t33\t3\t891721919\n318\t64\t4\t88
4495590\n298\t477\t4\t884126202\n290\t476\t3\t880475837\n16\t942\t4\t877719863\n130\t815\t3\t874953866\n181\t304\t1\t878961586\n178\t125\t4\t882824431\n42\t506\t3\t881108760\n320\t284\t4\t884748818\n138\t151\t4\t879023389\n197\t849\t3\t891410124\n215\t157\t4\t891435573\n94\t1119\t4\t891723261\n293\t724\t3\t888907061\n79\t246\t5\t891271545\n279\t1492\t4\t888430806\n189\t30\t4\t893266205\n233\t806\t4\t880610396\n198\t24\t2\t884205385\n222\t172\t5\t878183079\n276\t301\t4\t877584219\n70\t417\t3\t884066823\n305\t15\t1\t886322796\n201\t370\t1\t884114506\n57\t409\t4\t883697655\n13\t314\t1\t884538485\n206\t245\t1\t888179772\n125\t173\t5\t879454100\n128\t143\t5\t879967300\n92\t763\t3\t886443192\n65\t56\t3\t879217816\n236\t506\t5\t890118153\n262\t77\t2\t879794829\n90\t958\t4\t891383561\n144\t91\t2\t888106106\n63\t841\t1\t875747917\n323\t117\t3\t878739355\n197\t176\t5\t891409798\n277\t273\t5\t879544145\n176\t288\t3\t886046979\n38\t838\t2\t892433680\n99\t546\t4\t885679353\n326\t186\t4\t879877143\n59\t663\t4\t888204928\n59\t702\t5\t888205463\n26\t15\t4\t891386369\n7\t182\t4\t891350965\n112\t354\t3\t891304031\n109\t154\t2\t880578121\n121\t405\t2\t891390579\n293\t167\t3\t888907702\n297\t198\t3\t875238923\n276\t11\t5\t874787497\n222\t210\t4\t878184338\n287\t92\t4\t875334896\n62\t443\t3\t879375080\n106\t703\t4\t881450039\n276\t1218\t4\t874792040\n230\t210\t5\t880484975\n246\t184\t4\t884921948\n22\t511\t4\t878887983\n165\t258\t5\t879525672\n161\t174\t2\t891170800\n109\t89\t4\t880573263\n305\t87\t1\t886323153\n195\t181\t5\t875771440\n7\t193\t5\t892135346\n326\t480\t4\t879875691\n77\t125\t3\t884733014\n85\t58\t4\t879829689\n186\t588\t4\t879024535\n256\t280\t5\t882151167\n84\t529\t5\t883453108\n74\t288\t3\t888333280\n102\t432\t3\t883748418\n194\t770\t4\t879525342\n267\t114\t5\t878971514\n1\t92\t3\t876892425\n16\t504\t5\t877718168\n211\t300\t2\t879461395\n90\t31\t4\t891384673\n234\t657\t4\t892079840\n60\t1020\t4\t883327018\n92\t947\t4\t875654929\n158\t1\t4\t880132443\n87\t1000\t3\t879877
173\n276\t104\t1\t874836682\n1\t228\t5\t878543541\n42\t143\t4\t881108229\n43\t26\t5\t883954901\n299\t1322\t3\t877878001\n130\t200\t5\t875217392\n307\t71\t5\t879283169\n147\t339\t5\t885594204\n311\t229\t5\t884365890\n296\t286\t5\t884196209\n217\t82\t5\t889069842\n80\t886\t4\t883605238\n314\t9\t4\t877886375\n64\t527\t4\t879365590\n249\t79\t5\t879572777\n21\t298\t5\t874951382\n68\t118\t2\t876974248\n215\t151\t5\t891435761\n305\t238\t3\t886323617\n308\t417\t3\t887740254\n102\t118\t3\t888801465\n189\t120\t1\t893264954\n112\t750\t4\t884992444\n130\t622\t3\t875802173\n188\t474\t4\t875072674\n56\t585\t3\t892911366\n56\t230\t5\t892676339\n20\t11\t2\t879669401\n20\t176\t2\t879669152\n222\t25\t3\t877563437\n49\t148\t1\t888068195\n307\t431\t4\t877123333\n144\t313\t5\t888103407\n23\t404\t4\t874787860\n144\t961\t3\t888106106\n160\t3\t3\t876770124\n22\t227\t4\t878888067\n79\t508\t3\t891271676\n18\t647\t4\t880129595\n151\t481\t3\t879524669\n312\t480\t5\t891698224\n256\t29\t4\t882164644\n158\t568\t4\t880134532\n311\t141\t4\t884366187\n303\t179\t5\t879466491\n25\t478\t5\t885852271\n195\t407\t2\t877835302\n152\t147\t3\t880149045\n145\t1001\t4\t875271607\n151\t260\t1\t879523998\n194\t576\t2\t879528568\n271\t624\t2\t885849558\n162\t121\t4\t877636000\n313\t65\t2\t891016962\n6\t532\t3\t883600066\n22\t433\t3\t878886479\n13\t915\t5\t892015023\n327\t461\t3\t887746665\n200\t402\t4\t884129029\n271\t22\t5\t885848518\n269\t478\t4\t891448980\n315\t431\t2\t879821300\n178\t121\t5\t882824291\n210\t502\t3\t891035965\n76\t135\t5\t875028792\n318\t648\t5\t884495534\n279\t1291\t4\t875297708\n75\t121\t4\t884050450\n90\t618\t5\t891385335\n44\t174\t5\t878347662\n293\t729\t2\t888907145\n217\t195\t5\t889069709\n224\t708\t2\t888104153\n246\t121\t4\t884922627\n284\t906\t3\t885328836\n301\t172\t5\t882076403\n244\t31\t4\t880603484\n95\t395\t3\t888956928\n303\t330\t3\t879552065\n198\t640\t3\t884208651\n256\t802\t3\t882164955\n46\t690\t5\t883611274\n305\t209\t5\t886322966\n83\t364\t1\t886534501\n224\t1208\t1\t88810
4554\n295\t67\t4\t879519042\n116\t248\t3\t876452492\n201\t37\t2\t884114635\n155\t748\t2\t879371261\n318\t508\t4\t884494976\n274\t288\t4\t878944379\n263\t333\t2\t891296842\n145\t172\t5\t882181632\n188\t191\t3\t875073128\n119\t313\t5\t886176135\n270\t306\t5\t876953744\n262\t91\t3\t879792713\n131\t845\t4\t883681351\n250\t260\t4\t878089144\n33\t307\t3\t891964148\n37\t183\t4\t880930042\n6\t211\t5\t883601155\n85\t517\t5\t879455238\n308\t164\t4\t887738664\n42\t746\t3\t881108279\n102\t1025\t2\t883278200\n311\t70\t4\t884364999\n181\t1322\t1\t878962086\n17\t508\t3\t885272779\n174\t396\t1\t886515104\n125\t150\t1\t879454892\n181\t1364\t1\t878962464\n235\t511\t5\t889655162\n1\t266\t1\t885345728\n295\t727\t5\t879517682\n56\t194\t5\t892676908\n83\t1035\t4\t880308959\n100\t355\t4\t891375313\n106\t828\t2\t883876872\n270\t327\t5\t876953900\n181\t680\t1\t878961709\n115\t228\t4\t881171488\n286\t771\t2\t877535119\n234\t151\t3\t892334481\n16\t92\t4\t877721905\n130\t410\t5\t875802105\n271\t121\t2\t885848132\n320\t1157\t4\t884751336\n189\t462\t5\t893265741\n313\t31\t4\t891015486\n49\t238\t4\t888068762\n60\t79\t4\t883326620\n13\t226\t4\t882397651\n1\t121\t4\t875071823\n150\t246\t5\t878746719\n13\t548\t3\t882398743\n179\t751\t1\t892151565\n222\t426\t1\t878181351\n7\t614\t5\t891352489\n157\t1132\t3\t886891132\n193\t368\t1\t889127860\n130\t993\t5\t874953665\n166\t322\t5\t886397723\n62\t4\t4\t879374640\n253\t183\t5\t891628341\n261\t117\t4\t890455974\n269\t1020\t4\t891449571\n269\t136\t4\t891449075\n322\t197\t5\t887313983\n7\t647\t5\t891352489\n112\t748\t3\t884992651\n170\t245\t5\t884103758\n271\t823\t3\t885848237\n294\t288\t5\t877818729\n151\t522\t5\t879524443\n311\t213\t4\t884365075\n26\t257\t3\t891371596\n291\t627\t4\t875086991\n26\t7\t3\t891350826\n221\t468\t3\t875246824\n318\t204\t5\t884496218\n87\t996\t3\t879876848\n279\t88\t1\t882146554\n279\t562\t3\t890451433\n207\t14\t4\t875504876\n279\t163\t5\t875313311\n230\t238\t1\t880484778\n94\t235\t4\t891722980\n293\t931\t1\t888905252\n121\t86\t5\
t891388286\n198\t180\t3\t884207298\n292\t653\t4\t881105442\n92\t781\t3\t875907649\n291\t572\t3\t874834944\n48\t690\t4\t879434211\n102\t264\t2\t883277645\n1\t114\t5\t875072173\n180\t79\t3\t877442037\n255\t879\t3\t883215660\n250\t2\t4\t878090414\n119\t716\t5\t874782190\n101\t282\t3\t877135883\n244\t220\t2\t880605264\n67\t1\t3\t875379445\n291\t99\t4\t875086887\n59\t238\t5\t888204553\n311\t73\t4\t884366187\n177\t919\t4\t880130736\n1\t132\t4\t878542889\n144\t778\t4\t888106044\n1\t74\t1\t889751736\n268\t68\t4\t875744173\n232\t705\t5\t888549838\n49\t758\t1\t888067596\n102\t313\t3\t887048184\n279\t1093\t4\t875298330\n279\t1493\t1\t888465068\n22\t173\t5\t878886368\n122\t715\t5\t879270741\n145\t315\t5\t883840797\n119\t1101\t5\t874781779\n261\t259\t4\t890454843\n1\t134\t4\t875073067\n94\t45\t5\t886008764\n330\t11\t4\t876546561\n291\t741\t5\t874834481\n6\t180\t4\t883601311\n188\t88\t4\t875075300\n299\t921\t3\t889502087\n253\t203\t4\t891628651\n215\t194\t4\t891436150\n291\t273\t3\t874833705\n303\t867\t3\t879484373\n6\t477\t1\t883599509\n307\t1110\t4\t877122208\n130\t876\t4\t874953291\n95\t483\t3\t879198697\n74\t326\t4\t888333329\n13\t305\t4\t881514811\n4\t260\t4\t892004275\n261\t294\t4\t890454217\n159\t259\t4\t893255969\n137\t55\t5\t881433689\n174\t699\t5\t886514220\n286\t158\t3\t877533472\n87\t1183\t3\t879875995\n270\t230\t3\t876955868\n91\t172\t4\t891439208\n296\t272\t5\t884198772\n125\t483\t4\t879454628\n62\t1118\t3\t879375537\n328\t200\t4\t885046420\n296\t510\t5\t884197264\n234\t500\t3\t892078890\n237\t100\t5\t879376381\n150\t13\t4\t878746889\n301\t610\t3\t882077176\n151\t25\t4\t879528496\n271\t8\t4\t885848770\n87\t303\t3\t879875471\n293\t1220\t2\t888907552\n113\t294\t4\t875935277\n311\t518\t3\t884365451\n181\t123\t2\t878963276\n328\t905\t3\t888641999\n110\t301\t2\t886987505\n288\t742\t3\t886893063\n111\t887\t3\t891679692\n194\t196\t3\t879524007\n239\t605\t4\t889180446\n109\t5\t3\t880580637\n291\t824\t4\t874833962\n16\t168\t4\t877721142\n14\t357\t2\t890881294\n22\t687\t1\t87
8887476\n207\t746\t4\t877878342\n312\t1299\t4\t891698832\n268\t250\t4\t875742530\n68\t411\t1\t876974596\n195\t887\t4\t886782489\n271\t50\t5\t885848640\n74\t9\t4\t888333458\n308\t802\t3\t887738717\n144\t66\t4\t888106078\n195\t14\t4\t890985390\n18\t199\t3\t880129769\n13\t918\t3\t892524090\n174\t41\t1\t886515063\n109\t159\t4\t880578121\n227\t293\t5\t879035387\n233\t357\t5\t877661553\n264\t475\t5\t886122706\n205\t678\t1\t888284618\n275\t1066\t3\t880313679\n56\t68\t3\t892910913\n78\t1160\t5\t879634134\n130\t682\t4\t881076059\n127\t380\t5\t884364950\n130\t568\t5\t876251693\n58\t1100\t2\t884304979\n49\t473\t3\t888067164\n13\t273\t3\t882397502\n203\t336\t3\t880433474\n330\t136\t5\t876546378\n109\t195\t5\t880578038\n186\t406\t1\t879023272\n293\t148\t1\t888907015\n280\t1028\t5\t891702276\n143\t331\t5\t888407622\n183\t96\t3\t891463617\n60\t699\t4\t883327539\n178\t131\t4\t882827947\n297\t216\t4\t875409423\n59\t1117\t4\t888203313\n276\t429\t5\t874790972\n179\t258\t5\t892151270\n87\t386\t2\t879877006\n198\t1169\t4\t884208834\n119\t54\t4\t886176814\n297\t20\t4\t874954763\n1\t98\t4\t875072404\n268\t205\t5\t875309859\n279\t174\t4\t875306636\n64\t187\t5\t889737395\n119\t1262\t3\t890627252\n75\t1017\t5\t884050502\n27\t742\t3\t891543129\n307\t21\t4\t876433101\n37\t685\t3\t880915528\n82\t15\t3\t876311365\n244\t238\t5\t880606118\n271\t274\t3\t885848014\n174\t1014\t3\t890664424\n210\t135\t5\t887736352\n262\t258\t4\t879961282\n320\t68\t5\t884749327\n85\t660\t4\t879829618\n311\t348\t4\t884364108\n82\t208\t3\t878769815\n1\t186\t4\t875073128\n145\t368\t3\t888398492\n276\t401\t3\t874792094\n23\t213\t3\t874785675\n64\t515\t5\t889737478\n63\t237\t3\t875747342\n293\t227\t2\t888906990\n322\t32\t5\t887314417\n74\t285\t3\t888333428\n297\t202\t3\t875238638\n82\t216\t4\t878769949\n280\t145\t3\t891702198\n200\t227\t5\t884129006\n290\t21\t3\t880475695\n43\t820\t2\t884029742\n95\t573\t1\t888954808\n181\t20\t1\t878962919\n178\t926\t4\t882824671\n81\t476\t2\t876534124\n194\t410\t3\t879541042\n325\t402\t2\t
891479706\n276\t347\t4\t885159630\n207\t133\t4\t875812281\n87\t135\t5\t879875649\n331\t7\t4\t877196633\n315\t8\t3\t879820961\n106\t435\t3\t881452355\n286\t83\t5\t877531975\n87\t157\t3\t879877799\n87\t163\t4\t879877083\n286\t655\t3\t889651746\n232\t8\t2\t888549757\n254\t380\t4\t886474456\n96\t91\t5\t884403250\n232\t1\t4\t880062302\n315\t98\t4\t879821193\n43\t553\t4\t875981159\n305\t679\t3\t886324792\n61\t690\t2\t891206407\n44\t665\t1\t883613372\n92\t1016\t2\t875640582\n168\t255\t1\t884287560\n276\t270\t4\t879131395\n328\t568\t3\t885047896\n222\t1053\t3\t881060735\n93\t222\t4\t888705295\n330\t235\t5\t876544690\n82\t504\t4\t878769917\n2\t314\t1\t888980085\n89\t732\t5\t879459909\n38\t216\t5\t892430486\n308\t85\t4\t887741245\n24\t153\t4\t875323368\n235\t1464\t4\t889655266\n1\t221\t5\t887431921\n222\t715\t2\t878183924\n222\t69\t5\t878182338\n43\t114\t5\t883954950\n331\t486\t3\t877196308\n223\t322\t4\t891548920\n201\t452\t1\t884114770\n158\t271\t4\t880132232\n32\t249\t4\t883717645\n314\t90\t2\t877888758\n313\t245\t3\t891013144\n102\t576\t2\t888802722\n211\t526\t4\t879459952\n268\t425\t4\t875310549\n332\t770\t3\t888098170\n38\t508\t2\t892429399\n280\t975\t4\t891702252\n10\t463\t4\t877889186\n92\t386\t3\t875907727\n268\t374\t2\t875744895\n69\t258\t4\t882027204\n210\t96\t4\t887736616\n213\t144\t5\t878956047\n254\t50\t5\t886471151\n58\t272\t5\t884647314\n327\t210\t3\t887744065\n291\t385\t4\t874835141\n291\t324\t1\t874805453\n246\t596\t3\t884921511\n11\t714\t4\t891904214\n329\t100\t4\t891655812\n86\t258\t5\t879570366\n7\t621\t5\t892132773\n246\t80\t2\t884923329\n308\t481\t4\t887737997\n54\t820\t3\t880937992\n177\t651\t3\t880130862\n10\t655\t5\t877891904\n83\t631\t2\t887664566\n145\t993\t3\t875270616\n255\t185\t4\t883216449\n18\t607\t3\t880131752\n226\t180\t4\t883889322\n234\t616\t2\t892334976\n274\t25\t5\t878945541\n293\t156\t4\t888905948\n83\t476\t3\t880307359\n295\t173\t5\t879518257\n286\t1039\t5\t877531730\n42\t48\t5\t881107821\n208\t204\t3\t883108360\n232\t275\t2\t885939945
\n267\t94\t3\t878972558\n271\t242\t4\t885844495\n125\t97\t3\t879454385\n323\t333\t4\t878738865\n305\t56\t1\t886323068\n145\t250\t5\t882182944\n38\t1030\t5\t892434475\n202\t515\t1\t879726778\n181\t975\t2\t878963343\n332\t566\t4\t888360342\n108\t13\t3\t879879834\n194\t520\t5\t879545114\n144\t62\t2\t888105902\n194\t1183\t2\t879554453\n148\t172\t5\t877016513\n144\t1147\t4\t888105587\n269\t961\t5\t891457067\n290\t71\t5\t880473667\n249\t597\t2\t879640436\n65\t676\t5\t879217689\n301\t395\t1\t882079384\n267\t546\t3\t878970877\n207\t754\t4\t879577345\n201\t777\t1\t884112673\n314\t1095\t3\t877887356\n210\t631\t5\t887736796\n22\t456\t1\t878887413\n59\t931\t2\t888203610\n92\t715\t4\t875656288\n50\t475\t5\t877052167\n188\t159\t3\t875074589\n303\t700\t3\t879485718\n197\t288\t3\t891409387\n244\t676\t4\t880604858\n44\t88\t2\t878348885\n164\t597\t4\t889402225\n11\t230\t4\t891905783\n6\t297\t3\t883599134\n186\t925\t5\t879023152\n190\t147\t4\t891033863\n184\t1137\t5\t889907812\n85\t269\t3\t891289966\n185\t127\t5\t883525183\n44\t257\t4\t878346689\n293\t484\t5\t888906217\n150\t1\t4\t878746441\n60\t179\t4\t883326566\n75\t147\t3\t884050134\n269\t640\t5\t891457067\n138\t493\t4\t879024382\n299\t271\t3\t879737472\n92\t928\t3\t886443582\n299\t24\t3\t877877732\n292\t183\t5\t881103478\n5\t394\t2\t879198031\n62\t559\t3\t879375912\n198\t549\t3\t884208518\n288\t1039\t2\t886373565\n152\t272\t5\t890322298\n42\t999\t4\t881108982\n64\t333\t3\t879365313\n99\t682\t2\t885678371\n59\t121\t4\t888203313\n135\t233\t3\t879857843\n7\t22\t5\t891351121\n24\t427\t5\t875323002\n144\t747\t5\t888105473\n261\t322\t4\t890454974\n201\t475\t4\t884112748\n133\t258\t5\t890588639\n110\t245\t3\t886987540\n5\t384\t3\t875636389\n139\t268\t4\t879537876\n112\t322\t4\t884992690\n234\t596\t2\t891227979\n301\t184\t4\t882077222\n291\t1471\t3\t874834914\n285\t216\t3\t890595900\n85\t53\t3\t882995643\n275\t183\t3\t880314500\n296\t275\t4\t884196555\n271\t197\t4\t885848915\n29\t748\t2\t882821558\n221\t172\t5\t875245907\n323\t9\t4\t87873
9325\n111\t340\t4\t891679692\n95\t176\t3\t879196298\n207\t170\t4\t877125221\n136\t276\t5\t882693489\n124\t616\t4\t890287645\n185\t528\t4\t883526268\n167\t404\t3\t892738278\n286\t341\t5\t884069544\n84\t322\t3\t883449567\n151\t529\t5\t879542610\n264\t401\t5\t886123656\n289\t1\t3\t876789736\n144\t64\t5\t888105140\n56\t29\t3\t892910913\n23\t528\t4\t874786974\n328\t742\t4\t885047309\n125\t785\t3\t892838558\n200\t72\t4\t884129542\n249\t23\t4\t879572432\n130\t56\t5\t875216283\n140\t319\t4\t879013617\n49\t102\t2\t888067164\n158\t483\t5\t880133225\n222\t58\t3\t878182479\n194\t213\t2\t879523575\n177\t89\t5\t880131088\n7\t268\t3\t891350703\n59\t549\t4\t888205659\n145\t411\t2\t875271522\n265\t7\t2\t875320689\n248\t282\t2\t884535582\n239\t47\t2\t889180169\n319\t879\t5\t876280338\n42\t102\t5\t881108873\n301\t1035\t4\t882078809\n326\t69\t2\t879874964\n180\t67\t1\t877127591\n280\t99\t2\t891700475\n145\t682\t3\t879161624\n214\t79\t4\t891544306\n259\t210\t4\t874725485\n57\t864\t3\t883697512\n261\t597\t4\t890456142\n136\t298\t4\t882693569\n293\t705\t5\t888906338\n194\t470\t3\t879527421\n75\t496\t5\t884051921\n202\t172\t3\t879726778\n23\t183\t3\t874785728\n38\t403\t1\t892432205\n52\t1009\t5\t882922328\n95\t720\t2\t879196513\n65\t97\t5\t879216605\n207\t290\t2\t878104627\n201\t2\t2\t884112487\n190\t751\t4\t891033606\n162\t685\t3\t877635917\n221\t250\t5\t875244633\n92\t134\t4\t875656623\n49\t695\t3\t888068957\n102\t391\t2\t888802767\n6\t500\t4\t883601277\n152\t25\t3\t880149045\n145\t278\t4\t875272871\n328\t271\t3\t885044607\n116\t750\t4\t886309481\n90\t237\t4\t891385215\n221\t318\t5\t875245690\n128\t283\t5\t879966729\n94\t467\t4\t885873423\n221\t1218\t3\t875246745\n281\t332\t4\t881200603\n294\t539\t4\t889241707\n300\t948\t4\t875650018\n326\t153\t4\t879875751\n62\t28\t3\t879375169\n159\t249\t4\t884027269\n76\t811\t4\t882606323\n74\t237\t4\t888333428\n81\t411\t2\t876534244\n280\t227\t3\t891702153\n224\t22\t5\t888103581\n64\t77\t3\t889737420\n194\t756\t1\t879549899\n15\t20\t3\t879455541\n43\
t328\t4\t875975061\n244\t100\t4\t880604252\n327\t805\t4\t887819462\n21\t928\t3\t874951616\n83\t254\t2\t880327839\n14\t22\t3\t890881521\n318\t610\t5\t884496525\n92\t756\t3\t886443582\n222\t1078\t2\t878183449\n62\t157\t3\t879374686\n13\t840\t3\t886261387\n271\t300\t2\t885844583\n59\t13\t5\t888203415\n208\t514\t4\t883108324\n289\t815\t3\t876789581\n279\t249\t3\t878878420\n326\t50\t5\t879875112\n73\t12\t5\t888624976\n28\t234\t4\t881956144\n6\t95\t2\t883602133\n90\t354\t3\t891382240\n96\t519\t4\t884402896\n7\t627\t3\t891352594\n254\t649\t1\t886474619\n328\t519\t5\t885046420\n247\t751\t3\t893081411\n45\t472\t3\t881014417\n323\t127\t5\t878739137\n268\t566\t3\t875744321\n291\t816\t3\t874867852\n59\t405\t3\t888203578\n200\t409\t2\t884127431\n332\t975\t3\t887938631\n239\t612\t5\t889178616\n22\t399\t4\t878887157\n267\t147\t3\t878970681\n235\t319\t4\t889654419\n87\t70\t5\t879876448\n216\t143\t2\t881428956\n268\t121\t2\t875743141\n239\t317\t5\t889179291\n269\t922\t5\t891457067\n207\t468\t4\t877124806\n270\t148\t4\t876954062\n184\t559\t3\t889910418\n304\t271\t4\t884968415\n331\t479\t2\t877196504\n157\t283\t4\t886890692\n239\t183\t5\t889180071\n261\t339\t5\t890454351\n301\t58\t4\t882077285\n145\t339\t3\t882181058\n10\t321\t4\t879163494\n48\t308\t5\t879434292\n321\t631\t4\t879440264\n32\t591\t3\t883717581\n125\t1036\t2\t892839191\n1\t84\t4\t875072923\n21\t742\t3\t874951617\n22\t186\t5\t878886368\n292\t324\t3\t881104533\n72\t129\t4\t880035588\n256\t642\t4\t882164893\n92\t1095\t2\t886443728\n73\t475\t4\t888625753\n290\t274\t4\t880731874\n83\t543\t2\t887665445\n56\t597\t3\t892679439\n83\t216\t4\t880307846\n215\t22\t3\t891435161\n101\t369\t2\t877136928\n328\t521\t4\t885047484\n307\t175\t4\t877117651\n201\t23\t4\t884111830\n197\t570\t4\t891410124\n26\t286\t3\t891347400\n90\t489\t5\t891384357\n98\t517\t5\t880498990\n57\t250\t3\t883697223\n163\t288\t3\t891220226\n1\t31\t3\t875072144\n104\t324\t1\t888442404\n333\t894\t3\t891045496\n311\t22\t4\t884364538\n237\t211\t4\t879376515\n44\t603\t4\
t878347420\n22\t96\t5\t878887680\n213\t546\t4\t878870903\n257\t258\t3\t879029516\n327\t300\t2\t887743541\n279\t1017\t3\t875296891\n53\t845\t3\t879443083\n85\t97\t2\t879829667\n43\t286\t4\t875975028\n181\t7\t4\t878963037\n297\t574\t1\t875239092\n201\t651\t4\t884111217\n320\t99\t4\t884751440\n94\t180\t5\t885870284\n235\t85\t4\t889655232\n305\t131\t3\t886323440\n234\t229\t4\t892334189\n328\t591\t3\t885047018\n328\t754\t4\t885044607\n258\t323\t4\t885701062\n3\t323\t2\t889237269\n16\t70\t4\t877720118\n286\t425\t2\t877532013\n327\t702\t2\t887819021\n200\t265\t5\t884128372\n207\t131\t3\t878104377\n292\t10\t5\t881104606\n214\t179\t5\t892668130\n155\t321\t4\t879370963\n106\t213\t4\t881453065\n200\t586\t4\t884130391\n305\t216\t5\t886323563\n279\t1113\t3\t888806035\n178\t984\t2\t882823530\n331\t133\t3\t877196443\n58\t45\t5\t884305295\n167\t1306\t5\t892738385\n151\t191\t3\t879524326\n326\t168\t3\t879874859\n297\t443\t2\t875240133\n191\t288\t3\t891562090\n81\t471\t3\t876533586\n284\t258\t4\t885329146\n5\t267\t4\t875635064\n150\t325\t1\t878747322\n257\t59\t5\t879547440\n145\t443\t3\t882182658\n271\t191\t5\t885848448\n176\t297\t3\t886047918\n158\t38\t4\t880134607\n152\t716\t5\t884019001\n232\t638\t5\t888549988\n109\t930\t3\t880572351\n243\t660\t4\t879988422\n57\t744\t5\t883698581\n145\t1057\t1\t875271312\n235\t275\t5\t889655550\n181\t124\t1\t878962550\n145\t182\t5\t885622510\n249\t476\t3\t879640481\n44\t11\t3\t878347915\n194\t566\t4\t879522819\n109\t218\t4\t880578633\n49\t10\t3\t888066086\n269\t210\t1\t891449608\n87\t233\t4\t879876036\n314\t791\t4\t877889398\n292\t132\t4\t881105340\n7\t300\t4\t891350703\n291\t460\t5\t874834254\n292\t176\t5\t881103478\n290\t1028\t3\t880732365\n122\t427\t3\t879270165\n17\t151\t4\t885272751\n59\t47\t5\t888205574\n29\t689\t2\t882821705\n274\t411\t3\t878945888\n190\t340\t1\t891033153\n213\t50\t5\t878870456\n14\t111\t3\t876965165\n321\t131\t4\t879439883\n221\t1314\t3\t875247833\n195\t100\t5\t875771440\n236\t187\t3\t890118340\n92\t619\t4\t875640487\n303\
t576\t3\t879485417\n42\t210\t5\t881108633\n246\t423\t3\t884920900\n181\t823\t2\t878963343\n197\t231\t3\t891410124\n181\t369\t3\t878963418\n130\t172\t5\t875801530\n276\t1131\t3\t874796116\n252\t742\t4\t891455743\n221\t1067\t3\t875244387\n292\t488\t5\t881105657\n177\t124\t3\t880130881\n42\t785\t4\t881109060\n1\t70\t3\t875072895\n13\t178\t4\t882139829\n76\t276\t5\t875027601\n269\t72\t2\t891451470\n3\t331\t4\t889237455\n290\t429\t4\t880474606\n159\t815\t3\t880557387\n248\t474\t2\t884534672\n214\t1065\t5\t892668173\n30\t181\t4\t875060217\n8\t182\t5\t879362183\n238\t118\t3\t883576509\n249\t176\t4\t879641109\n264\t1069\t5\t886123728\n98\t655\t3\t880498861\n123\t275\t4\t879873726\n181\t688\t1\t878961668\n7\t162\t5\t891353444\n119\t269\t3\t892564213\n181\t457\t1\t878961474\n138\t483\t5\t879024280\n56\t63\t3\t892910268\n291\t122\t3\t874834289\n326\t468\t3\t879875572\n92\t175\t4\t875653549\n293\t654\t5\t888905760\n162\t1047\t5\t877635896\n303\t549\t3\t879484846\n325\t504\t3\t891477905\n267\t654\t5\t878971902\n130\t546\t4\t876250932\n216\t577\t1\t881432453\n301\t53\t1\t882078883\n91\t423\t5\t891439090\n301\t384\t5\t882079315\n291\t672\t3\t874867741\n18\t196\t3\t880131297\n195\t1084\t4\t888737345\n222\t939\t3\t878182211\n327\t274\t2\t887819462\n254\t577\t1\t886476092\n332\t693\t5\t888098538\n267\t55\t4\t878972785\n16\t443\t5\t877727055\n158\t79\t4\t880134332\n305\t14\t4\t886322893\n87\t67\t4\t879877007\n313\t175\t4\t891014697\n43\t498\t5\t875981275\n234\t1035\t3\t892335142\n90\t11\t4\t891384113\n230\t196\t5\t880484755\n1\t60\t5\t875072370\n262\t185\t3\t879793164\n221\t1407\t3\t875247833\n279\t382\t4\t875312947\n211\t678\t3\t879461394\n287\t1016\t5\t875334430\n167\t603\t4\t892738212\n119\t154\t5\t874782022\n126\t878\t5\t887938392\n60\t474\t5\t883326028\n296\t427\t5\t884198772\n300\t243\t4\t875650068\n194\t971\t3\t879551049\n83\t186\t4\t880308601\n207\t1242\t5\t884386260\n311\t1116\t3\t884364623\n181\t406\t1\t878962955\n130\t550\t5\t878537602\n245\t222\t4\t888513212\n168\t235\t2\t
884288270\n256\t756\t4\t882151167\n1\t177\t5\t876892701\n59\t10\t4\t888203234\n223\t258\t1\t891548802\n243\t225\t3\t879987655\n148\t1149\t5\t877016513\n10\t48\t4\t877889058\n178\t549\t4\t882827689\n295\t4\t4\t879518568\n99\t124\t2\t885678886\n334\t117\t3\t891544735\n263\t523\t5\t891298107\n230\t402\t5\t880485445\n152\t132\t5\t882475496\n189\t45\t3\t893265657\n130\t231\t3\t875801422\n334\t282\t4\t891544925\n91\t193\t3\t891439057\n244\t97\t2\t880605514\n83\t866\t3\t883867947\n222\t217\t3\t881060062\n10\t203\t4\t877891967\n173\t300\t4\t877556988\n269\t168\t4\t891448850\n292\t100\t5\t881103999\n60\t508\t4\t883327368\n197\t431\t3\t891409935\n313\t265\t4\t891016853\n234\t506\t4\t892318107\n234\t959\t2\t892334189\n154\t484\t4\t879139096\n14\t56\t5\t879119579\n201\t1211\t3\t884113806\n181\t359\t1\t878961668\n52\t748\t4\t882922629\n308\t579\t3\t887740700\n212\t515\t4\t879303571\n13\t42\t4\t882141393\n268\t99\t3\t875744744\n119\t245\t4\t886176618\n44\t202\t4\t878347315\n126\t884\t5\t887938392\n159\t111\t4\t880556981\n90\t301\t4\t891382392\n320\t42\t4\t884751712\n301\t25\t3\t882075110\n114\t269\t4\t881256090\n9\t691\t5\t886960055\n315\t17\t1\t879821003\n137\t195\t5\t881433689\n183\t562\t3\t891467003\n297\t301\t4\t876529834\n334\t603\t5\t891628849\n18\t954\t3\t880130640\n152\t97\t5\t882475618\n184\t498\t5\t889913687\n325\t430\t5\t891478028\n39\t315\t4\t891400094\n231\t127\t3\t879965565\n302\t309\t2\t879436820\n63\t150\t4\t875747292\n201\t375\t3\t884287140\n200\t103\t2\t891825521\n13\t94\t3\t882142057\n297\t22\t4\t875238984\n201\t844\t2\t884112537\n14\t93\t3\t879119311\n240\t343\t3\t885775831\n184\t716\t3\t889909987\n216\t12\t5\t881432544\n38\t122\t1\t892434801\n257\t276\t5\t882049973\n256\t778\t4\t882165103\n200\t229\t5\t884129696\n148\t177\t2\t877020715\n249\t22\t5\t879572926\n184\t47\t4\t889909640\n276\t58\t4\t874791169\n268\t432\t3\t875310145\n224\t258\t3\t888081947\n145\t25\t2\t875270655\n298\t261\t4\t884126805\n244\t743\t5\t880602170\n289\t410\t2\t876790361\n59\t132\t5\t88
8205744\n301\t1112\t4\t882079294\n56\t1090\t3\t892683641\n327\t192\t5\t887820828\n285\t288\t5\t890595584\n133\t328\t3\t890588577\n71\t346\t4\t885016248\n293\t1132\t3\t888905416\n13\t908\t1\t886302385\n1\t27\t2\t876892946\n271\t172\t5\t885848616\n286\t269\t5\t879780839\n49\t926\t1\t888069117\n290\t153\t3\t880475310\n226\t270\t4\t883888639\n104\t122\t3\t888465739\n311\t233\t4\t884365889\n60\t178\t5\t883326399\n200\t191\t5\t884128554\n128\t276\t4\t879967550\n157\t748\t2\t886890015\n303\t460\t4\t879543600\n5\t445\t3\t875720744\n268\t540\t1\t875542174\n290\t218\t2\t880475542\n181\t1346\t1\t878962086\n189\t276\t3\t893264300\n90\t659\t4\t891384357\n321\t134\t3\t879438607\n279\t108\t4\t892174381\n197\t770\t3\t891410082\n217\t566\t4\t889069903\n193\t682\t1\t889123377\n34\t310\t4\t888601628\n293\t157\t5\t888905779\n297\t300\t3\t874953892\n24\t742\t4\t875323915\n259\t405\t3\t874725120\n303\t1007\t5\t879544576\n326\t282\t2\t879875964\n10\t218\t4\t877889261\n334\t635\t2\t891548155\n272\t8\t4\t879455015\n76\t1129\t5\t875028075\n13\t300\t1\t881515736\n194\t431\t4\t879524291\n256\t291\t5\t882152630\n148\t185\t1\t877398385\n276\t318\t5\t874787496\n227\t126\t4\t879035158\n311\t553\t3\t884365451\n198\t427\t4\t884207009\n13\t180\t5\t882141248\n286\t100\t3\t876521650\n271\t451\t3\t885849447\n59\t318\t5\t888204349\n328\t655\t4\t886037429\n25\t174\t5\t885853415\n90\t971\t4\t891385250\n157\t150\t5\t874813703\n106\t69\t4\t881449886\n173\t322\t4\t877557028\n276\t1135\t4\t874791527\n276\t76\t4\t874791506\n49\t546\t1\t888069636\n115\t234\t5\t881171982\n307\t22\t3\t879205470\n82\t218\t3\t878769748\n116\t1082\t3\t876453171\n80\t50\t3\t887401533\n59\t381\t5\t888205659\n236\t143\t4\t890116163\n56\t174\t5\t892737191\n82\t413\t1\t884714593\n82\t69\t4\t878769948\n144\t727\t3\t888105765\n7\t526\t5\t891351042\n49\t531\t3\t888066511\n1\t260\t1\t875071713\n243\t129\t2\t879987526\n313\t488\t5\t891013496\n207\t273\t4\t878104569\n334\t222\t4\t891544904\n83\t95\t4\t880308453\n162\t230\t2\t877636860\n326\t496
\t5\t879874825\n236\t686\t3\t890118372\n17\t9\t3\t885272558\n92\t1215\t2\t890251747\n82\t147\t3\t876311473\n201\t242\t4\t884110598\n223\t237\t5\t891549657\n168\t295\t4\t884287615\n186\t977\t3\t879023273\n246\t356\t2\t884923047\n62\t135\t4\t879375080\n320\t456\t3\t884748904\n48\t603\t4\t879434607\n209\t269\t2\t883589606\n236\t1328\t4\t890116132\n92\t673\t4\t875656392\n71\t285\t3\t877319414\n5\t167\t2\t875636281\n67\t240\t5\t875379566\n188\t554\t2\t875074891\n326\t54\t3\t879876300\n234\t462\t4\t892079840\n31\t302\t4\t881547719\n228\t886\t1\t889387173\n172\t603\t3\t875538027\n314\t1139\t5\t877888480\n297\t652\t3\t875239346\n264\t659\t5\t886122577\n118\t174\t5\t875385007\n216\t286\t4\t881432501\n290\t1013\t2\t880732131\n256\t278\t5\t882151517\n200\t820\t3\t884127370\n49\t312\t3\t888065786\n118\t433\t5\t875384793\n293\t195\t3\t888906119\n13\t29\t2\t882397833\n42\t405\t4\t881105541\n293\t566\t3\t888907312\n125\t158\t4\t892839066\n315\t230\t4\t879821300\n296\t83\t5\t884199624\n188\t204\t4\t875073478\n201\t4\t4\t884111830\n253\t747\t3\t891628501\n315\t531\t5\t879799457\n210\t134\t5\t887736070\n119\t1170\t3\t890627339\n151\t509\t4\t879524778\n81\t273\t4\t876533710\n324\t748\t5\t880575108\n43\t15\t5\t875975546\n298\t432\t4\t884183307\n250\t127\t4\t878089881\n286\t1265\t5\t884069544\n203\t294\t2\t880433398\n267\t226\t3\t878972463\n194\t735\t4\t879524718\n303\t99\t4\t879467514\n193\t195\t1\t889124507\n57\t588\t4\t883698454\n92\t672\t3\t875660028\n207\t269\t4\t877845577\n325\t154\t3\t891478480\n280\t86\t4\t891700475\n197\t449\t5\t891410124\n39\t352\t5\t891400704\n197\t510\t5\t891409935\n117\t1\t4\t880126083\n132\t922\t5\t891278996\n271\t180\t5\t885849087\n222\t433\t4\t881059876\n103\t117\t4\t880416313\n201\t26\t4\t884111927\n270\t387\t5\t876955689\n104\t100\t4\t888465166\n95\t96\t4\t879196298\n130\t204\t5\t875216718\n290\t239\t2\t880474451\n314\t833\t4\t877887155\n313\t969\t4\t891015387\n295\t722\t4\t879518881\n269\t412\t3\t891446904\n49\t1\t2\t888068651\n332\t228\t5\t888359980\
n301\t11\t4\t882076291\n125\t434\t4\t879454100\n336\t66\t3\t877756126\n1\t145\t2\t875073067\n327\t230\t4\t887820448\n262\t292\t4\t879961282\n313\t205\t5\t891013652\n321\t523\t3\t879440687\n248\t185\t3\t884534772\n38\t384\t5\t892433660\n224\t778\t1\t888104057\n217\t1222\t1\t889070050\n6\t475\t5\t883599478\n331\t47\t5\t877196235\n38\t423\t5\t892430071\n1\t174\t5\t875073198\n308\t60\t3\t887737760\n207\t642\t3\t875991116\n215\t1039\t5\t891436543\n56\t239\t4\t892676970\n109\t1011\t3\t880571872\n10\t124\t5\t877888545\n320\t210\t5\t884749227\n269\t180\t3\t891448120\n290\t380\t3\t880731766\n311\t205\t5\t884365357\n129\t270\t3\t883243934\n109\t281\t2\t880571919\n235\t898\t3\t889654553\n335\t328\t3\t891566903\n13\t508\t3\t882140426\n201\t558\t2\t884112537\n276\t801\t3\t877935306\n81\t118\t2\t876533764\n288\t200\t4\t886373534\n263\t97\t4\t891299387\n293\t87\t4\t888907015\n136\t117\t4\t882694498\n318\t660\t3\t884497207\n295\t405\t5\t879518319\n201\t480\t4\t884111598\n232\t708\t4\t888550060\n197\t566\t4\t891409893\n313\t180\t5\t891014898\n109\t230\t5\t880579107\n168\t596\t4\t884287615\n201\t980\t3\t884140927\n222\t554\t2\t881060435\n115\t11\t4\t881171348\n334\t224\t2\t891545020\n119\t697\t5\t874782068\n198\t385\t3\t884208778\n91\t507\t4\t891438977\n62\t281\t3\t879373118\n239\t98\t5\t889180410\n324\t1033\t4\t880575589\n201\t823\t3\t884140975\n322\t50\t5\t887314418\n107\t305\t4\t891264327\n64\t2\t3\t889737609\n28\t50\t4\t881957090\n246\t202\t3\t884922272\n168\t1197\t5\t884287927\n34\t259\t2\t888602808\n286\t465\t5\t889651698\n184\t521\t4\t889908873\n106\t286\t4\t881449486\n198\t1117\t3\t884205252\n291\t53\t5\t874834827\n25\t477\t4\t885853155\n1\t159\t3\t875073180\n181\t1393\t1\t878961709\n169\t301\t4\t891268622\n60\t172\t4\t883326339\n178\t427\t5\t882826162\n149\t327\t2\t883512689\n280\t96\t4\t891700664\n205\t984\t1\t888284710\n92\t431\t4\t875660164\n244\t369\t4\t880605294\n308\t291\t3\t887739472\n235\t684\t4\t889655162\n218\t194\t3\t877488546\n307\t313\t5\t888095725\n18\t69\t3\t8
80129527\n23\t215\t2\t874787116\n184\t132\t5\t889913687\n244\t237\t5\t880602334\n211\t181\t1\t879461498\n236\t696\t2\t890117223\n145\t672\t3\t882182689\n235\t648\t4\t889655662\n116\t1016\t2\t876453376\n178\t358\t1\t888512993\n11\t561\t2\t891905936\n329\t512\t4\t891656347\n183\t405\t4\t891464393\n308\t467\t4\t887737194\n207\t576\t3\t877822904\n198\t249\t2\t884205277\n100\t750\t4\t891375016\n291\t168\t5\t874871800\n115\t762\t4\t881170508\n151\t169\t5\t879524268\n305\t403\t2\t886324792\n338\t494\t3\t879438570\n292\t525\t5\t881105701\n234\t671\t3\t892336257\n234\t584\t3\t892333653\n279\t275\t3\t875249232\n234\t638\t4\t892335989\n110\t79\t4\t886988480\n106\t273\t3\t881453290\n128\t111\t3\t879969215\n298\t151\t3\t884183952\n42\t845\t5\t881110719\n128\t747\t3\t879968742\n190\t717\t3\t891042938\n1\t82\t5\t878542589\n99\t421\t3\t885680772\n313\t208\t3\t891015167\n13\t45\t3\t882139863\n305\t302\t4\t886307860\n94\t185\t5\t885873684\n271\t204\t4\t885848314\n128\t83\t5\t879967691\n267\t50\t5\t878974783\n142\t189\t4\t888640317\n1\t56\t4\t875072716\n18\t214\t4\t880132078\n188\t234\t4\t875073048\n235\t100\t4\t889655550\n303\t408\t4\t879467035\n100\t266\t2\t891375484\n178\t302\t4\t892239796\n42\t781\t4\t881108280\n18\t488\t3\t880130065\n184\t14\t4\t889907738\n293\t521\t3\t888906288\n293\t849\t2\t888907891\n198\t156\t3\t884207058\n234\t966\t4\t892334189\n181\t1351\t1\t878962168\n194\t153\t3\t879546723\n1\t272\t3\t887431647\n265\t279\t2\t875320462\n159\t323\t4\t880485443\n332\t229\t5\t888360342\n334\t229\t2\t891549777\n126\t258\t4\t887853919\n200\t225\t4\t876042299\n63\t246\t3\t875747514\n271\t134\t3\t885848518\n179\t316\t5\t892151202\n308\t959\t3\t887739335\n270\t70\t5\t876955066\n181\t1198\t1\t878962585\n21\t445\t3\t874951658\n326\t675\t4\t879875457\n268\t823\t2\t875742942\n109\t845\t4\t880571684\n339\t132\t5\t891032953\n244\t95\t4\t880606418\n62\t702\t2\t879376079\n321\t615\t5\t879440109\n254\t141\t3\t886472836\n295\t423\t4\t879517372\n271\t241\t3\t885849207\n7\t519\t4\t891352831\n
334\t52\t4\t891548579\n136\t14\t5\t882693338\n192\t1160\t4\t881367456\n259\t176\t4\t874725386\n244\t509\t5\t880606017\n238\t815\t2\t883576398\n73\t127\t5\t888625200\n249\t455\t4\t879640326\n320\t291\t4\t884749014\n13\t820\t4\t882398743\n10\t283\t4\t877892276\n321\t207\t3\t879440244\n201\t991\t4\t884110735\n102\t559\t3\t888803052\n190\t742\t3\t891033841\n311\t99\t5\t884365075\n309\t333\t3\t877370419\n62\t685\t2\t879373175\n116\t187\t5\t886310197\n295\t966\t5\t879518060\n234\t72\t3\t892335674\n255\t984\t1\t883215902\n161\t582\t1\t891170800\n87\t550\t4\t879876074\n59\t559\t5\t888206562\n140\t322\t3\t879013684\n224\t301\t3\t888082013\n90\t486\t5\t891383912\n14\t792\t5\t879119651\n194\t216\t3\t879523785\n222\t501\t2\t881060331\n90\t311\t4\t891382163\n328\t43\t3\t886038224\n7\t633\t5\t891351509\n151\t228\t5\t879524345\n297\t223\t5\t875238638\n207\t529\t4\t878191679\n130\t930\t3\t876251072\n314\t743\t1\t877886443\n181\t926\t1\t878962866\n13\t509\t5\t882140691\n232\t523\t4\t888549757\n201\t87\t3\t884111775\n223\t470\t4\t891550767\n18\t602\t3\t880131407\n82\t495\t3\t878769668\n144\t403\t3\t888105636\n186\t322\t5\t879022927\n250\t174\t3\t878092104\n321\t194\t3\t879441225\n28\t12\t4\t881956853\n28\t895\t4\t882826398\n151\t405\t3\t879543055\n207\t1102\t3\t880839891\n201\t164\t3\t884112627\n6\t509\t4\t883602664\n42\t380\t4\t881108548\n221\t895\t2\t885081339\n328\t10\t4\t885047099\n270\t159\t4\t876956233\n269\t340\t5\t891446132\n216\t249\t3\t880232917\n201\t1424\t3\t884113114\n85\t86\t4\t879454189\n95\t843\t4\t880572448\n306\t275\t4\t876503894\n256\t235\t3\t882153668\n85\t692\t3\t879828490\n11\t312\t4\t891902157\n305\t210\t3\t886323006\n181\t321\t2\t878961623\n151\t7\t4\t879524610\n296\t961\t5\t884197287\n119\t595\t3\t874781067\n314\t929\t3\t877887356\n279\t363\t5\t890451473\n188\t357\t4\t875073647\n214\t872\t2\t891542492\n234\t209\t4\t892317967\n5\t426\t3\t878844510\n1\t80\t4\t876893008\n246\t578\t2\t884923306\n294\t979\t3\t877819897\n314\t73\t4\t877889205\n312\t98\t4\t891698300
\n208\t662\t4\t883108842\n43\t382\t5\t883955702\n254\t596\t4\t886473852\n3\t294\t2\t889237224\n44\t153\t4\t878347234\n25\t742\t4\t885852569\n94\t79\t4\t885882967\n262\t406\t3\t879791537\n35\t1025\t3\t875459237\n148\t501\t4\t877020297\n70\t423\t5\t884066910\n83\t265\t5\t880308186\n5\t222\t4\t875635174\n308\t1028\t2\t887738972\n109\t62\t3\t880578711\n49\t173\t3\t888067691\n314\t468\t4\t877892214\n334\t1163\t4\t891544764\n269\t205\t3\t891447841\n38\t318\t3\t892430071\n102\t222\t3\t888801406\n329\t297\t4\t891655868\n305\t1411\t3\t886324865\n236\t289\t4\t890117820\n313\t131\t4\t891015513\n332\t284\t5\t887938245\n121\t121\t2\t891388501\n60\t183\t5\t883326399\n339\t1030\t1\t891036707\n296\t544\t4\t884196938\n11\t720\t1\t891904717\n263\t272\t5\t891296919\n303\t203\t5\t879467669\n288\t182\t4\t886374497\n291\t17\t4\t874834850\n308\t628\t3\t887738104\n13\t755\t3\t882399014\n64\t231\t3\t889740880\n277\t24\t4\t879543931\n130\t572\t3\t878537853\n293\t386\t2\t888908065\n279\t368\t1\t886016352\n189\t253\t4\t893264150\n296\t32\t4\t884197131\n305\t169\t5\t886322893\n303\t262\t5\t879466065\n95\t211\t3\t879197652\n207\t1098\t4\t877879172\n110\t1248\t3\t886989126\n312\t408\t4\t891698174\n279\t1413\t5\t875314434\n15\t301\t4\t879455233\n116\t484\t4\t886310197\n198\t51\t3\t884208455\n13\t2\t3\t882397650\n332\t232\t5\t888098373\n44\t55\t4\t878347455\n62\t716\t4\t879375951\n148\t529\t5\t877398901\n303\t421\t4\t879466966\n276\t56\t5\t874791623\n311\t484\t4\t884366590\n58\t475\t5\t884304609\n85\t488\t4\t879455197\n330\t584\t3\t876547220\n181\t1067\t1\t878962550\n301\t515\t3\t882074561\n13\t830\t1\t882397581\n127\t268\t1\t884363990\n37\t56\t5\t880915810\n314\t924\t5\t877886921\n201\t210\t2\t884111270\n198\t511\t4\t884208326\n94\t742\t3\t891722214\n209\t258\t2\t883589626\n305\t610\t3\t886324128\n67\t405\t5\t875379794\n294\t120\t2\t889242937\n246\t98\t4\t884921428\n194\t162\t3\t879549899\n307\t393\t3\t877123041\n95\t976\t2\t879195703\n268\t252\t3\t875743182\n216\t298\t5\t881721819\n5\t453\t1\t879
198898\n223\t845\t4\t891549713\n293\t124\t4\t888904696\n224\t1119\t3\t888082634\n299\t176\t4\t880699166\n130\t71\t5\t875801695\n130\t50\t5\t874953665\n54\t313\t4\t890608360\n62\t473\t4\t879373046\n312\t495\t4\t891699372\n125\t22\t5\t892836395\n318\t357\t4\t884496069\n204\t748\t1\t892392030\n182\t293\t3\t885613152\n49\t569\t3\t888067482\n69\t56\t5\t882145428\n64\t959\t4\t889739903\n325\t179\t5\t891478529\n286\t272\t5\t884069298\n116\t880\t3\t876680723\n215\t89\t4\t891435060\n46\t333\t5\t883611374\n246\t294\t2\t884924460\n213\t25\t4\t878870750\n90\t213\t5\t891383718\n110\t188\t4\t886988574\n212\t511\t4\t879304051\n57\t1059\t3\t883697432\n57\t825\t1\t883697761\n297\t282\t3\t874954845\n276\t176\t5\t874792401\n106\t45\t3\t881453290\n151\t66\t4\t879524974\n276\t66\t3\t874791993\n269\t76\t3\t891448456\n154\t286\t4\t879138235\n210\t219\t3\t887808581\n306\t319\t4\t876503793\n324\t471\t5\t880575412\n265\t472\t3\t875320542\n85\t389\t3\t882995832\n54\t325\t3\t880930146\n18\t498\t4\t880129940\n271\t345\t3\t885844666\n123\t22\t4\t879809943\n87\t1189\t5\t879877951\n217\t810\t3\t889070050\n198\t148\t3\t884206401\n116\t257\t3\t876452523\n131\t274\t3\t883681351\n297\t692\t3\t875239018\n266\t874\t2\t892257101\n109\t796\t3\t880582856\n189\t480\t5\t893265291\n22\t294\t1\t878886262\n234\t471\t3\t892335074\n328\t679\t2\t885049460\n56\t79\t4\t892676303\n178\t978\t2\t882824983\n216\t226\t3\t880244803\n38\t444\t1\t892433912\n219\t179\t5\t889492687\n43\t944\t2\t883956260\n279\t1484\t3\t875307587\n236\t507\t3\t890115897\n296\t1009\t3\t884196921\n271\t490\t4\t885848886\n206\t903\t2\t888180018\n21\t295\t3\t874951349\n318\t47\t2\t884496855\n59\t230\t4\t888205714\n151\t175\t5\t879524244\n263\t86\t4\t891299574\n308\t193\t3\t887737837\n152\t125\t5\t880149165\n123\t165\t5\t879872672\n169\t174\t4\t891359418\n294\t10\t3\t877819490\n197\t651\t5\t891409839\n263\t892\t3\t891297766\n63\t109\t4\t875747731\n206\t362\t1\t888180018\n52\t498\t5\t882922948\n316\t213\t5\t880853516\n72\t89\t3\t880037164\n189\t705\
t4\t893265569\n80\t87\t4\t887401307\n198\t746\t4\t884207946\n85\t56\t4\t879453587\n194\t56\t5\t879521936\n110\t82\t4\t886988480\n99\t741\t3\t885678886\n7\t195\t5\t891352626\n323\t546\t2\t878739519\n21\t982\t1\t874951482\n334\t93\t4\t891545020\n12\t82\t4\t879959610\n43\t235\t3\t875975520\n228\t288\t4\t889387173\n109\t90\t3\t880583192\n13\t64\t5\t882140037\n178\t288\t5\t882823353\n181\t887\t1\t878962005\n123\t606\t3\t879872540\n82\t64\t5\t878770169\n138\t285\t4\t879023245\n87\t1182\t3\t879877043\n201\t304\t2\t884110967\n70\t202\t4\t884066713\n178\t655\t4\t882827247\n327\t558\t4\t887746196\n315\t654\t5\t879821193\n251\t55\t3\t886271856\n42\t70\t3\t881109148\n311\t482\t4\t884365104\n129\t272\t4\t883243972\n307\t193\t3\t879205470\n10\t4\t4\t877889130\n338\t211\t4\t879438092\n95\t514\t2\t888954076\n342\t1047\t2\t874984854\n342\t792\t3\t875318882\n201\t213\t4\t884111873\n32\t276\t4\t883717913\n257\t289\t4\t879029543\n14\t175\t5\t879119497\n299\t174\t4\t877880961\n6\t134\t5\t883602283\n320\t433\t4\t884751730\n305\t257\t2\t886322122\n28\t153\t3\t881961214\n308\t609\t4\t887739757\n287\t218\t5\t875335424\n62\t421\t5\t879375716\n269\t172\t3\t891449031\n119\t628\t4\t874775185\n279\t1142\t1\t890780603\n224\t1442\t3\t888104281\n308\t528\t3\t887737036\n151\t435\t4\t879524131\n328\t216\t3\t885045899\n295\t493\t5\t879516961\n62\t96\t4\t879374835\n59\t1109\t3\t888205088\n255\t258\t4\t883215406\n102\t195\t4\t888801360\n128\t660\t2\t879968415\n8\t79\t4\t879362286\n197\t1419\t2\t891410124\n217\t578\t5\t889070087\n313\t204\t4\t891014401\n162\t298\t4\t877635690\n30\t289\t2\t876847817\n260\t319\t2\t890618198\n57\t294\t4\t883696547\n334\t86\t4\t891548295\n308\t54\t2\t887740254\n210\t255\t4\t887730842\n213\t447\t4\t878955598\n189\t1021\t5\t893266251\n220\t306\t4\t881197664\n104\t1241\t1\t888465379\n339\t582\t4\t891032793\n28\t184\t4\t881961671\n51\t148\t3\t883498623\n244\t157\t4\t880604119\n234\t491\t4\t892079538\n275\t588\t3\t875154535\n186\t53\t1\t879023882\n99\t1052\t1\t885679533\n269\t131
\t5\t891449728\n311\t720\t3\t884366307\n270\t1119\t5\t876955729\n286\t1035\t3\t877532094\n311\t94\t3\t884366187\n211\t257\t5\t879461498\n239\t671\t5\t889179290\n201\t98\t4\t884111312\n43\t403\t4\t883956305\n315\t216\t4\t879821120\n53\t924\t3\t879443303\n308\t452\t2\t887741329\n338\t613\t3\t879438597\n90\t357\t5\t891385132\n303\t327\t1\t879466166\n247\t271\t2\t893081411\n144\t303\t4\t888103407\n102\t1030\t1\t892994075\n90\t739\t5\t891384789\n72\t527\t4\t880036746\n286\t248\t5\t875806800\n201\t32\t3\t884140049\n327\t497\t4\t887818658\n141\t125\t5\t884585642\n167\t675\t1\t892738277\n262\t217\t3\t879792818\n151\t813\t4\t879524222\n13\t859\t1\t882397040\n276\t207\t4\t874795988\n246\t1073\t4\t884921380\n298\t98\t4\t884127720\n23\t88\t3\t874787410\n94\t700\t2\t891723427\n130\t772\t4\t876251804\n5\t403\t3\t875636152\n297\t176\t4\t881708055\n178\t250\t4\t888514821\n128\t417\t4\t879968447\n270\t281\t5\t876956137\n63\t251\t4\t875747514\n42\t357\t5\t881107687\n100\t288\t2\t891374603\n334\t100\t5\t891544707\n162\t222\t4\t877635758\n184\t1020\t4\t889908630\n13\t625\t2\t882398691\n72\t79\t4\t880037119\n213\t8\t3\t878955564\n82\t13\t2\t878768615\n314\t735\t5\t877888855\n59\t488\t3\t888205956\n14\t313\t2\t890880970\n236\t200\t3\t890115856\n325\t240\t1\t891479592\n286\t164\t3\t877533586\n268\t768\t3\t875744895\n83\t77\t4\t880308426\n313\t230\t3\t891015049\n21\t218\t4\t874951696\n325\t656\t4\t891478219\n283\t83\t4\t879298239\n223\t323\t2\t891549017\n130\t418\t5\t875801631\n28\t282\t4\t881957425\n43\t7\t4\t875975520\n293\t559\t2\t888906168\n286\t432\t3\t878141681\n176\t272\t5\t886047068\n237\t499\t2\t879376487\n332\t451\t5\t888360179\n303\t273\t3\t879468274\n286\t13\t2\t876521933\n327\t169\t2\t887744205\n262\t50\t2\t879962366\n312\t631\t5\t891699599\n102\t734\t2\t892993786\n16\t655\t5\t877724066\n23\t90\t2\t874787370\n249\t182\t5\t879640949\n18\t209\t4\t880130861\n293\t216\t4\t888905990\n308\t607\t3\t887737084\n164\t689\t5\t889401490\n306\t1009\t4\t876503995\n327\t655\t4\t887745303\n28
0\t756\t4\t891701649\n106\t97\t5\t881450810\n109\t147\t4\t880564679\n156\t58\t4\t888185906\n133\t260\t1\t890588878\n23\t511\t5\t874786770\n112\t689\t4\t884992668\n116\t313\t5\t886978155\n271\t13\t4\t885847714\n313\t136\t5\t891014474\n240\t898\t5\t885775770\n52\t405\t4\t882922610\n280\t202\t3\t891701090\n262\t1278\t4\t879961819\n275\t252\t2\t876197944\n187\t732\t3\t879465419\n13\t428\t5\t882140588\n268\t946\t3\t875310442\n234\t283\t3\t891227814\n16\t151\t5\t877721905\n336\t108\t3\t877757320\n235\t435\t5\t889655434\n216\t274\t3\t880233061\n246\t215\t2\t884921058\n13\t913\t1\t892014908\n21\t439\t1\t874951820\n94\t99\t3\t891721815\n82\t275\t2\t884714125\n339\t55\t3\t891032765\n59\t1116\t3\t888206562\n217\t685\t5\t889069782\n295\t736\t5\t879966498\n170\t328\t3\t884103860\n151\t826\t1\t879543212\n13\t212\t5\t882399385\n223\t1\t4\t891549324\n246\t196\t3\t884921861\n154\t137\t3\t879138657\n158\t144\t4\t880134445\n11\t120\t2\t891903935\n18\t630\t3\t880132188\n197\t181\t5\t891409893\n235\t433\t4\t889655596\n331\t69\t5\t877196384\n244\t278\t3\t880605294\n217\t540\t1\t889070087\n312\t134\t5\t891698764\n299\t168\t4\t878192039\n234\t1172\t3\t892079076\n224\t632\t2\t888103872\n327\t474\t3\t887743986\n184\t780\t4\t889913254\n62\t1107\t1\t879376159\n65\t70\t1\t879216529\n101\t928\t2\t877136302\n210\t465\t4\t887737131\n144\t237\t4\t888104258\n320\t250\t4\t884751992\n311\t692\t4\t884364652\n159\t328\t3\t893255993\n128\t77\t3\t879968447\n167\t48\t1\t892738277\n291\t558\t4\t874867757\n56\t143\t3\t892910182\n38\t392\t5\t892430120\n293\t264\t3\t888904392\n115\t69\t1\t881171825\n276\t250\t4\t874786784\n280\t225\t4\t891701974\n295\t588\t4\t879517682\n26\t321\t3\t891347949\n302\t328\t3\t879436844\n145\t109\t4\t875270903\n201\t380\t1\t884140825\n57\t252\t2\t883697807\n280\t100\t3\t891700385\n310\t258\t3\t879435606\n26\t269\t4\t891347478\n308\t4\t5\t887737890\n269\t174\t1\t891449124\n262\t71\t4\t879794951\n221\t684\t4\t875247454\n263\t521\t3\t891297988\n256\t276\t3\t882151198\n1\t229\t4\t87854
2075\n266\t508\t4\t892258004\n59\t127\t5\t888204430\n325\t505\t4\t891478557\n327\t133\t4\t887745662\n282\t269\t4\t879949347\n151\t300\t4\t879523942\n104\t283\t4\t888465582\n291\t1017\t4\t874833911\n276\t770\t4\t877935446\n334\t1108\t4\t891549632\n224\t879\t3\t888082099\n64\t1133\t4\t889739975\n58\t42\t4\t884304936\n106\t584\t4\t881453481\n159\t258\t4\t893255836\n268\t248\t3\t875742530\n318\t286\t3\t884470681\n6\t525\t5\t883601203\n327\t431\t3\t887820384\n77\t23\t4\t884753173\n95\t15\t4\t879195062\n255\t452\t3\t883216672\n144\t328\t3\t888103407\n102\t307\t4\t883748222\n269\t1014\t3\t891446838\n184\t172\t4\t889908497\n306\t306\t5\t876503792\n49\t732\t3\t888069040\n181\t1347\t1\t878962052\n293\t514\t4\t888906378\n330\t121\t4\t876544582\n125\t1074\t3\t892838827\n291\t147\t4\t874805768\n269\t214\t3\t891448547\n13\t168\t4\t881515193\n305\t76\t1\t886323506\n313\t435\t5\t891013803\n307\t229\t5\t879538921\n314\t54\t4\t877888892\n269\t529\t5\t891455815\n283\t186\t5\t879298239\n158\t8\t5\t880134948\n92\t87\t3\t876175077\n85\t842\t3\t882995704\n20\t118\t4\t879668442\n193\t393\t4\t889126808\n167\t222\t4\t892737995\n201\t1187\t3\t884112201\n125\t346\t1\t892835800\n144\t880\t5\t888103509\n234\t628\t2\t892826612\n291\t574\t1\t875087656\n224\t977\t2\t888104281\n152\t780\t5\t884019189\n71\t462\t5\t877319567\n151\t755\t3\t879543366\n135\t229\t2\t879857843\n92\t931\t1\t875644796\n95\t33\t3\t880571704\n130\t125\t5\t875801963\n269\t405\t1\t891450902\n297\t277\t3\t875048641\n62\t527\t4\t879373692\n221\t17\t4\t875245406\n11\t743\t2\t891904065\n230\t50\t5\t880484755\n159\t930\t4\t880557824\n174\t107\t5\t886434361\n97\t7\t5\t884238939\n84\t289\t5\t883449419\n63\t948\t3\t875746948\n125\t143\t5\t879454793\n160\t126\t3\t876769148\n316\t483\t4\t880853810\n32\t117\t3\t883717555\n327\t93\t4\t887744432\n13\t856\t5\t886303171\n216\t202\t4\t880234346\n92\t1212\t3\t876175626\n1\t140\t1\t878543133\n263\t183\t4\t891298655\n5\t173\t4\t875636675\n85\t372\t4\t879828720\n194\t519\t4\t879521474\n109\t550\t5\
t880579107\n201\t198\t4\t884111873\n340\t172\t4\t884990620\n49\t117\t1\t888069459\n7\t642\t3\t892132277\n239\t286\t1\t889178512\n198\t568\t3\t884208710\n237\t23\t4\t879376606\n239\t135\t5\t889178762\n5\t241\t1\t875720948\n72\t382\t4\t880036691\n297\t480\t4\t875238923\n249\t826\t1\t879640481\n25\t127\t3\t885853030\n94\t227\t3\t891722759\n195\t591\t4\t892281779\n92\t85\t3\t875812364\n85\t709\t5\t879828941\n308\t502\t5\t887739521\n311\t117\t4\t884366852\n247\t251\t4\t893081395\n235\t792\t4\t889655490\n329\t326\t3\t891656639\n338\t79\t4\t879438715\n244\t428\t4\t880606155\n187\t70\t4\t879465394\n253\t483\t5\t891628122\n194\t62\t2\t879524504\n70\t71\t3\t884066399\n203\t332\t5\t880433474\n49\t72\t2\t888069246\n308\t673\t4\t887737243\n246\t426\t3\t884923471\n280\t231\t3\t891701974\n180\t433\t5\t877127273\n110\t1250\t3\t886988818\n327\t811\t4\t887747363\n339\t47\t4\t891032701\n194\t132\t3\t879520991\n1\t225\t2\t878542738\n36\t319\t2\t882157356\n342\t746\t4\t875320227\n260\t1105\t5\t890618729\n40\t754\t4\t889041790\n175\t31\t4\t877108051\n62\t827\t2\t879373421\n138\t100\t5\t879022956\n252\t9\t5\t891456797\n59\t421\t5\t888206015\n110\t540\t3\t886988793\n1\t235\t5\t875071589\n334\t269\t3\t891544049\n301\t95\t5\t882076334\n63\t6\t3\t875747439\n269\t805\t2\t891450623\n151\t357\t5\t879524585\n268\t404\t4\t875309430\n199\t473\t4\t883783005\n22\t780\t1\t878887377\n28\t441\t2\t881961782\n299\t210\t4\t889502980\n317\t326\t3\t891446438\n254\t384\t1\t886475790\n178\t245\t3\t882823460\n297\t194\t3\t875239453\n90\t966\t5\t891385843\n11\t734\t3\t891905349\n325\t514\t4\t891478006\n249\t411\t3\t879640436\n18\t964\t3\t880132252\n311\t118\t3\t884963203\n334\t293\t3\t891544840\n294\t483\t4\t889854323\n297\t86\t5\t875238883\n293\t647\t5\t888905760\n294\t876\t3\t889241633\n286\t142\t4\t877534793\n308\t569\t3\t887740410\n222\t164\t4\t878181768\n49\t721\t2\t888068934\n303\t1090\t1\t879485686\n73\t474\t5\t888625200\n93\t845\t4\t888705321\n85\t1101\t4\t879454046\n223\t216\t5\t891550925\n42\t1043\t2\t
881108633\n234\t212\t2\t892334883\n16\t288\t3\t877717078\n13\t319\t4\t882139327\n135\t294\t4\t879857575\n168\t411\t1\t884288222\n72\t204\t4\t880037853\n144\t523\t5\t888105338\n303\t398\t1\t879485372\n128\t215\t3\t879967452\n320\t11\t4\t884749255\n267\t684\t4\t878973088\n60\t490\t4\t883326958\n189\t694\t4\t893265946\n116\t905\t2\t890131519\n249\t240\t4\t879640343\n110\t300\t3\t886987380\n201\t1063\t3\t884113453\n180\t121\t5\t877127830\n87\t1072\t3\t879876610\n6\t209\t4\t883601713\n63\t301\t5\t875747010\n179\t895\t5\t892151565\n148\t98\t3\t877017714\n13\t312\t1\t883670630\n15\t278\t1\t879455843\n176\t305\t5\t886047068\n102\t66\t3\t892992129\n293\t251\t4\t888904734\n42\t204\t5\t881107821\n328\t523\t5\t885046206\n206\t333\t4\t888179565\n279\t67\t4\t875310419\n158\t42\t3\t880134913\n70\t151\t3\t884148603\n271\t661\t4\t885848373\n37\t222\t3\t880915528\n279\t1095\t1\t886016480\n250\t200\t5\t883263374\n103\t144\t4\t880420510\n50\t1084\t5\t877052501\n128\t1141\t4\t879968827\n336\t577\t1\t877757396\n275\t191\t4\t880314797\n95\t173\t5\t879198547\n87\t651\t4\t879875893\n21\t678\t2\t874951005\n145\t1217\t2\t875272349\n13\t860\t1\t882396984\n312\t676\t3\t891699295\n200\t431\t5\t884129006\n102\t67\t1\t892993706\n325\t506\t5\t891478180\n221\t1073\t4\t875245846\n2\t297\t4\t888550871\n305\t733\t3\t886324661\n275\t969\t2\t880314412\n11\t215\t3\t891904389\n341\t876\t4\t890757886\n231\t126\t5\t888605273\n269\t474\t4\t891448823\n13\t540\t3\t882398410\n102\t809\t3\t888802768\n254\t240\t1\t886476165\n234\t486\t3\t892079373\n256\t932\t3\t882150508\n249\t58\t5\t879572516\n305\t947\t4\t886322838\n262\t15\t3\t879962366\n325\t187\t3\t891478455\n184\t836\t4\t889909142\n11\t428\t4\t891905032\n40\t258\t3\t889041981\n313\t740\t2\t891016540\n276\t1314\t3\t874796412\n101\t1051\t2\t877136891\n236\t699\t4\t890116095\n207\t134\t4\t875991160\n215\t82\t3\t891435995\n125\t945\t5\t892836465\n120\t282\t4\t889490172\n293\t461\t2\t888905519\n160\t93\t5\t876767572\n298\t418\t4\t884183406\n326\t444\t4\t879877413
\n246\t849\t1\t884923687\n278\t301\t2\t891294980\n166\t288\t3\t886397510\n328\t4\t3\t885047895\n70\t265\t4\t884067503\n298\t465\t4\t884182806\n343\t186\t4\t876407485\n205\t313\t3\t888284313\n201\t461\t4\t884113924\n276\t1478\t3\t889174849\n91\t264\t4\t891438583\n250\t294\t1\t878089033\n68\t405\t3\t876974518\n246\t99\t3\t884922657\n10\t704\t3\t877892050\n97\t435\t4\t884238752\n99\t118\t2\t885679237\n102\t302\t3\t880680541\n70\t152\t4\t884149877\n41\t31\t3\t890687473\n178\t179\t2\t882828320\n6\t19\t4\t883602965\n89\t246\t5\t879461219\n254\t257\t3\t886471389\n94\t402\t4\t891723261\n42\t404\t5\t881108760\n130\t566\t4\t878537558\n13\t614\t4\t884538634\n286\t642\t3\t877531498\n291\t410\t5\t874834481\n214\t121\t4\t891543632\n246\t284\t1\t884922475\n130\t413\t3\t876251127\n320\t1210\t4\t884751316\n60\t810\t4\t883327201\n141\t744\t5\t884584981\n288\t97\t4\t886629750\n145\t750\t4\t885555884\n189\t496\t5\t893265380\n130\t55\t5\t875216507\n328\t431\t2\t885047822\n177\t1039\t3\t880130807\n201\t281\t2\t884112352\n301\t456\t3\t882074838\n136\t56\t4\t882848783\n74\t15\t4\t888333542\n169\t429\t3\t891359250\n1\t120\t1\t875241637\n100\t302\t4\t891374528\n303\t716\t2\t879467639\n216\t498\t3\t880235329\n6\t476\t1\t883600175\n329\t98\t4\t891656300\n230\t511\t2\t880485656\n113\t321\t3\t875075887\n64\t100\t4\t879365558\n13\t876\t2\t881515521\n269\t771\t1\t891451754\n6\t154\t3\t883602730\n327\t962\t3\t887820545\n179\t345\t1\t892151565\n60\t152\t4\t883328033\n222\t250\t2\t877563801\n83\t252\t4\t883868598\n330\t51\t5\t876546753\n125\t290\t4\t892838375\n181\t286\t1\t878961173\n327\t451\t4\t887819411\n161\t14\t4\t891171413\n18\t82\t3\t880131236\n24\t372\t4\t875323553\n200\t286\t4\t884125953\n73\t202\t2\t888626577\n22\t29\t1\t878888228\n96\t8\t5\t884403020\n343\t1107\t3\t876406977\n297\t12\t5\t875239619\n279\t1411\t3\t884556545\n110\t202\t2\t886988909\n94\t257\t4\t891724178\n72\t176\t2\t880037203\n102\t89\t4\t888801315\n119\t684\t4\t886177338\n60\t151\t5\t883326995\n295\t404\t4\t879518378\n308\t
447\t4\t887739056\n312\t1203\t5\t891699599\n343\t55\t3\t876405129\n284\t259\t2\t885329593\n276\t563\t3\t874977334\n280\t736\t2\t891700341\n311\t310\t4\t884363865\n18\t739\t3\t880131776\n87\t209\t5\t879876488\n13\t90\t3\t882141872\n58\t1097\t5\t884504973\n224\t243\t2\t888082277\n279\t780\t4\t875314165\n56\t568\t4\t892910797\n330\t215\t5\t876547925\n7\t92\t5\t891352010\n179\t315\t5\t892151202\n64\t239\t3\t889740033\n297\t699\t4\t875239658\n21\t424\t1\t874951293\n188\t792\t2\t875075062\n91\t195\t5\t891439057\n293\t194\t4\t888906045\n94\t727\t5\t891722458\n274\t148\t2\t878946133\n57\t282\t5\t883697223\n276\t780\t3\t874792143\n216\t651\t5\t880233912\n151\t241\t3\t879542645\n62\t8\t5\t879373820\n197\t68\t2\t891410082\n59\t385\t4\t888205659\n119\t275\t5\t874774575\n118\t324\t4\t875384444\n304\t298\t5\t884968415\n26\t9\t4\t891386369\n312\t847\t3\t891698174\n308\t965\t4\t887738387\n270\t707\t5\t876954927\n297\t31\t3\t881708087\n221\t100\t5\t875244125\n116\t760\t3\t886309812\n119\t193\t4\t874781872\n177\t300\t2\t880130434\n161\t654\t3\t891171357\n303\t235\t4\t879484563\n117\t174\t4\t881011393\n327\t216\t3\t887818991\n327\t1098\t4\t887820828\n23\t516\t4\t874787330\n181\t1051\t2\t878962586\n48\t661\t5\t879434954\n76\t531\t4\t875028007\n189\t129\t3\t893264378\n1\t125\t3\t878542960\n312\t144\t1\t891698987\n301\t410\t4\t882074460\n306\t476\t3\t876504679\n38\t616\t3\t892433375\n223\t298\t5\t891549570\n145\t1292\t1\t875271357\n328\t528\t5\t886037457\n174\t458\t4\t886433862\n303\t31\t3\t879467361\n23\t83\t4\t874785926\n6\t175\t4\t883601426\n173\t938\t3\t877557076\n313\t239\t3\t891028873\n38\t780\t4\t892434217\n184\t89\t4\t889908572\n44\t155\t3\t878348947\n244\t13\t4\t880604379\n13\t263\t5\t881515647\n344\t479\t4\t884901093\n40\t340\t2\t889041454\n141\t222\t4\t884584865\n144\t286\t4\t888103370\n324\t597\t4\t880575493\n222\t700\t3\t881060550\n96\t484\t5\t884402860\n90\t199\t5\t891384423\n1\t215\t3\t876893145\n270\t379\t5\t876956232\n251\t257\t3\t886272378\n246\t109\t5\t884921794\n130\t
90\t4\t875801920\n326\t318\t5\t879875612\n9\t521\t4\t886959343\n221\t32\t4\t875245223\n20\t186\t3\t879669040\n37\t79\t4\t880915810\n279\t871\t4\t875297410\n163\t56\t4\t891220097\n84\t284\t3\t883450093\n201\t676\t2\t884140927\n46\t1062\t5\t883614766\n72\t82\t3\t880037242\n117\t176\t5\t881012028\n269\t608\t4\t891449526\n148\t214\t5\t877019882\n294\t1067\t4\t877819421\n121\t174\t3\t891388063\n20\t172\t3\t879669181\n59\t724\t5\t888205265\n108\t125\t3\t879879864\n49\t53\t4\t888067405\n294\t678\t2\t877818861\n240\t301\t5\t885775683\n299\t602\t3\t878191995\n246\t802\t1\t884923471\n13\t788\t1\t882396914\n303\t1508\t1\t879544130\n207\t1283\t4\t884386260\n255\t271\t4\t883215525\n195\t477\t2\t885110922\n312\t557\t5\t891699599\n144\t302\t3\t888103530\n102\t399\t2\t888802722\n297\t515\t5\t874954353\n106\t165\t5\t881450536\n291\t421\t4\t875087352\n145\t552\t5\t888398747\n89\t936\t5\t879461219\n85\t71\t4\t879456308\n282\t271\t3\t881702919\n339\t856\t5\t891034922\n135\t227\t3\t879857843\n151\t91\t2\t879542796\n221\t467\t4\t875245928\n286\t196\t4\t877533543\n116\t195\t4\t876453626\n94\t738\t2\t891723558\n144\t172\t4\t888105312\n214\t208\t5\t892668153\n234\t519\t5\t892078342\n244\t596\t4\t880604735\n222\t739\t4\t878184924\n74\t126\t3\t888333428\n45\t127\t5\t881007272\n344\t306\t5\t884814359\n116\t887\t3\t881246591\n181\t1362\t1\t878962200\n144\t461\t4\t888106044\n189\t1099\t5\t893266074\n53\t228\t3\t879442561\n2\t290\t3\t888551441\n299\t739\t3\t889502865\n313\t139\t3\t891030334\n274\t275\t5\t878944679\n321\t521\t2\t879441201\n134\t539\t4\t891732335\n269\t486\t3\t891449922\n94\t655\t4\t891720862\n262\t1220\t4\t879794296\n181\t1265\t1\t878961668\n109\t4\t2\t880572756\n12\t96\t4\t879959583\n109\t42\t1\t880572756\n90\t307\t5\t891383319\n77\t498\t5\t884734016\n314\t620\t3\t877887212\n48\t210\t3\t879434886\n305\t1101\t4\t886323563\n198\t357\t5\t884207267\n222\t293\t3\t877563353\n207\t186\t4\t877879173\n158\t580\t4\t880135093\n255\t551\t1\t883216672\n87\t1047\t3\t879877280\n301\t9\t3\t88207
4291\n279\t1498\t4\t891208884\n299\t343\t3\t881605700\n339\t288\t3\t891036899\n13\t782\t3\t885744650\n210\t722\t4\t891036021\n200\t528\t4\t884128426\n193\t693\t4\t889124374\n297\t678\t3\t874954093\n128\t216\t5\t879967102\n311\t38\t3\t884365954\n169\t879\t5\t891268653\n174\t82\t1\t886515472\n13\t440\t1\t882397040\n95\t378\t4\t888954699\n321\t224\t3\t879439733\n180\t83\t5\t877128388\n150\t127\t5\t878746889\n332\t233\t4\t888360370\n102\t83\t3\t888803487\n263\t678\t2\t891297766\n128\t97\t3\t879968125\n239\t288\t2\t889178513\n275\t202\t3\t875155167\n311\t471\t4\t884963254\n267\t145\t4\t878972903\n253\t210\t4\t891628598\n250\t64\t5\t878090153\n284\t339\t3\t885329671\n327\t849\t2\t887822530\n11\t90\t2\t891905298\n222\t93\t2\t883815577\n299\t26\t4\t878192601\n276\t748\t3\t883822507\n274\t496\t5\t878946473\n252\t129\t4\t891456876\n244\t1225\t2\t880606818\n75\t820\t3\t884050979\n194\t52\t4\t879525876\n328\t627\t3\t885048365\n201\t955\t3\t884114895\n253\t198\t5\t891628392\n221\t39\t4\t875245798\n334\t317\t3\t891546000\n271\t414\t4\t885849470\n158\t525\t5\t880133288\n64\t705\t5\t879365558\n294\t24\t4\t877819761\n28\t480\t5\t881957002\n269\t959\t5\t891457067\n299\t270\t4\t878052375\n151\t655\t4\t879542645\n177\t87\t4\t880130931\n269\t15\t2\t891446348\n279\t740\t3\t875736276\n332\t673\t5\t888360307\n269\t483\t4\t891448800\n91\t682\t2\t891438184\n246\t17\t2\t884922658\n290\t418\t3\t880474293\n9\t487\t5\t886960056\n217\t797\t4\t889070011\n234\t14\t3\t891227730\n292\t1050\t4\t881105778\n65\t1129\t4\t879217258\n222\t231\t2\t878182005\n299\t32\t3\t877881169\n279\t685\t3\t884982881\n15\t620\t4\t879456204\n68\t178\t5\t876974755\n293\t210\t3\t888905665\n43\t931\t1\t884029742\n344\t278\t3\t884900454\n56\t368\t3\t892911589\n339\t30\t3\t891032765\n144\t518\t3\t888106182\n125\t734\t3\t892838977\n12\t735\t5\t879960826\n269\t484\t3\t891448895\n90\t179\t5\t891385389\n185\t237\t4\t883526268\n243\t275\t3\t879987084\n269\t1091\t2\t891451705\n11\t429\t5\t891904335\n13\t88\t4\t882141485\n120\t25\t5\
t889490370\n198\t402\t3\t884209147\n165\t304\t3\t879525672\n138\t98\t5\t879024043\n94\t561\t3\t891722882\n293\t188\t3\t888906288\n39\t258\t4\t891400280\n159\t237\t3\t880485766\n344\t39\t3\t884901290\n69\t1017\t5\t882126156\n230\t673\t3\t880485573\n160\t124\t4\t876767360\n44\t228\t5\t883613334\n298\t1142\t4\t884183572\n345\t1160\t3\t884994606\n94\t133\t4\t885882685\n121\t122\t2\t891390501\n325\t109\t2\t891478528\n160\t1019\t5\t876857977\n205\t333\t4\t888284618\n343\t44\t3\t876406640\n321\t1028\t2\t879441064\n102\t986\t1\t888802319\n268\t123\t3\t875742794\n19\t153\t4\t885412840\n125\t511\t5\t879454699\n332\t1188\t5\t888098374\n90\t132\t5\t891384673\n16\t657\t5\t877723882\n316\t50\t1\t880853654\n272\t11\t4\t879455143\n85\t380\t4\t882995704\n279\t1118\t3\t875310631\n269\t761\t2\t891451374\n75\t696\t4\t884050979\n249\t469\t4\t879641285\n311\t671\t3\t884365954\n58\t222\t4\t884304656\n254\t99\t3\t886473254\n308\t632\t3\t887738057\n125\t1272\t1\t879454892\n49\t40\t1\t888069222\n83\t1101\t2\t880308256\n16\t294\t4\t877717116\n94\t214\t5\t891725332\n295\t624\t5\t879518654\n152\t866\t5\t880149224\n128\t227\t2\t879968946\n119\t235\t5\t874774956\n122\t1268\t2\t879270711\n276\t561\t2\t874792745\n251\t109\t4\t886272547\n7\t90\t3\t891352984\n184\t275\t5\t889913687\n262\t628\t2\t879962366\n279\t13\t3\t875249210\n181\t764\t1\t878962866\n21\t56\t5\t874951658\n298\t660\t3\t884182838\n98\t321\t3\t880498519\n145\t949\t4\t875272652\n164\t458\t4\t889402050\n232\t64\t4\t888549441\n184\t126\t3\t889907971\n269\t209\t4\t891448895\n26\t100\t5\t891386368\n57\t1093\t3\t883697352\n117\t338\t3\t886019636\n297\t97\t5\t875239871\n276\t969\t4\t874792839\n119\t1263\t3\t886177338\n345\t722\t3\t884993783\n318\t72\t4\t884498540\n246\t410\t1\t884923175\n158\t809\t3\t880134675\n178\t651\t4\t882826915\n254\t625\t3\t886473808\n21\t106\t2\t874951447\n225\t136\t5\t879540707\n41\t486\t4\t890687305\n234\t191\t4\t892334765\n78\t289\t4\t879633567\n90\t9\t4\t891385787\n313\t415\t2\t891030367\n180\t716\t1\t877128119\n
344\t462\t2\t884901156\n268\t810\t2\t875744388\n195\t227\t3\t888737346\n72\t603\t4\t880037417\n31\t135\t4\t881548030\n303\t1267\t3\t879484327\n64\t731\t3\t889739648\n62\t89\t5\t879374640\n151\t662\t4\t879525054\n189\t1372\t4\t893264429\n213\t79\t5\t878956263\n219\t13\t1\t889452455\n345\t708\t3\t884992786\n244\t712\t3\t880607925\n220\t288\t5\t881197887\n1\t6\t5\t887431973\n239\t923\t5\t889179033\n290\t202\t4\t880474590\n194\t523\t5\t879521596\n200\t831\t4\t891825565\n346\t213\t3\t874948173\n267\t214\t4\t878972342\n100\t340\t3\t891374707\n42\t521\t2\t881107989\n214\t45\t4\t891543952\n264\t320\t4\t886122261\n145\t1102\t1\t888398162\n10\t22\t5\t877888812\n299\t71\t3\t878192238\n313\t608\t4\t891017585\n209\t242\t4\t883589606\n221\t92\t4\t875245989\n293\t646\t3\t888906244\n184\t1012\t3\t889907448\n70\t260\t2\t884065247\n90\t30\t5\t891385843\n144\t1169\t4\t888106044\n1\t104\t1\t875241619\n21\t288\t3\t874950932\n6\t523\t5\t883601528\n248\t181\t4\t884535374\n168\t409\t4\t884287846\n234\t878\t2\t892336477\n44\t238\t4\t878347598\n296\t1073\t5\t884197330\n296\t96\t5\t884197287\n206\t288\t5\t888179565\n76\t100\t5\t875028391\n327\t50\t3\t887745574\n308\t811\t4\t887739212\n338\t168\t3\t879438225\n125\t238\t3\t892838322\n299\t1074\t3\t889502786\n85\t203\t5\t879455402\n77\t431\t5\t884733695\n18\t367\t4\t880130802\n293\t572\t2\t888907931\n286\t228\t3\t889651576\n246\t568\t4\t884922451\n174\t902\t3\t890168363\n268\t163\t2\t875743656\n291\t555\t1\t874868629\n151\t478\t5\t879524471\n269\t63\t1\t891450857\n11\t97\t4\t891904300\n83\t748\t2\t886534501\n83\t125\t5\t880306811\n145\t717\t3\t888398702\n56\t426\t4\t892683303\n339\t435\t4\t891032189\n35\t242\t2\t875459166\n18\t462\t3\t880130065\n194\t708\t3\t879528106\n14\t514\t4\t879119579\n345\t651\t4\t884992493\n279\t415\t3\t875314313\n12\t471\t5\t879959670\n126\t332\t2\t887853735\n16\t22\t5\t877721071\n116\t758\t1\t876452980\n220\t325\t1\t881198435\n151\t328\t3\t879523838\n280\t11\t5\t891700570\n10\t155\t4\t877889186\n73\t1149\t4\t888626299\
n180\t213\t5\t877128388\n13\t831\t3\t882398385\n181\t1291\t1\t878963167\n92\t132\t3\t875812211\n345\t202\t3\t884992218\n269\t482\t3\t891448823\n59\t241\t4\t888205574\n322\t508\t4\t887314073\n18\t25\t3\t880131591\n343\t135\t5\t876404568\n62\t856\t4\t879374866\n144\t528\t4\t888105846\n24\t662\t5\t875323440\n108\t282\t3\t879880055\n95\t518\t4\t888954076\n276\t383\t2\t877934828\n187\t427\t5\t879465597\n13\t315\t5\t884538466\n332\t98\t5\t888359903\n12\t172\t4\t879959088\n347\t22\t5\t881654005\n201\t8\t3\t884141438\n90\t855\t5\t891383752\n193\t1132\t3\t889127660\n99\t203\t4\t885680723\n122\t708\t5\t879270605\n15\t742\t2\t879456049\n222\t1239\t2\t881060762\n57\t56\t3\t883698646\n332\t595\t4\t887938574\n6\t498\t4\t883601053\n339\t58\t3\t891032379\n268\t154\t4\t875743563\n102\t202\t4\t892991269\n213\t474\t2\t878955635\n73\t196\t4\t888626177\n283\t70\t4\t879298206\n122\t212\t5\t879270567\n201\t454\t2\t884111830\n298\t652\t3\t884183099\n7\t10\t4\t891352864\n314\t29\t5\t877889234\n130\t1277\t4\t876250897\n201\t275\t4\t884113634\n304\t681\t2\t884967167\n130\t748\t4\t874953526\n118\t176\t5\t875384793\n182\t237\t3\t885613067\n13\t794\t4\t882399615\n242\t934\t5\t879741196\n69\t1134\t5\t882072998\n77\t153\t5\t884732685\n151\t196\t4\t879542670\n279\t202\t4\t875307587\n233\t958\t5\t875508372\n284\t682\t3\t885329322\n181\t301\t2\t878961303\n286\t419\t5\t889651990\n327\t14\t4\t887744167\n256\t195\t5\t882164406\n331\t1100\t2\t877196634\n102\t186\t4\t888803487\n119\t338\t1\t892565167\n234\t316\t4\t891033851\n295\t378\t4\t879518233\n14\t100\t5\t876965165\n184\t1006\t3\t889910078\n216\t721\t4\t880245213\n130\t148\t4\t876251127\n130\t229\t4\t875802173\n158\t100\t5\t880132401\n222\t972\t2\t881059758\n122\t792\t3\t879270459\n59\t14\t5\t888203234\n31\t705\t5\t881548110\n254\t501\t3\t886476281\n297\t475\t5\t874954426\n193\t328\t3\t889122993\n292\t28\t4\t881105734\n1\t49\t3\t878542478\n242\t1152\t5\t879741196\n267\t559\t3\t878972614\n82\t705\t3\t878769598\n292\t1039\t4\t881105778\n14\t455\t4\t880
929745\n308\t511\t5\t887737130\n236\t170\t5\t890116451\n334\t4\t3\t891548345\n130\t1215\t2\t876251389\n145\t203\t5\t875271948\n156\t205\t3\t888185735\n340\t435\t4\t884990546\n94\t385\t2\t891721975\n94\t109\t4\t891721974\n168\t988\t2\t884287145\n313\t151\t1\t891014982\n96\t645\t5\t884403020\n308\t109\t3\t887738894\n94\t393\t3\t891721684\n21\t995\t2\t874950932\n5\t234\t2\t875720692\n317\t350\t5\t891446819\n102\t62\t3\t888801812\n118\t156\t5\t875384946\n276\t786\t3\t874791694\n116\t259\t4\t876452186\n81\t93\t3\t876533657\n92\t595\t3\t886443534\n250\t111\t4\t878091915\n344\t215\t3\t884900818\n320\t148\t4\t884748708\n79\t124\t5\t891271870\n94\t313\t4\t891724925\n1\t206\t4\t876893205\n128\t966\t4\t879968071\n269\t664\t5\t891457067\n318\t795\t2\t884498766\n16\t940\t2\t877721236\n54\t276\t5\t880931595\n291\t1109\t4\t874834768\n298\t172\t4\t884124993\n234\t292\t4\t891033821\n106\t15\t3\t883876518\n114\t1104\t5\t881260352\n299\t137\t4\t877877535\n301\t771\t2\t882079256\n73\t7\t4\t888625956\n332\t44\t3\t888360342\n308\t1019\t4\t887738570\n187\t28\t4\t879465597\n94\t783\t2\t891723495\n15\t137\t4\t879455939\n286\t56\t2\t877531469\n222\t756\t4\t877564031\n18\t699\t5\t880130802\n68\t245\t3\t876973777\n134\t748\t5\t891732365\n334\t1207\t2\t891550121\n243\t223\t4\t879988262\n322\t479\t5\t887313892\n334\t481\t5\t891546206\n243\t13\t4\t879987362\n268\t16\t3\t875306691\n90\t241\t5\t891384611\n267\t484\t5\t878971542\n233\t48\t5\t877663184\n77\t4\t3\t884752721\n184\t92\t3\t889908657\n148\t596\t5\t877020297\n59\t664\t4\t888205614\n110\t734\t2\t886989566\n285\t628\t2\t890595636\n244\t101\t5\t880603288\n314\t366\t4\t877891354\n303\t654\t5\t879467328\n186\t333\t3\t891718820\n92\t785\t3\t875660304\n151\t486\t5\t879525002\n6\t188\t3\t883602462\n293\t125\t2\t888905086\n194\t51\t4\t879549793\n291\t552\t3\t874834963\n87\t790\t4\t879876885\n299\t50\t4\t877877775\n56\t1\t4\t892683248\n277\t9\t4\t879543336\n174\t823\t4\t886434376\n92\t1047\t1\t875644796\n177\t182\t5\t880130684\n41\t751\t4\t890686872
\n1\t76\t4\t878543176\n113\t262\t2\t875075983\n271\t657\t4\t885848559\n323\t7\t2\t878739355\n303\t373\t2\t879544276\n138\t238\t5\t879024382\n325\t98\t4\t891478079\n106\t64\t4\t881449830\n222\t155\t4\t878184113\n345\t367\t4\t884993069\n273\t328\t3\t891293048\n144\t1039\t4\t888105587\n157\t127\t5\t886890541\n211\t310\t3\t879461394\n56\t31\t4\t892679259\n168\t1016\t5\t884287615\n303\t129\t5\t879468547\n76\t258\t3\t875027206\n223\t249\t2\t891549876\n60\t28\t5\t883326155\n321\t507\t3\t879441336\n141\t932\t3\t884585128\n73\t286\t4\t888792192\n226\t480\t4\t883888853\n90\t713\t4\t891385466\n272\t172\t4\t879455043\n19\t313\t2\t885411792\n145\t286\t3\t875269755\n342\t764\t1\t875318762\n224\t322\t2\t888082013\n328\t1126\t3\t885046580\n268\t552\t2\t876514108\n179\t354\t4\t892151331\n308\t526\t3\t887739426\n267\t693\t4\t878972266\n345\t402\t4\t884993464\n6\t213\t4\t883602462\n12\t143\t5\t879959635\n210\t160\t4\t887737210\n290\t546\t2\t880475564\n293\t300\t2\t888904004\n58\t248\t4\t884794774\n303\t181\t5\t879468082\n298\t498\t5\t884182573\n347\t501\t4\t881654410\n236\t172\t3\t890116539\n102\t121\t3\t888801673\n290\t404\t3\t880475341\n92\t123\t2\t875640251\n151\t274\t5\t879542369\n6\t432\t4\t883601713\n256\t1289\t4\t882150552\n43\t216\t5\t875981128\n189\t632\t5\t893265624\n263\t514\t3\t891299387\n22\t117\t4\t878887869\n250\t44\t4\t878090199\n269\t188\t2\t891450675\n278\t98\t4\t891295360\n155\t294\t3\t879371194\n140\t334\t2\t879013684\n18\t190\t4\t880130155\n239\t198\t5\t889181047\n104\t342\t3\t888442437\n251\t258\t3\t886271496\n72\t64\t5\t880036549\n305\t338\t3\t886308252\n72\t566\t4\t880037277\n339\t226\t2\t891034744\n1\t72\t4\t878542678\n194\t511\t4\t879520991\n316\t549\t5\t880854049\n201\t150\t4\t884139983\n206\t1127\t4\t888180081\n48\t187\t5\t879434954\n279\t418\t3\t875733888\n94\t153\t5\t891725333\n217\t53\t1\t889069974\n94\t765\t3\t891723619\n250\t485\t4\t878092104\n79\t288\t3\t891272015\n230\t393\t3\t880485110\n128\t64\t5\t879966954\n311\t367\t3\t884365780\n76\t518\t3\t8754
98895\n62\t153\t4\t879374686\n6\t515\t4\t883599273\n215\t11\t2\t891436024\n145\t569\t4\t877343156\n213\t715\t5\t878955915\n94\t1199\t3\t891724798\n10\t294\t3\t879163524\n344\t181\t3\t884901047\n53\t100\t5\t879442537\n20\t678\t4\t879667684\n207\t294\t3\t875504669\n123\t285\t5\t879873830\n256\t1028\t4\t882151690\n174\t94\t2\t886515062\n5\t154\t3\t875636691\n308\t488\t4\t887736696\n222\t436\t4\t878184358\n200\t7\t4\t876042451\n65\t121\t4\t879217458\n7\t485\t5\t891351851\n295\t843\t4\t879517994\n63\t111\t3\t875747896\n7\t511\t5\t891351624\n198\t11\t4\t884207392\n295\t1503\t2\t879517082\n267\t28\t4\t878972524\n91\t99\t2\t891439386\n151\t321\t4\t879523900\n13\t302\t5\t881514811\n293\t1098\t2\t888905519\n42\t131\t2\t881108548\n328\t1135\t1\t885045528\n14\t519\t5\t890881335\n234\t142\t2\t892334852\n230\t154\t4\t880485159\n152\t98\t2\t882473974\n164\t313\t5\t889401284\n55\t144\t5\t878176398\n318\t1014\t2\t884494919\n3\t332\t1\t889237224\n290\t818\t3\t880732656\n125\t175\t2\t879455184\n243\t93\t2\t879987173\n21\t670\t3\t874951696\n268\t228\t4\t875309945\n7\t654\t5\t892135347\n82\t178\t4\t878769629\n318\t524\t3\t884496123\n89\t381\t4\t879459999\n301\t123\t4\t882074726\n193\t673\t4\t889126551\n1\t185\t4\t875072631\n323\t79\t4\t878739829\n21\t219\t5\t874951797\n197\t328\t4\t891409290\n184\t15\t3\t889907812\n313\t482\t5\t891016193\n109\t823\t3\t880572296\n152\t167\t5\t882477430\n297\t629\t3\t875410013\n167\t1147\t4\t892738384\n264\t524\t3\t886123596\n280\t571\t3\t891702338\n222\t577\t1\t878185137\n21\t591\t3\t874951382\n210\t501\t4\t887736998\n280\t230\t3\t891702153\n86\t286\t3\t879569555\n320\t174\t4\t884749255\n144\t50\t5\t888103929\n256\t97\t4\t882165103\n65\t427\t5\t879216734\n198\t429\t4\t884207691\n184\t217\t3\t889910394\n151\t709\t5\t879524778\n18\t530\t4\t880129877\n43\t724\t4\t875981390\n86\t319\t3\t879569555\n242\t305\t5\t879741340\n97\t28\t5\t884238778\n114\t195\t4\t881260861\n188\t69\t4\t875072009\n301\t230\t4\t882077033\n85\t241\t3\t882995340\n129\t313\t3\t883243934\
n106\t77\t4\t881451716\n261\t748\t3\t890454310\n188\t7\t5\t875073477\n13\t208\t5\t882140624\n342\t288\t5\t875318267\n299\t286\t4\t877618524\n311\t204\t5\t884365617\n125\t813\t1\t879455184\n276\t463\t4\t874792839\n13\t421\t2\t882140389\n141\t472\t5\t884585274\n222\t550\t3\t878184623\n191\t896\t3\t891562090\n144\t516\t2\t888105197\n216\t1047\t3\t881428365\n151\t213\t5\t879524849\n144\t845\t4\t888104191\n4\t356\t3\t892003459\n96\t64\t5\t884403336\n160\t79\t4\t876859413\n49\t369\t1\t888069329\n110\t332\t3\t886987287\n209\t351\t2\t883589546\n178\t1004\t4\t882827375\n344\t97\t3\t884901156\n11\t203\t4\t891905856\n241\t307\t4\t887249795\n239\t312\t2\t889181247\n276\t719\t3\t877935336\n18\t191\t4\t880130193\n141\t535\t5\t884585195\n18\t971\t4\t880131878\n162\t42\t3\t877636675\n342\t591\t3\t875318629\n278\t525\t5\t891295330\n102\t217\t2\t888803149\n16\t447\t5\t877724066\n343\t82\t5\t876405735\n109\t357\t2\t880572528\n301\t732\t4\t882077351\n303\t202\t5\t879468149\n250\t378\t4\t878092059\n234\t507\t4\t892334803\n217\t68\t3\t889069974\n87\t523\t5\t879875649\n95\t26\t3\t880571951\n245\t94\t2\t888513081\n95\t289\t2\t879191590\n334\t1008\t4\t891545126\n201\t896\t3\t884110766\n126\t323\t3\t887853568\n150\t475\t5\t878746764\n59\t871\t2\t888203865\n227\t9\t3\t879035431\n169\t603\t5\t891359171\n293\t553\t3\t888907453\n"
  },
  {
    "path": "tests/test_data/test/test.net",
    "content": "source_id:token\ttarget_id:token\n187\t100\n119\t40\n96\t119\n12\t52\n153\t131\n259\t232\n191\t307\n83\t150\n86\t255\n177\t4\n210\t192\n25\t323\n90\t298\n38\t47\n201\t283\n93\t63\n115\t190\n143\t293\n147\t265\n320\t68\n188\t273\n332\t321\n212\t203\n326\t98\n74\t270\n4\t333\n87\t261\n163\t207\n18\t175\n127\t77\n296\t179\n17\t101\n24\t30\n102\t288\n345\t269\n270\t188\n235\t297\n68\t303\n313\t43\n239\t109\n28\t76\n108\t227\n78\t218\n96\t30\n180\t301\n211\t12\n234\t34\n178\t53\n3\t243\n179\t73\n98\t92\n310\t116\n154\t271\n293\t3\n80\t297\n329\t254\n198\t134\n341\t238\n75\t185\n166\t64\n205\t142\n317\t163\n261\t91\n314\t322\n4\t33\n71\t73\n289\t182\n21\t12\n248\t49\n255\t32\n261\t170\n257\t314\n159\t118\n212\t221\n5\t177\n204\t57\n132\t120\n13\t275\n340\t252\n245\t251\n334\t15\n130\t103\n280\t187\n232\t153\n242\t341\n219\t123\n6\t290\n49\t289\n46\t347\n185\t231\n57\t254\n134\t248\n24\t234\n57\t207\n147\t295\n191\t274\n340\t54\n280\t150\n190\t4\n238\t198\n72\t123\n122\t178\n7\t334\n11\t90\n232\t78\n16\t77\n41\t190\n108\t101\n212\t66\n258\t18\n321\t250\n126\t280\n271\t85\n11\t176\n22\t69\n129\t159\n235\t193\n129\t88\n221\t315\n308\t329\n103\t83\n180\t43\n208\t87\n64\t75\n92\t36\n298\t151\n56\t103\n162\t268\n81\t252\n344\t115\n67\t282\n132\t17\n83\t307\n299\t82\n321\t227\n48\t13\n212\t57\n344\t280\n195\t81\n112\t122\n345\t346\n65\t18\n269\t3\n131\t123\n185\t311\n124\t330\n347\t297\n321\t251\n196\t135\n65\t122\n322\t197\n334\t160\n129\t64\n38\t17\n289\t256\n51\t286\n107\t260\n300\t101\n290\t281\n192\t170\n42\t2\n54\t260\n126\t1\n326\t294\n119\t14\n48\t172\n133\t191\n332\t157\n311\t99\n115\t123\n160\t201\n269\t267\n302\t184\n262\t168\n11\t80\n317\t155\n163\t310\n290\t32\n90\t239\n246\t129\n105\t189\n336\t8\n266\t100\n153\t311\n7\t20\n329\t94\n135\t38\n216\t331\n291\t89\n121\t253\n246\t82\n113\t325\n99\t313\n226\t188\n319\t60\n195\t280\n245\t319\n168\t291\n63\t127\n316\t280\n67\t69\n40\t143\n177\t18\n239\t253\n213\t304\n218\t315\n18\t312\n165\t6\n324\t232\n167\
t156\n295\t275\n42\t110\n25\t226\n114\t104\n172\t305\n66\t26\n51\t303\n247\t110\n245\t18\n335\t307\n325\t95\n289\t81\n166\t141\n4\t39\n171\t16\n79\t145\n187\t65\n102\t105\n234\t70\n321\t104\n62\t179\n171\t122\n225\t239\n283\t315\n121\t107\n154\t297\n309\t170\n3\t38\n78\t345\n164\t238\n92\t142\n339\t4\n251\t61\n223\t240\n167\t39\n223\t8\n61\t253\n220\t256\n139\t247\n199\t267\n344\t264\n336\t56\n110\t235\n75\t90\n93\t321\n345\t277\n119\t260\n214\t10\n15\t86\n102\t5\n34\t213\n223\t238\n243\t169\n107\t223\n106\t175\n218\t104\n28\t82\n267\t37\n331\t124\n16\t146\n186\t289\n226\t304\n109\t34\n124\t73\n165\t286\n260\t70\n94\t159\n151\t257\n151\t210\n263\t288\n276\t218\n222\t79\n48\t133\n67\t218\n282\t250\n127\t195\n222\t316\n19\t272\n238\t43\n71\t240\n208\t65\n219\t300\n338\t29\n75\t86\n86\t269\n91\t100\n273\t248\n202\t9\n190\t33\n84\t92\n124\t306\n284\t70\n281\t341\n247\t302\n306\t230\n320\t279\n319\t41\n91\t160\n323\t201\n305\t194\n41\t156\n220\t264\n296\t310\n183\t131\n232\t21\n239\t218\n302\t49\n250\t287\n200\t109\n96\t263\n225\t221\n123\t263\n329\t256\n136\t344\n338\t76\n233\t245\n347\t198\n99\t83\n240\t81\n238\t291\n78\t331\n56\t225\n21\t93\n24\t293\n28\t155\n245\t19\n225\t198\n90\t235\n191\t35\n146\t28\n303\t194\n203\t276\n189\t49\n265\t232\n204\t198\n283\t217\n306\t44\n133\t175\n256\t80\n345\t215\n97\t13\n25\t287\n104\t48\n20\t50\n155\t340\n202\t57\n343\t263\n135\t293\n152\t266\n232\t182\n86\t217\n73\t72\n143\t44\n299\t162\n277\t324\n154\t124\n307\t210\n210\t226\n323\t293\n55\t97\n52\t8\n32\t163\n312\t307\n271\t171\n204\t34\n64\t282\n311\t315\n174\t58\n56\t84\n217\t275\n86\t180\n342\t84\n340\t174\n13\t80\n100\t197\n189\t341\n5\t86\n9\t40\n210\t329\n260\t188\n236\t261\n94\t282\n105\t188\n141\t258\n132\t285\n17\t156\n70\t213\n204\t5\n344\t74\n34\t202\n347\t263\n121\t312\n146\t219\n31\t48\n53\t291\n213\t203\n125\t9\n279\t301\n247\t140\n217\t2\n298\t83\n315\t311\n165\t209\n169\t270\n259\t40\n174\t285\n21\t276\n58\t229\n165\t84\n48\t29\n222\t257\n38\t209\n336\t30\n53\t63
\n269\t243\n36\t324\n252\t138\n113\t155\n123\t290\n10\t253\n346\t15\n217\t36\n15\t102\n264\t149\n143\t122\n300\t178\n25\t220\n58\t231\n19\t250\n11\t147\n73\t186\n90\t109\n248\t104\n196\t55\n308\t298\n316\t7\n160\t208\n173\t323\n196\t176\n147\t168\n168\t293\n274\t328\n6\t133\n177\t226\n49\t336\n173\t7\n307\t1\n85\t128\n63\t241\n39\t323\n167\t173\n298\t253\n171\t42\n196\t326\n53\t329\n221\t307\n51\t194\n192\t231\n13\t23\n308\t117\n324\t84\n228\t13\n231\t156\n314\t286\n321\t314\n140\t30\n143\t288\n55\t340\n192\t264\n119\t220\n28\t226\n248\t309\n227\t122\n157\t227\n81\t178\n143\t329\n327\t170\n199\t308\n297\t27\n28\t101\n317\t179\n176\t293\n328\t265\n64\t256\n176\t316\n336\t315\n137\t189\n290\t209\n243\t232\n305\t233\n28\t26\n216\t306\n155\t65\n246\t166\n148\t218\n28\t343\n31\t148\n6\t38\n43\t267\n85\t30\n5\t212\n328\t157\n93\t65\n158\t179\n315\t256\n261\t210\n8\t234\n137\t163\n261\t9\n247\t231\n32\t266\n118\t191\n107\t34\n87\t153\n132\t81\n41\t235\n80\t103\n13\t167\n31\t166\n290\t32\n53\t125\n131\t163\n188\t82\n68\t38\n94\t325\n254\t129\n99\t63\n267\t164\n1\t46\n175\t36\n99\t72\n328\t80\n84\t221\n164\t80\n232\t264\n172\t70\n227\t346\n183\t44\n208\t184\n120\t317\n20\t154\n76\t315\n52\t200\n231\t46\n343\t241\n42\t284\n229\t345\n213\t75\n155\t135\n28\t261\n22\t255\n106\t169\n310\t347\n212\t275\n104\t314\n347\t181\n285\t72\n26\t68\n6\t331\n19\t227\n325\t108\n325\t110\n152\t226\n221\t160\n310\t226\n145\t57\n228\t299\n233\t139\n291\t1\n52\t173\n173\t33\n48\t339\n188\t27\n329\t117\n216\t73\n291\t325\n180\t22\n343\t95\n293\t172\n31\t146\n99\t213\n290\t10\n79\t212\n184\t96\n257\t27\n11\t323\n117\t95\n215\t118\n258\t23\n"
  },
  {
    "path": "tests/test_model.py",
    "content": "import os\nimport unittest\n\nfrom recbole_gnn.quick_start import objective_function\n\ncurrent_path = os.path.dirname(os.path.realpath(__file__))\nconfig_file_list = [os.path.join(current_path, 'test_model.yaml')]\n\n\ndef quick_test(config_dict):\n    objective_function(config_dict=config_dict, config_file_list=config_file_list, saved=False)\n\n\nclass TestGeneralRecommender(unittest.TestCase):\n    def test_bpr(self):\n        config_dict = {\n            'model': 'BPR',\n        }\n        quick_test(config_dict)\n\n    def test_neumf(self):\n        config_dict = {\n            'model': 'NeuMF',\n        }\n        quick_test(config_dict)\n\n    def test_ngcf(self):\n        config_dict = {\n            'model': 'NGCF',\n        }\n        quick_test(config_dict)\n\n    def test_lightgcn(self):\n        config_dict = {\n            'model': 'LightGCN',\n        }\n        quick_test(config_dict)\n\n    def test_sgl(self):\n        config_dict = {\n            'model': 'SGL',\n        }\n        quick_test(config_dict)\n\n    def test_hmlet(self):\n        config_dict = {\n            'model': 'HMLET',\n        }\n        quick_test(config_dict)\n\n    def test_ncl(self):\n        config_dict = {\n            'model': 'NCL',\n            'num_clusters': 10\n        }\n        quick_test(config_dict)\n\n    def test_simgcl(self):\n        config_dict = {\n            'model': 'SimGCL'\n        }\n        quick_test(config_dict)\n\n    def test_xsimgcl(self):\n        config_dict = {\n            'model': 'XSimGCL'\n        }\n        quick_test(config_dict)\n\n    def test_lightgcl(self):\n        config_dict = {\n            'model': 'LightGCL'\n        }\n        quick_test(config_dict)\n\n    def test_directau(self):\n        config_dict = {\n            'model': 'DirectAU'\n        }\n        quick_test(config_dict)\n\n    def test_ssl4rec(self):\n        config_dict = {\n            'model': 'SSL4REC'\n        }\n        
quick_test(config_dict)\n\n\nclass TestSequentialRecommender(unittest.TestCase):\n    def test_gru4rec(self):\n        config_dict = {\n            'model': 'GRU4Rec',\n        }\n        quick_test(config_dict)\n\n    def test_narm(self):\n        config_dict = {\n            'model': 'NARM',\n        }\n        quick_test(config_dict)\n\n    def test_sasrec(self):\n        config_dict = {\n            'model': 'SASRec',\n        }\n        quick_test(config_dict)\n\n    def test_srgnn(self):\n        config_dict = {\n            'model': 'SRGNN',\n        }\n        quick_test(config_dict)\n\n    def test_srgnn_uni100(self):\n        config_dict = {\n            'model': 'SRGNN',\n            'eval_args': {\n                'split': {'LS': \"valid_and_test\"},\n                'mode': 'uni100',\n                'order': 'TO'\n            }\n        }\n        quick_test(config_dict)\n\n    def test_gcsan(self):\n        config_dict = {\n            'model': 'GCSAN',\n        }\n        quick_test(config_dict)\n\n    def test_niser(self):\n        config_dict = {\n            'model': 'NISER',\n        }\n        quick_test(config_dict)\n\n    def test_lessr(self):\n        config_dict = {\n            'model': 'LESSR'\n        }\n        quick_test(config_dict)\n\n    def test_tagnn(self):\n        config_dict = {\n            'model': 'TAGNN'\n        }\n        quick_test(config_dict)\n\n    def test_gcegnn(self):\n        config_dict = {\n            'model': 'GCEGNN'\n        }\n        quick_test(config_dict)\n\n    def test_sgnnhn(self):\n        config_dict = {\n            'model': 'SGNNHN'\n        }\n        quick_test(config_dict)\n\n\nclass TestSocialRecommender(unittest.TestCase):\n    def test_diffnet(self):\n        config_dict = {\n            'model': 'DiffNet',\n        }\n        quick_test(config_dict)\n\n    def test_mhcn(self):\n        config_dict = {\n            'model': 'MHCN',\n        }\n        quick_test(config_dict)\n\n    def 
test_sept(self):\n        config_dict = {\n            'model': 'SEPT',\n        }\n        quick_test(config_dict)\n\n\nif __name__ == '__main__':\n    unittest.main()\n"
  },
  {
    "path": "tests/test_model.yaml",
    "content": "dataset: test\nepochs: 1\nstate: ERROR\ndata_path: tests/test_data/\n\n# Atomic File Format\nfield_separator: \"\\t\"\nseq_separator: \" \"\n\n# Common Features\nUSER_ID_FIELD: user_id\nITEM_ID_FIELD: item_id\nRATING_FIELD: rating\nTIME_FIELD: timestamp\nseq_len: ~\n# Label for Point-wise DataLoader\nLABEL_FIELD: label\n# NegSample Prefix for Pair-wise DataLoader\nNEG_PREFIX: neg_\n# Sequential Model Needed\nITEM_LIST_LENGTH_FIELD: item_length\nLIST_SUFFIX: _list\nMAX_ITEM_LIST_LENGTH: 50\nPOSITION_FIELD: position_id\n# social network config\nNET_SOURCE_ID_FIELD: source_id\nNET_TARGET_ID_FIELD: target_id\nfilter_net_by_inter: True\nundirected_net: True\n\n# Selectively Loading\nload_col:\n    inter: [user_id, item_id, rating, timestamp]\n    net: [source_id, target_id]\n\nunload_col: ~\n\n# Preprocessing\nalias_of_user_id: ~\nalias_of_item_id: ~\nalias_of_entity_id: ~\nalias_of_relation_id: ~\npreload_weight: ~\nnormalize_field: ~\nnormalize_all: True\n"
  }
]