[
  {
    "path": ".gitignore",
    "content": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# C extensions\n*.so\n\n# Distribution / packaging\n.Python\nbuild/\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nlib/\nlib64/\nparts/\nsdist/\nvar/\nwheels/\npip-wheel-metadata/\nshare/python-wheels/\n*.egg-info/\n.installed.cfg\n*.egg\nMANIFEST\n\n# PyInstaller\n#  Usually these files are written by a python script from a template\n#  before PyInstaller builds the exe, so as to inject date/other infos into it.\n*.manifest\n*.spec\n\n# Installer logs\npip-log.txt\npip-delete-this-directory.txt\n\n# Unit test / coverage reports\nhtmlcov/\n.tox/\n.nox/\n.coverage\n.coverage.*\n.cache\nnosetests.xml\ncoverage.xml\n*.cover\n*.py,cover\n.hypothesis/\n.pytest_cache/\n\n# Translations\n*.mo\n*.pot\n\n# Django stuff:\n*.log\nlocal_settings.py\ndb.sqlite3\ndb.sqlite3-journal\n\n# Flask stuff:\ninstance/\n.webassets-cache\n\n# Scrapy stuff:\n.scrapy\n\n# Sphinx documentation\ndocs/_build/\n\n# PyBuilder\ntarget/\n\n# Jupyter Notebook\n.ipynb_checkpoints\n\n# IPython\nprofile_default/\nipython_config.py\n\n# pyenv\n.python-version\n\n# pipenv\n#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.\n#   However, in case of collaboration, if having platform-specific dependencies or dependencies\n#   having no cross-platform support, pipenv may install dependencies that don't work, or not\n#   install all needed dependencies.\n#Pipfile.lock\n\n# PEP 582; used by e.g. github.com/David-OConnor/pyflow\n__pypackages__/\n\n# Celery stuff\ncelerybeat-schedule\ncelerybeat.pid\n\n# SageMath parsed files\n*.sage.py\n\n# Environments\n.env\n.venv\nenv/\nvenv/\nENV/\nenv.bak/\nvenv.bak/\n\n# Spyder project settings\n.spyderproject\n.spyproject\n\n# Rope project settings\n.ropeproject\n\n# mkdocs documentation\n/site\n\n# mypy\n.mypy_cache/\n.dmypy.json\ndmypy.json\n\n# Pyre type checker\n.pyre/\n\n# PyCharm files\n.idea\n\n# Log files\nlogs/\n"
  },
  {
    "path": "README.md",
    "content": "# LexGLUE: A Benchmark Dataset for Legal Language Understanding in English :balance_scale: :trophy: :student: :woman_judge:\n\n![LexGLUE Graphic](https://repository-images.githubusercontent.com/411072132/5c49b313-ab36-4391-b785-40d9478d0f73)\n\n## Dataset Summary\n\nInspired by the recent widespread use of the GLUE multi-task benchmark NLP dataset ([Wang et al., 2018](https://aclanthology.org/W18-5446/)), the subsequent more difficult SuperGLUE ([Wang et al., 2109](https://openreview.net/forum?id=rJ4km2R5t7)), other previous multi-task NLP benchmarks ([Conneau and Kiela,2018](https://aclanthology.org/L18-1269/); [McCann et al., 2018](https://arxiv.org/abs/1806.08730)), and similar initiatives in other domains ([Peng et al., 2019](https://arxiv.org/abs/1906.05474)), we introduce LexGLUE, a benchmark dataset to evaluate the performance of NLP methods in legal tasks. LexGLUE is based on seven existing legal NLP datasets, selected using criteria largely from SuperGLUE.\n\nWe anticipate that more datasets, tasks, and languages will be added in later versions of LexGLUE. As more legal NLP datasets become available, we also plan to favor datasets checked thoroughly for validity (scores reflecting real-life performance), annotation quality, statistical power,and social bias ([Bowman and Dahl, 2021](https://aclanthology.org/2021.naacl-main.385/)).\n\nAs in GLUE and SuperGLUE ([Wang et al., 2109](https://openreview.net/forum?id=rJ4km2R5t7)) one of our goals is to push towards generic (or *foundation*) models that can cope with multiple NLP tasks, in our case legal NLP tasks,possibly with limited task-specific fine-tuning. An-other goal is to provide a convenient and informative entry point for NLP researchers and practitioners wishing to explore or develop methods for legalNLP. 
Having these goals in mind, the datasets we include in LexGLUE and the tasks they address have been simplified in several ways, discussed below, to make it easier for newcomers and generic models to address all tasks. We provide Python APIs integrated with Hugging Face (Wolf et al., 2020; Lhoest et al., 2021) to easily import all the datasets and to experiment with and evaluate models on them.\n\nBy unifying and facilitating access to a set of law-related datasets and tasks, we hope to attract not only more NLP experts, but also more interdisciplinary researchers (e.g., law doctoral students willing to take NLP courses). More broadly, we hope LexGLUE will speed up the adoption and transparent evaluation of new legal NLP methods and approaches in the commercial sector too. Indeed, there have been many commercial press releases in the legal-tech industry, but almost no independent evaluation of the claimed performance of various machine learning and NLP-based offerings. A standard publicly available benchmark would also allay concerns of undue influence in predictive models, including the use of metadata which the relevant law expressly disregards.\n\nIf you participate, use the LexGLUE benchmark, or our experimentation library, please cite:\n\n[*Ilias Chalkidis, Abhik Jana, Dirk Hartung, Michael Bommarito, Ion Androutsopoulos, Daniel Martin Katz, and Nikolaos Aletras.*\n*LexGLUE: A Benchmark Dataset for Legal Language Understanding in English.*\n*2022. In the Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. 
Dublin, Ireland.*](https://aclanthology.org/2022.acl-long.297/)\n```\n@inproceedings{chalkidis-etal-2022-lexglue,\n    title = \"{L}ex{GLUE}: A Benchmark Dataset for Legal Language Understanding in {E}nglish\",\n    author = \"Chalkidis, Ilias  and\n      Jana, Abhik  and\n      Hartung, Dirk  and\n      Bommarito, Michael  and\n      Androutsopoulos, Ion  and\n      Katz, Daniel Martin  and\n      Aletras, Nikolaos\",\n    booktitle = \"Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)\",\n    month = may,\n    year = \"2022\",\n    address = \"Dublin, Ireland\",\n    publisher = \"Association for Computational Linguistics\",\n    url = \"https://aclanthology.org/2022.acl-long.297\",\n    pages = \"4310--4330\",\n}\n```\n\n\n## Supported Tasks\n\n\n<table>\n        <tr><td><b>Dataset</b></td><td><b>Source</b></td><td><b>Sub-domain</b></td><td><b>Task Type</b></td><td><b>Train/Dev/Test Instances</b></td><td><b>Classes</b></td></tr>\n<tr><td>ECtHR (Task A)</td><td> <a href=\"https://aclanthology.org/P19-1424/\">Chalkidis et al. (2019)</a> </td><td>ECHR</td><td>Multi-label classification</td><td>9,000/1,000/1,000</td><td>10+1</td></tr>\n<tr><td>ECtHR (Task B)</td><td> <a href=\"https://aclanthology.org/2021.naacl-main.22/\">Chalkidis et al. (2021a)</a> </td><td>ECHR</td><td>Multi-label classification </td><td>9,000/1,000/1,000</td><td>10+1</td></tr>\n<tr><td>SCOTUS</td><td> <a href=\"http://scdb.wustl.edu\">Spaeth et al. (2020)</a></td><td>US Law</td><td>Multi-class classification</td><td>5,000/1,400/1,400</td><td>14</td></tr>\n<tr><td>EUR-LEX</td><td> <a href=\"https://arxiv.org/abs/2109.00904\">Chalkidis et al. (2021b)</a></td><td>EU Law</td><td>Multi-label classification</td><td>55,000/5,000/5,000</td><td>100</td></tr>\n<tr><td>LEDGAR</td><td> <a href=\"https://aclanthology.org/2020.lrec-1.155/\">Tuggener et al. 
(2020)</a></td><td>Contracts</td><td>Multi-class classification</td><td>60,000/10,000/10,000</td><td>100</td></tr>\n<tr><td>UNFAIR-ToS</td><td><a href=\"https://arxiv.org/abs/1805.01217\"> Lippi et al. (2019)</a></td><td>Contracts</td><td>Multi-label classification</td><td>5,532/2,275/1,607</td><td>8+1</td></tr>\n<tr><td>CaseHOLD</td><td><a href=\"https://arxiv.org/abs/2104.08671\">Zheng et al. (2021)</a></td><td>US Law</td><td>Multiple choice QA</td><td>45,000/3,900/3,900</td><td>n/a</td></tr>\n</table>\n\n### ECtHR (Task A)\n\nThe European Court of Human Rights (ECtHR) hears allegations that a state has breached human rights provisions of the European Convention on Human Rights (ECHR). For each case, the dataset provides a list of factual paragraphs (facts) from the case description. Each case is mapped to articles of the ECHR that were violated (if any).\n\n### ECtHR (Task B)\n\nThe European Court of Human Rights (ECtHR) hears allegations that a state has breached human rights provisions of the European Convention on Human Rights (ECHR). For each case, the dataset provides a list of factual paragraphs (facts) from the case description. Each case is mapped to articles of the ECHR that were allegedly violated (considered by the court).\n\n### SCOTUS\n\nThe US Supreme Court (SCOTUS) is the highest federal court in the United States of America and generally hears only the most controversial or otherwise complex cases that have not been sufficiently resolved by lower courts. This is a single-label multi-class classification task, where, given a document (court opinion), the task is to predict the relevant issue area. The 14 issue areas cluster 278 issues whose focus is on the subject matter of the controversy (dispute).\n\n### EUR-LEX\n\nEuropean Union (EU) legislation is published in the EUR-Lex portal. All EU laws are annotated by the EU's Publications Office with multiple concepts from the EuroVoc thesaurus, a multilingual thesaurus maintained by the Publications Office. 
The current version of EuroVoc contains more than 7k concepts referring to various activities of the EU and its Member States (e.g., economics, health-care, trade). Given a document, the task is to predict its EuroVoc labels (concepts).\n\n### LEDGAR\n\nThe LEDGAR dataset targets contract provision (paragraph) classification. The contract provisions come from contracts obtained from US Securities and Exchange Commission (SEC) filings, which are publicly available from EDGAR. Each label represents the single main topic (theme) of the corresponding contract provision.\n\n### UNFAIR-ToS\n\nThe UNFAIR-ToS dataset contains 50 Terms of Service (ToS) from online platforms (e.g., YouTube, eBay, Facebook). The dataset has been annotated at the sentence level with 8 types of unfair contractual terms (sentences), i.e., terms that potentially violate user rights under European consumer law.\n\n### CaseHOLD\n\nThe CaseHOLD (Case Holdings on Legal Decisions) dataset includes multiple choice questions about holdings of US court cases from the Harvard Law Library case law corpus. Holdings are short summaries of legal rulings that accompany referenced decisions relevant to the present case. The input consists of an excerpt (or prompt) from a court decision, containing a reference to a particular case, while the holding statement is masked out. The model must identify the correct (masked) holding statement from a selection of five choices.\n\n## Leaderboard\n\n### Averaged LexGLUE Scores\n\nWe report the arithmetic, harmonic, and geometric mean across tasks following [Shavrina and Malykh (2021)](https://openreview.net/pdf?id=PPGfoNJnLKd). We acknowledge that the use of scores aggregated over tasks has been criticized in general NLU benchmarks (e.g., GLUE), as tasks differ in training set size, complexity, and evaluation metrics. 
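For concreteness, the three aggregation schemes can be sketched in a few lines of plain Python; the per-task scores below are hypothetical placeholders, not actual leaderboard values:

```python
from statistics import harmonic_mean
import math

def aggregate_scores(per_task_f1):
    """Return the arithmetic, harmonic, and geometric mean of per-task F1 scores."""
    n = len(per_task_f1)
    arithmetic = sum(per_task_f1) / n
    harmonic = harmonic_mean(per_task_f1)
    geometric = math.prod(per_task_f1) ** (1.0 / n)
    return arithmetic, harmonic, geometric

# Hypothetical per-task micro-F1 scores (illustration only)
a, h, g = aggregate_scores([70.0, 80.0, 90.0])
```

Since harmonic mean ≤ geometric mean ≤ arithmetic mean, the harmonic mean penalizes a model most strongly for a weak score on any single task.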
We believe that the use of a standard common metric (F1) across tasks and averaging with harmonic mean alleviate this issue.\n\n<table>\n<tr><td><b>Averaging</b></td><td><b>Arithmetic</b></td><td><b>Harmonic</b></td><td><b>Geometric</b></td></tr>\n<tr><td><b>Model</b></td><td>μ-F1  / m-F1 </td><td>μ-F1  / m-F1 </td><td>μ-F1  / m-F1 </td></tr>\n<tr><td>BERT</td><td> 77.8 /  69.5 </td><td> 76.7 /  68.2 </td><td> 77.2 /  68.8 </td></tr>\n<tr><td>RoBERTa</td><td> 77.8 /  68.7 </td><td> 76.8 /  67.5 </td><td> 77.3 /  68.1 </td></tr>\n<tr><td>RoBERTa (Large)</td><td> 79.4 /  70.8 </td><td> 78.4 /  69.1 </td><td> 78.9 /  70.0 </td></tr>\n<tr><td>DeBERTa</td><td> 78.3 /  69.7 </td><td> 77.4 /  68.5 </td><td> 77.8 /  69.1 </td></tr>\n<tr><td>Longformer</td><td> 78.5 /  70.5 </td><td> 77.5 /  69.5 </td><td> 78.0 /  70.0 </td></tr>\n<tr><td>BigBird</td><td> 78.2 /  69.6 </td><td> 77.2 /  68.5 </td><td> 77.7 /  69.0 </td></tr>\n<tr><td>Legal-BERT</td><td> <b>79.8</b> /  <b>72.0</b> </td><td> <b>78.9</b> /  <b>70.8</b> </td><td> <b>79.3</b> /  <b>71.4</b> </td></tr>\n<tr><td>CaseLaw-BERT</td><td> 79.4 /  70.9 </td><td> 78.5 /  69.7 </td><td> 78.9 /  70.3 </td></tr>\n</table>\n\n### Task-wise LexGLUE scores\n\n#### Large-sized (:older_man:) Models [1]\n\n<table>\n        <tr><td><b>Dataset</b></td><td><b>ECtHR A</b></td><td><b>ECtHR B</b></td><td><b>SCOTUS</b></td><td><b>EUR-LEX</b></td><td><b>LEDGAR</b></td><td><b>UNFAIR-ToS</b></td><td><b>CaseHOLD</b></td></tr>\n<tr><td><b>Model</b></td><td>μ-F1  / m-F1 </td><td>μ-F1  / m-F1 </td><td>μ-F1  / m-F1 </td><td>μ-F1  / m-F1 </td><td>μ-F1  / m-F1 </td><td>μ-F1  / m-F1</td><td>μ-F1 / m-F1  </td></tr>\n<tr><td>RoBERTa</td> <td> 73.8 /  67.6 </td> <td> 79.8 /  71.6 </td> <td> 75.5 /  66.3 </td> <td> 67.9 /  50.3 </td> <td> 88.6 /  83.6 </td> <td> 95.8 /  81.6 </td> <td> 74.4 </td> </tr>\n</table>\n\n[1] Results reported by [Chalkidis et al. (2021)](https://arxiv.org/abs/2110.00976). 
All large-sized transformer-based models follow the same specifications (L=24, H=1024, A=16).\n\n#### Medium-sized (:man:) Models [2]\n\n<table>\n        <tr><td><b>Dataset</b></td><td><b>ECtHR A</b></td><td><b>ECtHR B</b></td><td><b>SCOTUS</b></td><td><b>EUR-LEX</b></td><td><b>LEDGAR</b></td><td><b>UNFAIR-ToS</b></td><td><b>CaseHOLD</b></td></tr>\n<tr><td><b>Model</b></td><td>μ-F1  / m-F1 </td><td>μ-F1  / m-F1 </td><td>μ-F1  / m-F1 </td><td>μ-F1  / m-F1 </td><td>μ-F1  / m-F1 </td><td>μ-F1  / m-F1</td><td>μ-F1 / m-F1  </td></tr>\n<tr><td>TFIDF+SVM</td><td> 62.6 / 48.9  </td><td>73.0 / 63.8 </td><td> 74.0 / 64.4 </td><td>63.4  / 47.9 </td><td>87.0   / 81.4 </td><td>94.7  / 75.0</td><td>22.4   </td></tr>\n<tr><td>BERT</td> <td> <b>71.2</b> /  63.6 </td> <td> 79.7 /  73.4 </td> <td> 68.3 /  58.3 </td> <td> 71.4 /  57.2 </td> <td> 87.6 /  81.8 </td> <td> 95.6 /  81.3 </td> <td> 70.8 </td> </tr>\n<tr><td>RoBERTa</td> <td> 69.2 /  59.0 </td> <td> 77.3 /  68.9 </td> <td> 71.6 /  62.0 </td> <td> 71.9 /  <b>57.9</b> </td> <td> 87.9 /  82.3 </td> <td> 95.2 /  79.2 </td> <td> 71.4 </td> </tr>\n<tr><td>DeBERTa</td> <td> 70.0 /  60.8 </td> <td> 78.8 /  71.0 </td> <td> 71.1 /  62.7 </td> <td> <b>72.1</b> /  57.4 </td> <td> 88.2 /  <b>83.1</b> </td> <td> 95.5 /  80.3 </td> <td> 72.6 </td> </tr>\n<tr><td>Longformer</td> <td> 69.9 /  <b>64.7</b> </td> <td> 79.4 /  71.7 </td> <td> 72.9 /  64.0 </td> <td> 71.6 /  57.7 </td> <td> 88.2 /  83.0 </td> <td> 95.5 /  80.9 </td> <td> 71.9 </td> </tr>\n<tr><td>BigBird</td> <td> 70.0 /  62.9 </td> <td> 78.8 /  70.9 </td> <td> 72.8 /  62.0 </td> <td> 71.5 /  56.8 </td> <td> 87.8 /  82.6 </td> <td> 95.7 /  81.3 </td> <td> 70.8 </td> </tr>\n<tr><td>Legal-BERT</td> <td> 70.0 /  64.0 </td> <td> <b>80.4</b> /  <b>74.7</b> </td> <td> 76.4 /  <b>66.5</b> </td> <td> <b>72.1</b> /  57.4 </td> <td> 88.2 /  83.0 </td> <td> <b>96.0</b> /  <b>83.0</b> </td> <td> 75.3 </td> </tr>\n<tr><td>CaseLaw-BERT</td> <td> 69.8 /  62.9 </td> <td> 78.8 /  70.3 </td> <td> <b>76.6</b> /  65.9 </td> 
<td> 70.7 /  56.6 </td> <td> <b>88.3</b> /  83.0 </td> <td> <b>96.0</b> /  82.3 </td> <td> <b>75.4</b> </td> </tr>\n\n</table>\n\n[2] Results reported by [Chalkidis et al. (2021)](https://arxiv.org/abs/2110.00976). All medium-sized transformer-based models follow the same specifications (L=12, H=768, A=12).\n\n#### Small-sized (:baby:) Models [3]\n\n<table>\n<tr><td><b>Dataset</b></td><td><b>ECtHR A</b></td><td><b>ECtHR B</b></td><td><b>SCOTUS</b></td><td><b>EUR-LEX</b></td><td><b>LEDGAR</b></td><td><b>UNFAIR-ToS</b></td><td><b>CaseHOLD</b></td></tr>\n        <tr><td><b>Model</b></td><td>μ-F1  / m-F1 </td><td>μ-F1  / m-F1 </td><td>μ-F1  / m-F1 </td><td>μ-F1  / m-F1 </td><td>μ-F1  / m-F1 </td><td>μ-F1  / m-F1</td><td>μ-F1 / m-F1  </td></tr>\n<tr><td>BERT-Tiny</td><td>n/a</td><td>n/a</td><td>\t62.8 / 40.9</td><td>\t65.5 / 27.5</td><td>\t83.9 / 74.7</td><td>\t94.3 / 11.1</td><td>\t68.3</td></tr>\n<tr><td>Mini-LM (v2)</td><td>n/a</td><td>n/a</td><td>\t60.8 / 45.5</td><td>\t62.2 / 35.6</td><td>\t86.7 / 79.6</td><td>\t93.9 / 13.2</td><td>\t71.3</td></tr>\n<tr><td>Distil-BERT</td><td>n/a</td><td>n/a</td><td>\t67.0 / 55.9</td><td>\t66.0 / 51.5</td><td>\t87.5 / <b>81.5</b></td><td>\t<b>97.1</b> / <b>79.4</b></td><td>\t68.6</td></tr>\n<tr><td>Legal-BERT</td><td>n/a</td><td>n/a</td><td><b>75.6</b> / <b>68.5</b></td><td> <b>73.4</b> / <b>54.4</b></td><td><b>87.8</b> / 81.4</td><td><b>97.1</b> / 76.3</td><td><b>74.7</b></td></tr>\n\n</table>\n\n[3] Results reported by Atreya Shankar ([@atreyasha](https://github.com/atreyasha)) :hugs: :partying_face:. More details (e.g., validation scores, log files) are provided [here](https://github.com/coastalcph/lex-glue/discussions/categories/new-results). The small-sized models' specifications are:\n\n* BERT-Tiny (L=2, H=128, A=2) by [Turc et al. (2020)](https://arxiv.org/abs/1908.08962)\n* Mini-LM (v2) (L=12, H=384, A=12) by [Wang et al. (2020)](https://arxiv.org/abs/2002.10957)\n* Distil-BERT (L=6, H=768, A=12) by [Sanh et al. 
(2019)](https://arxiv.org/abs/1910.01108)\n* Legal-BERT (L=6, H=512, A=8) by [Chalkidis et al. (2020)](https://arxiv.org/abs/2010.02559)\n\n\n## Frequently Asked Questions (FAQ)\n\n### Where are the datasets?\n\nWe provide access to LexGLUE on [Hugging Face Datasets](https://huggingface.co/datasets) (Lhoest et al., 2021) at https://huggingface.co/datasets/coastalcph/lex_glue.\n\nFor example, to load the SCOTUS [Spaeth et al. (2020)](http://scdb.wustl.edu) dataset, first install the `datasets` Python library and then make the following call:\n\n```python\nfrom datasets import load_dataset\ndataset = load_dataset(\"coastalcph/lex_glue\", \"scotus\")\n```\n\n### How to run experiments?\n\nTo make it easier to reproduce the results for the already examined models, or for future ones, we release our code in this repository. The folder `/experiments` contains Python scripts, relying on the [Hugging Face Transformers](https://huggingface.co/transformers/) library, to run and evaluate any Transformer-based model (e.g., BERT, RoBERTa, Legal-BERT, and their hierarchical variants, as well as Longformer and BigBird). We also provide bash scripts in folder `/scripts` to replicate the experiments for each dataset with 5 random seeds, as we did for the results reported in the original leaderboard.\n\nMake sure that all required packages are installed:\n\n```\ntorch>=1.9.0\ntransformers>=4.9.0\nscikit-learn>=0.24.1\ntqdm>=4.61.1\nnumpy>=1.20.1\ndatasets>=1.12.1\nnltk>=3.5\nscipy>=1.6.3\n```\n\nFor example, to replicate the results for RoBERTa ([Liu et al., 2019](https://arxiv.org/abs/1907.11692)) on UNFAIR-ToS [Lippi et al. 
(2019)](https://arxiv.org/abs/1805.01217), you have to configure the relevant bash script (`run_unfair_tos.sh`):\n\n```\n> nano run_unfair_tos.sh\nGPU_NUMBER=1\nMODEL_NAME='roberta-base'\nLOWER_CASE='False'\nBATCH_SIZE=8\nACCUMULATION_STEPS=1\nTASK='unfair_tos'\n```\n\nand then run it:\n\n```\n> sh run_unfair_tos.sh\n```\n\n**Note:** The bash scripts use two HF arguments (`--fp16`, `--fp16_full_eval`), which only work when correctly configured NVIDIA GPUs are available on the machine (server or cluster) and `torch` is set up to use these compute resources.\n\nIf you don't have such resources, simply delete these two arguments from the scripts to train models with standard `fp32` precision. If you do have GPUs, make sure the NVIDIA CUDA drivers are installed correctly and that `torch` is installed so that it can detect them (see https://pytorch.org/get-started/locally/ for the appropriate steps).\n\n### I don't have the resources to run all these Muppets. What can I do?\n\nYou can use Google Colab with GPU acceleration for free online (https://colab.research.google.com).\n- Set up a new notebook and `git clone` the project.\n- Navigate to Edit → Notebook Settings and select GPU from the Hardware Accelerator drop-down. You will likely be assigned an NVIDIA Tesla K80 (12GB).\n- You will also have to decrease the batch size and increase the accumulation steps for hierarchical models.\n\nThis is also an interesting open problem (efficient NLP); please consider using lighter (smaller/faster) pre-trained models, like:\n- The smaller [Legal-BERT](https://huggingface.co/nlpaueb/legal-bert-small-uncased) of [Chalkidis et al. (2020)](https://arxiv.org/abs/2010.02559),\n- Smaller [BERT](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) models of [Turc et al. 
(2020)](https://arxiv.org/abs/1908.08962),\n- [Mini-LM](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) of [Wang et al. (2020)](https://arxiv.org/abs/2002.10957),\n\nor non-transformer-based neural models, like:\n\n- LSTM-based [(Hochreiter and Schmidhuber, 1997)](https://dl.acm.org/doi/10.1162/neco.1997.9.8.1735) models, like the Hierarchical Attention Network (HAN) of [Yang et al. (2016)](https://aclanthology.org/N16-1174/),\n- Graph-based models, like the Graph Attention Network (GAT) of [Veličković et al. (2017)](https://arxiv.org/abs/1710.10903)\n\nor even non-neural models, like:\n\n- Bag-of-Words (BoW) models using TF-IDF representations (e.g., SVM, Random Forest),\n- The eXtreme Gradient Boosting (XGBoost) of [Chen and Guestrin (2016)](http://arxiv.org/abs/1603.02754),\n\nand report back the results. We are curious!\n\n### How to participate?\n\nWe are currently still lacking some technical infrastructure, e.g., an integrated submission environment comprising automated evaluation and an automatically updated leaderboard. We plan to develop the necessary publicly available web infrastructure for LexGLUE in the near future.\n\nIn the meantime, we ask participants to re-use and expand our code to submit new results, if possible, and open a new discussion (submission) in our repository (https://github.com/coastalcph/lex-glue/discussions/new?category=new-results) presenting their results, providing the auto-generated result logs and the relevant publication (or pre-print), if available, accompanied by a pull request with the code changes needed to reproduce their experiments. Upon reviewing your results, we'll update the public leaderboard accordingly.\n\n### I want to re-load fine-tuned HierBERT models. 
How can I do this?\n\nYou can re-load fine-tuned HierBERT models following our example Python script [\"Re-load HierBERT models\"](https://github.com/coastalcph/lex-glue/blob/main/utils/load_hierbert.py).\n\n### I still have open questions...\n\nPlease post your question in the [Discussions](https://github.com/coastalcph/lex-glue/discussions) section or contact the corresponding author via e-mail.\n\n## Credits\n\nThanks to [@JamesLYC88](https://github.com/JamesLYC88) and [@danigoju](https://github.com/danigoju) for digging up :bug:s!\n"
  },
  {
    "path": "experiments/case_hold.py",
    "content": "#!/usr/bin/env python\n# coding=utf-8\n\"\"\" Finetuning models on CaseHOLD (e.g. Bert, RoBERTa, LEGAL-BERT).\"\"\"\n\nimport logging\nimport os\nfrom dataclasses import dataclass, field\nfrom typing import Optional\n\nimport numpy as np\nimport random\nimport shutil\nimport glob\n\nimport transformers\nfrom transformers import (\n\tAutoConfig,\n\tAutoModelForMultipleChoice,\n\tAutoTokenizer,\n\tEvalPrediction,\n\tHfArgumentParser,\n\tTrainer,\n\tTrainingArguments,\n\tset_seed,\n)\nfrom transformers.trainer_utils import is_main_process\nfrom transformers import EarlyStoppingCallback\nfrom casehold_helpers import MultipleChoiceDataset, Split\nfrom sklearn.metrics import f1_score\nfrom models.deberta import DebertaForMultipleChoice\n\n\nlogger = logging.getLogger(__name__)\n\n\n@dataclass\nclass ModelArguments:\n\t\"\"\"\n\tArguments pertaining to which model/config/tokenizer we are going to fine-tune from.\n\t\"\"\"\n\n\tmodel_name_or_path: str = field(\n\t\tmetadata={\"help\": \"Path to pretrained model or model identifier from huggingface.co/models\"}\n\t)\n\tconfig_name: Optional[str] = field(\n\t\tdefault=None, metadata={\"help\": \"Pretrained config name or path if not the same as model_name\"}\n\t)\n\ttokenizer_name: Optional[str] = field(\n\t\tdefault=None, metadata={\"help\": \"Pretrained tokenizer name or path if not the same as model_name\"}\n\t)\n\tcache_dir: Optional[str] = field(\n\t\tdefault=None,\n\t\tmetadata={\"help\": \"Where do you want to store the pretrained models downloaded from huggingface.co\"},\n\t)\n\n\n@dataclass\nclass DataTrainingArguments:\n\t\"\"\"\n\tArguments pertaining to what data we are going to input our model for training and eval.\n\t\"\"\"\n\n\ttask_name: str = field(default=\"case_hold\", metadata={\"help\": \"The name of the task to train on\"})\n\tmax_seq_length: int = field(\n\t\tdefault=256,\n\t\tmetadata={\n\t\t\t\"help\": \"The maximum total input sequence length after tokenization. 
Sequences longer \"\n\t\t\t\"than this will be truncated, sequences shorter will be padded.\"\n\t\t},\n\t)\n\tpad_to_max_length: bool = field(\n\t\tdefault=True,\n\t\tmetadata={\n\t\t\t\"help\": \"Whether to pad all samples to `max_seq_length`. \"\n\t\t\t\"If False, will pad the samples dynamically when batching to the maximum length in the batch.\"\n\t\t},\n\t)\n\tmax_train_samples: Optional[int] = field(\n\t\tdefault=None,\n\t\tmetadata={\n\t\t\t\"help\": \"For debugging purposes or quicker training, truncate the number of training examples to this \"\n\t\t\t\"value if set.\"\n\t\t},\n\t)\n\tmax_eval_samples: Optional[int] = field(\n\t\tdefault=None,\n\t\tmetadata={\n\t\t\t\"help\": \"For debugging purposes or quicker training, truncate the number of evaluation examples to this \"\n\t\t\t\"value if set.\"\n\t\t},\n\t)\n\tmax_predict_samples: Optional[int] = field(\n\t\tdefault=None,\n\t\tmetadata={\n\t\t\t\"help\": \"For debugging purposes or quicker training, truncate the number of prediction examples to this \"\n\t\t\t\"value if set.\"\n\t\t},\n\t)\n\toverwrite_cache: bool = field(\n\t\tdefault=False, metadata={\"help\": \"Overwrite the cached training and evaluation sets\"}\n\t)\n\n\ndef main():\n\t# See all possible arguments in src/transformers/training_args.py\n\t# or by passing the --help flag to this script.\n\t# We now keep distinct sets of args, for a cleaner separation of concerns.\n\n\tparser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))\n\t# Add custom flag for computing pre-train loss\n\t# (argparse's `type=bool` treats any non-empty string as True, so use a store_true flag instead)\n\tparser.add_argument(\"--ptl\", action=\"store_true\", default=False)\n\tmodel_args, data_args, training_args, custom_args = parser.parse_args_into_dataclasses()\n\n\tif (\n\t\tos.path.exists(training_args.output_dir)\n\t\tand os.listdir(training_args.output_dir)\n\t\tand training_args.do_train\n\t\tand not training_args.overwrite_output_dir\n\t):\n\t\traise ValueError(\n\t\t\tf\"Output directory ({training_args.output_dir}) already exists 
and is not empty. Use --overwrite_output_dir to overcome.\"\n\t\t)\n\n\t# Setup logging\n\tlogging.basicConfig(\n\t\tformat=\"%(asctime)s - %(levelname)s - %(name)s -   %(message)s\",\n\t\tdatefmt=\"%m/%d/%Y %H:%M:%S\",\n\t\tlevel=logging.INFO if training_args.local_rank in [-1, 0] else logging.WARN,\n\t)\n\tlogger.warning(\n\t\t\"Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s\",\n\t\ttraining_args.local_rank,\n\t\ttraining_args.device,\n\t\ttraining_args.n_gpu,\n\t\tbool(training_args.local_rank != -1),\n\t\ttraining_args.fp16,\n\t)\n\t# Set the verbosity to info of the Transformers logger (on main process only):\n\tif is_main_process(training_args.local_rank):\n\t\ttransformers.utils.logging.set_verbosity_info()\n\t\ttransformers.utils.logging.enable_default_handler()\n\t\ttransformers.utils.logging.enable_explicit_format()\n\tlogger.info(\"Training/evaluation parameters %s\", training_args)\n\n\t# Set seed\n\tset_seed(training_args.seed)\n\n\t# Load pretrained model and tokenizer\n\tconfig = AutoConfig.from_pretrained(\n\t\tmodel_args.config_name if model_args.config_name else model_args.model_name_or_path,\n\t\tnum_labels=5,\n\t\tfinetuning_task=data_args.task_name,\n\t\tcache_dir=model_args.cache_dir,\n\t)\n\n\tif config.model_type == 'big_bird':\n\t\tconfig.attention_type = 'original_full'\n\telif config.model_type == 'longformer':\n\t\tconfig.attention_window = [data_args.max_seq_length] * config.num_hidden_layers\n\n\ttokenizer = AutoTokenizer.from_pretrained(\n\t\tmodel_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,\n\t\tcache_dir=model_args.cache_dir,\n\t\t# Default fast tokenizer is buggy on CaseHOLD task, switch to legacy tokenizer\n\t\tuse_fast=False,\n\t)\n\n\tif config.model_type != 'deberta':\n\t\tmodel = AutoModelForMultipleChoice.from_pretrained(\n\t\t\tmodel_args.model_name_or_path,\n\t\t\tfrom_tf=bool(\".ckpt\" in 
model_args.model_name_or_path),\n\t\t\tconfig=config,\n\t\t\tcache_dir=model_args.cache_dir,\n\t\t)\n\telse:\n\t\tmodel = DebertaForMultipleChoice.from_pretrained(\n\t\t\tmodel_args.model_name_or_path,\n\t\t\tfrom_tf=bool(\".ckpt\" in model_args.model_name_or_path),\n\t\t\tconfig=config,\n\t\t\tcache_dir=model_args.cache_dir,\n\t\t)\n\n\ttrain_dataset = None\n\teval_dataset = None\n\n\t# If do_train passed, train_dataset by default loads train split from file named train.csv in data directory\n\tif training_args.do_train:\n\t\ttrain_dataset = \\\n\t\t\tMultipleChoiceDataset(\n\t\t\t\ttokenizer=tokenizer,\n\t\t\t\ttask=data_args.task_name,\n\t\t\t\tmax_seq_length=data_args.max_seq_length,\n\t\t\t\toverwrite_cache=data_args.overwrite_cache,\n\t\t\t\tmode=Split.train,\n\t\t\t)\n\n\t# If do_eval or do_predict passed, eval_dataset by default loads dev split from file named dev.csv in data directory\n\tif training_args.do_eval:\n\t\teval_dataset = \\\n\t\t\tMultipleChoiceDataset(\n\t\t\t\ttokenizer=tokenizer,\n\t\t\t\ttask=data_args.task_name,\n\t\t\t\tmax_seq_length=data_args.max_seq_length,\n\t\t\t\toverwrite_cache=data_args.overwrite_cache,\n\t\t\t\tmode=Split.dev,\n\t\t\t)\n\n\tif training_args.do_predict:\n\t\tpredict_dataset = \\\n\t\t\tMultipleChoiceDataset(\n\t\t\t\ttokenizer=tokenizer,\n\t\t\t\ttask=data_args.task_name,\n\t\t\t\tmax_seq_length=data_args.max_seq_length,\n\t\t\t\toverwrite_cache=data_args.overwrite_cache,\n\t\t\t\tmode=Split.test,\n\t\t\t)\n\n\tif training_args.do_train:\n\t\tif data_args.max_train_samples is not None:\n\t\t\ttrain_dataset = train_dataset[:data_args.max_train_samples]\n\t\t# Log a few random samples from the training set:\n\t\tfor index in random.sample(range(len(train_dataset)), 3):\n\t\t\tlogger.info(f\"Sample {index} of the training set: {train_dataset[index]}.\")\n\n\tif training_args.do_eval:\n\t\tif data_args.max_eval_samples is not None:\n\t\t\teval_dataset = eval_dataset[:data_args.max_eval_samples]\n\n\tif 
training_args.do_predict:\n\t\tif data_args.max_predict_samples is not None:\n\t\t\tpredict_dataset = predict_dataset[:data_args.max_predict_samples]\n\n\t# Define custom compute_metrics function, returns macro F1 metric for CaseHOLD task\n\tdef compute_metrics(p: EvalPrediction):\n\t\tpreds = np.argmax(p.predictions, axis=1)\n\t\t# Compute macro and micro F1 for 5-class CaseHOLD task\n\t\tmacro_f1 = f1_score(y_true=p.label_ids, y_pred=preds, average='macro', zero_division=0)\n\t\tmicro_f1 = f1_score(y_true=p.label_ids, y_pred=preds, average='micro', zero_division=0)\n\t\treturn {'macro-f1': macro_f1, 'micro-f1': micro_f1}\n\n\t# Initialize our Trainer\n\ttrainer = Trainer(\n\t\tmodel=model,\n\t\targs=training_args,\n\t\ttrain_dataset=train_dataset,\n\t\teval_dataset=eval_dataset,\n\t\tcompute_metrics=compute_metrics,\n\t\tcallbacks=[EarlyStoppingCallback(early_stopping_patience=3)]\n\t)\n\n\t# Training\n\tif training_args.do_train:\n\t\ttrainer.train(\n\t\t\tmodel_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None\n\t\t)\n\t\ttrainer.save_model()\n\t\t# Re-save the tokenizer for model sharing\n\t\tif trainer.is_world_process_zero():\n\t\t\ttokenizer.save_pretrained(training_args.output_dir)\n\n\t# Evaluation on eval_dataset\n\tif training_args.do_eval:\n\t\tlogger.info(\"*** Evaluate ***\")\n\t\tmetrics = trainer.evaluate(eval_dataset=eval_dataset)\n\n\t\tmax_eval_samples = data_args.max_eval_samples if data_args.max_eval_samples is not None else len(eval_dataset)\n\t\tmetrics[\"eval_samples\"] = min(max_eval_samples, len(eval_dataset))\n\n\t\ttrainer.log_metrics(\"eval\", metrics)\n\t\ttrainer.save_metrics(\"eval\", metrics)\n\n\t# Predict on eval_dataset\n\tif training_args.do_predict:\n\t\tlogger.info(\"*** Predict ***\")\n\n\t\tpredictions, labels, metrics = trainer.predict(predict_dataset, metric_key_prefix=\"predict\")\n\n\t\tmax_predict_samples = (\n\t\t\tdata_args.max_predict_samples if 
data_args.max_predict_samples is not None else len(predict_dataset)\n\t\t)\n\t\tmetrics[\"predict_samples\"] = min(max_predict_samples, len(predict_dataset))\n\n\t\ttrainer.log_metrics(\"predict\", metrics)\n\t\ttrainer.save_metrics(\"predict\", metrics)\n\n\t\toutput_predict_file = os.path.join(training_args.output_dir, \"test_predictions.csv\")\n\t\tif trainer.is_world_process_zero():\n\t\t\twith open(output_predict_file, \"w\") as writer:\n\t\t\t\tfor index, pred_list in enumerate(predictions):\n\t\t\t\t\tpred_line = '\\t'.join([f'{pred:.5f}' for pred in pred_list])\n\t\t\t\t\twriter.write(f\"{index}\\t{pred_line}\\n\")\n\n\t# Clean up checkpoints\n\tcheckpoints = [filepath for filepath in glob.glob(f'{training_args.output_dir}/*/') if '/checkpoint' in filepath]\n\tfor checkpoint in checkpoints:\n\t\tshutil.rmtree(checkpoint)\n\n\ndef _mp_fn(index):\n\t# For xla_spawn (TPUs)\n\tmain()\n\n\nif __name__ == \"__main__\":\n\tmain()\n"
  },
  {
    "path": "experiments/casehold_helpers.py",
    "content": "import logging\nimport os\nfrom dataclasses import dataclass\nfrom enum import Enum\nfrom typing import List, Optional\n\nimport tqdm\nimport re\n\nfrom filelock import FileLock\nfrom transformers import PreTrainedTokenizer, is_tf_available, is_torch_available\nimport datasets\n\nlogger = logging.getLogger(__name__)\n\n\n@dataclass(frozen=True)\nclass InputFeatures:\n    \"\"\"\n    A single set of features of data.\n    Property names are the same names as the corresponding inputs to a model.\n    \"\"\"\n\n    input_ids: List[List[int]]\n    attention_mask: Optional[List[List[int]]]\n    token_type_ids: Optional[List[List[int]]]\n    label: Optional[int]\n\n\nclass Split(Enum):\n    train = \"train\"\n    dev = \"dev\"\n    test = \"test\"\n\n\nif is_torch_available():\n    import torch\n    from torch.utils.data.dataset import Dataset\n\n    class MultipleChoiceDataset(Dataset):\n        \"\"\"\n        PyTorch multiple choice dataset class\n        \"\"\"\n\n        features: List[InputFeatures]\n\n        def __init__(\n            self,\n            tokenizer: PreTrainedTokenizer,\n            task: str,\n            max_seq_length: Optional[int] = None,\n            overwrite_cache=False,\n            mode: Split = Split.train,\n        ):\n            dataset = datasets.load_dataset('lex_glue', task)\n            tokenizer_name = re.sub('[^a-z]+', ' ', tokenizer.name_or_path).title().replace(' ', '')\n            cached_features_file = os.path.join(\n                '.cache',\n                task,\n                \"cached_{}_{}_{}_{}\".format(\n                    mode.value,\n                    tokenizer_name,\n                    str(max_seq_length),\n                    task,\n                ),\n            )\n\n            # Make sure only the first process in distributed training processes the dataset,\n            # and the others will use the cache.\n            lock_path = cached_features_file + \".lock\"\n            if not 
os.path.exists(os.path.join('.cache', task)):\n                os.makedirs(os.path.join('.cache', task), exist_ok=True)\n            with FileLock(lock_path):\n\n                if os.path.exists(cached_features_file) and not overwrite_cache:\n                    logger.info(f\"Loading features from cached file {cached_features_file}\")\n                    self.features = torch.load(cached_features_file)\n                else:\n                    logger.info(f\"Creating features from dataset file at {task}\")\n                    if mode == Split.dev:\n                        examples = dataset['validation']\n                    elif mode == Split.test:\n                        examples = dataset['test']\n                    elif mode == Split.train:\n                        examples = dataset['train']\n                    logger.info(\"%s examples: %s\", mode.name.title(), len(examples))\n                    self.features = convert_examples_to_features(\n                        examples,\n                        max_seq_length,\n                        tokenizer,\n                    )\n                    logger.info(\"Saving features into cached file %s\", cached_features_file)\n                    torch.save(self.features, cached_features_file)\n\n        def __len__(self):\n            return len(self.features)\n\n        def __getitem__(self, i) -> InputFeatures:\n            return self.features[i]\n\n\nif is_tf_available():\n    import tensorflow as tf\n\n    class TFMultipleChoiceDataset:\n        \"\"\"\n        TensorFlow multiple choice dataset class\n        \"\"\"\n\n        features: List[InputFeatures]\n\n        def __init__(\n            self,\n            tokenizer: PreTrainedTokenizer,\n            task: str,\n            max_seq_length: Optional[int] = 256,\n            overwrite_cache=False,\n            mode: Split = Split.train,\n        ):\n            dataset = 
datasets.load_dataset('lex_glue', task)\n\n            logger.info(f\"Creating features from dataset file at {task}\")\n            if mode == Split.dev:\n                examples = dataset['validation']\n            elif mode == Split.test:\n                examples = dataset['test']\n            else:\n                examples = dataset['train']\n            logger.info(f\"{mode.name.title()} examples: %s\", len(examples))\n\n            self.features = convert_examples_to_features(\n                examples,\n                max_seq_length,\n                tokenizer,\n            )\n\n            def gen():\n                for (ex_index, ex) in tqdm.tqdm(enumerate(self.features), desc=\"yield features\"):\n                    if ex_index % 10000 == 0:\n                        logger.info(\"Writing example %d of %d\" % (ex_index, len(examples)))\n\n                    yield (\n                        {\n                            \"input_ids\": ex.input_ids,\n                            \"attention_mask\": ex.attention_mask,\n                            \"token_type_ids\": ex.token_type_ids,\n                        },\n                        ex.label,\n                    )\n\n            self.dataset = tf.data.Dataset.from_generator(\n                gen,\n                (\n                    {\n                        \"input_ids\": tf.int32,\n                        \"attention_mask\": tf.int32,\n                        \"token_type_ids\": tf.int32,\n                    },\n                    tf.int64,\n                ),\n                (\n                    {\n                        \"input_ids\": tf.TensorShape([None, None]),\n                        \"attention_mask\": tf.TensorShape([None, None]),\n                        \"token_type_ids\": tf.TensorShape([None, None]),\n                    },\n                    tf.TensorShape([]),\n                ),\n            )\n\n        def get_dataset(self):\n            self.dataset = 
self.dataset.apply(tf.data.experimental.assert_cardinality(len(self.features)))\n\n            return self.dataset\n\n        def __len__(self):\n            return len(self.features)\n\n        def __getitem__(self, i) -> InputFeatures:\n            return self.features[i]\n\n\ndef convert_examples_to_features(\n    examples: datasets.Dataset,\n    max_length: int,\n    tokenizer: PreTrainedTokenizer,\n) -> List[InputFeatures]:\n    \"\"\"\n    Loads a data file into a list of `InputFeatures`\n    \"\"\"\n    features = []\n    for (ex_index, example) in tqdm.tqdm(enumerate(examples), desc=\"convert examples to features\"):\n        if ex_index % 10000 == 0:\n            logger.info(\"Writing example %d of %d\" % (ex_index, len(examples)))\n        choices_inputs = []\n        for ending_idx, ending in enumerate(example['endings']):\n            context = example['context']\n            inputs = tokenizer(\n                context,\n                ending,\n                add_special_tokens=True,\n                max_length=max_length,\n                padding=\"max_length\",\n                truncation=True,\n            )\n\n            choices_inputs.append(inputs)\n        \n        label = example['label']\n\n        input_ids = [x[\"input_ids\"] for x in choices_inputs]\n        attention_mask = (\n            [x[\"attention_mask\"] for x in choices_inputs] if \"attention_mask\" in choices_inputs[0] else None\n        )\n        token_type_ids = (\n            [x[\"token_type_ids\"] for x in choices_inputs] if \"token_type_ids\" in choices_inputs[0] else None\n        )\n\n        features.append(\n            InputFeatures(\n                input_ids=input_ids,\n                attention_mask=attention_mask,\n                token_type_ids=token_type_ids,\n                label=label,\n            )\n        )\n\n    for f in features[:2]:\n        logger.info(\"*** Example ***\")\n        logger.info(\"feature: %s\" % f)\n\n    return features\n"
  },
  {
    "path": "experiments/ecthr.py",
    "content": "#!/usr/bin/env python\n# coding=utf-8\n\"\"\" Finetuning models on the ECtHR dataset (e.g. Bert, RoBERTa, LEGAL-BERT).\"\"\"\n\nimport logging\nimport os\nimport random\nimport sys\nfrom dataclasses import dataclass, field\nfrom typing import Optional\n\nimport datasets\nimport numpy as np\nfrom datasets import load_dataset\nfrom sklearn.metrics import f1_score\nfrom trainer import MultilabelTrainer\nfrom scipy.special import expit\nfrom torch import nn\nimport glob\nimport shutil\n\nimport transformers\nfrom transformers import (\n    AutoConfig,\n    AutoModelForSequenceClassification,\n    AutoTokenizer,\n    DataCollatorWithPadding,\n    EvalPrediction,\n    HfArgumentParser,\n    TrainingArguments,\n    default_data_collator,\n    set_seed,\n    EarlyStoppingCallback,\n)\nfrom transformers.trainer_utils import get_last_checkpoint\nfrom transformers.utils import check_min_version\nfrom transformers.utils.versions import require_version\nfrom models.hierbert import HierarchicalBert\nfrom models.deberta import DebertaForSequenceClassification\n\n\n# Will error if the minimal version of Transformers is not installed. Remove at your own risks.\ncheck_min_version(\"4.9.0\")\n\nrequire_version(\"datasets>=1.8.0\", \"To fix: pip install -r examples/pytorch/text-classification/requirements.txt\")\n\nlogger = logging.getLogger(__name__)\n\n\n@dataclass\nclass DataTrainingArguments:\n    \"\"\"\n    Arguments pertaining to what data we are going to input our model for training and eval.\n\n    Using `HfArgumentParser` we can turn this class\n    into argparse arguments to be able to specify them on\n    the command line.\n    \"\"\"\n\n    max_seq_length: Optional[int] = field(\n        default=4096,\n        metadata={\n            \"help\": \"The maximum total input sequence length after tokenization. 
Sequences longer \"\n            \"than this will be truncated, sequences shorter will be padded.\"\n        },\n    )\n    max_segments: Optional[int] = field(\n        default=64,\n        metadata={\n            \"help\": \"The maximum number of segments (paragraphs) to be considered. Sequences longer \"\n                    \"than this will be truncated, sequences shorter will be padded.\"\n        },\n    )\n    max_seg_length: Optional[int] = field(\n        default=128,\n        metadata={\n            \"help\": \"The maximum segment (paragraph) length to be considered. Segments longer \"\n                    \"than this will be truncated, sequences shorter will be padded.\"\n        },\n    )\n    overwrite_cache: bool = field(\n        default=False, metadata={\"help\": \"Overwrite the cached preprocessed datasets or not.\"}\n    )\n    pad_to_max_length: bool = field(\n        default=True,\n        metadata={\n            \"help\": \"Whether to pad all samples to `max_seq_length`. 
\"\n            \"If False, will pad the samples dynamically when batching to the maximum length in the batch.\"\n        },\n    )\n    max_train_samples: Optional[int] = field(\n        default=None,\n        metadata={\n            \"help\": \"For debugging purposes or quicker training, truncate the number of training examples to this \"\n            \"value if set.\"\n        },\n    )\n    max_eval_samples: Optional[int] = field(\n        default=None,\n        metadata={\n            \"help\": \"For debugging purposes or quicker training, truncate the number of evaluation examples to this \"\n            \"value if set.\"\n        },\n    )\n    max_predict_samples: Optional[int] = field(\n        default=None,\n        metadata={\n            \"help\": \"For debugging purposes or quicker training, truncate the number of prediction examples to this \"\n            \"value if set.\"\n        },\n    )\n    task: Optional[str] = field(\n        default='ecthr_a',\n        metadata={\n            \"help\": \"Define downstream task\"\n        },\n    )\n    server_ip: Optional[str] = field(default=None, metadata={\"help\": \"For distant debugging.\"})\n    server_port: Optional[str] = field(default=None, metadata={\"help\": \"For distant debugging.\"})\n\n\n@dataclass\nclass ModelArguments:\n    \"\"\"\n    Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.\n    \"\"\"\n\n    model_name_or_path: str = field(\n        default=None, metadata={\"help\": \"Path to pretrained model or model identifier from huggingface.co/models\"}\n    )\n    hierarchical: bool = field(\n        default=True, metadata={\"help\": \"Whether to use a hierarchical variant or not\"}\n    )\n    config_name: Optional[str] = field(\n        default=None, metadata={\"help\": \"Pretrained config name or path if not the same as model_name\"}\n    )\n    tokenizer_name: Optional[str] = field(\n        default=None, metadata={\"help\": \"Pretrained tokenizer name 
or path if not the same as model_name\"}\n    )\n    cache_dir: Optional[str] = field(\n        default=None,\n        metadata={\"help\": \"Where do you want to store the pretrained models downloaded from huggingface.co\"},\n    )\n    do_lower_case: Optional[bool] = field(\n        default=True,\n        metadata={\"help\": \"arg to indicate if tokenizer should do lower case in AutoTokenizer.from_pretrained()\"},\n    )\n    use_fast_tokenizer: bool = field(\n        default=True,\n        metadata={\"help\": \"Whether to use one of the fast tokenizer (backed by the tokenizers library) or not.\"},\n    )\n    model_revision: str = field(\n        default=\"main\",\n        metadata={\"help\": \"The specific model version to use (can be a branch name, tag name or commit id).\"},\n    )\n    use_auth_token: bool = field(\n        default=False,\n        metadata={\n            \"help\": \"Will use the token generated when running `transformers-cli login` (necessary to use this script \"\n            \"with private models).\"\n        },\n    )\n\n\ndef main():\n    # See all possible arguments in src/transformers/training_args.py\n    # or by passing the --help flag to this script.\n    # We now keep distinct sets of args, for a cleaner separation of concerns.\n\n    parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))\n    model_args, data_args, training_args = parser.parse_args_into_dataclasses()\n\n    # Fix boolean parameter\n    if model_args.do_lower_case == 'False' or not model_args.do_lower_case:\n        model_args.do_lower_case = False\n    else:\n        model_args.do_lower_case = True\n\n    if model_args.hierarchical == 'False' or not model_args.hierarchical:\n        model_args.hierarchical = False\n    else:\n        model_args.hierarchical = True\n\n    # Setup distant debugging if needed\n    if data_args.server_ip and data_args.server_port:\n        # Distant debugging - see 
https://code.visualstudio.com/docs/python/debugging#_attach-to-a-local-script\n        import ptvsd\n\n        print(\"Waiting for debugger attach\")\n        ptvsd.enable_attach(address=(data_args.server_ip, data_args.server_port), redirect_output=True)\n        ptvsd.wait_for_attach()\n\n    # Setup logging\n    logging.basicConfig(\n        format=\"%(asctime)s - %(levelname)s - %(name)s - %(message)s\",\n        datefmt=\"%m/%d/%Y %H:%M:%S\",\n        handlers=[logging.StreamHandler(sys.stdout)],\n    )\n\n    log_level = training_args.get_process_log_level()\n    logger.setLevel(log_level)\n    datasets.utils.logging.set_verbosity(log_level)\n    transformers.utils.logging.set_verbosity(log_level)\n    transformers.utils.logging.enable_default_handler()\n    transformers.utils.logging.enable_explicit_format()\n\n    # Log on each process the small summary:\n    logger.warning(\n        f\"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}, \"\n        + f\"distributed training: {bool(training_args.local_rank != -1)}, 16-bit training: {training_args.fp16}\"\n    )\n    logger.info(f\"Training/evaluation parameters {training_args}\")\n\n    # Detecting last checkpoint.\n    last_checkpoint = None\n    if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:\n        last_checkpoint = get_last_checkpoint(training_args.output_dir)\n        if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:\n            raise ValueError(\n                f\"Output directory ({training_args.output_dir}) already exists and is not empty. \"\n                \"Use --overwrite_output_dir to overcome.\"\n            )\n        elif last_checkpoint is not None:\n            logger.info(\n                f\"Checkpoint detected, resuming training at {last_checkpoint}. 
To avoid this behavior, change \"\n                \"the `--output_dir` or add `--overwrite_output_dir` to train from scratch.\"\n            )\n\n    # Set seed before initializing model.\n    set_seed(training_args.seed)\n\n    # In distributed training, the load_dataset function guarantees that only one local process can concurrently\n    # download the dataset.\n    # Downloading and loading eurlex dataset from the hub.\n    if training_args.do_train:\n        train_dataset = load_dataset(\"lex_glue\", name=data_args.task, split=\"train\", data_dir='data', cache_dir=model_args.cache_dir)\n\n    if training_args.do_eval:\n        eval_dataset = load_dataset(\"lex_glue\", name=data_args.task, split=\"validation\", data_dir='data', cache_dir=model_args.cache_dir)\n\n    if training_args.do_predict:\n        predict_dataset = load_dataset(\"lex_glue\", name=data_args.task, split=\"test\", data_dir='data', cache_dir=model_args.cache_dir)\n\n    # Labels\n    label_list = list(range(10))\n    num_labels = len(label_list)\n\n    # Load pretrained model and tokenizer\n    # In distributed training, the .from_pretrained methods guarantee that only one local process can concurrently\n    # download model & vocab.\n    config = AutoConfig.from_pretrained(\n        model_args.config_name if model_args.config_name else model_args.model_name_or_path,\n        num_labels=num_labels,\n        finetuning_task=f\"{data_args.task}\",\n        cache_dir=model_args.cache_dir,\n        revision=model_args.model_revision,\n        use_auth_token=True if model_args.use_auth_token else None,\n    )\n    tokenizer = AutoTokenizer.from_pretrained(\n        model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,\n        do_lower_case=model_args.do_lower_case,\n        cache_dir=model_args.cache_dir,\n        use_fast=model_args.use_fast_tokenizer,\n        revision=model_args.model_revision,\n        use_auth_token=True if model_args.use_auth_token else 
None,\n    )\n    if config.model_type == 'deberta' and model_args.hierarchical:\n        model = DebertaForSequenceClassification.from_pretrained(\n            model_args.model_name_or_path,\n            from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\n            config=config,\n            cache_dir=model_args.cache_dir,\n            revision=model_args.model_revision,\n            use_auth_token=True if model_args.use_auth_token else None,\n        )\n    else:\n        model = AutoModelForSequenceClassification.from_pretrained(\n            model_args.model_name_or_path,\n            from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\n            config=config,\n            cache_dir=model_args.cache_dir,\n            revision=model_args.model_revision,\n            use_auth_token=True if model_args.use_auth_token else None,\n        )\n\n    if model_args.hierarchical:\n        # Hack the classifier encoder to use hierarchical BERT\n        if config.model_type in ['bert', 'deberta']:\n            if config.model_type == 'bert':\n                segment_encoder = model.bert\n            else:\n                segment_encoder = model.deberta\n            model_encoder = HierarchicalBert(encoder=segment_encoder,\n                                             max_segments=data_args.max_segments,\n                                             max_segment_length=data_args.max_seg_length)\n            if config.model_type == 'bert':\n                model.bert = model_encoder\n            elif config.model_type == 'deberta':\n                model.deberta = model_encoder\n            else:\n                raise NotImplementedError(f\"{config.model_type} is not supported yet!\")\n        elif config.model_type == 'roberta':\n            model_encoder = HierarchicalBert(encoder=model.roberta, max_segments=data_args.max_segments,\n                                             max_segment_length=data_args.max_seg_length)\n            model.roberta = model_encoder\n            # Build a new classification layer as well\n            dense = nn.Linear(config.hidden_size, config.hidden_size)\n            dense.load_state_dict(model.classifier.dense.state_dict())  # load weights\n            dropout = nn.Dropout(config.hidden_dropout_prob).to(model.device)\n            out_proj = nn.Linear(config.hidden_size, config.num_labels).to(model.device)\n            out_proj.load_state_dict(model.classifier.out_proj.state_dict())  # load weights\n            model.classifier = nn.Sequential(dense, dropout, out_proj).to(model.device)\n        elif config.model_type in ['longformer', 'big_bird']:\n            pass\n        else:\n            raise NotImplementedError(f\"{config.model_type} is not supported yet!\")\n\n    # Preprocessing the datasets\n    # Padding strategy\n    if data_args.pad_to_max_length:\n        padding = \"max_length\"\n    else:\n        # We will pad later, dynamically at batch creation, to the max sequence length in each batch\n        padding = False\n\n    def preprocess_function(examples):\n        # Tokenize the texts\n        if model_args.hierarchical:\n            case_template = [[0] * data_args.max_seg_length]\n            if config.model_type == 'roberta':\n                batch = {'input_ids': [], 'attention_mask': []}\n                for case in examples['text']:\n                    case_encodings = tokenizer(case[:data_args.max_segments], padding=padding,\n                                               max_length=data_args.max_seg_length, truncation=True)\n                    batch['input_ids'].append(case_encodings['input_ids'] + case_template * (\n                                data_args.max_segments - len(case_encodings['input_ids'])))\n                    batch['attention_mask'].append(case_encodings['attention_mask'] + case_template * (\n                                data_args.max_segments - len(case_encodings['attention_mask'])))\n            else:\n                batch = 
{'input_ids': [], 'attention_mask': [], 'token_type_ids': []}\n                for case in examples['text']:\n                    case_encodings = tokenizer(case[:data_args.max_segments], padding=padding,\n                                               max_length=data_args.max_seg_length, truncation=True)\n                    batch['input_ids'].append(case_encodings['input_ids'] + case_template * (\n                            data_args.max_segments - len(case_encodings['input_ids'])))\n                    batch['attention_mask'].append(case_encodings['attention_mask'] + case_template * (\n                            data_args.max_segments - len(case_encodings['attention_mask'])))\n                    batch['token_type_ids'].append(case_encodings['token_type_ids'] + case_template * (\n                            data_args.max_segments - len(case_encodings['token_type_ids'])))\n        elif config.model_type in ['longformer', 'big_bird']:\n            cases = []\n            max_position_embeddings = config.max_position_embeddings - 2 if config.model_type == 'longformer' \\\n                else config.max_position_embeddings\n            for case in examples['text']:\n                cases.append(f' {tokenizer.sep_token} '.join(\n                    [' '.join(fact.split()[:data_args.max_seg_length]) for fact in case[:data_args.max_segments]]))\n            batch = tokenizer(cases, padding=padding, max_length=max_position_embeddings, truncation=True)\n            if config.model_type == 'longformer':\n                global_attention_mask = np.zeros((len(cases), max_position_embeddings), dtype=np.int32)\n                # global attention on cls token\n                global_attention_mask[:, 0] = 1\n                batch['global_attention_mask'] = list(global_attention_mask)\n        else:\n            cases = []\n            for case in examples['text']:\n                cases.append(f'\\n'.join(case))\n            batch = tokenizer(cases, padding=padding, 
max_length=512, truncation=True)\n\n        batch[\"labels\"] = [[1 if label in labels else 0 for label in label_list] for labels in examples[\"labels\"]]\n\n        return batch\n\n    if training_args.do_train:\n        if data_args.max_train_samples is not None:\n            train_dataset = train_dataset.select(range(data_args.max_train_samples))\n        with training_args.main_process_first(desc=\"train dataset map pre-processing\"):\n            train_dataset = train_dataset.map(\n                preprocess_function,\n                batched=True,\n                load_from_cache_file=not data_args.overwrite_cache,\n                desc=\"Running tokenizer on train dataset\",\n            )\n        # Log a few random samples from the training set:\n        for index in random.sample(range(len(train_dataset)), 3):\n            logger.info(f\"Sample {index} of the training set: {train_dataset[index]}.\")\n\n    if training_args.do_eval:\n        if data_args.max_eval_samples is not None:\n            eval_dataset = eval_dataset.select(range(data_args.max_eval_samples))\n        with training_args.main_process_first(desc=\"validation dataset map pre-processing\"):\n            eval_dataset = eval_dataset.map(\n                preprocess_function,\n                batched=True,\n                load_from_cache_file=not data_args.overwrite_cache,\n                desc=\"Running tokenizer on validation dataset\",\n            )\n\n    if training_args.do_predict:\n        if data_args.max_predict_samples is not None:\n            predict_dataset = predict_dataset.select(range(data_args.max_predict_samples))\n        with training_args.main_process_first(desc=\"prediction dataset map pre-processing\"):\n            predict_dataset = predict_dataset.map(\n                preprocess_function,\n                batched=True,\n                load_from_cache_file=not data_args.overwrite_cache,\n                desc=\"Running tokenizer on prediction dataset\",\n          
  )\n\n    # You can define your custom compute_metrics function. It takes an `EvalPrediction` object (a namedtuple with a\n    # predictions and label_ids field) and has to return a dictionary string to float.\n    def compute_metrics(p: EvalPrediction):\n        # Fix gold labels\n        y_true = np.zeros((p.label_ids.shape[0], p.label_ids.shape[1] + 1), dtype=np.int32)\n        y_true[:, :-1] = p.label_ids\n        y_true[:, -1] = (np.sum(p.label_ids, axis=1) == 0).astype('int32')\n        # Fix predictions\n        logits = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions\n        preds = (expit(logits) > 0.5).astype('int32')\n        y_pred = np.zeros((p.label_ids.shape[0], p.label_ids.shape[1] + 1), dtype=np.int32)\n        y_pred[:, :-1] = preds\n        y_pred[:, -1] = (np.sum(preds, axis=1) == 0).astype('int32')\n        # Compute scores\n        macro_f1 = f1_score(y_true=y_true, y_pred=y_pred, average='macro', zero_division=0)\n        micro_f1 = f1_score(y_true=y_true, y_pred=y_pred, average='micro', zero_division=0)\n        return {'macro-f1': macro_f1, 'micro-f1': micro_f1}\n\n    # Data collator will default to DataCollatorWithPadding, so we change it if we already did the padding.\n    if data_args.pad_to_max_length:\n        data_collator = default_data_collator\n    elif training_args.fp16:\n        data_collator = DataCollatorWithPadding(tokenizer, pad_to_multiple_of=8)\n    else:\n        data_collator = None\n\n    # Initialize our Trainer\n    trainer = MultilabelTrainer(\n        model=model,\n        args=training_args,\n        train_dataset=train_dataset if training_args.do_train else None,\n        eval_dataset=eval_dataset if training_args.do_eval else None,\n        compute_metrics=compute_metrics,\n        tokenizer=tokenizer,\n        data_collator=data_collator,\n        callbacks=[EarlyStoppingCallback(early_stopping_patience=3)]\n    )\n\n    # Training\n    if training_args.do_train:\n        checkpoint = 
None\n        if training_args.resume_from_checkpoint is not None:\n            checkpoint = training_args.resume_from_checkpoint\n        elif last_checkpoint is not None:\n            checkpoint = last_checkpoint\n        train_result = trainer.train(resume_from_checkpoint=checkpoint)\n        metrics = train_result.metrics\n        max_train_samples = (\n            data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset)\n        )\n        metrics[\"train_samples\"] = min(max_train_samples, len(train_dataset))\n\n        trainer.save_model()  # Saves the tokenizer too for easy upload\n\n        trainer.log_metrics(\"train\", metrics)\n        trainer.save_metrics(\"train\", metrics)\n        trainer.save_state()\n\n    # Evaluation\n    if training_args.do_eval:\n        logger.info(\"*** Evaluate ***\")\n        metrics = trainer.evaluate(eval_dataset=eval_dataset)\n\n        max_eval_samples = data_args.max_eval_samples if data_args.max_eval_samples is not None else len(eval_dataset)\n        metrics[\"eval_samples\"] = min(max_eval_samples, len(eval_dataset))\n\n        trainer.log_metrics(\"eval\", metrics)\n        trainer.save_metrics(\"eval\", metrics)\n\n    # Prediction\n    if training_args.do_predict:\n        logger.info(\"*** Predict ***\")\n        predictions, labels, metrics = trainer.predict(predict_dataset, metric_key_prefix=\"predict\")\n\n        max_predict_samples = (\n            data_args.max_predict_samples if data_args.max_predict_samples is not None else len(predict_dataset)\n        )\n        metrics[\"predict_samples\"] = min(max_predict_samples, len(predict_dataset))\n\n        trainer.log_metrics(\"predict\", metrics)\n        trainer.save_metrics(\"predict\", metrics)\n\n        output_predict_file = os.path.join(training_args.output_dir, \"test_predictions.csv\")\n        if trainer.is_world_process_zero():\n            with open(output_predict_file, \"w\") as writer:\n                for 
index, pred_list in enumerate(predictions[0]):\n                    pred_line = '\\t'.join([f'{pred:.5f}' for pred in pred_list])\n                    writer.write(f\"{index}\\t{pred_line}\\n\")\n\n    # Clean up checkpoints\n    checkpoints = [filepath for filepath in glob.glob(f'{training_args.output_dir}/*/') if '/checkpoint' in filepath]\n    for checkpoint in checkpoints:\n        shutil.rmtree(checkpoint)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "experiments/eurlex.py",
    "content": "#!/usr/bin/env python\n# coding=utf-8\n\"\"\" Finetuning models on EUR-LEX (e.g. Bert, RoBERTa, LEGAL-BERT).\"\"\"\n\nimport logging\nimport os\nimport random\nimport sys\nfrom dataclasses import dataclass, field\nfrom typing import Optional\n\nimport datasets\nfrom datasets import load_dataset\nfrom sklearn.metrics import f1_score\nfrom trainer import MultilabelTrainer\nfrom scipy.special import expit\nimport glob\nimport shutil\n\nimport transformers\nfrom transformers import (\n    AutoConfig,\n    AutoModelForSequenceClassification,\n    AutoTokenizer,\n    DataCollatorWithPadding,\n    EvalPrediction,\n    HfArgumentParser,\n    TrainingArguments,\n    default_data_collator,\n    set_seed,\n    EarlyStoppingCallback,\n)\nfrom transformers.trainer_utils import get_last_checkpoint\nfrom transformers.utils import check_min_version\nfrom transformers.utils.versions import require_version\n\n\n# Will error if the minimal version of Transformers is not installed. Remove at your own risks.\ncheck_min_version(\"4.9.0\")\n\nrequire_version(\"datasets>=1.8.0\", \"To fix: pip install -r examples/pytorch/text-classification/requirements.txt\")\n\nlogger = logging.getLogger(__name__)\n\n\n@dataclass\nclass DataTrainingArguments:\n    \"\"\"\n    Arguments pertaining to what data we are going to input our model for training and eval.\n\n    Using `HfArgumentParser` we can turn this class\n    into argparse arguments to be able to specify them on\n    the command line.\n    \"\"\"\n\n    max_seq_length: Optional[int] = field(\n        default=512,\n        metadata={\n            \"help\": \"The maximum total input sequence length after tokenization. 
Sequences longer \"\n            \"than this will be truncated, sequences shorter will be padded.\"\n        },\n    )\n    overwrite_cache: bool = field(\n        default=False, metadata={\"help\": \"Overwrite the cached preprocessed datasets or not.\"}\n    )\n    pad_to_max_length: bool = field(\n        default=True,\n        metadata={\n            \"help\": \"Whether to pad all samples to `max_seq_length`. \"\n            \"If False, will pad the samples dynamically when batching to the maximum length in the batch.\"\n        },\n    )\n    max_train_samples: Optional[int] = field(\n        default=None,\n        metadata={\n            \"help\": \"For debugging purposes or quicker training, truncate the number of training examples to this \"\n            \"value if set.\"\n        },\n    )\n    max_eval_samples: Optional[int] = field(\n        default=None,\n        metadata={\n            \"help\": \"For debugging purposes or quicker training, truncate the number of evaluation examples to this \"\n            \"value if set.\"\n        },\n    )\n    max_predict_samples: Optional[int] = field(\n        default=None,\n        metadata={\n            \"help\": \"For debugging purposes or quicker training, truncate the number of prediction examples to this \"\n            \"value if set.\"\n        },\n    )\n    server_ip: Optional[str] = field(default=None, metadata={\"help\": \"For distant debugging.\"})\n    server_port: Optional[str] = field(default=None, metadata={\"help\": \"For distant debugging.\"})\n\n\n@dataclass\nclass ModelArguments:\n    \"\"\"\n    Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.\n    \"\"\"\n\n    model_name_or_path: str = field(\n        default=None, metadata={\"help\": \"Path to pretrained model or model identifier from huggingface.co/models\"}\n    )\n    config_name: Optional[str] = field(\n        default=None, metadata={\"help\": \"Pretrained config name or path if not the same as 
model_name\"}\n    )\n    tokenizer_name: Optional[str] = field(\n        default=None, metadata={\"help\": \"Pretrained tokenizer name or path if not the same as model_name\"}\n    )\n    cache_dir: Optional[str] = field(\n        default=None,\n        metadata={\"help\": \"Where do you want to store the pretrained models downloaded from huggingface.co\"},\n    )\n    do_lower_case: Optional[bool] = field(\n        default=True,\n        metadata={\"help\": \"arg to indicate if tokenizer should do lower case in AutoTokenizer.from_pretrained()\"},\n    )\n    use_fast_tokenizer: bool = field(\n        default=True,\n        metadata={\"help\": \"Whether to use one of the fast tokenizer (backed by the tokenizers library) or not.\"},\n    )\n    model_revision: str = field(\n        default=\"main\",\n        metadata={\"help\": \"The specific model version to use (can be a branch name, tag name or commit id).\"},\n    )\n    use_auth_token: bool = field(\n        default=False,\n        metadata={\n            \"help\": \"Will use the token generated when running `transformers-cli login` (necessary to use this script \"\n            \"with private models).\"\n        },\n    )\n\n\ndef main():\n    # See all possible arguments in src/transformers/training_args.py\n    # or by passing the --help flag to this script.\n    # We now keep distinct sets of args, for a cleaner separation of concerns.\n\n    parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))\n    model_args, data_args, training_args = parser.parse_args_into_dataclasses()\n\n    # Setup distant debugging if needed\n    if data_args.server_ip and data_args.server_port:\n        # Distant debugging - see https://code.visualstudio.com/docs/python/debugging#_attach-to-a-local-script\n        import ptvsd\n\n        print(\"Waiting for debugger attach\")\n        ptvsd.enable_attach(address=(data_args.server_ip, data_args.server_port), redirect_output=True)\n        
ptvsd.wait_for_attach()\n\n    # Fix boolean parameter passed as a string on the command line\n    if model_args.do_lower_case == 'False' or not model_args.do_lower_case:\n        model_args.do_lower_case = False\n    else:\n        model_args.do_lower_case = True\n\n    # Setup logging\n    logging.basicConfig(\n        format=\"%(asctime)s - %(levelname)s - %(name)s - %(message)s\",\n        datefmt=\"%m/%d/%Y %H:%M:%S\",\n        handlers=[logging.StreamHandler(sys.stdout)],\n    )\n\n    log_level = training_args.get_process_log_level()\n    logger.setLevel(log_level)\n    datasets.utils.logging.set_verbosity(log_level)\n    transformers.utils.logging.set_verbosity(log_level)\n    transformers.utils.logging.enable_default_handler()\n    transformers.utils.logging.enable_explicit_format()\n\n    # Log on each process the small summary:\n    logger.warning(\n        f\"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}, \"\n        + f\"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}\"\n    )\n    logger.info(f\"Training/evaluation parameters {training_args}\")\n\n    # Detecting last checkpoint.\n    last_checkpoint = None\n    if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:\n        last_checkpoint = get_last_checkpoint(training_args.output_dir)\n        if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:\n            raise ValueError(\n                f\"Output directory ({training_args.output_dir}) already exists and is not empty. \"\n                \"Use --overwrite_output_dir to overcome.\"\n            )\n        elif last_checkpoint is not None:\n            logger.info(\n                f\"Checkpoint detected, resuming training at {last_checkpoint}. 
To avoid this behavior, change \"\n                \"the `--output_dir` or add `--overwrite_output_dir` to train from scratch.\"\n            )\n\n    # Set seed before initializing model.\n    set_seed(training_args.seed)\n\n    # In distributed training, the load_dataset function guarantees that only one local process can concurrently\n    # download the dataset.\n    # Downloading and loading eurlex dataset from the hub.\n    if training_args.do_train:\n        train_dataset = load_dataset(\"lex_glue\", \"eurlex\", split=\"train\", cache_dir=model_args.cache_dir)\n\n    if training_args.do_eval:\n        eval_dataset = load_dataset(\"lex_glue\", \"eurlex\", split=\"validation\", cache_dir=model_args.cache_dir)\n\n    if training_args.do_predict:\n        predict_dataset = load_dataset(\"lex_glue\", \"eurlex\", split=\"test\", cache_dir=model_args.cache_dir)\n\n    # Labels\n    label_list = list(range(100))\n    num_labels = len(label_list)\n\n    # Load pretrained model and tokenizer\n    # In distributed training, the .from_pretrained methods guarantee that only one local process can concurrently\n    # download model & vocab.\n    config = AutoConfig.from_pretrained(\n        model_args.config_name if model_args.config_name else model_args.model_name_or_path,\n        num_labels=num_labels,\n        finetuning_task=\"eurlex\",\n        cache_dir=model_args.cache_dir,\n        revision=model_args.model_revision,\n        use_auth_token=True if model_args.use_auth_token else None,\n    )\n\n    if config.model_type == 'big_bird':\n        config.attention_type = 'original_full'\n\n    tokenizer = AutoTokenizer.from_pretrained(\n        model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,\n        do_lower_case=model_args.do_lower_case,\n        cache_dir=model_args.cache_dir,\n        use_fast=model_args.use_fast_tokenizer,\n        revision=model_args.model_revision,\n        use_auth_token=True if 
model_args.use_auth_token else None,\n    )\n    model = AutoModelForSequenceClassification.from_pretrained(\n        model_args.model_name_or_path,\n        from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\n        config=config,\n        cache_dir=model_args.cache_dir,\n        revision=model_args.model_revision,\n        use_auth_token=True if model_args.use_auth_token else None,\n    )\n\n    # Preprocessing the datasets\n    # Padding strategy\n    if data_args.pad_to_max_length:\n        padding = \"max_length\"\n    else:\n        # We will pad later, dynamically at batch creation, to the max sequence length in each batch\n        padding = False\n\n    def preprocess_function(examples):\n        # Tokenize the texts\n        batch = tokenizer(\n            examples[\"text\"],\n            padding=padding,\n            max_length=data_args.max_seq_length,\n            truncation=True,\n        )\n        batch[\"labels\"] = [[1 if label in labels else 0 for label in label_list] for labels in examples[\"labels\"]]\n\n        return batch\n\n    if training_args.do_train:\n        if data_args.max_train_samples is not None:\n            train_dataset = train_dataset.select(range(data_args.max_train_samples))\n        with training_args.main_process_first(desc=\"train dataset map pre-processing\"):\n            train_dataset = train_dataset.map(\n                preprocess_function,\n                batched=True,\n                load_from_cache_file=not data_args.overwrite_cache,\n                desc=\"Running tokenizer on train dataset\",\n            )\n        # Log a few random samples from the training set:\n        for index in random.sample(range(len(train_dataset)), 3):\n            logger.info(f\"Sample {index} of the training set: {train_dataset[index]}.\")\n\n    if training_args.do_eval:\n        if data_args.max_eval_samples is not None:\n            eval_dataset = eval_dataset.select(range(data_args.max_eval_samples))\n        with 
training_args.main_process_first(desc=\"validation dataset map pre-processing\"):\n            eval_dataset = eval_dataset.map(\n                preprocess_function,\n                batched=True,\n                load_from_cache_file=not data_args.overwrite_cache,\n                desc=\"Running tokenizer on validation dataset\",\n            )\n\n    if training_args.do_predict:\n        if data_args.max_predict_samples is not None:\n            predict_dataset = predict_dataset.select(range(data_args.max_predict_samples))\n        with training_args.main_process_first(desc=\"prediction dataset map pre-processing\"):\n            predict_dataset = predict_dataset.map(\n                preprocess_function,\n                batched=True,\n                load_from_cache_file=not data_args.overwrite_cache,\n                desc=\"Running tokenizer on prediction dataset\",\n            )\n\n    # You can define your custom compute_metrics function. It takes an `EvalPrediction` object (a namedtuple with a\n    # predictions and label_ids field) and has to return a dictionary string to float.\n    def compute_metrics(p: EvalPrediction):\n        logits = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions\n        preds = (expit(logits) > 0.5).astype('int32')\n        macro_f1 = f1_score(y_true=p.label_ids, y_pred=preds, average='macro', zero_division=0)\n        micro_f1 = f1_score(y_true=p.label_ids, y_pred=preds, average='micro', zero_division=0)\n        return {'macro-f1': macro_f1, 'micro-f1': micro_f1}\n\n    # Data collator will default to DataCollatorWithPadding, so we change it if we already did the padding.\n    if data_args.pad_to_max_length:\n        data_collator = default_data_collator\n    elif training_args.fp16:\n        data_collator = DataCollatorWithPadding(tokenizer, pad_to_multiple_of=8)\n    else:\n        data_collator = None\n\n    # Initialize our Trainer\n    trainer = MultilabelTrainer(\n        model=model,\n        
args=training_args,\n        train_dataset=train_dataset if training_args.do_train else None,\n        eval_dataset=eval_dataset if training_args.do_eval else None,\n        compute_metrics=compute_metrics,\n        tokenizer=tokenizer,\n        data_collator=data_collator,\n        callbacks=[EarlyStoppingCallback(early_stopping_patience=3)]\n    )\n\n    # Training\n    if training_args.do_train:\n        checkpoint = None\n        if training_args.resume_from_checkpoint is not None:\n            checkpoint = training_args.resume_from_checkpoint\n        elif last_checkpoint is not None:\n            checkpoint = last_checkpoint\n        train_result = trainer.train(resume_from_checkpoint=checkpoint)\n        metrics = train_result.metrics\n        max_train_samples = (\n            data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset)\n        )\n        metrics[\"train_samples\"] = min(max_train_samples, len(train_dataset))\n\n        trainer.save_model()  # Saves the tokenizer too for easy upload\n\n        trainer.log_metrics(\"train\", metrics)\n        trainer.save_metrics(\"train\", metrics)\n        trainer.save_state()\n\n    # Evaluation\n    if training_args.do_eval:\n        logger.info(\"*** Evaluate ***\")\n        metrics = trainer.evaluate(eval_dataset=eval_dataset)\n\n        max_eval_samples = data_args.max_eval_samples if data_args.max_eval_samples is not None else len(eval_dataset)\n        metrics[\"eval_samples\"] = min(max_eval_samples, len(eval_dataset))\n\n        trainer.log_metrics(\"eval\", metrics)\n        trainer.save_metrics(\"eval\", metrics)\n\n    # Prediction\n    if training_args.do_predict:\n        logger.info(\"*** Predict ***\")\n        predictions, labels, metrics = trainer.predict(predict_dataset, metric_key_prefix=\"predict\")\n\n        max_predict_samples = (\n            data_args.max_predict_samples if data_args.max_predict_samples is not None else len(predict_dataset)\n     
   )\n        metrics[\"predict_samples\"] = min(max_predict_samples, len(predict_dataset))\n\n        trainer.log_metrics(\"predict\", metrics)\n        trainer.save_metrics(\"predict\", metrics)\n\n        output_predict_file = os.path.join(training_args.output_dir, \"test_predictions.csv\")\n        if trainer.is_world_process_zero():\n            with open(output_predict_file, \"w\") as writer:\n                for index, pred_list in enumerate(predictions):\n                    pred_line = '\\t'.join([f'{pred:.5f}' for pred in pred_list])\n                    writer.write(f\"{index}\\t{pred_line}\\n\")\n\n\n    # Clean up checkpoints\n    checkpoints = [filepath for filepath in glob.glob(f'{training_args.output_dir}/*/') if '/checkpoint' in filepath]\n    for checkpoint in checkpoints:\n        shutil.rmtree(checkpoint)\n\n\nif __name__ == \"__main__\":\n    main()"
  },
  {
    "path": "experiments/ledgar.py",
    "content": "#!/usr/bin/env python\n# coding=utf-8\n\"\"\" Finetuning models on LEDGAR (e.g. Bert, RoBERTa, LEGAL-BERT).\"\"\"\n\nimport logging\nimport os\nimport random\nimport sys\nfrom dataclasses import dataclass, field\nfrom typing import Optional\n\nimport datasets\nfrom datasets import load_dataset\nfrom sklearn.metrics import f1_score\nimport numpy as np\nimport glob\nimport shutil\n\nimport transformers\nfrom transformers import (\n    AutoConfig,\n    AutoModelForSequenceClassification,\n    AutoTokenizer,\n    DataCollatorWithPadding,\n    EvalPrediction,\n    HfArgumentParser,\n    TrainingArguments,\n    default_data_collator,\n    set_seed,\n    EarlyStoppingCallback,\n    Trainer\n)\nfrom transformers.trainer_utils import get_last_checkpoint\nfrom transformers.utils import check_min_version\nfrom transformers.utils.versions import require_version\n\n\n# Will error if the minimal version of Transformers is not installed. Remove at your own risks.\ncheck_min_version(\"4.9.0\")\n\nrequire_version(\"datasets>=1.8.0\", \"To fix: pip install -r examples/pytorch/text-classification/requirements.txt\")\n\nlogger = logging.getLogger(__name__)\n\n\n@dataclass\nclass DataTrainingArguments:\n    \"\"\"\n    Arguments pertaining to what data we are going to input our model for training and eval.\n\n    Using `HfArgumentParser` we can turn this class\n    into argparse arguments to be able to specify them on\n    the command line.\n    \"\"\"\n\n    max_seq_length: Optional[int] = field(\n        default=512,\n        metadata={\n            \"help\": \"The maximum total input sequence length after tokenization. 
Sequences longer \"\n            \"than this will be truncated, sequences shorter will be padded.\"\n        },\n    )\n    overwrite_cache: bool = field(\n        default=False, metadata={\"help\": \"Overwrite the cached preprocessed datasets or not.\"}\n    )\n    pad_to_max_length: bool = field(\n        default=True,\n        metadata={\n            \"help\": \"Whether to pad all samples to `max_seq_length`. \"\n            \"If False, will pad the samples dynamically when batching to the maximum length in the batch.\"\n        },\n    )\n    max_train_samples: Optional[int] = field(\n        default=None,\n        metadata={\n            \"help\": \"For debugging purposes or quicker training, truncate the number of training examples to this \"\n            \"value if set.\"\n        },\n    )\n    max_eval_samples: Optional[int] = field(\n        default=None,\n        metadata={\n            \"help\": \"For debugging purposes or quicker training, truncate the number of evaluation examples to this \"\n            \"value if set.\"\n        },\n    )\n    max_predict_samples: Optional[int] = field(\n        default=None,\n        metadata={\n            \"help\": \"For debugging purposes or quicker training, truncate the number of prediction examples to this \"\n            \"value if set.\"\n        },\n    )\n    server_ip: Optional[str] = field(default=None, metadata={\"help\": \"For distant debugging.\"})\n    server_port: Optional[str] = field(default=None, metadata={\"help\": \"For distant debugging.\"})\n\n\n@dataclass\nclass ModelArguments:\n    \"\"\"\n    Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.\n    \"\"\"\n\n    model_name_or_path: str = field(\n        default=None, metadata={\"help\": \"Path to pretrained model or model identifier from huggingface.co/models\"}\n    )\n    config_name: Optional[str] = field(\n        default=None, metadata={\"help\": \"Pretrained config name or path if not the same as 
model_name\"}\n    )\n    tokenizer_name: Optional[str] = field(\n        default=None, metadata={\"help\": \"Pretrained tokenizer name or path if not the same as model_name\"}\n    )\n    cache_dir: Optional[str] = field(\n        default=None,\n        metadata={\"help\": \"Where do you want to store the pretrained models downloaded from huggingface.co\"},\n    )\n    do_lower_case: Optional[bool] = field(\n        default=True,\n        metadata={\"help\": \"arg to indicate if tokenizer should do lower case in AutoTokenizer.from_pretrained()\"},\n    )\n    use_fast_tokenizer: bool = field(\n        default=True,\n        metadata={\"help\": \"Whether to use one of the fast tokenizer (backed by the tokenizers library) or not.\"},\n    )\n    model_revision: str = field(\n        default=\"main\",\n        metadata={\"help\": \"The specific model version to use (can be a branch name, tag name or commit id).\"},\n    )\n    use_auth_token: bool = field(\n        default=False,\n        metadata={\n            \"help\": \"Will use the token generated when running `transformers-cli login` (necessary to use this script \"\n            \"with private models).\"\n        },\n    )\n\n\ndef main():\n    # See all possible arguments in src/transformers/training_args.py\n    # or by passing the --help flag to this script.\n    # We now keep distinct sets of args, for a cleaner separation of concerns.\n\n    parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))\n    model_args, data_args, training_args = parser.parse_args_into_dataclasses()\n\n    # Setup distant debugging if needed\n    if data_args.server_ip and data_args.server_port:\n        # Distant debugging - see https://code.visualstudio.com/docs/python/debugging#_attach-to-a-local-script\n        import ptvsd\n\n        print(\"Waiting for debugger attach\")\n        ptvsd.enable_attach(address=(data_args.server_ip, data_args.server_port), redirect_output=True)\n        
ptvsd.wait_for_attach()\n\n    # Fix boolean parameter passed as a string on the command line\n    if model_args.do_lower_case == 'False' or not model_args.do_lower_case:\n        model_args.do_lower_case = False\n    else:\n        model_args.do_lower_case = True\n\n    # Setup logging\n    logging.basicConfig(\n        format=\"%(asctime)s - %(levelname)s - %(name)s - %(message)s\",\n        datefmt=\"%m/%d/%Y %H:%M:%S\",\n        handlers=[logging.StreamHandler(sys.stdout)],\n    )\n\n    log_level = training_args.get_process_log_level()\n    logger.setLevel(log_level)\n    datasets.utils.logging.set_verbosity(log_level)\n    transformers.utils.logging.set_verbosity(log_level)\n    transformers.utils.logging.enable_default_handler()\n    transformers.utils.logging.enable_explicit_format()\n\n    # Log on each process the small summary:\n    logger.warning(\n        f\"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}, \"\n        + f\"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}\"\n    )\n    logger.info(f\"Training/evaluation parameters {training_args}\")\n\n    # Detecting last checkpoint.\n    last_checkpoint = None\n    if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:\n        last_checkpoint = get_last_checkpoint(training_args.output_dir)\n        if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:\n            raise ValueError(\n                f\"Output directory ({training_args.output_dir}) already exists and is not empty. \"\n                \"Use --overwrite_output_dir to overcome.\"\n            )\n        elif last_checkpoint is not None:\n            logger.info(\n                f\"Checkpoint detected, resuming training at {last_checkpoint}. 
To avoid this behavior, change \"\n                \"the `--output_dir` or add `--overwrite_output_dir` to train from scratch.\"\n            )\n\n    # Set seed before initializing model.\n    set_seed(training_args.seed)\n\n    # In distributed training, the load_dataset function guarantees that only one local process can concurrently\n    # download the dataset.\n    # Downloading and loading ledgar dataset from the hub.\n    if training_args.do_train:\n        train_dataset = load_dataset(\"lex_glue\", \"ledgar\", split=\"train\", cache_dir=model_args.cache_dir)\n\n    if training_args.do_eval:\n        eval_dataset = load_dataset(\"lex_glue\", \"ledgar\", split=\"validation\", cache_dir=model_args.cache_dir)\n\n    if training_args.do_predict:\n        predict_dataset = load_dataset(\"lex_glue\", \"ledgar\", split=\"test\", cache_dir=model_args.cache_dir)\n\n    # Labels\n    label_list = list(range(100))\n    num_labels = len(label_list)\n\n    # Load pretrained model and tokenizer\n    # In distributed training, the .from_pretrained methods guarantee that only one local process can concurrently\n    # download model & vocab.\n    config = AutoConfig.from_pretrained(\n        model_args.config_name if model_args.config_name else model_args.model_name_or_path,\n        num_labels=num_labels,\n        finetuning_task=\"ledgar\",\n        cache_dir=model_args.cache_dir,\n        revision=model_args.model_revision,\n        use_auth_token=True if model_args.use_auth_token else None,\n    )\n\n    if config.model_type == 'big_bird':\n        config.attention_type = 'original_full'\n\n    tokenizer = AutoTokenizer.from_pretrained(\n        model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,\n        do_lower_case=model_args.do_lower_case,\n        cache_dir=model_args.cache_dir,\n        use_fast=model_args.use_fast_tokenizer,\n        revision=model_args.model_revision,\n        use_auth_token=True if 
model_args.use_auth_token else None,\n    )\n    model = AutoModelForSequenceClassification.from_pretrained(\n        model_args.model_name_or_path,\n        from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\n        config=config,\n        cache_dir=model_args.cache_dir,\n        revision=model_args.model_revision,\n        use_auth_token=True if model_args.use_auth_token else None,\n    )\n\n    # Preprocessing the datasets\n    # Padding strategy\n    if data_args.pad_to_max_length:\n        padding = \"max_length\"\n    else:\n        # We will pad later, dynamically at batch creation, to the max sequence length in each batch\n        padding = False\n\n    def preprocess_function(examples):\n        # Tokenize the texts\n        batch = tokenizer(\n            examples[\"text\"],\n            padding=padding,\n            max_length=data_args.max_seq_length,\n            truncation=True,\n        )\n        batch[\"label\"] = [label_list.index(label) for label in examples[\"label\"]]\n\n        return batch\n\n    if training_args.do_train:\n        if data_args.max_train_samples is not None:\n            train_dataset = train_dataset.select(range(data_args.max_train_samples))\n        with training_args.main_process_first(desc=\"train dataset map pre-processing\"):\n            train_dataset = train_dataset.map(\n                preprocess_function,\n                batched=True,\n                load_from_cache_file=not data_args.overwrite_cache,\n                desc=\"Running tokenizer on train dataset\",\n            )\n        # Log a few random samples from the training set:\n        for index in random.sample(range(len(train_dataset)), 3):\n            logger.info(f\"Sample {index} of the training set: {train_dataset[index]}.\")\n\n    if training_args.do_eval:\n        if data_args.max_eval_samples is not None:\n            eval_dataset = eval_dataset.select(range(data_args.max_eval_samples))\n        with 
training_args.main_process_first(desc=\"validation dataset map pre-processing\"):\n            eval_dataset = eval_dataset.map(\n                preprocess_function,\n                batched=True,\n                load_from_cache_file=not data_args.overwrite_cache,\n                desc=\"Running tokenizer on validation dataset\",\n            )\n\n    if training_args.do_predict:\n        if data_args.max_predict_samples is not None:\n            predict_dataset = predict_dataset.select(range(data_args.max_predict_samples))\n        with training_args.main_process_first(desc=\"prediction dataset map pre-processing\"):\n            predict_dataset = predict_dataset.map(\n                preprocess_function,\n                batched=True,\n                load_from_cache_file=not data_args.overwrite_cache,\n                desc=\"Running tokenizer on prediction dataset\",\n            )\n\n    # You can define your custom compute_metrics function. It takes an `EvalPrediction` object (a namedtuple with a\n    # predictions and label_ids field) and has to return a dictionary string to float.\n    def compute_metrics(p: EvalPrediction):\n        logits = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions\n        preds = np.argmax(logits, axis=1)\n        macro_f1 = f1_score(y_true=p.label_ids, y_pred=preds, average='macro', zero_division=0)\n        micro_f1 = f1_score(y_true=p.label_ids, y_pred=preds, average='micro', zero_division=0)\n        return {'macro-f1': macro_f1, 'micro-f1': micro_f1}\n\n    # Data collator will default to DataCollatorWithPadding, so we change it if we already did the padding.\n    if data_args.pad_to_max_length:\n        data_collator = default_data_collator\n    elif training_args.fp16:\n        data_collator = DataCollatorWithPadding(tokenizer, pad_to_multiple_of=8)\n    else:\n        data_collator = None\n\n    # Initialize our Trainer\n    trainer = Trainer(\n        model=model,\n        args=training_args,\n     
   train_dataset=train_dataset if training_args.do_train else None,\n        eval_dataset=eval_dataset if training_args.do_eval else None,\n        compute_metrics=compute_metrics,\n        tokenizer=tokenizer,\n        data_collator=data_collator,\n        callbacks=[EarlyStoppingCallback(early_stopping_patience=3)]\n    )\n\n    # Training\n    if training_args.do_train:\n        checkpoint = None\n        if training_args.resume_from_checkpoint is not None:\n            checkpoint = training_args.resume_from_checkpoint\n        elif last_checkpoint is not None:\n            checkpoint = last_checkpoint\n        train_result = trainer.train(resume_from_checkpoint=checkpoint)\n        metrics = train_result.metrics\n        max_train_samples = (\n            data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset)\n        )\n        metrics[\"train_samples\"] = min(max_train_samples, len(train_dataset))\n\n        trainer.save_model()  # Saves the tokenizer too for easy upload\n\n        trainer.log_metrics(\"train\", metrics)\n        trainer.save_metrics(\"train\", metrics)\n        trainer.save_state()\n\n    # Evaluation\n    if training_args.do_eval:\n        logger.info(\"*** Evaluate ***\")\n        metrics = trainer.evaluate(eval_dataset=eval_dataset)\n\n        max_eval_samples = data_args.max_eval_samples if data_args.max_eval_samples is not None else len(eval_dataset)\n        metrics[\"eval_samples\"] = min(max_eval_samples, len(eval_dataset))\n\n        trainer.log_metrics(\"eval\", metrics)\n        trainer.save_metrics(\"eval\", metrics)\n\n    # Prediction\n    if training_args.do_predict:\n        logger.info(\"*** Predict ***\")\n        predictions, labels, metrics = trainer.predict(predict_dataset, metric_key_prefix=\"predict\")\n\n        max_predict_samples = (\n            data_args.max_predict_samples if data_args.max_predict_samples is not None else len(predict_dataset)\n        )\n        
metrics[\"predict_samples\"] = min(max_predict_samples, len(predict_dataset))\n\n        trainer.log_metrics(\"predict\", metrics)\n        trainer.save_metrics(\"predict\", metrics)\n\n        output_predict_file = os.path.join(training_args.output_dir, \"test_predictions.csv\")\n        if trainer.is_world_process_zero():\n            with open(output_predict_file, \"w\") as writer:\n                for index, pred_list in enumerate(predictions):\n                    pred_line = '\\t'.join([f'{pred:.5f}' for pred in pred_list])\n                    writer.write(f\"{index}\\t{pred_line}\\n\")\n\n\n    # Clean up checkpoints\n    checkpoints = [filepath for filepath in glob.glob(f'{training_args.output_dir}/*/') if '/checkpoint' in filepath]\n    for checkpoint in checkpoints:\n        shutil.rmtree(checkpoint)\n\n\nif __name__ == \"__main__\":\n    main()"
  },
  {
    "path": "experiments/scotus.py",
    "content": "#!/usr/bin/env python\n# coding=utf-8\n\"\"\" Finetuning models on SCOTUS (e.g. Bert, RoBERTa, LEGAL-BERT).\"\"\"\n\nimport logging\nimport os\nimport random\nimport re\nimport sys\nfrom dataclasses import dataclass, field\nfrom typing import Optional\n\nimport datasets\nfrom datasets import load_dataset\nfrom sklearn.metrics import f1_score\nfrom models.hierbert import HierarchicalBert\nimport numpy as np\nfrom torch import nn\nimport glob\nimport shutil\n\nimport transformers\nfrom transformers import (\n    Trainer,\n    AutoConfig,\n    AutoModelForSequenceClassification,\n    AutoTokenizer,\n    DataCollatorWithPadding,\n    EvalPrediction,\n    HfArgumentParser,\n    TrainingArguments,\n    default_data_collator,\n    set_seed,\n    EarlyStoppingCallback,\n)\nfrom transformers.trainer_utils import get_last_checkpoint\nfrom transformers.utils import check_min_version\nfrom transformers.utils.versions import require_version\nfrom models.deberta import DebertaForSequenceClassification\n\n\n# Will error if the minimal version of Transformers is not installed. Remove at your own risks.\ncheck_min_version(\"4.9.0\")\n\nrequire_version(\"datasets>=1.8.0\", \"To fix: pip install -r examples/pytorch/text-classification/requirements.txt\")\n\nlogger = logging.getLogger(__name__)\n\n\n@dataclass\nclass DataTrainingArguments:\n    \"\"\"\n    Arguments pertaining to what data we are going to input our model for training and eval.\n\n    Using `HfArgumentParser` we can turn this class\n    into argparse arguments to be able to specify them on\n    the command line.\n    \"\"\"\n\n    max_seq_length: Optional[int] = field(\n        default=128,\n        metadata={\n            \"help\": \"The maximum total input sequence length after tokenization. 
Sequences longer \"\n                    \"than this will be truncated, sequences shorter will be padded.\"\n        },\n    )\n    max_segments: Optional[int] = field(\n        default=64,\n        metadata={\n            \"help\": \"The maximum number of segments (paragraphs) to be considered. Documents with more \"\n                    \"segments will be truncated, documents with fewer will be padded.\"\n        },\n    )\n    max_seg_length: Optional[int] = field(\n        default=128,\n        metadata={\n            \"help\": \"The maximum segment (paragraph) length after tokenization. Segments longer \"\n                    \"than this will be truncated, segments shorter will be padded.\"\n        },\n    )\n    overwrite_cache: bool = field(\n        default=False, metadata={\"help\": \"Overwrite the cached preprocessed datasets or not.\"}\n    )\n    pad_to_max_length: bool = field(\n        default=True,\n        metadata={\n            \"help\": \"Whether to pad all samples to `max_seq_length`. 
\"\n            \"If False, will pad the samples dynamically when batching to the maximum length in the batch.\"\n        },\n    )\n    max_train_samples: Optional[int] = field(\n        default=None,\n        metadata={\n            \"help\": \"For debugging purposes or quicker training, truncate the number of training examples to this \"\n            \"value if set.\"\n        },\n    )\n    max_eval_samples: Optional[int] = field(\n        default=None,\n        metadata={\n            \"help\": \"For debugging purposes or quicker training, truncate the number of evaluation examples to this \"\n            \"value if set.\"\n        },\n    )\n    max_predict_samples: Optional[int] = field(\n        default=None,\n        metadata={\n            \"help\": \"For debugging purposes or quicker training, truncate the number of prediction examples to this \"\n            \"value if set.\"\n        },\n    )\n    server_ip: Optional[str] = field(default=None, metadata={\"help\": \"For distant debugging.\"})\n    server_port: Optional[str] = field(default=None, metadata={\"help\": \"For distant debugging.\"})\n\n\n@dataclass\nclass ModelArguments:\n    \"\"\"\n    Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.\n    \"\"\"\n\n    model_name_or_path: str = field(\n        default=None, metadata={\"help\": \"Path to pretrained model or model identifier from huggingface.co/models\"}\n    )\n    hierarchical: bool = field(\n        default=True, metadata={\"help\": \"Whether to use a hierarchical variant or not\"}\n    )\n    config_name: Optional[str] = field(\n        default=None, metadata={\"help\": \"Pretrained config name or path if not the same as model_name\"}\n    )\n    tokenizer_name: Optional[str] = field(\n        default=None, metadata={\"help\": \"Pretrained tokenizer name or path if not the same as model_name\"}\n    )\n    cache_dir: Optional[str] = field(\n        default=None,\n        metadata={\"help\": \"Where do 
you want to store the pretrained models downloaded from huggingface.co\"},\n    )\n    do_lower_case: Optional[bool] = field(\n        default=True,\n        metadata={\"help\": \"arg to indicate if tokenizer should do lower case in AutoTokenizer.from_pretrained()\"},\n    )\n    use_fast_tokenizer: bool = field(\n        default=True,\n        metadata={\"help\": \"Whether to use one of the fast tokenizer (backed by the tokenizers library) or not.\"},\n    )\n    model_revision: str = field(\n        default=\"main\",\n        metadata={\"help\": \"The specific model version to use (can be a branch name, tag name or commit id).\"},\n    )\n    use_auth_token: bool = field(\n        default=False,\n        metadata={\n            \"help\": \"Will use the token generated when running `transformers-cli login` (necessary to use this script \"\n            \"with private models).\"\n        },\n    )\n\n\ndef main():\n    # See all possible arguments in src/transformers/training_args.py\n    # or by passing the --help flag to this script.\n    # We now keep distinct sets of args, for a cleaner separation of concerns.\n\n    parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))\n    model_args, data_args, training_args = parser.parse_args_into_dataclasses()\n\n    # Setup distant debugging if needed\n    if data_args.server_ip and data_args.server_port:\n        # Distant debugging - see https://code.visualstudio.com/docs/python/debugging#_attach-to-a-local-script\n        import ptvsd\n\n        print(\"Waiting for debugger attach\")\n        ptvsd.enable_attach(address=(data_args.server_ip, data_args.server_port), redirect_output=True)\n        ptvsd.wait_for_attach()\n\n    # Fix boolean parameter\n    if model_args.do_lower_case == 'False' or not model_args.do_lower_case:\n        model_args.do_lower_case = False\n    else:\n        model_args.do_lower_case = True\n\n    if model_args.hierarchical == 'False' or not 
model_args.hierarchical:\n        model_args.hierarchical = False\n    else:\n        model_args.hierarchical = True\n\n    # Setup logging\n    logging.basicConfig(\n        format=\"%(asctime)s - %(levelname)s - %(name)s - %(message)s\",\n        datefmt=\"%m/%d/%Y %H:%M:%S\",\n        handlers=[logging.StreamHandler(sys.stdout)],\n    )\n\n    log_level = training_args.get_process_log_level()\n    logger.setLevel(log_level)\n    datasets.utils.logging.set_verbosity(log_level)\n    transformers.utils.logging.set_verbosity(log_level)\n    transformers.utils.logging.enable_default_handler()\n    transformers.utils.logging.enable_explicit_format()\n\n    # Log on each process the small summary:\n    logger.warning(\n        f\"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}\"\n        + f\", distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}\"\n    )\n    logger.info(f\"Training/evaluation parameters {training_args}\")\n\n    # Detecting last checkpoint.\n    last_checkpoint = None\n    if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:\n        last_checkpoint = get_last_checkpoint(training_args.output_dir)\n        if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:\n            raise ValueError(\n                f\"Output directory ({training_args.output_dir}) already exists and is not empty. \"\n                \"Use --overwrite_output_dir to overcome.\"\n            )\n        elif last_checkpoint is not None:\n            logger.info(\n                f\"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change \"\n                \"the `--output_dir` or add `--overwrite_output_dir` to train from scratch.\"\n            )\n\n    # Set seed before initializing model.\n    set_seed(training_args.seed)\n\n    # In distributed training, the load_dataset function guarantees that only one local process can concurrently\n    # download the dataset.\n    # Downloading and loading the scotus dataset from the hub.\n    if training_args.do_train:\n        train_dataset = load_dataset(\"lex_glue\", \"scotus\", split=\"train\", cache_dir=model_args.cache_dir)\n\n    if training_args.do_eval:\n        eval_dataset = load_dataset(\"lex_glue\", \"scotus\", split=\"validation\", cache_dir=model_args.cache_dir)\n\n    if training_args.do_predict:\n        predict_dataset = load_dataset(\"lex_glue\", \"scotus\", split=\"test\", cache_dir=model_args.cache_dir)\n\n    # Labels\n    label_list = list(range(14))\n    num_labels = len(label_list)\n\n    # Load pretrained model and tokenizer\n    # In distributed training, the .from_pretrained methods guarantee that only one local process can concurrently\n    # download model & vocab.\n    config = AutoConfig.from_pretrained(\n        model_args.config_name if model_args.config_name else model_args.model_name_or_path,\n        num_labels=num_labels,\n        finetuning_task=\"scotus\",\n        cache_dir=model_args.cache_dir,\n        revision=model_args.model_revision,\n        use_auth_token=True if model_args.use_auth_token else None,\n    )\n    tokenizer = AutoTokenizer.from_pretrained(\n        model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,\n        do_lower_case=model_args.do_lower_case,\n        cache_dir=model_args.cache_dir,\n        use_fast=model_args.use_fast_tokenizer,\n        revision=model_args.model_revision,\n        use_auth_token=True if model_args.use_auth_token else None,\n    )\n    if config.model_type == 'deberta' and model_args.hierarchical:\n        model = DebertaForSequenceClassification.from_pretrained(\n            model_args.model_name_or_path,\n            from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\n            config=config,\n            cache_dir=model_args.cache_dir,\n            revision=model_args.model_revision,\n            use_auth_token=True if model_args.use_auth_token else None,\n        )\n    else:\n        model = AutoModelForSequenceClassification.from_pretrained(\n            model_args.model_name_or_path,\n            from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\n            config=config,\n            cache_dir=model_args.cache_dir,\n            revision=model_args.model_revision,\n            use_auth_token=True if model_args.use_auth_token else None,\n        )\n    if model_args.hierarchical:\n        # Hack the classifier encoder to use hierarchical BERT\n        if config.model_type in ['bert', 'deberta']:\n            if config.model_type == 'bert':\n                segment_encoder = model.bert\n            else:\n                segment_encoder = model.deberta\n            model_encoder = HierarchicalBert(encoder=segment_encoder,\n                                             max_segments=data_args.max_segments,\n                                             max_segment_length=data_args.max_seg_length)\n            if config.model_type == 'bert':\n                model.bert = model_encoder\n            elif config.model_type == 'deberta':\n                model.deberta = model_encoder\n            else:\n                raise NotImplementedError(f\"{config.model_type} is not supported yet!\")\n        elif config.model_type == 'roberta':\n            model_encoder = HierarchicalBert(encoder=model.roberta, max_segments=data_args.max_segments,\n                                             max_segment_length=data_args.max_seg_length)\n            model.roberta = model_encoder\n            # Build a new classification layer, as well\n            dense = nn.Linear(config.hidden_size, config.hidden_size)\n            dense.load_state_dict(model.classifier.dense.state_dict())  # load weights\n            dropout = nn.Dropout(config.hidden_dropout_prob).to(model.device)\n            out_proj = nn.Linear(config.hidden_size, config.num_labels).to(model.device)\n            out_proj.load_state_dict(model.classifier.out_proj.state_dict())  # load weights\n            model.classifier = nn.Sequential(dense, dropout, out_proj).to(model.device)\n        elif config.model_type in ['longformer', 'big_bird']:\n            pass\n        else:\n            raise NotImplementedError(f\"{config.model_type} is not supported yet!\")\n\n    # Preprocessing the datasets\n    # Padding strategy\n    if data_args.pad_to_max_length:\n        padding = \"max_length\"\n    else:\n        # We will pad later, dynamically at batch creation, to the max sequence length in each batch\n        padding = False\n\n    def preprocess_function(examples):\n        # Tokenize the texts\n        if model_args.hierarchical:\n            case_template = [[0] * data_args.max_seg_length]\n            if config.model_type == 'roberta':\n                batch = {'input_ids': [], 'attention_mask': []}\n                for doc in examples['text']:\n                    doc = re.split('\\n{2,}', doc)\n                    doc_encodings = tokenizer(doc[:data_args.max_segments], padding=padding,\n                                              max_length=data_args.max_seg_length, truncation=True)\n                    batch['input_ids'].append(doc_encodings['input_ids'] + case_template * (\n                            data_args.max_segments - len(doc_encodings['input_ids'])))\n                    batch['attention_mask'].append(doc_encodings['attention_mask'] + case_template * (\n                            data_args.max_segments - len(doc_encodings['attention_mask'])))\n            else:\n                batch = {'input_ids': [], 'attention_mask': [], 'token_type_ids': 
[]}\n                for doc in examples['text']:\n                    doc = re.split('\\n{2,}', doc)\n                    doc_encodings = tokenizer(doc[:data_args.max_segments], padding=padding,\n                                              max_length=data_args.max_seg_length, truncation=True)\n                    batch['input_ids'].append(doc_encodings['input_ids'] + case_template * (\n                                data_args.max_segments - len(doc_encodings['input_ids'])))\n                    batch['attention_mask'].append(doc_encodings['attention_mask'] + case_template * (\n                                data_args.max_segments - len(doc_encodings['attention_mask'])))\n                    batch['token_type_ids'].append(doc_encodings['token_type_ids'] + case_template * (\n                                data_args.max_segments - len(doc_encodings['token_type_ids'])))\n        elif config.model_type in ['longformer', 'big_bird']:\n            cases = []\n            max_position_embeddings = config.max_position_embeddings - 2 if config.model_type == 'longformer' \\\n                else config.max_position_embeddings\n            for doc in examples['text']:\n                doc = re.split('\\n{2,}', doc)\n                cases.append(f' {tokenizer.sep_token} '.join([' '.join(paragraph.split()[:data_args.max_seg_length])\n                                                              for paragraph in doc[:data_args.max_segments]]))\n            batch = tokenizer(cases, padding=padding, max_length=max_position_embeddings, truncation=True)\n            if config.model_type == 'longformer':\n                global_attention_mask = np.zeros((len(cases), max_position_embeddings), dtype=np.int32)\n                # global attention on cls token\n                global_attention_mask[:, 0] = 1\n                batch['global_attention_mask'] = list(global_attention_mask)\n        else:\n            batch = tokenizer(examples['text'], padding=padding, max_length=512, 
truncation=True)\n\n        batch[\"label\"] = [label_list.index(labels) for labels in examples[\"label\"]]\n\n        return batch\n\n    if training_args.do_train:\n        if data_args.max_train_samples is not None:\n            train_dataset = train_dataset.select(range(data_args.max_train_samples))\n        with training_args.main_process_first(desc=\"train dataset map pre-processing\"):\n            train_dataset = train_dataset.map(\n                preprocess_function,\n                batched=True,\n                load_from_cache_file=not data_args.overwrite_cache,\n                desc=\"Running tokenizer on train dataset\",\n            )\n        # Log a few random samples from the training set:\n        for index in random.sample(range(len(train_dataset)), 3):\n            logger.info(f\"Sample {index} of the training set: {train_dataset[index]}.\")\n\n    if training_args.do_eval:\n        if data_args.max_eval_samples is not None:\n            eval_dataset = eval_dataset.select(range(data_args.max_eval_samples))\n        with training_args.main_process_first(desc=\"validation dataset map pre-processing\"):\n            eval_dataset = eval_dataset.map(\n                preprocess_function,\n                batched=True,\n                load_from_cache_file=not data_args.overwrite_cache,\n                desc=\"Running tokenizer on validation dataset\",\n            )\n\n    if training_args.do_predict:\n        if data_args.max_predict_samples is not None:\n            predict_dataset = predict_dataset.select(range(data_args.max_predict_samples))\n        with training_args.main_process_first(desc=\"prediction dataset map pre-processing\"):\n            predict_dataset = predict_dataset.map(\n                preprocess_function,\n                batched=True,\n                load_from_cache_file=not data_args.overwrite_cache,\n                desc=\"Running tokenizer on prediction dataset\",\n            )\n\n    # You can define your custom 
compute_metrics function. It takes an `EvalPrediction` object (a namedtuple with a\n    # predictions and label_ids field) and has to return a dictionary string to float.\n    def compute_metrics(p: EvalPrediction):\n        logits = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions\n        preds = np.argmax(logits, axis=1)\n        macro_f1 = f1_score(y_true=p.label_ids, y_pred=preds, average='macro', zero_division=0)\n        micro_f1 = f1_score(y_true=p.label_ids, y_pred=preds, average='micro', zero_division=0)\n        return {'macro-f1': macro_f1, 'micro-f1': micro_f1}\n\n    # Data collator will default to DataCollatorWithPadding, so we change it if we already did the padding.\n    if data_args.pad_to_max_length:\n        data_collator = default_data_collator\n    elif training_args.fp16:\n        data_collator = DataCollatorWithPadding(tokenizer, pad_to_multiple_of=8)\n    else:\n        data_collator = None\n\n    # Initialize our Trainer\n    trainer = Trainer(\n        model=model,\n        args=training_args,\n        train_dataset=train_dataset if training_args.do_train else None,\n        eval_dataset=eval_dataset if training_args.do_eval else None,\n        compute_metrics=compute_metrics,\n        tokenizer=tokenizer,\n        data_collator=data_collator,\n        callbacks=[EarlyStoppingCallback(early_stopping_patience=3)]\n    )\n\n    # Training\n    if training_args.do_train:\n        checkpoint = None\n        if training_args.resume_from_checkpoint is not None:\n            checkpoint = training_args.resume_from_checkpoint\n        elif last_checkpoint is not None:\n            checkpoint = last_checkpoint\n        train_result = trainer.train(resume_from_checkpoint=checkpoint)\n        metrics = train_result.metrics\n        max_train_samples = (\n            data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset)\n        )\n        metrics[\"train_samples\"] = 
min(max_train_samples, len(train_dataset))\n\n        trainer.save_model()  # Saves the tokenizer too for easy upload\n\n        trainer.log_metrics(\"train\", metrics)\n        trainer.save_metrics(\"train\", metrics)\n        trainer.save_state()\n\n    # Evaluation\n    if training_args.do_eval:\n        logger.info(\"*** Evaluate ***\")\n        metrics = trainer.evaluate(eval_dataset=eval_dataset)\n\n        max_eval_samples = data_args.max_eval_samples if data_args.max_eval_samples is not None else len(eval_dataset)\n        metrics[\"eval_samples\"] = min(max_eval_samples, len(eval_dataset))\n\n        trainer.log_metrics(\"eval\", metrics)\n        trainer.save_metrics(\"eval\", metrics)\n\n    # Prediction\n    if training_args.do_predict:\n        logger.info(\"*** Predict ***\")\n        predictions, labels, metrics = trainer.predict(predict_dataset, metric_key_prefix=\"predict\")\n\n        max_predict_samples = (\n            data_args.max_predict_samples if data_args.max_predict_samples is not None else len(predict_dataset)\n        )\n        metrics[\"predict_samples\"] = min(max_predict_samples, len(predict_dataset))\n\n        trainer.log_metrics(\"predict\", metrics)\n        trainer.save_metrics(\"predict\", metrics)\n\n        output_predict_file = os.path.join(training_args.output_dir, \"test_predictions.csv\")\n        if trainer.is_world_process_zero():\n            # Models with auxiliary outputs return a tuple; keep only the classification logits\n            logits = predictions[0] if isinstance(predictions, tuple) else predictions\n            with open(output_predict_file, \"w\") as writer:\n                for index, pred_list in enumerate(logits):\n                    pred_line = '\\t'.join([f'{pred:.5f}' for pred in pred_list])\n                    writer.write(f\"{index}\\t{pred_line}\\n\")\n\n    # Clean up checkpoints\n    checkpoints = [filepath for filepath in glob.glob(f'{training_args.output_dir}/*/') if '/checkpoint' in filepath]\n    for checkpoint in checkpoints:\n        shutil.rmtree(checkpoint)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "experiments/trainer.py",
    "content": "from torch import nn\nfrom transformers import Trainer\n\n\nclass MultilabelTrainer(Trainer):\n    def compute_loss(self, model, inputs, return_outputs=False):\n        labels = inputs.pop(\"labels\")\n        outputs = model(**inputs)\n        logits = outputs.logits\n        loss_fct = nn.BCEWithLogitsLoss()\n        loss = loss_fct(logits.view(-1, self.model.config.num_labels),\n                        labels.float().view(-1, self.model.config.num_labels))\n        return (loss, outputs) if return_outputs else loss\n"
  },
  {
    "path": "experiments/unfair_tos.py",
"content": "#!/usr/bin/env python\n# coding=utf-8\n\"\"\" Finetuning models on UNFAIR-ToS (e.g. Bert, RoBERTa, LEGAL-BERT).\"\"\"\n\nimport logging\nimport os\nimport random\nimport sys\nfrom dataclasses import dataclass, field\nfrom typing import Optional\n\nimport datasets\nfrom datasets import load_dataset\nfrom sklearn.metrics import f1_score\nfrom trainer import MultilabelTrainer\nfrom scipy.special import expit\nimport glob\nimport shutil\nimport numpy as np\n\nimport transformers\nfrom transformers import (\n    AutoConfig,\n    AutoModelForSequenceClassification,\n    AutoTokenizer,\n    DataCollatorWithPadding,\n    EvalPrediction,\n    HfArgumentParser,\n    TrainingArguments,\n    default_data_collator,\n    set_seed,\n    EarlyStoppingCallback,\n)\nfrom transformers.trainer_utils import get_last_checkpoint\nfrom transformers.utils import check_min_version\nfrom transformers.utils.versions import require_version\n\n\n# Will error if the minimal version of Transformers is not installed. Remove at your own risks.\ncheck_min_version(\"4.9.0\")\n\nrequire_version(\"datasets>=1.8.0\", \"To fix: pip install -r examples/pytorch/text-classification/requirements.txt\")\n\nlogger = logging.getLogger(__name__)\n\n\n@dataclass\nclass DataTrainingArguments:\n    \"\"\"\n    Arguments pertaining to what data we are going to input our model for training and eval.\n\n    Using `HfArgumentParser` we can turn this class\n    into argparse arguments to be able to specify them on\n    the command line.\n    \"\"\"\n\n    max_seq_length: Optional[int] = field(\n        default=128,\n        metadata={\n            \"help\": \"The maximum total input sequence length after tokenization. 
Sequences longer \"\n            \"than this will be truncated, sequences shorter will be padded.\"\n        },\n    )\n    overwrite_cache: bool = field(\n        default=False, metadata={\"help\": \"Overwrite the cached preprocessed datasets or not.\"}\n    )\n    pad_to_max_length: bool = field(\n        default=True,\n        metadata={\n            \"help\": \"Whether to pad all samples to `max_seq_length`. \"\n            \"If False, will pad the samples dynamically when batching to the maximum length in the batch.\"\n        },\n    )\n    max_train_samples: Optional[int] = field(\n        default=None,\n        metadata={\n            \"help\": \"For debugging purposes or quicker training, truncate the number of training examples to this \"\n            \"value if set.\"\n        },\n    )\n    max_eval_samples: Optional[int] = field(\n        default=None,\n        metadata={\n            \"help\": \"For debugging purposes or quicker training, truncate the number of evaluation examples to this \"\n            \"value if set.\"\n        },\n    )\n    max_predict_samples: Optional[int] = field(\n        default=None,\n        metadata={\n            \"help\": \"For debugging purposes or quicker training, truncate the number of prediction examples to this \"\n            \"value if set.\"\n        },\n    )\n    server_ip: Optional[str] = field(default=None, metadata={\"help\": \"For distant debugging.\"})\n    server_port: Optional[str] = field(default=None, metadata={\"help\": \"For distant debugging.\"})\n\n\n@dataclass\nclass ModelArguments:\n    \"\"\"\n    Arguments pertaining to which model/config/tokenizer we are going to fine-tune from.\n    \"\"\"\n\n    model_name_or_path: str = field(\n        default=None, metadata={\"help\": \"Path to pretrained model or model identifier from huggingface.co/models\"}\n    )\n    config_name: Optional[str] = field(\n        default=None, metadata={\"help\": \"Pretrained config name or path if not the same as 
model_name\"}\n    )\n    tokenizer_name: Optional[str] = field(\n        default=None, metadata={\"help\": \"Pretrained tokenizer name or path if not the same as model_name\"}\n    )\n    cache_dir: Optional[str] = field(\n        default=None,\n        metadata={\"help\": \"Where do you want to store the pretrained models downloaded from huggingface.co\"},\n    )\n    do_lower_case: Optional[bool] = field(\n        default=True,\n        metadata={\"help\": \"arg to indicate if tokenizer should do lower case in AutoTokenizer.from_pretrained()\"},\n    )\n    use_fast_tokenizer: bool = field(\n        default=True,\n        metadata={\"help\": \"Whether to use one of the fast tokenizer (backed by the tokenizers library) or not.\"},\n    )\n    model_revision: str = field(\n        default=\"main\",\n        metadata={\"help\": \"The specific model version to use (can be a branch name, tag name or commit id).\"},\n    )\n    use_auth_token: bool = field(\n        default=False,\n        metadata={\n            \"help\": \"Will use the token generated when running `transformers-cli login` (necessary to use this script \"\n            \"with private models).\"\n        },\n    )\n\n\ndef main():\n    # See all possible arguments in src/transformers/training_args.py\n    # or by passing the --help flag to this script.\n    # We now keep distinct sets of args, for a cleaner separation of concerns.\n\n    parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))\n    model_args, data_args, training_args = parser.parse_args_into_dataclasses()\n\n    # Setup distant debugging if needed\n    if data_args.server_ip and data_args.server_port:\n        # Distant debugging - see https://code.visualstudio.com/docs/python/debugging#_attach-to-a-local-script\n        import ptvsd\n\n        print(\"Waiting for debugger attach\")\n        ptvsd.enable_attach(address=(data_args.server_ip, data_args.server_port), redirect_output=True)\n        
ptvsd.wait_for_attach()\n\n    # Fix boolean parameter\n    if model_args.do_lower_case == 'False' or not model_args.do_lower_case:\n        model_args.do_lower_case = False\n    else:\n        model_args.do_lower_case = True\n\n    # Setup logging\n    logging.basicConfig(\n        format=\"%(asctime)s - %(levelname)s - %(name)s - %(message)s\",\n        datefmt=\"%m/%d/%Y %H:%M:%S\",\n        handlers=[logging.StreamHandler(sys.stdout)],\n    )\n\n    log_level = training_args.get_process_log_level()\n    logger.setLevel(log_level)\n    datasets.utils.logging.set_verbosity(log_level)\n    transformers.utils.logging.set_verbosity(log_level)\n    transformers.utils.logging.enable_default_handler()\n    transformers.utils.logging.enable_explicit_format()\n\n    # Log on each process the small summary:\n    logger.warning(\n        f\"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}\"\n        + f\", distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}\"\n    )\n    logger.info(f\"Training/evaluation parameters {training_args}\")\n\n    # Detecting last checkpoint.\n    last_checkpoint = None\n    if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir:\n        last_checkpoint = get_last_checkpoint(training_args.output_dir)\n        if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0:\n            raise ValueError(\n                f\"Output directory ({training_args.output_dir}) already exists and is not empty. \"\n                \"Use --overwrite_output_dir to overcome.\"\n            )\n        elif last_checkpoint is not None:\n            logger.info(\n                f\"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change \"\n                \"the `--output_dir` or add `--overwrite_output_dir` to train from scratch.\"\n            )\n\n    # Set seed before initializing model.\n    set_seed(training_args.seed)\n\n    # In distributed training, the load_dataset function guarantees that only one local process can concurrently\n    # download the dataset.\n    # Downloading and loading the unfair_tos dataset from the hub.\n    if training_args.do_train:\n        train_dataset = load_dataset(\"lex_glue\", \"unfair_tos\", split=\"train\", data_dir='data', cache_dir=model_args.cache_dir)\n\n    if training_args.do_eval:\n        eval_dataset = load_dataset(\"lex_glue\", \"unfair_tos\", split=\"validation\", data_dir='data', cache_dir=model_args.cache_dir)\n\n    if training_args.do_predict:\n        predict_dataset = load_dataset(\"lex_glue\", \"unfair_tos\", split=\"test\", data_dir='data', cache_dir=model_args.cache_dir)\n\n    # Labels\n    label_list = list(range(8))\n    num_labels = len(label_list)\n\n    # Load pretrained model and tokenizer\n    # In distributed training, the .from_pretrained methods guarantee that only one local process can concurrently\n    # download model & vocab.\n    config = AutoConfig.from_pretrained(\n        model_args.config_name if model_args.config_name else model_args.model_name_or_path,\n        num_labels=num_labels,\n        finetuning_task=\"unfair_tos\",\n        cache_dir=model_args.cache_dir,\n        revision=model_args.model_revision,\n        use_auth_token=True if model_args.use_auth_token else None,\n    )\n\n    if config.model_type == 'big_bird':\n        config.attention_type = 'original_full'\n\n    if config.model_type == 'longformer':\n        config.attention_window = [128] * config.num_hidden_layers\n\n    tokenizer = AutoTokenizer.from_pretrained(\n        model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path,\n        do_lower_case=model_args.do_lower_case,\n        
cache_dir=model_args.cache_dir,\n        use_fast=model_args.use_fast_tokenizer,\n        revision=model_args.model_revision,\n        use_auth_token=True if model_args.use_auth_token else None,\n    )\n    model = AutoModelForSequenceClassification.from_pretrained(\n        model_args.model_name_or_path,\n        from_tf=bool(\".ckpt\" in model_args.model_name_or_path),\n        config=config,\n        cache_dir=model_args.cache_dir,\n        revision=model_args.model_revision,\n        use_auth_token=True if model_args.use_auth_token else None,\n    )\n\n    # Preprocessing the datasets\n    # Padding strategy\n    if data_args.pad_to_max_length:\n        padding = \"max_length\"\n    else:\n        # We will pad later, dynamically at batch creation, to the max sequence length in each batch\n        padding = False\n\n    def preprocess_function(examples):\n        # Tokenize the texts\n        batch = tokenizer(\n            examples[\"text\"],\n            padding=padding,\n            max_length=data_args.max_seq_length,\n            truncation=True,\n        )\n        batch[\"labels\"] = [[1 if label in labels else 0 for label in label_list] for labels in\n                              examples[\"labels\"]]\n\n        return batch\n\n    if training_args.do_train:\n        if data_args.max_train_samples is not None:\n            train_dataset = train_dataset.select(range(data_args.max_train_samples))\n        with training_args.main_process_first(desc=\"train dataset map pre-processing\"):\n            train_dataset = train_dataset.map(\n                preprocess_function,\n                batched=True,\n                load_from_cache_file=not data_args.overwrite_cache,\n                desc=\"Running tokenizer on train dataset\",\n            )\n        # Log a few random samples from the training set:\n        for index in random.sample(range(len(train_dataset)), 3):\n            logger.info(f\"Sample {index} of the training set: 
{train_dataset[index]}.\")\n\n    if training_args.do_eval:\n        if data_args.max_eval_samples is not None:\n            eval_dataset = eval_dataset.select(range(data_args.max_eval_samples))\n        with training_args.main_process_first(desc=\"validation dataset map pre-processing\"):\n            eval_dataset = eval_dataset.map(\n                preprocess_function,\n                batched=True,\n                load_from_cache_file=not data_args.overwrite_cache,\n                desc=\"Running tokenizer on validation dataset\",\n            )\n\n    if training_args.do_predict:\n        if data_args.max_predict_samples is not None:\n            predict_dataset = predict_dataset.select(range(data_args.max_predict_samples))\n        with training_args.main_process_first(desc=\"prediction dataset map pre-processing\"):\n            predict_dataset = predict_dataset.map(\n                preprocess_function,\n                batched=True,\n                load_from_cache_file=not data_args.overwrite_cache,\n                desc=\"Running tokenizer on prediction dataset\",\n            )\n\n    # You can define your custom compute_metrics function. 
It takes an `EvalPrediction` object (a namedtuple with a\n    # predictions and label_ids field) and has to return a dictionary string to float.\n    def compute_metrics(p: EvalPrediction):\n        # Fix gold labels\n        y_true = np.zeros((p.label_ids.shape[0], p.label_ids.shape[1] + 1), dtype=np.int32)\n        y_true[:, :-1] = p.label_ids\n        y_true[:, -1] = (np.sum(p.label_ids, axis=1) == 0).astype('int32')\n        # Fix predictions\n        logits = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions\n        preds = (expit(logits) > 0.5).astype('int32')\n        y_pred = np.zeros((p.label_ids.shape[0], p.label_ids.shape[1] + 1), dtype=np.int32)\n        y_pred[:, :-1] = preds\n        y_pred[:, -1] = (np.sum(preds, axis=1) == 0).astype('int32')\n        # Compute scores\n        macro_f1 = f1_score(y_true=y_true, y_pred=y_pred, average='macro', zero_division=0)\n        micro_f1 = f1_score(y_true=y_true, y_pred=y_pred, average='micro', zero_division=0)\n        return {'macro-f1': macro_f1, 'micro-f1': micro_f1}\n\n    # Data collator will default to DataCollatorWithPadding, so we change it if we already did the padding.\n    if data_args.pad_to_max_length:\n        data_collator = default_data_collator\n    elif training_args.fp16:\n        data_collator = DataCollatorWithPadding(tokenizer, pad_to_multiple_of=8)\n    else:\n        data_collator = None\n\n    # Initialize our Trainer\n    trainer = MultilabelTrainer(\n        model=model,\n        args=training_args,\n        train_dataset=train_dataset if training_args.do_train else None,\n        eval_dataset=eval_dataset if training_args.do_eval else None,\n        compute_metrics=compute_metrics,\n        tokenizer=tokenizer,\n        data_collator=data_collator,\n        callbacks=[EarlyStoppingCallback(early_stopping_patience=3)]\n    )\n\n    # Training\n    if training_args.do_train:\n        checkpoint = None\n        if training_args.resume_from_checkpoint is not 
None:\n            checkpoint = training_args.resume_from_checkpoint\n        elif last_checkpoint is not None:\n            checkpoint = last_checkpoint\n        train_result = trainer.train(resume_from_checkpoint=checkpoint)\n        metrics = train_result.metrics\n        max_train_samples = (\n            data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset)\n        )\n        metrics[\"train_samples\"] = min(max_train_samples, len(train_dataset))\n\n        trainer.save_model()  # Saves the tokenizer too for easy upload\n\n        trainer.log_metrics(\"train\", metrics)\n        trainer.save_metrics(\"train\", metrics)\n        trainer.save_state()\n\n    # Evaluation\n    if training_args.do_eval:\n        logger.info(\"*** Evaluate ***\")\n        metrics = trainer.evaluate(eval_dataset=eval_dataset)\n\n        max_eval_samples = data_args.max_eval_samples if data_args.max_eval_samples is not None else len(eval_dataset)\n        metrics[\"eval_samples\"] = min(max_eval_samples, len(eval_dataset))\n\n        trainer.log_metrics(\"eval\", metrics)\n        trainer.save_metrics(\"eval\", metrics)\n\n    # Prediction\n    if training_args.do_predict:\n        logger.info(\"*** Predict ***\")\n        predictions, labels, metrics = trainer.predict(predict_dataset, metric_key_prefix=\"predict\")\n\n        max_predict_samples = (\n            data_args.max_predict_samples if data_args.max_predict_samples is not None else len(predict_dataset)\n        )\n        metrics[\"predict_samples\"] = min(max_predict_samples, len(predict_dataset))\n\n        trainer.log_metrics(\"predict\", metrics)\n        trainer.save_metrics(\"predict\", metrics)\n\n        output_predict_file = os.path.join(training_args.output_dir, \"test_predictions.csv\")\n        if trainer.is_world_process_zero():\n            # predictions may be a tuple (logits, ...), as handled in compute_metrics above\n            preds = predictions[0] if isinstance(predictions, tuple) else predictions\n            with open(output_predict_file, \"w\") as writer:\n                for index, pred_list in enumerate(preds):\n                    pred_line = '\\t'.join([f'{pred:.5f}' for pred in pred_list])\n                    writer.write(f\"{index}\\t{pred_line}\\n\")\n\n    # Clean up checkpoints\n    checkpoints = [filepath for filepath in glob.glob(f'{training_args.output_dir}/*/') if '/checkpoint' in filepath]\n    for checkpoint in checkpoints:\n        shutil.rmtree(checkpoint)\n\n\nif __name__ == \"__main__\":\n    main()\n
  },
  {
    "path": "models/deberta.py",
    "content": "import torch\nfrom torch import nn\nfrom transformers import DebertaPreTrainedModel, DebertaModel\nfrom transformers.modeling_outputs import SequenceClassifierOutput, MultipleChoiceModelOutput\nfrom transformers.activations import ACT2FN\n\n\nclass ContextPooler(nn.Module):\n    def __init__(self, config):\n        super().__init__()\n        self.dense = nn.Linear(config.pooler_hidden_size, config.pooler_hidden_size)\n        self.dropout = StableDropout(config.pooler_dropout)\n        self.config = config\n\n    def forward(self, hidden_states):\n        # We \"pool\" the model by simply taking the hidden state corresponding\n        # to the first token.\n\n        context_token = hidden_states[:, 0]\n        context_token = self.dropout(context_token)\n        pooled_output = self.dense(context_token)\n        pooled_output = ACT2FN[self.config.pooler_hidden_act](pooled_output)\n        return pooled_output\n\n    @property\n    def output_dim(self):\n        return self.config.hidden_size\n\n\nclass DropoutContext(object):\n    def __init__(self):\n        self.dropout = 0\n        self.mask = None\n        self.scale = 1\n        self.reuse_mask = True\n\n\ndef get_mask(input, local_context):\n    if not isinstance(local_context, DropoutContext):\n        dropout = local_context\n        mask = None\n    else:\n        dropout = local_context.dropout\n        dropout *= local_context.scale\n        mask = local_context.mask if local_context.reuse_mask else None\n\n    if dropout > 0 and mask is None:\n        mask = (1 - torch.empty_like(input).bernoulli_(1 - dropout)).bool()\n\n    if isinstance(local_context, DropoutContext):\n        if local_context.mask is None:\n            local_context.mask = mask\n\n    return mask, dropout\n\n\nclass XDropout(torch.autograd.Function):\n    \"\"\"Optimized dropout function to save computation and memory by using mask operation instead of multiplication.\"\"\"\n\n    @staticmethod\n    def forward(ctx, 
input, local_ctx):\n        mask, dropout = get_mask(input, local_ctx)\n        ctx.scale = 1.0 / (1 - dropout)\n        if dropout > 0:\n            ctx.save_for_backward(mask)\n            return input.masked_fill(mask, 0) * ctx.scale\n        else:\n            return input\n\n    @staticmethod\n    def backward(ctx, grad_output):\n        if ctx.scale > 1:\n            (mask,) = ctx.saved_tensors\n            return grad_output.masked_fill(mask, 0) * ctx.scale, None\n        else:\n            return grad_output, None\n\n\nclass StableDropout(nn.Module):\n    \"\"\"\n    Optimized dropout module for stabilizing the training\n\n    Args:\n        drop_prob (float): the dropout probabilities\n    \"\"\"\n\n    def __init__(self, drop_prob):\n        super().__init__()\n        self.drop_prob = drop_prob\n        self.count = 0\n        self.context_stack = None\n\n    def forward(self, x):\n        \"\"\"\n        Call the module\n\n        Args:\n            x (:obj:`torch.tensor`): The input tensor to apply dropout\n        \"\"\"\n        if self.training and self.drop_prob > 0:\n            return XDropout.apply(x, self.get_context())\n        return x\n\n    def clear_context(self):\n        self.count = 0\n        self.context_stack = None\n\n    def init_context(self, reuse_mask=True, scale=1):\n        if self.context_stack is None:\n            self.context_stack = []\n        self.count = 0\n        for c in self.context_stack:\n            c.reuse_mask = reuse_mask\n            c.scale = scale\n\n    def get_context(self):\n        if self.context_stack is not None:\n            if self.count >= len(self.context_stack):\n                self.context_stack.append(DropoutContext())\n            ctx = self.context_stack[self.count]\n            ctx.dropout = self.drop_prob\n            self.count += 1\n            return ctx\n        else:\n            return self.drop_prob\n\n\nclass DebertaForSequenceClassification(DebertaPreTrainedModel):\n    def 
__init__(self, config):\n        super().__init__(config)\n\n        num_labels = getattr(config, \"num_labels\", 2)\n        self.num_labels = num_labels\n\n        self.deberta = DebertaModel(config)\n\n        self.classifier = nn.Linear(config.hidden_size, num_labels)\n        drop_out = getattr(config, \"cls_dropout\", None)\n        drop_out = self.config.hidden_dropout_prob if drop_out is None else drop_out\n        self.dropout = nn.Dropout(drop_out)\n\n        self.init_weights()\n\n    def get_input_embeddings(self):\n        return self.deberta.get_input_embeddings()\n\n    def set_input_embeddings(self, new_embeddings):\n        self.deberta.set_input_embeddings(new_embeddings)\n\n    def forward(\n        self,\n        input_ids=None,\n        attention_mask=None,\n        token_type_ids=None,\n        position_ids=None,\n        inputs_embeds=None,\n        labels=None,\n        output_attentions=None,\n        output_hidden_states=None,\n        return_dict=None,\n    ):\n        r\"\"\"\n        labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):\n            Labels for computing the sequence classification/regression loss. Indices should be in :obj:`[0, ...,\n            config.num_labels - 1]`. 
If :obj:`config.num_labels == 1` a regression loss is computed (Mean-Square loss),\n            If :obj:`config.num_labels > 1` a classification loss is computed (Cross-Entropy).\n        \"\"\"\n        return_dict = return_dict if return_dict is not None else self.config.use_return_dict\n\n        outputs = self.deberta(\n            input_ids,\n            token_type_ids=token_type_ids,\n            attention_mask=attention_mask,\n            position_ids=position_ids,\n            inputs_embeds=inputs_embeds,\n            output_attentions=output_attentions,\n            output_hidden_states=output_hidden_states,\n            return_dict=return_dict,\n        )\n\n        pooled_output = self.dropout(outputs[1])\n        logits = self.classifier(pooled_output)\n\n        loss = None\n        if labels is not None:\n            if self.num_labels == 1:\n                # regression task\n                loss_fn = nn.MSELoss()\n                logits = logits.view(-1).to(labels.dtype)\n                loss = loss_fn(logits, labels.view(-1))\n            elif labels.dim() == 1 or labels.size(-1) == 1:\n                label_index = (labels >= 0).nonzero()\n                labels = labels.long()\n                if label_index.size(0) > 0:\n                    labeled_logits = torch.gather(logits, 0, label_index.expand(label_index.size(0), logits.size(1)))\n                    labels = torch.gather(labels, 0, label_index.view(-1))\n                    loss_fct = nn.CrossEntropyLoss()\n                    loss = loss_fct(labeled_logits.view(-1, self.num_labels).float(), labels.view(-1))\n                else:\n                    loss = torch.tensor(0).to(logits)\n            else:\n                log_softmax = nn.LogSoftmax(-1)\n                loss = -((log_softmax(logits) * labels).sum(-1)).mean()\n        if not return_dict:\n            output = (logits,) + outputs[1:]\n            return ((loss,) + output) if loss is not None else output\n        else:\n      
      return SequenceClassifierOutput(\n                loss=loss,\n                logits=logits,\n                hidden_states=outputs.hidden_states,\n                attentions=outputs.attentions,\n            )\n\n\nclass DebertaForMultipleChoice(DebertaPreTrainedModel):\n    def __init__(self, config):\n        super().__init__(config)\n\n        self.deberta = DebertaModel(config)\n        self.pooler = ContextPooler(config)\n        output_dim = self.pooler.output_dim\n        drop_out = getattr(config, \"cls_dropout\", None)\n        drop_out = self.config.hidden_dropout_prob if drop_out is None else drop_out\n        self.dropout = StableDropout(drop_out)\n        self.classifier = nn.Linear(output_dim, 1)\n\n        self.init_weights()\n\n    def forward(\n            self,\n            input_ids=None,\n            attention_mask=None,\n            token_type_ids=None,\n            position_ids=None,\n            inputs_embeds=None,\n            labels=None,\n            output_attentions=None,\n            output_hidden_states=None,\n            return_dict=None,\n    ):\n        r\"\"\"\n        labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):\n            Labels for computing the multiple choice classification loss. Indices should be in ``[0, ...,\n            num_choices-1]`` where :obj:`num_choices` is the size of the second dimension of the input tensors. 
(See\n            :obj:`input_ids` above)\n        \"\"\"\n        return_dict = return_dict if return_dict is not None else self.config.use_return_dict\n        num_choices = input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1]\n\n        input_ids = input_ids.view(-1, input_ids.size(-1)) if input_ids is not None else None\n        attention_mask = attention_mask.view(-1, attention_mask.size(-1)) if attention_mask is not None else None\n        token_type_ids = token_type_ids.view(-1, token_type_ids.size(-1)) if token_type_ids is not None else None\n        position_ids = position_ids.view(-1, position_ids.size(-1)) if position_ids is not None else None\n        inputs_embeds = (\n            inputs_embeds.view(-1, inputs_embeds.size(-2), inputs_embeds.size(-1))\n            if inputs_embeds is not None\n            else None\n        )\n\n        outputs = self.deberta(\n            input_ids,\n            token_type_ids=token_type_ids,\n            attention_mask=attention_mask,\n            position_ids=position_ids,\n            inputs_embeds=inputs_embeds,\n            output_attentions=output_attentions,\n            output_hidden_states=output_hidden_states,\n            return_dict=return_dict,\n        )\n\n        encoder_layer = outputs[0]\n        pooled_output = self.pooler(encoder_layer)\n\n        pooled_output = self.dropout(pooled_output)\n        logits = self.classifier(pooled_output)\n        reshaped_logits = logits.view(-1, num_choices)\n\n        loss = None\n        if labels is not None:\n            loss_fct = nn.CrossEntropyLoss()\n            loss = loss_fct(reshaped_logits, labels)\n\n        if not return_dict:\n            output = (reshaped_logits,) + outputs[2:]\n            return ((loss,) + output) if loss is not None else output\n\n        return MultipleChoiceModelOutput(\n            loss=loss,\n            logits=reshaped_logits,\n            hidden_states=outputs.hidden_states,\n            
attentions=outputs.attentions,\n        )\n\n"
  },
  {
    "path": "models/hierbert.py",
    "content": "from dataclasses import dataclass\nfrom typing import Optional, Tuple\n\nimport torch\nimport numpy as np\nfrom torch import nn\nfrom transformers.file_utils import ModelOutput\n\n\n@dataclass\nclass SimpleOutput(ModelOutput):\n    last_hidden_state: torch.FloatTensor = None\n    past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None\n    hidden_states: Optional[Tuple[torch.FloatTensor]] = None\n    attentions: Optional[Tuple[torch.FloatTensor]] = None\n    cross_attentions: Optional[Tuple[torch.FloatTensor]] = None\n\n\ndef sinusoidal_init(num_embeddings: int, embedding_dim: int):\n    # keep dim 0 for padding token position encoding zero vector\n    position_enc = np.array([\n        [pos / np.power(10000, 2 * i / embedding_dim) for i in range(embedding_dim)]\n        if pos != 0 else np.zeros(embedding_dim) for pos in range(num_embeddings)])\n\n    position_enc[1:, 0::2] = np.sin(position_enc[1:, 0::2])  # dim 2i\n    position_enc[1:, 1::2] = np.cos(position_enc[1:, 1::2])  # dim 2i+1\n    return torch.from_numpy(position_enc).type(torch.FloatTensor)\n\n\nclass HierarchicalBert(nn.Module):\n\n    def __init__(self, encoder, max_segments=64, max_segment_length=128):\n        super(HierarchicalBert, self).__init__()\n        supported_models = ['bert', 'roberta', 'deberta']\n        assert encoder.config.model_type in supported_models  # other model types are not supported so far\n        # Pre-trained segment (token-wise) encoder, e.g., BERT\n        self.encoder = encoder\n        # Specs for the segment-wise encoder\n        self.hidden_size = encoder.config.hidden_size\n        self.max_segments = max_segments\n        self.max_segment_length = max_segment_length\n        # Init sinusoidal positional embeddings\n        self.seg_pos_embeddings = nn.Embedding(max_segments + 1, encoder.config.hidden_size,\n                                               padding_idx=0,\n                                               
_weight=sinusoidal_init(max_segments + 1, encoder.config.hidden_size))\n        # Init segment-wise transformer-based encoder\n        self.seg_encoder = nn.Transformer(d_model=encoder.config.hidden_size,\n                                          nhead=encoder.config.num_attention_heads,\n                                          batch_first=True, dim_feedforward=encoder.config.intermediate_size,\n                                          activation=encoder.config.hidden_act,\n                                          dropout=encoder.config.hidden_dropout_prob,\n                                          layer_norm_eps=encoder.config.layer_norm_eps,\n                                          num_encoder_layers=2, num_decoder_layers=0).encoder\n\n    def forward(self,\n                input_ids=None,\n                attention_mask=None,\n                token_type_ids=None,\n                position_ids=None,\n                head_mask=None,\n                inputs_embeds=None,\n                labels=None,\n                output_attentions=None,\n                output_hidden_states=None,\n                return_dict=None,\n                ):\n        # Hypothetical Example\n        # Batch of 4 documents: (batch_size, n_segments, max_segment_length) --> (4, 64, 128)\n        # BERT-BASE encoder: 768 hidden units\n\n        # Squash samples and segments into a single axis (batch_size * n_segments, max_segment_length) --> (256, 128)\n        input_ids_reshape = input_ids.contiguous().view(-1, input_ids.size(-1))\n        attention_mask_reshape = attention_mask.contiguous().view(-1, attention_mask.size(-1))\n        if token_type_ids is not None:\n            token_type_ids_reshape = token_type_ids.contiguous().view(-1, token_type_ids.size(-1))\n        else:\n            token_type_ids_reshape = None\n\n        # Encode segments with BERT --> (256, 128, 768)\n        encoder_outputs = self.encoder(input_ids=input_ids_reshape,\n                                      
 attention_mask=attention_mask_reshape,\n                                       token_type_ids=token_type_ids_reshape)[0]\n\n        # Reshape back to (batch_size, n_segments, max_segment_length, output_size) --> (4, 64, 128, 768)\n        encoder_outputs = encoder_outputs.contiguous().view(input_ids.size(0), self.max_segments,\n                                                            self.max_segment_length,\n                                                            self.hidden_size)\n\n        # Gather CLS outputs per segment --> (4, 64, 768)\n        encoder_outputs = encoder_outputs[:, :, 0]\n\n        # Infer real segments, i.e., mask paddings\n        seg_mask = (torch.sum(input_ids, 2) != 0).to(input_ids.dtype)\n        # Infer and collect segment positional embeddings\n        seg_positions = torch.arange(1, self.max_segments + 1).to(input_ids.device) * seg_mask\n        # Add segment positional embeddings to segment inputs\n        encoder_outputs += self.seg_pos_embeddings(seg_positions)\n\n        # Encode segments with segment-wise transformer\n        seg_encoder_outputs = self.seg_encoder(encoder_outputs)\n\n        # Collect document representation\n        outputs, _ = torch.max(seg_encoder_outputs, 1)\n\n        return SimpleOutput(last_hidden_state=outputs, hidden_states=outputs)\n\n\nif __name__ == \"__main__\":\n    from transformers import AutoTokenizer, AutoModel, AutoModelForSequenceClassification\n    tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')\n\n    # Use as a stand-alone encoder\n    bert = AutoModel.from_pretrained('bert-base-uncased')\n    model = HierarchicalBert(encoder=bert, max_segments=64, max_segment_length=128)\n\n    fake_inputs = {'input_ids': [], 'attention_mask': [], 'token_type_ids': []}\n    for i in range(4):\n        # Tokenize segment\n        temp_inputs = tokenizer(['dog ' * 126] * 64)\n        fake_inputs['input_ids'].append(temp_inputs['input_ids'])\n        
fake_inputs['attention_mask'].append(temp_inputs['attention_mask'])\n        fake_inputs['token_type_ids'].append(temp_inputs['token_type_ids'])\n\n    fake_inputs['input_ids'] = torch.as_tensor(fake_inputs['input_ids'])\n    fake_inputs['attention_mask'] = torch.as_tensor(fake_inputs['attention_mask'])\n    fake_inputs['token_type_ids'] = torch.as_tensor(fake_inputs['token_type_ids'])\n\n    output = model(fake_inputs['input_ids'], fake_inputs['attention_mask'], fake_inputs['token_type_ids'])\n\n    # 4 document representations of 768 features are expected\n    assert output[0].shape == torch.Size([4, 768])\n\n    # Use with HuggingFace AutoModelForSequenceClassification and Trainer API\n\n    # Init Classifier\n    model = AutoModelForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=10)\n    # Replace flat BERT encoder with hierarchical BERT encoder\n    model.bert = HierarchicalBert(encoder=model.bert, max_segments=64, max_segment_length=128)\n    output = model(fake_inputs['input_ids'], fake_inputs['attention_mask'], fake_inputs['token_type_ids'])\n\n    # 4 document outputs with 10 (num_labels) logits are expected\n    assert output.logits.shape == torch.Size([4, 10])\n\n"
  },
  {
    "path": "models/tfidf_svm.py",
"content": "from nltk.corpus import stopwords\nfrom sklearn.feature_extraction.text import TfidfTransformer, CountVectorizer\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.multiclass import OneVsRestClassifier\nfrom sklearn.svm import LinearSVC\nfrom sklearn import metrics\nfrom sklearn.model_selection import PredefinedSplit\nfrom sklearn.preprocessing import MultiLabelBinarizer\nimport numpy as np\nimport pandas as pd\nfrom sklearn.preprocessing import FunctionTransformer\nfrom sklearn.pipeline import FeatureUnion, Pipeline\nfrom datasets import load_dataset\nimport logging\nimport os\nimport argparse\n\ndataset_n_classes = {'ecthr_a': 10, 'ecthr_b': 10, 'scotus': 14, 'eurlex': 100, 'ledgar': 100, 'unfair_tos': 8, 'case_hold': 5}\n\n\ndef main():\n    parser = argparse.ArgumentParser()\n    # Required arguments\n    parser.add_argument('--dataset', default='case_hold', type=str)\n    parser.add_argument('--task_type', default='multi_class', type=str)\n    parser.add_argument('--text_limit', default=-1, type=int)\n    config = parser.parse_args()\n    n_classes = dataset_n_classes[config.dataset]\n\n    if not os.path.exists(f'logs/{config.dataset}'):\n        if not os.path.exists(f'logs'):\n            os.mkdir(f'logs')\n        os.mkdir(f'logs/{config.dataset}')\n    handlers = [logging.FileHandler(f'logs/{config.dataset}_svm.txt'), logging.StreamHandler()]\n    logging.basicConfig(handlers=handlers, level=logging.INFO)\n\n    def get_text(dataset):\n        if 'ecthr' in config.dataset:\n            texts = [' '.join(text) for text in dataset['text']]\n            return [' '.join(text.split()[:config.text_limit if config.text_limit > 0 else None]) for text in texts]\n        elif config.dataset == 'case_hold':\n            data = [[context] + endings for context, endings in zip(dataset['context'], dataset['endings'])]\n            return pd.DataFrame(data=data,\n                                columns=['context', 'option_1', 'option_2', 'option_3', 
'option_4', 'option_5']\n                                )\n        else:\n            return [' '.join(text.split()[:config.text_limit if config.text_limit > 0 else None]) for text in dataset['text']]\n\n    def get_labels(dataset, mlb=None):\n        if config.task_type == 'multi_class':\n            return dataset['label']\n        else:\n            return mlb.transform(dataset['labels']).tolist()\n\n    def add_zero_class(labels):\n        augmented_labels = np.zeros((len(labels), len(labels[0]) + 1), dtype=np.int32)\n        augmented_labels[:, :-1] = labels\n        augmented_labels[:, -1] = (np.sum(labels, axis=1) == 0).astype('int32')\n        return augmented_labels\n\n    scores = {'micro-f1': [], 'macro-f1': []}\n    dataset = load_dataset('lex_glue', config.dataset)\n\n    for seed in range(1, 6):\n        if config.task_type == 'multi_label':\n            classifier = OneVsRestClassifier(LinearSVC(random_state=seed, max_iter=50000))\n            parameters = {\n                'vect__max_features': [10000, 20000, 40000],\n                'clf__estimator__C': [0.1, 1, 10],\n                'clf__estimator__loss': ('hinge', 'squared_hinge')\n            }\n        elif config.dataset == 'case_hold':\n            classifier = LinearSVC(random_state=seed, max_iter=50000)\n            parameters = {\n                'clf__C': [0.1, 1, 10],\n                'clf__loss': ('hinge', 'squared_hinge')\n            }\n        else:\n            classifier = LinearSVC(random_state=seed, max_iter=50000)\n            parameters = {\n                'vect__max_features': [10000, 20000, 40000],\n                'clf__C': [0.1, 1, 10],\n                'clf__loss': ('hinge', 'squared_hinge')\n            }\n\n        # Init Pipeline (TF-IDF, SVM)\n        if config.dataset == 'case_hold':\n            text_clf = Pipeline([\n                ('union', FeatureUnion([('context_tfidf',\n                                Pipeline([('extract_field', FunctionTransformer(lambda x: x['context'], 
validate=False)),\n                                          ('vect', CountVectorizer(stop_words=stopwords.words('english'),\n                                                                   ngram_range=(1, 3), min_df=5, max_features=40000)),\n                                          ('tfidf', TfidfTransformer())]))] +\n                             [(f'option_{idx}_tfidf',\n                               # Bind idx per lambda; otherwise all five closures see the final idx\n                               Pipeline([('extract_field', FunctionTransformer(lambda x, idx=idx: x[f'option_{idx}'], validate=False)),\n                                         ('vect', CountVectorizer(stop_words=stopwords.words('english'),\n                                                                  ngram_range=(1, 3), min_df=5, max_features=40000)),\n                                         ('tfidf', TfidfTransformer())]))\n                              for idx in range(1, 6)]\n                             )),\n                ('clf', classifier)\n            ])\n        else:\n            text_clf = Pipeline([('vect', CountVectorizer(stop_words=stopwords.words('english'),\n                                                          ngram_range=(1, 3), min_df=5)),\n                                 ('tfidf', TfidfTransformer()),\n                                 ('clf', classifier),\n                                 ])\n\n        # Fixate Validation Split\n        split_index = [-1] * len(dataset['train']) + [0] * len(dataset['validation'])\n        val_split = PredefinedSplit(test_fold=split_index)\n        gs_clf = GridSearchCV(text_clf, parameters, cv=val_split, n_jobs=32, verbose=4, refit=False)\n\n        # Pre-process inputs, outputs\n        x_train = get_text(dataset['train'])\n        x_val = get_text(dataset['validation'])\n        # get_text returns a DataFrame for case_hold and a plain list otherwise\n        x_train_val = pd.concat([x_train, x_val]) if config.dataset == 'case_hold' else x_train + x_val\n\n        if config.task_type == 'multi_label':\n            mlb = MultiLabelBinarizer(classes=range(n_classes))\n            mlb.fit(dataset['train']['labels'])\n        else:\n            mlb = None\n    
    y_train = get_labels(dataset['train'], mlb)\n        y_val = get_labels(dataset['validation'], mlb)\n        y_train_val = y_train + y_val\n\n        # Train classifier\n        gs_clf = gs_clf.fit(x_train_val, y_train_val)\n\n        # Print best hyper-parameters\n        logging.info('Best Parameters:')\n        for param_name in sorted(parameters.keys()):\n            logging.info(\"%s: %r\" % (param_name, gs_clf.best_params_[param_name]))\n        \n        # Retrain model with best CV parameters only with train data\n        text_clf.set_params(**gs_clf.best_params_)\n        gs_clf = text_clf.fit(x_train, y_train)\n        \n        # Report results\n        logging.info('VALIDATION RESULTS:')\n        y_pred = gs_clf.predict(get_text(dataset['validation']))\n        y_true = get_labels(dataset[\"validation\"], mlb)\n        if config.task_type == 'multi_label' and config.dataset != 'eurlex':\n            y_true = add_zero_class(y_true)\n            y_pred = add_zero_class(y_pred)\n\n        logging.info(f'Micro-F1: {metrics.f1_score(y_true, y_pred, average=\"micro\")*100:.1f}')\n        logging.info(f'Macro-F1: {metrics.f1_score(y_true, y_pred, average=\"macro\")*100:.1f}')\n\n        logging.info('TEST RESULTS:')\n        y_pred = gs_clf.predict(get_text(dataset['test']))\n        y_true = get_labels(dataset[\"test\"], mlb)\n        if config.task_type == 'multi_label' and config.dataset != 'eurlex':\n            y_true = add_zero_class(y_true)\n            y_pred = add_zero_class(y_pred)\n        logging.info(f'Micro-F1: {metrics.f1_score(y_true, y_pred, average=\"micro\")*100:.1f}')\n        logging.info(f'Macro-F1: {metrics.f1_score(y_true, y_pred, average=\"macro\")*100:.1f}')\n\n        scores['micro-f1'].append(metrics.f1_score(y_true, y_pred, average=\"micro\"))\n        scores['macro-f1'].append(metrics.f1_score(y_true, y_pred, average=\"macro\"))\n\n    # Report averaged results across runs\n    logging.info('-' * 100)\n    
logging.info(f'Micro-F1: {np.mean(scores[\"micro-f1\"])*100:.1f} +/- {np.std(scores[\"micro-f1\"])*100:.1f}\\t'\n                 f'Macro-F1: {np.mean(scores[\"macro-f1\"])*100:.1f} +/- {np.std(scores[\"macro-f1\"])*100:.1f}')\n\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "requirements.txt",
    "content": "torch>=1.9.0\ntransformers>=4.9.0\nscikit-learn>=0.24.1\ntqdm>=4.61.1\nnumpy>=1.20.1\ndatasets>=1.18.1\nnltk>=3.5\nscipy>=1.6.3\n"
  },
  {
    "path": "scripts/run_case_hold.sh",
    "content": "GPU_NUMBER=0\nMODEL_NAME='bert-base-uncased'\nBATCH_SIZE=8\nACCUMULATION_STEPS=1\nTASK='case_hold'\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/case_hold.py --task_name ${TASK} --model_name_or_path ${MODEL_NAME} --output_dir logs/${TASK}/${MODEL_NAME}/seed_1 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 1 --fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/case_hold.py --task_name ${TASK} --model_name_or_path ${MODEL_NAME} --output_dir logs/${TASK}/${MODEL_NAME}/seed_2 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 2 --fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/case_hold.py --task_name ${TASK} --model_name_or_path ${MODEL_NAME} --output_dir logs/${TASK}/${MODEL_NAME}/seed_3 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 3 --fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} 
--eval_accumulation_steps ${ACCUMULATION_STEPS}\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/case_hold.py --task_name ${TASK} --model_name_or_path ${MODEL_NAME} --output_dir logs/${TASK}/${MODEL_NAME}/seed_4 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 4 --fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/case_hold.py --task_name ${TASK} --model_name_or_path ${MODEL_NAME} --output_dir logs/${TASK}/${MODEL_NAME}/seed_5 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 5 --fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\npython statistics/compute_avg_scores.py --dataset ${TASK}"
  },
  {
    "path": "scripts/run_ecthr.sh",
    "content": "GPU_NUMBER=0\nMODEL_NAME='bert-base-uncased'\nLOWER_CASE='True'\nBATCH_SIZE=2\nACCUMULATION_STEPS=4\nTASK='ecthr_a'\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/ecthr.py --model_name_or_path ${MODEL_NAME} --do_lower_case ${LOWER_CASE} --task ${TASK} --output_dir logs/${TASK}/${MODEL_NAME}/seed_1 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 1 --fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/ecthr.py --model_name_or_path ${MODEL_NAME} --do_lower_case ${LOWER_CASE} --task ${TASK} --output_dir logs/${TASK}/${MODEL_NAME}/seed_2 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 2 --fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/ecthr.py --model_name_or_path ${MODEL_NAME} --do_lower_case ${LOWER_CASE} --task ${TASK} --output_dir logs/${TASK}/${MODEL_NAME}/seed_3 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 3 
--fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/ecthr.py --model_name_or_path ${MODEL_NAME} --do_lower_case ${LOWER_CASE} --task ${TASK} --output_dir logs/${TASK}/${MODEL_NAME}/seed_4 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 4 --fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/ecthr.py --model_name_or_path ${MODEL_NAME} --do_lower_case ${LOWER_CASE} --task ${TASK} --output_dir logs/${TASK}/${MODEL_NAME}/seed_5 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 5 --fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n"
  },
  {
    "path": "scripts/run_eurlex.sh",
    "content": "GPU_NUMBER=6\nMODEL_NAME='bert-base-uncased'\nLOWER_CASE='True'\nBATCH_SIZE=8\nACCUMULATION_STEPS=1\nTASK='eurlex'\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/eurlex.py --model_name_or_path ${MODEL_NAME} --do_lower_case ${LOWER_CASE}  --output_dir logs/${TASK}/${MODEL_NAME}/seed_1 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 2 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 1 --fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/eurlex.py --model_name_or_path ${MODEL_NAME} --do_lower_case ${LOWER_CASE} --output_dir logs/${TASK}/${MODEL_NAME}/seed_2 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 2 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 2 --fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/eurlex.py --model_name_or_path ${MODEL_NAME} --do_lower_case ${LOWER_CASE} --output_dir logs/${TASK}/${MODEL_NAME}/seed_3 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 2 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 3 --fp16 --fp16_full_eval 
--gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/eurlex.py --model_name_or_path ${MODEL_NAME} --do_lower_case ${LOWER_CASE} --output_dir logs/${TASK}/${MODEL_NAME}/seed_4 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 2 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 4 --fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/eurlex.py --model_name_or_path ${MODEL_NAME} --do_lower_case ${LOWER_CASE} --output_dir logs/${TASK}/${MODEL_NAME}/seed_5 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 2 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 5 --fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\npython statistics/compute_avg_scores.py --dataset ${TASK}"
  },
  {
    "path": "scripts/run_ledgar.sh",
    "content": "GPU_NUMBER=0\nMODEL_NAME='bert-base-uncased'\nLOWER_CASE='True'\nBATCH_SIZE=8\nACCUMULATION_STEPS=1\nTASK='ledgar'\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/ledgar.py --model_name_or_path ${MODEL_NAME} --do_lower_case ${LOWER_CASE}  --output_dir logs/${TASK}/${MODEL_NAME}/seed_1 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 1 --fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/ledgar.py --model_name_or_path ${MODEL_NAME} --do_lower_case ${LOWER_CASE} --output_dir logs/${TASK}/${MODEL_NAME}/seed_2 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 2 --fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/ledgar.py --model_name_or_path ${MODEL_NAME} --do_lower_case ${LOWER_CASE} --output_dir logs/${TASK}/${MODEL_NAME}/seed_3 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 3 --fp16 --fp16_full_eval 
--gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/ledgar.py --model_name_or_path ${MODEL_NAME} --do_lower_case ${LOWER_CASE} --output_dir logs/${TASK}/${MODEL_NAME}/seed_4 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 4 --fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/ledgar.py --model_name_or_path ${MODEL_NAME} --do_lower_case ${LOWER_CASE} --output_dir logs/${TASK}/${MODEL_NAME}/seed_5 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 5 --fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\npython statistics/compute_avg_scores.py --dataset ${TASK}"
  },
  {
    "path": "scripts/run_scotus.sh",
    "content": "GPU_NUMBER=0\nMODEL_NAME='bert-base-uncased'\nLOWER_CASE='True'\nBATCH_SIZE=2\nACCUMULATION_STEPS=4\nTASK='scotus'\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/scotus.py --model_name_or_path ${MODEL_NAME} --do_lower_case ${LOWER_CASE}  --output_dir logs/${TASK}/${MODEL_NAME}/seed_1 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 1 --fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/scotus.py --model_name_or_path ${MODEL_NAME} --do_lower_case ${LOWER_CASE} --output_dir logs/${TASK}/${MODEL_NAME}/seed_2 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 2 --fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/scotus.py --model_name_or_path ${MODEL_NAME} --do_lower_case ${LOWER_CASE} --output_dir logs/${TASK}/${MODEL_NAME}/seed_3 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 3 --fp16 --fp16_full_eval 
--gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/scotus.py --model_name_or_path ${MODEL_NAME} --do_lower_case ${LOWER_CASE} --output_dir logs/${TASK}/${MODEL_NAME}/seed_4 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 4 --fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/scotus.py  --model_name_or_path ${MODEL_NAME} --do_lower_case ${LOWER_CASE} --output_dir logs/${TASK}/${MODEL_NAME}/seed_5 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 5 --fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\npython statistics/compute_avg_scores.py --dataset ${TASK}"
  },
  {
    "path": "scripts/run_tfidf_svm.sh",
    "content": "DATASET='eurlex'\nTASK_TYPE='multi_label'\nN_CLASSES=100\n\npython models/tfidf_svm.py --dataset ${DATASET} --task_type ${TASK_TYPE} --n_classes ${N_CLASSES}"
  },
  {
    "path": "scripts/run_unfair_tos.sh",
    "content": "GPU_NUMBER=0\nMODEL_NAME='bert-base-uncased'\nLOWER_CASE='True'\nBATCH_SIZE=8\nACCUMULATION_STEPS=1\nTASK='unfair_tos'\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/unfair_tos.py --model_name_or_path ${MODEL_NAME} --do_lower_case ${LOWER_CASE} --output_dir logs/${TASK}/${MODEL_NAME}/seed_1 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 1 --fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/unfair_tos.py --model_name_or_path ${MODEL_NAME} --do_lower_case ${LOWER_CASE} --output_dir logs/${TASK}/${MODEL_NAME}/seed_2 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 2 --fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/unfair_tos.py --model_name_or_path ${MODEL_NAME} --do_lower_case ${LOWER_CASE} --output_dir logs/${TASK}/${MODEL_NAME}/seed_3 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 3 --fp16 --fp16_full_eval 
--gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/unfair_tos.py --model_name_or_path ${MODEL_NAME} --do_lower_case ${LOWER_CASE} --output_dir logs/${TASK}/${MODEL_NAME}/seed_4 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 4 --fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\nCUDA_VISIBLE_DEVICES=${GPU_NUMBER} python experiments/unfair_tos.py --model_name_or_path ${MODEL_NAME} --do_lower_case ${LOWER_CASE} --output_dir .logs/${TASK}/${MODEL_NAME}/seed_5 --do_train --do_eval --do_pred --overwrite_output_dir --load_best_model_at_end --metric_for_best_model micro-f1 --greater_is_better True --evaluation_strategy epoch --save_strategy epoch --save_total_limit 5 --num_train_epochs 20 --learning_rate 3e-5 --per_device_train_batch_size ${BATCH_SIZE} --per_device_eval_batch_size ${BATCH_SIZE} --seed 5 --fp16 --fp16_full_eval --gradient_accumulation_steps ${ACCUMULATION_STEPS} --eval_accumulation_steps ${ACCUMULATION_STEPS}\n\npython statistics/compute_avg_scores.py --dataset ${TASK}\n"
  },
  {
    "path": "statistics/compute_avg_lexglue_scores.py",
    "content": "import copy\nimport json\nimport os\nimport argparse\nimport numpy as np\nfrom scipy.stats import hmean, gmean\n\n\ndef main():\n    ''' set default hyperparams in default_hyperparams.py '''\n    parser = argparse.ArgumentParser()\n\n    # Required arguments\n    parser.add_argument('--filter_outliers', default=True)\n    parser.add_argument('--top_k', default=1)\n    config = parser.parse_args()\n\n    MODELS = ['bert-base-uncased', 'roberta-base', 'microsoft/deberta-base', 'allenai/longformer-base-4096',\n              'google/bigbird-roberta-base', 'nlpaueb/legal-bert-base-uncased', 'zlucia/custom-legalbert', 'roberta-large']\n    DATASETS = ['ecthr_a', 'ecthr_b', 'eurlex', 'scotus', 'ledgar', 'unfair_tos', 'casehold']\n    MODEL_NAMES = ['BERT', 'RoBERTa', 'DeBERTa', 'Longformer', 'BigBird', 'Legal-BERT', 'CaseLaw-BERT', 'RoBERTa']\n\n    score_dicts = {model: {'dev': {'micro': [], 'macro': []}, 'test': {'micro': [], 'macro': []}}\n                   for model in MODELS}\n\n    for model in MODELS:\n        for dataset in DATASETS:\n            BASE_DIR = f'/Users/rwg642/Desktop/LEXGLUE/RESULTS/{dataset}'\n\n            score_dict = {'dev': {'micro': [], 'macro': []},\n                          'test': {'micro': [], 'macro': []}}\n\n            for seed in range(1, 6):\n                try:\n                    seed = f'seed_{seed}'\n                    with open(os.path.join(BASE_DIR, model, seed, 'all_results.json')) as json_file:\n                        json_data = json.load(json_file)\n                        score_dict['dev']['micro'].append(float(json_data['eval_micro-f1']))\n                        score_dict['dev']['macro'].append(float(json_data['eval_macro-f1']))\n                        score_dict['test']['micro'].append(float(json_data['predict_micro-f1']))\n                        score_dict['test']['macro'].append(float(json_data['predict_macro-f1']))\n                except:\n                    continue\n            temp_stats = 
copy.deepcopy(score_dict)\n            if config.filter_outliers:\n                seed_scores = [(idx, score) for (idx, score) in enumerate(score_dict['dev']['macro'])]\n                sorted_scores = sorted(seed_scores, key=lambda tup: tup[1], reverse=True)\n                top_k_ids = [idx for idx, score in sorted_scores[:config.top_k]]\n                for subset in ['dev', 'test']:\n                    temp_stats[subset]['micro'] = [score for idx, score in enumerate(score_dict[subset]['micro']) if\n                                                   idx in top_k_ids]\n                    temp_stats[subset]['macro'] = [score for idx, score in enumerate(score_dict[subset]['macro']) if\n                                                   idx in top_k_ids]\n            for subset in ['dev', 'test']:\n                for avg in ['micro', 'macro']:\n                    score_dicts[model][subset][avg].append(np.mean(temp_stats[subset][avg]))\n\n    print('-' * 253)\n    print(f'{\"MODEL NAME\":>35} | {\"A-MEAN\":<33} | {\"H-MEAN\":<33} | {\"G-MEAN\":<33} |')\n    print('-' * 253)\n    for idx, (method, stats) in enumerate(score_dicts.items()):\n        algo_means = {'dev': {'micro': [0.0, 0.0, 0.0], 'macro': [0.0, 0.0, 0.0]},\n                      'test': {'micro': [0.0, 0.0, 0.0], 'macro': [0.0, 0.0, 0.0]}}\n        for subset in ['dev', 'test']:\n            for avg in ['micro', 'macro']:\n                algo_means[subset][avg][0] = np.mean(stats[subset][avg])\n                algo_means[subset][avg][1] = hmean(stats[subset][avg])\n                algo_means[subset][avg][2] = gmean(stats[subset][avg])\n        report_line = f'<tr><td>{MODEL_NAMES[idx]}</td>'\n        for task_idx in range(3):\n            report_line += f'<td> {algo_means[\"test\"][\"micro\"][task_idx] * 100:.1f} / '\n            report_line += f' {algo_means[\"test\"][\"macro\"][task_idx] * 100:.1f} </td>'\n        report_line += '</tr>'\n\n        print(report_line)\n\n\nif __name__ == 
'__main__':\n    main()\n"
  },
  {
    "path": "statistics/compute_avg_scores.py",
    "content": "import copy\nimport json\nimport os\nimport argparse\nimport numpy as np\nimport warnings\n\nwarnings.filterwarnings('ignore')\n\ndef main():\n    ''' set default hyperparams in default_hyperparams.py '''\n    parser = argparse.ArgumentParser()\n\n    # Required arguments\n    parser.add_argument('--dataset',  default='scotus')\n    parser.add_argument('--filter_outliers', default=True)\n    parser.add_argument('--top_k', default=3)\n    config = parser.parse_args()\n\n    BASE_DIR = f'logs/{config.dataset}'\n\n    if os.path.exists(BASE_DIR):\n        print(f'{BASE_DIR} exists!')\n\n    score_dicts = {}\n    MODELS = ['bert-base-uncased', 'roberta-base', 'microsoft/deberta-base', 'allenai/longformer-base-4096',\n              'google/bigbird-roberta-base', 'nlpaueb/legal-bert-base-uncased', 'zlucia/custom-legalbert', 'roberta-large']\n\n    for model in MODELS:\n        score_dict = {'dev': {'micro': [], 'macro': []},\n                      'test': {'micro': [], 'macro': []}}\n\n        for seed in range(1, 6):\n            seed = f'seed_{seed}'\n            try:\n                with open(os.path.join(BASE_DIR, model, seed, 'all_results.json')) as json_file:\n                    json_data = json.load(json_file)\n                    score_dict['dev']['micro'].append(float(json_data['eval_micro-f1']))\n                    score_dict['dev']['macro'].append(float(json_data['eval_macro-f1']))\n                    score_dict['test']['micro'].append(float(json_data['predict_micro-f1']))\n                    score_dict['test']['macro'].append(float(json_data['predict_macro-f1']))\n            except:\n                continue\n\n        score_dicts[model] = score_dict\n\n    print(f'{\" \" * 36} {\"VALIDATION\":<47} | {\"TEST\"}')\n    print('-' * 200)\n    for algo, stats in score_dicts.items():\n        temp_stats = copy.deepcopy(stats)\n        if config.filter_outliers:\n            seed_scores = [(idx, score) for (idx, score) in 
enumerate(stats['dev']['macro'])]\n            sorted_scores = sorted(seed_scores, key=lambda tup: tup[1], reverse=True)\n            top_k_ids = [idx for idx, score in sorted_scores[:config.top_k]]\n            temp_stats['dev']['micro'] = [score for idx, score in enumerate(stats['dev']['micro']) if\n                                           idx in top_k_ids]\n            temp_stats['dev']['macro'] = [score for idx, score in enumerate(stats['dev']['macro']) if\n                                           idx in top_k_ids]\n            temp_stats['test']['micro'] = [score for idx, score in enumerate(stats['test']['micro']) if\n                                           idx in top_k_ids[:1]]\n            temp_stats['test']['macro'] = [score for idx, score in enumerate(stats['test']['macro']) if\n                                           idx in top_k_ids[:1]]\n\n        report_line = f'{algo:>35}: MICRO-F1: {np.mean(temp_stats[\"dev\"][\"micro\"])*100:.1f}\\t ± {np.std(temp_stats[\"dev\"][\"micro\"])*100:.1f}\\t'\n        report_line += f'MACRO-F1: {np.mean(temp_stats[\"dev\"][\"macro\"])*100:.1f}\\t ± {np.std(temp_stats[\"dev\"][\"macro\"])*100:.1f}\\t'\n        report_line += ' | '\n        report_line += f'MICRO-F1: {np.mean(temp_stats[\"test\"][\"micro\"])*100:.1f}\\t'\n        report_line += f'MACRO-F1: {np.mean(temp_stats[\"test\"][\"macro\"])*100:.1f}\\t'\n\n        print(report_line)\n\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "statistics/compute_lexglue_scores.py",
    "content": "import copy\nimport json\nimport os\nimport argparse\nimport numpy as np\n\n\ndef main():\n    ''' set default hyperparams in default_hyperparams.py '''\n    parser = argparse.ArgumentParser()\n\n    # Required arguments\n    parser.add_argument('--filter_outliers', default=True)\n    parser.add_argument('--top_k', default=1)\n    config = parser.parse_args()\n\n    MODELS = ['bert-base-uncased', 'roberta-base', 'microsoft/deberta-base', 'allenai/longformer-base-4096',\n              'google/bigbird-roberta-base', 'nlpaueb/legal-bert-base-uncased', 'zlucia/custom-legalbert']\n    DATASETS = ['ecthr_a', 'ecthr_b', 'eurlex', 'scotus', 'ledgar', 'unfair_tos', 'casehold']\n    MODEL_NAMES = ['BERT', 'RoBERTa', 'DeBERTa', 'Longformer', 'BigBird', 'Legal-BERT', 'CaseLaw-BERT']\n\n    score_dicts = {model: {'dev': {'micro': [], 'macro': []}, 'test': {'micro': [], 'macro': []}}\n                   for model in MODELS}\n\n    for model in MODELS:\n        for dataset in DATASETS:\n            BASE_DIR = f'logs/{dataset}'\n\n            score_dict = {'dev': {'micro': [], 'macro': []},\n                          'test': {'micro': [], 'macro': []}}\n\n            for seed in range(1, 6):\n                try:\n                    seed = f'seed_{seed}'\n                    with open(os.path.join(BASE_DIR, model, seed, 'all_results.json')) as json_file:\n                        json_data = json.load(json_file)\n                        score_dict['dev']['micro'].append(float(json_data['eval_micro-f1']))\n                        score_dict['dev']['macro'].append(float(json_data['eval_macro-f1']))\n                        score_dict['test']['micro'].append(float(json_data['predict_micro-f1']))\n                        score_dict['test']['macro'].append(float(json_data['predict_macro-f1']))\n                except:\n                    continue\n            temp_stats = copy.deepcopy(score_dict)\n            if config.filter_outliers:\n                seed_scores = 
[(idx, score) for (idx, score) in enumerate(score_dict['dev']['macro'])]\n                sorted_scores = sorted(seed_scores, key=lambda tup: tup[1], reverse=True)\n                top_k_ids = [idx for idx, score in sorted_scores[:config.top_k]]\n                for subset in ['dev', 'test']:\n                    temp_stats[subset]['micro'] = [score for idx, score in enumerate(score_dict[subset]['micro']) if\n                                                   idx in top_k_ids]\n                    temp_stats[subset]['macro'] = [score for idx, score in enumerate(score_dict[subset]['macro']) if\n                                                   idx in top_k_ids]\n            for subset in ['dev', 'test']:\n                for avg in ['micro', 'macro']:\n                    score_dicts[model][subset][avg].append(np.mean(temp_stats[subset][avg]))\n\n    print('-' * 253)\n    print(f'{\"DATASET\":>35} & ', ' & '.join([f\"{dataset}\" for dataset in DATASETS]).upper(), ' \\\\\\\\')\n    print('-' * 253)\n    for idx, (method, stats) in enumerate(score_dicts.items()):\n        report_line = f'<tr><td>{MODEL_NAMES[idx]}</td> '\n        for task_idx in range(len(DATASETS)):\n            report_line += f'<td> {stats[\"test\"][\"micro\"][task_idx] * 100:.1f} / '\n            report_line += f' {stats[\"test\"][\"macro\"][task_idx] * 100:.1f} </td> '\n            # report_line += '</tr>' if task_idx == len(DATASETS) - 1 else '&'\n        report_line += '</tr>'\n        print(report_line)\n        # print('-' * 253)\n\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "statistics/report_model_results.py",
    "content": "import json\nimport os\nimport argparse\n\n\ndef main():\n    ''' set default hyperparams in default_hyperparams.py '''\n    parser = argparse.ArgumentParser()\n\n    # Required arguments\n    parser.add_argument('--model',  default='roberta-base')\n    config = parser.parse_args()\n\n    MODEL = config.model\n    TASKS = ['ecthr_a', 'ecthr_b', 'scotus', 'eurlex', 'ledgar', 'unfair_tos', 'case_hold']\n    for task in TASKS:\n        print('-' * 100)\n        print(task.upper())\n        print('-' * 100)\n        BASE_DIR = f'logs/{task}'\n        print(f'{\" \" * 10}   | {\"VALIDATION\":<40} | {\"TEST\":<40}')\n        print('-' * 100)\n        for seed in range(1, 6):\n            seed = f'seed_{seed}'\n            try:\n                with open(os.path.join(BASE_DIR, MODEL, seed, 'all_results.json')) as json_file:\n                    json_data = json.load(json_file)\n                    dev_micro_f1 = float(json_data['eval_micro-f1'])\n                    dev_macro_f1 = float(json_data['eval_macro-f1'])\n                    test_micro_f1 = float(json_data['predict_micro-f1'])\n                    test_macro_f1 = float(json_data['predict_macro-f1'])\n                    epoch = float(json_data['epoch'])\n                report_line = f'EPOCH: {epoch: 2.1f} | '\n                report_line += f'MICRO-F1: {dev_micro_f1 * 100:.1f}\\t'\n                report_line += f'MACRO-F1: {dev_macro_f1 * 100:.1f}\\t'\n                report_line += ' | '\n                report_line += f'MICRO-F1: {test_micro_f1 * 100:.1f}\\t'\n                report_line += f'MACRO-F1: {test_macro_f1 * 100:.1f}\\t'\n                print(report_line)\n            except:\n                continue\n\n\nif __name__ == '__main__':\n    main()"
  },
  {
    "path": "statistics/report_train_time.py",
    "content": "import json\nimport os\nimport numpy as np\nimport datetime\n\n\ndef main():\n\n    for dataset in ['ecthr_a', 'ecthr_b', 'scotus', 'eurlex', 'ledgar', 'unfair_tos']:\n        print(f'{dataset.upper()}')\n        print('-'*100)\n        BASE_DIR = f'logs/{dataset}'\n        score_dicts = {}\n        MODELS = ['bert-base-uncased', 'roberta-base', 'microsoft/deberta-base', 'nlpaueb/legal-bert-base-uncased',\n                  'zlucia/custom-legalbert', 'allenai/longformer-base-4096', 'google/bigbird-roberta-base']\n        for model in MODELS:\n            score_dict = {'time': [], 'epochs': [], 'time/epoch': []}\n\n            for seed in range(1, 6):\n                seed = f'seed_{seed}'\n                try:\n                    with open(os.path.join(BASE_DIR, model, seed, 'trainer_state.json')) as json_file:\n                        json_data = json.load(json_file)\n                        score_dict['time'].append(json_data['log_history'][-1]['train_runtime'])\n                        score_dict['epochs'].append(json_data['log_history'][-1]['epoch'])\n                        score_dict['time/epoch'].append(json_data['log_history'][-1]['train_runtime']/json_data['log_history'][-1]['epoch'])\n                except:\n                    continue\n\n            score_dicts[model] = score_dict\n\n        for algo, stats in score_dicts.items():\n            total_time = np.mean(stats[\"time\"])\n            time_epoch = np.mean(stats[\"time/epoch\"])\n            print(f'{algo:>35}: TRAIN TIME: {str(datetime.timedelta(seconds=total_time)).split(\".\")[0]}\\t '\n                  f'TIME/EPOCH: {str(datetime.timedelta(seconds=time_epoch)).split(\".\")[0]}\\t'\n                  f' EPOCHS: {np.mean(stats[\"epochs\"]):.1f} ± {np.std(stats[\"epochs\"]):.1f}')\n\n\nif __name__ == '__main__':\n    main()\n"
  },
  {
    "path": "utils/fix_casehold.py",
    "content": "import re\nimport csv\nimport numpy as np\nprompts = []\ntexts = []\n\nwith open('casehold_fixed.csv', \"w\", encoding=\"utf-8\") as out_f:\n    with open('casehold.csv', \"r\", encoding=\"utf-8\") as f:\n        for line in f.readlines():\n            # Eliminate broken records\n            if not re.match('\\d', line) or not re.match('.+\\d\\n$', line):\n                continue\n            else:\n                # Discard samples that are extremely long\n                if len(line) < 5000:\n                    out_f.write(line)\n\n# Reload cleansed data and count text\nwith open('casehold_fixed.csv', \"r\", encoding=\"utf-8\") as f:\n    data = list(csv.reader(f))[1:]\n\nfor idx, sample in enumerate(data):\n    for choice in sample[2:7]:\n        texts.append(sample[1] + ' ' + choice)\n\n# Compute approximate length per sample\nt_lengths = [len(text.split()) for text in texts]\n\nprint(np.mean(t_lengths))\nprint(np.median(t_lengths))\n\n"
  },
  {
    "path": "utils/load_hierbert.py",
    "content": "from models.hierbert import HierarchicalBert\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\nimport torch\n\nMODEL_PATH = '...'\n\n# Load Tokenizer\ntokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)\n\n# Load BERT base model\nmodel = AutoModelForSequenceClassification.from_pretrained(MODEL_PATH)\n\n# Transform BERT base model to Hierarchical BERT\nsegment_encoder = model.bert\nmodel_encoder = HierarchicalBert(encoder=segment_encoder,  max_segments=64, max_segment_length=128)\nmodel.bert = model_encoder\n\n# Load Hierarchical BERT model\nmodel_state_dict = torch.load(f'{MODEL_PATH}/pytorch_model.bin', map_location=torch.device('cpu'))\nmodel.load_state_dict(model_state_dict)\n\n\n# Pre-process text following the hierarchical 3D pre-processing\n# as described either in experiments/ecthr.py, or experiments/scotus.py\ninputs = ...\n\n# Inference\nsoft_predictions = model.predict(inputs)\n\n# Post-process predictions, e.g., sigmoid or argmax\nhard_predictions = torch.argmax(soft_predictions)\n"
  },
  {
    "path": "utils/preprocess_unfair_tos.py",
    "content": "import glob\nimport json\nimport re\n\nfilenames = glob.glob('/Users/rwg642/Downloads/ToS/sentences/*.txt')\n\ndata = {}\ntotal_sentence_count = 0\ncompanies = []\nfor filename in filenames:\n    with open(filename) as file:\n        company = filename.split('/')[-1].split('.')[0]\n        data[f'{company}'] = []\n        text = ''\n        for line in file.readlines():\n            total_sentence_count += 1\n            data[f'{company}'].append(\n                {'company': company, 'release_year': '-', 'labels': [], 'text': line.replace('-lrb-', '(').replace('-rrb-', ')')})\n            text += line + ' '\n\n        matches = re.findall('20[0-2][0-9]', text)\n        if matches:\n            date = matches[0]\n        else:\n            date = '-'\n        companies.append((company, date))\n\nprint('All sentences: ', total_sentence_count)\n\nannotated_sentences = 0\nfor label_type, label_name in zip(\n        ['Labels_A', 'Labels_CH', 'Labels_CR', 'Labels_J', 'Labels_LAW', 'Labels_LTD', 'Labels_TER', 'Labels_USE'],\n        ['Arbitration', 'Unilateral change', 'Content removal', 'Jurisdiction', 'Choice of law',\n         'Limitation of liability', 'Unilateral termination', 'Contract by using']):\n    filenames = glob.glob(f'/Users/rwg642/Downloads/ToS/{label_type}/*.txt')\n    sentence_count = 0\n    for filename in filenames:\n        company = filename.split('/')[-1].split('.')[0]\n        with open(filename) as file:\n            for idx, line in enumerate(file.readlines()):\n                if line == '1\\n':\n                    data[f'{company}'][idx]['labels'].append(label_name)\n                    sentence_count += 1\n                    annotated_sentences += 1\n\n    print(f'{label_type}: ', sentence_count)\n\n\nprint('Unannotated: ', total_sentence_count - annotated_sentences)\n\ncompanies = [('Tinder', '-'), ('Betterpoints_UK', '-'), ('Deliveroo', '-'), ('9gag', '-'), ('Booking', '-'),\n             ('YouTube', '-'), ('Yahoo', '-'), 
('TrueCaller', '-'), ('Skype', '2006'), ('WorldOfWarcraft', '2012'),\n             ('Viber', '2013'), ('Microsoft', '2013'), ('Instagram', '2013'), ('Rovio', '2013'), ('Onavo', '2013'),\n             ('Moves-app', '2014'), ('Syncme', '2014'), ('Google', '2014'), ('Facebook', '2015'), ('Vivino', '2015'),\n             ('Atlas', '2015'), ('Dropbox', '2016'), ('musically', '2016'), ('Spotify', '2016'), ('Endomondo', '2016'),\n             ('WhatsApp', '2016'), ('Zynga', '2016'), ('PokemonGo', '2016'), ('Masquerade', '2016'),\n             ('Skyscanner', '2016'), ('Nintendo', '2017'), ('Airbnb', '2017'), ('Crowdtangle', '2017'),\n             ('TripAdvisor', '2017'), ('Supercell', '2017'), ('Headspace', '2017'), ('Fitbit', '2017'),\n             ('Vimeo', '2017'), ('Oculus', '2017'), ('LindenLab', '2017'), ('Academia', '2017'), ('Amazon', '2017'),\n             ('Netflix', '2017'), ('Snap', '2017'), ('Twitter', '2017'), ('LinkedIn', '2017'), ('Duolingo', '2017'),\n             ('Uber', '2017'), ('Evernote', '2017'), ('eBay', '2017')]\n\n\n# Split chronologically: first 30 companies -> train, next 10 -> val, last 10 -> test\nwith open('/Users/rwg642/PycharmProjects/LexGLUE/dataloaders/unfair_toc/unfair_toc.jsonl', 'w') as out_file:\n    for company, year in companies[:30]:\n        for record in data[company]:\n            record['data_type'] = 'train'\n            record['release_year'] = year\n            out_file.write(json.dumps(record) + '\\n')\n    for company, year in companies[30:40]:\n        for record in data[company]:\n            record['data_type'] = 'val'\n            record['release_year'] = year\n            out_file.write(json.dumps(record) + '\\n')\n    for company, year in companies[40:]:\n        for record in data[company]:\n            record['data_type'] = 'test'\n            record['release_year'] = year\n            out_file.write(json.dumps(record) + '\\n')\n\n# import numpy as np\n# import matplotlib.pyplot as plt\n\n# ecthr = [71.5, 17.0, 15.5, 18.5, 14.7]\n# ledgar = [0.1, 81.1, 0.1, 81.1]\n# 
ecthr_mean = np.mean(ecthr)\n# ledgar_mean = np.mean(ledgar)\n#\n# ecthr_max = np.max(ecthr)\n# ledgar_max= np.max(ledgar)\n#\n# ecthr_std = np.std(ecthr)\n# ledgar_std = np.std(ledgar)\n#\n#\n# fig, ax = plt.subplots()\n# ax.bar(np.arange(2), [ecthr_max, ledgar_max], align='center', alpha=0.3, capsize=10)\n# ax.bar(np.arange(2), [ecthr_mean, ledgar_mean], yerr=[ecthr_std, ledgar_std], align='center', alpha=0.6, ecolor='black', capsize=10)\n# ax.set_ylabel('Macro-F1')\n# ax.set_xticks(np.arange(2))\n# ax.set_xticklabels(['ECtHR (Task A)', 'LEDGAR'])\n# ax.yaxis.grid(True)\n#\n# # Save the figure and show\n# plt.tight_layout()\n# plt.show()\n"
  },
  {
    "path": "utils/subsample_ledgar.py",
    "content": "import json\nimport random\nimport tqdm\nfrom collections import Counter\n\n# NOTE: The dataset has been first enriched with metadata from SEC-EDGAR\n# to figure out the year of submission for the original filings. This\n# part is missing from the script.\n\n# Parse original (augmented) dataset\ncategories = []\nwith open('ledgar.jsonl') as file:\n    for line in tqdm.tqdm(file.readlines()):\n        data = json.loads(line)\n        categories.extend(data['labels'])\n\n# Find the top-100 labels.\ncategories = set([label for label, count in Counter(categories).most_common()[:100]])\n\n\n# Subsample examples labeled with one of the top-100 labels.\nwith open('ledgar_small.jsonl', 'w') as out_file:\n    with open('ledgar.jsonl') as file:\n        for line in tqdm.tqdm(file.readlines()):\n            data = json.loads(line)\n            if set(data['labels']).intersection(categories):\n                labels = set(data['labels']).intersection(categories)\n                if len(labels) == 1:\n                    data['labels'] = sorted(list(labels))\n                    data.pop('clause_types', None)\n                    out_file.write(json.dumps(data)+'\\n')\n\n\n# Organize examples in clusters by year\nyears = []\nsamples = {year: [] for year in ['2016', '2017', '2018', '2019']}\nwith open('ledgar_small.jsonl') as file:\n    for line in tqdm.tqdm(file.readlines()):\n        data = json.loads(line)\n        years.append(data['year'])\n        data.pop('filer_cik', None)\n        data.pop('filer_name', None)\n        data.pop('filer_state', None)\n        data.pop('filer_industry', None)\n        samples[data['year']].append(data)\n\n\n# Write final dataset 60k/10k/10k\nrandom.seed(1)\nwith open('ledgar.jsonl', 'w') as file:\n    final_samples = random.sample(samples['2016'], 30000)\n    final_samples += random.sample(samples['2017'], 30000)\n    for sample in final_samples:\n        sample['data_type'] = 'train'\n        file.write(json.dumps(sample) + 
'\\n')\n    final_samples = random.sample(samples['2018'], 10000)\n    for sample in final_samples:\n        sample['data_type'] = 'dev'\n        file.write(json.dumps(sample) + '\\n')\n    final_samples = random.sample(samples['2019'], 10000)\n    for sample in final_samples:\n        sample['data_type'] = 'test'\n        file.write(json.dumps(sample) + '\\n')\n"
  }
]