[
  {
    "path": "README.md",
    "content": "# Best Incremental Learning\n\nIncremental Learning Repository: A collection of documents, papers, source code, and talks for incremental learning.\n\n**Keywords:** Incremental Learning, Continual Learning, Continuous Learning, Lifelong Learning, Catastrophic Forgetting\n\n> **CATALOGUE**\n>\n> [Quick Start](#quick-start) :sparkles: [Survey](#survey) :sparkles: [Papers by Categories](#papers-by-categories) :sparkles: [Datasets](#datasets) :sparkles: [Tutorial, Workshop, & Talks](#workshop)\n>\n> [Competitions](#competitions) :sparkles: [Awesome Reference](#awesome-reference) :sparkles: [Full Paper List](#paper-list) \n\n## 1 Quick Start <span id='quick-start'></span>\n\n[Continual Learning | Papers With Code](https://paperswithcode.com/task/continual-learning)\n\n[Incremental Learning | Papers With Code](https://paperswithcode.com/task/incremental-learning)\n\n[Class Incremental Learning from the Past to Present by 思悥 | 知乎 ](https://zhuanlan.zhihu.com/p/490308909?utm_source=wechat_session&utm_medium=social&utm_oi=1162267494193799168&utm_campaign=shareopn) (In Chinese)\n\n[A Little Survey of Incremental Learning | 知乎](https://zhuanlan.zhihu.com/p/301117945) (In Chinese)\n\n**Origin of the Study**\n\n+ Catastrophic Forgetting, Rehearsal and Pseudorehearsal(1995)[[paper]](https://www.tandfonline.com/doi/abs/10.1080/09540099550039318)\n\n+ Catastrophic forgetting in connectionist networks(1999)[[paper]](https://www.sciencedirect.com/science/article/pii/S1364661399012942)\n\n+ Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem(1989)[[paper]](https://www.sciencedirect.com/science/article/abs/pii/S0079742108605368)\n\n**Toolbox & Framework**\n\n+ **[CLHive]** [[code]](https://github.com/naderAsadi/CLHive)\n+ **[PTIL]** Prompt-based Incremental Learning Toolbox [[code]](https://github.com/Vision-Intelligence-and-Robots-Group/Prompt-based-CL-Toolbox)\n+ **[LAMDA-PILOT]** PILOT: A Pre-Trained Model-Based Continual Learning 
Toolbox(arXiv 2023)[[paper]](https://arxiv.org/abs/2309.07117)[[code]](https://github.com/sun-hailong/LAMDA-PILOT)![GitHub stars](https://img.shields.io/github/stars/sun-hailong/LAMDA-PILOT.svg?logo=github&label=Stars)\n\n+ **[FACIL]** Class-incremental learning: survey and performance evaluation on image classification(TPAMI 2022)[[paper]](https://arxiv.org/abs/2010.15277)[[code]](https://github.com/mmasana/FACIL)![GitHub stars](https://img.shields.io/github/stars/mmasana/FACIL.svg?logo=github&label=Stars)\n\n+ **[Avalanche]** Avalanche: An End-to-End Library for Continual Learning(CVPR-Workshop 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021W/CLVision/html/Lomonaco_Avalanche_An_End-to-End_Library_for_Continual_Learning_CVPRW_2021_paper.html)[[code]](https://github.com/ContinualAI/avalanche)![GitHub stars](https://img.shields.io/github/stars/ContinualAI/avalanche.svg?logo=github&label=Stars)\n\n+ **[PyCIL]** PyCIL: A Python Toolbox for Class-Incremental Learning(arXiv 2021)[[paper]](https://arxiv.org/abs/2112.12533)[[code]](https://github.com/G-U-N/PyCIL)![GitHub stars](https://img.shields.io/github/stars/G-U-N/PyCIL.svg?logo=github&label=Stars)\n\n+ **[Mammoth]** An Extendible (General) Continual Learning Framework for PyTorch [[code]](https://github.com/aimagelab/mammoth)![GitHub stars](https://img.shields.io/github/stars/aimagelab/mammoth.svg?logo=github&label=Stars)\n\n+ **[PyContinual]** An Easy and Extendible Framework for Continual Learning[[code]](https://github.com/ZixuanKe/PyContinual)![GitHub stars](https://img.shields.io/github/stars/ZixuanKe/PyContinual.svg?logo=github&label=Stars)\n\n**Books**\n\n+ Lifelong Machine Learning [[Link]](https://www.cs.uic.edu/~liub/lifelong-machine-learning.html)\n\n## 2 Survey <span id='survey'></span>\n\n### 2.1 Surveys\n\n+ A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning(arXiv 2023)[[github]](https://github.com/EnnengYang/Awesome-Forgetting-in-Deep-Learning)\n\n+ Deep 
Class-Incremental Learning: A Survey(arXiv 2023)[[paper]](https://arxiv.org/pdf/2302.03648.pdf)[[code]](https://github.com/zhoudw-zdw/CIL_Survey/)![GitHub stars](https://img.shields.io/github/stars/zhoudw-zdw/CIL_Survey.svg?logo=github&label=Stars)\n\n+ A Comprehensive Survey of Continual Learning: Theory, Method and Application(arXiv 2023)[[paper]](https://arxiv.org/abs/2302.00487)\n\n+ **[FACIL]** Class-incremental learning: survey and performance evaluation on image classification(TPAMI 2022)[[paper]](https://arxiv.org/abs/2010.15277)[[code]](https://github.com/mmasana/FACIL)![GitHub stars](https://img.shields.io/github/stars/mmasana/FACIL.svg?logo=github&label=Stars)\n\n+ Online Continual Learning in Image Classification: An Empirical Survey (Neurocomputing 2021)[[paper]](https://arxiv.org/abs/2101.10423)\n\n+ A continual learning survey: Defying forgetting in classification tasks (TPAMI 2021) [[paper]](https://ieeexplore.ieee.org/abstract/document/9349197)\n\n+ Rehearsal revealed: The limits and merits of revisiting samples in continual learning (ICCV 2021)[[paper]](https://arxiv.org/abs/2104.07446)\n\n+ Continual Lifelong Learning in Natural Language Processing: A Survey (COLING 2020) [[paper]](https://www.aclweb.org/anthology/2020.coling-main.574/)\n\n+ A Comprehensive Study of Class Incremental Learning Algorithms for Visual Tasks (Neural Networks 2020) [[paper]](https://arxiv.org/abs/2011.01844)\n\n+ Embracing Change: Continual Learning in Deep Neural Networks(Trends in Cognitive Sciences 2020)[[paper]](https://www.sciencedirect.com/science/article/pii/S1364661320302199)\n\n+ Towards Continual Reinforcement Learning: A Review and Perspectives(arXiv 2020)[[paper]](https://arxiv.org/abs/2012.13490)\n\n+ A comprehensive, application-oriented study of catastrophic forgetting in DNNs (ICLR 2019) 
[[paper]](https://openreview.net/forum?id=BkloRs0qK7)\n\n+ Three scenarios for continual learning (arXiv 2019) [[paper]](https://arxiv.org/abs/1904.07734v1)\n\n+ Continual lifelong learning with neural networks: A review(arXiv 2019)[[paper]](https://arxiv.org/abs/1802.07569)\n\n+ Class-Incremental Learning: Advances and Performance Evaluation (Acta Automatica Sinica 2023)[[paper]](http://www.aas.net.cn/cn/article/doi/10.16383/j.aas.c220588) (In Chinese)\n\n### 2.2 Analysis & Study\n\n+ How Well Do Unsupervised Learning Algorithms Model Human Real-time and Life-long Learning?(NeurIPS 2022)[[paper]](https://openreview.net/forum?id=c0l2YolqD2T)\n\n+ **[WPTP]** A Theoretical Study on Solving Continual Learning(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2211.02633)[[code]](https://github.com/k-gyuhak/WPTP)![GitHub stars](https://img.shields.io/github/stars/k-gyuhak/WPTP)\n\n+ The Challenges of Continuous Self-Supervised Learning(ECCV 2022)[[paper]](https://arxiv.org/abs/2203.12710)\n\n+ Continual learning: a feature extraction formalization, an efficient algorithm, and fundamental obstructions(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2203.14383)\n\n+ A simple but strong baseline for online continual learning: Repeated Augmented Rehearsal(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2209.13917)[[code]](https://github.com/YaqianZhang/RepeatedAugmentedRehearsal)![GitHub stars](https://img.shields.io/github/stars/YaqianZhang/RepeatedAugmentedRehearsal)\n\n+ Exploring Example Influence in Continual Learning(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2209.12241)\n\n+ Biological underpinnings for lifelong learning machines(Nat. Mach. Intell. 
2022)[[paper]](https://www.nature.com/articles/s42256-022-00452-0)\n\n+ Probing Representation Forgetting in Supervised and Unsupervised Continual Learning(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Davari_Probing_Representation_Forgetting_in_Supervised_and_Unsupervised_Continual_Learning_CVPR_2022_paper.html)[[code]](https://github.com/rezazzr/Probing-Representation-Forgetting)![GitHub stars](https://img.shields.io/github/stars/rezazzr/Probing-Representation-Forgetting.svg?logo=github&label=Stars)\n\n+ **[OpenLORIS-Object]** Towards Lifelong Object Recognition: A Dataset and Benchmark(Pattern Recognit 2022)[[paper]](https://www.sciencedirect.com/science/article/abs/pii/S0031320322003004)\n\n+ Learngene: From Open-World to Your Learning Task (AAAI 2022) [[paper]](https://arxiv.org/pdf/2106.06788.pdf)\n\n+ Continual Normalization: Rethinking Batch Normalization for Online Continual Learning (ICLR 2022) [[paper]](https://openreview.net/forum?id=vwLLQ-HwqhZ)\n\n+ **[CLEVA-Compass]** CLEVA-Compass: A Continual Learning Evaluation Assessment Compass to Promote Research Transparency and Comparability (ICLR 2022) [[paper]](https://openreview.net/pdf?id=rHMaBYbkkRJ)[[code]](https://github.com/ml-research/CLEVA-Compass)![GitHub stars](https://img.shields.io/github/stars/ml-research/CLEVA-Compass.svg?logo=github&label=Stars)\n\n+ Learning curves for continual learning in neural networks: Self-knowledge transfer and forgetting (ICLR 2022) [[paper]](https://openreview.net/pdf?id=tFgdrQbbaa)\n\n+ **[CKL]** Towards Continual Knowledge Learning of Language Models (ICLR 2022) [[paper]](https://openreview.net/pdf?id=vfsRB5MImo9)\n\n+ Pretrained Language Model in Continual Learning: A Comparative Study (ICLR 2022) [[paper]](https://openreview.net/pdf?id=figzpGMrdD)\n\n+ Effect of scale on catastrophic forgetting 
in neural networks (ICLR 2022) [[paper]](https://openreview.net/pdf?id=GhVS8_yPeEa)\n\n+ LifeLonger: A Benchmark for Continual Disease Classification(arXiv 2022)[[paper]](https://arxiv.org/abs/2204.05737)\n\n+ **[CDDB]** A Continual Deepfake Detection Benchmark: Dataset, Methods, and Essentials(arXiv 2022)[[paper]](https://arxiv.org/abs/2205.05467)\n\n+ **[BN Tricks]** Diagnosing Batch Normalization in Class Incremental Learning(arXiv 2022)[[paper]](https://arxiv.org/abs/2202.08025)\n\n+ Architecture Matters in Continual Learning(arXiv 2022)[[paper]](https://arxiv.org/abs/2202.00275)\n\n+ Learning where to learn: Gradient sparsity in meta and continual learning(NeurIPS 2021) [[paper]](https://proceedings.neurips.cc/paper/2021/hash/2a10665525774fa2501c2c8c4985ce61-Abstract.html)\n\n+ Continuous Coordination As a Realistic Scenario for Lifelong Learning(ICML 2021)[[paper]](https://proceedings.mlr.press/v139/nekoei21a.html)\n\n+ Understanding the Role of Training Regimes in Continual Learning (NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/hash/518a38cc9a0173d0b2dc088166981cf8-Abstract.html)\n\n+ Optimal Continual Learning has Perfect Memory and is NP-HARD (ICML 2020)[[paper]](https://proceedings.mlr.press/v119/knoblauch20a.html)\n\n### 2.3 Settings\n\n+ **[FSCIL]** Few-shot Class Incremental Learning [[Link]](https://github.com/xyutao/fscil)![GitHub stars](https://img.shields.io/github/stars/xyutao/fscil.svg?logo=github&label=Stars)\n\n+ **[DCIL]** Decentralized Class Incremental Learning [[paper]](https://ieeexplore.ieee.org/document/9932643)[[Setting]](https://github.com/Vision-Intelligence-and-Robots-Group/DCIL)\n\n## 3 Papers by Categories <span id='papers-by-categories'></span>\n\n**Tip:** use Ctrl+F to match an abbreviation with its article, or browse the [paper list](#paper-list) below.\n\n### 3.1 From an Algorithm Perspective\n\n|      |                      Network Structure                       |                          Rehearsal    
                        |\n| :--: | :----------------------------------------------------------: | :----------------------------------------------------------: |\n| 2024 |**SEED**(ICLR 2024)[[paper]](https://openreview.net/forum?id=sSyytcewxe)<br/>**CAMA**(ICLR 2024)[[paper]](https://openreview.net/forum?id=7M0EzjugaN)[[code]](https://github.com/snumprlab/cl-alfred)<br/>**SFR**(ICLR 2024)[[paper]](https://openreview.net/attachment?id=2dhxxIKhqz&name=pdf)[[code]](https://aaltoml.github.io/sfr/)<br/>**HLOP**(ICLR 2024)[[paper]](https://openreview.net/forum?id=MeB86edZ1P)<br/>**TPL**(ICLR 2024)[[paper]](https://openreview.net/forum?id=8QfK9Dq4q0)[[code]](https://github.com/linhaowei1/TPL)<br/>**EFC**(ICLR 2024)[[paper]](https://openreview.net/forum?id=7D9X2cFnt1)<br/>**PICLE**(ICLR 2024)[[paper]](https://openreview.net/forum?id=MVe2dnWPCu)<br/>**OVOR**(ICLR 2024)[[paper]](https://openreview.net/forum?id=FbuyDzZTPt)[[code]](https://github.com/jpmorganchase/ovor)<br/>**PEC**(ICLR 2024)[[paper]](https://openreview.net/forum?id=DJZDgMOLXQ)[[code]](https://github.com/michalzajac-ml/pec)<br/>**refresh learning**(ICLR 2024)[[paper]](https://openreview.net/forum?id=BE5aK0ETbp)<br/>**POCON**(WACV 2024)[[paper]](https://arxiv.org/pdf/2309.06086.pdf)<br/>**CLTA**(WACV 2024)[[paper]](https://arxiv.org/abs/2308.09544)[[code]](https://github.com/fszatkowski/cl-teacher-adaptation)<br/>**FG-KSR**(AAAI 2024)[[paper]](https://arxiv.org/abs/2312.12722)[[code]](https://github.com/scok30/vit-cil) | **MOSE**(CVPR 2024)[[paper]](https://arxiv.org/abs/2404.00417)[[code]](https://github.com/AnAppleCore/MOSE)<br/>**AISEOCL**(Pattern Recognition 2024)[[paper]](https://www.sciencedirect.com/science/article/abs/pii/S0031320323009354)<br/>**AF-FCL**(ICLR 2024)[[paper]](https://openreview.net/forum?id=ShQrnAsbPI)[[code]](https://anonymous.4open.science/r/AF-FCL-7D65)<br/>**DietCL**(ICLR 
2024)[[paper]](https://openreview.net/forum?id=Xvfz8NHmCj)<br/>**BGS**(ICLR 2024)[[paper]](https://openreview.net/forum?id=3Y7r6xueJJ)<br/>**DMU**(WACV 2024)[[paper]](https://openaccess.thecvf.com/content/WACV2024/papers/Raghavan_Online_Class-Incremental_Learning_for_Real-World_Food_Image_Classification_WACV_2024_paper.pdf)[[code]](https://gitlab.com/viper-purdue/OCIL-real-world-food-image-classification) |\n| 2023 |**A-Prompts** (arXiv 2023)[[paper]](https://arxiv.org/abs/2303.13898)<br/>**ESN**(AAAI 2023)[[paper]](https://arxiv.org/abs/2211.15969)[[code]](https://github.com/iamwangyabin/ESN)![GitHub stars](https://img.shields.io/github/stars/iamwangyabin/ESN.svg?logo=github&label=Stars)<br/>**RevisitingCIL**(arXiv 2023)[[paper]](https://arxiv.org/abs/2303.07338)[[code]](https://github.com/zhoudw-zdw/RevisitingCIL)![GitHub stars](https://img.shields.io/github/stars/zhoudw-zdw/RevisitingCIL.svg?logo=github&label=Stars)<br/>**LwP**(ICLR 2023)[[paper]](https://openreview.net/forum?id=gfPUokHsW-)<br/>**SDMLP**(ICLR 2023)[[paper]](https://openreview.net/forum?id=JknGeelZJpHP)<br/>**SaLinA**(ICLR 2023)[[paper]](https://openreview.net/forum?id=ZloanUtG4a)[[code]](https://github.com/facebookresearch/salina/tree/main/salina_cl)<br />**BEEF**(ICLR 2023)[[paper]](https://openreview.net/pdf?id=iP77_axu0h3)[[code]](https://github.com/G-U-N/ICLR23-BEEF)![GitHub stars](https://img.shields.io/github/stars/G-U-N/ICLR23-BEEF.svg?logo=github&label=Stars)<br/>**WaRP**(ICLR 2023)[[paper]](https://openreview.net/pdf?id=kPLzOfPfA2l)<br/>**OBC**(ICLR 2023)[[paper]](https://openreview.net/pdf?id=18XzeuYZh_)<br/>**NC-FSCIL**(ICLR 2023)[[paper]](https://openreview.net/pdf?id=y5W8tpojhtJ)[[code]](https://github.com/NeuralCollapseApplications/FSCIL)![GitHub stars](https://img.shields.io/github/stars/NeuralCollapseApplications/FSCIL.svg?logo=github&label=Stars)<br/>**iVoro**(ICLR 2023)[[paper]](https://openreview.net/pdf?id=zJXg_Wmob03)<br/>**DAS**(ICLR 
2023)[[paper]](https://openreview.net/pdf?id=m_GDIItaI3o)<br/>**Progressive Prompts**(ICLR 2023)[[paper]](https://openreview.net/pdf?id=UJTgQBc91_)<br/>**SDP**(ICLR 2023)[[paper]](https://openreview.net/pdf?id=qco4ekz2Epm)[[code]](https://github.com/yonseivnl/sdp)![GitHub stars](https://img.shields.io/github/stars/yonseivnl/sdp.svg?logo=github&label=Stars)<br/>**iLDR**(ICLR 2023)[[paper]](https://arxiv.org/pdf/2202.05411.pdf)<br/>**SoftNet-FSCIL**(ICLR 2023)[[paper]](https://openreview.net/pdf?id=z57WK5lGeHd)[[code]](https://github.com/ihaeyong/SoftNet-FSCIL)![GitHub stars](https://img.shields.io/github/stars/ihaeyong/SoftNet-FSCIL.svg?logo=github&label=Stars)<br />**PAR**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.05288.pdf)<br/>**PETAL**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2212.09713.pdf)[[code]](https://github.com/dhanajitb/petal)![GitHub stars](https://img.shields.io/github/stars/dhanajitb/petal.svg?logo=github&label=Stars)<br/>**SAVC**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.00426.pdf)[[code]](https://github.com/zysong0113/SAVC)![GitHub stars](https://img.shields.io/github/stars/zysong0113/SAVC.svg?logo=github&label=Stars)<br/>**CODA-Prompt**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2211.13218.pdf)[[code]](https://github.com/GT-RIPL/CODA-Prompt)![GitHub stars](https://img.shields.io/github/stars/GT-RIPL/CODA-Prompt.svg?logo=github&label=Stars) | **FeTrIL**(WACV 2023)[[paper]](https://arxiv.org/abs/2211.13131)[[code]](https://github.com/GregoirePetit/FeTrIL)![GitHub stars](https://img.shields.io/github/stars/GregoirePetit/FeTrIL.svg?logo=github&label=Stars)<br />**ESMER**(ICLR 2023)[[paper]](https://openreview.net/forum?id=zlbci7019Z3)[[code]](https://github.com/NeurAI-Lab/ESMER)![GitHub stars](https://img.shields.io/github/stars/NeurAI-Lab/ESMER.svg?logo=github&label=Stars)<br/>**MEMO**(ICLR 2023)[[paper]](https://arxiv.org/abs/2205.13218)[[code]](https://github.com/wangkiw/ICLR23-MEMO)![GitHub 
stars](https://img.shields.io/github/stars/wangkiw/ICLR23-MEMO.svg?logo=github&label=Stars)<br/>**CUDOS**(ICLR 2023)[[paper]](https://openreview.net/pdf?id=ih0uFRFhaZZ)<br/>**ACGAN**(ICLR 2023)[[paper]](https://openreview.net/pdf?id=cRxYWKiTan)[[code]](https://github.com/daiqing98/FedCIL)![GitHub stars](https://img.shields.io/github/stars/daiqing98/FedCIL.svg?logo=github&label=Stars)<br/>**TAMiL**(ICLR 2023)[[paper]](https://openreview.net/pdf?id=-M0TNnyWFT5)[[code]](https://github.com/NeurAI-Lab/TAMiL)![GitHub stars](https://img.shields.io/github/stars/NeurAI-Lab/TAMiL.svg?logo=github&label=Stars)<br />**RSOI**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.10177.pdf)[[code]](https://github.com/feifeiobama/InfluenceCL)![GitHub stars](https://img.shields.io/github/stars/feifeiobama/InfluenceCL.svg?logo=github&label=Stars)<br/>**TBBN**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2201.12559.pdf)<br/>**AMSS**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.05015.pdf)<br/>**DGCL**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.03931.pdf)<br/>**PCR**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.04408.pdf)[[code]](https://github.com/FelixHuiweiLin/PCR)![GitHub stars](https://img.shields.io/github/stars/FelixHuiweiLin/PCR.svg?logo=github&label=Stars)<br/>**FMWISS**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2302.14250.pdf)<br/>**CL-DETR**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.03110.pdf)[[code]](https://github.com/yaoyao-liu/CL-DETR)![GitHub stars](https://img.shields.io/github/stars/yaoyao-liu/CL-DETR.svg?logo=github&label=Stars)<br/>**PIVOT**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2212.04842.pdf)<br/>**CIM-CIL**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2303.14042.pdf)[[code]](https://github.com/xfflzl/CIM-CIL)![GitHub stars](https://img.shields.io/github/stars/xfflzl/CIM-CIL.svg?logo=github&label=Stars)<br/>**DNE**(CVPR 2023)[[paper]](https://arxiv.org/pdf/2303.12696.pdf) |\n| 2022 | **RD-IOD**(ACM Trans 
2022)[[paper]](https://dl.acm.org/doi/abs/10.1145/3472393)<br/>**NCM**(arXiv 2022)[[paper]](https://arxiv.org/abs/2202.05491)<br/>**IPP**(arXiv 2022)[[paper]](https://arxiv.org/abs/2204.03410)<br/>**Incremental-DETR**(arXiv 2022)[[paper]](https://arxiv.org/abs/2205.04042)<br/>**ELI**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Joseph_Energy-Based_Latent_Aligner_for_Incremental_Learning_CVPR_2022_paper.html)<br/>**CASSLE**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Fini_Self-Supervised_Models_Are_Continual_Learners_CVPR_2022_paper.html)[[code]](https://github.com/DonkeyShot21/cassle)![GitHub stars](https://img.shields.io/github/stars/DonkeyShot21/cassle.svg?logo=github&label=Stars)<br/>**iFS-RCNN**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Nguyen_iFS-RCNN_An_Incremental_Few-Shot_Instance_Segmenter_CVPR_2022_paper.html)<br/>**WILSON**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Cermelli_Incremental_Learning_in_Semantic_Segmentation_From_Image_Labels_CVPR_2022_paper.html)[[code]](https://github.com/fcdl94/WILSON)![GitHub stars](https://img.shields.io/github/stars/fcdl94/WILSON.svg?logo=github&label=Stars)<br/>**Connector**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Lin_Towards_Better_Plasticity-Stability_Trade-Off_in_Incremental_Learning_A_Simple_Linear_CVPR_2022_paper.html)[[code]](https://github.com/lingl1024/Connector)![GitHub stars](https://img.shields.io/github/stars/lingl1024/Connector.svg?logo=github&label=Stars)<br/>**PAD**(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.13167)<br/>**ERD**(CVPR 2022)[[paper]](https://arxiv.org/abs/2204.02136)[[code]](https://github.com/Hi-FT/ERD)![GitHub stars](https://img.shields.io/github/stars/Hi-FT/ERD.svg?logo=github&label=Stars)<br/>**AFC**(CVPR 2022)[[paper]](https://arxiv.org/abs/2204.00895)[[code]](https://github.com/kminsoo/AFC)![GitHub 
stars](https://img.shields.io/github/stars/kminsoo/AFC.svg?logo=github&label=Stars)<br/>**FACT**(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.06953)[[code]](https://github.com/zhoudw-zdw/CVPR22-Fact)![GitHub stars](https://img.shields.io/github/stars/zhoudw-zdw/CVPR22-Fact.svg?logo=github&label=Stars)<br/>**L2P**(CVPR 2022)[[paper]](https://arxiv.org/abs/2112.08654)[[code]](https://github.com/google-research/l2p)![GitHub stars](https://img.shields.io/github/stars/google-research/l2p.svg?logo=github&label=Stars)<br/>**MEAT**(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.11684)[[code]](https://github.com/zju-vipa/MEAT-TIL)![GitHub stars](https://img.shields.io/github/stars/zju-vipa/MEAT-TIL.svg?logo=github&label=Stars)<br/>**RCIL**(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.05402)[[code]](https://github.com/zhangchbin/RCIL)![GitHub stars](https://img.shields.io/github/stars/zhangchbin/RCIL.svg?logo=github&label=Stars)<br/>**ZITS**(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.00867)[[code]](https://github.com/DQiaole/ZITS_inpainting)![GitHub stars](https://img.shields.io/github/stars/DQiaole/ZITS_inpainting.svg?logo=github&label=Stars)<br/>**MTPSL**(CVPR 2022)[[paper]](https://arxiv.org/abs/2111.14893)[[code]](https://github.com/VICO-UoE/MTPSL)![GitHub stars](https://img.shields.io/github/stars/VICO-UoE/MTPSL.svg?logo=github&label=Stars)<br/>**MMA**(CVPR-Workshop 2022)[[paper]](https://arxiv.org/abs/2204.08766)<br/>**CoSCL**(ECCV 2022)[[paper]](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136860249.pdf)[[code]](https://github.com/lywang3081/CoSCL)![GitHub stars](https://img.shields.io/github/stars/lywang3081/CoSCL.svg?logo=github&label=Stars)<br />**AdNS**(ECCV 2022)[[paper]](https://arxiv.org/abs/2207.12061)<br/>**ProCA**(ECCV 2022)[[paper]](https://arxiv.org/abs/2207.10856)[[code]](https://github.com/Hongbin98/ProCA)![GitHub stars](https://img.shields.io/github/stars/Hongbin98/ProCA.svg?logo=github&label=Stars)<br/>**R-DFCIL**(ECCV 
2022)[[paper]](https://arxiv.org/abs/2203.13104)[[code]](https://github.com/jianzhangcs/R-DFCIL)![GitHub stars](https://img.shields.io/github/stars/jianzhangcs/R-DFCIL.svg?logo=github&label=Stars)<br/>**S3C**(ECCV 2022)[[paper]](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136850427.pdf)[[code]](https://github.com/JAYATEJAK/S3C)![GitHub stars](https://img.shields.io/github/stars/JAYATEJAK/S3C.svg?logo=github&label=Stars)<br/>**H^2^**(ECCV 2022)[[paper]](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136710518.pdf)<br/>**DualPrompt**(ECCV 2022)[[paper]](https://arxiv.org/abs/2204.04799)<br/>**ALICE**(ECCV 2022)[[paper]](https://arxiv.org/pdf/2208.00147.pdf)[[code]](https://github.com/CanPeng123/FSCIL_ALICE)![GitHub stars](https://img.shields.io/github/stars/CanPeng123/FSCIL_ALICE.svg?logo=github&label=Stars)<br/>**RU-TIL**(ECCV 2022)[[paper]](https://arxiv.org/pdf/2207.09074.pdf)[[code]](https://github.com/CSIPlab/task-increment-rank-update)![GitHub stars](https://img.shields.io/github/stars/CSIPlab/task-increment-rank-update.svg?logo=github&label=Stars)<br/>**FOSTER**(ECCV 2022)[[paper]](https://arxiv.org/abs/2204.04662)<br/>**SSR**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=boJy41J-tnQ)[[code]](https://github.com/feyzaakyurek/subspace-reg)![GitHub stars](https://img.shields.io/github/stars/feyzaakyurek/subspace-reg.svg?logo=github&label=Stars)<br/>**RGO**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=7YDLgf9_zgm)<br/>**TRGP**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=iEvAf8i6JjO)<br/>**AGCN**(ICME 2022)[[paper]](https://arxiv.org/abs/2203.05534)[[code]](https://github.com/Kaile-Du/AGCN)![GitHub stars](https://img.shields.io/github/stars/Kaile-Du/AGCN.svg?logo=github&label=Stars)<br/>**WSN**(ICML 2022)[[paper]](https://proceedings.mlr.press/v162/kang22b/kang22b.pdf)[[code]](https://github.com/ihaeyong/WSN)![GitHub stars](https://img.shields.io/github/stars/ihaeyong/WSN.svg?logo=github&label=Stars)<br/>**NISPA**(ICML 
2022)[[paper]](https://proceedings.mlr.press/v162/gurbuz22a/gurbuz22a.pdf)[[code]](https://github.com/BurakGurbuz97/NISPA)![GitHub stars](https://img.shields.io/github/stars/BurakGurbuz97/NISPA.svg?logo=github&label=Stars)<br/>**S-FSVI**(ICML 2022)[[paper]](https://proceedings.mlr.press/v162/rudner22a/rudner22a.pdf)[[code]](https://github.com/timrudner/S-FSVI)![GitHub stars](https://img.shields.io/github/stars/timrudner/S-FSVI.svg?logo=github&label=Stars)<br/>**CUBER**(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2211.00789)<br/>**ADA**(NeurIPS 2022)[[paper]](https://www.amazon.science/publications/memory-efficient-continual-learning-with-transformers)<br/>**CLOM**(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2210.04524)<br/>**S-Prompt**(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2207.12819)<br/>**ALIFE**(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2210.06816)<br/>**PMT**(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2112.07066)<br/>**STCISS**(TNNLS 2022)[[paper]](https://arxiv.org/abs/2012.03362)<br/>**DSN**(TPAMI 2022)[[paper]](https://ieeexplore.ieee.org/document/9779071)<br/>**MgSvF**(TPAMI 2022)[[paper]](https://arxiv.org/abs/2006.15524)<br/>**TransIL**(WACV 2022)[[paper]](https://arxiv.org/pdf/2110.08421.pdf) | **NER-FSCIL**(ACL 2022)[[paper]](https://aclanthology.org/2022.acl-long.43/)<br/>**LIMIT**(arXiv 2022)[[paper]](https://arxiv.org/abs/2203.17030)<br/>**EMP**(arXiv 2022)[[paper]](https://arxiv.org/abs/2204.07275)<br/>**SPTM**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Wu_Class-Incremental_Learning_With_Strong_Pre-Trained_Models_CVPR_2022_paper.html)<br/>**BER**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Toldo_Bring_Evanescent_Representations_to_Life_in_Lifelong_Class_Incremental_Learning_CVPR_2022_paper.html)<br/>**Sylph**(CVPR 
2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Yin_Sylph_A_Hypernetwork_Framework_for_Incremental_Few-Shot_Object_Detection_CVPR_2022_paper.html)<br/>**MetaFSCIL**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Chi_MetaFSCIL_A_Meta-Learning_Approach_for_Few-Shot_Class_Incremental_Learning_CVPR_2022_paper.html)<br/>**FCIL**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Dong_Federated_Class-Incremental_Learning_CVPR_2022_paper.html)[[code]](https://github.com/conditionWang/FCIL)![GitHub stars](https://img.shields.io/github/stars/conditionWang/FCIL.svg?logo=github&label=Stars)<br/>**FILIT**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Chen_Few-Shot_Incremental_Learning_for_Label-to-Image_Translation_CVPR_2022_paper.html)<br/>**PuriDivER**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Bang_Online_Continual_Learning_on_a_Contaminated_Data_Stream_With_Blurry_CVPR_2022_paper.html)[[code]](https://github.com/clovaai/puridiver)![GitHub stars](https://img.shields.io/github/stars/clovaai/puridiver.svg?logo=github&label=Stars)<br/>**SNCL**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Yan_Learning_Bayesian_Sparse_Networks_With_Full_Experience_Replay_for_Continual_CVPR_2022_paper.html)<br/>**DVC**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Gu_Not_Just_Selection_but_Exploration_Online_Class-Incremental_Continual_Learning_via_CVPR_2022_paper.html)[[code]](https://github.com/YananGu/DVC)![GitHub stars](https://img.shields.io/github/stars/YananGu/DVC.svg?logo=github&label=Stars)<br/>**CVS**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Wan_Continual_Learning_for_Visual_Search_With_Backward_Consistent_Feature_Embedding_CVPR_2022_paper.html)<br/>**CPL**(CVPR 
2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Chen_Continual_Predictive_Learning_From_Videos_CVPR_2022_paper.html)<br/>**GCR**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Tiwari_GCR_Gradient_Coreset_Based_Replay_Buffer_Selection_for_Continual_Learning_CVPR_2022_paper.html)<br/>**LVT**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Wang_Continual_Learning_With_Lifelong_Vision_Transformer_CVPR_2022_paper.html)<br/>**vCLIMB**(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Villa_vCLIMB_A_Novel_Video_Class_Incremental_Learning_Benchmark_CVPR_2022_paper.html)[[code]](https://vclimb.netlify.app/)<br/>**Learn-to-Imagine**(CVPR 2022)[[paper]](https://arxiv.org/abs/2204.08932)[[code]](https://github.com/TOM-tym/Learn-to-Imagine)![GitHub stars](https://img.shields.io/github/stars/TOM-tym/Learn-to-Imagine.svg?logo=github&label=Stars)<br/>**DCR**(CVPR 2022)[[paper]](https://arxiv.org/abs/2204.04078)<br/>**DIY-FSCIL**(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.14843)<br/>**C-FSCIL**(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.16588)[[code]](https://github.com/IBM/constrained-FSCIL)![GitHub stars](https://img.shields.io/github/stars/IBM/constrained-FSCIL.svg?logo=github&label=Stars)<br/>**SSRE**(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.06359)<br/>**CwD**(CVPR 2022)[[paper]](https://arxiv.org/abs/2112.04731)[[code]](https://github.com/Yujun-Shi/CwD)![GitHub stars](https://img.shields.io/github/stars/Yujun-Shi/CwD.svg?logo=github&label=Stars)<br/>**MSL**(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.03970)<br/>**DyTox**(CVPR 2022)[[paper]](https://arxiv.org/abs/2111.11326)[[code]](https://github.com/arthurdouillard/dytox)![GitHub stars](https://img.shields.io/github/stars/arthurdouillard/dytox.svg?logo=github&label=Stars)<br/>**X-DER**(ECCV 2022)[[paper]](https://arxiv.org/abs/2201.00766)<br/>**class-iNCD**(ECCV 
2022)[[paper]](https://arxiv.org/abs/2207.08605)[[code]](https://github.com/OatmealLiu/class-iNCD)![GitHub stars](https://img.shields.io/github/stars/OatmealLiu/class-iNCD.svg?logo=github&label=Stars)<br/>**ARI**(ECCV 2022)[[paper]](https://arxiv.org/abs/2208.12967)[[code]](https://github.com/bhrqw/ARI)![GitHub stars](https://img.shields.io/github/stars/bhrqw/ARI.svg?logo=github&label=Stars)<br/>**Long-Tailed-CIL**(ECCV 2022)[[paper]](https://arxiv.org/abs/2210.00266)[[code]](https://github.com/xialeiliu/Long-Tailed-CIL)![GitHub stars](https://img.shields.io/github/stars/xialeiliu/Long-Tailed-CIL.svg?logo=github&label=Stars)<br/>**LIRF**(ECCV 2022)[[paper]](https://arxiv.org/abs/2207.08224)<br/>**DSDM**(ECCV 2022)[[paper]](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136850721.pdf)[[code]](https://github.com/Julien-pour/Dynamic-Sparse-Distributed-Memory)![GitHub stars](https://img.shields.io/github/stars/Julien-pour/Dynamic-Sparse-Distributed-Memory.svg?logo=github&label=Stars)<br/>**CVT**(ECCV 2022)[[paper]](https://arxiv.org/pdf/2207.13516.pdf)<br/>**TwF**(ECCV 2022)[[paper]](https://arxiv.org/abs/2206.00388)[[code]](https://github.com/mbosc/twf)![GitHub stars](https://img.shields.io/github/stars/mbosc/twf.svg?logo=github&label=Stars)<br/>**CSCCT**(ECCV 2022)[[paper]](https://cscct.github.io)[[code]](https://github.com/ashok-arjun/CSCCT)![GitHub stars](https://img.shields.io/github/stars/ashok-arjun/CSCCT.svg?logo=github&label=Stars)<br/>**DLCFT**(ECCV 2022)[[paper]](https://arxiv.org/abs/2208.08112)<br/>**ERDR**(ECCV 2022)[[paper]](https://arxiv.org/pdf/2207.11213.pdf)<br/>**NCDwF**(ECCV 2022)[[paper]](https://arxiv.org/abs/2207.10659)<br/>**CoMPS**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=PVJ6j87gOHz)<br/>**i-fuzzy**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=nrGGfMbY_qK)[[code]](https://github.com/naver-ai/i-Blurry)![GitHub stars](https://img.shields.io/github/stars/naver-ai/i-Blurry.svg?logo=github&label=Stars)<br/>**CLS-ER**(ICLR 
2022)[[paper]](https://openreview.net/pdf?id=uxxFrDwrE7Y)[[code]](https://github.com/NeurAI-Lab/CLS-ER)![GitHub stars](https://img.shields.io/github/stars/NeurAI-Lab/CLS-ER.svg?logo=github&label=Stars)<br/>**MRDC**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=a7H7OucbWaU)[[code]](https://github.com/andrearosasco/DistilledReplay)![GitHub stars](https://img.shields.io/github/stars/andrearosasco/DistilledReplay.svg?logo=github&label=Stars)<br/>**OCS**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=f9D-5WNG4Nv)<br/>**InfoRS**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=IpctgL7khPp)<br/>**ER-AML**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=N8MaByOzUfb)[[code]](https://github.com/pclucas14/aml)![GitHub stars](https://img.shields.io/github/stars/pclucas14/aml.svg?logo=github&label=Stars)<br/>**FAS**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=metRpM4Zrcb)<br/>**LUMP**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=9Hrka5PA7LW)<br/>**CF-IL**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=RxplU3vmBx)[[code]](https://github.com/MozhganPourKeshavarz/Cost-Free-Incremental-Learning)![GitHub stars](https://img.shields.io/github/stars/MozhganPourKeshavarz/Cost-Free-Incremental-Learning.svg?logo=github&label=Stars)<br/>**LFPT5**(ICLR 2022)[[paper]](https://openreview.net/pdf?id=HCRVf71PMF)[[code]](https://github.com/qcwthu/Lifelong-Fewshot-Language-Learning)![GitHub stars](https://img.shields.io/github/stars/qcwthu/Lifelong-Fewshot-Language-Learning.svg?logo=github&label=Stars)<br/>**Model Zoo**(ICLR 2022)[[paper]](https://arxiv.org/abs/2106.03027)<br/>**OCM**(ICML 2022)[[paper]](https://proceedings.mlr.press/v162/guo22g/guo22g.pdf)[[code]](https://github.com/gydpku/OCM)![GitHub stars](https://img.shields.io/github/stars/gydpku/OCM.svg?logo=github&label=Stars)<br/>**DRO**(ICML 2022)[[paper]](https://proceedings.mlr.press/v162/wang22v/wang22v.pdf)[[code]](https://github.com/joey-wang123/DRO-Task-free)![GitHub 
stars](https://img.shields.io/github/stars/joey-wang123/DRO-Task-free.svg?logo=github&label=Stars)<br/>**EAK**(ICPR 2022)[[paper]](https://arxiv.org/abs/2206.02577)<br/>**RAR**(NeurIPS 2022)[[paper]](https://openreview.net/forum?id=XEoih0EwCwL)<br/>**LiDER**(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2210.06443)<br/>**SparCL**(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2209.09476)<br/>**ClonEx-SAC**(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2209.13900)<br/>**ODDL**(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2210.06579)<br/>**CSSL**(PRL 2022)[[paper]](https://arxiv.org/abs/2108.06552)<br/>**MBP**(TNNLS 2022)[[paper]](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9705128)<br/>**CandVot**(WACV 2022)[[paper]](https://openaccess.thecvf.com/content/WACV2022/papers/He_Online_Continual_Learning_via_Candidates_Voting_WACV_2022_paper.pdf)<br/>**FlashCards**(WACV 2022)[[paper]](https://openaccess.thecvf.com/content/WACV2022/papers/Gopalakrishnan_Knowledge_Capture_and_Replay_for_Continual_Learning_WACV_2022_paper.pdf) |\n| 2021 | **Meta-DR**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/html/Volpi_Continual_Adaptation_of_Visual_Representations_via_Domain_Randomization_and_Meta-Learning_CVPR_2021_paper.html)<br />**continual cross-modal retrieval**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021W/CLVision/html/Wang_Continual_Learning_in_Cross-Modal_Retrieval_CVPRW_2021_paper.html)<br />**DER**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Yan_DER_Dynamically_Expandable_Representation_for_Class_Incremental_Learning_CVPR_2021_paper.pdf)[[code]](https://github.com/Rhyssiyan/DER-ClassIL.pytorch)![GitHub stars](https://img.shields.io/github/stars/Rhyssiyan/DER-ClassIL.pytorch.svg?logo=github&label=Stars)<br />**EFT**(CVPR 
2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Verma_Efficient_Feature_Transformations_for_Discriminative_and_Generative_Continual_Learning_CVPR_2021_paper.pdf)[[code]](https://github.com/vkverma01/EFT)![GitHub stars](https://img.shields.io/github/stars/vkverma01/EFT.svg?logo=github&label=Stars)<br />**PASS**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhu_Prototype_Augmentation_and_Self-Supervision_for_Incremental_Learning_CVPR_2021_paper.pdf)[[code]](https://github.com/Impression2805/CVPR21_PASS)![GitHub stars](https://img.shields.io/github/stars/Impression2805/CVPR21_PASS.svg?logo=github&label=Stars)<br />**GeoDL**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Simon_On_Learning_the_Geodesic_Path_for_Incremental_Learning_CVPR_2021_paper.pdf)[[code]](https://github.com/chrysts/geodesic_continual_learning)![GitHub stars](https://img.shields.io/github/stars/chrysts/geodesic_continual_learning.svg?logo=github&label=Stars)<br />**IL-ReduNet**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Wu_Incremental_Learning_via_Rate_Reduction_CVPR_2021_paper.pdf)<br />**PIGWM**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhou_Image_De-Raining_via_Continual_Learning_CVPR_2021_paper.pdf)<br />**BLIP**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Shi_Continual_Learning_via_Bit-Level_Information_Preserving_CVPR_2021_paper.pdf)[[code]](https://github.com/Yujun-Shi/BLIP)![GitHub stars](https://img.shields.io/github/stars/Yujun-Shi/BLIP.svg?logo=github&label=Stars)<br />**Adam-NSCL**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_Training_Networks_in_Null_Space_of_Feature_Covariance_for_Continual_CVPR_2021_paper.pdf)[[code]](https://github.com/ShipengWang/Adam-NSCL)![GitHub stars](https://img.shields.io/github/stars/ShipengWang/Adam-NSCL.svg?logo=github&label=Stars)<br />**PLOP**(CVPR 
2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Douillard_PLOP_Learning_Without_Forgetting_for_Continual_Semantic_Segmentation_CVPR_2021_paper.pdf)[[code]](https://github.com/arthurdouillard/CVPR2021_PLOP)![GitHub stars](https://img.shields.io/github/stars/arthurdouillard/CVPR2021_PLOP.svg?logo=github&label=Stars)<br />**SDR**(CVPR 2021)[[paper]](https://lttm.dei.unipd.it/paper_data/SDR/)[[code]](https://github.com/LTTM/SDR)![GitHub stars](https://img.shields.io/github/stars/LTTM/SDR.svg?logo=github&label=Stars)<br />**SKD**(CVPR 2021)[[paper]](https://arxiv.org/abs/2103.04059)<br />**Always Be Dreaming**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/html/Smith_Always_Be_Dreaming_A_New_Approach_for_Data-Free_Class-Incremental_Learning_ICCV_2021_paper.html)[[code]](https://github.com/GT-RIPL/AlwaysBeDreaming-DFCIL)![GitHub stars](https://img.shields.io/github/stars/GT-RIPL/AlwaysBeDreaming-DFCIL.svg?logo=github&label=Stars)<br />**SPB**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Wu_Striking_a_Balance_Between_Stability_and_Plasticity_for_Class-Incremental_Learning_ICCV_2021_paper.pdf)<br />**Else-Net**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Li_Else-Net_Elastic_Semantic_Network_for_Continual_Action_Recognition_From_Skeleton_ICCV_2021_paper.pdf)<br />**LCwoF-Framework**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Kukleva_Generalized_and_Incremental_Few-Shot_Learning_by_Explicit_Learning_and_Calibration_ICCV_2021_paper.pdf)<br />**AFEC**(NeurIPS 2021)[[paper]](https://openreview.net/pdf/72a18fad6fce88ef0286e9c7582229cf1c8d9f93.pdf)[[code]](https://github.com/lywang3081/AFEC)![GitHub stars](https://img.shields.io/github/stars/lywang3081/AFEC.svg?logo=github&label=Stars)<br />**F2M**(NeurIPS 2021)[[paper]](https://openreview.net/forum?id=ALvt7nXa2q)[[code]](https://github.com/moukamisama/F2M)![GitHub 
stars](https://img.shields.io/github/stars/moukamisama/F2M.svg?logo=github&label=Stars)<br />**NCL**(NeurIPS 2021)[[paper]](https://openreview.net/forum?id=W9250bXDgpK)[[code]](https://github.com/tachukao/ncl)![GitHub stars](https://img.shields.io/github/stars/tachukao/ncl.svg?logo=github&label=Stars)<br />**BCL**(NeurIPS 2021)[[paper]](https://openreview.net/forum?id=u1XV9BPAB9)[[code]](https://github.com/krm9c/Balanced-Continual-Learning)![GitHub stars](https://img.shields.io/github/stars/krm9c/Balanced-Continual-Learning.svg?logo=github&label=Stars)<br />**Posterior Meta-Replay**(NeurIPS 2021)[[paper]](https://proceedings.neurips.cc/paper/2021/hash/761b42cfff120aac30045f7a110d0256-Abstract.html)<br />**MARK**(NeurIPS 2021)[[paper]](https://openreview.net/forum?id=hHTctAv9Lvh)[[code]](https://github.com/JuliousHurtado/meta-training-setup)![GitHub stars](https://img.shields.io/github/stars/JuliousHurtado/meta-training-setup.svg?logo=github&label=Stars)<br />**Co-occur**(NeurIPS 2021)[[paper]](https://proceedings.neurips.cc/paper/2021/hash/ffc58105bf6f8a91aba0fa2d99e6f106-Abstract.html)[[code]](https://github.com/dongnana777/bridging-non-co-occurrence)![GitHub stars](https://img.shields.io/github/stars/dongnana777/bridging-non-co-occurrence.svg?logo=github&label=Stars)<br />**LINC**(AAAI 2021)[[paper]](https://www.cs.uic.edu/~liub/publications/LINC_paper_AAAI_2021_camera_ready.pdf)<br />**CLNER**(AAAI 2021)[[paper]](https://www.aaai.org/AAAI21Papers/AAAI-7791.MonaikulN.pdf)<br />**CLIS**(AAAI 2021)[[paper]](https://www.aaai.org/AAAI21Papers/AAAI-2989.ZhengE.pdf)<br />**PCL**(AAAI 2021)[[paper]](https://www.cs.uic.edu/~liub/publications/AAAI2021_PCL.pdf)<br />**MAS3**(AAAI 2021)[[paper]](https://arxiv.org/abs/2009.12518)<br />**FSLL**(AAAI 2021)[[paper]](https://arxiv.org/pdf/2103.00991.pdf)<br />**VAR-GPs**(ICML 2021)[[paper]](https://proceedings.mlr.press/v139/kapoor21b.html)<br />**BSA**(ICML 2021)[[paper]](https://proceedings.mlr.press/v139/kumar21a.html)<br 
/>**GPM**(ICLR 2021)[[paper]](https://arxiv.org/abs/2103.09762)[[code]](https://github.com/sahagobinda/GPM)![GitHub stars](https://img.shields.io/github/stars/sahagobinda/GPM.svg?logo=github&label=Stars)<br /> | **TMN**(TNNLS 2021)[[paper]](https://ieeexplore.ieee.org/document/9540230)<br />**RKD**(AAAI 2021)[[paper]](https://ojs.aaai.org/index.php/AAAI/article/view/16213)<br />**AANets**(CVPR 2021)[[paper]](https://class-il.mpi-inf.mpg.de/)[[code]](https://github.com/yaoyao-liu/class-incremental-learning)![GitHub stars](https://img.shields.io/github/stars/yaoyao-liu/class-incremental-learning.svg?logo=github&label=Stars)<br />**ORDisCo**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_ORDisCo_Effective_and_Efficient_Usage_of_Incremental_Unlabeled_Data_for_CVPR_2021_paper.pdf)<br />**DDE**(CVPR 2021)[[paper]](https://arxiv.org/abs/2103.01737)[[code]](https://github.com/JoyHuYY1412/DDE_CIL)![GitHub stars](https://img.shields.io/github/stars/JoyHuYY1412/DDE_CIL.svg?logo=github&label=Stars)<br />**IIRC**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Abdelsalam_IIRC_Incremental_Implicitly-Refined_Classification_CVPR_2021_paper.pdf)<br />**Hyper-LifelongGAN**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhai_Hyper-LifelongGAN_Scalable_Lifelong_Learning_for_Image_Conditioned_Generation_CVPR_2021_paper.pdf)<br />**CEC**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Few-Shot_Incremental_Learning_With_Continually_Evolved_Classifiers_CVPR_2021_paper.pdf)<br />**iMTFA**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Ganea_Incremental_Few-Shot_Instance_Segmentation_CVPR_2021_paper.pdf)<br 
/>**RM**(CVPR 2021)[[paper]](https://ieeexplore.ieee.org/document/9577808)<br />**LOGD**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Tang_Layerwise_Optimization_by_Gradient_Decomposition_for_Continual_Learning_CVPR_2021_paper.pdf)<br />**SPPR**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/html/Zhu_Self-Promoted_Prototype_Refinement_for_Few-Shot_Class-Incremental_Learning_CVPR_2021_paper.html)<br />**LReID**(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Pu_Lifelong_Person_Re-Identification_via_Adaptive_Knowledge_Accumulation_CVPR_2021_paper.pdf)[[code]](https://github.com/TPCD/LifelongReID)![GitHub stars](https://img.shields.io/github/stars/TPCD/LifelongReID.svg?logo=github&label=Stars)<br />**SS-IL**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Ahn_SS-IL_Separated_Softmax_for_Incremental_Learning_ICCV_2021_paper.pdf)<br />**TCD**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Park_Class-Incremental_Learning_for_Action_Recognition_in_Videos_ICCV_2021_paper.pdf)<br />**CLOC**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/html/Cai_Online_Continual_Learning_With_Natural_Distribution_Shifts_An_Empirical_Study_ICCV_2021_paper.html)[[code]](https://github.com/IntelLabs/continuallearning)![GitHub stars](https://img.shields.io/github/stars/IntelLabs/continuallearning.svg?logo=github&label=Stars)<br />**CoPE**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/De_Lange_Continual_Prototype_Evolution_Learning_Online_From_Non-Stationary_Data_Streams_ICCV_2021_paper.pdf)[[code]](https://github.com/Mattdl/ContinualPrototypeEvolution)![GitHub stars](https://img.shields.io/github/stars/Mattdl/ContinualPrototypeEvolution.svg?logo=github&label=Stars)<br />**Co2L**(ICCV 
2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Cha_Co2L_Contrastive_Continual_Learning_ICCV_2021_paper.pdf)[[code]](https://github.com/chaht01/co2l)![GitHub stars](https://img.shields.io/github/stars/chaht01/co2l.svg?logo=github&label=Stars)<br />**SPR**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Kim_Continual_Learning_on_Noisy_Data_Streams_via_Self-Purified_Replay_ICCV_2021_paper.pdf)<br />**NACL**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/html/Rostami_Detection_and_Continual_Learning_of_Novel_Face_Presentation_Attacks_ICCV_2021_paper.html)<br />**CL-HSCNet**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/html/Wang_Continual_Learning_for_Image-Based_Camera_Localization_ICCV_2021_paper.html)[[code]](https://github.com/AaltoVision/CL_HSCNet)![GitHub stars](https://img.shields.io/github/stars/AaltoVision/CL_HSCNet.svg?logo=github&label=Stars)<br />**RECALL**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/html/Maracani_RECALL_Replay-Based_Continual_Learning_in_Semantic_Segmentation_ICCV_2021_paper.html)[[code]](https://github.com/lttm/recall)![GitHub stars](https://img.shields.io/github/stars/lttm/recall.svg?logo=github&label=Stars)<br />**VAE**(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Cheraghian_Synthesized_Feature_Based_Few-Shot_Class-Incremental_Learning_on_a_Mixture_of_ICCV_2021_paper.pdf)<br />**ERT**(ICPR 2021)[[paper]](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9412614)[[code]](https://github.com/hastings24/rethinking_er)![GitHub stars](https://img.shields.io/github/stars/hastings24/rethinking_er.svg?logo=github&label=Stars)<br />**KCL**(ICML 2021)[[paper]](https://proceedings.mlr.press/v139/derakhshani21a.html)[[code]](https://github.com/mmderakhshani/KCL)![GitHub stars](https://img.shields.io/github/stars/mmderakhshani/KCL.svg?logo=github&label=Stars)<br />**MLIOD**(TPAMI 
2021)[[paper]](https://arxiv.org/abs/2003.08798)[[code]](https://github.com/JosephKJ/iOD)![GitHub stars](https://img.shields.io/github/stars/JosephKJ/iOD.svg?logo=github&label=Stars)<br />**BNS**(NeurIPS 2021)[[paper]](https://papers.nips.cc/paper/2021/hash/ac64504cc249b070772848642cffe6ff-Abstract.html)<br />**FS-DGPM**(NeurIPS 2021)[[paper]](https://openreview.net/forum?id=q1eCa1kMfDd)<br />**SSUL**(NeurIPS 2021)[[paper]](https://proceedings.neurips.cc/paper/2021/file/5a9542c773018268fc6271f7afeea969-Paper.pdf)<br />**DualNet**(NeurIPS 2021)[[paper]](https://openreview.net/pdf?id=eQ7Kh-QeWnO)<br />**classAug**(NeurIPS 2021)[[paper]](https://papers.nips.cc/paper/2021/file/77ee3bc58ce560b86c2b59363281e914-Paper.pdf)<br />**GMED**(NeurIPS 2021)[[paper]](https://papers.nips.cc/paper/2021/hash/f45a1078feb35de77d26b3f7a52ef502-Abstract.html)<br />**BooVAE**(NeurIPS 2021)[[paper]](https://papers.nips.cc/paper/2021/hash/952285b9b7e7a1be5aa7849f32ffff05-Abstract.html)[[code]](https://github.com/AKuzina/BooVAE)![GitHub stars](https://img.shields.io/github/stars/AKuzina/BooVAE.svg?logo=github&label=Stars)<br />**GeMCL**(NeurIPS 2021)[[paper]](https://papers.nips.cc/paper/2021/hash/b4e267d84075f66ebd967d95331fcc03-Abstract.html)[[code]](https://github.com/aminbana/gemcl)![GitHub stars](https://img.shields.io/github/stars/aminbana/gemcl.svg?logo=github&label=Stars)<br />**RMM**(NeurIPS 2021)[[paper]](https://proceedings.neurips.cc/paper/2021/file/1cbcaa5abbb6b70f378a3a03d0c26386-Paper.pdf)<br />**LSF**(IJCAI 2021)[[paper]](https://www.ijcai.org/proceedings/2021/0137.pdf)<br />**ASER**(AAAI 2021)[[paper]](https://www.aaai.org/AAAI21Papers/AAAI-9988.ShimD.pdf)[[code]](https://github.com/RaptorMai/online-continual-learning)![GitHub stars](https://img.shields.io/github/stars/RaptorMai/online-continual-learning.svg?logo=github&label=Stars)<br />**CML**(AAAI 2021)[[paper]](https://www.aaai.org/AAAI21Papers/AAAI-4847.WuT.pdf)[[code]](https://github.com/wutong8023/AAAI-CML)![GitHub 
stars](https://img.shields.io/github/stars/wutong8023/AAAI-CML.svg?logo=github&label=Stars)<br />**HAL**(AAAI 2021)[[paper]](https://www.aaai.org/AAAI21Papers/AAAI-9700.ChaudhryA.pdf)<br />**MDMT**(AAAI 2021)[[paper]](https://arxiv.org/abs/2012.07236)<br />**AU**(WACV 2021)[[paper]](https://openaccess.thecvf.com/content/WACV2021/html/Kurmi_Do_Not_Forget_to_Attend_to_Uncertainty_While_Mitigating_Catastrophic_WACV_2021_paper.html)<br />**IDBR**(NAACL 2021)[[paper]](https://www.aclweb.org/anthology/2021.naacl-main.218.pdf)[[code]](https://github.com/GT-SALT/IDBR)![GitHub stars](https://img.shields.io/github/stars/GT-SALT/IDBR.svg?logo=github&label=Stars)<br />**COIL**(ACM MM 2021)[[paper]](https://arxiv.org/pdf/2107.12654.pdf)<br /> |\n| 2020 | **CWR\\***(CVPR 2020)[[paper]](https://arxiv.org/abs/1907.03799v3)<br />**MiB**(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/papers/Cermelli_Modeling_the_Background_for_Incremental_Learning_in_Semantic_Segmentation_CVPR_2020_paper.pdf)[[code]](https://github.com/fcdl94/MiB)![GitHub stars](https://img.shields.io/github/stars/fcdl94/MiB.svg?logo=github&label=Stars)<br />**K-FAC**(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/html/Lee_Continual_Learning_With_Extended_Kronecker-Factored_Approximate_Curvature_CVPR_2020_paper.html)<br />**SDC**(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/html/Yu_Semantic_Drift_Compensation_for_Class-Incremental_Learning_CVPR_2020_paper.html)[[code]](https://github.com/yulu0724/SDC-IL)![GitHub stars](https://img.shields.io/github/stars/yulu0724/SDC-IL.svg?logo=github&label=Stars)<br />**NLTF**(AAAI 2020) [[paper]](https://ojs.aaai.org//index.php/AAAI/article/view/6617)<br />**CLCL**(ICLR 2020)[[paper]](https://openreview.net/forum?id=rklnDgHtDS)[[code]](https://github.com/yli1/CLCL)![GitHub stars](https://img.shields.io/github/stars/yli1/CLCL.svg?logo=github&label=Stars)<br />**APD**(ICLR 
2020)[[paper]](https://arxiv.org/pdf/1902.09432.pdf)<br />**HYPERCL**(ICLR 2020)[[paper]](https://openreview.net/forum?id=SJgwNerKvB)[[code]](https://github.com/chrhenning/hypercl)![GitHub stars](https://img.shields.io/github/stars/chrhenning/hypercl.svg?logo=github&label=Stars)<br />**CN-DPM**(ICLR 2020)[[paper]](https://arxiv.org/pdf/2001.00689.pdf)<br />**UCB**(ICLR 2020)[[paper]](https://openreview.net/forum?id=HklUCCVKDB)[[code]](https://github.com/SaynaEbrahimi/UCB)![GitHub stars](https://img.shields.io/github/stars/SaynaEbrahimi/UCB.svg?logo=github&label=Stars)<br />**CLAW**(ICLR 2020)[[paper]](https://openreview.net/forum?id=Hklso24Kwr)<br />**CAT**(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/d7488039246a405baf6a7cbc3613a56f-Paper.pdf)[[code]](https://github.com/ZixuanKe/CAT)![GitHub stars](https://img.shields.io/github/stars/ZixuanKe/CAT.svg?logo=github&label=Stars)<br />**AGS-CL**(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/hash/258be18e31c8188555c2ff05b4d542c3-Abstract.html)<br />**MERLIN**(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/a5585a4d4b12277fee5cad0880611bc6-Paper.pdf)[[code]](https://github.com/JosephKJ/merlin)![GitHub stars](https://img.shields.io/github/stars/JosephKJ/merlin.svg?logo=github&label=Stars)<br />**OSAKA**(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/c0a271bc0ecb776a094786474322cb82-Paper.pdf)[[code]](https://github.com/ElementAI/osaka)![GitHub stars](https://img.shields.io/github/stars/ElementAI/osaka.svg?logo=github&label=Stars)<br />**RATT**(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/c2964caac096f26db222cb325aa267cb-Paper.pdf)<br />**CCLL**(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/hash/b3b43aeeacb258365cc69cdaf42a68af-Abstract.html)<br />**CIDA**(ECCV 2020)[[paper]](https://link.springer.com/chapter/10.1007/978-3-030-58601-0_4)<br />**GraphSAIL**(CIKM 
2020)[[paper]](https://dl.acm.org/doi/abs/10.1145/3340531.3412754)<br />**ANML**(ECAI 2020)[[paper]](https://arxiv.org/abs/2002.09571)[[code]](https://github.com/uvm-neurobotics-lab/ANML)![GitHub stars](https://img.shields.io/github/stars/uvm-neurobotics-lab/ANML.svg?logo=github&label=Stars)<br />**ICWR**(BMVC 2020)[[paper]](https://arxiv.org/pdf/2008.13710.pdf)<br />**DAM**(TPAMI 2020)[[paper]](https://openreview.net/pdf?id=7YDLgf9_zgm)<br />**OGD**(PMLR 2020)[[paper]](http://proceedings.mlr.press/v108/farajtabar20a.html)<br />**MC-OCL**(ECCV 2020)[[paper]](https://link.springer.com/chapter/10.1007/978-3-030-58604-1_43)[[code]](https://github.com/DonkeyShot21/batch-level-distillation)![GitHub stars](https://img.shields.io/github/stars/DonkeyShot21/batch-level-distillation.svg?logo=github&label=Stars)<br />**RCM**(ECCV 2020)[[paper]](https://link.springer.com/chapter/10.1007/978-3-030-58565-5_41)[[code]](https://github.com/menelaoskanakis/RCM)![GitHub stars](https://img.shields.io/github/stars/menelaoskanakis/RCM.svg?logo=github&label=Stars)<br />**OvA-INN**(IJCNN 2020)[[paper]](https://ieeexplore.ieee.org/abstract/document/9206766)<br />**XtarNet**(ICML 2020)[[paper]](http://proceedings.mlr.press/v119/yoon20b/yoon20b.pdf)[[code]](https://github.com/EdwinKim3069/XtarNet)![GitHub stars](https://img.shields.io/github/stars/EdwinKim3069/XtarNet.svg?logo=github&label=Stars)<br />**DMC**(WACV 2020)[[paper]](https://openaccess.thecvf.com/content_WACV_2020/html/Zhang_Class-incremental_Learning_via_Deep_Model_Consolidation_WACV_2020_paper.html)<br /> | **iTAML**(CVPR 2020)[[paper]](https://arxiv.org/pdf/2003.11652.pdf)[[code]](https://github.com/brjathu/iTAML)![GitHub stars](https://img.shields.io/github/stars/brjathu/iTAML.svg?logo=github&label=Stars)<br />**FSCIL**(CVPR 2020)[[paper]](https://arxiv.org/pdf/2004.10956.pdf)[[code]](https://github.com/xyutao/fscil)![GitHub stars](https://img.shields.io/github/stars/xyutao/fscil.svg?logo=github&label=Stars)<br />**GFR**(CVPR 
2020)[[paper]](https://ieeexplore.ieee.org/document/9150851)[[code]](https://github.com/xialeiliu/GFR-IL)![GitHub stars](https://img.shields.io/github/stars/xialeiliu/GFR-IL.svg?logo=github&label=Stars)<br />**OSIL**(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/html/He_Incremental_Learning_in_Online_Scenario_CVPR_2020_paper.html)<br />**ONCE**(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/html/Perez-Rua_Incremental_Few-Shot_Object_Detection_CVPR_2020_paper.html)<br />**WA**(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhao_Maintaining_Discrimination_and_Fairness_in_Class_Incremental_Learning_CVPR_2020_paper.pdf)[[code]](https://github.com/hugoycj/Incremental-Learning-with-Weight-Aligning)<br />**CGATE**(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/html/Abati_Conditional_Channel_Gated_Networks_for_Task-Aware_Continual_Learning_CVPR_2020_paper.html)[[code]](https://github.com/lit-leo/cgate)![GitHub stars](https://img.shields.io/github/stars/lit-leo/cgate.svg?logo=github&label=Stars)<br />**Mnemonics Training**(CVPR 2020)[[paper]](https://class-il.mpi-inf.mpg.de/mnemonics-training/)[[code]](https://github.com/yaoyao-liu/class-incremental-learning)![GitHub stars](https://img.shields.io/github/stars/yaoyao-liu/class-incremental-learning.svg?logo=github&label=Stars)<br />**MEGA**(NeurIPS 2020)[[paper]](https://par.nsf.gov/servlets/purl/10233158)<br />**GAN Memory**(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/hash/bf201d5407a6509fa536afc4b380577e-Abstract.html)[[code]](https://github.com/MiaoyunZhao/GANmemory_LifelongLearning)![GitHub stars](https://img.shields.io/github/stars/MiaoyunZhao/GANmemory_LifelongLearning.svg?logo=github&label=Stars)<br />**Coreset**(NeurIPS 
2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/aa2a77371374094fe9e0bc1de3f94ed9-Paper.pdf)<br />**FROMP**(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/2f3bbb9730639e9ea48f309d9a79ff01-Paper.pdf)[[code]](https://github.com/team-approx-bayes/fromp)![GitHub stars](https://img.shields.io/github/stars/team-approx-bayes/fromp.svg?logo=github&label=Stars)<br />**DER**(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/b704ea2c39778f07c617f6b7ce480e9e-Paper.pdf)[[code]](https://github.com/aimagelab/mammoth)![GitHub stars](https://img.shields.io/github/stars/aimagelab/mammoth.svg?logo=github&label=Stars)<br />**InstAParam**(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/ca4b5656b7e193e6bb9064c672ac8dce-Paper.pdf)<br />**BOCL**(AAAI 2020)[[paper]](https://ojs.aaai.org//index.php/AAAI/article/view/6060)<br />**REMIND**(ECCV 2020)[[paper]](https://arxiv.org/pdf/1910.02509v3)[[code]](https://github.com/tyler-hayes/REMIND)![GitHub stars](https://img.shields.io/github/stars/tyler-hayes/REMIND.svg?logo=github&label=Stars)<br />**ACL**(ECCV 2020)[[paper]](https://link.springer.com/chapter/10.1007/978-3-030-58621-8_23)[[code]](https://github.com/facebookresearch/Adversarial-Continual-Learning)![GitHub stars](https://img.shields.io/github/stars/facebookresearch/Adversarial-Continual-Learning.svg?logo=github&label=Stars)<br />**TPCIL**(ECCV 2020)[[paper]](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123640256.pdf)<br />**GDumb**(ECCV 2020)[[paper]](https://www.robots.ox.ac.uk/~tvg/publications/2020/gdumb.pdf)[[code]](https://github.com/drimpossible/GDumb)![GitHub stars](https://img.shields.io/github/stars/drimpossible/GDumb.svg?logo=github&label=Stars)<br />**PRS**(ECCV 2020)[[paper]](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123580409.pdf)<br />**PODNet**(ECCV 
2020)[[paper]](https://arxiv.org/abs/2004.13513)[[code]](https://github.com/arthurdouillard/incremental_learning.pytorch)![GitHub stars](https://img.shields.io/github/stars/arthurdouillard/incremental_learning.pytorch.svg?logo=github&label=Stars)<br />**FA**(ECCV 2020)[[paper]](https://link.springer.com/chapter/10.1007/978-3-030-58517-4_41)<br />**L-VAEGAN**(ECCV 2020)[[paper]](https://link.springer.com/chapter/10.1007/978-3-030-58565-5_46)<br />**Piggyback GAN**(ECCV 2020)[[paper]](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123660392.pdf)[[code]](https://github.com/arunmallya/piggyback)![GitHub stars](https://img.shields.io/github/stars/arunmallya/piggyback.svg?logo=github&label=Stars)<br />**IDA**(ECCV 2020)[[paper]](https://arxiv.org/abs/2002.04162)<br />**RCM**(ECCV 2020)[[paper]](https://arxiv.org/abs/2007.12540)<br />**LAMOL**(ICLR 2020)[[paper]](https://openreview.net/forum?id=Skgxcn4YDS)[[code]](https://github.com/chho33/LAMOL)![GitHub stars](https://img.shields.io/github/stars/chho33/LAMOL.svg?logo=github&label=Stars)<br />**FRCL**(ICLR 2020)[[paper]](https://arxiv.org/abs/1901.11356)[[code]](https://github.com/AndreevP/FRCL)![GitHub stars](https://img.shields.io/github/stars/AndreevP/FRCL.svg?logo=github&label=Stars)<br />**GRS**(ICLR 2020)[[paper]](https://openreview.net/forum?id=SJlsFpVtDB)<br />**Brain-inspired replay**(Nature Communications 2020)[[paper]](https://www.nature.com/articles/s41467-020-17866-2)[[code]](https://github.com/GMvandeVen/brain-inspired-replay)![GitHub stars](https://img.shields.io/github/stars/GMvandeVen/brain-inspired-replay.svg?logo=github&label=Stars)<br />**CLIFER**(FG 2020)[[paper]](https://ieeexplore.ieee.org/document/9320226)<br />**ScaIL**(WACV 2020)[[paper]](https://openaccess.thecvf.com/content_WACV_2020/html/Belouadah_ScaIL_Classifier_Weights_Scaling_for_Class_Incremental_Learning_WACV_2020_paper.html)[[code]](https://github.com/EdenBelouadah/class-incremental-learning)![GitHub 
stars](https://img.shields.io/github/stars/EdenBelouadah/class-incremental-learning.svg?logo=github&label=Stars)<br />**ARPER**(EMNLP 2020)[[paper]](https://arxiv.org/abs/2010.00910)<br />**DnR**(COLING 2020)[[paper]](https://www.aclweb.org/anthology/2020.coling-main.318.pdf)<br />**ADER**(RecSys 2020)[[paper]](https://arxiv.org/abs/2007.12000)[[code]](https://github.com/DoubleMuL/ADER)![GitHub stars](https://img.shields.io/github/stars/DoubleMuL/ADER.svg?logo=github&label=Stars)<br />**MUC**(ECCV 2020)[[paper]](http://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123710698.pdf)[[code]](https://github.com/liuyudut/MUC)![GitHub stars](https://img.shields.io/github/stars/liuyudut/MUC.svg?logo=github&label=Stars)<br /> |\n| 2019 | **LwM**(CVPR 2019)[[paper]](https://ieeexplore.ieee.org/document/8953962)<br />**CPG**(NeurIPS 2019)[[paper]](https://arxiv.org/pdf/1910.06562v1.pdf)[[code]](https://github.com/ivclab/CPG)![GitHub stars](https://img.shields.io/github/stars/ivclab/CPG.svg?logo=github&label=Stars)<br />**UCL**(NeurIPS 2019)[[paper]](https://proceedings.neurips.cc/paper/2019/file/2c3ddf4bf13852db711dd1901fb517fa-Paper.pdf)<br />**OML**(NeurIPS 2019)[[paper]](https://proceedings.neurips.cc/paper/2019/hash/f4dd765c12f2ef67f98f3558c282a9cd-Abstract.html)[[code]](https://github.com/Khurramjaved96/mrcl)![GitHub stars](https://img.shields.io/github/stars/Khurramjaved96/mrcl.svg?logo=github&label=Stars)<br />**ALASSO**(ICCV 2019)[[paper]](https://openaccess.thecvf.com/content_ICCV_2019/papers/Park_Continual_Learning_by_Asymmetric_Loss_Approximation_With_Single-Side_Overestimation_ICCV_2019_paper.pdf)<br />**Learn-to-Grow**(PMLR 2019)[[paper]](http://proceedings.mlr.press/v97/li19m/li19m.pdf)<br />**OWM**(Nature Machine Intelligence 2019)[[paper]](https://www.nature.com/articles/s42256-019-0080-x#Sec2)[[code]](https://github.com/beijixiong3510/OWM)![GitHub stars](https://img.shields.io/github/stars/beijixiong3510/OWM.svg?logo=github&label=Stars)<br /> | 
**LUCIR**(CVPR 2019)[[paper]](https://openaccess.thecvf.com/content_CVPR_2019/html/Hou_Learning_a_Unified_Classifier_Incrementally_via_Rebalancing_CVPR_2019_paper.html)[[code]](https://github.com/hshustc/CVPR19_Incremental_Learning)![GitHub stars](https://img.shields.io/github/stars/hshustc/CVPR19_Incremental_Learning.svg?logo=github&label=Stars)<br />**TFCL**(CVPR 2019)[[paper]](https://openaccess.thecvf.com/content_CVPR_2019/papers/Aljundi_Task-Free_Continual_Learning_CVPR_2019_paper.pdf)<br />**GD**(ICCV 2019)[[paper]](https://ieeexplore.ieee.org/document/9010368)[[code]](https://github.com/kibok90/iccv2019-inc)![GitHub stars](https://img.shields.io/github/stars/kibok90/iccv2019-inc.svg?logo=github&label=Stars)<br />**DGM**(CVPR 2019)[[paper]](https://openaccess.thecvf.com/content_CVPR_2019/papers/Ostapenko_Learning_to_Remember_A_Synaptic_Plasticity_Driven_Framework_for_Continual_CVPR_2019_paper.pdf)<br />**BiC**(CVPR 2019)[[paper]](https://arxiv.org/abs/1905.13260)[[code]](https://github.com/wuyuebupt/LargeScaleIncrementalLearning)![GitHub stars](https://img.shields.io/github/stars/wuyuebupt/LargeScaleIncrementalLearning.svg?logo=github&label=Stars)<br />**MER**(ICLR 2019)[[paper]](https://openreview.net/pdf?id=B1gTShAct7)[[code]](https://github.com/mattriemer/mer)![GitHub stars](https://img.shields.io/github/stars/mattriemer/mer.svg?logo=github&label=Stars)<br />**PGMA**(ICLR 2019)[[paper]](https://openreview.net/forum?id=ryGvcoA5YX)<br />**A-GEM**(ICLR 2019)[[paper]](https://arxiv.org/pdf/1812.00420.pdf)[[code]](https://github.com/facebookresearch/agem)![GitHub stars](https://img.shields.io/github/stars/facebookresearch/agem.svg?logo=github&label=Stars)<br />**IL2M**(ICCV 2019)[[paper]](https://ieeexplore.ieee.org/document/9009019)<br />**ILCAN**(ICCV 2019)[[paper]](https://ieeexplore.ieee.org/document/9009031)<br />**Lifelong GAN**(ICCV 
2019)[[paper]](https://openaccess.thecvf.com/content_ICCV_2019/html/Zhai_Lifelong_GAN_Continual_Learning_for_Conditional_Image_Generation_ICCV_2019_paper.html)<br />**GSS**(NIPS 2019)[[paper]](https://proceedings.neurips.cc/paper/2019/file/e562cd9c0768d5464b64cf61da7fc6bb-Paper.pdf)<br />**ER**(NIPS 2019)[[paper]](https://arxiv.org/abs/1811.11682)<br />**MIR**(NIPS 2019)[[paper]](https://proceedings.neurips.cc/paper/2019/hash/15825aee15eb335cc13f9b559f166ee8-Abstract.html)[[code]](https://github.com/optimass/Maximally_Interfered_Retrieval)![GitHub stars](https://img.shields.io/github/stars/optimass/Maximally_Interfered_Retrieval.svg?logo=github&label=Stars)<br />**RPS-Net**(NIPS 2019)[[paper]](https://www.researchgate.net/profile/Salman-Khan-62/publication/333617650_Random_Path_Selection_for_Incremental_Learning/links/5d04905ea6fdcc39f11b7355/Random-Path-Selection-for-Incremental-Learning.pdf)<br />**CLEER**(IJCAI 2019)[[paper]](https://arxiv.org/abs/1903.04566)<br />**PAE**(ICMR 2019)[[paper]](https://dl.acm.org/doi/10.1145/3323873.3325053)[[code]](https://github.com/ivclab/PAE)![GitHub stars](https://img.shields.io/github/stars/ivclab/PAE.svg?logo=github&label=Stars)<br /> |\n| 2018 | **PackNet**(CVPR 2018)[[paper]](https://openaccess.thecvf.com/content_cvpr_2018/html/Mallya_PackNet_Adding_Multiple_CVPR_2018_paper.html)[[code]](https://github.com/arunmallya/packnet)![GitHub stars](https://img.shields.io/github/stars/arunmallya/packnet.svg?logo=github&label=Stars)<br />**OLA**(NIPS 2018)[[paper]](https://proceedings.neurips.cc/paper/2018/hash/f31b20466ae89669f9741e047487eb37-Abstract.html)<br />**RCL**(NIPS 2018)[[paper]](http://papers.nips.cc/paper/7369-reinforced-continual-learning.pdf)[[code]](https://github.com/xujinfan/Reinforced-Continual-Learning)![GitHub stars](https://img.shields.io/github/stars/xujinfan/Reinforced-Continual-Learning.svg?logo=github&label=Stars)<br />**MARL**(ICLR 2018)[[paper]](https://openreview.net/forum?id=ry8dvM-R-)<br />**DEN**(ICLR 
2018)[[paper]](https://openreview.net/forum?id=Sk7KsfW0-)[[code]](https://github.com/jaehong31/DEN)![GitHub stars](https://img.shields.io/github/stars/jaehong31/DEN.svg?logo=github&label=Stars)<br />**P&C**(ICML 2018)[[paper]](https://arxiv.org/abs/1805.06370)<br />**Piggyback**(ECCV 2018)[[paper]](https://openaccess.thecvf.com/content_ECCV_2018/papers/Arun_Mallya_Piggyback_Adapting_a_ECCV_2018_paper.pdf)[[code]](https://github.com/arunmallya/piggyback)![GitHub stars](https://img.shields.io/github/stars/arunmallya/piggyback.svg?logo=github&label=Stars)<br />**RWalk**(ECCV 2018)[[paper]](https://openaccess.thecvf.com/content_ECCV_2018/html/Arslan_Chaudhry__Riemannian_Walk_ECCV_2018_paper.html)<br />**MAS**(ECCV 2018)[[paper]](https://arxiv.org/pdf/1711.09601.pdf)[[code]](https://github.com/rahafaljundi/MAS-Memory-Aware-Synapses)![GitHub stars](https://img.shields.io/github/stars/rahafaljundi/MAS-Memory-Aware-Synapses.svg?logo=github&label=Stars)<br />**R-EWC**(ICPR 2018)[[paper]](https://ieeexplore.ieee.org/abstract/document/8545895)[[code]](https://github.com/xialeiliu/RotateNetworks)![GitHub stars](https://img.shields.io/github/stars/xialeiliu/RotateNetworks.svg?logo=github&label=Stars)<br />**HAT**(PMLR 2018)[[paper]](http://proceedings.mlr.press/v80/serra18a.html)[[code]](https://github.com/joansj/hat)![GitHub stars](https://img.shields.io/github/stars/joansj/hat.svg?logo=github&label=Stars)<br /> | **MeRGANs**(NIPS 2018)[[paper]](https://arxiv.org/abs/1809.02058)[[code]](https://github.com/WuChenshen/MeRGAN)![GitHub stars](https://img.shields.io/github/stars/WuChenshen/MeRGAN.svg?logo=github&label=Stars)<br />**EEIL**(ECCV 2018)[[paper]](https://arxiv.org/abs/1807.09536)[[code]](https://github.com/fmcp/EndToEndIncrementalLearning)![GitHub stars](https://img.shields.io/github/stars/fmcp/EndToEndIncrementalLearning.svg?logo=github&label=Stars)<br />**Adaptation by Distillation**(ECCV 
2018)[[paper]](http://openaccess.thecvf.com/content_ECCV_2018/papers/Saihui_Hou_Progressive_Lifelong_Learning_ECCV_2018_paper.pdf)<br />**ESGR**(BMVC 2018)[[paper]](http://bmvc2018.org/contents/papers/0325.pdf)[[code]](https://github.com/TonyPod/ESGR)![GitHub stars](https://img.shields.io/github/stars/TonyPod/ESGR.svg?logo=github&label=Stars)<br />**VCL**(ICLR 2018)[[paper]](https://arxiv.org/pdf/1710.10628.pdf#page=13&zoom=100,110,890)<br />**FearNet**(ICLR 2018)[[paper]](https://openreview.net/forum?id=SJ1Xmf-Rb)<br />**DGDMN**(ICLR 2018)[[paper]](https://openreview.net/forum?id=BkVsWbbAW)<br/> |\n| 2017 | **Expert Gate**(CVPR 2017)[[paper]](https://openaccess.thecvf.com/content_cvpr_2017/papers/Aljundi_Expert_Gate_Lifelong_CVPR_2017_paper.pdf)[[code]](https://github.com/wannabeOG/ExpertNet-Pytorch)![GitHub stars](https://img.shields.io/github/stars/wannabeOG/ExpertNet-Pytorch.svg?logo=github&label=Stars)<br />**ILOD**(ICCV 2017)[[paper]](https://openaccess.thecvf.com/content_ICCV_2017/papers/Shmelkov_Incremental_Learning_of_ICCV_2017_paper.pdf)[[code]](https://github.com/kshmelkov/incremental_detectors)![GitHub stars](https://img.shields.io/github/stars/kshmelkov/incremental_detectors.svg?logo=github&label=Stars)<br />**EBLL**(ICCV2017)[[paper]](https://arxiv.org/abs/1704.01920)<br />**IMM**(NIPS 2017)[[paper]](https://arxiv.org/abs/1703.08475)[[code]](https://github.com/btjhjeon/IMM_tensorflow)![GitHub stars](https://img.shields.io/github/stars/btjhjeon/IMM_tensorflow.svg?logo=github&label=Stars)<br />**SI**(ICML 2017)[[paper]](http://proceedings.mlr.press/v70/zenke17a/zenke17a.pdf)[[code]](https://github.com/ganguli-lab/pathint)![GitHub stars](https://img.shields.io/github/stars/ganguli-lab/pathint.svg?logo=github&label=Stars)<br />**EWC**(PNAS 2017)[[paper]](https://arxiv.org/abs/1612.00796)[[code]](https://github.com/stokesj/EWC)![GitHub stars](https://img.shields.io/github/stars/stokesj/EWC.svg?logo=github&label=Stars)<br /> | **iCARL**(CVPR 
2017)[[paper]](https://arxiv.org/abs/1611.07725)[[code]](https://github.com/srebuffi/iCaRL)![GitHub stars](https://img.shields.io/github/stars/srebuffi/iCaRL.svg?logo=github&label=Stars)<br />**GEM**(NIPS 2017)[[paper]](https://proceedings.neurips.cc/paper/2017/hash/f87522788a2be2d171666752f97ddebb-Abstract.html)[[code]](https://github.com/facebookresearch/GradientEpisodicMemory)![GitHub stars](https://img.shields.io/github/stars/facebookresearch/GradientEpisodicMemory.svg?logo=github&label=Stars)<br />**DGR**(NIPS 2017)[[paper]](https://proceedings.neurips.cc/paper/2017/file/0efbe98067c6c73dba1250d2beaa81f9-Paper.pdf)[[code]](https://github.com/kuc2477/pytorch-deep-generative-replay)![GitHub stars](https://img.shields.io/github/stars/kuc2477/pytorch-deep-generative-replay.svg?logo=github&label=Stars)<br /> |\n| 2016 | **LwF**(ECCV 2016)[[paper]](https://link.springer.com/chapter/10.1007/978-3-319-46493-0_37)[[code]](https://github.com/lizhitwo/LearningWithoutForgetting)![GitHub stars](https://img.shields.io/github/stars/lizhitwo/LearningWithoutForgetting.svg?logo=github&label=Stars)<br /> |                                                              |\n\n\n\n\n### 3.2 From a Data Deployment Perspective\n\n**Data decentralized incremental learning**\n\n+ **[DCID]** Deep Class Incremental Learning from Decentralized Data(TNNLS 2022)[[paper]](https://ieeexplore.ieee.org/document/9932643)[[code]](https://github.com/Vision-Intelligence-and-Robots-Group/DCIL)\n+ **[GLFC]** Federated Class-Incremental Learning(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.11473)[[code]](https://github.com/conditionWang/FCIL)\n+ **[FedWeIT]** Federated Continual Learning with Weighted Inter-client Transfer(ICML 2021)[[paper]](https://proceedings.mlr.press/v139/yoon21b.html)[[code]](https://github.com/wyjeong/FedWeIT)\n\n**Data centralized incremental learning**\n\nAll other studies aforementioned except those already in the 'Decentralized' section.\n\n## 4 Datasets <span 
id='datasets'></span>\n\n| Dataset                                                      | Description                                                  |\n| :----------------------------------------------------------- | :----------------------------------------------------------- |\n| [ImageNet](https://image-net.org)                            | Contains 1.28 million training images and 50,000 validation images across 1,000 categories. Images are usually cropped to 224×224 color images. |\n| [TinyImageNet](https://www.kaggle.com/c/tiny-imagenet)       | Contains 200 categories of 64×64 color images. Each class has 500 training images, 50 validation images, and 50 test images. |\n| [MiniImageNet](https://lyy.mpi-inf.mpg.de/mtl/download/Lmzjm9tX.html) | A subset of ImageNet used for few-shot learning. It consists of 60,000 colour images of size 84×84 with 100 classes, each having 600 examples. |\n| [SubImageNet](https://openaccess.thecvf.com/content_CVPR_2019/html/Hou_Learning_a_Unified_Classifier_Incrementally_via_Rebalancing_CVPR_2019_paper.html) | A 100-class subset **randomly sampled** from ImageNet, containing approximately 130,000 images for training and 5,000 images for testing. |\n| [CIFAR-10/100](https://www.cs.toronto.edu/~kriz/cifar.html)  | Both datasets contain 60,000 natural 32×32 RGB images, split into 50,000 training and 10,000 test images. CIFAR-10 has 10 classes, while CIFAR-100 has 100 classes. 
|\n| [CORe50](https://vlomonaco.github.io/core50/)                | This dataset consists of 164,866 128×128 RGB-D images: 11 sessions × 50 objects × (around 300) frames per session.<br />[GitHub](https://github.com/vlomonaco/core50)<br />[CORe50: a New Dataset and Benchmark for Continuous Object Recognition](http://proceedings.mlr.press/v78/lomonaco17a.html)<br /> |\n| [OpenLORIS-Object](https://www.sciencedirect.com/science/article/pii/S0031320322003004?via%3Dihub) | Compared with other lifelong learning datasets, this is the first real-world robotic-vision dataset with independent and quantifiable environmental factors. It contains 186 instances, 63 categories, and 2,138,050 images. |\n\n\n\n## 5 Lecture, Tutorial, Workshop, & Talks <span id='workshop'></span>\n\n**Life-Long Learning | 李宏毅 (Hung-yi Lee)**\n\nLife-long Learning: [[ppt]](https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/life_v2.pptx) [[pdf]](https://speech.ee.ntu.edu.tw/~hylee/ml/ml2021-course-data/life_v2.pdf)\n\nCatastrophic Forgetting [[Chinese]](https://youtu.be/rWF9sg5w6Zk) [[English]](https://youtu.be/yAX8Ydfek_I)\n\nMitigating Catastrophic Forgetting [[Chinese]](https://youtu.be/Y9Jay_vxOsM) [[English]](https://youtu.be/-2r4cqDP4BY)\n\nMeta Learning: Learn to Learn [[Chinese]](https://www.youtube.com/watch?v=xoastiYx9JU)\n\n**Continual AI Lecture**\n\n[Open World Lifelong Learning | A Continual Machine Learning Course](http://owll-lab.com/teaching/cl_lecture/)\n\n[Prompting-based Continual Learning | Continual AI](https://www.youtube.com/watch?v=19bylhGhfAw)\n\n**VALSE Webinar** (In Chinese)\n\n[20211215 Learning Never Ends: Deep Continual Learning | Xiaopeng Hong: Memory Topology-Preserving Deep Incremental Learning](https://www.bilibili.com/video/BV1Qi4y197uf?spm_id_from=333.999.0.0)\n\n[20211215 Learning Never Ends: Deep Continual Learning | Xi Li: Continual Learning Theory and Methods Based on Deep Neural Networks](https://www.bilibili.com/video/BV1XR4y1W7mr?spm_id_from=333.999.0.0)\n\n**ACM MULTIMEDIA**\n\n[ACM2021 Few-shot Learning for Multi-Modality Tasks](https://ingrid725.github.io/ACM-Multimedia-2021/)\n\n**CVPR Workshop**\n\n[CVPR 2022 Workshop on Continual 
Learning in Computer Vision](https://sites.google.com/view/clvision2022/overview) \n\n[CVPR 2021 Workshop on Continual Learning in Computer Vision](https://sites.google.com/view/clvision2021)\n\n[CVPR 2020 Workshop on Continual Learning in Computer Vision](https://sites.google.com/view/clvision2020/overview)\n\n[CVPR 2017 Continuous and Open-Set Learning Workshop](https://erodner.github.io/continuouslearningcvpr2017/)\n\n**ICML Tutorial/Workshop**\n\n[ICML 2021 Workshop on Theory and Foundation of Continual Learning](https://sites.google.com/view/cl-theory-icml2021)\n\n[ICML 2021 Tutorial on Continual Learning with Deep Architectures](https://sites.google.com/view/cltutorial-icml2021)\n\n[ICML 2020 Workshop on Continual Learning](https://sites.google.com/view/cl-icml/)\n\n**NeurIPS Workshop**\n\n[NeurIPS 2021 4th Robot Learning Workshop: Self-Supervised and Lifelong Learning](http://www.robot-learning.ml/2021/)\n\n[NeurIPS 2018 Continual Learning Workshop](https://sites.google.com/view/continual2018/home)\n\n[NeurIPS 2016 Continual Learning and Deep Networks Workshop](https://sites.google.com/site/cldlnips2016/)\n\n**IJCAI Workshop**\n\n[IJCAI 2021 International Workshop on Continual Semi-Supervised Learning](https://sites.google.com/view/sscl-workshop-ijcai-2021/overview)\n\n**ContinualAI wiki**\n\n[A Non-profit Research Organization and Open Community on Continual Learning for AI](https://www.continualai.org/)\n\n**CoLLAs**\n\n[Conference on Lifelong Learning Agents - CoLLAs 2022](https://lifelong-ml.cc/)\n\n## 6 Competitions <span id='competitions'></span>\n\n**Archived**\n\n[3rd CLVISION CVPR Workshop Challenge 2022](https://sites.google.com/view/clvision2022/challenge)\n\n[IJCAI 2021 - International Workshop on Continual Semi-Supervised Learning](https://sites.google.com/view/sscl-workshop-ijcai-2021/)\n\n[2nd CLVISION CVPR Workshop Challenge 2021](https://eval.ai/web/challenges/challenge-page/829/overview)\n\n[1st CLVISION CVPR Workshop Challenge 
2020](https://sites.google.com/view/clvision2020/challenge)\n\n## 7 Awesome Reference <span id='awesome-reference'></span>\n\n[1] https://github.com/xialeiliu/Awesome-Incremental-Learning\n\n## 8 Contact Us <span id='contact-us'></span>\n\nShould there be any concerns about this page, please don't hesitate to let us know via [hongxiaopeng@ieee.org](mailto:hongxiaopeng@ieee.org) or [xl330@126.com](mailto:xl330@126.com).\n\n\n\n# Full Paper List <span id='paper-list'></span>\n\n\n\n## arXiv (corrections are welcome once these papers are accepted)\n+ Continual Instruction Tuning for Large Multimodal Models [[paper]](https://arxiv.org/abs/2311.16206)\n+ Continual Adversarial Defense [[paper]](https://arxiv.org/abs/2312.09481)[[code]](https://github.com/cc13qq/CAD)\n+ Class-Prototype Conditional Diffusion Model for Continual Learning with Generative Replay [[paper]](https://arxiv.org/abs/2312.06710)[[code]](https://github.com/dnkhanh45/cpdm)\n+ Class Incremental Learning for Adversarial Robustness [[paper]](https://arxiv.org/pdf/2312.03289.pdf)\n+ KOPPA: Improving Prompt-based Continual Learning with Key-Query Orthogonal Projection and Prototype-based One-Versus-All [[paper]](https://arxiv.org/abs/2311.15414)\n+ Prompt Gradient Projection for Continual Learning [[paper]](https://openreview.net/forum?id=EH2O3h7sBI)\n\n\n\n\n\n## 2024\n+ **[MOSE]** Orchestrate Latent Expertise: Advancing Online Continual Learning with Multi-Level Supervision and Reverse Self-Distillation(CVPR 2024) [[paper]](https://arxiv.org/abs/2404.00417)[[code]](https://github.com/AnAppleCore/MOSE)\n+ **[AISEOCL]** Adaptive Instance Similarity Embedding for Online Continual Learning(Pattern Recognition 2024) [[paper]](https://www.sciencedirect.com/science/article/abs/pii/S0031320323009354)\n+ **[SEED]** Divide and not forget: Ensemble of selectively trained experts in Continual Learning(ICLR 2024) 
[[paper]](https://openreview.net/forum?id=sSyytcewxe)\n+ **[CAMA]** Online Continual Learning for Interactive Instruction Following Agents(ICLR 2024) [[paper]](https://openreview.net/forum?id=7M0EzjugaN)[[code]](https://github.com/snumprlab/cl-alfred)\n+ **[SFR]** Function-space Parameterization of Neural Networks for Sequential Learning(ICLR 2024) [[paper]](https://openreview.net/attachment?id=2dhxxIKhqz&name=pdf)[[code]](https://aaltoml.github.io/sfr/)\n+ **[HLOP]** Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks(ICLR 2024) [[paper]](https://openreview.net/forum?id=MeB86edZ1P)\n+ **[TPL]** Class Incremental Learning via Likelihood Ratio Based Task Prediction(ICLR 2024) [[paper]](https://openreview.net/forum?id=8QfK9Dq4q0)[[code]](https://github.com/linhaowei1/TPL)\n+ **[AF-FCL]** Accurate Forgetting for Heterogeneous Federated Continual Learning(ICLR 2024) [[paper]](https://openreview.net/forum?id=ShQrnAsbPI)[[code]](https://anonymous.4open.science/r/AF-FCL-7D65)\n+ **[EFC]** Elastic Feature Consolidation For Cold Start Exemplar-Free Incremental Learning(ICLR 2024) [[paper]](https://openreview.net/forum?id=7D9X2cFnt1)\n+ **[DietCL]** Continual Learning on a Diet: Learning from Sparsely Labeled Streams Under Constrained Computation(ICLR 2024) [[paper]](https://openreview.net/forum?id=Xvfz8NHmCj)\n+ **[PICLE]** A Probabilistic Framework for Modular Continual Learning(ICLR 2024) [[paper]](https://openreview.net/forum?id=MVe2dnWPCu)\n+ **[OVOR]** OVOR: OnePrompt with Virtual Outlier Regularization for Rehearsal-Free Class-Incremental Learning(ICLR 2024) [[paper]](https://openreview.net/forum?id=FbuyDzZTPt)[[code]](https://github.com/jpmorganchase/ovor)\n+ **[BGS]** Continual Learning in the Presence of Spurious Correlations: Analyses and a Simple Baseline(ICLR 2024) [[paper]](https://openreview.net/forum?id=3Y7r6xueJJ)\n+ **[PEC]** Prediction Error-based Classification for Class-Incremental Learning(ICLR 2024) 
[[paper]](https://openreview.net/forum?id=DJZDgMOLXQ)[[code]](https://github.com/michalzajac-ml/pec)\n+ **[refresh learning]** A Unified and General Framework for Continual Learning(ICLR 2024) [[paper]](https://openreview.net/forum?id=BE5aK0ETbp)\n+ **[CPPO]** CPPO: Continual Learning for Reinforcement Learning with Human Feedback(ICLR 2024) [[paper]](https://openreview.net/forum?id=86zAUE80pP)\n+ **[JARe]** Scalable Language Model with Generalized Continual Learning(ICLR 2024) [[paper]](https://openreview.net/forum?id=mz8owj4DXu)\n+ **[POCON]** Plasticity-Optimized Complementary Networks for Unsupervised Continual Learning(WACV 2024) [[paper]](https://arxiv.org/pdf/2309.06086.pdf)\n+ **[DMU]** Online Class-Incremental Learning For Real-World Food Image Classification(WACV 2024) [[paper]](https://openaccess.thecvf.com/content/WACV2024/papers/Raghavan_Online_Class-Incremental_Learning_for_Real-World_Food_Image_Classification_WACV_2024_paper.pdf)[[code]](https://gitlab.com/viper-purdue/OCIL-real-world-food-image-classification)\n+ **[CLTA]** Adapt Your Teacher: Improving Knowledge Distillation for Exemplar-free Continual Learning(WACV 2024) [[paper]](https://arxiv.org/abs/2308.09544)[[code]](https://github.com/fszatkowski/cl-teacher-adaptation)\n+ **[FG-KSR]** Fine-Grained Knowledge Selection and Restoration for Non-Exemplar Class Incremental Learning(AAAI 2024) [[paper]](https://arxiv.org/abs/2312.12722)[[code]](https://github.com/scok30/vit-cil)\n\n## 2023\n+ **[PRD]** Prototype-Sample Relation Distillation: Towards Replay-Free Continual Learning(ICML 2023) [[paper]](https://arxiv.org/pdf/2303.14771.pdf)\n+ A Unified Continual Learning Framework with General Parameter-Efficient Tuning(ICCV 2023) [[paper]](https://arxiv.org/abs/2303.10070)[[code]](https://github.com/gqk/LAE?tab=readme-ov-file)\n+ Cross-Modal Alternating Learning with Task-Aware Representations for Continual Learning(TMM 2023) 
[[paper]](https://ieeexplore.ieee.org/abstract/document/10347466)[[code]](https://csgaobb.github.io/)\n+ Semantic Knowledge Guided Class-Incremental Learning(TCSVT 2023) [[paper]](https://ieeexplore.ieee.org/document/10083158)\n+ Non-Exemplar Class-Incremental Learning via Adaptive Old Class Reconstruction(ACM MM 2023) [[paper]](https://dl.acm.org/doi/10.1145/3581783.3611926)[[code]](https://github.com/Mysteriousplayer/POLO-NECIL)\n+ **[HiDe-Prompt]** Hierarchical Decomposition of Prompt-Based Continual Learning: Rethinking Obscured Sub-optimality(NeurIPS 2023)[[paper]](https://arxiv.org/abs/2310.07234)[[code]](https://github.com/thu-ml/HiDe-Prompt)\n+ TriRE: A Multi-Mechanism Learning Paradigm for Continual Knowledge Retention and Promotion(NeurIPS 2023)[[paper]](https://arxiv.org/abs/2310.08217)\n+ **[AdaB2N]** Overcoming Recency Bias of Normalization Statistics in Continual Learning: Balance and Adaptation(NeurIPS 2023)[[paper]](https://arxiv.org/abs/2310.08855)[[code]](https://github.com/lvyilin/AdaB2N)\n+ Online Class Incremental Learning on Stochastic Blurry Task Boundary via Mask and Visual Prompt Tuning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Moon_Online_Class_Incremental_Learning_on_Stochastic_Blurry_Task_Boundary_via_ICCV_2023_paper.pdf)\n+ Decouple Before Interact: Multi-Modal Prompt Learning for Continual Visual Question Answering(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Qian_Decouple_Before_Interact_Multi-Modal_Prompt_Learning_for_Continual_Visual_Question_ICCV_2023_paper.pdf)\n+ Prototype Reminiscence and Augmented Asymmetric Knowledge Aggregation for Non-Exemplar Class-Incremental Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Shi_Prototype_Reminiscence_and_Augmented_Asymmetric_Knowledge_Aggregation_for_Non-Exemplar_Class-Incremental_ICCV_2023_paper.pdf)\n+ When Prompt-based Incremental Learning Does Not Meet Strong Pretraining(ICCV 
2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Tang_When_Prompt-based_Incremental_Learning_Does_Not_Meet_Strong_Pretraining_ICCV_2023_paper.pdf)\n+ Class-incremental Continual Learning for Instance Segmentation with Image-level Weak Supervision(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Hsieh_Class-incremental_Continual_Learning_for_Instance_Segmentation_with_Image-level_Weak_Supervision_ICCV_2023_paper.pdf)\n+ Dynamic Residual Classifier for Class Incremental Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Chen_Dynamic_Residual_Classifier_for_Class_Incremental_Learning_ICCV_2023_paper.pdf)\n+ Audio-Visual Class-Incremental Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Pian_Audio-Visual_Class-Incremental_Learning_ICCV_2023_paper.pdf)\n+ First Session Adaptation: A Strong Replay-Free Baseline for Class-Incremental Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Panos_First_Session_Adaptation_A_Strong_Replay-Free_Baseline_for_Class-Incremental_Learning_ICCV_2023_paper.pdf)\n+ Self-Organizing Pathway Expansion for Non-Exemplar Class-Incremental Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Zhu_Self-Organizing_Pathway_Expansion_for_Non-Exemplar_Class-Incremental_Learning_ICCV_2023_paper.pdf)\n+ Heterogeneous Forgetting Compensation for Class-Incremental Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Dong_Heterogeneous_Forgetting_Compensation_for_Class-Incremental_Learning_ICCV_2023_paper.pdf)\n+ Masked Autoencoders are Efficient Class Incremental Learners(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Zhai_Masked_Autoencoders_are_Efficient_Class_Incremental_Learners_ICCV_2023_paper.pdf)\n+ Knowledge Restore and Transfer for Multi-Label Class-Incremental Learning(ICCV 
2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Dong_Knowledge_Restore_and_Transfer_for_Multi-Label_Class-Incremental_Learning_ICCV_2023_paper.pdf)\n+ Space-time Prompting for Video Class-incremental Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Pei_Space-time_Prompting_for_Video_Class-incremental_Learning_ICCV_2023_paper.pdf)\n+ CLNeRF: Continual Learning Meets NeRF(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Cai_CLNeRF_Continual_Learning_Meets_NeRF_ICCV_2023_paper.pdf)\n+ Rapid Adaptation in Online Continual Learning: Are We Evaluating It Right?(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Al_Kader_Hammoud_Rapid_Adaptation_in_Online_Continual_Learning_Are_We_Evaluating_It_ICCV_2023_paper.pdf)\n+ Exemplar-Free Continual Transformer with Convolutions(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Roy_Exemplar-Free_Continual_Transformer_with_Convolutions_ICCV_2023_paper.pdf)\n+ Self-Evolved Dynamic Expansion Model for Task-Free Continual Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Ye_Self-Evolved_Dynamic_Expansion_Model_for_Task-Free_Continual_Learning_ICCV_2023_paper.pdf)\n+ Contrastive Continuity on Augmentation Stability Rehearsal for Continual Self-Supervised Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Cheng_Contrastive_Continuity_on_Augmentation_Stability_Rehearsal_for_Continual_Self-Supervised_Learning_ICCV_2023_paper.pdf)\n+ Measuring Asymmetric Gradient Discrepancy in Parallel Continual Learning(ICCV 
2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Lyu_Measuring_Asymmetric_Gradient_Discrepancy_in_Parallel_Continual_Learning_ICCV_2023_paper.pdf)\n+ Wasserstein Expansible Variational Autoencoder for Discriminative and Generative Continual Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Ye_Wasserstein_Expansible_Variational_Autoencoder_for_Discriminative_and_Generative_Continual_Learning_ICCV_2023_paper.pdf)\n+ Data Augmented Flatness-aware Gradient Projection for Continual Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Yang_Data_Augmented_Flatness-aware_Gradient_Projection_for_Continual_Learning_ICCV_2023_paper.pdf)\n+ A Unified Continual Learning Framework with General Parameter-Efficient Tuning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Gao_A_Unified_Continual_Learning_Framework_with_General_Parameter-Efficient_Tuning_ICCV_2023_paper.pdf)\n+ Introducing Language Guidance in Prompt-based Continual Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Khan_Introducing_Language_Guidance_in_Prompt-based_Continual_Learning_ICCV_2023_paper.pdf)\n+ Continual Learning for Personalized Co-speech Gesture Generation(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Ahuja_Continual_Learning_for_Personalized_Co-speech_Gesture_Generation_ICCV_2023_paper.pdf)\n+ Growing a Brain with Sparsity-Inducing Generation for Continual Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Jin_Growing_a_Brain_with_Sparsity-Inducing_Generation_for_Continual_Learning_ICCV_2023_paper.pdf)\n+ Towards Realistic Evaluation of Industrial Continual Learning Scenarios with an Emphasis on Energy Consumption and Computational Footprint(ICCV 
2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Chavan_Towards_Realistic_Evaluation_of_Industrial_Continual_Learning_Scenarios_with_an_ICCV_2023_paper.pdf)\n+ Class-Incremental Grouping Network for Continual Audio-Visual Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Mo_Class-Incremental_Grouping_Network_for_Continual_Audio-Visual_Learning_ICCV_2023_paper.pdf)\n+ ICICLE: Interpretable Class Incremental Continual Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Rymarczyk_ICICLE_Interpretable_Class_Incremental_Continual_Learning_ICCV_2023_paper.pdf)\n+ Online Prototype Learning for Online Continual Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Wei_Online_Prototype_Learning_for_Online_Continual_Learning_ICCV_2023_paper.pdf)\n+ NAPA-VQ: Neighborhood-Aware Prototype Augmentation with Vector Quantization for Continual Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Malepathirana_NAPA-VQ_Neighborhood-Aware_Prototype_Augmentation_with_Vector_Quantization_for_Continual_Learning_ICCV_2023_paper.pdf)\n+ Few-shot Continual Infomax Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Gu_Few-shot_Continual_Infomax_Learning_ICCV_2023_paper.pdf)\n+ SLCA: Slow Learner with Classifier Alignment for Continual Learning on a Pre-trained Model(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Zhang_SLCA_Slow_Learner_with_Classifier_Alignment_for_Continual_Learning_on_ICCV_2023_paper.pdf)\n+ Instance and Category Supervision are Alternate Learners for Continual Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Tian_Instance_and_Category_Supervision_are_Alternate_Learners_for_Continual_Learning_ICCV_2023_paper.pdf)\n+ Preventing Zero-Shot Transfer Degradation in Continual Learning of Vision-Language Models(ICCV 
2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Zheng_Preventing_Zero-Shot_Transfer_Degradation_in_Continual_Learning_of_Vision-Language_Models_ICCV_2023_paper.pdf)\n+ CLR: Channel-wise Lightweight Reprogramming for Continual Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Ge_CLR_Channel-wise_Lightweight_Reprogramming_for_Continual_Learning_ICCV_2023_paper.pdf)\n+ Complementary Domain Adaptation and Generalization for Unsupervised Continual Domain Shift Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Cho_Complementary_Domain_Adaptation_and_Generalization_for_Unsupervised_Continual_Domain_Shift_ICCV_2023_paper.pdf)\n+ TARGET: Federated Class-Continual Learning via Exemplar-Free Distillation(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Zhang_TARGET_Federated_Class-Continual_Learning_via_Exemplar-Free_Distillation_ICCV_2023_paper.pdf)\n+ CBA: Improving Online Continual Learning via Continual Bias Adaptor(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Wang_CBA_Improving_Online_Continual_Learning_via_Continual_Bias_Adaptor_ICCV_2023_paper.pdf)\n+ Continual Zero-Shot Learning through Semantically Guided Generative Random Walks(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Zhang_Continual_Zero-Shot_Learning_through_Semantically_Guided_Generative_Random_Walks_ICCV_2023_paper.pdf)\n+ A Soft Nearest-Neighbor Framework for Continual Semi-Supervised Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Kang_A_Soft_Nearest-Neighbor_Framework_for_Continual_Semi-Supervised_Learning_ICCV_2023_paper.pdf)\n+ Online Continual Learning on Hierarchical Label Expansion(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Lee_Online_Continual_Learning_on_Hierarchical_Label_Expansion_ICCV_2023_paper.pdf)\n+ Investigating the Catastrophic Forgetting in Multimodal Large 
Language Models (NeurIPS 2023 Workshop) [[paper]](https://arxiv.org/abs/2309.10313)\n+ Generating Instance-level Prompts for Rehearsal-free Continual Learning(ICCV 2023)[[paper]](https://openaccess.thecvf.com/content/ICCV2023/papers/Jung_Generating_Instance-level_Prompts_for_Rehearsal-free_Continual_Learning_ICCV_2023_paper.pdf)\n+ Heterogeneous Continual Learning(CVPR 2023)[[paper]](https://arxiv.org/abs/2306.08593)\n+ Partial Hypernetworks for Continual Learning(CoLLAs 2023)[[paper]](https://arxiv.org/abs/2306.10724)\n+ Learnability and Algorithm for Continual Learning(ICML 2023)[[paper]](https://arxiv.org/abs/2306.12646)\n+ Parameter-Level Soft-Masking for Continual Learning(ICML 2023)[[paper]](https://arxiv.org/abs/2306.14775)\n+ Improving Online Continual Learning Performance and Stability with Temporal Ensembles(CoLLAs 2023)[[paper]](https://arxiv.org/abs/2306.16817)\n+ Exploring Continual Learning for Code Generation Models(ACL 2023)[[paper]](https://arxiv.org/abs/2307.02435)\n+ **[Fed-CPrompt]** Fed-CPrompt: Contrastive Prompt for Rehearsal-Free Federated Continual Learning(FL-ICML 2023)[[paper]](https://arxiv.org/abs/2307.04869)\n+ Online Continual Learning for Robust Indoor Object Recognition(ICCV 2023)[[paper]](https://arxiv.org/abs/2307.09827)\n+ Proxy Anchor-based Unsupervised Learning for Continuous Generalized Category Discovery(ICCV 2023)[[paper]](https://arxiv.org/abs/2307.10943)\n+ **[XLDA]** XLDA: Linear Discriminant Analysis for Scaling Continual Learning to Extreme Classification at the Edge(ICML 2023)[[paper]](https://arxiv.org/abs/2307.11317)\n+ **[CLR]** CLR: Channel-wise Lightweight Reprogramming for Continual Learning(ICCV 2023)[[paper]](https://arxiv.org/abs/2307.11386)\n+ **[CS-VQLA]** Revisiting Distillation for Continual Learning on Visual Question Localized-Answering in Robotic Surgery(MICCAI 2023)[[paper]](https://arxiv.org/abs/2307.12045)[[code]](https://github.com/longbai1006/CS-VQLA)\n+ Online Prototype Learning for Online Continual 
Learning(ICCV 2023)[[paper]](https://arxiv.org/abs/2308.00301)[[code]](https://github.com/weilllllls/OnPro)\n+ Cost-effective On-device Continual Learning over Memory Hierarchy with Miro(ACM MobiCom 23)[[paper]](https://arxiv.org/abs/2308.06053)\n+ **[CBA]** CBA: Improving Online Continual Learning via Continual Bias Adaptor(ICCV 2023)[[paper]](https://arxiv.org/abs/2308.06925)\n+ **[A-Prompts]** Remind of the Past: Incremental Learning with Analogical Prompts(arXiv 2023)[[paper]](https://arxiv.org/abs/2303.13898)\n+ **[ESN]** Isolation and Impartial Aggregation: A Paradigm of Incremental Learning without Interference(AAAI 2023)[[paper]](https://arxiv.org/abs/2211.15969)[[code]](https://github.com/iamwangyabin/ESN)![GitHub stars](https://img.shields.io/github/stars/iamwangyabin/ESN.svg?logo=github&label=Stars)\n+ **[RevisitingCIL]** Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need(arXiv 2023)[[paper]](https://arxiv.org/abs/2303.07338)[[code]](https://github.com/zhoudw-zdw/RevisitingCIL)![GitHub stars](https://img.shields.io/github/stars/zhoudw-zdw/RevisitingCIL.svg?logo=github&label=Stars)\n+ **[LwP]** Learning without Prejudices: Continual Unbiased Learning via Benign and Malignant Forgetting(ICLR 2023)[[paper]](https://openreview.net/forum?id=gfPUokHsW-)\n+ **[SDMLP]** Sparse Distributed Memory is a Continual Learner(ICLR 2023)[[paper]](https://openreview.net/forum?id=JknGeelZJpHP)\n+ **[SaLinA]** Building a Subspace of Policies for Scalable Continual Learning(ICLR 2023)[[paper]](https://openreview.net/forum?id=ZloanUtG4a)[[code]](https://github.com/facebookresearch/salina/tree/main/salina_cl)\n+ **[BEEF]** BEEF:Bi-Compatible Class-Incremental Learning via Energy-Based Expansion and Fusion(ICLR 2023)[[paper]](https://openreview.net/pdf?id=iP77_axu0h3)[[code]](https://github.com/G-U-N/ICLR23-BEEF)![GitHub stars](https://img.shields.io/github/stars/G-U-N/ICLR23-BEEF.svg?logo=github&label=Stars)\n+ 
**[WaRP]** Warping the Space: Weight Space Rotation for Class-Incremental Few-Shot Learning(ICLR 2023)[[paper]](https://openreview.net/pdf?id=kPLzOfPfA2l)\n+ **[OBC]** Online Bias Correction for Task-Free Continual Learning(ICLR 2023)[[paper]](https://openreview.net/pdf?id=18XzeuYZh_)\n+ **[NC-FSCIL]** Neural Collapse Inspired Feature-Classifier Alignment for Few-Shot Class-Incremental Learning(ICLR 2023)[[paper]](https://openreview.net/pdf?id=y5W8tpojhtJ)[[code]](https://github.com/NeuralCollapseApplications/FSCIL)![GitHub stars](https://img.shields.io/github/stars/NeuralCollapseApplications/FSCIL.svg?logo=github&label=Stars)\n+ **[iVoro]** Progressive Voronoi Diagram Subdivision Enables Accurate Data-free Class-Incremental Learning(ICLR 2023)[[paper]](https://openreview.net/pdf?id=zJXg_Wmob03)\n+ **[DAS]** Continual Learning of Language Models(ICLR 2023)[[paper]](https://openreview.net/pdf?id=m_GDIItaI3o)\n+ **[Progressive Prompts]** Progressive Prompts: Continual Learning for Language Models without Forgetting(ICLR 2023)[[paper]](https://openreview.net/pdf?id=UJTgQBc91_)\n+ **[SDP]** Online Boundary-Free Continual Learning by Scheduled Data Prior(ICLR 2023)[[paper]](https://openreview.net/pdf?id=qco4ekz2Epm)[[code]](https://github.com/yonseivnl/sdp)![GitHub stars](https://img.shields.io/github/stars/yonseivnl/sdp.svg?logo=github&label=Stars)\n+ **[iLDR]** Incremental Learning of Structured Memory via Closed-Loop Transcription(ICLR 2023)[[paper]](https://arxiv.org/pdf/2202.05411.pdf)\n+ **[SoftNet-FSCIL]** On the Soft-Subnetwork for Few-Shot Class Incremental Learning(ICLR 2023)[[paper]](https://openreview.net/pdf?id=z57WK5lGeHd)[[code]](https://github.com/ihaeyong/SoftNet-FSCIL)![GitHub stars](https://img.shields.io/github/stars/ihaeyong/SoftNet-FSCIL.svg?logo=github&label=Stars)\n+ **[ESMER]** Error Sensitivity Modulation based Experience Replay: Mitigating Abrupt Representation Drift in Continual 
Learning(ICLR 2023)[[paper]](https://openreview.net/forum?id=zlbci7019Z3)[[code]](https://github.com/NeurAI-Lab/ESMER)![GitHub stars](https://img.shields.io/github/stars/NeurAI-Lab/ESMER.svg?logo=github&label=Stars)\n+ **[MEMO]** A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental Learning(ICLR 2023)[[paper]](https://arxiv.org/abs/2205.13218)[[code]](https://github.com/wangkiw/ICLR23-MEMO)![GitHub stars](https://img.shields.io/github/stars/wangkiw/ICLR23-MEMO.svg?logo=github&label=Stars)\n+ **[CUDOS]** Continual Unsupervised Disentangling of Self-Organizing Representations(ICLR 2023)[[paper]](https://openreview.net/pdf?id=ih0uFRFhaZZ)\n+ **[ACGAN]** Better Generative Replay for Continual Federated Learning(ICLR 2023)[[paper]](https://openreview.net/pdf?id=cRxYWKiTan)[[code]](https://github.com/daiqing98/FedCIL)![GitHub stars](https://img.shields.io/github/stars/daiqing98/FedCIL.svg?logo=github&label=Stars)\n+ **[TAMiL]** Task-Aware Information Routing from Common Representation Space in Lifelong Learning(ICLR 2023)[[paper]](https://openreview.net/pdf?id=-M0TNnyWFT5)[[code]](https://github.com/NeurAI-Lab/TAMiL)![GitHub stars](https://img.shields.io/github/stars/NeurAI-Lab/TAMiL.svg?logo=github&label=Stars)\n+ **[FeTrIL]** Feature Translation for Exemplar-Free Class-Incremental Learning(WACV 2023)[[paper]](https://arxiv.org/abs/2211.13131)[[code]](https://github.com/GregoirePetit/FeTrIL)![GitHub stars](https://img.shields.io/github/stars/GregoirePetit/FeTrIL.svg?logo=github&label=Stars)\n+ **[RSOI]** Regularizing Second-Order Influences for Continual Learning(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.10177.pdf)[[code]](https://github.com/feifeiobama/InfluenceCL)![GitHub stars](https://img.shields.io/github/stars/feifeiobama/InfluenceCL.svg?logo=github&label=Stars)\n+ **[TBBN]** Rebalancing Batch Normalization for Exemplar-based Class-Incremental Learning(CVPR 2023)[[paper]](https://arxiv.org/pdf/2201.12559.pdf)\n+ **[AMSS]** Continual Semantic 
Segmentation with Automatic Memory Sample Selection(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.05015.pdf)\n+ **[DGCL]** Exploring Data Geometry for Continual Learning(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.03931.pdf)\n+ **[PCR]** PCR: Proxy-based Contrastive Replay for Online Class-Incremental Continual Learning(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.04408.pdf)[[code]](https://github.com/FelixHuiweiLin/PCR)![GitHub stars](https://img.shields.io/github/stars/FelixHuiweiLin/PCR.svg?logo=github&label=Stars)\n+ **[FMWISS]** Foundation Model Drives Weakly Incremental Learning for Semantic Segmentation(CVPR 2023)[[paper]](https://arxiv.org/pdf/2302.14250.pdf)\n+ **[CL-DETR]** Continual Detection Transformer for Incremental Object Detection(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.03110.pdf)[[code]](https://github.com/yaoyao-liu/CL-DETR)![GitHub stars](https://img.shields.io/github/stars/yaoyao-liu/CL-DETR.svg?logo=github&label=Stars)\n+ **[PIVOT]** PIVOT: Prompting for Video Continual Learning(CVPR 2023)[[paper]](https://arxiv.org/pdf/2212.04842.pdf)\n+ **[CIM-CIL]** Class-Incremental Exemplar Compression for Class-Incremental Learning(CVPR 2023)[[paper]](https://arxiv.org/pdf/2303.14042.pdf)[[code]](https://github.com/xfflzl/CIM-CIL)![GitHub stars](https://img.shields.io/github/stars/xfflzl/CIM-CIL.svg?logo=github&label=Stars)\n+ **[DNE]** Dense Network Expansion for Class Incremental Learning(CVPR 2023)[[paper]](https://arxiv.org/pdf/2303.12696.pdf)\n+ **[PAR]** Task Difficulty Aware Parameter Allocation & Regularization for Lifelong Learning(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.05288.pdf)\n+ **[PETAL]** A Probabilistic Framework for Lifelong Test-Time Adaptation(CVPR 2023)[[paper]](https://arxiv.org/pdf/2212.09713.pdf)[[code]](https://github.com/dhanajitb/petal)![GitHub stars](https://img.shields.io/github/stars/dhanajitb/petal.svg?logo=github&label=Stars)\n+ **[SAVC]** Learning with Fantasy: Semantic-Aware Virtual Contrastive 
Constraint for Few-Shot Class-Incremental Learning(CVPR 2023)[[paper]](https://arxiv.org/pdf/2304.00426.pdf)[[code]](https://github.com/zysong0113/SAVC)![GitHub stars](https://img.shields.io/github/stars/zysong0113/SAVC.svg?logo=github&label=Stars)\n+ **[CODA-Prompt]** CODA-Prompt: COntinual Decomposed Attention-based Prompting for Rehearsal-Free Continual Learning(CVPR 2023)[[paper]](https://arxiv.org/pdf/2211.13218.pdf)[[code]](https://github.com/GT-RIPL/CODA-Prompt)![GitHub stars](https://img.shields.io/github/stars/GT-RIPL/CODA-Prompt.svg?logo=github&label=Stars)\n\n## 2022\n+ **[RD-IOD]** RD-IOD: Two-Level Residual-Distillation-Based Triple-Network for Incremental Object Detection(ACM Trans 2022)[[paper]](https://dl.acm.org/doi/abs/10.1145/3472393)\n+ **[NCM]** Exemplar-free Online Continual Learning(arXiv 2022)[[paper]](https://arxiv.org/abs/2202.05491)\n+ **[IPP]** Incremental Prototype Prompt-tuning with Pre-trained Representation for Class Incremental Learning(arXiv 2022)[[paper]](https://arxiv.org/abs/2204.03410)\n+ **[Incremental-DETR]** Incremental-DETR: Incremental Few-Shot Object Detection via Self-Supervised Learning(arXiv 2022)[[paper]](https://arxiv.org/abs/2205.04042)\n+ **[ELI]** Energy-Based Latent Aligner for Incremental Learning(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Joseph_Energy-Based_Latent_Aligner_for_Incremental_Learning_CVPR_2022_paper.html)\n+ **[CASSLE]** Self-Supervised Models Are Continual Learners(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Fini_Self-Supervised_Models_Are_Continual_Learners_CVPR_2022_paper.html)[[code]](https://github.com/DonkeyShot21/cassle)![GitHub stars](https://img.shields.io/github/stars/DonkeyShot21/cassle.svg?logo=github&label=Stars)\n+ **[iFS-RCNN]** iFS-RCNN: An Incremental Few-Shot Instance Segmenter(CVPR 
2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Nguyen_iFS-RCNN_An_Incremental_Few-Shot_Instance_Segmenter_CVPR_2022_paper.html)\n+ **[WILSON]** Incremental Learning in Semantic Segmentation From Image Labels(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Cermelli_Incremental_Learning_in_Semantic_Segmentation_From_Image_Labels_CVPR_2022_paper.html)[[code]](https://github.com/fcdl94/WILSON)![GitHub stars](https://img.shields.io/github/stars/fcdl94/WILSON.svg?logo=github&label=Stars)\n+ **[Connector]** Towards Better Plasticity-Stability Trade-Off in Incremental Learning: A Simple Linear Connector(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Lin_Towards_Better_Plasticity-Stability_Trade-Off_in_Incremental_Learning_A_Simple_Linear_CVPR_2022_paper.html)[[code]](https://github.com/lingl1024/Connector)![GitHub stars](https://img.shields.io/github/stars/lingl1024/Connector.svg?logo=github&label=Stars)\n+ **[PAD]** Towards Exemplar-Free Continual Learning in Vision Transformers: an Account of Attention, Functional and Weight Regularization(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.13167)\n+ **[ERD]** Overcoming Catastrophic Forgetting in Incremental Object Detection via Elastic Response Distillation(CVPR 2022)[[paper]](https://arxiv.org/abs/2204.02136)[[code]](https://github.com/Hi-FT/ERD)![GitHub stars](https://img.shields.io/github/stars/Hi-FT/ERD.svg?logo=github&label=Stars)\n+ **[AFC]** Class-Incremental Learning by Knowledge Distillation with Adaptive Feature Consolidation(CVPR 2022)[[paper]](https://arxiv.org/abs/2204.00895)[[code]](https://github.com/kminsoo/AFC)![GitHub stars](https://img.shields.io/github/stars/kminsoo/AFC.svg?logo=github&label=Stars)\n+ **[FACT]** Forward Compatible Few-Shot Class-Incremental Learning(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.06953)[[code]](https://github.com/zhoudw-zdw/CVPR22-Fact)![GitHub 
stars](https://img.shields.io/github/stars/zhoudw-zdw/CVPR22-Fact.svg?logo=github&label=Stars)\n+ **[L2P]** Learning to Prompt for Continual Learning(CVPR 2022)[[paper]](https://arxiv.org/abs/2112.08654)[[code]](https://github.com/google-research/l2p)![GitHub stars](https://img.shields.io/github/stars/google-research/l2p.svg?logo=github&label=Stars)\n+ **[MEAT]** Meta-attention for ViT-backed Continual Learning(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.11684)[[code]](https://github.com/zju-vipa/MEAT-TIL)![GitHub stars](https://img.shields.io/github/stars/zju-vipa/MEAT-TIL.svg?logo=github&label=Stars)\n+ **[RCIL]** Representation Compensation Networks for Continual Semantic Segmentation(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.05402)[[code]](https://github.com/zhangchbin/RCIL)![GitHub stars](https://img.shields.io/github/stars/zhangchbin/RCIL.svg?logo=github&label=Stars)\n+ **[ZITS]** Incremental Transformer Structure Enhanced Image Inpainting with Masking Positional Encoding(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.00867)[[code]](https://github.com/DQiaole/ZITS_inpainting)![GitHub stars](https://img.shields.io/github/stars/DQiaole/ZITS_inpainting.svg?logo=github&label=Stars)\n+ **[MTPSL]** Learning Multiple Dense Prediction Tasks from Partially Annotated Data(CVPR 2022)[[paper]](https://arxiv.org/abs/2111.14893)[[code]](https://github.com/VICO-UoE/MTPSL)![GitHub stars](https://img.shields.io/github/stars/VICO-UoE/MTPSL.svg?logo=github&label=Stars)\n+ **[MMA]** Modeling Missing Annotations for Incremental Learning in Object Detection(CVPR-Workshop 2022)[[paper]](https://arxiv.org/abs/2204.08766)\n+ **[CoSCL]** CoSCL: Cooperation of Small Continual Learners is Stronger than a Big One(ECCV 2022)[[paper]](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136860249.pdf)[[code]](https://github.com/lywang3081/CoSCL)![GitHub stars](https://img.shields.io/github/stars/lywang3081/CoSCL.svg?logo=github&label=Stars)\n+ **[AdNS]** Balancing 
Stability and Plasticity through Advanced Null Space in Continual Learning(ECCV 2022)[[paper]](https://arxiv.org/abs/2207.12061)\n+ **[ProCA]** Prototype-Guided Continual Adaptation for Class-Incremental Unsupervised Domain Adaptation(ECCV 2022)[[paper]](https://arxiv.org/abs/2207.10856)[[code]](https://github.com/Hongbin98/ProCA)![GitHub stars](https://img.shields.io/github/stars/Hongbin98/ProCA.svg?logo=github&label=Stars)\n+ **[R-DFCIL]** R-DFCIL: Relation-Guided Representation Learning for Data-Free Class Incremental Learning(ECCV 2022)[[paper]](https://arxiv.org/abs/2203.13104)[[code]](https://github.com/jianzhangcs/R-DFCIL)![GitHub stars](https://img.shields.io/github/stars/jianzhangcs/R-DFCIL.svg?logo=github&label=Stars)\n+ **[S3C]** S3C: Self-Supervised Stochastic Classifiers for Few-Shot Class-Incremental Learning(ECCV 2022)[[paper]](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136850427.pdf)[[code]](https://github.com/JAYATEJAK/S3C)![GitHub stars](https://img.shields.io/github/stars/JAYATEJAK/S3C.svg?logo=github&label=Stars)\n+ **[H^2^]** Helpful or Harmful: Inter-Task Association in Continual Learning(ECCV 2022)[[paper]](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136710518.pdf)\n+ **[DualPrompt]** DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning(ECCV 2022)[[paper]](https://arxiv.org/abs/2204.04799)\n+ **[ALICE]** Few-Shot Class Incremental Learning From an Open-Set Perspective(ECCV 2022)[[paper]](https://arxiv.org/pdf/2208.00147.pdf)[[code]](https://github.com/CanPeng123/FSCIL_ALICE)![GitHub stars](https://img.shields.io/github/stars/CanPeng123/FSCIL_ALICE.svg?logo=github&label=Stars)\n+ **[RU-TIL]** Incremental Task Learning with Incremental Rank Updates(ECCV 2022)[[paper]](https://arxiv.org/pdf/2207.09074.pdf)[[code]](https://github.com/CSIPlab/task-increment-rank-update)![GitHub stars](https://img.shields.io/github/stars/CSIPlab/task-increment-rank-update.svg?logo=github&label=Stars)\n+ **[FOSTER]** 
FOSTER: Feature Boosting and Compression for Class-Incremental Learning(ECCV 2022)[[paper]](https://arxiv.org/abs/2204.04662)\n+ **[SSR]** Subspace Regularizers for Few-Shot Class Incremental Learning(ICLR 2022)[[paper]](https://openreview.net/pdf?id=boJy41J-tnQ)[[code]](https://github.com/feyzaakyurek/subspace-reg)![GitHub stars](https://img.shields.io/github/stars/feyzaakyurek/subspace-reg.svg?logo=github&label=Stars)\n+ **[RGO]** Continual Learning with Recursive Gradient Optimization(ICLR 2022)[[paper]](https://openreview.net/pdf?id=7YDLgf9_zgm)\n+ **[TRGP]** TRGP: Trust Region Gradient Projection for Continual Learning(ICLR 2022)[[paper]](https://openreview.net/pdf?id=iEvAf8i6JjO)\n+ **[AGCN]** AGCN: Augmented Graph Convolutional Network for Lifelong Multi-Label Image Recognition(ICME 2022)[[paper]](https://arxiv.org/abs/2203.05534)[[code]](https://github.com/Kaile-Du/AGCN)![GitHub stars](https://img.shields.io/github/stars/Kaile-Du/AGCN.svg?logo=github&label=Stars)\n+ **[WSN]** Forget-free Continual Learning with Winning Subnetworks(ICML 2022)[[paper]](https://proceedings.mlr.press/v162/kang22b/kang22b.pdf)[[code]](https://github.com/ihaeyong/WSN)![GitHub stars](https://img.shields.io/github/stars/ihaeyong/WSN.svg?logo=github&label=Stars)\n+ **[NISPA]** NISPA: Neuro-Inspired Stability-Plasticity Adaptation for Continual Learning in Sparse Networks(ICML 2022)[[paper]](https://proceedings.mlr.press/v162/gurbuz22a/gurbuz22a.pdf)[[code]](https://github.com/BurakGurbuz97/NISPA)![GitHub stars](https://img.shields.io/github/stars/BurakGurbuz97/NISPA.svg?logo=github&label=Stars)\n+ **[S-FSVI]** Continual Learning via Sequential Function-Space Variational Inference(ICML 2022)[[paper]](https://proceedings.mlr.press/v162/rudner22a/rudner22a.pdf)[[code]](https://github.com/timrudner/S-FSVI)![GitHub stars](https://img.shields.io/github/stars/timrudner/S-FSVI.svg?logo=github&label=Stars)\n+ **[CUBER]** Beyond Not-Forgetting: Continual Learning with Backward Knowledge 
Transfer(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2211.00789)\n+ **[ADA]** Memory Efficient Continual Learning with Transformers(NeurIPS 2022)[[paper]](https://www.amazon.science/publications/memory-efficient-continual-learning-with-transformers)\n+ **[CLOM]** Margin-Based Few-Shot Class-Incremental Learning with Class-Level Overfitting Mitigation(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2210.04524)\n+ **[S-Prompt]** S-Prompts Learning with Pre-trained Transformers: An Occam's Razor for Domain Incremental Learning(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2207.12819)\n+ **[ALIFE]** ALIFE: Adaptive Logit Regularizer and Feature Replay for Incremental Semantic Segmentation(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2210.06816)\n+ **[PMT]** Continual Learning In Environments With Polynomial Mixing Times(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2112.07066)\n+ **[STCISS]** Self-training for class-incremental semantic segmentation(TNNLS 2022)[[paper]](https://arxiv.org/abs/2012.03362)\n+ **[DSN]** Dynamic Support Network for Few-shot Class Incremental Learning(TPAMI 2022)[[paper]](https://ieeexplore.ieee.org/document/9779071)\n+ **[MgSvF]** MgSvF: Multi-Grained Slow vs. 
Fast Framework for Few-Shot Class-Incremental Learning(TPAMI 2022)[[paper]](https://arxiv.org/abs/2006.15524)\n+ **[TransIL]** Dataset Knowledge Transfer for Class-Incremental Learning without Memory(WACV 2022)[[paper]](https://arxiv.org/pdf/2110.08421.pdf)\n+ **[NER-FSCIL]** Few-Shot Class-Incremental Learning for Named Entity Recognition(ACL 2022)[[paper]](https://aclanthology.org/2022.acl-long.43/)\n+ **[LIMIT]** Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks(arXiv 2022)[[paper]](https://arxiv.org/abs/2203.17030)\n+ **[EMP]** Incremental Prompting: Episodic Memory Prompt for Lifelong Event Detection(arXiv 2022)[[paper]](https://arxiv.org/abs/2204.07275)\n+ **[SPTM]** Class-Incremental Learning With Strong Pre-Trained Models(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Wu_Class-Incremental_Learning_With_Strong_Pre-Trained_Models_CVPR_2022_paper.html)\n+ **[BER]** Bring Evanescent Representations to Life in Lifelong Class Incremental Learning(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Toldo_Bring_Evanescent_Representations_to_Life_in_Lifelong_Class_Incremental_Learning_CVPR_2022_paper.html)\n+ **[Sylph]** Sylph: A Hypernetwork Framework for Incremental Few-Shot Object Detection(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Yin_Sylph_A_Hypernetwork_Framework_for_Incremental_Few-Shot_Object_Detection_CVPR_2022_paper.html)\n+ **[MetaFSCIL]** MetaFSCIL: A Meta-Learning Approach for Few-Shot Class Incremental Learning(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Chi_MetaFSCIL_A_Meta-Learning_Approach_for_Few-Shot_Class_Incremental_Learning_CVPR_2022_paper.html)\n+ **[FCIL]** Federated Class-Incremental Learning(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Dong_Federated_Class-Incremental_Learning_CVPR_2022_paper.html)[[code]](https://github.com/conditionWang/FCIL)![GitHub 
stars](https://img.shields.io/github/stars/conditionWang/FCIL.svg?logo=github&label=Stars)\n+ **[FILIT]** Few-Shot Incremental Learning for Label-to-Image Translation(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Chen_Few-Shot_Incremental_Learning_for_Label-to-Image_Translation_CVPR_2022_paper.html)\n+ **[PuriDivER]** Online Continual Learning on a Contaminated Data Stream With Blurry Task Boundaries(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Bang_Online_Continual_Learning_on_a_Contaminated_Data_Stream_With_Blurry_CVPR_2022_paper.html)[[code]](https://github.com/clovaai/puridiver)![GitHub stars](https://img.shields.io/github/stars/clovaai/puridiver.svg?logo=github&label=Stars)\n+ **[SNCL]** Learning Bayesian Sparse Networks With Full Experience Replay for Continual Learning(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Yan_Learning_Bayesian_Sparse_Networks_With_Full_Experience_Replay_for_Continual_CVPR_2022_paper.html)\n+ **[DVC]** Not Just Selection, but Exploration: Online Class-Incremental Continual Learning via Dual View Consistency(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Gu_Not_Just_Selection_but_Exploration_Online_Class-Incremental_Continual_Learning_via_CVPR_2022_paper.html)[[code]](https://github.com/YananGu/DVC)![GitHub stars](https://img.shields.io/github/stars/YananGu/DVC.svg?logo=github&label=Stars)\n+ **[CVS]** Continual Learning for Visual Search With Backward Consistent Feature Embedding(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Wan_Continual_Learning_for_Visual_Search_With_Backward_Consistent_Feature_Embedding_CVPR_2022_paper.html)\n+ **[CPL]** Continual Predictive Learning From Videos(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Chen_Continual_Predictive_Learning_From_Videos_CVPR_2022_paper.html)\n+ **[GCR]** GCR: Gradient Coreset Based Replay Buffer Selection for Continual 
Learning(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Tiwari_GCR_Gradient_Coreset_Based_Replay_Buffer_Selection_for_Continual_Learning_CVPR_2022_paper.html)\n+ **[LVT]** Continual Learning With Lifelong Vision Transformer(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Wang_Continual_Learning_With_Lifelong_Vision_Transformer_CVPR_2022_paper.html)\n+ **[vCLIMB]** vCLIMB: A Novel Video Class Incremental Learning Benchmark(CVPR 2022)[[paper]](https://openaccess.thecvf.com/content/CVPR2022/html/Villa_vCLIMB_A_Novel_Video_Class_Incremental_Learning_Benchmark_CVPR_2022_paper.html)[[code]](https://vclimb.netlify.app/)\n+ **[Learn-to-Imagine]** Learning to Imagine: Diversify Memory for Incremental Learning using Unlabeled Data(CVPR 2022)[[paper]](https://arxiv.org/abs/2204.08932)[[code]](https://github.com/TOM-tym/Learn-to-Imagine)![GitHub stars](https://img.shields.io/github/stars/TOM-tym/Learn-to-Imagine.svg?logo=github&label=Stars)\n+ **[DCR]** General Incremental Learning with Domain-aware Categorical Representations(CVPR 2022)[[paper]](https://arxiv.org/abs/2204.04078)\n+ **[DIY-FSCIL]** Doodle It Yourself: Class Incremental Learning by Drawing a Few Sketches(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.14843)\n+ **[C-FSCIL]** Constrained Few-shot Class-incremental Learning(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.16588)[[code]](https://github.com/IBM/constrained-FSCIL)![GitHub stars](https://img.shields.io/github/stars/IBM/constrained-FSCIL.svg?logo=github&label=Stars)\n+ **[SSRE]** Self-Sustaining Representation Expansion for Non-Exemplar Class-Incremental Learning(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.06359)\n+ **[CwD]** Mimicking the Oracle: An Initial Phase Decorrelation Approach for Class Incremental Learning(CVPR 2022)[[paper]](https://arxiv.org/abs/2112.04731)[[code]](https://github.com/Yujun-Shi/CwD)![GitHub 
stars](https://img.shields.io/github/stars/Yujun-Shi/CwD.svg?logo=github&label=Stars)\n+ **[MSL]** On Generalizing Beyond Domains in Cross-Domain Continual Learning(CVPR 2022)[[paper]](https://arxiv.org/abs/2203.03970)\n+ **[DyTox]** DyTox: Transformers for Continual Learning with DYnamic TOken Expansion(CVPR 2022)[[paper]](https://arxiv.org/abs/2111.11326)[[code]](https://github.com/arthurdouillard/dytox)![GitHub stars](https://img.shields.io/github/stars/arthurdouillard/dytox.svg?logo=github&label=Stars)\n+ **[X-DER]** Class-Incremental Continual Learning into the eXtended DER-verse(ECCV 2022)[[paper]](https://arxiv.org/abs/2201.00766)\n+ **[class-iNCD]** Class-incremental Novel Class Discovery(ECCV 2022)[[paper]](https://arxiv.org/abs/2207.08605)[[code]](https://github.com/OatmealLiu/class-iNCD)![GitHub stars](https://img.shields.io/github/stars/OatmealLiu/class-iNCD.svg?logo=github&label=Stars)\n+ **[ARI]** Anti-Retroactive Interference for Lifelong Learning(ECCV 2022)[[paper]](https://arxiv.org/abs/2208.12967)[[code]](https://github.com/bhrqw/ARI)![GitHub stars](https://img.shields.io/github/stars/bhrqw/ARI.svg?logo=github&label=Stars)\n+ **[Long-Tailed-CIL]** Long-Tailed Class Incremental Learning(ECCV 2022)[[paper]](https://arxiv.org/abs/2210.00266)[[code]](https://github.com/xialeiliu/Long-Tailed-CIL)![GitHub stars](https://img.shields.io/github/stars/xialeiliu/Long-Tailed-CIL.svg?logo=github&label=Stars)\n+ **[LIRF]** Learning with Recoverable Forgetting(ECCV 2022)[[paper]](https://arxiv.org/abs/2207.08224)\n+ **[DSDM]** Online Task-free Continual Learning with Dynamic Sparse Distributed Memory(ECCV 2022)[[paper]](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136850721.pdf)[[code]](https://github.com/Julien-pour/Dynamic-Sparse-Distributed-Memory)![GitHub stars](https://img.shields.io/github/stars/Julien-pour/Dynamic-Sparse-Distributed-Memory.svg?logo=github&label=Stars)\n+ **[CVT]** Online Continual Learning with Contrastive Vision 
Transformer(ECCV 2022)[[paper]](https://arxiv.org/pdf/2207.13516.pdf)\n+ **[TwF]** Transfer without Forgetting(ECCV 2022)[[paper]](https://arxiv.org/abs/2206.00388)[[code]](https://github.com/mbosc/twf)![GitHub stars](https://img.shields.io/github/stars/mbosc/twf.svg?logo=github&label=Stars)\n+ **[CSCCT]** Class-Incremental Learning with Cross-Space Clustering and Controlled Transfer(ECCV 2022)[[paper]](https://cscct.github.io)[[code]](https://github.com/ashok-arjun/CSCCT)![GitHub stars](https://img.shields.io/github/stars/ashok-arjun/CSCCT.svg?logo=github&label=Stars)\n+ **[DLCFT]** DLCFT: Deep Linear Continual Fine-Tuning for General Incremental Learning(ECCV 2022)[[paper]](https://arxiv.org/abs/2208.08112)\n+ **[ERDR]** Few-Shot Class-Incremental Learning via Entropy-Regularized Data-Free Replay(ECCV 2022)[[paper]](https://arxiv.org/pdf/2207.11213.pdf)\n+ **[NCDwF]** Novel Class Discovery without Forgetting(ECCV 2022)[[paper]](https://arxiv.org/abs/2207.10659)\n+ **[CoMPS]** CoMPS: Continual Meta Policy Search(ICLR 2022)[[paper]](https://openreview.net/pdf?id=PVJ6j87gOHz)\n+ **[i-fuzzy]** Online Continual Learning on Class Incremental Blurry Task Configuration with Anytime Inference(ICLR 2022)[[paper]](https://openreview.net/pdf?id=nrGGfMbY_qK)[[code]](https://github.com/naver-ai/i-Blurry)![GitHub stars](https://img.shields.io/github/stars/naver-ai/i-Blurry.svg?logo=github&label=Stars)\n+ **[CLS-ER]** Learning Fast, Learning Slow: A General Continual Learning Method based on Complementary Learning System(ICLR 2022)[[paper]](https://openreview.net/pdf?id=uxxFrDwrE7Y)[[code]](https://github.com/NeurAI-Lab/CLS-ER)![GitHub stars](https://img.shields.io/github/stars/NeurAI-Lab/CLS-ER.svg?logo=github&label=Stars)\n+ **[MRDC]** Memory Replay with Data Compression for Continual Learning(ICLR 2022)[[paper]](https://openreview.net/pdf?id=a7H7OucbWaU)[[code]](https://github.com/andrearosasco/DistilledReplay)![GitHub 
stars](https://img.shields.io/github/stars/andrearosasco/DistilledReplay.svg?logo=github&label=Stars)\n+ **[OCS]** Online Coreset Selection for Rehearsal-based Continual Learning(ICLR 2022)[[paper]](https://openreview.net/pdf?id=f9D-5WNG4Nv)\n+ **[InfoRS]** Information-theoretic Online Memory Selection for Continual Learning(ICLR 2022)[[paper]](https://openreview.net/pdf?id=IpctgL7khPp)\n+ **[ER-AML]** New Insights on Reducing Abrupt Representation Change in Online Continual Learning(ICLR 2022)[[paper]](https://openreview.net/pdf?id=N8MaByOzUfb)[[code]](https://github.com/pclucas14/aml)![GitHub stars](https://img.shields.io/github/stars/pclucas14/aml.svg?logo=github&label=Stars)\n+ **[FAS]** Continual Learning with Filter Atom Swapping(ICLR 2022)[[paper]](https://openreview.net/pdf?id=metRpM4Zrcb)\n+ **[LUMP]** Rethinking the Representational Continuity: Towards Unsupervised Continual Learning(ICLR 2022)[[paper]](https://openreview.net/pdf?id=9Hrka5PA7LW)\n+ **[CF-IL]** Looking Back on Learned Experiences For Class/task Incremental Learning(ICLR 2022)[[paper]](https://openreview.net/pdf?id=RxplU3vmBx)[[code]](https://github.com/MozhganPourKeshavarz/Cost-Free-Incremental-Learning)![GitHub stars](https://img.shields.io/github/stars/MozhganPourKeshavarz/Cost-Free-Incremental-Learning.svg?logo=github&label=Stars)\n+ **[LFPT5]** LFPT5: A Unified Framework for Lifelong Few-shot Language Learning Based on Prompt Tuning of T5(ICLR 2022)[[paper]](https://openreview.net/pdf?id=HCRVf71PMF)[[code]](https://github.com/qcwthu/Lifelong-Fewshot-Language-Learning)![GitHub stars](https://img.shields.io/github/stars/qcwthu/Lifelong-Fewshot-Language-Learning.svg?logo=github&label=Stars)\n+ **[Model Zoo]** Model Zoo: A Growing Brain That Learns Continually(ICLR 2022)[[paper]](https://arxiv.org/abs/2106.03027)\n+ **[OCM]** Online Continual Learning through Mutual Information Maximization(ICML 
2022)[[paper]](https://proceedings.mlr.press/v162/guo22g/guo22g.pdf)[[code]](https://github.com/gydpku/OCM)![GitHub stars](https://img.shields.io/github/stars/gydpku/OCM.svg?logo=github&label=Stars)\n+ **[DRO]** Improving Task-free Continual Learning by Distributionally Robust Memory Evolution(ICML 2022)[[paper]](https://proceedings.mlr.press/v162/wang22v/wang22v.pdf)[[code]](https://github.com/joey-wang123/DRO-Task-free)![GitHub stars](https://img.shields.io/github/stars/joey-wang123/DRO-Task-free.svg?logo=github&label=Stars)\n+ **[EAK]** Effects of Auxiliary Knowledge on Continual Learning(ICPR 2022)[[paper]](https://arxiv.org/abs/2206.02577)\n+ **[RAR]** Retrospective Adversarial Replay for Continual Learning(NeurIPS 2022)[[paper]](https://openreview.net/forum?id=XEoih0EwCwL)\n+ **[LiDER]** On the Effectiveness of Lipschitz-Driven Rehearsal in Continual Learning(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2210.06443)\n+ **[SparCL]** SparCL: Sparse Continual Learning on the Edge(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2209.09476)\n+ **[ClonEx-SAC]** Disentangling Transfer in Continual Reinforcement Learning(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2209.13900)\n+ **[ODDL]** Task-Free Continual Learning via Online Discrepancy Distance Learning(NeurIPS 2022)[[paper]](https://arxiv.org/abs/2210.06579)\n+ **[CSSL]** Continual semi-supervised learning through contrastive interpolation consistency(PRL 2022)[[paper]](https://arxiv.org/abs/2108.06552)\n+ **[MBP]** Model Behavior Preserving for Class-Incremental Learning(TNNLS 2022)[[paper]](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9705128)\n+ **[CandVot]** Online Continual Learning via Candidates Voting(WACV 2022)[[paper]](https://openaccess.thecvf.com/content/WACV2022/papers/He_Online_Continual_Learning_via_Candidates_Voting_WACV_2022_paper.pdf)\n+ **[FlashCards]** Knowledge Capture and Replay for Continual 
Learning(WACV 2022)[[paper]](https://openaccess.thecvf.com/content/WACV2022/papers/Gopalakrishnan_Knowledge_Capture_and_Replay_for_Continual_Learning_WACV_2022_paper.pdf)\n+ **[Meta-DR]** Continual Adaptation of Visual Representations via Domain Randomization and Meta-learning(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/html/Volpi_Continual_Adaptation_of_Visual_Representations_via_Domain_Randomization_and_Meta-Learning_CVPR_2021_paper.html)\n+ **[continual cross-modal retrieval]** Continual learning in cross-modal retrieval(CVPRW 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021W/CLVision/html/Wang_Continual_Learning_in_Cross-Modal_Retrieval_CVPRW_2021_paper.html)\n+ **[DER]** DER: Dynamically expandable representation for class incremental learning(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Yan_DER_Dynamically_Expandable_Representation_for_Class_Incremental_Learning_CVPR_2021_paper.pdf)[[code]](https://github.com/Rhyssiyan/DER-ClassIL.pytorch)![GitHub stars](https://img.shields.io/github/stars/Rhyssiyan/DER-ClassIL.pytorch.svg?logo=github&label=Stars)\n+ **[EFT]** Efficient Feature Transformations for Discriminative and Generative Continual Learning(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Verma_Efficient_Feature_Transformations_for_Discriminative_and_Generative_Continual_Learning_CVPR_2021_paper.pdf)[[code]](https://github.com/vkverma01/EFT)![GitHub stars](https://img.shields.io/github/stars/vkverma01/EFT.svg?logo=github&label=Stars)\n+ **[PASS]** Prototype Augmentation and Self-Supervision for Incremental Learning(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhu_Prototype_Augmentation_and_Self-Supervision_for_Incremental_Learning_CVPR_2021_paper.pdf)[[code]](https://github.com/Impression2805/CVPR21_PASS)![GitHub stars](https://img.shields.io/github/stars/Impression2805/CVPR21_PASS.svg?logo=github&label=Stars)\n+ **[GeoDL]** On Learning 
the Geodesic Path for Incremental Learning(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Simon_On_Learning_the_Geodesic_Path_for_Incremental_Learning_CVPR_2021_paper.pdf)[[code]](https://github.com/chrysts/geodesic_continual_learning)![GitHub stars](https://img.shields.io/github/stars/chrysts/geodesic_continual_learning.svg?logo=github&label=Stars)\n+ **[IL-ReduNet]** Incremental Learning via Rate Reduction(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Wu_Incremental_Learning_via_Rate_Reduction_CVPR_2021_paper.pdf)\n+ **[PIGWM]** Image De-raining via Continual Learning(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhou_Image_De-Raining_via_Continual_Learning_CVPR_2021_paper.pdf)\n+ **[BLIP]** Continual Learning via Bit-Level Information Preserving(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Shi_Continual_Learning_via_Bit-Level_Information_Preserving_CVPR_2021_paper.pdf)[[code]](https://github.com/Yujun-Shi/BLIP)![GitHub stars](https://img.shields.io/github/stars/Yujun-Shi/BLIP.svg?logo=github&label=Stars)\n+ **[Adam-NSCL]** Training Networks in Null Space of Feature Covariance for Continual Learning(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_Training_Networks_in_Null_Space_of_Feature_Covariance_for_Continual_CVPR_2021_paper.pdf)[[code]](https://github.com/ShipengWang/Adam-NSCL)![GitHub stars](https://img.shields.io/github/stars/ShipengWang/Adam-NSCL.svg?logo=github&label=Stars)\n+ **[PLOP]** PLOP: Learning without Forgetting for Continual Semantic Segmentation(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Douillard_PLOP_Learning_Without_Forgetting_for_Continual_Semantic_Segmentation_CVPR_2021_paper.pdf)[[code]](https://github.com/arthurdouillard/CVPR2021_PLOP)![GitHub stars](https://img.shields.io/github/stars/arthurdouillard/CVPR2021_PLOP.svg?logo=github&label=Stars)\n+ **[SDR]** Continual 
Semantic Segmentation via Repulsion-Attraction of Sparse and Disentangled Latent Representations(CVPR 2021)[[paper]](https://lttm.dei.unipd.it/paper_data/SDR/)[[code]](https://github.com/LTTM/SDR)![GitHub stars](https://img.shields.io/github/stars/LTTM/SDR.svg?logo=github&label=Stars)\n+ **[SKD]** Semantic-aware Knowledge Distillation for Few-Shot Class-Incremental Learning(CVPR 2021)[[paper]](https://arxiv.org/abs/2103.04059)\n+ **[SPB]** Striking a balance between stability and plasticity for class-incremental learning(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Wu_Striking_a_Balance_Between_Stability_and_Plasticity_for_Class-Incremental_Learning_ICCV_2021_paper.pdf)\n+ **[Else-Net]** Else-Net: Elastic Semantic Network for Continual Action Recognition from Skeleton Data(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Li_Else-Net_Elastic_Semantic_Network_for_Continual_Action_Recognition_From_Skeleton_ICCV_2021_paper.pdf)\n+ **[LCwoF-Framework]** Generalized and Incremental Few-Shot Learning by Explicit Learning and Calibration without Forgetting(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Kukleva_Generalized_and_Incremental_Few-Shot_Learning_by_Explicit_Learning_and_Calibration_ICCV_2021_paper.pdf)\n+ **[AFEC]** AFEC: Active Forgetting of Negative Transfer in Continual Learning(NeurIPS 2021)[[paper]](https://openreview.net/pdf/72a18fad6fce88ef0286e9c7582229cf1c8d9f93.pdf)[[code]](https://github.com/lywang3081/AFEC)![GitHub stars](https://img.shields.io/github/stars/lywang3081/AFEC.svg?logo=github&label=Stars)\n+ **[F2M]** Overcoming Catastrophic Forgetting in Incremental Few-Shot Learning by Finding Flat Minima(NeurIPS 2021)[[paper]](https://openreview.net/forum?id=ALvt7nXa2q)[[code]](https://github.com/moukamisama/F2M)![GitHub stars](https://img.shields.io/github/stars/moukamisama/F2M.svg?logo=github&label=Stars)\n+ **[NCL]** Natural continual learning: success is a journey, not 
(just) a destination(NeurIPS 2021)[[paper]](https://openreview.net/forum?id=W9250bXDgpK)[[code]](https://github.com/tachukao/ncl)![GitHub stars](https://img.shields.io/github/stars/tachukao/ncl.svg?logo=github&label=Stars)\n+ **[BCL]** Formalizing the Generalization-Forgetting Trade-off in Continual Learning(NeurIPS 2021)[[paper]](https://openreview.net/forum?id=u1XV9BPAB9)[[code]](https://github.com/krm9c/Balanced-Continual-Learning)![GitHub stars](https://img.shields.io/github/stars/krm9c/Balanced-Continual-Learning.svg?logo=github&label=Stars)\n+ **[Posterior Meta-Replay]** Posterior Meta-Replay for Continual Learning(NeurIPS 2021)[[paper]](https://proceedings.neurips.cc/paper/2021/hash/761b42cfff120aac30045f7a110d0256-Abstract.html)\n+ **[MARK]** Optimizing Reusable Knowledge for Continual Learning via Metalearning(NeurIPS 2021)[[paper]](https://openreview.net/forum?id=hHTctAv9Lvh)[[code]](https://github.com/JuliousHurtado/meta-training-setup)![GitHub stars](https://img.shields.io/github/stars/JuliousHurtado/meta-training-setup.svg?logo=github&label=Stars)\n+ **[Co-occur]** Bridging Non Co-occurrence with Unlabeled In-the-wild Data for Incremental Object Detection(NeurIPS 2021)[[paper]](https://proceedings.neurips.cc/paper/2021/hash/ffc58105bf6f8a91aba0fa2d99e6f106-Abstract.html)[[code]](https://github.com/dongnana777/bridging-non-co-occurrence)![GitHub stars](https://img.shields.io/github/stars/dongnana777/bridging-non-co-occurrence.svg?logo=github&label=Stars)\n+ **[LINC]** Lifelong and Continual Learning Dialogue Systems: Learning during Conversation(AAAI 2021)[[paper]](https://www.cs.uic.edu/~liub/publications/LINC_paper_AAAI_2021_camera_ready.pdf)\n+ **[CLNER]** Continual learning for named entity recognition(AAAI 2021)[[paper]](https://www.aaai.org/AAAI21Papers/AAAI-7791.MonaikulN.pdf)\n+ **[CLIS]** A Continual Learning Framework for Uncertainty-Aware Interactive Image Segmentation(AAAI 
2021)[[paper]](https://www.aaai.org/AAAI21Papers/AAAI-2989.ZhengE.pdf)\n+ **[PCL]** Continual Learning by Using Information of Each Class Holistically(AAAI 2021)[[paper]](https://www.cs.uic.edu/~liub/publications/AAAI2021_PCL.pdf)\n+ **[MAS3]** Unsupervised Model Adaptation for Continual Semantic Segmentation(AAAI 2021)[[paper]](https://arxiv.org/abs/2009.12518)\n+ **[FSLL]** Few-Shot Lifelong Learning(AAAI 2021)[[paper]](https://arxiv.org/pdf/2103.00991.pdf)\n+ **[VAR-GPs]** Variational Auto-Regressive Gaussian Processes for Continual Learning(ICML 2021)[[paper]](https://proceedings.mlr.press/v139/kapoor21b.html)\n+ **[BSA]** Bayesian Structural Adaptation for Continual Learning(ICML 2021)[[paper]](https://proceedings.mlr.press/v139/kumar21a.html)\n+ **[GPM]** Gradient projection memory for continual learning(ICLR 2021)[[paper]](https://arxiv.org/abs/2103.09762)[[code]](https://github.com/sahagobinda/GPM)![GitHub stars](https://img.shields.io/github/stars/sahagobinda/GPM.svg?logo=github&label=Stars)\n+ **[TMN]** Triple-Memory Networks: A Brain-Inspired Method for Continual Learning(TNNLS 2021)[[paper]](https://ieeexplore.ieee.org/document/9540230)\n+ **[RKD]** Few-Shot Class-Incremental Learning via Relation Knowledge Distillation(AAAI 2021)[[paper]](https://ojs.aaai.org/index.php/AAAI/article/view/16213)\n+ **[AANets]** Adaptive aggregation networks for class-incremental learning(CVPR 2021)[[paper]](https://class-il.mpi-inf.mpg.de/)[[code]](https://github.com/yaoyao-liu/class-incremental-learning)![GitHub stars](https://img.shields.io/github/stars/yaoyao-liu/class-incremental-learning.svg?logo=github&label=Stars)\n+ **[ORDisCo]** ORDisCo: Effective and Efficient Usage of Incremental Unlabeled Data for Semi-supervised Continual Learning(CVPR 
2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_ORDisCo_Effective_and_Efficient_Usage_of_Incremental_Unlabeled_Data_for_CVPR_2021_paper.pdf)\n+ **[DDE]** Distilling Causal Effect of Data in Class-Incremental Learning(CVPR 2021)[[paper]](https://arxiv.org/abs/2103.01737)[[code]](https://github.com/JoyHuYY1412/DDE_CIL)![GitHub stars](https://img.shields.io/github/stars/JoyHuYY1412/DDE_CIL.svg?logo=github&label=Stars)\n+ **[IIRC]** IIRC: Incremental Implicitly-Refined Classification(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Abdelsalam_IIRC_Incremental_Implicitly-Refined_Classification_CVPR_2021_paper.pdf)\n+ **[Hyper-LifelongGAN]** Hyper-LifelongGAN: Scalable Lifelong Learning for Image Conditioned Generation(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhai_Hyper-LifelongGAN_Scalable_Lifelong_Learning_for_Image_Conditioned_Generation_CVPR_2021_paper.pdf)\n+ **[CEC]** Few-Shot Incremental Learning with Continually Evolved Classifiers(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhang_Few-Shot_Incremental_Learning_With_Continually_Evolved_Classifiers_CVPR_2021_paper.pdf)\n+ **[iMTFA]** Incremental Few-Shot Instance Segmentation(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Ganea_Incremental_Few-Shot_Instance_Segmentation_CVPR_2021_paper.pdf)\n+ **[RM]** Rainbow memory: Continual learning with a memory of diverse samples(CVPR 2021)[[paper]](https://ieeexplore.ieee.org/document/9577808)\n+ **[LOGD]** Layerwise Optimization by Gradient Decomposition for Continual Learning(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Tang_Layerwise_Optimization_by_Gradient_Decomposition_for_Continual_Learning_CVPR_2021_paper.pdf)\n+ **[SPPR]** Self-Promoted Prototype Refinement for Few-Shot Class-Incremental Learning(CVPR 
2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/html/Zhu_Self-Promoted_Prototype_Refinement_for_Few-Shot_Class-Incremental_Learning_CVPR_2021_paper.html)\n+ **[LReID]** Lifelong Person Re-Identification via Adaptive Knowledge Accumulation(CVPR 2021)[[paper]](https://openaccess.thecvf.com/content/CVPR2021/papers/Pu_Lifelong_Person_Re-Identification_via_Adaptive_Knowledge_Accumulation_CVPR_2021_paper.pdf)[[code]](https://github.com/TPCD/LifelongReID)![GitHub stars](https://img.shields.io/github/stars/TPCD/LifelongReID.svg?logo=github&label=Stars)\n+ **[SS-IL]** SS-IL: Separated Softmax for Incremental Learning(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Ahn_SS-IL_Separated_Softmax_for_Incremental_Learning_ICCV_2021_paper.pdf)\n+ **[TCD]** Class-Incremental Learning for Action Recognition in Videos(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Park_Class-Incremental_Learning_for_Action_Recognition_in_Videos_ICCV_2021_paper.pdf)\n+ **[CLOC]** Online Continual Learning with Natural Distribution Shifts: An Empirical Study with Visual Data(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/html/Cai_Online_Continual_Learning_With_Natural_Distribution_Shifts_An_Empirical_Study_ICCV_2021_paper.html)[[code]](https://github.com/IntelLabs/continuallearning)![GitHub stars](https://img.shields.io/github/stars/IntelLabs/continuallearning.svg?logo=github&label=Stars)\n+ **[CoPE]** Continual Prototype Evolution: Learning Online from Non-Stationary Data Streams(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/De_Lange_Continual_Prototype_Evolution_Learning_Online_From_Non-Stationary_Data_Streams_ICCV_2021_paper.pdf)[[code]](https://github.com/Mattdl/ContinualPrototypeEvolution)![GitHub stars](https://img.shields.io/github/stars/Mattdl/ContinualPrototypeEvolution.svg?logo=github&label=Stars)\n+ **[Co2L]** Co2L: Contrastive Continual Learning(ICCV 
2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Cha_Co2L_Contrastive_Continual_Learning_ICCV_2021_paper.pdf)[[code]](https://github.com/chaht01/co2l)![GitHub stars](https://img.shields.io/github/stars/chaht01/co2l.svg?logo=github&label=Stars)\n+ **[SPR]** Continual Learning on Noisy Data Streams via Self-Purified Replay(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Kim_Continual_Learning_on_Noisy_Data_Streams_via_Self-Purified_Replay_ICCV_2021_paper.pdf)\n+ **[NACL]** Detection and Continual Learning of Novel Face Presentation Attacks(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/html/Rostami_Detection_and_Continual_Learning_of_Novel_Face_Presentation_Attacks_ICCV_2021_paper.html)\n+ **[Always Be Dreaming]** Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/html/Smith_Always_Be_Dreaming_A_New_Approach_for_Data-Free_Class-Incremental_Learning_ICCV_2021_paper.html)[[code]](https://github.com/GT-RIPL/AlwaysBeDreaming-DFCIL)![GitHub stars](https://img.shields.io/github/stars/GT-RIPL/AlwaysBeDreaming-DFCIL.svg?logo=github&label=Stars)\n+ **[CL-HSCNet]** Continual Learning for Image-Based Camera Localization(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/html/Wang_Continual_Learning_for_Image-Based_Camera_Localization_ICCV_2021_paper.html)[[code]](https://github.com/AaltoVision/CL_HSCNet)![GitHub stars](https://img.shields.io/github/stars/AaltoVision/CL_HSCNet.svg?logo=github&label=Stars)\n+ **[RECALL]** RECALL: Replay-based Continual Learning in Semantic Segmentation(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/html/Maracani_RECALL_Replay-Based_Continual_Learning_in_Semantic_Segmentation_ICCV_2021_paper.html)[[code]](https://github.com/lttm/recall)![GitHub stars](https://img.shields.io/github/stars/lttm/recall.svg?logo=github&label=Stars)\n+ **[VAE]** Synthesized 
Feature based Few-Shot Class-Incremental Learning on a Mixture of Subspaces(ICCV 2021)[[paper]](https://openaccess.thecvf.com/content/ICCV2021/papers/Cheraghian_Synthesized_Feature_Based_Few-Shot_Class-Incremental_Learning_on_a_Mixture_of_ICCV_2021_paper.pdf)\n+ **[ERT]** Rethinking Experience Replay: a Bag of Tricks for Continual Learning(ICPR 2021)[[paper]](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9412614)[[code]](https://github.com/hastings24/rethinking_er)![GitHub stars](https://img.shields.io/github/stars/hastings24/rethinking_er.svg?logo=github&label=Stars)\n+ **[KCL]** Kernel Continual Learning(ICML 2021)[[paper]](https://proceedings.mlr.press/v139/derakhshani21a.html)[[code]](https://github.com/mmderakhshani/KCL)![GitHub stars](https://img.shields.io/github/stars/mmderakhshani/KCL.svg?logo=github&label=Stars)\n+ **[MLIOD]** Incremental Object Detection via Meta-Learning(TPAMI 2021)[[paper]](https://arxiv.org/abs/2003.08798)[[code]](https://github.com/JosephKJ/iOD)![GitHub stars](https://img.shields.io/github/stars/JosephKJ/iOD.svg?logo=github&label=Stars)\n+ **[BNS]** BNS: Building Network Structures Dynamically for Continual Learning(NeurIPS 2021)[[paper]](https://papers.nips.cc/paper/2021/hash/ac64504cc249b070772848642cffe6ff-Abstract.html)\n+ **[FS-DGPM]** Flattening Sharpness for Dynamic Gradient Projection Memory Benefits Continual Learning(NeurIPS 2021)[[paper]](https://openreview.net/forum?id=q1eCa1kMfDd)\n+ **[SSUL]** SSUL: Semantic Segmentation with Unknown Label for Exemplar-based Class-Incremental Learning(NeurIPS 2021)[[paper]](https://proceedings.neurips.cc/paper/2021/file/5a9542c773018268fc6271f7afeea969-Paper.pdf)\n+ **[DualNet]** DualNet: Continual Learning, Fast and Slow(NeurIPS 2021)[[paper]](https://openreview.net/pdf?id=eQ7Kh-QeWnO)\n+ **[classAug]** Class-Incremental Learning via Dual Augmentation(NeurIPS 2021)[[paper]](https://papers.nips.cc/paper/2021/file/77ee3bc58ce560b86c2b59363281e914-Paper.pdf)\n+ **[GMED]** 
Gradient-based Editing of Memory Examples for Online Task-free Continual Learning(NeurIPS 2021)[[paper]](https://papers.nips.cc/paper/2021/hash/f45a1078feb35de77d26b3f7a52ef502-Abstract.html)\n+ **[BooVAE]** BooVAE: Boosting Approach for Continual Learning of VAE(NeurIPS 2021)[[paper]](https://papers.nips.cc/paper/2021/hash/952285b9b7e7a1be5aa7849f32ffff05-Abstract.html)[[code]](https://github.com/AKuzina/BooVAE)![GitHub stars](https://img.shields.io/github/stars/AKuzina/BooVAE.svg?logo=github&label=Stars)\n+ **[GeMCL]** Generative vs. Discriminative: Rethinking The Meta-Continual Learning(NeurIPS 2021)[[paper]](https://papers.nips.cc/paper/2021/hash/b4e267d84075f66ebd967d95331fcc03-Abstract.html)[[code]](https://github.com/aminbana/gemcl)![GitHub stars](https://img.shields.io/github/stars/aminbana/gemcl.svg?logo=github&label=Stars)\n+ **[RMM]** RMM: Reinforced Memory Management for Class-Incremental Learning(NeurIPS 2021)[[paper]](https://proceedings.neurips.cc/paper/2021/file/1cbcaa5abbb6b70f378a3a03d0c26386-Paper.pdf)\n+ **[LSF]** Learning with Selective Forgetting(IJCAI 2021)[[paper]](https://www.ijcai.org/proceedings/2021/0137.pdf)\n+ **[ASER]** Online Class-Incremental Continual Learning with Adversarial Shapley Value(AAAI 2021)[[paper]](https://www.aaai.org/AAAI21Papers/AAAI-9988.ShimD.pdf)[[code]](https://github.com/RaptorMai/online-continual-learning)![GitHub stars](https://img.shields.io/github/stars/RaptorMai/online-continual-learning.svg?logo=github&label=Stars)\n+ **[CML]** Curriculum-Meta Learning for Order-Robust Continual Relation Extraction(AAAI 2021)[[paper]](https://www.aaai.org/AAAI21Papers/AAAI-4847.WuT.pdf)[[code]](https://github.com/wutong8023/AAAI-CML)![GitHub stars](https://img.shields.io/github/stars/wutong8023/AAAI-CML.svg?logo=github&label=Stars)\n+ **[HAL]** Using Hindsight to Anchor Past Knowledge in Continual Learning(AAAI 2021)[[paper]](https://www.aaai.org/AAAI21Papers/AAAI-9700.ChaudhryA.pdf)\n+ **[MDMT]** Multi-Domain Multi-Task 
Rehearsal for Lifelong Learning(AAAI 2021)[[paper]](https://arxiv.org/abs/2012.07236)\n+ **[AU]** Do Not Forget to Attend to Uncertainty While Mitigating Catastrophic Forgetting(WACV 2021)[[paper]](https://openaccess.thecvf.com/content/WACV2021/html/Kurmi_Do_Not_Forget_to_Attend_to_Uncertainty_While_Mitigating_Catastrophic_WACV_2021_paper.html)\n+ **[IDBR]** Continual Learning for Text Classification with Information Disentanglement Based Regularization(NAACL 2021)[[paper]](https://www.aclweb.org/anthology/2021.naacl-main.218.pdf)[[code]](https://github.com/GT-SALT/IDBR)![GitHub stars](https://img.shields.io/github/stars/GT-SALT/IDBR.svg?logo=github&label=Stars)\n+ **[COIL]** Co-Transport for Class-Incremental Learning(ACM MM 2021)[[paper]](https://arxiv.org/pdf/2107.12654.pdf)\n\n\n ## 2020\n\n+ **[CWR\\*]** Rehearsal-Free Continual Learning over Small Non-I.I.D. Batches(CVPR 2020)[[paper]](https://arxiv.org/abs/1907.03799v3)\n+ **[MiB]** Modeling the Background for Incremental Learning in Semantic Segmentation(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/papers/Cermelli_Modeling_the_Background_for_Incremental_Learning_in_Semantic_Segmentation_CVPR_2020_paper.pdf)[[code]](https://github.com/fcdl94/MiB)![GitHub stars](https://img.shields.io/github/stars/fcdl94/MiB.svg?logo=github&label=Stars)\n+ **[K-FAC]** Continual Learning with Extended Kronecker-factored Approximate Curvature(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/html/Lee_Continual_Learning_With_Extended_Kronecker-Factored_Approximate_Curvature_CVPR_2020_paper.html)\n+ **[SDC]** Semantic Drift Compensation for Class-Incremental Learning(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/html/Yu_Semantic_Drift_Compensation_for_Class-Incremental_Learning_CVPR_2020_paper.html)[[code]](https://github.com/yulu0724/SDC-IL)![GitHub stars](https://img.shields.io/github/stars/yulu0724/SDC-IL.svg?logo=github&label=Stars)\n+ **[NLTF]** Incremental 
Multi-Domain Learning with Network Latent Tensor Factorization(AAAI 2020)[[paper]](https://ojs.aaai.org//index.php/AAAI/article/view/6617)\n+ **[CLCL]** Compositional Continual Language Learning(ICLR 2020)[[paper]](https://openreview.net/forum?id=rklnDgHtDS)[[code]](https://github.com/yli1/CLCL)![GitHub stars](https://img.shields.io/github/stars/yli1/CLCL.svg?logo=github&label=Stars)\n+ **[APD]** Scalable and Order-robust Continual Learning with Additive Parameter Decomposition(ICLR 2020)[[paper]](https://arxiv.org/pdf/1902.09432.pdf)\n+ **[HYPERCL]** Continual learning with hypernetworks(ICLR 2020)[[paper]](https://openreview.net/forum?id=SJgwNerKvB)[[code]](https://github.com/chrhenning/hypercl)![GitHub stars](https://img.shields.io/github/stars/chrhenning/hypercl.svg?logo=github&label=Stars)\n+ **[CN-DPM]** A Neural Dirichlet Process Mixture Model for Task-Free Continual Learning(ICLR 2020)[[paper]](https://arxiv.org/pdf/2001.00689.pdf)\n+ **[UCB]** Uncertainty-guided Continual Learning with Bayesian Neural Networks(ICLR 2020)[[paper]](https://openreview.net/forum?id=HklUCCVKDB)[[code]](https://github.com/SaynaEbrahimi/UCB)![GitHub stars](https://img.shields.io/github/stars/SaynaEbrahimi/UCB.svg?logo=github&label=Stars)\n+ **[CLAW]** Continual Learning with Adaptive Weights(ICLR 2020)[[paper]](https://openreview.net/forum?id=Hklso24Kwr)\n+ **[CAT]** Continual Learning of a Mixed Sequence of Similar and Dissimilar Tasks(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/d7488039246a405baf6a7cbc3613a56f-Paper.pdf)[[code]](https://github.com/ZixuanKe/CAT)![GitHub stars](https://img.shields.io/github/stars/ZixuanKe/CAT.svg?logo=github&label=Stars)\n+ **[AGS-CL]** Continual Learning with Node-Importance based Adaptive Group Sparse Regularization(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/hash/258be18e31c8188555c2ff05b4d542c3-Abstract.html)\n+ **[MERLIN]** Meta-Consolidation for Continual Learning(NeurIPS 
2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/a5585a4d4b12277fee5cad0880611bc6-Paper.pdf)\n+ **[OSAKA]** Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/c0a271bc0ecb776a094786474322cb82-Paper.pdf)[[code]](https://github.com/ElementAI/osaka)![GitHub stars](https://img.shields.io/github/stars/ElementAI/osaka.svg?logo=github&label=Stars)\n+ **[RATT]** RATT: Recurrent Attention to Transient Tasks for Continual Image Captioning(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/c2964caac096f26db222cb325aa267cb-Paper.pdf)\n+ **[CCLL]** Calibrating CNNs for Lifelong Learning(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/hash/b3b43aeeacb258365cc69cdaf42a68af-Abstract.html)\n+ **[CIDA]** Class-Incremental Domain Adaptation(ECCV 2020)[[paper]](https://link.springer.com/chapter/10.1007/978-3-030-58601-0_4)\n+ **[GraphSAIL]** GraphSAIL: Graph Structure Aware Incremental Learning for Recommender Systems(CIKM 2020)[[paper]](https://dl.acm.org/doi/abs/10.1145/3340531.3412754)\n+ **[ANML]** Learning to Continually Learn(ECAI 2020)[[paper]](https://arxiv.org/abs/2002.09571)[[code]](https://github.com/uvm-neurobotics-lab/ANML)![GitHub stars](https://img.shields.io/github/stars/uvm-neurobotics-lab/ANML.svg?logo=github&label=Stars)\n+ **[ICWR]** Initial Classifier Weights Replay for Memoryless Class Incremental Learning(BMVC 2020)[[paper]](https://arxiv.org/pdf/2008.13710.pdf)\n+ **[DAM]** Incremental Learning Through Deep Adaptation(TPAMI 2020)[[paper]](https://openreview.net/pdf?id=7YDLgf9_zgm)\n+ **[OGD]** Orthogonal Gradient Descent for Continual Learning(AISTATS 2020)[[paper]](http://proceedings.mlr.press/v108/farajtabar20a.html)\n+ **[MC-OCL]** Online Continual Learning under Extreme 
Memory Constraints(ECCV 2020)[[paper]](https://link.springer.com/chapter/10.1007/978-3-030-58604-1_43)[[code]](https://github.com/DonkeyShot21/batch-level-distillation)![GitHub stars](https://img.shields.io/github/stars/DonkeyShot21/batch-level-distillation.svg?logo=github&label=Stars)\n+ **[RCM]** Reparameterizing convolutions for incremental multi-task learning without task interference(ECCV 2020)[[paper]](https://link.springer.com/chapter/10.1007/978-3-030-58565-5_41)[[code]](https://github.com/menelaoskanakis/RCM)![GitHub stars](https://img.shields.io/github/stars/menelaoskanakis/RCM.svg?logo=github&label=Stars)\n+ **[OvA-INN]** OvA-INN: Continual Learning with Invertible Neural Networks(IJCNN 2020)[[paper]](https://ieeexplore.ieee.org/abstract/document/9206766)\n+ **[XtarNet]** XtarNet: Learning to Extract Task-Adaptive Representation for Incremental Few-Shot Learning(ICML 2020)[[paper]](http://proceedings.mlr.press/v119/yoon20b/yoon20b.pdf)[[code]](https://github.com/EdwinKim3069/XtarNet)![GitHub stars](https://img.shields.io/github/stars/EdwinKim3069/XtarNet.svg?logo=github&label=Stars)\n+ **[DMC]** Class-incremental learning via deep model consolidation(WACV 2020)[[paper]](https://openaccess.thecvf.com/content_WACV_2020/html/Zhang_Class-incremental_Learning_via_Deep_Model_Consolidation_WACV_2020_paper.html)\n+ **[iTAML]** iTAML: An Incremental Task-Agnostic Meta-learning Approach(CVPR 2020)[[paper]](https://arxiv.org/pdf/2003.11652.pdf)[[code]](https://github.com/brjathu/iTAML)![GitHub stars](https://img.shields.io/github/stars/brjathu/iTAML.svg?logo=github&label=Stars)\n+ **[FSCIL]** Few-Shot Class-Incremental Learning(CVPR 2020)[[paper]](https://arxiv.org/pdf/2004.10956.pdf)[[code]](https://github.com/xyutao/fscil)![GitHub stars](https://img.shields.io/github/stars/xyutao/fscil.svg?logo=github&label=Stars)\n+ **[GFR]** Generative feature replay for class-incremental learning(CVPR 
2020)[[paper]](https://ieeexplore.ieee.org/document/9150851)[[code]](https://github.com/xialeiliu/GFR-IL)![GitHub stars](https://img.shields.io/github/stars/xialeiliu/GFR-IL.svg?logo=github&label=Stars)\n+ **[OSIL]** Incremental Learning In Online Scenario(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/html/He_Incremental_Learning_in_Online_Scenario_CVPR_2020_paper.html)\n+ **[ONCE]** Incremental Few-Shot Object Detection(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/html/Perez-Rua_Incremental_Few-Shot_Object_Detection_CVPR_2020_paper.html)\n+ **[WA]** Maintaining discrimination and fairness in class incremental learning(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/papers/Zhao_Maintaining_Discrimination_and_Fairness_in_Class_Incremental_Learning_CVPR_2020_paper.pdf)[[code]](https://github.com/hugoycj/Incremental-Learning-with-Weight-Aligning)\n+ **[CGATE]** Conditional Channel Gated Networks for Task-Aware Continual Learning(CVPR 2020)[[paper]](https://openaccess.thecvf.com/content_CVPR_2020/html/Abati_Conditional_Channel_Gated_Networks_for_Task-Aware_Continual_Learning_CVPR_2020_paper.html)[[code]](https://github.com/lit-leo/cgate)![GitHub stars](https://img.shields.io/github/stars/lit-leo/cgate.svg?logo=github&label=Stars)\n+ **[Mnemonics Training]** Mnemonics Training: Multi-Class Incremental Learning without Forgetting(CVPR 2020)[[paper]](https://class-il.mpi-inf.mpg.de/mnemonics-training/)[[code]](https://github.com/yaoyao-liu/class-incremental-learning)![GitHub stars](https://img.shields.io/github/stars/yaoyao-liu/class-incremental-learning.svg?logo=github&label=Stars)\n+ **[MEGA]** Improved schemes for episodic memory based lifelong learning algorithm(NeurIPS 2020)[[paper]](https://par.nsf.gov/servlets/purl/10233158)\n+ **[GAN Memory]** 
GAN Memory with No Forgetting(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/hash/bf201d5407a6509fa536afc4b380577e-Abstract.html)[[code]](https://github.com/MiaoyunZhao/GANmemory_LifelongLearning)![GitHub stars](https://img.shields.io/github/stars/MiaoyunZhao/GANmemory_LifelongLearning.svg?logo=github&label=Stars)\n+ **[Coreset]** Coresets via Bilevel Optimization for Continual Learning and Streaming(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/aa2a77371374094fe9e0bc1de3f94ed9-Paper.pdf)\n+ **[FROMP]** Continual Deep Learning by Functional Regularisation of Memorable Past(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/2f3bbb9730639e9ea48f309d9a79ff01-Paper.pdf)[[code]](https://github.com/team-approx-bayes/fromp)![GitHub stars](https://img.shields.io/github/stars/team-approx-bayes/fromp.svg?logo=github&label=Stars)\n+ **[DER]** Dark Experience for General Continual Learning: a Strong, Simple Baseline(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/b704ea2c39778f07c617f6b7ce480e9e-Paper.pdf)[[code]](https://github.com/aimagelab/mammoth)![GitHub stars](https://img.shields.io/github/stars/aimagelab/mammoth.svg?logo=github&label=Stars)\n+ **[InstAParam]** Mitigating Forgetting in Online Continual Learning via Instance-Aware Parameterization(NeurIPS 2020)[[paper]](https://proceedings.neurips.cc/paper/2020/file/ca4b5656b7e193e6bb9064c672ac8dce-Paper.pdf)\n+ **[BOCL]** Bi-Objective Continual Learning: Learning \"New\" While Consolidating \"Known\"(AAAI 2020)[[paper]](https://ojs.aaai.org//index.php/AAAI/article/view/6060)\n+ **[REMIND]** Remind your neural network to prevent catastrophic forgetting(ECCV 2020)[[paper]](https://arxiv.org/pdf/1910.02509v3)[[code]](https://github.com/tyler-hayes/REMIND)![GitHub stars](https://img.shields.io/github/stars/tyler-hayes/REMIND.svg?logo=github&label=Stars)\n+ **[ACL]** Adversarial Continual Learning(ECCV 
2020)[[paper]](https://link.springer.com/chapter/10.1007/978-3-030-58621-8_23)[[code]](https://github.com/facebookresearch/Adversarial-Continual-Learning)![GitHub stars](https://img.shields.io/github/stars/facebookresearch/Adversarial-Continual-Learning.svg?logo=github&label=Stars)\n+ **[TPCIL]** Topology-Preserving Class-Incremental Learning(ECCV 2020)[[paper]](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123640256.pdf)\n+ **[GDumb]** GDumb: A simple approach that questions our progress in continual learning(ECCV 2020)[[paper]](https://www.robots.ox.ac.uk/~tvg/publications/2020/gdumb.pdf)[[code]](https://github.com/drimpossible/GDumb)![GitHub stars](https://img.shields.io/github/stars/drimpossible/GDumb.svg?logo=github&label=Stars)\n+ **[PRS]** Imbalanced Continual Learning with Partitioning Reservoir Sampling(ECCV 2020)[[paper]](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123580409.pdf)\n+ **[PODNet]** Pooled Outputs Distillation for Small-Tasks Incremental Learning(ECCV 2020)[[paper]](https://arxiv.org/abs/2004.13513)[[code]](https://github.com/arthurdouillard/incremental_learning.pytorch)![GitHub stars](https://img.shields.io/github/stars/arthurdouillard/incremental_learning.pytorch.svg?logo=github&label=Stars)\n+ **[FA]** Memory-Efficient Incremental Learning Through Feature Adaptation(ECCV 2020)[[paper]](https://link.springer.com/chapter/10.1007/978-3-030-58517-4_41)\n+ **[L-VAEGAN]** Learning latent representations across multiple data domains using Lifelong VAEGAN(ECCV 2020)[[paper]](https://link.springer.com/chapter/10.1007/978-3-030-58565-5_46)\n+ **[Piggyback GAN]** Piggyback GAN: Efficient Lifelong Learning for Image Conditioned Generation(ECCV 2020)[[paper]](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123660392.pdf)[[code]](https://github.com/arunmallya/piggyback)![GitHub stars](https://img.shields.io/github/stars/arunmallya/piggyback.svg?logo=github&label=Stars)\n+ **[IDA]** Incremental Meta-Learning via Indirect 
Discriminant Alignment(ECCV 2020)[[paper]](https://arxiv.org/abs/2002.04162)\n+ **[RCM]** Reparameterizing Convolutions for Incremental Multi-Task Learning Without Task Interference(ECCV 2020)[[paper]](https://arxiv.org/abs/2007.12540)\n+ **[LAMOL]** LAMOL: LAnguage MOdeling for Lifelong Language Learning(ICLR 2020)[[paper]](https://openreview.net/forum?id=Skgxcn4YDS)[[code]](https://github.com/chho33/LAMOL)![GitHub stars](https://img.shields.io/github/stars/chho33/LAMOL.svg?logo=github&label=Stars)\n+ **[FRCL]** Functional Regularisation for Continual Learning with Gaussian Processes(ICLR 2020)[[paper]](https://arxiv.org/abs/1901.11356)[[code]](https://github.com/AndreevP/FRCL)![GitHub stars](https://img.shields.io/github/stars/AndreevP/FRCL.svg?logo=github&label=Stars)\n+ **[GRS]** Continual Learning with Bayesian Neural Networks for Non-Stationary Data(ICLR 2020)[[paper]](https://openreview.net/forum?id=SJlsFpVtDB)\n+ **[Brain-inspired replay]** Brain-inspired replay for continual learning with artificial neural networks(Nature Communications 2020)[[paper]](https://www.nature.com/articles/s41467-020-17866-2)[[code]](https://github.com/GMvandeVen/brain-inspired-replay)![GitHub stars](https://img.shields.io/github/stars/GMvandeVen/brain-inspired-replay.svg?logo=github&label=Stars)\n+ **[ScaIL]** ScaIL: Classifier Weights Scaling for Class Incremental Learning(WACV 2020)[[paper]](https://openaccess.thecvf.com/content_WACV_2020/html/Belouadah_ScaIL_Classifier_Weights_Scaling_for_Class_Incremental_Learning_WACV_2020_paper.html)[[code]](https://github.com/EdenBelouadah/class-incremental-learning)![GitHub stars](https://img.shields.io/github/stars/EdenBelouadah/class-incremental-learning.svg?logo=github&label=Stars)\n+ **[CLIFER]** CLIFER: Continual Learning with Imagination for Facial Expression Recognition(FG 2020)[[paper]](https://ieeexplore.ieee.org/document/9320226)\n+ **[ARPER]** Continual Learning for Natural Language Generation in Task-oriented Dialog 
Systems(EMNLP 2020)[[paper]](https://arxiv.org/abs/2010.00910)\n+ **[DnR]** Distill and Replay for Continual Language Learning(COLING 2020)[[paper]](https://www.aclweb.org/anthology/2020.coling-main.318.pdf)\n+ **[ADER]** ADER: Adaptively Distilled Exemplar Replay Towards Continual Learning for Session-based Recommendation(RecSys 2020)[[paper]](https://arxiv.org/abs/2007.12000)[[code]](https://github.com/DoubleMuL/ADER)![GitHub stars](https://img.shields.io/github/stars/DoubleMuL/ADER.svg?logo=github&label=Stars)\n+ **[MUC]** More Classifiers, Less Forgetting: A Generic Multi-classifier Paradigm for Incremental Learning(ECCV 2020)[[paper]](http://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123710698.pdf)[[code]](https://github.com/liuyudut/MUC)![GitHub stars](https://img.shields.io/github/stars/liuyudut/MUC.svg?logo=github&label=Stars)\n\n\n## 2019\n\n+ **[LwM]** Learning without memorizing(CVPR 2019)[[paper]](https://ieeexplore.ieee.org/document/8953962)\n+ **[CPG]** Compacting, picking and growing for unforgetting continual learning(NeurIPS 2019)[[paper]](https://arxiv.org/pdf/1910.06562v1.pdf)[[code]](https://github.com/ivclab/CPG)![GitHub stars](https://img.shields.io/github/stars/ivclab/CPG.svg?logo=github&label=Stars)\n+ **[UCL]** Uncertainty-based continual learning with adaptive regularization(NeurIPS 2019)[[paper]](https://proceedings.neurips.cc/paper/2019/file/2c3ddf4bf13852db711dd1901fb517fa-Paper.pdf)\n+ **[OML]** Meta-Learning Representations for Continual Learning(NeurIPS 2019)[[paper]](https://proceedings.neurips.cc/paper/2019/hash/f4dd765c12f2ef67f98f3558c282a9cd-Abstract.html)[[code]](https://github.com/Khurramjaved96/mrcl)![GitHub stars](https://img.shields.io/github/stars/Khurramjaved96/mrcl.svg?logo=github&label=Stars)\n+ **[ALASSO]** Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation(ICCV 
2019)[[paper]](https://openaccess.thecvf.com/content_ICCV_2019/papers/Park_Continual_Learning_by_Asymmetric_Loss_Approximation_With_Single-Side_Overestimation_ICCV_2019_paper.pdf)\n+ **[Learn-to-Grow]** Learn to grow: A continual structure learning framework for overcoming catastrophic forgetting(PMLR 2019)[[paper]](http://proceedings.mlr.press/v97/li19m/li19m.pdf)\n+ **[OWM]** Continual Learning of Context-dependent Processing in Neural Networks(Nature Machine Intelligence 2019)[[paper]](https://www.nature.com/articles/s42256-019-0080-x#Sec2)[[code]](https://github.com/beijixiong3510/OWM)![GitHub stars](https://img.shields.io/github/stars/beijixiong3510/OWM.svg?logo=github&label=Stars)\n+ **[LUCIR]** Learning a Unified Classifier Incrementally via Rebalancing(CVPR 2019)[[paper]](https://openaccess.thecvf.com/content_CVPR_2019/html/Hou_Learning_a_Unified_Classifier_Incrementally_via_Rebalancing_CVPR_2019_paper.html)[[code]](https://github.com/hshustc/CVPR19_Incremental_Learning)![GitHub stars](https://img.shields.io/github/stars/hshustc/CVPR19_Incremental_Learning.svg?logo=github&label=Stars)\n+ **[TFCL]** Task-Free Continual Learning(CVPR 2019)[[paper]](https://openaccess.thecvf.com/content_CVPR_2019/papers/Aljundi_Task-Free_Continual_Learning_CVPR_2019_paper.pdf)\n+ **[GD-WILD]** Overcoming catastrophic forgetting with unlabeled data in the wild(CVPR 2019)[[paper]](https://ieeexplore.ieee.org/document/9010368)[[code]](https://github.com/kibok90/iccv2019-inc)![GitHub stars](https://img.shields.io/github/stars/kibok90/iccv2019-inc.svg?logo=github&label=Stars)\n+ **[DGM]** Learning to Remember: A Synaptic Plasticity Driven Framework for Continual Learning(CVPR 2019)[[paper]](https://openaccess.thecvf.com/content_CVPR_2019/papers/Ostapenko_Learning_to_Remember_A_Synaptic_Plasticity_Driven_Framework_for_Continual_CVPR_2019_paper.pdf)\n+ **[BiC]** Large Scale Incremental Learning(CVPR 
2019)[[paper]](https://arxiv.org/abs/1905.13260)[[code]](https://github.com/wuyuebupt/LargeScaleIncrementalLearning)![GitHub stars](https://img.shields.io/github/stars/wuyuebupt/LargeScaleIncrementalLearning.svg?logo=github&label=Stars)\n+ **[MER]** Learning to learn without forgetting by maximizing transfer and minimizing interference(ICLR 2019)[[paper]](https://openreview.net/pdf?id=B1gTShAct7)[[code]](https://github.com/mattriemer/mer)![GitHub stars](https://img.shields.io/github/stars/mattriemer/mer.svg?logo=github&label=Stars)\n+ **[PGMA]** Overcoming catastrophic forgetting for continual learning via model adaptation(ICLR 2019)[[paper]](https://openreview.net/forum?id=ryGvcoA5YX)\n+ **[A-GEM]** Efficient Lifelong Learning with A-GEM(ICLR 2019)[[paper]](https://arxiv.org/pdf/1812.00420.pdf)[[code]](https://github.com/facebookresearch/agem)![GitHub stars](https://img.shields.io/github/stars/facebookresearch/agem.svg?logo=github&label=Stars)\n+ **[IL2M]** Class incremental learning with dual memory(ICCV 2019)[[paper]](https://ieeexplore.ieee.org/document/9009019)\n+ **[ILCAN]** Incremental learning using conditional adversarial networks(ICCV 2019)[[paper]](https://ieeexplore.ieee.org/document/9009031)\n+ **[Lifelong GAN]** Lifelong GAN: Continual Learning for Conditional Image Generation(ICCV 2019)[[paper]](https://openaccess.thecvf.com/content_ICCV_2019/html/Zhai_Lifelong_GAN_Continual_Learning_for_Conditional_Image_Generation_ICCV_2019_paper.html)\n+ **[GSS]** Gradient based sample selection for online continual learning(NeurIPS 2019)[[paper]](https://proceedings.neurips.cc/paper/2019/file/e562cd9c0768d5464b64cf61da7fc6bb-Paper.pdf)\n+ **[ER]** Experience Replay for Continual Learning(NeurIPS 2019)[[paper]](https://arxiv.org/abs/1811.11682)\n+ **[MIR]** Online Continual Learning with Maximal Interfered Retrieval(NeurIPS 
2019)[[paper]](https://proceedings.neurips.cc/paper/2019/hash/15825aee15eb335cc13f9b559f166ee8-Abstract.html)[[code]](https://github.com/optimass/Maximally_Interfered_Retrieval)![GitHub stars](https://img.shields.io/github/stars/optimass/Maximally_Interfered_Retrieval.svg?logo=github&label=Stars)\n+ **[RPS-Net]** Random Path Selection for Incremental Learning(NeurIPS 2019)[[paper]](https://www.researchgate.net/profile/Salman-Khan-62/publication/333617650_Random_Path_Selection_for_Incremental_Learning/links/5d04905ea6fdcc39f11b7355/Random-Path-Selection-for-Incremental-Learning.pdf)\n+ **[CLEER]** Complementary Learning for Overcoming Catastrophic Forgetting Using Experience Replay(IJCAI 2019)[[paper]](https://arxiv.org/abs/1903.04566)\n+ **[PAE]** Increasingly Packing Multiple Facial-Informatics Modules in A Unified Deep-Learning Model via Lifelong Learning(ICMR 2019)[[paper]](https://dl.acm.org/doi/10.1145/3323873.3325053)[[code]](https://github.com/ivclab/PAE)![GitHub stars](https://img.shields.io/github/stars/ivclab/PAE.svg?logo=github&label=Stars)\n\n\n## 2018\n\n+ **[PackNet]** PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning(CVPR 2018)[[paper]](https://openaccess.thecvf.com/content_cvpr_2018/html/Mallya_PackNet_Adding_Multiple_CVPR_2018_paper.html)[[code]](https://github.com/arunmallya/packnet)![GitHub stars](https://img.shields.io/github/stars/arunmallya/packnet.svg?logo=github&label=Stars)\n+ **[OLA]** Online Structured Laplace Approximations for Overcoming Catastrophic Forgetting(NIPS 2018)[[paper]](https://proceedings.neurips.cc/paper/2018/hash/f31b20466ae89669f9741e047487eb37-Abstract.html)\n+ **[RCL]** Reinforced Continual Learning(NIPS 2018)[[paper]](http://papers.nips.cc/paper/7369-reinforced-continual-learning.pdf)[[code]](https://github.com/xujinfan/Reinforced-Continual-Learning)![GitHub stars](https://img.shields.io/github/stars/xujinfan/Reinforced-Continual-Learning.svg?logo=github&label=Stars)\n+ **[MARL]** Routing networks: 
Adaptive selection of non-linear functions for multi-task learning(ICLR 2018)[[paper]](https://openreview.net/forum?id=ry8dvM-R-)\n+ **[P&C]** Progress & Compress: A scalable framework for continual learning(ICML 2018)[[paper]](https://arxiv.org/abs/1805.06370)\n+ **[DEN]** Lifelong Learning with Dynamically Expandable Networks(ICLR 2018)[[paper]](https://openreview.net/forum?id=Sk7KsfW0-)[[code]](https://github.com/jaehong31/DEN)![GitHub stars](https://img.shields.io/github/stars/jaehong31/DEN.svg?logo=github&label=Stars)\n+ **[Piggyback]** Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights(ECCV 2018)[[paper]](https://openaccess.thecvf.com/content_ECCV_2018/papers/Arun_Mallya_Piggyback_Adapting_a_ECCV_2018_paper.pdf)[[code]](https://github.com/arunmallya/piggyback)![GitHub stars](https://img.shields.io/github/stars/arunmallya/piggyback.svg?logo=github&label=Stars)\n+ **[RWalk]** Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence(ECCV 2018)[[paper]](https://openaccess.thecvf.com/content_ECCV_2018/html/Arslan_Chaudhry__Riemannian_Walk_ECCV_2018_paper.html)\n+ **[MAS]** Memory Aware Synapses: Learning What not to Forget(ECCV 2018)[[paper]](https://arxiv.org/pdf/1711.09601.pdf)[[code]](https://github.com/rahafaljundi/MAS-Memory-Aware-Synapses)![GitHub stars](https://img.shields.io/github/stars/rahafaljundi/MAS-Memory-Aware-Synapses.svg?logo=github&label=Stars)\n+ **[R-EWC]** Rotate your Networks: Better Weight Consolidation and Less Catastrophic Forgetting(ICPR 2018)[[paper]](https://ieeexplore.ieee.org/abstract/document/8545895)[[code]](https://github.com/xialeiliu/RotateNetworks)![GitHub stars](https://img.shields.io/github/stars/xialeiliu/RotateNetworks.svg?logo=github&label=Stars)\n+ **[HAT]** Overcoming Catastrophic Forgetting with Hard Attention to the Task(PMLR 2018)[[paper]](http://proceedings.mlr.press/v80/serra18a.html)[[code]](https://github.com/joansj/hat)![GitHub 
stars](https://img.shields.io/github/stars/joansj/hat.svg?logo=github&label=Stars)\n+ **[MeRGANs]** Memory Replay GANs: learning to generate images from new categories without forgetting(NIPS 2018)[[paper]](https://arxiv.org/abs/1809.02058)[[code]](https://github.com/WuChenshen/MeRGAN)![GitHub stars](https://img.shields.io/github/stars/WuChenshen/MeRGAN.svg?logo=github&label=Stars)\n+ **[EEIL]** End-to-End Incremental Learning(ECCV 2018)[[paper]](https://arxiv.org/abs/1807.09536)[[code]](https://github.com/fmcp/EndToEndIncrementalLearning)![GitHub stars](https://img.shields.io/github/stars/fmcp/EndToEndIncrementalLearning.svg?logo=github&label=Stars)\n+ **[Adaptation by Distillation]** Lifelong Learning via Progressive Distillation and Retrospection(ECCV 2018)[[paper]](http://openaccess.thecvf.com/content_ECCV_2018/papers/Saihui_Hou_Progressive_Lifelong_Learning_ECCV_2018_paper.pdf)\n+ **[ESGR]** Exemplar-Supported Generative Reproduction for Class Incremental Learning(BMVC 2018)[[paper]](http://bmvc2018.org/contents/papers/0325.pdf)[[code]](https://github.com/TonyPod/ESGR)![GitHub stars](https://img.shields.io/github/stars/TonyPod/ESGR.svg?logo=github&label=Stars)\n+ **[VCL]** Variational Continual Learning(ICLR 2018)[[paper]](https://arxiv.org/pdf/1710.10628.pdf#page=13&zoom=100,110,890)\n+ **[FearNet]** FearNet: Brain-Inspired Model for Incremental Learning(ICLR 2018)[[paper]](https://openreview.net/forum?id=SJ1Xmf-Rb)\n+ **[DGDMN]** Deep Generative Dual Memory Network for Continual Learning(ICLR 2018)[[paper]](https://openreview.net/forum?id=BkVsWbbAW)\n\n\n## 2017\n\n+ **[Expert Gate]** Expert Gate: Lifelong learning with a network of experts(CVPR 2017)[[paper]](https://openaccess.thecvf.com/content_cvpr_2017/papers/Aljundi_Expert_Gate_Lifelong_CVPR_2017_paper.pdf)[[code]](https://github.com/wannabeOG/ExpertNet-Pytorch)![GitHub stars](https://img.shields.io/github/stars/wannabeOG/ExpertNet-Pytorch.svg?logo=github&label=Stars)\n+ **[ILOD]** Incremental Learning 
of Object Detectors without Catastrophic Forgetting(ICCV 2017)[[paper]](https://openaccess.thecvf.com/content_ICCV_2017/papers/Shmelkov_Incremental_Learning_of_ICCV_2017_paper.pdf)[[code]](https://github.com/kshmelkov/incremental_detectors)![GitHub stars](https://img.shields.io/github/stars/kshmelkov/incremental_detectors.svg?logo=github&label=Stars)\n+ **[EBLL]** Encoder Based Lifelong Learning(ICCV 2017)[[paper]](https://arxiv.org/abs/1704.01920)\n+ **[IMM]** Overcoming Catastrophic Forgetting by Incremental Moment Matching(NIPS 2017)[[paper]](https://arxiv.org/abs/1703.08475)[[code]](https://github.com/btjhjeon/IMM_tensorflow)![GitHub stars](https://img.shields.io/github/stars/btjhjeon/IMM_tensorflow.svg?logo=github&label=Stars)\n+ **[SI]** Continual Learning through Synaptic Intelligence(ICML 2017)[[paper]](http://proceedings.mlr.press/v70/zenke17a/zenke17a.pdf)[[code]](https://github.com/ganguli-lab/pathint)![GitHub stars](https://img.shields.io/github/stars/ganguli-lab/pathint.svg?logo=github&label=Stars)\n+ **[EWC]** Overcoming Catastrophic Forgetting in Neural Networks(PNAS 2017)[[paper]](https://arxiv.org/abs/1612.00796)[[code]](https://github.com/stokesj/EWC)![GitHub stars](https://img.shields.io/github/stars/stokesj/EWC.svg?logo=github&label=Stars)\n+ **[iCaRL]** iCaRL: Incremental Classifier and Representation Learning(CVPR 2017)[[paper]](https://arxiv.org/abs/1611.07725)[[code]](https://github.com/srebuffi/iCaRL)![GitHub stars](https://img.shields.io/github/stars/srebuffi/iCaRL.svg?logo=github&label=Stars)\n+ **[GEM]** Gradient Episodic Memory for Continual Learning(NIPS 2017)[[paper]](https://proceedings.neurips.cc/paper/2017/hash/f87522788a2be2d171666752f97ddebb-Abstract.html)[[code]](https://github.com/facebookresearch/GradientEpisodicMemory)![GitHub stars](https://img.shields.io/github/stars/facebookresearch/GradientEpisodicMemory.svg?logo=github&label=Stars)\n+ **[DGR]** Continual Learning with Deep Generative Replay(NIPS 
2017)[[paper]](https://proceedings.neurips.cc/paper/2017/file/0efbe98067c6c73dba1250d2beaa81f9-Paper.pdf)[[code]](https://github.com/kuc2477/pytorch-deep-generative-replay)![GitHub stars](https://img.shields.io/github/stars/kuc2477/pytorch-deep-generative-replay.svg?logo=github&label=Stars)\n\n\n## 2016\n\n+ **[LwF]** Learning without Forgetting(ECCV 2016)[[paper]](https://link.springer.com/chapter/10.1007/978-3-319-46493-0_37)[[code]](https://github.com/lizhitwo/LearningWithoutForgetting)![GitHub stars](https://img.shields.io/github/stars/lizhitwo/LearningWithoutForgetting.svg?logo=github&label=Stars)\n\n\n\n# :gift_heart: Contributors <span id='contributors'></span>\n\n[<img src=\"pics/contributor_1.jfif\"  width=\"80\" />](https://github.com/pinna526)    [<img src=\"pics/contributor_2.jfif\"  width=\"80\" />](https://github.com/xiaopenghong)    [<img src=\"pics/contributor_4.jfif\"  width=\"80\" />](https://github.com/iamwangyabin)    [<img src=\"pics/contributor_3.jfif\"  width=\"80\" />](https://github.com/ZhihengCV)    [<img src=\"pics/contributor_5.jfif\"  width=\"80\" />](https://github.com/benmagnifico)    [<img src=\"pics/contributor_6.jfif\"  width=\"80\" />](https://github.com/zxxxxh)\n\n"
  }
]