Research

In the last few years we have witnessed a renewed and steadily growing interest in the ability to learn continuously from high-dimensional data. On this page, we keep track of recent Continual/Lifelong Learning developments in the research community.

Publications

In this section we maintain an updated list of publications related to Continual Learning. The reference list is automatically generated from a single BibTeX file maintained by the ContinualAI community through an open Mendeley group! Join the group to add a reference to your paper, and please remember to follow the (very simple) contribution guidelines when adding new papers.
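
As a rough sketch, a new entry in that BibTeX file might look like the one below, modeled on the CORe50 reference already in this list. The entry key and the use of a keywords field to drive the per-paper tags (e.g., [vision]) are assumptions on our part; the contribution guidelines remain the authoritative format.

  % Hypothetical entry (modeled on an existing reference in this list); the
  % keywords field is assumed to map to the tags shown next to each paper.
  @inproceedings{lomonaco2017core50,
    title     = {CORe50: A New Dataset and Benchmark for Continuous Object Recognition},
    author    = {Lomonaco, Vincenzo and Maltoni, Davide},
    booktitle = {Proceedings of the 1st Annual Conference on Robot Learning},
    pages     = {17--26},
    year      = {2017},
    keywords  = {vision}
  }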

Applications

23 papers

In this section we maintain a list of application-oriented papers on continual learning and related topics.

  • Continual Learning of Predictive Models in Video Sequences via Variational Autoencoders by Damian Campo, Giulia Slavic, Mohamad Baydoun, Lucio Marcenaro and Carlo Regazzoni. arXiv, 2020. [vision]

  • Unsupervised Model Personalization While Preserving Privacy and Scalability: An Open Problem by Matthias De Lange, Xu Jia, Sarah Parisot, Ales Leonardis, Gregory Slabaugh and Tinne Tuytelaars. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14451–14460, 2020. [framework] [mnist] [vision]

  • Incremental Learning for End-to-End Automatic Speech Recognition by Li Fu, Xiaoxiao Li and Libo Zi. arXiv, 2020. [audio]

  • Neural Topic Modeling with Continual Lifelong Learning by Pankaj Gupta, Yatin Chaudhary, Thomas Runkler and Hinrich Schütze. ICML, 2020. [nlp]

  • CLOPS: Continual Learning of Physiological Signals by Dani Kiyasseh, Tingting Zhu and David A Clifton. arXiv, 2020.

  • Class-Agnostic Continual Learning of Alternating Languages and Domains by Germán Kruszewski, Ionut-Teodor Sorodoc and Tomas Mikolov. arXiv, 2020. [nlp] [rnn]

  • Clinical Applications of Continual Learning Machine Learning by Cecilia S Lee and Aaron Y Lee. The Lancet Digital Health, e279–e281, 2020.

  • Continual Learning for Domain Adaptation in Chest X-Ray Classification by Matthias Lenga, Heinrich Schulz and Axel Saalbach. arXiv, 1–11, 2020. [vision]

  • Sequential Domain Adaptation through Elastic Weight Consolidation for Sentiment Analysis by Avinash Madasu and Vijjini Anvesh Rao. arXiv, 2020. [nlp] [rnn]

  • Importance Driven Continual Learning for Segmentation Across Domains by Sinan Özgür Özgün, Anne-Marie Rickmann, Abhijit Guha Roy and Christian Wachinger. arXiv, 1–10, 2020. [vision]

  • LAMOL: LAnguage MOdeling for Lifelong Language Learning by Fan-Keng Sun, Cheng-Hao Ho and Hung-Yi Lee. ICLR, 2020. [nlp]

  • Non-Parametric Adaptation for Neural Machine Translation by Ankur Bapna and Orhan Firat. arXiv, 2019. [nlp]

  • Episodic Memory in Lifelong Language Learning by Cyprien de Masson D’Autume, Sebastian Ruder, Lingpeng Kong and Dani Yogatama. NeurIPS, 2019. [nlp]

  • Continual Adaptation for Efficient Machine Communication by Robert D Hawkins, Minae Kwon, Dorsa Sadigh and Noah D Goodman. Proceedings of the ICML Workshop on Adaptive & Multitask Learning: Algorithms & Systems, 2019.

  • Continual Learning for Sentence Representations Using Conceptors by Tianlin Liu, Lyle Ungar and João Sedoc. NAACL, 2019. [nlp]

  • Lifelong and Interactive Learning of Factual Knowledge in Dialogues by Sahisnu Mazumder, Bing Liu, Shuai Wang and Nianzu Ma. Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, 21–31, 2019. [nlp]

  • Making Good on LSTMs’ Unfulfilled Promise by Daniel Philps, Artur d’Avila Garcez and Tillman Weyde. arXiv, 2019. [rnn]

  • Overcoming Catastrophic Forgetting During Domain Adaptation of Neural Machine Translation by Brian Thompson, Jeremy Gwinnup, Huda Khayrallah, Kevin Duh and Philipp Koehn. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2062–2068, 2019. [nlp] [rnn]

  • A Multi-Task Learning Framework for Overcoming the Catastrophic Forgetting in Automatic Speech Recognition by Jiabin Xue, Jiqing Han, Tieran Zheng, Xiang Gao and Jiaxing Guo. arXiv, 2019. [audio] [rnn]

  • Lifelong Learning for Scene Recognition in Remote Sensing Images by Min Zhai, Huaping Liu and Fuchun Sun. IEEE Geoscience and Remote Sensing Letters, 1472–1476, 2019. [vision]

  • Towards Continual Learning in Medical Imaging by Chaitanya Baweja, Ben Glocker and Konstantinos Kamnitsas. NeurIPS Workshop on Continual Learning, 1–4, 2018. [vision]

  • Toward Continual Learning for Conversational Agents by Sungjin Lee. arXiv, 2018. [nlp]

  • Principles of Lifelong Learning for Predictive User Modeling by Ashish Kapoor and Eric Horvitz. User Modeling 2007, 37–46, 2009.

Architectural Methods

25 papers

In this section we collect all the papers introducing a continual learning strategy based on architectural methods.

  • Continual Learning with Adaptive Weights (CLAW) by Tameem Adel, Han Zhao and Richard E Turner. International Conference on Learning Representations, 2020. [cifar] [mnist] [omniglot]

  • Continual Learning with Gated Incremental Memories for Sequential Data Processing by Andrea Cossu, Antonio Carta and Davide Bacciu. Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN 2020), 2020. [mnist] [rnn]

  • Bayesian Nonparametric Weight Factorization for Continual Learning by Nikhil Mehta, Kevin J Liang and Lawrence Carin. arXiv, 1–17, 2020. [bayes] [cifar] [mnist] [sparsity]

  • SpaceNet: Make Free Space For Continual Learning by Ghada Sokar, Decebal Constantin Mocanu and Mykola Pechenizkiy. arXiv, 2020. [cifar] [fashion] [mnist] [sparsity]

  • Efficient Continual Learning with Modular Networks and Task-Driven Priors by Tom Veniat, Ludovic Denoyer and Marc’Aurelio Ranzato. arXiv, 2020. [experimental]

  • Progressive Memory Banks for Incremental Domain Adaptation by Nabiha Asghar, Lili Mou, Kira A Selby, Kevin D Pantasdo, Pascal Poupart and Xin Jiang. International Conference on Learning Representations, 2019. [nlp] [rnn]

  • Autonomous Deep Learning: Continual Learning Approach for Dynamic Environments by Andri Ashfahani and Mahardhika Pratama. Proceedings of the 2019 SIAM International Conference on Data Mining, 666–674, 2019. [mnist]

  • Compacting, Picking and Growing for Unforgetting Continual Learning by Steven C Y Hung, Cheng-Hao Tu, Cheng-En Wu, Chien-Hung Chen, Yi-Ming Chan and Chu-Song Chen. NeurIPS, 13669–13679, 2019. [cifar] [imagenet]

  • Learn to Grow: A Continual Structure Learning Framework for Overcoming Catastrophic Forgetting by Xilai Li, Yingbo Zhou, Tianfu Wu, Richard Socher and Caiming Xiong. arXiv, 2019. [cifar] [mnist]

  • Towards AutoML in the Presence of Drift: First Results by Jorge G. Madrid, Hugo Jair Escalante, Eduardo F. Morales, Wei-Wei Tu, Yang Yu, Lisheng Sun-Hosoya, Isabelle Guyon and Michele Sebag. arXiv, 2019.

  • Continual Unsupervised Representation Learning by Dushyant Rao, Francesco Visin, Andrei A Rusu, Yee Whye Teh, Razvan Pascanu and Raia Hadsell. NeurIPS, 2019. [mnist] [omniglot]

  • A Progressive Model to Enable Continual Learning for Semantic Slot Filling by Yilin Shen, Xiangyu Zeng and Hongxia Jin. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 1279–1284, 2019. [nlp]

  • Adaptive Compression-Based Lifelong Learning by Shivangi Srivastava, Maxim Berman, Matthew B Blaschko and Devis Tuia. BMVC, 2019. [imagenet] [sparsity]

  • Frosting Weights for Better Continual Training by Xiaofeng Zhu, Feng Liu, Goce Trajcevski and Dingding Wang. 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA), 506–510, 2019. [cifar] [mnist]

  • Dynamic Few-Shot Visual Learning Without Forgetting by Spyros Gidaris and Nikos Komodakis. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 4367–4375, 2018. [imagenet] [vision]

  • HOUDINI: Lifelong Learning as Program Synthesis by Lazar Valkov, Dipak Chaudhari, Akash Srivastava, Charles Sutton and Swarat Chaudhuri. NeurIPS, 8687–8698, 2018.

  • Reinforced Continual Learning by Ju Xu and Zhanxing Zhu. Advances in Neural Information Processing Systems, 899–908, 2018. [cifar] [mnist]

  • Lifelong Learning With Dynamically Expandable Networks by Jaehong Yoon, Eunho Yang, Jeongtae Lee and Sung Ju Hwang. ICLR, 11, 2018. [cifar] [mnist] [sparsity]

  • Expert Gate: Lifelong Learning with a Network of Experts by Rahaf Aljundi, Punarjay Chakravarty and Tinne Tuytelaars. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. [vision]

  • Neurogenesis Deep Learning by Timothy John Draelos, Nadine E Miner, Christopher Lamb, Jonathan A Cox, Craig Michael Vineyard, Kristofor David Carlson, William Mark Severa, Conrad D James and James Bradley Aimone. IJCNN, 2017. [mnist]

  • Net2Net: Accelerating Learning via Knowledge Transfer by Tianqi Chen, Ian Goodfellow and Jonathon Shlens. ICLR, 2016.

  • Continual Learning through Evolvable Neural Turing Machines by Benno Luders, Mikkel Schlager and Sebastian Risi. NIPS 2016 Workshop on Continual Learning and Deep Networks, 2016.

  • Progressive Neural Networks by Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu and Raia Hadsell. arXiv, 2016. [mnist]

  • Knowledge Transfer in Deep Block-Modular Neural Networks by Alexander V. Terekhov, Guglielmo Montone and J. Kevin O’Regan. Conference on Biomimetic and Biohybrid Systems, 268–279, 2015. [vision]

  • A Self-Organising Network That Grows When Required by Stephen Marsland, Jonathan Shapiro and Ulrich Nehmzow. Neural Networks, 1041–1058, 2002. [som]

Benchmarks

4 papers

In this section we list all the papers related to new benchmark proposals for continual learning and related topics.

  • Defining Benchmarks for Continual Few-Shot Learning by Antreas Antoniou, Massimiliano Patacchiola, Mateusz Ochal and Amos Storkey. arXiv, 2020. [imagenet]

  • Continual Reinforcement Learning in 3D Non-Stationary Environments by Vincenzo Lomonaco, Karan Desai, Eugenio Culurciello and Davide Maltoni. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 248–249, 2020.

  • OpenLORIS-Object: A Robotic Vision Dataset and Benchmark for Lifelong Deep Learning by Qi She, Fan Feng, Xinyue Hao, Qihan Yang, Chuanlin Lan, Vincenzo Lomonaco, Xuesong Shi, Zhengwei Wang, Yao Guo, Yimin Zhang, Fei Qiao and Rosa H M Chan. arXiv, 1–8, 2019. [vision]

  • CORe50: A New Dataset and Benchmark for Continuous Object Recognition by Vincenzo Lomonaco and Davide Maltoni. Proceedings of the 1st Annual Conference on Robot Learning, 17–26, 2017. [vision]

Bioinspired Methods

22 papers

In this section we list all the papers related to bioinspired continual learning approaches.

  • Controlled Forgetting: Targeted Stimulation and Dopaminergic Plasticity Modulation for Unsupervised Lifelong Learning in Spiking Neural Networks by Jason M. Allred and Kaushik Roy. Frontiers in Neuroscience, 7, 2020. [spiking]

  • Cognitively-Inspired Model for Incremental Learning Using a Few Examples by A. Ayub and A. R. Wagner. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020. [cifar] [cubs] [dual]

  • Storing Encoded Episodes as Concepts for Continual Learning by Ali Ayub and Alan R. Wagner. arXiv, 2020. [generative] [imagenet] [mnist]

  • Spiking Neural Predictive Coding for Continual Learning from Data Streams by Alexander Ororbia. arXiv, 2020. [spiking]

  • Brain-like Replay for Continual Learning with Artificial Neural Networks by Gido M. van de Ven, Hava T. Siegelmann and Andreas S. Tolias. International Conference on Learning Representations (Workshop on Bridging AI and Cognitive Science), 2020. [cifar]

  • Selfless Sequential Learning by Rahaf Aljundi, Marcus Rohrbach and Tinne Tuytelaars. ICLR, 2019. [cifar] [mnist] [sparsity]

  • Backpropamine: Training Self-Modifying Neural Networks with Differentiable Neuromodulated Plasticity by Thomas Miconi, Aditya Rawal, Jeff Clune and Kenneth O Stanley. ICLR, 2019.

  • Continual Learning of Recurrent Neural Networks by Locally Aligning Distributed Representations by Alexander Ororbia, Ankur Mali, C Lee Giles and Daniel Kifer. arXiv, 2019. [mnist] [rnn] [spiking]

  • Lifelong Neural Predictive Coding: Sparsity Yields Less Forgetting When Learning Cumulatively by Alexander Ororbia, Ankur Mali, Daniel Kifer and C Lee Giles. arXiv, 1–11, 2019. [fashion] [mnist] [sparsity]

  • FearNet: Brain-Inspired Model for Incremental Learning by Ronald Kemker and Christopher Kanan. ICLR, 2018. [audio] [cifar] [generative]

  • Differentiable Plasticity: Training Plastic Neural Networks with Backpropagation by Thomas Miconi, Kenneth Stanley and Jeff Clune. International Conference on Machine Learning, 3559–3568, 2018.

  • Lifelong Learning of Spatiotemporal Representations With Dual-Memory Recurrent Self-Organization by German I Parisi, Jun Tani, Cornelius Weber and Stefan Wermter. Frontiers in Neurorobotics, 2018. [core50] [dual] [rnn] [som]

  • SLAYER: Spike Layer Error Reassignment in Time by Sumit Bam Shrestha and Garrick Orchard. Advances in Neural Information Processing Systems 31, 1412–1421, 2018.

  • Neurogenesis-Inspired Dictionary Learning: Online Model Adaption in a Changing World by Sahil Garg, Irina Rish, Guillermo Cecchi and Aurelie Lozano. IJCAI International Joint Conference on Artificial Intelligence, 1696–1702, 2017. [nlp] [vision]

  • Diffusion-Based Neuromodulation Can Eliminate Catastrophic Forgetting in Simple Neural Networks by Roby Velez and Jeff Clune. PLoS ONE, 1–31, 2017.

  • How Do Neurons Operate on Sparse Distributed Representations? A Mathematical Theory of Sparsity, Neurons and Active Dendrites by Subutai Ahmad and Jeff Hawkins. arXiv, 1–23, 2016. [hebbian] [sparsity]

  • Continuous Online Sequence Learning with an Unsupervised Neural Network Model by Yuwei Cui, Subutai Ahmad and Jeff Hawkins. Neural Computation, 2474–2504, 2016. [spiking]

  • Backpropagation of Hebbian Plasticity for Continual Learning by Thomas Miconi. NIPS Workshop - Continual Learning, 5, 2016.

  • Mitigation of Catastrophic Forgetting in Recurrent Neural Networks Using a Fixed Expansion Layer by Robert Coop and Itamar Arel. The 2013 International Joint Conference on Neural Networks (IJCNN), 1–7, 2013. [mnist] [rnn] [sparsity]

  • Compete to Compute by Rupesh Kumar Srivastava, Jonathan Masci, Sohrob Kazerounian, Faustino Gomez and Jürgen Schmidhuber. Advances in Neural Information Processing Systems 26, 2013. [mnist] [sparsity]

  • Mitigation of Catastrophic Interference in Neural Networks Using a Fixed Expansion Layer by Robert Coop and Itamar Arel. 2012 IEEE 55th International Midwest Symposium on Circuits and Systems (MWSCAS), 726–729, 2012. [sparsity]

  • Synaptic Plasticity: Taming the Beast by L F Abbott and Sacha B Nelson. Nature Neuroscience, 1178–1183, 2000. [hebbian]

Catastrophic Forgetting Studies

9 papers

In this section we list all the major contributions trying to understand catastrophic forgetting and its implications for machines that learn continually.

  • Sequential Mastery of Multiple Visual Tasks: Networks Naturally Learn to Learn and Forget to Forget by Guy Davidson and Michael C Mozer. CVPR, 9282–9293, 2020. [vision]

  • Dissecting Catastrophic Forgetting in Continual Learning by Deep Visualization by Giang Nguyen, Shuan Chen, Thao Do, Tae Joon Jun, Ho-Jin Choi and Daeyoung Kim. arXiv, 2020. [vision]

  • Toward Understanding Catastrophic Forgetting in Continual Learning by Cuong V Nguyen, Alessandro Achille, Michael Lam, Tal Hassner, Vijay Mahadevan and Stefano Soatto. arXiv, 2019. [cifar] [mnist]

  • An Empirical Study of Example Forgetting during Deep Neural Network Learning by Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio and Geoffrey J Gordon. International Conference on Learning Representations, 2019. [cifar] [mnist]

  • Localizing Catastrophic Forgetting in Neural Networks by Felix Wiewel and Bin Yang. arXiv, 2019. [mnist]

  • Don’t Forget, There Is More than Forgetting: New Metrics for Continual Learning by Natalia Díaz-Rodríguez, Vincenzo Lomonaco, David Filliat and Davide Maltoni. arXiv, 2018. [cifar] [framework]

  • The Stability-Plasticity Dilemma: Investigating the Continuum from Catastrophic Forgetting to Age-Limited Learning Effects by Martial Mermillod, Aurélia Bugaiska and Patrick Bonin. Frontiers in Psychology, 504, 2013.

  • Catastrophic Forgetting in Connectionist Networks by Robert French. Trends in Cognitive Sciences, 128–135, 1999. [sparsity]

  • How Does a Brain Build a Cognitive Code? by Stephen Grossberg. Psychological Review, 1–51, 1980.

Classics

8 papers

In this section you’ll find pioneering and classic continual learning papers. We recommend reading all the papers in this section for a good background on current continual deep learning developments.

  • The Organization of Behavior: A Neuropsychological Theory by D O Hebb. Lawrence Erlbaum, 2002. [hebbian]

  • Pseudo-Recurrent Connectionist Networks: An Approach to the ‘Sensitivity-Stability’ Dilemma by Robert French. Connection Science, 353–380, 1997. [dual]

  • CHILD: A First Step Towards Continual Learning by Mark B Ring. Machine Learning, 77–104, 1997.

  • Is Learning The N-Th Thing Any Easier Than Learning The First? by Sebastian Thrun. Advances in Neural Information Processing Systems 8, 640–646, 1996. [vision]

  • Learning in the Presence of Concept Drift and Hidden Contexts by Gerhard Widmer and Miroslav Kubat. Machine Learning, 69–101, 1996.

  • Using Semi-Distributed Representations to Overcome Catastrophic Forgetting in Connectionist Networks by Robert French. Proceedings of the 13th Annual Cognitive Science Society Conference, 173–178, 1991. [sparsity]

  • The ART of Adaptive Pattern Recognition by a Self-Organizing Neural Network by Gail A. Carpenter and Stephen Grossberg. Computer, 77–88, 1988.

  • How Does a Brain Build a Cognitive Code? by Stephen Grossberg. Psychological Review, 1–51, 1980.

Continual Few Shot Learning

7 papers

Here we list the papers related to few-shot continual and incremental learning.

  • Defining Benchmarks for Continual Few-Shot Learning by Antreas Antoniou, Massimiliano Patacchiola, Mateusz Ochal and Amos Storkey. arXiv, 2020. [imagenet]

  • Tell Me What This Is: Few-Shot Incremental Object Learning by a Robot by Ali Ayub and Alan R. Wagner. arXiv, 2020.

  • La-MAML: Look-Ahead Meta Learning for Continual Learning by Gunshi Gupta, Karmesh Yadav and Liam Paull. arXiv, 2020.

  • iTAML: An Incremental Task-Agnostic Meta-Learning Approach by Jathushan Rajasegaran, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan and Mubarak Shah. IEEE/CVF Conference on Computer Vision and Pattern Recognition, 13588–13597, 2020. [cifar] [imagenet]

  • Wandering within a World: Online Contextualized Few-Shot Learning by Mengye Ren, Michael L Iuzzolino, Michael C Mozer and Richard S Zemel. arXiv, 2020. [omniglot]

  • Few-Shot Class-Incremental Learning by X. Tao, Hong X., X. Chang, S. Dong, X. Wei and Y. Gong. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. [cifar]

  • Few-Shot Class-Incremental Learning via Feature Space Composition by H. Zhao, Y. Fu, X. Li, S. Li, B. Omar and X. Li. arXiv, 2020. [cifar] [cubs]

Continual Meta Learning

4 papers

In this section we list all the papers related to continual meta-learning.

  • Online Fast Adaptation and Knowledge Accumulation: A New Approach to Continual Learning by Massimo Caccia, Pau Rodriguez, Oleksiy Ostapenko, Fabrice Normandin, Min Lin, Lucas Caccia, Issam Laradji, Irina Rish, Alexandre Lacoste, David Vazquez and Laurent Charlin. arXiv, 2020. [fashion] [framework] [mnist]

  • Continuous Meta-Learning without Tasks by James Harrison, Apoorva Sharma, Chelsea Finn and Marco Pavone. arXiv, 2019. [imagenet] [mnist]

  • Task Agnostic Continual Learning via Meta Learning by Xu He, Jakub Sygnowski, Alexandre Galashov, Andrei A Rusu, Yee Whye Teh and Razvan Pascanu. arXiv:1906.05201 [cs, stat], 2019. [mnist]

  • Reconciling Meta-Learning and Continual Learning with Online Mixtures of Tasks by Ghassen Jerfel, Erin Grant, Tom Griffiths and Katherine A Heller. Advances in Neural Information Processing Systems, 9122–9133, 2019. [bayes] [vision]

Continual Reinforcement Learning

19 papers

In this section we list all the papers related to continual reinforcement learning.

  • Reducing Catastrophic Forgetting When Evolving Neural Networks by Joseph Early. arXiv, 2019.

  • A Meta-MDP Approach to Exploration for Lifelong Reinforcement Learning by Francisco M Garcia and Philip S Thomas. NeurIPS, 5691–5700, 2019.

  • Policy Consolidation for Continual Reinforcement Learning by Christos Kaplanis, Murray Shanahan and Claudia Clopath. ICML, 2019.

  • Continual Learning Exploiting Structure of Fractal Reservoir Computing by Taisuke Kobayashi and Toshiki Sugino. Artificial Neural Networks and Machine Learning – ICANN 2019: Workshop and Special Sessions, 35–47, 2019. [rnn]

  • Deep Online Learning via Meta-Learning: Continual Adaptation for Model-Based RL by Anusha Nagabandi, Chelsea Finn and Sergey Levine. 7th International Conference on Learning Representations, ICLR 2019, 2019.

  • Leaky Tiling Activations: A Simple Approach to Learning Sparse Representations Online by Yangchen Pan, Kirby Banman and Martha White. arXiv, 2019. [sparsity]

  • Learning to Learn without Forgetting by Maximizing Transfer and Minimizing Interference by Matthew Riemer, Ignacio Cases, Robert Ajemian, Miao Liu, Irina Rish, Yuhai Tu and Gerald Tesauro. ICLR, 2019. [mnist]

  • Experience Replay for Continual Learning by David Rolnick, Arun Ahuja, Jonathan Schwarz, Timothy P Lillicrap and Greg Wayne. NeurIPS, 350–360, 2019.

  • Selective Experience Replay for Lifelong Learning by David Isele and Akansel Cosgun. Thirty-Second AAAI Conference on Artificial Intelligence, 3302–3309, 2018.

  • Continual Reinforcement Learning with Complex Synapses by Christos Kaplanis, Murray Shanahan and Claudia Clopath. ICML, 2018.

  • Unicorn: Continual Learning with a Universal, Off-Policy Agent by Daniel J Mankowitz, Augustin Žídek, André Barreto, Dan Horgan, Matteo Hessel, John Quan, Junhyuk Oh, Hado van Hasselt, David Silver and Tom Schaul. arXiv, 1–17, 2018.

  • Lifelong Inverse Reinforcement Learning by Jorge A Mendez, Shashank Shivkumar and Eric Eaton. NeurIPS, 4502–4513, 2018.

  • Progress & Compress: A Scalable Framework for Continual Learning by Jonathan Schwarz, Wojciech Czarnecki, Jelena Luketina, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu and Raia Hadsell. International Conference on Machine Learning, 4528–4537, 2018. [vision]

  • Overcoming Catastrophic Forgetting in Neural Networks by James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran and Raia Hadsell. PNAS, 3521–3526, 2017. [mnist]

  • Stable Predictive Representations with General Value Functions for Continual Learning by Matthew Schlegel, Adam White and Martha White. Continual Learning and Deep Networks Workshop at the Neural Information Processing System Conference, 2017.

  • Continual Learning through Evolvable Neural Turing Machines by Benno Luders, Mikkel Schlager and Sebastian Risi. NIPS 2016 Workshop on Continual Learning and Deep Networks, 2016.

  • Progressive Neural Networks by Andrei A Rusu, Neil C Rabinowitz, Guillaume Desjardins, Hubert Soyer, James Kirkpatrick, Koray Kavukcuoglu, Razvan Pascanu and Raia Hadsell. arXiv, 2016. [mnist]

  • Lifelong-RL: Lifelong Relaxation Labeling for Separating Entities and Aspects in Opinion Targets by Lei Shu, Bing Liu, Hu Xu and Annice Kim. Proceedings of the Conference on Empirical Methods in Natural Language Processing, 225–235, 2016. [nlp]

  • CHILD: A First Step Towards Continual Learning by Mark B Ring. Machine Learning, 77–104, 1997. [rnn]

Continual Sequential Learning

20 papers

Here we maintain a list of all the papers at the intersection of continual learning and sequential learning.

  • Continual Learning with Gated Incremental Memories for Sequential Data Processing by Andrea Cossu, Antonio Carta and Davide Bacciu. Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN 2020), 2020. [mnist] [rnn]

  • Continual Prototype Evolution: Learning Online from Non-Stationary Data Streams by Matthias De Lange and Tinne Tuytelaars. arXiv, 2020. [cifar] [framework] [mnist] [vision]

  • Organizing Recurrent Network Dynamics by Task-Computation to Enable Continual Learning by Lea Duncker, Laura N Driscoll, Krishna V Shenoy, Maneesh Sahani and David Sussillo. Advances in Neural Information Processing Systems, 2020. [rnn]

  • Continual Learning in Recurrent Neural Networks by Benjamin Ehret, Christian Henning, Maria R Cervera, Alexander Meulemans, Johannes von Oswald and Benjamin F Grewe. arXiv, 2020. [audio] [rnn]

  • Lifelong Machine Learning with Deep Streaming Linear Discriminant Analysis by Tyler L Hayes and Christopher Kanan. CLVision Workshop at CVPR 2020, 1–15, 2020. [core50] [imagenet]

  • Meta-Consolidation for Continual Learning by K J Joseph and Vineeth N Balasubramanian. NeurIPS, 2020. [bayes] [cifar] [imagenet] [mnist]

  • Continual Learning with Bayesian Neural Networks for Non-Stationary Data by Richard Kurle, Botond Cseke, Alexej Klushyn, Patrick van der Smagt and Stephan Günnemann. Eighth International Conference on Learning Representations, 2020. [bayes]

  • Compositional Language Continual Learning by Yuanpeng Li, Liang Zhao, Kenneth Church and Mohamed Elhoseiny. Eighth International Conference on Learning Representations, 2020. [nlp] [rnn]

  • Online Continual Learning on Sequences by German I Parisi and Vincenzo Lomonaco. arXiv, 2020. [framework]

  • Gradient Based Sample Selection for Online Continual Learning by Rahaf Aljundi, Min Lin, Baptiste Goujaud and Yoshua Bengio. Advances in Neural Information Processing Systems 32, 11816–11825, 2019. [cifar] [mnist]

  • Online Continual Learning with Maximal Interfered Retrieval by Rahaf Aljundi, Eugene Belilovsky, Tinne Tuytelaars, Laurent Charlin, Massimo Caccia, Min Lin and Lucas Page-Caccia. Advances in Neural Information Processing Systems 32, 11849–11860, 2019. [cifar] [mnist]

  • Task-Free Continual Learning by Rahaf Aljundi, Klaas Kelchtermans and Tinne Tuytelaars. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. [vision]

  • Efficient Lifelong Learning with A-GEM by Arslan Chaudhry, Marc’Aurelio Ranzato, Marcus Rohrbach and Mohamed Elhoseiny. ICLR, 2019. [cifar] [mnist]

  • Task Agnostic Continual Learning via Meta Learning by Xu He, Jakub Sygnowski, Alexandre Galashov, Andrei A Rusu, Yee Whye Teh and Razvan Pascanu. arXiv:1906.05201 [cs, stat], 2019. [mnist]

  • A Study on Catastrophic Forgetting in Deep LSTM Networks by Monika Schak and Alexander Gepperth. Artificial Neural Networks and Machine Learning – ICANN 2019: Deep Learning, 714–728, 2019. [rnn]

  • Unsupervised Progressive Learning and the STAM Architecture by James Smith, Seth Baer, Cameron Taylor and Constantine Dovrolis. arXiv, 2019. [mnist]

  • Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence by Arslan Chaudhry, Puneet K Dokania, Thalaiyasingam Ajanthan and Philip H.S. Torr. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2018. [cifar] [mnist]

  • Overcoming Catastrophic Interference Using Conceptor-Aided Backpropagation by Xu He and Herbert Jaeger. ICLR, 2018. [mnist]

  • Gradient Episodic Memory for Continual Learning by David Lopez-Paz and Marc’Aurelio Ranzato. NIPS, 2017. [cifar] [mnist]

  • iCaRL: Incremental Classifier and Representation Learning by Sylvestre-Alvise Rebuffi, Alexander Kolesnikov, Georg Sperl and Christoph H Lampert. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. [cifar]

Dissertations and Theses

6 papers

In this section we maintain a list of all the dissertations and theses produced on continual learning and related topics.

  • Continual Learning: Tackling Catastrophic Forgetting in Deep Neural Networks with Replay Processes by Timothée Lesort. arXiv, 2020. [cifar] [framework] [generative] [mnist] [vision]

  • Continual Learning in Neural Networks by Rahaf Aljundi. arXiv, 2019. [cifar] [imagenet] [mnist] [vision]

  • Continual Deep Learning via Progressive Learning by Haytham M. Fayek. RMIT University, 2019. [audio] [cifar] [imagenet] [sparsity]

  • Continual Learning with Deep Architectures by Vincenzo Lomonaco. University of Bologna, 2019. [core50] [framework]

  • Explanation-Based Neural Network Learning: A Lifelong Learning Approach by Sebastian Thrun. Springer, 1996. [framework]

  • Continual Learning in Reinforcement Environments by Mark Ring. University of Texas, 1994. [framework]

Generative Replay Methods

5 papers

In this section we collect all the papers introducing a continual learning strategy based on generative replay methods.

  • Brain-Inspired Replay for Continual Learning with Artificial Neural Networks by Gido M. van de Ven, Hava T. Siegelmann and Andreas S. Tolias. Nature Communications, 2020. [cifar] [framework] [generative] [mnist]

  • Complementary Learning for Overcoming Catastrophic Forgetting Using Experience Replay by Mohammad Rostami, Soheil Kolouri and Praveen K Pilly. arXiv, 2019.

  • Continual Learning of New Sound Classes Using Generative Replay by Zhepei Wang, Cem Subakan, Efthymios Tzinis, Paris Smaragdis and Laurent Charlin. arXiv, 2019. [audio]

  • Generative Replay with Feedback Connections as a General Strategy for Continual Learning by Gido M. van de Ven and Andreas S. Tolias. arXiv, 2018. [framework] [generative] [mnist]

  • Continual Learning with Deep Generative Replay by Hanul Shin, Jung Kwon Lee, Jaehong Kim and Jiwon Kim. Advances in Neural Information Processing Systems 30, 2990–2999, 2017. [mnist]

Hybrid Methods

8 papers

In this section we collect all the papers introducing a continual learning strategy based on hybrid methods that mix different strategies.

  • Rehearsal-Free Continual Learning over Small Non-I.I.D. Batches by Vincenzo Lomonaco, Davide Maltoni and Lorenzo Pellegrini. CVPR Workshop on Continual Learning for Computer Vision, 246–247, 2020. [core50]

  • Linear Mode Connectivity in Multitask and Continual Learning by Seyed Iman Mirzadeh, Mehrdad Farajtabar, Dilan Gorur, Razvan Pascanu and Hassan Ghasemzadeh. arXiv, 2020. [cifar] [experimental] [mnist]

  • Single-Net Continual Learning with Progressive Segmented Training (PST) by Xiaocong Du, Gouranga Charan, Frank Liu and Yu Cao. arXiv, 1629–1636, 2019. [cifar]

  • Continuous Learning in Single-Incremental-Task Scenarios by Davide Maltoni and Vincenzo Lomonaco. Neural Networks, 56–73, 2019. [core50] [framework]

  • Toward Training Recurrent Neural Networks for Lifelong Learning by Shagun Sodhani, Sarath Chandar and Yoshua Bengio. Neural Computation, 1–35, 2019. [rnn]

  • Continual Learning of New Sound Classes Using Generative Replay by Zhepei Wang, Cem Subakan, Efthymios Tzinis, Paris Smaragdis and Laurent Charlin. arXiv, 2019. [audio]

  • Lifelong Learning via Progressive Distillation and Retrospection by Saihui Hou, Xinyu Pan, Chen Change Loy, Zilei Wang and Dahua Lin. ECCV, 2018. [imagenet] [vision]

  • Progress & Compress: A Scalable Framework for Continual Learning by Jonathan Schwarz, Wojciech Czarnecki, Jelena Luketina, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu and Raia Hadsell. International Conference on Machine Learning, 4528–4537, 2018. [vision]

Meta Continual Learning

8 papers

In this section we list all the papers related to meta-continual learning.

  • Learning to Continually Learn by Shawn Beaulieu, Lapo Frati, Thomas Miconi, Joel Lehman, Kenneth O. Stanley, Jeff Clune and Nick Cheney. ECAI, 2020. [vision]

  • Continual Learning with Deep Artificial Neurons by Blake Camp, Jaya Krishna Mandivarapu and Rolando Estrada. arXiv, 2020. [experimental]

  • Meta-Consolidation for Continual Learning by K J Joseph and Vineeth N Balasubramanian. NeurIPS, 2020. [bayes] [cifar] [imagenet] [mnist]

  • Meta Continual Learning via Dynamic Programming by R Krishnan and Prasanna Balaprakash. arXiv, 2020. [omniglot]

  • Online Meta-Learning by Chelsea Finn, Aravind Rajeswaran, Sham Kakade and Sergey Levine. ICML, 2019. [experimental] [mnist]

  • Meta-Learning Representations for Continual Learning by Khurram Javed and Martha White. NeurIPS, 2019. [omniglot]

  • Learning to Learn without Forgetting by Maximizing Transfer and Minimizing Interference by Matthew Riemer, Ignacio Cases, Robert Ajemian, Miao Liu, Irina Rish, Yuhai Tu and Gerald Tesauro. ICLR, 2019. [mnist]

  • Meta Continual Learning by Risto Vuorio, Dong-Yeon Cho, Daejoong Kim and Jiwon Kim. arXiv, 2018. [mnist]

Metrics and Evaluations

6 papers

In this section we list all the papers related to continual learning evaluation protocols and metrics.

  • Online Fast Adaptation and Knowledge Accumulation: A New Approach to Continual Learning by Massimo Caccia, Pau Rodriguez, Oleksiy Ostapenko, Fabrice Normandin, Min Lin, Lucas Caccia, Issam Laradji, Irina Rish, Alexandre Lacoste, David Vazquez and Laurent Charlin. arXiv, 2020. [fashion] [framework] [mnist]

  • Optimal Continual Learning Has Perfect Memory and Is NP-Hard by Jeremias Knoblauch, Hisham Husain and Tom Diethe. ICML, 2020. [theoretical]

  • Regularization Shortcomings for Continual Learning by Timothée Lesort, Andrei Stoian and David Filliat. arXiv, 2020. [fashion] [mnist]

  • Strategies for Improving Single-Head Continual Learning Performance by Alaa El Khatib and Fakhri Karray. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 452–460, 2019. [cifar] [mnist]

  • Towards Robust Evaluations of Continual Learning by Sebastian Farquhar and Yarin Gal. Privacy in Machine Learning and Artificial Intelligence Workshop, ICML, 2019. [fashion] [framework]

  • Three Scenarios for Continual Learning by Gido M van de Ven and Andreas S Tolias. Continual Learning Workshop NeurIPS, 2018. [framework] [mnist]

Neuroscience

5 papers

In this section we maintain a list of all the neuroscience papers that are relevant (and useful) to continual machine learning.

  • Can Sleep Protect Memories from Catastrophic Forgetting? by Oscar C Gonzalez, Yury Sokolov, Giri Krishnan and Maxim Bazhenov. bioRxiv, 569038, 2019.

  • Synaptic Consolidation: An Approach to Long-Term Learning by Claudia Clopath. Cognitive Neurodynamics, 251–257, 2012. [hebbian]

  • The Organization of Behavior: A Neuropsychological Theory by D O Hebb. Lawrence Erlbaum, 2002. [hebbian]

  • Negative Transfer Errors in Sequential Cognitive Skills: Strong-but-Wrong Sequence Application by Dan J. Woltz, Michael K. Gardner and Brian G. Bell. Journal of Experimental Psychology: Learning, Memory, and Cognition, 601–625, 2000.

  • Connectionist Models of Recognition Memory: Constraints Imposed by Learning and Forgetting Functions by R Ratcliff. Psychological Review, 285–308, 1990.

Others

27 papers

In this section we list all the papers that do not appear in any of the above sections.

  • Continual Learning Using Task Conditional Neural Networks by Honglin Li, Payam Barnaghi, Shirin Enshaeifar and Frieder Ganz. arXiv, 2020. [cifar] [mnist]

  • Energy-Based Models for Continual Learning by Shuang Li, Yilun Du, Gido M. van de Ven, Antonio Torralba and Igor Mordatch. arXiv, 2020. [cifar] [experimental] [mnist]

  • Continual Universal Object Detection by Xialei Liu, Hao Yang, Avinash Ravichandran, Rahul Bhotika and Stefano Soatto. arXiv, 2020.

  • Mnemonics Training: Multi-Class Incremental Learning without Forgetting by Yaoyao Liu, An-An Liu, Yuting Su, Bernt Schiele and Qianru Sun. arXiv, 2020. [cifar] [imagenet]

  • Structured Compression and Sharing of Representational Space for Continual Learning by Gobinda Saha, Isha Garg, Aayush Ankit and Kaushik Roy. arXiv, 2020. [cifar] [mnist]

  • Lifelong Graph Learning by Chen Wang, Yuheng Qiu and Sebastian Scherer. arXiv, 2020. [graph]

  • Superposition of Many Models into One by Brian Cheung, Alex Terekhov, Yubei Chen, Pulkit Agrawal and Bruno Olshausen. arXiv, 2019. [cifar] [mnist]

  • Continual Learning in Practice by Tom Diethe, Tom Borchert, Eno Thereska, Borja Balle and Neil Lawrence. arXiv, 2019.

  • Dynamically Constraining Connectionist Networks to Produce Distributed, Orthogonal Representations to Reduce Catastrophic Interference by Robert French. Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society, 335–340, 1994.

  • Continual Learning via Neural Pruning by Siavash Golkar, Michael Kagan and Kyunghyun Cho. arXiv, 2019. [cifar] [mnist] [sparsity]

  • BooVAE: A Scalable Framework for Continual VAE Learning under Boosting Approach by Anna Kuzina, Evgenii Egorov and Evgeny Burnaev. arXiv, 2019. [bayes] [fashion] [mnist]

  • Overcoming Catastrophic Forgetting with Unlabeled Data in the Wild by Kibok Lee, Kimin Lee, Jinwoo Shin and Honglak Lee. Proceedings of the IEEE International Conference on Computer Vision, 312–321, 2019.

  • Continual Learning Using Bayesian Neural Networks by HongLin Li, Payam Barnaghi, Shirin Enshaeifar and Frieder Ganz. arXiv, 2019. [bayes] [mnist]

  • Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition by Martin Mundt, Sagnik Majumder, Iuliia Pliushch, Yong Won Hong and Visvanathan Ramesh. arXiv, 2019. [audio] [bayes] [fashion] [framework] [generative] [mnist] [vision]

  • Continual Rare-Class Recognition with Emerging Novel Subclasses by Hung Nguyen, Xuejian Wang and Leman Akoglu. ECML, 2019. [nlp]

  • Random Path Selection for Incremental Learning by Jathushan Rajasegaran, Munawar Hayat, Salman Khan Fahad, Shahbaz Khan and Ling Shao. NeurIPS, 12669–12679, 2019. [cifar] [imagenet] [mnist]

  • Improving and Understanding Variational Continual Learning by Siddharth Swaroop, Cuong V Nguyen, Thang D Bui and Richard E Turner. Continual Learning Workshop NeurIPS, 1–17, 2019. [bayes] [mnist]

  • Continual Learning via Online Leverage Score Sampling by Dan Teng and Sakyasingha Dasgupta. arXiv, 2019. [cifar] [mnist]