Given a set of problems to solve, the dominant paradigm in the AI community has been to solve each problem or task independently. This is in sharp contrast with the human ability to build on past experience and transfer knowledge to speed up learning a new task. To mimic this capability, the machine learning community has introduced the concept of continual learning, or lifelong learning. The main advantages of this paradigm are that it enables learning with less data, and it often allows models to learn faster and generalize better. From an industrial standpoint, the potential of lifelong learning is tremendous, as it would mean deploying machine learning models faster by bypassing the need to collect labels.
ACTIVITIES
Our chair is structured around several lines of research, including self-supervised learning, continual adaptation, online learning, and stream learning.
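To make the continual-adaptation setting concrete, here is a hypothetical toy sketch (not code from the chair's publications): a single linear model updated online over a stream of two related tasks, so that parameters learned on the first task carry over to the second instead of being retrained from scratch. All function and task names are illustrative assumptions.

```python
import random

def sgd_step(w, x, y, lr=0.1):
    """One perceptron-style SGD step; updates weights only on mistakes."""
    pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1
    if pred != y:
        w = [wi + lr * y * xi for wi, xi in zip(w, x)]
    return w

def make_task(shift, n=200, seed=0):
    """Toy task: 1-D points (plus a bias term), labelled by sign(x0 - shift)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = (rng.uniform(-1, 1) + shift, 1.0)  # second coordinate is the bias
        y = 1 if x[0] - shift > 0 else -1
        data.append((x, y))
    return data

def accuracy(w, data):
    """Fraction of points the current weights classify correctly."""
    correct = sum(
        1 for x, y in data
        if (1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1) == y
    )
    return correct / len(data)

# Stream of two tasks whose decision boundary shifts over time; the same
# weight vector is adapted online, never reset between tasks.
w = [0.0, 0.0]
for task_id, shift in enumerate([0.0, 0.5]):
    stream = make_task(shift, seed=task_id)
    for x, y in stream:  # single pass over the stream, online updates
        w = sgd_step(w, x, y)
```

The point of the sketch is the outer loop: under the continual-learning paradigm the learner faces tasks sequentially and adapts a shared set of parameters, rather than training one independent model per task.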
On the Road to Online Adaptation for Semantic Image Segmentation. R. Volpi, P. De Jorge, D. Larlus, G. Csurka. Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
Concept Generalization in Visual Representation Learning. M. B. Sariyildiz, Y. Kalantidis, D. Larlus, K. Alahari. International Conference on Computer Vision (ICCV), 2021.
Continual Adaptation of Visual Representations via Domain Randomization and Meta-learning. R. Volpi, D. Larlus, G. Rogez. Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
Hard Negative Mixing for Contrastive Learning. Y. Kalantidis, M. B. Sariyildiz, N. Pion, P. Weinzaepfel, D. Larlus. Neural Information Processing Systems (NeurIPS), 2020.
Learning Visual Representations with Caption Annotations. M. B. Sariyildiz, J. Perez, D. Larlus. European Conference on Computer Vision (ECCV), 2020.
Published on January 9, 2024. Updated on January 9, 2024.