Special colloquium for LORIA's 40th anniversary: Deep learning and the future of AI - Yann Le Cun

Duration: 00:01:04

Over the last few years, rapid progress in AI has enabled our smartphones, social networks, and search engines to understand our voice, recognize our faces, and identify objects in our photos with very good accuracy. These dramatic improvements are due in large part to the emergence of a new class of machine learning methods known as Deep Learning.

A particular type of deep learning system, the convolutional network (ConvNet), has been especially successful for image and speech recognition. ConvNets are a kind of artificial neural network whose architecture is somewhat inspired by that of the visual cortex. What distinguishes ConvNets and other deep learning systems from previous approaches is their ability to learn the entire perception process from end to end: deep learning systems automatically learn appropriate representations of the perceptual world as part of the learning process. A new type of deep learning architecture, memory-augmented networks, goes beyond perception by enabling reasoning, attention, and factual memory.
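
To make the idea concrete, here is a minimal sketch of a ConvNet classifier written in PyTorch. It is not from the talk; the TinyConvNet name, the layer sizes, and the 28x28 grayscale input are illustrative assumptions.

    # Minimal ConvNet sketch (illustrative; not from the talk).
    # Assumes 28x28 grayscale inputs and 10 output classes.
    import torch
    import torch.nn as nn

    class TinyConvNet(nn.Module):
        def __init__(self, num_classes: int = 10):
            super().__init__()
            # Convolution + pooling layers learn local feature detectors,
            # loosely analogous to receptive fields in the visual cortex.
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),   # 28x28 -> 14x14
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),   # 14x14 -> 7x7
            )
            # A final linear layer maps the learned representation to class scores.
            self.classifier = nn.Linear(32 * 7 * 7, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.features(x)
            return self.classifier(x.flatten(1))

    # Example: a batch of 8 dummy images yields an 8x10 tensor of class scores.
    scores = TinyConvNet()(torch.randn(8, 1, 28, 28))

The point of the sketch is that the convolution/pooling stack and the final classifier are trained jointly by backpropagation, which is what learning the perception process "end to end" means above.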

Deep Learning systems are being deployed in an increasingly large number of applications, such as photo and video collection management, content filtering, medical image analysis, face recognition, self-driving cars, robot perception and control, speech recognition, natural language understanding, and language translation.

But we are still quite far from emulating the learning abilities of animals or humans. A key element we are missing is predictive (or unsupervised) learning: the ability of a machine to model the environment, predict possible futures, and understand how the world works by observing it and acting in it. This is a very active topic of research at the moment.
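
As a rough illustration of the predictive-learning idea (again not from the talk), one can train a network to predict the next observation from the current one, so the future itself serves as the training signal. The shapes, the two-layer predictor, and the random dummy sequence below are assumptions made for the sketch.

    # Predictive/unsupervised learning sketch: predict x_{t+1} from x_t.
    # Data here is a random dummy sequence, used only to show the training loop.
    import torch
    import torch.nn as nn

    predictor = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
    optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-3)

    sequence = torch.randn(100, 64)  # 100 observations, 64 dimensions each

    for step in range(200):
        x_t, x_next = sequence[:-1], sequence[1:]           # inputs and targets
        loss = nn.functional.mse_loss(predictor(x_t), x_next)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

No labels are involved: the "supervision" comes entirely from observing how the world evolves, which is the sense in which such learning is called unsupervised or predictive.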

Keywords: iaem

 Information

  • Added by: Jean-Michel Antoine
  • Contributor(s):
    • Olivia Brenner (producer)
  • Updated on: 12 April 2016 00:00
  • Type: Colloquia and Conferences
  • Main language: French
  • License: Creative Commons License