Leslie Valiant - Biological Evolution as a Form of Learning

Thursday, July 5, 2012, 12:00
Organized by:
The LIG "Keynotes" team
Speaker:
Leslie Valiant - Harvard University

Leslie Valiant was educated at King’s College, Cambridge; Imperial College, London; and at Warwick University, where he received his Ph.D. in computer science in 1974. He is currently T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics in the School of Engineering and Applied Sciences at Harvard University, where he has taught since 1982. Before coming to Harvard he had taught at Carnegie Mellon University, Leeds University, and the University of Edinburgh.

His work has ranged over several areas of theoretical computer science, particularly complexity theory, computational learning, and parallel computation. He also has interests in computational neuroscience, evolution, and artificial intelligence. Among his many contributions to complexity theory, he introduced the notion of #P-completeness to explain why enumeration and reliability problems are intractable, as well as the concept of holographic algorithms. Among his contributions to computational learning, the most famous is certainly the "probably approximately correct" (PAC) model, which has helped the field of machine learning theory grow.
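Informally, the PAC criterion can be stated as follows (the notation here is the standard textbook one, not taken from this page): a class of functions is PAC-learnable if there is an algorithm that, given examples drawn from an arbitrary unknown distribution D and labelled by an unknown target function in the class, runs in time polynomial in 1/ε, 1/δ, and the size of the target, and outputs a hypothesis h satisfying

  Pr[ err_D(h) ≤ ε ] ≥ 1 − δ,

that is, with high probability the learned hypothesis is approximately correct.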

He received the Nevanlinna Prize at the International Congress of Mathematicians in 1986, the Knuth Prize in 1997, the European Association for Theoretical Computer Science (EATCS) Award in 2008, and the 2010 A. M. Turing Award. He is a Fellow of the Royal Society (London) and a member of the National Academy of Sciences (USA).

- For LIG members: view the video with your LDAP IMAG account on Mi2S

"Réalisation technique : Djamel Hadji | Tous droits réservés"

Living organisms function according to protein circuits. Darwin’s theory of evolution suggests that these circuits have evolved through variation guided by natural selection. However, the question of which circuits can so evolve in realistic population sizes and within realistic numbers of generations has remained essentially unaddressed.

We suggest that computational learning theory offers the framework for investigating this question, of how circuits can come into being adaptively from experience, without a designer. We formulate evolution as a form of learning from examples. The targets of the learning process are the functions of highest fitness. The examples are the experiences. The learning process is constrained so that the feedback from the experiences is Darwinian. We formulate a notion of evolvability that distinguishes function classes that are evolvable with polynomially bounded resources from those that are not. The dilemma is that if the function class, say for the expression levels of proteins in terms of each other, is too restrictive, then it will not support biology, while if it is too expressive then no evolution algorithm will exist to navigate it. We shall review current work in this area.
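As a concrete illustration of the framing above, the following is a minimal sketch of evolution as constrained learning; the function names, the selection rule, and all numerical parameters are illustrative assumptions rather than the speaker's definitions. The point it encodes is the Darwinian restriction on feedback: a candidate circuit is never told the target's value on any single experience, only an aggregate fitness score, and the set of variants examined in each generation must stay polynomially bounded.

# Illustrative sketch (assumed names and parameters, not the talk's formal model):
# evolution viewed as learning in which the only feedback about a candidate
# representation is its aggregate empirical fitness over sampled experiences.

def empirical_fitness(candidate, ideal, experiences):
    # Aggregate agreement with the highest-fitness target function; the loop
    # below only ever sees this single number, never ideal(x) for individual x.
    return sum(1 if candidate(x) == ideal(x) else -1 for x in experiences) / len(experiences)

def evolve(initial, neighbourhood, ideal, sample_experiences,
           generations=1000, sample_size=200, tolerance=0.05):
    # neighbourhood(r) lists the variants reachable from r by one mutation;
    # for evolvability this set must be of polynomially bounded size.
    current = initial
    for _ in range(generations):
        experiences = sample_experiences(sample_size)
        base = empirical_fitness(current, ideal, experiences)
        best, best_fitness = current, base
        for variant in neighbourhood(current):
            fitness = empirical_fitness(variant, ideal, experiences)
            # A variant counts as beneficial only if it beats the parent by the
            # tolerance; otherwise the current representation is retained.
            if fitness > base + tolerance and fitness > best_fitness:
                best, best_fitness = variant, fitness
        current = best
    return current

For a usage example one could take, say, monotone conjunctions over n Boolean variables as the representations, single-literal additions and deletions as the neighbourhood, and a fixed sampler of experiences; whether such a loop reaches near-optimal fitness within polynomially many generations and samples is precisely the evolvability question raised above.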