An invitation to give a Kolmogorov Lecture acknowledges life-long research contributions to one of the fields initiated or transformed by Kolmogorov.
Prof. Vapnik is the inventor of the Vapnik–Chervonenkis theory of statistical learning and of the support vector machine method.
Prof. Merton won the Nobel Prize in Economic Sciences in 1997, for his pioneering contributions to continuous-time finance and the Black–Scholes formula.
Prof. Sinai is well known for his work on dynamical systems, which has provided the groundwork for advances in the physical sciences.
Prof. Rissanen was the inventor of the minimum description length principle and practical approaches to arithmetic coding for lossless data compression.
Prof. Martin-Löf is internationally renowned for his work on the foundations of probability, statistics, and mathematical logic, especially type theory, which has influenced computer science.
Prof. Levin is well known for his work on randomness in computing, algorithmic complexity and intractability, and the foundations of mathematics and computer science.
Prof. Solomonoff was the inventor of algorithmic probability and the General Theory of Inductive Inference, and a founder of algorithmic information theory.
Most of our past winners' talks have been published in the prestigious Computer Journal.
I will discuss two main topics in this lecture. First, the Universal Distribution and some of its properties: its accuracy, its incomputability, and its subjectivity. Second, I will describe how to use this distribution to create very intelligent machines.
Many years ago, in 1960, I discovered what we now call the Universal Probability Distribution: the probability distribution on all possible output strings of a universal computer with random input. It seemed to solve all kinds of prediction problems and to resolve serious difficulties in the foundations of Bayesian statistics.
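In the standard modern notation (supplied here for context, not part of the original abstract), the universal distribution assigns to a finite binary string x the probability that a universal monotone machine U, when fed an infinite sequence of uniformly random bits, produces output beginning with x:

\[
M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)},
\]

where the sum ranges over the minimal programs p whose output starts with x and \ell(p) is the length of p in bits. Prediction then uses the conditional probability M(xy)/M(x) that a sequence beginning with x continues with y.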
Physical computing media are asymmetric. Their symmetry is broken by irregularities, physical boundaries, external connections, and so on.
Such peculiarities, however, are highly variable and extraneous to the fundamental nature of the media. Thus, they are pruned from theoretical models, such as cellular automata, and reliance on them is frowned upon in programming practice.
Yet computation, like many other highly organized activities, is incompatible with perfect symmetry. Some standard mechanisms must ensure the breaking of the symmetry inherent in idealized computing models.
In the 2005 Lecture, Professor Martin-Löf gave an analysis of Zermelo's axiom of choice in constructive type theory, arguing that the problem with the theory ‘is not the existence of the choice function but the extensionality of it, which is not visible in an extensional framework where all functions are by definition extensional’.
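The point of the quotation can be made precise. In Martin-Löf type theory the intensional form of the axiom of choice is provable (this standard formulation is given here for context and is not quoted from the lecture):

\[
\Bigl(\prod_{x : A} \sum_{y : B} C(x, y)\Bigr) \;\rightarrow\; \sum_{f : A \to B} \prod_{x : A} C(x, f(x)).
\]

What fails constructively is the stronger, extensional form, which additionally demands that the choice function f map equal elements of A to equal values; in an extensional framework this demand is invisible, since there all functions are extensional by definition.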
Cantor conceived set theory in a sequence of six papers published in the Mathematische Annalen during the five-year period 1879–1884. In the fifth of these papers, published in 1883, he stated as a law of thought (Denkgesetz) that every set can be well-ordered or, more precisely, that it is always possible to bring any well-defined set into the form of a well-ordered set. Now, to call it a law of thought was implicitly to claim self-evidence for it, but he must have given up that claim at some point, because in the 1890s he made an unsuccessful attempt at demonstrating the well-ordering principle.
The main objective in this paper is to quantify and measure intuitive notions like complexity, learnable information, and noise.
All of them can be measured in terms of the shortest code length, so that we may consider the modeling task finished when we can say: 'When using this model, the data have x bits of complexity, y bits of information, leaving z bits as unexplained noise.'
There are two ways to formalize the three notions.
The first is with the help of Kolmogorov's structure function in algorithmic complexity theory, and the second is based on a suitable generalization of Shannon's coding theory.
Although the three notions formalized in the algorithmic theory are non-computable, that formalization makes the central ideas easy to present. The second formalization, the main topic of this paper, is more difficult, but it is needed to avoid the non-computability problem.
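For reference, the first formalization rests on Kolmogorov's structure function, which in a standard modern statement (supplied for context, not taken from the abstract) is, for a data string d and a complexity budget \alpha,

\[
h_d(\alpha) \;=\; \min_{S} \bigl\{ \log_2 |S| \;:\; d \in S,\; K(S) \le \alpha \bigr\},
\]

where S ranges over finite sets containing d and K(S) denotes the Kolmogorov complexity of S. The set S plays the role of the model: the K(S) bits describing S are the learnable information, while the \log_2 |S| bits needed to single out d within S are the unexplained noise, matching the x/y/z decomposition quoted above.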
Andrei Kolmogorov (1903–1987) made fundamental contributions to probability theory, statistics, analysis, mathematical logic, the theory of algorithms, information theory, and the theory of dynamical systems. His discoveries completely changed many of these areas. He was a brilliant teacher, and many of his students and followers have continued his groundbreaking work.
In 1998, Royal Holloway, University of London, established the Computer Learning Research Centre under the directorship of Alexander Gammerman. Professor Gammerman and the Centre's Deputy Director, Vladimir Vovk, were joined by several outstanding researchers in the field of Kolmogorov Complexity.
To celebrate the centenary of Kolmogorov's birth (25 April 1903), the Computer Learning Research Centre proposed the establishment of the University of London Kolmogorov Lecture and Medal. This decision was approved in 2002, and the Lectures have been held since then at Royal Holloway, University of London.
The Organising and Programme Board consists of members from several Colleges within the University of London. The Advisory Board consists of previous Kolmogorov speakers, who advise on nominations for future speakers.
For enquiries or potential nominations, please contact us.
Lecture: kl@rhul.ac.uk
General enquiries: clrc@cs.rhul.ac.uk
Office: +44 (0)1784 443432
Fax: +44 (0)1784 439786
Centre for Reliable Machine Learning
Royal Holloway, University of London
Egham, Surrey TW20 0EX
United Kingdom