Learning and the cooperative computational universe
| Authors | |
|---|---|
| Publication date | 2008 |
| Host editors | |
| Book title | Philosophy of information |
| ISBN | |
| Series | Handbook of the philosophy of science, 8 |
| Pages (from-to) | 133-167 |
| Number of pages | 807 |
| Publisher | Amsterdam: North-Holland |
| Organisations | |
| Abstract |
In the summer of 1956, a number of scientists gathered at Dartmouth College in Hanover, New Hampshire. Their goal was to study human intelligence with the help of computers. Their central hypothesis was "that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." At that conference, attended by, among others, John McCarthy, Claude Shannon and Marvin Minsky, the new discipline of Artificial Intelligence was born. It is striking that 'learning' was considered an important aspect of human intelligence from the start. A better understanding of the phenomenon of learning was high on the agenda of the young science.
Now, fifty years later, the study of learning is one of the success stories of AI. There is a multitude of learning techniques for the computer. Data mining techniques are being used for marketing, stock management, production optimization and fraud detection in the commercial domain. Biologically inspired learning models such as neural networks and genetic algorithms are being used to simulate human cognition and evolution. In disciplines like computer vision and computational linguistics, machine learning is at the center of interest ([Kearns and Vazirani, 1994], [Mitchell, 1997], [Adriaans and Zantinge, 1997], [Cornuéjols and Miclet, 2003]). But researchers have little reason to sit back and rest, because a whole list of questions is still begging for answers. One of the biggest embarrassments is that we still do not know what learning is, exactly. The toolbox of a machine learner looks like a haphazardly collected bunch of screwdrivers, hammers and chisels of dubious origin. For some jobs they work, but we do not understand why; for others they do not work, and we do not understand why either. One thing is certain: if we understand learning as data compression, then there will never be a general theory that explains what learning is exactly. |
| Document type | Chapter |
| Permalink to this page | |
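The abstract's closing idea, viewing learning as data compression, can be given a minimal illustration. The sketch below is not from the chapter itself; it uses Python's standard `zlib` compressor as a stand-in for a learner, showing that data containing a learnable regularity compresses far better than patternless data of the same length:

```python
import random
import zlib

# Hypothetical illustration: a string with an obvious regularity ("ab"
# repeated) versus a string of random bytes, both 10,000 bytes long.
random.seed(0)
patterned = b"ab" * 5000
patternless = bytes(random.getrandbits(8) for _ in range(10000))

# Compression ratio = compressed size / original size. A low ratio means
# the compressor has "captured" a regularity in the data.
ratio_patterned = len(zlib.compress(patterned)) / len(patterned)
ratio_patternless = len(zlib.compress(patternless)) / len(patternless)

print(f"patterned:   {ratio_patterned:.4f}")
print(f"patternless: {ratio_patternless:.4f}")
```

The patterned string shrinks to a tiny fraction of its size, while the random string barely compresses at all; on this intuition, the degree of compression serves as a rough measure of how much structure there is to learn.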