NATS 1700 6.0 COMPUTERS, INFORMATION AND SOCIETY
Lecture 13: Expert Systems, Neural Nets, Alife, etc.
Introduction
- We will now explore a few representative areas of AI: 'expert systems,' 'neural nets' and
'alife,' touching briefly on a relatively new, rapidly growing technology which, for simplicity, we will call
'genetic programming.' A very good site to visit is
Approaches: Methods Used to Create Intelligence, where
you can find an introduction to many relevant topics, such as neuron physiology, Boolean logic, expert systems,
neural nets, artificial life, genetic programming, etc. Another important general reference is Artificial Intelligence, Expert Systems, Neural Networks and Knowledge Based Systems at
CompInfo.
- Read a good introduction to Expert Systems,
and a brief History of Expert Systems.
- Matthew Caryl's Neural Nets is
a comprehensive, but readable, account of the history, background and basic principles of this approach. Kevin Gurney presents
A Brief History of Neural Nets, with
useful references to the early work by Wiener (Cybernetics), McCulloch and Pitts (Artificial Neurons), and others. Another resource is
Brief History of Neural Networks, which includes a useful timeline.
Finally, you may want to read the ai-faq/neural-nets.
- For a general introduction, brief history and links concerning alife, check the comp.ai.alife Frequently Asked Questions.
- Here is a good Introduction to Genetic Programming.
The details are unavoidably a bit technical, but the general idea should be fairly easy to grasp.
Topics
- The initial tasks assigned to computers were essentially computational: number crunching. Artificial intelligence
and networking, although seriously considered in theoretical work in the early fifties, did not become established
until the late fifties and the late sixties, respectively.
- The initial efforts to develop general methods for solving broad classes of problems met with a predictable
failure: the more problems a program could tackle, the more inadequately it performed on any particular problem. It was
therefore necessary to go back to the drawing board: how do human beings solve problems? If we study how an 'expert'
approaches a problem, we see that much of the expertise (though not all, by any means) consists in the ability to
think logically through an often vast, but rather organized tree of alternatives. This observation, and the introduction
in 1959 of LISP, a computer language designed by John McCarthy to process lists of symbolic data,
were the starting point of artificial expert systems.
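To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of rule-based reasoning an expert system performs. The medical rules and facts are invented for illustration and are not taken from any real system:

    # A toy forward-chaining rule engine: known facts are matched against
    # if-then rules, and each rule that 'fires' adds its conclusion to the
    # facts, until no rule can add anything new.
    rules = [
        # (conditions that must all be known, conclusion to add)
        ({"has_fever", "has_rash"}, "suspect_measles"),
        ({"suspect_measles", "not_vaccinated"}, "recommend_lab_test"),
        ({"has_fever", "stiff_neck"}, "suspect_meningitis"),
    ]

    def forward_chain(facts, rules):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)  # the rule 'fires'
                    changed = True
        return facts

    print(forward_chain({"has_fever", "has_rash", "not_vaccinated"}, rules))
    # prints a set that includes 'suspect_measles' and 'recommend_lab_test'

Real expert systems held hundreds or thousands of such rules, together with machinery for weighing uncertain or conflicting conclusions, but the basic walk through a tree of alternatives is the same.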
- Examples of early expert systems were Dendral, which specialized in determining the molecular structure
of organic molecules when given some of their physico-chemical properties; Prospector, which allowed
the user to establish, with certain degrees of probability, the nature and location of ore deposits when supplied
with the geology of a given site; and Internist, which helped physicians in their diagnostic work.
The list grew rapidly, extending also to business and industrial manufacturing.
- A very clear overview
of neural nets by Jochen Fröhlich of the Fachhochschule in Regensburg not only covers the
entire area, from the perceptron to self-organizing algorithms, but also includes good demonstrations
written in Java. You may also want to read at least the initial section of a beautiful presentation on
Neural
Networks and the Computational Brain by Stephen Jones.
- As Fröhlich points out, "for every computer program someone must have worked out every single possibility
so that the computer will be able to cope with all situation. This can be done for a word processor but trying to get a
computer to recognize speech is very difficult because of all the possible variations." The neural networks
approach begins by giving up any attempt to make an inventory of all the possible variations. Once again the
inspiration comes, at least in part, from observing how we tackle and solve problems. Even if we are not experts,
we can learn. A second consideration comes from our attempt to decipher the physiological workings of our brain.
As early as 1943, Warren McCulloch and Walter Pitts had realized that digital computers could be used to model the
main features of biological neurons, and in 1948 they argued that any behavior that can be described, within rather
broad limits, by language can be reproduced by an appropriately constructed network of their neural models. It was
only in 1958, however, that Frank Rosenblatt built the first neural network, the perceptron. Here
is Stephen Jones' concise but clear description: "The Perceptron consists in a net of sensor units feeding to
a set of association units which feed one or more response units. If the sensor units feed enough 'yes' votes to
the association unit to which they are mapped to exceed the threshold of that association unit then it will be
excited or 'fire'. When enough association units fire so as to exceed the threshold of the response unit to which
they are mapped, then the response unit will fire. If the result is correct then the thresholds of the response units
will be left as they are, but if the result is incorrect then the thresholds of the response units will be modified.
This process is iterated enough times for the response unit to give a correct response to the input of the whole
Perceptron system. Thus the Perceptron is said to be 'trainable'. The output of the network is affected by altering
the weighting or the value contributed by each connection."
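Jones' description translates almost line for line into code. Here is a minimal sketch, in Python rather than the original hardware, of a single trainable unit learning the logical AND function; the training data, learning rate and number of passes are invented for illustration:

    # A toy perceptron learning the logical AND function. As in the training
    # rule Jones describes, the weights and threshold are modified only when
    # the unit gives the wrong response.
    def train_perceptron(samples, epochs=20, rate=0.1):
        w = [0.0, 0.0]   # the weight of each input connection
        threshold = 0.0  # the unit 'fires' when the weighted sum exceeds this
        for _ in range(epochs):
            for (x1, x2), target in samples:
                fired = 1 if w[0] * x1 + w[1] * x2 > threshold else 0
                error = target - fired
                if error:  # wrong response: adjust weights and threshold
                    w[0] += rate * error * x1
                    w[1] += rate * error * x2
                    threshold -= rate * error
        return w, threshold

    and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    print(train_perceptron(and_gate))
    # converges to weights and a threshold that fire only for input (1, 1)

The essential point is the middle branch: when the answer is correct nothing changes, and when it is wrong the weights and threshold are nudged toward the right answer. Iterated over the examples, this is exactly what makes the perceptron 'trainable.'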
- Since then, AI research has led to more and more effective neural nets, introduced a variety of novel learning
techniques, and found important applications almost everywhere, from predicting the stock market to diagnosing
diseases, from finding faults in the space shuttle to sorting mail.
A Cellular Automaton
- What's It All About, Alife? is Robert Crawford's well-written introduction to artificial life
or a-life. The article was published in the April 1996 issue of Technology Review. Go to the Library
to read it. A-life is the study of artificial systems that exhibit some of the properties of populations of living
systems: self-organization, adaptation, evolution, metabolism, etc. In this sense, living systems are also examples of a-life.
This is important, because the study of a-life may also shed light on life itself.
- We know that natural selection, operating over long periods of time, has gradually allowed the evolution of
a huge number of species. What is even more important is that, in any given period, most of the existent species
appear to be well adapted to the particular ecological niche they occupy. Being somewhat anthropomorphic, we could
say that each species has solved the problem of finding which changes in its genome would allow it to function
optimally in its environment. If we describe natural selection in these terms, we may be tempted to ask whether
we might not be able to imitate natural selection in solving our own problems. This is the basic idea of another
rapidly growing area of AI, variously called a-life, genetic programming, evolutionary programming, etc. In Crawford's
words, this new technology, "dubbed artificial life, or alife for short--introduces populations of computer-virus-like
programs into computers, where they interact and eventually produce a kind of ecosystem, one available for
'experimentation' in a way that a natural ecosystem cannot be."
- In Artificial Societies
Peter Tyson states: "The road to such artificial societies was laid down in 1953, when mathematician John von Neumann
invented self-replicating automata. These cellular automata, as they are also known, consist of a lattice of cells
with specific values that change according to fixed rules for computing a cell's new value based on its current value
and the values of its immediate neighbors. Von Neumann found that, when left to their own devices, cellular automata
naturally formed patterns, reproduced, even 'died.'" One famous example of cellular automata is John Conway's
The Game of Life.
Another interesting site, Conway's Game of Life,
offers "a pop-up Java applet that displays a collection of the greatest patterns ever created
in Conway's Game of Life." Another Java applet is John Conway's Game of Life.
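The rule Tyson describes, where each cell's new value depends only on its current value and the values of its immediate neighbours, is short enough to write out in full. Here is a minimal Python sketch of Conway's Game of Life on a small wrap-around grid; the grid size and starting pattern are chosen only for illustration:

    # Conway's Game of Life on a small wrap-around grid.
    def step(grid):
        rows, cols = len(grid), len(grid[0])
        new = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                # count the live cells among the eight neighbours
                n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0))
                # the fixed rule: a live cell survives with 2 or 3 live
                # neighbours; a dead cell comes alive with exactly 3
                new[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
        return new

    grid = [[0] * 8 for _ in range(8)]
    for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
        grid[r][c] = 1                 # seed the grid with a 'glider'
    for _ in range(4):                 # four steps move the glider one
        grid = step(grid)              # cell diagonally across the grid
    for row in grid:
        print("".join("#" if v else "." for v in row))

Notice that nothing in the rule mentions gliders: the moving pattern is an emergent property, which is precisely what makes cellular automata interesting to a-life researchers.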
- An important area of research related to cellular automata, with exciting applications in fields such as
robotics, is that of Microworlds.
"A Microworld is a term coined at the MIT Media Lab Learning and Common Sense Group. It means, literally, a tiny
world inside which a student can explore alternatives, test hypotheses, and discover facts that are true about that world.
It differs from a simulation in that the student is encouraged to think about it as a 'real' world, and not simply as
a simulation of another world (for example, the one in which we physically move about in)." An important
contribution to Microworlds was Mitchel Resnick's StarLogo,
"a programmable modeling environment for exploring the workings of decentralized systems--systems that are organized
without an organizer, coordinated without a coordinator." There are versions of StarLogo for just about every
computer platform, including PCs, Macs, etc. The program is free and well worth downloading. Resnick has also published
a marvelous book where the basic philosophy and examples of StarLogo programs are discussed: Mitchel Resnick,
Turtles, Termites, and Traffic Jams: Explorations in Massively Parallel Microworlds, The MIT Press,
1994, 1999.
Check also The Golem Project, or Genetically Organized Lifelike Electro Mechanics,
at Brandeis University. Here is how these scientists describe it: "we conducted a set of experiments in which simple
electro-mechanical systems evolved from scratch to yield physical locomoting machines. Like biological lifeforms whose
structure and function exploit the behaviors afforded by their own chemical and mechanical medium, our evolved creatures
take advantage of the nature of their own medium - thermoplastic, motors, and artificial neurons. We thus achieve autonomy
of design and construction using evolution in a limited universe physical simulation, coupled to off-the-shelf rapid
manufacturing technology. This is the first time robots have been robotically designed and robotically fabricated."
- Genetic algorithms are computer programs which, in the world of programs, simulate phenomena, such
as reproduction, crossover and mutation, which are at work in the world of DNA. Just like their biological counterparts,
genetic algorithms create new combinations of programs, some of which are better suited than others
to perform their intended functions. For a taste of this type of program, play with Genetic Java, a
sample genetic algorithm applet by Dan Loughlin.
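If you prefer to read code, here is a minimal sketch in Python of the same idea, with reproduction, crossover and mutation acting on bit strings rather than whole programs; the target string and all parameters are invented for illustration:

    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # an arbitrary goal bit string

    def fitness(ind):                  # how many bits match the target?
        return sum(a == b for a, b in zip(ind, TARGET))

    def crossover(mom, dad):           # single-point crossover
        cut = random.randrange(1, len(mom))
        return mom[:cut] + dad[cut:]

    def mutate(ind, rate=0.05):        # occasionally flip a bit
        return [1 - bit if random.random() < rate else bit for bit in ind]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break                      # a perfect match has evolved
        parents = population[:10]      # only the fittest ten reproduce
        population = [mutate(crossover(random.choice(parents),
                                       random.choice(parents)))
                      for _ in range(30)]
    print(generation, max(population, key=fitness))

No individual program 'knows' the target; matching bits simply make their carriers more likely to reproduce, and after a few dozen generations the population converges on the goal.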
- More generally, genetic programming is a technique inspired by the concepts of Darwinian evolution.
A population of 'individuals,' each representing a potential solution to the problem to be investigated, undergoes a
sort of biological evolution. The solution offered by each individual is assigned a certain numerical value (fitness)
which gives a quantitative idea of how good that solution is. New individuals are generated by procedures analogous
to biological reproduction, with parents chosen from the existing population not deterministically, but with a
probability proportional to their fitness. The new individuals gradually replace less fit individuals, and the fitness
of the population as a whole improves with each new generation. These techniques have also been applied to a wide
variety of problems, including the synthesis of new pharmaceutical drugs.
For a detailed example of the application of genetic programming to concrete problems, see
R. Kicinger and T. Arciszewski's article Breeding Better Buildings, which appeared in the November-December 2007 issue
of American Scientist. Here is the abstract:
"Engineers tend to be conservative in their designs of buildings, to err on the side of safety. However,
some modern structures may benefit from a more creative approach. Taking inspiration from genetics, Kicinger
and his colleagues have created software that 'breeds' basic building structures. Pieces such as beams,
columns and bracings are 'genes' and how they are combined becomes the 'genome' of the building. The best
results are recombined to produce subsequent generations that improve on their parents. The authors'
programs have automatically produced some designs that mimic known, strong building structures, and they
hope the programs will soon produce some creative designs that improve on human ideas."
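The distinctive ingredient of genetic programming, as described above, is the way parents are chosen: not deterministically, but with a probability proportional to their fitness. Here is a minimal Python sketch of that 'roulette wheel' selection, which could replace the 'fittest ten' shortcut used in the earlier bit-string sketch:

    # Fitness-proportionate ('roulette wheel') selection: fitter individuals
    # are more likely, but never guaranteed, to be chosen as parents.
    import random

    def select_parent(population, fitness):
        scores = [fitness(ind) for ind in population]
        spin = random.uniform(0, sum(scores))  # spin the wheel
        running = 0.0
        for ind, score in zip(population, scores):
            running += score
            if running >= spin:
                return ind
        return population[-1]  # guard against floating-point rounding

Because even weak individuals occasionally reproduce, the population keeps enough variety to escape dead ends, which is one reason evolutionary methods can turn up the 'creative designs' the authors hope for. (Python's standard library provides the same behaviour directly: random.choices(population, weights=scores).)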
Questions and Exercises
- Visit IBM's website devoted to Deep Blue,
the machine which defeated Garry Kasparov, the world's chess champion. Read also Chess Is Too Easy, where
Selmer Bringsjord claims: "Deep Blue's victory over Gary Kasparov may have been entertaining, but contrary to popular
belief, it tells us nothing about the future of artificial intelligence. What's needed is a more creative test of
mind versus machine". What do you think?
- Visit also the Man vs Machine Championship (21 to 27 June 2005, London),
where you can even play against Hydra. "With the processing power equivalent to more than 200 standard PCs, the HYDRA computer
is the world's most powerful chess computer according to IPCCC officials. Housed in a secure server room in Abu Dhabi, HYDRA is a
64-way cluster computer - 64 computers connected and operating as if they are a single machine. Each computer has an Intel Xeon 3.06 GHz processor."
Picture Credit: Discrete Dynamics Lab
Last Modification Date: 07 July 2008