Following Malcolm Spector and John Kitsuse's definition of social problems as claims-making activities, this paper examines claims-making in artificial intelligence research from a social problems constructionist perspective, which views social problems as constructed through the claims-making activities of interested groups. "The activity of making claims, complaints, or demands for change is the core of what we call social problems activities. Definitions of conditions as social problems are constructed by members of a society who attempt to call attention to situations they find repugnant and who try to mobilize the institutions to do something about them."
Artificial intelligence (AI) research, the search for mechanical thinking ability, has been the subject of scientific and philosophical debate, and of claims-making within the academic/scientific community, since the inception of the idea. Thomas S. Kuhn's description of scientific revolutions rooted in scientific paradigms (world-views) provides a structure for examining the history of claims-making in AI research. Kuhn claimed that science is not cumulative, as it is presented in history texts, but is structured by the group viewpoints of scientists. He labeled these group viewpoints paradigms. A scientific paradigm structures the kinds of questions a scientist can ask when doing "normal science," and for a new paradigm to replace an older one requires a "Gestalt" experience on the part of the scientists. As a social problem, the AI debate has led to claims on all sides of the issue far exceeding the accomplishments of the claimants, and to the channeling of public research funds in directions based on these claims. The primary reason for this claims-making has been the specific paradigm, or world view, that the AI researchers have held.
This paper is divided into three sections describing the claims-making activities of AI researchers. The first outlines the history of AI research and the emergence of two paradigms of AI research. The second describes the results of the conflict between the two research paradigms: the philosophically based AI research and the brain-modeling-based AI research. The third summarizes and describes current claims-making activities in this field of research.
I. FROM COMPUTERS
TO MECHANICAL MINDS?
Part of the problem, for the sociologist or layman, in identifying the claims-making activities in the artificial intelligence research field is the proliferation of jargon that normally accompanies any specialized endeavor. Many terms are used within the AI research field to describe the same two basic paradigmatic research bases; some examples are given in the following paragraphs. These two research paradigms developed at approximately the same time, but with very different research focuses.
One scientific viewpoint was grounded in physical systems. Its adherents believed thinking was a product of a complicated chemical machine, the human brain. This neuroscience (brain simulation) based paradigm became known as the "bottom-up" approach, and scientists working within it viewed computer hardware as the place to focus their research. Jargon for mechanical processes designed to simulate brain function includes such terms as "cybernetics," "perceptrons," "neural modeling," "brain modeling," "neural nets," "multi-layer machines," "materialists," and "connectionists."
The other model was grounded in language as a symbolic representation of the world. These scientists believed thought processes could be reduced to basic symbols, and that these basic symbols could then be programmed into a computer to teach it to think. They drew their view from certain philosophical explorations, such as Wittgenstein's Tractatus. Bertrand Russell, Wittgenstein, Descartes, and Goedel were philosophers engaging in philosophical phenomenology, the search for atomic facts and basic objects. Unfortunately, these AI researchers seemed unaware of Wittgenstein's later recantation of his earlier writings in Philosophical Investigations, published in 1953, and of Goedel's mathematical "Incompleteness Theorem." "Goedel's theorem states that in any consistent system which is strong enough to produce simple arithmetic, there are formulae which cannot be proved in the system but which we can see to be true." Because of this channeling toward philosophically based research grounded in the idea of basic symbols, little progress was made in AI research in the early 1980s.
This approach to AI research became known as the "top-down" approach; it worked with computer programs (software) in an attempt to duplicate human thought processes. Terms used for this type of research include "complex information processing," "dualists," "symbolic manipulation," and "symbolic systems."

One source of the neuroscience-based AI paradigm's beginnings can be identified in the book Cybernetics, written by Massachusetts Institute of Technology (M.I.T.) professor Norbert Wiener and first published in 1948. He argued that feedback was the way creatures, including human beings, learn about and adapt to their environment. "This cybernetic, or neural-modeling, approach to machine intelligence was soon dubbed the 'bottom-up' approach," with the goal of starting with a model of brain function in a primitive organism and working up to a human equivalent. The silicon computer chip had yet to be invented, and hardware limitations proved prohibitively expensive for many early researchers.
Frank Rosenblatt, a research psychologist at the Cornell Aeronautical Laboratory, optimistically continued this line of research, and in 1958 staged a demonstration of his "perceptron," an IBM 704 computer connected to an "eye" made of photoelectric cells and programmed to distinguish between two patterns of squares. Rosenblatt argued that it "is both easier and more profitable to axiomatize the physical system and then investigate this system analytically to determine its behavior, than to axiomatize the behavior and then design a physical system by techniques of logical synthesis." In 1960, one year past his own projected deadline, Rosenblatt demonstrated the Mark I, a perceptron that could learn to make slight discriminations between letters of the alphabet through a "trial-and-error" process.
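The trial-and-error learning rule behind the Mark I can be illustrated in a few lines. The sketch below is not Rosenblatt's actual program, only a minimal modern rendering of single-layer perceptron learning; the function names and the toy two-pattern task are hypothetical.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn threshold-unit weights by trial and error.

    samples: list of (inputs, target) pairs, where target is 0 or 1.
    Returns the learned weights and bias.
    """
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            # The unit "fires" if the weighted sum crosses the threshold.
            total = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if total > 0 else 0
            error = target - output
            # Nudge weights toward the correct answer: the trial-and-error step.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Two hypothetical 2x2 "patterns of squares" flattened into pixel lists:
# left column lit (class 0) versus right column lit (class 1).
patterns = [([1, 0, 1, 0], 0), ([0, 1, 0, 1], 1)]
w, b = train_perceptron(patterns)
```

After a few passes over the two patterns, the learned weights separate the classes, which is the essence of the "slight discriminations" the Mark I made between letter shapes.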
In 1956, Allen Newell, Herbert A. Simon, and J. C. Shaw preferred to call their research "complex information processing." They advocated a "top-down" approach to AI research primarily because software could be easily modified, and failures more easily abandoned. They created a computer program called Logic Theorist, taking as their inspiration the philosophical treatise on mathematics, Principia Mathematica, by Alfred North Whitehead and Bertrand Russell. Their primary programming task was to avoid what AI researchers call "combinatorial explosion": as the number of variables considered increases, the number of combinations capable of being created increases exponentially. As yet, AI researchers have been unable to cope with the infinite diversity, the infinity of combinations, that the human mind is capable of processing.
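The arithmetic behind combinatorial explosion is simple to demonstrate: with b choices at each step, the number of possible sequences after d steps is b to the power d. The figure of roughly 30 legal moves per chess position used below is an oft-quoted average, not a value from this paper.

```python
def sequences(branching, depth):
    """Number of distinct choice sequences with `branching` options per step."""
    return branching ** depth

# Chess illustration (assuming ~30 legal moves per position on average):
# looking only 6 half-moves ahead already yields 30**6 = 729 million lines.
for depth in range(1, 7):
    print(depth, sequences(30, depth))
```

Each added step multiplies the search space by the branching factor, which is why Logic Theorist needed heuristics to prune candidates rather than enumerate them.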
At this point, Simon predicted that through AI research "digital computers would be the world's chess champions" and would "discover at least one important new mathematical theorem" within ten years. Newell, Simon, and Shaw followed Logic Theorist in 1957 with an improved version that they called General Problem Solver (GPS), capable of a limited heuristic problem-solving method. A heuristic problem-solving method starts with a goal and makes the choices that appear to approach that goal. An example of this would be trying to reach the downtown area of an unfamiliar city by repeatedly turning onto the streets that lead in the direction of the large buildings located downtown.
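The downtown example can be sketched as a greedy search on a street grid. This is not GPS itself (GPS used means-ends analysis over symbolic operators); it is only a minimal illustration of the heuristic idea, with hypothetical function names, on an obstacle-free grid.

```python
def heuristic(pos, goal):
    """Estimated distance to the goal: like steering toward the tall buildings."""
    return abs(goal[0] - pos[0]) + abs(goal[1] - pos[1])  # Manhattan distance

def greedy_walk(start, goal):
    """At each corner, take the street that seems to lead closest to downtown."""
    path, pos = [start], start
    while pos != goal:
        corners = [(pos[0] + dx, pos[1] + dy)
                   for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        # Choose the move that appears to approach the goal.
        pos = min(corners, key=lambda c: heuristic(c, goal))
        path.append(pos)
    return path

route = greedy_walk((0, 0), (3, 2))
```

On an open grid the greedy rule reaches the goal directly; with obstacles (one-way streets, rivers) it can stall or loop, which is exactly the limitation that made GPS's heuristic method "limited."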
Encouraged by his General Problem Solver (GPS), Herbert Simon made a further claim in 1957:
II.
SYMBOLIC SYSTEMS-2: BRAIN MODELING-1
Two AI researchers are credited with the supersession of the symbolic systems model over the neuroscience approach; they succeeded in channeling the majority of research funds toward the "top-down," symbol-based research. A high-school classmate of Rosenblatt's and a professor at M.I.T., Marvin Minsky met Seymour Papert, another M.I.T. professor, and together they became advocates of the perspective known as the "top-down" approach. In 1965, Minsky and Papert began circulating among AI researchers a draft of their book, Perceptrons, attacking Rosenblatt's perceptron work. In the book they described writings about the perceptron research as being "without scientific value." Minsky and Papert clearly attacked the brain-modeling research as a paradigm (as defined by Thomas Kuhn) when they went on to write:
III.
THE RETURN OF THE BRAIN MODELING PARADIGM
Current successful AI research is clearly headed in the direction of paradigm synthesis. The most successful outcome of symbol-based research so far has been "expert systems," computer programs that locate data in a specific area of expertise, such as medicine, law, or a particular business application. AI researchers combining expert systems and neural nets are creating hybrid systems able to provide feedback that improves each component. Don Barker, coordinator of the Computer Assisted Learning Center at Gonzaga University, says: