Monday, June 12, 2006

History of Artificial Intelligence

Alan Turing and the Turing Test
Alan Turing was an English mathematician who conceived the Turing machine. During WWII he worked at Bletchley Park, England's code-breaking center, where he designed the bombe, a machine that deciphered messages encrypted by the German Enigma machine.

After the war Turing turned his attention to the idea of intelligent machines, and in 1950 he published a now famous paper titled "Computing Machinery and Intelligence", in which he described a test of computer intelligence. This test, known as the Turing test, is conducted by a judge who asks questions of and holds conversations with two hidden entities: a human being and a computer program. The program is deemed intelligent if the judge is unable to tell which is the real human and which is the program. Although it has become a controversial measure of intelligence, there is a $100,000 prize, called the Loebner Prize, for the first program that can pass the test.
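To make the setup concrete, here is a toy sketch of the protocol in Python. This is my own illustration, not a real chatbot: both respondents give canned placeholder answers I invented, and the point is only the structure of the game (hidden sources, a judge's guess).

```python
import random

def human_answer(question: str) -> str:
    # Stand-in for a real person's reply (invented placeholder).
    return "Honestly, I'd have to think about that one for a while."

def program_answer(question: str) -> str:
    # Stand-in for a (very unconvincing) program's reply.
    return random.choice(["Interesting question.", "Why do you ask?"])

def run_round(question: str, judge_guess: str) -> bool:
    """One round: the judge sees two unlabeled answers and guesses which is the machine."""
    answers = [("human", human_answer(question)),
               ("program", program_answer(question))]
    random.shuffle(answers)  # hide which respondent is which
    for label, (_, text) in zip("AB", answers):
        print(f"Entity {label}: {text}")
    source = dict(zip("AB", answers))[judge_guess][0]
    return source == "program"  # True if the judge caught the machine

identified = run_round("What do you think of the weather?", judge_guess="A")
print("Judge identified the program:", identified)
```

The program "passes" a round whenever the judge's guess is wrong; Turing's actual proposal is statistical, over many rounds and judges.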

The Dartmouth Summer Research Project on Artificial Intelligence (1956)
A seminal event in the development of AI was the Dartmouth Summer Research Project on Artificial Intelligence, which was organized by John McCarthy, who coined the term artificial intelligence in naming the conference. This was the first organized attempt at creating AI, and it was attended by many people who later became leaders in the field, including Marvin Minsky, whom I will talk about in a later post, and Claude Shannon, the founder of information theory.

Also at the conference were Allen Newell and Herbert Simon, who presented their program Logic Theorist. Logic Theorist was able to prove basic theorems of logic from the Principia Mathematica, and it even found a more elegant proof of one theorem than the one the authors had given.

Physical Symbol Systems (PSS)
Later, Newell and Simon went on to formulate the Physical Symbol System Hypothesis. This hypothesis states that a physical symbol system (PSS) is both necessary and sufficient for intelligence. This is a bold claim, for it not only defines the minimum requirements of AI, but also asserts that every intelligent system, including the human mind, must be a PSS.

So what's a PSS? It is a system of symbols, which could be of any type, like shapes, sounds, numbers or letters. These symbols represent things in the real world. Let's use shapes as an example:

The stick-figure man, for instance, is a symbol for a male bathroom, and the wheelchair icon is a symbol for handicapped accessibility.

Not only must the system have a collection of meaningful symbols, but it must also be able to relate them to one another. For example, based upon its knowledge of the two symbols above, it should be able to reason that the stick figure and the wheelchair icon together form a symbol for a handicapped-accessible male bathroom. To be meaningful, the system must also have inputs and outputs. In a computer, the symbols are strings of ones and zeros.
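As a quick illustration of that last point (using a character code as the stand-in symbol):

```python
# Inside a computer, a symbol such as the letter "M" is ultimately
# stored as a pattern of ones and zeros (here, its 8-bit character code).
symbol = "M"
bits = format(ord(symbol), "08b")
print(symbol, "->", bits)  # M -> 01001101
```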


Assuming that we humans are each a PSS, it follows that we can never truly see, touch, taste, smell or feel 'reality'. Picture it like this:


We have a robot on the left and a human on the right. Each has an instrument with which to perceive its environment: the robot has two cameras, the human two eyes. Each translates the incoming light into its own symbolic representation.



The robot's software symbolically represents the light as a pattern of binary digits, which themselves represent transistor states. The human symbolically represents the light as connection and firing patterns of neurons in the nervous system.
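Putting the pieces of this section together, here is a minimal sketch of a physical symbol system in Python. This is my own toy construction, not Newell and Simon's formalism: a store of symbols that designate things in the world, plus a rule for combining their designations into new meanings.

```python
# A toy physical symbol system: symbols designate things in the world,
# and the system can combine them to produce interpretations it was
# never explicitly given. Symbol names and the rule are invented here.
symbols = {
    "stick_figure": "male",
    "wheelchair": "handicapped accessible",
}

def interpret(*tokens: str) -> str:
    """Compose the designations of the given symbols into one meaning."""
    return " ".join(symbols[t] for t in tokens) + " bathroom"

# Input: signs seen on a door. Output: an inference about the world.
print(interpret("stick_figure"))                # male bathroom
print(interpret("wheelchair", "stick_figure"))  # handicapped accessible male bathroom
```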




Sunday, June 11, 2006

# 1,376,358!

I just went to my Technorati profile and it lists my blog as the 1,376,358th most popular blog. It will be fun to watch that number change.

Saturday, June 10, 2006

John McCarthy

John McCarthy is considered to be one of the founders of artificial intelligence (AI), and he coined the term in 1956 at a Dartmouth College conference on the subject. In 1958 he created the LISP programming language, which is still the most widely used programming language for AI. McCarthy has worked in logical AI, an attempt to use logical formalism to symbolically represent common-sense knowledge, which the computer can then use to intelligently solve problems.
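To give a flavor of what that means in practice, here is a minimal sketch of the logical-AI style of problem solving: knowledge is stored as symbolic facts and if-then rules, and the program chains rules forward until it can answer a question. The facts and rules are my own invented example, not McCarthy's actual formalism (which used predicate logic and LISP).

```python
# Common-sense knowledge as symbolic facts (tuples) and if-then rules.
facts = {("is_raining",), ("outside", "alice")}

def rule_wet(facts):
    """If it is raining and X is outside, conclude that X is wet."""
    if ("is_raining",) in facts:
        return {("wet", f[1]) for f in facts if f[0] == "outside"}
    return set()

rules = [rule_wet]

changed = True
while changed:  # forward chaining: apply rules until no new facts appear
    changed = False
    for rule in rules:
        new = rule(facts) - facts
        if new:
            facts |= new
            changed = True

print(("wet", "alice") in facts)  # True: the program inferred Alice is wet
```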

For someone as naive as me, this raises the question: how exactly is cognitive science linked to AI? Obviously, if we understood exactly how the human mind works, it would be much easier to make machines think like it, but AI can be used as a tool to understand the mind as well. Consider this quote by Herbert Simon:

AI can have two purposes. One is to use the power of computers to augment human thinking, just as we use motors to augment human or horse power. Robotics and expert systems are major branches of that. The other is to use a computer's artificial intelligence to understand how humans think. In a humanoid way. If you test your programs not merely by what they can accomplish, but how they accomplish it, then you're really doing cognitive science; you're using AI to understand the human mind.
Since John McCarthy's impact on cognitive science cannot be understood apart from AI's impact on cognitive science, my following post(s) will focus on AI's relation to cognitive science.

His webpage can be found here: John McCarthy

CCortex: a 20-billion-neuron neural network.

I just found the Artificial Development homepage today. They are putting together a huge neural network using an array of 500 computers.

"Our goal is to achieve a realistic whole-brain simulation for the purpose of creating new cognitive computational products."


Not quite a whole brain yet, but 20 billion neurons is a good start. Trying to make the brain think should be a fun job.

CCortex Website

Friday, June 09, 2006

Topic of the moment: Cognitive Science

George Miller

Introduction:
This May I emerged from the world of biomolecules with a bachelor's degree in biochemistry, yet my interest is now being drawn toward the mind. Free from all academic responsibilities, I am exploring the various methods and disciplines used to understand the mind, and publishing my explorations here for all to see.

My first stop is cognitive science. Cognitive science is the study of the mind and how it works, whether it's processing images, speaking, dreaming or performing any one of the numerous feats it's capable of. According to the Stanford Encyclopedia of Philosophy, there are six principal founders of cognitive science: George Miller, John McCarthy, Marvin Minsky, Allen Newell, Herbert Simon and Noam Chomsky. I know next to nothing about any of these people, but today I am going to start learning about George Miller and share it with you.


George Miller is most famous for his paper titled "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information".

In his paper, Miller analyzes the results of experiments in which a subject is asked to judge the magnitude of a stimulus: how loud a sound is, for example, or how salty some water is. He looks at many different senses (sound, sight, touch and taste) as well as different dimensions within a sense, like pitch, loudness and duration. What he finds is that we can quite easily discriminate between two categories of a stimulus. Say we are judging temperature. If there are two categories, hot and cold, people can assign a stimulus to the right category with a high degree of accuracy. The same applies with three categories, say hot, medium and cold. The accuracy holds up until there are too many categories: about 7, plus or minus 2. That is to say, for most types of stimuli, we can reliably judge the category only when there are roughly seven categories or fewer.

Interestingly, it doesn't change things much if the spacing between categories is changed: whether the pitches of a sound are spaced close together or far apart, we have the same limit on the number of categories we can correctly sort them into. I find this quite strange, since in one experiment you could judge 5 closely spaced low pitches accurately, and in another you could judge 5 closely spaced high pitches accurately, but if you were asked to judge all 10 pitches in the same experiment, it would be very difficult to do so accurately.

Miller also goes on to explain that the ability to accurately judge categories improves when more than one dimension of the stimulus is varied: loudness and duration in addition to pitch, for example. He says this could be one explanation for how, in the real world, people are able to tell the difference between thousands of categories. He also explains how information theory can be used as a yardstick for measuring these judgment responses, as well as other psychological quantities.
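As a back-of-the-envelope illustration of that yardstick (my own arithmetic, not figures from the paper): a correct judgment among N categories transmits log2(N) bits, so the seven-plus-or-minus-two limit corresponds to roughly 2.3 to 3.2 bits of channel capacity per stimulus dimension.

```python
import math

# Channel capacity implied by Miller's limit: choosing correctly among
# N categories transmits log2(N) bits of information.
for n in (5, 7, 9):  # seven, plus or minus two
    print(f"{n} categories ~ {math.log2(n):.2f} bits")

# Varying several independent dimensions multiplies the categories
# (i.e., adds their bits), hinting at how we manage thousands of
# real-world distinctions: e.g., 7 pitches x 5 loudnesses = 35 tones.
print(f"7 x 5 combined ~ {math.log2(7 * 5):.2f} bits")
```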

It was an interesting article. I believe his use of information theory was replicated in many other psychological papers.

Wednesday, June 07, 2006

Hello World!

Are you there? Anywhere?

My name is Brandon Field, and I am interested in neuro-stuff, so this is neurofield. I might as well try posting a neuro-picture right about now:

This is an artificially colored electron micrograph of neurons.

If anyone has cool pictures of neurons, brains or other neuro related stuff, you can send them to me at: