Introduction to Artificial Intelligence: A Brief History

Stanford’s Artificial Intelligence and Machine Learning classes have been online for the past week, and tens of thousands are watching the video lectures intently. Today marks the first homework assignment deadline, and the online message boards are ablaze with discussion. I’ve decided to summarize information from the textbook that is used or suggested for both classes, to which most online students probably do not have access. The foundations of artificial intelligence (AI) will be discussed here, spanning all the way back to the time of the Greek philosophers.

Coursework

I have made the reluctant decision to opt out of the advanced track and switch over to the basic track, having already registered for the AI and machine learning (ML) classes. Essentially, I won’t be taking tests or doing assignments, but I can watch the video lectures whenever I so desire. Unfortunately, a few weeks before the course began, I registered to take an exam at the end of the year that may (if I pass, at least) have important implications for my future. After a few days of watching videos, I realized that taking this class would come at the expense of studying for that exam.

In fact, I now realize that I will pretty much be studying for almost the next year, because I’ll also be taking an even more important (and difficult) exam in the middle of 2012. Since completing the Stanford courses simply doesn’t give me enough credit, and since it’s basically just for interest, I won’t be able to continue with the rest of the class. However, I realize that I still may be able to contribute, because I have access to the Artificial Intelligence: A Modern Approach textbook, co-authored by Professor Norvig, one of the two professors teaching the AI course. So hopefully anyone interested can use these notes for additional information.

What is AI?

When the question of AI comes up, different people give varying definitions. However, they can generally be grouped into four categories. Artificial intelligence involves systems that: 1) act like humans, 2) think like humans, 3) act rationally, 4) think rationally.

1) Acting humanly: The Turing Test

The idea behind the “Turing Test” – created by Alan Turing in 1950 – as we know it today is simple: imagine you’re in a room with a hole in the wall, and picture messages being passed through that hole. You are asked to hold a conversation by writing a response on a sheet of paper and passing it back through the hole. The question is: can you tell whether the entity you’re conversing with is a human or a computer? Essentially, a computer is considered “intelligent” if it fools the human into believing that it is a human.
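To make the setup concrete, here is a minimal sketch in Python of the “imitation game” loop described above. It is purely illustrative: the canned-reply responder, the single guess at the end, and the pass criterion are my own simplifications, not anything from Turing’s paper or the course.

```python
import random

def machine_reply(message):
    # Hypothetical stand-in for a conversational program; a real system
    # would generate a response rather than pick from canned lines.
    return random.choice(["Interesting. Tell me more.",
                          "Why do you say that?",
                          "I hadn't thought of it that way."])

def human_reply(message):
    # In a real test this would be a person writing on a sheet of paper.
    return input("(hidden human) your reply: ")

def imitation_game(questions):
    """One round: the interrogator converses through the 'hole' with a
    hidden respondent, then guesses whether it was human or machine."""
    label, reply = random.choice([("machine", machine_reply),
                                  ("human", human_reply)])
    for question in questions:
        print("Q:", question)
        print("A:", reply(question))
    guess = input("Human or machine? ").strip().lower()
    fooled = (label == "machine" and guess == "human")
    return label, fooled
```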

2) Thinking Humanly: The Cognitive Modelling Approach

The cognitive modelling approach concerns itself not with whether a computer can solve a problem, but with how it solves it. Computers are sometimes compared with humans in terms of how they arrived at a solution. Does their reasoning follow the same steps as ours?

3) Acting Rationally: The Rational Agent Approach

In instances where time is limited, an agent may not be able to carefully consider alternatives and make the best choice, but it can still make a quick choice. In humans, you might call this a reflex. As the textbook explains: “For example, pulling one’s hand off of a hot stove is a reflex action that is more successful than a slower action taken after careful deliberation.” While a computer could be programmed to deliberate carefully, doing so isn’t always practical in the situation at hand.
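The textbook’s standard illustration of this idea is a simple reflex agent, which maps the current percept straight to an action through condition–action rules, with no deliberation at all. The sketch below is my own simplification of the vacuum-world example; the percept format and rule set are assumptions, not course code.

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent: condition-action rules only, no lookahead."""
    location, status = percept          # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"

# A short percept sequence and the action chosen for each percept.
for percept in [("A", "Dirty"), ("A", "Clean"), ("B", "Dirty"), ("B", "Clean")]:
    print(percept, "->", reflex_vacuum_agent(percept))
```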

4) Thinking Rationally: The Laws of Thought Approach

This approach deals with essentially irrefutable reasoning processes. For example, if all people are good, and I am a person, then I must be good. There are certainly limitations to this approach, such as the fact that a computer may never stop searching for an answer when there is none. Although nowadays, there’s always the reset button…
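As a toy illustration of mechanizing that kind of reasoning, here is a tiny forward-chaining loop in Python that derives the conclusion of the syllogism above. It is only a sketch under heavy simplification: real logic-based systems use first-order logic with quantifiers rather than string matching, and the facts and rule encoding here are my own invention.

```python
def forward_chain(facts, rules):
    """Naively apply "if premise then conclusion" rules until no new fact
    appears. With richer rule languages, a search like this may never
    terminate when no answer exists -- the limitation noted above."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# "All people are good" applied to "I", plus the fact "I am a person".
rules = [("I am a person", "I am good")]
derived = forward_chain({"I am a person"}, rules)
print("I am good" in derived)   # True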

Foundations of AI

Early History

Around 450 BCE, in ancient Greece, Plato reported a dialogue in which Socrates asked Euthyphro, “I want to know what is characteristic of piety which makes all actions pious… that I may have it to turn to, and to use as a standard whereby to judge your actions and those of other men.” According to the textbook, this is essentially Socrates’ way of asking for an algorithm that distinguishes piety from impiety. Later, Aristotle developed a system of syllogisms – arguments in which a conclusion is derived from stated premises – but he also believed that we have intuitive reasoning, and that our minds are not entirely governed by logical processes.

In the ninth century CE, the Arab mathematician Muḥammad ibn Mūsā al-Khwārizmī was the first to express a computation as a formal algorithm; he was also the brilliant man responsible for introducing Arabic numerals to Europe.

In the mid-17th century, René Descartes proposed that the mind and body are distinct – that the mind is not a physical thing – a view known today as Cartesian dualism. However, he believed that animals did not possess this dualist quality, regarding them as essentially biological machines.

David Hilbert, a German mathematician, produced a list of 23 problems in 1900 that he correctly predicted would occupy mathematicians for decades. His final question asked whether there are fundamental limits to the power of effective proof procedures; the Austrian logician Kurt Gödel, born more than four decades after Hilbert, came up with an answer. Developing what are now called the “incompleteness theorems,” he concluded that “in any language expressive enough to describe the properties of the natural numbers, […] their truth cannot be established by any algorithm.” This result helped motivate Alan Turing’s work on characterizing exactly what is capable of being computed.

In fact, “the modern digital electronic computer was invented independently and almost simultaneously by scientists in three countries embattled in World War II. The first operational modern computer was the Heath Robinson, built in 1940 by Alan Turing’s team for the single purpose of deciphering German messages.” In the early 1950s, the IBM 701 became the first computer to turn a profit for its manufacturer. Gradually, computers advanced to the point where memory and speed double every couple of years – an observation now known as Moore’s Law.

Recent History

In 1997, IBM’s Deep Blue computer beat Garry Kasparov, the world chess champion, in a rematch once dubbed “the most spectacular chess event in history.” In 2005, the other professor of the AI course, Sebastian Thrun, co-created “Stanley” – a self-driving car that completed the entire 132-mile off-road course of the DARPA Grand Challenge in just under 7 hours. Last year, here in Japan, a computer beat the women’s shogi champion in a scenario similar to Kasparov’s. Clearly, computers are getting smarter.

In fact, earlier this year, IBM’s supercomputer “Watson” dominated the American TV trivia game show Jeopardy!, as seen in the video below.

A month and a half ago, Cornell University uploaded a conversation between two artificial intelligence systems. Chatbots are systems designed to mimic human conversation. The exchange is fascinating, going from awkward, to crazy, to hilarious, to existential in the span of two minutes. The left side is Alan, and the right side is Shruti. Please enjoy this representation of our current state of human-mimicking artificial intelligence.

The Future of AI

Artificial intelligence is noticeably improving every decade, and people are finding innovative ways to use technologies that are already available. For example, after its stint on Jeopardy!, Watson is now heading in the direction of medicine – it will be used to help doctors make diagnostic decisions and to aggregate useful information for patients’ benefit. The creative application of a technology is almost as important as its creation, because how it is used is really what determines how substantial a contribution it makes to society.

With these technological advances rapidly being commercialized, the AI industry is booming. We see it in important health contexts like medicine, and in entertainment contexts like video games. It shouldn’t be long before we begin to embrace technologies such as autonomous robots as companions. This will fundamentally change the way we live, act, and even think, much like the internet has done. I find it inevitable, but unlike many others who believe it will happen soon, I hesitate to say when.

Until then, classes like Stanford’s AI course will connect us to this knowledge and bring more of us closer to understanding it. The course has also inspired people like Aprille Glover to make her own website and companion course called “Artistic Intelligence,” in which she teaches art apparently related to the topics discussed in Stanford’s AI class, along with relevant homework assignments. She is not affiliated with the course – and is taking it herself – but it’s interesting to see the kind of impact this course has already made.

If the Stanford course ends as well as it started, I do believe it will mark the beginning of a big change in post-secondary education.
