Before we embark on a course in Artificial Intelligence, we should consider for a moment whether automating intelligence is really possible!
Artificial intelligence research makes the assumption that human intelligence can be reduced to the (complex) manipulation of symbols, and that it does not matter what medium is used to manipulate these symbols: it does not have to be a biological brain! This assumption does not go unchallenged among philosophers and others. Some argue that true intelligence can never be achieved by a computer, but requires some human property that cannot be simulated. There are endless philosophical debates on this issue (some on comp.ai.philosophy), brought to public attention again recently by Penrose's book ``The Emperor's New Mind''.
The best-known contributions to the philosophical debate are Turing's ``Turing test'' paper and Searle's ``Chinese room'' argument. Very roughly, Turing considered how you could ever conclude that a machine was really intelligent, and argued that the only reasonable way was to test it. In the test, a human interrogator communicates, via a terminal, with a human and with a computer in other rooms. The interrogator can ask either of them any questions they like, including very subjective ones such as ``What do you think of this poem?''. If the computer answers so well that the interrogator cannot tell which of the two is the human, then we say that the computer is intelligent.
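To make the setup concrete, here is a minimal sketch of the test in Python. Everything in it is invented for illustration (the function names, the random room assignment, the number of questions); it is not from Turing's paper.
\begin{verbatim}
# Skeleton of the Turing test. The respondent functions passed in
# are hypothetical placeholders; real ones would be a person at a
# terminal and the program under test.
import random

def run_test(ask, respond_human, respond_machine, n_questions=5):
    # Hide the two respondents behind anonymous labels A and B.
    rooms = [("A", respond_human), ("B", respond_machine)]
    random.shuffle(rooms)
    transcript = []
    for _ in range(n_questions):
        question = ask(transcript)  # the interrogator may ask anything
        answers = {label: respond(question) for label, respond in rooms}
        transcript.append((question, answers))
    # The machine passes if, given the transcript, the interrogator
    # cannot reliably say which label hides the human.
    return transcript
\end{verbatim}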
Searle argued that merely behaving intelligently is not enough. He tried to demonstrate this with a thought experiment, the ``Chinese room''. Imagine that you speak no Chinese, but have a huge rule book that allows you to look up Chinese sentences and tells you how to reply to them in Chinese. You understand no Chinese, yet you can behave in an apparently intelligent way. He claimed that computers, even if they appeared intelligent, would not really be, as they would just be using something like the rule book of the Chinese room.
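The point is easy to see in code. The following toy ``rule book'' is only a sketch (the sentences, given in pinyin, and the replies are made up): it produces fluent-looking answers by pure symbol lookup, with no understanding of what is being said.
\begin{verbatim}
# A toy Chinese room: replies are produced by pure symbol lookup.
# Nothing in the program "understands" the sentences it manipulates.
RULE_BOOK = {
    "Ni hao ma?":         "Wo hen hao, xiexie.",  # "How are you?" -> "Fine, thanks."
    "Ni xihuan shi ma?":  "Shi hen mei.",         # "Like poetry?" -> "Poetry is beautiful."
}

def reply(sentence):
    # Look the input up in the book; fall back to a stock apology.
    return RULE_BOOK.get(sentence, "Dui bu qi, wo bu ming bai.")

print(reply("Ni hao ma?"))  # fluent output, zero understanding
\end{verbatim}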
Many people go further than Searle, and claim that computers will never even be able to appear really intelligent (so will never pass the Turing test). There are therefore a number of positions that you might adopt:
\begin{itemize}
\item Computers will never even appear to be fully intelligent, and so will never pass the Turing test.
\item Computers may one day behave intelligently, but will never really be intelligent: like the person in the Chinese room, they will only be manipulating symbols (Searle's position).
\item A computer that behaves as intelligently as a person should be counted as genuinely intelligent (the position implicit in the Turing test).
\end{itemize}
My view is that, though computers can clearly behave intelligently when performing certain limited tasks, full intelligence is a very long way off and hard to imagine (though I see no fundamental reason why a computer could not be genuinely intelligent). However, these philosophical issues rarely impinge on AI practice and research. It is clear that AI techniques can be used to produce useful programs for tasks that conventionally require human intelligence, and that this work helps us understand the nature of our own intelligence. That is as much as we can expect from AI for now, and it still makes it a fascinating topic!