Brooks (brooksmoses) wrote,

Computer chess and the simulation of intelligence

A couple of things came into conjunction recently to get me started thinking about artificial intelligence -- Russ posted about the recent set of chess games between Garry Kasparov and X3D Fritz, and there's a thread on rec.arts.sf.composition discussing artificial intelligences and what they hypothetically could or could not be programmed to do.

So, I ended up writing a fairly substantial Usenet post about this, and decided that it might be of interest to post here as well (with a bit of editing). The subject, primarily, is what the differences are between "weak AI" and "strong AI", and in particular whether or not it's necessary to emulate the human capability to "think" on the inside in order to produce something that's functionally equivalent on the outside, and whether it is in fact valid to distinguish between the two.

My conclusion on the matter is that there is a very distinct difference between having intelligence and acting with an outward appearance of it, and that it is quite possible to produce the latter through a capability to do a vast number of simple things that are not themselves (either individually or in the aggregate) an actual intelligence.

The chess matches that I mentioned -- specifically, the third game, in which Kasparov beat the computer fairly soundly -- provide what I think is an excellent demonstration of this.

The computer program largely acted by starting with pre-programmed openings (much as human players do) and then, once those were exhausted, doing an approximately ten-level look-ahead search over possible positions. Played that way, it is nearly unbeatable.
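
To make that concrete, here is a minimal sketch of that kind of engine loop in Python. It assumes the python-chess package for move generation (my choice of library, not anything mentioned in the match coverage), and the opening book, piece values, and search depth are all invented for illustration; real engines such as Fritz add alpha-beta pruning, move ordering, and vastly more sophisticated evaluation to reach the depths described above.

```python
import chess

# Crude piece values for a material-count evaluation (illustrative only).
PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

OPENING_BOOK = {}  # hypothetical: maps a position's FEN string to a book move


def evaluate(board: chess.Board) -> int:
    """Static score of a position from the side to move's point of view.

    Real engines add many hand-coded terms (mobility, king safety, pawn
    structure, checkmate detection, ...); this one only counts material.
    """
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score


def negamax(board: chess.Board, depth: int):
    """Search `depth` levels (half-moves) ahead; return (score, best_move)."""
    if depth == 0 or board.is_game_over():
        return evaluate(board), None
    best_score, best_move = float("-inf"), None
    for move in board.legal_moves:
        board.push(move)
        score = -negamax(board, depth - 1)[0]  # opponent's best reply, negated
        board.pop()
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move


def choose_move(board: chess.Board, depth: int = 4):
    """Play from the pre-programmed openings while they last, then search."""
    book_move = OPENING_BOOK.get(board.fen())
    if book_move is not None:
        return book_move
    return negamax(board, depth)[1]
```

The point to notice is how little is there: a table of openings, a loop over legal moves, and a scoring function. Everything the program "knows" about chess beyond brute enumeration lives in that scoring function.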

In the third game, however, Kasparov beat it by steering the game into a position in which there was no clearly evident danger to either player -- that is, no move either player could make that would give them the immediate upper hand -- but in which he had a plan that would take some twenty moves to execute before it bore fruit in an unstoppable victory. The computer simply didn't have the search depth to see that far ahead, and without a human intelligence powering it, it realized neither the danger it was in (although the twenty moves were rather complicated on a possible-move scale, they were rather simple on a broad-brush conceptual scale, and obvious to most of the human witnesses) nor that the only possible defense involved a similar set of moves that wouldn't bear immediate fruit.
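
A bit of back-of-the-envelope arithmetic shows why that depth limit is such a hard wall. Taking the common estimate of roughly 35 legal moves in a typical middlegame position, and taking a search level to mean a single half-move (an assumption on my part), the number of positions to examine grows exponentially with depth:

```python
BRANCHING = 35  # rough average number of legal moves in a middlegame position

print(f"10 levels ahead:           {BRANCHING ** 10:.1e} positions")  # ~2.8e15
print(f"40 levels (20 full moves): {BRANCHING ** 40:.1e} positions")  # ~5.8e61
```

Alpha-beta pruning and good move ordering cut those counts to something like their square roots, which is what makes a ten-level search practical on real hardware -- but forty levels remains hopelessly out of reach, so a twenty-move plan whose early steps look innocuous is simply invisible to the search.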

Eventually, Kasparov's plan started to bear its first fruit, and it was clear to all of the humans watching that the computer was done for, since it hadn't made even the first move toward getting into position to counter things -- instead, it had moved about more or less aimlessly, doing the equivalent of twiddling its thumbs. And so the programmers overrode the computer and resigned on its behalf, despite the machine still evaluating the position as nearly equal.

The upshot of all this is that the computer -- with a very effective ability to search possible positions and a relatively simple algorithm for evaluating the "value" of a given position -- would give the appearance of playing perfectly against almost any human player. Yet there are still sharp holes where its programming tells it nothing, and it has no capability to realize that it is in that sort of hole, or to come up with methods on the fly (or, for that matter, after the fact) for dealing with it.

That, I think, is a crucial point: to a good human player with the same knowledge, there would have been many cues that something was up. Kasparov, as White, would not have been starting out playing toward a draw, and he was clearly responsible from an early stage for getting the board into the position it was in. Therefore he must have had a plan for what to do with that position, and that plan was very unlikely to consist of aimlessly moving pieces about. It follows that moving pieces aimlessly about oneself was not an effective way to play against him; one had to find a way to accomplish something. Similarly, one had to figure out how Kasparov was accomplishing something, since it was fairly clear that he must be doing so even if one couldn't see it. And thus, even without getting any closer to working out what the plan was, one would certainly have had a very strong sense that this was not a good position to be in. But the computer didn't have even the beginnings of a capability for that sort of meta-gaming.

And I think that if we look into the holes, we see that there is a vast gulf between how the computer "thinks" about chess and how a human player thinks about it -- the computer is, in particular, incapable of any sort of high-level strategizing beyond what comes from its ability to evaluate the value of a board position, and that ability is limited very directly by what the programmers have programmed into it. However, the capability of modern computers to do a remarkably large number of move-by-move searches (i.e., low-level strategy) has gotten to the point where it has almost papered over these holes, and so it takes someone of Kasparov's skill to find them. We may well get to a point where the holes are papered over so completely that no human at all can find them, but I don't think that gets us any closer to making a computer that can do high-level strategy directly.
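
To see what "programmed into it" means in practice, consider the sort of hand-coded evaluator the earlier sketch would use, extended here with a single extra term. Every piece of "strategic" knowledge the engine has is one of these explicit terms; the terms and weights below are my own invention for illustration, not anyone's real evaluation function.

```python
import chess

# Hand-coded evaluation terms.  The engine's entire notion of "a good
# position" is the weighted sum of whatever terms appear here.
# (Values and weights are invented for illustration.)
PIECE_VALUES = {chess.PAWN: 100, chess.KNIGHT: 300, chess.BISHOP: 300,
                chess.ROOK: 500, chess.QUEEN: 900, chess.KING: 0}
MOBILITY_WEIGHT = 2  # centipawns per currently-available legal move


def evaluate(board: chess.Board) -> int:
    """Material plus a small mobility bonus, from the side to move's view."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    # Mobility term: a few points per legal move available right now.
    score += MOBILITY_WEIGHT * sum(1 for _ in board.legal_moves)
    return score
```

Adding further terms (pawn structure, king safety, and so on) widens what the evaluator can see, but each term has to be put there by a programmer; a slow, quiet buildup like Kasparov's, which changes none of the coded terms for many moves, looks to this function exactly like aimless shuffling.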

I also think that the same sorts of things are happening with AI in a broader sense. We can write programs that do low-level thinking very effectively, and in at least some cases it's reasonable to suspect that the ability to do vast amounts of low-level thinking can create an appearance of high-level thinking -- and we will almost certainly get much better at this in the coming years -- but I don't think we've come anywhere close to figuring out how the high-level thinking works.

Beyond this, I think it points to a significant problem with the conjecture that a vast number of low-level processes will spontaneously give rise to high-level intelligence. The problem is that the conjecture is very difficult to falsify, because there is always the alternate possibility that the vast number of low-level processes has instead given rise to something with most of the outward appearances of high-level intelligence. And so, if we accept the test of outward appearances, proving that something is intelligent becomes a matter of proving a negative: we must prove that there are no holes in which it fails to appear intelligent.

There are those who would claim that, therefore, the difference between the outward appearance of intelligence and the actual existence of it is not a difference that we should concern ourselves with. In other words, if a system can pass the Turing test, it is intelligent, and we are finished with figuring out how to do AI.

I claim that this is nonsense; it is instead a demonstration that the Turing test and similar black-box tests are not generally useful tests for the existence of intelligence, and ought to be expected to produce a number of false positives. Consider what Kasparov demonstrated with this particular chess game: it took a notable amount of insight -- insight based, in particular, on an understanding of the internals of the system -- to determine where the holes were likely to be; beyond that, it took a remarkable amount of cleverness to devise a method by which such a hole could be exposed in practice, and the skill of a chess grandmaster to carry it out. Yet once the hole was demonstrated, it was quite clear that it was indeed a hole; this was not something particularly subtle once found.

Consider, then, this: suppose we hadn't had Kasparov. Would the hole have been found? (Probably, because there are easier ways to get to it than by playing through a whole game, but let's ignore that.) If the hole hadn't been found, would that change whether the computer was capable of effective high-level strategy?

This problem is unlikely to be limited to chess computers; it will affect any sort of "intelligent" machine. It will very likely require a substantial amount of cleverness to devise tests that demonstrate the difference between true intelligence and the ability to fake it, but I think we would be deluding ourselves to claim that, because of this, the difference is negligible.