Sunday, May 22, 2011

Game A.I. Introduction

Origins of Nagi

A number of years ago, I was challenged to think of a more advanced navigation system than the waypoints used in Counter Strike. What started as a simple exercise in exploratory dynamic navigation exploded into half a decade of work on bot dynamics, with the goal of creating a challenging FPS artificial intelligence that could not be distinguished from an ordinary player. This project has had many names, but the original was simply Nagi.

While I will not go into the details of Nagi in my first post, the idea behind her was simple: create a dynamic bot. At the time, and still today, most bots in games are what are known as "reflex" agents; they are capable of reacting to situations through a set of predefined rules, perhaps with some randomness in how a rule is selected. In effect, the entirety of their program can be thought of as a sophisticated if/then/else branch table.
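
As a rough sketch of what that branch table looks like in practice (the state fields, thresholds, and action names below are made up for illustration, not taken from any particular game), a reflex bot's decision loop might boil down to something like this:

    import random

    def reflex_bot_decide(state):
        """A toy reflex agent: a fixed if/then/else branch table over the
        current game state, with a little randomness in rule selection.
        Every field name and threshold here is hypothetical."""
        if state["health"] < 20:
            return "retreat_to_cover"
        if state["enemy_visible"]:
            # Pick among a few canned attack rules at random.
            return random.choice(["strafe_and_shoot", "rush", "hold_angle"])
        if state["heard_gunfire"]:
            return "move_toward_sound"
        # Default rule: wander the waypoint graph.
        return "patrol_waypoints"

Everything the bot will ever do is spelled out in those branches; nothing it has not been explicitly told about can influence its decision.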

This system is used for a variety of reasons. First and foremost, it is exceptionally fast, both to program and to execute. In a day and age where every triangle on the screen relates to real-world dollars spent by consumers, A.I. takes a back seat. In addition, consumers are not expecting much, so why put a terribly large amount of effort into something that will be largely overlooked? Finally, it is exceptionally easy to tweak a few if statements; it is exceptionally difficult to tweak an artificially evolving intelligence, and the behaviors that result are often random and not very useful.

No one said artificial intelligence was easy.

However, reflex agents are quite brittle; once they encounter a situation they have not been informed about or are unable to identify, they are essentially beaten. Conversely, once they encounter a player who is unable to beat them, they are terminators, unable to lose. This causes a wide disparity between players and a sort of cut-off behavior: once the mechanism to beat them is identified, they are no longer a threat, but until then, they are an insurmountable one. This is why most bosses and enemies have clearly identified tactics and weak points; if they did not, it would be unreasonable to assume they could be beaten.

This is all well and good for single player, obviously; the only person you are really competing against is yourself. It is, however, far less useful for multiplayer; beating a bot is either not a challenge or not possible. To think of it another way, most players sit somewhere between the two extremes of "new player" and "tournament-winning expert", but most bots hang out at one end of the curve or the other. The result is that competing against bots offers little satisfaction and little training value.

Reflex agents are also brittle in another way: they cannot cope with changing games. They need specialized code to insert them into even a broad class of related games. This means each game requires a mix of generalized and specialized code, increasing production time and therefore hurting A.I. even more; after all, time spent on graphics sells better than time spent on A.I.

But this only serves to make the game less enjoyable.

They also scale badly; the number of rules needed to make an A.I. at either end of the curve is quite small, but making one that works in the middle requires an exceptionally large number of them. This is because the rules define exactly what the A.I. knows, which requires complex and tightly coupled code for every action or tactic it might take.

Is there a solution? Yes. A good many, in fact.

The methods I will talk about are related to dynamic knowledge bases, essentially the same tactics babies use to explore and learn about their environments. By storing knowledge as data instead of code, it can stand up to a great deal of manipulation, and generic learning algorithms can be applied to it. By gathering data as the A.I. plays, and by discarding data as it becomes obsolete, learning and refinement can occur in a generic way. By using general purpose learning algorithms in concert with each other, we get the best of all worlds.
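
As a minimal sketch of that idea (the class, its parameters, and the usage below are my own assumptions for illustration, not a description of Nagi's actual internals), the knowledge base can be kept as plain data that any generic learning rule is free to read, reinforce, and forget:

    import time

    class KnowledgeBase:
        """Toy dynamic knowledge base: facts are stored as data, not code.
        Each fact carries a weight and a timestamp so stale knowledge can
        decay and eventually be discarded."""

        def __init__(self, decay_per_second=0.01, forget_below=0.05):
            self.facts = {}          # key -> (weight, last_seen)
            self.decay = decay_per_second
            self.forget_below = forget_below

        def observe(self, key, strength=1.0):
            """Generic update rule: reinforce a fact each time it is observed."""
            weight, _ = self.facts.get(key, (0.0, 0.0))
            self.facts[key] = (weight + strength, time.time())

        def query(self, key):
            """Return the decayed weight of a fact, forgetting it if obsolete."""
            if key not in self.facts:
                return 0.0
            weight, last_seen = self.facts[key]
            decayed = weight * (1.0 - self.decay) ** (time.time() - last_seen)
            if decayed < self.forget_below:
                del self.facts[key]  # the knowledge has become obsolete
                return 0.0
            return decayed

    # Hypothetical usage while the bot plays:
    kb = KnowledgeBase()
    kb.observe(("ambush_spot", "long_corridor"))
    print(kb.query(("ambush_spot", "long_corridor")))

The point is not this particular decay rule; it is that because the knowledge lives in a data structure rather than in if statements, any learning algorithm that understands the structure can refine it, and several can run in concert over the same data.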

The problem is how to do this quickly enough to compete with rule-based systems. I'll get into that tomorrow.

5 comments:

  1. i can not wait to read the follow on this.

  2. Nice blog, I used to play tons of CS back in the day

  3. Oh and you should turn captchas off, it's annoying and small blogs don't really need it
