Tuesday, June 2, 2009

Designing Emergent AI, Part 1: An Introduction

A lot of people have been curious about how the AI in AI War: Fleet Command works, since we have been able to achieve so much more realistic strategic/tactical results compared to the AI in most RTS games. Part 1 of this series will give an overview of the design philosophy we used, and later parts will delve more deeply into specific sub-topics.

Decision Trees: AI In Most RTS Games
First, the way that AI systems in most games work is via giant decision trees (IF A, then C; IF B, then D; and so on). This can make for human-like behavior up to a point, but it requires a lot of development and ultimately winds up with exploitable flaws. My favorite example, from pretty much every RTS game since 1998, is how they pathfind around walls: if you leave a small gap in your wall, the AI will almost always try to go through that hole. Human players can therefore mass their units at these choke points, "tricking" the AI into using a hole in the wall that is actually a trap. The AI thus sends wave after wave through the hole, dying every time.
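To make the exploit concrete, here is a purely illustrative sketch in C# of what a rules-based approach boils down to; none of this is from AI War or any other real game, and the branch names are invented:

```csharp
using System;

// Purely illustrative sketch of a rules-based decision tree (not from any real
// game's code): a fixed chain of IF/THEN branches that a player can learn and exploit.
public static class RulesBasedAI
{
    public static string ChooseAttackPlan(bool wallHasGap, bool defendersMassedAtGap)
    {
        if (wallHasGap)
        {
            // The classic exploit: the tree always prefers the gap, even when the
            // human player has deliberately turned it into a kill zone.
            return "AttackThroughGap";
        }
        if (defendersMassedAtGap)
        {
            return "SiegeAnotherWallSection";
        }
        return "WaitAndReinforce";
    }

    public static void Main()
    {
        // The AI keeps choosing the trapped gap, wave after wave.
        Console.WriteLine(ChooseAttackPlan(wallHasGap: true, defendersMassedAtGap: true));
    }
}
```

Because the branching is fixed, any player who learns the tree can steer the AI into the same losing move forever.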

Not only does that rules-based decision tree approach take forever to program, it's also exploitable in many ways beyond just the above. Yet, to emulate how a human player might play, that sort of approach is generally what's needed. I started out using a decision tree, but pretty soon realized that this was kind of boring even at the basic conceptual level -- if I wanted to play against humans, I could just play against another human. I wanted an AI that acted in a new way, different from what another human could do, like playing against Skynet or the Buggers from Ender's Game, or something like that. An AI that felt fresh and intelligent, but that played with scary differences from how a human ever could, since our brains have different strengths and weaknesses compared to a CPU. There are countless examples of this in fiction and film, but not so many in games.

Decentralized Intelligence
The approach that I settled on, and which gave surprisingly quick results early in the development of the game, was simulating intelligence in each of the individual units, rather than simulating a single overall controlling intelligence. If you have ever read Prey, by Michael Crichton, it works vaguely like the swarms of nanobots in that book. The primary difference is that my individual units are a lot more intelligent than each of his nanobots, and thus an average swarm in my game might be 30 to 2,000 ships, rather than millions or billions of nanobots. But this also means that my units are at zero risk of ever reaching true sentience -- people from the future won't be coming back to kill me to prevent the coming AI apocalypse. The primary benefit is that I can get much more intelligent results with much less code and far fewer agents.
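As a rough sketch of what "intelligence in each of the individual units" means in practice (this is hypothetical code, not the game's actual implementation), each ship runs its own tiny decision step every cycle, using only local information plus a rough view of what nearby allies are doing:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: there is no single controlling brain; each ship
// decides for itself based on its own state and its immediate surroundings.
public class Ship
{
    public string Name;
    public double Health = 1.0;

    public string DecideAction(IEnumerable<Ship> nearbyAllies, int visibleThreats)
    {
        int allyCount = nearbyAllies.Count(a => a != this);

        if (Health < 0.25) return "Retreat";
        if (visibleThreats > allyCount + 1) return "Regroup";
        return "Engage";
    }
}

public static class SwarmDemo
{
    public static void Main()
    {
        var fleet = new List<Ship>
        {
            new Ship { Name = "Fighter-1", Health = 1.0 },
            new Ship { Name = "Fighter-2", Health = 0.2 },
        };

        // Each ship reaches its own conclusion; the "swarm" behavior is just the sum of these.
        foreach (var ship in fleet)
            Console.WriteLine($"{ship.Name}: {ship.DecideAction(fleet, visibleThreats: 1)}");
    }
}
```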

Strategic Tiers
There are really three levels of thinking to the AI in AI War: strategic, sub-commander, and individual-unit. So this isn't even a true swarm intelligence, because it combines swarm intelligence (at the individual-unit level) with more global rules and behaviors. How the AI decides which planets to reinforce, or which planets to send waves against, is all based on the strategic level of logic -- the global commander, if you will. The method by which an AI determines how to use its ships in attacking or defending at an individual planet is based on a combination of the sub-commander and individual-ship logic.

Sub-Commanders
Here's the cool thing: the sub-commander logic is completely emergent. Based on how the individual-unit logic is coded, the units do what is best for themselves, but also take into account what the rest of the group is doing. It's kind of the idea of flocking behavior, but applied to tactics and target selection instead of movement. So when you see the AI send its ships into your planet, break them into two or three groups, and hit a variety of targets on your planet all at once, that's actually emergent sub-commander behavior that was never explicitly programmed. There's nothing remotely like that in the game code, but the AI is always doing stuff like that. The AI does some surprisingly intelligent things that way, things I never thought of, and it never does the really moronic stuff that rules-based AIs occasionally do.
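Here's a toy example of the kind of individual-unit rule that can produce this sort of group-splitting behavior; the scoring, weights, and names are hypothetical, not the actual game code. Each ship simply discounts targets that lots of its allies have already committed to:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: each unit picks its own target, but penalizes targets
// many allies already chose, so the fleet splits into several attack groups
// even though no explicit "sub-commander" is ever programmed.
public static class EmergentTargeting
{
    public static string PickTarget(
        Dictionary<string, double> targetValues,        // base value of each target
        Dictionary<string, int> alliesAlreadyAssigned)  // how many allies chose it
    {
        return targetValues
            .OrderByDescending(t =>
                t.Value / (1 + alliesAlreadyAssigned.GetValueOrDefault(t.Key)))
            .First().Key;
    }

    public static void Main()
    {
        var values = new Dictionary<string, double>
        {
            ["CommandStation"] = 100, ["MissileTurret"] = 60, ["Factory"] = 50
        };
        var assigned = new Dictionary<string, int>();

        // As ships commit to the command station, later ships peel off to other targets.
        for (int ship = 0; ship < 5; ship++)
        {
            string choice = PickTarget(values, assigned);
            assigned[choice] = assigned.GetValueOrDefault(choice) + 1;
            Console.WriteLine($"Ship {ship} -> {choice}");
        }
    }
}
```

Run that for a handful of ships and they naturally spread across several targets, with no "split into groups" rule written anywhere.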

And the best part is that it is fairly un-trickable. Not to say that the system is perfect, but if a player finds a way to trick the AI, all they have to do is tell me and I can usually put a counter into the code pretty quickly. There haven't been any ways to trick the AI that I'm aware of since the alpha releases, though. The AI runs on a separate thread on the host computer only, so that lets it do some really heavy data crunching (using LINQ, actually -- my background is in database programming and ERP / financial tracking / revenue forecasting applications in TSQL, a lot of which carried over to the AI here). Taking lots of variables into account means that it can make highly intelligent decisions without causing any lag whatsoever on your average dual-core host.
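For a sense of what that looks like, here is a rough, hypothetical illustration (the data shapes and query are invented, and the real code is far more involved): a database-style LINQ query for picking a wave target, run off the main simulation thread so the game never stalls:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

// Hypothetical sketch of "heavy crunching on a background thread via LINQ".
public class EnemyPlanet
{
    public string Name;
    public int DefenseStrength;
    public int StrategicValue;
}

public static class AiThreadSketch
{
    public static Task<string> ChooseWaveTargetAsync(EnemyPlanet[] planets)
    {
        return Task.Run(() =>
            planets
                .Where(p => p.DefenseStrength < 500)       // skip the hardened worlds
                .OrderByDescending(p => p.StrategicValue)  // hit what hurts the players most
                .Select(p => p.Name)
                .FirstOrDefault() ?? "NoViableTarget");
    }

    public static async Task Main()
    {
        var planets = new[]
        {
            new EnemyPlanet { Name = "Murdoch", DefenseStrength = 300, StrategicValue = 80 },
            new EnemyPlanet { Name = "Tyr",     DefenseStrength = 900, StrategicValue = 95 },
            new EnemyPlanet { Name = "Vonal",   DefenseStrength = 200, StrategicValue = 40 },
        };
        Console.WriteLine(await ChooseWaveTargetAsync(planets));
    }
}
```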

Fuzzy Logic
Fuzzy logic / randomization is another key component of our AI. A big part of making an unpredictable AI system is making it so that it always makes a good choice, but not necessarily the 100% best one (since, with repetition, the "best" choice becomes increasingly non-ideal through its predictability). If an AI player only ever made perfect decisions, then to counter it you would only need to figure out for yourself what the best decision is (or create a false weakness in your forces, such as with the hole-in-the-wall example), and you could then predict what the AI will do with a high degree of accuracy -- approaching 100% in certain cases in a lot of other RTS games. With fuzzy logic in place, I'd say that you have no better than a 50% chance of ever predicting what the AI in AI War is going to do... and usually it's far less predictable even than that.
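A minimal sketch of that idea, with made-up scores and thresholds rather than the game's real numbers: rank the options, discard the genuinely bad ones, and then roll weighted dice among the rest.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch of "always a good choice, not always the best one":
// keep only strong options, then pick among them with score-weighted randomness.
public static class FuzzyChooser
{
    static readonly Random Rng = new Random();

    public static string Choose(Dictionary<string, double> scoredOptions)
    {
        // Keep only options within 70% of the best score.
        double best = scoredOptions.Values.Max();
        var viable = scoredOptions.Where(o => o.Value >= best * 0.7).ToList();

        // Roulette-wheel selection weighted by score.
        double roll = Rng.NextDouble() * viable.Sum(o => o.Value);
        foreach (var option in viable)
        {
            roll -= option.Value;
            if (roll <= 0) return option.Key;
        }
        return viable.Last().Key;
    }

    public static void Main()
    {
        var options = new Dictionary<string, double>
        {
            ["AttackWeakPlanet"] = 90, ["RaidSupplyLines"] = 75, ["MassAtWormhole"] = 40
        };
        for (int i = 0; i < 5; i++)
            Console.WriteLine(Choose(options));
    }
}
```

The truly weak option never gets picked, but the player can't know in advance which of the strong options is coming.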

Intelligent Mistakes
Bear in mind that the AIs at the lower difficulty levels make some intentionally-stupid decisions that a novice human might make (such as going for the best target despite whatever is guarding it). That makes the lower-level AIs still feel like a real opponent, but a much less fearsome one. Figuring out ways to tone down the AI for the lower difficulties was actually one of the big challenges for me. Partly it boiled down to just withholding the best tactics from the lower-level AIs, but there were also some intentionally less-than-ideal assumptions that I had to seed into their decision making at those levels.
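As a hypothetical illustration of that "go for the best target despite whatever is guarding it" mistake (again, not the actual game code), a difficulty switch might look something like this:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: below a difficulty threshold the AI scores targets by
// raw value and ignores the guards -- the kind of mistake a novice human makes --
// while higher difficulties discount heavily guarded targets properly.
public class Target
{
    public string Name;
    public double Value;
    public double GuardStrength;
}

public static class DifficultyScaledAI
{
    public static string PickTarget(List<Target> targets, int difficulty)
    {
        if (difficulty <= 4)
        {
            // Novice-style decision: chase the juiciest target regardless of guards.
            return targets.OrderByDescending(t => t.Value).First().Name;
        }
        // Full-strength decision: weigh the defenses against the payoff.
        return targets.OrderByDescending(t => t.Value / (1 + t.GuardStrength)).First().Name;
    }

    public static void Main()
    {
        var targets = new List<Target>
        {
            new Target { Name = "AdvancedFactory", Value = 100, GuardStrength = 9 },
            new Target { Name = "EnergyReactor",   Value = 40,  GuardStrength = 1 },
        };
        Console.WriteLine(PickTarget(targets, difficulty: 3));  // AdvancedFactory (the trap)
        Console.WriteLine(PickTarget(targets, difficulty: 8));  // EnergyReactor (the safer strike)
    }
}
```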

Skipping The Economic Simulation
Lastly, the AI in AI War follows wholly different economic rules than the human players (but all of the tactical rules and most of the strategic rules are the same). For instance, the AI starts with 20,000+ ships in most games, whereas you start with 4 ships per player. If it just overwhelmed you with everything, it would crush you immediately. It would be the same as if all the bad guys in every level of a Mario Bros game attacked you at once -- you'd die immediately (there would be nowhere left to jump). Or if all the enemies in any given level of an FPS game just ran directly at you and shot with perfect accuracy, you'd have no hope.

Think about your average FPS that simulates your involvement in military operations: not all of the enemies are aware of what you and your allies are doing at every moment, so even if the enemies have overwhelming odds against you, you can still win by fighting limited engagements, striking key targets, and so on. I think the same is true in real wars in many cases, but it's not something that you see in the skirmish modes of other RTS games.

This is a big topic that I'll cover more deeply in a future article in this series, as it's likely to be the most controversial design decision I've made with the game. A few people will likely view this as a form of cheating AI, but I have good reasons for having done it this way (primarily that it allows for so many varied simulations, versus one symmetrical simulation). The AI ships never get bonuses above the players, the AI does not have undue information about player activities, and the AI does not get bonuses or penalties based on player actions beyond the visible AI Progress indicator (more on that below). The strategic and tactical code for the AI uses the exact same rules that constrain the human players, and that's where the intelligence of our system really shines.

Asymmetrical AI
In AI War, to offer procedural campaigns that give a certain David-vs-Goliath feel (where the human players are always David to some degree), I made a separate rules system for parts of the AI versus what the humans do. The AI's economy works on internal reinforcement points, wave countdowns, and an overall AI Progress number that gets increased or decreased based on player actions. This lets the players somewhat set the pace of the game's advancement, which adds another layer of strategy that you would normally only encounter in turn-based games. It's a very asymmetrical sort of system that you simply couldn't have in a PvP-style skirmish game with the AI acting as a human stand-in, but it works beautifully in a co-op-style game where the AI is always the enemy.
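To illustrate the shape of that system (the formulas and numbers here are invented for the example, not the game's real tuning), the AI "economy" can be thought of as a couple of counters ticking along, all scaled by AI Progress:

```csharp
using System;

// Hypothetical sketch of the asymmetric AI economy: rather than mining resources,
// the AI accrues reinforcement points and counts down to its next wave, both scaled
// by an AI Progress value that player actions push up or down.
public class AiEconomy
{
    public double AiProgress = 10;           // raised/lowered by player actions
    public double ReinforcementPoints;       // spent defending AI planets
    public double SecondsUntilNextWave = 600;

    public void OnPlayerCapturesPlanet() => AiProgress += 1;
    public void OnPlayerDestroysDataCenter() => AiProgress -= 2;

    // Called once per simulated second.
    public void Tick()
    {
        ReinforcementPoints += AiProgress * 0.1;        // stronger AI, faster buildup
        SecondsUntilNextWave -= 1 + AiProgress * 0.01;  // stronger AI, sooner waves
        if (SecondsUntilNextWave <= 0)
        {
            Console.WriteLine($"Wave launched with budget {ReinforcementPoints:F0}");
            ReinforcementPoints = 0;
            SecondsUntilNextWave = 600;
        }
    }
}

public static class EconomyDemo
{
    public static void Main()
    {
        var economy = new AiEconomy();
        economy.OnPlayerCapturesPlanet();           // the players expand, the AI gets angrier
        for (int second = 0; second < 1200; second++)
            economy.Tick();
    }
}
```

The point of the sketch is just that the players' choices, not a mirrored resource economy, drive how fast the AI escalates.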

Next Time
This provides a pretty good overview of the decisions we made and how it all came together. In the next article, which is now available, I delve into some actual code. If there is anything that readers particularly want me to address in a future article, don't hesitate to ask! I'm not shy about talking about the inner workings of the AI system here, since this is something I'd really like to see other developers do in their games. I play lots of games other than my own, just like anyone else, and I'd like to see stronger AI across the board.

AI Article Index
Part 1 gives an overview of the AI approach being used, and its benefits.
Part 2 shows some LINQ code and discusses things such as danger levels.
Part 3 talks about the limitations of this approach.
Part 4 talks about the asymmetry of this AI design.
Part 5 is a transcript of a discussion about the desirability of slightly-nonideal emergent decisions versus too-predictable "perfect" decisions.
Part 6 talks about player agency versus AI agency, and why the gameplay of AI War is based around keeping the AI deviously reactive rather than ever fully giving it the "tempo."

12 comments:

Chris W said...

Interesting stuff, it reminds me of my college days. Maybe I can break out of dull database programming and back into cool AI coding too.

Anonymous said...

I was working on a system that was again rules-based, but with randomness used to decide which rules to run - based on a goal that needed to be converged on.

My pet project secured sponsorship to an AI conference, but I have left it where it is since there are some flaws that I admit have beaten me.

It's a great read detailing what you have done - and I'd love to see high-level snippets of code or pseudocode showing how you went about it.

Bravo!

Kurt said...

Hi Christopher,

A member over at AiGameDev.com linked to this article via Slashdot in the thread "scaling game AI with relational databases."

I was curious if you could share your database, game and C# wisdom with us there.

Otherwise I'll keep hammering you for details here, especially the rule system as I am writing one myself.

BTW, did you write your own engine, or is it part of the database system you're using, and if so, which database?

Neat stuff, thanks for sharing!

Unknown said...

Very interesting article :) I'm eagerly awaiting the next part.

I'm interested in trying something similar, with individual AI in the swarm setting you explained :)

I don't know yet how it'll work, but I'm considering using first-order logic together with fuzzy logic. (Fuzzy for the same reason as yours: to avoid perfect solution finding.)

We'll see :) I haven't seen any first-order-logic-based game AIs, but it seems like a great possible tool.

But thanks again for your writings :) I'll have a look on the other stuff on your blog when I find time as well :)

Adam said...

I just started taking an AI course in my graduate program, and I have to say that this was an enticing read! I can't wait to see more in this series!

Now I'll have to check out the game. Very neat ideas. Kudos!

Christopher M. Park said...

Okay, you guys ask, and I deliver: the second article in my series is now up, and it includes some parts of the database-inspired code, as well as talking about how the AI uses "danger levels" to make interesting decisions:

http://christophermpark.blogspot.com/2009/06/designing-emergent-ai-part-2-queries.html

I'll also be doing future articles about other topics in this series. I have a laundry list of things I want to discuss, but I'm mostly just doing them in the order in which readers seem most interested. So let me know what else you'd specifically like to see addressed!

I won't be posting the entire code from my AI system, but I'm not shy about posting a lot of snippets. The best approach in your own games would be to look at the ideas I'm presenting, and then come up with your own implementation based on your specific game design. Many parts of the AI in this game (like with any game) are pretty specific to the design. It's the overall concepts that are transferable, and my goal with posting code is to help get the concepts across.

Christopher M. Park said...

Hi Kurt,

I'd definitely be interested in talking to you guys over there. I'll hop over to that forum and register.

Anything you want to know, just ask and I'll do my best to explain my thinking on it.

The game engine is 100% from scratch; the only outside code we are using is the .NET libraries, an ogg vorbis decoder, the SlimDX libraries, and a Mersenne Twister random number algorithm.

To be clear, I'm not really using a true database in the game engine itself -- not a relational database, anyway. What I'm basically doing is applying relational database concepts into native C#, using LINQ as well as using a system of rollups and indexes for efficiency (more on that in my article Optimizing 30,000+ Ships In Realtime In C#).

Christopher M. Park said...

Thanks to everyone else who is commenting here, too -- I'm really super flooded with conversations at the moment, so I can't respond to everything individually, but I do read them all. Thanks for stopping by and posting!

CT said...

When you said you skipped the economic simulation, that pretty much killed it for me. Why not put that at the start? It seems pretty important.

Christopher M. Park said...

CT,

Like I said, that's the most controversial thing. However, the basis of my decision for that was this: the most important thing is what unique challenges the player has to face, unless it's a competitive multiplayer context.

There are a zillion other symmetrical RTS games out there. The economic AI in AI War is patterned more after one of the old space opera type games, or a single-player scripted campaign in a modern RTS. I'll be doing another article about this, but asymmetry can make for a lot of new and interesting situations. I'm not going to convince everyone with this argument, but the players who bought the game seem to be having a blast, so I must have done something right.

The point being that there are many different systems in an AI, and just because one of them is asymmetrical doesn't mean that the others can't be symmetrical. The strategic and tactical elements don't get any advantages, and the AI is segmented in such a way that it's not like they can just overwhelm you or something. It all works out, but it's hard to explain briefly. Hence the whole other article on this one topic sometime soon.

Thanks for stopping by.

cyfr said...

Hello,

I was reading about this and doing some experimentation because I find it so interesting (also because I just read I, ROBOT). I have a question though - did you, or do you know how to, design each agent to be able to analyze patterns to avoid pitfalls?

Christopher M. Park said...

The individual agents aren't really that intelligent -- they do see and recognize many different kinds of threats and opportunities, but they have only a certain percentage chance of really reacting to them.

Something I've been increasingly realizing lately is how important the scale of this game is to the way the AI seems intelligent. It's really using something akin to "the wisdom of crowds" in making its decisions, because it's not really making one decision when there are 100 agents, it's making 100 decisions. Those 100 decisions might include 50 that are very similar to one another, then another 30 that are somewhat divergent, then another 20 that are various forms of more-divergent-still behavior. That makes for a very fluid and alive-feeling adversary, taking advantage of many of the heuristics that we humans innately carry with us.

I've been reading "Thinking, Fast And Slow" lately and realizing just how much of that psychology I had unintentionally, intuitively, or intentionally added to the AI in the game (it runs the gamut in terms of intentionality on my part).