Sunday, July 19, 2009

Designing Emergent AI, Part 4: Asymmetrical Goals

The first part of this article series was basically an introduction to our AI design, and the second part of this article series took a look at some of the LINQ code used in the game, as well as discussing danger levels and clarifying a few points from the first article. The third part of this series covered the limitations of this approach. The topic, this time, is the asymmetrical system used for things like resource management in AI War.

This topic is a very challenging one for me to approach, because there are such heated feelings on both sides of it amongst game designers. Just recently I remembered that I had actually already written an article on this topic, targeted at my alpha testers in order to explain to them the conceptual shift that I was planning at that time. They were skeptical of the idea at the time (as I would have been if someone else had suggested it to me), and this article went a long way toward convincing them that the concept might have some merit -- the most convincing thing of all, of course, was when they could actually see how well the AI worked in practice.

The game had been in development for less than three months at the time I originally wrote this, but amazingly almost all of it still holds true now, two months after the game's public release. The things that no longer hold true are how some of the AI types played out in the end, and the division of AI ships into offensive/defensive groups -- minor points whose discrepancies existing AI War players will notice, but which make no difference to the overall ideas being presented here.

ABOUT THE ARTIFICIAL INTELLIGENCE IN AI WAR
Originally Written January 2009

About the AI in most RTS Games
Most Real-Time Strategy (RTS) games use a style of AI that tries to mimic what a human player might do. The AI in these games has all the same responsibilities and duties as the human players, and the success of the AI is predicated on it properly simulating what a human might do.

These sorts of AI rely heavily on exceedingly complex Finite State Machines (FSMs) -- in other words, "if the situation is X, then do Y, then Z." This sort of deterministic algorithm takes a long time to design and program, and is overall pretty predictable. Worst of all, these algorithms tend not to have very interesting results -- invariably, facing this sort of AI feels like playing against another human, only stupider yet faster. Clever players are able to trick the AI by finding patterns and weaknesses in the algorithms, and the AI tends to be slow to respond to what the players are doing -- if it responds at all. This is after months of work on the part of some poor AI programmer(s).
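
To make the contrast concrete, here's roughly the shape of code that this approach forces on you. This is just an illustrative sketch -- all of the state names, snapshot fields, and thresholds below are invented, not taken from any actual game:

    // All names here are hypothetical illustrations, not code from any actual RTS.
    enum AIState { Build, Expand, Attack, Defend }

    class GameSnapshot
    {
        public int MyWorkerCount, MyBaseCount;
        public double MyArmyValue, EnemyArmyValue;
    }

    class ScriptedAI
    {
        AIState state = AIState.Build;

        // "If the situation is X, then do Y, then Z" -- every transition is hand-authored,
        // so anything the designer did not anticipate falls through the cracks.
        public void Update(GameSnapshot snap)
        {
            switch (state)
            {
                case AIState.Build:
                    if (snap.MyWorkerCount >= 20) state = AIState.Expand;
                    break;
                case AIState.Expand:
                    if (snap.MyBaseCount >= 3) state = AIState.Attack;
                    break;
                case AIState.Attack:
                    if (snap.MyArmyValue < snap.EnemyArmyValue * 0.5) state = AIState.Defend;
                    break;
                case AIState.Defend:
                    if (snap.MyArmyValue > snap.EnemyArmyValue) state = AIState.Attack;
                    break;
            }
        }
    }

Every branch has to be hand-authored, which is where those months of programmer time go -- and where the exploitable patterns come from.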


Nondeterministic AI in other Games
In most modern AI design, nondeterminism is an important goal -- given the same inputs, the AI shouldn't always give EXACTLY the same output. Otherwise the AI is inhumanly predictable. The chief ways of combating this predictability are fuzzy logic (where inputs are "fuzzified," so that the outputs are correspondingly less precise) and some variation of a learning AI, which grows and changes over time (its historical knowledge makes it act differently over the course of its existence).
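
As a small sketch of what "fuzzifying" can look like in practice -- the plus-or-minus 10% band here is an arbitrary illustration, not any game's actual tuning:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Fuzzified scoring: identical inputs no longer produce identical outputs, because
    // each candidate's score is perturbed by a small random factor before the comparison.
    static class FuzzyChooser
    {
        static readonly Random rand = new Random();

        public static T PickBest<T>(IEnumerable<T> candidates, Func<T, double> score)
        {
            return candidates
                .OrderByDescending(c => score(c) * (0.9 + rand.NextDouble() * 0.2))
                .First();
        }
    }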

The problem with a standard learning AI is that it can easily learn the wrong lessons and start doing something strange. Debugging is very difficult, because it's hard to know what the AI thinks it is doing and why. Also, until it has some base amount of historical data, it seems that it will either be a) wildly unpredictable and unproductive, or b) using deterministic methods that make it predictable. It can be like teaching an amoeba to tap-dance -- but instead it starts setting things on fire, and you wonder what made it think that was part of what it should do.

Therefore, even with a learning AI, you're likely to have a pretty predictable early game. Plus, if the game supports saving, the entire historical knowledge of the AI would have to be saved if the AI is to keep track of what it was doing (and keep its learned intelligence). This can make save files absolutely huge, among other various disadvantages.


Semi-Stateless AI in AI War
For AI War, I wanted a nondeterministic AI that was not dependent on having any historical knowledge. Of course, this means a pretty fair amount of fuzzy logic by definition -- that is indeed in place -- but I also wanted some of the characteristics of a learning AI. Essentially, I wanted to model data mining practices in modern business applications (something I'm intimately familiar with from my day job). My rule of thumb was this: at any given point in time, the AI should be able to look at a set of meaningful variables, apply some rules and formulas, and come to the best possible conclusion (fuzzified, of course).

A human example: At chess tournaments, often the grandmasters will play against the normal-human players 40 or so at a time (purely for fun/publicity). The 40 lower-ranked players sit at tables in a ring around the room, each with their own chess game against the grandmaster. The grandmaster walks slowly around the room, making a move at each game in sequence. There is no way that the grandmaster is remembering the state of all 40 games; rather, he analyzes each game as he comes to it, and makes the best possible move at the time. He has to think harder at games where there is a particularly clever player, but by and large the grandmaster will win every game out of the 40 because of the skill gap. The general outcome is that the grandmaster picks out the cleverest of the other players and lets them win on purpose (if there is a prize; if the grandmaster is just showing off, he'll simply beat everyone).

The AI in AI War is a lot like that grandmaster -- it is capable of coming to the game "blind" about once per second, and making the best possible choices at that time. Minor bits of data are accumulated over time and factored in (as the grandmaster might remember details about the cleverer opponents facing him), but overall this is not necessary. The AI also remembers some data about its past actions to a) help it follow through on past decisions unless there is a compelling reason not to, and b) to help prevent it from falling into patterns that the opponents can take advantage of.
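
To make that concrete, here is a rough sketch of what such a per-second pass might look like. The interface names, the 0.8 repetition penalty, and the ten-entry memory are all illustrative placeholders rather than the game's actual code:

    using System;
    using System.Collections.Generic;

    // Hypothetical placeholder types; the real game's data structures are of course richer.
    interface GalaxySnapshot { IEnumerable<AIDecision> GetCandidateActions(); }
    interface AIDecision { double ExpectedValue(GalaxySnapshot snap); }

    class SemiStatelessAI
    {
        // The only memory carried between passes: just enough to follow through on recent
        // decisions and to avoid repeating the same choice in an exploitable pattern.
        readonly Queue<AIDecision> recentDecisions = new Queue<AIDecision>();
        static readonly Random rand = new Random();

        public AIDecision EvaluateOncePerSecond(GalaxySnapshot snap)
        {
            AIDecision best = null;
            double bestScore = double.MinValue;

            foreach (var option in snap.GetCandidateActions())
            {
                double score = option.ExpectedValue(snap);            // rules and formulas
                score *= 0.9 + rand.NextDouble() * 0.2;               // fuzzify the result
                if (recentDecisions.Contains(option)) score *= 0.8;   // discourage repetition
                if (score > bestScore) { bestScore = score; best = option; }
            }

            if (best != null)
            {
                recentDecisions.Enqueue(best);
                if (recentDecisions.Count > 10) recentDecisions.Dequeue();
            }
            return best;
        }
    }

Like the grandmaster, it arrives "blind," evaluates what is actually on the board, and keeps only a small notebook of what it did recently.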


Decentralization of Command In Other RTS Games
One important factor in creating an interesting AI opponent is that it must be able to do multiple things at once. In all too many games, the AI will just have one military arm moving around the board at a time, which is not what a human player would typically do. Where are the diversions, the multi-front attacks, the flanking?

In AI War, it was important to me that the tactical and strategic commanders be able to perform as many different activities as make sense given the units at hand. Typically this would be accomplished by creating multiple independent "agents" per AI player, and assigning control of each unit to a particular agent. You then run into issues of the agents having to negotiate and coordinate among themselves, but in AI War's semi-stateless environment it is even worse -- how do you intelligently divide up the units among these arbitrary agents if you have to keep reassigning them? And how do you know when to spawn more agents, versus consolidate useless ones? These problems can all be solved, but they are not something to be attempted lightly, nor will they be kind to the CPU. The central problem is that of resource management: not only the delegation of control of existing units, but trying to balance tactical/strategic elements with the generation of resources and the production of new units.

Which brings me to my next point...


Resource Management In AI War
I struggled with the command decentralization issue for several days, trying to come up with a solution that would meet all of my many criteria, and ultimately came to a realization: what matters is not what the AI is actually doing, but what the visible effect is to the human players. If the AI struggles with all these things invisibly, burning up programmer hours and CPU cycles, and then comes to result A, wouldn't it be better to just shortcut some of that and have it arrive at result A? Specifically, I realized that the economic simulations had a very low payoff as far as the players were concerned. If I took out the human-style economic model for the AI -- no resources, no techs, just a generalized, linearized ship-creation algorithm -- what would the impact be?

First of all, this change makes it so that the AI players do not use builders, factories, reactors, or any of the other economy-related ships. This changes the gameplay to a mildly significant degree, because the human players cannot use the strategy of targeting economic buildings to weaken the AI. This is definitely a drawback, but in most RTS games the AI tends to have so many resources that this is generally not a viable strategy, anyway.

Having decided that this gameplay shift was acceptable, I set about designing a ship-creation algorithm. This is harder than it sounds, as each AI has to know a) what ships it is allowed to build at any given point in the game, b) how much material it is allowed to spend per "resource event," and other factors that keep things fair. Note that this is NOT your typical "cheating AI" that can just do whatever it wants -- the AI plays by different rules here, but they are strict rules that simulate essentially what the AI would otherwise be doing, anyway.
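
As a hedged illustration of what "strict rules" can mean here, a sketch along these lines -- the budget formula, the tech gating, and all the numbers are invented for this example, not AI War's real values:

    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical ship-type data for the sketches in this article.
    class ShipType
    {
        public string Name;
        public int RequiredTechLevel;
        public double Cost;
        public double CombatStrength;
    }

    class ResourceEventRules
    {
        // Generalized, roughly linear ramp in place of mines, reactors, and factories.
        public double BudgetPerEvent(int aiDifficulty, int minutesElapsed)
        {
            return 100 + aiDifficulty * 10 * minutesElapsed;
        }

        // Strict gating: the AI may only build what its current tech level allows.
        public IEnumerable<ShipType> AllowedShipTypes(int aiTechLevel, IEnumerable<ShipType> allTypes)
        {
            return allTypes.Where(t => t.RequiredTechLevel <= aiTechLevel);
        }
    }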


Decentralization of Command In AI War
Now that the economy is handled, we are back to the issue of decentralization. Each AI is now allowed to build a certain number of ships of specific types during each resource event, but how to intelligently choose which to build? Also, how to decide what to do with the ships that already exist?

First off, the AI needs to divide its units into two categories -- offensive and defensive. Most games don't do this, but in AI War this works very effectively. Each AI decides that it wants to have a certain number of ships of certain types defending each planet or capital ship that it controls. Its first priority is production to meet those defensive goals.
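
Something along these lines, purely as a sketch -- the quota formula and its constants are made up for illustration:

    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical planet data for this sketch.
    class Planet
    {
        public int DefenderCount;
        public int StrategicValue;
        public bool BordersHumanTerritory;
    }

    class DefensePlanner
    {
        // Planets with the largest garrison shortfalls get production priority.
        public IEnumerable<Planet> PlanetsNeedingDefenders(IEnumerable<Planet> aiPlanets)
        {
            return aiPlanets
                .Where(p => p.DefenderCount < DesiredGarrison(p))
                .OrderByDescending(p => DesiredGarrison(p) - p.DefenderCount);
        }

        // More valuable or more exposed planets want more defenders.
        int DesiredGarrison(Planet p)
        {
            return 20 + p.StrategicValue * 5 + (p.BordersHumanTerritory ? 30 : 0);
        }
    }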

Any units that aren't needed for defense are considered offensive units. These get handed over to the strategic-routing algorithm, which is described next. Each unit-producing ship controlled by the AI player will build offensive units based on a complex algorithm of fuzzy logic and weighting -- the result is a fairly mixed army that trends towards the strongest units the AI can produce (and the favored units of a given AI type), but which never fully stops building the weaker units (which are always still useful to some degree in AI War).
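
A sketch of what that weighted, fuzzy build choice might look like, reusing the hypothetical ShipType from the earlier sketch (the weighting constants here are invented, not the game's real ones):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class OffensiveBuilder
    {
        static readonly Random rand = new Random();

        // Weighted random pick: strength times an AI-type preference bonus, with a floor
        // so the cheaper, weaker ships never drop out of the mix entirely.
        public ShipType ChooseBuild(IList<ShipType> buildable, Func<ShipType, double> favorBonus)
        {
            var weights = buildable
                .Select(t => Math.Max(1.0, t.CombatStrength * favorBonus(t)))
                .ToList();

            double roll = rand.NextDouble() * weights.Sum();
            for (int i = 0; i < buildable.Count; i++)
            {
                roll -= weights[i];
                if (roll <= 0) return buildable[i];
            }
            return buildable[buildable.Count - 1];
        }
    }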


Strategic Routing in AI War
The strategy planning in AI War consists of deciding which units to send to which planets. Defensive units tend not to leave their planet unless the thing they are protecting also leaves, so strategic routing pretty much just means moving the offensive units around.

This takes the form of two general activities: 1) attacking -- sending ships to the planet at which they should be able to do the most amount of damage; and 2) retreating from overwhelming defeats, when possible (no other RTS game AI that I have encountered has ever bothered with this).

The AI does not use cheap factors such as player score, who the host is, or any other non-gameplay variables in these sorts of decisions. All its decisions are based on what units are where, and what it currently calculates the outcome of a conflict is most likely to be.
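
Putting those pieces together, a rough sketch of the routing decisions might look like this -- the placeholder types, the damage estimate, and the 3x "overwhelming" threshold are all invented for illustration:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical placeholder types for this sketch.
    class Fleet { public double Strength; }
    class TargetPlanet { public double EnemyStrength; public double StructureValue; }

    class StrategicRouter
    {
        static readonly Random rand = new Random();

        // Attack: send the fleet where its expected damage (fuzzified) is highest.
        public TargetPlanet ChooseAttackTarget(Fleet fleet, IEnumerable<TargetPlanet> reachable)
        {
            return reachable
                .OrderByDescending(p => ExpectedDamage(fleet, p) * (0.9 + rand.NextDouble() * 0.2))
                .FirstOrDefault();
        }

        // Retreat: pull out when the projected outcome is an overwhelming loss.
        public bool ShouldRetreat(Fleet fleet, TargetPlanet battleground)
        {
            return battleground.EnemyStrength > fleet.Strength * 3.0;
        }

        double ExpectedDamage(Fleet fleet, TargetPlanet p)
        {
            // Only in-game unit data -- no player score, host status, or other meta variables.
            return Math.Min(fleet.Strength, p.EnemyStrength + p.StructureValue);
        }
    }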


Tactical Decisions in AI War
Tactical decision-making is actually one of the conceptually simpler parts of the AI. Each unit tries to get to its optimal targeting range, and targets the ships it is best able to hurt. It will stay and fight until such time as it dies, all its enemies are dead, or the Strategic Routing algorithm tells it to run away.
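
A minimal sketch of that per-unit logic, with placeholder types and a one-dimensional stand-in for real coordinates:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class EnemyShip { public double Position; }

    class TacticalUnit
    {
        public double OptimalRange;
        public double Position;

        public void Act(IEnumerable<EnemyShip> enemies, Func<EnemyShip, double> damageAgainst)
        {
            // Target the enemy this unit can hurt the most.
            var target = enemies.OrderByDescending(damageAgainst).FirstOrDefault();
            if (target == null) return;   // all enemies are dead -- nothing to do

            // Close to optimal range, then hold position and fire until killed, victorious,
            // or told to retreat by the strategic routing layer.
            double distance = Math.Abs(target.Position - Position);
            if (distance > OptimalRange)
                Position += Math.Sign(target.Position - Position);
        }
    }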


AI Types in AI War
When I think about the human-emulating AI found in most RTS games, it strikes me that this is extremely limiting. In most genres of game, you face off against a variety of different enemies that have powers that differ from those of the human players. Some opponents are vastly stronger, others are weaker, and all are distinctive and interesting in their own way. In RTS games, everyone is pretty much the same -- or at least things are balanced to the point of approximate fairness -- and I think this harms the longevity of these games.

What seems more interesting to me, and what I've decided to do in AI War, is to provide a wide variety of types of AI, each with their own strengths, weaknesses, and genuinely unique behaviors. Some have ships that the player can never get, others start with many planets instead of one, some never capture planets but instead roam around the neutral planets as lawless raiders. The possibilities are endless, especially when playing against multiple AI types in a single game.

The result is situations that are often intentionally unfair -- as they would be in real war. You can simulate being the invading force into a galaxy controlled by evil aliens, or simulate the opposite -- they're invading you. I think of this as being rather like the situation in Ender's Game. You can have AIs that are timid and hide, or others that come after you with unbridled aggression -- Terminators, or Borg, or whatever you want to call them. Some will use strange, alien combat tactics to throw you off guard, while others will routinely use confusing diversions to mask what their true objective is. Others will outclass you in terms of technology from the very start, others will have vastly more manpower but inferior technology -- the list goes on and on.

The whole goal of a game like this is to provide an interesting, varied play experience for the players. If all the AIs are essentially the same except for their general demeanor (as in most other games), that doesn't provide a lot of options. AI War's style of varied AIs is not "AI cheating" in the traditional sense -- each type of AI has specific rules that it must follow, which are simply different than the rules the human players are held to.

Think of this as the offense and defense in a football game: each team has wildly different goals -- one to score, one to not let the other team score -- but the overall success of one team versus another is determined based on how well they do in their respective goals. In football, the teams also routinely switch off who is on offense and who is on defense, and that's where the analogy to AI War starts to break down a bit. In AI War, it would be more like if one football team always played offense, but the defenders were allowed to have an extra three guys on the field. Maybe the defenders could only score by forcing a turnover and running the ball all the way back, but that would be significantly easier due to the extra players.

The AI Types in AI War are like that -- they unbalance the rules on purpose, not to cheat, but to provide something genuinely new and varied.


The Future of AI in AI War
At present, the AI does not yet make very interesting tactical decisions -- flanking, firepower concentration on key targets, etc. These and other "behaviorlets" will be added to future iterations of the AI. The AI will evaluate the behaviorlet conditions during tactical encounters, and employ them when it deems it makes sense to do so.

In fact, this concept of "behaviorlets" is the main thing that is left to do with the AI all across the board. Right now the AI is very by-the-numbers, which makes it predictable except where the fuzzy logic makes it somewhat randomized. This is comparable to the end state of many other RTS game AIs (yet it took me days instead of months or years), but with the architecture of AI War, I can continue to add in new "behaviorlets" in all aspects of the AI, to make it progressively more formidable, varied, and human-seeming. Example behaviorlets include scouting, sniping, mine laying and avoidance, using staging areas for attacks, economy targeting, use of transports, planet denial, and more. All of these things can be programmed into the existing architecture with relative ease; the complexity is bounded to the behaviorlet itself, rather than having to worry about how the behaviorlet will interact with larger elements of (for example) your typical AI finite state machine. This makes for a very object-oriented approach to the AI, which fits my thinking style.
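
To illustrate the shape of that architecture -- and this is just a sketch with a hypothetical interface, not the game's actual code -- a behaviorlet plugs into the normal per-pass loop like so, reusing the hypothetical GalaxySnapshot from the earlier sketch:

    using System.Collections.Generic;

    // Each behaviorlet is self-contained: its own applicability test, its own orders.
    // The per-pass runner just asks each one whether it applies right now, so the
    // complexity of a new behavior never leaks into a larger state machine.
    interface IBehaviorlet
    {
        bool Applies(GalaxySnapshot snap);    // e.g., "is mine avoidance relevant right now?"
        void Execute(GalaxySnapshot snap);    // issue the orders for this behavior
    }

    class BehaviorletRunner
    {
        readonly List<IBehaviorlet> behaviorlets = new List<IBehaviorlet>();

        public void Register(IBehaviorlet b) { behaviorlets.Add(b); }

        public void RunPass(GalaxySnapshot snap)
        {
            foreach (var b in behaviorlets)
                if (b.Applies(snap))
                    b.Execute(snap);
        }
    }

Adding scouting, mine laying, or transport use then means writing one new class, not re-untangling a giant switch statement.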

Computer Requirements
This is a very modular AI system that can be extended over time, and yet it is also extremely efficient -- it is already able to control tens of thousands of ships without significant slowdown on a dual-core computer. It is being designed with dual-core computers in mind, however, and is highly unlikely to run well at all on a single-core machine.

On the other hand, the AI logic is only run on the game host (also unusual for an RTS game -- possibly another first), which means that single-core computers can join someone else's multiplayer game just fine. Given that this is the largest-scale RTS game in existence at the moment in terms of active units allowed during gameplay (by at least a factor of two), this is also pretty notable.
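
As a sketch of how that host-only, second-core arrangement can be structured (reusing the hypothetical SemiStatelessAI types from the earlier sketch; the queue hand-off and the one-second sleep are illustrative, not the game's actual plumbing):

    using System;
    using System.Collections.Generic;
    using System.Threading;

    class HostAIThread
    {
        readonly Queue<AIDecision> pendingOrders = new Queue<AIDecision>();
        readonly object gate = new object();

        // Runs only on the game host; clients never execute AI logic, they just receive
        // the resulting orders as ordinary game commands.
        public void Start(SemiStatelessAI ai, Func<GalaxySnapshot> takeSnapshot)
        {
            var thread = new Thread(() =>
            {
                while (true)
                {
                    var decision = ai.EvaluateOncePerSecond(takeSnapshot());
                    if (decision != null)
                        lock (gate) pendingOrders.Enqueue(decision);   // hand off to the sim
                    Thread.Sleep(1000);                                // roughly once per second
                }
            });
            thread.IsBackground = true;
            thread.Start();
        }

        // Called from the main simulation loop on the host each tick.
        public AIDecision DequeueOrder()
        {
            lock (gate) return pendingOrders.Count > 0 ? pendingOrders.Dequeue() : null;
        }
    }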

In Conclusion (We Return To The Present)
As noted at the start of this article, the main body of this article was written when the game was still in alpha stages, with the prior few months having been spent testing out the mechanics, the networking, the graphics pipeline, etc, in pure player-vs-player (pvp) modes. I had spent a week or so with the AI in a more traditional model, and felt like that was not working the way I wanted on any level, and so I decided to go back to the underlying principles of what I was trying to accomplish to see if there was a simpler approach.

After a few more days of thought, and then maybe three days of coding, I had a basic prototype of the AI working. I wrote this article then to let my alpha testers know what I was planning, and then spent the next three months refining, expanding, and completing the AI (and the rest of the game) before a beta in April. The game spent a month in beta, ironing out bugs and balance issues, and then went on sale on the Arcen site and then Stardock's Impulse in May.

This asymmetrical AI is the aspect of the game design that is most commonly criticized by other programmers or game designers who have not actually played the game. Judging from my growing list of players, and the few reviews that the game has received so far (our exposure is still low, but growing due to the game's huge popularity on Impulse and elsewhere), this hasn't seemed to be an issue for anyone who actually plays the game. That comes back to my core realization with the AI: it doesn't matter what the AI is really doing; it matters what the AI seems to be doing, and what it makes the player do.

This style of AI creates a lot of unique gameplay situations, provides a good, fair-seeming challenge, and in general offers a degree of variety you don't encounter with other AIs. This isn't going to power sentient robots anytime soon, but in terms of creating an interesting game it seems to have paid off. That's really the goal, isn't it? To create a game that people find fun, challenging, and interesting? Getting too hung up on the semantics of one approach versus another, and which is more "authentic," is counterproductive in my book. We really ought to have more games experimenting with various approaches to AI design, because I think it could make for some really fun games in all sorts of genres.

Next Time?
I have a number of other articles planned about the game, most notably a series on game design ideas that flopped for one reason or another and thus didn't make it into the final game -- an exploration of the failures-along-the-way is always fun and illuminating. But as far as the topic of AI goes, I've covered all of the points I set out to cover. Unless readers bring up more topics that they'd like me to address, this will probably be the final article in my AI series.

Part 5 of this series is a transcript of a discussion about the desirability of slightly-nonideal emergent decisions versus too-predictable "perfect" decisions.

AI Article Index
Part 1 gives an overview of the AI approach being used, and its benefits.
Part 2 shows some LINQ code and discusses things such as danger levels.
Part 3 talks about the limitations of this approach.
Part 4 talks about the asymmetry of this AI design.
Part 5 is a transcript of a discussion about the desirability of slightly-nonideal emergent decisions versus too-predictable "perfect" decisions.
Part 6 talks about player agency versus AI agency, and why the gameplay of AI War is based around keeping the AI deviously reactive rather than ever fully giving it the "tempo."

5 comments:

  1. Nice articles, Chris.

    "There is no way that the grandmaster is remembering the state of all 40 games;"

    It is easy to underestimate the ability of a chess grandmaster.

    Try stealing a pawn in one of the simultaneous matches, and you'll discover that the grandmaster is generally able to remember the position of the pieces, as well as the sequence of moves taken to reach that position.

    George Koltanowski once played 56 blindfold games simultaneously, winning 50 and drawing 6.
    http://www.chessbase.com/columns/column.asp?pid=163

  2. Well, that's a good point. Though I would suggest that it's possible that if you steal the pawn, the grandmaster thinks "how did the board ever get to this state?" and then discovers the cheat that way. Losing any material is notable, and being down material to someone way less skilled than you is notable.

    That's really impressive about George Koltanowski, and I'm sure there must be a lot of grandmasters like him. At U18 nationals I saw some less-famous grandmasters who I think weren't remembering the entire state of the board, but then again I could be wrong.

    Still, there's the Chess practice method where you set up a late game board and then try to win it in X number of moves. That implies a certain bit of statelessness, at the very least, but it's less exciting than the grandmaster example. :)

    Thanks for commenting, and I'm glad you enjoyed the articles!

  3. From studies on how grandmasters play, they actually work much like state machines (running a lot like how your AI is set up).

    They tested them by flashing a chess board for a fraction of a second. The masters were able to set up all the boards from memory, with the exception of boards with illegal positions (on those they set up all the pieces but couldn't place the illegal ones).

    It was concluded that, since they have played so many games, just seeing a board lets them recall the best move.

  4. I have read all 6 parts of this (and yes, I know it's 4.5 years later).

    I was just introduced to this game from a friend. I'm nowhere near experienced yet.

    But I would like to say that abstracting out the economy of the game, while it solves a number of issues with the AI as you point out, does affect player ability. Just as you mentioned that some people go for ultra low alertness over longer time, and others go for "100% completeness", some of us like the idea of economic warfare. Having a completely abstracted economy AI that doesn't respond to the game makes that sort of behavior unplayable.

    Maybe this won't actually make a difference -- being huge, and old, you could say that the AI has reserve materials, enough forced labor, etc.; maybe the idea that a small band of rebels could run the empire out of resources just isn't feasible.

    But if the empire, like most real-world empires, is constantly spending what it takes in, with no significant strategic reserve, then attacking the supply lines should affect what it can build -- which should just be more rules ("the AI plays by different rules here, but they are strict rules that simulate essentially what the AI would otherwise be doing, anyway"), and in particular, if you actually do enough economic harm to be noticeable, we should see both an increase in the AI alertness, and the potential for reduced future assaults (if you survive the initial retribution, anyways :-).

  5. In terms of the "abstracted out" economy, economic warfare is not something that goes away. It just means that the AI uses different rules than you do.

    For example, you can impact the AI's ability to produce ships all over the place, but the means by which those production hits take place have nothing to do with how you do yours. The AI would not be very good at playing your sort of game, but it's good at playing its sort.

    Examples off the top of my head:
    1. Your management of AI Progress has a direct impact on its resource management. This includes raiding data centers, among other things.
    2. Your ability to "strip mine" planets but leave them under the control of the AI really affects what the AI can do on border planets.
    3. Taking fewer planets of your own impacts the AI's ability to ramp up.
    4. Taking a planet with a large number of connections makes it so that the AI winds up having to spread its forces very thinly.

    I think that you've mistaken my meaning of "abstracted out the economy" for "the AI has no rules and can just magically do whatever." The AI is excessively rules-bound, and you have ways of impacting its throughput or results or both. That's basically what the economic warfare would be about (denial of production).

    Hope that makes sense!

