Wednesday, June 17, 2009

How to stream an ogg file using DirectSound in SlimDX.

With a nudge in the right direction from the SlimDX devs, I've finally got a working streaming solution for DirectSound in SlimDX. This will be a part of the 1.007 release of AI War, improving performance in the game, but I also wanted to share the code for this since it isn't out there anywhere else.

First of all, let me note that this entire solution is really just an adaptation of an MDX DirectSound streaming solution by Dr. Gary. Secondly, it uses the wonderful Ogg Vorbis Decoder (a C# wrapper for the standard ogg dlls) by Atachiants Roman. And, lastly, of course this uses the SlimDX library.

Here's the code, including a compiled test executable. Do with it what you will. If you want a deep explanation of what is going on there, you want to read Dr. Gary's article above. This is basically his work, after all, and he explains it very nicely.

Here are the major things I added/changed:

1. SlimDX is used instead of MDX, which obviously necessitated a lot of changes.

2. Rather than use Dr. Gary's wav stream class, I'm using the ogg stream class. This required even more changes, especially since the ogg calls are apparently not threadsafe (Dr. Gary was using his wav stream object from multiple threads, which the ogg library did not like one bit).

3. The way I am pulling data off of the ogg stream is a bit different from how the original code was pulling data off of the wav stream. In the original, the stream was seeking every time, to prevent data skips and loss. In my version, I'm using a loop to make sure that this isn't necessary, and thus there's a minor performance gain.

4. I took out support for looping. This code was originally being adapted for AI War, which doesn't need looping in the same sense. If you want to loop the file, just do like I'm doing in the sample and start playing it again after it completes. My goal was to keep this simple.

5. There are a variety of other minor changes, just switching it to be more in line with the AI War codebase. This was for my own purposes, but since I wanted to make the code public these changes are also seen here.
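For the curious, the loop mentioned in point 3 is just the classic read-until-full pattern: the ogg decoder stream can return fewer bytes than you ask for on any single call, so you keep reading until the section of the DirectSound buffer is filled. Here's a minimal sketch against a plain System.IO.Stream (the helper name is mine, not from the actual code):

```csharp
using System;
using System.IO;

static class StreamUtil
{
    // Read exactly count bytes (or as many as remain) into buffer at offset,
    // looping over short reads. Decoder streams often return fewer bytes per
    // call than requested, so a single Read is not enough to fill a whole
    // DirectSound buffer section.
    public static int ReadFully( Stream source, byte[] buffer, int offset, int count )
    {
        int total = 0;
        while ( total < count )
        {
            int read = source.Read( buffer, offset + total, count - total );
            if ( read <= 0 )
                break; //end of stream reached
            total += read;
        }
        return total; //may be less than count only at end of stream
    }
}
```

No seeking needed: the stream's own position advances with each partial read, which is where the minor performance gain comes from.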

That's it! Happy coding.

Monday, June 15, 2009

Designing Emergent AI, Part 3: Limitations

The first part of this article series was basically an introduction to our AI design, and the second part of this article series took a look at some of the LINQ code used in the game, as well as discussing danger levels and clarifying a few points from the first article. The second article was basically just for programmers and AI enthusiasts, as is this third one. The topic, this time, is the manner in which this approach is limited.

This Approach Is Limited?
Well, of course. I don't think that there is any one AI approach that can be used to the exclusion of all others. And our AI designs are already heavily hybridized (as the first article in the series mentions), so it's not like I'm using any single approach with AI War, anyway. I used the techniques and approaches that were relevant for the game I was making, and in some respects (such as pathfinding), I blended in highly traditional approaches.

The purpose of this article is to discuss the limits of the new approaches I've proposed (the emergent aspects, and the LINQ aspects), and thereby expose the ways in which these techniques can be blended with other techniques in games beyond just AI War. Let's get started:

Where Emergence Quickly Breaks Down
To get emergent behavior, you need to have a sufficient number of agents. And to keep them from doing truly crazy stuff, you need to have a relatively simple set of bounds for their emergence (in the case of AI War, those bounds are choosing what to attack or where to run away to. Basically: tactics). Here are some examples of places that I think it would be massively ineffective to try for emergent behavior:

1. A competitive puzzle game like Tetris. There's only one agent (the cursor).

2. Economic simulations in RTS games. That's all too interrelated, since any one decision has potentially massive effects on the rest of the economy (e.g., build one building and the opportunity cost is pretty high, so some sort of ongoing coordination would be very much needed).

3. Any sort of game that requires historical knowledge in order to predict player actions. For example, card games that involve bluffing would not fare well without a lot of historical knowledge being analyzed.

4. Any game that is turn-based, and which has a limited number of moves per turn, would probably not do well with this because of the huge opportunity cost. Civilization IV could probably use emergent techniques despite the fact that it is turn-based, but Chess or Go are right out.

Why Emergence Works For AI War
1. There are a lot of units, so that gives opportunities for compound thinking based on intelligence in each unit.

2. Though the decision space is very bounded as noted above (what to attack, or where to retreat to), that same decision space is also very complex. Just look at all of the decision points in the LINQ logic in the prior article. There are often fifty ways any given attack could succeed to some degree, and a thousand ways it could fail. This sort of subtlety lets the AI fuzzify its choices between the nearly-best options, and the result is an effective unpredictability in its tactics that feels relatively human.

3. There is a very low opportunity cost per decision made. The AI can make a hundred decisions in a second for a group of ships, and then as soon as those orders are carried out it can make a whole new set of decisions if the situation has changed unfavorably.

4. Unpredictability is valuable above almost all else in the tactics, which works well with emergence. As long as the AI doesn't do something really stupid, just having it choose effectively randomly between the available pretty-good choices results in surprise tactics and apparent strategies that are really just happenstance. If there were fewer paths to success (as in a racing game, for example), the decision space would be too tightly bounded to offer any real opportunities for desirable emergent behavior.
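The "choose effectively randomly between the available pretty-good choices" idea from point 4 can be sketched as a sort followed by a fuzzy pick from the top few results (all of the names here are hypothetical, not from AI War's code):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class FuzzyChooser
{
    // Sort candidates by descending score, then pick randomly among the top
    // few, so the AI stays unpredictable without ever doing something stupid.
    public static T ChooseNearBest<T>( List<T> candidates, Func<T, double> score,
        Random rand, int topN )
    {
        var ranked = candidates.OrderByDescending( score ).ToList();
        int poolSize = Math.Min( topN, ranked.Count );
        return ranked[rand.Next( poolSize )];
    }
}
```

With scores of 9, 8, 5, 2, and 1 and a topN of 3, the unit will only ever pick 9, 8, or 5 -- never the two genuinely bad options.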

Learning Over Time
One intentional design choice that I made with AI War was to keep it focused on the present. I remember seeing Grandmaster Chess players at Chess tournaments, where they would play against 50 regular ranked players at once, walking around the room and analyzing each position in turn. On most boards they could glance at the setup and make their move almost instantly. On a few they would pause a bit more. They would still typically win every game, just choosing whoever put up the best fight to cede a victory to (that person typically got a prize).

I figured it would be nice to have an AI with basically those capabilities. It would look at the board at present, disregarding anything it learned in the past (in fact, it would be just a subject matter expert, not a learning AI in any capacity), and then it would make one of the better choices based on what it saw. This is particularly interesting when players do something unexpected, because then the AI does something equally unexpected in response (responding instantly to the new, unusual "board state").

I feel like this works quite well for AI War, but it's not something that will evolve over time without the players learning and changing (thus forcing the AI to react to the different situations), or without my making extensions and tweaks to the AI. This was the sort of system I had consciously designed, but in a game with fully symmetrical rules for the AI and the humans, this approach would probably be somewhat limited.

The better solution, in those cases, would be to combine emergent decision making with data that is collected and aggregated/evaluated over time. That collected data becomes just more decision points in the central LINQ queries for each agent. Of course, this requires a lot more storage space, and more processing power, but the benefits would probably be immense if someone is able to pull it off with properly bounded evaluations (so that the AI does not "learn" something really stupid and counterproductive).

Isn't That LINQ Query Just A Decision Tree?
One complaint that a few AI programmers have had is that the LINQ queries that I've shared in the second article aren't really that different from a traditional decision tree. And to that, I reply: you're absolutely right, that aspect is not really too different. The main advantage is an increase in readability (assuming you know LINQ, complex statements are much more efficiently expressed).

The other advantage, however, is that soft "preferences" can easily be expressed. Rather than having a huge and branching set of IF/ELSE statements, you have the ORDER BY clause in LINQ which makes the tree evaluation a lot more flexible. In other words, if you have just this:
ORDER BY comparison 1,
comparison 2,
comparison 3
You could have comparison 1 favor one item, comparison 2 the other, and comparison 3 the first again, and the sort still produces a sensible ordering -- any combination of outcomes is handled just as easily as all three agreeing. I can't think of a way to get that sort of flexibility in traditional code without a lot of duplication (the same checks in multiple branches of the tree), or the use of the dreaded and hard-to-read gotos.

So, while the idea of the LINQ query is basically the same as the decision tree in concept, in practice it is not only more readable, but it can be vastly more effective depending on how complex your tree would otherwise be. You don't even have to use LINQ -- any sorting algorithm of sufficient complexity could do the same sort of thing. The novelty of the approach is not LINQ itself, but rather using a tiered sorting algorithm in place of a decision tree. You could also express the above LINQ code in C# as:
someCollection.Sort( delegate( SomeType o1, SomeType o2 )
{
    int val = o1.property1.CompareTo( o2.property1 );
    if ( val == 0 )
        val = o1.property2.CompareTo( o2.property2 );
    if ( val == 0 )
        val = o1.property3.CompareTo( o2.property3 );
    return val;
} );
In fact, throughout the code of AI War, statements of that very sort are quite common. These are more efficient to execute than the equivalent LINQ statement, so on the main thread where performance is key, this is the sort of approach I use. In alpha versions I had it directly in LINQ, which was excellent for prototyping the approaches, but then I converted everything except the AI thread into these sorts instead of LINQ, purely for performance reasons. If you're working in another language or don't know SQL-style code, you could easily start and end with this kind of sorting.
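To make the tiered-sort idea concrete, here's a tiny self-contained version of the same pattern (the Target class and its properties are invented for illustration):

```csharp
using System;
using System.Collections.Generic;

class Target
{
    public string Name;
    public bool IsScout;       //lowest priority if true
    public bool UnderShield;   //avoid if true
    public int Health;         //prefer the weakest first
}

static class TargetRanker
{
    // Each comparison is a soft preference: later tiers only matter when the
    // earlier tiers tie, so no branch duplication is ever needed.
    public static void Rank( List<Target> targets )
    {
        targets.Sort( delegate( Target o1, Target o2 )
        {
            int val = o1.IsScout.CompareTo( o2.IsScout );
            if ( val == 0 )
                val = o1.UnderShield.CompareTo( o2.UnderShield );
            if ( val == 0 )
                val = o1.Health.CompareTo( o2.Health );
            return val;
        } );
    }
}
```

Given an unshielded non-scout at 20 health, another at 50, a shielded ship, and a scout, this orders them in exactly that sequence with no nested IF/ELSE branches at all.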

Combining Approaches
In AI War, I use basically the following approaches:

1. Traditional pathfinding
2. Simple decision-engine-style logic for central strategic decisions.
3. Per-unit decision-engine-style logic for tactics.
4. Fuzzy logic (in the form of minor randomization for decisions) throughout to reduce predictability and encourage emergence given #3.
5. Soft-preferences-style code (in the form of LINQ and sorts) instead of hard-rules-style IF/ELSE decision tree logic.

If I were to write an AI for a traditional symmetrical RTS AI, I'd also want to combine in the following (which are not needed, and thus not present, in AI War):

6. A decision tree for economic management (using the sorting/LINQ approach).
7. A learning-over-time component to support AI at a level for really great human players in a symmetrical system.

If I were to write an AI for a FPS or racing game, I'd probably take an approach very similar to what those genres already do. I can't think of a better way to do AI in those genres than they already are in the better games in each genre. You could potentially combine in emergence with larger groups of enemies in war-style games, and that could have some very interesting results, but for an individual AI player I can't think of much different to do.

For a puzzle game, or a turn-based affair like Chess, there again I would use basically the current approaches from other games: deep analysis of future moves and their consequences, and selection of the best one (with some heuristics thrown in to speed processing, perhaps).

Platformers or adventure games could also use mild bits of swarm behavior to interesting effect, but I think a lot of players (myself included) really do like the convention of heavily rules-based enemies in those genres. These games don't really tend to feature AI that is meant to mimic humans, but rather AI that is meant to just follow simple rules that the human players can learn and remember.

The sort-style decision tree might be easier to code and might bring results slightly more efficiently in all of the above cases, but the end result for the player wouldn't be much different. Still, making AI code easier to program and read is a good thing for everyone (lower costs/time to develop, fewer bugs based on poor readability, etc).

Next Time
Part 4 of this series talks about the asymmetry in this AI design.

AI Article Index
Part 1 gives an overview of the AI approach being used, and its benefits.
Part 2 shows some LINQ code and discusses things such as danger levels.
Part 3 talks about the limitations of this approach.
Part 4 talks about the asymmetry of this AI design.
Part 5 is a transcript of a discussion about the desirability of slightly-nonideal emergent decisions versus too-predictable "perfect" decisions.
Part 6 talks about player agency versus AI agency, and why the gameplay of AI War is based around keeping the AI deviously reactive rather than ever fully giving it the "tempo."

Choosing a DirectX Platform In C#

Apparently my game AI War: Fleet Command is the first selling game to use the SlimDX development framework. Does it reflect poorly on me that I didn't know that until today, a month after my game was actually released? Well, that knowledge probably wouldn't have scared me off from using their library, anyway -- the proof is in the pudding, and I liked their pudding the best out of all the ones I tried.

At the time I remember seeing a "projects that use SlimDX" page, and I even looked at a few of those, but I had no idea that they were all freeware, hobbyist, or yet to be released. Color me surprised. In this article, I'm going to talk a bit about why I wound up choosing SlimDX, my experiences with it, and what other frameworks I used before it.

Early Alpha: Managed DirectX (MDX)
When I first started prototyping AI War, I did so in MDX simply because that's what my other games in the past have been coded in. However, as is widely known, MDX simply isn't supported anymore. There are some bugs and glitches in there, a few things that don't perform as well as they should, and a lot of the latest stuff (and anything beyond DX9) is not well supported to my knowledge. Overall I was pretty happy with MDX, though, and for my purposes it worked well enough to use it for the first four or so months of work on AI War.

After a certain point, however, I was having performance issues that I suspected were due to MDX. Also, it was coming up to where I needed to start adding music to the game, and I didn't want to just use DirectShow, which I had used in the past and found very limiting. Since I knew MDX was out of date, I figured it would be a good time to start looking for a more robust, modern platform.

Evaluating the XNA Framework
The first place that I turned was to the XNA framework, since it was Microsoft's spiritual successor to MDX. I actually converted my game Alden Ridge to be XNA-based for a while (never did try with AI War), and in general the conversion was pretty painless and quick. Some things are done differently, but overall the transition was smooth and the XNA way of naming things is a little more sensible sometimes. I skipped a lot of their higher-level Game classes and such, instead just embedding the XNA content in a panel in my window, same as I had been doing with MDX previously.

Very quickly I found out that having Shader Model 2.0 as a minimum hardware requirement for Alden Ridge was not going to be a good thing. It meant that my and my wife's laptops, both IBM Thinkpads (R42 and T40, respectively), could no longer run the game even though they had been able to run it fine with MDX.

That was a dealbreaker for me with XNA, but the other big issue for me was how resources, particularly audio, were handled in the 2.0 version of XNA (they may have changed this since, I am not certain). Basically I would have to run all of my audio through a special conversion tool that they provided, and then it would sit on the disk in an uncompressed format, taking far more room than nice little ogg or MP3 files do. This would, in effect, make Alden Ridge an 800MB download for users instead of a 150MB one -- which would have been a dealbreaker on its own, too.

In the future I have some interest in porting some of my games to the XBox, and XNA is the obvious way to go with that. Conversion from MDX to XNA was easy given that I had kept most of the library-touching code pretty isolated, so that is something that I can approach when the time comes without having to build in XNA support all along.

Other Libraries
I looked at a number of other libraries as well, such as the Tao Framework and others based on Mono and OpenGL. Those were interesting, and in the future I might want to work with them to get some better cross-platform support, but converting all my code to Mono and OpenGL instead of some form of DirectX is going to be a lot more time consuming. I also had worries about what performance might be like on two totally new platforms, so I didn't want to sink days into this and then still find out it was unworkable for some reason or another.

Switching To SlimDX
I found SlimDX basically through Google searches, and once I had exhausted all of the more official paths to modern DirectX in C# I decided to give them a try. To be clear, I wasn't wary because of their status as "hobbyists" or because I was worried the project would die, I was wary simply because it was small, new, and had relatively few tutorials. Basically all the things they say on their home page. All of those points are simply The Way Things Are(tm), and are not at all a reason not to choose SlimDX -- it's quite a good product (as I'll explain below).

Conversion to their library from MDX was crazy easy, even more so than when I converted to XNA, and the first thing I noticed was how much smaller their installer and runtime dlls were than the MDX equivalents. And how many fewer of them there were (everything is in one convenient SlimDX.dll, rather than being spread out through nearly a dozen MDX dlls).

When I got everything working in SlimDX, there was a noticeable performance jump from MDX. Often on the order of 10% or so, and occasionally even more than that (30% during some key spikes). This was on the August 2008 version of SlimDX, which apparently had some performance issues that are not present in the newer March 2009 version. Those issues were primarily with matrix math, which I use a fair bit of for all the transforms in AI War, but I saw little difference between the two versions of SlimDX. March 2009 was perhaps a bit faster, but it was not nearly so notable as the original jump from MDX had been.

The conversion to SlimDX took part of one afternoon, and then suddenly things were running faster, so I decided to stay with the library given that I knew it was also under active maintenance (unlike MDX). And it works just fine on older graphics cards, same as MDX does.

The only problems that I really encountered with SlimDX were:

1) At first, they did not support Force Feedback, which is an issue for Alden Ridge. They have added support for that in the latest June 2009 release, but I've been so tied up with post-release work for AI War that I have not had a chance to try it yet.

2) The performance of the Line class in SlimDX was terrible, but it was still at least on par with MDX so I suspect this was an issue with the underlying DirectX code and not SlimDX itself. A simple solution was to just have a 1px square sprite that I transform and scale into drawing very nice lines (angled or straight). That's a fair bit of matrix transformation when there are a lot of lines on the screen, and SlimDX handled those with no issue whatsoever.

3) When I was originally trying to set up music in the game, I ran into a lot of trouble trying to choose between XAudio2 and DirectSound for playback. I had already settled on using an excellent C#-based ogg vorbis decoder by Atachiants Roman (which I also highly recommend as the best solution for decoding ogg files in C#), but I was having some issues with both tools. With some advice from the SlimDX team, I managed to cobble together a home-grown streaming approach using DirectSound, but I know it is not the ideal way to handle music streaming. I will probably update that in a future release of AI War, but for now it is reliable and works well enough; the main issue is that it wastes around 70MB of RAM that could be put to some other use. I'm told that the SlimDX team is planning to later add a sample that shows how to properly handle streaming in DirectSound in SlimDX, and that will be the trigger for me to update AI War in that capacity.
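Backing up to point 2 for a moment: the 1px-sprite trick boils down to a scale-rotate-translate transform, and computing the scale and rotation for a line between two points is simple. Here's a sketch of the math using plain floats rather than SlimDX's Matrix type (the helper is mine, not from AI War's code):

```csharp
using System;

static class LineMath
{
    // Compute the length (X scale factor) and rotation angle needed to
    // stretch a 1px horizontal sprite into a line from (x1,y1) to (x2,y2).
    public static void GetLineTransform( float x1, float y1, float x2, float y2,
        out float length, out float angle )
    {
        float dx = x2 - x1;
        float dy = y2 - y1;
        length = (float)Math.Sqrt( dx * dx + dy * dy ); //scale along X
        angle = (float)Math.Atan2( dy, dx );            //rotation in radians
    }
}
```

In SlimDX you would then compose something like scaling, Z rotation, and translation matrices from these values and hand the result to the sprite before drawing, which is exactly the sort of matrix work the library handled with no issue.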

In general, I suppose the low volume of tutorials was my only complaint with SlimDX, and that's something that I can't really fault them for given their newness and small size. There were enough tutorials in their SDK that I was able to get everything going except for the optimal DirectSound streaming, and that's pretty darn good as far as I'm concerned.

I should also point out that my needs as far as graphics go are pretty limited: I was mostly using their matrices, DirectX9 sprites and text classes, and things like that. I can't really comment on the 3D aspects, or the DX10 or DX11 support, though I've seen the classes there and they are reputed to be very good. My experience with their DirectInput, DirectSound, XAudio2, and Direct3D9 components has left me feeling like this is a very solid library throughout, and the clear choice for C# developers who want to do DirectX development but don't need XBox 360 support.

Update: I now have a streaming solution for SlimDX.

Wednesday, June 3, 2009

Designing Emergent AI, Part 2: Queries and Code

The first part of this article series has been a hit with a lot of people, yet criticized by others for being too introductory/broad. Fair enough, starting with this article I'm going to get a lot lower-level. If you're not a programmer or an AI enthusiast, you probably won't find much interest beyond this point.

What Do You Mean, It Works Like A Database?
First question that most people are asking is in what way my AI code can possibly be like a database. In a past article (Optimizing 30,000+ Ships In Realtime In C#) I already talked about how I am using frequent rollups for performance reasons. That also helps with the performance of the AI, but that's really not the meat of what I'm talking about.

I'm using LINQ for things like target selection and other goal selection with the AI in the game, and that really cuts down on the amount of code to make the first level of a decision (it also cuts down on the amount of code needed in general, I'd estimate that the entirety of the decision-making AI code for the game is less than 20,000 lines of code). Here's one of the LINQ queries for determining target selection:

var targets =
//30% chance to ignore damage enemy can do to them, and just go for highest-value targets
( unit.UnitType.AIAlwaysStrikeStrongestAgainst ||
AILoop.Instance.AIRandom.Next( 0, 100 ) < 30 ?
from obj in rollup.EnemyUnits
where ( unit.GuardingObjectNumber <= 0 || //must not be guarding, or guard target must be within certain range of guard post
Mat.ApproxDistanceBetweenPoints( unit.GuardingObject.LocationCenter,
obj.LocationCenter ) < Configuration.GUARD_RADIUS )
orderby obj.UnitType.ShipType == ShipType.Scout ascending, //scouts are the lowest priority
obj.GetHasAttackPenaltyAgainstThis( unit ) ascending, //ships that we have penalties against are the last to be hit
(double)obj.GetAttackPowerAgainstThis( unit, usesSmartTargeting ) / (double)obj.UnitType.MaxHealth descending, //how much damage I can do against the enemy out of its total health
obj.IsProtectedByForceField ascending, //avoid ships that are under force fields
obj.NearbyMilitaryUnitPower ascending, //strength of nearby enemies
Mat.ApproxDistanceBetweenPoints( obj.LocationCenter, unit.LocationCenter ) ascending, //how close am I to the enemy
obj.UnitType.ShieldRating ascending, //how high are their shields
unit.UnitType.AttackPower ascending, //go for the lowest-attack target (economic, probably)
obj.Health ascending //how much health the enemy has left
select obj
:
from obj in rollup.EnemyUnits
where ( unit.GuardingObjectNumber <= 0 || //must not be guarding, or guard target must be within certain range of guard post
Mat.ApproxDistanceBetweenPoints( unit.GuardingObject.LocationCenter,
obj.LocationCenter ) < Configuration.GUARD_RADIUS )
orderby obj.UnitType.ShipType == ShipType.Scout ascending, //scouts are the lowest priority
( chooseWeaklyDefendedTarget ?
obj.UnitType.TripleBasicFirePower >= obj.NearbyMilitaryUnitPower :
( chooseStronglyDefendedTarget ?
obj.UnitType.TripleBasicFirePower < obj.NearbyMilitaryUnitPower : true ) ) descending, //lightly defended area
(double)obj.GetAttackPowerAgainstThis( unit, usesSmartTargeting ) / (double)obj.UnitType.MaxHealth descending, //how much damage I can do against the enemy out of its total health
obj.IsProtectedByForceField ascending, //avoid ships that are under force fields
obj.NearbyMilitaryUnitPower ascending, //strength of nearby enemies
obj.GetHitPercent( unit ) descending, //how likely I am to hit the enemy
unit.GetAttackPowerAgainstThis( obj, false ) descending, //how much damage the enemy can do to me
obj.Health ascending //how much health the enemy has left
select obj );

Blogger eats a lot of the formatting there, but hopefully you can see what is going on based on the comments. In some ways you could call this a decision tree (it does have multiple tiers of sorting), but the overall code is a lot more brief and (when properly formatted with tabs, etc) easier to read. And the best thing is that, since these are implemented as a sort, rather than distinct if/else or where clause statements, what you arrive at is a preference for the AI to do one thing versus another thing.
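If you want to experiment with the shape of that query without the AI War types, here's a compilable miniature in the same pattern (all of the class names and properties here are hypothetical):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Enemy
{
    public bool IsScout;
    public bool UnderForceField;
    public double DamageRatio; //damage I can do vs. the target's max health
    public int Health;
}

static class TargetQuery
{
    // Same tiered-preference shape as the AI War query: each orderby clause
    // is only consulted to break ties in the clauses above it.
    public static List<Enemy> Prioritize( IEnumerable<Enemy> enemies )
    {
        var targets =
            from obj in enemies
            orderby obj.IsScout ascending,          //scouts are the lowest priority
                    obj.UnderForceField ascending,  //avoid ships under force fields
                    obj.DamageRatio descending,     //prefer damage efficiency
                    obj.Health ascending            //finish off the weakest
            select obj;
        return targets.ToList();
    }
}
```

Force fields only matter between two non-scouts, damage ratio only between two equally-shielded ships, and so on down the tiers -- that's the whole trick.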

There are a lot of things that it takes into consideration up there, and there are a few different modes in which it can run, and that provides a lot of intelligence on its own. But that's not enough. The loop that actually evaluates the above logic also adds some more intelligence of its own:

bool foundTarget = false;
foreach ( AIUnit enemyUnit in targets )
{
    if ( enemyUnit.Health <= 0 || enemyUnit.CloakingLevel == CloakingLevel.Full )
        continue; //skip targets that are already dead, or are cloaked
    if ( unit.CloakingLevel == CloakingLevel.Full &&
        enemyUnit.UnitType.ShipType == ShipType.Scout )
        continue; //don't give away the position of cloaked ships to scouts
    if ( unit.CloakingLevel != CloakingLevel.None &&
        enemyUnit.UnitType.TachyonBeamRange > 0 )
        continue; //cloaked ships will not attack tachyon beam sources
    if ( enemyUnit.UnitType.VeryLowPriorityTarget )
        continue; //if it is a very low priority target, just skip it
    if ( enemyUnit.IsProtectedByCounterMissiles && unit.UnitType.ShotIsMissile )
        continue; //if enemy is protected by counter-missiles and we fire missiles, skip it
    if ( enemyUnit.IsProtectedByCounterSnipers && unit.UnitType.ShotIsSniper )
        continue; //if enemy is protected by counter-sniper flares and we fire sniper shots, skip it
    if ( enemyUnit.GetAttackPowerAgainstThis( unit, false ) == 0 )
        continue; //if we are unable to hurt the enemy unit, skip attacking it
    if ( unit.EffectiveMoveSpeed == 0 && !unit.UnitType.UsesTeleportation &&
        enemyUnit.GetHitPercent( unit ) < minimumHitPercent ) //(the threshold here was lost to the blog's formatting; it is some minimum hit chance)
        continue; //stop ourselves from targeting fixed ships onto long-range targets

    gc = GameCommand.Create( GameCommandType.SetAttackTarget, true );
    gc.FGObjectNumber1 = unit.FGObjectNumber;
    gc.FGObjectNumber2 = enemyUnit.FGObjectNumber;
    gc.Integer1 = 0; //Not Attack-Moving
    unit.LastAICommand = gc.AICommand;
    AILoop.Instance.RequestCommand( gc );
    foundTarget = true;
    break; //take the first acceptable target from the ordered list
}

//if no target in range, and guarding, go back to base if out of range
if ( !foundTarget && unit.GuardingObjectNumber > 0 )
{
    Point guardCenter = unit.GuardingObject.LocationCenter;
    if ( Mat.ApproxDistanceBetweenPoints( guardCenter, unit.LocationCenter ) >
        Configuration.GUARD_RADIUS )
        Move( unit, guardCenter );
}

Nothing real surprising in there, but basically it has a few more decision points (most of these being hard rules, rather than preferences). Elsewhere, in the pursuit logic once targets are selected, ships have a preference for not all targeting exactly the same thing -- this aspect of them watching what each other are doing is all that is really needed, at least in the game design I am using, to make them do things like branching and splitting and hitting more targets, as well as targeting effectively.

Rather than analyzing the above code point by point, I'll mostly just let it speak for itself. The comments are pretty self-explanatory overall, but if anyone does have questions about a specific part, let me know.

Danger Levels
One important piece of logic from the above code that I will touch on is that of danger levels, or basically the lines of code above where it is evaluating whether or not to prefer a target based on how well it is defended by nearby ships. All ships have a 30% chance to disregard the danger level and just go for their best targets, and some ship types (like Bombers) do that pretty much all the time, and this makes the AI harder to predict.

The main benefit of an approach like that is that it causes the AI to most of the time try to pick off targets that are lightly defended (such as scrubbing out all the outlying harvesters or poorly defended constructors in a system), and yet there's still a risk that the ships, or some of them, will make a run at a really-valuable-but-defended target like your command station or an Advanced Factory. This sort of uncertainty generally comes across as very human-like, and even if the AI makes the occasional suicide run with a batch of ships, quite often those suicide runs can be effective if you are not careful.
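That 30%-chance behavior is easy to sketch: roll once per decision, and let the roll determine whether the danger tier participates in the ordering at all (again, hypothetical names, not AI War's actual code):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Candidate
{
    public int NearbyDefensePower; //the "danger level" around this target
    public int Value;              //how badly we want to kill it
}

static class DangerAware
{
    // 30% of the time (or always, for bomber-style units) the danger tier is
    // skipped entirely, so the unit goes straight for the juiciest target.
    public static Candidate Pick( List<Candidate> candidates, Random rand,
        bool alwaysStrikeStrongest )
    {
        bool ignoreDanger = alwaysStrikeStrongest || rand.Next( 0, 100 ) < 30;
        return candidates
            .OrderBy( c => ignoreDanger ? 0 : c.NearbyDefensePower )
            .ThenByDescending( c => c.Value )
            .First();
    }
}
```

Bomber-style units pass alwaysStrikeStrongest as true, so for them the defense tier collapses to a constant and only target value matters -- that's the occasional "suicide run" at your command station.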

Changing The AI Based On Player Feedback
Another criticism that some readers had about the first AI article was to do with my note that I would fix any exploits that people find and tell me about. Fair enough, I didn't really explain myself there and so I can understand that criticism. However, as I noted, we haven't had any exploits against the AI since our alpha versions, so I'm not overly concerned that this will be a common issue.

But, more to the point, what I was actually trying to convey is that with a system of AI like what you see above, putting in extra tweaks and logic is actually fairly straightforward. In our alpha versions, whenever someone found a way to trick the AI I could often come up with a solution within a few days, sometimes a few hours. The reason is simple: the LINQ code is short and expressive. All I have to do is decide what sort of rule or preference to make the AI start considering, what relative priority that preference should have if it is indeed a preference, and then add in a few lines of code. That's it.

With a decision tree approach, I don't think I'd be able to do that -- the code gets huge and spread out through many classes (I have 10 classes for the AI, including ones that are basically just data holders -- AIUnit, etc -- and others that are rollups -- AIUnitRollup, which is used per player per planet). My argument for using this sort of code approach is not only that it works well and can result in some nice emergent behavior, but also that it is comparably easy to maintain and extend. That's something to consider -- this style of AI is pretty quick to try out, if you want to experiment with it in your next project.

Next Time
Part 3 of this series talks about the limitations of this approach.

AI Article Index
Part 1 gives an overview of the AI approach being used, and its benefits.
Part 2 shows some LINQ code and discusses things such as danger levels.
Part 3 talks about the limitations of this approach.
Part 4 talks about the asymmetry of this AI design.
Part 5 is a transcript of a discussion about the desirability of slightly-nonideal emergent decisions versus too-predictable "perfect" decisions.
Part 6 talks about player agency versus AI agency, and why the gameplay of AI War is based around keeping the AI deviously reactive rather than ever fully giving it the "tempo."

Tuesday, June 2, 2009

Designing Emergent AI, Part 1: An Introduction

A lot of people have been curious about how the AI in AI War: Fleet Command works, since we have been able to achieve so much more realistic strategic/tactical results compared to the AI in most RTS games. Part 1 of this series will give an overview of the design philosophy we used, and later parts will delve more deeply into specific sub-topics.

Decision Trees: AI In Most RTS Games
First, the way that AI systems in most games work is via giant decision trees (IF A, then C, IF B, then D, etc, etc). This can make for human-like behavior up to a point, but it requires a lot of development and ultimately winds up with exploitable flaws. My favorite example from pretty much every RTS game since 1998 is how they pathfind around walls; if you leave a small gap in your wall, the AI will almost always try to go through that hole, which lets human players mass their units at these choke points since they are "tricking" the AI into using a hole in the wall that is actually a trap. The AI thus sends wave after wave through the hole, dying every time.

Not only does that rules-based decision tree approach take forever to program, but it's also so exploitable in many ways beyond just the above. Yet, to emulate how a human player might play, that sort of approach is generally needed. I started out using a decision tree, but pretty soon realized that this was kind of boring even at the basic conceptual level -- if I wanted to play against humans, I could just play against another human. I wanted an AI that acted in a new way, different from what another human could do, like playing against Skynet or the Buggers from Ender's Game, or something like that. An AI that felt fresh and intelligent, but that played with scary differences from how a human ever could, since our brains have different strengths and weaknesses compared to a CPU. There are countless examples of this in fiction and film, but not so many in games.

Decentralized Intelligence
The approach that I settled on, and which gave surprisingly quick results early in the development of the game, was simulating intelligence in each of the individual units, rather than simulating a single overall controlling intelligence. If you have ever read Prey, by Michael Crichton, it works vaguely like the swarms of nanobots in that book. The primary difference is that my individual units are a lot more intelligent than each of his nanobots, and thus an average swarm in my game might be 30 to 2,000 ships, rather than millions or billions of nanobots. But this also means that my units are at zero risk of ever reaching true sentience -- people from the future won't be coming back to kill me to prevent the coming AI apocalypse. The primary benefit is that I can get much more intelligent results with much less code and fewer agents.

Strategic Tiers
There are really three levels of thinking to the AI in AI War: strategic, sub-commander, and individual-unit. So this isn't even a true swarm intelligence, because it combines swarm intelligence (at the individual-unit level) with more global rules and behaviors. How the AI decides which planets to reinforce, or which planets to send waves against, is all based on the strategic level of logic -- the global commander, if you will. The method by which an AI determines how to use its ships in attacking or defending at an individual planet is based on a combination of the sub-commander and individual-ship logic.

Here's the cool thing: the sub-commander logic is completely emergent. Based on how the individual-unit logic is coded, the units do what is best for themselves, but also take into account what the rest of the group is doing. It's kind of the idea of flocking behavior, but applied to tactics and target selection instead of movement. So when you see the AI send its ships into your planet, break them into two or three groups, and hit a variety of targets on your planet all at once, that's actually emergent sub-commander behavior that was never explicitly programmed. There's nothing remotely like that in the game code, but the AI is always doing stuff like that. The AI does some surprisingly intelligent things that way, things I never thought of, and it never does the really moronic stuff that rules-based AIs occasionally do.

And the best part is that it is fairly un-trickable. Not to say that the system is perfect, but if a player finds a way to trick the AI, all they have to do is tell me and I can usually put a counter into the code pretty quickly. There haven't been any ways to trick the AI since the alpha releases that I'm aware of, though. The AI runs on a separate thread on the host computer only, so that lets it do some really heavy data crunching (using LINQ, actually -- my background is in database programming and ERP / financial tracking / revenue forecasting applications in TSQL, a lot of which came across to the AI here). Taking lots of variables into account means that it can make highly intelligent decisions without causing any lag whatsoever on your average dual-core host.

Fuzzy Logic
Fuzzy logic / randomization is also another key component to our AI. A big part of making an unpredictable AI system is making it so that it always makes a good choice, but not necessarily the 100% best one (since, with repetition, the "best" choice becomes increasingly non-ideal through its predictability). If an AI player only ever made perfect decisions, to counter them you only need to figure out yourself what the best decision is (or create a false weakness in your forces, such as with the hole in the wall example), and then you can predict what the AI will do with a high degree of accuracy -- approaching 100% in certain cases in a lot of other RTS games. With fuzzy logic in place, I'd say that you have no better than a 50% chance of ever predicting what the AI in AI War is going to do... and usually it's way less predictable than even that.
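One common way to implement "always good, not always best" -- purely illustrative here, not taken from the AI War codebase -- is to score all the options and then pick randomly among the top few, so the choice is never terrible but also never fully predictable:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class FuzzyChooser
{
    static readonly Random Rand = new Random();

    // Returns one of the best-scoring options, not necessarily THE best.
    public static T Choose<T>(List<T> options, Func<T, double> score,
                              int topN = 3)
    {
        // Rank from best to worst, then roll the dice among the top few.
        var ranked = options.OrderByDescending(score).ToList();
        int take = Math.Min(topN, ranked.Count);
        return ranked[Rand.Next(take)];
    }
}
```

Widening or narrowing `topN` is one simple knob for making the AI more erratic or more optimal.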

Intelligent Mistakes
Bear in mind that the lower difficulty levels make some intentionally-stupid decisions that a novice human might make (such as going for the best target despite whatever is guarding it). That makes the lower-level AIs still feel like a real opponent, but a much less fearsome one. Figuring out ways in which to tone down the AI for the lower difficulties was one of the big challenges for me, actually. Partly it boiled down to just withholding the best tactics from the lower-level AIs, but also there were some intentionally-less-than-ideal assumptions that I also had to seed into its decision making at those lower levels.

Skipping The Economic Simulation
Lastly, the AI in AI War follows wholly different economic rules than the human players (but all of the tactical and most strategic rules are the same). For instance, the AI starts with 20,000+ ships in most games, whereas you start with 4 ships per player. If it just overwhelmed you with everything, it would crush you immediately. Same as if all the bad guys in every level of a Mario Bros game attacked you at once, you'd die immediately (there would be nowhere to jump to). Or if all the enemies in any given level of an FPS game just ran directly at you and shot with perfect accuracy, you'd have no hope.

Think about your average FPS that simulates your involvement in military operations -- all of the enemies are not always aware of what you and your allies are doing, so even if the enemies have overwhelming odds against you, you can still win by doing limited engagements and striking key targets, etc. I think the same is true in real wars in many cases, but that's not something that you see in the skirmish modes of other RTS games.

This is a big topic that I'll touch on more deeply in a future article in this series, as it's likely to be the most controversial design decision I've made with the game. A few people will likely view this as a form of cheating AI, but I have good reasons for having done it this way (primarily that it allows for so many varied simulations, versus one symmetrical simulation). The AI ships never get bonuses above the players, the AI does not have undue information about player activities, and the AI does not get bonuses or penalties based on player actions beyond the visible AI Progress indicator (more on that below). The strategic and tactical code for the AI in the game uses the exact same rules as constrain the human players, and that's where the intelligence of our system really shines.

Asymmetrical AI
In AI War, to offer procedural campaigns that give a certain David vs Goliath feel (where the human players are always David to some degree), I made a separate rules system for parts of the AI versus what the humans do. The AI's economy works based on internal reinforcement points, wave countdowns, and an overall AI Progress number that gets increased or decreased based on player actions. This lets the players somewhat set the pace of game advancement, which adds another layer of strategy that you would normally only encounter in turn-based games. It's a very asymmetrical sort of system that you totally couldn't have in a pvp-style skirmish game with AI acting as human stand-ins, but it works beautifully in a co-op-style game where the AI is always the enemy.

Next Time
This provides a pretty good overview of the decisions we made and how it all came together. In the next article, which is now available, I delve into some actual code. If there is anything that readers particularly want me to address in a future article, don't hesitate to ask! I'm not shy about talking about the inner workings of the AI system here, since this is something I'd really like to see other developers do in their games. I play lots of games other than my own, just like anyone else, and I'd like to see stronger AI across the board.

AI Article Index
Part 1 gives an overview of the AI approach being used, and its benefits.
Part 2 shows some LINQ code and discusses things such as danger levels.
Part 3 talks about the limitations of this approach.
Part 4 talks about the asymmetry of this AI design.
Part 5 is a transcript of a discussion about the desirability of slightly-nonideal emergent decisions versus too-predictable "perfect" decisions.
Part 6 talks about player agency versus AI agency, and why the gameplay of AI War is based around keeping the AI deviously reactive rather than ever fully giving it the "tempo."

Monday, June 1, 2009

Optimizing 30,000+ Ships In Realtime In C#

My latest game, AI War: Fleet Command, has some of the highest unit counts in the RTS genre. Most campaigns against moderately-hard AIs have over 30,000 ships being simulated at once by the time they are done, and many multiplayer campaigns have upwards of 60,000 or more ships. The game is coded in C#, a JIT-compiled programming language that some people think doesn't have the power/efficiency of C++.

I don't have any numbers to really compare the processing power of one versus the other in an empirical sense, but looking at AI War I think this is a sizeable victory for the advocates of C# as a major language for heavy-duty applications. Some people have been wondering how on earth I managed to support so many ships at once in realtime, and this article pulls back the curtain.

The Game Must Be Load-Balanced In Multiplayer, Right?
Load-balancing sounds good in theory, but in most RTS games that just wouldn't be feasible except maybe on LAN games because of the frequency at which the state of each unit changes. There is just too much unit data to pass back and forth, so all of the processing for the game simulation itself has to be run on all the computers in the game. Years ago there was a great article on Gamasutra called 1500 Archers on a 28.8: Network Programming in Age of Empires and Beyond. That's a rather technical article, but it basically outlines the approach that pretty much all RTS games use for their networking and simulation. AI War is no different, and in fact as the unit counts go up we are even more dependent on this approach because there is even more data in the simulation.

Multithreaded AI On The Host Only
The big difference in AI War from any other RTS game that I know of is how the AI is handled. Essentially, what we did was separate out the real "thinking" portion of the AI into a separate thread. Most non-programmers don't know this, but that actually requires an entire copy of all of the relevant game data for the secondary thread (two threads cannot easily share data without running into "locking" and "blocking" issues that adversely affect performance). Anyway, so the main game thread passes data to the secondary AI thread, which then analyzes it on its own time and passes commands back to the main thread.

So far, most of that is done by other recent RTS games, such as for example Supreme Commander. The key difference here is that our AI thread is only run on the host computer. If you read the 28.8 article above, you already know that player-issued commands are requested during one 200ms "turn" and then simultaneously executed in the next turn by all computers in the game. My reasoning was: why can't the AI issue commands in the same way? Even though the commands often have different content from human-issued commands (since the AI has some differences in how it manages its economy, etc), the same basic approach works without increasing network load too heavily, and without requiring an AI simulation on any machine other than the host.

Do All AI War Players Need Multiple Cores, Then?
Short answer: no. All of this multithreading basically gets us to the point where the AI is on the host only, so the host greatly benefits from multiple cores, and yet the rest of the simulation is run on all of the clients in a game, and thus the processor requirements for the main simulation are the same for all computers in the game. In early versions of the game, we topped out at around 10,000 units in a game before lag set in, but through a series of optimizations on our simulation code I got that up to 60,000 units, and now that number has increased to theoretically around 120,000 units (though I have never run more than 80,000 units in one game, since that just doesn't happen very much).

The requirements for the host are heavier because of the AI, but the clients won't benefit much at all from multiple cores. The host probably needs to be a dual core 2.4Ghz machine for (possibly) the highest difficulties and (definitely) for multiplayer games, but a 2.4Ghz single-core machine works reasonably well as the host of a single-player game. I've even had it running with minimal lag on a 1.8Ghz single-core, but that was a slightly below-average experience so we consider that under our minimum requirements (even though it is playable).

Optimizations vs. Inherent Advantages Of AI War's Design
Mainly the optimizations were something that just required thought, code profiling, and lots of time and testing (more on that in a minute), but there were also a few key advantages that I had based on the design of the game itself:

Advantage 1: Faux-2D Space
The game is set in space, and I am treating it as pseudo-3D space even though the game is 2D (that is, ships can pass one another because they are presumed to be on slightly different Z axes). This meant that collision-detection between ships is not really a major focus of the simulation, since the only thing that really stops movement is force fields. There are still shot-collisions, and collisions with mines, but I was able to get a lot of optimizations out of this aspect of setting the game in space in the manner that I did.

Advantage 2: Limited Pathfinding Needs
The game is set in space, and so the shortest path between two points is always a straight line. That means that pathfinding, which is a huge part of the CPU load in terrestrial RTS games, was a nonfactor for AI War. That freed up some CPU cycles to do something else -- in my case, I decided to have much more intelligent auto-targeting of ships in the game, using the CPU that the lack of pathfinding freed up. This ultimately started using even more CPU than pathfinding typically does, but later optimizations resolved that.

Side note: My other game, Alden Ridge (which is not yet released, but actually predated AI War in terms of development), has very heavy pathfinding as part of the core of it. I actually found some awesome ways to optimize pathfinding for that other game, but scaling that up to an RTS game would have been a challenge.

Advantage 3: Multiple Planets
Like many space-based games, AI War is divided up into individual planets. This is a huge benefit to processing the simulation, because when two ships are on different planets, you know there can't possibly be any interactions between them. This basically acts as a massive pre-sorting of all the ships, so that when you are doing things like target-selection or collision detection or whatever, you are working with smaller batches of ships at a go (which can still be in excess of 5,000 or 6,000 ships at once, but it's not likely there would ever be 60,000 ships all in one system).

And Now For The Optimizations
Okay, so those were the advantages that the overall design created for me, and I decided to make full use of them. But even with those advantages, some serious optimization was needed in order for the game to support even 20,000 units (which is about double what you see in the largest versions of most other games in the genre). These optimizations were integrated as an ongoing process through our alpha and beta development, and even now occasionally one occurs to me and I put it in (such as with the 1.004 release).

Here's a list of a few key optimizations that were made:

Optimization 1: Fixed-Int Math
Fixed-int math is used instead of floating-point. This was needed to ensure consistent simulations between differing CPU architectures, but it is also faster. A lot of handheld consoles, like the Nintendo DS for instance, do not even have a floating-point unit, and so fixed-point math is always used there. There were not many good examples of fixed-point math that I could find in C# (the language AI War is coded in), so there was some discussion on StackOverflow on how best to do this, and that discussion produced the working code I ended up with.
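The actual StackOverflow-derived code isn't reproduced here, but the core trick is easy to sketch. The struct below is a minimal illustration (not the AI War implementation): store the value scaled up by a power of two in a `long`, so that all arithmetic is integer arithmetic and therefore bit-identical on every CPU:

```csharp
// Minimal fixed-point sketch: 12 bits of fraction, so Raw = value * 4096.
public struct FixedInt
{
    public const int ShiftAmount = 12;
    public long Raw;

    public static FixedInt FromInt(int v)
    {
        return new FixedInt { Raw = (long)v << ShiftAmount };
    }

    public static FixedInt operator +(FixedInt a, FixedInt b)
    {
        return new FixedInt { Raw = a.Raw + b.Raw };
    }

    public static FixedInt operator *(FixedInt a, FixedInt b)
    {
        // Both operands carry the 2^12 scale factor, so shift one out.
        return new FixedInt { Raw = (a.Raw * b.Raw) >> ShiftAmount };
    }

    public int ToInt()
    {
        return (int)(Raw >> ShiftAmount);
    }
}
```

A real implementation needs division, comparison, and overflow care as well, but the shift-based multiply above is the essential pattern.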

Optimization 2: Distance Approximation
Through profiling, I discovered that the single-biggest CPU eater in alpha versions of the game was range checks. As in, "how far apart are these two objects?" Range checks are needed for all sorts of things, from collision detection to target selection to moving to a destination point. The range checks were particularly slow because the standard formula involves taking a square root, which is expensive in terms of processor time (it is an iterative function -- it makes 16 guesses, each of which gets closer to the real square root). With so many ships in the game, I was often doing half a million range checks or so per second, and that was really limiting with the square root in there.

The solution turned out to be to move to a distance approximation method that avoids square roots (which unfortunately I don't have the link to, but it was a C implementation that I converted to use in C# -- if another dev or anyone else wants my code for that, I'll post it). Anyway, the realization was that most ranges don't need to be super precise, because it's more a question of "which ships are relatively closer." But even for things like movement and collision detection, the less precise range checks work until units are very close together, and then I switch to the more precise, more CPU-intensive range check. That really mitigates the CPU load.
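Since the converted C routine isn't shown, here is one well-known approximation in the same spirit -- "max plus half min," which overestimates the true distance by at most about 12% and uses no square root at all. This is an illustrative stand-in, not the specific routine AI War uses, but it demonstrates the two-tier cheap-then-precise pattern described above:

```csharp
using System;

public static class RangeCheck
{
    // Octagonal approximation: max(|dx|,|dy|) + min(|dx|,|dy|)/2.
    // Always >= the true distance, and within ~12% of it. No sqrt.
    public static int Approximate(int x1, int y1, int x2, int y2)
    {
        int dx = Math.Abs(x2 - x1);
        int dy = Math.Abs(y2 - y1);
        int max = Math.Max(dx, dy);
        int min = Math.Min(dx, dy);
        return max + (min >> 1);
    }

    public static double Exact(int x1, int y1, int x2, int y2)
    {
        double dx = x2 - x1, dy = y2 - y1;
        return Math.Sqrt(dx * dx + dy * dy);
    }

    // Cheap check first; only pay for the sqrt near the boundary.
    public static bool IsInRange(int x1, int y1, int x2, int y2, int range)
    {
        if (Approximate(x1, y1, x2, y2) > range * 2)
            return false; // clearly out of range, skip the sqrt
        return Exact(x1, y1, x2, y2) <= range;
    }
}
```

For "which ship is relatively closer" comparisons, the approximation alone is enough; only collision-level checks ever need the exact version.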

Optimization 3: Cycles Per Second
Like most games, AI War has a game loop. It runs at 22 iterations per second, which through testing was determined to be the sweet spot for having a reasonable framerate but the best possible performance on midrange PCs. So this was a bit of a cheat, but no one has commented or has even seemed to notice. When given the choice between a 60fps game and a game that has huge numbers of units making complex decisions, I opted for the latter.

Optimization 4: Movement Calculation Caching
One sizable optimization involved calculating movement more efficiently. Units move at all sorts of angles in AI War, like in most RTS games, and I'm not using a sub-pixel position scheme, so that means that floating-point style math (fixed point math in my case) is needed. Basically, to move from point A to point B, a ship must move at a certain angle, but as the ship moves that angle changes slightly because of the integer nature of the pixel coordinates. So every cycle the game must recalculate every ship's movement angle (and thus the x/y components of that movement by taking the sin and cos), and that gives smooth, perfect lanes of travel.

That works great, but optimizations are sorely needed when you start talking about 30,000+ moving objects (which can happen if you play against two of the Special Forces Captain AI types, which have very few non-moving ships). The trick I am using is to recalculate the angle only once per turn (so roughly 4 times per second) instead of once per cycle (which would be 22 times per second). The visual difference is imperceptible to the players, but I'm doing much less processing for movement.
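The caching pattern can be sketched like this (names invented for illustration, and using doubles for brevity where the real game would use its fixed-point type): compute the angle and its x/y step components once per turn, then just apply the cached components on every cycle in between:

```csharp
using System;

public class MovingShip
{
    public double X, Y;              // current position
    public double DestX, DestY;      // ordered destination
    public double Speed = 2.0;       // pixels per cycle

    double cachedStepX, cachedStepY; // per-cycle movement components

    // Called once per turn, not once per cycle: the expensive part
    // (atan2, cos, sin) happens a few times per second at most.
    public void RecalculateHeading()
    {
        double angle = Math.Atan2(DestY - Y, DestX - X);
        cachedStepX = Math.Cos(angle) * Speed;
        cachedStepY = Math.Sin(angle) * Speed;
    }

    // Called every cycle (22/sec): just two additions per ship.
    public void MoveOneCycle()
    {
        X += cachedStepX;
        Y += cachedStepY;
    }
}
```

With 30,000 movers, that trades roughly 660,000 trig recalculations per second for a few hundred thousand plain additions.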

Optimization 5: Scheduled Collision Detection
Another big optimization was to do with collision detection. Ships are not allowed to stop on top of one another, so whenever a ship reaches a destination point it must check to see if it is on top of other ships, and if so then move to an open space. This is iterative, and as you can imagine eats up a lot of CPU on planets with many ships on them. If a lot of ships are collision-detecting at once, too much CPU is required, which in alpha versions would cause a lot of lag when big fleets came out of a wormhole, etc.

The solution was to spread out the collisions of this sort, since there's not really a practical reason that they have to be done right away. The game thus only processes 25 collisions of this sort per CPU cycle, or 550 per second. This means that sometimes you see ships bunch up when coming out of a wormhole or all moving to a big point, and then they fly out of the center in waves, which is actually a very cool visual effect that was completely unintentional to start with. Bear in mind that this is just referring to collision detection for ships that might have stopped on top of one another -- when talking about collisions with force fields or mines, that happens in closer to realtime, but there are fewer cross-checks to do with those, so that works out CPU-wise.
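A simple way to express that scheduling -- again a sketch, not the actual AI War class -- is a queue that the simulation drains at a fixed budget per cycle (25 checks x 22 cycles = the 550 per second mentioned above):

```csharp
using System.Collections.Generic;

public class CollisionScheduler
{
    const int ChecksPerCycle = 25; // budget: 25 * 22 cycles = 550/sec

    readonly Queue<int> pendingShipIds = new Queue<int>();

    // A ship that just stopped asks to be checked for overlap.
    public void Request(int shipId)
    {
        pendingShipIds.Enqueue(shipId);
    }

    // Called once per simulation cycle; returns the ships whose
    // "find me an open spot" check runs this cycle. The rest wait.
    public List<int> DrainThisCycle()
    {
        var toProcess = new List<int>();
        while (toProcess.Count < ChecksPerCycle && pendingShipIds.Count > 0)
            toProcess.Add(pendingShipIds.Dequeue());
        return toProcess;
    }
}
```

Ships waiting in the queue are what produces the visible "bunch up, then fly out in waves" effect.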

Optimization 6: Ship Rollups And Indexes
Pre-sorting of ships was a huge improvement across the board for AI War, and I don't think most RTS games bother with this because their unit counts are often too low to really see much of a benefit from it. I come from a SQL database background, though, where indexing tables is a key to good performance. That's another programming environment with vast numbers of data points (a table with 100,000 rows is a pretty small one), and I took a lot of ideas from my experience there. Basically, whenever I need to make decisions about ships, I pre-sort them into smaller per-purpose collections that can be looped over much more efficiently. These sorts of rollups cause more load when a new ship is created or when it moves from one planet to another, but that's a minuscule difference that is not compounded by looping, and it makes for huge gains in later processing.
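The database-index analogy maps naturally onto nested dictionaries. This is an illustrative sketch (the classes and keys are invented, not from the AI War codebase): pay a small cost on insert, and later loops touch only the handful of ships they actually care about:

```csharp
using System.Collections.Generic;

public class Ship
{
    public int PlanetId;
    public int OwnerId;
}

public class ShipRollups
{
    // planetId -> ownerId -> ships: the pre-sort described above.
    readonly Dictionary<int, Dictionary<int, List<Ship>>> byPlanetAndOwner =
        new Dictionary<int, Dictionary<int, List<Ship>>>();

    // Paid once when a ship is created or changes planets,
    // instead of on every loop iteration later.
    public void Add(Ship ship)
    {
        Dictionary<int, List<Ship>> byOwner;
        if (!byPlanetAndOwner.TryGetValue(ship.PlanetId, out byOwner))
            byPlanetAndOwner[ship.PlanetId] = byOwner =
                new Dictionary<int, List<Ship>>();

        List<Ship> list;
        if (!byOwner.TryGetValue(ship.OwnerId, out list))
            byOwner[ship.OwnerId] = list = new List<Ship>();

        list.Add(ship);
    }

    // Target selection etc. loops over this small list, never all ships.
    public List<Ship> GetShips(int planetId, int ownerId)
    {
        Dictionary<int, List<Ship>> byOwner;
        List<Ship> list;
        if (byPlanetAndOwner.TryGetValue(planetId, out byOwner) &&
            byOwner.TryGetValue(ownerId, out list))
            return list;
        return new List<Ship>();
    }
}
```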

Optimization 7: Compressed, Batched Network Commands
Commands that are sent across the network are batched and compressed before being sent. All of the requested commands (from human players and the AI) are batched on the host computer and then compressed as one big chunk and sent via TCP to the clients for decompression, parsing, and execution. This works well with the 200ms turn-based approach outlined in the 28.8 article above. Having commands to 4,000 units actually compresses pretty well, so you tend not to see any network lag on broadband connections even with 4 or more players. But you do need to turn off any background upload/download processes going on, because those can be hefty. And if the game host's connection is less than ideal, or there are a lot of players in the game, then one of the clients should be the voice chat host (using Skype or Teamspeak or whatever).
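The article doesn't name the compression algorithm, so treat this as a sketch of the batch-then-compress step using the standard library's `GZipStream`; the newline-delimited command format is likewise made up for illustration:

```csharp
using System.IO;
using System.IO.Compression;
using System.Text;

public static class CommandBatcher
{
    // Batch the turn's commands into one payload, then compress it
    // as a single chunk before sending over TCP.
    public static byte[] CompressBatch(string[] commands)
    {
        byte[] payload = Encoding.UTF8.GetBytes(string.Join("\n", commands));
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
                gzip.Write(payload, 0, payload.Length);
            return output.ToArray();
        }
    }

    // Clients decompress, split, and execute each command in order.
    public static string[] Decompress(byte[] compressed)
    {
        using (var input = new MemoryStream(compressed))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var reader = new StreamReader(gzip, Encoding.UTF8))
            return reader.ReadToEnd().Split('\n');
    }
}
```

Batched commands for thousands of units are highly repetitive (same verbs, sequential ids), which is exactly the kind of payload that compresses well.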

It might surprise other devs that we are using TCP instead of UDP, but we wanted robustness and reliability over speed. With compression, we already had all the performance we needed and more, even with so many units in the game. I actually started out with a UDP solution in early alpha versions, but it was problematic in many ways and so I just recoded everything using TCP sockets, and I've been much more pleased with the results of that.

Optimization 8: Scheduled Target Selection
The last major optimization I'll talk about is to do with target selection. The AI thread and the human players of course do a lot of the target selection themselves -- choosing targets to chase, or to move over to, etc. But one of the nice things about spaceships is that they can typically fire while moving. So if ships don't yet have a target, or are not yet in range of their AI/human-given target, they should shoot at whatever they pass on the way. And if they are left unattended somewhere by the player, they should shoot at whatever comes in range. And if players put them in attack-move mode or the new "free roaming defender" mode from version 1.004, then the ships need to respond to threats in an automated fashion. Plus then you have engineers which need to find targets to repair or assist, and you've got tractor beams that need to look for targets to grab, and tachyon beam emitters that need to find targets to tag, etc. All of this happens as part of the main simulation thread, because it needs to happen on an interval shorter than once per 200ms turn, which is the interval on which human/AI commands are issued.

Target selection thus became the single biggest bottleneck for having more units in the game, and especially for having thousands of ships from multiple teams on a single planet. There were a lot of small optimizations that I made with this process (the range-check efficiency improvement that was already mentioned being one key one), but the biggest thing was to also stagger these target selection checks. The game currently processes around 1,100 target-selection-checks for ships per second per player. In a game with 60,000 ships, that might not sound like a lot, but never fear -- there are rarely more than 10,000 ships all in battle at one time, and that's even a rare case with most battles being between 1,000 and 3,000 ships. The other thing is that since that is per player, if you have 60,000 ships in your game you probably have at least 4 players, so that would only take around 13-14 seconds to do a full target selection loop for all ships -- but in reality, as I've said already, it takes much less time since not all ships are in battle and they don't require target selection checks if there are no enemies on their planet.

Optimization 9: Doubling Ship Capacity In Version 1.004
Related to the target-selection optimization, I'll talk briefly about the optimization in version 1.004 that lets the game make the leap to almost double capacity. Basically, I came to the realization that if there were 50,000 ships sitting on enemy planets that I am not attacking, and there are only 10,000 ships that I am actively engaged with attacking or defending against at the moment, then I can completely ignore those other ships unless they have specific orders (like moving or collision-detecting, etc). This lets the CPU be concentrated on ships that actually need it, and those other ships just don't do anything. This is an easy state to detect because of all the rollups in Optimization 6 above. This simple change has let me run the game at almost double-speed with 80,000 ships in the game -- previously the game would start lagging on a 2.4Ghz CPU well before that point.
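The gate itself is trivial once the rollups exist; the sketch below (with invented names) shows the shape of it. The whole optimization is one early-out check per ship per cycle:

```csharp
using System.Collections.Generic;

public class SimShip
{
    public bool HasOrders;       // moving, collision-detecting, etc.
    public bool EnemiesOnPlanet; // cheap lookup thanks to per-planet rollups

    public bool NeedsProcessing
    {
        get { return HasOrders || EnemiesOnPlanet; }
    }
}

public static class Simulation
{
    // Returns how many ships actually consumed CPU this cycle.
    public static int ProcessCycle(List<SimShip> allShips)
    {
        int processed = 0;
        foreach (var ship in allShips)
        {
            // The ~50,000 idle ships on distant planets cost almost nothing.
            if (!ship.NeedsProcessing) continue;

            // ...targeting, movement, and collision work for active ships...
            processed++;
        }
        return processed;
    }
}
```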

What About Graphics Performance?
Everything so far has pretty much addressed CPU efficiency, but what about the graphics? It takes a lot to draw thousands of onscreen ships at once, even if the game is 2D.

Well, the game is 2D, but it uses Direct3D9 under the hood. That lets us do 3D effects like the zoom (the shifting of the planets as you zoom is just a simple parallax effect, not true Z depth), and it also lets us take advantage of hardware acceleration. There were a lot of optimizations we had to make in that part of our engine (our entire engine is custom from the ground up, by the way), but those are fairly standard and prosaic, so I won't bore you with those.

For lower-end graphics cards, showing a few thousand ship sprites at once can still be a challenge because of the way that the graphics card has to do texture state changes. So we have a settings option that lets the game do more efficient sorting for the graphics card, but which does cause some strange/incorrect graphical overlap on occasion when you are all the way zoomed in. Most players don't even notice the difference, and the increase in performance is notable if you are on a laptop card (as two of our alpha testers were) or a card that is more than three or four years old.

In Conclusion
Optimization is important for any game -- or any other high-performance or high-throughput application -- but when you have such a vast number of ships doing expensive calculations, the need for optimization becomes even greater. The techniques outlined in this article are a great place to start, but there were many other, smaller techniques used in AI War, and the degree to which any given technique is appropriate will vary by game. Optimization on the game Alden Ridge has been an entirely different experience, for instance.

Most of all, I hope that this article has shown not only the importance of optimization when creating games of massive scale, but also that C# is a very viable language for applications of this CPU intensity. The conventional wisdom is that a realtime game of this scope would have to be in a different language (ASM, C, or C++, usually), but that is clearly not true. C# is a much faster language to program in than C++, generally speaking, and this enabled me to create a game of this sort with only seven months of development. For developers large or small, that sort of speedup in the development process can be quite significant!