What Do You Mean, It Works Like A Database?
The first question most people ask is how my AI code can possibly work anything like a database. In a past article (Optimizing 30,000+ Ships In Realtime In C#) I already talked about how I use frequent rollups for performance reasons. That also helps the performance of the AI, but it's not really the meat of what I'm talking about here.
I'm using LINQ for target selection and other goal selection in the game's AI, and that really cuts down on the amount of code needed to make the first level of a decision (it also cuts down on the amount of code in general; I'd estimate the entire decision-making AI for the game is under 20,000 lines of code). Here's one of the LINQ queries for determining target selection:
var targets =
    //30% chance to ignore damage enemy can do to them, and just go for highest-value targets
    ( unit.UnitType.AIAlwaysStrikeStrongestAgainst ||
        AILoop.Instance.AIRandom.Next( 0, 100 ) < 30 ?
        from obj in rollup.EnemyUnits
        where ( unit.GuardingObjectNumber <= 0 || //must not be guarding, or guard target must be within certain range of guard post
            Mat.ApproxDistanceBetweenPoints( unit.GuardingObject.LocationCenter,
                obj.LocationCenter ) < Configuration.GUARD_RADIUS )
        orderby obj.UnitType.ShipType == ShipType.Scout ascending, //scouts are the lowest priority
            obj.GetHasAttackPenaltyAgainstThis( unit ) ascending, //ships that we have penalties against are the last to be hit
            (double)obj.GetAttackPowerAgainstThis( unit, usesSmartTargeting ) /
                (double)obj.UnitType.MaxHealth descending, //how much damage I can do against the enemy out of its total health
            obj.IsProtectedByForceField ascending, //avoid ships that are under force fields
            obj.NearbyMilitaryUnitPower ascending, //strength of nearby enemies
            Mat.ApproxDistanceBetweenPoints( obj.LocationCenter, unit.LocationCenter ) ascending, //how close am I to the enemy
            obj.UnitType.ShieldRating ascending, //how high are their shields
            obj.UnitType.AttackPower ascending, //go for the lowest-attack target (economic, probably)
            obj.Health ascending //how much health the enemy has left
        select obj
    :
        from obj in rollup.EnemyUnits
        where ( unit.GuardingObjectNumber <= 0 || //must not be guarding, or guard target must be within certain range of guard post
            Mat.ApproxDistanceBetweenPoints( unit.GuardingObject.LocationCenter,
                obj.LocationCenter ) < Configuration.GUARD_RADIUS )
        orderby obj.UnitType.ShipType == ShipType.Scout ascending, //scouts are the lowest priority
            ( chooseWeaklyDefendedTarget ?
                obj.UnitType.TripleBasicFirePower >= obj.NearbyMilitaryUnitPower :
                ( chooseStronglyDefendedTarget ?
                    obj.UnitType.TripleBasicFirePower < obj.NearbyMilitaryUnitPower : true ) ) descending, //lightly defended area
            (double)obj.GetAttackPowerAgainstThis( unit, usesSmartTargeting ) /
                (double)obj.UnitType.MaxHealth descending, //how much damage I can do against the enemy out of its total health
            obj.IsProtectedByForceField ascending, //avoid ships that are under force fields
            obj.NearbyMilitaryUnitPower ascending, //strength of nearby enemies
            obj.GetHitPercent( unit ) descending, //how likely I am to hit the enemy
            unit.GetAttackPowerAgainstThis( obj, false ) descending, //how much damage the enemy can do to me
            obj.Health ascending //how much health the enemy has left
        select obj
    );
Blogger eats some of the formatting there, but hopefully the comments make it clear what is going on. In some ways you could call this a decision tree (it does have multiple tiers of sorting), but the code is much briefer and, when properly formatted with tabs and so on, easier to read. Best of all, since these criteria are implemented as a sort rather than as distinct if/else or where-clause statements, what you arrive at is a set of preferences: the AI would rather do one thing than another, instead of following rigid rules.
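To make that contrast concrete, here is a tiny self-contained sketch of the same idea. Nothing in it is actual AI War code; the Candidate type and its fields are invented purely for illustration, and adding a new consideration is just one more orderby line:

using System.Collections.Generic;
using System.Linq;

//A minimal illustration of "preferences as a sort." The Candidate type and its
//fields are invented for this example; they are not from AI War's codebase.
class Candidate
{
    public bool UnderForceField;  //is the target protected by a force field?
    public int NearbyDefense;     //strength of enemies defending it
    public double DamageRatio;    //damage we can do, divided by its max health
}

static class PreferenceSort
{
    public static Candidate Pick( IEnumerable<Candidate> enemies )
    {
        //Every candidate stays in play; the orderby only expresses which ones we
        //would rather shoot first. An if/else decision tree would instead branch
        //and discard, and every new consideration would multiply the branches.
        var preferred =
            from e in enemies
            orderby e.UnderForceField ascending, //prefer targets not under force fields
                e.NearbyDefense ascending,       //prefer lightly defended targets
                e.DamageRatio descending         //prefer targets we can hurt the most
            select e;
        return preferred.FirstOrDefault();
    }
}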
The query takes a lot of factors into consideration, and it can run in a few different modes, which provides a lot of intelligence on its own. But that's not enough. The loop that actually evaluates the query's results adds some more intelligence of its own:
bool foundTarget = false;
foreach ( AIUnit enemyUnit in targets )
{
    if ( enemyUnit.Health <= 0 || enemyUnit.CloakingLevel == CloakingLevel.Full )
        continue; //skip targets that are already dead, or are cloaked
    if ( unit.CloakingLevel == CloakingLevel.Full &&
        enemyUnit.UnitType.ShipType == ShipType.Scout )
        continue; //don't give away the position of cloaked ships to scouts
    if ( unit.CloakingLevel != CloakingLevel.None &&
        enemyUnit.UnitType.TachyonBeamRange > 0 )
        continue; //cloaked ships will not attack tachyon beam sources
    if ( enemyUnit.UnitType.VeryLowPriorityTarget )
        continue; //if it is a very low priority target, just skip it
    if ( enemyUnit.IsProtectedByCounterMissiles && unit.UnitType.ShotIsMissile )
        continue; //if enemy is protected by counter-missiles and we fire missiles, skip it
    if ( enemyUnit.IsProtectedByCounterSnipers && unit.UnitType.ShotIsSniper )
        continue; //if enemy is protected by counter-sniper flares and we fire sniper shots, skip it
    if ( enemyUnit.GetAttackPowerAgainstThis( unit, false ) == 0 )
        continue; //if we are unable to hurt the enemy unit, skip attacking it
    if ( unit.EffectiveMoveSpeed == 0 && !unit.UnitType.UsesTeleportation &&
        enemyUnit.GetHitPercent( unit ) < minimumHitPercent ) //note: the right side of this comparison was lost in the original post's formatting; minimumHitPercent stands in for whatever threshold is actually used
        continue; //stop ourselves from targeting fixed ships onto long-range targets

    gc = GameCommand.Create( GameCommandType.SetAttackTarget, true );
    gc.FGObjectNumber1 = unit.FGObjectNumber;
    gc.FGObjectNumber2 = enemyUnit.FGObjectNumber;
    gc.Integer1 = 0; //Not Attack-Moving
    unit.LastAICommand = gc.AICommand;
    AILoop.Instance.RequestCommand( gc );
    foundTarget = true;
    break;
}

//if no target in range, and guarding, go back to base if out of range
if ( !foundTarget && unit.GuardingObjectNumber > 0 )
{
    Point guardCenter = unit.GuardingObject.LocationCenter;
    if ( Mat.ApproxDistanceBetweenPoints( guardCenter, unit.LocationCenter ) >
        Configuration.GUARD_RADIUS )
        Move( unit, guardCenter );
}
Nothing really surprising in there, but it does add a few more decision points (most of these being hard rules rather than preferences). Elsewhere, in the pursuit logic that runs once targets are selected, ships have a preference for not all targeting exactly the same thing. That one aspect of ships watching what each other are doing is all that is really needed, at least in the game design I'm using, to make them do things like branching and splitting and hitting more targets, as well as targeting effectively.
Rather than analyzing the above code point by point, I'll mostly just let it speak for itself. The comments are pretty self-explanatory overall, but if anyone does have questions about a specific part, let me know.
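The pursuit code itself isn't shown in this article, but the "don't all pile onto the same target" preference fits the same pattern as everything above: sort candidates by how many allies have already claimed them. Here is a rough sketch of that idea, with invented names (the real implementation may differ):

using System.Collections.Generic;
using System.Linq;

//Sketch of the "watch what your allies are doing" idea: prefer targets that fewer
//friendly ships have already claimed. The names here are invented for illustration.
static class PursuitSpread
{
    //currentAssignments maps a target's id to how many allied ships already attack it;
    //assumes candidateTargetIds has at least one entry.
    public static int PickTargetId( List<int> candidateTargetIds,
        Dictionary<int, int> currentAssignments )
    {
        return candidateTargetIds
            .OrderBy( id => currentAssignments.TryGetValue( id, out int count ) ? count : 0 )
            .First();
    }
}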
Danger Levels
One important piece of logic from the code above that I will touch on is danger levels: the lines in the query that evaluate whether to prefer a target based on how well it is defended by nearby ships. All ships have a 30% chance to disregard the danger level and just go for their best targets, some ship types (like Bombers) do that pretty much all the time, and this makes the AI harder to predict.
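Pulled out of the query and restated as a standalone helper (in the real code this comparison appears inline in the orderby rather than as a method), the danger-level check boils down to:

//Illustrative restatement of the danger-level comparison from the query above.
static bool IsLightlyDefended( AIUnit target )
{
    //The target counts as lightly defended when its tripled basic firepower meets or
    //exceeds the military power of the units defending it nearby.
    return target.UnitType.TripleBasicFirePower >= target.NearbyMilitaryUnitPower;
}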
The main benefit of this approach is that, most of the time, the AI tries to pick off targets that are lightly defended (such as scrubbing out all the outlying harvesters or poorly defended constructors in a system), and yet there's still a risk that the ships, or part of them, will make a run at a really valuable but well-defended target like your command station or an Advanced Factory. This sort of uncertainty generally comes across as very human-like, and even if the AI makes the occasional suicide run with a batch of ships, those runs can quite often be effective if you are not careful.
Changing The AI Based On Player Feedback
Another criticism some readers had of the first AI article had to do with my note that I would fix any exploits people find and tell me about. Fair enough; I didn't really explain myself there, so I can understand the criticism. However, as I noted, we haven't had any exploits against the AI since our alpha versions, so I'm not overly concerned that this will be a common issue.
But, more to the point, what I was actually trying to convey is that with an AI system like the one above, putting in extra tweaks and logic is fairly straightforward. In our alpha versions, whenever someone found a way to trick the AI I could often come up with a solution within a few days, sometimes within a few hours. The reason is simple: the LINQ code is short and expressive. All I have to do is decide what sort of rule or preference the AI should start considering, what relative priority that preference should have (if it is indeed a preference rather than a hard rule), and then add a few lines of code. That's it.
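As a hypothetical illustration of that workflow (not an actual change from AI War; the Target type and its fields are invented): suppose testers discovered that players could lure the AI into chasing cheap bait ships. The fix would be one new hard rule or one new preference line, something like:

using System.Collections.Generic;
using System.Linq;

//Hypothetical tweak of the kind described above. "IsBait" and "Threat" are invented
//names for illustration; they are not fields from AI War.
class Target
{
    public bool IsBait;
    public double Threat;
}

static class TweakExample
{
    public static Target Pick( List<Target> enemies )
    {
        return enemies
            .Where( t => !t.IsBait )            //new hard rule: never chase bait ships
            .OrderByDescending( t => t.Threat ) //existing preference: biggest threat first
            .FirstOrDefault();
    }
}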
With a decision-tree approach, I don't think I'd be able to do that -- the code gets huge and spread out through many classes (I have 10 classes for the AI, including ones that are basically just data holders -- AIUnit, etc. -- and others that are rollups -- AIUnitRollup, which is used per player per planet). My argument for this sort of code approach is not only that it works well and can result in some nice emergent behavior, but also that it is comparatively easy to maintain and extend. That's something to consider: this style of AI is pretty quick to try out if you want to experiment with it in your next project.
Next Time
Part 3 of this series talks about the limitations of this approach.
AI Article Index
Part 1 gives an overview of the AI approach being used, and its benefits.
Part 2 shows some LINQ code and discusses things such as danger levels.
Part 3 talks about the limitations of this approach.
Part 4 talks about the asymmetry of this AI design.
Part 5 is a transcript of a discussion about the desirability of slightly-nonideal emergent decisions versus too-predictable "perfect" decisions.
Part 6 talks about player agency versus AI agency, and why the gameplay of AI War is based around keeping the AI deviously reactive rather than ever fully giving it the "tempo."
Hi Christopher,
Glad to see developers utilizing relational databases to aid AI techniques. I've started a project combining Unreal Tournament 2004 bots (via Pogamut) with a sqlite RDB for locational/situational memory at
http://code.google.com/p/sqlitebot
and am interested in discussing or working with developers that are utilizing relational database approaches to better AI.
Thanks
Jeremy Cothran
Hi Jeremy,
Thanks for the note -- but do note that I'm using relational database techniques, not a relational database itself.
I originally thought about basing a lot of the game on the free version of MSSQL, but the performance just wasn't there for realtime work, plus there was so much else the players would have had to install. So I dropped that idea completely, and then later got into LINQ and found it a great substitute.
But, for AI only, I could definitely see how something like sqlite would work for your purposes. Bravo with that, I bet you'll have some really good results there. I, too, am glad to see more people using an RDB approach with AI and/or games in general.
My email and forums are always open if you want to bounce ideas around at some point.
Thanks for the clarifications -- I wasn't aware of LINQ and understand more about it now after looking it up a bit.
Persistence of memory (not just the Salvador Dali painting :) is what I'm really interested in, and a lightweight, file-based RDB could help to share some of those locational/situational facts/statistics between bots/AI and simulation runs.
Cheers
Jeremy
Ah, that makes good sense. I'm not really persisting memory long-term with this, just using it as basically an analysis engine. A lot of my database background is actually also in tandem with web-based software, which is all stateless. So by nature I took a kind of similar approach here.
One thing that really strikes me is that if you have your AI running on just the server/host, and issuing player-style commands through an interface like what I'm doing with AI War, then you should be able to store hundreds of megs of historical data in your RDB without worrying about having to sync or send it to clients.
That could be a really cool thing in the long term, because different servers could have bots that learned different things over time, and thus play differently. What an interesting situation that would be when server browsing. :)
Great article. So I'm curious:
If your AI is losing, does it have a chance to realize that the current plan isn't working and change its priorities?
For example, you have a certain set of ordered preferences. Maybe you sort by A, then B, then C. You'd get much different results if you sorted by B, then C, then A. Those are your sort priorities.
So there is a random 0-to-1 number. If the AI thinks things are going well, the random number would have to be above 0.95 to cause a sort-priority change. If things are going badly, maybe the threshold is only 0.25.
This would give a human-like property: as the AI gets more and more desperate, it would start to act more erratically as it changed its target preferences. But it might also allow the AI to stage a comeback. That could be entertainingly infuriating for the "I-thought-I-was-winning" human player.
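A rough sketch of the idea Jason describes, using the thresholds from his comment (all names and structure here are invented for illustration):

using System;

//Sketch of the commenter's suggestion: shuffle the AI's sort priorities more often
//when it is losing. Illustrative only; this is not code from AI War.
static class DesperationCheck
{
    static readonly Random Rand = new Random();

    //Returns true if the AI should re-order its target preferences this cycle.
    public static bool ShouldChangePriorities( bool aiIsLosing )
    {
        //When things are going well, only a roll above 0.95 changes anything; when
        //losing, anything above 0.25 does -- so a desperate AI plays more erratically.
        double threshold = aiIsLosing ? 0.25 : 0.95;
        return Rand.NextDouble() > threshold;
    }
}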
Hi Jason,
That's a really interesting idea, and one I had never really thought much about. The AI does not really keep track of whether it is losing or winning; it just plays the best it can (well, except when it is intentionally crippled by the difficulty level) in every given scenario.
In AI War, the home planets and such are tougher just because of the way the campaign scenarios are designed, but if that weren't the case I would definitely go with a system like the one you are proposing. I'm tempted to do something like that now, actually, but I have to decide specifically how it would work -- I really like the idea of the AI making more desperate attempts to kill player home planets when it falls too far behind.
The main thing I want to stay away from is "rubberband AI," but using your approach that's not at all what it would be. I like it, and I've added it to my list for further investigation for an upcoming DLC release. Thanks!
Christopher,
ReplyDeleteEnjoying the reading, a friend pointed me to the first in the series yesterday.
If you want to make your code snippets look prettier on blogger, try out the instructions here.
Thanks, Patrick! That really helps make it a lot more legible -- much appreciated!
ReplyDeleteHi this has been a very inspiring post for me partly because I am very new to the whole game development arena and partly because you have lifted all my concerns on using C# in a game in favor of other "faster" languages.
ReplyDeletePlease keep the posts coming and if you could do one about your game engine , that would be terrific !
Thanks a lot :)
Hi George,
Thanks for stopping by, I'm glad this looks to be of interest. I'll have more articles soon, probably one or two per week for a little while.
What sort of things, specifically, are you interested in knowing about the engine I'm using? I'm very willing to talk about that, but it's a huge topic that would need to be broken down in some manner before I could write very useful articles about it. :)
Well, if the articles could emphasize the design itself -- how exactly you accomplished creating the engine on your own, how easy (or not :)) your chosen tools were to work with, and what made you decide to use those particular tools.
Thanks again for your posts :)
Christopher,
I found your games via Tom Chick's excellent review of AI War. Then I found your blog and saw you were writing articles about game development and had to subscribe.
I'd love to read more about how you built both Alden Ridge and AI War ... really anything goes. I am currently working on iPhone games as a hobby and am slowly learning the basics of TileMaps ... anything you care to write about in that area would be great to learn from. Or, if you have a list of great game dev articles on the internet that you learned from. Just happy to have another blog to read about game development! Cheers,
Hi Greg,
Thanks for stopping by and subscribing! I'll have to write another article soon, that's for sure; sometimes I get so busy that I'm a fairly unreliable blogger when it comes to scheduling, but there are still a lot of things I want to talk about with AI War and Alden Ridge.
I've added a note to visit the topic of sprite/tile-based games at some point before too long; I think that's a great topic. I can't recall offhand where I first read about a lot of those sorts of things -- a lot of it was so long ago. Some of it came from the Game Programming All In One books back in the early 2000s; I don't know if those are still being updated or not. Hope that helps!
Great series. We use LINQ heavily in our financial data mining, comparison, and normalization operations.
We build dynamically generated LINQ queries using expression trees and the techniques outlined here:
http://tomasp.net/articles/dynamic-linq-queries.aspx
This relates closely to the question above about changing sort priorities in response to conditions.