Monday, September 21, 2009

Designing Emergent AI, Part 5: Don't Squeeze a Handful of Sand

In the fourth installment of this series, I talked about the asymmetrical nature of the AI in AI War. That was intended to be the final article in the series, unless more questions came up that I needed to address. Recently, however, a discussion arose on the Arcen Games forums that I think illuminates an important point I never managed to cover in this series before now.

The Question, From User "dumpsterKEEPER"

I really like the AI behavior that allows for retreats when faced with overwhelming odds. On a number of occasions, however, I've noticed this general scenario:

The AI has a strike force on one of my worlds when I jump a fleet in to defend it. Since my fleet is a sizeable defense force, the retreat logic kicks in and the AI splits off into multiple groups heading towards exit wormholes. One or more of these groups travel to an undefended wormhole and exit the system without a scratch. Another group, however, will head straight towards a turreted, defended wormhole and get completely wiped out in the attempt to leave the system.

I would suggest that when retreating, the AI should attempt to prioritize exit wormholes that are undefended or lightly defended. This would make sense with their retreating posture and would leave more AI ships alive to attack again in the future.
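To make the suggestion concrete, here is a minimal sketch of the kind of hard rule being proposed, written in the C#/LINQ style used in Part 2 of this series. The Wormhole type and its EnemyDefenseStrength field are hypothetical stand-ins, not AI War's actual code:

using System.Collections.Generic;
using System.Linq;

// Hypothetical stand-in for whatever representation AI War actually uses.
public class Wormhole
{
    public int EnemyDefenseStrength; // e.g. total turret firepower near the exit
}

public static class RetreatLogic
{
    // The suggested hard rule: always retreat through the single
    // least-defended exit. Deterministic, and therefore predictable.
    public static Wormhole ChooseExit(List<Wormhole> exits)
    {
        return exits.OrderBy(w => w.EnemyDefenseStrength).First();
    }
}

That determinism is the crux of the responses below: a player who knows this rule can leave one wormhole conspicuously undefended and camp a fleet on the far side of it.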

My initial response
I thought about this, but it opens up some undesirable ways to further trick the AI -- leave a REALLY big force on the other side of the undefended wormhole, and then the AI pops out and gets slaughtered. This is one of those places (like the gap-in-the-wall logic) that leads to too-predictable AI in other games. I agree that it will often make the AI act slightly non-ideally in this case, but it also protects it from being trapped into REALLY non-ideal situations, if that makes sense.

One of the forum moderators then suggested that I might consider simply having the AI probe undefended wormholes to see whether they're safe. My response to that:

Probing undefended wormholes requires:

A) time, during which the AI ships have to do something (and during which time they might well just be getting killed, or giving the human player time to come up with a counter)

B) a way to evaluate whether the other side is safe. (Is it safe if the wormhole is clear, but there are massive fortifications on the other planet? What if it is a long string of planets, and the danger is somewhere further down? Do we get into situations where the AI just runs back and forth between some planets because it can't find an acceptable exit to punch through at either end?)

C) a lot of coordination between the AI ships, which isn't super compatible with the whole emergent AI thing.

This is one of those times where the emergent AI will sometimes make a slightly non-ideal choice, but that slightly non-ideal choice is actually better in the long term than always making the predictable, 100% best choice. I intentionally avoid coding in hard rules like that, because as soon as the AI is too predictable, even if it is very smart, players can formulate second-order strategies to counter it. I know this because my play group (the AI War alpha group) is expert at finding these strategies in pretty much all RTS games. We never did find anything too exploitable for RoL or RoN, thanks to the lack of walls, but we did for AoE2, AoE3, Empire Earth, SupCom, and all the various expansions -- and the SupCom AI mods as well, though that took longer.

This really goes to the fundamental nature of the AI in AI War, and why it is, in the main, better. Will it sometimes do things like this that are tempting to "fix"? Yes. When there is a direct way to evaluate this sort of thing on a per-ship basis, I try to do that, while also making sure it remains fuzzy. A lot of the recent minor AI rules updates have been that sort of thing. But when something requires a lot of intentional ship coordination, or a lot of looking ahead or scouting or what have you, that's where I start getting very nervous and stay away. The premise of this sort of AI is to avoid those sorts of things, because even though they fix the direct problem, they often cause a whole raft of other problems and exploits down the line...
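For contrast with the hard rule sketched earlier, here is one plausible reading of "per-ship and fuzzy," reusing the same hypothetical Wormhole type; this is an illustrative sketch, not AI War's actual logic. Defenses lower an exit's weight but never rule it out, and each ship rolls independently, so no central coordination is needed:

using System;
using System.Collections.Generic;
using System.Linq;

public static class FuzzyRetreatLogic
{
    // Weighted random choice: lightly defended exits are more likely,
    // but heavily defended ones remain possible, so the behavior stays
    // unpredictable and requires no coordination between ships.
    public static Wormhole ChooseExit(List<Wormhole> exits, Random rand)
    {
        List<double> weights = exits
            .Select(w => 1.0 / (1.0 + w.EnemyDefenseStrength))
            .ToList();

        double roll = rand.NextDouble() * weights.Sum();
        for (int i = 0; i < exits.Count; i++)
        {
            roll -= weights[i];
            if (roll <= 0) return exits[i];
        }
        return exits[exits.Count - 1]; // guard against floating-point drift
    }
}

Because each ship samples independently, a retreating force naturally splits across exits, mostly favoring the safe ones but occasionally punching through a defended one, which is roughly the behavior dumpsterKEEPER observed.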

More from dumpsterKEEPER
Ah, that makes sense. I hadn't thought about it from the potential-exploits perspective. Thanks for explaining your thinking, though; it's helpful to understand why the AI sometimes behaves the way it does. I can understand the hesitation to add specific rules to the AI, as eventually you'd essentially end up with a decision-tree AI.

In regard to this issue in particular, I don't think it's a huge deal; it just sometimes strikes me as odd that a group of AI ships will impale themselves on obviously placed defenses. On the other hand, occasional sub-optimal decisions do catch me by surprise and make the AI feel more "human."

My closing notes
No problem! Glad it's not too big an issue. And, for sure, sometimes those groups of ships will, in the process of impaling themselves, do something important. Or sometimes they'll have enough strength to break through into the adjoining planet and do some real damage on the other side. One of the players in my game on Saturday actually lost his home planet to something like that.

When I first started on the AI code, everything was rules-based. And when I switched to a more emergent style of AI code, I thought I'd have to build in more rules there, too. That's why I was so surprised by how quickly the AI started being effective, and from that stage on I became careful about second-order effects. I think that's the unique bit of knowledge I stumbled across by accident, and it leads to more effective AI designs in general.

Effective AI is like holding a handful of sand: some sand will trickle uselessly from your hand no matter what you do, and that can't be prevented. Most game AI programmers squeeze the sand more tightly to try to save it, and wind up losing far more sand in the process.

Next Time?
Once again, we come to the "end." Unless readers bring up more topics that they'd like me to address, this will probably be the final article in my AI series. That said, I'm sure another AI-focused article will come up at some point; I just don't have any idea when, or what the specific subject matter will be. In the meantime, I do have other articles planned on other game-related subjects!

AI Article Index
Part 1 gives an overview of the AI approach being used, and its benefits.
Part 2 shows some LINQ code and discusses things such as danger levels.
Part 3 talks about the limitations of this approach.
Part 4 talks about the asymmetry of this AI design.
Part 5 is a transcript of a discussion about the desirability of slightly-nonideal emergent decisions versus too-predictable "perfect" decisions.
Part 6 talks about player agency versus AI agency, and why the gameplay of AI War is based around keeping the AI deviously reactive rather than ever fully giving it the "tempo."

3 comments:

Grammarye said...

Superb set of articles - it was a most informative read.

If you went ahead with the concept that dumpsterKEEPER proposes, of going for undefended wormholes only, and accepted that your AI could then get jumped by a large hidden force on the far side, a human would probably respond to such a situation either by not opting for such an 'all eggs in one basket' approach and instead splitting their forces across several undefended wormholes, or by learning that their opponent is sneaky and likes to ambush people, and bearing that in mind for the future. The human would also take into account the likely routes and their danger.

I guess alternatives (in the general case of AI, rather than AI War per se) could be:

a) When there are only a few possible exits, split up to try to preserve as many units as possible, regardless of defenses, visible or not. A shotgun approach, if you will.

When there are lots of exits, prefer the undefended ones but still try to split up (human ship captains probably would, too, to increase the likelihood of losing pursuit). Swap to a more focused shotgun approach.

b) Prefer undefended exits and learn whether such actions tend to lead to wholesale losses, in which case the preference would be balanced out by some form of memory indicator (see the sketch after this comment).

c) Do a lot more 'thinking' about not only what the immediate exit should be but what is known to lie along the route back to wherever they're trying to get to.

d) All three :)

Regardless, the preference-based approach combined with emergence would seem likely to yield more unpredictable results, and as such I think it is a great step towards better AI in games.
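As one possible reading of the "memory indicator" in option (b) above (again a hypothetical sketch, reusing the Wormhole type from the earlier examples): keep a decaying average of how badly past retreats through each exit went, and let that dampen the exit's preference weight.

using System.Collections.Generic;

public class WormholeMemory
{
    private readonly Dictionary<Wormhole, double> lossRate =
        new Dictionary<Wormhole, double>();

    // Record the fraction of ships lost retreating through this exit,
    // as an exponential moving average; alpha controls how fast old
    // outcomes are forgotten.
    public void RecordRetreat(Wormhole exit, double fractionLost, double alpha = 0.3)
    {
        lossRate.TryGetValue(exit, out double previous);
        lossRate[exit] = (1 - alpha) * previous + alpha * fractionLost;
    }

    // Scale a base preference weight down when past retreats through
    // this exit have gone badly.
    public double AdjustWeight(Wormhole exit, double baseWeight)
    {
        lossRate.TryGetValue(exit, out double rate);
        return baseWeight * (1.0 - rate);
    }
}

Combined with the fuzzy weighting sketched earlier in the article, this would bias future retreats away from known ambush spots without ever making the choice fully predictable.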

KIKI said...

This series of articles made me learn LINQ/lambdas; this is the first time I've seen these language features in a good light (getting a good result in performance vs. time to implement).

Thanks :)

ArchitectGuy said...

Christopher

I just discovered your series of articles following a link from the Mono group.
Excellent content.

Carlos.