The main reason for making these changes (and balance changes in general; you don't have to click those links to understand this article) in AI War is to increase the opportunity cost of a number of actions, and to provide a richer set of strategic options overall. Some expert players were concerned that the changes linked above would essentially reduce the viability of certain advanced strategies, but the second link demonstrates why I feel the strategies in question all remain quite valid (though no longer abusable); they simply now come at a cost.
For the purposes of this blog article, though, this isn't a discussion of any specific changes, but rather of what makes for good changes versus bad ones. Some external commentators have noted that I am "faffing about" with game balance, which I take great offense to -- I know exactly what I am doing in terms of my long-term goals, and my actions have all been purposeful and productive. I'm building a long-term game environment of the sort you normally only see in MMOs. That requires ongoing thought, plus commentary from expert players who find tricky ways to abuse the mechanics. But I'm getting ahead of myself.
Why Nerf Strategies?
When an advanced strategy comes up that I feel is too exploitative of subtle unit interactions, or too abusive of the game rules, I will often add a counter (either in the AI logic, or in the game rules/mechanics themselves). On the surface, this may seem to run counter to the goal of having a strategically rich environment. Is it my goal to have all players playing the game exactly the same way, with minor variations? Of course not.
In general, the only reason I ever nerf a given strategy is that it gives too great a benefit at too low a cost. There have been some really challenging issues of late with players taking too few planets and doing all sorts of clever things, which makes the AI less effective and lowers the difficulty in an artificial way. My response has been partly to teach the AI some new behaviorlets, and partly to reduce the benefits and increase the costs of these more esoteric strategies.
Preserve A Rich Decision Space
When I look at AI War, or any game for that matter, the main thing I examine is the "decision space." When a single strategy or group of strategies is too effective, the decision space effectively shrinks, because expert players would be fools to use anything else. That is a failing of the game, one I have to address through balance updates and, in some cases, new or updated game mechanics. Individual ship balance is only the beginning: how players use all the myriad types of ships in concert, plus how they plan their overall strategy, can have an even more complicated effect on game balance.
My goal is not to make all strategies exactly equal, because then the decision space shrinks for the opposite reason: if any strategy is as good as the next, it doesn't really matter what you do. Having no interesting deviations in strategy is just as much of a game-killer as having one best strategy.
Instead, my goal is to keep all strategies generally within a standard deviation of one another, so that players with different playstyles can play as they wish, but also to make them context-specific to a degree, so that truly expert players will adjust their strategy heavily depending on the specific circumstances of a given scenario. This not only adds to the richness of the game's strategy, it adds to the replay value.
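To make the "within a standard deviation" criterion concrete, here is a minimal sketch of the kind of check it implies. The strategy names and win rates are invented for illustration; they are not actual AI War data or tooling:

```python
import statistics

# Hypothetical win rates per grand strategy at a fixed difficulty level.
# These numbers are made up for illustration, not real AI War data.
win_rates = {
    "turtle": 0.52,
    "blitz": 0.48,
    "deep-strike": 0.55,
    "knowledge-raid": 0.71,  # suspiciously effective
}

mean = statistics.mean(win_rates.values())
stdev = statistics.stdev(win_rates.values())

# Flag any strategy whose win rate sits more than one standard
# deviation from the mean: a candidate for a nerf (or a buff).
outliers = {name: rate for name, rate in win_rates.items()
            if abs(rate - mean) > stdev}
```

Under this toy data, only the over-performing strategy gets flagged, while the other three stay inside the acceptable band -- the point being that the band tolerates variation without demanding that every strategy be exactly equal.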
Of course, when players play below their true difficulty level, they have more latitude to just use their favorite strategy and have done with it. But when things are really neck-and-neck, players should have to make appropriate evaluations of the map and act accordingly, rather than being able to artificially lower the difficulty through exploitative tactics.
Balancing For Very Long-Term Play
Will this annoy some players who rely on these tactics to play at a higher difficulty level? Of course it will, and that is an unfortunate side effect. Any balance shift in any RTS game seems to annoy someone, while (hopefully) the majority rejoice. You might assume that because AI War is not a competitive PvP affair, these sorts of balance issues are not important. To a certain extent this is true; it is certainly much less important that the unit balance be perfect, thanks to a number of facets of the AI War design. However, the overall strategic balance is critical to the longevity of the game.
When I play any RTS game, I go solo or co-op against the AI in skirmish mode. That's the only way I play. I can generally get 6-12 months of biweekly play out of most of the better RTS games; that's the point at which I get bored, because I have figured out some sort of killer best strategy that the AI can't counter and that I can't top. At that point there's no other way I really want to play the game, and I've lost interest in playing with that best strategy, so there's pretty much nothing left for me to do and I move on.
That's all well and good if you are trying to sell a huge series of RTS games, but with AI War I intend to grow and build it as a series of expansions, not sequels. That means that the core game had better be extraordinarily rock solid, with absolutely no best-paths that people discover after however many months of play. There are always tricky things that players figure out, of course, and so that makes an ongoing balance load for me. This is not unexpected -- Starcraft is still getting balance patches some 11 years after its release, from what I hear, and it is regarded as supremely well balanced.
As with the Starcraft balance updates, my goal is not to quash player innovation -- I applaud it. However, my goal is to keep all strategies within essentially a single standard deviation of the norm, and also to add as much context-sensitivity to the grand strategies as possible. The kiss of death for an RTS game, in my opinion, is when all the games start feeling basically the same to expert players. That's when it's time to move on and find a new game to play. My goal is to keep that from ever happening with AI War, because that's the only way I'll maintain my own interest in the game, let alone the interest of anyone else.
That sort of outlook will annoy a few players as I go, unfortunately -- and I need to be very careful to listen to player feedback and not do something that pisses people off for no reason, or that a majority of the playerbase hates. In general I'm pretty averse to doing things players don't like, which I think is a crucial attitude for game designers (just "doing your own thing," or having a "take it or leave it" attitude, is stupid and suicidal). However, it's impossible to please everyone with any given change, so player feedback has to be weighed against the long-term health of the game. Rebalancing a game that has already been released is always a tricky proposition, but you only have to look at examples such as Starcraft or World of Warcraft to see how incredible the results can be in the long term if care is taken.
I've written about this in broader terms as well. Balancing strategy games is a difficult task at best, and an impossible one at worst. Because AI War is a co-op game, I think you have an easier environment to tune: players aren't going to be exploiting the hell out of one another and ruining the metagame, and they don't have to play a perfect game and face other players' exploits at every turn in order to win. The AI provides a fairly constant challenge against which you can safely benchmark player behavior, without having to worry as much about flavor-of-the-month play.
Do you gather play statistics and metrics from everyone's play sessions in order to get a better look at patterns of play? I think it wouldn't be too intrusive if you allowed players to opt into a program that quantifies their play and does some analysis, to help you see which strategies are perhaps used too often for their own good (then maybe you can figure out where some misconceptions are). You could also see how players react to different eventualities during gameplay, to check whether their behavior matches up with what you think should happen. In a game like AI War, where there are relative counters rather than absolute ones, seeing how players react in certain situations can highlight how they misperceive the decision space, and help you write better tooltips that reveal information the player should be finding, but wasn't.
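An opt-in program like the one suggested here could start very simply, by aggregating which opening strategies players choose and how often those openings win. This is only a sketch under invented assumptions; the event fields and strategy labels are hypothetical, not anything AI War actually records:

```python
from collections import Counter

# Hypothetical opt-in telemetry: each session reports the player's
# opening strategy and the outcome. Field names are invented.
sessions = [
    {"opening": "turtle", "won": True},
    {"opening": "turtle", "won": False},
    {"opening": "blitz", "won": True},
    {"opening": "turtle", "won": True},
]

usage = Counter(s["opening"] for s in sessions)
wins = Counter(s["opening"] for s in sessions if s["won"])

# Strategies that are both heavily used and high-win-rate are the
# natural candidates for closer balance scrutiny.
win_rate = {name: wins[name] / count for name, count in usage.items()}
```

Even this crude pairing of popularity and win rate would separate "used often because it's fun" from "used often because it's overpowered," which is the distinction the suggestion is driving at.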
Cool article that you linked to; it lays out the issues very clearly, I think. You're absolutely right about the lighter balancing load in single-player and co-op. To some degree, the way I handle the varied ship mixes (you have different units available in each game) also contributes to a lowered need for balance. Since not everything is always available, you have to weigh what is there in any given game. With the number of ships in the base game alone, and the length of games, this means there is generally at least 100 to 120 hours of variety before you start seeing many repeats, so forming long-term habits is extremely hard to do unless you play the same scenario repeatedly.
I don't gather any play statistics, mainly because I don't know what I would do with the data. There is often a huge skew based on how players handle tactics, how they manage their wider strategy and logistics, what their ship mixes were in given battles, and so on -- so looking at kill-to-loss stats for a player can be misleading. There are just so many factors at play that I feel play results are fairly un-analyzable in any meaningful mathematical sense.
Instead, I treat it as an art more than a science, and rely on a lot of observation and reported observations from players. When I'm unsure about something, I wait for a lot of players to report on their experiences with it; when the problem or its solution is obvious once a player brings it up, I take care of it right away.
Interestingly, for many AI War players who frequent the Arcen forums, there are really two metagames at play. There's the metagame that any game has, and it's a satisfyingly rich one in AI War. Then there's a second layer of metagame, where they find something exploitative (majorly or minorly) and report their findings to me, so that the AI/balance/whatever "grows" and changes over time. Some of them have commented that it makes it almost like playing against a learning AI, since the game does in fact learn over time through their reports. A number of them even fill their forum sigs with all the various things they have gotten nerfed through their cleverness at exploiting and then reporting, which I've found amusing and cool.
This is why I made the WoW comparison in the main article (though I've never played WoW, I've certainly heard a lot about it). I'm sure that Blizzard tracks lots of different metrics about players on their servers, but it does seem that they use a lot of player reporting alongside aggregate data in determining balance shifts for their various classes. Again, art with a hint of science. In that way, this sort of evolving nature makes AI War grow almost like an MMO, even though it's not online-only, or for-pay, or any of those other models. I can recall having played other evolving games in the past, like Counter-Strike, or GraaL, or a number of others.
In many ways, I actually feel rather like a DM in a role-playing game. My goal is not to obliterate players, but to provide a rich play experience. Okay, sure, that's true of all game designers. But in my case, I get direct feedback from many of those players and adjust the game accordingly, which -- in the context of balance and AI/opponent behavior -- is the sort of feedback loop I've personally only encountered in boardgames like Hero Quest or Descent: Journeys in the Dark.
I could probably stand to be more scientific in how I handle data collection -- a lot of game design bloggers advocate that sort of thing -- but I really don't think it will ever be my style. I'm quite scientific and rational in how I analyze the data, but I prefer "softer" collection methods that rely on keen observation and honest reporting from my players. In the strategy genre especially, this has given me pretty stellar results so far. If this were a game for early childhood or something, that would be a different story!