Guest article by Mike Mullins on designing solo games.
Welcome to another edition of Meeple Speak. This one is special: we're trying something we've never done before. It's a crossover with a Cardboard Edison article that was posted last week (article here); think of this as part 2 of that article. So who is Mike Mullins? Perhaps the best way to introduce him is to use Cardboard Edison's bio on him: "Mike Mullins is a longtime playtester and developer, best known for the creation of solo modes for games such as Castle Dice, Lagoon, and Compounded. He and Darrell Louder recently co-designed Bottom of the 9th, which of course is playable solo as well."
by Mike Mullins
Where Were We?
For those of you that haven’t read Part One of this solo design double-feature, head on over to Cardboard Edison to check it out. Or don’t, whatever. I can’t make you.
In the aforementioned Part One, we worked our way through the multiplayer game, identified its key elements, and sketched out a solo game within the design space of the allotted components. We'd also just come to the conclusion that it was time to adjust the "randomness slider" in order to create an AI opponent. The key to this process is understanding the types of random effects, and the impact those have on your design. Oh, and for all of our sakes (mostly mine), "an AI opponent" and "the AI" will both just be called "Z" (who's totally a guy, because women are better at games and I want to beat Z).
Random elements in games serve two basic purposes. First, they take a game out of the realm of solvable puzzles. This is in no way intended to disparage those games, because some of them are amazing; there are simply some types of gameplay that a puzzle wouldn’t do justice. Second, randomness forces a player to make both proactive and reactive decisions, which really make a solo game shine. The way in which uncertainty affects a player is largely dependent on when it happens with respect to the predictable elements.
Random <---|------------------> Predictable
The first stop on the slider after true randomness is to give the player information about Z before the game, not during. If you add 3 attack cards to Z’s deck during setup, the player can plan a bit more, even if the rest of his actions are random.
<------|--------------->
The second click towards the "smart" end of the spectrum is if Z starts with a random choice that is followed by a predictable action. The player doesn't know the type of action that Z will take, but can complete the "if, then" logic to see how each possible action would be resolved. A simple example would be, "If Z drafts a resource, he'll take iron, but if he drafts an animal, he'll take a pig." Now the player is faced with a proactive decision. Knowing the odds of each choice, should they pre-empt Z's possible action, or proceed with their own strategy?
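In code terms, this stop on the slider is a random branch followed by a deterministic resolution. Here's a minimal Python sketch of that idea; the draft categories and picks are hypothetical, borrowed from the example above:

```python
import random

def z_draft_action(rng: random.Random) -> str:
    """Z's turn: WHICH branch fires is random, but each branch
    resolves predictably, so the player can reason about both."""
    if rng.random() < 0.5:   # random choice: resource or animal?
        return "iron"        # if Z drafts a resource, he takes iron
    else:
        return "pig"         # if Z drafts an animal, he takes a pig

print(z_draft_action(random.Random(42)))  # "iron" or "pig"
```

The player can't predict the branch, but can fully evaluate both outcomes before committing to their own move.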
Z's getting even smarter, because now we'll use a weighted random choice before moving on to a decision tree. While it's true that any event whose outcomes have uneven probabilities of occurring is "weighted" towards a certain result, some games take that a step further. Artificial restrictions can be placed on Z's options, or contingencies such as rerolls and mitigating effects can be put in place. This doesn't actually provide the player with any more information, just an increased confidence that their decision will be correct.
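A weighted choice plus one mitigating contingency might look like the sketch below. The action names, weights, and the "no repeats" reroll rule are all hypothetical, just to show the shape:

```python
import random

# Hypothetical weights: Z attacks twice as often as he gathers or builds.
ACTIONS = ["attack", "gather", "build"]
WEIGHTS = [2, 1, 1]

def z_weighted_action(rng: random.Random, last_action: str = None) -> str:
    """Weighted random choice with a mitigating contingency:
    if Z would repeat his last action, he rerolls once."""
    choice = rng.choices(ACTIONS, weights=WEIGHTS, k=1)[0]
    if choice == last_action:
        choice = rng.choices(ACTIONS, weights=WEIGHTS, k=1)[0]  # one reroll only
    return choice

rng = random.Random(3)
print(z_weighted_action(rng))
print(z_weighted_action(rng, last_action="attack"))
```

Note the reroll doesn't tell the player anything new; it just nudges the odds, which is exactly the "increased confidence" effect described above.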
<---------------|------>
If we let Z make a predictable choice first, we’ve given the player a lot more to work with. Given a certain game state, Z will react in a predictable way, at least up to a point. This could be based on the game phase (after I play cards, Z attacks), player actions (if I attack, Z will draw a card), or even more specific situations (Since I am ahead, Z will move their piece). The randomness can be the target of the action, type of effect, etc.
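This "predictable choice first, random details second" pattern is essentially a small rule table keyed on game state. A hedged sketch, with entirely made-up state fields and actions:

```python
import random

def z_react(state: dict, rng: random.Random):
    """Z's CHOICE is fully predictable from the game state; the
    randomness is pushed down into the target of that choice."""
    if state["phase"] == "combat":
        action = "attack"        # phase-based rule
    elif state["player_attacked"]:
        action = "draw_card"     # reaction to the player's last action
    elif state["player_score"] > state["z_score"]:
        action = "move_piece"    # situational rule: Z is behind
    else:
        action = "gather"        # default behavior
    # Random part: which of the player's pieces the action hits.
    target = rng.choice(state["player_pieces"]) if action == "attack" else None
    return action, target

state = {"phase": "combat", "player_attacked": False,
         "player_score": 5, "z_score": 3, "player_pieces": ["knight", "scout"]}
print(z_react(state, random.Random(7)))  # ("attack", one of the pieces)
```

The player can read the rule table and know the action; only the target keeps them guessing.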
<-------------------|-->
The last stop for the slider before it reaches a strictly defined decision tree isn't technically any different from the previous one. We know what Z will do, but a random element remains: the efficacy of that action. That's most easily illustrated by an attack you know is coming, but can't be sure how much damage it will do. This is worth separating out because it's important to know that as a designer, you've limited the player to a single decision: can I risk performing an action that doesn't address the max possible damage from the monster?
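That single decision reduces to a worst-case check. A tiny sketch, with a hypothetical damage die:

```python
ATTACK_DIE = [1, 2, 2, 3, 3, 5]  # hypothetical damage faces for Z's attack

def must_block(player_hp: int) -> bool:
    """The attack is certain; only its damage is random. The player's
    one decision: can I survive the worst case without blocking?"""
    return player_hp <= max(ATTACK_DIE)

print(must_block(4))  # True: a max roll of 5 would be lethal
print(must_block(6))  # False: even max damage leaves 1 HP
```

Everything else about Z's turn is known, so the tension lives entirely in that one risk calculation.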
At this point, you've got a viable opponent that you can play against, so start playing. The first things that will jump out at you are cards and effects that require an additional decision. Your choices are to remove these elements during setup, define the decision process in the rules, or randomize the results. Every time you decide to leave one of these elements in the game and provide the player with a rule to resolve them, you've created an exception. They are easy to identify in your rules. "All cards resolve normally, except the following…" and "Z will collect resources unless…" are just two examples. Think of these like negative VP; each unique exception you create subtracts points from the value of your design, and you have to ask yourself, is it worth it? These are "acceptable exceptions." Don't forget that a blanket rule covering multiple similar situations only counts as a single negative VP, so look for those opportunities.
Whispered in poorly-lit game rooms or discussed in all caps in forums is the most polarizing concept in solo game design. When met with one of the aforementioned situations in which a decision must be made for Z, you could opt to (dare I say it?) let the player decide. There are some significant benefits to this approach beyond drastically reducing your rules load. Allowing players to resolve these occurrences creates built-in difficulty levels. In your first few plays, force Z to make choices that benefit you. With more experience, you could choose the result randomly, or even try to make the best play possible for Z. Forcing a player to consider the implications of an opponent’s choices is arguably more compelling than another solution.
What’s the downside? Some people HATE IT. If you go this route, it has to be because you truly believe it improves the solo game; it can’t be an escape clause to avoid tuning your AI.
Cheaters Sometimes Prosper
Back to the lecture at hand, the key to any game is the quality of the decisions you make. If there’s a reasonable amount of random decision making in your solo game and Z regularly competes with the human player, it’s your multiplayer design that probably needs improvement! The reverse is true, too; if you can randomly bumble your way to a solo victory, something needs to change. Let’s assume you’re happy with where you are on the slider, and you’re pretty sure that you’ve got the right number of exceptions. Now it’s time for Z to cheat, but how? The simplest adjustment is to give Z extra points. If this is done effectively, you are not only forcing the player to tighten up their strategy, but also creating additional decisions. A simple example is a high value card that Z would score well for, but doesn’t do the player any good. If Z wasn’t cheating, the player could ignore the card and crack on, but now a decision must be made.
Calling extra points the “simplest” fix doesn’t mean you should necessarily start there. If Z’s poor decisions detract from or even obviate the player’s decisions, simply increasing point totals won’t improve the experience. If you’re playing your niece in basketball and give her 5 points for every basket, it doesn’t make the game any more rewarding to play. There’s a veritable plethora of other ways for Z to cheat, such as taking extra actions, reducing the effectiveness of player actions, and having Z ignore global rules or restrictions.
But What About Me?
We’ve methodically identified and stripped away unnecessary multiplayer elements. We selected and tuned Z’s intelligence. We created (and trimmed) exceptions to keep Z from stumbling during the game, and bolstered his play with a few cheat codes. Now it’s time to sit in the player’s chair and make sure that there is enough game left that the experience is a rewarding one. If there is often a clear optimal play, or conversely a frequent choice between two equally futile options, something needs to change. The former situation requires tweaking of your AI, but the latter means the player needs some new options. Don’t be afraid to introduce new actions that a player can take to combat this ruthless automaton.
I Think I Just Scored
In any manner of "beat-your-score" solo game – whether simple score attack, a game employing a dummy player, or a contest against an AI opponent – you should include score targets. This really shouldn't take much convincing; I can't imagine you'd tolerate playing a game with your friends that said "after 7 rounds, see if you played better than last time." Furthermore, with no scoring rubric, how does a player know if they've reached the pinnacle or if they have room to improve? Finally, adding target scores demonstrates to the player that the solo game isn't an afterthought. You've tested it enough to know what a given final score represents, and you've turned solo from a glorified practice mode into a true contest.
Where do you set the goals? In my opinion, a new player should win a solo game no more than 40% of the time, but more importantly, should be able to increase that percentage with repeated plays. You don't have to add scoring tiers or additional rules for higher difficulties, but doing so will surely cause your game to hit the table again after the initial level has been conquered.
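If you're curious how a target like that falls out of playtest data, here's one way to think about it: collect (or simulate) a pile of final scores and set the target at the percentile matching your intended win rate. The simulation below is a stand-in, not a real game; actual numbers should come from your playtests:

```python
import random

def simulate_game(rng: random.Random) -> int:
    """Placeholder for one solo playthrough's final score.
    (Purely illustrative: real data comes from actual playtests.)"""
    return sum(rng.randint(1, 6) for _ in range(10))

def calibrate_target(n_games: int = 10_000, win_rate: float = 0.4) -> int:
    """Pick the score target so a new player wins about win_rate of
    the time: the (1 - win_rate) percentile of observed scores."""
    rng = random.Random(1)
    scores = sorted(simulate_game(rng) for _ in range(n_games))
    return scores[int(n_games * (1 - win_rate))]

print(calibrate_target())  # a score ~40% of playthroughs would beat
```

The same percentile trick works for tiered targets: pick harder percentiles for the higher difficulty levels.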
Thank you for reading my latest foray into sesquipedalianism. I hope there were some things in here that might help you better cater to the solo community. If you want to tell me this was amazing, or inane, or just ask some questions, ping me on Twitter @bluedevilduke.