In order to lighten the mood a bit from my post earlier today, let’s talk about game design.
You can’t build games in an ivory tower, or even at an Ikea writing desk. Today, I’d like to talk about what happens when one kind of reality meets theory, and a bit about the lifecycle of a game design.
Playtesting is an integral part of the game development cycle. Game design, unlike publishing, is an iterative process. You build, you play, you build, you play. In the process of playing, you will uncover problems, and they will need fixing.
I’ll illustrate. My current project, To Seek Adventure, is a two-player adventure game. It’s a roundtable game — no single GM. The game uses random scenario generation, rather like In a Wicked Age. For several generations of design document, it used a random location for each scene.
In a white box environment, playing through the game myself and working out the math, this seemed fine. Put into actual play, it crashed and burned. The game was too random; players had too many curveballs to react to.
So my co-designer and I went back to the drawing board.1 We created a scenario generation system based on drawing some random elements (with just one location), then brainstorming a set of challenges for the heroes to face.
We took that back to the table, and damn, the game ran like a charm. I’ll be talking about these mechanics more in the future, but suffice it to say, I’m really happy with them. The game’s a mile closer to finished because and only because we put it into actual play.
During the same rounds of playtesting, we found that another mechanic was unnecessary because players tended to do it automatically. So we struck that rule. In the white box, it felt fine. But with real live people… superfluous. Entirely.
The next step is external playtesting. As a designer, I’ve got to know whether the game works when I’m not in the room. An earlier version came back with mixed results. Time to try again with the latest and greatest.
Vincent Baker’s talked about how roleplaying games modify a conversation. I’d go so far as to say that you can’t tell what that conversation is until you’ve had it. You can guess, sure, but it’s like high school debate. Prepare all you want, but you’re going to end up in emergent arguments you never predicted.
Playtesting is vitally necessary because we, as designers, can’t whiteboard human elements. A roleplaying game lives and dies on human elements. Those need to be observed, and the game needs to be improved in response to them. More to the point, a roleplaying game only truly exists in play. It doesn’t matter how beautiful your game’s clockworks are… it must play, or else it’s hardly a game at all.
1. Actually, the gridded Moleskine. ↩
4 thoughts on “The Importance of Playtesting.”
“Vincent Baker’s talked about how roleplaying games modify a conversation. I’d go so far as to say that you can’t tell what that conversation is until you’ve had it.”
I’d add that once you have had it, you cannot be entirely sure the other parties walk away with the same experience or message from the conversation, or even refer to it in the future in the same way or with the same reference points as others.
I get the idea that it touches on persona research, but functionally has to keep some distance from it (case of Portigal vs Cooper in that research field).
That does make me curious, though. You are completely right that one cannot whiteboard human elements. But one also faces limitations in using human elements for playtesting. A human is an individual, and while we can “capture” human types, this is typically tied to experience factors, not to effective actioning within the gameplay. But is that a divide, or a stimulus for the designer?
“But is that a divide, or a stimulus for the designer?”
Could you clarify the line you’re trying to draw here? I think I get it, but I’m a little fuzzy.
Alright, what I’m trying to get at is the observation that, quite often in indie environments, the game designer ends up handling gameplay testing from unique but limited perspectives: in essence, the number of perspectives equals the number of people involved in the testing. This often pops up as a limitation in friends-and-family rounds, because the perspective of those invited to playtest has already been influenced by the designer(s) in question. A very typical thing among indie releases is a phase where gameplay requires adjusting because, during testing, users were (so to speak) replicating an experience, or seeking to, rather than seeking the most efficient paths (by hammering away at actions). That can often be a bit of a wake-up call. Post factum, obviously.
So, that possible difference in approach (are we replicating intentions/expectations/experience, or are we working through scenarios of actioning?) can either cause a divide between what the game designer wants and how customers run off with it, whether we look at that as mere mechanisms, features, or game experience. Or it could serve as a stimulus: knowing there is a divide by default, one that is not easy to bridge, could serve as an argument for more flexible feature and mechanism design.
Maybe that is clearer. What I sometimes notice is not so much the eternal debate of intended versus unintended results, but this strange situation where the designer wakes up to a moment of “hey, we never thought of that approach during playtesting; why didn’t we either provide more choices/pathways or come up with a more flexible system for the player to create them?”
Granted, I’m looking at it from my current background of dealing with a number of indie projects, where so far it quite often turns out to be a divide in retrospect, and very rarely a stimulus while still under way.
I’m sure some day we’ll be able to capture human behaviour in individual aspects reliably (though part of me hopes that will never happen), but in the meantime there is always a bit of a trap in which perspective we take on (as if it were a suit, so to speak) when we engage in analysing experiences and mechanisms alike. Analysis of mechanisms is easier, even if complex. But behaviour does tend to influence the (ab)use of the mechanism as such.
As for fuzzy, no worries. Join the club. My brain is actively advocating going on strike, in view of yet another overly long train ride today =P
Ah, I think I see now. Yes, one thing about indie game development is that tester experiences can be influenced by the designer. That’s certainly an issue with tests where the designer is a participant — and since my game focuses on two players, I’ve necessarily been a player in internal playtests.
But, then again, a roleplaying game — especially a small press one — is aiming for someone with a set of pre-existing biases. The total audience for my game probably isn’t large enough to make up a statistically significant sample, so a lot of what I’m doing is working with people’s distinct biases.
You also bring up the fairly common “why didn’t I think of that?” moment in playtesting. That’s an important one, and another reason to test. Quite frankly, the whole process is about bringing fresh brains in. It’s damn hard to make a good game without that.