Guerrilla Usability For Game Developers (Part 1)

Dear Game Developers:

As a player, have you ever been so frustrated with a game's controls that you've quit it? Have you ever found a tutorial so unclear that it's more efficient to watch Twitch streamers than to go through the process yourself? Have you ever downloaded a game mod to increase font sizes or fundamentally alter the UI?

If so, you're not alone. And if not, you're lucky. Many products -- games, apps, and websites alike -- contain elements that make them hard to use or understand. Anything that makes a product harder to use is a "usability issue."

Usability issues lurk in many products. Larger companies may have the resources to hire professional user researchers, but smaller studios often don't. That doesn't mean you have to go without usability testing -- there are several methods and tests a single developer can conduct.

There are two types of user research to identify usability problems: qualitative research and quantitative research.

What Is Qualitative Research? At its simplest, qualitative research answers "why." What do players think of my tutorial? Do they like it? Dislike it? Why? It provides insight into where players are struggling and how you might address or fix those issues.
What Is Quantitative Research? Very simply, quantitative research answers "how much" or "how many." A common approach to quantitative research is A/B testing, which pits designs against each other. A research question for an A/B test could be: "How much faster, if at all, do players complete a tutorial if I place the tooltips on the left versus the right?" I'll go over A/B testing, and other quantitative tests, in "Guerrilla Usability For Game Developers (Part 2)."

Guerrilla Qualitative Research 

Qualitative research lets you listen to players and understand what they think about your game, and why they hold that opinion. Professional researchers have many qualitative methods to choose from (1:1 lab studies, card sorting, diary studies, and more), but to keep your research cheap, you'll want to focus on 1:1 in-depth player interviews. (I'll go over these in a bit.)

Now, when you've decided that you want to test your game qualitatively, there are four things you need to ask yourself. Blogger David Peter Simon explained it well in his post "The Art of Guerrilla Usability Testing":

  1. What shall we test?
  2. Where will we test?
  3. With whom should we test?
  4. How will we test?

What Shall We Test?

Testing can be done at any phase in the design cycle. Only have your UI design sketched out on index cards? Fine. Have a fully functioning prototype? Awesome.

But "what shall we test" goes beyond "what type of resources do we need to run a test?" In terms of game development, it's "what game element do we want to focus on right now?"

Scenario: You've developed a lot of your game and have started working on a tutorial. So far, you've designed it so that the tutorial text always sits on the right side of the screen and updates when the player completes an action. You're confident this will make sense to players, but you decide to test it just in case.


Where Will We Test?

What you're testing will affect where (and with whom) you can test.

Want to test a mobile game? Consider bringing it to your local DMV and asking people if they'd be willing to play your game and give feedback while they wait. Want to test a computer game, or get feedback on a few designs sketched on paper? Bring your ideas to a coffee shop and ask people if they'd be willing to help. Want to test a console game? Get feedback from players at conventions when they try your game.

Scenario: You bring a copy of your game to Starbucks and ask a few players to give feedback on the tutorial.

With Whom Should We Test?

If you're in a public space, find friendly-looking strangers who don't appear to be rushing elsewhere. At the DMV or the coffee shop, you're not necessarily going to find the types of people who would play your game after it comes out. But people who aren't gamers often give unique feedback -- a regular gamer might know that WASD moves their character, but if a game lacks a tutorial or feature explaining movement, a non-gamer might have no idea how to move at all.

If you're at a convention, you'll likely get testers who resemble your target population. When possible, it's good to test with these people -- but keep in mind that the thoughts and concerns of non-gamers matter, too.

How many users you test with matters, too. Research suggests that running 5 participants will catch roughly 85% of your usability issues -- after 5, you start to hit diminishing returns.


Increase in proportion of usability problems found as a function of number of users tested
"Why You Only Need to Test With 5 Users" by Jacob Nielsen


There are times when you should run more than 5 participants -- so I suggest reading the article cited in the caption above.
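If you're curious about the math behind that curve, it comes from the problem-discovery formula Nielsen describes: the proportion of problems found after n users is 1 - (1 - L)^n, where L is the chance that a single participant exposes a given problem. Here's a quick sketch of the arithmetic in Python, assuming Nielsen's oft-quoted average of L ≈ 31% (your game's actual rate may differ):

    # Proportion of usability problems found after n test users,
    # using the problem-discovery formula described by Nielsen:
    #   found(n) = 1 - (1 - L)^n
    # where L is the chance a single user exposes a given problem.

    L = 0.31  # Nielsen's average per-user discovery rate; yours may differ

    for n in range(1, 11):
        found = 1 - (1 - L) ** n
        print(f"{n:2d} users -> ~{found:.0%} of problems found")

    # Shows the diminishing returns: ~31% with 1 user, ~84% with
    # 5 users (usually rounded to 85%), and ~95% with 8 users.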

If you're not comfortable approaching strangers, consider enlisting the help of acquaintances, or friends of friends -- people who aren't as concerned about your feelings when giving criticism about your game.


How Shall We Test?

There are a few ways to run a 1:1 player interview, and the right one depends on the kind of feedback you're looking for. To get overall feedback, use a "think-aloud" protocol, in which players narrate what they're thinking as they play. To find out whether participants can find or do something specific, use a "task-based" test to identify problems with particular elements of your game. Of course, the two protocols can be combined.

Scenario: You test your tutorial with 5 people at Starbucks. Three of them do not immediately see the tutorial on the right side of the screen, instead asking aloud how they're supposed to learn to play the game. Four of them say that the font in the tutorial box (once they see it) is too small and hard to read.
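If it helps you stay consistent from one participant to the next, you can also drop your prepared tasks into a tiny script that prompts you through each one and saves your observations to a file. This is just a sketch -- the task wording and file naming below are placeholders, so swap in whatever fits your own study:

    # A minimal note-taking helper for a task-based 1:1 session.
    # The tasks are placeholders -- replace them with your own scenarios.
    from datetime import datetime

    TASKS = [
        "Imagine you wanted to read the previous tooltip. How would you do that?",
        "Imagine you want to disable the tooltips. How would you do that?",
    ]

    def run_session(participant_id):
        """Prompt the moderator through each task and log the outcome."""
        filename = f"session_{participant_id}.txt"
        with open(filename, "w", encoding="utf-8") as log:
            log.write(f"Participant {participant_id} -- {datetime.now():%Y-%m-%d %H:%M}\n")
            for task in TASKS:
                print(f"\nTASK: {task}")
                outcome = input("Completed? (y/n): ").strip().lower()
                notes = input("Observations: ")
                log.write(f"\nTASK: {task}\nCompleted: {outcome}\nNotes: {notes}\n")
        print(f"Session notes saved to {filename}")

    if __name__ == "__main__":
        run_session(input("Participant ID: "))

Pen and paper works just as well -- the point is simply to record the same things for every participant.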

Running a 1:1 In-Depth Interview

But running a qualitative study is more than just asking people what they think about your game -- a study has to be designed. Using the scenario above, here's an example of how you could design a test.

  1. Identify Research Question
    Question: What do people think about the current version of the tutorial? Is it easy to use?
  2. Identify Testing Location
    There's a local coffee shop nearby that should work. I'll go on a weekend so that people aren't necessarily rushing to or from work. 
  3. Design Research Study
    I want to know what people think of the tutorial, so I'll ask them to talk aloud about what they're thinking as they go through it. If they ask where the tutorial text is, I'll respond, "Where would you expect it to be?" I'll also use some task-based scenarios, so I'll ask: "Imagine you wanted to read the previous tooltip. How would you do that?" or "Imagine you want to disable the tooltips. How would you do that?" In addition, I'll ask whether they thought the tutorial was complete, and whether it was helpful.
    Before bringing the study to Starbucks, I'll pilot it with a friend or family member and time how long it takes. If it runs more than 15 minutes, I'll cut the study down to its most essential parts.
  4. Run Study
    I'll bring my computer to the coffee shop, ask people whether they'd be willing to test, and give them an estimate of how long the test will take (under 15 minutes -- I'm not paying people, so they won't want to spend too long). I'll offer to buy them coffee or food in exchange, give them a consent form, let them know they can stop at any time, and tell them I want their honest feedback. I'll ask open-ended questions, like:
    • What are your thoughts on this?
    • Is there anything here that's confusing to you?
    • Do you have a favorite thing about what you've seen here?
    • Do you have a least favorite thing about what you've seen here?
  5. Analyze Results & Develop Solutions
    After I've spoken to 5 participants, I'll go through my notes and look for patterns. What did people like and dislike? How many individuals had trouble with a specific area? Did participants have any particular feedback on how to improve the tutorial? After finding the patterns, I'll come up with ways to fix what the participants mentioned. But it's important to note that not all feedback is helpful, and not all feedback should be acted on. When you're analyzing results, look for patterns and repetition (a quick tally, sketched below, makes this easy) -- just because one person wants a feature doesn't mean it's worth including.
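If you'd rather not tally those patterns by hand, a few lines of Python can count how often each issue came up across your sessions. This is only a sketch -- it assumes you've boiled each participant's notes down to short issue tags (the tags below just mirror the Starbucks scenario):

    # A rough sketch of turning session notes into a frequency count.
    # Each inner list holds the issue tags noted for one participant;
    # the tags here are made up to match the tutorial scenario above.
    from collections import Counter

    session_tags = [
        ["tutorial not noticed", "font too small"],
        ["font too small"],
        ["tutorial not noticed", "font too small", "wants tooltip history"],
        ["font too small"],
        ["tutorial not noticed"],
    ]

    counts = Counter(tag for tags in session_tags for tag in tags)

    for issue, count in counts.most_common():
        print(f"{count}/{len(session_tags)} participants: {issue}")

    # Issues mentioned by several participants (e.g. "font too small")
    # belong at the top of the fix list; a one-off request may not.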

Things To Keep In Mind

A word of caution: when you put your game in front of players, you will get criticism, and not all of it will be constructive. One of the most difficult parts of conducting user research as a designer is accepting that criticism gracefully. When someone criticizes your game, they're not insulting you and they're not (necessarily) calling your baby ugly -- they're offering honest feedback. And honest feedback is what you need.

As you're running a 1:1 test, if someone gives negative feedback, don't argue with them. Don't say, "No, you're doing it wrong." In a usability test, the only things a designer should say are the open-ended research questions, any tasks you've prepared, and probing questions like, "What specifically do you not like about that?" If a participant asks how a certain element of the game works, don't answer -- reflect the question back instead. If they say something mean, don't say something mean back. It's difficult, but it's necessary.

Summary

Guerrilla usability testing can get you quick-and-dirty feedback on your game at any point in the design cycle. Qualitative testing requires developers to put aside their feelings and ask for honest feedback, without getting defensive in return.


Finally, here are a few additional things to keep in mind:
  1. When selecting your participants, try to recruit a balanced mix of people. Don’t recruit all men, or all women -- even though you might not be talking with people who are big gamers, your target demographic will include all genders.
  2. If you’re using a computer to test, consider installing a program that records audio and the screen. XSplit is a good choice. That way, you can look back through the videos and refresh your memory of what people said and did.
  3. If a person asks you a "how do I" question (how do I jump, which button should I press to run, etc.), reflect the question back to them: "Which button do you think you would press to run? Why?"
  4. If at any point in the study someone stops talking aloud, feel free to gently remind them to keep narrating their thoughts.

In Part 2, we’ll discuss how you can use tools like SurveyMonkey to quantitatively test the efficacy of specific game elements and compare different designs.

