The landing page for justice-impacted users.

Deciding who to test

Usability testing, which I’ve written about in the past, is exactly what it sounds like: it is testing the usability of a product. The industry standard is to test ~5 people per user group, per device. Since we were testing two user groups and designing for both desktop and mobile, that added up to 20 tests!

  • Mobile — 5 justice-impacted, 5 students
  • Desktop — 5 justice-impacted, 5 students

Recruitment and digital technology comfort levels

The first step is recruitment. Once we identified where we would send out the call, we developed a sign-up form that was part form, part survey. We collected contact information but also asked a few screening questions, the most important of which was about comfort with digital technology. We found that justice-impacted testers reported a very wide range of comfort, from very low to very high, while students reported a much narrower, more consistent range.

The higher the bar, the higher the differential between comfort levels among users surveyed.
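
To give a sense of how responses like these might be summarized per user group, here is a minimal sketch in Python. It assumes comfort was captured on a 1-to-5 scale, which is an assumption on my part, and the sample values are illustrative placeholders, not our actual survey data.

    from statistics import mean, pstdev

    def summarize_comfort(responses):
        """Summarize self-reported comfort ratings (assumed 1-5 scale) for one user group."""
        return {
            "min": min(responses),
            "max": max(responses),
            "mean": round(mean(responses), 2),
            "spread": round(pstdev(responses), 2),  # larger spread = wider range of comfort
        }

    # Illustrative placeholder values only -- not the study's actual responses.
    justice_impacted = [1, 2, 5, 3, 5, 1, 4]
    students = [4, 4, 5, 4, 5, 4, 4]

    print("Justice-impacted:", summarize_comfort(justice_impacted))
    print("Students:", summarize_comfort(students))

The point of a summary like this is simply to make the spread visible: a wide spread means the design has to work for both ends of the comfort scale at once.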

Community-led usability testing moderators

While developing this testing structure with our partner organization, the Justice Impact Alliance, they proposed an intriguing idea that aligned with PBN’s efforts to increase participatory design in our field. Because the Justice Impact Alliance staff have developed strong relationships with the justice-impacted people they serve and the students who volunteer with them, they wondered whether they themselves should moderate the tests rather than Pro Bono Net. Their plan was to have several moderators each conduct a handful of tests. Normally, I’d advise against using numerous moderators, simply for reasons of consistency: with several moderators you risk inconsistency in what they focus on and find compelling, what follow-up questions they ask, and so on. In this case, though, the trust and rapport they already had with participants made the idea worth trying.

Challenges of external moderators

This route did pose several significant challenges.

  • Context building: None of the moderators we trained had any experience in web design up to this point. Most had participated in our user persona workshops, but that was about it. Their understanding of web design principles was very fresh, so context building was key.
  • Limited time and availability: Our moderators are community leaders whose work with a wide variety of people demands a lot of their time and focus. This meant that although they were willing and eager to participate, they had many other responsibilities on their minds, so the training we did had to be highly focused and efficient.
  • Difficulty of user research: I have written before about how conducting user research isn’t as inaccessible as people think. I stand by that sentiment, but it’s important to remember that many people do this as a full-time job and undergo a great deal of training to build the skill. It’s impossible to cover all of those nuances with just some materials and a one-hour workshop. We had to pick the most important parts to train on.
  • Consistency of documentation: Although we knew we wouldn’t get the same level of note taking and annotation, we still wanted to try our best to achieve some level of consistency across our moderators. We wanted them to use the same script, use the same follow-up questioning patterns, and make similarly detailed notes on their impressions of the tests they conducted.

Training materials

With all of those challenges in mind, we went about identifying what solutions we could offer. We agreed that a combination of training materials and a training session would be best. We scheduled an hour-long workshop with our moderators, developed materials, and sent them out beforehand so they could review them and come with questions.

  • Moderator guide: This was a single document they could bookmark and keep as their compass to navigate the entire process. It started with links to the script, the beta site, and the example tests. It also included information about what usability testing is and is not, how a test will go, what equipment they and the participants need, how to troubleshoot screenshares, best practices, and so on.
Part of the Moderator Guide we prepared for our external moderators.
  • Design intent: In the guide we included a reminder of how we designed the site and what our intentions and hypotheses were. This is crucial! Moderators must know the design well and understand the intentions behind it.
  • Training deck: After the training session was over, we sent them both the recording of the session and the slide deck itself.
  • Example tests: We then linked them to recording clips of previous usability tests, accompanied by notes on what to learn from each clip. We titled them with names like “Good introduction” and “Bad introduction.” The clips included examples of my own mistakes, not only to teach our moderators but also to show them that I mess up too and, hopefully, to alleviate any pressure they might have been feeling.

Training session

The actual training session started with a design review. Again, it’s so important that your moderators know the design through and through. Without that context, they will not glean many insights from the user’s experience. For example, if they don’t realize that there is an entire section of the site that users are missing, they won’t be able to report back that their users totally skipped over this crucial feature. We then walked through best practices for moderating:

  • Ask the user to narrate their thought process as much as possible.
  • Avoid the urge to give them hints and allow them to get lost (this can be especially tempting when you know the user outside of this setting).
  • Listen as much as possible; speak only to get the user talking.
  • Study the design before testing.
  • Take notes, but pay attention to the session. Expect to re-watch the recording and take thorough notes then, so you aren’t pulled out of the moment during the test.
  • Write down your main impressions immediately after the session, while they are freshest in your mind.
  • Never send participants a link to the pages you want them to reach; instead, have them navigate the website so you can see how easily they find things.
  • Know your script well; you will have to jump around depending on where the user goes.
  • Loosen up! Don’t take this too seriously and build some comfort with your participant.

What we learned

In the end, this process gave us a variety of insights that we may not have gotten doing it on our own. Here is what we learned.

We had good moderators

Our moderators were fantastic. I can’t say enough about how grateful we are for all of their time and effort. They showed up to the trainings, took them seriously, asked good questions, and then conducted some really effective tests. For just a few hours of training, I am really impressed with the outcomes. This goes to show both how far some strategic training can go and how useful it can be to have community leaders involved in testing. What they may have lacked in user research experience, they made up for in intuitive understanding of their participants’ experiences.

Candid feedback

We found that some participants seemed to offer candid feedback that they might not have given if they felt we tech professionals were too far removed from their real-life experiences. It’s impossible to know for sure, but I think it’s a fair assumption. In other studies, I do think we have gotten candid feedback by building rapport with the user and telling them explicitly, “You can’t hurt my feelings on this design. If you hate it, I love hearing that so I know how to make it better.” However, being a part of the community you are studying naturally carries some built-in rapport.

Training materials and sessions were key

We found that the training materials were critical to the success of the study. We tried our best not to overdo it, and I think we struck a good balance there. If we had been able to do two training sessions, I would have included some role-playing where we acted out a mock session live. Reviewing videos is helpful, but actually trying it out, and getting over some of the apprehension, is even more so.

Documentation issues

Documentation was tricky. We found out too late that the Zoom accounts being used had restricted permissions, making it difficult for us to access the recordings. It proved difficult to explain to our moderators how to download the recordings and then upload them to our Drive, and this slowed things down a lot. In the future, I would advocate for having everyone use a Zoom account under our own team.

Note-taking is too time-consuming

Additionally, getting our moderators to put in notes was very challenging, and for good reason. Taking the time to write all of those notes into a document is not easy when you have a whole other job on your plate. It came down to me watching every recording (which you should do anyway) and taking my own notes as well. We could have seen this coming, and in the end I think it was unavoidable. Factor this into your timeline.

Skill building and the empowerment of the collective

I want to end on one of my favorite parts of this experiment, which is that we all got to grow our skill sets. On our end, we got to learn a lot about how to train moderators, how to make this skill more accessible, and how to step back and let go of control. On our partner’s end, they got to learn a new research skill. Beyond having new tools in our toolboxes, the act of trying something new and learning from it is an empowering experience. I care about my work, but I also care about being more human and facilitating experiences that help us get in touch with ourselves and what we can achieve as a collective.


This blog was originally published by Ariadne Brazo on Medium. You can view the original post here. To read Designing for Very Different Users — Justice Impact Network (Part I), click here.