Design audits: What to do when inheriting someone else’s disaster

by Jeremy Abrams, August 2021


As designers, we envision our process beginning with understanding the problem, empathizing with users, and discovering possible solutions. But reality isn’t always so kind. In many (if not most) cases, we’re brought onto projects already in production, with the problem defined and the solution at least partially implemented. These projects frequently function without a designer until stakeholders notice underwhelming performance metrics or a few too many negative App Store reviews. By the time we join, our teams are up against compressed timelines, shrinking financial runways, and overzealous executive expectations. With everyone looking to us to right the ship, we need a process to efficiently audit and repair these products.

Over the years, I’ve applied a four-step process that’s proven effective:

  1. Experience the application first-hand;
  2. Merge your findings into a single requirements document;
  3. Test the live experience with real users; and
  4. Update and validate your designs.

Step 1: Experience the application first-hand

The absolute first thing you have to do is download the live application and run through every single possible user flow. Don’t rely on stakeholders, developers, other designers, or even internal documentation to accurately communicate the experience to you. Manually install the application and use it like any other first-time user.

I recommend recording your screen throughout this process so you can easily review the flows later on.

Draft user stories as you go

As you run through each flow, open a notepad and make a list of every user story you identify. You should use the complete user story format to ensure that you can assign a user-centered benefit to each feature.¹ If you aren’t sure of the benefit, ask the client. If the client isn’t sure either, challenge the story’s value during user testing.
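
If it helps to keep these notes structured, here’s a minimal sketch in TypeScript of one way to capture them. The UserStory shape and the example story (for a hypothetical food delivery app) are my own illustrative assumptions, not a prescribed schema:

```typescript
// A minimal, hypothetical shape for capturing user stories during an audit.
interface UserStory {
  asA: string;    // the actor ("As a...")
  iWant: string;  // the goal ("...I want...")
  soThat: string; // the user-centered benefit ("...so that...")
  benefitConfirmed: boolean; // false until the client or testing validates it
}

// Example note taken while walking through a hypothetical food delivery app:
const reorderStory: UserStory = {
  asA: "returning customer",
  iWant: "to reorder a previous meal in one tap",
  soThat: "I can check out without rebuilding my cart",
  benefitConfirmed: false, // flag this benefit to challenge during user testing
};
```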

Map user flows along the way

Stories aren’t all you need to understand the ins and outs of the application. You should also screenshot every screen in each flow and use those screenshots to construct a user flow diagram. If you notice any bugs or gaps in the experience, draw an arrow from the affected screen to a sticky note describing the issue.

Make sure you dig into edge cases, since those often reveal hidden faults. If you’re auditing a food delivery application, for example, you might use a non-existent address, order from a closed restaurant, or use an expired credit card. Test the limits of the application.
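
If it helps to stay systematic, these probes can live in a simple checklist. The scenarios and expected behaviors below are illustrative assumptions for that same hypothetical food delivery app, not an exhaustive suite:

```typescript
// Hypothetical edge-case probes for a food delivery audit.
interface EdgeCase {
  scenario: string;         // what to attempt
  expectedBehavior: string; // what a well-designed app should do
  observed?: string;        // filled in during the walkthrough
}

const edgeCases: EdgeCase[] = [
  { scenario: "Enter a non-existent delivery address",
    expectedBehavior: "Inline validation with a clear correction prompt" },
  { scenario: "Order from a restaurant that just closed",
    expectedBehavior: "Graceful messaging plus nearby open alternatives" },
  { scenario: "Pay with an expired credit card",
    expectedBehavior: "Specific decline reason and a path to update the card" },
];
```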

When you’re done, compare and contrast

After you’ve documented every possible story and mapped out the complete user flow, it’s time to compare the production experience to the original designs. If the company has any clickable prototypes on hand, run through those in the same way that you ran through the live application. Identify which user stories also exist within the prototype, which are unique to it, and which are missing (if any). If the company didn’t create a prototype, use whatever they do have to paint a picture of the original vision. User flows, wireframes, or even PowerPoint presentations can shed light on design decisions.

At this point you should have a few rough documents:

  1. A list of user stories that apply to the production application, the prototype, or both;
  2. A user flow diagram made up of production screenshots; and
  3. A collection of bugs or gaps you discovered in the live experience.

Step 2: Merge your findings into a single requirements document

It’s now time to merge everything into a single, holistic requirements document that will serve as your source of truth going forward. This document should combine the user stories that already exist in production with the user stories that should exist but are missing. Each user story that’s live in production should link to its respective user flow.
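
Building on the UserStory sketch above, here is one hypothetical way to structure the merged document: record each story’s provenance and link live stories to their flows. None of these field names are a standard, just one possible shape:

```typescript
// Hypothetical shape for entries in the merged requirements document.
type StorySource = "production" | "prototype" | "both" | "missing";

interface RequirementsEntry {
  story: UserStory;         // the UserStory shape sketched earlier
  source: StorySource;      // where the story exists today
  flowDiagramLink?: string; // expected for stories live in production
  knownIssues: string[];    // bugs or gaps noted during the walkthrough
}
```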

Describe when a story is “accepted”

Once all stories are in the document, write out acceptance criteria for each one. The acceptance criteria determine when a story has been successfully implemented in the application.

You may find that many production stories are missing crucial elements of their acceptance criteria, leading to subpar user experiences. Highlight what’s missing as a reminder to address your concerns during usability testing.
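
To make “accepted” concrete, each story can carry a list of testable statements. The Given/When/Then phrasing is a common convention rather than a requirement, and the criteria below are invented for the hypothetical reorder story from earlier:

```typescript
// Hypothetical acceptance criteria for the reorder story sketched earlier.
interface AcceptanceCriterion {
  criterion: string;                    // a single testable statement
  metInProduction: boolean | "unknown"; // "unknown" flags a gap for testing
}

const reorderCriteria: AcceptanceCriterion[] = [
  {
    criterion:
      "Given a past order exists, when the user taps 'Reorder', " +
      "then the cart is prefilled with that order",
    metInProduction: true,
  },
  {
    criterion:
      "Given an item is no longer available, when the user reorders, " +
      "then the app flags the unavailable item before checkout",
    metInProduction: "unknown", // highlight this gap for usability testing
  },
];
```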

But! Don’t jump the gun

This is not the time to design new screens or update flows. At this stage, with everything laid out in front of you, it’ll definitely be tempting. But remember, until you complete your usability studies, you’re only working from the company’s original assumptions (and some of your own). Many of your stories may require heavy modification or even removal. You may also need to add stories you didn’t expect, adjust acceptance criteria, or re-prioritize your efforts.

Step 3: Test the live experience with real users

Now that you have a single source of truth that defines the company’s vision, you need to observe the live application in use.

Choose the best test format

In general, you have a few options for conducting these tests:

  1. In-person, moderated
  2. In-person, unmoderated
  3. Remote, moderated
  4. Remote, unmoderated

Unmoderated tests allow you to cheaply and quickly observe what users do en masse, while moderated tests allow you to observe why they do it. In-person tests promote a natural rapport with your participants, build trust, and allow you to control their environment, while remote tests are geographically unconstrained, COVID-safe, and closely mimic real-world circumstances.

Since the app is already in production, you can see what users are doing by pulling down analytics data. If your company hasn’t yet implemented analytics tools, request they make that a priority. With the “what” handled, I recommend conducting moderated interviews to dig into the “why.”
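
As a sketch of the “what”: once events are flowing, even a simple funnel count over exported analytics data shows where users drop off. The event names and data shape below are hypothetical stand-ins for whatever your analytics tool actually exports:

```typescript
// Hypothetical funnel count over exported analytics events. This is an
// order-insensitive simplification: it checks that each user fired each
// step's event, not the sequence in which they fired.
interface AnalyticsEvent {
  userId: string;
  name: string; // e.g. "view_menu", "add_to_cart", "checkout", "order_placed"
}

function funnel(events: AnalyticsEvent[], steps: string[]): number[] {
  let reached = new Set(events.map((e) => e.userId)); // everyone starts eligible
  return steps.map((step) => {
    const atStep = new Set(
      events
        .filter((e) => e.name === step && reached.has(e.userId))
        .map((e) => e.userId)
    );
    reached = atStep; // only users who hit this step stay in the funnel
    return atStep.size;
  });
}

// A sharp drop between two adjacent counts shows where to focus your
// moderated interviews.
```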

In a normal year, I would say that in-person interviews are preferred, since rapport and trust help you collect accurate results. Unfortunately, in these pandemic-ridden times, remote may be your only option. In that case, there is some sophisticated remote moderation software at your disposal.

Software like dscout and usertesting.com can recruit high-quality participants on your behalf and serve as a platform for conducting interviews and synthesizing data. If you’re looking for cheaper options, Craigslist and Zoom can work just fine. They’re more effort, to be sure, but doable when financially constrained.

For a thorough guide on how to conduct usability tests, see Adam Fard’s article, How To Successfully Conduct User Testing In 6 Simple Steps.

Patiently synthesize your results

After completing your interviews, give yourself plenty of time to thoroughly synthesize the results. Too often, designers rush the synthesis to appease impatient stakeholders. This is a recipe for disaster, as you’ll likely provide a flawed report that sends the product roadmap hurtling in the wrong direction.

Maze has a great piece on how to synthesize test results.

Step 4: Update and validate your designs

Synthesizing your results should reveal a trove of information that guides your next steps. Now is the time to use that data to improve the user experience.

Update user stories and acceptance criteria

Start by incorporating your insights into your user stories and acceptance criteria. Correct inaccurate acceptance criteria, add or remove stories where appropriate, and fill in any other experience gaps.

Update the visual designs

This is the moment you’ve been waiting for. Now that you have a requirements document rooted in validated user data, you can enhance the visual design as needed. Incorporate your changes into all pre-existing design documentation (including, but not limited to, any high-fidelity mockups, clickable prototypes, and design pattern libraries). If you choose not to update certain documents for one reason or another, be sure to archive them to avoid future misunderstandings.

If you’re in the particularly loathsome situation of working for a company without any pre-existing design documentation, now is the time to kick that off. Figma is my tool of choice, but Sketch or Adobe XD will also suffice.

Validate your changes before handing off to devs

Developing these enhancements is going to be expensive and time consuming, so it’s critical that you validate your recommendations before passing them along. While your last round of user tests challenged the live version of the application, this round is going to challenge the (alleged) upgrades you’ve made to the UI.

The best way to do this is by building out a high-fidelity, clickable prototype that mimics the production application. If that isn’t possible, however, you can certainly use a rudimentary alternative to test many facets of your design. Tree tests, five-second tests, and card sorting exercises don’t even require a prototype and go a long way toward validating navigational components, cognitive maps, and information hierarchies.

Update and repeat

After you complete your second round of testing, update your product and design documentation accordingly. Repeat this process until you have validated all of your recommendations. Once you’ve made your final updates, you can pass your changes to the development team.

Importantly, although this is the final step in your audit, it is not the end of your role on the project. Design must continue throughout the development process, and beyond. Even with thorough documentation, questions will arise that require your attention: questions about responsiveness, strange edge cases, particular acceptance criteria, and more. On top of all that, technical constraints may cause variations in the design that warrant additional user testing.

Flailing products halfway through the product lifecycle are, in my experience, far more common than fresh ideas working their way through discovery. Some projects don’t even have a designer (let alone an agile product manager) until after budget-sensitive executives have their arms twisted in the face of devastating metrics. In those situations, the designer brought on board is often set up to fail, with little time, money, or documentation available to support them. As both a third-party consultant and a staff designer, I’ve used this four-step process to troubleshoot and repair products quickly and efficiently. I encourage you to give it a try and share your results.


