At-Home COVID-19 Test

Partnering with BD, my team at Scanwell Health developed the first-ever digitally read, at-home COVID-19 test. I joined the company right before the partnership started, so I was able to see the project from discovery through launch. While this is not the only product I’ve worked on at Scanwell, it is certainly the one I’m proudest of. I participated in truly every aspect of design: UX, UI, research, videos, copywriting, and more. This product has helped over a million people during the COVID-19 pandemic, and I’m happy and grateful to have contributed to it.

 

Team

Scanwell Health (YC S18)

team of 3 designers + 2 researchers

Timeframe

1 year, December 2020 to December 2021

Skills

Mobile Design (iOS and Android), Wireframing, Prototyping, Usability Testing, Remote Interviews, Competitive Analysis, WCAG accessibility, Visual Design, Figma, ProtoPie, UserTesting.com

My Role

As a founding designer (1 of 3) on the project, I was responsible for more than just the experience (UX) of the product, although that’s what I spent most of my time on. From discovery to launch, I also owned instruction design, interaction design, and system design, and was heavily involved in user research and usability testing. It’s fair to say that my experience on this product has made me a more well-rounded UX generalist, and I loved being relentlessly resourceful and scrappy through it all.

Overview

 

I joined Scanwell Health in December 2020: the deadliest, most infectious month of the COVID-19 pandemic at the time (source: NBC). Scanwell was preparing for a partnership with medical manufacturer BD, and naturally, this project became the #1 priority for the company very soon after. Due to the nature of our small (but mighty) team, I was given lots of ownership from day 1. Within my first 30 days, I delivered a comprehensive usability report of our v0 design, presented ideas to senior managers both internally and externally, and participated in physical packaging design and FDA regulatory meetings. That was just the beginning.

In October 2021, we launched on Amazon and became the first digitally read, at-home COVID test on the market. Since then, we have helped over a million people in the US stay safe and healthy. Countless hours, day and night, were put into shipping this product, and I personally grew exponentially through this unique experience.

In this case study, I will highlight some of the most unique problems I solved with design within this project. If you’re curious about any other work I’ve done, please don’t hesitate to reach out to me! ✉️

Challenge NO. 1

Due to the nature of our product’s at-home use case, users can truly test from anywhere at any time. That’s great for increasing access to COVID testing, but it presents a unique problem for our digitally read test: users may not get an accurate result (or any result at all) if they test in a poor lighting environment, and they only have 5 minutes before a test stick expires. Our machine learning algorithm relies on the phone camera to interpret test results, so lighting is crucial to the success of a test. How do we design a solution that ensures users are in an ideal lighting environment?

 

Through user research, we learned that most users aren’t familiar with the concept of scanning for a COVID test result, let alone the ideal lighting environment for it. This frustration is further aggravated when users find out that they only have 5 minutes to scan the test stick before it expires. After many discussions with the Design and Product teams, and based on our user research data, we concluded that it is unrealistic to offload the “find a good environment” task onto users during the critical 5-minute scanning window. Instead, I proposed that we educate users beforehand, ideally multiple times.

As a result, Home Lighting Check was born. This is an additional scanning step at the very beginning of the testing process. My intention was to set users up for success as early as possible, in this case before they even start performing the test.
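To make the idea concrete, here is a minimal, hypothetical sketch of the kind of check a lighting step might perform. The threshold, the luma weights, and the `average_luminance` helper are illustrative assumptions, not Scanwell’s actual algorithm:

```python
# Hypothetical sketch of a lighting check: average the perceived
# brightness of a camera frame and compare it to a minimum threshold.
# The cutoff value and weights are illustrative, not production values.

MIN_LUMINANCE = 0.35  # assumed cutoff on a 0.0-1.0 scale

def average_luminance(frame):
    """frame: list of (r, g, b) tuples with components in 0.0-1.0."""
    # Rec. 709 luma weights approximate perceived brightness.
    total = sum(0.2126 * r + 0.7152 * g + 0.0722 * b for r, g, b in frame)
    return total / len(frame)

def lighting_is_adequate(frame, threshold=MIN_LUMINANCE):
    return average_luminance(frame) >= threshold

dim_room = [(0.1, 0.1, 0.1)] * 100
bright_desk = [(0.8, 0.8, 0.7)] * 100
print(lighting_is_adequate(dim_room))     # False: too dim to scan
print(lighting_is_adequate(bright_desk))  # True: bright enough
```

In practice a check like this would run continuously on the live camera feed, prompting the user to move somewhere brighter before the timed portion of the test ever begins.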

This feature went through multiple iterations, as shown in the Figma file below. Each version was tested with real users (n ≥ 30) through UserTesting.com and evaluated using task analysis. As scanning is the most critical step of the Scanwell experience, the Home Lighting Check has become a major part of the app and is well received among users. I was involved in the design of every iteration, including the video production for the current version. Our scanning success rate has increased by 21.4% since we added this feature.

This is a LIVE Figma file; you can see the entire file by clicking “Page” in the bottom-left corner.

Challenge NO. 2

Our test entered the market in October 2021, when schools, workplaces, and restaurants often required people to show a negative test to enter an indoor space. Because of that, we saw an unexpected number of repeat users from the moment we launched, many of whom tested weekly or more frequently. The design at the time required users to watch every video instruction, every time they tested, before moving on to the next step. This could easily turn a 5-minute test into a 15-minute one, even though it did promote the proper performance of each step. Customer complaints started flowing in as early as the second week after launch, so the design team was urgently tasked with designing a shortened testing workflow without compromising test accuracy.

video demo of the shortened workflow

view clickable prototype I made

 

There were several constraints on this project:

  • The workflow was already FDA authorized and had been tested through rounds of clinical and usability studies, so the shortened workflow had to be low-risk in terms of usability heuristics.

  • The engineering team was working on several major feature updates, and not much work could be added to the upcoming sprint; the solution had to require relatively minimal dev time.

  • Time was our enemy: the design needed to be finalized within a week.

With all the constraints in mind, I worked with 2 other designers to brainstorm solutions. Some of the ideas we considered were: cutting down instruction copy, combining smaller steps into one big step so there are fewer steps in total, and shortening each video. All of these were ultimately ruled out because they felt like less-than-thoughtful solutions that might solve one problem now but cause more down the line.

Based on internal and cross-functional team feedback, we ultimately went with a “looped video preview” solution. Instead of a still image thumbnail, the screen now automatically plays a looped video (like a GIF) that highlights the most important parts of each step. This design allows repeat users to be reminded of key points (recognition over recall!) while still giving them the option to watch the full video. In this iteration, we defined a repeat user as someone who has completed the test successfully twice. Of course, frequency and cadence are equally important in determining whether a user can be considered experienced, and those factors will be included in the next iteration.
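As a rough sketch of that rule, the logic below mirrors the two-successful-tests definition above; the function names and cutoff constant are illustrative, not the production implementation:

```python
# Illustrative sketch: decide whether a user sees the looped video
# preview (repeat users) or the full instructional video (everyone else).
# "Repeat user" = two or more successful past tests, per the definition above.

REPEAT_USER_MIN_SUCCESSES = 2

def is_repeat_user(successful_tests: int) -> bool:
    return successful_tests >= REPEAT_USER_MIN_SUCCESSES

def instruction_mode(successful_tests: int) -> str:
    # Repeat users get the looping preview but keep the option to
    # watch the full video; new users watch the full video by default.
    return "looped_preview" if is_repeat_user(successful_tests) else "full_video"

print(instruction_mode(0))  # a brand-new user
print(instruction_mode(3))  # an experienced repeat user
```

Folding in frequency and cadence, as planned for the next iteration, would simply mean passing more signals into `is_repeat_user`.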

As for our next steps, my team plans to conduct an A/B test with repeat users in Q1 2022. We will measure task completion, ease of use, and level of confidence. We also plan to test with first-time users, to examine the possibility of implementing this design for all users.

Challenge NO. 3

The result screen is arguably the most important screen of the entire testing experience. After all, users purchase our product to find out whether or not they have COVID-19.

There were multiple challenges associated with the design of the result screens. First of all, because we’re providing medical diagnoses to users, the content can be lengthy. The result screen language needs to be comprehensive, and making it empathetic and intuitive on top of that was no easy task. Secondly, the FDA requires certain things to be said in certain ways, which can include jargon and convoluted sentences. In everything we did at Scanwell, and especially on the result screens, regulatory risks were important to consider. Last but not least, we wanted to build for the future of Scanwell. At that time, we were also establishing our first design system and UI templates. We wanted the result screens to be consistent yet adaptable, simple yet not oversimplified; ideally, we could apply the same design to other tests we onboard in the future.

 

Our general approach was to lay out all the information that needed to be included on a piece of paper, then rank-order each item based on its importance. This helped us determine the information hierarchy of the design before diving into visuals. Shown below are some of the iterations we’ve designed since February 2021. Each iteration was tested with at least 30 users (200 for v1, from clinical studies) via remote observational studies, conducted by one UX researcher and me.

From testing v1, we learned that users had no problem understanding whether they got a positive or negative result. However, very few understood the meaning of a rapid test result or the exact next steps to take. Is a second test needed? How long should the self-isolation be? Beyond the result itself, users didn’t know the answers to these questions when asked, and v2 was designed based on these usability findings.

In v2, we combined the “result explanation” paragraphs with the result. By doing so, we hoped to naturally guide users’ attention to the fine print after they read their test results. We also made the FAQ sections collapsed accordion cards, rather than expanding them all by default, to reduce the cognitive load that this lengthy screen imposed on users. As usability results came in, we were happy to find that users stayed on the screen longer and result comprehension improved overall. However, it was still unclear to many users what next steps they should take.

So we began another round of redesign, this time specifically targeting the unclear messaging around next steps. Through talking to users, we learned that most users don’t care about the medical definition of a “negative” result. They know that it means they don’t have COVID, and that’s good enough. However, when we asked, “Did you know that a second test is needed for an accurate result?”, most people said no and that they would prefer to have that information stated more clearly by the app. As such, we decided to further simplify the screen by pushing the medical definitions and FAQs onto a secondary page. This freed up more space for the next steps, which we renamed “critical next steps”. When users first land on this screen, they’re no longer distracted by the big, green NEGATIVE, but can instead focus on the rest of the content. This design was also driven by WCAG (accessibility guidelines), as the text now adapts much better to increased font sizes if/when a user chooses to use them.

At the time of this design sprint, we were also tasked by the Product team to design a set of result screen templates that could be used for future qualitative and quantitative tests. I led the research and design of these templates, along with my intern, and v3 was partially informed by that project as well.

Other Projects

 

Reflecting back, it’s hard to believe how far we’ve come from conception to launch. While designing to solve complex challenges gave me some of my favorite moments at Scanwell, those aren’t all I worked on. From research to design to hosting workshops, below are some of my other proudest projects.

  • Competitive research reports (see example) where I collaborated closely with my UX Research team to dive deeply into other at-home COVID test products. We completed a total of 8 competitive analyses in the span of 3 months, and many of the findings informed product and design decisions regarding our own product.

  • Server-driven UI App Builder. Our backend engineering team proposed that, to scale the Scanwell app into a true medical hub spanning tens or hundreds of tests, the app needed to be built in a sustainable way. One approach we considered was server-driven UI, where the entire app UI is assembled from a builder form. This was an exploratory project that didn’t end up succeeding, but it was such a learning experience to design beyond the app, work extremely closely with engineering, and understand how the backend of an app functions.
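To illustrate the core idea of server-driven UI (the screen names, keys, and component types below are invented for illustration, not the schema we actually explored): the server sends each screen as plain data, and a thin client maps component types to renderers, so new screens ship without an app update.

```python
# Hypothetical sketch of server-driven UI: the server emits a screen
# description as data (like the JSON a builder form might produce),
# and the client renders whatever components the payload describes.

def render_text(props):
    return props["value"]

def render_button(props):
    return f"[{props['label']}]"

# Registry mapping component types to renderers; adding a new
# component type means adding one entry here, not a new screen.
RENDERERS = {"text": render_text, "button": render_button}

def render_screen(spec):
    """spec: dict describing a screen, as a builder form might emit."""
    lines = [spec["title"]]
    for component in spec["components"]:
        renderer = RENDERERS[component["type"]]
        lines.append(renderer(component["props"]))
    return "\n".join(lines)

screen = {
    "title": "Step 1: Wash your hands",
    "components": [
        {"type": "text", "props": {"value": "Wash for 20 seconds."}},
        {"type": "button", "props": {"label": "Next"}},
    ],
}
print(render_screen(screen))
```

A real mobile client would map these specs to native views rather than strings, but the design tension is the same one we ran into: the schema has to anticipate every interaction the UI might need.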

  • Accessibility has always been a topic of interest at Scanwell, so, putting on my designer hat, I decided to host a Design Jam around the topic. During the 1-hour session, I guided team members through learning more about accessible design, brainstorming ideas, and completing an impact-effort matrix. I then presented our findings to the product team as well as the company all-hands.

  • Scanwell’s first-ever (still evolving) design system! It has gone through many rounds of iteration based on conversations with Product/Engineering and WCAG accessibility audits, and is now implemented in the production app. To see the UI elements and various templates, click “Page” in the bottom-left corner.

My Takeaways

 
 

We shipped it!! (honestly still pinching myself!)

I feel extremely lucky to have worked on such an important product of our time, and am very proud that my design is contributing to the health and safety of so many people. This project (and many others I worked on at Scanwell) further confirmed my passion for tech and design for good. I want to empower people, through health-tech or other fields, to live more fully. That’s my biggest motivation.

More than anything, my year at Scanwell was filled with learning and growth. Transitioning from grad school to industry, I am eternally grateful to have met some of the best mentors and coworkers I could ever ask for. I loved being scrappy, relentlessly resourceful, and seeing the impact of my work.

Onto the next big thing!✨