Facilitating communication during epidemics
Role: UX researcher/evangelist and designer
Timeline: September - December 2018
Skills & tools: User interviews, moderated in-person usability testing, remote unmoderated usability testing (usertesting.com), wireframing & high fidelity mockups (Sketch), prototyping (InVision).
Context: User Interface Design & Development class project @ UC Berkeley School of Information.
Users: Patients, nurse case managers, and epidemiologists.
Synopsis: EpiSync is an app prototype that enables fast, accurate data collection, communication, and response to epidemiological events for users who struggle with a lack of timely information and coordinated communication. My unique role was evangelizing UX research within my team, as it was not a class requirement. I recruited participants and conducted user interviews with county- and state-level staff and public health students. From this research, we produced wireframes and prototypes that were tested with users throughout the process. My teammates were unsure of the value of UX research at the start of the project, but by the end they were proud of our work and promoting its value to others.
The main challenge of this project was comprehensively understanding the field. For example, processes differed depending on the state, county, or local agency. We had to understand unique pain points for specific people as well as pain points that persist more broadly across the field. The synthesis of the user research directly shaped our feature development and design decisions. The final deliverable was a working prototype of a comprehensive app, along with a presentation.
In 2016, California's Division of Communicable Disease Control reported approximately 20 million cases of infectious disease, affecting 1 in 2 Californians. Even diseases that are relatively rare in the United States, like TB, affect almost 6% of the population.
We used the double diamond approach to guide our exploration.
How might technology help the people who are involved in fighting epidemics? This is the question my team and I set out to answer during our semester-long User Interface Design & Development class project.
At first we had no idea where to start because the field was so complex and niche. We knew absolutely nothing about this space, so we started looking at as many secondary sources as we could:
the CDC website
local health department websites
internal presentations from health departments posted online, and so on.
We learned that the primary stakeholders involved in stopping potential epidemics were patients, nurse case managers, and epidemiologists.
Despite our secondary research, we still had questions. How might technology help stop outbreaks before they turn into epidemics? The day-to-day process for handling cases of people experiencing symptoms wasn't documented online. We needed to understand the context and perspective of the people on the ground before we could answer this question. I recruited 4 participants to interview. For the patient perspective, we did extensive desk research, including written and YouTube patient accounts collected by public health agencies. I conducted 2 of the interviews solo, and my designer colleague (who was focusing on the epidemiologist experience) accompanied me for the other two.
This was the first time I'd co-conducted interviews with another person, so I learned a lot about how to do it better. It's not enough to make an interview guide together and then head into the interview. Reviewing it together right before the interview would have ensured we were on the same page about the flow of questions, what each of us wanted to get out of the session, and how to organize our time.
Insights & Analysis
After each interview, we discussed the findings together and pulled out key insights, areas of opportunity, and current frustrations.
Phases of disease management (notification, investigation, controlling spread) are not serial: in reality there is blurring between investigating and controlling the disease, and case managers enter and access information from these phases fluidly over the lifetime of a case.
Logistical circumstances under which information is collected:
Different environments (at home, at clinic, over the phone, texting)
Gathered from many sources: CDC disease info, lab results, local or state health agencies, patient info and medical history
Key information is passed between colleagues verbally and informally when one person overhears the conversation of a colleague sitting in the same physical area. This chance occurrence is a large determining factor in how quickly a source of the outbreak is found.
This information typically is not logged anywhere digitally; many of the case details live in the head of the case manager.
We developed jobs-to-be-done (JTBD) statements for each user profile, based on our research. JTBD statements describe the job the product is helping the customer complete. They are an alternative to personas that:
Make assumptions explicit
Link motivation with goals
Establish the context of use, rather than stating demographics and preferences the way personas do (which is less helpful)
Don't presuppose the solution; there could be many ways of resolving each job
Feature generation and mapping
We each generated a set of features that would accomplish our job statements. Because the different experiences needed to interact with each other, we mapped which features were related between the patient, case manager, and epidemiologist.
We iterated on several wireframes for each feature, to generate different design approaches. I focused on navigation, information architecture, and data visualization.
Testing - Round 1
I conducted 3 moderated, in person usability sessions for the case manager experience. The goal was to gather general feedback about the app, its value, and the context of use. I wasn’t looking for feedback on specific aesthetic components of the design yet.
Findings - Round 1
The profile icon didn't signal that it was specific to the patient (and not the user's own profile). The contrast wasn't strong enough for users to know when they were currently on the profile section.
The phone icon was too broad a representation; users thought it could have many purposes, including the intended 'contacts list'.
The Rx/milestone features didn't align with how case managers approach their work. Although users wanted to see medication and lab results, the case manager does not directly enter or manage that information. Those details come from the doctor and the lab.
We focused too much on information entry and retrieval, when the case manager is really a conduit for communication.
There is much sporadic communication between patients, doctors, epidemiologists, and the case manager. The case manager's job is to manage communication and understanding across these different people, leveraging that for a successful outcome. This finding was so critical that I updated my job statements. While making design decisions going forward, the JTBD statements were now an accurate resource to refer back to.
Testing - Round 2
Based on the previous findings, I iterated on my wireframes, converting them into high-fidelity mockups. I conducted 3 unmoderated, remote usability sessions via usertesting.com. The goal was to evaluate specific interaction designs, the copy, and lightweight visual design elements.
Findings - Round 2
Better recognition of icons; users understood what each section contained and where they were in the app at all times.
Intuitive information architecture for lab results and medication information; users found this information on the first attempt.
Case managers wanted more detailed disease and medication info, so they could answer any questions that arise from their patient and correspond knowledgeably with their patient's doctor.
Users didn't understand how the map could be a useful part of their workflow, because they didn't discover all parts of the flow during testing. They only clicked on the map itself, and didn't click on the pins at the side of the map, which kickstarted the workflow of adding contact information.
This was my first experience running unmoderated usability tests. I learned how important it is to have clear tasks and to test them out beforehand on the same technology participants will be using. One of my tasks didn't work because usertesting.com renders differently on mobile vs. desktop.
It was tricky to weigh the tradeoff between having a participant go through the whole arc of an interaction vs. breaking it up into pieces. I broke the map interaction into pieces, and users didn't have enough context to complete the task.
Based on the usability findings, we iterated on our designs, culminating in a comprehensive web / tablet / mobile interface.
Since this was a class project, we all technically had the role of designer. However, each of us gravitated towards the role that fit us best. I took on the role of user researcher, and my two colleagues took on the roles of designer and product manager. Throughout this project, we worked extremely closely with each other. This project was particularly rewarding because we learned a lot from each other's strengths. For example, my design teammate pushed me to think of more creative solutions, while I pushed her to look to our user research findings when developing those solutions. Our third teammate, who took on the role of product manager, helped move us forward when we got stuck in the weeds. We very much complemented each other, and gained a deeper appreciation of each other's fields.