UX Research

Qualitative Usability Study

Building better reporting tools for nonprofits


Project Overview

Role: UX researcher*

Timeline: Feb - May 2018

Skills & Tools: Interviewing, moderated usability testing, affinity diagramming.

*My context: I was a full-time project manager, but I got buy-in to do this as a side project because I saw an opportunity to improve part of a core product. The company is just getting into UX, so there are currently no designers or other researchers. This was the company's first UX research study, and I collaborated with a product manager throughout.

The company: The company builds custom enterprise CRM systems for nonprofit clients.

The product: The product helps mentoring organizations manage their day-to-day processes, including data and workflow management. This project is under NDA, so the product name is omitted.

The users: Staff at mentoring organizations include coordinators, managers, and directors. Mentoring organizations are nonprofits that match people who have specific skills or knowledge (mentors) with those who need or want those skills (mentees). Big Brothers Big Sisters is an example.

Synopsis: Within the product, there is a reporting tool that helps users generate spreadsheets, charts and graphs, and reports of their data. This tool is crucial for our nonprofit users to quantifiably demonstrate their impact. As project lead, I developed the timeline and research questions, and decided which research methods best fit those questions. Through usability testing and interviews, I gathered insights that revealed:

  • Users' mental models were incongruent with the intent of the tool,

  • People were experiencing high cognitive load and taking too much time to complete tasks,

  • People sometimes did not trust the tool or their data.

Recommendations: I translated my findings into concrete design recommendations, starting with technically feasible tweaks that would significantly reduce confusion (i.e. "low-hanging fruit") and working up to a redesign that would better align with people's inclinations when using the tool.

Impact: As the first UX research study at the company, this project catalyzed steps towards increasing company-wide UX literacy.


Problem exploration

As is common at smaller companies, I wore many hats. One of these hats was providing advanced technical support for the product. I noticed that our support team was spending a lot of time responding to customer tickets about the reporting tool because users were confused. I saw an opportunity to do research on this feature because our nonprofit users relied heavily on the reporting tool's data to show their impact (and therefore gain funding). I compiled and analyzed quantitative data to understand the frequency of tickets by topic. This data helped me understand which types of reports were causing issues, but it didn't tell me why people were struggling.

Initial analysis of support ticket topics and frequency. People most frequently submitted tickets because of errors in the system. The second and third most frequent topics were issues with filtering data and reports containing inconsistent data.
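For context, the tally itself was straightforward. Here is a minimal sketch of how it could be scripted; the file and column names are hypothetical, assuming a CSV export from the ticketing system:

```python
import pandas as pd

# Hypothetical CSV export from the ticketing system,
# with one row per support ticket and a "topic" column.
tickets = pd.read_csv("support_tickets.csv")

# Count tickets per topic, most frequent first.
topic_counts = tickets["topic"].value_counts()
print(topic_counts.head(10))

# Express each topic as a share of all tickets to compare frequency.
print((topic_counts / len(tickets)).round(2))
```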

Methods

In addition to pinpointing usability issues, I aimed to understand how the reporting tool did or did not help people reach their goals in their day-to-day work. I included time for open-ended interview questions to address these broader questions. For the core of the study, I opted for a moderated, qualitative usability study. This method gave me the flexibility to ask questions throughout each session and probe deeper into why people behaved in certain ways (e.g. why they sighed or groaned).

Scenarios for usability testing

To thoughtfully develop scenarios in which people would use the tool to generate reports, I relied on the support ticket analysis mentioned above as well as interviews with our support managers, who have an intimate understanding of the users and their goals in using the tool. From this analysis I learned that:

  1. There are a few 'common' reports that all users are likely to need, but

  2. Users actually require reports of varying scope: for example, reports showing detailed, tactical data vs. high-level aggregate data.

This made generating scenarios tricky; I deliberately made assumptions about what 'typical' reports were, while staying aware that I might be asking people to perform tasks they wouldn't normally do. To account for these assumptions, I prepared "backup" scenarios and also had people show me a report they had recently built, so I could better understand how they naturally use the reporting tool.

Participants

I recruited participants from organizations of different sizes because large nonprofits have different data needs and behaviors than small ones. The reporting tool is one of the more complicated features in the product, so I recruited participants who had used it before so the study session didn't become a training session. I also recruited participants who use the tool for different reasons: mentoring coordinators typically use it for logistical tasks, while directors often use it for strategic decision-making. With a larger budget for incentive payments, I would have recruited more participants from each of these user groups.

Study participants, shown by how often they use the reporting tool and the size of their organization.

Researching!

I encouraged the product manager to observe the study sessions with me. We conducted sessions with 6 participants; ideally we would have recruited more, but the sample size was limited entirely by the budget for participation incentives. Each session was an hour long and conducted remotely, since our users are spread across North America.

Synthesis

At the end of the study, I had 6 hours of recorded interviews and usability tests. I reviewed each session, noting participant comments and observations about their behavior. I used affinity diagramming to cluster similar themes together, and then analyzed each theme inductively to arrive at three key insights.

Initial affinity diagramming, sorting quotes and observations into themes.

Findings

Finding #1: Cognitive load


Users face complicated choices because they are shown every possible option at once (menus, checkboxes, etc.), accompanied by non-specific labels and jargon-heavy instructions.

Finding #2: Incongruent mental model

The current design of the filtering feature did not clearly communicate its function or match how users conceptualized filtering their data.

Finding #3: Too much time

Users spend a lot of time trying to get the right data into their report. For example, people navigate back and forth between steps because they can't see the finished report until the end of the process.

Visualization shared with my team of a common user journey illustrating this finding: reports take too much time to create.

Taken together, these findings meant that:

  • people didn’t always trust the tool,

  • they sometimes felt inadequate when using it,

  • and when faced with these challenges, they reverted to what they were comfortable using: Excel.

This takes users out of the product, which means they don't benefit from the strengths the reporting tool does offer.

Design recommendations

Deciding how best to communicate my findings and recommendations was, surprisingly, the hardest part. While it was important to share insights to make the product better, it was critical to be sensitive to colleagues who had worked on the feature for several years.

To solve this, I relied on direct quotes from users to highlight opportunities for improvement. My colleagues had a lightbulb moment when they realized the disconnect between their perception of the tool and our users' perception. For example, we discussed how our team valued having control over the most detailed choices, while most users actually wanted choices made for them or their needs anticipated by the tool. A common reaction from colleagues during the presentation was, 'I can't believe people have been not understanding, but now it makes total sense why!'

Because the engineering team had limited resources, I organized my recommendations into three levels based on technical feasibility and impact on the user experience:

  • Low-hanging fruit / very technically feasible: refine UI elements to reduce cognitive load, make labels and instructions more accessible to non-technical users, create default options that match what users typically expect, improve the content of existing error messages, and add new messages where they would be helpful.

  • Short-term vision / technically feasible: provide example reports early in the process so users can concretely visualize what the final report will look like.

  • Long-term ideal vision / requires more technical resources: redesign the reporting tool so that it updates in real time (e.g. users can see their report as they build it rather than only at the end of the process).

Impact

This study helped propel conversations about our company's UX maturity and encouraged steps towards increasing our UX literacy. For example, the product team started a UX brown-bag program. The company is currently migrating the platform to a new codebase, which will provide an opportunity to implement the 'long-term vision' recommendations without as many engineering resources.

Retrospective

I thoroughly enjoyed this project, even more than I expected (if that's possible; I was pretty excited)! This was the first full research study I had done inside a company (as opposed to side projects), and it immediately confirmed that this is the work I want to do in my career. It was exhilarating to deeply understand users, communicate those insights to my team, build their empathy for our customers, and see the potential for product improvements.

I also learned a couple of lessons to apply to future projects. First, research findings don't speak for themselves; the researcher has to speak for them. Second, involve stakeholders early and often, so there are no unexpected questions at the end and everyone understands the strengths as well as the limitations of the study.


Project thumbnail from undraw.co