MENTOR Assessment Web App
Role: Lead / User researcher & interaction design
Team: Engineer, junior designer
Timeline: Jan 2017 - April 2018
Skills & tools: Interviews, user scenarios, prototyping
The company: CiviCore builds enterprise customer relationship management (CRM) systems for nonprofit organizations.
Synopsis: Our client, MENTOR, is a national organization that provides resources, best practices, and support to a network of smaller mentoring agencies around the country. The project rolled out the National Quality Mentoring System (NQMS) assessment to all of these affiliated agencies. The assessment asks 92 questions aimed at providing a consistent mechanism for measuring the quality of mentoring programs. Before this project, agencies completed their assessments in a variety of unstandardized ways, from Excel sheets to online surveys that were hard to use and didn't track historical trends. The goal was to unify and standardize the assessment experience. A complication was that we were designing the assessment within an existing information system. As team lead, I developed the project timeline, scope of work, and budget, and coordinated with client-side stakeholders. My team and I interviewed users, generated scenarios and user flows to prototype, and ultimately transformed a complex information ecosystem into a concise, cleanly designed system.
To create an interface for a 92-question assessment, divided into 26 sections, for a diverse set of users and stakeholders.
I interviewed 10 users multiple times throughout the year. These users were staff members from smaller mentoring organizations, as well as staff from the National Mentoring Partnership. They had different levels of engagement with the assessment and different goals. From the interviews, I developed a set of three distinct personas representing who would use the system.
We interviewed users to find out what tasks they perform when filling out the assessment. These conversations can be hard because people often describe what the solution should look like instead of what their needs are. I focused on asking questions that got at the underlying need (e.g., what value does a requested feature provide, and why is it important?). I used the laddering approach, repeatedly asking "why" questions.
Based on the interview data, with our personas in hand, we worked with the client to develop the main user journey. We took note of major questions and comments we needed to answer through our next round of interviews. Here is one example:
Site map + information architecture
We then broke down the user journey into the primary tasks a user would need to take in the system. This was helpful to do before thinking about the interface, so we could separate the tasks from the design.
Prototypes and testing
Once we had a good sense of the tasks users needed to accomplish, we developed a simple working prototype of the entire assessment experience to gather feedback and refine the design.
We tested with users and learned that:
✔ It was easy for users to take the assessment in one sitting;
✘ It wasn’t easy to take it in smaller chunks, over time.
We learned that staff had different areas of expertise, so it was likely that one person on the team would complete a couple of sections and then ask a colleague to complete the sections they knew better.
We updated the design to reflect our findings, structuring the assessment so that users could easily access a single question at a time and view a snapshot of the whole assessment. We continued iterating on the design for each subsequent layer of functionality. In this project, dividing the work into stages worked best because it helped the client provide focused feedback.
Here are the key interactions when users take the assessment.
Test assumptions and ideas as soon as possible -- had we tested even earlier in the process, we would have cut down on rework.
If I had the opportunity (i.e., the resources), in-person observations would have given us more useful information than interviews alone. People often aren't aware of how they do tasks, so they can't express it verbally. Observing users would have uncovered these idiosyncratic behaviors and better informed our prototypes early on.
Getting insight from other team members and colleagues is crucial - nothing replaces explaining your solution to someone with a fresh perspective and letting their feedback expose the gaps.