
Bringing Data to Roadmap Prioritization via MaxDiff Survey

Chart blurred to protect survey results. (TURF Analysis: the TURF chart helps determine which combination of capabilities would “reach” the largest audience. For example, from the chart above we could conclude that if we built the 5 features in option 1, we would “reach” 88.5% of our user base; if we built the 3 features in option 2, we would “reach” 51.3%. “Reach” is defined as the percentage of users for whom we would have built at least one of their top 2 most important features.) Learn more about TURF Analysis.

MY ROLE | Lead User Researcher:

  • Designed, built, and distributed the survey with Qualtrics

  • Created the unmoderated survey task in UserTesting

  • Analyzed survey results and generated a survey report

  • Analyzed unmoderated testing clips, created highlight reels

  • Translated a list of features into action-oriented user tasks

  • Provided roadmap prioritization recommendations based on findings

DURATION | ~3 weeks
IMPACT | Focused efforts on 2 roadmap items, de-prioritizing the rest. De-prioritized mobile design & development from the roadmap.
De-prioritized mobile design & development from the roadmap.

DIRECT STAKEHOLDERS | Principal Product Designer, Senior Product Manager

METHODOLOGIES | MaxDiff Survey, Unmoderated Testing


 

background & problem

The Incident Response team needed to find a data-centered approach to prioritizing their product roadmap. Without data, prioritization would have been entirely subjective.

research questions

  • Which features are most impactful to our users’ incident response journey?

  • What is the first thing users do when starting their incident response journey?

  • What role does mobile play in the incident response journey? Should we build a full-fledged mobile experience?

 

methods

1. maxdiff survey (n=269)

MaxDiff surveys are used to measure the preference and/or importance that respondents place on a list of items. In this case, we measured the importance that users of on-call/incident response tools place on a set of potential Incident Intelligence product capabilities. Learn more about MaxDiff Surveys.
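As a rough illustration of how MaxDiff responses become importance scores, the sketch below uses a simple count-based approach: tally how often each task is picked as “most important” versus “least important” across all choice sets. The task names and responses are hypothetical, and Qualtrics’ own scoring may use a more sophisticated model (e.g., hierarchical Bayes), so treat this as a conceptual sketch only.

  from collections import Counter

  # Hypothetical MaxDiff responses: each choice set shows a subset of tasks and
  # records which one the respondent picked as most and least important.
  responses = [
      {"shown": ["acknowledge_incident", "see_service_impact", "mobile_acknowledge", "recommended_next_steps"],
       "best": "see_service_impact", "worst": "recommended_next_steps"},
      {"shown": ["acknowledge_incident", "mobile_acknowledge", "notify_team", "recommended_next_steps"],
       "best": "acknowledge_incident", "worst": "mobile_acknowledge"},
      # ...one entry per choice set, per respondent
  ]

  best, worst, shown = Counter(), Counter(), Counter()
  for r in responses:
      best[r["best"]] += 1
      worst[r["worst"]] += 1
      shown.update(r["shown"])

  # Score each task from -1 (always least important) to +1 (always most important).
  scores = {task: (best[task] - worst[task]) / shown[task] for task in shown}
  for task, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
      print(f"{task:28s} {score:+.2f}")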

We translated the list of features into action-oriented, product-agnostic tasks that a user of any on-call/incident response tool would understand. With a list of 8 features, we needed a sample size of 180 to achieve statistical significance in the results.

We targeted two sets of users:

  1. Users of Splunk’s Incident Response product (n=206)

  2. Users of non-Splunk incident response products (sourced from UserInterviews and UserTesting based on screener criteria) (n=63)

In doing so, we were able to compare the results from these two user sets to determine whether the needs of Splunk Incident Response users differ from those of users of competitor tools.

A screenshot of the MaxDiff Survey on Qualtrics.

2. unmoderated task on usertesting

The design of a survey greatly impacts the results. Survey designs, much like product designs, need to be evaluated for usability, accessibility, and built-in biases/assumptions.

Creating an unmoderated task on UserTesting and redirecting users to our survey provided 2 benefits to the project:

  1. Pilot testing the survey
    I launched the UserTesting project to 2 participants at a time, reviewing the footage to catch issues with unclear language, usability, accessibility, and participant fit. After 3 rounds of pilot testing, we felt confident that we had caught most of the issues.

  2. Gathering qualitative data to answer the “why” behind participants’ survey answers
    Directing participants to “think out loud” through their survey responses allowed us to peer into their thought processes as they justified their answers. The qualitative findings from the “thinking out loud” of 63 survey participants added color to the survey results.

Screenshot of a highlight reel of participants taking the survey while “thinking out loud.” Created with UserTesting.


 

key insights & findings

The combination of qualitative and quantitative findings led us to some interesting insights. Here are a few examples:

our mobile experience does not have to be end-to-end, but it does play an important role

Participants noted that if they are on-call, they always have access to a laptop. While some found the prospect of a full-fledged mobile experience interesting, it was not something they expected from us. Our efforts would be best spent building other features that users found more important.

“It would be genius for mobile to do everything, but that it would be a heavy app... a lot of complexity.”

“Anyone on call will have access to a laptop or be very close to it. If they're not, they're not technically on call.”


Our quantitative data showed that [redacted]% of users believe that [redacted task] is the most important task to complete with their mobile device. After that, users would likely switch to their laptop/desktop for their more in-depth investigation.

“I’ll still most likely prefer to do it on my desktop app.. but at least to be able to [redacted], [redacted], and [redacted]…”

 

users are more concerned about service impact than user impact

Our users know which infrastructure services are most critical to their users, so they can use their knowledge to infer the user impact. This was supported by both qualitative and quantitative data.

“Identifying services impacted is more important than users.. identifying infrastructure impacted helps you to triage, narrow down the incident and find a solution faster.”

“Infra is definitely important. identifying users is great, but it might be a lot of users while still being a non-critical application... I’d rather solve the problem.”

 

users are wary of system-generated next steps and rely on personal and institutional knowledge instead

Quantitative data showed “system-generated next steps” was voted least important by our users. We dug into the qualitative data to learn more and discovered that users prefer user-curated next steps over system-generated ones. Users need to build trust in the system before they feel comfortable relying on system-generated next steps, and in some cases would flat-out ignore any of the tool’s suggestions.

“Recommended next steps is least important because I likely already know what to do, or I can check my wiki to see what to do next.”

“User-defined recommended next steps would be more trustworthy than system-generated; that means an engineer has come in and provided direction.”

 

build just 2 features to reach ~75% of target users

If we build [redacted feature], we could reach ~60% of our user base. If we also build [redacted feature], reach jumps to ~75%. By focusing our efforts on these 2 priority features moving forward, we would create significant customer value and address at least one top need for roughly three quarters of our users.
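To make the reach arithmetic above concrete, here is a minimal sketch of how TURF reach can be computed from each respondent’s top 2 most important features. The data and feature names are hypothetical (the actual results are redacted): reach for a candidate feature set is simply the share of respondents whose top 2 overlaps that set.

  # Hypothetical per-respondent "top 2 most important features" derived from the
  # MaxDiff scores; real feature names and data are redacted in this write-up.
  top2_per_user = [
      {"feature_a", "feature_b"},
      {"feature_a", "feature_c"},
      {"feature_d", "feature_e"},
      # ...one set per survey respondent
  ]

  def turf_reach(candidate, users_top2):
      """Share of users 'reached': at least one of their top 2 features is built."""
      reached = sum(1 for top2 in users_top2 if top2 & candidate)
      return reached / len(users_top2)

  print(turf_reach({"feature_a"}, top2_per_user))               # ~0.67 with the toy data
  print(turf_reach({"feature_a", "feature_d"}, top2_per_user))  # 1.00 with the toy data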

 

These are just a few of many findings. The full report is packed with more insights, quotes, highlight clips, and recommendations. Please contact me if you’d like to learn more.


 

impact

A few notable impacts:

  • A few designers and a PM on the Incident Response team had been defining and designing an end-to-end experience on mobile. After this report, the roadmap was adjusted to re-focus mobile efforts on just one phase of an incident: [Redacted task].

  • The team prioritized 2 of the 8 features based on the data.

  • After circulating the project process, I assisted another user researcher in executing a similar prioritization effort with her own MaxDiff survey.

 

challenges & learnings

  • This was my first time using unmoderated testing on a survey, and I will always take this approach in the future. No matter how many proofreads and feedback sessions you hold, nothing catches issues like usability testing.

  • Watching and listening to participants fill out the survey was invaluable in helping us make sense of the quantitative data. Upon receiving the quantitative report, the team jumped to conclusions about why participants answered the way they did. These assumptions were usually incorrect, and we might have operated on them if we didn’t have the qualitative data to fill in the gaps.

thank you…

Darren Lasso | Principal UX Designer

  • Darren knows exactly when to get User Research involved. He engaged with the research every step of the way, including joining in for the qualitative analysis. His “future of Incident Intelligence” document calls out quotes from users, links to UserTesting clips, and references the survey data to justify roadmap decisions.