Framing hypothesis from users’ feedback

This case study illustrates the process of identifying initial problems through user testing, which helped the design team make impactful decisions to improve the product experience. All user data and quantities in this process have been adjusted in accordance with the company's NDA.


We initiated qualitative testing sessions to assess a specific product functionality using a Figma prototype. This allowed us to gauge the impact and comprehension of the function from the users' perspective.

We conducted a Q&A session after the testing to gather additional feedback. By posing specific questions based on users' interactions during testing, we could dive into their thoughts and gain a deeper understanding of their perspectives on the product.

Methodologies

Functionality testing, qualitative feedback, and analysis of the prototype's performance

Duration

15–20 minutes per user, depending on the complexity of the function

Participants

6 users within the product's target group per group, normally in 1–2 groups depending on the design alternatives.

Measurements

  1. Success rate
  2. Drop-off rate
  3. Completion time of each task (seconds)
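As a rough sketch, the three measurements above can be computed from per-session records. The data and field names below are hypothetical, not from the actual study:

```python
from statistics import mean

# Hypothetical session records from one round of testing (6 participants).
sessions = [
    {"user": "P1", "completed": True,  "dropped_off": False, "time_s": 42},
    {"user": "P2", "completed": True,  "dropped_off": False, "time_s": 55},
    {"user": "P3", "completed": False, "dropped_off": True,  "time_s": 90},
    {"user": "P4", "completed": True,  "dropped_off": False, "time_s": 38},
    {"user": "P5", "completed": False, "dropped_off": True,  "time_s": 120},
    {"user": "P6", "completed": True,  "dropped_off": False, "time_s": 61},
]

def task_metrics(sessions):
    n = len(sessions)
    # Success rate: share of participants who completed the task.
    success_rate = sum(s["completed"] for s in sessions) / n
    # Drop-off rate: share who abandoned the task partway.
    drop_off_rate = sum(s["dropped_off"] for s in sessions) / n
    # Completion time is only meaningful for successful sessions.
    avg_time = mean(s["time_s"] for s in sessions if s["completed"])
    return {
        "success_rate": success_rate,
        "drop_off_rate": drop_off_rate,
        "avg_completion_time_s": avg_time,
    }

print(task_metrics(sessions))
```

Comparing the average completion time against a predefined threshold is what lets the team judge whether a "successful" task was still too slow to count as understandable.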

Success Rate

The success rate tells us whether the basic performance of the function is understandable from the users' perspective.

Drop Off Rate

The drop-off rate helps us identify mistakes or 'surprises' in the designed user journey, so we can make sure users experience the expected paths.

Completion time

We believed it was not effective to gauge the design by the success rate alone; a deeper evaluation was needed to see whether the functions were understandable enough under the standard criterion (time).

Users’ Reaction

We observed specific user reactions during the tasks, such as delays or additional steps, then inquired about the reasons behind these reactions to find potential improvements.

Users’ Impression

We sought users' impressions of the function to find out whether they held the same expectations for the functionality.

Users’ Description

We asked users to describe the function and compared their descriptions with our original statement. This enabled us to assess its alignment with the product KPIs.

We organised all feedback from the testing and transformed it into cards. This approach allowed us to manipulate and arrange the feedback, identify insights, and group related insights to define initial problems.


Card Sorting

Following the conversion of feedback into cards, we organised them by adding hashtags. Duplicated hashtags prompted the creation of categories, enabling us to group related cards together. This categorisation facilitated the identification of patterns, allowing us to prioritise and define initial problems and make informed decisions on areas of focus.
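The grouping step above is essentially an inverted index from hashtag to cards. A minimal sketch, with hypothetical cards and tags:

```python
from collections import defaultdict

# Hypothetical feedback cards; notes and hashtags are illustrative only.
cards = [
    {"note": "Didn't notice the save button",      "tags": ["#visibility"]},
    {"note": "Expected confirmation after saving", "tags": ["#feedback-loop"]},
    {"note": "Save icon looks like download",      "tags": ["#visibility", "#iconography"]},
]

def sort_cards(cards):
    # Group card notes under each hashtag they carry.
    groups = defaultdict(list)
    for card in cards:
        for tag in card["tags"]:
            groups[tag].append(card["note"])
    # Hashtags shared by the most cards surface first —
    # these become the candidate problem categories to prioritise.
    return sorted(groups.items(), key=lambda kv: len(kv[1]), reverse=True)

for tag, notes in sort_cards(cards):
    print(tag, notes)
```

Sorting by group size is one simple way to prioritise; in practice the team may also weight by severity or task criticality.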


Swipe Left or Swipe Right

When evaluating functionality performance, we exclude subjective feedback that cannot be measured. Similar to the approach used in dating apps where we either 'Like' or 'Pass,' we filter through the feedback. This process ensures that we focus on the most impactful insights, allowing us to assess and measure functionality based on the defined criteria.
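The 'Like'/'Pass' filter can be sketched as a simple partition of cards on whether each piece of feedback maps to a measurable criterion. The flag and examples are hypothetical:

```python
# Hypothetical cards, each flagged by the team during review as to whether
# the feedback maps to a measurable criterion (success rate, drop-off, time).
cards = [
    {"note": "Took three attempts to find the menu", "measurable": True},
    {"note": "I just don't like the colour",         "measurable": False},
    {"note": "Gave up before the final step",        "measurable": True},
]

# Swipe right (keep) on measurable insights, swipe left (pass) on the rest.
kept = [c["note"] for c in cards if c["measurable"]]
passed = [c["note"] for c in cards if not c["measurable"]]
```

Only the `kept` cards feed into the functionality evaluation; the `passed` ones may still be worth noting, but not as measurable evidence.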


Framing Hypothesis

After evaluating the functionalities through user insights, we move on to enhancing function performance through design approaches. Aligned with our predefined criteria, we frame hypotheses as possible directions and outline how we can validate their accuracy. This ensures that the alternatives presented in upcoming design tasks are measurable and aligned with our objectives.


What’s Next

Informing future design decisions

We generate design tasks for the upcoming implementation based on the evaluation of the current functionalities. These tasks are then reviewed with the broader team to anticipate any potential technical limitations.

Addressing areas for further research

There might be insights that are not design-related; in such instances, we bring these observations to the stakeholders. Together, we determine whether further research is necessary.

More Testing

Sometimes we encounter situations where more targeted testing is required to refine the design direction based on defined problems. This is especially common in A/B testing, where decisions cannot always be made strictly based on predefined criteria. In such cases, adjustments are made, and further testing is conducted to ensure a more precise outcome.
