Labeling in analysis
How to let customers label feedback and manage those labels in Workspaces, the analysis tool of the GetFeedback platform.
Roles and responsibilities
• Product Design
• Interaction Design
• User Research
• Technical exploration
Value Proposition
Customer experience (CX) leaders and coordinators, customer service coordinators, designers, and product managers find it difficult to analyze and triage the text feedback they collect across multiple surveys to get valuable insights. By labeling that feedback into themes, topics, and trends, they can surface those insights faster and act on them.
Goals for the project  
Build a feature where users can create their own labels, manage them, and manually label their feedback in the analysis tool.
Due to the significant amount of feedback we received about the limitations of our existing feature, managing labels became a top priority. Our data showed that users were creating roughly 50 to 80 labels per account, making effective label management crucial. Additionally, this work will serve as a foundation for developing new features and establishing connections with other domains.
Background 
Analyzing open-text feedback and extracting insights is a major challenge for CX programs. GetFeedback's product offers several tools to address this challenge, such as the ability to create labels and aggregate results. However, customers currently face limitations with this feature, as labels cannot be migrated to Workspaces, GetFeedback's analysis tool. This prevents customers from using labels in their analysis and organizing their text analysis effectively. Additionally, the legacy product lacks the structure for complex analysis, creating a disjointed experience between the app and website versions. Overall, while GetFeedback offers solutions for text analysis, there are limitations that prevent customers from fully utilizing the product's capabilities.
What is Workspaces in the GetFeedback product?
GetFeedback's workspaces help users organize and manage feedback and survey projects by creating separate workspaces for different teams or projects, controlling access, and providing a centralized location for data. Users can create, customize, view, and analyze surveys and collaborate with team members. Workspaces offer an efficient and flexible solution for managing feedback and survey projects across teams and departments.
Problem definition
From analyzing the current product and reviewing feedback from previous research on text analysis, we understood that being able to label feedback can help our customers understand and analyze open feedback, assign feedback to stakeholders, and define priorities; in other words, be more actionable with this feedback. So, the problem definition would be:
"We assume that not being able to organize and categorize this feedback for further analysis is a problem for our users, because they can’t make sense of all the incoming feedback and act upon it."
Design Process
To begin the process, we developed design hypotheses and user stories, which we then tested through customer sessions to validate their effectiveness.

V1 Design Hypothesis 
The V1 Design Hypothesis includes several key beliefs. Firstly, we believe that users desire the ability to label their main feedback topics and structure those labels into categories or groups. Additionally, we believe that users require a dedicated space to create and manage these labels effectively. We also believe that it is crucial for these labels to be visible throughout the entire process, from responses to analysis and automation. Furthermore, users should have the ability to tag any feedback within Workspaces and easily filter and search those labels within the Workspace.
V1 User stories
Stakeholder management
Once the user stories and hypotheses were defined, the product manager and I approached the engineering manager and higher-level leadership to discuss incorporating the feature into the roadmap. However, we faced resistance due to concerns about the project's size, and because the team's capacity was needed for other priorities set by engineering and the business.

After extensive discussions and sharing our plans with other teams, we took into account their own roadmap needs around a label feedback system. As a result, we formulated a proposal for a more streamlined solution and committed to testing it with customers to determine its value for them.

In the event that the proposed solution did not prove valuable, we were prepared to step back and reevaluate our approach to text analysis. We would explore alternative methods of addressing the problems without relying on labeling.
V1 User actions definition and User flow 
Due to the potential for the project to expand beyond our capacity for the quarter, we broke down the planned actions for V1 and tested their value to customers.
• Create, delete, edit, and group labels.
• View an overview of all labels.
• View responses with their labels.
• Select a color for each label.
• Filter by labels across all categories.
• Manually add a label to a response.
• Remove a label from a response.
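To make the scope of these V1 actions concrete, here is a minimal sketch of the underlying data model they imply. All names (`Label`, `LabelStore`, and their methods) are hypothetical illustrations, not GetFeedback's actual API or implementation:

```typescript
// Hypothetical model for the V1 label actions: CRUD, grouping,
// and attaching/detaching labels on responses.
interface Label {
  id: string;
  name: string;
  color: string;
  group?: string; // optional group name, for label grouping
}

class LabelStore {
  private labels = new Map<string, Label>();
  // responseId -> set of labelIds applied to that response
  private responseLabels = new Map<string, Set<string>>();

  create(id: string, name: string, color: string, group?: string): Label {
    const label: Label = { id, name, color, group };
    this.labels.set(id, label);
    return label;
  }

  edit(id: string, changes: Partial<Omit<Label, "id">>): void {
    const label = this.labels.get(id);
    if (label) Object.assign(label, changes);
  }

  delete(id: string): void {
    this.labels.delete(id);
    // Deleting a label also detaches it from every response.
    for (const set of this.responseLabels.values()) set.delete(id);
  }

  all(): Label[] {
    return [...this.labels.values()];
  }

  byGroup(group: string): Label[] {
    return this.all().filter((l) => l.group === group);
  }

  addToResponse(responseId: string, labelId: string): void {
    if (!this.labels.has(labelId)) return;
    if (!this.responseLabels.has(responseId)) {
      this.responseLabels.set(responseId, new Set());
    }
    this.responseLabels.get(responseId)!.add(labelId);
  }

  removeFromResponse(responseId: string, labelId: string): void {
    this.responseLabels.get(responseId)?.delete(labelId);
  }

  labelsFor(responseId: string): Label[] {
    return [...(this.responseLabels.get(responseId) ?? [])]
      .map((id) => this.labels.get(id))
      .filter((l): l is Label => l !== undefined);
  }
}
```

Even in this reduced form, the sketch shows why the V1 cut was still a coherent feature: every listed action maps onto one small operation over two maps, which is also why the team could ship manual labeling before the label manager.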
Designing the Solution
Label Manager
A place for customers to manage their label taxonomy: group, delete, edit, and search labels. We placed it under the product's insights domain to validate whether that was the right home for it, or whether it belonged in another domain.
Manual Labeling 
V1 required that customers be able to manually label their feedback from the full response view in the analysis tool (called the full response panel). I also wanted users to do this quickly and easily, since this is the moment they analyze their responses. This was not the ideal scenario, but it was one of the first steps toward bringing labeling into our product.
Prototype 

Based on these decisions, I created a prototype covering all the defined use cases.
User Test
During the prototype testing phase, we conducted product value validation and user testing sessions with eight customers. Based on their feedback, we made some important observations. Firstly, users were able to navigate the prototype without any issues and found it to be intuitive and user-friendly. However, some users suggested incorporating additional features, such as the ability to use drag-and-drop gestures for label grouping. 
Additionally, users highlighted the need for more streamlined actions to merge repeated labels. Finally, they recommended that only assigned users should have the ability to manage labels, while any user should be able to label responses.

Final V1 
After receiving feedback from customers, I made a few small iterations on the prototype. Usability testing indicated that it was intuitive and easy to use. However, certain aspects were still unclear to users, so I collaborated with the content writer to revise the copy. We then began the handoff process with the team and split the project into two stages: first implementing manual labeling in the side panel, then adding the label manager. This approach allowed us to gather feedback from customers along the way and enhance their experience. Below are some final screen examples:
Selecting labels
Grouped labels
Creating labels
Grouping labels
Exploration in data viz for Labels results
Overall, the project has been a success in terms of creating a user-friendly and intuitive labeling system for our customers. The usability testing and feedback from customers have been positive. We have already seen early adoption of the manual labeling feature by some of our biggest customers, such as Carrefour.
Success Metrics
Our success metrics for the first few months include the adoption rate of manual labeling in the side panel, early adoption by our biggest customers, and the retention rate of the label manager tool. We aim for a 33-46% retention rate among customers who use the tool for at least 30 days.
Lessons Learnt
Through this project, we learned the importance of revising copy to make it clearer and more helpful for users. We also realized the need to break the project into smaller parts to gather feedback and add value to our customers' experience sooner.
Next Steps
Our next steps involve investigating the top priorities for automating the labeling of responses, individual response-level labeling, and data visualization for labels. Additionally, we will explore how AI can help with automated labeling, which would improve the efficiency and accuracy of the labeling process.