The Loop: Feedback Frameworks

Over the past few months, we’ve gotten quite a few questions from our communities about how we compile and use feedback on our product teams. One of our Q1 goals was to gather and share with you the structure behind the various feedback mechanisms we use to build products.

After speaking with subject matter experts, we decided to take a step back. In this post, we list and organize our methods of feedback into a matrix. The goal is to offer a clear framework to follow and to identify areas that could use bolstering with alternative methods. Some of these feedback types belong in more than one quadrant; in those cases, we included each with the research type it most resembles.

We see feedback as falling into a few types, each of which is either qualitative (descriptive and conceptual) or quantitative (counted, measured, and expressed using numbers).

|  | Qualitative | Quantitative |
| --- | --- | --- |
| Intent/desire exploratory | Exploratory user interviews; New feature requests; Comparative site analysis; Historical archive research | Traffic and usage data analytics; Exploratory surveys |
| Attitude/perception product experience | Product experience user interviews; User discussions (meta, mod team, chat, mod council); CM feedback; Surveys (site satisfaction, UX surveys, mod surveys) | Site satisfaction survey |
| Action/behavior product usage | Usability studies; Bug reports; Support requests; Feature enhancements | Usage analytics (feature usage, engagement, and task completion); A/B testing; Observational testing; Data Team analyses |
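For readers who think in code, here is a minimal sketch of the matrix above as a plain data structure, so methods can be filtered by dimension or by data type. The names and structure are purely illustrative, not an internal Stack Overflow tool.

```python
# Hypothetical sketch: the feedback matrix as a plain data structure.
# Keys are (dimension, data type); values are the methods in that cell.
FEEDBACK_MATRIX = {
    ("intent/desire", "qualitative"): [
        "Exploratory user interviews", "New feature requests",
        "Comparative site analysis", "Historical archive research",
    ],
    ("intent/desire", "quantitative"): [
        "Traffic and usage data analytics", "Exploratory surveys",
    ],
    ("action/behavior", "quantitative"): [
        "Usage analytics", "A/B testing",
        "Observational testing", "Data Team analyses",
    ],
    # ...remaining cells follow the same shape.
}

def methods(data_type):
    """Every method in one column of the matrix (e.g., 'quantitative')."""
    return [m for (_, dt), ms in FEEDBACK_MATRIX.items()
            if dt == data_type for m in ms]

print(methods("quantitative"))
```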

Intent/Desire, Exploratory, or “Things I want to see”

These are feedback methods that allow us to better define the “what” we build for our users. We use them to understand gaps in the product and to explore new ideas.

Qualitative

Exploratory User Interviews

The Stack Overflow User Research team spends a lot of time interviewing users 1:1. One of the things they talk to our users about is specific pain points. We work to understand the “why” or the “what” users are trying to achieve so we can come up with solutions that cater to their needs.

New Feature Requests

Requests for new features or new product ideas are another input that drives exploratory research. We often see trends around tasks that users need to accomplish but can’t with the current set of features or tools. These requests come from many places: Meta, user interviews, social media, and internal teams.

Market Research

Our marketing team keeps a close eye on what is new and upcoming in the industry, what competitors have released, what is popular, and what isn’t resonating in the market.

Comparative Site Analyses

Our product team performs similar market research. They keep a close eye on products similar to ours, identify the strategies and features that have been differentiators in the field, and then interpret what those mean for our products and what we can learn from them.

Historical Archive Research

We have ten years of data and 172 sites on the network. While we are thinking in the future tense, it’s important to keep in mind that we’ve performed many experiments in the past that we can continue to learn from. We’ve built and launched features that have been successful, as well as features that fizzled on release, and we should keep that historical knowledge accessible for future decisions. There are also historical threads on Meta that we learn from. A priority is learning from our successes and not repeating the same mistakes.

Quantitative

Traffic and Usage Analyses

Numbers and trends are helpful when we approach designing and developing new features, as we can learn a lot about how to build forward from how existing features are currently used. For a hypothetical example, if a lot of people are using specific search terms when looking for posts, we can extrapolate that this is an interesting area of discovery and discuss how we could make surfacing that type of content easier in the future.
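To make that hypothetical concrete, here is a minimal sketch of that kind of search-term analysis. The log path, CSV layout, and column name are assumptions for illustration, not our actual analytics schema.

```python
# Hypothetical sketch: surface the most frequently used search terms
# from a query log. "search_log.csv" and its "search_term" column are
# invented for this example.
import csv
from collections import Counter

def top_search_terms(log_path, n=20):
    """Count how often each normalized search term appears."""
    counts = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["search_term"].strip().lower()] += 1
    return counts.most_common(n)

for term, hits in top_search_terms("search_log.csv"):
    print(f"{hits:6d}  {term}")
```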

Surveys

One-off surveys are great ways to both validate new ideas and dig into the needs of our users. For example, often we will add specific questions to our Site Satisfaction Survey to get information about ways our users would like to see improvements, or to kick the tires on new ideas.

Attitude/Perception, Product Experience, or “How users feel”

Getting feedback from users about product perception and experience is an important input towards how we determine what to build moving forward. There are quite a few channels we utilize for this (discussed below).

Qualitative

Product Experience User Interviews

Our product and user research teams utilize interviews as feedback mechanisms often. You can read more about their process in this “The Loop” blog post.

User Discussions

We are lucky to have products that facilitate discussion about themselves. These conversations happen across Meta sites, in the Stack Overflow for Teams instance our moderators use, and in chat. We treat all of them as valuable inputs when determining how best to update and improve existing features.

Community Manager Feedback

Community Managers are on the front lines of our product, engaging directly with the community, and they have a unique perspective and expertise. They are embedded with our product team and not only provide feedback at every stage of product development, from strategy through delivery, but also surface new insights to the product team on a regular basis.

Open Ended Survey Responses

Many of our surveys include open-ended questions that allow for long-form answers. Users often share valuable feedback there that we can use to make informed product decisions across the network.

Quantitative

Site Satisfaction Survey

Our Site Satisfaction Survey runs on Stack Overflow and is delivered to both logged-in and anonymous users. Each month, we analyze how satisfied both groups are and use that data both to set goals and to evaluate newly released features.
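As a minimal sketch, a monthly comparison like that could look like the following, assuming a simple table of survey responses. The columns and values here are made up for illustration.

```python
# Hypothetical sketch: average monthly satisfaction for logged-in vs.
# anonymous respondents. The DataFrame is invented sample data.
import pandas as pd

responses = pd.DataFrame({
    "month":        ["2020-01", "2020-01", "2020-01",
                     "2020-02", "2020-02", "2020-02"],
    "logged_in":    [True, False, False, True, True, False],
    "satisfaction": [4, 3, 4, 5, 4, 3],  # e.g., a 1-5 scale
})

# One row per month, one column per user group.
monthly = (responses
           .groupby(["month", "logged_in"])["satisfaction"]
           .mean()
           .unstack("logged_in")
           .rename(columns={True: "logged_in", False: "anonymous"}))
print(monthly)
```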

Action/Behavior, Product Usage, or “What Users Do”

This pertains to feedback about features that are already built and live on our platform: learning how people use them, where they run into trouble, and how they could have been designed more intuitively.

Qualitative

Usability Studies

Our product designers spend a lot of time with community members one-on-one. One thing they do is watch users work through activities on our sites to learn how easily tasks come to them, whether any features are hard to figure out, and whether things are being used as designed.

Bug Reports

Another good way to determine what people are doing on our sites (or not doing, for that matter) is to monitor bug reports as they come in. Bug reports not only tell us about things that need to be fixed; they also tell us how people are using our platform. For a hypothetical example, if an obvious bug goes unnoticed by users for a few months after a feature is released, it’s likely that people aren’t using that feature as designed, or at all.

Support Requests

Similar to bug reports, areas where users often get stuck and ask for help tell us that a feature is poorly designed or has poor usability.

Quantitative

Usage Analysis

Our analytics data can also tell us, in a quantitative way, how usable features are. We can see where people drop out of workflows, which areas of the site are used the most, and how features perform after they are released.
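As a minimal sketch, a workflow drop-off analysis like the one described might look like this. The step names and counts are invented for illustration, not real Stack Overflow numbers.

```python
# Hypothetical sketch: how many users survive each step of a workflow,
# relative to the step before it. All figures are made up.
funnel = [
    ("viewed the ask page",        10_000),
    ("started writing a question",  6_200),
    ("previewed the post",          4_100),
    ("submitted the question",      3_300),
]

for (step, count), (_, prev) in zip(funnel[1:], funnel):
    print(f"{step}: kept {count / prev:.0%} of the previous step "
          f"(lost {prev - count:,} users)")
```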

A/B Testing

We frequently use A/B tests to introduce new features or test different interpretations of the same feature with a portion of our users. We can then analyze how different variants perform against each other and make educated decisions about how to move forward.
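One common way to compare two variants is a two-proportion z-test. Here is a minimal sketch using only the standard library; the counts are made up, and this is not a description of our actual experimentation stack.

```python
# Hypothetical sketch: did the variant's conversion rate differ from
# the control's? Two-sided two-proportion z-test, standard library only.
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Return (z statistic, two-sided p-value) for H0: p_a == p_b."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # 2 * (1 - Phi(|z|)) expressed via the complementary error function.
    return z, math.erfc(abs(z) / math.sqrt(2))

# Invented counts: 480/5,000 control conversions vs. 530/5,000 variant.
z, p = two_proportion_ztest(480, 5_000, 530, 5_000)
print(f"z = {z:.2f}, p = {p:.3f}")
```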

Data Team Requests

Our data science team has decades of combined experience finding patterns and proving or disproving hypotheses. We use our internal instance of Stack Overflow for Teams to queue up requests for them to take deeper dives into large datasets.

What we have learned

Putting this matrix together not only helped us visualize all the methods our product team uses to fuel product decisions, but it also allowed us to identify where methods are missing and where we’re a bit light for certain user segments. We found the process useful and enlightening, and we wanted to share the knowledge with you. In May, we’re going to share a feature that was built through these processes, and how they drove decision making through discovery and design.

For further discussion, head over to this MSE post.
