The Loop #1: How we conduct research on the Community team
👋Hello! I’m Donna, the Community Design Lead at Stack Overflow.
Welcome to CHAPTER #1 of The Loop, a new blog series from the Stack Overflow Community team. This post is a deep-dive into our research approach: what it used to be, what it is now, and how it continues to evolve.
Learn more about The Loop in CHAPTER #0 (yes, our chapters are zero-based).
If you work on a product that’s ever benefited from research – whether that’s talking directly to users, analyzing experiment data, or any number of other research methods – you know how indispensable these inputs are for making the right decisions.
But how do you decide which methods to use and when? How do you know if you’re spending the right amount of time on research? How do you know when it’s time to change your research methods?
These are questions that the Community team has been grappling with, particularly in the last year. While we certainly don’t have all the answers, I’d like to share:
- What our research used to be
- What it is now
- How it continues to evolve
The early days
If you’ve been with us for a long time, you may remember when our research process looked something like this.
Meta feedback involved direct exchanges between users and staff on Meta, a companion site where users discussed Stack Overflow itself, shared ideas, and gave feedback. These conversations led to the Stack Overflow that many of us are familiar with, and we learned a lot while working with this small group of highly motivated users for many years.
Our current approach: scrappy and mixed-method
As our community has grown, our research needs have become more complex. With 50 million people coming to the site every month – all with unique needs and backgrounds – our team and its research approach have evolved to keep up with this ever-growing complexity. We’ve added folks to the team with specialized research skills, like UX researchers and data scientists, while people like product managers, designers, community managers, marketers, and developers contribute research as well.
Together, we conduct mixed-methods research that helps us create a holistic picture of how we’re doing. This collective research seeks to answer questions like:
- What do users need?
- What do customers need?
- Are our ideas and decisions on the right track?
- How are our products and features performing?
This mixed-method approach allows us to hear from a variety of inputs throughout the life cycle of a product or feature, from early conception to post-launch.
However, our small team can’t always conduct all of the research we’d like – which is where the “scrappy” element of our research approach comes in. Generally, the greater the cost and impact of a project, the more research energy we’ll devote to it. This matrix visualizes how we might decide the amount of research that goes into a project.
Today, our research process might look something like this for a high-impact, high-cost project.
Think of each method as a puzzle piece, and the outcome as the completed puzzle. I’ll talk more in detail below about our methods and why we use them.
Method #1: surveys
Surveys are one of our favorite sources of both qualitative and quantitative feedback. We currently use surveys for:
- General site satisfaction. This is used to gauge trends in satisfaction and helps us identify directional improvements. We use this data to inform our roadmap.
- The beginning of a project. This helps us vet early ideas and points us in the right direction. We use survey data, as well as other inputs, to help us identify requirements and shape the design.
- The end of a project. Once we ship, we can see how people are feeling about the new feature and identify changes for future iterations.
⚠️Why surveys? Surveys are a great way to get a high volume of qualitative feedback, which we can use to understand macro trends as well as micro issues for things like usability and copy. We can also target surveys to specific audiences, so that we are gathering data from people who’d be affected by the outcome of a particular project.
Method #2: user research
Qualitative research helps us understand the why and how of user behavior, allowing us to get deeper insights than we might through other methods. Generally, these are semi-structured discussions that take place by video call or in writing – where we’ll talk in-depth with users about a specific topic, feature, or design. We generally conduct these throughout the beginning and middle phases of projects. We use a few different sub-methods, depending on the project:
- User interviews. We have conversations with people who may be affected by a given change. Since Stack Overflow is an online community where groups of people interact with each other, our conversations are not limited to the end user. For example, if we are updating the question asking form, not only do we talk to people asking questions, but also those answering and moderating questions.
- Meta feedback. We monitor Meta for bug reports and small usability/copy issues after shipping changes to the site. Note that this may change based on what the next iteration of Meta looks like, as mentioned in the previous chapter of The Loop.
⚠️Why user research? We get deeper insights than we might through other methods. Like surveys, we can target user research to specific audiences, and will talk to a range of groups affected by a given product change.
Method #3: quantitative data
While qualitative data helps us understand why, quantitative data helps us understand how many. We use data analysis and A/B tests to provide insight into how our decisions scale, as well as how changes we make contribute to site usage and overall performance goals. Sub-methods include:
- General data analysis. This is used to understand patterns in site usage across various user segments.
- A/B tests. We test as much as we can, particularly areas that impact core interactions on the site.
⚠️Why quantitative data? Statistical analysis allows us to see how (or if) earlier research insights scale and how the changes we make impact top-line performance goals.
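The post doesn't describe the team's actual statistical tooling, but the kind of check behind an A/B test is easy to sketch. Below is a minimal two-proportion z-test in Python, using entirely hypothetical numbers (the metric name and rates are illustrative, not Stack Overflow data):

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Z-statistic comparing two conversion rates, using a pooled
    proportion under the null hypothesis of no difference."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: 4.0% vs 4.6% completion rate on some flow,
# 10,000 users in each arm.
z = two_proportion_z(400, 10_000, 460, 10_000)
print(round(z, 2))  # |z| > 1.96 is significant at the 5% level
```

A statistic like this only tells you whether the observed lift is distinguishable from noise; deciding whether the lift is *worth shipping* still depends on the qualitative inputs described above.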
Method #4: secondary research
Today’s Stack Overflow doesn’t exist in a vacuum. When we consider changes to the site, we want to understand the broader contexts of our users, as well as any historical insights that led us to the current state of the site. This helps ensure that we’re not reinventing the wheel or ignoring hard-learned lessons from the past. Sub-methods include:
- Comparative site analysis. We look at other sites to understand patterns and standards that users are learning offsite. This awareness helps us understand how to make a user’s entry into Stack Overflow as seamless as possible.
- Archive research. We scour Meta and talk to various staff, particularly community managers and developers, to learn how and why a feature arrived at its current state.
⚠️Why secondary research? We can learn from broader offsite patterns as well as research from previous iterations on the product.
How we talk to and learn from our users has changed a lot over the years. By broadening our research approach, we’ve lost some of the trust and familiarity of regular, frequent exchanges with a small, passionate, and generous group of people on Meta. At the same time, we’ve brought more rigor and precision into our research approach, which means we have more confidence when making decisions.
Our research approach is constantly evolving. As Sara and Juan wrote in their latest blog post, we’re expanding our research toolbelt to include a working group (a hand-selected group of folks we’ll gather feedback from regularly). We want to continue not just listening to our users, but improving how we do so.
How does your organization conduct research? How has your organization’s research changed over time, and what have you learned from these changes?
✋Opt-in to our Research email list (must have a Stack Overflow account). You’ll receive invitations to participate in surveys, user interviews, and more. You’ll receive up to a few emails per year and can opt out at any time.
📖Read CHAPTER #0 of The Loop, where Sara and Juan talk about our plans to improve how we listen to users.
I find it quite remarkable that every modern software development methodology suggests getting new features in front of users as fast as possible, collecting feedback, and iterating, while SO does the opposite.
Actually, while “move fast and break things” may be trendy in niche markets (mostly web and social), it’s not widespread and it’s far from obvious that it’s “good”.
“This help us” → “This helps us” (though it is not entirely clear what “this” refers to. (Survey) data? (an uncountable noun). If it is “surveys”, then it is inconsistent with the first bullet point (that should be “They are used” then)).
It’s not inconsistent.
> We currently use surveys for [..] The beginning of a project. This helps us….
It’s the act of using surveys that helps them.
Re “This helps ensure that we’re not reinventing the wheel or ignoring hard-learned lessons from the past”: The first Stack Overflow podcast series (started in 2008) is a very good source for the research that went into designing Stack Overflow. For example, everybody (pun intended), including new Stack Overflow staff, should know Clay Shirky’s work inside out.
Many users want some kind of forum (maybe because they don’t know any better), and knowing the reasons for the design of a Q&A site like Stack Overflow could prevent turning Stack Overflow into a forum (though this does not preclude a (new) fifth place (chat being the fourth) that has some features of a help desk).
As Jeff Atwood said on the podcast (possibly paraphrased), “I am done with forums!”
en.wikipedia.org/wiki/Clay_Shirky
stackoverflow.blog/2008/page/17/
I find it odd that meta feedback isn’t solicited until the ship phase. Given some of the missteps we’ve seen lately, it seems like a meta check on “is this a terrible idea” might be at least somewhat useful… and that’s the sort of thing one might wish to have early enough that it’s easy to change course. This isn’t even about me. I personally don’t spend a lot of time on Meta. Obviously, there are advantages to having more objective tools out there, and I’m not saying that “talk with Meta” is the be-all end-all, or anything like it. It’s just that this looks like “We’re trying to cut you out of the loop as much as we can while patting you on the head to make you feel better about it.” It suggests that you’re trying to insulate yourself from the community. People, you make mistakes. Now, meta certainly has its own biases. I’m not going to say that the consensus of meta is going to be right every time. It’s just that there are going to be times where you folks will make errors that Meta could have caught for you, and if you aren’t listening, those errors make it all the way to ship.
The problem is Meta is such a small, vocal minority of SO users. Their feedback is important, but since the majority of users these days are not as deeply involved in the site as Meta users are, the priority should not be on Meta. I do agree, though, that Meta does catch errors and should be listened to and worked with (even with the recent behavior).
Just because not everybody votes, doesn’t mean you stop listening to the people who do.
Excuse me? The people on Meta are the ones vital to SO, and they’re the ones that helped build SO. Not listening to Meta is disrespectful.
The advantage of Meta is that it is such a small, vocal minority of *extremely active* SO users. They’re your power users. I understand that meta isn’t reflective of all users, but it does represent a very important piece of the puzzle.
Decreasing the importance you put on meta to a degree makes sense, but I’m still frustrated by how meta has been cut out of discussions pretty completely and its very valid concerns have been ignored. Please listen to your most active users more.
This is becoming utterly awkward. Never did SO reflect publicly on what went wrong in the past months. Now they claim transparency, while moving away from their own community with processes that solve fictional issues – consistently ignoring the factual problems. Which makes me believe they are not really interested in solving the same issues as the power users experience.
Obviously, there is a corporate vision and the board demands numbers. But to me it’s clear that the capitalization will come from feeding on the existing knowledge base and introducing paid products, rather than fostering what’s already there – a once vibrant community.
I think in the long run, SO as we know it is of little value to the shareholders, and I expect others to jump into the void it will create.
But even with all that in mind, it’s mind boggling to see to what extent SO will go in protecting the mishaps of their leadership. It would have been so easy to just admit they made a mistake with Monica, and SO would even have been in a better place after a rectification.
I can only conclude one of these two things: either management is really just too stubborn, possibly acting on emotion, or there’s a real policy of chasing away too vocal power users – which would make it easier to achieve business goals in the future.
PS: I’m not sure what you mean with “even with the recent behavior”, but this feels like you are blaming your community for caring.
“Opt-in to our Research email list (must have a Stack Overflow account).”
I guess our opinion only matters if we can be monetized in some way. Or did you just forget the other 170+ sites in your network (again)?
buddy this is the stackoverflow blog
This is replacing Meta Stack *Exchange* though. So the fact that this blog only cares about Stack Overflow is just another example, not an excuse.
Is the research going to be done solely on StackOverflow users and then the results/changes applied to all the sites on the StackExchange network? Because I don’t think it makes sense: SO and the other sites of the network are not 1 to 1 in either scale or approach.
“What do users need? What do customers need?” The same things, as they are the very same people. There exist no secret, unknown companies who suddenly pop up on SO out of nowhere after stumbling upon the site by accident, then immediately decide that they want to use Teams, Talent, Advertising, etc. All those businesses originate from your current users. Understanding this is the key to understanding the products and how to sell them. No user trust -> no customer trust -> no business.
Exactly. Feedback on Meta did not just address SE, but also Jobs and other features. And a lot of customers surely got used to the fact that they or their employees can make a meta post about a problem or bad design.
“How we talk to and learn from our users has changed a lot over the years.”
I find this blog technically awful to support any kind of discussions.
I’m afraid StackExchange considers that it’s a feature, not a bug.
Major changes to anything should only be done with metrics and goals in place — otherwise, how do you know if a change worked? Thus, the data needs to be quantitative. I suggest a viable approach would be to use non-quantitative tools to determine where to focus your quantitative efforts.