Unpacking the user research behind Collectives™

We spent over a hundred hours in 1-on-1 interviews and collected thousands of survey responses to better understand what our community and customers wanted to see in this product. This feedback shaped everything from naming to moderation, and helped us evolve and iterate the concept into what it is today.

This week we launched a new initiative: Collectives™ on Stack Overflow. For over a year now, I’ve led our Product Research efforts on this project. In this post I want to share some of the valuable feedback we’ve collected from community members of our site, how their input shaped what we launched, and what we hope to launch in the near future.

Before I dive in, here are some key things to know:

  • We’ve collected a lot of feedback from users and moderators on the site, as well as from potential customers. We’ve spent over a hundred hours in 1-on-1 interviews with users and potential customers who have generously given their time to research sessions. We’ve also collected thousands of survey responses, and spent a lot of time reading through feature requests and feedback about past projects on Meta.
  • While we wanted a representative set of feedback, we had a focus on engaging with active, highly contributing Stack Overflow members and moderators. In particular we had a panel of 12 users and moderators who met with us consistently for the last year, and whose feedback has been invaluable.
  • This isn’t the end of our feedback collection: this is still a Beta release. We will continue iterating on these features based on new feedback and insights. I encourage you all to keep sharing feedback, and, if you’re interested, to also opt in to research participation in your email settings so my team and I can contact you for more detailed feedback.

Background and context

When the idea of companies having ‘public teams’ or ‘spaces’ was first brought up, my mind went to all the ways it could possibly go wrong. But I was also really curious about whether there was any genuine value that organizations could add for our users. This sparked our first big research and design effort: a 5-day design sprint, attended by a wide range of teams from across the company, including product management, community management, product design, engineering, data, and product research.

While Stack Overflow needs paid products in order to operate, we want to do this by adding value to the community, not changing things for the worse. So we settled on the following mission statement for our design sprint: “How might we enable companies to build relationships with the Stack Overflow community in a way that improves Stack Overflow as a resource for developers?”

During the design sprint, there were a few things we settled on early:

  • We should aim not to change the guidelines around what is on topic for Stack Overflow. This means we should be thinking about organizations that are technology providers, and that likely already have technologies with active tags on Stack Overflow.
  • In order for organizations to add value, they would need to directly participate in Q&A in some form - not as a channel for customer support, but as a way for technology providers to share their knowledge in a productive way.
  • While so much of our work has been focused on tearing down the barriers to entry for newer users, this would be an ideal project to also focus on where we can add features for more engaged users: what new ways could they participate?

We eventually came to a potential solution (Collectives) which we felt adhered to these criteria and addressed our initial design sprint mission. The concept consisted of a few parts: badges on user cards for verified employees/topic experts, a way for technology providers to indicate that an existing answer represented best practice, and some kind of page for an organization to showcase their relevant tags and other pertinent information.

Early Feedback

The first time we researched these concepts with users (as part of the 5-day design sprint), we learned a lot. One common theme that emerged was that these concepts could help instill trust in an answer, help answer-seekers get to solutions quicker, and potentially help with the issues of outdated answers and identifying canonical answers when it came to duplicates. On the other hand, we were cautioned not to change things like sort order, so we wouldn’t hurt the democratized nature of the site.

A couple of other highlights from these early research sessions included:

  • When it came to verified employees/topic experts, our research participants saw value here, but only if we maintained relevance. It was critical that these badges were scoped; they should only appear when a user participated in tags they actually had expertise in. We also got positive feedback that this would be a great way to recognize members on the site who continually contribute their expertise to certain topic areas. On the flip side, we learned that the word ‘verified’ had the wrong connotations (thus beginning a nearly 18-month struggle over what to call this role, with at least 15 different potential names...). ‘Verified’ reminded participants of social media, and we have no desire to become a social media platform.
  • The idea of technology providers being able to mark existing answers on the platform as representing best practice was very popular in research. Some users pointed out that this could potentially be a useful way of designating a canonical answer when handling duplicates. Back then we were calling this concept ‘endorsed answers’, which was not a popular term, and one that non-native English speakers felt was particularly unclear (we weren’t having a good run with copywriting!). So we renamed this to ‘recommended answers’, and this is the one part of the concept that hasn’t changed significantly since our design sprint.

At the end of the design sprint, we were still just scratching the surface of what we needed to learn from Stack Overflow users. We had a million questions about how these features might help or hurt the community, what would make a good collective, what additional features might make this idea more complete, etc. So we began a series of what we called ‘research sprints’, which were essentially intensive blocks of focused research, aiming to address our biggest open questions and hypotheses.

Key findings: Articles

The appetite for longer-form content was something we’d heard about in the past, and it was also something we dug into as part of our research sprints. For example, 23.1% of respondents to a survey we ran of Stack Overflow visitors (n=1010) said they believed how-to guides would be a positive addition to the site. We also know that plenty of contributors have tried to squeeze content into a Q&A format when it would really have been better suited to an article. Overall, the feedback we got through surveys and interviews was that, with the right guardrails, this could be a positive addition to the site.

Some users were cautious because of a project called Documentation that we had sunsetted a few years ago. There were several issues we heard about when it came to Documentation, but the ones our users brought up most often were the influx of poor-quality or repetitive content, as well as users being able to gain reputation unfairly. These were, of course, problems we were keen not to repeat. That brought us to our first key decision on this feature: at the time of launch we are limiting Article creation to Recognized Members of a collective. However, we plan to launch a review process where any member of a collective can submit Articles, which will then be reviewed by the Recognized Members of that collective.

Something we discussed in our customer research was that, to make this review process successful, customers should be clear about the type and style of Article that would make a good addition to their collective. Hopefully this will go some way toward addressing the first issue we saw with Documentation, helping to raise the quality bar and ensuring we aren’t just seeing repetition of existing help docs and documentation.

The other factor when it came to quality was making sure voting was part of the mix. In our initial designs we only had an upvote-style button to signal good quality. But through research we heard that users didn’t want to see upvotes without downvotes. So we added a downvote option, mirroring Q&A. That brought us to the other problem we heard about with Documentation: reputation...

I’ll be honest in saying that rep was a topic we got extremely mixed reactions on. Nearly everyone we spoke to had a different take on how we should handle Article rep. So unfortunately, we haven’t found (and probably won’t find) a solution that everyone loves. Some users proposed a new bucket of rep for Article contributions, some encouraged us to offer more rep for Article creation to reflect the added effort it takes to write an Article, and others didn’t think rep should be part of the feature at all.

Key findings: Customer research

Another topic we discussed at length with users was how organizations would interact on the platform. We spoke to some who had attempted to facilitate developer support on Stack Overflow before, and to users who had seen these efforts unfold. Some of the key things we learned from these discussions were that participation from organizations still had to be deeply technical and not promotional, and, perhaps most importantly, that organizations needed to be consistently active for us to actually see a positive community impact.

This spurred several rounds of research with potential customer partners. We aimed to understand if they would be willing to make this kind of commitment, and if so, how we could facilitate ongoing participation with this initiative.

This research led to two things. The first was helping our team clearly understand what type of organization we wanted to work with: ones that would take the time to understand the site, put dedicated resources towards participation, and not see it as purely an outlet for marketing.

The second was that we would need to build out dashboards to help Recognized Members target where to participate on the site. There are obviously a million ways someone could contribute, and we wanted to help make sure Collectives would enhance the community and fill in the gaps. Part of the dashboards we designed includes curated lists to help focus participation, e.g. one of the lists is ‘questions over 30 hours old without an answer’. We hope these curated dashboards will help our customers enrich their community on the site.
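
To make the idea of a curated list concrete, here’s a minimal sketch (in Python) of the kind of filter behind a list like ‘questions over 30 hours old without an answer’. The field names, types, and helper function are assumptions for illustration only, not our actual data model or implementation:

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    @dataclass
    class Question:
        # Hypothetical fields for illustration -- not Stack Overflow's real schema.
        id: int
        title: str
        tags: list[str]
        created_at: datetime
        answer_count: int

    def unanswered_over_30_hours(questions: list[Question], collective_tags: set[str]) -> list[Question]:
        """Return questions in the collective's tags that are over 30 hours old and still unanswered."""
        cutoff = datetime.now(timezone.utc) - timedelta(hours=30)
        return [
            q for q in questions
            if q.answer_count == 0
            and q.created_at < cutoff
            and collective_tags.intersection(q.tags)  # question touches at least one of the collective's tags
        ]

The real dashboards obviously work against live data and cover more lists than this, but the shape is the same: take the collective’s tags, apply a freshness and answer-count filter, and surface the results where Recognized Members will see them.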

Key findings: Moderation

Besides customers and users, there was also another group we were keen to speak to: Stack Overflow Moderators. We were lucky enough to be able to consistently interview several mods throughout our discovery process. These interviews brought a unique perspective to our designs, and helped us consider rules, guidelines, and community health, as well as assess possible abuse vectors.

Some key topics we covered with participating Stack Overflow moderators were how moderation of new features should be handled, and whether there should be any new rules. One big takeaway from these interviews was that the mod team should moderate the majority of new features, with the option to pull in a Community Manager from our staff where they feel it’s appropriate.

On the topic of new rules, we didn’t end up adding much. It was agreed that the new content types should still adhere to the existing Stack Overflow rules, licenses, and on-topic guidelines. However, the one thing we were encouraged to do was to provide our customers with guidelines and help docs to set their expectations about community norms and rules. We’ve done many demos and sessions with our launch customers, written several new help docs, and will continue adding to these as the product evolves.

There were dozens more topics we discussed in research, including: our ongoing saga with what to name the new roles and features, how to handle potentially off-topic questions, how to prevent vote fraud, assessing several new content types, possible incentives other than rep, notifications, and so much more. The time and energy that users and moderators have put into sharing their opinions and giving these concepts careful thought and consideration is amazing. If we had launched this initiative without any feedback it would have looked very different, and I am personally really happy about just how much we have learned from community members over the last year and a bit.

Check out the Go Language Collective and the Google Cloud Collective.

For those who want to understand what Product Research looks like at Stack Overflow: through qualitative and quantitative research, we work to understand the needs, motivations, and pain points of the community and of customers to help guide the direction of our products. This covers both what products should be built and how they should be executed. We see ourselves as advocates for the users, and often partner with Community Management, Product Management, Product Design, and Data.
