Every platform team has a war story about a tool they spent months building that ended up underused, misunderstood, or abandoned. Not because it was poorly engineered, but because it didn’t fit naturally into how product developers worked or because it solved a problem no one really had.
I’ve seen this happen more than once. It’s not about bad intentions or lack of technical rigor. It’s that discovery was skipped. Discovery means understanding how engineers actually work—their workflows, pain points, and the problems that genuinely slow them down—before building solutions. Without it, platform teams end up solving problems that seem important from the outside but don't match the reality of daily development work. And when discovery is missing, platform work starts to drift from its real purpose: empowering engineers to deploy working software faster and with confidence.
That’s why I’m excited about something we’ve been doing on the Platform Engineering team at Stack Overflow: a practice borrowed from product management called continuous discovery.
What is continuous discovery?
The term comes from product leader Teresa Torres, who defines it as the habit of weekly touchpoints with customers to validate ideas, test assumptions, and reduce risk in product decisions. It’s a structured way to stay close to the people you're building for so you’re not guessing. The idea is to make the discovery process itself more agile and responsive, rather than relying on a heavy, infrequent research phase.
In product teams, this is second nature. But in platform work? It’s less common. Platform teams often default to solution thinking, drawing on previous experience and heuristics rather than starting from user problems. We scale by building reliable infrastructure, automating processes, and shipping DevOps tools. But we don't always validate whether these tools solve real problems. That can lead to solutions that sit unused, accumulate technical debt, and eventually rot because they never found genuine adoption. That’s why continuous discovery has become a useful framework for us.
Why platform teams should embrace continuous discovery
Platform teams serve internal developers, which makes us both builders and enablers. But internal developers are still customers, so we’ve learned to treat the platform as a product. And in fast-moving orgs where innovation is constant and disruptions are common, developer needs evolve just as quickly. New architectures, shifting priorities, and emerging tools all change what “good” looks like. Continuous discovery helps us keep up. It allows us to:
- Avoid building the wrong thing.
- Discover real problems hiding behind noisy requests.
- Make smarter bets with limited time and resources.
- Increase adoption by creating tools that fit.
How we apply continuous discovery at Stack Overflow
Here’s what the concept looks like in practice on Stack Overflow’s Platform Engineering team:
1. Engineers as customers
We treat internal teams as customers, not just stakeholders. We spend time understanding their context before writing code. As we plan quarterly, we focus on outcomes and engage our partner teams early, starting with the problems they’re facing, not just the solutions they’re asking for—what Teresa Torres calls focusing on opportunities rather than jumping straight to solutions. From there, we explore multiple ways to address those problems, weighing trade-offs before we commit. It helps us avoid locking into the first idea and gives us space to discover what’s most impactful, not just most requested.
2. Lightweight discovery loops
We regularly check our assumptions with partner teams rather than building in isolation—what Torres calls assumption mapping and testing. There’s a lot of collaboration with partner teams throughout the whole process; they’re not just informed after the fact. The transparency culture makes it easier to surface uncertainties early, challenge assumptions, and spot gaps before they become real problems. It also builds trust, so when we do make trade-offs, teams understand the why, not just the what. This ongoing visibility also makes it easier to gather feedback as we build, so if we ever need to pivot, we can do it with confidence and context.
3. Measuring what matters
Adoption is a key metric, but so is friction—the small moments of confusion, extra steps, or workarounds that signal a tool isn't quite fitting into developers' workflows. We look for signs like: Do people need to ask for help to use basic features? Are they building workarounds instead of using our tool directly? How much context-switching does our solution require? Are users reaching out with new ideas or just complaints?
If our platform is invisible in the best way (freeing up time and reducing toil), we know we’re doing something right. A good platform feels like infrastructure—developers use it without thinking about it, it handles complexity behind the scenes, and it makes devs’ existing workflows faster rather than requiring them to learn new ones. A non-optimized platform, on the other hand, makes itself known through constant friction: developers have to remember special commands, work around limitations, or spend time figuring out how to make it fit their needs. High adoption with low friction signals we've built something that genuinely improves the developer experience, while high adoption with high friction might just mean people are forced to use a tool that doesn't quite work for them.
To get clarity about our effectiveness, we gather data; data lets us make sense of the world. We track the number of developers asking for help with things we assumed we had solved and lead time for code changes—these tell us about friction before it becomes a bigger adoption problem. We monitor usage patterns to see if tools are becoming part of daily workflows or just occasional necessities. We also pay attention to what we're not measuring—ongoing developer interviews help us understand the qualitative side that pure usage metrics might miss. When these signals align—low friction indicators, high organic usage, and positive developer feedback—we know we've hit the mark. When they don't, that's our cue to dig deeper and iterate.
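To make those signals concrete, here's a minimal sketch in Python of the sort of calculation behind two of them: lead time for changes and a count of basic-usage help requests. The data and field names (`merged_at`, `kind`, and so on) are made up for illustration; any real pipeline would pull these from deployment and support systems.

```python
from datetime import datetime
from statistics import median

# Hypothetical change records: when a commit was merged and when it reached production.
changes = [
    {"merged_at": datetime(2024, 5, 1, 10, 0), "deployed_at": datetime(2024, 5, 1, 14, 30)},
    {"merged_at": datetime(2024, 5, 2, 9, 15), "deployed_at": datetime(2024, 5, 3, 11, 0)},
    {"merged_at": datetime(2024, 5, 6, 16, 0), "deployed_at": datetime(2024, 5, 6, 17, 45)},
]

# Lead time for changes: hours from merge to production deploy.
lead_times_hours = [
    (c["deployed_at"] - c["merged_at"]).total_seconds() / 3600 for c in changes
]
print(f"Median lead time for changes: {median(lead_times_hours):.1f}h")

# A simple friction signal: how many help requests are basic "how do I...?" questions
# about a tool we assumed was already self-explanatory.
help_requests = [
    {"tool": "deploy-cli", "kind": "how-do-i"},
    {"tool": "deploy-cli", "kind": "feature-idea"},
    {"tool": "deploy-cli", "kind": "how-do-i"},
]
basic_usage_questions = sum(1 for r in help_requests if r["kind"] == "how-do-i")
print(f"Basic-usage help requests this period: {basic_usage_questions}")
```

A rising count of basic-usage questions alongside a growing lead time is the kind of combined signal that prompts us to dig deeper in interviews rather than assume adoption is going well.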
Lessons we've learned
- Discovery is not one-and-done. Teams evolve. So should our understanding of their needs. What worked for a team six months ago might not fit their current priorities. Regular check-ins help us stay aligned with how teams’ development practices and pain points shift over time.
- Feedback ≠ requirements. Sometimes what people ask for isn't what they need. Dig deeper. Instead of just accepting feature requests at face value, we ask developers to tell us a story about the last time they encountered the problem they want us to solve—what Torres calls story-based interviewing. This reveals the actual context behind the request, often showing us that the real need is different from the initial ask.
- Shipping fast is great. Shipping right is better. Discovery helps us do both.
What makes this hard
Platform teams are often judged by stability, not creativity. Balancing discovery with uptime and reliability takes effort. So does breaking out of the “tickets and delivery” cycle to explore problems upstream. But the teams that manage it? They build platforms that people want to use, not just have to use. Start by blocking time for discovery in your sprint planning, measuring both adoption and friction metrics, and most importantly, talking to your users periodically rather than waiting for them to come to you with problems.
Cultural shifts like this take time because you're not just changing the process; you're changing what people believe is acceptable or expected. That kind of change doesn't happen just because leadership says it should, or because a manager adds a new agenda item to planning meetings. It sticks when ICs feel inspired and safe enough to work differently and when managers back that up with support and consistency. Sometimes a C-suite champion helps set the tone, but day-to-day, it's middle managers and senior ICs who do the slow, steady work of normalizing new behavior. You need repeated proof that it's okay to pause and ask why, to explore, to admit uncertainty. Without that psychological safety, people just go back to what they know: deliverables and deadlines. Culture shifts when enough people start believing that a different way of working is not just allowed, but valued.
Getting started
When we first started leaning into continuous discovery, it wasn’t a formal initiative—it began with a few focused interviews across the internal teams we support. We already knew our goal: help them ship faster and more reliably. But we needed to clarify how to accomplish it. So instead of assuming what our internal teams needed, we asked. We wanted to make sure we were solving problems they actually had, not just ones we thought they had.
We kept it simple—conversations, not surveys. We asked about friction, blockers, and workarounds. We listened for the moments that made their work harder than it needed to be. Those interviews helped us dig past surface-level requests and understand the real constraints shaping their experience. We documented what we learned and used it to shape our roadmap. Not as a wishlist, but as a reflection of what teams were actually struggling with.
Before investing fully in any solution, we tested ideas with a few teams. Sometimes that meant building a prototype. Other times, it was just walking through a proposed change and gathering feedback. That lightweight loop (listen, test, adjust) gave us the confidence to move forward and gave our partner teams confidence that we were building with them, not just for them.
Final thoughts
At the end of the day, platform engineering is about outcomes, not outputs. The best platforms are the ones developers barely notice because they work, they fit, and they evolve alongside their users. Continuous discovery gives us a way to get there by solving for bottlenecks that aren’t code. It’s a practice I’m excited to keep deepening with my team, one conversation at a time.