Releases

What’s new at Stack Overflow: March 2026

All that was new on Stack Overflow last month, including the redesigned Stack Overflow (now in beta) and open-ended questions (now available to all users), plus a shoutout to the community members earning the Populist badge.

What’s new at Stack Overflow: February 2026

This month, we’ve shipped several improvements to AI Assist, opened Chat to all users on Stack Overflow, rolled out custom badges across the network, and launched one of the first community-authored coding challenges.

What’s new at Stack Overflow: January 2026

For this first edition of the new year, we’re taking a step back to highlight some of the most impactful features shipped over the last year and how they can help you start 2026 strong.

Your 2025 Stacked: A year of knowledge, community, and impact

From tough questions to standout answers, your team built a lot in 2025. Your 2025 Stacked brings those contributions together in one shareable snapshot—celebrating the people, posts, and topics that defined your year in Stack Internal.

Latest articles

How everyone and anyone can use AI for good

There are big hitters in the AI space that use this tech for humanitarian and environmental good—from start-ups fighting climate change to voice recognition experts diagnosing diseases. But you don't need to be backed by AWS or Microsoft to do good. In part two of this series, we dive into how anyone can use AI for good.

Is anyone using AI for good?

In a world where AI is replacing human workers, using up energy and water, and deepening disconnect, is AI for humanitarian good even possible? The answer is yes. In the first part of this two-part series, we're taking a look at just a few AI do-gooders and what they're doing to fight climate change, make healthcare more accessible, and help their communities.

Around the web
m4iler.cloud

Let's Get Physical

OAuth2 means nothing if you forget to lock your doors.

happel.ai

Hacking Super Mario 64 using covering spaces (+ hyperbolic geometry)

“But first we will need to cover parallel universes.”

lucumr.pocoo.org

AI and the Ship of Theseus

Can you copyright something you didn’t create?

jyn.dev

Remotely unlocking an encrypted hard disk

Mission Impossible: break into your old ThinkPad that doesn't charge.

amplifying.ai

What Claude Code actually chooses

What the AI you choose chooses is not your choice.

plugsocketmuseum.nl

The museum of plugs and sockets

How many breakers could you blow if you plugged all of them in at once?

line-mode.cern.ch

The first webpage ever published

There's an alternate timeline where we're all in the gophersphere instead of the World Wide Web.

youjustneedpostgres.com

You just need Postgres

…except for the cases where you’d need to use other databases, too.

chrisloy.dev

AI makes interfaces disposable

Maybe the UI was really the agentic friends we made along the way.

victoriaritvo.com

Semantle solver

Ask yourself—is it more work to create a Wordle solver than to just solve the Wordle?

spawn-queue.acm.org

What every experimenter must know about randomization

Your randomization is not so random after all.

rkirov.github.io

Learning Lean: Part 1

The sequel to the beloved “If You Give a Mouse a Cookie” is called “If You Give a Mathematician an IDE.”

Want updates to your inbox?

Every week we’ll share a collection of great questions from our community, news and articles from our blog, and awesome links from around the web.


Issue 319: Dogfooding your SDLC

Sometimes, the world of tech can feel a bit like a sci-fi movie—building AI tools with AI tools, AI self-help books, museums of old technology and webpages showcased for our viewing pleasure. If our SciFi Stack Exchange community is to be believed (and I want to believe), a movie about our current life would be considered Speculative Fiction, which…yeah, that sounds right. What’s not so speculative is that this Overflow is chock-full of interesting stories and answers for you from around the web.

On our pod, we’re joined by Thibault Sottiaux, OpenAI’s engineering lead on Codex, to talk about dogfooding the future of the agentic SDLC. Microsoft’s Marcus Fontoura sat down with us to discuss his new book, Human Agency in the Digital World, an “AI-era self-help book” about giving humans back the control in the tech revolution. Plus, we’ve got a new, redesigned Stack Overflow for your viewing pleasure, and a deep dive on how we’re giving content creators back their agency through our partnership with Cloudflare.

But not everything in the future needs to be new school…sometimes old school still reigns supreme. At least that’s the case with our Q&A with pompelmi, the open-source project fighting one of the oldest kinds of attack vectors: file uploads. That’s why we’re making sure you get plenty of the vintage in this week’s not-so-speculative newsletter—like the world’s first webpage and a museum filled with every kind of plug and socket in the world.

And in a world of the speculative, one thing is proven: Stack sites are where you go for answers to every question you have, new or old. Why are Olympic athletes so much faster than normal humans going both forward AND backwards? Why doesn’t anything rhyme with orange? How far in the future can a commit message go? No speculation needed. We have all of that and more in the links below.

Issue 318: The year of the AI developer

Happy New Year! We’ve just entered the year of the Fire Horse on the Lunar calendar. Now, not everything needs to be a metaphor for the AI revolution, but if the Lunar forecast is correct, the year of the Fire Horse will bring rapid transformation and intense energy—and it sure does feel like a Fire Horse year to us.

On the pod, we’re joined by Shireesh Thota from Microsoft to chat about all things Azure databases, including how the architecture will change with AI. Wikimedia Deutschland’s Philippe Saade sat down with us to discuss how they vectorized 30 million entries during their Wikidata Embedding Project to fight scraping and meet AI needs—a very Fire Horse move. To prove we’re not just horsing around about scraping, we’ve got an episode of Leaders of Code with Cloudflare’s Will Allen that dives into how we partnered with Cloudflare to launch a pay-per-crawl model.

And it’s not just us feeling the heat from the Fire Horse. From the web, we have the story of a Fitbit and a sleepless dev who realized AI is transforming how we interact with interfaces. Even the ancient art of mathematics is feeling a change—one PhD mathematician/programmer is learning Lean to keep up with the shifts AI is bringing to theoretical mathematics. One thing will always stay the same, though: developers love to solve problems the hard way—at least that’s what the story on creating a solver for a Wordle variant sounds like to us.

But even in the year of the Fire Horse, we know you’re looking to us for trusted answers. And actually, the AI trust gap is a big problem for developers; we have the deep dive on the blog. So, we’re going to take a page out of the Metal Ox, known for honesty and dependability, and end this Overflow with our trusty and dependable Q&A. What does it mean for something to be “natural”? Is it wrong to ask math people to pick a lane? Are there quokkas in space? Is it just cope to pretend you know Gen Alpha slang?

We’ll try to meet your astrological expectations of us in issue 318—luckily, things with the number 3 bring good fortune for the Fire Horse. That good fortune must be starting already, because auspiciously for you, we’ve got all those links and more ready below.

Issue 317: The moral quandary of AI

This is probably not news to you, but the tech world is having a moral quandary lately. Wherever you stand in the ethical and philosophical AI discourse, we’re right beside you with our chins on our fists à la The Thinker.

On the pod, Professor Tom Griffiths from Princeton’s AI Lab joins us to detail the philosophical and mathematical history of understanding the human mind, and how these discoveries underlie our development of AI. We also chat with Deepgram’s Scott Stephenson on how they’re advancing voice AI technology, and where voice cloning fits into the ethical dilemmas of this day and age.

On the blog, we’re taking an optimistic look at the philosophical AI conundrum. For instance—what if AI will actually create more developer jobs in the long run? We’ve got a piece this week covering how AI’s need for innovation and code will lead to more creative opportunities for developers in every layer of tech, from hardware to application. We’re also wondering—is anyone using AI for good? We’re answering that in a two-part deep dive on companies using AI for humanitarian good, plus how the everyday you and I can use this tech to make the world a better place.

But not everyone around the web is as optimistic as our blog this week. We’ve got the story of how one engineer had an AI agent write a hit piece on them after their public critique of the agent’s code—certainly a valid reason for pessimism. But regardless of your outlook on AI, morally or otherwise, the tech is here, which is why this week we’ve included the outcome engineering (o16g, for those who want to compress the middle of long jargon) manifesto that lays out the 16 rules for the next chapter of software engineering.

And not every ethical and philosophical debate needs to be about AI—there are plenty of other moral arguments to consider from this week’s questions. Is it immoral for your D&D character to attack a solar body if it’s malicious? Is it wrong for Hollywood to label everything as a “true story” if only part of it is true? Where is the line between working and doodling if it’s all in CSS? Will I condemn the universe if I open a portal with my mind? They probably didn’t teach you any of that in Philosophy 101, but don’t worry—we have all of that and more in the links below.

Issue 316: A technological 2-for-1

It’s time for a classic Stack Overflow Q&A. Q: What’s better than one interview from the floor of re:Invent? A: FOUR interviews from the floor of re:Invent. Also, this question is now closed for being off-topic. Okay, okay, fine, let’s try to stay on-topic—namely the topic of AI.

On the pod this week, we’re bringing you chats with Inception’s Stefano Ermon on the power of diffusion models and Roomie’s Aldo Luevano on building physical and software AI with a purpose and real ROI. We’re also joined by Pathway’s Zuzanna Stamirowska and Victor Szczerba to dive into the world’s first post-transformer frontier model, and Mary Technology’s Rowan McNamee to chat about LLMs in the legal world—we’ll have to ask him if this week’s 4-for-1 podcast deal is so good it should be illegal.

Speaking of The Law, we consult the Law Stack Exchange as to whether social media grifting is grifting at all—plus the answer to your burning question on what happens to rocket boosters that don’t burn up. Not everything is rocket science, though. Take neural networks, for instance, especially since we have a visualizer for you this week that’ll demystify those mystifying robot brains. Let’s stay on-topic and continue our demystification—we’ve got the story of one dev’s attempt to find what’s on the other side of Google’s 8.8.8.8 DNS. Maybe we owe all the mysteries around the tech we build to the complexity we’ve been adding to it, which is probably why one of the stories from the web this week is on Wirth's Law and lean software.

Oh no, we’ve gone off-topic again. Well, we tried our best. And don’t worry, we’ve got plenty of on-topic and not-closed questions to round out this week’s off-topic Overflow. Is impersonation the highest form of flattery if you’re impersonating a Windows user with lower privileges? Can you “just bumping this thread!” and “quick follow-up on this!” your way into faster code review? Should you let AI kill your darlings if your darlings are all trash?

All of those wonderfully on-topic answers are ready for you in the links below.