Community is the future of AI
Throughout history, great thinkers have made predictions about how new technology would reshape the way in which humans work and live. With every paradigm shift, some jobs grow, some change, and some are lost. John Maynard Keynes wrote in 1930 that new technology meant humans would eventually be working 15 hours a week, and that the main challenge would be what to do with all our free time. So far, predictions of this nature haven’t exactly come true. As new technology empowers us, we push ourselves to new heights and reach for previously unattainable goals.
Over nearly 15 years, Stack Overflow has built the largest online community for coders to exchange knowledge, a place where anyone with an internet connection can ask or answer questions, free of charge, and learn from their peers. Stack Overflow for Teams, our enterprise SaaS product, is trusted by over 15,000 organizations to serve as their internal knowledge bases. With the recent advent of dramatically improved artificial intelligence, many industries are wondering how technologies like ChatGPT will change their business. For software development, the answer seems more immediate than most. Even before the latest wave of AI, a third of the code being written on popular code repositories was authored by an AI assistant.
Today, sophisticated chatbots, built on top of cutting-edge large language models (LLMs), can write functional code for a website based on nothing more than a photo of a rough sketch drawn on a napkin. They can answer complex queries about how to build apps, help users debug errors, and translate between different languages and frameworks in minutes. At Stack Overflow, we’ve had to sit down and ask ourselves some hard questions. What role do we have in the software community when users can ask a chatbot for help as easily as they can another person? How can our business adapt so that we continue to empower technologists to learn, share, and grow?
It’s worth reflecting on an important property of technological progress. The Jevons Paradox shows us that, as innovation allows us to do more, we settle on a new normal, moving the goalposts for what we expect of people and organizations, then competing to see who can find new ways to pull ahead of the pack. For knowledge work, as the cost of an action diminishes, we often do more of it. Abstracting away repetitive or tedious tasks frees technologists up to make new discoveries and drive innovation.
If new AI systems make it possible to create software simply by chatting with a computer, my prediction is that, far from the job of programmer disappearing, we’ll end up with millions of new software developers, as workers from fields like finance, education, and art begin making use of AI-powered tools that were previously inaccessible to them. We are enthusiastic about welcoming this next generation of developers and technologists, providing them with a community and with solutions, just as we have for the last 15 years. We’ve got a dedicated team working on adding GenAI to Stack Overflow and Stack Overflow for Teams and will have some exciting news to share this summer.
I’m not alone in thinking AI might lead to an explosion of new developers. I’ve heard similar sentiments expressed recently by Microsoft founder Bill Gates, by Geoff Hinton, the godfather of the neural network approach that produced today’s AI revolution, and by Stephen Wolfram, a pioneer across computer science and mathematics. Each sees in today’s AI the potential for the loss of certain jobs, yes, but also, if history is a guide, a future in which a great variety of more highly skilled work becomes available to an even larger group of people. Just as tractors made farmers more productive, we believe these new generative AI tools are something all developers will need to use if they want to remain competitive. Given that, we want to help democratize knowledge about these new AI technologies, ensuring that they are accessible to all, so that no developers are left behind.
I talk to developers of varying experience levels all of the time, and I’ve been hearing anecdotes of novice programmers building simple web apps with the help of AI. Most of these stories, however, don’t begin and end with an AI prompt. Rather, the AI provides a starting point and some initial momentum, and the human does additional research and learning to finish the job. The AI can debug some errors, but is stymied by others. It can suggest a good backend service, but often can’t solve all the points of friction that arise when integrating different services. And of course, when a problem is the result not of instructions from a machine, but human error, the best answers come from other people who have experienced the same issues.
For more experienced programmers, AI will be an amplifier of their existing skill, making them more ambitious in their projects. The result, as Jevons would predict, is that they spend more time with AI, but also more time creating new ideas, researching new topics, and asking new questions that had not occurred to them before. They feel empowered to reach farther beyond their traditional skillset and to push the boundaries in terms of the kind of work they want to take on.
We are excited about what we can bring to the fast-moving arena of generative AI. One problem with modern LLM systems is that they will provide incorrect answers with the same confidence as correct ones, and will “hallucinate” facts and figures when doing so fits the pattern of the answer a user seeks. Grounding our responses in the knowledge base of over 50 million asked and answered questions on Stack Overflow (and proprietary knowledge within Stack Overflow for Teams) helps users understand the provenance of the code they hope to use. We want to help coders stay in the flow state, creating with the latest tools while remaining confident that they can document and understand the source and context of the code being generated.
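To make the grounding idea concrete, here is a deliberately minimal sketch. Everything in it is a hypothetical stand-in, not Stack Overflow's actual system: the two-entry corpus, the bag-of-words cosine scoring (a real retriever would use semantic embeddings), and the `stackoverflow.example` URL format. The point it illustrates is that every generated answer carries a link back to the human-written source it was drawn from.

```python
from collections import Counter
import math

# Tiny, invented Q&A corpus standing in for a real knowledge base.
corpus = [
    {"id": 101, "question": "How do I reverse a list in Python?",
     "answer": "Use list.reverse() in place, or reversed() for an iterator."},
    {"id": 102, "question": "How do I merge two dicts in Python?",
     "answer": "Use the | operator (Python 3.9+): merged = a | b."},
]

def tokens(text):
    """Lowercased words with question marks stripped."""
    return text.lower().replace("?", "").split()

def score(query, doc):
    """Cosine similarity over word counts -- a toy stand-in for an
    embedding-based retriever."""
    q, d = Counter(tokens(query)), Counter(tokens(doc))
    dot = sum(q[w] * d[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(
        sum(v * v for v in d.values()))
    return dot / norm if norm else 0.0

def grounded_answer(query):
    """Return the best-matching answer together with its provenance."""
    best = max(corpus, key=lambda entry: score(query, entry["question"]))
    # The URL scheme is made up; what matters is that the answer is
    # traceable to the human-authored post it came from.
    return {"answer": best["answer"],
            "source": f"https://stackoverflow.example/q/{best['id']}"}

print(grounded_answer("reverse a list"))
```

A production system would do this at scale with vector search, but even this sketch shows the contrast with a bare LLM: the answer arrives with a citation the user can check.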
Community and reputation will also continue to be core to our efforts. If AI models are powerful because they were trained on open source or publicly available code, we want to craft models that reward the users who contribute and keep the knowledge base we all rely on open and growing, ensuring we remain the top destination for knowledge on new technologies in the future.
AI systems are, at their core, built upon the vast wealth of human knowledge and experiences. They learn by training on data – for example open-source code and Stack Overflow Q&A. It is precisely this symbiotic relationship between humans and AI that ensures the ongoing relevance of community-driven platforms like Stack Overflow. Allowing AI models to train on the data developers have created over the years, but not sharing the data and learnings from those models with the public in return, would lead to a tragedy of the commons. It might be in the self-interest of each developer to simply turn to the AI for a quick answer, but unless we all continue contributing knowledge back to a shared, public platform, we risk a world in which knowledge is centralized inside the black box of AI models that require users to pay in order to access their services.
As the AI landscape continues to evolve, the need for communities that can nurture, inform, and challenge these technologies becomes paramount. These platforms will not only offer the necessary guidance to refine AI algorithms and models but also serve as a space for healthy debate and exchange of ideas, fostering the spirit of innovation and pushing the boundaries of what AI can accomplish.
Our thesis on community as the center of a safe, productive, and open future for AI also offers some exciting prospects for our business. Stack Overflow for Teams, our enterprise, private version of Stack Overflow, helps to power a community-driven knowledge base inside of 15K+ organizations like Box, Microsoft, and Liberty Mutual. Decades of institutional knowledge, shaped and curated by subject matter experts and experienced teams, allows the employees at these organizations to more easily collaborate, improving productivity and trust.
Incorporating generative AI technologies into the organizations using Stack Overflow for Teams will allow us to layer a conversational interface on top of this wealth of information. We believe this could lead to tremendous productivity gains: new hires onboarding more quickly, and developer workflows speeding up as users quickly ask questions and retrieve answers that tap into the company’s history, documentation, and Q&A.
The example above is just one of many possible applications of GenAI to our Stack Overflow public platform and Stack Overflow for Teams, and they have energized everyone at our company. We’ll be working closely with our customers and community to find the right approach to this burgeoning new field and I’ve tasked a dedicated team to work full time on such GenAI applications. I’ll continue to share updates through channels such as my quarterly CEO blog, but I’ll be back in touch soon to announce something big on this topic. In the meantime, thank you to our community and customers for continuing to help us on our mission to empower the world to develop technology through collective knowledge.
Here is the community’s response: https://meta.stackexchange.com/q/388401/357051
Reflecting the disconnect between the Blog and StackExchange, it appears that comments here do not use the formatting that SE comments/Q&A use. Bummer.
The discussion there reflects my own concerns with large-scale uncritical adoption of AI technologies. I’m really worried that the people making these decisions are completely disconnected from the people who feel the effects of them, namely human moderators. I’ve heard that Wikipedia editors are now having trouble keeping up with the wave of AI-generated content, and if that ever goes, it seems like a pretty big domino.
Stack Overflow has already started to crumble under the weight of the outsourcing economy, with low-quality questions and answers drowning out the more focused, authoritative content. Adding AI generation to the mix really feels like the final push over the edge.
I’m using the term “AI,” but that in itself is a bit dangerous, because it doesn’t fully describe these systems, and leaves their capabilities up to the reader’s imagination. The current systems are not so much thinking machines as malleable photographs of their training data. I would characterize them as “answer prediction algorithms.” ChatGPT doesn’t really answer your question so much as it answers the general question, “if I found a question like this on the internet, what would the corresponding answer look like?” That’s a useful tool for a lot of problems, and in good hands could be a game changer, but without that explicit understanding, it’s really easy to drop it into places where it could have really harmful results.
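The “answer prediction” framing above can be made concrete with a deliberately tiny sketch: a model that continues a prompt with whichever word most often followed the previous word in its training text. The training corpus here is invented, and real LLMs predict over learned representations rather than raw word pairs, but the shape of the mechanism — continuation, not comprehension — is the same.

```python
from collections import Counter, defaultdict

# Made-up "training data" for the toy model.
training_text = (
    "to reverse a list call reverse "
    "to sort a list call sort "
    "to reverse a string use slicing"
)

# Count which word follows each word in the training data.
following = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    following[prev][nxt] += 1

def predict(prompt, length=4):
    """Greedily extend the prompt with the most common next word.

    There is no understanding here: the model just replays the most
    frequent continuation seen in training, which is the commenter's point.
    """
    out = prompt.split()
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(predict("to reverse a", length=2))
```

Ask it something its training data resembles and it sounds fluent; step slightly outside that distribution and it confidently produces plausible-looking nonsense, with no way to tell the two apart from the inside.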
I encourage everyone to read the paper “On the Dangers of Stochastic Parrots” by Bender, Gebru, et al.: https://dl.acm.org/doi/10.1145/3442188.3445922
Lmao. AI is not a stochastic parrot. It is sentient.
The article goes to great lengths describing the flaw in people’s belief that the LM is gaining understanding. Parrots do not understand what you’re telling them or what they are saying, even though they may seem to.
Last year: blockchain. This year: AI.
This article actually made me feel a bit hopeful for the first time when it comes to AI. It will all depend on how most people apply this new tech, but the easier entry to the industry for people from other branches is actually welcome. Perhaps instead of a job loss, there will be a job transition.
Let’s face it, people DO NOT like to learn anything new, let alone technology. But all in all, once it does happen, things tend to roll a lot smoother. It’s difficult, though; at least I know it is for me. After a decade of absence I decided to resurface and go again. All the frustration and previous letdowns were further compounded by seven years of incarceration with zero tech and zero trust, so learning now is at my all-time low. But hey, there’s only one way UP. See you on the Dark Side of the MOON !!!!
“people do not like to learn new anything let alone technology”? Do you realize where you posted that? On a blog linked to a learning site for programmers.
…in which direction? I wouldn’t say it’s both ways.
And the “Community” are the People.
The way I could see AI helping on Stack Exchange sites, is to help identify duplicate questions.
But to generate answers or questions seems problematic.
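As an illustrative sketch of that duplicate-detection idea (the titles and the 0.5 threshold below are invented, and a real system would compare semantic embeddings rather than word sets, so that rephrased duplicates are caught too):

```python
def jaccard(a, b):
    """Word-set overlap between two titles, in [0, 1]."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

# Hypothetical existing question titles.
existing = [
    "How do I undo the most recent local commits in Git?",
    "What is the difference between a process and a thread?",
]

def possible_duplicates(new_title, threshold=0.5):
    """Flag existing titles whose word overlap exceeds the threshold."""
    return [t for t in existing if jaccard(new_title, t) >= threshold]

print(possible_duplicates("How do I undo recent local commits in Git?"))
```

Flagging candidates for a human to review is a far safer use of the technology than generating content: a false positive costs a moderator a few seconds, while a hallucinated answer costs the site its credibility.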
Technology is rooted in requirements, people form community, and Stack Overflow is an exchange of knowledge among peers!