Why knowledge management is foundational to AI success

Providing the right context to AI can improve accuracy and reduce hallucinations.

Amid all the conversations about how AI is revolutionizing work—making everyday tasks more efficient and repeatable and multiplying the efforts of individuals—it’s easy to get a bit carried away: What can’t AI do?

Despite its name, generative AI—AI capable of creating images, code, text, music, whatever—can’t make something from nothing. AI models are trained on the information they’re given. In the case of large language models (LLMs), this usually means a big body of text. If the AI is trained on accurate, up-to-date, and well-organized information, it will tend to respond with answers that are accurate, up-to-date, and relevant. Research from MIT has shown that integrating a knowledge base into an LLM tends to improve the output and reduce hallucinations. This means that AI and ML advancements, far from superseding the need for knowledge management, actually make it more essential.
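
In practice, integrating a knowledge base into an LLM often takes the form of retrieval-augmented generation (RAG): relevant, vetted content is retrieved at question time and supplied to the model as context. Below is a minimal sketch of that pattern, assuming a Python stack; search_knowledge_base and call_llm are hypothetical placeholders for whatever search index and model API you actually use.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# search_knowledge_base and call_llm are hypothetical placeholders
# for your own search index and model API.

def search_knowledge_base(question: str, top_k: int = 3) -> list[str]:
    """Return the top_k most relevant, vetted knowledge-base articles."""
    raise NotImplementedError  # e.g., keyword or vector search over curated content

def call_llm(prompt: str) -> str:
    """Send the prompt to whichever LLM your organization has approved."""
    raise NotImplementedError

def answer_with_context(question: str) -> str:
    # Ground the model in curated knowledge instead of letting it guess.
    context = "\n\n".join(search_knowledge_base(question))
    prompt = (
        "Answer the question using only the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```

The point of the pattern is that the model’s answer is constrained by content your experts have already verified, which is exactly where knowledge management pays off.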

Quality in, quality out

LLMs trained on stale or incomplete information are prone to “hallucinations”: incorrect results that range from slightly off-base to totally incoherent. Hallucinations include wrong answers to questions and false information about people and events.

The classic computing rule of “garbage in, garbage out” applies to generative AI, too. Your AI model is dependent on the training data you provide; if that data is outdated, poorly structured, or full of holes, the AI will start inventing answers that mislead users and create headaches, even chaos, for your organization.

Avoiding hallucinations requires a body of knowledge that is:

  • Accurate and trustworthy, with information quality verified by knowledgeable users
  • Up-to-date and easy to refresh as new data/edge cases emerge
  • Contextual, meaning it captures the context in which solutions are sought and offered
  • Continuously improving and self-sustaining

A knowledge management (KM) approach that enables discussion and collaboration improves the quality of your knowledge base, since it allows you to work with colleagues to vet the AI’s responses and refine prompt structure to improve answer quality. This interaction acts as a form of reinforcement learning in AI: humans applying their judgment to the quality and accuracy of the AI-generated output and helping the AI (and humans) improve.
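
As a simple illustration of that feedback loop, the sketch below shows one way human votes on AI-generated answers might be recorded and used to flag knowledge that needs review. The helpers, storage, and thresholds here are assumptions for illustration, not a prescribed design.

```python
# Hypothetical sketch: record human votes on AI-generated answers and
# flag an answer (and the knowledge it drew on) for review once enough
# negative feedback accumulates. Thresholds and storage are assumptions.
from collections import defaultdict

votes: defaultdict[str, list[int]] = defaultdict(list)  # answer_id -> +1/-1 votes

def record_vote(answer_id: str, vote: int) -> None:
    """Record a human judgment: +1 for helpful, -1 for not helpful."""
    votes[answer_id].append(vote)

def needs_review(answer_id: str, threshold: float = -0.5, min_votes: int = 5) -> bool:
    """Return True when an answer has drawn enough consistently negative feedback."""
    v = votes[answer_id]
    return len(v) >= min_votes and sum(v) / len(v) <= threshold
```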

Ask the right questions

With LLMs, how you structure your queries affects the quality of your results. That’s why prompt engineering, the practice of structuring queries to get the best results from an AI, is emerging as a crucial skill and as an area where generative AI can help with both sides of the conversation: the prompt and the response.

According to the Gartner® report Solution Path for Knowledge Management (June 2023), “Prompt engineering, the act of formulating an instruction or question for an AI, is rapidly becoming a critical skill in and of itself. Interacting with intelligent assistants in an iterative, conversational way will improve the knowledge workers’ ability to guide the AI through KM tasks and share the knowledge gained with human colleagues.”
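
As a concrete, deliberately simple illustration of what structuring a query can mean, compare a bare question with a prompt that spells out role, context, task, and output format. The wording below is illustrative only, not a recommended template.

```python
# Two ways to ask the same thing. The structured prompt makes the role,
# context, task, and expected format explicit, which typically yields a
# more focused answer. All details here are illustrative assumptions.

bare_prompt = "How do I speed up our nightly build?"

structured_prompt = """You are a build engineer advising a Java monorepo team.
Context: the nightly build runs Maven with no build cache and takes about 90 minutes.
Task: suggest three changes likely to reduce build time.
Format: a numbered list, one sentence per suggestion, no code."""
```

Capturing prompt patterns like this, along with the results they produce, is exactly the kind of knowledge worth sharing with colleagues through your KM practice.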

Use AI to centralize knowledge-sharing

Capturing and sharing knowledge is essential to a thriving KM practice. AI-powered knowledge capture, content enrichment, and AI assistants can help you introduce learning and knowledge-sharing practices to the entire organization and embed them in everyday workflows.

Per Gartner’s Solution Path for Knowledge Management, “Products like Stack Overflow for Teams can be integrated with Microsoft Teams or Slack to provide a Q&A forum with a persistent knowledge store. Users can post a direct question to the community. Answers are upvoted or downvoted and the best answer becomes pinned as the top response. All answered questions are searchable and can be curated like any other knowledge source. This approach has the additional advantage of keeping knowledge sharing central to the flow of work.”

Another Gartner report, Assessing How Generative AI Can Improve Developer Experience (June 2023), recommends that organizations “collect and disseminate proven practices (such as tips for prompt engineering and approaches to code validation) for using generative AI tools by forming a community of practice for generative-AI-augmented development.” The report further recommends that organizations “ensure you have the skills and knowledge necessary to be successful using generative AI by learning and applying your organization’s approved tools, use cases and processes.”

Mind the complexity cliff

Generative AI tools are great for new developers and more seasoned ones looking to learn new skills or expand existing ones. But there’s a complexity cliff: After a certain point, an AI’s ability to handle the nuances, interdependencies, and full context of a problem and its solution drops off.

“LLMs are very good at enhancing developers, allowing them to do more and move faster,” Marcos Grappeggia, product manager for Google Cloud’s Duet, said on a recent episode of the Stack Overflow podcast. That includes testing and experimenting with languages and technologies beyond their comfort zone. But Grappeggia cautions that LLMs “are not a great replacement for day-to-day developers…if you don’t understand your code, that’s still a recipe for failure.”

That complexity cliff is where you need humans, with their capacity for original thought and their experience-informed judgment. Your goal is a KM strategy that leverages the power of AI while refining and validating its output against human-created knowledge.

Stack Overflow for Teams is purpose-built to capture, collaborate on, and share knowledge: everything from new technologies like GenAI to transformations like cloud. Find out how organizations are using Stack Overflow for Teams to build secure, collective knowledge bases and scale learning across teams at stackoverflow.co/teams.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
