Microsoft’s new AI tool wants to find and fix AI-generated text that’s factually wrong

Microsoft has unveiled a new tool that aims to stop AI models from generating content that is not factually correct, more commonly known as hallucinations.

The new Correction feature builds on Microsoft’s existing ‘groundedness detection’, which essentially cross-references AI-generated text against a supporting document supplied by the user. The tool will be available as part of Microsoft’s Azure AI Safety API and can be used with any text-generating AI model, such as OpenAI’s GPT-4o and Meta’s Llama.
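To make the idea concrete, here is a minimal conceptual sketch of groundedness checking, not Microsoft’s actual implementation or API: a generated sentence is compared against the user-supplied grounding document, and sentences whose content words are poorly supported by that document are flagged as potentially hallucinated. The function name, word-overlap heuristic, and threshold are all illustrative assumptions.

```python
def is_grounded(sentence: str, source: str, threshold: float = 0.5) -> bool:
    """Toy groundedness check (illustrative only): return True when
    enough of the sentence's content words appear in the source doc."""
    # Keep words longer than 3 characters as rough "content words".
    words = {w.strip(".,").lower() for w in sentence.split() if len(w) > 3}
    source_words = {w.strip(".,").lower() for w in source.split()}
    if not words:
        return True
    # Fraction of the sentence's content words supported by the source.
    overlap = len(words & source_words) / len(words)
    return overlap >= threshold


source_doc = "The Correction feature ships as part of the Azure AI Safety API."
grounded = "Correction ships with the Azure AI Safety API."
hallucinated = "Correction was invented in 1985 by a research submarine."

print(is_grounded(grounded, source_doc))      # True
print(is_grounded(hallucinated, source_doc))  # False
```

Production systems like groundedness detection use language models rather than word overlap to judge support, but the contract is the same: text plus a grounding source in, a grounded/ungrounded verdict out.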
