Biased and hallucinatory AI models can produce inequitable results

“Code me a treasure-hunting game.” “Cover ‘Gangnam Style’ by Psy in the style of Adele.” “Create a photorealistic, closeup video of two pirate ships battling each other as they sail inside a cup of coffee.” Even that final prompt is no exaggeration – today’s best AI tools can create all of these and more in minutes, making AI seem like a modern-day form of real-world magic.

We know, of course, that it isn’t magic. In fact, a huge amount of work, instruction and information goes into the models that power GenAI and produce its output. AI systems need to be trained to learn patterns from data: GPT-3, the base model of ChatGPT, was trained on 45TB of Common Crawl data, the equivalent of around 45 million 100-page PDF documents. In the same way that we humans learn from experience, training helps AI models to better understand and process information. Only then can they make accurate predictions, perform important tasks and improve over time.
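To make “learning patterns from data” concrete, here is a deliberately tiny sketch in Python: a one-parameter-pair model fitted by gradient descent. Everything in it (the function names, the toy dataset, the learning rate) is invented for illustration; real LLM training operates on text tokens at an enormously larger scale, but the core loop of predict, measure error, adjust is the same idea.

```python
# Toy illustration of training: learn the pattern y = 2x + 1 from examples.
def train(data, epochs=2000, lr=0.05):
    w, b = 0.0, 0.0  # model parameters, starting with no knowledge
    for _ in range(epochs):
        for x, y in data:
            pred = w * x + b   # the model's current prediction
            err = pred - y     # how wrong that prediction is
            w -= lr * err * x  # nudge each parameter to shrink the error
            b -= lr * err
    return w, b

# "Training data": examples generated by the hidden pattern y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(-3, 4)]
w, b = train(data)
print(round(w, 2), round(b, 2))  # recovers values close to the pattern (2, 1)
```

The point of the sketch is the loop structure, not the arithmetic: with each pass over the data, the parameters drift toward values that reproduce the examples, which is what “learning from experience” means for a model.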
