AI Hallucinations: when AI goes on a fantasy spree!

Sujoy Roy
3 min read · Mar 21, 2024


Taming AI Hallucinations

Ever heard of AI hallucination? It’s when a fancy AI spits out stuff that’s as real as unicorns. Yup, you heard me right! Instead of giving you the facts on what’s actually happening, it dishes out made-up stories, like a bot that’s been binge-watching too many sci-fi flicks.

In general, AI hallucination refers to instances when an AI generates unexpected, untrue results not backed by real-world data. AI hallucinations can be false content, news, or information about people, events, or facts. These outputs are usually inaccurate, misleading, or nonsensical, often due to biases or errors in the training data or algorithms, and they can lead to incorrect conclusions, flawed predictions, or inappropriate actions.

The biggest impact of AI hallucination in recent times came when Google's parent company, Alphabet, lost $100 billion in market value after its new conversational bot, Google Bard, produced a factual error in its first demo.

So, what are the primary reasons behind AI hallucinations?

These can be attributed to insufficient training data, overfitting (a phenomenon where the model performs well on its training data but poorly on new, unseen data), and poorly constructed or encoded prompts.
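
To see what overfitting looks like in miniature, here's a quick sketch: plain NumPy on made-up toy data (nothing from the article itself). A degree-9 polynomial has enough freedom to pass through all ten noisy training points, so it "memorizes" them, then stumbles on points it hasn't seen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the true relationship is just y = x, plus a little noise.
x_train = np.linspace(0, 1, 10)
y_train = x_train + rng.normal(0, 0.05, size=10)
x_test = np.linspace(0.03, 0.97, 10)   # unseen points from the same range
y_test = x_test + rng.normal(0, 0.05, size=10)

# A degree-9 polynomial can thread through all 10 training points,
# noise included -- that's the overfit. (NumPy may warn that the fit
# is poorly conditioned; that's part of the point.)
coeffs = np.polyfit(x_train, y_train, deg=9)

train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

print(f"train MSE: {train_mse:.2e}")  # essentially zero: memorized
print(f"test  MSE: {test_mse:.2e}")   # typically much larger: fails to generalize
```

An LLM overfitting its training text behaves analogously: it looks fluent on familiar patterns but produces confident nonsense on inputs it never really learned.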

Detecting and preventing AI hallucinations is vital because it ensures that AI systems make accurate decisions, helps build trust in the technology, and promotes fairness for everyone. By finding and fixing hallucinations, AI systems can operate ethically and comply with legal regulations, preventing harm or unfair outcomes. In short, catching hallucinations is crucial for making sure AI is reliable and used responsibly.

Detecting AI hallucinations is challenging, but the problem can be tackled through several methods:

  1. Human Oversight: Regular human monitoring is crucial. Experts can review AI-generated outputs, identifying and rectifying discrepancies.
  2. Quality Assurance: Rigorous testing and validation processes are essential, so errors are caught early (see the consistency-check sketch after this list).
  3. Diverse Training Data: Exposure to diverse examples mitigates biases. It aids in better generalization and accurate outputs.
  4. Feedback Mechanisms: Continuous improvement relies on user feedback. Incorporating feedback loops refines algorithms and reduces errors.
  5. Transparency and Explainability: Transparent AI systems aid in spotting and preventing hallucinations. Understanding the AI’s decision-making process helps identify errors and biases effectively.
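
None of these items pins down a single algorithm, but here is one minimal sketch of the quality-assurance idea, under the assumption that you can query your model repeatedly: ask the same question several times and flag answers the model can't reproduce consistently. The `ask_model` function below is a made-up placeholder for whatever LLM call you actually use.

```python
import random
from collections import Counter

def ask_model(question: str) -> str:
    """Placeholder for a real LLM call (an API request, a local model,
    whatever you actually use). Canned answers keep the sketch runnable."""
    return random.choice(["Paris", "Paris", "Paris", "Paris", "Lyon"])

def consistency_check(question: str, n_samples: int = 5,
                      threshold: float = 0.6):
    """Ask the same question several times and measure agreement.
    Low agreement is a cheap signal, not a verdict."""
    answers = [ask_model(question) for _ in range(n_samples)]
    top_answer, count = Counter(answers).most_common(1)[0]
    flagged = (count / n_samples) < threshold
    return top_answer, flagged

answer, needs_review = consistency_check("What is the capital of France?")
print(answer, "-> send to a human reviewer" if needs_review else "-> looks stable")
```

Low agreement doesn't prove the answer is hallucinated; it just routes the output to the human oversight described in item 1.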

Some companies are already working hard to stop AI from making up weird stuff by changing how they train their LLMs. Take OpenAI, for instance. They came up with a new way to tackle this problem by comparing two methods: one where the AI gets feedback only on its final answer ("outcome supervision"), and another where it gets feedback after each little reasoning step ("process supervision"). Turns out the second method works better, because it keeps the AI in check along the way. By watching its steps, they keep the AI from going off the rails and making stuff up. Smart, right?
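
To make the distinction concrete, here's a toy sketch; this is emphatically not OpenAI's actual implementation, and `check_final_answer` and `score_step` are made-up stand-ins for trained reward models. Outcome supervision scores only the end result, while process supervision scores every step, so a faulty step can be pinpointed.

```python
def check_final_answer(answer: str) -> float:
    """Outcome-supervision stand-in: one score for the end result only."""
    return 1.0 if answer.strip() == "4" else 0.0

def score_step(step: str) -> float:
    """Process-reward stand-in: a plausibility score per reasoning step.
    A real system would use a trained reward model here, not a rule."""
    return 0.1 if "2 + 2 = 5" in step else 0.9

steps = [
    "We need to add 2 and 2.",
    "Combining the two quantities, 2 + 2 = 5.",  # the faulty step
    "Therefore the answer is 5.",
]
final_answer = "5"

# Outcome supervision: a single signal at the very end.
print("outcome reward:", check_final_answer(final_answer))  # 0.0 -- wrong, but where?

# Process supervision: one signal per step, so the bad step is localized.
per_step = [score_step(s) for s in steps]
print("process rewards:", per_step)                    # [0.9, 0.1, 0.9]
print("suspect step:", per_step.index(min(per_step)))  # index 1
```

The point is the shape of the feedback: the outcome signal only says the chain failed, while the per-step signals say where.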

“The motivation behind this research is to address hallucinations in order to make models more capable at solving challenging reasoning problems.”
~ Karl Cobbe | mathgen researcher at OpenAI

Currently, there are no foolproof methods to spot AI hallucinations, so the best bet is to do some digging yourself. If what the AI says sounds fishy, check it against reliable sources. Don't just swallow everything the AI spits out; be a savvy sleuth and verify the facts. Trustworthy sources can help you spot any funny business in the AI's responses. So remember to always keep a skeptic's eye when dealing with AI!
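
In code, that "digging" can start as simply as looking claims up against material you trust. Here's a hedged sketch, where `trusted_sources` is a made-up stand-in for real references such as a search API, an encyclopedia dump, or a curated knowledge base:

```python
# A toy "trusted corpus". In reality this would be a search API,
# a curated knowledge base, or documents you have verified yourself.
trusted_sources = {
    "eiffel tower": "The Eiffel Tower is located in Paris, France.",
    "great wall": "The Great Wall of China runs across northern China.",
}

def verify_claim(claim: str) -> str:
    """Naive keyword lookup: surface a source that mentions the claim's
    topic so a human can compare the two statements."""
    claim_lower = claim.lower()
    for topic, source_text in trusted_sources.items():
        if topic in claim_lower:
            return f"Compare against: {source_text!r}"
    return "No trusted source found: treat the claim with extra skepticism."

print(verify_claim("The Eiffel Tower is located in Rome."))
print(verify_claim("The moon is made of cheese."))
```

A real fact-checking pipeline would use retrieval and semantic matching rather than substring lookup, but the workflow is the same: pull up a source, then compare.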

Final Thoughts

It seems like AI’s been having a bit of a trip lately with these hallucinations, huh? Trust in what it spits out is kind of shaky right now. But fear not! We can beef up that trust by putting AI through some serious tests, like giving it a pop quiz but way harder. Let’s not forget to spill the beans on how AI makes its decisions. Transparency’s key here, folks! Plus, having some humans keeping an eye on things and fixing any hiccups can really save the day. So, who’s up for the challenge of trusting our friendly neighbourhood AI?


Sujoy Roy

A technology enthusiast and engineer who likes to speak on #ArtificialIntelligence, #Tech, #DigitalTransformation, #CloudComputing, and #Fintech. Follow me @sujoyshub