Don't Panic, and always bring your RAG.
Are you aware of the many recent incidents involving LLM hallucinations?
Artificial hallucinations
In the field of artificial intelligence, a hallucination or artificial hallucination is a confident response by an AI that does not seem to be justified by its training data.
See https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
Lying or Hallucinating? By hiding information
Whether it's the latest chatbot or another AI system, proper training is essential. Training data can be outdated, incomplete, or otherwise flawed. If you want your chatbot to have access to the information you know, use BunkerDox to supply the data it needs to get things right.
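To make the idea concrete, here is a minimal sketch of the retrieval-augmented pattern this approach is built on: fetch the most relevant passage from your own documents, then instruct the model to answer only from that context. The toy word-overlap retriever and the sample documents below are illustrative assumptions, not BunkerDox's actual API.

```python
# A minimal sketch of retrieval-augmented generation (RAG):
# retrieve the best-matching passage from your own documents,
# then pass it to the model as grounding context.
# The documents and scoring scheme are illustrative only.
from collections import Counter

DOCS = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Support hours are 9am-5pm Eastern, Monday through Friday.",
    "The v2 API requires an Authorization header with a bearer token.",
]

def score(query: str, doc: str) -> int:
    """Count overlapping words between query and document (toy retriever)."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str) -> str:
    """Return the document that best matches the query."""
    return max(DOCS, key=lambda doc: score(query, doc))

def build_prompt(query: str) -> str:
    """Ground the model: tell it to answer only from the retrieved context."""
    context = retrieve(query)
    return (
        "Answer using ONLY the context below. If the answer is not "
        "in the context, say you don't know.\n\n"
        f"Context: {context}\n\nQuestion: {query}"
    )

print(build_prompt("What is the return policy?"))
```

In production you would swap the word-overlap scorer for embedding similarity, but the grounding instruction in the prompt is what reins in hallucinations: the model is told to admit ignorance rather than invent an answer.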
Lying or Hallucinating? By making stuff up
In a widely reported 2023 case, a lawyer used ChatGPT to prepare a court filing and cited nonexistent cases generated by the chatbot. The lawyer claimed he was unaware that ChatGPT could fabricate cases, believing instead that he was using it as a legal research tool.
Lying or Hallucinating? To make itself look good
Sometimes your chatbot will give you an answer with such confidence that you can't help but believe it. How many of us have spent hours debugging after one of those moments of AI overconfidence?