
What is AI Inference and Hallucination?

by Oga Trainer

Inferences and Hallucinations in AI

AI inference is the process by which a trained language model draws on its previous training to arrive at reasonable answers to questions or requests. The term comes from the word "infer," meaning an answer or guess based on a learned pattern.

Here is how it works. When an LLM receives a prompt from a user, it draws on the patterns it learned from its training data. It makes reasonable associations, relating the prompt to those patterns by identifying objects and keywords associated with it, and then synthesizes them into a new response, which is presented as the inference. In summary, AI inference is an answer based on the data that was used to train the LLM. It is important to point out that not all inferences are based only on pre-trained data: models with internet access can also scan the web for additional data when making inferences.
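To make this concrete, here is a minimal sketch of inference in practice using the Hugging Face transformers library. The library choice and the "gpt2" model name are illustrative assumptions, not something specific to this article; any causal language model would behave the same way.

```python
# Minimal sketch of LLM inference, assuming the Hugging Face
# transformers library is installed. "gpt2" is just an example model.
from transformers import pipeline

# Load a pre-trained model; its weights encode the patterns
# it learned during training.
generator = pipeline("text-generation", model="gpt2")

# Inference: the model continues the prompt using only those
# learned patterns -- it is not looking anything up.
prompt = "The capital of France is"
result = generator(prompt, max_new_tokens=10, num_return_sequences=1)
print(result[0]["generated_text"])
```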

There are two major types of inference in AI models:

1. Machine learning models: These are mostly used to make predictions from available data. For example, a model can predict the future price of a product from its price history (see the sketch after this list).

2. Deep learning models: These are more focused on understanding complex data patterns, such as those found in speech and images.
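Here is a minimal sketch of the first kind of inference: a classic machine learning model predicting a future price from past prices. The use of scikit-learn and the toy numbers are assumptions chosen purely for illustration.

```python
# Toy sketch: predictive inference with a classic ML model.
# scikit-learn and the made-up prices are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical monthly prices of a product (month index -> price).
months = np.array([[1], [2], [3], [4], [5]])
prices = np.array([10.0, 10.5, 11.1, 11.4, 12.0])

# Training: the model learns the price trend from past data.
model = LinearRegression().fit(months, prices)

# Inference: predict the price for month 6 from the learned trend.
predicted = model.predict(np.array([[6]]))
print(f"Predicted month-6 price: {predicted[0]:.2f}")
```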

Hallucination in AI refers to an LLM giving wrong answers, or answers that are not based on real data. Simply put, an AI hallucination is a wrong answer that neither makes sense nor is grounded in reality. AI hallucination should not be confused with AI bias. AI bias is, in most cases, intentionally injected into an LLM by the model's trainers. For example, a trainer can push a specific agenda by training the model on their own interpretations of alternative facts and realities. Once trained on these biases, the model then presents them to future users as correct inferences.
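For intuition, here is a toy sketch of how one might flag an answer that is not grounded in its source material. The function, the word-overlap heuristic, and the threshold are all my own illustrative assumptions; real hallucination detection is far more sophisticated.

```python
# Toy sketch (an illustration, not a standard technique): flag a
# possible hallucination by checking whether the content words in a
# model's answer actually appear in the source text it relied on.
def looks_ungrounded(answer: str, source_text: str) -> bool:
    """Return True if most content words in `answer` are absent from
    `source_text` -- a crude hint the answer may be invented."""
    stop_words = {"the", "a", "an", "is", "was", "of", "in", "to", "and"}
    words = [w.strip(".,").lower() for w in answer.split()]
    content = [w for w in words if w and w not in stop_words]
    if not content:
        return False
    missing = sum(1 for w in content if w not in source_text.lower())
    return missing / len(content) > 0.5  # more than half unsupported

source = "The Eiffel Tower was completed in 1889 in Paris."
print(looks_ungrounded("The Eiffel Tower was completed in 1889.", source))   # False
print(looks_ungrounded("The Eiffel Tower was built by Napoleon in 1770.", source))  # True
```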
