Artificial intelligence learns to say “I don’t know”

WORLD | 12.05.2026

South Korean researchers have developed a new training method that allows artificial intelligence to admit its uncertainty instead of generating misinformation on topics it does not know. The study suggests this could improve reliability, especially in critical areas such as healthcare and autonomous driving.
“Elchi” reports that South Korean scientists have developed a new training method that allows artificial intelligence to “admit its lack of knowledge.” The researchers say this approach could reduce the problem of artificial intelligence producing misinformation, known as “hallucinations.”
The research was conducted at the Korea Advanced Institute of Science and Technology (KAIST), and the results were published in the academic journal Nature Machine Intelligence.
One of the biggest problems of artificial intelligence
According to experts, one of the biggest problems with artificial intelligence is “overconfidence”: systems, especially those used in the healthcare sector, can give definitive answers even when they are not sure.
Previous studies have shown that models like OpenAI’s ChatGPT can sometimes present incorrect information as if it were true, a phenomenon known in the technology world as “hallucination.”
They were inspired by the human brain
To address this, the researchers drew inspiration from how the human brain works.
According to the scientists, the human brain can generate internal signals even without external stimuli, even before birth. The research team applied a similar principle to artificial intelligence.
In the new method, the artificial intelligence model undergoes a short initial training phase on random “noise” data before being trained on real data.
Through this process, the model learns to recognize its own uncertainty before real learning begins.
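The two-phase idea described above can be illustrated with a toy sketch. This is not the authors’ actual method: the linear classifier, the uniform-target noise phase, the synthetic data, and all hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class TinyClassifier:
    """Linear softmax classifier trained with plain gradient descent."""
    def __init__(self, dim, n_classes, lr=0.5):
        self.W = rng.normal(scale=0.5, size=(dim, n_classes))
        self.b = np.zeros(n_classes)
        self.lr = lr

    def predict_proba(self, X):
        return softmax(X @ self.W + self.b)

    def step(self, X, targets):
        # Cross-entropy gradient with respect to the logits is (p - targets).
        p = self.predict_proba(X)
        grad = p - targets
        self.W -= self.lr * X.T @ grad / len(X)
        self.b -= self.lr * grad.mean(axis=0)

model = TinyClassifier(dim=2, n_classes=2)

# Phase 1 (hypothetical noise pre-training): random inputs paired with
# uniform targets pull the model toward maximum-entropy, "I don't know" outputs.
noise = rng.normal(size=(256, 2))
uniform = np.full((256, 2), 0.5)
for _ in range(100):
    model.step(noise, uniform)

# Phase 2: ordinary training on real, well-separated two-class data.
X_real = np.vstack([rng.normal(-2.0, 0.3, size=(100, 2)),
                    rng.normal(+2.0, 0.3, size=(100, 2))])
y = np.zeros((200, 2))
y[:100, 0] = 1.0
y[100:, 1] = 1.0
for _ in range(300):
    model.step(X_real, y)

in_dist = model.predict_proba(np.array([[-2.0, -2.0]]))[0]  # familiar input
far_out = model.predict_proba(np.array([[0.0, 0.0]]))[0]    # unfamiliar input
print(f"confidence on familiar input:   {in_dist.max():.2f}")
print(f"confidence on unfamiliar input: {far_out.max():.2f}")
```

The intended behavior is that the model answers confidently near its training clusters but stays close to a 50/50 output in the region it has never seen.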
The “I don’t know anything yet” stage
The researchers state that the method reduces the model’s initial overconfidence, allowing it to learn an “I don’t know anything yet” state.
As a result, instead of giving wrong answers on topics they did not encounter during training, systems respond with a lower level of confidence.
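A common way to turn such a lowered confidence score into an explicit “I don’t know” answer is a simple threshold on the model’s top probability. This convention and the labels below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def answer_or_abstain(probs, labels, threshold=0.8):
    """Return the top label only when confidence clears the threshold;
    otherwise abstain. `threshold` is an assumed, application-specific cutoff."""
    i = int(np.argmax(probs))
    return labels[i] if probs[i] >= threshold else "I don't know"

labels = ["benign", "malignant"]
print(answer_or_abstain(np.array([0.97, 0.03]), labels))  # prints "benign"
print(answer_or_abstain(np.array([0.55, 0.45]), labels))  # prints "I don't know"
```

In a safety-critical setting such as medical imaging, the abstention path would typically hand the case to a human expert rather than guess.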
The research team states that the new method improves the ability of artificial intelligence to “distinguish between what it knows and what it does not know.”

Şayəstə