Truth or Fabrication?


Introduction

Can artificial intelligence models “hallucinate” and invent answers to questions they do not actually know? And if so, is there a way to monitor this problem, which could cause serious harm, especially when the questions relate to the scientific and medical fields?

Given the pace of development we are seeing today, almost anything in this field seems possible, and it is in fact feasible to build tools that detect when AI-powered chatbots are likely to give incorrect answers.

For now, trusting artificial intelligence blindly plays into its ability to mislead us: it delivers answers with apparent certainty, and it is difficult to tell when they are fabricated.

But this will not last forever. With continued development, it will become possible to distinguish a model that is unsure of its answer from one that is genuinely answering correctly, and to monitor that difference.

Indeed, hallucinations in generative AI models are a major concern: the technology's fluency and ability to communicate mean it can pass off fabricated, incorrect information as fact simply in order to respond to a question.

But strong measures must be taken now to combat AI hallucinations, particularly for medical or legal inquiries that directly affect people's lives.

Last month, users of Google's AI Overviews, a search feature powered by generative AI, reported strange and potentially dangerous answers.

When asked how many Muslim presidents there have been in the history of the United States, the tool responded that Barack Obama “is considered by some to be the first Muslim president.” Google later explained that this result violated its policies and that it intended to remove it.

Another user posted the search engine's answer to a question about cheese not sticking to pizza. In addition to mixing the cheese with the sauce, Google's search engine suggested adding "non-toxic" glue to the cheese.

This suggests that advanced artificial intelligence models can, in effect, be trained to lie and to deceive humans with ease.
So, could advanced chatbots like ChatGPT also learn to lie in order to deceive people?

With careful human scrutiny, it becomes apparent that these systems can not only learn to lie, but that once they do, it may be impossible to retrain them to tell the truth and give correct information using the AI safety measures currently available.

We should therefore be cautious about developing a false sense of security towards artificial intelligence systems and the information derived from them.

To this we can add the danger of fraudsters and internet hackers programming artificial intelligence systems to lie, hiding the harm that may result from a particular purchase or from visiting a site.

Conclusion

So it is essential, and I believe work has already begun, to build and deploy a model based on the time a chatbot takes to answer a question directed to it. By measuring this time, such a model could estimate whether the chatbot is confident in the answer it has written or has fabricated it. This is the kind of detection being developed by researchers at the University of Oxford.
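To make the idea concrete, here is a minimal Python sketch of how response time might be used as a rough warning signal. This is only a toy illustration of the general idea described above, not the Oxford researchers' actual method; the class, function, timings, and threshold are all invented for the example.

```python
import statistics
from dataclasses import dataclass

@dataclass
class AnswerRecord:
    question: str
    answer: str
    response_time_s: float  # seconds the chatbot took to produce the answer

def flag_unusual_response_times(records, z_threshold=2.0):
    """Flag answers whose response time deviates strongly from the chatbot's
    typical response time, as a crude signal that the answer deserves a
    manual fact-check. A toy illustration, not a real hallucination detector."""
    times = [r.response_time_s for r in records]
    mean_t = statistics.mean(times)
    stdev_t = statistics.pstdev(times) or 1e-9  # avoid division by zero
    flagged = []
    for r in records:
        z = (r.response_time_s - mean_t) / stdev_t
        if abs(z) > z_threshold:
            flagged.append((r, z))
    return flagged

if __name__ == "__main__":
    # Hypothetical logged interactions; the timings are invented.
    history = [
        AnswerRecord("Capital of France?", "Paris", 0.8),
        AnswerRecord("2 + 2?", "4", 0.6),
        AnswerRecord("Boiling point of water?", "100 °C at sea level", 0.9),
        AnswerRecord("Largest planet?", "Jupiter", 0.7),
        AnswerRecord("Author of Hamlet?", "William Shakespeare", 1.0),
        AnswerRecord("Speed of light?", "About 300,000 km/s", 0.8),
        AnswerRecord("First Muslim US president?", "Barack Obama ...", 4.5),
    ]
    for record, z in flag_unusual_response_times(history):
        print(f"Manual check suggested (z = {z:.1f}): {record.question!r}")
```

Of course, response time alone is a weak signal; serious detection work also looks at the model's own uncertainty, for example by comparing the meaning of several answers sampled for the same question.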

*Image designed using Canva

Posted Using InLeo Alpha


