For You, How Biased Should the Ideal 'AI Chatbot' Be?

in Reverio · 5 months ago

ChatGPT's (and its alternatives') refusal to answer some questions because they're "unethical" made me think about bias.

Human bias can affect the results of the prompts we give AI, and some knowledge is dangerous for the general public to receive without the context behind it, context that's impossible to give in one conversation.

I'm really conflicted about it.

On one hand, I want the AI to be biased. I wouldn't want it to tell a child how to murder people, or to explain subjective matters like morality and ethics without an objective reference point.

On the other hand, I'm fearful of who controls the bias. It can be (and probably already is) used for mass brainwashing by providing half-truths, or outright lies, through that bias. This can happen accidentally too!

I don't know how biased it should be to avoid either extreme.

For you, how biased should the ideal 'AI chatbot' be?

 

----

Img source: https://unsplash.com/photos/_SqcBsMdxbY

--- This question was created on [reverio.io](https://reverio.io), Reverio is a question and answer platform built exclusively for Hive. Answer this question on Reverio by clicking [here](https://Reverio.io/question/ahmadmanga/for-you--how-biased-the-ideal--ai-chatbot--should-be-).


This is an excellent question... and it might be impossible to answer.

 

Normally, if you search the internet for a question, you can take the time to look through the various results to see which source seems the most up to date or trusted to you. I'm sure we've all had the experience where we're looking for an answer, and one seems right, but it's also a few years old, so you're not entirely sure you can trust it. With ChatGPT, you only get the one answer, and you don't really know how up to date or trustworthy its sources are... so we're definitely going to have people trusting ChatGPT's results as gospel.

I've seen examples of ChatGPT throwing back an error if you ask a question about a doctor's actions while referring to that doctor as 'she'. If you refer to the doctor as 'he', you get a proper answer to your question. Obviously this type of bias needs to be removed as much as possible, because it only leads to confusion, with users receiving strange errors and not understanding why.

I completely agree with you about wanting some level of bias. We don't necessarily want ChatGPT giving someone step-by-step instructions on how to perform their own appendectomy because the people performing that surgery really should have years of training... but... if a step-by-step guide saves someone's life in an emergency... maybe it should be available.

ChatGPT is an extremely powerful tool... I'm less concerned about who controls the bias and more that generative AI develops its own bias, with no one knowing why it's providing specific answers... but it's definitely both exciting and dangerous... like so many other amazing tools.

Well, I believe it's really good that AI has limits, since human nature has a habit of exploiting every resource in a negative way, as is evident throughout human history. Yet it's also crucial to keep scrutiny on those who control it, so they aren't able to manipulate it either.
