AI can be dangerous in healthcare assistance if applied wrongly

Yes, I agree that AI can be dangerous in healthcare assistance, but it also depends on where it is being used. As with other use cases, if AI is used for customer support in a healthcare company, it is not a big deal; but if it starts prescribing medications or offering medical suggestions to the person it is interacting with, it can be dangerous, and that will need to change.

AI should not be an acting doctor

Sometimes doctors also provide their consultation via chat. There are instances when doctors are not available to interact with patients, and the patients either have to interact with a chatbot or the chatbot itself does the full work of a doctor. This is something dangerous that AI should not be allowed to take over from a doctor. I understand that such a system would be thoroughly tested and that we are supposed to trust it, but we still have to be very careful with AI, especially an AI that keeps learning from its interactions. It can pick up misleading information and keep using it, and technically it can even be hard to get the wrong data back out of the models.


AI should not prescribe medications

When the interaction happens with an AI, and the AI is going to do the chatting with the patients, we have to ensure that it is not prescribing any medications or offering any medical suggestions. During a diagnosis, human intelligence is capable of asking follow-up questions based on the answers a person gives and steering the conversation in the right direction. An AI, on the other hand, likely may not have the ability to ask additional questions based on the answers given. Say the AI suggests gargling with salt water for a symptom that matches a diagnosis; there can be a few more reasons why gargling may not be good for that particular patient. Since the AI cannot rule all of those out, prescribing medications or explaining any medical procedures should not be done by AI.

AI should not perform diagnosis

Even in a case where an AI does a diagnosis, it should be verified manually by a healthcare professional or a doctor so that we do not rely on AI completely. One of the biggest advantages AI can give us is that it can check many possibilities, think faster, and give us the odds much sooner. This can be very helpful in diagnosis: if 100 things need to be checked, an AI or automated system can test them far faster than a human, who would take a lot of time.

There is also a positive way of looking at this. In the future, when interacting with AI becomes very common, doctors could have a continuous conversation with AI bots that help them make a diagnosis. The conversation could work in such a way that the AI gives suggestions and the doctor makes the final decision.
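To make that workflow concrete, here is a minimal sketch of the "AI suggests, doctor decides" pattern. Everything in it is hypothetical: `rank_possible_conditions`, `Suggestion`, and the placeholder conditions and scores are invented for illustration and are not any real medical API. The point is only the shape of the flow: the AI ranks possibilities, and no suggestion is accepted without a doctor's explicit sign-off.

```python
# A minimal sketch, assuming a hypothetical model interface. Names like
# rank_possible_conditions and Suggestion are made up for illustration;
# this is not a real medical API, and the scores are placeholders.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Suggestion:
    condition: str
    score: float  # model confidence between 0.0 and 1.0

def rank_possible_conditions(symptoms: List[str]) -> List[Suggestion]:
    """Stand-in for an AI model that scores many candidate conditions quickly."""
    # Hypothetical placeholder: a real system would call a trained model here.
    candidates = {"common cold": 0.6, "strep throat": 0.3, "allergy": 0.1}
    return sorted((Suggestion(c, s) for c, s in candidates.items()),
                  key=lambda s: s.score, reverse=True)

def assisted_diagnosis(symptoms: List[str],
                       doctor_confirms: Callable[[Suggestion], bool]) -> Optional[str]:
    """The AI only ranks possibilities; a doctor must approve the final call."""
    for suggestion in rank_possible_conditions(symptoms):
        if doctor_confirms(suggestion):  # the human makes the final decision
            return suggestion.condition
    return None  # no AI suggestion is accepted without a doctor's sign-off

# Example: the doctor reviews each ranked suggestion and accepts or rejects it.
result = assisted_diagnosis(
    ["sore throat", "fever"],
    doctor_confirms=lambda s: s.condition == "strep throat",
)
print(result)  # -> "strep throat"
```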


If you like what I'm doing on Hive, you can vote for me as a witness using the links below.

Vote @balaz as a Hive Witness

Vote @kanibot as a Hive Engine Witness



Posted Using InLeo Alpha




I would not worry much. AI can only infiltrate a tiny fraction of the medical field; it is one of the many fields AI can get only a limited hold on.

I heard that robots now serve patients food and remind them when they should take their drugs. But that's about it.


We already live in a dystopia where the computers owned by the state or by private insurance companies determine our health care.

Doctors punch medical codes into a computer and the computer prescribes a treatment.

During the pandemic we saw that the diagnosis of doctors played second fiddle to the mandates of the state.

AI should not prescribe medications

We already have systems where computers analyze the drugs people take, looking for adverse drug interactions.

AI did not create the status quo.

That said, it is highly likely that improved medical models will generate better diagnoses than doctors. We will probably see a market where doctors work with computer models to provide health care.

!WINE

Posted using STEMGeeks


Thank you for your witness vote!
Have a !BEER on me!
To opt out of my witness beer program, just comment STOP below.


This post has been manually curated by @bhattg from the Indiaunited community. Join us on our Discord server.

Did you know that you can earn a passive income by delegating to @indiaunited? We share more than 100% of the curation rewards with the delegators in the form of IUC tokens. HP delegators and IUC token holders also get up to 20% additional vote weight.

Here are some handy links for delegations: 100HP, 250HP, 500HP, 1000HP.


100% of the rewards from this comment go to the curator for their manual curation efforts. Please encourage the curator @bhattg by upvoting this comment, and support the community by voting on the posts made by @indiaunited.

This post received an extra 20.00% vote for delegating HP / holding IUC tokens.


It is true that such things should not be used, especially within the medical field, since any wrong result they give can end people's lives.


Congratulations @bala41288! You have completed the following achievement on the Hive blockchain and have been rewarded with new badge(s):

You have been a buzzy bee and published a post every day of the week.

You can view your badges on your board and compare yourself to others in the Ranking.
If you no longer want to receive notifications, reply to this comment with the word STOP.

Check out our last posts:

Unveiling the BuzzParty Meetup 2024 Badge

The best person to make a diagnosis is a doctor and no one else. I don't know why AI is trusted so much that people believe it can make diagnoses.
That's actually wrong.


One of the issues that actually bothers me about artificial intelligence is the fact that we are much more concerned about its advantages than its disadvantages to society.
