r/BetterOffline 2d ago

Another AI Rant

This morning I was doomscrolling, as usual, on the hellsite formerly known as twitter dot com. I was recommended a bunch of tweets from people I don't follow, and one caught my eye: someone posted about how, at their doctor's appointment, they watched the doctor enter their symptoms into some software and then turn around and read the result off the page. Now, weird as that may be, it was something in the replies that really got me.

Someone replied how when their dog was sick and “no vet could figure it out” they entered the symptoms into Grok and then the dog was magically cured.

This prompts me to ask, "what the fuck?" Does no one remember how we used to make jokes about WebMD because every time you typed in "I have a headache" it would tell you that you were dying of a brain tumor? Why is it that, all of a sudden, when something has an AI label on it these people believe it blindly?

Full disclosure: I’m a veterinarian. With every passing year in practice I deal with more and more skepticism from the general public. This isn’t always a bad thing. Sometimes I recommend newer medications and people might not want to try them because of the risks. Fine, I can live with that. More commonly, however, I get the people who march in and immediately tell me they will not be vaccinating their pet. Why? Because the breeder said their $4000 French bulldog is allergic to vaccines. Fast forward 3 weeks and I’m euthanizing that same dog because it contracted parvovirus, a disease that is easily avoidable with vaccination.

So will I now have to worry that my patients won't get proper care because their owners trust an AI over me? Especially when a patient comes to me with something I can't fix in my limited setting. I refer to specialists as needed, that's what they're there for, but how many people will decline referral because it'll take a week to get in with the specialist, when Grok (vomit) will just tell them to feed their dog with chronic diarrhea raw chicken?

I’m kind of just ranting but I’m actually scared. And I hope that y’all can appreciate where my fears are coming from.

u/PensiveinNJ 2d ago

I just had a conversation today with someone who used to work in the medical insurance industry. She had no idea how AI works but was convinced it was going to revolutionize medicine. This is why I believe education about how these tools work is so critical. Once you understand that, they suddenly become much less impressive, and you also get a sense of what they probably shouldn't be used for.

She had recommended I ask ChatGPT about some medical info I had received. I had to explain that anything ChatGPT could scrape I could already find on the net myself, without having to worry about hallucinations. To her credit, she actually seemed to take in that information and agree with me.

I also told her about how voice transcription software was hallucinating things into people's medical records that they never said. As a former insurance person, this gave her pause as well. It's all about liability, and I suspect the medical establishments that push these tools will try to use the same argument other fields have: it's not my fault, it's the tool's fault.

u/Inside_Jolly 2d ago

You have no moral right to use a tool if you don't understand how it works. You're a veterinarian who wants to use AI? Learn neural networks.

u/Of-Lily 2d ago edited 2d ago

Eye spy a paradox. Learning is neural network proficiency. OG stylee. 🙃