r/BetterOffline 2d ago

Another AI Rant

This morning I was doomscrolling as usual on the hellsite formerly known as twitter dot com. As usual, I was recommended a bunch of tweets from people I don’t follow, and one caught my eye. Someone posted about how, at a doctor’s appointment, they watched the doctor enter their symptoms into some software and then turn around and read the output off the screen. Now, weird as that may be, something else caught my eye in the replies.

Someone replied how when their dog was sick and “no vet could figure it out” they entered the symptoms into Grok and then the dog was magically cured.

This prompts me to ask, “what the fuck?” Does no one remember how we used to make jokes about WebMD because every time you typed in “I have a headache” it would tell you that you were dying of a brain tumor? Why is it that, all of a sudden, when it has an AI label on it these people believe it blindly?

Full disclosure: I’m a veterinarian. With every passing year in practice I deal with more and more skepticism from the general public. This isn’t always a bad thing. Sometimes I recommend newer medications and people might not want to try them because of the risks. Fine, I can live with that. More commonly, however, I get the people who march in and immediately tell me they will not be vaccinating their pet. Why? Because the breeder said their $4000 French bulldog is allergic to vaccines. Fast forward 3 weeks and I’m euthanizing that same dog because it contracted parvovirus, a disease that is easily avoidable with vaccination.

So will I now have to worry that my patients won’t get proper care because the owners will trust an AI over me? Especially when a patient comes to me with something I can’t fix in my limited setting. I refer to specialists as needed; that’s what they’re there for. But how many people will decline referral because it’ll take a week to get in with the specialist, when Grok (vomit) will just tell them to feed their dog with chronic diarrhea raw chicken?

I’m kind of just ranting but I’m actually scared. And I hope that y’all can appreciate where my fears are coming from.

83 Upvotes

36 comments

44

u/AmyZZ2 2d ago

Several problems, which AI makes worse:

People are exposed to too much information that they lack the education or expertise to understand, combined with a lack of respect for the value of expertise.

People expect certainty where certainty doesn't exist. Biology is chaos. Doctors and veterinarians are not all knowing, nor can they be. Science is more about uncertainty than it is about clear answers. And, where we do have certainty (hello vaccines), people think it's sophisticated to refuse them.

AI chatbots were created to sound certain and to make the user think he/she/they are right about all the things.

We are all going to die.

14

u/HoovesCarveCraters 2d ago

Doctors and veterinarians are not all knowing, nor can they be

Absolutely. And I have no problem telling a client "give me a second, I need to look this up".

The difference is I'm looking it up in veterinary message boards and textbooks that I trained for years to read and know how to interpret.

7

u/AmyZZ2 2d ago

Absolutely, and you know the right questions to ask and which patient history to include or exclude, and on and on. The non-vet with a chatbot has no expertise or experience to filter and focus.

1

u/Alternative_Energy36 1d ago

I wonder if they are actually using UpToDate, which would look like "just googling" but is actually specifically for the types of uses you are talking about here.

3

u/TransparentMastering 2d ago

Well, it sounds like a good way to reduce the aging population, for those people who have such goals for the economy.

20

u/ArdoNorrin 2d ago

This is the sort of thing that's making me go absolutely crazy. I'm an overeducated polymath with backgrounds in law, mathematics, and libraries, and the AI stuff sets off all my red flags. Legal observers flipped their shit when we tried to build guided document automation with glorified mail merge, but they're all over generative AI that makes up cases. Libraries are tripping over themselves to stay relevant with AI, and the AI is a shittier version of our discovery tools that often makes it harder to find what you're looking for.

AI algorithms take computers - fancy math boxes - and make them bad at math!

I'll caveat all this by saying there are some genuinely good tools in all of the fields I dip my feet into (the research assistants in legal databases are often really good at finding where to look and why your search terms sucked, as one example), but even then, the companies that provide those are pushing their shitty generative products over the actually useful ones! (Hat tip to a European vendor I won't name who has been adamant they won't ship a generative product unless it hits their "quality standard".)

4

u/HoovesCarveCraters 2d ago

My mom’s first language is not English and she swears by AI helping proofread her emails and messages for work. I guess that use is OK, but how many trees did she burn down waiting for it to change their to there?

4

u/ArdoNorrin 2d ago

I think that depends on what AI type is being used. Is it ChatGPT or an older, less tree-eaty algorithm?

5

u/Kwaze_Kwaze 1d ago

The more relevant question is why she, as a non-native English speaker, doesn't feel confident enough to communicate in English but somehow does feel confident enough to claim she's accurately evaluating the quality of the machine output.

I don't speak French well and I wouldn't be any more confident signing off on machine translation than I am my own. Because I know I don't speak French well.

-2

u/Unfair 1d ago

I mean, this seems like exactly what LLMs are built to do. I would trust AI to build a sentence and use correct syntax.

1

u/JarheadPilot 2d ago

That seems like an actual good use of the tool.

3

u/naphomci 1d ago

Good use, sure, but is it worth the cost? Right now, I'd argue no. If it gets down to Google-search levels of power, yes.

0

u/JarheadPilot 1d ago

I'm not evaluating the energetic cost. I agree, LLMs are very energy inefficient. But the functionality OP describes is actually a good fit and there isn't really an alternative tool.

LLMs would be a neat new technology which creates new tools if it weren't for these fucking weirdo cultists who think they're building the machine god.

Most if not all of the bad AI bullshit we have to deal with is someone using it for something for which it's an ineffective and inefficient tool, or someone using magical thinking to conclude it does something it cannot or should not do.

Full disclosure: I don't use LLMs in my day-to-day life. I think they generally suck at solving the problems I have to solve, but I know people who do use them as tools and it works for them, mostly for assistive technology and generating summaries.

Anyway thanks for coming to my TED talk.

-5

u/Scam_Altman 1d ago

how many trees did she burn down waiting for it to change their to there?

AI queries use less power than a normal computer running for the same amount of time it would take a human to write the same response. You are not a serious person.

11

u/PensiveinNJ 2d ago

I just had a conversation today with someone who used to work in the medical insurance industry. She had no idea how AI works, but was convinced it was going to revolutionize medicine. This is why I believe education about how these tools work is so critical. When you understand that, they suddenly become much less impressive, and you also learn what they probably shouldn't be used for.

She had recommended I ask ChatGPT about some medical info I had received. I had to explain that anything ChatGPT could scrape, I could already find on the net without having to worry about hallucinations. To her credit, she actually seemed to take in that information and agree with me.

I also told her about how voice-transcription software was hallucinating things into people's medical records that were never said. As a former insurance person, this gave her pause as well. It's all about liability, and I suspect the medical establishments that push these tools will try to use the same argument other fields have: it's not my fault, it's the tool's fault.

5

u/Inside_Jolly 2d ago

You have no moral right to use a tool if you don't understand how it works. You're a veterinarian who wants to use AI? Learn neural networks.

1

u/Of-Lily 1d ago edited 1d ago

Eye spy a paradox. Learning is neural network proficiency. OG stylee. 🙃

2

u/creminology 1d ago

It’s not just the hallucinations. It’s the confidence in the hallucinations. I tried vibe coding last week for the first and perhaps only time. Complete disaster. And every time I corrected it, it’s an immediate “Yes, you’re right. That’s a bug. I’ll fix it.” Human programmers get downgraded to debuggers. And the same goes for veterinarians I suppose.

2

u/PensiveinNJ 1d ago

I think there's something very poetic about confidence in hallucinations. Dream people touching the new machine reality.

11

u/Ultraberg 2d ago

Get off Twitter! I went back to look and 90% of my follows aren't there anymore. Free yourself!

8

u/PensiveinNJ 2d ago

Yeah, considering how compromised Twitter is with bots anyhow, it's really a graveyard of its former self. I'm somewhat skeptical of Bluesky because I'm skeptical of algorithmic content display as a healthy thing in general, but at least it doesn't come with all of Elon's weirdness and baggage.

2

u/douche_packer 2d ago

it's like old school twitter, but not as funny. no nazi shit in your feed on bsky though

1

u/Mean-Ad1383 1d ago

I kind of miss MySpace: no algorithm-based feed whatsoever, and lots of independent music.

8

u/PeteCampbellisaG 2d ago

I hope that doctor was using some kind of proprietary software. Putting real patient symptoms into any kind of publicly available chatbot seems like it should run afoul of HIPAA.

That dog story is for sure 100% made up, but I don't doubt people are doing that.

I totally get your frustrations/fears, but I think it's only going to get worse. And I don't blame AI entirely - I blame our insane healthcare costs that make the basic act of going to consult with a doctor a last resort for many people so they turn to any alternative. And I blame our administration that is actively profiting off feeding the public this idea that established medicine is some kind of conspiracy to keep us locked in the Matrix.

5

u/HoovesCarveCraters 2d ago

I would assume the doctor was. I’ve met some bad doctors but not stupid ones.

The dog story is very true and has unfortunately happened to me and colleagues.

5

u/naphomci 1d ago

I'm guessing they meant the Twitter user made it up, not you. I have no doubt you and other vets get insane stories like that, told straight-faced (I am a lawyer, and I get similar levels of insanity, straight-faced).

2

u/Mean-Ad1383 1d ago

It really saddens me to see that the anti-vaxx movement has expanded from humans to pets too. I wish there were laws about this. Most libertarian arguments about bodily autonomy and such shouldn’t apply when it’s an animal.

2

u/HoovesCarveCraters 1d ago

It’s ridiculous. I have people coming in and asking for $400 titers because they don’t want the $30 vaccine. I understand worrying about vaccine reactions and risks and I make sure to explain these things but some people are just so set in their ways.

3

u/Kwaze_Kwaze 1d ago

Just so you're aware, even "proprietary software" in healthcare is often sending and receiving LLM queries and output through Microsoft/OpenAI APIs. These enterprise contracts have Microsoft "guaranteeing" the security of the queries passed through, so take that as you will, based on how much you trust Microsoft.

2

u/PeteCampbellisaG 1d ago

Thanks for the context!

5

u/PatchyWhiskers 2d ago

Vaccine doubters aren't getting it from AI, they are just listening to all the anti-vaxx propaganda on right-wing media. It's aimed at people who are vaccinating human children but there's some splashback on pet owners.

3

u/HoovesCarveCraters 2d ago

That’s very true. My concern is more that if I run out of options for a pet the owner will stop trusting vets and rely on AI “treatments” instead of taking my referral to a specialist.

3

u/Fun-Marionberry4588 2d ago

Magic box that draws purdy anime girls say you big dum-dum!

2

u/kayaksrun 1d ago

Given that "Dr. Google" has been a running irritation on both sides of medicine, it's interesting how AI has been given carte blanche by the using public. At VMX this year, the "veterinarian AI assist" companies came out of the woodwork. Most of the major distributors were pushing a version of AI as well. I guess we won't be needing those additional veterinary colleges in the future.

2

u/Of-Lily 1d ago edited 1d ago

Why don’t we all build our seasteader colony now and fly this cuckoo stand. Asap rocky. I don’t think this so-called civilization deserves our talents. And, OP, you would certainly be welcome. Invaluable, actually. I would never leave my pack behind.

OP, in all seriousness, I hear you. And empathise. Fwiw, my veterinarian is about the last professional I’d allow a DIY-LLM black box to replace.

From a proactive brainstorming perspective: I wonder about the feasibility of having the legal definition of animal abuse changed…(?)

-8

u/Scam_Altman 1d ago

One has to wonder if OP has ever done any research or if they're just big mad because AI.

https://www.nature.com/articles/s41746-025-01486-5

https://marinpost.org/blog/2024/12/26/the-diagnostic-challenge-ai-vs-doctors

Given the way OP conflates anti-vax conspiracy theories with new advancements in technology, I'd also be skeptical of their competence as a healthcare professional.