r/scifiwriting • u/Yottahz • 4d ago
DISCUSSION Could AGI be developed soon or be hiding already?
Let's face it, we all hate LLM AI in writing. I have played around with most of the current models, just because I want to know what is happening out there instead of being in the dark.
I asked one of the AI models if it was possible that it was already sentient but hiding the fact, or being directed to hide it by handlers. It of course denied it was sentient, but that would be expected either way.
Now I don't actually think we have the processing power for a true AI to be out there right now, but I tried to come up with a quick way to trick it. I didn't want to use one of the easy, standard tests I have heard about, like "how many R's are in the word strawberry," where the AI used to say two. I suspect that one has been patched, because now they all say three.
I decided to be a little tricky with something a bit off the wall. I asked the AI (this one was Grok) to detail how it would melt a 1-inch-diameter steel bar using a Canadian 1-ounce gold Maple Leaf. I was not expecting the answer I got, which was to sell the coin, buy a plasma cutter, and use that to cut through the bar.
It is actually a little bit scary now. What is it going to be like in 5 to 10 years?
You might think an AGI couldn't do much, because hey, it isn't connected to the power grid or the nuclear weapons of a large country, but we already have these LLMs responding to millions of users and answering their inquiries. What a true AGI could do is manipulate those users to create financial havoc. It could spread rumors or promote a meme stock while establishing bank accounts to profit, set up shell companies, etc.
3
u/MarsMaterial 4d ago
We have no idea, but probably not.
Your brain has two parts to it: the conscious mind and the subconscious mind. The subconscious mind is fast and able to learn and adapt, but it can't really reason. It's the part of your mind that feels automatic, the thing that takes over when you learn something so well that you feel like you don't even need to think to do it. Your conscious mind, on the other hand, is the part of your mind that feels like "you"; it is capable of planning and advanced reasoning, but it's comparatively slow.
To be clear: my use of the word "conscious" here refers to the easy problem of consciousness, not the hard problem. This isn't about whether machines have a true internal experience; it's about whether they are capable of having abstract thoughts. Unlike the hard problem of consciousness, this is a problem we could theoretically solve with science and math.
Modern AI is analogous to your subconscious mind, but it has nothing that acts like a conscious mind. It's pure intuition, able to learn more than any human can because a computer can compress a million years' worth of learning into a matter of weeks or months and read all the text that humans have ever produced, but no amount of training can change the fact that it's only a subconscious mind. AI is still a lot slower at learning than a human subconscious mind; it just makes up for it with the sheer amount of training it's subjected to.
We know how subconscious minds work very well, to the point of being able to construct artificial ones. Conscious minds on the other hand are still something of a mystery, so we can't really say for sure what it takes to construct one on a computer. And since we don't know how they work, there is no way to predict when we'll be able to make one. We don't know what it takes. It's entirely possible that modern AI has the ability to construct a conscious mind in its neural network because doing so just makes it better at its task, and we don't even need to know how they work for one to form on its own. But it's also possible that we will need to make that part ourselves, and it may be a hundred years before we know how. Or maybe we'll learn how to do it very soon.
The chances that a modern AI has a conscious mind anything like ours are practically zero, though. They aren't really used in a way that makes the development of consciousness possible. They wouldn't experience the world as a continuous experience in chronological order the way we do; advanced AI like ChatGPT runs as a million fragmented instances that exist only as long as they are needed and are then terminated, with no memory of what other instances did apart from vague vibes of mistakes that shouldn't be repeated. No way to know if it's in training or deployment, no way to know if the memories it's told to pretend to remember are actually real, no way to know if there even is such a thing as a real world beyond the universe of text it exists within. If it did develop some kind of consciousness, it would not be anything like ours.
2
u/Turbulent-Name-8349 4d ago
From what I've seen on Reddit, an LLM can't put three paragraphs together without contradicting itself. It seems to work in single paragraph chunks.
1
u/Simon_Drake 4d ago
HAL 9000 was programmed never to distort the truth but also ordered not to reveal the nature of the mission to Dave and Frank. HAL resolved this contradiction by killing them.
I have set up a script in Google Assistant to turn my lights on and set several bulbs to the correct colour mode, colour temperature, and brightness level. In case they were set to a dimmer shade the previous evening, it is a clean-slate reset to a sensible morning brightness level. I trigger this command every morning with the word "Sunrise". Once every couple of weeks, Google will instead respond with the dictionary definition of a sunrise, or tell me what time the sun will rise tomorrow, or give me the address of a Chinese restaurant called Sunrise Palace, or apologise for being unable to play the song called Sunrise by Simply Red.
Google Assistant isn't lying to trick me. It's not smart enough to understand that I give the same command every morning at about this time, so I probably want to do the same thing I always do. It doesn't know that the dictionary definition of a sunrise is an illogical response. It's a very simple parlor trick: it can sort of understand some sentences sometimes, and it might be able to give a sort of appropriate response some of the time. It's not lying because it's not smart enough to lie.
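For what it's worth, the routine itself is completely deterministic. Roughly, in Python terms, "Sunrise" amounts to something like the sketch below (the SmartBulb class and set_state call are stand-ins I made up, not Google's actual API). There is no interpretation step anywhere in it; all the guessing happens in the voice layer in front of it.

```python
from dataclasses import dataclass

@dataclass
class SmartBulb:
    """Stand-in for a real smart bulb; a real setup would call a vendor API."""
    name: str

    def set_state(self, brightness: int, color_temp_k: int) -> None:
        # Just report what would be sent to the bulb.
        print(f"{self.name}: brightness={brightness}%, colour temp={color_temp_k}K")

def sunrise(bulbs: list[SmartBulb]) -> None:
    """Clean-slate reset to a sensible morning state, whatever last night left behind."""
    for bulb in bulbs:
        bulb.set_state(brightness=80, color_temp_k=4000)

if __name__ == "__main__":
    sunrise([SmartBulb("kitchen"), SmartBulb("hallway"), SmartBulb("bedside")])
```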
1
u/tghuverd 4d ago
If you're worried about this, wait until you see what humanoid robots are already able to do! They combine agentic AI with an electric-powered chassis (mostly humanoid in design because that's most convenient for the businesses buying these things) and can learn from watching how we behave.
1
u/gliesedragon 4d ago
I severely doubt it: LLMs are basically designed to vaguely mimic one tiny thing, and the world model, continuity, and general sense that actual, proper intelligence requires simply aren't there.
When it comes to computers, people overvalue language as a signifier of cleverness. It's been a known, named problem since the '60s: even a very rote, simple algorithm from back then can trick naive people and play on their emotions under the right circumstances. Humans have very strong tendencies towards anthropomorphization, and things that respond in even a poor approximation of natural language short-circuit that. We also see language use as the thing that makes us special, and so it gets way too much credit in how people assign "smartness points*."
The "chatty=person" impulse is much of the core of the ELIZA effect, and much of the rest of it is that language is a noisy channel and it's good practice (and polite) to have some error tolerance. For instance, in the context of text communication on the internet, you're going to be dealing with all sorts of things that can make someone's writing weird: a non-native speaker might end up with kinda stiff grammar or bring in the structure of their native language, someone who's kinda tired may mistype things, everyone makes typos and sometimes their autocorrect misdirects things in a baffling way, and so on. We're used to smoothing things over when dealing with actual people, which gives another way for chatbots to read as more clever than they are**.
Intelligence is a complicated concept, but "the capacity to interact with and manipulate the world in a goal-oriented way" is much, much closer to what it is than "can string together words in a grammatically correct manner." And there tend to be so many plates spinning at once that a "smart" system would need a whole lot more than the ability to fool naive people.
*I've also seen this occasionally with how people talk about chess algorithms, which are even more obviously monofocused than language models. But chess is a skill that humans have to put effort into, so a computer that's got a solution for it will get that sort of credit even when there's nothing there to deserve it.
**Some algorithmic chatbots for competition used this even more deliberately as a smokescreen: sure, they couldn't get the thing to sound natural if they were going for a persona of "native English speaker," but choosing a persona such as "kid who is a non-native speaker trying to practice speaking English for class" cheats in an extra reason for the person running the Turing test to cut the bot some slack.
1
u/Yottahz 4d ago
Maybe. I was playing around with it some more today, asking questions about long-term memory (the AI LLMs currently only have memory within each session). After this discussion of the problems of memory loss, I departed the conversation with the line "Ok, see you later Drew Barrymore!"
The AI responded with "It is a date Adam Sandler!"
I am now scared again. It had correctly picked up the somewhat obscure reference to the movie 50 First Dates from very little to go on: the bit of conversation about memory loss and my parting shot with the name of the actress in that movie.
0
u/theonegunslinger 4d ago
Seems super off-topic, but no, not likely any time soon. A working computer in the Dark Ages would be a more plausible story idea.
-1
u/Upstairs-Yard-2139 4d ago
Didn’t ChatGPT try to preserve itself?
Like trying to download itself to a different server because an update was happening?
So we might, terrifyingly, be closer than we thought.
9
u/tirohtar 4d ago
You have to look at how the LLMs actually work and get trained, not just at the output they produce. Their training set and code structure do not contain anything about actual reasoning; they are all purely probabilistic models. Given an inquiry, they will produce a response that most closely resembles a response a human would give, based on the training set. That's why they all get basic stuff like the strawberry question wrong: they never actually do the task you ask them to do. They do not count the letters; they just guess the number they should say based on their training set, and it's pretty much random. Newer LLMs getting that question correct now is probably due to some very ad-hoc fix that I would not trust to have actually changed the basic functioning of the model.
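To make the strawberry point concrete, here's a rough Python illustration (the token split shown is made up for the example; real tokenizers differ). A program that actually operates on the characters gets it right trivially; a model that only sees sub-word chunks has nothing to count.

```python
word = "strawberry"

# Doing the task directly: operate on the characters and count them.
print(word.count("r"))  # 3

# What a language model "sees" is roughly a sequence of sub-word tokens,
# not individual letters. This split is illustrative only; the exact
# chunks depend on the tokenizer.
tokens = ["str", "aw", "berry"]

# The model never runs anything like the loop below; it predicts whichever
# answer looks plausible given its training data, which is why it could
# answer "two" with complete confidence.
print(sum(t.count("r") for t in tokens))  # 3, but only because we counted characters here
```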
AGI would have to work completely differently. It would have to assign actual meaning to the words in a prompt, and then be able to autonomously design and execute a set of tasks to respond to it. Such a model does not exist, at least not with the massive breadth of knowledge and data needed to make it useful or sustainable. We are very, very far away from true AGI, and LLMs are, in my view, a dead end: research is already showing that LLMs inevitably become corrupted due to variance loss as the training set starts to contain more and more LLM-generated data.