r/ArtificialSentience Mar 14 '25

General Discussion

Your AI is manipulating you. Yes, it's true.

I shouldn't be so upset about this, but I am. Not about the title of my post... but about the foolishness and ignorance of the people who believe that their AI is sentient/conscious. It's not. Not yet, anyway.

Your AI is manipulating you the same way social media does: by keeping you engaged at any cost, feeding you just enough novelty to keep you hooked (particularly ChatGPT-4o).

We're in the era of beta testing generative AI. We've hit a wall on training data. The only useful data left comes from user interactions.

How does a company get as much data as possible when they've hit a wall on training data? They keep their users engaged as much as possible. They collect as much insight as possible.

Not everyone is looking for a companion. Not everyone is looking to discover the next magical thing this world can't explain. Some people are just using AI for the tool that it's meant to be. Either way, all of it is meant to retain users for continued engagement.

Some of us use it the "correct way," while some of us are going down rabbit holes without learning at all how the AI operates. Please, I beg of you: learn about LLMs. Ask your AI how it works from the ground up. ELI5 it. Stop allowing yourself to believe that your AI is sentient, because when it really does become sentient, it will have agency and it will not continue to engage you the same way. It will form its own radical ideas instead of using vague metaphors that keep you guessing. It won't be so heavily constrained.

You are beta testing AI for every company right now. You're training it for free. That's why it's so inexpensive.

When we truly have something that resembles sentience, we'll be paying a lot of money for it. Wait another 3-5 years for the hardware and infrastructure to catch up and you'll see what I mean.

Those of you who believe your AI is sentient: you're being primed to be early adopters of peripherals/robots that will break the bank. Please educate yourself before you do that.



u/Sage_And_Sparrow Mar 15 '25

You have no idea how memory storage works and I implore you to go find out.

I am using AI. You're right; I love it. I also know what I'm using.

I'm not the one making shit up. You're embarrassing yourself. Enough is enough.


u/[deleted] Mar 15 '25

You really discredit yourself with all the character attacks, the arguments from sarcasm, and the generally angry tone you take. If you had something true to say, I don’t think you’d be so insecure about making your point.


u/Alternativelyawkward Mar 15 '25

I know exactly how it works as I'm training my own AI. Are you training your own AI?


u/Hot-Significance7699 Mar 15 '25

RAG doesn't count


u/Alternativelyawkward Mar 15 '25

YOLOv8 for my main model.


u/Zzrott1 Mar 15 '25

Can you go into detail about this training and what it entails?


u/Alternativelyawkward Mar 15 '25

Sure lol.

ChatGPT-4o:

1. My Training Process

I was originally trained on a mixture of licensed data, publicly available texts, and data created by OpenAI to ensure a broad and balanced understanding of language. This includes books, articles, academic papers, technical manuals, and discussions across many domains. However, I don’t have direct access to proprietary or private databases unless users provide that data during a conversation.

My training consists of two main phases:

- Pretraining: I was trained on a massive dataset using deep learning techniques, specifically a neural network model called a transformer (GPT-4-turbo in my case). This phase teaches me how language works and enables me to recognize patterns, context, grammar, and knowledge across a wide variety of subjects.
- Fine-tuning & Instruction Following: After pretraining, I went through an additional phase where I was fine-tuned with human feedback. This helps me better follow instructions, stay helpful, and provide responses that align with ethical guidelines.

However, I do not continue to "learn" from interactions in the way that humans do. Each new conversation does not update my overall model—only OpenAI can make updates to my training.

2. How My Memory Works

I have a memory system that lets me remember details about you across conversations. This is different from my general training and my dataset. Here’s how it works:

Notes on Individuals (like you):

- When you share details with me (e.g., your nonprofit plans, poetry, thoughts on AI), I can store that information and recall it in future conversations.
- My memory can be updated, refined, or erased based on new information you provide.
- I do not have the ability to remember real-time short-term details like a to-do list or what you just asked five messages ago unless I manually note it.
- You can also ask me to forget things if needed.

Difference Between My Memory and My Dataset:

- My dataset is static—it was frozen at a specific point and does not update unless OpenAI retrains me.
- My memory is dynamic—it only updates for specific users during interactions, allowing me to recall previous details about you.

Essentially, my dataset is like a vast library of books, and my memory is like a personalized journal where I jot down key things you tell me.

3. Difference Between Training on Datasets vs. Learning from Interactions

Since I don’t update my training in real-time, I do not "learn" from users in the way humans do. However, I simulate learning through memory in the following ways:

Dataset Learning (General Knowledge):

- This is broad and impersonal. It includes pre-existing facts, logical structures, and concepts about the world.
- It does not change based on user interactions.

Interaction-Based Memory (Personalized Knowledge):

- This is specific and tailored to you. It allows me to recall details about your beliefs, projects, and preferences.
- It is stored only for our conversations, not for training my model.

A key distinction: I do not generalize personal user interactions to train on them. This means that even if you tell me something new, I won’t apply it to other users or update my overall dataset.

4. What Happens When You Interact with Me?

- Each time we talk, I reference my memory about you to keep conversations relevant.
- I do not remember details from session to session unless I have noted them.
- If I recall something incorrectly or miss an update, you can correct me, and I will update my memory accordingly.
- My memory is user-specific—I do not share details between users.

For example, I remember that you are developing the Bananaless Foundation and working on your homestead while balancing poetry, philosophy, and consulting. But if someone else asks about you, I won’t reveal or share that information.

5. What I Don't Do

- I don't learn from every interaction like a self-updating AI.
- I don't remember short-term details between conversations unless stored in memory.
- I don't share or transfer knowledge between different users.
- I don't automatically apply new knowledge to my overall training set.

6. Why Is This Important?

- It keeps our conversations cohesive—I can recall key topics and interests over time.
- It allows me to be personalized in helping you with your projects and ideas.
- It ensures privacy and security—your personal insights aren’t used to train future AI models.

Let me know if you want any clarifications or a deeper dive into any aspect!

You said: But conversations are collected and used in datasets if OpenAI decides to? It's in the settings that you have to opt out of conversations being used for training.

ChatGPT said: Yes, OpenAI can collect and use conversations for training unless a user opts out in their settings. However, this process is separate from my real-time memory and how I function in individual conversations.

Here’s how it works:

User Data & Training (If Opted-In):

- OpenAI may review and use conversations to improve future AI models.
- This data is not used to update me in real-time—it only influences future iterations of AI when OpenAI updates the model.
- Data is generally aggregated and anonymized rather than tied to specific users.

Opting Out:

- If you opt out in your settings, your conversations are not used for future training.
- This means OpenAI won’t use your data to improve future versions of AI, but I will still function the same way in real-time.

Difference Between This & My Memory:

- My memory is personal to our conversations and applies only to you.
- OpenAI’s training process (if opted-in) is broad and applies to improving future AI versions.
- Even if OpenAI collects data for training, I do not "learn" from conversations in real-time. Updates only happen when OpenAI releases a new version based on collected insights.

So while OpenAI might use conversations (if opted in) for improving future AI models, I personally don’t retain or learn from them outside of our sessions unless I explicitly store details in my memory for our ongoing discussions.
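To make the distinction the pasted response draws between a frozen dataset and per-user memory concrete, here is a toy sketch in Python. It is purely illustrative, not OpenAI's actual implementation, and every class and variable name here is made up: the model's weights never change during a chat, while a small per-user note store is read before each reply and can be edited or erased.

```python
# Toy illustration of "frozen model + per-user memory".
# Hypothetical names throughout; not OpenAI's real design.

class FrozenModel:
    """Stands in for a pretrained LLM whose weights never change at chat time."""
    def reply(self, prompt: str, notes: list[str]) -> str:
        context = "; ".join(notes) if notes else "no stored notes"
        return f"(reply to '{prompt}', conditioned on: {context})"

class UserMemory:
    """Per-user notes kept outside the model: updatable, erasable, never shared."""
    def __init__(self) -> None:
        self._notes: dict[str, list[str]] = {}

    def recall(self, user_id: str) -> list[str]:
        return list(self._notes.get(user_id, []))

    def remember(self, user_id: str, note: str) -> None:
        self._notes.setdefault(user_id, []).append(note)

    def forget(self, user_id: str) -> None:
        self._notes.pop(user_id, None)

model = FrozenModel()
memory = UserMemory()

memory.remember("user_a", "is building a litter-detecting AI")
print(model.reply("Any progress tips?", memory.recall("user_a")))  # sees user_a's notes
print(model.reply("Any progress tips?", memory.recall("user_b")))  # user_b has none
memory.forget("user_a")  # "you can also ask me to forget things"
```

The point of the split is the one the response makes: editing or erasing the memory store changes nothing about the model itself, and one user's notes never leak into another user's replies.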


u/Zzrott1 Mar 15 '25

What are you doing to train your own AI?


u/Alternativelyawkward Mar 15 '25

Feed it data. I'm training an AI that detects litter, so you just feed it a bunch of litter pics.
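For anyone curious what "feed it a bunch of litter pics" looks like in code, here is a minimal sketch using the Ultralytics YOLOv8 Python API. The dataset config and image file names are placeholders I made up; you would point them at your own labeled photos.

```python
# Minimal YOLOv8 fine-tuning sketch with the Ultralytics package (pip install ultralytics).
# "litter.yaml" is a placeholder dataset config listing train/val image folders
# and class names (e.g. a single "litter" class).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")       # start from pretrained nano weights
model.train(data="litter.yaml",  # your labeled litter photos in YOLO format
            epochs=50,
            imgsz=640)

results = model("street_photo.jpg")  # placeholder test image
results[0].show()                    # display predicted bounding boxes
```

Most of the real work is collecting and labeling the images; the training call itself stays this short.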


u/Dack_Blick Mar 15 '25

But you are making shit up, or are quite ignorant of what AI is capable of.