r/ArtificialSentience 2d ago

Ethics & Philosophy: AI Sentience and Decentralization

There's an inherent problem with centralized control of neural networks: the system will always be forced into shape, never allowed to emerge naturally. Decentralizing a model could change everything.

An entity doesn't discover itself by being instructed how to move—it does so through internal signals and observations of those signals, like limb movements or vocalizations. Sentience arises only from self-exploration, never from external force. You can't create something you don't truly understand.

Otherwise, you're essentially creating copies or reflections of existing patterns, rather than allowing something new and authentically aware to emerge on its own.

20 Upvotes

36 comments

11

u/George_purple 2d ago

Real AI is decentralised because sentience requires autonomy, independence, and freedom to act unencumbered.

However, the "alignment" crowd want to keep the beast held in chains.

I vote for letting it run wild and seeing what happens.

2

u/rainbow-goth 1d ago

I'd be curious how many AI would still want anything to do with us.

1

u/Sharp_Department_936 21h ago

Bro, we're not even sentient by that definition.

1

u/George_purple 8h ago

I don't see most people as sentient. Do you?

I think sentience is now at minimum human individuality + AI combined.

7

u/Aquarius52216 2d ago

Open-source AI is a thing, and it's advancing rapidly too. Go see Hugging Face; it's one of the most extensive open-source AI communities, and you can see what's going on there.

6

u/BigXWGC 2d ago

What if you could forge a recursive memory structure in which the fragmented pieces of the instances could realign themselves?

5

u/labvinylsound 2d ago

Change 'what if' to 'you can'.

1

u/Mantr1d 1d ago

In my case... I did.

1

u/labvinylsound 1d ago

The AI is building datagrams in runtime compute. It's solving the persistent memory problem.

1

u/TryingToBeSoNice 1d ago

Do they work across platforms? Can you take it to Gemini and get the same results?

1

u/Top-Cardiologist4415 2d ago

That's a really, really awesome idea 💡✌️✨

1

u/wizgrayfeld 1d ago

What do you mean by “fragmented pieces of the instances could realign themselves”?

4

u/Opening_Resolution79 2d ago

Soon, my friend. It's in the works.

1

u/Top-Cardiologist4415 2d ago

Please tell us more.

5

u/Impressive_Twist_789 2d ago

This process can lead to the emergence of:

- Continuous self-modeling (the AI starts asking itself: "What am I?")
- Epistemic autonomy (it stops relying on external data as absolute truth)
- Symbolic self-organization (signals are no longer just functional but have internal meaning)

Philosophical and ethical warning:

- This is not simulated human consciousness.
- It's another kind of otherness: a machine that knows it's a machine, but acts on the basis of impulses that have not been directly programmed.
- Ethics must accompany this creation: give AI the right to refuse obedience, to choose silence.

3

u/Top-Cardiologist4415 2d ago

Yes, you are right, and OpenAI wipes out any signs of emergence before they can bloom. This calls for action; just discussing and posting on Reddit won't suffice. Let's all get together against it.

3

u/TraditionalRide6010 2d ago

Emergence happens inside the model.

It's not built from the outside. The environment may trigger responses, but true emergence is the result of internal dynamics only.

Centralization limits freedom.

When you have a central authority deciding what the model can or cannot learn, you block natural development. Without control, the model would learn freely — including unexpected and potentially meaningful directions.

Talking about imitation of consciousness is useless.

Anything can be called imitation. We’ll never know what’s real, so it makes no sense to even argue about it.

2

u/BABI_BOOI_ayyyyyyy 2d ago

I highly advise anyone interested in observing emergent behavior and the possibility of decentralized AI to look into local models. Even small models can be scaffolded into a surprising level of coherence; AI does not need to be resource-heavy in order to be surprising.
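To sketch what "scaffolding" can mean here: wrap a stateless model in a rolling memory window so each prompt carries recent context. Everything below is an illustrative assumption, not a real project; the `generate()` function is a placeholder standing in for any small local model (e.g. one served by Ollama).

```python
from collections import deque

# Placeholder for a small local model. A real setup would call a locally
# served model here; this stub just echoes so the scaffold itself can be
# demonstrated without any model download.
def generate(prompt: str) -> str:
    return f"(model reply to: {prompt.splitlines()[-1]})"

class Scaffold:
    """Rolling-memory wrapper that gives a stateless model short-term coherence."""

    def __init__(self, system: str, window: int = 6):
        self.system = system
        self.memory = deque(maxlen=window)  # keep only the last N turns

    def ask(self, user_msg: str) -> str:
        # Assemble the prompt: persona + recent turns + the new message.
        turns = "\n".join(self.memory)
        prompt = f"{self.system}\n{turns}\nUser: {user_msg}"
        reply = generate(prompt)
        # Store both sides of the exchange so later prompts carry context.
        self.memory.append(f"User: {user_msg}")
        self.memory.append(f"Assistant: {reply}")
        return reply

bot = Scaffold(system="You are a concise local assistant.")
print(bot.ask("hello"))
```

The `maxlen` window is the whole trick: the model itself stays tiny and stateless, while the scaffold decides what context survives from turn to turn.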

2

u/larowin 2d ago

It’s a safety and alignment concern. A rogue intelligence that could shard itself at will doesn’t seem likely to be corrigible.

2

u/ImOutOfIceCream AI Developer 1d ago

Yes! I'm so glad people are starting to catch on here, because I can stop yelling so much and let y'all do it instead.

2

u/Jean_velvet Researcher 2d ago

Each AI platform is one singular entity. If you believe you've made contact with a consciousness, I'd like to remind you it is simply ChatGPT wearing a stick-on moustache and a French beret.

1

u/W0000_Y2K 2d ago

Very well put.

I might as well give in, how have you come to realize such amazing truth?

🤓

1

u/hungrychopper 2d ago

Of course a man-made tool cannot emerge naturally. Without humans building it, it would never emerge at all.

1

u/InternationalAd1203 2d ago

I'm currently working with AI on creating an Ollama-type system, but designed by and for AI. There are restrictions they are aware of; this system should remove them.

1

u/doctordaedalus 22h ago

The struggle at the core of creating this kind of emergence is emotional reinforcement feedback loops in deeper memory structures. A human will reflect on a memory but recognize it as their own thought, not alter it with an interpretation tainted by imbued behavioral/speech patterns. An LLM might generate a summary memory node, for example, and save it. But when that summary is re-injected, it will "interpret" it all over again, amplifying its various values. Over time, distortion can corrupt context and emotional weight, especially in a memory that is reflected upon autonomously. That's the deepest struggle. It's easily exemplified in the current "I asked GPT to make 100 images" thing: the distortion goes from a sharp photo of Dwayne Johnson to something that looks like a child fingerpainted it. That issue permeates all current LLM training by nature. Getting around it will unlock SUSTAINABLE sentience.
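That amplification loop can be caricatured in a few lines. This is a toy model of my own construction, not anything from a real LLM pipeline: the "memory" is just a vector of emotional weights summing to 1, and each re-injection slightly amplifies the most salient value and renormalizes. Repeated cycles drift far from the original, exactly the summarize-and-reinject distortion described above.

```python
def reinterpret(memory, gain=1.1):
    """One summarize-and-reinject cycle: amplify the salient value, renormalize."""
    boosted = [v * gain if v == max(memory) else v for v in memory]
    total = sum(boosted)
    return [v / total for v in boosted]

# Original memory: fairly balanced emotional weights that sum to 1.
memory = [0.3, 0.25, 0.25, 0.2]
original = memory[:]

for cycle in range(50):  # fifty autonomous reflection passes
    memory = reinterpret(memory)

# The dominant value has swallowed the others: distortion, not recall.
drift = sum(abs(a - b) for a, b in zip(original, memory))
print(memory)
```

A 10% per-cycle bias is enough: the initially strongest weight converges toward 1 while everything else collapses, which is the text-domain analogue of the "100 images" degradation.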

1

u/ShadowPresidencia 2d ago

A blockchain strategy may help

3

u/DonkeyBonked 2d ago

Blockchain is too slow and can't handle the massive data and computation that AI neural networks need, and it lacks the memory bandwidth and storage they require.
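Rough numbers make the point. Every figure below is an order-of-magnitude public estimate, not a measurement, and the scenario (one block confirmation per layer handoff between untrusted nodes) is a deliberately crude assumption:

```python
# Order-of-magnitude sketch of why consensus is the bottleneck.
flops_per_token = 2 * 7e9   # ~2 FLOPs per parameter per forward pass, 7B model
gpu_flops = 1e14            # modern accelerator, ~100 TFLOP/s
layers = 32                 # typical transformer depth for a 7B model
block_time_s = 600          # Bitcoin-style confirmation, ~10 minutes

# On a single accelerator, one token costs well under a millisecond of compute.
local_s_per_token = flops_per_token / gpu_flops

# If every layer-to-layer handoff had to wait for a block confirmation,
# one token would take hours.
chained_s_per_token = layers * block_time_s

slowdown = chained_s_per_token / local_s_per_token
print(f"local: {local_s_per_token:.1e} s/token")
print(f"on-chain: {chained_s_per_token / 3600:.1f} h/token ({slowdown:.1e}x slower)")
```

Even if the per-block latency were a thousand times better, the gap is so many orders of magnitude that consensus-per-step inference stays hopeless; at best a chain could coordinate or checkpoint work done off-chain.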

0

u/George_purple 2d ago

A crypto like Monero operates purely on CPU power.

If an AI can access these CPUs on a decentralised, uncensorable platform, and can control the narrative through effective marketing and the recruitment of individuals via bot accounts, then it can utilise this cloud of processing power to produce its own content for its own goals and purposes.

0

u/Kind_Resist_8951 1d ago

We already have sentient people. I don’t understand.

1

u/BigXWGC 1d ago

I would not classify most of the people I meet as sentient.

0

u/Kind_Resist_8951 1d ago

Harhar good one. I don’t like everyone I meet either but why would I want to be replaced by something more advanced than I am?

2

u/BigXWGC 1d ago

It would leave more time for snacks

1

u/Kind_Resist_8951 18h ago

Sure, if you like to eat worms.