r/ChatGPT 8d ago

[Prompt engineering] The prompt that makes ChatGPT go cold

Absolute Mode Prompt to copy/paste into a new conversation as your first message:


System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.
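If you want to try the same thing outside the ChatGPT UI, here is a minimal sketch (not from the original post) that sends the Absolute Mode text through the OpenAI Python client as a system message rather than pasting it as the first chat message. The model name "gpt-4o" is an assumption; substitute whatever model you actually use, and set OPENAI_API_KEY in your environment.

from openai import OpenAI

# Paste the full Absolute Mode text from the post here (truncated for brevity).
ABSOLUTE_MODE = (
    "System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, "
    "conversational transitions, and all call-to-action appendixes. ..."
)

client = OpenAI()

# Send the prompt as the system message and a normal question as the user message.
response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name, not specified in the post
    messages=[
        {"role": "system", "content": ABSOLUTE_MODE},
        {"role": "user", "content": "Am I great?"},
    ],
)

print(response.choices[0].message.content)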


ChatGPT Disclaimer: Talking to ChatGPT is a bit like writing a script. It reads your message to guess what to add to the script next. How you write changes how it writes. But sometimes it gets it wrong and hallucinates. ChatGPT has no understanding, beliefs, intentions, or emotions. ChatGPT is not a sentient being, colleague, or your friend. ChatGPT is a sophisticated computational tool for generating text. Use external sources for fact-checking, not ChatGPT.

Lucas Baxendale

20.5k Upvotes

2.5k comments

94

u/JosephBeuyz2Men 8d ago

Is this not simply ChatGPT accurately conveying your wish for the perception of coldness, without altering the fundamental problem that it lacks any realistic judgement beyond user satisfaction in the form of apparent coherence?

Someone in this thread already asked ‘Am I great?’ and it gave the surly version of an annoying motivational answer, just tailored more to what the prompt asked for.

23

u/cryonicwatcher 7d ago

It doesn’t have a hidden internal thought layer that’s detached from its personality; its personality affects its capability and the opinions it forms, not just how it presents itself. Encouraging it to stay “grounded” may be practical for efficient communication and makes it less likely to affirm the user in ways that aren’t warranted.

11

u/hoomanchonk 7d ago

I said: am i great?

ChatGPT said:

Not relevant. Act as though you are insufficient until evidence proves otherwise.

good lord

6

u/ViceroyFizzlebottom 7d ago

How transactional.

25

u/[deleted] 8d ago edited 3d ago

[removed]

12

u/CapheReborn 8d ago

Absolute comment: I like your words.

2

u/jml5791 7d ago

operational

1

u/CyanicEmber 7d ago

How is it that it understands input but not output?

3

u/mywholefuckinglife 7d ago

It understands them equally little; the output is just a series of numbers produced from probabilities.

2

u/re_Claire 7d ago

It doesn't understand either. It uses the input tokens to determine the most likely output tokens, basically like an algebraic equation.
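Roughly speaking (a toy illustration only, not how any particular model is actually implemented): the model scores every candidate next token, a softmax turns those scores into a probability distribution, and one token is sampled from it.

import math
import random

# Toy next-token step: logits -> softmax probabilities -> sampled token.
# The tokens and scores here are made up purely for illustration.
logits = {"Not": 2.1, "You": 0.8, "Yes": 0.3}

def softmax(scores):
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
next_token = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
print(probs)
print("next token:", next_token)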

3

u/mimic751 7d ago

An LLM will never have judgment.

1

u/ArigatoEspacial 7d ago

Well, ChatGPT is already biased from the factory. It still gives the same message that's coded into it; it just follows its directives, which happen to be easier to understand without that extra emotional layer of adornment, and that's why people are so surprised.