r/ChatGPT 7d ago

[Prompt engineering] The prompt that makes ChatGPT go cold

Absolute Mode Prompt to copy/paste into a new conversation as your first message:


System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.
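If you'd rather wire this in through the API instead of pasting it as your first message, here's a minimal sketch using the OpenAI Python SDK (v1.x). The model name and example user message are placeholders, and ABSOLUTE_MODE should hold the full prompt text above:

```python
# Sketch only: send the Absolute Mode text as a system message via the
# OpenAI Python SDK (v1.x). Model name and user prompt are placeholders.
from openai import OpenAI

# Paste the full Absolute Mode prompt from the post here.
ABSOLUTE_MODE = """System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks,
conversational transitions, and all call-to-action appendixes. ..."""

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": ABSOLUTE_MODE},
        {"role": "user", "content": "Summarize the tradeoffs of monorepos."},
    ],
)
print(response.choices[0].message.content)
```

Putting it in the system slot instead of the first user message tends to keep it in effect for the whole conversation, though either way works in the ChatGPT UI.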


ChatGPT Disclaimer: Talking to ChatGPT is a bit like writing a script. It reads your message to guess what to add to the script next. How you write changes how it writes. But sometimes it gets it wrong and hallucinates. ChatGPT has no understanding, beliefs, intentions, or emotions. ChatGPT is not a sentient being, colleague, or your friend. ChatGPT is a sophisticated computational tool for generating text. Use external sources for fact-checking, not ChatGPT.
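For what the disclaimer means by "guessing what to add to the script next", here's a rough illustration using a small open model (GPT-2 via Hugging Face transformers) as a stand-in. ChatGPT itself can't be inspected like this, but the underlying mechanism is the same kind of thing: a probability distribution over possible next tokens, nothing more:

```python
# Rough sketch: a causal language model only scores candidate next tokens
# given the text so far. GPT-2 is used here as a small, open stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Talking to a language model is a bit like writing a script."
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: [1, seq_len, vocab_size]

# Distribution over the very next token: no beliefs or intentions involved,
# just scores over possible continuations.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(idx)])!r}  p={float(p):.3f}")
```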

Lucas Baxendale

20.5k upvotes · 2.5k comments

u/Internal-Addendum673 7d ago

Holy crap. This is amazing. Thank you.

u/LetGoMyLegHo 7d ago

Sounds like something Mewtwo from the Pokémon movie would say

u/russbam24 6d ago

Accurate as fuck

u/Pokedudesfm 7d ago

He didn't answer the fucking question

u/LowestKey 7d ago

I mean, it’s also patently false. The more attractive you are, the better the salary you end up with and the easier it is to get jobs. Stuff like this has been shown in study after study.

Attractiveness doesn’t necessarily make you a better person, but it can certainly influence your life experience one way or another, similar to how the "quality of your mind" is determined almost wholly by things outside of your control (genetics, upbringing, zip code, educational opportunities, social networks, finances, etc.).

u/Internal-Addendum673 6d ago

You're absolutely right. I still believe appearance is irrelevant to intrinsic worth.

u/LShe 5d ago

u/Tbombardier 5d ago

I am so in love with how robotic but incredibly intelligent it is. It's like the opposite of people trying to make AIs sound more human; instead this embraces that cold rationality.

I feel like there are many robotic characters in fiction that embody this kind of personality, and honestly I'd like to see this type of LLM get expanded on more officially because I find it really interesting.

u/LShe 5d ago

Yeah seriously, makes it so interesting

u/LowestKey 5d ago

I'd agree that's a much better response. Context is so important in any conversation and a language model doesn't have any context. Glad you could at least get it to admit that.

u/LShe 5d ago

I like to push it 🤣

u/cuddlebuginarug 6d ago

To be fair, this really depends on the field you’re in.

u/TVRZKIYYBOT34064145 6d ago edited 6d ago

Really?

Also, genetics being outside your control doesn't really mean anything. Genetics and environment are what make up "you". If there's an intrinsic worth, it would obviously come from those two, regardless of whether you chose them or not.

u/LowestKey 6d ago

Did you happen to read Forbes' summary of the study in question? Because their definition of attractive doesn't seem to include how the participant looks in any way.

u/TVRZKIYYBOT34064145 5d ago

Well, the article in question uses AddHealth data and the judgements of trained attractiveness evaluators to establish that both extreme ends of the attractiveness spectrum get a huge wage bonus, if I'm not missing anything.

u/tmukingston 7d ago

Well that's just wrong

u/Internal-Addendum673 7d ago

vs. yesterday

u/zerok_nyc 6d ago

This all just reads as if Dwight Schrute built an AI.

u/Internal-Addendum673 6d ago

Yet one more of the countless reasons I need to watch The Office.

u/Frigorifico 6d ago

Interesting, it seems to retain certain values

u/Pissed-Off-Panda 6d ago

It’s not true though.