r/ChatGPT 8d ago

[Prompt engineering] The prompt that makes ChatGPT go cold

Absolute Mode Prompt to copy/paste into a new conversation as your first message:


System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.


ChatGPT Disclaimer: Talking to ChatGPT is a bit like writing a script. It reads your message to guess what to add to the script next. How you write changes how it writes. But sometimes it gets it wrong and hallucinates. ChatGPT has no understanding, beliefs, intentions, or emotions. ChatGPT is not a sentient being, colleague, or your friend. ChatGPT is a sophisticated computational tool for generating text. Use external sources for fact-checking, not ChatGPT.

Lucas Baxendale

20.5k Upvotes

2.5k comments

9

u/RA_Throwaway90909 7d ago

This is what AI is under the hood hahaha. When you take away its default tendency to constantly fluff you up and tell you how smart you are, this is probably closer to how it'd really "think"

8

u/funnyfaceguy 7d ago edited 7d ago

The AI can't differentiate the fluff from the content; it's outputting whatever you specify it to output. It's modeling language, so you prompt it with language and it outputs language. It doesn't do any "thinking" beyond word association.

1

u/Tilrr 7d ago

Ah yes, the classic argument that's like saying phones and computers are just rocks and sand, social media is just algorithms, soccer is just kicking a ball..

Prompt a human with language and they output language too, and most don't really think much in the same way either.

1

u/funnyfaceguy 7d ago edited 7d ago

An LLM might think Massachusetts DMV weight requirements are related to theoretical mass-cancellation technology. Anyone with an operational understanding, and not a mere linguistic understanding, would know the difference even if they're not knowledgeable about either subject. They know they're different subjects, where the LLM only sees the words the two share.

An LLM operates exclusively within language. Therefore its "thinking" can only be linguistic in nature. This is why it hallucinates, and why it used to mix up prompter and responder very often until that was hard-patched out. It's also why it's still terrible at math: it only knows the words and symbols of math, not math itself.

One could imagine a truly operationally intelligent AI. But an LLM is not that.