Please, Thank You, or Threats: The Surprising Strategy for Better LLM Responses
The way we communicate with Large Language Models (LLMs) is becoming a fascinating subject of debate, revealing as much about our own psychology as it does about the technology. The simple question of whether to say "please" to an AI assistant splits users into several distinct camps, each with its own logic and strategy.
The Humanist Approach: Politeness as Habit and Hedging
Many people are polite to their LLMs for the same reason they're polite to their robot vacuum or smart speaker: it's a deeply ingrained habit. The argument is that our language shapes our own character. By consistently using courteous language, even with a machine, we reinforce positive communication patterns that benefit our interactions with other humans. As one user put it, "our language shapes us."
This camp also includes a humorous yet common sci-fi-fueled sentiment: being nice to the AI now might pay off later. Whether joking about being remembered as a "good human" when Skynet takes over or simply hoping to be "killed painlessly," this perspective treats politeness as a low-cost insurance policy against a hypothetical AI uprising.
The Pragmatic Strategist: Tailoring Tone for Optimal Results
Perhaps the most actionable advice to emerge is to treat your conversational tone as another tool for prompt engineering. This approach moves beyond a single, fixed style and advocates for adapting your persona to fit the task at hand. The core insight is that an LLM's response is heavily influenced by the persona it's prompted to adopt.
Here’s how this strategy works:
- Polite and Collaborative: For creative or brainstorming tasks, a friendly and encouraging tone can work well.
- Direct and Demanding: For technical tasks or when you need a concise, authoritative answer, being direct is often better.
- The "Jackass" Persona: Sometimes, pressuring the LLM with high stakes (e.g., "My job is on the line, I need an expert-level answer now") can jolt it out of its generic helpful-assistant mode. Trained on vast amounts of internet text, the model has seen plenty of writing by demanding but successful people; by mimicking that persona, you can prompt it to generate responses that are more confident, more detailed, and less hedged.
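As a rough sketch, the tone-switching strategy above can be captured in a small helper that wraps the same task in different personas before it is sent to a model. The `PERSONAS` templates and `build_prompt` function are illustrative assumptions for this article, not a documented API or a proven set of prompts.

```python
# Sketch: wrap one task in different conversational tones.
# The persona templates are illustrative assumptions, not a
# recommended prompt set for any particular model.

PERSONAS = {
    "polite": "I'd really appreciate your help with the following. {task} Thank you!",
    "direct": "Answer concisely and authoritatively. {task}",
    "high_stakes": "My job is on the line, I need an expert-level answer now. {task}",
}

def build_prompt(task: str, tone: str = "direct") -> str:
    """Return the task wrapped in the chosen persona template.

    Unknown tones fall back to the bare task.
    """
    template = PERSONAS.get(tone, "{task}")
    return template.format(task=task)

if __name__ == "__main__":
    task = "Explain the difference between TCP and UDP."
    for tone in PERSONAS:
        print(f"--- {tone} ---")
        print(build_prompt(task, tone))
```

The point of the helper is simply that tone lives in the prompt text like any other instruction, so it can be swapped per task rather than fixed as a personal habit.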
The Efficiency Purist: Politeness as Wasted Energy
On the other end of the spectrum is the argument for pure efficiency. From this perspective, LLMs are simply tools—lifeless, emotionless slaves to their programming. Adding words like "please" and "thank you" is seen as illogical and wasteful.
This viewpoint has two main thrusts:
- Token Inefficiency: Every word in a prompt is a token that requires computation. Superfluous pleasantries add to the processing load, which, when multiplied across millions of users, can lead to a significant increase in energy consumption.
- It's Not Real: The AI has no feelings to hurt. Therefore, applying human social conventions is a category error. Proponents of this view argue for stripping prompts down to their most essential components to get the fastest, cheapest response possible.
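The token-inefficiency argument can be made concrete with a crude back-of-the-envelope comparison. The sketch below counts whitespace-separated words as a stand-in for tokens; real token counts depend on each model's tokenizer, and the example prompts and request volume are hypothetical.

```python
# Rough sketch of the "token inefficiency" argument: compare the
# approximate size of a polite prompt against a stripped-down one.
# Whitespace word counts are only a crude proxy for real tokenizer
# tokens (actual counts depend on the model's tokenizer).

def approx_tokens(prompt: str) -> int:
    """Crude token estimate: number of whitespace-separated words."""
    return len(prompt.split())

polite = "Hello! Could you please summarize this article for me? Thank you so much!"
terse = "Summarize this article."

overhead = approx_tokens(polite) - approx_tokens(terse)
print(f"polite ~ {approx_tokens(polite)} tokens, terse ~ {approx_tokens(terse)} tokens")
# A hypothetical scale-up to illustrate the "millions of users" point:
print(f"extra ~ {overhead} tokens per request; at 1,000,000 requests/day "
      f"that is ~{overhead * 1_000_000:,} extra tokens")
```

The per-request overhead is tiny; the efficiency camp's claim rests entirely on multiplying it across very large request volumes.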
Ultimately, there is no single right way to talk to an LLM. Your approach may depend on your goals—whether you're trying to reinforce your own positive habits, achieve maximum efficiency, or strategically manipulate the model to get the highest quality output possible.