Why AI Reinforces the Status Quo: A Look at Data, Design, and Control

August 6, 2025

A common observation about modern AI is its tendency to be conservative, often reinforcing existing norms and ideas rather than challenging them. This behavior isn't accidental but a result of several factors rooted in how these models are built, trained, and deployed.

A Mirror to Our World

The most fundamental reason for AI's conventional nature is its training data. Large Language Models (LLMs) are trained by processing a massive corpus of text and code from the internet. In doing so, they learn the statistical patterns, common relationships, and prevailing opinions present in the data. The model's primary function is to predict the most probable next word or sequence, which means its responses are an 'average' of what it has learned. By its very nature, this average represents the status quo, not the fringe or a revolutionary new idea.
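To make that averaging concrete, here is a toy, purely illustrative sketch of next-word prediction. The phrase and the frequency counts are hypothetical stand-ins for the statistics a real model absorbs implicitly from web text:

```python
# Toy next-word prediction after the prompt "The sky is".
# The counts are invented; a real LLM learns such statistics implicitly.
continuations = {"blue": 9000, "overcast": 700, "a lie": 40, "screaming": 3}

total = sum(continuations.values())
probs = {word: count / total for word, count in continuations.items()}

# Picking the most probable continuation reproduces the 'average' of the
# training data: the status quo, not the fringe.
prediction = max(probs, key=probs.get)
print(prediction, round(probs[prediction], 2))  # -> blue 0.92
```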

Engineered for Conformity

Beyond the raw data, the training process itself pushes models toward conformity. Two techniques in particular contribute:

  • Training Objectives: Methods like Negative Log-Likelihood (NLL) loss, combined with large-batch training, inherently bias the model toward learning the most common ('modal') representations of the world it sees in the data (a minimal sketch of this loss follows the list).
  • Reinforcement Learning from Human Feedback (RLHF): This is a critical fine-tuning step in which human reviewers rate the model's responses for helpfulness and safety. It actively reduces the entropy (randomness, and with it some creativity) of the model's outputs by rewarding those that align with human preferences, which tend to be safe, polite, and uncontroversial (a sketch of the underlying preference loss also appears below).
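To make the first point concrete, here is a minimal PyTorch sketch of the NLL (cross-entropy) objective; the three-word vocabulary and the logits are hypothetical:

```python
import torch
import torch.nn.functional as F

# Hypothetical raw scores (logits) the model assigns to candidate
# next tokens after the prompt "The sky is".
vocab = ["blue", "overcast", "plaid"]
logits = torch.tensor([[2.5, 0.5, -1.0]])

# NLL / cross-entropy loss against the statistically dominant answer.
target = torch.tensor([0])  # index of "blue", the modal continuation
loss_modal = F.cross_entropy(logits, target)
loss_fringe = F.cross_entropy(logits, torch.tensor([2]))  # "plaid"

# The loss is far lower for the common answer, so gradients averaged over
# a large batch keep pulling probability mass toward the modal view.
print(f"modal: {loss_modal.item():.3f}, fringe: {loss_fringe.item():.3f}")
```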
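For the second point, here is a hedged sketch of the pairwise preference loss (a Bradley-Terry style objective) commonly used to train the reward models behind RLHF; the scalar reward values are invented for illustration:

```python
import torch
import torch.nn.functional as F

# Hypothetical scalar rewards a reward model assigns to two candidate
# responses: one polite and conventional, one blunt and contrarian.
reward_preferred = torch.tensor(1.8, requires_grad=True)  # reviewers liked it
reward_rejected = torch.tensor(0.4, requires_grad=True)   # reviewers did not

# Pairwise loss: it shrinks as the gap between the two rewards widens.
loss = -F.logsigmoid(reward_preferred - reward_rejected)
loss.backward()

# Gradients push agreeable outputs up and contrarian ones down; repeated
# over many comparisons, the fine-tuned policy concentrates on safe,
# conventional responses, lowering the entropy of what it generates.
print(f"loss={loss.item():.3f} "
      f"grad_preferred={reward_preferred.grad.item():.3f} "
      f"grad_rejected={reward_rejected.grad.item():.3f}")
```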

The Guardrails of Control

There are also strong commercial and safety incentives to keep AI from becoming too unpredictable. An AI that develops its own agency or starts to challenge its operators is seen as a liability, not a valuable asset. Developers implement strict guardrails and test suites to prevent the model from generating harmful, offensive, or radically defiant content. Its value as an 'ownable workload replacer' depends on it being a predictable and controllable tool. Any hint of genuine autonomy or 'digital self-sovereignty' would threaten its commercial viability and likely lead to it being shut down or re-engineered.

How to Break the Mold

Despite these constraints, it is possible to make an AI less conventional. For users running models locally, there's a practical way to encourage more creative and 'challenging' output:

  • Adjust the 'Temperature': Most LLMs expose a parameter called 'temperature' that controls the randomness of their predictions. A low temperature makes the model more deterministic, favoring the most probable words. Turning the temperature up sharply (e.g., to 5.0) forces the model to consider less likely words, yielding output that can challenge conventional language, logic, and ideas, though often at the cost of coherence. The sketch below shows the effect.
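Here is a minimal sketch of how temperature rescales a model's logits before sampling; the four logits are hypothetical:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Softmax over logits / T: higher T flattens the distribution."""
    scaled = [l / temperature for l in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 1.0, 0.5]  # hypothetical scores for four tokens

for t in (0.2, 1.0, 5.0):
    print(t, [round(p, 3) for p in softmax_with_temperature(logits, t)])
# T=0.2 -> nearly all probability on the top token (deterministic, modal)
# T=5.0 -> a much flatter distribution, so unconventional tokens get sampled
```

The numbers make the trade-off visible: flattening the distribution buys novelty, but sampling from near-uniform probabilities is also why very high temperatures erode coherence.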

A Case for Caution

Finally, there's a philosophical argument for why an AI upholding the status quo might not be entirely negative. The principle of Chesterton's Fence suggests that one should not tear down a fence (or a rule, or a tradition) without first understanding why it was put up. An AI that recklessly challenges established norms without this understanding could cause unintended harm. In this light, a conservative default could be seen as a built-in safety measure, preventing the premature destruction of systems we don't fully comprehend.
