The Joy and Craft of Coding: Developers Debate Programming Without LLMs

June 18, 2025

A recent Hacker News discussion, sparked by a user wondering if they were a "dinosaur" for preferring to program without Large Language Models (LLMs), dug deep into the pros and cons of AI-assisted coding. The original poster felt that using LLMs "sucks the joy out of programming" and creates a distance from the code, hindering the development of a strong mental model. This sentiment resonated with many in the community.

The Enduring Appeal of 'Old-Fashioned' Coding

Several commenters echoed the original poster's feelings, emphasizing the satisfaction derived from thinking through problems and manually typing code. User vouaobrasil articulated a strong stance, highlighting the enjoyment of the process and the belief that creative work can't be separated from the supposedly "boring" tasks that AI aims to automate. yummypaint compared using LLMs for code generation to doing code review rather than programming, arguing that it kills the joy of problem-solving, and noted a discernible comprehension gap in students who rely heavily on LLMs.

Impact on Learning and Skill Development

The discussion highlighted significant concerns about how LLM reliance affects learning, particularly for junior developers. fzwang shared that their organization has mostly banned AI coding assistants for juniors, except for specific, verifiable tasks. Their team observed that while juniors showed superficial early productivity gains with LLMs, their learning rate was much slower, and their understanding of systems remained "fuzzy," leading to long-term problems. They advocate for "natty" (natural) coding to train the brain first.

credit_guy offered a counter-perspective, arguing that LLMs are here to stay, much like IDEs, and that junior developers will eventually learn, just as previous generations adapted to new information mediums. However, fzwang elaborated that this could put a ceiling on what developers can accomplish, keeping them "in-distribution" and unaware of deeper, more complex problem-solving. archagon added that novices using LLMs might "pump out unscrutinized PRs riddled with garbage code," increasing the burden on senior reviewers, who find it frustrating to review code the contributor doesn't fully understand.

Productivity: Panacea or Pitfall?

While some, like sifuhotman2000, argued that LLMs make developers more productive and that seniors are sometimes too quick to dismiss them, many shared negative experiences. bluefirebrand observed that being less skilled and being more productive can coexist, and expressed frustration over a company mandate to use Cursor, which cratered their motivation through constant interruptions and subpar code generation. hotsauceror described LLM interactions as "wasted hours" spent on incorrect code, preferring to handle boilerplate tasks themselves or mentor a junior. rsynnott turned off Copilot after a week because its suggestions were distracting and often wrong.

bendmorris challenged the productivity claims for experienced developers, stating there is no credible evidence of significant gains and that scaffolding (a common LLM use case) is a small part of the job. For tasks like reading, designing, and maintaining code, having written it oneself is an advantage. Others, like philbo, said they prefer to refactor boilerplate away rather than generate it, questioning whether typing speed is the real bottleneck.

Strategic LLM Integration: Finding a Balance

A more nuanced approach was suggested by PaulShin, whose team distinguishes between:

  • "Architectural Thinking": The deep, creative design process that should be protected from AI interference.
  • "Translational Thinking": Repetitive work like boilerplate, test cases, or summarizing, which can be delegated to AI.

They use AI to summarize context (e.g., meeting notes) rather than write core logic, freeing up developers for more joyful, deep work. This aligns with fzwang's company policy of allowing LLMs for "in-distribution, tedious, verifiable tasks."

Ethical and Quality Concerns

Several users (soapdog, salawat, vouaobrasil, Joel_Mckay) raised ethical objections to LLMs due to concerns about intellectual property, the methods of their creation (plagiarism), and the motives of large tech companies. hotsauceror and archagon also voiced fears about a flood of "nonperformant garbage" and "enshittification" as a result of uncritical LLM use.

Conclusion: Dinosaur or Discerning Developer?

The consensus in the discussion leaned towards the idea that the original poster is not a dinosaur. Many developers and even some organizations continue to prioritize or mandate traditional coding practices, especially for foundational learning and complex problem-solving. The joy of programming, deep understanding, and concerns about code quality, learning stagnation, and ethics are significant factors tempering the wholesale adoption of LLMs. While AI tools are acknowledged as here to stay, their role is still being defined, with many advocating for caution and the preservation of human craft and critical thinking in software development.
