LLMs for Learning Code: Powerful Tutors or Harmful Crutches?

May 22, 2025

The question of whether Large Language Models (LLMs) are a boon or a bane for aspiring programmers sparked a lively Hacker News discussion. The consensus? It's not a simple yes or no. Like any powerful tool, LLMs help or harm depending on how they are used, with opinions ranging from their being a "godsend" for surmounting initial hurdles to a "footgun" that can hinder deep understanding if misused.

The Upside: How LLMs Can Accelerate Learning

Commenters shared several ways LLMs can be beneficial when approached correctly:

  • Personalized Explanations & Analogies: A standout use case is leveraging LLMs to explain new concepts in terms of what the learner already understands. One user suggested prompts like, "I am familiar with object encapsulation... If pure functional programming does not allow for mutation, how do you manage the state of objects over time?" or "Help me learn about Kubernetes. I am familiar with using docker, docker compose, and virtual machines." This tailored approach can make complex ideas more digestible.

  • Bridging Knowledge Gaps: LLMs can act as tireless tutors, answering even "trivial" questions that beginners might hesitate to ask humans. They can also help learners formulate better questions as they explore a new domain.

  • Generating Understandable Examples: Finding simple, working examples for a specific technique or framework can be challenging, as documentation or open-source projects are often complex. LLMs can provide "super basic" examples, offering a clean starting point for deconstruction and learning, which can be "surprisingly hard to find elsewhere."

  • Synthesizing Information: For topics with dense or hard-to-understand documentation, LLMs can be prompted to synthesize information into simpler terms.
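To make the functional-programming prompt above concrete: the usual answer is that pure functional code "manages state over time" by producing new values instead of mutating old ones. A minimal Python sketch of that idea (the `Account` example is illustrative, not from the discussion):

```python
from dataclasses import dataclass, replace

# An immutable "object": frozen=True makes attribute assignment raise an error.
@dataclass(frozen=True)
class Account:
    owner: str
    balance: int

def deposit(account: Account, amount: int) -> Account:
    # No mutation: return a brand-new Account with the updated balance.
    return replace(account, balance=account.balance + amount)

before = Account("ada", 100)
after = deposit(before, 50)

print(before.balance)  # 100 -- the original value is untouched
print(after.balance)   # 150 -- "state over time" is a sequence of values
```

Anchoring the new idea (immutability) to something the learner already knows (objects with fields) is exactly the tailored explanation style the commenter was recommending.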

The Downside: Potential Pitfalls and Risks

Despite the benefits, there are significant risks if LLMs are not used thoughtfully:

  • Over-reliance and Superficial Learning: The most common concern is that learners might become overly reliant on LLMs, simply copying and pasting code without truly understanding it. This can lead to a "fuzzy understanding" that compounds over time, described by one commenter as "similar to the old copy-from-StackOverflow phenomenon, but on steroids." As another user put it, "the learning happens when you bang your head... If it doesn't hurt... you're not really learning."

  • The Illusion of Progress: LLMs can give a "false sense of progress" by producing working code quickly. However, this can mask a lack of fundamental understanding and critical thinking, as "whatever mental model and thinking flaws you start with is going to be amplified."

  • Incorrect or Suboptimal Output: LLMs are not infallible. They can generate incorrect, inefficient, or outdated code. A beginner might not have the expertise to identify these flaws, potentially internalizing bad practices. One commenter noted, "I asked LLM to generate some code for me. It didn’t use generics... and gave me some shit code." Another compared it to a "tape measure that could be wrong 50% of the time."

  • Hindering Self-Discovery: The "experience and self-discovery" gained from struggling with problems is crucial for developing expertise. Outsourcing this struggle to an LLM can stunt growth.
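The generics complaint above is easy to picture. A beginner may happily accept a function hard-coded to one type without noticing that a generic version would be more reusable; a hypothetical Python sketch of the difference (not the commenter's actual code):

```python
from typing import TypeVar

T = TypeVar("T")

# The kind of code an LLM might produce: works, but only for lists of ints,
# so it must be duplicated for every other element type.
def first_int(items: list[int]) -> int:
    return items[0]

# The generic version a reviewer would ask for: one definition, any element
# type, and the type checker tracks what comes out.
def first(items: list[T]) -> T:
    return items[0]

print(first([1, 2, 3]))        # 1
print(first(["a", "b", "c"]))  # a
```

Both versions run, which is precisely the problem: nothing breaks, so only someone who already knows about generics will spot the weaker design.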

Strategies for Using LLMs Effectively in Programming Education

The discussion yielded valuable advice on how to harness LLMs productively:

  • Active Engagement is Key: Don't just ask for answers. "Dig into problems, try to understand why it was solved a specific way, ask what are the cons of doing it another way." Treat the LLM as a Socratic partner.

  • Debug and Deconstruct: Use LLM-generated code as a starting point. Actively debug it, take it apart, and understand how each piece works. "If you read what they produce, learn to debug it, and make it an active learning experience, then yes, they are useful."

  • Step-by-Step Problem Solving: Instead of asking for a complete solution, work with the LLM in a "step-by-step" manner to understand the thought process.

  • Verify and Cross-Reference: Treat LLM output with healthy skepticism. One user suggested assuming a "20% chance of bullshit." Always verify information with official documentation (RTFM) or other reliable sources. "Ask the LLM to explain, then verify by searching it yourself."

  • Focus on Fundamentals: Use LLMs to help learn the fundamentals, not to bypass them. The calculator analogy was invoked: "Is a calculator useful or harmful when learning maths?"

  • Know Your LLM's "Level": One insightful comment suggested: "If the LLM is more junior than you, then go ahead and let it full autopilot. Check the results like you would check the result from a junior. If the LLM is more senior than you, learn from it – treat it like a tutor and ask a lot of questions... Ask until you have no dumb questions left."

  • Write First, Then Review: Some suggest writing the code yourself first, then using the LLM as a "reviewer" or for discussing alternative approaches.

Educator and Industry Perspectives

One commenter, running a computer science education program, shared that they've found LLMs "mostly not worth it or actively harmful for students/junior engineers." Their students using LLMs tended to learn much slower due to a lack of "grokking the problem" and developing "fuzzy" understanding. They even restricted LLM use for juniors, requiring disclosure and a "permit" for specific situations. This highlights a real-world concern about the impact on foundational learning and the potential for seniors to rely on LLMs for code generation, thus reducing opportunities for junior developers ("one is pulling up the ladder behind them, while the other is shooting their feet").

Conclusion: A Tool to be Wielded Wisely

The consensus from the Hacker News discussion is clear: LLMs are not inherently good or bad for learning to program; their impact depends entirely on the learner's approach. When used as an interactive tool for exploration, questioning, and understanding, they can be incredibly powerful. However, when used as a crutch to avoid the necessary struggles of learning, or to "outsource your thinking," they can be detrimental. The path to becoming a good programmer still requires critical thinking, problem-solving, and a deep understanding of fundamentals—efforts that LLMs can augment but not replace.
