Coding with AI: Who Takes the Credit (and Blame) for LLM-Generated Code?

June 9, 2025

The increasing use of Large Language Models (LLMs) in software development has sparked a crucial conversation: how much credit can, and should, a developer take for code generated with AI assistance? This discussion from Hacker News delves into the nuances of authorship, responsibility, and practical approaches to attribution.

The Unwavering Line of Responsibility

A strong consensus among commenters is that regardless of how the code was generated, the developer integrating it assumes full responsibility. As user dalmo3 succinctly put it, "How much blame do you take when it breaks? Same thing." This sentiment is echoed by 2rsf, who states that credit taken should mirror the responsibility assumed. PeterStuer further clarifies the legal standing: "All, as the LLM has no legal agency. It is up to you to check whether the LLM output would tread on someone else's IP. There's no 'I didn't do it' backdoor here." This underscores that the developer is accountable for functionality, bugs, and any intellectual property implications.

The Attribution Debate: To Cite or Not to Cite?

Opinions diverge on whether and how to attribute code generated by LLMs:

  • Always Cite for Transparency: User turtleyacht champions the practice of always citing sources, including LLMs, emphasizing it's about "show your work." For team environments, this transparency is vital for post-mortem analyses and ensuring thoughtfulness in the codebase. They suggest that commit messages should mention tradeoffs, and prompts (or links to them, especially if on a company account) might need to be auditable if they can reproduce company-owned code. This approach treats LLM output similarly to academic references or code snippets from external sources.

  • Conditional Citation – Like Stack Overflow or Code Generators: Users byoung2 and nssnsjsjsjs draw parallels to copying from Stack Overflow or using code generators. sherdil2022 provides a nuanced approach: they cite the URL for Stack Overflow snippets. For LLM-generated code, if it's heavily edited and iterated upon, they give no attribution, feeling they've made it their own. However, for "vibe code" (copied as-is or with minor changes), they would include the prompts or LLM chat URL for reproducibility.

  • A New Definition of Authorship: andrewfromx suggests the definition of "author" is evolving. "You birth it. You watched it get created and tweaked small problems along the way. It wouldn't have been born without you. You wrote the prompts!" This perspective implies that the act of prompting and refining is a form of authorship, especially as LLM use becomes ubiquitous.

Ownership and Intellectual Property Concerns

The discussion also touches upon the complex issue of IP ownership. mattmanser raises a critical point: "If an AI wrote the code, you don't own it as AI content is not copyrightable." This has significant implications for companies relying on AI-generated code: if copyright, the traditional legal basis for protecting source code, does not attach to machine-generated output, their claim over the codebase may be weaker than they assume. How much IP protection a "vibe coded codebase" actually enjoys remains largely untested in court.

Practical Takeaways

Several practical considerations emerge:

  • Documentation: Clear documentation in commit messages about design choices, alternatives, and potentially LLM assistance can be invaluable (turtleyacht).
  • Auditability: For sensitive or company-owned code, ensure that prompts used are accessible and auditable, possibly by storing them in company systems (turtleyacht).
  • Company Policy: As LLM use grows, clear company policies on attribution and usage will become increasingly important (turtleyacht).
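One lightweight way to act on the documentation and auditability suggestions above is to record LLM involvement directly in commit metadata. The sketch below uses git's commit-message trailer convention; the trailer names (`Assisted-by`, `Prompt-Log`) and the prompt URL are illustrative, not an established standard ("Co-authored-by" is the only trailer GitHub currently recognizes for shared authorship).

```shell
# Sketch: recording LLM assistance via git commit trailers.
# Trailer names and the prompt-log URL below are hypothetical examples,
# not a standard; adapt them to whatever your team's policy specifies.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

echo 'print("hello")' > app.py
git add app.py

# First -m is the subject, second documents the tradeoff, third holds
# the attribution trailers (they must form the final block of the body).
git commit -q -m "Add greeting script" \
  -m "Tradeoff: plain print over logging, since this is a one-off tool." \
  -m "Assisted-by: LLM (heavily edited)
Prompt-Log: https://chat.example.com/share/abc123"

# The trailers are now greppable for audits or post-mortems.
git log -1 --format=%B | grep "Assisted-by"
```

Because trailers are machine-readable (`git interpret-trailers` can parse them), a policy like this makes it cheap to later answer "which commits had LLM involvement?" without a separate tracking system.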

Ultimately, while LLMs are powerful tools transforming software development, the human developer remains at the center of responsibility, even as the meaning of authorship continues to shift.