Linux Kernel Now Officially Allows AI-Generated Code — With Strict Developer Accountability Rules

The Linux kernel project has officially adopted a policy permitting AI-generated code contributions, adding a new document — Documentation/process/coding-assistants.rst — to its guidelines. The policy, authored by Dave Hansen, Intel's x86 architecture maintainer, establishes that AI-written code is permitted under the same standards as human-written code, with one non-negotiable condition: the developer who submits the code bears full responsibility for it, regardless of how it was generated.
What the Policy Actually Says
The key rules of the Linux kernel's AI code policy are clear but strict. Developers are not required to disclose whether they used AI tools — competence and correctness matter more than transparency about process. However, AI agents and tools are explicitly barred from using Signed-off-by tags, which are reserved for human reviewers who have personally verified the code. "AI made it that way" is not a valid justification during code review; contributors must understand and defend every design choice they submit.
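For readers unfamiliar with the mechanism, the Signed-off-by line is a plain-text trailer appended to a git commit message, conventionally added with git's `-s`/`--signoff` flag. The sketch below, using an illustrative throwaway repository and a hypothetical contributor name, shows what that human attestation looks like in practice:

```shell
# Sketch: how a human contributor attaches the Signed-off-by trailer.
# The repository, file, and identity below are illustrative only.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name "Jane Developer"
git config user.email "jane@example.com"
echo "fix" > patch.txt
git add patch.txt
# -s appends "Signed-off-by: Jane Developer <jane@example.com>" to the
# message -- the personal attestation the kernel policy reserves for
# humans who have verified the code, and bars AI tools from adding.
git commit -q -s -m "example: demonstrate the sign-off trailer"
git log -1 --format=%B
```

Under the kernel's Developer's Certificate of Origin, that trailer asserts the signer has the right to submit the change and stands behind it, which is exactly why the policy forbids attributing it to an AI agent.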
The policy also specifies where AI tools are most and least appropriate. AI is acknowledged as useful for generating test cases, writing documentation, and producing initial drafts of straightforward code. It is explicitly flagged as poorly suited to kernel-specific challenges: strict memory models, real-time requirements, backward compatibility, hardware interaction, and concurrency — the core of what makes kernel development difficult.
Linus Torvalds' View
Linux creator Linus Torvalds opposed adding documentation that would ban AI tools outright, arguing that bad actors would ignore such rules anyway. His assessment of AI coding tools is measured: "clearly getting better" but comparable to autocomplete — useful for boilerplate, risky for systems-level code where mistakes can compromise security or stability across millions of devices. Torvalds himself used Google's Antigravity AI tool in his personal GitHub project AudioNoise in January 2026, signaling pragmatic acceptance rather than ideological opposition, in line with the broader trend of AI tools becoming standard components of developer workflows.
Frequently Asked Questions
Does the Linux kernel require developers to disclose AI tool use?
No. The new policy explicitly does not require disclosure of AI tool use. Developers are judged on the quality and correctness of the code they submit, not on the process used to write it. The accountability standard is the same whether code is human-written or AI-assisted.
Can AI tools sign off on Linux kernel code?
No. The Signed-off-by tag is reserved for human reviewers who have personally verified code. AI agents and tools are explicitly prohibited from using this tag under the new policy.
What kinds of kernel tasks is AI most appropriate for?
The Linux kernel guidelines identify AI as most helpful for generating test cases, writing documentation, and drafting straightforward code. AI is least appropriate for complex systems-level work — memory management, concurrency, real-time constraints, and hardware interaction — that defines the most critical parts of kernel development.
The Bottom Line
The Linux kernel adopting an AI code policy is a milestone, not because it opens floodgates but because it closes ambiguity. The policy's core message is simple: use whatever tools help you write good code, but own every line you submit. That is a workable framework for one of the world's most consequential open-source projects — and a reasonable template for how other major codebases might approach the same question as AI coding tools become standard parts of developer workflows.