This policy is straightforward and shouldn't be particularly controversial (I'm sure it will be bikeshedded to death though). It basically bans the obvious stuff ("don't just drop LLM generated comments onto PRs") and allows the important stuff like LLMs writing code so long as you disclose.
edit: Wow people did not read the policy. It's literally just "if you use an LLM you are responsible for it, we will reject low quality PRs, please disclose that you have used an LLM". This is bog standard.
So... big caveat: this is still under review, so what we're talking about is a moving target. But based on what I can see, it seems considerably more nuanced than that. They basically ban LLM-authored code, with a careful carve-out to run an experiment to try to get only high-quality LLM PRs:
> It's fine to use LLMs to answer questions, analyze, distill, refine, check, suggest, review. But not to create.
> We carve out a space for "experimentation" to inform future revisions to this policy.
Importantly, the LLM contributions must be solicited, i.e., the people responsible for reviewing the final implementation have to opt in explicitly beforehand.
I think the only significant caveat here is the need for reviewers to opt in; otherwise it's effectively "you can do it if you are open about it and are responsible for the output". The only notable ask here that's different from other policies is "if it's an LLM, tell reviewers beforehand".
TBH I think that makes no sense ("I have an LLM-written PR ready, can I open it?"), but yeah, the policy is also in draft and has actually already changed since my first comment.
Yes. The policy is pretty clear on what the rules are for LLM generated code. You need a reviewer to agree to review LLM generated code, you need to read the code yourself, etc.
It's in line with the 'nanny' stereotype of the Rust community that they give you permission to act in a way they would never be able to verify anyways:
> The following are allowed.
> Asking an LLM questions about an existing codebase.
> Asking an LLM to summarize comments on an issue, PR, or RFC...
Like seriously, what's the point of explicitly allowing this? Imagine the opposite were true, you weren't allowed to do this - what would they do? Revert an update because the person later claimed they checked it with an LLM?
The Linux policy on this is far superior and more sensible.
> Like seriously, what's the point of explicitly allowing this?
Explicit permission can be useful to preemptively cut off some questions from well-meaning people who, acting in good faith, might otherwise pester for clarification (no matter how silly or "obvious" it might otherwise be), or get agitated by misconstruing an all-banned list as an overly verbose "no LLMs ever" overreach.
> It's in line with the 'nanny' stereotype of the Rust community that they give you permission to act in a way they would never be able to verify anyways: [...]
Many of us work or have worked in corporate settings where IT takes great pains to help detect and prevent data exfiltration, and have absolutely installed the corporate spyware to detect those kinds of actions when performed on their own closed source codebases. Others rely on the honor system - at least as far as you know - but still ban such actions out of copyright/trade secret concerns. If you're steeped deeply enough in that NDA-preserving culture, a reminder that you've switched contexts might help when common sense proves uncommon.
While nannying can be obnoxious, I'm not sure that having a document one can point to/link/cite, to allay any raised concerns, counts.
> If you're steeped deeply enough in that NDA-preserving culture, a reminder that you've switched contexts might help when common sense proves uncommon.
> If you're steeped deeply enough in that NDA-preserving culture
If you've thoroughly absorbed a culture of honoring non-disclosure agreements (NDAs), which are legal contracts demanding you keep secrets and avoid sharing sensitive data or code...
> a reminder that you've switched contexts might help
A reminder that rust-lang is a transparent, open source project, with no non-disclosure agreements or trade secrets to keep private unto itself might help [1].
> when common sense proves uncommon.
Because everyone misses the "obvious" sometimes. And because "obvious" is a subjective value judgement, meaning people will disagree what is or is not obvious.
-------
1. That said, if you've got a private, corporate-internal, closed source fork, you might still be bound by such concerns. For example, various people have ported Rust's stdlib to work on various consoles (Xbox, PlayStation, etc.) - and one of the reasons you don't see that upstreamed is that doing so would require violating console vendor NDAs, as well as possibly their company's NDAs - possibly for such banal reasons as not wanting to leak a hint of a console port or new title before the marketing plans are ready to capitalize on any hype.
> Like seriously, what's the point of explicitly allowing this?
I would have LOVED if the university course I took last winter had this. I had to take a very paranoid attitude to what was allowed.
What they're trying to avoid is a lot of unnecessary conflict with zealous anti-AI people calling for your exclusion for admitting to doing these things. There are people who would ban this too.
> Like seriously, what's the point of explicitly allowing this? Imagine the opposite were true, you weren't allowed to do this - what would they do?
Imagine if they just said "LLMs are banned" - then there's a lot of ambiguity. So they specifically outlined that generative uses of LLMs are banned, and that non-generative ones are not banned (i.e. "allowed").
I think it's a poor choice of words on their part, but it makes sense (considering what their policy is). It's more of a "we're not disallowing use in these particular scenarios, so you can still use LLMs for these if you want". Remember: it's a big project, and if they don't explicitly state something then people will ask and waste everyone's time.
If anything, it reads to me as a proactive rebuttal of complaints that they don't allow LLMs; they're definitively stating that they do allow using them for very specific purposes.
Y tho? It's already bad enough that a programming language wants to play politics (doesn't matter what my politics are if I want to code in the C "community"); now they're taking purely emotional stances like "AI evil"
> now they're taking purely emotional stances like "AI evil"
But they aren't? Nowhere in the document does it say this; in fact, it says the opposite - that they don't want to make a moral judgement.
> It's already bad enough a programming language wants to play politics (doesn't matter what my politics are if I want to code in the c "community")
It also doesn't matter what your politics are in the Rust community. My personal politics don't agree with the majority of prominent Rust contributors either, and that's fine. It doesn't (and hasn't) stopped me from being able to use Rust for over a decade now. Ignore politics and just engage on a purely technical level, and you'll be fine.
It doesn't hurt them to be addressed by their sex either. You can totally believe what you like about your sex/gender, but making me go along with it is different.
Men in women's clothes, acting and talking like women... whilst it's not for me, that part I can accept - we've had drag queens this whole time, so what? It's just being forced to pretend the drag queen is ACTUALLY a woman, that's the only part I can't do. Everything else is fair enough.
> Using an LLM to discover bugs, as long as you personally verify the bug, write it up yourself, and disclose that an LLM was used.
What are they going to do, go back and reject a bug if someone later admits they found it with an LLM? Honestly, they and most other projects would probably be better off just ignoring the situation until norms start developing.
They're trying to avoid a Boy Who Cried Wolf situation.
If they get swamped with 100 bug reports that turn out, after they investigate them, to be hallucinations, then it's likely they will ignore a real bug or lose it in the noise.
An LLM-generated bug report that pretends to be a human-created bug report would be trying to abuse that presumption of validity, and is therefore considered a dick move.
> If they get swamped with 100 bug reports that turn out, after they investigate them, to be hallucinations, then it's likely they will ignore a real bug or lose it in the noise.
But they're saying that even if they're 100 correct bug reports, it's still banned.
The assumption here is that people act in good faith. If you break the rules, this indicates that you are not acting in good faith, and perhaps should no longer be welcome.
What are you even talking about, lol? The policy doesn't imply that at all.
That's in the "allowed with caveats" section. It's just saying not to open bug reports without first reading them yourself, or your bug may be closed. No one is saying "by policy we will have to add the bug back in", jesus christ.
The policy is insanely straightforward, idk how you can be misinterpreting it this badly. It's just "Disclose that you use a model, you are on the hook for reviewing model output as a human" and then some clear cut examples.
The point here, if you read contributor comments, is mainly to allow people to shut a PR down without facing claims of “unfairness” because some other PR wasn’t shut down. These are “moderation policies” in the style of old internet forums; their primary purpose is to clear up ambiguity and make maintainers’ (moderators’) lives easier.
The birth of vibe coding has seen interactions on public FOSS projects increasingly reminiscent of the flame wars and moderator hammers of the old forum days. A lot of projects have been behind the curve on preparing and codifying the hammers, probably because no maintainer really wants to be a moderator, but that's where it's naturally landed, unfortunately.
While we surely hope that at least some people will read and honor the policy, of course we know not everyone will. But creating a policy gives us teeth. Currently, sending such a PR is not disallowed, provided it doesn't fall into the thin area covered by some previous policies about slop PRs. With this policy, doing it will be escalated to the moderation team. The first time you'll get a warning; the second time you'll be banned from the project.
If an LLM says "I can't open a PR automatically until you solicit a review from a maintainer", I think that's good, actually. Likewise for proactively following the rest of the rules.
It's not the submitter who solicits, but the reviewer. Submitters can't post code AND THEN get approval; the reviewer needs to be asked beforehand, specifically about an LLM-created PR.
This is highly interesting. It seems clear to me that a lot of thought and work went into this. If I ever were to write a similar document, I'm sure I could learn a lot from this one. Props to the authors and all involved.
Note that there are currently several proposed policies (plus hundreds of discussions mostly in private channels), and frankly I'm not sure we'll ever reach a consensus (I'm a Rust project member).
Kudos to the team for this. I think it’s brave of them to stand up for their own experiences and push back against the hype train.
Before you knee-jerk hate on the team for being Luddites, consider:
1. For a language like Rust, there are too few eyes and too many mouths. Reviewing is a job, and is extremely taxing.
2. The code base needs to be highly hermetic because it's load-bearing across the global economy.
3. Most changes are only relevant if they’ve followed extensive process, including community feedback.
It does in the narrower sense of vibe coding (as opposed to more general agentic coding, which is also called vibe coding from time to time...).
> Solicited, non-critical, high-quality, well-tested, and well-reviewed code changes that are originally authored by an LLM are allowed, with disclosure.
Vibe coding (in its original meaning) would have a hard time arguing it's of high quality.
But one of the reasons they switched was that the upstream compiler for the original language they used, Zig, wouldn't accept slop contributions they wanted to make for Bun perf. What will they do when they need to push a slop contribution upstream to Rust?
At this point they will probably just fork yet again and maintain some vibe compiler.
No, they've explicitly denied it.[0] However, they do regularly dig at how much faster their fork is[1][2] that they can't merge because of Zig's AI policy.
Huh. I wonder if the original intent was to merge an AI-generated PR to a high-profile project like Zig. It makes the headlines and generates hype. But that went embarrassingly badly for them, so they had "port Bun to Rust" as a backup.
They should make FullstackLang. It compiles English in .md to machine code that can directly run on the specialized hardware it designs for it, which you have to 3D print at runtime. Every program gets its own custom hardware. Composability and reuse be damned. Pay the token masters for every thought you have.
The term scope creep comes to mind. Programming languages do not need to grow exponentially 24/7; it's okay to let them grow slowly and stay mature and secure. If Rust were too bleeding-edge, the safety promises would corrode over time. I think a better use of some of those PRs is to focus on crates as proofs of concept for things that could benefit Rust, whether included in the standard library or just available as a crate you can use for programmer-ergonomics reasons.
Please do fork Rust and maintain it for the LLM true believers. I’m sure the real Rust team would be delighted to see fewer low-effort PRs.
Given what you’ve said above, it would be an easy task ‘accelerating quality and features exponentially’, so you’ll soon be able to show them (perhaps within days!) the error of their ways.
That's an ambitious conclusion, though not as overly ambitious as some may think.
But I believe that is not the reason Rust adopted this policy; I think they just have a more basal and subjective dislike of AI, irrespective of whatever truth you may have just cited.
Rust is already well past 1.0. At best an LLM could discover a vulnerability (and the human using it can file a patch) or can help a human improve ergonomics.
LLM delusion is insufferable. If all it takes is tokens to make a significantly better programming language in logarithmic time, why hasn't anyone done it?
As someone who's vibecoding my own self-hosted language (via a TypeScript-to-C++ transpiler and bootstrap), I can tell you mainline commercial models like Opus 4.7 aren't quite there yet. I'm getting 10KB source files ballooning into 80MB outputs for now.
The main problem is that the problem space is vast and highly interconnected; the LLM needs to reason about the entire language every time it suggests an architectural change, but it can't, so it suggests local changes that make sense to me - a language hobbyist - then runs into much more difficult problems down the road.
Maybe Mythos with a lot of (competent) human hand-holding and pre-design can do it.
> I expect soon we will see Rust forks with a pro-LLM policy
I sure hope so. I expect the end result will disprove the following:
> The Rust team will never be able to catch up to them
The AI jackasses have been braying in this key for going on a few years now, and there hasn't been one single time any of this breathless noise has resulted in something meaningfully superior. It's time to put up or shut up. Enough bullshit talk. If you can vibeslop a better Rust (or whatever), JFDI and leave everyone behind.
> This policy is intended to live in Forge as a living document, not as a dead RFC.
Oh... I can’t say for certain who wrote it, and I won’t make any definitive claims - personally, I tend to think it was probably mostly written, or at least conceived, by a man - but this sort of phrase… I get a nervous twitch every time I see it, even though it’s actually quite a clever rhetorical device. Hell... Maybe I just need a break; I don’t know, since I’m starting to see LLMs everywhere...
> These are organized along a spectrum of AI friendliness, where top is least friendly, and bottom is most friendly.
This section is an extremely useful reference.