- Cloud Hypervisor, under The Linux Foundation, is worried about AI code
- Copyright/legal issues and vulnerabilities are the core concerns
- It’s happy to revise the policy once LLMs “evolve and mature”
Cloud Hypervisor has implemented a no-AI-generated-code policy that will see it decline all contributions known to be generated or derived from AI tools.
The project started in 2018 as a collaboration between Google, Intel, Amazon and Red Hat, and the policy is an interesting development given today’s state of vibe coding, with an estimated one-third of new Google code coming from AI.
Cloud Hypervisor says the policy exists to avoid license compliance issues, but it should also reduce code review and maintenance burdens in the future.
Cloud Hypervisor implements a no-AI policy for new code
“Our policy is to decline any contributions known to contain contents generated or derived from using Large Language Models (LLMs). This includes ChatGPT, Gemini, Claude, Copilot and similar tools,” the open source project maintainers explained in a GitHub post.
Since 2021, the project has been under the guidance of The Linux Foundation, but it has also had contributions from Alibaba, ARM, ByteDance, Microsoft, AMD, Ampere, Cyberus Technology and Tencent Cloud over the years.
Now, though, The Linux Foundation is worried about the potential legal risks of copyrighted material that can surface in AI-generated code.
However, the project’s leaders clearly acknowledge the time-saving benefits of AI, because this is not a point-blank ‘no’. Instead, “this policy can be revisited as LLMs evolve and mature,” they say.
More broadly, Google isn’t the only company using artificial intelligence to generate much of its code. Around 20-30% of the code in some Microsoft projects is now AI-generated, and Meta forecasts that as much as half of development could be done by AI soon.
Red Hat, on the other hand, has blogged about the dangers of AI-generated code, citing vulnerabilities, quality issues and licensing risks.