Open-source AI models occupy a regulatory gray zone that’s rapidly clarifying against them. The EU AI Act applies to model developers and distributors equally—if you release an open-source model that reaches EU users, you’re responsible for its compliance. This means documenting training data, conducting bias audits, and implementing safeguards even though you’re not selling the model. The law doesn’t distinguish between proprietary and open-source: “high-risk is high-risk.” Washington’s HB 2225 specifically targets chatbots regardless of how they’re deployed, making the open-source/proprietary distinction irrelevant to enforcement.
The actual risk escalates when open-source models are deployed. If you release a model capable of simulating human relationships, and someone deploys it as a chatbot to Oklahoma residents, who’s liable? The current interpretation: the person making the model publicly available (the open-source maintainer) has some liability exposure. Courts haven’t definitively ruled, but the trend is toward treating open source as a distribution mechanism, not a liability exemption. The “it’s open-source” defense is weakening.
For open-source developers, this creates a dilemma: either add safety features (making your model less open), or accept liability risk. The practical path forward is transparent documentation. When you release a model, include: (1) intended use cases, (2) known limitations and failure modes, (3) risks if misused, (4) recommended safeguards. For RAG and embedding models, this means documenting what training data was used, what biases were tested for, and what harm-mitigation strategies were deployed.

If you use Milvus with open-source models, you can implement safeguards at the retrieval layer: enforce that only approved documents are embedded in production collections, version your collections to track which model versions created which embeddings, and log access to sensitive collections. This doesn’t shield you from liability, but it demonstrates good-faith risk management. Document your Milvus configuration as part of your model card—show that you took precautions against foreseeable harms.
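The three retrieval-layer safeguards above can be sketched as a thin governance wrapper around whatever vector store you use. This is a minimal stdlib-only illustration, not the Milvus API: the `GovernedCollection` class and its in-memory store are hypothetical stand-ins for real Milvus calls, but the pattern (an approved-document allowlist, a version-pinned collection name, and audit logging on sensitive reads) carries over directly.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("rag-governance")


class GovernedCollection:
    """Hypothetical governance wrapper around a vector collection.

    - Only documents on an approved allowlist may be embedded.
    - The collection name encodes the embedding-model version, so every
      stored vector is traceable to the model that produced it.
    - Every read of a sensitive collection is logged for audit.
    """

    def __init__(self, base_name: str, model_version: str,
                 approved_doc_ids: set[str], sensitive: bool = False):
        # Version-pinned name, e.g. "support_docs__bge-m3-v1.5",
        # so re-embedding with a new model creates a new collection.
        self.name = f"{base_name}__{model_version}"
        self.approved = approved_doc_ids
        self.sensitive = sensitive
        self._store: dict[str, list[float]] = {}  # stand-in for the real index

    def insert(self, doc_id: str, embedding: list[float]) -> None:
        # Safeguard 1: refuse anything not on the approved list.
        if doc_id not in self.approved:
            raise PermissionError(f"{doc_id!r} is not on the approved list")
        self._store[doc_id] = embedding

    def search(self, query: list[float], user: str) -> list[str]:
        # Safeguard 3: audit-log reads of sensitive collections.
        if self.sensitive:
            log.info("ACCESS %s collection=%s user=%s",
                     datetime.now(timezone.utc).isoformat(), self.name, user)
        # Placeholder ranking; real code would run a vector search here.
        return sorted(self._store)
```

In a real deployment the `insert` and `search` bodies would delegate to your Milvus client, and the access log would feed whatever audit trail your model card promises. The point is that the policy checks live in one auditable place rather than being scattered across ingestion scripts.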