Senator Cynthia Lummis Introduces RISE Act to Clarify Liability Frameworks for AI
Senator Cynthia Lummis (R-WY) has recently introduced the Responsible Innovation and Safe Expertise (RISE) Act of 2025, a legislative proposal that would clarify liability frameworks for artificial intelligence (AI) tools used by professionals.
The bill, if passed, could compel much-needed transparency from AI developers without mandating that AI models be open source.
Professional Accountability with AI
In a press release, Lummis emphasized that the RISE Act would ensure that professionals, including physicians, attorneys, engineers, and financial advisors, remain legally responsible for the advice they provide, even when it is influenced by AI systems.
Under the proposed legislation, AI developers can shield themselves from civil liability only if they publicly release model cards: detailed technical documents that disclose essential information about an AI system, such as training data sources, intended use cases, performance metrics, limitations, and failure modes.
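For illustration only: the bill does not prescribe any particular file format, but a model card covering the disclosures described above might be organized roughly like the following Python sketch. Every field name and value here is hypothetical, not drawn from the legislation itself.

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Hypothetical sketch of the disclosures a RISE Act model card
    might cover; the bill does not mandate this (or any) structure."""
    model_name: str
    version: str
    training_data_sources: list[str]       # provenance of training data
    intended_use_cases: list[str]          # professional contexts the model targets
    performance_metrics: dict[str, float]  # e.g. benchmark scores
    known_limitations: list[str]           # documented weaknesses
    failure_modes: list[str]               # conditions under which outputs degrade

# Example instance populated with placeholder values.
card = ModelCard(
    model_name="example-clinical-assistant",
    version="2025.1",
    training_data_sources=["licensed medical literature", "public case studies"],
    intended_use_cases=["drafting clinical notes for physician review"],
    performance_metrics={"diagnostic_benchmark_accuracy": 0.91},
    known_limitations=["not validated for pediatric cases"],
    failure_modes=["hallucinated citations under ambiguous prompts"],
)
print(card.model_name, card.version)
```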
“Wyoming values both innovation and accountability; the RISE Act creates predictable standards that promote safer AI development while upholding professional autonomy,” Lummis stated in the press release.
Clear Boundaries for Immunity
While the RISE Act offers immunity for developers, it comes with clear boundaries. Developers are not protected in cases of recklessness, willful misconduct, fraud, knowing misrepresentation, or actions outside the defined scope of professional usage.
Moreover, the RISE Act requires developers to maintain ongoing accountability: they must update AI documentation and specifications within 30 days of deploying a new version or identifying a significant failure mode, reinforcing the obligation of continuous transparency.
Stopping Short of Open Source
The RISE Act, as currently drafted, does not mandate that AI models become fully open source. Developers can withhold proprietary information, provided the redacted material is unrelated to safety and each omission is accompanied by a written justification explaining the trade-secret exemption.
In a previous interview with CoinDesk, Simon Kim, CEO of Hashed, expressed concerns about centralized, closed-source AI systems that operate as black boxes. Kim highlighted the dangers of creating foundational models that are controlled by a select few individuals, likening it to creating a ‘god’ without understanding its workings.