Trustible: Where AI Governance Gets Done (https://trustible.ai/) Tue, 03 Mar 2026 14:12:50 +0000

A Governance Framework for Agentic AI https://trustible.ai/post/a-governance-framework-for-agentic-ai/ Tue, 03 Mar 2026 12:39:18 +0000

AI governance has always been about reviewing outputs before anything consequential happens. Agentic AI changes that. These systems don’t just generate content, they take action. They call APIs, execute code, send messages, and interact with software on their own. The human checkpoint that traditional governance relied on is no longer guaranteed.

Most organizations already have AI governance programs in place. The good news is you don’t need to start from scratch. But agentic AI introduces new risks around autonomy, liability, data access, and third-party interactions that existing programs weren’t built to address.

We wrote this white paper to help governance professionals understand what’s new, what’s at stake, and how to extend their programs to cover it.

Inside, you’ll find:

  • How agentic AI differs from the generative AI you’re already governing
  • The six risk areas that demand specific attention
  • A practical framework for extending your existing governance program
  • A nine-question risk assessment you can adapt for intake and periodic reviews

The post A Governance Framework for Agentic AI appeared first on Trustible.

Introducing Trustible’s New Brand: AI Governance That Accelerates Innovation https://trustible.ai/post/introducing-trustibles-new-brand-ai-governance-that-accelerates-innovation/ Wed, 18 Feb 2026 12:47:22 +0000

A New Chapter for Trustible: Built for What’s Next

When Andrew and I founded Trustible, we understood that AI would be the most transformational technology humanity had ever created. We also saw that organizations were struggling to figure out how to adopt this technology responsibly with the confidence, clarity, and trust needed to move forward. That understanding hasn’t changed. But everything else has.

Three years in, our mission remains the same: to help enterprises safely unlock all of the positive potential that AI brings to the world. But the landscape has shifted dramatically.

Organizations are now serious about AI adoption. It has moved from exploratory pilots to production systems that touch every corner of enterprise operations. Regulations have shifted from theoretical frameworks to enforceable requirements that diverge across jurisdictions. And the people responsible for governing AI are being asked to be risk leaders, data scientists, policy experts, lawyers, and internal gatekeepers for their teams, a combination of skillsets and expertise that would be unrealistic for any single person to possess. They’re being asked to do more, faster, with tools that weren’t built for this moment.

Today, we’re introducing a new brand that reflects where we’ve been, where we’re going, and what we’ve learned from the organizations we’re privileged to work with.

AI Governance Isn’t a Blocker. It’s the Path to Adopting AI at Scale.

We’ve heard versions of the same story dozens of times: “We want to adopt AI. We know we need to govern it. But we don’t know where to start, and we can’t afford to get it wrong.” And just as often: “We already have hundreds of AI use cases proposed, but governance is becoming a bottleneck.”

That tension between moving fast and staying safe is real. But it doesn’t have to be an either-or choice. 

The organizations that scale AI successfully don’t treat governance as a compliance checkbox. They treat it as the foundation of their AI strategy that makes speed possible. They know what AI exists across their organization. They can explain how it works and where the risks are. They have processes that route decisions to the right people at the right time. And when customers, boards, or regulators ask questions, they have answers.

That’s what we mean when we say AI governance isn’t “no,” it’s “go.”

Governance done right doesn’t block innovation. It creates the clarity and confidence that lets teams move faster, deploy more AI, and prove their decisions were sound.

What Makes Trustible Different

Our customers tell us we’re different in a few specific ways, and our new brand makes those differences clearer.

We’re AI-native. Trustible wasn’t retrofitted from a legacy MLOps platform or bolted on as a module to an existing GRC tool. We were built from day one to handle the unique challenges of AI: dynamic models, third-party vendors, evolving regulations, and use cases that don’t fit neatly into traditional risk frameworks.

We deliver practical intelligence. AI governance can’t be theoretical. Our platform embeds human expertise directly into how it works: continuously updated risk taxonomies and mitigation guidance, automated risk ratings, AI incident insights, and regulatory compliance frameworks created by our experts, all applied directly to the decisions teams need to make today. You get the benefit of deep governance knowledge without needing to become an expert yourself, operationalized through a purpose-built, AI-powered platform.

We understand the regulatory environment better than anyone else. AI regulations are evolving rapidly and diverging across jurisdictions. Our embedded intelligence doesn’t just track these changes; it helps guide organizations on what they actually mean for their specific AI use cases, translating complex requirements into actionable governance with human insight and automations.

We’re a partner, not just a platform. The organizations we work with aren’t just licensing software. They’re working with a team that understands AI governance as a discipline, knows where the field is headed, and actively supports their change management alongside them. We call this Service-as-a-Software. 

What This Brand Means Going Forward

This rebrand isn’t cosmetic. It’s a reflection of how we see our role in the AI ecosystem.

We’re not here to help organizations check a box. We’re here to help them adopt AI at the speed their business demands, safely, transparently, and with confidence.

We’re not here to add bureaucracy. We’re here to replace the chaos of spreadsheets, email threads, manual reviews, and tools not built for AI governance that prevent organizations from scaling AI adoption. We provide structure and risk intelligence that actually works.

And we’re not here to be the star of the show. We’re here to be the platform that gets out of the way so teams can focus on what matters: deploying AI that creates value they can trust.

What Hasn’t Changed

Our mission hasn’t changed: to give organizations the clarity to adopt AI safely and at scale.

Our commitment to being a Public Benefit Corporation hasn’t changed. We still believe AI governance is critical to steering AI adoption toward outcomes that benefit everyone, not just individual organizations.

And our focus on listening hasn’t changed. The best ideas for what Trustible should become next don’t come from us. They come from our customers using the platform every day, from our partners, and from the broader AI governance community navigating real AI governance challenges in real organizations.

What’s Next

If you’re already a Trustible customer, thank you. This brand exists because you’ve trusted us to be part of how you govern AI, and we don’t take that lightly.

If you’re evaluating AI governance platforms, we’d welcome a conversation. Not a demo-first sales pitch, but a real discussion about where you are, what you’re trying to solve, and whether we’re the right fit. 

And if you’re thinking about joining the Trustible team, now’s the time. We’re building something that matters, with people who care deeply about getting it right. 

AI governance is no longer optional. The question isn’t whether to govern AI. It’s how to do it in a way that enables the outcomes you’re trying to achieve. 

We’re here to help you answer that question.

Because AI clarity is AI velocity.

Gerald Kierce
Co-Founder & CEO
Trustible

The post Introducing Trustible’s New Brand: AI Governance That Accelerates Innovation appeared first on Trustible.

Trustible Partners with Coalition for Health AI to Accelerate Responsible AI Adoption in Healthcare https://trustible.ai/post/trustible-partners-with-coalition-for-health-ai-to-accelerate-responsible-ai-adoption-in-healthcare/ Thu, 12 Feb 2026 15:53:54 +0000

Trustible, a leading provider of AI governance software for enterprises, today announced it has joined the Coalition for Health AI’s (CHAI) Partner Program. CHAI is a provider-led coalition committed to developing industry best practices and frameworks to further innovation, safety, and security for health AI. Through the CHAI Partner Program, Trustible is helping to set a precedent for how AI models can be effectively governed and brought to market faster across institutions and populations. Trustible will integrate CHAI’s AI Governance Framework directly into its platform, enabling healthcare organizations to operationalize CHAI’s framework with structured workflows, embedded AI governance guidance, and audit-ready documentation.

Why This Matters

Healthcare stands at a pivotal moment in AI adoption. AI holds extraordinary potential to improve patient outcomes, reduce clinician burnout, accelerate research, and expand access to care. But realizing that potential requires clarity on how to deploy AI responsibly in environments where the stakes are uniquely high given the potential for harm.

Healthcare organizations face distinctive AI governance challenges: protecting patient safety and privacy, meeting complex regulatory requirements across federal and state jurisdictions, maintaining clinical validity, addressing health equity concerns, and building trust across patients, providers, and regulators. Without clear guidance on what responsible AI governance looks like in practice, many organizations struggle to move from AI ambition to confident deployment.

CHAI’s AI governance frameworks provide that clarity. Developed collaboratively by leading healthcare organizations, the frameworks translate principles into actionable practices tailored to healthcare’s unique context. Through this partnership, Trustible customers can now operationalize CHAI frameworks and guidance directly within their AI governance workflows for intake reviews, risk assessments, vendor evaluations, and compliance reporting.

“Healthcare needs AI governance that matches the pace of innovation without compromising patient trust or safety,” said Gerald Kierce, CEO and Co-Founder of Trustible. “CHAI has done the hard work of defining what responsible AI means in healthcare. We’re making it operational. Trustible customers can now move faster on AI because they have clear, purpose-built guidance on what responsible AI adoption looks like in practice.”

“I am thrilled to welcome Trustible to CHAI’s Partner Program, committed to supporting our community by advancing effective and responsible health AI,” said Brian Anderson, CHAI’s CEO. “We are driven by the engagement, expertise, and trusted capabilities of our members and the feedback of our broader health ecosystem and the public. We look forward to working together to unlock the potential benefits of AI, on a foundation of trust, safety, and security.”

What the Partnership Includes

As a CHAI Partner, Trustible will focus on three areas of collaboration:

  • Platform Integration: CHAI’s AI Governance Framework is integrated as a compliance framework within Trustible’s platform, enabling healthcare organizations to map their AI governance activities to CHAI’s guidance with clear traceability.
  • Healthcare-Specific Guidance: Trustible customers gain access to healthcare-tailored use case templates, risk assessments, vendor and model reviews, as well as tailored mitigation strategies informed by CHAI’s cross-sector expertise developed by leading healthcare organizations.
  • Education & Thought Leadership: Joint efforts to educate healthcare stakeholders on practical AI governance, including guidance on operationalizing CHAI’s frameworks across health systems, payers, life sciences, and health tech companies.

For healthcare organizations, this integration means faster, more confident AI decisions. Instead of building governance practices from scratch or adapting generic frameworks that miss healthcare’s nuances, teams can apply guidance developed by and for healthcare leaders. Trustible connects CHAI’s frameworks directly to intake workflows, risk assessments, and vendor evaluations so governance happens as AI enters the organization, not as an afterthought. This means healthcare organizations can approve low-risk AI faster, apply appropriate oversight to high-risk systems, and demonstrate compliance with both clinical and regulatory expectations. Governance becomes the enabler that helps healthcare organizations realize AI’s potential while maintaining the trust patients, providers, and regulators require.

About the CHAI Partner Program

The CHAI Partner program provides CHAI members with trusted and ready providers of data, governance platforms, services, and testing and evaluation, all relevant to AI development and/or implementation efforts. These specialized resources ensure solutions are both effective and responsible.

CHAI Partners accelerate AI adoption in healthcare by providing tools and processes that streamline development. CHAI is developing a framework to evaluate health AI solutions using consensus-based standards and best practices. Involvement in the CHAI Partner Program enables Trustible to deliver AI validation services aligned with CHAI’s best-practice frameworks to developers, providers, and payers across the CHAI network, shaping the future of responsible AI adoption.

Learn more about CHAI’s Partner Program here.

About Trustible

Trustible is where AI governance gets done. We help regulated enterprises manage AI risk, comply with regulations, and accelerate safe, responsible AI adoption through our industry-leading AI governance platform and embedded intelligence that turns governance into measurable action. We’ve raised venture capital from leading investors including Lookout Ventures, Tau Ventures, Inner Loop Capital, Alumni Ventures, FoundersX, Harlem Capital, VamosVentures, and JHH VC. At a time when AI governance is rapidly becoming a strategic priority for global enterprises, Trustible is defining how the world adopts AI safely, ethically, and at scale. Trustible is headquartered in Arlington, Virginia. Learn more at trustible.ai/

About CHAI

CHAI was started by clinicians. The coalition’s mission is to build the broadest possible consensus across the health ecosystem to help ensure health AI is trusted, secure and safe. The CHAI membership is open and rapidly expanding. Today, we consist of more than 3,000 members including health systems, patient advocacy groups, academia, and a wide range of industry start-ups and incumbents. CHAI is committed to convening and dialogue to achieve consensus. There are no limits to who can join and participate. Learn more about a CHAI membership here.

Press contact for Trustible: trustible@5wpr.com, press@trustibledev.wpenginepowered.com

Press contact for CHAI: CHAI@12080group.com

The post Trustible Partners with Coalition for Health AI to Accelerate Responsible AI Adoption in Healthcare appeared first on Trustible.

Leidos and Trustible Launch Joint Initiative to Redefine AI Governance with Agents https://trustible.ai/post/leidos-and-trustible-launch-joint-initiative-to-redefine-ai-governance-with-agents/ Wed, 04 Feb 2026 14:44:37 +0000

Collaboration applies proven AI principles to help automate governance, reduce friction, and support AI innovation and adoption across government missions.

Arlington, Va. – FEB. 4, 2026 — AI governance is too often a brake on innovation. Trustible and Leidos (NYSE: LDOS) are working to change that. Today, the companies announced a partnership to redefine AI governance through automation, demonstrating in initial engagements the ability to compress AI governance processes from weeks into hours—and in some cases minutes—while maintaining rigorous oversight and control.

AI governance refers to how organizations put guardrails around the use of AI: making sure systems are reviewed, approved, and monitored so leaders understand how they work, what risks they carry, and when they are ready to be used. It helps ensure AI is deployed responsibly, with transparency, accountability, and alignment to legal, ethical, and mission needs.

Built on Leidos’ AI capabilities deployed in real-world missions over decades, the collaboration focuses on removing friction from AI adoption while maintaining accountability. By combining Trustible’s automated AI governance platform with Leidos’ experience building agentic capabilities at scale in national missions, the initiative helps agencies unlock innovation while managing AI risk.

As government agencies respond to new federal directives calling for accelerated AI adoption alongside strong oversight, the need for governance that enables progress has become increasingly clear. The collaboration with Trustible helps operationalize governance through automation, enabling agencies to move from policy to practice more efficiently. At its core, the approach aims to ensure AI governance is outcome-driven—supporting real mission results as well as compliance.

“AI governance can’t be a manual, after-the-fact process—especially as agencies begin to adopt more autonomous and agentic systems,” said Gerald Kierce, co-founder and CEO of Trustible. “Working with Leidos, we’re using automation to streamline governance from the start—reducing friction, strengthening control, and helping agencies deploy AI faster while maintaining the oversight and risk management their missions demand.”

Accelerating Governance Timelines

In a successful proof-of-concept engagement, Leidos and Trustible showed how automated governance can reduce barriers to AI deployment. Using Trustible’s platform, Leidos compressed the initial AI governance intake process that traditionally took weeks into hours—and, in select cases, minutes—depending on system complexity and risk. The result demonstrates how automation can streamline governance workflows while preserving the rigor, accountability, and transparency required in mission-critical environments.

“AI governance needs to play a different role in mission delivery,” said Geoff Schaefer, vice president of AI strategy and governance at Leidos. “It must control risk while simultaneously removing friction. By automating core governance processes, we’re able to strengthen safeguards while reducing the barriers that have historically slowed AI adoption in complex, regulated environments.”

With more advanced agentic capabilities under development, Leidos and Trustible anticipate that governance timelines may compress further, enabling mission teams and oversight bodies to focus more on outcomes rather than process.

Governance Designed to Unlock Innovation at Scale

The joint approach is designed to support AI adoption across the missions and sectors Leidos serves—including civilian, homeland, defense, intelligence, and international partners. By automating and embedding governance directly into AI workflows, the approach scales across AI capabilities and mission contexts, enabling organizations to manage risk more consistently while advancing real-world outcomes.

Leidos has integrated Trustible’s platform into its own enterprise governance, reinforcing its commitment to delivering AI systems that are tested, secure-by-design, and accountable at scale.

About Trustible

Trustible provides commercial and government customers with an actionable AI governance platform that simplifies compliance, streamlines risk assessments, and accelerates responsible adoption. Headquartered in Arlington, VA, Trustible is backed by leading investors and is growing rapidly across public and private sectors. Visit https://www.trustibledev.wpenginepowered.com/ for more information.

About Leidos

Leidos is an industry and technology leader serving government and commercial customers with smarter, more efficient digital and mission innovations. Headquartered in Reston, Virginia, with 47,000 employees worldwide, Leidos reported annual revenues of approximately $16.7 billion for the fiscal year ending January 3, 2025. Learn more at https://www.leidos.com/

 

Certain statements in this announcement constitute “forward-looking statements” within the meaning of the rules and regulations of the U.S. Securities and Exchange Commission (SEC). These statements are based on management’s current beliefs and expectations and are subject to significant risks and uncertainties. These statements are not guarantees of future results or occurrences. A number of factors could cause our actual results, performance, achievements, or industry results to be different from the results, performance, or achievements expressed or implied by such forward-looking statements. These factors include, but are not limited to, the “Risk Factors” set forth in Leidos’ Annual Report on Form 10-K for the fiscal year ended January 3, 2025, and other such filings that Leidos makes with the SEC from time to time. Readers are cautioned not to place undue reliance on such forward-looking statements, which speak only as of the date hereof. Leidos does not undertake to update forward-looking statements to reflect the impact of circumstances or events that arise after the date the forward-looking statements were made.

 

Media Contacts

Gerald Kierce
CEO & Co-Founder
trustible@5WPR.com
202-355-4413

Brandon Ver Velde
Senior Media Relations Manager
(571) 526-6257 | brandon.p.vervelde@leidos.com

The post Leidos and Trustible Launch Joint Initiative to Redefine AI Governance with Agents appeared first on Trustible.

A Pragmatic Blueprint for AI Regulation https://trustible.ai/post/a-pragmatic-blueprint-for-ai-regulation/ Thu, 29 Jan 2026 14:15:00 +0000


An AI startup’s proposal for fair, pro-growth, pro-AI, non-partisan AI regulation

AI is one of the most transformative technologies of the century, with the potential to accelerate scientific research, improve healthcare outcomes, and help small businesses compete with larger enterprises. The United States currently leads the world in AI development. Yet despite this leadership, a significant gap has emerged between AI’s potential and its actual adoption. Many businesses remain on the sidelines, uncertain whether AI tools are reliable enough to deploy, unclear on their legal exposure, and unsure which vendors they can trust.

This adoption gap is the central challenge facing American AI policy today. It poses a direct risk to national competitiveness. China and other nations are investing heavily in AI deployment across their economies, and they will not wait for American businesses to build confidence. If the United States cannot translate its technological leadership into widespread adoption, that leadership will erode. There is also a domestic economic risk. Billions of dollars have flowed into AI companies on the expectation of transformative returns. If adoption stalls and revenue growth disappoints, a bubble correction could devastate the very industry the United States is counting on to maintain its edge.

Closing this gap requires trust. And trust requires a regulatory environment that establishes clear rules without stifling innovation. At Trustible, we define AI governance as the combination of processes, policies, and evaluations that manage and mitigate the risks of AI. Done well, governance does not slow adoption. It accelerates adoption by giving businesses the confidence to invest and deploy. Critically, trust cannot be mandated. Attempting to force AI on skeptical businesses, workers, or consumers will generate backlash. Sustainable adoption requires bringing stakeholders along willingly and building genuine confidence in the systems being deployed.

Right now, policymakers are not hitting the mark. The AI policy landscape is fragmented and uncertain. The rollout of the European Union’s AI Act has been marked by repeated debates over timing and simplification. State laws in the United States face the constant threat of federal preemption. High-profile lawsuits are working through the courts, with judges applying old frameworks to new problems. Meanwhile, the proposals on the table tend toward extremes: some are too heavy, imposing compliance burdens only the largest firms can absorb; others are too light, gesturing at concerns without creating real accountability.

The loudest voices in the debate have crowded out the reasonable middle. AI doomers treat the technology as an existential threat demanding precautionary restrictions. AI optimists dismiss concerns about harm as obstacles to progress. Neither camp addresses what most businesses actually need: a stable, predictable environment where they can adopt AI with confidence.

We call ourselves AI pragmatists. We believe AI will be genuinely transformative, but that transformation does not have to be catastrophic or ungoverned. We are not interested in hypothetical extinction scenarios, nor do we believe that market forces alone will solve every problem. Pragmatism means focusing on the actual barriers to adoption, the real harms that have materialized, and the practical compromises that can align incentives across the value chain.

At its core, good regulation allocates risk appropriately. It places accountability on those best positioned to manage it while protecting those who lack the information to protect themselves. No one wants to fly on an unregulated plane or receive care from an unlicensed professional. Thoughtful regulatory frameworks build trust in industries, and that trust allows markets to function and grow.

This paper offers policymakers a pragmatic framework built around five core positions: a shared liability model that distributes accountability across model providers, deployers, and end users; a balanced approach to copyright that protects creators while enabling beneficial AI development; principles for protecting children while building AI literacy; content provenance systems that help distinguish authentic from synthetic content; and information-sharing mechanisms that reduce uncertainty across the ecosystem. Each position reflects insights from our direct experience helping companies govern AI systems in practice, and each is designed to create conditions where responsible actors can thrive.

The post A Pragmatic Blueprint for AI Regulation appeared first on Trustible.

Trustible Leads Inaugural Sponsor Cohort for the AI Incident Database https://trustible.ai/post/trustible-leads-inaugural-sponsor-cohort-for-the-ai-incident-database/ Mon, 26 Jan 2026 14:11:02 +0000

Trustible, a leading provider of AI governance software for enterprises, today announced a partnership with the Responsible AI Collaborative (RAIC), the independent nonprofit behind the AI Incident Database (AIID). Trustible is leading RAIC’s inaugural cohort of corporate sponsors, and will integrate AIID incident data directly into its platform and collaborate with RAIC on research into AI risk and real-world governance practices.

Why This Matters

Effective AI governance requires more than monitoring models in isolation. It means understanding how AI systems fail in the real world and building processes to prevent those failures. The AIID is the definitive public record of AI-related incidents, including over 5,000 incident reports collected and curated over eight years, used by central banks, intergovernmental organizations, researchers, and practitioners worldwide.

Through this partnership, Trustible customers will be able to view AIID incident reports directly in the Trustible platform and, through Trustible’s actionable intelligence, proactively link their internal AI inventories to relevant incidents from the database. Trustible users will receive customized alerts whenever new incidents are reported for use cases, models, or vendors tracked in Trustible’s AI inventory. This enables organizations to stay ahead of the most recent AI risks and understand potential mitigation strategies in near real time, building trust and confidence in their deployments of AI.
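To make the inventory-to-incident linking concrete, here is a minimal, hypothetical sketch of the matching idea described above. The class and function names are our own illustration, not Trustible’s or the AIID’s actual API; a real integration would match on richer metadata than exact entity names.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Incident:
    """Simplified stand-in for an AIID incident report."""
    incident_id: int
    title: str
    entities: frozenset  # models/vendors named in the report

@dataclass
class InventoryEntry:
    """One AI system tracked in an organization's AI inventory."""
    name: str
    model: str
    vendor: str

def match_incidents(inventory, incidents):
    """Map each inventory entry to incident ids whose named entities
    include the entry's model or vendor (the basis for an alert)."""
    alerts = {}
    for entry in inventory:
        hits = [i.incident_id for i in incidents
                if entry.model in i.entities or entry.vendor in i.entities]
        if hits:
            alerts[entry.name] = hits
    return alerts

# Example: one tracked chatbot, two public incident reports
inventory = [InventoryEntry("support-bot", model="gpt-4", vendor="OpenAI")]
incidents = [
    Incident(101, "Chatbot hallucination in customer support",
             frozenset({"gpt-4"})),
    Incident(102, "Unrelated robotics failure", frozenset({"acme-arm"})),
]
print(match_incidents(inventory, incidents))  # {'support-bot': [101]}
```

New incident reports would be run through the same matcher as they arrive, so an alert fires only for entries actually affected.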

“We’ve long valued the RAIC’s work maintaining this resource,” said Andrew Gamino-Cheong, CTO and Co-Founder of Trustible. “Our risk and mitigation taxonomies already draw heavily on AIID data. This partnership strengthens that connection, and we’re committed to supporting RAIC’s independence, not shaping it. Their credibility exists because they’ve kept editorial control in-house, and that’s exactly how it should stay.”

“The AIID was created so companies like Trustible can motivate AI governance decisions from demonstrated risks,” said AIID founder Sean McGregor. “Trustible’s ability to link recommendations to clear statements of what companies are working to prevent supplies an answer to the all-important question of ‘why spend money on AI governance?'”

What the Partnership Includes

The Trustible and AIID partnership is focused on three areas of collaboration:

  • Platform integration: Authorized use of AIID content within Trustible’s AI governance platform.
  • Education & Thought Leadership: Support for RAIC’s operations and continued development of the database, including opportunities to educate the business and academic communities on the latest AI risks and mitigation strategies.
  • Joint research: Collaborative work on incident analysis, emerging AI risks, and governance best practices, with findings published publicly.

Our Commitments

This partnership is designed to bolster RAIC’s operations and continue the work of the AIID without interruption or influence from business partners. Trustible will have no role in RAIC’s editorial decisions, and will not influence which incidents are logged in the AIID or how incidents are evaluated. Those decisions will remain, as they always have, with RAIC’s editorial team, operating under its publicly available methodology.

For Trustible customers, all platform and organizational data will remain confidential. No customer information stored in the Trustible platform will be shared with any third party without explicit written consent. The partnership with AIID is an extension of Trustible’s commitment to building the most comprehensive, yet passive, actionable intelligence layer in AI, combining legal, policy, technical, and business intelligence into a single-pane-of-glass view.

About Trustible

Trustible is where AI governance gets done. We help regulated enterprises manage AI risk, comply with regulations, and accelerate safe, responsible AI adoption through our industry-leading AI governance platform and embedded intelligence that turns governance into measurable action. We've raised $7.69M in funding to date with support from leading investors including Lookout Ventures, Tau Ventures, Inner Loop Capital, Alumni Ventures, FoundersX, Harlem Capital, VamosVentures, and JHH VC. At a time when AI governance is rapidly becoming a strategic priority for global enterprises, Trustible is defining how the world adopts AI safely, ethically, and at scale. Trustible is headquartered in Arlington, Virginia. Learn more at trustible.ai/

About the Responsible AI Collaborative

The Responsible AI Collaborative (RAIC) is an independent nonprofit that maintains the AI Incident Database (AIID), the most widely used public repository of real-world AI harms. Over eight years, the AIID has grown to over 5,000 curated incident reports and has informed the development of national and intergovernmental AI standards. Learn more at incidentdatabase.ai.

Media Contacts: 

For Trustible:
5WPR
trustible@5wpr.com

For Responsible AI Collaborative (RAIC):
info@raicollab.org

The post Trustible Leads Inaugural Sponsor Cohort for the AI Incident Database appeared first on Trustible.

Everything You Need to Know About New York’s RAISE Act https://trustible.ai/post/everything-you-need-to-know-about-new-yorks-raise-act/ Fri, 09 Jan 2026 13:32:41 +0000 https://trustible.ai/?p=22562

New York became the second state last year to enact a frontier model disclosure law when Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act. The new law requires frontier model providers to disclose certain safety processes for their models and report certain safety incidents to state regulators, with many similarities to California’s slate of AI laws passed last fall. The RAISE Act will take effect on January 1, 2027. This article covers who must comply with the RAISE Act, what transparency obligations the law creates, and how the law will be enforced.

Scope of the RAISE Act

The RAISE Act applies to “large frontier developers” that train, or initiate the training of, frontier models. An entity is considered a large frontier developer if it (collectively with its affiliates) had gross revenue exceeding $500 million in the previous calendar year. Frontier models developed by these companies are covered by the law if they are foundational models that were “trained using a quantity of computing power greater than 10^26 integer or floating-point operations.” The law is limited to frontier models that were “developed, deployed, or operat[e] in whole or in part in New York state.” This means that the RAISE Act will reach only a handful of model providers, such as OpenAI, Anthropic, and Meta.

The law’s key requirements are also anchored around “catastrophic risks” posed by frontier models. The RAISE Act defines these as risks that are foreseeable and material to the frontier model throughout its lifecycle, and that “materially” cause death or serious injury to 50 or more people, or cause more than $1 billion in damage, from a single incident. The harm caused by a catastrophic risk must come from specific incidents, such as providing expert-level assistance in creating or releasing a chemical, biological, radiological, or nuclear weapon. There are some limited exceptions for what counts as a catastrophic risk, such as frontier model output information that is publicly available or lawful federal government activity. The catastrophic risk language imposes additional limits on the law’s applicability and obligations.

Key Transparency and Reporting Requirements

Frontier model developers that are covered under the law must disclose certain information about their frontier models. The law requires that frontier model developers develop, implement, and publicly disclose a frontier AI framework that describes how the developers address certain safety activities, such as an assessment for thresholds that could trigger a catastrophic risk, mitigations that can be applied for catastrophic risks, and processes for updating the frontier AI framework. 

Frontier model developers are also required to update their frontier AI frameworks annually, as well as when their frontier models are materially modified. Updates to the framework prompted by model modifications require a published disclosure and justification within 30 days of the changes. Before a new or substantially modified version of a model is deployed, the frontier model developer must publish a transparency report on its website that contains information such as how consumers can communicate with the developer, the model’s release date, intended model uses, and model use restrictions.

The RAISE Act also imposes reporting obligations on frontier model providers that are impacted by critical safety incidents. These incidents include unauthorized access to or modification of model weights that causes death or bodily injury, harm that results from a catastrophic risk, loss of model control that results in death or bodily injury, or a model that uses deceptive techniques against the frontier developer to subvert its controls or monitoring. Critical safety incidents must be reported to state regulators within 72 hours of determining that an incident has occurred. Incidents that pose an imminent risk of death or serious injury must be reported within 24 hours.

Enforcement and Penalties

The law empowers the Attorney General to bring civil suits for violations and explicitly states that it does not create a private right of action. Penalties can be as high as $1 million for a first violation and up to $3 million per subsequent violation. The law does not prevent frontier model developers from asserting that another “person, entity, or factor” caused the alleged harm.

FAQs About the RAISE Act

How does the RAISE Act compare to California’s SB-53?

The RAISE Act and SB-53 are substantially similar, with some very minor differences. SB-53 has a 15-day reporting period for critical incidents, whereas the RAISE Act allows 72 hours. Penalties under SB-53 are capped at $1 million. SB-53 establishes whistleblower protections for employees at frontier model companies who submit complaints about violations of the law, whereas the RAISE Act does not address this specifically (note: there may be protections codified elsewhere under New York state law). The RAISE Act also explicitly scopes the law around models developed or deployed within New York state, whereas SB-53 does not include similar language.

How does the RAISE Act interact with the White House AI executive order?

Governor Hochul signed the RAISE Act in the wake of President Trump’s Ensuring a National Policy Framework for AI Executive Order (EO), which seeks to prohibit states from enacting their own AI laws. The EO directs the Department of Justice (DOJ) to identify state AI laws that unconstitutionally regulate interstate commerce and bring legal challenges against them. Disclosure requirements for AI companies (i.e., the RAISE Act) are specifically mentioned as a category of law that will face evaluation from the DOJ. While the EO cannot prevent states from actually enacting AI laws, the threatened lawsuits and funding cuts are meant to deter them. It is possible that the anticipated legal challenges to the law were a motivating factor for Governor Hochul to sign it.

What does the RAISE Act mean for AI governance professionals?

The law targets disclosure requirements for frontier model developers, which means in the immediate future there may not be explicit requirements for downstream deployers. However, as the model developers begin implementing their AI frameworks, it is possible that third party agreements or their terms of service may impose new reporting obligations on downstream actors. For instance, the model providers may shift some risk identification responsibilities to downstream deployers and users because they would be better suited to understand how risks are realized in the real world.   

The post Everything You Need to Know About New York’s RAISE Act  appeared first on Trustible.

Everything You Need to Know About the Executive Order on a National AI Policy Framework (2025) https://trustible.ai/post/everything-you-need-to-know-about-the-executive-order-on-a-national-ai-policy-framework-2025/ Mon, 15 Dec 2025 17:48:15 +0000 https://trustible.ai/?p=22559

TL;DR — On December 11, 2025, President Trump signed an Executive Order directing the federal government to build a “minimally burdensome” national framework for AI and to push back against state AI laws the Administration views as harmful to innovation. 

The EO takes a novel approach via Executive Branch authority, creating an AI Litigation Task Force and asking the U.S. Department of Commerce to evaluate state AI laws and identify “onerous” laws (explicitly citing laws that require models to “alter their truthful outputs”). As the stick, the EO seeks to tie federal funding and grants to state compliance, and directs the FCC and FTC to consider federal reporting, disclosure, and preemption positions. The EO will almost certainly produce litigation and political pushback rather than offer immediate regulatory clarity in the short term, and it has the potential to hamper AI innovation in the long term.

In this piece, we’ll break down what’s included in the EO, the potential flashpoints and ramifications, why this matters for broader AI adoption, and what actions businesses can take today to stay ahead of the curve while this battle rages in the courts.

What the EO Does

  • Directs the Attorney General to create an AI Litigation Task Force (within 30 days) to identify and challenge state AI laws on commerce-clause, preemption, or other grounds.
  • Orders the Secretary of Commerce to publish an evaluation of state AI laws (within 90 days) and identify “onerous” laws, including those that require models to alter truthful outputs or that may raise First Amendment concerns.
  • Obligates Commerce to issue a Policy Notice conditioning BEAD broadband funds on state compliance, and directs agencies to assess whether discretionary grants should be conditioned on states not enacting or enforcing certain AI laws.
  • Requires FCC to initiate a proceeding to consider a federal reporting and disclosure standard that would preempt conflicting state laws.
  • Requires FTC to issue a policy statement explaining when state laws requiring alterations to truthful AI outputs are preempted by the FTC Act’s prohibition on deceptive acts or practices.
  • Directs the Administration to prepare legislative recommendations and work with Congress for a uniform federal AI framework while carving out certain state prerogatives (child safety, state procurement, some infrastructure).

Why This Matters

The EO is an aggressive, Executive Branch attempt to replace a growing patchwork of state AI rules with a single federal floor as part of the Administration’s AI Action Plan released this summer. It aims to remove barriers to AI innovation under the goal of remaining nationally competitive with China and other nation states in what’s shaping up to be the space race of the 21st century. This is the second attempt at instituting a moratorium, after a previous Senate Republican attempt in the summer failed 99-1, and it continues to be a contentious topic across bipartisan lines. 

The EO comes after a revival attempt was contemplated earlier this month as an addition to the must-pass National Defense Authorization Act (NDAA) or tied to other appropriations in the lead up to short-term appropriations expiring in January. Congress largely balked at that effort, and Trump opted to take unilateral action instead.

If successful, it would create a national compliance environment for frontier model builders and providers, as well as Big Tech as a whole. If it fails, litigation and state challenges will create years of multi-jurisdictional uncertainty. For businesses, the immediate effect is likely more legal risk and less operational certainty at a time when buyer trust in AI is already fragile.

Who’s Impacted by This EO, and When? 

The first step of the EO establishes the AI Litigation Task Force, which within 90 days will identify which states and which state laws the Administration views as in contradiction to the goals of the EO (enforcing the “policy of the United States to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI”). Realistically, this broad policy could be applied to any and all state-level AI laws or to a specific set of targets. But the Administration likely has its sights set on California’s recent slew of AI laws targeting model developers, such as SB 53, and other politically convenient targets like Colorado, and could send up a warning flare to New York should Governor Hochul sign the RAISE Act or successor proposals. However, it’s notable that this EO isn’t completely pre-empting state law; there are carveouts for state AI laws such as those that address child safety, infrastructure, state-level procurement of AI solutions, and other areas as the Administration deems appropriate.

It’s conceivable that on Day 90 or Day 91, the states deemed in contradiction may face Executive Branch action, including impacts to BEAD funding and lawsuits. But multiple states aren’t waiting; they are already preparing a coordinated court challenge arguing that the approach infringes on states’ rights and that federal action unlawfully constrains state legislative authority.

On publication of this EO, no state or business is directly impacted today; it’s still business as usual, and laws are still in effect. Until this is resolved in the courts or through further Executive action, the status quo remains.

Section by Section: An Analysis

Section 1 (Purpose) frames state regulation as an innovation threat and specifically criticizes laws that the Administration says could force models to produce “false” outputs, citing Colorado’s algorithmic-discrimination language. This political framing signals that easing regulatory burdens will be prioritized over state experimentation on harms.

Section 2 (Policy) states the high-level objective: sustain U.S. AI dominance with minimal regulatory burden. It is useful rhetoric but offers little operational clarity.

Section 3 (AI Litigation Task Force) institutionalizes litigation as a policy instrument. Expect the Task Force to identify state laws for challenge and to coordinate federal suits that will likely produce uneven judicial outcomes across jurisdictions.

Section 4 (Evaluation of State AI Laws) requires Commerce to identify laws that require models to alter truthful outputs or that could violate the First Amendment. The EO’s characterization of anti-discrimination interventions as forcing “false” outputs will be heavily contested; courts may reject broad readings equating mitigation of discriminatory impact with compelled false speech.

Section 5 (Restrictions on State Funding) conditions BEAD and potentially other discretionary grants on state compliance. Using federal funding levers to shape state law raises anti-commandeering and Spending Clause concerns that are likely to be litigated.

Sections 6 and 7 (FCC & FTC roles) place the FCC and FTC at the center of the federal push to set disclosure and deceptive-practices norms. Both agencies’ statutory authority to regulate AI model providers is contestable; recent Supreme Court precedent narrowing agency power will complicate aggressive rulemaking.

Section 8 (Legislation) asks for legislative recommendations but acknowledges political limits and preserves certain state authorities. Comprehensive federal AI legislation remains unlikely in the near term.

As we mentioned, the EO will be subject to numerous lawsuits. Legal issues at play include:

  • Preemption without explicit congressional direction. Congress generally has the authority to override state laws with federal laws. However, it’s unclear if the Executive Branch can pre-empt state laws without Congress. The EO directs federal agencies to look at existing law for a pre-emptive hook, but if language isn’t explicit, then the Executive Branch may not be able to read it into the law. 
  • Anti-commandeering of state legislatures. The Executive Branch cannot force states into acting in a specific manner, and this cuts both ways: states also cannot be forced into not acting. There have also been court battles over conditioning federal funds on enacting federal policy at the state level, which is exactly what the EO attempts to do with BEAD funding.
  • Interstate commerce authority. Congress generally regulates activities “in interstate commerce,” and those activities have seemingly grown in the last few decades. However, the EO would be asking courts to assert that Congress has exclusive authority to regulate AI despite not having passed a law to support that claim. Moreover, the Trump Administration will assert that states are regulating activities in other states, which they cannot generally do (known as the dormant commerce clause).
  • Agency rulemaking authority. The EO directs the FCC to begin a proceeding on federal AI model standards and disclosures that override state laws. The Communications Act would not likely support the FCC’s endeavors. The Supreme Court has also restricted agency rulemaking authority, which will make it more difficult for the FCC to act. The FTC is also instructed to look at ways to preempt state law, but the FTC Act does not have a broad pre-emption for state laws. The FTC would need to find a direct conflict with these states’ laws to assert that pre-emption exists.
  • Compelled speech under the First Amendment. Disclosure laws are already heavily litigated because of First Amendment compelled speech issues (i.e., being forced to say something when you ordinarily would not). There is a reasonable argument that model providers should disclose certain pieces of information, but courts could decide the best mechanism for that is through private party contracts and not regulations.

Trustible’s Take

  • Short-term uncertainty is the most likely outcome of this EO. Litigation and agency reviews will keep the regulatory landscape in flux for the immediate future. The EO does not create the operational trust signals organizations need to be confident in their deployment and use of AI, such as clear liability rules or safe harbors, so buyer caution is likely to continue, if not intensify. Organizations will likely respond defensively with stronger contractual protections, deeper governance, and additional insurance for assurance and liability mitigation. As well, startups will be more exposed than larger firms that can absorb legal and compliance costs, which creates an innovation barrier.
  • The Trump Administration has contextualized this EO as “pre-empting” state laws. However, it’s unclear if the Executive Branch can pre-empt state laws without Congress. We point out in our analysis that, while the EO directs agencies to look at existing law for a pre-emptive hook, if the language isn’t explicitly there then the Executive Branch may not be able to read it into the law. 
  • The legal battles will almost certainly focus on the Interstate Commerce implications, but don’t sleep on arguments against anti-commandeering. Essentially, the Executive Branch cannot force states into doing something, and this cuts both ways: states cannot be forced into NOT doing something. Stay tuned to see if courts will address this particular issue.
  • The rallying cry continues to be unleashing AI innovation, but is this EO achieving that end? The current landscape heavily favors the “buyer beware” mentality, which does not give businesses sufficient assurances to integrate AI into their operations. Yes, some are protecting themselves with new contractual provisions or AI insurance, but that does not close some of the larger liability gaps.

What Businesses Should Do Now

  1. Building trust is key. Companies should be laser-focused on demonstrating the trustworthiness of their AI tools to customers and end users, because confidence in AI is still relatively low.
  2. Strengthen contracts. Companies should take nothing for granted and do comprehensive reviews of their contract templates to make sure clauses addressing warranties and indemnities are updated for AI tools and that responsibilities for AI management and oversight are clearly stated. 
  3. Document governance. Maintain detailed records on AI testing and evaluations, red-team reports, model cards, and audit trails. It is also important to provide public disclosures about the types of documentation you maintain.
  4. Design for layered compliance. Companies should assume state level rules will take effect, which means they should proceed with implementing their compliance programs. 
  5. Engage with rulemaking. Participate in public forums when it makes sense, such as file comments in proceedings that may stem from this EO or joining a coalition of like-minded businesses.
  6. Review insurance. It is not enough to just have cyber insurance; companies that use AI to support their business operations should get AI-specific insurance.

Bottom Line

The Administration’s EO is the most recent action in a string of many that tests the bounds of federalism, missing the mark on one fundamental truth: that the solution to creating a more dynamic, competitive, and pro-innovation AI economy is collaboration and responsible regulation in partnership with states – not via executive fiat, and not without Congress. Pitting states and the federal government on sides of this debate, rather than partners, actually reduces competitiveness, introduces friction, and does the exact opposite of what the EO sets out to achieve.

The EO is a novel federal effort to set a national floor for AI policy, using litigation, funding conditions, and agency proceedings to displace state rules. But because it rests on legally contested pillars, it is more likely to produce years of litigation and regulatory friction than immediate clarity. Organizations should treat this moment as an escalation of regulatory risk: tighten and strengthen governance that’s agile enough to operate under a shifting legal map, yet flexible enough to adapt to ongoing innovation.

The post Everything You Need to Know About the Executive Order on a National AI Policy Framework (2025) appeared first on Trustible.

The Path to Agentic Governance: Innovations, Lessons Learned, and Our 2025 Milestones https://trustible.ai/post/the-path-to-agentic-governance-innovations-lessons-learned-and-our-2025-milestones/ Thu, 11 Dec 2025 02:03:01 +0000 https://trustible.ai/?p=22535

Summary

  • AI governance became operational this year, moving from principles and pilots to real production as enterprises deployed AI deeper into workflows, decisions, and customer experiences.
  • Trustible delivered the foundation organizations needed for this shift, strengthening intelligence, collaboration, automation, and change-management capabilities so governance teams can run continuous, scalable programs and support faster AI adoption.
  • Our customers gained unprecedented visibility and control, cutting AI footprint discovery time nearly in half, accelerating cross-functional reviews, and reducing manual governance workload with expert-driven automation.
  • Year Two governance emerged as the new frontier, focused on monitoring drift, managing ongoing compliance, adapting to evolving regulations, and sustaining program health as AI systems and organizations grow more dynamic.
  • Looking ahead, Trustible is preparing for the era of agentic AI governance, introducing an orchestration layer that detects meaningful signals, highlights emerging risks, suggests interventions, integrates across existing risk and tech stacks, and empowers every stakeholder to contribute clearly and confidently.

A Look Back at 2025

In 2025, Trustible delivered the continuous, scalable programs needed for faster AI adoption at the same time that AI governance itself was shifting from principles and pilots to real production.

Our strengthened intelligence, collaboration, automation, and change management capabilities helped enterprises deploy AI deeper into workflows, decisions, and customer experiences. 

Outcomes included:

  • Customers are gaining unprecedented visibility and control, cutting AI inventorying time nearly in half, accelerating cross-functional reviews, and reducing their manual governance workload with expert-driven automation.
  • As AI systems and organizations grow more dynamic, Year Two governance has emerged as the new frontier, focusing on monitoring drift, managing ongoing compliance, adapting to evolving regulations, and sustaining program health.

Preparing for the era of agentic AI governance, we are introducing an orchestration layer that detects meaningful signals, highlights emerging risks, suggests interventions, and integrates across existing risk and tech stacks, all while empowering every stakeholder to contribute clearly and confidently.

As enterprises moved rapidly from principle-driven frameworks and controlled pilot projects into production, a decisive shift in the enterprise AI landscape occurred. AI now touches core workflows, customer interactions, and high-stakes decisions. Autonomy is increasing, risks are more complex, and expectations from regulators and boards are rising just as fast.

Clearly, governance is no longer optional; it’s an operational capability that organizations must run, measure, and improve the same way they run cybersecurity or traditional GRC.

Trustible was built for this moment. Our mission has always been to safely accelerate enterprise adoption of AI. Over the past year, we’ve translated that mission into capabilities that teams draw tangible value from every day. As customers shifted from governance theory to governance practice, we focused on the four areas where organizations most often struggle: intelligence at scale, collaboration, automation, and change management.

1. Intelligence at Scale

The challenge: Turn episodic assessment into a continuous, informed process.

AI governance is inherently cross-functional. It touches legal, security, risk, data, engineering, procurement, privacy, compliance, operations, and product — often in a single use case review. But in most organizations, these teams don’t share a common workspace, process, or language. That lack of structure creates friction, delays, and confusion over both ownership and process.

Customers repeatedly told us the same thing: getting people involved wasn’t the hard part. It was aligning those people around a coordinated workflow. Tasks arrived out of sequence and ad hoc, stakeholders were overloaded with information they didn’t need, and ownership was unclear. Even motivated organizations struggled to maintain momentum across teams.

This problem isn’t about willingness; it’s about infrastructure.

How we addressed the challenge:

To give organizations the clearest possible view of their AI landscape, we strengthened the Trustible platform’s intelligence layer with:

  • Additional framework support, including for the Singapore Model AI Governance Framework, Databricks AI Governance Framework, U.S. National Security AI Framework, and others.
  • A unified risk and mitigation model that captures inherent and residual risk, shows how mitigations map across multiple threats, and highlights remaining exposure.
  • A structured Model Evaluations module that supports documentation, testing, validation, and performance tracking over time, enabling real governance of model behavior.
  • Expanded Vendor Intelligence, with category-level scoring, transparency into scoring logic, improved documentation, and a redesigned vendor workflow.
  • A more powerful dashboard and reporting experience that surfaces insights across risk posture, AI deployment activity, departmental adoption, workflow performance, and benefit analysis.
  • Flexible filtering across dashboards, inventories, and reports, giving teams the ability to quickly segment their AI footprint by any attribute, pattern, or linked asset and supporting executive visibility into an organization’s AI outcomes.
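As a rough illustration of the unified risk and mitigation model above, the sketch below computes residual risk by applying every mitigation mapped to a threat. The class names, threat labels, and effectiveness scores are invented for this example; they are not Trustible’s actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Mitigation:
    name: str
    effectiveness: float        # fraction of risk reduced (0.0-1.0); illustrative
    threats: list[str] = field(default_factory=list)  # one mitigation, many threats

@dataclass
class Risk:
    threat: str
    inherent: float             # 1 (low) to 5 (critical), before any mitigation

def residual_risk(risk: Risk, mitigations: list[Mitigation]) -> float:
    """Apply every mitigation mapped to this threat; what remains is exposure."""
    score = risk.inherent
    for m in mitigations:
        if risk.threat in m.threats:
            score *= (1.0 - m.effectiveness)
    return round(score, 2)

mitigations = [
    Mitigation("Human review of outputs", 0.5, ["hallucination", "bias"]),
    Mitigation("PII redaction at intake", 0.6, ["data leakage"]),
]
print(residual_risk(Risk("hallucination", 4.0), mitigations))  # 2.0
print(residual_risk(Risk("data leakage", 5.0), mitigations))   # 2.0
```

A real model would also track mitigation status and supporting evidence; the multiplicative reduction here is just one common way to express residual exposure.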

The outcome:

  • Organizations reduced the time required to compile a full AI inventory by 40–60% after centralizing use cases, models, and vendors into Trustible.
  • Governance teams identified high-risk or poorly documented use cases 2–3x faster thanks to standardized residual-risk scoring and mitigation mapping.
  • Model and vendor evaluations became more complete and audit-ready, with 30–50% fewer documentation gaps compared to pre-Trustible processes.
  • Several teams uncovered redundant and overlapping use cases during onboarding, ultimately reducing duplicative use cases by 10–15% in their first year.

2. Collaboration

The challenge: Build an efficient, cross-functional infrastructure.

Before organizations can govern their AI, they need to know what exists, who owns it, how it works, and what it touches. That seems obvious, but use cases emerge organically, teams experiment on their own, third-party vendors quietly introduce AI into products, and providers constantly change capabilities.

Challenges from outside the enterprise are just as great. Many organizations don’t have the resources to keep up with changes to regulations, standards, legislative action, emerging risks, AI incidents, legal cases and more.

A fragmented operational landscape stalls effective governance for lack of clarity. The uncertainty slows decision-making, makes leaders nervous, confuses teams, and blocks progress on the very controls and safeguards organizations want. Our customers describe it as “trying to govern in the dark.” Without capacity and embedded intelligence, governance becomes reactive and is often implemented too late.

How we addressed the challenge:

We invested in making Trustible the place where governance work gets done collaboratively. Key improvements included:

  • Task groups that enforce a logical, sequential workflow, ensuring stakeholders act at the right time, with the right context, and without receiving premature notifications.
  • A simplified contributor experience that limits what each user sees to only what they need to complete their tasks, preventing overwhelm, reducing friction, and, for access control purposes, ensuring users only see data relevant to their roles.
  • A structured departments system that improves reporting, ownership clarity, and visibility across business units.
  • Redesigned vendor assessment and use case intake workflows that support internal and external participation while making risk scoring more transparent and repeatable.
  • Identity and access improvements, including better user management, contributor invitations, refined permissions, SSO & SAML 2.0 support, and branded organizational settings.

The outcome:

  • Cross-functional task completion times improved by 35–50% once task groups were enforced and contributors only received notifications when work was actually ready.
  • Contributor confusion and rework decreased by up to 70% after role-specific views limited users to the tasks and context they needed.
  • Vendor assessments that previously spanned multiple weeks were completed faster under the redesigned workflow and clearer scoring categories.
  • Governance programs onboarded new business units twice as fast using departments and more structured ownership metadata.

3. Automation

The challenge: Reduce processing, increase evaluation.

Many teams, even with strong governance, struggle under the operational burden. Intake feels repetitive. Documentation requires repeatedly chasing down the same details. Vendor reviews mean sifting through long questionnaires. Periodic reviews get lost in calendars. And governance teams spend most of their time processing information rather than evaluating it. Customers repeatedly told us: “We spend more time handling the mechanics of governance than doing governance itself.”

But there is a path forward: governance teams spend a disproportionate amount of time on use cases that are familiar, predictable, or low-risk (roughly 80 percent). These need oversight, but not bespoke analysis. Meanwhile, the novel or complex cases that require deeper governance are deprioritized and delayed in favor of competing priorities and quick wins.

One customer offered an apt metaphor: “Governance should be a conveyor belt. The system should pick up everything, sort what’s familiar, and only hand the novel or high-risk items to us.”

Trustible classifies familiar patterns, pre-populates known attributes, applies standard mitigations, and elevates use cases that truly require human review. Cycle times shrink, consistency increases, governance becomes sustainable at enterprise scale, and the conveyor belt metaphor becomes reality.
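The conveyor-belt triage described above reduces to a routing rule: match known patterns, pre-populate their attributes and mitigations, and escalate everything else to a human. A minimal sketch, with made-up pattern names and risk tiers:

```python
# Illustrative pattern library: familiar use-case shapes and the standard
# attributes and mitigations pre-populated for them on intake.
KNOWN_PATTERNS = {
    "chatbot-faq": {"risk": "low", "mitigations": ["content filter"]},
    "doc-summarization": {"risk": "low", "mitigations": ["human spot-check"]},
}

def triage(use_case: dict) -> str:
    """Sort familiar, low-risk items automatically; escalate the rest."""
    pattern = KNOWN_PATTERNS.get(use_case.get("pattern", ""))
    if pattern and pattern["risk"] == "low":
        use_case.setdefault("mitigations", pattern["mitigations"])
        return "auto-approved"
    return "human-review"  # novel or high-risk: off the conveyor belt

print(triage({"pattern": "chatbot-faq"}))         # auto-approved
print(triage({"pattern": "autonomous-trading"}))  # human-review
```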

How we addressed the challenge:

To reduce manual workload and help teams focus on what matters, we introduced or expanded:

  • A more intelligent intake system that allows configurable fields, adds helpful guidance, and automatically completes tasks when all required details are already supplied.
  • Review workflows that automatically recommend when a use case should be revisited, tied directly to the use case’s risk level.
  • Workflow enhancements that reduce context switching, including automatic redirection to the next assigned task and consolidated risk and benefit assignment.
  • AI Analyzer for AI-assisted document review, supporting both curated and custom question sets, multi-document analysis, and exportable reporting.
  • Bulk upload capabilities and expanded APIs that allow organizations to automate intake, documentation, workflow creation, and system integration.
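Risk-tiered review scheduling, as in the second bullet above, can be sketched as a simple lookup from risk level to cadence. The tiers and intervals here are illustrative policy assumptions, not product defaults.

```python
from datetime import date, timedelta

# Hypothetical review intervals per risk tier; actual cadences are a
# policy decision for each organization.
REVIEW_INTERVAL_DAYS = {"low": 365, "medium": 180, "high": 90}

def next_review(risk_level: str, last_review: date) -> date:
    """Recommend the next review date based on the use case's risk level."""
    days = REVIEW_INTERVAL_DAYS.get(risk_level, 90)  # unknown tier: most frequent
    return last_review + timedelta(days=days)

print(next_review("high", date(2025, 1, 1)))  # 2025-04-01
```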

The outcome:

  • 30–50% drop in intake cycle times thanks to auto-complete logic, next-task routing, and clearer guided documentation steps.
  • 60–80% reduction in time spent on analysis of vendor compliance policies and Terms of Service thanks to AI Analyzer accelerating document review.
  • Governance teams spent about half as much time on manual triage due to Trustible’s expert-driven automated risk scoring, surfacing only novel or high-risk items.
  • Bulk import and API-connected workflows cut inventory setup time from months to days or weeks, depending on program size.

4. Change Management

The challenge: Surface and quickly respond to change.

Most organizations focus early governance efforts on intake. They build inventories, assess initial risks, publish baseline documentation, and establish review processes. But as systems and processes change, policies may no longer fit business requirements. Regulations, and the frameworks designed to standardize governance work, also undergo regular changes and updates. When the ground shifts beneath you, you need to be prepared to respond.

“Year One governance” questions:

  • What do we have?
  • What risks exist?
  • What regulations and standards do we need to comply with?
  • What documentation is needed?
  • What processes should we follow?

“Year Two governance” introduces a different set of questions:

  • What changed?
  • What drifted?
  • What fell out of compliance?
  • What needs re-evaluation?
  • What’s new or emerging?

Clearly, organizations need governance systems that adapt in real time, not just at intake.

How we addressed the challenge:

To support this next stage of governance maturity, we strengthened the platform’s flexibility, clarity, and adaptability. Updates across the platform include:

  • Clearer task guidance, conditional logic, and improved workflow design that help contributors provide accurate information and stay aligned over time.
  • Updated risk taxonomies and scoring rules that reflect evolving standards and more nuanced real-world risks.
  • Better visibility into review needs, status changes, and inventory health, ensuring teams don’t lose track of required updates.
  • Improvements to navigation, session management, communication, and user interfaces to support broader organizational adoption and sustained engagement.
  • A Use Case History view that provides a record of changes, supporting transparency and audit readiness.

The outcome:

  • Ongoing review compliance increased by 2–3x once automated review workflows and inventory indicators were enabled.
  • Drift, scope changes, and compliance issues were detected 50% earlier due to structured review cycles and clearer signals in dashboards and reports.
  • Governance updates, such as taxonomy changes, new fields, or modified workflows, were rolled out 30–40% faster with fewer interruptions to teams.
  • Organizations reported a 40–60% reduction in contributor support requests after improvements to guidance, workflow clarity, and task design.

What’s Next: Year Two & Agentic AI Governance

Across industries, organizations are reaching the end of their first major AI governance milestone: they’ve built inventories, established intake processes, and created foundational governance frameworks.

Year Two governance is about continuity. It’s understanding when models shift, when use cases expand beyond their intended scope, when vendors introduce new terms or capabilities, and when internal compliance begins to drift. It’s about embedding governance into the ongoing lifecycle of AI systems, not just at the point of creation.

At the same time, agentic systems are reshaping the enterprise AI landscape, introducing new classes of use cases, risks, and mitigations. As nearly every sector accelerates AI adoption, the “where” and “how” of governance is changing, too. In an environment defined by technology consolidation, platform sprawl, and the high cost of change management within large enterprises, teams need governance that fits seamlessly into existing tools and processes while still providing the expertise and intelligence required to scale safely.

Broader organizational adoption of AI governance means governance teams often spend critical time educating stakeholders who may not yet have the depth of knowledge, expertise, or vocabulary to clearly articulate how, why, or where risks may arise from a use case or vendor.

To meet those needs, we’ll soon introduce our vision for agentic AI governance: an orchestration layer that actively connects systems, people, processes, and signals across the AI lifecycle. 

Instead of requiring governance teams to manually monitor every update, Trustible will focus attention on the signals that matter, detecting drift, highlighting non-compliance, suggesting interventions, escalating emerging risks, highlighting regulatory changes and inventory exposure, and automating actions when appropriate. These insights will surface across the tools and workflows where teams already operate.

We’ll also enable new ways to conduct AI governance: from anywhere in your risk stack, to anywhere in your tech stack, powered by Trustible’s intelligence and expertise. And a new capability supplementing this education work will enable stakeholders to clearly define their AI use cases with a single click.

This next chapter builds on the foundation we established this year: deeper intelligence, clearer collaboration, smarter automation, and resilient change management. Together, these capabilities create the conditions for a new kind of governance, one suited to a world where AI is more autonomous, more embedded, and more central to enterprise operations.

And we’re just getting started.

The post The Path to Agentic Governance: Innovations, Lessons Learned, and Our 2025 Milestones appeared first on Trustible.

5 AI Governance Trends Heading into 2026 https://trustible.ai/post/5-ai-governance-trends-heading-into-2026/ Wed, 10 Dec 2025 17:08:56 +0000 https://trustible.ai/?p=22537

The AI governance playbook that organizations relied on in 2024-25 will not work for the dynamic AI ecosystem of 2026 and beyond.

AI has moved from experimental pilots to systems that shape real-world decisions, customer interactions, and mission outcomes. Organizations across sectors, including financial services, healthcare, insurance, retail, and the public sector, now depend on AI to run core operations and deliver better experiences. And their enthusiasm to adopt the technology responsibly is also growing. 

But the oversight environment around AI is shifting just as quickly. New regulations, changing public expectations, and more complex system architectures mean that the manual governance practices many teams have used thus far will not keep up with AI adoption demands. Oversight is not a one-time risk assessment or legal review performed once the AI system is deployed. AI governance as a discipline needs to be embedded through every stage of the AI lifecycle – whether you’re building AI systems yourself or leveraging them from third parties.

Organizations face a landscape where regulatory enforcement is tightening, employees and customers want clarity on how AI is used, and AI technologies evolve faster than internal controls typically can. AI Governance (defined as the policies, processes, and structures that guide how AI is designed, deployed, and monitored) has become the mechanism that connects what AI can do with what an organization can responsibly and legally deliver.

Several forces are fueling this urgency. Global regulations, including the EU AI Act, are slowly shifting from conceptual frameworks to actual enforcement, although with delays and uncertainty around timelines. High-profile AI incidents continue to raise expectations for transparency and accountability. And as AI becomes embedded in nearly every team and workflow, unchecked adoption introduces new operational, ethical, and reputational risks.

The five trends in this paper outline what will define AI governance heading into 2026. Each introduces practical new demands, from granular regulation to the rise of autonomous agents, which will require organizations to rethink processes, tools, and cross-functional collaboration. By understanding these trends now, leaders can build governance capabilities that stay ahead of regulation, reduce risk, and unlock faster, safer AI adoption.

  • Trend 1: AI Governance Goes Beyond Intake
  • Trend 2: AI Third-Party Risk Becomes Full Supply Chain Risk
  • Trend 3: Agentic AI Explodes and Old Playbooks Won’t Hold
  • Trend 4: Quantifying and Articulating AI ROI
  • Trend 5: AI Regulations Move Up the Stack

Want the Full Playbook for 2026? 

Download our full whitepaper for: 

  • A deeper analysis of all five trends
  • Tactical recommendations that your organization can implement
  • A detailed look at how Trustible operationalizes governance

It’s the playbook organizations will need to stay ahead of the regulatory curve, scale AI responsibly, and maintain public and stakeholder trust.

The post 5 AI Governance Trends Heading into 2026 appeared first on Trustible.
