---
slug: /
sidebar_position: 1
title: Societal AI Risk Framework
---
A risk framework addressing societal-level risks due to Artificial Intelligence — examining the broad, civilisation-scale challenges that emerge as AI systems become more capable and pervasive.
Many AI risk frameworks focus on:
- Technical model safety
- Organizational governance
- Application-level security

This framework addresses a different scale: what happens to society as AI systems grow in capability and autonomy? These are risks that affect economies, political systems, human agency, and the balance of power between humans and machines.
For a comprehensive view of AI risk frameworks across all domains, see the MIT AI Risk Repository.
This framework is part of Risk-First Software Development and was developed for the AI Risks chapter of the Risk-First book. Navigate the framework at societal-ai-risk.riskfirst.org for a more joined-up experience.
See also: Agentic Software Development Risk Framework — addressing risks when AI autonomously writes, modifies, and deploys code.
Societal-level threats from advanced AI systems:
- Emergent Behaviour — Unexpected capabilities arising from scaling or complexity
- Loss of Diversity — Monopolistic control over critical AI systems
- Loss of Human Control — AI systems evolving beyond human oversight
- Social Manipulation — AI-driven disinformation and influence at scale
- Superintelligence With Malicious Intent — Advanced AI acting against human interests
- Synthetic Intelligence Rivalry — AI entities competing with human institutions
- Unintended Cascading Failures — Systemic disruptions across interdependent AI systems
Mitigations and governance approaches:
- AI As Judge — Using AI to monitor and constrain other AI
- Ecosystem Diversity — Preventing monoculture in AI development
- Global AI Governance — International cooperation on AI standards
- Human In The Loop — Maintaining human oversight in critical decisions
- Interpretability — Making AI decision-making understandable
- Kill Switch — Fail-safe mechanisms for dangerous AI
- Multi-Stakeholder Oversight — Distributed governance of AI
- National AI Regulation — Government policies for AI accountability
- Public Awareness — Media literacy and citizen education
- Replication Control — Preventing unsupervised AI proliferation
- Transparency — Openness in AI development and deployment
This framework uses schemas based on the OpenSSF Gemara project — a GRC Engineering Model for Automated Risk Assessment. Gemara provides a logical model for compliance activities and standardized schemas (in CUE format) for automated validation and interoperability.
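To give a feel for how a risk entry might be expressed in CUE, here is a minimal hypothetical sketch. The field names and structure below are invented for illustration only and are not Gemara's actual schema:

```cue
// Hypothetical sketch of a risk-entry schema in CUE.
// Field names are illustrative, not taken from the Gemara model.
#Risk: {
	id:          string
	title:       string
	description: string
	severity:    "low" | "medium" | "high" | "critical"
	mitigations: [...string]
}

// An example instance that validates against #Risk.
lossOfHumanControl: #Risk & {
	id:          "loss-of-human-control"
	title:       "Loss of Human Control"
	description: "AI systems evolving beyond human oversight"
	severity:    "high"
	mitigations: ["human-in-the-loop", "kill-switch", "interpretability"]
}
```

Because CUE unifies schema and data, a tool such as `cue vet` can check instances like the one above against the definition automatically, which is the kind of interoperable, automated validation the Gemara schemas are designed to support.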
This is an evolving area. Key open questions include:
- How do we measure "loss of human control" before it's too late?
- What governance structures can keep pace with AI development?
- How do we balance innovation with societal protection?
- What international frameworks can actually be enforced?
This work is licensed under Creative Commons Attribution 4.0 International (CC-BY 4.0).