risk-first/societal-ai-risk

---
slug: /
sidebar_position: 1
title: Societal AI Risk Framework
---

Risk First Logo

Societal AI Risk Framework

societal-ai-risk.riskfirst.org

A risk framework addressing societal-level risks due to Artificial Intelligence — examining the broad, civilisation-scale challenges that emerge as AI systems become more capable and pervasive.

Why This Exists

While many AI risk frameworks focus on:

  • Technical model safety
  • Organizational governance
  • Application-level security

…this framework addresses a different scale: what happens to society as AI systems grow in capability and autonomy? These risks affect economies, political systems, human agency, and the balance of power between humans and machines.

For a comprehensive view of AI risk frameworks across all domains, see the MIT AI Risks Database.

Part of Risk-First

This framework is part of Risk-First Software Development and was developed for the AI Risks chapter of the Risk-First book. Navigate the framework at societal-ai-risk.riskfirst.org for a more joined-up experience.

See also: Agentic Software Development Risk Framework — addressing risks when AI autonomously writes, modifies, and deploys code.

What This Framework Covers

Risks

Societal-level threats from advanced AI systems:

  • Emergent Behaviour — Unexpected capabilities arising from scaling or complexity
  • Loss of Diversity — Monopolistic control over critical AI systems
  • Loss of Human Control — AI systems evolving beyond human oversight
  • Social Manipulation — AI-driven disinformation and influence at scale
  • Superintelligence With Malicious Intent — Advanced AI acting against human interests
  • Synthetic Intelligence Rivalry — AI entities competing with human institutions
  • Unintended Cascading Failures — Systemic disruptions across interdependent AI systems

Practices

Mitigations and governance approaches:

  • AI As Judge — Using AI to monitor and constrain other AI
  • Ecosystem Diversity — Preventing monoculture in AI development
  • Global AI Governance — International cooperation on AI standards
  • Human In The Loop — Maintaining human oversight in critical decisions
  • Interpretability — Making AI decision-making understandable
  • Kill Switch — Fail-safe mechanisms for dangerous AI
  • Multi-Stakeholder Oversight — Distributed governance of AI
  • National AI Regulation — Government policies for AI accountability
  • Public Awareness — Media literacy and citizen education
  • Replication Control — Preventing unsupervised AI proliferation
  • Transparency — Openness in AI development and deployment

Schema & Validation

This framework uses schemas based on the OpenSSF Gemara project — a GRC Engineering Model for Automated Risk Assessment. Gemara provides a logical model for compliance activities and standardized schemas (in CUE format) for automated validation and interoperability.

Contributing

This is an evolving area. Key open questions include:

  • How do we measure "loss of human control" before it's too late?
  • What governance structures can keep pace with AI development?
  • How do we balance innovation with societal protection?
  • What international frameworks can actually be enforced?

License

This work is licensed under Creative Commons Attribution 4.0 International (CC-BY 4.0).

About

A risk framework examining risks to society from AI superintelligence, and practices to help manage those risks.
