The Enhanced 4-Lenses Framework for AI Risk Classification

A Structured Instrument for Assessing AI Systems

Traditional information security frameworks classify systems through the lens of Availability, Integrity, and Confidentiality (AIC). This approach works well for conventional IT infrastructure, where systems behave predictably and risks are well-understood. AI systems, however, present fundamentally different challenges. They learn, adapt, and make autonomous decisions, and their behavior can change in ways that traditional classification frameworks cannot adequately capture.

The Enhanced 4-Lenses Framework provides a structured instrument for classifying and assessing AI-specific risks across four integrated dimensions. It does not prescribe what to do about these risks—rather, it helps organizations see and understand the nature of their AI systems and usage scenarios in a way that enables informed governance decisions.


Why Traditional AIC Classification Is Insufficient

The Challenge

Traditional information security frameworks, particularly the Availability, Integrity, and Confidentiality (AIC) triad that organizations have relied upon for decades, are fundamentally inadequate for governing artificial intelligence systems. The emergence of autonomous AI agents, foundation models, and adaptive learning systems has created governance gaps that expose organizations to unprecedented business, regulatory, and reputational risks.


The European Union's AI Act, which entered into force in August 2024, represents the first comprehensive regulatory framework for AI systems globally, establishing risk-based classifications that extend far beyond traditional security considerations. Similar regulatory developments across multiple jurisdictions indicate a global shift toward AI-specific governance requirements that conventional IT security frameworks cannot adequately address.

The AIC triad was designed for systems where the primary concerns are uptime, data integrity, and access control. AI systems introduce new categories of risk that this framework was never intended to address:

Traditional AIC Addresses          | AI Systems Also Require
Is the system available?           | How do autonomous agents maintain availability?
Is the data intact and unmodified? | How do we detect model tampering or training data poisoning?
Is access to data controlled?      | How do we preserve privacy in federated learning or multi-agent systems?

The Enhanced 4-Lenses Framework extends this foundation to provide a comprehensive risk classification approach specifically designed for AI.


The Four Lenses: A Classification Instrument


The Solution: Enhanced 4-Lenses Framework

The Enhanced 4-Lenses Framework is a conceptual framework that addresses critical gaps in AI governance, developed through systematic analysis of current challenges, regulatory requirements, and the existing literature. It extends traditional AIC principles through four integrated dimensions designed to provide comprehensive oversight of AI systems while maintaining organizational effectiveness and regulatory compliance.

The framework consists of four integrated lenses, each providing a distinct perspective for classifying AI-related risks. Together, they enable organizations to develop a complete risk profile for any AI system or usage scenario.

Lens 1: AIC+ (Enhanced Security Classification)

Purpose: Classify AI-specific security risks that extend beyond traditional availability, integrity, and confidentiality concerns.

This lens helps organizations answer:

  • How does autonomous agent behavior affect system availability requirements?
  • What model integrity risks exist (adversarial attacks, model tampering, training data poisoning)?
  • What confidentiality challenges arise from agent-to-agent communication or federated learning?

Classification Output: A security risk profile that accounts for AI's unique characteristics—autonomy, learning, and opacity.

Lens 2: Data+ (Dynamic Data Risk Classification)

Purpose: Classify data-related risks that change dynamically based on how AI systems use that data.

Traditional data classification is static—a piece of data is classified once and that classification remains constant. AI systems use data in multiple contexts (training, fine-tuning, inference, feedback loops), and the risk profile changes with each usage. This lens helps organizations answer:

  • How does data classification change when data moves from training to inference contexts?
  • What risks exist in unstructured data representations (vectors, embeddings, knowledge graphs)?
  • What bias, provenance, or quality risks exist in training datasets?

Classification Output: A dynamic data risk profile that reflects the multiple ways AI systems interact with data throughout their lifecycle.
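
To make the context-dependence concrete, here is a minimal sketch of Data+ style lookup. All dataset names, usage contexts, and risk levels are illustrative assumptions, not values prescribed by the framework:

```python
# Illustrative sketch: under Data+, the same dataset carries a different
# risk level in each AI usage context. Names and levels are assumptions.
DATA_CONTEXT_RISK = {
    ("customer_records", "training"): "high",       # bias and provenance exposure
    ("customer_records", "inference"): "medium",    # leakage via model outputs
    ("customer_records", "feedback_loop"): "high",  # re-ingestion compounds risk
    ("public_documents", "training"): "low",
    ("public_documents", "inference"): "low",
}

def data_plus_risk(dataset: str, context: str) -> str:
    """Look up the Data+ risk level for a dataset in a given usage context."""
    return DATA_CONTEXT_RISK.get((dataset, context), "unclassified")

print(data_plus_risk("customer_records", "training"))   # high
print(data_plus_risk("customer_records", "inference"))  # medium
```

The point of the sketch is that classification is keyed on the pair (dataset, context), not on the dataset alone, which is what distinguishes dynamic from static data classification.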

Lens 3: AI+ (AI System Risk Classification)

Purpose: Classify risks inherent to the AI models, agents, and systems themselves.

This lens focuses on the AI system as an entity with its own risk characteristics. It helps organizations answer:

  • What is the autonomy level of this AI system, and what risks does that create?
  • What capability drift or performance degradation risks exist over time?
  • What foundation model risks apply (capability assessment, safety boundaries, emergent behaviors)?

Classification Output: An AI-specific risk classification based on system architecture, autonomy, and capability characteristics.

Lens 4: Effect+ (Impact Risk Classification)

Purpose: Classify the real-world impact risks that AI systems create for organizations, stakeholders, and society.

AI systems do not exist in isolation—they make decisions that affect people, processes, and outcomes. This lens helps organizations answer:

  • What are the potential consequences of autonomous decisions made by this system?
  • How do we measure and validate the actual impact of AI-driven actions?
  • What organizational, cultural, and stakeholder risks does this AI usage present?

Classification Output: An impact risk profile that accounts for both intended and unintended consequences across multiple stakeholder groups.


How Organizations Use the Framework

The Enhanced 4-Lenses Framework is an analytical tool, not a prescriptive methodology. Organizations use it to systematically classify and assess AI risk:

Step 1: Select an AI System or Usage Scenario

Identify a specific AI system, application, or usage scenario that requires risk assessment. This could be a customer service chatbot, a predictive maintenance model, an autonomous decision-making agent, or any other AI-enabled capability.

Step 2: Examine Through Each Lens

Systematically work through each of the four lenses, asking the classification questions specific to that dimension. Document the risks identified through each lens.

Step 3: Develop a Comprehensive Risk Profile

Integrate the findings from all four lenses to create a complete risk profile. This profile reveals:

  • Where traditional AIC classification is insufficient
  • What AI-specific risks exist that conventional frameworks miss
  • How risks interact across the four dimensions

Step 4: Use the Classification to Inform Governance

The risk profile enables targeted governance decisions. Organizations can prioritize where to invest in controls, monitoring, and mitigation based on the actual risk characteristics of their AI systems.
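
The four steps above can be sketched as a small data structure that collects findings per lens and integrates them into one profile. The class name, example system, and example findings are hypothetical illustrations, not part of the framework:

```python
from dataclasses import dataclass, field

@dataclass
class FourLensProfile:
    """Findings per lens for a single AI system or usage scenario (Step 1)."""
    system: str
    aic_plus: list[str] = field(default_factory=list)     # Lens 1: AIC+
    data_plus: list[str] = field(default_factory=list)    # Lens 2: Data+
    ai_plus: list[str] = field(default_factory=list)      # Lens 3: AI+
    effect_plus: list[str] = field(default_factory=list)  # Lens 4: Effect+

    def integrated(self) -> dict[str, list[str]]:
        """Step 3: combine findings from all four lenses into one profile."""
        return {
            "AIC+": self.aic_plus,
            "Data+": self.data_plus,
            "AI+": self.ai_plus,
            "Effect+": self.effect_plus,
        }

# Step 2: examine the system through each lens and document the risks.
profile = FourLensProfile(system="customer service chatbot")
profile.aic_plus.append("prompt-injection exposure")
profile.data_plus.append("chat logs re-used for fine-tuning")
profile.ai_plus.append("capability drift after model updates")
profile.effect_plus.append("incorrect advice given to customers")

# Step 4: the integrated profile informs where to invest in controls.
print(profile.integrated())
```

Keeping the findings separated by lens preserves the framework's point that the four dimensions interact but remain distinct perspectives.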


The Regulatory Context

The EU AI Act, which came into force in August 2024, establishes a risk-based classification system for AI systems. Organizations must classify their AI systems into risk categories (unacceptable, high, limited, minimal) and apply corresponding governance requirements.

The Enhanced 4-Lenses Framework aligns with this regulatory approach by providing a structured method for conducting the risk assessment that regulators require. It helps organizations answer the fundamental question: What kind of AI system is this, and what risks does it present?

Fines for non-compliance with the EU AI Act can reach €35 million or 7% of global annual turnover, whichever is higher, making accurate risk classification a business-critical capability.
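
The arithmetic behind that ceiling is simple; a minimal sketch (the function name is illustrative, and this covers only the top penalty tier cited above):

```python
def max_eu_ai_act_fine(global_turnover_eur: float) -> float:
    """Upper bound on the top-tier EU AI Act fine: the greater of
    EUR 35 million or 7% of global annual turnover."""
    return max(35_000_000, 0.07 * global_turnover_eur)

# For a company with EUR 1 billion global turnover, the 7% term dominates:
print(max_eu_ai_act_fine(1_000_000_000))  # 70000000.0 (EUR 70 million)
```

For smaller organizations the fixed €35 million floor dominates, which is why the exposure is material regardless of company size.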


Business Value of Structured Risk Classification

Organizations that adopt the Enhanced 4-Lenses Framework gain several strategic advantages:

Benefit                  | Description
Regulatory Readiness     | Structured approach to risk-based classification required by the EU AI Act and similar regulations
Informed Decision-Making | Clear understanding of AI risk enables targeted governance investments
Risk Visibility          | Identifies blind spots that traditional frameworks miss
Stakeholder Confidence   | Demonstrates a systematic approach to AI risk management
Innovation Enablement    | Accurate risk classification allows appropriate risk-taking for high-value AI initiatives

Implementation: A Modular Approach

The framework is designed for incremental adoption. Organizations can begin with a single lens and expand over time:

Phase 1: Apply the AIC+ lens to extend existing security risk assessments
Phase 2: Add the Data+ lens to classify dynamic data risks
Phase 3: Incorporate the AI+ lens for AI system-specific risk classification
Phase 4: Complete the framework with the Effect+ lens for impact assessment

Each phase builds on the previous one, allowing organizations to develop their AI risk classification capabilities progressively.


Get Started

The Enhanced 4-Lenses Framework is available under a Creative Commons license. Organizations are encouraged to adapt and apply it to their specific contexts.


Contact: For questions, collaboration, or to share your experience using the framework, reach out to Jan W. Veldsink at jan@grio.nl.


About the Author

Jan W. Veldsink MSc specializes in AI governance, systems thinking, and organizational complexity. Through Grio, he helps organizations navigate the challenges of AI adoption with structured frameworks and practical guidance.

Visit www.grio.nl for more resources on AI governance, prompting exercises, and organizational development.