
Beyond Fragmentation

A Life-Value Alternative for AI Governance

MA thesis on AI governance. Introduces Responsibility Fog, Cognitive Debt, and the VALOR Framework.

If everyone is screaming, find the silent one. The silent one is struggling to breathe.
First Responder's Rule
Role: Researcher & Author
Duration: 2024–2026
Status: Completed

The Challenge

AI governance fails not from a lack of regulation, but through two fundamental mechanisms that systematically degrade human flourishing: Responsibility Fog and Cognitive Debt. How can governance frameworks address these structural failures?

The Approach

Grounded in John McMurtry's Life-Value Onto-Axiology. Transdisciplinary research across philosophy, sociology, law, computer science, and political economy. Validated through production systems built during the research (Arctic Tracker, Gjöll, BORG). Structural verification via Neo4j knowledge graph: 775 nodes, 411 relationships, 39 systematic sweeps.
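In practice, a structural sweep of this kind amounts to querying the graph for nodes that lack an expected relationship, e.g. citations with no verification edge. A minimal sketch in Python, using an in-memory stand-in rather than Neo4j itself (the node labels, relationship names, and data here are hypothetical, not the thesis's actual schema):

```python
# Illustrative structural sweep over an in-memory stand-in for a knowledge graph.
# Labels, relationship types, and entries are invented for this sketch.

nodes = {
    "c1": {"label": "Citation", "title": "McMurtry 1998"},
    "c2": {"label": "Citation", "title": "EU AI Act 2024"},
    "s1": {"label": "Source", "kind": "primary database"},
}

relationships = [
    ("c1", "VERIFIED_AGAINST", "s1"),
    # "c2" has no VERIFIED_AGAINST edge, so the sweep should flag it.
]

def unverified_citations(nodes, relationships):
    """Return IDs of Citation nodes lacking a VERIFIED_AGAINST relationship."""
    verified = {src for src, rel, _ in relationships if rel == "VERIFIED_AGAINST"}
    return [nid for nid, props in nodes.items()
            if props["label"] == "Citation" and nid not in verified]

print(unverified_citations(nodes, relationships))
```

In Cypher against a real Neo4j instance, the equivalent check is a `MATCH` on `Citation` nodes filtered by the absence of the verification relationship; repeating such queries across every expected constraint is what a systematic sweep consists of.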

Outcomes

  • 278 pages: complete thesis manuscript
  • 775 knowledge graph nodes: Neo4j structural verification
  • 247 citations: independently verified against primary databases
  • 3 production systems: Arctic Tracker, Gjöll, BORG
  • 39 systematic sweeps: cross-referencing verification passes
  • 16 years in emergency services: foundation for first responder cognition applied to AI

The Argument

The system is not broken. It is working exactly as designed.

AI governance fails not from a lack of regulation, but through two fundamental mechanisms that systematically degrade human flourishing: Responsibility Fog — the engineered diffusion of accountability across fragmented structures — and Cognitive Debt — the compounding erosion of human judgment through algorithmic dependence.

These mechanisms create an accelerating feedback loop. The Fog reduces incentives to maintain human capacity. Eroded judgment makes populations less able to detect and contest the Fog. The result is not a malevolent superintelligence but something more insidious: the Benevolent Cage — a system of total algorithmic care that offers safety, convenience, and optimization at the price of human agency.

Grounded in John McMurtry’s Life-Value Onto-Axiology and validated through production systems built during the research, the thesis proposes the VALOR Framework as a governance alternative whose principles are already operationalized in the EU AI Act (2024).

The Investment-Sentiment Gap

For every dollar spent helping workers, thirty-three are spent replacing them.

844 occupational tasks. 104 occupations. 1,500 workers surveyed. The data reveal a pattern of capital allocation that systematically finances the replacement of human judgment while starving the tools that would support it.

Core Diagnostic Concepts

The vocabulary for what is happening

Responsibility Fog

The systematic diffusion of accountability across fragmented authority structures, technical complexity claims, legal shields, and regulatory capture — allowing AI harms to proliferate without consequences.

Cognitive Debt

The compounding costs of outsourcing human judgment to algorithmic systems. Like technical debt, these cognitive shortcuts accrue compound interest that threatens long-term capacity.

The Benevolent Cage

Not a Terminator scenario. A future of totalizing algorithmic care that eliminates agency under the guise of protection. Control through comfort rather than coercion.

VALOR Framework

Verification, Alignment, Legitimacy, Oversight, Responsibility — five governance principles operationalized in the EU AI Act and tested through production systems.

Applied Research Portfolio

Veritas in praxi — theory tested against practice

Arctic Tracker

Conservation Analytics — 473,000+ CITES trade records processed. AI-powered conservation intelligence targeting publication in Nature.

Gjöll

Fire Safety Intelligence — Icelandic fire safety database. First responder cognition applied to building inspection intelligence.

BORG

University AI Governance — Iceland’s first comprehensive university AI governance platform. Responsible deployment at institutional scale.

Author

AI Project Manager · University of Akureyri

Sixteen years in Icelandic fire and rescue services — paramedic, firefighter, leader. Transitioned to AI in 2022 with thousands of hours of hands-on work since. Developer of Iceland’s first comprehensive university AI policy, Expert Panel Member at Rannís (Icelandic Centre for Research), host of the Temjum tæknina (Taming Technology) podcast.

I don’t just study AI — I build with it. I operate a complex, evolving suite of AI tools and production systems that I use daily to ship real work. This matters because understanding the technology at the level required to govern it means being inside it: building, breaking, and rebuilding.

Collaborators

Prof. Giorgio Baruchello · Thesis Supervisor
Dr. Kristian Guttesen · Co-author, The Irreducible Human

Lessons Learned

  • What we call separate 'industrial revolutions' are a single accelerating process
  • Technology is of value only if it enables a more coherent and inclusive range of thought, feeling, and action
  • Understanding the technology at the level required to govern it means being inside it: building, breaking, and rebuilding
