A recent global report from KPMG and the University of Melbourne (Trust, attitudes and use of artificial intelligence: A global study 2025) confirms a troubling trend: 70% of employees are now using free, public generative AI tools at work, while only 41% report their employer has any policy governing their use.

A new survey on AI use in the Icelandic labor market (from the union Viska) shows that this global problem has become a local reality: 80% of professionals use the technology, and 66% use their own personal accounts.
This data matters less for what it says than for what it leaves unsaid. At first glance, it looks like a success story. But if we look deeper, we do not see signs of healthy adoption. We see the symptoms of an uncontrolled "shadow implementation" that is creating measurable, systemic risk.
The "80% use" statistic is misleading because it conflates two fundamentally different patterns of behavior.
The reality is that the workforce is already bifurcating into two poles: at one end, employees who treat AI as a value-creating capability, paying for tools and building workflows around them; at the other, employees who lean on free chatbots as a survival tactic to cope with their workload.
Of course, most employees fall somewhere on the spectrum between these two poles. But even this rough division exposes a core risk the survey fails to measure.
With every task solved without deep understanding or critical review, we are "taking a loan" against our capacity for independent thought. This debt manifests as measurable risk: errors that ship without review, skills that quietly atrophy, and a dependence on tools that no one in the organization is tracking.
And in the absence of employer-led strategy, training, and policy, a key question hangs in the air: Who is responsible for the errors? The employee who was just trying to cope, or the manager who left them to cope alone?
This is the first systemic risk the survey misses: It fails to distinguish between value-creating capability and high-risk survival tactics.
The survey confirms that 66% of staff use their own personal AI accounts. In a purely capitalist logic, employers should be thrilled: staff are paying out of pocket for tools that increase their productivity. All the benefit flows directly to the company, while the employee bears all the cost and all the risk.
How can we compare Employee A, who has built ten custom tools, pays for a premium subscription, and has multiplied their output, with Employee B in the same department, who only uses the free version of ChatGPT now and then? This unequal implementation is not just unfair; it creates systemic risk.
This is a textbook example of what philosopher John McMurtry calls prioritizing money-value, where the sole goal is to turn money into more money. The question the survey avoids is this: Is this productivity boom only serving money-value, or will it also serve life-value—where the goal is to improve human quality of life?
Should the benefit translate into higher wages? A shorter work week? More time for creativity? Or should it simply vanish into the balance sheets of companies that haven't even taken a position on the technology?
This is the second systemic risk: The survey measures productivity but ignores the discussion about the distribution of value.
Perhaps the most profound systemic risk of all—one so deep it's invisible in the data—is that this superficial survey comes from a labor union. An institution whose entire purpose is to protect its members from risk and exploitation has taken on the role of a passive observer. It measures and normalizes the very trend it should be challenging: the fact that its members are now personally bearing the cost and risk of their own tools of production.
This data is not a cause for celebration. It is a distress call. It shows a labor market grappling with a technological revolution without a strategy, without training, and without a clear sense of purpose.
The solution is not to ban. The solution is leadership.
If we fail to show that leadership, if we continue to let the fog of responsibility reign, we are not just failing in our duty. We are actively choosing the default path.
