From Shadow AI to Strategy: Analyzing Systemic Risk in the Modern Workplace
A recent global report from KPMG and the University of Melbourne (Trust, attitudes and use of artificial intelligence: A global study 2025) confirms a troubling trend: 70% of employees are now using free, public generative AI tools at work, while only 41% report their employer has any policy governing their use.
A new survey on AI use in the Icelandic labor market (from the union Viska) shows that this global problem has become a local reality: 80% of professionals use the technology, and 66% use their own personal accounts.
This data matters less for what it says than for what it leaves unsaid. At first glance, it looks like a success story. But if we look deeper, we do not see signs of healthy adoption. We see the symptoms of an uncontrolled "shadow implementation" that is creating measurable, systemic risk.
The Analysis is Superficial: It Measures "Use," Not "Competence"
The "80% use" statistic is misleading because it conflates two fundamentally different patterns of behavior.
The reality is that the workforce is already bifurcating:
- The Responsible and Ambitious: This group uses AI to multiply their output, increase their autonomy, and improve the quality of their work. They are not just increasing efficiency—they are creating genuine value.
- The Tired and Desperate: This group uses the technology primarily to survive their workload and catch their breath. Lacking a deep understanding of the technology's limitations, these employees create enormous risk. The risk isn't just in data security—it's in the quality of the work and in what I call intellectual debt.
Of course, most employees fall somewhere on the spectrum between these two poles. But even this rough division exposes a core risk the survey fails to measure.
With every task solved without deep understanding or critical review, we are "taking a loan" against our capacity for independent thought. This intellectual debt manifests as measurable risk:
- A declining ability to spot subtle errors in AI-generated output.
- Increased uniformity in creative work as everyone draws from the same foundation models.
- A genuine danger of "competence collapse," where fundamental skills are lost.
And in the absence of employer-led strategy, training, and policy, a key question hangs in the air: Who is responsible for the errors? The employee who was just trying to cope, or the manager who left them to cope alone?
This is the first systemic risk the survey misses: It fails to distinguish between value-creating capability and high-risk survival tactics.
The Analysis is Limited: It Fails to Ask, "Who Profits?"
The survey confirms that 66% of staff use their own, personal AI accounts. In a purely capitalist world, employers must be thrilled: staff are personally paying for tools that increase their own productivity. All the benefit flows directly to the company, while the employee bears all the costs and all the risk.
How can we compare Employee A, who has developed 10 custom tools, pays for their own premium subscription, and has multiplied their output, with Employee B in the same department who only uses the free version of ChatGPT now and then? This unequal implementation is not just unfair—it creates systemic risk:
- Increased employee turnover and burnout.
- Decreased loyalty and trust in leadership.
- Loss of competitive advantage as the most talented people flee to employers who use technology to improve the work environment.
This is a textbook example of what philosopher John McMurtry calls prioritizing money-value, where the sole goal is to turn money into more money. The question the survey avoids is this: Is this productivity boom only serving money-value, or will it also serve life-value—where the goal is to improve human quality of life?
Should the benefit translate into higher wages? A shorter work week? More time for creativity? Or should it simply vanish into the balance sheets of companies that haven't even taken a position on the technology?
This is the second systemic risk: The survey measures productivity but ignores the discussion about the distribution of value.
Perhaps the most profound systemic risk of all—one so deep it's invisible in the data—is that this superficial survey comes from a labor union. An institution whose entire purpose is to protect its members from risk and exploitation has taken on the role of a passive observer. It measures and normalizes the very trend it should be challenging: the fact that its members are now personally bearing the cost and risk of their own tools of production.
Conclusion: From Shadow Adoption to Strategic Responsibility
This data is not a cause for celebration. It is a distress call. It shows a labor market grappling with a technological revolution without a strategy, without training, and without a clear sense of purpose.
The solution is not to ban. The solution is leadership.
- Invest in Skills: Teach staff not just how to use the tools, but how to think with them—critically and responsibly. We must pay down the intellectual debt.
- Provide Secure Tools: Take ownership of data security by providing secure, enterprise-grade accounts. Dispel the "fog of responsibility."
- Answer the Value Question: Begin a transparent discussion about how the gains will be shared, ensuring that "life-value" guides the strategy, not just "money-value."
- Take a Stand: Leaders and employees must make a conscious choice: Are we using this technology to maximize the extraction of value, or to enhance human capability and well-being? This is not a technical question; it is a moral one.
If we do not take these steps—if we continue to let the fog of responsibility reign—we are not just failing in our duty. We are actively choosing the default path.
- Gillespie, N., Lockey, S., Ward, T., Macdade, A., & Hassed, G. (2025). Trust, attitudes and use of artificial intelligence: A global study 2025. The University of Melbourne. https://doi.org/10.26188/28822919.v1
- Hilmarsson, V. (2025, November 11). 80% sérfræðinga nota gervigreind í starfi og 67% segja hana auka afköst [80% of professionals use AI at work and 67% say it increases their output]. Viska. https://www.viska.is/um-visku/i-frettir/80percent-serfraedinga-nota-gervigreind-i-starfi-en-vinnuveitendur-standa-langt-ad-baki