Generative AI is not something we can fold smoothly into the lives we had before. Things that used to be hard simply aren't anymore. That is a bitter pill to swallow, and we have only just begun chewing it.
There is an old instinct in the education system, both within institutions and across professions, to defend how hard something is. The complexity is the point. The struggle is the credential. We built entire systems around the idea that if something was hard to learn, the hardness itself had value. Then a language model showed up and quietly dissolved the difficulty while institutions were still drafting policy about it.
I am not talking about the vapid, utopian version of this. I am talking about something far more uncomfortable: a systemic shift in what counts as skill, what counts as knowledge, and what the gap between expert and beginner actually means right now.
If a teacher has to explain to a student how complex a problem is, and the student then solves it in a few minutes and verifies the result with the help of a large language model (LLM), the problem is not complex anymore. Not for that student. Not in that moment.
This is not a failure of the education system. This is a mirror being held up to it. The question it forces is one most institutions are not ready to answer: which parts of what we teach were always just a waypoint — a step toward a destination — and which parts were the destination itself?
The waypoint tasks still require judgment about the final product. But the raw difficulty, the hours and the years of training, has been compressed into a single prompt. That is the fact. And facts have emotional consequences, especially when you have built your identity on the hardness. Watch a profession absorb this fact and you can trace the familiar stages of grief:
Denial: “It can't do this as well.”
Anger: “This is theft of what we built.”
Bargaining: “Yes, but you still need to understand the underlying process.”
Depression: Silence. The professional stops talking about the value of the work.
Acceptance: Redefining what the expert is — not the one who executes, but the one who judges.
We need an honest map. Here are four categories I believe are necessary to describe the terrain we stand on:
Category I — Skills that AI has dissolved: Tasks that required many hours of training to perform at a basic level of competence. First drafts. Basic research. Setup. Translations. Simple analysis. These tasks still require judgment about the product — but the threshold to begin has collapsed. The hours are gone. Defending these tasks as though the difficulty still exists is a particular kind of denial.
Category II — Core skills in active atrophy: Communication. Critical thinking. Ethical reasoning. These skills do not survive automatically, and that makes it profoundly dangerous to misclassify them. They are the primary casualty of algorithm dependency, precisely because the language model performs such a convincing simulation of them that the human stops training the underlying capacity. The muscle does not atrophy because we consciously decide to stop using it, but because the inactivity goes unnoticed. The student who lets the model construct the argument has not learned to argue. The professional who lets the system tick the ethics checkbox has not developed ethical judgment; they have developed trust in a system that transcribes the ethical consensus of its training data, frozen at a particular point in time and applied to circumstances it was never designed for. These skills survive only if they are deliberately trained, including in situations where it would be faster and easier to use the AI. They require deliberate friction, and that friction needs to be designed into curricula, into professional practice, and into the daily structure of how we work. Nobody is doing this at scale. Almost nobody is even clearly proposing it.
Category III — Skills required to work with AI: The ability to direct, verify, interrogate, and take responsibility for the outputs of AI. Knowing when to trust it and when to ignore it. Designing the context in which it operates. Recognizing failure patterns that look like competence. This is a new discipline. It is not innate, it is not systematically taught, and the people who need it most are the ones most likely not to know they lack it.
Category IV — Skills that must remain human, by systemic necessity: This is the category nobody is discussing honestly, and the framing matters enormously. We do not keep the doctor, the judge, or the police officer human out of sentimentality or emotional attachment. We keep them human because an algorithm cannot bear accountability. It cannot have its medical license revoked, be disbarred, be imprisoned, or be fired. It cannot face consequences. The reckoning that the profession is built on, the thing that makes the professional credential meaningful, requires a biological being at the end of the chain. Accountability is not a philosophical choice. It is a systemic requirement: there must be a human who can be held responsible when the system fails. Category IV is not defined by the demands of current legislation. It is defined by where personal consequences, such as loss of a professional license, criminal liability, and social shame, enforce a caution that the strict liability of legal entities simply cannot guarantee. A corporation can pay a fine and continue. The doctor who loses their medical license cannot.
When we tell students that something should be hard and that they should struggle the way we did, we are sometimes conveying genuine wisdom about the value of deep training. But sometimes we are defending the difficulty because we built our identity on having survived it. These two things are profoundly different.
The discourse of professions has always used the former to justify the latter. Some thresholds contain real knowledge. Others exist to protect the privileges of a professional class. Legal education is an obvious example: the tiered system, the years of study, the examinations. Part of it was always about knowledge; part was always about restricting access. Most thresholds contain both, entangled in a way that makes honest separation politically impossible from within. AI forces a reckoning over which is which, and institutions respond mostly by avoiding the question, hoping the answer will be delayed long enough to become someone else's problem.
It will not be delayed. The change is systemic and mechanical. The boundaries that define what a person must be able to do without a machine, and who bears the responsibility when the machine fails, will be drawn one way or another. If institutions do not set them deliberately, through policy and rigorous structure, the market will set them by default, and the market will maximize output at the expense of human capability.
The doctor who rubber-stamps the algorithm's diagnosis sight unseen. The judge who leans on the risk assessment. The analyst who forwards the machine-generated report without reading it. This is not an imagined future. It is the default state in practice: already underway, already producing consequences, and still largely unexamined.
The question is not whether this is happening. The question is whether we choose the boundaries, or inherit them.