AI Did Not Take Your Job. It Promoted You.
Most knowledge work is moving one level up the stack: less first draft, more judgment, orchestration, and accountability.
The weird thing about AI at work is that both optimists and doomers keep reaching for movie plots.
In one version, the machine politely helps with email and then sits in the corner like a very expensive stapler.
In the other, it storms the building, takes your laptop, and leaves you explaining transferable skills to a recruiter named Brad.
Real work is less cinematic and more annoying.
In a lot of knowledge jobs, AI does not immediately replace the human. It drags the human one level up the stack. You stop being the person who writes the first draft of everything. You become the person who decides what should exist, what good looks like, what can be trusted, and what actually ships.
That is why I keep coming back to the same line: AI did not take your job. It promoted you.
Not to “manager,” exactly. More like operator, editor, reviewer, systems designer, and person who still gets blamed when the output is wrong. So yes, a promotion. But in the traditional corporate sense, where responsibility shows up before the pay band does.
I think this frame is more useful than the usual replacement panic because it matches what is already happening in real workflows. The ILO and NASK’s 2025 global exposure index estimates that 25 percent of global employment sits in occupations potentially exposed to generative AI, with higher exposure in high-income countries, and says transformation rather than replacement is the more likely outcome. Full job automation, in their framing, remains limited because many tasks still require human involvement.[1]
That does not mean everyone is safe. It definitely does not mean everyone gets a nice strategic role and a bigger title.
It means “job” is the wrong unit of analysis.
A job is a lumpy bag of tasks. AI peels that bag apart.
Some tasks get cheaper. Some disappear. Some become review work. Some move upstream into planning and constraint setting. Some move downstream into verification, integration, exception handling, and accountability. The human role that survives is usually less about producing every line from scratch and more about governing the system that now produces a lot of the raw material.
That is the promotion.
The wrong debate is replacement
The wrong debate is “Will AI replace software engineers?” or “Will AI replace marketers?” or “Will AI replace analysts?”
The useful question is: which parts of those jobs are moving from direct execution to supervision?
That sounds obvious, but teams miss it all the time.
If you are a developer, draft code, boilerplate tests, migrations, documentation stubs, and first-pass debugging suggestions get cheaper. If you are a product manager, turning a fuzzy pile of stakeholder thoughts into a first draft of a PRD gets cheaper. If you are in support, standard replies and retrieval-heavy responses get cheaper. If you are an operator, summarizing a process, comparing options, or turning a meeting mess into action items gets cheaper.
Cheaper does not mean solved. It means the scarce part moves.
The bottleneck has moved upstream.
When a model can produce ten decent starting points in two minutes, the expensive thing is no longer typing speed. It is framing the problem, supplying the right context, defining the constraints, spotting the hidden failure modes, and deciding which of the ten drafts deserves to exist in the world.
That is why the work starts to feel managerial even when your title does not.
OECD analysis points in the same direction. In occupations with high AI exposure, vacancies disproportionately demand management and business-process skills, along with social, emotional, and digital skills. Across ten OECD countries, 72 percent of vacancies in high-exposure occupations demanded at least one management skill and 67 percent demanded at least one business-process skill.[2] My read is simple: a growing share of white-collar work now includes managerial motions even when nobody calls them that. You are specifying work, routing work, reviewing work, and coordinating outputs across humans and systems.
Again, not glamorous. Just accurate.
This also explains why AI is hitting white-collar, highly digitized work first. OECD work on worker exposure says the occupations most exposed to AI are typically white-collar roles such as IT professionals, managers, and science and engineering professionals, while manual occupations tend to have lower AI exposure.[3] For the people reading this blog - developers, leads, PMs, founders, CTOs - that matters. The ladder wobbling under your feet is not a factory story. It is your story.
And yes, clerical work shows the highest exposure in the ILO data.[1] Some jobs are under much more direct pressure. Some task layers will be shaved off so aggressively that the “promotion” feels less like advancement and more like being told to supervise your own replacement. I am not pretending otherwise.
I am saying the dominant early pattern in knowledge work is not clean replacement. It is messy reallocation.
Your new job description is not “use AI more”
This is where most teams get embarrassingly vague.
They tell people to “use AI” as if that were a workflow and not a cry for help.
What actually matters is whether the job has been redesigned around cheaper first drafts and more expensive judgment.
The new job description usually looks something like this:
You define the artifact before it exists.
You specify what good looks like.
You supply examples, edge cases, and constraints.
You choose what the model is allowed to do and what it is not allowed to touch.
You evaluate the output.
You integrate it into a broader system.
You own the result anyway.
That is not “prompting.” That is operating.
I notice this in my own work constantly. I start fewer things from a blank page now. I start with a brief, a list of failure modes, and some kind of verification step. The keyboard time went down. The responsibility absolutely did not.
This is why I keep saying acceptance criteria are becoming the real leverage point. If generation is cheap, specification matters more. A bad brief no longer creates slow bad work. It creates fast bad work. That is worse. Garbage at machine speed is still garbage. It just arrives with better punctuation.
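To make that concrete, here is what a brief can look like when it stops being a vibe and becomes a small testable artifact. A minimal sketch in Python, every name in it invented:

```python
# A minimal sketch of "specification as leverage": the brief is data, and the
# acceptance criteria are functions you can actually run. All names invented.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Brief:
    outcome: str                  # what good looks like, in one sentence
    non_goals: list[str]          # what the model is not allowed to touch
    checks: list[Callable[[str], bool]] = field(default_factory=list)

def accept(brief: Brief, draft: str) -> list[str]:
    """Return the failed criteria. Empty means 'reviewable', not 'done'."""
    return [check.__name__ for check in brief.checks if not check(draft)]

# Example criteria for, say, an AI-drafted release note.
def mentions_rollback(draft: str) -> bool:
    return "rollback" in draft.lower()

def under_200_words(draft: str) -> bool:
    return len(draft.split()) <= 200

brief = Brief(
    outcome="A release note a support agent can act on without asking engineering.",
    non_goals=["pricing", "roadmap promises"],
    checks=[mentions_rollback, under_200_words],
)

print(accept(brief, "We shipped the thing."))  # ['mentions_rollback']
```

Checks like these never capture all of “good.” They capture the floor, which is exactly what fast bad work keeps trying to slide under.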
There is also a shift from judgment as theater to artifact as evidence.
In the old version of white-collar work, a lot of value was signaled socially. Could you sound smart in a meeting? Could you explain the plan? Could you bluff through ambiguity with enough confidence that nobody asked follow-up questions?
AI is rude to that style of competence. It can generate plausible language by the barrel. So the emphasis moves toward the artifact that survives contact with reality: the shipped feature, the tested process, the memo that withstands scrutiny, the forecast with clear assumptions, the incident write-up that actually explains what happened.
That is a better game, frankly. But it is less forgiving.
The smartest people in AI-heavy environments are not the ones who can make the model sound magical in a demo. They are the ones who can build a harness around it: context packs, templates, examples, validations, tests, rollback paths, review rules, and clear ownership. Harness engineering sounds less sexy than “prompt wizardry,” which is unfortunate for marketing but excellent for civilization.
The best AI workflow is usually slightly boring.
Why juniors get lifted and seniors get rearranged
One of the most interesting things in the early evidence is who benefits most from these tools.
In customer support, Brynjolfsson, Li, and Raymond found that access to a generative AI assistant increased productivity by 14 percent on average, with a 34 percent improvement for novice and lower-skilled workers and minimal impact for the most experienced workers.[4] In a separate experiment on professional writing tasks, Noy and Zhang found that ChatGPT users finished 40 percent faster and produced output judged 18 percent higher in quality.[5] And in software development, Cui and coauthors reported that across three field experiments involving 4,867 developers at Microsoft, Accenture, and a Fortune 100 company, access to an AI coding assistant increased completed tasks by 26.08 percent, with larger gains for less experienced developers.[6]
That is not a small pattern. It is the shape of the change.
AI is very good at collapsing parts of the apprenticeship curve.
It helps people produce a passable first version sooner. It exposes lower-skill workers to patterns that used to live mostly in the heads of stronger operators. It narrows some performance gaps. It makes the middle of the quality distribution fatter.
Good. Also dangerous.
Good, because more people can do useful work faster.
Dangerous, because a lot of career ladders were built on earning your way through the routine layers. If the routine layers get compressed, organizations have to become much more deliberate about how people learn judgment. You cannot grow seniors out of thin air. And you definitely cannot grow them by asking juniors to rubber-stamp machine output all day like exhausted airport screeners.
So the ladder changes.
Juniors can often get to competent output faster.[4][5][6]
Mids lose some of the value that came from being the person who could grind through the standard work reliably.
Seniors remain critical, but the nature of their value shifts. Less of it comes from being the fastest hands on the keyboard. More of it comes from task decomposition, exception handling, quality judgment, system design, and teaching others where the model will betray them.
This is why some experienced workers misread the moment.
They look at AI helping a junior write passable code or prose and conclude that seniority no longer matters.
Wrong.
The problem is usually not that expertise disappears. It is that expertise moves up a layer. When the easy 60 percent gets cheaper, the remaining 40 percent becomes the whole game.
That 40 percent is where costly mistakes live.
The jagged frontier is why blind delegation is stupid
If AI really were just a universal multiplier, the story would be easy. You would hand it everything and go home early.
Sadly, reality insists on nuance.
Dell’Acqua and coauthors’ work on what they called the “jagged technological frontier” is useful here. In the BCG field experiment, consultants using AI completed tasks inside the model’s capability frontier faster and at higher quality. A Harvard summary of the study reports more than 25 percent greater speed, more than 40 percent higher human-rated performance, and more than 12 percent more task completion on those tasks. But on harder tasks outside the frontier, consultants using AI were 19 percentage points less likely to produce the correct answer.[7]
That is the management problem.
Your promotion is not just “use AI more.” Your promotion is deciding where AI belongs, where it needs guardrails, and where it should stay out of the room.
Some tasks should be delegated cleanly.
Some should be AI-assisted but tightly checked.
Some should use AI for option generation and humans for final reasoning.
Some should remain almost entirely human because the cost of subtle error is too high.
That routing decision is work. Real work.
The lazy way to use AI is to ask it for everything.
The adult way is to map the workflow, identify the high-volume low-novelty steps, keep humans close to the decision points, and build a verification loop that catches silent failure.
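The routing can be a literal table rather than a habit. A minimal sketch, with task categories, modes, and defaults that are entirely made up:

```python
# A minimal sketch of the routing decision. Categories and modes are invented;
# the point is that "where does AI belong" becomes explicit and reviewable.
from enum import Enum

class Mode(Enum):
    DELEGATE = "AI drafts, machines check, human spot-checks"
    ASSIST = "AI drafts, human reviews every output"
    OPTIONS = "AI generates options, human does the final reasoning"
    HUMAN = "stays human: subtle errors are too expensive"

ROUTING = {
    "boilerplate_tests": Mode.DELEGATE,      # high volume, low novelty
    "standard_support_reply": Mode.ASSIST,
    "architecture_decision": Mode.OPTIONS,
    "incident_communication": Mode.HUMAN,    # outside the frontier
}

def route(task_type: str) -> Mode:
    # Unknown work defaults to the most conservative mode, not the cheapest.
    return ROUTING.get(task_type, Mode.HUMAN)

print(route("boilerplate_tests").value)
print(route("novel_pricing_model").value)  # falls through to HUMAN
```

The default matters more than the table. If unmapped work falls to the cheapest mode instead of the safest one, you have rebuilt blind delegation with extra steps.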
This is where “preference is not performance” becomes practical rather than philosophical.
A tool can feel magical in chat and still be mediocre in a shipping workflow. If it creates more review burden than usable output, you do not have leverage. You have a very cheerful source of rework.
The metrics that matter are boring and therefore excellent: cycle time, acceptance rate, rework, escaped defects, reviewer load, and time spent clarifying requirements after generation. If those do not improve, your “AI strategy” is probably an expensive first-draft machine wearing a blazer.
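None of this needs a platform. Here is the entire analytics stack as a sketch; the field names are invented, and any ticket system can cough up the equivalents:

```python
# The boring metrics, computed from a task log. Field names are invented.
from statistics import median

tasks = [  # hypothetical records of AI-assisted work items
    {"hours": 3.0, "accepted": True,  "reworked": False, "escaped_defect": False},
    {"hours": 1.5, "accepted": True,  "reworked": True,  "escaped_defect": False},
    {"hours": 6.0, "accepted": False, "reworked": True,  "escaped_defect": True},
]

n = len(tasks)
print("median cycle time (h):", median(t["hours"] for t in tasks))
print("acceptance rate:", sum(t["accepted"] for t in tasks) / n)
print("rework rate:", sum(t["reworked"] for t in tasks) / n)
print("escaped defects:", sum(t["escaped_defect"] for t in tasks))
```

If those numbers do not move after a quarter, the blazer stays on.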
Promotion without a raise is still a promotion
To be clear, I am using “promotion” to describe functional change, not moral progress.
A company can absolutely use AI to widen spans of control, compress headcount, raise output expectations, and dump more review work onto the same number of people. The ILO is explicit that policy choices and implementation paths will shape both worker retention and job quality in AI-exposed occupations.[1]
So yes, the promotion can be rude.
You can get more leverage and less comfort at the same time.
You can move into more judgment-heavy work while also being measured harder, interrupted more often, and asked to cover a broader surface area. This is what partial automation looks like in the wild. It rarely arrives with a brass band. It arrives as “Can you also oversee the AI-assisted version of this process?”
That is why half-adoption is miserable.
If management automates generation but not evaluation, workers become janitors for machine sludge. They spend their day reviewing plausible nonsense, correcting avoidable errors, and cleaning up drafts that should never have existed. That is not leverage. That is a new flavor of admin burden.
If management redesigns the workflow properly, the human gets moved toward the work that actually deserves a human: ambiguous trade-offs, exceptions, prioritization, escalation, stakeholder judgment, and quality control.
Those are not identical futures.
So when leaders say “we are rolling out AI,” the real question is not whether the model is good. The real question is whether the workflow around the model is sane.
CTOs, especially, should treat this as org design rather than software procurement.
Buying a capable model is easy.
Deciding where it sits in the workflow, how people learn to use it, what must be tested, what gets logged, what gets escalated, and what counts as done - that is the hard part. That is management work. Which is exactly why I think “promotion” is the right word.
A concrete workflow: shipping a feature with AI in the loop
Let me make this less abstract.
Say a team needs to ship a modest internal feature: a permissions update, a reporting view, maybe a boring admin screen. The kind of task that used to begin with somebody staring into an editor and metabolizing caffeine.
In the old workflow, a developer might spend the first chunk of time translating a fuzzy request into a plan, then drafting implementation, then remembering tests, then writing the supporting documentation and release notes because no one else wanted to.
In the promoted workflow, the sequence changes.
1. Start with the brief, not the code. Write the user outcome, the constraints, the non-goals, the edge cases, and the acceptance criteria. This is not paperwork. This is the control surface.
2. Use AI to turn that brief into options: implementation outline, likely risks, missing requirements, test cases, migration concerns, and rollout questions. Make the model show its assumptions.
3. Let AI draft the first pass of code, tests, docs, and change summary where appropriate.
4. Run the boring machines on the machine output: linting, unit tests, type checks, security scanning, schema diff review, whatever applies. (A minimal sketch of this gate follows the list.)
5. Use the human review budget on what is actually risky: business logic, failure handling, weird permissions edges, naming that affects maintainability, and whether the thing should exist in this form at all.
6. Use AI again for downstream packaging: support macros, stakeholder update, release notes, runbook tweaks, and follow-up tickets.
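Step 4 deserves to be literal, because it is the cheapest discipline on the list. A minimal sketch of the gate, assuming a Python repo with ruff, mypy, and pytest available; substitute whatever your stack actually runs:

```python
# A minimal sketch of step 4: run the deterministic checks before any human
# spends review budget. Assumes ruff, mypy, and pytest are installed, plus a
# hypothetical src/ layout; swap in your own toolchain.
import subprocess
import sys

GATES = [
    ["ruff", "check", "."],   # lint
    ["mypy", "src"],          # type check
    ["pytest", "-q"],         # unit tests
]

def run_gates() -> bool:
    for cmd in GATES:
        print(f"running: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            print(f"gate failed: {' '.join(cmd)} -> back to generation, not to review")
            return False
    return True

if __name__ == "__main__":
    sys.exit(0 if run_gates() else 1)
```

The ordering is the design choice: deterministic checks run first, so the human review budget in step 5 only ever sees candidates that already cleared the floor.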
Notice what changed.
The human did not disappear. The human moved.
Less blank-page drafting.
More problem framing.
More evaluation.
More integration.
More accountability.
That is the promotion in one small workflow.
I use the same pattern in writing. I do not start by asking the model to “write the article.” I start by trying to make the argument legible to myself, define what would make it true, and decide what evidence deserves to stay. Then the machine can help with structure, counterarguments, phrase alternatives, compression, and cleanup. The model is useful. The model is not the author. The job has moved upward, not outward.
This also explains why AI-heavy teams start caring more about reusable scaffolding. Shared prompts are fine. Shared rubrics are better. Shared evaluation checklists are better than that. The moment your team can generate work cheaply, consistency stops being a nice-to-have and becomes survival gear.
What teams should actually change
If you buy the promotion frame, a few practical implications follow.
First, train people on review and specification, not just on tool features.
Most AI training is basically a software demo with better posture. That is not enough. People need to learn how to scope tasks, express constraints, inspect outputs, and recognize failure patterns. Otherwise you are handing power tools to people and congratulating yourself because the box looked premium.
Second, redesign role expectations explicitly.
Do not say “everyone should use AI” and then evaluate them as if the work were unchanged. If the first draft is now cheap, then clarity, judgment, and orchestration should be rewarded more directly.
Third, instrument actual workflows.
Preference is not performance. Measure cycle time, acceptance rate, escaped defects, and rework on a handful of recurring processes. If the tool helps only in demos, that is not adoption. That is theater with a subscription fee.
Fourth, protect learning loops for less experienced people.
If juniors never have to think, they will not become seniors. Let AI accelerate them, but do not let it replace the reasoning reps completely. Ask for explanations. Rotate who defines the acceptance criteria. Make people compare outputs, not just consume them.
Fifth, stop fetishizing the prompt.
The prompt matters. But the bigger win is almost always in the surrounding system: better context, cleaner data, stronger templates, reusable checks, and clearer ownership boundaries. The best AI workflow is usually slightly boring because reliability is usually slightly boring.
None of this sounds glamorous. That is exactly why it works.
The slightly annoying conclusion
The useful question is not whether AI can do your job.
It is whether you have moved your value one level up before the market forces you to.
If your value is mostly raw drafting, raw formatting, raw summarizing, raw boilerplate coding, or raw information rearrangement, AI is very rude news. Those layers are getting cheaper, sometimes dramatically.[4][5][6]
If your value is in defining the work, building the harness, judging the output, spotting the edge cases, aligning people around decisions, and standing behind the artifact, AI is not removing you. It is increasing your span of action.
That still might be exhausting.
It still might be unfair.
It still might come with exactly zero ceremonial appreciation from management.
But it is a more precise description of what is happening.
AI did not take your job. It promoted you.
The annoying part is that promoted people are supposed to know what good looks like.
Do you?
Notes / references
1. International Labour Organization and NASK, “One in four jobs at risk of being transformed by GenAI, new ILO-NASK Global Index shows,” May 20, 2025, summarizing Generative AI and Jobs: A Refined Global Index of Occupational Exposure (ILO Working Paper 140, 2025).
2. OECD, Artificial Intelligence and the Changing Demand for Skills in the Labour Market, OECD Artificial Intelligence Papers No. 14, 2024. The report finds that in high AI exposure occupations, 72 percent of vacancies demanded at least one management skill and 67 percent demanded at least one business-process skill across the ten-country sample.
3. OECD, Who Will Be the Workers Most Affected by AI?, OECD Artificial Intelligence Papers, 2024. The executive summary notes that many occupations most exposed to AI are white-collar roles such as IT professionals, managers, and science and engineering professionals.
4. Erik Brynjolfsson, Danielle Li, and Lindsey R. Raymond, “Generative AI at Work,” NBER Working Paper 31161, 2023.
5. Shakked Noy and Whitney Zhang, “Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence,” Science 381, no. 6654 (2023): 187-192. DOI: 10.1126/science.adh2586.
6. Zheyuan (Kevin) Cui, Mert Demirer, Sonia Jaffe, Leon Musolff, Sida Peng, and Tobias Salz, “The Effects of Generative AI on High-Skilled Work: Evidence from Three Field Experiments with Software Developers,” SSRN working paper, August 20, 2025. DOI: 10.2139/ssrn.4945566.
7. Fabrizio Dell’Acqua, Edward McFowland III, Ethan Mollick, Hila Lifshitz-Assaf, Katherine C. Kellogg, Saran Rajendran, Lisa Krayer, Francois Candelon, and Karim R. Lakhani, “Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of Artificial Intelligence on Knowledge Worker Productivity and Quality,” Organization Science 37, no. 2 (2026): 403-423. For the summarized speed, quality, and task-completion figures used above, see also Harvard Business School’s official summaries of the study from September and November 2023.

