Industry Commentary

The Jevons Paradox Came for Knowledge Work

An eight-month Berkeley study found that AI tools don't reduce work — they make people voluntarily do more of it. The implications for engineering teams are serious.

By John Jansen · 6 min read


The Study Nobody Wanted to Hear

An eight-month ethnographic study from UC Berkeley's Haas School of Business, published in Harvard Business Review in February 2026, tracked what happened when a 200-person technology company gave its employees access to generative AI tools. The company did not mandate AI use. It offered enterprise subscriptions and let people adopt at their own pace.

The researchers — Aruna Ranganathan, associate professor of management, and Xingqi Maggie Ye, a doctoral student — expected to find the usual productivity story: AI helps people work faster, freeing up time for higher-order thinking. That is not what they found.

Eighty-three percent of workers said AI increased their workload. Employees worked at a faster pace, took on a broader scope of tasks, and extended work into more hours of the day. Nobody asked them to. They did it because AI made doing more feel possible.

The study hit the front page of Hacker News and generated the kind of discussion where everyone agrees with the finding while simultaneously insisting it does not apply to them.

How It Actually Works

The intensification is not about AI being bad at its job. It is about AI being good enough to change what people attempt.

Role boundaries dissolve. Product managers started writing code. Researchers took on engineering tasks. People across the organisation attempted work they would previously have outsourced, deferred, or avoided entirely. When the cost of attempting something drops to near zero, the scope of what counts as "my job" expands to fill the space.

Parallel work creates the illusion of progress. Workers managed several active threads at once — writing code manually while an AI generated an alternative version, running multiple agents in parallel, reviving long-deferred tasks because AI could "handle them" in the background. This felt like momentum. The reality was continual attention switching, frequent checking of AI outputs, and a growing number of open tasks that each demanded follow-up.

The ratchet never reverses. Once someone demonstrated they could handle a broader scope with AI assistance, that broader scope became the new baseline. The productivity gain was absorbed as expectation, not reclaimed as time. Twelve-hour days became normal not because anyone mandated them, but because the work that filled them felt achievable.

This Has a Name

In 1865, the economist William Stanley Jevons observed that as steam engines became more fuel-efficient, coal consumption did not decrease. It increased. Cheaper energy per unit of work meant more total work was undertaken. The efficiency gain was real, but it was overwhelmed by the demand it unlocked.

This is the Jevons Paradox, and it has arrived for knowledge work.

AI makes cognitive tasks cheaper. Writing a first draft, generating test cases, exploring an unfamiliar API, scaffolding a new service — all of these now cost less time and effort than they did two years ago. The efficiency gain is real. But the response is not to do the same work in less time. It is to do more work in the same time, and then more work in more time.
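
A rough way to see when this rebound happens, using notation that is ours rather than the study's or Jevons's: let $c$ be the time cost of attempting a task, $n$ the number of tasks people take on at that cost, and $T = n \cdot c$ the total time spent. Suppose AI cuts the per-task cost by a factor $k$, and people respond by attempting $k^{\epsilon}$ times as many tasks, where $\epsilon$ captures how elastic demand for work turns out to be. Then

\[ T' = \left(n\,k^{\epsilon}\right) \cdot \frac{c}{k} = T\,k^{\epsilon - 1}. \]

Total hours fall only if $\epsilon < 1$. The Berkeley findings read like a workforce operating with $\epsilon > 1$: per-task cost dropped, the scope of attempted work more than compensated, and total time rose.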

The industrial version of the Jevons Paradox is generally considered a good thing. Cheaper energy drives economic growth. But industrial workers go home at the end of a shift. Knowledge workers carry the work in their heads. When the paradox operates on cognition rather than coal, the overflow is not economic expansion — it is burnout.

What This Means for Engineering Teams

The Berkeley study was conducted at a single company, and ethnographic research does not generalise the way randomised trials do. But the patterns it describes are recognisable to anyone running an engineering organisation that has adopted AI tooling in the past year.

Headcount planning gets harder. If AI tools increase output per person but also increase hours per person, the productivity gain is partly real and partly an artefact of people working longer without billing for it or complaining about it. Planning based on observed output without accounting for the hours behind it will lead to understaffing.
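
In rough numbers (illustrative, not figures from the study): if measured output per person rises 30 percent while hours worked quietly rise 25 percent, the genuine per-hour gain is

\[ \frac{1.30}{1.25} \approx 1.04, \]

about 4 percent, not 30. A headcount plan that treats the full 30 percent as durable capacity is really planning around the extra hours.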

Quality risk is hidden. The study found that the initial productivity surge can give way to lower quality work, weakened decision-making, and turnover. These are lagging indicators. By the time they show up in your metrics, the damage has compounded. A team that shipped more features last quarter and loses two senior engineers this quarter did not get more productive. It borrowed against its own capacity.

Scope management becomes a leadership problem. When AI makes everything feel achievable, the bottleneck shifts from execution to judgment. Which tasks should actually be attempted? Where should role boundaries hold? These are not questions that individual contributors will answer well on their own, because the incentive structure — do more, ship more, demonstrate impact — pushes in one direction only.

The uncomfortable conclusion is that AI tooling adoption requires more management attention, not less. Not to monitor usage or enforce policies, but to actively constrain scope, protect focus time, and make it safe for people to say no to work that AI makes technically possible but strategically pointless.

The Efficiency Trap

The standard narrative about AI and work goes like this: AI handles the routine tasks, humans focus on the creative and strategic ones, everyone is more productive and happier. It is a clean story. It is the story that vendors tell, that executives repeat, and that individual contributors want to believe.

The Berkeley study suggests a messier reality. AI does not neatly separate routine from creative work. It lowers the activation energy for all work, which means people attempt more of all of it. The result is not a workforce freed from drudgery. It is a workforce running harder, across a wider surface area, with less time to think deeply about any of it.

This is not an argument against AI adoption. The tools are genuinely useful. But adopting them without adjusting expectations, workload norms, and management practices is not a productivity strategy. It is a way to convert your best people's discretionary effort into short-term output gains that will not survive their departure.

Want to discuss this?

We write about what we're actually working on. If this is relevant to something you're building, we'd love to hear about it.