This week, MIT Tech Review ran a piece arguing that the single most useful dataset for understanding AI’s impact on work would be a long-term record of what people actually do at their jobs — task by task, hour by hour, before and after the tools arrive. The piece is honest about what is missing. It is also, in a way the author does not notice, a confession.
Because the dataset already exists. It has existed for three years. It is just stored in a place economists do not look: inside the working lives of disabled people who started using these tools the week they shipped.
I am writing this on a Thursday morning in February 2026. My screen reader is not reading anything yet. I have just asked a model to summarise a 71-page acoustic consultancy brief sent to me last night by a project lead in Rotterdam. He knows I am blind. He sent the PDF anyway, because he also knows what I will do with it. Three years ago that document would have cost me four hours of fighting with badly tagged headings (tags being the hidden labels that let a screen reader navigate a document) and an inaccessible figure on page 43. This morning it cost me eleven minutes. I read the brief. I wrote back with two specific objections to the reverberation targets on the ground floor. The project moved.
None of that appears in any productivity dataset I am aware of. The brief was not a “task.” My eleven minutes were not “hours logged.” The objections I raised carry no job title in any occupational taxonomy the next study will use. The work happened in a way the measurement instruments cannot pick up, the way a dog whistle does not register on a meeting-room mic.
Here is the thing the Anthropic researcher quoted in the MIT piece almost says, and then does not. The “jobs apocalypse” frame assumes a baseline in which everyone was already fully inside the labour market, doing the thing the job description says they were doing, at the rate the spreadsheet says they were doing it. For a lot of us that baseline is fiction. The baseline is: you were doing 60 percent of the visible job and 200 percent of the invisible work required to participate in it at all. Strip out the invisible 200 percent and the visible 60 percent looks, suddenly, like a full role. That is not augmentation. That is the first time anyone counted you accurately.
I called Mette Johansen about this last week. She is an autistic civil engineer in Aarhus I have known since 2019, back when she was still masking through every client meeting and going home to lie on the kitchen floor for two hours afterwards. She uses a model now to draft the social register of her emails — the warmth, the small talk, the closing line that says “looking forward to hearing from you.” The tool handles those elements without making her want to peel her skin off. “I do the engineering,” she said. “It always did the engineering. The tool does the part that was costing me the engineering.” She paused. “If someone measures my output now versus 2022, they will say AI made me 30 percent more productive. That is not what happened. AI stopped charging me a tax.”
The tax is the dataset nobody is collecting.
I keep thinking about the disability theorist Mia Mingus. In 2011, she wrote about the difference between access as accommodation and access as a different starting point — the framework where you do not retrofit the room, you build a different room. The economists looking at AI and labour are still inside the first framework. They are asking: how much faster does the existing worker do the existing task? They are not asking the second question, which is the only one that matters: who is now in the room who was not in the room before, and what are they doing that the old room had no category for?
The honest answer is that no one in the productivity literature wants to ask that, because the answer would require admitting the old measurements were never measuring labour. They were measuring a specific kind of body performing a specific kind of cognition under conditions designed for that body. Everything else got coded as absence, or low output, or “not a culture fit,” or unemployment. The dataset of what we were actually doing lived in our chests and our kitchen floors.
So when MIT Tech Review says the missing data is task-level records of what workers do all day, I want to say: the missing data is who was never counted as a worker, doing what was never counted as work, paying a tax that was never counted as a cost. That data is recoverable. It would just require asking us. The instruments exist. They are called questions.
It is 6:14 AM. The pool opens at 6:15. I am standing in the doorway. The water is hearing itself back, and for forty seconds nobody is measuring anything at all.
This article was prompted by “The one piece of data that could actually shed light on your job and AI” from MIT Tech Review.