Last week the Guardian ran a headline calling the effect of the new Alzheimer’s drugs “trivial.” A team of researchers had assessed seventeen clinical trials. They found that drugs designed to slow the buildup of amyloid — a protein that accumulates in the brains of people with Alzheimer’s disease — slowed the loss of mental abilities by an amount that did not clear the threshold for what neurologists call a “meaningful” difference. The word they used — across the summary, the review, and most of the reporting — was trivial.

I read this on a Tuesday in October 2026, in the back of a minicab stuck on Camberwell New Road, and I had to read it three times. Trivial to whom?

Here is what the reviewers measured: scores on the Clinical Dementia Rating scale, a standardized test doctors use to measure memory and thinking problems, tallied against a statistical threshold decided by other clinicians in a room I have never been in. Here is what they did not measure: whether the person taking the drug was still able to walk their dog to the corner shop in month fourteen instead of month eleven. Whether their daughter could keep her job because the crisis point slid three months to the right. Whether they made it to the wedding.

The trial’s success measures were built on the premise that the only thinking worth counting is the thinking that looks like the thinking of the person designing the trial.

I want to be fair to the reviewers. Concerns about the anti-amyloid drugs are real: the side effects are serious, the costs are enormous, and the hype around lecanemab and donanemab, the two anti-amyloid drugs at the center of this debate, has run ahead of the evidence. Robin Emsley at the University of Manchester and others have been arguing for years that the industry has been selling thin data as a breakthrough. That critique is earned.

But “trivial” is not a neutral word. It is a judgement about whose time matters.

I keep thinking about a conversation I had in March 2023, in a church hall in Deptford, with a woman named Priya whose mother had been on a donanemab trial. She told me the drug might have given them six extra months of her mother still knowing her grandson’s name. “The doctors keep saying it’s small,” she said. “It’s not small in this house.” She was not sentimental about it. She knew her mother was still declining. She just knew what six months was worth, because she was the one cooking dinner in it.

Compare that to the language of the review. “No meaningful effect on cognition.” Meaningful to the measurer. Not to Priya. Not to her mother. Not to the grandson.

This is a problem disability researchers have been naming for decades, though rarely in the context of dementia. Mike Oliver, who developed the social model of disability in British sociology in the 1980s — the idea that disability is produced by society’s barriers, not by individual bodies — spent his career pointing out that the tools medicine uses to measure disabled lives were built by people who did not live them. The scale, the threshold, the “clinically significant change” — these are not found in nature. Someone chose them. Usually someone who scored well on them.

In dementia research, the people who score well on them are, by definition, not the people the research is supposed to serve.

There is a version of this argument that ends with “include patients in trial design,” and everyone nods, and a committee gets formed, and the success measures stay the same. I have sat through that committee. In 2019 I was on an advisory panel for a transport accessibility study where we, the disabled advisors, said the outcome measures were wrong — and the researchers thanked us warmly and used the original measures because those were the ones the funder recognised.

The copy has won. The scale has become more real than the life it was supposed to describe.

What would a different success measure look like? Functional time. Months of being able to do the specific thing that matters to this person in this household. A woman in Lagos might measure it in market trips. A man in Glasgow in whether he still recognises his brother’s voice on the phone. Priya measured it in her son’s name.

These are not soft outcomes. They are harder to score than a clinical rating scale, which is exactly why they were not chosen. Difficulty of measurement is not the same as triviality of effect. The review conflated them, and the headline writers followed.

I am not arguing the drugs work. I am arguing we do not know, because the question asked was the wrong one, and the people who could have told us the right one were not in the room.

On Camberwell New Road the cab finally moved. I put my phone away. I thought about Priya’s mother, and the scale of months, and the grandson whose name got to stay in her mouth for a little longer than it would have otherwise, and how a stranger in a journal had just called that trivial.