PJ Lifestyle

Theodore Dalrymple, a physician, is a contributing editor of City Journal and the Dietrich Weismann Fellow at the Manhattan Institute. His new book is Second Opinion: A Doctor's Notes from the Inner City.

Measuring a Hospital’s Quality

Monday, August 24th, 2015 - by Theodore Dalrymple


We all want to be treated in the best hospitals by the best doctors, but this is not possible so long as any difference in quality between them exists. The best hospitals and the best doctors cannot treat everybody. Moreover, it is much harder to tell which hospital and which doctor is the best than many of us suppose. League tables for doctors and hospitals are not like such tables for baseball or football teams, matters of straightforward record. They require measurements of enormous complexity, and the results are only trustworthy and valuable if the data that go into compiling them are both accurate and relevant.

A paper in a recent edition of the British Medical Journal casts doubt on the value of global judgments on hospitals as expressed in league tables.

The authors reasoned that if such judgments were of any value, a hospital’s standardised mortality ratio, or SMR (the number of deaths observed among patients of any particular category, divided by the number expected on average for that category), ought to correlate strongly with the number of avoidable deaths that occur in that hospital. A hospital with a high SMR – the usual way of measuring its overall quality – ought to have a high rate of avoidable deaths, if that ratio is a true measure of the quality of medical care in that hospital compared with other hospitals.

The authors then examined the statistics for 34 hospitals in England, 10 of them in 2009 and 24 others in 2012-13. They correlated the hospitals’ SMRs with the proportion of deaths that were avoidable, calculated from a random sample of 100 deaths in each hospital, examined by experts to determine whether they had occurred because of any act of commission or omission by the hospital. Of course, whether a death is avoidable is usually a matter of judgment; it is rare that incompetence or negligence is so great that death is indubitably its consequence. For the purposes of this study, a death was deemed to have been avoidable if the experts assessing the case thought there was more than a fifty percent chance that it was.
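
The comparison the authors made can be sketched in a few lines of Python. The hospital figures below are entirely hypothetical, invented purely to illustrate the mechanics: each hospital's SMR (observed deaths divided by expected deaths) is correlated with the proportion of its reviewed deaths judged avoidable.

```python
# Hypothetical figures for four hospitals, purely illustrative:
observed = [120, 95, 140, 110]        # deaths actually recorded
expected = [100, 100, 125, 130]       # deaths expected on average
avoidable = [0.04, 0.05, 0.05, 0.04]  # proportion of sampled deaths judged avoidable

# Standardised mortality ratio: observed deaths divided by expected deaths.
smrs = [o / e for o, e in zip(observed, expected)]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed without external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Near zero for these invented figures, echoing the negligible
# correlation the study reported across its 34 real hospitals.
print(pearson(smrs, avoidable))
```

With real data one would also compute a p-value for the coefficient; only the correlation itself is shown here.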

The correlation between hospitals’ SMRs and their rates of avoidable deaths was so slight as to be negligible: indeed it was not statistically significant. Overall the rate of avoidable death was low – 115 cases in the 3400 deaths examined – at 5.2 percent in 2009 and 3.6 percent in 2012-13. This difference was statistically significant, but one cannot rush to the conclusion that hospitals had improved in the intervening period, for various other factors that could have affected the rate had also changed (for example, the wider use of, and compliance with, requests not to resuscitate).

There were limitations to the study: for example, the experts often disagreed as to which deaths were avoidable. Moreover, the experts were not blinded to the hospitals from which the cases they examined came. They might therefore have been influenced by biases, for or against, conscious or unconscious, in their judgment as to which deaths were avoidable. Further, a hospital’s global standardised mortality ratio might have disguised exceptionally good and exceptionally bad departments within it that balanced each other overall.

Nevertheless, the lesson seems clear: the global SMR as a measure of a hospital’s quality is invalid. This is not to say that there are no good and bad hospitals, only that the SMR is not the way to assess them, perhaps because the SMR itself is far from a watertight measure and is subject to a large number of confounding factors. We should be as accurate as possible, but not believe ourselves to be more accurate than we actually are.

__________

Image via Shutterstock


Would You Want to Know Your Risk of Having a Heart Attack in the Next Five Years?

Saturday, August 15th, 2015 - by Theodore Dalrymple


Being mortal, we are all under sentence of death, but the execution of the sentence is more imminent in some of us than in others. People who suffer from angina, for example, are aware that they could suffer a fatal heart attack at any time; and even if human beings can accommodate themselves to most situations, the awareness of the threat in the back of one’s mind must be disconcerting, to say the least.

Would we wish to know our statistical risk of death in the next five years? I suppose we vary in this as in everything else: there is no hard and fast rule.

The question went through my mind as I read a paper in a recent edition of the New England Journal of Medicine. The authors took a defined group of patients – those with stable angina and type II diabetes – and measured their troponin levels. Troponin is a protein that is released into the blood when the heart muscle is damaged by infarction, but with a new technique it is possible to measure much slighter increases in the level than previously.

The authors found that, of the 2285 patients who came within the study, the 897 who had slightly raised levels of troponin had nearly twice the risk of fatal or non-fatal heart attack or stroke within the next five years compared with those who did not have a raised level: 27.1 per cent of patients with raised troponin levels suffered such an event in the succeeding five years, compared with 12.9 per cent of those with normal troponin levels. This increased risk persisted after adjustment for as many relevant variables as the authors could think of, so the relationship is probably a real one and not merely a statistical artifact.
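
The "nearly twice the risk" figure follows directly from the two event rates just quoted:

```python
# Five-year rates of fatal or non-fatal heart attack or stroke, as reported.
rate_raised_troponin = 27.1   # per cent, among patients with slightly raised troponin
rate_normal_troponin = 12.9   # per cent, among patients with normal troponin

# Relative risk: the ratio of the two rates.
relative_risk = rate_raised_troponin / rate_normal_troponin
print(round(relative_risk, 2))  # 2.1, i.e. "nearly twice"
```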

The authors assigned their patients at random to normal medical treatment or to such medical treatment plus angioplasty or coronary artery bypass graft. What they found was that the additional treatment made no difference to the outcome. In other words, it was useless except from the economic point of view of those carrying it out. However, they did find that the risk of fatal or non-fatal heart attack and stroke was considerably increased in the small percentage of patients whose troponin levels rose by more than 25 per cent during the study.

Perhaps one day the knowledge of an increased 5-year risk of fatal or non-fatal heart attack or stroke will become useful, for progress is constantly being made. But if you were a patient with stable angina and Type II diabetes, would you want to know that your risk of fatal or non-fatal heart attack or stroke in the next five years was 27.1 per cent rather than 12.9 per cent? What would you do with this information if you had it? If the figures were 100 per cent and 1 per cent respectively, they might be of some use, for many of us want to settle our affairs before we die; but, apart from increasing our level of anxiety slightly, what use to us are the figures? Of course, if we fell into the 12.9 per cent risk group, we might feel slightly better, for, regrettably, it is a comfort to us to know that others are worse off than we.

There was a curious omission in the paper, as in many other papers of this type. Initially, 2368 patients were recruited but only 2285 of them ‘were successfully analyzed for troponin concentrations.’ Why were the other 83 (3.5 per cent) not successfully analysed? Were their samples lost in transit, put in the wrong bottles, mislabeled? If this is what happens during trials, with all their elaborate checks and a plethora of staff, what happens in normal circumstances? This is an important point, for it means that the benefits to be expected from medical practice in normal circumstances are slightly overestimated by comparison with the benefits found in trials.

Image via Shutterstock


Study: Cold 17.36 Times More Hazardous to Mankind Than Heat

Tuesday, August 11th, 2015 - by Theodore Dalrymple


Some climates are better, or at least more agreeable, than others. Furthermore, it is well known that extremes of temperature raise death rates considerably. It has been estimated, for example, that they increase by between 8.9 and 12 percent during heatwaves, and by 12.5 per cent during spells of exceptional cold. The reasons for this are not fully understood; only a very small percentage of deaths during heatwaves are directly attributable to heatstroke. Moreover, there is an asymmetry between the effects of extreme heat and extreme cold on mortality. The effects of the former are immediate, and persist only as long as the heat persists; the effects of cold on mortality last three or four weeks after the cold has ceased.

A paper in a recent edition of the Lancet attempts to determine what percentage of deaths in thirteen different countries – Australia, Brazil, Canada, China, Italy, Japan, South Korea, Spain, Sweden, Taiwan, Thailand, the UK and the USA – are associated with changes in the weather. The paper is of such enormous statistical sophistication that I doubt whether more than one in a thousand doctors is qualified to assess its validity: and, indeed, only one of its twenty authors is medically qualified. Nevertheless, the title of the paper, “Mortality risk attributable to high and low ambient temperatures: a multicountry observational study,” seems to me (as someone very unversed in these matters) to commit an elementary statistical howler: the assumption that a statistical association by itself implies causation.

However, let us overlook this criticism, and moreover assume that the authors’ complex statistical analysis of the data (or rather that of their computers) is valid beyond further criticism. The size of their sample is certainly impressive: they have analyzed 74,225,200 deaths in relation to deviations from average or normal or optimal ambient temperatures. After much calculation, the authors come to the conclusion that 7.71 percent of the deaths included in the study were attributable to excess heat or cold, that is to say 5,722,763 of the deaths.
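
The headline figure is easy to verify from the two numbers given:

```python
# 7.71 per cent of the 74,225,200 deaths analysed were attributed
# to non-optimal (hot or cold) ambient temperatures.
total_deaths = 74_225_200
attributable_fraction = 7.71 / 100

attributable_deaths = round(total_deaths * attributable_fraction)
print(attributable_deaths)  # 5722763, the paper's 5,722,763
```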


Do Sweetened Drinks Cause Type II Diabetes?

Friday, July 31st, 2015 - by Theodore Dalrymple


When I was in my early thirties, I several times visited an island in the Pacific called Nauru. From the medical point of view it was of the utmost interest, because fifty percent of the population had Type II diabetes; it therefore represented the epidemiological shape of things to come.

The Nauruans had become diabetic only recently, when they suddenly (and briefly, as it turned out) became the richest people per capita in the world, thanks to the phosphate rock with which their tiny island was covered. From a life of subsistence on fish and coconuts they went straight to being millionaires. They abandoned their traditional diet and started to eat, on average, 7000 calories per day. Not surprisingly, they were enormously fat. They liked sweet drinks and consumed Fanta by the case-load. For those who liked alcohol as well there was Château d’Yquem. They were unique in the world in being both rich and having a short life expectancy.

The Nauruans were, in a sense, pioneers of the diabetogenic diet.

Type II diabetes is now a threat to public health that dwarfs Ebola virus in scale, but kills slowly and undramatically, rather by stealth than by coups de théâtre. No one ever walked around in spacesuits because there was a Type II diabetic on the ward.

The Nauruans (and the Pima Indians of Arizona) were almost certainly susceptible genetically to the disease, which did not affect them until they adopted their horrible diet. But, to a lesser extent admittedly, what happened to them has happened everywhere, particularly in the U.S. and Britain, where patterns of consumption bear some slight resemblance to those of the Nauruans.


There Is Still a Lot Doctors Do Not Know About Pneumonia

Friday, July 24th, 2015 - by Theodore Dalrymple

When I was a young doctor working in the countryside I contracted pneumonia. It took me a long time to recognize my main symptom – breathlessness – because I thought that symptoms were what patients, not doctors, had, and that therefore I could not myself have any. I therefore considered my feeling of being unwell to be an illusion. In the abstract I knew that doctors got ill and died, of course, but I found it difficult to believe this in practice, especially in my own case. When the symptom was severe enough, however, the penny dropped, and I looked at my own x-ray with something akin to pride.

Pneumonia, like almost all infectious diseases, is much less common now than it was then, but it remains a common enough cause of hospital admission and of death among the elderly. According to a paper published in a recent edition of the New England Journal of Medicine, the annual cost of pneumonia requiring hospital treatment is $10 billion, a modest sum compared with the estimated costs of other diseases that one reads about in the same journal. Perhaps hospitals should try harder to raise their prices for treating pneumonia.

The paper is titled “Community-Acquired Pneumonia Requiring Hospitalization among U.S. Adults” – community being everywhere except the hospital, one of the commonest places to catch pneumonia. For eighteen months the authors enrolled all adults aged 18 and above whom physicians at five hospitals diagnosed as having pneumonia and needing to be admitted to the hospital. They then applied a variety of diagnostic tests for bacterial and viral pathogens to these patients, and compared the results with those from a sample of patients attending one of the hospitals for non-respiratory problems.

By assuming that all patients in the hospitals’ catchment areas who needed hospitalization for pneumonia were admitted to the five hospitals and no others, they worked out the population rate of admission for pneumonia: 2.48 cases per year per thousand of population at risk. Assuming (what is almost certainly false) that having had pneumonia once does not mean increased susceptibility to having it again, the figure means that an adult has about a one in seven chance of being admitted to the hospital for pneumonia at some time in his life.
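
The "one in seven" figure can be reproduced with simple arithmetic, assuming a nominal 60 years of adult life (my assumption, not the paper's) and, as noted above is almost certainly false, independence between years:

```python
# Annual rate of hospital admission for pneumonia: 2.48 per 1000 adults.
annual_rate = 2.48 / 1000
adult_years = 60  # assumed span of adult life, for illustration only

# Probability of at least one admission over a lifetime, assuming
# (falsely, as the text notes) that each year's risk is independent.
lifetime_risk = 1 - (1 - annual_rate) ** adult_years
print(round(lifetime_risk, 2))  # 0.14, i.e. about one in seven
```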

Unsurprisingly, the risk increased enormously with age. Between the ages of 50 and 64 you are 4 times more likely to need hospitalization for pneumonia than between the ages of 18 and 49, 9 times more likely between the ages of 65 and 79, and 25 times more likely after the age of 80. This makes me rather proud, the member of a small elite in fact, to have had pneumonia when I was only 27.


How Dangerous Is Synthetic Cannabis?

Wednesday, July 15th, 2015 - by Theodore Dalrymple
“USMC-100201-M-3762C-001” by Lance Cpl. Damany S. Coleman via Wikimedia Commons

Man is a creature that likes to change his mental state, even if it is for the worse. It is the change that he seeks, not the end result; Nirvana for him is a constantly fluctuating or dramatic state of mind. This, for obvious reasons, is particularly so for the bored and dissatisfied. In the prison in which I worked, for example, the prisoners would take any pills that they happened to find in the hope that they would have some — any — effect on their mental state, irrespective of the dangers that might be involved in producing it.

A recent article in the New England Journal of Medicine draws attention to an increase in the United States of reported side effects caused by the consumption of synthetic cannabinoids. These were first synthesised in the 1980s as research tools, but soon escaped the laboratory. (How, at whose instigation and for what reasons, one would like to know?) Now there are illicit chemical laboratories, mainly in Europe, producing an ever wider range of such cannabinoids, the law limping after them with its prohibitions, only for new compounds that are legal (until banned) to be synthesised almost immediately. The story is a tribute, in a way, to human ingenuity.

Between January and May 2014, poison control centers throughout the United States received 1085 calls concerning possible side effects of synthetic cannabinoids, but in the same period in 2015 they received 3572 such calls, which the CDC calculated as a 229 percent increase: correctly, for though 3572 is 329 percent of 1085, the increase over the baseline is 229 percent.
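
The two percentages are simply two ways of expressing the same change, as a quick check shows:

```python
calls_2014, calls_2015 = 1085, 3572

ratio = calls_2015 / calls_2014                    # 2015 as a multiple of 2014
increase = (calls_2015 - calls_2014) / calls_2014  # proportional increase over 2014

print(round(ratio * 100))     # 329: the later figure is 329 percent OF the earlier one
print(round(increase * 100))  # 229: a 229 percent INCREASE over the earlier one
```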

Of the 2961 calls concerning cases in which the medical condition and outcome were known, 335 were serious, which is to say life-threatening or resulting in significant disability, and a further 1407 necessitated medical treatment. The remaining 1219, therefore, were minor and quickly self-limiting.


This New Drug Could Dramatically Reduce Obesity

Tuesday, July 7th, 2015 - by Theodore Dalrymple


Novels should have good first lines: “Call me Ishmael”; “It was the best of times, it was the worst of times”; “Every happy family is happy in the same way”; “There was no possibility of going for a walk that day”; “It is a truth universally acknowledged… etc.” One does not expect scientific papers to grab one’s attention in quite the same way.

But a paper published in a recent edition of the New England Journal of Medicine opens interestingly, if not quite with the same literary flair as that of Melville, Dickens or Tolstoy, etc. It begins:

The increase in the rate of obesity, a chronic disease with serious health consequences, largely explains the recent trebling of the prevalence of type 2 diabetes.

This is an odd way of putting it, as if the authors themselves did not quite believe what they were saying. They would not have written “Cancer, a chronic disease with serious health consequences…” because to have cancer is to have bad health. Poverty is not caused by having too little money; poverty is having too little money.

Perhaps it doesn’t matter much what one calls obesity—disease or the wages of sin, or at least of weakness—because everyone is agreed that it is a bad thing and ought, if possible, to be reduced.

The authors conducted a double-blind trial against placebo of a drug called liraglutide, which was injected subcutaneously every day for 56 weeks. A total of 3731 fat patients (those with a high body-mass index), drawn from 191 clinics on 5 continents, were allocated either to the drug or to placebo (2487 to active treatment, 1244 to placebo), both groups being given standard advice about diet and exercise.

The first thing to note is that only 71.9 percent of the treatment group, and 64.4 percent of the placebo group, completed the trial. When one considers that compliance during trials, with the relatively intense attention they pay to patients, is considerably higher than in normal clinical conditions, the actual usefulness of the drug, at least in the epidemiological sense, is much reduced.


Do You Need Aggressive Cancer Screening After DVT or Pulmonary Embolus?

Friday, June 26th, 2015 - by Theodore Dalrymple

Image via Shutterstock

I like to have a prejudice overthrown from time to time: it helps to persuade me that my other prejudices are reasonable because I am a reasonable man who is open to the evidence. This is especially the case where the prejudice is one that I do not really care much about. I can give it up without much regret.

A paper in a recent edition of the New England Journal of Medicine overthrew one such minor prejudice, namely that the more thoroughly a person was investigated for an occult cancer after suffering an unexpected deep vein thrombosis (DVT) or pulmonary embolus (PE), the more likely that one would be found.

The association between spontaneous DVT and cancer has long been known. Its discoverer was Armand Trousseau, a great French physician of the middle of the nineteenth century, who noticed that people who suffered DVTs often had cancers such as that of the pancreas, in those days always diagnosed at post mortem. By a strange and tragic coincidence, Trousseau himself suffered a DVT and a few months later was dead – of pancreatic cancer. This is a story from medical history that, once heard, is never forgotten.

It has been found that about 10 per cent of people have diagnosable cancer within a year of having had a DVT or PE. So it seems to stand to reason that if people who suffer such events are investigated up hill and down dale immediately afterwards, some cancers will be caught earlier and treated, and therefore survival will be increased.

Some Canadian researchers tested this hypothesis. They randomly divided 854 patients with either DVT or PE into two groups: those who were tested for cancer by simple methods such as physical examination, blood tests and chest x-ray, and those who, in addition to all these, had CT scans of the abdomen and pelvis. They were then followed up for a year to see whether there was any difference in outcome.


Can Dual Prescriptions for Opioids and Tranquilizers Increase Your Risk of Dying?

Monday, June 22nd, 2015 - by Theodore Dalrymple


Some patients stick in one’s mind longer than others. I remember, for example, a Welshman with the sing-song intonation of the Welsh who told me on his arrival in hospital that he needed his diazzies [diazepam], and temazzies [temazepam] and his flurazzies [flurazepam] and his lorazzies [lorazepam] and his bromazzies [bromazepam] and his oxazzies [oxazepam]. All these are tranquilizing drugs of the benzodiazepine group, and need was not quite the word for his consumption of them.

On another occasion a patient, a heroin addict, accused me of murdering him because I would not prescribe diazepam for him. In fact I believed that almost precisely the opposite was the case: that if I prescribed for him what he wanted, his chances of dying by overdose, intentionally or unintentionally, would be much increased.

A paper in a recent edition of the British Medical Journal suggests that I was right. It examined the question of whether people prescribed opioid analgesics were more likely to die of overdose if they were also prescribed benzodiazepine tranquilizers than if they were not.

The authors examined the records of patients treated in the U.S. Veterans Administration system between the years 2004 and 2009. Their sample was of 422,786 patients who were prescribed opioids at some time in those years, a figure which I found prima facie astonishing, though perhaps wrongly. Of those 422,786, 2,400 died by overdose, and the authors compared the rate of prescribing of benzodiazepines in addition to opioids among those who had died by overdose with those who had not. What they found was that those who were prescribed both classes of drugs were 3.86 times as likely to die by overdose as those who were prescribed opioids alone. More surprising, perhaps, was the fact that those who had ever been prescribed benzodiazepines as well as opioids, but were not currently in receipt of a dual prescription, were 2.33 times more likely to die by overdose than those who had never received a dual prescription.
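
The kind of comparison behind the 3.86 figure can be sketched as a simple rate ratio. The counts and person-year denominators below are hypothetical, invented for illustration only, since the paper's raw tables are not reproduced here.

```python
# Hypothetical counts, for illustration only: overdose deaths and
# person-years of follow-up in each prescribing group.
deaths_dual, person_years_dual = 386, 100_000          # opioid + benzodiazepine
deaths_opioid_only, person_years_opioid = 100, 100_000  # opioid alone

rate_dual = deaths_dual / person_years_dual
rate_opioid = deaths_opioid_only / person_years_opioid

# Rate ratio: how many times higher the death rate is under dual prescribing.
rate_ratio = rate_dual / rate_opioid
print(round(rate_ratio, 2))  # 3.86 with these invented numbers
```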

For a number of reasons these results do not show, in the strictest pharmacological sense, that dual prescription causes additional deaths. The most important reason is that patients prescribed benzodiazepines (usually to calm them down, or to get them out of the doctor’s office without a terrible scene taking place) may be more psychologically disturbed than those not prescribed them, and therefore more likely to take overdoses in the first place. There is a dose-response relationship between the dose of benzodiazepine prescribed and the risk of death by overdose, but even this does not prove that dual prescription causes death, because it is likely that those prescribed higher doses of benzodiazepines are yet more psychologically disturbed.


Should You Demand Fresh Blood for Your Next Transfusion?

Tuesday, June 16th, 2015 - by Theodore Dalrymple
Image via Shutterstock

As everyone knows, fresh human blood rejuvenated Dracula no end: stored blood simply would not do for him.

Blood has long been a fluid endowed with mystic significance. Only comparatively recently in human history have people donated it to strangers with anything like a good grace. I once worked in a remote country, much given to drunkenness, where people would only give blood to their relatives, though fortunately they lived in large families. A man there once had an accident requiring rapid and repeated transfusion. His family had all been at a party. After transfusion, he himself was drunk.

It has long been thought that the longer human blood had been stored in blood banks, the less good its quality. There were two papers in a recent edition of the New England Journal of Medicine that tested this hypothesis, which (within limits, of course) turned out to be false, as so many hypotheses do. I think most people would instinctively feel, because it stands to reason, that fresh blood is best; we agree with Dracula.

Normally, blood is taken from donors, treated chemically, tested for viruses, and refrigerated. In practice it is not kept for more than six weeks, though this period is to an extent arbitrary and conventional. In the first trial, conducted in Canada, Britain, France, the Netherlands and Belgium, critically ill patients in need of blood transfusions were allocated randomly to receive either blood that was less than a week old or blood that was three weeks old.

One short passage in the paper was slightly troubling from the point of view of medical ethics: “At sites where deferred consent was permitted, written informed consent was obtained from the patient or surrogate decision maker as soon as possible after enrollment.” This appears to mean, unless I have misunderstood, that consent was retrospective, in other words that the patients were asked “Do you consent to having been experimented upon?” Even where such consent was not given, refusal was all but pointless, for they were then asked whether, nevertheless, they consented to the use of the data gathered in their case.


Should Unvaccinated Children Be Forced to Stay Home From School?

Monday, June 8th, 2015 - by Theodore Dalrymple
Image via Shutterstock

It so happened that I was preparing an introduction to an anthology of the writings of Edmund Burke when I read an article in a recent edition of the New England Journal of Medicine with the title “Social Distancing and the Unvaccinated.” One of Burke’s main contentions, at least according to me, is that politics are, or ought to be, more than the application of abstract first principles to practical affairs: and, as if to prove him right, along came this article.

The question was this: if it is permissible for parents to refuse to have their children immunized against preventable childhood diseases, does the state have the right, through one or other of its agencies, to exclude those children temporarily from school or other social institutions if there is an epidemic developing?

This question can be answered neither by a single abstract principle alone nor by appeal to scientific fact. The matter is complex, and on this occasion arose in the context of an outbreak of measles in California that soon spread and was in part occasioned by a reduction in the rate of immunization against the disease consequent upon the fraudulent activities of Dr Andrew Wakefield, a British doctor who claimed falsely to have discovered a link between the measles, mumps and rubella vaccine and the development of childhood autism.

In addition, two sets of parents in New York legally challenged the exclusion of their children from school because they were unimmunized against chickenpox after an infected child was found in the school. The article did not make clear whether the exclusion was primarily to protect the unimmunized children themselves or others in the school, or both (no immunization conferring 100 per cent immunity, and the more cases encountered the greater the likelihood of spread).

Scientific considerations are relevant to, but not probative of, any answer. The article, strangely, made no mention of the fact that parents’ rights, which we all accept within quite wide limits, nevertheless may impinge on those of their children, for example that to life itself: in which case parents’ rights have to be, or at any rate are, overridden.

If the parents’ decision not to immunize were one of life or death, either for their own or for other children, most (but perhaps not all) people would agree that their say in the matter should not count. But in fact it is rarely one of life and death, but rather one of transient illness with very occasional severe complications. Just how great is the risk of the latter depends on factors other than the parents’ decision not to immunize: measles is much less serious a disease in rich than in poor countries, for example. Moreover, some questions – for example, how long it is necessary to socially distance (Orwellian phrase) children in order to eliminate the risk of spreading – may not be completely answerable in the current state of knowledge.

How many days off school for one child equal the risk of contraction of a mild illness by another? There is no way of answering this question except by the exercise of judgment in particular circumstances. This is precisely what Burke would have predicted: what we decide cannot be determined by appealing to conflicting rights alone, the more fundamental of them prevailing. Sometimes one will prevail, sometimes another; there is no way of making politics a matter of such accurate calculation that no faculty of judgment, with its permanent possibility of error, will ever have to be exercised.

The article focuses on religious objectors to immunization, but they are probably outnumbered by Californian-style cranks, paranoiacs and believers in all you read on the internet.


How Many More Children Must Die from a Mutant Strain of Malaria?

Saturday, March 28th, 2015 - by Theodore Dalrymple

I am slightly ashamed of how much I liked Burma when I visited it nearly a third of a century ago. What a delight it was to go to a country in which there had been no progress for 40 years! Of course it was a xenophobic, klepto-socialist, Buddho-Marxist military dictatorship run by the shadowy, sinister and corrupt General Ne Win, and so, in theory, I should have hated it. Instead, I loved it and wished I could have stayed.

Since then there has probably been some progress, no doubt to the detriment of the country’s charm. Burma (now Myanmar) is slowly rejoining the rest of the world, and one consequence of this will be the more rapid advance of treatment-resistant malaria.

A recent paper in the Lancet examined the proportion of patients in Burma with malaria in whom the parasite, Plasmodium falciparum, was resistant to what is now the mainstay of treatment, artemisinin, a derivative of a herbal remedy known for hundreds of years to Chinese medicine. The results are not reassuring.

There was a time, not so very long ago, when the global eradication of malaria was envisaged by the WHO, and it looked for a time as if it might even be achieved. The means employed to eradicate it was insecticide that killed the mosquitoes that transmitted the malarial parasites, but a combination of pressure from environmentalists who were worried about the effects of DDT on the ecosystem and mosquito resistance to insecticides led to a recrudescence of the disease.

At the same time, unfortunately, resistance to antimalarial drugs emerged. Control of malaria, not its eradication, became the goal; an insect and a protozoan had defeated the best efforts of mankind. And this is no small matter: the last time resistance to a mainstay of treatment for malaria, chloroquine, emerged in South-East Asia, millions of people died as a result in Africa for lack of an alternative treatment.

What most surprised me about this paper was the method the authors used to determine the prevalence of resistance to artemisinin in the malarial parasites of Burma: for I remember the days when such prevalence was measured by the crude clinical method of giving patients chloroquine and estimating how many of them failed to get better.

The genetic mutations that make the parasite resistant to artemisinin have been recognized. The authors were able to estimate the percentage of patients with malarial parasites that had mutations associated with drug resistance. Nearly 40 percent of their sample had such mutations, and in the province nearest to India the figure was nearly half. The prospects for the geographical spread of resistance are therefore high.

Nor is this all. Artemisinin resistance was first recognized in Cambodia 10 years ago but the mutations in Burma were different, suggesting that resistance can arise spontaneously in different places at the same time. From the evolutionary point of view, this is not altogether surprising: selection pressure to develop resistance to artemisinin exists wherever the drug is widely used.

One way of slowing the spread of resistance is to treat malaria with more than one antimalarial drug at a time, but this will only retard the spread, not prevent it altogether. As with tuberculosis, it is likely that parasites resistant to all known drugs will emerge. The authors of the paper end on a pessimistic note:

The pace at which the geographical extent of artemisinin resistance is spreading is faster than the rate at which control and elimination measures are being developed and instituted, or new drugs being introduced.

In other words, deaths from malaria will increase rather than continue to decrease, which is what we have come to think of as the normal evolution of a disease.


Does Air Pollution Cause Poor Lung Function in Children?

Tuesday, March 17th, 2015 - by Theodore Dalrymple


When I was a boy in London I used to love what we called pea-soupers, that is to say fogs so thick that you couldn’t see your hand in front of your face at midday. They came every November and buses, with a man walking slowly before them to guide them, would loom up suddenly out of the gloom with their headlights like the glowing eyes of monsters. It took my father so long to drive to work that by the time he arrived it was time for him to come home again. I loved those fogs, but then the government went and spoiled the fun by passing the Clean Air Act. They never returned, those wonderful, exciting fogs.

Little did I know (or care) that those wonderful, exciting fogs killed thousands by bronchitis. But many years later I got bronchitis for the first and only time in my life from breathing the polluted winter air of Calcutta. I have also traveled in Communist countries where it seemed that the only thing the factories produced was pollution. I don’t need persuading that clean air is a good thing, not only aesthetically but also from the health point of view.

Southern California used to have some of the worst air pollution in the United States, but the quality of the air in Los Angeles has improved over the last two or three decades. Researchers who reported their findings in a recent edition of the New England Journal of Medicine conducted what is called a natural experiment: they estimated the pulmonary capacity of children who grew up as the level of pollution declined.

Most research on the health effects of air pollution has concentrated on deaths from cardiovascular disease among adults, usually of a certain age. But it is known that relatively poor lung function among younger people predicts cardiovascular disease later in life quite well. There is also an association between air pollution and early death from cardiovascular disease, though of course an association does not by itself prove causation. Does air pollution cause poor lung function in children?

The researchers measured lung function in three cohorts of children, 2120 in all, aged 11 to 15, who were of those ages between 1994 and 1998, 1997 and 2001, and 2007 and 2011. During this period, atmospheric pollution in Los Angeles declined markedly, as measured by levels of nitrogen dioxide, ozone and particulate matter.

Lung function, estimated by forced expiratory volume, improved (or at any rate increased) as air pollution declined. The proportion of children with lower than predicted function fell from 7.9 percent to 6.3 percent to 3.6 percent across the three cohorts. The improvement occurred among whites and Hispanics, boys and girls alike; even the asthmatic children were less incapacitated.

The authors thought that the improvement in lung function was likely to persist into adulthood or, to put it in a slightly less cheerful way, damage done in childhood by air pollution might be permanent. This is not quite so pessimistic as it sounds, for there is probably no age at which an improvement in the quality of the air is not capable of producing an improvement in health.

The main drawback of the study was that there was no control group, that is to say a population whose cohorts of children experienced no improvement in the quality of the air they breathed. Perhaps the function of their lungs would have shown the same improvement as well, though I rather doubt it.

One little semantic point about the paper: children aged 11 were referred to as students rather than as pupils. Perhaps this is because we nowadays expect people to grow up very quickly, but not very far.


What We Can Learn from Today’s Medical Experiment Failures

Thursday, February 5th, 2015 - by Theodore Dalrymple


In the past, medical journals, pharmaceutical companies and researchers themselves have been criticized for publishing selectively only their positive results, that is to say, the results that they wanted to find. This is important because accentuation of the positive can easily mislead the medical profession into believing that a certain drug or treatment is much more effective than it really is.

On reading the New England Journal of Medicine and other medical journals, I sometimes wonder whether the pendulum has swung too far in the other direction, in accentuating the negative. To read of so many bright ideas that did not work could act as a discouragement to others and even lead to that permanent temptation of ageing doctors, therapeutic nihilism. But the truth is the truth, and we must follow it wherever it leads.

A recent edition of the NEJM, for example, reported on three trials, two with negative results and one with mildly positive ones. The trials involved the early treatment of stroke, the prophylaxis of HIV infection, and the treatment of angina refractory to normal treatment (a growing problem). Only the last was successful, and it involved 104 patients, as against 6729 patients in the two unsuccessful trials.

The successful trial involved the insertion of a device that increased pressure in the coronary sinus, the vein that drains the blood from the heart itself. For reasons not understood, this seems to redistribute the blood flow in the heart muscle, thus relieving angina. In the trial, the new device reduced angina symptoms and improved the quality of life of the patients who received it compared with those who underwent a placebo operation. The trial was too small, however, to determine whether the device improved survival, though even if it did not, a reduction of symptoms and an improvement in the quality of life are worthwhile.

The trial of chemoprophylaxis of HIV was, by contrast, a total failure. It recruited 5029 young women in Africa, some of whom were given an anti-HIV drug in tablet or cream form while others were given placebos. The rates at which they became infected with HIV were compared, and no difference was found.

In large part this was because the patients did not take or use the pills or cream, though they claimed to have done so. A drug that few take is not of much use, however effective it might be in theory, especially in prophylaxis rather than treatment. And this points to another problem of pharmaceutical research: in drug trials that require patients’ compliance with a regimen, that compliance may be high during the trial itself (thanks to the researchers’ vigilance and enthusiasm) but low in “natural” conditions, when the patients are left to their own devices.

The trial of magnesium sulphate in the early treatment of stroke was also a failure. It had been suggested by experiments on animals that this chemical protects brain cells from degeneration after ischaemic stroke. It stood to reason, then, that it might improve the outcome of ischaemic stroke in humans, at least if given as soon as a stroke was suspected.

Alas, it was not to be. The trial, involving 1700 patients, showed that the early administration of magnesium sulphate did not improve outcome in the slightest: at 90 days there was no difference between those who received it and those who received placebo.

Is an idea bad just because it does not work? Could it be that those who discover something useful are just luckier than their colleagues? Perhaps there ought to be a Nobel Prize for failure, that is to say for the brightest idea that failed.



The Parents’ Impossible Choice: Increase Your Premature Baby’s Risk of Death or Blindness?

Wednesday, January 28th, 2015 - by Theodore Dalrymple


How informed is informed? What is the psychological effect of being told of every last possible complication of a treatment? Do all people react the same way to information, or does their reaction depend upon such factors as their intelligence, level of education, and cultural presuppositions, and if so does the informing doctor have to take account of them, and if so how and to what degree? An orthopedic surgeon once told me that obtaining informed consent from patients now takes him so long that he had had to reduce the number of patients that he treats.

An article in a recent edition of the New England Journal of Medicine extols the ethical glories of informed consent without much attention to its limits, difficulties and disadvantages.

It starts by referring to a trial of the level of oxygen in the air given to premature babies, of whom very large numbers are born yearly. Back in the 1940s it was thought that oxygen-rich air would compensate for premature babies’ poor respiratory systems, but early in the 1950s British doctors began to suspect, correctly, that these high levels of oxygen caused retinal damage leading to permanent blindness. Fifty years later, the optimal level of oxygen is still not known with certainty, and a trial was conducted that showed that while higher levels of oxygen caused an increased frequency of retinopathy, lower levels resulted in more deaths. The authors of the trial have been criticized because they allegedly did not inform the parents of the possibility that lower levels of oxygen might lead to decreased survival, which was reasonably foreseeable.

How reasonable does reasonability have to be? Many of the most serious consequences of a treatment are totally unexpected and not at all foreseeable (no one suspected that high levels of oxygen for premature babies would result in blindness, for example, and it took many years before this was realized). Ignorance is, after all, the main reason for conducting research.

But suppose the parents of premature babies had been asked to participate in a trial in which their offspring were to be allocated randomly to an increased risk of blindness or an increased risk of death. Surely this frankness would have been cruel, all the more so as the precise risks could not have been known in advance. Parents would feel guilty whether their babies died or went blind.


Now that the answer is known, more or less, parents can be asked to choose in the light of knowledge: but their informed consent will be agonizing because there is no correct answer. Personally, I would rather trust the doctor to act in my best interests in the light of his knowledge and experience. So far in life I have not had reason to regret this attitude, though I am aware that it has its hazards also. But

…why should they know their fate?

Since sorrow never comes too late,

And happiness too swiftly flies.

Thought would destroy their paradise.

No more; where ignorance is bliss,

‘Tis folly to be wise.

And I have often thought what medical ethicists would have made of the pioneers of anesthesia. They did not seek the informed consent of their patients, in part, but only in part, because they hadn’t much information to give. What moral irresponsibility, giving potentially noxious and even fatal substances to unsuspecting experimental subjects without warning them of the dangers!

And there are even some medical ethicists who think we should not take advantage of knowledge gained unethically. All operations should henceforth be performed without anesthesia, therefore.



Which Medical Treatments Today Will We Someday Regard as Barbaric?

Wednesday, January 14th, 2015 - by Theodore Dalrymple

Medical history is instructive, if for no other reason than that it might help to moderate somewhat the medical profession’s natural inclination to arrogance, hubris and self-importance. But the medical curriculum is now too crowded to teach it to medical students and practicing doctors are too busy with their work and keeping up-to-date to devote any time to it. It is only when they retire that doctors take an interest in it, as a kind of golf of the mind, and by then it is too late: any harm caused by their former hubris has already been done.

Until I read an article in a recent edition of the Lancet, I knew of only one eminent doctor who had been shot by his patient or a patient’s relative: the Nobel Prize-winning Portuguese neurologist Egas Moniz, who was paralyzed by a bullet in the back. It was he who first developed the frontal lobotomy, though he was also a pioneer of cerebral arteriography. As he was active politically during Salazar’s dictatorship, I am not sure whether his patient shot him for medical or political reasons, or for some combination of the two.


A Breakthrough Discovery in Treating Strokes?

Wednesday, January 7th, 2015 - by Theodore Dalrymple


Of late the New England Journal of Medicine has seemed like the burial ground of good ideas. Researchers follow a promising lead only to find that their new idea fails the crucial test of experience: and the difference between success and failure in research is made to appear as much a matter of chance or luck as of brilliance or skill.

In the first issue of the Journal for 2015, Dutch researchers from 16 different hospitals report an unequivocal success in the treatment of ischemic stroke.

Until now the only treatment of proven worth for patients with the kind of stroke that results from the blockage of a cerebral artery has been the infusion, within four and a half hours, of the drug alteplase, which dissolves the thrombus (and which is manufactured in Chinese hamster ovary cells). But even with the use of this drug the prognosis is not very good, and there are several contra-indications to its use.


Is Overeating the Primary Cause of Obesity?

Friday, December 26th, 2014 - by Theodore Dalrymple

Everyone knows the pleasures of having his prejudices confirmed by the evidence. The pleasures of changing one’s mind because of the evidence are somewhat less frequently experienced, though none the less real. Among those pleasures is that of self-congratulation on one’s own open-mindedness and rationality. It would therefore delight me to learn that my prejudice about obesity — that it is a natural consequence of overeating, which is to say of human weakness and self-indulgence — was false.

I therefore read with interest and anticipation a recent article in the New England Journal of Medicine with the title “Microbiota, Antibiotics, and Obesity.” The connection of antibiotics with obesity had not previously occurred to me; perhaps the real reason why so many people now have the appearance of beached whales was about to be revealed to me.


Which Accidental Deaths Were Most Common 400 Years Ago?

Tuesday, December 23rd, 2014 - by Theodore Dalrymple


It is easier to advise than to have or to retain a sense of proportion, especially when it is most needed. I have never known anyone genuinely comforted by the idea that others were worse off than he, which perhaps explains why complaint does not decrease in proportion to improvement in general conditions. And he would be a callow doctor who tried to console the parents of a dead child with the thought that, not much more than a century ago, an eighth of all children died before their first birthday.

Still, it is well that from time to time medical journals such as the Lancet should carry articles about medical history, for otherwise we might take our current state of knowledge for granted. Ingratitude, after all, is the mother of much discontent. To know how much we owe to our forebears keeps us from imagining that our ability to diagnose and cure is the consequence of our own peculiar brilliance, rather than simply because we came after so much effort down the ages.

A little article in the Lancet recently was written by two historians who are in the process of analyzing the results of 9000 coroners’ inquests into accidental deaths in Tudor England. It seems astonishing to me not only that such records should have survived for more than four centuries, but also that the state should have cared enough about the deaths of ordinary people to hold such inquests (coroners’ inquests had already been established for 400 years at the time of the Tudors). In other words, an importance was given to individual human life even before the doctrines of the Enlightenment took root: the soil was already fertile.


Does Brain Damage Make a Case for Ending Sports?

Tuesday, December 16th, 2014 - by Theodore Dalrymple


When I was working in Africa I read a paper that proved that intravenous corticosteroids were of no benefit in cerebral malaria. Soon afterwards I had a patient with that foul disease whom I had treated according to the scientific evidence, but who failed to respond, at least as far as his mental condition was concerned – which, after all, was quite important. To save the body without the mind is of doubtful value.

I gave the patient an injection of corticosteroid and he responded as if by miracle. What was I supposed to conclude? That, according to the evidence, it was mere coincidence? This I could not do: and I have retained a healthy (or is it unhealthy?) skepticism of large, controlled trials ever since. For in the large numbers of patients who take part in such trials there may be patients who react idiosyncratically, that is to say, differently from the rest.

A paper in a recent edition of the New England Journal of Medicine brought my experience with cerebral malaria back to mind. Animal experimentation had shown that progesterone, one of the class of steroids produced naturally by females, protected against the harmful effects of severe brain injury. The paper does not specify what exactly it was necessary to do to the experimental animals to reach this conclusion, but it does say that the effect has been demonstrated in several species. What is not said is often as eloquent as what is said.


Bad Brains: Can Science Figure Out How to Create a Good Person?

Saturday, December 13th, 2014 - by Theodore Dalrymple


If brevity is the soul of wit, verbosity is often the veil of ignorance. There was an instance of this in a recent article in the New England Journal of Medicine, with the title “Conduct Disorder and Callous-Unemotional Traits in Youth.” At considerable length and with much polysyllabic vocabulary, it told us much that we already knew (some of it true by definition). It mistook the illusion of progress for progress itself.

The paper starts with a definition:

The term “conduct disorder” refers to a pattern of repetitive rule-breaking behavior, aggression and disregard for others.

It sounds to me like a recipe for success in the modern art world, where “transgressive” is a term of the highest praise. But, says the paper, such problems have received increased attention recently, for two reasons: first, young people with conduct disorder sometimes “perpetrate violent events,” and second, the Diagnostic and Statistical Manual of Mental Disorders has modified its criteria for diagnosis. This latter seems to me an odd reason for increased attention. (Whose attention, by the way, the authors do not specify. The attention is like the pain in the room as described by Mrs Gradgrind. She thought there was a pain somewhere in the room, but couldn’t positively say that she had got it.)


Do Drug Trials Often Fail to Reveal the Harmful Side Effects They Discover?

Monday, December 8th, 2014 - by Theodore Dalrymple


The truth, the whole truth, and nothing but the truth: that is what one swears to tell in a court of law. One lies there and then. It is a noble ideal that one swears to, but one that in practice is impossible to live up to. Not only is the truth rarely pure and never simple, as Oscar Wilde said, but it is never whole, even in the most rigorous of scientific papers.

Not that scientific papers are often as rigorous as they could or should be. This is especially so in trials of drugs or procedures, the kind of investigation that is said to be the gold standard of modern medical evidence.

Considering that every doctor learns that the most fundamental principle of medical ethics is primum non nocere, first do no harm, it is strange how little interest doctors often take in the harms that their treatments do. Psychologically, this is not difficult to understand: every doctor wants to think he is doing good, and therefore has a powerful motive for disregarding or underestimating the harm that he does. But in addition, trials of drugs or procedures often fail to mention the harms caused by the drug or procedure that they uncover.

This is the royal road to over-treatment: it encourages doctors to be overoptimistic on their patients’ behalf. It also skews, or makes impossible, so-called informed consent: for if the harms are unknown even to the doctor, how can he inform the patient of them? The doctor becomes more a propagandist than an informant, and the patient cannot give his informed consent, because such consent involves weighing a known against an unknown.

A paper in a recent edition of the British Medical Journal examined a large series of papers to see whether they had fully reported adverse events caused by the drug or procedure under trial. It found that, even where a specific harm was anticipated and looked for, the reporting was inadequate in the great majority of cases.


Should Old People Drink More Alcohol & Less Milk?

Monday, November 24th, 2014 - by Theodore Dalrymple


In my youth the government encouraged people to eat more eggs and butter and drink more milk for the sake of their health. Perhaps it was the right advice after a prolonged period of war-induced shortage, but no one would offer, or take, the same advice today. Nutritional advice is like the weather and public opinion, which is to say highly changeable.

How quickly things go from being the elixir of life to deadly poison! A recent paper from Sweden in the British Medical Journal suggests that, at least for people aged between 49 and 75, milk now falls into the latter category, especially for women.

Milk was once thought to protect against osteoporosis, the demineralization of bone that often results in fractures. It stood (partially) to reason that it should, for milk contains many of the nutrients necessary for bone growth.

On the other hand, it also stood (partially) to reason that it should do more harm than good, for consumption of milk increases the level of galactose in the blood and galactose has been found to promote ageing in many animals, up to and including mice. If you want an old mouse quickly, inject a young one with galactose.

In other words, there is reason to believe both that the consumption of milk does good and that it does harm. Which is it? This is the question that the Swedish researchers set out to answer.


Is the Most Popular Treatment for Lower Back Pain No More Effective Than a Placebo?

Saturday, November 15th, 2014 - by Theodore Dalrymple


Low back pain is a condition so common that, intermittently, I suffer from it myself. It comes and goes for no apparent reason, lasting a few days at a time. Nearly 40 years ago I realized that, though I had liked to think of myself as nearly immune from nervous tension, anxiety could cause it.

I was in a far distant country and I had a problem with my return air ticket. At the same time I suffered agonizing low back pain, which I did not connect with the problem of my ticket. When the problem was sorted out, however, my back pain disappeared within two hours.

In general, low back pain is poorly correlated with X-ray and MRI findings. Epidemiological research shows that the self-employed are much less prone to it than employees, and also that those higher in the hierarchy suffer it less than those lower – and not because they do less physical labor. Now comes evidence, in a recent paper from Australia published in the Lancet, that the recommended first treatment usually given for such pain, acetaminophen, also known as paracetamol, is useless, or at least no better than placebo (which is not quite the same thing, of course).
