Theodore Dalrymple

Theodore Dalrymple, a physician, is a contributing editor of City Journal and the Dietrich Weismann Fellow at the Manhattan Institute. His new book is Second Opinion: A Doctor's Notes from the Inner City.

How Many More Children Must Die from a Mutant Strain of Malaria?

Saturday, March 28th, 2015 - by Theodore Dalrymple

I am slightly ashamed of how much I liked Burma when I visited it nearly a third of a century ago. What a delight it was to go to a country in which there had been no progress for 40 years! Of course it was a xenophobic, klepto-socialist, Buddho-Marxist military dictatorship run by the shadowy, sinister and corrupt General Ne Win, and so, in theory, I should have hated it. Instead, I loved it and wished I could have stayed.

Since then there has probably been some progress, no doubt to the detriment of the country’s charm. Burma (now Myanmar) is slowly rejoining the rest of the world, and one consequence of this will be the more rapid advance of treatment-resistant malaria.

A recent paper in the Lancet examined the proportion of patients in Burma with malaria in whom the parasite, Plasmodium falciparum, was resistant to what is now the mainstay of treatment, artemisinin, a derivative of a herbal remedy known for hundreds of years to Chinese medicine. The results are not reassuring.

There was a time, not so very long ago, when the global eradication of malaria was envisaged by the WHO, and it looked for a while as if it might even be achieved. The means employed was insecticide that killed the mosquitoes transmitting the malarial parasites; but a combination of pressure from environmentalists worried about the effects of DDT on the ecosystem, and growing mosquito resistance to insecticides, led to a recrudescence of the disease.

At the same time, unfortunately, resistance to antimalarial drugs emerged. Control of malaria, not its eradication, became the goal; an insect and a protozoan had defeated the best efforts of mankind. And this is no small matter: the last time resistance to a mainstay of malaria treatment, chloroquine, emerged in South-East Asia, millions of people died in Africa for lack of an alternative.

What most surprised me about this paper was the method the authors used to determine the prevalence of resistance to artemisinin in the malarial parasites of Burma: for I remember the days when such prevalence was measured by the crude clinical method of giving patients chloroquine and estimating how many of them failed to get better.

The genetic mutations that make the parasite resistant to artemisinin have been identified. The authors were therefore able to estimate the percentage of patients whose malarial parasites carried mutations associated with drug resistance. Nearly 40 percent of their sample had such mutations, and in the province nearest to India the figure was nearly half. The prospects for the geographical spread of resistance are therefore high.

Nor is this all. Artemisinin resistance was first recognized in Cambodia 10 years ago, but the mutations found in Burma were different, suggesting that resistance can arise independently in different places at the same time. From the evolutionary point of view, this is not altogether surprising: selection pressure to develop resistance to artemisinin exists wherever the drug is widely used.

One way of slowing the spread of resistance is to use more than one antimalarial drug at a time in the treatment of malaria, but this will only retard the spread, not prevent it altogether (a toy calculation at the end of this piece suggests why). As with tuberculosis, it is likely that parasites resistant to all known drugs will emerge. The authors of the paper end on a pessimistic note:

The pace at which the geographical extent of artemisinin resistance is spreading is faster than the rate at which control and elimination measures are being developed and instituted, or new drugs being introduced.

In other words, deaths from malaria will increase rather than continue to decrease, which is what we have come to think of as the normal evolution of a disease.
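The arithmetic behind that retard-but-not-prevent claim is worth a sketch. The following toy calculation is my own illustration, not from the Lancet paper; its mutation rates and exposure figures are invented round numbers chosen only to show the orders of magnitude involved:

```python
# Toy model (my own illustration, not from the Lancet paper): why giving
# two drugs at once retards, but cannot prevent, the spread of resistance.
# All figures are assumed round numbers for the sake of the arithmetic.

p_resist_a = 1e-10  # assumed chance a parasite mutates to resist drug A
p_resist_b = 1e-10  # assumed chance of a mutation resisting drug B

# If the two mutations arise independently, dual resistance requires both
# at once, so it is rarer by ten orders of magnitude than either alone.
p_dual = p_resist_a * p_resist_b  # 1e-20

# But the exposure is enormous: a heavy infection can carry on the order
# of 1e12 parasites, and there are roughly 2e8 malaria cases a year.
parasites_per_case = 1e12
cases_per_year = 2e8
exposures_per_year = parasites_per_case * cases_per_year  # 2e20

print(f"expected dual-resistant mutants per year: "
      f"{p_dual * exposures_per_year:.0f}")
# ~2 per year under these assumptions: combination therapy buys time,
# but with numbers this large, resistance to both drugs is a matter of
# when, not whether.
```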

Does Air Pollution Cause Poor Lung Function in Children?

Tuesday, March 17th, 2015 - by Theodore Dalrymple

When I was a boy in London I used to love what we called pea-soupers, that is to say fogs so thick that you couldn’t see your hand in front of your face at midday. They came every November and buses, with a man walking slowly before them to guide them, would loom up suddenly out of the gloom with their headlights like the glowing eyes of monsters. It took my father so long to drive to work that by the time he arrived it was time for him to come home again. I loved those fogs, but then the government went and spoiled the fun by passing the Clean Air Act. They never returned, those wonderful, exciting fogs.

Little did I know (or care) that those wonderful, exciting fogs killed thousands by bronchitis. But many years later I got bronchitis for the first and only time in my life from breathing the polluted winter air of Calcutta. I have also traveled in Communist countries where it seemed that the only thing the factories produced was pollution. I don't need persuading that clean air is a good thing, not only aesthetically but also from the health point of view.

Southern California used to have some of the worst air pollution in the United States, but the quality of the air in Los Angeles has improved over the last two or three decades. Researchers who reported their findings in a recent edition of the New England Journal of Medicine conducted what is called a natural experiment: they estimated the pulmonary capacity of children who grew up as the level of pollution declined.

Most research on the health effects of air pollution has concentrated on deaths from cardiovascular disease among adults, usually of a certain age. But it is known that relatively poor lung function among younger people predicts cardiovascular disease later in life quite well. There is also an association between air pollution and early death from cardiovascular disease, though of course an association does not by itself prove causation. Does air pollution cause poor lung function in children?

The researchers measured lung function in three cohorts of children, 2120 in all, aged 11 to 15: the first cohort passed through those ages between 1994 and 1998, the second between 1997 and 2001, and the third between 2007 and 2011. During this period, atmospheric pollution in Los Angeles declined markedly, as measured by levels of nitrogen dioxide, ozone and particulate matter.

Lung function, estimated by forced expiratory volume, improved (or at any rate increased) as air pollution declined. The proportion of children with lower than predicted function fell from 7.9 percent to 6.3 percent to 3.6 percent across the three cohorts. The improvement occurred among whites and Hispanics, boys and girls, and even among asthmatics, who were less incapacitated.

The authors thought that the improvement in lung function was likely to persist into adulthood or, to put it in a slightly less cheerful way, damage done in childhood by air pollution might be permanent. This is not quite so pessimistic as it sounds, for there is probably no age at which an improvement in the quality of the air is not capable of producing an improvement in health.

The main drawback of the study was that there was no control group, that is to say a population whose cohorts of children experienced no improvement in the quality of the air they breathed. Perhaps the function of their lungs would have shown the same improvement, though I rather doubt it.

One little semantic point about the paper: children aged 11 were referred to as students rather than as pupils. Perhaps this is because we nowadays expect people to grow up very quickly, but not very far.

What We Can Learn from Today’s Medical Experiment Failures

Thursday, February 5th, 2015 - by Theodore Dalrymple

In the past, medical journals, pharmaceutical companies and researchers themselves have been criticized for selectively publishing only their positive results, that is to say, the results that they wanted to find. This is important because accentuation of the positive can easily mislead the medical profession into believing that a certain drug or treatment is much more effective than it really is.

On reading the New England Journal of Medicine and other medical journals, I sometimes wonder whether the pendulum has swung too far in the other direction, in accentuating the negative. To read of so many bright ideas that did not work could act as a discouragement to others and even lead to that permanent temptation of ageing doctors, therapeutic nihilism. But the truth is the truth, and we must follow it wherever it leads.

A recent edition of the NEJM, for example, reported on three trials, two with negative results and one with mildly positive ones. The trials involved the early treatment of stroke, the prophylaxis of HIV infection, and the treatment of angina refractory to normal treatment (a growing problem). Only the last was successful, but it involved 104 patients as against 6729 patients in the two unsuccessful ones.

The successful trial involved the insertion of a device that increases pressure in the coronary sinus, the vein that drains blood from the heart itself. For reasons not understood, this seems to redistribute blood flow in the heart muscle, thus relieving angina. In the trial, the new device reduced angina symptoms and improved the quality of life of the patients who received it, compared with those who underwent a placebo operation. The trial was too small, however, to determine whether the device improved survival, though even if it did not, a reduction in symptoms and an improvement in the quality of life are worthwhile.

The trial of chemoprophylaxis against HIV was, by contrast, a total failure. It recruited 5029 young women in Africa, some of whom were given an anti-HIV drug in tablet or cream form while others were given placebos. The rates at which the two groups became infected with HIV were compared, and no difference was found.

In large part this was because the patients did not take or use the pills or cream, though they claimed to have done so. A drug that few take is not of much use, however effective it might be in theory, especially in prophylaxis rather than treatment. And this points to another problem of pharmaceutical research: in drug trials that require patients' compliance with a regime, that compliance may be high during the trial itself (thanks to the researchers' vigilance and enthusiasm) but low in “natural” conditions, when patients are left to their own devices.

The trial of magnesium sulphate in the early treatment of stroke was also a failure. Experiments on animals had suggested that this chemical protects brain cells from degeneration after ischaemic stroke. It stood to reason, then, that it might improve the outcome of ischaemic stroke in humans, at least if given as soon as a stroke was suspected.

Alas, it was not to be. The trial, involving 1700 patients, showed that the early administration of magnesium sulphate did not improve the outcome in the slightest. At 90 days there was no difference between those who had received it and those who had received placebo.

Is an idea bad just because it does not work? Could it be that those who discover something useful are just luckier than their colleagues? Perhaps there ought to be a Nobel Prize for failure, that is to say for the brightest idea that failed.

The Parents’ Impossible Choice: Increase Your Premature Baby’s Risk of Death or Blindness?

Wednesday, January 28th, 2015 - by Theodore Dalrymple

How informed is informed? What is the psychological effect of being told of every last possible complication of a treatment? Do all people react the same way to information, or does their reaction depend upon such factors as their intelligence, level of education, and cultural presuppositions? If so, does the informing doctor have to take account of them, and if so, how and to what degree? An orthopedic surgeon once told me that obtaining informed consent from patients now takes him so long that he has had to reduce the number of patients he treats.

An article in a recent edition of the New England Journal of Medicine extols the ethical glories of informed consent without much attention to its limits, difficulties and disadvantages.

It starts by referring to a trial of the level of oxygen in the air given to premature babies, of whom very large numbers are born yearly. Back in the 1940s it was thought that oxygen-rich air would compensate for premature babies' immature respiratory systems, but early in the 1950s British doctors began to suspect, correctly, that these high levels of oxygen caused retinal damage leading to permanent blindness. Fifty years later, the optimal level of oxygen is still not known with certainty, and a trial was conducted which showed that while higher levels of oxygen caused an increased frequency of retinopathy, lower levels resulted in more deaths. The authors of the trial have been criticized because they allegedly did not inform the parents of the possibility, which was reasonably foreseeable, that lower levels of oxygen might lead to decreased survival.

How reasonable does reasonability have to be? Many of the most serious consequences of a treatment are totally unexpected and not at all foreseeable (no one suspected that high levels of oxygen for premature babies would result in blindness, for example, and it took many years before this was realized). Ignorance is, after all, the main reason for conducting research.

But suppose parents of premature babies had been asked to participate in a trial in which their offspring were to be allocated randomly to an increased risk of blindness or an increased risk of death. Surely this frankness would have been cruel, all the more so as the precise risks could not have been known in advance. Parents would feel guilty whether their babies died or went blind.

Now that the answer is known, more or less, parents can be asked to choose in the light of knowledge: but their informed consent will be agonizing because there is no correct answer. Personally, I would rather trust the doctor to act in my best interests in the light of his knowledge and experience. So far in life I have not had reason to regret this attitude, though I am aware that it has its hazards too. But

…why should they know their fate?

Since sorrow never comes too late,

And happiness too swiftly flies.

Thought would destroy their paradise.

No more; where ignorance is bliss,

‘Tis folly to be wise.

And I have often thought what medical ethicists would have made of the pioneers of anesthesia. They did not seek the informed consent of their patients, in part, but only in part, because they hadn’t much information to give. What moral irresponsibility, giving potentially noxious and even fatal substances to unsuspecting experimental subjects without warning them of the dangers!

And there are even some medical ethicists who think we should not take advantage of knowledge gained unethically. All operations should henceforth be performed without anesthesia, therefore.

Which Medical Treatments Today Will We Someday Regard as Barbaric?

Wednesday, January 14th, 2015 - by Theodore Dalrymple

Medical history is instructive, if for no other reason than that it might help to moderate somewhat the medical profession's natural inclination to arrogance, hubris and self-importance. But the medical curriculum is now too crowded to teach it to medical students, and practicing doctors are too busy with their work and keeping up-to-date to devote any time to it. It is only when they retire that doctors take an interest in it, as a kind of golf of the mind, and by then it is too late: any harm caused by their former hubris has already been done.

Until I read an article in a recent edition of the Lancet, I knew of only one eminent doctor who had been shot by his patient or a patient’s relative: the Nobel Prize-winning Portuguese neurologist Egas Moniz, who was paralyzed by a bullet in the back. It was he who first developed the frontal lobotomy, though he was also a pioneer of cerebral arteriography. As he was active politically during Salazar’s dictatorship, I am not sure whether his patient shot him for medical or political reasons, or for some combination of the two.

A Breakthrough Discovery in Treating Strokes?

Wednesday, January 7th, 2015 - by Theodore Dalrymple

Of late the New England Journal of Medicine has seemed like the burial ground of good ideas. Researchers follow a promising lead only to find that their new idea fails the crucial test of experience: and the difference between success and failure in research is made to appear as much a matter of chance or luck as of brilliance or skill.

In the first issue of the Journal for 2015, Dutch researchers from 16 different hospitals report an unequivocal success in the treatment of ischemic stroke.

Until now the only treatment of proven worth for the kind of stroke that results from the blockage of a cerebral artery has been the infusion, within four and a half hours, of the drug called alteplase, which dissolves thrombus (and which is manufactured in Chinese hamster ovary cells). But even with the use of this drug the prognosis is not very good, and there are several contra-indications to its use.

Is Overeating the Primary Cause of Obesity?

Friday, December 26th, 2014 - by Theodore Dalrymple

Everyone knows the pleasures of having his prejudices confirmed by the evidence. The pleasures of changing one’s mind because of the evidence are somewhat less frequently experienced, though none the less real. Among those pleasures is that of self-congratulation on one’s own open-mindedness and rationality. It would therefore delight me to learn that my prejudice about obesity — that it is a natural consequence of overeating, which is to say of human weakness and self-indulgence — was false.

I therefore read with interest and anticipation a recent article in the New England Journal of Medicine with the title “Microbiota, Antibiotics, and Obesity.” The connection of antibiotics with obesity had not previously occurred to me; perhaps the real reason why so many people now have the appearance of beached whales was about to be revealed to me.

Which Accidental Deaths Were Most Common 400 Years Ago?

Tuesday, December 23rd, 2014 - by Theodore Dalrymple

It is easier to advise a sense of proportion than to have or to retain one, especially when it is most needed. I have never known anyone genuinely comforted by the idea that others were worse off than he, which perhaps explains why complaint does not decrease in proportion to improvement in general conditions. And he would be a callow doctor who tried to console the parents of a dead child with the thought that, not much more than a century ago, an eighth of all children died before their first birthday.

Still, it is well that from time to time medical journals such as the Lancet should carry articles about medical history, for otherwise we might take our current state of knowledge for granted. Ingratitude, after all, is the mother of much discontent. To know how much we owe to our forebears keeps us from imagining that our ability to diagnose and cure is the consequence of our own peculiar brilliance, rather than simply of our having come after so much effort down the ages.

A little article recently published in the Lancet was written by two historians who are in the process of analyzing the results of 9000 coroners' inquests into accidental deaths in Tudor England. It seems astonishing to me not only that such records should have survived for more than four centuries, but also that the state should have cared enough about the deaths of ordinary people to hold such inquests (coroners' inquests had already been established for 400 years at the time of the Tudors). In other words, an importance was given to individual human life even before the doctrines of the Enlightenment took root: the soil was already fertile.

Does Brain Damage Make a Case for Ending Sports?

Tuesday, December 16th, 2014 - by Theodore Dalrymple

When I was working in Africa I read a paper that proved that intravenous corticosteroids were of no benefit in cerebral malaria. Soon afterwards I had a patient with that foul disease whom I had treated according to the scientific evidence, but who failed to respond, at least as far as his mental condition was concerned – which, after all, was quite important. To save the body without the mind is of doubtful value.

I gave the patient an injection of corticosteroid and he responded as if by miracle. What was I supposed to conclude? That, according to the evidence, it was mere coincidence? This I could not do: and I have retained a healthy (or is it unhealthy?) skepticism of large, controlled trials ever since. For in the large numbers of patients who take part in such trials there may be patients who react idiosyncratically, that is to say, differently from the rest.

A paper in a recent edition of the New England Journal of Medicine brought my experience with cerebral malaria back to mind. Animal experimentation had shown that progesterone, one of the class of steroids produced naturally by females, protected against the harmful effects of severe brain injury. The paper does not specify what exactly had to be done to the experimental animals to reach this conclusion, but it does say that the effect has been demonstrated in several species. What is not said is often as eloquent as what is said.

Bad Brains: Can Science Figure Out How to Create a Good Person?

Saturday, December 13th, 2014 - by Theodore Dalrymple

If brevity is the soul of wit, verbosity is often the veil of ignorance. There was an instance of this in a recent article in the New England Journal of Medicine, with the title “Conduct Disorder and Callous-Unemotional Traits in Youth.” At considerable length and with much polysyllabic vocabulary, it told us much that we already knew (some of it true by definition). It mistook the illusion of progress for progress itself.

The paper starts with a definition:

The term “conduct disorder” refers to a pattern of repetitive rule-breaking behavior, aggression and disregard for others.

It sounds to me like a recipe for success in the modern art world, where “transgressive” is a term of the highest praise. But, says the paper, such problems have received increased attention recently, for two reasons: first, young people with conduct disorder sometimes “perpetrate violent events,” and second, the Diagnostic and Statistical Manual of Mental Disorders has modified its criteria for diagnosis. This latter seems to me an odd reason for increased attention. (Whose attention, by the way, the authors do not specify. The attention is like the pain in the room as described by Mrs Gradgrind. She thought there was a pain somewhere in the room, but couldn’t positively say that she had got it.)

Do Drug Trials Often Fail to Reveal the Harmful Side Effects They Discover?

Monday, December 8th, 2014 - by Theodore Dalrymple

The truth, the whole truth, and nothing but the truth: that is what one swears to tell in a court of law. One lies there and then. It is a noble ideal that one swears to, but one that in practice is impossible to live up to. Not only is the truth rarely pure and never simple, as Oscar Wilde said, but it is never whole, even in the most rigorous of scientific papers.

Not that scientific papers are often as rigorous as they could or should be. This is especially so in trials of drugs or procedures, the kind of investigation that is said to be the gold standard of modern medical evidence.

Considering that every doctor learns that the most fundamental principle of medical ethics is primum non nocere, first do no harm, it is strange how little interest doctors often take in the harms that their treatments do. Psychologically, this is not difficult to understand: every doctor wants to think he is doing good, and therefore has a powerful motive for disregarding or underestimating the harm that he does. But in addition, published trials of drugs or procedures often fail to mention the harms that they uncover.

This is the royal road to over-treatment: it encourages doctors to be overoptimistic on their patients' behalf. It also skews, or makes impossible, so-called informed consent: for if the harms are unknown even to the doctor, how can he inform the patient of them? The doctor becomes more a propagandist than an informant, and the patient cannot give his informed consent, because such consent involves weighing up a known against an unknown.

A paper in a recent edition of the British Medical Journal examined a large series of papers to see whether they had fully reported adverse events caused by the drug or procedure under trial. It found that, even where a specific harm was anticipated and looked for, the reporting was inadequate in the great majority of cases.

Should Old People Drink More Alcohol & Less Milk?

Monday, November 24th, 2014 - by Theodore Dalrymple

In my youth the government encouraged people to eat more eggs and butter and drink more milk for the sake of their health. Perhaps it was the right advice after a prolonged period of war-induced shortage, but no one would offer, or take, the same advice today. Nutritional advice is like the weather and public opinion, which is to say highly changeable.

How quickly things go from being the elixir of life to deadly poison! A recent paper from Sweden in the British Medical Journal suggests that, at least for people aged between 49 and 75, milk now falls into the latter category, especially for women.

Milk was once thought to protect against osteoporosis, the demineralization of bone that often results in fractures. It stood (partially) to reason that it should, for milk contains many of the nutrients necessary for bone growth.

On the other hand, it also stood (partially) to reason that it should do more harm than good, for consumption of milk increases the level of galactose in the blood and galactose has been found to promote ageing in many animals, up to and including mice. If you want an old mouse quickly, inject a young one with galactose.

In other words, there is reason to believe both that the consumption of milk does good and that it does harm. Which is it? This is the question that the Swedish researchers set out to answer.

Is the Most Popular Treatment for Lower Back Pain No More Effective Than a Placebo?

Saturday, November 15th, 2014 - by Theodore Dalrymple

Low back pain is a condition so common that, intermittently, I suffer from it myself. It comes and goes for no apparent reason, lasting a few days at a time. Nearly 40 years ago I realized that, though I had liked to think of myself as nearly immune from nervous tension, anxiety could cause it.

I was in a far distant country and I had a problem with my return air ticket. At the same time I suffered agonizing low back pain, which I did not connect with the problem of my ticket. When the problem was sorted out, however, my back pain disappeared within two hours.

In general, low back pain is poorly correlated with X-ray and MRI findings. Epidemiological research shows that the self-employed are much less prone to it than employees, and also that those higher in the hierarchy suffer it less than those lower – and not because they do less physical labor. Now comes evidence, in a recent paper from Australia published in the Lancet, that the recommended first treatment usually given for such pain, acetaminophen, also known as paracetamol, is useless, or at least no better than placebo (which is not quite the same thing, of course).

Do SSRI Antidepressants Increase Suicidal Thoughts?

Monday, November 10th, 2014 - by Theodore Dalrymple

Hope springs eternal, but so do financial crises in hospitals. Once, while researching the history of the hospital in which I was working at the time, I discovered that it had been so short of money in the 1840s that it had been forced to sell some land to a railway company that wanted to build a line near the hospital. The physicians were against the sale, for they feared the noise of the trains might kill the patients, “especially the brain cases.” They were overruled, and when the first train went by they observed the patients anxiously to monitor the adverse effect on them. There was none.

However, psychiatric hospitals often seem to be built near railway lines, which act as a magnet for suicidal patients. Patients of such hospitals who commit suicide on the premises usually do so by hanging, while those who do so outside usually jump from a tall building or throw themselves in front of a train.

A paper from Germany in a recent edition of the British Journal of Psychiatry analyzes the characteristics of 100 suicides of psychiatric patients who threw themselves in front of trains conveniently near to the hospitals in which they were resident at the time. It took the authors ten years to collect their sample, whom they compared with other patients of the same age, sex and psychiatric diagnosis who did not throw themselves in front of trains. The object of the exercise was to see whether such suicides could be predicted and therefore prevented. The authors rather laconically remark that when a man throws himself in front of a train — and nearly two-thirds of the cases were men — it is likely that he really means to die.

Quarantine Nurses & Doctors Returning From Treating Ebola in Africa?

Monday, November 3rd, 2014 - by Theodore Dalrymple

There is no new thing under the sun, least of all panic at the approach of an epidemic of a deadly disease. In 1720, the preface to Loimologia, Nathaniel Hodges’ account of the Great Plague of London in 1665, first published in Latin in 1672, referred to the outbreak of plague in Marseilles:

The Alarm we have of late been justly under from a most terrible Destroyer in a neighbouring Kingdom, very naturally calls for all possible Precautions against its Invasion and Progress here…

In fact, though no one was to know it, no epidemic of plague was ever to occur in Western Europe again; and it is doubtful whether the precautions referred to made much difference.

The death rate from the Ebola virus is probably greater than that from bubonic plague, though of course the plague spread much faster and killed far more people in total than Ebola ever has: and at least we, unlike our plague-ridden ancestors, know the causative organism of the Ebola disease, even if we are not certain how the virus first came to infect Mankind.

Do You Have Confidence in Doctors?

Sunday, October 26th, 2014 - by Theodore Dalrymple

You might have supposed that trust in the medical profession would have risen as medicine became more effective at warding off death and disease, but you would have been mistaken. In fact, precisely the reverse has happened throughout the western world, but particularly in the United States. Half a century ago, nearly three quarters of Americans had confidence in the medical profession qua profession; now only about a third do so.

According to international surveys reported in an article in a recent New England Journal of Medicine, Americans are among the most mistrustful of doctors of any western people. Asked whether, all things considered, doctors in their country could be trusted, 58 percent of Americans answered in the affirmative; by contrast, 83 percent of the Swiss answered positively. Positive answers were returned by 79, 78, and 76 percent of the Danish, Dutch and British respectively. Americans ranked 24th of the 29 nations polled in their trust of doctors. Furthermore, just under half of Americans in the lowest third of the income range thought that doctors in general could be trusted, and younger Americans were less likely to trust their doctors than older ones.

Curiously enough, though, Americans were among the nations most satisfied with their last encounter with a doctor. Only the Swiss and Danes were more satisfied than they, and then not by very much (64, 61 and 56 percent respectively). In other countries, then, people were more likely to trust doctors in general than to be satisfied by their last visit to the doctor; in America the two proportions were about the same.

What, if anything, does this mean?

How Informed Is Informed Consent and Does It Matter?

Sunday, October 12th, 2014 - by Theodore Dalrymple

How informed is informed consent, and does it matter much, or as much as medical ethicists say it does? Do doctors have a duty only to make sure that their message is sent, or also a duty to make sure that it is received, and, if received, that it is retained? The General Confession in the Book of Common Prayer refers to those things which we have done and ought not to have done, and those things which we ought to have done and have not done. When it comes to informed consent, there are also those things which patients have heard and ought not to have heard, and those things which they ought to have heard and have not.

This is demonstrated by a recent paper in the British Medical Journal. Patients with stable angina in ten hospitals in the United States were asked what they thought the benefits were of the percutaneous coronary procedures they were about to undergo. The scientific evidence on this matter is more or less universally accepted: such procedures improve angina symptoms but do not increase life expectancy or reduce the rate of heart attacks.

Would Free Contraceptives Reduce Teen Pregnancies and Abortions?

Sunday, October 5th, 2014 - by Theodore Dalrymple

One of the more extraordinary experiences of my medical career was injecting rural African women with a long-term contraceptive in a Catholic mission hospital under a portrait of Pope Paul VI. The contraceptive was handed to me by an aged Swiss nun who was otherwise deeply orthodox, but who recognized that worn-out women who had already had ten children were in danger of their lives if they had any more. I refrained from remarking on the paradox: I had already learned that there is more to life than intellectual consistency.

In the west, of course, the problem of unwanted pregnancy is different: it arises mainly among teenagers of what used to be called the lower classes. Pregnancy rates among such teenagers in the United States are among the highest in the western world. According to a paper in a recent edition of the New England Journal of Medicine, such pregnancies cost the United States $10 billion a year: to me a suspiciously round figure, especially as it includes the cost of education foregone by the pregnant girls. Perhaps I am a cynic, but I am not altogether sanguine about the economic value of modern education. Be that as it may, the Centers for Disease Control and Prevention (CDC) has set a goal of reducing teenage pregnancy by 20 percent between 2009 and 2015.

An experiment conducted in St Louis provided 1404 girls aged between 14 and 19 with free contraceptive advice and free long-acting contraceptive devices, to see whether such provision would reduce the rate of unwanted pregnancy among them. The comparison group consisted of similar girls in the rest of the United States who were not included in the experiment.

How Cardiologists Have Been Wasting Time for Years

Tuesday, September 30th, 2014 - by Theodore Dalrymple

We live in the age of the acronym. To read a medical journal is sometimes like trying to decipher a code; once, when I was a judge in a competition of medical poetry, I read a poem composed entirely of figures and acronyms:

RTA [road traffic accident]

ETA [expected time of arrival] 13.20 hrs

GCS [Glasgow Coma Scale] 3…

The last line of the poem, inevitably, was:

RIP

Sometimes one has the impression that the acronym has been devised before the thing it is attached to has been decided. In a recent paper in the New England Journal of Medicine, for example, I came across the acronym SWEDEHEART. It stood for the Swedish Web System for Enhancement and Development of Evidence-based Care in Heart Disease Evaluated According to Recommended Therapies. If the web system came before the acronym, however, you could see why the latter was necessary, the former being longer than the average tin-pot dictator's list of honorific titles.

The paper in which the acronym occurred was yet another in which a common medical practice was shown to be valueless, or very nearly so. It turns out yet again that doctors do things not because they do the patients any good, but because they can do them.

How Did King Richard III Die?

Monday, September 22nd, 2014 - by Theodore Dalrymple

When Hamlet tells Claudius that Polonius, whom he has just killed, is at dinner being eaten rather than eating, Claudius is puzzled. Hamlet explains that the worms are eating Polonius, and Claudius, still puzzled, asks Hamlet what he means by this:

Nothing [replied Hamlet] but to show how a king may go a progress through the guts of a beggar.

In other words, we all come to the same end.

I thought of this passage when I read a paper about the death of Richard III in a recent edition of the Lancet. His remains were recently found buried under a car park in Leicester, a dismal provincial town in England, one of many ruined by planned modernization. The site of the car park had once been a priory.

A long historical battle has raged over Richard’s real nature, whether he was hero or villain as per Shakespeare (few people think he might have been something in between the two). Certainly his remains, now more than 500 years old, have not been treated with undue respect: a team of forensic pathologists and archaeologists have examined them minutely for clues as to how he died at the battle of Bosworth Field in 1485.

When Is a Public Health Emergency Really an Emergency?

Tuesday, September 16th, 2014 - by Theodore Dalrymple

The question is important because public health emergencies allow governments to ignore the usual restrictions or restraints upon their actions. In public health emergencies, governments can override property rights and abrogate all kinds of civil liberties, such as freedom of movement. They can commandeer our goods and tell us where to go and where to stay. They do so only for our own good: health being the highest good, of course.

A recent edition of the New England Journal of Medicine discusses the issue in the context of the declaration of a public health emergency in Massachusetts by the governor of that state, Deval Patrick.

In most people’s minds, no doubt, a public health emergency would be something like the Black Death, the epidemic of plague that wiped out a third of Europe’s population in the fourteenth century. A natural disaster of large proportions might also count, not only because of the death and injury caused directly by the disaster, but by the epidemics which often follow such disasters.

What, then, was the public health emergency that “obliged” Patrick to declare that it existed and that he could and should take uncontrolled administrative measures to halt it?

Is It ‘Unjust’ for Doctors to Die from Ebola?

Sunday, September 7th, 2014 - by Theodore Dalrymple

When I visited the John F. Kennedy hospital in Monrovia during the long Liberian Civil War, it had been destroyed by a kind of conscientious vandalism. Every last piece of hospital furniture and equipment had been disabled, the wheels sawn off trolleys and gurneys, the electronics smashed. This was not just the result of bombardment during the war but of willful and thorough dismantlement.

There were no patients and no staff in the hospital; it was a ghost establishment, completely deserted. I was severely criticized for suggesting in a book that the painstaking destruction of the hospital, which shortly before had performed open heart surgery, was of symbolic significance.

I was pleased to see from an article in a recent edition of the New England Journal of Medicine that it had re-opened, but saddened to see that its problems were now of an even more terrifying nature than those it encountered during the civil war: for the hospital is at the center of the epidemic of Ebola virus disease, and two of its senior physicians, Sam Brisbane and Abraham Borbor, have recently died of it. The article in the journal lamented their passing and praised them for their bravery in not deserting their posts; both knew the dangers of staying.

Do You Really Need That Colonoscopy?

Tuesday, September 2nd, 2014 - by Theodore Dalrymple

Every few months I receive a computerized invitation from my doctor asking me to have a colonoscopy to screen for polyps in my bowel. I always tell myself that I am too busy just now, I will have it another time. But really I don’t want to have it at all, and I know that when the next invitation comes I will be too busy then as well.

I am also eager to find a rational reason, or at least a rationalization, for my refusal. I thought I found it in a paper from Norway in a recent edition of the New England Journal of Medicine.

The authors examined the death rate from colorectal cancer among the 40,826 patients in Norway who had polyps removed at colonoscopy between 1993 and 2007 (the records are more or less complete). They compared the number of deaths in that population with the number expected from the disease in a general population of the same age. The paper reports that 398 deaths were expected and 383 were observed.

This small difference does not mean that colonoscopy does not work in preventing death from colorectal cancer, of course. This is because the relevant comparison is with people whose polyps were not removed at colonoscopy, rather than with the population as a whole. (The difference is in any case too small to signify, as the rough calculation below suggests.)
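For the numerically inclined, here is a minimal sketch of my own, not from the NEJM paper: the standardized mortality ratio implied by the figures quoted above, with a rough confidence interval from a simple normal approximation:

```python
# A back-of-envelope check (my own illustration, not from the NEJM paper):
# the standardized mortality ratio (SMR) for the figures quoted in the
# article, 383 observed deaths against 398 expected.
import math

observed = 383    # deaths among the 40,826 polypectomy patients
expected = 398.0  # deaths expected at general-population rates

smr = observed / expected  # about 0.96

# Rough 95% confidence interval via a normal approximation to the
# Poisson distribution of the observed count.
half_width = 1.96 / math.sqrt(observed)
ci_low, ci_high = smr * (1 - half_width), smr * (1 + half_width)

print(f"SMR = {smr:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
# SMR = 0.96, 95% CI (0.87, 1.06): the interval straddles 1.0, so the
# data are entirely compatible with no difference from the population.
```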

The 40,826 patients who had polyps removed at colonoscopy were not, however, a random sample of the adult population, because Norway does not have a screening program for colonic polyps. The patients had colonoscopy in the first place because they were symptomatic, for example from rectal bleeding. They were therefore much more likely than the rest of the population to suffer from polyps or cancer.

Does a Popular Antibiotic Raise the Risk of Heart Attack?

Tuesday, August 26th, 2014 - by Theodore Dalrymple

I happened to notice recently a report in a French newspaper of a study just published in the British Medical Journal, a study that had purportedly shown an increased incidence of cardiac death in people who took an antibiotic called clarithromycin. As I had myself taken this drug a couple of times in my life (though not, of course, quite as prescribed, because no one ever takes drugs quite as prescribed), I felt a certain personal interest in the question.

I needn’t have worried because the paper, from Denmark, claimed that the increased risk of cardiac death occurred only while the patient was taking the drug, not afterwards. But the closer I looked at the paper, the more darkness it seemed to shed on what doctors ought to do.

Denmark is a small country with a population of about 5.5 million, but it has the best health records in the world. This means that statisticians are able to churn out comparisons as Danish dairy farmers churn out butter.
