Many medical papers nowadays have such complex statistics that not one in a hundred doctors understands them fully, and the rest have merely to hope or take it on trust that the authors’ conclusions really do follow from their data. I am afraid I hold to the rather crude view that, if results involving large numbers of patients need involved and sophisticated statistical manipulation to yield a positive outcome, they probably are not very important clinically, however statistically significant they may be. Clear-cut results are not very common these days.
I therefore rejoiced to see in a recent edition of the Lancet the report of an experiment so conclusive that it hardly needed statistical confirmation to prove it. The experiment was a double-blind trial of the desensitization of children with an allergy to peanuts by means of oral immunotherapy (OIT).
Ninety-nine children aged between 7 and 12 with proven allergy to peanuts were divided into two groups: those who, unbeknown to them, received small but increasing doses of peanut protein mixed into their food over a period of six months, and those who did not. At the end of that period, 62 percent of the treated group, but none of the untreated, tolerated a challenge of 1400 milligrams of peanut protein. The children who had had the OIT were 25 times less sensitive than those who had not. When the control group who had not had it were given it, they too became less sensitive.
The authors also demonstrated that the quality of life of the desensitized children improved because they became less anxious that any food might ambush them, as it were, and cause an allergic reaction. Anyone who has seen an allergic reaction to peanuts (or other nuts) will understand this. Since the number of food products that bear the warning “may contain peanuts” is ever-increasing – peanuts seem almost as ubiquitous in our environment as rock music – the world must appear a dangerous place to those with the allergy.
It is a hard lesson in life that many of the most important things that happen to us are beyond our control. Indeed, a large part of wisdom consists of the willingness and ability to distinguish what is and what is not happenstance. The distinction, however, may be very difficult: and while too little fatalism leads to fruitless struggle, too much leads to acceptance of the avoidable.
A recent study from the Mayo Clinic published in the British Medical Journal compares patients with myocardial infarction (heart attack) presenting to hospital out of hours with those presenting during normal working hours. It pools the data from all the studies that have been done around the world and comes to the conclusion that patients presenting at nights and weekends have a 5 percent increased risk of death. It therefore seems best, if you must have a heart attack, to have it during regular hours, though this is difficult to arrange for yourself.
Interestingly, subsidiary findings are first that the difference between the death rates has been increasing of late; and second that the difference is less in the United States than in Europe, where it is less than in other parts of the world. Could this mean that, at least in one respect, the American health care system is better than others around the world?
There is no subject that provokes conspiracy theories quite like the immunization of children. That innocent, healthy creatures should have alien substances forcibly introduced into their bodies seems unnatural and almost cruel. As one internet blogger put it:
Don’t take your baby to get a shot, how do you know if they tell the truth when giving the baby the shot, I wouldn’t know because all vaccines are clear and who knows what crap is in that needle.
The most common conspiracy theory at the moment is that children are being poisoned with vaccines to boost the profits of the pharmaceutical companies that make the vaccines. No doubt such companies sometimes get up to no good, as do all organizations staffed by human beings, but that is not to say that they never get up to any good.
A relatively new vaccine is that against rotavirus, the virus that is the largest single cause of diarrhea in children. In poor countries this is a cause of death; in richer countries it is a leading cause of visits to the hospital but the cause of relatively few deaths.
Since rotavirus immunization of infants was introduced in the United States, hospital visits and admissions have declined by four fifths among the immunized. However, evidence of benefit is not the same as evidence of harmlessness, and one has the distinct impression that opponents of immunization on general, quasi-philosophical grounds, almost hope that proof of harmfulness will emerge.
A study published in a recent edition of the New England Journal of Medicine examined the question of one possible harmful side-effect of immunization against rotavirus, namely intestinal intussusception, a condition in which a part of the intestine telescopes into an adjacent part, and which can lead to fatal bowel necrosis if untreated.
The authors compared the rate of intussusception among infants immunized with two types of vaccine between 2008 and 2013 with that among infants from 2001 to 2005, before the vaccine was used. There is always the possibility that rates of intussusception might have changed spontaneously, with or without the vaccine, but the authors think that this possibility is slight: certainly there is no reason to think that it occurred.
Just as a more permissive attitude to cannabis gains momentum in the United States, so does a more restrictive attitude to tobacco. It is as if there were a law of the conservation of prohibition: if one substance is permitted after having been prohibited, another will be prohibited after having been permitted.
While Colorado permits the use of marijuana by those over 21 for any purpose, New York City prepares to prevent sales of tobacco to anyone under the age of 21. An article in a recent edition of the New England Journal of Medicine comes out strongly in favor of this more restrictive approach to the sale of tobacco. The arguments it uses and those it refutes are instructive.
Those who go on to smoke throughout their lives generally start at an early age: earlier, that is, than 21. Thus if adolescents could be discouraged from smoking, rates of smoking among adults would decline markedly.
Of course, it is not enough for something to be desired or desirable for it to be feasible. Such evidence as exists, however, suggests that restricting sales to minors might work. A town in Massachusetts, Needham, forbade the sale of tobacco to those under 21, and the rate of smoking among high school students declined by nearly half five years later. The rate in a neighboring town, which did not impose the ban, fell in the same time by only a third as much. Furthermore, raising the minimum drinking age to 21 was followed by (one cannot with absolute certainty say caused) a fall in alcohol consumption by adolescents, drunk driving, and motor accidents.
Scientists are often portrayed as archetypally rational men, mere calculating machines in human form who propose correct new theories by infallible deduction from what is already known. Science cannot possibly advance in this way, however, and the philosopher Karl Popper pointed out long ago that leaps of the imagination are as necessary to science as they are to art.
I have never been able to make such leaps myself, which is why I admire them in others. I remember meeting a researcher into malaria who was trying to produce a vaccine, not against the malarial parasite itself, but against the stomach lining of the mosquitoes that carried the parasite. He hoped that such a vaccine would kill the mosquitoes – causing them to explode in mid-flight, perhaps – and thus prevent the spread of the disease. The idea did not work, but I was impressed by the boldness of the conception.
For the scientist no information is too obscure to be of potential use. And what information could be more obscure than the fact that the desert-dwelling grasshopper mouse likes eating the bark scorpion, whose sting causes severe pain in all its other possible predators and makes them avoid it? Most of us, I think, would say, “All very interesting, professor, but so what?” The scientist, however, asks why the grasshopper mouse is immune to the painful effects of the scorpion venom, and whether, on discovering the reason, it might not help in the development of new analgesics. Mankind has long believed that remedies for its afflictions are to be found in Nature, but only scientists can go about systematically investigating the possibilities. Imagination is a necessary but not sufficient quality for scientific research.
A recent article in the New England Journal of Medicine, in a long-running series that tries to connect basic scientific research with clinical progress, draws attention to research on the grasshopper mouse. The article is provocatively entitled “Darwin 1, Pharma 0,” thereby drawing our attention to the fact that millions of years of natural selection have done for the grasshopper mouse what a century of research by pharmaceutical companies has not been able to do for Man. The comparison seems neither apt nor fair, but any stick these days is good enough to beat Big Pharma with.
The grasshopper mouse, it seems, has a mutant gene that prevents a component in the scorpion venom from activating the peripheral nerve cells involved in the transmission of pain. Could human pain be alleviated or even abolished if a compound were found that acts on the mechanism that the normal version of the gene, present in all other mammal genomes, controls?
The enormous, even exponential, advance in the understanding of human genetics over the past three decades has so far yielded much less improvement in clinical results than was once hoped. It has proved more difficult than anticipated to translate biological knowledge into clinical benefit.
This is not, of course, to say that there have been no benefits at all from the advances in genetic understanding, particularly in such fields as prenatal counselling. Another superficially promising field is that of pharmacogenetics, that is to say the prediction of responses to medicaments according to the patients’ genetic type. This is very important, for hitherto it has proved difficult to predict whether a patient will respond positively or negatively to a given treatment, and whether he or she needs a higher or a lower dose to produce a desired effect.
The latter is particularly important in the case of treatment with anticoagulants (blood-thinners) because a therapeutic dose is usually so close to a dangerous dose. If we could predict who needs what dose rather than, as at present, proceed essentially by trial and error, it would be of great advantage to patients who need anticoagulation. They would receive the benefit of anticoagulation – fewer heart attacks and strokes – without the risks of complications such as cerebral and other bleeds.
Three trials of attempts to tailor doses of anticoagulants according to the patients’ genetic type have been published in a recent edition of the New England Journal of Medicine. The authors compared prescription of anticoagulants by the normal methods with determination by genetic type. The results of the three trials were contradictory.
Prevention is better than cure provided (which is not always the case) that prevention does less harm than the disease it prevents. Since obesity is now of epidemic proportions all over the world, and it is estimated that in just over a decade’s time there will be 500,000,000 people with type II diabetes consequent upon obesity, prevention of obesity is devoutly to be wished – which is not to say that it will be easy or even possible.
An article in a recent edition of the New England Journal of Medicine asks how early in life prevention of obesity should begin, given that once it is established it is refractory to treatment. Although the epidemic may have peaked in the United States, there is no room for complacency because the proportion of fat people is already enormous. Half of American pregnant women are seriously overweight or outright fat, and fat women tend to have fat children. They gain even more weight during pregnancy, and women who gain weight excessively during pregnancy are especially likely to have fat children.
The article is a typical example of what might be called risk factor medicine. A disease or disorder is found to be associated statistically with some independent variable which may or may not be causally related to that disease or disorder, so that doctors hope that by reducing the prevalence of the risk factor in some way they will also reduce the prevalence of the disease or disorder. Since many of the risk factors are behavioral rather than biological, and there is nothing as difficult to change as human behavior, doctors’ hopes are often frustrated.
There is nothing quite as difficult to predict as the future. In my lifetime I have already lived through an “inevitable” ice age that never materialized and “inevitable” mass starvation (through overpopulation) that also never happened. When I was in Central America I remember reading a book called Inevitable Revolutions by the historian Walter LaFeber, but more than a quarter of a century later the inevitable still had not taken place. By now, according to predictions, most of us should have been dead from AIDS, that is if variant Creutzfeldt-Jakob Disease or Ebola virus had not got us first. The repeated failure of confident predictions is therefore almost enough to make one sceptical of dire visions of the future. Only the sheer pleasure of contemplating catastrophe to come keeps the market for apocalypses alive.
One of our present concerns in the western world is the rapid aging of the population. Never have so many people lived to so ripe an old age, and this at a time when the birth rate is falling. Who is going to support the doddering old fools who will soon be more numerous than the energetic and productive young?
A recent article in the New England Journal of Medicine points out that something unexpected has happened to confound the gloomy prognostications of epidemiologists and demographers. As the percentage of people surviving into old age increases, so the proportion of them who suffer from dementia decreases. People are not only living longer, but living better. This is a phenomenon that has happened across the western world.
The article states that “in 1993, 12.2% of surveyed adults 70 years of age or older [in America] had cognitive impairment, as compared with 8.7% in 2002.” Similar results have been obtained elsewhere. In the light of this unexpected and unpredicted trend, estimates of the prevalence of dementia in England have had to be revised downwards by 24 percent. The burden of the elderly on the economy will therefore not be as great as was feared.
What accounts for the decline in the prevalence of dementia?
Some years ago a medical paper was published that caused a run on Brazil nuts throughout the western world. For a time they disappeared entirely from supermarket shelves; for Brazil nuts contain a high level of selenium and the medical paper had suggested that the high rate of heart attacks in a certain province of China was caused by the exceptionally low level of selenium in the inhabitants’ diet.
But of course for every panacea there is an equal and opposite health scare. Brazil nuts concentrate radium and give off more radiation than any other food; they also often contain relatively high levels of aflatoxin, produced by a fungus of the Aspergillus genus. Aflatoxins are very carcinogenic, leading to cancer of the liver.
So where Brazil nuts are concerned, the question boils down to whether you would prefer to die of heart attack or cancer.
A paper in a recent edition of the New England Journal of Medicine suggests that, overall, nuts are very good for you. The authors compared the death rates among female nurses and male health professionals according to their self-reported consumption of nuts. They divided nut consumption into true nuts and peanuts, the latter being legumes rather than nuts.
I never cease to be amazed at the organizational feat of such studies: 76,464 women and 42,498 men were followed up from 1980 to 2010 and from 1986 to 2010, respectively. After various statistical manipulations, which 99 percent of the readership of the NEJM would not understand, it was found that the more nuts people ate (including the false nuts known as peanuts), the lower their all-cause mortality. They died less not only of such illnesses as heart attack, stroke, and cancer, but of all causes whatsoever. At last, then, the panacea!
People who ate nuts more than seven times a week(!), hunter-gatherer style, had a 20 percent lower chance of dying during the follow-up period than those who never ate nuts. This is a very big effect, for me almost too big to be believed.
Some of the residents of Hyde, the town in Cheshire, England, where the late Dr. Harold Shipman practiced family medicine, used to say, “He’s a good doctor, but you don’t live long.” Indeed not: it is now believed that Dr. Shipman, over a period lasting a quarter of a century, murdered 200 or more of his elderly patients with injections of morphine or heroin.
If the preservation of life be not the definition of a good doctor, what is? Here is the definition published in a recent edition of the New England Journal of Medicine:
The habitual and judicious use of communication, knowledge, technical skills, clinical reasoning, emotions, values, and reflection in daily practice for the benefit of the individual and the community being served.
Whatever one thinks of this definition, it is clear that it would not make the goodness of doctors altogether easy to measure.
It does not follow from the unmeasurability of something, however, that it does not exist or is unimportant: nor, unfortunately, that what is measurable truly exists or is at all important. Nothing is easier to measure in an activity as complex as medical practice than the trivial, and nothing is easier to miss than the important.
The above definition of a good doctor appeared in an article on the need for Obamacare to ensure that doctors provide value for money so that they can be paid by result. This is a potential problem whenever there is a financial intermediary between the doctor and the patient. Thenceforth it is not the patient who decides what he wants from a doctor but an insurance company or, increasingly under Obamacare, the government.
When I was a very young doctor I had an enormously fat patient – in those days it was rare to be so fat – who was admitted to the hospital for a long time to try to get her to lose weight more or less by starving her. I still remember her semi-liquid form flowing over the sides of the bed. I tried to be nice and understanding.
“I suppose you eat for comfort,” I said to her.
“No, dear,” she replied. “I just like the taste.”
I did not know then that she was (if I may be permitted what in the circumstances is a slightly ridiculous metaphor) the canary in the mine, and that only 40 years later many human mastodons would bestride the world, at least in America and Britain.
With this epidemic has grown a new surgical speciality: bariatric surgery, that is to say surgery to correct obesity. A paper in a recent edition of the Journal of the American Medical Association reports on the results of two types of such surgery to treat obesity, gastric bypass and laparoscopic gastric band. (How long before a rock group calls itself the Laparoscopic Gastric Band?) The authors conglomerated the results from 10 hospitals so that the results should reflect average practice, not just the very best practice.
Gastric bypass proved to have better results all round than gastric banding, except that there were a small number of deaths immediately after surgery. But the results were distinctly variable even for the same procedure; for example, at three years after operation those who had had a gastric bypass varied between having lost 59.2 per cent of their baseline weight and having gained 0.9 per cent. Those who underwent the gastric banding varied between having lost 56.1 per cent of their original weight and having gained 12.6 per cent. On average, however, the two groups had lost 31.5 per cent and 15.9 per cent respectively of their original weights after their operations.
Most of the weight loss was within the first year after the procedures; one sub-group among the patients began to regain weight after six months, and all began to regain weight after two years. The weight they gained after two years, however, was slight by comparison with what they had lost.
It has long been my opinion that all notions of human equality, other than that of formal equality before the law, are destructive of human intelligence and sensibility. My opinion was confirmed recently when I read an editorial in the Lancet, one of the two most important general medical journals in the world.
The title of the editorial was “Equity in Child Survival.” I could have written the editorial myself from the title alone, so utterly predictable was its drift:
Although Indonesia has reduced child mortality by 40% during the past decade, data from 2007 show that children in rural areas were almost 60% more likely to die than those living in urban ones, while those in the poorest 20% were more than twice as likely to die as those in the richest 20%, and girls were 20% more likely to die than boys.
Note here that even if inequality were the same as inequity, there is nothing in these figures to show that inequity had increased in Indonesia during the decade, or to show that it had not actually decreased; and if equity in this sense were an important goal in itself, it would matter little whether the health of the poorest improved, or the health of the richest deteriorated.
In a country the size and complexity of Indonesia, with hundreds of inhabited islands, some of them very remote, it is hardly surprising that there should be quite wide geographical variations in health, wealth and productivity. It is no more inequitable that there should be these variations than that the French should have so much better health than the Americans, or for that matter than the Bangladeshis.
Not long after I suggested satirically that money might be the cure for the terrible disease of burglary, experiments were performed to bribe drug addicts into remaining abstinent. I had suggested that money was a genuine pharmacological treatment of burglary because there would be a dose-response relationship (the larger the dose of money given to burglars, the greater and longer-lasting their law-abidingness) and that, as with most drugs, there would be treatment failures. Some burglars are more interested in the excitement of burglary than in its material rewards; money would have little or no effect on them.
It turns out that money as a drug is a bit like aspirin: it can be used for many illnesses. The fat have been bribed to lose weight; the drunk to stop drinking; the diabetic to take their pills and stop eating sugar; the smokers to stop smoking; and the indolent to start taking exercise. It’s enough to make you wonder whether there is anything that can’t be cured by money. The latest disease to yield to money’s curative, or at least alleviatory, properties is schizophrenia.
Medication can improve this condition but unfortunately patients often do not take it for long and then relapse. This is partly because they do not accept in the first place that they are ill and partly because the medicine they are supposed to take has many, and sometimes very disagreeable, side effects.
To counter the propensity of schizophrenic patients not to take their medicine, long-acting injectable forms were developed; but it is easy for schizophrenics not to accept them either. Researchers in England and Switzerland wondered whether, if patients were bribed to take the injections, they would do so with greater regularity. Their trial was a small one, involving only 131 patients, divided into those who were offered a bribe (in the paper, published recently in the British Medical Journal, it is more delicately called a financial incentive) and those who were not. The bribe was not large, $22 per monthly injection; but it must be remembered that most of the patients were probably unemployed and living in relative poverty. There are still people in our society to whom $264 a year would be well worth having.
Nothing I ever wrote provoked quite as many hostile responses as my suggestion in an article in Belgium that rock music of all kinds was a terrible environmental pollutant and ought to be strictly controlled, like car exhaust or industrial effluent. I also suggested that its agitating effect upon youth was a cause alike of much bad behavior and many car accidents, and predicted, moreover, that in the future there would be an epidemic of well-merited deafness. The young were particularly infuriated.
Considering the awfulness of noise as a destroyer of quality of life, its effects on health have been comparatively little studied. Some time ago, however, it was discovered that those living near a major airport, Schiphol in Amsterdam, and subjected to aircraft noise had higher blood pressure than those who lived in its absence. Two papers in a recent edition of the British Medical Journal, one from Britain and one from America, confirm and extend this observation.
The American paper analyzed the admission rates to hospital for cardiovascular diseases such as heart attack and stroke of those who lived near airports and those who did not. They analyzed the data from 89 airports. They found that those exposed to 10 decibels extra of aircraft noise had a 3.5 percent increased admission rate for such diseases, and they estimated that overall 2.3 percent of all admissions for such disease were attributable to aircraft noise.
The British paper analyzed hospital admission rates for the same diseases in the geographical area around Heathrow Airport, the busiest long-haul international airport in the world (and the worst, in my experience). The authors controlled the results for such possible confounders as air pollution and road traffic noise; they found that those exposed to the highest levels of daytime noise had a 25 percent increased rate of stroke and a 20 percent increased rate of coronary artery disease. The authors warned, however, that they had not controlled for all possible relevant confounders, and therefore, in effect, that their results must be taken with the proverbial pinch of salt.
When I was a student, a trauma surgeon described how, in the early days of transplants, he had to physically restrain the transplant surgeons from “harvesting” the kidneys of potential donors. So enthusiastic were surgeons about this exhilarating technology that they were willing to sacrifice one life for another, for they tended to count a life saved by transplant as being of more than ordinary value, perhaps double; and, no doubt irrationally, I have remained mildly suspicious of them, the transplant surgeons, ever since.
There were two opposing articles in a recent edition of the New England Journal of Medicine about the ethics of transplantation. For a number of years the supply of organs for transplant has not equalled the demand, and one way of meeting it would be to relax slightly the rules governing the removal of transplantable organs from donors. At the moment the dead-donor rule (known as the DDR — an acronym that for me still brings first to mind the German Democratic Republic) prevails, according to which the donation of an organ must not kill the donor.
One of the authors suggests that the DDR is routinely violated in any case and that, in so far as it is obeyed, it limits the number of organs available for transplant and thereby allows people to die who could have been saved. But, says the author, “it is not obvious why certain living patients, such as those who are near death but on life support, should not be allowed to donate their organs, if doing so would benefit others and be consistent with their own interests. … Allegiance to the DDR … limits the procurement of transplantable organs by denying some patients the option to donate in situations in which death is imminent and donation is desired.”
Sir Winston Churchill was an inveterate enemy to all physical exertion that went by the name of exercise. He attributed his productivity in life to his physical indolence and once gave the advice that you should never stand when you can sit and never sit when you can lie. He did much of his work in bed.
Modern medicine is decisively against him in his opposition to exercise. Reading the introduction to a paper in a recent edition of the British Medical Journal, you might be forgiven for concluding that the panacea has at long last been found, and that it is exercise. People who are physically active live longer and suffer less from heart disease, strokes, cancer, and diabetes than do the sedentary. They do better in the hospital; and physical inactivity has been estimated to be the fifth most serious contributor to the disease burden of Europe.
The authors of this paper attempted to find out whether exercise is as effective as drugs in reducing mortality in a variety of conditions such as diabetes, stroke, coronary artery disease, and heart failure. They did no actual trial themselves, but rather performed a meta-analysis of the meta-analyses of all the trials that have been published and are relevant to this question: in other words their paper is what might be called a meta-meta-analysis.
Exercise has rarely been compared directly with drug treatments; the authors had therefore mainly to compare the published statistical effect of exercise (compared with no exercise) with that of medication of various sorts (compared with placebo or other medicines). They analyzed the results of 16 meta-analyses which were based upon a total of 305 trials, 57 of them on the effects of exercise and 248 of them on the effects of drugs, involving 339,274 patients in all.
In short they found that the only statistically significant differences in mortality were in stroke and in heart failure. Drugs (diuretics) were superior to exercise in the latter, and exercise to drug treatment in the former. Otherwise there was no difference between drug treatment and exercise.
Diseases that have no objective tests to distinguish them from normality have a tendency to spread like fungus: for example, it is years since I heard anyone say that he was unhappy rather than depressed, and it cannot be a coincidence that 10 percent of the populations of most western countries are now taking antidepressants. Yet the state of melancholia undoubtedly exists, as anyone who has seen a case will attest.
Likewise with autism. I remember an isolated, friendless and uncommunicative patient who tried to kill himself when his landlord could no longer tolerate the collection of light bulbs that he had collected since childhood, was constantly enlarging, and that now threatened to fill the whole house. For the patient light bulbs were the meaning of life. It was difficult to believe in such a case that there was not something biologically wrong with the patient, even if one could not find it.
An editorial in the New England Journal of Medicine traces the convoluted history of the diagnosis of autism and Asperger’s syndrome. The pediatricians Leo Kanner and Hans Asperger described the conditions in 1943 and 1944, respectively.
Kanner thought that two features were essential to autism, a psychological separation from the world manifest very early in a child’s life and an obsessive desire to prevent change in the person’s immediate surroundings. Kanner thought that such children had similar parents, often of high intelligence but who were better and happier with ideas than with human relationships. This gave rise later to the concept of the “refrigerator mother,” that is to say a cold and uncommunicative woman who did not cuddle her child or provide it with any emotional warmth, and whose conduct caused the child, by a mechanism of defense, to withdraw into its own world. This was also the era of the “schizophrenogenic” mother, the mother who communicated two messages in one verbal utterance, leaving the child uncertain as to what was meant.
These theories have now been abandoned; they were not only wrong but cruel, for they blamed the mother for the child’s devastating condition. Biology is back in fashion.
Like politicians, doctors are inclined to believe that doing something (especially when it is them doing it) is better than doing nothing. They mistake benevolent intentions for good results, believing that the first guarantee the second. How can philanthropy go wrong?
Besides, doing something stimulates the economy in a way that doing nothing cannot possibly match. If people did only what was necessary, or what was good for them, or what was right, the whole of our economy would soon collapse.
Be that as it may, and for whatever reason, clinical trials that have positive results are more likely to be published than those with negative results. Thanks to several well-publicized scandals, this publication bias, as it is called, is on the decline. GlaxoSmithKline, one of the largest pharmaceutical companies in the world, has promised that henceforth it will publish the results of trials unfavorable to its products as well as those that are favorable.
A paper by Danish researchers just published in the British Medical Journal assesses the extent to which published reports of trials of screening procedures, such as mammography, colonoscopy, PSA levels, etc., report their harmful effects and consequences as well as their positive ones.
This is particularly important ethically because screening reverses the usual relationship between patient and health-care system. In screening it is the health-care system that initiates the contact, not the other way round. Screening is offered to healthy people, or at least to those complaining of nothing; moreover, the chances of benefit from screening are often slight, and those who do benefit do so, in a sense, at the expense of those who are harmed. The moral imperative to know the harms of screening is therefore great.
Not long ago the New England Journal of Medicine ran an article on the vexed question of physician-assisted suicide in the case of the terminally ill, and doctors were asked to vote, for or against, online. The results of the poll have just been published.
As the editors are at pains to point out, such a poll has no scientific validity, since those who took the trouble to vote were not a representative sample of anyone but themselves. This does not mean, however, that the poll was altogether without interest, though certain data would have made it even more interesting.
In all, the journal received 5,205 votes from doctors around the world. However, the editors noticed that there were multiple votes in quick succession from several locations in Canada, suggesting a concerted effort to influence the result. These – 1,137 of them – were excluded from the report, leaving 4,068 votes deemed valid.
It would have been interesting to know how the discounted votes were cast, but this information was not given. Do those against or those in favor of physician-assisted suicide have a more active lobby or pressure group in Canada? I am not sure I would know which way to bet: one could almost hold a poll on the subject.
Sometimes a single phrase is enough to expose a tissue of lies, and such a phrase was used in a recent editorial in The Lancet titled “The lethal burden of drug overdose.” It praised the Obama administration’s drug policy for recognizing “the futility of a punitive approach, addressing drug addiction, instead, as any other chronic illness.” The canary in the coal mine here is “any other chronic illness.”
The punitive approach may or may not be futile. It certainly works in Singapore, if by working we mean a consequent low rate of drug use; but Singapore is a small city state with very few points of entry that can hardly be a model for larger polities. It also seems to work in Sweden, which had the most punitive approach in Europe and the lowest drug use; but the latter may also be for reasons other than the punishment of drug takers. In most countries (unlike Sweden) consumption is not illegal, only possession. That is why there were often a number of patients in my hospital who had swallowed large quantities of heroin or cocaine when arrest by the police seemed imminent or inevitable. Once the drug was safely in their bodies (that is to say, safely in the legal, not the medical, sense), they could not be accused of any drug offense. Therefore, the “punitive approach” has not been tried with determination or consistency in the vast majority of countries; like Christianity according to G. K. Chesterton, it has not been tried and found wanting, it has been found difficult and left untried.
But the tissue of lies is implicit in the phrase “as any other chronic illness.” Addiction is not a chronic illness in the sense that, say, rheumatoid arthritis is a chronic illness. If it were, Mao Tse-Tung’s policy of threatening to shoot addicts who did not give up drugs would not have worked; but it did. Nor would thousands of American servicemen returning from Vietnam where they had addicted themselves to heroin simply have stopped when they returned home; but they did. Nor can one easily imagine an organization called Arthritics Anonymous whose members attend weekly meetings and stand up and say, “My name is Bill, and I’m an arthritic.”
Most doctors fall into one of two categories: the smaller, who are excessively concerned with their health and regard each bodily sensation as the harbinger of serious disease, and the larger, who neglect their health and ignore their symptoms altogether.
I belong to the latter. When I was a young man, for instance, I failed to recognize the symptoms of pneumonia and ignored them until I could hardly breathe. For me, doctors treated illness; they did not suffer from it themselves.
Even more difficult for many doctors is illness among their close relatives. How far should they interfere with diagnosis and treatment, at the risk of antagonizing their colleagues? If they interfere, they might be regarded as difficult and obstructive; if they do not, they may overlook serious and even life-threatening mistakes.
A doctor recounts her experience in a recent edition of the New England Journal of Medicine. Her aged father collapsed at home while she happened to be there; he had recently had a quadruple bypass operation. His blood pressure had fallen dramatically.
At the hospital he was diagnosed with dehydration and given intravenous fluids. For a time his blood pressure improved and he felt better. Then his blood pressure dropped again. His daughter called a nurse who increased the fluids and for some reason switched off the alarm of the blood pressure monitor. Then she left.
When her father’s blood pressure dropped yet again, his doctor daughter went to the nursing station to inform the medical team. There she was more or less cold-shouldered, and because she did not want to appear one of those “difficult” relatives who seem to think that their loved one is the only patient the hospital has to look after, she did not insist. After all, doctors and nurses have many subtle or unconscious (and sometimes not so subtle or unconscious) ways of wreaking revenge on those whom they consider to have caused them unnecessary grief.
The medical team had overlooked one of the most obvious causes of loss of blood pressure in this case, namely internal hemorrhage. The patient was on anticoagulants after his cardiac surgery, and such a complication is not uncommon. His daughter decided to perform a rectal examination herself and found that he was indeed bleeding intestinally.
Schopenhauer would have enjoyed the spectacle of grand rounds in academic hospitals: his theory that people argue for victory more than for truth would have found confirmation there.
In grand rounds a physician presents a complex or enigmatic case to the other physicians of the hospital, who then discuss it in detail. The ostensible purpose is to teach, learn and sometimes to enquire; but such human desires as to show off, to appear more-learned-than-thou, and to appear brilliant are often much in evidence. I once worked in a hospital where an ancient and celebrated physician, who had had more diseases, albeit rare and obscure ones, named after him than any other physician in history, attended such rounds until well into his nineties. Once he had spoken he would ostentatiously turn off his hearing aid, the entire matter having been settled to his satisfaction by his own opinion.
The New England Journal of Medicine carries each week a case report from the Massachusetts General Hospital, presented at grand rounds. Generally speaking, they record a triumph of diagnosis and often of treatment, somewhat like a Sherlock Holmes story. The more obscure the diagnosis, the more brilliant appears the solution, seemingly reached effortlessly by the teamwork of clinicians and pathologists. One cannot help but wonder, sometimes, what has been left out. Certainly the patient’s experience doesn’t get much of a look-in.
Recently there was a case reported in the journal that brought to mind the old saying of Victorian surgeons, “the operation was a success, but the patient died.” It concerned a fifty-three-year-old woman who suffered from persistent redness of the skin and enlargement of the lymph nodes. She was susceptible to infections for which she had repeatedly been admitted to hospital (not the Massachusetts General) and treated with antibiotics.
Even non-hypochondriacs such as I sometimes worry fleetingly about their health when, having reached a certain age, some of their friends and acquaintances fall foul of a disease, namely (in this case) cancer of the prostate. But my anxiety does not last long and so far I have managed successfully to resist all attempts by my medical colleagues to measure my prostate specific antigen (PSA). I want to have as little to do with doctors as possible, other than socially of course, and there is nothing quite like a high PSA level to provoke doctors’ interference in a man’s life.
Would this interference, though, prolong my life if I allowed it to take place? A recent paper in the New England Journal of Medicine starts optimistically and ends pessimistically. It draws attention to the fact that mortality from prostate cancer has fallen drastically and attributes this to improvements both in early diagnosis by means of screening and in treatment once the cancer is diagnosed.
The body of the paper, however, is less sanguine. First, 18,880 elderly men were divided into those who were given finasteride, a drug that it was hoped would prevent cancer, and those given placebo. Some years later it was discovered that finasteride did indeed reduce the number of patients who developed cancer, in fact by nearly a third.
So far so good: but this is not the end of the story. Unfortunately, prostate cancer is so variable a disease that many more men die with it than of it. And while finasteride seems to have prevented many low-grade cancers, those that would not have killed the men in any case, it seems also to have increased both the number and the proportion of the more serious kind.
The major medical journals of the world receive far more papers than they can ever publish, and so it is rather surprising when dull, trivial or bad work appears in them. This must mean either that the editors of the journals, like Homer, sometimes nod, or that the general standard of the work submitted for publication is lower than one might hope or suppose.
A recent paper in the New England Journal of Medicine, entitled “Glucose Levels and Risk of Dementia,” by no fewer than fourteen authors, is a case in point. They repeatedly measured the blood glucose levels of 2,067 people aged 76 on average at the start of the study, followed them up for a median of 6.8 years, and correlated the levels with the patients’ chances of developing dementia.
It was already known that diabetics are at increased risk of developing dementia, not surprisingly in view of the damage that diabetes does to small blood vessels in the brain. But the authors of the paper put forward the hypothesis that higher levels of glucose even in non-diabetics would increase the risk of developing dementia.
They indeed found that non-diabetic patients with a blood sugar level of 115 milligrams per deciliter were more likely to develop dementia than those with a level of 100 milligrams. However, the extra risk, 1.18 times, though statistically significant, was so small that its significance in any other sense must be doubted. Generally speaking, epidemiological surveys that find such small differences are not of much value from the point of view of elucidating the causation of diseases. If you trawled through a hundred factors – coffee consumption, number of begonias in the garden, subscription to a newspaper, etc. – you would probably find five such factors with odds ratios as large (or small).
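The trawling point is easy to check with a short simulation (a sketch of my own, not from the paper): generate a hundred candidate factors that have, by construction, no real association with the outcome, test each one, and count how many cross the conventional p < 0.05 threshold by chance alone.

```python
import math
import random

random.seed(42)

def two_sample_z(xs, ys):
    """Two-sample z-test assuming unit variance (true here by construction)."""
    n, m = len(xs), len(ys)
    z = (sum(xs) / n - sum(ys) / m) / math.sqrt(1 / n + 1 / m)
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

n_factors = 100
false_positives = 0
for _ in range(n_factors):
    # "Cases" and "controls" are drawn from the same distribution,
    # so any "significant" difference is pure chance.
    cases = [random.gauss(0, 1) for _ in range(200)]
    controls = [random.gauss(0, 1) for _ in range(200)]
    if two_sample_z(cases, controls) < 0.05:
        false_positives += 1

# On average about 5 of the 100 null factors come out "significant".
print(false_positives)
```

Roughly five per cent of the factors will be declared significant on any given run, which is exactly the author's point about begonias and newspaper subscriptions.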