Simple scientific questions require simple scientific answers; doctors want unequivocal guidance for their practice so that they do not fumble in the dark. But it is easier to ask questions than to answer them, as two papers published in the same week in the New England Journal of Medicine and the Journal of the American Medical Association attest.
The question asked by the two papers was what the optimum level of oxygenation in the blood of pre-term infants should be. In the past it was rather naively supposed that if oxygen were necessary, then more of it must be better; but premature infants who were exposed to high levels of oxygen developed a condition known as retinopathy of prematurity, which often left them blind or severely visually impaired.
The two trials, one from Britain, Australia and New Zealand, and the other from the United States, Canada, Argentina, Finland, Germany and Israel, sought to establish whether a higher or lower level of oxygen saturation of the blood was better for infants born very prematurely. The results were different, if not quite diametrically opposed.
The first trial found that babies treated so that their blood oxygen saturation was higher had a lower death rate at 36 weeks than those treated so that their levels were lower: 15.9 per cent in the high-saturation group died, compared with 23.1 per cent in the lower. You would have to treat 14 babies at the higher oxygen saturation to save one more life than by treating them at the lower level.
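(For those who like to see the arithmetic, the figure of 14 is the standard “number needed to treat”, the reciprocal of the absolute difference in death rates; a back-of-the-envelope check from the trial’s own percentages gives NNT = 1 ÷ (0.231 − 0.159) ≈ 13.9, which rounds to 14.)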
Not long ago I bought a book, published in 1922, titled Syphilis of the Innocent. Needless to say, the title implied a corollary: for if syphilis could be contracted by the innocent (as, for example, in the congenital form of the disease), it could also be contracted by the guilty.
In general, however, physicians do not inquire after the morals of their patients, except in so far as those morals have immediate pathological consequences. They do not refuse to treat patients because they find them disgusting, because they find them unappealing, because they are appalled by the way they choose to live. They try to treat them as they find them; they may inform, but they do not reprehend.
However, in practice things are sometimes more complex than this ecumenical generosity of spirit might suggest. According to an article in a recent edition of the New England Journal of Medicine, some doctors have been turning away patients on the grounds that they were too fat (one physician suggested that she did so because, ridiculously, she feared for the safety of her staff once the patients weighed more than 200 pounds), or that their children have gone unimmunized. Is such discrimination by physicians legitimate or illegitimate, legally or morally speaking? Is there not a danger that physicians may hide behind pseudo-medical justifications to express their personal prejudices or to coerce patients into doing what the physicians think is good for them?
Does practice really make perfect? Does it even lead to improvement? One feels instinctively that it should, that the more experience a physician has, the better for the patient. Much of the skill of diagnosis is pattern-recognition rather than complex intellectual detection, and it follows that the longer a physician has been at it, the quicker he will recognize what is wrong with his patients. He has experience of more cases than younger doctors to guide him.
But the practice of medicine is more than mere diagnosis. It often requires manual dexterity as well, and the ability to assimilate new information as advances are made. These may decline rather than improve with age. Too young a doctor is inexperienced; too old a doctor is past it.
A recent paper, whose first author comes from the Orwellianly named Department of Veterans’ Affairs Center for Health Equity Research and Promotion, examined the relationship between the years of an obstetrician’s experience and the rate of complications the women under his care experienced during childbirth. The authors examined the records of 6,705,311 deliveries by 5,175 obstetricians in Florida and New York. No one, I think, would criticize the authors for the smallness of their sample.
They examined the rate of serious complications such as infection, haemorrhage, thrombosis, and tear during or after delivery, divided by obstetrician according to his number of years of post-training experience. Reassuringly, and perhaps not surprisingly, experience reduced the number of such complications decade after decade. The rate of complications was 15 percent in the first decade after residency; it declined by about 2 percentage points to 13 percent in the second decade, by about 1 point to 12 percent in the third, and by half a point in the fourth. In other words, improvement continued, but less quickly as the obstetricians became more experienced; the authors appear not to have continued their study to the age at which the rate of complications started to rise again (if indeed there is such an age).
There was a time in my country when, among other unpleasant duties, the prison doctor was required to assess prisoners for their fitness for execution. Needless to say, not much attention was paid in medical school to this particular skill: the physician was on his own because in those days there were no such things as official guidelines. The rough and ready rule was that a man was fit to be executed if he knew that he was to be executed and why. It was the death-penalty equivalent of informed consent to surgery.
One of the last British executioners, Albert Pierrepoint, who hanged about 600 people, wrote in his memoirs that he was often asked if people struggled on their way to the gallows. He replied that he had known only one do so; to which he added, by way of explanation, “And he was a foreigner.” However, foreign nationality was not in itself a contraindication to execution. Pierrepoint was one of the executioners at Nuremberg.
An article in a recent edition of the New England Journal of Medicine draws attention to the ethical and practical dilemmas of American physicians asked to assess people for fitness to carry concealed weapons. Again this is not a skill taught in medical schools. No firm criteria, beyond those of common sense (which have not been validated by research), have been laid down. It seems obvious that people with paranoid personalities or psychoses, gross depression or mania, those who take cocaine, amphetamines, or other stimulants, and alcoholics should be refused permission to carry concealed weapons. But many of those conditions (if taking cocaine can properly be called a condition) are easy to conceal or difficult to detect. How far is the doctor to go in attempting to detect them? Interestingly, or curiously, the authors do not mention hair or blood tests, which could certainly help the doctor detect drug and alcohol abuse.
Pain is obviously one of the most important symptoms with which doctors deal, but measuring its severity objectively is difficult. Some people turn a twinge into agony, while others raise not a murmur in the last extremities of torture. And it is universally accepted that a person’s psychological state or disposition has a profound effect on his perception of pain.
Philosophers, indeed, have used the phenomenon of pain to debate what seemed to them an important question, namely whether there were such things as private languages or inner states inaccessible to others.
Clever experiments reported in a recent issue of the New England Journal of Medicine offer the hope, perhaps illusory, that brain imaging techniques might one day distinguish real and severe pain on the one hand from exaggerated or false pain on the other (people may exaggerate or lie about pain for a variety of reasons).
Having recently returned from Madrid, I confess that I saw little evidence of the Mediterranean diet being consumed there (apart, that is, from the red wine): though, of course, Madrid is in the middle of the peninsula, far from the Mediterranean. Perhaps things are different on the coast. Nevertheless, at over 80 years, Spain has one of the highest life expectancies in the world.
Is this because of the much-vaunted Mediterranean diet? Spanish research recently reported in the New England Journal of Medicine provides some – but not very much – support for the healthiness of that diet.
The researchers divided 7000 people aged between 55 and 80, at risk of heart attack or stroke because they smoked or had type 2 diabetes, into three dietary groups. One group (the control) was given dietary advice concerning what they should eat; the other two groups were cajoled by intensive training sessions into eating a Mediterranean diet, supplemented respectively by extra olive oil or nuts, supplied to them free of charge.
They were then followed up for nearly five years, to find which group suffered the most (or the fewest) heart attacks and strokes. The authors, of whom there were 18, concluded:
Among persons at high cardiovascular risk, a Mediterranean diet supplemented with extra-virgin olive oil or nuts reduced the incidence of major cardiovascular events.
The authors found that the diets reduced the risk of the subjects suffering a heart attack or stroke by about 30 percent. Put another way, 3 cardiovascular events were prevented by the diet per thousand patient years. You could put it yet another way, though the authors chose not to do so: 100 people would have to have stuck to the diet for 10 years for three of them to avoid a stroke or a heart attack. This result was statistically significant, which is to say that it was unlikely to have come about by chance alone, but was it significant in any other way?
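(The equivalence of the last two formulations is simple arithmetic: 100 people followed for 10 years accumulate 1,000 patient-years, so 3 events prevented per 1,000 patient-years is exactly 3 events prevented per 100 dieters per decade.)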
The unexamined life, said Socrates, is not worth living; but sometimes I wonder whether the too-closely examined life is not worth living either, for examination uncovers dilemmas where none existed before.
Two articles in a recent edition of the New England Journal of Medicine ask the question of whether employers should, or have the right to, refuse to employ smokers, as increasing numbers do in the 21 states that permit such discrimination against them.
As is by now no secret, smokers are more likely to suffer from many types of illness than non-smokers, and their health insurance is therefore considerably more expensive than that of non-smokers. They impose costs on their employers which weigh upon all workers, smokers or not. (The authors do not take into account that smokers not only contribute to taxes by their habit but, by dying early, reduce pension costs.)
The authors worry that refusal to hire smokers would be discriminatory against people of lower social class, since it is among them that smoking is most prevalent. I am not sure that this is right: the majority of people in all social classes now do not smoke, while people who apply for jobs at any particular level are likely to be of the same social class. Except where there is only one applicant for a job, then, there is likely to be an applicant of any given social class who does not smoke. The discrimination remains against smokers, therefore, and not by proxy against members of the lower social classes.
Twenty-seven years ago I found what seemed to be the only functioning storm-drain in Tanzania, in East Africa, and fell down it, severely injuring a knee in the process. The journey to the mission hospital in the back of a pick-up truck over sixty miles of rutted laterite road was one of the more agonising experiences of my life.
I had an arthroscopy when I returned home several weeks later — I could not even hobble until then — and the orthopaedic surgeon told me that unless I did physical therapy every day for a very long time it was inevitable that I should be crippled by arthritis within twenty years.
It was equally inevitable that I would not do physical therapy every day for a long time; and here I am, twenty-seven years later, without so much as a twinge from my knee. My faith in the predictive powers of orthopaedic surgeons has been somewhat dented.
That was why I read with interest a paper in a recent edition of the New England Journal of Medicine comparing physical therapy with surgery for meniscal tears in the knees of people with osteoarthritis. To cut a long story short, there was no difference in outcome, an important finding, since 465,000 people undergo operations for precisely this situation every year in the United States alone.
Actually, the uselessness of operation had been established before — the uselessness from the patients’ point of view, that is. Two previous trials had compared real with sham operations, and with no operations at all, and found no difference in the outcome two years later. One might suppose that, in the light of these findings, the 465,000 operations still performed annually constituted something of a scandal.
The clinical trial reported in the NEJM is, like all such trials, not definitive. The follow-up period was only 6 months, relatively few patients were recruited to it, and some patients initially allocated to physical therapy had an operation nonetheless for reasons that are not entirely clear. Moreover, the trial is only that of operation versus physical therapy; strictly speaking, there should also be a comparison with patients who had no treatment at all.
The Duke of Wellington, surveying his soldiers before the Battle of Waterloo, famously said that he did not know what they did to the enemy, but by God they frightened him.
No one thought in those days of the psychological effect upon the soldiers of witnessing so much violence (more than 30,000 were killed during the battle, about one in six of those who took part in it); nor could anyone have done so if he had thought of it. But it is now accepted wisdom that active military service leads men subsequently to commit crimes of violence, though the reasons for this are unknown.
A recent paper in The Lancet examined the association between military service and subsequent crimes of violence, which turned out to be much weaker than suspected. The authors compared the criminal records of 8,280 British soldiers who had served in Iraq and Afghanistan with those of 4,080 who had not. When such factors as age, level of education, pre-service record of violent offenses, rank, and length of service were controlled for, there was no significant difference in the criminal records of those who had served in Iraq and Afghanistan and those who had not.
When, however, those who were deployed in a combat role were compared with those who had not been so deployed, it was found that the former had higher levels of violent offending as measured by their criminal records. Interestingly, however, those who were involved in actual fighting had considerably higher prior levels of violent offending than those not so involved, suggesting that more aggressive types either volunteered or were selected for combat service. Somewhat alarmingly, nearly half of soldiers involved in the fighting had criminal records for violence.
For a long time doctors were subject to contradictory imperatives with regard to AIDS. On the one hand they were enjoined to treat it as they would treat any other disease, without animadversion on the way in which the patient had caught it; on the other hand they had, before testing for the presence of HIV, to seek special permission of the patient and to ensure that he or she had had counselling before the test was taken – quite unlike the testing for any other disease, syphilis for example. So AIDS was at the same time a disease like any other and also in a completely different category from all other diseases.
It cannot be said that pre-test counselling is universally popular among patients. There was an Australian clinic that famously offered the test with “guaranteed no counselling” and it did not lack for clients. For quite a number of years, however, HIV-test counselling has provided a living for the kind of people who like to hover around the edges of human catastrophe.
However, the recommendation by the United States Preventive Services Task Force (USPSTF), reported in an article in a recent edition of the New England Journal of Medicine, that henceforth the screening of adults for HIV infection should be routine will, if adopted, put paid to all such pre-test counselling. One cannot counsel scores or hundreds of millions of people.
Seven years ago the USPSTF came to a different conclusion on the question of screening for HIV, believing that the benefits were insufficient to recommend it. Since then, however, evidence has accumulated that treating people early in the course of their infection not only prolongs their life but reduces spread of the infection.
For some reason that I have never quite fathomed, immunization against infectious diseases has from its very inception in Jenner’s time been one of the most viscerally feared and bitterly opposed of all medical techniques. Perhaps people felt that to immunize was to interfere sacrilegiously with the course of nature, and that people, especially children, had the duty to die of infectious diseases just as Nature “intended.” Perhaps they felt that, if it worked, it would allow the survival of the unfittest. At any rate, few medical procedures have been as persistently, minutely, and fervently examined for harmful effects as immunization has.
In general, the results have been disappointing for those who wished to show that immunization was invariably followed by Nature’s retribution, particularly in the neurological sphere. Scare has succeeded scare without ever being confirmed, though those who hold to the anti-immunization faith refuse to abandon it. Now, at last, there seems to be evidence of a genuine association between a certain type of immunization and a neurological condition.
That association is between the immunization of children with an anti-influenza vaccine and narcolepsy, a condition characterized by chronic, excessive daytime sleepiness and a tendency to cataplexy, that is to say a loss of muscular tone triggered by strong emotion. It was first observed in Finland and Sweden; subsequent studies in other European countries and in Canada failed to find an association, but a further study, this time in England, and reported in the British Medical Journal, confirmed the Finnish and Swedish findings.
In October 2009, during an influenza pandemic, children at risk of pulmonary complications were immunized with a vaccine against the causative virus. Most of the children immunized suffered from asthma (interestingly, one of the theories to account for the recent rise in the proportion of children suffering from asthma and other allergic conditions is that, having been immunized against all the common childhood infectious diseases, their immune systems have not developed as Nature “intended”).
No doubt I have forgotten much pharmacology since I was a student, but one diagram in my textbook has stuck in my mind ever since. It illustrated the natural history, as it were, of the way in which new drugs are received by doctors and the general public. First they are regarded as a panacea; then they are regarded as deadly poison; finally they are regarded as useful in some cases.
It is not easy to say which of these stages the medical use of cannabis and cannabis-derivatives has now reached. The uncertainty was illustrated by the on-line response from readers to an article in the latest New England Journal of Medicine about this usage. Some said that cannabis, or any drug derived from it, was a panacea, others (fewer) that it was deadly poison, and yet others that it was of value in some cases.
The author started his article with what doctors call a clinical vignette, a fictionalized but nonetheless realistic case. A 68-year-old woman with secondaries from her cancer of the breast suffers from nausea due to her chemotherapy and bone pain from the secondaries that is unrelieved by any conventional medication. She asks the doctor whether it is worth trying marijuana since she lives in a state that permits consumption for medical purposes and her family could grow it for her. What should the doctor reply?
It is tempting for people to suppose that if a little of something is good for them, then a lot of it must be better. Unfortunately this is not always or even usually the case; and I first realized that people are inclined to make this mistake when, as a student many years ago, I was shown a baby who was bright orange; it was suffering from a condition known as carotenemia. The parents, having heard that carrots were healthy, concluded that only carrots were healthy, and fed their baby accordingly.
A study from Sweden, recently published in the British Medical Journal, examines the important question of whether calcium supplements are good for middle-aged and old women. The question is important because millions of women around the world take such supplements – 60 percent of American middle-aged and old women, for example. There is no one quite like the Swedes for carrying out such epidemiological studies because the medical records of their population are by far the most comprehensive in the world: creepily so, one is sometimes inclined to think.
What the Swedish researchers found was that the graph of the relationship between calcium intake and death rates was a U-shaped curve. People with a low consumption of calcium had a higher mortality than those with a moderate consumption, but so did people with a high consumption.
The sample of women was not small, and in the period of study 11,944 of the 70,259 women studied had died. Those with a high dietary consumption of calcium alone had an increased death rate of 1.4 times for all causes of mortality, 1.49 times for cardiovascular mortality, and 2.14 times for ischaemic heart disease (heart attacks), compared with those whose consumption of calcium was associated with the lowest mortality, that is to say a moderate consumption.
During my childhood, medicine always tasted disgusting and I suspected that adults made it so deliberately to spite children. They could have made it delicious had they wanted to.
Disgusting ingredients have been used in supposedly therapeutic concoctions down the ages. They had three qualities: vileness, rarity, and expense. These strongly promoted the placebo effect, for who would not claim to feel better if continuing to swallow camel’s or goat’s bile were the alternative? A little bit of what revolts you does you good, that is the theory.
Now at least when we resort to disgusting means, they are scientifically reasonable. I worked for a time for a surgeon in a country where antibiotics were not easily available, who wanted to test honey as an antiseptic dressing for open wounds (bacteria do not grow in honey). I cannot remember the results from the bacteriological point of view, but I recall that the aesthetic results were not pleasing.
I have also seen the use of maggots for wound cleaning. The therapy is effective, but it is difficult not to be repelled by it, especially if, as I have, you have actually suffered a parasitic skin infestation by maggots.
However, my disgust at honey and maggots paled by comparison with what I felt upon reading the title of a paper in a recent edition of the New England Journal of Medicine, “Duodenal Infusion of Donor Feces for Recurrent Clostridium difficile.” The excrement of various creatures was long an ingredient of supposed remedies in the days when nothing really worked, but I had fondly supposed that medicine had passed what Freud, in another context, would have called the anal stage.
Life being complex, many simple principles turn out on examination to be not as simple as at first thought. For example, everyone knows, or thinks that he knows, that prevention is better than cure. But is it always? It is often very difficult to say with certainty.
Three articles in a recent British Medical Journal tackle the vexed question of mammography, whose purpose is to detect cancer of the breast early in its development on the assumption that early detection leads to more effective treatment. The advice to women, therefore, is to get themselves scanned regularly.
This seems straightforward and commonsensical, but in fact the question of whether the light of mammography is worth its candle is devilishly complex. For example, if the treatment of breast cancer has improved (and death rates in Britain almost halved between 1990 and 2010, thanks mainly to improved treatment rather than to early detection), then the number of cases that mammography must find in order to save a single life has to increase. This in turn means that old trials – and all trials to determine the long-term effect of mammography have to be old – may no longer be relevant to the present situation. Trials of mammography are, in effect, always trying to hit a moving target.
The main problem that has bedevilled mammography is that of the false positive: the diagnosis of cancer when in fact there is none. For example, it is estimated that approximately 70,000 women in America are falsely diagnosed with cancer annually by means of mammography, that is to say, half of all those who are diagnosed.
Hospital is a dangerous place, especially for the old and very sick — which is one reason why a measure of a hospital’s efficiency is the speed with which it discharges patients home after treatment. Another reason for this measure is, of course, economy. Long stays in hospital are hugely expensive.
However, aiming to discharge patients as quickly as possible may be neither humane nor efficient. People are not units of accounting or components in an assembly line or mere mechanical contrivances. Hospitals are not car repair shops.
An article in the New England Journal of Medicine reflects upon the fact that nearly a fifth of patients treated under Medicare, 2.6 million individuals, return to hospital for further treatment within 30 days of their discharge as cured or sufficiently improved to manage at home.
Rather surprisingly, perhaps, the chances of a patient having to return to hospital do not reflect the seriousness of his original condition, nor are re-admissions invariably for the same condition as that for which the patient was admitted in the first place. On the contrary, in the majority of cases the patient is readmitted for something quite different. For example, 63, 71, and 64 percent of patients readmitted after treatment for heart failure, pneumonia, or chronic obstructive pulmonary disease, respectively, are readmitted for reasons other than their original diseases.
Medical controversies last a long time and are often bitter not only because science gives provisional rather than definitive answers to most questions, any of which answers may soon be overturned by further evidence, but because science by itself provides no means of deciding between incommensurable results according to a single criterion of value. Besides, everyone likes a good intellectual argument and wants to keep it going as long as possible.
An editorial in a recent edition of the New England Journal of Medicine claims that the long-running controversy over whether surgery or angioplasty is better for diabetic patients with ischaemic heart disease has now been decisively resolved in favor of the former, thanks to a paper published in the same edition. The matter is not a small one: in the United States alone 175,000 diabetic patients were treated last year either with surgery or angioplasty, and the figure is likely to rise as the number of diabetics grows.
The paper described a trial in which 947 diabetic patients with ischaemic heart disease underwent surgery and 953 underwent angioplasty (there were no untreated controls). At five years, mortality in the angioplasty group was 16.3 percent as against 10.9 percent in the surgical group; in total 26.6 percent of those treated with angioplasty had either died or had had a stroke or heart attack, as against 18.7 percent of the surgical group.
My father’s life expectancy at birth was 48 years. He survived to be 83, yet was still several years younger at his death than his brothers and sister at theirs. He and they lived through what has been called “the demographic transition,” from low life expectancies to high.
A recent paper in the Lancet charts the worldwide evolution of life expectancy between 1970 and 2010. Life expectancy has fallen in only 4 of the 187 countries with populations of 50,000 or more, the four being Zimbabwe, Lesotho, Ukraine, and Belarus. In the first two, AIDS was the cause; in the second two, alcohol.
Worldwide life expectancy between 1970 and 2010 rose at a rate of 3–4 years per decade, except for the 1990s, when the rate of improvement was considerably lower. In Asia and Latin America, the average age at death rose by 1 year every 2 years, a startling rate of improvement. But the greatest improvement in recent years has been in sub-Saharan Africa: life expectancy in Angola, Ethiopia, Niger, and Rwanda has increased by 10–15 years since 1990.
According to the authors, two medical interventions account for this: first the availability of anti-retroviral drugs to treat AIDS, and second the availability of both insecticide-treated mosquito nets and artemisinin-combination treatment regimens for malaria.
Doctors of the old school tend to be rather proud of how hard they worked when they were young, and to attribute their current enormous technical competence as well as the magnificence of their character to the long hours that they then endured. They were not much fun at the time, perhaps, but it made them what they are.
I remember those long hours well, and how at the end of a forty-eight hour shift my head felt as if it contained nothing but lead shot, as if it might just fall off my body. Leaving the hospital was like leaving prison after a long sentence; the starving man dreams of food, but the sleepless man dreams of bed.
It has long been suspected that such exhaustion cannot be good for patients; no one in his right mind would wish to be flown by a pilot who had gone two days without sleep, for example. Why should doctors be immune from the normal effects of fatigue on performance?
A study in a recent edition of the Journal of the American Medical Association attempted to demonstrate the effects of a protected sleep period on interns and residents when they were obliged to work shifts longer than 30 hours. On some such shifts they were given five hours, between 12:30 am and 5:30 am, during which they could not be interrupted except by the direst emergency and were given the opportunity to sleep. This might not be what most mothers would call a good night’s sleep, but it was better than what was normally available to such interns and residents.
The subjects of the experiments acted as their own controls: half the time they had protected sleep periods, and half the time they hadn’t. Unsurprisingly, they got more sleep (about an hour more per night) when they were given such a protected period rather than when they were not. They were more alert, both subjectively and objectively, when they had slept 3 hours a night instead of only 2. Three hours is hardly enough to make one feel fully rested, but slugabeds know that even a quarter of an hour of extra sleep can seem the most luxurious thing in the world.
Portentousness is the means by which cliché, the banal and the obvious are turned into technicality or wisdom, or both. An editorial in a recent edition of the Journal of the American Medical Association titled “Mental Health Effects of Hurricane Sandy: Characteristics, Potential Aftermath, and Response” illustrates this very well. One expects a medical journal to contain information that is not common knowledge or available to everyone on the most minimal reflection; it is therefore tempting, though a logical error, for authors to suppose that if what they have written is published in such a medical journal, it ipso facto contains such information.
The editorial in question makes statements such as “The mental health effects of any given disaster are related to the intensity of exposure to the event. Sustaining personal injury and experiencing the injury or death of a loved one in the disaster are particularly potent predictors of psychological impairment.” In other words those who suffer more suffer more. The editorial continues, “Research has also indicated that disaster-related displacement, relocation, and loss of property and personal finances are risk factors for mental health problems…”
I don’t suppose this will come as any great surprise, let alone shock, to readers. I will overlook the rather strange locution “loss of personal finances” – one continues to have personal finances even in bankruptcy. But how vital is research that tells us that people who are displaced and lose their possessions are likely to be unhappy for a long time? Until such research was done, did anyone for a moment doubt that losing your home, becoming a refugee, having your wife or child killed in front of you, etc., was a potent cause of misery? Have we so lost our common humanity that we need “research” to tell us this, or that such misery may be long-lasting?
At dinner the other night, a cardiologist spoke of the economic burden on modern society of the elderly. This, he said, could only increase as life expectancy improved.
I was not sure that he was right, and not merely because I am now fast approaching old age and do not like to consider myself (yet) a burden on or to society. A very large percentage of a person’s lifetime medical costs arise in the last six years of his life; and, after all, a person only dies once. Besides, and more importantly, it is clear that active old age is much more common than it once was. Eighty really is the new seventy, seventy the new sixty, and so forth. It is far from clear that the number of years of disabled or dependent life is increasing just because life expectancy is increasing.
There used to be a similar pessimism about cardiopulmonary resuscitation. What was the point of trying to restart the heart of someone whose heart had stopped if (a) the chances of success were not very great, (b) he was likely soon to have another cardiac arrest, so that his long-term survival rate was low, and (c) even when restarted, the person whose heart it was would live burdened with neurological deficits caused by a period of hypoxia (low oxygen)?
A paper in the New England Journal of Medicine examines the question of whether rates of survival after cardiopulmonary resuscitation have improved over recent years and, if so, whether the patients who are resuscitated have a better neurological outcome.
Human kind cannot bear very much reality, wrote T. S. Eliot, and a recent paper in the New England Journal of Medicine bears him out. The authors of the paper asked 1193 patients who had opted for chemotherapy for their metastatic cancer of the colon or lung how likely it was that the chemotherapy would cure them. The correct answer, of course, was that it was very unlikely (in the current state of the art); but 69 percent of patients with lung cancer and 81 percent with cancer of the colon had a much higher hope of cure than was reasonable in their circumstances.
The authors found that those patients with the least accurate estimate of the chances of cure (that is to say who were the most falsely optimistic) rated their doctors the highest for their communication skills. In other words it is possible that doctors who give an optimistic message are those that patients think have told them the most, in the best and clearest way; but it is also possible that optimistic patients view their doctors in a benevolent light. What doctors tell patients, and what patients hear their doctors tell them, may be very different as every doctor is, or ought to be, aware.
The paper raises the question of what constitutes truly informed consent. How many patients know or truly appreciate that, as the authors put it, “chemotherapy is not curative, and the survival benefit seen in clinical trials is usually measured in weeks or months”? For there to be informed consent, is it necessary for the doctor merely to have given the relevant information, or is it necessary for the patient to have inwardly digested it, to believe it? Is the onus entirely on the doctor, or does the patient have some responsibility? Is a doctor automatically to blame if a patient has not understood and absorbed his message? At any rate, the authors say that “this misunderstanding could represent an obstacle to optimal end-of-life planning and care.”
When I was a young doctor, which is now a long time ago, patients who were close to death were often denied drugs like morphine for fear of turning them into addicts during their last weeks of earthly existence. This was both absurd and cruel; but nowadays we have gone to the opposite extreme. We dish out addictive painkillers as if we were doling out candy at a children’s party, with the result that there are now hundreds of thousands if not millions of iatrogenic — that is to say, medically created — addicts.
An editorial in the New England Journal of Medicine asks why this change happened, and provides at least two possible answers.
The first is that there has been a sea change in medical and social sensibility. Nowadays, doctors feel constrained to take patients at their word: a patient is in pain if he says he is because he is supposedly the best authority on his own state of mind and the sensations that he feels. This certainly meant that at the hospital where I worked you could see patients, allegedly with severe and incapacitating back pain, skipping up the stairs and returning with their prescriptions for the strongest analgesics to treat their supposed pain. In the new dispensation, doctors were professionally bound to believe what the patients said, not what they observed them doing.
The automatic credence placed in what a patient says — or credulity, if you prefer — is deemed inherently more sympathetic than a certain critical or questioning attitude towards it. And since it is now possible, indeed normal, for patients to report on doctors adversely and very publicly via the internet and other electronic media, doctors find themselves in a situation in which they must do what patients want or have their reputations publicly ruined. When in doubt, then, prescribe.
You can never be slim or rich enough, said the late Duchess of Windsor; but can you ever be too worried about your health? Epidemiologists are always finding new things for us to fret about, new threats in the environment for us to avoid if we can or bite our nails over if we can’t. It is, as the French say, their métier.
One of the latest scares is over Toxoplasma gondii, a protozoan parasite that until recently was thought to be harmless to everyone except the fetuses of pregnant women and people with much reduced immunity, for example patients with AIDS or Hodgkin’s lymphoma. This parasite is, if not quite ubiquitous, very common, so if it really is harmful there might be a lot to worry about. This is no time for complacency: where health scares are concerned, it never is.
The definitive host of the Toxoplasma parasite is the domestic cat, but it can be passed on to other animals, especially those that provide us with our animal protein (although cattle are relatively resistant to infection), and it thus enters the human food chain.
A recent editorial in The Lancet contains a sentence that could become a locus classicus of epidemiological neurosis. Having pointed out “the widely held view that Toxoplasma gondii contamination in food and human infection in general should not cause public concern,” it goes on to say, “However, infections could have as yet poorly understood adverse consequences.” That no definite adverse consequences have yet been discovered does not necessarily mean that they are not there; there is an old medical dictum that absence of evidence is not evidence of absence. Relaxation about anything, then, can never be justified.