There seems to be a lot of fraud these days, but perhaps there always was; maybe we were just more naïve in those days. As soon as the Volkswagen scandal broke, the personal injury lawyers were out in force — a group whose activities are usually morally fraudulent if not illegal.
Research fraud in medicine is quite common and falls into two main categories:
- Fraud perpetrated by individuals taking short cuts to a brilliant reputation, or rather a reputation for brilliance.
- Fraud perpetrated by drug companies attempting to prove that their product developed at such huge expense is better, safer and more therapeutic than any other product on the market.
Although I have spent much of my career exploring the less meritorious aspects of human conduct, there is a type of research fraud that I had not suspected to exist until I recently read an article in the New England Journal of Medicine. Once you know that volunteers for pharmacological experiments are paid, it becomes obvious, and I feel slightly foolish for not having realized it before: the volunteers also commit fraud.
Some exaggerate the severity of their symptoms so that they are included in a study, and some do not reveal that they are taking prescription drugs, allowing chemical interactions which could alter the results of the experimental drug by more than one possible mechanism and in more than one direction. Others conceal (not surprisingly) that they are taking controlled substances. Generally speaking, the word of potential subjects for experiments is taken at face value; they are not tested because it would be too expensive to do so. Research is quite expensive enough to conduct without this added burden.
One study found that of 100 research subjects who had enrolled in two trials in the last year, or three in the last four years, many had lied about themselves. A quarter had exaggerated their symptoms in order to be included and a seventh had claimed to have health problems they did not have. Nearly a third concealed health problems they did have, more than a quarter concealed the use of prescription drugs and a fifth the use of illicit drugs. And 43 of the 100 people failed to mention that they were currently enrolled in another trial at the same time.
One cannot conclude from this that the general population is appallingly dishonest because people who seek to enter trials for pay are not representative of the population as a whole. Nevertheless, where scientific results are founded on the use of paid subjects, the results are likely to be skewed.
For example, if a person has exaggerated his symptoms at the outset of the trial, it is likely that he will also exaggerate the benefit of both the placebo and the active drug. Those who lie in this fashion may not be equally distributed between the placebo and the active treatment, so the benefits of the drug might be either inflated or underestimated. At any rate, the results would not be trustworthy. This is especially so, of course, in the case of diseases whose symptomatology correlates poorly or not at all with any objectively measurable biochemical marker.
One way to lessen the problem would be to make payment contingent on the truthfulness of the experimental subject. In combination with random tests, this might be enough to deter such fraud. However, it would not eliminate it entirely, for some fraud is probably disinterested, so to speak: committed for the sheer pleasure of committing it, as an end in itself or to fool people of supposedly superior intelligence. One should never underestimate the perversity of the human soul. Where, indeed, would I be without it?
It is a happy coincidence in the history of medicine that the means to treat Type I diabetes (the type that usually starts in childhood and requires lifetime treatment with insulin) became available just as its incidence was rising sharply. Such diabetes was rare in the nineteenth century, and rapidly fatal when it did occur. Pediatricians at the beginning of the twentieth century had few patients with diabetes, and this was not because they failed to recognize it when they saw it. Since the middle of the twentieth century, however, Type I diabetes has become much more common, suggesting an environmental cause – yet to be fully elucidated.
Although insulin keeps many alive who would once have died, Type I diabetics continue to suffer long-term ill-effects and complications, among them eye, kidney and peripheral vascular disease. It is thought that the better the control of blood sugar levels, the later and less serious the complications.
One of the problems that bedevils treatment is that blood sugar can be lowered too far, and low blood sugar itself leads to complications and sometimes to fatality. The fear of hypoglycaemic attacks is a constant one for diabetics. The aim of treatment is to keep blood sugar at normal levels without inducing such attacks.
A paper in a recent edition of the New England Journal of Medicine describes treatment with a kind of pump that continually alters the rate of insulin according to the level of the patient’s blood sugar. The investigators, in Britain and Germany, treated 58 patients with Type I diabetes with this pump for 12 weeks to see whether it improved control of blood sugar levels compared with normal treatment.
They found that it did. The percentage of time during which the blood sugar of the treated patients was within the proper range increased (by 11 percent for adults and 24.7 percent for children and adolescents), without either an increase in the number of hypoglycaemic attacks or an increase in the total amount of insulin infused. Both undesirable peaks and troughs were avoided when the new pump was used.
There are many ways of dividing humanity into two. One such way is to separate those who desire that everything should be explicable, preferably by a single grand overarching theory, and those who desire that a mystery should always remain. I suspect that believers in alternative medicine are predominantly of the latter disposition. They probably also derive a certain pleasure from defying and sometimes even triumphing over medical or other authority.
If I am right, extravagant belief in the therapeutic benefits of cannabis should decline as its claims are investigated with scientific rigour. If a chemical found in cannabis is prescribed in precisely the same manner as, say, antihypertensives, it will lose the mystique of its derivation. Like most drugs, it will merely be useful in some cases.
There’s an interesting review in the New England Journal of Medicine on the use of cannabis-derived substances in cases of epilepsy. Although new anticonvulsants have been developed in recent years, the proportion of epilepsy that remains untreatable has stayed more or less the same at 30 percent. The article contained an admirably clear exposition of the theoretical reasons why various chemicals found in cannabis (more than 500!) might work in cases of epilepsy.
The human brain has naturally occurring cannabinoid receptors, and there is evidence of their disruption in some forms of epilepsy. Work in animals suggests that substances that block cannabinoid receptors lower the seizure threshold, making seizures more likely. An epidemiological study conducted in New York found that adults who had smoked cannabis within the last 90 days were less likely to have an epileptic seizure than those who had not.
Anecdotal evidence dating from a surprisingly long way back also suggests a therapeutic effect of cannabis on the rate and severity of epileptic fits.
But none of the above proves the case for cannabis as a treatment for epilepsy. The history of therapeutics is littered with treatments that did not work in the end but were initially supported by exactly the same kind of evidence in their favor.
Few properly controlled trials have been performed: two were positive and two were negative. Among the difficulties of investigating the matter scientifically are the sheer number of chemical compounds to be tested, and also regulatory prohibitions. It’s important to remember that not all the substances found in cannabis have the kind of psychological effects that make it popular with aficionados.
Properly conducted double-blind trials are necessary because the placebo effect is strong, particularly in children. The authors sum up very succinctly the pitfalls of all other kinds of evidence adduced by enthusiasts (not only for cannabis, but for all treatment both orthodox and, especially, unorthodox):
The gap between patient beliefs and available scientific evidence highlights a set of factors that confound cannabinoid research and therapy, including the naturalistic fallacy (the belief that nature’s products are safe), the conversion of anecdotes and strong beliefs into facts, failure to appreciate the difference between research and treatment, and a desire to control one’s care, including access to therapies of perceived benefit.
Intriguingly, the authors quote one study that showed that parents of epileptic children who moved to Colorado so that their children could receive cannabinoid treatments reported more than twice as much benefit as the parents who already resided in the state (47 percent compared with 22 percent). By itself, this does not prove very much, unless the children of the two groups were similarly afflicted in the first place – epilepsy not being a single condition with an identical degree of severity. Nevertheless, it is what I would have expected.
The purpose of research is to discover what was previously unknown. Research would not be necessary if we knew everything there was to know, but that will never be the case, so research will always be a necessity, so long as knowledge remains preferable to ignorance. And while wisdom may be folly where ignorance is bliss, you can never know that to be true until after you’ve become wise.
Apparently, all of this is perfectly obvious except to certain trial lawyers, whose job it is to exploit the corrupt and corrupting tort system.
A recent edition of the New England Journal of Medicine reports the outcome of a case in which three plaintiffs sought to sue the University of Alabama Institutional Review Board and the electronics firm Masimo. The case was brought on behalf of three infants, born prematurely, who were enrolled in a clinical trial concerning the best oxygen concentration to give such infants.
At the time of this trial, it was known that oxygen concentrations below 89 percent resulted in higher rates of death, while those above 95 percent resulted in higher rates of retinopathy, which causes permanent blindness. As a result, the recommended concentration was between 89 and 95 percent, but the actual optimum within that range was unknown. The trial sought to clarify matters by allocating premature infants randomly to concentrations between 89 and 91 percent, or between 92 and 95 percent.
We all want to be treated in the best hospitals by the best doctors, but this is not possible so long as any difference in quality between them exists. The best hospitals and the best doctors cannot treat everybody. Moreover, it is much harder to tell which hospital and which doctor is the best than many of us suppose. League tables for doctors and hospitals are not like such tables for baseball or football teams, matters of straightforward record. They require measurements of enormous complexity, and the results are only trustworthy and valuable if the data that go into compiling them are both accurate and relevant.
A paper in a recent edition of the British Medical Journal casts doubt on the value of global judgments on hospitals as expressed in league tables.
The authors reasoned that if such judgments were of any value, a hospital’s standardised mortality ratio (the number of deaths observed in a hospital compared with the number expected, given the kinds of patients it treats) ought to correlate strongly with the number of avoidable deaths that occur in that hospital. A hospital with a high standardised mortality ratio – the usual way of measuring its overall quality – ought to have a high rate of avoidable deaths, if that ratio is a true measure of the quality of medical care in that hospital compared with other hospitals.
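Expressed as a formula (the standard definition, not one quoted from the paper), the SMR is simply

\[
\mathrm{SMR} = \frac{\text{observed deaths}}{\text{expected deaths}} \times 100,
\]

so that a hospital scoring above 100 has more deaths than its casemix would lead one to expect.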
The authors then examined the statistics for 34 hospitals in England, 10 of them in 2009 and 24 others in 2012-13. They correlated the hospitals’ SMRs with the proportion of deaths that were avoidable, estimated by taking 100 deaths at random in each hospital and having experts examine them to determine whether they had occurred because of any act of commission or omission by the hospital. Of course, whether a death is avoidable is usually a matter of judgment; it is rare for incompetence or negligence to be so great that death is indubitably its consequence. For the purposes of this study, a death was deemed to have been avoidable if the experts assessing the case thought there was more than a fifty percent chance that it was.
The correlation between hospitals’ SMRs and their rates of avoidable deaths was so slight as to be negligible: indeed it was not statistically significant. Overall the rate of avoidable death was low: 5.2 percent in 2009 and 3.6 percent in 2012-13, 115 cases in the 3400 deaths examined. This difference was statistically significant, but one cannot rush to the conclusion that hospitals had improved in the intervening period, for other factors that could have affected the rate had also changed (for example, wider use of, and compliance with, requests not to resuscitate).
There were limitations to the study: for example, agreement between experts as to what was an avoidable death was far from unanimous. Moreover, the experts were not blinded to the hospitals from which the cases they examined came. They might therefore have been influenced by biases, for or against, conscious or unconscious, in their judgment as to which death was avoidable. Further, a hospital’s global Standardised Mortality Ratio might have disguised exceptionally good and exceptionally bad departments within it that balanced each other overall.
Nevertheless, the lesson seems clear: the global SMR as a measure of a hospital’s quality is invalid. This is not to say that there are no good and bad hospitals, only that the SMR is not the way to assess them, perhaps because the SMR itself is far from a watertight measure and is subject to a large number of confounding factors. We should be as accurate as possible, but not believe ourselves to be more accurate than we actually are.
Being mortal, we are all under sentence of death, but the execution of the sentence is more imminent in some of us than in others. People who suffer from angina, for example, are aware that they could suffer a fatal heart attack at any time; and even if human beings can accommodate themselves to most situations, the awareness of the threat in the back of one’s mind must be disconcerting, to say the least.
Would we wish to know our statistical risk of death in the next five years? I suppose we vary in this as in everything else: there is no hard and fast rule.
The question went through my mind as I read a paper in a recent edition of the New England Journal of Medicine. The authors took a defined group of patients – those with stable angina and type II diabetes – and measured their troponin levels. Troponin is a protein released into the blood when heart muscle is damaged by infarction, and with a new technique it is possible to measure much slighter increases in its level than previously.
The authors found that, of the 2285 patients in the study, the 897 who had slightly raised levels of troponin had about twice the risk of fatal or non-fatal heart attack or stroke within the next five years compared with those who did not have a raised level. This increased risk persisted after adjustment for as many relevant variables as the authors could think of, so the relationship is probably real and not merely a statistical artifact. In absolute terms, 27.1 percent of patients with raised troponin levels suffered fatal or non-fatal heart attack or stroke in the succeeding five years, compared with 12.9 percent of those with normal troponin levels.
The authors assigned their patients at random to normal medical treatment or to such medical treatment plus angioplasty or coronary artery bypass graft. What they found was that the additional treatment made no difference to the outcome. In other words, it was useless except from the economic point of view of those carrying it out. However, they did find that the risk of fatal or non-fatal heart attack and stroke was considerably increased in the small percentage of patients whose troponin levels rose by more than 25 percent during the study.
Some climates are better, or at least more agreeable, than others. Furthermore, it is well known that extremes of temperature raise death rates considerably. It has been estimated, for example, that they increase by between 8.9 and 12 percent during heatwaves, and by 12.5 percent during spells of exceptional cold. The reasons for this are not fully understood; only a very small percentage of deaths during heatwaves are directly attributable to heatstroke. Moreover, there is an asymmetry between the effects of extreme heat and extreme cold on mortality. The effects of the former are immediate, and persist only as long as the heat persists; the effects of cold on mortality last three or four weeks after the cold has ceased.
A paper in a recent edition of the Lancet attempts to determine what percentage of deaths in thirteen different countries – among them Australia, Brazil, Canada, China, Italy, Japan, South Korea, Spain, Taiwan, Thailand, the UK and the USA – are associated with changes in the weather. The paper is of such enormous statistical sophistication that I doubt whether more than one in a thousand doctors is qualified to assess its validity: and, indeed, only one of its twenty authors is medically qualified. Nevertheless, the title of the paper, “Mortality risk attributable to high and low ambient temperatures: a multicountry observational study,” seems to me (as someone very unversed in these matters) to commit an elementary statistical howler: treating a statistical association as if, by itself, it implied causation.
However, let us overlook this criticism, and moreover assume that the authors’ complex statistical analysis of the data (or rather that of their computers) is valid beyond further criticism. The size of their sample is certainly impressive: they have analyzed 74,225,200 deaths in relation to deviations from average or normal or optimal ambient temperatures. After much calculation, the authors come to the conclusion that 7.71 percent of the deaths included in the study were attributable to excess heat or cold, that is to say 5,722,763 of the deaths.
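The arithmetic, at least, is easy to check:

\[
74{,}225{,}200 \times 0.0771 \approx 5{,}722{,}763.
\]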
When I was in my early thirties, I several times visited an island in the Pacific called Nauru. From the medical point of view, it was of the utmost interest because fifty percent of the population had Type II diabetes, and it therefore represented the epidemiological shape of things to come.
The Nauruans had become diabetic only recently, when they suddenly (and briefly, as it turned out) became the richest people per capita in the world, thanks to the phosphate rock with which their tiny island was covered. From a life of subsistence on fish and coconuts they went straight to being millionaires. They abandoned their traditional diet and started to eat, on average, 7000 calories per day. Not surprisingly they were enormously fat. They liked sweet drinks and consumed Fanta by the case-load. For those who liked alcohol as well there was Château Yquem. They were unique in the world in being both rich and short-lived.
The Nauruans were, in a sense, pioneers of the diabetogenic diet.
Type II diabetes is now a threat to public health that dwarfs Ebola virus in scale, but kills slowly and undramatically, rather by stealth than by coups de théâtre. No one ever walked around in spacesuits because there was a Type II diabetic on the ward.
The Nauruans (and the Pima Indians of Arizona) were almost certainly susceptible genetically to the disease, which did not affect them until they adopted their horrible diet. But, to a lesser extent admittedly, what happened to them has happened everywhere, particularly in the U.S. and Britain, where patterns of consumption bear some slight resemblance to those of the Nauruans.
When I was a young doctor working in the countryside I contracted pneumonia. It took me a long time to recognize my main symptom – breathlessness – because I thought that symptoms were what patients, not doctors, had, and that therefore I could not myself have any. I therefore considered my feeling of being unwell to be an illusion. In the abstract I knew that doctors got ill and died, of course, but I found it difficult to believe this in practice, especially in my own case. When the symptom was severe enough, however, the penny dropped, and I looked at my own x-ray with something akin to pride.
Pneumonia, like almost all infectious diseases, is much less common now than it was then, but still remains a common enough cause of hospital admission and of death among the elderly. But according to a paper published in a recent edition of the New England Journal of Medicine, the annual cost of pneumonia requiring hospital treatment is $10 billion, a modest sum compared with the estimated costs of other diseases that one reads about in the same journal. Perhaps hospitals should try harder to raise their prices for treating pneumonia.
The paper is titled “Community-Acquired Pneumonia Requiring Hospitalization among U.S. Adults” – community being everywhere except the hospital, one of the commonest places to catch pneumonia. For eighteen months the researchers enrolled all adults aged 18 and above whom physicians at five hospitals diagnosed as having pneumonia requiring admission. They then used a variety of diagnostic tests to identify bacterial and viral pathogens in these patients, and compared them with a sample of patients attending one of the hospitals for non-respiratory problems.
By assuming that all patients in the hospitals’ catchment areas who needed hospitalization for pneumonia were admitted to the five hospitals and no others, they worked out the population rate of admission for pneumonia: 2.48 cases per year per thousand of population at risk. Assuming (what is almost certainly false) that having had pneumonia once does not mean increased susceptibility to having it again, the figure means that an adult has about a one in seven chance of being admitted to the hospital for pneumonia at some time in his life.
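A rough reconstruction of that one-in-seven figure, assuming some 60 years of adult life and a constant rate of admission (both simplifications of mine, not the paper’s):

\[
\frac{2.48}{1000} \times 60 \approx 0.149 \approx \frac{1}{7}.
\]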
Unsurprisingly, the risk increased enormously with age. Between the ages of 50 and 64 you are 4 times more likely to need hospitalization for pneumonia than between the ages of 18 and 49, 9 times more likely between the ages of 65 and 79, and 25 times more likely after the age of 80. This makes me rather proud, the member of a small elite in fact, to have had pneumonia when I was only 27.
Man is a creature that likes to change his mental state, even if it is for the worse. It is the change that he seeks, not the end result; Nirvana for him is a constantly fluctuating or dramatic state of mind. This, for obvious reasons, is particularly so for the bored and dissatisfied. In the prison in which I worked, for example, the prisoners would take any pills that they happened to find in the hope that they would have some — any — effect on their mental state, irrespective of the dangers that might be involved in producing it.
A recent article in the New England Journal of Medicine draws attention to an increase in the United States of reported side effects caused by the consumption of synthetic cannabinoids. These were first synthesised in the 1980s as research tools, but soon escaped the laboratory. (How, at whose instigation and for what reasons, one would like to know?) Now there are illicit chemical laboratories, mainly in Europe, producing an ever wider range of such cannabinoids, the law limping after them with its prohibitions, only for new compounds that are legal (until banned) to be synthesised almost immediately. The story is a tribute, in a way, to human ingenuity.
Between January and May 2014, poison control centers throughout the United States received 1085 calls concerning possible side effects of synthetic cannabinoids; in the same period of 2015 they received 3572 such calls, a 229 percent increase (which is to say that the 2015 figure was 329 percent of the 2014 one).
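The two figures describe the same change, a point worth a line of arithmetic:

\[
\frac{3572 - 1085}{1085} \approx 2.29 \;(\text{a } 229\% \text{ increase}), \qquad \frac{3572}{1085} \approx 3.29 \;(329\% \text{ of the original}).
\]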
Of the 2961 calls concerning cases in which the medical condition and outcome were known, 335 were serious, which is to say life-threatening or resulting in significant disability, and a further 1407 necessitated medical treatment. The remaining 1219 were minor and quickly self-limiting.
Novels should have good first lines: “Call me Ishmael”; “It was the best of times, it was the worst of times”; “Happy families are all alike”; “There was no possibility of taking a walk that day”; “It is a truth universally acknowledged…” etc. One does not expect scientific papers to grab one’s attention in quite the same way.
But a paper published in a recent edition of the New England Journal of Medicine opens interestingly, if not quite with the same literary flair as that of Melville, Dickens or Tolstoy, etc. It begins:
The increase in the rate of obesity, a chronic disease with serious health consequences, largely explains the recent trebling of the prevalence of type 2 diabetes.
This is an odd way of putting it, as if the authors themselves did not quite believe what they were saying. They would not have written “Cancer, a chronic disease with serious health consequences…” because to have cancer is to have bad health. Poverty is not caused by having too little money; poverty is having too little money.
Perhaps it doesn’t matter much what one calls obesity—disease or the wages of sin, or at least of weakness—because everyone is agreed that it is a bad thing and ought, if possible, to be reduced.
The authors conducted a double-blind, placebo-controlled trial of a drug called liraglutide, which was injected daily subcutaneously for 56 weeks. A total of 3731 patients with a high body-mass index, from 191 clinics on 5 continents, were allocated either to the drug or to placebo (2487 to active treatment, 1244 to placebo), both groups being given standard advice about diet and exercise.
The first thing to note is that only 71.9 percent of the treatment group, and 64.4 percent of the placebo group, completed the trial. When one considers that compliance during trials, with their relatively intense attention to patients, is considerably higher than in normal clinical conditions, the actual usefulness of the drug, at least in the epidemiological sense, is much reduced.
I like to have a prejudice overthrown from time to time: it helps to persuade me that my other prejudices are reasonable because I am a reasonable man who is open to the evidence. This is especially the case where the prejudice is one that I do not really care much about. I can give it up without much regret.
A paper in a recent edition of the New England Journal of Medicine overthrew one such minor prejudice, namely that the more thoroughly a person was investigated for an occult cancer after suffering an unexpected deep vein thrombosis (DVT) or pulmonary embolus (PE), the more likely that one would be found.
The association between spontaneous DVT and cancer has long been known. Its discoverer was Armand Trousseau, a great French physician of the middle of the nineteenth century, who noticed that people who suffered DVTs often had cancers such as that of the pancreas, in those days always diagnosed at post mortem. By a strange and tragic coincidence, Trousseau himself suffered a DVT and a few months later was dead – of pancreatic cancer. This is a story from medical history that, once heard, is never forgotten.
It has been found that about 10 percent of people have diagnosable cancer within a year of having had a DVT or PE. So it seems to stand to reason that if people who suffer such events are investigated up hill and down dale immediately afterwards, some cancers will be caught earlier and treated, and therefore survival will be increased.
Some Canadian researchers tested this hypothesis. They randomly divided 854 patients with either DVT or PE into two groups: those who were tested for cancer by simple methods such as physical examination, blood tests and chest x-ray, and those who, in addition to all these, had CT scans of the abdomen and pelvis. They were then followed up for a year to see whether there was any difference in outcome.
Some patients stick in one’s mind longer than others. I remember, for example, a Welshman with the sing-song intonation of the Welsh who told me on his arrival in hospital that he needed his diazzies [diazepam], and temazzies [temazepam] and his flurazzies [flurazepam] and his lorazzies [lorazepam] and his bromazzies [bromazepam] and his oxazzies [oxazepam]. All these are tranquilizing drugs of the benzodiazepine group, and need was not quite the word for his consumption of them.
On another occasion a patient, a heroin addict, accused me of murdering him because I would not prescribe diazepam for him. In actual fact, I believed that precisely the opposite was the case: that if I prescribed for him what he wanted, his chances of dying by overdose, intentionally or unintentionally, would be much increased.
A paper in a recent edition of the British Medical Journal suggests that I was right. It examined the question of whether people prescribed opioid analgesics were more likely to die of overdose if they were also prescribed benzodiazepine tranquilizers than if they were not.
The authors examined the records of patients treated in the U.S. Veterans Administration system between the years 2004 and 2009. Their sample was of 422,786 patients who were prescribed opioids at some time in those years, a figure which I found prima facie astonishing, though perhaps wrongly. Of those 422,786, 2,400 died by overdose, and the authors compared the rate of prescribing of benzodiazepines in addition to opioids among those who had died by overdose with the rate among those who had not. What they found was that those who were prescribed both classes of drugs were 3.86 times as likely to die by overdose as those who were prescribed opioids alone. More surprising, perhaps, was the fact that those who had ever been prescribed benzodiazepines as well as opioids, but were not currently in receipt of a dual prescription, were 2.33 times more likely to die by overdose than those who had never received one.
For a number of reasons these results do not show, in the strictest pharmacological sense, that dual prescription causes additional deaths. The most important is that patients prescribed benzodiazepines (usually to calm them down, or to get them out of the doctor’s office without a terrible scene taking place) may be more psychologically disturbed than those not prescribed them, and therefore more likely to take overdoses in the first place. There is a dose-response relationship between the dose of benzodiazepine prescribed and the risk of death by overdose, but even this does not prove that dual prescription causes death, because those prescribed higher doses of benzodiazepines are likely to be still more psychologically disturbed.
As everyone knows, fresh human blood rejuvenated Dracula no end: stored blood simply would not do for him.
Blood has long been a fluid endowed with mystic significance. Only comparatively recently in human history have people donated it to strangers with anything like a good grace. I once worked in a remote country, much given to drunkenness, where people would only give blood to their relatives, though fortunately they lived in large families. A man there once had an accident requiring rapid and repeated transfusion. His family had all been at a party. After transfusion, he himself was drunk.
It has long been thought that the longer human blood had been stored in blood banks, the less good its quality. There were two papers in a recent edition of the New England Journal of Medicine that tested this hypothesis, which (within limits, of course) turned out to be false, as so many hypotheses do. I think most people would instinctively feel, because it stands to reason, that fresh blood is best; we agree with Dracula.
Normally, blood is taken from donors, treated chemically, tested for viruses and refrigerated. In practice it is not kept for more than six weeks, though this period is to an extent arbitrary and conventional. In the first trial, conducted in Canada, Britain, France, the Netherlands and Belgium, critically ill patients in need of blood transfusions were allocated randomly to receive either blood that was less than a week old or blood that was three weeks old.
One short passage in the paper was slightly troubling from the point of view of medical ethics: “At sites where deferred consent was permitted, written informed consent was obtained from the patient or surrogate decision maker as soon as possible after enrollment.” This appears to mean, unless I have misunderstood, that consent was retrospective, in other words that the patients were asked “Do you consent to having been experimented upon?” Even where such consent was not given, refusal was all but pointless, for they were then asked whether, nevertheless, they consented to the use of the data gathered in their case.
It so happened that I was preparing an introduction to an anthology of the writings of Edmund Burke when I read an article in a recent edition of the New England Journal of Medicine with the title “Social Distancing and the Unvaccinated.” One of Burke’s main contentions, at least according to me, is that politics are, or ought to be, more than the application of abstract first principles to practical affairs: and, as if to prove him right, along came this article.
The question was this: if it is permissible for parents to refuse to have their children immunized against preventable childhood diseases, does the state have the right, through one or other of its agencies, to exclude those children temporarily from school or other social institutions if there is an epidemic developing?
This question can be answered neither by a single abstract principle alone nor by appeal to scientific fact. The matter is complex, and on this occasion arose in the context of an outbreak of measles in California that soon spread and was in part occasioned by a reduction in the rate of immunization against the disease consequent upon the fraudulent activities of Dr Andrew Wakefield, a British doctor who claimed falsely to have discovered a link between the measles, mumps and rubella vaccine and the development of childhood autism.
In addition, two sets of parents in New York legally challenged the exclusion of their children from school because they were unimmunized against chickenpox after an infected child was found in the school. The article did not make clear whether the exclusion was primarily to protect the unimmunized children themselves or others in the school, or both (no immunization conferring 100 percent immunity, and the more cases encountered the greater the likelihood of spread).
Scientific considerations are relevant to, but not probative of, any answer. The article, strangely, made no mention of the fact that parents’ rights, which we all accept within quite wide limits, nevertheless may impinge on those of their children, for example that to life itself: in which case parents’ rights have to be, or at any rate are, overridden.
If the parents’ decision not to immunize were one of life or death, whether for their own or for other children, most (but perhaps not all) people would agree that their say in the matter should not count. But in fact it is rarely a matter of life and death; more often it is one of transient illness with very occasional severe complications. Just how great the risk of the latter is depends on factors other than the parents’ decision not to immunize: measles is a much less serious disease in rich than in poor countries, for example. Moreover, some questions, such as how long it is necessary to socially distance (Orwellian phrase) children in order to eliminate the risk of spread, may not be completely answerable in the current state of knowledge.
How many days off school for one child equal the risk of contraction of a mild illness by another? There is no way of answering this question except by the exercise of judgment in particular circumstances. This is precisely what Burke would have predicted: what we decide cannot be determined by appealing to conflicting rights alone, the more fundamental of them prevailing. Sometimes one will prevail, sometimes another; there is no way of making politics a matter of such accurate calculation that no faculty of judgment, with its permanent possibility of error, will ever have to be exercised.
The article focuses on religious objectors to immunization, but they are probably outnumbered by Californian-style cranks, paranoiacs and believers in all you read on the internet.
I am slightly ashamed of how much I liked Burma when I visited it nearly a third of a century ago. What a delight it was to go to a country in which there had been no progress for 40 years! Of course it was a xenophobic, klepto-socialist, Buddho-Marxist military dictatorship run by the shadowy, sinister and corrupt General Ne Win, and so, in theory, I should have hated it. Instead, I loved it and wished I could have stayed.
Since then there has probably been some progress, no doubt to the detriment of the country’s charm. Burma (now Myanmar) is slowly rejoining the rest of the world, and one consequence of this will be the more rapid advance of treatment-resistant malaria.
A recent paper in the Lancet examined the proportion of patients in Burma with malaria in whom the parasite, Plasmodium falciparum, was resistant to what is now the mainstay of treatment, artemisinin, a derivative of a herbal remedy known for hundreds of years to Chinese medicine. The results are not reassuring.
There was a time, not so very long ago, when the global eradication of malaria was envisaged by the WHO, and it looked for a time as if it might even be achieved. The means employed to eradicate it was insecticide that killed the mosquitoes that transmitted the malarial parasites, but a combination of pressure from environmentalists who were worried about the effects of DDT on the ecosystem and mosquito resistance to insecticides led to a recrudescence of the disease.
At the same time, unfortunately, resistance to antimalarial drugs emerged. Control of malaria, not its eradication, became the goal; an insect and a protozoan had defeated the best efforts of mankind. And this is no small matter: the last time resistance to a mainstay of treatment for malaria, chloroquine, emerged in South-East Asia, millions of people died as a result in Africa for lack of an alternative treatment.
What most surprised me about this paper was the method the authors used to determine the prevalence of resistance to artemisinin in the malarial parasites of Burma: for I remember the days when such prevalence was measured by the crude clinical method of giving patients chloroquine and estimating how many of them failed to get better.
The genetic mutations that make the parasite resistant to artemisinin have been identified. The authors were able to estimate the percentage of patients whose malarial parasites had mutations associated with drug resistance. Nearly 40 percent of their sample had such mutations, and in the province nearest to India the figure was nearly half. The prospects for the geographical spread of resistance are therefore high.
Nor is this all. Artemisinin resistance was first recognized in Cambodia 10 years ago but the mutations in Burma were different, suggesting that resistance can arise spontaneously in different places at the same time. From the evolutionary point of view, this is not altogether surprising: selection pressure to develop resistance to artemisinin exists wherever the drug is widely used.
One way of reducing the spread of resistance is to use more than one antimalarial drug at a time in the treatment of malaria, but this will only retard the spread, not prevent it altogether. As with tuberculosis, it is likely that parasites resistant to all known drugs will emerge. The authors of the paper end on a pessimistic note:
The pace at which the geographical extent of artemisinin resistance is spreading is faster than the rate at which control and elimination measures are being developed and instituted, or new drugs being introduced.
In other words, deaths from malaria will increase rather than continue to decrease, which is what we have come to think of as the normal evolution of a disease.
When I was a boy in London I used to love what we called pea-soupers, that is to say fogs so thick that you couldn’t see your hand in front of your face at midday. They came every November and buses, with a man walking slowly before them to guide them, would loom up suddenly out of the gloom with their headlights like the glowing eyes of monsters. It took my father so long to drive to work that by the time he arrived it was time for him to come home again. I loved those fogs, but then the government went and spoiled the fun by passing the Clean Air Act. They never returned, those wonderful, exciting fogs.
Little did I know (or care) that those wonderful, exciting fogs killed thousands by bronchitis. But many years later I got bronchitis for the first and only time in my life from breathing the polluted winter air of Calcutta. I have also traveled in Communist countries where it seemed that the only thing the factories produced was pollution. I don’t need persuading that clean air is a good thing, not only aesthetically but also from the health point of view.
Southern California used to have some of the worst air pollution in the United States, but the quality of the air in Los Angeles has improved over the last two or three decades. Researchers who reported their findings in a recent edition of the New England Journal of Medicine conducted what is called a natural experiment: they estimated the pulmonary capacity of children who grew up as the level of pollution declined.
Most research on the health effects of air pollution has concentrated on deaths from cardiovascular disease among adults, usually of a certain age. But it is known that relatively poor lung function among younger people predicts cardiovascular disease later in life quite well. There is also an association between air pollution and early death from cardiovascular disease, though of course an association does not by itself prove causation. Does air pollution cause poor lung function in children?
The researchers measured lung function in three cohorts of children, 2120 in all, aged 11 to 15, who were of those ages between 1994 and 1998, 1997 and 2001, and 2007 and 2011. During this period, atmospheric pollution in Los Angeles declined markedly, as measured by levels of nitrogen dioxide, ozone and particulate matter.
Lung function, estimated by forced expiratory volume, improved (or at any rate increased) as air pollution declined. The proportion of children with lower-than-predicted function fell across the three cohorts, from 7.9 percent to 6.3 percent to 3.6 percent. The improvement occurred among whites and Hispanics, boys and girls; even the asthmatic children were less incapacitated.
The authors thought that the improvement in lung function was likely to persist into adulthood or, to put it in a slightly less cheerful way, damage done in childhood by air pollution might be permanent. This is not quite so pessimistic as it sounds, for there is probably no age at which an improvement in the quality of the air is not capable of producing an improvement in health.
The main drawback of the study was that there was no control group, that is to say a population whose cohorts of children experienced no improvement in the quality of the air they breathed. Perhaps the function of their lungs would have shown the same improvement as well, though I rather doubt it.
One little semantic point about the paper: children aged 11 were referred to as students rather than as pupils. Perhaps this is because we nowadays expect people to grow up very quickly, but not very far.
In the past, medical journals, pharmaceutical companies and researchers themselves have been criticized for publishing selectively only their positive results, that is to say, the results that they wanted to find. This is important because accentuation of the positive can easily mislead the medical profession into believing that a certain drug or treatment is much more effective than it really is.
On reading the New England Journal of Medicine and other medical journals, I sometimes wonder whether the pendulum has swung too far in the other direction, in accentuating the negative. To read of so many bright ideas that did not work could act as a discouragement to others and even lead to that permanent temptation of ageing doctors, therapeutic nihilism. But the truth is the truth, and we must follow it wherever it leads.
A recent edition of the NEJM, for example, reported on three trials, two with negative results and one with mildly positive ones. The trials involved the early treatment of stroke, the prophylaxis of HIV infection, and the treatment of angina refractory to normal treatment (a growing problem). Only the last was successful, and it involved 104 patients, as against 6729 patients in the two unsuccessful trials.
The successful trial involved the insertion of a device that increases pressure in the coronary sinus, the vein that drains blood from the heart itself. For reasons not understood, this seems to redistribute blood flow in the heart muscle, thus relieving angina. In the trial, the new device reduced angina symptoms and improved the quality of life of the patients who received it, compared with those who underwent a placebo operation. The trial was too small, however, to determine whether the device improved survival; though even if it did not, a reduction in symptoms and an improvement in quality of life are worthwhile in themselves.
The trial of chemoprophylaxis against HIV was, by contrast, a total failure. It recruited 5029 young women in Africa, some of whom were given an anti-HIV drug in tablet or cream form and others a placebo. The rates at which they became infected with HIV were compared, and no difference was found.
In large part this was because the patients did not take or use the pills or cream, though they claimed to have done so. A drug that few take is not of much use however effective it might be in theory, especially in prophylaxis rather than treatment. And this points to another problem of pharmaceutical research: in drug trials that require patients’ compliance with a regime, that compliance may be high during the trial itself (thanks to the researchers’ vigilance and enthusiasm) but low in “natural” conditions, when the patients are left to their own devices.
The trial of magnesium sulphate in the early treatment of stroke was also a failure. Experiments on animals had suggested that this chemical protects brain cells from degeneration after ischaemic stroke. It stood to reason, then, that it might improve the outcome of ischaemic stroke in humans, at least if given as soon as the stroke was suspected.
Alas, it was not to be. The trial, involving 1700 patients, showed that the early administration of magnesium sulphate did not improve outcome in the slightest. At 90 days there was no difference between those who had received it and those who had received placebo.
Is an idea bad just because it does not work? Could it be that those who discover something useful are just luckier than their colleagues? Perhaps there ought to be a Nobel Prize for failure, that is to say for the brightest idea that failed.
How informed is informed? What is the psychological effect of being told of every last possible complication of a treatment? Do all people react the same way to information, or does their reaction depend upon such factors as their intelligence, level of education, and cultural presuppositions, and if so does the informing doctor have to take account of them, and if so how and to what degree? An orthopedic surgeon once told me that obtaining informed consent from patients now takes him so long that he had had to reduce the number of patients that he treats.
An article in a recent edition of the New England Journal of Medicine extols the ethical glories of informed consent without much attention to its limits, difficulties and disadvantages.
It starts by referring to a trial of the level of oxygen in the air given to premature babies, of whom very large numbers are born yearly. Back in the 1940s it was thought that air rich in oxygen would compensate for premature babies’ poor respiratory systems, but early in the 1950s British doctors began to suspect, correctly, that these high levels of oxygen caused retinal damage leading to permanent blindness. Fifty years later, the optimal level of oxygen is still not known with certainty, and a trial was conducted that showed that while higher levels of oxygen caused an increased frequency of retinopathy, lower levels resulted in more deaths. The authors of the trial have been criticized because they allegedly did not inform the parents of the possibility that lower levels of oxygen might lead to decreased survival, which was reasonably foreseeable.
How reasonable does reasonability have to be? Many of the most serious consequences of a treatment are totally unexpected and not at all foreseeable (no one suspected that high levels of oxygen for premature babies would result in blindness, for example, and it took many years before this was realized). Ignorance is, after all, the main reason for conducting research.
But suppose parents of premature babies had been asked to participate in a trial in which their offspring were to be allocated randomly to an increased risk of blindness or an increased risk of death. Surely this frankness would have been cruel, all the more so as the precise risks could not have been known in advance. Parents would have felt guilty whether their babies died or went blind.
Now that the answer is known, more or less, parents can be asked to choose in the light of knowledge: but their informed consent will be agonizing because there is no correct answer. Personally, I would rather trust the doctor sufficiently to act in my best interests in the light of his knowledge and experience. So far in life I have not had reason to regret this attitude, though I am aware that it has its hazards also. But
…why should they know their fate?
Since sorrow never comes too late,
And happiness too swiftly flies.
Thought would destroy their paradise.
No more; where ignorance is bliss,
‘Tis folly to be wise.
And I have often thought what medical ethicists would have made of the pioneers of anesthesia. They did not seek the informed consent of their patients, in part, but only in part, because they hadn’t much information to give. What moral irresponsibility, giving potentially noxious and even fatal substances to unsuspecting experimental subjects without warning them of the dangers!
And there are even some medical ethicists who think we should not take advantage of knowledge gained unethically. All operations should henceforth be performed without anesthesia, therefore.
Medical history is instructive, if for no other reason than that it might help to moderate somewhat the medical profession’s natural inclination to arrogance, hubris and self-importance. But the medical curriculum is now too crowded to teach it to medical students and practicing doctors are too busy with their work and keeping up-to-date to devote any time to it. It is only when they retire that doctors take an interest in it, as a kind of golf of the mind, and by then it is too late: any harm caused by their former hubris has already been done.
Until I read an article in a recent edition of the Lancet, I knew of only one eminent doctor who had been shot by his patient or a patient’s relative: the Nobel Prize-winning Portuguese neurologist Egas Moniz, who was paralyzed by a bullet in the back. It was he who first developed the frontal lobotomy, though he was also a pioneer of cerebral arteriography. As he was active politically during Salazar’s dictatorship, I am not sure whether his patient shot him for medical or political reasons, or for some combination of the two.
Of late the New England Journal of Medicine has seemed like the burial ground of good ideas. Researchers follow a promising lead only to find that their new idea fails the crucial test of experience: and the difference between success and failure in research is made to appear as much a matter of chance or luck as of brilliance or skill.
In the first issue of the Journal for 2015, Dutch researchers from 16 different hospitals report an unequivocal success in the treatment of ischemic stroke.
Until now the only treatment of proven worth for patients with the kind of stroke that results from the blockage of a cerebral artery has been the infusion, within four and a half hours, of the drug alteplase, which dissolves thrombus (and which is manufactured in Chinese hamster ovary cells). But even with the use of this drug the prognosis is not very good, and there are several contra-indications to its use.
Everyone knows the pleasures of having his prejudices confirmed by the evidence. The pleasures of changing one’s mind because of the evidence are somewhat less frequently experienced, though none the less real. Among those pleasures is that of self-congratulation on one’s own open-mindedness and rationality. It would therefore delight me to learn that my prejudice about obesity — that it is a natural consequence of overeating, which is to say of human weakness and self-indulgence — was false.
I therefore read with interest and anticipation a recent article in the New England Journal of Medicine with the title “Microbiota, Antibiotics, and Obesity.” The connection of antibiotics with obesity had not previously occurred to me; perhaps the real reason why so many people now have the appearance of beached whales was about to be revealed to me.
It is easier to advise than to have or to retain a sense of proportion, especially when it is most needed. I have never known anyone genuinely comforted by the idea that others were worse off than he, which perhaps explains why complaint does not decrease in proportion to improvement in general conditions. And he would be a callow doctor who tried to console the parents of a dead child with the thought that, not much more than a century ago, an eighth of all children died before their first birthday.
Still, it is well that from time to time medical journals such as the Lancet should carry articles about medical history, for otherwise we might take our current state of knowledge for granted. Ingratitude, after all, is the mother of much discontent. To know how much we owe to our forebears keeps us from imagining that our ability to diagnose and cure is the consequence of our own peculiar brilliance, rather than of our having come after so much effort down the ages.
A little article in the Lancet was recently written by two historians who are in the process of analyzing the results of 9000 coroners’ inquests into accidental deaths in Tudor England. It seems astonishing to me not only that such records should have survived for more than four centuries, but also that the state should have cared enough about the deaths of ordinary people to hold such inquests (coroners’ inquests had already existed for 400 years by the time of the Tudors). In other words, an importance was attached to individual human life even before the doctrines of the Enlightenment took root: the soil was already fertile.
When I was working in Africa I read a paper that proved that intravenous corticosteroids were of no benefit in cerebral malaria. Soon afterwards I had a patient with that foul disease whom I had treated according to the scientific evidence, but who failed to respond, at least as far as his mental condition was concerned – which, after all, was quite important. To save the body without the mind is of doubtful value.
I gave the patient an injection of corticosteroid and he responded as if by miracle. What was I supposed to conclude? That, according to the evidence, it was mere coincidence? This I could not do: and I have retained a healthy (or is it unhealthy?) skepticism of large, controlled trials ever since. For in the large numbers of patients who take part in such trials there may be patients who react idiosyncratically, that is to say, differently from the rest.
A paper in a recent edition of the New England Journal of Medicine brought back my experience with cerebral malaria. Animal experimentation had shown that progesterone, one of the class of steroids produced naturally by females, protected against the harmful effects of severe brain injury. The paper does not specify what exactly it was necessary to do to experimental animals to reach this conclusion, but it does say that it has been proven in several species. What is not said is often as eloquent as what is said.